Artificial Intelligence In Game Design
AI has come to the game industry.
Last year, Electronic Arts established an R&D division called SEED, a team that uses AI to explore new technologies and creative opportunities for future games. Recently it showcased its latest work on real-time ray tracing and self-learning AI agents that can play Battlefield.
In the digital world, the billion-dollar game company Epic Games created a believable virtual human in a collaborative effort with CubicMotion, 3Lateral, Tencent, and Vicon. The virtual human, named Siren, was rendered in real time using Epic’s Unreal Engine 4 technology, a tremendous step forward in transforming both films and games.
If you look at AI applications in today’s gaming industry, you will note that AI is mainly used in two areas: reducing the cost of game design and enhancing the in-game experience.
Let’s start with game design. Procedural content generation (PCG) has been an important area of game development since the early 1980s. It refers to methods, mostly automated, for generating game content such as levels, maps, game rules, textures, stories, items, quests, music, weapons, vehicles, and characters.
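To make the idea concrete, here is a minimal sketch of classic PCG: carving a cave-like level layout with a random walk (the "drunkard's walk"). The grid format, tile characters, and parameters are invented for illustration; commercial pipelines are far more elaborate.

```python
import random

def generate_level(width, height, floor_ratio=0.4, seed=None):
    """Carve a connected cave-like level into a grid of walls ('#')
    using a random walk, a classic PCG technique. '.' marks floor."""
    rng = random.Random(seed)  # seeding makes levels reproducible
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    target = int(width * height * floor_ratio)
    carved = 0
    while carved < target:
        if grid[y][x] == "#":
            grid[y][x] = "."
            carved += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)  # stay inside the border walls
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]

level = generate_level(20, 10, seed=42)
print("\n".join(level))
```

Because the walk is seeded, the same seed always reproduces the same level, which matters for testing and for games that share levels via seed codes.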
Gaming companies are pressed to cut labor costs in game design, as human game artists and developers are both in demand and expensive. The development cost of a successful commercial game has ramped up more or less steadily; it is now common for a game to be developed by hundreds of people over several years. Will Wright, a renowned game designer, noted in a 2005 Game Developers Conference talk that “a game development company that could replace some of the artists and designers with algorithms would have a competitive advantage.”
As a result, the gaming industry is turning to cutting-edge artificial intelligence to free its staff from time-consuming tasks and create content faster and cheaper. AI is well suited to PCG problems, as it can handle visual and audio data and learn patterns from vast volumes of data.
While AI algorithms in game design are still at a very early stage, considerable research effort is delving into this field. Here are our noteworthy picks of research results.
Games cannot exist without characters, whether they are player characters, controlled or controllable by a player, or non-player characters, controlled by the game itself.
Creating characters is time-consuming because so many things need to be taken into account. Generating a 2D Super Mario is certainly a piece of cake, but what about a human-like character in a 3D role-playing video game?
A variety of research results shine a spotlight on generating game characters’ faces, voices, and motions. Last year, Nvidia researchers and the independent game developer Remedy Entertainment put together an automated real-time deep learning technique to create 3D facial animations from audio with low latency. This end-to-end model takes audio waveforms as input and outputs the 3D vertex coordinates of a face model. The technique can be used for in-game dialogue, low-cost localization, virtual-reality avatars, and telepresence.
A research team from the University of Edinburgh and Method Studios focuses on motion simulation. They built a machine learning system trained on motion-capture clips showing various kinds of movement. The system then generates animations that can, for example, transition from a jog to hopping over a small obstacle.
Beyond the generation of in-game characters, developers may focus on general properties of the visual output such as pixel shaders, lighting, brightness and saturation, which can all influence the overall appearance of any game scene.
Creating maps and levels
An emerging method in PCG research is the generative adversarial network (GAN), a deep neural net architecture comprising two networks that contest with each other. GANs have attained excellent results in producing content of the same type or style as existing content.
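For intuition, here is a toy, pure-Python sketch of that adversarial loop on one-dimensional data; the single-parameter generator and discriminator are invented for illustration, whereas real level-generating GANs use deep convolutional networks. The two-player dynamic is the same: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

rng = random.Random(0)

# Generator G(z) = b + z tries to match "real" data centered at 3.0.
# Discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
b = 0.0          # generator parameter
w, c = 0.1, 0.0  # discriminator parameters
lr_g, lr_d = 0.05, 0.02

for step in range(5000):
    x_real = rng.gauss(3.0, 0.5)
    z = rng.gauss(0.0, 0.5)
    x_fake = b + z

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr_d * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr_d * ((1 - d_real) - d_fake)

    # Generator step: non-saturating loss, ascend log D(fake).
    d_fake = sigmoid(w * x_fake + c)
    b += lr_g * (1 - d_fake) * w

print(f"generator mean after training: {b:.2f} (real mean is 3.0)")
```

The generator starts at 0 and is pushed toward the real data's mean of 3.0, exactly the mechanism that, at scale, lets a GAN imitate the style of existing levels.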
Researchers are using GANs to generate levels, by far the most popular use of PCG in games, since levels and game rules are the essential building blocks of any game.
This May, computer scientists at Italy’s Politecnico di Milano introduced a level-design AI that uses GANs to create new maps for DOOM, a popular first-person-shooter video game. They created a generative network to compose the overall size of a level, the heights of its walls, the number of rooms, and other measurements, and a discriminator network to evaluate its work. The results were impressive: the AI-generated levels look akin to human-made old-school levels.
AI invents new games
Just recently, researchers at Georgia Tech took a step forward by using GANs to invent new games. In their paper, they took video game levels from already developed games as input and converted them into an output that lays out the environments, objects, and rules for a new video game. The system learned from two Nintendo games, Super Mario Bros. and Kirby’s Adventure, and output a new game akin to Mega Man.
Mega Man 11
The research suggests AI cannot replace human developers yet, as it only generates simple games with the most basic rules and scenes. But the researchers believe further development could lead to the automated creation of games with 3D environments, complex rules, and menu systems.
Enhancing Gaming Experience with AI
Another primary use of AI in games is modeling a human player to understand how individual players experience the interaction with a game. Generally speaking, AI needs to understand what a player does and how a player feels during play.
To gauge a player’s in-game experience, developers use supervised learning methods, such as support vector machines or neural networks, to build models of player experience. The training data consists of some aspect of the game or the player-game interaction, and the targets are labels derived from an assessment of player experience, gathered for example from physiological measurements or questionnaires.
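As a minimal sketch of that setup, the code below trains a tiny logistic-regression classifier on invented per-session features (deaths and help requests) against invented frustration labels standing in for questionnaire answers; a production system would use richer telemetry, validated experience scores, and a stronger model such as an SVM or neural network.

```python
import math

# Toy training data: (deaths, help_requests) per session.
# Features and labels are invented for illustration; the label 1
# stands in for a player reporting frustration in a questionnaire.
X = [(1, 0), (2, 1), (0, 0), (3, 1),    # engaged sessions
     (9, 4), (8, 5), (10, 3), (7, 4)]   # frustrated sessions
y = [0, 0, 0, 0, 1, 1, 1, 1]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Logistic regression trained with plain stochastic gradient descent.
w = [0.0, 0.0]
bias = 0.0
lr = 0.1
for _ in range(1000):
    for (x1, x2), label in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + bias)
        err = p - label          # gradient of the log loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        bias -= lr * err

def predict(x1, x2):
    """True if the model predicts the player is frustrated."""
    return sigmoid(w[0] * x1 + w[1] * x2 + bias) >= 0.5
```

Once trained, such a model can flag frustrated players live, so the game can adapt difficulty or offer hints before they quit.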
Increasing player engagement is a more complex problem. Developers usually identify four main player-modeling subtasks that are particularly relevant for game AI:
- Developing smart, human-like NPCs that interact better with players;
- Predicting human players’ behaviors to improve game testing and game design;
- Classifying player behaviors to enable personalization of the game;
- Discovering frequent patterns or sequences of actions to determine how a player behaves in a game.
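The last subtask can be sketched with a simple n-gram count over an action log. Dedicated sequence-mining algorithms (e.g., GSP or SPADE) are used in practice, and the log below is invented for illustration.

```python
from collections import Counter

def frequent_patterns(actions, n=2, min_count=2):
    """Count length-n action sequences (n-grams) in a play log and
    keep those that occur at least min_count times."""
    grams = Counter(tuple(actions[i:i + n])
                    for i in range(len(actions) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

# A hypothetical log of one player's actions.
log = ["run", "jump", "attack", "run", "jump", "attack",
       "heal", "run", "jump"]
print(frequent_patterns(log, n=2))
# {('run', 'jump'): 3, ('jump', 'attack'): 2}
```

Recurring sequences like run-then-jump reveal habits a designer can build on, for example by placing obstacles that reward (or challenge) that pattern.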
Beyond these, there are many use cases of game AI that we have not discussed, such as game testing, game studies, and in-game chat monitoring.
At the upcoming AI Frontiers Conference, held from November 9 to November 11, Long Lin, Director of Data & AI at EA, will show several use cases where ML/AI helps game development and enhances the player experience.
“AI has become increasingly popular and widely used in the gaming industry. The typical characteristics of games and game development make them an ideal playground for practicing and implementing AI techniques, especially deep learning and reinforcement learning. Most games are well scoped; it is relatively easy to generate and use the data, and states/actions/rewards are relatively clear,” says Lin.
We look forward to hearing what Lin has to say about the role of AI in games.
Long Lin will speak at AI Frontiers Conference on Nov 9, 2018 in San Jose, California.
AI Frontiers Conference brings together AI thought leaders to showcase cutting-edge research and products. This year, our speakers include Ilya Sutskever (Founder of OpenAI), Jay Yagnik (VP of Google AI), Kai-Fu Lee (CEO of Sinovation), Mario Munich (SVP of iRobot), Quoc Le (Google Brain), Pieter Abbeel (Professor at UC Berkeley), and more.