• The work of video game art designers is being transformed by artificial intelligence: generative AI tools open up a huge range of possibilities for rapidly creating content from prompts, without requiring artists to manage every technical detail.
  • Researchers are using AI technologies to automatically generate terrain, characters and even textures, dramatically speeding up the design process for video games.
  • Generative AI technologies, which are increasingly present in professional studios, will have a major long-term impact on the video games industry.

Machine translation, infinitely customisable players, adaptive scenarios, design and even complete development of video games: artificial intelligence has the potential to radically transform the video games industry. Users of Club Koala (by Play for Fun) and other games can now customise their virtual worlds thanks to generative AI and, more specifically, diffusion models. Players can generate music and also customise non-player characters (NPCs). The game’s publisher explains that “NPCs operate autonomously, adapting to player behaviour, offering personalized quests, and contributing to intricate narratives.” What has also broken new ground in the field is the generation of unique scenarios: “By incorporating preprocessing, pan-dialogue, text parsing, and an AI NPC behaviour tree structure, the game generates one-of-a-kind storylines, reflecting players’ individual creativity.”
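The “NPC behaviour tree” the publisher mentions is a standard game-AI structure, and a minimal sketch can make the idea concrete. The node types below (Selector, Sequence, leaf actions) are the conventional behaviour-tree building blocks; the quest logic and NPC state are purely hypothetical illustrations, not Club Koala’s actual implementation.

```python
# Minimal behaviour-tree sketch for an adaptive NPC (illustrative only).
SUCCESS, FAILURE = "success", "failure"

class Selector:
    """Ticks children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        for child in self.children:
            if child.tick(npc) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Ticks children in order; fails on the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        for child in self.children:
            if child.tick(npc) == FAILURE:
                return FAILURE
        return SUCCESS

class Action:
    """Leaf node wrapping a function npc -> SUCCESS | FAILURE."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, npc):
        return self.fn(npc)

# Hypothetical NPC that offers a quest only to players it has met before.
def knows_player(npc):
    return SUCCESS if npc.get("met_player") else FAILURE

def offer_quest(npc):
    npc["dialogue"] = "I have a task suited to you."
    return SUCCESS

def greet(npc):
    npc["dialogue"] = "Hello, stranger."
    npc["met_player"] = True
    return SUCCESS

tree = Selector(
    Sequence(Action(knows_player), Action(offer_quest)),
    Action(greet),
)

npc = {}
tree.tick(npc)  # first encounter: the NPC only greets
tree.tick(npc)  # second encounter: the NPC now offers a quest
print(npc["dialogue"])  # → I have a task suited to you.
```

Because the tree is re-evaluated on every tick against the NPC’s current state, the same structure yields different behaviour as the player’s history changes, which is what makes behaviour trees a natural fit for adaptive NPCs.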

Rogerio Tavares, a computer science PhD at the Polytechnic Institute of Braganca (Portugal), points out that “Video games have always used AI [editor’s note: he uses the term in a broader sense than is usual nowadays] to perform calculations. For example, in Pac-Man, AI was used to generate the movement of the ghosts. Today’s models are more advanced and can respond to players’ actions, much like in a game of chess, where AI can calculate a wide range of probabilities and even predict their opponents’ moves.” In 2022, Tavares notably co-authored “Review and analysis of research on Video Games and Artificial Intelligence: a look back and a step forward” in the journal Procedia Computer Science, detailing the artificial intelligence methods that have been used in the design of video games.

Certain techniques enable artists to modify content using prompts, but do not necessarily give them full control.

Deep Learning to generate open worlds

Studios like Ubisoft are now developing techniques based on research in the field of image synthesis, such as the work of Éric Guérin, a doctor of computer science who lectures at INSA Lyon. As Guérin explains, “When you create an open world covering dozens of square kilometres, you need to be accurate to within 50 cm. This represents an enormous amount of data, which is why we need to provide artists with algorithms that can help them expedite the design process.” He is also eager to emphasize the need for tools that give artists a great deal of control over their designs. “It’s important not to undermine their existence as artists, unlike some of today’s techniques, which enable them to modify content using prompts but do not necessarily give them full control.” To this end, generative AI can also be combined with other technologies: “There are deep learning tools to harness data, but there are also other technologies that have been around for a long time, which are solely based on algorithms.”
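The long-standing, purely algorithmic techniques Guérin refers to can be illustrated with a classic example: midpoint displacement, a fractal method used for terrain heightfields since the 1980s. The sketch below is an illustration of that family of algorithms under simple assumptions (a 1D profile, a fixed roughness factor), not a tool from any studio’s actual pipeline.

```python
import random

def midpoint_displacement(n_iterations, roughness=0.5, seed=0):
    """Classic 1D midpoint displacement: start from a flat profile and
    repeatedly insert displaced midpoints, shrinking the displacement
    range each iteration. Returns a list of terrain heights."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]   # endpoints of the profile
    spread = 1.0           # current displacement amplitude
    for _ in range(n_iterations):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        spread *= roughness  # finer detail gets smaller bumps
    return heights

profile = midpoint_displacement(8)
print(len(profile))  # → 257, i.e. 2**8 + 1 samples
```

Because each iteration doubles the resolution while halving the bump size, the result is self-similar at every scale, which is why such simple procedures remain useful alongside deep learning tools for roughing out large terrains.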

Automating the generation of vegetation

A specialist in the creation of virtual worlds, Éric Guérin works on technologies that automatically generate landscapes while still enabling designers to choose where rivers, cliffs and ridgelines will be. “It is a 2D diffusion model that allows the artist to choose the features of an environment and move them around.” Training plays an essential role in building these tools: “We take existing terrain, for example from IGN or NASA map databases, and try to annotate it automatically, so that we can then generate new terrain.” Vegetation can also be generated using conditional Generative Adversarial Networks (cGANs), a specific machine learning framework: “We also use automatic annotation to determine whether areas are densely or sparsely vegetated. The aim is to enable the designer to generate canopy heights, and also to provide an associated algorithm that plants trees that respect that height.”
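The second half of that pipeline — an algorithm that plants trees respecting a canopy-height map — can be sketched in a few lines. Everything here is a hypothetical illustration: the real maps come from a cGAN rather than the hand-written grid below, and the density and jitter parameters are invented for the example.

```python
import random

def plant_trees(canopy_height, cell_size=1.0, density=0.3, seed=0):
    """Scatter trees over a 2D canopy-height map (rows of heights in
    metres). A cell with height 0 is treated as unvegetated; elsewhere,
    each cell receives a tree with probability `density`, jittered
    within the cell, and the tree's height never exceeds the canopy
    value at that cell."""
    rng = random.Random(seed)
    trees = []
    for i, row in enumerate(canopy_height):
        for j, h in enumerate(row):
            if h > 0 and rng.random() < density:
                x = (j + rng.random()) * cell_size
                y = (i + rng.random()) * cell_size
                trees.append((x, y, rng.uniform(0.5 * h, h)))  # capped by canopy
    return trees

# Hypothetical 3x3 canopy map: 20 m forest on the left, bare ground on the right.
canopy = [[20.0, 20.0, 0.0],
          [20.0, 15.0, 0.0],
          [20.0, 15.0, 0.0]]
for x, y, h in plant_trees(canopy, density=0.8):
    assert h <= 20.0  # no tree ever exceeds the designer's canopy height
```

The key design point is the division of labour Guérin describes: the generative model produces the coarse constraint (canopy height), while a deterministic placement algorithm fills in the detail, so the designer edits one small map instead of thousands of individual trees.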

Characters in just a few clicks

In Portugal, Rogerio Tavares explains how he uses an AI image creation tool, Stable Diffusion, to design game characters. “I trained Stable Diffusion to draw like me so as to produce additional views of my characters. In the past, when I first designed a character, it would usually take me a day to create an additional view, but the process is now much quicker.” For 3D modelling, the design process involves additional steps that require further expertise. The researcher uses Blender, an open-source tool that can be equipped with AI plug-ins. The plug-ins “make it easy to generate textures on characters, whereas before I had to either create these or find them on the Internet.” With techniques for generating environments, characters and scenarios, designers and artists creating visuals for video games now have tools that facilitate and accelerate their work without undermining their artistic control. Without having to master machine learning, they are already benefiting from a much wider range of possibilities for artistic expression.
