Of all the use cases for generative AI, video games are perhaps the most significant. Simple games have already been built with GPT-4, and the technology clearly has potential in more ambitious, professional game development as well. To gain insight into that potential, I spoke with Marc Whitten, Senior Vice President and General Manager of Unity Create. Whitten is particularly enthusiastic about how AI can transform game development, and we discussed how the tools to enable this revolution are already being introduced to creators.

One major benefit of AI in game development is the ability to dramatically reduce the time it takes to create content. According to Whitten, around 80% of the team at a typical 300-person AAA studio is dedicated to content creation, and AI can greatly accelerate that work. As an example, Whitten pointed to Ziva Face Trainer, a tool developed by Unity after its acquisition of Ziva in early 2022. Ziva Face Trainer trains a model on a large dataset of emotions and movements and uses it to produce a usable facial rig. Traditionally, high-end rigging of a character can take a team of four to six artists four to six months; with Ziva Face Trainer, developers can get a rigged model in just five minutes, allowing for real-time usage. Ziva’s technology has already been used in Spider-Man: Miles Morales and the trailer for Senua’s Saga: Hellblade 2, as well as in movies and TV shows like Captain Marvel, John Wick 3, and Game of Thrones.

While machine learning and procedural techniques are not new to game development, generative AI, specifically large language models (LLMs) and diffusion models, has the potential to bring about far bigger changes. Whitten hopes that AI can make games “ten to the third better”: ten times faster, ten times easier, and ten times cheaper to develop. That doesn’t mean a flood of near-identical games, however. Whitten believes AI will lead to “broader, bigger, deeper worlds”. He imagines, for example, a game like Skyrim enhanced by a generative model, where individual guards have unique backstories shaped by the player’s choices and the model generates responses consistent with those events. We have seen early steps in this direction with games like The Portopia Serial Murder Case, but there is still considerable work to be done to fully realize the potential.
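
To make that idea concrete, here is a minimal sketch, not anything Unity has shipped, of how a guard’s dialogue could be conditioned on a short backstory plus a running log of player-driven events. The `llm_complete` helper, the guard’s data, and the canned reply are all hypothetical placeholders for whatever model a studio would actually call.

```python
from dataclasses import dataclass, field

@dataclass
class GuardNPC:
    """A hypothetical Skyrim-style guard whose lines are generated, not scripted."""
    name: str
    backstory: str
    memories: list[str] = field(default_factory=list)  # events the player has influenced

    def remember(self, event: str) -> None:
        self.memories.append(event)

    def respond(self, player_line: str) -> str:
        # Condition the model on who the guard is and what has happened so far,
        # so the reply stays consistent with the player's choices.
        prompt = (
            f"You are {self.name}, a city guard. Backstory: {self.backstory}\n"
            f"Things you remember: {'; '.join(self.memories) or 'nothing unusual'}\n"
            f"The player says: \"{player_line}\"\n"
            "Reply in one or two lines, in character."
        )
        return llm_complete(prompt)  # hypothetical call to a language model

def llm_complete(prompt: str) -> str:
    """Placeholder for a real model call (local or hosted); returns canned text here."""
    return "Aye, I heard what you did at the mill. Folk won't forget it soon."

guard = GuardNPC("Hrolf", "Lost his farm in the war; distrusts mages.")
guard.remember("The player saved the burning mill on Loredas.")
print(guard.respond("Anything to report?"))
```

Keeping the memory as a simple list of strings keeps the sketch readable; a shipping game would also need persistence, moderation, and much tighter control over what the model is allowed to say.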

Sandbox-style games also have a lot of potential with generative AI. Whitten envisions a GTA-style game where players can recruit non-player characters (NPCs) on the strength of unique interactions and events, and a version of Scribblenauts where players can create any object and assign it whatever properties they like. The challenge has always been getting AI to handle input that doesn’t match a specific, anticipated prompt, a limitation that constrained earlier systems such as Kinect and smart assistants like Alexa. Large language models change that dynamic by accepting far more varied, open-ended phrasing.
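
The shift Whitten is gesturing at is largely about input handling: older systems matched the player’s words against a fixed command list, while a language model can map open-ended phrasing onto actions the game already knows how to execute. A rough sketch of that pattern, again with a hypothetical `llm_complete` helper and an invented action set:

```python
import json

ACTIONS = ["recruit_npc", "assign_property", "ignore"]  # invented action set for the sketch

def interpret(player_text: str) -> dict:
    """Ask the model to translate free-form player input into one of the game's actions."""
    prompt = (
        "Map the player's request to a JSON object with keys 'action' and 'target'. "
        f"'action' must be one of {ACTIONS}.\n"
        f"Player: \"{player_text}\"\nJSON:"
    )
    try:
        result = json.loads(llm_complete(prompt))
    except json.JSONDecodeError:
        result = {"action": "ignore", "target": None}   # fall back safely on bad output
    if result.get("action") not in ACTIONS:             # never trust raw model output
        result = {"action": "ignore", "target": None}
    return result

def llm_complete(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned answer for the demo."""
    return '{"action": "recruit_npc", "target": "the mechanic from the garage"}'

print(interpret("Hey, ask that mechanic we met if she wants to join my crew."))
```

Validating the model’s output against the known action list matters, since nothing guarantees a generative model returns well-formed or sensible JSON.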

To enable generative AI at runtime, an intermediary is necessary. For Unity, that intermediary is Barracuda, a neural network inference library. Barracuda executes neural networks, including diffusion and other generative content models, on the player’s CPU or GPU at runtime, eliminating the need to rely on cloud services. Unity is continually improving Barracuda and has seen strong interest from the game creator community. This tool, along with counterparts such as Unreal Engine’s NeuralNetworkInference tool, lets creators bring inference to a large part of game design without worrying about connectivity limits or specialized hardware requirements.
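
Barracuda itself is a C# library used inside the Unity engine, but the workflow it enables, loading a trained network from an asset and running it locally at runtime instead of calling out to a cloud service, can be illustrated with a generic ONNX runtime in Python. The model file and tensor shape below are placeholders, not part of Barracuda’s API.

```python
import numpy as np
import onnxruntime as ort

# Load a trained network exported to ONNX (placeholder path).
session = ort.InferenceSession("stylize_frame.onnx", providers=["CPUExecutionProvider"])

# Discover the model's input name rather than hard-coding it.
input_name = session.get_inputs()[0].name

# Fake frame data standing in for whatever the game would feed the model each tick.
frame = np.random.rand(1, 3, 256, 256).astype(np.float32)

# Inference happens entirely on the local machine -- no network round trip,
# which is what makes runtime use inside a shipped game practical.
outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)
```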

In conclusion, the integration of generative AI into game development holds immense potential for transforming the industry. By cutting content creation time and enabling broader, more immersive worlds, AI can advance game development in significant ways. And with tools like Barracuda, game creators can harness generative AI without the need for specialized hardware, allowing for greater creativity and innovation in game design.
