In the last few months, artificial intelligence has been having a moment. The rollout of programs like ChatGPT, DALL-E and Stable Diffusion, which can produce text and graphics seemingly with the creativity of a human, has sparked the imagination of droves of innovators, who are already looking for new ways to use the technology. The next testbed for these advanced algorithms could be the virtual worlds of video games.

Microsoft recently showed an internal demonstration of an AI program that lets users create things in the sandbox game “Minecraft” by giving a single command, rather than performing the individual building tasks current gameplay requires. Sony is currently allowing players of its racing game “Gran Turismo” to face off against its proprietary AI program, GT Sophy. And “Roblox,” a game built around players creating and sharing new virtual worlds, is planning to use the technology to make it easier for creators to bring their visions to life.

While video games have used less-sophisticated versions of AI algorithms for as long as ghosts have been chasing Pac-Man, advances in technology and processing power are creating exciting new opportunities for game designers, according to Frank Lee, PhD, director of Drexel University’s Entrepreneurial Game Studio in the ExCITe Center. Lee recently shared his insights on the role AI has played in gaming and how it is likely to shape the future of gameplay.

How is AI already being integrated into games?

The use of AI in gaming has been fairly rudimentary. One example is the A* algorithm, a fast method for finding an optimal path for a computer-controlled NPC (non-player character) through the virtual game world. Another is the Alpha-Beta pruning algorithm, which allows NPCs to make decisions from a set of options. Both of these algorithms have been around since the 1960s, and games have made use of them in some capacity. But even as algorithms evolved into machine learning, neural networks and what we call artificial intelligence today, with programs like ChatGPT and DALL-E, we haven’t seen much effort to integrate them into games.
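For readers curious what such a routine looks like, here is a rough, illustrative sketch in Python (not code from any particular game): a minimal A* pathfinder on a small grid, using a Manhattan-distance heuristic to steer an NPC around walls.

    # Minimal A* sketch: grid cells marked 1 are walls, 0 are walkable.
    import heapq

    def a_star(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(heuristic(start), 0, start)]   # (f = g + h, g, cell)
        came_from, best_g = {}, {start: 0}
        while open_set:
            _, g, current = heapq.heappop(open_set)
            if current == goal:                     # reconstruct the path back to start
                path = [current]
                while current in came_from:
                    current = came_from[current]
                    path.append(current)
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (current[0] + dr, current[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    if g + 1 < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g + 1
                        came_from[nxt] = current
                        heapq.heappush(open_set, (g + 1 + heuristic(nxt), g + 1, nxt))
        return None                                 # no path exists

    # Example: route an NPC from the top-left to the bottom-right of a small map.
    level = [[0, 0, 0, 1],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
    print(a_star(level, (0, 0), (2, 3)))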

One of the first examples of what we would currently think of as artificial intelligence, or machine learning, in gaming came in the 1990s, when IBM developed its chess-playing system Deep Blue, which became the first computer to defeat a reigning world chess champion.

But that was specific hardware, designed at the chip level to rapidly conduct an Alpha-Beta pruning search of a limited set of data, chess moves in this instance. We haven’t seen much more sophisticated use of AI than that over the years. In the early 2000s, the “Halo” franchise integrated a behavior tree algorithm to make non-playable characters’ behavior less predictable, but those decisions were still based on a very limited set of conditions.

So, there are little bits of what might be considered AI in games, but they are really just rules coded into the game to make it seem more intelligent or responsive.
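As a rough illustration of those hand-coded rules (a sketch, not Halo’s actual code), a tiny behavior tree can be written in a few lines of Python: a selector tries options in priority order, and each option only fires when its condition holds.

    # Sketch of a behavior tree: a selector tries children until one succeeds;
    # a sequence runs children until one fails.
    def selector(*children):
        return lambda npc: any(child(npc) for child in children)

    def sequence(*children):
        return lambda npc: all(child(npc) for child in children)

    def condition(check):
        return lambda npc: check(npc)

    def action(name):
        def run(npc):
            print(f"{npc['name']} -> {name}")
            return True
        return run

    # The NPC flees when badly hurt, otherwise attacks if the player is visible,
    # otherwise falls back to patrolling.
    npc_brain = selector(
        sequence(condition(lambda npc: npc["health"] < 30), action("flee")),
        sequence(condition(lambda npc: npc["sees_player"]), action("attack")),
        action("patrol"),
    )

    npc_brain({"name": "grunt", "health": 80, "sees_player": True})   # -> attack
    npc_brain({"name": "grunt", "health": 20, "sees_player": True})   # -> flee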

How would using this type of AI in gaming be different?

What we’re looking at now is deep learning. Instead of the program sifting through hundreds or thousands of pieces of information, these “neural network” programs can process millions or billions of pieces of data to produce their responses. For example, ChatGPT’s neural network was trained on a huge corpus of written material from the internet, so when you ask a question, it uses that question as a prompt and generates a response that resembles the examples it was trained on.

If these AI programs were built into games, you could potentially see them directing the narrative of the game in new directions in real time, like having your own personal DM (dungeon master). However, that would leave the game designer with little to no control over the game or the play experience. So we could see situations similar to those reported with ChatGPT, where the program produces unsettling answers.

We’re already seeing it used on the design side to help make characters more lifelike and to save some of the labor-intensive work of designing environments. But for the most part, integrating it into gameplay is still theoretical.

What could AI mean for sandbox games? The Microsoft demonstration with “Minecraft” is a shortcut for building things, but doesn’t that kind of ruin the game?

The question is: is that good game design? Role-playing games, for example, sometimes involve elaborate crafting systems that allow you to make all kinds of things, and that crafting is a big part of the gameplay. If players didn’t want to go through all the steps, to the point where they’re no longer really playing, then the game designers would just include a button as a shortcut. Some of the fun is in knowing and learning these elaborate recipes or sets of directions to make these things.

But I could see more accurate AI language models being integrated to help from an accessibility standpoint. If they could take speech input and convert that into an appropriate action, it could improve gameplay for players with various disabilities.

What are the challenges of bringing AI into gameplay?

The reason I think AI hasn’t been integrated as much is that gaming is such a rapidly cycling industry. As consumers, we expect new games, and new versions of existing games, roughly every year or two. So developers are given half a year or a year to make a game, and for the most part what people have judged games on, and still do, is graphics. That has been true for most of the history of gaming.

As a result, most of the development and advancement in games has been in graphics engines and hardware acceleration of graphics. Making games look pretty has been the focus, so little time and effort has gone toward building AI into gameplay.

With large language models like ChatGPT, we could see the creation of a non-playable character that you could talk to and that feels like a real person. But the challenge is that the developers have no control over what it’s going to say. From a game designer’s perspective, your goal is to create a compelling narrative; if you’re turning control over to an AI and have no idea what it is going to say, it’s hard to design and drive the narrative forward.
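As a sketch of what that might look like (an assumption on our part, using the open-source transformers library and the small GPT-2 model as a stand-in; no shipping game works exactly this way), an LLM-backed NPC might be wired up like this, with a persona prompt as the designer’s only lever of control over what comes back.

    # Hypothetical LLM-driven NPC dialogue; GPT-2 is just a small stand-in model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    def npc_reply(player_line: str) -> str:
        persona = (
            "You are Mara, a cautious blacksmith in a medieval fantasy town. "
            "Stay in character and never discuss the real world.\n"
        )
        prompt = persona + f"Player: {player_line}\nMara:"
        output = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
        reply = output[len(prompt):]          # keep only the newly generated text
        return reply.split("\n")[0].strip()   # crude guardrail: stop at the first line break

    print(npc_reply("Do you have any swords for sale?"))

Even with the persona prompt and the trailing guardrail, nothing in this sketch guarantees what the model will say, which is exactly the design-control problem Lee describes.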

What’s next? How do you foresee the latest revolution in AI affecting gaming (from design, to production, to play) in the future?

Where I see the most potential for AI is on the development side. Generating more complex and detailed scenes could take much less time with assistance from AI. We’re already seeing examples of programs being used to reduce the hard labor of the artists who design environments, and to create more humanlike character movements and behaviors.

There was a lot of buzz around “No Man’s Sky” because of its use of generative techniques. In the game, full worlds are generated on the fly as part of gameplay, but this is done through a set of procedural, rule-based algorithms rather than a neural network AI program.
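As a rough illustration of that distinction (a sketch, not No Man’s Sky’s actual algorithm), rule-based procedural generation boils down to deterministic rules driven by a seed: the same seed always yields the same “world,” so nothing has to be hand-built or stored.

    # Sketch of seeded, rule-based procedural generation.
    import random

    def generate_planet(seed: int, width: int = 16) -> dict:
        rng = random.Random(seed)                      # deterministic per-planet generator
        biome = rng.choice(["desert", "ocean", "jungle", "tundra"])
        heights = [rng.randint(0, 9) for _ in range(width)]
        return {"seed": seed, "biome": biome, "terrain": heights}

    print(generate_planet(42))   # identical output on every machine, every run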

But this shows the potential of using AI to generate new levels that are unique to the player. There are currently games that serve up new levels, but those are crowd-sourced from other players who created them. Theoretically, AI could be used to generate an endless supply of new levels.

I could also see something like DALL-E being used in the idea-generation process to quickly mock up ideas in the early stages of conceptualizing a game. You could feed in some ideas during the brainstorming phase of design to stimulate creativity and come up with other ideas, or to quickly see whether something might work.

Media interested in speaking with Lee should contact Britt Faulstick, executive director, News & Media Relations, at 215-895-2617 or bef29@drexel.edu.


