Generative AI Is Breathing New Life Into Classic Computer Games

Generative AI has the potential to revolutionize every creative industry, and the gaming industry is no exception. Game worlds are getting richer, more immersive, and in many ways closer to being simulations of our own “real” world. This means that the cost and the size of the teams needed to build them are skyrocketing, too.

Generative AI tools – large language models like GPT-4 and image generation algorithms like DALL-E 2 – can help by taking the strain off artists and designers, creating thousands of unique assets with subtle differences. These assets could be the locations, objects, characters and adversaries that make up the game world.

Here’s an example. Stroll through a forest in even a very recent video game and, if you pay attention, you might start to notice that only a small number of individual tree models are used. After a while, they start repeating, and you see the same trees in different locations.

The game’s job is to keep you entertained and engaged in the action or storyline so that you don’t notice these technical limitations. When you do notice them, though, it’s jarring, and it immediately breaks the suspension of disbelief the game has created.

With generative AI, a forest might be populated by thousands of completely unique trees and home to the same diversity of critters and creepy crawlies as a real stretch of woodland.

Revolution Software

That’s a likely scenario for game design in the near future, but here’s an example of generative AI in use today.

Revolution Software is a UK game developer that scored a big hit with the Broken Sword series of adventure games in the 1990s – before the days of multiplayer online gaming and photorealistic 3D graphics.

Since then, it’s taken a different path from many studios of that era, which either expanded into multimedia production powerhouses to cope with the increasing cost and complexity of game design, or didn’t – and went bust or were subsumed into studios that did.

According to Polygon, Revolution has retained its small-team structure and mainly supported itself with sequels and re-issues in the Broken Sword series.

When the studio made plans to update the first games in the series to run on the latest generation of game consoles and PCs, it hit a problem. The old graphics were all scaled to fit the far lower-resolution displays in use back then, and because they were hand-drawn artworks, recreating all of them at the resolution gamers now expect on their Ultra HD displays would have been prohibitively expensive.

Studio founder Charles Cecil connected with generative AI researchers at the University of York, who were able to take a few sample pieces of art designed for a modern-day update and use them to train a generative adversarial network (GAN).

With some help from an Nvidia engineer to fine-tune the model, the studio ended up with a generative AI model capable of creating a piece of in-game artwork, such as an object or character, in five to ten minutes.
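To give a flavor of what’s involved, here’s a minimal, hedged sketch of the kind of image-to-image GAN training described above, written in PyTorch. The architecture, losses and 4x scale factor are illustrative assumptions on my part, not Revolution’s or the University of York team’s actual pipeline.

```python
# Minimal sketch of an image-to-image GAN for upscaling game art.
# Assumes paired tensors of low-res originals and a few high-res samples,
# both normalized to [-1, 1]. Sizes and losses are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upscales a low-res frame 4x with a small conv net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how much a high-res frame looks like real hand-drawn art."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, low_res, high_res):
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(len(high_res), 1)
    fake_label = torch.zeros(len(high_res), 1)
    # Discriminator: real hand-drawn art vs. generated upscales.
    fake = gen(low_res).detach()
    d_loss = bce(disc(high_res), real) + bce(disc(fake), fake_label)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the target art.
    fake = gen(low_res)
    g_loss = bce(disc(fake), real) + nn.functional.l1_loss(fake, high_res)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
# low_res: (batch, 3, 64, 64); high_res: (batch, 3, 256, 256)
```

Once trained on the handful of high-res samples, the generator alone is what turns out each new piece of artwork in minutes.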

Human artists then retouch the AI-generated art, focusing particularly on hands and faces – the places where, as many have noted, imperfections are most likely to occur in AI-generated images of people.

This made the studio’s plans to bring the much-loved games to a new generation of modern-day gamers economically feasible.

As Cecil said, “The ability to use AI … is an absolute game changer … [without it] we just couldn’t afford to do it.

“It really is, you know, allowing very talented character artists and animators to take the original and mold it into something really special, rather than having to go through the drudgery of redrawing everything.”

Automating the Mundane Aspects of Creativity

As in other industries, the most exciting applications of generative AI might seem slightly mundane, considering the hype and glamor being built up around AI.

But the real magic doesn’t lie in the thousands of very similar images it can churn out at rapid speeds. Rather, it’s about what the artists and designers can do with their time once they’re freed from the “drudgery”!

There are a number of other ways that I can foresee generative AI causing a stir in the games industry.

In the near future, it may be possible to meet and interact with in-game characters that behave and converse far more naturally than we’re used to today. Nvidia’s Avatar Cloud Engine (ACE) is intended to let game designers put characters with generative AI-driven personalities into their creations.

It could also be used to create dynamic storylines. Stories could adjust more flexibly to deal with individual player choices, creating more personalized experiences than would be possible using only human writers. ChatGPT, for example, can be instructed to create games with ongoing AI-generated storylines using only simple prompts.
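As a rough illustration, here’s what that loop might look like with the OpenAI Python SDK. The system prompt, choice of model and two-option structure are my own assumptions for the sake of the example, not a production game design.

```python
# Hedged sketch of an LLM-driven branching storyline using the OpenAI
# Python SDK. The prompt, model name and loop structure are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

history = [{
    "role": "system",
    "content": ("You are the narrator of an adventure game. Continue the "
                "story in two or three sentences, reacting to the player's "
                "choice, then offer two numbered options."),
}]

def advance_story(player_choice: str) -> str:
    """Feed the player's choice back into the model so the plot adapts."""
    history.append({"role": "user", "content": player_choice})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(advance_story("I pry open the crypt door with the rusty sword."))
```

Because the full conversation history is passed back on every call, earlier player choices keep shaping later scenes – the ongoing, personalized storyline described above.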

It can also be used for automated testing – creating legions of simulated players, all playing the game in different ways in line with their AI-generated play styles and personalities. This means game developers can quickly work out what play styles are likely to lead to less satisfying gaming experiences and adapt their products accordingly.
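Here’s a deliberately toy sketch of the idea. Real playtesting agents would drive an actual game build; the hypothetical “risk appetite” values below stand in for AI-generated play styles, and the level is reduced to a progress counter.

```python
# Toy sketch of play-style-driven automated testing. All numbers are
# illustrative assumptions; a real setup would drive the actual game.
import random

PLAY_STYLES = {"cautious": 0.1, "balanced": 0.4, "speedrunner": 0.9}

def simulate_run(risk: float, rng: random.Random, max_steps: int = 100) -> bool:
    """Return True if a simulated player with this risk appetite finishes."""
    progress = 0.0
    for _ in range(max_steps):
        takes_shortcut = rng.random() < risk
        if takes_shortcut and rng.random() < 0.05:  # shortcuts sometimes end the run
            return False
        progress += 2.0 if takes_shortcut else 1.0  # shortcuts advance faster
        if progress >= 105:  # arbitrary "end of level" threshold
            return True
    return False  # ran out of time: a frustrating session worth flagging

rng = random.Random(0)
for name, risk in PLAY_STYLES.items():
    completions = sum(simulate_run(risk, rng) for _ in range(1000))
    print(f"{name:12s} completion rate: {completions / 10:.1f}%")
```

Scaled up to thousands of agents, this kind of simulation flags which play styles hit dead ends or frustrating difficulty spikes long before human testers do.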

It could even be used for dynamically generated voiceovers, enabling characters to speak new lines in a consistent voice, even when the player forces them to go off-script.

What Does This Mean for Game Design and Game Designers?

It’s very exciting to imagine that small, independent studios will be able to harness the power of generative AI in order to make games that would otherwise require much larger teams and a huge outlay of money.

At the same time, the industry must take care to manage the impact that these emerging technologies will have on human jobs. A small studio can’t really be blamed for wanting to use AI to create a game that would otherwise be out of its league. Many might say, however, that a larger studio has a duty of care to the creatives it employs to ensure they aren’t made redundant by machines.

My own opinion is that it goes beyond a duty of care, though, and there are sound business reasons for humans to be kept in the mix.

Some are intangible but nevertheless undeniable – such as the fact that AI simply isn’t capable of recreating the “spark” of human ingenuity and creative nuance. Or that an AI’s lack of emotional intelligence means it’s unlikely to create in a way that resonates with us on an emotional level.

Others are very practical. For example, we’ve all seen that generative AI has the potential to hallucinate, creating in a way that deviates wildly from what its user intended. This can even result in output that’s hateful, discriminatory or in other ways harmful. Without human oversight and expertise at mitigating this in a way that’s relevant to creativity, it could spell disaster for a company going all-in on AI.




