Little-known indie platform holder Valve have announced a new policy for Steam releases that make use of “AI” technology. To boil it down, developers will now have to disclose how they’re using AI tools on Steam pages, including what “guardrails” they’re putting in place for live-generated stuff that might be illegal or infringe on copyright. Valve are also introducing a new player reporting system for breaches. The company say these adjustments “will enable us to release the vast majority of games that use AI”, with the exception of Adult Only Sexual Content that is generated live.
Valve have been pondering for a while whether to accept games that make use of the latest generative tools. Back in June, the company told Kaan that they wanted to “encourage innovation” on this front, but that it was hard to gauge whether developers have the rights to images, text and music generated by tools “trained” on other images, text and music.
“We know it is a constantly evolving tech, and our goal is not to discourage the use of it on Steam,” a spokesperson commented. “Instead, we’re working through how to integrate it into our already-existing review policies. Stated plainly, our review process is a reflection of current copyright law and policies, not an added layer of our opinion. As these laws and policies evolve over time, so will our process.”
While Valve still feel that AI-based game development is a “fast-moving and legally murky space”, they’ve at last thrashed out a set of policies they’re happy with. The company have added an “AI disclosure section” to the content survey developers fill out when they submit their games for publication, “where you’ll need to describe how you are using AI in the development and execution of your game”. It separates AI usage in games into two broad categories, as below:
Pre-Generated: Any kind of content (art/code/sound/etc) created with the help of AI tools during development. Under the Steam Distribution Agreement, you promise Valve that your game will not include illegal or infringing content, and that your game will be consistent with your marketing materials. In our pre-release review, we will evaluate the output of AI generated content in your game the same way we evaluate all non-AI content – including a check that your game meets those promises.
Live-Generated: Any kind of content created with the help of AI tools while the game is running. In addition to following the same rules as Pre-Generated AI content, this comes with an additional requirement: in the Content Survey, you’ll need to tell us what kind of guardrails you’re putting on your AI to ensure it’s not generating illegal content.
Valve will “include much of” this disclosure on the Steam store page if they accept the game in question. As for the player reporting system, this will allow you to submit a report using the in-game Steam overlay if you see anything that might be illegal.
Given all the online discussion about what exactly an AI-generated image “looks like”, I suspect Valve are going to field a lot of false positives on this front. It might have been an idea to include a guide to spotting them alongside the policy updates, with the caveat, again, that the technologies in question are undergoing rapid evolution as they devour and regurgitate more and more of the existing internet.
“Today’s changes are the result of us improving our understanding of the landscape and risks in this space, as well as talking to game developers using AI, and those building AI tools,” Valve continue. “This will allow us to be much more open to releasing games using AI technology on Steam. The only exception to this will be Adult Only Sexual Content that is created with Live-Generated AI – we are unable to release that type of content right now.
“It’s taken us some time to figure this out, and we’re sorry that has made it harder for some developers to make decisions around their games. But we don’t feel like we serve our players or developer partners by rushing into decisions that have this much complexity. We’ll continue to learn from the games being submitted to Steam, and the legal progress around AI, and will revisit this decision when necessary.”
Valve’s announcement continues a positive January for advocates of “AI” generation. Last week, Microsoft announced plans to add a dedicated “AI Copilot” button to their keyboards, having sunk billions into their research collaboration with OpenAI, whose models underpin Copilot. Square Enix, meanwhile, aim to make “aggressive” use of AI tools in game development.
All this takes place against a backdrop of dispute over whether software that essentially borrows and creatively rearranges material devised by other people is legally permissible, or ethical, and (putting it baldly) whether it might be an excuse to fire people and give their jobs to machines. The New York Times is currently suing Microsoft and OpenAI over ChatGPT’s alleged “unlawful use” of the NYT’s publications, with OpenAI arguing in response that it’s “impossible to train today’s leading AI models without using copyrighted materials.”
Given how many megacorporations are betting big on “AI”, it feels like the best detractors can hope for is a bit of oversight and transparency. Last year, Microsoft reached an agreement with one of the games industry’s larger unions that gave workers a say over how the tools are used in the workplace.