Even though generative-AI tools such as OpenAI’s ChatGPT and Google’s Bard often respond to user queries with some of the copyrighted material that makes them function, major tech companies have suggested that users are to blame for any claims of infringement.
Google, OpenAI, and Microsoft called for users to be held responsible for the way they interact with generative-AI tools, according to comments the companies submitted to the US Copyright Office that were made public last week. The USCO is considering new rules on artificial intelligence and the tech industry’s use of copyrighted content to train the large language models underlying generative-AI tools.
Many Big Tech companies submitted comments to the office, generally arguing against any new rules for generative AI and saying that having to pay for copyrighted material would derail their plans in the AI field. None of the companies denied that they train their AI tools on massive amounts of copyrighted work scraped from the internet without payment, or that these tools can reproduce copyrighted material. But Google, OpenAI, and Microsoft (a major investor in OpenAI) said the user was responsible whenever such reproduction occurred.
Google argued that when an AI tool was “made to replicate content from its training data,” the fault did not lie with the tool’s developer, who had taken steps to prevent such data from being shown.
“When an AI system is prompted by a user to produce an infringing output, any resulting liability should attach to the user as the party whose volitional conduct proximately caused the infringement,” Google wrote in its comment.
Google added that holding a developer such as itself responsible for copyright infringement would create a “crushing liability,” given that AI developers already try to prevent copyrighted material from appearing in outputs. Holding developers responsible for the copyrighted training data that makes their AI tools tick, Google argued, would be akin to holding the makers of photocopiers and audio or video recorders liable for infringement committed with those devices.
Microsoft also brought up how people could use photocopiers, as well as a “camera, computer, or smartphone,” to create infringing works and were not held liable for such activity. It said a generative-AI tool was, like a camera, a “general purpose tool.”
“Users must take responsibility for using the tools responsibly and as designed,” Microsoft said.
OpenAI argued that when one of its tools turned up copyrighted content, “it is the user who is the ‘volitional actor.’” In copyright law, identifying the volitional actor typically comes down to the question: “Who made this copy?”
“In evaluating claims of infringement relating to outputs, the analysis starts with the user,” OpenAI wrote. “After all, there is no output without a prompt from a user, and the nature of the output is directly influenced by what was asked for.”
Courts have typically found that machines lack the “mental state,” or human-level thinking, required to have the volition needed for liability. Yet as technology progresses, tools such as generative AI may reach a level of operation at which the companies behind them can be held liable, as a 2019 paper in the Columbia Law Review suggested. Big Tech and other companies involved in AI development frequently present their AI tools as humanlike in their learning and abilities, including in many of their comments to the USCO.
Already, many governments and regulatory bodies around the world are proposing or considering new laws on AI.
Are you a tech employee or someone with a tip or insight to share? Contact Kali Hays at khays@insider.com, on the secure messaging app Signal at 949-280-0267, or through X/Twitter DM at @hayskali. Reach out using a nonwork device.