- The Writers Guild of America’s labor deal with Hollywood studios was celebrated as a big win for writers.
- As AI continues to rapidly advance, some skeptics question whether the contract will do enough to protect writers.
- Artificial intelligence remains a big sticking point for actors’ labor negotiations with studios.
Members of the Writers Guild of America (WGA) East hold signs as they walk the picket line for a second day outside Netflix’s New York office on May 3, 2023, in New York City.
Spencer Platt | Getty Images
The Writers Guild of America’s labor deal with Hollywood studios was billed as a big win for writers, but industry experts fear the agreement’s artificial intelligence guardrails won’t be enough.
As it stands, the industry faces several questions about AI and writing now that the deal is ratified, particularly around copyright law, detection of AI usage and how studios will behave. AI also remains a major sticking point in the ongoing actors’ strike; talks broke down Thursday in part over a disagreement between actors and studios on AI guardrails.
Writers and actors have long feared the increasing prominence of AI, primarily due to concerns that the technology could replace the need for them in Hollywood.
“I hope I’m wrong, but I do think that the use of AI is going to take over the entertainment industry,” Justine Bateman, a member of the writers, directors and actors guilds, told CNBC in July.
The WGA agreement established that AI cannot be used to undermine a writer’s credit or as a means to reduce a writer’s compensation. The contract does, however, leave room for studios to train AI using preexisting material. The WGA’s original May proposal, which triggered the strike, would have barred studios outright from training AI on any such material.
The Alliance of Motion Picture and Television Producers did not immediately respond to CNBC’s request for comment.
Letting Hollywood studios train AI on preexisting material could create a whole new set of issues for writers, allowing studios to generate similar material from a writer’s previous work without the writer’s consent, or even awareness.
It is in this gray area that thorny issues could sprout, according to Lisa Callif, partner at Beverly Hills entertainment law firm Donaldson Callif Perez LLP.
“One of the biggest issues we’re dealing with is the misappropriation of how AI uses source material and creates new material out of it without permission,” Callif said. “How do you control this? I think it really comes down to human behavior.”
Allowing studios to train AI with preexisting material was a “punt” down the line, and studios will inevitably “push to use AI as far as possible,” said Peter Csathy, founder and chairman of media legal advisory company Creative Media.
“The biggest inhibitor is probably existing copyright law,” he said.
AI has upended traditional copyright law in the U.S.
Jodi Picoult, author
Darren McCollester | Getty Images
Prominent authors, including Jodi Picoult and George R.R. Martin, sued OpenAI earlier this year for copyright infringement, accusing the startup of using their published works to train ChatGPT.
“We’re having productive conversations with many creators around the world, including the Authors Guild, and have been working cooperatively to understand and discuss their concerns about AI,” a spokesperson for OpenAI told ABC News.
In January, a group of visual artists sued Stability AI, Midjourney and DeviantArt, arguing that Stability AI’s Stable Diffusion software scraped billions of copyrighted images from the internet without licensure and allowed Midjourney and DeviantArt AI tools to generate images in the artists’ style.
In the United States, non-human-generated content is not eligible for copyright, which presents challenges for studios wishing to utilize AI.
“It’s clear from the U.S. copyright laws that AI-generated content is not capable of protection or exclusivity, and the studios will not have that,” Csathy said. “They need to own their intellectual property.”
Accusations of copyright infringement have long relied on the general principle of substantial similarity. In other words, if one body of work is found to be substantially similar to an earlier body of work, the original artist would be entitled to compensation.
Earlier this year, the Supreme Court ruled that photographer Lynn Goldsmith’s pictures of late pop superstar Prince were entitled to copyright protection after artist Andy Warhol, who died in 1987, had used one of her photographs without a license as the starting point for his signature bold, colorful style. After Prince’s death in 2016, Vanity Fair licensed one of the Warhol images created from Goldsmith’s original photograph without compensating Goldsmith in any form.
The ruling has particular applicability to writers, Csathy said.
“In the case [of using AI], if there’s substantial similarity to an existing script and it takes a commercial opportunity away, they could claim copyright infringement and cite the Warhol case,” Csathy said.
AI regulation is notoriously minimal given how quickly the technology evolves. But some, like Csathy, say that detection and guardrail technology is advancing.
Intel Labs is behind the development of “My Art My Choice,” an initiative that aims to protect copyrighted works from being used in AI learning. The technology works by adding a protective layer over an image that makes the image unusable by an AI learning model. The team plans to apply the technology to other modalities in the future.
Earlier this month, machine learning company Hugging Face announced a collaboration with media verification company Truepic to embed a digital “watermark” into images to identify authorship and edits and to label AI-generated content.
The advancements are reminiscent of digital fingerprinting tool Content ID, which quelled fears that YouTube would flout copyright law in its early days. The tool, introduced in 2007, has since been scaled to detect copyright infringement across the platform. Content ID flagged more than 826 million possible copyright violations in the second half of 2022, nearly all automatically, according to a July YouTube Transparency Report. The claims generated $9 billion in payouts to rights holders.
“The technology is increasing on the detection side,” Csathy said. “There’s a whole burgeoning industry of forensic AI that’s going to be policing this.”
Despite strides in content verification and AI detection technology, many are still not convinced it will be enough to contain the risks of AI.
“The courts will say there are hundreds of thousands or millions of works in the training set,” Csathy said, and will ask, “So how can you possibly say that there was an infringement and not a fair use of your works? It’s going to be constant push and pull. There’s no way to regulate this technology perfectly.”
Disclosure: Comcast owns NBCUniversal, the parent company of CNBC. NBCUniversal is a member of the Alliance of Motion Picture and Television Producers.