OpenAI’s Copyright Shield Is Business As Usual For Enterprise IT

Sam Altman, the CEO of OpenAI, said this in his opening keynote at yesterday’s OpenAI DevDay conference in San Francisco: “We’re introducing Copyright Shield. Copyright Shield means that we will step in and defend our customers and pay the costs incurred, if you face legal claims [for] copyright infringement” from use of OpenAI’s ChatGPT Enterprise and API. That statement has attracted a lot of attention, but it’s really just business as usual in the competitive world of enterprise software.

Altman’s statement has drawn notice because the name sounds like a technological solution to the copyright concerns of a platform that has faced repeated complaints about copyright liability. To media and tech industry followers, “Copyright Shield” suggests something like Google’s Content ID or Meta’s Rights Manager. These are systems that screen your uploads to YouTube, Facebook, or Instagram to see if they match any items in their vast databases of copyrighted works, and if so, take some action such as sharing ad revenue with rights holders or (in rare cases) blocking the upload.

But Copyright Shield is something different—and it’s far simpler, at least from a technology perspective: it’s an indemnification clause in a contract—in this case, a software license agreement that a user of ChatGPT Enterprise or OpenAI’s ChatGPT API signs (or clicks to accept) before using the technology. It means that OpenAI will defend those users against legal claims for copyright infringement that arise from output generated by those tools, and will pay damages that result from such claims.

The only thing new about this is the catchy name attached to it. Otherwise it’s a standard feature of license agreements for enterprise technology, one that has existed for decades. For example, software license agreements often contain clauses that indemnify customers (licensees) against liability from data breaches that are attributable to the software being licensed. Many technology license agreements indemnify customers against lawsuits for patent infringement due to the technology.

Generative AI technologies such as ChatGPT generate output that someone could claim infringes their copyrights. For example, let’s say a bank uses ChatGPT to generate content for its credit card marketing campaigns, and someone sues it for copyright infringement. If the bank can show that ChatGPT generated the content at issue, it can invoke the indemnification, so that OpenAI handles the defense of the lawsuit and pays any legal fees and damages that result. (Indemnifications often come with dollar limits; OpenAI has not made public what the limits are, if any, with Copyright Shield.)

Why is OpenAI doing this? To attract developers in a fiercely competitive market for generative AI tools. In his keynote, Altman mentioned Copyright Shield among several “developer requests.” The most attractive markets for generative AI include large companies (such as banks), which will insist on indemnification clauses in licensing agreements for enterprise technology and view a lack of indemnification as a reason not to select a vendor.

This is especially true where the technology in question is known to draw lawsuits. OpenAI is currently a defendant in four separate copyright lawsuits brought by groups of authors over ChatGPT, including one facilitated by the Authors Guild and featuring such literary stars as Jonathan Franzen, John Grisham, George R.R. Martin, and Scott Turow. And it’s a co-defendant in another lawsuit brought by anonymous plaintiffs over OpenAI’s Codex programming code generation technology. These are among a growing number of similar lawsuits against generative AI companies, all of them putative class actions with potential damages running into the stratosphere.

Generative AI technology companies have varied in their willingness to indemnify their customers. Startups such as Stability AI and Midjourney leave it to their customers to defend themselves against copyright claims, while more mature companies like Amazon, IBM, Adobe, and Google offer some level of indemnification. Technology companies have to assess their capacity to assume risk—such as the risk of being saddled with tens of millions of dollars in damages and legal fees in copyright lawsuits—and it makes sense that an Amazon or an IBM has more capacity to assume risk than a startup, even one with a unicorn valuation.

OpenAI is offering indemnification for ChatGPT Enterprise, the paid service that the company announced two months ago, which is geared towards large customers. It’s also offering indemnification for users of the ChatGPT API, a programming interface for developers that has a pay-per-use pricing structure—presumably on the rationale that the experience that individual programmers or small companies gain with the API could lead to more profitable enterprise deployments later on. It’s not offering indemnification for users of its extremely popular free ChatGPT tool.

As long as there’s a white-hot competitive market for generative AI tools, and as long as the juries are out (literally and figuratively) on generative AI companies’ responsibilities to guard against copyright violations, AI technology vendors will need to offer legal comfort to their customers as a feature. Those are bound to be long time horizons. Technology tools for addressing copyright concerns will come too, but that’s a separate matter.
