Artists’ Copyright Claim against Stability AI Moves Forward Following a First-of-Its-Kind Ruling

By: Eric Ball, Vejay Lalla, Tyler G. Newby, Kimberly Culp, Garner Kropp, Charles Moulins

Over the past year, groups of plaintiffs filed multiple copyright infringement claims against companies behind generative artificial intelligence software. These lawsuits allege that training AI models involves mass-scale copyright infringement—a potential threat to the commercial viability of these models. On October 30, in Andersen v. Stability AI Ltd. et al. (Andersen), Judge William Orrick became the first federal judge to rule on a challenge to such claims at the pleading stage. This first ruling, while not precedential, provides insight into the potential legal exposure of companies providing AI-enabled products and, in particular, AI models.

Judge Orrick allowed the direct copyright infringement claim based on the training of an AI model with copyrighted material to move forward for consideration on a more developed factual record. He dismissed—and expressed skepticism about—claims alleging that, because an AI model’s training inputs may be copyright-infringing, all of its outputs are automatically infringing derivative works. However, the court will allow the plaintiffs to amend all of the dismissed claims to attempt to cure the deficiencies identified in the ruling.

Andersen Background

The Andersen plaintiffs are three visual artists who brought putative class action claims against three sets of defendants: (i) Stability AI Ltd. and Stability AI, Inc. (Stability); (ii) DeviantArt, Inc. (DeviantArt); and (iii) Midjourney, Inc. (Midjourney). Stability created and offers consumers an AI-enabled image generator called Stable Diffusion, which consumers can prompt to generate images “in the style of” real artists. DeviantArt and Midjourney did not create Stable Diffusion but incorporated the AI model into their own commercial products that produce images in response to text prompts.

The plaintiffs alleged that Stability “trained” Stable Diffusion by inputting into the model billions of images obtained through internet “scraping,” including the plaintiffs’ own copyright-protected works. The plaintiffs further alleged that Stable Diffusion “contains” unauthorized reproductions of their works, that it is itself a copyright-infringing derivative work, and that all outputs from the image generator are also infringing derivative works. The plaintiffs conceded, however, that none of the image outputs “is likely to be a close match for any specific image in training data.” Along with their other causes of action, the plaintiffs asserted claims for direct and vicarious copyright infringement under the Copyright Act and for violation of their right of publicity under California law based on the defendants’ allowing use of the plaintiffs’ names to generate images “in the style of” their work. Stability, DeviantArt and Midjourney moved to dismiss the plaintiffs’ complaint for failure to state any viable claim.

Copyright Infringement Claims

The court refused to dismiss the direct infringement claims against Stability, stating that it is premature to decide at the pleading stage whether “copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run.” Judge Orrick found the allegations that Stability “downloaded or otherwise acquired copies of billions of copyrighted images without permission to create Stable Diffusion” and used those images to train Stable Diffusion were sufficient to state a direct infringement claim. The critical question of whether using copyright-protected material to train an AI model constitutes infringement will be resolved later, on a more developed factual record. Resolution of this question in this and other similar cases will involve consideration of the doctrine of “fair use,” which shields certain unauthorized uses of copyrighted works—e.g., uses that transform an original work into something sufficiently different—from infringement claims. See Section 107 of the Copyright Act (17 U.S.C. § 107).

On the other hand, the court dismissed the direct copyright infringement claims against Midjourney and DeviantArt, while granting the plaintiffs leave to replead those claims. The plaintiffs alleged that Stable Diffusion contains “compressed copies” of all the images inputted into the model during training. For that reason, they argued, Midjourney and DeviantArt’s distribution of Stable Diffusion constitutes unauthorized distribution of copyrighted works, and Stable Diffusion is itself an unauthorized derivative work of all the training images. The court rejected those theories because the plaintiffs’ complaint did not clarify whether Stable Diffusion contains actual copies of training images (e.g., compressed JPEG files) or instead uses algorithms able to reconstruct those images. Thus, the plaintiffs cannot plead such a claim unless they allege with technical specificity how an AI model reproduces their works.

The plaintiffs had also alleged that all outputs of an AI model trained on infringing images are automatically infringing, even though their complaint admitted that Stable Diffusion’s outputs are usually not a “close match” to any of the training images. Seizing on this admission, the court pointed out that infringement claims traditionally require a showing of “substantial similarity” between the protected work and the allegedly infringing work. Judge Orrick held that the “substantial similarity” requirement applies with equal force in the AI context to the expressive elements of a work, and he rejected the plaintiffs’ derivative works claim for failure to allege such similarity.

Finally, Judge Orrick rejected all claims of vicarious copyright infringement, holding that the plaintiffs failed to allege an underlying act of direct infringement that Midjourney and DeviantArt controlled and profited from—a necessary precondition for vicarious liability. And while the plaintiffs sufficiently alleged direct infringement by Stability (for training its AI model), they failed to allege a third party’s act of direct infringement for which Stability could be vicariously liable.

Right of Publicity Claims

The court also dismissed, without prejudice, the claim that the defendants violated the plaintiffs’ right of publicity by “misusing” their names to promote AI products that generated images “in the style of” the plaintiffs’ works. While the plaintiffs alleged that the defendants knowingly used the plaintiffs’ names in their products by allowing users to request art “in the style of” those names, the plaintiffs had not alleged any specific instance of the defendants using their names in advertisements, nor did they allege how the use of their names to prompt image generation could result in images that profited off the plaintiffs’ identities. Thus, the plaintiffs failed to show how their own right of publicity—rather than that of other unidentified artists—was violated.

Relatedly, the court deferred ruling on the issue—raised by DeviantArt—of whether the right to free speech precludes the plaintiffs’ right of publicity claims. DeviantArt argued that its generative AI services are a form of protected expressive conduct, because the model outputs constitute a “transformative use” of the plaintiffs’ works. The court ruled that this defense is better considered after the plaintiffs clarify their right of publicity claims and when the parties have submitted evidence.

Takeaways

  • The Andersen court did not definitively decide whether training an AI model with copyright-protected material can give rise to liability, but allowed that claim to proceed into fact discovery. And the court has given the plaintiffs an opportunity to correct their dismissed claims, including the vicarious liability claims against DeviantArt and Midjourney for incorporating Stability’s model in their own products. This means that the issue of whether companies deploying generative AI models developed by a third party can be vicariously liable for copyright infringement remains open.
  • The ruling serves as a reminder that traditional principles of copyright law apply to AI-related claims, irrespective of the novelty of the underlying technology. For example, courts are unlikely to find that the outputs from AI models are automatically infringing derivative works unless the expressive elements of the outputs are substantially similar to specific expressive elements of the training inputs.
  • While the ruling cast doubt on whether AI models that allow consumers to prompt outputs “in the style of” particular artists infringe those artists’ right of publicity, plaintiffs will get another chance to plead that claim. The “transformative use” defense may prove valuable in protecting companies whose models significantly transform the artistic works used during the training process.
  • In light of the ruling and the potential road map it provides plaintiffs (in Andersen and other similar cases) on how to properly plead infringement claims, companies building AI models still need to weigh the legal risk of training their models with copyright-protected works. Likewise, companies wishing to incorporate such models into their own products and services should carefully assess risks specific to each model—for example, by considering whether a model generates sufficiently transformative outputs.

Also published in Law360 (subscription needed).