As generative AI technology continues to evolve, legal disputes over the use of copyrighted material to train AI models are intensifying. In the early wave of lawsuits, courts have taken differing positions on whether fair use can be used as a defense to copyright infringement. Meanwhile, other potential legal arguments—such as the right of publicity and claims of unfair competition under state and federal law—remain largely untested.
Fair use, codified at 17 U.S.C. § 107, is a doctrine that permits limited use of copyrighted content without permission from the rights holder. Courts evaluating fair use weigh four factors:
- The purpose and character of the use (whether it is commercial or non-commercial);
- The nature of the copyrighted work (factual vs. creative);
- The amount and substantiality of the portion used; and
- The effect of the use on the market for the original work.
In May 2025, the U.S. Copyright Office released a pre-publication version of the third part of its report on generative AI, presenting arguments on both sides of the debate: whether using copyrighted material to train AI models qualifies as fair use or constitutes infringement.
So far in 2025, four major cases have addressed the question directly. Each case has taken a different approach, underscoring how unsettled and fragmented the legal landscape remains.
Lehrman v. Lovo, Inc., No. 1:24-cv-03770 (S.D.N.Y. July 10, 2025)
This class action, filed in May 2024 by two voice actors in the Southern District of New York, was among the first to test legal limits on AI-generated content. The plaintiffs allege that Lovo, Inc. used AI-generated replicas of their voices without authorization.
Their amended complaint spans 313 paragraphs and asserts 17 legal claims, including violations of New York’s right of publicity statutes, copyright infringement, and multiple Lanham Act claims for unfair competition and false advertising.
Lovo moved to dismiss, arguing that AI-generated voices did not infringe copyrights or violate applicable state laws. The court held that the copyright infringement and state-law claims were sufficiently pleaded, but dismissed the Lanham Act claims.
The court also granted the actors leave to amend their claim that training the AI on their voices infringed their copyrights. Most importantly, the class action claims in Lovo remain alive.
Bartz v. Anthropic PBC, No. 24-cv-05417 (N.D. Cal. June 23, 2025)
In this case, authors sued Anthropic for allegedly infringing their copyrights by using millions of digital and scanned books—some obtained from pirated libraries—to train its AI model, Claude.
Anthropic moved for early summary judgment, claiming fair use. The judge evaluated three separate activities:
- Training on purchased books: The court found that training an AI model on lawfully obtained, copyrighted books constituted fair use, because the use was “exceedingly transformative.”
- Digitizing purchased books: The court ruled this was also fair use, noting that “[e]very purchased print copy was copied in order to save storage space and to enable searchability as a digital copy.”
- Pirated library copies: Although training on pirated books was likewise viewed as transformative, the judge denied summary judgment, explaining that “[t]he creation and maintenance of a permanent, general-purpose digital library of pirated works was not protected by fair use.”
Kadrey v. Meta Platforms, Inc., No. 23-cv-03417 (N.D. Cal. June 25, 2025)
Just two days later, another judge in the Northern District of California issued a significant ruling in favor of Meta in a copyright suit brought by 13 authors. The plaintiffs alleged that Meta used their books to train its LLaMA AI model without permission.
The court held that:
“The use of the plaintiffs’ works in training the AI model qualified as fair use under copyright law—particularly because the training was found to be highly transformative, serving purposes like summarization and content generation that differ fundamentally from the original works.”
The court also noted the lack of evidence of market harm, which weakened the plaintiffs’ case. However, the judge clarified:
“The court’s decision was narrowly decided—focused on the plaintiffs’ inadequate arguments—and does not establish a blanket legality for AI training practices.”
“More compelling evidence in future cases could lead to different outcomes… evidence of infringement or economic damage could succeed.”
Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., 765 F. Supp. 3d 382 (D. Del. 2025)
(currently on appeal to the Third Circuit)
In contrast to the earlier cases, the District of Delaware ruled against fair use in Thomson Reuters’ lawsuit against Ross Intelligence. The case centers on Ross’s use of Westlaw headnotes to train its AI-powered legal research engine.
The court granted partial summary judgment for Thomson Reuters, rejecting Ross’s fair use defense. It found that Ross’s use was “commercial” and not “transformative,” and therefore did not qualify as fair use.
This decision is currently under appeal and could set a significant precedent depending on how the Third Circuit rules.
So What’s Next?
The law around AI model training and fair use is still unsettled. Courts are clearly divided: some find training activities transformative and protected, while others view them as infringing, particularly when commercial motives or pirated materials are involved.
“While some courts appear poised to accept AI model training as transformative, other courts do not.”
As AI technology becomes more advanced, courts—and potentially Congress—will need to define clear legal boundaries. In the meantime, creators and businesses alike should stay vigilant and consider proactive risk strategies. Our Maryland IP attorneys are here to help guide you through these new boundaries as they develop.