Judge Rules AGAINST Authors in AI Training Suit


Two federal judges just dealt a devastating blow to authors by ruling that AI companies can legally use copyrighted books to train their models without permission, setting a dangerous precedent that could permanently alter the publishing landscape.

Key Takeaways

  • A federal judge ruled Anthropic’s use of books to train AI was “fair use” and “transformative,” though the company still faces trial for pirating 7 million books
  • Meta won dismissal of a similar lawsuit when authors failed to provide sufficient evidence of market harm
  • Judge Vince Chhabria warned that AI training on copyrighted works could “dramatically undermine the incentive for human beings to create things”
  • The rulings establish early legal precedent favoring AI companies over content creators in copyright disputes
  • A December trial will determine potential damages against Anthropic, with statutory penalties up to $150,000 per work for willful infringement

Major Victory for AI Companies Against Authors

Two significant court rulings in San Francisco have delivered major wins for AI companies in their ongoing battle with authors over copyright protections. U.S. District Judge William Alsup ruled that Anthropic’s use of legally purchased books to train its AI chatbot Claude was protected under “fair use” doctrine, describing the practice as “quintessentially transformative.” In a separate case, Judge Vince Chhabria dismissed a lawsuit against Meta, finding that authors failed to provide sufficient evidence that AI training on their works caused market dilution or economic harm.

“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different,” wrote Judge William Alsup in his ruling.

Anthropic’s Legal Complications Continue

Despite the favorable ruling on fair use, Anthropic’s legal troubles aren’t over. Judge Alsup scheduled a December trial to determine whether the company should face damages for downloading approximately 7 million pirated books for what it called a “central library.” Internal documents revealed that Anthropic employees had concerns about using pirated materials, and the company ultimately decided against using them to train its large language models. However, the judge made clear that purchasing books after initially pirating them does not eliminate liability.

“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages,” Judge Alsup wrote.

The case was originally filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who claimed copyright infringement by Anthropic. Under U.S. copyright law, statutory damages could reach up to $150,000 per work for willful infringement, potentially creating massive liability for AI companies found to have improperly used copyrighted materials. Anthropic, founded by former OpenAI executives, introduced Claude in 2023 and has secured significant backing from major tech investors.

Warning Signs for Content Creators

Despite ruling in favor of Meta, Judge Chhabria issued a stark warning about the potential impact of AI on creative industries. His comments suggest that while these particular cases were decided in favor of AI companies, the broader question of whether AI training fundamentally undermines creative markets remains unresolved. The judge’s statements indicate that properly constructed legal arguments might succeed in future cases where economic harm can be clearly demonstrated.

“So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way,” Judge Chhabria wrote.

These rulings come as other AI companies including OpenAI and Microsoft face similar lawsuits over content used for AI training. Some media companies have begun seeking compensation by licensing content to AI developers rather than pursuing litigation. The cases represent early attempts to establish legal boundaries for an industry that has rapidly developed by consuming vast amounts of copyrighted material without clear permission, setting up what will likely be years of continued legal conflict between content creators and AI developers.

AI Industry Celebrates While Authors Regroup

Anthropic expressed satisfaction with the ruling, framing it as consistent with copyright law’s fundamental purpose of enabling creativity and scientific progress. The company has positioned itself as responsible in its training approach while acknowledging the complex legal terrain. For authors and publishers, these rulings represent a significant setback in their efforts to maintain control over how their works are used in AI development, though the judges’ specific comments about market harm provide a roadmap for potentially more successful future litigation.

“We are pleased that the Court recognized that using works to train LLMs (large language models) was transformative, spectacularly so,” an Anthropic spokesperson said following the ruling.

The decisions underscore the inherent tension between protecting authors’ economic interests and enabling AI development. While these rulings appear to favor technological innovation over traditional creative rights, both judges carefully limited the scope of their decisions to the specific arguments presented, leaving the door open for more nuanced cases in the future. For content creators concerned about AI’s impact on their livelihoods, these cases highlight the urgent need for clearer legislation rather than relying solely on judicial interpretation of copyright laws written before AI’s emergence.