Claude's Law: Inside the Ruling That Just Redrew the Boundaries of AI and Copyright

The federal court decision came not with a bang, but with a bookmark.

On June 23, 2025, U.S. District Judge William Alsup dropped a ruling that could echo across the AI industry for years to come: Training a large language model on lawfully purchased books is fair use—even if you cut the spines off first.

The case, Bartz v. Anthropic, was brought by a coalition of authors who claimed their works had been hoovered up and digested by Claude, Anthropic's high-profile AI assistant. What began as a copyright complaint against alleged AI-fueled content theft has now become a defining precedent for what it means to "read" in the age of machine intelligence.

Yet for all its clarity on some fronts, the ruling leaves behind a ragged edge of uncertainty—and a legal thundercloud still brewing over Anthropic's decision to build its AI on millions of pirated books downloaded from LibGen.

Claude Goes to School
The essential facts weren't in dispute. Anthropic had acquired tens of thousands of physical books, chopped the bindings, scanned them, and used them to teach Claude how to converse, summarize, and write with astonishing fluency.

To some, it sounded like Frankenstein's Kindle. To the court, it sounded like "transformative learning."

Judge Alsup ruled that the training of AI on these legally acquired texts—even without permission or licensing—qualified as fair use, likening it to a writer absorbing knowledge from books to develop their own voice. In a moment of judicial literary flourish, he noted that Claude wasn't memorizing passages to regurgitate them, but was instead building statistical intuition, much like a human reader.

The ruling carved out a meaningful legal safe harbor for AI developers: if you bought the book, you could teach your model with it. The analogy isn't to a pirate photocopy. It's to a writer keeping a stack of dog-eared paperbacks beside their desk.

But Then Came the Piracy
While Anthropic won the philosophical battle over fair use, it may yet lose the war over how it built Claude's brain.

Buried in court filings was a bombshell: Anthropic had downloaded more than 7 million pirated books from shadow libraries like LibGen and Z-Library to build what it internally called a "central reference library." This wasn't a couple of stray torrents—it was a systematic acquisition of the literary canon without paying a dime.

That part? Not fair use.

Judge Alsup's ruling drew a hard line: lawful acquisition is critical. Using pirated materials—no matter how "transformative" the application—is still copyright infringement.

"AI isn't exempt from the basic principle that theft is theft," Alsup wrote. The case will now move to trial in December, where Anthropic could face up to $150,000 per work in damages for willful infringement.

That financial exposure is massive, and the company's future could hinge on how the court calculates the harm actually caused by the pirated downloads.

The Claude Dilemma
The ruling splits Anthropic's training corpus down the middle. The court green-lit one part of its development pipeline, while placing a ticking legal timebomb under the other.

And yet, the implications may be more liberating than limiting. Many AI companies have long relied on gray-market data scraping in the name of progress. Now, they know the terms: if you acquire your training data lawfully, and if your outputs are transformative and non-substitutive, the law is likely on your side.

It's a nuanced but powerful precedent. The court isn't saying AI can do anything it wants with content. It's saying that AI learns differently, and that difference matters under copyright law.

Meanwhile, in the Valley…
Across Silicon Valley, AI developers are treating the ruling like a legal North Star. Meta, OpenAI, and Google—each fending off their own copyright lawsuits—are now furiously auditing their training datasets. Venture capitalists, many spooked by the specter of multibillion-dollar liability, see the decision as a partial reprieve.

"This ruling doesn't end the legal fog," said one tech policy strategist, "but it draws a clear outline around what's permissible. You can't claim ignorance anymore."

In a July 30 investor update, Anthropic CEO Dario Amodei struck a defiant tone.

"Claude was built to be the most honest and helpful AI in the world," he said. "That starts with honesty in how we train it."

It's a strategic message—but one undercut by the messy reality of past shortcuts. The December trial may ultimately decide not only Anthropic's financial fate, but also whether the AI industry can afford to clean up its training practices retroactively.

One Judgment, Two Futures
For now, the takeaway is sharp: the legal system sees a fundamental difference between learning and copying. But it also sees a difference between buying a book and pirating one. Anthropic, and perhaps much of the AI world, did both.

As of July 31, Claude remains online—wiser, perhaps, for the ruling that legitimized its training. But its creators are headed to court.

The next chapter of Claude's Law will be written not in code, but in testimony.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
