Authors have until March 30 to file claims in what is already the largest copyright class action settlement in U.S. history — and the most consequential one yet for the AI industry. Anthropic has agreed to pay $1.5 billion to settle claims that it trained its Claude models on pirated copies of at least 500,000 books from the LibGen and PiLiMi online libraries, resolving Bartz v. Anthropic (N.D. Cal., No. 3:24-cv-05417) ahead of a final approval hearing.
The per-work payout is roughly $3,000, according to Author Media. Payments are spread across four installments beginning October 2025 and ending in 2027, with the first distributions estimated to reach authors around August 10, 2026. Anthropic must also destroy the pirated training datasets and certify they were not used in any commercial Claude model within 30 days of final judgment.
Named plaintiffs Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson — all authors and journalists — receive $50,000 each on top of their per-work share. The settlement administrator gets $18 million.
The numbers are large. But the harder legal question is still very much alive: in a separate case filed the day before this settlement was announced, music publishers are making a more aggressive bet on the same underlying fight.
Concord Music Group, Inc. v. Anthropic PBC (N.D. Cal., No. 5:24-cv-03811), pending before Judge Eumi Lee, was filed March 23, the same week the book settlement was entering its final approval phase. The publishers' argument: reproducing song lyrics on demand is categorically different from training on books. When Claude generates a song's lyrics, users get something functionally equivalent to the original work, not a distant analytical echo of it. The publishers are asking the court to reject Anthropic's fair use defense before the case even goes to trial, a direct challenge to the logic Judge Alsup used when he called AI training on books "exceedingly transformative" in the Bartz case.
That distinction matters. Training a model on millions of books to learn language patterns is one thing. Generating a specific, copyrightable expression on request is another. The music publishers' case is built on exactly that line — and if it survives a motion to dismiss, it goes to trial with a more sympathetic set of facts for copyright holders than the book case ever did.
The book settlement doesn't resolve the fundamental fair use question. Authors who take the payout are giving up their claims against Anthropic for this particular training run — but they're not settling anyone else's case. Visual artists, journalists, news publishers, and other rights holders with live suits against AI companies are watching closely. Whatever happens in Concord v. Anthropic will shape all of those cases.
The fee fight in Bartz is worth noting on its own. Lead counsel Susman Godfrey and Lieff Cabraser initially asked for $300 million, or 20% of the settlement. The judge overseeing the case called that oversized, according to Reuters. After pushback, the firms revised their request to $187.5 million, or 12.5%. Even at the reduced rate, that's a large number — but the court's willingness to push back on the initial ask signals the kind of scrutiny these settlements are now drawing.
Authors who haven't filed have less than a week left. The official settlement website is AnthropicCopyrightSettlement.com, and the deadline is March 30, according to Writer Beware. Miss it and you don't just lose your share of this settlement: the FAQ makes clear you also forfeit your right to sue Anthropic separately over the same training data. For thousands of authors who may not have known their books were in these datasets, that deadline has real teeth.