Universal Music Group, Concord Music Group, and ABKCO filed a motion March 23 asking a federal judge to rule, before trial and on the papers alone, that Anthropic infringed their copyrights when it trained its Claude models on song lyrics. Attached to the motion: 218 undisputed facts, backed by internal company documents and deposition testimony the public had never seen. At the center of those documents, according to the motion, is a sworn declaration from Jared Kaplan, Anthropic's own chief science officer, stating that the publishers' works were essentially interchangeable fuel for the model. He used the word "fungible."
The publishers' sharpest exhibit is a contradiction: Anthropic argued in court that its training data was generic and replaceable. Yet the same internal documents show that Claude reproduced specific copyrighted lyrics, including portions of "American Girl," "Dog Days Are Over," and "White Christmas," even after Anthropic implemented post-lawsuit guardrails explicitly designed to stop exactly that behavior. The guardrails did not work.
"Anthropic trained Claude using Publishers' lyrics precisely so the model could respond to queries for those lyrics, including by serving up unauthorized copies and derivatives of Publishers' lyrics on demand, and Claude has repeatedly been put to that very use," the filing states.
The motion asks Judge Eumi K. Lee of the Northern District of California to rule on a narrow question: whether the act of ingesting lyrics to build the model qualifies as infringement and cannot be shielded by fair use, Reuters reported. A win for publishers would not end the case, but it would eliminate Anthropic's primary defense heading into trial and could force a settlement before the broader question of whether AI-generated lyrics themselves constitute infringement goes to a jury.
Anthropic is fighting this while managing several other copyright suits simultaneously. The company agreed in September 2025 to pay $1.5 billion to settle a separate class-action lawsuit with authors over pirated book training data. The current music publisher lawsuit seeks statutory damages that could reach $3 billion or more. A second lawsuit filed in January 2026 covers an additional 20,517 copyrighted works. BMG filed its own suit March 17, citing 493 compositions and claiming that Anthropic co-founder Dario Amodei personally authorized access to pirated training data. All of this is bearing down on a company currently raising money at a $380 billion valuation, with a revenue run rate approaching $14 billion after more than tenfold annual growth for three consecutive years.
Judge Lee denied a preliminary injunction request from the publishers in March 2025, finding they had not demonstrated irreparable harm. The summary judgment standard is higher. But the 218 undisputed facts change the evidentiary posture in ways the preliminary injunction record did not. Anthropic has argued that its post-lawsuit guardrails substantially reduce ongoing harm and that the market for licensed lyric display already exists, meaning AI outputs do not substitute for original works in ways that damage publisher revenue. The company also points to Judge William Alsup's ruling in a related case, Bartz v. Anthropic, that AI training was fair use. That ruling involved different works and a different judge.
The RIAA (Recording Industry Association of America), NMPA (National Music Publishers' Association), and 21 other music organizations filed an amicus brief supporting the publishers, arguing that AI companies chose to scrape copyrighted content rather than license it. That brief cited research suggesting that large language models (AI systems trained on massive text datasets to predict and generate text) can memorize and reproduce copyrighted material even after fine-tuning and safety measures are added. It also cited data from Deezer showing that more than 60,000 AI-generated tracks were being submitted to the platform daily by January 2026, accounting for roughly 3 percent of all streams.
Judge Lee has not yet ruled on the March 23 motion. The filings from both sides converge on one point: whoever wins the training-data question controls the outcome for the rest of the case. For Anthropic, the Kaplan declaration is the problem that doesn't resolve quietly. A chief science officer testified under oath that his company treated publishers' works as interchangeable fuel for its model. The documents attached to the motion show the model reproduced those same works even after Anthropic claimed to have fixed the problem. Whether that is a legal technicality or a pattern is the question Judge Lee now has to answer.