
BMG Rights Management filed suit against Anthropic in California federal court Tuesday, alleging the AI company infringed hundreds of copyrights by using lyrics from Rolling Stones, Bruno Mars, Ariana Grande, and other artists to train its Claude chatbot. The complaint cites 493 specific examples of infringed works — a number that, under U.S. copyright law's statutory damages framework for willful infringement, could expose Anthropic to significant financial liability.
The lawsuit escalates a legal battle that has been building since October 2023, when Universal Music Group and other publishers first sued Anthropic over song lyrics. That case remains ongoing. Anthropic settled a separate class action brought by authors last September for $1.5 billion.
BMG, owned by German media conglomerate Bertelsmann, is not the first music publisher to take this route. But the scale of the citation — 493 examples — signals a more aggressive posture than earlier filings. Statutory damages for willful copyright infringement can reach $150,000 per work, which means the headline number in this case could reach into the tens of millions if BMG prevails on the willfulness theory.
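The arithmetic behind that headline exposure is straightforward. As a back-of-the-envelope sketch only (not a damages projection; actual awards are set by the court within the statutory range of 17 U.S.C. § 504(c)):

```python
# Rough ceiling and floor on statutory damages for the 493 works
# cited in BMG's complaint. Per-work figures come from the statutory
# range in 17 U.S.C. § 504(c): $750 minimum for ordinary infringement,
# up to $150,000 for willful infringement.
works_cited = 493
willful_ceiling_per_work = 150_000
ordinary_floor_per_work = 750

max_exposure = works_cited * willful_ceiling_per_work
min_exposure = works_cited * ordinary_floor_per_work

print(f"maximum (all works willful): ${max_exposure:,}")  # $73,950,000
print(f"minimum (statutory floor):   ${min_exposure:,}")  # $369,750
```

The willful-infringement ceiling of roughly $74 million is what makes the "tens of millions" framing possible; if BMG fails to prove willfulness, the range narrows considerably.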
The legal theory is worth examining carefully. AI companies have argued that training on copyrighted material constitutes fair use because the resulting models produce transformative outputs. Anthropic prevailed on a similar fair use argument in a separate proceeding before Judge William Alsup, who found that using copyrighted books as training data was sufficiently transformative. But that ruling concerned ingesting books to train a model, not the literal reproduction of song lyrics in the model's outputs — a meaningfully different factual scenario.
When a user asks Claude to explain the meaning of a Rolling Stones song, the model may reproduce lyrics verbatim. That output — a direct copy of protected expression — is harder to characterize as transformative than a summary of a news article. BMG's complaint appears to target exactly this use case, arguing that Anthropic didn't just learn from the lyrics but reproduced them in ways that substitute for the original works.
The timing of the filing is notable. Anthropic has been navigating a difficult stretch: the $1.5 billion author settlement resolved one major front, but the music industry cases present distinct factual and legal questions. The publisher suit from 2023 has progressed through discovery, which may have given BMG's legal team a clearer picture of how extensively lyrics appear in Claude's training data and outputs.
Mastercard, separately, announced a large tabular model for fraud detection this week — one of many examples of financial institutions building foundation models on proprietary data precisely because that data is theirs to use. The contrast is sharp: where Mastercard's model trains on anonymized transaction records it owns, Anthropic's models trained on text scraped from the internet, including, allegedly, lyrics posted online without authorization.
Whether Anthropic can distinguish its position from the music publishers' claims will depend on specifics that haven't fully surfaced yet: what the training data actually included, how Claude reproduces lyrics, and whether courts continue to distinguish between training-time use and output-time reproduction. The 2023 UMG case will likely answer many of these questions first. BMG's suit adds a second front and higher stakes.
Neither Anthropic nor BMG responded to requests for comment.
Blake Brittain reported the original story for Reuters.

