Three of the most aggressive competitors in AI have quietly built something that looks like a defense alliance. On April 6, OpenAI, Anthropic, and Google disclosed, according to Bloomberg, that they are sharing threat intelligence through the Frontier Model Forum, the industry nonprofit they co-founded with Microsoft in 2023, to detect and disrupt adversarial distillation: the practice of using a rival model's outputs to train a competitor's system. The named targets, according to the Bankless Times and the Straits Times, include DeepSeek, Moonshot, and MiniMax.
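For readers unfamiliar with the mechanics, distillation can be reduced to a toy sketch: harvest (prompt, output) pairs from a "teacher" model, then train a "student" to imitate them. Everything below is a hypothetical stand-in; real distillation queries a frontier model's API and fine-tunes a neural network on the harvested pairs.

```python
# Toy sketch of model distillation: a "student" is trained to
# imitate a "teacher" model's outputs. All names here are
# illustrative stand-ins, not any lab's actual pipeline.

def teacher(prompt: str) -> str:
    # Stand-in for a frontier model's API response.
    return prompt.upper()

def build_distillation_set(prompts):
    # Harvest (prompt, output) pairs programmatically -- the
    # "programmatic distillation" the OpenAI memo describes.
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs):
    # Stand-in for fine-tuning: this student simply memorizes
    # the mapping from prompts to teacher outputs.
    lookup = dict(pairs)
    return lambda prompt: lookup.get(prompt, "")

pairs = build_distillation_set(["hello", "world"])
student = train_student(pairs)
print(student("hello"))  # prints "HELLO": teacher behavior, reproduced
```

The point of the sketch is the asymmetry: the expensive step (the teacher) is paid for by one party, while the cheap step (imitation) captures much of the value.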
The backdrop is a running dispute that dates to January 2025, when DeepSeek released its R1 reasoning model and showed that a capable frontier model could be produced with far fewer advanced chips than the U.S. assumed was necessary. That claim unsettled investors and policymakers. But the more specific complaint from U.S. labs involves what they say is systematic extraction: not just building comparable models, but using U.S. outputs to do it.
In a memo sent to the U.S. House Select Committee on China on February 12, 2026, OpenAI alleged that DeepSeek employees used obfuscated third-party routers to bypass access restrictions and developed code for programmatic distillation of U.S. model outputs. The memo, reported by Reuters, described what OpenAI characterized as a sustained, evolving effort to extract capabilities rather than develop them independently. Anthropic, the AI safety company behind Claude, has made a similar allegation, accusing Chinese-linked companies of stripping safety guardrails from distilled outputs. The company had already banned Chinese-controlled firms from using Claude outright last year.
The commercial framing from U.S. officials involves a number that appears in nearly every account of this dispute: billions of dollars in annual profit lost to unauthorized distillation. That figure is worth treating carefully. It is an estimate from U.S. officials, not a figure independently verified by outside auditors or academic researchers. OpenAI has commercial incentives to characterize distillation as an existential threat to its business model; the number should be read in that context.
What makes the April 6 announcement structurally interesting is the antitrust problem it exposes. Antitrust guidance restricts what competitors can legally disclose to one another, and the Straits Times reported that this legal uncertainty is actively limiting how much the three labs can share through the Forum. They can discuss general threat patterns. They likely cannot share the specific model weights, training data characteristics, or output signatures that would most directly detect distillation. The arrangement amounts to an information sharing and analysis center, or ISAC, that may be legally hobbled from birth.
This tension was prefigured by the Trump administration's AI Action Plan, released in 2025, which called for the creation of an information sharing and analysis center for exactly this purpose. The administration identified adversarial distillation as a threat before the labs publicly acknowledged the scope of their coordination. That sequencing suggests the ISAC idea originated in policy circles, not in the labs themselves, and that the labs are responding to a mandate more than pioneering a solution.
Counterpoint Research's vice president, quoted by Rest of World, offered a blunt reframe: the entire industry has mostly evolved based on recursive learning, with newer entrants going through the same routes of distillation and optimization. Distillation is not a novel tactic invented by Chinese labs. It is how the field has always worked. What has changed is the competitive framing: U.S. labs now hold the models that are worth distilling, and they want legal and technical protection for that asymmetry.
The practical levers available to the three companies, as described by the Bankless Times, are conventional: account cancellations, IP range bans, altered rate limits, and modified output formats designed to make programmatic extraction harder. These are real constraints, but they are also easily evaded by a motivated actor with sufficient infrastructure. A RAND Corporation analyst, cited by Rest of World, suggested OpenAI's public escalation may be as much about pressuring policymakers to restrict chip exports to competitors as it is about the distillation problem itself.
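Two of those levers, IP range bans and rate limits, are simple enough to sketch, and the sketch also shows why they are easily evaded: a motivated actor just rotates IPs and accounts. All thresholds and address ranges below are illustrative, not any lab's actual configuration.

```python
# Sketch of two defensive levers named above: IP range bans and
# rate limits that flag programmatic extraction. Values are
# illustrative assumptions, not real infrastructure.
import ipaddress
from collections import Counter

BANNED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # example block
RATE_LIMIT = 100  # illustrative max requests per window

request_counts = Counter()

def allow_request(ip: str, account: str) -> bool:
    addr = ipaddress.ip_address(ip)
    # Lever 1: IP range ban.
    if any(addr in net for net in BANNED_RANGES):
        return False
    # Lever 2: rate limit; tripping it flags the account for review.
    request_counts[account] += 1
    return request_counts[account] <= RATE_LIMIT

print(allow_request("203.0.113.5", "acct-1"))   # False: banned range
print(allow_request("198.51.100.7", "acct-2"))  # True: under the limit
```

The evasion path is visible in the code itself: both checks key on attacker-controlled identifiers (source IP, account), which is why the article calls these constraints real but easily evaded.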
The deeper question is whether an alliance born from antitrust constraints and dependent on voluntary compliance from the target companies can achieve its stated goal. The labs are asking Washington for permission to cooperate against a common threat, while the same legal regime that enables their market dominance limits their ability to do so. That contradiction is the actual story.
What to watch next: whether the Frontier Model Forum publishes any operational summary of what it has actually shared, versus what it says it has shared. A functioning ISAC produces metrics. A press-release alliance produces announcements. The gap between those two will tell you which this is.