America’s AI Containment Strategy Has an Enforcement Problem
Anthropic spent years drawing a line on Chinese AI development. Tencent may have crossed it anyway.
Anthropic, the company behind Claude, the AI assistant popular in US enterprises, explicitly prohibits Chinese companies from using Claude to develop competing models. According to Anthropic's updated sales restrictions, the policy is meant to prevent adversarial distillation, in which a rival reverse-engineers a frontier model's capabilities by studying its outputs. Yet Tencent engineers appear to have used Claude to evaluate and improve Hy3, a competitive AI model released on April 23, according to The Information. That release came 16 days after OpenAI, Anthropic, and Google announced a joint monitoring effort through the Frontier Model Forum specifically to detect this kind of cross-border model exploitation, ResultSense reported.
The irony runs through the lab's leadership. Yao Shunyu joined Tencent as chief AI scientist last year after leaving Google DeepMind. Before that, he worked at Anthropic, which he left because he disagreed with the company's China restrictions, he told reporters at the time. "I don't think there is a way for me to stay," he said, SCMP reported. Yao now runs the lab whose model appears to have been built with the technology he once helped restrict.
Hy3's performance makes this more than a policy story. On SWE-bench Verified, a benchmark testing how well AI models handle real software engineering tasks, Hy3 scored 74.4%, up from 53.0% for its predecessor Hy2, according to Tencent's Hugging Face page. Tencent has also said it plans to more than double AI investment to over $5 billion in 2026, TheLEF Korea reported, signaling serious intent at the frontier.
The timing of Hy3's release relative to the Frontier Model Forum announcement is the part that matters most. OpenAI, Anthropic, and Google launched the pact on April 7, publicly committing to share intelligence on Chinese attempts to extract capabilities from US models. Its first real test appears to have come 16 days later, with a model that benchmarks competitively and that The Information reports was built with Claude. The pact's monitoring mechanism either failed to detect what Tencent was doing, found nothing actionable, or was working from different information than what The Information is reporting.
Anthropic declined to comment on whether Tencent's Claude usage violated its terms of service. Tencent did not respond to a request for comment. The Information's report is based on unnamed sources described as familiar with the matter, and no other outlet has independently confirmed the specific mechanism of access.
The factual question that matters most for the broader policy debate is how Tencent accessed Claude: through a direct API call, through a third party that intermediates requests, or through some other channel. Each answer has different implications for whether the restriction functions as a technical control or merely a legal signal. A direct API block would mean Anthropic had to flag and reject Tencent's traffic. A third-party intermediary would mean Tencent paid someone else to make requests on its behalf, a workaround that is harder to detect and easier to deny.
What the pact actually monitors, and how effectively, remains undisclosed. The Frontier Model Forum has described its detection work as ongoing and shared confidentially among members. Whether it caught Tencent's use of Claude, or found something it chose not to make public, is a question the pact's members have not answered. The real test will be whether the next Chinese model that benchmarks suspiciously well is met with the same silence.