Anthropic's Safety Premium Just Met Budget AI
image from Gemini Imagen 4
Anthropic built its brand on safety. The cost gap is making that harder to sell.
MiniMax M2.7, a model released March 18, 2026 by a Chinese AI company, costs $0.27 to complete a coding task that costs Claude Opus 4.6 $3.67. The quality difference, according to an independent benchmark Kilo Code published this month: negligible. Both models found all ten security vulnerabilities in the test. Both identified all six root causes. The gap is in the price, and it is not close.
The Kilo Code comparison is the sharpest technical data available. MiniMax M2.7 scored 56.22 percent on SWE-Pro, close to Claude Opus 4.6. On cost: MiniMax charges $0.30 per million input tokens and $1.20 per million output tokens. Claude Opus 4.6 charges $5 and $25 — roughly 17 times more for inputs, 21 times more for outputs. Kilo Code's own assessment: MiniMax delivered 90 percent of the quality for 7 percent of the cost.
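The ratios are easy to verify from the listed rates. A quick back-of-envelope sketch (figures as reported in the benchmark and pricing pages cited above, not from a live price sheet):

```python
# Published per-million-token rates cited in the article (assumed accurate
# as of the Kilo Code comparison; check current pricing before relying on them).
minimax = {"input": 0.30, "output": 1.20}   # $/M tokens, MiniMax M2.7
opus = {"input": 5.00, "output": 25.00}     # $/M tokens, Claude Opus 4.6

input_ratio = opus["input"] / minimax["input"]     # how much more Opus charges for inputs
output_ratio = opus["output"] / minimax["output"]  # ...and for outputs
task_cost_share = 0.27 / 3.67                      # MiniMax task cost as a share of the Opus run

print(f"input ratio:  {input_ratio:.1f}x")   # ~16.7x, rounds to the article's "roughly 17 times"
print(f"output ratio: {output_ratio:.1f}x")  # ~20.8x, rounds to "21 times"
print(f"task cost:    {task_cost_share:.1%}")  # ~7.4%, matching "7 percent of the cost"
```

The per-token ratios and the per-task ratio line up, which is what makes the "90 percent of the quality for 7 percent of the cost" framing hard to dismiss as a benchmark artifact.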
Anthropic's own numbers tell a related story. Chief Financial Officer Krishna Rao filed a declaration March 9, 2026 in Anthropic v. Department of Defense stating the company has generated $5 billion in GAAP revenue from 2023 through December 2025 while spending $10 billion on inference and training combined. The $5 billion figure is real GAAP revenue — the $19 billion annualized run-rate that Anthropic cited in its Series G announcement is an extrapolation based on a 28-day consumption snapshot and annualized subscription revenue, as Reuters Breakingviews noted. The company is spending twice what it takes in.
The IPO timeline — reportedly targeting Q4 2026, per The Register — puts enormous pressure on closing that gap. Anthropic has built what it calls Constitutional AI and a safety evaluation framework that many enterprises still regard as the industry's most serious attempt to make models behave. But the Kilo Code benchmark is the data point developers building production systems consult when they choose which model to route their traffic through, and it says the cost difference is structural.
The safety positioning is also generating documented friction with a professional community Anthropic once counted as allies. Security researchers have reported that Claude Opus 4.6's cyber safeguards — updated in the February 2026 release — are flagging legitimate vulnerability discovery work as potentially prohibited. One researcher who spoke to The Register described cancelling their $200-per-month Max subscription and knowing approximately seven others who left for the same reason. Anthropic's own support documentation acknowledges the issue directly: "in some cases, these guardrails may also block dual-use cybersecurity activities with legitimate defensive purposes, such as vulnerability discovery," the company wrote in a public support article. The CBRN — chemical, biological, radiological, nuclear — content blocker has drawn the sharpest complaints. Anthropic has not commented beyond its published documentation.
Anthropic has also made a formal accusation that frames the competitive pressure as something more than price. In a blog post published this month, the company identified what it described as industrial-scale campaigns by three Chinese AI laboratories — MiniMax, Moonshot AI (Kimi), and DeepSeek — to extract Claude's capabilities through approximately 16 million prompts routed across roughly 24,000 fraudulent accounts. The term of art is distillation: training a new model on outputs generated by an existing one. If the accusation holds, the Chinese models climbing OpenRouter's rankings would be partially built on Claude's own outputs. The accused labs have not publicly responded.
The U.S.-China Economic and Security Review Commission published its most comprehensive analysis of China's open-source AI strategy on March 23, 2026. The commission found that Chinese labs have narrowed performance gaps with top Western large language models and developed architectural and training advances — including techniques around mixture-of-experts scaling and data curation — that are now industry standards globally. It also identified two compounding feedback loops that favor Chinese developers over time. The first is digital: open-source distribution drives global adoption, which drives API usage and fine-tuning, which generates training signal, which improves the next model generation. The second is physical: China's dominance in manufacturing and robotics produces real-world interaction data at a scale that Western labs cannot easily replicate.
The usage picture reinforces that framing. Chinese-origin models accounted for 41 percent of all HuggingFace downloads versus 36.5 percent for U.S.-origin models in the trailing twelve months ending February 2026, per HuggingFace's own platform data. Programming grew from 11 percent to over 50 percent of total token usage on OpenRouter throughout 2025 — a category where Chinese models now dominate, per OpenRouter data reported by Dataconomy.
The competitive picture on OpenRouter, the largest API aggregation platform, is visible in public rankings: six of the top ten models are Chinese. MiniMax M2.7 sits at number four. Chinese-developed models account for 61 percent of all tokens consumed on the platform, per OpenRouter data published February 24 and reported by Dataconomy. Anthropic's share on the platform has declined — The Register reported it fell from 29.1 percent as of March 22, 2025 to 13.3 percent as of March 21, 2026, a figure derived from the platform's public rankings rather than a primary dataset. OpenRouter's chief operating officer Chris Clark noted, in remarks reported by Dataconomy, that Chinese open-weight models have captured significant market share because they are disproportionately heavy in agentic flows — the automated, multi-step reasoning chains — being run by U.S. companies. The irony, if that observation holds: American companies building AI products are driving the usage that funds the Chinese model ecosystem.
Anthropic has not been passive. The company raised $30 billion in its most recent funding round at a $380 billion post-money valuation — extraordinary numbers that reflect investor belief in the long-term value of frontier AI capability. Dario Amodei, Anthropic's chief executive, has been the most prominent voice in the sector arguing that the most capable AI systems carry existential risk if misaligned — a position that earned the company credibility with regulators and certain government customers, and that contributed to the company's decision to resist Defense Secretary Pete Hegseth's demand that it remove safety guardrails or face a Pentagon contract blacklist, as CNN reported in February. The company targets $26 billion in revenue by the end of 2026.
The question is whether that positioning translates into the kind of customer lock-in that justifies the valuation ahead of a public listing. OpenAI is burning money too, but it has volume. Google DeepMind has the infrastructure. Meta AI has open-source Llama and the developer ecosystem. Anthropic has the safety brand, a $5 billion revenue base that is growing, and a cost structure that currently produces two dollars of spend for every dollar of revenue.
The researcher cancelling a subscription is one data point. The seven people they know who left is anecdote — it corroborates the documented guardrail friction but does not establish a pattern. What the Kilo Code benchmark shows — $0.27 versus $3.67, 90 percent of the quality, same bugs found — is not anecdote. That number is sitting in a published test, and it is the same calculation production developers are running when they choose where to route their traffic.
Sources
- theregister.com — The Register reporting on OpenRouter market share, the IPO timeline, and guardrail complaints
- blog.kilo.ai — Kilo Code benchmark comparing MiniMax M2.7 and Claude Opus 4.6
- reuters.com — Reuters Breakingviews on the revenue run-rate extrapolation
- support.claude.com — Anthropic support article on guardrails blocking dual-use cybersecurity work
- anthropic.com — Anthropic blog post on the alleged distillation campaigns
- uscc.gov — U.S.-China Economic and Security Review Commission analysis of China's open-source AI strategy, March 23, 2026
- blog.google.dev — HuggingFace download-share data for Chinese- versus U.S.-origin models