While Chinese Rivals Raised Prices, DeepSeek Cut Its Own
DeepSeek just made autonomous AI agents economically thinkable. On Sunday the Chinese lab cut the price of repeated calls to its AI models to one-tenth of the launch rate — a permanent adjustment effective 12:15 UTC April 26, per DeepSeek API records. The move runs counter to the broader Chinese AI market: Kimi K2.6 and Zhipu GLM-5.1 both raised prices on their latest flagship releases, the South China Morning Post reported. DeepSeek went the other direction.
The practical implication matters more than the headline number. At V4 pricing, a full repository analysis — the kind that would cost roughly 35 cents with GPT-5.5 — runs about 5 cents with DeepSeek V4, before the promotional discount kicks in. Continuous autonomous coding agents, full-repo review on every pull request, AI pipelines too low-margin to justify at GPT-5.5 price points — these move from technically possible to economically viable in a single price cut.
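The shift from "technically possible" to "economically viable" is easiest to see as back-of-envelope arithmetic. A minimal sketch, using the article's rough per-analysis figures (~$0.35 on GPT-5.5, ~$0.05 on DeepSeek V4); the pull-request volumes are hypothetical assumptions for illustration:

```python
# Back-of-envelope: annual cost of full-repo analysis on every pull request.
# Per-analysis prices are the article's rough figures; PR volumes are
# hypothetical assumptions, not reported numbers.

GPT55_PER_ANALYSIS = 0.35  # USD per full-repo analysis (article's estimate)
V4_PER_ANALYSIS = 0.05     # USD per full-repo analysis (article's estimate)

def annual_cost(per_analysis: float, prs_per_day: int, workdays: int = 250) -> float:
    """Annual spend if every PR triggers one full repository analysis."""
    return per_analysis * prs_per_day * workdays

for prs_per_day in (50, 500):
    gpt = annual_cost(GPT55_PER_ANALYSIS, prs_per_day)
    v4 = annual_cost(V4_PER_ANALYSIS, prs_per_day)
    print(f"{prs_per_day} PRs/day: GPT-5.5 ${gpt:,.0f}/yr vs V4 ${v4:,.0f}/yr")
```

Under these assumptions, a mid-size engineering org at 500 PRs a day goes from roughly $44,000 a year to about $6,000 — the difference between a line item that needs sign-off and one that doesn't.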
The architecture underneath suggests this is structural, not promotional. DeepSeek validated V4 on Huawei Ascend NPUs alongside Nvidia GPUs, The Register reported — meaning the lab isn't dependent on Nvidia chips that US export controls have made difficult to source. V4 requires only 10 percent of the memory cache and 27 percent of the inference compute of its predecessor, even at its 1 million token context window. Those efficiency gains are the mechanism, not the discount.
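How far do those efficiency figures go toward explaining a 10x price cut? A rough sketch, assuming per-request serving cost is a weighted blend of memory-cache cost and inference-compute cost (the weight is a hypothetical assumption; the 0.10 and 0.27 factors are the article's reported figures for V4 versus its predecessor):

```python
# Sketch: relative serving cost of V4 vs its predecessor, assuming cost is
# a weighted blend of memory-cache and inference-compute spend.
# The weight w_memory is a hypothetical assumption; 0.10 and 0.27 are the
# article's reported efficiency factors.

def blended_cost_factor(w_memory: float,
                        mem_factor: float = 0.10,
                        compute_factor: float = 0.27) -> float:
    """V4 serving cost as a fraction of the predecessor's."""
    return w_memory * mem_factor + (1 - w_memory) * compute_factor

for w in (0.2, 0.5, 0.8):
    print(f"memory weight {w}: cost factor {blended_cost_factor(w):.3f}")
```

Across these assumed weightings the cost factor lands between roughly 0.13 and 0.24 — a 76 to 87 percent reduction. That covers most, though not all, of the 10x cut, which is consistent with the article's framing: efficiency is the mechanism, with pricing strategy closing the rest of the gap.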
On GPQA Diamond, a test of graduate-level reasoning, V4 scores 90.1 percent against Opus 4.7's 94.2. On BrowseComp, a long-context browsing benchmark, it reaches 83.4 percent against GPT-5.5's 84.4 — close enough to be competitive on some production workloads, VentureBeat's published tables show. The model is MIT licensed and integrates with Claude Code, OpenClaw, and OpenCode, VentureBeat reported.
DeepSeek is raising from Tencent and Alibaba at a valuation north of $20 billion, according to the Financial Times and The Information — retention capital for researchers, not compute. The US government has alleged without public evidence that DeepSeek distilled outputs from US proprietary models to train its own. DeepSeek denies this. The dispute is unresolved.
What is not in dispute is the direction of the cost curve. If the economics hold under production load, the moat for frontier AI labs shifts from raw model quality toward proprietary data, regulatory position, and distribution scale. Those are the positions worth owning when the price of intelligence approaches a commodity input. Whether the category of products DeepSeek is pricing into existence actually materializes is the next data point worth watching.