Anthropic Is Teaching AI to Dream. That Might Not Mean What You Think.
Anthropic is teaching its AI to dream. The question is whether that means anything yet.
At its annual developer conference on Wednesday, Anthropic introduced a feature called "dreaming" for its Claude Managed Agents product — a process that lets AI systems review past sessions, identify patterns across multiple conversations, and reorganize their own memory stores between tasks. The feature, rolling out in research preview, is described by Anthropic as a way for agents to "find patterns and help agents self-improve" by consolidating what they have learned across extended projects.
Dreaming is a departure from how most AI assistants handle memory today. Standard context management compacts long conversations into a shorter summary, but that compaction is usually confined to a single session with a single agent. Dreaming, as Anthropic describes it, runs across multiple sessions and multiple agents — surfacing recurring mistakes, workflow patterns, and team preferences that no single conversation would reveal. Ars Technica reported that the technical process takes a pre-existing memory store and up to 100 session transcripts as input, then produces a reorganized output store with duplicates merged and stale entries replaced.
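The consolidation step Anthropic describes can be pictured as a simple merge pass. The sketch below is entirely hypothetical: Anthropic has not published an API or data format for dreaming, so the `MemoryEntry` structure, the function name, and the last-writer-wins rule are invented for illustration. It only captures the reported shape of the process: an existing store plus up to 100 session transcripts in, a store with duplicates merged and stale entries replaced out.

```python
# Hypothetical sketch of a memory-consolidation pass like the one Anthropic
# describes. All names and structures here are invented for illustration;
# the real feature runs model inference, not a simple dictionary merge.
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    key: str        # topic the entry covers (e.g. "deploy-checklist")
    content: str    # what the agent has recorded about it
    last_seen: int  # index of the most recent session touching this topic


def consolidate(store: list[MemoryEntry],
                transcripts: list[dict[str, str]]) -> list[MemoryEntry]:
    """Merge an existing memory store with up to 100 session transcripts.

    Duplicate keys collapse to one entry, and a later session that
    revisits a topic replaces the stale entry for it.
    """
    merged: dict[str, MemoryEntry] = {e.key: e for e in store}
    for i, session in enumerate(transcripts[:100]):  # cap mirrors the reported limit
        for key, content in session.items():
            prior = merged.get(key)
            if prior is None or i >= prior.last_seen:
                merged[key] = MemoryEntry(key, content, i)  # newer info wins
    return list(merged.values())
```

In this toy version the "model" is just a last-writer-wins rule; the announced feature presumably uses Claude itself to decide what counts as a duplicate or a stale entry, which is exactly the part the research preview has yet to demonstrate.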
The feature matters less as a product announcement and more as a signal of where Anthropic is betting. Memory consolidation across sessions is a prerequisite for agents that can actually improve from experience rather than waiting for the next model training run. If it works as described, the moat for AI agents shifts from what a model can do out of the box to what a system has learned from deployment.
That bet requires infrastructure. On the same day, Anthropic announced it had signed an agreement with SpaceX to use the full computing capacity of the Colossus 1 data center in Memphis, Tennessee — more than 220,000 Nvidia processors delivering 300 megawatts of new capacity within a month, Reuters reported. The company cited the deal, along with previously announced partnerships with Amazon, Google, and Microsoft, as the reason it is doubling Claude Code's five-hour rate limits for Pro, Max, Team, and Enterprise plans, and removing peak-hour throttling for Pro and Max accounts. The capacity expansion is real; the product benefit is still theoretical.
Also graduating from research preview to public beta on Wednesday: outcomes, a feature that lets developers define rubric-style goals for agents, and multiagent orchestration, which allows a single agent to delegate tasks to specialist sub-agents. These are concrete. Dreaming is the more ambitious claim — and the more speculative one.
The competitive pressure behind these announcements is not subtle. Reuters reported Wednesday that OpenAI scaled back its Sora video generation tool to redirect focus toward AI coding, citing competition from Anthropic's Claude Code as the reason. That shift is the clearest signal yet that the coding agent market is where the money is moving. Dreaming, if it delivers on its stated promise, would extend that lead: an agent that remembers mistakes across projects is more useful than one that starts fresh every session.
The framing Anthropic chose for the feature is revealing. Naming memory consolidation "dreaming" borrows the language of human cognition to make a technical process sound like something it is not. The feature does not involve sleep, unconscious processing, or anything resembling human dreaming. It is an asynchronous job that runs model inference over a memory store and session transcripts. That is useful — it is also not dreaming, by any conventional definition.
Ars Technica reported that Jack Clark, Anthropic's co-founder and policy head, published an essay Monday arguing that there is a 60 percent chance frontier AI models will be able to autonomously train their successors by the end of 2028. Dreaming is not mentioned in that essay, but it occupies adjacent territory: systems that improve themselves during deployment rather than waiting for the next lab training run. The essay and the product announcement are separate events. Taken together, they suggest Anthropic is building toward self-improving agents as a product direction, not just a research hypothesis.
Elon Musk, who accused Anthropic's AI of bias in February and called the company "misanthropic," offered a different assessment Wednesday after meeting with Anthropic's leadership. "No one set off my evil detector," he wrote on X. SpaceX will lease its Colossus 1 facility to Anthropic. The two companies that once seemed furthest apart in the AI race are now infrastructure partners.
The feature is still in research preview. Developers must request access. The self-improvement claims are unverified. Whether dreaming produces measurable improvements in agent performance, or whether it is advanced session summarization with a science-fiction label, is a question the announcement does not answer. What is clear is that Anthropic is treating multi-session memory as a product priority, and is spending heavily on the infrastructure to support it.
That spending — the SpaceX deal, the Amazon and Google partnerships, the rate limit expansion — is concrete. The capability is not yet. "Dreaming" is the right word for the marketing. Whether it describes something real is what the research preview is supposed to determine.