The agentic AI framework market is consolidating fast around two open-source winners, but the more interesting story is what enterprises are doing with them: replacing middle management, not just software development.
LangGraph and CrewAI dominate production deployments. LangGraph, built by LangChain's team, has 25,000 GitHub stars and 34.5 million monthly downloads. Its strength is deterministic workflow control via cyclical state machines, which matters for enterprise use cases that require auditability. CrewAI has 46,000 stars and wins on speed to prototype through role-based multi-agent collaboration, where agents are assigned backstories, goals, and tools. Together the two account for the majority of the 68% of production agents that run on open-source frameworks rather than proprietary platforms, according to Channel Tel's framework comparison.
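The cyclical-state-machine pattern that makes LangGraph auditable can be sketched in plain Python. This is a conceptual sketch, not LangGraph's actual API; the node names ("draft", "review"), the approval rule, and the routing function are all illustrative. The point is the shape: named nodes, conditional edges that can loop back, and a transition log that doubles as an audit trail.

```python
# Conceptual sketch of a cyclical agent state machine: nodes are steps,
# conditional edges may loop back, and every transition is recorded.
# Node names and the approval rule are illustrative, not LangGraph's API.

def draft(state):
    state["attempts"] += 1
    state["text"] = f"draft v{state['attempts']}"
    return state

def review(state):
    # Approve after two attempts; in a real system this would be an LLM call.
    state["approved"] = state["attempts"] >= 2
    return state

NODES = {"draft": draft, "review": review}

def route(node, state):
    """Conditional edge: loop back to 'draft' until review passes."""
    if node == "draft":
        return "review"
    return "END" if state["approved"] else "draft"

def run(entry="draft"):
    state = {"attempts": 0, "approved": False}
    trace, node = [], entry
    while node != "END":
        trace.append(node)            # audit log of every transition
        state = NODES[node](state)
        node = route(node, state)
    return state, trace

state, trace = run()
print(trace)  # ['draft', 'review', 'draft', 'review']
```

The determinism is the point: given the same inputs, the trace is reproducible, which is what compliance teams ask for and what pure prompt-chained agents cannot provide.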
The tradeoff is cost. Open-source frameworks require 2.3 times more initial setup time than managed platforms, but organizations running dedicated open-source stacks see 55% lower cost-per-agent than pure platform solutions, per Dev.to analysis. The break-even calculation depends on scale: small teams prototype on CrewAI, while production systems at scale converge on LangGraph for its observability story. OpenAI's Agents SDK and Microsoft's Semantic Kernel target the enterprise integration market, where existing vendor relationships matter more than framework capability.
AutoGen, Microsoft's multi-agent framework, has effectively exited enterprise consideration. Its near-absence of security mechanisms makes it suitable only for academic research and rapid experimentation, per the same Dev.to analysis. This is not a close call: any production deployment touching sensitive data should be on LangGraph with LangSmith observability or a comparable enterprise stack.
The real deployment story is not in the framework rankings. It's in what the frameworks are being asked to do. Klarna handles customer support for 85 million users via LangGraph, cutting resolution time by 80%. That is a middle-management function being automated, not a development task. Channel Tel documented this, but the enterprise adoption surveys have been clearer: McKinsey's 2025 State of AI survey found 62% of organizations at least experimenting with agents, and approximately 45% of firms with high agentic AI adoption rates anticipate reductions in middle management within 36 months, per Observer's synthesis of the data.
That number is not in the framework announcements. It is not in the developer tool coverage. It is in the enterprise change management surveys, and nobody in the infrastructure press is asking what comes next.
The gap is organizational knowledge. Current AI systems can execute on documented processes. They cannot inherit tacit institutional memory: the unwritten rules, relationship capital, and situational judgment that senior individual contributors and middle managers carry. When a manager who knows which vendor will actually deliver and which will sandbag a timeline is gone, that knowledge does not transfer to the agent that replaced their team. The MIT finding that 95% of enterprise AI pilots fail to scale has a human substrate that the framework vendors are not building for, because it is not a software problem.
The Agentic AI Foundation, founded by Anthropic, OpenAI, and Block with backing from Google, Microsoft, Amazon, and Bloomberg, published AGENTS.md as a community standard in August 2025. It has since been adopted by more than 60,000 open-source projects. The standard covers agent interaction patterns, not organizational memory. That gap is not an oversight. It is a structural limit of what software can represent.
For teams building or deploying agent infrastructure: the framework choice is increasingly settled. LangGraph or CrewAI, with MCP now native to both, as it is to Vercel's AI SDK, Mastra, and Microsoft's Agent Framework. The open question is what you are actually automating, and whether the organizational knowledge required to do it has been documented anywhere. If it hasn't, the agent will execute the process and fail at the parts nobody wrote down.
The 40% cancellation rate Gartner projects for agentic AI projects through 2027 is not, at its core, a technology failure. It is a knowledge management failure that the infrastructure press has not yet named as such.