The Accountability Gap: When an AI Agent Runs the Government and Something Goes Wrong

The UAE wants AI agents to run half its government. Nobody has figured out who answers when things go wrong.
On Thursday, the UAE Cabinet committed to placing autonomous AI agents in charge of half the country's federal services within two years. Sheikh Mohammed bin Rashid Al Maktoum described AI not as a tool but as an executive partner that would "analyse, decide, execute and improve in real time." (UAE Media Office) The announcement laid out a two-year timeline, a ministerial performance-review framework tied to AI adoption speed, and a training mandate for every federal employee. By any measure, it is the most ambitious state-level commitment to agentic AI governance ever made.
What it did not include is an answer to the question that every legal scholar, every civil-liberties advocate, and every engineer building these systems has been asking: who is liable when the AI gets it wrong?
"The real determinant of success will be agentic readiness at the data and process layer, not infrastructure," said Manish Ranjan, research director for software and cloud at IDC EMEA. "Workflow, policy and process redesign is the hardest part and, in a federal government, a multi-year change management exercise rather than a technology roll-out." (Computer Weekly) That is the consensus view among practitioners who have studied government AI deployments. The hard part is not building the AI. It is redesigning everything around it.
Which raises the harder question. When a redesigned process, powered by an autonomous agent, produces a wrong outcome — a permit denied in error, a benefit revoked incorrectly, a visa rejected on bad data — what happens next?
In most jurisdictions, the answer is clear: a human official is responsible, the affected party can appeal, and the chain of accountability runs through the department to the minister. That chain is what makes democratic governance legible to citizens. Agentic AI dissolves it. The decision is made by a system that may be operating across multiple departments simultaneously, drawing on data sources that no single official oversees, with a reasoning process that even its operators cannot fully explain in real time.
"The lived experience shifts from dealing with government to government working around you," said Jessica Constantinidis, innovation officer EMEA at ServiceNow. "The UAE isn't automating government. It's rearchitecting the relationship between the country and its people, with AI as the connective tissue." (Khaleej Times) That is a compelling vision. It is also one where the connective tissue has no accountable node.
The UAE has spent two decades building the infrastructure that makes this plausible. UAE Pass, the national digital identity system, and TAMM, Abu Dhabi's integrated government services platform, represent genuine state-of-the-art digital government architecture. (Gulf News) The announcement cited Government Services 2.0, which introduced proactive, data-driven service delivery. These are not vaporware. They are real systems that real people use.
But moving from integrated digital services to genuinely autonomous agentic decision-making is a different kind of step. The first moves fast and keeps humans in the loop. The second removes the loop — or at least relocates the human from the decision point to somewhere further back, reviewing outcomes rather than making them.
Nobody has worked out what accountability looks like in that world. The EU's AI Act creates risk classifications for high-stakes AI deployments but was designed around static systems, not agents that can chain actions together in ways that weren't predicted at design time. US administrative law has mechanisms for algorithmic decision review but no established framework for autonomous agents making sequential decisions across department boundaries. The UAE itself has no published AI liability or audit framework specific to agentic systems.
"Bias is a particular concern in multilingual, multicultural populations like the UAE," IDC's Ranjan noted. (Computer Weekly) "Governments moving to autonomous service delivery must invest in ongoing model auditing, not just pre-deployment testing." That is correct. It is also something the announcement did not address.
The practical stakes are concrete. Residency renewals, permit applications, trade licences, school admissions — these are the services that experts say would shift from "apply and wait" to "handled proactively in the background." (Khaleej Times) That is appealing when it works. When a residency renewal is initiated prematurely, or a permit is approved against changed regulations, or a school shortlist is generated from outdated data, the citizen needs somewhere to go. The announcement creates that somewhere in theory. In practice, tracing an error through an agentic system to identify what went wrong and who can fix it is an unsolved engineering and legal problem.
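To see why tracing is hard, consider what even a minimal provenance record would have to capture. The sketch below is purely illustrative — the names (`DecisionRecord`, `trace_lineage`) and the scenario are hypothetical, not drawn from any UAE system — but it shows the basic shape of the problem: a single citizen-facing outcome can depend on a chain of upstream agent decisions, each drawing on its own data sources, and reconstructing that chain is the precondition for any appeal.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a decision-provenance log for an agentic
# pipeline. All names and records here are illustrative assumptions.

@dataclass
class DecisionRecord:
    decision_id: str
    agent: str                      # which agent produced the decision
    data_sources: list[str]         # datasets the agent consulted
    depends_on: list[str] = field(default_factory=list)  # upstream decision ids

def trace_lineage(log: dict[str, DecisionRecord],
                  decision_id: str) -> list[DecisionRecord]:
    """Walk back through every upstream decision that fed an outcome."""
    seen: set[str] = set()
    stack = [decision_id]
    lineage: list[DecisionRecord] = []
    while stack:
        did = stack.pop()
        if did in seen:
            continue
        seen.add(did)
        rec = log[did]
        lineage.append(rec)
        stack.extend(rec.depends_on)
    return lineage

# Example: a residency decision built on two other agents' outputs.
log = {
    "d1": DecisionRecord("d1", "identity-agent", ["id-registry"]),
    "d2": DecisionRecord("d2", "employment-agent", ["labour-db"],
                         depends_on=["d1"]),
    "d3": DecisionRecord("d3", "residency-agent", ["visa-db"],
                         depends_on=["d1", "d2"]),
}
for rec in trace_lineage(log, "d3"):
    print(rec.agent, rec.data_sources)
```

Even this toy version exposes the governance gap: the walk only works if every agent logs its dependencies in a shared, consistent format, and it still does not answer who among the three agents' operators — or which ministry behind them — owns the error.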
The GCC dimension adds another layer. The UAE is not making this bet in isolation. For the past decade, the benchmark in Gulf government technology has been digital maturity — e-service availability, digital identity adoption. "The UAE is effectively uplifting that benchmark and replacing it with agentic readiness," Ranjan said. (Computer Weekly) If the benchmark shifts, other Gulf states will feel pressure to follow. Saudi Arabia's own digital government programmes, Qatar's smart-city infrastructure, Oman's e-government services — all of them are now implicitly measured against a two-year UAE target. The regional competitive dynamic is real. Whether the governance frameworks can keep pace is a different question.
There is a version of this story where the UAE pulls it off. The infrastructure is real. The political commitment is unambiguous. The financial resources are available. A government that has spent twenty years building toward this moment may be the only entity on earth with the institutional patience and technical foundation to attempt it.
But the question the announcement sidestepped is the one that will define whether this is a model or a cautionary tale. When an AI agent running a government service produces an outcome that harms a citizen, someone needs to be able to explain why. Right now, no legal or technical framework in the UAE — or anywhere else — can guarantee that. The UAE has announced the largest autonomous government deployment in history. The accountability architecture to go with it does not yet exist.
