When a senior diplomat in a crisis follows an AI system's recommendation and it goes wrong, the liability lands on the career official, not the algorithm. That accountability gap is the real story as the State Department explores giving real decision-making authority to AI systems that act autonomously, months before the federal standards governing exactly that are written.
The State Department is in the exploration stage with agentic AI systems, according to Amy Ritualo, the acting chief data and AI officer, who discussed the plans at an AITalks fireside chat in Washington, D.C. this week. The use case she described: orchestrating permanent changes of station for diplomats, a process that spans family relocations, school transfers, pet import regulations, and dozens of government systems that do not talk to each other. StateChat, the department's existing chatbot, already has roughly 58,000 users and is in use at 98% of the department's 270 missions and posts worldwide, per Ritualo's remarks. But the department's own AI inventory, filed in January 2026, lists zero agentic systems as deployed.
The timing matters. NIST published a request for information on AI agent security in January 2026, and COSAiS, the set of SP 800-53 control overlays that would govern this class of deployment, is still in development as of March 2026. A GAO report from September 2025 found that even the best-performing AI agent tested could complete only about 30% of software development tasks autonomously. The State Department is exploring agentic AI for logistics and administrative workflows before the rule book exists.
CIO Kelly Fletcher described the approach plainly at a GDIT Emerge event in February: her vision is to slap AI agents on top of older systems to buy time. Ritualo drew one firm line: foreign policy decisions will not be made by agentic AI. But the accountability question is harder than the scope question. When an agentic system recommends a course of action in a crisis and a career officer acts on it, the officer bears the consequences. Nobody has written the rules for who is responsible when that recommendation is wrong.
The State Department is not alone. Agencies across the federal government are moving toward agentic AI deployments at a moment when the governance infrastructure is under construction. State has 58,000 people on an existing AI system and wants to extend it. The question no announcement has answered is who is liable when that extension makes a consequential mistake.