Oracle shipped 22 applications last week that can execute real business transactions — approve a payment, route a purchase order, close a collections dispute — inside systems that already hold the money, the contracts, and the customer data. Its answer to the question of who is liable when something goes wrong: disclaimers and monitoring tools.
That is the product. The liability gap is the bill customers are already holding.
At Oracle AI World in London on March 24, the company introduced Fusion Agentic Applications: 22 applications built natively into Oracle Fusion Cloud, operating inside transactional systems with access to enterprise data, workflows, approval hierarchies, and financial context. Steve Miranda, Oracle's executive vice president of applications product development, described them as "applications that can reason, decide, and act" in pursuit of defined business objectives. They are scheduled for general availability in April as part of Release 26B of Fusion Cloud Applications, according to CIO.
Oracle declined to comment beyond its public announcements.
The pricing structure sharpens the problem. Oracle introduced Action Units — approximately one cent per unit — as a consumption-based metric replacing per-user per-month SaaS pricing. More agent actions executed means more Oracle revenue. That is the explicit business model. Standard technology agreements, however, disclaim responsibility for AI agent output under "as is" terms, according to a February analysis by law firm Clifford Chance. Customers bear the risk of actions taken by AI agents even when the software is correctly configured. Negotiating around that gap requires explicit contractual language most enterprises do not have in place.
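The incentive shift is easiest to see as arithmetic. The sketch below contrasts flat per-seat SaaS pricing with consumption-based Action Units pricing; the roughly one-cent-per-unit figure comes from Oracle's announcement, but the seat price, user count, and action volume are hypothetical illustrations, not Oracle list prices.

```python
# Sketch: per-user SaaS pricing vs. consumption-based Action Units pricing.
# The ~$0.01 per Action Unit figure is from Oracle's announcement; all other
# numbers here are hypothetical, chosen only to show how the models diverge.

ACTION_UNIT_PRICE = 0.01  # roughly one cent per unit


def saas_monthly_cost(users: int, price_per_user: float) -> float:
    """Traditional per-user-per-month pricing: flat, independent of activity."""
    return users * price_per_user


def action_units_monthly_cost(agent_actions: int,
                              units_per_action: float = 1.0) -> float:
    """Consumption pricing: cost scales with how many actions agents execute."""
    return agent_actions * units_per_action * ACTION_UNIT_PRICE


# Hypothetical scenario: 500 users at $50/seat, versus agents executing
# 2 million actions in a month.
flat = saas_monthly_cost(500, 50.0)             # $25,000 regardless of usage
metered = action_units_monthly_cost(2_000_000)  # $20,000, rising with activity

print(f"flat seat pricing:   ${flat:,.2f}/month")
print(f"action-unit pricing: ${metered:,.2f}/month")
```

Under the flat model, Oracle's revenue is fixed once seats are sold; under the metered model, every additional agent action is additional revenue — which is precisely why the disclaimer on those same actions matters.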
The result is a structural misalignment with no market solution currently available. Munich Re's HSB unit launched AI liability coverage in late March, but the product targets small business general liability — bodily injury, property damage, privacy violations from AI-generated content. It does not cover consequential damages from autonomous enterprise transactions: a misapproved payment, a mispriced product, a compliance failure triggering a regulatory fine. Lloyd's and major commercial carriers have not announced products specifically addressing AI agent consequential damages in enterprise transactional environments. The gap between what Oracle sells and what any party will pay for if it goes wrong is unhedged.
"I do not see a clear response from any vendor on the liability issue," said Balaji Abbabatulla, a vice president at Gartner who covers Oracle. His firm's position: "this sounds good, but be cautious. It does not necessarily look as glittery as it sounds. There are challenges under the hood which are not being overcome right now." Mickey North Rizza, an IDC group vice president, called the announcement a significant shift in agentic systems while acknowledging the liability question as open rather than resolved.
Abbabatulla also noted that customers with established investments in Databricks, Snowflake, or Cloudera face significant transition overhead that complicates Oracle's broader AI Data Platform pitch.
The 22 applications span finance, procurement, and human resources — the workflows where Oracle's consulting ecosystem has historically lived. Oracle has trained 63,000-plus certified experts in Oracle AI Agent Studio and added an Agentic Applications Builder that lets users compose workflows using natural language without traditional coding. Chris Leone, Oracle's executive vice president of applications development, said early testing showed 40 to 50 percent time savings in support scenarios.
The liability gap is not Oracle-specific — it is a structural feature of enterprise software contracts applied to a new class of system behavior. What Oracle did last week was push those systems from "record what happened" to "decide what happens next" inside transactional environments where the consequences are financial, contractual, and regulatory. The Action Units pricing model means Oracle's revenue scales with agent activity. The contract model means Oracle absorbs none of the downside when agent activity causes harm. The insurance market has not closed the loop.
That question — who writes the policy that covers consequential damages from an AI agent executing a real transaction — does not have an answer yet. The 22 applications are available in April. The liability framework is not.