UK's OpenAI partnership promised transformation. Eight months later: nothing.

When UK Technology Secretary Peter Kyle and OpenAI CEO Sam Altman signed a memorandum of understanding in July 2025, they called it a partnership that would harness AI to "address society's greatest challenges" — changing "how people live, learn, work, and access public services." Eight months later, a freedom of information request has produced an unusually flat answer: the Department for Science, Innovation and Technology has conducted zero trials under the agreement.
The FOI request, filed by Tarek Nseir, CEO of the AI consultancy Valliance, asked DSIT for information about trials conducted under the memorandum. DSIT's response, reported by The Guardian, was unambiguous: it held no such information and had "not undertaken any trials under the memorandum of understanding with OpenAI."
That single sentence deserves to sit without softening. Eight months after a high-profile signing between a sitting Cabinet minister and the CEO of the world's most-discussed AI company, the government cannot document a single trial.
What the MoU actually said
Reading the MoU text published on GOV.UK explains a great deal about why this could happen. Section 2 states that OpenAI and DSIT "will collaborate to identify opportunities for how advanced AI models can be deployed throughout government and the private sector." Section 3 covers infrastructure. Section 4 covers technical information exchange.
Every section uses permissive language: "may include," "exploring," "potential routes." At the end of the document, a short closing passage disposes of any enforcement question: "This memorandum is voluntary, not legally binding, and without prejudice to any binding agreements. It does not prejudice against future procurement decisions."
No timelines. No milestones. No accountability mechanism. No named deliverables. It is a press release that two parties have cosigned.
This matters not as a legal technicality but as a structural feature. Without binding commitments or measurable targets, the MoU creates high-profile announcement value for both parties at essentially zero cost of delivery. For the government, it signals ambition. For OpenAI, it signals market access. For the public, it provides very little.
The deflection and what it reveals
When the Guardian asked DSIT to respond, the department pointed to two things: the Ministry of Justice's October 2025 agreement giving 2,500 civil servants ChatGPT Enterprise access, and ongoing work with the UK AI Safety Institute.
The MoJ arrangement is real. OpenAI confirmed it in an October 2025 announcement, describing 2,500 MoJ civil servants getting ChatGPT Enterprise with an option for UK-based data storage. AI Magazine reported that the pilots cut completion times on some tasks from half a day to twenty minutes. That is a concrete deployment.
But it wasn't conducted under the MoU. It was rolled out as part of the AI Action Plan for Justice, a separate initiative. DSIT retroactively citing it as evidence of MoU progress is the kind of answer that passes media scrutiny on a slow news day but dissolves immediately under examination.
Nseir, who filed the FOI, put it plainly: "We use PowerPoint — that doesn't mean we have a strategic relationship with Microsoft. If this was the intent of the MoU then our government is not taking the impact of AI on our economy seriously."
Stargate UK: the other number that doesn't add up
OpenAI's July 2025 announcement of the partnership also introduced Stargate UK — a collaboration with Nscale and Nvidia to deploy up to 8,000 Nvidia GPUs at a UK data centre in Q1 2026, with a stated path to 31,000 GPUs over time. This was presented as evidence that OpenAI was "increasing its footprint in the UK."
We are now in Q1 2026. The Guardian's investigation, published this week, found that Nscale will almost certainly not complete its promised UK supercomputer by the end of 2026 and has publicly misrepresented progress on its construction site. OpenAI told the Guardian it had "nothing to share" on the status of the GPU deployment it had previously described as imminent.
Two pillars of the partnership announcement, public sector deployment and infrastructure build-out, have both stalled. This is not a coincidence of bad luck. It is what happens when high-profile commitments are built on voluntary language and PR timelines rather than contracts and capital commitments.
A pattern, not an outlier
The UK government has signed nearly identical MoUs with Anthropic, Google DeepMind, and Nvidia. The Google DeepMind agreement, concluded in December 2025, is reported to be in the early stages of planning. Anthropic told the Guardian it is planning to build an AI assistant to help citizens navigate government services and is working with the AI Safety Institute on safety research — which at least represents something concrete and forward-looking.
The government's AI Opportunities Action Plan: One Year On, published in January 2026, describes real achievements: 2.4 million NHS chest X-rays now AI-assisted, a Sovereign AI Unit with up to £500 million in funding, five designated AI Growth Zones. These are genuine numbers attached to specific programs. None of them, notably, came out of the MoU with OpenAI.
The contrast suggests the government is capable of executing AI deployments when it uses established procurement channels and dedicated funding. The MoU model bypasses that infrastructure — and appears to produce proportionally less.
Matt Davies, economic and social policy lead at the Ada Lovelace Institute, identified what's at stake: "Voluntary partnerships with big AI companies don't follow the usual procurement rules, raising real questions about accountability and scrutiny. The memorandum with OpenAI doesn't clearly explain how progress will be measured or how it will deliver public benefit, and the risks of 'lock-in' — becoming dependent on a company's product and services — aren't addressed anywhere."
An Ada Lovelace Institute poll found 84 percent of the British public are concerned the government is putting the AI sector's interests ahead of protecting the public.
What the FOI actually cost
FOI requests are a blunt instrument, but they are often the only instrument that works. The UK government issued no progress report on the OpenAI MoU at the six-month mark. No parliamentary question appears to have produced an equivalent disclosure. It took a private AI consultancy filing a request and waiting out the statutory response period to establish that the country's most-publicised AI partnership had produced, in DSIT's own words, no trials.
That is the accountability gap. Not the absence of trials — governments move slowly, and eight months is a short window — but the absence of any mechanism to surface that absence except a citizen filing an FOI request.
DSIT said it is "pleased with the progress we are making on the memorandum of understanding with OpenAI" and that "this work is active, ongoing and focused on delivering real results." OpenAI said the FOI's scope "did not capture the full scale of its activities in the UK."
Both statements may be true. Neither is verifiable. That is the problem.