The EU Is About to Demand Something From ChatGPT That No Government Has Ever Demanded From Any AI
The European Commission is asking a question that does not yet have a clear answer: does ChatGPT fit inside the Digital Services Act, or is it something the law never anticipated?
On Friday, the Commission confirmed it is analyzing whether ChatGPT should be classified under the DSA as a very large online platform or as a very large online search engine. The distinction matters enormously. A platform designation would saddle OpenAI with the content moderation obligations that Meta and Google already comply with. A search engine designation would bring a different and in some ways more demanding set of requirements: systemic risk audits, adversarial testing disclosures, and transparency about how the system produces its outputs.
That second category is the one with no precedent.
The DSA's systemic risk provisions, modeled on the kind of risk-based oversight long applied to financial institutions and critical infrastructure operators, require designated services to disclose their failure modes to regulators. For a conventional search engine, that means explaining why a given query might return a harmful result. For a general-purpose AI model that generates text, images, and code in response to open-ended prompts, the disclosure obligation would have to cover training data provenance, behavioral testing under adversarial conditions, and the conditions under which the model produces outputs that could be classified as systemic risks.
Joan Barata, a visiting professor at Católica University in Porto who specializes in platform regulation, told Tech Policy Press that the fit is awkward. A large language model does not sit neatly in any of the DSA's three intermediary service categories: mere conduit, caching, or hosting. Regulators are reaching for a category that does not quite contain the thing they are trying to regulate.
OpenAI reported that ChatGPT's search feature had approximately 120.4 million average monthly active users in the European Union over the six-month period ending September 2025. That figure, self-reported by OpenAI, is nearly three times the 45 million user threshold that triggers the DSA's extra obligations for very large online platforms and very large online search engines. The Commission said it is assessing the information and whether further clarifications are needed.
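The arithmetic behind the designation is straightforward and can be checked directly. A minimal sketch, using the article's figures (OpenAI's self-reported number and the DSA's 45 million threshold, which corresponds to roughly 10% of the EU population):

```python
# Check OpenAI's self-reported EU figure against the DSA designation bar.
# Figures are taken from the article; nothing else is assumed.
reported_monthly_users = 120.4e6  # ChatGPT search, six months to Sept 2025
dsa_threshold = 45e6              # "very large" platform/search engine bar

exceeds = reported_monthly_users >= dsa_threshold
multiple = reported_monthly_users / dsa_threshold

print(f"Exceeds threshold: {exceeds}")
print(f"Multiple of threshold: {multiple:.2f}x")  # ~2.68x, i.e. nearly three times
```

The ~2.68x margin is why the Commission's assessment centers on verifying the figure rather than on whether it clears the bar.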
If the designation holds, ChatGPT becomes the first AI chatbot formally subject to mandatory systemic risk audits under the DSA. The audits would require OpenAI to disclose to Brussels, on a recurring basis, how the model behaves under adversarial conditions, what training data informed its outputs, and what the company knows about its own system's failure modes. That is a different kind of disclosure than anything required of a traditional platform: not just what content appeared on the service, but how the system that generated that content actually works.
The practical question is whether those disclosures are even technically feasible at the level the DSA would require. Model behavior under adversarial conditions is an active area of AI safety research; no consensus exists on how to measure it, and the outputs that regulators might classify as systemic risks are defined broadly enough that a general-purpose chatbot could plausibly trigger them. The compliance cost of producing those disclosures, whatever they end up being, falls on OpenAI alone, in real time, with no public benchmark for what satisfies the requirement.
The Commission is considering classifying ChatGPT as a search engine specifically for the search feature, which lets users prompt the chatbot to retrieve live information from the web. Scoping the designation to a single feature might mean lighter obligations than classifying the entire service as a platform, but it also raises the question of what it means to regulate one function of an AI system while the rest of the same system operates under a different classification.
What Brussels decides in the coming months will not only determine what OpenAI owes European regulators. It will establish whether the DSA is a document that can accommodate a general-purpose technology, or whether the first generation of AI governance is being written in real time, against a law that was designed for a different internet.