The first thing the legal system did when it took AI seriously was define what you lose by using it wrong.
On January 30, 2026, the Department of Justice secured the conviction of Linwei Ding, a former Google software engineer, on charges stemming from the theft of AI trade secrets. The DOJ called it the first successful prosecution of Chinese AI-related economic espionage. That case represents the old threat: a person physically taking something they were never supposed to take.
The more consequential rulings are quieter. Two federal courts recently held that sharing proprietary information with a public AI platform, absent a contract guaranteeing confidentiality, destroys trade secret protection. The legal mechanism is the same in both cases. The Defend Trade Secrets Act requires that alleged secrets be subject to reasonable measures to maintain secrecy and not be freely available through public channels. Using a public AI platform like ChatGPT or Claude without contractual safeguards undermines both requirements at once.
The disclosure problem
In Trinidad v. OpenAI, the plaintiff alleged that OpenAI misappropriated proprietary AI development frameworks she had created. She had developed those frameworks using ChatGPT and admitted as much in discovery. In a January 5, 2026 ruling, Judge Jon S. Tigar of the U.S. District Court for the Northern District of California found she had not taken reasonable measures to maintain secrecy: she had voluntarily shared the information with a party under no confidentiality obligation. The court applied Ruckelshaus v. Monsanto Co., a 1984 Supreme Court precedent holding that disclosure to a party not bound by confidentiality protections extinguishes trade secret rights. By accepting OpenAI's Terms of Service and using ChatGPT to develop her frameworks, the plaintiff had consented to disclosure without establishing any confidentiality protections. The holding is reported at 2026 WL 21791 and discussed in Trade Secret Litigator via the Justia docket.
The ruling did not address whether AI outputs could themselves qualify as trade secrets. The disclosure problem was enough to end the case.
In United States v. Heppner, a criminal defendant claimed attorney-client privilege over approximately 31 documents that memorialized his communications with Anthropic's Claude. The FBI had seized the documents during a search. In a February 10, 2026 ruling, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York rejected the privilege claim, reasoning that Anthropic's privacy policy allows the company to collect inputs and outputs, use them for training, and disclose them to third parties, including government regulators. Communications that a platform can share with regulators are not confidential, so the documents failed the confidentiality requirement for attorney-client privilege.
Anthropic offers an Enterprise License tier with a Zero Data Retention addendum, under which it does not retain inputs beyond the abuse-screening stage. That structure is legally analogous to a non-disclosure agreement with a consultant, but it played no role in Heppner's case. The holding is not that AI-created documents can never be privileged. It is that documents created through a public platform with permissive data-use terms fail the confidentiality requirement.
The inference question
Both rulings address voluntary disclosure. The plaintiff in Trinidad chose to use ChatGPT, and Heppner chose to use Claude for attorney communications. The harder question is what happens when someone uses AI to infer or reconstruct information that was never directly shared. That question is less settled.
Courts have distinguished between information that is readily ascertainable through proper means and information obtained through improper means. Prompt injection attacks designed to extract internal AI logic have been found to constitute improper means under the DTSA. Strategic manipulation to extract protected instructions is not lawful reverse engineering. But routine API-based inference, the kind any enterprise does when querying a model about publicly available information, has not been clearly classified.
The stakes of that distinction are large. If inferring a trade secret from fragments is not improper means, then using AI to reconstruct proprietary information is lawful so long as the information itself was never directly uploaded. You cannot take the secret, but you can ask the right questions and let the model synthesize the rest. That effectively lowers the bar for acquiring trade secrets through AI analysis.
What enterprises are actually doing about this
The leak paths that concern practitioners are more varied than a single bad upload. At the input stage, employees may feed sensitive information into third-party AI services. At the processing stage, non-anonymized data used for fine-tuning may embed secrets in model parameters. At the output stage, multi-agent collaboration creates information flows that are difficult to audit and easy to lose track of. One agent's outputs become another's inputs, and by the time information leaves the system, it has passed through enough layers that no single party knows where it came from.
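The input-stage leak path is the one most amenable to a technical control: screen outbound prompts before they ever reach an external platform. The sketch below is a minimal, hypothetical illustration of that idea; the pattern list, the `screen_prompt` function, and the example marker strings are assumptions, not any vendor's actual tooling, and a real deployment would rely on a dedicated DLP service tuned to the enterprise's own document markings.

```python
import re

# Hypothetical patterns for illustration only. A production system would
# use patterns tuned to the enterprise's own identifiers and markings.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\btrade secret\b"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),          # AWS-style access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for an outbound prompt.

    Blocking *before* the API call is what matters legally: once text
    reaches a public platform without contractual confidentiality,
    Trinidad-style waiver arguments become available to an adversary.
    """
    hits = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Summarize this CONFIDENTIAL design doc.")
# Blocked: the prompt matches a sensitive-marker pattern.
```

A gate like this addresses only the input stage; it does nothing about secrets embedded in fine-tuned model parameters or about cross-agent output flows.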
China's Regulation on Trade Secret Protection, issued February 24, 2026 by the State Administration for Market Regulation and effective June 1, 2026, is the most significant update to the country's administrative enforcement framework in three decades. It replaces the 1995 provisions that had governed trade secret protection since before the internet era. The rules define protectable AI assets to include model weights, deployment architecture, prompt sets, agent orchestration logic, and knowledge bases. Shanghai courts have already processed the country's first trade secret pledge financing case, transforming algorithm models and security architectures into Data IP for financing purposes. These are early signals that the scope of what qualifies as a protectable AI-related trade secret is expanding, and that enforcement infrastructure is following.
The practical problem for enterprises is that the legal incentive to restrict AI use runs directly against the engineering incentive to use every available tool. Development teams reach for AI assistants to move faster. Legal teams restrict access to prevent exactly the kind of disclosure whose consequences Trinidad and Heppner now make explicit. The gap between those two pressures is where trade secret protection erodes.
The path forward is contractual. Anthropic's Enterprise tier, and comparable offerings from other providers, attempt to solve the confidentiality problem at the platform level. But the multi-agent case is not solved by a single provider's enterprise agreement. When information flows through multiple systems, some inside the enterprise and some outside it, no one contract covers the path. That is where the legal exposure is most unconstrained and the technical controls are least mature.
Trinidad and Heppner are early rulings on unsettled law. The doctrine they establish is now explicit: disclosure to a public AI platform without contractual confidentiality protections waives trade secret rights. What remains to be worked out is how the principle applies in more complex scenarios: multi-agent workflows, inference-based reconstruction, and the growing class of cases where the entity using the AI is itself an agent acting on behalf of the enterprise. The answers will define what trade secret protection means in a world where the most capable tools are also the most legally hazardous to use.