The De-Identified Data Loophole in CU's OpenAI Contract
The University of Colorado's system-wide OpenAI contract raises a question that's becoming familiar across higher education: who actually decides when a university buys AI?

The University of Colorado system signed a $2.1 million contract with OpenAI in February to provide ChatGPT Edu access to all students, faculty, and staff across its four campuses, effective March 31. According to the Daily Camera, CU President Todd Saliman and the system's chancellors made the call without going through a shared governance process. More than 450 faculty members have since signed a "GPT Agreement Dissent Letter" opposing the agreement.
Axios Boulder reported the faculty pushback on Monday, drawing fresh attention to a controversy that has been building on campus for weeks.
What the Contract Actually Says
The full contract is publicly available, which is more than most university AI deals offer. The headline protection: OpenAI cannot use student data — including chat history and uploaded files — to train, fine-tune, or otherwise improve its AI models. OpenAI also cannot sell student data to third parties.
But there's a carve-out that has drawn attention from faculty and students with technical backgrounds. OpenAI may create and own de-identified data derived from the platform, and the prohibition on sharing does not apply to de-identified data. The university says OpenAI would only use such data for maintaining and troubleshooting ChatGPT services — understanding things like downtime or error rates at an aggregate level. CU system spokesperson Michele Ames told the Daily Camera the university is working to reduce risk through contractual protections, training, and oversight.
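The distinction the university is drawing is between user-level records and aggregate service metrics. As a rough illustration, here is a minimal Python sketch of what "de-identified data for maintaining and troubleshooting" could look like in practice. Everything in it is hypothetical: the RequestLog fields and the aggregate_service_health function are illustrative assumptions, not drawn from the contract or from OpenAI's actual telemetry.

```python
# Hypothetical illustration only -- not OpenAI's pipeline. Shows the kind of
# aggregate service-health metrics the university says the carve-out covers:
# counts and rates, with no user-level fields retained in the output.
from collections import Counter
from dataclasses import dataclass

@dataclass
class RequestLog:
    user_id: str        # identifying field; dropped during aggregation
    campus: str
    status_code: int    # e.g. 200 OK, 500 server error
    latency_ms: float

def aggregate_service_health(logs: list[RequestLog]) -> dict:
    """Reduce raw request logs to aggregates; no per-user data survives."""
    total = len(logs)
    errors = sum(1 for log in logs if log.status_code >= 500)
    return {
        "requests": total,
        "error_rate": errors / total if total else 0.0,
        "requests_by_campus": dict(Counter(log.campus for log in logs)),
    }

logs = [
    RequestLog("u-001", "Boulder", 200, 412.0),
    RequestLog("u-002", "Denver", 500, 1380.5),
    RequestLog("u-001", "Boulder", 200, 350.2),
]
print(aggregate_service_health(logs))
# {'requests': 3, 'error_rate': 0.333..., 'requests_by_campus': {'Boulder': 2, 'Denver': 1}}
```

Note what the sketch also shows: the aggregate output carries no trace of whether the identifying fields were actually dropped upstream, so an outside party inspecting only what the vendor retains cannot verify the promise from the data itself. That is the skepticism quoted below.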
CU Boulder doctoral student Aaron Gluck, whose research focuses on AI, put the skepticism plainly: "I have concerns about whether or not that's a promise that's actually going to be kept by the company. It's not like we have a way of monitoring it."
That's the monitoring problem in a sentence. The contract establishes protections. Whether those protections are auditable is a different question — one the contract doesn't answer.
The Administration's Case
CU Boulder Chancellor Justin Schwartz made the case to the Boulder Faculty Assembly in early March with a pragmatic argument rather than an ideological one. According to CU Boulder Today, he noted that 30,000 members of the CU Boulder community, including more than 3,000 faculty and staff, are already using ChatGPT through their colorado.edu email addresses; a provost's office page separately documents more than 28,000 users with @colorado.edu credentials, a figure that tracks closely with Schwartz's. "This is not something new to us," Schwartz said. "This is already here."
The argument is essentially: the university can either provide a governed institutional version with data protections and required AI literacy training, or leave people to use the consumer product on personal accounts with no protections at all. Provost Ann Stevens made a similar case in a public Q&A, calling the agreement "a starting point not a final answer" — time-limited, evaluative, and not a mandate for classroom use.
The three-year contract covers 100,000 users and is renewable annually. The first year's $2.1 million cost, which works out to roughly $21 per seat, comes from the CU system office. After that, if the contract is renewed, individual campuses would begin absorbing the cost as an operating expense.
The Governance Problem
The more persistent complaint isn't about the contract terms — it's about process. Flynn Zook, a CU Denver junior who organized student opposition, told the Daily Camera that the "lack of transparency on such a major issue is frankly alarming." A student survey found 94% of 289 respondents across CU campuses opposed the deal.
This pattern is not unique to Boulder. The American Association of University Professors, which represents 55,000 faculty nationwide, published a report warning that universities were adopting AI "uncritically" and with little transparency, as reported by The Guardian. USC faced similar faculty criticism over its own ChatGPT Edu deal in January 2026. The Los Angeles Times reported in December that OpenAI has used aggressive discounting to lock in university partnerships, a commercial strategy that accelerates adoption timelines and, in some cases, compresses the deliberation windows that shared governance is supposed to fill.
The CU administration's framing — "we need to engage with this, the question is how" — is reasonable. The faculty's complaint — "this decision was made without us" — is also reasonable. Both can be true. What they reveal together is that higher education doesn't yet have a governance architecture that moves at the pace AI contracts require.
Analysis: The specific terms of CU's contract are probably fine, or at least defensible. The de-identified data carve-out is worth watching but isn't obviously predatory. What's harder to dismiss is the gap between the scale of institutional commitment — $2.1 million, three years, 100,000 users — and the process that produced it. When a university buys a new building, faculty governance is involved. When it buys infrastructure that shapes how 100,000 people think and write, the decision apparently moves faster. That's the question CU's dissent letter is really asking, and it's not going to go away when the contract renews.

