OpenAI and Anthropic brief Congress in private as cyber AI pressure rises
Axios reported that OpenAI and Anthropic gave House Homeland Security Committee staff a private briefing on cyber-capable AI models, that is, systems the companies say can help find software flaws and support other hacking-related work. If you build software, the pressure is simple: the lawmakers weighing how to handle those systems are getting some of their first detailed warnings in private, from the companies selling the models.
OpenAI told Axios that the Homeland Security session was one of several briefings it held with House and Senate committees last week, along with a White House briefing. House Homeland Security Chair Andrew Garbarino told Axios that government-industry partnerships are essential both for defensive AI use and for protecting U.S. AI development from adversaries, including China. The news is not just one closed-door meeting: the briefing sat inside a broader Washington sprint by the labs making the biggest public claims about AI and cyber risk.
Those claims have been arriving through company-run programs as much as through public evidence. In a recent post, OpenAI said the highest tiers of its Trusted Access for Cyber program will get GPT-5.4-Cyber, a version tuned for stronger cyber work with fewer restrictions, and that the program is expanding to thousands of verified defenders and hundreds of teams responsible for critical software. In its Project Glasswing announcement, Anthropic said its unreleased Claude Mythos2 Preview model can outperform nearly all humans at finding and exploiting software vulnerabilities, and said it is committing up to $100 million in usage credits plus $4 million in donations to security groups.
Both labs are also building government-facing lanes around those systems. Anthropic said this month that its new National Security and Public Sector Advisory Council will help deepen public-private partnerships and support industry standards, and said it had introduced Claude Gov models for U.S. national security customers. OpenAI launched Trusted Access for Cyber in February and is now widening that program instead of releasing its strongest cyber tooling broadly.
Congress was already moving in this direction before the Axios scoop. CNBC reported that Vice President JD Vance and Treasury Secretary Scott Bessent questioned tech leaders, including Sam Altman and Dario Amodei, last week about AI model security and responses to cyberattacks, before Anthropic released Mythos. The Washington Post reported that Garbarino wants more visibility into suspicious AI chatbot queries as Congress negotiates a national AI framework. A December hearing page from the House Homeland Security Committee shows lawmakers were already framing AI-assisted cyberattacks as a live problem, and an April 13 committee release shows that Garbarino and Rep. Vince Fong held cyber roundtables with CISA and regional stakeholders this month.
That does not prove OpenAI or Anthropic used the Homeland Security session to push a specific private policy ask; the public record is thinner than that. What it does show is a policymaking lane where private lab briefings, company-run access programs, and public warnings are arriving faster than independent evidence about what these systems can actually do.
The hard part is still public proof. The meeting is reported. The access programs are real. But many of the strongest public claims about what these models can do still come from the labs themselves.
What to watch next is whether this private warning circuit turns into something outsiders can inspect: draft language on AI query visibility, a public hearing record, or any clearer evidence of what lawmakers think they learned and what standard of proof they think is enough to act.