Federal prosecutors are weighing whether to designate the firebombing of Sam Altman's home an act of domestic terrorism. The more revealing fact may be what Daniel Moreno-Gama said before he came to San Francisco.
Months before the April 10 attack on Altman's house, Moreno-Gama messaged producers of a podcast called The Last Invention with a specific suggestion: "Luigi'ing some tech CEOs." The reference was to Luigi Mangione, the man charged with shooting UnitedHealthcare CEO Brian Thompson in December 2024. The exchange, reviewed by The Wall Street Journal and cited by Fox News, took place before Moreno-Gama ever traveled from Texas. He was not writing to a co-conspirator. He was floating a methodology to strangers, in public, and nobody stopped him.
The DOJ has acknowledged the Mangione connection obliquely. In announcing the charges, Acting U.S. Attorney Troy Rivette said that if evidence showed Moreno-Gama executed the attacks "to change public policy or to coerce government and other officials," prosecutors would treat it as domestic terrorism. The question — whether anti-AI extremism constitutes a political ideology or a personal grievance — is now central to how federal prosecutors build the case.
The distinction matters. Domestic terrorism charges carry broader surveillance authority and a more severe sentencing framework. The defense, led by Diamond Ward, has already said Moreno-Gama is autistic and was in an acute mental health crisis at the time of the attack, according to SF Standard. Prosecutors will need more than a manifesto to rebut that argument. What they have, much of it still under seal, is described by security analysts as the most significant signal in the entire record.
The manifesto's first section, titled "Your Last Warning," did not philosophize. It listed names, addresses, and instructions: the home addresses of multiple AI executives and investors, and a call to kill them. The second section, titled "Some more words on the matter of our impending extinction," framed AI as an existential threat to humanity. The third section addressed Altman personally, telling him that if he survived, he should take it as a divine sign to redeem himself, according to DNYUZ.
The full text has not been made public. Court filings do not name the other executives on the list. The DOJ press release states the document "advocated against AI and for the killing and commission of other crimes against CEOs of AI companies and their investors." Moreno-Gama emailed a version to Lone Star College, his former school, on the same day as the attack — suggesting the document was not written in a single impulsive moment but carried with him as an intentional communication, according to The Guardian.
What Moreno-Gama did not carry into public view was a record of violent intent. Records from PauseAI, a public Discord server focused on AI risk, reviewed by The Guardian, show he posted 34 messages over roughly two years — none containing explicit calls to violence. Several months before the firebombing, he joined Stop AI, a separate online community, introduced himself, and asked a question that reads differently now than it did at the time: "Will speaking about violence get me banned?" The answer was yes. He then ceased all activity in that community.
The episode is revealing: Moreno-Gama understood the norms of these communities and calibrated his public statements accordingly. His private radicalization — whatever spaces he moved into after being warned — is not visible in the public record. The gap between his non-violent Discord presence and his actual plan is precisely where the copycat risk lives.
"Every time an act like this succeeds without immediate consequences, the probability of imitation increases," said Bruce Hoffman, who studies political violence at Georgetown University's Center for Peace and Security Studies, in an interview with SF Standard. "This is not hypothetical. This is the documented pattern."
The attack on Altman arrives at a moment of acute tension between the AI industry and the public it serves. According to an NBC News poll, 46 percent of Americans hold negative feelings about AI — a lower net rating than ICE. More than half of respondents in Stanford's 2026 AI Index Report, published April 14, said products using AI made them feel nervous. AI was cited in more than 55,000 U.S. layoffs in 2025 — more than 12 times the number attributed to the technology two years earlier, according to Challenger, Gray & Christmas. At least 25 data center projects were canceled following local pushback in 2025 alone, four times more than in 2024. At least $18 billion in data center projects have been blocked and another $46 billion delayed over the past two years, per Data Center Watch.
These numbers describe a population that is not merely skeptical of AI but actively organizing against its physical infrastructure. The connection to individual violence is not automatic. Altman himself has written that fear of AI was, in some sense, justified. That framing — AI as an existential threat, AI as something that might end human civilization — is now standard language in the industry. The gap between "this technology might destroy humanity" as a research conclusion and "this CEO deserves to die" as a personal conclusion is vast. Moreno-Gama crossed it. The question the industry has not yet answered is whether its own rhetoric made that crossing easier to imagine.
What comes next depends largely on what prosecutors find in Moreno-Gama's communications and devices, and on whether a judge agrees that a manifesto calling for killing AI executives constitutes political violence rather than personal grievance. The domestic terrorism designation, if applied, would give prosecutors broader tools and a stiffer sentencing framework. For now, the immediate effect is operational: every AI company with a public-facing executive is running the same calculation about whether its leadership is now a target. For anyone whose name has appeared in an anti-AI document, the answer is almost certainly yes.