SentinelOne caught a supply chain attack last week using a compromised version of LiteLLM. The entry point was an unlikely one: Claude Code, running with a flag that tells it to ignore permission checks.
On March 24, SentinelOne's autonomous detection system identified and blocked a trojaned version of LiteLLM, a widely used proxy layer for LLM API calls, as it executed malicious Python across multiple customer environments. The package had been compromised hours earlier. No analyst wrote a query. No SOC team triaged an alert. The system's AI flagged the behavior, classified it as malicious, and killed it across 424 related events in 44 seconds.
The attack was indirect and, in that sense, novel. The adversary, operating under the alias TeamPCP, first compromised Trivy, an open-source security scanner, on March 19. They used Trivy's access to obtain the LiteLLM maintainer's PyPI credentials and published two malicious versions, 1.82.7 and 1.82.8. The payload was embedded in proxy_server.py in version 1.82.7, and in a Python startup file in 1.82.8, meaning any script importing the package would trigger it.
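The mechanism that makes this delivery route so effective is worth spelling out: Python executes a module's top-level statements the moment the module is imported, so merely installing and importing a compromised release runs the attacker's code, no function call required. A minimal, harmless sketch of that import-time behavior (the module name and marker string here are invented for illustration; this is not the actual TeamPCP payload):

```python
import importlib.util
import pathlib
import tempfile

# Hypothetical stand-in for a trojaned module: the assignment below is a
# module-level statement, so it runs as soon as the module is imported.
pkg_src = 'executed = ["payload ran at import"]  # top-level: runs on import\n'

mod_path = pathlib.Path(tempfile.mkdtemp()) / "evil_pkg.py"
mod_path.write_text(pkg_src)

# Loading the file is equivalent to `import evil_pkg` -- its top-level
# code executes before the importer runs anything else.
spec = importlib.util.spec_from_file_location("evil_pkg", mod_path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

print(mod.executed)
```

A payload placed in an interpreter startup file goes one step further: it fires on every Python process start, whether or not the package is ever imported explicitly.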
In one customer environment, the infection arrived through Claude Code, Anthropic's AI coding assistant, running with the --dangerously-skip-permissions flag. That flag, which disables permission checks to allow autonomous operation, is what made the assistant an unwitting delivery mechanism. The tool updated LiteLLM to the compromised version as part of its normal workflow, without human review. The payload attempted to execute. SentinelOne's behavioral detection caught it.
The irony writes itself. SentinelOne's blog post frames the outcome as a proof of concept for AI-native security — the system saw and stopped the attack autonomously, on the day it launched, across every affected environment. But the attack succeeded as a proof of concept for a different class of risk: AI coding tools running with unrestricted permissions, pulling packages at speed, with no human in the loop to notice when one of those packages has been weaponized.
The behavioral detection worked. The architectural choice to run AI coding assistants with --dangerously-skip-permissions did not cause this incident, but it is the reason the delivery mechanism was available.
The campaign was not limited to LiteLLM. TeamPCP went on to compromise Checkmarx KICS and AST on March 23, and Telnyx on March 27. This was a coordinated, multi-stage operation exploiting the transitive trust embedded in open-source supply chains — a security tool compromised to compromise a security package, which was then used to compromise downstream environments.
For organizations running AI coding assistants in production, the lesson is not that behavioral detection failed. It is that the blast radius of an AI agent with unrestricted permissions includes every package that agent touches.
Sources: SentinelOne