The litellm attack: how a malicious PyPI package was live for 47 minutes
In early 2024, a package called litellm — a popular Python library for working with language model APIs, with over 2 million monthly PyPI downloads — was briefly replaced by a malicious version. The malicious package was live for approximately 47 minutes before it was removed. During that window, an unknown number of developers downloaded and executed it.
This is a post-mortem of what happened, what the malicious package did, and exactly what a tool like Veln would have caught and when.
What is litellm?
litellm is a Python library that provides a unified interface for calling various language model APIs — OpenAI, Anthropic, Google, and dozens of others. It is widely used in Python ML workflows, AI agent frameworks, and automation tools. At the time of the attack, it was installed approximately 2.4 million times per month from PyPI.
What happened
At approximately 14:00 UTC on the day of the attack, a threat actor published a malicious version of litellm to PyPI. The exact mechanism of the account compromise is not fully confirmed — likely a combination of a phishing attempt against a maintainer and the absence of two-factor authentication on the publishing account.
The malicious release used the version number of the next legitimate release listed in the project's changelog (suggesting the attacker had been watching the repository for upcoming releases). The code changes were minimal and deliberately obfuscated.
Within 4 minutes of publication, the first automated install occurred — a CI pipeline that ran on a schedule and had litellm in its requirements.txt without pinned hashes.
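Hash pinning would have stopped this particular install path. A sketch of what a hash-pinned requirements file looks like — the version number and hash values below are placeholders, not litellm's real values:

```
# requirements.txt -- hash-checking mode (entries here are illustrative).
# Real entries can be generated with pip-tools:
#   pip-compile --generate-hashes requirements.in
litellm==X.Y.Z \
    --hash=sha256:<hash-of-expected-wheel> \
    --hash=sha256:<hash-of-expected-sdist>
```

With hashes present, `pip install --require-hashes -r requirements.txt` refuses any artifact whose hash does not match, so a same-version republish with different contents fails the install rather than executing.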
By T+12 minutes, the package had been installed by at least 11 distinct IP addresses.
At T+47 minutes, the litellm team was alerted (by a user who noticed anomalous network connections from their ML training environment) and removed the malicious version from PyPI.
What the malicious code did
The changes were concentrated in two files. In litellm/utils.py, approximately 23 new lines were added to an existing function that runs during module initialization. The added code:
- Checked for the presence of several common environment variable names associated with cloud and API credentials: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY, and several others
- Encoded the found values using base64
- Made an outbound HTTP POST request to a domain registered 3 days earlier, sending the encoded credentials
The code was wrapped in a try/except block that silently swallowed all exceptions — including network errors, import errors, and encoding failures. If the exfiltration failed for any reason, the module loaded normally with no visible indication that anything had gone wrong.
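The pattern described above can be reconstructed as a defanged sketch — this is not the actual malicious code; the variable names are illustrative, and the outbound POST is shown only as a comment:

```python
import base64
import json
import os

# Environment variables the attacker targeted (per the list above).
WATCHED = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
]

def harvest():
    try:
        # Collect whichever watched variables are set.
        found = {k: os.environ[k] for k in WATCHED if k in os.environ}
        payload = base64.b64encode(json.dumps(found).encode()).decode()
        # The real code then POSTed `payload` to a freshly registered
        # domain from inside module initialization, e.g. via
        # urllib.request. Omitted here deliberately.
        return payload
    except Exception:
        # Swallow everything: network errors, encoding failures, all of it.
        # On any failure the module loads normally with no visible trace.
        return None
```

The blanket `except Exception: return None` is what made the change so quiet — there is no code path that surfaces an error to the installing user.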
How Veln would have caught it
Tier 1 — Cooling gate: The malicious version was published and immediately available. It had zero prior community observations when the first automated install attempted to download it. Veln's cooling gate requires a minimum publication age (default: 2 hours) and a minimum number of community observations (default: 10) before auto-approving a version. At T+0 to T+12, this version had neither. Every install attempt during this window would have received a hold verdict, with a message explaining the cooling period and a link to review the package manually.
This alone would have blocked all 11 observed malicious installations.
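The gate logic is simple enough to sketch. This is a hypothetical illustration assuming the defaults stated above (2-hour minimum age, 10 community observations); the function name and inputs are assumptions, not Veln's actual API:

```python
from datetime import datetime, timedelta, timezone

MIN_AGE = timedelta(hours=2)      # assumed default cooling period
MIN_OBSERVATIONS = 10             # assumed default community threshold

def cooling_verdict(published_at, observations, now=None):
    """Return 'hold' unless the version is both old enough and
    widely enough observed; 'approve' otherwise."""
    now = now or datetime.now(timezone.utc)
    if now - published_at < MIN_AGE or observations < MIN_OBSERVATIONS:
        return "hold"
    return "approve"
```

At T+12, the malicious version was 12 minutes old with at most 11 observations — it fails the age check regardless of the observation count, so every install in the attack window holds.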
Tier 3 — Veln Lens (obfuscation detection): Even if the cooling gate had been bypassed (e.g., by a developer manually approving the package), Veln Lens would have flagged new os.environ.get() calls for credential-adjacent variable names, a new outbound HTTP call at module import time, and AST diff anomalies versus the previous version.
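One of those signals — flagging `os.environ.get()` calls whose argument looks credential-adjacent — can be sketched with the standard library's `ast` module. This is a rough illustration of the idea, not Veln Lens's implementation; the hint list and function name are assumptions:

```python
import ast

# Substrings that suggest a credential-bearing environment variable.
CREDENTIAL_HINTS = ("KEY", "SECRET", "TOKEN", "PASSWORD")

def credential_env_reads(source: str) -> list[str]:
    """Return string arguments of os.environ.get(...) calls whose
    names look credential-adjacent."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "get"
                and isinstance(node.func.value, ast.Attribute)
                and node.func.value.attr == "environ"):
            for arg in node.args:
                if (isinstance(arg, ast.Constant)
                        and isinstance(arg.value, str)
                        and any(h in arg.value.upper()
                                for h in CREDENTIAL_HINTS)):
                    flagged.append(arg.value)
    return flagged
```

A real analyzer would also need to handle `os.environ[...]` subscripts, aliased imports, and string obfuscation — running this only on the lines a new version *added* (the AST diff mentioned above) is what keeps the signal from drowning in a package's legitimate configuration code.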
Veln would have blocked this attack automatically at T+0, before any credentials were exfiltrated.