Technical explainer

The supply chain attack surface of LLM-generated code

Language models have become a standard part of software development workflows. Developers use ChatGPT, Claude, Copilot, and similar tools to suggest code, recommend libraries, and generate implementation boilerplate. This has introduced a supply chain attack vector that traditional defenses were not designed for: the package name arrives from a model's output rather than from documentation, search results, or a colleague.

The hallucination problem

LLMs sometimes recommend packages that don't exist. This is well-documented: a model's training data has a cutoff, so it knows nothing about packages published afterward, and, more importantly, it sometimes generates plausible-sounding package names that have never existed on npm or PyPI.

When an LLM says "use the validate-json-schema package for this" and validate-json-schema doesn't exist on npm, the developer runs npm install validate-json-schema and gets a 404 error. That's the benign case.

The malicious case: an attacker watching LLM outputs (either through red-teaming research of their own or by monitoring which non-existent package names attract registration and install attempts) notices that LLMs frequently recommend validate-json-schema. They register it on npm with malicious content. Now any developer who gets that recommendation and runs npm install validate-json-schema installs a malicious package instead of hitting a 404.

How attackers exploit LLM hallucinations

This attack pattern — sometimes called "package hallucination attacks" — has three steps:

  1. Research: Use LLMs to generate common code patterns in a specific domain (data validation, authentication, ML preprocessing, etc.) and note which package names appear in the suggestions. LLMs are consistent in what they recommend, so the same non-existent package name may appear thousands of times across different user sessions. (The sketch after this list shows how candidate names can be checked against the registry.)

  2. Registration: Register the hallucinated package names on npm or PyPI with a simple placeholder package initially — just enough to claim the name. Monitor for install attempts.

  3. Activation: Once install attempts are observed (indicating developers are using the hallucinated package name), publish a version with a malicious payload.
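
The research step comes down to a registry lookup at scale. Below is a minimal sketch of that check in Python, using only the standard library and npm's public registry endpoint (registry.npmjs.org returns 404 for unregistered names). The candidate names are hypothetical examples of what might be harvested from repeated LLM prompts; the same lookup works in reverse for a defender auditing suggestions.

    import urllib.error
    import urllib.request

    def exists_on_npm(name: str) -> bool:
        """True if the name resolves on the public npm registry."""
        try:
            with urllib.request.urlopen(f"https://registry.npmjs.org/{name}", timeout=10):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:   # unregistered: a squattable name
                return False
            raise                 # any other HTTP error: don't guess

    # Hypothetical names harvested from repeated LLM prompts in one domain.
    candidates = ["validate-json-schema", "lodash", "json-schema-validate-utils"]
    for name in candidates:
        verdict = "exists" if exists_on_npm(name) else "UNREGISTERED (squattable)"
        print(f"{name}: {verdict}")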

Why this is particularly insidious

The developer did nothing obviously wrong. They received a recommendation from a tool they trust, checked that the package exists on npm (it does — the attacker put something there), and installed it. There was no typo, no suspicious source, no obvious warning sign.

The packages land directly in developer environments. LLM-hallucinated packages get installed into development and data science environments attached to legitimate projects. Those environments hold valuable credentials: cloud keys, registry tokens, CI secrets, exactly what supply chain attackers are after.

Volume advantage for attackers. Registering 100 hallucinated package names costs nothing. If even 1 in 100 gets significant install volume, the return on investment is positive.

Documented cases

Security researchers at multiple firms, including Vulnu and Lasso Security, have published research demonstrating successful package hallucination attacks. In their experiments:

  • Hallucinated packages were installed in up to 30% of sessions where an LLM recommended a non-existent package
  • Some hallucinated package names were installed thousands of times in the week after registration

What to do when an LLM recommends a package

Before running npm install <package-name> or pip install <package-name> based on an LLM recommendation:

  1. Verify the package on the registry. Check npmjs.com/package/<name> or pypi.org/project/<name>. Does it exist? How old is it? How many downloads does it have? (The sketch after this list automates these lookups.)

  2. Check the source code. Click through to the GitHub repository. Is there real source code? Does the code actually implement what the LLM said it would?

  3. Check the publisher. When was the account created? Do they have other packages?

  4. Search for the package independently. If the LLM recommends validate-json-schema, search npm for "JSON schema validation". If the package doesn't appear in search results or community recommendations, it may be a hallucination.
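
Steps 1 and 3 can be scripted. npm exposes package metadata at registry.npmjs.org and download counts at api.npmjs.org; the Python sketch below pulls a package's creation date, weekly downloads, maintainers, and repository link. The 30-day and 100-download thresholds are illustrative red flags, not a recommendation.

    import json
    import urllib.request
    from datetime import datetime, timezone

    def vet_npm_package(name: str) -> None:
        meta_url = f"https://registry.npmjs.org/{name}"
        dl_url = f"https://api.npmjs.org/downloads/point/last-week/{name}"

        with urllib.request.urlopen(meta_url, timeout=10) as resp:
            meta = json.load(resp)
        with urllib.request.urlopen(dl_url, timeout=10) as resp:
            downloads = json.load(resp).get("downloads", 0)

        created = datetime.fromisoformat(meta["time"]["created"].replace("Z", "+00:00"))
        age_days = (datetime.now(timezone.utc) - created).days

        print(f"package:     {name}")
        print(f"created:     {created.date()} ({age_days} days ago)")
        print(f"last week:   {downloads} downloads")
        print(f"maintainers: {[m['name'] for m in meta.get('maintainers', [])]}")
        print(f"repository:  {meta.get('repository', 'none listed')}")

        # Illustrative red flags, not authoritative thresholds:
        if age_days < 30 or downloads < 100:
            print("WARNING: young and/or rarely installed; verify before trusting")

    vet_npm_package("express")   # a long-established package, for contrast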

How Veln catches this

An attacker who registers a hallucinated package name starts with zero community observations and a new publisher account. When you run npm install validate-json-schema (from an LLM recommendation), Veln's Tier 1 checks apply:

  • Publisher account age: a new account registers near zero on the publisher reputation signal
  • Community observations: zero prior installs → cooling gate HOLD
  • Veln Lens: if the package contains malicious code, the AST analysis and obfuscation rules will flag it

The cooling gate is the key protection here. A package registered yesterday specifically to catch LLM hallucination victims will never have accumulated the community observations Veln requires for auto-approval. Every install from the first one to the thousandth will receive a HOLD verdict — with a message explaining that the package has insufficient community trust.
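
To make the gate's behavior concrete, here is a simplified Python sketch of the policy shape. The signal names and thresholds are hypothetical stand-ins; Veln's actual tiers, inputs, and cutoffs are its own.

    from dataclasses import dataclass

    @dataclass
    class PackageSignals:
        publisher_age_days: int       # how old the publishing account is
        community_observations: int   # prior installs observed across users
        lens_flagged: bool            # static analysis found malicious patterns

    def cooling_gate_verdict(s: PackageSignals) -> str:
        if s.lens_flagged:
            return "BLOCK: static analysis flagged malicious patterns"
        if s.community_observations == 0:
            return "HOLD: no prior community observations for this package"
        if s.publisher_age_days < 30:           # hypothetical cutoff
            return "HOLD: publisher account too new to establish reputation"
        return "ALLOW"

    # A freshly squatted hallucinated name fails the gate on every install:
    squatted = PackageSignals(publisher_age_days=1, community_observations=0,
                              lens_flagged=False)
    print(cooling_gate_verdict(squatted))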


LLMs hallucinate package names. Attackers register them. Veln's cooling gate holds every install of a new, unobserved package.