Security research
Slopsquatting: when LLM hallucinations become an attack surface
When an LLM hallucinates a plausible-sounding package name in generated code, attackers can register that name and publish a package carrying a malicious payload. It is the inverse of typosquatting: the wrong name comes from the model, not from a developer's typo.
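To make the defensive side concrete, here is a minimal sketch of a pre-install check: it queries the public PyPI JSON API to see whether each name pulled from LLM output is registered at all, and how old its first upload is, since an unregistered name is a slopsquatting target and a very young package deserves scrutiny. The example names and the 90-day freshness threshold are illustrative assumptions, not a vetted policy.

```python
# Sketch: vet package names found in LLM-generated code before installing.
# Assumes the public PyPI JSON API (https://pypi.org/pypi/<name>/json);
# the sample names and 90-day window below are illustrative.
from datetime import datetime, timezone, timedelta

import requests

PYPI_JSON = "https://pypi.org/pypi/{name}/json"
NEW_PACKAGE_WINDOW = timedelta(days=90)  # treat very young packages as suspect


def vet_package(name: str) -> str:
    """Classify a package name as 'missing', 'new', or 'established'."""
    resp = requests.get(PYPI_JSON.format(name=name), timeout=10)
    if resp.status_code == 404:
        # Unregistered: installing it would fail today, but an attacker
        # could still claim the name later (the slopsquatting window).
        return "missing"
    resp.raise_for_status()
    data = resp.json()

    # Earliest upload time across all release files on the project.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        return "missing"  # registered but no files published; treat as suspect
    age = datetime.now(timezone.utc) - min(uploads)
    return "new" if age < NEW_PACKAGE_WINDOW else "established"


if __name__ == "__main__":
    # Hypothetical names as they might appear in LLM output.
    for pkg in ["requests", "definitely-not-a-real-pkg-xyz"]:
        print(pkg, "->", vet_package(pkg))
```

A check like this only flags candidates; it cannot tell a legitimate new project from a freshly squatted name, so flagged packages still need manual review.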