Researchers and security developers are raising alarms about “slopsquatting”, a new form of supply chain attack that exploits AI-generated misinformation, commonly known as hallucinations. As developers rely more and more on coding tools such as GitHub Copilot, ChatGPT and DeepSeek, attackers are exploiting AI's tendency to invent software packages, tricking users into downloading malware.
What is slopsquatting?
The term slopsquatting was originally coined by Seth Larson, a developer at the Python Software Foundation, and later popularized by security researcher Andrew Nesbitt. It refers to cases where attackers register software packages that do not actually exist but are mistakenly suggested by AI tools; once live, these fake packages can contain harmful code.
If a developer installs one without verifying it, simply trusting the AI, they can unknowingly introduce malicious code into their project, giving hackers a backdoor into sensitive environments.
Unlike typosquatting, where malicious actors rely on human spelling errors, slopsquatting is based entirely on AI's flaws and on developers' misplaced trust in automated suggestions.
AI-hallucinated software packages are increasing
This problem is more than theoretical. A recent joint study by researchers from the University of Texas at San Antonio, Virginia Tech and the University of Oklahoma analyzed more than 576,000 AI-generated code samples from 16 large language models (LLMs). They found that nearly 1 in 5 packages suggested by AI did not exist.
“The average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat,” the study revealed.
Even more worrying, these hallucinated names were not random. Across multiple runs using the same prompts, 43% of hallucinated packages reappeared consistently, showing how predictable these hallucinations can be. As security firm Socket explains, this consistency gives attackers a roadmap: they can monitor AI behavior, identify repeated suggestions and register those package names before anyone else does.
The study also noted differences between models: CodeLlama 7B and 34B had the highest hallucination rates at more than 30%, while GPT-4 Turbo had the lowest rate at 3.59%.
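That repeatability is something defenders can measure too. The sketch below is a minimal illustration, not the study's actual tooling: it takes package names collected from repeated runs of the same prompt, counts how often each name recurs, and checks each against PyPI's public JSON endpoint (https://pypi.org/pypi/&lt;name&gt;/json), which returns a 404 for unregistered names. The sample run data and the non-existent package name are hypothetical placeholders.

```python
# Minimal sketch (not the study's tooling): count how often each AI-suggested
# package name recurs across repeated runs of the same prompt, then flag names
# that are not registered on PyPI. PyPI's JSON endpoint returns HTTP 404 for
# names that do not exist.
from collections import Counter
from urllib.error import HTTPError
from urllib.request import urlopen


def exists_on_pypi(name):
    """Return True if `name` is a registered project on PyPI."""
    try:
        with urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
            return resp.status == 200
    except HTTPError as err:
        if err.code == 404:
            return False
        raise


# Hypothetical suggestions gathered from several runs of the same prompt.
runs = [
    ["requests", "example-hallucinated-pkg", "numpy"],
    ["requests", "example-hallucinated-pkg", "pandas"],
    ["example-hallucinated-pkg", "requests"],
]

counts = Counter(name for run in runs for name in run)
for name, count in counts.most_common():
    if not exists_on_pypi(name):
        print(f"Unregistered name suggested in {count} run(s): {name}")
```

A name that is both unregistered and repeatedly suggested is exactly the kind of target a slopsquatter would look to claim first.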
How vibe coding could increase this security risk
A growing trend called vibe coding, a term coined by AI researcher Andrej Karpathy, may worsen the problem. It refers to a workflow where developers describe what they want and AI tools generate the code. This approach leans heavily on trust: developers often copy and paste the AI's output without fully vetting it.
In this environment, hallucinated packages become easy entry points for attackers, especially when developers skip manual review steps and rely solely on AI-generated suggestions.
How developers can protect themselves
To avoid falling victim to slopsquatting, experts recommend the following (a sketch after this list illustrates the first two checks):
- Manually verify all package names before installation.
- Use package security tools that scan dependencies for risks.
- Vet suspicious or newly published libraries.
- Avoid copying installation commands directly from AI suggestions.
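As a concrete example of the first two recommendations, the following sketch (an illustration under assumptions, not a specific vendor tool) reads package names from a plain requirements.txt, queries PyPI's public JSON API, flags names that are not registered at all, and warns when the latest release is only weeks old, a rough signal for a suspiciously new library. The file name, the 90-day threshold and the simple line parsing are illustrative choices.

```python
# Minimal pre-install check (illustrative, not a vendor tool): flag requirement
# names missing from PyPI and warn about very recently published packages.
# Handles plain names and simple "name==version" pins only.
import json
import sys
from datetime import datetime, timezone
from urllib.error import HTTPError
from urllib.request import urlopen


def fetch_metadata(name):
    """Return PyPI JSON metadata for a package, or None if it is not registered."""
    try:
        with urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
            return json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            return None
        raise


def check(requirements_path="requirements.txt", max_age_days=90):
    with open(requirements_path) as fh:
        names = [
            line.split("==")[0].strip()
            for line in fh
            if line.strip() and not line.strip().startswith("#")
        ]
    for name in names:
        meta = fetch_metadata(name)
        if meta is None:
            print(f"NOT FOUND on PyPI (possible hallucinated name): {name}")
            continue
        # Each file of the latest release carries an ISO 8601 upload timestamp.
        uploads = [f["upload_time_iso_8601"] for f in meta.get("urls", [])]
        if uploads:
            newest = max(
                datetime.fromisoformat(u.replace("Z", "+00:00")) for u in uploads
            )
            age_days = (datetime.now(timezone.utc) - newest).days
            if age_days < max_age_days:
                print(f"Recently published ({age_days} days ago): {name}")


if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
```

A check like this complements, rather than replaces, looking at a package's repository, maintainers and download history before trusting it.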
In the meantime, there is some good news: certain AI models are getting better at self-policing. GPT-4 Turbo and DeepSeek, for example, have shown they can detect and flag hallucinated packages in their own output with more than 75% accuracy, according to early internal tests.