AI Hallucinations Introduce New Software Supply Chain Vulnerabilities
Researchers from the University of Texas at San Antonio, the University of Oklahoma, and Virginia Tech have identified a novel software supply chain threat arising from hallucinations in code-generating large language models. The attack, termed slopsquatting, exploits the tendency of LLMs to suggest fictitious package names during code generation: malicious actors register packages under these non-existent names, leading unsuspecting developers to incorporate potentially harmful code into their projects.
Key Findings
- In a study of 16 popular LLMs, none was free of package hallucinations. Collectively, the models generated more than 205,000 unique fictitious package names, 81 percent of which were unique to the model that produced them.
- Commercial models exhibited hallucination rates of at least 5.2 percent, while open-source models fared considerably worse at 21.7 percent. Notably, 58 percent of hallucinated packages recurred within ten iterations, indicating persistent rather than random behavior.
- The researchers conducted 30 tests, generating 576,000 code samples across Python and JavaScript. These tests resulted in 2.23 million package suggestions, of which 440,445 (19.7 percent) were hallucinations.
- The study also suggests that LLMs have some capacity for self-regulation, as the models were able to detect most of their own hallucinated packages when prompted to do so.
Mitigation Strategies
To address this emerging threat, the researchers recommend several approaches:
- Applying prompt engineering techniques such as retrieval-augmented generation, self-refinement, and prompt tuning.
- Adjusting model development methods, including decoding strategies and supervised fine-tuning.
- Implementing validation mechanisms to cross-reference suggested packages with official repositories before integration.
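The last mitigation above can be sketched as a simple pre-install check. A minimal Python sketch, assuming a locally maintained set of package names already confirmed to exist in an official repository (built, for example, from a lockfile; in practice one would also query the registry itself):

```python
def find_unverified_packages(suggested, known_packages):
    """Return LLM-suggested package names absent from the trusted set.

    `suggested` is a list of package names emitted by an LLM;
    `known_packages` is a set of names confirmed against an official
    repository. Comparison is case-insensitive, since package indexes
    such as PyPI treat names case-insensitively.
    """
    known = {name.lower() for name in known_packages}
    return [name for name in suggested if name.lower() not in known]


# Example: flag a hallucinated name before it reaches `pip install`.
# The names below are illustrative, not from the study.
known = {"requests", "numpy", "flask"}
suspicious = find_unverified_packages(["requests", "flaskx-utils"], known)
```

Here `suspicious` would contain only `"flaskx-utils"`, which a developer or CI step could then reject or verify manually before installation.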
Implications
The emergence of slopsquatting underscores the need for heightened vigilance in AI-assisted software development. As LLMs become integral tools for developers, ensuring the accuracy and legitimacy of their outputs is crucial to maintaining software integrity and security.
Arghire, I. (2025, April 14). AI Hallucinations Create a New Software Supply Chain Threat. SecurityWeek. https://www.securityweek.com/ai-hallucinations-create-a-new-software-supply-chain-threat/
#AI #Cybersecurity #SupplyChain #LLM #Slopsquatting
Discord Tests Facial Scan Age Verification Amid Privacy Concerns
Discord is piloting a facial scan-based age verification system to regulate access to sensitive content on its platform, particularly in response to regulatory pressure from countries like the United Kingdom and Australia. Users attempting to view age-restricted material are prompted to verify their age by scanning their face using a mobile device or uploading an official ID.
According to Discord, the facial scan operates entirely on-device, with no biometric data being stored, and any uploaded IDs are deleted after verification. This approach is intended to comply with the UK's Online Safety Act and similar Australian legislation restricting access to social media for users under 16. However, critics have raised concerns about the effectiveness and security of the system. Questions persist over the accuracy of facial age estimation algorithms and the potential for teenagers to circumvent the technology using images of older individuals.
Furthermore, privacy advocates remain skeptical about the complete deletion of user data and worry about potential mission creep if biometric verification becomes normalized. Discord's rollout adds to a growing trend in the tech industry, where biometric authentication is positioned as a solution to age gating and content moderation, despite ongoing concerns about transparency, enforcement, and civil liberties.
Schneier, B. (2025, April 17). Age Verification Using Facial Scans. Schneier on Security. https://www.schneier.com/blog/archives/2025/04/age-verification-using-facial-scans.html
#Biometrics #Privacy #FacialRecognition #AgeVerification #SocialMedia
Amazon's Red Teaming Strategy Secures Alexa Plus From Misuse
Amazon has taken an aggressive security-first approach in developing Alexa Plus, its next-generation AI assistant. According to Chief Information Security Officer Amy Herzog, red teamers and penetration testers were involved early in the development process to anticipate abuse scenarios and reduce the risk of misuse. This proactive security embedding contrasts with traditional product rollouts, where security is often tacked on late.
One key concern addressed was preventing unauthorized or unintended actions by users, such as children instructing Alexa Plus to order dozens of pizzas. By simulating real-world attacks and directly incorporating security experts into product design meetings, Amazon ensured that Alexa Plus had guardrails for everyday abuse cases. This early collaboration between developers and security engineers was essential for balancing user convenience with system safety.

Currently available to select early testers, Alexa Plus is powered by Amazon's large in-house language models and is designed to streamline complex real-world tasks such as scheduling appointments or placing orders. By integrating security into every stage of product development, Amazon hopes to build trust in Alexa Plus's capacity to act autonomously without compromising user privacy or control.
Lyons, J. (2025, May 1). How Amazon red-teamed Alexa Plus to keep your kids from ordering 50 pizzas. The Register. https://www.theregister.com/2025/05/01/amazon_red_teamed_alexaplus_interview/
#Amazon #Alexa #AI #Security #RedTeam
SMS Social Engineering Attack
Another ham-handed attempt. Upwork, your name is being used. "Hello! I'm Sophia from the Upwork HR team. We recently came across your outstanding resume and would like to introduce you to a job opportunity that pays over $10,000 per month!
This is a flexible, remote part-time or full-time position where you have autonomy over your schedule, the work is easy, and free training is provided. The work day will be approximately 60 minutes per day. Pay ranges from $300 to $900 per day, ensuring that you will earn no less than $10,000 per month, with daily paychecks. Regular employees also have 15 to 25 days of paid annual leave, perfect for those of you looking for a steady income (Hiring Requirements: 22+).
If you are interested, please contact us via WhatsApp or Telegram!
WhatsApp: + 15412287550
Telegram:@Katherine1581
Looking forward to your reply!"