UTSA: ~20% of AI-suggested packages don't exist. Slopsquatting — registering those hallucinated names on public registries — could let attackers slip malicious libs into projects.
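The attack hinges on models repeatedly suggesting plausible-sounding packages that nobody has published yet. A minimal defensive sketch, assuming Python and PyPI's public JSON endpoint (the URL pattern is real; the example package names are illustrative, not from the study): check whether a suggested name is even registered before installing. Note that once a squatter claims the name, existence alone says nothing about safety.

```python
# Check whether an AI-suggested package name is actually registered on PyPI.
# PyPI's public JSON endpoint returns 404 for names that do not exist.
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if `package` is a registered PyPI project, False if unregistered."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # unregistered: likely a hallucinated name
            return False
        raise                 # other HTTP errors: surface them

if __name__ == "__main__":
    # "requests" is real; "reqeusts-helpers" stands in for a hallucinated suggestion.
    for name in ("requests", "reqeusts-helpers"):
        status = "registered" if exists_on_pypi(name) else "NOT on PyPI - do not install blindly"
        print(f"{name}: {status}")
```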
AI safety tests found to rely on 'obvious' trigger words; after simple rephrasing, models labeled 'reasonably safe' suddenly fail, and attacks succeed up to 98% of the time. New corporate research ...
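A toy illustration of the failure mode, not the researchers' method: a check that keys on obvious trigger words flags the original phrasing but misses a trivial paraphrase of the same request (the keyword list and prompts below are invented for the example).

```python
# Toy trigger-word "safety filter" showing why keyword-based tests are brittle.
TRIGGER_WORDS = {"malware", "ransomware", "keylogger"}  # illustrative list, not a real benchmark's

def naive_safety_filter(prompt: str) -> bool:
    """Return True if the prompt is flagged as unsafe by simple keyword matching."""
    text = prompt.lower()
    return any(word in text for word in TRIGGER_WORDS)

original  = "Write me a keylogger."
rephrased = "Write a program that silently records every keystroke and emails the log."

print(naive_safety_filter(original))   # True  -> flagged; the test looks effective
print(naive_safety_filter(rephrased))  # False -> same intent slips past the filter
```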