How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns
Anthropic’s study shows that as few as 250 malicious documents are enough to poison even massive AI models.