How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns


Anthropic’s study shows that just 250 malicious documents are enough to poison even massive AI models.