Researchers detail how a prompt injection attack bypassed Apple Intelligence protections


A now-corrected flaw allowed researchers to circumvent Apple’s restrictions and force the on-device LLM to execute attacker-controlled actions. Here’s how they did it.