Hallucinations May Open LLMs to Phishing Threats

The AI models sometimes steered users to incorrect URLs, giving bad actors an opening, Netcraft found.