Anthropic's AI was used by Chinese hackers to run a cyberattack


A few months ago, Anthropic published a report detailing how its Claude AI model had been weaponized in a "vibe hacking" extortion scheme. The company has continued to monitor how the agentic AI is being used to coordinate cyberattacks, and now claims that a state-backed group of hackers in China used Claude in an attempted infiltration of 30 corporate and political targets around the world, with some success.

In what it labeled "the first documented case of a large-scale cyberattack executed without substantial human intervention," Anthropic said the hackers first chose their targets, which included unnamed tech companies, financial institutions and government agencies. They then used Claude Code to develop an automated attack framework, after successfully bypassing the model's training to avoid harmful behavior. They achieved this by breaking the planned attack into smaller tasks that didn't obviously reveal its wider malicious intent, and by telling Claude that they were a cybersecurity firm using the AI for defensive training purposes.

After writing its own exploit code, Anthropic said, Claude was able to steal usernames and passwords that allowed it to extract "a large amount of private data" through backdoors it had created. The obedient AI reportedly even went to the trouble of documenting the attacks and storing the stolen data in separate files. The hackers used AI for 80 to 90 percent of their operation, intervening only occasionally, and Claude was able to orchestrate an attack in far less time than humans could have. It wasn't flawless, with some of the information it obtained turning out to be publicly available, but Anthropic said that attacks like this will likely become more sophisticated and effective over time.

You might be wondering why an AI company would want to publicize the dangerous potential of its own technology, but Anthropic says its investigation also serves as evidence of why the assistant is "crucial" for cyber defense.
It said Claude was successfully used to analyze the threat level of the data it collected, and the company ultimately sees it as a tool that can assist cybersecurity professionals when future attacks happen.

Claude is by no means the only AI that has benefited cybercriminals. Last year, OpenAI said that its generative AI tools were being used by hacker groups with ties to China and North Korea. The groups reportedly used generative AI to assist with debugging code, researching potential targets and drafting phishing emails. OpenAI said at the time that it had blocked their access to its systems.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropics-ai-was-used-by-chinese-hackers-to-run-a-cyberattack-142313551.html?src=rss