Attackers are exploiting AI faster than defenders can keep up, new report warns


Cybersecurity is entering “a new phase” as artificial intelligence tools mature, giving IT defenders significantly less time to respond to cyberattacks and other threats, according to a new report released Monday.

The report, authored by federal contractor Booz Allen Hamilton, concludes that threat actors have adopted AI more quickly than governments and private companies have adopted it for cyber defense. It points to multiple incidents over the past two years, like attacks carried out with the help of Anthropic’s Claude, that show both cybercriminals and state-sponsored hacking groups are moving and scaling faster than ever before.

Brad Medairy, executive vice president and lead for Booz Allen’s cyber business practice, told CyberScoop that one of the biggest advantages LLMs have given attackers is the ability to identify places where the windows are “slightly open” (obscure weaknesses in a system, such as a perimeter vulnerability) and then quickly use an exploit to establish persistence.

“If you have a vulnerability in your perimeter and the adversary gets inside the wall, at that point they’re going to be moving at machine speed,” he said.

Booz Allen’s report argues that most defensive cybersecurity operations, by contrast, still rely on slower, human-oriented processes that struggle to keep up with that faster tempo. For example, when the Cybersecurity and Infrastructure Security Agency adds a CVE to its Known Exploited Vulnerabilities list, defenders are typically given 15 days to implement a patch.
That timeline would be insufficient against something like HexStrike, an open-source AI security framework popular with cybercriminals that was used to exploit “thousands” of Citrix NetScaler products in less than 10 minutes using a single critical CVE.

Booz Allen Hamilton sells AI cybersecurity tools, but the report’s primary conclusions fall in line with what other third-party and independent cybersecurity experts say: namely, that large language models have been a boon to cybercriminals and nation-states.

The report describes two general models malicious actors have for using AI. In one, AI becomes an amplifier for individual hacking operations. This approach uses LLMs to add speed and scale to what hackers are already doing, while keeping a human in the loop on key decisions. Using this approach, “a single operator using agentic tooling can run reconnaissance, exploitation and follow-on actions across dozens of targets at once.” The other model, called “orchestration,” is more akin to vibe coding: connecting the LLM to offensive security tools, pointing it at a target, and setting the agent’s limits and parameters.

Medairy said it’s likely that regulation and policies around AI will continue to lag behind its development, forcing cybersecurity officials to make hard decisions about shifting to automated and AI-assisted defenses to keep up. In this scenario, organizations would plan and run tabletop exercises ahead of time to game out how their AI agents should respond to an ongoing attack, what limits or parameters to set, and what assets to prioritize.

But there are real risks to handing over critical cyber or IT functions to an AI system.
Amazon has dealt with multiple outages tied to automated, AI-assisted software changes, and recently required its senior engineers to personally sign off on any AI-assisted code changes.

Medairy acknowledged the risks but noted that “the adversary gets a vote” and has already moved to exploit AI systems for offensive security, so defenders will have to reevaluate what “acceptable risk tolerance” looks like when it comes to defense at machine speed.

“I think that we’re going to be forced to kind of move outside of our comfort zone and really embrace some of this more automated remediation much faster than we’re probably comfortable with,” he said.

The post Attackers are exploiting AI faster than defenders can keep up, new report warns appeared first on CyberScoop.