Why AI Can’t Replace Cybersecurity Analysts

As we face an extreme downturn in cybersecurity hiring, the brunt of which entry-level candidates bear, I want to address the elephant in the room: AI. I spend a lot of my time providing career clinics and mentorship, and I truly understand this is one of the worst cybersecurity job markets for young people since 2000 or 2008. So, the intended audience of this blog is security leadership and seniors.

There are a multitude of reasons the job market is poor right now: staff reductions due to global economic uncertainty; too many graduates pitched unrealistic “skills shortages” by, at best, unaware universities and boot camps; and a push towards automation along with speculation about the ability of artificial intelligence to handle cybersecurity jobs. All three of those causes are papers in themselves. Let’s talk about AI.

First, let’s talk about what AI is good at. AI, particularly machine learning (ML) and large language models (LLMs), is good at handling large quantities of data, finding norms, and identifying deviations from those norms. Computers are good at handling more data, more efficiently, than human beings. That’s always been true, and the capability to do this has been improving with hardware and software development. However, the bottom line is that it is an interesting tool. It is a screwdriver. It’s not magic. It cannot solve every problem. AI can be used to solve some computer data problems at scale if it is configured and trained properly.

The Impact on Analysts

On the surface, that looks really good for getting rid of unskilled and entry-level cybersecurity work. That’s lots of log data and digital analysis, right? We have been using machine learning for a long time in cybersecurity. Rudimentary machine intelligence was used to identify trends in alerts and polymorphic code. It was then used to identify trends in phishing emails and flag baseline deviations. It has done this for the past twenty years with varying levels of efficiency and success. Yes, those capabilities are improving.

What does that mean for the people working in the SOC? It means less human triage of alarms. It means less staring at long CSV files and agonizing log outputs. It means redundant, monotonous computer data tasks can be automated and processed by SIEMs and EDR without significant human oversight. Which is great! These are the tasks that often burnt out analysts; they involve large quantities of computer data, and computers are best at quickly handling computer stuff.
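To make that concrete, here is a minimal sketch of the kind of baseline-and-deviation automation described above: learn what “normal” logon hours look like per user from historical records, then flag logons that stray far from that norm. The CSV layout, column names, and threshold are assumptions for illustration, not a description of any particular SIEM feature.

```python
# Minimal sketch (not production code): learn a per-user logon-hour baseline
# from historical authentication records, then flag new logons that deviate.
# The file names and 'user'/'hour' columns are assumed for illustration.
import csv
from collections import defaultdict
from statistics import mean, pstdev

def load_events(path):
    """Yield (user, hour-of-day) pairs from a CSV with 'user' and 'hour' columns."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["user"], int(row["hour"])

def build_baseline(events):
    """Learn each user's typical logon hour as a mean and standard deviation."""
    hours = defaultdict(list)
    for user, hour in events:
        hours[user].append(hour)
    # Require a handful of observations before trusting a baseline.
    return {u: (mean(h), pstdev(h)) for u, h in hours.items() if len(h) > 5}

def flag_deviations(baseline, events, threshold=3.0):
    """Yield logons more than `threshold` standard deviations from the user's norm."""
    for user, hour in events:
        if user not in baseline:
            continue  # no history: a human, or another rule, has to decide
        mu, sigma = baseline[user]
        if sigma and abs(hour - mu) / sigma > threshold:
            yield user, hour

if __name__ == "__main__":
    baseline = build_baseline(load_events("auth_history.csv"))
    for user, hour in flag_deviations(baseline, load_events("auth_today.csv")):
        print(f"Unusual logon hour for {user}: {hour:02d}:00")
```

A machine will happily grind through millions of rows of this without burning anyone out. Note what it cannot do: a stolen credential used at the owner’s usual hour looks perfectly normal to a check like this, which is exactly the problem the next section gets into.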
Stark Realities of Adversaries

Here’s the big gotcha: adversaries have AI too. They are also fully aware that defenders use it. And they are in the business of making money, doing espionage, and planning for sabotage.

When I first started really dealing with state adversary compromises around 2010, unfriendly nations were heavily reliant on complicated, custom malware and hacking tools. Really whiz-bang stuff. Exploits. At the time, our security tools were not great at detecting that computer stuff. It was slow going getting those deeply embedded adversaries out of some of the biggest names in the world. We mostly, eventually, managed. Our automated tools got much better at looking for similar tools and tactics. Computers, again, were good at detecting computers.

The same thing happened during the rise of ransomware in the mid-2010s. We saw a lot of use of exploit-driven worms to deliver large-scale ransomware across networks and the entire internet. They were quite effective and had major impacts that drew the world’s attention to the growing criminal ransomware market. The industry built computer tools like enterprise EDR to better detect and prevent these threats.

Adversary motivations have not changed. They still want to make money, do espionage, and do sabotage. That motivation does not go away because we make our defense tools better. What they did was pivot heavily to human-driven attacks, because AI/ML/whiz-bang computer tools are not great at detecting novel and unpredictable human and “living off the land” type attacks. Even the most advanced machine learning has a very hard time knowing whether the administrator logging in from an authorized source is Sue the domain admin, or somebody who has stolen Sue’s access and credentials.

Five basic rules I can set forth are these:

1. Computers are good at detecting computer-automated stuff, and should be used for this as much as possible.
2. Computers are not great at detecting novel human-driven stuff, and they never will be. Particularly abuse of authorized tools and access.
3. Adversary goals to intrude into networks will never change.
4. Adversaries will always invest in both AI tools and large human work forces.
5. Keeping these rules in mind, we will always need human defenders to keep up with novel human techniques, and to improve our automation-detecting and task-automating tools.

The Future of Security Analysis

That does not mean that a cybersecurity analyst trained five years ago can skate on their degree or training. It means analysts must keep growing and adapting to automation and adversary tactics. They must learn new tools, think critically, and leverage AI intelligently. An analyst’s work today looks incredibly different from an analyst’s job ten or twenty years ago. They are not spending the bulk of their time (in a sensible environment) just looking at unfiltered logs or triaging poorly tuned alarms. A 2025 analyst should be spending a great deal of their time doing hypothesis-driven threat hunting for threats that their automation will have challenges detecting (for a multitude of reasons). They should be tuning and improving automated detection and identifying gaps in coverage. Adversaries have the motivation and resources to hire the next generation of well-trained intruders, and they have access to all the same automation.
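For readers who want the hunting point made concrete, here is a minimal sketch of what a single hypothesis-driven hunt might look like under the hood. The hypothesis: an intruder living off the land will run built-in reconnaissance tools (whoami, net, nltest, quser) under accounts that have never run them before. The exported JSONL files, field names (`image`, `account`, `host`), and the tool list are assumptions for illustration; a real hunt would be shaped by an organization’s own telemetry and threat model.

```python
# Minimal hypothesis-driven hunt sketch, not a finished detection.
# Assumed input: process-creation events exported as JSON Lines with
# 'image' (executable path), 'account', and optional 'host' fields.
import json
from collections import defaultdict

RECON_TOOLS = {"whoami.exe", "net.exe", "net1.exe", "nltest.exe", "quser.exe"}

def load_process_events(path):
    """Read process-creation events exported as one JSON object per line."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def basename(image_path):
    """Return the lowercased executable name from a Windows path."""
    return image_path.lower().rsplit("\\", 1)[-1]

def hunt(history_path, recent_path):
    """Flag accounts running recon tools they never ran in the baseline period."""
    seen = defaultdict(set)  # account -> recon tools observed historically
    for event in load_process_events(history_path):
        image = basename(event["image"])
        if image in RECON_TOOLS:
            seen[event["account"]].add(image)

    for event in load_process_events(recent_path):
        image = basename(event["image"])
        if image in RECON_TOOLS and image not in seen[event["account"]]:
            yield event["account"], image, event.get("host", "unknown host")

if __name__ == "__main__":
    for account, tool, host in hunt("procs_30d.jsonl", "procs_today.jsonl"):
        print(f"Review: {account} ran {tool} for the first time on {host}")
```

The point is not the script. The point is that a human chose the hypothesis, and a human has to judge whether each hit is Sue doing her job or somebody living off the land with Sue’s access.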