Fired employee sought AI help to hide deletion of hosting firm’s customer data

The apparent revenge deletion of US federal databases after the dismissal of twin brothers from an online hosting company is another reminder to IT and HR leaders that tough off-boarding procedures have to be implemented to prevent insider attacks.

Destructive attacks from disgruntled current or former employees aren’t new. But the conviction by a Virginia jury last week of one of the brothers raises a number of issues that IT pros and CEOs have to keep in mind.

A federal jury convicted Sohaib Akhter, 34, of Alexandria, Virginia, on charges of conspiracy to commit computer fraud, password trafficking, and possession of a firearm by a prohibited person. He will be sentenced in September. And last month his brother, Muneeb, signed an agreed statement of facts about the siblings’ activities in response to several charges against him. But according to documents from the case provided on the Free Law Project’s archive of court data, CourtListener, Muneeb is now trying to have the charges dismissed.

Still, the incident has led one expert, Robert Enderle of the Enderle Group, to say, “it should serve as a wake-up call: Organizations must not only tighten their internal controls, but also begin accounting for how AI tools can be weaponized against them, and these AI tools need far stronger guardrails than they currently have.”

The statement of facts

According to the statement of facts Muneeb agreed to, but now disputes, he and his brother, Sohaib, worked for an unnamed company in Washington, DC, that provided software and services to more than 45 US government agencies, including hosting data for some federal clients. Those clients included the US Equal Employment Opportunity Commission (EEOC), the Department of Homeland Security, and the Internal Revenue Service (IRS).

On Feb. 18, 2025, both brothers were terminated by the company after it discovered Sohaib had been convicted nine years earlier of a felony.
After the firing, they both allegedly tried to harm their former employer by accessing computers without authorization, deleting databases, and destroying evidence of their work. In his statement of facts this year, Muneeb admitted to deleting 96 databases.

How? Although Sohaib’s VPN was disconnected five minutes after the firing, cutting off his access to the hosting provider, his brother still had access. Both also still had their company-issued laptops. They went to work.

As part of their alleged destructive work, when Muneeb didn’t know the database commands necessary to accomplish his goals, he used an AI tool to help him, asking “how do I clear system logs from SQL servers after deleting databases” and later, “how do you clear all event and application logs from Microsoft Windows Server 2012.” The agreed statement of facts doesn’t say which tool, but presumably it was a public chatbot.

In the statement of facts, Muneeb also agreed he stole copies of IRS information on a virtual machine, including federal tax information for 450 people.

And he admitted that between May and December 2025, he committed fraud and stole credentials for the EEOC public portal in an attempt to access email and other online accounts of 4,500 people. In hundreds of instances, he successfully logged into victims’ email accounts without their authorization.

State of insider attacks

According to the State of Human Risk Report from Mimecast, 42% of organizations have experienced an increase in malicious insider incidents over the past year, with 42% also reporting a rise in negligent incidents for the first time.

A report this year by the Ponemon Institute on the costs of insider risks, commissioned by insider threat detection provider DTEX, estimated that incidents cost organizations an average of $19.5 million last year, up from $17.4 million in 2024. The biggest cause of losses last year (53%) was negligence and mistakes, it said.
The second biggest cause, however, was malicious activity (27%).

Musa Ishaq, senior principal insider threat analyst at DTEX, said last week’s conviction “is a clear and sobering reminder that termination is not the end of risk. In many cases, it is the beginning of it.”

The off-boarding moment “is one of the most dangerous windows in any organization’s security posture,” he said, “and it remains one of the most underestimated. Every departing employee, whether they leave willingly or are terminated, represents a live risk event that must be treated in real time. That means immediate access revocation, active session termination, and active monitoring, not a checklist completed the following day. When those steps fail, or when even a single access pathway is left open, the consequences can be catastrophic, as this case demonstrates.”

‘AI didn’t give attackers a new capability’

Equally important, he added, is what this case reveals about AI’s role in accelerating insider threats. “AI did not give them a new capability; they already had the access and the intent. What it did was compress their decision cycle, turning what might have taken several minutes of research into seconds of execution. The new threat reality is that AI does not create malicious insiders, but it dramatically amplifies what they can accomplish before defenders are able to respond.”

As a result, organizations have to shift to proactive and risk-adaptive security approaches, Ishaq said. A privileged user querying an AI tool through a company-owned or -controlled computer for log-evasion techniques while simultaneously executing destructive commands on production servers is an escalation signal, he said.
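At its core, the escalation signal Ishaq describes is a correlation rule: a risky AI query and a destructive database command, from the same user, close together in time. A minimal sketch of that rule, assuming hypothetical event records drawn from a web-proxy log (which would capture AI prompts) and a database audit log — the field names and keyword list are illustrative, not any vendor’s schema:

```python
from datetime import datetime, timedelta

# Illustrative keyword list for log-evasion queries; a real deployment would
# use a broader, maintained policy rather than hard-coded substrings.
EVASION_KEYWORDS = ("clear system logs", "clear event", "delete logs")

def escalation_signals(events, window_minutes=10):
    """Flag users whose AI queries about log evasion coincide with
    destructive database commands within a short time window.

    Each event is a dict: {"time": datetime, "user": str,
                           "source": "proxy" | "db_audit", "detail": str}.
    """
    risky_prompts = [e for e in events
                     if e["source"] == "proxy"
                     and any(k in e["detail"].lower() for k in EVASION_KEYWORDS)]
    destructive = [e for e in events
                   if e["source"] == "db_audit"
                   and e["detail"].upper().startswith(("DROP DATABASE", "TRUNCATE"))]
    window = timedelta(minutes=window_minutes)
    alerts = set()
    for p in risky_prompts:
        for d in destructive:
            # Same user, both behaviors inside the correlation window.
            if p["user"] == d["user"] and abs(p["time"] - d["time"]) <= window:
                alerts.add(p["user"])
    return sorted(alerts)
```

Either behavior alone is ambiguous — admins legitimately clear logs and drop databases — which is why the sketch only alerts on the conjunction, the “pattern” Ishaq says behavioral visibility exists to catch.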
“Behavioral visibility, not just technical controls, is what enables security teams to detect that pattern and act before deletion becomes destruction,” he said.

“This case is a preview of what insider threats look like in an AI-enabled world in terms of being faster, harder to trace, and far more consequential when governance gaps exist,” he said. “As such, the fundamentals, including strict access control, real-time off-boarding protocols, and layered monitoring of privileged users, have never been more critical.”

‘Textbook example’ of need to rethink processes

Enderle agreed. He said this incident “is a textbook example of why we need to rethink the speed and process of our off-boarding processes. The fact that a former employee was able to access and delete government databases post-termination highlights a massive failure in basic access control. In a modern enterprise, access revocation needs to be instantaneous, automatic, and comprehensive; any gap between a firing and a lockout is a window for significant liability.”

The most disturbing aspect, he added, is the role AI played. “Using an AI tool to solicit instructions on clearing system logs is a clear signal that the barrier to entry for sophisticated digital sabotage is dropping,” Enderle said. “We are entering an era where AI can act as a force multiplier for malicious intent, making it easier for individuals to cover their tracks. Even AI protections can be bypassed. I saw a demonstration on YouTube the other day where a user just re-asked a question to a public AI site on preparing a bomb until the AI gave up saying ‘no’ and provided the answer.”

Queries like ‘how to clear SQL logs’ have legitimate administrative purposes, he acknowledged.
But, he added, AI providers must move beyond simple keyword filtering and implement intent-aware guardrails that can identify attack chains.

“When a sequence of prompts moves from technical curiosity to a roadmap for destroying evidence and obfuscating logs, the AI should recognize the malicious context and refuse the request,” Enderle argued.

“Ultimately,” he warned, “if AI providers don’t take responsibility for preventing their platforms from becoming a ‘how-to’ manual for criminal activity, they risk a regulatory backlash and potential civil and criminal liability that could stifle the very innovation they are trying to promote.”
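The guardrail Enderle describes can be sketched as a session-level check: instead of judging each prompt in isolation, it tracks whether a user’s prompt sequence chains database destruction with evidence tampering. This is a toy illustration using substring matching — the keyword lists are placeholders, and a real guardrail would use a model-based intent classifier rather than keywords:

```python
# Illustrative keyword lists, not a real moderation policy.
DESTRUCTIVE = ("delete database", "deleting databases", "drop database")
EVASION = ("clear system logs", "clear all event",
           "clear event and application logs")

def session_is_attack_chain(prompts):
    """Return True when a session's prompts pair destruction with log evasion.

    State persists across the whole sequence, so the chain is flagged even
    when the two halves arrive in separate, individually innocuous prompts.
    """
    saw_destructive = saw_evasion = False
    for prompt in prompts:
        low = prompt.lower()
        saw_destructive = saw_destructive or any(k in low for k in DESTRUCTIVE)
        saw_evasion = saw_evasion or any(k in low for k in EVASION)
    return saw_destructive and saw_evasion
```

Run against the two prompts quoted in the statement of facts, the pair trips the check, while a lone administrative question such as a backup query does not — which is the distinction Enderle draws between legitimate admin queries and an attack chain.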