CERT-In outlines safeguards for Indian orgs, MSMEs amid Mythos AI cybersecurity risk concerns


India’s nodal cybersecurity agency has sounded the alarm on escalating cyber threats driven by recent developments in frontier AI models, urging organisations and MSMEs to step up defences through stronger threat detection, continuous monitoring, vulnerability disclosures, and rigorous log preservation, among other measures.

The maturing cyber capabilities of frontier AI systems give them the ability to autonomously discover security vulnerabilities in widely used software, analyse source code, and plan and chain together multi-stage attacks to compromise enterprise networks end-to-end, the Indian Computer Emergency Response Team (CERT-In) said in a new advisory titled ‘Defending Against Frontier AI Driven Cyber Risks’ issued on Sunday, April 26.

Based on its risk assessment, CERT-In said that AI could enable fast, low-cost, and automated attacks that could aid threat actors in exploiting vulnerabilities, siphoning credentials, and carrying out targeted social engineering attacks against poorly secured systems and users. This may further result in service disruption, data exfiltration, identity compromise, financial fraud, and impersonation, according to the agency, which functions under the aegis of the IT Ministry.

“These activities can be performed at a speed and scale that previously required teams of skilled human experts,” CERT-In said. “Keeping pace with frontier AI-driven cyber developments is critical for maintaining cyber resilience. Baseline cybersecurity controls remain critical and should be rigorously enforced,” it added.

The advisory comes amid growing concern over advanced AI capabilities, with Anthropic’s new AI model ‘Mythos’ – considered too risky to be released widely to the public – serving as a wake-up call of sorts for regulators in India and globally. Last week, Finance Minister Nirmala Sitharaman chaired a high-level meeting over concerns that Mythos could pose significant risks to India’s banking sector.
The government is also in conversation with Anthropic’s senior leadership in the US on the issue, The Indian Express reported earlier.

Potential risks identified by CERT-In

Acknowledging the potential application of cybersecurity-focused AI systems in the defence sector, CERT-In said that their dual-use nature poses heightened risks to organisations by lowering the entry barrier for malicious actors. It highlighted the following cyber capabilities to watch out for in emerging frontier AI models:

– Large-scale software analysis for identification of known and zero-day vulnerabilities across extensive codebases.

– Accelerated exploit development, including proof-of-concept generation for newly disclosed vulnerabilities.

– Automated reconnaissance against internet-facing infrastructure, APIs, cloud services and enterprise attack surfaces.

– Credential harvesting and attack-path discovery through automated enumeration.

– AI-generated phishing and impersonation attacks, including highly convincing multilingual social engineering content.

– Autonomous multi-stage attack orchestration, including privilege escalation and lateral movement planning.

– Rapid weaponisation of vulnerabilities and adaptive exploitation workflows.

Org-level recommendations by CERT-In

In light of the cybersecurity risks posed by frontier AI models, CERT-In recommended that organisations increase the frequency of monitoring, threat detection and review of system logs by their security operations teams. Security monitoring tools should be tuned to look for unusual activity – such as abnormal patterns of access requests and unfamiliar scripts or commands running on systems – that may indicate an AI-driven attack, the agency said.

Other recommended measures include enabling DDoS protection and enforcing Multi-Factor Authentication (MFA) for all internet-facing assets.
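The kind of “abnormal patterns of access requests” CERT-In asks security teams to watch for can be illustrated with a minimal sketch. The log format, the sample entries, and the failure threshold below are assumptions for illustration only, not part of the advisory:

```python
from collections import Counter

# Hypothetical log lines in an assumed "<timestamp> <source_ip> <action>" format
LOG_LINES = [
    "2026-02-14T10:00:01 203.0.113.7 LOGIN_FAIL",
    "2026-02-14T10:00:02 203.0.113.7 LOGIN_FAIL",
    "2026-02-14T10:00:03 203.0.113.7 LOGIN_FAIL",
    "2026-02-14T10:00:04 203.0.113.7 LOGIN_FAIL",
    "2026-02-14T10:05:00 198.51.100.9 LOGIN_OK",
]

FAIL_THRESHOLD = 3  # assumed cut-off; real tooling would use rate windows, not a flat count


def flag_suspicious(lines, threshold=FAIL_THRESHOLD):
    """Return source IPs whose failed-login count meets or exceeds the threshold."""
    fails = Counter(line.split()[1] for line in lines if line.endswith("LOGIN_FAIL"))
    return [ip for ip, count in fails.items() if count >= threshold]


print(flag_suspicious(LOG_LINES))  # ['203.0.113.7']
```

In practice, security operations teams would implement this class of detection through SIEM or log-analytics platforms rather than ad hoc scripts; the sketch only shows the underlying idea of thresholding repeated access failures per source.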
“Treat every newly disclosed critical vulnerability in widely deployed software as something that could be exploited within hours, not weeks,” CERT-In said.

It also highlighted older VPN applications as potential entry points for hackers, as such legacy remote-access systems are “particularly attractive to automated tools.”

Organisations should also look to apply critical patches within 24 hours of their release by adopting automated, risk-based patching along with continuous monitoring across software, systems, and supply chains. “If any suspicious activity is found, preserve all logs as per CERT-In Directions 2022, take containment measures and report with all relevant logs to CERT-In,” the cybersecurity watchdog said.

For MSMEs, CERT-In recommended more cost-effective measures such as installing security updates for operating systems, browsers, and applications, enforcing MFA, avoiding unverified AI tools in production environments, and conducting regular cybersecurity training programmes for employees.

How individual users can stay safe

To protect personal devices, accounts, and user data from AI-driven attacks, CERT-In recommended the following steps:

– Avoid downloading apps or files from unverified sources.

– Use strong and unique passwords for all online accounts.

– Verify the authenticity of voice calls, video messages, and urgent requests, particularly those involving financial transactions or sensitive information, as AI-generated deepfakes and impersonation attempts may be highly convincing.

– Be cautious of AI-generated phishing content, fake websites and social engineering attempts designed to mimic trusted individuals, organisations or services.

– Use a strong Wi-Fi password and WPA3 encryption if available.

– Avoid public Wi-Fi for sensitive transactions; use a VPN when necessary.