AI browsers are a significant security threat

Wait 5 sec.

Among the explosion of AI systems, AI web browsers such as Fellou and Comet from Perplexity have begun to make appearances on the corporate desktop. Such applications are described as the next evolution of the humble browser, and come with AI features built in; they can read and summarise web pages – and, at their most advanced – act on web content autonomously.In theory, at least, the promise of an AI browser is that it will speed up digital workflows, undertake online research, and retrieve information from internal sources and the wider internet.However, security research teams are concluding that AI browsers introduce serious risks into the enterprise that simply can’t be ignored.The problem lies in the fact that AI browsers are highly vulnerable to indirect prompt injection attacks. These are where the model in the browser (or accessed via the browser) receives instructions hidden in specially-crafted websites. By embedding text into web pages or images in ways humans find difficult to discren, AI models can be fed instructions in the form of AI prompts, or amendments to prompts that are input by the user.The bottom line for IT departments and decision-makers is that AI browsers are not yet suitable for use in the enterprise, and represent a significant security threat.Automation meets exposureIn tests, researchers discovered that embedded text in online content is processed by the AI browser and is interpreted as instructions to the smart model. These instructions can be executed using the user’s privileges, so the greater the degree of access to information that the user has, the greater the risk to the organisation. The autonomy that AI gives users is the same mechanism that magnifies the attack surface, and the more autonomy, the greater the potential scope for data loss.For example, it’s possible to embed text commands into an image that, when displayed in the browser, could trigger an AI assistant to interact with sensitive assets, like corporate email, or online banking dashboards. Another test showed how an AI assistant’s prompt can be hijacked and made to perform unauthorised actions on the behalf of the user.These types of vulnerabilities clearly go against all principles of data governance, and are the most obvious example of how ‘shadow AI’ in the form of an unauthorised browser, poses a real threat to an organisation’s data. The AI model acts as a bridge between domains, and circumvents same-origin policies – the rule that prevents the access of data from one domain by another.Implementation and governance challengesThe root of the problem is the merging of user queries in the browser with live data accessed on the web. If the LLM can’t distinguish between safe and malicious input, then it can blithely access data not requested by its human operator and act on it. When given agentic abilities, the consequences can be far-reaching, and could easily cause a cascade of malicious activity across the enterprise.For any organisation that relies on data segmentation and access control, a compromised AI layer in a user’s browser can circumvent firewalls, enact token exchanges, and use secure cookies in exactly the same way that a user might. Effectively, the AI browser becomes an insider threat, with access to all the data and facility of its human operator. The browser user will not necessarily be aware of activity ‘under the hood,’ so an infected browser may act for significant periods of time without detection.Threat mitigationThe first generation of AI browsers should be regarded by IT teams in the same way they treat unauthorised installation of third-party software. While it is relatively easy to prevent specific software being installed by users, it’s worth noting that mainstream browsers such as Chrome and Edge are shipping with increased numbers of AI features in the form of Gemini (in Chrome) and Copilot (in Edge). The browser-producing companies are actively exploring AI-augmented browsing capabilities, and agentic features (that grant significant autonomy to the browser) will be quick to appear, driven by the need for competitive advantage between browser companies.Without proper oversight and controls, organisations are opening themselves to significant risk. Future generations of browsers should be checked for the following features:Prompt isolation, separating user intent from third-party web content before LLM prompt generation.Gated permissions. AI agents should not be able to execute autonomous actions, including navigation, data retrieval, or file access without explicit user confirmation.Sandboxing of sensitive browsing (like HR, finance, internal dashboards, etc.) so there is no AI activity in these sensitive areas.Governance integration. Browser-based AI has to align with data security policies, and the software should provide records to make agentic actions traceable.To date, no browser vendor has presented a smart browser with the ability to distinguish between user-driven intent, and model-interpreted commands. Without this, browsers may be coerced to act against the organisation by the use of relatively trivial prompt injection.Decision-maker takeawayAgentic AI browsers are presented as the next logical evolution in web browsing and automation in the workplace. They are designed deliberately to blur the distinction between user/human activity and become part of interactions with the enterprise’s digital assets. Given the ease with which the LLMs in AI browsers are circumvented and corrupted, the current generation of AI browsers can be regarded as dormant malware.The major browser vendors look set to embed AI (with or without agentic abilities) into future generations of their platforms, so careful monitoring of each release should be undertaken to ensure security oversight.(Image source: “Unexploded bomb!” by hugh llewelyn is licensed under CC BY-SA 2.0.)Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.The post AI browsers are a significant security threat appeared first on AI News.