Senior leaders across financial services have warned of a critical gap in AI governance standards, leaving the UK exposed to systemic risk, according to new research from Zango.

It comes as the Bank of England prepares to convene the Treasury, FCA and National Cyber Security Centre to assess the risks posed by Anthropic's Mythos model.

Lord Clement-Jones, Liberal Democrat Spokesperson for Science, Innovation and Technology in the House of Lords and Co-Chair of the All-Party Parliamentary Group on AI, writes in the foreword to the report: "What is immediately missing is the translation of high-level regulatory principles into day-to-day operational practice. We cannot simply wait for the aftermath of the first major AI-fuelled financial scandal to force us into action."

The Future of AI Governance & Compliance in Financial Services, coordinated by compliance technology firm Zango AI, draws on interviews with 27 C-suite and senior leaders across risk, compliance and AI governance at UK and European financial institutions, and on four industry roundtables with 60 additional senior practitioners.

Contributors to the report include senior leaders from Santander, Stripe, St James's Place, Standard Chartered, Lloyds Banking Group, Monzo, Allica Bank, Commerzbank, Revolut and Ecommpay, alongside John Glen MP, Member of the Treasury Committee.

The findings highlight a shift in the AI systems being adopted by UK financial institutions: from tools that produced predictable outputs to generative and agentic systems whose context-dependent outputs cannot be fully validated in advance, changing the requirements of governance.

That shift is creating a widening oversight gap.
Business and technology teams are deploying AI at a much faster pace than the risk and compliance functions responsible for overseeing them, and several institutions are unable to identify all the AI tools in use across their own organisations.

Criminal organisations are already exploiting that gap: global fraud losses hit $579 billion in 2025, with 90 per cent of financial professionals reporting an increase in AI-enabled attacks.

Ritesh Singhania, CEO of Zango, said: "Compliance teams are trying to keep pace with AI systems their own colleagues have deployed, and with criminal networks scaling faster than anyone's defences. Weak governance doesn't just create individual risk - it creates systemic vulnerability across the entire sector. What's missing is a shared implementation standard that gives firms a consistent basis for governing AI as they adopt it."

Leaders cited a lack of operational guidance as a significant gap in the UK compared with the US. The US published a practical Financial Services AI Risk Management Framework in February 2026, developed through a Treasury-led public-private collaboration involving 108 financial institutions, with input from agencies including NIST. Singapore's regulator, the Monetary Authority of Singapore, published an equivalent in March. No comparable standard exists in the UK or EU.

Without shared operational guidance, firms are solving the same governance problems independently. This leads to inconsistent control standards and creates oversight gaps that can be exploited at scale - a dynamic that sits at the heart of the AI-enabled risks regulators are now urgently examining.

Dean Nash, adviser to Zango and Global Chief Operating Officer (Legal) at Santander, said: "Closing the accountability gap requires a fundamental rethink of governance architecture. We must shift from controlling and auditing a system's internal logic to governing the dynamic environment in which it operates. Risk management is no longer about predicting every result, but about orchestrating an ecosystem of continuous, real-time guardrails."

The report calls for practitioner-built, sector-specific implementation guidance, developed with regulator engagement and modelled on the precedent set by the Joint Money Laundering Steering Group (JMLSG), the industry-developed standard for financial crime compliance that carries government endorsement without being mandated by regulators. No equivalent exists for AI.

Cybersecurity | 29 Apr, 2026