Everyone Talks About AI’s Power. Few Ask What It Does to Financial Decisions

If you spend any time in conversations about AI and financial services, you'll notice they tend to follow a pattern. Someone mentions faster execution. Someone else raises smarter signals. Personalisation at scale comes up. Frictionless everything. Everyone nods.

It's not that any of it is wrong. It's that it skips the part that actually matters.

I've been in financial services long enough to know that the interesting questions about any new technology are rarely about what it can do. They're about what happens when it lands in the real world, in the hands of people with very different levels of experience, making decisions under genuine uncertainty. That's the conversation we need to be having about AI right now. And I don't think we're quite there yet.

What's Already in the Room

Let's be clear about one thing: AI isn't coming to trading. It's already here. It's been here for a while. It's just unevenly distributed and not always well understood.

At the institutional end, this isn't news. Algorithmic and AI-driven execution, real-time sentiment analysis, and high-frequency pattern recognition have been standard practice for years. What's newer is the application of large language models to unstructured data: earnings call transcripts, regulatory filings, and news flow. The ability to process and synthesise that kind of material faster than any human team is genuinely changing how institutional research and risk assessment work. That's a meaningful shift.

For retail, the change has been more visible but perhaps less examined. AI-powered charting tools, personalised market summaries, automated alerts, in-app education: these have become fairly standard across most major platforms.
But what's less visible, and arguably more consequential, is what's happening in the background: onboarding decision automation, suitability assessments, detection of unusual trading patterns that might indicate a problem. That's where AI is doing some of its most important work, quietly, without much fanfare.

What's Coming

The next wave is less about execution and more about judgment.

Agentic AI is what I watch most closely. The ability of AI systems to take sequences of actions on their own, to research, assess and act without needing a human prompt at every step, is already being tested in institutional settings. For retail, the implications are significant and not yet fully worked through. An AI system that monitors a portfolio and flags when something has changed materially is one thing. An AI that decides what to do about it is quite another. That distinction matters, and the industry needs to think carefully about where it draws the line.

Personalisation is the other big one. The combination of behavioural data, trading history and AI modelling is producing systems that can genuinely adapt to individual users in ways that simply weren't possible before. For financial education, which I care about a lot, this is genuinely exciting. The ability to deliver the right context to the right person at the right moment, rather than generic content that may or may not land, could change how people engage with markets in a real and lasting way.

Risk management is moving from detection to prediction, too: identifying the patterns that tend to precede bad outcomes, rather than just flagging them after the fact. For anyone serious about client protection, that's one of the most valuable things on the horizon.

The Bit that Keeps Me Up at Night

The same capabilities that make AI genuinely useful in the hands of a well-run, well-governed platform also make it genuinely dangerous in the hands of one that isn't.
An AI optimised for engagement rather than outcomes could learn, very efficiently, how to keep people trading, even when that is not in their best interests. It will surface content that stimulates rather than informs. It will personalise in ways that exploit the biases it identifies rather than counteract them. AI doesn't change the incentive; it just makes the execution more precise.

Whether AI accelerates the good version of what platforms can do, or the bad version, comes down entirely to intent and governance. That's it. We discussed governance at length at the House of Lords this week.

The question isn't whether AI can increase the volume of information available to people. It's whether it improves the quality of the decisions they make with that information. Those are genuinely different problems. The governance frameworks being built right now, in regulation, in business practice, in how platforms are designed, will determine which one gets solved.

The FCA's Consumer Duty is a step in the right direction. Requiring firms to demonstrate good outcomes rather than just disclose risks creates real accountability for how AI gets used. But regulation sets the floor. What happens above it is down to us. The firms that earn trust over the long term will be the ones that treat governance as a design principle, not a compliance exercise, and build AI that makes people better at decisions. Not just faster at making them.

This article was written by Rupert Osborne at www.financemagnates.com.