Agentic AI systems must have ‘a human in the loop,’ says Google exec

Good morning. Agentic AI could fundamentally reshape businesses in less than three years.

At the Fortune Brainstorm AI Singapore conference this week, Sapna Chadha, VP for Southeast Asia and South Asia Frontier at Google, explained that AI agents are evolving beyond single-task assistants. AI agents take powerful language models and equip them with tools, enabling them to carry out multi-step or complex actions, not just single isolated tasks, she explained. It’s about stitching capabilities together so that agents can act on behalf of users in increasingly sophisticated ways, she said.

By 2028, almost 33% of all enterprise software is expected to have agentic AI built in, automating nearly 15% of day-to-day work and workflows, Chadha said.

Vivek Luthra, Accenture’s Asia Pacific data and AI lead, told Fortune’s Jeremy Kahn that clients are experiencing three stages of agentic AI adoption:

—AI Assist: Agents help employees with individual tasks.
—AI Adviser: Agents provide insights to empower better decisions.
—Autonomation: Agents autonomously manage entire workflows.

Luthra noted that, while most companies are still in the “assist” or “adviser” stages, Accenture is already observing fully autonomous processes in select strategic functions. Within Accenture, AI agents are deployed internally across HR, finance, marketing, and IT. Externally, industries such as life sciences use agents to speed up regulatory approvals, while sectors such as insurance and banking leverage them for fraud management.

Accenture’s recent “front-runners” report surveyed 2,000 industry executives, finding that about 8% of companies have truly scaled up their AI adoption. “AI is very high on the agenda, but companies are still figuring out how to scale it,” Luthra noted.

Chadha shared that agentic AI features appear in both Google’s consumer products and enterprise solutions.
She highlighted Project Astra as Google’s vision for a universal AI agent capable of handling diverse tasks, from diagnosing bike repairs via camera to initiating support calls.

As agentic systems become more powerful and autonomous, the need for responsible AI and improved safety standards increases. Google is working with trusted testers and moving carefully, Chadha said. Key risks could include agents going rogue or sharing sensitive data without authorization, she explained. That’s why Google is setting clear guidelines and developing toolkits for safe deployment, including standards, she said. The company recently released a white paper, titled “Google’s Approach for Secure AI Agents.”

Both panelists highlighted the importance of transparency and user control. Chadha advised that agentic platforms must clearly communicate actions and request user approval at key decision points. “You wouldn’t want to have a system that can do this fully without a human in the loop,” Chadha said.

Regulation is also critical: “It’s too important not to regulate,” Chadha insisted, calling for robust protocols and industry standards.

Sheryl Estrada
sheryl.estrada@fortune.com

This story was originally featured on Fortune.com