Increasingly, organizations are waking up to the fact that using AI to write more code is creating bottlenecks further up in the development pipeline, as all of this code still has to be tested and integrated. For this episode of The New Stack Agents, we talked to Harness co-founder and CEO Jyoti Bansal about how his company has been trying to solve this problem since its launch in 2017, and how Harness is now using agents as the core feature of its platform to speed up the outer development loop.

“We have done AI for coding, but we’re not really shipping any faster. We are creating more code, but it’s not like we’re shipping any faster. So what’s the problem? And then you realize, more code doesn’t mean you can test it fast enough, you can deploy it fast enough, you can secure it, you can ensure compliance — all kinds of things that need to be done on the code,” he said.

Bansal argues that the majority of a developer’s time has long been spent on all of the tasks needed to ensure that the code is ready for production. The goal for Harness was always to streamline this process, and even in its early days, the company used machine learning (ML) wherever it made sense to remove some of the repetitive work.

From ML to GenAI

As large language models (LLMs) became smarter, it was a natural fit for the Harness team to start building more AI smarts into its products, too.

“When we started looking at generative AI [GenAI], of course, we started with asking, ‘Can we bring LLMs to simplify the configuration of something? Can you create a pipeline easily with it? Can you fix security vulnerabilities with LLMs?’ And that was the first set of use cases that we started with,” Bansal explained. “Then over time, it became very clear [that for] many of these problems, agents, or agentic AI, is the right solution. So we have been building a lot of agents to solve the problems.
And now the way people consume our platform is through a library of AI agents.”

Today, Harness runs dozens of agents as part of its Harness AI platform. These include a DevOps agent, a site reliability engineer (SRE) agent, an AppSec agent, a FinOps agent, a test agent, a reliability agent and a release agent — all with various subagents for more specialized tasks. Those agents have access to a graph that describes a given organization’s infrastructure, the tools they use for testing and more. The right context, after all, is crucial for agents to do their best work.

“Our users don’t see any of these agents,” Bansal explained. “Our users see just one Harness AI, and the Harness AI is like a unified agent that combines all of them together.”

Trust in AI Agents

AI agents will inevitably make mistakes — at least for the time being. Bansal argues that for Harness, that’s not really an issue, though, because of its position in the software development life cycle.

“Our agents are not doing a deployment in production. Our agents are creating a production deployment pipeline. The production deployment pipeline is deterministic. It’s auditable. Every step that happens is reviewed by someone from a compliance perspective, from a security perspective.
So it’s not that AI is just going and touching your production environments — people are not comfortable with that, and people probably should not be, in many cases, for a long time.”

MCP FTW

As for enterprise adoption, Bansal noted how surprised he has been by how quickly large companies have embraced these tools — and Harness’ own Model Context Protocol (MCP) server, which exposes its agent capabilities and allows tools like Windsurf and Cursor to connect to the platform.

“I was kind of surprised that even in very large enterprises — where you would think slow-moving enterprises — the adoption of integrating AI in their toolchain and how they adopt is pretty high,” Bansal said.

One major airline customer took Harness’ AI and built their own internal AI toolchain for all their engineers on top of it. “We’re seeing a lot of creativity in how people want to transform the software engineering practices,” he noted.

Augmenting vs. Replacing

And while Bansal himself wrote much of the early code for his previous startup, AppDynamics (which sold to Cisco for $3.7 billion), he stopped coding for a while but has now picked it up again thanks to these new AI coding tools. “I’m starting to get dangerous again,” he joked, but also noted that AI has helped him speed up some of the more time-consuming product management and spec-writing tasks.

He does not believe that these tools are replacing human developers or product managers, though.

“The interesting part of AI is the pace. The competitive pressure and everything is much higher,” he said. Where before, it may have been okay to spend a year or more to launch a new feature or move into a new market, that’s not an option now. “That timeline has shrunk down to, like, six months for everyone. … It all comes down to: If you get more productive, what do you do? Do you have fewer people, or do you do more work?
I think at this point, I’m seeing mostly that there is not a lot of ‘you’re doing less, you need fewer people’ — you’re just doing more work.”

The AI Bubble Is OK

Since Bansal is also an active startup investor, I was curious to hear his thoughts on the possibility that the current AI bubble will soon implode. All areas of AI are bubbly right now, he conceded, but he also argues that this isn’t necessarily a problem.

“A bubble sometimes is not necessarily a bad thing,” he said. “Let’s say if something is a big disruptive technology. Bubbles happen because people are expecting it to go from 1 to 100 in two years — but maybe it will go from 1 to 100 in five years, which still is going from 1 to 100.” He cited the internet as an example of this, with its dotcom boom and bust. In AI, quite a few companies are now trying to solve similar problems — and maybe only a few of those will survive. But Bansal argues that this is exactly how the system is supposed to work.

“We’ll have some big winners — companies that will succeed. Many incumbents will also figure out what to do. They will pick up some of these innovations. So everything is happening, which is valid, but things might be happening much more than would eventually be needed, and that’s where the bubble is — and that’s probably OK.”

The post Harness CEO Jyoti Bansal on Why AI Coding Doesn’t Help You Ship Faster appeared first on The New Stack.