In application security, one of the promises of AI agents is that they will be able to test and scan your code for vulnerabilities 24/7 — and then fix any issues autonomously, whether during the development phase or in production. We’re not quite there yet, but plenty of startups and incumbents are making a run at this. One of the startups in this space is Aptori, which is building an AI security engineer for the enterprise.

The service builds a real-time contextual map of your codebase, APIs and cloud infrastructure, helping it understand the authorization logic and how data flows through your systems so that it can then effectively scan for flaws in the code and other issues. Ideally, it can then also remediate those issues.

I sat down with Aptori CEO and co-founder Sumeet Singh at Google’s Cloud Next conference to discuss the company, as well as the state of agents in security today. Singh, together with his co-founder, Travis Newhouse, is no stranger to building startups. I first met them during the heyday of OpenStack, when they were working on AppFormix, a startup that automated cloud operations for OpenStack users. They sold the company to Juniper Networks in late 2016 and then left a few years ago to start Aptori.

“OpenStack was this most amazing cloud computing system, and, you know, our previous startup, we built all of this tooling to make OpenStack autonomous,” Singh said. “You have an OpenStack cloud environment. You run our software on it, and it would just essentially run it for you, without humans. There was some amount of AI, but not [generative] AI. There was lots of machine learning in there.”

Then, once the company was acquired by Juniper, its customers largely became telecoms and financial institutions in regulated environments.
What the team experienced there was that even as developers were able to move increasingly fast on this automated infrastructure, they still weren’t able to actually release their software any faster, in large part because of the security requirements that held them back.

“Every single time, it’d be like: Yeah, we’re ready to release. But there’s this long list of requirements that comes from security that, hey, they’ve got to do all these things. But you’re ready to release. And it’s got to be kind of heartbreaking, right? You’ve put all this effort in. You have this goal that, hey, I’ve got to release everything in six minutes — but you can’t achieve that.”

The idea is for security agents to constantly scan the code and test the applications and APIs for vulnerabilities, so that when the software is ready for release, there are far fewer potential blockers.

When Aptori started, the models weren’t yet ready to provide fixes; all they did was generate alerts. Singh admitted that the early code fixes the team tried to generate were actually unusable, so the first generation of the product had to focus on alerting. Now, with models like Gemini 2.5 Flash, Anthropic’s Claude Sonnet 4 and others, that’s all changing.

“I used to tell my co-founder, Travis, all the time that, look, we’re only giving these developers bad news,” he said. “We’re going to them and we’re telling them: Hey, you need to fix these six more things. Hey, you need to fix eight more things — and they’re busy getting the feature out. But with AI generating these code fixes — and complex code fixes — it’s not like, hey, change this line here. It’s the proper diff through different sections of your code.”

The next step for agentic systems is to become more autonomous. But as Singh rightly noted, engineers will always want visibility into what those tools are doing.

He also stressed that there is still a lot of traditional software development, as well as machine learning, that goes into building a tool like Aptori.
For a lot of security features, you need a deterministic tool, after all.

“You can’t just leave it to GenAI to take the greedy approach and say, hey, look, these are the 10 things I found, or these are the 20 things I found. You want to make sure that whatever those controls are, whatever those checks are, that all of those checks were done,” he said.

The post Aptori Is Building an Agentic AI Security Engineer appeared first on The New Stack.