Let’s face it: vibe coding, or using AI to assist with writing code, has already become a reality for many developers. Used as a productivity tool by a skilled developer, it’s effective and can speed up certain tasks dramatically. However, not enough is being done to mitigate the risks that come with the speed and the extra layer of abstraction introduced when a tool writes your code for you. It is harder to build a robust understanding of code security when reviewing someone else’s work (the AI agent’s) than when reviewing code that’s your own. We’ve seen this first-hand, and it has led to real vulnerabilities that were directly introduced by AI. And since it’s happening at a vulnerability management company with highly skilled security professionals, you can guarantee it’s happening everywhere.

Vibe Coding a Honeypot

To deliver Intruder’s Rapid Response service, we have started using our own honeypots to catch emerging exploits in the wild and use them to write automated checks that protect our customers.

Public vulnerability reports rarely come with details on how an exploit works. In the early stages of a vulnerability’s life cycle, when such information is known only to a small group of attackers, having a real example of someone using the vulnerability can provide key details that let us write robust detections that don’t rely on the system exposing its version number.

To aid with this aim, we decided to build a low-interaction honeypot that could be rapidly deployed to simulate any web application, logging requests that matched specific patterns for analysis. We couldn’t find an open source project that quite met our needs, so naturally, rather than starting a three-month sprint, we vibe-coded a proof-of-concept honeypot using AI.

Our vibe-coded honeypots were deployed as “vulnerable infrastructure” in environments where compromise is assumed (not connected to any sensitive Intruder tech), but we still took a brief look at the code for security considerations before it went live.

After testing it out for a few weeks, we noticed something odd: Some of the logs, which should have been saved in a directory named after the attacker’s IP address, were being saved under a name that was definitely not an IP address.

AI-Generated Vulnerabilities

Seeing this kind of payload in a filename obviously rang alarm bells, as it had the signs of user input being used where we expected trusted data. Taking another look at the AI-generated code, we found the culprit (a reconstruction of the pattern is sketched below). An astute pen tester or developer should notice the problem right away. The issue wasn’t even some poorly documented behavior of the Go API or anything like that; it was explicit behavior, and even nicely commented! How did we miss this?

The code takes the X-Forwarded-For and X-Real-IP headers from the visitor’s request and uses them as the IP address when present. These headers are intended for deployments where a frontend proxy sits between the user and the web server, so the application can record the real visitor IP rather than the proxy’s IP. But if you use them, you must make sure you only trust them when they are sent by your trusted proxy!

These headers are client-controlled data and are, therefore, an easy injection point for attackers. A site visitor can easily spoof their IP address or exploit an injection weakness using these headers as the attack vector. This is a common vulnerability we find when pen testing.

In this case, the attacker’s payload was being inserted into this header, hence our unusual directory name.
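To make the problem concrete, here is a minimal sketch of the pattern, assuming a standard net/http handler. This is our reconstruction for illustration, not the actual honeypot code; the function names, the “logs” directory and the port are made up. It shows the naive version alongside a safer variant that only honors the headers behind a known proxy.

package main

import (
	"log"
	"net"
	"net/http"
	"os"
	"path/filepath"
)

// clientIPNaive mirrors the problematic pattern: it prefers the
// client-controlled X-Forwarded-For / X-Real-IP headers over the real
// TCP peer address, so any visitor can choose the value.
func clientIPNaive(r *http.Request) string {
	// Use the forwarded IP if present (attacker-controlled!).
	if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
		return xff
	}
	if xrip := r.Header.Get("X-Real-IP"); xrip != "" {
		return xrip
	}
	host, _, _ := net.SplitHostPort(r.RemoteAddr)
	return host
}

// clientIPSafe only honors the forwarded headers when the request came
// from a known proxy, and only if the value parses as an IP address,
// which also makes it safe to use as a directory name. (X-Forwarded-For
// can be a comma-separated chain; a fuller implementation would take the
// right-most address appended by its own proxy.)
func clientIPSafe(r *http.Request, trustedProxy string) string {
	host, _, _ := net.SplitHostPort(r.RemoteAddr)
	if host == trustedProxy {
		for _, h := range []string{"X-Real-IP", "X-Forwarded-For"} {
			if ip := net.ParseIP(r.Header.Get(h)); ip != nil {
				return ip.String()
			}
		}
	}
	return host
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Naive version: the "IP" can contain "../" or any other payload the
	// visitor puts in the header, and it becomes part of a path on disk.
	dir := filepath.Join("logs", clientIPNaive(r))
	_ = os.MkdirAll(dir, 0o750)
	// ... write the captured request into dir for later analysis ...
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

The key point in the safer variant is that header trust is a deployment decision: only requests arriving from a proxy you control should ever have these headers honored, and even then the value should be validated (net.ParseIP here) before it is used to build a filesystem path.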
Though there wasn’t any major impact in our case, and there was no sign of a full exploit chain, the attacker did gain some control over the program’s execution, and it wasn’t far from being much worse. If we had been using the IP address in another manner, it could easily have led to vulnerabilities like local file disclosure or server-side request forgery (SSRF).

What About Static Application Security Testing and Code Review?

Could static code analysis tools have helped here? We ran both Semgrep OSS and Gosec on the code, and neither reported this issue, though Semgrep did find other potential issues. Detecting this vulnerability isn’t easy for a static scanner because it’s a contextual problem. A taint-checking rule could (and some scanners probably do) detect the use of the headers in the filename, but automatically deciding whether the value has been made safe with an allowlist is a difficult problem.

So why didn’t a seasoned penetration tester notice it during the code review step in the first place? We think the answer is AI automation complacency.

AI Automation Complacency

The airline industry has long been aware of a concept known as automation complacency, which reduces pilots’ vigilance. In other words, it is much harder to monitor an automated process without making mistakes than it is to avoid mistakes ourselves while actively engaged in a task.

This is exactly what happened here during the review of the AI-written code. The human mind inherently wants to be as efficient as possible, and when automation appears to “just work,” it’s much easier to fall into a false sense of security and relax a little too much.

There’s one key difference between vibe coding and the airline industry, though: the lack of rigorous safety testing. Where pilots might get a little too relaxed and let the autopilot take the wheel, they have many years of safety testing and improvement to fall back on. This is not true for AI-written code in the current climate; the practice is still very immature. Security vulnerabilities are just one quick merge away from production, and only a keen-eyed code review stands in the way.

Not an Isolated Incident

For any readers thinking this was possibly a fluke that wouldn’t happen often, it’s unfortunately not the first time we’ve seen this. We have used the Gemini reasoning model to help generate custom identity and access management roles for an AWS cloud environment, and the roles it produced were vulnerable to privilege escalation. Even when prompted about the problem, the model responds with the classic “You’re absolutely right…” and then proceeds to post yet another vulnerable role.

With a security engineer at the helm who’s ready to scrutinize the model carefully, these issues will get caught. But vibe coding is opening up these tasks to people with much less security knowledge, so it’s only a matter of time before we start seeing AI-generated vulnerabilities proliferate. After all, it takes a transparent organization to talk about its vulnerabilities, and even fewer are going to admit the source of the weakness was their use of AI. This won’t be the last you hear of this, of that much we’re sure!