Lovable, a so-called "vibe coding" app that lets practically anybody build websites and apps with natural-language prompts to an AI, has a huge cybersecurity problem.

As Semafor reports, a critical security flaw has remained unfixed for months, allowing practically anyone to access sensitive information about the site's users, including names, email addresses, and even financial details.

In March, Matt Palmer, a staffer at AI coding assistant company Replit, published a report finding that 170 out of 1,645 Lovable-created web apps suffered from the same glaring security flaw, making it easy for hackers to walk away with highly sensitive information.

But the bug seemingly hasn't been meaningfully addressed.

"Lovable later introduced a 'security scanner,' but it merely checks for the existence of any [row level security] policy, not its correctness or alignment with application logic," Palmer tweeted on Thursday. "This provides a false sense of security, failing to detect the misconfigurations that expose data."

Row-level security (RLS) is the "practice of controlling access to data in a database by row, so that users are only able to access the data they are authorized for," per security firm NextLabs.

Palmer and his colleagues discovered the email addresses of roughly 500 users who had engaged with a Lovable-created website that turns a LinkedIn profile into a webpage.

Software engineer Daniel Asaria claimed that he was able to infiltrate multiple "top launched" Lovable sites, extracting personal debt amounts, home addresses, API keys, and "spicy prompts" in a matter of just 47 minutes.

"This isn't a breach story (I reported it), this is a wake-up call," Asaria tweeted in April.
"Be cautious which 'vibe coder' you trust with your personal data."

Following three months of "no meaningful remediation or user notification from Lovable," Palmer and his colleagues made the bug public through the National Vulnerability Database.

"This is the single biggest challenge with vibe coding," veteran software developer Simon Willison told Semafor. "The most obvious problem is that they’re going to build stuff insecurely."

Replit CEO Amjad Masad had pointed out that Lovable makes it "too easy to expose private data." Lovable founder Anton Osika, however, accused Masad of being jealous that Replit had been overtaken in "usage and making vibe coding secure."

It's truly a sign of the times, with experts warning for years that AI coding tools can introduce a litany of errors that are easily overlooked. Researchers have also found that many of the most advanced AI models simply don't have what it takes to solve the majority of coding tasks.

The trend has some uncomfortable implications for the programming industry as a whole, with young coders starting to rely heavily on AI tools, which could greatly undermine the foundational knowledge usually gleaned from difficult, manual problem-solving.

Lovable has since pushed back on X-formerly-Twitter, claiming that it's "now significantly better at building secure apps than a few months ago and this is improving quickly."

"That being said, we’re not yet where we want to be in terms of security and we’re committed to keep improving the security posture for all Lovable users," the company wrote.

More on AI coding: Advanced OpenAI Model Caught Sabotaging Code Intended to Shut It Down

The post Companies Are Discovering a Grim Problem With "Vibe Coding" appeared first on Futurism.
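For technically inclined readers, the gap Palmer describes can be made concrete with a minimal sketch. This is a hypothetical illustration, not Lovable's actual scanner or any real database API: the table names and policy strings are invented, and RLS policies are modeled simply as table-to-USING-clause mappings. The point is the difference between a scanner that only asks whether any policy exists and one that also rejects a trivially permissive policy.

```python
# Hypothetical illustration of the RLS-scanner gap described above.
# Policies are modeled as table -> SQL USING-clause strings; none of
# these names or checks come from Lovable's product.

policies = {
    "profiles": "true",                  # policy exists, but allows every row
    "invoices": "user_id = auth.uid()",  # policy actually scopes rows to owner
}

def existence_only_scan(table: str) -> bool:
    """Passes if *any* policy exists -- the 'false sense of security'."""
    return table in policies

def permissiveness_scan(table: str) -> bool:
    """Also fails tables whose only protection is a trivial USING (true)."""
    clause = policies.get(table)
    return clause is not None and clause.strip().lower() != "true"

for table in sorted(policies):
    print(table, existence_only_scan(table), permissiveness_scan(table))
# "profiles" passes the existence check yet fails the stricter one.
```

Even the stricter check here is far from sufficient: as Palmer notes, a real audit would have to verify that each policy's logic actually matches what the application intends to expose, which no string-level test can guarantee.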