Wiz: Security lapses emerge amid the global AI race
According to Wiz, the race among AI companies is causing many to overlook basic security hygiene: 65 percent of the 50 leading AI firms the cybersecurity firm analysed had leaked verified secrets on GitHub. The exposures include API keys, tokens, and sensitive credentials, often buried in code repositories that standard security tools do not check.

Glyn Morgan, Country Manager for UK&I at Salt Security, described the trend as a basic, preventable error. “When AI firms accidentally expose their API keys they lay bare a glaring, avoidable security failure,” he said. “It’s the textbook example of a governance failure paired with a security misconfiguration, two of the risk categories that OWASP flags. By pushing credentials into code repositories they hand attackers a golden ticket to systems, data, and models, effectively sidestepping the usual defensive layers.”

Wiz’s report highlights an increasingly complex supply chain security risk. The problem extends beyond internal development teams: as enterprises increasingly partner with AI startups, they may inherit those startups’ security posture. The researchers warn that some of the leaks they found “could have exposed organisational structures, training data, or even private models.”

The financial stakes are considerable: the companies with verified leaks have a combined valuation of over $400 billion.

The report, which focused on companies listed in the Forbes AI 50, provides examples of the risks:

- LangChain was found to have exposed multiple Langsmith API keys, some with permissions to manage the organisation and list its members. This type of information is highly valued by attackers for reconnaissance.
- An enterprise-tier API key for ElevenLabs was discovered sitting in a plaintext file.
- An unnamed AI 50 company had a HuggingFace token exposed in a deleted code fork. This single token “allow[ed] access to about 1K private models”.
The same company also leaked WeightsAndBiases keys, exposing the “training data for many private models.”

The Wiz report suggests the problem is so prevalent because traditional security scanning methods are no longer sufficient. Relying on basic scans of a company’s main GitHub repositories is a “commoditised approach” that misses the most severe risks. The researchers describe the situation as an “iceberg”: the most obvious risks are visible, but the greater danger lies “below the surface”. To find these hidden risks, they adopted a three-dimensional scanning methodology they call “Depth, Perimeter, and Coverage”:

- Depth: The deep scan analysed the “full commit history, commit history on forks, deleted forks, workflow logs and gists”, areas most scanners “never touch”.
- Perimeter: The scan was expanded beyond the core company organisation to include organisation members and contributors, who might “inadvertently check company-related secrets into their own public repositories”. The team identified these adjacent accounts by tracking code contributors, organisation followers, and even “correlations in related networks like HuggingFace and npm.”
- Coverage: The researchers specifically looked for new AI-related secret types that traditional scanners often miss, such as keys for platforms like WeightsAndBiases, Groq, and Perplexity.

This expanded attack surface is particularly worrying given the apparent lack of security maturity at many fast-moving companies. The report notes that when the researchers tried to disclose the leaks, almost half of their disclosures either failed to reach the target or received no response.
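The “Coverage” and “Depth” ideas described above can be illustrated with a minimal pattern matcher. This is only a sketch, not Wiz’s actual tooling: the first two regexes reflect documented credential prefixes (`hf_` for HuggingFace tokens, `ghp_` for classic GitHub personal access tokens), while the generic rule is an assumption added for illustration.

```python
import re

# Illustrative secret patterns. Real scanners use far larger, provider-specific
# rule sets; the generic rule here is an assumption for the sketch.
SECRET_PATTERNS = {
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a blob of text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# A "Depth" scan would feed this function every blob from the full commit
# history, forks, workflow logs, and gists, not just the files at HEAD.
sample = 'token = "hf_' + "a" * 34 + '"'
print(find_secrets(sample))
```

The point of the “Coverage” dimension is simply that this rule table must keep growing as new AI platforms mint new token formats; a scanner that only knows cloud-provider key shapes will walk straight past a WeightsAndBiases or Groq key.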
Many firms lacked an official disclosure channel or simply failed to resolve the issue when notified.

Wiz’s findings serve as a warning for enterprise technology executives, highlighting three immediate action items for managing both internal and third-party security risk:

- Treat employees as part of the company’s attack surface. The report recommends creating a Version Control System (VCS) member policy to be applied during employee onboarding, mandating practices such as multi-factor authentication on personal accounts and a strict separation between personal and professional activity on platforms like GitHub.
- Evolve internal secret scanning beyond basic repository checks. The report urges companies to mandate public VCS secret scanning as a “non-negotiable defense”, adopting the aforementioned “Depth, Perimeter, and Coverage” mindset to find threats lurking below the surface.
- Extend the same scrutiny to the entire AI supply chain. When evaluating or integrating tools from AI vendors, CISOs should probe their secrets management and vulnerability disclosure practices. The report notes that many AI service providers are leaking their own API keys and should “prioritise detection for their own secret types.”

The central message for enterprises is that the tools and platforms defining the next generation of technology are being built at a pace that often outstrips security governance. As Wiz concludes, “For AI innovators, the message is clear: speed cannot compromise security”. For the enterprises that depend on that innovation, the same warning applies.
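The “Perimeter” recommendation, treating members’ personal accounts as part of the attack surface, amounts to enumerating scan targets beyond the core organisation. A minimal sketch of that target expansion is below; the endpoint paths follow the public GitHub REST API, while the organisation and member names are hypothetical, and authentication, paging, and the actual fetching are omitted.

```python
# Sketch of the "Perimeter" idea: expand the scan target list beyond the
# company organisation to members' personal public repos and gists.
# Endpoint paths follow the public GitHub REST API; member discovery and
# correlation with networks like HuggingFace/npm are out of scope here.
API = "https://api.github.com"

def perimeter_targets(org: str, members: list[str]) -> list[str]:
    """Return the API endpoints a perimeter-aware secret scan would cover."""
    targets = [f"{API}/orgs/{org}/repos"]  # the organisation's own repos
    for user in members:
        targets.append(f"{API}/users/{user}/repos")  # personal public repos
        targets.append(f"{API}/users/{user}/gists")  # public gists, often missed
    return targets

# Hypothetical organisation and members, for illustration only.
for url in perimeter_targets("example-ai-co", ["alice", "bob"]):
    print(url)
```

Each personal repo and gist found this way would then be scanned with the same depth (full history, forks) as the organisation’s own repositories.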