This simple infrastructure gap is holding back AI productivity


Organizations are investing heavily in AI, with some allocating up to 8% of their total revenue to AI tools for internal productivity. Yet the 2025 AI at Work report found that only 1% of companies consider themselves "mature" in their AI deployment. This is an uncomfortable contradiction.

The DORA Impact of Generative AI in Software Development report found that as AI adoption increased, delivery throughput declined by 1.5% and stability declined by 7.2%. Code is being written faster than ever, but it isn't reaching production any quicker.

The Widening AI Value Gap report by BCG found a similar result from a different angle: 74% of companies struggle to scale AI value, and only 21% of pilots reach production. The 5% generating real returns had first built fit-for-purpose technology architecture and data foundations. This suggests that the problem is not AI itself but the underlying infrastructure it is being added to.

How Productivity Gains Get Absorbed Downstream

Developers use AI to produce code more quickly, but that code must still pass through the delivery pipeline: code review, testing, security checks, and deployment. The time AI tools save upstream is absorbed downstream at the unchanged bottleneck.
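The bottleneck effect can be sketched with back-of-the-envelope arithmetic. The stage names and hour figures below are illustrative assumptions, not numbers from the cited reports; the point is only that halving the coding stage barely moves end-to-end lead time when review and testing stay fixed.

```python
# Hypothetical per-change stage durations in hours; values are made up
# for illustration and do not come from the DORA or BCG reports.
stages_before = {"coding": 8, "review": 16, "testing": 6, "deploy": 4}
stages_after = dict(stages_before, coding=4)  # AI halves coding time only

lead_before = sum(stages_before.values())  # 34 hours end to end
lead_after = sum(stages_after.values())    # 30 hours end to end

coding_cut = 1 - stages_after["coding"] / stages_before["coding"]
lead_cut = 1 - lead_after / lead_before

print(f"Coding time cut by {coding_cut:.0%}")   # 50%
print(f"Lead time cut by {lead_cut:.0%}")       # only ~12%
```

A 50% speedup in the coding stage yields roughly a 12% reduction in total lead time here, because review remains the dominant stage.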
AI may help developers write code more quickly, but the overall delivery timeline does not shrink proportionally. This pattern is well documented in the research, and AI-generated code exacerbates it by producing larger change sets that inflate code reviews, often making them conceptually too large to understand well.

The 2025 Stack Overflow Developer Survey reflects this reality: 44% of respondents are frustrated by "AI solutions that are almost right, but not quite," and 30% report that "debugging AI-generated code is more time-consuming."

Where Developer Time Actually Goes

Atlassian's State of Developer Experience 2025 report found that developers spend about 16% of their time writing code. The rest is devoted to technical decision-making, mentoring, project coordination, and deployment processes. Developers report that AI tools save them time; however, 63% say their leaders don't understand the pain points they face in their role. This points to a gap between where AI is assumed to deliver value and where developers actually experience friction.

AI excels at high-volume, routine tasks, pattern matching, and generating boilerplate code, but these account for only a portion of a developer's workload. A study by DX found that while developers appreciate faster code generation and help with routine tasks, what they actually want is automation for non-coding work such as onboarding, compliance, security, incident, and deployment management.

[Figure: Current AI Focus vs. Developer Needs]

Adding AI tools to workflows therefore addresses only a small part of the underlying issue.
Continuous Delivery (CD) practices (such as Continuous Integration (CI) automation, automated testing, deployment automation, and observability) handle the rest.

How Continuous Delivery Practices Unlock AI Value

Continuous Delivery is the ability to release changes to production safely, quickly, and sustainably.

[Figure: The Stages of a CD Pipeline]

The real question is how Continuous Delivery practices help organizations scale value from AI tools.

Automation can handle increased volume. As AI generates more code, automation keeps throughput consistent through the delivery pipeline by replacing manual processes with automated ones.

Faster feedback loops and incident response. Continuous Integration with automated testing gives developers fast feedback, helping them catch and fix issues early, when they are cheaper and easier to resolve.

AI scales on solid groundwork. Teams that successfully adopt AI typically already have these foundational practices in place, while those lacking them struggle to get value from their AI investments.

For a deeper look at how delivery bottlenecks affect AI value, Octopus Deploy's "AI Pulse Report" examines the code review and deployment process specifically, identifying where automation can reduce friction and where AI capabilities are most effectively applied.

Converting CD Into Software Delivery Performance

DORA metrics (deployment frequency, lead time for changes, time to restore after a failed deployment, and change failure rate) are practical measures here. The DORA State of AI-Assisted Software Development report found that high-performing teams consistently share specific Continuous Delivery characteristics: CI automation, build automation, deployment automation, observability, and auditability. These practices remove manual bottlenecks and create fast feedback loops by catching issues early, producing consistent artifacts, and removing the need for change freezes and manual approval gates.
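The DORA metrics above are straightforward to compute once deployments are recorded. The sketch below uses a hypothetical record format (the `at`, `failed`, and `commit_at` field names and the sample data are assumptions for illustration, not any real tool's schema) to derive deployment frequency, change failure rate, and lead time for changes.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when each deploy shipped, whether it
# failed, and when its change was committed. Data is illustrative only.
deployments = [
    {"at": datetime(2025, 6, 2), "failed": False, "commit_at": datetime(2025, 6, 1)},
    {"at": datetime(2025, 6, 4), "failed": True,  "commit_at": datetime(2025, 6, 3)},
    {"at": datetime(2025, 6, 6), "failed": False, "commit_at": datetime(2025, 6, 5)},
    {"at": datetime(2025, 6, 9), "failed": False, "commit_at": datetime(2025, 6, 8)},
]

# Deployment frequency: deployments per day over the observed window.
days = (deployments[-1]["at"] - deployments[0]["at"]).days or 1
deploy_frequency = len(deployments) / days

# Change failure rate: fraction of deployments that failed.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Lead time for changes: average time from commit to production.
lead_times = [d["at"] - d["commit_at"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"Avg lead time:        {avg_lead_time}")
```

Time to restore would be computed the same way from incident open/close timestamps; the value of the CD practices listed above is precisely that they make these timestamps available and trustworthy.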
All of these directly improve DORA metrics, which in turn raise software delivery performance, converting these practices into positive organizational outcomes.

In 2024, Amazon reported that improvements in developer experience and investments in automation infrastructure led to an 18.3% increase in weekly production deployments per builder and a 15.4% reduction in software delivery processing costs relative to the previous year, without relying solely on AI tools.

The order is essential. Organizations that strengthen their deployment pipeline before scaling AI investments are better positioned to translate productivity gains into tangible improvements in software delivery.

Looking Ahead

Organizations without a clear intent and objective for AI adoption amplify their existing problems rather than solving them. Identifying bottlenecks before scaling your AI investment is crucial to realizing its full potential; once you recognize and address these constraints, you can better judge what value AI tools can actually provide. Automating verification steps before human intervention and measuring delivery outcomes are essential, because those metrics show whether AI adoption is genuinely improving software delivery performance.

Building automated infrastructure requires substantial investment and long-term commitment, and every organization's delivery challenges differ. But the research is consistent: organizations that see strong AI returns typically build their foundations first, with a clear understanding of what they need AI to do. Without that foundation and clarity, AI investment may not translate into the delivery improvements teams are hoping for.

The post This simple infrastructure gap is holding back AI productivity appeared first on The New Stack.