As both corporate vice president of product in Microsoft's Developer Division and general manager of the company's first-party engineering systems, Amanda Silver oversees what might be the world's largest platform engineering operation.

Her team is responsible for ensuring that thousands of Microsoft engineers across hundreds of products build software that's secure, consistent and maintainable, while preserving the developer velocity that keeps Microsoft competitive.

Traditionally, this effort has been primarily human-driven: creating standards, generating thousands of action items and hoping developers implement them correctly, she said. But over the past year, Silver's team has adopted a different approach, one that replaces human toil with AI agents. The results represent a shift in how platform engineering will work at scale, she told The New Stack.

The Scale Conundrum: 10,000 Tickets and Counting

To understand the magnitude of Microsoft's platform engineering challenge, consider a recent security initiative. As part of Microsoft's Secure Future Initiative, which the company calls "the largest cybersecurity engineering project in history," Silver's team needed to update authentication libraries across all Microsoft codebases. This was a critical security requirement affecting thousands of repositories and millions of lines of code, she said.

"Previously, when we had to drive consistency across the entire organization, we would create action items for everybody across the organization that essentially was a ticket, and we would create tens of thousands of these," Silver said. "We would issue them across the entire organization that a human needed to then go and respond to and understand what we would call the troubleshooting guide, and then go and incorporate that troubleshooting guide into all of the instances in their codebases."

However, each ticket required human interpretation of sometimes complex technical matters. Implementation quality varied across teams, and progress was slow and difficult to track. Most importantly, Silver explained, it pulled developers away from feature work to focus on infrastructure compliance, the kind of "soul-draining" work that she argues AI should eliminate.

The authentication library update was just one example. Similar challenges emerged in updating dependencies with known vulnerabilities, modernizing build pipelines, ensuring consistent logging practices and implementing new security scanning tools. Each initiative meant thousands more tickets, more human interpretation and more inconsistent implementation, Silver said.

Enter the AI Agent: From Tickets To Autonomous Implementation

Silver's team has reimagined this process using "coding agents," AI systems that can understand complex technical requirements and autonomously implement changes across codebases.

"What I'm finding now, from a platform engineering perspective, is these capabilities of both coding agents, but also especially the coding agent kinds of capabilities, is that that's a way that I can actually start to much more rapidly accelerate the adoption of these standards across our overall codebases," Silver said.

So, instead of creating tickets for human developers, the platform engineering team feeds its troubleshooting guides and implementation requirements directly into AI agents. These agents then analyze codebases, understand the context of existing implementations and either autonomously submit pull requests or provide developers with nearly complete solutions that require minimal human review.

For the authentication library update, this meant the AI agents could analyze existing authentication patterns in each codebase, identify all locations requiring updates, generate contextually appropriate code changes, create pull requests with detailed explanations and handle edge cases and legacy implementation patterns, Silver noted.

"In some cases, it can actually autonomously submit the pull request. In other cases, it just helps get the developer, you know, get much further along," she explained.
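Silver did not share implementation details, but the workflow she describes, a troubleshooting guide handed to an agent that proposes changes and opens pull requests, can be pictured with a minimal sketch. The repository list, the run_coding_agent placeholder and the branch name below are hypothetical illustrations, not Microsoft's actual tooling; only the git and GitHub CLI (gh) commands are real.

```python
"""Minimal sketch of a guide-driven remediation loop (hypothetical).

Assumes a checked-out workspace per repo, an authenticated `gh` CLI,
and some coding-agent endpoint that can apply a troubleshooting guide.
None of this reflects Microsoft's internal implementation.
"""
import subprocess
from pathlib import Path

REPOS = ["service-a", "service-b"]                 # hypothetical repo checkouts
GUIDE = Path("auth-upgrade-guide.md").read_text()  # the troubleshooting guide


def run_coding_agent(repo_path: Path, guide: str) -> bool:
    """Placeholder for an agent call that edits files in repo_path
    according to the guide and returns True if anything changed."""
    raise NotImplementedError("swap in your coding agent of choice")


def submit_pr(repo_path: Path, title: str) -> None:
    # Standard git + GitHub CLI steps; the agent's edits become a pull
    # request that a human (or a policy gate) still reviews before merge.
    subprocess.run(["git", "checkout", "-b", "auth-lib-upgrade"], cwd=repo_path, check=True)
    subprocess.run(["git", "add", "-A"], cwd=repo_path, check=True)
    subprocess.run(["git", "commit", "-m", title], cwd=repo_path, check=True)
    subprocess.run(["git", "push", "-u", "origin", "auth-lib-upgrade"], cwd=repo_path, check=True)
    subprocess.run(["gh", "pr", "create", "--fill"], cwd=repo_path, check=True)


for name in REPOS:
    repo = Path("checkouts") / name
    if run_coding_agent(repo, GUIDE):
        submit_pr(repo, "Update authentication library per troubleshooting guide")
```

The interesting work sits inside the agent call and the review gate, but an outer loop of roughly this shape is what replaces the tens of thousands of tickets Silver describes.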
Beyond Authentication: Scaling Consistency Across the Engineering System

The authentication library project was just the beginning. Silver's team has applied similar approaches in other areas:

- Dependency management: Automatically identifying and updating packages with known vulnerabilities across thousands of repositories (see the sketch after this list). The AI agents understand dependency trees, compatibility requirements and testing implications, often handling updates that would have required significant manual research.
- Pipeline modernization: Updating build and deployment pipelines to use newer, more secure patterns. This involves understanding existing pipeline configurations, identifying optimization opportunities and implementing changes while preserving existing functionality.
- Security scanning integration: Implementing new security tools across codebases, including configuring appropriate scanning rules, handling exceptions for legacy code and ensuring results integrate with existing development workflows.
- Code quality standards: Enforcing new coding standards, refactoring patterns and best practices across diverse codebases. This previously required extensive code review and manual implementation.

Before the use of agents, each of these initiatives would have generated thousands of tickets and months of implementation work. With AI agents, Silver's team can push changes across Microsoft's entire codebase in weeks rather than quarters, with higher consistency and lower developer disruption, she said.
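To make the dependency management item concrete, here is a minimal, hedged illustration of the kind of check such an agent could run before proposing an update. It queries the public OSV vulnerability database (api.osv.dev); the pinned packages are made-up examples, and the article does not say which data sources Microsoft's agents actually use.

```python
"""Sketch: check pinned dependencies against the public OSV database.

The OSV query endpoint (https://api.osv.dev/v1/query) is real; the pins
below and the idea that an agent would call it this way are illustrative
assumptions, not a description of Microsoft's tooling.
"""
import requests

PINNED = {"requests": "2.19.0", "urllib3": "1.26.4"}  # example pins


def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=30,
    )
    resp.raise_for_status()
    # OSV returns {"vulns": [...]} when advisories match, {} otherwise.
    return [v["id"] for v in resp.json().get("vulns", [])]


for pkg, ver in PINNED.items():
    ids = known_vulns(pkg, ver)
    if ids:
        print(f"{pkg}=={ver}: advisories {ids} -> candidate for an agent-authored bump")
    else:
        print(f"{pkg}=={ver}: no known advisories")
```

The harder parts Silver describes, understanding dependency trees, compatibility and testing implications before a pull request goes out, sit on top of a lookup like this.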
Implications for Platform Engineering Teams

Microsoft's approach relies on several key technical components that other platform engineering teams should understand. These include context-aware code analysis, incremental implementation, developer workflow integration, feedback loops and risk assessment.

Meanwhile, Microsoft's experience suggests several important shifts for platform engineering: moving from enforcement to enablement, scaling expertise, faster iteration (of security updates, for example), reduced developer friction and more consistent quality.

The Broader Industry Impact

Indeed, Silver's team is employing techniques that could become standard across the industry. As she told The New Stack, "This is the epicenter of where developers get their tools and their platforms. And we have a tremendous opportunity to have huge impact across the entire industry."

The implications extend beyond Microsoft. One is a startup advantage: smaller companies can implement enterprise-grade platform engineering practices without large teams, potentially accelerating their ability to scale. Others include enterprise adoption and an evolution of the discipline itself, as platform engineers shift from manual implementation to AI orchestration.

In addition, platform teams must build confidence in AI-generated changes, which requires transparency, testing and gradual rollout processes, Silver said. Existing development tools and processes may also need modification to support AI-driven platform engineering workflows.

Looking Forward: The Platform Engineering Team of Tomorrow

Silver said she envisions a future in which platform engineering teams look fundamentally different: smaller, more strategic and focused on designing systems and standards rather than implementing them manually.

"We're tackling the most miserable, soul-draining parts of the job. We're transforming them so that developers can really focus on the creative and the aspects of the role that they really enjoy," she said.

For platform engineering, this means shifting from reactive maintenance to proactive system design. Instead of responding to security vulnerabilities with manual remediation, platform teams can build AI-driven systems that continuously monitor and automatically resolve issues across their entire infrastructure, Silver said.
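That closing idea, continuous monitoring plus automatic remediation, essentially composes the two earlier sketches: a scheduled loop that watches for policy violations and hands each finding to an agent. Everything named here (the scan interval, check_repo, dispatch_agent) is a hypothetical placeholder for whatever scanners and agents a team actually runs; the article does not describe Microsoft's internal system.

```python
"""Sketch: a continuous monitor-and-remediate loop (hypothetical).

check_repo() stands in for any policy or vulnerability scan, and
dispatch_agent() for handing the finding plus its troubleshooting guide
to a coding agent; neither reflects a real Microsoft interface.
"""
import time

SCAN_INTERVAL_SECONDS = 6 * 60 * 60  # e.g., rescan every six hours
REPOS = ["service-a", "service-b"]   # hypothetical repository inventory


def check_repo(repo: str) -> list[str]:
    """Return findings (vulnerable pins, outdated pipelines, missing scans, ...)."""
    raise NotImplementedError("plug in your scanners here")


def dispatch_agent(repo: str, finding: str) -> None:
    """Hand a finding to a coding agent that proposes a fix as a pull request."""
    raise NotImplementedError("plug in your coding agent here")


def run_forever() -> None:
    while True:
        for repo in REPOS:
            for finding in check_repo(repo):
                # Remediation still lands as a reviewable pull request,
                # preserving the human gate described earlier in the article.
                dispatch_agent(repo, finding)
        time.sleep(SCAN_INTERVAL_SECONDS)
```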