For this episode of The New Stack Agents, I sat down with Marcin Wyszynski, the technical co-founder of Spacelift and co-founder of OpenTofu, to talk about how AI is reshaping infrastructure as code (IaC).

Wyszynski, a former SRE at Google and Facebook, says he built Spacelift because IaC tooling like Terraform worked great for solo operators but broke down the moment teams got involved. After HashiCorp changed the Terraform license in 2023, he co-founded OpenTofu as a Linux Foundation-backed fork. But the shift Wyszynski is focused on now isn’t licensing; it’s what comes next for IaC and platform engineering in an age of rapidly evolving AI tools. Traditionally, IaC tooling assumed that the person writing the code understood what it did. That assumption no longer holds.

The Portuguese phrase book

On a recent customer discovery tour, Wyszynski tells The New Stack, the message he heard was unanimous: Nobody writes HCL — the HashiCorp Configuration Language at the core of Terraform and OpenTofu — by hand anymore. AI coding tools handle it now, and the learning curve for infrastructure configuration has collapsed.

But there’s a catch, Wyszynski argues. He illustrates it with a story about buying a Portuguese phrase book before a vacation in Portugal. Phrasebook in hand, you can walk up to a local and ask a question in perfect Portuguese. The problem is that the local answers in equally perfect Portuguese.

“He understood our question, but we have no way of understanding his answer,” Wyszynski says.

For infrastructure, he argues, that comprehension gap is dangerous. A bad application deploy can usually be rolled back, but a bad infrastructure change can destroy a production database. Yet as software development has sped up, customers are actively pushing for democratized access to infrastructure provisioning.
Data scientists shouldn’t have to file a Jira ticket and wait two weeks for a DevOps team to spin up the servers they need.

The fine line between ‘stupid’ and ‘ceremonial’

Before AI entered the picture, infrastructure teams had two options, Wyszynski says. One was what he calls “stupid”: clicking around in a cloud console with no record of what changed. The other was the full IaC “ceremony”: you write your code, open a PR, get a review, pass policy checks, and deploy. That is safe, but it takes time.

“If all you have is a choice between stupid and ceremonial, if all you have is a hammer, everything looks like a ceremonial problem,” Wyszynski says. The result, he adds, is that infrastructure teams now move much slower than application teams, and that’s creating a backlog.

Spacelift’s answer is a product called Intent. Rather than having an LLM write configuration code that then runs through the standard pipeline, Intent has the LLM query cloud provider schemas directly and create, update, or delete resources on the fly, in close to real time. When a resource needs to move to production, there’s a one-click path to generate full IaC code.

What’s important here is that the guardrails are deterministic, not just more LLM calls. Spacelift injects Open Policy Agent (OPA) policies as middleware that constrains what the LLM can provision. On top of that sits Spacelift Intelligence, which launched this March: a context layer that gives the LLM awareness of an organization’s existing projects, reusable modules, and enforced policies.

Balancing speed and control

The core question every platform team is wrestling with now, Wyszynski says, is how to balance speed and control. Some customers want engineers to experiment freely in throwaway AWS accounts and then import the results into Terraform for production. Others want every change to pass through code review.
Both are valid, he believes, and both are responses to the same underlying tension.

Spacelift itself eats its own dog food, Wyszynski says. The company’s teams define its infrastructure with OpenTofu but deploy its applications with AWS CloudFormation, because CloudFormation can roll back a deployment atomically if containers start dying.

Wyszynski argues this pragmatism matters, especially when it comes to trusting LLMs with production infrastructure. The standard enterprise objection is that LLMs aren’t deterministic, so you can’t trust them.

“Humans are non-deterministic as well,” he says. We’ve built guardrails for people for decades, he argues, and the same logic applies to LLMs. “We got used to the fact that humans need guardrails. There’s nothing new conceptually in having LLMs require guardrails as well.”
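To make the deterministic-guardrail idea concrete, a minimal Open Policy Agent rule of the kind described above could look like the sketch below. Everything here is illustrative: the package name, the shape of `input`, and the tagging convention are assumptions for the example, not Spacelift’s actual policy schema.

```rego
# Illustrative Rego sketch; package name and input shape are hypothetical.
package llm.guardrails

import rego.v1

# Deterministic rule: any LLM-initiated change to a resource tagged
# as production is denied, regardless of what the model proposed.
deny contains msg if {
    input.action in {"create", "update", "delete"}
    input.resource.tags.environment == "production"
    msg := sprintf("LLM-driven %s blocked for production resource %q", [input.action, input.resource.name])
}
```

Because the policy engine evaluates rules like this purely from the input document, the same request is always allowed or denied the same way; that is the determinism the LLM’s own output cannot provide.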