Developers are coding to a moving target, and nobody knows where AI lands next

In Barcelona this week, a bit of tension is in the air for certain attendees at the annual Mobile World Congress: How can technology companies get the most out of AI right now while staying nimble enough to handle any curveballs as AI evolves?

At the MWC conference session "Rewiring Business: The New AI-Driven Enterprise Ecosystem," speakers explored how to address this new challenge. Among them was Shahid Ahmed, global EVP of new ventures and innovation at NTT, the sixth-largest telecommunications company globally.

"I think the telecoms sector in particular is ripe for reinvention in the age of AI. If you look at older systems, they could all be managed by a single LLM," opined Ahmed during his remarks. "Today, thousands of people manage each telecom network. This is a reality that will surely change… and there are lessons for both developers and business here."

The last vestiges of utility telecoms

The conversation here centers on how we are now moving away from the "last vestiges of telecoms as a utility," as firms work to put AI behind billing systems, networking interconnects, and AI services all the way down to the command line.

But this is not a point-and-click, lift-and-shift migration. The telco industry and others need to realize that it is no longer a question of how fast we can move packets around; it is about using AI services to oversee increasingly distributed systems that demand complex orchestration.

"Frontier models will replace certain workflows, but there is still some way to go, i.e., there needs to be solid governance behind AI if it is going to handle billing systems for organizations running mission-critical services at this level," said Ahmed.

Intelligence moves to the edge, but at what cost?

Ahmed also warned that the next phase of this shift comes with a price that could undercut its own promise.

"Further, here in terms of trends, we can all agree that intelligence is now moving from the core to the edge, but GPU costs are at risk of spiraling if we do this, so software engineering teams will need to think about using small AI models to be more efficient.

"When physics-based (as opposed to language-based) models are used, there are key gains in terms of security, latency, and sovereignty. Telcos now have the opportunity to get it right and really differentiate themselves in front of customers."

Developers, "don't boil the ocean"

Ahmed's colleague, Parm Sandhu, is group VP for enterprise 5G and edge compute products and services at NTT. Talking about the need for balance in this industry, Sandhu told The New Stack that, when it comes to rewiring for AI-native, not everything needs to be a gargantuan public cloud undertaking running hundreds of millions of parameters.

"Let's say we're looking at developing AI applications for a factory. The deployment only needs to learn the processes and workflows that exist in its own particular universe, so AI models will be able to run on smaller infrastructure," said Sandhu.

"There's a sovereignty benefit here because data never needs to leave the factory, and the geopolitical ramifications of this have never been greater than right now."
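Sandhu's point about right-sizing AI to a site can be sketched as a placement decision. The sketch below is purely illustrative, not NTT's implementation: the domain names, the `Task` shape, and the routing rule are all hypothetical, but they capture the idea that sovereignty-restricted or in-domain work stays on small edge models while only unrestricted, out-of-domain tasks escalate to a larger cloud-hosted model.

```python
from dataclasses import dataclass

# Hypothetical placement policy: keep inference on factory hardware
# whenever the input data may not leave the site, and only send
# non-restricted, out-of-domain tasks to a larger cloud model.
# All domain names here are invented for illustration.

@dataclass
class Task:
    domain: str            # e.g. "assembly-line-qc"
    data_restricted: bool  # True if the data must stay on-site

# Workloads the small on-site model is trained to handle (hypothetical).
LOCAL_DOMAINS = {"assembly-line-qc", "predictive-maintenance"}

def choose_placement(task: Task) -> str:
    """Return 'edge-small-model' or 'cloud-large-model'."""
    if task.data_restricted or task.domain in LOCAL_DOMAINS:
        return "edge-small-model"   # sovereignty, latency, cost win
    return "cloud-large-model"      # fall back to a frontier model

print(choose_placement(Task("assembly-line-qc", True)))   # edge-small-model
print(choose_placement(Task("market-analysis", False)))   # cloud-large-model
```

The design choice mirrors Sandhu's argument: the restrictive branch is the default path, so data leaving the factory is the exception that must be justified, not the other way around.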
"So the message for developers here is that they don't have to boil the ocean, and they can help their department achieve ROI much faster than other deployments."

Sandhu qualified his comment by saying that software engineering teams will still need to think about how they approach orchestration in these environments; they will need to cover everything from inventory controls to model security across multiple models, each with its own security requirements.

NTT says its orchestration technologies operate at this level to analyze traffic patterns, providing observability into how data moves between machines and enabling network slicing (the use of virtual sub-networks running on shared infrastructure) at the private network level when necessary.

New threats for a new stack

As codebases in this shifting landscape develop with AI-native technologies at their heart, key questions will arise across the ancillary disciplines that support the software development lifecycle. Cybersecurity will change, penetration testing will have to improve, site reliability engineering will evolve, and observability will have to become more rigorous.

Noam Levy, founding engineer and field CTO at Groundcover, told The New Stack this week that enterprise leaders are being forced to experiment, perhaps faster than they would like.

"From an observability architecture perspective, the uncertainty enterprises face in this landscape requires them to rapidly adopt and experiment with AI, while continuously correcting their trajectory in motion," Levy said.

Levy told The New Stack that organizations need to adopt software engineering tools that shorten the feedback loop, deliver higher fidelity, and operate with minimal reliance on integrations.
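The slice-level observability NTT describes above can be pictured with a toy example. Nothing below is NTT's product: the slice names, capacities, and flow samples are invented, but the sketch shows the basic accounting an orchestrator performs when it watches traffic per virtual sub-network on shared infrastructure.

```python
from collections import defaultdict

# Illustrative only: per-slice traffic accounting for a private 5G
# network. Slice names and capacities are made up; a real orchestrator
# would ingest flow records exported by the network itself.

SLICE_CAPACITY_MBPS = {"robotics": 200, "cameras": 500, "office-it": 100}

def slice_utilization(flows):
    """flows: iterable of (slice_name, mbps) samples.

    Returns each observed slice's load as a percentage of its
    provisioned capacity, so an operator can spot a slice nearing
    saturation before it degrades.
    """
    totals = defaultdict(float)
    for slice_name, mbps in flows:
        totals[slice_name] += mbps
    return {name: round(100 * mbps / SLICE_CAPACITY_MBPS[name], 1)
            for name, mbps in totals.items()}

samples = [("robotics", 120.0), ("cameras", 90.0), ("robotics", 30.0)]
print(slice_utilization(samples))  # {'robotics': 75.0, 'cameras': 18.0}
```

The point of slicing is visible even in the toy numbers: the robotics slice can run hot without the camera traffic on the same physical network being affected, because each slice is measured and capped against its own capacity.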
This way, software engineers can focus on this demanding new frontier to enable a genuine human-out-of-the-loop model with accelerated trial and error.

David Girvin, AI security researcher at Sumo Logic, told The New Stack that developers are feeling the pinch.

"AI is moving faster than any traditional software development cycle, so expecting developers to 'keep up' is the wrong framing," Girvin explains. "Some developers have adopted AI tooling, but they're stuck building on a landscape that shifts every quarter. They're not responsible for rewiring the business, but they are the first group to feel the effects of every strategic AI decision a company makes."

Guardrails before features

Girvin suggests that developers succeed when leadership provides stable guardrails: clear data boundaries, defined model-control policies, and concrete expectations about what's allowed, what's measured, and what's off-limits. Without that, every AI feature becomes an experiment in both capability and risk.

"On the security side, Day 2 thinking still isn't natural for most teams. Not because developers don't care, most do, but because AI introduces attack surfaces they've never had to design around. On this side of the codebase, it's a brand-new set of problems: prompt injection, model drift, shadow data pipelines, and insecure agent behavior are not traditional threats and don't appear in legacy SDLC checklists. Which were ignored anyway. I know, I wrote them," Girvin said.

Where we go next is AI-native software application development as a de facto standard, with the promise of quantum acceleration not far behind. What may shape the next phase is how far proprietary closed models and open-source technologies go.
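To make the "guardrails before features" idea concrete, here is a minimal sketch of a pre-flight check that enforces the two boundaries Girvin names: a model-control allowlist and an explicit data boundary. Every rule, model name, and marker below is hypothetical, and a real deployment would pull these policies from governance tooling rather than hard-code them.

```python
# Hypothetical guardrail: before a prompt reaches any model, verify it
# against leadership-defined policy. Both lists are invented examples.

ALLOWED_MODELS = {"internal-small-v1"}           # model-control policy
BLOCKED_MARKERS = ("CUSTOMER_PII", "SECRET_")    # data-boundary markers

def guardrail_check(model: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an attempted model call."""
    if model not in ALLOWED_MODELS:
        return False, f"model '{model}' is not approved"
    for marker in BLOCKED_MARKERS:
        if marker in prompt:
            return False, f"prompt crosses data boundary: {marker}"
    return True, "ok"

print(guardrail_check("internal-small-v1", "summarise shift report"))
print(guardrail_check("frontier-xl", "summarise shift report"))
```

A check this simple obviously does not stop prompt injection or model drift on its own; its value is the one Girvin points to, in that it turns "what's allowed and what's off-limits" into an explicit, testable artifact instead of a per-feature judgment call.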