The Pentagon’s standoff with Anthropic raises a question for any CTO building on a single frontier AI model: If access changed tomorrow, how hard would it be to move, and what would that actually involve?

Axios reports that Secretary of Defense Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday morning to discuss the Department of Defense’s (DoD) use of Anthropic’s Claude.

DoD frustration

The DoD is frustrated by the restrictions Anthropic has placed on its use of Claude. Anthropic says it will loosen some restrictions but does not want its technology used for mass surveillance of Americans or for the development of weapons that fire without human involvement, Axios reports.

“It would be a massive task to offboard Anthropic, which is deeply entrenched, and replace it with another AI lab that currently has inferior capabilities,” the article states.

Really hard to move?

But what if it didn’t have to be so hard to move? Rob May, CEO of NeuroMetric AI, says it doesn’t.

NeuroMetric helps enterprises move beyond relying solely on frontier AI models such as Claude by analyzing AI traffic and routing queries to smaller, cheaper, and faster models where appropriate, including open source options or custom-built small models.

“So, what we do is companies plug us into their AI traffic, and we analyze it and say, ‘Well, did you know, like, maybe these queries here you can run over this open source model, these you could probably make your own,’” May tells The New Stack.

Overkill?

May argues that big frontier models are overkill for many tasks, too slow for multistep agent workflows, and expensive.
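The kind of routing May describes can be sketched in a few lines. This is a minimal illustration, not NeuroMetric’s actual system: the model names, cost figures, and complexity heuristic below are all hypothetical assumptions.

```python
# Hypothetical sketch of query routing with failover: cheap, fast models
# handle simple queries, a frontier model handles the rest, and each tier
# fails over to the next if a call errors out. Model names, costs, and
# the heuristic are illustrative assumptions only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float   # USD, illustrative figures
    call: Callable[[str], str]  # returns a completion; may raise on failure

def looks_simple(query: str) -> bool:
    """Toy complexity heuristic: short queries with no reasoning keywords."""
    hard_markers = ("plan", "analyze", "multi-step", "compare")
    return len(query.split()) < 30 and not any(m in query.lower() for m in hard_markers)

def route(query: str, small: list[Model], frontier: Model) -> tuple[str, str]:
    """Try cheap models first for simple queries; fail over down the chain."""
    candidates = (small + [frontier]) if looks_simple(query) else [frontier]
    for model in candidates:
        try:
            return model.name, model.call(query)
        except Exception:
            continue  # failover: try the next model in the chain
    raise RuntimeError("all models in the chain failed")

# Stub "models" standing in for real API clients.
tiny = Model("tiny-oss-7b", 0.0002, lambda q: f"[tiny] {q[:20]}...")
big = Model("frontier-xl", 0.0150, lambda q: f"[frontier] {q[:20]}...")

name, _ = route("What are your support hours?", [tiny], big)
print(name)  # -> "tiny-oss-7b": the simple query never reaches the frontier model
```

In a real deployment the heuristic would be replaced by the traffic analysis May describes, but the shape — a routing decision, an ordered chain of candidates, and failover on error — is the orchestration layer in miniature.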
NeuroMetric’s tooling handles model evaluation, orchestration, and automated small model generation.

“The big frontier model lab companies are sort of like the mainframe era … you need these big model companies to show you what AI models could do, but what we noticed is people that move to these models and use them for a while start to realize, like, half of your AI queries don’t need to go to Anthropic or OpenAI. It’s overkill, and you’re paying too much,” May says.

Moreover, “The big models [represent] the smartest person you’ve ever worked with. Would you go to that person with all your really simple stuff? Like, ‘Hey, can you answer this simple customer support question?’ No, it’s a waste of their time. Your AI workloads are going to be structured similarly,” May adds.

Enterprises need two things

According to May, enterprises need two things: an orchestration layer that routes requests to multiple models with failovers, and an evaluation framework for testing new models against real workloads on cost, speed, and accuracy.

May also believes the DoD should be fine-tuning open source models, such as Meta’s Llama, or training its own models internally with techniques like Reinforcement Learning from Human Feedback, rather than relying on commercial providers.

“The future of warfare is AI-driven, and the DoD is going to need some of the stuff,” May says. “Now, my assumption is that the DoD should be building their own models.”

NeuroMetric advertises its services as helping enterprises reduce costs and speed up AI workflows.

“If you have a 12-step agent workflow and you have to call Claude at every step, and that’s an 800 millisecond to two-and-a-half second delay, you have a 25-second workflow,” May tells The New Stack. “So, if you can run smaller models for some of those steps, it’s cheaper, it’s faster.”

A tough position

Whether due to usage policy conflicts, pricing, or geopolitical pressure, the DoD situation is not a one-off.
Anthropic’s willingness to enforce usage guardrails even against a $200 million government client shows that frontier AI vendors will prioritize their own policy decisions over customer needs.

“Should they [Anthropic] violate their Terms of Service for the federal government?” May asks. “If the federal government wants to use the models to help plan the destruction of a drug boat in the Caribbean — you wouldn’t tolerate that for anybody else. So, it puts Anthropic in a tough position.”

Indeed, Axios reports that the morning meeting will not be a friendly one. “This is a sh*t-or-get-off-the-pot meeting,” a senior Defense official tells the publication.

The post The Pentagon’s Anthropic problem is every enterprise’s AI problem appeared first on The New Stack.