The biggest obstacle to shipping AI agents in the enterprise isn’t model quality. It’s trust. Or at least, that’s the argument VAST Data is making this week at its first VAST Forward user conference.

The company may not be a household name yet, but it has quietly grown into one of the fastest-scaling players in data and AI infrastructure. Its more than 700 customers include some of the best-known hosting services, frontier AI labs, unicorn startups, and neoclouds, as well as established enterprises.

“One of the primary blockers to broad adoption of AI within organizations is trust in the models,” VAST co-founder Jeff Denworth said during a press briefing ahead of the event. “Trust in what the models have been trained upon, trust in what the models are allowed to do with different data or different tools, and then ultimately, trust in agents that use these models that talk to each other and talk to tools and things like that.”

Unsurprisingly, it’s this trust issue that is central to the company’s announcements.

Founded in 2016 with a focus on disaggregating storage and compute, VAST has seen its contracted annual recurring revenue exceed $500 million this year, with total sales roughly tripling year over year. According to Denworth, the company has also eclipsed $4 billion in all-time software bookings and is now operating-income positive while sitting on about $1 billion in cash.

VAST customers and partners.
(Credit: The New Stack)

It’s this kind of growth that has allowed the company to make the aggressive investments necessary to expand into the AI field and launch what it calls “the first AI Operating System,” which natively combines storage, database, and compute.

VAST’s product suite currently includes its core data management and storage systems, but in 2025 it also launched VAST AgentEngine, a low-code tool for building, deploying, and orchestrating agentic workflows. Like other data-centric companies, it has also launched tools for vector storage and search on its platform, which form the basis of how AI models access proprietary data through retrieval-augmented generation (RAG).

The company’s latest set of updates builds on these products and expands them by addressing specific enterprise needs that are becoming more apparent as companies try to move their AI projects into production.

PolicyEngine: AI guardrails at the infrastructure layer

The first new capability the company is launching at its conference is PolicyEngine. Since AI agents and the large language models (LLMs) they are built upon aren’t deterministic, there has to be a system of “trust but verify” to enable their adoption in the enterprise.

PolicyEngine sits between agents and other agents, the MCP tools they call, and the memory and data stores they work with.
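VAST hasn’t published a PolicyEngine API, so the following is only a rough sketch of what an infrastructure-level mediation layer of this kind looks like in general. Every class, function, and policy name below is hypothetical, and the redaction logic is deliberately trivial:

```python
import re
import time

# Hypothetical sketch of an infrastructure-level policy layer: every tool
# call an agent makes passes through check() first, which can allow it,
# block it, or redact sensitive fields inline before they reach the agent.

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy stand-in for a PII detector

class PolicyLayer:
    def __init__(self, blocked_tools):
        self.blocked_tools = set(blocked_tools)
        self.audit_log = []  # append-only record of every decision

    def check(self, agent, tool, payload):
        decision = "block" if tool in self.blocked_tools else "allow"
        redacted = SSN.sub("[REDACTED]", payload)  # inline transformation
        self.audit_log.append({"ts": time.time(), "agent": agent,
                               "tool": tool, "decision": decision,
                               "payload": redacted})  # replayable later
        if decision == "block":
            raise PermissionError(f"{tool} is not permitted for {agent}")
        return redacted

policy = PolicyLayer(blocked_tools={"delete_records"})
safe = policy.check("billing-agent", "lookup_customer",
                    "name=Ada, ssn=123-45-6789")
# safe no longer contains the raw SSN, and the call is in the audit log
```

In a real system the decision and the transformations would themselves be policy-driven rather than hardcoded; the point of the sketch is only the shape: a single choke point that allows, blocks, redacts, and logs everything that flows through it.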
VAST describes it as a decisioning framework that can block or allow actions based on customer-defined policies, and that mediates all inputs and outputs in the system.

What is important here is that PolicyEngine can also apply AI-powered data transformations inline, redacting and reformatting data before it reaches an agent, so that sensitive information never surfaces where it shouldn’t. Every action that passes through the system is logged in tamper-proof audit logs, allowing customers, especially those in regulated industries, to replay any agent action after the fact.

The company said PolicyEngine will roll out over the course of 2026.

Building guardrails for AI systems is increasingly table stakes for engineering teams, but those guardrails are often bolted on after the fact. The idea behind PolicyEngine is to push enforcement into the infrastructure layer, enabling consistent policies across every agent and workflow.

VAST’s Denworth admitted that this is an opinionated architecture.

“If you want to get the full benefit of this, you need to put your data — and you need to run your compute — within a VAST cluster,” he said. “It’s not really meant to go and orchestrate a lot of independent services. We’re really trying to build a single operating system. And the reason that we focus on this is we think applying these control points can only be done with a unified system in a way that it’s truly trustable.”

TuningEngine: closing the fine-tuning loop

The other major update to VAST’s AI capabilities is the launch of its TuningEngine. As the name implies, this is about ensuring that models, and the agents built on them, can learn from user feedback.

But the company also framed fine-tuning as a security problem, not just a capability problem. To trust a model, you need to trust what it was trained on.
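The loop TuningEngine describes (capture agent telemetry, distill it into training artifacts, tune, evaluate, redeploy) can be sketched at a high level. Every function below is a hypothetical stand-in, not a VAST API, and the “tuning” step is a placeholder for real methods such as LoRA or supervised fine-tuning:

```python
# Hypothetical sketch of a feedback-driven tuning loop; none of these
# functions correspond to a real VAST interface.

def build_artifacts(telemetry):
    # Distill raw agent interactions into (prompt, preferred-response)
    # pairs, keeping only the turns the user rated positively.
    return [(t["prompt"], t["response"]) for t in telemetry
            if t.get("rating", 0) > 0]

def tune(model, artifacts):
    # Placeholder for LoRA / supervised fine-tuning: here we only record
    # how many examples the new model version was trained on.
    return {"version": model["version"] + 1,
            "trained_on": model["trained_on"] + len(artifacts)}

def evaluate(model):
    # Placeholder for an eval harness gating deployment.
    return model["trained_on"] > 0

model = {"version": 1, "trained_on": 0}
telemetry = [
    {"prompt": "summarize Q3", "response": "...", "rating": 1},
    {"prompt": "delete logs", "response": "...", "rating": -1},
]
artifacts = build_artifacts(telemetry)   # keeps the 1 positively rated turn
candidate = tune(model, artifacts)
if evaluate(candidate):
    model = candidate                    # redeploy as the new iteration
```

The notable property is that the loop is closed: an agent’s rated interactions become the training set for its next iteration, which is exactly why VAST frames what goes into that set as a security question.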
And if your platform doesn’t handle fine-tuning, VAST argues, that’s a gap.

“Our conclusion was if we don’t handle fine-tuning, then that’s going to be a security gap that ultimately makes AI less trustable,” Denworth said.

TuningEngine captures telemetry from agents running in VAST’s AgentEngine, processes it into artifact tables, and feeds those into its tuners. VAST says it uses popular methods like low-rank adaptation (LoRA), supervised fine-tuning, and reinforcement learning for this. The tuned models are then evaluated within the system and deployed back into the agent runtime as new iterations.

There are other fine-tuning systems on the market, including those from major cloud computing players like AWS and Google, but VAST argues that these don’t work for everyone, especially customers who need to deploy entirely within their own firewall.

VAST says TuningEngine will take about a year to reach full deployment, but it is already working with early customers.

GPU acceleration in the data layer

While PolicyEngine and TuningEngine will take a while to roll out, VAST also announced a major new collaboration with NVIDIA to launch an end-to-end accelerated data stack. The company argues that accelerated computing hasn’t yet reached the data layer, but its services can now run directly on GPU-accelerated servers. VAST believes this unified platform will eliminate many of the data bottlenecks its customers currently experience.

It’s important to note that VAST is not getting into the hardware business. Its OEM partners, such as HPE, Lenovo, and Supermicro, will handle that. VAST co-founder Renen Hallak stressed this during a separate press conference at the event. “We’re not doing any hardware or any chips, but we are partnering with NVIDIA and other chip vendors such that our software layer works really well with the underlying hardware, the networking hardware, with the SSDs, etc.
We’re filling in that software infrastructure layer, that cloud services layer that I call the operating system. That’s our part,” he explained.

The marquee launch here is CNode-X, a new GPU-accelerated server designed in collaboration with (and certified by) NVIDIA, which brings VAST’s high-performance storage servers to large GPU clusters.

VAST is integrating several NVIDIA acceleration libraries: cuVS for hardware-accelerated vector search; Sirius for GPU-accelerated SQL queries (the company claims 44 percent faster queries at 80 percent lower cost); NIM inference microservices for running RAG pipelines natively in the cluster; and NVIDIA CMS, the context memory extensions, to accelerate access to shared key-value caches and lower time to first token.

NVIDIA founder and CEO Jensen Huang, unsurprisingly, agrees. “NVIDIA is reinventing every pillar of computing for AI. With VAST Data, we’re transforming the storage of AI infrastructure,” he said in a statement in VAST’s announcement.
“CNode-X is CUDA-accelerated at every layer to give AI agents persistent memory so they can work on complex problems over days or weeks, and eventually years, without forgetting — opening the world to the next frontier of AI.”

To round things out, and to put a point on its focus on trust, VAST is also partnering with CrowdStrike to embed that company’s threat detection and response capabilities directly into its AI systems.

All of these new capabilities, VAST believes, will bring it closer to building systems “that automatically evolve as they interact with data from the natural world,” as the company puts it in its announcement.

“With today’s announcement, VAST AI OS finally creates a closed operational computing loop that observes, reasons, acts, evaluates, and improves — all while fortifying security and explainability by unifying and safeguarding all activities in one unified system,” the company says. Since those tools will take a while to roll out, though, we will have to wait and see what this looks like in practice. All of these tools, after all, are developing faster than enterprises can adopt them.

The post VAST Data tackles the enterprise AI trust gap appeared first on The New Stack.