Cisco Expands Secure AI Factory with NVIDIA!


Cisco Strengthens Its Partnership with NVIDIA to Deliver Unified AI Infrastructure, Extending Capabilities from Centralized Data Centers to Decentralized Edge Locations Where Real-Time Decisions Are Critical

SAN JOSE, Calif., March 16, 2026 /PRNewswire/ — In a significant advancement aimed at accelerating enterprise AI adoption, Cisco (NASDAQ: CSCO) today announced a major expansion of its Secure AI Factory in collaboration with NVIDIA. The initiative gives organizations a comprehensive, integrated framework for deploying artificial intelligence workloads across their entire infrastructure — from centralized, high-capacity data centers to distributed local sites where data is generated and instantaneous decisions are required.

The expanded solution is designed to serve a broad spectrum of users, including large enterprises, neocloud providers, sovereign cloud operators, and service providers, enabling them to move AI projects from experimental pilots to full-scale production without the complexity of integrating disparate, multi-vendor systems. By streamlining this process, Cisco compresses deployment timelines from months to weeks while embedding security as a foundational element from the outset.

"Most organizations understand the potential for AI to transform their businesses, but they're navigating how to deploy the technology safely and at scale," said Chuck Robbins, Chair and CEO of Cisco. "In partnership with NVIDIA, we're solving that challenge with an architecture that sets a new standard for performance — making it simpler to deploy, operate, and secure AI infrastructure."

Jensen Huang, founder and CEO of NVIDIA, echoed this sentiment, emphasizing the importance of security across the AI stack.
"AI factories are transforming every industry, and security must be built into every layer — from silicon to software — to protect data, applications, and infrastructure," Huang stated. "Together, NVIDIA and Cisco are building the secure foundation for AI infrastructure — core to edge — so companies can scale intelligence with confidence."

AI That Operates Across the Entire Distributed Enterprise, Not Solely Within the Data Center

A central theme of the announcement is the recognition that AI inference increasingly occurs at the edge — where data originates and where time-sensitive decisions cannot afford the latency of a round trip to a centralized data center. Use cases span critical environments such as hospital floors requiring immediate diagnostic support, manufacturing facilities analyzing video feeds in real time to ensure worker safety, and moving vehicles that depend on instantaneous processing for navigation and operational integrity. This distributed reality fundamentally reshapes infrastructure requirements: inference workloads must run locally, closer to data sources, devices, and the moment a decision must be executed.

To address this shift, Cisco and NVIDIA are enabling organizations to support edge inferencing use cases through two key initiatives:

Transforming the Enterprise Edge: Cisco now supports NVIDIA RTX PRO™ 4500 Blackwell Server Edition GPUs across its Unified Computing System (UCS) and Unified Edge product portfolios. This integration lets enterprises run mission-critical AI workloads at the edge without the substantial energy costs and physical footprint typically associated with data center-scale hardware. The solution delivers enterprise-grade performance in a form factor optimized for space-constrained and power-sensitive edge environments.
Transforming the Service Provider Edge: In parallel, Cisco today introduced the Cisco AI Grid with NVIDIA reference design. This architecture combines the capabilities of Cisco's Mobility Services Platform with NVIDIA RTX PRO Blackwell Series GPUs, enabling service providers to leverage their existing network infrastructure to offer managed edge AI services. The approach delivers carrier-grade reliability, performance, and data sovereignty, allowing providers to meet enterprise customer demand for secure, localized AI processing while utilizing their established network assets.

Driving Performance and Efficiency for Massive-Scale AI Factories

Building on the momentum of recently launched systems powered by Cisco Silicon One G300 (designed for scale-out architectures) and the P200 (optimized for scale-across configurations), Cisco continues to push the boundaries of performance while simplifying deployment. Key advancements include:

Next-Generation Performance: Cisco has unveiled its latest high-speed switches engineered to power the most demanding AI workloads. This includes a new 102.4 terabits per second (Tbps) Cisco N9100 switch powered by NVIDIA Spectrum-6 Ethernet switch silicon, a significant leap in data throughput capacity. It joins the now generally available 800G N9100 switch, powered by NVIDIA Spectrum-4 Ethernet switch silicon, providing organizations with a range of high-performance networking options tailored to varying scale and performance requirements.

Rapid Deployment Through Simplified Operations: Cisco Nexus Hyperfabric, now integrated as a component of the broader Cisco Nexus One platform, will extend its support to Cisco N9000 Series switches, notably the N9100 Series powered by NVIDIA Spectrum-X Ethernet silicon.
This integration transforms what has traditionally been a complex, multi-vendor integration puzzle into a streamlined, full-stack solution. By unifying management and operations, organizations can dramatically reduce AI infrastructure deployment times and ease the operational burden on IT teams.

For customers constructing large-scale AI factories, Cisco now offers two validated architectural paths. The first is an AI factory based on a reference architecture compliant with the NVIDIA Cloud Partner (NCP) program. The second is a Cisco Cloud Reference Architecture built on Cisco Silicon One, which adheres to the same foundational design principles, giving customers flexibility and choice without compromising performance or reliability.

Security Deeply Integrated Into Every Layer of the AI Stack

In an era where AI models represent high-value intellectual property and AI agents operate with increasing autonomy — taking actions, making decisions, and interacting with other agents — security can no longer be an afterthought. Cisco is embedding comprehensive protection into the fabric of its Secure AI Factory with NVIDIA, safeguarding against both external threats and anomalous behavior from autonomous agents. This multi-layered approach encompasses:

Securing AI Infrastructure: The security of AI is fundamentally dependent on the integrity of the hardware on which it runs. Recognizing that attackers increasingly target infrastructure layers, Cisco Hybrid Mesh Firewall delivers consistent security policy enforcement across a diverse array of enforcement points, including network switches, workload agents, and other critical infrastructure components. Expanding this capability, Cisco is extending Hybrid Mesh Firewall policy enforcement to NVIDIA BlueField data processing units (DPUs) embedded within NVIDIA GPU servers connected to Cisco Nexus One fabrics.
This deeper integration enables threats to be blocked at the server level before they can propagate to organizational data, providing protection from the inside out without compromising performance.

Securing AI Agents: Cisco AI Defense provides robust model security and automated vulnerability testing. Through integration with NVIDIA NeMo Guardrails — a component of NVIDIA AI Enterprise software — Cisco now adds purpose-built guardrails for AI agents operating at the edge, helping AI developers and security teams stay ahead of emerging threats and maintain trust in AI deployments. As AI deployments become increasingly distributed, with agents at edge locations frequently interacting with core systems to execute complex workflows, AI Defense now extends its protective capabilities to secure these agent-to-agent interactions, ensuring end-to-end governance across the distributed environment.

Cisco Secures Enterprise AI Agent Development

Building on its commitment to infuse security across all layers of AI infrastructure and support the emerging agentic workforce, Cisco also announced that Cisco AI Defense will provide support and security for NVIDIA's OpenShell runtimes. OpenShell is a component of the NVIDIA Agent Toolkit, an open platform for agent development. Cisco AI Defense adds critical controls and guardrails to govern agent and tool actions, ensuring that every tool use and operational action is continuously monitored and validated. By providing this level of oversight, Cisco AI Defense enables enterprises to confidently deploy AI agents to manage critical workflows, bridging the gap between rapid innovation and robust risk management. This integration allows organizations to trust that their autonomous systems will operate reliably, securely, and in alignment with established governance policies.
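To make the idea of governing agent tool actions concrete, the following is a minimal, purely illustrative sketch of an allowlist-style guardrail that checks each proposed tool call before it executes. All names, policies, and the policy structure here are hypothetical; this does not represent the Cisco AI Defense or NVIDIA Agent Toolkit APIs.

```python
# Illustrative sketch only: a minimal allowlist guardrail for agent tool calls.
# Policy names and structure are hypothetical, not a real product API.

# Each permitted tool maps to simple policy constraints.
ALLOWED_TOOLS = {
    "read_logs": {"max_lines": 1000},
    "restart_service": {"services": {"web", "cache"}},
}

def validate_tool_action(tool: str, args: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent tool call."""
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        return False, f"tool '{tool}' is not on the allowlist"
    if tool == "read_logs" and args.get("lines", 0) > policy["max_lines"]:
        return False, "requested more log lines than policy permits"
    if tool == "restart_service" and args.get("service") not in policy["services"]:
        return False, f"service '{args.get('service')}' may not be restarted by agents"
    return True, "action permitted"

# Every action is checked before execution; decisions can be logged so that
# tool use is continuously monitored, in the spirit described above.
print(validate_tool_action("read_logs", {"lines": 200}))           # allowed
print(validate_tool_action("restart_service", {"service": "db"}))  # blocked
```

In a real deployment this kind of check would sit between the agent runtime and its tools, with every decision forwarded to a monitoring pipeline rather than simply printed.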