Efficiency at Scale: NVIDIA, Energy Leaders Accelerating Power‑Flexible AI Factories to Fortify the Grid


CERAWeek — dubbed the Davos of energy — is where policymakers, producers, technologists and financiers gather to discuss how the world powers itself next. At the conference last week, NVIDIA and Emerald AI unveiled a new way forward — treating AI factories not as static power loads but as flexible, intelligent grid assets. The collaboration unifies accelerated computing, AI factory reference architectures and real-time energy orchestration, helping large AI deployments connect to the grid faster, operate more efficiently and fortify system reliability.

Built on the NVIDIA Vera Rubin DSX AI Factory reference design and Emerald AI’s Conductor platform, the approach brings together compute, power, networking and control in a single architecture. The result is an AI factory that can generate high-value AI tokens while dynamically responding to grid conditions — flexing when needed, supporting reliability and reducing the need to overbuild infrastructure for peak demand.

AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power and Vistra are working to build the energy generation capacity needed to meet rapidly growing power demand. The companies plan to collaborate on optimized generation strategies to support AI factories built on the NVIDIA and Emerald AI architecture, including hybrid projects that use co-located power to accelerate time to power while delivering value to the broader grid.

By pairing large AI loads with flexible operations, new generation resources and intelligent controls, this approach strengthens grid reliability. It’s an important milestone in grid resilience, supported by an ecosystem for advanced AI factories. This new computing infrastructure paradigm — described by NVIDIA founder and CEO Jensen Huang as a five-layer AI cake — has energy as its foundational layer.
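The grid-responsive behavior described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the function names, thresholds and the grid-stress signal are assumptions for clarity, not the actual interfaces of Emerald AI's Conductor platform.

```python
# Toy sketch of a power-flexible AI factory. All names and numbers are
# illustrative assumptions, not real NVIDIA or Emerald AI APIs.

def tokens_per_second_per_watt(tokens_per_second: float, watts: float) -> float:
    """The efficiency metric discussed below: AI tokens generated
    per second for each watt of power drawn."""
    return tokens_per_second / watts

def flex_power_cap(grid_stress: float, rated_watts: float,
                   min_fraction: float = 0.6) -> float:
    """Scale the site's power cap down as grid stress (0.0 to 1.0) rises,
    never dropping below a floor that keeps critical workloads running."""
    fraction = 1.0 - grid_stress * (1.0 - min_fraction)
    return rated_watts * fraction

# Normal conditions: the 100 MW site runs at full rated power.
cap_normal = flex_power_cap(grid_stress=0.0, rated_watts=100e6)
# Grid emergency: the site sheds load to support reliability.
cap_stressed = flex_power_cap(grid_stress=1.0, rated_watts=100e6)
```

In a real deployment, the stress signal would come from the utility or market operator, and the cap would be enforced by rescheduling or slowing AI workloads rather than a simple scalar limit.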
Driving Improvements in Tokens Per Second Per Watt

Power constraints are reshaping AI data centers, with energy efficiency, or performance per watt (specifically, tokens per second per watt), emerging as the defining metric of modern computing infrastructure. By prioritizing computational efficiency, organizations can lower operating costs, maximize revenue and create a resilient digital infrastructure for businesses and consumers across America and worldwide.

“Power is a concern, but it’s not the only concern,” Huang said on a recent Lex Fridman podcast. “That’s the reason why we’re pushing so hard on extreme codesign, so that we can improve the tokens per second per watt orders of magnitude every single year.”

NVIDIA has a long history of driving performance and energy efficiency. From the NVIDIA Kepler GPU in 2012 to the NVIDIA Vera Rubin platform this year, the number of tokens generated within the same power budget has increased by more than 1 million times. It takes industry collaboration across the five-layer AI cake — from energy to chips, infrastructure, models and applications — to make this happen.

Robotics, Digital Twins and AI Upskilling Drive Energy Advances

NVIDIA ecosystem partners showcased at the event how AI, simulation and workforce innovation are accelerating the energy infrastructure needed to support the intelligence era. Announcements from Maximo, TerraPower and Adaptive Construction Solutions exemplify how AI is compressing timelines across construction, power generation and talent development.

Maximo, a solar robotics company incubated at AES, announced the completion of a 100-megawatt robotic solar installation at AES’ Bellefield site. Using AI-driven robotics developed with NVIDIA accelerated computing, NVIDIA Omniverse libraries and the NVIDIA Isaac Sim framework, Maximo demonstrated that autonomous installations can now operate reliably at utility scale.
The approach improves installation speed, safety and consistency, helping close the gap between rising electricity demand and construction capacity.

TerraPower, working with SoftServe, previewed an NVIDIA Omniverse-powered digital twin platform designed to dramatically shorten advanced nuclear plant siting and design timelines. By applying AI and simulation to early-stage engineering, the platform reduces design cycles from years to months, accelerating deployment of TerraPower’s Natrium energy plants while improving design and grid integration.

Adaptive Construction Solutions announced a national registered apprenticeship initiative, in collaboration with NVIDIA, to help build the skilled workforce required for AI factories and energy infrastructure. The program aims to scale training for critical trades, expanding access to high-demand careers while supporting the rapid buildout of AI-driven power systems.

The efforts articulated how AI, digital twins and workforce innovation are converging to deliver faster, more resilient energy infrastructure.

Coming Together on Scaling AI Factories for Grid Reliability

GE Vernova, Schneider Electric and Vertiv highlighted how digital twins, validated reference designs and converged infrastructure are becoming essential to scaling AI factories as reliable grid participants. The announcements address the “power-to-rack” challenge — designing AI infrastructure as an integrated energy and compute system from day one.

GE Vernova outlined how high-fidelity digital twins aligned with the NVIDIA Omniverse DSX Blueprint enable utilities and developers to simulate grid behavior, substations and AI factory loads together before deployment. Such system-level modeling helps validate interconnection strategies, reduce risk and accelerate time to power in constrained grid environments.

Schneider Electric announced new validated NVIDIA Vera Rubin reference designs and lifecycle digital twin architectures developed with AVEVA.
By simulating power, cooling and controls in Omniverse, Schneider enables operators to optimize performance per watt, validate designs before buildout and operate AI factories more efficiently and predictably at scale.

Vertiv outlined converged, simulation-ready physical infrastructure built on repeatable power and cooling building blocks. Integrated with the Vera Rubin DSX reference design, Vertiv’s approach reduces design and deployment complexity while supporting faster, more confident scaling of AI factories.

Together, these industry efforts provide a digital path forward, including the validated architectures and physical infrastructure needed to turn AI factories into flexible, grid-aware assets for efficiently powering the world.

Learn more about how NVIDIA and its partners are advancing energy solutions with AI and high-performance computing.