Dr. Allen Yang, University of California, Berkeley

As companies and governments pour billions of dollars into data centers and graphics processors in a global race to expand artificial intelligence capacity, some researchers say the industry’s obsession with the cloud risks missing where the next major breakthrough is likely to occur: not in servers, but in machines that operate in the physical world.

“There is no cloud on the Moon,” said Allen Yang, founding executive director of the Vive Center for Enhanced Reality at the University of California, Berkeley, at a CES 2026 forum in Las Vegas. “If we want intelligent systems that can drive, rescue, build, explore and care for people, that intelligence has to live on the device itself.”

Yang argues that while large language models and cloud-based AI have had their defining moments, from IBM’s Deep Blue to DeepMind’s AlphaGo, so-called “physical AI,” which controls vehicles, robots and machines, has yet to reach a watershed breakthrough of its own.

“We have not had our AlphaGo moment in the physical world yet,” he said. “But that is where the next one will happen.”

The comments come as the artificial intelligence sector enters what many executives describe as a “GPU race,” with companies competing to secure ever larger pools of computing power to train and deploy models.

At CES this year, chipmakers, cloud providers and AI startups promoted systems designed to support ever-larger models, longer reasoning chains and more complex generative tasks. Investors increasingly track metrics such as compute capacity per company or per country as indicators of competitive advantage.

But Yang says those metrics reflect the needs of digital intelligence, not physical intelligence.

“Large language models became powerful because they could absorb decades of internet data,” he said. “Physical systems don’t have that luxury.”

What makes physical AI different

Physical AI refers to systems that must sense, interpret and act in the real world, including autonomous vehicles, humanoid robots, drones, industrial automation and field robotics.

These systems face constraints that cloud-based AI does not.

First is the lack of training data for rare but critical events, known as edge cases, such as severe weather, mechanical failures, sensor degradation or unpredictable human behavior.

“You cannot scrape a million dangerous driving situations from the internet,” Yang said. “And you cannot ethically create them.”

Second is latency. Physical systems often operate on millisecond timescales, and a delayed decision in a car, robot or rescue system can lead to accidents or fatalities.

Yang cited open-source experiments showing that smaller, faster models can outperform larger, more accurate ones in real-time control tasks because they respond more quickly.

“Perfect is the enemy of good,” he said.

Third is connectivity. Many physical AI applications cannot rely on stable networks, whether in disaster zones, remote mines or space exploration.

“In many of the most important places we want AI to work,” Yang said, “there is no internet, no data center and no cloud.”

Yang’s views are shaped by his work leading the Berkeley AI Racing Team in the Indy Autonomous Challenge, where university teams compete with fully autonomous race cars.

In the 2025 competition, Berkeley’s car executed overtaking maneuvers at speeds of up to 163 miles per hour (262 kilometers per hour).
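Yang’s latency point is easy to make concrete. The short sketch below uses the article’s 262 km/h top speed together with a few hypothetical on-board inference delays (the delay figures are assumptions for illustration, not measured values from the team) to show how far a car travels while its model is still deciding:

```python
# Back-of-the-envelope check of the latency argument above. The 262 km/h
# figure comes from the article; the inference delays are illustrative
# assumptions, not measurements from the Berkeley team.

def meters_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Distance a vehicle covers while a model is still computing."""
    speed_m_per_s = speed_kmh * 1000.0 / 3600.0  # convert km/h to m/s
    return speed_m_per_s * (latency_ms / 1000.0)

if __name__ == "__main__":
    for latency_ms in (10, 50, 100):  # hypothetical inference delays
        d = meters_during_latency(262.0, latency_ms)
        print(f"{latency_ms:3d} ms of latency -> {d:4.1f} m traveled blind")
    # Prints roughly 0.7 m, 3.6 m and 7.3 m. At race speeds, a tenth of a
    # second of deliberation costs more than a full car length.
```

Even a tenth of a second leaves more than seven meters of uncommanded travel, which is the practical reason a smaller, faster model can beat a larger, more accurate one in real-time control.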
The competition requires vehicles not only to drive fast but also to interact safely with other autonomous agents under strict safety rules.

In one race, both cars triggered emergency braking when the distance between them dropped below safety thresholds in a curve.

“That moment showed what physical AI really is,” Yang said. “It’s not just perception and planning. It’s risk management, safety and interaction, all in real time.”

The Berkeley team has since expanded its testing to more complex environments.

In 2025, researchers brought autonomous vehicles to Tianmen Mountain in Zhangjiajie, China, a 10.77-kilometer road with 99 sharp turns, steep elevation changes and variable weather. Nine universities participated, including Tsinghua University, Zhejiang University and Shanghai Jiao Tong University.

The setting forces systems to handle unstructured environments, long-tail risks and degraded sensing conditions, challenges closer to real-world deployment than controlled highways or test tracks.

From vehicles to humanoids

Yang says the next frontier for physical AI will be humanoid robots.

At CES 2026, he announced plans to add a humanoid robotics challenge at Tianmen Mountain, where robots will attempt to climb the site’s 999 steps. The goal is not speed but adaptability and robustness.

“Climbing stairs in an unstructured outdoor environment is much harder than it looks,” Yang said. “It tests balance, perception, energy management and decision-making all at once.”

Major technology and automotive companies are increasingly investing in physical AI. Tesla is developing humanoid robots alongside its autonomous driving systems. Nvidia has launched platforms for embodied AI and robotics. Mobileye, owned by Intel, is expanding beyond vehicles into robotics.

But progress is slower and riskier than in cloud AI. Hardware development is expensive, failures can be dangerous, and regulatory approval is complex.

“These systems must earn trust in a way that software never had to,” Yang said.

Yang emphasized that education and interdisciplinary training are critical: physical AI sits at the intersection of computer science, mechanical engineering, electrical engineering, materials science and human factors.

“You cannot build this with software engineers alone,” he said.

He sees university competitions and field experiments as essential for training the next generation of engineers.

“The most important outcome is not winning the race,” he said. “It is seeing students understand what it really takes to make machines work safely in the real world.”

The next watershed

Whether physical AI will have a single defining moment comparable to AlphaGo remains uncertain. Yang believes it will be less a single victory than a gradual shift, as machines begin to outperform humans in complex physical tasks safely and reliably.

“When a robot can work next to you in a factory, care for an elderly person at home, or explore a dangerous environment better than a human can, that will be our AlphaGo moment,” he said.

For now, he says, the industry should temper its cloud fixation.

“GPUs matter,” Yang said. “But intelligence is not just computation. It is action, interaction and responsibility in the real world.”

“And that,” he added, “is where the next breakthrough will come from.”