Alibaba Cloud Says It Cut Nvidia AI GPU Use By 82% With New Pooling System

Alibaba Cloud claims its new Aegaeon GPU pooling system cuts Nvidia GPU use by 82%, letting 213 H20 accelerators handle workloads that previously required 1,192. The work is detailed in a paper (PDF) presented at the 2025 ACM Symposium on Operating Systems Principles (SOSP) in Seoul. Tom's Hardware reports:

Unlike training-time breakthroughs that chase model quality or speed, Aegaeon is an inference-time scheduler designed to maximize GPU utilization across many models with bursty or unpredictable demand. Instead of pinning one accelerator to one model, Aegaeon virtualizes GPU access at the token level, allowing it to schedule tiny slices of work across a shared pool. This means one H20 can serve several different models simultaneously, with system-wide "goodput" -- a measure of effective output -- rising by as much as nine times compared to older serverless systems.

The system was tested in production over several months, according to the paper, which lists authors from both Peking University and Alibaba's infrastructure division, including CTO Jingren Zhou. During that window, the number of GPUs needed to support dozens of different LLMs -- ranging in size up to 72 billion parameters -- fell from 1,192 to just 213. While the paper does not break down which models contributed most to the savings, the South China Morning Post reports that the tests were conducted using Nvidia's H20, one of the few accelerators still legally available to Chinese buyers under current U.S. export controls.
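The core idea -- scheduling at the granularity of individual tokens rather than pinning a GPU to one model -- can be illustrated with a toy sketch. This is not Aegaeon's actual implementation (the paper's system handles model loading, KV-cache state, and much more); the class and method names below are invented for illustration, assuming a simple round-robin policy where each queued model emits one token per turn on a shared accelerator:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Request:
    """An inference request: which model it targets and how many
    output tokens it still needs to generate."""
    model: str
    tokens_left: int

class TokenLevelScheduler:
    """Toy token-level multiplexer: instead of dedicating the GPU to one
    model until its requests finish, interleave one decoding step (one
    token) per model per round, so several models share one accelerator."""

    def __init__(self) -> None:
        # One FIFO queue of pending requests per model.
        self.queues: dict[str, deque[Request]] = {}

    def submit(self, req: Request) -> None:
        self.queues.setdefault(req.model, deque()).append(req)

    def run(self) -> list[str]:
        """Drain all queues, returning the order in which models got
        a token-generation slice on the shared GPU."""
        trace = []
        while any(self.queues.values()):
            for model in list(self.queues):
                q = self.queues[model]
                if not q:
                    continue
                req = q[0]
                req.tokens_left -= 1   # one decoding step for this model
                trace.append(model)
                if req.tokens_left == 0:
                    q.popleft()        # request finished; free the slot
        return trace
```

Submitting requests for two models shows the interleaving: a request needing 2 tokens for model "A" and 1 token for "B" yields the slice order `["A", "B", "A"]`, rather than "A" monopolizing the device until done. Real systems must also weigh the cost of switching model state on the GPU, which is where most of the engineering difficulty lies.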