OpenAI and Broadcom Ink 10-Gigawatt Chip Deal to Feed AI Hunger

OpenAI makes a multibillion-dollar bet on bespoke silicon as it doubles down on AMD and edges away from reliance on Nvidia.

The Energy Behind Intelligence

Let's be blunt: artificial intelligence (AI) is an energy guzzler. Running large models, especially inference at scale and real-time interaction, means billions of flops, plus enormous memory bandwidth and networking. As AI systems proliferate, the power draw is no longer an afterthought. In announcing its 10 GW deal with Broadcom, OpenAI said the deployment would begin in the second half of 2026 and finish by 2029.

"We're partnering with Broadcom to deploy 10GW of chips designed by OpenAI. Building our own hardware, in addition to our other partnerships, will help all of us meet the world's growing demand for AI." — OpenAI Newsroom (@OpenAINewsroom), October 13, 2025

Ten gigawatts is not trivial. To put it in perspective, 10 GW would be enough to power more than 8 million U.S. households.

Why go custom? Because dependence on general-purpose AI accelerators (hello, Nvidia) means you're subject to supply, margin, and roadmap constraints. Building your own gear (or co-designing it) lets you tailor the whole stack: chip, memory, interconnects, software. In the Broadcom deal, OpenAI will design the accelerators; Broadcom will build and deploy them.

One more twist: Broadcom's networking tech (Ethernet, etc.) is intended to be integrated with this stack. This is, perhaps, an opportunity for OpenAI and Broadcom to displace Nvidia's InfiniBand technology.

When AMD and Broadcom Meet

OpenAI isn't putting all its eggs in one chip basket. In early October 2025, it struck a multi-year deal with AMD to deploy 6 GW of Instinct GPUs over several generations. The first tranche, 1 GW, will begin deploying in the second half of 2026.
That AMD arrangement includes an interesting wrinkle: AMD issued OpenAI warrants to acquire up to 160 million shares (about 10%) at a nominal price, vesting as deployment and share-price milestones are met.

"BREAKING: Broadcom stock, $AVGO, surges over +13% after signing a 'multi-billion dollar chip deal' with OpenAI. Broadcom will build custom data center chips for OpenAI and the deal covers 10GW of compute capacity. Broadcom now up +$200 BILLION of market cap today." — The Kobeissi Letter (@KobeissiLetter), October 13, 2025

Taken together, those agreements (Broadcom and AMD) suggest OpenAI is diversifying its compute partnerships while retaining leverage over its stack. It's not abandoning Nvidia (which recently pledged 10 GW of systems), but it is signaling that it wants more control.

If the math holds, OpenAI could control or influence some 16 GW of compute across custom accelerators, AMD GPUs, and Nvidia systems (plus third-party cloud or collaborative deals). That level of scale is not just ambitious; it's borderline industrial.

Power, Scaling, and Risk

This isn't a vanity project. AI compute is on a Moore's-Law-lite treadmill: the more models, the deeper the memory, the fatter the activation traffic, the bigger the cluster. The connections between compute and energy are multiplying.

Yet risks abound. Designing a chip is one thing. Executing on yield, software stack maturity, cooling and infrastructure, supply chain (memory, packaging), and the ramp from prototype to volume is where dreams often die. Just ask Intel.

"OpenAI and Broadcom signed a multiyear agreement to collaborate on custom chips and networking equipment, planning to add 10 gigawatts' worth of AI data center capacity. Caroline Hyde reports." — Bloomberg TV (@BloombergTV), October 13, 2025

Also, the financing is staggering.
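The scale of that financing can be sanity-checked with the figures cited in this piece. A minimal back-of-envelope sketch, assuming the commonly quoted $50B–$60B per gigawatt benchmark and the announced 10 GW (Broadcom) and 6 GW (AMD) deployments; these inputs come from the article, not from any official cost disclosure:

```python
# Back-of-envelope build-out cost, using the article's figures (assumptions,
# not official disclosures): $50B–$60B of infrastructure cost per gigawatt.
COST_PER_GW_LOW = 50e9   # low end of the cited benchmark, in dollars
COST_PER_GW_HIGH = 60e9  # high end of the cited benchmark, in dollars

BROADCOM_GW = 10  # custom accelerators designed by OpenAI, built by Broadcom
AMD_GW = 6        # AMD Instinct GPU deployments

def deployment_cost(gigawatts: float) -> tuple[float, float]:
    """Return a (low, high) estimated build-out cost in dollars."""
    return gigawatts * COST_PER_GW_LOW, gigawatts * COST_PER_GW_HIGH

low, high = deployment_cost(BROADCOM_GW)
print(f"Broadcom 10 GW: ${low / 1e9:,.0f}B to ${high / 1e9:,.0f}B")
# Broadcom alone lands at $500B–$600B

low_all, high_all = deployment_cost(BROADCOM_GW + AMD_GW)
print(f"Broadcom + AMD (16 GW): ${low_all / 1e9:,.0f}B to ${high_all / 1e9:,.0f}B")
# The combined 16 GW runs $800B–$960B
```

Even the low end of the Broadcom range alone dwarfs the roughly $15 billion of revenue cited later in this article, which is the mismatch analysts keep flagging.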
Even at $50B–$60B per GW (a benchmark that's often cited when talking about AI infrastructure), the Broadcom component can easily run into the hundreds of billions. OpenAI's revenue is orders of magnitude smaller today. That implies heavy leverage, pre-commitments, and bet-the-future construction. Analysts have warned of the mismatch between OpenAI's spending commitments and its current cash flow. "What's real about this announcement is OpenAI's intention of having its own custom chips," Gil Luria, head of technology research at D.A. Davidson, said to AP in an interview. "The rest is fantastical. OpenAI has made, at this point, approaching $1 trillion of commitments, and it's a company that only has $15 billion of revenue."

Still, for those who say "this is too much," remember: AI is now as much about infrastructure as algorithms. The winners will be those who master both.

What This Means for the AI Arms Race

- Nvidia's dominance is challenged, not toppled; custom chips rarely beat incumbents early.
- AMD gets a line into the compute mix by partnering rather than competing head-on.
- Broadcom gets elevated from networking-parts maker to full AI silicon and stack supplier.
- OpenAI tightens control over its destiny: fewer black boxes, more custom layers.

The Broadcom deal is audacious. The AMD deal is clever. Combined, they're a bold wager: that compute is no longer a cost center; it's the central battlefield of AI.

If these deals succeed, OpenAI might well emerge not just as a model house but as a compute juggernaut. If they fail (on yield or funding), they could crowd out liquidity and distract attention from model advances. Either way, AI infrastructure just got a lot more interesting.

This article was written by Louis Parks at www.financemagnates.com.