On Monday, Amazon and Anthropic announced a new agreement under which Anthropic commits to investing more than $100 billion in AWS technologies over the next 10 years, securing up to 5 gigawatts (GW) of current- and future-generation Trainium capacity along with tens of millions of Graviton cores. The AI company will also make the Claude Platform available on AWS and accept up to $25 billion in investment from Amazon.

Amazon and Anthropic make three promises for compute, Claude access, and investment

First, with its pledge to spend over $100 billion on AWS technologies in the coming decade, Anthropic secures up to 5GW of current and future generations of Trainium (Amazon's custom AI silicon) to power, deploy, and train its advanced AI models, as well as tens of millions of Graviton cores (Amazon's Arm-based CPU chips) to "provide superior price performance." Per both companies' announcements, significant Trainium2 capacity is expected to come online in the first half of 2026, followed by nearly 1GW of Trainium2 and Trainium3 capacity later that year. The agreement also gives Anthropic "the option to purchase future generations of Amazon's custom silicon as they become available."

Second, Anthropic makes its Claude Platform available on AWS. Amazon says more than 100,000 customers already run Anthropic's Claude models (e.g., Opus, Sonnet, and Haiku) on AWS through Amazon Bedrock, Amazon's fully managed service for building and scaling generative AI applications. By making the Claude Platform available directly within AWS, Anthropic lets AWS customers access the full Anthropic-native Claude console with their existing AWS account, access controls, and monitoring, with no additional billing, contracts, or credentials needed. Now in private beta, the Claude Platform on AWS should make it easier for AWS users to access Claude directly while upholding their existing governance and compliance requirements.
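For developers already on AWS, that Bedrock path is the existing way to reach Claude programmatically. A minimal sketch with boto3's Converse API is below; the model ID and region are illustrative assumptions, since which Claude models an account can invoke varies by region and entitlement.

```python
# Minimal sketch: calling a Claude model on AWS through Amazon Bedrock's
# Converse API. The model ID and region below are illustrative examples,
# not guarantees of what any given account can access.

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # example model ID

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

def ask_claude(prompt: str, region: str = "us-east-1") -> str:
    """Send one prompt to Claude via Bedrock and return the reply text."""
    import boto3  # imported here so build_request stays dependency-free
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(**build_request(prompt))
    # The assistant's reply is nested under output -> message -> content.
    return response["output"]["message"]["content"][0]["text"]
```

Running `ask_claude(...)` requires AWS credentials with Bedrock access, which is the point: authentication, governance, and billing ride on the existing AWS account rather than a separate Anthropic key.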
Third, Amazon commits to investing up to $25 billion in Anthropic: $5 billion today and potentially up to $20 billion more, "tied to certain commercial milestones," Amazon says in its announcement. This investment adds to the $8 billion Amazon has already invested in Anthropic, including the $4 billion minority ownership position it took in 2024.

An ongoing give-and-take between two tech giants

The new Amazon-Anthropic agreement builds on almost three years of collaboration between the two companies, dating back to 2023, when Anthropic chose Amazon as its primary cloud provider and began using AWS Trainium and Inferentia chips to train and deploy its foundation models. That collaboration deepened last year when Amazon and Anthropic collaborated on Project Rainier, then the largest AI compute cluster in the world, with almost half a million Trainium2 chips. Since it went fully operational in October 2025, Anthropic has actively used the large-scale infrastructure project to build, train, and deploy Claude models.

That's not everything Amazon and Anthropic have been up to together. Per Amazon's announcement blog post, Anthropic has played an integral role in developing AWS Trainium chips. While using the chips to build, train, and deploy its AI models, Anthropic reportedly works closely with Annapurna Labs, the Amazon-acquired microelectronics specialist, sending feedback from Claude training workloads to help optimize future chip designs.

Anthropic is clearly eyeing more compute

As Anthropic splashes out $100 billion on AWS technologies, it continues its spree of infrastructure spending, and it needs to. In March, Claude Code users complained they were hitting usage limits faster than normal. It came amid a bad run of luck for Anthropic, which experienced five outages in March alone.
Then, in an April letter to investors seen by Bloomberg, OpenAI reportedly drew attention to Anthropic's infrastructure woes, stating that it had "rapidly and consistently" added to its own compute capacity, giving it a self-proclaimed edge over Anthropic.

Anthropic, for its part, acknowledges its recent infrastructure hiccups in its announcement blog post, chalking up reliability and performance issues to rapidly growing enterprise and developer demand for Claude: "Growth at this pace places an inevitable strain on our infrastructure."

Beyond its new collaboration with Amazon, the AI company is working to scale up its infrastructure elsewhere. Earlier this month, it announced an agreement with Google and Broadcom to expand its compute infrastructure with "multiple gigawatts of next-generation TPU capacity," expected to come online in 2027. That's in addition to its October 2025 announcement expanding its use of Google Cloud technologies, including up to one million TPUs.

Anthropic closes its Amazon collaboration announcement by noting that it "train[s] and run[s] Claude on a range of AI hardware — AWS Trainium, Google TPUs, and NVIDIA GPUs — which means we can match workloads to the chips best suited for them." In this statement, the AI company makes clear that it is diversifying its hardware strategy to improve performance and resilience. Time will tell if that means fewer outages ahead.

The post Amazon and Anthropic deepen AI ties with a $100B AWS commitment appeared first on The New Stack.