SUNYA Energy

AWS and OpenAI announce multi-year strategic partnership

November 3, 2025
SUNYA SUMMARY
- AWS and OpenAI announced a multi-year strategic partnership that enables OpenAI to run its advanced AI workloads on AWS infrastructure immediately.
- The partnership gives OpenAI access to Amazon EC2 UltraServers featuring hundreds of thousands of NVIDIA GPUs, with the ability to scale to tens of millions of CPUs for its generative AI workloads.
- OpenAI has committed $38 billion to rapidly expand compute capacity on AWS while benefiting from AWS's price, performance, security, and scale.
- AWS's experience running large-scale AI infrastructure, including clusters of more than 500,000 chips, supports secure, reliable, and efficient AI processing.
- The deployment features an architecture optimized for high-performance AI processing, clustering NVIDIA GB200 and GB300 GPUs on EC2 UltraServers with low-latency interconnects.
- The setup supports workloads ranging from ChatGPT inference to training next-generation models, with the flexibility to adapt to OpenAI's evolving needs.
- OpenAI plans to deploy all capacity by the end of 2026, with potential further expansion into 2027 and beyond.
- OpenAI CEO Sam Altman emphasized that massive, reliable compute is essential for scaling frontier AI and that AWS infrastructure will help power this next era.
- AWS CEO Matt Garman highlighted that the breadth and immediate availability of AWS's optimized compute make it uniquely positioned to support OpenAI's AI ambitions.
- The partnership builds on earlier collaboration, including making OpenAI's open-weight models available on Amazon Bedrock for thousands of AWS customers.
PRESS RELEASE
AWS and OpenAI announce multi-year strategic partnership

Partnership will enable OpenAI to run its advanced AI workloads on AWS’s world-class infrastructure starting immediately.

Written by Amazon Staff

November 03, 2025

Key takeaways

The multi-year, strategic partnership empowers OpenAI with immediate and increasing access to AWS's world-class infrastructure for its advanced AI workloads.
AWS to provide OpenAI with Amazon EC2 UltraServers, featuring hundreds of thousands of chips, and the ability to scale to tens of millions of CPUs for its advanced generative AI workloads.
Representing a $38B commitment, OpenAI will rapidly expand compute capacity while benefitting from the price, performance, scale, and security of AWS.

Today, Amazon Web Services (AWS) and OpenAI announced a multi-year, strategic partnership that provides AWS's world-class infrastructure to run and scale OpenAI's core artificial intelligence (AI) workloads starting immediately. Under this new $38 billion agreement, which will continue to grow over the next seven years, OpenAI gains access to AWS compute comprising hundreds of thousands of state-of-the-art NVIDIA GPUs, with the ability to expand to tens of millions of CPUs to rapidly scale agentic workloads. AWS has deep experience running large-scale AI infrastructure securely and reliably, with clusters topping 500K chips. AWS's leadership in cloud infrastructure, combined with OpenAI's pioneering advancements in generative AI, will help millions of users continue to get value from ChatGPT.
The rapid advancement of AI technology has created unprecedented demand for computing power. As frontier model providers push their models to new heights of intelligence, they are increasingly turning to AWS for the performance, scale, and security it offers. OpenAI will begin using AWS compute immediately under this partnership, with all capacity targeted for deployment before the end of 2026 and the ability to expand further into 2027 and beyond.
The infrastructure deployment that AWS is building for OpenAI features a sophisticated architectural design optimized for AI processing efficiency and performance. Clustering the NVIDIA GPUs, both GB200s and GB300s, via Amazon EC2 UltraServers on the same network enables low-latency communication across interconnected systems, allowing OpenAI to run workloads efficiently. The clusters are designed to support a range of workloads, from serving inference for ChatGPT to training next-generation models, with the flexibility to adapt to OpenAI's evolving needs.
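To make the networking idea concrete, here is a minimal, hypothetical sketch (not OpenAI's actual deployment tooling) of reserving co-located GPU capacity on EC2 with boto3. A cluster placement group packs instances onto the same low-latency network segment, the same goal the UltraServer interconnects serve at far larger scale. The AMI ID is a placeholder, and p5.48xlarge is a stand-in GPU instance type; the announcement does not name the instance types behind the GB200/GB300 UltraServers.

```python
# Hypothetical sketch: co-locating GPU instances for low-latency
# interconnect using a cluster placement group. Not OpenAI's tooling;
# the AMI ID and instance type below are placeholders/assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A "cluster" placement group packs instances onto the same
# high-bandwidth network segment, minimizing inter-node latency.
ec2.create_placement_group(
    GroupName="frontier-ai-cluster",  # hypothetical name
    Strategy="cluster",
)

# Launch GPU instances into the placement group. p5.48xlarge is a
# stand-in; UltraServer-class GB200/GB300 types are not named here.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder GPU-enabled AMI
    InstanceType="p5.48xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "frontier-ai-cluster"},
)

print([inst["InstanceId"] for inst in resp["Instances"]])
```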

“Scaling frontier AI requires massive, reliable compute," said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
“As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions,” said Matt Garman, CEO of AWS. “The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads.”

This news continues the companies' work together to provide cutting-edge AI technology to organizations worldwide. Earlier this year, OpenAI's open-weight foundation models became available on Amazon Bedrock, bringing additional model options to millions of AWS customers. OpenAI has quickly become one of the most popular publicly available model providers on Amazon Bedrock, with thousands of customers, including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health, using its models for agentic workflows, coding, scientific analysis, mathematical problem-solving, and more.
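For context on what that access looks like in practice, here is a minimal sketch of invoking an open-weight model through Bedrock's Converse API with boto3. The model ID is an assumption for illustration; the exact identifier and supported regions should be confirmed in the Bedrock model catalog.

```python
# Minimal sketch: calling an OpenAI open-weight model via Amazon
# Bedrock's Converse API. The model ID is an assumption; check the
# Bedrock model catalog for the exact identifier in your region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed example model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "In one sentence, what is Amazon EC2?"}],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```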