OpenAI and Amazon announce strategic partnership
TL;DR
OpenAI and Amazon announce a multi-year strategic partnership that includes a $50 billion investment from Amazon. The companies will co-create a Stateful Runtime Environment on AWS, and AWS will serve as the exclusive third-party cloud provider for OpenAI Frontier, expanding AI capabilities for enterprises.
Key Takeaways
- Amazon invests $50 billion in OpenAI, with an initial $15 billion followed by $35 billion, to accelerate AI innovation globally.
- OpenAI and AWS will develop a Stateful Runtime Environment on Amazon Bedrock, enabling developers to build scalable AI applications with context and memory.
- AWS becomes the exclusive third-party cloud provider for OpenAI Frontier, allowing organizations to deploy and manage AI agents with enterprise security.
- OpenAI commits to using 2 gigawatts of AWS Trainium capacity to support advanced workloads, improving efficiency and reducing costs for AI production.
- Customized OpenAI models will be developed for Amazon's customer-facing applications, complementing existing tools like Amazon Nova.
News:
- Amazon Web Services (AWS) and OpenAI will co-create a Stateful Runtime Environment powered by OpenAI models, available on Amazon Bedrock for AWS customers to build generative AI applications and agents at production scale.
- AWS will be the exclusive third-party cloud distribution provider for OpenAI Frontier, which enables organizations to build, deploy, and manage teams of AI agents.
- OpenAI to consume 2 gigawatts of Trainium capacity through AWS infrastructure to support demand for the Stateful Runtime Environment, Frontier, and other advanced workloads.
- OpenAI and Amazon will develop customized models available to power Amazon’s customer-facing applications.
- Amazon will invest $50 billion in OpenAI.
OpenAI and Amazon (NASDAQ: AMZN) today announced a multi-year strategic partnership to accelerate AI innovation for enterprises, startups, and end consumers around the world. Amazon will also invest $50 billion in OpenAI, starting with an initial $15 billion investment and followed by another $35 billion in the coming months when certain conditions are met.
Partnering to bring new advanced AI capabilities to enterprises worldwide
OpenAI and Amazon are jointly developing a Stateful Runtime Environment powered by OpenAI’s models, which will be available through Amazon Bedrock.
Stateful developer environments are the next generation of how frontier models will be used, giving models seamless access to elements like compute, memory, and identity. A Stateful Runtime Environment lets developers keep context, remember prior work, operate across software tools and data sources, and access compute, and it is designed to handle ongoing projects and workflows.
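As a rough illustration of what "stateful" means here, the sketch below models a session whose context and prior work persist across calls, in contrast to a stateless request/response model. All names are hypothetical and do not correspond to any actual OpenAI or AWS API.

```python
# Hypothetical sketch of a stateful session: conversation context and
# prior artifacts persist across calls. These names are illustrative
# only and are not a real Bedrock or OpenAI interface.

class StatefulSession:
    def __init__(self):
        self.history = []    # conversation context carried across turns
        self.workspace = {}  # artifacts from prior work, kept by name

    def ask(self, prompt: str) -> str:
        # A real runtime would send the accumulated history plus the new
        # prompt to a model; here we just record the turn to show how
        # state accumulates.
        self.history.append({"role": "user", "content": prompt})
        reply = f"(model reply to: {prompt})"
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def save(self, name: str, artifact) -> None:
        # Work saved in one turn remains available in later turns.
        self.workspace[name] = artifact

session = StatefulSession()
session.ask("Draft a migration plan")
session.save("plan_v1", "draft text")
session.ask("Refine the plan from earlier")
# Both turns and the saved artifact remain in scope:
print(len(session.history), "plan_v1" in session.workspace)  # → 4 True
```

The point of the sketch is the contrast with a stateless call, where each request would start from an empty history and any intermediate artifacts would have to be re-supplied by the caller.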
These stateful developer environments will be tuned to run optimally on AWS infrastructure and integrated with Amazon Bedrock AgentCore and AWS infrastructure services, so customers' AI applications and agents run cohesively alongside the rest of their applications on AWS. The Stateful Runtime Environment is expected to launch in the next few months.
Bringing OpenAI’s most advanced enterprise platform to AWS customers
AWS will serve as the exclusive third-party cloud distribution provider for OpenAI Frontier, expanding access to OpenAI’s most advanced enterprise platform as demand for AI deployment accelerates across industries.
Frontier enables organizations to build, deploy, and manage teams of AI agents that operate across real business systems with shared context, built-in governance, and enterprise-grade security, without managing underlying infrastructure. As companies move from experimentation to production AI, Frontier makes it straightforward to integrate powerful AI into existing workflows quickly, securely, and at global scale.
OpenAI to use Trainium compute to power growing Amazon customer demand
OpenAI and AWS are expanding their existing $38 billion multi-year agreement by $100 billion over eight years. The expansion includes OpenAI committing to consume approximately 2 gigawatts of Trainium capacity through AWS infrastructure, which will support demand for the Stateful Runtime Environment, Frontier, and other advanced workloads. This agreement lowers the cost and improves the efficiency of producing intelligence at scale.
Under this structure, OpenAI secures long-term capacity while working with AWS to deploy purpose-built silicon alongside its broader compute ecosystem, enabling enterprises to consume intelligence on demand without managing underlying infrastructure.
This commitment spans both Trainium3 and next-generation Trainium4 chips and will power a broad range of advanced AI workloads. Trainium4, expected to begin delivery in 2027, will provide another major performance gain, including significantly higher FP4 compute performance, expanded memory bandwidth, and increased high-bandwidth memory capacity to support increasingly capable AI systems at scale.
Custom models available to power Amazon’s customer-facing applications
OpenAI and Amazon will collaborate to develop customized models available to Amazon developers to power Amazon’s customer-facing applications. Amazon teams will be able to tailor OpenAI models for use across AI products and agents that serve customers directly. These capabilities will complement the models already available to Amazon developers, including Amazon’s Nova family, offering another tool for teams to build and deliver at scale.
“OpenAI and Amazon share a belief that AI should show up in ways that are practical and genuinely useful for people. Combining OpenAI’s intelligence with Amazon’s infrastructure and global reach helps us put powerful AI into the hands of businesses and users at real scale.”