Behind OpenAI's $110 billion funding is a competition between Amazon and Microsoft

Original | Odaily Planet Daily (@OdailyChina)

Author | Azuma (@azuma_eth)

On the evening of February 27th, OpenAI announced it had completed a new funding round of $110 billion at a pre-money valuation of $730 billion.

The funds came from three major giants, with Amazon investing $50 billion (initial investment of $15 billion, with the remaining $35 billion to be gradually funded over the coming months upon meeting certain conditions), Nvidia contributing $30 billion (to be recouped through a total of 5 GW of compute capacity purchases), and SoftBank also investing $30 billion.

After the funding was announced, OpenAI founder Sam Altman took to his personal X account to thank the three major investors. Notably, the order of his thanks was Amazon, Microsoft, Nvidia, and SoftBank — the longstanding shareholder and key partner Microsoft, which did not invest this time, was mentioned immediately after Amazon, the largest committed investor.

Aakash Gupta, a blogger and longtime observer of the AI space, pointed out that while most people focus on the astronomical figure of $110 billion, the most critical information in Sam Altman’s statements involves two overlooked technical terms buried in the Microsoft and Amazon commitments: “Stateless API” and “Stateful Runtime Environment.”

Behind these technical terms lie the present and future of AI

The core difference between Stateless API and Stateful Runtime Environment hinges on the words “Stateless” and “Stateful.”

A Stateless API means the server retains no persistent state across requests — each call completes one inference: you ask a question, the AI responds, and once the request cycle ends, the system keeps no context and stops running. In contrast, a “Stateful Runtime Environment” implies a persistent execution environment — an agent with memory that can exist long-term, collaborate across tasks, and execute ongoing operations.
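The distinction can be made concrete with a small sketch. This is illustrative pseudocode with hypothetical names (`send_request` stands in for any model API transport; it is not a real SDK call): under the stateless model the client must resend the full history on every call, while under the stateful model the agent object itself carries the context.

```python
# --- Stateless API: the server keeps nothing between requests, so the
# client has to resend the entire conversation history on every call.
def stateless_chat(send_request, history, user_message):
    history = history + [{"role": "user", "content": user_message}]
    reply = send_request(history)  # one request, one inference, then done
    return history + [{"role": "assistant", "content": reply}]

# --- Stateful runtime: the agent persists, so memory, context, and
# in-flight work survive across interactions instead of being resent.
class StatefulAgent:
    def __init__(self, send_request):
        self.send_request = send_request
        self.memory = []  # lives as long as the agent does

    def step(self, user_message):
        self.memory.append({"role": "user", "content": user_message})
        reply = self.send_request(self.memory)
        self.memory.append({"role": "assistant", "content": reply})
        return reply
```

Note that in the stateless version the caller is responsible for threading `history` through every call; in the stateful version that burden — and the long-lived compute, storage, and orchestration it implies — sits inside the runtime environment, which is precisely the layer Amazon is positioning AWS to host.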

Currently, Stateless API is the mainstream form of commercialized large language models (LLMs). Industries like finance, retail, manufacturing, and healthcare mainly embed AI into existing systems via this method (e.g., chat assistants, document summarization, search enhancement). The advantage of this mode is that companies can quickly add AI capabilities without major organizational or process restructuring, enabling rapid feature optimization with low friction. However, as model capabilities converge, compute costs decline, and price competition intensifies, token-based billing for Stateless API tends toward standardization and commoditization, with margins likely to be squeezed over time.
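The commoditization pressure follows directly from how stateless calls are billed. A back-of-the-envelope sketch (the per-million-token prices below are made-up placeholders, not any vendor’s actual rates) shows why: revenue per call is just tokens times unit price, so when competition cuts prices, margin on identical traffic falls with it.

```python
def api_cost(input_tokens, output_tokens,
             price_in_per_m=2.00, price_out_per_m=8.00):
    """Dollar cost of one stateless call, priced per million tokens.
    Default prices are hypothetical placeholders for illustration."""
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# The same call gets cheaper as per-token prices are competed down:
before = api_cost(2_000, 500)  # at the placeholder prices
after = api_cost(2_000, 500, price_in_per_m=1.00,
                 price_out_per_m=4.00)  # after a hypothetical 50% price cut
```

Call volume growing while unit price halves leaves revenue flat — which is the squeeze the article describes for the stateless business.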

In contrast, the Stateful Runtime Environment, while still limited in commercial scale, signifies a paradigm shift — not mere feature optimization but a transformation of the business model itself. It can be viewed as digital labor that actually executes tasks end to end. Its reach therefore extends beyond simple API calls to automation, process management, and even the replacement of some human labor costs. As a result, market expectations for the Stateful Runtime Environment run far ahead of its current scale.

Aakash Gupta also stated that by 2026-2027, nearly all enterprise roadmaps will revolve around “autonomous agent workflows,” rather than one-off API calls. Companies heavily investing in AI will increasingly prefer systems that can run sustainably, collaborate across tools, and maintain context over the long term.

In simple terms, Stateless API represents the present, while Stateful Runtime Environment points to the future.

What are Microsoft and Amazon each getting?

On the day of the funding announcement, Microsoft and Amazon separately announced their latest cooperation agreements with OpenAI.

Microsoft stated that the terms of their partnership, announced jointly in October 2025, remain unchanged (including OpenAI’s commitment to purchase $250 billion worth of Azure services). Azure remains the exclusive cloud provider for OpenAI’s Stateless API, and any Stateless API calls made by OpenAI models in cooperation with third parties (including Amazon) will be hosted on Azure; OpenAI’s first-party products, including Frontier, will continue to be hosted on Azure.

Amazon, meanwhile, announced that AWS will collaborate with OpenAI to develop a Stateful Runtime Environment powered by OpenAI models, and will offer it via Amazon Bedrock to AWS customers, helping enterprises build generative AI applications and agents at production scale; AWS will also become the exclusive third-party cloud distribution provider for OpenAI Frontier; the existing $38 billion multi-year partnership between OpenAI and Amazon will be expanded to $100 billion over 8 years, with OpenAI consuming 2 GW of Trainium compute capacity via AWS infrastructure to support Stateful Runtime Environment, Frontier, and other advanced workloads; OpenAI and Amazon will also develop customized models for Amazon’s customer-facing applications.

These two announcements clearly illustrate the current landscape.

Microsoft is locking in current traffic with a $250 billion agreement and exclusive service rights — whenever OpenAI’s Stateless API is called, Azure will bill in the background — regardless of who the customer is or through which channel, all traffic ultimately flows back to Azure. This guarantees highly predictable cash flow, but the problem is that profit margins for Stateless API are shrinking; call volume may continue to grow, but long-term profitability remains uncertain.

On the other hand, Amazon’s $50 billion investment and $100 billion expansion agreement secure the underlying hosting for the AI agent era. Once agents become core productivity tools for enterprises, the resources consumed — compute, storage, orchestration, workflow management, cross-tool collaboration — will all be embedded within AWS’s environment.

One controls current cash flow, the other bets on future productivity structures.

OpenAI Spreads Its Bets

Before the future truly arrives, no one can say who is right or wrong — but what is clear is that under these two clearly defined, mutually exclusive cooperation agreements, OpenAI’s bargaining power is significantly increasing.

In recent years, OpenAI has been highly dependent on Microsoft’s cloud infrastructure. Microsoft owns 27% of OpenAI and controls its infrastructure. This binding provided early resource advantages but also tilted bargaining power toward Microsoft. With Amazon’s strong entry, a direct competition over OpenAI’s future service rights is inevitable.

For OpenAI, this is a classic diversification play: refusing to bind itself deeply to any single cloud provider, refusing to let any one party control its future growth, and using future business opportunities as leverage for better terms.

Whether it’s Microsoft or Amazon, neither can afford to give up on OpenAI now. When both sides are unable to back down, bargaining power will naturally shift back to OpenAI.
