Key Takeaways
- Amazon Web Services will receive 1 million Nvidia GPUs by the end of 2027.
- Deliveries commence in 2025 and continue through the end of 2027.
- The agreement encompasses networking equipment, Groq inference processors, and upcoming Blackwell and Rubin architectures.
- AWS plans to deploy seven distinct Nvidia chip types for AI inference operations.
- Shares of both NVDA and AMZN saw gains in extended trading after the disclosure.
The partnership between Nvidia and Amazon Web Services represents one of the chipmaker’s most substantial single-client contracts to date, and the scope and complexity of the arrangement carry significant strategic implications for both companies.
🚨 $NVDA × $AMZN — Massive GPU Deal Just Dropped
At GTC 2026, Nvidia and AWS officially announced a deal to deploy 1M+ Nvidia GPUs — including Blackwell & Rubin architectures — across AWS global regions starting this year.
Nvidia VP Ian Buck confirmed deliveries run through… pic.twitter.com/kKABFWM1FW
— Invest Alpha Pro (@InvestAlphaPro) March 19, 2026
According to statements Nvidia VP Ian Buck made to Reuters, the million-GPU delivery schedule begins in 2025 and runs through 2027. That timeframe matches CEO Jensen Huang’s forecast of a $1 trillion addressable market for Nvidia’s Blackwell and Rubin processor lines over the same period.
The scope of this arrangement extends far beyond simple processor procurement. AWS is acquiring a comprehensive portfolio of Nvidia infrastructure, including Spectrum-X and ConnectX networking solutions. This development is particularly significant because AWS has traditionally relied on proprietary networking infrastructure. The incorporation of Nvidia’s networking technology into AWS data centers signals a notable strategic pivot.
AWS Commits to Multi-Chip Nvidia Inference Strategy
AI inference — the computational phase where artificial intelligence models generate outputs and execute tasks — forms the cornerstone of this partnership’s technical architecture. AWS intends to leverage seven distinct Nvidia processor types for inference operations.
Buck articulated the strategy directly: “Inference is hard. It’s wickedly hard. To be the best at inference, it is not a one chip pony. We actually use all seven chips.”
The Groq processors, unveiled by Nvidia earlier this week following its $17 billion licensing agreement with an AI chip developer, form part of this inference ecosystem. These chips operate in concert with six additional Nvidia processors to deliver what the company characterizes as industry-leading inference capabilities.
AWS will also implement Nvidia’s Blackwell processors and is positioned to integrate the forthcoming Rubin platform upon its release. Neither Nvidia nor Amazon has revealed the monetary value of this partnership.
Shares of both companies saw modest upticks in after-hours trading Thursday following the announcement. NVDA had closed regular trading down approximately 1%, while AMZN had declined about 0.5%.
Amazon Continues Developing Proprietary Chip Solutions
Amazon continues advancing its internal AI chip initiatives, including its Trainium2 processor. Nevertheless, the cloud giant is turning to Nvidia for its most intensive computational workloads, and the two approaches appear to be complementary rather than competing.
This agreement underscores how heavily the leading cloud providers continue to invest in AI infrastructure. AWS isn’t abandoning its custom silicon development; rather, it is augmenting those systems with Nvidia hardware for particularly demanding, high-performance workloads.
The Nvidia-AWS partnership was initially disclosed earlier this week without detailed timelines. Buck’s Thursday remarks to Reuters provided the most comprehensive information to date: shipments beginning in 2025, concluding at 2027’s end, and encompassing a diverse array of Nvidia offerings spanning computation, networking, and inference capabilities.
