Key Highlights
- Rental rates for Nvidia’s H100 GPUs have jumped approximately 40% from October levels, climbing from $1.70/hour to $2.35/hour as available GPU capacity has essentially been exhausted.
- Supply constraints extend to Nvidia’s latest Blackwell architecture, where delivery timelines now reach mid-2026, contradicting predictions that newer technology would ease demand for previous-generation hardware.
- The top four cloud providers — Alphabet, Microsoft, Meta, and Amazon — plan to invest roughly $700 billion combined on AI infrastructure throughout 2026, with Nvidia commanding 85-90% of the GPU marketplace.
- Recent regulatory clearance from China allowing H100 chip sales, with licenses already granted to various customers, opens a potential $25 billion yearly revenue stream absent from current forecasts.
- NVDA shares currently trade at 15.7x forward earnings — beneath its 3-year average multiple of 19.4x — while analyst consensus price targets average $273.57, suggesting approximately 55% potential upside.
Rather than declining in value, Nvidia’s H100 GPU hardware has experienced significant price appreciation. Hourly rental rates have surged nearly 40% since October, climbing from $1.70 to approximately $2.35 per hour, according to SemiAnalysis research based on surveys of more than 100 industry stakeholders.
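As a quick sanity check, the reported move from $1.70 to $2.35 per hour works out to just over 38%, consistent with the "nearly 40%" framing. A minimal sketch using the figures from the SemiAnalysis survey cited above:

```python
# H100 hourly rental rates reported above (October vs. current).
old_rate = 1.70   # $/hour, October
new_rate = 2.35   # $/hour, current

pct_increase = (new_rate - old_rate) / old_rate * 100
print(f"H100 rental rate increase: {pct_increase:.1f}%")  # prints 38.2%
```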
Available GPU infrastructure has reached complete saturation. Organizations that locked in capacity during earlier periods are maintaining their allocations despite escalating costs. Some enterprises are turning to premium-priced spot market instances through platforms such as AWS just to secure availability.
Supply limitations aren’t confined to legacy architectures. Nvidia’s latest Blackwell GPU lineup faces similar constraints, with procurement lead times now extending into mid-2026. This contradicts initial industry assumptions that the superior efficiency of next-generation processors would diminish interest in, and pricing pressure on, products like the H100.
The surge in demand stems from diverse AI deployment scenarios, ranging from media generation platforms at organizations like ByteDance and Google to expanding adoption of sophisticated models such as Anthropic’s Claude.
Cloud Giant Capital Commitments Establish Strong Revenue Foundation
Underlying the capacity shortage is substantial committed investment capital. Alphabet, Microsoft, Meta, and Amazon collectively plan approximately $700 billion in AI infrastructure expenditures during 2026 alone. These represent confirmed capital allocation plans, not speculative forecasts.
Microsoft has disclosed that roughly two-thirds of its infrastructure spending targets GPUs and CPUs. Given Nvidia’s dominant 85-90% share of the GPU sector, the majority of these investments flow directly to Nvidia. Even assuming chips represent just 20% of total AI infrastructure expenditure, that works out to roughly $140 billion in annual chip procurement from these four customers alone.
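The procurement estimate above can be reproduced with simple arithmetic; the 20% chip share is the article's conservative assumption, and the 85-90% range is Nvidia's GPU market share as stated earlier:

```python
# Back-of-the-envelope version of the chip-procurement estimate above.
total_capex = 700e9    # combined 2026 AI infrastructure spend, big four clouds
chip_share = 0.20      # assumed floor for chips as a share of that spend

chip_spend = total_capex * chip_share
print(f"Implied annual chip procurement: ${chip_spend / 1e9:.0f}B")  # prints $140B

# Applying Nvidia's 85-90% GPU market share to that figure:
nvda_low, nvda_high = chip_spend * 0.85, chip_spend * 0.90
print(f"Nvidia's implied share: ${nvda_low / 1e9:.0f}B to ${nvda_high / 1e9:.0f}B")
```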
Nvidia’s fourth-quarter revenue reached $68.13 billion, up 73% year over year, while its first-quarter guidance of $78 billion exceeded analyst expectations by more than $5 billion. Fiscal 2027 revenue growth is currently projected at 71%.
Nevertheless, NVDA shares have declined approximately 6.5% year-to-date, pressured by wider macroeconomic headwinds related to energy inflation and general market risk aversion. The stock presently trades at 15.7x forward earnings — below both its 3-year historical mean of 19.4x and AMD’s forward multiple of 18.9x, despite Nvidia maintaining substantially greater market position, profit margins, and growth trajectory.
Chinese Market Access and Vera Rubin Platform Represent Additional Catalysts
A potentially significant growth driver not yet reflected in projections: Chinese regulators have granted Nvidia authorization to distribute H100 chips, with multiple customer licenses already issued. Wells Fargo projects this opportunity could generate $25 billion or more in annual revenue. This revenue potential was excluded from Nvidia’s latest guidance.
From a product development perspective, the forthcoming Vera Rubin platform provides ten-fold performance-per-watt improvement over Blackwell and approximately 50-fold token-per-watt gains compared to the earlier Hopper architecture. Initial shipments are scheduled to commence during the second half of 2026.
Nvidia also closed a $2 billion equity stake in Marvell Technology (MRVL) on March 31, extending NVLink ecosystem compatibility to Marvell’s custom AI silicon, which is deployed by Amazon, Alphabet, and Microsoft. NVDA rose more than 5% on the announcement, while Marvell gained 13%.
Wall Street maintains a Strong Buy consensus rating on NVDA, with 41 Buy recommendations, one Hold, and one Sell rating over the preceding three months. The consensus price target stands at $273.57.
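For context, the consensus target and the roughly 55% upside cited earlier jointly imply a current share price in the mid-$170s. The article does not state the exact price, so this is a derived figure:

```python
# Deriving the share price implied by the figures above.
price_target = 273.57   # consensus price target
upside = 0.55           # approximate upside cited in the article

implied_price = price_target / (1 + upside)
print(f"Implied current NVDA price: ${implied_price:.2f}")  # prints $176.50
```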
