John Maeland
💡 The New Reality in AI Infrastructure

Before: OpenAI rented $NVDA (NVIDIA Corporation) chips through cloud giants like $MSFT (Microsoft), $AMZN (Amazon.com Inc), or $GOOG (Alphabet). The clouds built the data centers, bought the GPUs, and leased them out to AI companies.

Now: OpenAI is going straight to the source. It's signing massive, multi-year deals with both NVIDIA and $AMD (Advanced Micro Devices Inc), not just to buy chips, but to secure guaranteed computing power for years ahead.

🔹 NVIDIA: ~10 GW of compute and up to $100 billion in staged investment
🔹 AMD: 6 GW of compute and an option for OpenAI to buy up to 10% of AMD if milestones are met

Instead of paying everything upfront, OpenAI and the chipmakers are now sharing both costs and risks. This lets OpenAI scale faster, and it locks in steady demand for NVIDIA and AMD.

⚙️ What's Changing

☑️ Compute is becoming the new fuel. AI companies are reserving power and hardware years in advance.
☑️ Multi-sourcing. OpenAI will no longer rely on a single supplier like NVIDIA.
☑️ Vendors as investors. Chipmakers like NVIDIA and AMD now help finance the very infrastructure they sell.
☑️ Power is becoming the real bottleneck in AI. The challenge is no longer just building chips; it's finding enough electricity, land, and cooling to run them.

As AI keeps growing, the real winners will be those who secure power and data-center capacity, not only those who design GPUs. The AI infrastructure race now looks to be about securing the gigawatts behind the chips (in my honest opinion).

🔗 Track my performance on Bullaware: bullaware.com/etoro/Lordhumpe $BTC $NSDQ100