Codex now runs on GPT-5.5 hosted on Nvidia's GB200 NVL72 rack-scale systems. Nvidia says the systems deliver 35x lower cost per million tokens and 50x higher token output per second per megawatt than prior-generation systems, economics it argues make frontier-model inference viable at enterprise scale.
OpenAI debuts GPT-5.5; Nvidia gave 10,000 employees early access through Codex
Why This Matters
The debut of GPT-5.5, integrated with Nvidia's advanced hardware, marks a major leap in AI efficiency and scalability that could transform enterprise applications and AI development. Early access to such powerful models could accelerate innovation and reduce costs across the tech industry and consumer sectors.
Key Takeaways
- GPT-5.5 runs on Nvidia's cutting-edge GB200 systems, offering significant performance improvements.
- Nvidia's hardware reduces costs and increases output, making large-scale AI inference more viable for enterprises.
- Early access to GPT-5.5 through Codex for Nvidia employees could accelerate AI adoption and innovation across industries.