Today, many eyes are on OpenAI CEO and co-founder Sam Altman’s ongoing public feud with Elon Musk on the latter’s social network, X.
But Altman’s recent statements regarding the ongoing rollout of his company’s latest and greatest large language model (LLM), GPT-5, are probably more important to customers and enterprise decision-makers.
After an admittedly “bumpy” debut of GPT-5 last week, during which some users clamored for restored access to deprecated older LLMs in ChatGPT such as GPT-4o and o3 (OpenAI has restored the former), Altman is now pivoting toward ensuring OpenAI’s underlying infrastructure and usage limits are a good fit for the company and its 700 million weekly active ChatGPT users.
The company’s latest updates include a more detailed compute allocation plan and the introduction of additional third-party connectors for ChatGPT Plus and Pro plans.
Managing GPT-5 demand and usage limits
In a post on X last night, Altman outlined how OpenAI will prioritize computing resources over the next several months.