
OpenAI's new Spark model codes 15x faster than GPT-5.3-Codex - but there's a catch




ZDNET's key takeaways

OpenAI targets "conversational" coding, not slow batch-style agents.

Big latency wins: 80% faster roundtrip, 50% faster time-to-first-token.

Runs on Cerebras WSE-3 chips for a latency-first Codex serving tier.

The Codex team at OpenAI is on a roll. Less than two weeks after releasing a dedicated agent-based Codex app for Macs, and only a week after shipping the faster, more steerable GPT-5.3-Codex language model, OpenAI is counting on lightning striking a third time.

Also: OpenAI's new GPT-5.3-Codex is 25% faster and goes way beyond coding now - what's new

Today, the company announced a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex built for real-time coding in Codex. OpenAI reports that it generates code 15 times faster while "remaining highly capable for real-world coding tasks." There is a catch, though, and I'll get to that in a minute.

Also: OpenAI's Codex just got its own Mac app - and anyone can try it for free now
