
ChatGPT-5 Lets You Choose Your AI Model. These Are Your Options


The biggest pushback after OpenAI announced its new GPT-5 model for ChatGPT came from devotees of older models who felt the new generative AI chatbot lacked the panache of its predecessors.

Now you have more choices of pre-GPT-5 models (although you'll have to hunt for some of them) and better control over which components of GPT-5 handle your questions.

OpenAI is still sorting through a somewhat rocky launch of GPT-5, led by complaints about the lack of model choices. The model has been anticipated for more than two years and comes as competitors like Anthropic and Google have released powerful new versions of their AI models this year. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

OpenAI planned for one model that could handle everything: GPT-5 includes two modes, one fast and lean for simple tasks and one aimed at reasoning for complicated ones, with a routing program deciding which handles a given prompt. That's still the default in ChatGPT, but it's not your only option.
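
OpenAI hasn't published how that router actually works. Purely as an illustration of the idea (the keyword heuristic, threshold and model names below are hypothetical, not OpenAI's), a router could score a prompt's apparent complexity and hand it to either a fast model or a reasoning model:

```python
# Toy illustration only: OpenAI has not disclosed GPT-5's routing logic.
# The heuristic, threshold and model names here are hypothetical placeholders.

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for whatever signals a real router might use."""
    hard_markers = ("prove", "step by step", "analyze", "compare", "debug")
    score = 0.3 if len(prompt) > 400 else 0.0  # long prompts lean "hard"
    score += sum(0.25 for marker in hard_markers if marker in prompt.lower())
    return min(score, 1.0)


def route(prompt: str) -> str:
    """Send easy prompts to a fast model, harder ones to a reasoning model."""
    return "reasoning-model" if estimate_complexity(prompt) >= 0.5 else "fast-model"


if __name__ == "__main__":
    print(route("What's the capital of France?"))                    # fast-model
    print(route("Prove step by step that sqrt(2) is irrational."))   # reasoning-model
```

A production router would presumably rely on learned signals rather than keyword matching, but the division of labor is the same: cheap, quick answers by default, and the slower reasoning mode only when a prompt seems to need it.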


Choices of GPT-5 models

There are a few different modes of GPT-5 to choose from if you want to use OpenAI's newest technology. Here's a quick rundown:

(Image: This is what your choices will look like if you don't enable legacy models in settings. Screenshot by Jon Reed/CNET)

Auto: This mode allows the switching program built into GPT-5 to decide whether your query is handled by a lighter, faster large language model or a bigger, slower reasoning model. OpenAI CEO Sam Altman posted on X that this will be the best fit for most people.

Fast: Your query will go straight to the fastest, lightest model. Expect quick, basic answers, but not the in-depth research you'd get from a reasoning model.

Thinking: This is a reasoning model, meaning it'll try to answer your question over several steps. It might use web searches and other tools, or it might go back and redo its past steps to try to get the right answer. There are some limits on how much you can use this model (3,000 messages per week right now).
