This researcher turned OpenAI’s open weights model gpt-oss-20b into a non-reasoning ‘base’ model with less alignment, more freedom

OpenAI’s new, powerful open weights large language model (LLM) family gpt-oss was released less than two weeks ago under a permissive Apache 2.0 license, the company’s first open weights model launch since GPT-2 in 2019. But developers outside the company are already reshaping it.

One of the most striking examples comes from Jack Morris, a Cornell Tech PhD student, former Google Brain Resident, and current researcher at Meta. This week, Morris unveiled gpt-oss-20b-base, his own reworked version of OpenAI’s smaller gpt-oss-20b model, which removes the model’s “reasoning” behavior and returns it to a pre-trained “base” version that offers faster, freer, more uncensored and unconstrained responses.

The model is available now on Hugging Face under a permissive MIT License, allowing it to be used for both additional research and commercial applications.
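For those who want to experiment with it, the checkpoint can be loaded through the standard Hugging Face transformers workflow. Below is a minimal, illustrative sketch, assuming the model is hosted under Morris’s Hugging Face handle at jxm/gpt-oss-20b-base and exposes transformers’ usual causal language model interface; the dtype and device settings are assumptions to adjust for your hardware:

```python
# Minimal sketch: load gpt-oss-20b-base and sample a free-form completion.
# The repo ID "jxm/gpt-oss-20b-base" and the dtype/device settings are
# assumptions; adjust them to the actual checkpoint and your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jxm/gpt-oss-20b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a 20B-parameter model
    device_map="auto",           # requires the accelerate package
)

# A base model continues raw text; there is no chat template to apply.
prompt = "The most surprising thing about large language models is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```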

How gpt-oss-20b-base differs from OpenAI’s gpt-oss models

To understand what Morris did, it helps to know the difference between OpenAI’s release and what AI researchers call a “base model.”

Most LLMs offered by leading AI labs such as OpenAI, Anthropic, and Google, and even by open source players like Meta, DeepSeek, and Alibaba’s Qwen team, are “post-trained.”
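That post-training is what makes the difference visible in everyday use. As a rough illustration (assuming the transformers library and OpenAI’s official openai/gpt-oss-20b repository), a post-trained chat model is prompted through a chat template, while a base model like Morris’s simply continues whatever raw text it is given:

```python
# Illustrative contrast between prompting a post-trained chat model and a
# base model. The repo ID "openai/gpt-oss-20b" refers to OpenAI's official
# release; the base-model side mirrors the sketch shown earlier.
from transformers import AutoTokenizer

chat_tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

# Post-trained model: the user's message is wrapped in role markup and
# special tokens that the model was tuned to expect.
messages = [{"role": "user", "content": "Explain what a base model is."}]
chat_prompt = chat_tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(chat_prompt)  # prints the templated prompt, including special tokens

# Base model: no template at all. Plain text in, predicted continuation out,
# which is why base-model completions feel freer and less "assistant-like".
base_prompt = "A base model is"
```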
