We all know AI relies on open-source software, but most of the big AI companies avoid opening their code or their large language model (LLM) weights. Today, that changed. OpenAI, the artificial intelligence titan behind ChatGPT, announced a landmark return to its open-source origins.
The company unveiled two new open-weight language models, gpt-oss-120b and gpt-oss-20b, marking its first public release of freely available AI model weights since GPT-2 in 2019, long before the AI hype took over the tech world.
Open-weight models enable anyone to download, examine, run, or fine-tune the LLM, eliminating the need to rely on remote cloud APIs or expose sensitive in-house data to external services.
OpenAI has not, however, released the training data used for these models because of legal and safety concerns. That will not please open-source AI purists, but developers worldwide are already putting the two models to the test.
This release contrasts sharply with OpenAI's approach over the past five years, during which the company prioritized proprietary releases fueled by massive Microsoft investments and lucrative API deals.
After all, you can't hope to become a trillion-dollar AI company without maximizing your profits. On the other hand, open source has consistently demonstrated that when code is developed openly, everyone, including the company that releases the code, benefits.
The gpt-oss-120b model targets high-performance servers and desktops with beefed-up specifications -- 60 GB of VRAM and multiple GPUs -- while the gpt-oss-20b version is compact enough for most laptops. You can download the models from Hugging Face or GitHub. In both cases, you'll need macOS 11 Big Sur or later, or a Linux distribution at least as recent as Ubuntu 18.04. The models may also work under Windows Subsystem for Linux (WSL) 2 on high-powered Windows systems.
OpenAI says, "The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU. The gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory."