
OpenAI is throwing everything into building a fully automated researcher

Why This Matters

OpenAI is heavily investing in developing fully automated research models that can independently solve complex problems, aiming to enhance real-world applications like coding and problem-solving. This shift signifies a move toward more autonomous AI systems capable of longer, more sophisticated reasoning, which could revolutionize industries reliant on research and automation.

Key Takeaways

For Pachocki, the answer is a clear yes. In fact, he thinks it's just a matter of pushing ahead on the path we're already on. A simple boost in all-round capability also leads to models working for longer without help, he says. He points to the leap between two of OpenAI's previous models, 2020's GPT-3 and 2023's GPT-4: GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says.

So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better.
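"Backtracking when they hit a dead end" borrows an idea with a long history in computer science. As a loose illustration only (the puzzle, function, and values here are invented for this article, not OpenAI's training code), a minimal backtracking search extends a partial solution step by step and undoes the last step whenever a branch can't succeed:

```python
def backtrack(partial, target, options):
    """Find 3 numbers from `options` (repeats allowed) summing to `target`."""
    if len(partial) == 3 and sum(partial) == target:
        return list(partial)                 # solved
    if len(partial) == 3 or sum(partial) > target:
        return None                          # dead end: abandon this branch
    for opt in options:
        partial.append(opt)                  # take a step forward
        result = backtrack(partial, target, options)
        if result is not None:
            return result
        partial.pop()                        # backtrack and try another option
    return None

print(backtrack([], 11, [1, 3, 5, 7]))       # → [1, 3, 7]
```

The analogy is only approximate: a reasoning model learns when to abandon a line of thought from training, rather than following an explicit search procedure like this one.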

But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests. These tasks force models to learn skills like keeping track of very large chunks of text and splitting problems into multiple subtasks, then managing those subtasks to completion.
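The split-and-manage pattern described above can be sketched as a simple work queue: oversized tasks are divided, and every piece is tracked until it is small enough to handle directly. This is a hypothetical illustration (the splitting rule and the stand-in "solve" step are invented), not a description of how OpenAI's models actually operate:

```python
from collections import deque

def solve(task):
    """Split a task into subtasks until each piece is small, tracking all of them."""
    queue = deque([task])        # subtasks still waiting to be handled
    results = []
    while queue:
        current = queue.popleft()
        if len(current) <= 4:                 # small enough to solve directly
            results.append(current.upper())   # stand-in for actually solving it
        else:
            mid = len(current) // 2
            queue.append(current[:mid])       # split into two subtasks
            queue.append(current[mid:])       # and keep managing both
    return results

print(solve("refactor"))         # → ['REFA', 'CTOR']
```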

The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician, we have all the tools, and I think it would be relatively easy. But it's not something we're going to prioritize now because, you know, at the point where you believe you can do it, there's much more urgent things to do.”

“We are much more focused now on research that’s relevant in the real world,” he adds.

Right now that means taking what Codex (and tools like it) can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem.
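The "manage a group of agents" workflow Pachocki describes amounts to fanning tasks out to workers and collecting their results. A minimal sketch of that orchestration shape, with a placeholder standing in for a real agent call (the `run_agent` function and its output are invented; an actual Codex agent would be invoked through an API and return code changes):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task):
    # Placeholder for dispatching a task to a coding agent.
    return f"patch for: {task}"

tasks = ["fix failing test", "update docs", "refactor parser"]
with ThreadPoolExecutor(max_workers=3) as pool:
    patches = list(pool.map(run_agent, tasks))   # results come back in task order
for patch in patches:
    print(patch)
```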

The line always goes up

It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry and physics puzzles.

“Just looking at these models coming up with ideas that would take most PhDs weeks, at least, makes me expect that we'll see much more acceleration coming from this technology in the near future,” Pachocki says.