
I tried Karpathy's Autoresearch on an old research project


Ever since it showed up on my GH feed, Karpathy’s Autoresearch has been rattling around in the back of my mind. I wanted to try it on a research problem I fully understood. So this weekend, I picked up my old research code from eCLIP, dusted off the legacy dependencies, and handed it to Claude Code. Then I just let it cook while I did some chores around the house.

This is my journey…

Core Idea

Autoresearch is a simple constrained optimization loop with an LLM agent in the middle. The agent iteratively improves some eval metric by modifying a single file (train.py), while reading instructions from program.md. I added a scratchpad.md file for the agent to use as working memory, documenting its thought process and experiment history.

In program.md, I split the exploration into “phases”, starting with some obvious hyperparameter tuning, then moving on to small architectural changes, and finally some moonshot ideas. In the final phase, I basically let the agent run with minimal constraints, and gave it web access to read papers and look for new ideas.
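For a sense of what a phased program.md might look like, here is an illustrative sketch (the goal, constraint wording, and phase contents are my guesses, not the actual file from the repo):

```markdown
# Program

Goal: improve the eval metric reported by ./run.sh.
Constraint: every run must finish in ~5 minutes of wall clock.

## Phase 1 — Hyperparameters
Tune learning rate, weight decay, batch size, schedule.

## Phase 2 — Small architectural changes
Try variations that keep the training loop within the time budget.

## Phase 3 — Moonshots
Minimal constraints; web access allowed for reading papers.
Log every hypothesis and result in scratchpad.md.
```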

The whole thing is a tight loop: hypothesize → edit → train → evaluate → commit or revert → repeat.
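That loop can be sketched in a few lines of Python. This is a self-contained toy, not the actual harness: `run_experiment` stands in for a real `./run.sh` training run, and the search space is collapsed to a single hyperparameter so the commit-or-revert logic is visible.

```python
import random

def run_experiment(lr: float) -> float:
    """Stand-in for a real training run: returns the eval metric.
    A noisy toy objective keeps this sketch self-contained."""
    return -(lr - 0.01) ** 2 + random.gauss(0, 1e-6)

def autoresearch_loop(n_iters: int = 20) -> float:
    random.seed(0)  # deterministic for illustration
    best_metric = float("-inf")
    lr = 0.1  # the current state of train.py, reduced to one knob
    for _ in range(n_iters):
        candidate = lr * random.uniform(0.5, 1.5)  # hypothesize -> edit
        metric = run_experiment(candidate)          # train -> evaluate
        if metric > best_metric:                    # commit the edit
            best_metric, lr = metric, candidate
        # otherwise: revert (keep the previous lr) and repeat

    return best_metric

print(autoresearch_loop())
```

The real version replaces the toy objective with an actual training run and uses git to make the commit/revert step concrete.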

Each run should be short, around 5 minutes of wall-clock time, to encourage quick iteration and prevent overfitting to noise. The agent is free to change anything in train.py as long as it runs within the time budget.
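One way to enforce that time budget is to kill any run that exceeds it and treat it as a failed experiment. A minimal sketch, assuming the run is launched as a subprocess (the function name and budget constant are mine):

```python
import subprocess
import sys

TIME_BUDGET_S = 5 * 60  # ~5 minutes of wall clock per run

def run_within_budget(cmd: list[str], budget_s: float = TIME_BUDGET_S) -> bool:
    """Run one experiment; exceeding the budget counts as a failed run."""
    try:
        # timeout= kills the child process once the budget is exhausted
        subprocess.run(cmd, timeout=budget_s, check=True)
        return True
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return False  # the agent should revert this edit
```

Usage would look like `run_within_budget([sys.executable, "train.py"])`, so an edit that blows the budget never gets committed.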

Sandboxing

Since I was paranoid about letting the agent run arbitrary code on my workstation, I containerized the training loop and removed network access. The whole experimentation flow is orchestrated by a run.sh script. I then locked down Claude Code’s permissions so it could only edit those two files and run run.sh. No direct Python execution, no pip installs, no network access, no git push, etc.
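In Claude Code, that kind of lockdown lives in a settings file with allow/deny permission rules. Here's a rough sketch of what it could look like; the specific rules below are illustrative, not copied from the repo, so check Claude Code's permissions docs for the exact pattern syntax:

```json
{
  "permissions": {
    "allow": [
      "Edit(train.py)",
      "Edit(scratchpad.md)",
      "Bash(./run.sh)"
    ],
    "deny": [
      "Bash(pip install:*)",
      "Bash(git push:*)"
    ]
  }
}
```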

I won’t bore you with the details; you can check out the repo here!
