At Browser Use I spend a lot of time deciding which model to use. It's not easy to choose between LLMs or agent parameters, or to compare two versions of Browser Use and tell which one is better.
To truly understand our agent's performance, we built a suite of internal tools that evaluate it in a standardized, repeatable way, so we can compare versions and models and continuously improve. We take evaluations seriously: as of now, we have run over 600,000 tasks in testing.
Today we are releasing our first open source benchmark.
The Tasks
Existing browser benchmark task sets each have strengths and weaknesses, and every task falls somewhere on the tradeoff between interpretability and realism.
On the interpretable end are tasks with synthetic websites that can deterministically confirm if the agent succeeds. But synthetic sites don't capture the bizarre reality and diversity of how real websites work, so we avoid them.
A good middle ground is web tasks that involve researching verifiable information, often across multiple steps (as in BrowseComp and GAIA), and comparing the agent's answer to ground truth.
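As a rough illustration of what comparing an answer to ground truth can look like, here is a minimal sketch (not our actual judging code; the normalization rules are an assumption, and a real judge would also need numeric tolerance, aliases, or an LLM-as-judge fallback for free-form answers):

```python
import re

def normalize(text: str) -> str:
    # Lowercase, strip punctuation, and collapse whitespace so that
    # "1,000 km" and "1000 KM" compare equal.
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return " ".join(text.split())

def matches_ground_truth(agent_answer: str, ground_truth: str) -> bool:
    # Exact match after normalization; deliberately simple and strict.
    return normalize(agent_answer) == normalize(ground_truth)
```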
The end of the spectrum that best represents real user tasks involves finding real-time information or following complex workflows across various pages (Mind2Web 2, WebBench). The challenge here is judging these tasks accurately at scale.
We leave out tasks that make real changes to websites (like creating a post) or require authentication; there is not yet an economical way to run these at scale.
Another challenge is difficulty: many tasks have become trivial for modern browser agents, while others simply are not completable. For our benchmark, we selected 100 of the best tasks from existing open source benchmarks, drawing on WebBench, Mind2Web, GAIA, and BrowseComp for a mix of verifiable and real-time tasks. We also added 20 tasks on a custom site to test the hardest browser interactions, such as iframe inception and clicking and dragging.
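As a sketch of how one might drive a task list like this with Browser Use itself, under assumptions: the tasks.json manifest and its field names are hypothetical, the model choice is arbitrary, and the Agent/run() usage follows the public README (check your installed version, as the API evolves):

```python
import asyncio
import json

from browser_use import Agent  # real package; exact API may differ by version
from langchain_openai import ChatOpenAI

async def run_benchmark(task_file: str = "tasks.json") -> None:
    # Hypothetical manifest: one entry per task with the instruction,
    # its source benchmark, and (where available) a ground-truth answer.
    with open(task_file) as f:
        tasks = json.load(f)

    for entry in tasks:
        agent = Agent(task=entry["task"], llm=ChatOpenAI(model="gpt-4o"))
        history = await agent.run()
        answer = history.final_result()  # final text returned by the agent
        print(entry["source"], entry["task"][:60], "->", answer)

if __name__ == "__main__":
    asyncio.run(run_benchmark())
```

A harness like this can then feed each answer into a judge (exact match for verifiable tasks, an LLM judge for free-form ones) to produce a success rate per model or per Browser Use version.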