
Context is the bottleneck for coding agents now


Intelligence is improving rapidly with each model release. Just last week, OpenAI announced a perfect score on the 2025 ICPC programming contest, beating every single human contestant. They achieved this using a version (presumably a very high-compute version, but still) of their publicly available GPT-5 model.

And yet, coding agents are nowhere near capable of replacing software developers. Why is that?

I’m going to argue that the limiting factor is no longer raw intelligence, but rather context. Existing coding agents simply do not have enough context about the problems they’re being asked to solve. This severely limits how long they can work effectively without human guidance.

Intelligence and context

How autonomous are existing coding agents? Let’s think about autonomy as a spectrum and see how far along that spectrum we are.

Level 1 - A few lines of code

This is what autocomplete does, and it works very well.

Level 2 - One commit

Cursor and Claude Code work well for tasks in this size range.

Level 3 - One PR

Devin and other async agents are built to tackle tasks of this size. But do they work reliably? Only on relatively simple tasks.

Level 4 - Major feature or refactor

Doing this autonomously on an existing codebase is beyond the reach of current agents.

Level 5 - Entire codebase
