Why we need bots that elicit good discussions, not just write better code
The verdict is in on the effectiveness of AI use in production, and it is not a pretty picture.
- Teams using AI completed 21% more tasks, yet company-wide delivery metrics showed no improvement (Index.dev, 2025)
- Experienced developers were 19% slower when using AI coding assistants—yet believed they were faster (METR, 2025)
- 48% of AI-generated code contains security vulnerabilities (Apiiro, 2024)
To understand why, we have to take a closer look at day-to-day software development. Consider this point raised in a colorful exchange on r/ExperiencedDev:
A developer’s job is to reduce ambiguity. We take the business need and outline its logic precisely so a machine can execute. The act of writing the code is the easy part. Odds are, you aren’t creating perfect code specs into tickets, even with meeting notes, because developers will encounter edge cases that demand clarification over the course of implementation…
This comment raises two key points. First, coding assistants need clearly defined requirements to perform well. Second, edge cases and product gaps are often discovered over the course of implementation.
These two facts collide when coding agents are applied to complex codebases. Unlike their human counterparts, who would escalate a requirements gap to product when necessary, coding assistants are notorious for burying those gaps within hundreds of lines of code, leading to breaking changes and unmaintainable code.
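As a hypothetical illustration (not drawn from any of the cited studies), imagine a ticket that reads "apply the customer's discounts to the order total" without saying whether discounts stack or how fractional cents are handled. Rather than flag the ambiguity, an assistant might quietly pick one interpretation and bake it in:

```python
# Hypothetical ticket: "Apply the customer's discounts to the order total."
# The ticket never specifies stacking behavior or rounding.

def apply_discounts(total_cents: int, discounts: list[float]) -> int:
    """Apply percentage discounts to an order total given in cents.

    Two unstated assumptions are silently buried here:
    1. discounts stack multiplicatively (10% then 20% => 28% off, not 30%)
    2. fractional cents are truncated toward zero
    Neither choice appears in the ticket; both belong in a clarifying
    question to product, not inside the implementation.
    """
    result = float(total_cents)
    for d in discounts:
        result *= (1 - d)  # assumption 1: multiplicative stacking
    return int(result)     # assumption 2: truncate fractional cents

print(apply_discounts(10_000, [0.10, 0.20]))  # 7200 under these assumptions
```

A human reviewer who expected additive stacking would expect 7,000 cents, not 7,200 — and the discrepancy only surfaces after the code ships, buried two comments deep in a helper function.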
As a result, more overhead is spent on downstream code reviews (Index.dev, 2025) and fire-patching security vulnerabilities (Apiiro, 2025).