
We need a clearer framework for AI-assisted contributions to open source


As both developers and stewards of significant open source projects, we’re watching AI coding tools create a new problem for open source maintainers.

AI assistants like GitHub Copilot, Cursor, Codex, and Claude can now generate hundreds of lines of code in minutes. This is genuinely useful, but it has an unintended consequence: reviewing machine-generated code remains very costly.

The core issue: AI tools have made code generation cheap, but they haven’t made code review cheap. Every incomplete PR consumes maintainer attention that could go toward ready-to-merge contributions.

At Discourse, we’re already seeing this accelerate across our contributor community. Within the next year, every engineer maintaining an open source project will face the same challenge.

We need a clearer framework for AI-assisted contributions that acknowledges the reality of limited maintainer time.

A binary system works extremely well here. On one side are prototypes, which simply demonstrate an idea. On the other side are ready-for-review PRs, which meet a project’s contribution guidelines and are ready for human review. A sketch of how a project might triage on that distinction follows.
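As a rough illustration of how a project could operationalize this split, here is a minimal sketch that buckets open PRs by label using the GitHub REST API, so maintainers can look at the ready-for-review queue first. The repository name and the "prototype" label are hypothetical, not an established convention.

```python
import requests

# Hypothetical repository and label name used only for illustration.
REPO = "octo-org/octo-repo"
PROTOTYPE_LABEL = "prototype"

def split_open_prs(repo: str) -> tuple[list[dict], list[dict]]:
    """Split open PRs into (ready_for_review, prototypes) based on labels."""
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/pulls",
        params={"state": "open", "per_page": 100},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    ready, prototypes = [], []
    for pr in resp.json():
        labels = {label["name"] for label in pr["labels"]}
        (prototypes if PROTOTYPE_LABEL in labels else ready).append(pr)
    return ready, prototypes

if __name__ == "__main__":
    ready, prototypes = split_open_prs(REPO)
    print(f"{len(ready)} PRs ready for review, {len(prototypes)} prototypes")
```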

The lack of proper labeling and rules is destructive to the software ecosystem

The new tooling makes it trivial to create a change set and lob it over the fence. This creates a perverse dynamic in which project maintainers spend disproportionate effort reviewing AI-generated code that took a contributor seconds to produce and will take many hours to review.

This can be frustrating, time-consuming, and demotivating. On one side is a contributor who spent a few minutes fiddling with AI prompts; on the other is an engineer who must spend many hours, or even days, deciphering alien intelligence.

This is not sustainable and is extremely destructive.
