
I tried to make Claude make me money on Algora bounties (data and tool)


I tried to make Claude make me money on open-source bounties. Here's the data from 60 fresh issues.

A few days ago, a tweet from @chatgpt21 went around showing an AI coding agent that ran unsupervised for 22 hours, found a bounty on its own, shipped a PR, and got paid $16.88. It spent 22M tokens and collected a real first dollar. The thread was triumphant: "the loop works."

I wanted to see if I could replicate it on a $20 token budget with Claude as the agent. I picked the closest public analog to what the tweet described: Algora, the open-source bounty platform where maintainers attach a dollar amount to a GitHub issue and the first acceptable PR gets the money.

Forty-eight hours later I have $0 and some data that I think is more interesting than a win would have been.

The setup

The plan, on paper, was simple:

1. Discover open bounties via the public Algora board / GitHub label search
2. Pick a small, scoped issue in TS / Python / Go (something a human reviewer could sanity-check)
3. Let Claude clone the repo, attempt the fix, and run the tests
4. Human-in-the-loop review of the diff before pushing a PR
