
The Wonders of AI: We Are Retiring Our Bug Bounty Program


Turso is retiring its bug bounty program due to the overwhelming influx of low-quality bug reports, which hampers open contribution efforts and increases the risk of malicious exploits. This decision highlights the challenges faced by open-source projects in balancing security incentives with community sustainability. It underscores the need for innovative governance approaches in the evolving landscape of software security and open collaboration.


For almost a year now, Turso has had a program that pays $1,000 for any bug that can be demonstrated to lead to data corruption. Today, with immense sadness, we are retiring this program.

The reason is simple: everybody is being inundated by the slop machine, and we are not unique in this regard. But a program that offers money in exchange for a specific class of bugs is simply too juicy a target for the slop makers. For days, our maintainers have done little other than close slop PRs claiming to have found bugs that lead to data corruption in Turso. At a time when many OSS projects are closing their doors to contributions, we want to make every effort possible to keep the doors of Turso open. Being an Open Contribution project is part of our DNA; it is how Turso was born. But unfortunately, the financial reward is making this close to impossible, and it has to go.

We are sharing this publicly and loudly because we believe that we will all have to find new ways to establish good governance in this new era, and should learn from each other. This is our contribution to that conversation.

# Why did we start this program

We started this program because we are rewriting SQLite, known to be one of the most reliable pieces of software in the world. The community expects a high bar from a project with such ambition, and we invest tremendous effort into making sure that we can match or even surpass SQLite’s legendary reliability. Turso ships with a native Deterministic Simulator, a collection of fuzzers, an oracle-based differential testing engine against SQLite, a concurrency simulator, and on top of that, we have extensive runs on Antithesis.

We take our testing discipline seriously, and we wanted to communicate our confidence. On the other hand, all of that testing infrastructure is, at the end of the day, just software, and it is not perfect. You can write all the fuzzers and simulators in the world, but they will only catch bugs in the combinations of inputs they actually generate. For example, if your fuzzer never generates indexes, you will by definition not find any bugs related to indexes, regardless of how well you stress the rest of the system. As a real example, we found bugs that escaped our simulator because they would only appear in databases larger than 1GB, and because we injected faults aggressively into every run, databases would never get big enough to trigger them.
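To make the coverage-gap point concrete, here is a minimal sketch of oracle-based differential testing, assuming a hypothetical workload generator; stock `sqlite3` stands in for both the oracle and the system under test (in the real setup the target would be Turso itself, and the grammar far richer):

```python
# Sketch: oracle-based differential testing with a deliberate coverage knob.
# All names here (gen_statements, differential_run) are illustrative, not Turso's API.
import random
import sqlite3

def gen_statements(rng, with_indexes):
    """Tiny random workload generator. Note the coverage gap: when
    with_indexes is False, no index-related bug can ever be found,
    no matter how many seeds we run."""
    stmts = ["CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)"]
    if with_indexes:
        stmts.append("CREATE INDEX t_v ON t (v)")
    for _ in range(50):
        stmts.append(f"INSERT INTO t (v) VALUES ('{rng.randrange(10)}')")
    return stmts

def differential_run(seed):
    rng = random.Random(seed)
    oracle = sqlite3.connect(":memory:")   # trusted reference engine
    target = sqlite3.connect(":memory:")   # would be the engine under test
    for stmt in gen_statements(rng, with_indexes=True):
        oracle.execute(stmt)
        target.execute(stmt)
    query = "SELECT v, COUNT(*) FROM t GROUP BY v ORDER BY v"
    expected = oracle.execute(query).fetchall()
    actual = target.execute(query).fetchall()
    # Any divergence between oracle and target is a candidate bug report.
    assert actual == expected, f"divergence on seed {seed}"
    return expected

print(differential_run(42))
```

The value of the approach is that every divergence comes with a reproducing seed; its limit is exactly the one described above, since the generator's grammar bounds the bug classes it can ever reach.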

The main advantage of automated testing is that when a bug escapes your validation, improving the test generators makes an entire class of bugs go away. So we envisioned this program as a great way to do both things: it helped us establish confidence in the methodology, and at the same time, if someone did find areas that our simulators didn't cover well, we'd be more than happy to pay for it! We started the program with a $1,000 reward for bugs that would lead to data corruption, to run until we could release a 1.0 version of Turso. Our plan was that once we reached 1.0, we would progressively increase both the size of the reward, to substantial levels, and the scope of the issues we'd reward people for.

# And before the “singularity”, this worked great

We were delighted by this program. We paid a total of five individuals, and every one of them was incredibly special. Worth highlighting is the work of Alperen, who was actually one of the core contributors to our simulator itself (so it is little surprise that he knew of a couple of places where it could be improved); Mikael, who used LLMs in very creative ways to identify places the simulator was not reaching (we later hired Mikael); and Pavan Nambi, who paired the simulator with formal methods and ended up not only finding bugs in Turso, but in fact more than ten bugs in SQLite itself through his methodology.

# But after the “singularity”, we got drowned
