A database is just files. SQLite is a single file on disk. PostgreSQL is a directory of files with a process sitting in front of them. Every database you have ever used reads and writes to the filesystem, exactly like your code does when it calls open().
So the question is not whether to use files. You're always using files. The question is whether to use a database's files or your own. And for a lot of applications, especially early-stage ones, the answer might be: your own.
Now, obviously we love databases. We're building DB Pro, a database client for Mac, Windows, and Linux. But the honest answer to "do you need one?" depends on your scale, and most applications are smaller than people assume. We tested this. We built the same HTTP server in Go, Bun, and Rust, using two storage strategies, and hammered them with wrk. Here's what the numbers look like.
The setup
Three flat files: users.jsonl, products.jsonl, orders.jsonl. The format is newline-delimited JSON (JSONL): one record per line, appended on write. Each file holds one entity type.
```json
{"id": "a3f1...", "name": "Alice Chen", "email": "[email protected]", "created_at": "2026-04-15T..."}
{"id": "b7d2...", "name": "Bob Torres", "email": "[email protected]", "created_at": "2026-04-15T..."}
```
Two HTTP endpoints: POST /users to create, GET /users/:id to fetch by ID. We benchmarked the GET path. Reads are where the strategies diverge.
Approach 1: Read the file every time
The simplest thing you can do: when a request comes in for user abc-123, open the file, scan every line, parse each one as JSON, check the ID. Return when you find a match.
Go: