September 2025
What if we treated Postgres like SQLite?
I love SQLite. It just works. It is extremely fast, because it’s just local file access. It is operationally and mentally simple, and it is extremely stable and well-tested. However, over my years of using it, there are some things I miss from the world of Postgres. So what happens if we just treat a local Postgres like SQLite? What does that even mean? In this article, I’d like to expand on that idea.
I’ve talked and written extensively about the virtues of SQLite. I’ve written about best practices of using Go and SQLite in the cloud, as well as benchmarked it to show it’s not just a toy database. I guess you could call me a fanboy. :D
But I am somewhat envious of the ecosystem and community around Postgres. The community is much larger and more diverse, for one. PostgreSQL as a product is definitely as mature as SQLite, and it keeps improving steadily. But what I like most is the large ecosystem of extensions.
Extensions in Postgres §
Take pgvector as an example, the vector database extension that makes it possible to work with embeddings. Something similar exists in SQLite in the form of sqlite-vec, but the maturity of pgvector seems much greater than that of sqlite-vec, or anything else like it from the SQLite world. (It even looks like sqlite-vec isn’t actively maintained anymore, which is understandable, but a shame nonetheless.)
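To make that concrete, here’s a minimal sketch of what working with pgvector looks like from Go (the stack I usually write about), assuming a local database named app with the extension installed; the table and values are made up:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	// Hypothetical local database named "app"; adjust the conn string.
	conn, err := pgx.Connect(ctx, "postgres://localhost/app")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// One-time setup, server-side: no client ever has to load anything.
	mustExec(ctx, conn, `create extension if not exists vector`)
	mustExec(ctx, conn, `create table if not exists items (id bigserial primary key, embedding vector(3))`)
	mustExec(ctx, conn, `insert into items (embedding) values ($1::vector)`, "[1,2,3]")

	// Nearest-neighbor query using pgvector's distance operator.
	var id int64
	if err := conn.QueryRow(ctx,
		`select id from items order by embedding <-> $1::vector limit 1`,
		"[2,3,4]").Scan(&id); err != nil {
		log.Fatal(err)
	}
	log.Println("closest item:", id)
}

func mustExec(ctx context.Context, conn *pgx.Conn, sql string, args ...any) {
	if _, err := conn.Exec(ctx, sql, args...); err != nil {
		log.Fatal(err)
	}
}
```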
Extensions in Postgres just work. In SQLite, they have to be compiled for and available on the client system, and loaded every time you open the database. That obviously isn’t a concern in Postgres, because it is a client-server architecture and the server has all necessary extensions installed. But when you need to copy a production SQLite database file to your laptop to debug something, because there is no remote access, and you’ve forgotten how to compile the right version of the extension on your local system, and the production architecture is different from your local one, and you are in a hurry, things get frustrating quickly.
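For contrast, here’s roughly what the SQLite side looks like with the mattn/go-sqlite3 driver: you register a custom driver that loads the extension on every new connection, assuming you’ve managed to compile sqlite-vec into a vec0 loadable library for your exact platform (the path below is hypothetical):

```go
package main

import (
	"database/sql"
	"log"

	"github.com/mattn/go-sqlite3"
)

func main() {
	// The extension has to be compiled for this machine and loaded on
	// every connection; a custom driver registration is the usual way
	// to do that with mattn/go-sqlite3.
	sql.Register("sqlite3_vec", &sqlite3.SQLiteDriver{
		Extensions: []string{"./vec0"}, // e.g. vec0.so / vec0.dylib, platform-specific
	})

	db, err := sql.Open("sqlite3_vec", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("select vec_version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	log.Println("sqlite-vec", version)
}
```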
Treating Postgres as a local-only database §
Common wisdom tells you to run a cluster of Postgres instances replicating one another in some configuration or other, for operational robustness and fault tolerance. But that’s not a universal truth. You can just install Postgres on your single big, beefy application server (there’s just the one when you use SQLite, scaled vertically), and run your application right next to it.
If you also disable network access and just use Postgres over a Unix socket, things should be plenty fast. I’d guess Postgres could even run in a Docker container without taking a big performance hit, if everything is configured correctly.
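A minimal sketch of what that could look like, assuming pgx and the Debian-style socket directory /var/run/postgresql; with listen_addresses = '' in postgresql.conf, the server doesn’t listen on TCP at all:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	// With listen_addresses = '' in postgresql.conf, Postgres accepts no
	// TCP connections and is reachable only through its Unix socket.
	// pgx treats a host that is an absolute path as a socket directory;
	// /var/run/postgresql is the Debian/Ubuntu default.
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "host=/var/run/postgresql dbname=app")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	var one int
	if err := conn.QueryRow(ctx, "select 1").Scan(&one); err != nil {
		log.Fatal(err)
	}
	log.Println("connected over the socket:", one)
}
```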
Of course, you still need to back up everything to somewhere else, in case of hardware failure or other catastrophes, preferably incrementally and online. That’s exactly the same as with SQLite. There, I use Litestream for that; in the Postgres space, tools like pgBackRest or WAL-G seem to fill the same niche.
And of course, there’s now another component to worry about (the database server), with its own configuration parameters, tuning, etc. But is that much different from the various configuration options, compile-time parameters, and runtime gotchas that you need to know about and take care of in SQLite? (My “favorite”, by far, is that referential integrity, i.e. foreign key enforcement, isn’t enabled by default, for backward-compatibility reasons.)
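For example, with the mattn/go-sqlite3 driver, the _foreign_keys DSN parameter is its way of running that pragma on every pooled connection, so you don’t silently lose referential integrity:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// _foreign_keys=1 makes the driver run PRAGMA foreign_keys = ON
	// for every new connection in the pool.
	db, err := sql.Open("sqlite3", "file:app.db?_foreign_keys=1")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var fk int
	if err := db.QueryRow("PRAGMA foreign_keys").Scan(&fk); err != nil {
		log.Fatal(err)
	}
	log.Println("foreign_keys =", fk) // 1 means enforced
}
```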
I’m not really done thinking about this. Maybe it’s a bad idea, but I think it’s something I’d like to experiment with.
Have you tried this? Do you have other thoughts around this? I’d love to hear your story, and share it with others here (with permission). Write me an email, I promise I’ll answer. :-)