Modern databases are all good enough. If your system is falling over, it is almost never because Postgres is too weak or Mongo cannot keep up.
The SQL vs. NoSQL debate still shows up in interviews as if it were some deep philosophical divide. It is not. Most mainstream databases today can all handle serious traffic, flexible schemas, transactions, replication, all of it. The hard part is living with the consequences of your choice for the next five years. Talking about workloads instead of brand names is how you answer the question without sounding like you are quoting a 2015 blog post.
I have watched teams blame “scale” for issues that were just bad access patterns. Full table scans in hot paths. No indexes on fields that get hammered every second. Migrations run in the middle of peak traffic because nobody thought through locking behavior. Then someone says maybe we need a new database.
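The index point is easy to demonstrate. Here is a minimal sketch using Python's built-in sqlite3; the `orders` table and `user_id` column are hypothetical stand-ins for whatever your hot path actually queries, but the before-and-after query plans tell the real story:

```python
import sqlite3

# Hypothetical hot-path table: user_id gets hammered every second.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (user_id, total) VALUES (?, ?)",
    [(i % 1000, float(i)) for i in range(10_000)],
)

def plan_for_lookup(conn):
    # EXPLAIN QUERY PLAN shows how SQLite intends to run the query.
    row = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = ?", (42,)
    ).fetchone()
    return row[-1]  # the human-readable plan detail

before = plan_for_lookup(conn)  # reports a full scan of the table
conn.execute("CREATE INDEX idx_orders_user_id ON orders (user_id)")
after = plan_for_lookup(conn)   # reports a search using the index

print(before)
print(after)
```

Same engine, same query, and the plan goes from scanning every row to a targeted index search. No new database required.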
Swapping the engine does not magically fix weak thinking. It just gives you a new set of sharp edges.
The better answers start with workload. What are you actually doing every day? Reading rows by primary key with strict consistency because money is involved? Relational databases are very good at that. Writing massive append-only events where a little staleness does not hurt anyone? There are tools that lean into that pattern. Running complex joins across entities with evolving business logic? SQL is still ridiculously effective.
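The money case is worth making concrete. A sketch of why transactional guarantees fit that workload, again using sqlite3; the `accounts` table and `transfer` helper are illustrative, not any particular product's API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money atomically: both rows change or neither does."""
    try:
        with conn:  # commits on success, rolls back on exception
            cur = conn.execute(
                "UPDATE accounts SET balance = balance - ? "
                "WHERE id = ? AND balance >= ?",
                (amount, src, amount),
            )
            if cur.rowcount != 1:
                raise ValueError("insufficient funds or unknown account")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?",
                (amount, dst),
            )
    except ValueError:
        return False
    return True

print(transfer(conn, 1, 2, 30))   # True: balances become 70 / 80
print(transfer(conn, 1, 2, 500))  # False: rollback, balances untouched
```

The second call fails partway through, and the database quietly undoes the debit. Getting that guarantee from a store that was never built for it is exactly the kind of sharp edge a new engine hands you.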
The difference is not in the tool list. It is in whether you can explain the failure modes without hand-waving.
What happens when replication lag spikes and your checkout flow reads stale data? What happens when a schema migration needs to backfill hundreds of millions of rows and your I/O graph looks like a heart attack? What happens when the one engineer who understands your distributed consensus setup leaves the company?
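The backfill question has a standard mitigation worth knowing before the interview: chunk the work so each transaction stays short. A sketch in Python with sqlite3; the table, column names, and batch size are all hypothetical, and a real migration would also throttle based on replica lag:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, total_cents INTEGER)"
)
conn.executemany(
    "INSERT INTO orders (total, total_cents) VALUES (?, NULL)",
    [(i * 0.5,) for i in range(10_000)],
)
conn.commit()

def backfill_in_batches(conn, batch_size=1000):
    """Walk the table by primary key, updating one small batch per
    transaction, so no single statement holds locks or hammers I/O
    for the whole table at once."""
    last_id = 0
    while True:
        ids = [r[0] for r in conn.execute(
            "SELECT id FROM orders WHERE id > ? AND total_cents IS NULL "
            "ORDER BY id LIMIT ?", (last_id, batch_size))]
        if not ids:
            break
        placeholders = ",".join("?" * len(ids))
        conn.execute(
            f"UPDATE orders SET total_cents = CAST(total * 100 AS INTEGER) "
            f"WHERE id IN ({placeholders})", ids)
        conn.commit()
        last_id = ids[-1]
        # In production: sleep here, and back off when replicas fall behind.

backfill_in_batches(conn)
remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE total_cents IS NULL"
).fetchone()[0]
print(remaining)  # 0
```

One giant `UPDATE` would do the same work in a single long transaction; the batched walk does it in thousands of short ones that the rest of the system can live with.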
I have seen small teams pick distributed databases because they want to look “future-proof.” What they get instead is more moving parts, more cognitive load, and longer onboarding. Engineers avoid touching the data layer because it feels risky. Features take longer. Roadmaps stretch. The database did not limit them. The complexity did.
On the other hand, I have seen teams run a single relational database far longer than outsiders thought reasonable. They invested in modeling. They added the right indexes. They understood isolation levels and locking. When they finally hit real limits, it was obvious and measurable, not hypothetical. That is when adding something specialized made sense.