
Everyone's trying vectors and graphs for AI memory. We went back to SQL


When we first started building with LLMs, the gap was obvious: they could reason well in the moment, but forgot everything as soon as the conversation moved on.

You could tell an agent, “I don’t like coffee,” and three steps later it would suggest espresso again. It wasn’t broken logic; it was missing memory.

Over the past few years, people have tried a bunch of ways to fix it:

1. Prompt stuffing / fine-tuning – Keep prepending history to the prompt. Works for short chats, but token usage and cost explode fast.

2. Vector databases (RAG) – Store embeddings in Pinecone/Weaviate. Recall is semantic, but retrieval is noisy and loses structure (see the sketch after this list).

3. Graph databases – Build entity-relationship graphs. Great for reasoning, but hard to scale and maintain.

4. Hybrid systems – Mix vectors, graphs, key-value, and relational DBs. Flexible but complex.
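To make the trade-offs concrete, here is a minimal sketch of the vector-store approach from item 2. It uses plain numpy cosine similarity in place of a hosted service like Pinecone or Weaviate, and a toy hashing “embedding” as a stand-in for a real embedding model, so it illustrates the mechanics rather than real semantic quality:

```python
import numpy as np

DIM = 64

def embed(text: str) -> np.ndarray:
    # Toy bag-of-words hashing "embedding". A real system would call
    # an embedding model here; this stand-in only makes the demo runnable.
    vec = np.zeros(DIM)
    for word in text.lower().split():
        vec[hash(word) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# A vector store is, at its core, a pile of (text, vector) pairs
# plus a nearest-neighbor index. A plain list is enough to show the shape.
memory: list[tuple[str, np.ndarray]] = []

def remember(text: str) -> None:
    memory.append((text, embed(text)))

def recall(query: str, top_k: int = 2) -> list[str]:
    # Cosine similarity (vectors are unit-normalized, so a dot
    # product suffices). Recall is ranked by similarity, never exact.
    q = embed(query)
    ranked = sorted(memory, key=lambda m: float(q @ m[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

remember("User does not like coffee")
remember("User prefers green tea in the morning")
remember("User's favorite editor is Vim")

print(recall("what drink should we suggest?"))
```

Even with a real embedding model, recall is a similarity ranking over unstructured blobs: there is no schema and no relations, and negations like “does not like coffee” often land near their opposites. That is where the noise and lost structure come from.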

And then there’s the twist: relational databases! Yes, the tech that’s been running banks and social media for decades is looking like one of the most practical ways to give AI persistent memory.

Instead of exotic stores, you can:

- Keep short-term vs long-term memory in SQL tables (a minimal sketch follows)
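Here is a minimal sketch of that idea using an in-memory SQLite database; the table names and columns (short_term, long_term, and so on) are hypothetical illustrations, not a schema from the article:

```python
import sqlite3
from datetime import datetime, timezone

# In-memory SQLite for illustration; any relational database works.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE short_term (
    id INTEGER PRIMARY KEY,
    session_id TEXT NOT NULL,
    content TEXT NOT NULL,
    created_at TEXT NOT NULL
);
CREATE TABLE long_term (
    id INTEGER PRIMARY KEY,
    user_id TEXT NOT NULL,
    fact TEXT NOT NULL,
    created_at TEXT NOT NULL
);
""")

def now() -> str:
    return datetime.now(timezone.utc).isoformat()

# Short-term memory: raw conversation turns, scoped to one session.
db.execute(
    "INSERT INTO short_term (session_id, content, created_at) VALUES (?, ?, ?)",
    ("sess-1", "User: I don't like coffee", now()),
)

# Long-term memory: durable facts promoted out of the transcript.
db.execute(
    "INSERT INTO long_term (user_id, fact, created_at) VALUES (?, ?, ?)",
    ("user-42", "dislikes coffee", now()),
)

# Recall is an ordinary indexed query: exact, cheap, and structured.
for (fact,) in db.execute(
    "SELECT fact FROM long_term WHERE user_id = ?", ("user-42",)
):
    print(fact)
```

Promoting memories from short-term to long-term then becomes ordinary SQL plumbing (summarize stale rows, insert the distilled fact, delete the originals), the kind of job relational databases have run reliably for decades.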
