Ask HN: Building LLM apps? How are you handling user context?
Published on: 2025-06-22 11:23:27
I've been building stuff with LLMs, and every time I need user context, I end up manually wiring up a context pipeline.
Sure, the model can reason and answer questions well, but it has zero idea who the user is, where they came from, or what they've been doing in the app. Without that, I either have to make the model ask awkward initial questions to figure it out or let it guess, which is usually wrong.
So I keep rebuilding the same setup from scratch: tracking events, enriching sessions, summarizing behavior, and injecting the result into prompts.
It makes the app way more helpful, but it's a pain.
What I wish existed is a simple way to grab a session summary or user context I could just drop into a prompt. Something like:
const context = await getContext();
const response = await generateText({ system: `Here's the user context: ${context}`, messages: [...] });
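In the meantime I hand-roll something like this. A minimal sketch (all the names here — `AppEvent`, `recentEvents`, `summarizeEvents`, `getContext` — are hypothetical; in a real app the events would come from your analytics or event store, and you might use an LLM instead of string formatting for the summary step):

```typescript
// Hypothetical shape for a tracked user event.
interface AppEvent {
  type: string;      // e.g. "page_view", "error"
  detail: string;    // e.g. a doc URL or an error code
  timestamp: number; // epoch millis
}

// Grab the most recent events for a user (newest first).
// In practice this would query your event store, not take an array.
function recentEvents(events: AppEvent[], limit = 10): AppEvent[] {
  return [...events].sort((a, b) => b.timestamp - a.timestamp).slice(0, limit);
}

// Collapse raw events into a short, prompt-friendly summary.
function summarizeEvents(events: AppEvent[]): string {
  if (events.length === 0) return "No recent activity.";
  return events.map((e) => `- ${e.type}: ${e.detail}`).join("\n");
}

// The context string you'd drop into the system prompt.
function getContext(events: AppEvent[]): string {
  return summarizeEvents(recentEvents(events));
}
```

That covers the injection side; the annoying part is still wiring up the tracking and enrichment upstream of it.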
Some examples of how I use this:
- For support, I pass in the docs they viewed or the error page they landed on.
- For marketing, I sum
... Read full article.