I’ve been experimenting with adding AI agents to UserJot, our feedback, roadmap, and changelog platform. Not the simple “one prompt, one response” stuff. Real agent systems where multiple specialized agents communicate, delegate tasks, and somehow don’t crash into each other.
The goal was to analyze customer feedback at scale. Find patterns across hundreds of posts. Auto-generate changelog entries. Things that were basically impossible to do manually. I spent weeks reverse engineering tools like Gemini CLI and OpenCode, running experiments, breaking things, fixing them, breaking them again. Just pushed a basic version to production as beta. It’s working. Mostly.
Here’s what I learned about building agent systems from studying what works in the wild and testing it myself.
The Two-Tier Agent Model That Actually Works
Forget complex hierarchies. You need exactly two levels:
Primary Agents handle the conversation. They understand context, break down tasks, and talk to users. Think of them as project managers who never write code.
Subagents do one thing well. They get a task, complete it, return results. No memory. No context. Just pure function execution.
I tried three-tier systems. Four-tier systems. Agents talking to agents talking to agents. It all breaks down. Two tiers is the sweet spot.
Here’s what I landed on:
User → Primary Agent (maintains context)
        ├─→ Research Agent (finds relevant feedback)
        ├─→ Analysis Agent (processes sentiment)
        └─→ Summary Agent (creates reports)
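The structure above can be sketched in a few dozen lines. This is a hypothetical illustration, not UserJot's actual implementation: the names `PrimaryAgent`, `delegate`, and the subagent functions are made up, and the subagents are stubbed with plain functions where a real system would call an LLM. The point it demonstrates is the division of labor: only the primary agent holds history; subagents receive a task string and nothing else.

```python
from dataclasses import dataclass, field

# Stateless subagents: each takes a task, returns a result, keeps nothing.
# (Stubs standing in for real LLM calls.)
def research_agent(task: str) -> str:
    """Finds relevant feedback for a query."""
    return f"feedback posts matching '{task}'"

def analysis_agent(task: str) -> str:
    """Processes sentiment on the given posts."""
    return f"sentiment summary of [{task}]"

def summary_agent(task: str) -> str:
    """Turns an analysis into a report."""
    return f"report: {task}"

SUBAGENTS = {
    "research": research_agent,
    "analysis": analysis_agent,
    "summary": summary_agent,
}

@dataclass
class PrimaryAgent:
    # The primary agent is the ONLY place conversation state lives.
    history: list = field(default_factory=list)

    def delegate(self, name: str, task: str) -> str:
        # Subagents get a task string and nothing else:
        # no history, no shared context, no memory between calls.
        result = SUBAGENTS[name](task)
        self.history.append((name, task, result))
        return result

    def handle(self, request: str) -> str:
        # The primary agent breaks the request into steps and
        # chains subagent outputs itself.
        posts = self.delegate("research", request)
        analysis = self.delegate("analysis", posts)
        return self.delegate("summary", analysis)

agent = PrimaryAgent()
print(agent.handle("billing complaints"))
```

Because subagents are pure functions of their input, there is nothing for them to "crash into": all coordination and all state sit in one place, which is what makes the two-tier shape debuggable where deeper hierarchies were not.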