Why This Matters
Agent-cache is a multi-tier caching library for AI agents that puts LLM responses, tool results, and session state behind a single Valkey or Redis connection. Existing options tend to cover only one tier or one framework; agent-cache supports several frameworks and all three tiers in one cache, with streaming support planned. That flexibility makes it a useful building block for developers shipping scalable AI applications.
Key Takeaways
- Supports multi-tier caching for LLM responses, tool results, and session states in one connection.
- Compatible with multiple frameworks including LangChain, LangGraph, and Vercel AI SDK.
- Offers built-in monitoring with OpenTelemetry and Prometheus, and now includes cluster mode support.
Multi-tier exact-match cache for AI agents backed by Valkey or Redis. LLM responses, tool results, and session state behind one connection. Framework adapters for LangChain, LangGraph, and Vercel AI SDK. OpenTelemetry and Prometheus built in. No modules required - works on vanilla Valkey 7+ and Redis 6.2+.
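To make "multi-tier exact-match behind one connection" concrete, here is a minimal sketch of the idea, not agent-cache's actual API: each tier gets a namespace prefix, and a stable hash of the inputs becomes the key, so identical inputs hit and anything else misses. A Map stands in for the Valkey/Redis backend; all names here are illustrative assumptions.

```typescript
import { createHash } from "node:crypto";

// The three tiers share one backing store, distinguished only by prefix.
type Tier = "llm" | "tool" | "session";

function cacheKey(tier: Tier, payload: unknown): string {
  // JSON.stringify + SHA-256 gives exact-match semantics: byte-identical
  // inputs produce the same key; any variation is a cache miss.
  const digest = createHash("sha256")
    .update(JSON.stringify(payload))
    .digest("hex");
  return `agent-cache:${tier}:${digest}`;
}

// In-memory stand-in for the real Valkey/Redis connection.
const store = new Map<string, string>();

function getCached(tier: Tier, payload: unknown): string | undefined {
  return store.get(cacheKey(tier, payload));
}

function setCached(tier: Tier, payload: unknown, value: string): void {
  store.set(cacheKey(tier, payload), value);
}

// LLM responses and tool results live side by side in the same store:
setCached("llm", { model: "x", prompt: "hi" }, '{"text":"hello"}');
setCached("tool", { name: "search", args: ["valkey"] }, '["result"]');
console.log(getCached("llm", { model: "x", prompt: "hi" })); // {"text":"hello"}
```

With a real server you would swap the Map for GET/SET calls with a TTL, which is all vanilla Valkey 7+ / Redis 6.2+ needs; no modules are involved in this scheme.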
Shipped v0.1.0 yesterday, v0.2.0 today with cluster mode. Streaming support coming next.
Existing options locked you into one tier (LangChain = LLM only, LangGraph = state only) or one framework. This solves both.
npm: https://www.npmjs.com/package/@betterdb/agent-cache
Docs: https://docs.betterdb.com/packages/agent-cache.html
Examples: https://valkeyforai.com/cookbooks/betterdb/
GitHub: https://github.com/BetterDB-inc/monitor/tree/master/packages...
Happy to answer questions.