Contextual Observation & Recall Engine
C.O.R.E is a shareable memory for LLMs that is private, portable, and 100% owned by the user. You can either run it locally or use our hosted version, then connect it with tools like Cursor and Claude to share your context across multiple places.
C.O.R.E is built for two reasons:
1. To give you complete ownership of your memory, stored locally and accessible across any app that needs LLM context.
2. To help SOL (your AI assistant) access your context, facts, and preferences for more relevant and personalized responses.
Note: We are actively working on improving support for Llama models. At the moment, C.O.R.E does not provide optimal results with Llama-based models, but we are making progress to ensure better compatibility and output in the near future.
Demo Video
Check C.O.R.E Demo
How is C.O.R.E different from other Memory Providers?
Unlike most memory systems, which act like basic sticky notes that only show what is true right now, C.O.R.E is built as a dynamic, living temporal knowledge graph:
Every fact is a first-class “Statement” with full history, not just a static edge between entities.
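To illustrate the reified-statement idea, here is a minimal sketch in Python. C.O.R.E's actual schema is not shown in this document, so all field and function names below (`Statement`, `valid_from`, `valid_until`, `facts_at`) are hypothetical; the point is only that each fact is its own record with a validity window, so superseded facts are kept as history rather than overwritten.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Statement:
    # Hypothetical shape of a reified fact: the fact is a first-class
    # record (not a bare edge), so it can carry its own validity history.
    subject: str
    predicate: str
    obj: str
    valid_from: datetime
    valid_until: Optional[datetime] = None  # None = still considered true

def facts_at(statements: list[Statement], when: datetime) -> list[Statement]:
    """Return the statements that were considered true at `when`."""
    return [
        s for s in statements
        if s.valid_from <= when and (s.valid_until is None or when < s.valid_until)
    ]

# A preference that changed over time is retained, not overwritten.
history = [
    Statement("user", "prefers_editor", "vim",
              valid_from=datetime(2023, 1, 1),
              valid_until=datetime(2024, 6, 1)),
    Statement("user", "prefers_editor", "cursor",
              valid_from=datetime(2024, 6, 1)),
]

print([s.obj for s in facts_at(history, datetime(2025, 1, 1))])  # ['cursor']
print([s.obj for s in facts_at(history, datetime(2023, 6, 1))])  # ['vim']
```

Because the old statement is invalidated rather than deleted, the graph can answer both "what does the user prefer now?" and "what did the user prefer last year?" from the same store.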