robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25 percentage points in success rate (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
```bash
pip install robotmem
```
```python
from robotmem import learn, recall, save_perception, start_session, end_session

# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')

# Record experience
learn(
    insight="grip_force=12.5N yields highest grasp success rate",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}',
)

# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
    query="grip force parameters",
    context_filter='{"task.success": true}',
    spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}',
)

# Store perception data
save_perception(
    description="Grasp trajectory: 30 steps, success",
    perception_type="procedural",
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}',
)

# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
```
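The `context_filter` above uses dotted keys (e.g. `"task.success"`) to match fields nested inside the stored context JSON. robotmem's internal matching logic is not shown here, but a minimal sketch of how such dotted-key filtering can work, using only the standard library (the `matches` helper is hypothetical, not part of robotmem):

```python
import json

def matches(context: dict, context_filter: dict) -> bool:
    """Return True if every dotted key in the filter resolves to the
    expected value inside the nested context dict, e.g.
    {"task.success": True} matches {"task": {"success": True}}."""
    for dotted_key, expected in context_filter.items():
        node = context
        for part in dotted_key.split("."):
            if not isinstance(node, dict) or part not in node:
                return False
            node = node[part]
        if node != expected:
            return False
    return True

# The same context string passed to learn() in the Quick Start
record = json.loads(
    '{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
print(matches(record, {"task.success": True}))            # True
print(matches(record, {"params.grip_force.unit": "kg"}))  # False
```

This is why successful episodes can be selected with `context_filter='{"task.success": true}'` without knowing the full shape of each record.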
7 APIs
| API | Purpose |
| --- | --- |
| `learn` | Record physical experiences (parameters / strategies / lessons) |
| `recall` | Retrieve experiences: BM25 + vector hybrid search with `context_filter` and `spatial_sort` |
| `save_perception` | Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
| `forget` | Delete incorrect memories |
| `update` | Correct memory content |
| `start_session` | Begin an episode |
| `end_session` | End an episode (auto-consolidation + proactive recall) |
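The `spatial_sort` option in `recall` orders results by spatial nearest-neighbor to a target position. A minimal sketch of that ordering with the standard library (the record layout mirrors the `spatial.position` field from the Quick Start; the `spatial_nearest` helper is illustrative, not robotmem's implementation):

```python
import math

def spatial_nearest(memories: list[dict], target: list[float]) -> list[dict]:
    """Sort memory records by Euclidean distance from their stored
    spatial.position to a target [x, y, z] point, nearest first."""
    return sorted(memories, key=lambda m: math.dist(m["spatial"]["position"], target))

memories = [
    {"id": "far-grasp",  "spatial": {"position": [1.0, 0.5, 0.40]}},
    {"id": "near-grasp", "spatial": {"position": [1.3, 0.7, 0.42]}},
]
ordered = spatial_nearest(memories, [1.3, 0.7, 0.42])
print([m["id"] for m in ordered])  # ['near-grasp', 'far-grasp']
```

Combining this ordering with a structured filter is what lets a robot recall "successful pushes near this workspace position" in a single `recall` call.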
Key Features