Launch HN: Lucidic (YC W25) – Debug, test, and evaluate AI agents in production
Hi HN, we’re Abhinav, Andy, and Jeremy, and we’re building Lucidic AI (https://dashboard.lucidic.ai), an AI agent interpretability tool for observing and debugging AI agents. Here’s a demo: https://youtu.be/Zvoh1QUMhXQ

Getting started takes one line of code: call lai.init() in your agent code and log into the dashboard. From there you can see traces of each run, cumulative trends across sessions, built-in or custom evals, and grouped failure modes. Call lai.create_step() with any metadata.
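
To make the setup concrete, here is a minimal sketch of instrumenting an agent. The lai.init() and lai.create_step() calls come from the description above; the module name `lucidicai` and the keyword arguments shown are assumptions for illustration, not confirmed API:

```python
# Hypothetical sketch of instrumenting an agent with Lucidic.
# lai.init() and lai.create_step() are the calls described in the post;
# the import name and the parameter names below are assumptions.
import lucidicai as lai

lai.init()  # one-line setup: starts a session that appears in the dashboard

def run_agent(task: str) -> str:
    # Log a step with arbitrary metadata so it shows up in the run's trace.
    lai.create_step(state="planning", action=f"received task: {task}")
    result = f"handled: {task}"
    lai.create_step(state="done", action=result)
    return result
```

Each create_step() call would then show up as one step in that session's trace on the dashboard.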