
Bringing Observability to Claude Code: OpenTelemetry in Action


AI coding assistants like Claude Code are becoming core parts of modern development workflows. But as with any powerful tool, the question quickly arises: how do we measure and monitor its usage? Without proper visibility, it’s hard to understand adoption, performance, and the real value Claude brings to engineering teams. For leaders and platform engineers, that lack of observability can mean flying blind when it comes to understanding ROI, productivity gains, or system reliability.

That’s where observability comes in. By leveraging OpenTelemetry and SigNoz, we built an observability pipeline that makes Claude Code usage measurable and actionable. From request volumes to latency metrics, everything flows into SigNoz dashboards, giving us clarity on how Claude is shaping developer workflows and helping us spot issues before they snowball.

In this post, we’ll walk through how we connected Claude Code’s monitoring hooks with OpenTelemetry and exported everything into SigNoz. The result: a streamlined, data-driven way to shine a light on how developers actually interact with Claude Code and to help teams make smarter, evidence-backed decisions about scaling AI-assisted coding.
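As a concrete starting point, Claude Code can be pointed at any OTLP-compatible backend through environment variables. The sketch below shows one way to wire it up, using Claude Code's documented telemetry variables; the endpoint assumes a SigNoz collector listening on the default OTLP gRPC port 4317 locally, so adjust host, port, and protocol for your own deployment:

```shell
# Enable Claude Code's built-in OpenTelemetry export
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Emit both metrics and logs via OTLP
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp

# Point the exporter at the collector (assumed local SigNoz gRPC endpoint)
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```

Any `claude` session launched from a shell with these variables set inherits the configuration, and token, session, and request metrics should start flowing into SigNoz without further code changes.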

Why Monitor Claude Code?

Claude Code is powerful, but like any tool that slips seamlessly into a developer’s workflow, it can quickly turn into a black box. You know people are using it, but how much, how effectively, and at what cost? Without telemetry, you’re left guessing whether Claude is driving real impact or just lurking quietly in the background.

That’s why monitoring matters. With the right observability pipeline, Claude Code stops being an invisible assistant and starts showing its true footprint in your engineering ecosystem. By tracking key logs and metrics in SigNoz dashboards, we can answer questions that directly tie usage to value:

Total token usage & cost → How much are we spending, and where are those tokens going?

Sessions, conversations & requests per user → Who’s using Claude regularly, and what does “active usage” really look like?

Quota visibility → How close are we to hitting limits (like the 5-hour quota), and do we need to adjust capacity?

Performance trends → From command duration over time to request success rate, are developers getting fast, reliable responses?
