Building Untrace and Being Acquired by OpenRouter
Why I built an LLM observability SDK, how it worked, and what happened when OpenRouter acquired it.
I started Untrace to solve a problem I kept hitting: shipping LLM-powered products without a clear picture of what was actually running in production.
Most teams were either duct-taping logs together or buying into a big observability platform that didn't quite fit. I wanted something in between—privacy-first trace forwarding, simple evaluation hooks, and a path to understand cost and latency without locking the whole stack into one vendor.
The aha moment: Segment for LLM apps
The mental model that stuck was Segment for LLM apps. Segment had analytics.js: you instrument once, and events fan out to whatever analytics or warehouse you want—Mixpanel, Amplitude, your own pipeline. You don't re-instrument when you add a new tool. You don't care which backend you're sending to at write time. Same idea for LLMs: one instrumentation layer that doesn't care which SDK, provider, or model you're using. Capture the trace, then fan it out to any observability platform—Langfuse, Datadog, your own eval harness, whatever. That was the product.
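The fan-out model is simple enough to sketch in a few lines. This is an illustrative toy, not the Untrace API: the `Pipeline` and `TraceEvent` names are made up here, but the shape is the point — app code calls `capture()` once, and destinations are pure configuration.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical shapes for illustration -- not the actual Untrace API.
@dataclass
class TraceEvent:
    model: str
    prompt: str
    completion: str
    latency_ms: float

@dataclass
class Pipeline:
    """Instrument once, fan out to any number of destinations."""
    destinations: list[Callable[[TraceEvent], None]] = field(default_factory=list)

    def add_destination(self, sink: Callable[[TraceEvent], None]) -> None:
        self.destinations.append(sink)

    def capture(self, event: TraceEvent) -> None:
        # One write, many reads: every sink sees the same event.
        for sink in self.destinations:
            sink(event)

# App code captures once; adding a new backend touches only configuration.
pipeline = Pipeline()
pipeline.add_destination(lambda e: print(f"[observability] {e.model} {e.latency_ms}ms"))
pipeline.add_destination(lambda e: print(f"[evals] scoring output of {e.model}"))
pipeline.capture(TraceEvent("gpt-4o", "hi", "hello!", 412.0))
```

Adding a third destination — your own warehouse, say — is one more `add_destination` call, with no change to the instrumented code path.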
What we built
Untrace was an open-source SDK that did exactly that. We built on the OpenTelemetry Gen AI semantic conventions (OTel Gen AI spec)—so traces are standardized, portable, and play nice with the rest of the OTel ecosystem. Instrument once with a single API; your traces look the same whether they came from the OpenAI SDK, LangChain, or a custom wrapper. Then you route them wherever you need: your own endpoint, a commercial observability product, or an eval pipeline. No mandatory cloud, no "you must use our dashboard." We focused on:
- Privacy-first forwarding — Your traces, your rules. Send to your own infra or to providers that matched your data policies.
- Provider-agnostic — OTel Gen AI spec meant we didn't care which SDK or model you used. One shape, many destinations.
- Evaluation-friendly — Structure that made it easy to run evals, score outputs, and plug into CI.
- Minimal lock-in — Open source, standard shapes, so you could swap backends or run everything yourself.
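To make the "one shape, many destinations" point concrete, here is a sketch of what normalizing onto the OTel Gen AI semantic conventions looks like. The `to_genai_span` helper is hypothetical (not an Untrace function), but the attribute names — `gen_ai.system`, `gen_ai.request.model`, `gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens` — come from the Gen AI spec itself:

```python
# Hypothetical normalizer, for illustration only. Whichever SDK produced the
# call, the stored span carries the same Gen AI attribute names, so every
# downstream destination can consume one shape.
def to_genai_span(provider: str, model: str,
                  input_tokens: int, output_tokens: int) -> dict:
    return {
        "name": f"chat {model}",  # span-name convention from the Gen AI spec
        "attributes": {
            "gen_ai.system": provider,
            "gen_ai.request.model": model,
            "gen_ai.usage.input_tokens": input_tokens,
            "gen_ai.usage.output_tokens": output_tokens,
        },
    }

# Calls made through different SDKs end up with identical attribute keys.
openai_span = to_genai_span("openai", "gpt-4o", 42, 128)
anthropic_span = to_genai_span("anthropic", "claude-3-5-sonnet", 42, 128)
assert openai_span["attributes"].keys() == anthropic_span["attributes"].keys()
```

Because the keys are standardized rather than provider-specific, swapping a backend (or adding an eval pipeline) means pointing the same stream somewhere new, not re-instrumenting.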
We shipped the SDK, iterated with early users, and kept the surface area small. The goal was to be the piece that made observability and evals tractable for small teams, without becoming a platform.
Who actually looks at LLM trace data
One thing that became obvious fast: it's not just engineers. Product managers want to see which flows use which models and what it costs. Support and ops need to debug user-facing failures. Finance cares about cost and usage. Everyone's looking at LLM trace data now—or wants to. The value of a single, fan-out pipeline is that the same stream can feed dashboards for PMs, alerts for on-call, and eval reports for eng. One instrumentation story, many audiences.
Why OpenRouter made sense
OpenRouter was already the unified API layer for LLMs—routing, fallbacks, cost and latency at the edge. What they didn't have was a first-party story for ingesting and evaluating traces from the wild. Untrace fit that gap: we had the SDK and the pipeline thinking; they had the distribution and the roadmap to make observability and evals a core part of their offering.
So we started talking. The fit was clear: same audience (builders shipping on multiple models), same philosophy (open, portable, no unnecessary lock-in). After a few months, we agreed on an acquisition. Untrace would become part of OpenRouter; I'd join as Head of Engineering and help fold the SDK and the thinking behind it into their stack.
Where things stand
Untrace is now part of OpenRouter. The SDK and the ideas behind it are being integrated into OpenRouter's observability and evaluation roadmap. If you were using Untrace or thinking about it, the best place to follow what's next is OpenRouter—that's where the product and the team live now.
Building something small, shipping it in the open, and having it become part of a larger story was the outcome I'd hoped for. Grateful to everyone who tried Untrace and to the OpenRouter team for making the next chapter possible.