Agent Observability Platform

Observe every agent decision in
production with confidence

Lumiqtrace gives engineering, AI Ops, and FinOps teams one place to trace, evaluate, and control real-world agent behavior before issues reach users.

Lumiqtrace Dashboard

Built to work with the AI stack your team already runs

OpenAI · Anthropic · Google Gemini · LangChain · LangGraph · CrewAI · Google ADK · AutoGen · OpenRouter · Vercel AI

Reliability

See the full decision chain

Trace planning, model calls, tool usage, and delegation paths in one timeline your team can act on.

Cost

Understand spend in context

Attribute costs by agent, operation, and model so optimization work happens with shared context, not guesswork.
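Attribution like this is, at heart, a roll-up of span-level spend by dimension. As an illustrative sketch only (the `Span` shape and field names below are assumptions, not Lumiqtrace's actual schema), grouping by agent or by agent + model looks like:

```typescript
// Illustrative only: roll up span-level spend by any dimension.
// The Span shape is an assumed example, not Lumiqtrace's real trace schema.
interface Span {
  agent: string;
  model: string;
  costUsd: number;
}

function costBy(spans: Span[], key: (s: Span) => string): Map<string, number> {
  const totals = new Map<string, number>();
  for (const s of spans) {
    const k = key(s);
    totals.set(k, (totals.get(k) ?? 0) + s.costUsd);
  }
  return totals;
}

const spans: Span[] = [
  { agent: "triage", model: "gpt-4o-mini", costUsd: 0.002 },
  { agent: "order", model: "gpt-4o", costUsd: 0.03 },
  { agent: "triage", model: "gpt-4o-mini", costUsd: 0.001 },
];

// Attribute by agent, or by agent + model for finer-grained context
const byAgent = costBy(spans, (s) => s.agent);
const byAgentModel = costBy(spans, (s) => `${s.agent}/${s.model}`);
```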

Operations

Move from signal to action

Surface anomalies and root-cause context quickly so engineers, AI Ops, and PMs can respond with confidence.

Product workflow

From setup to incident clarity
in minutes

01

Install the SDK

One package. TypeScript or Python. No config files, no environment variables, no build steps.

```shell
# npm
npm install @lumiqtrace/sdk

# python
pip install lumiqtrace
```
02

Wrap your client

Two lines of code. Your existing calls stay exactly the same. We capture everything silently.

```typescript
import OpenAI from "openai"
import { lumiqtrace } from "@lumiqtrace/sdk"

lumiqtrace.init({ apiKey: "lqt_..." })
const openai = lumiqtrace.wrapOpenAI(new OpenAI())
```
03

See everything

Costs, latency, errors, traces, and AI-powered insights — all live in your dashboard within seconds.

✓ Every call tracked automatically
✓ AI highlights optimization opportunities
✓ Anomalies surfaced early
✓ Lightweight runtime impact
Core capabilities

See what your agents are
actually doing

Not just LLM call logs. Full agent lifecycle visibility — planning, delegation, tool execution, and quality evaluation.

Multi-Agent Tracing

Visualize agent delegation as a flow graph

See how your triage agent delegates to the order agent, which calls tools and makes LLM decisions. Interactive tree shows every span with cost, latency, and token counts.

  • Agent → Agent delegation chains
  • Planning steps visible as first-class spans
  • Click any node to inspect I/O and tokens
  • HANDOFF and CHAIN spans for complex workflows
Agent flow diagram showing delegation chains
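An interactive tree like the one above is, conceptually, a reconstruction over flat parent-child span records. As a minimal sketch (the `AgentSpan` fields are illustrative assumptions, not Lumiqtrace's actual trace schema):

```typescript
// Illustrative only: rebuild a delegation tree from flat spans.
// Field names are assumed for the example, not Lumiqtrace's real schema.
interface AgentSpan {
  id: string;
  parentId: string | null;
  name: string;
}

// Depth-first render: each span is indented under its parent.
function renderTree(
  spans: AgentSpan[],
  parentId: string | null = null,
  depth = 0
): string[] {
  return spans
    .filter((s) => s.parentId === parentId)
    .flatMap((s) => [
      `${"  ".repeat(depth)}${s.name}`,
      ...renderTree(spans, s.id, depth + 1),
    ]);
}

const demoSpans: AgentSpan[] = [
  { id: "1", parentId: null, name: "TriageAgent" },
  { id: "2", parentId: "1", name: "OrderAgent" },
  { id: "3", parentId: "2", name: "tool:lookup_order" },
];

console.log(renderTree(demoSpans).join("\n"));
```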
Auto-Discovery

Every agent and tool, discovered automatically

No manual registration. Lumiqtrace detects agents from your telemetry and builds a live registry showing traces, cost, latency, error rates, memory usage, and which tools each agent uses.

  • Zero-config agent detection from trace data
  • Tools mapped to the agents that call them
  • Sort by usage, cost, or error rate
  • Memory and Planning capabilities flagged
Agent registry showing auto-discovered agents
Agent-native traces

Filter traces by what matters

Every trace shows which agents and tools were involved, not just which model was called. Filter by span kind — Agent, LLM, Tool, Planning, RAG, Chain — to find exactly what you need.

  • Agent and tool badges on every trace row
  • "Agentic Only" toggle to filter noise
  • Span kind chips: Agent, LLM, Tool, Plan, RAG
  • Cost and token breakdown per trace
Agent-native trace list with span kind filters
Tool Analytics

Know which tools work and which don't

Auto-discovered tool usage table shows invocations, success rate, latency, cost, and which agents call each tool. Find the bottleneck before your users do.

  • Every tool call tracked with success/failure
  • Agent ↔ Tool relationship mapping
  • Latency and cost per tool invocation
  • No instrumentation required
Discovered tools table with usage analytics
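The analytics in a table like this reduce to per-tool aggregates over call records. A minimal sketch, assuming an illustrative `ToolCall` shape (not Lumiqtrace's actual schema):

```typescript
// Illustrative only: per-tool success rate and mean latency.
// The ToolCall shape is an assumed example, not Lumiqtrace's real schema.
interface ToolCall {
  tool: string;
  ok: boolean;
  latencyMs: number;
}

function toolStats(calls: ToolCall[]) {
  // First pass: accumulate counts, successes, and total latency per tool.
  const acc = new Map<string, { n: number; ok: number; latency: number }>();
  for (const c of calls) {
    const s = acc.get(c.tool) ?? { n: 0, ok: 0, latency: 0 };
    s.n += 1;
    s.ok += c.ok ? 1 : 0;
    s.latency += c.latencyMs;
    acc.set(c.tool, s);
  }
  // Second pass: finalize rates and means.
  const out = new Map<
    string,
    { calls: number; successRate: number; meanLatencyMs: number }
  >();
  for (const [tool, s] of acc) {
    out.set(tool, {
      calls: s.n,
      successRate: s.ok / s.n,
      meanLatencyMs: s.latency / s.n,
    });
  }
  return out;
}

const demoCalls: ToolCall[] = [
  { tool: "lookup_order", ok: true, latencyMs: 100 },
  { tool: "lookup_order", ok: false, latencyMs: 300 },
  { tool: "send_email", ok: true, latencyMs: 50 },
];
const stats = toolStats(demoCalls);
```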
Quality Evaluation

One-click quality checks, zero setup

Deploy evaluation templates for relevance, groundedness, coherence, toxicity, and agent-specific metrics. LLM-as-judge scores every trace automatically.

  • 12 built-in templates across Quality, Safety, Performance, Agent
  • Custom evaluator prompts with your rubric
  • Scores tracked in ClickHouse with trend charts
  • Alert when quality drops below threshold
Evaluation template catalog
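Threshold alerting over evaluator scores is simple in principle: compare a rolling mean of recent scores against a floor. A hypothetical sketch (the function name, window size, and score range are assumptions for illustration):

```typescript
// Illustrative only: fire an alert when the rolling mean of recent
// evaluator scores (assumed 0-1 scale) drops below a threshold.
function qualityAlert(
  scores: number[],
  threshold: number,
  window = 5
): boolean {
  // Not enough data yet: don't alert on a cold start.
  if (scores.length < window) return false;
  const recent = scores.slice(-window);
  const mean = recent.reduce((a, b) => a + b, 0) / window;
  return mean < threshold;
}
```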
Operational AI

From anomaly to root cause in seconds

Lumiqtrace doesn't just show you what happened — it tells you why, and what to do about it. AI-powered intelligence surfaces incidents, correlates causes, and recommends fixes.

Cost Optimization
AI reviews spend patterns and suggests model switches with estimated savings per endpoint.

Incident Correlation
Detects anomalies, correlates them with model changes and prompt deploys, and creates incidents with AI root cause analysis.

Natural Language Query
Ask "what's my most expensive agent this week?" and get charts, tables, and actionable context.

Auto Root Cause
Transforms deep multi-agent traces into hypotheses and next-step recommendations for incident response.
AI incident dashboard with root cause analysis
Integrations

Every provider.
Every framework.
Two lines.

Wrap your existing SDK calls — we handle the rest. Automatic tracing across multi-step agent pipelines.

  • OpenAI, Anthropic, Google — native wrappers
  • LangChain callback handler (JS + Python)
  • Google ADK agent tracer
  • OpenRouter compatible via OpenAI wrapper
  • Streaming + TTFT tracking
  • Automatic trace context propagation
```typescript
import { lumiqtrace, withAgent } from "@lumiqtrace/sdk"

lumiqtrace.init({ apiKey: "lqt_..." })

// Trace an entire agent workflow
await withAgent({
  name: "CustomerSupportAgent",
  tools: [lookupOrder, checkRefund, sendEmail],
}, async (agent) => {
  agent.logPlan(["Lookup order", "Check policy"])

  const order = await agent.traceTool(
    "lookup_order",
    { orderId: "123" },
    () => lookupOrder("123")
  )

  // LLM calls auto-inherit agent context
  const res = await openai.chat.completions.create({...})
})
```
Auto-Detection

Zero-config agent detection

Lumiqtrace automatically detects and traces agent patterns from popular frameworks. No manual instrumentation required.

OpenAI

Function calling, tool_use, assistants API — auto-detected as agent tool spans

Anthropic

Tool use blocks, multi-turn agents — auto-detected with span kind inference

Google

Gemini function calling, ADK agents — auto-traced with delegation tracking

LangChain

Chains, agents, tools, callbacks — full graph traced via callback handler

Comparison

How Lumiqtrace stacks up

Agent-first context

Built around agent planning, delegation, tooling, and execution context — not only raw LLM requests.

Operations visibility

Connect quality, latency, and cost signals so teams can triage faster with shared operational context.

Actionable workflows

From detection to root-cause exploration and follow-up decisions in one product flow.

|  | Lumiqtrace | Helicone | LangSmith | Langfuse | Braintrust |
| --- | --- | --- | --- | --- | --- |
| Setup | Minimal code changes | Proxy config | SDK + config | SDK + config | SDK + config |
| Cost tracking |  |  |  |  |  |
| P50/P90/P99 latency |  |  |  |  |  |
| Trace flame graph |  |  |  |  |  |
| AI Cost Optimizer |  |  |  |  |  |
| AI Anomaly Detection |  |  |  |  |  |
| Natural Language Query |  |  |  |  |  |
| Root Cause Analysis |  |  |  |  |  |
| Agent-native tracing |  |  |  |  |  |
| Auto tool discovery |  |  |  |  |  |
| Agent registry |  |  |  |  |  |
| Multi-agent delegation |  |  |  |  |  |
| Evaluation templates |  |  |  |  |  |
| Free tier | 10K traces | 10K events | 5K traces | 50K obs. | 1K logs |
| Starting price | $29/mo | $50/mo | $39/mo | $59/mo | $50/mo |

Comparison reflects publicly available documentation and product behavior observed at the time of writing; features and pricing may change over time.

Pricing

Transparent pricing.
No surprises.

Most teams start with Pro and move to Team as agent volume and governance needs grow.

Starter
For side projects
$0 /mo
Monthly billing
  • 10K traces/month
  • 14-day retention
  • 2 projects
  • Basic request log
  • Community support
Solo
For indie devs
$29 /mo
Monthly billing
  • 50K traces/month
  • $5 per 50K extra traces
  • 30-day retention
  • 10 projects
  • Agent tracing
  • AI Hub (limited)
  • 2 seats
Pro
For small teams
$79 /mo
Monthly billing
  • 100K traces/month
  • $5 per 50K extra traces
  • 60-day retention
  • Unlimited projects
  • Agent tracing & registry
  • Evaluation templates
  • AI Hub (unlimited)
  • 5 seats included, +$10/seat
Enterprise
For production workloads
Custom
Talk to sales
  • Unlimited traces
  • 365-day retention
  • SSO/SAML
  • Dedicated support
  • Custom integrations
  • SLA guarantee
  • Unlimited seats
  • On-prem available
FAQ

Frequently asked questions

What is agent observability?

Agent observability is the practice of tracing, evaluating, and monitoring AI agent behavior in production. Unlike traditional LLM monitoring that tracks individual API calls, agent observability captures the full decision lifecycle — planning steps, tool usage, model calls, and delegation between agents — giving teams the context needed to debug failures, control costs, and ensure quality.

How is Lumiqtrace different from LangSmith or Langfuse?

Lumiqtrace is built agent-first rather than LLM-first. It automatically discovers agents, tools, and delegation chains from your telemetry without manual registration. It also includes AI-powered cost optimization, anomaly detection, natural language querying, and root cause analysis — capabilities not available in LangSmith or Langfuse.

How long does setup take?

Setup takes under 5 minutes. Install the SDK (npm or pip), wrap your existing AI client with two lines of code, and traces appear in your dashboard within seconds. No config files, environment variables, or build steps required.

What AI providers and frameworks does Lumiqtrace support?

Lumiqtrace supports OpenAI, Anthropic, Google Gemini, LangChain, LangGraph, CrewAI, Google ADK, AutoGen, OpenRouter, and Vercel AI SDK. Agent patterns are auto-detected from popular frameworks with zero manual instrumentation.

Is there a free tier?

Yes. The Starter plan is free and includes 10,000 traces per month, 14-day retention, and 2 projects. No credit card required. Most teams start with Pro ($79/month) for agent tracing, evaluation templates, and the AI Hub.

Can Lumiqtrace trace multi-agent systems?

Yes. Lumiqtrace provides a swimlane timeline that visualizes agent delegation chains, tool execution, and LLM reasoning across multiple agents. You can click any span to inspect inputs, outputs, token usage, and cost attribution per agent.

Your agents are running.
Do you know what they're doing?

Join the waitlist. Be the first to know when we launch.

✓ Free tier available  ✓ Fast setup  ✓ No credit card