
Understanding Telemetry

What’s Collected

Tandemu uses the OpenTelemetry standard to collect three types of data from Claude Code sessions:

Type      Examples
--------  -----------------------------------------------
Traces    Task session spans (start/end), tool executions
Metrics   AI-generated lines, manual lines per session
Logs      Prompt loops, tool errors, friction events

Privacy

Tandemu tracks session-level metrics, not individual actions:

What IS tracked:

  • How long a task session lasted (from /morning to /finish)
  • How many lines of code were AI-generated vs manually typed
  • Which files had repeated errors (friction)
  • Task completion rate and cycle times

What is NOT tracked:

  • Individual keystrokes
  • Screen recordings
  • Prompt content (what the developer asked Claude)
  • Code content (what was written)
  • Idle time or break tracking

Developers see the same data their leads see. There’s no hidden dashboard.

Pipeline

Developer's Terminal
  └── Claude Code (emits OpenTelemetry data)
        ▼
OTel Collector (port 4317/4318)
  ├── Validates and batches data
  └── Tags with organization ID
        ▼
ClickHouse (analytical database)
  ├── otel_traces — task session spans
  ├── otel_metrics_sum — code line counts
  └── otel_logs — friction events
        ▼
NestJS Backend (queries ClickHouse)
        ▼
Dashboard / Claude Code Skills

Metrics Explained

AI vs Manual Ratio

Measures the proportion of code generated by Claude Code versus code typed by the developer.

  • Tracked via the code.lines.ai_generated and code.lines.manual metrics
  • Calculated per session and aggregated by team, sprint, or time period
  • A ratio of 2.5x means 2.5 lines of AI code for every 1 line of manual code
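The calculation above can be sketched as a simple aggregation. The session shape and function names here are hypothetical illustrations; only the underlying metric names (code.lines.ai_generated, code.lines.manual) come from the docs:

```typescript
// Hypothetical per-session line counts, sourced from the
// code.lines.ai_generated and code.lines.manual metrics.
interface SessionLines {
  aiGenerated: number;
  manual: number;
}

// Sum lines across sessions and compute the AI-vs-manual ratio.
// A result of 2.5 means 2.5 AI lines for every manual line.
function aiManualRatio(sessions: SessionLines[]): number {
  const ai = sessions.reduce((sum, s) => sum + s.aiGenerated, 0);
  const manual = sessions.reduce((sum, s) => sum + s.manual, 0);
  return manual === 0 ? Infinity : ai / manual;
}
```

The same function works at any aggregation level (team, sprint, or time period) by changing which sessions are passed in.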

Tandemu uses two tiers of attribution:

  1. Native OTEL attribution — Per-file AI line counts sent directly from Claude Code telemetry (preferred, more accurate)
  2. Co-Authored-By fallback — When native attribution isn’t available, commits with Co-Authored-By: Claude tags are classified as AI-generated, with lines distributed proportionally
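A minimal sketch of the tier-2 fallback, assuming a hypothetical commit shape. It only handles the classification step; the proportional distribution of lines across files mentioned above is omitted for brevity:

```typescript
// Illustrative commit shape; not Tandemu's actual API.
interface Commit {
  message: string;
  linesAdded: number;
}

// Classify a commit's added lines as AI-generated when its message
// carries a "Co-Authored-By: Claude" trailer (tier-2 fallback).
function aiLinesFromCommit(commit: Commit): number {
  const isAi = /^Co-Authored-By:\s*Claude\b/im.test(commit.message);
  return isAi ? commit.linesAdded : 0;
}
```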

Friction Heatmap

Identifies code areas where developers struggle repeatedly.

Friction is detected by:

  • Prompt loops — The developer asks Claude to fix the same issue multiple times
  • Tool execution errors — Claude’s file edits or commands fail repeatedly

Each friction event is tagged with the file path, so Tandemu can show which files cause the most trouble.

Severity is calculated using a weighted score: promptLoops + (errors × 2).

Severity         Criteria
---------------  -------------------------
High (red)       Weighted score >= 20
Medium (yellow)  Weighted score >= 10
Low (green)      Weighted score < 10
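The scoring and thresholds above translate directly into code (function and type names are illustrative):

```typescript
type Severity = "high" | "medium" | "low";

// Weighted friction score: promptLoops + (errors * 2),
// bucketed by the thresholds from the severity table.
function frictionSeverity(promptLoops: number, errors: number): Severity {
  const score = promptLoops + errors * 2;
  if (score >= 20) return "high";
  if (score >= 10) return "medium";
  return "low";
}
```

Errors are weighted twice as heavily as prompt loops, so a file with 8 failed tool executions reaches "high" even with few prompt loops.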

DORA Metrics

Calculated from merged GitHub PRs, synced every 4 hours.

  • Deployment Frequency — Merged PRs per week, rated elite/high/medium/low
  • Lead Time for Changes — Median time from PR creation to merge, rated elite/high/medium/low
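The two calculations above can be sketched as follows; the PR shape is a hypothetical illustration of what the GitHub sync might store:

```typescript
// Illustrative merged-PR record from the GitHub sync.
interface MergedPr {
  createdAt: Date;
  mergedAt: Date;
}

// Deployment Frequency: merged PRs per week over a lookback window.
function deploymentFrequency(prs: MergedPr[], windowDays: number): number {
  return prs.length / (windowDays / 7);
}

// Lead Time for Changes: median hours from PR creation to merge.
function medianLeadTimeHours(prs: MergedPr[]): number {
  const hours = prs
    .map((pr) => (pr.mergedAt.getTime() - pr.createdAt.getTime()) / 3_600_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}
```

The elite/high/medium/low rating step is omitted here since the specific thresholds are not given in this section.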

See What Gets Measured for the complete metrics reference.

Passive Time Tracking

Session duration is calculated from task session spans:

  • A task session starts when a developer runs /morning
  • It ends when they run /finish or /pause
  • Total hours are aggregated per developer per day
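The aggregation described above can be sketched like this; the span shape is illustrative (real spans live in otel_traces):

```typescript
// Illustrative task session span: one /morning-to-/finish interval.
interface TaskSpan {
  developer: string;
  startMs: number; // when /morning was run
  endMs: number;   // when /finish or /pause was run
}

// Total hours per developer for a set of one day's spans.
function hoursPerDeveloper(spans: TaskSpan[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const s of spans) {
    const hours = (s.endMs - s.startMs) / 3_600_000;
    totals.set(s.developer, (totals.get(s.developer) ?? 0) + hours);
  }
  return totals;
}
```

Because a /pause closes the current span and a later /morning opens a new one, breaks between spans simply never appear in the totals, which is what makes the tracking passive.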

Data Retention

The memory_access_log table in ClickHouse has a 90-day TTL. Core telemetry tables (otel_traces, otel_metrics_sum, otel_logs) created by the OTel Collector do not have a TTL configured by default — data is retained indefinitely unless you add a TTL to the table definitions.
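If indefinite retention is not desired, a TTL can be added to each core table. A minimal sketch for otel_traces, assuming the default ClickHouse exporter schema in which the event time column is named Timestamp:

```sql
-- Expire trace rows 90 days after their event timestamp.
-- Assumes the exporter's default otel_traces schema (Timestamp column);
-- repeat for otel_metrics_sum and otel_logs with their time columns.
ALTER TABLE otel_traces MODIFY TTL Timestamp + INTERVAL 90 DAY;
```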
