# Understanding Telemetry

## What’s Collected
Tandemu uses the OpenTelemetry standard to collect three types of data from Claude Code sessions:
| Type | Examples |
|---|---|
| Traces | Task session spans (start/end), tool executions |
| Metrics | AI-generated lines, manual lines per session |
| Logs | Prompt loops, tool errors, friction events |
## Privacy

Tandemu tracks session-level metrics, not individual actions.

**What IS tracked:**

- How long a task session lasted (from `/morning` to `/finish`)
- How many lines of code were AI-generated vs manually typed
- Which files had repeated errors (friction)
- Task completion rate and cycle times
**What is NOT tracked:**
- Individual keystrokes
- Screen recordings
- Prompt content (what the developer asked Claude)
- Code content (what was written)
- Idle time or break tracking
Developers see the same data their leads see. There’s no hidden dashboard.
## Pipeline
```
Developer's Terminal
      │
Claude Code (emits OpenTelemetry data)
      │
      ▼
OTel Collector (port 4317/4318)
  ├── Validates and batches data
  └── Tags with organization ID
      │
      ▼
ClickHouse (analytical database)
  ├── otel_traces — task session spans
  ├── otel_metrics_sum — code line counts
  └── otel_logs — friction events
      │
      ▼
NestJS Backend (queries ClickHouse)
      │
      ▼
Dashboard / Claude Code Skills
```

## Metrics Explained
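As a minimal sketch of the backend step, the NestJS service might build a ClickHouse query like the one below. The column names (`SpanName`, `Duration`, `Timestamp`, the `ResourceAttributes` map) follow the OTel ClickHouse exporter's default schema and the `organization.id` attribute key is an illustrative assumption, not Tandemu's confirmed schema:

```typescript
// Hypothetical sketch: build the query the backend might run to fetch
// task session spans for one organization over a recent window.
// Table name (otel_traces) comes from the pipeline above; column names
// are assumptions based on the OTel ClickHouse exporter defaults.
function sessionSpansQuery(days: number): string {
  return `
SELECT SpanName, Duration, Timestamp
FROM otel_traces
WHERE ResourceAttributes['organization.id'] = {orgId:String}
  AND Timestamp >= now() - INTERVAL ${days} DAY
ORDER BY Timestamp DESC`.trim();
}
```

The organization ID is passed as a bound parameter (`{orgId:String}`) rather than interpolated, which is the usual way to avoid injection with the ClickHouse client.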
### AI vs Manual Ratio
Measures the proportion of code generated by Claude Code versus code typed by the developer.
- Tracked via the `code.lines.ai_generated` and `code.lines.manual` metrics
- Calculated per session and aggregated by team, sprint, or time period
- A ratio of 2.5x means 2.5 lines of AI code for every 1 line of manual code
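The calculation above can be sketched as follows; the session shape is an illustrative assumption, with the two fields standing in for the `code.lines.ai_generated` and `code.lines.manual` metrics:

```typescript
// Illustrative session shape; field names are assumptions.
interface SessionLines {
  aiGenerated: number; // from code.lines.ai_generated
  manual: number;      // from code.lines.manual
}

// Sum lines across sessions, then compute AI lines per manual line.
// A result of 2.5 means 2.5 AI lines for every manual line.
function aiToManualRatio(sessions: SessionLines[]): number {
  const ai = sessions.reduce((sum, s) => sum + s.aiGenerated, 0);
  const manual = sessions.reduce((sum, s) => sum + s.manual, 0);
  return manual === 0 ? Infinity : ai / manual;
}
```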
Tandemu uses two tiers of attribution:
- Native OTEL attribution — Per-file AI line counts sent directly from Claude Code telemetry (preferred, more accurate)
- Co-Authored-By fallback — When native attribution isn’t available, commits with `Co-Authored-By: Claude` tags are classified as AI-generated, with lines distributed proportionally
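A hedged sketch of the fallback tier: when a commit carries the `Co-Authored-By: Claude` tag, its added lines are treated as AI-generated and attributed to each file in proportion to that file's share of the diff. The commit and file shapes below are illustrative, not Tandemu's actual types:

```typescript
// Illustrative shapes; these are assumptions, not Tandemu's schema.
interface CommitFile { path: string; additions: number; }
interface Commit { message: string; files: CommitFile[]; }

// Classify a commit via its trailer, then attribute AI lines per file.
// With whole-commit classification, proportional distribution reduces
// to each file receiving its own added-line count.
function aiLinesByFile(commit: Commit): Map<string, number> {
  const isAi = commit.message.includes("Co-Authored-By: Claude");
  const out = new Map<string, number>();
  for (const f of commit.files) {
    out.set(f.path, isAi ? f.additions : 0);
  }
  return out;
}
```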
### Friction Heatmap
Identifies code areas where developers struggle repeatedly.
Friction is detected by:
- Prompt loops — The developer asks Claude to fix the same issue multiple times
- Tool execution errors — Claude’s file edits or commands fail repeatedly
Each friction event is tagged with the file path, so Tandemu can show which files cause the most trouble.
Severity is calculated using a weighted score: `promptLoops + (errors × 2)`.
| Severity | Criteria |
|---|---|
| High (red) | Weighted score >= 20 |
| Medium (yellow) | Weighted score >= 10 |
| Low (green) | Weighted score < 10 |
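The scoring rule and the thresholds from the table above translate directly into code:

```typescript
// Weighted friction score: promptLoops + (errors × 2), bucketed into
// the severity levels from the table above.
function frictionSeverity(
  promptLoops: number,
  errors: number,
): "low" | "medium" | "high" {
  const score = promptLoops + errors * 2;
  if (score >= 20) return "high";   // red
  if (score >= 10) return "medium"; // yellow
  return "low";                     // green
}
```

Errors are weighted double because a failed tool execution is a stronger friction signal than a repeated prompt.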
### DORA Metrics
Calculated from merged GitHub PRs, synced every 4 hours.
- Deployment Frequency — Merged PRs per week, rated elite/high/medium/low
- Lead Time for Changes — Median time from PR creation to merge, rated elite/high/medium/low
See What Gets Measured for the complete metrics reference.
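The two calculations can be sketched as below. The PR shape is an illustrative assumption, and the elite/high/medium/low rating thresholds are omitted because the source does not state them:

```typescript
// Illustrative merged-PR shape; an assumption, not Tandemu's schema.
interface MergedPr { createdAt: Date; mergedAt: Date; }

// Deployment Frequency: merged PRs per week over the given window.
function deploymentFrequency(prs: MergedPr[], windowDays: number): number {
  return prs.length / (windowDays / 7);
}

// Lead Time for Changes: median hours from PR creation to merge.
function leadTimeHours(prs: MergedPr[]): number {
  const hours = prs
    .map((p) => (p.mergedAt.getTime() - p.createdAt.getTime()) / 3_600_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}
```

The median (rather than the mean) keeps one long-lived PR from skewing the lead-time figure.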
### Passive Time Tracking
Session duration is calculated from task session spans:
- A task session starts when a developer runs `/morning`
- It ends when they run `/finish` or `/pause`
- Total hours are aggregated per developer per day
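The aggregation described above can be sketched as follows; the span shape is an illustrative assumption standing in for the task session spans stored in `otel_traces`:

```typescript
// Illustrative span shape; field names are assumptions.
interface TaskSpan { developer: string; start: Date; end: Date; }

// Sum span durations into hours, keyed by developer and UTC day.
function hoursPerDeveloperPerDay(spans: TaskSpan[]): Map<string, number> {
  const out = new Map<string, number>();
  for (const s of spans) {
    const day = s.start.toISOString().slice(0, 10); // YYYY-MM-DD
    const key = `${s.developer}:${day}`;
    const hours = (s.end.getTime() - s.start.getTime()) / 3_600_000;
    out.set(key, (out.get(key) ?? 0) + hours);
  }
  return out;
}
```

Because the spans come from explicit `/morning` and `/finish` boundaries, no idle-time heuristics are involved, consistent with the privacy section above.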
## Data Retention

The `memory_access_log` table in ClickHouse has a 90-day TTL. Core telemetry tables (`otel_traces`, `otel_metrics_sum`, `otel_logs`) created by the OTel Collector do not have a TTL configured by default; data is retained indefinitely unless you add a TTL to the table definitions.