Dashboard Overview

The dashboard is at app.tandemu.dev. Log in to see your organization’s data.

All dashboard pages support team and time range filters via the controls in the top-right corner.

Dashboard Home

Path: /

The home page is the command center with org-wide KPI cards and four charts.

KPI Cards

  • Total Sessions — Claude Code sessions across the team
  • AI Code Ratio — Percentage of code generated by AI vs written manually
  • Active Developers — Team members who had sessions recently
  • Total Lines of Code — Combined output (AI + manual)
  • Avg Cycle Time — Average task duration from /morning to /finish
  • Tool Success Rate — Percentage of successful tool executions in Claude Code
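The two ratio-style cards above can be sketched as simple formulas. This is an illustrative sketch, not the dashboard's actual schema — the `OrgTotals` field names are hypothetical:

```typescript
// Hypothetical org-wide aggregates; real field names may differ.
interface OrgTotals {
  aiLines: number;      // AI-generated lines of code
  manualLines: number;  // manually written lines of code
  toolCalls: number;    // total tool executions in Claude Code
  toolFailures: number; // tool executions that failed
}

// AI Code Ratio: share of total lines that were AI-generated.
function aiCodeRatio(t: OrgTotals): number {
  const total = t.aiLines + t.manualLines;
  return total === 0 ? 0 : t.aiLines / total;
}

// Tool Success Rate: fraction of tool executions that succeeded.
function toolSuccessRate(t: OrgTotals): number {
  return t.toolCalls === 0 ? 1 : (t.toolCalls - t.toolFailures) / t.toolCalls;
}
```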

Charts

  • AI vs Manual Code — Donut chart showing the current AI ratio
  • Tool Usage — Horizontal bar chart of Claude Code tool calls by type
  • Task Velocity — Weekly average task duration trend
  • Investment Allocation — Where engineering time goes (features vs bugs vs tech debt vs maintenance)

Activity

Path: /activity

Developer sessions, time tracking, and code-level analysis:

Stats Cards

  • Total Active Time — Combined developer session hours
  • Total Sessions — Number of Claude Code sessions
  • Avg per Developer — Average session time per team member

Components

  • Activity Chart — Daily session and hour counts over time
  • Session Log — Table with date, developer name, active time, session count
  • Developer Leaderboard — Per-developer table with sessions, active time, AI lines, manual lines
  • Hot Files — Most-changed files ranked by commit frequency
  • AI Effectiveness — AI-written lines by file (where AI output survives)

No manual timesheets needed. Time is tracked automatically from Claude Code sessions.

Insights

Path: /insights

AI investment analysis for engineering leads.

Stats Cards

  • AI Lines — Total AI-generated lines of code
  • Manual Lines — Total manually-written lines
  • Tasks Completed — Number of finished tasks
  • Total AI Cost — Estimated AI token spend

ROI Metrics

  • Productivity Multiplier — How much more output the team produces with AI assistance compared to manual-only coding
  • Capacity Freed — Hours of developer time freed by AI assistance per period
  • Cost per Task — Average engineering cost per completed task

An assumptions banner at the top lets you configure the developer hourly rate used for cost calculations.
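One plausible reading of these three metrics, using the configurable hourly rate from the assumptions banner. The formulas and field names here are illustrative assumptions, not necessarily the definitions the dashboard uses:

```typescript
// Illustrative ROI formulas; the dashboard's actual definitions may differ.
interface RoiInputs {
  aiLines: number;        // AI-generated lines in the period
  manualLines: number;    // manually written lines in the period
  activeHours: number;    // total developer session hours
  tasksCompleted: number; // finished tasks in the period
  hourlyRate: number;     // configured in the assumptions banner
  aiCost: number;         // estimated AI token spend
}

// Productivity Multiplier: total output relative to manual-only output.
function productivityMultiplier(r: RoiInputs): number {
  return r.manualLines === 0 ? 1 : (r.aiLines + r.manualLines) / r.manualLines;
}

// Capacity Freed: hours the AI-written lines would have taken at the
// team's observed manual pace (lines per hour).
function capacityFreed(r: RoiInputs): number {
  const manualPace = r.manualLines / r.activeHours;
  return manualPace === 0 ? 0 : r.aiLines / manualPace;
}

// Cost per Task: labor cost plus AI spend, divided by completed tasks.
function costPerTask(r: RoiInputs): number {
  return r.tasksCompleted === 0
    ? 0
    : (r.activeHours * r.hourlyRate + r.aiCost) / r.tasksCompleted;
}
```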

Tandemu Impact

  • Memory Hits — How often Claude accessed stored memories during sessions
  • Friction Trend — Whether friction is increasing or decreasing over time
  • Knowledge Shared — Organization memories created and published

Charts

  • Throughput Chart — Tasks completed over time, showing velocity trends
  • Cost Efficiency Chart — Cost per task over time
  • Token Usage — AI token consumption patterns across the team
  • AI Adoption Leaderboard — Per-developer ranking by AI code ratio

Friction Map

Path: /friction-map

Identifies files and components where developers get stuck:

  • Repository paths ranked by friction score
  • Prompt loop count — How many times developers re-prompted Claude on the same issue
  • Error count — Tool execution failures
  • Sessions count — Number of unique sessions affected per path
  • Severity badges — Based on a weighted score: promptLoops + (errors × 2). Low (below 10), Medium (10-19), High (20+)
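The weighted score and badge thresholds above can be expressed directly in code:

```typescript
// Friction score as described above: promptLoops + errors × 2.
function frictionScore(promptLoops: number, errors: number): number {
  return promptLoops + errors * 2;
}

type Severity = "Low" | "Medium" | "High";

// Badge bands: Low (below 10), Medium (10–19), High (20+).
function severityBadge(score: number): Severity {
  if (score >= 20) return "High";
  if (score >= 10) return "Medium";
  return "Low";
}
```

For example, a file with 6 prompt loops and 3 tool errors scores 12 and earns a Medium badge.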

High friction on a file usually means:

  • The code is complex or poorly documented
  • There are subtle bugs that AI can’t easily fix
  • The architecture needs refactoring

Use this to prioritize tech debt cleanup — focus on the files that slow your team down the most.

AI Memory

Path: /memory

Browse and manage your team’s AI knowledge base. See Memory Insights for the full guide.

Stats

Four KPI cards: Total Memories, Personal, Organization, Memory Health (% actively used by Claude).

Charts

  • Memory Categories — Bar chart showing knowledge distribution (architecture, patterns, gotchas, decisions, etc.)
  • Memory Coverage — Donut showing what percentage of memories Claude actually references

Insights

  • Knowledge Gaps — Modules with frequent changes but few/no documented memories
  • Most Referenced — The memories Claude relies on most
  • Cleanup Candidates — Unused memories that may be stale or irrelevant

Browser

Search, filter, edit, and delete memories. Toggle between Personal and Organization scope. View as a list grouped by repo or browse as a file tree.

DORA Metrics

DORA metrics are calculated from merged GitHub PRs, synced automatically every 4 hours.

  • Deployment Frequency — Source: merged PRs per week. Rating bands: Elite (7+/wk), High (1–7/wk), Medium (1–4/mo), Low (<1/mo)
  • Lead Time for Changes — Source: PR created → merged (median). Rating bands: Elite (<1h), High (<1d), Medium (<1wk), Low (>1wk)
  • Change Failure Rate — Coming soon (CI/CD integration)
  • Mean Time to Restore — Coming soon (CI/CD integration)
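As a sketch, the two live metrics reduce to a median over merged PRs and a threshold lookup. The `MergedPr` shape is hypothetical, and the per-week thresholds in `deployFrequencyRating` are an approximate translation of the rating bands (1–4/mo ≈ 0.25–0.93/wk):

```typescript
// Hypothetical PR record; only the created/merged timestamps matter here.
interface MergedPr {
  createdAt: Date;
  mergedAt: Date;
}

// Lead Time for Changes: median hours from PR creation to merge.
function medianLeadTimeHours(prs: MergedPr[]): number {
  const hours = prs
    .map((p) => (p.mergedAt.getTime() - p.createdAt.getTime()) / 3_600_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}

// Deployment Frequency rating from merged PRs per week (approximate bands).
function deployFrequencyRating(prsPerWeek: number): string {
  if (prsPerWeek >= 7) return "Elite";
  if (prsPerWeek >= 1) return "High";
  if (prsPerWeek >= 0.25) return "Medium"; // roughly 1–4 per month
  return "Low";
}
```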

The dashboard shows a DORA card with performance rating badges and a weekly trend chart. You can trigger a manual sync via POST /api/telemetry/github-sync.
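A manual sync can be triggered with a plain POST to that endpoint. The base URL below assumes the app.tandemu.dev host mentioned above, and authentication (e.g. a session cookie or API token) may be required — check your deployment:

```typescript
// Assumed host; adjust for your deployment. Auth headers may be required.
const BASE = "https://app.tandemu.dev";
const syncUrl = `${BASE}/api/telemetry/github-sync`;

// Fire the sync and return the HTTP status code.
async function triggerGithubSync(): Promise<number> {
  const res = await fetch(syncUrl, { method: "POST" });
  return res.status;
}
```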
