
The Tandemu Methodology

Software teams have spent decades organizing work around sprints, story points, and standup meetings. These rituals made sense when the bottleneck was coordination between humans writing code. That bottleneck has shifted.

With AI coding agents handling implementation, the limiting factor is no longer typing speed or individual productivity — it’s how fast a team can move from intent to shipped code while maintaining quality. Tandemu introduces a methodology designed for this reality.

The core idea

A developer picks a task. They work on it with an AI agent. When it’s done, they mark it finished. Everything in between — time spent, code written, friction encountered — is measured automatically.

No standups to prepare. No timesheets to fill. No story points to estimate. The work itself generates the signal.

/morning → pick a task → work with AI → /finish → metrics are captured

This is not a replacement for how your team organizes work. It’s a layer on top that captures what actually happened, regardless of whether you use Scrum, Kanban, or no framework at all.

Principles

One task per context

Each task runs in its own isolated context — a git worktree with a dedicated branch. Developers can have multiple tasks active simultaneously, across different repos or even within the same repo. Starting a new task doesn’t require finishing the current one; /pause snapshots progress and /morning spins up a fresh worktree. This keeps tasks measurable — each has a clear start, a known set of changes, and accurate cycle time — without forcing single-threaded work.
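The mechanics above can be sketched with plain git. The helper below is a hypothetical stand-in for what a /morning-style command might run under the hood; the function name and the sibling-directory layout are illustrative assumptions, not Tandemu's actual CLI or implementation:

```python
import subprocess
from pathlib import Path

def start_task(repo: Path, task_id: str) -> Path:
    """Create an isolated worktree with a dedicated branch for one task.

    Sketch only: start_task and the path layout are assumptions for
    illustration, not Tandemu's actual commands.
    """
    # Place the worktree next to the main checkout, named after the task.
    worktree = repo.parent / f"{repo.name}-{task_id}"
    # One worktree + one dedicated branch per task keeps each task's
    # diff and cycle time unambiguous.
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add",
         "-b", f"task/{task_id}", str(worktree)],
        check=True,
    )
    return worktree
```

Because each task lives in its own directory and branch, pausing one task and starting another is just a directory switch; cleaning up at finish time is `git worktree remove`.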

The task is the unit of delivery

Traditional frameworks measure velocity in story points or sprint completions. Tandemu measures at the task level. Each completed task is a unit of delivered work — with a known duration, a known set of code changes, and a known ratio of AI-generated vs manually written lines.

Tandemu approximates two of the four DORA metrics using task completion data: deployment frequency (task completion rate) and lead time (wall-clock time from /morning to /finish). This gives teams a starting point without CI/CD integration, though the numbers aren’t directly comparable to traditional DORA benchmarks.
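As a rough illustration, both approximations fall out of the task records directly. The record shape and field names below are assumed for the sketch, not Tandemu's actual schema:

```python
from datetime import datetime, timedelta

# Hypothetical task records: each task has a /morning start timestamp
# and a /finish end timestamp (illustrative data, not real output).
tasks = [
    {"id": "T-1", "started": datetime(2024, 5, 1, 9, 0),  "finished": datetime(2024, 5, 1, 15, 30)},
    {"id": "T-2", "started": datetime(2024, 5, 2, 10, 0), "finished": datetime(2024, 5, 3, 12, 0)},
    {"id": "T-3", "started": datetime(2024, 5, 3, 9, 0),  "finished": datetime(2024, 5, 3, 17, 0)},
]

window_days = 7

# Deployment frequency ~ completed tasks per day over the window.
deploy_freq = len(tasks) / window_days

# Lead time ~ mean wall-clock time from /morning to /finish.
lead_times = [t["finished"] - t["started"] for t in tasks]
mean_lead = sum(lead_times, timedelta()) / len(lead_times)

print(f"deployment frequency: {deploy_freq:.2f} tasks/day")  # 0.43 tasks/day
print(f"mean lead time: {mean_lead}")                        # 13:30:00
```

Note that "lead time" here starts at /morning, not at commit or merge, which is one reason these numbers aren't directly comparable to traditional DORA benchmarks.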

AI attribution is built in

Tandemu uses two tiers of AI attribution. The primary method uses native OpenTelemetry data from Claude Code, which provides per-file AI line counts — the most accurate measurement available. When OTEL data isn't available, it falls back to Co-Authored-By: Claude commit tags with proportional attribution by commit ratio. Either way, when a task is finished, Tandemu reports how much code was AI-generated vs manually written — per file, per commit, per task (exact with OTEL data, estimated under the fallback).

Observability without surveillance

Tandemu captures session duration, code metrics, and friction events from Claude Code’s native telemetry. It does not record keystrokes, screen activity, or idle time. The data flows through standard OpenTelemetry, so it’s auditable and transparent.

Developers see the same data their leads see. There’s no hidden dashboard.
