The Hard Parts.dev
Engineering Playbook EP-39

Reduce review bottlenecks without lowering quality

Reduce bottlenecks by changing review shape, diff quality, ownership, and review expectations, not by silently lowering the bar or converting review into a rubber stamp.

Difficulty: medium-high
Time horizon: days to weeks for initial change, longer for review culture shift
Primary owner: tech lead
Confidence: high
At a glance (EP-39)
Situation: Code or design review is slowing delivery, but quality still matters.
Goal: Increase review throughput while preserving understanding, risk control, and team learning.
Do not use when: the real problem is weak code quality before review.
Primary owner: tech lead
Roles involved

  • tech lead
  • reviewers
  • engineering manager
  • contributors
  • quality lead (if review and testing are entangled)

Context

The situation

Deciding whether to reach for this playbook: when it fits, and when it doesn't.

Use when

Conditions where this playbook is the right tool.

  • Review turnaround time is hurting delivery
  • A few reviewers are overloaded
  • PRs are too large or too numerous
  • AI-assisted coding is increasing review volume

Stakes

Why this matters

What this playbook protects against, and why skipping or half-running it tends to be expensive.

Review bottlenecks often trigger the worst fake fix: shallow approval. The right fix is to improve what enters review, clarify what review is for, and distribute review capability intelligently.

Quality bar

What good looks like

The observable qualities of a team or system that is actually doing this well. Not just going through the motions.

Signs of the playbook done well

  • Reviews are smaller, clearer, and faster for the right reasons
  • Review depth matches change risk
  • More people can review meaningful changes safely
  • Review comments focus on behavior, risks, and design, not only style
  • Review quality remains trusted even as flow improves

Preparation

Before you start

What you need available and true before running the procedure. Skipping this is the most common reason playbooks fail.

Inputs

Material you'll want to gather first.

  • Review timing data
  • PR size and volume patterns
  • Review concentration map
  • Post-merge defect patterns
  • Team review expectations

Prerequisites

Conditions that should be true for this to work.

  • The team is willing to inspect current review behavior honestly
  • Review data or observable patterns exist
  • Owners of risky areas are known

Procedure

The procedure

Each step carries its purpose (why it exists), its actions (what you do), and its outputs (what you produce). Read the purpose. It's what keeps the step from degenerating into checklist theatre.

  1. Diagnose the bottleneck shape

    Find out whether the issue is volume, size, concentration, or unclear expectations.

    Actions

    • Analyze review turnaround, reviewer concentration, and diff size
    • Identify where reviews stall and why
    • Separate high-risk reviews from routine ones

    Outputs

    • Review bottleneck profile (see the first sketch after the procedure)
  2. Improve what enters review

    Make the unit of review more reviewable.

    Actions

    • Reduce PR size where possible
    • Require clearer context, test notes, and risk notes from authors
    • Separate mechanical changes from conceptual changes

    Outputs

    • Improved review entry standard
  3. Match review depth to change risk

    Avoid over-reviewing trivial changes and under-reviewing risky ones.

    Actions

    • Define risk tiers for common change types
    • Set expectations for routine, medium-risk, and high-risk reviews
    • Reserve scarce expert review time for changes that truly need it

    Outputs

    • Risk-based review model (see the second sketch after the procedure)
  4. Spread review capability

    Reduce concentration and improve team understanding.

    Actions

    • Identify areas where only one or two people review effectively
    • Pair on reviews for risky areas
    • Teach more engineers how to review specific domains competently

    Outputs

    • Review capability plan
  5. Audit whether speed damaged quality

    Ensure the fix stayed honest.

    Actions

    • Watch post-merge defects and review comment quality
    • Check whether faster review became shallower approval
    • Adjust thresholds if quality dropped

    Outputs

    • Review health check
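
The first sketch below supports step 1. It is a minimal diagnostic, not a tool: it assumes you can export one row per merged PR with an opened timestamp, a first-review timestamp, the main reviewer, and lines changed. The file name, column names, and the 400-line threshold are illustrative assumptions.

```python
# Minimal sketch: profile the review bottleneck shape from an exported CSV of
# merged PRs. Assumed columns (illustrative): opened_at, first_review_at
# (ISO 8601 timestamps), reviewer, lines_changed.
import csv
from collections import Counter
from datetime import datetime
from statistics import median

with open("merged_prs.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Turnaround: hours from opening to first review (volume / expectations signal).
turnaround_hours = [
    (datetime.fromisoformat(r["first_review_at"])
     - datetime.fromisoformat(r["opened_at"])).total_seconds() / 3600
    for r in rows if r["first_review_at"]
]

# Concentration: share of reviews handled by the two busiest reviewers.
per_reviewer = Counter(r["reviewer"] for r in rows if r["reviewer"])
top_two = sum(count for _, count in per_reviewer.most_common(2))
concentration = top_two / max(sum(per_reviewer.values()), 1)

# Size: median diff size and share of oversized diffs (threshold is illustrative).
sizes = [int(r["lines_changed"]) for r in rows]
oversized_share = sum(1 for s in sizes if s > 400) / max(len(sizes), 1)

print(f"median turnaround: {median(turnaround_hours):.1f} h")
print(f"top-2 reviewer concentration: {concentration:.0%}")
print(f"median diff size: {median(sizes)} lines ({oversized_share:.0%} over 400 lines)")
```

Run over a few months of history, the three numbers map directly onto the volume, concentration, and size questions in step 1 and become the skeleton of the review bottleneck profile.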

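The second sketch supports step 3. It shows one possible shape for a risk-based review model, assuming change risk can be approximated from touched paths and diff size; the path patterns, tier names, and thresholds are illustrative assumptions each team would replace using its own incident and ownership data.

```python
# Minimal sketch: map a proposed change to a review tier from its touched
# paths and diff size. Patterns, tiers, and thresholds are illustrative.
from fnmatch import fnmatch

HIGH_RISK_PATHS = ["*/auth/*", "*/payments/*", "*/migrations/*"]
ROUTINE_PATHS = ["docs/*", "*.md", "*/test_*.py"]

def review_tier(changed_paths, lines_changed):
    """Return 'deep', 'standard', or 'routine' for a proposed change."""
    if any(fnmatch(p, pat) for p in changed_paths for pat in HIGH_RISK_PATHS):
        return "deep"      # reserve scarce expert review time for these
    if changed_paths and all(
        any(fnmatch(p, pat) for pat in ROUTINE_PATHS) for p in changed_paths
    ):
        return "routine"   # lightweight review with a fast-turnaround expectation
    if lines_changed > 400:
        return "deep"      # large conceptual diffs still need depth
    return "standard"

# Example: a small docs-only change takes the routine path.
print(review_tier(["docs/review-policy.md"], lines_changed=12))  # -> routine
```

A rule like this only lowers risk if the expectations for each tier are written down, so that "routine" is a documented fast path rather than a silent downgrade.
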
Judgment

Judgment calls and pitfalls

The places where execution actually diverges: decisions that need thought, questions worth asking, and mistakes that recur regardless of good intent.

Decision points

Moments where judgment and trade-offs matter more than procedure.

  • What changes deserve deep review versus routine review?
  • Where is review concentrated for good reason versus bad habit?
  • Should review speed be improved by smaller PRs, more reviewers, or better pre-review quality first?

Questions worth asking

Prompts to use on yourself, the team, or an AI assistant while running the procedure.

  • Where is review slow for the right reason versus the wrong reason?
  • What can we change before review begins to make review faster?
  • Which review classes need real expert depth and which do not?

Common mistakes

Patterns that surface across teams running this playbook.

  • Measuring review health by speed alone
  • Lowering review depth silently instead of changing the system
  • Keeping giant PRs and asking reviewers to be faster
  • Assuming automation can replace conceptual review everywhere

Warning signs you are doing it wrong

Signals that the playbook is being executed but not landing.

  • Review comments get shorter while defect rates rise
  • The same reviewers remain overloaded despite the change
  • Authors still ship context-poor diffs into review
  • High-risk AI-generated changes are reviewed like trivial routine edits

Outcomes

Outcomes and signals

What should exist after the playbook runs, how you'll know it worked, and what to watch for over time.

Artifacts to produce

Durable outputs the playbook should leave behind.

  • Review bottleneck profile
  • Review entry standard
  • Risk-based review model
  • Review capability plan
  • Review health check

Success signals

Observable changes that mean the playbook landed.

  • Review latency drops without trust collapsing
  • More reviewers can handle meaningful areas safely
  • Comment quality remains behavior- and risk-focused
  • Post-merge surprises do not rise as review flow improves

Follow-up actions

Moves that keep the playbook's effects compounding after it finishes.

  • Refresh review tiers as architecture and team shape evolve
  • Train reviewers in under-covered domains
  • Connect review improvements with testability and ownership work

Metrics or signals to watch

Longer-horizon indicators that the underlying problem is receding.

  • Median review turnaround time
  • Reviewer concentration ratio
  • Average diff size
  • Post-merge defect rate
  • Review comment depth indicators
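
These indicators are most useful read together; the dangerous pattern is turnaround falling while post-merge defects rise. A minimal sketch of that check, assuming both are recorded per week; the weekly records below are illustrative placeholders, not real data.

```python
# Minimal sketch: flag weeks where review got faster but post-merge defects
# rose, i.e. where faster review may really be shallower approval.
# The records below are illustrative placeholders.
weekly = [
    {"week": "W18", "median_turnaround_h": 30.0, "post_merge_defects": 3},
    {"week": "W19", "median_turnaround_h": 22.0, "post_merge_defects": 3},
    {"week": "W20", "median_turnaround_h": 14.0, "post_merge_defects": 7},
]

for prev, cur in zip(weekly, weekly[1:]):
    faster = cur["median_turnaround_h"] < prev["median_turnaround_h"]
    more_defects = cur["post_merge_defects"] > prev["post_merge_defects"]
    if faster and more_defects:
        print(f"{cur['week']}: turnaround dropped but defects rose; check review depth")
```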

AI impact

AI effects on this playbook

How AI-assisted and AI-driven workflows help execution, and the ways they can make it worse.

AI can help with

Where AI tooling genuinely reduces the cost of running this playbook well.

  • Pre-review summarization of changed files and likely risk areas
  • Spotting large mechanical edits versus behavioral edits
  • Drafting author-side change summaries and checklists
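
For the second point, even a crude heuristic gives reviewers a useful first cut before any model is involved. A minimal sketch, assuming the change is available in a local git checkout; the base branch and the patterns treated as mechanical are illustrative assumptions.

```python
# Minimal sketch: split a changeset into likely-mechanical files and files
# that probably carry behavioral change, so review depth goes where behavior
# actually changes. Patterns and base branch are illustrative assumptions.
import subprocess
from fnmatch import fnmatch

MECHANICAL_PATTERNS = ["*.lock", "package-lock.json", "*.snap", "*_generated.*", "*.pb.go"]

def split_changed_files(base="origin/main"):
    changed = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    mechanical = [f for f in changed if any(fnmatch(f, p) for p in MECHANICAL_PATTERNS)]
    behavioral = [f for f in changed if f not in mechanical]
    return mechanical, behavioral

if __name__ == "__main__":
    mech, behav = split_changed_files()
    print(f"{len(mech)} likely-mechanical files, {len(behav)} files needing real review")
```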

AI can make it worse by

Distortions AI introduces that make the underlying problem harder to see.

  • Increasing diff volume faster than review depth can scale
  • Making weakly understood code look cleaner and safer
  • Encouraging shallow review when summaries replace reading

Relationships

Connected playbooks

Failure modes this playbook tends to address, decisions behind the situation, red flags that motivate running it, and neighboring playbooks.