The Hard Parts.dev
EP-08 · AI · Engineering Playbook

Create AI usage norms for a team

Create team-level norms that define how AI is used, disclosed, reviewed, challenged, and learned from, so that the team behaves intentionally rather than drifting into private habit systems.

Difficulty: medium
Time horizon: days to define, ongoing reinforcement through practice
Primary owner: engineering manager
Confidence: high
At a glance (EP-08)
Situation: AI is already being used by the team, but expectations and norms are inconsistent.
Goal: Replace implicit, person-specific AI behavior with shared norms that protect quality, trust, and collaboration.
Do not use when: AI use is still negligible.
Primary owner: engineering manager
Roles involved: engineering manager, tech lead, contributors, reviewers, security or compliance partner if needed

Context

The situation

Deciding whether to reach for this playbook: when it fits, and when it doesn't.

Use when

Conditions where this playbook is the right tool.

  • Different engineers use AI in very different ways
  • Reviewers are unsure what to expect from AI-assisted work
  • The team debates AI socially but lacks operating agreements
  • AI usage is common enough to change team behavior

Stakes

Why this matters

What this playbook protects against, and why skipping or half-running it tends to be expensive.

Unwritten AI norms create invisible inequality and quality variance. One person overuses AI, another hides it, a third avoids it entirely, and the team slowly stops sharing a common model of authorship, review, and acceptable risk.

Quality bar

What good looks like

The observable qualities of a team or system that is actually doing this well. Not just going through the motions.

Signs the playbook has been run well

  • The team can explain how AI use fits its engineering values
  • People know what should be disclosed, reviewed, or challenged differently
  • AI use does not become a source of hidden mistrust
  • New team members can learn the norms without guessing
  • The norms evolve from observed reality rather than abstract ideology

Preparation

Before you start

What you need available and true before running the procedure. Skipping this is the most common reason playbooks fail.

Inputs

Material you'll want to gather first.

  • Current AI usage patterns
  • Review friction points
  • Team values and quality expectations
  • Known wins and failures from current use
  • Security or policy constraints where relevant

Prerequisites

Conditions that should be true for this to work.

  • The team is willing to talk honestly about current use
  • Leaders will back the norms operationally
  • There is enough observed usage to norm around

Procedure

The procedure

Each step carries its purpose (why it exists), its actions (what you do), and its outputs (what you produce). Read the purpose. It's what keeps the step from degenerating into checklist theater.

  1. Make current usage visible

    Start from reality, not theory.

    Actions

    • Collect how people currently use AI across coding, docs, debugging, and planning
    • Surface where the team already feels tension or confusion
    • Identify invisible norms that already exist

    Outputs

    • Current AI usage map
  2. Define what the team values around AI use

    Ground norms in engineering principles.

    Actions

    • State what matters most: quality, explainability, speed, ownership, privacy, trust
    • Identify where AI use supports those values and where it can distort them
    • Turn values into practical behavioral statements

    Outputs

    • AI norms principles
  3. Set norms for usage, disclosure, and review

    Translate principle into day-to-day behavior.

    Actions

    • Decide when AI assistance should be disclosed or made visible
    • Clarify expectations for reviewing, testing, and explaining AI-assisted work
    • Define unacceptable behavior, such as merging generated code without understanding it

    Outputs

    • Team AI norms
    • Review and disclosure expectations
  4. Teach the norms with examples

    Make the norms easier to apply than ignore.

    Actions

    • Show concrete good, bad, and gray-area examples
    • Turn repeated team questions into FAQ-style guidance
    • Align onboarding and review templates with the norms (see the disclosure-check sketch after this procedure)

    Outputs

    • AI norms examples pack
  5. Revisit the norms as practice changes

    Keep them alive and relevant.

    Actions

    • Review recurring confusion, near-misses, and successes
    • Update the norms when tools or risks change materially
    • Prevent drift between stated norms and actual team behavior

    Outputs

    • AI norms review cycle
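
One way to make step 3's disclosure expectation and step 4's template alignment stick is to let CI check the PR template itself. Below is a minimal sketch, assuming the team's pull request template contains a section headed "## AI assistance" that authors fill in; the heading name, the comment-stripping, and the CI wiring are assumptions to adapt, not a prescribed implementation.

```python
import re
import sys

# Hypothetical heading; match it to whatever your PR template actually uses.
DISCLOSURE_HEADING = re.compile(r"^##\s*AI assistance\s*$", re.MULTILINE)
TEMPLATE_COMMENT = re.compile(r"<!--.*?-->", re.DOTALL)

def has_ai_disclosure(body: str) -> bool:
    """Return True if the disclosure section exists and was actually filled in."""
    match = DISCLOSURE_HEADING.search(body)
    if not match:
        return False
    # Everything between this heading and the next "## " heading (or the end).
    section = body[match.end():].split("\n## ", 1)[0]
    # Drop leftover template comments, then require some real content.
    return bool(TEMPLATE_COMMENT.sub("", section).strip())

if __name__ == "__main__":
    if not has_ai_disclosure(sys.stdin.read()):
        print("Missing or empty 'AI assistance' section in the PR description.")
        sys.exit(1)
```

Run in CI with the PR description piped to stdin, this fails the check when the disclosure section is missing or left blank, which turns the norm from a request into a default.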

Judgment

Judgment calls and pitfalls

The places where execution actually diverges: decisions that need thought, questions worth asking, and mistakes that recur regardless of good intent.

Decision points

Moments where judgment and trade-offs matter more than procedure.

  • What AI use needs disclosure in this team?
  • What behaviors are acceptable but require caution?
  • What should be explicitly disallowed or escalated?
  • How will the team know the norms are helping rather than becoming theater?

Questions worth asking

Prompts to use on yourself, the team, or an AI assistant while running the procedure.

  • What does responsible AI use mean in this team specifically?
  • What AI use should be visible to reviewers or collaborators?
  • Which current behaviors are already causing confusion or mistrust?

Common mistakes

Patterns that surface across teams running this playbook.

  • Writing generic principles with no operational meaning
  • Ignoring current usage reality and imposing abstract ideals
  • Assuming everyone shares the same definition of responsible AI use
  • Treating the norms as static while tooling changes rapidly

Warning signs you are doing it wrong

Signals that the playbook is being executed but not landing.

  • The team still handles AI use through private judgment only
  • Reviewers and authors have incompatible expectations
  • People hide AI usage because norms feel punitive or performative
  • The team cites norms, but repeated AI-related confusion persists

Outcomes

Outcomes and signals

What should exist after the playbook runs, how you'll know it worked, and what to watch for over time.

Artifacts to produce

Durable outputs the playbook should leave behind.

  • Current AI usage map
  • AI norms principles
  • Team AI norms
  • Review and disclosure expectations
  • AI norms examples pack
  • AI norms review cycle

Success signals

Observable changes that mean the playbook landed.

  • AI-related review conflicts decline
  • Contributors understand what responsible use looks like in this team
  • New engineers learn the norms quickly
  • The team can revise norms based on evidence instead of ideology

Follow-up actions

Moves that keep the playbook's effects compounding after it finishes.

  • Fold norms into onboarding, PR templates, and team rituals
  • Review incidents or near-misses involving AI against the norms
  • Connect norm failures to clearer zone design or review strengthening

Metrics or signals to watch

Longer-horizon indicators that the underlying problem is receding.

  • AI-related review conflict frequency
  • Team clarity on disclosure expectations
  • Number of repeated AI gray-area questions
  • Norm adoption in onboarding and review workflows (see the sketch below)
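
For the adoption signal, a small sketch of the arithmetic: the share of merged PRs per month that carried a completed disclosure. The CSV layout (`merged_month`, `has_disclosure`) is a hypothetical export, for instance produced by running the disclosure check above in CI; substitute whatever your PR tooling actually records.

```python
import csv
from collections import defaultdict

def adoption_by_month(path: str) -> dict[str, float]:
    """Share of merged PRs per month that carried a completed disclosure."""
    totals: defaultdict[str, int] = defaultdict(int)
    disclosed: defaultdict[str, int] = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["merged_month"]  # e.g. "2025-06"
            totals[month] += 1
            if row["has_disclosure"].lower() == "true":
                disclosed[month] += 1
    return {m: disclosed[m] / totals[m] for m in sorted(totals)}

if __name__ == "__main__":
    for month, rate in adoption_by_month("pr_disclosures.csv").items():
        print(f"{month}: {rate:.0%} of merged PRs disclosed AI assistance")
```

A flat or falling rate alongside rising AI use is exactly the drift between stated norms and actual behavior that step 5 exists to catch.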

AI impact

AI effects on this playbook

How AI-assisted and AI-driven workflows help execution, and the ways they can make it worse.

AI can help with

Where AI tooling genuinely reduces the cost of running this playbook well.

  • Summarizing current usage patterns and recurring questions
  • Drafting examples and FAQ language
  • Turning team discussions into clearer norm statements

AI can make worse by

Distortions AI introduces that make the underlying problem harder to see.

  • Producing polished but generic norms that do not affect behavior
  • Masking unresolved disagreement behind pretty language
  • Encouraging the team to outsource norm-setting to the tool itself

Relationships

Connected playbooks

Failure modes this playbook tends to address, decisions behind the situation, red flags that motivate running it, and neighboring playbooks.