The Hard Parts.dev
RF-18 · Team · AI Quality · Red Flags

AI use is widespread but norms are unclear

People use AI heavily, but the team lacks shared rules about where it is safe, expected, or dangerous.

Severity
medium-high
Frequency
increasing
First noticed by
reviewers · engineering manager · staff engineers
Detectability
easy-to-normalize
Confidence
high
At a glance · RF-18
Where you see this

  • engineering teams adopting copilots
  • AI-heavy documentation or support workflows
  • mixed-seniority teams

Not necessarily a problem when
the team is in an explicit short experimentation phase and is actively turning observations into norms
Often mistaken for
healthy adoption maturity, inferred from tool ubiquity
Time horizon
near-term
Best placed to act

  • engineering lead
  • AI policy owner
  • manager

The signal

What you would actually notice

Quality, authorship, review depth, and risk exposure start to vary by individual habit rather than team design.

Field observation

Some people use AI constantly, some avoid it, and nobody can clearly explain the team’s expectations.

Also observed

  • “Everyone uses it differently.”
  • “I did not know we were supposed to disclose AI-generated sections.”
  • “Our review bar depends on who wrote it.”
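
The second quote above points at the cheapest gap to close: disclosure. A minimal sketch of making it mechanical, assuming a hypothetical AI-Assisted: yes|no commit trailer the team would have to agree on first. Nothing below detects AI use; it only makes an agreed norm checkable.

#!/usr/bin/env python3
"""Flag commits that omit an (assumed) AI-disclosure trailer.

Hypothetical convention: every commit message ends with a trailer like
    AI-Assisted: yes
This script cannot detect AI use; it only reports commits where the
agreed disclosure is missing, so a reviewer knows to ask.
"""
import subprocess
import sys

TRAILER = "AI-Assisted:"  # hypothetical team convention, not a git built-in


def undisclosed(rev_range: str) -> list[str]:
    """Return short hashes in rev_range whose messages lack the trailer."""
    log = subprocess.run(
        # %x00 and %x01 emit NUL / SOH separators, so commit bodies can
        # contain blank lines without breaking the parse.
        ["git", "log", "--format=%h%x00%B%x01", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    missing = []
    for entry in log.split("\x01"):
        entry = entry.strip()
        if not entry:
            continue
        sha, _, body = entry.partition("\x00")
        if TRAILER not in body:
            missing.append(sha)
    return missing


if __name__ == "__main__":
    rev = sys.argv[1] if len(sys.argv) > 1 else "origin/main..HEAD"
    for sha in undisclosed(rev):
        print(f"{sha}: no '{TRAILER}' trailer; ask the author before reviewing")

Run locally or in CI against the branch under review, it turns “I did not know we were supposed to disclose” into a question asked at review time rather than discovered later.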

Primary reading

What it usually indicates

Most likely underlying patterns when this signal shows up: not a diagnosis, but a starting hypothesis.

  • immature AI adoption
  • weak leadership on usage boundaries
  • social rather than policy-based tool norms

Stakes

Why it matters

Left unaddressed, quality, authorship, review depth, and risk exposure come to depend on individual habit rather than team design.

Inspection

What to check next

Deliberate steps to confirm or disconfirm the primary reading above. Not a checklist. An order of inspection.

  1. team norms
  2. review practices
  3. incident examples tied to AI usage

Diagnostic questions

Questions to ask the team, or yourself, before concluding anything. The sketch after this list shows one way to pin the answers down.

  1. Where do we allow AI use and why?
  2. What work still requires direct human authorship or deeper review?
  3. How do we review AI-assisted changes differently?
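
If the answers to these three questions exist only in people's heads, writing them down can be as small as a checked-in table that authors and reviewers both read. A minimal sketch in Python; the categories, review levels, and every entry are hypothetical placeholders for whatever the team actually decides.

#!/usr/bin/env python3
"""A team AI-usage policy as data rather than folklore.

Each entry answers the three diagnostic questions at once: whether AI
use is allowed for that kind of work, and what review it then gets.
All categories and levels here are illustrative, not a recommendation.
"""
from enum import Enum


class Review(Enum):
    NORMAL = "normal review"
    DEEP = "line-by-line review, tests required"
    HUMAN_ONLY = "direct human authorship, deepest review"


# category -> (AI assistance allowed?, required review level)
POLICY: dict[str, tuple[bool, Review]] = {
    "boilerplate and scaffolding": (True, Review.NORMAL),
    "production business logic": (True, Review.DEEP),
    "security-sensitive code": (False, Review.HUMAN_ONLY),
    "public incident writeups": (False, Review.HUMAN_ONLY),
}


def expectations(category: str) -> str:
    """Render one row of the policy as a sentence a reviewer can cite."""
    allowed, review = POLICY[category]
    use = "AI assistance allowed" if allowed else "no AI assistance"
    return f"{category}: {use}; {review.value}"


if __name__ == "__main__":
    for category in POLICY:
        print(expectations(category))

The format matters less than the effect: “our review bar depends on who wrote it” stops being true once the bar is looked up in a shared table instead of inferred from the author.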

Progression

Under the signal

Where this pattern tends to come from, what keeps it in place, and where it goes if nothing changes.

Leading indicators

What tends to show up first.

  • different reviewers expect different standards
  • AI use is disclosed inconsistently
  • mistakes reveal uneven assumptions about safe use

Common root causes

What is usually sitting under the signal.

  • adoption speed outrunning governance
  • novelty bias
  • lack of explicit local policy

Likely consequences

What happens if nothing changes.

  • inconsistent quality
  • conflict in reviews
  • hidden risk concentration

Look-alikes

Not what it looks like

Patterns that can be mistaken for this signal, and 'fix' attempts that make it worse.

False friends

Things the signal is often confused with, but isn't.

  • healthy adoption maturity, inferred from tool ubiquity

Anti-patterns when responding

Responses that feel sensible and usually make the underlying pattern worse.

  • letting each engineer invent their own AI safety model
  • pretending everyone is already aligned because usage is common

Context

Context and ownership

Where this signal surfaces, who sees it first, who can actually act, and how much runway there usually is before escalation.

Common contexts

Where it shows up

  • engineering teams adopting copilots
  • AI-heavy documentation or support workflows
  • mixed-seniority teams

Most likely to notice

Who sees it first

Before it escalates.

  • reviewers
  • engineering manager
  • staff engineers

Best placed to act

Who can move on it

Not always the same as who notices it.

  • engineering lead
  • AI policy owner
  • manager
Time horizon

near-term

How much runway there usually is before the signal hardens into the underlying pattern.

AI impact

AI effects on this signal

How AI-assisted and AI-driven workflows tend to amplify or hide this signal.

AI amplifies

Ways AI tooling tends to make this signal louder or more common.

  • This red flag is itself AI-driven.

AI masks

Ways AI tooling tends to hide this signal, so it keeps growing under the surface.

  • High output and clean style can make uneven usage quality harder to notice.

Relationships

Connected signals

Related failure modes, decisions behind the signal, response playbooks, and neighboring red flags.