AI use is widespread but norms are unclear
People use AI heavily, but the team lacks shared rules about where it is safe, expected, or dangerous.
- Where you see this
- engineering teams adopting copilots
- AI-heavy documentation or support workflows
- mixed-seniority teams
- Not necessarily a problem when
- the team is in an explicit short experimentation phase and is actively turning observations into norms
- Often mistaken for
- tool ubiquity means healthy adoption maturity
- Time horizon
- near-term
- Best placed to act
- engineering lead
- AI policy owner
- manager
The signal
What you would actually notice
Quality, authorship, review depth, and risk exposure start to vary by individual habit rather than team design.
Field observation
Some people use AI constantly, some avoid it, and nobody can clearly explain the team’s expectations.
Also observed
- "Everyone uses it differently."
- "I did not know we were supposed to disclose AI-generated sections."
- "Our review bar depends on who wrote it."
Primary reading
What it usually indicates
Most likely underlying patterns when this signal shows up. Not a diagnosis, a starting hypothesis.
Usually indicates
- immature AI adoption
- weak leadership on usage boundaries
- social rather than policy-based tool norms
Not necessarily a problem when
Contexts where this signal is expected and does not indicate a deeper issue.
- the team is in an explicit short experimentation phase and is actively turning observations into norms
Stakes
Why it matters
Quality, authorship, review depth, and risk exposure start to vary by individual habit rather than team design.
Heuristic
Broad tool usage without shared norms creates invisible quality variance.
Inspection
What to check next
Deliberate steps to confirm or disconfirm the primary reading above. Not a checklist. An order of inspection.
- whether the team has any explicit, written norms on AI use
- how reviewers actually handle AI-assisted changes
- concrete incident examples tied to AI usage
Diagnostic questions
Questions to ask the team, or yourself, before concluding anything.
- Where do we allow AI use and why?
- What work still requires direct human authorship or deeper review?
- How do we review AI-assisted changes differently?
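One way to move disclosure from a social norm to a checkable rule is a trivial automated gate on pull-request descriptions. This is a minimal sketch, not from the source: the `AI-assisted:` marker convention and the function name are hypothetical, and a real team would pick its own marker and wire the check into CI.

```python
import re

# Hypothetical disclosure marker a team might require in every PR description,
# e.g. "AI-assisted: yes (copilot, docs section)" or "AI-assisted: no".
# The marker text is illustrative, not a standard.
DISCLOSURE_RE = re.compile(r"^AI-assisted:\s*(yes|no)\b", re.IGNORECASE | re.MULTILINE)

def has_ai_disclosure(pr_description: str) -> bool:
    """Return True if the PR description contains an explicit AI-use disclosure."""
    return bool(DISCLOSURE_RE.search(pr_description))

if __name__ == "__main__":
    print(has_ai_disclosure("Refactor auth flow.\nAI-assisted: yes (copilot)"))  # True
    print(has_ai_disclosure("Refactor auth flow."))                              # False
```

The point is not the regex; it is that once disclosure is machine-checkable, the review bar no longer depends on which reviewer happens to ask.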
Progression
Under the signal
Where this pattern tends to come from, what sustains it, and where it goes if nothing changes.
Leading indicators
What tends to show up first.
- different reviewers expect different standards
- AI use is disclosed inconsistently
- mistakes reveal uneven assumptions about safe use
Common root causes
What is usually sitting under the signal.
- adoption speed outrunning governance
- novelty bias
- lack of explicit local policy
Likely consequences
What happens if nothing changes.
- inconsistent quality
- conflict in reviews
- hidden risk concentration
Look-alikes
Not what it looks like
Patterns that can be mistaken for this signal, and 'fix' attempts that make it worse.
- tool ubiquity means healthy adoption maturity
Anti-patterns when responding
Responses that feel sensible and usually make the underlying pattern worse.
- letting each engineer invent their own AI safety model
- pretending everyone is already aligned because usage is common
Context
Context and ownership
Where this signal surfaces, who sees it first, who can actually act, and how much runway there usually is before escalation.
Where it shows up
- engineering teams adopting copilots
- AI-heavy documentation or support workflows
- mixed-seniority teams
Who sees it first
Before it escalates.
- reviewers
- engineering manager
- staff engineers
Who can move on it
Not always the same as who notices it.
- engineering lead
- AI policy owner
- manager
Time horizon
How much runway there usually is before the signal hardens into the underlying pattern.
near-term
AI impact
AI effects on this signal
How AI-assisted and AI-driven workflows tend to amplify or hide this signal.
AI amplifies
Ways AI tooling tends to make this signal louder or more common.
- This red flag is itself AI-driven: broader adoption makes it more common by default.
AI masks
Ways AI tooling tends to hide this signal, so it keeps growing under the surface.
- High output and clean style can make uneven usage quality harder to notice.
AI synthesis
Different team members operate under incompatible assumptions about AI disclosure, verification, and acceptable use.
Relationships
Connected signals
Related failure modes, decisions behind the signal, response playbooks, and neighboring red flags.