Failure Modes
Named failure patterns that recur in software engineering: how they start, how they escalate, what they look like at each stage, and what good responses look like.
31 entries across 6 classes.
Classes
- planning
- people
- technical
- process
- leadership
- ai
The catalog is grouped by where the pattern lives: planning, people, technical, process, leadership, AI-specific. Each entry carries a severity rating shown two ways: as a five-step bar on the card and as the card’s own weight. The fill darkens through five grays, from paper at low to near-black at critical. Color is reserved for category identity; severity reads purely as gravity. Scan the grid for weight first, then read the names.
Severity key
- low
- medium
- medium-high
- high
- critical
Each chip in this key uses the same fill as the card on the grid.
Frequency
How often this pattern actually shows up in practice: from rare one-offs to near-universal.
- rare
- occasional
- common
- very common
- universal
increasing: not a point on the scale but a trend flag, marking patterns whose prevalence is rising (often AI-era).
Recovery
How hard it is to climb back out once you are in it: tactical fix vs. structural teardown.
- easy
- medium
- medium-hard
- hard
- very hard
Confidence
How sure we are the pattern is real and consistent: provisional vs. repeatedly observed.
- low
- medium
- medium-high
- high
Planning
2 entries

People
6 entries

The Hero Trap
One person becomes the informal system of record for critical knowledge, decisions, and rescue work.
The Consensus Trap
Decision-making slows or stalls as the team seeks broad agreement that never fully arrives.
Ownership Drift
Responsibility for systems, services, or decisions becomes unclear over time as teams and structures evolve.
The Quiet Quitter Team
A team stops raising risks, pushing back on decisions, or flagging problems, appearing harmonious while actually disengaged.
Promotion-Driven Architecture
Technical decisions are made primarily to create visible impact for career advancement rather than product or engineering need.
Discovery Theater
User research and discovery activities produce artifacts but do not meaningfully change decisions.
Technical
6 entries

Abstraction Addiction
The system grows more layers, indirection, and generic structure than current reality actually demands.
Platform Before Product
Internal platform investment grows significantly beyond proven user or product need.
Premature Scaling
Teams design and build for scale that has not arrived, creating unnecessary complexity before product-market fit is established.
Interface Contract Neglect
APIs, events, and data contracts between teams degrade silently as systems evolve without formal contract management.
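One lightweight counter-measure is a consumer-side contract test: the consuming team pins the payload shape it depends on, so a silent producer change breaks a build instead of production. A minimal sketch, assuming a hypothetical `order.created` event and the `jsonschema` library:

```python
# Consumer-side contract test: pin the fields this team relies on.
# The "order.created" event and its fields are hypothetical.
from jsonschema import validate  # pip install jsonschema

ORDER_CREATED_CONTRACT = {
    "type": "object",
    "required": ["order_id", "amount_cents", "currency"],
    "properties": {
        "order_id": {"type": "string"},
        "amount_cents": {"type": "integer"},
        "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},
    },
}

def test_order_created_still_matches_contract():
    # In a real pipeline this payload would come from the producer's
    # schema registry or a recorded sample, not be inlined here.
    sample = {"order_id": "o-123", "amount_cents": 4200, "currency": "USD"}
    validate(instance=sample, schema=ORDER_CREATED_CONTRACT)  # raises on drift
```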
Migration Debt
A planned migration starts well but stalls mid-execution, leaving the system permanently split across old and new states.
Test Theater
A team has high coverage numbers and a passing CI pipeline, but tests that do not catch real regressions.
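The gap is easiest to see side by side. In this sketch, built around a hypothetical `apply_discount` function, both tests execute the code and raise coverage equally, but only the second can ever fail:

```python
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_theater():
    apply_discount(100.0, 10.0)  # runs the code, asserts nothing

def test_apply_discount_real():
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(100.0, 0.0) == 100.0  # boundary the theater test never checks
```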
Process
2 entries

Leadership
6 entries

The Invisible Deadline
A date exists socially or politically, but not explicitly enough for the team to manage the trade-offs honestly.
Stakeholder Capture
A team's direction gets distorted by one loud stakeholder's agenda at the expense of broader product coherence.
Local Optimization
One team improves its own metric while creating or worsening bottlenecks in the wider system.
Metric Myopia
Teams overvalue what is measurable and systematically underweight what actually matters.
The Feature Factory Trap
A team ships features continuously but never pauses to measure whether any of them worked, prioritizing delivery speed over learning.
Scope Negotiation Theater
Scope negotiation processes exist and are followed, but the real scope is never actually reduced; only the formal acknowledgment of it changes.
AI
9 entries

Synthetic Velocity
Output volume rises sharply while true understanding, maintainability, and durable progress do not.
Silent Model Drift
Model behavior changes materially in production before the organization notices or responds effectively.
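One common guard is a small frozen probe set re-scored on a schedule against baseline outputs captured at launch. A minimal sketch; the probes, the threshold, and the `call_model` hook are illustrative stand-ins for your own stack:

```python
# Re-score fixed probes periodically; alert when agreement with the
# launch baseline drops, before users notice the change.
from typing import Callable

PROBES = [
    ("Cancel my subscription", "cancellation"),
    ("Where is my refund?", "refund_status"),
    ("Change my shipping address", "account_update"),
]

def drift_check(call_model: Callable[[str], str], threshold: float = 0.95) -> float:
    agree = sum(call_model(text) == expected for text, expected in PROBES)
    rate = agree / len(PROBES)
    if rate < threshold:
        raise RuntimeError(f"probe agreement {rate:.0%} fell below {threshold:.0%}")
    return rate
```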
Autocomplete Architecture
Teams accept AI-suggested structures faster than they understand or own them, embedding design decisions nobody made consciously.
The Benchmark Mirage
Model selection or evaluation is guided by benchmark performance that does not reflect real production behavior.
RAG Without Ground Truth
A retrieval-augmented system is built and deployed before source quality, citation reliability, and answer validation are established.
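One slice of the missing validation can be made mechanical: checking that every span an answer cites actually occurs in a retrieved source before it ships. A minimal sketch with illustrative function and field names:

```python
# Reject answers whose citations do not appear verbatim in any source.
# This is only one piece of ground truth, but it is a cheap, automatable one.

def citations_hold(quoted_spans: list[str], retrieved_sources: list[str]) -> bool:
    """True only if every quoted span appears verbatim in some source."""
    return all(
        any(span in source for source in retrieved_sources)
        for span in quoted_spans
    )

assert citations_hold(["30-day return window"],
                      ["Our policy: a 30-day return window applies."])
assert not citations_hold(["90-day warranty"],
                          ["Our policy: a 30-day return window applies."])
```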
Prompt Ops Chaos
Prompts, model settings, and hidden instructions change without version control, making system behavior unpredictable and undebuggable.
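The baseline fix is treating prompts and settings as versioned artifacts. A minimal sketch, assuming the config lives in the repo and each request logs a fingerprint of it; the model name and settings are placeholders:

```python
# Prompts and settings live in one version-controlled structure; every
# request logs a content hash so behavior changes trace back to a commit.
import hashlib
import json
import logging

PROMPT_CONFIG = {
    "system_prompt": "You are a support assistant. Answer from the docs only.",
    "model": "example-model-v1",  # placeholder identifier
    "temperature": 0.2,
}

def config_fingerprint(config: dict) -> str:
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

def answer(question: str) -> str:
    logging.info("prompt_config=%s", config_fingerprint(PROMPT_CONFIG))
    # ... call the model with PROMPT_CONFIG and the question ...
    return "stubbed response"
```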
Eval Goodhart
Internal evaluation sets become optimization targets rather than honest capability measures, producing models or prompts that score well but behave poorly in production.
Context Window Hoarding
Teams fill context windows maximally with documents, history, and examples without understanding what actually helps, leading to unpredictable behavior, high cost, and debugging nightmares.
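The counter-move is an explicit budget rather than "whatever fits in the window". A minimal sketch; the ranking, the budget, and the crude length-based token estimate are all assumptions to replace with your own retrieval scores and tokenizer:

```python
# Assemble context highest-relevance first, stopping at a hard token budget.
# len() // 4 is a rough token estimate; swap in a real tokenizer for production.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def build_context(chunks: list[tuple[float, str]], budget: int = 2000) -> str:
    """chunks: (relevance_score, text) pairs from retrieval."""
    picked, used = [], 0
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost > budget:
            break  # an explicit ceiling, not "everything we could find"
        picked.append(text)
        used += cost
    return "\n\n".join(picked)
```

Making the budget a named constant also makes it a reviewable decision: raising it becomes a diff, not a drift.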
Human-in-the-Loop Decay
Human review steps designed to catch AI errors are gradually skipped as volume increases and confidence grows, removing oversight before the risk does.
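One defense is encoding the review rate as policy that fails loudly rather than a habit that fades. A minimal sketch; the floor, tolerance, and queue names are illustrative:

```python
# Review sampling is enforced in code, and a periodic check verifies the
# floor actually held in production instead of quietly decaying.
import random

REVIEW_RATE = 0.10  # policy floor; lowering it should require a deliberate change

def route(ai_output: str) -> str:
    """Decide which queue an AI output goes to."""
    if random.random() < REVIEW_RATE:
        return "human_review"
    return "auto_ship"

def assert_review_floor(reviewed: int, total: int) -> None:
    # Fail loudly if the observed rate fell well below the floor.
    if total and reviewed / total < REVIEW_RATE * 0.9:
        raise RuntimeError(
            f"review rate {reviewed / total:.1%} fell below the {REVIEW_RATE:.0%} floor"
        )
```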