Nobody can explain this module simply
A module performs important work, but nobody can describe its purpose in plain language without hand-waving.
- Where you see this
- legacy systems
- shared service layers
- heavily abstracted internal frameworks
- Not necessarily a problem when
- the module is genuinely low-level and highly specialized, but at least one owner can still explain it clearly
- Often mistaken for
- "it is complex, so nobody can explain it simply"
- Time horizon
- medium-term
- Best placed to act
- tech lead
- architect
- module owner
The signal
What you would actually notice
Field observation
Explanations become circular, jargon-heavy, or depend on historical trivia rather than current purpose.
Also observed
- "It handles orchestration, transformation, policy, and fallback logic."
- "Ask Sam, they know how it really works."
- "We should not touch that right now."
Primary reading
What it usually indicates
Most likely underlying patterns when this signal shows up. Not a diagnosis, a starting hypothesis.
Usually indicates
- unclear domain boundary
- responsibility creep
- historic layering without cleanup
- knowledge concentrated in a few people
Not necessarily a problem when
Contexts where this signal is expected and does not indicate a deeper issue.
- the module is genuinely low-level and highly specialized, but at least one owner can still explain it clearly
- the audience lacks domain context but documentation and ownership are strong
Stakes
Why it matters
Poor conceptual clarity usually means poor ownership, weak boundaries, and dangerous change behavior.
Heuristic
If a module cannot be explained simply, its boundary is probably already decaying.
Inspection
What to check next
Deliberate steps to confirm or disconfirm the primary reading above. Not a checklist. An order of inspection.
- dependency graph
- recent change history
- module-level documentation
- ownership map
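A couple of these checks can be partially automated. Below is a minimal sketch, assuming a Python codebase, that extracts a module's top-level imports as a rough fan-out measure; the function name and sample source are illustrative, not part of any standard tool. A module whose import set spans many unrelated areas is one concrete hint of responsibility creep.

```python
import ast


def module_imports(source: str) -> set[str]:
    """Return the top-level package names a module imports.

    A wide, heterogeneous import set is a rough proxy for
    responsibility creep: the module depends on many unrelated areas.
    """
    tree = ast.parse(source)
    deps: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                deps.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps


# Illustrative source text; in practice, read the module file.
sample = "import os\nimport requests\nfrom billing.invoices import render\n"
print(sorted(module_imports(sample)))  # → ['billing', 'os', 'requests']
```

The number alone proves nothing; it is a prompt for the diagnostic questions below, not a verdict.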
Diagnostic questions
Questions to ask the team, or yourself, before concluding anything.
- What single job does this module do?
- What would clearly not belong here?
- Who is accountable for its behavior?
Progression
Under the signal
Where this pattern tends to come from, what's holding it up, and where it goes if nothing changes.
Leading indicators
What tends to show up first.
- different people describe the module differently
- ownership boundaries sound vague
- bug fixes in the module require multiple experts
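The last two indicators can be made roughly measurable, assuming commit authorship is an acceptable proxy for expertise. A minimal sketch: the author list would come from something like `git log --format=%an -- <module path>`, and the function below computes how concentrated that history is in one person. The function name and sample data are illustrative.

```python
from collections import Counter


def top_author_share(commit_authors: list[str]) -> float:
    """Fraction of commits made by the single most active author.

    Values near 1.0 suggest hero ownership: one person carries
    most of the working knowledge about the module.
    """
    if not commit_authors:
        return 0.0
    counts = Counter(commit_authors)
    return counts.most_common(1)[0][1] / len(commit_authors)


# One author name per commit touching the module (sample data).
history = ["sam", "sam", "sam", "alex", "sam"]
print(top_author_share(history))  # → 0.8
```

A high share is a leading indicator, not proof; a stable, well-documented module with one careful owner can score high and still be healthy.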
Common root causes
What is usually sitting under the signal.
- scope creep
- architecture drift
- no boundary reviews
- hero ownership
Likely consequences
What happens if nothing changes.
- change fear
- slow onboarding
- bug-prone edits
- dependency fog
Look-alikes
Not what it looks like
Patterns that can be mistaken for this signal, and 'fix' attempts that make it worse.
- "it is complex, so nobody can explain it simply"
- "the code works, so the conceptual model must be fine"
Anti-patterns when responding
Responses that feel sensible and usually make the underlying pattern worse.
- adding more generic naming instead of clarifying boundaries
- treating confusion as a documentation-only problem
Context
Context and ownership
Where this signal surfaces, who sees it first, who can actually act, and how much runway there usually is before escalation.
Where it shows up
- legacy systems
- shared service layers
- heavily abstracted internal frameworks
Who sees it first
Before it escalates.
- new joiner
- tech lead
- staff engineer
Who can move on it
Not always the same as who notices it.
- tech lead
- architect
- module owner
Time horizon
medium-term
How much runway there usually is before the signal hardens into the underlying pattern.
AI impact
AI effects on this signal
How AI-assisted and AI-driven workflows tend to amplify or hide this signal.
AI amplifies
Ways AI tooling tends to make this signal louder or more common.
- AI can generate confident explanations that sound coherent while preserving underlying architectural confusion.
AI masks
Ways AI tooling tends to hide this signal, so it keeps growing under the surface.
- AI summaries can make the module sound cleaner than it really is.
AI synthesis
Generated documentation creates false confidence in a boundary nobody has actually cleaned up.
Relationships
Connected signals
Related failure modes, decisions behind the signal, response playbooks, and neighboring red flags.