The Hard Parts.dev
EP-14 · Architecture · Engineering Playbook

Audit a shared layer for accidental complexity

Audit the shared layer by testing whether it serves real repeated needs, has clear consumers, and simplifies product work, or whether it has become a prestige dumping ground for abstracted uncertainty.

Difficulty
medium-high
Time horizon
days to weeks for audit, longer for simplification
Primary owner
platform or shared-layer owner
Confidence
high
At a glance · EP-14
Situation
A common/shared layer keeps growing and may be hiding speculative abstraction.
Goal
Separate real leverage from speculative reuse and reduce shared-layer complexity that is costing teams more than it helps them.
Do not use when
the shared layer is small, stable, and has a few clear proven use cases
Primary owner
platform or shared-layer owner
Roles involved

  • platform or shared-layer owner
  • consumer teams
  • architect
  • tech leads
  • engineering manager where capacity or authority is needed

Context

The situation

Deciding whether to reach for this playbook: when it fits, and when it doesn't.

Use when

Conditions where this playbook is the right tool.

  • Common or shared packages keep expanding
  • Teams complain the shared layer is hard to understand or risky to change
  • Consumer demand is vague while shared output is high
  • The layer has many generic helpers, flags, or abstractions

Stakes

Why this matters

What this playbook protects against, and why skipping or half-running it tends to be expensive.

Shared layers often accumulate accidental power. They feel central and sophisticated, but can quietly increase coupling, hide ownership, and slow product teams that did not ask for the abstraction in the first place.

Quality bar

What good looks like

The observable qualities of a team or system that is actually doing this well. Not just going through the motions.

Signs of the playbook done well

  • Each major shared capability has identifiable consumers and value
  • Shared code stays smaller and more intentional than product-specific code
  • Ownership and change policy for the layer are explicit
  • Teams can explain what belongs in shared and what should stay local
  • The audit leads to pruning, clarification, or productization, not just criticism

Preparation

Before you start

What you need available and true before running the procedure. Skipping this is the most common reason playbooks fail.

Inputs

Material you'll want to gather first.

  • Shared layer inventory
  • Consumer list
  • Change history
  • Adoption and support pain
  • Docs and ownership model

Prerequisites

Conditions that should be true for this to work.

  • You can identify real consumers
  • Consumer teams are willing to speak honestly about fit and cost
  • There is permission to prune or localize parts of the layer

Procedure

The procedure

Each step carries its purpose (why it exists), its actions (what you do), and its outputs (what you produce). Read the purpose. It's what keeps the step from degenerating into checklist theatre.

  1. Map the shared layer by capability and consumer

    See what the layer actually provides, not what it claims to provide.

    Actions

    • Inventory the major capabilities and modules
    • List real current consumers for each capability
    • Identify parts with low or unclear adoption

    Outputs

    • Shared capability map (see the consumer-mapping sketch after this procedure)
  2. Review whether reuse is real or speculative

    Test whether abstraction earned its place.

    Actions

    • Ask what repeated problem each capability solved
    • Identify which abstractions existed before real repeated demand
    • Find where teams are forced into the layer rather than choosing it

    Outputs

    • Reuse validity assessment
  3. Inspect complexity and coupling costs

    Price the hidden costs of sharing.

    Actions

    • Review optionality, configuration sprawl, extension points, and consumer breakage risk
    • Identify where shared changes impose coordination overhead or change fear on product teams
    • Note where the layer is acting as a weak platform instead of a clear product

    Outputs

    • Shared complexity review
  4. Decide what to keep, prune, localize, or productize

    Turn audit into architecture movement.

    Actions

    • Keep capabilities with clear multi-consumer value
    • Prune dead or speculative abstractions
    • Move low-value shared logic back to local ownership where useful
    • Productize high-value shared capabilities with clear support and change policy

    Outputs

    • Shared layer action plan
  5. Set entry rules for future shared code

    Stop accidental complexity from rebuilding itself.

    Actions

    • Define what evidence is required before adding new shared abstractions
    • Clarify ownership and support expectations
    • Teach teams when local code is the better choice

    Outputs

    • Shared layer contribution policy (see the policy-gate sketch after this procedure)
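
A minimal sketch of step 1's consumer mapping, assuming Python consumers checked out locally and a shared package imported under a hypothetical top-level name `shared`; adjust `SHARED_PACKAGE` and `CONSUMER_REPOS` to your own layout. It counts real importers per shared module and flags low-adoption capabilities.

```python
"""Sketch: map shared-package capabilities to their real consumers.

Assumes local checkouts of consumer repos and a shared Python package
whose top-level import name is known (here, hypothetically, "shared").
"""
from __future__ import annotations

import ast
from collections import defaultdict
from pathlib import Path

SHARED_PACKAGE = "shared"  # assumption: top-level name of the shared package
CONSUMER_REPOS = [Path("repos/billing"), Path("repos/checkout")]  # hypothetical checkouts


def shared_imports(py_file: Path) -> set[str]:
    """Return the shared-layer modules imported by one source file."""
    try:
        tree = ast.parse(py_file.read_text(encoding="utf-8"))
    except SyntaxError:
        return set()
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        found.update(
            n for n in names
            if n == SHARED_PACKAGE or n.startswith(SHARED_PACKAGE + ".")
        )
    return found


def build_capability_map() -> dict[str, set[str]]:
    """Map each shared module (capability) to the repos that import it."""
    consumers: dict[str, set[str]] = defaultdict(set)
    for repo in CONSUMER_REPOS:
        for py_file in repo.rglob("*.py"):
            for module in shared_imports(py_file):
                consumers[module].add(repo.name)
    return consumers


if __name__ == "__main__":
    for module, repos in sorted(build_capability_map().items()):
        flag = "  <- low adoption" if len(repos) < 2 else ""
        print(f"{module}: {len(repos)} consumer(s) {sorted(repos)}{flag}")
```

The same idea ports to other ecosystems by swapping the import scan for the equivalent dependency query, such as package manifests or build graphs.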
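
A hedged sketch of the step 5 entry rule as a policy gate in CI: it fails when a shared module cannot name at least two real consumers in an evidence manifest. The manifest path, package layout, and threshold are all assumptions, not an established convention.

```python
"""Sketch: CI gate enforcing a shared-layer contribution policy.

Assumes a hypothetical manifest (shared/consumers.json) in which every
shared module lists the teams that actually consume it, e.g.
{"retry": ["billing", "checkout"], "feature_flags": ["checkout"]}.
"""
import json
import sys
from pathlib import Path

SHARED_DIR = Path("shared")               # assumption: shared package lives here
MANIFEST = SHARED_DIR / "consumers.json"  # hypothetical evidence manifest
MIN_CONSUMERS = 2                         # policy: reuse must be demonstrated, not promised


def main() -> int:
    declared = json.loads(MANIFEST.read_text(encoding="utf-8"))
    failures = []
    for entry in SHARED_DIR.iterdir():
        if entry.name.startswith(("_", ".")) or entry == MANIFEST:
            continue
        module = entry.stem
        consumers = declared.get(module, [])
        if len(consumers) < MIN_CONSUMERS:
            failures.append(
                f"{module}: {len(consumers)} declared consumer(s), need {MIN_CONSUMERS}"
            )
    if failures:
        print("Shared-layer policy violations:")
        print("\n".join("  " + f for f in failures))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run it on pull requests that touch the shared package so that new abstractions carry their evidence with them.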

Judgment

Judgment calls and pitfalls

The places where execution actually diverges: decisions that need thought, questions worth asking, and mistakes that recur regardless of good intent.

Decision points

Moments where judgment and trade-offs matter more than procedure.

  • Is this truly shared value or just centrally stored code?
  • Should a capability become product-like with support expectations, or go back local?
  • What evidence is enough to justify a new abstraction in shared space?
  • Which complexity costs are acceptable for the reuse gained?

Questions worth asking

Prompts to use on yourself, the team, or an AI assistant while running the procedure.

  • Which parts of this shared layer have real repeated demand?
  • What is this abstraction saving consumers, and what is it costing them?
  • Should this stay shared, become a supported platform capability, or go back local?

Common mistakes

Patterns that surface across teams running this playbook.

  • Treating all reuse as obviously good
  • Keeping low-value shared abstractions because they feel strategic
  • Auditing the layer without asking consumers what it costs them
  • Pruning code without clarifying future rules for what belongs in shared

Warning signs you are doing it wrong

Signals that the playbook is being executed but not landing.

  • The audit concludes the layer is too abstract but changes nothing
  • Consumer teams still cannot explain what the shared layer is for
  • New shared additions continue without evidence or review
  • The layer keeps growing while product teams route around it

Outcomes

Outcomes and signals

What should exist after the playbook runs, how you'll know it worked, and what to watch for over time.

Artifacts to produce

Durable outputs the playbook should leave behind.

  • Shared capability map
  • Reuse validity assessment
  • Shared complexity review
  • Shared layer action plan
  • Shared layer contribution policy

Success signals

Observable changes that mean the playbook landed.

  • The shared layer becomes smaller or clearer
  • Consumer trust and adoption quality improve
  • Change fear around shared code decreases
  • Teams use stronger criteria before extracting new abstractions

Follow-up actions

Moves that keep the playbook's effects compounding after it finishes.

  • Review consumer experience after pruning or productization
  • Repeat the audit periodically for high-growth shared layers
  • Update onboarding and architecture guidance with clearer shared-layer rules

Metrics or signals to watch

Longer-horizon indicators that the underlying problem is receding.

  • Number of real consumers per capability
  • Shared-layer growth rate versus product-layer growth rate (sketched below)
  • Consumer breakage or coordination incidents
  • Route-around rate by product teams
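
A hedged way to track the shared-versus-product growth signal above, assuming a single repository with hypothetical `shared/` and `products/` top-level directories; it compares line churn over a trailing window using `git log --numstat`.

```python
"""Sketch: compare shared-layer churn with product-layer churn over a window.

Assumes a monorepo checkout with hypothetical top-level directories
"shared/" and "products/"; adapt the paths and window to your repo.
"""
import subprocess
from collections import Counter

WINDOW = "90 days ago"
BUCKETS = {"shared/": "shared", "products/": "product"}  # hypothetical layout


def churn_by_bucket(repo: str = ".") -> Counter:
    """Sum added plus deleted lines per bucket from git history."""
    out = subprocess.run(
        ["git", "log", f"--since={WINDOW}", "--numstat", "--pretty=format:"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    churn: Counter = Counter()
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":  # skip blanks and binary files
            continue
        added, deleted, path = parts
        for prefix, bucket in BUCKETS.items():
            if path.startswith(prefix):
                churn[bucket] += int(added) + int(deleted)
    return churn


if __name__ == "__main__":
    c = churn_by_bucket()
    shared, product = c["shared"], c["product"]
    ratio = shared / product if product else float("inf")
    print(f"shared churn: {shared}  product churn: {product}  ratio: {ratio:.2f}")
```

A ratio that keeps climbing while consumers per capability stay flat is the quantitative version of the route-around warning sign.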

AI impact

AI effects on this playbook

How AI-assisted and AI-driven workflows help execution, and the ways they can make it worse.

AI can help with

Where AI tooling genuinely reduces the cost of running this playbook well.

  • Inventorying shared capabilities and imports
  • Mapping consumer usage patterns from code and config
  • Summarizing complexity hotspots and unused abstractions

AI can make worse by

Distortions AI introduces that make the underlying problem harder to see.

  • Accelerating speculative helper and wrapper generation into shared space
  • Making generic abstractions look more coherent than they are
  • Producing over-neat capability summaries that ignore consumer pain

Relationships

Connected playbooks

Failure modes this playbook tends to address, decisions behind the situation, red flags that motivate running it, and neighboring playbooks.