The Hard Parts.dev
RF-24 Process · Operational RF Red Flags

Metrics are visible but not trusted

Dashboards, KPIs, and delivery metrics exist, but people do not believe they reflect reality well enough to act on them confidently.

  • Severity: medium-high
  • Frequency: common
  • First noticed by: managers · team leads · analysts
  • Detectability: visible-if-you-look
  • Confidence: high
At a glance · RF-24
Where you see this

  • engineering management
  • delivery reporting
  • platform dashboards
  • AI evaluation reporting

Not necessarily a problem when
a new metric is still being calibrated openly and trust-building is active
Often mistaken for
"if it is measured, it is under control"
Time horizon
medium-term
Best placed to act

  • metric owner
  • engineering leadership
  • platform/data owners

The signal

What you would actually notice


Field observation

Teams still ask people for reality even though dashboards exist, or interpret dashboards defensively instead of operationally.

Also observed

  • "The dashboard says green, but everyone knows we are not green."
  • "We do not use that metric for real decisions."

Primary reading

What it usually indicates

Most likely underlying patterns when this signal shows up. Not a diagnosis; a starting hypothesis.

Usually indicates

  • poor metric design
  • bad data quality
  • incentives distorting reporting
  • metrics misaligned with lived reality

Stakes

Why it matters

Measurement without trust creates reporting theater rather than better decisions.

Inspection

What to check next

Deliberate steps to confirm or disconfirm the primary reading above. Not a checklist. An order of inspection.

  1. metric definitions
  2. data quality
  3. decision use cases
  4. local workarounds
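
The first two inspection steps can be partly mechanized. As a sketch (the event shape, the `audit_metric` function, and all thresholds are illustrative assumptions, not from any specific tool), an independent recount of a count-style metric from its raw events surfaces both data-quality problems (stale inputs) and definition or coverage gaps (the dashboard and the recount disagree):

```python
from datetime import datetime, timedelta, timezone

def audit_metric(reported_value, events, now, window_days=30, tolerance=0.10):
    """Recompute a count-style metric from raw events and compare it to
    the number the dashboard reports. Returns a list of findings.

    All names and thresholds here are hypothetical examples.
    """
    findings = []
    window_start = now - timedelta(days=window_days)
    in_window = [e for e in events if e["timestamp"] >= window_start]

    # Data quality: how stale is the freshest event feeding the metric?
    if events:
        freshest = max(e["timestamp"] for e in events)
        lag = now - freshest
        if lag > timedelta(days=2):
            findings.append(f"stale data: newest event is {lag.days} days old")

    # Definition check: does an independent recount match the dashboard?
    recomputed = len(in_window)
    if reported_value and abs(recomputed - reported_value) / reported_value > tolerance:
        findings.append(
            f"definition/coverage gap: dashboard says {reported_value}, "
            f"raw events give {recomputed}"
        )
    return findings

# Example: events 1, 3, 5, 12, and 40 days old; only four fall in the
# 30-day window, so a dashboard claiming 8 is flagged.
now = datetime(2024, 6, 30, tzinfo=timezone.utc)
events = [{"timestamp": now - timedelta(days=d)} for d in (1, 3, 5, 12, 40)]
print(audit_metric(reported_value=8, events=events, now=now))
```

A recount like this does not settle which definition is right; it makes the disagreement concrete enough to discuss with the people who distrust the number.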

Diagnostic questions

Questions to ask the team, or yourself, before concluding anything.

  1. What reality does this metric fail to capture?
  2. Who trusts it least, and why?
  3. What behavior is the metric incentivizing?

Progression

Under the signal

Where this pattern tends to come from, what's holding it up, and where it goes if nothing changes.

Leading indicators

What tends to show up first.

  • teams caveat every dashboard discussion
  • manual spreadsheets or side channels are used to explain the truth
  • leaders ask for alternate numbers

Common root causes

What is usually sitting under the signal.

  • measurement divorced from practice
  • status incentives
  • poor instrumentation

Likely consequences

What happens if nothing changes.

  • bad decisions
  • defensive reporting
  • reporting inflation

Look-alikes

Not what it looks like

Patterns that can be mistaken for this signal, and 'fix' attempts that make it worse.

False friends

Things the signal is often confused with, but isn't.

  • "if it is measured, it is under control"

Anti-patterns when responding

Responses that feel sensible and usually make the underlying pattern worse.

  • adding more dashboards instead of repairing trust
  • using disputed metrics for judgment-heavy performance conversations

Context

Context and ownership

Where this signal surfaces, who sees it first, who can actually act, and how much runway there usually is before escalation.

Common contexts

Where it shows up

  • engineering management
  • delivery reporting
  • platform dashboards
  • AI evaluation reporting

Most likely to notice

Who sees it first, before it escalates.

  • managers
  • team leads
  • analysts

Best placed to act

Who can move on it

Not always the same as who notices it.

  • metric owner
  • engineering leadership
  • platform/data owners

Time horizon

medium-term

How much runway there usually is before the signal hardens into the underlying pattern.

AI impact

AI effects on this signal

How AI-assisted and AI-driven workflows tend to amplify or hide this signal.

AI amplifies

Ways AI tooling tends to make this signal louder or more common.

  • AI can make reporting much easier to produce, quickly increasing the volume of low-trust metrics.

AI masks

Ways AI tooling tends to hide this signal, so it keeps growing under the surface.

  • AI-generated narrative around a dashboard can make weak metrics feel more legitimate.

Relationships

Connected signals

Related failure modes, decisions behind the signal, response playbooks, and neighboring red flags.