LOBUS.WORKS
Systems & Transformation

The Metrics That Protect Careers and Break Delivery Systems

17 Feb 2026 · 8 min

Velocity has been broken on this team for years. Everyone in the room knows it. The sprint retrospective confirmed it three quarters running. The velocity chart is still on the dashboard and still goes into the monthly report.

This is not an oversight. It is a solution.

What comfort metrics are solving for

Velocity, RAG status, burndown charts, completion percentage — these are metrics that teams and managers reach for because they produce reportable numbers with a clear directional signal. Green means okay. Trending up means progress. They answer "are we doing what we said?" in a form that satisfies the question without exposing anyone to scrutiny.

They are not primarily diagnostic tools. They are social tools.

When a manager reports to leadership, they need a narrative that is defensible. A metric that says "we delivered 43 points this sprint" is defensible — even if story points measure nothing real, even if 43 is down from last sprint, even if cycle time has been rising for eight weeks. The number exists. It can be referenced. It absorbs the conversation.

For teams, velocity solves a specific and persistent threat: the performance conversation. When the metric is trending in the right direction, the question of whether the team is working hard enough doesn't get asked. The number provides cover. Replacing it with a metric that accurately reflects system health means giving up that cover — before the culture that would make accurate reporting safe has been established.

Why knowing it's broken isn't enough

The instinct is to assume that better information will replace worse information once the gap is recognised. Show the team a better metric, demonstrate that velocity is unreliable, and the replacement follows naturally.

This happens sometimes. More often, the broken metric persists. The reason is that understanding a metric is inaccurate and choosing to replace it are two different decisions. The first is a technical conclusion. The second is a career calculation.

Consider a weather forecaster choosing between two models. One reliably says "moderate risk." It is wrong frequently but never catastrophically wrong in a way that generates scrutiny. The other accurately reports high risk when the data warrants it — and gets reviewed every time the predicted storm doesn't arrive at full intensity. Over time, forecasters and organisations calibrate toward the level of certainty that minimises personal exposure. Accuracy is a secondary concern.

Teams and managers do the same with delivery metrics. The velocity number that's always defensible is preferred over the cycle time number that accurately shows a system under stress — not because anyone decided to deceive, but because the metric that protects its reporter from risk is the metric that survives.

The career logic is sound

It is worth being precise about why comfort metrics persist, because the usual diagnosis — that teams lack data literacy, or that managers don't understand flow — misses the mechanism.

A manager who reports rising cycle time is surfacing a system problem. Leadership's response, in most organisations, is to ask what the manager is doing about it. The metric that was meant to show system health becomes the basis for a performance conversation about the person who reported it. The messenger gets the problem assigned back to them.

A manager who reports green RAG status and stable velocity has nothing assigned back to them. The problem, if it exists, remains invisible. The quarter ends. Nothing happened.

The rational response to this incentive structure is to report the metric that produces the better outcome for the reporter — which is almost always the metric that conceals the most. This is not cynicism. It is the entirely predictable behaviour of people navigating an environment where accurate information carries personal cost.

Warning

A metric that protects its reporter from career risk will always trade accuracy for safety. It will never reliably surface what the system needs leadership to see.

What the system loses

The cost of comfort metrics is not primarily that they're inaccurate. It is that they hide the information needed for structural decisions.

When cycle time is rising because WIP is uncontrolled, velocity hides it. When a team is adjusting their estimates to protect a number, burndown charts confirm the fiction. When a project is genuinely at risk, RAG status stays amber until the week it's too late to change the outcome. These are not edge cases. They are the normal behaviour of comfort metrics under pressure.

The decisions that require accurate flow data — whether to add capacity, whether to reduce WIP, whether a commitment is still achievable — are being made on lagging, buffered, socially-calibrated signals instead. The structural problems that accurate data would reveal go unaddressed. They compound. When they eventually surface, they surface as crises rather than as early signals that could have been acted on.
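The link this section leans on, uncontrolled WIP driving cycle time up, is Little's Law: in rough steady state, average cycle time equals average WIP divided by average throughput. A minimal sketch (the weekly throughput and WIP figures are purely illustrative):

```python
# Little's Law: avg cycle time = avg WIP / avg throughput,
# assuming the system is in rough steady state.
throughput_per_week = 5                  # items finished per week, held flat
wip_levels = [10, 20, 40]                # WIP doubling while delivery rate stays constant

cycle_times = [wip / throughput_per_week for wip in wip_levels]
for wip, ct in zip(wip_levels, cycle_times):
    print(f"WIP {wip:>2} -> avg cycle time {ct:.0f} weeks")
```

With throughput flat, doubling WIP doubles how long each item takes, which is exactly the signal velocity cannot show: the team is finishing the same number of items while every individual item gets slower.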

What displaces comfort metrics

System-level metrics — throughput, cycle time, WIP, item age — are not personally attributable in the same way as velocity or RAG status. A rising cycle time is a system signal. It does not have a person's name on it. It requires a structural response, not a performance conversation.
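All four of these signals fall out of the same raw data: when each work item started and when it finished. A minimal sketch, assuming items carry start and finish dates (the items, dates, and 30-day window here are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical work items as (started, finished) pairs;
# finished is None for items still in progress.
items = [
    (date(2026, 1, 5), date(2026, 1, 12)),
    (date(2026, 1, 7), date(2026, 1, 26)),
    (date(2026, 1, 15), None),
    (date(2026, 1, 20), None),
    (date(2026, 1, 22), date(2026, 2, 3)),
]

today = date(2026, 2, 10)
window_start = today - timedelta(days=30)

# Throughput: items finished within the reporting window.
throughput = sum(1 for s, f in items if f and f >= window_start)

# Cycle time: elapsed days from start to finish, per finished item.
cycle_times = [(f - s).days for s, f in items if f]

# WIP: items started but not yet finished.
wip = sum(1 for s, f in items if f is None)

# Item age: how long each unfinished item has been in progress.
ages = [(today - s).days for s, f in items if f is None]

print(f"throughput (30d): {throughput}")
print(f"avg cycle time:   {sum(cycle_times) / len(cycle_times):.1f} days")
print(f"WIP:              {wip}")
print(f"oldest item age:  {max(ages)} days")
```

Note that nothing in this output carries a person's name: every number describes the state of the system, which is what makes it usable for a structural conversation rather than a performance one.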

This distinction changes the social contract around the metric. If the data implicates the system rather than the individual, the incentive to suppress accurate reporting is lower. A team that knows cycle time data will be used to diagnose structure — not to assign blame — has less reason to protect themselves from it.

But this requires an explicit agreement, not just a new dashboard. The replacement metric and the cultural agreement about how it will be used have to arrive together. A team that is given accurate flow data but continues to operate in an environment where that data triggers performance conversations will manage the new metric the same way they managed the old one. The form changes. The protection mechanism reasserts itself.

This is why replacing comfort metrics is an organisational design problem, not a measurement problem. The data is available. The better metrics are well understood. What isn't present, in most organisations, is the agreement that accurate system data will be responded to at the system level — not handed back to the team that reported it as their problem to solve alone.

The metric wasn't chosen because it was accurate. It was chosen because it was safe. That is still why it's there.

The path forward is not to demand better metrics from teams operating in an environment that punishes accurate reporting. It is to change the environment first. When surfacing a problem produces a structural response rather than a career consequence, the incentive to conceal it weakens. When that shift is credible and sustained, the comfort metric loses its primary function. Teams stop reaching for it not because they were told to, but because they no longer need it.