Portfolio Flow: Why Your Fastest Teams Are Hiding a Shared Problem
The highest-throughput team in the portfolio has a 22-day cycle time. A slower team delivers in 9 days. In the monthly review, the fast team's performance looks broken. The slower team looks fine.
The fast team is not broken. They use the platform team four times as often as the slower team. Every piece of work they deliver requires a platform integration that routes through a shared resource with fixed capacity. Their speed creates their wait.
This is not unusual. It is the structural behaviour of shared resources in multi-team systems — and per-team metrics make it invisible.
How shared resources work
Most delivery organisations have resources that multiple teams depend on but no single team owns: a platform team, an architecture review board, a security sign-off process, a data team, a QA environment. These resources have limited capacity. They serve requests from whichever team needs them.
When teams operate independently, the shared resource looks fine — each team has an occasional dependency, waits are short, no one notices. As team throughput increases, shared resource utilisation increases too. Wait times grow. Items sit in "blocked" or "waiting for platform" states that don't show up as a bottleneck in any single team's metrics.
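The nonlinearity here is worth making concrete. If the shared resource is idealised as a single-server queue (an M/M/1 model, which is an assumption, not a claim about any real platform team), the expected wait grows slowly at moderate utilisation and explodes as utilisation approaches 100%:

```python
# Sketch: queueing wait vs utilisation for a shared resource modelled as an
# M/M/1 queue -- an idealised assumption used only to show the shape of the
# curve. Mean wait in queue: Wq = rho / (mu * (1 - rho)), where mu is the
# service rate and rho is utilisation.

def mean_wait(utilisation: float, service_rate: float = 1.0) -> float:
    """Expected queueing wait, in units of one service time."""
    if not 0 <= utilisation < 1:
        raise ValueError("utilisation must be in [0, 1)")
    return utilisation / (service_rate * (1 - utilisation))

for rho in (0.5, 0.7, 0.9, 0.95):
    print(f"utilisation {rho:.0%}: mean wait {mean_wait(rho):.1f}x service time")
```

At 50% utilisation the wait equals one service time; at 90% it is nine. This is why a shared resource that "looked fine" for years can degrade sharply when team throughput rises.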
The resource is constrained. The evidence of the constraint is distributed across every team that uses it — and invisible in any individual team's dashboard.
The inverse relationship
The teams that deliver fastest use shared resources most. This is not accidental — it is structural.
High-throughput teams complete more items per week. Each completed item requires the same shared-resource touchpoints as an item from a slower team. More completions mean more requests to the shared resource. More requests mean more time waiting for capacity.
A team completing 8 items per week sends twice as many requests to the shared resource as a team completing 4. Their wait time accumulates twice as fast, and a larger share of their cycle time is spent in the shared resource's queue.
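The scaling can be sketched in a few lines. The team names and the average wait per request are illustrative figures, not measurements:

```python
# Sketch: wait exposure scales linearly with request rate when each
# completed item needs one trip through the shared resource.
# shared_wait_per_request is a hypothetical average, not a real figure.

shared_wait_per_request = 1.75  # assumed avg days of queueing per request

teams = {"fast team": 8, "slower team": 4}  # items completed per week
for name, items_per_week in teams.items():
    requests = items_per_week  # one shared-resource request per item
    exposure = requests * shared_wait_per_request
    print(f"{name}: {requests} requests/week, "
          f"{exposure:.0f} team-days of queueing exposure per week")
```

Double the throughput, double the requests, double the accumulated waiting. Nothing about the fast team's own process causes this.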
This creates a perverse signal: the team that has improved its own process most — the one doing the most things right — ends up looking worst in portfolio reviews because its cycle time is dominated by a constraint it does not control.
Why per-team metrics hide it
The evidence is present in the data. It just isn't surfaced.
Team cycle time reports show total time from start to done. A team with 8 days of "own work" and 14 days waiting for the platform team shows 22 days total. The 14 days in "blocked" or "waiting" state is visible as a line item if you look for it — but most portfolio dashboards sum or average without decomposing by state.
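Decomposing by state is mechanical once state-level durations are exported. A minimal sketch, assuming the tracking tool can emit days-per-state for each item (the state labels here are assumptions about your workflow):

```python
# Sketch: split total cycle time into "own work" vs "waiting" buckets.
# WAIT_STATES and the shape of the exported data are assumptions about
# your tracking tool, not a specific product's API.
from collections import defaultdict

WAIT_STATES = {"blocked", "waiting for platform"}  # assumed state labels

def decompose(items):
    """items: list of dicts mapping state name -> days spent in that state."""
    totals = defaultdict(float)
    for item in items:
        for state, days in item.items():
            bucket = "waiting" if state in WAIT_STATES else "own work"
            totals[bucket] += days
    return dict(totals)

# The 22-day item from the text: 8 days of own work, 14 days waiting.
print(decompose([{"in progress": 8, "waiting for platform": 14}]))
```

Run across a team's completed items, this turns one opaque 22-day number into the two numbers the portfolio review actually needs.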
The platform team's queue depth is visible in the platform team's own metrics. But that dashboard lives in the platform team's space, not the portfolio view. No one is looking at both simultaneously and seeing the connection.
The fast team gets the review comment about their cycle time. The platform team gets a separate review about queue depth and response time. Neither conversation surfaces the relationship between them.
How to decompose it
Surfacing the dependency is straightforward once the data is connected.
Trace item age by state, not just total. How much of each team's cycle time is spent in "own work" states versus "waiting for external dependency" states? Which dependency accounts for most of the wait?
Then look at the shared resource: who is requesting it, at what rate, and what is the current queue depth? Plot team throughput against time spent waiting for that resource. The slope will be positive — and the steepest slope will belong to the fastest team.
Insight
If your fastest team has the worst cycle time, look for what they use most that other teams use less. The shared bottleneck is there.
What to do about it
Three interventions address shared bottlenecks, in order of structural depth.
Scale the shared resource. Add capacity to the constrained team or function. This is often the right first step and can be done relatively quickly. It does not address the coupling but removes the immediate constraint.
Reduce coupling to the shared resource. Change what requires the shared resource. If all platform integrations need approval, examine whether approval can be more targeted — automated for low-risk changes, required only for high-risk ones. Reduce the request rate rather than just adding capacity to serve it.
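A targeted-approval rule can be as simple as a predicate over a change's risk signals. The signals below (touching auth, changing a schema) are hypothetical examples of a policy, not a recommendation of these specific criteria:

```python
# Sketch: route only high-risk changes through the shared approval queue;
# auto-approve the rest. The risk signals are hypothetical examples.

def needs_manual_approval(change: dict) -> bool:
    """Queue high-risk changes for the platform team; auto-approve the rest."""
    high_risk = change.get("touches_auth") or change.get("schema_change")
    return bool(high_risk)

changes = [
    {"id": 1, "touches_auth": False, "schema_change": False},
    {"id": 2, "touches_auth": True, "schema_change": False},
]
for c in changes:
    route = "manual review" if needs_manual_approval(c) else "auto-approved"
    print(f"change {c['id']}: {route}")
```

Every change that the predicate auto-approves is a request that never enters the shared queue, which lowers utilisation for everyone.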
Change the delivery model. Embed shared capability into teams that use it most. Make it self-service. Build async approval flows that don't require synchronous availability. These changes are higher investment but break the structural coupling rather than managing it.
The shared bottleneck doesn't punish every team equally. It punishes the ones working hardest to deliver.
The goal of a portfolio flow analysis is not to explain why the fast team looks slow. It is to find the shared constraint that is limiting every team — most severely the ones that have otherwise eliminated their own. Decomposing by dependency state rather than by total cycle time is what makes it visible.