Walk into almost any public sector programme in 2026 and the delivery teams will be running fortnightly sprints, conducting user research, and iterating on prototypes. The methodology is agile. The vocabulary is agile. Now look at the business case approving the work. It identifies a handful of long-term benefits, values them in pounds and percentages, and commits the programme to realising them over a three to five year horizon. It is a waterfall artefact sitting at the entrance of an agile programme, and the gap between the two is where most outcome failures live.
I wrote the Information and Analytics Benefits Management Approach at NHS Digital a decade ago. The structural problem I tried to address then is still the structural problem now. Benefits management as practised across UK government is a lagging-indicator framework grafted onto a delivery method which has otherwise abandoned the same logic. The two halves of the programme do not share a methodology. Almost nobody names it.
Lagging indicators tell you what already happened
A benefit profile, in the orthodox sense, is a forecast of a long-term outcome with an attribution percentage and a target date. Realisation is measurable only after delivery. The review stage of the standard five-stage model is the point at which the programme finds out whether the bet paid off, often eighteen months to two years after go-live. By then the team has dispersed, the senior responsible owner has moved on, and the opportunity to course-correct has gone.
Course-correction was never part of the model. The model assumed the benefits identification work done at the start was sound enough to be measured against at the end. Modern delivery teams iterate fortnightly against working software and observed user behaviour. The benefits framework wrapped around them iterates once, at the end. The asymmetry produces a programme responsive to delivery feedback and deaf to outcome feedback for the entire duration of the work.
OKRs give the long-term goal an agile pulse
Objectives and Key Results are the obvious bridge, and the reason they have spread through technology organisations is the reason they belong inside public sector benefits frameworks. They allow a long-term outcome to be expressed as an Objective, then decomposed into Key Results, which are leading indicators measurable on a quarterly cadence.
Take a programme aiming to reduce repair response times in social housing. The reduction holds as the long-term benefit. The Key Results sit closer to operational reality: a measurable drop in median time-to-triage, an increase in cases routed to the correct trade on first allocation, a decline in repeat-visit rates for the same fault. These are not the benefit. They are the leading signals the benefit is being built.
OKRs do not replace benefits management. They make it functional. The benefit profile remains the long-term commitment to Treasury or to the board. The OKR layer sits underneath it, providing quarterly evidence that the programme is on the right trajectory, and an early warning when it is not. When a Key Result misses for two quarters running, the programme has the information to intervene while intervention is still cheap. The lagging-only model gives the same information eighteen months later, when intervention costs the next funding cycle.
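The two-quarter early-warning rule can be sketched as a small data structure. Everything here is illustrative, not drawn from any real programme: the KR names, targets, and quarterly figures are invented, and the two-consecutive-miss threshold is simply the rule described above, which a real programme would tune.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """One leading indicator sitting beneath a long-term benefit profile."""
    name: str
    target: float                 # quarterly target value
    higher_is_better: bool = True
    actuals: list = field(default_factory=list)   # one recorded value per quarter

    def record(self, value: float) -> None:
        self.actuals.append(value)

    def missed(self, value: float) -> bool:
        return value < self.target if self.higher_is_better else value > self.target

    def needs_intervention(self) -> bool:
        """Early warning: the last two recorded quarters both missed target."""
        if len(self.actuals) < 2:
            return False
        return all(self.missed(v) for v in self.actuals[-2:])

# Illustrative KR from the social-housing example: median time-to-triage
# in days, where lower is better and the (invented) target is 2.0 days.
triage = KeyResult("median time-to-triage (days)", target=2.0, higher_is_better=False)
for quarterly_value in (3.1, 2.6):    # two quarters, both above target
    triage.record(quarterly_value)
print(triage.needs_intervention())    # True: two consecutive misses
```

The point of the sketch is that the check is cheap and mechanical: the expensive part is the UCD work that makes the targets and baselines honest in the first place.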
The granularity also fixes a political problem. Benefit profiles are difficult to revise once approved because revision looks like failure. OKRs are designed to be revised quarterly. They carry no political weight on their own. They allow the programme to learn without admitting the original case was flawed, which, given the case was written without operational evidence, it almost always will have been.
User-centred design is what makes the KPIs honest
The OKR layer only works if the Key Results are grounded in real operational behaviour. This is where user-centred design earns its place in the benefits architecture, and it is the part of the system most public sector programmes still treat as a delivery activity rather than a benefits activity.
UCD provides three things benefits management currently lacks. Observed baseline data, gathered through contextual inquiry and diary studies, captures how work actually happens before the programme is announced and before awareness of it contaminates the baseline. Leading-indicator candidates come from user research, which surfaces the behaviours that will change first if the intervention is working; these are precisely the Key Results the OKR layer needs. Dis-benefit visibility comes from observation, which finds the workarounds, the informal processes, and the metric distortions that workshops about risk never surface.
Done in this sequence, UCD is not an early-stage discovery activity bolted onto delivery. It is the empirical foundation of the benefits architecture. The personas inform the Objectives. The observed behaviours inform the Key Results. The contextual data informs the baselines. The dis-benefit register comes from the field rather than the room.
What changes if you accept this
Three things, none of which require new governance, only a reordering of existing activities. A short pre-business-case discovery, producing observed baselines and provisional personas before the benefits section is finalised. A quarterly OKR layer wrapped around the existing benefit profiles, providing leading-indicator evidence between gate reviews. A standing UCD function continuing through delivery, refreshing the data the OKRs measure against and surfacing dis-benefits as they emerge.
The benefit profiles stay. Treasury still gets its long-term commitment. The board still gets its line of sight to outcome. What changes is that the programme, between approval and review, is no longer flying blind. The waterfall artefact at the entrance is reinforced by an agile evidence loop running underneath it, and the two halves of the programme finally share a methodology.
Until then, benefits management will keep doing what it currently does: ratifying decisions made before the evidence existed, then measuring outcomes against assumptions nobody had the chance to test. This is not benefits management. It is benefits theatre.