There is a ritual at the start of almost every public sector programme. Someone in policy writes a business case. They identify the benefits, assign them values, and get approval. The project is initiated. Then the user researchers arrive.
What they find, fairly reliably, is that the benefits in the business case were drawn from strategic intent rather than operational evidence. The people who wrote the case knew what they wanted the programme to achieve. They did not know whether the people who would use the resulting system would behave in the ways the benefit calculations assumed. Nobody had asked.
This is not a cynical observation. It is structural. Business cases get written before project budgets are approved. User research costs money. The sequence is almost always: case first, evidence later. By the time evidence arrives, benefit profiles are signed off, change-controlled, and politically difficult to revise.
The thread which is supposed to connect strategic intent to operational reality gets cut before anyone picks it up.
Benefit identification is a top-down exercise pretending to be evidence
Benefits management, done properly, is one of the more rigorous disciplines in programme delivery. It insists that every piece of work traces back to a measurable outcome, that someone is accountable for realising it, and that the programme does not close until there is evidence of the benefit being achieved, not merely of the system going live.
The five-stage model is sound in principle: identify and quantify, value and appraise, plan, realise, review. The tools are good. Properly applied, they create a line of sight from investment to outcome which most programmes lack.
The problem is at stage one.
Benefit identification in most programmes is top-down. Policy leads and senior responsible owners describe the benefits they need the programme to deliver in terms which satisfy HM Treasury Green Book categories and the investment appraisal which follows. These get translated into benefit profiles which look precise, with percentage attributions, pound values, and target dates. What they rarely are is grounded.
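To make the mismatch concrete: a benefit profile is, in effect, a record like the sketch below. This is a minimal illustration in Python; the field names are assumptions for this example, not drawn from any framework. Every quantitative field gets filled in at sign-off. The evidence field is the one that tends to stay empty.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of a benefit profile as a record. Field names are
# assumptions for this example, not taken from any governance framework.
@dataclass
class BenefitProfile:
    name: str
    cash_value_gbp: float      # the pound value carried into the appraisal
    attribution_pct: float     # share of the outcome claimed by this programme
    target_date: date
    owner: str                 # the person accountable for realisation
    evidence: list[str] = field(default_factory=list)  # observational grounding

profile = BenefitProfile(
    name="Reduced case-handling time",
    cash_value_gbp=1_200_000.0,
    attribution_pct=60.0,
    target_date=date(2027, 3, 31),
    owner="Senior responsible owner",
)

print(profile.evidence)  # [] at sign-off: precise numbers, no grounding
```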
The baseline is measured after the organisation knows something is coming
There is a closely related problem nobody mentions.
The baseline against which benefit gets measured has to be established before the project begins. If you want to demonstrate your new system saves thirty minutes per user per day, you need to know how long the current process takes. Before. Not after the project has started, when awareness of the change has already begun to shift behaviour.
In practice, baseline data collection is treated as an early-stage project activity rather than a pre-project one. By the time researchers go out to measure the current state, the organisation already knows something is coming. Teams have started compensating. Workarounds have been tidied up. Managers have briefed people to perform well. The baseline you measure is not the actual baseline. It is a slightly improved version of it.
This matters because the benefit calculation depends on the gap between baseline and target. If the baseline is shifted upward by awareness of the programme, the gap narrows. The benefit you claim at review is real, but smaller than forecast. The programme gets marked as underdelivering against a business case contaminated from the start.
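To make the arithmetic concrete, here is a minimal worked example. Every figure in it is invented for illustration; only the structure of the calculation matters.

```python
# All figures are invented for illustration.
true_baseline = 60.0      # minutes per task before anyone knew change was coming
measured_baseline = 52.0  # baseline captured after awareness tidied things up
after_rollout = 40.0      # minutes per task once the new system beds in

real_saving = true_baseline - after_rollout           # 20 minutes per user per day
claimable_saving = measured_baseline - after_rollout  # 12 minutes per user per day

# Convert a daily time saving into an annual cash figure.
users, working_days, hourly_rate = 500, 220, 30.0

def annual_value(minutes_saved: float) -> float:
    return users * working_days * (minutes_saved / 60.0) * hourly_rate

print(f"real improvement:       £{annual_value(real_saving):,.0f}")      # £1,100,000
print(f"evidenceable at review: £{annual_value(claimable_saving):,.0f}") # £660,000
```

On these assumed numbers, eight minutes of genuine daily improvement disappear into the contaminated baseline, and the programme is judged to have underdelivered by £440,000 of value it actually produced.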
I spent years working in benefits management at NHS Digital, including writing the Information and Analytics Benefits Management Approach which governed how a directorate tracked its benefits portfolio. The baseline timing problem was one of the first structural issues I identified. A decade later, most programme governance frameworks still do not address it directly.
Dis-benefits get identified by people who have not watched the work
PRINCE2 requires dis-benefit identification: outcomes that are certain or highly likely to occur and that stakeholders will experience as negative. A common example is the productivity dip while users learn a new system.
What frameworks do poorly is connect dis-benefit identification to user research in a structured way.
Dis-benefits are usually identified by programme teams sitting in a room asking each other what might go wrong. This produces the obvious ones: transition disruption, training time, temporary error rates. It rarely surfaces the subtler ones. The user who maintains a workaround nobody in the centre knows about. The team whose informal process depends on a data quality issue the new system will fix, disrupting the compensation mechanism they have built around it. The manager whose performance metrics assume the current inefficiency, making the improvement look like a shortfall.
These are not hypothetical. They are the kinds of things which surface in user research — specifically in contextual inquiry and diary studies, where you observe actual work rather than asking people to describe it. They do not appear in dis-benefit registers because nobody doing the register has been out to look.
Fix the sequence
The fix is not complicated, though it requires accepting a sequence most programme governance processes resist.
Before the business case is written — or at the latest before the benefits section is finalised — a short discovery sprint should produce four things. A set of provisional personas for the people whose work the programme will affect, derived from observation rather than the stakeholder map. A preliminary benefit hypothesis map testing whether the benefits the programme intends to deliver are ones users will experience. A baseline measurement protocol ready to run before project initiation, with data collected before awareness of the programme has had time to contaminate the result. An initial dis-benefit risk register built from observational research rather than workshop assumptions.
None of this requires a large team or a long runway. A handful of interviews and a round of contextual observation will surface more relevant information than most business case benefits sections contain.
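As a sketch of how light the artefacts can be: the hypothesis map and dis-benefit register need nothing more than a structured list in which every claimed benefit carries an explicit evidential status. The statuses, field names, and example entries below are all illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

class EvidenceStatus(Enum):
    UNTESTED = "untested"          # asserted in the business case only
    SUPPORTED = "supported"        # observation is consistent with the claim
    CONTRADICTED = "contradicted"  # observation undermines the claim

@dataclass
class BenefitHypothesis:
    claim: str               # the benefit as the business case would state it
    assumed_behaviour: str   # what users must actually do for the claim to hold
    observations: list[str] = field(default_factory=list)
    status: EvidenceStatus = EvidenceStatus.UNTESTED

register = [
    BenefitHypothesis(
        claim="30 minutes saved per user per day",
        assumed_behaviour="Users drop the spreadsheet workaround immediately",
        observations=["Team B's workaround feeds a monthly return nobody has mapped"],
        status=EvidenceStatus.CONTRADICTED,
    ),
    BenefitHypothesis(
        claim="Fewer data-quality corrections downstream",
        assumed_behaviour="Upstream teams enter data correctly at source",
    ),
]

# The benefits section is only as grounded as its share of tested claims.
untested = [h.claim for h in register if h.status is EvidenceStatus.UNTESTED]
print(untested)  # ['Fewer data-quality corrections downstream']
```

The tooling is beside the point; what matters is that no claim reaches the business case still marked untested.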
The current convention produces benefit profiles that are politically committed before they are evidentially grounded. The steel thread of traceability that good benefits management promises runs from approved benefit, through user need, to delivery evidence. If the benefit was identified without the user evidence, the thread has a gap at its origin that no amount of subsequent measurement will close.
Why the gap persists
The obvious reason for the sequence problem is governance: processes are designed around funding gates, and user research costs money before the money is approved.
The less obvious one is disciplinary. Benefits management and user research are owned by different teams, follow different methodologies, and rarely work in the same room. The benefits manager is tracking profiles against a plan. The user researcher is conducting discovery. Neither is required to account for what the other is doing.
I have worked across both disciplines, and across enough sectors — health informatics, data governance, housing technology, enterprise architecture — to recognise the gap between them is a consistent source of programme failure. Not the spectacular failure of a system which does not work. The quieter failure of a system which works exactly as specified, delivers against its plan, and does not produce the outcomes the business case promised.
The failure is usually attributed to adoption issues, organisational resistance, or the complexity of change. It is rarely attributed to what it is: a benefit identification process which did not go far enough before the numbers were written down.