There is a ritual at the start of almost every public sector programme. Someone in policy writes a business case. They identify the benefits, assign them values, and get approval. The project is initiated. Then the user researchers arrive.
What they find, fairly reliably, is benefits drawn from strategic intent rather than operational evidence. The people who wrote the case knew what the programme should achieve. They did not know whether users would behave in the ways the calculations assumed. Nobody had asked.
This is not cynical. It is structural. Business cases get written before project budgets are approved. User research costs money. The sequence is almost always case first, evidence later. By the time evidence arrives, benefit profiles are signed off, change-controlled, and politically difficult to revise.
The thread which is supposed to connect strategic intent to operational reality gets cut before anyone picks it up.
Identification is a top-down exercise pretending to be evidence
Benefits management, done properly, is one of the more rigorous disciplines in programme delivery. The five-stage model creates a line of sight from investment to outcome which most programmes lack. The problem is at stage one.
In most programmes, identification is top-down. Policy leads and senior responsible owners describe the benefits they need the programme to deliver in terms which satisfy HM Treasury Green Book categories and the investment appraisal which follows. These get translated into benefit profiles which look precise: percentage attributions, pound values, and target dates. What they rarely are is grounded.
The baseline is measured after the organisation knows something is coming
There is a closely related problem nobody mentions.
The baseline against which benefit gets measured has to be established before the project begins. If you want to demonstrate your new system saves thirty minutes per user per day, you need to know how long the current process takes. Before. Not after the project has started, when awareness of the change has already begun to shift behaviour.
In practice, baseline data collection is treated as an early-stage project activity rather than a pre-project one. By the time researchers go out to measure the current state, the organisation already knows something is coming. Teams have started compensating. Workarounds have been tidied up. Managers have briefed people to perform well. The baseline you measure is a slightly improved version of the actual baseline.
The benefit calculation depends on the gap between baseline and target. Shift the baseline upward through awareness of the programme, and the gap narrows. The benefit you claim at review is real, but smaller than forecast. The programme gets marked as underdelivering against a business case contaminated from the start.
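The arithmetic is simple enough to sketch. All figures below are invented for illustration; the point is only the shape of the calculation, not the numbers.

```python
# Illustrative numbers only: invented for the sketch, not drawn from any real programme.
forecast_saving = 30      # minutes per user per day promised in the business case

true_baseline = 90        # minutes the process actually took before anyone knew change was coming
measured_baseline = 80    # measured after awareness had already tidied up behaviour
post_delivery = 60        # minutes per user per day with the new system in place

# The benefit claimed at review is measured against the contaminated baseline.
claimed_saving = measured_baseline - post_delivery    # 20 minutes: real, but short of forecast
shortfall = forecast_saving - claimed_saving          # 10 minutes "missing" at benefits review

print(f"Claimed: {claimed_saving} min/day, shortfall vs forecast: {shortfall} min/day")
```

The ten-minute shortfall here is not a delivery failure; it is the slice of improvement that happened before measurement began, and it is invisible to a review that only ever saw the contaminated baseline.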
I spent years working in benefits management at NHS Digital, including writing the Information and Analytics Benefits Management Approach which governed how a directorate tracked its benefits portfolio. The baseline timing problem was one of the first structural issues I identified. A decade later, most programme governance frameworks still do not address it.
Dis-benefits get identified by people who have not watched the work
PRINCE2 requires dis-benefit identification: outcomes that are certain or highly likely to occur and which stakeholders will experience as negative. A common example is the productivity dip while users learn a new system.
Dis-benefits are normally identified by programme teams sitting in a room asking each other what might go wrong. This produces the obvious ones: transition disruption, training time, temporary error rates. It rarely surfaces the subtler ones. The user who maintains a workaround nobody in the centre knows about. The team whose informal process depends on a data quality issue the new system will fix, disrupting the compensation mechanism they have built around it. The manager whose performance metrics assume the current inefficiency, making the improvement look like a shortfall.
These surface in user research, specifically in contextual inquiry and diary studies, where you observe actual work rather than asking people to describe it. They do not appear in dis-benefit registers because nobody compiling the register has been out to look.
Fix the sequence
The fix is not complicated, though it requires accepting a sequence most programme governance processes resist.
Before the business case is written, or at the latest before the benefits section is finalised, a short discovery sprint should produce four things. Provisional personas derived from observation rather than the stakeholder map. A preliminary benefit hypothesis map testing whether the benefits the programme intends to deliver are ones users will experience. A baseline measurement protocol ready to run before initiation, with data collected before awareness has had time to contaminate the result. An initial dis-benefit risk register built from observational research rather than workshop assumptions.
A handful of interviews and a round of contextual observation will surface more relevant information than most business case benefits sections contain.
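The benefit hypothesis map described above can be sketched as a simple structure. The field names and the single example row here are my own invention, not a standard artefact; the point is that each claimed benefit is paired with the user behaviour it assumes and the observational evidence for or against that assumption.

```python
from dataclasses import dataclass

@dataclass
class BenefitHypothesis:
    """One row of a preliminary benefit hypothesis map (fields are illustrative)."""
    intended_benefit: str        # what the business case says the programme delivers
    behaviour_assumed: str       # the user behaviour the benefit calculation relies on
    evidence: str                # what discovery research actually observed
    supported: bool              # does the observation support the assumption?

# A single invented row, of the kind a short discovery sprint might produce.
hypotheses = [
    BenefitHypothesis(
        intended_benefit="30 minutes saved per user per day",
        behaviour_assumed="Users enter data directly into the new system",
        evidence="Contextual inquiry found two teams maintaining a spreadsheet workaround",
        supported=False,
    ),
]

# Unsupported rows are exactly the benefits that should not be signed off as written.
untested = [h for h in hypotheses if not h.supported]
print(f"{len(untested)} benefit(s) rest on assumptions the evidence does not support")
```

The value of the structure is the pairing itself: a benefit with no observed behaviour behind it is a hypothesis, and the business case should label it as one.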
The failure is quieter than people think
The obvious reason the sequence problem persists is governance. Processes are designed around funding gates, and user research costs money before the money is approved. The less obvious reason is disciplinary. Benefits management and user research are owned by different teams, follow different methodologies, and rarely work in the same room.
I have worked across both disciplines, and across enough sectors (health informatics, data governance, housing technology, enterprise architecture) to recognise the gap is a consistent source of programme failure. Not the spectacular failure of a system which does not work. The quieter failure of a system which works exactly as specified, delivers against its plan, and does not produce the outcomes the business case promised.
The failure is normally attributed to adoption issues, organisational resistance, or the complexity of change. It is rarely attributed to what it is: a benefit identification process which did not go far enough before the numbers were written down.