Richard Sutcliffe

The Benefits Case Was Wrong, and the User Researcher Knew It

Most large programmes have a benefits manager and a user researcher. They are rarely in the same room. The benefits manager is building profiles, tracking realisation, and managing attribution percentages for a programme board. The user researcher is mapping journeys and feeding findings into a product backlog. Both disciplines are concerned with what real people do and why. They are sequenced so they almost never overlap in time, and the seam between them is where benefit cases quietly come apart.

I have worked on both sides of it.

The attribution number nobody is qualified to give

Every benefit profile contains an attribution percentage: the proportion of the benefit attributable to this programme rather than other factors. In practice, it is a judgement call made by the programme team, informed by analogous work, professional experience, and a fair degree of optimism. What it is rarely informed by is user research data about the human conditions determining whether the benefit lands at all.

Take a clinical information system claiming to save four hours per clinician per week, attributed sixty per cent to the programme. User research at benefit identification would tell you whether those four hours are lost to genuine inefficiency or to a compensating behaviour. If clinicians have built workarounds reliable enough to prefer over an unfamiliar system, the behavioural change required to land the benefit is far larger than the technical change. The attribution is not wrong in arithmetic. It is applied to a benefit dependent on a behaviour shift the programme has neither planned for nor staffed to deliver.
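The arithmetic above can be made concrete. A minimal sketch, using entirely hypothetical figures (the headcount and the behaviour-shift likelihood are invented for illustration; only the four hours and sixty per cent come from the example):

```python
# Hypothetical illustration of attribution inflation. Figures are made up,
# except the claimed saving (4 hours/clinician/week) and the 60% attribution,
# which come from the example in the text.

CLINICIANS = 200                  # hypothetical headcount
CLAIMED_HOURS_SAVED = 4           # per clinician per week (from the example)
ATTRIBUTION = 0.60                # share credited to the programme (from the example)

# Naive benefit case: attribution applied directly to the claimed saving.
naive_weekly_benefit = CLINICIANS * CLAIMED_HOURS_SAVED * ATTRIBUTION

# What user research might add: if clinicians prefer their workarounds, the
# benefit only lands to the extent behaviour actually shifts. This factor is
# hypothetical here -- the point is that it needs research, not a guess.
BEHAVIOUR_SHIFT_LIKELIHOOD = 0.5

adjusted_weekly_benefit = naive_weekly_benefit * BEHAVIOUR_SHIFT_LIKELIHOOD

print(f"Naive benefit case:    {naive_weekly_benefit:.0f} hours/week")
print(f"Research-adjusted:     {adjusted_weekly_benefit:.0f} hours/week")
```

The attribution percentage is applied correctly in both cases; what changes is whether the profile accounts for the behavioural condition at all. That missing factor is where the inflation lives.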

This is the attribution inflation problem. It is endemic in public sector programmes, and it persists because the person doing the calculation is not the person who knows what users do.

The dis-benefit hiding inside the workaround

The same gap shows up on the dis-benefit side, in a different form.

Dis-benefits are the negative outcomes a programme accepts as likely consequences of delivery. A productivity dip during transition is the standard example. Few profiles include the dis-benefit only visible to someone who has been in the room with users: the workaround the fix will break.

Every operational system of any age accumulates informal processes. Things people do to compensate for gaps in the official system. Not malicious. Adaptive. Invisible to programme teams unless someone has gone to look for them. When a programme fixes the underlying gap, the workaround breaks. Users who depended on it experience disruption. It surfaces as resistance to adoption, temporary performance decline, or complaints about the new system being worse than the old one even when, by every designed metric, it is better.

A dis-benefit register built from contextual inquiry surfaces these reliably. But you have to know to look for them. A specialist user researcher knows how to run the research. A specialist benefits manager knows what to do with the findings. Neither, in isolation, knows what the other needs.

The structural reason specialists miss this

This is not a criticism of specialists. Deep expertise produces capability generalists cannot replicate.

There is, though, a category of problem specialisation is structurally unlikely to see: the problem living in the gap between disciplines. The benefits manager does not go and do user research because it is not their job. The user researcher does not translate findings into benefit attribution adjustments because it is not their framework. The programme manager is responsible for both but lacks the depth in either to spot where the seam will let the programme down.

The person who sees it is the one who has spent time in both, with enough working knowledge of each to understand what the other needs from them.

It is exactly the profile I spent years explaining away in interviews. Broad experience reads as a softer hire than deep expertise, and gets quietly deprioritised in favour of specialisation. For most of my career, being a generalist felt like a liability to manage. It turns out to be the wrong reading.

What it looks like when both perspectives are in the room

Three things become obvious when someone holds both at once.

Baseline data needs collecting before project initiation, before awareness of the programme has contaminated user behaviour. Benefits management depends on a clean baseline. User research tells you how quickly behaviour responds to the anticipation of change. The implication does not appear in either framework alone.

Dis-benefit identification needs observational research, not workshop assumptions. Workshops surface what people say. Observation surfaces what they do. The gap between the two is where the unbudgeted dis-benefits live.

Benefit profiles need a change readiness assessment: whether users have the motivation, capability, and environmental conditions to change their behaviour in the way the profile assumes. This is neither a user research output nor a benefits management output. It is what emerges when someone who has read the user research data asks the question the benefit profile needs answered but does not know how to ask.

None of this is complicated. It requires someone to be in both conversations, which does not happen when the disciplines are staffed and managed independently.
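The three conditions above could be captured as fields on the benefit profile itself. A minimal sketch, with illustrative field names and example values invented for this essay, not drawn from any standard profile template:

```python
# Hypothetical sketch of a benefit profile carrying the three items the essay
# argues for: a pre-initiation baseline flag, observed (not assumed)
# dis-benefits, and a change readiness assessment. All names and example
# values are illustrative.
from dataclasses import dataclass


@dataclass
class ChangeReadiness:
    motivation: str   # will users want to change? e.g. "low: workaround trusted"
    capability: str   # can they? e.g. "unknown: no training budget line"
    environment: str  # do conditions allow it? e.g. "untested"


@dataclass
class BenefitProfile:
    name: str
    attribution: float            # programme's claimed share of the benefit
    baseline_collected: bool      # captured before initiation was announced?
    disbenefits_observed: list    # from contextual inquiry, not workshops
    readiness: ChangeReadiness    # the assessment neither discipline owns alone


profile = BenefitProfile(
    name="Clinician time saved",
    attribution=0.60,
    baseline_collected=False,
    disbenefits_observed=["breaks the informal phone-ahead admissions workaround"],
    readiness=ChangeReadiness(
        motivation="low: existing workaround trusted over new system",
        capability="unknown: no training budget line",
        environment="untested",
    ),
)

print(profile.name, profile.attribution)
```

Nothing here is technically demanding; the structure just forces the questions that fall into the seam when the two disciplines are staffed separately.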

What this means for how programmes are staffed

I am not arguing everyone should be a generalist. I am arguing programmes need people who are, and those people are structurally undervalued in how delivery capability is staffed and procured.

The gap between benefits management and user research is one instance of a pattern I have seen repeatedly across health informatics, data governance, and now housing technology. The technical discipline and the human discipline are both present, both competent, both working in good faith. The gap is not a performance problem. It is structural, created by the way expertise is organised and credentialled.

People with broad cross-domain experience are often the only ones standing on both sides of the gap. The ability to translate between disciplines without a shared language is the thing preventing programmes from failing in the quiet way: on time, on budget, and not delivering what the business case said they would.

The broad experience was not the liability. It was the point.