Richard Sutcliffe

Why Your Benefits Case Is Wrong, and Your User Researcher Knows It

Most large programmes have a benefits manager and a user researcher. The two are rarely in the same room at the same time, working from a shared understanding of what the other is trying to do. The benefits manager is building profiles, tracking realisation, managing attribution percentages, and reporting to a programme board. The user researcher is conducting discovery, mapping journeys, and feeding findings into a product backlog. Their work overlaps in subject matter. Both concern what real people do and why. But the disciplines are sequenced so they rarely overlap in time, and the seam between them is where benefit cases quietly come apart.

I know this because I have worked on both sides of it.

The attribution number nobody is qualified to give

Every benefit profile contains an attribution percentage: the proportion of the benefit attributable to this programme rather than other factors. It is a difficult number to arrive at honestly. In practice, it tends to be a judgement call made by the programme team, informed by analogous work, professional experience, and a fair degree of optimism.

What it is rarely informed by is user research data about the human conditions determining whether the benefit lands at all.

Take a clinical information system claiming to save four hours per clinician per week, attributed sixty per cent to the programme. User research conducted at benefit identification would tell you whether those four hours are lost to genuine inefficiency or to a compensating behaviour. If clinicians have built workarounds reliable enough to prefer over an unfamiliar system, the behavioural change required to land the benefit is far larger than the technical change. The attribution is not wrong in arithmetic. It is applied to a benefit dependent on a behaviour shift the programme has neither planned for nor staffed to deliver.

This is the attribution inflation problem. It is endemic in public sector programmes, and it persists because the person doing the calculation is not the person who knows what users do.
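The arithmetic in the clinical example above can be made concrete. The sketch below is illustrative only: the four hours and the sixty per cent attribution come from the example, but the cohort size, working weeks, hourly cost, and adoption rate are all invented assumptions, there to show how an adoption figure from user research reshapes a claimed benefit.

```python
# A minimal sketch of attribution inflation. Only the four hours saved and
# the 60% attribution come from the example in the text; every other figure
# is a hypothetical assumption.

HOURS_SAVED_PER_WEEK = 4    # claimed saving per clinician (from the profile)
ATTRIBUTION = 0.60          # share attributed to the programme (from the profile)
CLINICIANS = 200            # hypothetical cohort size
WEEKS_PER_YEAR = 42         # hypothetical working weeks per clinician
HOURLY_COST = 45.0          # hypothetical fully loaded cost per hour, in pounds

def annual_benefit(adoption_rate: float) -> float:
    """Attributed annual benefit, scaled by the share of clinicians who
    actually abandon their workaround for the new system."""
    hours = HOURS_SAVED_PER_WEEK * WEEKS_PER_YEAR * CLINICIANS * adoption_rate
    return hours * HOURLY_COST * ATTRIBUTION

claimed = annual_benefit(1.0)   # the profile implicitly assumes full adoption
observed = annual_benefit(0.4)  # e.g. research suggests only 40% will switch

print(f"claimed:  £{claimed:,.0f}")
print(f"observed: £{observed:,.0f}")
```

The attribution percentage is untouched in both cases; what moves the number is the behavioural assumption the profile never states.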

The dis-benefit hiding inside the workaround

The same gap shows up on the dis-benefit side, in a different form.

Dis-benefits are the negative outcomes a programme accepts as likely consequences of delivery. A productivity dip during transition is the standard example. Most benefit profiles include something like it. Few include the dis-benefit only visible to someone who has been in the room with users: the workaround the fix will break.

Every operational system of any age accumulates informal processes: things people do to compensate for gaps in the official system. They are invisible to programme teams unless someone has gone to look for them. Not malicious. Adaptive. When a programme fixes the underlying gap, the workaround breaks. Users who depended on it experience disruption. It surfaces as resistance to adoption, temporary performance decline, or complaints the new system is worse than the old one even when, by every designed metric, it is better.

A dis-benefit register built from contextual inquiry — the practice of observing people doing their actual work — surfaces these reliably. But you have to know to look for them. A specialist user researcher knows how to run the research. A specialist benefits manager knows what to do with the findings. Neither, in isolation, knows what the other needs.

The structural reason specialists miss this

This is not a criticism of specialists. Deep expertise in a single discipline produces capability generalists cannot replicate. Specialisation exists for good reasons.

There is, though, a category of problem specialisation is structurally unlikely to see: the problem living in the gap between disciplines. The benefits manager does not go out and do user research because it is not their job. The user researcher does not translate findings into benefit attribution adjustments because it is not their framework. The programme manager is responsible for both but lacks the depth in either to spot where the seam will later let the programme down.

The person who sees it is the one who has spent time in both disciplines, with enough working knowledge of both to understand what each needs from the other.

It is not a conventional career profile. It is, in fact, exactly the profile I spent years explaining away in interviews. Broad experience reads as a softer hire than deep expertise. For most of my career, being a generalist felt like a liability I had to manage. It turns out to be the wrong reading.

What it looks like when both sides are in the room

When you hold both perspectives at once, three things become obvious.

Baseline data needs collecting before project initiation, before awareness of the programme has contaminated user behaviour. Benefits management depends on a clean baseline. User research tells you how quickly behaviour responds to the anticipation of change. The implication does not appear in either framework alone.

Dis-benefit identification needs observational research, not workshop assumptions. Workshops surface what people say. Observation surfaces what they do. The gap between the two is where the unbudgeted dis-benefits live.

Benefit profiles need a change readiness assessment: whether users have the motivation, capability, and environmental conditions to change their behaviour in the way the profile assumes. This is neither a user research output nor a benefits management output. It is what emerges when someone who has read the user research data asks the question the benefit profile needs answered but does not know how to ask.
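One way to picture a change readiness assessment is as a simple structure over the three conditions the paragraph names. This is a hypothetical sketch, not an established framework: the class name, thresholds, and scoring rule are all my assumptions.

```python
# A hypothetical change-readiness check over the three conditions named in
# the text. The names, thresholds, and multiplicative scoring are
# illustrative assumptions, not an established method.

from dataclasses import dataclass

@dataclass
class ReadinessAssessment:
    motivation: float   # 0-1, from user research: do users want to change?
    capability: float   # 0-1: can they, given current skills and training?
    environment: float  # 0-1: do workload and infrastructure permit it?

    def score(self) -> float:
        # Multiplicative on purpose: a weak link in any one condition drags
        # the whole assessment down, which a simple average would hide.
        return self.motivation * self.capability * self.environment

    def weak_conditions(self, threshold: float = 0.5) -> list[str]:
        """Conditions falling below the threshold, flagged for the profile."""
        return [name for name, value in vars(self).items() if value < threshold]

clinic = ReadinessAssessment(motivation=0.3, capability=0.8, environment=0.7)
print(f"readiness: {clinic.score():.2f}, weak: {clinic.weak_conditions()}")
```

The point of the multiplicative score is the one the text makes: a benefit profile that assumes behaviour change is only as strong as the weakest of these conditions.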

None of this is complicated. It is available to anyone with working knowledge of both disciplines. But it requires someone to be in both conversations, which does not happen when the disciplines are staffed and managed independently.

What this means for how we staff programmes

I am not arguing everyone should be a generalist. I am arguing programmes need people who are, and those people are structurally undervalued in how we staff and procure delivery capability.

The gap between benefits management and user research is one instance of a pattern I have seen repeatedly across health informatics, data governance, and now housing technology. The technical discipline and the human discipline are both present, both competent, both working in good faith. Nobody is doing bad work. The gap is not a performance problem. It is structural, created by the way we organise and credential expertise.

People with broad cross-domain experience are often the only ones standing on both sides of the gap. The ability to translate between disciplines without a shared language is not a soft skill. It is a specific capability, and it is the thing preventing programmes from failing in the quiet way: on time, on budget, and not delivering what the business case said they would.

The broad experience was not the liability. It was the point.