What a billion tests a year and a burning platform taught me about User-Centred Design
Somewhere in England right now, a laboratory technician is staring at a dropdown of clinical codes, scrolling past dozens of near-identical options, trying to find the right one before the queue behind them grows. They do this hundreds of times a day. The codes were designed without them in mind. Three previous programmes, over more than a decade, had tried to fix this and failed. The reason they failed is the reason most data standards work fails. They were designed for terminological correctness, not for the people using them.
Why pathology codes matter to anyone outside pathology
Every GP visit, hospital admission, and blood test is tagged with clinical codes. Those codes determine whether you are on the right care pathway, how your practice gets paid, what national statistics get published, and whether a researcher in five years understands what test was run and what the result meant.
Get them right and the system works. Get them wrong and you get inconsistency, failed linkages, and data which looks clean but tells a misleading story.
Pathology sits at the heart of this. England runs over a billion tests a year. The NHS spends around £2.5 billion on pathology annually, with volume growing about ten per cent each year. The workforce is shrinking. Experienced pathologists are retiring faster than they are being replaced. Automation is the only sustainable answer, and automation depends entirely on clean, machine-readable clinical data.
The failure mode three programmes shared
The Pathology Bounded Code List, introduced in 1996, became the de facto standard because nothing better existed. By 2011, a Royal College of Pathologists review had identified three significant problems with it: inconsistencies caused by local variations across trusts; gaps in coverage for modern tests; and the fact that the Read coding system it was based on had been retired and replaced with SNOMED CT across the rest of the NHS.
Three replacement attempts followed. All of them failed, and for consistent reasons: insufficient funding, and an approach that prioritised terminological purity over usability. Nobody had asked the people using the codes whether the replacements would work in practice.
Then COVID arrived. With Read coding retired, there was no official mechanism to create new codes for COVID testing. The existing EDIFACT messaging format, originally designed in 1987 for United Nations trade documentation, simply could not carry SNOMED CT codes: they were too long for its fields. The workaround was to transport standardised descriptions in the free-text field of the existing messaging system. A billion-test-per-year clinical discipline was, in a crisis, routing around its own infrastructure.
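The size mismatch is easy to see. As a minimal sketch (the field width below is a simplified assumption for illustration, not the actual NHS EDIFACT pathology message specification), a fixed-width field sized for a five-character Read code has no room for a SNOMED CT identifier, which can run to 18 digits:

```python
# Illustrative sketch only: the field width is a simplified assumption,
# not the real NHS EDIFACT pathology message specification.

READ_FIELD_WIDTH = 5  # Read v2 codes are five characters


def fits_in_field(code: str, width: int = READ_FIELD_WIDTH) -> bool:
    """Return True if the code fits in a fixed-width field of the given size."""
    return len(code) <= width


read_style_code = "44J3."              # a Read v2-style five-character code
snomed_style_code = "1029481000000103"  # a placeholder SNOMED CT-style identifier

print(fits_in_field(read_style_code))    # True
print(fits_in_field(snomed_style_code))  # False
```

Both example codes are placeholders; the point is the shape of the problem, not the specific identifiers.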
This was the burning platform.
What we did differently, and why it worked
The programme I led introduced three interrelated standards. The Unified Test List, a Units of Measure standard, and a new Pathology Message Specification based on HL7 FHIR. All built on internationally recognised, maintained technologies. This is the technical story.
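To make the moving parts concrete, here is a rough sketch of what a result carried in a FHIR-based message looks like, built as a plain Python dict. The SNOMED CT code, display text, and values are illustrative placeholders, and the real Pathology Message Specification defines its own profiles; the sketch only shows the general FHIR R4 Observation shape, with SNOMED CT identifying the test and UCUM identifying the unit:

```python
# A minimal sketch of a FHIR R4 Observation carrying a pathology result.
# The code, display text, and values are illustrative placeholders, not
# taken from the actual NHS Pathology Message Specification.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",  # SNOMED CT, replacing Read codes
            "code": "1000731000000107",           # placeholder identifier
            "display": "Serum creatinine level",
        }]
    },
    "valueQuantity": {
        "value": 72,
        "unit": "umol/L",
        "system": "http://unitsofmeasure.org",   # UCUM, a maintained units standard
        "code": "umol/L",
    },
}

print(json.dumps(observation, indent=2))
```

The design point is that every part of the message leans on a maintained international standard (FHIR for structure, SNOMED CT for the test, UCUM for the unit), rather than on a bespoke format that one programme would have to keep alive on its own.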
The part I find more interesting is how we developed them.
Previous attempts had been led by the question: what is the correct code? We started from a different question. Who needs to use this, under what conditions, and what will work for them?
So we conducted user research. We developed personas. We mapped user journeys and process maps. We identified pain points. We brought users and subject matter experts into alpha and beta reviews, and incorporated their feedback before the standards were finalised. We applied User-Centred Design to clinical coding — which is about as left-field an application of UCD as you are likely to find.
The insights were concrete. The screen real estate available to a lab technician selecting a code is limited. Long lists of similar descriptions are slow to scan, and the convention of putting the distinguishing term at the right-hand end of a description forced users into time-consuming horizontal scrolling whenever multiple terms shared a prefix. We redesigned the code descriptions to be shorter, more distinctive, and differentiable on screen without scrolling. The codes became usable because we understood the environment they would be used in.
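The display problem can be sketched in a few lines. The descriptions and the 30-character dropdown width below are invented for illustration, not taken from the real code lists; the point is that a shared prefix makes every truncated entry identical, while front-loading the distinguishing term keeps them distinct:

```python
# Illustrative only: the descriptions and dropdown width are invented to
# show the display problem, not taken from the real code lists.

def visible(description: str, screen_width: int = 30) -> str:
    """What a technician actually sees in a dropdown of the given width."""
    return description[:screen_width]


# Old style: a long shared prefix pushes the distinguishing term off-screen.
old_terms = [
    "Serum immunoglobulin measurement - IgA",
    "Serum immunoglobulin measurement - IgG",
    "Serum immunoglobulin measurement - IgM",
]

# Redesigned style: the distinguishing term comes first.
new_terms = ["IgA serum level", "IgG serum level", "IgM serum level"]

print({visible(t) for t in old_terms})  # collapses to one indistinguishable entry
print({visible(t) for t in new_terms})  # three distinct entries
```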
We also used our understanding of the information recorded alongside different tests to design better messaging structures: structures that hold the quantification and qualification data each specialism needs, rather than forcing a compromise that serves nobody well. Working directly with users and their representative bodies let us prioritise the roadmap so that the specialisms with the greatest need got coding and messaging first, rather than working through an arbitrary order.
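The quantification-versus-qualification point can be illustrated with two result shapes. The field names follow generic FHIR conventions and the values are placeholders; the real specification defines its own profiles. A biochemistry result is a number with a unit, while a microbiology result is a coded finding, and a structure built for one is a poor fit for the other:

```python
# Illustrative only: two sketched result shapes showing why different
# specialisms need different structures. Values are placeholders.

# Clinical biochemistry: a quantified result with a unit.
biochemistry_result = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "Serum sodium level"},
    "valueQuantity": {"value": 140, "unit": "mmol/L"},
}

# Microbiology: a qualified result - an organism, not a number.
microbiology_result = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "Organism identified"},
    "valueCodeableConcept": {"text": "Escherichia coli"},
}

for result in (biochemistry_result, microbiology_result):
    value_key = next(k for k in result if k.startswith("value"))
    print(result["code"]["text"], "->", value_key)
```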
The result was the first new NHS pathology standard in 25 years, published as DAPB4017 in 2020.
The pattern repeats wherever standards meet people
I have spent over 25 years delivering healthcare technology, data strategy, and digital transformation. User-Centred Design comes up constantly in the context of consumer-facing digital products. It is the obvious application. The projects where I have seen it make the most difference are often the ones where it seems least obvious.
Clinical coding standards are not a user experience problem on the surface. They are a terminological and technical problem. And yet the failures preceding this work were, at their core, failures of user understanding. The people designing the codes were not the people using them. This gap was never bridged, and the result was standards correct in theory and unworkable in practice.
The same pattern appears in data governance, metadata strategy, and enterprise data architecture. The tendency in those disciplines is to design for logical correctness and assume adoption will follow. It rarely does. The questions determining whether a system gets used — and used correctly — are questions about people. What they are trying to do, under what constraints, with what level of expertise, in what environment.
Those questions are not answered by looking at the data model. They are answered by listening.