Health Policy
Algorithms of destruction?

By Integrated Care Journal

Our exam boards dish out perplexing results and our health policy drives lockdown, yet neither always gets it right. The models they use are sophisticated and complex, taking account of an extensive range of variables, but they often end up missing the mark. So, what is wrong with our algorithms?

Algorithms and models are everywhere, and we approach them sensibly most of the time. For instance, a car driven by a speed demon during the week and by a fuel-obsessive at the weekend may suddenly display a peculiarity: the further it travels the more miles it predicts are left in the tank! We understand this and use other gauges when deciding whether to fill up. Some policymakers, however, trust algorithms implicitly (or implicitly trust those who do).
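The fuel-gauge quirk can be sketched in a few lines. If the car estimates remaining range from a moving average of recent fuel economy, then a switch from aggressive weekday driving to gentle weekend driving makes the predicted range climb even as fuel drains. All the numbers below are illustrative assumptions, not real vehicle data:

```python
# Sketch of how a fuel-range estimator can mispredict when driving style
# changes. Every figure here is illustrative, not real vehicle data.

def update_economy(avg_kmpl, trip_kmpl, alpha=0.3):
    """Exponential moving average of fuel economy, weighted towards recent trips."""
    return (1 - alpha) * avg_kmpl + alpha * trip_kmpl

fuel_litres = 30.0   # fuel left in the tank
avg_kmpl = 8.0       # economy after a week of hard driving
ranges = [fuel_litres * avg_kmpl]

for _ in range(3):                    # three gentle 20 km weekend trips
    trip_km, trip_kmpl = 20.0, 14.0   # weekend driving is far more economical
    fuel_litres -= trip_km / trip_kmpl             # fuel actually burned
    avg_kmpl = update_economy(avg_kmpl, trip_kmpl) # estimate drifts upwards
    ranges.append(fuel_litres * avg_kmpl)

print([round(r) for r in ranges])   # predicted range rises as the tank empties
```

The model is not broken; it is simply answering a narrower question ("range at recent economy") than the one the driver is asking, which is why we sensibly cross-check it against the gauge.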

Throughout the Covid crisis, home shopping has had a bonanza on the back of sophisticated track-and-trace algorithms. Remember the mess at the start, when getting a supermarket delivery was like finding gold dust? The companies soon fixed that, opened more slots, recruited staff, and the sector has boomed. Meanwhile, the wider economy, education and health have withered.

Why has retail been so lucky and policymaking so unfortunate? Is it the algorithms?

I’ve spent my life with computer models from photonics to healthcare. At first, I was involved in specification and coding but later lost sight of the details, relying on other people’s eyes and keyboards when leading research teams.

Making good decisions with algorithms is clearly possible, even if it is hard at times. Success relies on good test cases and on measuring key trends, as well as on sound judgement. A good test case runs the model against what really happened, but tests fail without good data. Our problems with data at the start were understandable, but the lack of quality now almost suggests a wilful commitment to the joys of flying blind.

However, even if a model’s predictions cannot be checked directly – because the model or the data are not good enough – we can still work with the trends it predicts.
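One way to test a trend rather than a level is to compare the doubling time a model predicts with the doubling time implied by the measured series. If the model forecasts cases doubling every week but the data grows far more slowly, the shape of the trend is wrong before we even argue about the absolute numbers. The figures and the seven-day doubling assumption below are illustrative only, not taken from any real forecast:

```python
# Sketch: checking a model's predicted trend against measured data by
# comparing doubling times. All series here are made-up illustrations.
import math

def doubling_time(series, interval_days=1):
    """Doubling time (in days) implied by the series' average growth rate."""
    growth = (series[-1] / series[0]) ** (1 / (len(series) - 1))
    return interval_days * math.log(2) / math.log(growth)

predicted = [3000 * 2 ** (d / 7) for d in range(15)]   # doubles every 7 days
measured = [3000 * 1.03 ** d for d in range(15)]       # ~3% daily growth

print(round(doubling_time(predicted)))   # 7
print(round(doubling_time(measured)))    # 23
```

Even with noisy or undercounted data, a gap this large between predicted and implied doubling times says the trend, not just the level, is off.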

The 50,000-cases-a-day scenario was a great example of testing against reality and of testing for trends: it was a brave move to make it public. We can now see that the algorithm significantly overpredicted the number of people testing positive, while raising questions about the shape of the trend. Evidence is also emerging that many more people may have immunity than the model assumes.

It is hard to be clear because the data is so poor, but data specialists (@RP131, for instance) as well as public health commentators (such as the Centre for Evidence-Based Medicine at Oxford, www.cebm.net) note the differences between the predicted trends and the measured rises.

There were always good scientific reasons to doubt a tsunami. Wavelets? Probably. A second wave to dwarf the first? Unlikely. Manageable waves from here? Absolutely!

It’s more complicated than that, of course, but the key message is that algorithms are neither magic nor a menace. At their best, they have kept us going through the madness. At worst, faith in them may be contributing to it.


Professor Terry Young worked in industrial R&D before becoming an academic and is now Director of Datchet Consulting. He has over 30 years' experience in technology development and strategy, health systems, and methods to ensure value for money. His current focus is designing services using computer models, and he set up the Cumberland Initiative to support healthcare organisations wishing to develop their services more systematically.

Three of his downloadable papers are:

Using industrial processes to improve patient care (2004, with Brailsford et al., British Medical Journal)

Performing or not performing: what’s in a target? (2017, with Eatock & Cooke, Future Hospital Journal)

Systems, design and value-for-money in the NHS: mission impossible? (2018, with Morton and Soorapanth, Future Hospital Journal)

