A few things on my mind at the moment, with what feels like a common theme that I’ll try to write a few words about …
In all of the pandemic retrospectives going round in the British media this week, one thing seems to be missing. It’s stuck in my mind, but seemingly nobody else’s, that according to the Global Health Security Index, as of the start of 2020, the United Kingdom was the second-best prepared country in the world in terms of overall preparedness for a novel pathogen, and the best prepared when it came to “rapid response to and mitigation of the spread of an epidemic”. The best prepared country in the world was the USA.
What I think is on my mind is a concept related to “target fixation”, which was the term coined in World War 2 to describe the tendency of pilots to sometimes crash into the things they were meant to be attacking. I worry that something like target fixation is a huge risk in the otherwise useful and important exercise of scenario planning. The purpose of thinking in terms of scenarios is to see what they reveal about the flexibility and responsiveness of your systems. But there is always a temptation to judge the exercise based on how good the outcome is assessed to be in the specific scenario.
That seems to be what happened with respect to pandemic preparedness; some systems, including the UK’s, were incredibly well-drilled for a pathogen that behaved basically like flu. The novel coronavirus had slightly different characteristics, which meant that it spread in a slightly different way, and it turned out that preparation for the scenario we had planned for was surprisingly little help with the one we got. My worry is that by trying too hard to “learn the lessons” of the 2020-21 pandemic, we risk doing exactly the same thing again.
One of the reasons why I’ve always been a little bit leery of “the lessons from history” is that although history in general can be very useful as a means of amplifying your ability to deal with complex situations, very big historical events seem to be really prone to causing target fixation. Armies, for example, are so prone to developing fragile solutions which only apply to a recent and highly salient war that there’s a proverb about it (generals are always preparing to fight the last war). I have similar worries about climate risk scenarios, and I think it’s really quite obvious that financial regulatory stress testing is heading for outright uselessness[1] through a combination of target fixation and arbitrage behaviour.
What can be done? I suspect that the “Campaign For Real Stress Tests” will make more appearances on this ‘stack, because I’ve always been an advocate of reverse stress testing (assume that your project has failed extremely badly, and then write the scenario which took you there). Similarly, “lessons learned” exercises should concentrate not on learning what we might have done to make things better, but on what we did do that made things worse.
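For the mechanically minded, here’s a minimal sketch of the difference in Python. Everything in it (the portfolio, the risk factors, the sensitivities and the failure threshold) is made up for illustration: the point is just that a conventional stress test picks the scenario and computes the loss, whereas a reverse stress test fixes the loss and searches backwards for the scenarios that produce it.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical portfolio: P&L (in £m) driven by two made-up risk
# factors, a rate shock and a credit-spread shock. The sensitivities
# are invented for illustration, not taken from anywhere real.
def portfolio_pnl(rate_shock, spread_shock):
    return -400.0 * rate_shock - 900.0 * spread_shock

# Conventional stress test: pick a scenario, compute the damage.
# Reverse stress test: fix the damage first ("the project has failed
# extremely badly"), then search for the scenarios that cause it.
FAILURE = -25.0  # the pre-specified catastrophic loss

rate_shocks = rng.normal(0.0, 0.02, N)    # e.g. a 2% s.d. rate move
spread_shocks = rng.normal(0.0, 0.01, N)  # e.g. a 1% s.d. spread move
pnl = portfolio_pnl(rate_shocks, spread_shocks)

failed = pnl <= FAILURE
print(f"{failed.mean():.2%} of sampled scenarios reach the failure state")
print(f"typical failing rate shock:   {rate_shocks[failed].mean():+.3f}")
print(f"typical failing spread shock: {spread_shocks[failed].mean():+.3f}")
# The output of the exercise is not the loss number (you chose that in
# advance) but the description of the states of the world that get you
# there -- which is what you then write up as the scenario.
```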
I think you’re a lot less likely to get target-fixated if you do scenario planning in a way that recognises that some scenarios are ones where, as my old boss said, “there’s nothing you can do except sit down and take your medicine”, rather than ones where there’s a solution. As I say above, the purpose of these things is not the hypothetical outcome, it’s what you learn about the effectiveness of your process. Interestingly to me, cyber risk “red team” exercises seem to be much better at understanding this than any other kind I’m familiar with; people just accept that the core of the scenario is “computer done stopped working”, rather than getting obsessed with the minutiae. But I suspect this is because cyber risk is relatively new, and that in time they will develop all the same pathologies as the rest of management.
Anyway, I’m currently writing a proposal for what I hope will be my next book, and consequently my computer screen is taken up with this sentence:
“The great irony – perhaps the tragedy – of our time is that just at the moment when we have delegated so much important decision making to data, models and evidence, we are faced with so many problems that aren’t in the dataset because they have never happened before.”
[1] Not quite uselessness; these days, the stress test is just a complicated way to reach a result of “stick another couple of points on the capital requirement”. But this is a bit of a disappointment for people who had hoped that the regulation of such a complex system might one day progress beyond arithmetic.
