The UK is rightly transfixed by the unfolding story of the catastrophic fire at Grenfell Tower in Kensington, London, which at the time of writing had led to 30 confirmed deaths.  This follows – as the Queen pointed out in her birthday message – the terrorist attacks at Westminster Bridge, London Bridge and in Manchester.

‘Disaster economics’ seems like an inappropriately technocratic topic at a time like this.  But disasters often have their roots in the inherent challenges of disaster economics [or rather, disaster economics and statistics].  And failing to rise to those challenges can lead to more disasters than necessary.

One of the main challenges is figuring out the frequency of disaster events of different severities, when such events are relatively rare.  In a small sample of a few years, you will have many observations of rainfall around the most common quantity, but very few – perhaps even no – occurrences of huge floods.  Estimating the probability of a huge flood or a catastrophic drought is therefore a more hazardous business than estimating the probabilities of milder events.
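The small-sample problem can be made concrete with a simulation.  This is a minimal sketch, not anything from the text: it assumes annual flood peaks follow a Gumbel distribution (a standard extreme-value assumption) with made-up parameters, and shows how the naive frequency estimate of a rare flood behaves at different record lengths.

```python
import random
import math

random.seed(42)

def annual_peak():
    # Gumbel-distributed annual flood peak, via inverse transform sampling.
    # Location 3.0, scale 0.5 -- purely illustrative parameters.
    u = random.random()
    return 3.0 - 0.5 * math.log(-math.log(u))

THRESHOLD = 5.0  # a "huge flood" level, well out in the tail

def estimated_exceedance_prob(n_years):
    # Naive estimate: share of years in the record exceeding the threshold.
    sample = [annual_peak() for _ in range(n_years)]
    return sum(x > THRESHOLD for x in sample) / n_years

# A 10-year record will often contain no huge floods at all, so the
# estimate is frequently exactly 0; a very long record pins the true
# probability (here about 1.8%) down much more tightly.
for n in (10, 50, 100_000):
    print(n, estimated_exceedance_prob(n))
```

With a decade of data the estimate is dominated by whether a huge flood happened to occur at all; the 1-in-100 event is simply not identified from the record.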

A key part of this problem is figuring out not just the frequency of very bad things, but how that frequency changes as policy changes.  How high would a sea wall have to be to reduce flooding of a seaside town to one year in a hundred?  How much would need to be spent on surveillance of potential terrorists to reduce the frequency of London Bridge-style attacks to one every ten years?

A focus of the literature on policymaking, when the chance of very bad things happening is poorly estimated, is the idea of ‘robustness’.  I came across it through later applications by Tom Sargent and others to monetary policymaking, but the idea was borrowed from engineering and control work done in the 1960s and 1970s.  The idea is to set up the policy problem so that one chooses the policy that does best in the event that things turn out as badly as they could, relative to the benchmark understanding of the problem.  To translate:  imagine our initial guess at the height of sea wall needed to get the frequency of flooding down to one year in a hundred is 3 metres.  We then ask:  what is the worst this required height could actually be without that being apparent in the data we have?  Suppose the answer is 5 metres.  We then fund a sea wall of 5 metres.
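The sea-wall example can be sketched as a min-max choice: pick the policy whose worst outcome, across all the models the data cannot rule out, is least bad.  All the numbers below (costs, damages, the set of plausible required heights) are hypothetical, chosen only to reproduce the 3m-versus-5m logic of the example.

```python
# Candidate "true" required heights (metres) that a short flood record
# cannot rule out; 3.0 is the benchmark guess, 5.0 the worst case.
plausible_required_heights = [3.0, 3.5, 4.0, 4.5, 5.0]
candidate_walls = [3.0, 3.5, 4.0, 4.5, 5.0]

BUILD_COST_PER_M = 1.0  # hypothetical cost units
FLOOD_DAMAGE = 10.0     # loss suffered if the wall turns out too low

def loss(wall, required):
    # Total loss under one model of the world: building cost,
    # plus flood damage if the wall is below the true required height.
    cost = BUILD_COST_PER_M * wall
    if wall < required:
        cost += FLOOD_DAMAGE
    return cost

def robust_choice(walls, models):
    # Min over policies of the max over models: the robust policy.
    return min(walls, key=lambda w: max(loss(w, m) for m in models))

print(robust_choice(candidate_walls, plausible_required_heights))  # 5.0
```

Because flood damage dwarfs the extra building cost here, the worst-case criterion funds the 5-metre wall, exactly as in the text; with cheaper damage or dearer concrete the robust choice could differ.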

Two difficulties follow from this.  The first is that it is often not easy to put a boundary on the ‘worst-case scenario’.  If our time series on floods is short, patchy or inaccurate, we might not have a good idea where that boundary lies.

A second difficulty arises from the fact that scarce public funds have to deal pre-emptively with multiple sets of disasters of unknown probability.  If the only thing we had to spend money on were terrorist attacks, we could simply define the worst-case amount needed to get attacks down to one in ten years as the total feasible tax take.

But in reality governments have to deal with the risk of tower block fires, hospital epidemics, terrorist attacks, wars, floods, road pile-ups, corruption, cyber attacks, financial crises, climate change, prison riots, and much more.

An overly cautious approach to avoiding one kind of catastrophe diverts funds from preventing others, and will lead to more catastrophes of those kinds.

The problem of disaster policymaking gets harder when we place it in the context of a real life democracy with real voters.   Several issues arise.

First is gaining acceptance that – particularly given the multi-dimensional and competing nature of disasters that we face – it is impossible to eliminate risk entirely.

A second problem, which derives from this, is the demand for ‘something to be done’ in response to a disaster.  I say it derives from the first difficulty because, even with optimal disaster policy, there are going to be disasters, so it may be that nothing needs to change at all.  I do not make this point to pretend that this is the position we are in at the moment.   There are plenty of persuasive arguments emerging out of the coverage of the Grenfell Tower fire and the recent terrorist attacks that might lead us to think that things do have to be done.

Third, given the news cycle, short memories, and the limited horizons of politicians in a competitive democracy, there is pressure for something to be done quickly enough for the incumbents to claim credit for responding promptly and appropriately to the disaster.  A better something – one that did not drain money from effective disaster prevention elsewhere, and that emerged out of a time-consuming investigation – cannot always be waited for.

Fourth, policy is made through the prism of voters’ psychological responses to different kinds of risk.  These responses are not always rational, as research in behavioural economics and related fields has shown.

A famous recent example is the response of US citizens to the 2001 terrorist attacks, in which hijacked planes were used as bombs.  The thought of being caught up in such a horrible event, however unlikely, was enough to cause so many people to switch to road transport that far more were killed on the roads – through the mundane, but less awful to contemplate, risk of crashes – than would have died given any plausible estimate of the chance of repeat hijackings.

Making this point is rather distasteful given what Grenfell Tower residents went through, and how that disaster might well have been averted with safer construction or better evacuation advice.  I hope you take me to mean not that the reactions analysed so far are misplaced, but that the good that can come out of this tragedy is not confined to fire safety; it extends to an appreciation of disaster economics and policy as a whole.

Another feature of the disaster policy problem is that it can be easier to muster political support and momentum to respond to events that are fresh in the mind than to risks that appear – at least to some local constituency – to be latent:  that is, risks that have a certain probability of happening but have not yet happened.  This is perhaps what dogs climate change mitigation, where the connection between our individual choices and the problem is hard to detect.

Climate change’s most dramatic effects to date seem far from the UK:  at the poles, in glacial areas, or in low-lying, poorer economies that most of us have not visited.   And the connection between the policy choice and the event is far removed in time [in this case, discussion revolves around temperature changes over 100 years].

This is not to push back against the likely response to the Grenfell Tower fire.  Far from it.  The point is rather that, winding back time to before the fire, policy choices up to that point might later be seen to have been tainted by a failure to respond appropriately to risks that were then latent, yet to crystallise.

A final aspect of the disaster policy problem relates to general difficulties that people, the media and policymakers tend to have in dealing with statistics and policy analysis.

These difficulties surface all the time, and cropped up in the much more mundane and less tragic context of the debate around Brexit.

For example:  framing the analysis of the cost of Brexit as the assertion that people would, with certainty, be x amount poorer [which invited the counter-attack under the banner of ‘project fear’];  the deduction by Brexiteers that the counterfactual analysis HMT and others did could be dismissed as a mere ‘forecast’;  the observation by Brexiteers that pre-referendum forecasts of the UK economy turned out to be ‘wrong’.  And many more.

There are pockets of wisdom in public policy thinking that bear on this disaster economics issue.  For example, in health there is the QALY [quality-adjusted life year], a way of figuring out how many units of good life a given amount of spending on different treatments confers, and thus of allocating money between them to preserve the maximum amount of life per £.  And in defence analysis there was a tradition of the exact opposite:  working out how much it costs to kill the enemy using different weapons, and therefore maximising the number killed per £ of expenditure.
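The QALY logic is, at heart, a budget-allocation rule: fund treatments in descending order of QALYs gained per pound until the money runs out.  A minimal sketch, with entirely made-up treatment names, costs and QALY figures:

```python
treatments = [
    # (name, cost per patient in £, QALYs gained per patient, patients needing it)
    # Hypothetical figures, for illustration only.
    ("statins",          100, 0.5, 1000),
    ("hip replacement", 8000, 4.0,  100),
    ("new cancer drug", 50000, 1.0,   50),
]

def allocate(budget, treatments):
    # Greedy allocation: fund the most cost-effective treatment
    # (highest QALYs per £) first, then move down the list.
    total_qalys = 0.0
    plan = {}
    for name, cost, qalys, demand in sorted(
            treatments, key=lambda t: t[2] / t[1], reverse=True):
        n = min(demand, int(budget // cost))
        plan[name] = n
        budget -= n * cost
        total_qalys += n * qalys
    return plan, total_qalys

plan, qalys = allocate(1_000_000, treatments)
print(plan, qalys)
```

Ranking by cost-effectiveness ratio is the textbook heuristic; because treatments are lumpy (whole patients, indivisible costs) it is not guaranteed to be exactly optimal, but it captures the QALY idea of maximising life gained per £.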

But this kind of analysis tends to be kept under wraps for fear of causing revulsion and a collapse in support for state activities.

Reading this draft back, there is a risk that some are going to take it as a tactless and dry response to the sickening events of the last few months.

But the intention is to point out that there is a need not just to get the Grenfell Tower response right, but also to take a look at the government’s approach to disaster economics as a whole.  Is the tax take reserved for such things large enough?  Is it divided up in the right way?  Are all regulations – not just fire regulations – striking the right balance between liberty and disaster prevention, and not one determined by the kinds of dysfunction described above?