Methods known from climate science can make Covid-19 models more robust

University of Amsterdam

How reliable are the computer models that governments use to decide which measures to take against the Covid-19 pandemic? A team led by chemist and computer scientist Peter Coveney put the British model CovidSim to the test, using techniques known from other highly complex models, such as those used in weather prediction and climate science. The team shows that small variations in the input parameters can lead to large variations in the outcomes. And they offer a solution.

When Covid-19 suddenly spread around the world in the early months of 2020, there was an urgent need for models that could predict which measures would be most effective at reducing the spread of the virus and limiting the number of deaths. Much was still uncertain about this new virus, its infectiousness, and its health toll. One of the first models to receive widespread attention was CovidSim, developed in March by a group of scientists at Imperial College London. At the time, this model helped convince both British and American politicians to introduce lockdowns to prevent the high number of projected deaths. In the months since, however, doubts have arisen about its reliability.

6000 runs

In the United Kingdom, London's Royal Society therefore commissioned a team of independent researchers to put the model to the test. The team was led by Peter Coveney, who, in addition to being director of the Centre for Computational Science at University College London (UCL) and professor of physical chemistry there, is professor by special appointment at the University of Amsterdam's Informatics Institute. Coveney and his team decided to test the robustness of the CovidSim model using techniques known from the modeling of other highly complex systems, such as the weather and the climate. They used a supercomputer to run the model 6000 times, each time with slightly different input parameters.
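
To give a feel for what such an ensemble campaign involves, the sketch below shows the core loop in Python. The parameter names, their uncertainty bounds, and the toy surrogate standing in for CovidSim are all illustrative assumptions, not the study's actual setup, in which each evaluation was a complete CovidSim run on a supercomputer.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed uncertainty bounds for three illustrative inputs; the names and
# ranges are placeholders, not CovidSim's actual parameter definitions.
BOUNDS = {
    "latent_period_days":       (3.0, 7.0),
    "social_distancing_effect": (0.25, 0.75),
    "delay_to_isolation_days":  (1.0, 5.0),
}

def sample_parameters():
    """Draw one parameter set uniformly within the assumed bounds."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}

def run_model(p):
    """Toy surrogate standing in for a full CovidSim run; returns a
    fictitious death toll. In the real study, each evaluation was a
    complete CovidSim simulation."""
    return (40_000.0
            * (p["latent_period_days"] / 5.0)
            * (1.0 - 0.8 * p["social_distancing_effect"])
            * (1.0 + 0.15 * p["delay_to_isolation_days"]))

# 6000 ensemble members, each with slightly different inputs.
ensemble = np.array([run_model(sample_parameters()) for _ in range(6000)])
```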

Coveney's team found that tweaking the input parameters led to highly variable outcomes. This matters, because many of these parameters carry a high degree of uncertainty. For instance, how effective exactly is a behavioral measure like social distancing at slowing the spread of the virus? Small variations in the assumed effectiveness of such a measure can be amplified over the course of a full model run and eventually lead to a difference of tens of thousands of predicted deaths.

Most crucial parameters

There are as many as 940 variable parameters in the CovidSim model, but Coveney's team found that nineteen of them matter most for the eventual outcome. Moreover, up to two-thirds of the variation in the model's output turned out to be determined by just three parameters: the length of the phase in which an individual is already infected but cannot yet pass the virus on to others; the effectiveness of social distancing; and how soon an infected person goes into isolation.
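
Apportioning a model's output variance among its inputs in this way is known as variance-based sensitivity analysis, and a standard technique for it is the computation of Sobol indices. The sketch below does this with the SALib library on the same toy surrogate as before; it illustrates the general technique, not the team's actual uncertainty-quantification pipeline.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Same three illustrative parameters as before; names and bounds are
# assumptions for demonstration, not CovidSim's real inputs.
problem = {
    "num_vars": 3,
    "names": ["latent_period_days", "social_distancing_effect",
              "delay_to_isolation_days"],
    "bounds": [[3.0, 7.0], [0.25, 0.75], [1.0, 5.0]],
}

def toy_model(x):
    """The toy surrogate from the previous sketch, as a vector function."""
    latent, distancing, delay = x
    return (40_000.0 * (latent / 5.0)
            * (1.0 - 0.8 * distancing)
            * (1.0 + 0.15 * delay))

# Saltelli sampling plus Sobol analysis; S1 is each parameter's
# first-order share of the output variance.
X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(toy_model, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name}: first-order Sobol index = {s1:.2f}")
```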

All this doesn't mean that the model cannot be used. And Coveney is hesitant to criticize the predictions made by the Imperial College team in March. 'They did the best job possible under the circumstances', he comments. But he also stresses that a different approach to these models is needed. Coveney: 'Our findings are important for government and healthcare policy decision making, given that CovidSim and other such epidemiological models are - quite rightly - still used in forecasting the spread of COVID-19. Like predicting the weather, forecasting a pandemic carries a high degree of uncertainty and this needs to be recognized.'

Solution

The solution, according to Coveney, is to always run these models as an ensemble, as is customary in climate science, for instance. The outcome is then a range rather than a single number, with the average value within that range as the most probable outcome. When Coveney's team did this with the CovidSim model to predict the death toll in the UK under lockdown, the average over their runs was twice as high as the number the Imperial College team predicted in March, but closer to the actual figures.
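
In code, turning an ensemble into a range and a central estimate is straightforward. Continuing the earlier sketch, and assuming `ensemble` holds one predicted death toll per run:

```python
import numpy as np

# `ensemble` holds one predicted death toll per run (see the earlier sketch).
outputs = np.asarray(ensemble)

mean_estimate = outputs.mean()               # central estimate of the ensemble
low, high = np.percentile(outputs, [5, 95])  # 90% uncertainty range

print(f"predicted deaths: {mean_estimate:,.0f} "
      f"(90% range {low:,.0f} to {high:,.0f})")
```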

The team's findings have not yet been peer-reviewed, but they are available as a preprint and are expected to be formally published soon. The preprint has already attracted considerable attention; among other places, it is discussed in the news section of Nature and on the widely read blog of the British Science Museum Group.

Partners

In addition to University College London and the University of Amsterdam, the other partners involved in this study are CWI (Centrum voor Wiskunde en Informatica) in the Netherlands, Brunel University London, and the Poznan Supercomputing and Networking Centre in Poland.

As soon as it is available, a link to the peer-reviewed version of this study will be added.
