Sean Muller: “South Africa’s use of COVID-19 modelling has been deeply flawed. Here’s why”

President Ramaphosa’s government is easing the lockdown because of unsustainable economic costs.

Seán Mfundza Muller, University of Johannesburg

When President Cyril Ramaphosa announced the decision to implement an initial 21-day national lockdown in response to the threat posed by the COVID-19 pandemic, he referred to “modelling” on which the decision was based. A media report a few days later, based on leaked information, claimed that the government had been told that “a slow and inadequate response by government to the outbreak could result in anywhere between 87,900 and 351,000 deaths”. These estimates, the report said, were based on population infection rates of 10% and 40% respectively.

In late April, the chair of the health minister’s advisory committee sub-committee on public health referred to the early models used by the government as “back-of-the-envelope calculations”, saying they were “flawed and illogical and made wild assumptions”.

These assertions have been impossible to fully assess. This is because no official information on the modelling has ever been released – despite its apparently critical role.

A briefing by the chair of the health minister’s advisory committee in mid-April sketched some basic details of what the government’s health advisors believed about the likely peak and timing of the epidemic. But no details were given on expected infections, hospital admissions or deaths.

A spokesperson for the presidency said that government was withholding such numbers “to avoid panic”.

Finally, towards the end of May the health minister hosted an engagement between journalists and some of the modellers the government was relying on. The government then started releasing details of the models and projections.

The predictions of these models for an “optimistic scenario” are that the vast majority of the population will be infected, that infections will peak at around 8 million in mid-August, and that there will be 40,000 deaths in total.

To understand the significance of these – and the previous numbers – it is useful to consider more broadly what models are and how they are being used in the current context.

What models are and how they are used

A theoretical model – whether in epidemiology, economics or even physics – is a simplified representation of how the modeller thinks the world works.

To produce estimates or forecasts of how things might play out in the real world, such models need to make assumptions about the strength of relationships between different variables. Those assumptions reflect some combination of the modeller’s beliefs, knowledge and available evidence.

To put it differently: modelling is sophisticated guesswork. Where models have been successfully used across different contexts and time periods, we can have more confidence in their accuracy and reliability.

But models, especially outside sciences like physics, are almost always wrong to some degree. What matters for decision-making is that they are “right enough”. In the current situation, the difference between predicting 35,000 and 40,000 deaths probably won’t change policy decisions, but 5,000 or 500,000 instead of 40,000 might.
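The point that modelling is “sophisticated guesswork”, and that what matters is being “right enough”, can be made concrete with a toy example. The sketch below is a generic SIR (susceptible–infected–recovered) epidemic model, not any model actually used by the South African government; the population size is roughly South Africa’s, but the transmission and recovery rates are purely illustrative assumptions.

```python
# Minimal SIR epidemic model: a generic illustration of how a modeller's
# assumptions (here, the transmission rate beta) drive the projections.
# This is NOT any official South African model; all parameter values
# are illustrative.

def simulate_sir(beta, gamma=1/14, population=59_000_000,
                 initial_infected=100, days=365):
    """Euler integration of the classic SIR equations, one step per day.

    Returns (peak simultaneous infections, total ever infected).
    """
    s = population - initial_infected
    i = float(initial_infected)
    r = 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, population - s

# Modest changes in the assumed transmission rate produce projections
# that differ by millions -- which is why parameter assumptions matter.
for beta in (0.15, 0.20, 0.25):
    peak, total = simulate_sir(beta)
    print(f"beta={beta:.2f}: peak={peak:,.0f}, ever infected={total:,.0f}")
```

Running this shows peak and cumulative infections swinging by millions across small changes in a single assumed parameter: the sense in which the gap between “right enough” and badly misleading lies almost entirely in the assumptions.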

In the case of South Africa’s COVID-19 response, available information indicates that epidemiological models have played two main roles.

First, they have provided predictions of the possible scale of death and illness relative to health system capacity, as well as how this is expected to play out over time.

Second, they have been used to assess the success and effects of the government’s intervention strategies.

There are reasons to believe that there have been significant failures in both cases, in the modelling itself and especially in the way that it has been used.

In recent weeks, the government and its advisors have been keen to emphasise the uncertainty of the modelling predictions. From a methodological point of view, that is the responsible stance. But it’s too little too late.

Modelling COVID-19 is challenging in general, but there are at least four additional reasons to be cautious about South Africa’s COVID-19 models.

Reasons for caution

First, certain key characteristics of SARS-CoV-2 remain unknown and the subject of debate among medical experts.

Second, unlike some countries, South Africa does not have detailed data on the dynamics of social interactions and the models presented so far do not use household survey data as a proxy. Nuanced questions therefore aren’t addressed. For example, most cases early on in the epidemic appeared to have been relatively wealthy travellers. But there was no way to model the consequences of domestic workers being exposed by their employers and thereby infecting others in their (poorer) communities. So the structure of South Africa’s models is high level and does not account for country-specific factors.

Third, the values for the parameters of the models (representing the strength of relationships between different factors) are being taken from evidence in other countries. They may not actually be the same in South Africa.

Finally, the unsystematic nature of aspects of the government’s approach to testing, such as through its community screening programme, makes it much harder to infer the effects of its interventions.

Unclear objectives

There is little reason to believe that government had anything other than good intentions. Nevertheless, the consequences of its lack of sophistication in using evidence and expertise may burden an entire generation of South Africans.

A major problem linked to the combination of excessive confidence and secrecy is that the government’s strategy was never clear: although it referred to “flattening the curve” it never stated what its specific objectives were. In the terms of the most influential modelling-based advice during the pandemic, was its strategy “suppression” or “mitigation”?

The government and its advisors have made much of the fact that the lockdown probably delayed the peak of the epidemic. But there is no evidence so far that this was worth the cost – since most of the population is expected to be infected anyway.

One key claim is that the lockdown bought the country time to prepare the health system.

The Imperial College London model defined the primary objective of “flattening the curve” as reducing ICU admissions to below the number of critical care beds. On that dimension, the government’s own modellers predict a peak of 20,000 critical cases in the optimistic scenario, against only about 4,000 ICU beds, little more than the pre-lockdown number. By this definition, the strategy has failed dismally.

There appears to have been more success with securing supplies of personal protective equipment, quarantine locations, overflow beds and some ventilators. But there is also little evidence that many of those small gains could not have been achieved without such a costly lockdown.

Given this, it is concerning that many academics and commentators have praised the success of government’s strategy. This has included the Academy of Sciences, which has asserted that “strong, science-based governmental leadership has saved many lives, for which South Africa can be thankful”.

This is entirely unsubstantiated.

First, the full toll of the epidemic will be experienced over time, so fewer deaths at the outset due to a policy intervention may be exceeded by a larger number of deaths later, caused by the limitations of that same intervention.

Second, the only way to substantiate such claims would be to use models of different scenarios. But we’ve seen that the early models were not credible and the subsequent ones are subject to a great deal of uncertainty. It seems that the government and some of its advisors want to have the best of both worlds: they want to use the dramatically incorrect predictions of early models to claim success for their interventions. This is misleading and does not meet the most basic standards by which academics in quantitative disciplines establish causal effects of policy interventions.

In an earlier article, I noted that “if the current lockdown fails to drastically curb transmission, which is possible, it would layer one disaster on another … the country may exhaust various resources by the time the potentially more dangerous winter period arrives”.

This appears to be the situation in which South Africa finds itself.

Seán Mfundza Muller, Senior Lecturer in Economics, Research Associate at the Public and Environmental Economics Research Centre (PEERC) and Visiting Fellow at the Johannesburg Institute of Advanced Study (JIAS), University of Johannesburg

This article is republished from The Conversation under a Creative Commons license. Read the original article.

I’ve got an opinion out in the Sunday Independent 31 May: ‘We were set up to lock down’ People who say “It was right to lock down as a precaution but things have changed and now we should unlock” are wrong and should admit it or we won’t do better next time #epitwitter

This was published on 31 May in the Sunday Independent (South Africa), but for some reason they have not made it available online. So:

  1. Here is an image of what was published (presumably fine to share because it was in print only): We were set up to lock down (The Sunday Independent)
  2. Below is the text I submitted. They did not run the final text past me and there are some irritating editorial bungles that make the published text less readable (and sometimes ungrammatical). So, the one below is probably a better read.

We were set up to lock down

There’s a standard line. South Africa’s decision to lock down when we did was sensible. Little was known about COVID-19 and its potential impact here. Since then, the situation has changed. We know more about how the pandemic is likely to unfold and whom the disease affects, and we have made preparations to deal with the likely impact. The economy continues to deteriorate each day we stay locked down, and with it, people’s livelihoods. It is now time to unlock; in fact, unlocking is overdue. Decisive steps should now be taken to restore the economy, education, health services, and other pillars of the nation to their “new normal” function.

This familiar story is wrong. The evidence available at the time we locked down supported doing something more moderate. Lockdown was not the right response for South Africa to the threat COVID-19 posed in South Africa. Its potential benefits for a population the majority of whom are under 27, and can expect to die by their mid-sixties, did not outweigh the certain costs to the one in four living in poverty, and to the many more who would join them upon losing their livelihoods. Besides, it was obvious that, for most of the population, lockdown was impossible, due to overcrowding, shared sanitation, and the necessity of travel to receive social grants.

Contrary to what’s said, the evidence hasn’t changed. The relevant characteristics of COVID-19 were apparent by the end of March, when the decision to lock down was taken. Much of that evidence is cited in an opinion piece published on the same day lockdown was announced, 23 March, a piece arguing that a one-size-fits-all approach could not be applied to achieving social distancing. The piece was written by a colleague and me; we were unaware that that same day the country would move in exactly the opposite direction to the one we advised. We wrote several further pieces, and by 8 April I was sure that lockdown was wrong for Africa, including but not limited to South Africa, and published an opinion to that effect. The next day, lockdown was extended.

What has changed? Is it the evidence, or is it intellectual fashion?

It’s possible that those of us making anti-lockdown arguments two months ago are like stopped clocks that inevitably tell the right time when it comes. But the salient evidence was there all along: the dominance of age as a predictive risk factor (whether causal, and how, who knows) for serious, critical and fatal COVID-19. One credible infection fatality rate estimate published in March, based on data from China, was 0.66%, with a marked age gradient. A credible systematic review concluding that school closures were not supported by evidence was published in early April. Perhaps the major uncertainty concerned HIV as a potential vulnerability of the South African population. But it was known early that treated HIV was not correlated with COVID-19 risk, and in early April preliminary results emerged suggesting that this might be true even for untreated HIV. Those same results are being relied on in current opinions, in some cases by people who dismissed them at the time.

If that’s correct, and many will deny it, then how could so many academics, politicians, analysts and commentators have got it wrong? And what stops them seeing it now?

Obviously there are social costs to admitting error, and perhaps psychological ones too. Certainly we’re better at spotting each other’s mistakes than our own. But I think there was something else in play, which continues to confuse us. We felt we were presented with two options, and chose one of them as a precaution. This was not the reality, but a product of the modelling approaches that informed policy and perception alike at the time, and that still play worryingly prominent roles in the policy approach.

These models had and have three misleading features.

First, they did not and do not estimate the full health burden of COVID-19 and of the response to it. This is because they model the effects of a reduction in social contact without properly modelling the effects of the actual measures taken to achieve that reduction. A free decision to stay home is represented in the same way as being chained to the bed, or indeed being shot dead on the spot. These have very different consequences for mortality, none of which show up in the models. Perhaps this doesn’t matter in the developed world, where economic downturn means poverty but not starvation. But it’s crucial in the developing world, where recession often means death.

Second, and relatedly, contextual differences were obliterated by the use of a simple percentage scale to measure the reduction in social contact. This meant that, for instance, a 60% reduction in contact was represented as the same thing in Geneva and Johannesburg. Whereas, of course, that reduction is an outcome achieved by implementing policy decisions, which would usually be informed by the local context.

Third, the different scenarios modelled were then given different names, re-introducing a qualitative difference between them that was simply absent in the inputs. Qualitative differences were thus obliterated in the inputs (perfectly reasonably, from a modelling perspective) and then introduced in the outputs. Where before we had (say) a 40% reduction in contact, we now have “mitigation”. And instead of (say) a 60% reduction, we have “suppression”. These began life as arbitrary points on a continuous scale, as the modellers would have been the first to admit. But with different names, they came to be treated as qualitatively different strategies. Moreover, the leading models at the time predicted hugely greater benefits from suppression than from mitigation.
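The point that “mitigation” and “suppression” are labels attached to arbitrary points on a continuous scale can be sketched in a few lines. The numbers below are illustrative assumptions, not values from any official model: an assumed basic reproduction number R0, with contacts scaled by one minus the reduction.

```python
# A sketch of the article's point: "mitigation" and "suppression" are
# just names given to points on a continuous contact-reduction scale.
# All numbers here are illustrative assumptions, not values from any
# official model.

R0 = 2.5  # assumed basic reproduction number (illustrative)

def r_effective(contact_reduction):
    """Reproduction number once contacts are scaled by (1 - reduction)."""
    return R0 * (1 - contact_reduction)

# Sweeping the continuum: there is no qualitative jump at the points
# that later acquired the names "mitigation" (say, a 40% reduction)
# and "suppression" (say, 60%). The epidemiologically meaningful
# threshold is R = 1, and where it falls depends on the assumed R0.
for reduction in (0.0, 0.2, 0.4, 0.6, 0.8):
    r = r_effective(reduction)
    trend = "shrinks" if r < 1 else "grows"
    print(f"{reduction:.0%} reduction -> R = {r:.2f} (epidemic {trend})")
```

Nothing in the sweep distinguishes 40% from 60% except the name later attached to each; whatever qualitative threshold exists depends on assumed parameters, not on the labels.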

Thus, almost magically, the huge range of possible measures, varying between contexts depending on local conditions and policy priorities, was transformed into a choice between lockdown and no lockdown. Lockdown had already been exemplified in China and Europe as a set of specific restrictions, not as an abstract percentage reduction in social contact.

All context, all nuance, all qualitative factors were lost, washed out in a modelling exercise that was insensitive to contextual differences when formulating its inputs, and unwise in giving qualitatively different labels to its outputs.

Against this background, precautionary thinking naturally overtook cost-benefit thinking; proportionality gave way to precaution. The anti-COVID measure had a clear form: restricting economic activity and confining people to their homes. It appeared so much more effective than any other measure that it presented us with a binary choice; other measures looked pathetically ineffective by comparison, because in the process of de-quantifying the advantage of suppression over mitigation, regional differences had been lost. The choice was between action and inaction, and the cost of doing nothing appeared huge: just look at the footage from Italy. Yes, lockdown would be painful, but it seemed better than the alternative.

But the precautionary approach was never necessary. There was always a range of possible actions, the costs of lockdown were always obvious, and the most significant determinants of the risk profile of the South African population were known.

Now, European countries have passed their peak, and we are again ignoring our own context. Our curve remains exactly the same as it was the day we went into lockdown (a straight line on a logarithmic scale, which is the relevant scale here – for both cases and deaths). Lockdown made no difference, if those graphs are to be believed; and it’s hard to know what other data to look at. The decision to unlock is, as Glenda Gray pointed out, not backed by any scientific case. Yet it’s the right one, not because the evidence changed, but because it was right all along. Lockdown was always wrong for Africa, including South Africa.
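The parenthetical claim about logarithmic scales can be checked directly: on a log axis, constant exponential growth plots as a straight line, because successive log-counts differ by a constant. The snippet below uses an invented 15%-per-day series, not actual South African case data, to illustrate the property being appealed to when an unchanged slope is read as unchanged epidemic growth.

```python
import math

# On a log scale, exponential growth is a straight line: the day-over-day
# differences of log10(counts) are constant. Illustrative numbers only,
# not actual South African case data.

def log_slope(counts):
    """Day-over-day differences of log10(counts); constant => straight line."""
    return [math.log10(b) - math.log10(a) for a, b in zip(counts, counts[1:])]

# A series growing 15% per day from 100 cases:
cases = [100 * 1.15 ** day for day in range(10)]
slopes = log_slope(cases)

# Every slope equals log10(1.15), so the log-scale plot is a straight
# line; an unchanged slope after an intervention means unchanged growth.
print([round(s, 6) for s in slopes])
```

The same check applied to a series whose growth rate actually drops after an intervention would show the slope falling, which is the signature the author says is absent from the observed curves.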