Seán Muller: “South Africa’s use of COVID-19 modelling has been deeply flawed. Here’s why”


Image caption: President Ramaphosa’s government is easing the lockdown because of unsustainable economic costs. (Getty Images)

Seán Mfundza Muller, University of Johannesburg

When President Cyril Ramaphosa announced the decision to implement an initial 21-day national lockdown in response to the threat posed by the COVID-19 pandemic, he referred to “modelling” on which the decision was based. A media report a few days later, based on leaked information, claimed that the government had been told that “a slow and inadequate response by government to the outbreak could result in anywhere between 87,900 and 351,000 deaths”. These estimates, the report said, were based on population infection rates of 10% and 40% respectively.

In late April, the chair of the health minister’s advisory committee sub-committee on public health referred to the early models used by the government as “back-of-the-envelope calculations”, saying they were “flawed and illogical and made wild assumptions”.

These assertions have been impossible to fully assess. This is because no official information on the modelling has ever been released – despite its apparently critical role.

A briefing by the chair of the health minister’s advisory committee in mid-April sketched some basic details of what the government’s health advisors believed about the likely peak and timing of the epidemic. But no details were given on expected infections, hospital admissions or deaths.

A spokesperson for the presidency said that government was withholding such numbers “to avoid panic”.

Finally, towards the end of May the health minister hosted an engagement between journalists and some of the modellers government was relying on. It then started releasing details of the models and projections.

The predictions of these models for an “optimistic scenario” are that the vast majority of the population will be infected, that infections will peak at 8 million in mid-August, and that there will be 40,000 deaths in total.

To understand the significance of these – and the previous numbers – it is useful to consider more broadly what models are and how they are being used in the current context.

What models are and how they are used

A theoretical model – whether in epidemiology, economics or even physics – is a simplified representation of how the modeller thinks the world works.

To produce estimates or forecasts of how things might play out in the real world, such models need to make assumptions about the strength of relationships between different variables. Those assumptions reflect some combination of the modeller’s beliefs, knowledge and available evidence.

To put it differently: modelling is sophisticated guesswork. Where models have been successfully used across different contexts and time periods we can have more confidence in their accuracy and reliability.

But models, especially outside sciences like physics, are almost always wrong to some degree. What matters for decision-making is that they are “right enough”. In the current situation, the difference between predicting 35,000 and 40,000 deaths probably won’t change policy decisions, but 5,000 or 500,000 instead of 40,000 might.
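To make the idea concrete, the sketch below shows the kind of simple compartmental (SIR) model that underlies much epidemic modelling. It is not any of the models the South African government relied on, whose details were not public at the time; the population size, transmission rate and recovery rate are purely illustrative assumptions, chosen only to show how much the projections depend on parameter choices.

```python
# A minimal discrete-time SIR (susceptible-infectious-recovered) sketch.
# All numbers here are illustrative assumptions, not values from any official model.

def simulate_sir(population, beta, gamma, days, initial_infected=1):
    """Return a list of daily (susceptible, infectious, recovered) counts."""
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # driven by the assumed contact/transmission rate
        new_recoveries = gamma * i                  # driven by the assumed infectious period
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Two runs that differ only in the assumed transmission rate show how strongly
# the projected peak depends on parameter assumptions.
for beta in (0.25, 0.20):
    peak = max(i for _, i, _ in simulate_sir(59_000_000, beta, gamma=1 / 10, days=365))
    print(f"beta={beta}: projected peak of about {peak:,.0f} people infectious at once")
```

Even this modest change in the assumed transmission rate produces a markedly different projected peak, which is one reason why parameter values borrowed from other countries (a concern raised below) are a genuine source of uncertainty.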

In the case of South Africa’s COVID-19 response, available information indicates that epidemiological models have played two main roles.

First, they have provided predictions of the possible scale of death and illness relative to health system capacity, as well as how this is expected to play out over time.

Second, they have been used to assess the success and effects of the government’s intervention strategies.

There are reasons to believe that there have been significant failures in both cases, in the modelling itself and especially in the way that it has been used.

In recent weeks, the government and its advisors have been keen to emphasise the uncertainty of the modelling predictions. From a methodological point of view, that is the responsible stance. But it’s too little too late.

Modelling COVID-19 is challenging in general, but there are at least four additional reasons to be cautious about our COVID-19 models.

Reasons for caution

First, certain key characteristics of SARS-CoV-2 remain unknown and the subject of debate among medical experts.

Second, unlike some countries, South Africa does not have detailed data on the dynamics of social interactions and the models presented so far do not use household survey data as a proxy. Nuanced questions therefore aren’t addressed. For example, most cases early on in the epidemic appeared to have been relatively wealthy travellers. But there was no way to model the consequences of domestic workers being exposed by their employers and thereby infecting others in their (poorer) communities. So the structure of South Africa’s models is high level and does not account for country-specific factors.

Third, the values for the parameters of the models (representing the strength of relationships between different factors) are being taken from evidence in other countries. They may not actually be the same in South Africa.

Finally, the unsystematic nature of aspects of the government’s approach to testing, such as through its community screening programme, makes it much harder to infer the effects of its interventions.

Unclear objectives

There is little reason to believe that government had anything other than good intentions. Nevertheless, the consequences of its lack of sophistication in using evidence and expertise may burden an entire generation of South Africans.

A major problem linked to the combination of excessive confidence and secrecy is that the government’s strategy was never clear: although it referred to “flattening the curve” it never stated what its specific objectives were. In the terms of the most influential modelling-based advice during the pandemic, was its strategy “suppression” or “mitigation”?

The government and its advisors have made much of the fact that the lockdown probably delayed the peak of the epidemic. But there is no evidence so far that this was worth the cost – since most of the population is expected to be infected anyway.

One key claim is that the lockdown bought the country time to prepare the health system.

The Imperial College model defined the primary objective of “flattening the curve” as keeping ICU admissions below the number of critical care beds. On that dimension, the government’s own modellers predict a peak of 20,000 critical cases in the optimistic scenario, against only about 4,000 ICU beds, little more than before the lockdown. By this definition, the strategy has failed dismally.

There appears to have been more success with securing supplies of personal protective equipment, quarantine locations, overflow beds and some ventilators. But there is also little evidence that many of those small gains could not have been achieved without such a costly lockdown.

Given this, it is concerning that many academics and commentators have praised the success of government’s strategy. This has included the Academy of Sciences, which has asserted that “strong, science-based governmental leadership has saved many lives, for which South Africa can be thankful”.

This is entirely unsubstantiated.

First, the full toll of the epidemic will be experienced over time: fewer deaths at the outset, thanks to a policy intervention, may yet be exceeded by a larger number of deaths later because of the limitations of that same intervention.

Second, the only way to substantiate such claims would be to use models of different scenarios. But we’ve seen that the early models were not credible and the subsequent ones are subject to a great deal of uncertainty. It seems that the government and some of its advisors want to have the best of both worlds: they want to use dramatically incorrect predictions by early models to claim success of their interventions. This is misleading and does not meet the most basic standards by which academics in quantitative disciplines establish causal effects of policy interventions.

In an earlier article, I noted that “if the current lockdown fails to drastically curb transmission, which is possible, it would layer one disaster on another … the country may exhaust various resources by the time the potentially more dangerous winter period arrives”.

This appears to be the situation in which South Africa finds itself.

Seán Mfundza Muller, Senior Lecturer in Economics, Research Associate at the Public and Environmental Economics Research Centre (PEERC) and Visiting Fellow at the Johannesburg Institute of Advanced Study (JIAS), University of Johannesburg

This article is republished from The Conversation under a Creative Commons license. Read the original article.

I’ve got an opinion out in the Sunday Independent of 31 May: “We were set up to lock down”. People who say “It was right to lock down as a precaution but things have changed and now we should unlock” are wrong and should admit it, or we won’t do better next time. #epitwitter

This was published on 31 May in the Sunday Independent (South Africa), but for some reason they have not made it available online. So:

  1. Here is an image of what was published (presumably fine to share because it was in print only): We were set up to lock down (The Sunday Independent)
  2. Below is the text I submitted. They did not run the final text past me and there are some irritating editorial bungles that make the published text less readable (and sometimes ungrammatical). So, the one below is probably a better read.

We were set up to lock down

There’s a standard line. South Africa’s decision to lockdown when we did was sensible. Little was known about COVID-19 and its potential impact here. Since then, the situation has changed. We know more about how the pandemic is likely to unfold and who the disease affects, and we have made preparations to deal with the likely impact. The economy continues to deteriorate each day we stay locked down, and with it, people’s livelihoods. It is now time to unlock; in fact, unlocking is overdue. Decisive steps should now be taken to restore the economy, education, health services, and other pillars of the nation to their “new normal” function.

This familiar story is wrong. The evidence available at the time we locked down supported doing something more moderate. Lockdown was not the right response to the threat COVID-19 posed in South Africa. Its potential benefits for a population the majority of whom are under 27, and can expect to be dead by their mid-sixties, did not outweigh the certain costs to the one in four living in poverty, and to the many more who would join them on losing their livelihoods. Besides, it was obvious that, for most of the population, lockdown was impossible, due to overcrowding, shared sanitation, and the necessity of travel to receive social grants.

Contrary to what’s said, the evidence hasn’t changed. The relevant characteristics of COVID-19 were apparent by the end of March, when the decision to lock down was taken. Much of it is cited in an opinion piece published on the same day lockdown was announced, 23 March, a piece arguing that a one-size-fits-all approach could not be applied to achieving social distancing. The piece was written by a colleague and me, unaware that that same day the country would move in exactly the opposite direction to the one we advised. We wrote several further pieces, and by 8 April I was sure that lockdown was wrong for Africa, including but not limited to South Africa, and published an opinion to that effect. The next day, lockdown was extended.

What has changed? Is it the evidence, or is it intellectual fashion?

It’s possible that those of us making anti-lockdown arguments two months ago are like stopped clocks that inevitably tell the right time when it comes. But the salient evidence was there all along. The dominance of age as a predictive risk factor (who knows whether causal, or how) for serious, critical and fatal COVID-19 was already apparent. One credible infection fatality estimate published in March, based on data from China, was 0.66%, with a marked age gradient. A credible systematic review concluding that school closures were not supported by evidence was published in early April. Perhaps the major uncertainty concerned HIV as a potential vulnerability of the South African population. But it was known early that treated HIV status was not correlated with COVID-19 risk, and in early April results emerged suggesting that this might be true even for untreated HIV. Those same results are being relied on in current opinions, in some cases by people who dismissed them at the time.

If that’s correct, and many will deny it, then how could so many academics, politicians, analysts and commentators have got it wrong? And what stops them seeing it now?

Obviously there are social costs to admitting error, and perhaps psychological ones too. Certainly we’re better at spotting each other’s mistakes than our own. But I think there was something else in play, which continues to confuse us. We felt we were presented with two options, and chose one of them as a precaution. This was not the reality, but a product of the modelling approaches that informed policy and perception alike at the time, and that still play worryingly prominent roles in the policy approach.

These models had and have three misleading features.

First, they did not and do not estimate the overall health burden of COVID-19 and of the response to it. This is because they model the effects of a reduction in social contact without properly modelling the effects of the actual measures taken to achieve that reduction. A free decision to stay home is represented in the same way as being chained to the bed, or indeed being shot dead on the spot. These have different consequences for mortality, none of which show up in the models. Perhaps this doesn’t matter in the developed world, where economic downturn means poverty but not starvation. But it’s crucial in the developing world, where recession often means death.

Second, and relatedly, contextual differences were obliterated by the use of a simple percentage scale to measure the reduction in social contact. This meant that, for instance, a 60% reduction in social contact was represented as the same thing in Geneva and Johannesburg. Whereas, of course, that reduction is an outcome one achieves by implementing policy decisions, which would usually be informed by the local context.

Third, the different scenarios modelled were then given different names, re-introducing a qualitative difference between them that was simply absent from the input. Qualitative differences were thus obliterated in the inputs – perfectly reasonably, from a modelling perspective – then introduced in the output. Where before we had (say) a 40% reduction in contact, we now have “mitigation”. And instead of (say) a 60% reduction, we have “suppression”. These began life as arbitrary points on a continuous scale, as the modellers would have been the first to admit. But with different names, they came to be treated as qualitatively different strategies. Moreover, the leading models at the time predicted hugely greater benefits from suppression compared to mitigation.

Thus, almost magically, the huge range of possible measures, varying across contexts depending on local conditions and policy priorities, was transformed into a choice between lockdown and no lockdown. Lockdown had already been exemplified in China and Europe as a set of specific restrictions, not as an abstract percentage reduction in social contact.

All context, all nuance, all qualitative factors were lost, washed out in a modelling exercise that was insensitive to contextual differences when formulating its inputs, and unwise in giving qualitatively different labels to its outputs.

Against this background, precautionary thinking naturally overtakes cost-benefit thinking; proportionality gives way to precaution. The anti-COVID measure has a clear form: restricting economic activity and confining people to their homes. It is so much more effective than any other measure that it presents us with a binary choice; other measures are pathetically ineffective by comparison, because in the process of de-quantifying the effectiveness of suppression over mitigation, regional differences have been lost. The choice is between action and inaction, and the cost of doing nothing appears huge: just look at the footage from Italy. Yes, it will be painful, but it’s better than the alternative.

But the precautionary approach was never necessary. There was always a range of possible actions, the costs of lockdown were always obvious, and the most significant determinants of the risk profile of the South African population were known.

Now, European countries have passed their peak, and we are again ignoring our own context. Our curve remains exactly the same as it was the day we went into lockdown (a straight line on a logarithmic scale, which is the relevant scale here – for both cases and deaths). Lockdown made no difference, if those graphs are to be believed; and it’s hard to know what other data to look at. The decision to unlock is, as Glenda Gray pointed out, not backed by any scientific case. Yet it’s the right one, not because the evidence changed, but because it was right all along. Lockdown was always wrong for Africa, including South Africa.

Predicting Pandemics: Lessons from (and for) COVID-19

This is a live online discussion between Jonathan Fuller and Alex Broadbent, hosted by the Institute for the Future of Knowledge in partnership with the Library of the University of Johannesburg. Comments and discussion are hosted on this page, and you can watch the broadcast here:

We know considerably more about COVID-19 than anyone has previously known about a pandemic of a new disease. Yet we are uncertain about what to do. Even where it appears obvious that strategies have worked or failed, it will take some time to establish that the observed trends are fully or even partly explained by anything we did or didn’t do. And when we take a lesson from one place and try to apply it in another, we have to contend with the huge differences between different places in the world, especially age and wealth. This conversation explores these difficulties, in the hope of improving our response to the uncertainty that always accompanies pandemics, our ability to tell what works, our sensitivity to context, and thus our collective ability to arrive at considered decisions with clearly identified goals, based on a comprehensive assessment of the relevant costs, benefits, risks, and other factors.


Professor Alex Broadbent (PhD) is Director of the Institute for the Future of Knowledge and Professor of Philosophy at the University of Johannesburg. He specialises in prediction, causal inference, and explanation, especially in epidemiology and medicine. He publishes in major journals in philosophy, epidemiology, medicine and law, and his books include the path-breaking Philosophy of Epidemiology (Palgrave 2013) and Philosophy of Medicine (Oxford University Press 2019).

Dr Jonathan Fuller (PhD, MD) is a philosopher working in philosophy of science, especially philosophy of medicine. He is an Assistant Professor in the Department of History and Philosophy of Science (HPS) at the University of Pittsburgh, and a Research Associate with the University of Johannesburg. He is also on the International Philosophy of Medicine Roundtable Scientific Committee. He was previously a postdoctoral research fellow in the Institute for the History and Philosophy of Science at the University of Toronto.

Masters, PhD and PostDoc opportunities at UJ

The University of Johannesburg has released a special call offering masters, doctoral and postdoctoral fellowships, to start as soon as possible; the deadline is 8 February 2020.

These are in any area, but I would specifically like to invite anyone wishing to work with me (or with colleagues at UJ) on any of the areas listed below. From May 2020, I will be Director of the Institute for the Future of Knowledge at UJ (a new institute – no website yet – but watch this space!), and being part of this enterprise will, I think, be very exciting for potential students and post-docs. I would be delighted to receive inquiries in any of the following areas:

  • Philosophy of medicine
  • Philosophy of epidemiology
  • Causation
  • Counterfactuals
  • Causal inference
  • Prediction
  • Explanation (not just causal)
  • Machine learning (in relation to any of the above)
  • Cognitive science
  • Other things potentially relevant to the Institute, my interests, your interests… please suggest!

If you’re interested please get in touch: abbroadbent@uj.ac.za

The call is here, along with instructions for applicants:

2020 Call for URC Scholarships for Master’s_Doctoral_Postdoctoral Fellowships_Senior Postdoctoral fellowships

Book published: Philosophy of Medicine

My book Philosophy of Medicine (Oxford University Press) has now been published in the USA, and in paperback in the UK. Hardback date in the UK is 28 March. E-books are of course available.

I am putting together a series of YouTube videos corresponding to each of the chapters, by way of segue into the fourth industrial revolution.

The book carves out some new territory in the field, by taking a broad view of medicine as something existing in different forms, in different times and places. I argue that any adequate understanding of medicine must say something about what medicine is, given this apparent variety of actual practices that are either claimed to be or regarded as medical. I argue that, while the goal of medicine is to cure, its track record in this regard is patchy at best. This gives rise to the question of why medicine has persisted despite being so commonly ineffective. I argue that this persistence shows that the business of medicine – the practice of a core medical competence – cannot be cure, even if that is the goal. Instead, what doctors provide is understanding and prediction, or at least engagement with the project of understanding health and disease.

I also cover the familiar question of the nature of health. The naturalism/normativism dichotomy is a false one, since it elides two dimensions of disagreement, one concerning objectivity, the other concerning value-ladenness. It is obvious that these are logically distinct properties. I argue that health is a secondary property, like colour, consisting in a disposition on our part to respond to an underlying reality which, however, does not carve the world in the way that our responses do. The reason that we have this disposition to respond to the underlying properties rather than some other – the reason that we have this particular health concept – is the advantages it conferred on groups of humans during our evolutionary history. My secondary property view sees health as a non-objective but non-evaluative property, and this places it in a previously unoccupied portion of the logical space created by distinguishing clearly between the dimensions of traditional disagreement.

The second part of the book concerns the attitude we should have towards medicine, and is informed by the understanding of the nature of medicine developed in the first part. Evidence Based Medicine and Medical Nihilism are discussed. The former sets high standards for what counts as evidence. The latter basically accepts these standards and then argues that so little medical research meets these standards that we should despair of medicine, and regard even apparently well-supported interventions as probably ineffective. Both views are rejected on their merits, but a connecting theme is their location of the whole value of medicine in its curative powers. I see value in medicine beyond cure, and thus even if the arguments of EBMers and nihilists succeeded on their merits (which I deny), they would not warrant such a negative attitude to the majority of medicine.

Philosophy of medicine has had little to say about non-Mainstream traditions, beyond occasional spats with alternative therapists. The last three chapters of the book seek to remedy this. A view called Medical Cosmopolitanism is advanced (inspired by Kwame Anthony Appiah’s book and ethical position Cosmopolitanism) as an alternative to the evidence-basing and nihilistic stances. The main tenets are realism about medical facts, especially what works, epistemic humility when discussing these facts, and the primacy of practice – focusing on specific problems rather than grand principles. Realism means that we should not shy away from trying to determine whether one or another intervention is better; we should not have a “hands off” approach, even where deep and/or cultural beliefs are at stake. Epistemic humility means that when approaching disagreements we must be mindful of the less-than-distinguished history of medical claims, and must be respectful, tentative, open to changing our mind. The primacy of practice is the idea that we focus first on what to do in particular cases, since agreement here is usually easier than on larger principles.

I then apply this position to medical dissidence and decolonization of medicine. Medical dissidence occurs when traditions co-exist with a more dominant tradition and reject parts of it. Homeopathy is the paradigm case. I advocate a much more tolerant stance between disputants about alternative medicine, arguing that the reason for different views (also extending to topics such as vaccination) is that all of our medical evidence reaches us through testimony, and trust then becomes king-maker as to which medical evidence you accept. It’s no good telling someone that a trial was fantastic if they just don’t believe you, and nor are they irrational to reject evidence from a trial if they just don’t believe that the trial occurred, or was fair, or similar. Unless you run a trial yourself, you are in the position of receiving your medical information second-hand, and then trust relationships become paramount. This patchy history of medical success amply explains why trust in any given tradition might be hard to come by.

Finally, contact between medicines deriving from different cultures presents interesting epistemic and practical challenges. In former colonies, these challenges must be handled carefully. Medicine is imbued with culture, and to insist on one medicine over another can be culturally oppressive. At the same time, cosmopolitanism is committed to realism. So, no matter how deeply held a belief in the efficacy of a certain intervention or ritual, if this ritual does not work or is less effective than one provided by Mainstream Medicine (as I call it – since it is no longer strictly Western) then this fact must be confronted. Moreover, ordinary people just want efficacy: we can quibble at the periphery, but fundamentally, illness is a universal human experience, as is holding a sick child in your arms. Thus I advocate something a little more critical than “dialogue” between traditions. I invite a critical attitude. The approach must be humble, and Mainstream Medicine must concede that it may well have something to learn from, e.g., African Medicine. But decolonization must fundamentally consist in the adoption of a critical mindset, one that rejected political colonization, and that goes on to reject epistemic colonization. This critical mindset demands that African, Chinese, Indian and other traditions take the inevitable confrontation with Mainstream Medicine seriously, and seriously consider whether their various interventions and strategies are effective, just as they ask Mainstream Medicine to take these interventions and strategies seriously.

African Medicine in the JMP

My inaugural lecture, somewhat edited, has been published in the Journal of Medicine and Philosophy, along with replies by Thaddeus Metz and Chadwin Harris, and a rejoinder from me.

https://academic.oup.com/jmp/issue/43/3

I’m proud of this, particularly because it’s the first time to my knowledge that this journal has published material on African medicine. It may even be the first publication in the contemporary philosophy of medicine, whether analytic or continental, that discusses African medicine in any meaningful way, or discusses Africa in a non-victim role (as opposed to discussions of unfair drug testing practices, neglected diseases, and so forth). I must emphasize that I stand to be corrected on each of these progressively more provocative speculations, and would be delighted for references which I will happily collate and list on another post.

The paper is about the nature of medicine and the role of cure, which I argue is not the main business of medicine, even if it’s the goal. My respondents, naturally, disagree.

Please let me know if you don’t have institutional access to the paper.

America Tour: Attribution, prediction, and the causal interpretation problem in epidemiology

Next week I’ll be visiting America to talk in Pittsburgh, Richmond, and twice at Tufts. I do not expect audience overlap so I’ll give the same talk in all venues, with adjustments for audience depending on whether it’s primarily philosophers or epidemiologists I’m talking to. The abstract is below. I haven’t got a written version of the paper that I can share yet but would of course welcome comments at this stage.

ABSTRACT

Attribution, prediction, and the causal interpretation problem in epidemiology

In contemporary epidemiology, there is a movement, part theoretical and part pedagogical, attempting to discipline and clarify causal thinking. I refer to this movement as the Potential Outcomes Approach (POA). It draws inspiration from the work of Donald Rubin and, more recently, Judea Pearl, among others. It is most easily recognized by its use of Directed Acyclic Graphs (DAGs) to describe causal situations, but DAGs are not the conceptual basis of the POA in epidemiology. The conceptual basis (as I have argued elsewhere) is a commitment to the view that the hallmark of a meaningful causal claim is that it can be used to make predictions about hypothetical scenarios. Elsewhere I have argued that this commitment is problematic (notwithstanding the clear connections with counterfactual, contrastive and interventionist views in philosophy). In this paper I take a more constructive approach, seeking to address the problem that troubles advocates of the POA. This is the causal interpretation problem (CIP). We can calculate various quantities that are supposed to be measures of causal strength, but it is not always clear how to interpret these quantities. Measures of attributability are most troublesome here, and these are the measures on which POA advocates focus. What does it mean, they ask, to say that a certain fraction of population risk of mortality is attributable to obesity? The pre-POA textbook answer is that, if obesity were reduced, mortality would be correspondingly lower. But this is not obviously true, because there are methods for reducing obesity (smoking, cholera infection) which will not reduce mortality. In general, say the POA advocates, a measure of attributability tells us next to nothing about the likely effect of any proposed public health intervention, rendering these measures useless, and so, for epidemiological purposes, meaningless. In this paper I ask whether there is a way to address and resolve the causal interpretation problem without resorting to the extreme view that a meaningful causal claim must always support predictions in hypothetical scenarios. I also seek connections with the notorious debates about heritability.
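For readers unfamiliar with the measures of attributability the abstract refers to, here is a minimal sketch of the standard textbook (Levin) population attributable fraction, with invented numbers; nothing in it is drawn from the talk itself, and the closing comment simply restates the interpretive worry the abstract raises.

```python
# Textbook population attributable fraction (Levin's formula).
# The prevalence and relative risk below are invented, purely for illustration.

def attributable_fraction(prevalence, relative_risk):
    """PAF = p(RR - 1) / (1 + p(RR - 1)): the share of cases statistically 'attributable' to an exposure."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

paf = attributable_fraction(prevalence=0.30, relative_risk=1.5)
print(f"About {paf:.0%} of cases are 'attributable' to the exposure")  # roughly 13%

# The causal interpretation problem: this figure by itself says nothing about what any
# particular intervention on the exposure would actually achieve.
```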

Causation, prediction, epidemiology – talks coming up

Perhaps an odd thing to do, but I’m posting the abstracts of my next two talks, which will also become papers. Any offers to discuss/read welcome!

The talks will be at Rhodes on 1 and 3 October. I’ll probably deliver a descendant of one of them at the Cambridge Philosophy of Science Seminar on 3 December, and may also give a very short version of the first at the World Health Summit in Berlin on 22 October.

1. Causation and Prediction in Epidemiology

There is an ongoing “methodological revolution” in epidemiology, according to some commentators. The revolution is prompted by the development of a conceptual framework for thinking about causation called the “potential outcomes approach”, and the mathematical apparatus of directed acyclic graphs that accompanies it. But once the mathematics are stripped away, a number of striking assumptions about causation become evident: that a cause is something that makes a difference; that a cause is something that humans can intervene on; and that epidemiologists need nothing more from a notion of causation than picking out events satisfying those two criteria. This is especially remarkable in a discipline that has variously identified factors such as race and sex as determinants of health. In this talk I seek to explain the significance of this movement in epidemiology, separate its insights from its errors, and draw a general philosophical lesson about confusing causal knowledge with predictive knowledge.

2. Causal Selection, Prediction, and Natural Kinds

Causal judgements are typically – invariably – selective. We say that striking the match caused it to light, but we do not mention the presence of oxygen, the ancestry of the striker, the chain of events that led to that particular match being in her hand at that time, and so forth. Philosophers have typically but not universally put this down to the pragmatic difficulty of listing the entire history of the universe every time one wants to make a causal judgement. The selective aspect of causal judgements is typically thought of as picking out causes that are salient for explanatory or moral purposes. A minority, including me, think that selection is more integral than that to the notion of causation. The difficulty with this view is that it seems to make causal facts non-objective, since selective judgements clearly vary with our interests. In this paper I seek to make a case for the inherently selective nature of causal judgements by appealing to two contexts where interest-relativity is clearly inadequate to fully account for selection. Those are the use of causal judgements in formulating predictions, and the relation between causation and natural kinds.

Acetaminophen (paracetamol), Asthma, and the Causal Fallacy

In November 2011, a senior American pediatrician suggested that there was enough evidence to warrant restricting acetaminophen (paracetamol) use among children at risk of asthma, despite inadequate evidence for a causal inference. His argument was based on an ethical principle. However, neither his argument nor the evidence he surveys is sufficient to warrant the recommendation, which therefore has the status not of a sensible precaution but of a stab in the dark. I have written to the editors of Pediatrics to explain why – the link is here:

http://pediatrics.aappublications.org/content/128/6/1181.full/reply#pediatrics_el_53669

The theoretical point underlying this is one under-emphasized in both philosophical and epidemiological thinking, namely, that causal inference is something rather different from making a prediction based on the causal knowledge so obtained. It is fallacious to suppose that, because we have a hunch that acetaminophen causes asthma, we have even a hunch about what will happen when we restrict acetaminophen use. It all depends on what consequences the non-use of acetaminophen has, and that in turn depends on the form that non-use takes. The point is familiar from philosophical studies of counterfactuals, but those studies arguably either do not offer much of practical use for epidemiology or else have not received an epidemiological audience. (I favour the former option, although I realise many philosophers will disagree.)

The result is a common fallacy of reasoning which we might call The Causal Fallacy: epidemiologists, policy makers, and probably the public assume that because we have causal knowledge, we have knowledge of what will happen when we manipulate those causes. In practice we do not. (This under-appreciated point has been emphasized by Sander Greenland among epidemiologists and Nancy Cartwright among philosophers, and as I see it tells heavily against the programme of manipulationist or interventionist theories of causation.) Establishing whether an exposure such as acetaminophen is a cause of an outcome such as asthma is not sufficient to predict the outcome of a given recommendation on the use of acetaminophen, for the simple reason that more than one such policy is possible, and each may in principle have a different outcome.
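To make the final point concrete, here is a toy sketch with invented numbers (not real acetaminophen or asthma data): the same causal knowledge that an exposure adds risk is compatible with very different policy outcomes, depending on what the exposure is replaced with.

```python
# Toy illustration of the 'Causal Fallacy'. All numbers are invented for illustration.
# Suppose the exposure adds risk, so that risk with exposure = baseline + extra.
BASELINE_RISK = 0.08        # hypothetical risk of the outcome without the exposure
EXPOSURE_EXTRA_RISK = 0.02  # hypothetical extra risk contributed by the exposure

def risk_after_restriction(substitute_extra_risk):
    """Risk once the exposure is removed, given the extra risk of whatever replaces it."""
    return BASELINE_RISK + substitute_extra_risk

print(f"With exposure:            {BASELINE_RISK + EXPOSURE_EXTRA_RISK:.2f}")
print(f"Harmless substitute:      {risk_after_restriction(0.00):.2f}")  # risk falls
print(f"Equally risky substitute: {risk_after_restriction(0.02):.2f}")  # nothing gained
print(f"Riskier substitute:       {risk_after_restriction(0.05):.2f}")  # policy backfires
```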