The role of philosophers in the coronavirus pandemic

What is the point of philosophy? That’s a question many philosophers struggle with, and not just because it is difficult to answer: the same goes for many academic disciplines, including the “hard” sciences and applied disciplines like economics. However, unlike physicists and economists, philosophers ought, in the perception of many, to be able to answer this question. And many of us can’t, at least not to our own satisfaction.

I’ve written some opinion pieces (1,2) and given some interviews during this period, and I know of a handful of other philosophers who have done so (like Benjamin Smart, Arthur Caplan, and Stefano Canali). However, I also know of philosophers who have expressed frustration at the “uselessness” of philosophy in times like these. At the same time, I’ve seen an opinion piece by a computer scientist, whose expert contribution is confined to the nature of exponential growth: something that all of us with a basic mathematical education have studied, and which anyone subject to a compound interest rate, for example through a mortgage, will have directly experienced.
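The elementary point about exponential growth can indeed be sketched in a few lines. The following toy calculation uses invented numbers (a hypothetical doubling time of three days, a 5% interest rate) purely for illustration:

```python
# Toy illustration: unchecked exponential spread vs. compound interest.
# The numbers (doubling every 3 days, 5% annual interest) are invented.

def exponential_growth(initial, doubling_time_days, days):
    """Cases after `days`, if cases double every `doubling_time_days` days."""
    return initial * 2 ** (days / doubling_time_days)

def compound_interest(principal, annual_rate, years):
    """Balance after `years` at a fixed annual rate, compounded yearly."""
    return principal * (1 + annual_rate) ** years

cases = exponential_growth(100, 3, 30)        # 100 cases, doubling every 3 days
debt = compound_interest(100_000, 0.05, 20)   # a 20-year mortgage-style balance

print(round(cases))     # 102400 cases after a month, on these made-up figures
print(round(debt, 2))
```

On these invented figures, 100 cases become over 100,000 in a month: the same arithmetic that makes a mortgage balance grow, which is the sense in which anyone with a compound-interest debt has experienced exponential growth directly.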

Yet computer science hasn’t covered itself in glory in this epidemic. Machine learning publications claiming to arrive at predictive models in a matter of weeks have been notably absent, confirming, for me, the view that machine learning and epidemiology have yet to interact meaningfully. Why do computer scientists (only one, admittedly; most of them are surely more sensible) and philosophers have such different levels of confidence in pronouncing on matters beyond their expertise?

There are no experts on the COVID-19 pandemic

This pandemic is subject to nobody’s expertise. It’s a novel situation, and expertise is remarkably useless when things change, as economists discovered in 2008 and pollsters in 2016.

Of course, parts of the current situation fall within the domains of various experts. Infectious disease epidemiologists can predict its spread. But there is considerably more to this pandemic than predicting its spread. In particular, predicting the difference that an intervention makes requires a grasp of causal inference, a skill set distinct from predicting a trend, as proponents of the potential outcomes approach have correctly pointed out. Likewise, the attribution, after the fact, of a certain outcome to an intervention only makes good sense when we know what course of action we are comparing that intervention with; and this may be underspecified, because the “would have died otherwise” trend is so hard to establish.

Non-infectious-disease epidemiologists may understand the conceptual framework, methodology, terminology and pitfalls of the current research on the pandemic, but they do not necessarily have better subject-specific expertise than many in public health, the medical field, or others with a grasp of epidemiological principles. Scientists from other disciplines may be worse than the layperson because, like the computer scientist just mentioned, they wrongly assume that their expertise is relevant, and in doing so either simplify the issue to a childish extent, or make pronouncements that are plain wrong. (Epidemiology is, in my view, widely under-respected by other scientists.)

Turning to economics and politics, economists can predict the outcome of a pandemic or of measures to control it only if they have input from infectious disease epidemiologists on the predictive claims whose impacts they are seeking to assess.

Moreover, the health impacts of economic policies are well studied by epidemiologists, and to some extent by health economists; but neither group is typically knowledgeable about the epidemiology of infectious disease outbreaks of this nature.

Jobs for philosophers

In this situation, my opinion is that philosophers can contribute substantially. My own thinking has been around cost-benefit analysis of public health interventions, and in particular the neglect of the health impact – which varies greatly across global locations – of the boilerplate measures being recommended to combat the health impact of the virus. This is obviously a lacuna, and especially pressing for me as I sit writing this in my nice study in Johannesburg, where most people do not have a nice study. Africa is always flirting with famine (there are people who will regard this as an insult; it is not). Goldman Sachs is predicting a 24% decline in US GDP next quarter.

If this does not cost lives in Africa, that would be remarkable. It might even cost more lives than the virus would, in a region where only 3% are over 65 (and there’s no evidence that HIV status makes a difference to outcomes of COVID-19). South Africa is weeks into the epidemic and saw its first two deaths just today.

Yet the epidemiological community (at least on my Twitter feed) has entirely ignored the health consequences of the interventions themselves, merely pointing out that the virus will have its own economic impact even without interventions – which is like justifying the Bay of Pigs by pointing out that Castro would have killed people even without the attempted invasion. And context is almost totally ignored. The discipline appears mostly to have fallen in behind the view that the stronger the measure, the more laudable. Weirdly, those who usually press for more consideration of social angles seem no less in favour, despite the fact that they spend most of the rest of their time arguing that poverty is wrongly neglected as a cause of ill-health.

Do I sound disappointed in the science that I’m usually so enthusiastic about, and that shares with philosophy the critical study of the unknown? Here we have a virus that may well claim a larger death toll in richer countries with older populations, and a set of measures that are designed by and for those countries, and a total lack of consideration of local context. Isn’t this remarkable?

There is more to say, and many objections; I’ll write this up in an academically rigorous way as soon as I can. Meanwhile, I’ll continue to publish opinion pieces, where I think it’s useful. Right now, my point is that there’s a lot for philosophers to dissect here. I don’t mean in this particular problem, but in the pandemic as a whole. And the points don’t have to be rocket science. They can be as simple as recommending that a ban on sale of cigarettes be lifted.

What is required for us to be useful, however, is that we apply our critical thinking skills to the issue at hand. Falling in with common political groupings adds nothing unique and requires the suspension of the same critical faculties that we philosophers pride ourselves on in other contexts. This is a situation where nearly all the information on which decisions are being made is publicly available, where none of it is the exclusive preserve of a single discipline, and where fear clouds rational thought. Expert analyses of specific technical problems are also readily available. These are ideal conditions for someone trained to apply analytic skills in a relatively domain-free manner to contribute usefully.

Off the top of my head, here are a handful of topic ideas:

  • How to circumscribe the consequences of COVID-19 that we are interested in when devising our measures of intervention (this is an ethical spin on the issue I’m interested in above)
  • The nature of good prediction (which I’ve worked on in the public health context – but there is so much more to say)
  • The epistemology of testimony, especially concerning expertise, in a context of minimal information (to get us past the “trust the scientists FFS” dogma – that’s an actual quote from Twitter)
  • The weighing of the rights of different groups, given the trade-off between young and old deaths (COVID-19 kills almost no children, while children will die in droves in a famine)

One’s own expertise will suggest other topics, provided that the effort is to think critically rather than simply identify people with whom one agrees. I very much hope that we will not see a straightforward application of existing topics: inductive risk and coronavirus; definition of health and coronavirus; rights and coronavirus; etc. To be clear, I’m not saying that no treatment of coronavirus can mention inductive risk, definition of health, or rights; just that the treatment must start with the coronavirus. My motto in working on the philosophy of epidemiology is that my work is philosophical in character but epidemiological in subject: it is philosophical work about epidemiology. Where it suggests modifications to existing debates in philosophy, as does happen, that is great, but it’s not the purpose. The idea is to identify new problems, not to cast old ones in a new light. Perhaps there are no such things as new philosophical problems; but then again, perhaps it’s only by trying to identify new problems that we can cast new light on old ones.

Call to arms

The skill of philosophers, and the value in philosophy, does not lie in our knowledge of debates that we have had with each other. It lies in our ability to think fruitfully about the unfamiliar, the disturbing, the challenging, and even the abhorrent. The coronavirus pandemic is all these things. Let’s get stuck in.

Potential Outcomes Approach as “epidemiometrics”

In a review of Jan Tinbergen’s work, Maynard Keynes wrote:

At any rate, Prof. Tinbergen agrees that the main purpose of his method is to discover, in cases where the economist has correctly analysed beforehand the qualitative character of the causal relations, with what strength each of them operates… [1]

Nancy Cartwright cites this passage in the context of describing the business of econometrics, in the introduction to her Hunting Causes and Using Them [2]. Her idea is that econometrics assumes that economics can be an exact science, in which economic phenomena are governed by causal laws, and sets out to quantify those laws, making econometrics a fruitful domain for a study of the connection between laws and causes.

This helped me with an idea that first occurred to me at the 9th Nordic Conference of Epidemiology and Register-Based Health Research, that the potential outcomes approach to causal inference in epidemiology might be understood as the foundational work of a sub-discipline within epidemiology, related to epidemiology as econometrics is to economics. We might call it epidemiometrics.

This suggestion appears to resonate with Tyler VanderWeele’s contention that:

A distinction should be drawn between under what circumstances it is reasonable to refer to something as a cause and under what circumstances it is reasonable to speak of an estimate of a causal effect… The potential outcomes framework provides a way to quantify causal effects… [3]

The distinction between causal identification and estimation of causal effects does not resolve the various debates around the POA in epidemiology, since the charge against the POA is that as an approach (the A part in POA) it is guilty of overreach. For example, the term “causal inference” is used prominently where “quantitative causal estimation” might be more accurate [4]. 

Maybe there is a lesson here from the history of economics. While the discipline of epidemiology does not pretend to uncover causal laws, as economics does, it nevertheless does seek to uncover causal relationships, at least sometimes. The Bradford Hill viewpoints are for answering a yes/no question: “is there any other way of explaining the facts before us, is there any other answer equally, or more, likely than cause and effect?” [5]. Econometrics answers a quantitative question: what is the magnitude of the causal effect, assuming that there is one? This question deserves its own discipline because, like any quantitative question, it admits of many more precise and non-equivalent formulations, and of the development of mathematical tools. The POA deserves to be recognised not as an approach to epidemiological research, but as a discipline within epidemiology.
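The contrast between the yes/no question and the quantitative question can be made concrete with a toy cohort calculation (all counts invented for illustration):

```python
# Toy illustration of the qualitative/quantitative contrast, with invented counts.
# Bradford Hill-style question: is there a causal relationship at all (yes/no)?
# "Epidemiometric" question: assuming there is one, how large is the effect?

exposed_cases, exposed_total = 30, 1000      # hypothetical cohort data
unexposed_cases, unexposed_total = 10, 1000

risk_exposed = exposed_cases / exposed_total        # 0.03
risk_unexposed = unexposed_cases / unexposed_total  # 0.01

# Two non-equivalent quantitative formulations of "how strong is the effect?":
risk_ratio = risk_exposed / risk_unexposed          # relative: 3.0
risk_difference = risk_exposed - risk_unexposed     # absolute: 2 per 100

print(risk_ratio, risk_difference)
```

Even in this trivial case there are two non-equivalent measures (relative and absolute) answering the quantitative question, which is part of why that question admits of its own mathematical development in a way the yes/no question does not.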

Many involved in discussions of the POA (including myself and co-authors) have made the point that the POA is part of a larger toolkit and that this is not always recognised [6,7], while others have argued that causal identification is a separate goal of epidemiology from causal estimation and that it is at risk of neglect [8]. The latter components of these contentions – that the toolkit is not always recognised, and that causal identification is at risk of neglect – do not in fact concern the business of discovering or estimating causality. They are points about the way epidemiology is taught, and how it is understood by those who practise it. They are points, not about causality, but about epidemiology itself.

A disciplinary distinction between epidemiology and a sub-discipline of epidemiometrics might assist in realising this distinction that many are sensitive to, but that does not seem to have poured oil on the troubled waters of discussions of causality. By “realising”, I mean enabling institutional recognition at departmental or research unit level, enabling people to list their research interests on CVs and websites, assisting students in understanding the significance of the methods they are learning, and, most important of all, softening the dynamics between those who “advocate” and those who “oppose” the POA. To advocate econometrics over economics, or vice versa, would be nonsensical, like arguing that linear algebra is more or less important than mathematics. Likewise, to advocate or oppose epidemiometrics would be recognisably wrong-headed. There would remain questions about emphasis, completeness, relative distribution of time and resources – but not about which is the right way to achieve the larger goals.

Few people admit to “advocating” or “opposing” the methods themselves, because in any detailed discussion it immediately becomes clear that the methods are neither universally, nor never, applicable. A disciplinary distinction–or, more exactly, a distinction of a sub-discipline of study that contributes in a special way to the larger goals of epidemiology–might go a long way to alleviating the tensions that sometimes flare up, occasionally in ways that are unpleasant and to the detriment of the scientific and public health goals of epidemiology as a whole.

[1] J.M. Keynes, ‘Professor Tinbergen’s Method’, Economic Journal, 49 (195) (1939), 558-68.

[2] N. Cartwright, Hunting Causes and Using Them (New York: Cambridge University Press, 2007), 15.

[3] T. VanderWeele, ‘On causes, causal inference, and potential outcomes’, International Journal of Epidemiology, 45 (2016), 1809.

[4] M.A. Hernán and J.M. Robins, Causal Inference: What If (Boca Raton: Chapman & Hall/CRC, 2020).

[5] A. Bradford Hill, ‘The Environment and Disease: Association or Causation?’, Proceedings of the Royal Society of Medicine, 58 (1965), 299.

[6] J. Vandenbroucke, A. Broadbent, and N. Pearce, ‘Causality and causal inference in epidemiology: the need for a pluralistic approach’, International Journal of Epidemiology, 45 (2016), 1776-86.

[7] A. Broadbent, J. Vandenbroucke, and N. Pearce, ‘Response: Formalism or pluralism? A reply to commentaries on ‘Causality and causal inference in epidemiology”, International Journal of Epidemiology, 45 (2016), 1841-51.

[8] Schwartz et al., ‘Causal identification: a charge of epidemiology in danger of marginalization’, Annals of Epidemiology, 26 (2016), 669-673.

Causal Inference: IJE Special Issue

Papers from the December 2016 special issue of IJE are now all available online. Several are open access, and I attach these.

Philosophers who want to engage with real life science, on topics relating to causation, epidemiology, and medicine, will find these papers a great resource. So will epidemiologists and other scientists who want or need to reflect on causal inference. Most of the papers are not written by philosophers, and most do not start from standard philosophical starting points. Yet the topics are clearly philosophical. This collection would also form a great starting point for doctoral research projects in various science-studies disciplines.

Papers 1 and 2 were first available in January. Two letters were written in response (being made available online around April) along with a response, and I have included these in the list for completeness. The remaining papers were written during the course of 2016 and are now available. Many of the authors met at a Radcliffe Workshop in Harvard in December 2016. An account of that workshop may be forthcoming at some stage, but equally it may not, since not all of the participants felt that it was necessary to prolong the discussion or to share the outcomes of the workshop more widely. At some point I might simply write up my own account, by way of a part-philosophical, part-sociological story.

  1. ‘Causality and causal inference in epidemiology: the need for a pluralistic approach.’ Jan P Vandenbroucke, Alex Broadbent and Neil Pearce. doi: 10.1093/ije/dyv341
  2. ‘The tale wagged by the DAG: broadening the scope of causal inference and explanation for epidemiology.’ Nancy Krieger and George Davey-Smith. doi: 10.1093/ije/dyw114
    1. Letter: Tyler J. VanderWeele, Miguel A. Hernán, Eric J. Tchetgen Tchetgen, and James M. Robins. Letter to the Editor. Re: Causality and causal inference in epidemiology: the need for a pluralistic approach.
    2. Letter: Arnaud Chiolero. Letter to the Editor. Counterfactual and interventionist approach to cure risk factor epidemiology.
    3. Letter: Broadbent, A., Pearce, N., and Vandenbroucke, J. Authors’ Reply to: VanderWeele et al., Chiolero, and Schooling et al.
  3. ‘Causal inference in epidemiology: potential outcomes, pluralism and peer review.’ Douglas L Weed. doi: 10.1093/ije/dyw229
  4. ‘On Causes, Causal Inference, and Potential Outcomes.’ Tyler VanderWeele. doi: 10.1093/ije/dyw230
  5. ‘Counterfactual causation and streetlamps: what is to be done?’ James M Robins and Michael B Weissman. doi: 10.1093/ije/dyw231
  6. ‘DAGs and the restricted potential outcomes approach are tools, not theories of causation.’ Tony Blakely, John Lynch and Rebecca Bentley. doi: 10.1093/ije/dyw228
  7. ‘The formal approach to quantitative causal inference in epidemiology: misguided or misrepresented?’ Rhian M Daniel, Bianca L De Stavola and Stijn Vansteelandt. doi: 10.1093/ije/dyw227
  8. ‘Formalism or pluralism? A reply to commentaries on “Causality and causal inference in epidemiology”.’ Alex Broadbent, Jan P Vandenbroucke and Neil Pearce. doi: 10.1093/ije/dyw298
  9. ‘FACEing reality: productive tensions between our epidemiological questions, methods and mission.’ Nancy Krieger and George Davey-Smith. doi: 10.1093/ije/dyw330

Paper: Causality and Causal Inference in Epidemiology: the Need for a Pluralistic Approach

Delighted to announce the online publication of this paper in the International Journal of Epidemiology, with Jan Vandenbroucke and Neil Pearce: ‘Causality and Causal Inference in Epidemiology: the Need for a Pluralistic Approach’.

This paper has already generated some controversy and I’m really looking forward to talking about it with my co-authors at the London School of Hygiene and Tropical Medicine on 7 March. (I’ll also be giving some solo talks while in the UK, at Cambridge, UCL, and Oxford, as well as one in Bergen, Norway.)

The paper is on the same topic as a single-authored paper of mine published in late 2015, ‘Causation and Prediction in Epidemiology: a Guide to the Methodological Revolution’. But it is much shorter, and nonetheless manages to add a lot that was not present in my sole-authored paper – notably a methodological dimension of which, as a philosopher by training, I was ignorant. The co-authoring process was thus really rich and interesting for me.

It also makes me think that philosophy papers should be shorter… Do we really need the first 2500 words summarising the current debate, etc.? I wonder if a more compressed style might actually stimulate more thinking, even if the resulting papers are less argumentatively airtight. One might wonder how often the airtight ideal is achieved even at traditional length… Who was it who said that in philosophy, it’s all over by the end of the first page?

America Tour: Attribution, prediction, and the causal interpretation problem in epidemiology

Next week I’ll be visiting America to talk in Pittsburgh, Richmond, and twice at Tufts. I do not expect audience overlap so I’ll give the same talk in all venues, with adjustments for audience depending on whether it’s primarily philosophers or epidemiologists I’m talking to. The abstract is below. I haven’t got a written version of the paper that I can share yet but would of course welcome comments at this stage.

ABSTRACT

Attribution, prediction, and the causal interpretation problem in epidemiology

In contemporary epidemiology, there is a movement, part theoretical and part pedagogical, attempting to discipline and clarify causal thinking. I refer to this movement as the Potential Outcomes Approach (POA). It draws inspiration from the work of Donald Rubin and, more recently, Judea Pearl, among others. It is most easily recognized by its use of Directed Acyclic Graphs (DAGs) to describe causal situations, but DAGs are not the conceptual basis of the POA in epidemiology. The conceptual basis (as I have argued elsewhere) is a commitment to the view that the hallmark of a meaningful causal claim is that it can be used to make predictions about hypothetical scenarios. Elsewhere I have argued that this commitment is problematic (notwithstanding the clear connections with counterfactual, contrastive and interventionist views in philosophy). In this paper I take a more constructive approach, seeking to address the problem that troubles advocates of the POA. This is the causal interpretation problem (CIP). We can calculate various quantities that are supposed to be measures of causal strength, but it is not always clear how to interpret these quantities. Measures of attributability are most troublesome here, and these are the measures on which POA advocates focus. What does it mean, they ask, to say that a certain fraction of population risk of mortality is attributable to obesity? The pre-POA textbook answer is that, if obesity were reduced, mortality would be correspondingly lower. But this is not obviously true, because there are methods for reducing obesity (smoking, cholera infection) which will not reduce mortality. In general, say the POA advocates, a measure of attributability tells us next to nothing about the likely effect of any proposed public health intervention, rendering these measures useless, and so, for epidemiological purposes, meaningless.
In this paper I ask whether there is a way to address and resolve the causal interpretation problem without resorting to the extreme view that a meaningful causal claim must always support predictions in hypothetical scenarios. I also seek connections with the notorious debates about heritability.
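For readers unfamiliar with the quantity at issue, here is a minimal numerical sketch using Levin's formula for the population attributable fraction. All figures are invented for illustration:

```python
# Population attributable fraction (PAF): the textbook quantity whose causal
# interpretation is at issue. All figures here are invented for illustration.

def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Suppose (hypothetically) 30% of a population is obese, and obesity carries
# a relative risk of mortality of 1.5.
paf = population_attributable_fraction(0.30, 1.5)
print(f"{paf:.1%} of mortality risk 'attributable' to obesity")  # about 13%

# The causal interpretation problem: this number does NOT tell us that removing
# obesity by any means would lower mortality by that fraction. Some ways of
# reducing obesity (e.g. taking up smoking) would not reduce mortality at all.
```

The arithmetic is trivial; the philosophical question is what the resulting number licenses us to say about hypothetical interventions.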

Is consistency trivial in randomized controlled trials?

Here are some more thoughts on Hernán and Taubman’s famous 2008 paper, from a chapter I am finalising for the epidemiology entry in a collection on the philosophy of medicine. I realise I have made a similar point in an earlier post on this blog, but I think I am getting closer to a crisp expression. The point concerns the claimed advantage of RCTs for ensuring consistency. Thoughts welcome!

Hernán and Taubman are surely right to warn against too-easy claims about “the effect of obesity on mortality”, when there are multiple ways to reduce obesity, each with different effects on mortality, and perhaps no ethically acceptable way to bring about a sudden change in body mass index from say 30 to 22 (Hernán and Taubman 2008, 22). To this extent, their insistence on assessing causal claims as contrasts to well-defined interventions is useful.

On the other hand, they imply some conclusions that are harder to accept. They suggest, for example, that observational studies are inherently more likely to suffer from this sort of difficulty, and that experimental studies (randomized controlled trials) will ensure that interventions are well-specified. They express their point using the technical term “consistency”:

consistency… can be thought of as the condition that the causal contrast involves two or more well-defined interventions. (Hernán and Taubman 2008, S10)

They go on:

…consistency is a trivial condition in randomized experiments. For example, consider a subject who was assigned to the intervention group … in your randomized trial. By definition, it is true that, had he been assigned to the intervention, his counterfactual outcome would have been equal to his observed outcome. But the condition is not so obvious in observational studies. (Hernán and Taubman 2008, S11)

This is a non-sequitur, however, unless we appeal to a background assumption that an intervention—something that an actual human investigator actually does—is necessarily well-defined. Without this assumption, there is nothing to underwrite the claim that “by definition”, if a subject actually assigned to the intervention had been assigned to the intervention, he would have had the outcome that he actually did have.

Consider the intervention in their paper, one hour of strenuous exercise per day. “Strenuous exercise” is not a well-defined intervention. Weightlifting? Karate? Swimming? The assumption behind their paper seems to be that if an investigator “does” an intervention, it is necessarily well-defined; but on reflection this is obviously not true. An investigator needs to have some knowledge of which features of the intervention might affect the outcome (such as what kind of exercise one performs), and thus need to be controlled, and which don’t (such as how far west of Beijing one lives). Even randomization will not protect against confounding arising from preference for a certain type of exercise (perhaps because people with healthy hearts are predisposed both to choose running and to live longer, for example), unless one knows to randomize the assignment of exercise-types and not to leave it to the subjects’ choice.

This is exactly the same kind of difficulty that Hernán and Taubman press against observational studies. So the contrast they wish to draw, between “trivial” consistency in randomized trials and a much more problematic situation in observational studies, is a mirage. Both can suffer from failure to define interventions.
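The point about randomizing assignment without randomizing exercise-type can be illustrated with a hypothetical simulation. All parameters are invented: a hidden trait (a healthy heart) drives both the choice of running and survival, so a difference in mortality appears between exercise types even though, by construction, the type has no causal effect and arm assignment is randomized.

```python
# Hypothetical simulation of the confounding point above: randomize
# "exercise vs none", but let subjects choose the exercise type, with a
# hidden trait driving both the choice of running and survival.
# All parameters are invented for illustration.

import random

random.seed(0)

def simulate(n=100_000):
    deaths = {"running": [0, 0], "weights": [0, 0]}  # [deaths, total] per type
    for _ in range(n):
        healthy_heart = random.random() < 0.5
        assigned_exercise = random.random() < 0.5  # the randomized arm
        if not assigned_exercise:
            continue
        # Subjects choose the type: healthy-hearted people prefer running.
        prefers_running = random.random() < (0.8 if healthy_heart else 0.2)
        choice = "running" if prefers_running else "weights"
        # By construction the exercise *type* has no causal effect on
        # mortality; only the hidden heart trait matters.
        died = random.random() < (0.05 if healthy_heart else 0.20)
        deaths[choice][0] += died
        deaths[choice][1] += 1
    return {k: d / t for k, (d, t) in deaths.items()}

risks = simulate()
# Runners appear to die less than weightlifters (roughly 0.08 vs 0.17 under
# these parameters), even though the exercise type is causally inert.
print(risks)
```

The spurious contrast persists however large the trial, because randomizing the arm does nothing to randomize the choice of exercise type within it, which is exactly the sense in which the intervention "strenuous exercise" is not well-defined.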