From Judea Pearl’s blog: report of a webinar: “Artificial Intelligence and COVID-19: A wake-up call” #epitwitter @TheBJPS

Check the entry on Pearl’s blog, which includes a write-up provided by the organisers

Video of the event is available too

“…regardless of government interventions [, after] around a two-week exponential growth of cases (and, subsequently, deaths) some kind of break kicks in, and growth starts slowing down. The curve quickly becomes ‘sub-exponential’.”

https://unherd.com/thepost/nobel-prize-winning-scientist-the-covid-19-epidemic-was-never-exponential/

Freddie Sayers of Unherd interviews Michael Levitt (a Nobel-prize-winning non-epidemiologist) on a purely statistical observation of the pattern of the epidemic. Given that the only way we have of measuring the effectiveness of government interventions is statistical, that’s interesting. The fun stuff (epidemiological and statistical) comes in deciding whether the correlation is causal. But there has been little progress with that, in my opinion; in fact it is here that the epidemiological profession has disappointed me – it is as if epidemiology has forgotten everything it ever taught itself about causal inference. Against that background, this interview ought to give pause for thought.
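By way of illustration only (this is my own sketch, not Levitt’s analysis, and the numbers are invented), the distinction at issue is between a curve whose per-day growth rate stays constant and one whose growth rate decays from early on:

```python
import numpy as np

# Illustration only: compare a purely exponential case curve with a Gompertz
# curve, which looks exponential at first but whose growth rate decays over
# time -- the "sub-exponential" pattern. All numbers are invented.
days = np.arange(60)

exponential = 10 * np.exp(0.2 * days)                   # constant growth rate
gompertz = 50_000 * np.exp(-5 * np.exp(-0.08 * days))   # decaying growth rate

def daily_growth_rate(cumulative):
    """Approximate per-day growth rate, d(log N)/dt, from a cumulative series."""
    return np.diff(np.log(cumulative))

# The exponential series stays near 0.20/day throughout; the Gompertz series
# starts around 0.4/day and falls steadily -- growth that "quickly becomes
# sub-exponential" even though the early points look exponential.
print(daily_growth_rate(exponential)[[0, 14, 40]])
print(daily_growth_rate(gompertz)[[0, 14, 40]])
```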

Lower COVID risk among smokers?… #epitwitter

Some evidence that some smokers may in fact have LESS serious symptoms than non-smokers. Interesting! Wonder if this will hold water on further investigation. The researchers are now planning to test nicotine patches.

https://www.theguardian.com/world/2020/apr/22/french-study-suggests-smokers-at-lower-risk-of-getting-coronavirus

The role of philosophers in the coronavirus pandemic

What is the point of philosophy? That’s a question many philosophers struggle with, not just because it is difficult to answer. That goes for many academic disciplines, including “hard” sciences and applied disciplines like economics. However, unlike physicists and economists, philosophers ought to be able to answer this question, in the perception of many. And many of us can’t, at least to our own satisfaction.

I’ve written some opinion pieces (1,2) and given some interviews during this period, and I know of a handful of other philosophers who have done so (like Benjamin Smart, Arthur Caplan, and Stefano Canali). However, I also know of philosophers who have expressed frustration at the “uselessness” of philosophy in times like these. At the same time, I’ve seen an opinion piece by a computer scientist, whose expert contribution is confined to the nature of exponential growth: something that all of us with a basic mathematical education have studied, and which anyone subject to a compound interest rate, for example through a mortgage, will have directly experienced.

Yet computer science hasn’t covered itself in glory in this epidemic. Machine learning publications claiming to arrive at predictive models in a matter of weeks have been notably lacking in this episode, confirming, for me, the view that machine learning and epidemiology have yet to interact meaningfully. Why do computer scientists (only one, admittedly; most of them are surely more sensible) and philosophers have such different levels of confidence in pronouncing on matters beyond their expertise?

There are no experts on the COVID-19 pandemic

This pandemic is subject to nobody’s expertise. It’s a novel situation, and expertise is remarkably useless when things change, as economists discovered in 2008 and pollsters in 2016.

Of course, parts of the current situation fall within the domains of various experts. Infectious disease epidemiologists can predict its spread. But there is considerably more to this pandemic than predicting its spread. In particular, the prediction of the difference that interventions make requires a grasp of causal inference that is a distinct skill set from that of the prediction of a trend, as proponents of the potential outcomes approach have correctly pointed out. Likewise, the attribution, after the fact, of a certain outcome to an intervention only makes good sense when we know what course of action we are comparing that intervention with; and this may be underspecified, because the “would have died otherwise” trend is so hard to establish.

Non-infectious-disease epidemiologists may understand the conceptual framework, methodology, terminology and pitfalls of the current research on the pandemic, but they do not necessarily have better subject-specific expertise than many in public health, the medical field, or others with a grasp of epidemiological principles. Scientists from other disciplines may be worse than the layperson because, like the computer scientist just mentioned, they wrongly assume that their expertise is relevant, and in doing so either simplify the issue to a childish extent or make pronouncements that are plain wrong. (Epidemiology is, in my view, widely under-respected by other scientists.)

Turning to economics and politics, economists can predict the economic consequences of a pandemic, or of measures to control it, only if they have input from infectious disease epidemiologists on the predictive claims whose impacts they are seeking to assess.

Moreover, the health impacts of economic policies are well studied by epidemiologists, and to some extent by health economists; but neither group is typically knowledgeable about the epidemiology of infectious disease outbreaks of this nature.

Jobs for philosophers

In this situation, my opinion is that philosophers can contribute substantially. My own thinking has been around cost-benefit analysis of public health interventions, and in particular the neglect of the health impact – above all in very different global locations – of boilerplate measures being recommended to combat the health impact of the virus. This is an obvious lacuna, and an especially pressing one for me as I sit writing this in my nice study in Johannesburg, where most people do not have a nice study. Africa is always flirting with famine (there are people who will regard this as an insult; it is not). Goldman Sachs is predicting a 24% decline in US GDP next quarter.

If this does not cost lives in Africa, that would be remarkable. It might even cost more lives than the virus would, in a region where only 3% are over 65 (and there’s no evidence that HIV status makes a difference to outcomes of COVID-19). South Africa is weeks into the epidemic and saw its first two deaths just today.

Yet the epidemiological community (at least on my Twitter feed) has entirely ignored the health consequences of the interventions themselves, merely pointing out that the virus will have its own economic impact even without interventions – which is like justifying the Bay of Pigs by pointing out that Castro would have killed people even without the attempted invasion. And context is nearly totally ignored. The discipline appears mostly to have fallen in behind the view that the stronger the measure, the more laudable. Weirdly, those who usually press for more consideration of social angles seem no less in favour, despite the fact that they spend most of the rest of their time arguing that poverty is wrongly neglected as a cause of ill-health.

Do I sound disappointed in the science that I’m usually so enthusiastic about, and that shares with philosophy the critical study of the unknown? Here we have a virus that may well claim a larger death toll in richer countries with older populations, and a set of measures that are designed by and for those countries, and a total lack of consideration of local context. Isn’t this remarkable?

There is more to say, and many objections to consider; I’ll write this up in an academically rigorous way as soon as I can. Meanwhile, I’ll continue to publish opinion pieces where I think it’s useful. Right now, my point is that there’s a lot for philosophers to dissect here – not just in this particular problem, but in the pandemic as a whole. And the points don’t have to be rocket science. They can be as simple as recommending that a ban on the sale of cigarettes be lifted.

What is required for us to be useful, however, is that we apply our critical thinking skills to the issue at hand. Falling in with common political groupings adds nothing unique and requires the suspension of the same critical faculties that we philosophers pride ourselves on in other contexts. This is a situation where nearly all the information on which decisions are being made is publicly available, where none of it is the exclusive preserve of a single discipline, and where fear clouds rational thought. Expert analyses of specific technical problems are also readily available. These are ideal conditions for someone trained to apply analytic skills in a relatively domain-free manner to contribute usefully.

Off the top of my head, here are a handful of topic ideas:

  • How to circumscribe the consequences of COVID-19 that we are interested in when devising our measures of intervention (this is an ethical spin on the issue I’m interested in above)
  • The nature of good prediction (which I’ve worked on in the public health context – but there is so much more to say)
  • The epistemology of testimony, especially concerning expertise, in a context of minimal information (to get us past the “trust the scientists FFS” dogma – that’s an actual quote from Twitter)
  • The weighing of the rights of different groups, given the trade-off between young and old deaths (COVID-19 kills almost no children, while they will die in droves in a famine)

One’s own expertise will suggest other topics, provided that the effort is to think critically rather than simply identify people with whom one agrees. I very much hope that we will not see a straightforward application of existing topics: inductive risk and coronavirus; definition of health and coronavirus; rights and coronavirus; etc. To be clear, I’m not saying that no treatment of coronavirus can mention inductive risk, definition of health, or rights; just that the treatment must start with the coronavirus. My motto in working on the philosophy of epidemiology is that my work is philosophical in character but epidemiological in subject: it is philosophical work about epidemiology. Where it suggests modifications to existing debates in philosophy, as does happen, that is great, but it’s not the purpose. The idea is to identify new problems, not to cast old ones in a new light. Perhaps there are no such things as new philosophical problems; but then again, perhaps it’s only by trying to identify new problems that we can cast new light on old ones.

Call to arms

The skill of philosophers, and the value in philosophy, does not lie in our knowledge of debates that we have had with each other. It lies in our ability to think fruitfully about the unfamiliar, the disturbing, the challenging, and even the abhorrent. The coronavirus pandemic is all these things. Let’s get stuck in.

Causal Inference: IJE Special Issue

Papers from the December 2016 special issue of IJE are now all available online. Several are open access, and I attach these.

Philosophers who want to engage with real-life science, on topics relating to causation, epidemiology, and medicine, will find these papers a great resource. So will epidemiologists and other scientists who want or need to reflect on causal inference. Most of the papers are not written by philosophers, and most do not start from standard philosophical starting points. Yet the topics are clearly philosophical. This collection would also form a great starting point for doctoral research projects in various science-studies disciplines.

Papers 1 and 2 were first available in January. Two letters were written in response (made available online around April), along with a reply, and I have included these in the list for completeness. The remaining papers were written during the course of 2016 and are now available. Many of the authors met at a Radcliffe Workshop at Harvard in December 2016. An account of that workshop may be forthcoming at some stage, but equally it may not, since not all of the participants felt it necessary to prolong the discussion or to share the outcomes of the workshop more widely. At some point I might simply write up my own account, by way of a part-philosophical, part-sociological story.

  1. ‘Causality and causal inference in epidemiology: the need for a pluralistic approach.’ Jan P Vandenbroucke, Alex Broadbent and Neil Pearce. doi: 10.1093/ije/dyv341
  2. ‘The tale wagged by the DAG: broadening the scope of causal inference and explanation for epidemiology.’ Nancy Krieger and George Davey-Smith. doi: 10.1093/ije/dyw114
    1. Letter: Tyler J. VanderWeele, Miguel A. Hernán, Eric J. Tchetgen Tchetgen, and James M. Robins. Letter to the Editor. Re: Causality and causal inference in epidemiology: the need for a pluralistic approach.
    2. Letter: Arnaud Chiolero. Letter to the Editor. Counterfactual and interventionist approach to cure risk factor epidemiology.
    3. Letter: Broadbent, A., Pearce, N., and Vandenbroucke, J. Authors’ Reply to: VanderWeele et al., Chiolero, and Schooling et al.
  3. ‘Causal inference in epidemiology: potential outcomes, pluralism and peer review.’ Douglas L Weed. doi: 10.1093/ije/dyw229
  4. ‘On Causes, Causal Inference, and Potential Outcomes.’ Tyler VanderWeele. doi: 10.1093/ije/dyw230
  5. ‘Counterfactual causation and streetlamps: what is to be done?’ James M Robins and Michael B Weissman. doi: 10.1093/ije/dyw231
  6. ‘DAGs and the restricted potential outcomes approach are tools, not theories of causation.’ Tony Blakely, John Lynch and Rebecca Bentley. doi: 10.1093/ije/dyw228
  7. ‘The formal approach to quantitative causal inference in epidemiology: misguided or misrepresented?’ Rhian M Daniel, Bianca L De Stavola and Stijn Vansteelandt. doi: 10.1093/ije/dyw227
  8. Formalism or pluralism? A reply to commentaries on ‘Causality and causal inference in epidemiology.’ Alex Broadbent, Jan P Vandenbroucke and Neil Pearce. doi: 10.1093/ije/dyw298
  9. ‘FACEing reality: productive tensions between our epidemiological questions, methods and mission.’ Nancy Krieger and George Davey-Smith. doi: 10.1093/ije/dyw330

Paper: Causality and Causal Inference in Epidemiology: the Need for a Pluralistic Approach

Delighted to announce the online publication of this paper in the International Journal of Epidemiology, with Jan Vandenbroucke and Neil Pearce: ‘Causality and Causal Inference in Epidemiology: the Need for a Pluralistic Approach’.

This paper has already generated some controversy and I’m really looking forward to talking about it with my co-authors at the London School of Hygiene and Tropical Medicine on 7 March. (I’ll also be giving some solo talks while in the UK, at Cambridge, UCL, and Oxford, as well as one in Bergen, Norway.)

The paper is on the same topic as a single-authored paper of mine published in late 2015, ‘Causation and Prediction in Epidemiology: a Guide to the Methodological Revolution’. But it is much shorter, and nonetheless manages to add a lot that was not present in my sole-authored paper – notably a methodological dimension of which, as a philosopher by training, I was ignorant. The co-authoring process was thus really rich and interesting for me.

It also makes me think that philosophy papers should be shorter… Do we really need the first 2,500 words summarising the current debate, etc.? I wonder if a more compressed style might actually stimulate more thinking, even if the resulting papers are less argumentatively airtight. One might wonder how often the airtight ideal is achieved even with papers of traditional length… Who was it who said that in philosophy, it’s all over by the end of the first page?

America Tour: Attribution, prediction, and the causal interpretation problem in epidemiology

Next week I’ll be visiting America to talk in Pittsburgh, Richmond, and twice at Tufts. I do not expect audience overlap, so I’ll give the same talk in all venues, with adjustments depending on whether it’s primarily philosophers or epidemiologists I’m talking to. The abstract is below. I haven’t got a written version of the paper that I can share yet, but I would of course welcome comments at this stage.

ABSTRACT

Attribution, prediction, and the causal interpretation problem in epidemiology

In contemporary epidemiology, there is a movement, part theoretical and part pedagogical, attempting to discipline and clarify causal thinking. I refer to this movement as the Potential Outcomes Approach (POA). It draws inspiration from the work of Donald Rubin and, more recently, Judea Pearl, among others. It is most easily recognized by its use of Directed Acyclic Graphs (DAGs) to describe causal situations, but DAGs are not the conceptual basis of the POA in epidemiology. The conceptual basis (as I have argued elsewhere) is a commitment to the view that the hallmark of a meaningful causal claim is that it can be used to make predictions about hypothetical scenarios. Elsewhere I have argued that this commitment is problematic (notwithstanding the clear connections with counterfactual, contrastive and interventionist views in philosophy). In this paper I take a more constructive approach, seeking to address the problem that troubles advocates of the POA. This is the causal interpretation problem (CIP). We can calculate various quantities that are supposed to be measures of causal strength, but it is not always clear how to interpret these quantities. Measures of attributability are most troublesome here, and these are the measures on which POA advocates focus. What does it mean, they ask, to say that a certain fraction of population risk of mortality is attributable to obesity? The pre-POA textbook answer is that, if obesity were reduced, mortality would be correspondingly lower. But this is not obviously true, because there are methods for reducing obesity (smoking, cholera infection) which will not reduce mortality. In general, say the POA advocates, a measure of attributability tells us next to nothing about the likely effect of any proposed public health intervention, rendering these measures useless, and so, for epidemiological purposes, meaningless. In this paper I ask whether there is a way to address and resolve the causal interpretation problem without resorting to the extreme view that a meaningful causal claim must always support predictions in hypothetical scenarios. I also seek connections with the notorious debates about heritability.
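For readers unfamiliar with measures of attributability, here is a minimal numerical sketch of the quantity at issue, using Levin’s formula with invented numbers; the closing comment gestures at the interpretive problem the abstract describes. This is my own illustration, not part of the talk.

```python
# Minimal numerical sketch of a population attributable fraction (invented numbers).
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Suppose 30% of a population is obese and obesity carries a relative risk
# of 1.5 for death over some period.
paf = population_attributable_fraction(prevalence=0.30, relative_risk=1.5)
print(f"Attributable fraction: {paf:.0%}")  # roughly 13%

# The textbook gloss: mortality would be ~13% lower "if obesity were removed".
# The POA worry rehearsed above: the number itself is silent on HOW obesity is
# reduced, and some ways of reducing it would not reduce mortality at all.
```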

Is consistency trivial in randomized controlled trials?

Here are some more thoughts on Hernan and Taubman’s famous 2008 paper, from a chapter I am finalising for the epidemiology entry in a collection on the philosophy of medicine. I realise I have made a similar point in an earlier post on this blog, but I think I am getting closer to a crisp expression. The point concerns the claimed advantage of RCTs for ensuring consistency. Thoughts welcome!

Hernan and Taubman are surely right to warn against too-easy claims about “the effect of obesity on mortality”, when there are multiple ways to reduce obesity, each with different effects on mortality, and perhaps no ethically acceptable way to bring about a sudden change in body mass index from, say, 30 to 22 (Hernán and Taubman 2008, 22). To this extent, their insistence on assessing causal claims as contrasts to well-defined interventions is useful.

On the other hand, they imply some conclusions that are harder to accept. They suggest, for example, that observational studies are inherently more likely to suffer from this sort of difficulty, and that experimental studies (randomized controlled trials) will ensure that interventions are well-specified. They express their point using the technical term “consistency”:

consistency… can be thought of as the condition that the causal contrast involves two or more well-defined interventions. (Hernán and Taubman 2008, S10)

They go on:

…consistency is a trivial condition in randomized experiments. For example, consider a subject who was assigned to the intervention group … in your randomized trial. By definition, it is true that, had he been assigned to the intervention, his counterfactual outcome would have been equal to his observed outcome. But the condition is not so obvious in observational studies. (Hernán and Taubman 2008, s11)

This is a non-sequitur, however, unless we appeal to a background assumption that an intervention—something that an actual human investigator actually does—is necessarily well-defined. Without this assumption, there is nothing to underwrite the claim that “by definition”, if a subject actually assigned to the intervention had been assigned to the intervention, he would have had the outcome that he actually did have.

Consider the intervention in their paper, one hour of strenuous exercise per day. “Strenuous exercise” is not a well-defined intervention. Weightlifting? Karate? Swimming? The assumption behind their paper seems to be that if an investigator “does” an intervention, it is necessarily well-defined; but on reflection this is obviously not true. An investigator needs to have some knowledge of which features of the intervention might affect the outcome (such as what kind of exercise one performs), and thus need to be controlled, and which don’t (such as how far west of Beijing one lives). Even randomization will not protect against confounding arising from preference for a certain type of exercise (perhaps because people with healthy hearts are predisposed both to choose running and to live longer, for example), unless one knows to randomize the assignment of exercise-types and not to leave it to the subjects’ choice.

This is exactly the same kind of difficulty that Hernan and Taubman press against observational studies. So the contrast they wish to draw, between “trivial” consistency in randomized trials and a much more problematic situation in observational studies, is a mirage. Both can suffer from failure to define interventions.
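To make the point concrete, here is a toy simulation of my own construction (invented numbers, not anything from Hernán and Taubman): subjects are randomized to “strenuous exercise” or not, but choose the type of exercise themselves, with healthier hearts gravitating towards running. The arm contrast comes out as randomization promises, but the comparison of exercise types within the trial is confounded just as it would be in an observational study, so the trial does not tell us the effect of any well-specified exercise intervention.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Baseline cardiovascular health; randomization balances it across arms.
health = rng.normal(0.0, 1.0, n)
assigned_exercise = rng.integers(0, 2, n).astype(bool)  # "one hour of strenuous exercise"

# Subjects assigned to exercise pick the type themselves: healthier hearts
# are more likely to pick running than yoga.
picks_running = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * health))

# Stipulate (for illustration) that running and yoga confer the SAME benefit.
survival = health + 0.5 * assigned_exercise + rng.normal(0.0, 1.0, n)

# Arm contrast: close to the true 0.5, as randomization promises.
print(survival[assigned_exercise].mean() - survival[~assigned_exercise].mean())

# Within-arm contrast of exercise types: running appears to beat yoga by a
# wide margin, purely because healthier subjects chose it -- ordinary
# confounding, inside a randomized trial, for the under-specified contrast.
runners = assigned_exercise & picks_running
yogis = assigned_exercise & ~picks_running
print(survival[runners].mean() - survival[yogis].mean())
```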

Workshop, Helsinki: What do diseases and financial crises have in common?

AID Forum: “Epidemiology: an approach with multidisciplinary applicability”

(Unfamiliar with AID forum? For the very idea and the programme of Agora for Interdisciplinary Debate, see www.helsinki.fi/tint/aid.htm)

DISCUSSED BY:

Mervi Toivanen (economics, Bank of Finland)

Jaakko Kaprio (genetic epidemiology, U of Helsinki)

Alex Broadbent (philosophy of science, U of Johannesburg)

Moderated by Academy professor Uskali Mäki

Session jointly organised by TINT (www.helsinki.fi/tint) and the Finnish Epidemiological Society (www.finepi.org)

TIME AND PLACE:

Monday 9 February, 16:15-18

University Main Building, 3rd Floor, Room 5

http://www.helsinki.fi/teknos/opetustilat/keskusta/f33/ls5.htm

TOPIC: What do diseases and financial crises have in common?

Epidemiology has traditionally been used to model the spread of diseases in populations at risk. By applying parameters related to agents’ responses to infection and their networks of contacts, it helps to study how diseases occur, why they spread, and how one could prevent epidemic outbreaks. For decades, epidemiology has also studied non-communicable diseases, such as cancer, cardiovascular disease, addictions and accidents. Descriptive epidemiology focuses on providing accurate information on the occurrence (incidence, prevalence and survival) of the condition. Etiological epidemiology seeks to identify the determinants, be they infectious agents, environmental or social exposures, or genetic variants. A central goal is to identify determinants amenable to intervention, and hence the prevention of disease.

There is thus a need to consider both reverse causation and confounding as possible alternative explanations to a causal one. Novel designs are providing new tools to address these issues. But epidemiology also provides an approach that has broad applicability to a number of domains covered by multiple disciplines. For example, it is widely and successfully used to explain the propagation of computer viruses, macroeconomic expectations and rumours in a population over time.

As a consequence, epidemiological concepts such as “super-spreader” have also found their way into the economic literature that deals with financial stability issues. There is an obvious analogy between the prevention of diseases and the design of economic policies against the threat of financial crises. The purpose of this session is to discuss the applicability of epidemiology across various domains and the possibilities for mutual benefit from common concepts and methods.
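For those less familiar with the formalism being borrowed, here is a minimal sketch (not part of the session readings; parameters are invented) of the standard SIR model; the same structure is what gets reused for computer viruses, rumours, and macroeconomic expectations.

```python
import numpy as np

# The "infection" here could equally be a pathogen, a computer virus, a rumour,
# or a macroeconomic expectation; beta is the transmission rate per contact,
# gamma the recovery (or forgetting) rate.
def sir(beta, gamma, i0=0.001, days=160, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        history.append((s, i, r))
    return np.array(history)

# R0 = beta / gamma = 2.5: above 1, so the "outbreak" takes off and then
# burns out as the susceptible pool is depleted.
out = sir(beta=0.5, gamma=0.2)
print("peak infected share:", out[:, 1].max().round(3))
print("final attack rate:  ", out[-1, 2].round(3))
```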

QUESTIONS:

1. Why is epidemiology so broadly applicable?

2. What similarities and differences prevail between these various disciplinary applications?

3. What can they learn from one another, and could the cooperation within disciplines be enhanced?

4. How could the endorsement of concepts and ideas across disciplines be improved?

5. Can epidemiology help to resolve causality?

READINGS:

Alex Broadbent, Philosophy of Epidemiology (Palgrave Macmillan 2013)

http://www.palgrave.com/page/detail/?sf1=id_product&st1=535877

Alex Broadbent’s blog on the philosophy of epidemiology:

https://philosepi.wordpress.com/

Rothman KJ, Greenland S, Lash TL. Modern Epidemiology, 3rd edition. Lippincott, Philadelphia, 2008.

D’Onofrio BM, Lahey BB, Turkheimer E, Lichtenstein P. Critical need for family-based, quasi-experimental designs in integrating genetic and social science research. Am J Public Health. 2013 Oct;103 Suppl 1:S46-55. doi:10.2105/AJPH.2013.301252.

Taylor, AE, Davies, NM, Ware, JJ, VanderWeele, T, Smith, GD & Munafò, MR 2014, ‘Mendelian randomization in health research: Using appropriate genetic variants and avoiding biased estimates’. Economics and Human Biology, vol. 13, pp. 99–106.

Engholm G, Ferlay J, Christensen N, Kejs AMT, Johannesen TB, Khan S, Milter MC, Ólafsdóttir E, Petersen T, Pukkala E, Stenz F, Storm HH. NORDCAN: Cancer Incidence, Mortality, Prevalence and Survival in the Nordic Countries, Version 7.0 (17.12.2014). Association of the Nordic Cancer Registries. Danish Cancer Society. Available from http://www.ancr.nu.

Andrew G. Haldane, ‘Rethinking the financial network’; speech by Mr Haldane, Executive Director, Financial Stability, Bank of England, at the Financial Student Association, Amsterdam, 28 April 2009: http://www.bis.org/review/r090505e.pdf

Antonios Garas et al., Worldwide spreading of economic crisis: http://iopscience.iop.org/1367-2630/12/11/113043/pdf/1367-2630_12_11_113043.pdf

Christopher D. Carroll, The epidemiology of macroeconomic expectations: http://www.econ2.jhu.edu/people/ccarroll/epidemiologySFI.pdf

Is the Methodological Axiom of the Potential Outcomes Approach Circular?

Hernan, VanderWeele, and others argue that causation (or a causal question) is well-defined when interventions are well-specified. I take this to be a sort of methodological axiom of the approach.

But what is a well-specified intervention?

Consider an example from Hernan and Taubman’s influential 2008 paper on obesity. In that paper, BMI is shown up as failing to correspond to a well-specified intervention; better-specified interventions include one hour of strenuous physical exercise per day (among others).

But what kind of exercise? One hour of running? Powerlifting? Yoga? Boxing?

It might matter – it might turn out that, say, boxing and running for an hour a day reduce BMI by similar amounts but that one of them is associated with longer life. Or it might turn out not to matter. Either way, it would be a matter of empirical inquiry.

This has two consequences for the mantra that well-defined causal questions require well-specified interventions.

First, as I’ve pointed out before on this blog, it means that experimental studies don’t necessarily guarantee well-specified interventions. Just because you can do it doesn’t mean you know what you are doing. The differences you might think don’t matter might matter: different strains of broccoli might have totally different effects on mortality, etc.

Second, more fundamentally, it means that the whole approach is circular. You need a well-specified intervention for a good empirical inquiry into causes and you need good empirical inquiry into causes to know whether your intervention is well-specified.

To me this seems to be a potentially fatal consequence for the claim that well-defined causal questions require well-specified interventions. For if that were true, we would be trapped in a circle, and could never have any well-specified interventions, and thus no well-defined causal questions either. Therefore either we really are trapped in that circle; or we can have well-defined causal questions, in which case, it is false that these always require well-specified interventions.

This is a line of argument I’m developing at present, inspired in part by Vandenbroucke and Pearce’s critique of the “methodological revolution” at the recent WCE 2014 in Anchorage. I would welcome comments.