Western Cape COVID-19 levels higher than rest of SA. Is it because they defy lockdown there? Probably not, says phone data

https://www.ecologi.st/post/covid/ Evidence from phone data that W Cape adherence to lockdown has been quite strict, so lack of adherence is unlikely to be the cause of the spike there. Thanks to Monomiat Ebrahim for the share.

Wondering if this means it is more likely to be:

1. A demographic feature such as age

2. A latitude feature – around the equator, COVID-19 has generally been less prevalent

3. A climate feature

4. High concentrations of “starters” leading to a critical mass for an epidemic

…add your pet hypothesis here!

From Judea Pearl’s blog: report of a webinar: “Artificial Intelligence and COVID-19: A wake-up call” #epitwitter @TheBJPS

Check the entry on Pearl’s blog, which includes a write-up provided by the organisers.

Video of the event is available too

Predicting Pandemics: Lessons from (and for) COVID-19

This is a live online discussion between Jonathan Fuller and Alex Broadbent, hosted by the Institute for the Future of Knowledge in partnership with the Library of the University of Johannesburg. Comments and discussion are hosted on this page, and you can watch the broadcast here:

We know considerably more about COVID-19 than anyone has previously known about a pandemic of a new disease. Yet we are uncertain about what to do. Even where it appears obvious that strategies have worked or failed, it will take some time to establish that the observed trends are fully or even partly explained by anything we did or didn’t do. And when we take a lesson from one place and try to apply it in another, we have to contend with the huge differences between different places in the world, especially age and wealth. This conversation explores these difficulties, in the hope of improving our response to the uncertainty that always accompanies pandemics, our ability to tell what works, our sensitivity to context, and thus our collective ability to arrive at considered decisions with clearly identified goals, based on a comprehensive assessment of the relevant costs, benefits, risks, and other factors.

Further reading:

Professor Alex Broadbent (PhD) is Director of the Institute for the Future of Knowledge at the University of Johannesburg and Professor of Philosophy at the University of Johannesburg. He specialises in prediction, causal inference, and explanation, especially in epidemiology and medicine. He publishes in major journals in philosophy, epidemiology, medicine and law, and his books include the path-breaking Philosophy of Epidemiology (Palgrave 2013) and Philosophy of Medicine (Oxford University Press 2019).

Dr Jonathan Fuller (PhD, MD) is a philosopher working in philosophy of science, especially philosophy of medicine. He is an Assistant Professor in the Department of History and Philosophy of Science (HPS) at the University of Pittsburgh, and a Research Associate with the University of Johannesburg. He is also on the International Philosophy of Medicine Roundtable Scientific Committee. He was previously a postdoctoral research fellow in the Institute for the History and Philosophy of Science at the University of Toronto.

Master’s, PhD and PostDoc opportunities at UJ

The University of Johannesburg has released a special call offering master’s, doctoral and postdoctoral fellowships, to start as soon as possible; the deadline is 8th Feb 2020.

These are in any area, but I would like specifically to invite anyone wishing to work with me (or colleagues at UJ) on any of the areas listed below. From May 2020, I will be Director of the Institute for the Future of Knowledge at UJ (a new institute – no website yet – but watch this space!), and being part of this enterprise will, I think, be very exciting for potential students/post-docs. I would be delighted to receive inquiries in any of the following areas:

  • Philosophy of medicine
  • Philosophy of epidemiology
  • Causation
  • Counterfactuals
  • Causal inference
  • Prediction
  • Explanation (not just causal)
  • Machine learning (in relation to any of the above)
  • Cognitive science
  • Other things potentially relevant to the Institute, my interests, your interests… please suggest!

If you’re interested please get in touch: abbroadbent@uj.ac.za

The call is here, along with instructions for applicants:

2020 Call for URC Scholarships for Master’s_Doctoral_Postdoctoral Fellowships_Senior Postdoctoral fellowships

Paper: Causality and Causal Inference in Epidemiology: the Need for a Pluralistic Approach

Delighted to announce the online publication of this paper in International Journal of Epidemiology, with Jan Vandenbroucke and Neil Pearce: ‘Causality and Causal Inference in Epidemiology: the Need for a Pluralistic Approach’.

This paper has already generated some controversy and I’m really looking forward to talking about it with my co-authors at the London School of Hygiene and Tropical Medicine on 7 March. (I’ll also be giving some solo talks while in the UK, at Cambridge, UCL, and Oxford, as well as one in Bergen, Norway.)

The paper is on the same topic as a single-authored paper of mine published in late 2015, ‘Causation and Prediction in Epidemiology: a Guide to the Methodological Revolution’. But it is much shorter, and nonetheless manages to add a lot that was not present in my sole-authored paper – notably a methodological dimension of which, as a philosopher by training, I was ignorant. The co-authoring process was thus really rich and interesting for me.

It also makes me think that philosophy papers should be shorter… Do we really need the first 2500 words summarising the current debate, etc.? I wonder if a more compressed style might actually stimulate more thinking, even if the resulting papers are less argumentatively airtight. One might wonder how often the airtight ideal is achieved even with traditional-length papers… Who was it who said that in philosophy, it’s all over by the end of the first page?

Paper – Tobacco in Korea

Alex Broadbent and Seung-sik Hwang, 2016. ‘Tobacco and epidemiology in Korea: old tricks, new answers?’ Journal of Epidemiology and Community Health doi:10.1136/jech-2015-206567.

Now available online first, open access.

http://jech.bmj.com/content/early/2016/01/14/jech-2015-206567.full

For those at the recent CauseHealth workshop N=1, this relates to the same key topic (viz. the application of population evidence to an individual), but in the legal rather than clinical context.


America Tour: Attribution, prediction, and the causal interpretation problem in epidemiology

Next week I’ll be visiting America to talk in Pittsburgh, Richmond, and twice at Tufts. I do not expect audience overlap so I’ll give the same talk in all venues, with adjustments for audience depending on whether it’s primarily philosophers or epidemiologists I’m talking to. The abstract is below. I haven’t got a written version of the paper that I can share yet but would of course welcome comments at this stage.

ABSTRACT

Attribution, prediction, and the causal interpretation problem in epidemiology

In contemporary epidemiology, there is a movement, part theoretical and part pedagogical, attempting to discipline and clarify causal thinking. I refer to this movement as the Potential Outcomes Approach (POA). It draws inspiration from the work of Donald Rubin and, more recently, Judea Pearl, among others. It is most easily recognized by its use of Directed Acyclic Graphs (DAGs) to describe causal situations, but DAGs are not the conceptual basis of the POA in epidemiology. The conceptual basis (as I have argued elsewhere) is a commitment to the view that the hallmark of a meaningful causal claim is that it can be used to make predictions about hypothetical scenarios. Elsewhere I have argued that this commitment is problematic (notwithstanding the clear connections with counterfactual, contrastive and interventionist views in philosophy). In this paper I take a more constructive approach, seeking to address the problem that troubles advocates of the POA. This is the causal interpretation problem (CIP). We can calculate various quantities that are supposed to be measures of causal strength, but it is not always clear how to interpret these quantities. Measures of attributability are most troublesome here, and these are the measures on which POA advocates focus. What does it mean, they ask, to say that a certain fraction of population risk of mortality is attributable to obesity? The pre-POA textbook answer is that, if obesity were reduced, mortality would be correspondingly lower. But this is not obviously true, because there are methods for reducing obesity (smoking, cholera infection) which will not reduce mortality. In general, say the POA advocates, a measure of attributability tells us next to nothing about the likely effect of any proposed public health intervention, rendering these measures useless, and so, for epidemiological purposes, meaningless.
In this paper I ask whether there is a way to address and resolve the causal interpretation problem without resorting to the extreme view that a meaningful causal claim must always support predictions in hypothetical scenarios. I also seek connections with the notorious debates about heritability.
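To make the contested measures concrete: these are the standard textbook formulas for attributability, here with purely illustrative made-up numbers (the obesity prevalence and relative risk below are not from any study mentioned above). The point of the abstract is precisely that computing such a number is easy; saying what it means for any intervention is not.

```python
def excess_fraction(rr):
    """Excess (attributable) fraction among the exposed: (RR - 1) / RR.
    The share of exposed cases in excess of what the unexposed rate predicts."""
    return (rr - 1) / rr

def population_attributable_fraction(p_exposed, rr):
    """Levin's formula: p(RR - 1) / (1 + p(RR - 1)).
    The fraction of all cases in the population 'attributable' to the exposure."""
    return p_exposed * (rr - 1) / (1 + p_exposed * (rr - 1))

# Hypothetical figures: 30% of the population obese, RR of 2.0 for mortality.
paf = population_attributable_fraction(0.30, 2.0)
print(round(paf, 3))  # about 0.231, i.e. ~23% of deaths 'attributable' to obesity
```

Nothing in either formula refers to how obesity would be reduced, which is exactly why the POA advocates deny that the output licenses any prediction about a particular intervention.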

Tobacco and epidemiology in Korea: old tricks, new answers?

Today I participated in a seminar hosted by the National Health Insurance Service (NHIS) of Korea, which is roughly the equivalent of the NHS in the UK, although the health systems differ. The seminar concerned a recent lawsuit in which tobacco companies were sued by the NHIS for the costs of treating lung cancer patients. The suit is part of a larger drive to get a grip on smoking in Korea, where over 40% of males smoke, and a packet of 20 cigarettes costs 4500 Korean Won (about USD 4.10 or UKP 2.80). The NHIS recently suffered a blow at the Supreme Court, where the ruling was somewhat lukewarm about a causal link between smoking and lung cancer in general, and moreover argued that such a link would anyway fail to prove anything about the two specific plaintiffs in the case at hand.

I was struck by the familiarity of some of the arguments that are apparently being used by the tobacco companies. For example, the Supreme Court has been convinced that diseases come in two kinds, specific and non-specific, and that since lung cancer is a non-specific disease, it is wrong to seek to apply measures of attributability (excess/attributable fraction, population excess/attributable fraction) at all.

This is reminiscent of the use of non-specificity in the 1950s, when it was seen as a problem for the causal hypothesis that smoking causes lung cancer. It also gives rise to a strategy which is legally sound but dubious from a public health perspective, namely, first going for lung cancer, and leaving other health-risks of smoking for later. This is legally sound because lung cancer exhibits the highest relative risk of the smoking-related diseases, and perhaps it is good PR too because cancer of any kind catches the imagination. But the health burden of lung cancer is low, even in a population where smoking is relatively prevalent, since lung cancer is a rare disease even among smokers.

The health burden of heart disease, at the other end of the spectrum, is very large, and even though smoking less than doubles this risk (RR about 1.7), the base rate of heart disease is so high that this amounts to a very significant public health problem. I do not know what the right response to this complex of problems is: clearly, high-profile court cases have an impact that extends far beyond their outcome, and the reasons that people stop smoking, or accept legislation, need not accurately reflect the true risks in order for those risks to be mitigated. (If you stop smoking to avoid lung cancer, you also avoid heart disease, which is a much better reason to stop smoking from the perspective of a rational individual motivated to avoid fatal disease.) Nonetheless I am struck by the way that legal and health policy objectives interact here.
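The contrast between relative and absolute burden can be sketched numerically. The heart-disease RR of about 1.7 is the figure given above; the baseline risks and the lung-cancer RR below are made up for illustration only.

```python
def excess_risk(baseline_risk, rr):
    """Absolute excess risk among the exposed: baseline * (RR - 1)."""
    return baseline_risk * (rr - 1)

# Hypothetical baseline lifetime risks in non-smokers (illustrative only):
lung_cancer   = excess_risk(0.005, 15.0)  # very high RR, but a rare disease
heart_disease = excess_risk(0.30, 1.7)    # modest RR (~1.7, as above), common disease

print(round(lung_cancer, 3))    # 0.07
print(round(heart_disease, 3))  # 0.21: larger absolute burden despite the smaller RR
```

On numbers like these, the legally attractive target (highest RR) and the public-health target (largest excess burden) come apart, which is the tension the post describes.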

I was also interested to hear that the case of McTear was a significant blow to the Korean case because of its findings about causality, which indeed are exactly those of the Korean case. That case is not well regarded in the UK, and not authoritative (being first instance), so it is interesting – and unfortunate – that it has had an effect here.

The event was an extremely good-spirited affair, and the other speakers had some interesting things to say. My book, in Korean, received a significant plug, not least, I suspect, because the audience, not understanding much of my talk, was repeatedly referred to it for more detail. The most shocking thing about the event was to hear the same obfuscatory strategies that are now history in Europe and America being used, to good effect, by the very same companies in this part of the world. It is one thing to defend a case on grounds that one believes, but no one still reasonably believes that smoking does not cause lung cancer, which seems to be the initial burden that plaintiffs in this sort of case need to prove. It is a bit like being asked to begin your case against a scaffolder who dropped a metal bar on your head with a proof of the law of gravity, and then being asked to prove that the general evidence concerning gravity proves that gravity was the cause in this particular case, given that not all downward motions are caused by gravity. – Not exactly like that, of course, but not exactly unlike, either.

On the positive side, I am hoping that a clear explanation of the reasoning behind the PC Inequality that I favour might help with the next stage of the case, although I am unclear what that stage might be.

Is consistency trivial in randomized controlled trials?

Here are some more thoughts on Hernan and Taubman’s famous 2008 paper, from a chapter I am finalising for the epidemiology entry in a collection on the philosophy of medicine. I realise I have made a similar point in an earlier post on this blog, but I think I am getting closer to a crisp expression. The point concerns the claimed advantage of RCTs for ensuring consistency. Thoughts welcome!

Hernan and Taubman are surely right to warn against too-easy claims about “the effect of obesity on mortality”, when there are multiple ways to reduce obesity, each with different effects on mortality, and perhaps no ethically acceptable way to bring about a sudden change in body mass index from say 30 to 22 (Hernán and Taubman 2008, 22). To this extent, their insistence on assessing causal claims as contrasts to well-defined interventions is useful.

On the other hand, they imply some conclusions that are harder to accept. They suggest, for example, that observational studies are inherently more likely to suffer from this sort of difficulty, and that experimental studies (randomized controlled trials) will ensure that interventions are well-specified. They express their point using the technical term “consistency”:

consistency… can be thought of as the condition that the causal contrast involves two or more well-defined interventions. (Hernán and Taubman 2008, S10)

They go on:

…consistency is a trivial condition in randomized experiments. For example, consider a subject who was assigned to the intervention group … in your randomized trial. By definition, it is true that, had he been assigned to the intervention, his counterfactual outcome would have been equal to his observed outcome. But the condition is not so obvious in observational studies. (Hernán and Taubman 2008, s11)

This is a non-sequitur, however, unless we appeal to a background assumption that an intervention—something that an actual human investigator actually does—is necessarily well-defined. Without this assumption, there is nothing to underwrite the claim that “by definition”, if a subject actually assigned to the intervention had been assigned to the intervention, he would have had the outcome that he actually did have.

Consider the intervention in their paper, one hour of strenuous exercise per day. “Strenuous exercise” is not a well-defined intervention. Weightlifting? Karate? Swimming? The assumption behind their paper seems to be that if an investigator “does” an intervention, it is necessarily well-defined; but on reflection this is obviously not true. An investigator needs to have some knowledge of which features of the intervention might affect the outcome (such as what kind of exercise one performs), and thus need to be controlled, and which don’t (such as how far west of Beijing one lives). Even randomization will not protect against confounding arising from preference for a certain type of exercise (perhaps because people with healthy hearts are predisposed both to choose running and to live longer, for example), unless one knows to randomize the assignment of exercise-types and not to leave it to the subjects’ choice.

This is exactly the same kind of difficulty that Hernan and Taubman press against observational studies. So the contrast they wish to draw, between “trivial” consistency in randomized trials and a much more problematic situation in observational studies, is a mirage. Both can suffer from failure to define interventions.
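The confounding scenario sketched above (healthy hearts predisposing people both to choose running and to live longer) can be illustrated with a toy simulation. All the probabilities here are hypothetical, chosen only to make the mechanism visible; exercise type has no causal effect on survival in this model.

```python
import random

def simulate_trial(n=10_000, randomize_type=False, seed=0):
    """Toy trial: every subject is 'assigned to exercise', but the *type* of
    exercise is either left to the subject's choice or randomized as well.
    A hidden trait (a healthy heart) raises both the chance of choosing
    running and the chance of survival; exercise type itself does nothing."""
    rng = random.Random(seed)
    survived = {"running": [], "other": []}
    for _ in range(n):
        healthy_heart = rng.random() < 0.5
        if randomize_type:
            kind = rng.choice(["running", "other"])
        else:
            # Subjects with healthy hearts prefer running (hypothetical rates).
            kind = "running" if rng.random() < (0.8 if healthy_heart else 0.2) else "other"
        # Survival depends only on the hidden trait, never on the exercise type.
        survived[kind].append(rng.random() < (0.9 if healthy_heart else 0.6))
    return {k: sum(v) / len(v) for k, v in survived.items()}

by_choice = simulate_trial(randomize_type=False)
randomized = simulate_trial(randomize_type=True)
# Under subject choice, 'running' looks strongly protective even though it has
# no effect; randomizing the type as well removes the spurious gap.
print(by_choice, randomized)
```

The moral matches the argument in the text: randomizing the coarse assignment ("exercise") does not by itself protect against confounding over the unspecified details of the intervention.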

Is the Methodological Axiom of the Potential Outcomes Approach Circular?

Hernan, VanderWeele, and others argue that causation (or a causal question) is well-defined when interventions are well-specified. I take this to be a sort of methodological axiom of the approach.

But what is a well-specified intervention?

Consider an example from Hernan & Taubman’s influential 2008 paper on obesity. In that paper, BMI is shown to fail to correspond to a well-specified intervention; better-specified interventions include one hour of strenuous physical exercise per day (among others).

But what kind of exercise? One hour of running? Powerlifting? Yoga? Boxing?

It might matter – it might turn out that, say, boxing and running for an hour a day reduce BMI by similar amounts but that one of them is associated with longer life. Or it might turn out not to matter. Either way, it would be a matter of empirical inquiry.

This has two consequences for the mantra that well-defined causal questions require well-specified interventions.

First, as I’ve pointed out before on this blog, it means that experimental studies don’t necessarily guarantee well-specified interventions. Just because you can do it doesn’t mean you know what you are doing. The differences you might think don’t matter might matter: different strains of broccoli might have totally different effects on mortality, etc.

Second, more fundamentally, it means that the whole approach is circular. You need a well-specified intervention for a good empirical inquiry into causes and you need good empirical inquiry into causes to know whether your intervention is well-specified.

To me this seems to be a potentially fatal consequence for the claim that well-defined causal questions require well-specified interventions. For if that were true, we would be trapped in a circle, and could never have any well-specified interventions, and thus no well-defined causal questions either. Therefore either we really are trapped in that circle; or we can have well-defined causal questions, in which case, it is false that these always require well-specified interventions.

This is a line of argument I’m developing at present, inspired in part by Vandenbroucke and Pearce’s critique of the “methodological revolution” at the recent WCE 2014 in Anchorage. I would welcome comments.