Potential Outcomes Approach as “epidemiometrics”

In a review of Jan Tinbergen’s work, Maynard Keynes wrote:

At any rate, Prof. Tinbergen agrees that the main purpose of his method is to discover, in cases where the economist has correctly analysed beforehand the qualitative character of the causal relations, with what strength each of them operates… [1]

Nancy Cartwright cites this passage in the context of describing the business of econometrics, in the introduction to her Hunting Causes and Using Them [2]. Her idea is that econometrics assumes that economics can be an exact science, in which economic phenomena are governed by causal laws, and sets out to quantify those laws; this makes econometrics a fruitful domain for studying the connection between laws and causes.

This helped me with an idea that first occurred to me at the 9th Nordic Conference of Epidemiology and Register-Based Health Research, that the potential outcomes approach to causal inference in epidemiology might be understood as the foundational work of a sub-discipline within epidemiology, related to epidemiology as econometrics is to economics. We might call it epidemiometrics.

This suggestion appears to resonate with Tyler VanderWeele’s contention that:

A distinction should be drawn between under what circumstances it is reasonable to refer to something as a cause and under what circumstances it is reasonable to speak of an estimate of a causal effect… The potential outcomes framework provides a way to quantify causal effects… [3]

The distinction between causal identification and estimation of causal effects does not resolve the various debates around the POA in epidemiology, since the charge against the POA is that as an approach (the A part in POA) it is guilty of overreach. For example, the term “causal inference” is used prominently where “quantitative causal estimation” might be more accurate [4]. 

Maybe there is a lesson here from the history of economics. While the discipline of epidemiology does not pretend to uncover causal laws, as economics does, it nevertheless seeks to uncover causal relationships, at least sometimes. The Bradford Hill viewpoints are for answering a yes/no question: “is there any other way of explaining the facts before us, is there any other answer equally, or more, likely than cause and effect?” [5]. Econometrics answers a quantitative question: what is the magnitude of the causal effect, assuming that there is one? This question deserves its own discipline because, like any quantitative question, it admits of many more precise and non-equivalent formulations, and of the development of mathematical tools. The POA deserves to be recognised not as an approach to epidemiological research, but as a discipline within epidemiology.

Many involved in discussions of the POA (including myself and co-authors) have made the point that the POA is part of a larger toolkit and that this is not always recognised [6,7], while others have argued that causal identification is a goal of epidemiology separate from causal estimation, and that it is at risk of neglect [8]. The claims that the toolkit is not always recognised, and that identification is at risk of neglect, do not in fact concern the business of discovering or estimating causality. They are points about the way epidemiology is taught, and about how it is understood by those who practise it. They are points, not about causality, but about epidemiology itself.

A disciplinary distinction between epidemiology and a sub-discipline of epidemiometrics might assist in realising this distinction that many are sensitive to, but that does not yet seem to have calmed the troubled waters of discussions of causality. By “realising”, I mean enabling institutional recognition at departmental or research unit level, enabling people to list their research interests on CVs and websites, assisting students in understanding the significance of the methods they are learning, and, most important of all, softening the dynamics between those who “advocate” and those who “oppose” the POA. To advocate econometrics over economics, or vice versa, would be nonsensical, like arguing that linear algebra is more or less important than mathematics. Likewise, to advocate or oppose epidemiometrics would be recognisably wrong-headed. There would remain questions about emphasis, completeness, and the relative distribution of time and resources–but not about which is the right way to achieve the larger goals.

Few people admit to “advocating” or “opposing” the methods themselves, because in any detailed discussion it immediately becomes clear that the methods are neither universally, nor never, applicable. A disciplinary distinction–or, more exactly, a distinction of a sub-discipline of study that contributes in a special way to the larger goals of epidemiology–might go a long way to alleviating the tensions that sometimes flare up, occasionally in ways that are unpleasant and to the detriment of the scientific and public health goals of epidemiology as a whole.

[1] J.M. Keynes, ‘Professor Tinbergen’s Method’, Economic Journal, 49, no. 195 (1939), 558-68.

[2] N. Cartwright, Hunting Causes and Using Them (New York: Cambridge University Press, 2007), 15.

[3] T.J. VanderWeele, ‘On causes, causal inference, and potential outcomes’, International Journal of Epidemiology, 45 (2016), 1809.

[4] M.A. Hernán and J.M. Robins, Causal Inference: What If (Boca Raton: Chapman & Hall/CRC, 2020).

[5] A. Bradford Hill, ‘The Environment and Disease: Association or Causation?’, Proceedings of the Royal Society of Medicine, 58 (1965), 299.

[6] J. Vandenbroucke, A. Broadbent, and N. Pearce, ‘Causality and causal inference in epidemiology: the need for a pluralistic approach’, International Journal of Epidemiology, 45 (2016), 1776-86.

[7] A. Broadbent, J. Vandenbroucke, and N. Pearce, ‘Response: Formalism or pluralism? A reply to commentaries on “Causality and causal inference in epidemiology”’, International Journal of Epidemiology, 45 (2016), 1841-51.

[8] S. Schwartz et al., ‘Causal identification: a charge of epidemiology in danger of marginalization’, Annals of Epidemiology, 26 (2016), 669-73.

America Tour: Attribution, prediction, and the causal interpretation problem in epidemiology

Next week I’ll be visiting America to talk in Pittsburgh, Richmond, and twice at Tufts. I do not expect audience overlap, so I’ll give the same talk in all venues, with adjustments depending on whether I’m talking primarily to philosophers or to epidemiologists. The abstract is below. I haven’t yet got a written version of the paper that I can share, but I would of course welcome comments at this stage.


Attribution, prediction, and the causal interpretation problem in epidemiology

In contemporary epidemiology, there is a movement, part theoretical and part pedagogical, attempting to discipline and clarify causal thinking. I refer to this movement as the Potential Outcomes Approach (POA). It draws inspiration from the work of Donald Rubin and, more recently, Judea Pearl, among others. It is most easily recognised by its use of Directed Acyclic Graphs (DAGs) to describe causal situations, but DAGs are not the conceptual basis of the POA in epidemiology. The conceptual basis (as I have argued elsewhere) is a commitment to the view that the hallmark of a meaningful causal claim is that it can be used to make predictions about hypothetical scenarios. Elsewhere I have argued that this commitment is problematic (notwithstanding the clear connections with counterfactual, contrastive and interventionist views in philosophy). In this paper I take a more constructive approach, seeking to address the problem that troubles advocates of the POA. This is the causal interpretation problem (CIP). We can calculate various quantities that are supposed to be measures of causal strength, but it is not always clear how to interpret these quantities. Measures of attributability are the most troublesome here, and these are the measures on which POA advocates focus. What does it mean, they ask, to say that a certain fraction of population risk of mortality is attributable to obesity? The pre-POA textbook answer is that, if obesity were reduced, mortality would be correspondingly lower. But this is not obviously true, because there are methods for reducing obesity (smoking, cholera infection) which will not reduce mortality. In general, say the POA advocates, a measure of attributability tells us next to nothing about the likely effect of any proposed public health intervention, rendering these measures useless, and so, for epidemiological purposes, meaningless.
In this paper I ask whether there is a way to address and resolve the causal interpretation problem without resorting to the extreme view that a meaningful causal claim must always support predictions in hypothetical scenarios. I also seek connections with the notorious debates about heritability.