Absolute and relative measures – what’s the difference?

I’m re-working a paper on risk relativism in response to some reviewer comments, and also preparing a talk on the topic for Friday’s meeting at KCL, “Prediction in Epidemiology and Healthcare”. The paper originates in Chapter 8 of my book, where I identify some possible explanations for “risk relativism” and settle on the one I think is best. Briefly, I suggest that there isn’t really a principled way of distinguishing “absolute” and “relative” measures, and instead explain the popularity of relative risk by its superficial similarity to a law of physics, and its apparent independence of any given population. These appearances are misleading, I suggest.

In the paper I am trying to develop that suggestion into an argument. Two remarks by reviewers point me towards further work I need to do. One is the question of what, exactly, the relation between RR and a law of nature is supposed to be. What character am I supposing that laws have, or that epidemiologists think laws have, such that RR is more similar to a law-like statement than, say, risk difference, or population attributable fraction?
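The contrast between these measures can be made concrete with a toy calculation. The figures below are purely hypothetical, the `measures` helper is my own, and the attributable-fraction line uses Levin's formula; the point is only to show how relative risk can look population-independent while risk difference does not.

```python
def measures(r_exposed, r_unexposed, p_exposed):
    """Compute three common measures of causal strength from risks.

    r_exposed   : risk of the outcome among the exposed
    r_unexposed : risk of the outcome among the unexposed
    p_exposed   : prevalence of exposure in the population
    """
    rr = r_exposed / r_unexposed        # relative risk
    rd = r_exposed - r_unexposed        # risk difference
    # population attributable fraction (Levin's formula)
    paf = p_exposed * (rr - 1) / (1 + p_exposed * (rr - 1))
    return rr, rd, paf

# Two imaginary populations in which exposure doubles the risk,
# but the baseline (unexposed) risk differs tenfold.
for name, baseline in [("Population A", 0.01), ("Population B", 0.10)]:
    rr, rd, paf = measures(2 * baseline, baseline, p_exposed=0.3)
    print(f"{name}: RR = {rr:.1f}, RD = {rd:.2f}, PAF = {paf:.2f}")
```

RR comes out the same in both populations, while the risk difference is ten times larger in B than in A; it is this constancy that lends RR its law-like, population-transcending appearance, even though (as the paper argues) that appearance is misleading.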

The other is a reference to a literature I don’t know but certainly should, concerning statistical modelling in the social sciences. I am referred to a 1982 monograph by Achen and a 1987 paper by Jan Vandenbroucke, both of which suggest – I gather – a deep scepticism about statistical modelling in the social sciences. Particularly thought-provoking is the idea that all such models are “qualitative descriptions of data”. If there is any truth in that, it is extremely significant, and deserves unearthing in the age of big data, Google Analytics, Nate Silver, and the generally increasing confidence in the possibility of accurately modelling real-world situations and – crucially – generating predictions from them.

A third question concerns the relation between these two thoughts: (i) the apparent law-likeness of certain measures, contrasted with the apparently population-specific, non-general nature of others; and (ii) the limitations claimed for statistical modelling in some quarters, contrasted with confidence in it in others. I wonder whether degree of confidence has anything to do with perceived law-likeness. One’s initial reaction would be to doubt this: when Nate Silver adjusts his odds on a baseball outcome, he surely does not take himself to be basing his prediction on a law-like generalisation. Yet on reflection, he must be basing it on some generalisation, since the move from observed to unobserved is a kind of generalising. What more, then, is there to the notion of a law than generalisability on the basis of instances? It is surprising how quickly the waters deepen.

Relative Activity in philosepi

Having neglected this blog for several months I find myself suddenly swamped with things to write about. My book has been translated into Korean by Hyundeuk Cheon, Hwang Seung-sik, and Mr Jeon, and judging by their insightful comments and questions they have done a superb and careful job. Next week there is a workshop on Prediction in Epidemiology and Healthcare at KCL, organised by Jonathan Fuller and Luis Jose Flores, which promises to be exciting. Coming up in August is the World Congress of Epidemiology, where I’m giving two talks, hopefully different ones – one on stability, for a session on translation and public engagement, and one on the definition of measures of causal strength, as part of a session for the next Dictionary of Epidemiology. And I’m working on a paper on risk relativism which has been accepted by the Journal of Epidemiology and Community Health, subject to revisions in response to the extremely interesting comments of five reviewers – possibly the most rigorous and most useful review process I have encountered. Thus this is a promissory note, by which I hope to commit my conscience to writing here about risk relativism, stability, and measures of causal strength in the coming weeks.

“The Exposome” – a lab for epidemiology?

In February 2011, Nature ran a journalistic piece on the development of technologies designed to increase the accuracy of measuring exposures, spurred by various dissatisfactions with questionnaires. (Thanks to Thad Metz for pointing me to this.) The “exposome” is presented in that piece as the logical conclusion of improved measurement techniques. It is supposed to be a device (I am imagining an enormous plastic bubble) capable of measuring every exposure of study subjects. A quick hunt around the internet reveals that the idea is capturing at least a few imaginations, including some at the US Centers for Disease Control.

The CDC’s Overview of the exposome defines the exposome like this:

The exposome can be defined as the measure of all the exposures of an individual in a lifetime and how those exposures relate to health.

The idea of the exposome suggested two questions to me.

First, the idea of the exposome puts pressure on the concept of an exposure. In most epidemiological practice, the question “What is an exposure?” is of no practical importance. But if the aim is to measure every exposure, then we must answer the question in order to know whether we have succeeded.

The CDC article contrasts the target of the exposome with genetic risk factors, suggesting that exposures exclude genetic make-up. But the CDC article also suggests that exposures measured by the exposome may begin before birth. (I am imagining babies born in little plastic bags.) So it is not clear exactly what the rationale for excluding genetic make-up from “exposures” would be. If the goal is simply to measure anything that might affect a given health outcome then we should include genetics. We should also include our entire solar system, indeed the galaxy, so as to account for the effects of solar flares, meteorites, and so forth. (The plastic bubble in my imagination is getting very big.)

My first worry about “exposomics”, then, is that it will not get very far without circumscribing the notion of exposure – that is, without settling on something less than what the authors of the CDC overview probably mean, namely all factors potentially affecting health outcomes.

My second question is whether striving for an exposome is a good idea, judged by the goals of epidemiology, which I take to be providing information which can be used to improve public health.

One central point of epidemiology is that it studies people, not in labs, but as they actually live their lives. The exposome is a sort of lab, and striving for it is nothing other than striving for the controlled experiment. Aside from the complete fantasy of ever achieving an exposome (my imaginary bubble just burst), it does not seem helpful even to “study” the exposome, or whatever else it is “exposomists” are supposed to do. (And, incidentally, it does not seem that the exposome is a logical extension of increasing accuracy of measurements of exposure.) Epidemiologists want to know what happens in reality, not in the exposome.

Epidemiology and the laboratory sciences complement each other. Tar may be shown to produce cancer in the skin of laboratory rats, but epidemiology tells us what happens when humans smoke cigarettes. Each source of knowledge has its flaws. Causal inference is harder in epidemiology because of the lack of control over potentially relevant variables – exposures, for short. But the lab sciences suffer a different inferential limitation: not in making a causal inference, but in inferring that results obtained in the lab will apply outside it. So it is hard to see how doing away with either source of knowledge could be a good idea, and hard to see what “exposomics” could add to epidemiology, except another buzzword.