Category Archives: Correlation does not equal causation

The Independent’s anti-vaccine scaremongering

Last weekend The Independent published a ridiculous piece of anti-vaccine scaremongering by Paul Gallagher on their front page. The article reported the story of girls who became ill after receiving the HPV vaccine, and strongly implied that the vaccine was the cause of their illnesses, flying in the face of massive amounts of scientific evidence to the contrary.

I could go on at length about how dreadful, irresponsible, and scientifically illiterate the article was, but I won’t, because Jen Gunter and jdc325 have already done a pretty good job of that. You should go and read their blogposts. Do it now.

Right, are you back? Let’s carry on then.

What I want to talk about today is the response I got when I emailed the editor of the Independent on Sunday, Lisa Markwell, to suggest that they might want to publish a rebuttal to correct the dangerous misinformation in the original article. Ms Markwell was apparently too busy to reply to a humble reader, so the reply came instead from the deputy editor, Will Gore. Here it is below, with my annotations.

Dear Dr Jacobs

Thank you for contacting us about an article which appeared in last weekend’s Independent on Sunday.

Media coverage of vaccine programmes – including reports on concerns about real or perceived side-effects – is clearly something which must be carefully handled; and we are conscious of the potential pitfalls. Equally, it is important that individuals who feel their concerns have been ignored by health care professionals have an outlet to explain their position, provided it is done responsibly.

I’d love to know what they mean by “provided it is done responsibly”. I think a good start would be not to stoke anti-vaccine conspiracy theories with badly researched scaremongering. Obviously The Independent has a different definition of “responsibly”. I have no idea what that definition might be, though I suspect it includes something about ad revenue.

On this occasion, the personal story of Emily Ryalls – allied to the comparatively large number of ADR reports to the MHRA in regard to the HPV vaccine – prompted our attention. We made clear that no causal link has been established between the symptoms experienced by Miss Ryalls (and other teenagers) and the HPV vaccine. We also quoted the MHRA at length (which says the possibility of a link remains ‘under review’), as well as setting out the views of the NHS and Cancer Research UK.

Oh, seriously? You “made it clear that no causal link has been established”? Are we even talking about the same article here? The one I’m talking about has the headline “Thousands of teenage girls enduring debilitating illnesses after routine school cancer vaccination”. On what planet does that make it clear that the link was not causal?

I think what they mean by “made it clear that no causal link has been established” is that they were very careful with their wording not to explicitly claim a causal link, while nonetheless using all the rhetorical tricks at their disposal to make sure a causal link was strongly implied.

Ultimately, we were not seeking to argue that vaccines – HPV, or others for that matter – are unsafe.

No, you’re just trying to fool your readers into thinking they’re unsafe. So that’s all right then.

Equally, it is clear that for people like Emily Ryalls, the inexplicable onset of PoTS has raised questions which she and her family would like more fully examined.

And how does blaming it on something that is almost certainly not the real cause help?

Moreover, whatever the explanation for the occurrence of PoTS, it is notable that two years elapsed before its diagnosis. Miss Ryalls’ family argue that GPs may have failed to properly assess symptoms because they were irritated by the Ryalls mentioning the possibility of an HPV connection.

I don’t see how that proves a causal link with the HPV vaccine. And anyway, didn’t you just say that you were careful to avoid claiming a causal link?

Moreover, the numbers of ADR reports in respect of HPV do appear notably higher than for other vaccination programmes (even though, as the quote from the MHRA explained, the majority may indeed relate to ‘known risks’ of vaccination; and, as you argue, there may be other particular explanations).

Yes, there are indeed other explanations. What a shame you didn’t mention them in your story. Perhaps if you had done, your claim to be careful not to imply a causal link might look a bit more plausible. But I suppose you don’t like the facts to get in the way of a good story, do you?

The impact on the MMR programme of Andrew Wakefield’s flawed research (and media coverage of it) is always at the forefront of editors’ minds whenever concerns about vaccines are raised, either by individuals or by medical studies. But our piece on Sunday was not in the same bracket.

No, sorry, it is in exactly the same bracket. The media coverage of MMR vaccine was all about hyping up completely evidence-free scare stories about the risks of MMR vaccine. The present story is all about hyping up completely evidence-free scare stories about the risk of HPV vaccine. If you’d like to explain to me what makes those stories different, I’m all ears.

It was a legitimate item based around a personal story and I am confident that our readers are sophisticated enough to understand the wider context and implications.

Kind regards

Will Gore
Deputy Managing Editor

If Mr Gore seriously believes his readers are sophisticated enough to understand the wider context, then he clearly hasn’t read the readers’ comments on the article. It is totally obvious that a great many readers have inferred a causal relationship between the vaccine and subsequent illness from the article.

I replied to Mr Gore about that point, to which he replied that he was not sure the readers’ comments are representative.

Well, that’s true. They are probably not. But they don’t need to be.

There are no doubt some readers of the article who are dyed-in-the-wool anti-vaccinationists. They believed all vaccines are evil before reading the article, and they still believe all vaccines are evil. For those people, the article will have had no effect.

Many other readers will have enough scientific training (or just simple common sense) to realise that the article is nonsense. They will not infer a causal relationship between the vaccine and the illnesses. All they will infer is that The Independent is spectacularly incompetent at reporting science stories and that it would be really great if The Independent could afford to employ someone with a science GCSE to look through some of their science articles before publishing them. They will also not be harmed by the article.

But there is a third group of readers. Some people are not anti-vaccine conspiracy theorists, but nor do they have science training. They probably start reading the article with an open mind. After reading the article, they may decide that HPV vaccine is dangerous.

And what if some of those readers are teenage girls who are due for the vaccination? What if they decide not to get vaccinated? What if they subsequently get HPV infection, and later die of cervical cancer?

Sure, there probably aren’t very many people to whom that description applies. But how many is an acceptable number? Perhaps Gallagher, Markwell, and Gore would like to tell me how many deaths from cervical cancer would be a fair price to pay for writing the article?

It is not clear to me whether Gallagher, Markwell, and Gore are simply unaware of the harm that such an article can do, or if they are aware, and simply don’t care. Are they so naive as to think that their article doesn’t promote an anti-vaccinationist agenda, or do they think that clicks on their website and ad revenue are a more important cause than human life?

I really don’t know which of those possibilities I think is more likely, nor would I like to say which is worse.

Obesity and dementia

It’s always difficult to draw firm conclusions from epidemiological research. No matter how large the sample size and how carefully conducted the study, it’s seldom possible to be sure that the result you have found is what you were looking for, and not some kind of bias or confounding.

So when I heard in the news yesterday that overweight and obese people were at reduced risk of dementia, my first thought was “I wonder if that’s really true?”

Well, the paper is here. Sadly behind a paywall (seriously guys? You know it’s 2015, right?), though luckily the researchers have made a copy of the paper available as a Word document here.

In many ways, it’s a pretty good study. Certainly no complaints about the sample size: they analysed data on nearly 2 million people. With a median follow-up time of over 9 years, their analysis was based on a long enough time period to be meaningful. They had also thought about the obvious problem with looking at obesity and dementia, namely that obese people may be less likely to get dementia not because obesity protects them against dementia, but just because they are more likely to die of an obesity-related disease before they are old enough to develop dementia.

The authors did a sensitivity analysis in which they assumed that patients who died during the observation period would, had they lived, have had twice the risk of developing dementia of patients who survived to the end of follow-up. Although that weakened the negative association between overweight and dementia, it was still present.

There are, of course, other ways to do this. Perhaps it might have been appropriate to use a competing risks survival model instead of the Poisson model they used for their statistical analysis, and if you were going to be picky, you could say their choice of statistical analysis was a bit fishy (sorry, couldn’t resist).

But I don’t think the method of analysis is the big problem here.

For a start, although some of the most obvious confounders (age, sex, smoking, drinking, relevant medication use, diabetes, and previous myocardial infarction) were adjusted for in the analysis, there was no adjustment for socioeconomic status or education level, which is a big omission.

But more importantly, I think the major limitation of these results comes from what is known as the healthy survivor effect.

Let me explain.

The people followed up in the study were all aged over 40 at the start. But there was no upper age limit. Some people were aged over 90 at the start. And not surprisingly, most of the cases of dementia occurred in older people.  Only 18 cases of dementia occurred in those aged 40-44, whereas over 12,000 cases were observed in those aged 80-84. So it’s really the older age groups who are dominating the analysis. Over half the cases of dementia occurred in people aged > 80, and over 90% occurred in people aged > 70.

Now, let’s think about those 80+ year olds for a minute.

There is reasonably good evidence that obese people die younger, on average, than those of normal weight. So the obese people who were aged > 80 at the start of the study are probably not normal obese people: they are probably healthier than the average obese person. Many obese people who are less healthy than average will have died before reaching 80, and so never had the chance to be included in that age group of the study.

So in other words, the old obese people in the study are not typical obese people: they are unusually healthy obese people.

That may be because they have good genes or it may be because something about their lifestyle is keeping them healthy, but one way or another, they have managed to live a long life despite their obesity. This is an example of the healthy survivor effect.

There will also be a healthy survivor effect at play in the people of normal weight at the upper end of the age range, but that will probably be less marked, as they haven’t had to survive despite obesity.

I think it is therefore possible that this healthy survivor effect may have skewed the results. The people with obesity may have been at less risk of dementia not because their obesity protected them, but because they were a biased subset of unusually healthy obese people.
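To see how this kind of selection can manufacture an apparent protective effect out of thin air, here is a toy simulation (all the numbers are invented purely for illustration). Each simulated person has a latent "frailty"; obesity amplifies the effect of frailty on dying before 80, while dementia risk depends on frailty alone, so in this model obesity has no direct effect on dementia whatsoever:

```python
import random

random.seed(0)
N = 200_000

def dementia_rate_in_survivors(obese):
    """Simulate N people; return the dementia rate among those surviving to 80."""
    cases = survivors = 0
    for _ in range(N):
        frailty = random.random()  # latent health factor: higher = less healthy
        # Risk of dying before 80 rises with frailty, and obesity amplifies it
        p_death = frailty * (0.6 if obese else 0.3)
        if random.random() < p_death:
            continue  # died before 80, so never enters the over-80 cohort
        # Dementia risk depends on frailty only: obesity has NO direct effect
        survivors += 1
        if random.random() < 0.4 * frailty:
            cases += 1
    return cases / survivors

rate_normal = dementia_rate_in_survivors(obese=False)
rate_obese = dementia_rate_in_survivors(obese=True)
print(f"dementia rate among normal-weight survivors: {rate_normal:.3f}")
print(f"dementia rate among obese survivors:         {rate_obese:.3f}")
```

The obese survivors show a lower dementia rate simply because the frailest obese people never made it into the over-80 cohort. That is the healthy survivor effect in miniature.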

This does not, of course, mean that obesity doesn’t protect against dementia. Maybe it does. One thing that would have been interesting would be to see the results broken down by the type of dementia. It is hard to believe that obesity would protect against vascular dementia, when on the whole it is a risk factor for other vascular diseases, but the hypothesis that it could protect against Alzheimer’s disease doesn’t seem so implausible.

What it does mean is that we have to be really careful when interpreting the results of epidemiological studies such as this one. It is always extremely hard to know to what extent the various forms of bias that can creep into epidemiological studies have influenced the results.

Ovarian cancer and HRT

Yesterday’s big health story in the news was the finding that HRT ‘increases ovarian cancer risk’. The scare quotes there, of course, tell us that that’s probably not really true.

So let’s look at the study and see what it really tells us. The BBC can be awarded journalism points for linking to the actual study in the above article, so it was easy enough to find the relevant paper in the Lancet.

This was not new data: rather, it was a meta-analysis of existing studies. Quite a lot of existing studies, as it turns out. The authors found 52 epidemiological studies investigating the association between HRT use and ovarian cancer. This is quite impressive. So despite ovarian cancer being a thankfully rare disease, the analysis included over 12,000 women who had developed ovarian cancer. So whatever other criticisms we might make of the paper, I don’t think a small sample size is going to be one of them.

But what other criticisms might we make of the paper?

Well, the first thing to note is that the data are from epidemiological studies. There is a crucial difference between epidemiological studies and randomised controlled trials (RCTs). If you want to know if an exposure (such as HRT) causes an outcome (such as ovarian cancer), then the only way to know for sure is with an RCT. In an epidemiological study, where you are not doing an experiment, but merely observing what happens in real life, it is very hard to be sure if an exposure causes an outcome.

The study showed that women who take HRT are more likely to develop ovarian cancer than women who don’t take HRT. That is not the same thing as showing that HRT caused the excess risk of ovarian cancer. It’s possible that HRT was the cause, but it’s also possible that women who suffer from unpleasant menopausal symptoms (and so are more likely to take HRT than those women who have an uneventful menopause) are more likely to develop ovarian cancer. That’s not completely implausible. Ovaries are a pretty relevant organ in the menopause, and so it’s not too hard to imagine some common factor that predisposes both to unpleasant menopausal symptoms and an increased ovarian cancer risk.

And if that were the case, then the observed association between HRT use and ovarian cancer would be completely spurious.
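To make that concrete, here is a toy simulation (with deliberately exaggerated, invented numbers) in which a hidden common factor, call it the severity of menopausal symptoms, drives both HRT use and ovarian cancer risk, while HRT itself has no effect at all:

```python
import random

random.seed(2)
N = 100_000

exposed = exposed_cases = unexposed = unexposed_cases = 0
for _ in range(N):
    # Hidden common factor: severity of menopausal symptoms
    severity = random.random()
    takes_hrt = random.random() < 0.2 + 0.6 * severity      # severity drives HRT use
    gets_cancer = random.random() < 0.01 + 0.08 * severity  # severity drives cancer;
                                                            # HRT itself has NO effect
    if takes_hrt:
        exposed += 1
        exposed_cases += gets_cancer
    else:
        unexposed += 1
        unexposed_cases += gets_cancer

relative_risk = (exposed_cases / exposed) / (unexposed_cases / unexposed)
print(f"observed relative risk: {relative_risk:.2f}")  # above 1, yet entirely spurious
```

The observed relative risk comes out well above 1 even though nobody's cancer risk was changed by taking HRT. Confounding like this is exactly why an epidemiological association, however large the sample, cannot by itself establish causation.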

So what this study shows us is a correlation between HRT use and ovarian cancer, but as I’ve said many times before, correlation does not equal causation. I know I’ve been moaned at by journalists for endlessly repeating that fact, but I make no apology for it. It’s important, and I shall carry on repeating it until every story in the mainstream media about epidemiological research includes a prominent reminder of that fact.

Of course, it is certainly possible that HRT causes an increased risk of ovarian cancer. We just cannot conclude it from that study.

It would be interesting to look at how biologically plausible it is. Now, I’m no expert in endocrinology, but one little thing I’ve observed makes me doubt the plausibility. We know from a large randomised trial that HRT increases breast cancer risk (at least in the short term). There also seems to be evidence that oral contraceptives increase breast cancer risk but decrease ovarian cancer risk. With my limited knowledge of endocrinology, I would have thought the biological effects of HRT and oral contraceptives on cancer risk would be similar, so it just strikes me as odd that they would have similar effects on breast cancer risk but opposite effects on ovarian cancer risk. Anyone who knows more about this sort of thing than I do, feel free to leave a comment below.

But leaving aside the question of whether the results of the latest study imply a causal relationship (though of course we’re not really going to leave it aside, are we? It’s important!), I think there may be further problems with the study.

The paper tells us, and this was widely reported in the media, that “women who use hormone therapy for 5 years from around age 50 years have about one extra ovarian cancer per 1000 users”.

I’ve been looking at how they arrived at that figure, and it’s not totally clear to me how it was calculated. The crucial data in the paper are in this table, which is given in a bit more detail in their appendix. I’m reproducing the part of the table for 5 years of HRT use below.

Age group    Baseline risk (per 1000)    Relative excess risk    Absolute excess risk (per 1000)
50-54        1.2                         0.43                    0.52
55-59        1.6                         0.23                    0.37
60-64        2.1                         0.05                    0.10
Total                                                            0.99

The table is a bit complicated, so some words of explanation are probably helpful. The baseline risk is the probability (per 1000) of developing ovarian cancer over a 5-year period in the relevant age group. The relative excess risk is the proportional amount by which that risk is increased by 5 years of HRT use starting at age 50. The absolute excess risk is the baseline risk multiplied by the relative excess risk.

The excess risks for the three 5-year periods are then added together to give the total excess lifetime risk of ovarian cancer for a woman who takes HRT for 5 years starting at age 50. I assume excess risks at older ages are ignored because there is no evidence that HRT increases the risk after such a long delay. It’s important to note here that the figure of 1 in 1000 excess ovarian cancer cases refers to lifetime risk, not to the excess in any single 5-year period.
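The arithmetic is easy enough to check. Here it is reproduced from the figures in the table above (all risks per 1000 women):

```python
# Baseline 5-year risks and relative excess risks, as given in the table
rows = [
    # (age group, baseline 5-year risk per 1000, relative excess risk)
    ("50-54", 1.2, 0.43),
    ("55-59", 1.6, 0.23),
    ("60-64", 2.1, 0.05),
]

total = 0.0
for age_group, baseline, relative_excess in rows:
    absolute_excess = baseline * relative_excess  # per 1000 women
    total += absolute_excess
    # the table rounds these to 0.52, 0.37 and 0.10
    print(f"{age_group}: {absolute_excess:.3f} extra cases per 1000")

print(f"Total excess lifetime risk: {total:.2f} per 1000")
```

The total comes out at roughly 0.99 per 1000, which is where the headline figure of "about one extra ovarian cancer per 1000 users" comes from, provided, of course, that the relative excess risks are right in the first place.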

The figures for incidence seem plausible. The figures for absolute excess risk are correct if the relative excess risk is correct. However, it’s not completely clear where the figures for relative risk come from. We are told they come from figure 2 in the paper. Maybe I’m missing something, but I’m struggling to match the 2 sets of figures. The excess risk of 0.43 for the 50-54 year age group matches the relative risk 1.43 for current users with duration < 5 years (which will be true while the women are still in that age group), but I can’t see where the relative excess risks of 0.23 and 0.05 come from.

Maybe it doesn’t matter hugely, as the numbers in figure 2 are in the same ballpark, but it always makes me suspicious when numbers should match and don’t.

There are some further statistical problems with the paper. This is going to get a bit technical, so feel free to skip the next two paragraphs if you’re not into statistical details. To be honest, it all pales into insignificance anyway beside the more serious problem that correlation does not equal causation.

The methods section tells us that cases were matched with controls. We are not told how the matching was done, which is the sort of detail I would not expect to see left out of a paper in the Lancet. But crucially, a matched case control study is different to a non-matched case control study, and it’s important to analyse it in a way that takes account of the matching, with a technique such as conditional logistic regression. Nothing in the paper suggests that the matching was taken into account in the analysis. This may mean that the confidence intervals for the relative risks are wrong.

It also seems odd that the data were analysed using Poisson regression (and no, I’m not going to say “a bit fishy”). Poisson regression makes the assumption that the baseline risk of developing ovarian cancer remains constant over time. That seems a highly questionable assumption here. It would be interesting to see if the results were similar using a method with more relaxed assumptions, such as Cox regression. It’s also a bit fishy (oh damn, I did say it after all) that the paper tells us that Poisson regression yielded odds ratios. Poisson regression doesn’t normally yield odds ratios: the default statistic is an incidence rate ratio. Granted, the interpretation is similar to an odds ratio, but they are not the same thing. Perhaps there is some cunning variation on Poisson regression in which the analysis can be coaxed into giving odds ratios, but if there is, I’m not aware of it.

I’m not sure how much those statistical issues matter. I would expect that you’d get broadly similar results with different techniques. But as with the opaque way in which the lifetime excess risk was calculated, it just bothers me when statistical methods are not as they should be. It makes you wonder if anything else was wrong with the analysis.

Oh, and a further oddity is that nowhere in the paper are we told the total sample size for the analysis. We are told the number of women who developed ovarian cancer, but we are not told the number of controls that were analysed. That’s a pretty basic piece of information that I would expect to see in any journal, never mind a top-tier journal such as the Lancet.

I don’t know whether those statistical oddities have a material impact on the analysis. Perhaps they do, perhaps they don’t. But ultimately, I’m not sure it’s the most important thing. The really important thing here is that the study has not shown that HRT causes an increase in ovarian cancer risk.

Remember folks, correlation does not equal causation.

Hospital special measures and regression to the mean

Forgive me for writing 2 posts in a row about regression to the mean. But it’s an important statistical concept, which also happens to be widely misunderstood. Sometimes with important consequences.

Last week, I blogged about a claim that student tuition fees had not put off disadvantaged applicants. The research was flawed, because it defined disadvantage on the basis of postcode areas, and not on the individual characteristics of applicants. This means that an increase in university applications from disadvantaged areas could have simply been due to regression to the mean (ie the most disadvantaged areas becoming less disadvantaged) rather than more disadvantaged individual students applying to university.

Today, we have a story in the news where exactly the same statistical phenomenon is occurring. The story is that putting hospitals into “special measures” has been effective in reducing their death rates, according to new research by Dr Foster.

The research shows no such thing, of course.

The full report, “Is [sic] special measures working?” is available here. I’m afraid the authors’ statistical expertise is no better than their grammar.

The research looked at 11 hospital trusts that had been put into special measures, and found that their mortality rates fell faster than hospitals on average. They thus concluded that special measures were effective in reducing mortality.

Wrong, wrong, wrong. The 11 hospital trusts had been put into special measures not at random, but precisely because they had higher than expected mortality. If you take 11 hospital trusts on the basis of a high mortality rate and then look at them again a couple of years later, you would expect the mortality rate to have fallen more than in other hospitals simply because of regression to the mean.

Maybe those 11 hospitals were particularly bad, but maybe they were just unlucky. Perhaps it’s a combination of both. But if they were unusually unlucky one year, you wouldn’t expect them to be as unlucky the next year. If you take the hospitals with the worst mortality, or indeed the most extreme examples of anything, you would expect it to improve just by chance.
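If you doubt that, here is a toy simulation (all numbers invented) of 150 hospital trusts with exactly the same underlying mortality risk. We pick the 11 with the worst observed death rates in year one, change nothing at all, and watch their rates "improve" in year two:

```python
import random

random.seed(1)
N_TRUSTS, PATIENTS, TRUE_RATE = 150, 5000, 0.05

def observed_rate():
    """One year's observed death rate at a trust whose true risk is TRUE_RATE."""
    deaths = sum(random.random() < TRUE_RATE for _ in range(PATIENTS))
    return deaths / PATIENTS

year1 = [observed_rate() for _ in range(N_TRUSTS)]
year2 = [observed_rate() for _ in range(N_TRUSTS)]  # nothing has changed

# "Special measures": the 11 trusts with the worst year-1 rates
worst_11 = sorted(range(N_TRUSTS), key=lambda i: year1[i], reverse=True)[:11]

before = sum(year1[i] for i in worst_11) / 11
after = sum(year2[i] for i in worst_11) / 11
print(f"worst 11 trusts, year 1: {before:.4f}")
print(f"worst 11 trusts, year 2: {after:.4f}")  # falls back towards 0.05 by chance alone
```

Year two looks like a dramatic improvement, but the "special measures" here did literally nothing: selecting trusts on extreme observed values guarantees an apparent improvement the following year.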

This is a classic example of regression to the mean. The research provides no evidence whatsoever that special measures are doing anything. To do that, you would need to take poorly performing hospitals and allocate them at random either to have special measures or to be in a control group. Simply observing that the worst trusts got better after going into special measures tells you nothing about whether special measures were responsible for the improvement.