
Are two thirds of cancers really due to bad luck?

A paper published in Science has been widely reported in the media today. According to media reports, such as this one, the paper showed that two thirds of cancers are simply due to bad luck, and only one third are due to environmental, lifestyle, or genetic risk factors.

The paper shows no such thing, of course.

It’s actually quite an interesting paper, and I’d encourage you to read it in full (though sadly it’s paywalled, so you may or may not be able to). But it did not show that two thirds of cancers are due to bad luck.

What the authors did was to look at the published literature on 31 different types of cancer (eg lung cancer, thyroid cancer, colorectal cancer, etc) and estimate two quantities for each type: the lifetime risk of developing that cancer, and how often stem cells divide in the relevant tissue.

They found a very strong correlation between those two quantities: tissues in which stem cells divided frequently (eg the colon) were more likely to develop cancer than tissues in which stem cell division was less frequent (eg the brain).

The correlation was so strong, in fact, that it explained two thirds of the variation among different tissue types in their cancer incidence. The authors argue that, because mutations that can lead to cancer can arise during stem cell division purely by chance, two thirds of the variation in cancer risk is therefore down to bad luck.
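Incidentally, the “two thirds” is nothing more mysterious than the square of the correlation coefficient. If I remember the paper’s figures correctly, the reported correlation between stem cell divisions and lifetime risk was around 0.8, and the proportion of variation explained by a correlation r is simply r²:

r² ≈ 0.8² ≈ 0.65, or roughly two thirds.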

So, that explains where the “two thirds” figure comes from.

The problem is that it applies only to explaining the variation in cancer risk from one tissue to another. It tells us nothing about how much of the risk within a given tissue is due to modifiable factors. You could potentially see exactly the same results whether each specific type of cancer struck completely at random or whether each specific type were hugely influenced by environmental risk factors.
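To make that concrete, here is a minimal sketch in Python, using entirely made-up numbers rather than anything from the paper itself. It sets up two hypothetical worlds: in the second, a universal exposure multiplies everyone’s risk five-fold, so that 80% of cases would be attributable to a modifiable factor, yet the between-tissue correlation between stem cell divisions and incidence is exactly the same as in the first.

import numpy as np

rng = np.random.default_rng(0)

# Made-up stem cell division counts for 31 hypothetical tissue types
divisions = 10 ** rng.uniform(7, 12, size=31)

# Tissue-to-tissue scatter not explained by division rate
noise = rng.lognormal(mean=0.0, sigma=1.0, size=31)

# World A: incidence driven by division rate alone ("bad luck")
incidence_a = 1e-13 * divisions * noise

# World B: the same, except a universal exposure multiplies everyone's risk
# five-fold, so 80% of cases would be attributable to a modifiable factor
incidence_b = 5 * incidence_a

# Between-tissue correlation on a log-log scale, as in the paper
corr_a = np.corrcoef(np.log10(divisions), np.log10(incidence_a))[0, 1]
corr_b = np.corrcoef(np.log10(divisions), np.log10(incidence_b))[0, 1]
print(corr_a, corr_b)  # identical: the correlation cannot tell the two worlds apart

The between-tissue correlation is blind to anything that acts within tissues, which is exactly why it cannot tell us what proportion of cases are preventable.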

Let’s take lung cancer as an example. Smoking is a massively important risk factor: here’s a study that estimated that over half of all lung cancer deaths in Japanese males were due to smoking. Or to take cervical cancer as another example, about 70% of cervical cancers are due to just two strains of HPV.

Those are important statistics when considering what proportion of cancers are just bad luck and what proportion are due to modifiable risk factors, but they did not figure anywhere in the latest analysis.
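For what it’s worth, figures like those come from the population attributable fraction, which for an exposure with prevalence p and relative risk RR is usually estimated as

PAF = p (RR − 1) / (1 + p (RR − 1))

Purely as an illustration with invented numbers (not the figures from the studies above): an exposure that half the population has, and that multiplies risk ten-fold, would account for 0.5 × 9 / (1 + 0.5 × 9) ≈ 82% of cases. Estimating that kind of quantity requires data on exposure prevalence and relative risk within each cancer type, which is precisely what the Science paper’s between-tissue correlation cannot provide.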

So in fact, interesting though this paper is, it tells us absolutely nothing about what proportion of cancer cases are due to modifiable risk factors.

We often see medical research badly reported in the newspapers. Often it doesn’t matter very much. But here, I think real harm could be done. The message that comes across from the media is that cancer is just a matter of luck, so changing your lifestyle won’t make much difference anyway.

We know that lifestyle is hugely important not only for cancer, but for many other diseases as well. For the media to give the impression that lifestyle isn’t important, based on a misunderstanding of what the research shows, is highly irresponsible.

Edit 5 Jan 2015:

Small correction made to the last paragraph following discussion in the comments below.

Does peer review fail to spot outstanding research?

A paper by Siler et al, published last week, attracted quite a bit of attention among those of us who take an interest in scientific publishing and the peer review process. It looked at the citation counts of papers that had been submitted to three high-impact medical journals and subsequently published, either in one of those three journals or, if rejected, in another journal.

The accompanying press release from the publisher told us that “scientific peer review may have difficulties identifying unconventional and/or outstanding work”. This wasn’t too far off what was claimed in the paper, where Siler et al concluded that their work suggested that peer review “had difficulties in identifying outstanding or breakthrough work”.

The press release was reported uncritically by several organisations that should have known better, including Science, Nature, and Retraction Watch.

It’s an interesting theory. It goes like this: peer reviewers don’t like to leave their comfort zone, and while they may give good reviews to small incremental advances in their own field, they don’t like radical new research that breaks new ground, so such research may be rejected.

The only problem with this theory is that Siler et al’s paper provides absolutely no data to support it.

Let’s look at what they did. Siler et al examined 1008 manuscripts that were submitted to three top-tier medical journals (Annals of Internal Medicine, the British Medical Journal, and The Lancet). Most of those papers were rejected, but were subsequently published in other journals. The authors then tracked the papers to see how many times each one was cited.

Now, there we have our first problem. Using the number of times a paper is cited as a measure of groundbreaking research is pretty crude. Papers can be highly cited for many reasons, and presenting groundbreaking research is only one of them. I am writing this blogpost on the same day that I found that the 6th most important paper of the year according to “Altmetrics” (think of it as citation counting for the Facebook generation) was about how long it takes for boxes of chocolates on hospital wards to be eaten. A nicely conducted and amusing piece of research, to be sure, but hardly breaking new frontiers in science.

There’s also something rather fishy about the citation numbers reported in the paper. The group of papers with the lowest citation rate was cited an average of 69.8 times each. That’s an extraordinarily high number. Of the three top-tier journals studied, The Lancet has the highest impact factor, at 39.2, which means that papers in The Lancet are cited an average of 39.2 times each. Doesn’t it seem rather odd that papers rejected from it are cited almost twice as often? I’m not sure what to make of that, but it does make me wonder whether there is a problem with data quality.

Anyway, the main piece of evidence used to support the idea that peer review is bad at recognising outstanding research is that the 14 most highly cited of the 1008 papers examined were rejected by the three top journals. The first problem with that is that 12 of those 14 were rejected by the journals’ in-house editorial staff without ever being sent for peer review. So even if there were no further problems with the paper, we couldn’t draw any conclusions about failings of peer review: the failings would be down to the journals’ in-house staff.

Another problem is that those 14 papers were not, of course, rejected by the peer review system. They were all published in peer reviewed journals: just not the first journal that the authors tried. So we really can’t conclude that peer review is preventing groundbreaking work from being published.

But in any case, even if we set those flaws aside and ask whether groundbreaking (or at least highly cited) research is nonetheless being rejected, I think we’d want to know whether highly cited research is more likely to be rejected than other research.

And I’m afraid the evidence for that is totally lacking.

Rejecting the top 14 papers sounds bad. But it’s important to realise that the overall rejection rate was very high: only 6.2% of the papers submitted were accepted. If the probability of accepting each of the top 14 papers was 6.2%, like all the others, then there is about a 40% chance that all 14 of them would be rejected. And that is ignoring the fact that looking specifically at the top 14 papers is a post-hoc analysis. The only robust way to see whether the more highly cited papers were more likely to be rejected would have been to specify a hypothesis in advance, rather than to focus on what came out of the data as the most impressive statistic.
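If you want to check that 40% figure, it’s a one-line calculation, assuming (purely for the sake of the sketch) that decisions on the 14 papers were independent:

# Chance that all 14 top-cited papers are rejected if each independently
# has the same 6.2% probability of acceptance as any other submission
p_accept = 0.062
p_all_rejected = (1 - p_accept) ** 14
print(round(p_all_rejected, 2))  # about 0.41, i.e. roughly a 40% chance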

So, to recap, this paper used a crude measure of whether papers were groundbreaking, did not look at what peer reviewers thought of them, found precisely zero high impact articles that were rejected by the peer review system, and found no evidence whatsoever that high-impact articles were more likely to be rejected than any others.

Call me a cynic if you like, but I’m not convinced. The peer review process is not perfect, of course. But if you want to convince me that one of its flaws is a bias against groundbreaking research, you’re going to have to come up with better evidence than Siler et al’s paper.