Category Archives: Dodgy reporting

Do 41% of middle aged adults really walk for less than 10 minutes each month?

I was a little surprised to hear the news on the radio this morning that a new study had been published allegedly showing that millions of middle aged adults are so inactive that they don’t even walk for 10 minutes each month. The story has been widely covered in the media, for example here, here, and here.

The specific claim is that 41% of adults aged 40 to 60 in England, or about 6 million people, do not walk for 10 minutes in one go at a brisk pace at least once a month, based on a survey by Public Health England (PHE). I tracked down the source of this claim to this report on the PHE website.

I found that hard to believe. Walking for just 10 minutes a month is a pretty low bar. Can it really be true that 41% of middle aged adults don’t even manage that much?

Well, if it is, which I seriously doubt, then the statistic is at best highly misleading. The same survey tells us that less than 20% of the same sample of adults were physically inactive, where being physically inactive is defined as “participating in less than 30 minutes of moderate intensity physical activity per week”. Here is the table from the report about physical activity:

So we have about 6 million people doing less than 10 minutes of walking per month, but only 3 million people doing less than 30 minutes of moderate intensity physical activity per week. So somehow, there must be 3 million people who are doing at least 30 minutes of physical activity per week while simultaneously walking for less than 10 minutes per month.
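
If you take the reported figures at face value, the arithmetic looks something like this (a rough sketch; the implied population size is my own back-calculation from the reported numbers, not a figure from the report):

```python
# Back-of-the-envelope check of the two PHE figures.
# Assumption: "about 6 million" people is 41% of the age group.
population = 6_000_000 / 0.41       # implied ~14.6 million adults aged 40-60
inactive = 0.20 * population        # <20% do under 30 min of activity/week: ~2.9 million
non_walkers = 6_000_000             # 41% reportedly walk <10 min per month

# People who walk <10 min/month yet still do >=30 min of activity/week
active_non_walkers = non_walkers - inactive
print(f"Implied population: {population:,.0f}")
print(f"Active non-walkers: {active_non_walkers:,.0f}")  # ~3 million
```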

I suppose that’s possible. Maybe those people cycle a lot, or perhaps drive to the gym and have a good old workout and then drive home again. But it seems unlikely.

And even if it’s true, the headline figure that 41% of middle aged adults are doing so little exercise that they don’t even manage 10 minutes of walking a month is grossly misleading. Because in fact over 80% of middle aged adults are exercising for at least 30 minutes per week.

I notice that the report on the PHE website doesn’t link to the precise questions asked in the survey. I am always sceptical of any survey results that aren’t accompanied by a detailed description of the survey methods, including specifying the precise questions asked, and this example only serves to remind me of the importance of maintaining that scepticism.

The news coverage focuses on the “41% walk for less than 10 minutes per month” figure and not on the far less alarming figure that less than 20% exercise for less than 30 minutes per week. The 41% figure is also presented first on the PHE website, and I’m guessing, given the similarity of stories in the media, that that was the figure they emphasised in their press release.

I find it disappointing that a body like PHE is prioritising newsworthiness over honest science.

Dangerous nonsense about vaping

If you thought you already had a good contender for “most dangerous, irresponsible, and ill-informed piece of health journalism of 2015”, then I’m sorry to tell you that it has been beaten into second place at the last minute.

With less than 36 hours left of 2015, I am confident that this article by Sarah Knapton in the Telegraph will win the title.

The article is titled “E-cigarettes are no safer than smoking tobacco, scientists warn”. The first paragraph is:

“Vaping is no safer that [sic] smoking, scientists have warned after finding that e-cigarette vapour damages DNA in ways that could lead to cancer.”

There are such crushing levels of stupid in this article it’s hard to know where to start. But perhaps I’ll start by pointing out that a detailed review of the evidence on vaping by Public Health England, published earlier this year, concluded that e-cigarettes are about 95% less harmful than smoking.

If you dig into the detail of that review, you find that most of the residual 5% is the harm of nicotine addiction. It’s debatable whether that can really be called a harm, given that most people who vape are already addicted to nicotine as a result of years of smoking cigarettes.

But either way, the evidence shows that vaping, while it may not be 100% safe (though let’s remember that nothing is 100% safe: even teddy bears kill people), is considerably safer than smoking. This should not be a surprise. We have a pretty good understanding of what the toxic components of cigarette smoke are that cause all the damage, and most of those are either absent from e-cigarette vapour or present at much lower concentrations.

So the question of whether vaping is 100% safe is not the most relevant thing here. The question is whether it is safer than smoking. Nicotine addiction is hard to beat, and if a smoker finds it impossible to stop using nicotine, but can switch from smoking to vaping, then that is a good thing for that person’s health.

Now, nothing is ever set in stone in science. If new evidence comes along, we should always be prepared to revise our beliefs.

But obviously to go from a conclusion that vaping is 95% safer than smoking to concluding they are both equally harmful would require some pretty robust evidence, wouldn’t it?

So let’s look at the evidence Knapton uses as proof that all the previous estimates were wrong and vaping is in fact as harmful as smoking.

The paper it was based on is this one, published in the journal Oral Oncology. (Many thanks to @CaeruleanSea for finding the link, which had defeated me after Knapton gave the wrong journal name in her article.)

The first thing to notice about this is that it is all lab based, using cell cultures, and so tells us little about what might actually happen in real humans. But the real kicker is that if we are going to compare vaping and smoking and conclude that they are as harmful as each other, then the cell cultures should have been exposed to equivalent amounts of e-cigarette vapour and cigarette smoke.

The paper describes how solutions were made by drawing either the vapour or smoke through cell media. We are then told that the cells were treated with the vaping medium every 3 days for up to 8 weeks. So presumably the cigarette medium was also applied every 3 days, right?

Well, no. Not exactly. This is what the paper says:

“Because of the high toxicity of cigarette smoke extract, cigarette-treated samples of each cell line could only be treated for 24 h.”

Yes, that’s right. The cigarette smoke was applied at a much lower intensity, because otherwise it killed the cells altogether. So how can you possibly conclude that vaping is no worse than smoking, when smoking is so harmful it kills the cells altogether and makes it impossible to do the experiment?

And yet despite that, the cigarettes still had a larger effect than the vaping. It is also odd that the results for cigarettes are not presented at all for some of the assays. I wonder if that’s because it had killed the cells and made the assays impossible? As primarily a clinical researcher, I’m not an expert in lab science, but not showing the results of your positive control seems odd to me.

But the paper still shows that the e-cigarette extract was harming cells, so that’s still a worry, right?

Well, there is the question of dose. It’s hard for me to know from the paper how realistic the doses were, as this is not my area of expertise, but the press release accompanying this paper (which may well be the only thing that Knapton actually read before writing her article) tells us the following:

“In this particular study, it was similar to someone smoking continuously for hours on end, so it’s a higher amount than would normally be delivered,”

Well, most things probably damage cells in culture if used at a high enough dose, so I don’t think this study really tells us much. All it tells us is that cigarettes do far more damage to cell cultures than e-cigarette vapour does. Because, and I can’t emphasise this point enough, THEY COULDN’T DO THE STUDY WITH EQUIVALENT DOSES OF CIGARETTE SMOKE BECAUSE IT KILLED ALL THE CELLS.

A charitable explanation of how Knapton could write such nonsense might be that she simply took the press release on trust (to be clear, the press release also makes the claim that vaping is as dangerous as smoking) and didn’t have time to check it. But leaving aside the question of whether a journalist on a major national newspaper should be regurgitating press releases without any kind of fact checking, I note that many people (myself included) have pointed out to Knapton on Twitter that there are flaws in the article. Her response has been not to engage with such criticism, but to insist she is right and to block anyone who disagrees: the Twitter equivalent of the “la la la I’m not listening” argument.

It seems hard to come up with any explanation other than that Knapton likes to write a sensational headline and simply doesn’t care whether it’s true, or, more importantly, what harm the article may do.

And make no mistake: articles like this do have the potential to cause harm. It is perfectly clear that, whether or not vaping is completely safe, it is vastly safer than smoking. It would be a really bad outcome if smokers who were planning to switch to vaping read Knapton’s article and thought “oh, well if vaping is just as bad as smoking, maybe I won’t bother”. Maybe some of those smokers will then go on to die a horrible death of lung cancer, which could have been avoided had they switched to vaping.

Is Knapton really so ignorant that she doesn’t realise that is a possible consequence of her article, or does she not care?

And in case you doubt that anyone would really be foolish enough to believe such nonsense, I’m afraid there is evidence that people do believe it. According to a survey by Action on Smoking and Health (ASH), the proportion of people who believe that vaping is as harmful or more harmful than smoking increased from 14% in 2014 to 22% in 2015. And in the USA, the figures may be even worse: this study found 38% of respondents thought e-cigarettes were as harmful or more harmful than smoking. (Thanks again to @CaeruleanSea for finding the links to the surveys.)

I’ll leave the last word to Deborah Arnott, Chief Executive of ASH:

“The number of ex-smokers who are staying off tobacco by using electronic cigarettes is growing, showing just what value they can have. But the number of people who wrongly believe that vaping is as harmful as smoking is worrying. The growth of this false perception risks discouraging many smokers from using electronic cigarettes to quit and keep them smoking instead which would be bad for their health and the health of those around them.”

Spinning good news as bad

It seems to have become a popular sport to try to exaggerate problems with disclosure of clinical trials, and to pretend that the problem of “secret hidden trials” is far worse than it really is. Perhaps the most prominent example of this is the All Trials campaign’s favourite statistic that “only half of all clinical trials have ever been published”, which I’ve debunked before. But a new paper was published last month which has given fresh material to the conspiracy theorists.

The paper in question was published in BMJ Open by Jennifer Miller and colleagues. They looked at 15 of the 48 drugs approved by the FDA in 2012. It’s not entirely clear to me why they focused on this particular subgroup: they state that they focused on large companies because they represented the majority of new drug applications. Now I’m no mathematician, but I have picked up some of the basics of maths in my career as a statistician, and I’m pretty sure that 15 out of 48 isn’t a majority. Remember that we are dealing with a subgroup analysis here: I think it might be important, and I’ll come back to it later.

Anyway, for each of those 15 drugs, Miller et al looked at the trials that had been used for the drug application, and then determined whether the trials had been registered and whether the results had been disclosed. They found that a median (per drug) of 65% of trials had been disclosed and 57% had been registered.

This study drew the kinds of responses you might expect from the usual suspects, describing the results as “inexcusable” and “appalling”.

[SAS tweet]

[Goldacre tweet]

(Note that both of those tweets imply that only 15 drugs were approved by the FDA in 2012, and don’t mention that it was a subgroup analysis from the 48 drugs that were really approved that year.)

The story was picked up in the media as well. “How pharma keeps a trove of drug trials out of public view” was how the Washington Post covered it. The Scientist obviously decided that even 65% disclosure wasn’t sensational enough, and reported “just one-third of the clinical trials that ought to have been reported by the trial sponsors were indeed published”.

But as you have probably guessed by now, when you start to look below the surface, some of these figures are not quite as they seem.

Let’s start with the figures for trial registration (the practice of making the design of a trial publicly available before it starts, which makes it harder to hide negative results or pretend that secondary outcomes were really primary). Trial registration is a fairly recent phenomenon. It only really came into being in the early 2000s, and did not become mandatory until 2007. Bear in mind that drugs take many years to develop, so some of the early trials done for drugs that were licensed in 2012 would have been done many years earlier, perhaps before the investigators had even heard of trial registration, and certainly before it was mandatory. So it’s not surprising that such old studies had not been prospectively registered.

Happily, Miller et al reported a separate analysis of those trials that were subject to mandatory registration. In that analysis, the median percentage of registered trials increased from 57% to 100%.

So I think a reasonable conclusion might be that mandatory trial registration has been successful in ensuring that trials are now being registered. I wouldn’t call that “inexcusable” or “appalling”. I’d call that a splendid sign of progress in making research more transparent.

So what about the statistic that only 65% of the trials disclosed results? That’s still bad, right?

Again, it’s a bit more complicated than that.

First, it’s quite important to look at how the results break down by phase of trial. It is noteworthy that the vast majority of the unpublished studies were phase I studies. These are typically small scale trials in healthy volunteers which are done to determine whether it is worth developing the drug further in clinical trials in patients. While I do not dispute for a minute that phase I trials should be disclosed, they are actually of rather little relevance to prescribers. If we are going to make the argument that clinical trials should be disclosed so that prescribers can see the evidence on what those drugs do to patients, then the important thing is that trials in patients should be published. Trials in healthy volunteers, while they should also be published in an ideal world, are a lower priority.

So what about the phase III trials? Phase III trials are the important ones, usually randomised controlled trials in large numbers of patients, which tell you whether the drug works and what its side effects are like. Miller et al report that 20% of drugs had at least 1 undisclosed phase III trial. That’s an interesting way of framing it. Another way of putting it is that 80% of the drugs had every single one of their phase III trials in the public domain. I think that suggests that trial disclosure is working rather well, don’t you? Unfortunately, the way Miller et al present their data doesn’t allow the overall percentage disclosure of phase III trials to be determined, and my request to the authors to share their data has so far gone unheeded (of which more below), but it is clearly substantially higher than 80%. Obviously anything less than 100% still has room for improvement, but the scare stories about a third of trials being hidden clearly don’t stack up.

And talking of trials being “hidden”, that is rather emotive language to describe what may simply be small delays in publication. Miller et al applied a cutoff date of 1 February 2014 in their analysis, and if results were not disclosed by that date then they considered them to be not disclosed. Now of course results should be disclosed promptly, and if it takes a bit longer, then that is a problem, but it is really not the same thing as claiming that results are being “kept secret”. Just out of interest, I checked on one of the drugs that seemed to have a particularly low rate of disclosure. According to Miller et al, the application for Perjeta was based on 12 trials, and only 8% had results reported on clinicaltrials.gov. That means they considered only one of them to have been reported. According to the FDA’s medical review (see page 29), 17 trials were submitted, not 12, which makes you wonder how thorough Miller et al’s quality control was. Of those 17 trials, 14 had been disclosed on clinicaltrials.gov when I looked. So had Miller et al used a different cut-off date, they would have found 82% of trials with results posted, not 8%.
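
A quick check of that arithmetic, for what it’s worth (simply recomputing the two percentages quoted above):

```python
# Disclosure rate as counted by Miller et al: 1 of the 12 trials they identified
print(f"Miller et al's count: {1 / 12:.0%}")   # ~8%

# Disclosure rate using the FDA medical review's count of 17 trials,
# 14 of which had results on clinicaltrials.gov at a later check date
print(f"Later recount:        {14 / 17:.0%}")  # ~82%
```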

I would like to be able to tell you more about the lower disclosure rates for phase I trials. Phase I trials are done early in a drug’s development, and so the phase I trials included in this study would typically have been done many years ago. It is possible that the lower publication rate for phase I trials is because phase I trials are intrinsically less likely to be published than trials in patients, but it is also possible that it is simply a function of when they were done. We know that publication rates have been improving over recent years, and it is possible that the publication rate for phase I trials done a decade or more ago is not representative of the situation today.

Sadly, I can’t tell you more about that. To distinguish between those possibilities, I would need to see Miller et al’s raw data. I did email them to ask for their raw data, and they emailed back to say how much they support transparency and data sharing, but haven’t actually sent me their data. It’s not entirely clear to me whether that’s because they have simply been too busy to send it or whether they are only in favour of transparency if other people have to do it, but if they do send the data subsequently I’ll be sure to post an update.

The other problem here is that, as I mentioned earlier, we are looking at a subgroup analysis. I think this may be important, as another study that looked at disclosure of drugs approved in 2012 found very different results. Rawal and Deane looked at drugs approved by the EMA in 2012, and found that 92% of the relevant trials had been disclosed. Again, it’s less than 100%, and so not good enough, but it certainly shows that things are moving in the right direction. And it’s a lot higher than the 65% that Miller et al found.

Why might these studies have come to such different results? Well, they are not looking at the same drugs. Not all of the drugs approved by the FDA in 2012 were approved by the EMA the same year. 48 drugs were approved by the FDA, and 23 by the EMA. Only 11 drugs were common to both agencies, and only 3 of those 11 drugs were included in Miller et al’s analysis. Perhaps the 15 drugs selected by Miller et al were not a representative sample of all 48 drugs approved by the FDA. It would be interesting to repeat Miller et al’s analysis with all 48 of the drugs approved by the FDA to see if the findings were similar, although I doubt that anyone will ever do that.

But personally, I would probably consider a study that looked at all eligible trials more reliable than one that chose an arbitrary subset, so I suspect that 92% is a more accurate figure for trial disclosure for drugs approved in 2012 than 65%.

Are 100% of clinical trials being disclosed? No, and this study confirms that. But it also shows that we are getting pretty close, at least for the trials most relevant for prescribers. Until 100% of trials are disclosed, there is still work to do, but things are not nearly as bad as the doom-mongers would have you believe. Transparency of clinical trial reporting is vastly better than it used to be, and don’t let anyone tell you otherwise.

Update 23 January 2016:

I have still not received the raw data for this study, more than 2 months after I asked for it. I think it is safe to assume that I’m not going to get it now. That’s disappointing, especially from authors who write in support of transparency.

The Independent’s anti-vaccine scaremongering

Last weekend The Independent published a ridiculous piece of anti-vaccine scaremongering by Paul Gallagher on their front page. They report the story of girls who became ill after receiving the HPV vaccine, and strongly imply that the vaccine was the cause of the illnesses, flying in the face of massive amounts of scientific evidence to the contrary.

I could go on at length about how dreadful, irresponsible, and scientifically illiterate the article was, but I won’t, because Jen Gunter and jdc325 have already done a pretty good job of that. You should go and read their blogposts. Do it now.

Right, are you back? Let’s carry on then.

What I want to talk about today is the response I got from the Independent when I emailed the editor of the Independent on Sunday, Lisa Markwell, to suggest that they might want to publish a rebuttal to correct the dangerous misinformation in the original article. Ms Markwell was apparently too busy to reply to a humble reader, so the reply came from the deputy editor, Will Gore. Here it is below, with my annotations.

Dear Dr Jacobs

Thank you for contacting us about an article which appeared in last weekend’s Independent on Sunday.

Media coverage of vaccine programmes – including reports on concerns about real or perceived side-effects – is clearly something which must be carefully handled; and we are conscious of the potential pitfalls. Equally, it is important that individuals who feel their concerns have been ignored by health care professionals have an outlet to explain their position, provided it is done responsibly.

I’d love to know what they mean by “provided it is done responsibly”. I think a good start would be not to stoke anti-vaccine conspiracy theories with badly researched scaremongering. Obviously The Independent has a different definition of “responsibly”. I have no idea what that definition might be, though I suspect it includes something about ad revenue.

On this occasion, the personal story of Emily Ryalls – allied to the comparatively large number of ADR reports to the MHRA in regard to the HPV vaccine – prompted our attention. We made clear that no causal link has been established between the symptoms experienced by Miss Ryalls (and other teenagers) and the HPV vaccine. We also quoted the MHRA at length (which says the possibility of a link remains ‘under review’), as well as setting out the views of the NHS and Cancer Research UK.

Oh, seriously? You “made it clear that no causal link has been established”? Are we even talking about the same article here? The one I’m talking about has the headline “Thousands of teenage girls enduring debilitating illnesses after routine school cancer vaccination”. On what planet does that make it clear that the link was not causal?

I think what they mean by “made it clear that no causal link has been established” is that they were very careful with their wording not to explicitly claim a causal link, while nonetheless using all the rhetorical tricks at their disposal to make sure a causal link was strongly implied.

Ultimately, we were not seeking to argue that vaccines – HPV, or others for that matter – are unsafe.

No, you’re just trying to fool your readers into thinking they’re unsafe. So that’s all right then.

Equally, it is clear that for people like Emily Ryalls, the inexplicable onset of PoTS has raised questions which she and her family would like more fully examined.

And how does blaming it on something that is almost certainly not the real cause help?

Moreover, whatever the explanation for the occurrence of PoTS, it is notable that two years elapsed before its diagnosis. Miss Ryalls’ family argue that GPs may have failed to properly assess symptoms because they were irritated by the Ryalls mentioning the possibility of an HPV connection.

I don’t see how that proves a causal link with the HPV vaccine. And anyway, didn’t you just say that you were careful to avoid claiming a causal link?

Moreover, the numbers of ADR reports in respect of HPV do appear notably higher than for other vaccination programmes (even though, as the quote from the MHRA explained, the majority may indeed relate to ‘known risks’ of vaccination; and, as you argue, there may be other particular explanations).

Yes, there are indeed other explanations. What a shame you didn’t mention them in your story. Perhaps if you had done, your claim to be careful not to imply a causal link might look a bit more plausible. But I suppose you don’t like the facts to get in the way of a good story, do you?

The impact on the MMR programme of Andrew Wakefield’s flawed research (and media coverage of it) is always at the forefront of editors’ minds whenever concerns about vaccines are raised, either by individuals or by medical studies. But our piece on Sunday was not in the same bracket.

No, sorry, it is in exactly the same bracket. The media coverage of MMR vaccine was all about hyping up completely evidence-free scare stories about the risks of MMR vaccine. The present story is all about hyping up completely evidence-free scare stories about the risk of HPV vaccine. If you’d like to explain to me what makes those stories different, I’m all ears.

It was a legitimate item based around a personal story and I am confident that our readers are sophisticated enough to understand the wider context and implications.

Kind regards

Will Gore
Deputy Managing Editor

If Mr Gore seriously believes his readers are sophisticated enough to understand the wider context, then he clearly hasn’t read the readers’ comments on the article. It is totally obvious that a great many readers have inferred a causal relationship between the vaccine and subsequent illness from the article.

I put that point to Mr Gore, who replied that he was not sure the readers’ comments are representative.

Well, that’s true. They are probably not. But they don’t need to be.

There are no doubt some readers of the article who are dyed-in-the-wool anti-vaccinationists. They believed all vaccines are evil before reading the article, and they still believe all vaccines are evil. For those people, the article will have had no effect.

Many other readers will have enough scientific training (or just simple common sense) to realise that the article is nonsense. They will not infer a causal relationship between the vaccine and the illnesses. All they will infer is that The Independent is spectacularly incompetent at reporting science stories and that it would be really great if The Independent could afford to employ someone with a science GCSE to look through some of their science articles before publishing them. They will also not be harmed by the article.

But there is a third group of readers. Some people are not anti-vaccine conspiracy theorists, but nor do they have science training. They probably start reading the article with an open mind. After reading the article, they may decide that HPV vaccine is dangerous.

And what if some of those readers are teenage girls who are due for the vaccination? What if they decide not to get vaccinated? What if they subsequently get HPV infection, and later die of cervical cancer?

Sure, there probably aren’t very many people to whom that description applies. But how many is an acceptable number? Perhaps Gallagher, Markwell, and Gore would like to tell me how many deaths from cervical cancer would be a fair price to pay for writing the article?

It is not clear to me whether Gallagher, Markwell, and Gore are simply unaware of the harm that such an article can do, or if they are aware, and simply don’t care. Are they so naive as to think that their article doesn’t promote an anti-vaccinationist agenda, or do they think that clicks on their website and ad revenue are a more important cause than human life?

I really don’t know which of those possibilities I think is more likely, nor would I like to say which is worse.

Is smoking plunging children into poverty?

If we feel it necessary to characterise ourselves as being “pro” or “anti” certain things, I would unambiguously say that I am anti-smoking. Smoking is a vile habit. I don’t like being around people who are smoking. And as a medical statistician, I am very well aware of the immense harm that smoking does to the health of smokers and those unfortunate enough to be exposed to their smoke.

So it comes as a slight surprise to me that I find myself writing what might be seen as a pro-smoking blogpost for the second time in just a few weeks.

But this blogpost is not intended to be pro-smoking: it is merely anti the misuse of statistics by some people in the anti-smoking lobby. Just because you are campaigning against a bad thing does not give you a free pass to throw all notions of scientific rigour and social responsibility to the four winds.

An article appeared yesterday on the Daily Mail website with the headline:

“Smoking not only kills, it plunges children into POVERTY because parents ‘prioritise cigarettes over food'”

and a similar, though slightly less extreme, version appeared in the Independent:

“Smoking parents plunging nearly half a million children into poverty, says new research”

According to the Daily Mail, parents are failing to feed their children because they are spending money on cigarettes instead of food. The Independent is not quite so explicit in claiming that, but it’s certainly implied.

Regular readers of this blog will no doubt already have guessed that those articles are based on some research which may have been vaguely related to smoking and poverty, but which absolutely did not show that any children were going hungry because of their parents’ smoking habits. And they would be right.

The research behind these stories is this paper by Belvin et al. There are a number of problems with it, and particularly with the way their findings have been represented in the media.

The idea of children being “plunged into poverty” came from looking at families with at least one smoker whose income was just above the poverty line. Poverty in this case is defined as a household income less than 60% of the median household income (taking into account family size). If deducting a family’s estimated cigarette expenditure from its income took a family that started above the poverty line below it, then that family was regarded as being taken into poverty by smoking.
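
To make that criterion concrete, here is a minimal sketch of the definition as I read it (the function and the numbers are mine, purely for illustration; the paper also equivalises income for family size, which I have omitted):

```python
def taken_into_poverty(income, cigarette_spend, median_income):
    """Above the poverty line on full income, but below it once
    estimated cigarette expenditure is deducted."""
    poverty_line = 0.6 * median_income
    return income >= poverty_line and (income - cigarette_spend) < poverty_line

# Illustrative numbers only -- not taken from the paper
print(taken_into_poverty(income=17_000, cigarette_spend=2_000, median_income=27_000))  # True
print(taken_into_poverty(income=25_000, cigarette_spend=2_000, median_income=27_000))  # False
```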

Now, for a start, Belvin et al did not actually measure how much any family just above the poverty line spent on smoking. They made a whole bunch of estimates and extrapolations from surveys that were done for different purposes. So that’s one problem for a start.

Another problem is that absolutely nowhere did Belvin et al look at expenditure on food. There is no evidence whatsoever from their study that any family left their children hungry, and certainly not that smoking was the cause. Claiming that parents were prioritising smoking over food is not even remotely supported by the study, as it’s just not something that was measured at all.

Perhaps the most pernicious problem is the assumption that poverty was specifically caused by smoking. I expect many families with an income above 60% of the median spend some of their money on something other than feeding their children. Perhaps some spend their money on beer. Perhaps others spend money on mobile phone contracts. Or maybe on going to the cinema. Or economics textbooks. Or pretty much anything else you can think of that is not strictly essential. Any of those things could equally be regarded as “plunging children into poverty” if deducting it from income left the family below the poverty line.

So why single out smoking?

I have a big problem with this. I said earlier that I thought smoking was a vile habit. But there is a big difference between believing smoking is a vile habit and believing smokers are vile people. They are not. They are human beings. To try to pin the blame on them for their children’s poverty (especially in the absence of any evidence that their children are actually going hungry) is troubling. I am not comfortable with demonising minority groups. It wouldn’t be OK if the group in question were, say, Muslims, and it’s not OK when the group is smokers.

There are many and complex causes of poverty. But blaming the poor is really not the response of a civilised society.

The way this story was reported in the Daily Mail is, not surprisingly, atrocious. But it’s not entirely their fault. The research was filtered through Nottingham University’s press office before it got to the mainstream media, and I’m afraid to say that Nottingham University are just as guilty here. Their press release states

“The reserch [sic] suggests that parents are likely to forgo basic household and food necessities in order to fund their smoking addiction.”

No, the research absolutely does not suggest that, because the researchers didn’t measure it. In fact I think Nottingham University are far more guilty than the Daily Mail. An academic institution really ought to know better than to misrepresent the findings of their research in this socially irresponsible way.

Chocolate, clueless reporting, and ethics

I have just seen a report of a little hoax pulled on the media by John Bohannon. What he did was to run a small and deliberately badly designed clinical trial, the results of which showed that eating chocolate helps you lose weight.

The trial showed no such thing, of course, as Bohannon points out. It just used bad design and blatant statistical trickery to come up with the result, which should not have fooled anyone who read the paper even with half an eye open.

Bohannon then sent press releases about the study to various media outlets, many of which printed the story completely uncritically. Here’s an example from the Daily Express.

This may be a lovely little demonstration of how lazy and clueless the media are, but I have a nasty feeling it’s actually highly problematic.

The problem is that neither Bohannon’s description of the hoax nor the paper publishing the results of the study make any mention of ethical review. Let’s remember that although the science was deliberately flawed, there was still a real clinical trial here with real human participants.

What were those participants told? Were they deceived about the true nature of the study? According to Bohannon,

“They used Facebook to recruit subjects around Frankfurt, offering 150 Euros to anyone willing to go on a diet for 3 weeks. They made it clear that this was part of a documentary film about dieting, but they didn’t give more detail.”

That certainly sounds to me like deception. It is an absolutely essential feature of clinical research that all studies must be approved by an independent ethics committee. This is all the more important if participants are being deceived, which is always a tricky ethical issue. There is no rule that gives an exception to research done as a hoax.

The research was apparently done under the supervision of a German doctor, Gunter Frank. While I can’t claim to be an expert in professional requirements of German doctors, I would be astonished if running a clinical trial without ethical approval was not a serious disciplinary matter.

And yet there is no mention anywhere of ethical approval for this study. I really, really hope that’s just an oversight. Recruiting human participants to a clinical trial without proper ethical approval is absolutely not acceptable.

Update 29 May:

According to the normally reliable Retraction Watch, my fears about this study were justified. They are reporting that Bohannon had confirmed to them that the study did not have ethical approval.

Also, the paper has mysteriously disappeared from the journal’s website, so I’ve replaced the link to the paper with a link to a copy of it preserved thanks to Google’s web cache and Freezepage.

Are strokes really rising in young people?

I woke up to the news this morning that there has been an alarming increase in the number of strokes in people aged 40-54.

My first thought was “this has been sponsored by a stroke charity, so they probably have an interest in making the figures seem alarming”. So I wondered how robust the research was that led to this conclusion.

The article above did not link to a published paper describing the research. So I looked on the Stroke Association’s website. There, I found a press release. This press release also didn’t link to any published paper, which makes me think that there is no published paper. It’s hard to believe a press release describing a new piece of research would fail to tell you if it had been published in a respectable journal.

The press release describes data on hospital admissions provided by the NHS, which shows that the number of men aged 40 to 54 admitted to hospital with strokes increased from 4260 in the year 2000 to 6221 in 2014, and the equivalent figures for women were an increase from 3529 to 4604.
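
In percentage terms (my own quick calculation from those figures):

```python
men_2000, men_2014 = 4260, 6221
women_2000, women_2014 = 3529, 4604
print(f"Men:   {men_2014 / men_2000 - 1:.0%} rise")      # ~46%
print(f"Women: {women_2014 / women_2000 - 1:.0%} rise")  # ~30%
```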

Well, yes, those figures are certainly substantial increases. But there could be various different reasons for them, some worrying, others reassuring.

It is possible, as the press release certainly wants us to believe, that the main reason for the increase is that strokes are becoming more common. However, it is also possible that recognition of stroke has improved, or that stroke patients are more likely now to get the hospital treatment they need than in the past. Both of those latter explanations would be good things.

So how do the Stroke Association distinguish among those possibilities?

Well, they don’t. The press release says “It is thought that the rise is due to increasing sedentary and unhealthy lifestyles, and changes in hospital admission practice.”

“It is thought that”? Seriously? Who thinks that? And why do they think it?

It’s nice that the Stroke Association acknowledge the possibility that part of the reason might be changes in hospital admission practice, but given that the title of the press release is “Stroke rates soar among men and women in their 40s and 50s” (note: not “Rates of hospital admission due to stroke soar”), there can be no doubt which message the Stroke Association want to emphasise.

I’m sorry, but they’re going to need better evidence than “it is thought that” to convince me they have teased out the relative contributions of different factors to the rise in hospital admissions.

Vaping among teenagers

Vaping, or use of e-cigarettes, has the potential to be a huge advance in public health. It provides an alternative to smoking that allows addicted smokers to get their nicotine fix without exposing them to all the harmful chemicals in cigarette smoke. This is a development that should be welcomed with open arms by everyone in the public health community, though oddly, it doesn’t seem to be. Many in the public health community are very much against vaping. The reasons for that might make an interesting blogpost for another day.

But today, I want to talk about a piece of research into vaping among teenagers that’s been in the news a lot today.

Despite the obvious upside of vaping, there are potential downsides. The concern is that it may be seen as a “gateway” to smoking. There is a theoretical risk that teenagers may be attracted to vaping and subsequently take up smoking. Obviously that would be a thoroughly bad thing for public health.

Clearly, this is an important area to research, so that we can better understand what the downsides of vaping might be.

So I was interested to see that a study has been published today that looks specifically at vaping among teenagers. Can it help to shed light on these important questions?

Looking at some of the stories in the popular media, you might think it could. We are told that e-cigs are the “alcopops of the nicotine world”, that there are “high rates of usage among secondary school pupils” and that e-cigs are “encouraging people to take up smoking”.

Those claims are, to use a technical term, bollocks.

Let’s look at what the researchers actually did. They used cross sectional questionnaire data in which a single question was asked about vaping: “have you ever tried or purchased e-cigarettes?”

The first thing to note is that the statistics are about the number of teenagers who have ever tried vaping. So they will be included in the statistics if they tried it once. Perhaps they were at a party and they had a single puff on a mate’s e-cig. The study gives us absolutely no information on the proportion of teenagers who vaped regularly. So the claim of “high rates of usage” just isn’t backed up by any evidence. Overall, about 1 in 5 of the teenagers answered yes to the question. Without knowing how many of those became regular users, it becomes very hard to draw any conclusions from the study.

But it gets worse.

The claim that vaping is encouraging people to take up smoking isn’t even remotely supported by the data. To do that, you would need to know what proportion of teenagers who hadn’t previously smoked try vaping, and subsequently go on to start smoking. Given that the present study is a cross sectional one (ie participants were studied only at a single point in time), it provides absolutely no information on that.

Even if you did know that, it wouldn’t tell you that vaping was necessarily a gateway to smoking. Maybe teenagers who start vaping and subsequently start smoking would have smoked anyway. To untangle that, you’d ideally need a randomised trial of areas in which vaping is available and areas in which it isn’t, though I can’t see that ever being done. The next best thing would be to look at changes in the prevalence of smoking among teenagers before and after vaping became available. If it increased after vaping became available, that might give you some reason to think vaping is acting as a gateway to smoking. But the current study provides absolutely no information to help with this question.

I’ve filed this post under “Dodgy reporting”, and of course the journalists who wrote about the study in such uncritical terms really should have known better, but actually I think the real fault lies here with the authors of the paper. In their conclusions, they write “Findings suggest that e-cigarettes are being accessed by teenagers more for experimentation than smoking cessation.”

No, they really don’t show that at all. Of those teenagers who had tried e-cigs, only 15.8% were never-smokers. And bear in mind that most of the overall sample (61.2%) were never-smokers. That suggests that e-cigs are far more likely to be used by current or former smokers than by non-smokers. In fact while only 4.9% of never-smokers had tried e-cigs (remember, that may mean only trying them once), 50.7% of ex-smokers had tried them. So a more reasonable conclusion might be that vaping is helping ex-smokers to quit, though in fact I don’t think it’s possible even to conclude that much from a cross-sectional study that didn’t measure whether vaping was a one-off puff or a habit.
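
In fact, the paper’s headline percentage follows almost mechanically from those figures. Here is a quick consistency check (the “roughly 1 in 5 overall” figure is my approximation from the study’s summary, as noted above):

```python
p_never = 0.612               # proportion of the whole sample who never smoked
p_tried_given_never = 0.049   # never-smokers who had ever tried an e-cig
p_tried_overall = 0.20        # about 1 in 5 overall had tried one (approximate)

# Proportion of e-cig triers who were never-smokers (Bayes' theorem)
p_never_given_tried = p_tried_given_never * p_never / p_tried_overall
print(f"{p_never_given_tried:.1%}")   # ~15%, close to the paper's 15.8%
```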

While there are some important questions to be asked about how vaping is used by teenagers, I’m afraid this new study does absolutely nothing to help answer them.

Update 1 April:

It seems I’m not the only person in the blogosphere to pick up some of the problems with the way this study has been spun. Here’s a good blogpost from Clive Bates, which as well as making several important points in its own right also contains links to some other interesting comment on the study.

Student tuition fees and disadvantaged applicants

Those of you who have known me for a while will remember that I used to blog on the now defunct Dianthus Medical website. The Internet Archive has kept some of those blogposts for posterity, but sadly not all of them. As I promised when I started this blog, I will get round to putting all those posts back on the internet one of these days, but I’m afraid I haven’t got round to that just yet.

But in the meantime, I’m going to repost one of those blogposts here, as it has just become beautifully relevant again. About this time last year, UCAS (the body responsible for university admissions in the UK) published a report which claimed to show that applications to university from disadvantaged young people were increasing proportionately more than applications from the more affluent, or in other words, the gap between rich and poor was narrowing.

Sadly, the report showed no such thing. The claim was based on a schoolboy error in statistics.

Anyway, UCAS have recently published their next annual report. Again, this claims to show that the gap between rich and poor is narrowing, but doesn’t. Again, we see the same inaccurate headlines in the media that naively take the report’s conclusions at face value, and we see exactly the same schoolboy error in the way the statistics were analysed in the report.

So as what I wrote last year is still completely relevant today, here goes…

One of the most significant political events of the current Parliament has been the huge increase in student tuition fees, which mean that most university students now need to pay £9000 per year for their education.

One of the arguments against this rise used by its opponents was that it would put off young people from disadvantaged backgrounds from applying to university. Supporters of the new system argued that it would not, as students can borrow the money via a student loan to be paid back over a period of decades, so no-one would have to find the money up front.

The new fees came into effect in 2012, so we should now have some empirical data that should allow us to find out who was right. So what do the statistics show? Have people from disadvantaged backgrounds been deterred from applying to university?

A report was published earlier this year by UCAS, the organisation responsible for handling applications to university, which specifically addresses the question of applications from disadvantaged areas. It shows (see page 17 of the report) that although there was a small drop in application rates from the most disadvantaged areas immediately after the new fees came into effect, from 18.0% in 2011 to 17.5% in 2012, the rates have since risen to 20.5% in 2014. And the ratio of the rate of applications from the most advantaged areas to the most disadvantaged areas fell from 3.0 in 2011 to 2.5 in 2014.

So, case closed, then? Clearly the new fees have not stopped people from disadvantaged areas applying to university?

Actually, no. It’s really not that simple. You see, there is a big statistical problem with the data.

That problem is known as regression to the mean. This is a tendency of characteristics with particularly high or low values to become more like average values over time. It’s something we know all about in clinical trials, and is one of the reasons why clinical trials need to include control groups if they are going to give reliable data. For example, in a trial of a medication for high blood pressure, you would expect patients’ blood pressure to decrease during the trial no matter what you do to them, as they had to have high blood pressure at the start of the trial or they wouldn’t have been included in it in the first place.

In the case of the university admission statistics, the specific problem is the precise way in which “disadvantaged areas” and “advantaged areas” were defined.

The advantage or disadvantage of an area was defined by the proportion of young people participating in higher education during the period 2000 to 2004. Since the “disadvantaged” areas were specifically defined as those areas that had previously had the lowest participation rates, it is pretty much inevitable that those rates would increase, no matter what the underlying trends were.

Similarly, the most advantaged areas were almost certain to see decreases in participation rates (at least relatively speaking, though this is somewhat complicated by the fact that overall participation rates have increased since 2004).

So the finding that the ratio of applications from most advantaged areas to those from least advantaged areas has decreased was exactly what we would expect from regression to the mean. I’m afraid this does not provide evidence that the new tuition fee regime has been beneficial to people from disadvantaged backgrounds. It is very hard to disentangle any real changes in participation rates from different backgrounds from the effects of regression to the mean.
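
If you want to see regression to the mean in action, here is a toy simulation (entirely my own construction, not based on the UCAS data): every area has exactly the same true participation rate in both periods, yet the areas picked out as “most disadvantaged” at baseline still show an apparent improvement.

```python
import random

random.seed(1)

# Toy model: all areas share one true rate; observed rates differ only by noise.
true_rate, noise, n_areas = 0.30, 0.05, 1000
baseline = [true_rate + random.gauss(0, noise) for _ in range(n_areas)]
followup = [true_rate + random.gauss(0, noise) for _ in range(n_areas)]

# "Disadvantaged" = bottom fifth of areas by observed baseline rate
bottom = sorted(range(n_areas), key=lambda i: baseline[i])[: n_areas // 5]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Bottom fifth at baseline: {mean([baseline[i] for i in bottom]):.3f}")
print(f"Same areas at follow-up:  {mean([followup[i] for i in bottom]):.3f}")
# The follow-up mean is higher purely because the selection was based on
# a noisy measurement, even though nothing real has changed.
```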

Unless anyone can point me to any better statistics on university applications from disadvantaged backgrounds, I think the question of whether the new tuition fee regime has helped or hindered social inequalities in higher education remains open.

The Saatchi Bill

I was disappointed to see yesterday that the Saatchi Bill (or Medical Innovations Bill, to give it its official name) passed its third reading in the House of Lords.

The Saatchi Bill, if passed, would be a dreadful piece of legislation. The arguments against it have been well rehearsed elsewhere, so I won’t go into them in detail here. But briefly, the bill sets out to solve a problem that doesn’t exist, and then offers solutions that wouldn’t solve it even if it did exist.

It is based on the premise that the main reason no progress is ever made in medical research (which is nonsense to start with, of course, because progress is made all the time) is that doctors are afraid to try innovative treatments in case they get sued. There is, however, absolutely no evidence that that’s true, and in any case, the bill would not help promote real innovation, as it specifically excludes the use of treatments as part of research. Without research, there is no meaningful innovation.

If the bill were simply ineffective, that would be one thing, but it’s also actively harmful. By removing the legal protection that patients currently enjoy against doctors acting irresponsibly, the bill would be a quack’s charter. It would certainly make it more likely that someone like Stanislaw Burzynski, an out-and-out quack who makes his fortune from fleecing cancer patients by offering them ineffective and dangerous treatments, could operate legally in the UK. That would not be a good thing.

One thing that has struck me about the sorry story of the Saatchi bill is just how dishonest Maurice Saatchi and his team have been. A particularly dishonourable mention goes to the Daily Telegraph, who have been the bill’s “official media partner”. Seriously? Since when did bills going through parliament have an official media partner? Some of the articles they have written have been breathtakingly dishonest. They wrote recently that the bill had “won over its critics”, which is very far from the truth. Pretty much the entire medical profession is against it: this response from the Academy of Medical Royal Colleges is typical. The same article says that one way the bill had won over its critics was by amending it to require that doctors treating patients under this law must publish their research. There are 2 problems with that: first, the law doesn’t apply to research, and second, it doesn’t say anything about a requirement to publish results.

In an article in the Telegraph today, Saatchi himself continued the dishonesty. As well as continuing to pretend that the bill is now widely supported, he also claimed that more than 18,000 patients responded to the Department of Health’s consultation on the bill. In fact, the total number of responses to the consultation was only 170.

The dishonesty behind the promotion of the Saatchi bill has been well documented by David Hills (aka “the Wandering Teacake”), and I’d encourage you to read his detailed blogpost.

The question that I want to ask about all this is why? Why is Maurice Saatchi doing all this? What does he have to gain from promoting a bill that’s going to be bad for patients but good for unscrupulous quacks?

I cannot know the answers to any of those questions, of course. Only Saatchi himself can know, and even he may not really know: we are not always fully aware of our own motivations. The rest of us can only speculate. But nonetheless, I think it’s interesting to speculate, so I hope you’ll bear with me while I do so.

The original impetus for the Saatchi bill came when Saatchi lost his wife to ovarian cancer. Losing a loved one to cancer is always difficult, and ovarian cancer is a particularly nasty disease. There can be no doubt that Saatchi was genuinely distressed by the experience, and deserves our sympathy.

No doubt it seemed like a good idea to try to do something about this. After all, as a member of the House of Lords, he has the opportunity to propose new legislation. It is completely understandable that if he thought a new law could help people who were dying of cancer, he would be highly motivated to introduce one.

All of that is very plausible and easy to understand. What has happened subsequently, however, is a little harder to understand.

It can’t have been very long after Saatchi proposed the bill that many people who know more about medicine than he does told him why it simply wouldn’t work, and would have harmful consequences. So I think what is harder to understand is why he persisted with the bill after all the problems with it had been explained to him.

It has been suggested that this is about personal financial gain: his advertising company works for various pharmaceutical companies, and pharmaceutical companies will gain from the bill.

However, I don’t believe that that is a plausible explanation for Saatchi’s behaviour.

For a start, I’m pretty sure that the emotional impact of losing a beloved wife is a far stronger motivator than money, particularly for someone who is already extremely rich. It’s not as if Saatchi needs more money. He’s already rich enough to buy the support of a major national newspaper and to get a truly dreadful bill through parliament.

And for another thing, I’m not at all sure that pharmaceutical companies would do particularly well out of the bill anyway. They are mostly interested in getting their drugs licensed so that they can sell them in large quantities. Selling them as a one-off to individual patients is unlikely to be at the top of their list of priorities.

For what it’s worth, my guess is that Saatchi just has difficulty admitting that he was wrong. It’s not a particularly rare personality trait. He originally thought the bill would genuinely help cancer patients, and when told otherwise, he simply ignored that information. You might see this as an example of the Dunning Kruger effect, and it’s certainly consistent with the widely accepted phenomenon of confirmation bias.

Granted, what we’re seeing here is a pretty extreme case of confirmation bias, and has required some spectacular dishonesty on the part of Saatchi to maintain the illusion that he was right all along. But Saatchi is a politician who originally made his money in advertising, and it would be hard to think of 2 more dishonest professions than politics and advertising. It perhaps shouldn’t be too surprising that dishonesty is something that comes naturally to him.

Whatever the reasons for Saatchi’s insistence on promoting the bill in the face of widespread opposition, this whole story has been a rather scary tale of how money and power can buy your way through the legislative process.

The bill still has to pass its third reading in the House of Commons before it becomes law. We can only hope that our elected MPs are smart enough to see what a travesty the bill is. If you want to write to your MP to ask them to vote against the bill, now would be a good time to do it.