All posts by Adam

STAT investigation on failure to report research results

A news story by the American health news website STAT has appeared in my Twitter feed many times over the last few days.

The story claims to show that “prestigious medical research institutions have flagrantly violated a federal law requiring public reporting of study results, depriving patients and doctors of complete data to gauge the safety and benefits of treatments”. They looked at whether results of clinical trials that should have been posted on the clinicaltrials.gov website actually were posted, and found that many of them were not. It’s all scary stuff, and once again, shows that those evil scientists are hiding the results of their clinical trials.

Or are they?

To be honest, it’s hard to know what to make of this one. The problem is that the “research” on which the story is based has not been published in a peer reviewed journal. It seems that the only place the “research” has been reported is on STAT’s own website. This is a significant problem, as the research is simply not reported in enough detail to know whether the methods it used were reliable enough to allow us to trust its conclusions. Maybe it was a fantastically thorough and entirely valid piece of research, or maybe it was dreadful. Without the sort of detail we would expect to see in a peer-reviewed research paper, it is impossible to know.

For example, the rather brief “methods section” of the article tells us that they filtered the data to exclude trials which were not required to report results, but they give no detail about how. So how do we know whether their dataset really contained only trials subject to mandatory reporting?

They also tell us that they excluded trials for which the deadline had not yet arrived, but again, they don’t tell us how. That’s actually quite important. If a trial has not yet reported results, then it’s hard to be sure when the trial finished. The clinicaltrials.gov website uses both actual and estimated dates of trial completion, and also has two different definitions of trial completion. We don’t know which definition was used, and if estimated dates were used, we don’t know if those estimates were accurate. In my experience, estimates of the end date of a clinical trial are frequently inaccurate.

Some really basic statistical details are missing. We are told that the results include “average” times by which results were late, but not whether those averages are means or medians. With skewed data such as time to report something, the difference is important.
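To see why this matters, here is a minimal sketch (the delay figures are invented for illustration, not taken from the STAT analysis):

```python
import statistics

# Hypothetical reporting delays in days: most trials a little late,
# a few very late -- the kind of right-skewed data you typically get
# with time-to-event measurements.
delays = [30, 45, 60, 75, 90, 120, 150, 700, 1100]

mean_delay = statistics.mean(delays)      # pulled upwards by the outliers
median_delay = statistics.median(delays)  # robust to the outliers

print(f"mean:   {mean_delay:.0f} days")
print(f"median: {median_delay:.0f} days")
```

With data like these, the mean (about 263 days) is nearly three times the median (90 days), so an “average” lateness figure means very different things depending on which statistic was used.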

It appears that the researchers did not determine whether results had been published in peer-reviewed journals. So the claim that results are being hidden may be totally wrong. Even if a trial was not posted on clinicaltrials.gov, it’s hard to support a claim that the results are hidden if they’ve been published in a medical journal.

It is hardly surprising there are important details missing. Publishing “research” on a news website rather than in a peer reviewed journal is not how you do science. A wise man once said “If you have a serious new claim to make, it should go through scientific publication and peer review before you present it to the media”. Only a fool would describe the STAT story as “excellent”.

One of the findings of the STAT story was that academic institutions were worse than pharmaceutical companies at reporting their trials. Although it’s hard to be sure if that result is trustworthy, for all the reasons I describe above, it is at least consistent with more than one other piece of research (and I’m not aware of any research that has found the opposite).

There is a popular narrative that says clinical trial results are hidden because of evil conspiracies. However, no-one ever has yet given a satisfactory explanation of how hiding their clinical trial results furthers academics’ evil plans for global domination.

A far more likely explanation is that posting results is a time consuming and faffy business, which may often be overlooked in the face of competing priorities. That doesn’t excuse it, of course, but it does help to understand why results posting on clinicaltrials.gov is not as good as it should be, particularly from academic researchers, who are usually less well resourced than their colleagues in the pharmaceutical industry.

If the claims of the STAT article are true and researchers are indeed falling below the standards we expect in terms of clinical trial disclosure, then I suggest that rather than getting indignant and seeking to apportion blame, the sensible approach would be to figure out how to fix things.

I and some colleagues published a paper about 3 years ago in which we suggest how to do exactly that. I hope that our suggestions may help to solve the problem of inadequate clinical trial disclosure.

Spinning good news as bad

It seems to have become a popular sport to try to exaggerate problems with disclosure of clinical trials, and to pretend that the problem of “secret hidden trials” is far worse than it really is. Perhaps the most prominent example of this is the All Trials campaign’s favourite statistic that “only half of all clinical trials have ever been published”, which I’ve debunked before. But a new paper was published last month which has given fresh material to the conspiracy theorists.

The paper in question was published in BMJ Open by Jennifer Miller and colleagues. They looked at 15 of the 48 drugs approved by the FDA in 2012. It’s not entirely clear to me why they focused on this particular subgroup: they state that they focused on large companies because they represented the majority of new drug applications. Now I’m no mathematician, but I have picked up some of the basics of maths in my career as a statistician, and I’m pretty sure that 15 out of 48 isn’t a majority. Remember that we are dealing with a subgroup analysis here: I think it might be important, and I’ll come back to it later.

Anyway, for each of those 15 drugs, Miller et al looked at the trials that had been used for the drug application, and then determined whether the trials had been registered and whether the results had been disclosed. They found that a median (per drug) of 65% of trials had been disclosed and 57% had been registered.
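To be clear about what a “median (per drug)” means here: each drug gets its own disclosure percentage, and the median is then taken across the drugs. A quick sketch with a hypothetical five-drug example (the per-drug numbers are invented, not Miller et al’s data):

```python
import statistics

# Invented disclosure percentages for five hypothetical drugs.
disclosure_by_drug = {"Drug A": 100, "Drug B": 80, "Drug C": 65,
                      "Drug D": 40, "Drug E": 8}

# The statistic reported is the median across drugs, not the
# percentage of all trials pooled together.
median_disclosure = statistics.median(disclosure_by_drug.values())
print(f"median disclosure across drugs: {median_disclosure}%")  # 65%
```

Note that a per-drug median can differ quite a bit from the pooled percentage of all trials, since drugs with many trials count no more than drugs with few.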

This study drew the kinds of responses you might expect from the usual suspects, describing the results as “inexcusable” and “appalling”.

SAS tweet

Goldacre tweet

(Note that both of those tweets imply that only 15 drugs were approved by the FDA in 2012, and don’t mention that it was a subgroup analysis from the 48 drugs that were really approved that year.)

The story was picked up in the media as well. “How pharma keeps a trove of drug trials out of public view” was how the Washington Post covered it. The Scientist obviously decided that even 65% disclosure wasn’t sensational enough, and reported “just one-third of the clinical trials that ought to have been reported by the trial sponsors were indeed published”.

But as you have probably guessed by now, when you start to look below the surface, some of these figures are not quite as they seem.

Let’s start with the figures for trial registration (the practice of making the design of a trial publicly available before it starts, which makes it harder to hide negative results or pretend that secondary outcomes were really primary). Trial registration is a fairly recent phenomenon. It only really came into being in the early 2000s, and did not become mandatory until 2007. Bear in mind that drugs take many years to develop, so some of the early trials done for drugs that were licensed in 2012 would have been done many years earlier, perhaps before the investigators had even heard of trial registration, and certainly before it was mandatory. So it’s not surprising that such old studies had not been prospectively registered.

Happily, Miller et al reported a separate analysis of those trials that were subject to mandatory registration. In that analysis, the median percentage of registered trials increased from 57% to 100%.

So I think a reasonable conclusion might be that mandatory trial registration has been successful in ensuring that trials are now being registered. I wouldn’t call that “inexcusable” or “appalling”. I’d call that a splendid sign of progress in making research more transparent.

So what about the statistic that only 65% of the trials disclosed results? That’s still bad, right?

Again, it’s a bit more complicated than that.

First, it’s quite important to look at how the results break down by phase of trial. It is noteworthy that the vast majority of the unpublished studies were phase I studies. These are typically small scale trials in healthy volunteers which are done to determine whether it is worth developing the drug further in clinical trials in patients. While I do not dispute for a minute that phase I trials should be disclosed, they are actually of rather little relevance to prescribers. If we are going to make the argument that clinical trials should be disclosed so that prescribers can see the evidence on what those drugs do to patients, then the important thing is that trials in patients should be published. Trials in healthy volunteers, while they should also be published in an ideal world, are a lower priority.

So what about the phase III trials? Phase III trials are the important ones, usually randomised controlled trials in large numbers of patients, which tell you whether the drug works and what its side effects are like. Miller et al report that 20% of drugs had at least 1 undisclosed phase III trial. That’s an interesting way of framing it. Another way of putting it is that 80% of the drugs had every single one of their phase III trials in the public domain. I think that suggests that trial disclosure is working rather well, don’t you? Unfortunately, the way Miller et al present their data doesn’t allow the overall percentage disclosure of phase III trials to be determined, and my request to the authors to share their data has so far gone unheeded (of which more below), but it is clearly substantially higher than 80%. Obviously anything less than 100% still has room for improvement, but the scare stories about a third of trials being hidden clearly don’t stack up.

And talking of trials being “hidden”, that is rather emotive language to describe what may simply be small delays in publication. Miller et al applied a cutoff date of 1 February 2014 in their analysis, and if results were not disclosed by that date then they considered them to be not disclosed. Now of course results should be disclosed promptly, and if it takes a bit longer, then that is a problem, but it is really not the same thing as claiming that results are being “kept secret”. Just out of interest, I checked on one of the drugs that seemed to have a particularly low rate of disclosure. According to Miller et al, the application for Perjeta was based on 12 trials, and only 8% had results reported on clinicaltrials.gov. That means they considered only one of them to have been reported. According to the FDA’s medical review (see page 29), 17 trials were submitted, not 12, which makes you wonder how thorough Miller et al’s quality control was. Of those 17 trials, 14 had been disclosed on clinicaltrials.gov when I looked. So had Miller et al used a different cutoff date, they would have found 82% of trials with results posted, not 8%.
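The arithmetic behind those two figures is easy to check, using the trial counts quoted above:

```python
# Miller et al's figure: 1 of the 12 trials they counted for Perjeta
# had results posted on clinicaltrials.gov by their cutoff date.
miller_rate = 1 / 12
print(f"Miller et al: {miller_rate:.0%}")  # 8%

# Using the FDA medical review's count of 17 submitted trials, of
# which 14 had results posted when I checked.
later_rate = 14 / 17
print(f"Later check:  {later_rate:.0%}")   # 82%
```

Same drug, same registry: the difference between 8% and 82% comes down to which denominator you use and when you look.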

I would like to be able to tell you more about the lower disclosure rates for phase I trials. Phase I trials are done early in a drug’s development, and so the phase I trials included in this study would typically have been done many years ago. It is possible that the lower publication rate for phase I trials is because phase I trials are intrinsically less likely to be published than trials in patients, but it is also possible that it is simply a function of when they were done. We know that publication rates have been improving over recent years, and it is possible that the publication rate for phase I trials done a decade or more ago is not representative of the situation today.

Sadly, I can’t tell you more about that. To distinguish between those possibilities, I would need to see Miller et al’s raw data. I did email them to ask for their raw data, and they emailed back to say how much they support transparency and data sharing, but haven’t actually sent me their data. It’s not entirely clear to me whether that’s because they have simply been too busy to send it or whether they are only in favour of transparency if other people have to do it, but if they do send the data subsequently I’ll be sure to post an update.

The other problem here is that, as I mentioned earlier, we are looking at a subgroup analysis. I think this may be important, as another study that looked at disclosure of drugs approved in 2012 found very different results. Rawal and Deane looked at drugs approved by the EMA in 2012, and found that 92% of the relevant trials had been disclosed. Again, it’s less than 100%, and so not good enough, but it certainly shows that things are moving in the right direction. And it’s a lot higher than the 65% that Miller et al found.

Why might these studies have come to such different results? Well, they are not looking at the same drugs. Not all of the drugs approved by the FDA in 2012 were approved by the EMA the same year. 48 drugs were approved by the FDA, and 23 by the EMA. Only 11 drugs were common to both agencies, and only 3 of those 11 drugs were included in Miller et al’s analysis. Perhaps the 15 drugs selected by Miller et al were not a representative sample of all 48 drugs approved by the FDA. It would be interesting to repeat Miller et al’s analysis with all 48 of the drugs approved by the FDA to see if the findings were similar, although I doubt that anyone will ever do that.

But personally, I would probably consider a study that looked at all eligible trials more reliable than one that chose an arbitrary subset, so I suspect that 92% is a more accurate figure for trial disclosure for drugs approved in 2012 than 65%.

Are 100% of clinical trials being disclosed? No, and this study confirms that. But it also shows that we are getting pretty close, at least for the trials most relevant for prescribers. Until 100% of trials are disclosed, there is still work to do, but things are not nearly as bad as the doom-mongers would have you believe. Transparency of clinical trial reporting is vastly better than it used to be, and don’t let anyone tell you otherwise.

Update 23 January 2016:

I have still not received the raw data for this study, more than 2 months after I asked for it. I think it is safe to assume that I’m not going to get it now. That’s disappointing, especially from authors who write in support of transparency.


Not clinically proven after all

I blogged last year about my doubts about the following advert:

Boots


Those seemed like rather bold claims, which as far as I could tell were not supported by the available evidence. It turns out the Advertising Standards Authority agrees with me. I reported the advert last year, and last week the ASA finally ruled on my complaint, which they upheld.

It’s worth reading the ASA’s ruling in full. They were very thorough. They came to much the same conclusions as I did: that although there was some hint of activity against a cold, there was no evidence of activity against flu, and even the evidence for a cold was not strong enough to make “clinically proven” a reasonable claim.

While the ASA get good marks for being thorough, they get less good marks for being prompt. It took them 11 months to make this ruling, which allowed Boots to continue misleading customers all that time. I suppose being thorough does take time, but even so, I’m disappointed that it took them quite as long as it did.

Boots are now no longer advertising Boots Cold and Flu Defence Nasal Spray on their website. They are, however, advertising a spookily similar looking product called Boots Cold Defence Nasal Spray. Although they have now dropped claims about flu, they are still claiming the product is “clinically proven” to both treat and prevent colds.

It is not clear to me whether this is the same product that’s just been rebranded or whether it is something different. I note that it says the active ingredient is carrageenan, which was the same active ingredient in the previous product. If it is the same product, then it’s good to see that they have dropped the flu claim, as that was totally unsupported. However, the cold claim is just as dubious as before, unless they have done new studies in the last year.

I have been in touch with the ASA about the Cold Defence product, and they have told me that since it’s a different product (or at least has a different name) it wouldn’t be covered by the previous ruling. If I felt that the claim was unjustified it would need a new complaint.

Is it just me who thinks Boots is being a bit cynical here? Unless the new product is something different that actually has a robust evidence base, they must know that the claim that it is clinically proven to treat and prevent colds does not stack up. But they are making it anyway. No doubt safe in the knowledge that by the time the ASA gets round to ruling on it, this year’s cold season will be well and truly over, and they will have had time to mislead plenty of customers in the meantime.

If the new advert is also found to be misleading, there will be no punishment for anyone at Boots. The worst that will happen to them is that they will be told to change the advert.

Why are big corporations allowed to mislead consumers with impunity?

The amazing magic Saatchi Bill

Yesterday saw the dangerous and misguided Saatchi Bill (now reincarnated as the Access to Medical Treatments (Innovation) Bill) debated in the House of Commons.

The bill started out as an attempt by the Conservative peer Lord Saatchi to write a new law to encourage innovation in medical research. I have no doubt that the motivation for doing so was based entirely on good intentions, but sadly the attempt was badly misguided. Although many people explained to Lord Saatchi why he was wrong to tackle the problem in the way he did, it turns out that listening to experts is not Saatchi’s strong suit, and he blundered on with his flawed plan anyway.

If you want to know what is wrong with the bill I can do no better than direct you to the Stop the Saatchi Bill website, which explains the problems with the bill very clearly. But briefly, it sets out to solve a problem that does not exist, and causes harm at the same time. It attempts to promote innovation in medical research by removing the fear of litigation from doctors who innovate, despite the fact that fear of litigation is not what stops doctors innovating. But worse, it removes important legal protection for patients. Although the vast majority of doctors put their patients’ best interests firmly at the heart of everything they do, there will always be a small number of unscrupulous quacks who will be only too eager to hoodwink patients into paying for ineffective or dangerous treatments if they think there is money in it.

If the bill is passed, any patients harmed by unscrupulous quacks will find it harder to get redress through the legal system. That does not protect patients.

Although the bill as originally introduced by Saatchi failed to make sufficient progress through Parliament, it has now been resurrected in a new, though essentially similar, form as a private member’s bill in the House of Commons.

I’m afraid to say that the debate in the House of Commons did not show our lawmakers in a good light.

We were treated to several speeches by people who clearly either didn’t understand what the bill was about or were being dishonest. The two notable exceptions were Heidi Alexander, the Shadow Health Secretary, and Sarah Wollaston, chair of the Health Select Committee and a doctor herself in a previous career. Both Alexander and Wollaston clearly showed that they had taken the trouble to read the bill and other relevant information carefully, and based their contributions on facts rather than empty rhetoric.

I won’t go into detail on all the speeches, but if you want to read them you can do so in Hansard.

The one speech I want to focus on is by George Freeman, the Parliamentary Under-Secretary of State for Life Sciences. As he is a government minister, his speech gives us a clue about the government’s official thinking on the bill. Remember that it is a private member’s bill, so government support is crucial if it is to have a chance of becoming law. Sadly, Freeman seems to have swallowed the PR surrounding the bill and was in favour of it.

Although Freeman said many things, many of which showed either a poor understanding of the issues or blatant dishonesty, the one I particularly want to focus on is where he imbued the bill with magic powers.

He repeated the myths about fear of litigation holding back medical research. He was challenged in those claims by both Sarah Wollaston and Heidi Alexander.

When he reeled off a whole bunch of statistics about how much money medical litigation cost the NHS, Wollaston asked him how much of that was specifically related to complaints about innovative treatments. His reply was telling:

“Most of the cases are a result of other contexts— as my hon. Friend will know, obstetrics is a big part of that—rather than innovation. I am happy to write to her with the actual figure as I do not have it to hand.”

Surely that is the one statistic he should have had to hand if he’d wanted to appear even remotely prepared for his speech? What is the point of being able to quote all sorts of irrelevant statistics about the total cost of litigation in the NHS if he didn’t know the one statistic that actually mattered? Could it be that he knew it was so tiny it would completely undermine his case?

He then proceeded to talk about the fear of litigation, at which point Heidi Alexander asked him what evidence he had. He had to admit that he had none, and muttered something about “anecdotally”.

But anyway, despite having failed to make a convincing case that fear of litigation was holding back innovation, he was very clear that he thought the bill would remove that fear.

And now we come to the magic bit.

How exactly was that fear of litigation to be removed? Was it by changing the law on medical negligence to make it harder to sue “innovative” doctors? This is what Freeman said:

“As currently drafted the Bill provides no change to existing protections on medical negligence, and that is important. It sets out the power to create a database, and a mechanism to make clear to clinicians how they can demonstrate compliance with existing legal protection—the Bolam test has been referred to—and allow innovations to be recorded for the benefit of other clinicians and their patients. Importantly for the Government, that does not change existing protections on medical negligence, and it is crucial to understand that.”

So the bill makes no change whatsoever to the law on medical negligence, but removes the fear that doctors will be sued for negligence. If you can think of a way that that could work other than by magic, I’m all ears.

In the end, the bill passed its second reading by 32 votes to 19. Yes, that’s right: well over 500* MPs didn’t think protection of vulnerable patients from unscrupulous quacks was worth turning up to vote about.

I find it very sad that such a misguided bill can make progress through Parliament on the basis of at best misunderstandings and at worst deliberate lies.

Although the bill has passed its second reading, it has not yet become law. It needs to go through its committee stage and then return to the House of Commons for its third reading first. It is to be hoped that common sense will prevail some time during that process, or patients harmed by unscrupulous quacks will find that the law does not protect them as much as it does now.

If you want to write to your MP to urge them to turn up and vote against this dreadful bill when it comes back for its third reading, now would be a good time.

* Many thanks to @_mattl on Twitter for pointing out the flaw in my original figure of 599: I hadn’t taken into account that the Speaker doesn’t vote, the Tellers aren’t counted in the totals, Sinn Fein MPs never turn up at all, and SNP MPs are unlikely to vote as this bill doesn’t apply to Scotland.

Equality of opportunity

Although this is primarily a blog about medical stuff, I did warn you that there might be the occasional social science themed post. This is one such post.

In his recent speech to the Conservative Party conference, David Cameron came up with many fine words about equality of opportunity. He led us to believe that he was for it. Here is an extract from the relevant part of his speech:

If we tackle the causes of poverty, we can make our country greater.

But there’s another big social problem we need to fix.

In politicians’ speak: a “lack of social mobility”.

In normal language: people unable to rise from the bottom to the top, or even from the middle to the top, because of their background.

Listen to this: Britain has the lowest social mobility in the developed world.

Here, the salary you earn is more linked to what your father got paid than in any other major country.

I’m sorry, for us Conservatives, the party of aspiration, we cannot accept that.

We know that education is the springboard to opportunity.

Fine words indeed. Cameron is quite right to identify lack of social mobility as a major problem. It cannot be right that your life chances should depend so much on who your parents are.

Cameron is also quite right to highlight the important role of education. Inequality of opportunity starts at school. If you have pushy middle class parents who get you into a good school, then you are likely to do better than if you have disadvantaged parents and end up in a poor school.

But it is very hard to reconcile Cameron’s fine words with today’s announcement of a new grammar school. In theory, grammar schools are supposed to aid social mobility by allowing bright kids from disadvantaged backgrounds to have a great education.

But in practice, they do no such thing.

In practice, grammar schools perpetuate social inequalities. Grammar schools are largely the preserve of the middle classes. According to research from the Institute for Fiscal Studies, children from disadvantaged backgrounds are less likely than their better off peers to get into grammar schools, even if they have the same level of academic achievement.

It’s almost as if Cameron says one thing but means something else entirely, isn’t it?

If Cameron is serious about equality of opportunity, I have one little trick from the statistician’s toolkit which I think could help, namely randomisation.

My suggestion is this. All children should be randomly allocated to a school. Parents would have no say in which school their child goes to: it would be determined purely by randomisation. The available pool of schools would of course need to be within reasonable travelling distance of where the child lives, but that distance could be defined quite generously, so that you wouldn’t still have cosy middle class schools in cosy middle class neighbourhoods and poor schools in disadvantaged neighbourhoods.
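As a statistician’s sketch, the allocation step itself could be as simple as this (the school names and distance rule are invented for illustration; a real scheme would also need capacity constraints and proper travel-time data):

```python
import random

def allocate_school(child_postcode, schools, max_distance_km, distance_fn, rng):
    """Randomly allocate a child to one school within travelling distance."""
    eligible = [s for s in schools
                if distance_fn(child_postcode, s) <= max_distance_km]
    if not eligible:
        raise ValueError("no school within travelling distance")
    return rng.choice(eligible)

# Toy example: the distances are made up, and the 'postcode' is just a label.
schools = ["Leafy Lane Grammar", "Mill Road Comprehensive", "Hillside Academy"]
toy_distances = {"Leafy Lane Grammar": 3.0,
                 "Mill Road Comprehensive": 7.5,
                 "Hillside Academy": 12.0}

rng = random.Random(42)  # seeded so the toy run is reproducible
chosen = allocate_school("AB1 2CD", schools,
                         max_distance_km=10,
                         distance_fn=lambda _postcode, s: toy_distances[s],
                         rng=rng)
print(chosen)  # one of the two schools within 10 km, chosen at random
```

The point of the generous distance limit is visible even in the toy version: each child’s eligible pool spans more than one neighbourhood, so parents cannot engineer the outcome by choice of address alone.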

At the moment, it is perfectly accepted by the political classes that some schools are good schools and others are poor. Once the middle classes realise that their own children might have to go to the poor schools, my guess is that the acceptance of the existence of poor schools would rapidly diminish. Political pressure would soon make sure that all schools are good schools.

That way, all children would have an equal start in life, no matter how rich their parents were.

This suggestion is, of course, pure fantasy. There is absolutely no way that our political classes would ever allow it. Under a system like that, their own children might have to go to school with the plebs, and that would never do, would it?

But please don’t expect me to take any politician seriously if they talk about equality of opportunity on the one hand but still support a system in which the school that kids go to is determined mainly by the socioeconomic status of their parents.

Mythbusting medical writing

I have recently published a paper, along with my colleagues at GAPP, addressing some of the myths surrounding medical writing.

As an aside, this was my last act as a GAPP member, and I have now stood down from the organisation. It was a privilege to be a founder member, and I am very proud of the work that GAPP has done, but now that I am no longer professionally involved in medical writing it seemed appropriate to move on.

Anyway, the paper addresses 3 myths surrounding the role of professional medical writers in preparing publications for the peer-reviewed medical literature:

  • Myth No 1: Medical writers are ghostwriters
  • Myth No 2: Ghostwriting is common
  • Myth No 3: Researchers should not need medical writing support

(Spoiler alert: none of those 3 things is actually true.)

Unfortunately, the full paper is paywalled. Sorry about that. This wasn’t our first choice of journal: the article was originally written in response to an invitation from another journal, who then rejected it. And as GAPP has no funding, there was no budget to pay for open access publishing.

But luckily, the journal allows me to post the manuscript as submitted (but not the nice neat typeset version) on my own website.

So here it is. Happy reading.

Zombie statistics on half of all clinical trials unpublished

You know what zombies are, right? No matter how often you kill them, they just keep coming back. So it is with zombie statistics. No matter how often they are debunked, people will keep repeating them as if they were a fact.

zombies

Picture credit: Scott Beale / Laughing Squid

As all fans of a particular horror movie genre know, the only way you can kill a zombie is to shoot it in the head. This blog post is my attempt at a headshot for the zombie statistic “only half of all clinical trials have ever been published”.

That statistic has been enthusiastically promoted by the All Trials campaign. The campaign itself is fighting for a thoroughly good cause. Their aim is to ensure that the results of all clinical trials are disclosed in the public domain. Seriously, who wouldn’t want to see that happen? Medical science, or indeed any science, can only progress if we know what previous research has shown.

But sadly, All Trials are not being very evidence-based in their use of statistics. They have recently written yet another article promoting the “only half of all clinical trials are published” zombie statistic, which I’m afraid is misleading in a number of ways.

The article begins: “We’re sometimes asked if it’s still true that around half of clinical trials have never reported results. Yes, it is.” Or at least that’s how it starts today. The article has been silently edited since it first appeared, with no explanation of why. That’s a bit odd for an organisation that claims to be dedicated to transparency.

The article continues “Some people point towards recent studies that found a higher rate of publication than that.” Well, yes. There are indeed many studies showing much higher rates of publication for recent trials, and I’ll show you some of those studies shortly. It’s good that All Trials acknowledge the recent increase in publication rates.

“But these studies look at clinical trials conducted very recently, often on the newest drugs, and therefore represent a tiny fraction of all the clinical trials that have ever been conducted”, the All Trials campaign would have us believe.

It’s worth looking at that claim in some detail.

Actually, the studies showing higher rates of publication were not necessarily conducted very recently. It’s true that some of the highest rates come from the most recent studies, as there has been a general trend towards greater disclosure for some time, and that trend still seems to be continuing. But rates have been increasing for a while now (since long before the All Trials campaign was even thought of, in case you are tempted to believe the spin that recent increases in disclosure rates are a direct result of the campaign), so it would be wrong to think that publication rates substantially higher than 50% have only been seen in the last couple of years. For example, Bourgeois et al’s 2010 study, which found that 80% of trials were disclosed in the public domain, consisted mostly of trials conducted over 10 years ago.

It’s a big mistake to think that trials in the last 10 years have a negligible effect on the totality of trials. The number of clinical trials being done has increased massively over time, so more recent trials are actually quite a large proportion of all trials that have ever been done. And certainly a large proportion of all trials that are still relevant. How much do you think this 1965 clinical trial of carbenoxolone sodium is going to inform treatment of gastric ulcers today in the era of proton pump inhibitors, for example?

If we look at the number of randomised controlled trials indexed in PubMed over time, we see a massive increase over the last couple of decades:

[Graph: number of randomised controlled trials indexed in PubMed per year]

In fact over half of all those trials have been published since 2005. I wouldn’t say over half is a “tiny fraction”, would you?

“Ah”, I hear you cry, “but what if more recent trials are more likely to be published? Maybe it only looks like more trials have been done recently.”

Yes, fair point. It is true that in the last century, a significant proportion of trials were unpublished. Maybe it was even about half, although it’s hard to know for sure, as there is no good estimate of the overall proportion, despite what All Trials would have you believe (and we’ll look at their claims in more detail shortly).

But even if we make the rather extreme assumption that up to 2000 only half of all trials were published, that the publication rate then increased evenly until 2005, and that from that point on 100% of trials were published, the date after which half of all trials were done only shifts back as far as 2001.
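For anyone who wants to check that back-of-the-envelope claim, here is a small Python sketch. The annual trial counts are invented (growing at roughly 7% a year, broadly in line with the shape of the PubMed graph), and the publication-rate assumptions are the extreme ones described above, so the exact years are illustrative only:

```python
# Back-of-envelope check: how far does the "half of all trials" date shift
# if older trials were under-published? The annual counts are invented.

def publication_rate(year):
    """Assumed rate: 50% up to 2000, rising evenly to 100% by 2005."""
    if year <= 2000:
        return 0.5
    if year >= 2005:
        return 1.0
    return 0.5 + 0.5 * (year - 2000) / 5

# Invented published-trial counts per year, 1970-2015, growing ~7% a year.
published = {year: int(100 * 1.07 ** (year - 1970)) for year in range(1970, 2016)}

def median_year(counts):
    """The year by which half of the total count has accumulated."""
    total = sum(counts.values())
    running = 0
    for year in sorted(counts):
        running += counts[year]
        if running >= total / 2:
            return year

# Adjust for unpublished trials: true count = published count / publication rate.
true_counts = {y: published[y] / publication_rate(y) for y in published}

print(median_year(published), median_year(true_counts))
```

With these made-up counts, the halfway point comes out at 2006 unadjusted and 2001 adjusted: even under the extreme assumption, the date moves back by only a few years.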

So the contribution of recent trials matters. In fact even the All Trials team themselves tacitly acknowledge this, if you look at the last sentence of their article:

“Only when all of this recent research is gathered together with all other relevant research and assessed in another systematic review will we know if this new data changes the estimate that around half of clinical trials have ever reported results.”

In other words, at the moment, we don’t know whether it’s still true that only around half of clinical trials have ever reported results. So why did they start by boldly stating that it is true?

The fact is that no study has ever estimated the overall proportion of trials that have been published. All Trials claim that their figure of 50% comes from a 2010 meta-analysis by Song et al. This is a strange claim, as Song et al do not report a figure for the proportion of trials published. Go on. Read their article. See if you can find anything saying “only 50% of trials are published”. I couldn’t. So it’s bizarre that All Trials claim that this paper is the primary support for their claim.

The paper does, however, report publication rates in several studies of completeness of publication, and although no attempt is made to combine them into an overall estimate, some of the figures are in the rough ballpark of 50%. Maybe All Trials considered that close enough to support a nice soundbite.

But the important thing to remember about the Song et al study is that although it was published in 2010, it is based on much older data. Most of the trials it looks at were from the 1990s, and many were from the 1980s. The most recent study included in the review only included trials done up to 2003. I think we can all agree that publication rates in the last century were way too low, but what has happened since then?

Recent evidence

Several recent studies have looked at completeness of publication, and have shown disclosure rates far higher than 50%.

One important thing to remember is that researchers today have the option of posting their results on websites such as clinicaltrials.gov, which were not available to researchers in the 1990s. So publication in peer reviewed journals is not the only way for clinical trial results to get into the public domain. Any analysis that ignores results postings on websites is going to greatly underestimate real disclosure rates. Some trials are posted on websites and not published in journals, while others are published in journals but not posted on websites. To look at the total proportion of trials with results disclosed in the public domain, you have to look at both.

There may be a perception among some that posting results on a website is somehow “second best”, and only publication in a peer-reviewed journal really counts as disclosure. However, the evidence says otherwise. Riveros et al published an interesting study in 2013, in which they looked at completeness of reporting in journal publications and on clinicaltrials.gov. They found that postings on clinicaltrials.gov were generally more complete than journal articles, particularly in the extent to which they reported adverse events. So perhaps it might even be reasonable to consider journal articles second best.

But nonetheless, I think we can reasonably consider a trial to be disclosed in the public domain whether it is published in a journal or posted on a website.

So what do the recent studies show?

Bourgeois et al (2010) looked at disclosure for 546 trials that had been registered on clinicaltrials.gov. They found that 80% of them had been disclosed (66% in journals, and a further 14% on websites). The results varied according to the funder: industry-sponsored trials were disclosed 88% of the time, and government funded trials 55% of the time, with other trials somewhere in between.

Ross et al (2012) studied 635 trials that had been funded by the NIH, and found 68% had been published in journals. They didn’t look at results posting on websites, so the real disclosure rate may have been higher than that. And bear in mind that government funded trials were the least likely to be published in Bourgeois et al’s study, so Ross et al’s results are probably an underestimate of the overall proportion of studies that were being disclosed in the period they studied.

Rawal and Deane published 2 studies, one in 2014 and one in 2015. Their 2014 study included 807 trials, of which 89% were disclosed, and their 2015 study included 340 trials, of which 92% were disclosed. However, both studies included only trials done by the pharmaceutical industry, which had the highest rates of disclosure in Bourgeois et al’s study, so we can’t necessarily assume that trials from non-industry sponsors are being disclosed at such a high rate.

Taken together, these studies show that the claim that only 50% of trials are published is really not tenable for trials done in the last decade or so. And remember that trials done in the last decade or so make up about half of all the trials that have ever been done.
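For a rough sense of the combined picture, here is a naive pooling of the four studies just described. This is not a proper meta-analysis, and the Ross figure counts journal publication only, so if anything it underestimates the true disclosure rate:

```python
# Sample sizes and disclosure rates as reported in the text above.
studies = {
    "Bourgeois et al 2010": (546, 0.80),
    "Ross et al 2012":      (635, 0.68),  # journal publication only
    "Rawal and Deane 2014": (807, 0.89),
    "Rawal and Deane 2015": (340, 0.92),
}

disclosed = sum(n * rate for n, rate in studies.values())
total = sum(n for n, _ in studies.values())
print(f"Crude pooled disclosure rate: {disclosed / total:.0%}")
```

That comes out at around 82%: a long way from 50%.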

Flaws in All Trials’ evidence

But perhaps you’re still not convinced? After all, All Trials include on their page a long list of quite recent references, which they say support their claim that half of all trials are unpublished.

Well, just having a long list of references doesn’t necessarily mean that you are right. If it did, then we would have to conclude that homeopathy is an effective treatment, as this page from the Society of Homeopaths has an even longer reference list. The important thing is whether the papers cited actually back up your claim.

So let’s take a look at the papers that All Trials cite. I’m afraid this section is a bit long and technical, which is unavoidable if we want to look at the papers in enough detail to properly assess the claims being made. Feel free to skip to the conclusions of this post if long and technical looks at the evidence aren’t your bag.

We’ve already looked at All Trials’ primary reference, the Song et al systematic review. This does show low rates of publication for trials in the last century, but what do the more recent studies show?

Ross et al, 2009, which found that 46% of trials on ClinicalTrials.gov, the world’s largest clinical trials register, had reported results.

For a start, this study is now rather old, and only included trials up to 2005, so it doesn’t tell us what’s been happening in the last decade. It is also likely to be a serious underestimate of the publication rate even then, for 3 reasons. First, the literature search for publications used only Medline. Many journals are not indexed in Medline, so the fact that a study can’t be found with a Medline search does not mean it hasn’t been published. Pretty much the first thing you learn at medical literature searching school is that searching Medline alone is not sufficient if you want to be systematic: it is important to search other databases such as Embase as well. Second, and perhaps most importantly, it only considered publications in journals, and did not look at results postings on websites. Third, although only completed trials were considered, 46% of the trials studied did not report an end date, so it is quite possible that some of those trials had finished only recently and were still being written up for publication.

Prayle et al, 2012, which found 22% of clinical trials had reported summary results on ClinicalTrials.gov within one year of the trial’s completion, despite this being a legal requirement of the US’s Food and Drug Administration Amendments Act 2007.

This was a study purely of results postings, so tells us nothing about the proportion of trials published in journals. Also, the FDA have criticised the methods of the study on several grounds.

Jones et al, 2013, which found 71% of large randomised clinical trials (those with 500 participants or more) registered on ClinicalTrials.gov had published results. The missing 29% of trials had approximately 250,000 trial participants.

71% is substantially higher than 50%, so it seems odd to use this as evidence to support the 50% figure. Also, 71% is only those trials published in peer-reviewed journals. The figure is 77% if you include results postings on websites. Plus the study sample included some active trials and some terminated trials, so is likely to be an underestimate for completed trials.

Schmucker et al, 2014, which found that 53% of clinical trials are published in journals. This study analysed 39 previous studies representing more than 20,000 trials.

This is quite a complex study. It was a meta-analysis, divided into 2 parts: cohorts of studies approved by ethics committees, and cohorts of studies registered in trial registries. The first part included predominantly old trials from the 1980s and 1990s.

The second part included more recent trials, but things start to unravel if you look at some of the studies more carefully. The first problem is that they only count publications in journals, and do not look at results postings on websites. Where the studies reported both publications and results postings, only the publications were considered, and results postings were ignored.

As with any meta-analysis, the results are only as good as the individual studies. I didn’t look in detail at all the studies included, but I did look at some of the ones with surprisingly low rates of disclosure. The largest was Huser et al 2013, which found only a 28% rate of disclosure. But that figure is very misleading: it was simply the percentage of clinicaltrials.gov records containing a link to a publication. Although sponsors should come back and update the clinicaltrials.gov record with a link to the article once they have published the results in a journal, in practice many don’t. So looking only at records with such a link is bound to produce a massive underestimate of the true publication rate (and that’s before we remember that results postings on the clinicaltrials.gov website weren’t counted). Manually searching for the articles would probably have found many more published trials.

Another study with a low publication rate included in the meta-analysis was Gopal et al 2012. The headline publication rate was 30%, out of a sample of 818 trials. However, all 818 of those trials had had results posted on clinicaltrials.gov, so the total disclosure rate was in fact 100%, although that figure is meaningless: it was determined by the study design rather than being a finding of the study.

The other study with a surprisingly low proportion of disclosed trials was Shamliyan et al 2012, which found only a 23% publication rate. This was only a small study (N=112), and its main flaw was that it searched only Medline, using what sounds like a rather optimistic search strategy based on titles and ID numbers, with no manual search. So as far as I can tell, if a paper was published without the clinicaltrials.gov ID number being indexed (and unfortunately many papers don’t include it) and didn’t use exactly the same verbatim title as the clinicaltrials.gov record, the publication wouldn’t have been found.

I haven’t checked all the papers, but if these 3 are anything to go by, there are some serious methodological problems behind Schmucker et al’s results.

Munch et al, 2014, which found 46% of all trials on treatments for pain had published results.

This was a study of 391 trials, of which only 181 had published results, which is indeed 46%. But those 391 trials included some trials that were still ongoing. I don’t think it’s reasonable to expect that a trial should be published before it is completed, do you? If you use the 270 completed trials as the denominator, then the publication rate increases to 67%. And even then, there was no minimum follow-up time specified in the paper. It is possible that some of those trials had only completed shortly before Munch et al searched for the results and were still being written up. It is simply not possible to complete a clinical study one day and publish the results the next day.
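The effect of the denominator choice is easy to check from the numbers in the paper:

```python
# Munch et al's own figures: 391 registered trials, of which 270 were
# completed and 181 had published results.
published = 181
registered = 391   # includes trials that were still ongoing
completed = 270

print(f"{published / registered:.0%} of all registered trials")  # the headline figure
print(f"{published / completed:.0%} of completed trials")        # the fairer figure
```

Same data, but using the denominator that actually makes sense turns 46% into 67%.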

Anderson et al, 2015, which found that 13% of 13,000 clinical trials conducted between January 2008 and August 2012 had reported results within 12 months of the end of the trial. By 5 years after the end of the trial, approximately 80% of industry-funded trials and between 42% and 45% of trials funded by government or academic institutions had reported results.

I wonder if they have given the right reference here, as I can’t match up the numbers for 5 years after the end of the trial to anything in the paper. But the Anderson et al 2015 study that they cite did not look at publication rates, only at postings on clinicaltrials.gov. It tells us absolutely nothing about total disclosure rates.

Chang et al, 2015, which found that 49% of clinical trials for high-risk medical devices in heart disease were published in a journal.

The flaws in this study are very similar to those in Ross et al 2009: the literature search only used Medline, and results posting on websites was ignored.

Conclusions

When you look at the evidence in detail, it is clear that the claim that half of all clinical trials are unpublished is not supported. The impression one gets from reading the All Trials blog post is that they decided “half of all trials are unpublished” was a great soundbite, and then tried desperately to find evidence that looks like it backs the soundbite up, provided it is spun in a certain way and the limitations of the research are ignored. Research showing higher rates of disclosure, of course, simply doesn’t get a mention.

This is not how to do science. You do not start with an answer and then try to look for a justification for it, while ignoring all disconfirming evidence. You start with a question and then look for the answer in as unbiased a way as possible.

It is disappointing to see an organisation nominally dedicated to accuracy in the scientific literature misusing statistics in this way.

And it is all so unnecessary. There are many claims that All Trials could make in support of their cause without having to torture the data like this. They could (and indeed do) point out that the historic low rate of reporting is still a problem, as many of the trials done in the last century are still relevant to today’s practice, and so it would be great if they could be retrospectively disclosed. If that was where their argument stopped, I would have no problem with it, but to claim that those historic low rates of reporting apply to the totality of clinical trials today is simply not supported by evidence.

All Trials could also point out that the rates of disclosure today are less than 100%, which is not good enough. That would also be a statement no reasonable person could argue with. They could even highlight the difficulty in finding research: many of the studies above do not show low rates of reporting, but they do show that reports of clinical trials can be hard to find. That is definitely a problem, and if All Trials want to suggest a way to fix it, that would be a thoroughly good thing.

There is no doubt that All Trials is fighting for a worthy cause. We should not be satisfied until 100% of clinical trials are disclosed, and we are not there yet. But to claim we are still in a position where only half of clinical trials are disclosed, despite all the evidence that rates of disclosure today are more typically in the region of 80-90%, is nothing short of dishonest.

I don’t care how good your cause is, there is never an excuse for using dodgy statistics as part of your campaigning.

 

Energy prices rip off

Today we have learned that the big six energy providers have been overcharging customers to the tune of over £1 billion per year.

Obviously your first thought on this story is “And what will we learn next week? Which religion the Pope follows? Or perhaps what bears do in the woods?” But I think it’s worth taking a moment to think about why the energy companies have got away with this, and what might be done about it.

Energy companies were privatised by the Thatcher government back in the late 1980s and early 1990s, based on the ideological belief that competition would make the market more efficient. I’m not sure I’d call overcharging consumers by over £1 billion efficient.

It’s as if Thatcher had read the first few pages of an economics textbook that talks about the advantages of competition and the free market, and then gave up on the book without reading the rest of it to find out what can go wrong with free markets in practice.

Many things can go wrong with free markets, but the big one here is information asymmetry. It’s an important assumption of free market competition that buyers and sellers have perfect information. If buyers do not know how much something is costing them, how can they choose the cheapest supplier?

It is extraordinarily difficult to compare prices among energy suppliers. When I last switched my energy supplier, I spent well over an hour constructing a spreadsheet to figure out which supplier would be cheapest for me. And I’m a professional statistician, so I’m probably better equipped to do that task than most.

Even finding out the prices is a struggle. Here is what I was presented with after I looked on NPower’s website to try to find the prices of their energy:

[Screenshot: NPower’s quote form, demanding personal details before showing prices]

It seems that they want to know everything about me before they’ll reveal their prices. And I’d already had to give them my postcode before I even got that far. Not exactly transparent, is it?

It was similarly impossible to find out Eon’s prices without giving them my entire life history. EDF and SSE were a bit more transparent, though both of them needed to know my postcode before they’d reveal their prices.

Here are EDF’s rates:

[Screenshot: EDF’s electricity rates]

And here are SSE’s rates:

[Screenshot: SSE’s electricity rates]

Which of those is cheaper? Without going through that spreadsheet exercise, I have no idea. And that’s just the electricity prices. I also have to do the same calculations for gas, and since all the suppliers give dual fuel discounts, I then have to calculate a combined total, as well as figuring out whether taking the cheapest deal on each fuel from separate suppliers would be enough to outweigh the dual fuel discount.

And then of course I also have to take into account how long prices are fixed for, what the exit charges are, etc etc.
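To see why this is so hard, here is a stripped-down version of my spreadsheet exercise as a Python sketch. The two tariffs are invented for illustration, but their structure, a daily standing charge plus a per-kWh unit rate, is the real one:

```python
# Compare two hypothetical electricity tariffs at different usage levels.

def annual_cost(standing_charge_per_day, unit_rate, kwh_per_year):
    """Annual electricity cost in pounds."""
    return standing_charge_per_day * 365 + unit_rate * kwh_per_year

TARIFFS = {
    "Supplier A": (0.10, 0.150),  # low standing charge, high unit rate
    "Supplier B": (0.27, 0.125),  # high standing charge, low unit rate
}

for usage in (1500, 3100):  # kWh per year: low user vs fairly typical household
    costs = {name: annual_cost(sc, rate, usage) for name, (sc, rate) in TARIFFS.items()}
    cheapest = min(costs, key=costs.get)
    print(f"{usage} kWh/year: cheapest is {cheapest} "
          + ", ".join(f"({n} £{c:.2f})" for n, c in costs.items()))
```

The cheapest supplier flips depending on how much energy you use, so no single headline number answers the question, even before gas, dual fuel discounts, and exit charges enter the picture. A flat price per unit would make the comparison trivial.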

Seriously, if I as a professional statistician find this impossibly difficult, how is anyone else supposed to figure it out? There are price comparison websites that are supposed to help people compare prices, but of course they have to make a living, and have their own problems.

It’s no wonder that competition is not working for the benefit of consumers.

So what is to be done about it?

I think there is a simple solution here. All suppliers should be required to charge in a simple and transparent way. The standing charge should go. Suppliers should be required simply to quote a price per unit, and should also be required to publish those prices prominently on their website without consumers having to give their inside leg measurements first. If different rates are given for day and night use, a common ratio of day rate to night rate should be required (the ratio used could be reviewed annually in response to market conditions).

Suppliers will no doubt argue that a flat price per unit is inefficient, as there are costs involved in simply having a customer even before any energy is used, and a customer who uses twice as much energy as another does not cost them twice as much.

Tough. The energy companies have had over 20 years to sort out their act, and have failed. While I’m not a fan of governments intervening in markets as a general principle, there are times when it is useful, and this is one of them. I don’t see how anyone can argue that an industry that overcharges consumers by over £1 billion per year is efficient. No single energy company would be at a disadvantage, as all its competitors would be in the same position.

There would be a further benefit to this idea, in that it would add an element of progressiveness to energy pricing. At the moment, poor people who don’t use much energy pay more per unit than rich people. That doesn’t really seem fair, does it?

This is such a simple and workable idea it is hard to understand why it hasn’t already been implemented. Unless, of course, recent governments were somehow on the side of big business and cared far less about ordinary consumers.

But that can’t be true, can it?

The Independent’s anti-vaccine scaremongering

Last weekend The Independent published a ridiculous piece of antivaccine scaremongering by Paul Gallagher on their front page. They report the story of girls who became ill after receiving HPV vaccine, and strongly imply that the HPV vaccine was the cause of the illnesses, flying in the face of massive amounts of scientific evidence to the contrary.

I could go on at length about how dreadful, irresponsible, and scientifically illiterate the article was, but I won’t, because Jen Gunter and jdc325 have already done a pretty good job of that. You should go and read their blogposts. Do it now.

Right, are you back? Let’s carry on then.

What I want to talk about today is the response I got from the Independent when I emailed the editor of the Independent on Sunday, Lisa Markwell, to suggest that they might want to publish a rebuttal to correct the dangerous misinformation in the original article. Ms Markwell was apparently too busy to reply to a humble reader, so my reply came from the deputy editor, Will Gore. Here it is below, with my annotations.

Dear Dr Jacobs

Thank you for contacting us about an article which appeared in last weekend’s Independent on Sunday.

Media coverage of vaccine programmes – including reports on concerns about real or perceived side-effects – is clearly something which must be carefully handled; and we are conscious of the potential pitfalls. Equally, it is important that individuals who feel their concerns have been ignored by health care professionals have an outlet to explain their position, provided it is done responsibly.

I’d love to know what they mean by “provided it is done responsibly”. I think a good start would be not to stoke anti-vaccine conspiracy theories with badly researched scaremongering. Obviously The Independent has a different definition of “responsibly”. I have no idea what that definition might be, though I suspect it includes something about ad revenue.

On this occasion, the personal story of Emily Ryalls – allied to the comparatively large number of ADR reports to the MHRA in regard to the HPV vaccine – prompted our attention. We made clear that no causal link has been established between the symptoms experienced by Miss Ryalls (and other teenagers) and the HPV vaccine. We also quoted the MHRA at length (which says the possibility of a link remains ‘under review’), as well as setting out the views of the NHS and Cancer Research UK.

Oh, seriously? You “made it clear that no causal link has been established”? Are we even talking about the same article here? The one I’m talking about has the headline “Thousands of teenage girls enduring debilitating illnesses after routine school cancer vaccination”. On what planet does that make it clear that the link was not causal?

I think what they mean by “made it clear that no causal link has been established” is that they were very careful with their wording not to explicitly claim a causal link, while nonetheless using all the rhetorical tricks at their disposal to make sure a causal link was strongly implied.

Ultimately, we were not seeking to argue that vaccines – HPV, or others for that matter – are unsafe.

No, you’re just trying to fool your readers into thinking they’re unsafe. So that’s all right then.

Equally, it is clear that for people like Emily Ryalls, the inexplicable onset of PoTS has raised questions which she and her family would like more fully examined.

And how does blaming it on something that is almost certainly not the real cause help?

Moreover, whatever the explanation for the occurrence of PoTS, it is notable that two years elapsed before its diagnosis. Miss Ryalls’ family argue that GPs may have failed to properly assess symptoms because they were irritated by the Ryalls mentioning the possibility of an HPV connection.

I don’t see how that proves a causal link with the HPV vaccine. And anyway, didn’t you just say that you were careful to avoid claiming a causal link?

Moreover, the numbers of ADR reports in respect of HPV do appear notably higher than for other vaccination programmes (even though, as the quote from the MHRA explained, the majority may indeed relate to ‘known risks’ of vaccination; and, as you argue, there may be other particular explanations).

Yes, there are indeed other explanations. What a shame you didn’t mention them in your story. Perhaps if you had done, your claim to be careful not to imply a causal link might look a bit more plausible. But I suppose you don’t like the facts to get in the way of a good story, do you?

The impact on the MMR programme of Andrew Wakefield’s flawed research (and media coverage of it) is always at the forefront of editors’ minds whenever concerns about vaccines are raised, either by individuals or by medical studies. But our piece on Sunday was not in the same bracket.

No, sorry, it is in exactly the same bracket. The media coverage of MMR vaccine was all about hyping up completely evidence-free scare stories about the risks of MMR vaccine. The present story is all about hyping up completely evidence-free scare stories about the risk of HPV vaccine. If you’d like to explain to me what makes those stories different, I’m all ears.

It was a legitimate item based around a personal story and I am confident that our readers are sophisticated enough to understand the wider context and implications.

Kind regards

Will Gore
Deputy Managing Editor

If Mr Gore seriously believes his readers are sophisticated enough to understand the wider context, then he clearly hasn’t read the readers’ comments on the article. It is totally obvious that a great many readers have inferred a causal relationship between the vaccine and subsequent illness from the article.

I replied to Mr Gore about that point, to which he replied that he was not sure the readers’ comments are representative.

Well, that’s true. They are probably not. But they don’t need to be.

There are no doubt some readers of the article who are dyed-in-the-wool anti-vaccinationists. They believed all vaccines are evil before reading the article, and they still believe all vaccines are evil. For those people, the article will have had no effect.

Many other readers will have enough scientific training (or just simple common sense) to realise that the article is nonsense. They will not infer a causal relationship between the vaccine and the illnesses. All they will infer is that The Independent is spectacularly incompetent at reporting science stories and that it would be really great if The Independent could afford to employ someone with a science GCSE to look through some of their science articles before publishing them. They will also not be harmed by the article.

But there is a third group of readers. Some people are not anti-vaccine conspiracy theorists, but nor do they have science training. They probably start reading the article with an open mind. After reading the article, they may decide that HPV vaccine is dangerous.

And what if some of those readers are teenage girls who are due for the vaccination? What if they decide not to get vaccinated? What if they subsequently get HPV infection, and later die of cervical cancer?

Sure, there probably aren’t very many people to whom that description applies. But how many is an acceptable number? Perhaps Gallagher, Markwell, and Gore would like to tell me how many deaths from cervical cancer would be a fair price to pay for writing the article?

It is not clear to me whether Gallagher, Markwell, and Gore are simply unaware of the harm that such an article can do, or if they are aware, and simply don’t care. Are they so naive as to think that their article doesn’t promote an anti-vaccinationist agenda, or do they think that clicks on their website and ad revenue are a more important cause than human life?

I really don’t know which of those possibilities I think is more likely, nor would I like to say which is worse.

Is smoking plunging children into poverty?

If we feel it necessary to characterise ourselves as being “pro” or “anti” certain things, I would unambiguously say that I am anti-smoking. Smoking is a vile habit. I don’t like being around people who are smoking. And as a medical statistician, I am very well aware of the immense harm that smoking does to the health of smokers and those unfortunate enough to be exposed to their smoke.

So it comes as a slight surprise to me that I find myself writing what might be seen as a pro-smoking blogpost for the second time in just a few weeks.

But this blogpost is not intended to be pro-smoking: it is merely anti the misuse of statistics by some people in the anti-smoking lobby. Just because you are campaigning against a bad thing does not give you a free pass to throw all notions of scientific rigour and social responsibility to the four winds.

An article appeared yesterday on the Daily Mail website with the headline:

“Smoking not only kills, it plunges children into POVERTY because parents ‘prioritise cigarettes over food'”

and a similar, though slightly less extreme, version appeared in the Independent:

“Smoking parents plunging nearly half a million children into poverty, says new research”

According to the Daily Mail, parents are failing to feed their children because they are spending money on cigarettes instead of food. The Independent is not quite so explicit in claiming that, but it’s certainly implied.

Regular readers of this blog will no doubt already have guessed that those articles are based on some research which may have been vaguely related to smoking and poverty, but which absolutely did not show that any children were going hungry because of their parents’ smoking habits. And they would be right.

The research behind these stories is this paper by Belvin et al. There are a number of problems with it, and particularly with the way their findings have been represented in the media.

The idea of children being “plunged into poverty” came from looking at families with at least one smoker whose income was just above the poverty line. Poverty in this case is defined as a household income less than 60% of the median household income (taking family size into account). If deducting a family’s estimated cigarette expenditure from its income took it below the poverty line, then that family was regarded as being taken into poverty by smoking.
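To make that classification rule concrete, here is a minimal sketch of the calculation as I have described it. The function names and all the figures are illustrative assumptions of mine, not taken from the paper, and I have ignored the equivalisation for family size for simplicity.

```python
# Sketch of the "taken into poverty by smoking" classification.
# All names and numbers are illustrative, not from Belvin et al.

def poverty_line(median_income: float) -> float:
    """Poverty line: 60% of median household income (equivalisation
    for family size is omitted here for simplicity)."""
    return 0.6 * median_income

def taken_into_poverty_by_smoking(income: float,
                                  cigarette_spend: float,
                                  median_income: float) -> bool:
    """A family at or above the poverty line whose income, minus its
    estimated cigarette expenditure, falls below the line is counted
    as having been 'taken into poverty' by smoking."""
    line = poverty_line(median_income)
    return income >= line and (income - cigarette_spend) < line

# Illustrative weekly figures in pounds: median income 500,
# so the poverty line is 300.
print(taken_into_poverty_by_smoking(310, 20, 500))  # True: 290 < 300
print(taken_into_poverty_by_smoking(310, 5, 500))   # False: 305 >= 300
print(taken_into_poverty_by_smoking(290, 20, 500))  # False: already below
```

Note that nothing in this rule involves measuring what the family actually spends on food, which is exactly the gap between what was calculated and what was claimed in the headlines.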

Now, for a start, Belvin et al did not actually measure how much any family just above the poverty line spent on smoking. They made a whole series of estimates and extrapolations from surveys that were done for different purposes. So that’s one problem.

Another problem is that absolutely nowhere did Belvin et al look at expenditure on food. There is no evidence whatsoever from their study that any family left their children hungry, and certainly not that smoking was the cause. Claiming that parents were prioritising smoking over food is not even remotely supported by the study, as it’s just not something that was measured at all.

Perhaps the most pernicious problem is the assumption that poverty was specifically caused by smoking. I expect many families with an income above 60% of the median spend some of their money on something other than feeding their children. Perhaps some spend their money on beer. Perhaps others spend money on mobile phone contracts. Or maybe on going to the cinema. Or economics textbooks. Or pretty much anything else you can think of that is not strictly essential. Any of those things could equally be regarded as “plunging children into poverty” if deducting it from income left a family below the poverty line.

So why single out smoking?

I have a big problem with this. I said earlier that I thought smoking was a vile habit. But there is a big difference between believing smoking is a vile habit and believing smokers are vile people. They are not. They are human beings. To try to pin the blame on them for their children’s poverty (especially in the absence of any evidence that their children are actually going hungry) is troubling. I am not comfortable with demonising minority groups. It wouldn’t be OK if the group in question were, say, Muslims, and it’s not OK when the group is smokers.

There are many and complex causes of poverty. But blaming the poor is really not the response of a civilised society.

The way this story was reported in the Daily Mail is, not surprisingly, atrocious. But it’s not entirely their fault. The research was filtered through Nottingham University’s press office before it got to the mainstream media, and I’m afraid to say that Nottingham University are just as guilty here. Their press release states

“The reserch [sic] suggests that parents are likely to forgo basic household and food necessities in order to fund their smoking addiction.”

No, the research absolutely does not suggest that, because the researchers didn’t measure it. In fact I think Nottingham University are far more guilty than the Daily Mail. An academic institution really ought to know better than to misrepresent the findings of their research in this socially irresponsible way.