All posts by Adam

The amazing magic Saatchi Bill

Yesterday saw the dangerous and misguided Saatchi Bill (now reincarnated as the Access to Medical Treatments (Innovation) Bill) debated in the House of Commons.

The bill started out as an attempt by the Conservative peer Lord Saatchi to write a new law to encourage innovation in medical research. I have no doubt that the motivation for doing so was based entirely on good intentions, but sadly the attempt was badly misguided. Although many people explained to Lord Saatchi why he was wrong to tackle the problem in the way he did, it turns out that listening to experts is not Saatchi’s strong suit, and he blundered on with his flawed plan anyway.

If you want to know what is wrong with the bill I can do no better than direct you to the Stop the Saatchi Bill website, which explains the problems with the bill very clearly. But briefly, it sets out to solve a problem that does not exist, and causes harm at the same time. It attempts to promote innovation in medical research by removing the fear of litigation from doctors who innovate, despite the fact that fear of litigation is not what stops doctors innovating. But worse, it removes important legal protection for patients. Although the vast majority of doctors put their patients’ best interests firmly at the heart of everything they do, there will always be a small number of unscrupulous quacks who will be only too eager to hoodwink patients into paying for ineffective or dangerous treatments if they think there is money in it.

If the bill is passed, any patients harmed by unscrupulous quacks will find it harder to get redress through the legal system. That does not protect patients.

Although the bill as originally introduced by Saatchi failed to make sufficient progress through Parliament, it has now been resurrected in a new, though essentially similar, form as a private member’s bill in the House of Commons.

I’m afraid to say that the debate in the House of Commons did not show our lawmakers in a good light.

We were treated to several speeches by people who clearly either didn’t understand what the bill was about or were being dishonest. The two notable exceptions were Heidi Alexander, the Shadow Health Secretary, and Sarah Wollaston, chair of the Health Select Committee and a doctor herself in a previous career. Both Alexander and Wollaston clearly showed that they had taken the trouble to read the bill and other relevant information carefully, and based their contributions on facts rather than empty rhetoric.

I won’t go into detail on all the speeches, but if you want to read them you can do so in Hansard.

The one speech I want to focus on is by George Freeman, the Parliamentary Under-Secretary of State for Life Sciences. As he is a government minister, his speech gives us a clue about the government’s official thinking on the bill. Remember that it is a private member’s bill, so government support is crucial if it is to have a chance of becoming law. Sadly, Freeman seems to have swallowed the PR surrounding the bill and was in favour of it.

Freeman said many things, many of which showed either a poor understanding of the issues or blatant dishonesty, but the one I particularly want to focus on is where he imbued the bill with magic powers.

He repeated the myths about fear of litigation holding back medical research. He was challenged on those claims by both Sarah Wollaston and Heidi Alexander.

When he reeled off a whole bunch of statistics about how much money medical litigation cost the NHS, Wollaston asked him how much of that was specifically related to complaints about innovative treatments. His reply was telling:

“Most of the cases are a result of other contexts— as my hon. Friend will know, obstetrics is a big part of that—rather than innovation. I am happy to write to her with the actual figure as I do not have it to hand.”

Surely that is the one statistic he should have had to hand if he’d wanted to appear even remotely prepared for his speech? What is the point of being able to quote all sorts of irrelevant statistics about the total cost of litigation in the NHS if he didn’t know the one statistic that actually mattered? Could it be that he knew it was so tiny it would completely undermine his case?

He then proceeded to talk about the fear of litigation, at which point Heidi Alexander asked him what evidence he had. He had to admit that he had none, and muttered something about “anecdotally”.

But anyway, despite having failed to make a convincing case that fear of litigation was holding back innovation, he was very clear that he thought the bill would remove that fear.

And now we come to the magic bit.

How exactly was that fear of litigation to be removed? Was it by changing the law on medical negligence to make it harder to sue “innovative” doctors? This is what Freeman said:

“As currently drafted the Bill provides no change to existing protections on medical negligence, and that is important. It sets out the power to create a database, and a mechanism to make clear to clinicians how they can demonstrate compliance with existing legal protection—the Bolam test has been referred to—and allow innovations to be recorded for the benefit of other clinicians and their patients. Importantly for the Government, that does not change existing protections on medical negligence, and it is crucial to understand that.”

So the bill makes no change whatsoever to the law on medical negligence, but removes the fear that doctors will be sued for negligence. If you can think of a way that that could work other than by magic, I’m all ears.

In the end, the bill passed its second reading by 32 votes to 19. Yes, that’s right: well over 500* MPs didn’t think protection of vulnerable patients from unscrupulous quacks was worth turning up to vote about.

I find it very sad that such a misguided bill can make progress through Parliament on the basis of at best misunderstandings and at worst deliberate lies.

Although the bill has passed its second reading, it has not yet become law. It needs to go through its committee stage and then return to the House of Commons for its third reading first. It is to be hoped that common sense will prevail some time during that process, or patients harmed by unscrupulous quacks will find that the law does not protect them as much as it does now.

If you want to write to your MP to urge them to turn up and vote against this dreadful bill when it comes back for its third reading, now would be a good time.

* Many thanks to @_mattl on Twitter for pointing out the flaw in my original figure of 599: I hadn’t taken into account that the Speaker doesn’t vote, the Tellers aren’t counted in the totals, Sinn Fein MPs never turn up at all, and SNP MPs are unlikely to vote as this bill doesn’t apply to Scotland.
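For what it’s worth, the footnote’s arithmetic can be laid out explicitly. A minimal sketch, in which every number below is my own assumption about the 2015 Parliament (party strengths, tellers, and so on), not a figure taken from the debate:

```python
# Back-of-envelope check of the footnote's arithmetic. All the numbers below
# are assumptions about the 2015 Parliament, not figures from Hansard.
TOTAL_SEATS = 650
SPEAKER = 1        # by convention, doesn't vote
SINN_FEIN = 4      # abstentionist MPs who never take their seats
SNP = 56           # unlikely to vote: the bill doesn't apply to Scotland
TELLERS = 4        # two per side, not counted in the vote totals
AYES, NOES = 32, 19

could_have_voted = TOTAL_SEATS - SPEAKER - SINN_FEIN - SNP
stayed_away = could_have_voted - AYES - NOES - TELLERS
print(stayed_away)  # "well over 500"
```

On these assumptions, something in the region of 530-odd MPs could have voted and didn’t, which is where "well over 500" comes from.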

Equality of opportunity

Although this is primarily a blog about medical stuff, I did warn you that there might be the occasional social science themed post. This is one such post.

In his recent speech to the Conservative Party conference, David Cameron came up with many fine words about equality of opportunity. He led us to believe that he was for it. Here is an extract from the relevant part of his speech:

If we tackle the causes of poverty, we can make our country greater.

But there’s another big social problem we need to fix.

In politicians’ speak: a “lack of social mobility”.

In normal language: people unable to rise from the bottom to the top, or even from the middle to the top, because of their background.

Listen to this: Britain has the lowest social mobility in the developed world.

Here, the salary you earn is more linked to what your father got paid than in any other major country.

I’m sorry, for us Conservatives, the party of aspiration, we cannot accept that.

We know that education is the springboard to opportunity.

Fine words indeed. Cameron is quite right to identify lack of social mobility as a major problem. It cannot be right that your life chances should depend so much on who your parents are.

Cameron is also quite right to highlight the important role of education. Inequality of opportunity starts at school. If you have pushy middle class parents who get you into a good school, then you are likely to do better than if you have disadvantaged parents and end up in a poor school.

But it is very hard to reconcile Cameron’s fine words with today’s announcement of a new grammar school. In theory, grammar schools are supposed to aid social mobility by allowing bright kids from disadvantaged backgrounds to have a great education.

But in practice, they do no such thing.

In practice, grammar schools perpetuate social inequalities. Grammar schools are largely the preserve of the middle classes. According to research from the Institute for Fiscal Studies, children from disadvantaged backgrounds are less likely than their better off peers to get into grammar schools, even if they have the same level of academic achievement.

It’s almost as if Cameron says one thing but means something else entirely, isn’t it?

If Cameron is serious about equality of opportunity, I have one little trick from the statistician’s toolkit which I think could help, namely randomisation.

My suggestion is this. All children should be randomly allocated to a school. Parents would have no say in which school their child goes to: it would be determined purely by randomisation. The available pool of schools would of course need to be within reasonable travelling distance of where the child lives, but that distance could be defined quite generously, so that you wouldn’t still have cosy middle class schools in cosy middle class neighbourhoods and poor schools in disadvantaged neighbourhoods.
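The allocation mechanism I have in mind is about as simple as randomisation gets. Here is a minimal sketch; the children, schools, and eligibility lists are all invented for illustration:

```python
import random

# Each child is allocated uniformly at random to one of the schools within
# reasonable travelling distance of their home. Parents get no say.
def allocate(children, schools_within_distance, seed=None):
    """Return a dict mapping each child to a randomly chosen eligible school."""
    rng = random.Random(seed)
    return {child: rng.choice(schools_within_distance[child])
            for child in children}

# Invented example data: names and catchment lists are purely illustrative.
children = ["Alice", "Bilal", "Carys"]
schools_within_distance = {
    "Alice": ["Hillcrest", "Riverside"],
    "Bilal": ["Riverside", "Parkfield", "Hillcrest"],
    "Carys": ["Parkfield"],
}
print(allocate(children, schools_within_distance, seed=1))
```

The only policy lever left is how generously "within reasonable travelling distance" is defined, which is exactly the point: the wider the pool, the less a child’s school depends on their postcode.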

At the moment, it is perfectly accepted by the political classes that some schools are good schools and others are poor. Once the middle classes realise that their own children might have to go to the poor schools, my guess is that the acceptance of the existence of poor schools would rapidly diminish. Political pressure would soon make sure that all schools are good schools.

That way, all children would have an equal start in life, no matter how rich their parents were.

This suggestion is, of course, pure fantasy. There is absolutely no way that our political classes would ever allow it. Under a system like that, their own children might have to go to school with the plebs, and that would never do, would it?

But please don’t expect me to take any politician seriously if they talk about equality of opportunity on the one hand but still support a system in which the school that kids go to is determined mainly by the socioeconomic status of their parents.

Mythbusting medical writing

I have recently published a paper, along with my colleagues at GAPP, addressing some of the myths surrounding medical writing.

As an aside, this was my last act as a GAPP member, and I have now stood down from the organisation. It was a privilege to be a founder member, and I am very proud of the work that GAPP has done, but now that I am no longer professionally involved in medical writing it seemed appropriate to move on.

Anyway, the paper addresses 3 myths surrounding the role of professional medical writers in preparing publications for the peer-reviewed medical literature:

  • Myth No 1: Medical writers are ghostwriters
  • Myth No 2: Ghostwriting is common
  • Myth No 3: Researchers should not need medical writing support

(Spoiler alert: none of those 3 things is actually true.)

Unfortunately, the full paper is paywalled. Sorry about that. This wasn’t our first choice of journal: the article was originally written at the invitation of another journal, which then rejected it. And as GAPP has no funding, there was no budget to pay for open access publishing.

But luckily, the journal allows me to post the manuscript as submitted (but not the nice neat typeset version) on my own website.

So here it is. Happy reading.

Zombie statistics on half of all clinical trials unpublished

You know what zombies are, right? No matter how often you kill them, they just keep coming back. So it is with zombie statistics. No matter how often they are debunked, people will keep repeating them as if they were a fact.

[Image: zombies. Picture credit: Scott Beale / Laughing Squid]

As all fans of a particular horror movie genre know, the only way you can kill a zombie is to shoot it in the head. This blog post is my attempt at a headshot for the zombie statistic “only half of all clinical trials have ever been published”.

That statistic has been enthusiastically promoted by the All Trials campaign. The campaign itself is fighting for a thoroughly good cause. Their aim is to ensure that the results of all clinical trials are disclosed in the public domain. Seriously, who wouldn’t want to see that happen? Medical science, or indeed any science, can only progress if we know what previous research has shown.

But sadly, All Trials are not being very evidence-based in their use of statistics. They have recently written yet another article promoting the “only half of all clinical trials are published” zombie statistic, which I’m afraid is misleading in a number of ways.

The article begins: “We’re sometimes asked if it’s still true that around half of clinical trials have never reported results. Yes, it is.” Or at least that’s how it starts today. The article has been silently edited since it first appeared, with no explanation of why. That’s a bit odd for an organisation that claims to be dedicated to transparency.

The article continues “Some people point towards recent studies that found a higher rate of publication than that.” Well, yes. There are indeed many studies showing much higher rates of publication for recent trials, and I’ll show you some of those studies shortly. It’s good that All Trials acknowledge the recent increase in publication rates.

“But these studies look at clinical trials conducted very recently, often on the newest drugs, and therefore represent a tiny fraction of all the clinical trials that have ever been conducted”, the All Trials campaign would have us believe.

It’s worth looking at that claim in some detail.

Actually, the studies showing higher rates of publication were not necessarily conducted very recently. It’s true that some of the highest rates come from the most recent studies, as there has been a general trend to greater disclosure for some time, which still seems to be continuing. But rates have been increasing for a while now (certainly since long before the All Trials campaign was even thought of, in case you are tempted to believe the spin that recent increases in disclosure rates are a direct result of the campaign), so it would be wrong to think that rates of publication substantially higher than 50% have only been seen in the last couple of years. For example, Bourgeois et al’s 2010 study, which found 80% of trials were disclosed in the public domain, included mostly trials conducted over 10 years ago.

It’s a big mistake to think that trials in the last 10 years have a negligible effect on the totality of trials. The number of clinical trials being done has increased massively over time, so more recent trials are actually quite a large proportion of all trials that have ever been done. And certainly a large proportion of all trials that are still relevant. How much do you think this 1965 clinical trial of carbenoxolone sodium is going to inform treatment of gastric ulcers today in the era of proton pump inhibitors, for example?

If we look at the number of randomised controlled trials indexed in PubMed over time, we see a massive increase over the last couple of decades:

[Graph: number of randomised controlled trials indexed in PubMed per year]

In fact over half of all those trials have been published since 2005. I wouldn’t say over half is a “tiny fraction”, would you?

“Ah”, I hear you cry, “but what if more recent trials are more likely to be published? Maybe it only looks like more trials have been done recently.”

Yes, fair point. It is true that in the last century, a significant proportion of trials were unpublished. Maybe it was even about half, although it’s hard to know for sure, as there is no good estimate of the overall proportion, despite what All Trials would have you believe (and we’ll look at their claims in more detail shortly).

But even if we make the rather extreme assumption that up to 2000 only half of all trials were published, and that the publication rate then increased evenly until 2005, from which point 100% of trials were published, the date after which half of all trials were done only shifts back as far as 2001.
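That back-of-envelope argument can be made concrete with a toy model. Everything here is an illustrative assumption, not real data: published trial numbers growing geometrically (8% a year from 1970 in this sketch) and the extreme publication-rate schedule just described. The exact years depend on the growth rate you assume; the point is that the median year only shifts back a few years, not decades:

```python
# Toy model: how far back does the "half of all trials" date move if we
# inflate older years for unpublished trials? All inputs are assumptions.
def publication_rate(year):
    # The extreme assumption: 50% up to 2000, rising evenly to 100% by 2005.
    if year <= 2000:
        return 0.5
    if year >= 2005:
        return 1.0
    return 0.5 + 0.1 * (year - 2000)

years = range(1970, 2016)
published = {y: 1.08 ** (y - 1970) for y in years}   # trials we can see (8%/yr growth)
all_trials = {y: published[y] / publication_rate(y) for y in years}

def median_year(counts):
    """Year by which half of the total count had accumulated."""
    half = sum(counts.values()) / 2
    running = 0.0
    for y in sorted(counts):
        running += counts[y]
        if running >= half:
            return y

print(median_year(published), median_year(all_trials))
```

With these made-up inputs the median year of published trials is 2007, and allowing for the assumed unpublished trials drags it back only to 2002: a shift of five years, not a return to the distant past.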

So the contribution of recent trials matters. In fact even the All Trials team themselves tacitly acknowledge this, if you look at the last sentence of their article:

“Only when all of this recent research is gathered together with all other relevant research and assessed in another systematic review will we know if this new data changes the estimate that around half of clinical trials have ever reported results.”

In other words, at the moment, we don’t know whether it’s still true that only around half of clinical trials have ever reported results. So why did they start by boldly stating that it is true?

The fact is that no study has ever estimated the overall proportion of trials that have been published. All Trials claim that their figure of 50% comes from a 2010 meta-analysis by Song et al. This is a strange claim, as Song et al do not report a figure for the proportion of trials published. Go on. Read their article. See if you can find anything saying “only 50% of trials are published”. I couldn’t. So it’s bizarre that All Trials claim that this paper is the primary support for their claim.

The paper does, however, report publication rates in several studies of completeness of publication, and although no attempt is made to combine them into an overall estimate, some of the figures are in the rough ballpark of 50%. Maybe All Trials considered that close enough to support a nice soundbite.

But the important thing to remember about the Song et al study is that although it was published in 2010, it is based on much older data. Most of the trials it looks at were from the 1990s, and many were from the 1980s. The most recent study included in the review only included trials done up to 2003. I think we can all agree that publication rates in the last century were way too low, but what has happened since then?

Recent evidence

Several recent studies have looked at completeness of publication, and have shown disclosure rates far higher than 50%.

One important thing to remember is that researchers today have the option of posting their results on websites such as clinicaltrials.gov, which were not available to researchers in the 1990s. So publication in peer reviewed journals is not the only way for clinical trial results to get into the public domain. Any analysis that ignores results postings on websites is going to greatly underestimate real disclosure rates. Some trials are posted on websites and not published in journals, while others are published in journals but not posted on websites. To look at the total proportion of trials with results disclosed in the public domain, you have to look at both.

There may be a perception among some that posting results on a website is somehow “second best”, and only publication in a peer-reviewed journal really counts as disclosure. However, the evidence says otherwise. Riveros et al published an interesting study in 2013, in which they looked at completeness of reporting in journal publications and on clinicaltrials.gov. They found that postings on clinicaltrials.gov were generally more complete than journal articles, particularly in the extent to which they reported adverse events. So perhaps it might even be reasonable to consider journal articles second best.

But nonetheless, I think we can reasonably consider a trial to be disclosed in the public domain whether it is published in a journal or posted on a website.

So what do the recent studies show?

Bourgeois et al (2010) looked at disclosure for 546 trials that had been registered on clinicaltrials.gov. They found that 80% of them had been disclosed (66% in journals, and a further 14% on websites). The results varied according to the funder: industry-sponsored trials were disclosed 88% of the time, and government funded trials 55% of the time, with other trials somewhere in between.

Ross et al (2012) studied 635 trials that had been funded by the NIH, and found 68% had been published in journals. They didn’t look at results posting on websites, so the real disclosure rate may have been higher than that. And bear in mind that government funded trials were the least likely to be published in Bourgeois et al’s study, so Ross et al’s results are probably an underestimate of the overall proportion of studies that were being disclosed in the period they studied.

Rawal and Deane published 2 studies, one in 2014 and one in 2015. Their 2014 study included 807 trials, of which 89% were disclosed, and their 2015 study included 340 trials, of which 92% were disclosed. However, both studies included only trials done by the pharmaceutical industry, which had the highest rates of disclosure in Bourgeois et al’s study, so we can’t necessarily assume that trials from non-industry sponsors are being disclosed at such a high rate.

Taken together, these studies show that the claim that only 50% of trials are published is really not tenable for trials done in the last decade or so. And remember that trials done in the last decade or so make up about half the trials that have ever been done.
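As a rough sanity check, we can pool the headline figures from those four studies into a crude sample-size-weighted average. This is back-of-envelope arithmetic on the numbers quoted above, not a proper meta-analysis (it ignores study design, sponsor mix, and the fact that Ross et al counted journals only):

```python
# Crude pooled disclosure rate from the recent studies discussed above.
# (number of trials, reported disclosure rate)
studies = {
    "Bourgeois 2010":     (546, 0.80),
    "Ross 2012":          (635, 0.68),  # journals only, so likely an underestimate
    "Rawal & Deane 2014": (807, 0.89),
    "Rawal & Deane 2015": (340, 0.92),
}
disclosed = sum(n * rate for n, rate in studies.values())
total = sum(n for n, _ in studies.values())
pooled_percent = round(100 * disclosed / total)
print(pooled_percent)
```

Even this crude pooling lands in the low 80s percent, nowhere near 50%.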

Flaws in All Trials’s evidence

But perhaps you’re still not convinced? After all, All Trials include on their page a long list of quite recent references, which they say support their claim that half of all trials are unpublished.

Well, just having a long list of references doesn’t necessarily mean that you are right. If it did, then we would have to conclude that homeopathy is an effective treatment, as this page from the Society of Homeopaths has an even longer reference list. The important thing is whether the papers cited actually back up your claim.

So let’s take a look at the papers that All Trials cite. I’m afraid this section is a bit long and technical, which is unavoidable if we want to look at the papers in enough detail to properly assess the claims being made. Feel free to skip to the conclusions of this post if long and technical looks at the evidence aren’t your bag.

We’ve already looked at All Trials’s primary reference, the Song et al systematic review. This does show low rates of publication for trials in the last century, but what do the more recent studies show?

Ross et al, 2009, which found that 46% of trials on ClinicalTrials.gov, the world’s largest clinical trials register, had reported results.

For a start, this trial is now rather old, and only included trials up to 2005, so it doesn’t tell us about what’s been happening in the last decade. It is also likely to be a serious underestimate of the publication rate even then, for 3 reasons. First, the literature search for publication only used Medline. Many journals are not indexed in Medline, so just because a study can’t be found with a Medline search does not mean it’s not been published. Pretty much the first thing you learn at medical literature searching school is that searching Medline alone is not sufficient if you want to be systematic, and it is important to search other databases such as Embase as well. Second, and perhaps most importantly, it only considers publications in journals, and does not look at results postings on websites. Third, although they only considered completed trials, 46% of the trials they studied did not report an end date, so it is quite possible that those trials had finished only recently and were still being written up for publication.

Prayle et al, 2012, which found 22% of clinical trials had reported summary results on ClinicalTrials.gov within one year of the trial’s completion, despite this being a legal requirement of the US’s Food and Drug Administration Amendments Act 2007.

This was a study purely of results postings, so tells us nothing about the proportion of trials published in journals. Also, the FDA have criticised the methods of the study on several grounds.

Jones et al, 2013, which found 71% of large randomised clinical trials (those with 500 participants or more) registered on ClinicalTrials.gov had published results. The missing 29% of trials had approximately 250,000 trial participants.

71% is substantially higher than 50%, so it seems odd to use this as evidence to support the 50% figure. Also, 71% is only those trials published in peer-reviewed journals. The figure is 77% if you include results postings on websites. Plus the study sample included some active trials and some terminated trials, so is likely to be an underestimate for completed trials.

Schmucker et al, 2014, which found that 53% of clinical trials are published in journals. This study analysed 39 previous studies representing more than 20,000 trials.

This is quite a complex study. It was a meta-analysis, divided into 2 parts: cohorts of studies approved by ethics committees, and cohorts of studies registered in trial registries. The first part included predominantly old trials from the 1980s and 1990s.

The second part included more recent trials, but things start to unravel if you look at some of the studies more carefully. The first problem is that they only count publications in journals, and do not look at results postings on websites. Where the studies reported both publications and results postings, only the publications were considered, and results postings were ignored.

As with any meta-analysis, the results are only as good as the individual studies. I didn’t look in detail at all the studies included, but I did look at some of the ones with surprisingly low rates of disclosure. The largest study was Huser et al 2013, which found only a 28% rate of disclosure. But this is very misleading: it counted only those trials whose clinicaltrials.gov record contained a link to a publication. Although sponsors should come back and update the clinicaltrials.gov record with a link to the article when they have published the results in a journal, in practice many don’t. So looking only at records with such a link will massively underestimate the true publication rate (and that’s before we remember that results postings on the clinicaltrials.gov website weren’t counted). It is likely that manually searching for the articles would have found many more published trials.

Another study with a low publication rate included in the meta-analysis was Gopal et al 2012. The headline publication rate was 30%, out of a sample size of 818. However, all 818 of those had had results posted on clinicaltrials.gov, so in fact the total disclosure rate was 100%, although of course that is meaningless as it was determined by their study design rather than being a finding of the study.

The other study with a surprisingly low proportion of disclosed trials was Shamilyan et al 2012, which found only a 23% publication rate. This was only a small study (N=112), but apart from that the main flaw was that it only searched Medline, and used what sounds like a rather optimistic search strategy, using titles and ID numbers, with no manual search. So as far as I can tell from this, if a paper is published without indexing the clinicaltrials.gov ID number (and unfortunately many papers don’t) and didn’t use exactly the same verbatim title for the publication as the clinicaltrials.gov record, then publications wouldn’t have been found.

I haven’t checked all the papers, but if these 3 are anything to go by, there are some serious methodological problems behind Schmucker et al’s results.

Munch et al, 2014, which found 46% of all trials on treatments for pain had published results.

This was a study of 391 trials, of which only 181 had published results, which is indeed 46%. But those 391 trials included some trials that were still ongoing. I don’t think it’s reasonable to expect that a trial should be published before it is completed, do you? If you use the 270 completed trials as the denominator, then the publication rate increases to 67%. And even then, there was no minimum follow-up time specified in the paper. It is possible that some of those trials had only completed shortly before Munch et al searched for the results and were still being written up. It is simply not possible to complete a clinical study one day and publish the results the next day.

Anderson et al, 2015, which found that 13% of 13,000 clinical trials conducted between January 2008 and August 2012 had reported results within 12 months of the end of the trial. By 5 years after the end of the trial, approximately 80% of industry-funded trials and between 42% and 45% of trials funded by government or academic institutions had reported results.

I wonder if they have given the right reference here, as I can’t match up the numbers for 5 years after the end of the trial to anything in the paper. But the Anderson et al 2015 study that they cite did not look at publication rates, only at postings on clinicaltrials.gov. It tells us absolutely nothing about total disclosure rates.

Chang et al, 2015, which found that 49% of clinical trials for high-risk medical devices in heart disease were published in a journal.

The flaws in this study are very similar to those in Ross et al 2009: the literature search only used Medline, and results posting on websites was ignored.

Conclusions

When you look at the evidence in detail, it is clear that the claim that half of all clinical trials are unpublished is not supported. The impression one gets from reading the All Trials blog post is that they have decided that “half of all trials are unpublished” is a great soundbite, and then they try desperately to find evidence that looks like it might back it up if it is spun in a certain way and limitations in the research are ignored. And of course research showing higher rates of disclosure is also ignored.

This is not how to do science. You do not start with an answer and then try to look for a justification for it, while ignoring all disconfirming evidence. You start with a question and then look for the answer in as unbiased a way as possible.

It is disappointing to see an organisation nominally dedicated to accuracy in the scientific literature misusing statistics in this way.

And it is all so unnecessary. There are many claims that All Trials could make in support of their cause without having to torture the data like this. They could (and indeed do) point out that the historic low rate of reporting is still a problem, as many of the trials done in the last century are still relevant to today’s practice, and so it would be great if they could be retrospectively disclosed. If that was where their argument stopped, I would have no problem with it, but to claim that those historic low rates of reporting apply to the totality of clinical trials today is simply not supported by evidence.

All Trials could also point out that the rates of disclosure today are less than 100%, which is not good enough. That would also be a statement no reasonable person could argue with. They could even highlight the difficulty in finding research: many of the studies above do not show low rates of reporting, but they do show that reports of clinical trials can be hard to find. That is definitely a problem, and if All Trials want to suggest a way to fix it, that would be a thoroughly good thing.

There is no doubt that All Trials is fighting for a worthy cause. We should not be satisfied until 100% of clinical trials are disclosed, and we are not there yet. But to claim we are still in a position where only half of clinical trials are disclosed, despite all the evidence that rates of disclosure today are more typically in the region of 80-90%, is nothing short of dishonest.

I don’t care how good your cause is, there is never an excuse for using dodgy statistics as part of your campaigning.


Energy prices rip-off

Today we have learned that the big six energy providers have been overcharging customers to the tune of over £1 billion per year.

Obviously your first thought on this story is “And what will we learn next week? Which religion the Pope follows? Or perhaps what bears do in the woods?” But I think it’s worth taking a moment to think about why the energy companies have got away with this, and what might be done about it.

Energy companies were privatised by the Thatcher government back in the late 1980s and early 1990s, based on the ideological belief that competition would make the market more efficient. I’m not sure I’d call overcharging consumers by over £1 billion efficient.

It’s as if Thatcher had read the first few pages of an economics textbook that talks about the advantages of competition and the free market, and then gave up on the book without reading the rest of it to find out what can go wrong with free markets in practice.

Many things can go wrong with free markets, but the big one here is information asymmetry. It’s an important assumption of free market competition that buyers and sellers have perfect information. If buyers do not know how much something is costing them, how can they choose the cheapest supplier?

It is extraordinarily difficult to compare prices among energy suppliers. When I last switched my energy supplier, I spent well over an hour constructing a spreadsheet to figure out which supplier would be cheapest for me. And I’m a professional statistician, so I’m probably better equipped to do that task than most.

Even finding out the prices is a struggle. Here is what I was presented with after I looked on NPower’s website to try to find the prices of their energy:

[Screenshot of NPower’s online quote form, 7 July 2015]

It seems that they want to know everything about me before they’ll reveal their prices. And I’d already had to give them my postcode before I even got that far. Not exactly transparent, is it?

It was similarly impossible to find out Eon’s prices without giving them my entire life history. EDF and SSE were a bit more transparent, though both of them needed to know my postcode before they’d reveal their prices.

Here are EDF’s rates:

[Screenshot of EDF’s electricity rates, 7 July 2015]

And here are SSE’s rates:

[Screenshot of SSE’s electricity rates, 7 July 2015]

Which of those is cheaper? Without going through that spreadsheet exercise, I have no idea. And that’s just the electricity prices. Obviously I have to do the same calculations for gas, and given that they all offer dual fuel discounts, I then have to calculate a combined total, as well as figuring out whether taking the cheapest deal on gas and electricity from separate suppliers would save enough to compensate for losing the dual fuel discount.

And then of course I also have to take into account how long prices are fixed for, what the exit charges are, etc etc.
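To see why the comparison is genuinely fiddly, here is a minimal sketch of that spreadsheet exercise. The tariff figures are invented for illustration, not any real supplier’s prices: the point is that which tariff is cheapest depends on how much energy you use.

```python
# A toy version of the tariff-comparison spreadsheet.
# All tariff figures are invented for illustration.

def annual_cost(standing_pence_per_day, unit_rate_pence_per_kwh, annual_kwh):
    """Annual electricity cost in pounds for a simple two-part tariff."""
    return (standing_pence_per_day * 365
            + unit_rate_pence_per_kwh * annual_kwh) / 100

tariffs = {
    "Supplier A": (25.0, 13.5),  # higher standing charge, cheaper units
    "Supplier B": (10.0, 15.0),  # lower standing charge, dearer units
}

for usage in (1500, 3100, 5000):  # low, typical, high annual kWh
    costs = {name: annual_cost(sc, rate, usage)
             for name, (sc, rate) in tariffs.items()}
    cheapest = min(costs, key=costs.get)
    print(usage, {k: round(v, 2) for k, v in costs.items()}, "->", cheapest)
```

With these made-up numbers the crossover sits at 3,650 kWh per year: below that, the low-standing-charge tariff wins; above it, the cheap-unit tariff wins. There is no way to see that without doing the arithmetic for your own consumption.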

Seriously, if I as a professional statistician find this impossibly difficult, how is anyone else supposed to figure it out? There are price comparison websites that are supposed to help people compare prices, but of course they have to make a living, and have their own problems.

It’s no wonder that competition is not working for the benefit of consumers.

So what is to be done about it?

I think there is a simple solution here. All suppliers should be required to charge in a simple and transparent way. The standing charge should go. Suppliers should be required simply to quote a price per unit, and should also be required to publish those prices prominently on their website without consumers having to give their inside leg measurements first. If different rates are given for day and night use, a common ratio of day rate to night rate should be required (the ratio used could be reviewed annually in response to market conditions).

Suppliers will no doubt argue that a flat price per unit is inefficient, as there are costs involved in simply having a customer even before any energy is used, and a customer who uses twice as much energy as another does not cost them twice as much.

Tough. The energy companies have had over 20 years to get their act together, and have failed. While I’m not a fan of governments intervening in markets as a general principle, there are times when it is useful, and this is one of them. I don’t see how anyone can argue that an industry that overcharges consumers by over £1 billion per year is efficient. No one energy company would be at a disadvantage, as all its competitors would be in the same position.

There would be a further benefit to this idea, in that it would add an element of progressiveness to energy pricing. At the moment, poor people who don’t use much energy pay more per unit than rich people. That doesn’t really seem fair, does it?
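The regressive effect of the standing charge is easy to quantify. Sticking with invented figures, the effective price per unit falls as consumption rises:

```python
# With a two-part tariff, the standing charge is spread over however
# many units you use, so low users pay more per kWh overall.
# Figures are illustrative, not real supplier prices.

def effective_unit_price(standing_pence_per_day, unit_rate_pence, annual_kwh):
    """Total annual cost divided by units used, in pence per kWh."""
    return (standing_pence_per_day * 365) / annual_kwh + unit_rate_pence

for kwh in (1000, 3100, 8000):  # low, typical, high users
    print(kwh, "kWh/yr:", round(effective_unit_price(25.0, 13.5, kwh), 2), "p/kWh")
```

On this made-up tariff, a household using 1,000 kWh a year pays over 22p per unit, while one using 8,000 kWh pays under 15p.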

This is such a simple and workable idea it is hard to understand why it hasn’t already been implemented. Unless, of course, recent governments were somehow on the side of big business and cared far less about ordinary consumers.

But that can’t be true, can it?

The Independent’s anti-vaccine scaremongering

Last weekend The Independent published a ridiculous piece of anti-vaccine scaremongering by Paul Gallagher on their front page. They report the story of girls who became ill after receiving the HPV vaccine, and strongly imply that the vaccine was the cause of the illnesses, flying in the face of massive amounts of scientific evidence to the contrary.

I could go on at length about how dreadful, irresponsible, and scientifically illiterate the article was, but I won’t, because Jen Gunter and jdc325 have already done a pretty good job of that. You should go and read their blogposts. Do it now.

Right, are you back? Let’s carry on then.

What I want to talk about today is the response I got from The Independent when I emailed the editor of the Independent on Sunday, Lisa Markwell, to suggest that they might want to publish a rebuttal to correct the dangerous misinformation in the original article. Ms Markwell was apparently too busy to reply to a humble reader, so the reply I received came from the deputy editor, Will Gore. Here it is below, with my annotations.

Dear Dr Jacobs

Thank you for contacting us about an article which appeared in last weekend’s Independent on Sunday.

Media coverage of vaccine programmes – including reports on concerns about real or perceived side-effects – is clearly something which must be carefully handled; and we are conscious of the potential pitfalls. Equally, it is important that individuals who feel their concerns have been ignored by health care professionals have an outlet to explain their position, provided it is done responsibly.

I’d love to know what they mean by “provided it is done responsibly”. I think a good start would be not to stoke anti-vaccine conspiracy theories with badly researched scaremongering. Obviously The Independent has a different definition of “responsibly”. I have no idea what that definition might be, though I suspect it includes something about ad revenue.

On this occasion, the personal story of Emily Ryalls – allied to the comparatively large number of ADR reports to the MHRA in regard to the HPV vaccine – prompted our attention. We made clear that no causal link has been established between the symptoms experienced by Miss Ryalls (and other teenagers) and the HPV vaccine. We also quoted the MHRA at length (which says the possibility of a link remains ‘under review’), as well as setting out the views of the NHS and Cancer Research UK.

Oh, seriously? You “made it clear that no causal link has been established”? Are we even talking about the same article here? The one I’m talking about has the headline “Thousands of teenage girls enduring debilitating illnesses after routine school cancer vaccination”. On what planet does that make it clear that the link was not causal?

I think what they mean by “made it clear that no causal link has been established” is that they were very careful with their wording not to explicitly claim a causal link, while nonetheless using all the rhetorical tricks at their disposal to make sure a causal link was strongly implied.

Ultimately, we were not seeking to argue that vaccines – HPV, or others for that matter – are unsafe.

No, you’re just trying to fool your readers into thinking they’re unsafe. So that’s all right then.

Equally, it is clear that for people like Emily Ryalls, the inexplicable onset of PoTS has raised questions which she and her family would like more fully examined.

And how does blaming it on something that is almost certainly not the real cause help?

Moreover, whatever the explanation for the occurrence of PoTS, it is notable that two years elapsed before its diagnosis. Miss Ryalls’ family argue that GPs may have failed to properly assess symptoms because they were irritated by the Ryalls mentioning the possibility of an HPV connection.

I don’t see how that proves a causal link with the HPV vaccine. And anyway, didn’t you just say that you were careful to avoid claiming a causal link?

Moreover, the numbers of ADR reports in respect of HPV do appear notably higher than for other vaccination programmes (even though, as the quote from the MHRA explained, the majority may indeed relate to ‘known risks’ of vaccination; and, as you argue, there may be other particular explanations).

Yes, there are indeed other explanations. What a shame you didn’t mention them in your story. Perhaps if you had done, your claim to be careful not to imply a causal link might look a bit more plausible. But I suppose you don’t like the facts to get in the way of a good story, do you?

The impact on the MMR programme of Andrew Wakefield’s flawed research (and media coverage of it) is always at the forefront of editors’ minds whenever concerns about vaccines are raised, either by individuals or by medical studies. But our piece on Sunday was not in the same bracket.

No, sorry, it is in exactly the same bracket. The media coverage of MMR vaccine was all about hyping up completely evidence-free scare stories about the risks of MMR vaccine. The present story is all about hyping up completely evidence-free scare stories about the risk of HPV vaccine. If you’d like to explain to me what makes those stories different, I’m all ears.

It was a legitimate item based around a personal story and I am confident that our readers are sophisticated enough to understand the wider context and implications.

Kind regards

Will Gore
Deputy Managing Editor

If Mr Gore seriously believes his readers are sophisticated enough to understand the wider context, then he clearly hasn’t read the readers’ comments on the article. It is totally obvious that a great many readers have inferred a causal relationship between the vaccine and subsequent illness from the article.

I replied to Mr Gore about that point, to which he replied that he was not sure the readers’ comments are representative.

Well, that’s true. They are probably not. But they don’t need to be.

There are no doubt some readers of the article who are dyed-in-the-wool anti-vaccinationists. They believed all vaccines are evil before reading the article, and they still believe all vaccines are evil. For those people, the article will have had no effect.

Many other readers will have enough scientific training (or just simple common sense) to realise that the article is nonsense. They will not infer a causal relationship between the vaccine and the illnesses. All they will infer is that The Independent is spectacularly incompetent at reporting science stories and that it would be really great if The Independent could afford to employ someone with a science GCSE to look through some of their science articles before publishing them. They will also not be harmed by the article.

But there is a third group of readers. Some people are not anti-vaccine conspiracy theorists, but nor do they have science training. They probably start reading the article with an open mind. After reading the article, they may decide that HPV vaccine is dangerous.

And what if some of those readers are teenage girls who are due for the vaccination? What if they decide not to get vaccinated? What if they subsequently get HPV infection, and later die of cervical cancer?

Sure, there probably aren’t very many people to whom that description applies. But how many is an acceptable number? Perhaps Gallagher, Markwell, and Gore would like to tell me how many deaths from cervical cancer would be a fair price to pay for writing the article?

It is not clear to me whether Gallagher, Markwell, and Gore are simply unaware of the harm that such an article can do, or if they are aware, and simply don’t care. Are they so naive as to think that their article doesn’t promote an anti-vaccinationist agenda, or do they think that clicks on their website and ad revenue are a more important cause than human life?

I really don’t know which of those possibilities I think is more likely, nor would I like to say which is worse.

Is smoking plunging children into poverty?

If we feel it necessary to characterise ourselves as being “pro” or “anti” certain things, I would unambiguously say that I am anti-smoking. Smoking is a vile habit. I don’t like being around people who are smoking. And as a medical statistician, I am very well aware of the immense harm that smoking does to the health of smokers and those unfortunate enough to be exposed to their smoke.

So it comes as a slight surprise to me that I find myself writing what might be seen as a pro-smoking blogpost for the second time in just a few weeks.

But this blogpost is not intended to be pro-smoking: it is merely anti the misuse of statistics by some people in the anti-smoking lobby. Just because you are campaigning against a bad thing does not give you a free pass to throw all notions of scientific rigour and social responsibility to the four winds.

An article appeared yesterday on the Daily Mail website with the headline:

“Smoking not only kills, it plunges children into POVERTY because parents ‘prioritise cigarettes over food'”

and a similar, though slightly less extreme, version appeared in the Independent:

“Smoking parents plunging nearly half a million children into poverty, says new research”

According to the Daily Mail, parents are failing to feed their children because they are spending money on cigarettes instead of food. The Independent is not quite so explicit in claiming that, but it’s certainly implied.

Regular readers of this blog will no doubt already have guessed that those articles are based on some research which may have been vaguely related to smoking and poverty, but which absolutely did not show that any children were going hungry because of their parents’ smoking habits. And they would be right.

The research behind these stories is this paper by Belvin et al. There are a number of problems with it, and particularly with the way their findings have been represented in the media.

The idea of children being “plunged into poverty” came from looking at the number of families with at least one smoker whose income was just above the poverty line. Poverty in this case is defined as a household income less than 60% of the median household income (adjusted for family size). If deducting a family’s estimated cigarette expenditure from their income took them below the poverty line, they were regarded as having been taken into poverty by smoking.
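In other words, the definition is a simple threshold test. Here is a sketch of it with invented numbers (the assumed median income and cigarette spend are mine, not the paper’s):

```python
# Sketch of the paper's "taken into poverty" definition, with invented
# numbers. Poverty line = 60% of median equivalised household income.

median_income = 26000.0              # assumed median household income (pounds/yr)
poverty_line = 0.6 * median_income   # 15,600 with the assumed median

def taken_into_poverty_by_smoking(income, annual_cigarette_spend):
    """True if the family is above the line before deducting cigarette
    spend but below it afterwards."""
    return income >= poverty_line and (income - annual_cigarette_spend) < poverty_line

print(taken_into_poverty_by_smoking(16500, 2000))  # just above the line: True
print(taken_into_poverty_by_smoking(30000, 2000))  # comfortably above: False
```

Note that the same test would flag £2,000 spent on anything else equally well, which is the nub of the problem with singling out smoking.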

Now, for a start, Belvin et al did not actually measure how much any family just above the poverty line spent on smoking. They made a whole bunch of estimates and extrapolations from surveys that were done for different purposes. So that’s one problem for a start.

Another problem is that absolutely nowhere did Belvin et al look at expenditure on food. There is no evidence whatsoever from their study that any family left their children hungry, and certainly not that smoking was the cause. Claiming that parents were prioritising smoking over food is not even remotely supported by the study, as it’s just not something that was measured at all.

Perhaps the most pernicious problem is the assumption that poverty was specifically caused by smoking. I expect many families with an income above 60% of the median spend some of their money on something other than feeding their children. Perhaps some spend their money on beer. Perhaps others spend money on mobile phone contracts. Or maybe on going to the cinema. Or economics textbooks. Or pretty much anything else you can think of that is not strictly essential. Any of those things could equally be regarded as “plunging children into poverty” if deducting it from income left the family below the poverty line.

So why single out smoking?

I have a big problem with this. I said earlier that I thought smoking was a vile habit. But there is a big difference between believing smoking is a vile habit and believing smokers are vile people. They are not. They are human beings. To try to pin the blame on them for their children’s poverty (especially in the absence of any evidence that their children are actually going hungry) is troubling. I am not comfortable with demonising minority groups. It wouldn’t be OK if the group in question were, say, Muslims, and it’s not OK when the group is smokers.

There are many and complex causes of poverty. But blaming the poor is really not the response of a civilised society.

The way this story was reported in the Daily Mail is, not surprisingly, atrocious. But it’s not entirely their fault. The research was filtered through Nottingham University’s press office before it got to the mainstream media, and I’m afraid to say that Nottingham University are just as guilty here. Their press release states

“The reserch [sic] suggests that parents are likely to forgo basic household and food necessities in order to fund their smoking addiction.”

No, the research absolutely does not suggest that, because the researchers didn’t measure it. In fact I think Nottingham University are far more guilty than the Daily Mail. An academic institution really ought to know better than to misrepresent the findings of their research in this socially irresponsible way.

Chocolate, clueless reporting, and ethics

I have just seen a report of a little hoax pulled on the media by John Bohannon. What he did was to run a small and deliberately badly designed clinical trial, the results of which showed that eating chocolate helps you lose weight.

The trial showed no such thing, of course, as Bohannon points out. It just used bad design and blatant statistical trickery to come up with the result, which should not have fooled anyone who read the paper even with half an eye open.
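The main trick, as Bohannon described it, was to measure many outcomes (reportedly 18) in tiny groups and report whichever came out “significant”. Even with pure noise, the chance of at least one false positive at the conventional 0.05 level grows fast with the number of outcomes measured:

```python
# Probability of at least one false positive among k independent
# outcomes when the treatment does nothing, at significance level 0.05.
alpha = 0.05
for k in (1, 5, 18):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(k, "outcomes:", round(p_at_least_one, 2))
```

With 18 outcomes, a do-nothing intervention has roughly a 60% chance of producing at least one “significant” result by luck alone (assuming the outcomes are independent, which real measurements are not exactly, but the point stands).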

Bohannon then sent press releases about the study to various media outlets, many of which printed the story completely uncritically. Here’s an example from the Daily Express.

This may be a lovely little demonstration of how lazy and clueless the media are, but I have a nasty feeling it’s actually highly problematic.

The problem is that neither Bohannon’s description of the hoax nor the paper publishing the results of the study make any mention of ethical review. Let’s remember that although the science was deliberately flawed, there was still a real clinical trial here with real human participants.

What were those participants told? Were they deceived about the true nature of the study? According to Bohannon,

“They used Facebook to recruit subjects around Frankfurt, offering 150 Euros to anyone willing to go on a diet for 3 weeks. They made it clear that this was part of a documentary film about dieting, but they didn’t give more detail.”

That certainly sounds to me like deception. It is an absolutely essential feature of clinical research that all research must be approved by an independent ethics committee. This is all the more important if participants are being deceived, which is always a tricky ethical issue. There is no rule that gives an exception to research done as a hoax.

The research was apparently done under the supervision of a German doctor, Gunter Frank. While I can’t claim to be an expert in professional requirements of German doctors, I would be astonished if running a clinical trial without ethical approval was not a serious disciplinary matter.

And yet there is no mention anywhere of ethical approval for this study. I really, really hope that’s just an oversight. Recruiting human participants to a clinical trial without proper ethical approval is absolutely not acceptable.

Update 29 May:

According to the normally reliable Retraction Watch, my fears about this study were justified. They report that Bohannon has confirmed to them that the study did not have ethical approval.

Also, the paper has mysteriously disappeared from the journal’s website, so I’ve replaced the link to the paper with a link to a copy of it preserved thanks to Google’s web cache and Freezepage.

Are strokes really rising in young people?

I woke up to the news this morning that there has been an alarming increase in the number of strokes in people aged 40-54.

My first thought was “this has been sponsored by a stroke charity, so they probably have an interest in making the figures seem alarming”. So I wondered how robust the research was that led to this conclusion.

The article above did not link to a published paper describing the research. So I looked on the Stroke Association’s website. There, I found a press release. This press release also didn’t link to any published paper, which makes me think that there is no published paper. It’s hard to believe a press release describing a new piece of research would fail to tell you if it had been published in a respectable journal.

The press release describes data on hospital admissions provided by the NHS, which shows that the number of men aged 40 to 54 admitted to hospital with strokes increased from 4260 in the year 2000 to 6221 in 2014, while the equivalent figures for women increased from 3529 to 4604.
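For what it’s worth, those counts correspond to a rise of about 46% for men and about 30% for women over the 14 years:

```python
# Relative increases implied by the admission counts in the press release.
men_2000, men_2014 = 4260, 6221
women_2000, women_2014 = 3529, 4604

men_rise = 100 * (men_2014 / men_2000 - 1)        # ~46%
women_rise = 100 * (women_2014 / women_2000 - 1)  # ~30%
print(round(men_rise), round(women_rise))
```

Note these are raw admission counts rather than rates, so changes in the size of that age group over 14 years could account for part of the rise before we even get to the explanations below.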

Well, yes, those figures are certainly substantial increases. But there could be various different reasons for them, some worrying, others reassuring.

It is possible, as the press release certainly wants us to believe, that the main reason for the increase is that strokes are becoming more common. However, it is also possible that recognition of stroke has improved, or that stroke patients are more likely now to get the hospital treatment they need than in the past. Both of those latter explanations would be good things.

So how do the Stroke Association distinguish among those possibilities?

Well, they don’t. The press release says “It is thought that the rise is due to increasing sedentary and unhealthy lifestyles, and changes in hospital admission practice.”

“It is thought that”? Seriously? Who thinks that? And why do they think it?

It’s nice that the Stroke Association acknowledge the possibility that part of the reason might be changes in hospital admission practice, but given that the title of the press release is “Stroke rates soar among men and women in their 40s and 50s” (note: not “Rates of hospital admission due to stroke soar”), there can be no doubt which message the Stroke Association want to emphasise.

I’m sorry, but they’re going to need better evidence than “it is thought that” to convince me they have teased out the relative contributions of different factors to the rise in hospital admissions.

Obesity and dementia

It’s always difficult to draw firm conclusions from epidemiological research. No matter how large the sample size and how carefully conducted the study, it’s seldom possible to be sure that the result you have found is what you were looking for, and not some kind of bias or confounding.

So when I heard in the news yesterday that overweight and obese people were at reduced risk of dementia, my first thought was “I wonder if that’s really true?”

Well, the paper is here. Sadly behind a paywall (seriously guys? You know it’s 2015, right?), though luckily the researchers have made a copy of the paper available as a Word document here.

In many ways, it’s a pretty good study. Certainly no complaints about the sample size: they analysed data on nearly 2 million people. With a median follow-up time of over 9 years, their analysis was based on a long enough time period to be meaningful. They had also thought about the obvious problem with looking at obesity and dementia, namely that obese people may be less likely to get dementia not because obesity protects them against dementia, but just because they are more likely to die of an obesity-related disease before they are old enough to develop dementia.

The authors did a sensitivity analysis in which they assumed that patients who died during the observation period would, had they lived, have had twice the risk of developing dementia of patients who survived to the end of follow-up. Although that weakened the negative association between overweight and dementia, it was still present.

There are, of course, other ways to do this. Perhaps it might have been appropriate to use a competing risks survival model instead of the Poisson model they used for their statistical analysis, and if you were going to be picky, you could say their choice of statistical analysis was a bit fishy (sorry, couldn’t resist).

But I don’t think the method of analysis is the big problem here.

For a start, although some of the most obvious confounders (age, sex, smoking, drinking, relevant medication use, diabetes, and previous myocardial infarction) were adjusted for in the analysis, there was no adjustment for socioeconomic status or education level, which is a big omission.

But more importantly, I think the major limitation of these results comes from what is known as the healthy survivor effect.

Let me explain.

The people followed up in the study were all aged over 40 at the start. But there was no upper age limit. Some people were aged over 90 at the start. And not surprisingly, most of the cases of dementia occurred in older people. Only 18 cases of dementia occurred in those aged 40-44, whereas over 12,000 cases were observed in those aged 80-84. So it’s really the older age groups who are dominating the analysis. Over half the cases of dementia occurred in people aged > 80, and over 90% occurred in people aged > 70.

Now, let’s think about those 80+ year olds for a minute.

There is reasonably good evidence that obese people die younger, on average, than those of normal weight. So the obese people who were aged > 80 at the start of the study are probably not normal obese people. They are probably healthier than average obese people. Many obese people who are less healthy than average would be dead before they are 80, so would never have the chance to be included in that age group of the study.

So in other words, the old obese people in the study are not typical obese people: they are unusually healthy obese people.

That may be because they have good genes or it may be because something about their lifestyle is keeping them healthy, but one way or another, they have managed to live a long life despite their obesity. This is an example of the healthy survivor effect.

There will also be a healthy survivor effect at play in the people of normal weight at the upper end of the age range, but that will probably be less marked, as they haven’t had to survive despite obesity.

I think it is therefore possible that this healthy survivor effect may have skewed the results. The people with obesity may have been at less risk of dementia not because their obesity protected them, but because they were a biased subset of unusually healthy obese people.
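The healthy survivor effect is easy to demonstrate in a toy simulation. Everything here is invented for illustration: obesity raises mortality before 80 but has, by construction, no effect at all on dementia, and dementia risk depends only on an unobserved “frailty”. Yet the obese survivors show a lower dementia rate:

```python
# Toy simulation of the healthy survivor effect. All parameters are
# invented; obesity has NO effect on dementia in this model.
import random

random.seed(1)

def dementia_rate_in_80plus(obese, n=100_000):
    """Dementia rate among those who survive into the 80+ group."""
    survivors = dementia = 0
    for _ in range(n):
        frailty = random.random()  # unobserved health, 0 = robust, 1 = frail
        # Mortality before 80 rises with frailty, and more steeply for
        # the obese -- so obese survivors are selected to be robust.
        p_die_before_80 = frailty * (0.9 if obese else 0.3)
        if random.random() < p_die_before_80:
            continue               # never enters the 80+ analysis
        survivors += 1
        # Dementia risk depends on frailty only, not on obesity.
        if random.random() < 0.4 * frailty:
            dementia += 1
    return dementia / survivors

print("normal weight, 80+:", round(dementia_rate_in_80plus(False), 3))
print("obese, 80+:        ", round(dementia_rate_in_80plus(True), 3))
```

Obesity looks “protective” in the 80+ group purely because the frailer obese people died before reaching it, which is exactly the kind of bias I am worried about here.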

This does not, of course, mean that obesity doesn’t protect against dementia. Maybe it does. One thing that would have been interesting would be to see the results broken down by the type of dementia. It is hard to believe that obesity would protect against vascular dementia, when on the whole it is a risk factor for other vascular diseases, but the hypothesis that it could protect against Alzheimer’s disease doesn’t seem so implausible.

What it does mean is that we have to be really careful when interpreting the results of epidemiological studies such as this one. It is always extremely hard to know to what extent the various forms of bias that can creep into epidemiological studies have influenced the results.