Made-up statistics on sugar tax

I woke up this morning to the sound of Radio 4 telling me that Cancer Research UK had done an analysis showing that a 20% tax on sugary drinks could reduce the number of obese people in the UK by 3.7 million by 2025. (That could be the start of the world’s worst ever blues song, but it isn’t.)

My first thought was that this was rather surprising, as I wasn’t aware of any evidence on how sugar taxes impact on obesity. So I went hunting for the report with interest.

Bizarrely, Cancer Research UK didn’t link to the full report from their press release (once you’ve read the rest of this post, you may conclude that perhaps they were too embarrassed to let anyone see it), but I tracked it down here. Well, I’m not sure even that is the full report. It says it’s a “technical summary”, but the word “summary” makes me wonder if it is still not the full report. But that’s all that seems to be made publicly available.

There are a number of problems with this report. Christopher Snowdon has blogged about some of them here, but I want to focus on the extent to which the model is based on untested assumptions.

It turns out that the conclusions were indeed not based on any empirical data about how a sugar tax would impact on obesity, but on a modelling study. This study made assumptions about several things, principally the following:

  1. The price elasticity of demand for sugary drinks (ie the extent to which an increase in price reduces consumption)
  2. The extent to which a reduction in sugary drink consumption would reduce total calorie intake
  3. The effect of total calorie intake on body mass

The authors get 0/10 for transparent reporting for the first of those, as they don’t actually say what price elasticity they used. That’s pretty basic stuff, and not to report it is somewhat akin to reporting the results of a clinical trial of a new drug and not saying what dose of the drug you used.

However, the report does give a reference for their price elasticity data, namely this paper. I must say I don’t find the methods of that paper easy to follow. It’s not at all clear to me whether the price elasticities they calculated were actually based on empirical data or themselves the results of a modelling exercise. But the data that are used in that paper come from the period 2008 to 2010, when the UK was in the depths of recession, and when it might be hypothesised that price elasticities were greater than in more economically buoyant times. They don’t give a single figure for price elasticity, but a range of 0.8 to 0.9. In other words, a 20% increase in the price of sugary drinks would be expected to lead to a 16-18% decrease in the quantity that consumers buy. At least in the depths of the worst recession since the 1930s.

That figure for price elasticity is a crucial input to the model, and if it is wrong, then the answers of the model will be wrong.

The next input is the extent to which a reduction in sugary drink consumption reduces total calorie intake. Here, an assumption is made that total calorie intake is reduced by 60% of the calories not consumed in sugary drinks. Or in other words, if you forgo the calories of a sugary drink, you make up only 40% of them from elsewhere.

Where does that 60% figure come from? Well, they give a reference to this paper. And how did that paper arrive at the 60% figure? Well, they in turn give a reference to this paper. And where did that get it from? As far as I can tell, it didn’t, though I note it reports the results of a clinical study in people trying to lose weight by dieting. Even if that 60% figure is based on actual data from that study, rather than just plucked out of thin air, I very much doubt that data on calorie substitution taken from people trying to lose weight would be applicable to the general population.

What about the third assumption, the weight loss effects of reduced calorie intake? We are told that reducing energy intake by 100 kJ per day results in 1 kg body weight loss. The citation given for that information is this study, which is another modelling study. Are none of the assumptions in this study based on actual empirical data?
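To see how these three assumptions chain together, here is a rough back-of-the-envelope sketch in Python. The baseline figure for daily energy from sugary drinks is my own illustrative assumption and does not come from the report; the other numbers are the ones discussed above.

```python
# Rough sketch chaining the report's three assumptions. The baseline figure
# for daily energy from sugary drinks is my own illustrative assumption,
# not a number taken from the report.

price_rise = 0.20            # the proposed 20% tax, assumed fully passed on to prices
elasticity = 0.85            # midpoint of the 0.8-0.9 range cited
substitution = 0.60          # fraction of forgone drink calories not replaced elsewhere
kj_per_kg = 100.0            # report's figure: 100 kJ/day less intake -> 1 kg lost

baseline_kj_per_day = 600.0  # assumed daily energy from sugary drinks (roughly one can of cola)

consumption_drop = elasticity * price_rise          # 17% fewer sugary drinks bought
kj_cut = baseline_kj_per_day * consumption_drop     # ~102 kJ/day less from drinks
net_kj_cut = kj_cut * substitution                  # ~61 kJ/day after substitution
weight_loss_kg = net_kj_cut / kj_per_kg             # ~0.6 kg predicted weight loss

print(f"Predicted eventual weight loss: {weight_loss_kg:.2f} kg per person")
```

The point is not the number that comes out at the end; it is that every step of the chain depends on an input that may well be wrong.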

A really basic part of making predictions by mathematical modelling is to use sensitivity analyses. The model is based on various assumptions, and sensitivity analyses answer the question of what happens if those assumptions are wrong. Typically, the inputs to the model are varied over plausible ranges, and then you can see how the results are affected.

Unfortunately, no sensitivity analysis was done. This, folks, is real amateur hour stuff. The reason for the lack of sensitivity analysis is given in the report as follows:

“it was beyond the scope of this project to include an extensive sensitivity analysis. The microsimulation model is complex involving many thousands of calculations; therefore sensitivity analysis would require many thousands of consecutive runs using super computers to undertake this within a realistic time scale.”

That has to be one of the lamest excuses for shoddy methods I’ve seen in a long time. This is 2016. You don’t have to run the analysis on your ZX Spectrum.
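For what it’s worth, a basic sensitivity analysis on a toy version of the chain of assumptions above takes a fraction of a second on an ordinary laptop. The sketch below is obviously not the report’s microsimulation model, and the ranges are my own illustrative choices, but it shows what “vary the inputs over plausible ranges and see what happens” looks like in practice:

```python
import itertools

# Toy sensitivity analysis: sweep each assumption over a plausible range and
# recompute the predicted weight change. This is NOT the report's
# microsimulation model, just the simple chain of assumptions sketched above,
# with ranges chosen purely for illustration.

def predicted_weight_loss_kg(elasticity, substitution, kj_per_kg,
                             price_rise=0.20, baseline_kj_per_day=600.0):
    """Predicted eventual weight loss per person under the chained assumptions."""
    net_kj_cut = baseline_kj_per_day * elasticity * price_rise * substitution
    return net_kj_cut / kj_per_kg

elasticities = [0.4, 0.6, 0.8, 0.9, 1.1]   # wider than the cited 0.8-0.9 range
substitutions = [0.3, 0.45, 0.6, 0.75]     # how much of the calorie cut is not replaced
kj_per_kg_values = [80.0, 100.0, 120.0]    # uncertainty in the weight-loss figure

for e, s, k in itertools.product(elasticities, substitutions, kj_per_kg_values):
    print(f"elasticity={e:.2f} substitution={s:.2f} kJ/kg={k:.0f} "
          f"-> {predicted_weight_loss_kg(e, s, k):.2f} kg")
```

Even if each run of the real model takes minutes rather than microseconds, rerunning it a few dozen times over plausible ranges is hardly supercomputer territory.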

So this result is based on a bunch of heroic assumptions which have little basis in reality, and the sensitivity of the model to those assumptions was not tested. Forgive me if I’m not convinced.

 

The dishonesty of the All Trials campaign

The All Trials campaign is very fond of quoting the statistic that only half of all clinical trials have ever been published. That statistic is not based on good evidence, as I have explained at some length previously.

Now, if they are just sending the odd tweet or writing the odd blogpost with dodgy statistics, that is perhaps not the most important thing in the whole world, as the wonderful XKCD pointed out some time ago:

[XKCD cartoon: “Wrong on the internet”]

But when they are using dodgy statistics for fundraising purposes, that is an entirely different matter. On their USA fundraising page, they prominently quote the evidence-free statistic about half of clinical trials not having been published.

Giving people misleading information when you are trying to get money from them is a serious matter. I am not a lawyer, but my understanding is that the definition of fraud is not dissimilar to that.

The All Trials fundraising page allows comments to be posted, so I posted a comment questioning their “half of all clinical trials unpublished” statistic. Here is a screenshot of the comments section of the page after I posted my comment, in case you want to see what I wrote:

[Screenshot from 2016-02-02 18:16:32]

Now, if the All Trials campaign genuinely believed their “half of all trials unpublished” statistic to be correct, they could have engaged with my comment. They could have explained why they thought they were right and I was wrong. Perhaps they thought there was an important piece of evidence that I had overlooked. Perhaps they thought there was a logical flaw in my arguments.

But no, they didn’t engage. They just deleted the comment within hours of my posting it. That is the stuff of homeopaths and anti-vaccinationists. It is not the way that those committed to transparency and honesty in science behave.

I am struggling to think of any reasonable explanation for this behaviour other than that they know their “half of all clinical trials unpublished” statistic to be on shaky ground and simply do not wish anyone to draw attention to it. That, in my book, is dishonest.

This is such a shame. The stated aim of the All Trials campaign is entirely honourable. They say that their aim is for all clinical trials to be published. This is undoubtedly important. All reasonable people would agree that to do a clinical trial and keep the results secret is unethical. I do not see why they need to spoil the campaign by using exactly the sort of intellectual dishonesty themselves that they are campaigning against.