All posts by Adam

Psychology journal bans P values

I was rather surprised to see recently (OK, it was a couple of months ago, but I do have a day job to do as well as writing this blog) that the journal Basic and Applied Social Psychology has banned P values.

That’s quite a bold move. There are of course many problems with P values, about which David Colquhoun has written some sensible thoughts. Those problems seem to be particularly acute in the field of psychology, which suffers from something of a problem when it comes to replicating results. It’s undoubtedly true that many published papers with significant P values haven’t really discovered what they claimed to have discovered, but have just made type I errors, or in other words, have obtained significant results just by chance, rather than because what they claim to have discovered is actually true.

It’s worth reminding ourselves what the conventional test of statistical significance actually means. If we say we have a significant result with P < 0.05, that means there would have been less than a 1 in 20 chance of seeing a result at least as extreme as the one we observed if in fact there were no real effect and we were just looking at random noise. A 1 in 20 chance is not at all rare, particularly when you consider the huge number of papers that are published every day. Many of them are going to contain type I errors.
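If you want to see what that means in practice, here’s a quick simulation in Python (just an illustration I’ve knocked up, nothing to do with any particular paper): run lots of studies in which there is truly nothing to find, and about 5% of them come out “significant” anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 10_000            # hypothetical studies in which the null hypothesis is true
false_positives = 0

for _ in range(n_studies):
    # Two groups drawn from exactly the same distribution: there is no real effect
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# With purely random data, roughly 1 in 20 tests is "significant" by chance alone
print(f"{false_positives} of {n_studies} null studies gave P < 0.05 "
      f"({100 * false_positives / n_studies:.1f}%)")
```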

Clearly, something must be done.

However, call me a cynic if you like, but I’m not sure how banning P values (and confidence intervals as well, in case you thought banning P values alone wasn’t radical enough) is going to help. Perhaps if all articles in Basic and Applied Social Psychology were in future to use robust Bayesian analyses, that would be an improvement. But I hardly think that’s likely to happen. What is more likely is that researchers will claim to have discovered effects even when they are not conventionally statistically significant, which is surely even worse than where we were before.

I suspect one of the problems with psychology research is that much research, particularly negative research, goes unpublished. It’s probably a lot easier to get a paper published showing that you have just demonstrated some fascinating psychological effect than if you have just demonstrated that the effect you had hypothesised doesn’t in fact exist.

This is a problem we know well in my world of clinical trials. There is abundant evidence that positive clinical trials are more likely to be published than negative ones. This is a problem that the clinical research community has become very much aware of, and has been working quite hard to solve. I wouldn’t say it is completely solved yet, but things are a lot better now than they were a decade or two ago.

One relevant factor is the move to prospective trial registration.  It seems that prospectively registering trials is helping to solve the problem of publication bias. While clinical research doesn’t yet have a 100% publication record (though some recent studies do show disclosure rates of > 80%), I suspect clinical research is far ahead of the social sciences.

Perhaps a better solution to the replication crisis in psychology would be a system for prospectively registering all psychology experiments and a commitment by researchers and journals to publish all results, positive or negative. That wouldn’t necessarily mean more results get replicated, of course, but it would mean that we’d be more likely to know about it when results are not replicated.

I’m not pretending this would be easy. Clinical trials are often multi-million dollar affairs, and the extra bureaucracy involved in trial registration is trivial in comparison with the overall effort. Many psychology experiments are done on a much smaller scale, and the extra bureaucracy would probably add proportionately a lot more to the costs. But personally, I think we’d all be better off with fewer experiments done and more of them being published.

I don’t think the move by Basic and Applied Social Psychology is likely to improve the quality of reporting in that journal. But if it gets us all talking about the limitations of P values, then maybe that’s not such a bad thing.


Vaping among teenagers

Vaping, or use of e-cigarettes, has the potential to be a huge advance in public health. It provides an alternative to smoking that allows addicted smokers to get their nicotine fix without exposing them to all the harmful chemicals in cigarette smoke. This is a development that should be welcomed with open arms by everyone in the public health community, though oddly, it doesn’t seem to be. Many in the public health community are very much against vaping. The reasons for that might make an interesting blogpost for another day.

But today I want to talk about a piece of research into vaping among teenagers that’s been all over the news.

Despite the obvious upside of vaping, there are potential downsides. The concern is that it may be seen as a “gateway” to smoking. There is a theoretical risk that teenagers may be attracted to vaping and subsequently take up smoking. Obviously that would be a thoroughly bad thing for public health.

Clearly, it is an area that is important to research so that we can better understand what the downside might be of vaping.

So I was interested to see that a study has been published today that looks specifically at e-cigarette use among teenagers. Can it help to shed light on these important questions?

Looking at some of the stories in the popular media, you might think it could. We are told that e-cigs are the “alcopops of the nicotine world”, that there are “high rates of usage among secondary school pupils” and that e-cigs are “encouraging people to take up smoking”.

Those claims are, to use a technical term, bollocks.

Let’s look at what the researchers actually did. They used cross sectional questionnaire data in which a single question was asked about vaping: “have you ever tried or purchased e-cigarettes?”

The first thing to note is that the statistics are about the number of teenagers who have ever tried vaping. So a teenager is included in the statistics even if they tried it just once. Perhaps they were at a party and had a single puff on a mate’s e-cig. The study gives us absolutely no information on the proportion of teenagers who vaped regularly, so the claim of “high rates of usage” just isn’t backed up by any evidence. Overall, about 1 in 5 of the teenagers answered yes to the question. Without knowing how many of those became regular users, it is very hard to draw any conclusions from the study.

But it gets worse.

The claim that vaping is encouraging people to take up smoking isn’t even remotely supported by the data. To support that claim, you would need to know what proportion of teenagers who hadn’t previously smoked tried vaping and subsequently went on to start smoking. Given that the present study is a cross-sectional one (ie participants were studied only at a single point in time), it provides absolutely no information on that.

Even if you did know that, it wouldn’t tell you that vaping was necessarily a gateway to smoking. Maybe teenagers who start vaping and subsequently start smoking would have smoked anyway. To untangle that, you’d ideally need a randomised trial of areas in which vaping is available and areas in which it isn’t, though I can’t see that ever being done. The next best thing would be to look at changes in the prevalence of smoking among teenagers before and after vaping became available. If it increased after vaping became available, that might give you some reason to think vaping is acting as a gateway to smoking. But the current study provides absolutely no information to help with this question.

I’ve filed this post under “Dodgy reporting”, and of course the journalists who wrote about the study in such uncritical terms really should have known better, but actually I think the real fault here lies with the authors of the paper. In their conclusions, they write “Findings suggest that e-cigarettes are being accessed by teenagers more for experimentation than smoking cessation.”

No, they really don’t show that at all. Of those teenagers who had tried e-cigs, only 15.8% were never-smokers. And bear in mind that most of the overall sample (61.2%) were never-smokers. That suggests that e-cigs are far more likely to be used by current or former smokers than by non-smokers. In fact, while only 4.9% of never-smokers had tried e-cigs (remember, that may mean only trying them once), 50.7% of ex-smokers had tried them. So a more reasonable conclusion might be that vaping is helping smokers to quit, though in fact I don’t think it’s possible even to conclude that much from a cross-sectional study that didn’t measure whether vaping was a one-off puff or a habit.
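As a quick sanity check on those percentages, here’s the back-of-the-envelope arithmetic in Python (the sample size of 1000 is just a round number I’ve picked for illustration; the percentages are the paper’s):

```python
# Percentages from the paper: 61.2% of the whole sample were never-smokers,
# 4.9% of never-smokers had ever tried an e-cig, and 15.8% of ever-triers
# were never-smokers. The sample size of 1000 is a made-up round number.
sample_size = 1000

never_smokers = 0.612 * sample_size              # ~612 never-smokers
never_smoker_triers = 0.049 * never_smokers      # ~30 never-smokers who ever tried an e-cig

# If those ~30 make up 15.8% of all ever-triers, the total number of triers is:
total_triers = never_smoker_triers / 0.158       # ~190, i.e. roughly 1 in 5 of the sample

print(f"{never_smoker_triers:.0f} never-smoker triers, "
      f"{total_triers:.0f} triers in total "
      f"({100 * total_triers / sample_size:.0f}% of the sample)")
```

The numbers hang together: the “1 in 5 have tried an e-cig” figure is dominated by current and former smokers, not by never-smokers.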

While there are some important questions to be asked about how vaping is used by teenagers, I’m afraid this new study does absolutely nothing to help answer them.

 Update 1 April:

It seems I’m not the only person in the blogosphere to pick up some of the problems with the way this study has been spun. Here’s a good blogpost from Clive Bates, which as well as making several important points in its own right also contains links to some other interesting comment on the study.


Tobacco vs teddy bears

Now, before we go any further, I’d like to make one thing really clear. Smoking is bad for you. It’s really bad for you. Anything that results in fewer people smoking is likely to be a thoroughly good thing for public health.

But sadly, I have to say there are times when I think the anti-tobacco movement is losing the plot. One such time came this week when I saw the headline “Industry makes $7,000 for each tobacco death”. That has to be one of the daftest statistics I’ve seen for a long time, and I speak as someone who takes a keen interest in daft statistics.

I’m not saying the number is wrong. I haven’t checked it in detail, so it could be, but that’s not the point, and in any case, the numbers look more or less plausible.

The calculation goes like this. Total tobacco industry profits in 2013 (the most recent year for which figures are available) were $44 billion. In the same year, 6.3 million people died from smoking related diseases. Divide the first number by the second, and you end up with $7000 profit per death.

I think we’re supposed to be shocked by that. Perhaps the message is that the tobacco industry is profiting from deaths. In fact, given that we are told this figure has increased from $6000 a couple of years ago, as if that were a bad thing, I guess that is what we’re supposed to think.

If you haven’t yet figured out how absurd that is, let’s compare it with the teddy bear industry.

Now, some of the figures that follow come from sources that might not score 10/10 for reliability, and these calculations might look like they’ve been made up on the back of a fag packet.  But please bear with me, because all that we really require for today’s purposes is that these numbers be at least approximately correct to within a couple of orders of magnitude, and I think they probably are.

Let’s start with the number of teddy bear related deaths each year. I haven’t been able to find reliable global figures for that, but according to this website, there are 22 fatal incidents involving teddy bears and other toys in the US each year. Let’s assume that teddy bears account for half of those. That gives us 11 teddy bear related deaths per year in the US.

Since we’re looking at the US, how much profit does the US teddy bear industry make each year? I’ve struggled to find good figures for that, but I think we can get a rough idea by looking at the profits of the Vermont Teddy Bear Company, which is apparently one of the largest players in the US teddy bear market. I don’t know what their market share is. Let’s just take a wild guess that it’s about 1/3 of the total teddy bear market.

The company is now owned by private equity and so isn’t required to report its profits, but I found some figures from the last few years (2001 to 2005) before it was bought by private equity, and its average annual profit for that period was about $1.7 million. So if that represents 1/3 of the total teddy bear market, and if its competitors are similarly profitable (wild assumptions I know, but we’re only going for wild approximations here), then the total annual profits of the US teddy bear market are about $5 million.

So, if we now do the same calculation as for the tobacco industry, we see that the teddy bear industry makes a profit of about $450,000 per death ($5 million divided by 11 deaths).
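For what it’s worth, here’s the whole calculation in a few lines of Python, using the rough figures above:

```python
# Figures quoted above; the point is the perversity of the ratio, not its precision.
tobacco_profit = 44e9        # total tobacco industry profits, 2013 (USD)
tobacco_deaths = 6.3e6       # smoking-related deaths in the same year

teddy_profit = 5e6           # rough estimate of US teddy bear industry profits (USD)
teddy_deaths = 11            # rough estimate of US teddy-bear-related deaths per year

print(f"Tobacco:     ${tobacco_profit / tobacco_deaths:,.0f} profit per death")
print(f"Teddy bears: ${teddy_profit / teddy_deaths:,.0f} profit per death")
# Because you divide by the number of deaths, the deadlier product looks *better*
# on this measure, which is exactly the wrong way round.
```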

So do we conclude that the teddy bear industry is far more evil than the tobacco industry?

No. What we conclude is that using “profits per death” as a measure of the social harm of an industry is an incredibly daft use of statistics. You are dividing by the number of deaths, so the more people you kill, the smaller will be your profits per death.

There are many statistics you could choose to show the harms of the tobacco industry. That it kills about half its users is a good place to start.  That chronic obstructive pulmonary disease, a disease that is massively associated with smoking, is the world’s third leading cause of death, also makes a pretty powerful point. Or one of my personal favourite statistics about smoking, that a 35-year-old smoker is twice as likely to die before age 70 as a non-smoker of the same age.

But let’s not try to show how bad smoking is by using a measure which increases the fewer people your product kills, OK?


How to spot dishonest nutribollocks

I saw a post on Facebook earlier today from GDZ Supplements, a manufacturer of nutribollocks products aimed at gullible sports people.

The post claimed that “Scientific studies suggest that substances in milk thistle protect the liver from toxins.” This was as part of their sales spiel for their “Milk Thistle Liver Cleanse”. No doubt we are supposed to believe that taking the product makes your liver healthier.

Well, if there really are scientific studies, it should be possible to cite them. So I commented on their Facebook post to ask them. They first replied to say that they would email me information if I shared my email address with them, and then when I asked why they couldn’t simply post the links on their Facebook page, they deleted my question and blocked me from their Facebook page.

[Screenshot of the Facebook exchange, 21 February 2015]

This, folks, is not the action of someone selling things honestly. If there were really scientific studies that supported the use of their particular brand of nutribollocks, it would have been perfectly easy to simply post the citation on their Facebook page.

But as it is, GDZ Supplements clearly don’t want anyone asking about the alleged scientific studies. It is hard to think of any explanation for that other than dishonesty on GDZ Supplements’ part.

What my hip tells me about the Saatchi bill

I have a hospital appointment tomorrow, at which I shall have a non-evidence-based treatment.

This is something I find somewhat troubling. I’m a medical statistician: I should know about evidence for the efficacy of medical interventions. And yet even I find myself ignoring the lack of good evidence when it comes to my own health.

I have had pain in my hip for the last few months. It’s been diagnosed by one doctor as trochanteric bursitis and by another as gluteus medius tendinopathy. Either way, something in my hip is inflamed, and is taking longer than it should to settle down.

So tomorrow, I’m having a steroid injection. This seems to be the consensus among those treating me. My physiotherapist was very keen that I should have it. My GP thought it would be a good idea. The consultant sports physician I saw last week thought it was the obvious next step.

And yet there is no good evidence that steroid injections work. I found a couple of open label randomised trials which showed reasonably good short-term effects for steroid injections, albeit little evidence of benefit in the long term. Here’s one of them. The results look impressive on a cursory glance, but something that really sticks out at me is that the trials weren’t blinded. Pain is subjective, and I fear the results are entirely compatible with a placebo effect. Perhaps my literature searching skills are going the same way as my hip, but I really couldn’t find any double-blind trials.

So in other words, I have no confidence whatsoever that a steroid injection is effective for inflammation in the hip.

So why am I doing this? To be honest, I’m really not sure. I’m bored of the pain, and even more bored of not being able to go running, and I’m hoping something will help. I guess I like to think that the health professionals treating me know what they’re doing, though I really don’t see how they can know, given the lack of good evidence from double blind trials.

What this little episode has taught me is how powerful the desire is to have some sort of treatment when you’re ill. I have some pain in my hip, which is pretty insignificant in the grand scheme of things, and yet even I’m getting a treatment which I have no particular reason to think is effective. Just imagine how much more powerful that desire must be if you’re really ill, for example with cancer. I have no reason to doubt that the health professionals treating me are highly competent and well qualified professionals who have my best interests at heart. But it has made me think how easy it must be to follow advice from whichever doctor is treating you, even if that doctor might be less scrupulous.

This has made me even more sure than ever that the Saatchi bill is a really bad thing. If a medical statistician who thinks quite carefully about these things is prepared to undergo a non-evidence-based treatment for what is really quite a trivial condition, just think how much the average person with a serious disease is going to be at the mercy of anyone treating them. The last thing we want to do is give a free pass for quacks to push completely cranky treatments at anyone who will have them.

And that’s exactly what the Saatchi bill will facilitate.

Ovarian cancer and HRT

Yesterday’s big health story in the news was the finding that HRT ‘increases ovarian cancer risk’. The scare quotes there, of course, tell us that that’s probably not really true.

So let’s look at the study and see what it really tells us. The BBC can be awarded journalism points for linking to the actual study in the above article, so it was easy enough to find the relevant paper in the Lancet.

This was not new data: rather, it was a meta-analysis of existing studies. Quite a lot of existing studies, as it turns out. The authors found 52 epidemiological studies investigating the association between HRT use and ovarian cancer. This is quite impressive: despite ovarian cancer being a thankfully rare disease, the analysis included over 12,000 women who had developed it. So whatever other criticisms we might make of the paper, I don’t think a small sample size is going to be one of them.

But what other criticisms might we make of the paper?

Well, the first thing to note is that the data are from epidemiological studies. There is a crucial difference between epidemiological studies and randomised controlled trials (RCTs). If you want to know if an exposure (such as HRT) causes an outcome (such as ovarian cancer), then the only way to know for sure is with an RCT. In an epidemiological study, where you are not doing an experiment, but merely observing what happens in real life, it is very hard to be sure if an exposure causes an outcome.

The study showed that women who take HRT are more likely to develop ovarian cancer than women who don’t take HRT. That is not the same thing as showing that HRT caused the excess risk of ovarian cancer. It’s possible that HRT was the cause, but it’s also possible that women who suffer from unpleasant menopausal symptoms (and so are more likely to take HRT than those women who have an uneventful menopause) are more likely to develop ovarian cancer. That’s not completely implausible. Ovaries are a pretty relevant organ in the menopause, and so it’s not too hard to imagine some common factor that predisposes both to unpleasant menopausal symptoms and an increased ovarian cancer risk.

And if that were the case, then the observed association between HRT use and ovarian cancer would be completely spurious.
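To make that concrete, here’s a toy simulation (every number in it is invented purely for illustration): a hidden factor makes women both more likely to take HRT and more likely to develop ovarian cancer, HRT itself does absolutely nothing, and yet HRT users still show a higher cancer rate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Hypothetical hidden factor (think: severe menopausal symptoms); prevalence invented
hidden_factor = rng.random(n) < 0.3

# HRT use is more common in women with the hidden factor...
takes_hrt = rng.random(n) < np.where(hidden_factor, 0.6, 0.2)
# ...and so is ovarian cancer, but note that HRT itself has NO effect on cancer in this model
cancer = rng.random(n) < np.where(hidden_factor, 0.0020, 0.0010)

risk_hrt = cancer[takes_hrt].mean()
risk_no_hrt = cancer[~takes_hrt].mean()
print(f"Risk with HRT: {risk_hrt:.4%}, without HRT: {risk_no_hrt:.4%}, "
      f"relative risk: {risk_hrt / risk_no_hrt:.2f}")
# HRT users come out around 1.3 times as likely to develop cancer, purely from confounding
```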

So what this study shows us is a correlation between HRT use and ovarian cancer, but as I’ve said many times before, correlation does not equal causation. I know I’ve been moaned at by journalists for endlessly repeating that fact, but I make no apology for it. It’s important, and I shall carry on repeating it until every story in the mainstream media about epidemiological research includes a prominent reminder of that fact.

Of course, it is certainly possible that HRT causes an increased risk of ovarian cancer. We just cannot conclude it from that study.

It would be interesting to look at how biologically plausible it is. Now, I’m no expert in endocrinology, but one little thing I’ve observed makes me doubt the plausibility. We know from a large randomised trial that HRT increases breast cancer risk (at least in the short term). There also seems to be evidence that oral contraceptives increase breast cancer risk but decrease ovarian cancer risk. With my limited knowledge of endocrinology, I would have thought the biological effects of HRT and oral contraceptives on cancer risk would be similar, so it just strikes me as odd that they would have similar effects on breast cancer risk but opposite effects on ovarian cancer risk. Anyone who knows more about this sort of thing than I do, feel free to leave a comment below.

But leaving aside the question of whether the results of the latest study imply a causal relationship (though of course we’re not really going to leave it aside, are we? It’s important!), I think there may be further problems with the study.

The paper tells us, and this was widely reported in the media, that “women who use hormone therapy for 5 years from around age 50 years have about one extra ovarian cancer per 1000 users”.

I’ve been looking at how they arrived at that figure, and it’s not totally clear to me how it was calculated. The crucial data in the paper is this table.  The table is given in a bit more detail in their appendix, and I’m reproducing the part of the table for 5 years of HRT use below.


Age group   Baseline risk (per 1000)   Relative excess risk   Absolute excess risk (per 1000)
50-54       1.2                        0.43                   0.52
55-59       1.6                        0.23                   0.37
60-64       2.1                        0.05                   0.10
Total                                                         0.99

The table is a bit complicated, so some words of explanation are probably helpful. The baseline risk is the probability (per 1000) of developing ovarian cancer over a 5 year period in the relevant age group. The relative excess risk is the proportional amount by which that risk is increased by 5 years of HRT use starting at age 50. The absolute excess risk is the baseline risk multiplied by the relative excess risk.

The absolute excess risks for each 5 year age band are then added together to give the total excess lifetime risk of ovarian cancer for a woman who takes HRT for 5 years starting at age 50. I assume excess risks at older age groups are ignored as there is no evidence that HRT increases the risk after such a long delay. It’s important to note here that the figure of 1 in 1000 excess ovarian cancer cases refers to lifetime risk, not the excess in a 5 year period.
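Just to make the arithmetic explicit, here it is in a few lines of Python (the figures are the ones in the table above):

```python
# Absolute excess risk = baseline risk x relative excess risk, summed over age bands
rows = [
    # (age band, baseline risk per 1000 over 5 years, relative excess risk)
    ("50-54", 1.2, 0.43),
    ("55-59", 1.6, 0.23),
    ("60-64", 2.1, 0.05),
]

total = 0.0
for age_band, baseline, rel_excess in rows:
    absolute = baseline * rel_excess
    total += absolute
    print(f"{age_band}: {absolute:.2f} extra cases per 1000 women")

print(f"Total: {total:.2f} per 1000, i.e. roughly 1 extra ovarian cancer per 1000 users")
```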

The figures for incidence seem plausible. The figures for absolute excess risk are correct if the relative excess risk is correct. However, it’s not completely clear where the figures for relative risk come from. We are told they come from figure 2 in the paper. Maybe I’m missing something, but I’m struggling to match the 2 sets of figures. The excess risk of 0.43 for the 50-54 year age group matches the relative risk 1.43 for current users with duration < 5 years (which will be true while the women are still in that age group), but I can’t see where the relative excess risks of 0.23 and 0.05 come from.

Maybe it doesn’t matter hugely, as the numbers in figure 2 are in the same ballpark, but it always makes me suspicious when numbers should match and don’t.

There are some further statistical problems with the paper. This is going to get a bit technical, so feel free to skip the next two paragraphs if you’re not into statistical details. To be honest, it all pales into insignificance anyway beside the more serious problem that correlation does not equal causation.

The methods section tells us that cases were matched with controls. We are not told how the matching was done, which is the sort of detail I would not expect to see left out of a paper in the Lancet. But crucially, a matched case control study is different to a non-matched case control study, and it’s important to analyse it in a way that takes account of the matching, with a technique such as conditional logistic regression. Nothing in the paper suggests that the matching was taken into account in the analysis. This may mean that the confidence intervals for the relative risks are wrong.
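For what it’s worth, a matched analysis isn’t hard to do. Here’s a rough sketch of how a conditional logistic regression might look in Python, assuming a reasonably recent version of statsmodels (which, as far as I know, provides ConditionalLogit); the data frame below is entirely made up for illustration.

```python
import numpy as np
import pandas as pd
# ConditionalLogit lives here in recent statsmodels releases; check your installed version
from statsmodels.discrete.conditional_models import ConditionalLogit

# Invented toy data: six matched sets, each with one case and one matched control
df = pd.DataFrame({
    "matched_set": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "case":        [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],  # 1 = ovarian cancer case
    "hrt":         [1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0],  # 1 = ever used HRT
})

# Conditioning on matched_set is what makes the analysis respect the matched design
model = ConditionalLogit(df["case"], df[["hrt"]], groups=df["matched_set"])
result = model.fit()
print(np.exp(result.params))  # conditional odds ratio for HRT (3.0 in this toy example)
```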

It also seems odd that the data were analysed using Poisson regression (and no, I’m not going to say “a bit fishy”). Poisson regression makes the assumption that the baseline risk of developing ovarian cancer remains constant over time. That seems a highly questionable assumption here. It would be interesting to see if the results were similar using a method with more relaxed assumptions, such as Cox regression. It’s also a bit fishy (oh damn, I did say it after all) that the paper tells us that Poisson regression yielded odds ratios. Poisson regression doesn’t normally yield odds ratios: the default statistic is an incidence rate ratio. Granted, the interpretation is similar to an odds ratio, but they are not the same thing. Perhaps there is some cunning variation on Poisson regression in which the analysis can be coaxed into giving odds ratios, but if there is, I’m not aware of it.
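On that point, here’s a minimal sketch (invented numbers, nothing to do with the paper) showing that what a standard Poisson regression gives you, once exponentiated, is an incidence rate ratio:

```python
import numpy as np
import statsmodels.api as sm

# Invented illustration: events and person-years in an unexposed (0) and exposed (1) group
exposed      = np.array([0, 1])
events       = np.array([50, 90])            # cancer cases
person_years = np.array([100_000, 120_000])

X = sm.add_constant(exposed)
model = sm.GLM(events, X, family=sm.families.Poisson(), offset=np.log(person_years))
fit = model.fit()

# exp(coefficient) reproduces the rate ratio: (90/120000) / (50/100000) = 1.5
print(np.exp(fit.params[1]))
```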

I’m not sure how much those statistical issues matter. I would expect that you’d get broadly similar results with different techniques. But as with the opaque way in which the lifetime excess risk was calculated, it just bothers me when statistical methods are not as they should be. It makes you wonder if anything else was wrong with the analysis.

Oh, and a further oddity is that nowhere in the paper are we told the total sample size for the analysis. We are told the number of women who developed ovarian cancer, but we are not told the number of controls that were analysed. That’s a pretty basic piece of information that I would expect to see in any journal, never mind a top-tier journal such as the Lancet.

I don’t know whether those statistical oddities have a material impact on the analysis. Perhaps they do, perhaps they don’t. But ultimately, I’m not sure it’s the most important thing. The really important thing here is that the study has not shown that HRT causes an increase in ovarian cancer risk.

Remember folks, correlation does not equal causation.

Hospital special measures and regression to the mean

Forgive me for writing 2 posts in a row about regression to the mean. But it’s an important statistical concept, which also happens to be widely misunderstood. Sometimes with important consequences.

Last week, I blogged about a claim that student tuition fees had not put off disadvantaged applicants. The research was flawed, because it defined disadvantage on the basis of postcode areas, and not on the individual characteristics of applicants. This means that an increase in university applications from disadvantaged areas could have simply been due to regression to the mean (ie the most disadvantaged areas becoming less disadvantaged) rather than more disadvantaged individual students applying to university.

Today, we have a story in the news where exactly the same statistical phenomenon is occurring. The story is that putting hospitals into “special measures” has been effective in reducing their death rates, according to new research by Dr Foster.

The research shows no such thing, of course.

The full report, “Is [sic] special measures working?” is available here. I’m afraid the authors’ statistical expertise is no better than their grammar.

The research looked at 11 hospital trusts that had been put into special measures, and found that their mortality rates fell faster than hospitals on average. They thus concluded that special measures were effective in reducing mortality.

Wrong, wrong, wrong. The 11 hospital trusts had been put into special measures not at random, but precisely because they had higher than expected mortality. If you take 11 hospital trusts on the basis of a high mortality rate and then look at them again a couple of years later, you would expect the mortality rate to have fallen more than in other hospitals simply because of regression to the mean.

Maybe those 11 hospitals were particularly bad, but maybe they were just unlucky. Perhaps it’s a combination of both. But if they were unusually unlucky one year, you wouldn’t expect them to be as unlucky the next year. If you take the hospitals with the worst mortality, or indeed the most extreme examples of anything, you would expect it to improve just by chance.
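Here’s a quick simulation of the point, with invented numbers: 100 identical hospitals, all with exactly the same true death rate. Pick out the 11 that looked worst in year one, and watch them “improve” in year two without anyone doing anything at all.

```python
import numpy as np

rng = np.random.default_rng(7)
n_hospitals, admissions, true_death_rate = 100, 2000, 0.05   # invented figures

# Observed death rates in two successive years; every hospital has the same true rate
year1 = rng.binomial(admissions, true_death_rate, n_hospitals) / admissions
year2 = rng.binomial(admissions, true_death_rate, n_hospitals) / admissions

worst11 = np.argsort(year1)[-11:]   # the 11 "worst" trusts on year-one mortality
print(f"Worst 11 trusts: {year1[worst11].mean():.2%} in year 1 -> {year2[worst11].mean():.2%} in year 2")
print(f"All trusts:      {year1.mean():.2%} in year 1 -> {year2.mean():.2%} in year 2")
# The "worst" trusts improve markedly with no intervention whatsoever: regression to the mean
```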

This is a classic example of regression to the mean. The research provides no evidence whatsoever that special measures are doing anything. To do that, you would need to take poorly performing hospitals and allocate them at random either to have special measures or to be in a control group. Simply observing that the worst trusts got better after going into special measures tells you nothing about whether special measures were responsible for the improvement.

Student tuition fees and disadvantaged applicants

Those of you who have known me for a while will remember that I used to blog on the now defunct Dianthus Medical website. The Internet Archive has kept some of those blogposts for posterity, but sadly not all of them. As I promised when I started this blog, I will get round to putting all those posts back on the internet one of these days, but I’m afraid I haven’t got round to that just yet.

But in the meantime, I’m going to repost one of those blogposts here, as it has just become beautifully relevant again. About this time last year, UCAS (the body responsible for university admissions in the UK) published a report which claimed to show that applications to university from disadvantaged young people  were increasing proportionately more than applications from the more affluent, or in other words, the gap between rich and poor was narrowing.

Sadly, the report showed no such thing. The claim was based on a schoolboy error in statistics.

Anyway, UCAS have recently published their next annual report. Again, this claims to show that the gap between rich and poor is narrowing, but doesn’t. Again, we see the same inaccurate headlines in the media that naively take the report’s conclusions at face value, and we see exactly the same schoolboy error in the way the statistics were analysed in the report.

So as what I wrote last year is still completely relevant today, here goes…

One of the most significant political events of the current Parliament has been the huge increase in student tuition fees, which mean that most university students now need to pay £9000 per year for their education.

One of the arguments against this rise used by its opponents was that it would put off young people from disadvantaged backgrounds from applying to university. Supporters of the new system argued that it would not, as students can borrow the money via a student loan to be paid back over a period of decades, so no-one would have to find the money up front.

The new fees came into effect in 2012, so we should now have some empirical data that should allow us to find out who was right. So what do the statistics show? Have people from disadvantaged backgrounds been deterred from applying to university?

A report was published earlier this year by UCAS, the organisation responsible for handling applications to university. This specifically addresses the question of applications from disadvantaged areas. This shows (see page 17 of the report) that although there was a small drop in application rates from the most disadvantaged areas immediately after the new fees came into effect, from 18.0% in 2011 to 17.5% in 2012, the rates have since risen to 20.5% in 2014. And the ratio of the rate of applications from the most advantaged areas to the most disadvantaged areas fell from 3.0 in 2011 to 2.5 in 2014.

So, case closed, then? Clearly the new fees have not stopped people from disadvantaged areas applying to university?

Actually, no. It’s really not that simple. You see, there is a big statistical problem with the data.

That problem is known as regression to the mean. This is a tendency of characteristics with particularly high or low values to become more like average values over time. It’s something we know all about in clinical trials, and is one of the reasons why clinical trials need to include control groups if they are going to give reliable data. For example, in a trial of a medication for high blood pressure, you would expect patients’ blood pressure to decrease during the trial no matter what you do to them, as they had to have high blood pressure at the start of the trial or they wouldn’t have been included in it in the first place.

In the case of the university admission statistics, the specific problem is the precise way in which “disadvantaged areas” and “advantaged areas” were defined.

The advantage or disadvantage of an area was defined by the proportion of young people participating in higher education during the period 2000 to 2004. Since the “disadvantaged” areas were specifically defined as those areas that had previously had the lowest participation rates, it is pretty much inevitable that those rates would increase, no matter what the underlying trends were.

Similarly, the most advantaged areas were almost certain to see decreases in participation rates (at least relatively speaking, though this is somewhat complicated by the fact that overall participation rates have increased since 2004).

So the finding that the ratio of applications from most advantaged areas to those from least advantaged areas has decreased was exactly what we would expect from regression to the mean. I’m afraid this does not provide evidence that the new tuition fee regime has been beneficial to people from disadvantaged backgrounds. It is very hard to disentangle any real changes in participation rates from different backgrounds from the effects of regression to the mean.

Unless anyone can point me to any better statistics on university applications from disadvantaged backgrounds, I think the question of whether the new tuition fee regime has helped or hindered social inequalities in higher education remains open.

The Saatchi Bill

I was disappointed to see yesterday that the Saatchi Bill (or Medical Innovations Bill, to give it its official name) passed its third reading in the House of Lords.

The Saatchi Bill, if passed, will be a dreadful piece of legislation. The arguments against it have been well rehearsed elsewhere, so I won’t go into them in detail here. But briefly, the bill sets out to solve a problem that doesn’t exist, and then offers solutions that wouldn’t solve it even if it did exist.

It is based on the premise that the main reason no progress is ever made in medical research (which is nonsense to start with, of course, because progress is made all the time) is that doctors are afraid to try innovative treatments in case they get sued. There is, however, absolutely no evidence that that’s true, and in any case, the bill would not help promote real innovation, as it specifically excludes the use of treatments as part of research. Without research, there is no meaningful innovation.

If the bill were simply ineffective, that would be one thing, but it’s also actively harmful. By removing the legal protection that patients  currently enjoy against doctors acting irresponsibly, the bill will be a quack’s charter. It would certainly make it more likely that someone like Stanislaw Burzynski, an out-and-out quack who makes his fortune from fleecing cancer patients by offering them ineffective and dangerous treatments, could operate legally in the UK. That would not be a good thing.

One thing that has struck me about the sorry story of the Saatchi bill is just how dishonest Maurice Saatchi and his team have been. A particularly dishonourable mention goes to the Daily Telegraph, who have been the bill’s “official media partner“. Seriously? Since when did bills going through parliament have an official media partner? Some of the articles they have written have been breathtakingly dishonest. They wrote recently that the bill had “won over its critics“,  which is very far from the truth. Pretty much the entire medical profession is against it: this response from the Academy of Royal Medical Colleges is typical. The same article says that one way the bill had won over its critics was by amending it to require that doctors treating patients under this law must publish their research. There are 2 problems with that: first, the law doesn’t apply to research, and second, it doesn’t say anything about a requirement to publish results.

In an article in the Telegraph today, Saatchi himself continued the dishonesty. As well as continuing to pretend that the bill is now widely supported, he also claimed that more than 18,000 patients responded to the Department of Health’s consultation on the bill. In fact, the total number of responses to the consultation was only 170.

The dishonesty behind the promotion of the Saatchi bill has been well documented by David Hills (aka “the Wandering Teacake”), and I’d encourage you to read his detailed blogpost.

The question that I want to ask about all this is why? Why is Maurice Saatchi doing all this? What does he have to gain from promoting a bill that’s going to be bad for patients but good for unscrupulous quacks?

I cannot know the answers to any of those questions, of course. Only Saatchi himself can know, and even he may not really know: we are not always fully aware of our own motivations. The rest of us can only speculate. But nonetheless, I think it’s interesting to speculate, so I hope you’ll bear with me while I do so.

The original impetus for the Saatchi bill came when Saatchi lost his wife to ovarian cancer. Losing a loved one to cancer is always difficult, and ovarian cancer is a particularly nasty disease. There can be no doubt that Saatchi was genuinely distressed by the experience, and deserves our sympathy.

No doubt it seemed like a good idea to try to do something about this. After all, as a member of the House of Lords, he has the opportunity to propose new legislation. It is completely understandable that if he thought a new law could help people who were dying of cancer, he would be highly motivated to introduce one.

All of that is very plausible and easy to understand. What has happened subsequently, however, is a little harder to understand.

It can’t have been very long after Saatchi proposed the bill that many people who know more about medicine than he does told him why it simply wouldn’t work, and would have harmful consequences. So I think what is harder to understand is why he persisted with the bill after all the problems with it had been explained to him.

It has been suggested that this is about personal financial gain: his advertising company works for various pharmaceutical companies, and pharmaceutical companies will gain from the bill.

However, I don’t believe that that is a plausible explanation for Saatchi’s behaviour.

For a start, I’m pretty sure that the emotional impact of losing a beloved wife is a far stronger motivator than money, particularly for someone who is already extremely rich. It’s not as if Saatchi needs more money. He’s already rich enough to buy the support of a major national newspaper and to get a truly dreadful bill through parliament.

And for another thing, I’m not at all sure that pharmaceutical companies would do particularly well out of the bill anyway. They are mostly interested in getting their drugs licensed so that they can sell them in large quantities. Selling them as a one-off to individual patients is unlikely to be at the top of their list of priorities.

For what it’s worth, my guess is that Saatchi just has difficulty admitting that he was wrong. It’s not a particularly rare personality trait. He originally thought the bill would genuinely help cancer patients, and when told otherwise, he simply ignored that information. You might see this as an example of the Dunning-Kruger effect, and it’s certainly consistent with the widely accepted phenomenon of confirmation bias.

Granted, what we’re seeing here is a pretty extreme case of confirmation bias, and has required some spectacular dishonesty on the part of Saatchi to maintain the illusion that he was right all along. But Saatchi is a politician who originally made his money in advertising, and it would be hard to think of 2 more dishonest professions than politics and advertising. It perhaps shouldn’t be too surprising that dishonesty is something that comes naturally to him.

Whatever the reasons for Saatchi’s insistence on promoting the bill in the face of widespread opposition, this whole story has been a rather scary tale of how money and power can buy your way through the legislative process.

The bill still has to pass its third reading in the House of Commons before it becomes law. We can only hope that our elected MPs are smart enough to see what a travesty the bill is. If you want to write to your MP to ask them to vote against the bill, now would be a good time to do it.

Plain packaging for tobacco

Plain packaging for tobacco is in the news today. The idea behind it is that requiring tobacco manufacturers to sell cigarettes in unbranded packages, where all the branding has been replaced by prominent health warnings, will reduce the number of people who smoke, and thereby benefit public health.

But will it work?

That’s an interesting question. There’s a lot of research that’s been done, though it’s fair to say none of it is conclusive. For example, there has been research on how it affects young people’s perceptions of cigarettes and on what happened to the number of people looking for help with quitting smoking after plain packaging was introduced in Australia.

But for me, those are not the most interesting pieces of evidence.

What tells me that plain packaging is overwhelmingly likely to be an extremely effective public health measure is that the tobacco industry are strongly opposed to it. They probably know far more about the likely effects than the rest of us: after all, for me, it’s just a matter of idle curiosity, but for them, millions of pounds of their income depends on it. So the fact they are against it tells us plenty.

Let’s look in a little more detail at exactly what it tells us. Advertising and branding generally has 2 related but distinguishable aims for a company that sells something. One aim is to increase their share of the market, in other words to sell more of their stuff than their competitors in the same market. The other is to increase the overall size of the market, so that they sell more, and their competitors sell more as well. Both those things can be perfectly good reasons for a company to spend their money on advertising and branding.

But the difference between those 2 aims is crucial here.

If the point of cigarette branding were just to increase market share without affecting the overall size of the market, then the tobacco industry should be thoroughly in favour of a ban. Advertising and branding budgets, when the overall size of the market is constant, are a classic prisoner’s dilemma. If all tobacco companies spend money on branding, they will all have pretty much the same share as if no-one did, so they will gain nothing, but they will spend money on branding, so they’re worse off than if they didn’t. However, they can’t afford not to spend money on branding, as then they would lose market share to their competitors, who are still spending money on it.
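A toy payoff table makes the point (all the numbers below are invented; they just need the right shape):

```python
# Hypothetical profits (in arbitrary units) for (firm A, firm B) in a fixed-size market.
# Branding shifts market share but costs money; if both firms brand, the shifts cancel out.
payoffs = {
    ("brand",    "brand"):    (40, 40),   # both spend, shares unchanged, both pay the cost
    ("brand",    "no brand"): (70, 20),   # A takes share from B
    ("no brand", "brand"):    (20, 70),   # B takes share from A
    ("no brand", "no brand"): (50, 50),   # nobody spends: the best joint outcome
}

for a_choice in ("brand", "no brand"):
    for b_choice in ("brand", "no brand"):
        a_profit, b_profit = payoffs[(a_choice, b_choice)]
        print(f"A {a_choice:>8}, B {b_choice:>8}: A earns {a_profit}, B earns {b_profit}")

# Whatever B does, A earns more by branding (40 > 20 and 70 > 50), and vice versa,
# so both firms brand and both end up worse off than if branding were banned outright.
```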

The ideal situation for the tobacco industry in that case would be that no-one would spend any money on branding. But how can you achieve that? For all the companies to agree not to spend money on branding might be an illegal cartel, and there’s always a risk that someone would break the agreement to increase their market share.

A government-mandated ban solves that problem nicely. If all your competitors are forced not to spend money on branding, then you don’t have to either. All the tobacco companies win.

So if that were really the situation, then you would expect the tobacco companies to be thoroughly in favour of it. But they’re not. So that tells me that we are not in the situation where the total market size is constant.

The tobacco companies must believe, and I’m going to assume here that they know what they’re doing, that cigarette branding affects the overall size of the market. If branding could increase the overall size of the market (or, more realistically given that smoking rates in the UK are in long-term decline, stop it shrinking quite as fast), then it would be entirely rational for the tobacco companies to oppose mandatory plain packaging.

I don’t know about you, but that’s all the evidence I need to convince me that plain packaging is overwhelmingly likely to be an effective public health measure.