All posts by Adam

Consequences of dishonest advertising

As I was travelling on the London Underground the other day, I saw an advert that caught my eye.

Please note: if you are a journalist from the Daily Mirror and would like to use this photo in a story, it would be appreciated if you would ask permission first rather than just stealing it like you did with the last photo I posted of a dodgy advert.

That was a surprising claim, I thought. Just wipe a magic potion across your brow, and you get fast, effective relief from a headache.

So I had a look to see what the medical literature had to say about it. A PubMed search for 4head or its active ingredient, levomenthol, turned up nothing to support the claim.

A Google Scholar search similarly failed to find a shred of evidence that the product has any effect whatever on headaches. So I have reported the advert to the ASA. It will be interesting to see if the manufacturer has any evidence to back up their claim. I suppose they might, but they are keeping it pretty well hidden if they do.

But it occurred to me that something is very wrong with the way advertising regulation works. If the advert is indeed making claims that turn out to be completely unsubstantiated, the manufacturer faces no adverse consequences whatever. False advertising is effectively legalised lying.

When I last reported a misleading advert to the ASA, the ASA did eventually rule that the advert was misleading and asked the advertiser to stop using it. It took almost a year from when I reported the advert to when the ruling was made, giving the advertiser completely free rein to continue telling lies for almost a whole year.

In a just society, there might be some penalty for misleading the public like that. But there isn’t. The only sanction is being asked to take the advert down. As long as you comply (and with very rare exceptions, even if you don’t), there are no fines or penalties of any sort.

So where is the incentive for advertisers to be truthful? Most dishonest adverts probably don’t get reported, and even when they are, the ASA may be prepared to be generous to the advertiser and not find against them anyway. Advertisers know that they can be dishonest with no adverse consequences.

I would like to suggest a new way of regulating adverts. Every company that advertises would need to nominate an advertising compliance officer, probably a member of the board of directors. That person would need to sign off every advert that the company uses. If an advert is found to be dishonest, that would be a criminal offence, and the advertising compliance officer would be personally liable, facing a criminal record and a substantial fine. The company would be fined as well.

We criminalise other forms of taking money by fraud. Why does fraudulent advertising have to be different?

The Trials Tracker and post-truth politics

The All Trials campaign was founded in 2013 with the stated aim of ensuring that all clinical trials are disclosed in the public domain. This is, of course, an entirely worthy aim. There is no doubt that sponsors of clinical trials have an ethical responsibility to make sure that the results of their trials are made public.

However, as I have written before, I am not impressed by the way the All Trials campaign misuses statistics in pursuit of its aims. Specifically, the statistic they keep promoting, “about half of all clinical trials are unpublished”, is simply not evidence based. The most recent studies suggest that the proportion of undisclosed trials is closer to 20% than 50%.

The latest initiative by the All Trials campaign is the Trials Tracker. This is an automated tool that looks at all trials registered on clinicaltrials.gov since 2006 and determines, using an automated algorithm, which of them have been disclosed. They found 45% were undisclosed (27% of industry-sponsored trials and 54% of non-industry trials). So, surely this is evidence to support the All Trials claim that about half of trials are undisclosed, right?

Wrong.

In fact it looks like the true figure for undisclosed trials is not 45%, but at most 21%. Let me explain.

The problem is that an automated algorithm is not very good at determining whether trials are disclosed or not. The algorithm can tell if results have been posted on clinicaltrials.gov, and also searches PubMed for publications with a matching clinicaltrials.gov ID number. You can probably see the flaw in this already. There are many ways that results could be disclosed that would not be picked up by that algorithm.

Many pharmaceutical companies make results of clinical trials available on their own websites. The algorithm would not pick that up. Also, although journal publications of clinical trials should ideally make sure they are indexed by the clinicaltrials.gov ID number, in practice that system is imperfect. So the automated algorithm misses many journal articles that aren’t indexed correctly with their ID number.
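To make the limitation concrete, here is a rough sketch of the kind of check such an algorithm performs. This is not the Trials Tracker's actual code, and the `results_posted` and `nct_id` fields are invented names, but it uses the real NCBI E-utilities endpoint to ask whether any PubMed record matches a given clinicaltrials.gov ID:

```python
import requests

EUTILS_SEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_has_nct_id(nct_id: str) -> bool:
    """Return True if any PubMed record is indexed with this
    clinicaltrials.gov ID (e.g. 'NCT01234567')."""
    params = {"db": "pubmed", "term": nct_id, "retmode": "json"}
    resp = requests.get(EUTILS_SEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"]) > 0

def classify(trial: dict) -> str:
    # Flagged "disclosed" only if results are posted on clinicaltrials.gov
    # or a matching PubMed record exists; a report on, say, a company
    # website is invisible to this kind of check.
    if trial["results_posted"] or pubmed_has_nct_id(trial["nct_id"]):
        return "disclosed"
    return "undisclosed"
```

Anything published without the registry ID being picked up, or published somewhere other than PubMed, falls straight through the net.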

So how bad is the algorithm?

The sponsor with the greatest number of unreported trials, according to the algorithm, is Sanofi. I started by downloading the raw data, picked the first 10 trials sponsored by Sanofi that were supposedly “undisclosed”, and tried searching for results manually.

As an aside, the Trials Tracker team get 7/10 for transparency. They make their raw data available for download, which is great, but they don’t disclose their metadata (descriptions of what each variable in the dataset represents), so it was rather hard work figuring out how to use the data. But I think I figured it out in the end, as after trying a few combinations of interpretations I was able to replicate their published results exactly.

Anyway, of those 10 “undisclosed” trials by Sanofi, 8 of them were reported on Sanofi’s own website, and one of the remaining 2 was published in a journal. So in fact only 1 of the 10 was actually undisclosed. I posted this information in a comment on the journal article in which the Trials Tracker is described, and it prompted another reader, Tamas Ferenci, to investigate the Sanofi trials more systematically. He found that 227 of the 285 Sanofi trials (80%) listed as undisclosed by Trials Tracker were in fact published on Sanofi’s website. He then went on to look at “undisclosed” trials sponsored by AstraZeneca, and found that 38 of the 68 supposedly undisclosed trials (56%) were actually published on AstraZeneca’s website. Ferenci’s search only looked at company websites, so it’s possible that more of the trials were reported in journal articles.

The above analyses only looked at a couple of sponsors, and we don’t know if they are representative. So to investigate more systematically the extent to which the Trials Tracker algorithm underestimates disclosure, I searched for results manually for 100 trials: a random selection of 50 industry trials and a random selection of 50 non-industry trials.

I found that 54% (95% confidence interval 40-68%) of industry trials and 52% (95% CI 38-66%) of non-industry trials that had been classified as undisclosed by Trials Tracker were available in the public domain. This might be an underestimate, as my search was not especially thorough. I searched Google, Google Scholar, and PubMed, and if I couldn’t find any results in a few minutes then I gave up. A more systematic search might have found more articles.
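If you want to check the confidence intervals, they can be reproduced with a simple normal approximation, on the assumption that the percentages correspond to 27 of 50 industry trials and 26 of 50 non-industry trials found:

```python
from math import sqrt

def normal_approx_ci(successes, n, z=1.96):
    """Simple normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / n
    se = sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

for label, found in [("industry", 27), ("non-industry", 26)]:
    p, lo, hi = normal_approx_ci(found, 50)
    print(f"{label}: {p:.0%} (95% CI {lo:.0%} to {hi:.0%})")

# industry: 54% (95% CI 40% to 68%)
# non-industry: 52% (95% CI 38% to 66%)
```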

If you’d like to check the results yourself, my findings are in a csv file here. This follows the same structure as the original dataset (I’d love to be able to give you the metadata for that, but as mentioned above, I can’t), but with the addition of 3 variables at the end. “Disclosed” specifies whether the trial was disclosed, and if so, how (journal, company website, etc). It’s possible that trials were disclosed in more than one place, but once I’d found a trial in one place I stopped searching. “Link” is a link to the results if available, and “Comment” is any other information that struck me as relevant, such as whether a trial was terminated prematurely or was of a product which has since been discontinued.

Putting these figures together with the Trials Tracker main results, this suggests that only 12% of industry trials and 26% of non-industry trials are undisclosed, or 21% overall (34% of the trials were sponsored by industry). And given the rough and ready nature of my search strategy, this is probably an upper bound for the proportion of undisclosed trials. A far cry from “about half”, and in fact broadly consistent with the recent studies showing that about 80% of trials are disclosed.

It’s also worth noting that industry are clearly doing better at disclosure than academia. Much of the narrative that the All Trials campaign has encouraged is of the form “evil secretive Big Pharma deliberately withholding their results”. The data don’t seem to support this. It seems far more likely that trials are undisclosed simply because triallists lack the resources to write them up for publication. Research in industry is generally better funded than research in academia, and my guess is that the better funding explains why industry do better at disclosing their results. I and some colleagues have previously suggested that one way to increase trial disclosure rates would be to ensure that funders of research ringfence a part of their budget specifically for the costs of publication.
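Going back to the headline numbers, the arithmetic behind those 12%, 26% and 21% figures is simple enough to check, taking the Trials Tracker's published rates and my manual search results at face value:

```python
# Trials Tracker's raw "undisclosed" rates
tt_industry, tt_nonindustry = 0.27, 0.54

# Fraction of those flagged trials that a manual search actually found
found_industry, found_nonindustry = 0.54, 0.52

undisclosed_industry = tt_industry * (1 - found_industry)            # ~0.12
undisclosed_nonindustry = tt_nonindustry * (1 - found_nonindustry)   # ~0.26

industry_share = 0.34
overall = (industry_share * undisclosed_industry
           + (1 - industry_share) * undisclosed_nonindustry)         # ~0.21
print(undisclosed_industry, undisclosed_nonindustry, overall)
```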

There are some interesting features of the 23 out of the 50 industry-sponsored trials that really did seem to be undisclosed. 9 of them were not trials of a drug intervention. Of the 14 undisclosed drug trials, 4 were of products that had been discontinued and a further 3 had sample sizes of fewer than 12 subjects, so none of those 7 studies is likely to be relevant to clinical practice. It seems that undisclosed industry-sponsored drug trials of relevance to clinical practice are very rare indeed.

The Trials Tracker team would no doubt respond by saying that the trials missed by their algorithm have been badly indexed, which is bad in itself. And they would be right about that. Trial sponsors should update clinicaltrials.gov with their results. They should also make sure that the clinicaltrials.gov ID number is included in the publication (although in several cases of published trials that were missed by the algorithm, the ID number was in fact included in the abstract of the paper, so this seems to be a fault of Medline indexing rather than any fault of the triallists).

However, the claim made by the Trials Tracker is not that trials are badly indexed. If they stuck to making only that claim, then the Trials Tracker would be a perfectly worthy and admirable project. But the problem is they go beyond that, and claim something which their data simply do not show. Their claim is that the trials are undisclosed. This is just wrong. It is another example of what seems to be all the rage these days, namely “post-truth politics”. It is no different from when the Brexit campaign said “We spend £350 million a week on the EU and could spend it on the NHS instead” or when Donald Trump said, well, pretty much every time his lips moved really.

Welcome to the post-truth world.

 

Evidence-based house moving

I live in London. I didn’t really intend to live in London. But I got a job here that seemed to suit me, so I thought maybe it would be OK to live here for a couple of years and then move on.

That was in 1994. Various life events intervened and I sort of got stuck here. But now I’m in the fortunate position where my job is home-based, and my partner also works from home, so we could pretty much live anywhere. So finally, moving out of London is very much on the agenda.

But where should we move to? The main intention is “somewhere more rural than London”, which you will appreciate doesn’t really narrow it down very much. Many people move to a specific location for a convenient commute to work, but we have no such constraints, so we need some other way of deciding.

So I decided to do what all good statisticians do, and use data to come up with the answer.

There is a phenomenal amount of data that can be freely downloaded from the internet these days about various attributes of small geographic areas.

House prices are obviously one of the big considerations. You can download data from the Land Registry on every single residential property transaction going back many years. This needs a bit of work before it becomes usable, but it’s nothing a 3-level mixed effects model with random coefficients at both middle-layer super output area and local authority area can’t sort out (the model actually took about 2 days to run: it’s quite a big dataset).
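For anyone curious about what that kind of model looks like, here is a much simplified sketch using statsmodels: random intercepts only rather than the full random-coefficients model I actually ran, with the file and column names invented for illustration rather than taken from the Land Registry data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Land Registry price-paid data with MSOA and local authority codes added
# (the file and column names here are invented for illustration)
df = pd.read_csv("price_paid_with_areas.csv")
df["log_price"] = np.log(df["price"])

# Nested random intercepts: MSOA within local authority.
model = smf.mixedlm(
    "log_price ~ C(property_type) + C(year)",
    data=df,
    groups="local_authority",
    re_formula="1",
    vc_formula={"msoa": "0 + C(msoa)"},
)
result = model.fit()

# Area-level price estimates can then be pulled out of the fitted
# random effects, e.g. via result.random_effects
```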

Although I don’t have to commute to work every day, I’m not completely free of geographic constraints. I travel for work quite a bit, so I don’t want to be too far away from the nearest international airport. My parents, who are not as young as they used to be, live in Sussex, and I don’t want to be too many hours’ drive away from them. My partner also has family in the southeast of England and would like to remain in easy visiting distance. And we both love going on holiday to the Lake District, so somewhere closer to there would be nice (which is of course not all that easy to reconcile with being close to Sussex).

Fortunately, you can download extensive data on journey times from many bits of the country to many other bits, so that can be easily added to the data.

We’d like to live somewhere more rural than London, but don’t want to be absolutely in the middle of nowhere. Somewhere with a few shops and a couple of takeaways and pubs would be good. So I also downloaded data on population density.  I figured about 2500 people/square km would be a good compromise between escaping to somewhere more rural and not being in the middle of nowhere, and gave areas more points the closer they came to that ideal.

I’d like to have a big garden, so we also give points to places that have a high ratio of garden space to house space, which can easily be calculated from land use data. Plenty of green space in the area would also be welcome, and we can calculate that from the same dataset.

One of the problems with choosing places with low house prices is that they might turn out to be rather run-down and unpleasant places to live. So I’ve also downloaded data on crime rates and deprivation indices, so that run-down and crime-ridden areas can be penalised.

In addition to all that, I also found data on flood risk, political leanings, education levels, and life satisfaction, which I figured are probably also relevant.

I dare say there are probably other things that could be downloaded and taken into account, though that’s all I can think of for now. Suggestions for other things are very welcome via the comments below.

I then calculate a score for each of those things for each middle-layer super output area (an area of approximately 7000 people), weight each of those things by how important I think it is, and take a weighted average. Anything that scores too badly on an item I figured was important (this was just house prices and distance to my parents) automatically gets a score of zero.
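In outline, the scoring works something like the sketch below; the attribute names, weights and thresholds are purely illustrative, and I assume each attribute has already been rescaled so that higher is better on a 0–1 scale:

```python
import pandas as pd

# One row per middle-layer super output area, with each attribute
# already rescaled to 0-1 (illustrative file and column names).
areas = pd.read_csv("msoa_scores.csv")

weights = {   # invented weights, for illustration only
    "house_price": 0.25, "journey_times": 0.20, "density": 0.10,
    "garden_space": 0.10, "green_space": 0.10, "crime": 0.10,
    "deprivation": 0.05, "flood_risk": 0.04, "politics": 0.02,
    "education": 0.02, "life_satisfaction": 0.02,
}

def overall_score(row, weights, knockouts=("house_price", "journey_times"),
                  threshold=0.2):
    # Any area scoring too badly on a must-have item gets zero overall
    if any(row[k] < threshold for k in knockouts):
        return 0.0
    total = sum(weights.values())
    return sum(w * row[k] for k, w in weights.items()) / total

areas["score"] = areas.apply(overall_score, axis=1, args=(weights,))
```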

The result is a database of a score for every middle-layer super output area in England and Wales (I figured Scotland was just too far away from Sussex), which I then mapped using the wonderful QGIS mapping software.

The results are actually quite sensitive to the weightings applied to each attribute, so I allowed some of the weightings to vary over reasonable ranges, and then picked the areas that consistently performed well.
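One way to do that is to perturb the weights repeatedly over a plausible range and keep the areas that stay near the top of the ranking. Continuing the illustrative sketch above:

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_ranking(areas, weights, score_fn, n_draws=200, top_n=50):
    """Count how often each area lands in the top `top_n` when every
    weight is independently perturbed over a +/-50% range."""
    counts = np.zeros(len(areas))
    for _ in range(n_draws):
        perturbed = {k: w * rng.uniform(0.5, 1.5) for k, w in weights.items()}
        scores = areas.apply(score_fn, axis=1, args=(perturbed,))
        in_top = (scores.rank(ascending=False) <= top_n).to_numpy()
        counts[in_top] += 1
    return counts / n_draws   # 1.0 = consistently near the top

# areas["robustness"] = robust_ranking(areas, weights, overall_score)
```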

The final map looks like this:

[Map of scores by middle-layer super output area]

Red areas are those with low scores, green areas are those with high scores.

Not surprisingly, setting a constraint on house prices ruled out almost all of the southeast of England. Setting a constraint on travelling time to visit my parents ruled out most of the north of England. What is left is mainly a little band around the Midlands.

And which is the best place to live, taking all that into account? Turns out that it’s Stafford. I’ve actually never been to Stafford. I wonder if it’s a nice place to live? I suppose I should go and visit it sometime and see how well my model did.

Asking for evidence from All Trials

I’ve written before about how the statistic “50% of all clinical trials are unpublished”, much beloved of the All Trials campaign, is simply not evidence based.

The charity Sense About Science, who run the All Trials campaign, also run a rather splendid website called “Ask for Evidence”, which encourages people to ask for evidence when people make dodgy claims.

So I used Sense About Science’s website to ask for evidence from Sense About Science for their claim that 50% of all trials are unpublished.

To their credit, they responded promptly, and pointed me to this article, which they claimed provided the evidence behind their claim.

So how well does it support their claim?

Interestingly, the very first sentence states that they don’t really have evidence. The first paragraph of the document reads as follows:

“We may never know the answer to this question. In some ways, maybe it doesn’t matter. Even one clinical trial left unreported is unacceptable.”

So if they don’t know, why are they making such a confident claim?

As an aside, they are of course spot on in the rest of that paragraph. Even if the proportion of unpublished trials is substantially less than 50%, if it’s greater than zero it’s still too high.

They go on to further emphasise the point that they really don’t know what proportion of trials are unpublished:

“It is clearly not a statistic, and we wouldn’t advocate trying to roll up the results of all the studies listed below to produce something spuriously precise.”

They then go on to explain the complexity of estimating the proportion of unpublished trials. It certainly is complex, and they give a good explanation of why. It’s not a bad document, and even includes some studies showing much higher rates of disclosure that they don’t admit to on their main website article.

But if they understand, as they clearly do, that the claim that half of all trials are unpublished is spuriously precise and that it would be wrong to claim that, why do they do so anyway?

Not only do they make the claim very confidently on their own website, but it also often appears in the very many articles that their PR machine churns out. These articles state it as fact, and do not acknowledge the problems that they describe in their background document. You will never see those articles citing the most recent research showing greater than 90% disclosure rates.

This still seems dishonest to me.

The Dianthus blog

Those of you who have known me for some time will know that I used to blog on the website of my old company, Dianthus Medical.

Well, Dianthus Medical is no more, but I have preserved the blog for posterity. You can find it again at its original home, dianthus.co.uk. If by some remote chance you happen to have any links to any of the old blogposts, they should work again now. All the blogposts have their original URLs.

The Dianthus blog no longer accepts comments, but if you have an urgent need to leave a comment on anything posted there, you are welcome to leave a comment on this page.

Sugar tax

One of the most newsworthy features of yesterday’s budget was the announcement that the UK will introduce a tax on sugary drinks.

There is reason to think this may have primarily been done as a dead cat move, to draw attention away from the fact that the Chancellor is missing all his deficit reduction targets and cutting disability benefits (though apparently he can still afford tax cuts for higher rate tax payers).

But what effect might a tax on sugary drinks have?

Obviously it will reduce consumption of sugary drinks:  it’s economics 101 that when the price of something goes up, consumption falls. But that by itself is not interesting or useful. The question is what effect will that have on health and well-being?

The only honest answer to that is we don’t know, as few countries have tried such a tax, and we do not have good data on what the effects have been in countries that have.

For millionaires such as George Osborne and Jamie Oliver, the tax is unlikely to make much difference. Sugary drinks are such a tiny part of their expenditure, they will probably not notice.

But what about those at the other end of the income scale? While George Osborne may not realise this, there are some people for whom the weekly grocery shop is a significant proportion of their total expenditure. For such people, taxing sugary drinks may well have a noticeable effect.

For a family who currently spends money on sugary drinks, 3 outcomes are possible.

The first possibility is that they continue to buy the same quantity of sugary drinks as before (or sufficiently close to the same quantity that their total expenditure still rises). They will then be worse off, as they will have less money to spend on other things. This is bad in itself, and since poverty is one of the strongest determinants of ill health, taking money away from people is unlikely to make them healthier.

The second possibility is that they reduce their consumption of sugary drinks by an amount roughly equivalent to the increased price. They will then be no better or worse off in terms of the money left in their pocket after the weekly grocery shopping, but they will be worse off in welfare terms, as they will have less of something that they value (sugary drinks). We know that they value sugary drinks, because if they didn’t, they wouldn’t buy them in the first place.

Proponents of the sugar tax will argue that they will be better off in health terms, as sugary drinks are bad for you, and they are now consuming less of them. Well, maybe. But that really needs a great big [citation needed]. This would be a relatively modest decrease in sugary drink consumption, and personally I would be surprised if it made much difference to health. There is certainly no good evidence that it would have benefits on health, and given that you are harming people by depriving them of something they value, I think it is up to proponents of the sugar tax to come up with evidence that the benefits outweigh those harms. It seems rather simplistic to suppose that obesity, diabetes, and the other things that the sugar tax is supposed to benefit are primarily a function of sugary drink consumption, when there are so many other aspects of diet, and of course exercise, which the sugar tax will not affect.

The third possibility is that they reduce their consumption by more than the amount of the price increase. They will now have more money in their pocket at the end of the weekly grocery shop. Perhaps they will spend that money on vegan tofu health drinks and gym membership, and be healthier as a result, as the supporters of the sugar tax seem to believe. Or maybe they’ll spend it on cigarettes and boiled sweets. We simply don’t know, as there are no data to show what happens here. The supposed health benefits of the sugar tax are at this stage entirely hypothetical.

But whatever they spend it on, they would have preferred to spend it on sugary drinks, so we are again making them worse off in terms of the things that they value.

All these considerations are trivial for people on high incomes. They may not be for people on low incomes. What seems certain is that the costs of the sugar tax will fall disproportionately on the poor.

You may think that’s a good idea. George Osborne obviously does. But personally, I’m not a fan of regressive taxation.

Are a fifth of drug trials really designed for marketing purposes?

A paper by Barbour et al was published in the journal Trials a few weeks ago making the claim that “a fifth of drug trials published in the highest impact general medical journals in 2011 had features that were suggestive of being designed for marketing purposes”.

That would be bad if it were true. Clinical trials are supposed to help to advance medical science and learn things about drugs or other interventions that we didn’t know before. They are not supposed to be simply designed to help promote the use of the drug. According to an editorial by Sox and Rennie, marketing trials are not really about testing hypotheses, but “to get physicians in the habit of prescribing a new drug.”

In my opinion, marketing trials are clearly unethical, and the question of how common they are is an important one.

Well, according to Barbour et al, 21% of trials in high impact medical journals were designed for marketing purposes. So how did they come up with that figure?

That, unfortunately, is where the paper starts to go downhill. They chose a set of criteria which they believed were associated with marketing trials. Those criteria were:

“1) a high level of involvement of the product manufacturer in study design 2) data analysis, 3) and reporting of the study, 4) recruitment of small numbers of patients from numerous study sites for a common disease when they could have been recruited without difficulty from fewer sites, 5) misleading abstracts that do not report clinically relevant findings, and 6) conclusions that focus on secondary end-points and surrogate markers”

Those criteria appear to be somewhat arbitrary. Although Barbour et al give 4 citations to back up those criteria, none of the papers cited provides any data to validate those criteria.

A sample of 194 papers from 6 top medical journals was then assessed against those criteria by 6 raters (or sometimes 5, as raters who were journal editors didn’t assess papers that came from their own journal), and each rater rated each paper as “no”, “maybe”, or “yes” for how likely it was to be a marketing trial. Trials rated “yes” by 4 or more raters were considered to be marketing trials, and trials with fewer than 4 “yes” ratings could also be considered marketing trials if there were no more than 3 “no” ratings and a subsequent consensus discussion decided they should be classified as marketing trials.

The characteristics of marketing trials were then compared with other trials. Not surprisingly, the characteristics described above were more common in the trials characterised as marketing trials. Given that that’s how the “marketing” trials were defined, that outcome was completely predictable. This is a perfectly circular argument. Though to be fair to the authors, they do acknowledge the circularity of their argument in the discussion.

One of the first questions that came to my mind was how well the 6 raters agreed. Unfortunately, no measure of inter-rater agreement is presented in the paper.

Happily, the authors get top marks for their commitment to transparency here. When I emailed to ask for their raw data so that I could calculate the inter-rater agreement myself, the raw data was sent promptly. If only all authors were so co-operative.

So, how well did the authors agree? Not very well, it turns out. The kappa coefficient for agreement among the raters was a mere 0.36 (kappa can range from −1 to 1, where 0 means agreement no better than chance and 1 is perfect agreement, with values above about 0.7 generally considered to be acceptable agreement). This does not suggest that the determination of what counted as a marketing trial was obvious.
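For anyone who wants to repeat that sort of calculation, a multi-rater (Fleiss) kappa can be computed with statsmodels along the following lines. The ratings below are toy numbers rather than the study's actual data, and I assume the same number of raters per paper, which was not quite true of the real study:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy data: rows are papers, columns are raters,
# ratings coded 0 = "no", 1 = "maybe", 2 = "yes".
ratings = np.array([
    [2, 2, 2, 2, 2, 2],
    [0, 1, 2, 0, 1, 0],
    [0, 0, 0, 1, 0, 0],
    [1, 2, 2, 1, 2, 1],
])

# aggregate_raters turns the (papers x raters) codes into the
# (papers x categories) count table that fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table, method="fleiss"))
```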

To look at this another way, of the 41 trials characterised as marketing trials, only 4 of those trials were rated “yes” by all raters, and only 9 were rated “yes” by all but one. This really doesn’t suggest that the authors could agree on what constituted a marketing trial.

So what about those 4 trials rated “yes” by all reviewers? Let’s take a look at them and see if the conclusion that they were primarily for marketing purposes stacks up.

The first paper is a report of 2 phase III trials of linaclotide for chronic constipation. This appears to have been an important component of the clinical trial data leading to the licensing of linaclotide, as the trials are mentioned in the press release in which the FDA describes the licensing of the drug. So the main purpose of the study seems to have been to get the drug licensed. And in contrast to point 6) in the criteria for determining a marketing study, the conclusions were based squarely on the primary endpoint. As for point 5), obviously the FDA thought the findings were clinically relevant as they were prepared to grant the drug a license on the back of them.

The second is a report of 2 phase III trials of rifaximin for patients with irritable bowel syndrome. Again, the FDA press release shows that the main purpose of the studies was to get the drug licensed. And again, the conclusions were based on the primary endpoint and were clearly considered clinically relevant by the FDA.

The third paper reports a comparative trial of tiotropium versus salmeterol for the prevention of exacerbations of COPD. Tiotropium was already licensed when this trial was done so this trial was not for the purposes of original licensing, but it does appear that it was important in subsequent changes to the licensing, as it is specifically referred to in the prescribing information.  Again, the conclusions focussed on the primary outcome measure, which was prevention of exacerbations: certainly a clinically important outcome in COPD.

The fourth paper was also done after the drug was originally licensed, which in this case was eplerenone. The study looked at overall mortality in patients with heart failure. Again, the study is specifically referenced in the prescribing information, and again, the study’s main conclusions are based on the primary outcome measure. In this case, the primary outcome measure was overall mortality. How much more clinically relevant do you want it to be?

Those 4 studies are the ones with the strongest evidence of being designed for marketing purposes. I haven’t looked at any of the others, but I think it’s fair to say that there is really no reason to think that those 4 were designed primarily for marketing.

Of course in one sense, you could argue that they are all marketing studies. You cannot market a drug until it is licensed. So doing studies with the aim of getting a drug licensed (or its licensed indications extended) could be regarded as for marketing purposes. But I’m pretty sure that’s not what most people would understand by the term.

So unfortunately, I think Barbour et al have not told us anything useful about how common marketing studies are.

I suspect they are quite rare. I have worked in clinical research for about 20 years, and have worked on many trials in that time. I have never worked on a study that I would consider to be designed mainly for marketing. All the trials I have worked on have had a genuine scientific question behind them.

This is not to deny, of course, that marketing trials exist. Barbour et al refer to some well documented examples in their paper. Also, in my experience as a research ethics committee member, I have certainly seen studies that seemed to serve little scientific purpose, for which the accusation of being designed mainly for marketing would be reasonable.

Again, they are rare: certainly nothing like 1 in 5. I have been an ethics committee member for 13 years, and typically review about 50 or so studies per year. The number of studies I have suspected of being marketing studies in that time could be counted on the fingers of one hand. If it had been up to me, I would not have given those studies ethical approval, though other members of my ethics committee do not share my views on the ethics of marketing trials, so I was outvoted and the trials were approved.

So although Barbour et al ask an important question, it does not seem to me that they have answered it. Still, by being willing to share their raw data, they have participated fully in the scientific process. Publishing something and letting others scrutinise your results is how science is supposed to be done, and for that they deserve credit.

 

 

 

Solving the economics of personalised medicine

It’s a well known fact that many drugs for many diseases don’t work very well in many patients. If we could identify in advance which patients will benefit from a drug and which won’t, then drugs could be prescribed in a much more targeted manner. That is actually a lot harder to do than it sounds, but it’s an active area of research, and I am confident that over the coming years and decades medical research will make much progress in that direction.

This is the world of personalised medicine.

Although giving people targeted drugs that are likely to be of substantial benefit to them has obvious advantages, there is one major disadvantage. Personalised medicine simply does not fit the economic model that has evolved for the pharmaceutical industry.

Developing new drugs is expensive. It’s really expensive. Coming up with a precise figure for the cost of developing a new drug is controversial, but some reasonable estimates run into billions of dollars.

The economic model of the pharmaceutical industry is based on the idea of a “blockbuster” drug. You develop a drug like Prozac, Losec, or Lipitor that can be used in millions of patients, and the huge costs of that development can be recouped by the huge sales of the drug.

But what if you are developing drugs based on personalised medicine for narrowly defined populations? Perhaps you have developed a drug for patients with a specific variant of a rare cancer, and it is fantastically effective in those patients, but there may be only a few hundred patients worldwide who could benefit. There is no way you’re going to recoup development costs of a billion dollars or more by selling the drug to a few hundred patients, without charging each patient sums of money that are crazily unaffordable.

Although the era of personalised medicine is still very much in its infancy, we have already seen this effect at work with drugs like Kadcyla, which works for only a specific subtype of breast cancer patients, but at £90,000 a pop has been deemed too expensive to fund in the NHS. What happens when even more targeted drugs are developed?

I was discussing this question yesterday evening over a nice bottle of Chilean viognier with Chris Winchester. I think between us we may have come up with a cunning plan.

Our idea is as follows. If a drug is being developed for a patient population narrow enough that it could reasonably be considered a “personalised medicine”, different licensing rules would apply. You would no longer have to obtain such a convincing body of evidence of efficacy and safety before licensing. You would need some evidence, of course, but the bar would be set much lower. Perhaps some convincing laboratory studies followed by some small clinical trials that could be done much more cheaply than the typical phase III trials that enrol hundreds of patients and cost many millions to run.

At that stage, you would not get a traditional drug license that would allow you to market the drug in the normal way. The license would be provisional, with some conditions attached.

So far, this idea is not new. The EMA has already started a pilot project of “adaptive licensing”, which is designed very much in this spirit.

But here comes the cunning bit.

Under our plan, the drug would be licensed to be marketed as a mixture of the active drug and placebo. Some packs of the drug would contain the active drug, and some would contain placebo. Neither the prescriber nor the patient would know whether they have actually received the drug. Obviously patients would need to be told about this and would then have the choice to take part or not. But I don’t think this is worse than the current situation, where at that stage the drug would not be licensed at all, so patients would either have to find a clinical trial (where they may still get placebo) or not get the drug at all.

In effect, every patient who uses the drug during the period of conditional licensing would be taking part in a randomised, double-blind, placebo-controlled trial.  Prescribers would be required to collect data on patient outcomes, which, along with a code number on the medication pack, could then be fed back to the manufacturer and analysed. The manufacturer would know from the code number whether the patient received the drug or placebo.

Once sufficient numbers of patients had been treated, then the manufacturer could run the analysis and the provisional license could be converted to a full license if the results show good efficacy and safety, or revoked if they don’t.
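To make the bookkeeping concrete, here is a toy sketch of how the pack-code scheme might work; all the names and numbers are invented. The manufacturer holds the code-to-treatment table, prescribers report outcomes against pack codes only, and the two are joined at the final analysis:

```python
import random

random.seed(42)

# Manufacturer's side: each pack code maps to active drug or placebo.
# Only the manufacturer holds this table, so prescriber and patient
# remain blinded.
codes = [f"PACK{i:05d}" for i in range(10_000)]
arms = ["active"] * 5_000 + ["placebo"] * 5_000
random.shuffle(arms)
pack_allocation = dict(zip(codes, arms))

# Prescribers' side: outcomes are reported against pack codes only.
reported_outcomes = [
    {"pack_code": "PACK00007", "responded": True},
    {"pack_code": "PACK00042", "responded": False},
    # ...accumulated over the whole conditional-licensing period
]

def response_rates(outcomes, allocation):
    """Unblinded analysis, run by the manufacturer once enough
    patients have been treated."""
    totals = {"active": [0, 0], "placebo": [0, 0]}   # [responders, patients]
    for record in outcomes:
        arm = allocation[record["pack_code"]]
        totals[arm][0] += int(record["responded"])
        totals[arm][1] += 1
    return {arm: (r / n if n else None) for arm, (r, n) in totals.items()}

print(response_rates(reported_outcomes, pack_allocation))
```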

This wouldn’t work in all cases. There will be times when other drugs are available but would not be compatible with the new drug. You could not then ethically put patients in a position where a drug is available but they get no drug at all. But in cases where no effective treatment is available, or the new drug can be used in addition to standard treatments, use of a placebo in this way is perfectly acceptable from an ethical point of view.

Obviously even when placebo treatment is a reasonable option, there would be logistical challenges with this approach (for example, making sure that the same patient gets the same drug when their first pack of medicine runs out). I don’t pretend it would be easy. But I believe it may be preferable to a system in which the pharmaceutical industry has to abandon working on personalised medicine because it has become unaffordable.

Made up statistics on sugar tax

I woke up this morning to the sound of Radio 4 telling me that Cancer Research UK had done an analysis showing that a 20% tax on sugary drinks could reduce the number of obese people in the UK by 3.7 million by 2025. (That could be the start of the world’s worst ever blues song, but it isn’t.)

My first thought was that was rather surprising, as I wasn’t aware of any evidence on how sugar taxes impact on obesity. So I went hunting for the report with interest.

Bizarrely, Cancer Research UK didn’t link to the full report from their press release (once you’ve read the rest of this post, you may conclude that perhaps they were too embarrassed to let anyone see it), but I tracked it down here. Well, I’m not sure even that is the full report. It says it’s a “technical summary”, but the word “summary” makes me wonder if it is still not the full report. But that’s all that seems to be made publicly available.

There are a number of problems with this report. Christopher Snowdon has blogged about some of them here, but I want to focus on the extent to which the model is based on untested assumptions.

It turns out that the conclusions were indeed not based on any empirical data about how a sugar tax would impact on obesity, but on a modelling study. This study made various assumptions, principally the following:

  1. The price elasticity of demand for sugary drinks (ie the extent to which an increase in price reduces consumption)
  2. The extent to which a reduction in sugary drink consumption would reduce total calorie intake
  3. The effect of total calorie intake on body mass

The authors get 0/10 for transparent reporting for the first of those, as they don’t actually say what price elasticity they used. That’s pretty basic stuff, and not to report it is somewhat akin to reporting the results of a clinical trial of a new drug and not saying what dose of the drug you used.

However, the report does give a reference for their price elasticity data, namely this paper. I must say I don’t find the methods of that paper easy to follow. It’s not at all clear to me whether the price elasticities they calculated were actually based on empirical data or themselves the results of a modelling exercise. But the data that are used in that paper come from the period 2008 to 2010, when the UK was in the depths of recession, and when it might be hypothesised that price elasticities were greater than in more economically buoyant times. They don’t give a single figure for price elasticity, but a range of 0.8 to 0.9. In other words, a 20% increase in the price of sugary drinks would be expected to lead to a 16-18% decrease in the quantity that consumers buy. At least in the depths of the worst recession since the 1930s.

That figure for price elasticity is a crucial input to the model, and if it is wrong, then the answers of the model will be wrong.

The next input is the extent to which a reduction in sugary drink consumption reduces total calorie intake.  Here, an assumption is made that total calorie intake is reduced by 60% of the amount of calories not consumed in sugary drinks. Or in other words, that if you forego the calories of a sugary drink, you only make up 40% of those from elsewhere.

Where does that 60% figure come from? Well, they give a reference to this paper. And how did that paper arrive at the 60% figure? Well, they in turn give a reference to this paper. And where did that get it from? As far as I can tell, it didn’t, though I note it reports the results of a clinical study in people trying to lose weight by dieting. Even if that 60% figure is based on actual data from that study, rather than just plucked out of thin air, I very much doubt that data on calorie substitution taken from people trying to lose weight would be applicable to the general population.

What about the third assumption, the weight loss effects of reduced calorie intake? We are told that reducing energy intake by 100 kJ per day results in 1 kg body weight loss. The citation given for that information is this study, which is another modelling study. Are none of the assumptions in this study based on actual empirical data?

A really basic part of making predictions by mathematical modelling is to use sensitivity analyses. The model is based on various assumptions, and sensitivity analyses answer the questions of what happens if those assumptions were wrong. Typically, the inputs to the model are varied over plausible ranges, and then you can see how the results are affected.

Unfortunately, no sensitivity analysis was done. This, folks, is real amateur hour stuff. The reason for the lack of sensitivity analysis is given in the report as follows:

“it was beyond the scope of this project to include an extensive sensitivity analysis. The microsimulation model is complex involving many thousands of calculations; therefore sensitivity analysis would require many thousands of consecutive runs using super computers to undertake this within a realistic time scale.”

That has to be one of the lamest excuses for shoddy methods I’ve seen in a long time. This is 2016. You don’t have to run the analysis on your ZX Spectrum.
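To illustrate the point, here is a back-of-envelope sketch of the whole chain of assumptions together with a crude sensitivity sweep over plausible ranges; it runs in milliseconds on an ordinary laptop. The baseline figure for energy from sugary drinks is invented, and this is nothing like the report's microsimulation, but it shows how cheap it is to ask what happens if the assumptions are wrong:

```python
import itertools

def weight_change_kg(price_rise=0.20, elasticity=0.85, substitution=0.60,
                     kj_per_day_per_kg=100.0, drink_kj_per_day=300.0):
    """Back-of-envelope chain: price rise -> drop in sugary drink
    consumption -> net reduction in energy intake -> eventual change
    in body weight (using the 1 kg per 100 kJ/day assumption).
    drink_kj_per_day is an invented baseline, for illustration only."""
    consumption_drop = price_rise * elasticity       # e.g. 20% x 0.85 = 17%
    kj_avoided = drink_kj_per_day * consumption_drop
    net_kj_reduction = kj_avoided * substitution     # 60% not replaced elsewhere
    return net_kj_reduction / kj_per_day_per_kg

# A crude sensitivity sweep over plausible ranges of the two most
# doubtful inputs takes milliseconds, not a supercomputer.
results = [weight_change_kg(elasticity=e, substitution=s)
           for e, s in itertools.product([0.6, 0.8, 0.9, 1.1],
                                         [0.3, 0.6, 0.9])]
print(f"weight change ranges from {min(results):.2f} to {max(results):.2f} kg")
```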

So this result is based on a bunch of heroic assumptions which have little basis in reality, and the sensitivity of the model to those assumptions were not tested. Forgive me if I’m not convinced.

 

The dishonesty of the All Trials campaign

The All Trials campaign is very fond of quoting the statistic that only half of all clinical trials have ever been published. That statistic is not based on good evidence, as I have explained at some length previously.

Now, if they are just sending the odd tweet or writing the odd blogpost with dodgy statistics, that is perhaps not the most important thing in the whole world, as the wonderful XKCD pointed out some time ago:

[XKCD cartoon: someone is wrong on the internet]

But when they are using dodgy statistics for fundraising purposes, that is an entirely different matter. On their USA fundraising page, they prominently quote the evidence-free statistic about half of clinical trials not having been published.

Giving people misleading information when you are trying to get money from them is a serious matter. I am not a lawyer, but my understanding is that the definition of fraud is not dissimilar to that.

The All Trials fundraising page allows comments to be posted, so I posted a comment questioning their “half of all clinical trials unpublished” statistic. Here is a screenshot of the comments section of the page after I posted my comment, in case you want to see what I wrote:

[Screenshot from 2016-02-02 18:16:32]

Now, if the All Trials campaign genuinely believed their “half of all trials unpublished” statistic to be correct, they could have engaged with my comment. They could have explained why they thought they were right and I was wrong. Perhaps they thought there was an important piece of evidence that I had overlooked. Perhaps they thought there was a logical flaw in my arguments.

But no, they didn’t engage. They just deleted the comment within hours of my posting it. That is the stuff of homeopaths and anti-vaccinationists. It is not the way that those committed to transparency and honesty in science behave.

I am struggling to think of any reasonable explanation for this behaviour other than that they know their “half of all clinical trials unpublished” statistic to be on shaky ground and simply do not wish anyone to draw attention to it. That, in my book, is dishonest.

This is such a shame. The stated aim of the All Trials campaign is entirely honourable. They say that their aim is for all clinical trials to be published. This is undoubtedly important. All reasonable people would agree that to do a clinical trial and keep the results secret is unethical. I do not see why they need to spoil the campaign by using exactly the sort of intellectual dishonesty themselves that they are campaigning against.