
The Trials Tracker and post-truth politics

The All Trials campaign was founded in 2013 with the stated aim of ensuring that all clinical trials are disclosed in the public domain. This is, of course, an entirely worthy aim. There is no doubt that sponsors of clinical trials have an ethical responsibility to make sure that the results of their trials are made public.

However, as I have written before, I am not impressed by the way the All Trials campaign misuses statistics in pursuit of its aims. Specifically, the statistic they keep promoting, “about half of all clinical trials are unpublished”, is simply not evidence-based. Most recent studies show that the proportion of trials left undisclosed is closer to 20% than to 50%.

The latest initiative by the All Trials campaign is the Trials Tracker. This is a tool that looks at all trials registered on clinicaltrials.gov since 2006 and uses an automated algorithm to determine which of them have been disclosed. They found 45% were undisclosed (27% of industry-sponsored trials and 54% of non-industry trials). So, surely this is evidence to support the All Trials claim that about half of trials are undisclosed, right?

Wrong.

In fact it looks like the true figure for undisclosed trials is not 45%, but at most 21%. Let me explain.

The problem is that an automated algorithm is not very good at determining whether trials are disclosed or not. The algorithm can tell if results have been posted on clinicaltrials.gov, and also searches PubMed for publications with a matching clinicaltrials.gov ID number. You can probably see the flaw in this already. There are many ways that results could be disclosed that would not be picked up by that algorithm.
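To make that concrete, here is a minimal sketch, in Python, of the kind of automated check involved: it asks PubMed (via the public NCBI E-utilities interface) whether any record has been indexed with a given clinicaltrials.gov ID. This is my own illustration of the general approach, not the Trials Tracker’s actual code, and the trial ID shown is just a placeholder.

    # Sketch of an automated "has this trial been published?" check based on
    # PubMed indexing of clinicaltrials.gov ID numbers. Illustration only; not
    # the Trials Tracker's own code. The NCT number below is a placeholder.
    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_has_nct_id(nct_id: str) -> bool:
        """Return True if PubMed has any record matching this NCT number."""
        params = {"db": "pubmed", "term": nct_id, "retmode": "json"}
        resp = requests.get(EUTILS, params=params, timeout=30)
        resp.raise_for_status()
        return int(resp.json()["esearchresult"]["count"]) > 0

    # Results posted only on a sponsor's website, or papers where the NCT
    # number never made it into the PubMed record, are invisible to this check.
    print(pubmed_has_nct_id("NCT00000000"))  # placeholder ID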

Many pharmaceutical companies make the results of clinical trials available on their own websites. The algorithm would not pick that up. Also, although journal publications of clinical trials should ideally be indexed with the clinicaltrials.gov ID number, in practice that system is imperfect, so the automated algorithm misses many journal articles that aren’t correctly indexed with their ID number.

So how bad is the algorithm?

The sponsor with the greatest number of unreported trials, according to the algorithm, is Sanofi. I started by downloading the raw data, picked the first 10 trials sponsored by Sanofi that were supposedly “undisclosed”, and tried searching for results manually.

As an aside, the Trials Tracker team get 7/10 for transparency. They make their raw data available for download, which is great, but they don’t disclose their metadata (descriptions of what each variable in the dataset represents), so it was rather hard work figuring out how to use the data. But I think I figured it out in the end, as after trying a few combinations of interpretations I was able to replicate their published results exactly.
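For anyone who wants to repeat the exercise, the filtering step itself is simple. The sketch below is how I would approach it in Python; the filename and the column names (sponsor, nct_id) are my own guesses, since the dataset comes without metadata, so they will need to be checked against the actual header row.

    # Pull out the supposedly undisclosed Sanofi trials from the Trials Tracker
    # download. The filename and column names here are guesses (the dataset has
    # no metadata), so adjust them to match the actual file. If the file also
    # contains disclosed trials, filter on the relevant flag first.
    import pandas as pd

    df = pd.read_csv("trials_tracker_data.csv")    # hypothetical filename
    sanofi = df[df["sponsor"].str.contains("Sanofi", case=False, na=False)]

    print(len(sanofi), "Sanofi trials listed as undisclosed")
    print(sanofi["nct_id"].head(10).tolist())      # the first 10, to check by hand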

Anyway, of those 10 “undisclosed” trials by Sanofi, 8 of them were reported on Sanofi’s own website, and one of the remaining 2 was published in a journal. So in fact only 1 of the 10 was actually undisclosed. I posted this information in a comment on the journal article in which the Trials Tracker is described, and it prompted another reader, Tamas Ferenci, to investigate the Sanofi trials more systematically. He found that 227 of the 285 Sanofi trials (80%) listed as undisclosed by Trials Tracker were in fact published on Sanofi’s website. He then went on to look at “undisclosed” trials sponsored by AstraZeneca, and found that 38 of the 68 supposedly undisclosed trials (56%) were actually published on AstraZeneca’s website. Ferenci’s search only looked at company websites, so it’s possible that more of the trials were reported in journal articles.

The above analyses only looked at a couple of sponsors, and we don’t know if they are representative. So to investigate more systematically the extent to which the Trials Tracker algorithm underestimates disclosure, I searched for results manually for 100 trials: a random selection of 50 industry trials and a random selection of 50 non-industry trials.

I found that 54% (95% confidence interval 40-68%) of industry trials and 52% (95% CI 38-66%) of non-industry trials that had been classified as undisclosed by Trials Tracker were available in the public domain. This might be an underestimate, as my search was not especially thorough. I searched Google, Google Scholar, and PubMed, and if I couldn’t find any results in a few minutes then I gave up. A more systematic search might have found more articles.
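The percentages and confidence intervals above come from straightforward binomial calculations on the two samples of 50 trials (27 of the 50 industry trials and 26 of the 50 non-industry trials turned out to be disclosed). A minimal sketch, using the ordinary normal approximation, reproduces the intervals quoted above to the nearest percentage point:

    # Disclosure proportions and approximate 95% confidence intervals for the
    # two manually checked samples (27/50 industry, 26/50 non-industry).
    from math import sqrt

    def prop_ci(found: int, n: int, z: float = 1.96):
        p = found / n
        se = sqrt(p * (1 - p) / n)   # normal-approximation standard error
        return p, p - z * se, p + z * se

    for label, found, n in [("industry", 27, 50), ("non-industry", 26, 50)]:
        p, lo, hi = prop_ci(found, n)
        print(f"{label}: {p:.0%} disclosed (95% CI {lo:.0%} to {hi:.0%})")
    # industry: 54% disclosed (95% CI 40% to 68%)
    # non-industry: 52% disclosed (95% CI 38% to 66%)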

If you’d like to check the results yourself, my findings are in a csv file here. This follows the same structure as the original dataset (I’d love to be able to give you the metadata for that, but as mentioned above, I can’t), but with the addition of 3 variables at the end. “Disclosed” specifies whether the trial was disclosed, and if so, how (journal, company website, etc). It’s possible that trials were disclosed in more than one place, but once I’d found a trial in one place I stopped searching. “Link” is a link to the results if available, and “Comment” is any other information that struck me as relevant, such as whether a trial was terminated prematurely or was of a product which has since been discontinued.

Putting these figures together with the Trials Tracker main results suggests that only 12% of industry trials and 26% of non-industry trials are undisclosed, or 21% overall (34% of the trials were sponsored by industry). And given the rough and ready nature of my search strategy, this is probably an upper bound for the proportion of undisclosed trials. A far cry from “about half”, and in fact broadly consistent with the recent studies showing that about 80% of trials are disclosed.

It’s also worth noting that industry are clearly doing better at disclosure than academia. Much of the narrative that the All Trials campaign has encouraged is of the form “evil secretive Big Pharma deliberately withholding their results”. The data don’t seem to support this. It seems far more likely that trials are undisclosed simply because triallists lack the resources to write them up for publication. Research in industry is generally better funded than research in academia, and my guess is that the better funding explains why industry do better at disclosing their results. I and some colleagues have previously suggested that one way to increase trial disclosure rates would be to ensure that funders of research ringfence a part of their budget specifically for the costs of publication.
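For clarity, the arithmetic behind the combined estimate is simply the Trials Tracker’s own undisclosed rates (27% of industry trials, 54% of non-industry trials) scaled down by the proportions of “undisclosed” trials that my manual checks found to be genuinely undisclosed (46% and 48% respectively), weighted by the 34% industry share:

    # Combine the Trials Tracker headline rates with the manual re-checking
    # above to estimate the proportion of trials that are genuinely undisclosed.
    tracker_undisclosed = {"industry": 0.27, "non_industry": 0.54}          # Trials Tracker figures
    truly_undisclosed = {"industry": 1 - 0.54, "non_industry": 1 - 0.52}    # from the manual checks
    industry_share = 0.34

    rates = {k: tracker_undisclosed[k] * truly_undisclosed[k] for k in tracker_undisclosed}
    overall = industry_share * rates["industry"] + (1 - industry_share) * rates["non_industry"]

    print(rates)              # industry ~0.12, non-industry ~0.26
    print(round(overall, 2))  # ~0.21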

There are some interesting features of the 23 out of 50 industry-sponsored trials that really did seem to be undisclosed. 9 of them were not trials of a drug intervention. Of the 14 undisclosed drug trials, 4 were of products that had since been discontinued and a further 3 recruited fewer than 12 subjects, so none of those 7 studies is likely to be relevant to clinical practice. It seems that undisclosed industry-sponsored drug trials of relevance to clinical practice are very rare indeed.

The Trials Tracker team would no doubt respond by saying that the trials missed by their algorithm have been badly indexed, which is bad in itself. And they would be right about that. Trial sponsors should update clinicaltrials.gov with their results. They should also make sure that the clinicaltrials.gov ID number is included in the publication (although in several cases of published trials that were missed by the algorithm, the ID number was in fact included in the abstract of the paper, so this seems to be a fault of Medline indexing rather than any fault of the triallists).

However, the claim made by the Trials Tracker is not that trials are badly indexed. If they stuck to making only that claim, then the Trials Tracker would be a perfectly worthy and admirable project. But the problem is they go beyond that, and claim something which their data simply do not show. Their claim is that the trials are undisclosed. This is just wrong. It is another example of what seems to be all the rage these days, namely “post-truth politics”. It is no different from when the Brexit campaign said “We spend £350 million a week on the EU and could spend it on the NHS instead” or when Donald Trump said, well, pretty much every time his lips moved really.

Welcome to the post-truth world.

 

Are a fifth of drug trials really designed for marketing purposes?

A paper by Barbour et al was published in the journal Trials a few weeks ago making the claim that “a fifth of drug trials published in the highest impact general medical journals in 2011 had features that were suggestive of being designed for marketing purposes”.

That would be bad if it were true. Clinical trials are supposed to help to advance medical science and learn things about drugs or other interventions that we didn’t know before. They are not supposed to be simply designed to help promote the use of the drug. According to an editorial by Sox and Rennie, marketing trials are not really about testing hypotheses, but “to get physicians in the habit of prescribing a new drug.”

Marketing trials are, in my opinion, clearly unethical, and the question of how common they are is an important one.

Well, according to Barbour et al, 21% of trials in high impact medical journals were designed for marketing purposes. So how did they come up with that figure?

That, unfortunately, is where the paper starts to go downhill. They chose a set of criteria which they believed were associated with marketing trials. Those criteria were:

“1) a high level of involvement of the product manufacturer in study design 2) data analysis, 3) and reporting of the study, 4) recruitment of small numbers of patients from numerous study sites for a common disease when they could have been recruited without difficulty from fewer sites, 5) misleading abstracts that do not report clinically relevant findings, and 6) conclusions that focus on secondary end-points and surrogate markers”

Those criteria appear to be somewhat arbitrary. Although Barbour et al give 4 citations to back them up, none of the papers cited provides any data validating the criteria.

A sample of 194 papers from 6 top medical journals was then assessed against those criteria by 6 raters (or sometimes 5, as raters who were journal editors did not assess papers from their own journal), and each rater rated each paper as “no”, “maybe”, or “yes” for how likely it was to be a marketing trial. Trials rated “yes” by 4 or more raters were classified as marketing trials; trials with fewer than 4 “yes” ratings could also be classified as marketing trials if they received no more than 3 “no” ratings and a subsequent consensus discussion decided they should be.
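My reading of that decision rule, sketched in code purely to make it explicit (this is not the authors’ own code), is as follows:

    # Sketch of the Barbour et al classification rule as I read it (not their code).
    # Each trial receives "no" / "maybe" / "yes" ratings from 5 or 6 raters.
    def classify(ratings: list[str]) -> str:
        yes, no = ratings.count("yes"), ratings.count("no")
        if yes >= 4:
            return "marketing trial"
        if no <= 3:
            return "consensus discussion"   # may still end up classified as marketing
        return "not a marketing trial"

    print(classify(["yes"] * 5 + ["maybe"]))                      # marketing trial
    print(classify(["yes", "maybe", "maybe", "no", "no", "no"]))  # consensus discussion
    print(classify(["no"] * 4 + ["maybe", "yes"]))                # not a marketing trial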

The characteristics of marketing trials were then compared with other trials. Not surprisingly, the characteristics described above were more common in the trials characterised as marketing trials. Given that that’s how the “marketing” trials were defined, that outcome was completely predictable. This is a perfectly circular argument. Though to be fair to the authors, they do acknowledge the circularity of their argument in the discussion.

One of the first questions that came to my mind was how well the 6 raters agreed. Unfortunately, no measure of inter-rater agreement is presented in the paper.

Happily, the authors get top marks for their commitment to transparency here. When I emailed to ask for their raw data so that I could calculate the inter-rater agreement myself, the raw data was sent promptly. If only all authors were so co-operative.

So, how well did the raters agree? Not very well, it turns out. The kappa coefficient for agreement among the raters was a mere 0.36 (kappa runs from below zero, for agreement worse than chance, through 0 for agreement no better than chance, up to 1 for perfect agreement, with values above about 0.7 generally considered acceptable). This does not suggest that the determination of what counted as a marketing trial was obvious.
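For anyone wanting to reproduce that figure from the shared ratings, agreement among several raters on a categorical scale like this is usually measured with Fleiss’ kappa. The sketch below assumes a table with one row per trial and one column per rater, coded “no”/“maybe”/“yes”; the filename and layout are my assumptions, not a description of the authors’ actual file.

    # Fleiss' kappa for multi-rater agreement, from a table with one row per
    # trial and one column per rater ("no" / "maybe" / "yes"). The filename and
    # layout are assumptions; rows with a missing rater may need dropping first.
    import pandas as pd
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    ratings = pd.read_csv("barbour_ratings.csv")      # hypothetical filename
    table, _ = aggregate_raters(ratings.to_numpy())   # per-trial counts of each category
    print(round(fleiss_kappa(table), 2))              # compare with the 0.36 reported above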

To look at this another way, of the 41 trials characterised as marketing trials, only 4 of those trials were rated “yes” by all raters, and only 9 were rated “yes” by all but one. This really doesn’t suggest that the authors could agree on what constituted a marketing trial.

So what about those 4 trials rated “yes” by all raters? Let’s take a look at them and see if the conclusion that they were primarily for marketing purposes stacks up.

The first paper is a report of 2 phase III trials of linaclotide for chronic constipation. These trials appear to have been an important component of the clinical trial data leading to the licensing of linaclotide, as they are mentioned in the press release in which the FDA describes the licensing of the drug. So the main purpose of the study seems to have been to get the drug licensed. And in contrast to point 6) in the criteria for determining a marketing study, the conclusions were based squarely on the primary endpoint. As for point 5), obviously the FDA thought the findings were clinically relevant, as they were prepared to grant the drug a license on the back of them.

The second is a report of 2 phase III trials of rifaximin for patients with irritable bowel syndrome. Again, the FDA press release shows that the main purpose of the studies was to get the drug licensed. And again, the conclusions were based on the primary endpoint and were clearly considered clinically relevant by the FDA.

The third paper reports a comparative trial of tiotropium versus salmeterol for the prevention of exacerbations of COPD. Tiotropium was already licensed when this trial was done so this trial was not for the purposes of original licensing, but it does appear that it was important in subsequent changes to the licensing, as it is specifically referred to in the prescribing information.  Again, the conclusions focussed on the primary outcome measure, which was prevention of exacerbations: certainly a clinically important outcome in COPD.

The fourth paper reports a study that was also done after the drug, in this case eplerenone, was originally licensed. The study looked at overall mortality in patients with heart failure. Again, the study is specifically referenced in the prescribing information, and again, the study’s main conclusions are based on the primary outcome measure. In this case, the primary outcome measure was overall mortality. How much more clinically relevant do you want it to be?

Those 4 studies are the ones with the strongest evidence of being designed for marketing purposes. I haven’t looked at any of the others, but I think it’s fair to say that there is really no reason to think that those 4 were designed primarily for marketing.

Of course in one sense, you could argue that they are all marketing studies. You cannot market a drug until it is licensed. So doing studies with the aim of getting a drug licensed (or its licensed indications extended) could be regarded as for marketing purposes. But I’m pretty sure that’s not what most people would understand by the term.

So unfortunately, I think Barbour et al have not told us anything useful about how common marketing studies are.

I suspect they are quite rare. I have worked in clinical research for about 20 years, and have worked on many trials in that time. I have never worked on a study that I would consider to be designed mainly for marketing. All the trials I have worked on have had a genuine scientific question behind them.

This is not to deny, of course, that marketing trials exist. Barbour et al refer to some well documented examples in their paper. Also, in my experience as a research ethics committee member, I have certainly seen studies that seemed to serve little scientific purpose and for which the accusation of being designed mainly for marketing would be reasonable.

Again, they are rare: certainly nothing like 1 in 5. I have been an ethics committee member for 13 years, and typically review about 50 or so studies per year. The number of studies I have suspected of being marketing studies in that time could be counted on the fingers of one hand. If it had been up to me, I would not have given those studies ethical approval, but other members of my ethics committee do not share my views on the ethics of marketing trials, so I was outvoted and the trials were approved.

So although Barbour et al ask an important question, it does not seem to me that they have answered it. Still, by being willing to share their raw data, they have participated fully in the scientific process. Publishing something and letting others scrutinise your results is how science is supposed to be done, and for that they deserve credit.