Are a fifth of drug trials really designed for marketing purposes?

A paper by Barbour et al, published in the journal Trials a few weeks ago, makes the claim that “a fifth of drug trials published in the highest impact general medical journals in 2011 had features that were suggestive of being designed for marketing purposes”.

That would be bad if it were true. Clinical trials are supposed to help to advance medical science and learn things about drugs or other interventions that we didn’t know before. They are not supposed to be simply designed to help promote the use of the drug. According to an editorial by Sox and Rennie, marketing trials are not really about testing hypotheses, but “to get physicians in the habit of prescribing a new drug.”

Marketing trials are, in my opinion, clearly unethical, and the question of how common they are is an important one.

Well, according to Barbour et al, 21% of trials in high impact medical journals were designed for marketing purposes. So how did they come up with that figure?

That, unfortunately, is where the paper starts to go downhill. They chose a set of criteria which they believed were associated with marketing trials. Those criteria were:

“1) a high level of involvement of the product manufacturer in study design 2) data analysis, 3) and reporting of the study, 4) recruitment of small numbers of patients from numerous study sites for a common disease when they could have been recruited without difficulty from fewer sites, 5) misleading abstracts that do not report clinically relevant findings, and 6) conclusions that focus on secondary end-points and surrogate markers”

Those criteria appear somewhat arbitrary. Although Barbour et al give 4 citations to back them up, none of the cited papers provides any data validating the criteria.

A sample of 194 papers from 6 top medical journals was then assessed against those criteria by 6 raters (or sometimes 5, as raters who were journal editors didn’t assess papers that came from their own journal). Each rater rated each paper as “no”, “maybe”, or “yes” for how likely it was to be a marketing trial. Trials rated “yes” by 4 or more raters were considered to be marketing trials, and trials with fewer than 4 “yes” ratings could also be classified as marketing trials if they had no more than 3 “no” ratings and a subsequent consensus discussion decided they should be.
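That decision rule is a little convoluted in prose, so here is a minimal sketch of it in Python. The function name and the consensus flag are my own labels for illustration, not anything taken from the paper.

```python
from collections import Counter

def classify_trial(ratings, consensus_says_marketing=False):
    """Apply the paper's classification rule to one trial.

    ratings: list of "yes"/"maybe"/"no" verdicts, one per rater (5 or 6).
    consensus_says_marketing: outcome of the consensus discussion, which
    only matters for trials falling into the borderline zone.
    """
    counts = Counter(ratings)
    if counts["yes"] >= 4:
        # Rated "yes" by 4 or more raters: a marketing trial outright.
        return True
    if counts["no"] <= 3:
        # Fewer than 4 "yes" but no more than 3 "no": the consensus
        # discussion decides.
        return consensus_says_marketing
    return False

# 3 "yes", 1 "maybe", 2 "no": borderline, so consensus decides.
print(classify_trial(["yes", "yes", "yes", "maybe", "no", "no"],
                     consensus_says_marketing=True))   # True
# 4 "no" ratings: never classified as a marketing trial.
print(classify_trial(["no", "no", "no", "no", "maybe", "yes"]))  # False
```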

The characteristics of marketing trials were then compared with other trials. Not surprisingly, the characteristics described above were more common in the trials characterised as marketing trials. Given that that’s how the “marketing” trials were defined, that outcome was completely predictable. This is a perfectly circular argument. Though to be fair to the authors, they do acknowledge the circularity of their argument in the discussion.

One of the first questions that came to my mind was how well the 6 raters agreed. Unfortunately, no measure of inter-rater agreement is presented in the paper.

Happily, the authors get top marks for their commitment to transparency here. When I emailed to ask for their raw data so that I could calculate the inter-rater agreement myself, the raw data was sent promptly. If only all authors were so co-operative.

So, how well did the raters agree? Not very well, it turns out. The kappa coefficient for agreement among the raters was a mere 0.36 (kappa runs from −1 to 1, where 0 means agreement no better than chance and 1 means perfect agreement, with values above about 0.7 generally considered acceptable). This does not suggest that the determination of what counted as a marketing trial was obvious.
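For anyone who wants to run this kind of check themselves, here is a minimal sketch using the fleiss_kappa function from statsmodels. The ratings matrix is invented toy data, not the authors’ dataset, and for simplicity it assumes all 6 raters rated every trial (in the real data some trials had only 5 raters).

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy data: rows are trials, columns are raters.
# Codes: 0 = "no", 1 = "maybe", 2 = "yes".
ratings = np.array([
    [2, 2, 2, 2, 2, 2],   # unanimous "yes"
    [2, 2, 1, 2, 0, 2],
    [0, 0, 1, 0, 0, 0],
    [1, 0, 2, 1, 0, 1],
])

# Convert to a trials x categories count table, then compute kappa.
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table))
```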

To look at this another way, of the 41 trials classified as marketing trials, only 4 were rated “yes” by all raters, and only 9 were rated “yes” by all but one. This really doesn’t suggest that the raters could agree on what constituted a marketing trial.
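That unanimity check is also easy to reproduce from the same kind of matrix; again, the numbers below are toy values rather than the study data.

```python
import numpy as np

# Same layout as before: rows are trials, columns are raters, 2 = "yes".
ratings = np.array([
    [2, 2, 2, 2, 2, 2],
    [2, 2, 1, 2, 0, 2],
    [0, 0, 1, 0, 0, 0],
    [1, 0, 2, 1, 0, 1],
])

n_raters = ratings.shape[1]
yes_votes = (ratings == 2).sum(axis=1)        # "yes" count per trial
print((yes_votes == n_raters).sum())          # unanimous "yes" trials
print((yes_votes == n_raters - 1).sum())      # "yes" from all but one rater
```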

So what about those 4 trials rated “yes” by all raters? Let’s take a look at them and see whether the conclusion that they were designed primarily for marketing purposes stacks up.

The first paper is a report of 2 phase III trials of linaclotide for chronic constipation. This appears to have been an important component of the clinical trial data leading to the licensing of linaclotide, as the trials are mentioned in the press release in which the FDA describes the licensing of the drug. So the main purpose of the study seems to have been to get the drug licensed. And in contrast to point 6) in the criteria for determining a marketing study, the conclusions were based squarely on the primary endpoint. As for point 5), the FDA evidently thought the findings were clinically relevant, as it was prepared to grant the drug a license on the back of them.

The second is a report of 2 phase III trials of rifaximin for patients with irritable bowel syndrome. Again, the FDA press release shows that the main purpose of the studies was to get the drug licensed. And again, the conclusions were based on the primary endpoint and were clearly considered clinically relevant by the FDA.

The third paper reports a comparative trial of tiotropium versus salmeterol for the prevention of exacerbations of COPD. Tiotropium was already licensed when this trial was done, so the trial was not for the purposes of original licensing, but it does appear to have been important in subsequent changes to the licensing, as it is specifically referred to in the prescribing information. Again, the conclusions focussed on the primary outcome measure, which was prevention of exacerbations: certainly a clinically important outcome in COPD.

The fourth paper, on eplerenone, also reports a trial done after the drug was originally licensed. The study looked at overall mortality in patients with heart failure. Again, the study is specifically referenced in the prescribing information, and again, the study’s main conclusions are based on the primary outcome measure. In this case, the primary outcome measure was overall mortality. How much more clinically relevant do you want it to be?

Those 4 studies are the ones with the strongest evidence of being designed for marketing purposes. I haven’t looked at any of the others, but I think it’s fair to say that there is really no reason to think that those 4 were designed primarily for marketing.

Of course in one sense, you could argue that they are all marketing studies. You cannot market a drug until it is licensed. So doing studies with the aim of getting a drug licensed (or its licensed indications extended) could be regarded as for marketing purposes. But I’m pretty sure that’s not what most people would understand by the term.

So unfortunately, I think Barbour et al have not told us anything useful about how common marketing studies are.

I suspect they are quite rare. I have worked in clinical research for about 20 years, and have worked on many trials in that time. I have never worked on a study that I would consider to be designed mainly for marketing. All the trials I have worked on have had a genuine scientific question behind them.

This is not to deny, of course, that marketing trials exist. Barbour et al refer to some well documented examples in their paper. Also, in my experience as a research ethics committee member, I have certainly seen studies that seemed to serve little scientific purpose, for which the accusation of being designed mainly for marketing would be reasonable.

But again, such trials are rare: certainly nothing like 1 in 5. I have been an ethics committee member for 13 years, and typically review about 50 or so studies per year. The number of studies I have suspected of being marketing studies in that time could be counted on the fingers of one hand. If it had been up to me, I would not have given those studies ethical approval, but other members of my ethics committee did not share my views on the ethics of marketing trials, so I was outvoted and the trials were approved.

So although Barbour et al ask an important question, it does not seem to me that they have answered it. Still, by being willing to share their raw data, they have participated fully in the scientific process. Publishing something and letting others scrutinise your results is how science is supposed to be done, and for that they deserve credit.