Category Archives: Cancer

Solving the economics of personalised medicine

It’s a well-known fact that many drugs for many diseases don’t work very well in many patients. If we could identify in advance which patients will benefit from a drug and which won’t, then drugs could be prescribed in a much more targeted manner. That is actually a lot harder to do than it sounds, but it’s an active area of research, and I am confident that over the coming years and decades medical research will make much progress in that direction.

This is the world of personalised medicine.

Although giving people targeted drugs that are likely to be of substantial benefit to them has obvious advantages, there is one major disadvantage. Personalised medicine simply does not fit the economic model that has evolved for the pharmaceutical industry.

Developing new drugs is expensive. It’s really expensive. Coming up with a precise figure for the cost of developing a new drug is controversial, but some reasonable estimates run into billions of dollars.

The economic model of the pharmaceutical industry is based on the idea of a “blockbuster” drug. You develop a drug like Prozac, Losec, or Lipitor that can be used in millions of patients, and the huge costs of that development can be recouped by the huge sales of the drug.

But what if you are developing drugs based on personalised medicine for narrowly defined populations? Perhaps you have developed a drug for patients with a specific variant of a rare cancer, and it is fantastically effective in those patients, but there may be only a few hundred patients worldwide who could benefit. There is no way you are going to recoup development costs of a billion dollars or more by selling the drug to a few hundred patients, unless you charge each patient a crazily unaffordable sum.

Although the era of personalised medicine is still very much in its infancy, we have already seen this effect at work with drugs like Kadcyla, which works for only a specific subtype of breast cancer patients, but at £90,000 a pop has been deemed too expensive to fund in the NHS. What happens when even more targeted drugs are developed?

I was discussing this question yesterday evening over a nice bottle of Chilean viognier with Chris Winchester. I think between us we may have come up with a cunning plan.

Our idea is as follows. If a drug is being developed for a patient population narrow enough that it could reasonably be considered a “personalised medicine”, different licensing rules would apply. You would no longer have to obtain such a convincing body of evidence of efficacy and safety before licensing. You would need some evidence, of course, but the bar would be set much lower: perhaps some convincing laboratory studies followed by some small clinical trials, which could be run much more cheaply than the typical phase III trials that enrol hundreds of patients and cost many millions.

At that stage, you would not get a traditional drug license that would allow you to market the drug in the normal way. The license would be provisional, with some conditions attached.

So far, this idea is not new. The EMA has already started a pilot project of “adaptive licensing”, which is designed very much in this spirit.

But here comes the cunning bit.

Under our plan, the drug would be licensed to be marketed as a mixture of the active drug and placebo. Some packs of the drug would contain the active drug, and some would contain placebo. Neither the prescriber nor the patient would know whether they have actually received the drug. Obviously patients would need to be told about this and would then have the choice to take part or not. But I don’t think this is worse than the current situation, where at that stage the drug would not be licensed at all, so patients would either have to find a clinical trial (where they may still get placebo) or not get the drug at all.

In effect, every patient who uses the drug during the period of conditional licensing would be taking part in a randomised, double-blind, placebo-controlled trial. Prescribers would be required to collect data on patient outcomes, which, along with a code number on the medication pack, could then be fed back to the manufacturer and analysed. The manufacturer would know from the code number whether the patient received the drug or placebo.
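
The pack-coding scheme described above can be sketched in a few lines. This is a toy illustration only: the code format, the 50:50 split, and the function names are my own assumptions, not part of any real licensing proposal.

```python
import random

def allocate_packs(n_packs, active_fraction=0.5, seed=None):
    """Randomly assign opaque pack codes to active drug or placebo.

    Prescribers and patients see only the codes; the manufacturer keeps
    the code -> arm key, which is used to unblind the outcome data at
    the final analysis.
    """
    rng = random.Random(seed)
    n_active = round(n_packs * active_fraction)
    arms = ["active"] * n_active + ["placebo"] * (n_packs - n_active)
    rng.shuffle(arms)  # randomise which pack gets which arm
    codes = [f"PK{i:06d}" for i in range(n_packs)]  # opaque pack codes
    return codes, dict(zip(codes, arms))

# Outcomes are reported against pack codes; only the key can unblind them.
codes, key = allocate_packs(1000, seed=42)
```

A real system would of course also have to guarantee that repeat packs for the same patient come from the same arm, which is exactly the logistical challenge discussed below.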

Once sufficient numbers of patients had been treated, then the manufacturer could run the analysis and the provisional license could be converted to a full license if the results show good efficacy and safety, or revoked if they don’t.

This wouldn’t work in all cases. There will be times when other drugs are available but would not be compatible with the new drug. You could not then ethically put patients in a position where a drug is available but they get no drug at all. But in cases where no effective treatment is available, or the new drug can be used in addition to standard treatments, use of a placebo in this way is perfectly acceptable from an ethical point of view.

Obviously even when placebo treatment is a reasonable option, there would be logistical challenges with this approach (for example, making sure that the same patient gets the same drug when their first pack of medicine runs out). I don’t pretend it would be easy. But I believe it may be preferable to a system in which the pharmaceutical industry has to abandon working on personalised medicine because it has become unaffordable.

Dangerous nonsense about vaping

If you thought you already had a good contender for “most dangerous, irresponsible, and ill-informed piece of health journalism of 2015”, then I’m sorry to tell you that it has been beaten into second place at the last minute.

With less than 36 hours left of 2015, I am confident that this article by Sarah Knapton in the Telegraph will win the title.

The article is titled “E-cigarettes are no safer than smoking tobacco, scientists warn”. The first paragraph is:

“Vaping is no safer that [sic] smoking, scientists have warned after finding that e-cigarette vapour damages DNA in ways that could lead to cancer.”

There are such crushing levels of stupid in this article it’s hard to know where to start. But perhaps I’ll start by pointing out that a detailed review of the evidence on vaping by Public Health England, published earlier this year, concluded that e-cigarettes are about 95% less harmful than smoking.

If you dig into the detail of that review, you find that most of the residual 5% is the harm of nicotine addiction. It’s debatable whether that can really be called a harm, given that most people who vape are already addicted to nicotine as a result of years of smoking cigarettes.

But either way, the evidence shows that vaping, while it may not be 100% safe (though let’s remember that nothing is 100% safe: even teddy bears kill people), is considerably safer than smoking. This should not be a surprise. We have a pretty good understanding of what the toxic components of cigarette smoke are that cause all the damage, and most of those are either absent from e-cigarette vapour or present at much lower concentrations.

So the question of whether vaping is 100% safe is not the most relevant thing here. The question is whether it is safer than smoking. Nicotine addiction is hard to beat, and if a smoker finds it impossible to stop using nicotine, but can switch from smoking to vaping, then that is a good thing for that person’s health.

Now, nothing is ever set in stone in science. If new evidence comes along, we should always be prepared to revise our beliefs.

But obviously to go from a conclusion that vaping is 95% safer than smoking to concluding they are both equally harmful would require some pretty robust evidence, wouldn’t it?

So let’s look at the evidence Knapton uses as proof that all the previous estimates were wrong and vaping is in fact as harmful as smoking.

The paper it was based on is this one, published in the journal Oral Oncology. (Many thanks to @CaeruleanSea for finding the link for me, which had defeated me after Knapton gave the wrong journal name in her article.)

The first thing to notice about this is that it is all lab based, using cell cultures, and so tells us little about what might actually happen in real humans. But the real kicker is that if we are going to compare vaping and smoking and conclude that they are as harmful as each other, then the cell cultures should have been exposed to equivalent amounts of e-cigarette vapour and cigarette smoke.

The paper describes how solutions were made by drawing either the vapour or smoke through cell media. We are then told that the cells were treated with the vaping medium every 3 days for up to 8 weeks. So presumably the cigarette medium was also applied every 3 days, right?

Well, no. Not exactly. This is what the paper says:

“Because of the high toxicity of cigarette smoke extract, cigarette-treated samples of each cell line could only be treated for 24 h.”

Yes, that’s right. The cigarette smoke was applied at a much lower intensity, because otherwise it killed the cells altogether. So how can you possibly conclude that vaping is no worse than smoking, when smoking is so harmful it kills the cells altogether and makes it impossible to do the experiment?

And yet despite that, the cigarettes still had a larger effect than the vaping. It is also odd that the results for cigarettes are not presented at all for some of the assays. I wonder if that’s because it had killed the cells and made the assays impossible? As primarily a clinical researcher, I’m not an expert in lab science, but not showing the results of your positive control seems odd to me.

But the paper still shows that the e-cigarette extract was harming cells, so that’s still a worry, right?

Well, there is the question of dose. It’s hard for me to know from the paper how realistic the doses were, as this is not my area of expertise, but the press release accompanying this paper (which may well be the only thing that Knapton actually read before writing her article) tells us the following:

“In this particular study, it was similar to someone smoking continuously for hours on end, so it’s a higher amount than would normally be delivered,”

Well, most things probably damage cells in culture if used at a high enough dose, so I don’t think this study really tells us much. All it tells us is that cigarettes do far more damage to cell cultures than e-cigarette vapour does. Because, and I can’t emphasise this point enough, THEY COULDN’T DO THE STUDY WITH EQUIVALENT DOSES OF CIGARETTE SMOKE BECAUSE IT KILLED ALL THE CELLS.

A charitable explanation of how Knapton could write such nonsense might be that she simply took the press release on trust (to be clear, the press release also makes the claim that vaping is as dangerous as smoking) and didn’t have time to check it. But leaving aside the question of whether a journalist on a major national newspaper should be regurgitating press releases without any kind of fact checking, I note that many people (myself included) have pointed out to Knapton on Twitter that there are flaws in the article. Her response has been not to engage with such criticism, but to insist she is right and to block anyone who disagrees: the Twitter equivalent of the “la la la I’m not listening” argument.

It seems hard to come up with any explanation other than that Knapton likes to write a sensational headline and simply doesn’t care whether it’s true, or, more importantly, what harm the article may do.

And make no mistake: articles like this do have the potential to cause harm. It is perfectly clear that, whether or not vaping is completely safe, it is vastly safer than smoking. It would be a really bad outcome if smokers who were planning to switch to vaping read Knapton’s article and thought “oh, well if vaping is just as bad as smoking, maybe I won’t bother”. Maybe some of those smokers will then go on to die a horrible death of lung cancer, which could have been avoided had they switched to vaping.

Is Knapton really so ignorant that she doesn’t realise that is a possible consequence of her article, or does she not care?

And in case you doubt that anyone would really be foolish enough to believe such nonsense, I’m afraid there is evidence that people do believe it. According to a survey by Action on Smoking and Health (ASH), the proportion of people who believe that vaping is as harmful or more harmful than smoking increased from 14% in 2014 to 22% in 2015. And in the USA, the figures may be even worse: this study found 38% of respondents thought e-cigarettes were as harmful or more harmful than smoking. (Thanks again to @CaeruleanSea for finding the links to the surveys.)

I’ll leave the last word to Deborah Arnott, Chief Executive of ASH:

“The number of ex-smokers who are staying off tobacco by using electronic cigarettes is growing, showing just what value they can have. But the number of people who wrongly believe that vaping is as harmful as smoking is worrying. The growth of this false perception risks discouraging many smokers from using electronic cigarettes to quit and keep them smoking instead which would be bad for their health and the health of those around them.”

Ovarian cancer and HRT

Yesterday’s big health story in the news was the finding that HRT ‘increases ovarian cancer risk’. The scare quotes there, of course, tell us that that’s probably not really true.

So let’s look at the study and see what it really tells us. The BBC can be awarded journalism points for linking to the actual study in the above article, so it was easy enough to find the relevant paper in the Lancet.

This was not new data: rather, it was a meta-analysis of existing studies. Quite a lot of existing studies, as it turns out. The authors found 52 epidemiological studies investigating the association between HRT use and ovarian cancer. This is quite impressive. So despite ovarian cancer being a thankfully rare disease, the analysis included over 12,000 women who had developed ovarian cancer. So whatever other criticisms we might make of the paper, I don’t think a small sample size is going to be one of them.

But what other criticisms might we make of the paper?

Well, the first thing to note is that the data are from epidemiological studies. There is a crucial difference between epidemiological studies and randomised controlled trials (RCTs). If you want to know if an exposure (such as HRT) causes an outcome (such as ovarian cancer), then the only way to know for sure is with an RCT. In an epidemiological study, where you are not doing an experiment, but merely observing what happens in real life, it is very hard to be sure if an exposure causes an outcome.

The study showed that women who take HRT are more likely to develop ovarian cancer than women who don’t take HRT. That is not the same thing as showing that HRT caused the excess risk of ovarian cancer. It’s possible that HRT was the cause, but it’s also possible that women who suffer from unpleasant menopausal symptoms (and so are more likely to take HRT than those women who have an uneventful menopause) are more likely to develop ovarian cancer. That’s not completely implausible. Ovaries are a pretty relevant organ in the menopause, and so it’s not too hard to imagine some common factor that predisposes both to unpleasant menopausal symptoms and an increased ovarian cancer risk.

And if that were the case, then the observed association between HRT use and ovarian cancer would be completely spurious.

So what this study shows us is a correlation between HRT use and ovarian cancer, but as I’ve said many times before, correlation does not equal causation. I know I’ve been moaned at by journalists for endlessly repeating that fact, but I make no apology for it. It’s important, and I shall carry on repeating it until every story in the mainstream media about epidemiological research includes a prominent reminder of that fact.

Of course, it is certainly possible that HRT causes an increased risk of ovarian cancer. We just cannot conclude it from that study.

It would be interesting to look at how biologically plausible it is. Now, I’m no expert in endocrinology, but one little thing I’ve observed makes me doubt the plausibility. We know from a large randomised trial that HRT increases breast cancer risk (at least in the short term). There also seems to be evidence that oral contraceptives increase breast cancer risk but decrease ovarian cancer risk. With my limited knowledge of endocrinology, I would have thought the biological effects of HRT and oral contraceptives on cancer risk would be similar, so it just strikes me as odd that they would have similar effects on breast cancer risk but opposite effects on ovarian cancer risk. Anyone who knows more about this sort of thing than I do, feel free to leave a comment below.

But leaving aside the question of whether the results of the latest study imply a causal relationship (though of course we’re not really going to leave it aside, are we? It’s important!), I think there may be further problems with the study.

The paper tells us, and this was widely reported in the media, that “women who use hormone therapy for 5 years from around age 50 years have about one extra ovarian cancer per 1000 users”.

I’ve been looking at how they arrived at that figure, and it’s not totally clear to me how it was calculated. The crucial data in the paper is this table. The table is given in a bit more detail in their appendix, and I’m reproducing the part of the table for 5 years of HRT use below.

Age group   Baseline risk (per 1000)   Relative excess risk   Absolute excess risk (per 1000)
50-54       1.2                        0.43                   0.52
55-59       1.6                        0.23                   0.37
60-64       2.1                        0.05                   0.10
Total                                                         0.99

The table is a bit complicated, so some words of explanation are probably helpful. The baseline risk is the probability (per 1000) of developing ovarian cancer over a 5 year period in the relevant age group. The relative excess risk is the proportional amount by which that risk is increased by 5 years of HRT use starting at age 50. The absolute excess risk is the baseline risk multiplied by the relative excess risk.

The excess risks for the three 5-year periods are then added together to give the total excess lifetime risk of ovarian cancer for a woman who takes HRT for 5 years starting at age 50. I assume excess risks at older ages are ignored because there is no evidence that HRT increases the risk after such a long delay. It’s important to note here that the figure of 1 in 1000 excess ovarian cancer cases refers to lifetime risk, not the excess in any single 5-year period.
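
The arithmetic behind that table is simple enough to check directly: each absolute excess risk is the baseline risk multiplied by the relative excess risk, and the lifetime figure is their sum. A quick sketch:

```python
# Baseline 5-year risk (per 1000) and relative excess risk by age band,
# as given in the paper's appendix for 5 years of HRT use from age 50.
table = {
    "50-54": {"baseline": 1.2, "relative_excess": 0.43},
    "55-59": {"baseline": 1.6, "relative_excess": 0.23},
    "60-64": {"baseline": 2.1, "relative_excess": 0.05},
}

# Absolute excess risk per age band = baseline x relative excess
absolute_excess = {
    age: row["baseline"] * row["relative_excess"] for age, row in table.items()
}

# Lifetime excess per 1000 users: 0.516 + 0.368 + 0.105, roughly 1 in 1000
total_excess = sum(absolute_excess.values())
```

So the headline “about one extra ovarian cancer per 1000 users” does follow from the table, provided the relative excess risks themselves are right.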

The figures for incidence seem plausible. The figures for absolute excess risk are correct if the relative excess risk is correct. However, it’s not completely clear where the figures for relative risk come from. We are told they come from figure 2 in the paper. Maybe I’m missing something, but I’m struggling to match the 2 sets of figures. The excess risk of 0.43 for the 50-54 year age group matches the relative risk 1.43 for current users with duration < 5 years (which will be true while the women are still in that age group), but I can’t see where the relative excess risks of 0.23 and 0.05 come from.

Maybe it doesn’t matter hugely, as the numbers in figure 2 are in the same ballpark, but it always makes me suspicious when numbers should match and don’t.

There are some further statistical problems with the paper. This is going to get a bit technical, so feel free to skip the next two paragraphs if you’re not into statistical details. To be honest, it all pales into insignificance anyway beside the more serious problem that correlation does not equal causation.

The methods section tells us that cases were matched with controls. We are not told how the matching was done, which is the sort of detail I would not expect to see left out of a paper in the Lancet. But crucially, a matched case control study is different to a non-matched case control study, and it’s important to analyse it in a way that takes account of the matching, with a technique such as conditional logistic regression. Nothing in the paper suggests that the matching was taken into account in the analysis. This may mean that the confidence intervals for the relative risks are wrong.

It also seems odd that the data were analysed using Poisson regression (and no, I’m not going to say “a bit fishy”). Poisson regression makes the assumption that the baseline risk of developing ovarian cancer remains constant over time. That seems a highly questionable assumption here. It would be interesting to see if the results were similar using a method with more relaxed assumptions, such as Cox regression. It’s also a bit fishy (oh damn, I did say it after all) that the paper tells us that Poisson regression yielded odds ratios. Poisson regression doesn’t normally yield odds ratios: the default statistic is an incidence rate ratio. Granted, the interpretation is similar to an odds ratio, but they are not the same thing. Perhaps there is some cunning variation on Poisson regression in which the analysis can be coaxed into giving odds ratios, but if there is, I’m not aware of it.
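
To illustrate the distinction with a toy example (simulated data, nothing to do with the study’s actual dataset): for a single binary exposure, the Poisson regression estimate of the exposure effect reduces to a ratio of observed event rates, i.e. an incidence rate ratio rather than an odds ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
exposed = rng.integers(0, 2, n)                    # 1 = exposed, 0 = not
true_rate = np.where(exposed == 1, 0.028, 0.020)   # events per person-period
cases = rng.poisson(true_rate)                     # simulated event counts

# With one binary covariate, the Poisson-regression MLE for the exposure
# coefficient has a closed form: the log of the ratio of observed rates.
rate_exposed = cases[exposed == 1].sum() / (exposed == 1).sum()
rate_unexposed = cases[exposed == 0].sum() / (exposed == 0).sum()
incidence_rate_ratio = rate_exposed / rate_unexposed  # estimates the true 1.4
```

When events are rare, as ovarian cancer is, rate ratios and odds ratios are numerically close, which is presumably why the paper’s interpretation still roughly works, but they are not the same quantity.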

I’m not sure how much those statistical issues matter. I would expect that you’d get broadly similar results with different techniques. But as with the opaque way in which the lifetime excess risk was calculated, it just bothers me when statistical methods are not as they should be. It makes you wonder if anything else was wrong with the analysis.

Oh, and a further oddity is that nowhere in the paper are we told the total sample size for the analysis. We are told the number of women who developed ovarian cancer, but we are not told the number of controls that were analysed. That’s a pretty basic piece of information that I would expect to see in any journal, never mind a top-tier journal such as the Lancet.

I don’t know whether those statistical oddities have a material impact on the analysis. Perhaps they do, perhaps they don’t. But ultimately, I’m not sure it’s the most important thing. The really important thing here is that the study has not shown that HRT causes an increase in ovarian cancer risk.

Remember folks, correlation does not equal causation.

The Saatchi Bill

I was disappointed to see yesterday that the Saatchi Bill (or Medical Innovations Bill, to give it its official name) passed its third reading in the House of Lords.

The Saatchi Bill, if passed, will be a dreadful piece of legislation. The arguments against it have been well rehearsed elsewhere, so I won’t go into them in detail here. But briefly, the bill sets out to solve a problem that doesn’t exist, and then offers solutions that wouldn’t solve it even if it did exist.

It is based on the premise that the main reason no progress is ever made in medical research (which is nonsense to start with, of course, because progress is made all the time) is that doctors are afraid to try innovative treatments in case they get sued. There is, however, absolutely no evidence that that’s true, and in any case, the bill would not help promote real innovation, as it specifically excludes the use of treatments as part of research. Without research, there is no meaningful innovation.

If the bill were simply ineffective, that would be one thing, but it’s also actively harmful. By removing the legal protection that patients currently enjoy against doctors acting irresponsibly, the bill would be a quack’s charter. It would certainly make it more likely that someone like Stanislaw Burzynski, an out-and-out quack who makes his fortune from fleecing cancer patients by offering them ineffective and dangerous treatments, could operate legally in the UK. That would not be a good thing.

One thing that has struck me about the sorry story of the Saatchi bill is just how dishonest Maurice Saatchi and his team have been. A particularly dishonourable mention goes to the Daily Telegraph, who have been the bill’s “official media partner”. Seriously? Since when did bills going through parliament have an official media partner? Some of the articles they have written have been breathtakingly dishonest. They wrote recently that the bill had “won over its critics”, which is very far from the truth. Pretty much the entire medical profession is against it: this response from the Academy of Medical Royal Colleges is typical. The same article says that one way the bill had won over its critics was by amending it to require that doctors treating patients under this law must publish their research. There are 2 problems with that: first, the law doesn’t apply to research, and second, it doesn’t say anything about a requirement to publish results.

In an article in the Telegraph today, Saatchi himself continued the dishonesty. As well as continuing to pretend that the bill is now widely supported, he also claimed that more than 18,000 patients responded to the Department of Health’s consultation on the bill. In fact, the total number of responses to the consultation was only 170.

The dishonesty behind the promotion of the Saatchi bill has been well documented by David Hills (aka “the Wandering Teacake”), and I’d encourage you to read his detailed blogpost.

The question that I want to ask about all this is why? Why is Maurice Saatchi doing all this? What does he have to gain from promoting a bill that’s going to be bad for patients but good for unscrupulous quacks?

I cannot know the answers to any of those questions, of course. Only Saatchi himself can know, and even he may not really know: we are not always fully aware of our own motivations. The rest of us can only speculate. But nonetheless, I think it’s interesting to speculate, so I hope you’ll bear with me while I do so.

The original impetus for the Saatchi bill came when Saatchi lost his wife to ovarian cancer. Losing a loved one to cancer is always difficult, and ovarian cancer is a particularly nasty disease. There can be no doubt that Saatchi was genuinely distressed by the experience, and deserves our sympathy.

No doubt it seemed like a good idea to try to do something about this. After all, as a member of the House of Lords, he has the opportunity to propose new legislation. It is completely understandable that if he thought a new law could help people who were dying of cancer, he would be highly motivated to introduce one.

All of that is very plausible and easy to understand. What has happened subsequently, however, is a little harder to understand.

It can’t have been very long after Saatchi proposed the bill that many people who know more about medicine than he does told him why it simply wouldn’t work, and would have harmful consequences. So I think what is harder to understand is why he persisted with the bill after all the problems with it had been explained to him.

It has been suggested that this is about personal financial gain: his advertising company works for various pharmaceutical companies, and pharmaceutical companies will gain from the bill.

However, I don’t believe that that is a plausible explanation for Saatchi’s behaviour.

For a start, I’m pretty sure that the emotional impact of losing a beloved wife is a far stronger motivator than money, particularly for someone who is already extremely rich. It’s not as if Saatchi needs more money. He’s already rich enough to buy the support of a major national newspaper and to get a truly dreadful bill through parliament.

And for another thing, I’m not at all sure that pharmaceutical companies would do particularly well out of the bill anyway. They are mostly interested in getting their drugs licensed so that they can sell them in large quantities. Selling them as a one-off to individual patients is unlikely to be at the top of their list of priorities.

For what it’s worth, my guess is that Saatchi just has difficulty admitting that he was wrong. It’s not a particularly rare personality trait. He originally thought the bill would genuinely help cancer patients, and when told otherwise, he simply ignored that information. You might see this as an example of the Dunning-Kruger effect, and it’s certainly consistent with the widely accepted phenomenon of confirmation bias.

Granted, what we’re seeing here is a pretty extreme case of confirmation bias, and has required some spectacular dishonesty on the part of Saatchi to maintain the illusion that he was right all along. But Saatchi is a politician who originally made his money in advertising, and it would be hard to think of 2 more dishonest professions than politics and advertising. It perhaps shouldn’t be too surprising that dishonesty is something that comes naturally to him.

Whatever the reasons for Saatchi’s insistence on promoting the bill in the face of widespread opposition, this whole story has been a rather scary tale of how money and power can buy your way through the legislative process.

The bill still has to pass its third reading in the House of Commons before it becomes law. We can only hope that our elected MPs are smart enough to see what a travesty the bill is. If you want to write to your MP to ask them to vote against the bill, now would be a good time to do it.

Are two thirds of cancers really due to bad luck?

A paper published in Science has been widely reported in the media today. According to media reports, such as this one, the paper showed that two thirds of cancers are simply due to bad luck, and only one third are due to environmental, lifestyle, or genetic risk factors.

The paper shows no such thing, of course.

It’s actually quite an interesting paper, and I’d encourage you to read it in full (though sadly it’s paywalled, so you may or may not be able to). But it did not show that two thirds of cancers are due to bad luck.

What the authors did was they looked at the published literature on 31 different types of cancer (eg lung cancer, thyroid cancer, colorectal cancer, etc) and estimated 2 quantities for each type of cancer. They estimated the lifetime risk of getting the cancer, and how often stem cells divide in those tissues.

They found a very strong correlation between those two quantities: tissues in which stem cells divided frequently (eg the colon) were more likely to develop cancer than tissues in which stem cell division was less frequent (eg the brain).

The correlation was so strong, in fact, that it explained two thirds of the variation among different tissue types in their cancer incidence. The authors argue that because mutations that can lead to cancer can occur during stem cell division purely by chance, that means that two thirds of the variation in cancer risk is due to bad luck.

So, that explains where the “two thirds” figure comes from.

The problem is that it applies only to explaining the variation in cancer risk from one tissue to another. It tells us nothing about how much of the risk within a given tissue is due to modifiable factors. You could potentially see exactly the same results whether each specific type of cancer struck completely at random or whether each specific type were hugely influenced by environmental risk factors.
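
A toy simulation (my own illustration, not from the paper) makes the point concrete: if modifiable factors multiplied every tissue’s risk, the across-tissue correlation with stem cell divisions, and hence the “variation explained”, would be exactly the same as in a pure bad-luck world.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tissues = 31
log_divisions = rng.uniform(5, 12, n_tissues)  # log10 lifetime stem cell divisions
scatter = rng.normal(0, 0.5, n_tissues)        # tissue-to-tissue noise

# Scenario A: cancer risk driven purely by division count ("bad luck")
log_risk_luck = log_divisions - 13 + scatter

# Scenario B: same tissues, but modifiable factors triple every tissue's risk
log_risk_env = log_risk_luck + np.log10(3.0)

r_luck = np.corrcoef(log_divisions, log_risk_luck)[0, 1]
r_env = np.corrcoef(log_divisions, log_risk_env)[0, 1]
# r_luck == r_env: the correlation cannot tell the two worlds apart
```

Adding a constant on the log scale (i.e. multiplying every risk) leaves the correlation untouched, which is exactly why the two-thirds figure says nothing about modifiable risk within a tissue.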

Let’s take lung cancer as an example. Smoking is a massively important risk factor. Here’s a study that estimated that over half of all lung cancer deaths in Japanese males were due to smoking. Or to take cervical cancer as another example, about 70% of cervical cancers are due to just 2 strains of the HPV virus.

Those are important statistics when considering what proportion of cancers are just bad luck and what proportion are due to modifiable risk factors, but they did not figure anywhere in the latest analysis.

So in fact, interesting though this paper is, it tells us absolutely nothing about what proportion of cancer cases are due to modifiable risk factors.

We often see medical research badly reported in the newspapers. Often it doesn’t matter very much. But here, I think real harm could be done. The message that comes across from the media is that cancer is just a matter of luck, so changing your lifestyle won’t make much difference anyway.

We know that lifestyle is hugely important not only for cancer, but for many other diseases as well. For the media to give the impression that lifestyle isn’t important, based on a misunderstanding of what the research shows, is highly irresponsible.

Edit 5 Jan 2015:

Small correction made to the last paragraph following discussion in the comments below.