Making your own reality – part 2
Posted by apgaylard on August 26, 2008
Last week, having failed to get Sue Young to engage with a very simple criticism of a clearly erroneous statement made by American homeopath and author Dana Ullman in an interview she is carrying on her site, I sent my comment directly to the Zeus Information Service.
Mr Ullman has very thoughtfully copied me in on the reply that he made to Louise McLean of Zeus. Given that this is Zeus's official response to my query, I have decided to post it – along with a few comments.
Adrian Gaylard is being either dishonest or inadequately informed. It is true that the Shang review included the studies that Reilly and his team in his initial analysis, he conveniently excluded ALL of them in his final analysis!
Adrian Gaylard ignores the FACT that Shang never provided analysis and comparison of the 22 high-quality homeopathic trials to the 9 high-quality conventional medical studies. Gaylard and Shang had the chutzpah to ignore these randomized double-blind trials by asserting that they were somehow “biased.” Shang conveniently disregarded several high-quality trials that happened to have “positive” results (such as two of the large trials testing Oscillococcinum in the treatment of people with influenza or influenza-like syndrome, two trials by Jennifer Jacobs in the treatment of children with diarrhea, including her famous study that was published in the respected journal, Pediatrics, as well as the meta-analysis of the three diarrhea trials that was published in the respect journal, Pediatric Infectious Disease Journal.
Gaylard showed his dishonesty or ignorance by saying that the Reilly trials were not “high-quality,” when even the editors of the Lancet said of Reilly’s 1994 study, “carefully done work of this sort should not be denied the attention of Lancet readers” (vol 344, December 10, 1994, p. 1585). Then, when Reilly and team conducted their fourth trial on people with allergic disorders in the BMJ, an editorial in this prestigious medical journal noted, “The authors believe that when these results are taken together with the findings of three similar previous trials, it may be time to confront the conclusion that homeopathy and placebo differ.” Then, the editorial went on to say, “This may be more plausible than the conclusion that their trials have produced serial false positive results.” (vol. 321, August 19, 2000)
The point here is that when you put on blinders, you do not see the whole picture. Finally, I stand by the words and wisdom of Sir Arthur C. Clarke who said, : “A sufficiently advanced technology is indistinguishable from magic. When a distinguished but elderly scientist states that something is possible, he is almost certainly right; that something is impossible, he is very probably wrong.”
–Dana Ullman, MPH
Aside from the insults and bluster, there are some interesting points here which illuminate Ullman’s style and attitude to evidence.
He does make what is actually a concession: “It is true that the Shang review included the studies that [sic] Reilly and his team in his initial analysis”. How does that square with me being “dishonest or inadequately informed”, I wonder?
Never mind. Ullman’s tack is now quite different from what he was reported to have said in his interview; that Shang et al, “did not include any of David Reilly’s research”. However, instead of conceding the point he rather intemperately attacks what I didn’t say, switching to a series of different issues.
For instance, saying that I ignored “the FACT that Shang never provided analysis and comparison of the 22 high-quality homeopathic trials to the 9 high-quality conventional medical studies” is a remarkable piece of misdirection. Actually, there were 21 higher-quality homeopathic trials identified in the review; in any case, the point is totally irrelevant. It betrays a willful misunderstanding of what Shang et al did. (It’s willful because it has been explained to him before. In fact, we discussed this topic some time ago.)
What Did Shang et al Really Do?
To highlight why Ullman is so misguided we need a brief digression to review what Shang et al really did in their analysis. Understanding this immediately disposes of many of the questions raised about the study. The core concept explored in the paper was to use all the trials to perform a meta-regression analysis. This allowed the authors to test the relationship between the effect size (expressed as an Odds Ratio, OR) and a number of potential sources of bias: “SE of log odds ratio, language of publication, indexing of the publication in MEDLINE, trial quality (masking, generation of allocation sequence, concealment of allocation, intention-to-treat analysis), duration of follow-up, and clinical topic. For homoeopathy trials, we also examined whether effects varied between types of homoeopathy and types of indications (acute, chronic, primary prevention, or prophylaxis).”
ALL 110 matched pairs of trials (homeopathic and conventional) were included in this meta-regression analysis, every one. The results are provided in table 3 of the paper. By a large margin, the strongest biasing influence was found to be SE of log odds ratio (SE) – a measure of trial size (P<0·0001, for both homeopathy and conventional treatment).
Thus ALL 110 trials of homeopathy played their part in establishing the best predictor of bias. The authors then, “combined treatment effects from larger trials of higher quality by use of standard random-effects meta-analysis and used meta-regression analysis to predict treatment effects in trials as large as the largest trials included in the study. Trials with SE in the lowest quartile were defined as larger trials.”
Simply, they used the knowledge gained from the analysis of their large data set to understand which were the most reliable trials, then did a second meta-analysis of the most reliable trials to provide the best prediction they could of treatment effects.
This is very clearly illustrated in the paper: if you look at Figure 2 it shows the relationship between bias (SE) and treatment effect (OR). It includes a regression line (solid blue) which shows that as trials of homeopathic treatments approach minimum bias (small SE) the treatment effect disappears (OR approaches unity) – becoming statistically indistinguishable from placebo. For the matched conventional controls it did not. In other words, correcting for bias as best they could, the conventional medicines had an effect above that of placebo; the homeopathic interventions did not. (They also demonstrated, in their authors’ reply, that individualised homeopathy fared no better than any other kind.)
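To see how the intercept of such a regression answers the question, here is a minimal sketch with made-up numbers (not Shang et al’s data, and only a simple unweighted fit): regressing log odds ratio on SE, the intercept estimates the effect of a hypothetical trial with negligible small-study bias; an intercept near zero (OR near 1) means indistinguishable from placebo.

```python
import math

# Made-up (SE, log OR) pairs imitating the pattern in Figure 2:
# the apparent benefit (negative log OR) shrinks as SE shrinks.
data = [(0.10, -0.02), (0.20, -0.10), (0.30, -0.18),
        (0.40, -0.22), (0.50, -0.33), (0.60, -0.38)]

# Ordinary least-squares fit of log OR = a + b * SE.
n = len(data)
sx = sum(se for se, _ in data)
sy = sum(lo for _, lo in data)
sxx = sum(se * se for se, _ in data)
sxy = sum(se * lo for se, lo in data)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope: apparent effect grows with SE
a = (sy - b * sx) / n                          # intercept: the zero-bias effect

# Extrapolate to a trial with negligible SE (minimal small-study bias).
or_at_zero_bias = math.exp(a)
print(round(or_at_zero_bias, 2))  # close to 1: indistinguishable from placebo
```

With these invented points the slope is negative (smaller trials show bigger apparent benefits) and the extrapolated odds ratio sits close to unity – which is exactly the shape of the argument in the paper.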
The ‘final eight’ homeopathy studies that seem to be the cause of such contention were just those “higher quality” trials whose measure of bias (SE) fell within the lowest quartile. In other words they were the most reliable of all the trials considered.
So, all the studies contributed to the conclusion. None of the 110 were ignored; they were just statistically processed to enable the most reliable studies to be found – a holistic approach, if you like!
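As a sketch of that filtering step – with invented trial data standing in for the actual studies – rank trials by SE, keep the lowest quartile as the “larger trials”, and pool those by inverse-variance weighting. (Shang et al used a random-effects model; a simple fixed-effect pool is shown here for brevity.)

```python
import math

# Invented (log OR, SE) pairs standing in for homeopathy trials;
# not the numbers from Shang et al's webappendix.
trials = [(-0.90, 0.60), (-0.70, 0.55), (-0.50, 0.40), (-0.40, 0.35),
          (-0.30, 0.30), (-0.20, 0.25), (-0.10, 0.15), (-0.05, 0.10)]

# "Larger trials": SE in the lowest quartile (here, 2 of the 8).
ses = sorted(se for _, se in trials)
cutoff = ses[len(ses) // 4 - 1]
larger = [(lo, se) for lo, se in trials if se <= cutoff]

# Fixed-effect (inverse-variance) pooled odds ratio of the larger trials.
weights = [1 / se ** 2 for _, se in larger]
pooled_log_or = sum(w * lo for (lo, _), w in zip(larger, weights)) / sum(weights)
pooled_or = math.exp(pooled_log_or)
print(len(larger), round(pooled_or, 2))
```

The point of the sketch is that every trial takes part in establishing the cutoff; the “final eight” are simply those that survive an objective, pre-specified filter.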
So when Ullman uses words like “conveniently excluded”, the reality is that 102 studies were filtered out on the basis of a transparent and objective analysis of the trials’ own data. Now, if Ullman doesn’t like the method he should point out its real flaws – not some imagined injustices.
He also seems to conflate statistical bias with some kind of moral bias. In the paper, bias is not used pejoratively. Unsurprisingly, in a statistical analysis it’s used in a statistical sense.
So the comment that Shang et al assert some work to be “somehow ‘biased’” is odd. First, bias is not an absolute quality; it is all-pervasive: it’s not a matter of it being present or not. Shang et al systematically look for the least biased trials. Importantly, they quantify the effects of different potential sources of bias and define an objective criterion for selection: “Trials with SE in the lowest quartile … larger trials”. There is no ‘somehow’ about it.
He then goes on about Shang conveniently “disregarding” other “positive” trials. Here he just repeats the same mistakes. As we shall see, they are most definitely included.
Ullman alleges that, “Shang conveniently disregarded several high-quality trials that happened to have “positive” results (such as two of the large trials testing Oscillococcinum…” He doesn’t name the trials that he feels have been overlooked; so let’s look at what Shang et al did and see if there are any obvious omissions.
The fact is that Shang et al include four trials of Oscillococcinum: three treatment trials and one of prophylaxis (Attena): Attena (5), Casanova (19), Ferley (35), Papp (71). [The reference numbers are those used by Shang et al in their webappendix and other published data.] Papp is included in the final group of eight “higher quality” least biased trials. The others were not counted as being of “higher methodological quality”.
It seems that Ullman may be, yet again, equating “disregarding” with not being in the final set of “higher quality” low-bias trials. As we have seen, this is a false equivalence: they were used in the key part of the analysis.
The Cochrane Review of this intervention provides some insights into the quality of these trials and a possible candidate for a missing study. It also provides a reasonable assessment of the merits of this intervention. It has also previously been given Ullman’s stamp of approval.
It does include the three treatment trials used by Shang et al, plus one they seem to have missed, a 1984 study by Casanova (Casanova P. Homeopathy, flu syndrome and double blinding [Homeopathie, syndrome grippal et double insu]. Tonus 1984:25-6). However, as the Cochrane reviewers noted, it “was not published in a standard medical journal”. Perhaps this explains why it was overlooked. They also went on to note that it “contains little experimental detail, does not report withdrawals and analyses a suspiciously round number of patients…” Certainly, even had it been included, it could not have passed the test to make it into the ‘higher quality’ sub-group.
Also, it’s perhaps evidence for the generosity and impartiality of Shang et al that Casanova’s unpublished 1992 study was considered at all. (Casanova P, Gerard R. Bilan de 3 annees d’etudes randomisees multicentriques oscillococcinum/placebo. Laboratoires Boiron, 1992: 11-16. – From the reference it looks like an internal publication by the manufacturer)
The Review also notes that “…Two trials (Ferley and Papp) pre-specified ‘recovery after 48 hours’ as the main outcome measure. The RR of being sick at 48 hours on Oscillococcinum was 93% (95% CI 88% to 99%) of that of placebo …”. This main outcome measure did not make it into the oft quoted ‘headline’ for the review, evidently, a 95% CI reaching 0.99 cannot be safely considered to have reached statistical significance.
When evaluating the results of these two journal papers it is important to remember the number of outcome measures they assessed: 8 for Ferley and 17 for Papp. The odds of getting at least one statistically significant outcome by chance alone are pretty high, given that the significance testing was carried out at the 5% level with no correction for multiple comparisons (i.e. a 1 in 20 chance, for each outcome, of the difference between treatment and placebo being coincidental).
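The arithmetic behind that “1 in 20” point is worth spelling out: testing k outcomes independently at the 5% level, the chance of at least one spuriously “significant” result is 1 − 0.95^k. (A simplification, since real outcomes are correlated, but it shows the scale of the problem.)

```python
def chance_of_false_positive(k, alpha=0.05):
    """Probability of at least one spuriously 'significant' result
    among k independent tests at significance level alpha."""
    return 1 - (1 - alpha) ** k

print(round(chance_of_false_positive(8), 2))   # Ferley's 8 outcomes: ~0.34
print(round(chance_of_false_positive(17), 2))  # Papp's 17 outcomes: ~0.58
```

So, under this (admittedly idealised) assumption, Papp’s trial was more likely than not to turn up at least one “positive” outcome even if the remedy did nothing at all.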
The conclusion of the Cochrane review that, “Participants taking Oscillococcinum had about a quarter of a day less illness than those on placebo. This effect might be as large as half a day and as small as about an hour” is based on combining the individually negative results for a secondary outcome from these two studies. When this is done (giving two-thirds of the “weight” to the smaller study) the lower bound of the confidence interval (95%) reaches down to a benefit “as small as about an hour”. It’s a lot of fuss over very little.
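The mechanics of such a pooled estimate can be sketched with invented effect sizes (not the Cochrane data): each study’s mean reduction in illness duration is weighted by the inverse of its variance, and the 95% confidence interval of the pooled effect can reach down to a benefit of well under an hour even when the point estimate looks like a respectable fraction of a day.

```python
import math

# Invented (effect in days, SE) for two trials; negative = shorter illness.
# These are illustrative numbers only, not Ferley's or Papp's results.
studies = [(-0.30, 0.15), (-0.20, 0.25)]

# Inverse-variance weighting: more precise studies count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

print(round(pooled, 2))   # point estimate: roughly a quarter of a day
print(round(hi * 24, 1))  # upper CI bound in hours: a benefit under an hour
```

With these numbers the interval only just excludes zero – exactly the “only just reaches statistical significance” situation the reviewers describe.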
As the reviewers conclude, “the difference between groups in the meta-analysis only just reaches statistical significance. It is arguable that a question as scientifically controversial as whether homeopathic medicines are always equivalent to placebo would require more statistically robust data.” And I’d agree.
To sum up: Shang et al included three treatment and one prophylaxis trial for Oscillococcinum. The only trial I can see that they missed (and I’m happy to be corrected) is Casanova’s inadequately reported, dubious, non-peer-reviewed 1984 publication. They rather generously considered an unpublished trial circulated internally by the manufacturer Boiron. Even adding the trial they missed to the ones they included and doing a meta-analysis – as the Cochrane reviewers have done – manages to deliver a few outcomes, out of many tested, that “only just reach statistical significance”. Even these don’t provide an effect of clinical significance.
As for the Jacobs studies, again: all three were selected as part of the initial 110 by Shang et al. Jacobs (46) was part of the high-quality low-bias final group of eight homeopathic trials. Jacobs (49) was deemed to be of high-quality but as the ninth largest study, it fell out of the lowest quartile for SE, so didn’t make it into the ‘final eight’. Jacobs (48) was a small study (n=33) and deemed not to be of high quality.
As for, “her famous study that was published in the respected journal, Pediatrics,” this seems to be Jacobs (49) (Jacobs J, Jiménez LM, Gloyd SS, Gale JL, Crothers D. Treatment of acute childhood diarrhea with homeopathic medicine: a randomized clinical trial in Nicaragua. Pediatrics 1994; 93: 719-25) which was included as part of the sub-group of 21 ‘higher quality’ studies.
Also there is no reason for Shang et al to have included, “the meta-analysis of the three diarrhea trials that was published in the respect [sic] journal, Pediatric Infectious Disease Journal” as they were undertaking their own meta-analysis and needed to include, as they did, the data from the individual trials.
So, as for Jacobs work: the individual trials were included: one made it into the final group of eight studies; one fell just short – but was assessed to be ‘high quality’; the other was a small study that played its part in the large group of 110 trials, but was not judged to be of “higher quality”. Again, Ullman is just plain wrong.
When Ullman chides that I, “showed his dishonesty or ignorance by saying that the Reilly trials were not “high-quality,” ” again he manages to miss several important points. For instance, I never said that these trials were not “high-quality”. Shang et al determined that they didn’t meet their criteria for ‘higher quality’. Now, I was clearly referring to the quality criteria of Shang et al, not any personal view. Also, note the words: higher-quality. This is a relative term, used by the authors, which allows for some studies to be high-quality whilst not making it to the top of the quality league.
However, there is an interesting question here: is there a disagreement between the criteria used by Shang et al and those applied by the editors of the Lancet and BMJ? All I can say is that Shang et al described their criteria:
Assessment of study quality focused on three key domains of internal validity: randomisation (generation of allocation sequence and concealment of allocation), masking (of patients, therapists, and outcome assessors), and data analysis (by intention to treat or other). Random-number tables, computer generated random numbers, minimisation, coin-tossing, card-shuffling, and lot-drawing were classified as adequate methods for the generation of the allocation sequence. Sealed, opaque, sequentially numbered assignment envelopes, central randomisation, independently prepared and coded drug packs of identical appearance, and on-site computerised randomisation systems were classified as adequate methods of allocation concealment. Analysis by intention to treat was assumed if the reported number of participants randomised and the number analysed were identical. Descriptions of other methods were coded either as inadequate or unclear, depending on the amount of detail provided. Trials described as double-blind, with adequate methods for the generation of allocation sequence and adequate concealment of allocation, were classified as of higher methodological quality.
They have also made clear which trials they considered to be, “of higher methodological quality” and which they did not. If the editors of The Lancet and BMJ have made similar information available, then it would be possible to evaluate this apparent discrepancy. It might even make a topic for a paper. Perhaps Ullman should consider doing the work before casting the stones.
As Ullman says, “The point here is that when you put on blinders, you do not see the whole picture.” I couldn’t agree more!
What we have seen here is classic Ullman: misinform and move on; never admit an error or make a correction. As a consequence, the blogosphere is becoming littered with the remnants of his empty arguments.
Ullman and I have exchanged a few e-mails now, and we are getting nowhere. So, before I take the debate any further with him I’ve asked him to clear up a few of the misstatements that he’s left lying around. Otherwise we’ll just end up going in circles.
Here is my list.
- He has never conceded that he was wrong to say, “… Perhaps, SOMEONE can finally tell us which were the 21 homeopathic trials and the 9 allopathic ones. Shang NEVER divulged, most likely because this review would show real benefits from homeopathic treatment. Isn’t anyone suspicious of “black box” comparison studies like this? Why are only the homeopaths complaining here about junk science? Hmmmm. …”
- He has asserted before that two trials by Jacobs were excluded from Shang et al‘s analysis when, as we have seen, they were not.
- Some time ago he was invited, over at JREF, to “GIVE ONE, YOU ONLY NEED ONE, INCONTROVERTIBLE EXAMPLE, WITH REFERENCES, OF HOMEOPATHY CURING A NON-SELF-LIMITING CONDITION”.
- After talking up the Rao et al paper on possible evidence for the ‘memory of water’ he – much like the authors – never addressed himself to the critique by Kerr et al published in the journal Homeopathy.
- We also had a brief exchange on the Oscillococcinum trials before. He never did get back to me on that one, and still keeps peddling nonsense.
- Last, but by no means least, he also left me hanging on the assertion that “silica has a tendency to store and broadcast information.” (I’ve even included this point in a recent e-mail).
I’m sure that there are more. But this will do for a start.
I would welcome a real debate – not avoidance, bluster, misdirection and name calling. It’d also be nice, the next time Shang et al is mentioned, if the debate could be about what they actually did.