A canna’ change the laws of physics

Scotty, The Naked Time, stardate 1704.3, Episode 7

Homeopathy and the Absence of Evidence

Posted by apgaylard on April 26, 2008

The apologists for homeopathy are upset again: this time with Professor Edzard Ernst and Simon Singh’s new book.  Why? Because once more homeopathy is exposed for what it is: a placebo.

Dr Damien Downing, the Medical Director for the Alliance for Natural Health (ANH), seems particularly put out.  So much so that he has released a rather silly critique. (This link seems to be down. Try here.)

After some empty carping he suggests that Ernst is not a very good scientist and then goes on to wrap himself in the flag of good science, “The scientific method ‘consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses’ (Wikipedia) – not of unsubstantiated dogmatic statements. Science has no room for dogma.”

This is one point I can agree with; compared to the statements of some other protagonists it’s pretty reasonable: science should have no room for dogma.  However, Downing is not averse to peddling some homeopathic propaganda. 

This is clearly seen in the way he handles evidence.  He refers to what I would expect is his best scientific evidence base for homeopathy: the homeopathy evidence section of “The National Library for Health”.  He points out that it currently, in his view, “contains 32 systematic reviews and metaanalyses of [homeopathy’s] use in a wide range of disorders”. He opines, “Of the 32, 7 report a statistically significant clinical effect from homeopathy, 6 show a nonsignificant trend in its favour, and 3 show no effect; 16 concluded that there was “insufficient data” to draw a conclusion either way.”

Now I’ve very closely examined this database in the past and could not disagree more strongly with this ‘analysis’. It completely misses the main point: if you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports show that homeopathy works as well as the recommended conventional treatment: the answer is none.

[three more items of ‘evidence’ have been added since I looked at the database, none of these have, as yet, been through the complete review process so it would be premature to cite these – hence my review is still relevant.]

Next, the idea of a “statistically significant clinical effect” needs some thought.  Note that it does not claim that there is a clinically significant effect.  That is, an effect that would be worth having.  This is good, because none of the reports show any.

A statistically significant effect just means that, if there were really no difference between the groups given homeopathic sugar pills and ordinary sugar pills, a difference at least as large as the one observed would be unlikely to arise by chance alone.  However, “unlikely” here usually means no more than one time in twenty.  Not too reassuring when you see the number of outcomes measured in some of these studies, or the number of researchers in the world looking at this issue.
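To put a rough number on the multiple-outcomes worry, here is a minimal sketch (my own illustration, not from the post): assuming each outcome is tested independently at the conventional 0.05 threshold, the chance of at least one spurious “significant” result grows rapidly with the number of outcomes measured.

```python
# Illustrative sketch, not from any cited study: at a 0.05 significance
# threshold, the chance of at least one false positive rises quickly as
# more independent outcomes are tested.
def p_false_positive(n_tests, alpha=0.05):
    """Probability of >= 1 spurious 'significant' result in n independent tests."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 10, 20):
    print(f"{n:2d} outcomes tested: {p_false_positive(n):.0%} chance of a false positive")
```

With twenty outcomes, a “significant” finding somewhere is more likely than not, which is why an isolated positive result in a trial measuring many endpoints carries little weight.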

Neither do tests of statistical significance account for a range of other impediments including experimenter bias, publication bias, lack of blinding, high drop-out rates from control groups etc.

One of the silliest things in Downing’s statement is the citing of studies that he contends show “a non‐significant trend” in favour of homeopathy.  If the trend is not statistically significant, then it’s very likely to be nothing more than random noise in the experiment.  This is non-evidence, not evidence.

So we have no evidence to support the use of homeopathy; the authors of these ‘positive’ studies actually ask for more research.  All we have is a small number that claim to reach statistical significance for the particular ‘homeopathic’ intervention showing an effect in excess of a placebo.

The next sleight of hand is the contention that, “16 concluded that there was “insufficient data” to draw a conclusion either way.”  The “insufficient data” part seems to be presented as a quotation.  This phrase does not appear in sixteen of the reports. 

Also, the observation that researchers have looked for an effect and not found one tells its own story.  Whilst, in an absolute sense, absence of evidence is not evidence of absence, that maxim is too simplistic here.  We need to remember to account for prior probability.  If something genuinely doesn’t exist then, by definition, we are never going to find evidence of its existence: there is an absence of evidence of unicorns because they are a myth.  Similarly, absence of evidence for the deeply physically implausible practice of homeopathy is telling us something.

However, if we are looking for something that is likely to be real, the quality of the search for evidence is also important: absence of evidence in high-quality research is clearly informative in a way that a similar result in low-quality research is not.
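The prior-probability point can be made concrete with a toy Bayesian calculation.  This is my own sketch with made-up numbers: treat a single “positive” trial as a noisy test, assuming an 80% chance of detecting a genuinely effective treatment and a 5% false-positive rate.

```python
# Toy Bayesian sketch (illustrative numbers, not from any study): how much
# a single 'positive' trial should shift belief that a treatment works.
def posterior(prior, sensitivity=0.8, false_positive=0.05):
    """P(real effect | positive trial), treating the trial as a noisy test."""
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_positive

# A treatment with a plausible mechanism vs. a deeply implausible one.
print(f"prior 0.50  -> posterior {posterior(0.50):.2f}")
print(f"prior 0.001 -> posterior {posterior(0.001):.2f}")
```

For a plausible treatment, one positive trial is persuasive; for a treatment with a vanishingly small prior, the same result leaves the posterior tiny, because a false positive remains the far more likely explanation.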

Seen in this light, one of the observations contained in this database, which Downing has mysteriously overlooked, is vital:

“… Studies of high methodological quality were more likely to be negative than the lower quality studies …” [Cucherat et al]

In other words, the better the quality of the search for a homeopathic effect the less likely one is to be found! 

This moves us onto the oddly vexatious topic of the famous review by Shang et al.  For Downing, “…the authors identified 110 relevant studies and then excluded all but 8 of them from the final analysis – and declined to name them! This would seem to be blatant research misconduct.”

Reality is somewhat different.  The researchers progressively excluded studies on the basis of transparent quality criteria.  They were particularly interested in bias and found that it correlated strongly with sample size.  That only eight of over one hundred trials of homeopathy made the cut tells us something important about the quality of research conducted into homeopathy.  That the eight best studies, taken together, showed that homeopathy is no more than a placebo is an entirely proper conclusion – consistent with the findings of Cucherat et al.  Any plea to include more of the original 110 is a plea for the inclusion of bias: not good science. [this subject is excellently explored on Paul Wilson’s blog]

Again, the better the quality of the search the more negative the findings about homeopathy.  This is, of course, what would be expected if there were no benefits from homeopathic remedies (aside from the placebo effect): seemingly positive results are just noise in the signal and can be removed by proper filtering.

The assertion that the authors refused to name the final eight studies is a persistent piece of homeomythology.  I have commented on this before at some length.  The truth is that Shang and his co-authors unwisely omitted the names of the eight studies from the original paper; some people pointed this out and they named them in the 17th December 2005 issue of The Lancet.  They have also made the details of the included and excluded papers available on a website.  This all happened in 2005!  It would seem to me that to raise the banner of good science requires that one, at the very least, keeps up to date with developments! 

On the subject of Shang et al, Downing confuses proper scientific conduct with misconduct.  This egregious folly can only be the result of a shocking lack of competence or of letting personal dogma cloud his judgement.  I prefer to think it is the latter.  In any event, given this woeful performance, it would seem rather embarrassing to vilify Ernst as a bad scientist.

Unfortunately Downing is not alone in perpetuating the myth of the secret eight; worse still others completely fail to understand Shang et al.

It’s worth noting that Downing’s much vaunted “National Library for Health” database contains one review that, taken at face value, is very problematic for a homeopathy advocate.

A meta-analysis of homeopathy for postoperative ileus by Barnes et al was not able to reach a definitive judgement.  However, their data indicated that studies working with potencies below 12C (there could be some active agent left) provided a statistically significant reduction in time to first flatus (vs. placebo) whereas those using potencies above 12C (odds are that just the solvent is left) did not.  Now, because homeopathic ‘remedies’ are usually diluted to potencies beyond 12C, this flatly contradicts both usual homeopathic practice and the ‘less is more’ notion of the ‘law’ of infinitesimals.
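The 12C threshold is simple arithmetic.  A back-of-envelope sketch (my own numbers, assuming we start from one mole of active ingredient) shows why potencies beyond 12C are unlikely to contain even a single molecule of it:

```python
# Back-of-envelope sketch (my assumptions, not from Barnes et al): expected
# molecules of active ingredient surviving a C-series homeopathic dilution,
# starting from one mole of the substance.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_left(c_potency, starting_moles=1.0):
    """Each 'C' step is a 1:100 dilution, so nC multiplies the count by 10**(-2n)."""
    return starting_moles * AVOGADRO * 10 ** (-2 * c_potency)

for c in (6, 12, 30):
    # Beyond 12C, fewer than one molecule is expected on average.
    print(f"{c:2d}C: ~{molecules_left(c):.3g} molecules expected")
```

At 12C the dilution factor (10⁻²⁴) just overtakes Avogadro’s number, so anything more dilute is, statistically, pure solvent.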

Finally, the most positive review contained in this database, covering trials of a homeopathic ‘medicine’ for vertigo (that many homeopaths wouldn’t recognise as proper homeopathy anyway!) made this plea: 

“… The positive effects of Vertigoheel in vertigo are based on good levels of evidence, but larger trials are required …” [Karkos et al]

If this is the best evidence that apologists for homeopathy have to offer one wonders why they bother.  The real answer is that this debate is not about the evidence at all; it is about some believers in an out-moded quasi-religious system of medicine clutching at fig-leaves to cover their embarrassment.

Science certainly has no room for dogma. Propaganda isn’t that helpful either, but that is all Downing and other apologists are peddling.

19 Responses to “Homeopathy and the Absence of Evidence”

  1. gimpy said

    It’s fascinating how so many in the CAM community seem to ignore and misinterpret evidence. I am in two minds about whether it is deliberate deception or just cognitive dissonance caused by myth hitting reality. Today I am leaning towards the side of deliberate deception.

  2. apgaylard said

    Gimpy:
    I am inclined to agree. When the same errors are rehearsed again and again; when taking the trouble to check some readily available resources flatly contradicts their position: it’s hard to come to any other conclusion.

  3. dvnutrix said

    It is common practice. After all, Professor Patrick Holford has declared that Dr Ben Goldacre is inaccurate and, based on his own special interpretation of statistics, he found himself ‘questioning the integrity of the authors and the BMJ’ involved in publishing a systematic review of omega 3 for mortality, cardiovascular disease and cancer.

    This deliberate muddying of the waters looks like an intentional strategy of interfering with the public understanding of science. It may be an effective strategy. A recent neuroimaging study suggests that “belief and disbelief differ from uncertainty in that both provide information that can subsequently inform behavior and emotion”. So, creating uncertainty may work to prevent people from taking a particular course of action, such as rejecting a school of thought or therapeutic framework (not the authors’ examples).

    In further discussion, the authors discuss the implications for the role of emotion in disbelief and argue that their work “calls the popular opposition between reason and emotion into question”.

  4. apgaylard said

    dvnutrix:
    Thanks for the examples and insight. DUllman’s tendency to carry on banging on about studies after their flaws have been exposed is another example of this sort of tactic that comes to mind.

  5. drdowning said

    A big thank you to “apgaylard” for picking up on the silliness in this debate on homeopathy; my statement (not a press statement as we didn’t send it to the press, preferring to debate in a scientific forum) said;
    “Of course it’s silly to pool systematic reviews in this way, but no sillier than many of the systematic reviews of this and other CAM modalities, which pool highly heterogeneous studies and use arbitrary, often unspecified, criteria to exclude those they don’t like.”

    Can I do credentials first? I don’t practice homeopathy, I’m not trained in it, I don’t have a brief for it and I don’t really believe in it (you seem to disbelieve in it). I have seen it work sometimes; I’ve also seen it fail to work. Insufficient numbers from my clinical experience to enable me to conclude either way. I’m not an apologist for homeopathy; I wouldn’t mind being called an apologist for good science, and I’m happy to debate with people like apgaylard towards valid conclusions on this and any other topic.

    I’m also a practising doctor, and in general (not particularly about this blog) I do find it frustrating that so much of the debate nowadays cites “evidence” and interprets that as academic science only. The best definition I know of evidence-based medicine is;

    Evidence-based medicine (EBM) requires the integration of the best research evidence with our clinical expertise and our patient’s unique values and circumstances.

    1. By best research evidence we mean valid and clinically relevant research, often from the basic sciences of medicine, but especially from patient-centered clinical research into the accuracy of diagnostic tests (including the clinical examination), the power of prognostic
    markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens. New evidence from clinical research both invalidates previously accepted diagnostic tests and treatments and replaces them with new ones that are more accurate, more efficacious, and safer.

    2. By clinical expertise we mean the ability to use our clinical skills and past experience to rapidly identify each patient’s unique health state and diagnosis, their individual risks and benefits of potential interventions, and their personal circumstances and expectations.

    3. By patient values we mean the unique preferences, concerns and expectations each patient brings to a clinical encounter and which must be integrated into clinical decisions if they are to serve the patient.

    4. By patient circumstances we mean their individual clinical state and the clinical setting.

    [Reference: Evidence Based Medicine (3rd Edition) by Sharon E. Straus, W. Scott Richardson, Paul Glasziou, and R. Brian Haynes (Turtleback – April 29, 2005)]
    http://en.wikipedia.org/wiki/David_Sackett

    I won’t rant on about this, but I would say that I get p****d off at scientists telling doctors what they should do. Science is not guaranteed to be successful at achieving valid conclusions, and scientists do not necessarily hold the moral high ground — but they often think they do.

    A propos of which, is apgaylard actually Adrian Gaylard, a ground vehicle aerodynamicist? If so, how does that qualify you to discuss my job? And why do you put my being the “Medical” Director of ANH in quotes?

    You then say that I peddle homeopathic dogma; apart from the fact that I’m not sure what that would be, you don’t give any details. Never mind; two more important points need to be addressed. Firstly, you say;

    “If you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports recommend the use of homeopathy in preference to a conventional treatment”

    Well, no. If there exist a homeopathic and a conventional treatment with equal, proven, efficacy, then they are both effective, aren’t they? A patient could choose either, and might well pick the homeopathic because of less side-effects (or other cultural reasons). Are you showing your ignorance of medicine here?

    Secondly, you say;

    “Next, the idea of a “statistically significant clinical effect” needs some thought. Note that it does not claim that there is a clinically significant effect. That is, an effect that would be worth having. This is good, because none of the reports show any.”

    Well, again no. You’re referring back to your previous, incorrect, point when you say “ none of the reports show any.” Because what you say is that no homeopathic intervention is better than a conventional one; but they are both effective. And you misread me when you attempt to distinguish between a “statistically significant clinical effect” and a “clinically significant effect”. If I had meant a merely “statistically significant effect” I would have said so; I didn’t.

    This is old ground of course; a number of meta-analyses etc of nutritional interventions (in particular) have tweaked the stats to show a “statistically significant effect” which is not a “clinically significant effect” — the obvious example being the Miller et al 2005 meta-analysis of Vitamin E and all-cause mortality, which found a “statistically significant” increase in mortality (namely a 4% increase in relative risk; RR= 1.04, 95% CI 1.01 to 1.07) for high-doses of vitamin E. It’s not clinically significant, and it’s not relevant here despite your claims to the contrary. And don’t get me started on the other shortcomings of this and other reworkings of previous data. It’s just an illustration of the widespread misuse of science and statistics, towards which I fear you are veering, apgaylard.

    You can track it all back to the great Richard Feynman, who also said “Science is a belief in the ignorance of experts” — which is why apgaylard has as much right as I do to discuss this stuff.

    Feynman’s conjecture states;

    “To report a significant result and reject the null in favor of an alternative hypothesis is meaningless unless the alternative hypothesis has been stated before the data was obtained.”

    Read Gigerenzer G., Mindless Statistics. Journal of Socio-Economics 2004; 33: 587-606, from which this quote is taken, on this and related problems of statistics, and tell me if you can that all systematic reviews and meta-analyses are not, at best, dodgy. That’s why I said that the whole thing is “silly”. All the same, it’s interesting that a recent review of RCTs of homeopathy found 44% to be positive, 6% negative and 50% neutral — almost exactly the same breakdown that I found in the systematic reviews.

    Re Shang et al, I don’t keep the paper copies of Lancet any more, and online I can’t find any mention of the papers you say they included in the next issue. I may be being dumb here but can you show me?

  6. apgaylard said

    Drdowning
    Thanks for your post. It raises a couple of interesting issues.

    It was a little unkind to put the ‘Medical’ bit of your title in quotes. I’m happy to fix that.

    Next, yes, you have identified me. I don’t make much effort to keep my identity secret after all. Does being a ground-vehicle aerodynamicist qualify me to discuss your job? Well, this is a bit silly as you answer this yourself later: “You can track it all back to the great Richard Feynman, who also said ‘Science is a belief in the ignorance of experts’ — which is why apgaylard has as much right as I do to discuss this stuff.”

    I am not keen on telling doctors what to do – not that they’d take any notice of me anyway; but I reserve the right to comment. As a patient I like to see good evidence for treatments proffered; as a tax-payer I like to see good evidence for what I share in paying for; as a scientist I am also interested in medical science conducting itself in harmony with the values and methods of good science. If that upsets you, then that’s unfortunate, but we live in a post-deferential age.

    Your first substantive point seems to be that, “it’s silly to pool systematic reviews in this way, but no sillier than many of the systematic reviews of this and other CAM modalities, which pool highly heterogeneous studies and use arbitrary, often unspecified, criteria to exclude those they don’t like.”

    Do you really have examples where researchers use, “arbitrary, often unspecified, criteria to exclude those they don’t like”? I’d be really interested to explore that issue; it may even yield a productive discussion.

    On the issue of whether pooling systematic reviews is a problem, I agree that pooling highly heterogeneous studies is dubious. I don’t take the view that this necessarily rules out the approach; it depends on what level of heterogeneity can be coped with. However, it’s instructive to note that high-quality trials don’t tend to support homeopathy:

    “… Studies of high methodological quality were more likely to be negative than the lower quality studies …” [Cucherat et al]

    “in the study set investigated, there was clear evidence that studies with better methodological quality tended to yield less positive results.” [Linde, 1999]

    You are quite right that I disbelieve in homeopathy: there’s no reason why it should work and no good evidence that it does. In fact: the better the quality of the evidence the less the effect. I think that it’s reasonable to draw a lesson from that oft made observation. Evidence for homeopathy is not “signal” it’s “noise”.

    I am sure that you have, “seen it fail to work”; have you really, “seen it work sometimes” or was it just coincidence?

    I have no problem with your definition of EBM; however, without item 1 the rest is undermined. Again, we return to the problem that the better quality trials tend not to support the efficacy of homeopathy beyond that of a placebo. I wouldn’t want to see medical practice (practised on me, or funded by me) based on such a foundation.

    Now perhaps I do have an, “ignorance of medicine”; after all I’m not a doctor. I’m a bit worried about you though. Precisely which of the homeopathic interventions described in the NHS database are proven to have an efficacy that equals their conventional counterparts? None of the claims go beyond asking for more research to provide better evidence.

    The strongest endorsement I came across was:

    “… The positive effects of Vertigoheel in vertigo are based on good levels of evidence, but larger trials are required …” [Karkos et al]

    This is hardly in the domain of the proven.

    I do apologise if I mis-read your notion of a, “statistically significant clinical effect”. If you are indeed claiming that some of the homeopathic interventions in the NHS database show both statistically and clinically significant effects can you cite them?

    Your reference to Miller et al. is a bit perplexing. I don’t discuss this study; neither did I claim that marginally statistically significant results are relevant to this debate, as you contend. In fact I made the opposite point:

    “One of the silliest things in Downing’s statement is the citing of studies that he contends show “a non‐significant trend” in favour of homeopathy. If the trend is not statistically significant, then it’s very likely to be nothing more than random noise in the experiment. This is non-evidence, not evidence.”

    I’m afraid that it’s you who contends that there is some relevance in even non-statistically significant results.

    “Read Gigerenzer G., Mindless Statistics. Journal of Socio-Economics 2004; 33: 587-606, from which this quote is taken, on this and related problems of statistics, and tell me if you can that all systematic reviews and meta-analyses are not, at best, dodgy. That’s why I said that the whole thing is “silly”. All the same, it’s interesting that a recent review of RCTs of homeopathy found 44% to be positive, 6% negative and 50% neutral — almost exactly the same breakdown that I found in the systematic reviews.”

    I must admit to some misgivings about meta-analyses – but given the limitations of individual trials, I think there is a need for a tool which helps us look at a complete body of literature in a systematic way. It’s certainly an imperfect tool, but I don’t know of a better one.

    I would be interested in the reference for the, “recent review of RCTs of homeopathy”. I’d be very surprised if 44% of high-quality trials were positive in any meaningful clinical sense. But I’m always happy to be surprised.

    “Re Shang et al, I don’t keep the paper copies of Lancet any more, and online I can’t find any mention of the papers you say they included in the next issue. I may be being dumb here but can you show me?”

    I covered the issue of identifying the papers from Shang’s analysis in a previous post. Here is the authors’ correspondence in response to the initial criticism: Lancet 366 (2005), pp. 2083-2085. They said, “We agree that the larger trials of higher methodological quality (references 46, 55, 71, 80, 84, 94, 96, 97 in webappendix 1 and 23, 25, 45, 53, 66, 72 in webappendix 2) should have been identified, and are grateful for the opportunity to rectify this oversight.” All you need to do is check the reference numbers provided in the original paper. This is hardly declining to name the studies as you alleged.

    Here is the data they made available on-line at Shang’s home institution, Institut für Sozial und Präventivmedizin at Universität Bern:

    List of excluded homeopathy studies.
    Characteristics of homeopathy studies.
    Characteristics of ‘allopathy’ studies.

    I hope that after reviewing the evidence you’ll publicly retract your charge that they, “… declined to name…” the studies of homeopathy they analysed along with your accusation that, “This would seem to be blatant research misconduct.”

  7. dcolquho said

    From David Colquhoun

    Dr Downing is quite right that, in many cases the evidence that is needed for rational practice is simply not there. The answers to many nutritional questions are pretty dubious, for example. That means that the doctor simply has to guess. But it does not mean that they have to make up the answer and assert it as though it were certain (with supplement sales not far behind). Never forget that this approach is what gave us centuries of bloodletting.

    But the upsurge in interest in magic medicine has had one beneficial effect (in addition to stuffing a lot of wallets). Quite a lot of good trials have now been done in areas that are more amenable to RCTs than nutritional studies. Both homeopathy and acupuncture are perfectly amenable to RCTs, despite the protests of those who see their delusions (and incomes) threatened by them. And a sufficient number of good ones have been done to be able to say with some certainty that neither is better than placebo (acupuncture is a pretty theatrical placebo, but works no better than sham).

    This conclusion is, of course, hotly denied by those who make their living from it.

    We can agree with Downing that when there is no evidence, practitioners have to do what they guess to be best (but not to pretend that they know). His big mistake is to ignore evidence when it does exist, simply because it disagrees with his prejudices.

    Homeopaths (who have a vested interest) are organising a hate campaign against Singh and Ernst (who have no vested interest) simply because Singh and Ernst take evidence seriously, but by so doing have come to a conclusion that puts the homeopaths out of business.

    Likewise Barker Bausell, a statistician and experimental designer who was deeply involved in acupuncture research, has no financial interest in one outcome of the experiments rather than another. The fact that his excellent book comes to the conclusion that it doesn’t work offends nobody but those who will lose money (and face) because of that conclusion.

    References
    Singh, S and Ernst E (2008) Trick or Treatment. Bantam Press
    Bausell, B. (2007) Snake Oil Science Oxford University Press Inc, USA
    Colquhoun, D, (1970) Lectures on Biostatistics, Clarendon press, Oxford

  8. apgaylard said

    David:
    Thanks, that puts matters nicely in perspective. Interesting side-note: the on-line pdf of the original article by Downing, “Lies, damn lies … and Professor Ernst’s new book” seems to be down (Still available here though). The press release that announced this attack on Ernst is still available.

    I still can’t believe that Downing accuses Ernst of being a bad scientist when he clearly hadn’t done his homework on the debate around Shang’s paper.

  9. draust said

    AP

    In the context of the anti-Ernst rhetoric from Damien Downing, and his lengthy post above, you might enjoy reading this short article that Ernst wrote a few years back. Especially arguments nos. 1 and 2.

    There is also the question of whether equivocal results from poorly designed and underpowered clinical trials should be used to say “there’s no evidence either way… so in my clinical judgment it might be effective” when discussing things that are totally and utterly implausible. This is the “prior probability” argument, most obvious when one looks at homeopathy. The best post I know on this, explaining just how the CAM people use this “no evidence either way” line, is Kimball Attwood’s masterly exposition here.

    Incidentally, Kimball Attwood, Orac, Stephen “Quackwatch” Barrett, Steve Novella, Prof Michael Baum and our own Ben Goldacre (to name but a few) are all medical doctors, so bad science debunking is quite clearly not just scientists “telling doctors what to do” – a rhetorical cheap shot from Damien.

  10. apgaylard said

    draust:
    Thanks for taking the time to comment. The Ernst paper is particularly good. I couldn’t agree more on the “no evidence either way” tactic. It’s just another way of letting poor quality studies cloud the issue.

    Another Downing cheap shot didn’t dawn on me until I’d replied. He wibbles on about his problems with meta-analyses as if that somehow justified his position. In his diatribe against Ernst he was quite happy to cite this kind of evidence (though dubiously interpreted).

    Referring to the NHS CAM database homeopathy evidence, he comments that it, “…contains 32 systematic reviews and meta‐analyses of its use in a wide range of disorders … Of the 32, 7 report a statistically significant clinical effect from homeopathy, 6 show a non‐significant trend in its favour, and 3 show no effect …”

    So it would seem that he has no problems with this type of evidence as long as he is citing (and misrepresenting) it.

  11. drdowning said

    Dear Apgaylard et al,

    I guess I owe you gents (and any ladies I may have missed) an apology; I haven’t been checking for responses to my posting. I’m impressed by your rapidity – but don’t you folks have jobs?? I’ll post in detail soon, but meanwhile:

    There was a technical error on the new ANH website which caused my original item to be unavailable – my apologies. It is now there at
    http://www.anhcampaign.org/documents/lies-damned-lies-and-professor-ernsts-new-book
    Apologies for the glitch.

    I imagine any of us could think of things that are, or used to be, just as scientifically implausible as homeopathy. Off the top of my head, how about the cytoskeleton? Who would have thought that there was a complex structure within every cell, and molecules like kinesin that WALK along them to deliver proteins? Have a look at this gorgeous animation;
    http://aimediaserver.com/studiodaily/videoplayer/?src=harvard/harvard.swf&width=640&height=520

    Back soon.
    DD

  12. jdc325 said

    Dear Dr Downing,

    Thank you for posting the link to that animation, it was rather like watching a choreographed dance at times – I found actin and microtubule assembly more interesting than I would have imagined.

    I’m only an amateur with an interest in science, and limited knowledge of the subject, but it seems to me that the problem with homeopathy is not simply that it is implausible. If we are to believe that the “higher potencies” work, we have to accept that nothing can do something. The “high potency” homeopathic remedies contain not a single molecule of the active ingredient, as the active has been diluted out of existence. I would have thought that it would therefore be impossible rather than merely implausible for “high potency” remedies to actually work (i.e., have an effect over and above placebo). This is backed up by the trials that have been already conducted into homeopathy.* Comparing the implausibility of something that has been shown not to work with the implausibility of something shown to be real seems a bit, well, dodgy to me. Particularly when the something that has been shown not to work may actually be impossible rather than implausible.

    I’ve read that the highest dilution that can be made without diluting the original substance out of existence is equivalent to 12C or 24X, so all the remedies of “higher potency” than 12C or 24X are actually zero potency. In a vain attempt to explain how homeopathy could work, advocates invented the memory of water hypothesis, which was promptly debunked – if you would describe what water has as a ‘memory’, then this memory lasts only femtoseconds [“liquid water essentially loses the memory of persistent correlations in its structure within 50 fs”, according to doi:10.1038/nature03383] and I doubt that even the swiftest of practitioners could administer a homeopathic remedy in less than 50 fs.
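    The arithmetic behind that 12C limit is easy to check. Here is a rough sketch in Python, assuming (as generously as possible) that you start from a full mole of the active ingredient:

```python
# Rough check of the 12C dilution limit. Each centesimal ("C")
# step dilutes the preparation 1:100, so after n steps the
# dilution factor is 100**n. Start, generously, with a full
# mole of active ingredient.
AVOGADRO = 6.022e23  # molecules per mole

for n in (6, 11, 12, 30):
    expected_molecules = AVOGADRO / 100**n
    print(f"{n}C: ~{expected_molecules:.3g} expected molecules")

# At 12C the dilution factor (1e24) already exceeds Avogadro's
# number, so fewer than one molecule of the original substance
# is expected to remain; at 30C the notion is purely nominal.
```

    That is the Avogadro arithmetic behind the 12C/24X cutoff mentioned above.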

    Have homeopaths ever done anything remarkable that is true? Yes – they’ve managed to turn water into money. [With apologies to Dr Tony Copperfield]

    *[To paraphrase D Colquhoun: A sufficient number of good [RCTs] have been done to be able to say with some certainty that neither is better than placebo. We can already see that, plausible or implausible, homeopathy simply doesn’t work.]

    Cheers,
    jdc

  13. drdowning said

    Dear apgaylard,
    (Is there a real name I could use instead of an email address?)
    It’s my fault, I know, but one of the consequences of not monitoring this regularly is that there are too many points up now for me to be able to answer them all. I’ll do my best though.

    First off, I’m happy to retract the criticism that Shang et al failed to declare their “secret eight”, given that they did list them in an Authors’ Reply, and agreed that they “should have been identified”. But since this is dated 4 months after the paper, those who levelled this criticism at that time were correct, surely? And how does a 16-week interval make it the “very next issue”?

    I’m still puzzled, though, at the charge that I have been “peddling homeopathic dogma”; in what way are you not doing the same on the other side? I think you could have been less ad hominem, btw; in the first few sentences I’m an “apologist”, “rather silly”, guilty of “empty carping” and “philosophical naivete”, and of course “peddling homeopathic dogma”. Isn’t this the language of the propagandist, rather than scientific discourse? And isn’t the pot calling the kettle black?

    You then seek to substantiate this on the basis of my reading of the NLH section on homeopathy. But you haven’t yet addressed my point about how you interpret the data. You say; “if you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports recommend the use of homeopathy in preference to a conventional treatment.”

    To which I said, and still do; “Well, no. If there exist a homeopathic and a conventional treatment with equal, proven, efficacy, then they are both effective, aren’t they? A patient could choose either, and might well pick the homeopathic because of less side-effects (or other cultural reasons).”

    It’s because of your apparent misunderstanding here that I felt justified to open the discussion about who you are, what you do and what you say. Were I to express an opinion on ground vehicle aerodynamics, you would be entitled to point out the limitations of my knowledge and understanding, and the impact of my field of study/expertise/work on that understanding. So, for my part, I do wonder whether your background in a physical science means that you expect to be able to find concrete, black and white, results even in the biomedical sphere, where believe me, it’s always more complex. I’ll reiterate that you’re entitled to discuss this (as a potential patient if nothing else, as you point out).

    Re your point that high-quality trials don’t tend to support homeopathy, well this is just regression to the mean, isn’t it? If you look at the two funnel-plots in Figure 2 of Shang et al, it’s clear that this is true, as expected, for the trials of conventional medicine as for homeopathy (they call it meta-regression).

    But perhaps you can explain something else about Shang et al; in the text (p729 and Table 3) they say; “with each unit increase in the SE, the odds ratio decreased by a factor of 0·17 for homoeopathy and 0·21 for conventional medicine.” In other words the effect to which you referred is greater for conventional medicine than homeopathy. But in Figure 2 the opposite is apparent. What?

    It has also been pointed out that Shang omitted 3 homeopathy studies which had been ranked highly by previous analyses (only one of which is listed in the excluded papers), all of which had positive results, and the inclusion of which would have shifted the slope of the line in Figure 2 to more vertical.

    Shang is thus a good example of the arbitrary, often unspecified, criteria to which I referred. The text states; “When the analysis was restricted to the larger trials of higher reported methodological quality, the odds ratio from random-effects meta-analysis was 0·88 (0·65–1·19) based on eight trials of homoeopathy and 0·58 (0·39–0·85) based on six trials of conventional medicine.” It doesn’t report the odds ratios at any other level of its stated criteria. Even though they subsequently listed the eight (and the six) papers, they haven’t specified the cutoff point for high quality, and the list of included studies in the final eight still looks arbitrary.

    I do think it’s a bit disingenuous of David Colquhoun to suggest that Ernst has no vested interest while practitioners do. He has a vested interest in holding his job and promoting his unit, but even more so in selling books, which has to be potentially more lucrative than seeing sick people – though probably worse than being a successful GP these days! Frankly it’s hard to think of anybody without a vested interest. Doctors or practitioners on either side? Obvious. Academics? Worrying about grants, sometimes looking for an industry consultancy. Journal editors? Appeasing their advertisers and promoting their journal. Ironically our aerodynamicist might be the only one here without a vested interest.

    I thought the review of Ernst & Singh’s book in Nature was interesting. It concludes; “For now, the certainty expressed in Trick or Treatment? mirrors that of the proponents of alternative therapies, leaving each position as entrenched as ever.” Which has resonance here, I think; none of you has so far, to my knowledge, said that if the clinical evidence clearly pointed in favour of homeopathy working (I’m not saying it does, only that it’s starting to trend in that direction) you would accept it. Hence my point about microtubules and kinesin. I’m trying to be impartial here; are you?

  14. apgaylard said

    Dear drdowning,

    Thanks for your continued interest. I’m glad that you’ve decided to “retract the criticism that Shang et al failed to declare their ‘secret eight’”. It looks like I made a small slip with my “very next issue” comment as well. I’ve corrected my blog post; when are you going to correct your critique?

    I’m sorry if my choice of words has offended you. Given that you implied that Ernst is lying and Shang is guilty of misconduct, I didn’t think you’d be that sensitive. Still, now I know and I will take more care in future. After reading Dr Alex Tournier’s presentation from the recent Scientific Research in Homeopathy conference I think it was too harsh to accuse you of “philosophical naivete”; in comparison your philosophical model of science is OK (I’ve edited my piece accordingly).

    I think my word-choice was off when I said you were, “peddling homeopathic dogma”. I should have said “peddling homeopathic propaganda” based on your perpetuation of the homoeomythology surrounding Shang’s paper. (I don’t take the view that there was any ad hominem here. My comments were directed against your published views – and I gave my reasons – not against you as a man. Any small discourtesies were merely incidental.)

    Actually, I think that your objection to my point that, “if you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports recommend the use of homeopathy in preference to a conventional treatment” is well made. I didn’t express myself as accurately as I would have liked; I should have said, “if you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports show that homeopathy works as well as the recommended conventional treatment.” I’ve corrected my post accordingly; thanks for the suggestion.

    Now, where are the homeopathic treatments with “equal, proven, efficacy”? I didn’t find any when I looked through the NLH database; though I may, of course, have overlooked them.

    Even though my academic training was in physics and I now work in engineering, I am not expecting concrete results (see here for some of my views on uncertainty). I do, however, believe that it is possible to come to reasonable assessments of medical interventions on the basis of evidence and determine whether, on balance, they are worth the candle.

    With that in mind, I had a go at looking in the round at the database (here). I appreciate that I am an amateur in such matters and that a professional such as yourself may well come to different conclusions. I’d be happy to debate the detail.

    I particularly don’t agree with the part of your analysis where you talk about “a non‐significant trend” in favour of homeopathy; as if that is something which positively favours homeopathy. Do you really think that non-significant trends count as evidence?

    Also, your comment that, “16 concluded that there was “insufficient data” to draw a conclusion either way” may encourage a reader to think that those sixteen reports actually use the expression “insufficient data”. I couldn’t find these sixteen direct quotations. Could you please point them out to me?

    As for the positive evidence in this database: the strongest endorsement I found concluded that, “larger trials are required.”

    When it comes to trial quality, it’s not really my point per se that high-quality trials tend to be more negative: I’m just pointing out what studies by Linde, Cucherat and Shang concluded. Is it really just regression to the mean? I think that there are also quite a lot of other biases present. However, I guess the key is what mean is being approached. For Shang it was a mean of ineffectiveness (relative to placebo), in contrast to conventional treatment. Even Linde, in his positive meta-analysis of homeopathy concluded that they, “found insufficient evidence from these studies that homoeopathy is clearly efficacious for any single clinical condition.”

    Your comments on Shang are interesting. Perhaps when you update your critique you can use them instead of the “secret eight” story. I’m a bit busy at the moment; but, when I have some time, I’ll have a re-read and see if I can come to any conclusions. However, I suspect that this is a matter you need to take up with the authors. After all, if there are serious problems with the paper they need to be debated in the open literature. In the meantime, could you tell me which three studies were missed out?

    On the matter of whether Shang is “a good example of the arbitrary, often unspecified, criteria” I think you’ll have to concede that the paper does specify the criteria, “Trials described as double-blind, with adequate methods for the generation of allocation sequence and adequate concealment of allocation, were classified as of higher methodological quality.” And for ‘larger trials’: “Trials with SE in the lowest quartile were defined as larger trials.”

    Is that arbitrary? Certainly the ‘quality’ criteria are not. Neither am I persuaded that the size criterion is arbitrary. The use of quartiles, quintiles and deciles is pretty standard. If they divided their dataset more finely then there would have been fewer than 8/6 studies left; which would probably be too few to draw sensible conclusions from. If they divided their data less finely, then they would just be adding in trials containing more bias. I think I am persuaded that this was a reasonable objective measure.
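    For what it’s worth, selecting the lowest-SE quartile is a purely mechanical rule. A minimal sketch in Python (the SE values here are invented for illustration; Shang et al list their own in the study characteristics):

```python
# Selecting "larger trials" as Shang et al describe: those whose
# standard error (SE) falls in the lowest quartile. A smaller SE
# generally corresponds to a larger trial. The SE values below
# are invented purely for illustration.
import statistics

standard_errors = [0.12, 0.45, 0.30, 0.08, 0.22, 0.51, 0.15, 0.40]
first_quartile = statistics.quantiles(standard_errors, n=4)[0]

larger_trials = [se for se in standard_errors if se <= first_quartile]
print(f"cutoff: {first_quartile:.4f}, larger trials: {larger_trials}")
```

    Nothing about the cut-point is tuned to the outcome of the trials, which is the sense in which the criterion is objective rather than arbitrary.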

    Anyhow, with the references now available I would think that other workers could examine this if they wished. Perhaps there’s a paper in this for you?

    Anyway, I think that it’s interesting that, after your comment disparaging meta-analyses, your critique relies on an analysis of those in the NLH database. Don’t you think that this is just a little inconsistent?

    Your critique mentioned a, “recent review of RCTs of homeopathy”. I’m still interested to know the identity of this study. Could you please provide some details? Where is it published?

    Your “point about microtubules and kinesin” didn’t really convince me; the animation was fantastic though! When you say, “I imagine any of us could think of things that are, or used to be, just as scientifically implausible as homeopathy” I struggle to find examples. Some things that are about as implausible might be an extra-terrestrial explanation of UFOs and crop circles; the Loch Ness Monster, the Yeti, and Bigfoot. Things that used to be considered implausible were rendered plausible and then accepted as the result of careful study and the production of high-quality evidence.

    The fact that quite a lot of good quality work has been done which fails to show a benefit beyond that of a placebo is persuasive. I think that if there were profound therapeutic effects here they would have been discovered by now. The contention of classical homeopaths, and others, that increasingly small probabilities of the presence of an alleged therapeutic agent deliver increasing therapeutic effect is so amazing that it would need more than the current collection of irrelevant, poorly executed experiments or wishful thinking currently on offer to persuade me.

    Finally, you ask “I’m trying to be impartial here; are you?” Well I’d say that I am trying. As you point out, I have no vested interests here one way or the other. I’d certainly endorse homeopathic treatments if high-quality evidence clearly, and consistently, demonstrated therapeutic benefits that usefully exceeded that of placebos. As far as I can tell, the opposite is the case.

  15. drdowning said

    Oh God. I’m just off on holiday shortly. Enjoying the debate, but can we accept a bit of a slow-down for the immediate future?

    I still don’t entirely agree with; if you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports show that homeopathy works as well as the recommended conventional treatment. – because it’s always a risk-benefit assessment. I had a little girl as a patient today who has been shown to have 2 gut parasites; an aunt picked them up in S America and gave them to the whole family it seems. Several adults have been devastated by them; one has benefitted from mega-antibiotic therapy. Should we give the girl the same when she has no symptoms? I say no so far, because what does she gain? Risk vs benefit.

    The animation is fantastic, isn’t it?

    DD

  16. apgaylard said

    drdowning:
    Please don’t feel obliged to comment more quickly than your circumstances allow; we all have jobs, families, etc.

    Just a quick point: I’m a bit puzzled that you wish to disagree (to some extent) with my revision on equal efficacy. If I may, I’d just like to point out that I was directly addressing an earlier concern of yours, “If there exist a homeopathic and a conventional treatment with equal, proven, efficacy, then they are both effective, aren’t they? A patient could choose either, and might well pick the homeopathic because of less side-effects (or other cultural reasons).” And I agree; so I don’t think you can really disagree with my amendment without disagreeing with yourself!

    I think that what you are doing is moving the discussion onto new – and entirely valid – terrain. This is an extension to the debate; unless treatments have proven efficacy these points are moot.

    Once the evidence supports the efficacy of treatments (of whatever kind) the risks vs. benefits judgement is clearly very important; we have some common ground here. I’d also like to add that in a resource-limited world (like the NHS) the costs vs. benefits of treatments are very important also.

    In fact, a more sophisticated analysis would move us away from your formula of, “equal, proven, efficacy”. Take the case where two treatments are both effective, but to different (and worthwhile) degrees. Now, if the more effective treatment comes with higher risks it could be rational, depending on the circumstances, to forgo the additional benefit to avoid the additional risk.

    Anyway, enjoy your holiday and debate at your own pace!

  17. apgaylard said

    Dear drdowning,
    Welcome back from your holiday. I’ve been looking at the Shang paper quite a lot recently and having some very productive discussions with Paul Wilson over at the Hawk/Handsaw blog. So, here are my comments.

    “Re your point that high-quality trials don’t tend to support homeopathy, well this is just regression to the mean, isn’t it? If you look at the two funnel-plots in Figure 2 of Shang et al, it’s clear that this is true, as expected, for the trials of conventional medicine as for homeopathy (they call it meta-regression).”

    Actually, if you look at the solid lines on Figure 2 which, “indicate predicted treatment effects from meta-regression, with dotted lines representing the 95% CI.” you’ll see that as bias is minimised (low SE) the predicted treatment effects from homeopathy tend to an OR=1 (the CI certainly crosses OR=1). For conventional medicine it does not. This is really the main point the paper makes: by doing a meta-regression analysis on all 110 matched pairs of trials they found which was the most significant bias (SE – see Table 3), used that to determine which of the “higher quality” trials were least affected and re-analysed this group.

    “But perhaps you can explain something else about Shang et al; in the text (p729 and Table 3) they say; “with each unit increase in the SE, the odds ratio decreased by a factor of 0·17 for homoeopathy and 0·21 for conventional medicine.” In other words the effect to which you referred is greater for conventional medicine than homeopathy. But in Figure 2 the opposite is apparent. What?”

    The asymmetry coefficients that you mention are not actually significantly different for the two modalities. As the sentence before the one you quoted points out, “In meta-regression models, the association between SE and treatment effects was similar for trials of homoeopathy and conventional medicine: the respective asymmetry coefficients were 0·17 (95% CI 0·10–0·32) and 0·21 (0·11–0·40).” You’ll note that the 95% CI’s overlap substantially.

    Comparing Figure 2/Table 3 to the text it looks like a typo here. The most likely explanation is that the numbers are reversed in the text; but only the authors could say for sure.

    What we see here is that the two modalities ‘approach’ the answer at indistinguishable rates. The main point is that the answer is different in each case; for homeopathy the, “finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects”; conventional medicine, however, has an effect beyond that of a placebo. It’s about the destination, not the journey.

    As for the missing trials, are these the ones that Peter Fisher referred to in his Lancet letter? If so, then check the Authors’ reply, “Neither of the two studies mentioned by Fisher and colleagues were regarded as large and of high quality. The influenza trial did not meet our prespecified quality criteria and the asthma trial was available as an abstract only and excluded.” If they are then this answer would seem to be fair enough.

    As for any arbitrary criteria; I’d have to say that – given all the data that’s available – the piece of work looks pretty transparent and objective to me. They provide a list of their quality criteria in the paper and a definition of the size criteria: trials with SE in the lowest quartile. They have provided a list of excluded studies, complete with the reason for exclusion. Their lists of study characteristics for homeopathy and conventional medicine provide a fair summary and say which were considered to have met the “higher quality” threshold. I’d say that there’s enough data here for a medical researcher to go and check their application of the quality criteria.

    Finally, the point about there being no analyses of other sub-groups. This, if you don’t mind me saying so, misses the point of the paper. The analysis of the 110 matched pairs of trials shows that both small (high SE) and lower-quality trials were not reliable. (See Table 3). Given this, the only sub-group capable of giving a robust answer is the ‘larger trials of higher quality’ group. Looking at other groups is merely adding in more biased trials.

    I hope that you are happy with these responses and will be updating your comments on Shang et al in due course.

  18. drdowning said

    Dear apgaylard,

    Well, I’m back and refreshed. There are a lot of issues here now, and again I’m not sure I can cover everything. Let’s go back to the beginning. You reiterated your statement about non-significant findings;

    “I particularly don’t agree with the part of your analysis where you talk about “a non‐significant trend” in favour of homeopathy; as if that is something which positively favours homeopathy. Do you really think that non-significant trends count as evidence?”

    This goes back to my mention of the Miller study. I know you didn’t raise it (I can see that it could look as though I meant that, and I apologise) – I raised it, as an illustration that statistical significance is not the only criterion to use. Recall that I said;
    “… Miller et al 2005 meta-analysis of Vitamin E and all-cause mortality, which found a “statistically significant” increase in mortality (namely a 4% increase in relative risk; RR= 1.04, 95% CI 1.01 to 1.07) for high-doses of vitamin E. It’s not clinically significant, and it’s not relevant here…”. The “despite your claims to the contrary” referred to your comments on statistical/other significance in general.

    Because the 95% confidence interval did not cross unity, the authors were able to claim statistical significance; but that’s for a 4% increase in RELATIVE risk. You have to plough through the paper to find the ABSOLUTE risk;
    “The average death risk across trials in the control groups was 1022 per 10,000 persons.”
    So according to the analysis, Vitamin E increased the absolute risk of death from 10.2% to 10.6%. Is that a big enough effect to make you change your behaviour? Put it the other way round; if Vitamin E had been reported as reducing the absolute risk of death from 10.6% to 10.2%, would that persuade you to take it?
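    The conversion is simple enough to do explicitly. A sketch in Python, using only the two figures quoted above (baseline mortality of 1022 per 10,000 and RR = 1.04):

```python
# Relative vs absolute risk, using the Miller et al figures
# quoted above: baseline mortality of 1022 per 10,000 in the
# control groups, and a relative risk of 1.04 for high-dose
# vitamin E.
baseline_risk = 1022 / 10_000   # ~10.2% absolute risk in controls
relative_risk = 1.04

treated_risk = baseline_risk * relative_risk
print(f"control: {baseline_risk:.1%}")
print(f"treated: {treated_risk:.1%}")
print(f"absolute increase: {treated_risk - baseline_risk:.2%}")
# The "statistically significant" 4% relative increase amounts
# to roughly 0.4 percentage points of absolute risk.
```
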

    The reverse can also be true; failure to achieve statistical significance can hide a real and important effect. Take a (fictional) study that reports, in contrast to Miller; “RR= 1.40, 95% CI 0.95 to 1.70”; that would be unable to claim statistical significance. But the average effect is 10 times that in Miller, so there is, roughly (I’m guessing), a 90% probability of a 40%-sized effect, compared to a 97% probability of a 4% effect.

    If they were horses, which would you back? Many statisticians would say that this adds to the argument for using Bayesian statistics instead (results obtained over a pint across the road from the Statistical Society).
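    The significance test at issue here is simply whether the 95% confidence interval for the risk ratio excludes unity. A minimal sketch (the helper name is mine, not from any of the papers):

```python
# "Statistically significant" at the 5% level, in this context,
# just means the 95% CI for the risk ratio excludes 1.
# (The helper name is illustrative, not from any paper.)
def excludes_unity(ci_low, ci_high):
    return not (ci_low <= 1.0 <= ci_high)

# Miller et al: tiny effect, tight interval -> significant.
print(excludes_unity(1.01, 1.07))  # True

# The fictional trial above: larger effect, wide interval ->
# formally non-significant despite the bigger point estimate.
print(excludes_unity(0.95, 1.70))  # False
```

    Which is exactly why statistical significance on its own says nothing about the size, or clinical importance, of an effect.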

    So, I didn’t say that non-significant results amounted to clear evidence. What I said was;
    “So of those that felt able to draw a conclusion about homeopathy, 81% found it beneficial.” I’ll grant you that it could perhaps be better put as; “81% found it beneficial or likely to be beneficial”, or “81% found definite or probable benefit”.

    At the risk of opening up yet another area of detailed debate, I have looked at your analysis of the NeLCAM database, and here are some comments, and comparisons with my own version. To be honest I’m not sure I can manage to go much further with this, because there are papers that I found and you didn’t and vice-versa, and I think this may be the fault of the database, not ours.

    Your item:
    3. Are the clinical effects of homoeopathy placebo effects: a meta-analysis of placebo-controlled trials
    You categorise as negative, and emphasise the quote;
    “…we found insufficient evidence from these studies to suggest that homeopathy is clearly efficacious for any single clinical condition.” But that is “written by CRD reviewers”, not by the authors of the paper; it’s another layer of opinion on top of the meta-analytical layer.

    I took note of the numerical findings and categorised it as positive;
    “The overall OR (89 trials) was 2.45 (95% CI: 2.05, 2.93) in favour of homeopathy (random-effects model). Results from the various sensitivity analyses indicated that this finding was robust. The summary OR for trials categorised as high quality (26 trials) was 1.66 (95% CI: 1.33, 2.08).” – which are the findings of the actual paper.

    5. Classical homoeopathy versus conventional treatments: a systematic review
    You class as inconclusive, and quote;
    “Thus the value of individualised homoeopathy relative to allopathic treatments is unknown.”

    I noted this statement and classed it as positive;
    “Two of the six studies (both non-randomised) suggest that homoeopathic remedies are superior to conventional drug therapy. Two trials suggest the opposite. The remaining 2 studies suggest both interventions to be equally effective (or ineffective).”

    This of course goes back to our ongoing discussion of what to compare homeopathy to – you recently said;
    “Actually, I think that your objection to my point that, “if you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports recommend the use of homeopathy in preference to a conventional treatment” is well made. I didn’t express myself as accurately as I would have liked; I should have said, “if you are concerned with either treating patients, or recovering from illness, what you really need to know is how many of these reports show that homeopathy works as well as the recommended conventional treatment.”

    Well, despite what the author of item 5 (who, if you insist on describing me as peddling homeopathic propaganda, you will have to allow me to describe as well-known for peddling anti-homeopathy propaganda) says, an average from the studies included in this would suggest that they are equally effective (or ineffective).

    6. Complementary and alternative medicine in fibromyalgia and related syndromes
    I didn’t even include this in my analysis.

    30. Homeopathy for postoperative ileus: a meta-analysis
    You class as negative, quoting;
    “However several caveats preclude a definitive judgement. These results should form the basis of a randomised controlled trial to resolve the issue.”

    I class as positive based on;
    “There is evidence that homeopathic treatment can reduce the duration of ileus after abdominal or gynaecologic surgery.” – and on the numerical findings. I’m sorry, but “ These results should form the basis of a randomised controlled trial…” is the academic’s equivalent of “Gissa job”, or “Gissa grant” perhaps.

    Is this enough for now? I have to go feed the cats.

  19. apgaylard said

    Hi drdowning. Glad to see that you’re back.

    On the statistical points, my view is that I’m not impressed by statistically significant results of vanishingly small practical significance. Neither, given the biases present in the kinds of studies that we’re discussing, am I comfortable with results that only just manage to achieve significance.

    However, nothing in this is inconsistent with seeing failure to meet significance as a negative result. To paraphrase Fisher, the experiment has been given its chance to disprove the null hypothesis and has failed. It is certainly not evidence for anything – unless you really, really want it to be.

    I think that R Barker Bausell provides some sound advice when he cautions that if there are, “no statistically significant (or reliable or clinically significant) differences between the placebo (or sham) group and the treatment group, that says everything consumers need to know about whether they should seek the CAM intervention in question.”

    On the question of whether we should be considering the merits of a Bayesian, rather than Frequentist, approach I’m quite open-minded. However, considerations of prior probability (plausibility) certainly add to my currently sceptical judgement of homeopathy. (Steve Novella’s chain of implausibility is a pretty powerful analysis.)

    On the NLH CAM analysis, I think we could end up in a very long debate; probably longer than either of us has the time or inclination to pursue. I’d just like to say that my motivation was to try and get an overall feel for the strength of the case for homeopathy. I think that it’s quite possible to make a case for putting individual pieces of evidence in different boxes. I do doubt though that there is a fair case for enough re-binning to change the overall distribution that much.

    On your specific examples:
    3 – The comment that you attribute to the CRD reviewers is actually one of the authors’ conclusions. The exact phrase, “we found insufficient evidence from these studies that homoeopathy is clearly efficacious for any single clinical condition” appears on page 834 (under the heading ‘Interpretation’) and, “we found little evidence of effectiveness of any single homoeopathic approach on any single clinical condition” can be found on page 840. [Linde et al. Lancet 1997; 350: 834–43] I think that it’s quite reasonable to take that conclusion as a negative; at best it could make this piece of evidence inconclusive. Add in the cautions offered by Bandolier, and I couldn’t in all honesty see this as a positive. Particularly when what I am interested in when I look for treatment is an intervention that will be effective for a specific condition.

    5 – This is a really interesting case. Six studies, evenly split between superiority, inferiority and indeterminacy – for comparisons against conventional treatments – would seem like a tie. This might be argued to imply that, in terms of efficacy, it doesn’t matter which intervention someone chooses (at least for the range of conditions considered). One fly in this ointment is that this is a kind of ‘mini-Shang’ where the analysis is trying to come to an overall view of the modality, not a view on the efficacy of the treatments for specific conditions.

    The big problem with this is that the DARE summary points out, “Two of the six studies (both non-randomised) suggest that homoeopathic remedies are superior to conventional drug therapy.”

    If we are to use the Frequentist tools of Fisher (and others) then we have to accept, as Fisher said, “The theory of estimation presupposes a process of random sampling. All our conclusions within that theory rest on this basis; without it our tests of significance would be worthless.” [Fisher RA. Development of the theory of experimental design. Proceedings of the International Statistical Conferences 1947;3:434–39.]

    From a straightforward scientific perspective, lack of randomisation in these kinds of trials is necessarily a fatal flaw. So when the author offers, according to the DARE summary, “Only few comparative clinical trials of homoeopathy exist. None is free from serious methodological flaws. Thus the value of individualised homoeopathy relative to allopathic treatments is unknown”, it seems very fair to call it inconclusive.

    (It’s also interesting to note that classical homeopathy fared no better than any other kind in Shang et al. (see particularly the Authors’ reply).)

    30 – This is another very interesting review. The results are summarised by DARE as follows:

    Six RCTs (n=1,076).

    All studies: time to first flatus WMD between homeopathy and placebo = -7.4 hours (95% CI -4.0 hours, -10.8 hours), p<0.05. This effect is likely to be clinically relevant.

    Excluding studies of low quality (n=676): time to first flatus WMD between homeopathy and placebo = -6.11 hours (95% CI -2.31 hours, -9.91 hours), p<0.05.

    Only studies of <12C potency (n=660): time to first flatus WMD between homeopathy and placebo = -6.6 hours (95% CI -2.6 hours, -10.5 hours), p <0.05.

    Only studies of 12C potency or more (n=416): time to first flatus WMD between homeopathy and placebo = -3.1 hours (95% CI -7.5 hours, 1.3 hours), not statistically significant.

    This could only be considered a positive result if the kind of homeopathy one is interested in is the kind where there could be some medicine in the medicine. The Kentian brand practised in the UK/USA seems wedded to ‘high’ potencies. In this case, for potencies beyond 12C, the result is not statistically significant. Now, I could make a case for calling this inconclusive. However, taking the results on face value flatly contradicts a central tenet of classical homeopathy. This tipped it into the ‘negative’ box for me.

    So, in these instances, I really can’t see a case for changing my analysis.

    Anyway, thanks for that. What did you make of my comments (here and particularly here) on your criticisms of Shang et al? When are you going to edit and re-issue your apologia?

    All the best,

    Adrian
