A canna’ change the laws of physics

Scotty, The Naked Time, stardate 1704.3, Episode 7

How Do You Interpret a CAM trial?

Posted by apgaylard on January 18, 2008

I have just read R. Barker Bausell’s excellent “Snake Oil Science – The Truth About Complementary and Alternative Medicine”. If you are interested in why CAM trials are likely to give false-positive results, or want an in-depth look at the placebo effect, I thoroughly recommend it.

If you have not yet got around to it, here is a flavour of the insights provided: some guidelines the author kindly offered me on how to interpret CAM trials (based on answers he has given to interviewers’ questions).

“Guideline #1:

First and foremost, do not rely on journal press releases or second hand accounts.

Guideline #2:

The article itself should be approached with extreme skepticism. Skepticism is a recognized and valued scientific attribute in and of itself, but it is especially important when reviewing the results of a clinical trial based exclusively upon the investigators’ version of what they did, what they found, and why they found it.

This is even more crucial for positive CAM results, because these practices are unsupported by conventional biological rationales and the best available evidence from high-quality clinical trials indicates that they are no more effective than their placebos.

Guideline #3:

Give more credence to trials published in well-known medical journals, and give no credence at all to those published in CAM journals.

Guideline #4:

Give more credence to trials conducted in English-speaking and Scandinavian countries. A systematic review published several years ago found that investigators from certain countries (most notably China and Russia) produced only positive results in their acupuncture trials.

Guideline #5:

Begin by reading only the study’s abstract. Most trial abstracts contain four sections: (1) background, (2) methods, (3) results, and (4) discussion. Feel free to ignore the first (because it seldom contains any useful information) but always ignore the fourth (because it contains only the authors’ spin regarding what the results mean). If the third section says that there were no statistically significant (or reliable or clinically significant) differences between the placebo (or sham) group and the treatment group, that says everything consumers need to know about whether they should seek the CAM intervention in question.

Guideline #6:

Read the study’s method section for the sole purpose of answering the following three crucial questions:

(1) Did the trial employ a randomly assigned placebo-sham control group indistinguishable from the therapy being evaluated? If such a control is not included, even a properly performed CAM trial cannot avoid producing positive results, because of the placebo effect. (“No treatment,” “Wait list,” “Treatment-as-usual,” or “Conventional Treatment” control groups don’t count. Neither does the use of another CAM therapy as a comparison group, since some of these elicit stronger placebo effects than others.)

(2) Did the trial employ at least 50 participants per group? (Small sample sizes inflate experimental results.)

(3) Did 25% or more of the participants drop out of the study before it was over? (If so, this invalidates the study regardless of its results.) If even one of these three conditions is not met, the trial is worthless.

Or, if all this seems like a pain, nothing would be lost if all CAM research reports were simply ignored in the first place. The best available evidence suggests that they are all nothing more than cleverly disguised placebos anyway.”
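Guideline #6 amounts to a simple three-question screen. As a rough illustration only (the function name and interface below are my own invention, not anything from the book), it can be sketched in Python:

```python
def screen_cam_trial(has_indistinguishable_placebo_control, n_per_group, dropout_fraction):
    """Apply Bausell's three methodological questions (Guideline #6).

    Returns a list of failed criteria; an empty list means the trial
    passes this minimal screen (which says nothing about efficacy).
    """
    failures = []
    if not has_indistinguishable_placebo_control:
        failures.append("no randomised, indistinguishable placebo/sham control")
    if n_per_group < 50:
        failures.append("fewer than 50 participants per group")
    if dropout_fraction >= 0.25:
        failures.append("25% or more of participants dropped out")
    return failures

# e.g. a trial with a proper sham control, but only 40 participants per
# group and 30% dropout, fails two of the three checks:
print(screen_cam_trial(True, 40, 0.30))
```

Note that this is a pass/fail screen, as Bausell describes it: a trial failing any one check is discounted entirely, not merely down-weighted.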

Sensible advice from a research methodologist who was “Research Director of a National Institutes of Health-funded Complementary and Alternative Medicine Specialized Research Center”.

It will certainly change the way that I read trial data in the future.


7 Responses to “How Do You Interpret a CAM trial?”

  1. dvnutrix said

    On balance, do you think that book is well worth the expenditure of any recently-received book vouchers?

    Does it have any useful information in it about effect sizes? I’m noticing them more and more in some literature, but there are few straightforward explanations.

    I’m also getting increasingly annoyed by the number of authors who refer to “positive results” rather than saying that the results did not achieve statistical significance. Or people who use the word “significant” in ‘scientific’ reports when they mean it in the everyday sense rather than the statistical one. Like Humpty Dumpty:

    `When I use a word,’ Humpty Dumpty said, in rather a scornful tone, `it means just what I choose it to mean — neither more nor less.’

    `The question is,’ said Alice, `whether you can make words mean so many different things.’

    `The question is,’ said Humpty Dumpty, `which is to be master — that’s all.’

    Alice was too much puzzled to say anything; so after a minute Humpty Dumpty began again. `They’ve a temper, some of them — particularly verbs: they’re the proudest — adjectives you can do anything with, but not verbs — however, I can manage the whole lot of them! Impenetrability! That’s what I say!’
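The distinction the comment draws, statistical significance versus everyday “significance”, is essentially the difference between a p-value and an effect size. A minimal sketch of that difference (my own illustration, not from the book; the means and standard deviations are made up), using only the Python standard library:

```python
import math

def cohens_d(mean1, mean2, sd_pooled):
    """Standardised mean difference: an effect size, independent of sample size."""
    return (mean1 - mean2) / sd_pooled

def two_sample_z_p(mean1, mean2, sd, n_per_group):
    """Two-sided p-value for a two-sample z-test with equal group sizes."""
    se = sd * math.sqrt(2.0 / n_per_group)
    z = (mean1 - mean2) / se
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

# The same tiny effect (d = 0.1, trivial by Cohen's benchmarks) at two
# different sample sizes:
d = cohens_d(50.5, 50.0, 5.0)
p_small = two_sample_z_p(50.5, 50.0, 5.0, 30)    # n=30 per group: p well above 0.05
p_large = two_sample_z_p(50.5, 50.0, 5.0, 2000)  # n=2000 per group: p below 0.01
```

The point: with enough participants a clinically trivial difference becomes “statistically significant”, which is why an effect size is needed alongside the p-value, and why calling a non-significant result “positive” is misleading.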

  2. apgaylard said


    Thanks for your comment. I found “Snake Oil Science” very useful; but I suspect that you are more expert in these matters than I am. The text has a distinct bias towards the author’s experience: acupuncture research and the placebo effect on pain.

    The book presents a biochemical explanation for the placebo effect based on endogenous opioids. I found this interesting; the author doesn’t seem to over-claim for the likelihood of this hypothesis, but I’m not familiar with this literature.

    There’s not much on effect sizes.

    Finally, thanks for the Humpty Dumpty quote; I may well make use of it here.

  3. apgaylard said

    Also worth a read: Researcher criticizes alternative medicine

  4. hcn57 said

    I recently read this book, and am taking a class in statistics. I actually received points on an exam last week, on a question about double-blind experiments, by recommending that the professor read “Snake Oil Science”!

    Since it was a library book, I appreciate that your summary is online. Thank you.

  5. apgaylard said

    Thanks for your comment. I’m glad to have been of some help. The real thanks are due to Professor Bausell for providing this summary.

  6. […] am far from the first commenter to pick up on problems with alternative journals. AP Gaylard highlighted some suggestions made by R Barker Bausell in his book “Snake Oil Science – The Truth About […]
