Unsupported Medical Claims:
Errors and misinterpretation in alternative medicine research
By Jeanine DeNoma
If you didn’t pick up the Oregonian August 24, you may not have noticed Oregonians for Rationality’s fortuitous timing for Dr. Wallace Sampson’s talk at the Multnomah County library in Portland.

     Speaking to a standing-room-only crowd, Sampson, a practicing oncologist and medical instructor from Stanford University, discussed the quality of the scientific research backing “alternative” medicine claims.

     The Oregonian had just begun a five-part alternative medicine series to coincide with the national naturopathic physicians conference being held in Portland that week. The series ran on the front page of the Living section and featured a different “alternative” therapy each day. Topics included herbal medicines, massage therapy (body work), acupuncture, chiropractic, and such “healing alternatives” as aromatherapy, homeopathy and magnets. The series repeated practitioners’ claims, substantiated by research or not.

     O4R provided at least a small voice for caution about these claims. An Oregonian staff reporter attended Sampson’s talk and a short article about it appeared August 26 beside a two-page report on acupuncture. On August 28, staff writer Jann Mitchell excerpted Barry Beyerstein’s list of why patients erroneously believe a therapy works and quoted the Skeptical Inquirer and Saul Green of the National Council Against Health Fraud.

Words of caution

     “What I have to say may shock you because of all the publicity around so-called ‘alternative’ medicine ... [Alternative therapies] aren’t ‘alternative’ or ‘complementary.’ They are actually things that don’t work,” Sampson stated in his opening remarks. He advised the audience not to believe everything it reads in the popular media and suggested it keep a skeptical mind-set when confronted with new claims.

     “That doesn’t mean you can’t use them. Or that you can’t believe they work. Or that you can’t feel better if you use them. But if you are looking at them scientifically and making the argument that they work, you’d better do a lot of homework because we have an awful lot of data to disprove it,” said Sampson.

Evaluating research quality

     “It would be very difficult for anyone reading an article, watching a TV report or even listening to a scientific lecture to know if what is being presented is valid,” said Sampson. One reason is that conclusions cannot be drawn from a single study. One needs to know what all the studies taken together indicate. Also, the quality of the data being presented must be examined and this is very difficult without a background in medical research and statistical techniques. Over the years scientists have developed a methodology for evaluating the quality of medical research, said Sampson. Before discussing specific examples, he reviewed the criteria scientists use to evaluate “alternative” medical claims.

  Is the claim consistent with our present knowledge? Is it plausible? If it violates known laws of physics, pharmacology, or other basic or applied sciences it fails the plausibility test and, while it might not be rejected out of hand, it would require exceptionally strong evidence to be accepted.

  Was the experiment set up and done properly? Were the patients properly randomized? How were they assigned to treatment and control groups? Were all groups treated the same except with respect to the treatment being tested? Was the study properly blinded? Were there enough patients to give the study adequate statistical power (say, a 90% chance of detecting a real effect if one exists)? Were the data analyzed properly?
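
The “enough patients” criterion is a question of statistical power. As a rough illustration (mine, not Sampson’s), the standard normal-approximation formula for comparing two group means estimates how many patients each arm needs; the function name and the numbers plugged in are hypothetical:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.90):
    """Approximate patients per group needed to detect a true difference
    `delta` between two means with standard deviation `sigma`, from the
    normal approximation n = 2 * ((z_{1-a/2} + z_power) * sigma / delta)**2."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return math.ceil(n)

# An effect half the size of the patient-to-patient variation takes
# roughly 85 patients per group for 90% power at 5% significance.
print(n_per_group(delta=0.5, sigma=1.0))  # 85
```

Note the sensitivity: halving the detectable effect quadruples the required sample size, which is one reason small trials of small effects are so often inconclusive.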

  Did the control group behave as expected? Sometimes a statistically significant difference occurs, not because the treatment group did better, but because the control group did worse than expected. Was the control group representative of the population? Did too many controls drop out of the study? In medical trials this happens when controls realize they are not in the treatment group; it occurred in the early tests of antiviral drugs for AIDS. It is an indication that the experiment was not properly blinded or that the subjects were not properly randomized.

  Was the end-point determined before the study was conducted? Retrospective analysis, or picking the end-point after the fact, amounts to the magic trick called “magician’s choice,” said Sampson. In its simplest form the magician asks the audience to pick one object from among a group. The magician then “proves” he correctly predicted which object the audience would select by pulling an envelope from his pocket and displaying his previously recorded prediction. Of course he does not tell the audience about the other envelopes placed around the room on which he had written the names of the non-selected objects.

     “If you torture the data long enough it will confess,” said Sampson, quoting the Dutch mathematician Jan Willem Nienhuys. All kinds of correlations, meaningful or not, can be found. In a study that measures 20 end-points, there is a 50/50 chance of finding at least one positive correlation. There is, of course, the same chance of finding a negative correlation. If the researcher reports the positive correlation but not the negative one, that is the equivalent of a magician’s trick and it is statistically invalid.
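
Sampson’s 50/50 figure is the standard multiple-comparisons arithmetic. A sketch assuming 20 independent end-points, each tested at a conventional significance threshold (the thresholds are my assumption, not a detail from the talk):

```python
def p_any_false_positive(endpoints, alpha):
    """Chance that at least one of `endpoints` independent tests comes up
    "significant" by luck alone, when each has false-positive rate `alpha`."""
    return 1 - (1 - alpha) ** endpoints

# Twenty two-sided 5% tests: a spurious hit somewhere is more likely than not.
print(round(p_any_false_positive(20, 0.05), 2))   # 0.64

# Counting only spuriously positive correlations (one tail, 2.5% each)
# gives roughly the 50/50 odds Sampson describes (about 40%).
print(round(p_any_false_positive(20, 0.025), 2))  # 0.4
```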

  Do the conclusions follow from the data? Were the results over-interpreted? For example, if the differences between the treatment and control groups are small and the methods used to measure those differences are not particularly sensitive, then a statistically significant difference may be due to things other than the treatment’s effectiveness. If the study claims the treatment could save millions of lives, despite the small statistical difference, that is over-interpretation, said Sampson.

  Were alternate explanations considered and weighed? The best scientists carefully consider alternative explanations for results.

  Was there ideological contamination? Often this can be detected by certain catchwords. For example, homeopathy literature is filled with references to an “essence” transmitted in water. Homeopathy was an ideology developed by the German physician Samuel Hahnemann in the early 1800s in response to the purging and bleeding common at the time. Hahnemann was quite a successful physician, in part, because of the real dangers posed by these common practices; however, homeopathy has no basis in reality. Homeopathic remedies are prepared by serial dilutions until there is no chance that even a single molecule of the therapeutic agent remains in the solution. The greater the dilution, supposedly, the stronger the remedy. Papers with a homeopathic ideology commonly use terms such as “essence” and “potency” to explain their results.

     Phrases like “a different level of reality” or “a different culture’s reality” are red-flags that a study may be contaminated with post-modernist ideology. Post-modernists reject science, saying there is no objective, measurable reality, and instead claim that each person’s own reality is equally valid.

  Do other studies show similar results? Ineffective agents will show inconsistent research results. An effective agent will consistently show positive results across many studies. If a treatment works, it will work consistently; but if after many tests the results are very scattered, there is either something wrong with the way the studies were done or something wrong with the treatment.

Survey on alternative medicine use

     The introductory paragraph of almost every popular article on alternative medicine states that 33% of all people in the United States use alternative medicine. That number is taken from an article published in The New England Journal of Medicine (Eisenberg, et al. 1993), said Sampson. What did that study actually show? Sampson displayed a pie chart from the paper which showed seven percent of the study’s respondents saw both a medical doctor and an alternative provider and three percent saw only an alternative care provider.

     These numbers are similar to survey results in the 1950s and a 1976 Roper poll. If you ask “Did you see a provider or go for some other kind of treatment?” the answer is a consistent 10%, said Sampson, although he believes that number is higher now given the publicity alternative medicine is receiving in the popular media.

     So where did the 33% come from? The opening statement of the article’s abstract says, “One in three people surveyed reported using at least one form of unconventional medicine in 1990.” Alternative medicine proponents lumped together things such as weight loss clinics, group psychotherapy, some exercise programs, prayer—things that are either already part of the medical system or are what Sampson calls “people just being people.”

     “This is an example of a misrepresented study that has taken on a life of its own because of how the press continues to report it,” said Sampson.

Homeopathic treatments

     In a famous, or possibly infamous, study published in Nature, researchers in the laboratory of J. Benveniste claimed to have found test-tube evidence for the effectiveness of homeopathy (Davenas, et al. 1988).

     In this study, basophil cells were stained with a purple dye that is lost when histamine is released by the cell. (Basophils release histamine when the IgE antibodies on their surface are cross-linked, for example by anti-IgE antiserum. This histamine response is what causes the symptoms of hayfever and asthma.) The cells were then treated with serial dilutions of anti-IgE ranging from 10⁻² to 10⁻¹²⁰, and were evaluated for basophil color loss.

     According to the authors, they found “successive peaks of degranulation [color loss] from 40 to 60% of the basophil, despite the calculated absence of any anti-IgE molecules at the highest dilutions. Since dilutions need to be accompanied by vigorous shaking for the effects to be observed, transmission of the biological information could be related to the molecular organization of the water.”

     One clue that something was seriously wrong with this study, said Sampson, was that the authors found similar histamine responses at concentrations of 10⁻³, which is close to normal pharmacological concentrations, and at 10⁻¹²⁰, a dilution at which no anti-IgE could still be present. “This doesn’t make much sense,” said Sampson. “Ordinarily the curve would show the greatest response at the highest concentrations and drop off to zero and remain there after a few dilutions; it should not keep bouncing up and down.”
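
The “no molecules left” point is simple Avogadro arithmetic. A sketch with an assumed 1 molar starting solution and a 1 mL sample, both hypothetical round numbers:

```python
AVOGADRO = 6.022e23  # molecules per mole

def expected_molecules(initial_molar, dilution_exponent, volume_l=1e-3):
    """Expected count of solute molecules in `volume_l` litres after an
    `initial_molar` solution is diluted by a factor of 10**dilution_exponent."""
    return initial_molar * 10.0 ** -dilution_exponent * volume_l * AVOGADRO

print(expected_molecules(1.0, 2))    # ~6e+18: plenty of molecules
print(expected_molecules(1.0, 30))   # ~6e-10: almost certainly zero molecules
print(expected_molecules(1.0, 120))  # nothing of the original agent remains
```

Past roughly the 10⁻²¹ dilution the expected count falls below one molecule, so any effect observed at 10⁻¹²⁰ would have to come from something other than the anti-IgE itself.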

     The authors wrote, “The repetitive waves of anti-IgE degranulation were reproducible, but the peaks could shift by one or two dilutions with each fresh sequential dilution.” In other words, each time they repeated their test with a fresh sample they got different peaks.

     Sampson plotted the range over which the peak response might occur for each dilution. Looked at this way, it became apparent that for any given dilution the researcher might get either a peak response or no response at all. The results are not reproducible, said Sampson. A homeopath using these data would not know which dilution to give, since any dilution could give either a maximum response or none at all. Furthermore, if these results were true, they would suggest that the homeopathic preparation triggers as much asthma as an active dose of anti-IgE.

     The researchers’ methodology was also problematic. Color loss was estimated visually by the microscopist rather than by an objective method such as radioactive labeling or automated color reading, both of which are widely available and give accurate results, said Sampson. The mistakes associated with subjective bias were confirmed when John Maddox, Walter Stewart and James Randi visited Benveniste’s lab; the results of their investigation were published in Nature (Maddox, et al. 1988). Since then a number of researchers have reported that they could not repeat the reported results.

     “But wouldn’t you know,” said Sampson, “I picked up a textbook on homeopathy written in 1996...and on the cover were the graphs from this experiment. Throughout the book were all the scientific explanations for why homeopathy is true and speculation on why this ‘essence’ of homeopathy exists. It’s related to the structure of water, H+ and OH- ionization, and column formation in water.”

     In another example, researchers tested the effectiveness of homeopathic remedies, administered in conjunction with standard medicine, for treating acute childhood diarrhea in a rural community in Nicaragua (Jacobs, et al. 1994). The researchers relied on caretakers’ recollections instead of medical observers, leaving the reliability of the data in question. They found no difference during the first 24 hours of the illness, when an effective treatment is most important for saving a child’s life. On the second day, however, there was a small but statistically significant difference between the treatment and control groups.

     The authors selected this one positive end-point from the six possible and reported positive results. This is again analogous to the magician’s trick, said Sampson. Because of their methodology and the small difference they observed, the effect could easily have been due to chance alone. Sampson also pointed to a number of other problems with this study. The authors claimed the treatment could save five million children worldwide. This is an example of a study which over-interpreted its results, said Sampson.

Group therapy and breast cancer

     Stanford researchers published a fairly famous study in The Lancet which showed group therapy significantly affected the long-term survival of women with metastatic breast cancer (Spiegel, et al. 1989). However, there are a number of problems with this study, said Sampson. First, it was a retrospective study using data originally collected to examine how well women were adjusting to their cancer. As you can imagine, said Sampson, group therapy helped women adjust. Ten years later, however, researchers decided to go back and examine survival rates. They found that, of the 86 original patients, three women from the treatment group were alive after ten years, but all the patients in the control group had died within five years.

     “The first thing my partner noticed when he examined this paper,” said Sampson, “was that there were no long-term survivors in the control group. We knew that about 10% of all women with metastatic breast cancer are alive ten years after the discovery of the spread.” For some reason, this 10% had not been represented in the control group, but had appeared in the treatment group.

     Sampson combed other studies and data sources. A study by Rosen, et al. (1989) found eight percent of women with metastatic breast cancer survive longer than 10 years. Data from the National Cancer Institute, which gathers data from all tumor registries and hospitals, showed that 19 to 22% of women with metastasis are alive at the end of five years and eight percent are alive after 10 years. The Stanford study, therefore, should have found about 20% of the women in both the treatment and the control groups alive at five years. Since all of the women in the control group were dead by four years, it clearly was not representative of the general population.

     Sampson could find no proof of any mistakes in the assignment of patients to either group, although his information rested on one author who recalled, many years after the study was done, that patients had been assigned to each group at a two-to-one ratio. Sampson compared the actual randomization to that expected in a two-to-one assignment and found the likelihood that the actual assignment occurred by chance was between three and six percent. Sampson then asked, “What are the chances of having no patients in the control group survive longer than four years?” This statistic was calculated to be about 0.16%.
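
The 0.16% figure is consistent with a simple binomial calculation. The inputs below are my own back-of-envelope reading of the numbers in the talk (roughly a third of the 86 patients as controls, each with the population’s roughly 20% five-year survival rate), not values Sampson reported using:

```python
def p_no_survivors(n_controls, p_survive):
    """Probability that none of `n_controls` patients survives, if each
    survives independently with probability `p_survive`."""
    return (1 - p_survive) ** n_controls

# ~29 controls, each with a ~20% chance of surviving five years:
print(round(100 * p_no_survivors(29, 0.20), 2))  # 0.15 (percent)
```

An all-dead-within-four-years control group is thus wildly improbable if the controls really resembled the general patient population.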

     “We had group therapy in our practice before the Stanford study was started,” said Sampson. “I regard it as part of mainstream treatment...The problem is that the authors of this paper and alternative medicine proponents quote this paper as proof that ‘alternative medicine’ increases survival. I disagree with that...And this was one of the key studies that justified the formation of the Office of Alternative Medicine.”

Studies on acupuncture

     A study from the University of Minnesota on the effectiveness of acupuncture for treating alcoholism (Bullock, et al. 1989) was one of the major studies behind last year’s NIH consensus conference, which released a statement that acupuncture works for a number of conditions, including pain and the nausea of chemotherapy and pregnancy (Thompson, 1997). The problem with the Minnesota study is stated in its own abstract: only one of the forty controls completed the study. Patients often drop out if they learn they are not receiving the treatment, although the authors claimed to have taken precautions against this. The study, however, was not blinded. All patients, both from the treatment and the control groups, were seated in adjacent chairs, and the acupuncturists were free to mingle among themselves and with the patients.

     “You do not do things this way in a controlled study, especially in an acupuncture study,” said Sampson. “You must blind the patient, the therapist, and the person making the evaluations. And you cannot have any other contact. Otherwise, it may be the personal contact and not the acupuncture which is producing the results.”

     There are other flaws in this study, said Sampson. The patients in the treatment group received accurately placed acupuncture needles while the control group’s needles were placed 5 mm away from the real acupuncture point. The problem, said Sampson, is that the ear is one of the most highly variable structures in human anatomy. “You just cannot be that accurate in the placement of acupuncture needles—so a control point is really a real point. My challenge to them is to show me that they are accurately looking at a point on a patient’s ear from one patient to another and on the same patient when they come back for the next treatment.”

     Another study supporting acupuncture was recently published in the Proceedings of the National Academy of Sciences. There are three acupuncture points relating to the eye on the lateral aspect of the foot. Cho (1998) and his colleagues showed that stimulating these points increased brain activity, as measured by a functional MRI scan, which measures blood flow to particular areas of the brain, just as if the retina of the eye had been stimulated with light. This study was used to support the claim that acupuncture at these points is therapeutic for eye disease.

     First, Sampson pointed out, there is no evidence that brain activity in an area controlling an organ has any effect on a disease of that organ. Just as light on the eye cannot cure eye disease, stimulation of an eye acupuncture point, even one which produces activity in the visual cortex, is a very long way from demonstrating a cure for eye disease.

     Second, Sampson criticized how the authors explained results that went contrary to their expectations. The researchers recorded sequences of brain activity. The brain patterns recorded from eight of the 12 volunteers during acupuncture were similar to the patterns observed when the visual cortex was stimulated, but four of the volunteers had patterns that went in the opposite direction. The authors explained these by saying, “The difference between [the two patterns] appears to be caused by the two types of reactions dependent on individual physical characteristics described in oriental medicine. These two types are known in oriental medicine as ‘yin’ and ‘yang’ characters...the acupuncture response of the yin character exhibits the same direction in signal intensity variation as the visual stimulation...Whereas the yang character shows an opposite behavior.”

     “I don’t point this out because I’m some kind of xenophobe, because most physicians in China and Korea are also scientific physicians and they don’t believe in these things either,” said Sampson. “But this stuff is random. What you also need to know about functional MRIs is that if you think about light, your occipital cortex lights up. Tell me that those students did not know why they were getting acupuncture!”

Quality analysis on acupuncture studies

     Sampson graded the quality of published acupuncture papers on a scale from one to ten according to the criteria given above. He then plotted the percent difference between the control and the treatment groups on the ordinate and the paper’s grade on the abscissa. The papers receiving the highest grades showed no effect due to acupuncture treatment, as did some of the papers that scored poorly. But all of the papers reporting that acupuncture had a positive effect had received a low grade, and the larger the effect, the lower the grade. These papers had serious flaws: too few patients, poor controls, no randomization, and so on, said Sampson. This work has been repeated by other researchers (Riet, et al. 1990a; Riet, et al. 1990b).

     “There are some very good studies in standard medicine and some very good studies in alternative medicine. The very good studies in alternative medicine almost always turn out to be negative. As a matter of fact, show me a study that is really positive and I’ll show you why it probably is not,” said Sampson.


Bullock, M. et al. 1989. Controlled trial of acupuncture for severe recidivist alcoholism. The Lancet June 24, 1989.

Cho, Z. H. et al. 1998. New findings of the correlation between acu-points and corresponding brain cortices using functional MRI. Proceedings of the National Academy of Sciences 95:2670-3.

Davenas, E. et al. 1988. Human basophil degranulation triggered by very dilute antiserum against IgE. Nature 333:816-18.

Eisenberg, D. et al. 1993. Unconventional medicine in the United States: prevalence, costs, and patterns of use. New England Journal of Medicine 328(4):246-52.

Jacobs, J. et al. 1994. Treatment of acute childhood diarrhea with homeopathy. Pediatrics 93:719-23.

Maddox, J. et al. 1988. High dilution experiments a delusion. Nature 334:287-90.

Riet, et al. 1990a. Meta-analysis: acupuncture and pain. Journal of Clinical Epidemiology 43:1191.

Riet, et al. 1990b. Meta-analysis: acupuncture on addiction. British Journal of General Practice 40:379.

Rosen, P. et al. 1989. Examination of the natural history of breast cancer. Journal of Clinical Oncology 11:66-69.

Spiegel, David et al. 1989. Effect of psychosocial treatment on survival of patients with metastatic breast cancer. The Lancet [Oct 14] 888-91.

Thompson, Dick. 1997. Acupuncture works: an NIH panel endorses the ancient Chinese needle treatment—at least for some conditions. Time [Nov 17, 1997] v. 150(21):84.

© 2001 Oregonians for Rationality