Even after ten years of federally funded research, very little is known about whether most alternative therapies work at all and which methods are safe. Scientists and alternative practitioners are unanimous in their agreement that more research is necessary, but sometimes disagree on what it takes to prove something works. In these excerpts from their interviews, NCCAM director Stephen Straus, Harvard University's Tom Delbanco and Marcia Angell, alternative practitioner Andrew Weil and medical historian James Whorton discuss the scant evidence we have and why it's so hard to come by.
* * *
In 1992, the National Institutes of Health created an Office of Alternative Medicine. Its purpose was to stimulate interest and research in alternative and complementary medicine on the part of the other NIH institutes. This was a small office with about two million dollars to begin with.
Over the next several years, it was successful to the extent that it stimulated work and provided seed money for research projects in many institutions, and it created a legitimate place of discussion within academic institutions for research on complementary and alternative medicine.
By 1998, [the Office of Alternative Medicine] was deemed inadequate to meet the public's need to understand complementary and alternative medicine. The responsibilities of that office were elevated, with more resources and more independent authority, by creating a new institute at the NIH called the National Center for Complementary and Alternative Medicine (NCCAM).
To give you some sense of the transformation, in 1992 [the Office of Alternative Medicine had] a budget of $2 million, and a staff of a handful of people. In 2002, [NCCAM had] a budget of nearly $105 million and a staff of about seventy people. We're funding two to three hundred research and research training projects around the United States. We're collaborating with many of the other NIH institutes, [which are] funding an additional $120 million worth of work in the field. This is a huge evolution in ten years.
How does NCCAM decide what to study?
We study approaches that will address the most important public health indications. We study approaches that we can study within the ethical constraints and within the resource constraints. We study modalities that are the most promising, and we don't study things that are the least promising. We don't have enough money, enough time, or enough resources to study everything.
What about critics who say the researchers are biased in favor of finding that alternative therapies work?
...My work is about encouraging good research. We have the responsibility to engage in research in complementary and alternative medicine in an open-minded fashion, not to enter into it with a prejudice that it must work, or it must not work. In fact, I would not have taken this job if it were about debunking things and proving that they can't work. I took this job because it's about the opportunity to prove that there are new things, that we can expand opportunities for health care. But along the way, we have to accept the possibility that some will not be good, some will not be safe, some will not be better than existing therapies. The public is going to have to accept it. People who have vested interests, they're going to have to get over it. …
People are impatient with how long the scientific research is taking. Why aren't there more definitive answers?
There's nothing in science about a final answer. Science is about replication. Now, very large, multi-centered trials only become believable if they're built on a body of smaller studies with the same kinds of outcomes. So we don't do a very large trial without that foundation of evidence. …
Why have there been so few conclusive studies in the ten-year period since the Office of Alternative Medicine was created? Are there specific challenges to studying alternative therapies?
You don't engage in a multi-million dollar trial until you know exactly what kinds of patients to recruit, how long to follow them, how to deliver the treatment, in what dose, and what the best measure would be of their improvement. That's what one does in pilot studies throughout biomedical research. We do have about a dozen studies that are large, multi-million dollar trials, and a couple of hundred studies that are much smaller.
The conclusions of the smaller studies will be this approach looks promising, we now know how to study it, this is ready for larger study. Or the conclusion may be the data are not that encouraging and we're not sure yet that this is an area ripe for more investment. That is a perfectly satisfactory and appropriate scientific outcome. The public should be satisfied with that.
There are thousands of potential approaches in complementary and alternative medicine, and we can't take them on all equally. There are some things we can't even study, not just because they're too expensive, but because it would be unethical to do so. Individuals may choose themselves to do a certain treatment but I can't ethically ask a research subject to withhold a life-saving treatment for an alternative treatment.
There are unique challenges to doing research in complementary and alternative medicine, on top of the traditional challenges of research in general. One of the challenges has to do with standardization of the product or the practice. If you want to study an herb, whose herb are you going to study?... We have to guarantee what we're studying because at the end of the day, if the study is positive, we're going to want people to be able to replicate our experience and expect beneficial outcomes. If the study is negative, we're not going to wish for the then-justifiable criticism that we studied the wrong product, or you think you studied echinacea but really, there was no echinacea in the bottle. In some cases we have to contract with manufacturers specifically to make the product to proper standards.
* * *
The issue isn't that complementary, alternative, and integrative medicine has been disproven, but that very little research in the last decade has been done to prove or disprove its efficacy or its cost-effectiveness. We have so few studies that have been done to a level of excellence that they can authoritatively and definitively tell us: this does work, this doesn't.
I think the impatience on the part of the American public is totally understandable. Many of these therapies have been around or are available off the shelves now, and people rightfully ask themselves and their doctors in white coats and nurses and pharmacists: does it work, yes or no? We don't have the information yet. So I understand the impatience.
But the researcher in me, not the clinician or the human being, says: now wait a second, some things take time...Things don't grow well when they're terribly rushed. The same is true for large clinical trials. Often the first thing that has to happen with a large clinical trial is asking the right question. Does ginkgo do anything to change the mental competence of people with dementia? Important question. Do acupuncture, chiropractic, and massage, and having those therapies available to people in our workforce, change the course of acute low back pain or chronic low back pain for people on assembly lines? Good questions. To do those studies often takes five, six, seven, ten years.
The reason it takes so long is as follows. A pilot study could prove that people would enter an experiment and that you could collect information. It might take you two or three years to convince the federal government or a foundation or yourself to do the study and get twenty people in the study. So you might have to write a proposal, get approval from your hospital and the institutional review board, get funding for it. Let's say that that takes two years. Then we have to do the pilot study and prove to those who are skeptical that you can do the experiment and that from that experiment you can design a larger experiment to answer more definitively whether it works, yes or no. Let's say that takes two years.
Then you have to reapply and get funding for the larger experiment, which in the case of a back pain trial might be two or three million dollars. You might have to go to the National Institutes of Health or other parts of the government and say, I now want two and a half million dollars to do this properly. By the way, I and my colleagues and my statistician and the acupuncturists and the chiropractors are not making a dime on this, we're just salaried clinicians as part of a research venture, but it's going to cost two and a half million dollars. That might take two years.
Then you have to recruit the patients...It took us over two years to recruit four hundred and fifty patients [for our back pain study]. It'll take us another year to carefully analyze the data. We will finish our analyses, submit them for publication, and it will probably take another year before publication, because the paper will go through multiple [revisions] and the constructive criticism of our thoughtful colleagues who say, convince me. Then it's published...It's taking nine years, seven years, that's how long it sometimes takes.
Are the rules for designing studies of alternative techniques different than studies involving conventional medicine? Is there a double standard?
Absolutely not. I don't think there's a double standard. You have to apply the same rules of evidence to any therapy regardless of its pedigree, whether it comes from another culture, involves herbs, acupuncture, massage, or is a new drug or device. I think the rules of evidence are the rules of evidence.
...My approach to every project we do, whether it's a large clinical trial or a policy statement or a fellows experiment, is I try to find the most sophisticated skeptic who is open-minded to guide us in the science, in the methodology, to set up an experiment where that skeptic could work as a co-investigator. I say to him or her, "If we do it this way and if it turns out positively, will you then be convinced or almost convinced?" If I can't find that group of people to get it to that level of excellence, it's not worth doing. It just won't make an impact. Bad science doesn't impress anyone.
...Now it's not enough to just say, it looks like this acupuncture helps people with that condition. We have to understand the mechanism, because in the absence of a mechanistic explanation for something that is foreign, it's rare that the therapy will be embraced by the community. So we need both evidence that it really works beyond a shadow of a doubt and some sense of how.
Are there any clinical trials you know of that have proven alternative therapies effective?
I think there's only been one federally funded, NIH-sponsored large clinical trial. It involved St. John's Wort for depression. Specifically, it looked at whether St. John's Wort was better than a standard antidepressant, with both compared to a placebo, for depression. And in that instance none of the groups did better than any of the other groups.
...I turn the question around and say has any large federal trial that had adequate sample size and power shown that complementary therapies do not work? The historical example might be Laetrile. Twenty years ago there were people who believed that Laetrile, which was a derivative of the apricot pit if I'm not mistaken and had cyanide in it and was therefore potentially dangerous, might cure breast cancer, or cancer in general. There was an outcry on the part of the American public to insist that Laetrile be available in our cancer centers.
It was through a series of randomized controlled experiments paid for by the government that we unequivocally proved that Laetrile did not benefit patients with cancer, and then it went away. So I think the power of science should not be understated to prove or disprove. When it's done properly with all of the stakeholders bearing witness, I think the market then speaks again and says, "We won't buy that, it's been shown not to work, let's not use it any more." So I think that's the classic example of large studies that disproved a claim that something worked, in this case Laetrile.
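The phrase "adequate sample size and power" has concrete arithmetic behind it. As an illustrative sketch only, not tied to any trial discussed here (the function name and example response rates are hypothetical), the standard normal-approximation formula estimates how many patients per arm are needed to detect a difference between two response rates:

```python
from math import ceil
from statistics import NormalDist

def per_arm_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-arm enrollment needed to detect a difference between
    response rates p1 and p2, using the normal approximation for comparing
    two proportions (two-sided significance level alpha, target power)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)

# Detecting a 30% vs. 50% response rate at 80% power, two-sided 5% level:
n = per_arm_sample_size(0.3, 0.5)  # 91 patients per arm
```

Shrinking the difference to be detected (say 20% vs. 30%) pushes the requirement into the hundreds per arm, which is part of why well-powered trials cost millions of dollars and take years to recruit.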
I think the job of the academician and the government and somebody who is a dispassionate evaluator of these therapies is to say, let's get the best evidence we can, as quickly as we can to inform every patient of their options. That's what I do as a doctor every day. That's what any health care provider is supposed to do, give the best advice about what somebody's options are and then bow to their decision as to what they choose to do. …
Why is there so much criticism of the government's decision to fund studies that evaluate alternative therapies?
I think there is the argument, which I understand, that says, the government's investing in this area smacks of advocacy of complementary therapies. I would confront that and respectfully disagree. The government's investment in this area is advocacy for the application of science to distinguish useful from useless, safe from unsafe, and look at the dollars and where they could be best spent. …
* * *
What do you think about the research that's being done at the NIH's Center for Complementary and Alternative Medicine or the other research organizations?
Well, there's no disease, is there, in "The Osher Institute" or the "Division for Complementary and Alternative Medicine"? So it's not focusing on a disease, it's focusing on a collection of methods or philosophies or approaches. And what that collection has in common is that it hasn't been demonstrated by scientific research. You would think the answer would be, "Well quick, let's demonstrate it. At least if there's any possibility that it will work let's demonstrate it." But that isn't what's happening. It's more of an advocates' center. They're assuming that at least some of these things work, never mind the evidence. And you shouldn't do that in a research institution, you should never say never mind the evidence. Now they don't say it in quite those words, but that's how they've been behaving.
Eisenberg says to be patient.
Right. Well if you don't start it'll take forever, that's my answer to that. I have yet to see the starting of it, never mind the finishing of it. But in fact you know if you really are going to put a lot of effort into a line of research, it's amazing how fast it can be. Just look at what we learned about HIV in a short, short time. … It doesn't take as long as it's been taking. The Office of Alternative Medicine was set up in 1992. In 1998 I looked at what had come out of the first thirty grants, those grants were awarded in 1993. You would certainly expect that after six years you would have some results. There was nothing. Out of the thirty grants there were maybe twenty-eight little abstracts on the website and out of those there were nine papers and none of those nine papers was a controlled clinical trial of an alternative remedy that would give you an answer. I mean it was just incredibly bad research. I can hardly dignify it by calling it research. So that was six years after the first grants were awarded.
Now, to be sure, these were small grants, and that's often what's argued. Well, they were small grants, they were just $30,000 apiece. Still, that's no excuse for doing bad research. Little research, yes. Bad research, no. Since David [Eisenberg]'s own piece in 1993 there's been plenty of time to start to do good research. Now we've just begun to see some good research. It has not come, so far, from the National Center for Complementary and Alternative Medicine; it's come from the other institutes at the NIH. It's come from the National Institute on Aging, from the National Institute of Mental Health. From the NCI. From the Mayo Clinic. From Canadian provincial governments. We've seen studies now start to come out--well-designed, large studies--and they've all been negative. They have all been negative. A negative study is one that compares a new treatment with a placebo or an old treatment and finds no effect of the new treatment. Finds that it's no better than the old treatment or nothing. And the good studies, the credible studies that have begun to come out, have been negative.
… There are a lot of people now who have a vested interest in complementary and alternative medicine, who sell it essentially, and so they have to say yes there's a study but it's in German and nobody's ever seen it that shows that homeopathy cures cancer. They have got to say that. They can't just say we're pushing something and there's never been any evidence for it. But you have to consider the source. They're not going to show you the evidence. They're going to allude to it. … Such people are going to have to say that there's research out there. But when they point to it, it's either very, very poor, or it somehow disappears. It doesn't really exist.
Are they lying?
Are they lying? I don't like to say that. But it's not the truth.
Are there any studies showing an alternative therapy to work?
I know of no good study that has shown an alternative remedy to work. They've been flawed in some way, the ones that I've read that show that they work.
What do you think about the report from the White House Commission on Complementary and Alternative Medicine?
The White House Commission on Complementary and Alternative Medicine, before which I testified last year, consisted of people the majority of whom had financial ties to complementary and alternative medicine. They had vested interests. They were practitioners of complementary and alternative medicine. Or they owned businesses that offered complementary and alternative medicine. … That is an obvious conflict of interest. By definition you can't have a dispassionate, disinterested evaluation. So this committee was set up, the deck was stacked by our new age president [Bill Clinton] from the beginning… .
When I testified, there were two days, I think, and the first day was coverage and reimbursement and the second day was "Does it work?" That's backwards. They are sure that it should be covered and reimbursed… In fact the premise of the sessions on "does it work" was: what are new methods for studying it? Because clearly the old methods won't do. The old clinical trials, that's probably not applicable to something so mysterious as complementary and alternative medicine, so we need a new methodology. And when I spoke before the commission I said no, the old methodology is fine. But the premise was, surely we must need a new methodology.
You don't believe that because these are new techniques we need a new way of studying them?
…They try to suggest that it's somehow too complicated to study scientifically, that there has to be some other way of studying it, and that's wrong on several counts. The scientific method is not just a sort of flavor of the month like chocolate or vanilla, it is the only way you can find out about the natural world and our bodies are part of the natural world. It's not something that you choose to do, it's something that you have to do if you want to find the answer. And the scientific method is just a matter of formulating a hypothesis that can be tested, designing a study that will test it, collecting objective, verifiable data, and then drawing the conclusions and only those conclusions that follow from that data. That's all it is. But it's powerful. And it's the way we study all new treatments in medicine and it's responsible for the great flowering of scientific medicine in the 20th Century. …
What do you know about Dr. Nicholas Gonzalez and his controversial cancer treatment regimen?
I read the New Yorker story about the Gonzalez therapy and I've read some about what it is. And this is, it seems to me, another instance of preying on desperate people. And it also shows the problem with the anecdote or the testimonial: it's no way to find out whether something works or not. You must do a proper trial to find out whether something works. The problem with the anecdote, saying that so-and-so got better, threw away his crutches, his tumor shrank, is that you don't know what would have happened if he had not gotten that treatment. You don't know what caused the tumor to shrink. Now it's true that in cancer of the pancreas a tumor is very unlikely to shrink on its own. But the natural course of many diseases is to wax and wane. And for those diseases it really is a problem. You don't know how many people got that treatment and didn't get well, because you only hear the success stories. …
What the anecdote tells you is this is something, if it's a well documented anecdote--and that's another problem, the alternative medicine gurus get letters from people who say, "I had cancer, my doctor gave me six months to live, and I drank carrot juice and now I'm alive and it's three years later," and maybe he's dead, that always adds to the story. You don't know whether he had cancer in the first place. You don't know what other treatment he was getting. So that's not documented, that's more of a testimonial. And a lot of complementary and alternative medicine is testimonials. Just "I know somebody who knew somebody who said this," without any effort to find out whether it's true.
The anecdote is a little bit different. It can be very well documented, and reputable medical journals--the New England Journal of Medicine occasionally would publish an anecdote if it's very well documented. If we got a study that said I have a patient and, just an anecdote, but he had cancer of the pancreas and I gave him carrot juice the tumor shrank and he's well and it's three years later, and they could document all of those facts, we might publish it. We would ask questions: how many people with cancer of the pancreas did you give carrot juice to? Who didn't get better? We would want a lot of documentation of it, but we might publish it. But we wouldn't publish it as evidence that carrot juice cures cancer of the pancreas. We would publish it as something that had to be looked at in a proper study. You would say this is a hypothesis-generating anecdote. It means this is something worth looking at, let's design a study. Say a small trial of people with cancer of the pancreas, and add carrot juice to the usual regimen in one half of the population, and don't do it in the other half and see how they do. And you would begin to look at something. A lot of accepted treatments come about in exactly this way. Theory, anecdote, and then the proper studies. So that's what an anecdote is good for. It is not proof of an effect at all. It's what it is. …
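The progression described above, from a hypothesis-generating anecdote to a proper randomized comparison, can be made concrete. This is a minimal sketch under stated assumptions, not any actual trial's analysis; the outcome data and function name are hypothetical. Patients are randomized into two arms, a binary outcome is recorded for each, and a permutation test asks how often chance alone would produce a gap between arms as large as the one observed:

```python
import random

def permutation_test(control, treatment, n_perm=10000, seed=42):
    """Two-sided permutation test: how often does randomly relabeling the
    pooled outcomes produce an arm-to-arm difference in success rate at
    least as large as the one actually observed?"""
    rng = random.Random(seed)
    observed = abs(sum(treatment) / len(treatment) - sum(control) / len(control))
    pooled = list(control) + list(treatment)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # one random relabeling of patients into arms
        t, c = pooled[:n_t], pooled[n_t:]
        if abs(sum(t) / len(t) - sum(c) / len(c)) >= observed:
            extreme += 1
    return extreme / n_perm  # approximate p-value

# Hypothetical outcomes: 1 = improved, 0 = did not improve.
control = [1] * 10 + [0] * 40    # 20% improved on the usual regimen alone
treatment = [1] * 20 + [0] * 30  # 40% improved with the add-on treatment
p = permutation_test(control, treatment)
```

A small p-value says the gap is unlikely to be chance, which is exactly the result that would justify a larger confirmatory trial: the anecdote supplies the hypothesis, and the randomized comparison supplies the evidence.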
Some proponents of alternative and complementary medicine point out that many accepted scientific medical techniques have not been proven by double-blind clinical trials, either.
There are a number of standard medical treatments that have not yet been demonstrated in rigorous clinical trials. Usually there's some biological plausibility to them. For example, it's never been demonstrated in a rigorous clinical trial that a prostatectomy will extend your life in cancer of the prostate, as opposed to doing nothing. But there's some plausibility to that. If you have an organ that has a cancer in it, you take it out. And also, most people in standard medicine, at least most scientifically based doctors, believe that that's a flaw, that you really shouldn't be giving treatments that have not been demonstrated to be effective in a clinical trial. And less and less is that being done. Certainly new treatments are now almost always demonstrated in a clinical trial. New drugs must by law be demonstrated to the FDA to be effective. These old-time practices that were just based on anecdotes are beginning to go the way of the dinosaur, and certainly people in standard medicine know that they should. People are going back now and looking at old treatments--hysterectomies for various indications where it wasn't really clear that that should happen, prostatectomy as I mentioned is now being looked at--going back and kind of cleaning up some practices that were accepted into the standard repertory without sufficient evidence.
Alternative medicine doesn't have quite that attitude toward it. It often promotes wildly implausible remedies and does so without saying, but we really should have the evidence on this, we really should subject it to a clinical trial. So I think there's a little bit of a difference. …
* * *
I'm a practitioner and a teacher, I'm not a researcher. … I think there's an enormous difference between medical scientists and medical practitioners. Often the researchers really have little understanding of the world of the practitioner. I was just on a panel with some Nobel laureate medical researchers and their whole thing is evidence, evidence, we don't want just Andrew Weil's feeling that soy is good for prostate cancer, we want evidence. My response to that is, that's great, I'm all for getting evidence, but the reality is that practitioners are working in the trenches of uncertainty. We never have all the evidence and we have to make decisions, often life or death decisions, with inadequate information. I think the best we can do is learn how to play odds and make good guesses.
One concept that I'd like to get across is that I think it would be very useful if people, instead of just calling for evidence based medicine, if we conceived of a sliding scale of evidence that would work this way: that the greater the potential a treatment has to cause harm, the stricter the standards of evidence it should be held to [in terms of] efficacy. … That kind of sliding scale of evidence would simplify things because we don't have the resources to test everything that's out there in the world of alternative medicine using randomized controlled trials. And practitioners are always going to be guessing and operating in the midst of great uncertainty. …
Randomized controlled trials produce one kind of information. There are other kinds of information. You can rate information in terms of quality. We can do outcome studies to get an idea of how effective therapies are. The first consideration always should be harm. If a therapy is not harmful, why not experiment with it, why not try it? Especially if conventional medicine doesn't have anything great to offer.
But how can we know they're not harmful before they've been tested?
There are various ways of estimating the harmfulness of a therapy. One is to look at what it contains. Does a plant have anything in it that looks harmful: does it have a class of molecules which look like molecules that we know to be harmful? What does the epidemiology show? If a plant has been used for centuries in various cultures, and there is no epidemiological evidence of toxicity, that's reassuring. You can try things on yourself, which is a strategy I've always used. I would never give a patient something that I didn't first try on myself. …
What is the scientific method?
To my mind the scientific method begins with controlled observation. That is you observe carefully, you note down what you observe. If you suspect a cause and effect relationship, you try to hold all variables constant and manipulate one and then observe to see if there are changes at the other end. So I think it is basically careful observation and experimentation that's then also compared to the experience of others who are trained in that method.
What about the "gold standard," the randomized clinical trial? Why are so few alternative therapies proven by RCTs?
I think a practical limitation of the gold standard is that it's gold: it's very expensive and we just don't have the time or money or resources to test everything by this method. So I think we have to prioritize. This business of running chelating agents into people's veins and saying it's going to remove plaque. That is being done on such a scale, people are paying so much money for it, and it so pushes the buttons of medical regulators, that's one you really want to do a definitive, large-scale randomized controlled trial, to once and for all set it to rest, either it does work or it doesn't work. That would be great.
But for all the other stuff, we don't have time to do that, so we have to have other methods of estimating how things work. Now one of the attitudes that I run into in the research community that just drives me up the wall is people who dismiss what they call anecdotal evidence. And I have challenged some of these people in public to strike the word "anecdote" from the medical vocabulary.
I think it is a trivializing word. If you want to call this uncontrolled clinical observation, that's fine with me. The fact is that the scientific method begins with raw observation. You notice something out there that catches your attention, that doesn't fit your conceptions. You see it again. That gives you an idea that generates a hypothesis which you can then test. It is this kind of uncontrolled observation which is the raw material from which you get hypotheses to test in a formal manner. If you dismiss all that stuff, if you drop it into a mental wastebasket labeled "anecdote," you cut yourself off from the raw material of science. …
But the traditionalists would say anecdotes aren't important enough to publish.
And my response to that would be that you could apply the same thing to these randomized controlled trials. One big randomized controlled trial on St. John's Wort in major depression, that wasn't worth publishing or putting out in the public eye either, it's useless information. … The St. John's Wort studies—there are actually three of them now-- all looked at St. John's Wort in major depression. No one has ever claimed that St. John's Wort is useful in major depression. … It should be looked at in mild to moderate depression which is what it's used for. …
Jim Dalen, the former dean at the University of Arizona and currently the editor of the Archives of Internal Medicine, … said that as he was nearing retirement, and having watched what happened, he was more convinced than ever that what affected scientists' responses to new information was not so much the content of the information as its source. If information came from sources that they weren't used to paying attention to or respecting, their tendency was to ridicule it or dismiss it. An example that he used was the observation, originally an uncontrolled clinical observation, that aspirin had a clinically useful anticoagulant effect. That observation was first made by a general practitioner in southern California in the 1960s, who published his observations in a journal of family practice and suggested that aspirin was useful as a heart medication to reduce the risk of heart attack. It took almost thirty years before the conventional medical community came around to that point of view. And the reason it had been ignored for so long was that it came from a general practitioner and was published in a journal of family practice, not in a journal of cardiology. …
Let's take the example of osteopathic manipulation for recurrent ear infections in kids. I wrote up my experience with an old osteopath in Tucson who was a master of a method called cranial therapy. He would give a kid one treatment of this very noninvasive, inexpensive method, and the child would never get another ear infection. I saw this again and again. So based on my experience there, I have recommended in my writings and on my website that kids with ear infections should go to osteopaths and get this method done. …
After something like twenty years of trying to get the research community interested in this, we finally set up some tests of doing this with kids with recurrent ear infections. We were unable in those tests to prove that this had an effect. The problem is, I'm sure there's an effect there. We couldn't capture it in the way we set up the experiment. Part of the problem is that osteopaths have very individual styles of doing this. Were the osteopaths that we used, were they doing it right? Was it the same kind of method as this old man that I saw? I don't know.
The other question is, if you're recommending this, as I said, the first consideration: can it hurt people? No. I think this is a completely benign treatment method. I've never seen any disasters as a result of cranial therapy; it's very gentle. The second question is, is it preventing people from getting legitimate treatment? In the case of recurrent ear infections, all we've got is antibiotics--one after another--or putting tubes in the ears. There has been increasing questioning in the pediatric literature of the value of giving recurrent cycles of antibiotics. It looks as if the kids who get the most antibiotics wind up having the most and worst ear infections. So given that situation, I can see no harm in recommending to people that they try cranial therapy from a qualified osteopathic physician, even though we have not yet been able to verify this in a randomized controlled trial. …
I think that the people in the field of alternative medicine are true believers. They're not as skeptical as those who've been trained in the so-called scientific method have been taught to be. People who market something are always emphatic, and that's true in all kinds of medicine, including alternative medicine. ... It bothers me that the kind of passion that comes with the true believer can be misleading.
... I think that if people are spending zillions of dollars on homeopathic medicines and are being told that these medicines are better than placebos and will really make a difference, then those medicines should be subject to real scientific scrutiny using scientific techniques in which I and others will believe. I don't believe for a bit that you can't study these kinds of therapies just as well as you study quote, "traditional" scientific therapies. I think because of the social and economic forces going on nowadays it's probably valuable to study some of them. But an awful lot of the studies we do these days, whether in scientific medicine or alternative medicine, are ridiculous.
What in the field of alternative medicine do you consider worthwhile to study?
I would certainly study what's safe and unsafe, in terms of interactions with other medicines that people take...
Do people's assumptions or expectations about the alternative medicine being tested affect research outcomes?
One of the things alternative medicine is teaching us very well is how to study things better. Let's take expectations. Let's say we have a hundred people and we divide them into two parts, and we give fifty of them a homeopathic medicine and we give the other fifty just the pill without the water sprayed on it, and we say, which does better? That's called a double-blind, randomized trial: neither the doctor nor the patient knows which pill he or she has taken.
But let's say by random chance in one group forty of the people came into the experiment thinking they were going to get better. And in the other group only ten people came into the experiment thinking they were going to get better. Guess which group is going to do better? The group with forty. We've never bothered to control for that in many of the scientific experiments we do when we try out quote, "real" medicines.
The alternative medicine people have reminded us of this, to control for expectations when you go into an experiment. So the actual scientific method of how we study things, whether it's the new penicillin or the new wonder drug in alternative medicine will be helped by our thinking more clearly about how we ask questions, how we control for expectations, for thoughts about placebos, et cetera. In that sense this has been a very healthy thing for the scientific world.
Is the availability of grant money partly responsible for the surge of interest in alternative medicine research?
Medical scientists and academics are taught from day one that they'd better be entrepreneurial if they're going to make it. No different from business or being a television reporter. Get your niche, get into a new area, and make it big, fast, while it's new. Alternative medicine is a beautiful example of that. There've been articulate, charismatic, bright, caring people who smelled this early, went after it, became prominent, and have done very well. They've probably done well financially, I don't know about that.
They're certainly national figures, they're household words. Some of them are beginning to ask some really interesting questions. Others I think are more on the speaker tour and writing books than they are being true academics. You can say that of many people in academia, in any form of academic life. It will shake out over time that there will be serious scientists in this field asking the serious questions within the field, most of which I believe will dwell on safety and the placebo effect.
One of the things I've noticed is that the difference between a real scientist and what I would call a not-so-real scientist is the person who comes in and says, "I have an idea that such and such is the case in this circumstance and I want to find out if I'm right or wrong." That person doesn't care whether he's right or wrong. The difference between that and a person who says, "I have an idea that this works and I'm going to prove it," is enormous. The latter person is biased. He or she is going to probably come out with a study that says this does work.
The other person says I'm going to write a good paper, I'm going to add to knowledge, and I don't care whether the paper says this doesn't work or this does work. That's the person that I think will have a great career in science and there are very few of them so far from what I can see in the field of alternative medicine. That's what makes me nervous. …
Do the resources being thrown at alternative medicine these days stop good research from happening?
It's very hard to do good research when the people who pay you to do it hope, want, expect it to come out in a certain way. Harvard Medical School, where I am, got an enormous amount of money from a philanthropist who believes strongly in alternative therapies. I think it will be hard for the researchers [whose work is] supported by that person to write a hundred papers in a row saying it isn't worth a damn. …
… Within allopathic medicine there is a very strict gold standard of what constitutes proof: the double blind, randomized controlled trial. That doesn't necessarily work well for alternative therapies. There are a range of reasons that alternative practitioners will point to that say "This is not a model that works that well for us." For example, that it applies to populations and it tells you that a certain drug will be of benefit to most people within a population, but it doesn't tell you how it's going to affect an individual, and all of our treatments historically from the beginning have been oriented toward finding the best therapy for this individual patient.
To take naturopathic medicine as an example, naturopaths, if they treat an inner ear infection, would treat it very differently in one patient from the way they'd treat it in another, depending on the patient's age, the patient's diet, other aspects that I can't really comment on because I'm not a naturopathic practitioner. But they could have thirty or forty different therapies that they might use depending on the patient, and they would use those therapies in different combinations, whereas in most cases if you've got an ear infection or your kid's got an ear infection, you take him to an MD, and he's going to give him amoxicillin. There are these wonderful randomized double blind studies that show amoxicillin works against infections, so everybody gets the same treatment. A naturopath would argue, "Our approach can't be crammed into that box. It just doesn't apply, because we individualize our therapies, and you would have to have so many different trials that you wouldn't have enough people within each category to feel you had a statistically significant population you were working with."
But a research scientist would say that's a cop-out.
Well, I understand their position and I can see why to them it would appear to be a cop-out. And I think many, many allopathic doctors would maintain that saying that the double blind study doesn't apply to alternative therapies is a cop-out. It allows alternative doctors to escape from being subjected to this scientific method of evaluating therapy. Most allopathic therapies can be studied that way. Whether or not it is a cop-out I'm unable to say; I'm not an epidemiologist. These are questions that are beyond my area of expertise.
I can say that alternative practitioners do not see it as a cop-out. They sincerely believe that because therapy is individualized, because the style of the practitioner in interacting with the patient contributes to the healing, you can't apply these other methods. They claim to be very open to developing other evaluation techniques that will be accepted by allopathic medicine, and there's a lot of thought being given to that among alternative practitioners. But at this point I don't think anyone's come up with an alternative to the randomized double blind trial that is acceptable to allopathic practitioners. But there have been studies done of different alternative therapies using that method that indicate efficacy. The trial of homeopathy against childhood diarrhea that was done in the early 1990s appears to be a very well designed study, and it shows a definite benefit from homeopathy over just basic nursing care.
But there aren't a lot of randomized double blind trials that show alternative treatments work.
The Office of Alternative Medicine, now the National Center for Complementary and Alternative Medicine, has been doing research, or funding research, for ten years to try to demonstrate efficacy with alternative therapies, and there's not a great deal positive that's come out of that yet. There have been some benefits shown from acupuncture. I think a couple of herbs have had some benefits shown, but there's nothing really, I guess, path-breaking in the research that's been carried out so far.
What does that say, about the studies or about the treatments themselves?
I think all it says to me is that we still don't know. I'm skeptical of a lot of alternative therapies. I suspect that we may never get solid convincing evidence that a lot of things work. I suspect we'll find that some things do. Allopathic medicine has discarded a lot of therapies over the years that for a while it believed worked. I think this is just the natural process of medical evolution no matter what kind of therapies you're using. …
posted November 4, 2003
web site copyright WGBH Educational Foundation