Antidepressant drugs have been getting a bad rap in the media. I’ll just give three examples:
- On the Today show, prominent medical expert (?) Tom Cruise told us Brooke Shields shouldn’t have taken these drugs for her postpartum depression.
- In Natural News, “Health Ranger” Mike Adams accused pharmaceutical companies and the FDA of covering up negative information about antidepressants, saying it would be considered criminal activity in any other industry.
- And an article in Newsweek said “Studies suggest that the popular drugs are no more effective than a placebo. In fact, they may be worse.”
Yet psychiatrists are convinced that antidepressants work and are still routinely prescribing them for their patients. Is it all a Big Pharma plot? Who ya gonna believe? Inquiring minds want to know:
- Are antidepressants more effective than placebo?
- Has the efficacy of antidepressants been exaggerated?
- Is psychotherapy a better treatment choice?
The science-based answers to the first two questions are clearly “Yes.” The best answer to the third question is “It depends.”
In 2008, Erick Turner and four colleagues published an article in The New England Journal of Medicine (NEJM) entitled “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy.” Because drug companies must submit the results of all their registered trials to the FDA as part of the approval process, the agency can make sure that companies don’t pick and choose which trials, and which outcomes within those trials, get seen. Using clinical trial data from the FDA as a gold standard, Turner et al. examined how these same trials were reported in published journal articles. They found that:
…according to the published literature, the results of nearly all of the trials of antidepressants were positive. In contrast, FDA analysis of the trial data showed that roughly half of the trials had positive results.
And some of the negative trials were published with a “spin” that made them appear positive. The data did show that each drug was superior to placebo, but the true magnitude of that superiority was less than a diligent literature review would indicate. They warned that
By altering the apparent risk–benefit ratio of drugs, selective publication can lead doctors to make inappropriate prescribing decisions that may not be in the best interest of their patients and, thus, the public health.
Irving Kirsch has been outspoken about antidepressants’ alleged lack of efficacy. In a controversial meta-analysis published in 1998, he found that placebos produced approximately 75% of the improvement seen with the active drug. He suggested that the remaining 25% is debatable and could be due to an enhanced placebo response in patients whose side effects convince them they are getting the active drug. In a further study in 2002, he “questioned the clinical significance of antidepressants.”
Kirsch recently looked at the FDA data for 4 of the 12 drugs that Turner examined. In spite of the smaller sample, Kirsch got an effect size of 0.32 where Turner had found 0.31, so the two analyses arrived at almost exactly the same result. Their interpretations of that result, however, were very different. Kirsch concluded that antidepressants are ineffective, while Turner found that the drugs were indeed superior to placebo. As the figure below shows, each drug’s effect size was positive, and none of the confidence intervals overlapped zero. This means that, while there is some probability that the true effect size is zero, meaning that antidepressants and placebo are equal in efficacy, that probability is negligibly small.
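For readers unfamiliar with the statistic, the effect sizes quoted here are standardized mean differences (Cohen’s d): the drug–placebo gap in mean improvement, expressed in units of the outcome’s standard deviation. A minimal sketch of the convention follows; the exact estimator varies from one meta-analysis to another, so take this as illustrative rather than as the precise formula Turner or Kirsch used.

```latex
% Standardized mean difference (Cohen's d):
% the drug-placebo gap in mean improvement, in pooled-SD units.
d = \frac{\bar{x}_{\mathrm{drug}} - \bar{x}_{\mathrm{placebo}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_{1}-1)\,s_{1}^{2} + (n_{2}-1)\,s_{2}^{2}}{n_{1}+n_{2}-2}}

% A 95% confidence interval of roughly d +/- 1.96 * SE(d) that excludes
% zero is what "none of the confidence intervals overlapped zero" means:
% chance alone is an unlikely explanation for the observed gap.
```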
The discrepancy between Turner’s and Kirsch’s interpretations hinges on what these effect size numbers mean in terms of clinical significance. Values of 0.2, 0.5, and 0.8 were once proposed as small, medium, and large effect sizes, respectively. The psychologist who proposed these landmarks admitted that he had picked them arbitrarily and that they had “no more reliable a basis than my own intuition.” Later, without providing any justification, the UK’s National Institute for Health and Clinical Excellence (NICE) turned the 0.5 landmark (why not the 0.2 or the 0.8 value?) into a one-size-fits-all cut-off for clinical significance. In an editorial published in the British Medical Journal (BMJ), Turner explains with an elegant metaphor: journal articles had sold us a glass of juice advertised to contain 0.41 liters (0.41 being the effect size Turner et al. derived from the journal articles), but the truth was that the “glass” of efficacy contained only 0.31 liters. Because both amounts fall below the (arbitrary) 0.5-liter cut-off, NICE standards (and Kirsch) consider the glass to be empty. Turner correctly concludes that the glass is far from full, but it is also far from empty. He also points out that patients’ responses are not all-or-none and that partial responses can be meaningful.
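One rough way to put an effect size of about 0.3 into plainer language, assuming approximately normal outcome distributions (this translation is my addition, not Turner’s or Kirsch’s):

```latex
% Cohen's U3: the proportion of the placebo group that the average
% drug-treated patient outperforms, under a normality assumption.
U_{3} = \Phi(d) \approx \Phi(0.31) \approx 0.62
```

In other words, the average medicated patient improves more than roughly 62% of placebo-treated patients: a modest benefit, consistent with a glass that is far from full but also far from empty.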
Incidentally, NICE is no longer using the 0.5 effect size cutoff.
If we followed Kirsch’s interpretation and rejected antidepressants, how would we treat depression? Psychotherapy avoids the side effects of drugs, but it has its own drawbacks: it is expensive, time-consuming, and variable in quality. How effective is psychotherapy? Psychotherapy trials suffer from publication bias just as antidepressant trials do, and when low-quality studies are weeded out, psychotherapy has an effect size of only 0.22, lower than the value for antidepressants reported by Kirsch himself. So if we reject any treatment below the (arbitrary) 0.5 cutoff, what is a mental health care provider faced with a patient in need of help supposed to do? Nothing at all?
I don’t doubt that antidepressants have sometimes been over-prescribed and used inappropriately for milder depression, where they are less effective or even ineffective, but this is probably true of psychotherapy as well. On the other hand, it has been estimated that only about half of depressed patients are getting any kind of treatment. Severe depression is a life-threatening disease. A recent study showed that antidepressants reduced the risk of suicide by 20% in the long term. The risk/benefit ratios are still not clear-cut for either form of treatment.
Once more, science fails to give us the black-and-white answers we crave. And once again we are reminded that we can’t rely on the media for accurate, nuanced information about medical science.
For his assistance in preparing this article and for providing the figure, I want to thank Erick Turner, M.D., Department of Psychiatry, Oregon Health and Science University; Staff Psychiatrist, Portland Veterans Affairs Medical Center; Former reviewer, FDA.
This article was originally published on the Science-Based Medicine blog.