Monday, March 8, 2010
Wait! If You Read That Newsweek Article About How Antidepressants Don't Work and Are About to Flush Your Meds Down the Toilet, First Read This ...
A Jan 29 Newsweek cover story trumpeted, Why Antidepressants Don’t Work. There are many things not to like about antidepressants, including the fact that they are not magic bullets and often may make us worse rather than better. Indeed, if antidepressants were as effective as Pharma and psychiatry claim, depression would have gone the way of polio and smallpox. Instead, it remains the leading cause of disability in the western world.
Nevertheless, a willing patient working with a smart clinician stands a fairly decent chance of getting somewhat or even a lot better on an antidepressant. And there is convincing data that a patient who stays on an antidepressant has a much better chance of avoiding relapse than one who doesn’t.
The best evidence we have points to the fact that antidepressants do work - only not for all of us and not as well as we wish they would. But, in their limited capacity, they do - actually - work.
So how can Newsweek be so irresponsible? The story begins with an equally irresponsible review article that appeared in the Jan 6 JAMA. Medical journals are notorious for publishing industry propaganda disguised as research. This one went the other way.
The JAMA article reports on a University of Pennsylvania meta-analysis that found that antidepressants were effective for patients with severe depression, but had little or no effect on those with mild or moderate depression. A meta-analysis pools previous studies and crunches the composite numbers. The catch is that those putting together meta-analyses can be highly selective in which studies they decide to pool.
An article by Amy Tuteur MD in the Feb 8 Salon, How Did Journalists Get the Antidepressant Study So Wrong?, notes that of 141 randomized controlled antidepressant trials the authors included only three involving a single SSRI medication, Paxil. (Three other trials included in the meta-analysis involved the rarely-used old-generation antidepressant imipramine.)
The selective weeding-out process seriously undermined the credibility of the meta-analysis, a fatal flaw the JAMA editors should have picked up. As Dr Tuteur points out, there were 23 trials that did not make the final cut. Of the 17 excluded trials that addressed mild to moderate depression, 15 showed antidepressants to be effective.
In other words: “ALL the studies that demonstrated the effectiveness of antidepressants in mild to moderate depression were deliberately left out.”
More convincing evidence against antidepressants comes from two other meta-analyses cited in the Newsweek piece. These were conducted by Irving Kirsch PhD of the University of Connecticut. His second one, using all the antidepressant clinical trials the drug companies submitted to the FDA, found that those on the antidepressants fared only minimally better than those taking a placebo.
I recall emailing Dr Kirsch about this back in 2002. In other words, I suggested, antidepressants are nothing more than placebos with side effects. Dr Kirsch agreed with this.
Academic researchers managed to look silly attempting to rebut Dr Kirsch. Their main defense was that stratospherically high placebo response rates (way higher than for, say, cancer drug trials) are the bane of psychiatric meds trials. They also pointed out that researchers, desperate to recruit patients into studies, may have allowed some in who didn’t meet their criteria for severe depression (a fantastic admission when you come to think of it, namely - we got bad results because in fact we cheated).
More credible was their argument that broad conclusions mask specific results - namely that for certain subpopulations these meds probably work like gangbusters. The problem is no one has identified this subpopulation. More on this in a minute.
Surprisingly, no one mentioned the obvious: A drug trial tests results on ONE drug only. In the real world, patients try a second drug if the first one doesn’t work. Various small studies showed that it is indeed worth not giving up after an initial failure.
The NIMH tested that proposition in a series of real-world trials called STAR*D, published in 2006. In the first round, about 50 percent of patients got better on Celexa. Of those who failed on Celexa, about a quarter to a third got better on another med or meds combination.
Thus, a 50-50 crapshoot turns into odds very much in your favor if you’re willing to play two rounds of pill roulette.
Third round success rates, however, were dismal - in single figures to very low double digits. In other words, after two failures, patients and clinicians need to be seriously rethinking their options, including revisiting the diagnosis.
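The arithmetic behind those improving odds can be sketched in a few lines of Python. The per-round rates below are the rough figures quoted above (about 50 percent in round one, a quarter to a third of the remainder in round two, roughly 10 percent in round three), not exact published STAR*D remission rates:

```python
def cumulative_response(round_rates):
    """Cumulative fraction of patients responding after successive
    medication trials, assuming each round's response rate applies
    only to those not yet helped by an earlier round."""
    remaining, total = 1.0, 0.0
    for rate in round_rates:
        helped = remaining * rate
        total += helped
        remaining -= helped
    return total

# Approximate per-round rates from the discussion above (assumptions):
print(cumulative_response([0.50]))              # round one alone
print(cumulative_response([0.50, 0.30]))        # two rounds
print(cumulative_response([0.50, 0.30, 0.10]))  # three rounds
```

On these assumed numbers, two rounds lift the cumulative response from 50 percent to about 65 percent, while a third round adds only a few points more - which is why the argument above shifts, after two failures, from trying another pill to rethinking the diagnosis.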
What no one has seriously looked into is the fact that DSM depression may be completely wrong in the first place, that it is in fact a catch-all diagnosis for all manner of things going wrong in the brain. In a recent blog piece, The Draft DSM-5 - Rip It Up and Start Over, Part II, I mention:
In my book, "Living Well with Depression and Bipolar Disorder," I cite a 2004 article by Gordon Parker MD, PhD of the University of New South Wales in support of the proposition that this one-size-fits-all view of depression results in clinical trials that indiscriminately lump all patients together, with no regard to critical distinctions that may spell the difference between success and failure.
Would, say, an SSRI such as Paxil work better for a melancholic depression and a dopamine-enhancer such as Wellbutrin work better for a low-energy depression? We’ll never know. The drug industry has no incentive to test for this sort of thing.
In an article on my mcmanweb site, I cite Frederick Goodwin MD, co-author of Manic-Depressive Illness, who informed me that most of the patients in STAR*D had recurrent depression, ie depressions that come and go. These depressions may have a lot more in common with bipolar than with unipolar chronic depression. One study found that nearly 40 percent of those diagnosed with unipolar depression in fact fall into that diagnostic Terra Incognita known as the bipolar spectrum.
These are patients who could conceivably respond better to mood stabilizers than antidepressants. The catch is the current DSM doesn’t recognize the bipolar spectrum and neither will the next one.
To conclude ...
Antidepressants DO work, but you will probably be a lot more satisfied with the results if you don’t expect too much of them. A lot of failures are the result of patients quitting too soon. The same can be said of clinicians who don’t know what they are doing.
If your first antidepressant fails, it is wise to try a second one, perhaps even a third.
But after your second one fails, it is wise to revisit your diagnosis. You may in fact have bipolar, or a type of unrecognized depression that has more in common with bipolar than unipolar. Or you may have a depressive temperament - part of your personality - that is better suited to talking therapy. Or you may have some kind of personality disorder (such as borderline) that definitely does require talking therapy.
Antidepressants leave a lot to be desired, but their greatest fault can be attributed to human error. These meds simply don’t work for certain types of depressions, and definitely not for a range of conditions that only superficially resemble depression. But for the right kind of depression, they probably work a lot better than we give them credit for. If we could only get researchers interested in looking into this.
Maybe then, Newsweek would have something to report.
Further reading from McManweb
When Your First Antidepressant Fails
When Your Second Antidepressant Fails
Clinical Trials - What the Drug Companies Don't Report
Coming soon - back to my DSM-5 report cards ...