Monday, March 8, 2010

Wait! If You Read That Newsweek Article About How Antidepressants Don't Work and Are About to Flush Your Meds Down the Toilet, First Read This ...

I should have commented on this much earlier. Better late than never ...

A Jan 29 Newsweek cover story trumpeted, "Why Antidepressants Don't Work." There are many things not to like about antidepressants, including the fact that they are not magic bullets and may often make us worse rather than better. Indeed, if antidepressants were as effective as Pharma and psychiatry claim, depression would have gone the way of polio and smallpox. Instead, it remains the leading cause of disability in the western world.

Nevertheless, a willing patient working with a smart clinician stands a fairly decent chance of getting somewhat or even a lot better on an antidepressant. And there is convincing data that a patient who stays on an antidepressant has a much better chance of avoiding relapse than one who doesn’t.

The best evidence we have points to the fact that antidepressants do work - only not for all of us and not as well as we wish they would. But, in their limited capacity, they do - actually - work.

So how can Newsweek be so irresponsible? The story begins with an equally irresponsible review article that appeared in the Jan 6 JAMA. Medical journals are notorious for publishing industry propaganda disguised as research. This one went the other way.

The JAMA article reports on a University of Pennsylvania meta-analysis that found antidepressants effective for patients with severe depression, but with little or no effect on those with mild or moderate depression. A meta-analysis pools previous studies and crunches the composite numbers. The catch is that those putting together a meta-analysis can be highly selective about which studies they decide to pool.
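For readers curious about the mechanics, here is a minimal sketch (in Python) of how a simple fixed-effect meta-analysis pools studies. Every study and number below is invented, purely to show why the choice of inputs drives the output:

    # Minimal fixed-effect pooling: weight each study's effect size by the
    # inverse of its variance, so larger, tighter studies count for more.
    # All figures are hypothetical.
    studies = [
        # (effect size vs placebo, variance of that estimate)
        (0.40, 0.04),
        (0.10, 0.02),
        (0.25, 0.09),
    ]

    weights = [1 / var for _, var in studies]
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    print(f"Pooled effect size: {pooled:.2f}")

    # Drop or add one study and the pooled number shifts - which is exactly
    # how selective inclusion can steer a meta-analysis's conclusion.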

An article by Amy Tuteur MD in the Feb 8 Salon, "How Did Journalists Get the Antidepressant Study So Wrong?", notes that of 141 randomized controlled antidepressant trials, the authors included only three, all involving a single SSRI medication, Paxil. (Three other trials included in the meta-analysis involved the rarely-used old-generation antidepressant imipramine.)

The selective weeding-out process seriously undermined the credibility of the meta-analysis, a fatal flaw the JAMA editors should have picked up. As Dr Tuteur points out, there were 23 trials that did not make the final cut. Of these, 15 of 17 showed antidepressants to be effective for mild to moderate depression.

In other words: “ALL the studies that demonstrated the effectiveness of antidepressants in mild to moderate depression were deliberately left out.”

More convincing evidence against antidepressants comes from two other meta-analyses cited in the Newsweek piece, both conducted by Irving Kirsch PhD of the University of Connecticut. His second one, which used all the antidepressant clinical trials the drug companies had submitted to the FDA, found that those on antidepressants fared only minimally better than those taking a placebo.

I recall emailing Dr Kirsch about this back in 2002. Antidepressants, I suggested, are nothing more than placebos with side effects. Dr Kirsch agreed.

Academic researchers managed to look silly attempting to rebut Dr Kirsch. Their main defense was that stratospherically high placebo response rates (way higher than for, say, cancer drug trials) are the bane of psychiatric med trials. They also pointed out that researchers, desperate to recruit patients into studies, may have allowed some in who didn't meet the criteria for severe depression (a fantastic admission, when you come to think of it - namely, we got bad results because in fact we cheated).

More credible was their argument that broad conclusions mask specific results - namely, that for certain subpopulations these meds probably work like gangbusters. The problem is that no one has identified these subpopulations. More on this in a minute.

Surprisingly, no one mentioned the obvious: a drug trial tests results on ONE drug only. In the real world, patients try a second drug if the first one doesn't work. Various small studies showed that it is indeed worth not giving up after an initial failure.

The NIMH tested that proposition in a series of real-world trials called STAR*D, published in 2006. In the first round, about 50 percent of patients got better on Celexa. Of those who failed on Celexa, about a quarter to a third got better on another med or a combination of meds.

Thus, a 50-50 crapshoot turns into odds very much in your favor if you’re willing to play two rounds of pill roulette.
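To put rough numbers on that - a back-of-the-envelope sketch only, since the exact STAR*D figures vary depending on how you measure improvement:

    # Cumulative chance of getting better across two rounds of meds,
    # using the approximate figures cited above.
    round1 = 0.50            # ~50 percent got better on Celexa
    round2 = 0.30            # ~a quarter to a third of the remainder
    better = round1 + (1 - round1) * round2
    print(f"{better:.0%}")   # ~65 percent - well above a coin flip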

Third-round success rates, however, were dismal - in single figures to very low double digits. In other words, after two failures, patients and clinicians need to seriously rethink their options, including revisiting the diagnosis.

What no one has seriously looked into is the possibility that DSM depression may be completely wrong in the first place - that it is in fact a catch-all diagnosis for all manner of things going wrong in the brain. In a recent blog piece, The Draft DSM-5 - Rip It Up and Start Over, Part II, I mention:


In my book, "Living Well with Depression and Bipolar Disorder," I cite a 2004 article by Gordon Parker MD, PhD of the University of New South Wales in support of the proposition that this one-size-fits-all view of depression results in clinical trials that indiscriminately lump all patients together, with no regard to critical distinctions that may spell the difference between success and failure.

Would, say, an SSRI such as Paxil work better for a melancholic depression and a dopamine-enhancer such as Wellbutrin work better for a low-energy depression? We’ll never know. The drug industry has no incentive to test for this sort of thing.

In an article on my mcmanweb site, I cite Frederick Goodwin MD, co-author of Manic-Depressive Illness, who informed me that most of the patients in STAR*D had recurrent depression, ie depressions that come and go. These depressions may have a lot more in common with bipolar than with unipolar chronic depression. One study found that nearly 40 percent of those diagnosed with unipolar depression in fact fall into that diagnostic Terra Incognita known as the bipolar spectrum.

These are patients who could conceivably respond better to mood stabilizers than antidepressants. The catch is the current DSM doesn’t recognize the bipolar spectrum and neither will the next one.


To conclude ...

Antidepressants DO work, but you will probably be a lot more satisfied with the results if you don’t expect too much of them. A lot of failures are the result of patients quitting too soon. The same can be said of clinicians who don’t know what they are doing.

If your first antidepressant fails, it is wise to try a second one, perhaps even a third.

But after your second one fails, it is wise to revisit your diagnosis. You may in fact have bipolar, or an unrecognized type of depression that has more in common with bipolar than unipolar. Or you may have a depressive temperament - part of your personality - that is better suited to talking therapy. Or you may have some kind of personality disorder (such as borderline) that definitely does require talking therapy.

Antidepressants leave a lot to be desired, but their greatest fault can be attributed to human error. These meds simply don’t work for certain types of depressions, and definitely not for a range of conditions that only superficially resemble depression. But for the right kind of depression, they probably work a lot better than we give them credit for. If we could only get researchers interested in looking into this.

Maybe then, Newsweek would have something to report.

Further reading from McManweb


When Your First Antidepressant Fails
When Your Second Antidepressant Fails
Clinical Trials - What the Drug Companies Don't Report

***

Coming soon - back to my DSM-5 report cards ...

8 comments:

Willa Goodfellow said...

Oh, I am so glad you wrote this, and I don't have to. I knew it would take more homework than I currently have the energy for. Thanks.

I think (of course) the most significant issue you raise is that of unrecognized bipolar spectrum. I have seen a lot of clinical trials that did attempt to weed out "recurrent depression" by enrolling only those who were on their first or maybe second episode. The odds are better if you don't include those who have already tried a few meds -- as STAR*D demonstrated. But if a potential participant's history were explored more thoroughly, with potentially hypomanic episodes as a disqualifier, then perhaps effectiveness over placebo might soar. That result really would be in Pharma's interest, especially to the first company that figured it out and jumped out ahead of the others, and especially to the company that had a good mood stabilizer to offer to those weeded out of the antidepressant trial.

The results would divide the market of any one drug in half. But imagine being able to claim that YOUR med has an 80% effectiveness rating, compared to other companies' 40%. Better to have two drugs that really do work (antidepressant and mood stabilizer) for those who take them than the single bullet that people stop using because Newsweek called them "expensive tic tacs."

Tony said...

I wonder if the strong placebo response seen in psychiatric drug studies may simply be due to the fact that people who suffer depression/mania/psychosis will get well on their own without medication. Of course, those people often will relapse eventually. For depression, say you have a pool of 1000 patients. Half in treatment; half as control (placebo). Depending on where in the cycle of depression the patients fell when treatment started, many may be in the late stages of an episode and have the depression lift on its own, independent of treatment or the placebo effect. Since severe depression lasts so long, the likelihood that people recover on their own is diminished, explaining why there is a smaller placebo effect in those trials. It would seem that the real test of antidepressants is to see how they work long term. (But no drug company could afford to test them that way.) Also, most people have only one depressive episode in their lifetime, so a long-term trial would be fruitless. For recurring depressions, you are right to warn that antidepressants may not work; doctors should consider mood stabilizers for those patients. That idea should really be put to the test in a study.

John McManamy said...

Hey, Willa. Exactly! It's as if we're using a statin to treat both high cholesterol and high blood pressure. Just because both are "cardiovascular" doesn't mean the conditions are the same, or that we should use the same med.

Only in psychiatry do we lump two different types of depression together as if they were one and then wonder why we don't get spectacular results. Too often, the doctor blames the patient. Or the patient is referred to as "treatment resistant."

Hell, I'd be treatment resistant too if you kept giving me different statins for high blood pressure.

Elizabeth said...

John:

You write that "there were 23 trials that did not make the final cut. Of these, 15 of 17 showed antidepressants to be effective for mild to moderate depression."

Huh? 23 or 17? Could you be a little clearer? Do you mean 17 of these studies tested the effectiveness of the drugs for mild to moderate depression, and the remaining five didn't? Another thing I'm not clear on is the criteria the JAMA authors used to keep studies out.

The New York Times January 11 article on the subject, "Before You Quit Antidepressants. . ." by Friedman M.D., says that for the JAMA article, "the authors identified 23 studies (out of several hundred clinical trials) that met their criteria for inclusion. Of those 23, they could get access to data on only 6, with a total of 718 subjects."

So now I'm totally confused.

Couldn't get access to data? Why not?

John McManamy said...

Hi, Elizabeth. Total confusion is a normal reaction. The Salon article goes into considerable detail, which I compassionately spared you and my other readers.

One of the problems with meta-analysis is that researchers wind up comparing apples with oranges. One set of SSRI studies, for instance, may go on for 6 weeks, others for 8 weeks. Others may measure results differently, using different rating scales. Some may use various statistical tools to account for drop-outs, others not. On and on it goes.

So it is important when pooling studies to get them as uniform as possible. That's where various "exclusion criteria" come in. On your first pass, you exclude certain studies that don't conform to one set of criteria. On your next go, you do more weeding out, till finally you have a set of fairly uniform studies.
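A toy illustration of those successive passes - the study fields and cutoffs here are invented for the sake of the example:

    # Each pass applies one exclusion criterion and shrinks the pool.
    studies = [
        {"name": "A", "weeks": 6, "scale": "HAM-D", "raw_data": True},
        {"name": "B", "weeks": 8, "scale": "HAM-D", "raw_data": False},
        {"name": "C", "weeks": 6, "scale": "MADRS", "raw_data": True},
    ]
    pool = [s for s in studies if s["scale"] == "HAM-D"]  # same rating scale
    pool = [s for s in pool if s["weeks"] == 6]           # same duration
    pool = [s for s in pool if s["raw_data"]]             # data you can obtain
    print([s["name"] for s in pool])                      # only "A" survives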

Re couldn't get access to data: This appeared to be the final exclusion criterion. The authors got the list down to 23 studies. Of these 23, 17 measured for mild-moderate depression, and of these, 15 found the antidepressant effective.

Drug companies own their data. What you see in a journal article is a highly selective interpretation of reams and reams of data.

Some drug companies have made some of this unpublished data available to the public. Others haven't. Most notably, GSK, maker of Paxil, put its raw data on its website about four years ago. I dipped into this very soon after in a series of Newsletter articles.

So the authors had the same access to this data that I did. But they didn't have access to similar data from other drug companies. Voila! This became their final exclusion criterion. Now they had three studies run in a similar fashion on the same drug - Paxil - by the same drug company, confirmed by reams of raw data, which made their meta-analysis pure.

But the real world is complex. Here were all these studies run in slightly different ways by different drug companies on different drugs with different results. The authors got their pure meta-analysis but an absurd result.

The only thing you can take from this meta-analysis is that Paxil didn't work for mild-moderate depression. Not all SSRIs.

The 3 imipramine studies were a smokescreen. Hardly anyone takes that class of drugs anymore.

This is probably as clear as mud. Please feel free to follow up. Trust me, you won't sound stupid. It took me years to get a handle on this.

Elizabeth said...

Hey John! Thanks for that very lucid explanation, saving me the time of trying to figure it out. And thanks for getting out the information that a patient shouldn't just take one antidepressant after another and another and another, as I did to ill effect. After three, it's time to revisit the diagnosis. Now, if we could only have a psychiatric system where the patient doesn't have to be better informed than the doctor. I know, if wishes were horses . . .

While on the wishes-as-horses theme, I'd like to put in my two cents. Perhaps I'm being paranoid here, but I worry that JAMA let this study through, despite its problems, to encourage the burgeoning market for atypical antipsychotics, with their patented status.

John McManamy said...

Hey, Tony. I've often thought the same thing myself, and the Newsweek article touches on it. Many of us cycle out of our depressions or other states of our own accord. This may account for a lot of the so-called placebo response.

Likewise, many of us cycle right back into depression, which may account for a lot of so-called relapses.

Also, many of us simply get better on our own.

Clinicians and researchers are likely to attribute any improvement solely to the meds.

STAR*D had a golden opportunity to test mood stabilizers on those with recurrent depression who had failed on antidepressants. They completely blew it. I'm with you all the way.

John McManamy said...

Hi, Elizabeth. The fact that JAMA let this study through leaves them open to every negative interpretation under the sun, including yours. Mine (I'm only speculating) goes like this:

Studies get "peer reviewed" before they get published. The trouble is that peers think alike. This encourages stultifying conformity at the expense of originality. The peer or peers reviewing this could never have foreseen that this study would become the basis of a Newsweek cover story dissing antidepressants.

The peer reviewer, instead, would have seen a meta-analysis supporting antidepressants for severe depression. Since the study was presented as a rebuttal to the Kirsch piece I referred to, the reviewer (and JAMA) would have actually seen the study as a pro-drug study.

Crazy, isn't it?

And, seeing a pro-drug study, the peer reviewer and JAMA's editors let it through with no critical assessment. Their only concern would have been that the i's were dotted and the t's crossed. And since the purity of the meta-analysis couldn't be faulted, JAMA gave its stamp of approval to incredibly bad science.

Mind-boggling, isn't it?