Monday, February 14, 2011
In my previous post, I gave an example of a gold-standard clinical trial that told us next to nothing. The study, which debuted as a poster at the 12th International Congress on Schizophrenia Research in April, 2009, found that J&J’s antipsychotic Invega worked better than a sugar pill in reducing PANSS scores in patients with schizoaffective disorder.
PANSS is a 30-item rating scale used to assess the symptoms of schizophrenia. But there is a twist with schizoaffective patients, as they also experience mood symptoms. Therefore, the study also measured depression (using the HAM-D) and mania (using the Young Mania Rating Scale). But a trial is judged on just one pre-specified measure, so reduction in PANSS scores became the “primary outcome,” also known as the “primary endpoint.”
The way I heard this explained to me: Suppose a study measured five different results, each with only a 20 percent chance of coming up positive by luck alone. Naive addition says that guarantees one positive result; the exact odds (1 - 0.8^5) are about 67 percent, but either way the study sponsor has an excellent shot at something to trumpet as evidence for the success of the treatment. There is value in data from “secondary endpoints,” but you could use secondary-endpoint logic to argue that the Steelers beat the Packers in the Super Bowl.
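For the record, here is that arithmetic as a quick sketch (assuming five independent endpoints and a 20 percent per-endpoint false-positive rate, both hypothetical numbers for illustration):

```python
# Hypothetical multiplicity illustration: five independent endpoints,
# each with a 20% chance of a spurious "positive" result.
p_false_positive = 0.20   # assumed per-endpoint false-positive rate
n_endpoints = 5

# P(at least one positive) = 1 - P(all five come up negative)
p_at_least_one = 1 - (1 - p_false_positive) ** n_endpoints
print(round(p_at_least_one, 3))  # → 0.672
```

Not quite the certainty the back-of-the-envelope version promises, but close to a two-in-three chance of having *something* to report.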
You have to select your primary outcome before the study. No picking the most desirable one when it’s all over.
Okay, now that we’ve agreed on the primary outcome, let’s round up our schizoaffective patients and get crackin’. Um, define schizoaffective. From the DSM-IV:
An uninterrupted period of illness during which, at some time, there is either a Major Depressive Episode, a Manic Episode, or a Mixed Episode concurrent with symptoms that meet Criterion A for Schizophrenia.
Clear as day, right? The DSM-5 work group responsible for coming up with something better actually said, “the current DSM-IV-TR diagnosis schizoaffective disorder is unreliable,” then did not come up with something better.
Common sense dictates that an unreliable diagnosis automatically calls into question the credibility of any study of this sort from the very outset, gold standard or not. I would extend this line of reasoning to include all antidepressant trials on "depressed" patients (what the hell is depression, anyway?), as well, but that's me mounting my hobby horse.
But the practical problem remains. Patients with schizoaffective are not exactly going to materialize for a clinical trial, especially if they’re being diagnosed with something else. To compensate, J&J took the highly unusual step of recruiting patients from more than 40 centers worldwide, including India, Russia, Ukraine, and all across the US. This way, they were able to round up a total of 311 patients who met DSM-IV criteria.
Keep in mind, in any meds trial both the drug group and placebo group need to be as close to an exact match as possible. The “randomization” in “randomized double-blind placebo-controlled” trials means patients are assigned to different groups by chance rather than choice (thereby preventing abuses such as clinicians selecting good prognosis patients for the drug group). It’s not as simple as a coin toss, especially across more than 40 different centers worldwide. Give J&J credit where credit is due.
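One common way of doing this (a minimal sketch of permuted-block randomization, not necessarily J&J’s actual procedure) is to assign patients within each center in small shuffled blocks that each contain equal drug and placebo slots, so the two arms stay balanced even across many small centers:

```python
# Sketch of permuted-block randomization (hypothetical; illustrative only).
# Each block of four holds two "drug" and two "placebo" slots, shuffled,
# so no clinician can choose which patients get the drug.
import random

def permuted_block(block_size=4, seed=None):
    rng = random.Random(seed)
    block = ["drug"] * (block_size // 2) + ["placebo"] * (block_size // 2)
    rng.shuffle(block)
    return block

# Eight patients at one hypothetical center, assigned in two blocks:
assignments = permuted_block(seed=1) + permuted_block(seed=2)
print(assignments.count("drug"), assignments.count("placebo"))  # → 4 4
```

The shuffling is the "randomized" part; the guaranteed 50/50 split within each block is what keeps the groups matched in size center by center.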
There was an additional complication to this trial: The patients were also on mood stabilizers and/or antidepressants. Still, antipsychotics have a very good trial track record in the PANSS challenge. Even accounting for all the special challenges of this particular trial, how hard can a glorified counting exercise be, right? Did someone say “discontinuation”?
It turned out about 40 percent of the patients dropped out of the study, about equal between the Invega and placebo groups. What this tells me is that when a doctor prescribes this med for this illness there is a four in ten chance of failure, even before the patient walks out the door. But this has never bothered the people putting together these “gold standard” studies.
To compensate, the J&J people employed a standard statistical fiction known as “intent-to-treat analysis,” which includes the drop-outs in the final tally, as if these patients had completed the study. One way of implementing this is with “last observation carried forward” (LOCF). Thus, if a patient drops out at week two of a six-week trial, his or her two-week result is “carried forward” to the final week.
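In code, LOCF is about as simple as it sounds (a minimal sketch using made-up PANSS scores, not actual trial data):

```python
# Minimal LOCF sketch. None marks a missed visit; the last observed
# value is carried forward to fill the gap.
def locf(scores):
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)
    return filled

# Hypothetical patient who dropped out after week 2 of a six-week trial
# (baseline score followed by weeks 1 through 6):
weekly_panss = [90, 85, 82, None, None, None, None]
print(locf(weekly_panss))  # → [90, 85, 82, 82, 82, 82, 82]
```

Note what this does: a patient who left at week two is scored for week six as if nothing had changed in the intervening month.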
There is a legitimate reason for doing this. Patients who drop out of drug trials are more likely to be bad responders. Thus, if only good responders stay in the study, the results are likely to overstate the benefits of the test drug.
To me, however, a high drop-out rate means that any positive finding is next to useless. But who listens to me? Anyway, now that we’ve jumped through all the various statistical hoops, we’re finally ready to get down to some serious counting. Oops! - did someone say placebo?
Placebos are the bane of all clinical trials, especially for psychiatric meds. In an earlier trial conducted by J&J, the placebo group fared almost as well as the Invega group. The second time out, those on the Invega did no better than before, but those on the placebo did a lot worse (perhaps due to a bad batch of placebos). Thus, the J&J investigators had “clear separation” from the placebo group.
Thus, instead of tearing their hair out, J&J had reason to celebrate. All their hard work had paid off. Between the two trials, they now had an airtight case to take to the FDA. Indeed, in July - six months after the trial results came through in February, and three months after J&J debuted its study as a poster in April - the company proudly announced in a press release:
“The U.S. Food and Drug Administration (FDA) today approved the first and only [my emphasis] antipsychotic for the acute treatment of schizoaffective disorder.”
Mission accomplished: J&J had received clearance to market its Invega as a treatment for schizoaffective.
To market, to market ... That was the real - and only relevant - endpoint to this gold-standard study.
More to come ....