Friday, February 11, 2011
As a general rule, the more startling an assertion, the higher the level of proof we demand. Thus, if I say, “I have my driver’s license,” you are not likely to ask me to pull it out of my wallet and show it to you. (Ironically, three years ago I did not have a driver’s license, so the joke would have been on you.) You get the picture.
Accordingly, any spirited debate you run across concerning the studies Whitaker relies on is not a mere academic quibble. If his sources are faulty, or if he is drawing conclusions unsupported by the data, then we can easily dismiss Whitaker.
I’ve been reporting on mental health studies since 1999. My approach to looking at information as a journalist is quite a bit different from that of an academic researcher or a clinician looking at the same information. In many ways, my criteria for reporting are far stricter. And in other key ways, far looser.
Basically, my standard is that we work from the best set of facts we have for the purpose of improving our understanding right now, and I suspect this is where Whitaker is coming from. We don’t sit around waiting for the ultimate gold standard study that is never likely to materialize. An example:
In 2009, I ran a series of reader polls on Knowledge is Necessity that found:

1) Four in five rated their meds as either the single most important tool in managing their wellness or as just as important as their other wellness tools.
2) Only 14 percent reported their meds worked “very well.”
3) Only 14 percent reported they were back to where they wanted to be.
Something is clearly very wrong with this picture. My series of polls indicates we dangerously over-rely on our meds, and one result is that we don’t get well. (You can read all about it on mcmanweb.)
But this is a reader poll, not a scientific study. A scientific study of this sort would probably cost in the neighborhood of several hundred thousand dollars to get done. Maybe more. I would probably have to spend two years hustling for the same grant money six zillion other researchers are competing for, a year or more working with a team of experts designing and implementing the study and processing the data, and another year or two actually getting the thing published, assuming I was lucky enough to find an editor interested in reading it.
In all likelihood, my submission would be reviewed by an academic “referee” in the pocket of the drug industry who would find a million reasons to reject it.
But assuming my submission were accepted, there would be six to twelve months of revisions before my research saw the light of day. Nine-tenths of the study article would involve me explaining my methodology. The other one-tenth would be divided between the facts we need to know and my interpretation of what the facts mean. A journal editor would have me on a very short leash. My really interesting observations would likely be edited out, to the point that the study yielded little of value to anyone.
So - five or six or more years, several hundred thousand dollars, a team of experts, and endless aggravation for a by-now nearly meaningless study that maybe 30 people would read. If I were lucky, Robert Whitaker would find out about my study, mention it in his next book, make me famous, and draw all the conclusions that I was not allowed to make.
Then enraged academics like Andrew Nierenberg would attack him for relying on a low-quality study that really said nothing of the sort and moreover came from a nobody like me.
See what we’re up against?
This is the first in a series of pieces that examines the quality of mental health information, from different types of academic studies to clinical trials to media reports and blogs to informal surveys to personal experience. We will look at the tricks of the trade employed by the drug industry, academic researchers, journalists, and bloggers. We will look at why a so-called gold-standard scientific study may not be worth the paper it is printed on and why an unscientific reader poll may yield quality information.
Stay tuned ...