
A new commentary on “Lies, Damned Lies, and Medical Science” in the current issue of The Atlantic Monthly. [See also our previous post on the article, with dozens of comments, some of them excellent. And be sure to read Peter’s footnotes. -e-Patient Dave]
____________

One of the best reads now being tweeted through the blogosphere is David Freedman’s excellent summary of the work of Dr. John Ioannidis in the current issue of The Atlantic Monthly[1].

Ioannidis and his colleagues are leading critics of the quality of research on drugs (Rx and OTC, including vitamins and nutritional supplements), surgical procedures, diets, and exercise regimens. Bias, broken peer review, commercial conflicts of interest, government regulation spurred by bad academic practices — ugh!

What can we do to fix this? In the final paragraphs of the article, Ioannidis makes these recommendations:

  • Change the culture of scientific medicine. “We could solve much of the wrongness problem, if the world simply stopped expecting scientists to be right,” Ioannidis says. “That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.”
  • Reset expectations. “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”

I would add:

  • Embrace the Internet: C’mon, science, many of us have been on the web for 15 years now. Change what’s possible, and figure out a way to better incorporate patient self-reported and retrospective data in trials. Ioannidis found that when a study goes on long enough, the findings frequently upend those of the shorter studies. Those in the clinical research business would endorse this with a cheer:  “buy more Phase IV (post-approval) studies!”  I would add: engage patient communities like acor.org, Patients Like Me, Medhelp.org, and any others with the means (are you listening, Everyday Health, WebMD, Sharecast?) to aggregate data carefully and with their patients’ permission.
  • Academia, which strongly influences both researchers and the government, must get with the program too, by giving up on tenure-tied-to-the-peer-reviewed-literature and embracing a moderated form of pre- and post-publication peer review[2]. PLoS Medicine and Biomednet have gone some of the distance, but not all the way toward creating a full-fledged reputation system to evaluate the quality of their content. My earlier contribution to this blog, “A Troubled Trifecta: Peer Review, Academia & Tenure,” discussed this in more detail, as did a number of articles and podcasts I participated in for the Society for Participatory Medicine.[3] [Podcast here. -Dave]
  • None of this matters unless government regulation to evaluate the safety and efficacy of new therapies also changes. In the U.S., FDA-mandated study design has been around in its basic form for more than 50 years, and it follows what academic science has long held to be the tried-and-true methodology of the randomized controlled trial that Ioannidis finds so troubling.  With the cost of the approval process often hovering around $1 billion, is it any wonder that commercial interests may seek to maximize their chances of approval with a narrow study design that may not reflect how a therapy works in the real world? Ultimately this kind of gaming serves no one: follow-on studies and the real-world experience of clinicians and patients provide a more complete picture.  From pharma’s point of view this can lead to a drug’s withdrawal from the market, a financially and socially punishing event.  Another reason to overhaul the approval process: with newer therapies for personalized medicine, population-based clinical trials are not even possible.

While medical research is far from perfect, it’s also important to remember that we have many therapies that work well in the vast majority of patients. As the Cochrane Collaboration shows, some conditions are more difficult to measure and evaluate than others.  For conditions where the evidence is clear, we can celebrate our good fortune. Ioannidis claims that “80 percent of non-randomized studies … turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.”  That doesn’t mean that an antibiotic used to treat a susceptible bacterial infection won’t work.
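To make the arithmetic behind those percentages concrete, here is a minimal Python sketch of the publication-selection effect the Atlantic article describes (and which footnote [2] below quotes at length): several teams test a theory that is in fact false, and the one eye-catching “positive” result is the one that gets published. The five-team count and the 20 percent per-study false-positive rate are illustrative assumptions of mine, not figures from Ioannidis or the article.

```python
# A rough sketch, not anyone's actual model: simulate many false theories,
# each tested by several independent teams, where journals publish the
# eye-catching positive result whenever one exists.
import random

random.seed(0)

N_THEORIES = 10_000          # hypothetical theories, all of them actually false
TEAMS_PER_THEORY = 5         # independent teams testing each theory (assumed)
FALSE_POSITIVE_RATE = 0.20   # assumed chance a team wrongly "proves" a theory true

wrong_results = 0            # individual study results that are wrong
total_results = 0
wrong_headlines = 0          # theories whose published headline result is wrong

for _ in range(N_THEORIES):
    # Each team independently reports a (wrong) positive or a (correct) negative.
    positives = sum(random.random() < FALSE_POSITIVE_RATE
                    for _ in range(TEAMS_PER_THEORY))
    wrong_results += positives
    total_results += TEAMS_PER_THEORY
    if positives > 0:
        # If any team produced a positive finding, that is what gets published.
        wrong_headlines += 1

print(f"Wrong individual study results:    {wrong_results / total_results:.0%}")
print(f"Wrong published headline findings: {wrong_headlines / N_THEORIES:.0%}")
# With these made-up numbers, about 20% of individual studies are wrong,
# but roughly 1 - 0.8**5, or about 67%, of the published headlines are.
```

The point is the selection step, not the particular numbers: a modest per-study error rate becomes a much larger share of wrong findings once only the attention-grabbing results make it into the journals and onto the evening news.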

Like my friend and colleague Richard Smith, who for many years was editor of The British Medical Journal, I find that reading material from Ioannidis[4] leaves me with the feeling that scientifically educated people may sometimes have little more to believe in than those who put their faith in folk and religious healing. Were the Taoists right when they cautioned, “the more you know, the less you understand”?  Sometimes yes, other times no.

Interesting, too, that this Atlantic article is likely to kick off a wider discussion of this important issue than resulted from his original work.  The article reports that Ioannidis’s “PLoS Medicine paper is the most downloaded in the journal’s history. And it’s not even [his] most-cited work: that would be a paper he published in Nature Genetics on the problems with gene-link studies.” Ioannidis is our new rock star of research critics:

“Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back.”

With Ioannidis in the spotlight as never before, let’s hope we can start to fix those parts of the medical research, publishing, and regulatory systems that are truly broken.  And good luck with that vertigo, Ioannidis.  Maybe there’s a therapy (let’s call it sleep) that could help!

_____________________

[1] “Lies, Damned Lies, and Medical Science” is a nice coda to another must-read article The Atlantic published a year ago, “Does the Vaccine Matter?” That article, by science writers Shannon Brownlee and Jeanne Lenzer, recounted how researchers who questioned the efficacy of government programs to control pandemic and annual flu with vaccines and antiviral drugs were marginalized by traditional peer-reviewed journals and shunned socially.  When one annual flu vaccine critic, Tom Jefferson of the Cochrane Collaboration, an epidemiologist trained at the famed London School of Hygiene & Tropical Medicine, came to a meeting on pandemic preparedness in Bethesda, he “ate his meals in the hotel restaurant alone, surrounded by scientists chatting amiably at other tables.”  Talk about catching a bad bug!

[2] As the article states, “To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly ‘proves’ it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.”

[3] Frishauf P. Reputation systems: a new vision for publishing and peer review. J Participat Med. 2009(Oct);1(1):e13a. Retrieved 18:45, October 18, 2010.

[4] Smith RW. In search of an optimal peer review system. J Participat Med. 2009(Oct);1(1):e13. Retrieved 20:05, October 18, 2010. Smith’s article is packed with information that questions the ability of traditional peer review to catch errors.  The Atlantic article echoes this: “Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it. Nature, the grande dame of science journals, stated in a 2006 editorial, ‘Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.’ What’s more, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.”

 
