A new commentary on “Lies, Damned Lies, and Medical Science,” in the current issue of The Atlantic Monthly. [See also our previous post on the article, with dozens of comments, some of them excellent. And be sure to read Peter’s footnotes. -e-Patient Dave]
____________
One of the best reads now being tweeted through the blogosphere is David Freedman’s excellent summary of the work of Dr. John Ioannidis in the current issue of The Atlantic Monthly[1].
Ioannidis and his colleagues are leading critics of the science of drug research (Rx and OTC, including vitamins and nutritionals), surgical procedures, diets, and exercise regimens. Bias, broken peer review, commercial conflicts of interest, government regulation spurred by bad academic practices — ugh!
What can we do to fix this? In the final paragraphs of the article, Ioannidis makes these recommendations:
- Change the culture of scientific medicine. “We could solve much of the wrongness problem, if the world simply stopped expecting scientists to be right,” Ioannidis says. “That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.”
- Reset expectations. “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
I would add:
- Embrace the Internet. C’mon, science: many of us have been on the web for 15 years now, and it has changed what’s possible, so figure out a way to better incorporate patient self-reported and retrospective data in trials. Ioannidis found that when a study goes on long enough, the findings frequently upend those of shorter studies. Those in the clinical research business would endorse this with a cheer: “buy more Phase IV (post-approval) studies!” I would add: engage patient communities like acor.org, Patients Like Me, Medhelp.org, and any others with the means (are you listening, Everyday Health, WebMD, Sharecast?) to aggregate data carefully and with their patients’ permission.
- Academia, which strongly influences both researchers and the government, must get with the program too, by giving up on tenure-tied-to-the-peer-reviewed-literature and embracing a moderated form of pre- and post-publication peer review[2]. PLoS Medicine and Biomednet have gone some of the distance, but not all the way toward creating a full-fledged reputation system to evaluate the quality of their content. My earlier contribution to this blog, “A Troubled Trifecta: Peer Review, Academia & Tenure,” discussed this in more detail, as did a number of articles and podcasts I participated in for the Society for Participatory Medicine.[3] [Podcast here. -Dave]
- None of this matters unless government regulation to evaluate the safety and efficacy of new therapies also changes. In the U.S., FDA-mandated study design has been around in its basic form for more than 50 years, and follows what academic science has long held to be the tried-and-true methodology of the randomized controlled trial that Ioannidis finds so troubling. With the cost of the approval process often hovering around $1 billion, is it any wonder that commercial interests may seek to maximize their chances of approval with a narrow study design that may not reflect how a therapy works in the real world? Ultimately this kind of gaming serves no one: follow-on studies and the real-world experience of clinicians and patients provide a more complete picture. From pharma’s point of view this can lead to a drug’s withdrawal from the market, a financially and socially punishing event. Another reason to overhaul new approvals: with newer therapies for personalized medicine, population-based clinical trials are not even possible.
While medical research is far from perfect, it’s important to remember that we also have many therapies that work well in the vast majority of patients. As the Cochrane Collaboration shows, some conditions are more difficult to measure and evaluate than others. For conditions where the evidence is clear, we can celebrate our good fortune. Ioannidis claims that “80 percent of non-randomized studies … turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” That doesn’t mean that an antibiotic used to treat susceptible bacteria won’t work.
Like my friend and colleague Richard Smith, who for many years was editor of the British Medical Journal, I find that reading Ioannidis[4] can leave you with the feeling that scientifically educated people sometimes have little more to believe in than those who believe in folk art and religious healing. Were the Taoists right when they cautioned, “the more you know, the less you understand”? Sometimes yes, other times no.
Interesting, too, that this Atlantic article is likely to kick off a wider discussion of this important issue than resulted from his original work. The article reports that Ioannidis’s “PLoS Medicine paper is the most downloaded in the journal’s history. And it’s not even [his] most-cited work: that would be a paper he published in Nature Genetics on the problems with gene-link studies.” Ioannidis is our new rock star of research critics:
“Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back.”
With Ioannidis in the spotlight as never before, let’s hope we can start to fix those parts of the medical research, publishing, and regulatory systems that are truly broken. And good luck with that vertigo, Dr. Ioannidis. Maybe there’s a therapy (let’s call it sleep) that could help!
_____________________
[1] “Lies, Damned Lies, and Medical Science” is a nice coda to another must-read article The Atlantic published a year ago, “Does the Vaccine Matter?” That article, by science writers Shannon Brownlee and Jeanne Lenzer, recounted how researchers who questioned the efficacy of government programs to control seasonal and pandemic flu with vaccines and antiviral drugs were marginalized by traditional peer-reviewed journals and shunned socially. When one flu vaccine critic, Tom Jefferson of the Cochrane Collaboration, an epidemiologist trained at the famed London School of Hygiene and Tropical Medicine, came to a meeting on pandemic preparedness in Bethesda, he “ate his meals in the hotel restaurant alone, surrounded by scientists chatting amiably at other tables.” Talk about catching a bad bug!
[2] As the article states, “To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly ‘proves’ it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.”
[3] Frishauf P. Reputation systems: a new vision for publishing and peer review. J Participat Med. 2009 Oct;1(1):e13a. Retrieved October 18, 2010.
[4] Smith RW. In search of an optimal peer review system. J Participat Med. 2009 Oct;1(1):e13. Retrieved October 18, 2010. Smith’s article is packed with information that questions the ability of traditional peer review to catch errors. The Atlantic article echoes this: “Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it. Nature, the grande dame of science journals, stated in a 2006 editorial, ‘Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.’ What’s more, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.”
There’s also an article on the same topic, with different information, in Discover Magazine’s November issue, called “Reckless Medicine.”
Also a must read.
I have some thoughts on things these articles missed that I hope to blog about soon.
M
Peter,
I like your comment about embracing the internet and patient self-reported data, but the life science and healthcare industries have a lot of data management challenges to overcome before they are in a position to use that data in (dare I say) a meaningful way. If the data aren’t “clean” (e.g., consistently defined fields/measures, minimal gaps in data across populations, sufficient descriptive data on the subjects, etc.), then we’re in a garbage-in/garbage-out situation. I like Toffler’s term “cyberdust”: lots of data, but it sits on shelves unanalyzed because of the sheer glut of it (and, I’d add, a lack of good models for analyzing it).
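To make “clean” concrete, here is a minimal sketch in Python of the kind of checks a patient-reported data set would have to pass before it could feed a registry or trial. The field names, checks, and example records are illustrative assumptions of mine, not anyone’s actual schema:

```python
# Illustrative sketch only: hypothetical field names and checks, not any registry's real schema.
REQUIRED_FIELDS = {"patient_id", "diagnosis_code", "medication", "dose_mg", "reported_date"}

def validate_record(record):
    """Return a list of data-quality problems found in one patient-reported record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append("missing fields: %s" % sorted(missing))
    dose = record.get("dose_mg")
    if dose is not None and not isinstance(dose, (int, float)):
        problems.append("dose_mg is not numeric (free text or inconsistent units?)")
    return problems

def fraction_clean(records):
    """Share of records with no detected problems -- a crude garbage-in gauge."""
    if not records:
        return 0.0
    return sum(1 for r in records if not validate_record(r)) / len(records)

# Example: two self-reported records, one usable and one with a unit problem and a gap.
records = [
    {"patient_id": "A1", "diagnosis_code": "J18.9", "medication": "levofloxacin",
     "dose_mg": 750, "reported_date": "2010-10-01"},
    {"patient_id": "A2", "diagnosis_code": "J18.9", "medication": "levofloxacin",
     "dose_mg": "750 mg"},  # free-text dose, missing reported_date
]
print(fraction_clean(records))  # 0.5
```

Nothing sophisticated, but it makes the point that “consistently defined fields” and “minimal gaps” are things you can measure before any analysis starts, rather than discover afterward.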
Check out the article in today’s (10/21) Boston Globe on why Beth Israel Deaconess is temporarily halting recruiting for cancer clinical trials (http://www.boston.com/yourtown/boston/roxbury/articles/2010/10/21/cancer_trials_suspended_for_new_patients/). There were too many problems related to submitting patient data properly.
My key point is that along with all the new data that are being generated (and that will continue to grow at an exponential pace as more outcomes data are collected from EHRs and patient-generated data, never mind genomic data), we also need far better data management capabilities & standards.
As I alluded to above, new models for analyzing the expanding repositories of data (registries and life science research repositories and genomic data) are needed, too. The availability of new data that allows for more complex models of clinical research is wonderful. But, we’re in early stages of figuring out how to manage the data and how to analyze the data.
Janice,
Thanks, and I agree we are drowning in data and starving for wisdom, and that “we … need far better data management capabilities & standards.” But there is no reason that capability can’t draw on patient sites with structured data that comply with these standards: kind of a structured, data-driven Wikipedia based on actual experience, if you will. None of that is even on the table as far as I know.
Regards
I agree with Janice about the potential GIGO issues with multiple poorly controlled data sources.
I also think we need to back up a little and examine some of the assumptions of Ioannidis’s work. He comes at his conclusions from the standpoint of genomics research, where the prior probabilities of associations are pretty low. In the clinical realm these assumptions need to be altered, IMO, because we always talk about biological plausibility in what we investigate. And while there are still at least two camps here (the Bayesian and the frequentist), we do try to safeguard against declaring spurious associations.
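For anyone who wants the arithmetic behind that prior-probability point, here is a rough sketch using the positive-predictive-value formula from Ioannidis’s 2005 PLoS Medicine paper; the formula is his, but the numbers plugged in below are illustrative choices of mine, not figures from his paper:

```latex
% PPV of a nominally "significant" finding (Ioannidis, PLoS Medicine 2005):
% R = pre-study odds that the relationship is real, 1 - \beta = statistical power,
% \alpha = significance level.
\[
  \mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{R - \beta R + \alpha}
\]
% Illustrative numbers (my choices): power 0.8, alpha 0.05.
% Long-shot genomics hypothesis, R = 0.001:
\[
  \mathrm{PPV} \approx \frac{0.8 \times 0.001}{0.0008 + 0.05} \approx 0.016
\]
% Biologically plausible clinical hypothesis, R = 0.5:
\[
  \mathrm{PPV} \approx \frac{0.4}{0.4 + 0.05} \approx 0.89
\]
```

The same testing machinery that yields mostly false positives when priors are tiny can be quite trustworthy when a hypothesis is biologically plausible to begin with, which is the distinction being drawn here.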
This is not to say that there are no problems. There are vast issues with how we do research, and they do need to be addressed. My only point is that we do not need to buy into the nihilist view that everything is a lie. At the same time we need to avoid the “illusion of certainty” as Dave has said before. The Buddha’s middle way is the correct approach here too.
I have blogged extensively on some of the methodologic issues, and I am confident that slowly but surely we will edge closer to the best way of doing things. My caveat remains that the nature of science is such that we will continue to be humbled, however.
I just posted my take on the PLoS paper that the article draws upon here: http://evimedgroup.blogspot.com/2010/10/lies-and-more-lies-are-all-lies-created.html
Kudos to Dr. Zilberberg for her real-world perspective on this, and I highly recommend her post, which echoes my conclusion that Dr. Ioannidis’s work “doesn’t mean that an antibiotic used to treat susceptible bacteria won’t work.” Here’s a snip from Dr. Zilberberg:
“So, let’s look at something that I know about – healthcare-associated pneumonia. We, and others, have shown that administering empiric antibiotics that do not cover the likely pathogens within the first 24 hours of hospitalization in this population is associated with a 2-3-fold increase in the risk of hospital death. So, the association is antibiotic choice and hospital survival. Any clinician will tell you that this idea has a lot of biologic plausibility: get the bug with the right drug and you improve the outcome. It is also easy to justify based on the germ theory. Finally, it does not get any more ‘gold standard’ than death. We also look at the bugs themselves to see if some are worse than others, some of the process measures, as well as how sick the patient is, both acutely and chronically. Again, it is not unreasonable to hypothesize that all of these factors influence the biology of host-pathogen interaction. So, again, if you are Bayesian, you are comfortable with the prior probability.”
Thanks for your kind words, Peter.
Has anyone here looked at the new regime for Comparative Effectiveness Research (CER), for which the IoM has broadened the evidentiary base to include outcomes-based work? This seems to open the door to considering evidence from non-RCT sources, and it responds to the dissatisfaction expressed by many decision makers, to say nothing of patients, with how research is conducted.
Taylor,
I’ve followed the development of the CER framework, but haven’t taken an in-depth look. Here’s a link to the existing framework, which is still open to comments:
http://www.hhs.gov/recovery/programs/cer/draftdefinition.html
Yes, we’re opening the door to determining how to use outcomes data as valid “high-grade” evidence, but there are a lot of data management issues to work out for CER once the strategic framework has been formalized. Thanks for bringing up CER.
Thanks, Janice, for providing the link. It is an important issue to comment on, as some of the choices are baffling to me. Like not including cost-effectiveness analysis: where does that get us?
As a health services and outcomes researcher I have been doing CER for some time now; we just did not call it that. :)
Here is my (over?)simplified view of CER: http://evimedgroup.blogspot.com/2010/10/comparative-effectiveness-101.html
Wow! I worked at an accredited natural medicine college in the 1990s, and the rap then was that none of those treatment modalities were backed by “gold standard” research. Now it appears that very little “modern” medicine is either! I’d rather go to a doctor trained to listen, take a real history, and treat the whole person as an individual than to someone trained in the reductionist Western method of one symptom, one cure for all.
Eat whole food. Move around a lot. Stay away from the MD. That’s the conclusion I take from this. And thank goodness my health allows it.
For the record, Killroy71, I don’t read this as at all saying “stay away from MDs.” I believe I’d be dead if I did.
I know others disagree, but saying that there are gaping holes in the peer review process is (IMO) an indictment of the supposed infallibility *of* the peer review process, and in no way says the science that saved me (or taught my ortho how to screw my leg back together, etc etc) is all wrong.
For years it’s been widely recognized that most medical care doesn’t follow clinical guidelines. A second, less-publicized problem has been that most guidelines aren’t based on evidence. Now we’re learning that most evidence (from randomized trials, comparative effectiveness research, cost-effectiveness studies, and other investigations) is based on flawed designs influenced by investigator biases. And this third problem is the most fundamental.
From a policy standpoint, the new federal Patient-Centered Outcomes Research Institute (if it ever gets off the ground), other federal and state agencies, payers, and professional associations are going to have to insist on better methodologies. From a consumer standpoint, we’re all going to have to become more sophisticated at looking at a study’s methods before buying into the results. Some brave new world.
Excellent post on my Atlantic article, Peter, thanks, and extremely gratifying to see it provoke this sort of discussion, which was the goal of the article (and my book)–I certainly couldn’t pretend to be in a position to provide answers. A question I get asked a lot now is, Why care at all what studies find or what your doctor says? I think we need to keep caring (especially about what your doctor says), but maybe instead of expecting clear answers we should be looking to research and doctors to give us ideas, data points, suggestions, opinions, observations, possibilities. (Some situations will be more urgently clear-cut than others, of course.) I think if we listen carefully to credible sources most of us are able to take it all in and do a bit of pattern recognition and triangulating (our brains are very, very good at this) to end up with a reasonable course of action given the limits of what we can know. Over time we can hope the result of this process will be higher and higher hit rates. That may be a lot less than most of us have hoped and expected to get from medicine, but I suspect it’s the best we’re going to do for a while to come.