e-Patients who want to collaborate with their physicians, and be responsible for their medical decisions, need to clearly understand what constitutes good evidence. It’s not always easy.
Now Richard Smith, a 25-year editor of the British Medical Journal, has written another piece for the BMJ blog, citing a JAMA study showing “that of the 49 most highly cited papers on medical interventions published in high profile journals between 1990 and 2004 a quarter of the randomised trials and five of six non-randomised studies had been contradicted or found to be exaggerated by 2005.”
What’s an e-patient to do?? Especially when we “patients who google” are so often sneered at by physicians who rely on these same journals.
Well, we need to educate ourselves, and learn to speak calmly, confidently, and with understanding to anyone who doesn’t understand – just as we expect clinicians to do with us. :-) In short, we need to know our stuff.
In our journal JoPM’s inaugural issue, Richard Smith wrote In Search of an Optimal Peer Review System, saying “After 30 years of practicing peer review and 15 years of studying it experimentally, I’m unconvinced of its value. … evidence on the upside is sparse, while evidence on the downside is abundant.” Earlier posts on this specific subject:
- Our post The Decline Effect, about a New Yorker article
- Our post Why Almost Everything You Hear About Medicine is Wrong, about a Newsweek article
- All the above cite Dr. John Ioannidis; The Decline Effect links to two earlier posts about his article in The Atlantic. Smith’s new post cites a paper he’d reviewed for PLoS Medicine written by Ioannidis, Neal Young, and JoPM advisor Mohammad Al-Ubaydli.