
Keywords: Misdiagnosis, diagnostic error, information overload, diagnostic software, evidence-based medicine, participatory medicine.
Citation: Graedon T. Is Larry Weed right? J Participat Med. 2013 Mar 18; 5:e13.
Published: March 18, 2013.
Competing Interests: The author has declared that no competing interests exist.

Recent research reports have shown that misdiagnosis is an Achilles heel for the current practice of medicine. Dr. John Ioannidis and his colleagues at Stanford University reviewed the literature on patient safety strategies aimed at diagnostic error.[1] They make it clear that much remains to be done, both in research and in implementation of effective approaches.

A recently published study and accompanying commentary showed that failure to reach an accurate diagnosis may harm as many as 150,000 patients annually in the United States.[2][3] Another study indicated that information overload, compounded by time pressure, may lead clinicians to miss abnormal test results that should trigger further diagnostic investigation.[4]

This problem is not new. Despite advanced medical technology, misdiagnosis doesn’t seem to have become less common since Dr. George Lundberg wrote a scathing editorial about decades of “diagnostic discordance” 15 years ago.[5] Many members of the Society for Participatory Medicine have personal examples of how a delayed or missed diagnosis affected the trajectory of an illness.

This brings us to ask whether it makes sense to embrace Dr. Larry Weed’s approach to diagnosis. (Physicians may recognize Larry Weed as the creator of SOAP notes.) In their book Medicine in Denial, Larry and Lincoln Weed argue that no single clinician has the cognitive capacity to match each patient’s presenting signs and symptoms against the hundreds of possible disease conditions that might account for them.[6] According to the Weeds, misdiagnoses “are not failures of individual physicians. Rather they are failures of a non-system that imposes burdens too great for physicians to bear.”

They argue that software tools should be employed first. Software linked to the medical evidence base could present a comprehensive list of probable diagnoses for physician and patient to consider together, rather than the ad hoc list of differential diagnoses a doctor may construct based on his or her particular interests or specialization.
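To make that concrete, here is a minimal sketch of such an evidence-linked ranking, written as a naive Bayes scorer that orders candidate diagnoses by how well the reported findings fit each one. The disease names, priors, and symptom likelihoods are invented for illustration; nothing here reproduces the Weeds' actual system.

```python
from math import log

# Illustrative only: these priors and symptom likelihoods are invented
# placeholders, not real clinical evidence. A production tool would draw
# them from a curated, continuously updated evidence base.
DISEASE_PRIOR = {
    "influenza": 0.05,
    "lyme_disease": 0.002,
    "hypothyroidism": 0.01,
}

# P(symptom | disease), again purely illustrative.
SYMPTOM_LIKELIHOOD = {
    "influenza":      {"fever": 0.90, "fatigue": 0.80, "joint_pain": 0.30},
    "lyme_disease":   {"fever": 0.50, "fatigue": 0.70, "joint_pain": 0.80},
    "hypothyroidism": {"fever": 0.05, "fatigue": 0.90, "joint_pain": 0.25},
}

def rank_differential(symptoms):
    """Order diseases by unnormalized posterior log-probability:
    log P(disease) + sum of log P(symptom | disease) over reported symptoms."""
    scores = {}
    for disease, prior in DISEASE_PRIOR.items():
        score = log(prior)
        for symptom in symptoms:
            # A small floor keeps an unlisted symptom from zeroing out a disease.
            score += log(SYMPTOM_LIKELIHOOD[disease].get(symptom, 0.01))
        scores[disease] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_differential(["fatigue", "joint_pain"]))
```

On this toy input influenza ranks first, but the point is that every candidate is scored: a low-prior condition such as Lyme disease stays visible on the list instead of depending on whether a particular doctor happens to think of it.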

This scenario may sound like science fiction, but even science fiction aficionados have noticed that computers are gaining a diagnostic edge over many doctors.[7] In research from Indiana University, computer models demonstrated greater diagnostic accuracy than human physicians. IBM’s Watson and the diagnostic checklist software Isabel are also being tested and refined to facilitate accurate differential diagnoses. Computers don’t suffer from sleep deprivation or distraction, and they shouldn’t display the kinds of unintentional biases (based on gender, ethnicity, or age) that beset human doctors.

The Weeds’ vision of how the data would be collected and used is appealing from the perspective of participatory medicine. The patient would fill out a comprehensive standardized computerized questionnaire prior to a face-to-face clinical encounter. Some data that can only be derived from physical examination or laboratory tests might need to be provided by a clinician, perhaps a physician extender such as a nurse or physician assistant. Collecting the initial data in this way should mean that physicians wouldn’t need to spend extra time using this decision-support tool. Instead, once the software suggests a set of options for potential diagnoses to be considered, clinical judgment and (possibly) further testing to refine the diagnosis come into play.
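One way to picture that workflow, under my own hypothetical naming rather than anything specified in Medicine in Denial, is a single intake record that the patient starts before the visit and a clinician completes during it:

```python
from dataclasses import dataclass, field

# Hypothetical intake record (my structure, not the Weeds'): the patient fills
# in reported_symptoms from the standardized questionnaire before the visit;
# a nurse or physician assistant later adds what only an exam or lab can supply.
@dataclass
class IntakeRecord:
    patient_id: str
    reported_symptoms: list[str]                                 # patient questionnaire
    exam_findings: dict[str, str] = field(default_factory=dict)  # clinician-entered
    lab_results: dict[str, float] = field(default_factory=dict)  # clinician-entered

record = IntakeRecord(patient_id="demo-001",
                      reported_symptoms=["fatigue", "joint_pain"])

# During the encounter, the clinician completes the same record:
record.exam_findings["skin"] = "expanding circular rash"
record.lab_results["TSH_mIU_per_L"] = 2.1

# The completed record, not anyone's memory, is what the decision-support
# software scores, e.g. rank_differential(record.reported_symptoms) above.
```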

The Weeds point out that this approach goes well beyond a simple Google search, which works best when an unusual diagnosis is linked to a distinctive symptom or set of symptoms. They envision software that links patient data and medical evidence also connecting to the FDA, the CDC, and other institutions, thereby contributing to public health and expanding the evidence base. The potential benefit of such linkage is hinted at by recent research that analyzed six million Internet users’ Web searches to reveal a previously unknown interaction between paroxetine and pravastatin that raises blood sugar.[8]
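The arithmetic behind that kind of signal detection is simple to sketch. Roughly speaking, the study in reference [8] compared how often users who searched for both drugs also searched for hyperglycemia-related terms against users who searched for only one of them; the counts below are invented to illustrate the ratio, not taken from the study.

```python
# Toy version of the disproportionality check behind reference [8]: compare
# how often users who searched for BOTH drugs also searched for hyperglycemia
# terms, against users who searched for only one drug. All counts are invented.

def symptom_rate(users_with_symptom: int, users_total: int) -> float:
    """Fraction of a user group whose search logs also contain symptom terms."""
    return users_with_symptom / users_total

# Hypothetical search-log counts (not data from the study).
both_drugs_total, both_drugs_glyc = 1_000, 100   # searched paroxetine AND pravastatin
one_drug_total, one_drug_glyc = 50_000, 2_500    # searched exactly one of the two

rate_both = symptom_rate(both_drugs_glyc, both_drugs_total)  # 0.10
rate_one = symptom_rate(one_drug_glyc, one_drug_total)       # 0.05

# A ratio well above 1 flags the drug pair for formal pharmacovigilance
# follow-up; it is a screening signal, not proof of an interaction.
print(f"signal ratio: {rate_both / rate_one:.2f}")  # -> 2.00
```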

If diagnosis begins with standardized data collection, doctors bring clinical judgment to bear at the final stage of diagnosis. Treatment should then be evidence-guided but individualized for the particular patient. We trust that at this point the patient would make his or her preferences known and share in the decision.

The next step in the Weeds’ prescription for fixing some of medicine’s shortcomings is for physicians to be periodically tested and certified — based on performance, not on education — for the technical skills that they practice, whether it is in the field of interventional cardiology, electromyography, gynecologic surgery, or even prescribing. Those certifications would be publicized, making it possible for patients and referring clinicians to judge competence before the clinical encounter. Patients would certainly welcome that development, though no doubt it would meet a great deal of provider resistance.

If preventing diagnostic errors requires a different focus on “data gathering and synthesis in the patient-practitioner encounter,”[4] why not take Larry Weed’s recommendations seriously?

References

  1. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med. 2013;158:381-9.
  2. Singh H, Giardina TD, Meyer AN, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013 Feb 25:1-8. doi:10.1001/jamainternmed.2013.2777. [Epub ahead of print.] Available at: http://archinte.jamanetwork.com/article.aspx?articleid=1656540. Accessed March 13, 2013.
  3. Newman-Toker DE, Makary M. Measuring diagnostic errors in primary care: comment on “Types and origins of diagnostic errors in primary care settings.” JAMA Intern Med. 2013 Feb 25:1-2. doi:10.1001/jamainternmed.2013.225. [Epub ahead of print.] Available at: http://archinte.jamanetwork.com/article.aspx?articleid=1656536. Accessed March 13, 2013.
  4. Singh H, Spitzmueller C, Petersen NJ, Sawhney MK, Sittig DF. Information overload and missed test results in electronic health record-based settings. JAMA Intern Med. 2013 Apr 22:1-3. doi:10.1001/2013.jamainternmed.61. [Epub ahead of print.] Available at: http://archinte.jamanetwork.com/article.aspx?articleid=1657753. Accessed March 13, 2013.
  5. Lundberg GD. Low-tech autopsies in the era of high-tech medicine: continued value for quality assurance and patient safety. JAMA. 1998 Oct 14;280(14):1273-4.
  6. Weed LL, Weed L. Medicine in Denial. CreateSpace Independent Publishing Platform; 2011.
  7. Bennett CC, Hauser K. Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach. Artif Intell Med. 2013;57(1):9-19. doi:10.1016/j.artmed.2012.12.003. Epub 2012 Dec 31.
  8. White RW, Tatonetti NP, Shah NH, Altman RB, Horvitz E. Web-scale pharmacovigilance: listening to signals from the crowd. J Am Med Inform Assoc. 2013 Mar 6. doi:10.1136/amiajnl-2012-001482. [Epub ahead of print.]

Copyright: © 2013 Terry Graedon. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the author, with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.