Abstract
Keywords: Misdiagnosis, diagnostic error, information overload, diagnostic software, evidence-based medicine, participatory medicine.
Citation: Graedon T. Is Larry Weed right? J Participat Med. 2013 Mar 18; 5:e13.
Published: March 18, 2013.
Competing Interests: The author has declared that no competing interests exist.
Recent research reports have shown that misdiagnosis is an Achilles heel for the current practice of medicine. Dr. John Ioannidis and his colleagues at Stanford University reviewed the literature on patient safety strategies aimed at diagnostic error.[1] They make it clear that much remains to be done, both in research and in implementation of effective approaches.
A study and a commentary published recently showed that failure to achieve an accurate diagnosis causes substantial harm to as many as 150,000 patients annually.[2][3] Another study indicated that information overload due to time pressure may lead clinicians to miss abnormal test results that should trigger further diagnostic investigation.[4]
This problem is not new. Despite advanced medical technology, misdiagnosis doesn’t seem to have become less common since Dr. George Lundberg wrote a scathing editorial about decades of “diagnostic discordance” 15 years ago.[5] Many members of the Society for Participatory Medicine have personal examples of how a delayed or missed diagnosis affected the trajectory of an illness.
This brings us to ask whether it makes sense to embrace Dr. Larry Weed’s approach to diagnosis. (Physicians may recognize Larry Weed as the creator of SOAP notes.) In their book Medicine in Denial, Larry and Lincoln Weed argue that no single clinician has the cognitive capacity to match each patient’s presenting signs and symptoms to the correct diagnosis among the hundreds of possible disease conditions that might correspond.[6] According to the Weeds, misdiagnoses “are not failures of individual physicians. Rather they are failures of a non-system that imposes burdens too great for physicians to bear.”
They argue that software tools should be employed first. Software linked to the medical evidence base could present a comprehensive list of probable diagnoses for physician and patient to consider together, rather than the ad hoc list of differential diagnoses that a doctor may construct based on his or her particular interests or specialization.
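The core of such a tool is a matching engine that scores every candidate diagnosis against the patient's reported findings. As a minimal sketch of the idea, here is a toy Bayesian ranking over three conditions; the priors and symptom likelihoods are invented for illustration and are not clinical data, and the Weeds' actual proposed software is far richer than this.

```python
# Toy sketch of evidence-linked diagnostic ranking.
# All priors and likelihoods below are hypothetical, invented for illustration.

from math import log

PRIORS = {"influenza": 0.05, "pneumonia": 0.01, "pulmonary_embolism": 0.001}
LIKELIHOODS = {
    "influenza":          {"fever": 0.90, "cough": 0.85, "chest_pain": 0.10},
    "pneumonia":          {"fever": 0.80, "cough": 0.90, "chest_pain": 0.40},
    "pulmonary_embolism": {"fever": 0.15, "cough": 0.20, "chest_pain": 0.85},
}

def rank_diagnoses(symptoms):
    """Return candidate diagnoses sorted by unnormalized log-posterior, best first."""
    scores = {}
    for disease, prior in PRIORS.items():
        score = log(prior)  # start from the disease's base rate
        for s in symptoms:
            # small default likelihood for symptoms not in the table
            score += log(LIKELIHOODS[disease].get(s, 0.01))
        scores[disease] = score
    return sorted(scores, key=scores.get, reverse=True)

print(rank_diagnoses(["fever", "cough"]))
```

The point of the sketch is the shape of the computation, not the numbers: the machine scores *every* condition in its knowledge base the same way, whereas an unaided clinician can hold only a handful of hypotheses in mind at once.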
This scenario may sound like science fiction, but even science fiction aficionados have noticed that computers are gaining a diagnostic edge over many doctors.[7] In research from Indiana University, the computer demonstrated greater accuracy in diagnosis than the human physicians. IBM’s Watson and the diagnostic checklist software Isabel are also being tested and refined to facilitate accurate differential diagnoses. Computers don’t suffer from sleep deprivation or distraction, and they shouldn’t display the kinds of unintentional biases (based on gender, ethnicity, or age) that beset human doctors.
The Weeds’ vision of how the data would be collected and used is appealing from the perspective of participatory medicine. The patient would fill out a comprehensive standardized computerized questionnaire prior to a face-to-face clinical encounter. Some data that can only be derived from physical examination or laboratory tests might need to be provided by a clinician, perhaps a physician extender such as a nurse or physician assistant. Collecting the initial data in this way should mean that physicians wouldn’t need to spend extra time to use this decision support tool. Instead, once the software suggests a set of options for potential diagnoses to be considered, clinical judgment and (possibly) further testing to refine the diagnosis come into play.
The Weeds point out that this approach goes well beyond a simple Google search, which works best when an unusual diagnosis is linked to a distinctive symptom or set of symptoms. They envision the software linking patient data and medical evidence also connecting to the FDA, CDC, and other institutions, contributing to public health and expanding the evidence base. The potential benefit of such linkage is hinted at by recent research that analyzed six million Internet users’ Google searches to reveal a previously unknown interaction between paroxetine and pravastatin that raises blood sugar.[8]
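The logic behind that search-log study is a simple disproportionality comparison: do users who search for *both* drugs also search for hyperglycemia terms more often than users who search for either drug alone? The sketch below illustrates the comparison with invented counts; it is not the actual method or data of White et al., only the intuition.

```python
# Disproportionality sketch in the spirit of query-log pharmacovigilance.
# All counts are hypothetical, invented for illustration.

def symptom_rate(symptom_searchers, total_users):
    """Fraction of a user group that also searched for the symptom."""
    return symptom_searchers / total_users

groups = {
    # group: (users who also searched hyperglycemia terms, total users in group)
    "paroxetine_only":  (500, 50_000),
    "pravastatin_only": (450, 50_000),
    "both_drugs":       (250, 10_000),
}

baseline = max(symptom_rate(*groups["paroxetine_only"]),
               symptom_rate(*groups["pravastatin_only"]))
combined = symptom_rate(*groups["both_drugs"])

# A combined-group rate well above either single-drug rate is the kind of
# signal that flags a candidate drug-drug interaction for follow-up study.
print(f"single-drug rate <= {baseline:.3f}, combined rate = {combined:.3f}")
```

With these made-up numbers the combined group searches hyperglycemia terms at 2.5%, more than double either single-drug group, which is exactly the disproportionality pattern that would earn a drug pair a closer look.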
If diagnosis begins with standardized data collection, doctors bring clinical judgment to bear at the final stage of diagnosis. Treatment should then be evidence-guided but individualized for the particular patient. We trust that at this point the patient would make his or her preferences known and share in the decision.
The next step in the Weeds’ prescription for fixing some of medicine’s shortcomings is for physicians to be periodically tested and certified — based on performance, not on education — for the technical skills that they practice, whether it is in the field of interventional cardiology, electromyography, gynecologic surgery, or even prescribing. Those certifications would be publicized, making it possible for patients and referring clinicians to judge competence before the clinical encounter. Patients would certainly welcome that development, though no doubt it would meet a great deal of provider resistance.
If preventing diagnostic errors requires a different focus on “data gathering and synthesis in the patient-practitioner encounter,”[4] why not take Larry Weed’s recommendations seriously?
References
1. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med. 2013;158:381-9.
2. Singh H, Giardina TD, Meyer AN, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013 Feb 25:1-8. doi:10.1001/jamainternmed.2013.2777. [Epub ahead of print.] Available at: http://archinte.jamanetwork.com/article.aspx?articleid=1656540. Accessed March 13, 2013.
3. Newman-Toker DE, Makary M. Measuring diagnostic errors in primary care: comment on “Types and origins of diagnostic errors in primary care settings.” JAMA Intern Med. 2013 Feb 25:1-2. doi:10.1001/jamainternmed.2013.225. [Epub ahead of print.] Available at: http://archinte.jamanetwork.com/article.aspx?articleid=1656536. Accessed March 13, 2013.
4. Singh H, Spitzmueller C, Petersen NJ, Sawhney MK, Sittig DF. Information overload and missed test results in electronic health record-based settings. JAMA Intern Med. 2013 Apr 22:1-3. doi:10.1001/2013.jamainternmed.61. [Epub ahead of print.] Available at: http://archinte.jamanetwork.com/article.aspx?articleid=1657753. Accessed March 13, 2013.
5. Lundberg GD. Low-tech autopsies in the era of high-tech medicine: continued value for quality assurance and patient safety. JAMA. 1998 Oct 14;280(14):1273-4.
6. Weed LL, Weed L. Medicine in Denial. CreateSpace Independent Publishing Platform; 2011.
7. Bennett CC, Hauser K. Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach. Artif Intell Med. 2013;57(1):9-19. doi:10.1016/j.artmed.2012.12.003. Epub 2012 Dec 31.
8. White RW, Tatonetti NP, Shah NH, Altman RB, Horvitz E. Web-scale pharmacovigilance: listening to signals from the crowd. J Am Med Inform Assoc. 2013 Mar 6. doi:10.1136/amiajnl-2012-001482. [Epub ahead of print.]
Copyright: © 2013 Terry Graedon. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the author, with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.
Shortly after this editorial was published, Dr. Weed telephoned to remind me that the PATIENT is at the center of his vision. His analogy is to travel: rather than being dependent upon travel agents telling us where to go, when and how, most of us make our own decisions about whether to fly, drive or take the train and use a map to navigate where we want to go. Health care should be that easy! This clearly puts the patient in the driver’s seat and deserves careful consideration as to how it could be implemented.
Terry: Great posting, and thanks for calling attention to the foundational work of Dr. Weed. He DID get it right — computer-assisted diagnosis clearly has the potential to improve the reliability of the diagnostic process. Unfortunately, we’re not “there” yet. Web-based tools to assist with diagnosis, like DXplain and ISABEL and their many predecessors, have been available for a while now, but haven’t been widely used. Now that these products are integrated more effectively with online workflow, hopefully utilization will increase. This will require a better solution to the last piece of the puzzle — getting physicians to realize that they are not perfect.
I would just take issue with the suggestion for a physician extender to gather key data from a patient’s history and physical exam. These elements are strongly guided by the initial diagnostic hypothesis the clinician generates. You need a physician to do that consistently well; the MD has had 7+ years of training and is licensed to diagnose — that’s who you want doing your H&P, notwithstanding the problem that having an extender do this would also involve an error-prone “handoff.”
As a son of Larry Weed (LLW) and co-author of his latest book, I appreciated Dr. Graber’s reference to LLW’s “foundational work.” But LLW’s work is not well understood by Dr. Graber and many others.
Let’s focus on the second paragraph of Dr. Graber’s comment. There he “would just take issue with the suggestion for a physician extender to gather key data from a patient’s history and physical exam.” His concept is that the history and physical should be “strongly guided by the initial diagnostic hypothesis the clinician generates.” He suggests that physicians handle the process “consistently well” because “the MD has had 7+ years of training and is licensed to diagnose.”
There is a huge disconnect between this statement and the reality of medical practice. As Dr. Graber knows better than most, the reality is that licensed physicians regularly miss recognizable diagnoses.
LLW argues that such diagnostic failure is inherent in the accepted practice that Dr. Graber describes: relying on the clinician’s “initial diagnostic hypothesis” to guide the history and physical. This accepted practice demands more information processing than the human mind is capable of delivering. Physicians are blinded to this reality by a self-serving professional dogma, a faith that medical training somehow renders them capable of matching detailed patient data with vast medical knowledge.
That matching process exceeds the capacities of the human mind. As a coping mechanism, physicians quickly jump to conclusions (“initial diagnostic hypotheses”). But that exercise of “clinical judgment” introduces all the cognitive vulnerabilities that the human mind is heir to. Rather than being subjected to those vulnerabilities, patients need protection against them. The only protection is to enforce high standards of care for managing clinical information (data and knowledge), by requiring use of electronic tools designed to implement those standards. Without standards and tools external to the physician’s mind, the process of data collection and analysis will never be trustworthy.
In contrast, with the right standards and tools, initial data collection and analysis become highly organized, meticulous, and transparent. Once that foundation is laid, it may then be supplemented with judgments from the practitioner — and from the patient — about the data collected and additional data or hypotheses they believe relevant. This initial data collection and analysis must then be followed by highly organized follow-up processes. These involve careful problem definition, planning, execution, feedback, and corrective action over time, all documented under strict standards of care for medical record keeping. These principles relate not just to the diagnostic process but to medical decision making generally.
The need for external standards and tools would be obvious to everyone if non-physician practitioners and patients were the ones who chose the relevant data points and connected them with medical knowledge. But when physicians are involved, we let them rely on whatever “initial diagnostic hypothesis” comes to mind. Somehow we have all been socialized to believe that physician training makes this lack of discipline acceptable.
For a comprehensive discussion, see LLW’s 2011 book, Medicine in Denial, cited in Terry Graedon’s editorial. (The book’s table of contents, overview and introduction are available at http://www.thepermanentejournal.org/files/MedicineInDenial.pdf.) Readers concerned with the diagnostic process would be especially interested in Part II.A of the book, which is a detailed case study in diagnostic error, and Part IV, which presents standards and tools designed to bring order and accountability to patient data collection and analysis. For discussion especially relevant to Dr. Graber’s comment, see pp. 26-27, 43-45, 58-61, 82-86, 98-101, 136-37, and 147-52 of the book.
Terry: great article, thank you! Dr. Weed was undoubtedly right and he is as inspirational today as he was when giving his 1971 grand rounds at Emory (see http://www.youtube.com/watch?v=qMsPXSMTpFI). To realize his vision, we will need 1) a broad acknowledgement of the burning platform and 2) deep and successful integration of these tools into clinician workflow.
Larry Weed was ahead of his time when he espoused the “Problem-Oriented Medical Record” in the 1970s. He still is. Too bad it often takes decades to implement new and better strategies in health care. Nevertheless, we should keep on trying!
I must say I nearly fell out of my chair while reading Terry Graedon’s fine piece on Dr. Weed and Mr. Weed’s prescriptions for a technology aimed at solving high rates of misdiagnosis. That’s because my company, Physician Cognition, has built the very capabilities outlined by the Weeds: it’s a computer algorithm that thinks just like a committee of specialists, and without the human error.
Before I address the other substantive points of the article and the many thoughtful comments to it, let me invite all of you readers to join our beta. We would be honored to have you pound on our technology and see what it can do for yourself, without charge or obligation. Just go to http://www.PhysicianCognition.com and sign up.
Lincoln Weed is quite right in saying, “As a coping mechanism, physicians quickly jump to conclusions (‘initial diagnostic hypotheses’).” Jumping to conclusions is a pretty good definition of premature closure, the #1 cause of misdiagnosis. He’s also spot-on when he says that the “exercise of ‘clinical judgment’ introduces all the cognitive vulnerabilities that the human mind is heir to.”
Precisely so. “Initial diagnostic hypotheses” involve exactly the types of thinking identified by Nobel-winner Dr. Daniel Kahneman and his research partners as System 1 thinking: fast, instinctive, and emotional. Sometimes it’s genius, sometimes it’s error-prone. A computer’s diagnosis would exemplify the System 2 thinking needed to complement the advantages of physicians’ System 1 thinking: System 2 thinking is slower, more deliberative, and more logical. In a computer, you can add expert handling of data, perfect memory, and instant, computer-generated probabilities.
As Kahneman et al point out, we humans are terrible at probabilities. We seek premature closure in all our lives’ dealings. We don’t exhaust all avenues of inquiry because we’re in a hurry, because we have a financial or (other) emotional stake in not exhausting them, because of bias and fatigue, because of cognitive errors like availability, recency, and anchoring bias, and more. Many studies show that doctors (being a subset of human beings) also suffer from biases about people — other races, the obese, alcoholics, even wealthy white women. And so, Mr. Weed concludes in a comment to the article, “Rather than being subjected to those vulnerabilities, patients need protection against them.”
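The claim that "we humans are terrible at probabilities" has a classic worked example: base-rate neglect. Here is a short Bayes' rule calculation with invented numbers (a test with 90% sensitivity and 95% specificity for a condition with 1% prevalence); intuition says a positive result means roughly a 90% chance of disease, while the arithmetic says otherwise.

```python
# Base-rate neglect, worked with Bayes' rule.
# The test characteristics and prevalence below are hypothetical.

prevalence = 0.01    # P(disease)
sensitivity = 0.90   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)

# Total probability of a positive test: true positives plus false positives.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior probability of disease given a positive test.
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {posterior:.2f}")  # prints 0.15
```

Despite an accurate-sounding test, a positive result here means only about a 15% chance of disease, because the healthy population is so much larger that false positives swamp true positives. A computer applying Bayes' rule gets this right every time; an unaided, hurried mind usually does not.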
I agree completely. I would also agree that misdiagnosis is, as Mr. Graedon points out, the Achilles’ heel of medicine, but only if we remember that the demi-god’s vulnerable heel was what killed him. All manner of waste, suffering, and even death follows from misdiagnosis (which includes the rather high rate of non-diagnosis when the clinical evidence is in fact sufficient to merit a diagnosis; 25% of correct diagnoses are tardy). The waste includes the wrong medication and treatment, the wrong specialist referral, and the wrong labs, tests, and procedures; the harm includes the real blows that the anxiety of being in the dark deals to patients’ immune systems, and the time lost to the patient when she could have been getting well, or at least been informed and therefore empowered.
At least as harmful as the misdiagnoses, however, is physicians’ ignorance of their errors, which is simply ignorance of (denial of? immunity from?) human errors. When physicians are asked their error rate, their answers average out to 1%. But the actual rate of misdiagnosis is (depending on the study) 10 to 30 times greater than that. That disconnect is probably proportionate to what some people refer to as doctors’ “God complex.” It’s an unwillingness to acknowledge human cognitive fallibility. We all suffer from that unwillingness, to one degree or another. But when physicians suffer from it, and don’t take measures to adjust for it, don’t force themselves into some objective standardization or process, the results can be deadly.
Lincoln Weed says the required high standards of care for managing clinical information will require use of electronic tools designed to implement those standards, and points out that only an objective standard (“external to the physician’s mind”) can really give comfort to the most concerned party in any diagnosis, the party who is not actually the doctor. That standardization as a complement to the physician’s mind is exactly what we at Physician Cognition have done.
We even built our system to include the sort of information mentioned in the article: “the software linking patient data and medical evidence also connecting to the FDA, CDC, and other institutions, contributing to public health and expanding the evidence base.”
There’s an easy solution to the “information overload due to time pressure” that “may lead clinicians to miss abnormal test results” — not to mention other symptoms and signs to check — “that should trigger further diagnostic investigation.” Doctors are not going to be able to start spending more than 7 minutes per patient, so time pressures (the small band of concierge practitioners aside) won’t go away. The Weeds suggest that nurses and PAs could, armed with the right technology, conduct the initial examination. That was our aim too, but we took it further, as far as we think a physician would want to take it if he or she really thought about it. We built a technology that can be programmed with a physician’s own clinical rules so as to allow so-called practice extenders to mimic the physician’s own judgment. This is the heart of the best practices and standardization that have transformed so many fields and industries to date.
But there’s another way to solve this time problem, and that is to begin the patient’s consultation at home – a consultation between the patient and a sophisticated algorithm that can dynamically conduct the entire diagnostic conversation about any novel combination of symptoms (including presentation, context, duration, history, geography), physical signs, labs, and even medications. What better way to, as Dr. Weed reminds Mr. Graedon, put “the PATIENT . . . at the center of his vision”?
Dr. Weed’s analogy to travel is exactly right: Internet-connected computers, in the language of business theorists and startup gurus, “disrupted” and “disintermediated” travel agents. Clayton Christensen, who coined the term “disruptive innovation,” defined it to mean that the service in question — computing ability, the symphony, media, medical knowledge — comes closer to a user, who is suddenly able to perform or self-serve to a similar degree but without as much money or expertise. Just as we no longer need a Ph.D. and a room-sized computer to compute, to schlep to a concert hall to hear music, or to walk through weather to a newsstand where we actually have to buy the news, in the future you won’t have to find, sit in the waiting room of, and pay a doctor every time you need a sophisticated answer to a health-related question for yourself or your family.
You will just whip out your smartphone or tablet (or write with your fingers on your connected refrigerator’s screen, or manipulate holographic images as in “Minority Report”) and:
• pull up the on-demand music that reduced the power and revenue of the record labels and put more in the hands of musicians and fans
• check the email that put postal services around the world on the ropes, and all but killed the fax
• take pictures with special effects (and see them instantly), where doing so once had to be done by an expert user with an expensive camera, who had to travel to a photo development shop and pay to see his physical pictures
• pull up an application like our lay-friendly Dr.You and get attention and information equivalent to an hour (or several visits) with a physician (or pack of specialists), and have an idea of what’s wrong, what to do, whom to see, and what it will cost before you set your phone down.
The people of the very near future also won’t have to put up with avoidable error anymore.
Terry Graedon says that “Despite advanced medical technology, misdiagnosis doesn’t seem to have become less common.” Actually, medical diagnostic technology has not advanced that much. That’s because the effort to codify the wisdom of physicians is monumental. But we’ve shown it’s doable.
The icing on top? A dynamically produced Workup Guide also takes care of Dr. Graber’s problem: the algorithm’s initial diagnoses inform the examination of symptoms, signs, and labs, but in a more rigorous and evidence-based fashion.
Mr. Graedon says the Weeds “point out that this approach goes well beyond a simple Google search.” Indeed. Google’s algorithm is limited to finding terms that are (1) near one another and (2) on pages that are pointed to by many other pages. Google’s algorithm does not process medical information. It does not think about the search terms in any way like a doctor.
Finally, our algorithm is the perfect tool to test and certify doctors – they can log on, work through some cases, get their score for differential diagnosis (including workups), and pay the re-certification fee online.
We hope you’ll give us your expertise and help us improve the system. True democratization of medicine means top-flight medical intelligence for everyone, the world over, regardless of income or skin color or location. Please sign up to find the less-optimized aspects of our system, and give us an earful of prescriptions.
Cheers,
Cameron Powell
CEO
Physician Cognition, Inc.
http://www.PhysicianCognition.com