I recently hosted a Google Hangout on Air entitled Patient Reviews of Physicians: The Wisdom of the Crowd? (presented by The Harlow Group LLC in association with The Society for Participatory Medicine).
I spoke with Niam Yaraghi (Center for Technology Innovation, The Brookings Institution) and Casey Quinlan (Mighty Casey Media) following their interesting back-and-forth online on the question of whether and how patient reviews of physicians can add value. Please take a look at the posts that preceded the hangout. Here are the initial post and reaction: Niam’s post and Casey’s post – as well as Niam’s follow-up post and Casey’s follow-up post.
Please feel free to watch the Hangout on Air. Further below, I’ve shared a version of my introduction to the hangout, as well as some of the takeaways suggested by Niam and Casey (and taken away by me). We need your help refining these takeaways in the comments.
The challenge to the community is manifold. We need to do a better job of identifying:
- Types of cases in which the patient is an expert in the clinical sense (where we can rely on patient assessments of quality of clinical care)
- Useful measures (for both process and outcome)
- Better approaches to clarifying patient preferences (because we can’t score a process or an outcome if we don’t know what to score it against)
- On a related note, checklists for patients to use in making the most of encounters with the health care system
More on these points below.
The whole thing was kicked off by a post of Niam’s that included this passage:
Since patients do not have the medical expertise to judge the quality of physicians’ decisions in the short run, nor the ability to evaluate the outcomes of such decisions in the long run, their feedback would be limited to their immediate interaction with medical providers and their staff members. Instead of the quality of the medical services, patients would evaluate the bedside manner of physicians, the decor of their offices and the demeanor of their staff. This is why a physician’s online rating is not a valid measure of his or her medical expertise.
This, shall we say, inflamed Casey’s ire, as an engaged patient and patient activist. She noted that in many cases a patient with a chronic condition is in fact more expert in her condition — and certainly in the ins and outs of what works or doesn’t work in managing her condition — than a clinician new to the case. There is an oft-cited statistic that it takes 17 years for new medical science to filter from journal article to accepted everyday practice. Nobody wants to wait 17 years for her health care to catch up with the state of the art. An engaged patient is more likely to do the research, do the legwork, and surface ideas directly relevant to her case. Some clinicians, of course, are open to the notion that they can “Let Patients Help.”
Niam followed up with another post, noting that while some patients may be experts on their own conditions, others may not be — thus posing, essentially, the question of “how do I evaluate the reviewer?” This is a problem that should be familiar to anyone who shops online for anything. The key issue Niam raised in his follow-up, though, was this: an instrument is valid when it measures what it was intended to measure. He noted some studies that concluded that patient satisfaction is not necessarily tied to improved clinical outcomes.
(As an aside, there was a post on The Health Care Blog picking up on Niam’s perspective on patient reviews on Yelp, and showing that Yelp reviews are highly positively correlated with CAHPS results. With all due respect to the author, since both the reviews and the CAHPS surveys are largely based on patient experience — and not clinical quality process or outcome measures — the correlation does not seem to undercut Niam’s point. It does not address the broader question of whether a patient can be an expert on his or her own condition. The post does, however, point up the fact that physician-level predictive quality measures are as rare as hen’s teeth. This is in fact a problem that would be great to dig into.)
There are a whole lot of things that get measured and reported that are not necessarily tied to improved clinical outcomes — consider the reaction of top-tier medical centers every time some ranking is published showing that they are lower in quality than another provider, which is usually some variation on the following theme: “We serve sicker patients, so the results are skewed.”
Casey, in her follow-up post, confirmed that she is not suggesting that patient reviews should be the sole metric guiding choice of clinician … so I think she and Niam have at least some area of agreement.
What we need are metrics to guide rational choice of provider. Going with one’s gut is perhaps an imperfect approach, though for many of us it often seems to be the best we can manage. We certainly have a lot of measures and a lot of data on these measures rattling around out there — but they don’t necessarily enable us to better answer the question: What doctor should I go to?
So, back to our questions, as outlined above. I’m seeding the post with a few stabs at answers, but I am throwing the questions open to comment — please pile on.
1. What are some types of cases in which the patient is an expert in the clinical sense (where we can rely on patient assessments of quality of clinical care)?
A couple of examples come to mind:
- The patient with a chronic condition who is more knowledgeable about her condition and the latest research regarding therapies and other approaches to managing the condition than is her new doctor.
- The patient whose condition was misdiagnosed (and therefore effectively left untreated) by three doctors before Doctor #4 correctly diagnosed and treated it.
Each of the patients in these cases is qualified to review her doctor(s) not just in terms of bedside manner, but in terms of clinical quality of care. I would appreciate the wisdom of the crowd in identifying additional “cases” such as these. With a bank of such cases at hand we may be better able to build a framework for clinical quality ratings by patients.
2. What are some useful process measures to use?
There may be some value in standardizing the measures we use in reviewing physicians from the process perspective, or at least the domains we consider. Niam suggested the following:
- Quality of communication between patient and care provider
- Quality of teamwork among the members of the medical team, as observed by the patient
- Following basic rules of infection control
- Reviewing prior medical records of patients
- Following up with the patient and making sure that he or she has completely understood medical orders and can comply with them
- Listening to patients and addressing their concerns during visits
What other measures like these should we be considering? These are perhaps a step up from the “I like my doctor because she is polite and on time” sort of reviews, but: Is there a correlation between good rankings on these metrics and good outcomes? I am on a life-long search for a handful of good measures that would prove to be predictive of everything else that matters; I am not convinced that these take us very far down that path.
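To make the idea of standardization concrete, here is a minimal sketch of what a structured review record built on domains like Niam’s might look like. All field names and the 1-to-5 scale are hypothetical illustrations, not the schema of any existing rating site:

```python
from dataclasses import dataclass, asdict
from statistics import mean
from typing import Optional

@dataclass
class PhysicianReview:
    # Each domain is scored 1 (poor) to 5 (excellent); None = not observed.
    communication: Optional[int] = None      # patient-provider communication
    teamwork: Optional[int] = None           # teamwork as observed by the patient
    infection_control: Optional[int] = None  # basic infection-control practices
    records_review: Optional[int] = None     # reviewed prior medical records
    follow_up: Optional[int] = None          # confirmed understanding of orders
    listening: Optional[int] = None          # listened to and addressed concerns
    free_text: str = ""                      # narrative comment, kept separate

    def domain_scores(self) -> dict:
        """Return only the domains the patient actually scored."""
        scores = asdict(self)
        scores.pop("free_text")
        return {k: v for k, v in scores.items() if v is not None}

    def composite(self) -> Optional[float]:
        """Unweighted mean over observed domains; None if nothing was scored."""
        observed = list(self.domain_scores().values())
        return mean(observed) if observed else None

review = PhysicianReview(communication=5, listening=4, records_review=2)
print(review.composite())  # 3.666...: averages only what was observed
```

Keeping the narrative comment separate from the structured domains, and averaging only the domains a patient actually observed, would avoid penalizing a physician for aspects of care the reviewer had no occasion to see.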
3. What are useful outcome measures to use, and who is able to rate a clinician according to these measures?
The classic example of the disconnect between a clinician’s view of quality and the patient’s is the old saw about the orthopedic surgeon who pronounced a leg healed, ignoring the fact that the patient had died. (Sorry, orthopedic surgeons; it’s just a hypothetical example ….)
- Patient satisfaction with the outcome
In the end, the only metric that matters is patient satisfaction. Why? Because care must be delivered to address patient needs and preferences. The optimal treatment for two patients with similar clinical presentations may be entirely different, based on family issues and personal preferences (for example, treating a terminal illness differently for a patient who wants to walk his daughter down the aisle in six months vs. one who wants only to have a good death).
4. What combined process and outcome measures should we be using to rate quality?
The desired outcome ought to be determined by taking patient desires and preferences into account. In many situations, success in achieving clinical goals will be largely determined by whether the patient has had sufficient voice in determining those goals. We need better approaches to clarifying patient preferences before embarking on courses of treatment. We can’t score the process or the outcome unless we know the patient’s views on the process and outcome. (Consider the work of the Dartmouth Preference Lab.) A basic step on the way to clarifying these preferences is ensuring that patients are making the most of encounters with the health care system, which may be enabled in some situations by the use of checklists (here are two examples offered by Casey — one and two).
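As a toy illustration of why preference clarification must precede scoring, the sketch below scores the same clinical result against two patients’ stated goals. All goal names, weights, and achievement values are hypothetical:

```python
def preference_weighted_score(goal_weights: dict, achievement: dict) -> float:
    """Weighted average of how fully each patient-stated goal was met.

    goal_weights: goal -> relative importance to this patient
    achievement:  goal -> degree achieved, from 0.0 (not at all) to 1.0 (fully)
    """
    total = sum(goal_weights.values())
    return sum(w * achievement.get(goal, 0.0)
               for goal, w in goal_weights.items()) / total

# One course of treatment, one clinical result:
result = {"survival_6mo": 1.0, "symptom_control": 0.4, "time_at_home": 0.3}

# Two patients with similar presentations but different priorities
# (cf. the walk-her-down-the-aisle example above):
wants_the_wedding = {"survival_6mo": 5, "symptom_control": 1, "time_at_home": 1}
wants_a_good_death = {"survival_6mo": 1, "symptom_control": 5, "time_at_home": 5}

print(round(preference_weighted_score(wants_the_wedding, result), 2))   # 0.81
print(round(preference_weighted_score(wants_a_good_death, result), 2))  # 0.41
```

The arithmetic is trivial; the dependency is the point: without the weights, that is, without knowing the patient’s preferences, there is no defensible way to compute the score at all.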
We have a tool that can be used to assess the level of a patient’s activation and engagement: the Patient Activation Measure. But of course that activated patient needs to engage with a receptive clinician. There ought to be a parallel tool that we could use to measure the clinician’s receptivity to and engagement with an activated patient — a tool that should include a measure that can identify the clinician who is able to activate a patient.
We may have veered slightly away from the narrow question of whether patients can rate providers in a useful way, and into the broader question of what might be the most useful set of quality measures for providers.
Bottom line: Any global assessment of provider quality must take into account care goals identified through an examination of patient preferences. Please help flesh out our thinking on this subject by adding your voice to the conversation.
David Harlow is a health care lawyer and consultant at The Harlow Group LLC, and chairs the Society for Participatory Medicine’s public policy committee. Check out his home blog, HealthBlawg. You should follow him on Twitter: @healthblawg.
Ultimately, healthcare consumers will need some objective measures of clinician performance.
For example, an algorithm like that built by my company can tell consumers exactly which symptoms their doctor should ask (or should have asked) about, which physical signs their doctor should check (or should have checked), and which labs or tests should (and should not!) be or have been conducted.
Not to mention the differential diagnosis the clinician should have arrived at. In fact, consumers who use our Centaur™ differential diagnosis can present their clinician with considerably more information than the clinician could extract in the typical 8-minute appointment.
And our technology is all free to healthcare consumers, forever. Right now it’s got a lot of medical language in it, and while we’ll be making a more layperson-friendly version soon, dedicated consumers who really want answers can easily use the technology to do their homework — and, in many cases, be better prepared than their clinician.
Cameron
CEO
Physician Cognition
http://www.PhysicianCognition.com
Terrific capture of the crux of the conversation, and the controversy. Patients aren’t MDs, but they know what they want to get out of a clinical encounter. Some people might shade the truth (saying they don’t smoke, when they do) in order to avoid the outcome of that underlying truth, but … that truth will out.
In the case of the patient who knows what s/he wants, and understands his/her part in taking the clinical encounter to success or failure, assigning the patient’s knowledge and POV to secondary status, below that of “medical experts,” is a mug’s game, IMO.
The root problem here is that, when we argue metrics on provider quality, we wind up comparing apples to chain saws. Clinical quality metrics – infection control, guidelines for care for specific conditions, physician MoC, and the rest of the QI deck of cards – talk a lot about process, a little about patient experience, but never, not once, about patient preference as part of the initial care intake process.
Until we get there, we’ll be stuck on apples and chain saws.
I cross-posted this at HealthBlawg. Stop by and see the comments there, too: http://healthblawg.com/2015/06/patient-reviews-physicians-useful.html
Coincidentally, I just stumbled across this ancient post here (from 2008!) by co-founder Doc John Grohol: How good are doctor rating sites? His comments on method are still relevant!
Dave, do you know the status of the S4PM “Seal”?
The Seal program is stalled for lack of direction. The last I heard, all agreed it was a good idea, but the questions were: how exactly does one qualify for it, and what work would be done to publicize it, integrate it into databases, etc.?
If you have any thoughts they’d be most welcome! (Or anyone out there.)
Some additional comments from e-patient Carly Medosch, who I met IRL at MedCity ENGAGE in Bethesda, July 2015: https://storify.com/healthblawg/how-to-frame-patient-reviews-of-physicians
This is a very thoughtful post.
1) Doctors can self-assess using these provisional criteria and not wait for patient reviews to give them constructive or destructive criticism.
2) Patient experiences are not just medical and centered on medical decision-making. They are also social and emotional experiences for patients and their families. All medical decisions are medical, social, and emotional processes. This has yet to be worked out. The end-of-life care literature addresses some of this.