Dr. Robert Wachter has an interesting essay over at THCB entitled, Should Patient Satisfaction Scores Be Adjusted for Where Patients Shop? As health care in the U.S. continues to move in the direction of tailoring itself to patient satisfaction, the question becomes — how do we make such ratings more reliable and fair? The answer is, “Not easily.”
Patient satisfaction in a hospital is a many-faceted thing. You could have the rudest doctor in the world (à la TV’s “Dr. House”), but if he ends up saving your life, how badly could you rate the rest of the hospital? You could die of a preventable infection acquired in the hospital, and yet who’s going to capture your unhappiness once you’re in the grave? You could have the nicest, kindest surgeon in the world, but if he leaves a sponge in you and has to re-open you to retrieve it (while telling you it’s just “a routine check to ensure everything is healing properly”), is your rating really going to help others understand that this hospital may not be the best one to visit?
Wachter’s point is a valid one — that we need to adjust hospital ratings, and even ratings within the same hospital, based upon the patient’s experience. Outpatient is usually a very different experience than inpatient. Staying in a psychiatric bed is usually nothing like staying in the ICU. Dealing with the madness of the E.R. is nothing like going in overnight for a routine colonoscopy. A hospital in a poor urban area is generally going to be more poorly rated than one in a rich suburb.
Researchers have long recognized the importance of rating “apples to apples.” If you try to compare the efficacy of an antidepressant with, say, that of an asthma inhaler, I’m sure you’ll get two very different results. But they shouldn’t have been compared in the first place, because they have virtually nothing to do with one another.
The same is true as we experiment with new ways to provide consumers with more information about the hospitals in their community. These rating systems should be carefully and scientifically devised, normed, validated, and then used only for “apples to apples” comparisons.
And the same can be said, emphatically, of online ratings of virtually anything. Almost no online rating systems have been normed or empirically validated, and almost none involve any sort of random sampling. This means that virtually every rating system you come across online, whether it’s for medications, or doctors, or, well, even a TV, is not really a scientific measure on which you should base your decisions. The people submitting these ratings are not a random sample, and so what you see and read online should be taken with a grain of salt.
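To see why non-random samples matter, here is a minimal sketch in Python. It assumes a hypothetical hospital where most patients are moderately satisfied, and it assumes (plausibly, though the exact probabilities are made up for illustration) that patients with strong feelings are far more likely to bother posting a review than those in the middle:

```python
import random

random.seed(42)

# Hypothetical "true" satisfaction for 10,000 patients on a 1-5 scale:
# most cluster around 4 (mean 3.8, modest spread).
population = [min(5, max(1, round(random.gauss(3.8, 0.8))))
              for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Assumed (illustrative) probabilities that a patient with a given score
# actually posts a review: the delighted and the furious post far more
# often than the merely content.
POST_PROBABILITY = {1: 0.50, 2: 0.25, 3: 0.05, 4: 0.10, 5: 0.40}

reviews = [s for s in population
           if random.random() < POST_PROBABILITY[s]]
review_mean = sum(reviews) / len(reviews)

print(f"true mean satisfaction: {true_mean:.2f}")
print(f"mean of posted reviews: {review_mean:.2f}")
```

Under these assumptions, the average of the posted reviews drifts noticeably away from the true population average, even though not a single individual review is dishonest. That is selection bias in a nutshell, and it is exactly the problem with treating unvalidated online ratings as data.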
Eventually, all of these things will be sorted out and we’ll find some happy medium. Until then, we’re left with a lot of pseudo-science and questionable data.