CNN recently published an article about what to look for in a doctor rating website. Unfortunately, it repeats some common misconceptions and errors about these services.

The most serious error is the claim that the greater the volume of ratings a website has, the more reliable or statistically valid those ratings become.

It’s a matter of statistics: The more reviews you read, the more likely you are to get an accurate assessment. “I would check a lot of different Web sites,” says Carol Cronin, executive director of the Informed Patient Institute. “Look across them, not just within one.”

Speaking of volume, a common concern about doctor rating sites is that one angry patient can make multiple nasty comments, using a different name each time (or, conversely, that the physician herself could go on and make multiple glowing comments).

But Martin Schneider, chairman of the Informed Patient Institute, says these sites have ways of detecting when one person is making several comments under different names. Back in the 1990s, Schneider was president of a now-defunct doctor rating site called thehealthpages.com. “Even back then, we had the technology to stop that from happening,” he says.

These claims are commonly made, but they are largely incorrect. Here’s why…

In survey research (which is basically what a doctor rating site is trying to be), you need a sample that is both large and randomized. That is, you do not go out and post an announcement saying, “Take our survey if you think you have depression” if you’re looking for an unbiased data sample on depression in the general population. You need to have a group of people that both have and don’t have depression in order to obtain generalizable results.

The same is true with ratings sites. They may get the volumes needed, but none of these sites have any way of addressing the biased sample problem. People who rate their doctors are likely to fall into one of two categories — they either had a horrible experience with them and want others to know, or they had a wonderful experience with them and want others to know. But most people who fall in between these two extremes and have run-of-the-mill experiences with the doctor will likely never rate, because they have little incentive to do so.
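The selection effect described above is easy to see in a quick simulation. This is a hypothetical sketch, not data from any real ratings site: the satisfaction distribution, the 1-to-5 scale, and the rule for who bothers to post are all illustrative assumptions.

```python
import random
import statistics

random.seed(42)

# Hypothetical patient population: true satisfaction on a 1-5 scale,
# clipped to the scale's bounds (assumed distribution, for illustration only).
population = [min(5.0, max(1.0, random.gauss(3.5, 0.8))) for _ in range(10_000)]

def posts_review(score):
    # Assumed selection rule: only patients with extreme experiences
    # (very unhappy or very happy) bother to post a rating.
    return score <= 2.0 or score >= 4.5

reviews = [s for s in population if posts_review(s)]

print(f"True mean satisfaction:    {statistics.mean(population):.2f}")
print(f"Mean of posted ratings:    {statistics.mean(reviews):.2f}")
print(f"Share of patients posting: {len(reviews) / len(population):.0%}")
```

Under these assumptions, the posted ratings come from a small, self-selected minority, and their average can land well away from the population's true average. Adding more reviews drawn from the same biased pool does not fix this; it only makes the biased estimate more precise.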

You would also need a huge number of patients rating each doctor (at least 20 to 30% of the entire patient list) in order for the ratings to gain enough statistical power to be reliable and valid, even setting aside the biased-sample problem.
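A rough back-of-the-envelope check shows why a handful of ratings tells you so little. The numbers here are illustrative assumptions, not figures from the article: a 1-to-5 star scale with an assumed standard deviation of 1.2, and a standard 95% confidence interval for the mean.

```python
import math

# Illustrative assumption: star ratings on a 1-5 scale with standard deviation 1.2.
SD = 1.2

def margin_of_error(n, sd=SD, z=1.96):
    """Half-width of a 95% confidence interval for the mean rating,
    using the normal approximation: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

for n in (5, 10, 30, 100):
    print(f"n={n:>3}: mean rating pinned down only to +/- {margin_of_error(n):.2f} stars")
```

Even with 30 reviews, the average rating is uncertain by nearly half a star in each direction, and with 5 reviews by more than a full star. And this calculation charitably assumes the reviewers were a random sample, which, as argued above, they are not.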

CNN admits as much later in the same article, quoting Dr. Robert Wachter:

While patient reviews might be useful, they have several clear drawbacks, our experts say. First, many doctors have just a few reviews or none at all. Second, even if a doctor has 20, 30, 50 or 100 reviews, that’s still only a small fraction of his entire patient population — and a warped fraction at that.

“The person most likely to write is the one who’s most enthralled with the doctor, or the one who’s most pissed,” Wachter says. “You’re getting a skewed view.”

The article's other advice is generally solid: decide what's important to you, look for patterns in the ratings, look for specifics in people's reviews, put more weight on detailed reviews than on general comments, and consult the objective data already available. But none of it addresses the foundational statistical problems with these kinds of online rating systems. The businesses behind these sites gloss over those problems, yet if a rating isn't scientifically sound, its value is substantially diminished.

And honestly, Martin Schneider is a bit naive if he thinks it isn't simple to rate one doctor multiple times on these sites. By clearing your cookies, using a few webmail addresses, and routing through a Web proxy, you can register as many accounts as you like on any of these services in a matter of minutes.
