Cross-posted from my own blog, with a late p.s. from this morning’s paper
When John Grohol read my post the other day about evidence-based medicine, he steered me to a paper worth reading: Helping Doctors and Patients Make Sense of Health Statistics. (Update Dec 15 2010: that link is broken; this link works.)
This is relevant to the e-patient movement because as you and I become more responsible for our own healthcare, we need to be clearer about what we’re reading. Plus, it appears we could be more vigilant about what our own professional policymakers – and even our MDs – are thinking.
The paper is 44 pages, but even the first few will open your eyes to how statistically illiterate most of us (and them) are. Consider this question, which was given to 160 gynecologists:
Assume the following information about the women in a region:
- The probability that a woman has breast cancer is 1%
- If a woman has breast cancer, the probability that she tests positive is 90%
- If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (false-positive rate)
A woman tests positive. She wants to know whether that means that she has breast cancer for sure, or what the chances are. What is the best answer?
- The probability that she has breast cancer is about 81%.
- Out of 10 women with a positive mammogram, about 9 have breast cancer.
- Out of 10 women with a positive mammogram, about 1 has breast cancer.
- The probability that she has breast cancer is about 1%.
21% of them got the right answer (#3: about 1 chance in 10). 60% guessed far too high, and the remaining 19% chose #4, which is about 10 times too low.
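(A quick aside from me, not the paper: answer #3 is just Bayes' rule applied to the three facts above.)

$$
P(\text{cancer}\mid\text{positive})
= \frac{P(\text{positive}\mid\text{cancer})\,P(\text{cancer})}{P(\text{positive})}
= \frac{0.90 \times 0.01}{0.90 \times 0.01 + 0.09 \times 0.99}
= \frac{0.009}{0.0981} \approx 9\%
$$

So roughly 1 in 10.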
The paper presents numerous other examples of statistical illiteracy (a form of "innumeracy"): misunderstandings of data that lead to serious unintended policy consequences. My personal favorite is the opening item about Rudy Giuliani's assertion that he was lucky to have gotten prostate cancer here instead of under the UK's "socialized" medical system. It's not because I don't like Giuliani – it's that he was comparing five-year survival rates, which screening inflates simply by detecting cancers earlier, even though the two countries' death rates from prostate cancer are about the same. His misunderstanding of the data he was quoting led him to advocate something that had nothing to do with his actual odds. He himself would have been harmed if he'd been guided by his own best advice. And he's not alone in that.
The paper proposes uncomplicated ways to improve our comprehension. First among them: stop talking in percentages and talk instead in raw numbers, what the paper calls "natural frequencies." Phrased that way, the same three facts that were given to the gynecologists become much clearer:
- 10 out of every 1,000 women have breast cancer
- Of these 10 women with breast cancer, 9 test positive
- Of the 990 women without breast cancer, 89 nevertheless test positive
Presented this way, 87% of the gynecologists got it right. (Of the 98 women who test positive, only 9 actually have cancer: about 1 in 10.)
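If you like to check arithmetic like this yourself, here's a quick back-of-the-envelope sketch in Python (mine, not the paper's) that reproduces the natural-frequency numbers:

```python
# Back-of-the-envelope check of the mammogram numbers above.
women = 1000
with_cancer = 10                                        # 1% prevalence
true_positives = 9                                      # 90% of the 10 test positive
false_positives = round((women - with_cancer) * 0.09)   # 9% of the 990 -> 89

total_positives = true_positives + false_positives      # 98 positive mammograms
ppv = true_positives / total_positives                  # chance a positive is real

print(f"Of {total_positives} positives, {true_positives} are real: {ppv:.0%}")
# Of 98 positives, 9 are real: 9%
```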
Another example echoed what The End of Medicine said about Lipitor. (In the control group, 1.5% had a coronary event; with Lipitor, about 1% still had one.) A 1995 alert in the UK warned that certain oral contraceptives doubled the risk of blood clots in the lung or leg. Understandably, many women stopped taking the pill. Within three years, 13,000 more abortions were performed, reversing five years of decline, and there was a matching increase in live births.

What was the risk that led to this? In raw numbers, about 1 woman in 7,000 taking the earlier pills had such a blood clot anyway; with the new pill, it was 2 in 7,000. "Double the risk" meant one additional clot.
The irony in this case is that both abortion and childbirth carry more risk of clots than the pill itself. In other words, one benefit of the pill is that, by preventing pregnancy, it avoids the clot risks that come with the end of any pregnancy, whether by birth or abortion.

So although the number presented ("double the risk") was perfectly accurate as a relative risk, the absolute increase was tiny, and the real clinical impact wasn't nearly as dramatic.
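Spelled out (my arithmetic, using the paper's raw numbers):

$$
\text{relative risk} = \frac{2/7{,}000}{1/7{,}000} = 2 \;(\text{"double"}),
\qquad
\text{absolute increase} = \frac{2}{7{,}000} - \frac{1}{7{,}000} = \frac{1}{7{,}000} \approx 0.014\%
$$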
This is just a taste of the first few pages. The paper gets dry in places, but the opening is compelling and informative – and at no point does it require that you be a mathematician. The explanation of Giuliani's error is particularly good.
p.s. A perfect example just came in, just before the scheduled release of this post: Today’s NY Times discusses a “large new study” of Crestor, a statin, involving 17,800 patients. It reports apparently dramatic benefits – 54% fewer heart attacks, etc. And it correctly, imo, asks “Who should take statins?”
But these “relative risk reduction” numbers (percent reduction) are exactly what Making Sense warns against: what are the raw numbers?
This is not to say we shouldn’t use statins. The whole point is that the Times piece doesn’t give us enough information to know.
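To see why the raw numbers matter so much, here's a small sketch (mine; the baseline rates below are made up, since the article doesn't give the real ones) showing that the same 54% relative reduction can mean wildly different absolute benefits:

```python
# Hypothetical illustration: the same "54% fewer heart attacks" is compatible
# with very different absolute benefits, depending on the control group's risk.
relative_reduction = 0.54

for baseline in (0.20, 0.02, 0.002):            # made-up control-group event rates
    treated = baseline * (1 - relative_reduction)
    absolute_benefit = baseline - treated
    nnt = 1 / absolute_benefit                  # people treated per event prevented
    print(f"baseline {baseline:.1%}: absolute benefit {absolute_benefit:.2%}, "
          f"NNT about {nnt:.0f}")
```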
And Making Sense argues that without such information, the whole concept of informed consent is a fiction. Think about that one for a bit.
Thanks to the good Doctor John for the link.
Hi Dave,
Great analysis and good question. I think part of what you're asking is, "Why hasn't the profession/industry figured out the best way for me/us to make an informed decision about something so important?"
It's a really good question, and one that has been asked many times. I'm sitting here thinking of answers, and I could come up with many.
For example, here's an article from the BMJ that attempted to tackle this very thing a few years ago, using graphical images.
At baseline, we don’t even use visuals that much in communicating in health care…and we should.
Maybe one of the issues with articles like the one you posted and the BMJ article is that we’re missing the part about implementing a system.
Maybe with new software (like this for the iPhone – seriously, why couldn't there be a visual risk-display application? and would it work for every patient?) and new providers like HelloHealth, the desire to provide Results (with a capital R) for patients will cause this to happen.
Other explanations? Solutions?
You’re getting close to having a medical degree, watch out,
Ted
> You’re getting close to having a medical degree
Well, don’t come to ME with your renal cell carcinoma. :)
> I think part of what you’re asking is…
You and I are viewing this from very different angles. I completely agree with your question, but from the lay-patient perspective, the eye-opening mind-pop is that there are really significant holes in the *basic* analytical skills of the people we rely most on: doctors and science writers.
The AP reporter who wrote about my use of CaringBridge in June, Stephanie Nano, mentioned that she’d been through a rigorous course on science writing at MIT. I’ll see if I can invite her to pitch in here.
Sure, "the whole system ought to be different." But at the Thomas Paine end of the transformational curve, what catches my eye is finding ways to awaken each other to surprising shortfalls (even big potholes) in what we think is reality.
Yesterday’s Crestor editorial in the Times really surprises me. btw, has anybody seen the raw numbers? What is the improvement, one in five or one in five thousand?
Dave hit the nail on the head at the end there: the mainstream media isn't reporting the real data – the raw numbers. Instead, they repeat what the press release says, or what the study's authors report in summary.
I can't emphasize enough how many decisions are made on incomplete and inaccurate statistical information. Most docs would say, "Well, this medication will reduce your risk by 50%." But the truth is far less compelling: that's a relative reduction across a population, and the absolute change in your own personal risk is likely minuscule.
Until more physicians, policy makers, and reporters really start to understand the power (and misuse) of statistics, they will continue to make poor decisions, report on the "non-findings" of much research, and make bad policy guidelines based on faulty information.
John/Dave, your comments reminded me of this site, which reviews the quality of reports on studies:
http://www.healthnewsreview.org/review/review.php?rid=1631
Best,
Ted
Ted, I’m delinquent in responding, but WOW, HealthNewsReview is a great resource! Thanks!