We’ve recently been talking here about problems with poor study design in clinical trials. A health IT version of this problem raced through the newswires this week while I was on the road. The news coverage was particularly naïve, illustrating our point.
I’ll say at the outset that I haven’t corresponded with the study’s authors, and I’d welcome constructive dialog. I find myself frustrated, and I’ll lay out my reasons, open to correction.
Researchers at Stanford did a retrospective analysis of patient records 2005-2007 (article here) and concluded that quality of care was no better in institutions that used electronic medical records. Journalists who don’t know how to assess study design – or even look beyond the abstract of an article – were sucked in and posted headlines like Stanford researchers find EHRs don’t boost care quality.
Before we analyze, please reflect: what’s your impression upon reading those words? What does that headline lead you to think about the push to computerize healthcare?
It’s hard to know where to start in cataloging the ways these headlines give an impression that’s way off base. For starters, I wondered what they meant by “quality of healthcare.” What did they measure? Did they assess how well the patients turned out? How few complications they had, how few side effects? To me those are pretty important quality measures.
Well, no, they only measured whether the doctors prescribed the right treatment (or medication).
So it seems to me the headline should say, “Stanford researchers find EHRs don’t change what doctors prescribe.” Very different impression.
But what do I know? I do know that in healthcare a lot of people think the right prescription – the so-called “standard of care” – is the definition of quality. And indeed, that’s how this study defined quality. (Did you know patients only receive the standard of care a bit more than half the time? Indeed, installing EHRs wouldn’t change that. Methinks that problem has nothing to do with computers.)
A deeper issue is the question of who gets to define what quality is. In every transformed industry, the customer does, but in this one, they measure whether one professional did what another one said to.
But let’s stick to the issues of the research itself: social media to the rescue. On Twitter that day, health IT guru Brian Ahier steered us to this terrific analysis by Dr. Bill Hersh of Oregon Health & Science University. Bill catalogs the study’s limitations in clear English. A few excerpts: (Bill, I’m going to rip off chunks of your post, to reach lazy readers, in the hope that serious ones will click through for your whole great post.)
Like almost all science that gets reported in the general media, there is more to this study than what is described in the headlines and news reports. …
There is no reason to believe that the results obtained do not derive from the methods used…. However, there are serious limitations to this type of study and to the data resources used to answer the researchers’ question, which was whether ambulatory EHRs that include clinical decision support (CDS) lead to improved quality of medical care delivered.
“The data resources used” resonates with our posts this week about whether a study actually measures what it set out to measure.
…it is important to understand some serious limitations in these types of studies and this one in particular. A first limitation is that the study looks at correlation, which does not mean causality. This was an observational and not an experimental study. … As with any correlational study, there may be confounders that cause the correlation or lack of it.
The best study design to assess causality is an experimental randomized controlled trial. Indeed, such studies have been done and many have found that EHRs do lead to improvements in quality of care….
See Bill’s post for more. Then:
A second limitation of this study is the quality measures used. Quality measures are of two general types, process and outcome. Process measures look at what was done, such as ordering a certain test or prescribing a specific treatment. Outcome measures look at the actual clinical outcomes of the patient…
[In healthcare today] most measures used… are process measures that may or may not result in improved patient outcomes.
I guess that matches my initial gut reaction, above: they looked at a process measure (doing the right thing) rather than how the patient turned out. Quite a different concept of quality.
A third limitation … is that we do not know whether the physicians … had decision support [software] in place.
If I read this right, it means the researchers evaluated whether docs made the right decision, even though they didn’t have data on whether the systems in use had decision support installed.
Please do read Bill’s post, and the original article, to fully understand the situation. My concern here is twofold:
- Before interpreting any study, find out what you can about how the study was done. As in this case, you might easily discover that they didn’t study what you’d think, based on the published conclusion.
- Be really careful about interpreting health news. Headlines are commonly off-base. In my opinion science writers ought to be spanked thoroughly for parroting a touted conclusion without at least looking as far as I did.
CNN’s Sanjay Gupta was one example: Electronic Health Records No Cure-All. After blindly accepting the published conclusion, Gupta branches into a discussion of the Federal stimulus bill’s incentives to computerize healthcare, then cites skeptics who warn of overreliance on computers, because they might crash, etc. (I agree, don’t rely on unreliable crap of any sort! Certainly not mission-critical computer systems. Decades ago airline reservation systems used to crash sometimes; they fixed that. Engineering reliable systems isn’t rocket science; system buyers should ask for quality, just as patients should.)
As I said, I’d welcome dialog with the Stanford researchers. Hersh and I may be unaware of important factors. For the moment, I assert that the impression given by these headlines – and by the study’s published conclusion – is way, way off base: the 2005-2007 data gave no indication at all of what the future holds. We don’t even know whether the systems studied contained the feature that was measured.
Thanks again to Bill Hersh and to Brian Ahier for rapidly producing and sharing such great info.
Agreed that it’s important to read beyond the abstract to find out what the study really offers.
Also, though, many readers of non-technical media (news sites, newspapers, magazines, etc.) might not realize that the headline is frequently written by someone other than the author and is designed to be “eye-catching” rather than accurate. Too bad that’s what we all remember.
Dave, I confess I’ve also RTed without really reading the news. Only after more tweets and news came in did I bookmark it to read over the weekend, when I stumbled upon your post.
Indeed, “Like almost all science that gets reported in the general media, there is more to this study than what is described in the headlines and news reports.” And moreover, few reporters know how to interpret medical statistics.
I would also like to hear the Stanford researchers explain their findings in their own words. Thanks, Dave, for your amazing promptness in commenting on breaking news!
Terry, agreed about headlines – but in this case I think the abstract itself clearly says what these headlines say.
I wonder if we can encourage editors everywhere to pay closer attention to this issue. I know they have many pressures, but it’s our role to speak out when we see an area where more caution could help.
CDS and Rich Clinical Repositories: A Symbiotic Relationship
Also see important commentary here:
http://bit.ly/eN0OuH
Nice analysis. Thanks for adding some more knowledge about how messed up this study (and more particularly the headlines for the articles on this study) is.
I wrote a blog post about some of the details you mention, but also talked about the benefits of an EHR beyond quality of care that the study didn’t look at: http://www.emrandhipaa.com/emr-and-hipaa/2011/01/25/study-ignores-other-benefits-of-electronic-health-records/
We shouldn’t be surprised by headlines that reach beyond the facts. Seems to be standard fare. The authors went beyond their own findings, however, something the reviewers of the manuscript might have nipped in the bud.
Some additional thoughts about EHR context and intentions at
http://www.sharedhealthdata.com/2011/01/30/ehr-and-quality-murrow-had-it-right-otherwise-its-just-wires-and-lights-in-a-box/
I heartily endorse Sue’s additional perspective in the post she linked to.
Dave, I read it, and shredded it. Not worth the paper I didn’t print it on. Even more telling is the editorial posted right alongside it in the same issue, “Clinical Decision Support and Rich Clinical Repositories: A Symbiotic Relationship,” which none of the press read.
So I gotta ask, if the editorial pretty much trashes the study, why would they accept the paper??
I’m not trying to be snarky – that gets us nowhere. I really wonder, why?? What do I not understand about the process of accepting an article for publication?
Good question. I wish I knew the answer.
It’s no surprise to find an interesting discussion going on over on e-patients.net!
The thing that seems to be annoying most people about this study is the conflation of ‘quality of health care’ with the application of evidence-based medicine. Dave says that it might be better to look at outcomes data instead, but even that can give a narrow impression of what quality is.
Crossing the Quality Chasm (published in 2001 by IoM) listed 10 ‘rules for redesign’ to improve quality of health care across the US. (http://www.iom.edu/~/media/Files/Report%20Files/2001/Crossing-the-Quality-Chasm/Quality%20Chasm%202001%20%20report%20brief.pdf)
I think I’ll put these all down here:
1. Care is based on continuous healing relationships.
2. Care is customized according to patient needs and values.
3. The patient is the source of control.
4. Knowledge is shared and information flows freely. (Patients should have unfettered access to their own medical information and to clinical knowledge. Clinicians and patients should communicate effectively and share information.)
5. Decision making is evidence-based.
6. Safety is a system property.
7. Transparency is necessary.
8. Needs are anticipated.
9. Waste is continuously decreased.
10. Cooperation among clinicians is a priority.
These are a great set of recommendations, but unfortunately we sometimes get stuck at no. 5: care should be evidence-based. And even then we are slightly confused about what that might be. When Sackett first introduced the concept, he was at great pains to point out what it wasn’t: not cookbook medicine, but the integration of the best external evidence with clinical expertise and the patient’s own values and choices.
Yes, patient choice! Sackett was all for shared decision making! But instead we have veered a little too far towards cookbook or tick-box medicine.
Last year, Kent Bottles and I had quite a debate when he criticised Danielle Ofri for speaking out against how the quality of her care of patients was measured. (You can read my response here http://wishfulthinkinginmedicaleducation.blogspot.com/2010/08/quality-measures-and-individual.html)
When it comes to measuring ‘quality of health care’ it feels as if we haven’t got our metrics right yet. Given that, it doesn’t surprise me that the Stanford researchers considered the application of EBM in an individual consultation to be evidence of good quality care.
But I am glad that this research has been published. And I think that the commentary (not editorial) alongside it is helpful. You see, I would have presumed that introducing EHRs, and especially Clinical Decision Support software, WOULD have increased the application of EBM. The big question is, why not? This study can describe the lack of impact of CDS, but it can’t tell us why. That answer would probably come from qualitative, possibly ethnographic, research.
Thanks again,
Anne Marie
Great post! I hope more people will start to look beyond the headlines. You alluded to another important point: patients are very complex, and there are many outcome and process metrics that need to be evaluated when defining quality care, not just one. Furthermore, systems don’t fix problems and don’t in themselves improve care. EHRs are but one tool, and an effective tool only when used in conjunction with effective processes that eliminate waste, include the patient and family as members of the care team, improve communication between care providers, and promote best-practice utilization. One of the best contributions the EHR can potentially provide is the ability to capture and measure actual outcomes and what contributed to those outcomes. There is no silver bullet, including the EHR.
Hey Jackie – nice to see a “big iron” EHR vendor here. (Everyone, Jackie’s with Siemens.) Agreed with all your points – glad to hear the post makes sense to you.