A recurring theme on this blog is the need for empowered, engaged patients to understand what they read about science. It’s true when researching treatments for one’s condition, it’s true when considering government policy proposals, it’s true when reading advice based on statistics. If you take any journal article at face value, you may be severely misled; you need to think critically.
Sometimes there’s corruption (e.g. the fraudulent vaccine/autism data reported this month, or “Dr. Reuben regrets this happened”), sometimes articles are retracted due to errors (see the new Retraction Watch blog), and sometimes scientists simply can’t reproduce a result that looked good in the early trials.
But an article a month ago in the New Yorker sent a chill down my spine tonight. (I wish I could remember which Twitter friend cited it.) It’ll chill you, too, if you believe the scientific method leads to certainty. This sums it up:
Many results that are rigorously proved and accepted start shrinking in later studies.
This is disturbing. The whole idea of science is that once you’ve established a truth, it stays put: combine hydrogen and oxygen in a particular way and you get water every time, not sometimes water and other times chocolate cake.
Reliable findings are how we’re able to shoot a rocket and have it land on the moon, or step on the gas and make a car move (predictably), or flick a switch and turn on the lights. Things that were true yesterday don’t just become untrue. Right??
Bad news: sometimes the most rigorous published findings erode over time. That’s what the New Yorker article is about.
I won’t try to teach everything in the article here; if you want to understand research and certainty, read it. (It’s longish, but great writing.) I’ll just paste in some quotes. All emphasis is added, and my comments are in [brackets].
- All sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. … In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
- “This is a very sensitive issue for scientists,” [Schooler] says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
- [One factor is] publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for.
- [The point here is that what gets published is almost always the successful study. Lots of useful information could come from failed studies, but they rarely get published. See the little simulation sketch after this list.]
- [The problem is that anything can happen once, at random. That’s why it’s important that a result be replicable (repeatable by another scientist): like that light switch, if someone else tries it, they had better get the same result. But the article points out that most published results are never tested by another researcher.]
- In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
- [But publication bias] remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. [By this point, this article was driving me nuts.]
- [Re another cause of this problem,] In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
- [We had two posts in October here and here about an Atlantic article by Dr. John Ioannidis, who is quoted in this article:] “…even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
- The current “obsession” with replicability distracts from the real problem, which is faulty design [of studies].
- In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations [before they do them] and document all their results. [Including those that fail!]
- [Note: Pew Research publishes all its raw data, for other researchers to scrutinize or use in other ways.]
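Here’s the simulation sketch promised above: a minimal toy model of how publication bias alone can manufacture a decline effect. Every number in it is invented, and it models the general mechanism, not any particular study the article mentions.

```python
# Toy model: publication bias alone produces a "decline effect".
# All numbers are invented; this sketches the mechanism, not any real study.
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.2            # a small, real effect (in standard-deviation units)
n_per_study = 30             # subjects per study
n_studies = 10_000

se = 1 / np.sqrt(n_per_study)                       # standard error of each estimate
estimates = rng.normal(true_effect, se, n_studies)  # each study's measured effect

# Journals "publish" only the studies that cross z > 1.96 (p < 0.05, one-sided).
published = estimates[estimates / se > 1.96]

print(f"true effect:             {true_effect:.2f}")
print(f"mean published estimate: {published.mean():.2f}")   # more than double the truth
print(f"studies published:       {len(published) / n_studies:.1%}")

# A replication is just a fresh draw from the same process, so on average it
# reports ~0.2, and the original, inflated finding appears to shrink over time.
```

Note what the sketch shows: the filter, not fraud, inflates the first reports; honest replications then “decline” toward the true effect.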
The corker that caps it off is John Crabbe, an Oregon neuroscientist, who designed an exquisite experiment: genetically similar mice were sent to three different labs that maintained incredibly uniform conditions. Read the article for details. When the mice were injected with cocaine, the reactions of the three groups of relatives were radically different: same biology, same circumstances, yet a seven times greater effect in one of the groups.
What?? (There’s more; read it.)
If you’re a researcher and this has happened, and it’s time to “publish or perish,” what do you do? What is reality?
The article winds down:
The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
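A footnote on the Crabbe result: the article blames invisible variables, but it’s worth seeing how far plain sampling noise can go on its own. Here’s a toy sketch; the numbers are invented and bear no relation to Crabbe’s actual measurements.

```python
# Toy sketch: three "labs" measure the SAME true effect in small samples.
# This shows only the sampling-noise part of the story; all numbers invented.
import numpy as np

rng = np.random.default_rng(7)
true_effect = 1.0      # identical underlying biology in every lab
n_mice = 8             # small samples are common in animal studies

for lab in ["lab A", "lab B", "lab C"]:
    sample = rng.normal(true_effect, 2.0, n_mice)    # noisy measurements
    print(f"{lab}: observed mean effect = {sample.mean():.2f}")

# With n=8 and this much noise, one lab can easily report several times
# another lab's effect, even though nothing real differs between them.
```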
Implications for e-patients
Wiser people than I will have more to say, but here are my initial takeaways.
- Don’t presume anything you read is absolute certainty. There may be hope where people say there’s none; there may not be hope where people say there is. Question, question, question.
- Be a responsible, informed partner in medical decision making.
- Don’t expect your physician to have perfect knowledge. How can s/he, when the “gold standard” research available may be flaky?
- Do expect your physician to know about this issue, and to have an open mind. The two of you may be in a glorious exploration of uncertainty. Make the most of it.
- Expect your health journalists to know about this, too. Health News Review wrote about this article last week, and I imagine Retraction Watch will too. How about the science writers for your favorite news outlet? I can’t imagine reporting on any finding without understanding this. Write to them!
Mind you, all is not lost. Reliability goes way up when you can reproduce a result. (Rockets, light bulbs, chocolate cake.) But from this moment forward, I’m considering every new scientific finding to be nothing more than a first draft, awaiting replication by someone else.
Dave, this is a great article on a subject that needs to be dragged out into the light, scrutinized, discussed and reformed. Individual patients and patient communities have a major role to play here.
See #pubplan (publication plan): the legion of ancillary agencies that work with the industry on this are not currently the force for change they should be, and are being called out as such. See also the collective term for their medical communications activities, #medcomms.
Andrew, the article is indeed a great read. Thanks Dave for summarizing it.
But a great read is different from the truth. The predicate presented in the article is certainly worth a significant amount of research but it should be subjected to the same kind of scrutiny as any other scientific predicate. And, as usual, thanks to the Net, it is!
Yes, we must present and introduce the decline effect to all patients who want to be informed and autonomous. But the cause of the decline effect may end up being completely different from what is presented here. I have been closely following the deep controversy that has followed the publication of the New Yorker article. As Dave wrote in bold letters: Don’t presume anything you read is absolute certainty. Even about the decline effect.
Via the tireless patient safety advocate Helen Haskell I just learned of a similar article in Discover last year: The Streetlight Effect: Why Scientific Studies Are So Often Wrong. It describes similar issues, e.g. with anti-arrhythmia drugs (56,000 deaths a year), and this:
The author is David Freedman, touting his new book Wrong. Has anyone here read it? (It’s not just about healthcare.)
Then he hints at what you may be suggesting, Andrew: “It sure would be nice if someone would point that out to us when one of those studies makes headlines.”
But Freedman’s article lacks what so many of these critiques lack: What’s a patient supposed to do?? What’s a researcher to do? Or physicians seeking reliable advice for a new standard of care?
If I get a warning I’m warned, but I’m not enabled. That’s why I ended with specific advice. And it sounds like your #pubplan and #medcomms advice would attack the issue at a different point in the flow.
Dave, we have a very fundamental difference here!
I couldn’t disagree more with that sentence “If I get a warning I’m warned, but I’m not enabled”.
You and many ACOR KIDNEY-ONC members have been enabled by the simple warning that many doctors don’t propose IL-2. Yes, the list also provided you with the names of some doctors who behave differently, but in the early days, when those names were not as famous as they are today, just knowing that you had to start asking questions was as powerful as it is today.
The warning is an integral part of the enabling process. The more you understand how much is unsure in medicine, the more you grasp that patient autonomy is required to optimize survival in cancer care.
Gilles, I completely agree. (You often note that you and I have fundamental differences, but I keep seeing the similarities.)
ACOR members surely are enabled by knowing about IL-2, and by being warned that their doc might not mention it (or even know about it). But I was talking in my comment about the Discover article, which merely rants about the problem without giving patients (or other readers) any usable advice.
And, as it happens, because I was a proactive patient years before I got cancer, ACOR did NOT make a difference in my rx. The ACOR members told me about IL-2 and gave me McDermott’s phone number, but as it happens he was the oncologist I’d already been referred to. So, in my own case, I was headed for the best even without ACOR.
It’s important, though, that as I began treatment I was well prepared for the side effects by the first-hand accounts of other ACOR members. McDermott has said that may have helped me endure the effects and have more IL-2 than I could otherwise have tolerated, which is likely why I’m alive.
There was a fun moment, though: when another doctor (after my biopsy confirmed the diagnosis) told me the oncologists would recommend some treatments, I said “Here’s hoping I qualify for IL-2.” His eyebrows went up about an inch – not accustomed to the newly diagnosed knowing about the latest & greatest. :–) That was an ACOR-induced moment.
btw, Gilles, I’m dyin’ to know what you’re learning about the decline of the decline effect! Can’t wait.
It’s going to be a while. The article and the controversy it generated are only two elements in this huge surge of questioning of the scientific method’s validity. I am very interested in trying to understand the reason for this surge at the same time that we are seeing the disruptive power of WikiLeaks. I think the two are deeply connected.
Look at what happens on ACOR. It’s a lot like the groups acting as the WikiLeaks for their diseases.
Dave, great synthesis. Yes, “forewarned” needs to be accompanied by “forearmed”. This is why I am doing a series on how to read medical literature. It can be found here
http://evimedgroup.blogspot.com/search/label/reviewing%20lit
Oo, that series looks like juicy empowering stuff, Marya. As I said on Twitter, I’d love to see you turn that into something for the Journal of Participatory Medicine. Or maybe some sort of training material for the Society’s site.
We’re making this up as we go along. Glad to see so much good stuff coming together.
Interesting idea, Dave. Let me noodle on it and then let’s chat.
I’m anxious to read the New Yorker article.
In the meantime, people shouldn’t confuse basic science with clinical science. Basic science generally employs the experimental method whereas clinical research largely consists of observational studies. The study designs and reliance on statistical inference are dramatically different between the two. Clinical research is much more vulnerable to study design problems and statistical biases.
See Figure 1 in the linked PDF
http://www.markboguski.net/publications_PDFs/Boguski%20McIntosh.pdf
I’ll supply more references after I read the New Yorker article.
Thank you, Mark!
That is a very good article, indeed. But not exactly easy reading :-)
How do you explain the same, in a broader way, to cancer patients who are looking to become educated about the scientific process?
We should work on this together. Have you ever looked at our Baloney Detection Kit?
http://bit.ly/evRmwP
Perhaps we could devise a simpler, more focused version of this for use by healthcare consumers, in both written and video formats.
Agreed, Mark – I’d add that engineering is a whole different subject from science. Someone said once that pure scientists are concerned with absolute reality, regardless of whether it makes any practical difference, and engineers are only concerned with practical differences, regardless of why. This is what lets engineers come up with roads and cars that work, even though we don’t really know at a subatomic level why any of it works.
But I didn’t want to get into that here; I was seeking to awaken awareness of the uncertainty in what most people think is “good solid science.” How many times have I heard people talk about RCTs (randomized controlled trials) being “the gold standard” of evidence? Not nearly as gold as we’ve been told, evidently!
If your life is on the line and you’re debating treatment options (or statins or stents), that’s a big problem, IMO. Not to mention that reporters then pick up supposedly-reliable findings and tell the public what to do. YIKES.
As we (collectively) argue about what treatments are valid, and how patients can partner competently with oncologists and other providers, I keep bumping into the fact that it’s vital for us (patients) to understand the literature our doctors read, so we can be more effective in assessing treatment alternatives.
A perfect example (not related to the “decline effect”) is the treatment I got, IL-2. Almost all U.S. oncologists think it has low response and high mortality. That information is 15 years out of date. Yet doctors commonly (as ACOR members know) say they won’t recommend IL-2 because it lacks that “gold standard”!
It does no good to assert that patients should club such oncologists with insults; I want to develop ways of teaching people to be more effective in shared decision making.
That’s why, as I said in another comment, I don’t think Freedman’s Discover article is helpful: it just declares there’s a problem, without giving anyone any useful advice.
I agree with you Dave. I would add that we need to rethink RCTs for another reason: they simply won’t scale to the multi-dimensional data and multiple hypothesis-testing that genomics is leading us to.
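To make the multiple-testing point concrete, here’s the back-of-envelope arithmetic, with invented but typical numbers:

```python
# Why per-test p < 0.05 breaks down at genomic scale (invented numbers).
n_genes = 20_000    # hypotheses tested in a typical genome-wide scan
alpha = 0.05        # conventional per-test significance threshold

print(n_genes * alpha)    # 1000.0 genes "significant" by chance alone
print(alpha / n_genes)    # 2.5e-06, a Bonferroni-style corrected threshold
```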
As you know, I’ve formally proposed that consumers get directly involved in research and utilize social networking technologies as a new tool for data collection:
http://www.resoundinghealth.com/static/repurposing.pdf
Lastly, I’m going to have a captive audience of pharmaceutical executives in NYC on January 26
http://www.nypharmaforum.org/index.php?s=79&item=207
and I’d like to use this opportunity to advance the agenda of the Society for Participatory Medicine. Any suggestions that you, Gilles and others provide would be most welcomed.
Terrific, Mark! Gilles is in NY – maybe he can help on the spot. Nobody’s more compelling on this subject than he is, and his grasp of 15+ years of real stories is unparalleled.
You can also cite that on that same day in Washington, FIMDM is discussing shared decision making in their annual Research & Policy Forum (free webcast). I’ll be there for the Society in the morning, then keynoting at the Military Health System’s big conference at the Gaylord.
I’m hoping this will be seen as the year of the patient. Thanks for seeing how Resounding Health can bring SPM to the NY event.
On a related note (being a wised-up consumer of information, e.g. “who paid for the study?”), Twitter buddy @EvidenceMatters cited a post, How to read articles about health, about a 2008 article by Dr. Alicia White.
For corroboration from within the sausage factory, I’ll again cite two seasoned medical editors we’ve quoted here:
He explores what we might do. See Peter Frishauf’s sidebar on reputation systems.
(Our post here.)
All the articles people have cited here shine a light on part of why Smith and Angell despair. (Angell’s book review talked about drug corruption, but today’s links show it’s much more than that.)
Patients and policy people really need to get that our best methods today are nowhere near as good as we’d like to think. It leaves us with that question: what can we come up with – new participatory methods – to improve outcomes?
That is why I wrote “Patient-Driven Research: Rich Opportunities and Real Risks,” which I’ll republish soon.
It is obvious that, as good as medicine can be today, we are all learning that there are many holes in the scientific methods used to generate the knowledge needed to improve outcomes for patients suffering from many diseases.
Saying the Patient is the most under-utilized resource in the healthcare system and not understanding that they MUST be intimately involved in research is really a remnant of the paternalistic view of medicine that we are still facing on a daily basis. Bring the patient communities front and center in the research enterprise and everybody wins. I mean everybody whose ultimate goal is the improvement of care, not the protection of ridiculous professional advantages.
Guilty as charged, Gilles; I confess that I am a relic of the paternalist era. I lack your 15+ years of experience running ACOR before Danny referred me there; I will never catch up. You are ahead of me; you know more than I do. :–)
Continue to teach us. As we’ve discussed, that’s the participatory / empowering thing to do.
Dave,
You are missing the point. It is not a you vs. me thing.
You speak from your own experience. That’s a great thing. You are a very powerful advocate for patient engagement, as we all know. But you represent only one story, one that is easy to listen to and embrace because it has a clear story line and a great ending. But what to make of the hundreds of thousands of stories of cancer patients that do not end this way, instead becoming a medical marathon, with errors made, treatments that don’t work for unknown reasons, or treatments that work and then stop working? For these patients, who represent the vast majority of the cancer world, we must understand that the outcomes are not all based on their personal engagement. The limits of medicine are much bigger than we were led to believe.
You certainly cannot have the panoramic vision that I have acquired over 15 years as a witness to what happens in more than 150 groups. In no way does this fact lessen what you are saying as a patient. We have two different roles, each with its distinct aspects and expertise. I am not (yet) a cancer patient. I would never speak as one.
“But from this moment forward, I’m considering every new scientific finding to be nothing more than a first draft, awaiting replication by someone else.”
I covered this last October in Constant Beta vs. Evidence-Based Policy: Tension in the Data Continuum. As you know, I have been saying for a long while that knowledge is in constant beta. Once you understand that point, you should no longer expect absolute answers, and you will want to maximize your chance of getting optimal care by being intimately involved in it.
I have to get into these details further, but what I have gleaned from a brief review of this, and from my own experiences with the medical community, is how fallible it is and how much they collectively believe their own “bullshit”: researchers, doctors, pharma, etc.
“Don’t presume anything you read is absolute certainty” and “Don’t expect your physician to have perfect knowledge.” Boy, ain’t that the truth, and they usually won’t tell you that.
It is rare to find one who says so; luckily once I did, and that saved my ass (sort of). In my experience, if you go to 10 different doctors you get 10 different answers, and they all believe theirs is the truth. Same with research: statistical analysis enables one to spin just about anything. And why would anyone post negative results? It’s great to get at the raw data, but how hard is that for patients to glean through?
Journals, pharmaceuticals, the FDA, and doctors have an unintentionally (I hope) co-opted relationship. Ask, ask, ask… but sometimes you have to wing it and go with your gut after you have done your own data collection and analysis. Sometimes I get stuck in the who-to-trust wash cycle, and sometimes (most of the time) there is no perfect answer. It is all so imperfect, and we should never expect anything to be perfectly true.
I often reflect on formal logical fallacies: affirming the consequent (if A then B; B is true, therefore A) or denying the antecedent (if A then B; A is false, therefore not B). It is easy to fall into those traps…
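For the logically inclined, a tiny truth-table check shows why both are fallacies; this is just an illustration, nothing more.

```python
# Find the counterexample rows for the two fallacies by brute force.
from itertools import product

for A, B in product([True, False], repeat=2):
    implies = (not A) or B                       # "if A then B"
    if implies and B and not A:                  # affirming the consequent
        print(f"A={A}, B={B}: (A->B) and B hold, yet A is false")
    if implies and (not A) and B:                # denying the antecedent
        print(f"A={A}, B={B}: (A->B) and not-A hold, yet B is true")
```

Both fallacies fail on the same row: A false, B true.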
So what’s the deal with HRT? Is it supposed to be bad now?
Claudia – the “deal” with HRT is that you need to look at the risk/benefit ratio before considering use, and then take some time to talk to your healthcare practitioner. Evolving data from the Women’s Health Initiative Trial demonstrate a definitive link between HRT (combined estrogen/progestin) and increased breast cancer risk. Other data have shown a correlation with ovarian cancer, lung cancer, and deaths from lung cancer. So, numerous associations are starting to recommend that hormone replacement only be used as a last resort and for the shortest time possible. Time from the start of menopause is also important.
I have been writing about hormone replacement and viable alternatives for years on my blog, Flashfree. You might want to check it out – click on “HRT” or “hormones” in the tag cloud. http://flashfree.wordpress.com
And with anything, especially HRT, ask the hard questions. When it comes to women’s health in particular, it’s imperative not to enter blindly into the proposition without doing thorough homework. http://flashfree.wordpress.com/2010/05/05/wednesday-bubble-hrt-ask-the-hard-questions/
“It’ll chill you, too, if you believe the scientific method leads to certainty.”
I would refer you to the classic The Structure of Scientific Revolutions, by Thomas Kuhn.
I’ve now read the New Yorker piece. I must say that it’s really nothing new. The great historian of science, Thomas Kuhn, explained back in 1962 how science, through a process of “paradigm shifts,” corrects and updates itself. This does not invalidate the scientific method, just shows that any method applied by fallible humans, who are subject to biases, cultural influences, institutional forces, etc., can lead to misleading or erroneous conclusions.
To me, the real problem is that our educational systems do not, in general, produce scientifically-literate, critical thinkers at the level of sophistication to make truly informed judgements on behalf of themselves and/or the public good in the highly technological society in which we live.
Agreed! The lack of science literacy is a VERY big problem among the general population, and I would say it is an equally significant problem among the e-patient and doctor populations, who may believe, individually, that they have enough understanding of scientific issues while missing the detailed, in-depth knowledge needed to properly assess results. I believe this is why we will soon realize the limitations of the wisdom of crowds in a medical environment.
I agree re science literacy, Gilles.
But Mark, I’d differ a little – Kuhn didn’t talk about the “decline effect,” did he? (By any other name.) I haven’t read him, only heard about his work, so perhaps he did.
In any case these all reinforce what an apparent house of cards we have, and what a shaky pudding it is when someone asserts “study X said Y, so it’s true and don’t ask questions.”
But what are we to do, when it’s crisis time or time for recommendations? Do we come back to “This is the best advice we have, though nothing’s certain?”
The fact that paradigms erode and are replaced over time might be considered a “decline effect” but Kuhn didn’t use this term. Kuhnian paradigms take place on relatively long time scales (generations) and perhaps what we’re seeing now is a consequence of the exponential growth of knowledge and ultra-rapid communication. In other words, the paradigms are shifting much more rapidly and in a more granular way.
I came away from the New Yorker article wanting to go look up the original literature that Lehrer uses as the basis for his article. I haven’t done this yet, but it does seem to me that some branches of natural science are more susceptible to the decline effect, depending on whether experimentation or observational studies are their primary method.
So true about the science literacy issue.
Yes… nothing is certain. How can we expect it to be so? Science is imperfect. There are too many variables to control in any given study, whether evidence-based, dialectic, or scientific.
We all have to make the best “educated” guess, patient and doctor together. We as patients are not passive conduits of the medical process. I take the position that I am responsible for the choices I make: to take a drug, not take a drug, to have surgery, etc. (unless a doctor was really negligent).
Bad things happen even with the best (gu)estimates.
Science and medicine are constantly evolving as we accrue more info. I am going to use this quote from Gilles: “Knowledge is in constant beta.” It’s so true. If we focused on accepting that, rather than on which system is better, we would get further down the line of care.
Why are we so geared to expect certainty from anything? Especially medicine. Yes, critical thinking skills are essential, but aren’t they anyway? :)
> Why are we so geared to expect certainty from anything?
Well, this again gets to the core of shared decision making, Kent Bottles’ The Difficult Science, practice variation, etc: we-all here are awake about this, but it seems really clear that most physicians aren’t. There’s much work to be done, starting with asking “What the heck do we do about this disconnect between reality and everyone’s view?”
The mere existence of practice variation seems to dynamite the validity of paternalism. If doctor knows best, but other-doctor disagrees, it doesn’t leave much room to assert that either one is right and patients should just do as they’re told.
I suspect we need to reach out to all those physician/nurse ears everywhere, through whatever channels they use, and show them “Everything you were taught about certainty was a mistake.” But boy, is that a tall order. Imagine if someone told you that everything you know about your job is questionable.
That’s why I keep coming back to the idea that our fastest path to real change is to teach / activate / empower consumer/patient/e-patients, and just encourage providers to listen and engage.
Heaven knows this issue hasn’t transformed in the 49 years since Kuhn published his book.
Thank you for shedding light on this really important issue.
A hefty addition to this discussion is in a later post, Tips for understanding studies.
Perhaps a radical idea:
1. I’ve heard that new research is more likely to be funded if it’s on a new subject than if it’s “only” going to replicate an existing finding. (True? No?)
If so, what if we shifted our funding priorities to reflect the issue in this post? What if we deemed it more valuable to firm up an interesting-but-unconfirmed finding?
Is there real value in spewing forth more unconfirmed “first draft” results?
2. What if every database that lists journal articles had an extra field identifying how many times the finding had been replicated? (A rough sketch follows.)
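To make #2 concrete, here’s a rough sketch of what such a field might look like in a literature database record. Every field name here is invented, not any real database’s schema.

```python
# Hypothetical record in a literature database with replication fields.
article = {
    "title": "Effect X in population Y",
    "year": 2011,
    "replication_attempts": 4,      # independent attempts to reproduce it
    "successful_replications": 1,   # attempts that confirmed the finding
}

def replication_status(record: dict) -> str:
    """One-line summary a reader could scan next to any citation."""
    attempts = record["replication_attempts"]
    if attempts == 0:
        return "first draft: never independently tested"
    rate = record["successful_replications"] / attempts
    return f"replicated in {rate:.0%} of {attempts} attempts"

print(replication_status(article))   # -> replicated in 25% of 4 attempts
```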
On Twitter, Russell Faust MD @RussellFaust replied:
These days, one cannot find funding to replicate studies – nobody will pay to do a study that has been done. So nothing is replicated. … This has been a crit of med/sci funding for at least a decade. There are more crits.
He was on his BlackBerry; he said he’ll visit when he’s online.
Wow, what a great comment thread, and what a great site!
Thanks for the invitation to comment, Dave.
It is a sad fact that, whereas the scientific method demands the testing of hypotheses, the reporting of our findings, and the verification of those findings (that they can be reproduced by others), the current funding model demands NOVEL studies. It is not possible to obtain funding, from NIH or other sources, to reproduce studies that have already been published. The fiscal realities of science funding demand more “bang for the buck,” and paying anyone simply to reproduce someone else’s experiments is considered low yield.
Overall funding of new grant applications to NIH is at an all-time low: first-time R01 applications are being funded well below 10%! Think about how demoralizing that is for aspiring physician-scientists early in their careers (or at any time). One must be pretty confident (arrogant?) to believe that one will be in the top few percent of the applicant pool, a pool of physician-scientists that is pretty intimidating overall: smart, accomplished, with amazing creds.
In sum, it is a GREAT idea to add information about reproducibility to our valuation of scientific and medical journal articles. The sad reality is that nobody will pay for that reproducibility.
Russell Faust, PhD, MD, FAAP
Russ, thanks for responding to my Twitter suggestion that you visit us here. Pleased to meet you.
So: we here are constantly looking at current reality and wondering what it’ll take to create a new tomorrow that works better. I’ll ask some hard questions here, re your “the current funding model demands funding of NOVEL studies.”
1. Am I correct in guessing that the underlying assumption is that novel studies will broaden how much we know?
If so, this whole discussion suggests that it’s not working as intended: it’s like building a bigger and bigger deck out of shaky timbers. And it might be appropriate to re-examine whether the model is getting us where we wanted.
2. Where are the major sources of those funds – the people who approve the funding? Government, foundations, others? It would be great to engage those people in this discussion.
(Please, everyone, note that I’m not trying here to indict anyone or anything. The creative process and the scientific method both involve trying something in a universe of uncertainty, seeing how it plays out, and adjusting. Create & adjust, create & adjust. Here, I’m asking if we’ve located an essential “pivot point” where a policy change might lead to more effective results.)
Here is an article about the statistics game, found via @rlanzara, in Newsweek: http://www.newsweek.com/2011/01/23/why-almost-everything-you-hear-about-medicine-is-wrong.html
“….The situation isn’t hopeless. Geneticists have mostly mended their ways, tightening statistical criteria, but other fields still need to clean house, Ioannidis says. Surgical practices, for instance, have not been tested to nearly the extent that medications have. “I wouldn’t be surprised if a large proportion of surgical practice is based on thin air, and [claims for effectiveness] would evaporate if we studied them closely,” Ioannidis says. That would also save billions of dollars. George Lundberg, former editor of The Journal of the American Medical Association, estimates that strictly applying criteria like Ioannidis pushes would save $700 billion to $1 trillion a year in U.S. health-care spending.”
In response to Alexandra Alin:
It is entirely correct that the majority of surgical procedures performed by the various surgical specialties (my own included) have little data to support their use. Procedures are done based on anecdotal experience, and on “standards of care” that have evolved over time.
That is not to say that they are being done inappropriately, or with malicious intent. It is simply that, especially for small surgical specialties, there has been so little coordinated effort to accrue “outcomes data” to demonstrate efficacy, even where efficacy exists.
For smaller surgical specialties, accumulating the numbers of patients that are required for meaningful statistical analysis requires multi-clinic / multi-hospital clinical trials. That represents its own challenge. Many, perhaps most, clinical studies being carried out in surgery are now focused on assessing efficacy – positive outcomes.
Of course, immediately eliminating all procedures for which there are not yet published outcomes data might save hundreds of billions of dollars. That doesn’t necessarily mean that doing so is the right thing to do. That action would also have tragic results, as many patients who have benefited from those surgeries can attest. They have experienced positive “outcomes” one person at a time.
There is no easy solution.
Russell Faust, PhD, MD