Update 1/22: this was originally in our “Found on the Net” sidebar, but it’s attracted enough comments that it belongs in the mainstream.
I was researching the coverage of statins on Health News Review, the great e-patient resource we’ve often covered, and I stumbled on their page Tips for understanding studies. Good: “does the language fit the evidence?”, “absolute versus relative risk,” “number needed to treat,” etc.
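Since “absolute versus relative risk” and “number needed to treat” are exactly the kind of arithmetic that trips people up, here’s a quick back-of-the-envelope sketch in Python (the event rates are invented purely for illustration, not taken from any real study):

```python
# Toy numbers: absolute vs. relative risk, and number needed to treat (NNT).
# The event rates are invented for illustration, not from any real trial.

control_event_rate = 0.04   # 4 in 100 untreated patients have the bad outcome
treated_event_rate = 0.03   # 3 in 100 treated patients have the bad outcome

arr = control_event_rate - treated_event_rate   # absolute risk reduction: 1 percentage point
rrr = arr / control_event_rate                  # relative risk reduction: "25% lower risk"
nnt = 1 / arr                                   # patients treated to prevent one event

print(f"Absolute risk reduction: {arr:.1%}")    # 1.0%
print(f"Relative risk reduction: {rrr:.0%}")    # 25%
print(f"Number needed to treat:  {nnt:.0f}")    # 100
```

Same imaginary trial, two honest descriptions: “cuts risk by 25%” and “helps one patient in a hundred.” That gap is exactly what the tips page is warning about.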
I was amazed to learn recently that many (most?) physicians get no advanced training in med school on how to critically evaluate whether the correct statistics were used in a paper, or even how to understand what they mean. That would explain why Gigerenzer et al found that most doctors got it wrong (we reported on their great 2008 paper). Turns out the same is often true for journal editors and peer reviewers!
Lesson: e-patient, fend for thyself.
I’m not surprised to learn this…both my master’s and doctoral courses have focused on critically appraising research. These include courses in both qualitative and quantitative methods. I would imagine that courses with this much detail would be challenging to fit into what is already a full medical curriculum.
I think there is an unrealistic expectation on the part of the public that anyone who has earned an MD is an expert in everything.
This is a great resource. Thanks for passing it along.
My additions to this fine article: As Dave has pointed out before, if it is research that affects you, get the full article, not the abstract and certainly not the news piece about it. Then, read the introduction. Figure out what question they asked. Read the methods. Think about whether the experiment they did answers the question they asked. Think about whether the people they tested it on represent real patients. Read the results. Think about whether the data they collected actually supports the conclusion they want to draw.
The key questions to ask:
1) did they ask the question before they collected the data or after? Many studies these days are analyses of data collected a while ago. There are many problems with this. First of all, the question they asked may come up from looking at the same data. A lot of experiments look like this: When I notice X, I screen for Y. When I don’t notice X, I don’t screen for Y. I hypothesize that X leads to Y. I did chart review, and found that my patients with X had a higher incidence of Y. This is called a retrospective (“looking back”) study and they are most frequently bullshit. In a prospective (“looking forward”) experiment, we ask the question first and then design the data collection so that if the hypothesis is false, we will see it. Retrospective analysis helps you ask questions and design prospective experiments.
2) Did they include the right people in the study? I recently read a paper where they tested whether a drug or surgery worked better on patients who came into the office because the drug wasn’t working. Duh. Wrong people. Do the people in the study seem similar to you? If not, wonder why.
3) Look at the “p” values. A p-value tells you how likely it is that an effect this strong would show up by chance alone. These days, a p-value of 0.05 isn’t enough. People will run too many experiments that they don’t report. I saw a presentation where a research compared how many trials were registered with the FDA versus how many were published. A p-value of 0.05 means that there is a 1-in-20 chance that the effect seen was random. In this comparison, 15 trials were registered with the FDA and one was published. Since positive findings are more likely to be published, suddenly that one study seems a lot more like a random positive than a conclusive finding. You can’t always know how many negatives there were that weren’t published.
4) Most importantly, see if the authors seem skeptical. Look for things that the authors say in the discussion that you think seem weakly supported in the results. If the authors seem to be drinking their own kool-aid, you need to be skeptical for them.
A couple of thoughts on a Friday night. As a funder of research, I am professionally skeptical.
Wellll, Peter, I’m sure glad we caught you in this mood on this Friday night. :–) What a fine piece of writing!
I’m going to spiff this up with some boldface and a couple of word additions, because I almost want to make a poster of it:
======
The key questions to ask:
1) Did they ask the question before they collected the data or after? Many studies these days are analyses of data collected a while ago. There are many problems with this.
First of all, the question they asked may come up from looking at the same data. A lot of experiments look like this: When I notice X, I screen for Y. When I don’t notice X, I don’t screen for Y. I hypothesize that X leads to Y. I did chart review, and found that my patients with X had a higher incidence of Y. This is called a retrospective (“looking back”) study and they are most frequently bullshit. In a prospective (“looking forward”) experiment, we ask the question first and then design the data collection so that if the hypothesis is false, we will see it. Retrospective analysis [only] helps you ask questions and design prospective experiments.
2) Did they include the right people in the study? I recently read a paper where they tested whether a drug or surgery worked better on patients who came into the office because the drug wasn’t working. Duh. Wrong people. Do the people in the study seem similar to you? If not, wonder why.
3) Look at the “p” values. A p-value tells you how likely it is that an effect this strong would show up by chance alone. These days, a p-value of 0.05 isn’t enough. People will run too many experiments that they don’t report. I saw a presentation where a research[er] compared how many trials were registered with the FDA versus how many were [eventually] published.
A p-value of 0.05 means that there is a 1-in-20 chance that the effect seen was random. In this comparison, 15 trials were registered with the FDA and one was published. Since positive findings are more likely to be published, suddenly that one study seems a lot more like a random positive than a conclusive finding. You can’t always know how many negatives there were that weren’t published.
4) Most importantly, see if the authors seem skeptical [about their conclusions, as good scientists always are]. [And] look for things that the authors say in the discussion that you think seem weakly supported in the results. If the authors seem to be drinking their own kool-aid, you need to be skeptical for them.
========
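To put a rough number on Peter’s point 3, here’s a minimal sketch of the multiple-testing problem (purely illustrative; it assumes the trials are independent and the treatment truly does nothing):

```python
# If 15 independent trials are run on a treatment that does nothing, how likely
# is it that at least one of them crosses p < 0.05 just by chance?
# Purely illustrative; assumes independent trials and no real effect.

alpha = 0.05          # conventional significance threshold
n_trials = 15         # e.g., trials registered but mostly never published

p_no_false_positive = (1 - alpha) ** n_trials
p_at_least_one = 1 - p_no_false_positive

print(f"Chance of at least one 'significant' result by luck alone: {p_at_least_one:.0%}")
# -> roughly 54%
```

So with 15 tries, a lone “significant” result is more likely than not even when there is nothing there at all, which is why that single published trial deserves the extra suspicion.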
All this seems apt, on top of this week’s post (and dozens of comments) about the New Yorker article that said most published findings are never reproduced by another researcher … in particular, catch the example at the end, John Crabbe’s tightly controlled experiment in which one of the three groups showed a seven-times-greater effect than the others.
I keep wondering where it leaves us. Surely it means none of us can take any published paper at face value just because it was published. But patients in trouble still need to choose treatments. And all this applies to non-crisis recommendations, too, like screening tests and whether to use statins.
What a great discussion, and thanks for the invite, Dave! So, Pete has made some valid points, but I am afraid that some of them are a bit overstated.
First, not all retrospective studies are trash. A corollary to that is that not all prospective studies are wonderful, not even randomized controlled trials. It all depends on the question and the care with which the study was carried out.
Second, the issue of generalizability and selection bias, as addressed in Pete’s point #2, is real, and needs to be understood by the reader. But there are some nuances to it. Just because a study was done in a particular population does not exclude the biologic plausibility of getting similar results in a different population. This also has to be evaluated cautiously.
As for the p-value, this is a hugely misunderstood tool, and I have written extensively about its uses and pitfalls on my blog, so, please, read it if interested.
I agree that a certain degree of skepticism from the authors is needed. However, the degree of skepticism will be proportional to a). how much work these authors have done in the area, and b). how much evidence supports their conclusions.
In short, there are no shortcuts to understanding how to read this complicated literature. I have an ongoing series on my site on how to review medical literature (it is only about half done), and am contemplating putting together a webinar on the subject, if there is interest. You can access the series and the survey re: your interest in a webinar here (start at the bottom):
http://evimedgroup.blogspot.com/search/label/reviewing%20lit
“I was amazed to learn recently that many (most?) physicians get no advanced training in med school on how to critically evaluate whether the correct statistics were used in a paper, or even how to understand what they mean.”
Dave,
You are right on about physician mis-application of statistics and the above comment about medical school (introductory school). We do get intense exposure, however, to reviewing the medical literature and statistical interpretations in residency training. Over and over and over, BUT still the reality of the Medical Industrial Complex overwhelms most physician brains into decision making that aligns better with Wall Street than with our patients. You are wise to push for numerical literacy at all levels, especially patients, to try to remind us of why we went to medical school in the first place. To help people, not shareholders. Thanks for your good work!
I gotta ask, Dr. Synonymous – there must be a story connecting that handle to the other famous Ohio family physician, Dr. Anonymous. Dish, please.
Thanks for explaining the training you got on critically analyzing statistics during residency. I’ll take a shot in the almost-dark and guess that there’s lots of variation in what people get, because in my limited travels more than once a doc has said “Well, we never got trained at all on how to assess whether the right statistics were used.”
I’m not in a hangin’ mood here, not even indictin’ … I only get thrills from illuminating the unrealized. So I’m curious, what do others know?
As always, my goal is to improve everyone’s ability to find their way through the murk and land at the best plan of action for the patient’s health.
Dave, if I may stick my two cents in here… I know that the statistics are the most visible and deplored aspect of any study, but many are not even taught to appraise whether the study design is appropriate to the question, or even to know how to identify the research question/hypothesis. Certainly this is not true for everyone, but judging by some reviewer comments I get on my papers and by some of the papers I get to review, the situation is pretty serious.
Marya, you just echoed one of the final points in the New Yorker article: that the issue of replicability masks the deeper problem of bad study design.
I get it: if the study as designed isn’t rigorously testing what it thinks it’s testing, the analysis of the results won’t matter much.
So where do we fix that? I see it as pretty tough (a long project) to alter the vetting skills of either the people who approve & fund project proposals or the journal editors everywhere… not to mention trying to re-educate all the docs in the world.
Meanwhile we-all (pts & docs) need advice we can use, now, this week, this year.
I keep thinking a reasonable first step, a catch-all for many ills, is to ask if the study’s been replicated. Worth something?
Dave,
Each learner makes their own decisions about their response to the information about statistical analysis and critical review of the medical literature. Some “Journal Club” discussions can get very intense. The information to which your post refers is presented to all family medicine residents in different forums for three years. It is not a one hour lecture that one can sleep through and then allege ignorance. You are right on about variation. The info isn’t what climbs to the top of the doctor’s thought processes, however. Time pressures and patient pressures may overwhelm analytic opportunities in residency training. Dogma rules initially, in spite of the best of intentions. Later, the 2% bonuses from Anthem (and other managed care companies) for use of statins, ordering mammograms, prescribing generic drugs, using EMR, online prescribing, etc. (up to 12% in all) may overshadow analytic thinking from time to time. Eventually, most primary care doctors are employed by entities (hospitals, etc.) and driven by quotas and time limitations to lower thresholds for testing and referral to subspecialists, which drives up costs. The PCMH is supposed to cure primary care and get the communication time back to better clarify risks and benefits with patients. We’ll see.
p.s. Dr Anonymous is my main social media mentor and friend thru Ohio AFP. (I’ve been Synonymous, tho, since 1968; another story)
Dave, replication can be an encouraging sign. However, if you look at the HRT story, despite reams of replication, a definitive RCT showed the opposite of what we had thought was true. And even now it is still unclear what the true story is.
In the end, we all need to get a better handle on a). the amount of uncertainty inherent in a piece of information, and b). how we feel about the trade-offs offered by this uncertainty. This is where each of us needs to develop enough literacy to know how to interpret this information, or at least what questions to ask our clinicians and what answers to accept.
A tricky issue seems to be emerging:
> In the end, we all need to get a better handle on the amount of
> uncertainty inherent in a piece of information
Does this interfere with the idea of evidence-based medicine?
For those who don’t know, evidence-based medicine (EBM) “aims to apply the best available evidence gained from the scientific method to clinical decision making.” [Wikipedia]. As I understand it, it arose as a reaction to the discovery of practice variation, which we discussed a month ago: doctors in some regions do some operations several times more than others, and for the most part docs don’t realize they’re doing it. Since every surgery or treatment has risks (not to mention costs), EBM encourages people to base their decisions on evidence. But if the evidence is shaky, where does THAT leave us?
Here’s my guess: the pivotal phrase is “best available evidence.” In other words, uncertainty does mean things aren’t certain – but that doesn’t change the value of basing decisions on evidence rather than local custom.
And to me that heightens the importance of fortifying how we use the scientific method.
I have been making snarky comments about evidence-based medicine for a while now because it is used by many to obfuscate the reality and help perpetuate the fallacy that doctors know more than they do. Real changes must be made in medical education to transform the self-image of young doctors, who should embrace living in an era where not knowing is no longer considered a sign of weakness and lack of expertise.
In many cases, the best advice our trusted medical advisors should provide us is by defining the current limits of our understanding of a given condition or procedure. I must know about the lack of evidence just as I need to know about the evidence on which decisions regarding my body are made by others.
It is worth repeating what was said just a few days ago: most doctors are NOT scientists. In other words, they cannot properly assess the value of studies.
Richard Smith wrote a great editorial about it in 2004, when he was still editor-in-chief of the BMJ.
The editorial is still a must-read for anyone, IMO, and since it was published as an Open Access editorial, you can read it in full here (http://www.bmj.com/content/328/7454/0.9.full).
Last year, Richard Smith added to his opinion in an article in the Journal of Participatory Medicine entitled “In Search of an Optimal Peer Review System”: “After 30 years of practicing peer review and 15 years of studying it experimentally, I’m unconvinced of its value. Its downside is much more obvious to me than its upside, and the evidence we have on peer review tends to support that jaundiced view.”
There is, indeed, enough evidence now to surmise that most articles published in peer-reviewed publications are flawed, sometimes deeply. This should come as no surprise and be considered the new normal. The situation will, IMO, get much worse before it gets any better. We are going to have to deal with an enormous issue of data integrity as our ecosystem becomes integrally networked. Just in our little corner of the Networked Universe I can already see that the lack of critical thinking is making people swallow results of studies regarding the use of the Internet without ever questioning the integrity of the data sets used, the methodologies applied, and the motivation of the researchers. Bad science is creeping up at every corner. Bias runs rampant.
In consequence, it is obvious that autonomous patients must verify EVERYTHING their doctors tell them, just as we must critically assess every result of any study we intend to use to pursue our efforts, in any field. I am the CEO of my own body. I take advice from a diverse group of trusted advisors but I (or a proxy I have nominated) must be the one making the final decisions regarding my medical care.
I pretty much agree with all the observations here but just wanted to add a minor detail – often the letters to the editor about a published article will point out perceived flaws in study design or conclusions. Of course, these letters appear in subsequent issues of the journal, so one must practically be a subscriber to see them – but I have found them useful.
Otherwise, good comments – yes, I believe that most physicians either have not been trained in, or do not utilize, analytical skills in reviewing the literature. This may not be so true in academic centers, where arguing over the literature used to take up half my day on rounds in the 70’s. Dunno what they do now….
Great post/comments. However, I also disagree with the fundamental assertion that many/most physicians don’t get advanced training in statistics – at least during the last 10 years or so. Here and in the UK, medical schools must include that type of training. The same holds true for pharmacy schools here. Admittedly there is variation in how it is implemented. Sometimes it is as a standalone stats course, in others it is more longitudinally embedded in the curriculum. Where I did my fellowship after pharmacy school, our director actually created an entire two-semester, 6 credit EBM course. At my current college I co-taught an entire course specific to drug literature evaluation for several years. I do recognize aspects of these may be a bit rare.
Even though I disagree with your why (i.e. that they aren’t taught), I fully accept your what (i.e. many clinicians are ill-equipped to critically evaluate medical literature). To me there are three main reasons for this: 1) often no practitioner (be it physician or pharmacist) is involved with the respective stats course; involving one may improve results (http://bit.ly/gwi8sn). 2) Students may fail to recognize the relevance of the subject at the time they get the course (http://bit.ly/eHz7eM). 3) Literature evaluation is a skill. Like any skill, it must be practiced to stay sharp. For the reasons listed by various commenters here, that skill frequently goes unpracticed.
As Marya mentioned, even basic elements of study design can be a source of confusion. I am currently in Philadelphia to talk yet again about non-inferiority trial design. I have yet to give a presentation on the topic where over half the audience (including physicians, pharmacists, and PhDs) can correctly define the purpose of non-inferiority trial design. Now that result *is* attributable to the fact they likely didn’t get it in school. It’s really only been popularized in the last 10-20 years after the FDA expressed concern about Biocreep (and as recently as November 2010 FDA issued even more updated guidance).
Thanks, Kevin (and all). For what it’s worth, I wasn’t the one who said it – I heard it from two different docs, neither one elderly, and one trained at a big-name medical school.
Might it be more accurate to say “One shouldn’t assume their doc was trained / is skilled in critically evaluating whether a particular study was designed well and the evidence evaluated astutely”?
(Reminder to self: the relevance of this for engaged patients is that there’s a time in any treatment incident where options are considered and evaluated; patients commonly ask the docs “What do you recommend?” and we just need to realize that it can be worthwhile to ask questions about the reasons.)
btw, what the heck is non-inferiority trial design? Never heard of it! Link?
Great comments from everyone! I have to agree with Kevin that this is just not a part of everyday practice, and, as Dr. Synonymous aptly explained, is overshadowed by other more pressing quotidian considerations. And you do not have to pull out such somewhat obscure trial designs as non-inferiority (Dave, for statistical considerations, without comparing an intervention to a placebo, it is impossible to say whether it “works”. So, in cases where a placebo is not reasonable, the non-inferiority margin is established around the known effect of the comparator drug, within which the new intervention has to fall in order to show that it is “at least as good” as the comparator. There are many issues with this design, but it is what has caught on in the regulatory universe here and in the EU).
I also agree with the idea of a “board of advisors.” Ultimately, people just need to know what questions to ask and what kinds of answers to settle for. This can get a little uncomfortable within the patient-doctor encounter, and we must continue to work hard to change this culture of supremacy to one of collaboration.
Dave,
I think your revision of ‘don’t assume your doctor is skilled in critically evaluating medical literature’ is pretty spot on.
The purpose of a non-inferiority trial is actually to demonstrate that a drug is not worse than the active control (drug) *by more than a pre-specified amount* (that ‘amount’ being the non-inferiority margin). So the most common misunderstanding about non-inferiority is the erroneous assumption that it means ‘as good as’ or ‘equivalent’. It does not. It *can* be worse. For example, up to 10-20% worse with anti-infective drugs.
I would further suggest that it is not an obscure design. For instance, in the recent GAO report they found that a quarter of all NDAs for anti-infectives included non-inferiority trials as supporting evidence (http://bit.ly/gVgpcN). As far as suggested links, two of the best OA papers on the topic are this primer (http://bit.ly/fOI3zX) and this primer/review of a sample of 232 NI trials (http://bit.ly/h6TijK). For a quick overview, here is my most recent on this topic (http://slidesha.re/e0dBN6).
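For anyone wanting to see what that pre-specified margin looks like in practice, here is a minimal sketch with entirely made-up numbers (not from any real trial), using the simplest possible analysis, a normal-approximation confidence interval for the difference in cure rates:

```python
import math

# Toy non-inferiority check with invented numbers, not from any real trial.
# "Non-inferior" here means: the new drug may cure fewer patients than the
# active control, but not by more than the pre-specified margin.

margin = 0.10                     # NI margin: up to 10 points worse is tolerated
cures_new, n_new = 490, 700       # new drug:       70% cure rate
cures_ctrl, n_ctrl = 525, 700     # active control: 75% cure rate

p_new, p_ctrl = cures_new / n_new, cures_ctrl / n_ctrl
diff = p_new - p_ctrl             # -0.05: five points worse on its face

# Normal-approximation 95% confidence interval for the difference in proportions
se = math.sqrt(p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"Difference in cure rates: {diff:+.1%} (95% CI {lower:+.1%} to {upper:+.1%})")
if lower > -margin:
    print("The whole interval stays above -10%, so the trial is declared non-inferior,")
    print("even though the new drug may genuinely be somewhat worse than the control.")
else:
    print("The interval crosses the margin: non-inferiority was not shown.")
```

Note that in this toy example the new drug is actually statistically worse than the control (the whole interval sits below zero), yet it still clears the 10-point margin and gets labeled “non-inferior.” That is exactly the gap between “non-inferior” and “as good as.”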
Kevin, thanks for the link to your (as usual) stunning PowerPoint about these trials. Seriously: do you create those slides yourself? They’re way gorgeouser than anyone else’s slides. (I know that’s not a word.)
The content is great, too.
For us rookies: NDAs is New Drug Applications, right? (In the business world it’s non-disclosure agreements.)
Is there a registry somewhere of drugs that were accepted and failed, like the Ketek in your slide 18? It would also be useful to know of outright fraud, such as the Seroquel cherry-picking episode two years ago.
My desire for such a list isn’t scandal, it’s to provide teachable / learnable lessons, so everyone can be a wiser consumer of information…
I read Kevin’s post carefully and felt compelled to clarify what I said about non-inferiority. While it is true, as Kevin points out, that the point estimate and the 95% confidence interval for the agent under investigation may fall below the point estimate for the active control, the entire 95% confidence interval for the effect size of the former has to fit into that of the latter. Given that the 95% CI captures the uncertainty in any estimate, falling into this 95% CI in my mind constitutes being “at least as good.” I am sorry to get into the weeds here, but I am a great believer in the importance of expressing uncertainty around any estimate.
To Kevin’s point, however, one of the theoretical (and perhaps real) issues is that, if the point estimates for the new agents get lower, once they are used as the active comparators, we will end up with a bunch of products that really are inferior to the original active comparator. And this I think may be the crux of the objections to the non-inferiority designs. At least this is how I understand the issue. Always happy to learn more, though. Thanks.
So, Kevin and Marya, I just want to note that you guys have gone WAY over MY head on this. I kinda comprehend what you’re talking about, but there’s no way I’d be able to sniff out a glitch here: “Hey doc, are you SURE the confidence intervals were properly aligned on this non-inferiority study??”
Is this in the category of a fine point? I’m all for fine points, but what’s the likelihood of a serious error in judgment (e.g. choice of wrong therapy) of this sort?
I’m wondering if this is the level of critical analysis that editors should do – or even better, study designers and funders.
Dave – thanks much for the comments about the slides. Yes I create them. Combination of studying science behind process, applying cognitive load theory, and actively soliciting ways to get better. Yes on new drug application. CDER used to track withdrawn drugs, I have not looked recently, but imagine list is still maintained.
Marya/Dave – the issue of the CIs can be considered fine point and we may be drilling down a bit much for the purpose here. Marya, I get the interpretation about the entirety of the interval and even took two shots at depicting graphically on Slides 30 & 31. At some point we are talking about an estimate of an estimate and it gets very gray.
Perhaps for the current purpose, the issues can be broadly summarized via this quote from JAMA: “non-inferiority and equivalence trials present particular difficulties in design, conduct, analysis, and interpretation” (http://bit.ly/ih7iwL). I’ll just focus on the interpretation piece as it is closest in spirit of Dave’s original post. One of the essential components for interpreting results of an NI trial is evaluating the method of NI margin generation. Yet, the PLoS mega-review found 22% of NI articles justified the NI margin from “investigator’s assumption” and that another 54% were simply unclear as to the method used. So if you have to be able to assess the margin to evaluate the trial…you get the idea. There are a LOT of challenges. This is part of why NI trials are under increased scrutiny from the FDA.
Kevin, yes, great summary! I agree that defining the NI margin is the most dicey step. In general, these designs are mostly used for regulatory purposes, so FDA is the big vetter of the data. Closer to home (or to Dave’s questions), I am not entirely sure that pharma knows when they should be running a NI or a superiority trial comparing their wares to competition. This becomes more important for the editors, the reviewers and the readers to judge. However, I am not sure that there is enough skill on that end to know whether the study was designed properly and whether conclusions are trustworthy. So, while seemingly rather obscure, the NI design should at least be in the general awareness category for e-patients, and perhaps they should know that when 2 modalities are being compared to one another without a concurrent placebo, superiority or equivalence claims need to be scrutinized closely.
How about a “Research Scrutiny” blog or community where people can discuss such questions?
Dr. Synonymous, somehow I missed your great response last Saturday about how docs are trained and then how they’re affected by the various pressures of daily life. Thanks. (I recommend it for anyone who didn’t catch it.)
Again, to bring it back to the e-patient take-away, it seems wise for anyone, considering any treatment decision, to at least get a second opinion – perhaps even from a different part of the country, but at least from a doctor at a different hospital. But if my life or limb were at stake, I’d want to dig into the research as deep as my mind (and my friends’) could take me.
Nice discussion! I think we need to back off the complexity a bit, as this is a topic debated by patients, researchers, and journal editors. My original comment was overly simplistic because it was intended for e-patients, not journal editors!
The best way to fake sophistication is to be skeptical, and the hardest thing in research is to recognize that a questionable study is valid.
I recommend skepticism! Don’t trust research to prove a therapy works where any of the investigators are also on the patent. Don’t believe research funded by the company that makes the drug. Don’t clamor for a new drug until you’ve read the FDA filings: Mark Helfand in Oregon broke the Vioxx story by simply reading the FDA filings.
Ah, my dinner arrived, more later.
“Don’t trust research to prove a therapy works where any of the investigators are also on the patent. Don’t believe research funded by the company that makes the drug.”
Pete, don’t you think that this is a “baby with the bath water” shortcut? A study should be evaluated on its merit, blind to who conducted or paid for it, IMO. (My disclosure is that I have received funding from manufacturers to conduct epidemiology, health services and outcomes studies).
Marya- Cognitive bias is important to consider when evaluating a study. People who invent things are more likely to suffer from confirmation bias. I’ve blogged on this. There certainly are people who can objectively assess their own work, and, of these, a subset would act against their own financial interest by publicizing negative findings, but how is the non-expert to identify them? History is full of scientists who attempt to establish a hypothesis by searching for confirming evidence but not also demonstrating an absence of evidence that could serve to disprove it.
We could perhaps meet half-way by agreeing that one can certainly trust negative findings by those financially conflicted. :-)
While I am all for patients being in the driver’s seat, for the overwhelming majority of patients, reading the medical literature would be a perfect example of patients ineffectively navigating medicine’s murky waters. If you really want to get so into the weeds, I think subscribing to UpToDate would enhance your knowledge base exponentially, and in a much more efficient manner than trying to read the medical journals. I think patients would be much better served focusing on the basics, including pursuing recommendations re diet/exercise, than to be spending their time in a library perusing medical journal articles.
Hi Dr. David –
I may have been unclear – Health News Review is about health news stories in the mass media.
The issue of who can understand medical literature is a different can of worms. :-) I’m out at dinner so I can’t pull up a link easily but in our site search look for “statistical illiteracy” – Gerd Gigerenzer et al’s excellent 2008 paper – and “the decline effect.” I’m as interested as anyone in good reliable evidence but it seems increasingly clear that it’s hard to find, and that savvier patients can learn to ask useful questions.
Good to meet you –
Dave
So Dr. David’s attitude, while I’m sure it is well meaning, is the problem we are trying to address. While Edward Jenner, MD, is credited with the smallpox vaccine, Benjamin Jesty, a local farmer, used cowpox to protect his family from smallpox before Jenner started his work. Not all medical breakthroughs come from physicians, and, like Jesty, non-physicians are often inspired by a personal connection to a disease.
While research in medicine is as murky as science gets, non-physicians like Andy Grove and Sergey Brin are adding a lot of sophistication to research in Parkinson’s disease. I am not a physician and yet I have a patent on a total hip replacement, and a bunch of papers, abstracts, and talks on medical topics, not to mention veterinary surgery and financial topics like derivatives. I, too, am a patient, and my condition lies far from my own areas of expertise.
Having earned degrees from, and held important roles at, several of the nation’s top universities, I don’t feel that the average physician has some special intellectual capacity that I do not (a feeling likely shared by many readers of this blog), and yet our system typically assumes that, for example, a dermatologist is better suited to understand a new therapy in, say, cancer than a physicist. In my experience, this is not a given: a friend of mine is a biophysicist, and understands biology pretty well, radiation extremely well, and what happens when the two mix better than just about anyone.
Physicians are more engineers than scientists: like engineers, they have a set of tools they use to manage a set of problems they are comfortable with. When an opportunity for science comes up, when a paradigm change occurs because of a new discovery, many physicians (but not all) are not up to the task of figuring out which articles are good and which are bullshit. This is the situation where a smart patient should keep on top of the research.
I put on a meeting recently where a patient in the audience asked a question that indicated that her physician was about 10 years behind on the research on the link between melanoma and Parkinson’s. Anyone reading abstracts on PubMed could have set this doc straight, but none of his patients were. Patients shouldn’t treat themselves, but they should be knowledgeable participants in their care. This can be constructive, as it was in e-Patient Dave’s case, or defensive, as it was at my meeting.
I have a greater interest in my good outcomes than anyone else, and I strongly object to any suggestion that I shouldn’t educate myself. I’m willing to help guide anyone who wants to do the same to do so.