Updates: The text below the video was added later on 4/17, and the graphic was added 4/18.
For me the evidence highlight of TEDMED last week was a talk by Ben Goldacre MD (@BenGoldacre), a charming and articulate doctor who’s dug deeply into what seems to be a scurrilous business: the suppression of evidence that doesn’t favor the drug being studied. So he’s undertaken a project to dig up all the clinical trials that were registered with the FDA and find the ones that never got published.
Think this is nit-picking? Watch this 6-minute hallway chat I had with him, late on the last night of the conference. He calls it “The cancer at the core of evidence-based medicine.”
The video is hand-held-iPad shaky – forgive me, ignore it, just listen to the audio.
Example: in a survey of all 74 antidepressant trials ever submitted to the FDA, about half (38) had positive outcomes – but only 40 papers were published, and 37 of those were positive. Of the 36 trials with negative outcomes, only 3 were published.
So here’s how that boils down: (Graphic added 4/18)
[Added 4/18] At right is a calculation from GraphPad.com of the probability that this pattern is random: p < .0001 (less than 1 in 10,000), “extremely statistically significant.” (See the discussion of realist “Fat Tony” in this comment.)
That much skewing is an extreme example, but Goldacre says only about half the trials that are conducted are ever published, and positive outcomes are twice as likely to be published as unfavorable ones (p = .0392). [End 4/18 addition]
- Inhibits patients from getting good information
- Inhibits doctors from having good information, too, with which to treat their patients
- Makes an absolute mockery of evidence-based medicine
- As Goldacre points out, it’s a severe disservice to the patients who put themselves at risk by being in a clinical trial to help improve the body of scientific knowledge.
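As a sanity check, the “p < .0001” figure from GraphPad can be reproduced with a Fisher exact test on the 2×2 table implied by the trial numbers above. The table layout below is my reconstruction from those numbers, and the sketch uses only the Python standard library:

```python
from math import comb

# 2x2 table reconstructed from the antidepressant-trial numbers above:
#                    published   unpublished
# positive outcome       37           1       (38 positive trials)
# negative outcome        3          33       (36 negative trials)
POS, NEG = 38, 36        # row totals
PUB = 40                 # column total: papers published
TOTAL = POS + NEG        # 74 trials in all

def p_table(k):
    """Hypergeometric probability that exactly k of the 40 published
    papers report positive trials, given fixed row/column totals."""
    return comb(POS, k) * comb(NEG, PUB - k) / comb(TOTAL, PUB)

observed = 37
cutoff = p_table(observed)

# Two-sided Fisher exact test: sum the probabilities of every table
# at least as extreme (i.e., no more probable) than the observed one.
k_lo, k_hi = max(0, PUB - NEG), min(POS, PUB)
p_value = sum(p_table(k) for k in range(k_lo, k_hi + 1) if p_table(k) <= cutoff)

print(f"two-sided p = {p_value:.1e}")  # far below the .0001 threshold
```

The expected count of positive papers under random publication would be about 20 of 40; observing 37 is many standard deviations out, which is why the p-value is so extreme.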
I’d welcome thoughts from all, especially clinicians, on how to spread awareness of this. Goldacre says he’s going to start AllTrials.org and will plumb the database of all trials ever registered, to find and publish their outcomes.
There’s more in his TEDMED talk than in this 6-minute video, but for now this is a start.
Related posts on this blog:
ORLANDO, Feb. 27 — Newly unsealed court documents suggest that AstraZeneca tried to minimize the risk of diabetes and weight gain associated with its antipsychotic drug quetiapine (Seroquel), in part by “cherry-picking” data for publication. … lawsuits by some 9,000 people who claim to have developed diabetes while taking the drug. … a 1999 e-mail indicating that the company had “buried” three clinical trials and was considering doing so with a fourth. … Another … discussed ways to “minimize” and “put a positive spin” on safety data from a “cursed” study— one of those later described as “buried.”
A recurring theme on this blog is the need for empowered, engaged patients to understand what they read about science. It’s true when researching treatments for one’s condition, true when considering government policy proposals, and true when reading advice based on statistics. If you take any journal article at face value, you may be severely misled; you need to think critically.
Sometimes there’s corruption (e.g. the fraudulent vaccine/autism data reported this month, or “Dr. Reuben regrets this happened”), sometimes articles are retracted due to errors (see the new Retraction Watch blog), sometimes scientists simply can’t reproduce a result that looked good in the early trials.
[A rule at NEJM prohibiting articles about a drug, written by people with financial ties to the drug] was reversed in 2002, after the journal’s current editor in chief, Dr. Jeffrey M. Drazen, took the job. Dr. Drazen and his colleagues reported that for some subjects, so few experts without financial ties could be found that the journal’s scope was becoming artificially curtailed. (Emphasis added)
Reading The Decline Effect, I thought we were experiencing a weakness of the scientific method. Despite the Seroquel item and the NEJM piece (and others like it), it never dawned on me that we might be experiencing wholesale distortion of scientific research: actively suppressing half the data. Goldacre’s data suggests that perhaps we are.
To paraphrase him: if he tossed a coin 100 times and only reported half of the outcomes, he could convince you – and doctors, and insurance companies, and Medicare, and everyone – that the coin was different from what it is. And that wouldn’t be good science.
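The coin analogy can be made concrete with a toy simulation. The “publish every heads, but only half the tails” rule below is my illustrative assumption, not Goldacre’s exact numbers:

```python
import random

random.seed(42)  # reproducible toy run

# Toss a fair coin 100 times; treat heads as a "positive" result.
flips = [random.random() < 0.5 for _ in range(100)]
heads = sum(flips)
tails = 100 - heads

# Selective reporting: publish every heads result, but only half the tails.
reported_heads = heads
reported_tails = tails // 2
reported_total = reported_heads + reported_tails

print(f"true heads rate:     {heads}/100 = {heads / 100:.0%}")
print(f"reported heads rate: {reported_heads}/{reported_total} "
      f"= {reported_heads / reported_total:.0%}")
# Under this rule the reported rate comes out inflated relative to the
# true rate, even though the coin itself is perfectly fair.
```

Swap in any other lopsided reporting rule and the same distortion appears – the published record stops reflecting the underlying reality.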