Significantly expanded below the video, an hour after the first post.
Peter Frishauf, member of the editorial board of our journal, has brought what is to me the most exciting news for participatory medicine since the OpenNotes project. Importantly, this news may have broader implications – because it addresses one of the core challenges of patient engagement: the quality and freshness of medical articles.
Last fall, UCSF School of Medicine professor Amin Azzam started a course for fourth-year medical students to become Wikipedia editors and apply their skills to Wikipedia articles that were important to them but of poor quality. It got big-name media attention (NY Times, The Atlantic), and it should – because as we’ve often written, one of the core challenges e-patients (and doctors!) face is finding up-to-date, reliable information.
This is not a trivial question – you can’t just rely on the peer review process, because it too has flaws, and good luck ever getting mistakes fixed. The biggest example is the ongoing vaccine controversy caused by a massive failure of peer review in the top-tier journal The Lancet, but there are many others. Another shortfall is what our movement’s founder “Doc Tom” Ferguson called “the lethal lag time” – the years of delay between a result being discovered and the time it reaches doctors.
Frishauf, who has often written about such shortcomings (see comment below), created this 14-minute interview. I’ll have more to say later, but what do you think? (If you can’t see the video, click here.)
You can find the links mentioned in the video below, at the end of this commentary.
Added an hour later (by Dave):
About WikiProject Medicine, and Wikipedia quality levels
This initiative is part of a larger project called WikiProject Medicine. As it says under their Goals heading:
WikiProject Medicine seeks to benefit the world by giving the general public and health care professionals a text they can all read, appreciate, and respect, free of charge.
Wikipedia article quality levels
Wikipedia-savvy people will understand the specifics of the project’s efforts to elevate articles through the quality levels illustrated on the project’s page: from the initial “Stub” and “Start” status, up through C and B, to “Good Article” (“written very well, contain factually accurate and verifiable information, are broad in coverage, neutral in point of view, stable, and illustrated, where possible, by relevant images with suitable copyright licenses”), and ultimately to “Featured” status (FA = Featured Article, FM = Featured Media, FL = Featured List). Only about one article in a thousand is Featured; about four times as many are Good.
A major advantage: transparency on who edited what, with public discussion
In talking with medical researchers, time after time I’ve heard frustration over the secrecy in the peer review process. An article can be criticized, called outright wrong, or rejected, with no accountability, and (worse, in my view) no way for others (docs or patients) to assess whether they disagree with that secret reviewer’s opinion.
The underlying assumption in this process is that the best road to quality is secrecy, so that unnamed reviewers can speak their minds. But that in turn assumes that the most important thing is for secret reviewers to have free rein.
What if e-patients and other researchers were to interpret the draft (and the data) differently?
With the Wikipedia process, every single edit is recorded, from adding a comma to a major addition or removal. You know who made each change, and any change can be undone. And the process is openly discussed on each article’s talk page.
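That transparency isn’t just a policy – it’s queryable. As a minimal sketch (not from the post itself), here’s how anyone could build a standard MediaWiki API request for an article’s public revision history; the endpoint and parameters are part of the real MediaWiki “query” API, and the article title is just an example:

```python
# Minimal sketch: constructing a MediaWiki API URL that lists who edited
# an article, when, and with what edit summary. No login required --
# the revision history is public for every reader.
from urllib.parse import urlencode

API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

def revision_history_url(title, limit=10):
    """Return a URL whose JSON response lists recent edits to `title`."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "user|timestamp|comment",  # editor, time, edit summary
        "rvlimit": limit,
        "format": "json",
    }
    return API_ENDPOINT + "?" + urlencode(params)

# Example: inspect the history of the "Race and health" article
# discussed in the podcast.
print(revision_history_url("Race and health"))
```

Fetching that URL (with any HTTP client) returns the same attribution data shown on the article’s “View history” tab – which is exactly the accountability that secret peer review lacks.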
Another big deal: the patient perspective is welcome.
Wikipedia lets anyone (anyone!) edit an article. This means patients can express their perspective, in addition to or in contradiction to what the professionals wrote. To my knowledge this is the first large-scale platform where patients and professionals have equal authority. I’m sure there will be bumps in the road – things to be worked out. But for the first time, patients with diseases like RA (see Kelly Young’s RAWarrior community), where many patients say clinicians don’t understand the disease, are able to speak out. Same for Parkinson’s disease, where some e-patients have substantially different views of what research should be conducted and what the goals of therapy should be.
Crowd vs credentials: Medpedia and Britannica
It seems to me that we’re nearing the end of the era of what I’ll call “authoritative authority.” Followers of this issue will recall two developments that signal a big shift in the wind:
- In 2005 Wikipedia was judged “about as good a source” as Britannica for accuracy (Nature, CNet)
- In classic fashion, Britannica criticized Nature’s methods, while Wikipedians went off and discussed why they had so many errors!
- I can’t verify this, but I’m told that within days, all the Wikipedia errors had been corrected. (Can anyone send a link to verify this?)
- The last print edition of Britannica was in 2010.
- Last July we ran Crowd trumps credentials: Medpedia’s dead. Medpedia was an attempt to harness Wikipedia’s strengths, but editing was limited to credentialed professionals. The assumption was that credentials are the best guarantee of quality.
The “Medpedia’s dead” post pointed to our earlier posts, including this one, which merits repeating here.
The problem was predicted by our Feb 2009 post Who gets to say what info is reliable? which drew 70 comments. The post asked, “Who will vet the vetters?” Comments included this, from SPM founder John Grohol:
… If you start with a closed, walled-off garden to begin with in terms of contributors, you’re already starting at a disadvantage.
Wikipedia showed that shared knowledge can work … but it takes the devotion of thousands of dedicated, altruistic people. Medpedia, although perhaps well-intentioned, is coming at health and medical knowledge from a distinctly 1.0 “docs know best” philosophy.
It’s fascinating to see that old-school philosophy be grafted on to a 2.0 tool and business model, in hopes of generating something new and different. I have my doubts, reading through the tripe listed currently for mental disorders (full of misinformation).
Laika’s post links to numerous comments by sharp observers, including internet mega-maven Clay Shirky, and ScienceRoll doc @berci, who said back then:
I believe elitism kills content. Only the power of masses controlled by well-designed editing guidelines can lead to a comprehensive encyclopaedia.
The very first medical conference I ever attended was Connected Health, October 2008, where Shirky said in a keynote:
The patients on ACOR don’t need our help, and they don’t need our permission.
Here’s to the visionaries – the people who understand what’s going on, below the surface, and can see the future, so we can build on it.
=== End 2013 excerpt ===
Here’s a list of links for the podcast:
(In order of appearance in podcast)
Are Traditional Peer-Reviewed Medical Articles Obsolete? Frishauf, P; Lundberg, G. Medscape, Jan 6, 2006.
Doc James, user page on Wikipedia
Editing Wikipedia Pages for Med School Credit. Cohen, N. The New York Times, September 29, 2013.
Should I Be Getting Health Information From Wikipedia? Beck, J. The Atlantic, October 1, 2013.
Wikipedia:WikiProject Medicine/UCSF Elective 2013
Race and health, Wikipedia article
Race and health: Revision history
Wikipedia donation page. Wales, J. Wikipedia.
A Troubled Trifecta: Peer Review, Academia & Tenure Frishauf, P: e-patients.net August 26, 2010
Wikipedia Top Health Care Source for Patients, Providers, Report Says. iHealthBeat, January 30, 2014.
Dr. Wikipedia: The ‘Double-Edged Sword’ Of Crowdsourced Medicine. NPR, February 8, 2014.
Here are links to some of Peter’s earlier work on the problem of traditional peer review, in the inaugural issue of our journal, Oct 2009:
And in 2006, on Medscape, which he founded: Are Traditional Peer-Reviewed Medical Articles Obsolete? A Pitch for the Wikipedia Concept (free registration required)