
Abstract

Summary: The credibility, authority, and relevance of prestigious journals are being questioned in the light of an apparent increase in publications marred by technical flaws or misconduct, despite having passed peer review. To strengthen the review process, the Journal of Participatory Medicine proposes to allow health care users and other lay experts to participate in the shaping of new knowledge by providing feedback on the quality of the evidence. Enlarging the pool of reviewers in this way has several potential advantages.
Keywords: Communication, expertise, knowledge, participatory medicine, peer review.
Citation: Shashok K. Who’s a peer? Improving peer review by including additional sources of expertise. J Participat Med. 2010 Dec 8; 2:e15.
Published: December 8, 2010.
Competing Interests: The author has declared that no competing interests exist.

Current Peer Review Capacity Is Quantitatively and Qualitatively Inadequate

Concern is growing about the ability of peer review to filter out flawed research reports that are unworthy of the presumed prestige and authority of the journals that publish them. Because the quality filter is not as stringent as it could be, the frequency with which methodology in published research is questioned seems to be on the rise.[1][2][3][4][5] The increasing number of manuscripts in need of review is apparently outstripping available reviewer capacity because of the slow growth in the number of willing and competent peer reviewers.[6][7] As a result, peer review for research publication is failing to perform its “improving” function and is being degraded into a mere “screening” function. Yet even the manuscript screening process has methodological problems that may result in misguided acceptance and rejection decisions.[8] Efforts to weed out manuscripts before peer review are a necessary response to reviewer shortages, but rejecting papers before they are read by scientific reviewers has its own risks.[9] As Anderson explained, “With financial rewards, philosophical pressures, academic incentives, and potentially false equivalences driving us toward publishing more and more, filtering less and less, we are already in the midst of a ‘filter failure’ of immense proportions.[10][11]”

The number of manuscripts needing review is predicted to increase because the population of researchers who need to publish is rising, and because of regulatory requirements for the publication of the results of industry-sponsored health research.[12] So the burden of peer review on an already overstretched system will soon become even heavier. Moreover, the limitations that make peer review for publication less effective than it should be also adversely affect peer review for funding applications,[13] another key process in scientific culture.

If gatekeepers limit the search for reviewers to the academic and professional communities they are comfortable with, they may perpetuate the reviewer shortage. Cope and Kalantzis noted that, “Journals come to act like insider networks more than places where knowledge subsists on its merits, or at least that’s the way it often feels to outsiders.[14]” The practical effect of this tendency is “to exclude thinkers who, regardless of their merit, may be from a non-English speaking country, or teach in a liberal arts college, or who do not work in a university, or who are young or an early career researcher, or who speak to an innovative paradigm, or who have an unusual source of data.[14][15]” As a result, “potentially very valuable knowledge work conducted in rich knowledge spaces” may go unused.[14]

Strategies to improve peer review by “certified” subject experts could increase reviewers’ motivation and satisfaction, but implementing these strategies will require resources that do not appear to be widely available at present. Educating undergraduates in critical reading and reviewing skills, as well as training and mentoring them to become reviewers, will require investments in capacity-building. Yet stakeholders are unclear about who should provide the economic and human resources for this. And even if capacity-building measures could be implemented globally within academic and professional communities, it would take years for their effects on overall peer review quality to appear. Rather than making superficial changes to retool an essentially outmoded review system, why not take advantage of as yet mostly untapped sources of expertise to increase the pool of reviewers? A new vision for publishing and peer review that would include “non-certified” experts as additional sources of knowledge was proposed by Frishauf, who envisioned a reputation system that would allow participation by persons, groups, or institutions; be subject-matter specific; and make open discussion possible between authors and reviewers.[16]

Broadening the Community of Reviewers

Useful feedback can come from experts such as patients, statisticians, readers from non-Anglophone settings, or anyone able to detect weaknesses in the reasoning that subject-expert peers may overlook if they are pressed for time or not motivated to produce a careful review. Open, community-based peer review could help overcome the shortage of expertise because experts who have felt marginalized from opportunities to participate in shaping knowledge about health care may be eager to contribute their feedback. The Journal of Participatory Medicine (JoPM) is poised to try an innovative approach by including patients, non-academic experts, and other interested parties in the process of critically evaluating the content it publishes. This broadened vision of potentially useful sources of expertise and authority[17] recognizes that, with the aid of 21st-century information and communication technologies, expert knowledge about health care is migrating toward users, communication professionals, journalists, and other consumers of health information.

Anglocentrism is another limitation of traditional peer review that the JoPM approach could help to overcome. Despite rapid growth in the number of potential reviewers as research becomes increasingly global, most gatekeepers are still native English speakers. Rhetorical strategies used to report findings and persuade readers of their significance differ among cultures, and these differences can influence how researchers write up their work.[18] One result of cultural and linguistic diversity among scientists is that reviewers judge the quality of the writing differently depending on their own first language, and tend to consider any text that does not satisfy their personal preferences for “good scientific English style” to be badly written.[19] Experts in ethnographic aspects of research publication have argued for “the need to reduce Anglophone control and to change the kind of knowledge production, evaluation and distribution practices currently governing scholars’ practices and experiences.[20 p155]” Open, community-based review would favor the participation of a wider sample of reviewers with varied experiences in health care and health research, whose different cultural backgrounds may make them more tolerant of writing that is clear even if it does not follow the customary patterns recommended by experts in written scientific communication.[21] This variety is an important potential advantage of open review since health research, regardless of where it takes place, can be useful to readers anywhere in the world.

Regardless of who provides feedback to authors, anonymity should not be an option. Public attribution of review authorship could provide recognition for valuable input and dissuade reviewers from letting their personal biases or conflicts of interest influence their feedback. Attribution and accountability would also discourage reviewers from dashing off a brief, superficial review rather than a more thorough but more time-consuming report, and could prevent self-interested manipulation of the process in the reviewer’s favor.[22] If potential reviewers are afraid of public embarrassment from asking a “stupid” question,[22] they should decline to evaluate the material, since their expertise may not be right for that content. If their advice is useful, their reputation will be enhanced. Even if their advice is not useful, public loss of face may encourage them to think twice before posting comments that are not helpful, and to re-examine their own thinking on the subject. Making comments publicly available also gives interested readers a clear idea of where participants’ expertise lies, as is already occurring in the popular Comments sections of blogs and other online resources.

The Rise of Community-Based Review

Many science disciplines have experienced failures of peer review to filter out low-quality information. The poor scientific quality of articles in well-established microbiology journals led the author of the Bitesize Bio blog–a reviewer and former managing editor of a specialized journal–to wonder how some articles ever got published.[23] Michael Brooks, writing for New Scientist, expressed concerns over recent cases of the publication of seriously flawed research in reputable physics and medical journals. In one case, failure of the filtering function of peer review was compounded by failure of the editor to remove the flawed research from the record until a journalist “outsider” warned him that the results were unreliable. Brooks noted that it was “worrying that it takes external activism–a second layer of peer review from outside the system–for [retraction] to happen.[24]”

The consequences of failure to consult reviewers with appropriate expertise are illustrated by the reactome array article published in Science,[25] which described a potentially valuable research and diagnostic tool developed by a multidisciplinary international team. Immediate feedback from chemists identified serious issues with the chemistry data, and Science admitted it had not included experts in synthetic organic chemistry among its reviewers.[26] By January 2010, Science had published an editorial expression of concern, and powerful critics and supporters of the research have since weighed in online.

When the Journal of the American Chemical Society published an article on the oxidation of alcohols by sodium hydride, feedback from the blogosphere quickly pointed toward contamination rather than a new, undiscovered chemical mechanism as the most likely explanation for the unusual finding. This episode “highlighted the way that blogging can immediately bring together expert opinion on a given topic.[27]” A posting from an organic chemist noted that, “Poorly reviewed papers claiming novelty can be expected to be rapidly dissected in the blogosphere,” and a computational chemist noted that, “There seems little doubt that the very best blogs can provide a level of critical scientific commentary which in many cases surpasses the more traditional ‘QA’ mechanisms such as journal peer review.[27]”

Online publications that emphasize community input are already becoming authoritative sources of information. For example, MedPage Today is creating “rapid learning communities” where clinicians, researchers, and patients can post information about treatment effects and hypotheses. “Living review articles” (and other types of articles) will be immediately available, and will be continuously edited and updated by the user community. All contributors will be accountable for their input since no anonymity will be allowed.[28]

These examples (there are many more) show that feedback from experts outside the restricted community of pre-publication gatekeepers is already working well to correct the record on important topics that attract large numbers of evidence consumers.

Conclusion: User Expertise Is a Necessary Complement to Academic Expertise

Frishauf had this to say about traditional peer review in medicine: “We have ample evidence that the way peer review is conducted in the major [scientific, technical, and medical] journals today is inefficient and unreliable. Why cling to it? Is it because academia’s tenure system of ‘publish or perish’ only recognizes publication through this obsolete system?[16]”

Although traditional peer review is often inadequate, stakeholders who benefit from the current approach may be slow to support more participatory processes. Influential gatekeepers and other knowledge and opinion leaders, as well as commercial publishers that profit from the current system, may be reluctant to make room for alternatives in the knowledge marketplace. Research evaluators should rethink how contributions to knowledge creation can be counted for academic merit. The definition of a “publication”–like “the very structure and meaning of a journal ‘article’, and what we mean by an ‘author’ and ‘reviewer’”[16]–needs to become more flexible to include new kinds of knowledge sharing, not just articles in indexed journals with a high impact factor.

Rather than removing the “peer” from “peer review”, the definition of “peer” expertise should be broadened to include experts outside the academic and professional communities who have a stake in the quality of the evidence. Both patients and physicians want more information about what happens in the patient’s body and life, rather than what happens in “tightly controlled experimental situations.[29]” As Gruman noted, “patients are consumers of evidence,[29]” so they have an interest in participating in debates about the quality of the evidence. By encouraging feedback from non-traditional reviewers, JoPM can tap into a valuable source of knowledge and help rebuild evidence-sharing conduits among patients, physicians, and researchers.

Information seekers want their evidence to come from 21st-century knowledge-sharing technologies and critical review processes that take full advantage of nontraditional sources of evidence. Nevertheless, even with improved review, readers need to reclaim personal responsibility for whether they believe the content to be true and useful, and not delegate this decision to apparently authoritative yet fallible surrogate decision makers. Critical thinking and critical reading are as much the responsibility of consumers of evidence as they are of producers of evidence.

References

  1. Lee K, Bero L. What authors, editors and reviewers should do to improve peer review. Nature. 2006. doi:10.1038/nature05007. http://www.nature.com/nature/peerreview/debate/nature05007.html Accessed Sept 23, 2010.
  2. Smith R. In search of an optimal peer review system. J Participat Med. 2009(Oct);1(1):e13. http://jopm.org/index.php/jpm/article/view/Article/12/25. Accessed April 12, 2010.
  3. Cohen P. Scholars test web alternative to peer review. New York Times. August 29, 2010. http://www.nytimes.com/2010/08/24/arts/24peer.html. Accessed Sept 23, 2010.
  4. Balaram P. Scientific publishing: eroding trust. Curr Sci. 2010;98(1):5-6. http://www.ias.ac.in/currsci/10jan2010/5.pdf. Accessed Aug 5, 2010.
  5. Marcus A, Oransky I. Retraction Watch. http://retractionwatch.wordpress.com/. Accessed Sept 20, 2010.
  6. Fox J, Petchey OL. Pubcreds: Fixing the peer review process by “privatizing” the reviewer commons. Bull Ecol Soc Am. 2010;July:325-333. http://www.esajournals.org/doi/pdf/10.1890/0012-9623-91.3.325. Accessed Aug 5, 2010.
  7. Frishauf P, Smith R, Wager L, Jadad A, Adler T. Can we trust traditional peer review? J Participat Med. Podcast. Undated. http://www.patientpower.info/series/the-journal-of-participatory-medicine. Accessed May 28, 2010.
  8. Kravitz RL, Franks P, Feldman MD, Garrity M, Byrne C, Tierney WM. Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLoS ONE. 2010;5(4):e10072. http://www.plosone.org/article/info:doi/10.1371/journal.pone.0010072. Accessed Sept 1, 2010.
  9. Salisbury MW. Is peer review broken? Genome Technology. Posted 30 Oct 2009. http://www.genomeweb.com/peer-review-broken. Accessed Aug 5, 2010.
  10. Anderson K. Cups, buckets, pools, and puddles: when the flood of papers won’t abate, which do you choose? The Scholarly Kitchen. Posted 8 Jul 2010. http://scholarlykitchen.sspnet.org/2010/07/08/cups-buckets-pools-and-puddles-in-the-age-of-information-abundance-where-do-filters-belong/. Accessed Aug 23, 2010.
  11. Shirky C. It’s not information overload. It’s filter failure. Web2.0Expo, Sept 16-19 2008, New York, NY. Video available at http://web2expo.blip.tv/file/1277460. Accessed Sept 11, 2010.
  12. Practices initiative. Int J Clin Pract. 2010;64(8):1028-1033. http://onlinelibrary.wiley.com/doi/10.1111/j.1742-1241.2010.02416.x/full. Accessed Sept 19, 2010.
  13. Powell K. Research funding: making the cut. Nature 2010;467:383-5 doi:10.1038/467383a. http://www.nature.com/news/2010/100922/full/467383a.html?s=news_rss. Accessed Oct 27, 2010.
  14. Cope B, Kalantzis M. Signs of epistemic disruption: Transformations in the knowledge system of the academic journal. First Monday. 2009;14(4) 6 April. http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2309/2163. Accessed July 27, 2010.
  15. Stanley CA. When counter narratives meet master narratives in the journal editorial-review process. Educational Researcher. 2007;36:14-24. http://edr.sagepub.com/content/36/1/14.full.pdf+html. Accessed July 27, 2010.
  16. Frishauf P. Reputation systems: a new vision for publishing and peer review. J Participat Med. 2009(Oct);1(1):e13a. http://jopm.org/index.php/jpm/article/view/Article/11/21. Accessed April 13, 2010.
  17. Green LW. The field-building role of a journal about participatory medicine and health, and the evidence needed. J Participat Med. 2009;1(21 Oct). https://participatorymedicine.org/journal/evidence/reviews/2009/10/21/the-field-building-role-of-a-journal-about-participatory-medicine-and-health-and-the-evidence-needed/. Accessed Sept 21, 2010.
  18. Salager-Meyer F, Alcaraz Ariza MA, Zambrano N. The scimitar, the dagger and the glove: Intercultural differences in the rhetoric of criticism in Spanish, French and English medical discourse (1930-1995). English for Specific Purposes 2003;22:223-48. http://www.saber.ula.ve/handle/123456789/27712. Accessed Oct 27, 2010.
  19. Shashok K. Content and communication—how can peer review provide helpful feedback about the writing? BMC Med Res Methodol 2008;8(3): DOI: 10.1186/1471-2288-8-3 (31 Jan 2008). http://www.biomedcentral.com/1471-2288/8/3. Accessed Oct 28, 2010.
  20. Lillis T, Curry MJ. Academic Writing in a Global Context. London: Routledge; 2010.
  21. Shashok K. Successful communication in English for non-native users of the language. Panace@ 2007;VIII(25):82-6. http://medtrad.org/panacea/IndiceGeneral/n25_congresos-shashok.pdf. Accessed Oct 28, 2010.
  22. Akst J. I hate your paper. The Scientist. 2010;24(8):36-41. http://www.the-scientist.com/2010/8/1/36/1. Accessed Aug 5, 2010.
  23. Kennedy S. Is peer review broken? Bitsize bio. Posted 23 Feb 2010. http://bitesizebio.com/2010/02/23/is-peer-review-broken/. Accessed Aug 5, 2010.
  24. Brooks M. We need to fix peer review now. The S Word. The science of politics—and vice versa. New Scientist. Posted 3 June 2010. http://www.newscientists.com/blog/thesword/. Accessed Aug 5, 2010.
  25. Alberts B. Editorial expression of concern. Science. 2010;327:144. http://www.sciencemag.org/cgi/content/abstract/science.1186078v2. Accessed Aug 10, 2010.
  26. Travis J. Harsh reaction to chemistry claims cast doubt on reactome paper. ScienceInsider. 23 Dec 2009. http://news.sciencemag.org/scienceinsider/2009/12/harsh-reaction.html. Accessed Aug 3, 2010.
  27. Hadlington S. Peer review by live blogging. Chemistry World. Posted 27 July 2009. http://www.rsc.org/chemistryworld/News/2009/July/27070901.asp. Accessed Aug 11, 2010.
  28. Lundberg G. Making LIVING medical publishing closer to real time. MedPage Today. Posted 12 July 2010. http://www.medpagetoday.com/Columns/21107. Accessed Sept 6, 2010.
  29. Frishauf P, Smith RW, Gruman J, Green LW. Participatory evidence: Opportunities and threats. J Participat Med. 2010;2(9 Aug). https://participatorymedicine.org/journal/multimedia/podcasts/2010/08/09/participatory-evidence-opportunities-and-threats/. Accessed Sept 21, 2010.

Open Questions

1. Can all stakeholders be motivated to accept open, community peer review?
2. What evidence of their expertise and credentials should nonacademic reviewers be expected to provide?
3. How might reviewer anonymity affect the usefulness of feedback?
4. Would open, community peer review increase the signal more than the noise?

 
