Abstract
Summary: A journal with “participatory medicine” in its name will challenge health care organizations, practitioners, caregivers, and patients to examine their comportment and relationships. It will also challenge the scientists of medicine, health services, and patient education to re-examine their research methods and designs, because the participatory process will not lend itself easily to the conventions of randomized controlled trials. The Journal will also be challenged by the shadow of impact factor scores, with their bias toward academic rather than practical impact, and by the need to report more fully on external validity. These challenges appear to be welcomed by the editors of this new journal.
Keywords: Research designs, evidence-based practice, practice-based evidence, alternatives to RCT, external validity, impact factor scores, RE-AIM.
Citation: Green LW. The field-building role of a journal about participatory medicine and health, and the evidence needed. J Participat Med. 2009(Oct);1(1):e11.
Published: October 21, 2009.
Competing Interests: The author has declared that no competing interests exist.
In building a fledgling movement into a recognizable and respected field of knowledge and action for the common good, one must first articulate a common purpose. One must then scan the environment to discern and delineate the actors and stakeholders, and then begin to assess the problems and needs they face in making their respective contributions to the field. A field is more than a discipline or a profession. It is more than its subject matter and more than the sum of the separate actions of those who identify with the field. It is the joint actions and converging products of action by the many within the historical, cultural, social, and scientific contexts in which they are contributing. I propose here, with my congratulations to the co-editors and their collaborators on launching this journal, a few observations on the editors’ purpose, and suggestions to the actors or stakeholders, particularly within the scientific context of the evidence they will be called upon to defend.
Definition, Purpose, and Scope of Participatory Medicine
The purpose of participatory medicine is implicit in its definition by the editors. They see it as “a cooperative model of health care that encourages, supports, and expects active involvement by all parties (health care professionals, patients, caregivers) in the prevention, management, and treatment of disease and disability and the promotion of health.” With that definition and statement of purpose for the field, which I like, the editors might have been too modest in limiting the scope to “medicine” in the name of the journal, because much of what the definition stakes out to be accomplished will call upon professionals beyond health care; it will have an impact not just on medical outcomes but on broader health and social benefits; not just on patients, but on those same people when they are not patients, and on their families and communities.
Roles and the Scientific Context
But taking the purpose and primary audiences in the more restrictive scope of medicine, some words of caution and, perchance, some inspiration, might be drawn from the recent scientific context of research on issues related to the participatory aspects of medicine. Those aspects include engaging people—sick and well—more actively in their own care; encouraging their proactive help-seeking from professionals, family, friends, and other social networks; and accessing trustworthy information to inform their health-related choices. Participatory medicine calls upon the professionals who serve them to exercise more openness and fullness in informing and involving participants in their care and in directing them to other sources. It calls upon health-related institutions to enforce policies assuring patient access to their health and care information and to accommodate diversity in literacy, culture, ability, and level of functioning.
What makes these roles part of the scientific context of medicine? In a word, most of them run counter, to some degree, to the conditions in which the canons of research evidence on interventions in medicine are produced and judged. They may contradict the very essence of random assignment to interventions, the screening of subjects eligible for randomization, the blinding and double-blinding of patients and of the health care professionals administering the experimental interventions, and the isolation of the experimental intervention from other concurrent interventions by limiting their scope or number.[1]
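As a concrete illustration of these conditions, consider the minimal sketch below. The patient records, the blood-pressure eligibility criterion, and the function names are all invented for illustration; the point is only the mechanism: a fixed eligibility screen followed by random assignment that neither patients nor clinicians can choose or adapt, which is exactly the control that participatory interventions tend to disrupt.

```python
import random

def screen_and_randomize(patients, eligible, seed=42):
    """Illustrative only: screen subjects against a fixed eligibility
    criterion, then randomly assign each eligible subject to an arm.
    Neither patients nor clinicians influence the assignment."""
    rng = random.Random(seed)          # fixed seed so the example is reproducible
    arms = {"intervention": [], "control": []}
    for patient in patients:
        if not eligible(patient):      # eligibility screening
            continue
        arm = rng.choice(list(arms))   # random assignment, blind to preference
        arms[arm].append(patient)
    return arms

# Hypothetical data: randomize adults with elevated systolic blood pressure.
patients = [{"id": i, "age": 30 + i, "systolic_bp": 120 + 3 * i} for i in range(20)]
groups = screen_and_randomize(patients, lambda p: p["systolic_bp"] >= 140)
print({arm: [p["id"] for p in ps] for arm, ps in groups.items()})
```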
These conditions of experimental control remain fully justifiable in early clinical trials to test the efficacy of new drugs, medical devices, and techniques. They set conditions, however, antithetical to the testing of most behavioral, educational, and communicative interventions whose purpose is to encourage provider openness and active patient, provider, and caregiver participation. They might be doable in highly structured specialty clinics in teaching hospitals for some participatory interventions, but much of the control in experimental control will evaporate in most primary care and community hospital settings outside academia. This will be all the more so when the purpose or modus operandi of the intervention is participation, autonomy, self-agency, full disclosure of options, and professional discretion to adapt, tailor, and individualize interventions rather than following rigid protocols with “fidelity” in their implementation. In short, the conditions of participatory medicine will call for a wider range of research methods and designs to produce relevant evidence for the organization and practice of participatory medicine. Quasi-experimental trials and qualitative mixed-methods studies, for example, have been used with increasing sophistication in supplementing or supplanting designs that leave too many questions of the interactions between causality and context unanswered.[2]
The “Rules” of Evidence
Unfortunately, many of the preferred and often necessary alternatives for generating the types of evidence needed in participatory medicine will not align perfectly with the hierarchy of research designs adopted in the “evidence-based medicine” movement. That hierarchy universally prefers evidence generated from randomized controlled trials (RCTs). When the research on participatory medicine seeks also to cast the interventions in a wider context of nonmedical environments and social determinants of participatory opportunity and behavior, such as mass media influences, national and state health policies, professional training, home-based technologies, and changes in social norms, the alignment becomes still more convoluted.[3]
It is not enough for participatory medicine to say “forget the evidence-based medicine hierarchy.” Federal funding and even much foundation funding for research and for programs and services are aligned with the hierarchy, even in fields farther from conventional medicine than is participatory medicine.[4] Systematic reviews of research literature that lead to guidelines for “best practices” use the hierarchy both to rule studies in or out of the reviews and to weight the relative importance of the included studies according to their approximation of RCT conditions. It will not serve the building of a field of participatory medicine to be left out of these reviews. Participatory medicine is not alone in this quandary and in the misalignment of its subject matter with the conditions of experimentation that dominate the evidence hierarchy.
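The gatekeeping role of the hierarchy can be made concrete with a toy sketch. The design labels, weights, and effect sizes below are illustrative assumptions, not any review body’s actual scale; the point is only the mechanism: studies are first ruled in or out by design, then weighted by how closely their design approximates an RCT.

```python
# Illustrative assumption: a toy design hierarchy, highest weight to RCTs.
DESIGN_WEIGHT = {
    "randomized_controlled_trial": 1.0,
    "quasi_experimental": 0.6,
    "observational_cohort": 0.4,
    "case_series": 0.1,   # often ruled out of reviews entirely
}

def pooled_effect(studies, cutoff=0.2):
    """Rule studies in or out by design weight, then compute a
    design-weighted average of their reported effects
    (hypothetical numbers throughout)."""
    included = [s for s in studies if DESIGN_WEIGHT[s["design"]] >= cutoff]
    total = sum(DESIGN_WEIGHT[s["design"]] for s in included)
    return sum(DESIGN_WEIGHT[s["design"]] * s["effect"] for s in included) / total

studies = [
    {"design": "randomized_controlled_trial", "effect": 0.30},
    {"design": "quasi_experimental", "effect": 0.45},
    {"design": "case_series", "effect": 0.80},  # excluded by the cutoff
]
print(round(pooled_effect(studies), 3))  # 0.356: the RCT dominates the pooling
```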
Other fields, including public health, social work, criminology and other social services, community psychology, environmental change and other policy evaluations, all have struggled with and come to terms in various ways with the impossibility, or impracticability, or ethical impropriety of meeting the conditions of RCT designs for the testing of interventions, programs, or policies in their fields. The most common remedy has been to add supplementary sources of data and statistical controls in the absence of randomized control to address the threats to internal validity created by deviations from RCT design.
Reclaiming the Importance of External Validity
A more positive alternative that I have favored over the defensive posture of apologizing for necessarily nonrandomized or quasi-experimental designs has been to advocate for the place of external validity as a neglected and equally important consideration in evidence hierarchies.[5] The very conditions that will drive much of the research on participatory medicine away from randomized controlled designs are the conditions that will make their results more generalizable, more relevant, more affordable, more scalable, and more credible to other practitioners, program planners, and policy makers in other practice settings. These are the things that will give the results of participatory medicine studies greater external validity as a partial (at least) trade-off for the necessary sacrifices of internal validity. They will trade off some degree of certainty that the intervention tested was the sole determinant of the change or the difference found in the outcome variables. They will gain in return some certainty that the results will be replicable in real world circumstances, with real practitioners working in real time rather than highly trained research assistants implementing interventions under close supervision and with rigid protocols that call for no adaptation or tailoring to the individual patient.[6]
These trade-offs have been studied and debated increasingly in recent forums sponsored by federal agencies such as NIH, CDC, AHRQ, and the Veterans Administration,[7] and the case for more attention to external validity has been made in appeals to researchers, evaluators, journal editors, and funders.[8] One of the approaches to research and evaluation that is seen to increase relevance to end users of the results and to strengthen external validity is participatory research. When the policy makers, program planners, practitioners, patients, or other community members who would be the intended users or beneficiaries of the results are engaged in helping to define the research or evaluation questions and in interpreting the results, the findings are more likely to resonate with their needs, and more likely therefore to be used.[9] This recognition could lead a journal dedicated to participatory medicine to give special consideration to manuscripts that report on participatory research about participatory medicine.
The “Impact Factor” Challenge for the Journal
Besides selecting manuscripts that give particular weight and attention to participation, this new journal will face what has become an obsession for journal publishers, editors, and editorial board members in recent years: the challenge of the impact factor score. As the ability to track citations has increased, the impact factor score has become the main criterion by which journals judge their success, comparing their performance over time and with other journals. The impact factor refers to the rate of citations a journal’s articles receive in subsequent publications throughout the field over a given year. It has become one of the most potent metrics for measuring academic success for purposes of professorial appointments and promotions. This in turn has driven the submission of manuscripts toward those journals with the highest impact factor scores and away from others with more practical and socially important missions. It leads editors and their peer reviewers to weight their selection of manuscripts toward those with the highest potential to be cited, which means those with the greatest academic appeal.
The problems with being driven in editorial policy by impact factor scores are two. One is that, as constituted, impact factor scoring is based almost wholly on academic impact, and very little (except possibly indirectly) on the impact a journal’s articles are having on policy, program planning, institutional changes, practitioner changes, or the participation and benefit of patients and others who might be the intended ultimate beneficiaries of what the articles are about. My hope for this new journal is that it might contribute to the development of new metrics by which articles and their practical impact can be assessed. A second problem is that any attempt to ignore the academic impact factor potential of submitted manuscripts will give potential authors pause in deciding where to submit. In the short term, there will be no choice but to consider academic impact while developing additional, complementary metrics to assess the other, worldlier, criteria and standards of impact.[10]
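For readers who want the arithmetic behind the score discussed above, the conventional two-year impact factor for year Y is the citations received in Y by items the journal published in Y-1 and Y-2, divided by the number of citable items published in those two years. A minimal sketch, with invented counts:

```python
def two_year_impact_factor(citations_in_year, citable_items):
    """Conventional two-year impact factor for year Y: citations received
    in Y to items from Y-1 and Y-2, divided by the citable items the
    journal published in Y-1 and Y-2."""
    return citations_in_year / citable_items

# Hypothetical counts: 210 citations in 2009 to articles published in
# 2007-2008, out of 120 citable items published in those two years.
print(two_year_impact_factor(citations_in_year=210, citable_items=120))  # 1.75
```

Note that nothing in this ratio registers whether any of those articles changed a policy, a program, or a patient’s care, which is precisely the limitation argued above.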
My favorite set of criteria for impact beyond the impact factor is encapsulated in the RE-AIM model.[11] The acronym refers to the reach, effectiveness, adoption, implementation, and maintenance of the interventions being tested. An intervention is widely judged to have been adequately tested if it has been submitted to a “rigorous” RCT. Such trials, however, place virtually all of their weight on internal validity, the degree to which an observed effect can be attributed with certainty to the intervention, which brings us full circle back to the issues raised in earlier paragraphs about external validity. RE-AIM suggests that an intervention also needs to be judged on the basis of the following (a brief sketch after the list makes the first two criteria concrete):
- Its reach (how many or what proportion of the intended beneficiaries of the intervention can be exposed to it?);
- Its effectiveness (beyond the “efficacy” of the intervention as tested under the hothouse conditions of a controlled trial, how does it work in real time with real patients and ordinary staffing and supervision of those conducting the intervention?);
- Its adoption by the organization or institution;
- Its implementation by those responsible for delivering it to patients; and
- Its maintenance by the organization and the practitioners.
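As promised above, here is a minimal sketch of how the first two RE-AIM dimensions might be operationalized. The field names and numbers are invented, and these are plausible simple measures rather than the RE-AIM authors’ official formulas; adoption, implementation, and maintenance would be assessed analogously at the organizational and practitioner level.

```python
def reach(exposed, eligible_population):
    """RE-AIM reach: proportion of intended beneficiaries actually exposed."""
    return exposed / eligible_population

def effectiveness(outcome_with, outcome_without):
    """RE-AIM effectiveness: outcome difference under real-world delivery,
    not the 'hothouse' efficacy of a tightly controlled trial."""
    return outcome_with - outcome_without

# Hypothetical program: 480 of 2,000 eligible patients were reached, and
# real-world delivery raised a 0-1 outcome score from 0.52 to 0.61.
r = reach(exposed=480, eligible_population=2000)            # 0.24
e = effectiveness(outcome_with=0.61, outcome_without=0.52)  # +0.09
print(f"reach={r:.2f}, effectiveness={e:+.2f}")
```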
Beyond these criteria, or embedded within them, are considerations of cultural acceptability and appropriateness, cost and affordability, fit with the traditions of the institutions in which the intervention would be implemented, scalability, and professional and patient discretion to deviate from rigid protocols.[12] My fondest hope for this new journal is that it will blaze a new trail toward greater editorial attention to these considerations in the selection, publication, and impact scoring of manuscripts.
References
- [1] ENACCT (Education Network to Advance Cancer Clinical Trials) & CCPH (Community-Campus Partnerships for Health). Communities as partners in cancer clinical trials: changing research, policy and practice. Seattle: Joint Initiative of ENACCT & CCPH; 2008. Available at: http://www.enacct.org/sites/default/files/Communities%20Full%20Report_0.pdf. Accessed October 8, 2009.
- [2] Utley-Smith Q, Colón-Emeric CS, Lekan-Rutledge D, et al. Nature of staff-family interactions in nursing homes: staff perceptions. J Aging Stud. 2009;23:168-177.
- [3] Sanson-Fisher RW, Bonevski B, Green LW, D’Este C. Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007;33:155-161.
- [4] Schorr LB. To judge what will best help society’s neediest, let’s use a broad array of evaluation techniques. Chronicle of Philanthropy. August 20, 2009. Available at: http://philanthropy.com/free/articles/v21/i20/20003301.htm. Accessed October 8, 2009.
- [5] Green LW. From research to “best practices” in other settings and populations. Am J Health Behav. 2001;25:165-178. Available at: http://www.ajhb.org/issues/2001/3/25-3-2.pdf. Accessed October 8, 2009.
- [6] Green LW. Making research relevant: if it’s an evidence-based practice, where’s the practice-based evidence? Fam Pract. 2008;25 Suppl 1:i20-i24.
- [7] Mercer SL, DeVinney BJ, Fine LJ, et al. Study designs for effectiveness and translation research: identifying trade-offs. Am J Prev Med. 2007;33:139-154.
- [8] Glasgow RE, Green LW, Ammerman A. A focus on external validity. Eval Health Prof. 2007;30:115-117.
- [9] Kottke TE, Solberg LI, Nelson AF, et al. Optimizing practice through research: a new perspective to solve an old problem. Ann Fam Med. 2008;6:459-462. Available at: http://www.annfammed.org/cgi/content/full/6/5/459 or http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=18779551. Accessed October 8, 2009.
- [10] Favaloro EJ. The Journal Impact Factor: don’t expect its demise any time soon. Clin Chem Lab Med. 2009;47:1319-1324. doi: 10.1515/cclm.2009.328.
- [11] Glasgow RE. What types of evidence are most needed to advance behavioral medicine? Ann Behav Med. 2008;35:19-25.
- [12] Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29:126-153.
Copyright: © 2009 Lawrence W. Green. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the author(s), with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.