Abstract
Summary: Program evaluation of public engagement processes is important in understanding how well these processes work and in building a knowledge base to improve future engagement efforts. This program evaluation examined a CDC initiative in six states to engage the public about pandemic influenza. Evaluation results indicated the six states were successful in engaging citizens in their processes, participants became more knowledgeable about the topic, citizens believed the process worked well, and projects were successful in influencing opinions about social values. Lessons learned from the evaluation included the importance of communicating evaluation expectations early in the process; creating a culture of evaluation through technical assistance; ensuring resources are available for on-site evaluation collaboration; and balancing the need for cross-site data with the interests of local projects to capture evaluation data relevant to each unique project.
Keywords: Public engagement, public health, communication, program evaluation, public policy, participatory model, participatory medicine.
Citation: Bulling D, DeKraai M. Evaluation of a multistate public engagement project on pandemic influenza. J Participat Med. 2014 Mar 8; 6:e5.
Published: March 8, 2014.
Competing Interests: The authors have declared that no competing interests exist.
The US Department of Health and Human Services, Centers for Disease Control and Prevention (CDC) funded public engagement initiatives in six states (Minnesota, Washington, Ohio, Massachusetts, Hawaii, and Nebraska). The purpose of these initiatives was to include citizens in values-based public policy development pertaining to pandemic influenza. In this paper, we describe the evaluation of this multistate project, and based on our experience in designing and implementing this evaluation, we share lessons learned that may be useful in evaluating public engagement processes in general.
Overview of the Evaluation
We used a participatory model to evaluate the project (see generally [1], [2], [3]). We chose this model because the participatory approach ensures that the needs of project sponsors and the local project implementers are incorporated as the project unfolds over time. This approach is particularly useful for complex projects that are collaborative in nature. [4][5] This project included collaborations between the funder (CDC) and each of the implementation teams at the state level along with project facilitators and the evaluation team. We believe that communication between the evaluation team and the funder should be clear, consistent, and collaborative over the life of the project. The evaluation team was available for planned and impromptu discussions with the state implementers, facilitators and the funder throughout the project, which enhanced the quality of the final product.
Evaluation Questions
We began our evaluation process by reviewing the original request for proposals and talking with the project sponsors to better understand the purpose and desired outcomes of the evaluation. From these discussions emerged key questions of interest to the CDC:
1. How successful was each project in attracting participation by sufficient numbers of citizens with a broad diversity of perspectives?
A rule of thumb for the CDC was to attract 100 individuals to each state citizen meeting. This number was not based on any statistical model of representativeness; rather, project sponsors considered this level of participation reasonable for communicating to policy makers a broad involvement of citizens within each state. This level of participation would also allow process facilitators to structure meetings that included both small group and large group discussions.
Project sponsors and facilitators were interested in recruiting a diversity of citizens representing multiple perspectives. While an exact replication of the demographics of each community was not intended, the goal was to attract citizens of different racial/ethnic groups, income levels, education backgrounds, ages, genders, and professions. As a normative matter, commentators have asserted that recruiting a representative cross-section of the public to participate in deliberative forums is an ideal goal. Such representativeness is important to ensure that all members of a community potentially affected by the policy matter at issue are given a voice in the discussion. [6][7] Practitioners have also found that participants find greater satisfaction and value in participatory processes in which a wide diversity of viewpoints is shared. [8] Additionally, government sponsors of participatory processes benefit from listening to and receiving a broad – not narrow or selective – array of input. [9]
Recruitment of a representative cross-section can be challenging. Often, participatory forums can be dominated by special interest groups or others who represent a narrow personal or professional interest in a policy matter, rather than the interests of the community as a whole.[10] Research has also shown that some participatory forums tend to disproportionately attract individuals who are white, female, high-income, older, and have high educational levels. [11] Strategies to obtain more representative participants might involve using aggressive outreach and promotion efforts or oversampling techniques. Additionally, the use of a financial incentive can offset costs incurred through travel, daycare, or taking a day off from work, and attract individuals to participate in forums who are not motivated by personal or professional interests. [7]
2. How successful was the process in ensuring a sufficient level of citizen knowledge about pandemic influenza policy so they could engage in informed discussions?
One of the goals of the process was to ensure a sufficient level of participant knowledge so they could engage in informed dialogue about the issues. A process of education, or an increase in knowledge among participants, is implicit in an effective deliberative experience. Thus, the increase in knowledge among participants and their perceptions of the value of their discussion experience are measurable indicators of a successful deliberative discussion. [12][13]
The evaluation allowed us to test assumptions for each state, including (1) the degree to which the process significantly increased the relevant knowledge of participants; (2) whether participants believed they had sufficient knowledge to engage in informed discussion and make reasoned recommendations; and (3) whether the process produced some equalization of knowledge among participants; in other words, while participants were likely to have varying levels of knowledge going into the deliberation, the process might close this knowledge gap, resulting in a more equitable discussion of the issues. Through the evaluation, we also examined whether the information was successfully conveyed to specific populations based on demographics.
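As a concrete illustration of how assumptions (1) and (3) might be examined, the hypothetical sketch below pairs pre- and post-meeting knowledge scores for matched respondents and applies a paired t-test to the increase in knowledge and Levene's test to the equalization of knowledge. The data, rating scale, and choice of tests are assumptions for illustration only and are not the project's actual instruments or analysis code; assumption (2) is a direct self-report item and would simply be summarized from the post-meeting survey.

```python
# Hypothetical sketch (illustrative data): testing whether knowledge increased
# and whether the process "equalized" knowledge among matched participants.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.integers(2, 9, size=100).astype(float)             # knowledge scores before the meeting (0-10 scale)
post = np.clip(pre + rng.normal(1.0, 1.5, size=100), 0, 10)  # scores after the meeting

# Assumption (1): knowledge increased -- paired t-test on matched pre/post scores.
t_stat, p_increase = stats.ttest_rel(post, pre)

# Assumption (3): knowledge was equalized -- compare the spread of scores
# before and after using Levene's test for equality of variances.
w_stat, p_equalize = stats.levene(pre, post)

print(f"Mean change: {np.mean(post - pre):+.2f} points (p = {p_increase:.3f})")
print(f"SD pre = {pre.std(ddof=1):.2f}, SD post = {post.std(ddof=1):.2f} (p = {p_equalize:.3f})")
```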
3. Did the process result in a balanced, honest, and reasoned discussion of the issues and what would have improved the process?
Generally speaking, a deliberative experience is one in which participants carefully consider the pros and cons of a policy issue in a reasoned, informed, and balanced discussion. [14][15] A good deliberative experience involves listening to all sides of a debate, analysis of relevant information or evidence, and a discussion environment free of bias, peer pressure, or over-reliance on rhetoric. [7][16][17] A positive deliberative process may thus amount to a successful problem-solving experience, in which a solution to a policy question is arrived at through reasoned and informed discussion. [18] Other components of deliberative quality include a respectful discussion tone, transparency and clarity of meeting objectives and rules, equal and fair treatment among participants, and comfort with the meeting’s physical location and environment. [8] Characteristics of a successful deliberation, such as exposure to different viewpoints, factual learning, and careful consideration of issues, are likely to result in a shift in opinions or attitudes about the policy question at issue.
It is assumed that a well-facilitated meeting will result in a rich discussion of the issues in which multiple perspectives are considered and well-reasoned decisions or recommendations are made. To achieve this desired outcome, there are underlying assumptions about the process that can be tested through the evaluation, including (1) whether the process was perceived to be fair by participants, (2) whether individual participants felt comfortable sharing their perspectives, (3) whether discussions were dominated by select individuals or groups, (4) how well discussions helped participants understand the trade-offs involved in policy decisions, (5) whether participants were satisfied with the outcome of the process, (6) the degree to which the process was perceived to be free from bias, and (7) whether all important points and perspectives were voiced.
4. How did the process affect citizen perceptions about pandemic influenza policy options and values underlying those goals or options?
One of the assumptions of public engagement and deliberative processes is that through the process of understanding the issues, sharing perspectives, and gaining an appreciation of the trade-offs involved in policy decisions, participants change their opinions about the policies that should be implemented. If this were not the case, public input could be obtained much more easily and less expensively through public polling. This deliberative aspect is considered to be value-added because outputs will be more thoughtful and well reasoned. The evaluation tested this assumption by examining changes in perspectives about vaccine goals and the values relevant to those goals. In addition, we hypothesized that because participants had a chance to obtain similar knowledge about pandemic influenza and develop a greater depth of understanding about the policy options, they would have more similar perspectives after participation than before. In other words, the deliberative process would result in a convergence of beliefs among participants. We were also interested in whether there were differences among demographic groups in perspectives about policy choices.
5. Did the process affect citizen trust in government and support for policy decisions?
The primary goal for this public engagement process was to produce citizen and stakeholder perspectives for state level policy makers to consider as they grapple with important decisions. The evaluation also tested whether the process had an impact on participant beliefs in other areas: specifically, whether participants had greater trust in government and greater willingness to support policy decisions by public officials who considered their input. The evaluation tested this assumption by assessing trust in various levels of government before and after the process.
6. Did the process empower citizens to participate effectively in policymaking work?
Another by-product of public engagement is that citizens might feel more empowered by participating in public dialogue about important issues and increase their involvement in activities designed to improve society or their community (e.g., voting, volunteering, lobbying elected officials). [19] The evaluation tested this assumption by assessing changes in participants’ planned activities, such as involvement in civic activities and public policy generally.
7. How did decision makers use citizen information?
A key indicator of the success of a participatory process is the extent to which the process resulted in any significant policy impact. Identifying what impacts equate with success is, however, a subjective exercise. Arguably, the optimal goal of a participatory process is for the public to have a direct opportunity to make policy that reflects their preferences and priorities. However, successful impact can have other manifestations. Public participation can inform or improve decision-making; it can connect the public with each other and policymakers, build trust in government, provide opportunities for public education about policy issues, and foster healthy discourse and discussion in general. [20] In a minority of cases, policymakers can have less virtuous objectives behind sponsoring participatory processes, such as to placate select interests, manage public impression, or generate public acceptance of a pre-determined policy. [21]
Impact can be measured in a number of ways. The extent to which a participatory process does directly influence policy has been measured through policymaker perceptions of how public input improves or informs policy decisions. [22] Additionally, changes in citizen trust and confidence in government, or perceptions of government responsiveness, can indicate a positive impact in participant attitudes towards government. [11] Commentators have also argued that participating in robust, deliberative experiences about policy can increase political sophistication among participants, [23][24] and research has shown such an increase can indeed occur after citizens engage in deliberative forums, [25] or that participants’ policy opinions change in other ways. [26]
Once recommendations from the citizen engagement efforts are communicated, there is an assumption (or expectation) that decision makers will carefully consider this information as they make policy. Through the evaluation, we hoped to understand how information from the public engagement process was communicated to decision makers, how they considered the citizen and stakeholder input in relation to various other information sources, and the extent to which public engagement input impacted policy decisions. Specifically, we planned to assess (1) how well decision makers understood the process, (2) whether decision makers read the report or outputs from the process, (3) whether public input from the process was part of the information considered in developing the policy, (4) whether public input became part of the evidence or justification for or against certain alternatives, and (5) whether public input affected the policy in a clearly defined way. We also planned to explore the expectations of decision makers regarding the public engagement process and the type of information resulting from the process that would be useful in making policy decisions.
8. How well did the process increase state and local capacity to engage the public on policy choices?
One of the goals of the project was to increase the capacity of states and local jurisdictions to involve the public in decision making on an ongoing basis and to sustain this capacity after the project. The CDC funded technical assistance to assist each state in designing public engagement processes, identifying and recruiting participants, forming teams to identify public policy objectives, developing agendas, incentivizing participation in public engagement processes, facilitating meetings, incorporating citizen input into the decision making process, and communicating results to citizens.
Evaluation Methods
We used a mixed methods evaluation design including both quantitative and qualitative information. The protocol was submitted to the University of Nebraska Institutional Review Board and determined to be program evaluation and not human subject research. There were five major components to the evaluation methodology: (1) a pre-post survey conducted at each citizen and stakeholder meeting to assess change in knowledge, opinions about social values, and trust in government, (2) a survey conducted after each public engagement meeting to assess perceptions about the process, (3) focus groups and individual interviews conducted with randomly selected participants immediately after the meetings to assess empowerment and perceptions about the process, (4) key informant interviews with state officials, facilitation contractors, and CDC representatives to assess changes in capacity for engaging the public in policy decisions and how the public input was used in policy development (after meetings had all been conducted), and (5) a review of documents in each state to assess the overall process and how information was conveyed to policy makers.
All surveys and interview questions went through a rigorous process of cognitive testing for comprehension and ease of administration. Responses for survey items were randomly ordered where possible to account for selection order bias; three versions of each survey were produced. A coding system was developed for pre-post surveys to ensure before and after measures could be matched by individual respondent. Qualitative data for this evaluation were drawn from 69 interviews comprising over 24 hours of audio; five focus groups held after public engagement events; meeting summaries and notes from all six project sites; notes from contractor conference calls; evaluator observations of public engagement events; and material from two lessons learned meetings held at the beginning and end of the project period. These data were used to help document the process of implementing public engagement projects in each state. Initial codes used to analyze the focus group and interview data were derived from the evaluation questions. Additional codes emerged using the constant comparative technique [27] with the aid of the Atlas.ti qualitative analysis software program. Multiple coders reviewed the data and periodically met to resolve differences in code interpretation. This approach of comparing data and reaching consensus is part of Consensual Qualitative Research (CQR) and is consistent with the constant comparative technique. [28]
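As one illustration of the matching step described above, the sketch below shows how pre- and post-meeting survey records might be joined on the respondent code so that before and after measures can be compared per individual. The file names and column names are hypothetical and do not reflect the project's actual data files or instruments.

```python
# Hypothetical sketch: joining pre- and post-meeting survey records on the
# respondent code so before/after measures can be compared per individual.
# File names and columns (code, knowledge, value_rating) are assumed for illustration.
import pandas as pd

pre = pd.read_csv("pre_survey.csv")    # columns: code, knowledge, value_rating, ...
post = pd.read_csv("post_survey.csv")  # same measures collected after the meeting

# Keep only respondents who completed both surveys; suffix each measure.
matched = pre.merge(post, on="code", suffixes=("_pre", "_post"))
matched["knowledge_change"] = matched["knowledge_post"] - matched["knowledge_pre"]

print(f"{len(matched)} of {len(pre)} pre-meeting surveys matched to a post-meeting survey")
print(matched[["knowledge_pre", "knowledge_post", "knowledge_change"]].describe())
```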
Evaluation Results
A comprehensive review of the evaluation results is beyond the scope of this paper; however, we will highlight the major findings. (The full evaluation reports can be found on the University of Nebraska Public Policy Center website.[29] )
1. How successful was each project in attracting participation by sufficient numbers of citizens with a broad diversity of perspectives?
The six states were successful in engaging sufficient numbers of citizens in dialogue about pandemic influenza policy issues; however, most states did not reach the goal of attracting 100 participants per meeting. Projects were successful in attracting a diversity of citizens to deliberations. Demographic characteristics of participants did not always match the characteristics of the broader communities within which the meetings were held, but in some cases this was intentional. For example, in Washington there was a concerted effort to partner with community groups who could reach out to specific minority populations. In several states the focus was on attracting certain sectors or groups within their communities rather than convening a representative sample; in Nebraska, for example, the focus was on Native Americans/American Indians. Males were underrepresented across all states and older persons tended to be overrepresented. Most of the citizen meetings were representative of the broader community with respect to race and ethnicity; for meeting locations that were not representative, minority populations tended to be overrepresented. Participants also reflected a diversity of education levels, income levels, and household composition (whether participants had children living at home). At all locations and across states, citizens, on average, agreed with the statement “Participants at this meeting represented a broad diversity of perspectives.” (See Figure 1.)
Figure 1. Perceptions of diversity by state (citizens).
2. How successful was the process in ensuring a sufficient level of citizen knowledge about pandemic influenza policy so they could engage in informed discussions?
For the most part, projects were successful in increasing the knowledge of citizens so they could engage in informed discussions about pandemic influenza. Knowledge increased in all states; however, the change was statistically significant in only four of the states. Citizens generally believed they had enough knowledge to hold well informed opinions about decisions related to pandemic influenza. Also, contrary to expectations, the processes across projects did not significantly level the playing field in terms of knowledge; participants were as varied in their level of knowledge at the end of the process as they were when they walked in the door. (See Table 1.)
Table 1. Participant knowledge by state.
3. Did the process result in a balanced, honest, and reasoned discussion of the issues and what would have improved the process?
Participants in the public engagement processes generally thought the deliberative processes were high quality. Participants believed the discussions were fair to all participants, individuals were comfortable talking in the discussion, the process helped them better understand the types of trade-offs involved in policy decisions, and the process produced independent information and resulted in a valuable outcome (see Table 2).
Table 2. Perceptions of process by state (citizens).
4. How did the process affect citizen perceptions about pandemic influenza policy options and values underlying those goals or options?
The projects were generally successful in influencing opinions about social values and policy options related to pandemic influenza. Citizen post-test ratings of the importance of social values were significantly different from pre-test scores. This result indicates that, overall, as part of the deliberative processes conducted in each state, citizens changed their opinions about social values after being exposed to an educational presentation and discussing policy options. This result is important because it demonstrates that deliberative processes provide a different quality of input than surveys or polls.
5. Did the process affect citizen trust in government and support for policy decisions?
Citizens did not significantly change their trust in various levels of government as a result of the process. However, participants tended to believe their input would be used by decision makers. Stakeholders and citizens expressed hope in interviews and focus groups that decision makers would use the information offered at the events when making policy level decisions (see Table 3). There was no single expectation about how the information would be used, but many participants wanted to receive some sort of feedback from the project sponsors with that information. The presence of a decision maker at citizen events seemed to be proof to many that the information generated at the event was considered important by someone. Even when citizen and stakeholder survey ratings of trust in officials were low, their comments in interviews about the official present at the event, or the office that person represented, were positive.
Table 3. Perceptions of process by state (citizens).
6. Did the process empower citizens to participate effectively in policymaking work?
To some extent the deliberative processes empowered citizens to participate effectively in public decision-making work. Citizens from all states reported in interviews and focus groups that they felt empowered and heard at the deliberation events. They were unsure of the impact their participation would have on decisions, but in almost every instance held out hope that the results of the deliberation would be considered when decisions were made. Almost all of the citizens interviewed enjoyed the deliberation events and appreciated the organization and facilitation. The seriousness of the event along with the presence of public officials led citizens to conclude their input would be taken into consideration, which was empowering. In one state, however, citizens perceived a public official as treating the event “casually,” which left them with a feeling that their input was not important. Conversely, in several states a public official traveled a great distance to attend and stayed for the entire event, which was noted by citizens as a sign their work was important.
Many of the citizens made comments about being empowered to serve as a conduit of information for their peers as a result of participating in the deliberative events. They may not have agreed with other discussants or with recommendations resulting from the event, but they generally believed they were better equipped to relay information to friends, family, neighborhoods or organizations as a result of participating in discussions. Empowerment to participate in public decision-making work seemed to emanate from different aspects of the events. For example, Nebraska tribal participants commented on the empowerment value of the information received at citizen gatherings and the value of the discussions at the stakeholder gathering. Citizens generally reported in interviews and focus groups they would consider attending another deliberation event on other topics as a result of their experience with this one.
7. How did decision makers use citizen information?
The state projects had some success in informing and assisting state and local decision makers involved in pending policy decisions related to pandemic influenza. Given the limited time period to assess this aspect, it is unclear how these deliberative processes will impact long-term decisions. Interviews with state level officials engaged in public health policy decisions revealed varying levels of immediate project impact on decision makers. Generally, the largest impact was personal and related to decision maker attendance at the event rather than to the upward movement of a document or set of recommendations resulting from the event. In the limited time frame of the evaluation, states were still preparing final reports from the project and were not able to point to official documentation that reflected incorporation of citizen input into official state plans for pandemic preparation or response. This does not, however, tell the full story of how policy maker decisions were impacted. For example, one policy maker talked about the very real decisions that had to be made when the H1N1 outbreak occurred in the middle of the project; she said it was valuable to hear “real people wrestle with these issues while I was wrestling with it. It gave form and substance to conversations we need to have.” This sentiment was echoed by policy makers from every project who attended the project-sponsored deliberations. This influence was translated into operational decisions at the policy level that were not scripted by planning documents.
8. How well did the process increase state and local capacity to engage the public on policy choices?
There appeared to be some increase in state and local capacity to effectively engage the public in policy choices. The level of expertise in the public deliberation model envisioned by the CDC varied across the states receiving the cooperative agreement for this project. The project proposals contained a mix of traditional and innovative public information and engagement models. All jurisdictions receiving the awards were committed to engaging the public, but state project directors reported challenges reconciling their project designs with federal expectations to use a specific deliberative process with federal contractors as facilitation experts rather than the locally trusted contractors envisioned within their project proposals.
The states with prior experience using the model had less difficulty organizing and carrying out their projects than the states that had not been exposed to it prior to receiving funding via the cooperative agreement. All state project leads reported a temporary increase in capacity with the infusion of funds to support public engagement efforts. Although all states recognized value in engaging citizens and extracting focused input on issues, the time and cost of obtaining input using the deliberative model was perceived as prohibitive and not sustainable without additional funding to bolster capacity on an ongoing basis.
Lessons Learned
Many of the lessons learned from past public engagement projects have been associated with implementation of a process with citizens, stakeholders, and policy makers. Evaluation lessons have resulted in recommendations to involve evaluators early in the process, create a shared understanding of the importance of evaluation, clearly document the process to help explain evaluation results, and involve policy makers early to track the impact of the public engagement process. The cross-site evaluation for the pandemic influenza demonstration projects yielded four similar lessons learned that inform the role and function of evaluation and evaluators in multi-site public engagement projects.
1. Communicate cross-site or national evaluation expectations to project designers prior to their submission of project proposals.
State proposals for the pandemic influenza demonstration project included several types of public engagement models. Each project addressed policy issues important to the state or local organizers related to planning for pandemic influenza, but each varied in the approach taken to engage the public. The cross-site evaluation was designed to answer broad questions to assess impact across all of the projects. The states submitting project proposals were not aware of the cross-site evaluation goals when they designed their projects, so many had included evaluation components of local interest. Once awarded, states were told they were expected to use a single evaluation contractor to ensure cross-site evaluation needs were met. Although project sites were interested in using cross-site tools, they had to rethink their timelines and plans to incorporate them. The local/state partners who were testing innovative public engagement models were asked to incorporate the cross-site tools even though those tools were designed with the assumption that engagement would be in person rather than online or via other mediums. We believe the local/state partners would have been more accommodating of the cross-site evaluation if they had been able to contemplate how it fit when they were designing their project applications. Setting the expectation of participation in cross-site evaluation activities early helps project planners incorporate evaluation components in their design.
2. Create an expectation that cross-site evaluators will provide technical assistance to local/state projects to ensure local evaluations are meaningful and compatible with cross-site evaluation needs.
Traditional evaluation usually means a neutral entity observes, collects data, and provides feedback to project organizers and sponsors about process and outcomes. In the pandemic influenza demonstration project, the evaluation could have been strengthened if the cross-site evaluators’ role had been enhanced to include provision of technical assistance for local/state projects as they developed local evaluation questions. The cross-site material was valuable, but in some cases not as meaningful to local/state policy makers as it could have been. The cross-site evaluators offered to add questions or data points to the instruments, but local teams were left with the responsibility of identifying the type of data they desired. In retrospect, this customization could have been stronger if cross-site evaluation team members had been able to provide more in-depth technical assistance to the project sites as they considered the process and outcome measures that were meaningful to their policy makers, as well as how the cross-site evaluation results could be used to strengthen their projects. The request for proposals for the overall demonstration project did not include a requirement for local evaluation personnel to unburden local/state projects by providing evaluation for them. However, the lesson learned was that cross-site evaluation would be more locally meaningful and effective if the role of the evaluator were expanded to include provision of technical assistance to ensure local needs are adequately addressed.
3. Site visits by cross-site evaluators would increase applicability of results for local/state projects.
The pandemic influenza demonstration project began with a lessons learned conference that brought successful state project applicants together with organizers of previous public engagement efforts so they could benefit from the experience of others. Cross-site evaluators were introduced to state project personnel at this forum. This was a good beginning, but in the future we recommend following up with an in-person site visit as soon as possible at the beginning of the project. Although telephone contact was helpful, we believe cross-site evaluation expectations and adaptations could have been made more meaningful to local/state projects if on-site consultation had been built into the overall design and expectations of evaluators. Early on-site consultation provides an opportunity for evaluators to communicate cross-site evaluation expectations, answer questions about the evaluation, and begin the process of assisting projects with identification of local evaluation needs. This is recommended both where technical assistance is provided and where cross-site evaluation protocols are expected to be carried out by local organizers. On-site consultation would also be beneficial at the data collection stage and at the end, when results are being interpreted. Increased involvement of local/state project personnel in interpreting the results of cross-site and site-specific data strengthens the applicability of findings and is consistent with the participatory model of evaluation.
4. Balance flexible evaluation design with tools that capture cross-site data effectively.
This evaluation required a flexible balance between local and federal expectations and tools. The role of the cross-site evaluator is to err on the side of comparison across sites rather than customization to meet local needs. However, capturing the effectiveness of different models of public engagement required flexibility on the part of evaluators. For example, capturing change in participant knowledge requires evaluators to understand the knowledge targets of project organizers. Cross-site comparison of similar knowledge questions only works when the same material is presented or made available to participants at each site. The variability in projects, presenters, presentation medium, and style could only be documented, not controlled. Change in knowledge as a cross-site question may therefore be more effectively assessed by incorporating local knowledge targets than by predetermining general knowledge questions.
The lessons learned from this evaluation can be of use to government planners as they consider how to structure cross-site evaluation components in future projects but they are also applicable to other planners and practitioners who want to incorporate evaluation in their work. For example, local public health agents may wish to use public engagement processes in neighborhoods related to a specific health issue and the methods they use may differ in each location to accommodate the culture of the area. Evaluation of the engagement processes across neighborhoods would be akin to the project we document here across states and the lessons learned could be of benefit to the public health community.
References
- Cousins JB, Earl LM, eds. Participatory Evaluation in Education: Studies in Evaluation Use and Organizational Learning. Washington, DC: The Falmer Press; 1995. ↩
- Cousins J, Whitmore E. Framing participatory evaluation. In: Whitmore E, ed. Understanding and Practicing Participatory Evaluation. San Francisco: Jossey-Bass; 1998. New Directions for Evaluation. 1998; 80:5-24. ↩
- Gregory A. Problematizing participation: a critical review of approaches to participation in evaluation theory. Evaluation. 2000; 6:179-199. ↩
- Greene JC. Stakeholder participation and utilization in program evaluation. Evaluation Review. 1988; 12 (2):91-116. ↩
- Mark MM, Shotland RL. Stakeholder-based evaluation and value judgments. Evaluation Review. 1985; 9:605-626. ↩
- Chambers S. Deliberative democratic theory. Annual Review of Political Science. 2003; 6:307-326. ↩
- Fishkin J. The voice of the people. New Haven, Conn: Yale University Press; 1995. ↩
- Halvorsen KE. Assessing public participation techniques for comfort, convenience, satisfaction, and deliberation. Environmental Management. 2001; 28(2):179-186. ↩
- Carnes SA, Schweitzer M, Peelle EB, Wolfe AK, Munro JF. Measuring the success of public participation on environmental restoration and waste management activities in the US Department of Energy. Technology in Society. 1998; 20:385-406. ↩
- Guild W, Guild R, Thompson F. 21st century polling. Public Power Magazine. 2004; March-April:28-35. ↩
- Goidel RK, Freeman CM, Procopio S, Zewe CF. Who participates in the “Public Square” and does it matter? Public Opinion Quarterly. 2008; 72(4):792-803. ↩
- Shindler B, Neburka J. Public participation in forest planning: attributes of success. Journal of Forestry. 1997; 95(1):17-19. ↩
- Webler T, Tuler S, Krueger R. What is a good public participation process? Five perspectives from the public. Environmental Management. 2001; 27(3):435-450. ↩
- Matthews D. For Communities to Work. Dayton, Ohio: Kettering Foundation Press; 2002. ↩
- Stromer-Galley J. Decoding deliberation. Paper presented at the Second Conference on Online Deliberation: Design, Research, and Practice, May 20-25, 2005; Stanford, California. ↩
- Delli Carpini MX, Cook FL, Jacobs LR. Public deliberation, discursive participation, and citizen engagement: a review of the empirical literature. Annual Review of Political Science. 2004; 7:315-344. ↩
- Gastil J. Democracy in Small Groups. Gabriola Island, BC: New Society Publishers; 1993. ↩
- Muhlberger P. Defining and measuring deliberative participation and potential: a theoretical analysis and operationalization. Paper presented at the International Society of Political Psychology Twenty-Third Annual Scientific Meeting, July 1-4, 2000; Seattle, Washington. ↩
- Min J. Online vs. face-to-face deliberation: effects on civic engagement. Journal of Computer-Mediated Communication. 2007; 12(4):article 11. ↩
- Beierle TC, Cayford J. Democracy in Practice: Public Participation in Environmental Decisions. Washington, DC: Resources for the Future; 2002. ↩
- Arnstein SR. A ladder of citizen participation. Journal of the American Institute of Planners. 1969; 35(4):216-224. ↩
- Carnes SA, Schweitzer M, Peelle EB, Wolfe AK, Munro JF. Performance Measures for Evaluating Public Participation Activities in DOE’s Office of Environmental Management (ORNL-6905). Oak Ridge, Tenn: Oak Ridge National Laboratory; 1996. ↩
- Fishkin J. Democracy and Deliberation. New Haven, Conn: Yale University Press; 1991. ↩
- Gastil J, Adams GE. Understanding Public Deliberation. Albuquerque, NM: Institute for Public Policy; 1995. ↩
- Gastil J, Dillard JP. Increasing political sophistication through public deliberation. Political Communication. 1999; 13:3-23. ↩
- Barabas J. How deliberation affects policy opinions. American Political Science Review. 2004; 98(4):687-701. ↩
- Glaser B, Strauss A. The Discovery of Grounded Theory. Chicago: Aldine Publishing Company; 1967. ↩
- Hill CE, Thompson BJ, Williams EN. A guide to conducting consensual qualitative research. The Counseling Psychologist. 1997; 25(4):517-572. ↩
- Public Policy Center, University of Nebraska. Evaluation of Public Engagement Demonstration Projects for Pandemic Influenza. Lincoln, Neb: Public Policy Center, University of Nebraska; May 31, 2010. Available at: http://ppc.unl.edu/wp-content/uploads/2010/05/P5-Report-FINAL.pdf. Accessed March 5, 2014. ↩
Copyright: © 2014 Denise Bulling and Mark DeKraai. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the authors, with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.