{"id":3375,"date":"2014-03-08T23:15:27","date_gmt":"2014-03-09T04:15:27","guid":{"rendered":"http:\/\/pmedicine.org\/journal\/?p=3375"},"modified":"2023-02-20T11:02:32","modified_gmt":"2023-02-20T16:02:32","slug":"evaluation-of-a-multistate-public-engagement-project-on-pandemic-influenza","status":"publish","type":"post","link":"https:\/\/participatorymedicine.org\/journal\/evidence\/research\/2014\/03\/08\/evaluation-of-a-multistate-public-engagement-project-on-pandemic-influenza\/","title":{"rendered":"Evaluation of a Multistate Public Engagement Project on Pandemic Influenza"},"content":{"rendered":"
Summary<\/em><\/strong>: Program evaluation of public engagement processes is important in understanding how well these processes work and in building a knowledge base to improve future engagement efforts. This program evaluation examined a CDC initiative in six states to engage the public about pandemic influenza. Evaluation results indicated the six states were successful in engaging citizens in their processes, participants became more knowledgeable about the topic, citizens believed the process worked well, and projects were successful in influencing opinions about social values. Lessons learned from the evaluation included the importance of communicating evaluation expectations early in the process; creating a culture of evaluation through technical assistance; ensuring resources are available for on-site evaluation collaboration; and balancing the need for cross-site data with the interests of local projects to capture evaluation data relevant to each unique project. We used a participatory model to evaluate the project (see generally <\/a>[1<\/a>], <\/a>[2<\/a>], <\/a>[3<\/a>]). We chose this model because the participatory approach ensures that the needs of project sponsors and the local project implementers are incorporated as the project unfolds over time. This approach is particularly useful for complex projects that are collaborative in nature. <\/a>[4<\/a>]<\/a>[5<\/a>] This project included collaborations between the funder (CDC) and each of the implementation teams at the state level along with project facilitators and the evaluation team. We believe that communication between the evaluation team and the funder should be clear, consistent, and collaborative over the life of the project. The evaluation team was available for planned and impromptu discussions with the state implementers, facilitators, and the funder throughout the project, which enhanced the quality of the final product. <\/p>\n We began our evaluation process by reviewing the original request for proposals and talking with the project sponsors to better understand the purpose and desired outcomes of the evaluation. From these discussions emerged key questions of interest to the CDC: <\/p>\n 1. How successful was each project in attracting participation by sufficient numbers of citizens with a broad diversity of perspectives?<\/strong> Project sponsors and facilitators were interested in recruiting a diversity of citizens representing multiple perspectives. While an exact replication of demographics within each community was not intended, it was a goal to attract citizens from different racial\/ethnic groups, income levels, education backgrounds, ages, genders, and professions. As a normative matter, commentators have asserted that recruiting a representative cross-section of the public to participate in deliberative forums is an ideal goal. Such representativeness is important to ensure all members of a community potentially affected by the policy matter at issue are provided a voice in the discussion. <\/a>[6<\/a>]<\/a>[7<\/a>] Practitioners have also found that participants derive greater satisfaction and value from participatory processes in which a wide diversity of viewpoints is shared. <\/a>[8<\/a>] Additionally, government sponsors of participatory processes benefit from listening to and receiving a broad \u2013 not narrow or selective \u2013 array of input. <\/a>[9<\/a>]<\/p>\n Recruitment of a representative cross-section can be challenging. 
Often, participatory forums can be dominated by special interest groups or others who represent a narrow personal or professional interest in a policy matter, rather than the interests of the community as a whole.<\/a>[10<\/a>] Research has also shown that some participatory forums tend to disproportionately attract individuals who are white, female, high-income, older, and highly educated. <\/a>[11<\/a>] Strategies to obtain more representative participants might involve using aggressive outreach and promotion efforts or oversampling techniques. Additionally, the use of a financial incentive can offset costs incurred through travel, daycare, or taking a day off from work, and attract individuals to participate in forums who are not motivated by personal or professional interests. <\/a>[7<\/a>]<\/p>\n 2. How successful was the process in ensuring a sufficient level of citizen knowledge about pandemic influenza policy so they could engage in informed discussions?<\/strong> The evaluation allows us to test assumptions for each state, including (1) the degree to which the process significantly increases the relevant knowledge of participants; (2) whether participants believe they have sufficient knowledge to engage in informed discussion and make reasoned recommendations; and (3) whether the process produces some equalization of knowledge among participants; in other words, while participants are likely to have varying levels of knowledge going into the deliberation, the process may close this knowledge gap, resulting in a more equitable discussion of the issues. Through the evaluation, we also examine whether the information was successfully conveyed to specific populations based on demographics. <\/p>\n 3. Did the process result in a balanced, honest, and reasoned discussion of the issues and what would have improved the process? <\/strong> It is assumed that a well-facilitated meeting will result in a rich discussion of the issues in which multiple perspectives are considered and well-reasoned decisions or recommendations are made. To achieve this desired outcome, there are underlying assumptions about the process that can be tested through the evaluation, including (1) whether the process was perceived to be fair by participants, (2) whether individual participants felt comfortable sharing their perspectives, (3) whether discussions were dominated by select individuals or groups, (4) how well discussions helped participants understand the trade-offs involved in policy decisions, (5) whether participants were satisfied with the outcome of the process, (6) the degree to which the process was perceived to be free from bias, and (7) whether all important points and perspectives were voiced.<\/p>\n 4. How did the process affect citizen perceptions about pandemic influenza policy options and values underlying those goals or options? <\/strong> 5. Did the process affect citizen trust in government and support for policy decisions?<\/strong> 6. Did the process empower citizens to participate effectively in policymaking work?<\/strong> 7. How did decision makers use citizen information?<\/strong> Impact can be measured in a number of ways. The extent to which a participatory process directly influences policy has been measured through policymaker perceptions of how public input improves or informs policy decisions. 
<\/a>[22<\/a>] Additionally, changes in citizen trust and confidence in government, or perceptions of government responsiveness, can indicate a positive impact on participant attitudes towards government. <\/a>[11<\/a>] Commentators have also argued that participating in robust, deliberative experiences about policy can increase political sophistication among participants, <\/a>[23<\/a>]<\/a>[24<\/a>] and research has shown such an increase can indeed occur after citizens engage in deliberative forums, <\/a>[25<\/a>] or that participants\u2019 policy opinions change in other ways. <\/a>[26<\/a>] <\/p>\n Once recommendations from the citizen engagement efforts are communicated, there is an assumption (or expectation) that decision makers will carefully consider this information as they make policy. Through the evaluation, we hoped to understand how information from the public engagement process was communicated to decision makers, how they considered the citizen and stakeholder input in relation to various other information sources, and the extent to which public engagement input impacted policy decisions. Specifically, we planned to assess (1) how well decision makers understood the process, (2) whether decision makers read the report or outputs from the process, (3) whether public input from the process was part of the information considered in developing the policy, (4) whether public input became part of the evidence or justification for or against certain alternatives, and (5) whether public input affected the policy in a clearly defined way. We also planned to explore the expectations of decision makers regarding the public engagement process and the type of information resulting from the process that would be useful in making policy decisions.<\/p>\n 8. How well did the process increase state and local capacity to engage the public on policy choices?<\/strong> One of the goals of the project was to increase the capacity of states and local jurisdictions to involve the public in decision making on an ongoing basis and to sustain this capacity after the project. The CDC funded technical assistance to support each state in designing public engagement processes, identifying and recruiting participants, forming teams to identify public policy objectives, developing agendas, incentivizing participation in public engagement processes, facilitating meetings, incorporating citizen input into the decision making process, and communicating results to citizens. <\/p>\n We used a mixed-methods evaluation design including both quantitative and qualitative information. The protocol was submitted to the University of Nebraska Institutional Review Board and determined to be program evaluation and not human subject research. 
There were five major components to the evaluation methodology: (1) a pre-post survey conducted at each citizen and stakeholder meeting to assess change in knowledge, opinions about social values, and trust in government; (2) a survey conducted after each public engagement meeting to assess perceptions about the process; (3) focus groups and individual interviews conducted with randomly selected participants immediately after the meetings to assess empowerment and perceptions about the process; (4) key informant interviews with state officials, facilitation contractors, and CDC representatives to assess changes in capacity for engaging the public in policy decisions and how the public input was used in policy development (after meetings had all been conducted); and (5) a review of documents in each state to assess the overall process and how information was conveyed to policy makers.<\/p>\n All surveys and interview questions went through a rigorous process of cognitive testing for comprehension and ease of administration. Responses for survey items were randomly ordered where possible to account for selection order bias; three versions of each survey were produced. A coding system was developed for pre-post surveys to ensure before and after measures could be matched by individual respondent. Qualitative data for this evaluation were drawn from 69 interviews totaling over 24 hours of audio; five focus groups held after public engagement events; meeting summaries and notes from all six project sites; notes from contractor conference calls; evaluator observations of public engagement events; and material from two lessons-learned meetings held at the beginning and end of the project period. These data were used to help document the process of implementing public engagement projects by each state. Initial codes used to analyze the focus group and interview data were derived from evaluation questions. Additional codes emerged using the constant comparative technique <\/a>[27<\/a>] with the aid of the Atlas.ti qualitative analysis software program. Multiple coders reviewed the data and periodically met to resolve differences in code interpretation. This approach of comparing data and reaching consensus is part of Consensual Qualitative Research (CQR) and is consistent with the constant comparative technique (Hill, Thompson & Williams, 1997).<\/a>[28<\/a>]<\/p>\n A comprehensive review of the evaluation results is beyond the scope of this paper; however, we will highlight the major findings. (The full evaluation reports can be found on the University of Nebraska Public Policy Center website.<\/a>[29<\/a>])<\/p>\n 1. How successful was each project in attracting participation by sufficient numbers of citizens with a broad diversity of perspectives?<\/strong> Figure 1.<\/strong> Perceptions of diversity by state (citizens). 2. How successful was the process in ensuring a sufficient level of citizen knowledge about pandemic influenza policy so they could engage in informed discussions?<\/strong> Table 1. Participant knowledge by state.<\/strong> 3. Did the process result in a balanced, honest, and reasoned discussion of the issues and what would have improved the process?<\/strong> Table 2. Perceptions of process by state (citizens).<\/strong> 4. How did the process affect citizen perceptions about pandemic influenza policy options and values underlying those goals or options?<\/strong> 5. Did the process affect citizen trust in government and support for policy decisions?<\/strong> Table 3. 
Perceptions of process by state (citizens).<\/strong> 6. Did the process empower citizens to participate effectively in policymaking work?<\/strong> Many of the citizens made comments about being empowered to serve as a conduit of information for their peers as a result of participating in the deliberative events. They may not have agreed with other discussants or with recommendations resulting from the event, but they generally believed they were better equipped to relay information to friends, family, neighborhoods, or organizations as a result of participating in discussions. Empowerment to participate in public decision-making work seemed to emanate from different aspects of the events. For example, Nebraska tribal participants commented on the empowerment value of the information received at citizen gatherings and the value of the discussions at the stakeholder gathering. Citizens generally reported in interviews and focus groups that they would consider attending another deliberation event on other topics as a result of their experience with this one. <\/p>\n 7. How did decision makers use citizen information?<\/strong> 8. How well did the process increase state and local capacity to engage the public on policy choices?<\/strong> The states with prior experience using the model had less difficulty organizing and carrying out their projects than the states that had not been exposed to it prior to receiving funding via the cooperative agreement. All state project leads reported a temporary increase in capacity with the infusion of funds to support public engagement efforts. Although all states recognized value in engaging citizens and extracting focused input on issues, the time and cost of obtaining input using the deliberative model were perceived as prohibitive and not sustainable without additional funding to bolster capacity on an ongoing basis. <\/p>\n Many of the lessons learned from past public engagement projects have been associated with implementation of a process with citizens, stakeholders, and policy makers. Evaluation lessons have resulted in recommendations to involve evaluators early in the process, create shared understanding of the importance of evaluation, clearly document the process to help explain evaluation results, and involve policy makers early to track the impact of the public engagement process. The cross-site evaluation for pandemic influenza demonstration projects yielded four similar lessons learned that inform the role and function of evaluation and evaluators in multi-site public engagement projects. <\/p>\n 1. Communicate cross-site or national evaluation expectations to project designers prior to their submission of project proposals.<\/strong> 2. Create an expectation that cross-site evaluators will provide technical assistance to local\/state projects to ensure local evaluations are meaningful and compatible with cross-site evaluation needs.<\/strong> 3. Site visits by cross-site evaluators would increase applicability of results for local\/state projects.<\/strong> 4. Balance flexible evaluation design with tools that capture cross-site data effectively.<\/strong> The lessons learned from this evaluation can be of use to government planners as they consider how to structure cross-site evaluation components in future projects, but they are also applicable to other planners and practitioners who want to incorporate evaluation in their work. 
For example, local public health agents may wish to use public engagement processes in neighborhoods related to a specific health issue and the methods they use may differ in each location to accommodate the culture of the area. Evaluation of the engagement processes across neighborhoods would be akin to the project we document here across states and the lessons learned could be of benefit to the public health community. <\/p>\n Copyright: <\/em><\/strong>\u00a9 2014 Denise Bulling and Mark DeKraai. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the authors, with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.<\/p>\n <\/p>\n","protected":false},"excerpt":{"rendered":" This study used a participatory model to evaluate six CDC-funded public-engagement initiatives pertaining to pandemic influenza. The authors describe the evaluation process and share lessons learned that may be useful in evaluating public engagement processes in general.<\/p>\n","protected":false},"author":314,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","jetpack_post_was_ever_published":false,"footnotes":""},"categories":[4,750],"tags":[129,37,77,102,767,765,201,617,766],"coauthors":[763,764],"class_list":["post-3375","post","type-post","status-publish","format-standard","hentry","category-research","category-vol-6-2014","tag-communication","tag-feature","tag-issue","tag-participatory-medicine","tag-participatory-model","tag-program-evaluation","tag-public-engagement","tag-public-health","tag-public-policy"],"yoast_head":"\n
\nKeywords<\/em><\/strong>: Public engagement, public health, communication, program evaluation, public policy, participatory model, participatory medicine.
\nCitation<\/em><\/strong>: Bulling D, DeKraai M. Evaluation of a multistate public engagement project on pandemic influenza. J Participat Med. 2014 Mar 8; 6:e5.
\nPublished<\/em><\/strong>: March 8, 2014.
\nCompeting Interests<\/em><\/strong>: The authors have declared that no competing interests exist.<\/p>\n
\nThe US Department of Health and Human Services, Centers for Disease Control and Prevention (CDC) funded public engagement initiatives in six states (Minnesota, Washington, Ohio, Massachusetts, Hawaii, and Nebraska). The purpose of these initiatives was to include citizens in values-based public policy development pertaining to pandemic influenza. In this paper, we describe the evaluation of this multistate project, and based on our experience in designing and implementing this evaluation, we share lessons learned that may be useful in evaluating public engagement processes in general.<\/p>\nOverview of the Evaluation<\/h3>\n
Evaluation Questions<\/h3>\n
\nA rule of thumb for the CDC was to attract 100 individuals to each state citizen meeting. This number was not based on any statistical model of representativeness; rather, project sponsors considered this level of participation reasonable for communicating to policy makers a broad involvement of citizens within each state. This level of participation would also allow process facilitators to structure meetings that included both small-group and large-group discussions. <\/p>\n
\nOne of the goals of the process was to ensure a sufficient level of participant knowledge so that participants could engage in informed dialogue about the issues. A process of education or an increase in knowledge among participants is implicit in an effective deliberative experience. Thus, an increase in knowledge among participants and their perceptions of the value of their discussion experience are measurable indicators of a successful deliberative discussion. <\/a>[12<\/a>]<\/a>[13<\/a>]<\/p>\n
\nGenerally speaking, a deliberative experience is one in which participants carefully consider the pros and cons of a policy issue in a reasoned, informed, and balanced discussion. <\/a>[14<\/a>]<\/a>[15<\/a>] A good deliberative experience involves listening to all sides of a debate, analysis of relevant information or evidence, and a discussion environment free of bias, peer pressure, or over-reliance on rhetoric. <\/a>[7<\/a>]<\/a>[16<\/a>]<\/a>[17<\/a>] A positive deliberative process may thus amount to a successful problem-solving experience, in which a solution to a policy question is arrived at through a process of reasoned and informed discussion. <\/a>[18<\/a>] Other components of deliberative quality include a respectful discussion tone, transparency and clarity of meeting objectives and rules, equal and fair treatment among participants, and comfort with the meeting\u2019s physical location and environment. <\/a>[8<\/a>] Characteristics of a successful deliberation, such as exposure to different viewpoints, factual learning, and careful consideration of issues, are likely to result in a shift in opinions or attitudes about the policy question at issue.<\/p>\n
\nOne of the assumptions of public engagement and deliberative processes is that through the process of understanding the issues, sharing perspectives, and gaining an appreciation of the trade-offs involved in policy decisions, participants change their opinions about the policies that should be implemented. If this were not the case, public input could be attained much more easily and less expensively through public polling. This deliberative aspect is considered to be value-added because outputs will be more thoughtful and well reasoned. The evaluation could test this assumption by examining changes in perspectives about vaccine goals and values relevant to those goals. In addition, we hypothesize that because participants have a chance to obtain similar knowledge about pandemic influenza and develop a greater depth of understanding about the policy options, their perspectives will be more similar after participation than before. In other words, the deliberative process will result in a convergence of beliefs among participants. We were also interested in whether there were differences among demographic groups in perspectives about policy choices. <\/p>\n
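\nThe convergence hypothesis described above lends itself to a straightforward pre\/post comparison of how widely participants\u2019 ratings are spread. The sketch below is illustrative only: the ratings, the 1-to-7 scale, and the single values item are hypothetical assumptions, not data or items from the project\u2019s instruments.<\/p>\n

```python
# Illustrative sketch only: the ratings, scale, and item below are assumptions,
# not data from the project's pre-post surveys.
from statistics import stdev
from scipy.stats import levene

# Hypothetical pre- and post-deliberation ratings (1-7 scale) of one social
# value item, matched by respondent.
pre = [2, 3, 7, 1, 6, 4, 7, 2]
post = [4, 4, 6, 3, 5, 5, 6, 4]

# Convergence would appear as a narrower spread of opinions after deliberation.
print("SD before:", round(stdev(pre), 2), "SD after:", round(stdev(post), 2))

# Levene's test for equality of variances; a lower post-deliberation variance
# with a small p-value would be consistent with a convergence of beliefs.
stat, p = levene(pre, post, center="median")
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
```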
\nThe primary goal for this public engagement process was to produce citizen and stakeholder perspectives for state-level policy makers to consider as they grapple with important decisions. The evaluation also tested whether the process had an impact on participant beliefs in other areas: specifically, whether participants had greater trust in government and willingness to support policy decisions by public officials who considered their input. The evaluation tested this assumption by assessing trust in various levels of government before and after the process.<\/p>\n
\nAnother by-product of public engagement is that citizens might feel more empowered by participating in public dialogue about important issues and might increase their involvement in activities designed to improve society or their community (e.g., voting, volunteering, lobbying elected officials). <\/a>[19<\/a>] The evaluation tested this assumption by assessing changes in participants\u2019 planned activities, such as involvement in civic activities and in public policy generally. <\/p>\n
\nA key indicator of the success of a participatory process is the extent to which the process results in any significant policy impact. Identifying what impacts equate with success is, however, a subjective exercise. Arguably, the optimal goal of a participatory process is for the public to have a direct opportunity to make policy that reflects their preferences and priorities. However, successful impact can have other manifestations. Public participation can inform or improve decision-making; it can connect the public with each other and policymakers, build trust in government, provide opportunities for public education about policy issues, and foster healthy discourse and discussion in general. <\/a>[20<\/a>] In a minority of cases, policymakers can have less virtuous objectives behind sponsoring participatory processes, such as placating select interests, managing public impressions, or generating public acceptance of a pre-determined policy. <\/a>[21<\/a>]<\/p>\nEvaluation Methods<\/h3>\n
Evaluation Results<\/h3>\n
\nThe six states were successful in convening sufficient numbers of citizens to engage in dialogue about pandemic influenza policy issues; however, most states did not reach the goal of attracting 100 participants to meetings. Projects were successful in attracting a diversity of citizens to deliberations. Demographic characteristics of participants did not always match the characteristics of the broader communities within which the meetings were held, but in some cases this was intentional. For example, in Washington there was a concerted effort to partner with community groups who could reach out to specific minority populations. In several states the focus was on attracting certain sectors or groups within their communities rather than convening a representative sample; in Nebraska the focus was on Native Americans\/American Indians. Males were underrepresented across all states, and older persons tended to be overrepresented. Most of the citizen meetings were representative of the broader community with respect to race and ethnicity; for meeting locations that were not representative, minority populations tended to be overrepresented. Participants also varied in education levels, income levels, and whether they had children living at home. At all locations and across states, citizens, on average, agreed with the statement \u201cParticipants at this meeting represented a broad diversity of perspectives.\u201d (See Figure 1.)<\/p>\n
\nFor the most part, projects were successful in increasing the knowledge of citizens so they could engage in informed discussions about pandemic influenza. Knowledge increased in all states; however, the change was statistically significant in only four of the states. Citizens generally believed they had enough knowledge to have well-informed opinions about decisions related to pandemic influenza. Also, contrary to expectations, the processes across projects did not significantly level the playing field in terms of knowledge; participants were as varied in their level of knowledge at the end of the process as they were when they walked in the door. (See Table 1.)<\/p>\n
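\nAs an illustration of how the knowledge findings above might be computed from matched pre-post surveys, the sketch below pairs hypothetical pre and post knowledge scores using the kind of respondent code described in the methods, tests whether the mean change is significant, and compares the spread of scores before and after. The respondent codes, column names, and scores are assumptions for illustration, not the project\u2019s actual data or instruments.<\/p>\n

```python
# Illustrative sketch only: respondent codes, column names, and scores are
# assumptions, not the project's actual data.
import pandas as pd
from scipy import stats

# Hypothetical pre/post knowledge scores matched by respondent code.
pre = pd.DataFrame({"code": ["A1", "A2", "A3", "A4", "A5", "A6"],
                    "knowledge_pre": [3, 5, 2, 6, 4, 3]})
post = pd.DataFrame({"code": ["A1", "A2", "A3", "A4", "A5", "A6"],
                     "knowledge_post": [5, 6, 4, 6, 6, 5]})
matched = pre.merge(post, on="code")  # keep only respondents with both surveys

# Paired t-test on matched scores: did knowledge increase significantly?
t, p = stats.ttest_rel(matched["knowledge_post"], matched["knowledge_pre"])
change = (matched["knowledge_post"] - matched["knowledge_pre"]).mean()
print(f"mean change = {change:.2f}, t = {t:.2f}, p = {p:.3f}")

# A "leveling of the playing field" would show up as a smaller spread of
# scores after the meeting; here we simply compare standard deviations.
print("SD pre:", round(matched["knowledge_pre"].std(), 2),
      "SD post:", round(matched["knowledge_post"].std(), 2))
```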
\nParticipants in the public engagement processes generally thought the deliberative processes were of high quality. Participants believed the discussions were fair to all participants, individuals were comfortable talking in the discussion, the process helped them better understand the types of trade-offs involved in policy decisions, and the process produced independent information and resulted in a valuable outcome (see Table 2).<\/p>\n
\nThe projects were generally successful in influencing opinions about social values and policy options related to pandemic influenza. Citizens\u2019 post-test ratings of the importance of social values were significantly different from their pre-test ratings. This result indicates that, overall, as part of the deliberative processes conducted in each state, citizens changed their opinions about social values after being exposed to an educational presentation and discussing policy options. This result is important because it demonstrates that deliberative processes provide a different quality of input than surveys or polls. <\/p>\n
\nCitizens did not significantly change their trust in various levels of government as a result of the process. However, participants tended to believe their input would be used by decision makers. Stakeholders and citizens expressed hope in interviews and focus groups that decision makers would use the information offered at the events when making policy-level decisions (see Table 3). There was no single expectation about how the information would be used, but many participants wanted the project sponsors to report back to them with that information. The presence of a decision maker at citizen events seemed to be proof to many that the information generated at the event was considered important by someone. Even when citizens\u2019 and stakeholders\u2019 survey ratings of trust in officials were low, their comments in interviews about the office, or the person representing that office at the event, were positive.<\/p>\n
\nTo some extent the deliberative processes empowered citizens to participate effectively in public decision-making work. Citizens from all states reported in interviews and focus groups that they felt empowered and heard at the deliberation events. They were unsure of the impact their participation would have on decisions, but in almost every instance held out hope that the results of the deliberation would be considered when decisions were made. Almost all of the citizens interviewed enjoyed the deliberation events and appreciated the organization and facilitation. The seriousness of the event along with the presence of public officials led citizens to conclude their input would be taken into consideration, which was empowering. In one state, however, citizens perceived a public official as treating the event \u201ccasually,\u201d which left them with a feeling that their input was not important. Conversely, in several states a public official traveled a great distance to attend and stayed for the entire event, which was noted by citizens as a sign their work was important.<\/p>\n
\nThe state projects had some success in informing and assisting state and local decision makers involved in pending policy decisions related to pandemic influenza. Given the limited time period to assess this aspect, it is unclear how these deliberative processes will impact long-term decisions. Interviews with state level officials engaged in public health policy decisions revealed varying levels of immediate project impact with decision makers. Generally, the largest impact was personal and related to decision maker attendance at the event rather than from upward movement of a document or set of recommendations resulting from the event. In the limited time frame of the evaluation, states were still preparing final reports from the project and were not able to point to official documentation that reflected incorporation of citizen input in official state plans for pandemic preparation or response. This does not, however, tell the full story of how policy maker decisions were impacted. For example, one policy maker talked about the very real decisions that had to be made when the H1N1 outbreak occurred in the middle of the project; she said it was valuable to hear \u201creal people wrestle with these issues while I was wrestling with it. It gave form and substance to conversations we need to have.\u201d This sentiment was echoed by policy makers from every project who attended the project-sponsored deliberations. This influence was translated into operational decisions at the policy level that were not scripted by planning documents.<\/p>\n
\nThere appeared to be some increase in state and local capacity to effectively engage the public in policy choices. The level of expertise in the public deliberation model envisioned by the CDC varied across the states receiving the cooperative agreement for this project. The project proposals contained a mix of traditional and innovative public information and engagement models. All jurisdictions receiving the awards were committed to engaging the public, but state project directors reported challenges reconciling their project designs with federal expectations to use a specific deliberative process with federal contractors as facilitation experts rather than the locally trusted contractors envisioned within their project proposals. <\/p>\nLessons Learned<\/h3>\n
\nState proposals for the pandemic influenza demonstration project included several types of public engagement models. Each project addressed policy issues important to the state or local organizers related to planning for pandemic influenza, but each varied in the approach taken to engage the public. The cross-site evaluation was designed to answer broad questions to assess impact across all of the projects. The states submitting project proposals were not aware of the cross-site evaluation goals when they designed their projects, so many had included evaluation components of local interest. Once awarded, states were told they were expected to use a single evaluation contractor to ensure cross-site evaluation needs were met. Although project sites were interested in using cross-site tools, they had to rethink their timelines and plans to incorporate them. The local\/state partners who were testing innovative public engagement models were asked to incorporate the cross-site tools even though the tools were designed with the assumption that engagement would be in person rather than online or via other media. We believe the local\/state partners would have been more accommodating of the cross-site evaluation if they had been able to contemplate how it fit when they were designing their project applications. Setting the expectation of participation in cross-site evaluation activities early helps project planners incorporate evaluation components into their designs. <\/p>\n
\nTraditional evaluation usually means a neutral entity observes, collects data, and provides feedback to project organizers and sponsors about process and outcomes. In the pandemic influenza demonstration project, the evaluation could have been strengthened if the cross-site evaluators\u2019 role had been enhanced to include provision of technical assistance for local\/state projects as they developed local evaluation questions. The cross-site material was valuable, but in some cases not as meaningful to local\/state policy makers as it could have been. The cross-site evaluators offered to add questions or data points to the instruments, but local teams were left with the responsibility of identifying the type of data they desired. In retrospect, this customization could have been stronger if cross-site evaluation team members had been able to provide more in-depth technical assistance to the project sites as they considered the process and outcome measures that were meaningful to their policy makers, as well as how the cross-site evaluation results could be used to strengthen their projects. The request for proposals for the overall demonstration project did not require dedicated local evaluation personnel who could have unburdened local\/state projects by providing evaluation for them. However, the lesson learned was that cross-site evaluation would be more locally meaningful and effective if the role of the evaluator were expanded to include provision of technical assistance to ensure local needs are adequately addressed. <\/p>\n
\nThe pandemic influenza demonstration project began with a lessons-learned conference that brought successful state project applicants together with organizers of previous public engagement efforts so they could benefit from the experience of others. Cross-site evaluators were introduced to state project personnel at this forum. This was a good beginning, but in the future we recommend following up with an in-person site visit as early in the project as possible. Although telephone contact was helpful, we believe cross-site evaluation expectations and adaptations could have been made more meaningful to local\/state projects if on-site consultation were built into the overall design and expectations of evaluators. Early on-site consultation provides an opportunity for evaluators to communicate cross-site evaluation expectations, answer questions about the evaluation, and begin the process of assisting projects with identification of local evaluation needs. This is recommended both where technical assistance is provided and where cross-site evaluation protocols are expected to be carried out by local organizers. On-site consultation would also be beneficial at the data collection stage and at the end when results are being interpreted. Increased involvement of local\/state project personnel in interpreting cross-site and site-specific data strengthens the applicability of findings and is consistent with the participatory model of evaluation. <\/p>\n
\nThis evaluation required a flexible balance between local and federal expectations and tools. The role of the cross-site evaluator is to err on the side of comparison across sites rather than customizing to meet local needs. However, capturing the effectiveness of different models of public engagement required flexibility on the part of evaluators. For example, capturing change in knowledge of participants requires evaluators to understand the knowledge targets of project organizers. Cross-site comparison of similar knowledge questions only works when the same material is presented or made available to participants at each site. The variability in projects, presenters, and presentation medium and style could only be documented, not controlled. Change in knowledge, as a cross-site question, may be more effectively assessed by incorporating local knowledge targets rather than by predetermining general knowledge questions.<\/p>\nReferences<\/h3>\n
\n