
Abstract

Background and Objective: Priorities for improving health care include enhanced coordination of care and involving patients in the evaluation of services. To help meet these goals, our research purpose was to develop an instrument to survey patients about their perceptions of teamwork-related behaviors they encountered during an emergency department (ED) visit.
Methods: The authors conducted a three-phased mixed methods study for development of survey items over the period of March 2012 to August 2012. Our review of survey items included assessing feasibility, usability, and response process through (1) initial review of potential items, (2) a web-based questionnaire for health care providers and patient advocates, and (3) cognitive interviewing with patients.
Findings: Participants included 119 web-based survey respondents and three rounds of ED patients (n=42) in cognitive interviews. Analysis supported removing five items of the original 21, revising the focus and wording of the remaining 16 items, and adapting response options to encourage patient participation in team evaluation. Measurement of patients’ views about teamwork-related behaviors they observe included items about care coordination, inter-professional dynamics, and team communication.
Implications: Analysis of validity evidence for content, response processes, and internal structure, supported development of an instrument for patient input about ED teamwork. Expanding our understanding of patients’ awareness of team interactions may support efforts to improve teamwork processes that align with a patient-centered approach.
Keywords: Patient perceptions, teamwork, emergency department, communication, instrument development, patient participation.
Citation: Henry BW, Rooney DM, Eller S, McCarthy DM, Seivert NP, Nannicelli AP, and Vozenilek JA. What patients observe about teamwork in the emergency department: development of the PIVOT questionnaire. J Participat Med. 2013 Jan 30; 5:e4.
Published: January 30, 2013.
Competing Interests: The authors have declared that no competing interests exist.

Introduction

Communication and teamwork are recognized as essential elements of health care quality improvement and patient safety initiatives.[1][2] Studies linking better teamwork to improved patient care have examined a variety of settings, from acute to primary care.[3][4] Strategies to address problems include assessment of teamwork-related behaviors through health care providers’ completion of the Teamwork and Safety Climate Survey,[5] subsequent training using a curriculum developed by the Agency for Healthcare Research and Quality (AHRQ), Team Strategies & Tools to Enhance Performance & Patient Safety (TeamSTEPPS®), and ongoing use of related techniques.[6] However, a noticeable gap in these assessments is the lack of direct patient involvement.[7] In fact, two previous reviews of the literature were unable to identify studies wherein the patient was a source for observations of teamwork-related behaviors.[8][9]

Our minimal grasp of the patient perspective contrasts with efforts to improve health care quality in the United States through patient-centered means.[10] While interpretations of being ‘patient-centered’ can vary, the Institute of Medicine (IOM) definition states: “providing care that is respectful of and responsive to individual patient preferences, needs, and values, and ensuring that patient values guide all clinical decisions.”[10 (p.7)] Though patient-centeredness is a distinct quality dimension that often relates positively to health status outcomes, Berwick notes further work is needed to shift hospitals toward becoming truly patient- and family-centered.[11]

In this vein, patient participation in the evaluation of health care experiences has revealed some indicators of team performance beyond satisfaction with individual care providers.[12][13][14][15] Patient interviews about safety issues noted a lack of communication among providers as problematic,[16] and the 15-item Picker Patient Evaluation (PPE-15) measured patient feedback on coordination of care, continuity, and transitions of care.[13] However, an instrument to specifically measure the patient perspective of teamwork among health care providers is lacking. The purpose of our research was to develop an instrument to survey patients about their perceptions of teamwork-related behaviors. The emergency department (ED) was selected as the setting for developing and pilot testing this instrument because it is a clinical environment where teamwork, at whatever level of effectiveness, is visibly on display. Specifically, our goals were to (a) develop survey items that captured patients’ views of teamwork behaviors; (b) determine which proposed survey items health care providers considered feasible to measure and useful for feedback; and (c) refine these items into a concise survey to measure patients’ views of the health care team in the ED.

Methods

Researchers conducted a three-phased, mixed methods study with an explanatory design, using a quantitative web-based survey and subsequent qualitative cognitive interviews.[17][18][19] Instrument design of the Patients’ Insights and Views Observing Teams (PIVOT) survey included developing questions to match teamwork-related behaviors reflected in the literature, as well as four areas that emerged from an earlier phase of this research.[8][20] Study procedures involved evaluating three sources of validity evidence (content, response processes, and internal structure), in line with validity as a unitary concept based on the Standards for Educational and Psychological Testing (Standards).[21] Institutional Review Board approval at the study site was obtained for research activities and all participants indicated their informed consent.

Phase I: initial instrument design.
We identified a pool of 44 potential question items from prior research on teamwork performance observed through self-assessment[5][22][23][24] or by patients and caregivers.[12][13][14][15] Next, three researchers (BH, SE, AN) independently reviewed the list of questions for applicability and then considered them for alignment with four content areas of team performance: coordination, organization and leadership, inter-professional dynamics, and communication. In an iterative process, we reviewed the development of question items for content area representation, reconciled differences, and proposed final item selection. Lastly, as a group, we modified the 21 remaining items for language consistency and to match a single set of response options. The tenets of the Plain Language Initiative, as noted on the National Institutes of Health website, were used in creating the items, including the use of full sentences, everyday words, personal pronouns, active voice, and parallel construction. We conducted a pilot test of the PIVOT survey by inviting 12 local participants (health care team members, former hospital patients, or patients’ caregivers) to complete the questionnaire online. In response, participants commented on item clarity, the need for clear and consistent use of terms, the survey process, and time spent to respond. We modified the directions and question items based on their feedback.

Phase II: web-based survey.
We recruited a sample of health care practitioners, educators, patients, and patient advocates to review proposed PIVOT items using web-based survey administration (http://www.qualtrics.com) and following Umbach’s guidelines for web surveys.[18] For example, the line length and response types were selected for ease of completion, and skip patterns were inserted as appropriate so that respondents only saw question items needing their consideration. General demographic information (sex, country of residence, age, years of health care training, race/ethnicity) was requested at the end of the survey and adapted from the anonymous “Hospital Survey on Patient Safety Culture” developed by AHRQ and the continuity of care patient survey.[22][25] No identifiable participant information was collected.

For a two-week period, we recruited survey participants primarily through four separate groups: two internal to our academic medical center and two external. Internal study participants were recruited among the clinical staff of the research institution, and the survey was sent via email to approximately 150 nurses, 20 technicians, and 40 physician providers. The second internal group comprised members of the hospital Patient & Family Advisory committee, who were invited by the research team via email. Externally, the Society for Participatory Medicine was selected to reach health care clinicians (n = 297) with an interest in participatory medicine and team communication; this group promoted the survey through an announcement on its email listserv to active members. The Shared Decision Making Network was also contacted to reach patient advocates (n = 336) and included announcements about the survey through its web-based social media. We sent reminder messages at the midpoint and two days prior to the end of the two-week period. Despite being given the option, no one requested a paper-and-pencil version of the survey. Locally, efforts to promote survey participation included word-of-mouth discussions.

Participants were directed to indicate their level of agreement, using a 5-point Likert scale, as to whether it was feasible to ask patients to answer each of the 21 PIVOT survey question items. Respondents were also encouraged to include comments about the clarity of question items. Items were displayed in groups by the four content areas of teamwork designated in Phase I. Next, participants were asked to complete a second item review to consider the utility of patient responses in guiding providers’ improvement efforts. For this round, participants viewed items in one of three groups based on their first rating of overall feasibility: feasible (agreed), neutral, or unfeasible (disagreed). We evaluated validity evidence relevant to test content using the participants’ feasibility and utility ratings of proposed PIVOT items. Participants’ comments regarding the items’ use in the ED population were also considered. Inter-item consistency was estimated to evaluate validity evidence relevant to internal structure.

Phase III: cognitive interviewing.
Our review of the web-based survey responses yielded support for 16 items to remain on the PIVOT survey. The research team then conducted cognitive interviews with patients and caregivers as they completed the PIVOT survey and commented on question items and response options. In this phase we aimed to ensure that patients and caregivers interpreted the questions in a way that matched the intended meaning. Development of the interviewing protocol followed published guidelines[19][26] and incorporated comments from an expert review by one of the guideline authors (P.C. Beatty, PhD, oral communication, May 2012). Before engaging in the cognitive interviews, five members of the research team completed a 90-minute training session that highlighted how to use inductive processes to identify ED patients’ or caregivers’ understanding of question items and response options, and included a practice interview and response activity. In particular, the goal was to find problems with questions rather than work around them.[27] Training materials included the 4-stage model of the survey response process,[28] types of cognitive probes (eg, comprehension, paraphrasing, confidence judgment, recall),[27] the study protocol script with debriefing questions, and a review of the note-taking process.[19]

Over four weeks, one-on-one cognitive interviews were conducted in an iterative process that aimed for successive rounds of five to fifteen patient or caregiver volunteers, for a minimum of three rounds. Targeted participants were English-speaking ED patients and caregivers over 18 years old, interviewed prior to discharge. Only those with uncorrected hearing impairment or who were too ill to participate, as reported by the patient or ED staff, were excluded. The PIVOT items were arranged in order from general to specific, and negatively worded items were presented in pairs to cue respondents. Interviewers (NS, DR, AN) encouraged participants to think aloud as they completed the survey. Retrospective probes were used when participants expressed hesitation or uncertainty about an item. Interviewers and researchers met weekly to review notes and discuss findings, item by item. Item revision and re-testing continued until the interviews yielded few additional insights. As items were finalized, less probing by interviewers was required, and participants were encouraged to work through the survey as independently as possible during the final round. Following analysis of each round, a summary table of item changes, with rationale and remaining points to probe, was distributed to the interviewing team along with the revised PIVOT survey and interview script. After the third round of interviews, it was clear we had reached saturation and data collection was stopped.

Data Analyses

We ensured content validity during instrument development (Phase I) via careful documentation and review by consensus. In addition, validity evidence relevant to content, internal structure, and response processes was evaluated across Phases II and III: the web-based survey and cognitive interviewing.

Phase II: Web-based Survey.
To evaluate validity evidence relevant to test content from the web-based survey measures, we employed an application from modern test theory: a Rasch model.[29] Rasch models are part of a family of models known as item response theory (IRT) and have demonstrated their use in patient-centered survey development.[30][31][32][33][34][35][36] Analysis was performed using the Facets software v. 3.68.[37] For this study, we applied a many-facet Rasch model to acquire three indices used to evaluate content validity: observed averages, point-measure correlations, and item outfit statistics. These indices, described in greater detail by Wolfe and Smith,[38] are summarized in Appendix A.
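To make these indices concrete, the sketch below (Python with numpy, our choice of tooling; the study itself used the Facets software) computes an item's observed average and point-measure correlation directly from a ratings matrix, and an outfit mean-square from model-expected values and variances, which in practice come from the fitted many-facet Rasch model. The toy ratings, stand-in expected values, and variable names are hypothetical and only illustrate the formulas.

```python
import numpy as np

# Hypothetical ratings matrix: rows = respondents, columns = survey items,
# values = 1-5 Likert ratings (toy data, not the study's data).
ratings = np.array([
    [4, 5, 3, 2],
    [5, 4, 4, 3],
    [3, 4, 2, 2],
    [4, 5, 3, 1],
], dtype=float)

# Observed average: the mean rating each item received.
observed_avg = ratings.mean(axis=0)

# Point-measure correlation: correlation between responses to an item and
# respondents' overall measures (approximated here by their total scores).
person_totals = ratings.sum(axis=1)
point_measure = np.array([
    np.corrcoef(ratings[:, i], person_totals)[0, 1]
    for i in range(ratings.shape[1])
])

# Outfit mean-square: the unweighted mean of squared standardized residuals,
#   z_ni = (x_ni - E_ni) / sqrt(W_ni),
# where E_ni is the model-expected rating and W_ni its variance for person n
# on item i, as produced by the fitted Rasch model (Facets computes these).
# The expected values and variances below are illustrative stand-ins only.
expected = np.full_like(ratings, ratings.mean())
variance = np.full_like(ratings, ratings.var())
z = (ratings - expected) / np.sqrt(variance)
outfit_msq = (z ** 2).mean(axis=0)

print("Observed averages:   ", observed_avg.round(2))
print("Point-measure corr.: ", point_measure.round(2))
print("Outfit mean-squares: ", outfit_msq.round(2))
```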

After identifying perceived feasibility and utility via the observed averages for each item, we used the item outfit statistics described above to explain the level of participants’ agreement in ratings. We then categorized survey items into three action groups: (a) Keep in the survey, (b) Evaluate, and (c) Remove from the survey. The criteria for these prescribed actions are also described in Appendix A.
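As a rough illustration of how such a decision rule might look, the sketch below sorts an item into one of the three action groups. The cutoffs (observed average 3.5, outfit Z-standardized magnitude 2.0) are illustrative placeholders, not the study's actual criteria, which are given in Appendix A.

```python
# Hypothetical decision rule for sorting items into the three action groups.
# The study's actual criteria appear in Appendix A; the cutoffs below are
# illustrative placeholders only.
def action_for_item(observed_average: float, outfit_zstd: float) -> str:
    if abs(outfit_zstd) >= 2.0:
        return "Evaluate"                 # raters disagreed; look closer
    if observed_average >= 3.5:
        return "Keep in the survey"       # rated feasible, raters agree
    return "Remove from the survey"       # rated not feasible, raters agree

print(action_for_item(4.2, 0.8))   # Keep in the survey
print(action_for_item(2.9, 0.5))   # Remove from the survey
print(action_for_item(3.8, 3.5))   # Evaluate
```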

Open Coding Methods of Written Comments. We analyzed verbatim responses to open comment areas listed on the web-based survey or those emailed to the study personnel. Three coders (BH, SE, DM) individually reviewed all comments and identified themes in the data to develop a coding dictionary. Next, the coders met as a group to discuss these themes and refine definitions as necessary. Then, the coders labeled each comment and reached consensus on the coding assignments, which were used to inform the next iteration of the survey.

Phase III: Cognitive Interviewing. At the end of each round of interviews, each interviewer analyzed their notes and participant response patterns before meeting as a group to discuss findings. Issues and points of confusion noted with the PIVOT survey related to the survey instructions, response options, specific question items, participants’ responses to debriefing questions, and the types of probing used by the interviewers. Potential sources of error for each item were categorized into groupings by comprehension, memory/knowledge, judgment, response process, and general. Researchers considered changes and estimated the impact of those changes on the basis of their prior interviews.

Results

We present results from the three phases of research that correspond to our three research goals. First, in Phase I, a 21-item questionnaire was developed to address patient observations of teamwork in the ED. Next, the proposed PIVOT questionnaire was assessed for feasibility and utility (Phase II, web-based survey). Third, the revised 16-item PIVOT survey was assessed for sources of error (Phase III, cognitive interviewing). Table 1 summarizes the demographic characteristics of the participants for both Phases II and III of the study.

Table 1: Demographic characteristics of study participants.

Phase II: web-based survey.
Data collection for the web-based survey occurred over a two-week period (N = 119), following internal and external recruitment through electronic means (listserv or website posting, email message). Because of this recruitment approach, the total number of potential respondents cannot be calculated and it is not possible to determine an overall response rate. For the internal group, there was a possible sample size of 200. With 68 individuals responding, the estimated response rate for the internal group was 38%.

Analysis of PIVOT Item Ratings

Response option frequencies aided decisions regarding recommended actions for items with marginal observed averages and a high amount of rating variability. Results reviewed and discussed by the research team are shown in Table 2, along with the summary of Rasch indices from the feasibility and utility ratings.

These findings indicate that participants familiar with the subject matter felt the difficulty patients and caregivers would have in responding varied across items, providing a rationale for removing some of them. A closer look at the item outfit statistics describes participants’ level of rating consistency. See Table 2 for specific details about each item.

Table 2. Response option frequencies of the feasibility ratings and summary of Rasch indices and suggested actions for the proposed PIVOT items.

Of the items participants rated as not feasible to ask patients, three had reasonable fit values, suggesting these three particular items should be removed from the PIVOT survey: items #6, #16, and #21. However, two remaining problematic items and an additional item had elevated fit indices, suggesting less rater consistency in these items’ feasibility ratings; these three items required deeper evaluation: items #2, #7, and #15. Although the remaining 16 items had observed averages over 3.5, two of them, items #3 and #15, had outfit Z-standardized values of 3.5 and 2.1, respectively, suggesting a lack of agreement in participants’ ratings and indicating that deeper evaluation was required.

Observed averages for the utility ratings of all items ranged from 2.0 to 4.4. Four of the five items that had low feasibility ratings continued to have problematic utility ratings; with observed averages of 2.0, 2.7, 2.6, and 3.2, respectively, these were items #2, #7, #16, and #21. Further, items #7, #16, and #21 had very low outfit statistics indicating a very high level of agreement, suggesting raters agreed that these particular items had low utility and that information from them would not be helpful to their practice. There was less agreement about the utility of item #2, which had outfit Z-standardized and mean square values of 2.0 and 1.5, respectively. This item was also considered for removal, along with item #19, which had outfit Z-standardized and mean square values of 2.6 and 1.8, respectively.

Inter-item consistency for the feasibility ratings, estimated by Cronbach alpha, was high (α = .87), and inter-item consistency for the utility ratings was good (α = .84). Thus, these results supported validity evidence relevant to internal structure.
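For readers unfamiliar with the statistic, Cronbach's alpha can be computed from an items-in-columns rating matrix as in the minimal sketch below (Python with numpy, our choice of tooling; the data are toy values, not study data).

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items rating matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy example (not study data): 5 respondents rating 4 items on a 1-5 scale.
toy = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
], dtype=float)
print(round(cronbach_alpha(toy), 2))
```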

Open-ended Comments

In total, 45 of 119 participants (21 internal, 24 external, including 7 via email), about 38% of survey respondents, provided 157 comments, largely suggesting improvements and endorsing the project. Three coders identified a total of seven themes. The two most frequent themes were recommended wording changes to improve clarity (32% of comments) and concerns about patients’ awareness or ability to answer the question (29%). Other themes included: recommendations related to technical details of the survey (eg, order of questions, response options) (13%), recommendations to provide specific definitions of terms (eg, team or care-plan definition) (5%), general supportive comments for studying this topic (6%), and concerns about the utility of the survey (1%). Several participants providing comments also seemed to have misinterpreted the survey instructions, responding as if they themselves were the patient taking the survey and offering their own observations of teamwork (13%).

Through analysis of responses to the proposed questionnaire items on patient perception of health care teamwork, a consensus emerged about which items provided the most clarity, relevance, and utility for improving team effectiveness. The resulting wording changes provided the survey version used in cognitive interviewing to further clarify item wording and response options. The changes made from the web-based survey version improved the “plain language” of the questions and reduced the SMOG readability score for all question items from an initial 6th-grade reading level to a 5th-grade level, using the McLaughlin calculator.
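For reference, McLaughlin's SMOG formula estimates a reading grade from the count of polysyllabic (three or more syllable) words per 30 sentences. The sketch below (Python, our choice of tooling) illustrates the calculation; the syllable counter is a crude vowel-group heuristic, the sample item is hypothetical, and SMOG is normally applied to 30-sentence samples rather than single survey items.

```python
import re
from math import sqrt

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; real readability tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """McLaughlin's SMOG grade: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291

# Hypothetical survey item, for illustration only.
item = "Did the people caring for you talk with each other about your care plan?"
print(round(smog_grade(item), 1))
```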

Phase III: cognitive interviewing.
Three successive rounds of cognitive interviews were conducted with a total of 42 individuals in the ED over a four-week period, with weekly research team meetings to analyze results (participant characteristics shown in Table 1). Data from cognitive interviews revealed potential sources of error related to the four-stage model of the survey response process: (a) comprehension of the question; (b) retrieval from memory; (c) judgment or estimation processes; and (d) response processes, or finding the answer they want.[27][28] Also, respondents reported difficulty with items because they lacked knowledge of the circumstances needed to respond or because there was semantic or grammatical difficulty. Table 3 shows item development across the cognitive interviews, with the third version of the PIVOT presented as the final list of 16 items and related response options.

Table 3. PIVOT survey item development through cognitive interviewing.

Following the first round of cognitive interviews (n = 13), analysis revealed item revisions were necessary based on potential errors of each type in the four-stage model and on participants reporting that a lack of knowledge limited their ability to respond. For example, the initial response scale was based on participants’ level of agreement regarding the occurrence of events, ranging from “not at all” to “very much.” Participants stated they could not remember how often the items occurred or lacked the knowledge to reply when items were written as statements of fact. Thus, item wording was revised to highlight what participants liked, thought, felt, or saw, and the response options changed to a frequency scale ranging from “not at all” to “all the time.”

During the second round of cognitive interviews (n = 12), errors occurred more often due to item wording, such as double-barreled questions or participants interpreting questions differently than intended. Researchers reviewed the participants’ responses to probing questions as well as interviewers’ notes. Item wording was then adjusted, and probing questions in round three were specifically aimed at assessing participants’ response processes with those revisions.

After five interviews in the third round, further adjustment to the response options was indicated because some participants reported they did not have an opportunity to observe some of the stated items. During probing and debriefing of those interviews, it was determined that participants had a good understanding of the intent of the items; however, a response option was necessary for participants who felt they did not have an opportunity to make an observation. Thus, the response options were modified, and the 12 subsequent cognitive interviews with this form were completed independently by participants and without difficulty. The final version of the PIVOT survey is shown in Appendix B.

Discussion

Through this study, we created and refined question items for a novel survey of Patients’ Insights and Views Observing Teams (PIVOT). Applying both quantitative and qualitative methods during the development of the survey, we ensured evidence of content validity and evaluated validity evidence relevant to response processes and internal structure.[21] Through this process, PIVOT items reflected patient awareness of health care services, such as that found with the CAT-T[12] and the Picker Patient Experience,[13] while focusing on teamwork-related interactions. Findings from this research support validity evidence relevant to internal structure and response processes.

The web-based survey provided feedback from health care providers and patient advocates estimating the level of difficulty patients and caregivers would have responding to PIVOT survey items, and how relevant these responses would be to teamwork process improvement efforts. As a result, we retained 16 items on the revised PIVOT survey, which included items on care coordination, consistency of information between providers, awareness of team functioning, and communication between team members. Consistent with mixed methods research by Chin et al,[39] our findings indicated that patient views on aspects of teamwork such as care processes, professional conduct, and communication could provide information helpful to providers.

Through the cognitive interviewing process, we identified changes to the survey questions and response options that were needed to measure what we intended to measure. Initially, modifications were made to shift the focus of items to better reflect what patients could directly observe and answer based on their own judgment. As detailed in descriptions of the cognitive interviewing process,[19][26][27][28] participants’ abilities to respond to items appeared influenced by comprehension of question items (eg, wording, or mismatch with intended domains). Analysis of each round revealed item components to change that resonated across participants and showed improved clarity with subsequent versions of the survey. Changing the response options to range from “not at all” to “all the time” appeared to ease participants’ ability to respond, aided by their memory of what they observed. Lastly, the response option “had no opportunity to make an observation” was added to distinguish such cases from the “not at all” option. This revision better matched patients’ desire to respond to items based on how frequently a behavior could be viewed, rather than to items they perceived they had no chance to see.

In spite of the positive findings, there are limitations to consider. The content areas to be measured may be underrepresented, and there may be some variance due to respondents’ own biases regarding satisfaction with, or trust in, the health care system. Also, while web-based surveys can be an economical way to reach a large and geographically diverse sample, there are potential disadvantages to our methods, which relied on web-based access for participants in the second phase and a process of self-selection to respond to the survey.[18][40] Coverage error may have occurred if respondents were not representative of the target group of ED patients and caregivers, or if the web-posted notices did not reach a broad spectrum of health care providers and patient advocates. Sampling error could have occurred through the use of a web-based survey. In addition, nonresponse bias could have occurred if people with differing perspectives did not participate in the survey or cognitive interviews. Efforts to increase response rates included posting pre-notification of the questionnaire, a message welcoming all viewpoints, reminder announcements, and the offer of a written form (though no one requested it). For the cognitive interviews, we attempted patient and caregiver recruitment over varied times and days of the week. Future efforts to assess the survey should include a larger sample and encourage diverse perspectives, ensure privacy for participant responses, and offer open comments in at least two places in the survey.

Implications

Preliminary findings from the PIVOT survey of health care team members and patient advocates supported validity evidence relevant to test content, response processes, and internal structure. Subsequent field testing of the finalized PIVOT survey is required to substantiate validity evidence relevant to test content, response processes, and internal structure; and to evaluate validity evidence relevant to relationships to other variables. Our research team is in the process of analyzing results from a field test of 100 participants (results reported separately).

With further development, patient input may add to the enhancement of teamwork-related behaviors. As noted by Reinke and Hammer,[41] important qualities of inter-professional collaboration that may improve with training programs such as TeamSTEPPS[6] matched aspects considered on the PIVOT survey, such as team communication, coordination, and decision-making. Also, Knaus et al identified processes of care associated with lower mortality rates, including excellent communication, a high degree of coordination of care among staff, and frequent staff interactions with close, comfortable working relationships.[3] Incorporation of a patient-centered approach may contribute to evaluation of these care processes.[10][11][42] In the future, the PIVOT survey may be administered in clinical settings through paper-and-pencil or electronic means prior to discharge to capture patients’ views in real time or on a periodic basis.

The need to increase our awareness of patients’ views may be highlighted by these contrasting comments from the web-based survey.

From health care provider responses:

  • How should patients know if team members discussed next steps with each other? This can happen in moments that the patient cannot observe.
  • [N]ot sure patients will be able to know much about “behind the curtain” communication.

From patient advocate responses:

  • It seems like there is disconnect between the nurses and the doctors. The doctor leaves the room and doesn’t speak to the nurse regarding plan of care or tests ordered.
  • [P]atients can hear everything through the curtains, and if an attending is giving a dressing down to a subordinate it is clearly heard…

In conclusion, these results offer evidence that patients are aware of, and can answer questions about, teamwork in significant content areas such as coordination of care and communication. This study appears to confirm that patients’ comments about health care services can extend beyond their satisfaction with the care they received.

Acknowledgements

The authors would like to thank the following for their contributions to this research project: Paul Beatty for his guidance during the stages of instrument development and Alex Gerard for his participation during discussions for the analysis of the cognitive interviews.

References

  1. Sentinel Event Statistics Data – Root Causes by Event Type (2004 – Q2 2012). The Joint Commission. Available at: http://www.jointcommission.org/Sentinel_Event_Statistics. Accessed July 28, 2011.
  2. Emanuel L, Berwick D, Conway J, et al. What exactly is patient safety? In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 1: Assessment). Rockville, MD: Agency for Healthcare Research and Quality; 2008.
  3. Knaus WA, Draper EA, Wagner DP, Zimmerman JE. An evaluation of outcome from intensive care in major medical centers. Ann Intern Med. 1986;104(3):410.
  4. Webster JS, King HB, Toomey LM, et al. Understanding quality and safety problems in the ambulatory environment: seeking improvement with promising teamwork skills and strategies. In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches (Vol 3: Performance and Tools). Rockville, MD: Agency for Healthcare Research and Quality; 2008.
  5. Sexton JB, Helmreich RL, Neilands TB, et al. The Safety Attitudes Questionnaire: psychometric properties, benchmarking data, and emerging research. BMC Health Serv Res. 2006 Apr 3;6:44.
  6. Agency for Healthcare Research and Quality. TeamSTEPPS: National Implementation. Available at: http://teamstepps.ahrq.gov/. Accessed July 31, 2011.
  7. Tumerman M, Carlson LMH. Increasing team cohesion and leadership behaviors using a 360-degree evaluation process. WMJ. 2012;111(1):33-37.
  8. O’Leary KJ, Sehgal NL, Terrell G, Williams MV. Interdisciplinary teamwork in hospitals: a review and practical recommendations for improvement. J Hosp Med. 2012;7(1):48-54.
  9. Manser T. Teamwork and patient safety in dynamic domains of healthcare: a review of the literature. Acta Anaesthesiol Scand. 2009;53(2):143-151.
  10. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington DC: National Academy Press; 2001.
  11. Berwick DM. What ‘patient-centered’ should mean: confessions of an extremist. Health Aff (Millwood). 2009;28(4):w555-565.
  12. Mercer LM, Tanabe P, Pang PS, et al. Patient perspectives on communication with the medical team: Pilot study using the communication assessment tool-team (CAT-T). Patient Educ Couns. 2008;73(2):220-223.
  13. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14(5):353-358.
  14. Woodside J, Rosenbaum P, King S, King G. The Measure of Processes of Care for Service Providers (MPOC-SP). Hamilton, Ontario: CanChild Centre for Childhood Disability Research, McMaster University; 1998.
  15. Auerbach AD, Sehgal NL, Blegen MA, et al. Effects of a multicentre teamwork and communication programme on patient outcomes: results from the Triad for Optimal Patient Safety (TOPS) project. BMJ Qual Saf. 2012;21(2):118-126.
  16. Rathert C, Brandt J, Williams ES. Putting the ‘patient’ in patient safety: a qualitative study of consumer experiences. Health Expect. 2012 Sep;15(3):327-36.
  17. Bergman M, ed. Advances in Mixed Methods Research. Los Angeles: Sage Publishing; 2008.
  18. Umbach PD. Web surveys: best practices. New Directions for Institutional Research. 2004;121:23-38.
  19. Beatty PC, Willis GB. Research Synthesis: The Practice of Cognitive Interviewing. Public Opin Q. 2007;71(2):287-311.
  20. Henry BW, McCarthy DM, Eller S, Rooney DM, Vozenilek JA. Who’s in charge?: patient perceptions of team dynamics in the ED. Paper presented at the European Association for Communication in Healthcare; September 4-7, 2012; University of St Andrews, Scotland, UK.
  21. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 1999.
  22. Improving Patient Safety in Hospitals: Resource List. Agency for Healthcare Research and Quality. Available at: http://www.ahrq.gov/qual/patientsafetyculture/hospimpdim.htm. Accessed July 28, 2011.
  23. Lubomski LH, Marsteller JA, Hsu YJ, Goeschel CA, Holzmueller CG, Pronovost PJ. The team checkup tool: evaluating QI team activities and giving feedback to senior leaders. Jt Comm J Qual Patient Saf. 2008;34(10):619-623,561.
  24. Tregunno D, Pittini R, Haley M, Morgan PJ. Development and usability of a behavioural marking system for performance assessment of obstetrical teams. Qual Saf Health Care. October 1, 2009;18(5):393-396.
  25. Christakis DA, Wright JA, Zimmerman FJ, Bassett AL, Connell FA. Continuity of care is associated with well-coordinated care. Ambul Pediatr. 2003;3(2):82-86.
  26. Beatty PC. Cognitive interviewing: the use of cognitive interviews to evaluate ePRO instruments. In: Byrom B, Tiplady B, eds. ePRO: Electronic Solutions for Patient Reported Data. Aldershot, UK: Gower Publishing; 2010.
  27. Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage Publications; 2005.
  28. Tourangeau R. Cognitive sciences and survey methods. In: Jabine TB, Straf ML, Tanur JM, Tourangeau R, eds. Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines. Washington, DC: National Academy Press; 1984.
  29. Rasch G. Probabilistic Models for Some Intelligence and Attainment Tests. Chicago: The University of Chicago Press; 1980.
  30. El Miedany Y, El Gaafary M, El Aroussy N, Ahmed I, Youssef S, Palmer D. Patient reported outcomes in ankylosing spondylitis: development and validation of a new questionnaire for functional impairment and quality of life assessment. Clin Exp Rheumatol. 2011;29(5):801-10. Epub 2011 Oct 31.
  31. Hibbard JH, Stockard J, Mahoney ER, Tusler M. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res. 2004;39(4 Pt 1):1005-26.
  32. McKenna SP, Meads DM, Doward LC, et al. Development and validation of the living with chronic obstructive pulmonary disease questionnaire. Qual Life Res. 2011;20(7):1043-52. Epub 2011 Feb 11.
  33. Minaya P, Baumstarck K, Berbis J, et al. The CareGiver Oncology Quality of Life questionnaire (CarGOQoL): development and validation of an instrument to measure the quality of life of the caregivers of patients with cancer. Eur J Cancer. 2012;48(6):904-11. Epub 2011 Oct 25.
  34. Mulhern B, Smith SC, Rowen D, et al. Improving the measurement of QALYs in dementia: developing patient- and carer-reported health state classification systems using Rasch analysis. Value Health. 2012;15(2):323-33. Epub 2011 Nov 17.
  35. Osborne RH, Norquist JM, Elsworth GR, et al. Development and validation of the Influenza Intensity and Impact Questionnaire (FluiiQ™). Value Health. 2011;14(5):687-99. Epub 2011 May 8.
  36. Development of the participation scale for patients with congestive heart failure. Am J Phys Med Rehabil. 2012;91(6):501-10.
  37. Linacre J. Facets Many-Facet Rasch Measurement Software. Chicago: MESA Press; 2011.
  38. Wolfe EW, Smith EV Jr. Instrument development tools and activities for measure validation using Rasch models: part II–validation activities. J Appl Meas. 2007;8(2):204-34.
  39. Chin G, Warren N, Korman L, Cameron P. Patients’ perceptions of safety and quality of maternity clinical handover. BMC Pregnancy Childbirth. 2011;11:58-65.
  40. Dillman DA, Tortora RD, Bowker D. Principles for constructing web surveys. 1998. Available at: http://survey.sesrc.wsu.edu/dillman/papers/1998/principlesforconstructingwebsurveys.pdf. Accessed February 16, 2012.
  41. Reinke LF, Hammer B. The role of interprofessional collaboration in creating and supporting health care reform. Am J Respir Crit Care Med. 2011;184:863-865.
  42. Govindarajan P, Larkin GL, Rhodes KV, et al. Patient-centered integrated networks of emergency care: consensus-based recommendations and future priorities. Acad Emerg Med. 2010:17(2):1322-1329.

Copyright: © 2013 Beverly W. Henry, Deborah M. Rooney, Susan Eller, Danielle M McCarthy, Nicholas P. Seivert, Anna P. Nannicelli, and John A Vozenilek. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the authors, with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

 
