Journal of Interprofessional Education & Practice 12 (2018) 40–47
Multidimensional evaluation of interprofessional collaboration in a disaster simulation
Courtney West (a), Yuanyuan Zhou (b,∗), Karen Landry (c), Bree Watzak (d), Lori Graham (e)

a Proposed College of Osteopathic Medicine (applicant status, seeking accreditation), Sam Houston State University, Huntsville, TX, United States
b Office of Medical Education, College of Medicine, Texas A&M University, Bryan, TX, United States
c College of Nursing, Texas A&M University, Bryan, TX, United States
d College of Pharmacy, Texas A&M University, College Station, TX, United States
e Office of Medical Education, College of Medicine (recently retired); Research Scientist (part-time), Office of Special Programs & Global Health, Texas A&M University School of Public Health, College Station, TX, United States

∗ Corresponding author: Yuanyuan Zhou, Office of Medical Education, College of Medicine, Texas A&M University, Clinical Building 1, Suite 1122, 8441 Riverside Pkwy, Bryan, TX 77807, United States. E-mail: [email protected]
1. Introduction Interprofessional Education (IPE) is becoming more common due to accreditation requirements [30] and more encompassing as a variety of training approaches, activities, and settings are being used to enhance teamwork and communication skills [7]. In clinical practice, working effectively in an interprofessional team impacts patient care and requires skills learned through IPE [3]. Deliberate educational activities enable students to maximize the expertise of healthcare professionals as they learn about roles and responsibilities, apply values and ethical principles, and build interprofessional teamwork and communication skills. When students' IPE learning opportunities transition from classroom and small group learning formats to active engagement in patient care in simulated [17]; [22] and actual patient encounters [26], they are able to enhance the quality of healthcare delivery. Due to emphasis on communication and teamwork, simulations are often utilized for IPE. Disaster simulation is an innovative approach to emergency preparedness training and can be an effective method to help students learn the core elements of disaster medicine [19]; [21]; [27] as well as IPE. Disaster Day at the Texas A&M University Health Science Center is an experiential event designed to train interprofessional teams (including nursing, medicine, pharmacy, public health, and other health professions students) in an “authentic” learning situation. Disaster Day began in 2008 by the College of Nursing due to training needs identified after a hurricane evacuation, which resulted in shelters being set-up and staffed in the community. It began as a small student-led event and quickly evolved and expanded into a multidisciplinary disaster simulation where the scenario is announced to all participants minutes before the simulation begins [16]. In the simulation, hundreds of patients portraying disaster victims with different symptoms and injuries flood the makeshift shelter divided into four pods of approximately four rows of ten cots. Prior to this
interprofessional evaluation process being introduced, discipline-specific objectives were utilized and assessed by discipline-specific faculty. Since everyone on the healthcare team needs to know their own and others' roles and responsibilities, communicate those, and work together as a team to coordinate care during the organized chaos of Disaster Day, we focused on assessing team-based collaborative care. Due to the broad scope of the interprofessional disaster simulation, measuring the attainment of IPE competencies presents a unique challenge. While there are numerous instruments available, reliable and applicable instruments appear to be limited [28]; [29]. Most existing instruments focus on attitudes about teamwork (e.g., TeamSTEPPS and ATHCT) [1]; [8], readiness for interprofessional learning (e.g., RIPLS) [25], or perceptions about interprofessional work (e.g., SPICE and IEPS) [6]; [18]. These targets are associated with or reflect the level of health professionals' interprofessional collaborations, but not the standards that directly assess demonstrated collaboration skills. In fact, assessment derived from the IPE competencies might provide a measure of interprofessional education programmatic outcomes [5]. However, assessment tools that rely heavily on respondents' self-perceptions have been questioned by some researchers for the tendencies of participants to overrate themselves [19]. There is a need for more unbiased assessment tools as well as multidimensional approaches to effectively evaluate short-term placement-based trainings such as Disaster Day. As a result, our instrument development team utilized the Interprofessional Education Collaboration (IPEC) [12] Competencies to create three instruments (i.e., team instrument, patient instrument, and observer instrument) to examine team-based collaborative care in healthcare simulations. IPEC includes six national associations from Nursing, Osteopathic Medicine, Pharmacy, Dentistry, Medicine, and Public Health, and nine other national associations (Interprofessional Education Collaborative Expert Panel) [24]. The instrument development team reviewed all IPEC competencies and subcompetencies. Then, the team made adjustments to the wording of the subcompetencies to
abbreviate the items for the team and observer instruments without altering their meaning. The patient instrument was not derived directly from the IPEC competencies but from the PIVOT Questionnaire [9], which was developed for patients to assess teamwork, coordination, and care delivery in an emergency department. The content validity, internal structure, and response processes of the PIVOT Questionnaire were evaluated in a three-phase mixed methods study [9]. The instrument development team shortened the patient instrument to make it more manageable for patients and to encourage a higher response rate. Although the instrument was a modified PIVOT Questionnaire, its items mirrored the IPEC competency items in the team instrument and observer instrument, so comparative analyses across the three instruments could be conducted. The development team further refined the instruments and items through an iterative, multi-meeting discussion process, then piloted the instruments and adjusted them based on the pilot results. The final instruments were implemented during Disaster Day to assess team-based collaborative care using a multidimensional approach.
2. Method
2.1. Instruments
The Team's Perception of Collaborative Care Questionnaire (TPCCQ; team instrument; see Appendix I) was created to assess the quality of team-based collaborative care from the student's perspective. The TPCCQ consisted of 30 four-point Likert-type items (1 = strongly disagree, 4 = strongly agree) organized into five components: values/ethics, roles and responsibilities, communication, teamwork, and self-evaluation. For the first four components, each team member rated the effectiveness of their team, which consisted of students with varied and specialized knowledge, skills, and methods, as they provided care during the disaster simulation. The fifth, self-evaluation, component included items to engage students in reflective practice and to gauge their own contributions to the team. The Team Observation Instrument (TOI; observer instrument; see Appendix II) included 20 of the items in the team instrument but used a dichotomous scale (0 = not demonstrated, 1 = demonstrated). The scale was different because the observers evaluated the interprofessional collaborative care process globally and would have had difficulty rating team interactions at a more fine-grained level. The Standardized Patient Team Evaluation Instrument (SPTEI; patient instrument; see Appendix III) was designed to measure interprofessional practice from the patient's perspective. The SPTEI consisted of 10 items that align with the IPEC competencies, rated on a four-point Likert-type scale (1 = strongly disagree, 4 = strongly agree) [9]. The patient instrument is shorter than the other two instruments because lengthy questionnaires for patients often cause a drop in participation and can affect the validity of the results [20].
2.2. Data collection

The assessment plan involved collecting feedback from three perspectives: the students who participated as members of the interprofessional teams, the observers of the team and patient interactions, and the patients who directly received services from the interprofessional teams. The IPE evaluation process for the large-scale Disaster Day simulation received approval from the Texas A&M University Institutional Review Board. All students, standardized/volunteer patients, and observers reviewed and signed consent forms before the simulation started. Each group was then asked to complete the questionnaires immediately following the simulation, prior to debriefing, so that the discussion would not influence the evaluation.
2.3. Participants
Seven hundred ninety-six students (375 in 2014 and 421 in 2015) participated on health care teams during Disaster Day. Nearly half were nursing students, followed by medical students; there were also students from Pharmacy, Radiology, Emergency Medicine, and Physical Therapy. No significant demographic differences existed among nursing, medicine, and pharmacy students between 2014 and 2015 at the 0.05 level. In the simulated disaster, the students assumed health care provider roles and worked collaboratively to provide care for patients. The response rate for the team instrument over the two years was 58.2% (2014: 50.1%; 2015: 65.3%). Patient participants in Disaster Day included 750 standardized patients, health professions students, and untrained volunteers (see Table 1 for details). The response rate for the patient instrument was 42.6% in 2014 and 27.4% in 2015. Forty-four observer instruments (2014: n = 20; 2015: n = 24) were collected from the trained observers, a response rate of 100%. Faculty and staff observers were strategically placed around each pod in the disaster simulation to observe the students' performance and rate the teams' performance from a third-party perspective. The observers focused only on students' performance and interprofessional team interaction during the simulation; they did not observe the prebriefing or debriefing components. Observers received training prior to the event, which included discussion of how to interpret the items and the rating scale and addressed specific questions about the instrument in an effort to increase rating consistency. Although the training did not include practice with the instrument prior to the simulation, most of the observers participated in Disaster Day during both years and were familiar with the instrument.
Table 1
Composition of health care team members in 2014 and 2015.

Instrument | Discipline / role | Participants 2014 | Participants 2015
Team instrument | Nursing | 170 | 175
 | Medicine | 57 | 40
 | Pharmacy | 8 | 24
 | Other health professions | 140 | 182
 | Total | 375 | 421
Patient instrument | Trained standardized patients | 80 | 0
 | Untrained volunteers | 300 | 450
 | Health professions students | 50 | 50
 | Total | 430 | 500
Observer instrument | Total | 20 | 24

Completed surveys and response rates: team instrument, 188 (50.1%) in 2014 and 275 (65.3%) in 2015; patient instrument, 183 (42.6%) in 2014 and 137 (27.4%) in 2015; observer instrument, 20 (100%) in 2014 and 24 (100%) in 2015.
2.4. Data analysis

The data analyses focused primarily on validating the team instrument and comparing assessment results across the three instruments to determine whether there were differences in perspective. The construct validity of the team instrument was examined by exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) on the first 24 team-based items. The self-evaluation component was excluded because its items assess all four of the IPEC competencies but from an individual perspective; if included, it might have affected the accuracy of the factor extraction. The EFA used maximum likelihood estimation with oblique (GEOMIN) rotation. The Kaiser-Meyer-Olkin (KMO) test was used to measure sampling adequacy (values greater than 0.90 indicate "marvelous" factorial simplicity) [13], and Bartlett's test was used to determine whether the correlation matrix was suitable for factoring (a statistically significant result indicates EFA may be useful) [2]. In the CFA, the Comparative Fit Index (CFI) (CFI > .90 indicates a fair fit; CFI > .95 indicates good fit) [10], the Standardized Root Mean Square Residual (SRMR) (SRMR < .06 indicates good fit) [4], the Root Mean Square Error of Approximation (RMSEA) (RMSEA around .06 indicates good fit) [11], and the chi-square test of model fit (a non-significant result indicates good fit) [14] were used to determine whether the team instrument had four distinct factors. The internal consistency (i.e., reliability) of the whole instrument and of each individual factor was examined with Cronbach's alpha; the standardized path loadings of the individual items in the CFA also provided internal consistency information. Factor analysis was not conducted on the patient instrument because it was short (10 items, including one on overall quality) and was not based entirely on the IPE competencies; its internal consistency was, however, examined with Cronbach's alpha. For the observer instrument, the Kuder-Richardson coefficient (a simplified form of Cronbach's alpha for binary responses) was calculated for items without 100% agreement. For items with constant ratings across observers, 100% agreement indicated ideal internal consistency between observers.

Concurrent validity of the team instrument was addressed by comparing the mirrored items with the observer instrument and the patient instrument. Brown-Forsythe tests were used to test the difference between the team instrument and the patient instrument on eight pairs of mirrored themes, because the homogeneity-of-variance assumption between the two groups was violated. Percentage of agreement was used to compare the team instrument and the observer instrument on 20 identical items. The team instrument has a four-point response scale (strongly agree, agree, disagree, and strongly disagree), while the observer instrument is dichotomous (demonstrated and not demonstrated). For comparative purposes, the team instrument data were recoded: strongly agree and agree were merged into one category, and disagree and strongly disagree were merged into another, so that a percentage of agreement for the team instrument could be calculated. Missing data were not included in the computation. Replicability was addressed by comparing the results from the two years using independent t-tests. Mplus 7.4, IBM SPSS 22.0, and Excel 2013 were used for the analyses.
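For readers who want to reproduce this style of analysis outside Mplus and SPSS, the sketch below illustrates the adequacy and reliability steps described above. It is a minimal illustration, not the authors' code: it assumes the 24 team-based items sit in a pandas DataFrame built from a hypothetical file team_instrument_items.csv (columns ordered Q1–Q24), the 20 dichotomous observer items sit in observer_instrument_items.csv, and it uses the open-source factor_analyzer package with oblimin rotation as a stand-in for the GEOMIN rotation used in Mplus.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def kr20(items: pd.DataFrame) -> float:
    """Kuder-Richardson 20: Cronbach's alpha specialized to 0/1 items."""
    items = items.dropna()
    k = items.shape[1]
    p = items.mean(axis=0)            # proportion of observers scoring 1 on each item
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)


# --- Sampling adequacy checks before the EFA --------------------------------
team_items = pd.read_csv("team_instrument_items.csv")          # hypothetical file
_, kmo_total = calculate_kmo(team_items)                       # want > 0.90
chi2, bartlett_p = calculate_bartlett_sphericity(team_items)   # want p < .05

# --- Four-factor maximum likelihood EFA with an oblique rotation ------------
efa = FactorAnalyzer(n_factors=4, method="ml", rotation="oblimin")
efa.fit(team_items)
loadings = pd.DataFrame(efa.loadings_, index=team_items.columns,
                        columns=["F1", "F2", "F3", "F4"])
communalities = pd.Series(efa.get_communalities(), index=team_items.columns)

# --- Internal consistency for the full scale and each component -------------
components = {                        # hypothetical item groupings (IPEC domains)
    "values_ethics": team_items.columns[0:7],
    "roles_responsibilities": team_items.columns[7:11],
    "communication": team_items.columns[11:19],
    "teamwork": team_items.columns[19:24],
}
alphas = {name: cronbach_alpha(team_items[cols]) for name, cols in components.items()}
alphas["overall"] = cronbach_alpha(team_items)

observer_items = pd.read_csv("observer_instrument_items.csv")  # hypothetical file
varying = observer_items.loc[:, observer_items.nunique() > 1]  # drop constant items
kr20_value = kr20(varying)

print(f"KMO = {kmo_total:.3f}, Bartlett p = {bartlett_p:.4f}")
print(loadings.round(2))
print(pd.Series(alphas).round(2))
print(f"KR-20 (non-constant observer items) = {kr20_value:.3f}")
```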
3. Results
3.1. Reliability and validity of instruments

3.1.1. Team instrument
The initial psychometrics of the team instrument's team-level items were examined by EFA using the two-year data. The KMO test indicated that sampling adequacy was "marvelous" (KMO > 0.90), and Bartlett's test indicated that EFA may be useful (p = 0.000). Based on the data collected from 463 student respondents, four correlated components were identified by the EFA. As shown in Table 2, items with moderate to heavy loadings aligned with the IPEC competency components with only a few exceptions (Q3, Q4, Q11, and Q21). This provides preliminary support for the four components determined by the four IPE competencies: Values/Ethics, Communication, Roles and Responsibilities, and Teamwork. The four-factor structure was further examined by CFA. Most standardized regression weights were above .70, and there were no excessive correlations (r < .85) between the latent components [14]. The CFI indicated a fair fit (CFI = .917) [10], the RMSEA indicated a fair fit (RMSEA = .069) [11], and the SRMR indicated a good fit (SRMR = .041) [4], while the chi-square test (791.035, df = 246, p = 0.000) indicated lack of fit of the four-factor model. Taken together, most fit indices suggested the model was satisfactory but may require further modification [14]. Cronbach's alpha for the team instrument was .96, and the alphas for the components were as follows: Values/Ethics .85, Roles and Responsibilities .86, Communication .91, Teamwork .87, and Self-Evaluation .88. Considering that each subscale (i.e., component) was relatively short (n = 4–8 items), Cronbach's alpha values of .85 or greater indicate good internal consistency between items [15].
The average ratings for the team components were between 3.60 and 3.63 in 2014 (SD: .36–.42) and between 3.57 and 3.65 in 2015 (SD: .37–.46). The results for the two consecutive years were very consistent. There were only minimal differences among the four IPEC competency components, and the difference between team evaluation and self-assessment was also minimal. The average rating for the self-evaluation component was 3.64 (SD = .40) in 2014 and 3.63 (SD = .41) in 2015. For all 30 individual items, the average ratings for team evaluation items were between 3.47 and 3.77 (SD: .36–.56) in 2014 and between 3.46 and 3.76 (SD: .37–.61) in 2015. Certain items were consistently rated lower (e.g., "protected confidentiality and privacy in the delivery of team-based care") and certain items were consistently rated higher (e.g., "worked well with members of the healthcare team").
Table 2
Four-factor exploratory factor analysis: estimated GEOMIN rotated loadings.

Item | Component | F1 | F2 | F3 | F4 | Communality
Q1 | VE | 0.69 | 0.16 | 0.11 | 0.12 | 0.472
Q2 | VE | 0.48 | −0.02 | 0.10 | 0.07 | 0.360
Q3 | VE | 0.47 | 0.34 | 0.14 | 0.02 | 0.362
Q4 | VE | 0.32 | 0.35 | 0.08 | 0.07 | 0.406
Q5 | VE | 0.77 | 0.29 | 0.02 | −0.05 | 0.519
Q6 | VE | 0.63 | 0.23 | 0.02 | −0.01 | 0.608
Q7 | VE | 0.39 | 0.22 | −0.02 | 0.02 | 0.591
Q8 | RR | 0.01 | 0.07 | 0.67 | 0.01 | 0.630
Q9 | RR | 0.19 | 0.31 | 0.51 | 0.00 | 0.569
Q10 | RR | 0.09 | −0.03 | 0.84 | −0.02 | 0.727
Q11 | RR | −0.08 | 0.39 | 0.45 | 0.12 | 0.653
Q12 | CC | 0.01 | 0.40 | 0.08 | 0.22 | 0.410
Q13 | CC | 0.10 | 0.52 | 0.03 | 0.12 | 0.472
Q14 | CC | 0.07 | 0.60 | 0.01 | 0.14 | 0.564
Q15 | CC | −0.08 | 0.89 | −0.03 | 0.01 | 0.687
Q16 | CC | 0.12 | 0.76 | −0.13 | −0.01 | 0.556
Q17 | CC | 0.02 | 0.78 | 0.08 | −0.07 | 0.633
Q18 | CC | 0.03 | 0.53 | 0.20 | 0.06 | 0.555
Q19 | CC | 0.02 | 0.51 | 0.20 | 0.08 | 0.533
Q20 | TT | 0.06 | 0.21 | −0.02 | 0.57 | 0.562
Q21 | TT | 0.38 | −0.06 | −0.07 | 0.60 | 0.557
Q22 | TT | 0.05 | 0.01 | 0.12 | 0.70 | 0.662
Q23 | TT | 0.00 | 0.07 | 0.19 | 0.62 | 0.639
Q24 | TT | −0.08 | 0.15 | 0.04 | 0.71 | 0.667

Note. VE = Values and Ethics component, RR = Roles and Responsibilities component, CC = Communication component, TT = Teams and Teamwork component.
3.1.2. Patient instrument
Cronbach's alpha for the patient instrument was .89, which indicated good internal consistency between items. The average ratings from the patient instrument ranged from 3.02 to 3.57 in 2014 (SD: .50–.79) and from 3.05 to 3.61 in 2015 (SD: .53–.82). There was an increase in all 10 team evaluation scores in 2015. A series of two-way ANOVAs was conducted to test whether there were statistically significant differences between 2014 and 2015. The ratings on "worked well together to coordinate care" and "demonstrated effective leadership" were statistically significantly different (Cohen's d = −.39, F1,311 = 11.4, p = .001, and Cohen's d = −.29, F1,314 = 6.4, p = .012). The negative Cohen's d values indicate that the average ratings for these items were higher in 2015 than in 2014.
3.1.3. Observer instrument
Excluding the four items with constant values, the Kuder-Richardson coefficient for the remaining 16 items was .717. For the four items with constant ratings, the internal consistency was 100%. The percentage of agreement for each item ranged from 40% to 100% in 2014 and from 75% to 100% in 2015 across all 20 items. Some items were consistently rated low (e.g., "explained other's roles and responsibilities accurately" — 45% agreement in 2014, 81% agreement in 2015 — and "described own roles and responsibilities clearly" — 50% agreement in 2014, 81% agreement in 2015), while some items were consistently rated high (e.g., "respected the expertise of other health professionals" — 100% agreement in both years — and "engaged other healthcare professionals in the evaluation and treatment process" — 100% in 2014, 96% agreement in 2015). There were also items rated low in 2014 that yielded improved ratings in 2015 (e.g., "managed ethical dilemmas in interprofessional care situations" — 44% agreement in 2014, 94% agreement in 2015 — and "focused on quality of care and patient safety" — 67% in 2014, 100% agreement in 2015). Most of the items had 80% or higher agreement in both years. Independent t-tests were used to test whether the observers' ratings on each item differed between the two years (2014 and 2015). Because the assumption of homogeneous variance was violated, equal variances were not assumed. Statistically significant increases in scores were found for "focused on quality of care and patient safety" (M2014 = .65, equivalent to 65% agreement, SD = .45; M2015 = 1.00, SD = .00; Cohen's d = −1.1; p = .006) and "contributed to conversations leading to care decisions" (M2014 = .80, SD = .41; M2015 = 1.00, SD = .00; Cohen's d = −.69; p = .042). No other significant differences were noted for the remaining items.
3.2. Comparing ratings on the team instrument and the patient instrument
The patient instrument can be categorized into eight themes: communicated effectively, supported and respected team members, provided understandable information to patients, established trust and worked ethically with patients, worked well together to coordinate care, described roles and responsibilities clearly, discussed care and decisions with patient, and demonstrated leadership practices that led to effective teamwork. Each theme has a matched set of items in the team instrument that measures the same substance. Because the homogeneity assumption was violated based on Levene's statistic for all eight pairs (all ps < .01), a series of robust tests of equality of means (i.e., Brown-Forsythe tests) was conducted. Table 3 provides the means, standard deviations, Brown-Forsythe statistics, and p-values for the two instruments in 2014 and 2015. Even though both instruments indicated that the IPE experience was positive, mean scores for the patient instrument were always lower than those for the team instrument across all eight paired themes, and the standard deviations for the patient instrument were always higher. Based on the Brown-Forsythe statistic, the mean ratings for all eight paired themes were statistically significantly different in 2014, and five paired themes were statistically significantly different in 2015. The three themes that did not show statistically significant differences in 2015 were: supportive and respectful team members, understandable information to non-healthcare professionals, and established trust with patients and worked ethically.
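As a rough illustration of this comparison, the sketch below checks homogeneity of variances with the Brown-Forsythe (median-centered Levene) test and then compares one mirrored theme across the two respondent groups. It is not the authors' SPSS procedure: scipy does not expose the Brown-Forsythe robust test of equality of means directly, so Welch's t-test is used here as a closely related alternative, and the simulated arrays patient_scores and team_scores are hypothetical placeholders for the real theme-level data.

```python
import numpy as np
from scipy import stats


def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)


# Hypothetical theme-level scores for one mirrored theme in one year:
# mean of the patient items vs. mean of the matched team items.
rng = np.random.default_rng(0)
patient_scores = np.clip(rng.normal(3.1, 0.65, size=183), 1, 4)
team_scores = np.clip(rng.normal(3.6, 0.42, size=188), 1, 4)

# Brown-Forsythe (median-centered Levene) test of homogeneity of variances.
bf_var_stat, bf_var_p = stats.levene(patient_scores, team_scores, center="median")

# When variances are unequal, fall back to a robust comparison of means.
# Welch's t-test (equal_var=False) stands in for the Brown-Forsythe
# robust test of equality of means reported in the paper.
t_stat, p_value = stats.ttest_ind(patient_scores, team_scores, equal_var=False)

print(f"Variance test: F = {bf_var_stat:.2f}, p = {bf_var_p:.3f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}, "
      f"Cohen's d = {cohens_d(patient_scores, team_scores):.2f}")
```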
3.3. Comparing ratings on the team and the observer instrument
As shown in Table 4, there was not a large difference between the two years in the teams' ratings. Students rated their teams' performance high (95.6%–100% agreement) on all items. However, ratings from the observers were lower than ratings from the students. In 2014, observers' ratings were much lower than students' ratings on a majority of items: four items had percentages of agreement below 60%, eight items had agreement of 90% or greater, and eight items had agreement between 60% and 89.9%. In 2015, ratings from observers increased on many items. Most items had agreement at or above 90%, and three items had agreement below 90%: "described own roles and responsibilities clearly" (75.0%), "explained other's roles and responsibilities accurately" (81.3%), and "accepted responsibility for outcomes" (88.9%).
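The percentage-of-agreement comparison in Table 4 follows the recoding described in Section 2.4: the four-point team ratings are collapsed to agree/disagree and then compared, item by item, with the dichotomous observer ratings. The snippet below is a minimal sketch of that computation, assuming hypothetical DataFrames team_ratings (values 1–4, NaN for missing) and observer_ratings (0/1) whose columns are the 20 shared items.

```python
import pandas as pd


def percent_agreement_team(team_ratings: pd.DataFrame) -> pd.Series:
    """Collapse 4-point ratings (1-2 = disagree, 3-4 = agree) and report
    the percentage of respondents in the 'agree' category per item."""
    agree = team_ratings >= 3             # strongly agree / agree
    valid = team_ratings.notna()          # missing responses are excluded
    return 100 * agree.sum() / valid.sum()


def percent_agreement_observer(observer_ratings: pd.DataFrame) -> pd.Series:
    """Percentage of observers marking each item as 'demonstrated' (1)."""
    valid = observer_ratings.notna()
    return 100 * observer_ratings.sum() / valid.sum()


# Hypothetical input files; columns are the 20 items shared by both instruments.
team_ratings = pd.read_csv("team_ratings_2014.csv")
observer_ratings = pd.read_csv("observer_ratings_2014.csv")

comparison = pd.DataFrame({
    "team_%agree": percent_agreement_team(team_ratings).round(1),
    "observer_%agree": percent_agreement_observer(observer_ratings).round(1),
})
comparison["gap"] = comparison["team_%agree"] - comparison["observer_%agree"]
print(comparison.sort_values("gap", ascending=False))
```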
Table 3
Mean ratings and standard deviations on themes, with Brown-Forsythe tests for equality of means. Standardized patients: n2014 = 183, n2015 = 137; team's perception: n2014 = 188, n2015 = 275.

Theme, Year | Patients Mean (SD) | Team Mean (SD) | Brown-Forsythe | Sig. | Cohen's d
Communicated effectively, 2014 | 3.13 (.64) | 3.57 (.42) | 60.0 | .000 | −0.81
Communicated effectively, 2015 | 3.26 (.64) | 3.60 (.42) | 31.3 | .000 | −0.62
Supported and respected team members, 2014 | 3.57 (.50) | 3.75 (.38) | 16.5 | .000 | −0.43
Supported and respected team members, 2015 | 3.61 (.53) | 3.71 (.40) | 3.7 | .055 | −0.21
Provided understandable information to patients, 2014 | 3.37 (.61) | 3.54 (.55) | 7.4 | .007 | −0.28
Provided understandable information to patients, 2015 | 3.43 (.69) | 3.49 (.53) | 0.8 | .363 | −0.10
Established trust and worked ethically with patients, 2014 | 3.42 (.62) | 3.54 (.42) | 4.5 | .035 | −0.23
Established trust and worked ethically with patients, 2015 | 3.49 (.58) | 3.57 (.43) | 1.9 | .166 | −0.16
Worked well together to coordinate care, 2014 | 3.27 (.55) | 3.68 (.36) | 67.6 | .000 | −0.88
Worked well together to coordinate care, 2015 | 3.46 (.50) | 3.68 (.38) | 19.3 | .000 | −0.50
Described roles and responsibilities clearly, 2014 | 3.02 (.79) | 3.53 (.48) | 55.0 | .000 | −0.78
Described roles and responsibilities clearly, 2015 | 3.05 (.82) | 3.49 (.53) | 31.8 | .000 | −0.64
Discussed care and decisions with patient, 2014 | 3.24 (.71) | 3.60 (.42) | 34.9 | .000 | −0.63
Discussed care and decisions with patient, 2015 | 3.36 (.66) | 3.59 (.45) | 12.5 | .001 | −0.40
Demonstrated leadership practices that led to effective teamwork, 2014 | 3.32 (.65) | 3.66 (.43) | 34.3 | .000 | −0.62
Demonstrated leadership practices that led to effective teamwork, 2015 | 3.50 (.58) | 3.65 (.43) | 7.7 | .006 | −0.30
Table 4
Comparison between the team perception and the team observation (percentage of agreement).

Item | Team 2014 | Team 2015 | Observer 2014 | Observer 2015
Protected confidentiality and privacy in the delivery of team-based care. | 96.7% | 95.6% | 75.0% | 91.3%
Respected the expertise of other health professionals. | 100.0% | 99.6% | 100.0% | 100.0%
Established trust and rapport with patients, families and team members. | 99.5% | 98.9% | 95.0% | 100.0%
Focused on quality of care and patient safety. | 99.5% | 97.8% | 58.8% | 100.0%
Managed ethical dilemmas in interprofessional care situations. | 98.8% | 100.0% | 40.0% | 94.4%
Described own roles and responsibilities clearly. | 98.9% | 96.4% | 50.0% | 75.0%
Engaged other healthcare professionals in the evaluation and treatment process. | 98.9% | 98.9% | 100.0% | 95.8%
Explained other's roles and responsibilities accurately. | 97.8% | 97.1% | 45.0% | 81.3%
Utilized abilities of team members to optimize patient care. | 98.4% | 98.9% | 90.0% | 91.7%
Used effective communication tools and techniques. | 96.8% | 96.0% | 90.0% | 100.0%
Provided information in a form that is understandable to non-healthcare professionals. | 98.9% | 98.5% | 100.0% | 95.5%
Contributed to conversations leading to care decisions. | 99.5% | 98.9% | 80.0% | 100.0%
Provided timely, specific feedback. | 97.3% | 97.4% | 66.7% | 95.0%
Interacted in a respectful manner when dealing with conflict. | 98.9% | 100.0% | 92.9% | 100.0%
Established hierarchy within healthcare team which contributed to effective collaborative care. | 99.5% | 98.9% | 90.0% | 90.9%
Participated in collaborative problem solving. | 97.8% | 100.0% | 75.0% | 95.2%
Engaged the patient in discussions/decisions about his/her care. | 98.3% | 97.4% | 84.2% | 95.2%
Demonstrated leadership practices which facilitated effective teamwork. | 97.8% | 99.3% | 84.2% | 95.2%
Accepted responsibility for outcomes. | 98.9% | 100.0% | 73.3% | 88.9%
Performed effectively as a team by participating in a variety of roles. | 99.5% | 99.6% | 80.0% | 100.0%
4. Discussion

The initial version of the team instrument yielded preliminary evidence of internal consistency and appears to support the proposed IPEC competency structure. The patient instrument demonstrated high internal consistency. The observer instrument showed a lack of variance on several items, but results from it, as well as from the patient instrument, indicated the same pattern. For example, observers and patients consistently rated team performance lower than students' self-perceptions, which might be a sign of teams overrating themselves. The inflated ratings may also be the result of students' lack of knowledge about how to evaluate the quality of their team interactions when providing patient care. All three instruments yielded positive feedback from students, patients, and observers, which provided evidence for the concurrent validity of the team instrument and the other two instruments. However, the clustered and negatively skewed Likert-scale data presented challenges for regular parametric methods such as factor analysis because of the possibility of non-normal distributions. Because overly "conservative" non-parametric analysis can itself lead to incorrect interpretations, the researchers decided it was best to proceed with the parametric approach, which would likely not result in inaccurate conclusions [23].
5. Conclusions

Even though the team instrument was the strongest, the ability to compare mirrored items on the patient and observer instruments provided a more comprehensive evaluation of the entire disaster simulation. Students rated their teams' performance higher than patients did. Although most patients rated students' practices high overall during both years, some patients rated students' practices low, which made the overall ratings more spread out. Patients' ratings indicated that students need improvement in the areas of communication, coordination skills, and defining roles and responsibilities. There are several reasons why differences between students' and patients' ratings may have occurred. For example, critically injured patients were prioritized and received treatment first, so those with less severe symptoms may have felt ignored. Due to the scale of the event and the number of patients, different team members may have treated the same patient, and the resulting repetitive interactions could have been perceived by patients as communication problems. The type of simulation, a high-stress environment with limited resources, could also have played a role in the patients' perception of the students' team-based collaborative care performance. However, effective communication and coordination are crucial in all patient care situations, so this will be an area to focus on in future simulations.

Students also tended to rate their team's performance higher than observers did. Observers gave lower ratings on "described own roles and responsibilities clearly," "explained other's roles and responsibilities accurately," and "accepted responsibility for outcomes." The observers only watched students' performance during the actual simulation; roles and responsibilities could have been discussed before the simulation, when students were engaged in pre-briefing. The observers may also have needed to attend the debriefing to see what students thought about outcomes following the activity. This information could also be used to facilitate a debriefing discussion that would enable students to actively engage in reflective practice.

This particular team-based collaborative care evaluation approach has several limitations. First, the multi-dimensional assessments that were developed are criterion-referenced and focused on evaluating the quality of interprofessional practice during a simulated activity, but they have only been used during a disaster simulation. Additional research is needed to investigate whether the instruments sufficiently measure the effectiveness of team-based collaborative care across a variety of simulated IPE interventions. Second, many different healthcare professions are represented, but not in equal numbers, so one group could be biasing the sample. A greater attempt should be made to create balanced professional groups; for example, random sampling could be used to examine group make-up and compare teams' attainment of competencies, which may yield information about how to compose multidisciplinary teams to maximize performance. Third, although the team created three instruments, the team instrument appears to be the strongest and yields the most comprehensive information. The patient instrument also appears to be a sound questionnaire; however, some participants were untrained volunteers, so their ratings may not accurately reflect the quality of the team-based collaborative care. Further examination of the instrument will be conducted to determine its readability and to explore whether reducing the number of items is feasible. The observer instrument may also need refinement because its lack of variation meant that internal consistency could not be effectively calculated.

Overall, the data indicate that utilizing the IPEC competencies [24] to create measures may yield effective assessment of team-based collaborative care in a simulated educational environment similar to Disaster Day. Multi-dimensional assessment may also provide an opportunity to examine whether team members' self-reporting practices result in inflated ratings in different settings. Future research will be conducted to further refine the instruments and examine whether the tools can be effectively utilized in other patient care settings.
Ethical approval
Ethical approval was granted by the authors' institution's Institutional Review Board.

Disclaimers

None.
Appendix A. Supplementary data

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.xjep.2018.05.002.

Appendix I. IPE Team's Perception of Collaborative Care Questionnaire - Revised
Following your participation in the Interprofessional Education activity, please rate your team's performance on the following items. Response options: Strongly Agree (SA), Agree (A), Neither Agree nor Disagree (NAND), Disagree (D), Strongly Disagree (SD).

Values/Ethics Components
1. Worked well with and respected the values and expertise of other health professionals.
2. Established trust and rapport with patients, families and team members.
3. Focused on quality of care and patient safety.
4. Managed ethical dilemmas in interprofessional care situations.

Roles and Responsibilities Components
5. Described own roles and responsibilities clearly to patients and team members.
6. Engaged other healthcare professionals with different expertise in the evaluation and treatment process.
7. Explained roles and responsibilities of other providers accurately.
8. Utilized knowledge, skills, and abilities of team members to optimize patient care.

Communication Components
9. Communicated information in a form that is understandable to non-healthcare professionals.
10. Contributed to conversations leading to care decisions.
11. Demonstrated active listening, encouraged ideas, and provided timely, constructive feedback.
12. Interacted in a respectful manner when dealing with conflict.

Teamwork Components
13. Engaged in collaborative problem solving among healthcare professionals.
14. Demonstrated leadership practices, which facilitated effective teamwork.
15. Accepted responsibility for teamwork outcomes.
16. Performed effectively as a team by participating in a variety of roles.

Self-Evaluation Components
Rate your performance on the following items.
17. You acted ethically with honesty and integrity.
18. You identified your strengths and weaknesses.
19. You recognized the impact of effective communication skills on patient care.
20. You participated in collaborative problem solving.
Appendix II. IPE Team Observation Instrument - Revised

As you observe the interprofessional activity, rate the team's performance on the following items. Response options: Strongly Agree (SA), Agree (A), Neither Agree nor Disagree (NAND), Disagree (D), Strongly Disagree (SD).

Values/Ethics Components
1. Worked well with and respected the values and expertise of other health professionals.
2. Established trust and rapport with patients, families and team members.
3. Focused on quality of care and patient safety.
4. Managed ethical dilemmas in interprofessional care situations.
Comments:

Roles and Responsibilities Components
5. Described their own roles and responsibilities clearly to patients and team members.
6. Engaged other healthcare professionals with different expertise in the evaluation and treatment process.
7. Explained roles and responsibilities of other providers accurately.
8. Utilized knowledge, skills, and abilities of team members to optimize patient care.
Comments:

Communication Components
9. Communicated information in a form that is understandable to non-healthcare professionals.
10. Contributed to conversations leading to care decisions.
11. Demonstrated active listening, encouraged ideas, and provided timely, constructive feedback.
12. Interacted in a respectful manner when dealing with conflict.
Comments:

Teamwork Components
13. Engaged in collaborative problem solving among healthcare professionals.
14. Demonstrated leadership practices, which facilitated effective teamwork.
15. Accepted responsibility for teamwork outcomes.
16. Performed effectively as a team by participating in a variety of roles.
Comments:
Appendix III. Standardized Patient IPE Team Evaluation Instrument

Please reflect on the team's performance and rate the following items on the scale provided. Response options: Strongly Agree (SA), Agree (A), Neither Agree nor Disagree (NAND), Disagree (D), Strongly Disagree (SD).

1. There appeared to be good communication among the team members.
2. Team members were supportive and respectful to other team members.
3. The team members provided information to me in a manner that was easy to understand.
4. The team gained my trust and acted ethically.
5. The team worked well together to coordinate my care.
6. Team members made it clear to me what their roles were.
7. Team members helped one another with my care.
8. The team members discussed with me my care and supported my decisions about that care.
9. The team members collaborated and agreed with my plan of care.
10. The team members demonstrated leadership practices that led to effective teamwork.
References

1. Baker DP, Amodeo AM, Krokos KJ, Slonim A, Herrera H. Assessing teamwork attitudes in healthcare: development of the TeamSTEPPS teamwork attitudes questionnaire. Qual Saf Health Care. 2010;19:e49.
2. Bartlett MS. Properties of sufficiency and statistical tests. Proc Biol Sci. 1937;160:268.
3. Bridges DR, Davidson RA, Odegard PS, Maki IV, Tomkowiak J. Interprofessional collaboration: three best practice models of interprofessional education. Med Educ Online. 2011;16(1) Art 6035.
4. Diamantopoulos A, Siguaw JA. Introducing LISREL: A Guide for the Uninitiated. London, UK: Sage; 2000.
5. Dow AW, DiazGranados D, Mazmanian PE, Retchin SM. An exploratory study of an assessment tool derived from the competencies of the interprofessional education collaborative. J Interprof Care. 2014;28(4):299–304.
6. Fike DS, Zorek JA, MacLaughlin AA, Samiuddin M, Young RB, MacLaughlin EJ. Development and validation of the student perceptions of physician-pharmacist interprofessional clinical education (SPICE) instrument. American Journal of Pharmaceutical Education. 2013;77 Art 190.
7. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376(9756):1923–1958.
8. Heinemann GD, Schmitt MH, Farrell MP, Brallier SA. Development of an attitudes toward health care teams scale. Eval Health Prof. 1999;22(1):123–142.
9. Henry B, Rooney D, Eller S, McCarthy D, Seivert N, Nannicelli AJ. What patients observe about teamwork in the emergency department: development of the PIVOT questionnaire. Journal of Participatory Medicine. 2013;5:e4.
10. Hooper D, Coughlan J, Mullen M. Structural equation modelling: guidelines for determining model fit. Journal of Business Research Methods. 2008;6:53–60.
11. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model: A Multidisciplinary Journal. 1999;6(1):1–55.
12. IPEC. Interprofessional Education Collaborative: connecting health professions for better care. Retrieved from https://ipecollaborative.org/About_IPEC.html.
13. Kaiser HF. An index of factorial simplicity. Psychometrika. 1974;39(1):31–36. http://dx.doi.org/10.1007/bf02291575.
14. Kline RB. Principles and Practice of Structural Equation Modeling. 4th ed. New York: Guilford Press; 2016.
15. Lance CE, Butts MM, Michels LC. The sources of four commonly reported cutoff criteria: what did they really say? Organ Res Meth. 2006;9(2):202–220.
16. Livingston L, West C, Livingston J, Landry K, Watzak B, Graham L. Simulated disaster day: benefit from lessons learned through years of transformation from silos to interprofessional education. Simulat Healthc J Soc Med Simulat. 2016;11(4):293–298.
17. Luctkar-Flude M, Baker C, Medves J, et al. Evaluating an interprofessional pediatrics educational module using simulation. Clinical Simulation in Nursing. 2013;9(5):e163–e169.
18. Luecht RM, Madsen M, Taugher M, Petterson B. Assessing professional perceptions: design and validation of an interdisciplinary education perception scale. J Allied Health. 1990;19(2):181–191.
19. Mackintosh S, McClure D. (A138) Interprofessional education as a vehicle to instill teamwork mentality for disaster preparedness and response in healthcare professional students. Prehospital Disaster Med. 2011;26(S1):s48.
20. Maloney P, Grawitch MJ, Barber LK. Strategic item selection to reduce survey length: reduction in validity? Consult Psychol J Pract Res. 2011;63(3):162–175.
21. Miller J, Rambeck J, Snyder A. Improving emergency preparedness system readiness through simulation and interprofessional education. Publ Health Rep. 2014;129(6_suppl4):129–135.
22. Morphet J, Hood K, Cant R, Baulch J, Gilbee A, Sandry K. Teaching teamwork: an evaluation of an interprofessional training ward placement for health care students. Adv Med Educ Pract. 2014;5:197–204.
23. Norman G. Likert scales, levels of measurement and the "laws" of statistics. Adv Health Sci Educ. 2010;15(5):625–632.
24. Interprofessional Education Collaborative Expert Panel. Core Competencies for Interprofessional Collaborative Practice: Report of an Expert Panel. Washington, DC: Interprofessional Education Collaborative; 2011.
25. Parsell G, Bligh J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ. 1999;33(2):95–100.
26. Ponzer S, Hylin U, Kusoffsky A, et al. Interprofessional training in the context of clinical practice: goals and students' perceptions on clinical education wards. Med Educ. 2004;38(7):727–736.
27. Scott L, Carson D, Greenwell IB. Disaster 101: a novel approach to disaster medicine training for health professionals. J Emerg Med. 2010;39(2):220–226.
28. Thannhauser J, Russell-Mayhew S, Scott C. Measures of interprofessional education and collaboration. J Interprof Care. 2010;24(4):336–349.
29. West C, Landry K, Graham A, et al. Conceptualizing interprofessional teams as multiteam systems—implications for assessment and training. Teach Learn Med. 2015;27(4):366–369.
30. Zorek J, Raehl C. Interprofessional education accreditation standards in the USA: a comparative analysis. J Interprof Care. 2013;27(2):123–130.