Journal of Interprofessional Care, June 2006; 20(3): 260 – 275

Evaluating interprofessional education: The tautological need for interdisciplinary approaches

NICK STONE
Rural Interprofessional Education Project, School of Rural Health, University of Melbourne, Parkville, Victoria, Australia

Correspondence: Nick Stone, Research Fellow and Project Manager, Rural Interprofessional Education Project, School of Rural Health, University of Melbourne, 4th Floor, 766 Swanston Street, Parkville, Victoria 3010, Australia. E-mail: [email protected]

Abstract

This paper explores some issues associated with evaluating interprofessional education (IPE) programs. It proposes options that harness the synergy made possible through interdisciplinary and multimethod approaches. Both qualitative and quantitative research approaches are suggested. It is argued that traditional, control group experimental designs may not be adequate, appropriate or reasonable as the sole means of evaluating interprofessional education. The example of the four-year Rural IPE (RIPE) project, from southeastern Australia, is provided to suggest ways to identify indicators and implement features of successful IPE programs. It offers an interdisciplinary approach to measuring the effectiveness of IP programs. A particular focus is the use of self-assessment to both monitor and promote structured reflective learning and practice. Sample triangulatory data, collected through a range of evaluation methods, are presented from the RIPE project. The results suggest evidence of some significant educational gains as a result of this intervention. The data, the methods and the analyses may be useful for others interested in implementing or strengthening interprofessional education. The paper suggests that a judicious, customized and balanced blend of methods and methodologies may offer more useful ways forward than relying on single-method controlled studies, which are, in any case, rarely feasible.

Keywords: Interprofessional, multiprofessional, interdisciplinary, health, rural, program evaluation

Introduction

In recent years, interprofessional practice (IPP) and its corollary, interprofessional education (IPE), have received growing attention in response to a range of pressures and developments in the provision of health care (Schofield & Amodeo, 1999). Many of the "Why bother with IPE/IPP?" questions have previously been well addressed, for example by Headrick, Wilcock and Batalden (1998), Pearson (1999) and Duckett (2005). If we assume that most health practitioners, educators and administrators now accept that IPE is, at least in principle, a worthwhile (or even necessary) concept and endeavour, two major tasks remain:

- To develop models of IPE that support successful implementation, and
- To agree on how the effectiveness of IPE programs may be reasonably measured.

IPE effectiveness can be evaluated in a number of ways. For example, program measures may focus on whether there is credible evidence that educational outcomes have been achieved, or whether IPE leads to better interprofessional practice (IPP) once students have graduated. A logical follow-on question then becomes whether better IPP leads to improved patient and/or community health outcomes. These relationships are represented in Figure 1 below.

Figure 1. The relationship between interprofessional practice and interprofessional education.

There is a growing evidence base that IPP can have a positive effect on health outcomes, in particular by improving the management of complex, chronic conditions such as:

- Hypertension (Litaker et al., 2003);
- Mental health (Stephenson, Peloquin, Richmond, Hinman & Christiansen, 2002);
- Asthma (Headrick, Crain, Evans, Jackson, Layman, Bogin, et al., 1996);
- Degenerative disease rehabilitation, for example Parkinson's disease (Wade, Gage, Owen, Trend, Grossmith & Kaye, 2003);
- Acute and palliative care associated with serious illness such as cancer and comorbid conditions (James, 2002).

This vitally important evidence lies at the health outcome end of the spectrum described above. Establishing a positive relationship between IPE and IPP is also a crucial challenge, but it is beyond the possible scope of the current project due to the level of resources and commitment required for such a longitudinal and large-scale endeavour. Instead this paper focuses on some practical strategies that have been successfully used to monitor and support the effectiveness of the Rural IPE (RIPE) project in the state of Victoria in southeastern Australia.

Background

Interprofessional education is, by definition, a multidisciplinary domain, and therefore it seems inevitable that there will continue to be a wide range of ideas about, and approaches to, its evaluation. Like IPE and IPP itself, establishing broad consensus on evaluation methods is destined to be an ongoing source of discussion and negotiation, and at times tension and possibly conflict (Wood, 2001). Compared with other forms of professional education, IPE appears to be more complex (Barr & Low, 2003) and lies outside clearly defined parameters. The associated uncertainty is no doubt uncomfortable for educators and academics whose backgrounds have mainly been in single disciplinary areas such as nursing, medicine or allied health, with their associated habits and traditions. However, developing a tolerance for this uncertainty, along with a willingness to accept that different professional discourses may offer valuable learning, may be preconditions for successfully navigating the larger world and cultures of interdisciplinary education (Gudykunst, 1998). Successful teamwork and interdisciplinary education rely on transgressing disciplinary barriers and relaxing some of the more rigidly held beliefs about
research paradigms (Stephenson et al., 2002), including false dichotomies such as the quantitative-qualitative divide (Onwuegbuzie, 2002).

The implementation and evaluation of IPE is clearly challenging, but there is a wealth of informative research and resources to draw upon, which so far seems to have been largely ignored. Such spheres of knowledge include transformative adult education theory (Mezirow, 1994; Mezirow, 2000) and educational psychology, for example Kember et al. (1999). Mezirow's work is particularly pertinent to IPE as it emphasizes:

- Becoming more reflective and critical;
- Being more open to the perspectives of others;
- Being less defensive and more accepting of new ideas.

The challenge of teaching and assessing higher order, transferable and generic abilities, such as effective social interaction (a core feature of IPE and IPP), has been the subject of much concerted effort over recent decades. For example, there are successful and proven models for the assessment and evaluation of collaborative learning (Johnson & Johnson, 1992), problem solving, communication and critical analysis (Slavin, 1990; Loacker et al., 1986). These attributes are not only essential to IPE, but to professional life in general, and they are currently seen as areas of general deficiency in higher education (AC Nielsen, 2000).

Assessment and evaluation issues

The quality, and specifically the validity, of any IPE program evaluation can only be as good as its individual level measures, that is, the evidence that is collected to investigate what students learn during a program (or do not). Again the question of evaluation purpose must be applied: is it to support and measure the progress of student skills, knowledge and attitudes? Or is it to control for the effect of a "treatment" on a group by excluding "extraneous" and possibly confounding factors? The latter is rarely feasible in the applied settings within which most of us work; in fact Kember (2003) declares that "genuine control is impossible" in this sort of context. The challenge, then, is to accept the higher levels of uncertainty associated with less controllable systems. Saul's observations (1997) reflect this challenge: "Our [academics and] specialised technocratic elites are shielded by a childlike certainty . . . a need for absolute truth rather than embracing doubt and advancing carefully" (p. 5).

Although Saul's claim is clearly something of a generalization, there does seem to be a sort of ethnocentrism amongst disciplinary specialists which causes people to negate or ignore the value of knowledge created within domains different to their own (Gendreau, 2002). Through its essential nature, IPE requires that we advance to higher levels of openness and cognitive complexity (Harvey et al., 1961) and avoid the trap of monodisciplinary rigidity, dogmatism and a consequent impairment of learning (Rokeach, 1960).

Recognition of the need to broaden the scope of educational progress and quality measures is not limited to the IPE domain. For example, in the teaching of evidence-based medicine Hatala and Guyatt (2002) suggest: ". . . quantitative research methods may be inadequate to capture the complexity of an educational system" (p. 1110). Working across the higher education sector more than two decades ago, Biggs and Collis (1982) proclaimed "there is an urgent need for qualitative criteria of learning that have formative as well as summative value" (p. 15).
This stance signals recognition of the need for assessment to do more than rank students for summative purposes such as competitive selection.

More sophisticated approaches strive to integrate purposes through methods that provide support for the learning process as well as a means of measuring progress and achievement. Biggs and Collis (1982), and many others since, assume a relatively narrow field of options in assessment, claiming "there are [only] two ways in which we can make an evaluative judgement – norm or criterion-referenced" (p. 7). In addition to this dichotomous stricture, there is at least one other major avenue to inform evaluation in IPE: self- or "ipsative"-referenced assessment (Baron, 1996), which offers benefits that are particularly useful in addressing the challenges associated with IPE.

Self-assessment for formative and summative purposes

Self-referenced assessment is supported by developing a structured, systematic self-assessment regime that is closely tied to explicit learning goals. Exemplary models such as the Alverno College Institute (1994a; 1994b) offer well-established, rigorous and standards-based approaches that integrate formative and summative assessment. A key feature of this system is developing student self-assessment capacities to underpin the development of a range of other generic abilities such as social interaction. The use of a strong and sustained self-assessment regime has multiple benefits, including enhanced metacognition and reflective practice, which are associated with independent and lifelong learning (Boud, 1994; Schon, 1987). Similarly, Epstein (1999) puts forward a strong case for teaching and assessing "mindful practice" – critical self-reflection that "enables [health professionals] to listen with attentiveness to patients' distress, recognize their own errors, refine their technical skills, make evidence-based decisions, and clarify their values so they can act with compassion, technical competence, presence and insight" (p. 833).

Assessment data focusing on these abilities provide rich insights into learning processes, including unintended and unanticipated outcomes, as well as products. Exploration of the processes involved in IPE, especially those of student interaction, is a particular area of deficit identified by Barr (2000) and Reeves (2000), who also suggest there has been a corresponding over-reliance on quantitative measures. The ideologies (Barr & Low, 2003), training, culture, traditions, status, roles and backgrounds (Whyte & Brooker, 2001) of those involved in IPE appear related to many of the issues associated with its evaluation. IPE requires, to some extent at least, demystification and perhaps deconstruction of perceived or espoused disciplinary boundaries.

Even if, as Hatala and Guyatt (2002) argue, "randomization of learners is possible as is blinding of assessors" (p. 1111), is it feasible or desirable in most instances? And if by chance it is feasible, is such a decontextualized design going to provide insights and be a valid source of evidence for the complexities of "uncontrollable" applied settings? Generalizability, in the strict empiricist sense of the word, is intrinsically limited across "real world" IPE projects – there are too many variables that cannot, and in most cases probably should not, be controlled for. As Andrich argues: "While contrived situations provide controls, they are removed from natural settings and, therefore, their validity may be reduced" (Andrich, 1988, p. 10). Examples of these variables include:

- Unrepeatable conditions that arise as a product of time-limited projects, events and activities, such as the order, intensity and duration of various interventions.
- Institutional and individual student cohort idiosyncrasies that render the combination of, or meaningful comparisons between, parallel data sets untenable.
- The inevitable multiplicity of possible sources of influence on student learning, due to the complexity of any rich learning environment where there are myriad possible interaction effects between learners, co-learners, teachers, preceptors and administrative personnel.

As Kember (2003) points out, attempts to capture rich, naturalistic data through controlled experimental designs tend to oversimplify and reduce variables to a "manageable" scope, to an extent that makes findings and generalizability quite meaningless. Chalmers (2003) goes further with advice that is particularly pertinent to the study and promotion of reflective practice and learning: ". . . it's very important to attend to the question of what exactly does the system [involved in consciousness] see? . . . you need to actually take something about subjective experience as irreducible, just as a fact of the world . . ." (p. 3).

Sustainability is another major issue that must be considered when exploring options for IPE evaluation. In times of widespread intensification of academic workloads (McInnis, 2000), it is typically not feasible to conduct studies of the nature and scale needed to establish the traditionally accepted threshold numbers demanded by purely quantitative research methods and statistical analyses. The RIPE project has the relative luxury of dedicated funding for a four-year pilot project that is able to develop and trial a range of assessment and evaluation methods. We are highly cognisant that only a small proportion of these methods would be sustainable within the contexts typical of undergraduate IPE program administration. Therefore one aim of this article is to identify which methods seem to have worked best for specific evaluative purposes and to identify their respective resource requirements.

The growing interest in IPE could not be more timely within the context of the accelerating rates of change in higher education, both in Australia and internationally. Of particular relevance to IPE are:

- A heightened awareness of the importance of assessment requirements in establishing expectations and guiding student learning, particularly in more flexible, independent learning environments.
- The prominence attached to the development of generic skills, such as communication, teamwork and critical thinking, in the desired outcomes of higher education, and the desire to assess these skills, one outcome of which is the rise of assessable group work.
- The efforts of academic staff to find cost- and time-effective assessment techniques in the face of larger and more diverse student cohorts.
- The emergence of new technological possibilities for assessment, including the potential to integrate assessment in new ways with other teaching and learning activities (CSHE, 2002).

The Rural Interprofessional Education (RIPE) Project is a positive response to these developments and has been designed to incorporate contemporary principles of effective assessment (CSHE, 2002).

Description of the RIPE Project

The RIPE pilot project was funded by the State Government Department of Human Services in Victoria, in southeastern Australia. It was initially managed through
the Department of General Practice, and later through the School of Rural Health at the University of Melbourne, and involved students from over 15 different university departments across the state. A Steering Group was established to help manage the project and included representatives of these departments, including General Practice, Health Promotion, Nursing, Pharmacy, Physiotherapy, Public Health and Rural Health. Members were invited and self-selected on the basis of interest and commitment, and included health practitioners, academics and clinical coordinators from the above range of disciplines. More detail about the organization of the project has been provided by McNair et al. (2005). The project's aims were to:

- Develop and trial a new interprofessional education program, including an associated administrative model and curriculum.
- Provide a rich and enjoyable learning experience for the students to develop skills, knowledge and attitudes that are consonant with IPE and IPP.
- Raise awareness of and interest in rural community-based health care as a possible option for future health practice.
- Conduct research to investigate the effects of the program on student and preceptor learning.

Another motive for the project, and therefore a possible source of bias that should be declared, was to advocate the incorporation of IPE as an explicit and valued component in the undergraduate education of health professionals. Hence the purposes for evaluation and research include:

- Quality assurance and improvement: investigating various aspects of program effectiveness and therefore ways to sustain or improve quality.
- Developing and trialling instruments, procedures and methods for evaluating IPE.
- Building on the existing knowledge base with respect to IPE and IPP, for example identifying perceived facilitators of and barriers to IPP, and exploring and measuring change in IP attitudes, skills and knowledge.
- Providing evidence of the feasibility and effectiveness of IPE, to help persuade policy makers to support its greater prominence in various health curricula.

The multiplicity of purposes and associated audiences for the outcomes of this project means that approaches to assessment, evaluation and reporting need to be correspondingly multifarious.

The project revolved around two-week student placements in rural areas. Students volunteered to work in multidisciplinary pairs or teams to complete a community-based project (CBP), learn about primary and community-based health care, and learn about the ways in which health and other professionals interact in their various workplaces. The nature of the CBP was decided through negotiation between students and preceptors, based on salient value to the local community. More detail on the CBPs is available in a prior article by McNair et al. (2005, p. 582). An initial tutorial prepared students for the placement and provided an introduction to interprofessional practice principles. A final tutorial offered a debriefing opportunity where students also presented their completed community-based projects. Placement sites included Multipurpose Services, remote and other Community Health Centres, small and large general practice clinics and centres, District Hospitals, Bush and District Nursing Services, and drug and alcohol and mental health agencies.


One of the most challenging, but potentially fruitful, aspects of the placement was that it typically involved two or more students who had never met, from different institutions and disciplines, often of different ages and cultural backgrounds, who chose to be immersed in an unfamiliar rural environment away from home and family, to work together to negotiate activities and projects with preceptors, other health staff and community members. Most reported this to be a tiring and demanding undertaking, but one which was often personally transformative and of great perceived value to their professional preparation.

Sample

Approximately 115 students participated in the first four years of the project (see Table I). The questionnaire return rate was 109 out of 115, or about 95%, suggesting a reasonable level of representation of the sample. Students were volunteers, approximately one half of whom were able to gain academic credit for their involvement. Third and fourth year students were targeted in recruitment, in an attempt to select those who might be early enough in their courses to be flexible in their thinking and more likely to be amenable to IP collaboration (Stephenson et al., 2002), but who would also have enough sense of their own professional identity to form a meaningful platform for IPE (Atkins, 1998; Leaviss, 2000).

Methodology

The research methodology evolved within the constraints of relatively small sample sizes and the logistics of managing the RIPE project, including a limit on how much data can reasonably be extracted from students and preceptors in a relatively short space of time. Both quantitative and qualitative methodologies were used. The quantitative methods involved pre- and post-placement questionnaires completed by students and preceptors, with Likert agreement scales on issues such as knowledge of and attitudes towards rural health practice, primary health care, various health professional roles, the impact of student projects and, post-placement, satisfaction levels with various project components.

Choice of statistical analysis

Parametric tests were chosen, notwithstanding standard checks for skewness, using the Statistical Package for Social Sciences (SPSS) Version 12.0.1 (2003). In this choice there is also the issue of whether Likert-type rating scales should be considered interval or ordinal data.

Table I. Student participants in RIPE placements 2001 – 2004.

                 Gender
Discipline       F      M      Total
Medicine         29     21     50
Nursing          44     7      51
Pharmacy         3      4      7
Physiotherapy    6      1      7
Total            82     33     115


Although arguments continue, these days it is commonly assumed that respondents do in fact treat such scales as being of equal intervals: "Empirical evidence that people treat the intervals between points on such scales as being equal in magnitude provides justification for treating them as measures on an interval scale" (Hair et al., 1998; Hair, personal communication, 2003). This choice allows greater power in detecting findings of statistical significance.

Conceptually related items were grouped as scales that may represent one or more underpinning constructs. Constructs that have been identified include satisfaction level with the placement, and beliefs and attitudes towards:

- Own IPP;
- IPP in general;
- The roles of nurses.

This aggregation of items allowed us to consider a wider range of statistical procedures, including Cronbach's alpha coefficient test of internal consistency. In keeping with the self-assessment theme, Table III below shows the results of this analysis with respect to the scale "Beliefs and attitude towards own IPP".

Qualitative data were seen as an important complement to the quantitative data. This reflects the advice of Campbell and Johnson (1999), who criticize the inappropriate over-reliance on quantitative methods in interprofessional education. Qualitative data sources in the RIPE project include:

- Expression of interest forms (with a "Reasons for Applying" section).
- Items requesting short written responses on the pre- and post-placement questionnaires.
- An online discussion facility with access restricted to students and the project manager, who facilitated the forum. Students were encouraged to access this before, during and after the placement.
- Video records of tutorial discussion and interaction, as well as associated transcripts.
- Community-based projects: student records describing related processes (especially the quality of student interaction) and products; observation of student presentations by preceptors, tutors and the project manager, with a keen eye on how the students collaborated during this activity; and actual products of the projects such as PowerPoint presentations, posters, brochures and other resources. A report form was developed to facilitate students' presentations and self-assessment, and to document the range of projects for future reference.
- A range of informal communication, feedback and correspondence with students and preceptors, such as email and telephone correspondence.

Some students also volunteered to submit the reflective records which they were encouraged to keep throughout the placement, as well as a record of the mid-placement review involving site preceptors and students. Students were prompted before (during a pre-placement teleconference), during and after the placement to actively reflect on their own strengths, preferred styles and areas for improvement with respect to working in interprofessional teams.

Students were also encouraged to participate at least twice each week in an online discussion (OLD) forum that was established for each placement. Access was password


protected and only granted to past and present RIPE students, as well as the Project Manager and Officer and the Steering Committee Chair. Confidentiality was assured and its importance was emphasised, so that students could feel free to post entries on a wide range of issues and experiences. The purposes of the OLD were to:

- Overcome the logistical challenges of communication to and between students at sites which are often hundreds of kilometres apart.
- Establish and facilitate rapport, contact and communication between all students undertaking a specific regional placement, and especially co-placed students who typically did not meet until the placement commenced.
- Provide an avenue for informal but honest feedback and reflection about each student's experiences. This function varied from a "relief valve" for venting everyday issues and challenges they met, to an early warning sign of possibly more serious problems that required external intervention.
- Allow the Project Manager to encourage students to focus on specific aspects of interprofessional practice, particularly to self-assess their own preferred and non-preferred ways of engaging in IP encounters.
- Facilitate the provision and sharing of resources and ideas, particularly for community-based projects.
- Gain and document insights into student IPE processes during the placements.

The OLD was analysed using "blunt" measures such as total numbers of words, entries and comments on specific IPE aspects. Sample results in Table VII below show one possible method of summarizing areas of interest, in this case the proportion of self-assessment related comments compared with total numbers of entries and words. This offers a way to compare results between student groups by profession, cohort and age, and with other measures such as the various related questionnaire results mentioned above.

Tutorials were evaluated using questionnaires inviting written comments as well as responses to agreement scales. In addition to the above measures, a "Past Student Questionnaire" has been developed and distributed to students who completed a RIPE placement at least several months prior. This is designed to investigate the longer term effects of the experience on students' learning, including their inclination to return to a rural health setting. A key underpinning question for all data genres is: "What was learned by students and preceptors as a result of the RIPE placement?"

Sample results from the RIPE Project

The first analysis used paired samples t-tests to produce p values for comparing pre- and post-placement questionnaire items for each group. Analyses of items that produced p values of less than .01 were examined and compared with frequency data to ascertain which direction any apparently significant results may have taken. The p < .01 threshold, more stringent than the common .05 level, was chosen to allow greater confidence in interpreting reported changes. This is an illustrative selection rather than a comprehensive report of all findings, intended to show some examples of findings and methods that may be of interest to readers. Table II below summarizes significant pre-/post-placement changes in student responses.

Table III below shows the results of a reliability analysis using SPSS (2003). So as to reduce selection bias, we used an external group of second year nursing students who were not involved in the RIPE project. Of 30 responses, 25 were complete enough to analyse this scale.


Table II. Students (N = 109): significant differences in responses between pre- vs. post-placement questionnaire items (agreement scales). N.B. The ratings ranged from "strongly agree" = 1 to "strongly disagree" = 5.

Statement                                                                                                Pre     Post    p
I need to improve my interprofessional effectiveness (knowledge, skills)                                 1.75    2.19    .000
I generally feel highly respected by students of the 'other' profession                                  2.57    2.27    .004
I generally feel highly respected by practitioners of the 'other' profession                             2.80    2.31    .000
I believe I am highly effective at interprofessional collaboration                                       2.66    2.37    .001
I feel misunderstood by practitioners of the 'other' profession                                          3.30    3.63    .000
I feel misunderstood by students of the 'other' profession                                               3.22    3.52    .003
I feel confident interacting with students of the 'other' profession                                     2.14    1.77    .000
I feel confident interacting with practitioners of the 'other' profession                                2.38    1.90    .000
I can identify several ways to improve the effectiveness of my interprofessional collaboration          2.18    1.95    .007
On this clinical placement, I felt like I was an active member of a multiprofessional team               2.72    2.16    .000
Interprofessional education should be a core part of every health professional's pre-service training   1.66    1.40    .002
I interacted more with the preceptor from my own profession than the 'other' preceptor                  2.87    3.38    .000
I have a comprehensive understanding of the roles of other health professionals                          3.13    2.28    .000

The following statements were post-placement only (mean):
The placement has had a positive impact on my perceptions of the other profession                        1.4
I learned a lot from the 'other' student                                                                 1.7
I learned a lot from the 'other' preceptor                                                               1.2
I learned a lot from my own (profession) preceptor                                                       1.9
My attitude towards interprofessional practice has become more positive as a result of RIPE             1.1
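As a rough illustration, the paired-samples analysis summarized above can be reproduced outside SPSS along the following lines. This is a minimal sketch only: the data file and column names are hypothetical (the RIPE data are not publicly available), and the scale direction follows Table II, so a falling mean indicates stronger agreement after the placement.

```python
# Sketch: paired-samples t-tests on pre-/post-placement Likert items,
# analogous to the SPSS analysis reported above. File and column names
# are hypothetical, for illustration only.
import pandas as pd
from scipy import stats

df = pd.read_csv("ripe_questionnaires.csv")  # hypothetical file: one row per student

items = ["confident_with_other_students", "understand_other_roles"]  # illustrative item names
for item in items:
    pre, post = df["pre_" + item], df["post_" + item]
    paired = pd.DataFrame({"pre": pre, "post": post}).dropna()  # paired design: keep complete pairs
    t, p = stats.ttest_rel(paired["pre"], paired["post"])
    if p < .01:  # the stricter threshold used in the paper
        print(f"{item}: pre mean = {paired['pre'].mean():.2f}, "
              f"post mean = {paired['post'].mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```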

Table III. Reliability analysis: "Attitude towards Own IPP" scale (item-total statistics).

Item   Scale mean if   Scale variance if   Corrected item-     Multiple      Alpha if
       item deleted    item deleted        total correlation   correlation   item deleted
V3     15.0000         17.9167             .4722               .5867         .7662
V20    14.6000         15.7500             .4851               .4252         .7642
V21    14.6400         16.4900             .6556               .6281         .7393
V24    14.9200         16.0767             .7368               .6633         .7279
V25    14.8400         15.1400             .6425               .5613         .7333
V26    14.9600         16.0400             .6124               .5906         .7414
V30    14.0800         18.5767             .1326               .2564         .8307
V31    14.8400         17.4733             .4028               .4453         .7745

Reliability coefficients: 8 items; alpha = .7848; standardized item alpha = .8074.
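For readers without SPSS, Cronbach's alpha and the "alpha if item deleted" column can be computed directly from the item responses. A minimal sketch follows, assuming a hypothetical data file whose columns carry the item names from Table III.

```python
# Sketch: Cronbach's alpha and "alpha if item deleted", mirroring the SPSS
# reliability analysis in Table III. The input file is hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("own_ipp_scale.csv").dropna()  # columns V3, V20, V21, V24, V25, V26, V30, V31

print(f"alpha = {cronbach_alpha(df):.4f}")
for col in df.columns:
    # A higher alpha with the item removed flags a possibly misfitting item (cf. V30)
    print(f"alpha if {col} deleted = {cronbach_alpha(df.drop(columns=col)):.4f}")
```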


While an alpha value of .7848 is more than acceptable for a new scale in the social sciences field, reliability would be increased to .8307 if Item 30 were deleted (this item solicits information about the degree of comfort students feel with being preceptored by someone from a different profession). Deletion of this item could be considered if, on re-examination, it appears to be interpreted as a different construct from the other items.

Self-assessment: Identifying own IP strengths & weaknesses

For reasons outlined above, it was important to examine participants' perceptions of their own IP strengths and weaknesses, and particularly whether the placement appeared to have an influence on these perceptions. In addition to Likert scale items, the questionnaires asked participants to identify and write down three personal IPP strengths and weaknesses, both before and after the placement. The results were processed by quantifying, identifying and ranking the frequency of themes within each data set. The overall quantitative summary, shown in Table IV below, reveals few noticeable differences between student groups, but further analysis of the thematically grouped comments, ranked by frequency, shows some interesting changes that may be attributable to the placement experience. These are illustrated in Tables V and VI below.

Changes in the rankings and frequency of particular responses offer some insights into the quality of student learning. For example, nursing students began the placement reporting "communication skills" most frequently as a personal strength, but being "open minded" topped the list after the placement. This indicates they came to see a predisposition or attitude (open-mindedness) as being more important than a more technical attribute such as communication skills. Similarly, in Table VI below, the most frequently reported weakness before the placement was "lack of experience", but this was replaced with "defensiveness" or "intolerance" by both nursing and medical students after the two-week experience. Again there is a pattern of learning that finds personal attitudes to be more important than technical "literacies" or accumulated knowledge.

By examining Table VII, or through what would in reality probably be a more informal, on-balance judgement by an experienced educator, the OLD Facilitator was able to see how much explicit self-assessment was happening for each student, and when and where to intervene with appropriate prompts, encouragement and other activities, such as subsequent guided tutorial discussion, which can help to boost and balance this important aspect of learning. Common examples include students who are excessively harsh in judging their own performance, and who benefit from a "constructive, benevolent outsider" to gain broader perspectives. Others avoid the challenging task altogether by focusing on superficial aspects of their behaviour or phenomena outside their control. Such an asynchronous forum offers a relatively time-efficient and effective way to help dispersed student groups improve their self-assessment competency whilst also providing rich insights into the quality of learning as it unfolds.

Table IV. Numbers of self-assessment comments in written item responses.

                Strengths         Weaknesses
Students        Pre     Post      Pre     Post
Nursing/AH      86      72        73      68
Medical         70      68        71      64
Total           156     141       148     134


Table V. Pre- vs. post-placement comparison of common themes in self-assessment: Strengths. (Stimulus question: "Please identify three areas of your own personal strengths you believe would contribute to effective interprofessional practice".)

Nursing/AH
  Pre:  1. Communication skills (26); 2. Teamwork (13); 3. Positive attitude to IPE (11); 4. Open minded (10); 5. Respect (8)
  Post: 1. Open minded (15); 2. Positive attitude to IPE (10); 3. Respect (8); 4. Effective work practices (6); 5. Teamwork (5) [7. Communication skills (4)]

Medical
  Pre:  1. Communication skills (15); 2. Willingness to learn (14); 3. Respect (6); 4. Open minded (6); 5. Teamwork (5)
  Post: 1. Communication skills (13); 2. Positive attitude to IPE (11); 3. Respect (11); 4. Role knowledge (8); 5. Teamwork (4)

Total
  Pre:  1. Communication skills (41); 2. Positive attitude to IPE (25); 3. Teamwork (18); 4. Open minded (16); 5. Respect (14)
  Post: 1. Positive attitude to IPE (21); 2. Respect (19); 3. Open minded (18); 4. Communication skills (17); 5. Role knowledge (13) [6. Teamwork (9)]

Note: Numbers in parentheses after each item indicate actual frequencies. Items in italics indicate a "noticeable" change in pre/post results.

Table VI. Pre- vs. post-placement comparison of common themes in self-assessment: Weaknesses.

Nursing/AH
  Pre:  1. Lack of experience (9); 2. Lack of confidence (8); 3. Lack of knowledge (7); 4. Lack of role understanding (7); 5. Defensiveness (5)
  Post: 1. Defensiveness (9); 2. Narrow/fixed view (9); 3. Fear of confrontation (7); 4. Intolerance (5); 5. Lack of confidence (5), Lack of knowledge (5); 10. Lack of experience (2)

Medical
  Pre:  1. Lack of experience (15); 2. Lack of role understanding (13); 3. Communication problems (8); 4. Lack of confidence (5); 5. Lack of knowledge (5); Defensiveness (0)
  Post: 1. Intolerance (7); 2. Defensiveness (6); 3. Expectations of self/others (5); 4. Communication problems (5); 5. Lack of knowledge (4), Lack of role understanding (4); 8. Lack of confidence (2); 9. Lack of experience (2)

Total
  Pre:  1. Lack of experience (24); 2. Lack of role understanding (20); 3. Lack of confidence (13); 4. Lack of knowledge (12); 5. Communication problems (8), Narrow/fixed view (8) [9. Defensiveness (5)]
  Post: 1. Defensiveness (15); 2. Intolerance (12); 3. Narrow/fixed view (11); 4. Lack of knowledge (9); 5. Lack of role understanding (8); 7. Lack of confidence (7); 11. Lack of experience (4)
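The theme-ranking step behind Tables V and VI reduces to simple frequency counting once the free-text responses have been hand-coded into themes. A minimal sketch, with invented sample data:

```python
# Sketch: ranking self-assessment themes by frequency, as in Tables V and VI.
# The hand-coded theme lists below are invented for illustration.
from collections import Counter

pre_themes = ["Communication skills", "Teamwork", "Communication skills",
              "Open minded", "Respect", "Communication skills"]
post_themes = ["Open minded", "Respect", "Open minded", "Positive attitude to IPE"]

for label, themes in [("Pre", pre_themes), ("Post", post_themes)]:
    ranked = Counter(themes).most_common(5)  # top five themes with their frequencies
    print(label, ranked)
```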

Table VII. Online discussion analysis: Self-assessment (SA) related entries.

Entry by      Discipline      No. entries   Total words   +ve SA comments   -ve SA comments   SA words   SA words % of total
Student 1     Nursing         4             910*          3                 1                 82         9
Student 2     Nursing         2             1211          4                 2                 186        15
Student 3     Nursing         3             357           1                 0                 12         3
Student 4     Nursing         2             824           1                 1                 29         4
Student 5     Physiotherapy   4             910*          4                 1                 131        14
Student 6     Medicine        5             1461          4                 2                 482        33
Student 7     Medicine        3             968           2                 1                 218        23
Student 8     Medicine        1             468           0                 1                 89         19
Past Student  Nursing         1             115           2                 0                 18         17
Total                         25            7224          21                9                 1247       17

*Some students made some entries in pairs; it was clear when this happened, and for these entries their shared total word counts were added together and then divided in half.

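The word-count bookkeeping behind Table VII, including the halving rule for jointly written entries described in the table footnote, can be sketched as follows; all names and figures here are invented for illustration.

```python
# Sketch: summarizing self-assessment (SA) content in online discussion (OLD)
# entries, in the spirit of Table VII. Entries are assumed to have been
# hand-coded for SA passages; all field names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Entry:
    student: str
    words: int            # total words in the entry
    sa_words: int         # words within passages coded as self-assessment
    shared: bool = False  # True if the entry was written jointly by a pair

def sa_share(entries: list[Entry]) -> dict[str, int]:
    """Per-student share of OLD words devoted to self-assessment (per cent)."""
    totals: dict[str, list[float]] = {}
    for e in entries:
        divisor = 2 if e.shared else 1  # joint entries split evenly, as in Table VII
        t = totals.setdefault(e.student, [0.0, 0.0])
        t[0] += e.words / divisor
        t[1] += e.sa_words / divisor
    return {s: round(100 * sa / w) for s, (w, sa) in totals.items()}

entries = [Entry("Student 6", words=400, sa_words=132),
           Entry("Student 6", words=300, sa_words=100, shared=True)]
print(sa_share(entries))  # {'Student 6': 33}
```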

Discussion

Limitations of findings and methods

The above findings should be interpreted with a number of caveats. These include questions about the accuracy of any self-report measures, including possible "well-intentioned" social desirability bias. Most students knew about our interest in establishing evidence for the effectiveness of this IPE program. Their responses to a number of items not reported here, however, suggest that students tended to be honest rather than glowing. Such doubts can also be moderated to some extent by the broad range of available data, such as direct observation of tutorials, a range of formal and informal communications, feedback from preceptors, the quality of community-based projects and the interaction of the students during their presentations.

There was also a range of necessarily subjective judgements involved in processing qualitative data. For example, in analysing the online discussion, it was not always clear how to distinguish between self-assessment and more general reflective comments. Further analysis of textual materials could be used to detect and describe some of the processes of change that occurred during the placements. For example, the use of pronouns seemed to reveal an increasing sense of belonging to a team rather than acting as an individual: a number of students commenced the OLD using "I", progressed to "(name of co-student) and I", and then to "we" most of the time. Some students chose to make joint entries, sometimes with one dictating to the other, who was at the keyboard entering the content. Whilst this may complicate or confound the use of the OLD to analyse personal self-assessments, a major objective of the placement was for students to develop enough rapport in a very short space of time to work effectively in a team. Most self-assessments were very clearly the work of individual students, so this potential drawback was considered a small cost compared with the insights provided into emerging collaborative practice.


Conclusion

According to most criteria, for example those of Forrester (1994) and Gillies (1982), IPE qualifies well as a complex system. Wolstenholme's advice then seems timely:

. . . the need for quantification is relative and depends on the purpose of the analysis . . . the true power of system dynamics to address problem solving lies in a judicious blend and intertwining of both qualitative and quantitative ideas, aimed at addressing as broad an audience as possible while remaining sufficiently rigorous to be useful. (Wolstenholme, 1999, p. 422)

If IPE evaluation is limited to controlled, randomized designs, it seems likely that there will never be an acceptable collection of studies to help inform future research, education and policy. By restricting these studies to such small slices of the IPE "pie" we would be ignoring the huge body of knowledge in related domains that allows us to investigate and learn in terms that are more meaningful to practitioners, students and applied settings. We would also fall prey to the criticism of unnecessary disciplinary fragmentation, and preclude the rich synergy possible with a broader view of, or consilience towards, IPE-related knowledge (Wilson, 1999).

Saul (1997) claims that "[We live in] a society which rewards and admires the control of information in its tiniest fragments of specialisation by the millions of specialists" (p. 37). Researchers and practitioners in the field of interprofessional education and practice are eminently poised to counter this claim. They have a unique perspective and an opportunity to lead the study and improvement of the complex human interactions inherently involved in interdisciplinary health care. This paper has argued that in evaluating IPE programs, there needs to be a place for well-informed, on-balance professional judgements based on a realistically attainable body of evidence. The examples above are only part of the available evidence, but they should leave no professional educator in doubt that rich and valuable interprofessional learning and self-assessment has taken place. The next challenge is to more firmly establish how such learning can be supported so that it transfers along the path in Figure 1, from IPE to IPP to health outcomes.

Acknowledgements

This project was funded by the Department of Human Services, Victoria, Australia. I also express appreciation for the substantial contributions of the RIPE Project Officer, Caroline Curtis.

References

AC Nielsen Research Services (2000). Employer satisfaction with graduate skills. Evaluations and Investigations Program. Department of Education, Training & Youth Affairs, Australia.
Alverno College Institute (1994a). Student assessment-as-learning at Alverno College. Milwaukee: Alverno College.
Alverno College Institute (1994b). Assessing general education outcomes for the individual student: Performance assessment-as-learning. Milwaukee: Alverno College.
Andrich, D. (1988). Rasch models for measurement. Thousand Oaks, CA: Sage.
Atkins, J. (1998). Tribalism, loss and grief: Issues for multiprofessional education. Journal of Interprofessional Care, 12, 303 – 307.
Baron, H. (1996). Strengths and limitations of ipsative measurement. Journal of Occupational and Organizational Psychology, 69, 49 – 56. Online document retrieved 2 June 2005 from http://www.psychology.org.nz/industrial/Baron%20H%20JOOP%201996%20Article%20ips_nor.doc


Barr, H. (2000). Interprofessional education: 1997 – 2000. A review by Hugh Barr, Chairman. London: The UK Centre for the Advancement of Interprofessional Education (CAIPE), November. Online document retrieved 2 February 2001 from www.caipe.org.uk/documents.html
Barr, H., & Low, H. (2003). Evaluating undergraduate interprofessional learning. Conference report, UK Centre for the Advancement of Interprofessional Education. Online document retrieved 5 August 2003 from http://www.caipe.org.uk
Biggs, J., & Collis, K. (1982). Evaluating the quality of learning. New York: Academic Press.
Boud, D. (1994). Assessment and learning: Contradictory or complementary? Keynote address to "Assessment for Learning in Higher Education: Responding to and Initiating Change", Conference of the Staff and Educational Development Association, Telford, UK, 16 – 18 May.
Campbell, J., & Johnson, C. (1999). Trend spotting: Fashions in medical education. BMJ, 318, 1272 – 1275.
Chalmers, D. (2003). "David Chalmers on the big conundrum: Consciousness". Interview with Natasha Mitchell, All in the Mind, ABC Radio National, 10 August. Online document retrieved 21 April 2006 from http://www.abc.net.au/rn/science/mind/s919229.htm
CSHE (Centre for the Study of Higher Education) (2002). Assessing learning in Australian universities. Web resource commissioned by the AUTC (Australian Universities Teaching Committee). Online document retrieved 15 August 2003 from http://www.cshe.unimelb.edu.au/assessinglearning/index.html
Duckett, S. (2005). Health workforce design for the 21st century. Australian Health Review, 29, 201 – 210.
Epstein, R. (1999). Mindful practice. JAMA, 282, 833 – 839.
Forrester, J. (1994). Learning through system dynamics as preparation for the 21st century. Keynote address for the Systems Thinking and Dynamic Modeling Conference for K-12 Education, 27 – 29 June, Concord Academy, Concord, MA.
Gendreau, P. (2002). We must do a better job of cumulating knowledge. Canadian Psychology, 43(3), 205 – 210.
Gillies, D. (1982). Nursing management: A systems approach. Philadelphia: W. B. Saunders Company/Harcourt Brace.
Gudykunst, W. (1998). Bridging differences: Effective intergroup communication, 3rd ed. Thousand Oaks, CA: Sage.
Hair, J., Anderson, R., Tatham, R., & Black, W. (1998). Multivariate data analysis, 5th ed. Upper Saddle River, NJ: Prentice-Hall.
Harvey, O., Hunt, D., & Schroder, H. (1961). Conceptual systems and personality organization. New York: Wiley.
Hatala, R., & Guyatt, G. (2002). Evaluating the teaching of evidence-based medicine. JAMA, 288, 1110 – 1112.
Headrick, L., Crain, E., Evans, D., Jackson, M., Layman, B., Bogin, R., et al. (1996). National Asthma Education and Prevention Program Working Group report on the quality of asthma care. American Journal of Respiratory and Critical Care Medicine, 154, S96 – 118.
Headrick, L., Wilcock, M., & Batalden, B. (1998). Interprofessional working and continuing medical education. British Medical Journal, 316, 771 – 774.
James, T. (2002). Improving outlook. Australian Doctor, 17 May, 41 – 42.
Johnson, D., & Johnson, R. (1992). Implementing cooperative learning. Contemporary Education, 63, 173 – 180.
Kember, D. (2003). To control or not to control: The question of whether experimental designs are appropriate for evaluating teaching innovations in higher education. Assessment & Evaluation in Higher Education, 28, 89 – 101.
Kember, D., Wong, A., & Leung, D. (1999). Reconsidering the dimensions of approaches to learning. The British Journal of Educational Psychology, 69, 323 – 338.
Leaviss, J. (2000). Exploring the perceived effect of an undergraduate multiprofessional educational intervention. Medical Education, 34, 483 – 486.
Litaker, D., Mion, L., Planavsky, L., Kippes, C., Mehta, N., & Frolkis, J. (2003). Physician-nurse practitioner teams in chronic disease management: The impact on costs, clinical effectiveness, and patients' perception of care. Journal of Interprofessional Care, 17, 223 – 237.
Loacker, G., Cromwell, L., & O'Brien, K. (1986). Assessment in higher education: To serve the learner. In C. Adelman (Ed.), Assessment in American higher education. Office of Research and Improvement, U.S. Department of Education, Washington, DC.
McInnis, C. (2000). The work roles of academics in Australian universities. Evaluations and Investigations Programme, Commonwealth Department of Science, Education & Training. Online document retrieved 15 August 2003 from http://www.detya.gov.au/archive/highered/eippubs/eip00_5/fullcopy.pdf
McNair, R., Stone, N., Sims, J., & Curtis, C. (2005). Australian evidence for interprofessional education contributing to effective teamwork preparation and interest in rural practice. Journal of Interprofessional Care, 19, 579 – 594.
Mezirow, J. (1994). Understanding transformation theory. Adult Education Quarterly, 44, 222 – 232.
Mezirow, J., et al. (2000). Learning as transformation. San Francisco, CA: Jossey-Bass.
Onwuegbuzie, A. (2002). Why can't we all get along? Towards a framework for unifying research paradigms. Education, 122, 518 – 530.


Pearson, P. (1999). Promoting interprofessional collaboration: The multidisciplinary face of primary health care. In J. Sims (Ed.), Primary health care sciences. London: Whurr.
Reeves, S. (2000). Community-based interprofessional education for medical, nursing and dental students. Health and Social Care in the Community, 8, 269 – 276.
Rokeach, M. (1960). The open and closed mind. New York: Basic Books.
Saul, J. (1997). The unconscious civilization. Melbourne: Penguin.
Schofield, R., & Amodeo, M. (1999). Interdisciplinary teams in health care and human services: Are they effective? Health & Social Work, 24, 210 – 219.
Schon, D. (1987). Educating the reflective practitioner. San Francisco, CA: Jossey-Bass.
Slavin, R. E. (1990). Cooperative learning: Theory, research and practice. Englewood Cliffs, NJ: Prentice-Hall.
Statistical Package for Social Sciences (SPSS) Version 12.0.1 (2003). Chicago, IL: SPSS, Inc.
Stephenson, K., Peloquin, S., Richmond, S., Hinman, M., & Christiansen, C. (2002). Changing educational paradigms to prepare allied health professionals for the 21st century. Education for Health, 15, 37 – 49.
Wade, D., Gage, H., Owen, C., Trend, P., Grossmith, C., & Kaye, J. (2003). Multidisciplinary rehabilitation for people with Parkinson's disease: A randomised controlled study. Journal of Neurology, Neurosurgery and Psychiatry, 74, 158 – 162.
Whyte, L., & Brooker, C. (2001). Working with a multidisciplinary team in secure psychiatric environments. Journal of Psychosocial Nursing & Mental Health Services, 39, 26 – 35.
Wilson, E. (1999). Consilience: The unity of knowledge. New York: Knopf.
Wolstenholme, E. (1999). Qualitative and quantitative modelling: The evolving balance. Journal of the Operational Research Society, 50, 422 – 428.
Wood, J. (2001). Interprofessional education – still more questions than answers? Medical Education, 35, 816 – 817.