World J Surg (2011) 35:1214–1220 DOI 10.1007/s00268-011-1089-4

Defining Decision Making: A Qualitative Study of International Experts' Views on Surgical Trainee Decision Making

Sarah C. Rennie • Andre M. van Rij • Chrystal Jaye • Katherine H. Hall



Published online: 7 April 2011. © Société Internationale de Chirurgie 2011

Abstract

Background: Decision making is a key competency of surgeons; however, how best to assess decisions and decision makers is not clearly established. The aim of the present study was to identify criteria that inform judgments about surgical trainees' decision-making skills.

Methods: A qualitative free-text web-based survey was distributed to recognized international experts in Surgery, Medical Education, and Cognitive Research. Half the participants were asked to identify features of good decisions, characteristics of good decision makers, and essential factors for developing good decision-making skills. The other half were asked to consider these areas in relation to poor decision making. Template analysis of the free-text responses was performed.

Results: Twenty-nine (52%) experts responded to the survey, identifying 13 categories for judging a decision and 14 for judging a decision maker. Twelve features/characteristics overlapped (considered, informed, well timed, aware of limitations, communicated, knowledgeable, collaborative, patient-focused, flexible, able to act on the decision, evidence-based, and coherent). Fifteen categories were generated for essential factors leading to development of decision-making skills; these fall into three major themes (personal qualities, training, and culture). The categories compiled from the perspectives of good/poor were predominantly the inverse of each other; however, the weighting given to some categories varied.

Conclusions: This study provides criteria described by experts when considering surgical decisions, decision makers, and the development of decision-making skills. It proposes a working definition of a good decision maker. Understanding these criteria will enable clinical teachers to better recognize and encourage good decision-making skills and to identify poor decision-making skills for remediation.

The results of this study formed part of a presentation given at the Royal Australasian College of Surgeons Annual Scientific Congress, 9 May 2009, and an abstract was published in the Australian New Zealand Journal of Surgery.

S. C. Rennie (corresponding author), A. M. van Rij, K. H. Hall: Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand. e-mail: [email protected]

C. Jaye: Department of General Practice and Rural Health, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand

Introduction

Decision making is a key competency for any healthcare professional. Although surgical colleges in Australasia, the United Kingdom, the United States, and Canada have moved to competency-based models for teaching and assessing trainees, only Australasia (the Royal Australasian College of Surgeons, RACS) has explicitly identified decision-making skills as a principal competency of a good surgeon and attempted to define its components [1, 2]. In the United Kingdom, a Non-Technical Skills for Surgeons (NOTSS) program has been developed by the Royal College of Surgeons of Edinburgh, which identifies four non-technical skills used in the operating room, including decision making [3–5]. However, the components of decision making described by RACS and NOTSS focus predominantly on analytical decision methods, and the NOTSS tool is applied only to intraoperative decision making. If trainees' decision-making skills are to be formally taught and assessed in all settings, it is important to understand the desirable attributes and the criteria used to judge the quality of the decisions made.


The assumption has been that the attributes of good decision making are self-evident and present in most surgeons. Some opinion papers postulate how surgeons may make decisions [6, 7], and the ability to make consistent decisions is also assumed. However, studies show poor agreement between different surgeons' decisions regarding the need for elective surgery, and also by the same surgeons considering the same cases over time [8–10]. In deciding priority for elective surgery, consultant surgeons appeared to place different weight on a range of criteria, from diagnosis to future complications [9]. Although variation does not necessarily mean error, such differences raise questions as to the validity of these decisions.

There is now increasing attention regarding decision making within the context of the operating room [4, 11–14]. However, most of this research is context-specific, related to only one condition or operation, and, as such, is not completely transferable to other situations that surgical trainees may face, e.g., assessing acutely ill patients in different settings or attending to a deteriorating situation in the middle of the night. This rather narrow approach to surgical decision-making research, with its focus on the elective setting and predominantly expert decision making, limits the understanding of what makes a good surgical decision maker across the broader scope of surgical training.

A formal Medline literature review describing criteria for judging the quality of decisions by doctors found no directly relevant articles. One article explored criteria for judging a good decision within an environmental science context [15], but none of the criteria were transferable to the surgical setting. A Google search revealed a series of essays published in Effective Clinical Practice entitled "What is a good decision?" [16]. However, this series of opinion essays examines experts' views of how to judge patients' treatment decision making rather than clinician or trainee decision making. The one consensus from these essays was that a good treatment decision should not be measured by the patient outcome.

The aim of the present study was to describe criteria identified by international experts for judging the quality of surgical trainees' decision making in acute patient management, in a more general clinical setting. Expert opinion provides a basis and consistency essential for understanding how to teach and assess decision making and decision makers. The criteria developed will complement research done to date and can be incorporated into teaching and assessment by surgical colleges internationally.

Materials and methods

Evaluating the quality of a decision is a value judgment; as such, it is subjective, reflective, and qualitative. To gain a well-balanced concept of the characteristics of decision making in the surgical setting, the opinions of internationally recognized experts across the relevant disciplines of Surgery, Medical Education, and Cognitive Research were sought. Purposive sampling of experts in these areas was undertaken, and they were invited to complete a qualitative free-text survey.

Participants

International experts from Australia, New Zealand, Canada, the United Kingdom, the United States, and the Netherlands in Surgery, Cognitive Research, and Medical Education were identified by their recognized world-class experience, output (papers published in peer-reviewed journals, conference presentations), and roles (Directors of Education within surgical colleges, Professors/Heads of Departments). Opinions were canvassed beyond surgery because surgeons, chosen for input regarding nuances of the surgical setting and leadership in surgical training, may not have an in-depth understanding of decision-making theory and techniques, despite being expert decision makers themselves. Cognitive researchers were included to enhance understanding of cognitive reasoning factors, and medical educationalists were included for their expertise in educational theory. The majority of the invited experts (93%) were male, reflecting the higher proportion of men in positions of responsibility in these domains.

Data collection

The participants were divided randomly into two cohorts: one cohort considered three questions regarding good decision making and the other cohort poor decision making. It was postulated that "good" decision making might generate different criteria from "poor" decision making, thereby identifying positive and negative attributes to develop or avoid in training doctors. Every participant was asked to do the following:

• "List the features you feel best describe a 'good decision'/'poor decision' in the care of an acutely sick surgical patient."
• "List the characteristics you feel best describe the attributes of a 'good decision maker'/'poor decision maker' when that person manages an acutely sick surgical patient."
• "List at least three essential factors that you think are most likely to lead to the development of good/poor decision-making skills in a surgical trainee."

The survey was delivered to participants using the web-based SurveyMonkey.com software. A letter of introduction and explanation with a link to the survey was sent initially to 18 experts for each arm of the study, 6 in each of the expert domains of Surgery, Medical Education, and Cognitive Research. A "reminder" email message was sent to all non-responders and partial responders after 2 and 6 weeks, and a "thank you" email message was sent to all responders. Although the initial response rate was 44%, saturation (i.e., no new categories identified by subsequent responders) had not been reached for question one, and representation from surgeons was poor (13%). Therefore, the pool of experts was expanded, using the criteria above, to 28 in each arm of the study (total 56). Paper copies of the survey with stamped addressed envelopes were also distributed to known non-responders to encourage responses and increase the response rate. The responses were collected in the form of free-text comments to minimize cueing and subsequent bias in the ideas and concepts from the experts.

Data analysis

Thematic analysis of the data was undertaken by S.R. and K.H., using a combination of template analysis [17] and aspects of quantitative content analysis [18], such as quantifying the number of comments. A template of categories was generated inductively from the responses and applied to the data. The template was modified and adapted as new responses arrived, with new categories added and, in some cases, existing categories modified to encompass the responses more appropriately. The final template was then used to code all the data; the few disagreements found were discussed, and a consensus was reached as to the placement of comments. Some expert comments encompassed several categories; these were duplicated and placed verbatim in every category that applied. Categories were ranked according to the frequency of expert comments. Intercoder reliability tests were not conducted because, to quote Professor Nigel King, "It [inter-rater agreement] is based on at least an implicit assumption that one can objectively judge one way of defining themes as 'correct,' which flies in the face of the notion that texts are always open to a variety of readings" [19]. The categories gathered triangulate with criteria identified by both RACS and NOTSS [1, 20].

Comparisons were made between the perspectives of "good" and "poor" decision making (nominal groups), comparing the number of respondents who made comments in each category by means of the Fisher exact test on a 2 × 2 contingency table with two-tailed p values. Comparisons were also made between the responses of the three expert domains (nominal groups), comparing the number of respondents who made comments in each category, by the Fisher exact test on a 2 × n contingency table with two-tailed p values, performed by a statistician.
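For readers who want to reproduce this kind of analysis, the sketch below illustrates, in Python, the two quantitative steps described above: ranking coded categories by comment frequency and comparing the number of respondents commenting in a category between the two cohorts with a two-tailed Fisher exact test. This is a minimal sketch, not the authors' code; the coded responses, respondent identifiers, and category labels are hypothetical placeholders, not study data.

```python
# Minimal sketch (not the authors' analysis) of ranking template categories
# by comment frequency and comparing the two cohorts with a 2 x 2 Fisher
# exact test. All coded responses below are hypothetical placeholders.
from collections import Counter
from scipy.stats import fisher_exact

# Hypothetical coding output: respondent id -> set of categories assigned
coded = {
    "good": {
        "G01": {"informed", "considered", "timing"},
        "G02": {"informed", "patient focused"},
        "G03": {"considered", "evidence based"},
    },
    "poor": {
        "P01": {"timing", "informed"},
        "P02": {"timing", "considered"},
        "P03": {"timing"},
    },
}

# Rank categories by the number of respondents commenting in each cohort
for cohort, responses in coded.items():
    counts = Counter(cat for cats in responses.values() for cat in cats)
    print(cohort, counts.most_common())

# Fisher exact test for one category: respondents who did / did not comment
category = "timing"
good_yes = sum(category in cats for cats in coded["good"].values())
poor_yes = sum(category in cats for cats in coded["poor"].values())
table = [
    [good_yes, len(coded["good"]) - good_yes],
    [poor_yes, len(coded["poor"]) - poor_yes],
]
_, p_value = fisher_exact(table, alternative="two-sided")
print(f"{category}: two-tailed p = {p_value:.3f}")
```

The analogous 2 × n comparison across the three expert domains, which the study attributes to a statistician, is not shown here.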


Results

In total, 29 (52%) of the experts responded to the survey (76% from the initial invited group). In the good decision-making section there were 15 responses (4 cognitive researchers, 5 educationalists, 6 surgeons), and in the poor decision-making section there were 14 responses (5 cognitive researchers, 3 educationalists, 6 surgeons). Saturation was found to have been achieved after 20 participants for question one, 10 participants for question two, and 9 participants for question three. There was no difference in response rate by gender. The non-responder group differed from responders in having a larger proportion of experts based in the United States: only 24% of the experts based in the United States responded, compared to 83% based in Australasia and Canada, 60% based in the United Kingdom, and 50% in the Netherlands. There was no difference in the quality or quantity of the comments between the initial group and the second group of experts recruited.

Categories generated for good and bad decision making were the converse of each other; however, the emphasis given to each category, as measured by frequency, differed for some categories. The range of number of comments per respondent was 1–7 with a mode of 4 comments for those considering good decision making, and 2–8 comments per respondent (mode 3) for those considering poor decision making. The most consistent features of a surgical decision were that it was well informed ("all relevant information has been considered") and well considered ("soundly thought through") (Fig. 1). The emphasis was significantly different between the two perspectives for three categories: the "timing" of a decision was emphasized more in a bad decision (p = 0.008), whereas being "patient focused" (p = 0.021) and being "evidence based" (p = 0.005) were of greater importance to a good decision.

Fig. 1 Categories generated from experts' free-text comments for "Features of a good/poor decision," ordered by frequency of comments (number of respondents making the comments in brackets, where different to the number of comments)

A feature of the poor decision descriptors was the inclusion of qualified or dichotomous statements within three of the categories. The decision's timing was important, but a decision could be poor if it was too slow ("takes too much time") or too fast ("rushed"). The decision "not being informed" could be the result of too much or too little information being gathered ("collecting irrelevant data" or "acting without adequate clinical assessment"). Similarly, a "poorly considered" decision was a matter of too much or too little analysis of the clinical data available ("relying on analytical approach" or "not being analytical enough").

The characteristics of a good and a poor decision maker (Fig. 2) mirrored many of the features identified for a good/poor decision. The number of comments generated was similar, with a range of 1–10 comments per respondent (mode 5) for a good decision maker compared to 2–9 comments per respondent (mode 5) for those considering poor decision makers. Being considered and well informed were the most common attributes describing a good decision maker. Additional categories related exclusively to decision makers rather than decisions; these included an awareness of their limitations, the ability to act on the decision, being well rested, and having psychological health. More comments were made relating to "timing" for poor decision makers compared to good decision makers; however, this difference was not statistically significant.

Fig. 2 Categories generated from experts' free-text comments for "Characteristics of a good/poor decision maker," ordered by frequency of comments (number of respondents making the comments in brackets, where different to the number of comments)

Categories generated as essential for developing good/poor decision-making skills can be grouped into three broad themes: the personal qualities of the trainee (including reflection), their training (decision-making skills, clinical/technical skills, communication skills, knowledge, experience, team work training, time management skills, feedback, and assessment), and the type of culture/environment within which the trainee is working (mentoring, supervision, role models, and environment) (Fig. 3). The most frequently considered factor represented the personal qualities of the trainee (e.g., "high emotional intelligence," "self-confidence," "lack of arrogance," "ability to work under pressure"). Knowledge, experience, and training in decision making were the next most frequently considered factors. The least frequently mentioned factors describe aspects of the relational culture/environment theme. The number of comments was similar between the two groups, with a range of 3–9 comments per respondent (mode 3) from the respondents considering good decision making and 3–7 comments per respondent (mode 3) from those considering poor decision making. Again, a difference in emphasis was seen between the two cohorts, with good personal qualities and communication skills (p = 0.035) appearing more important in developing good decision-making skills, while an unsupportive environment and lack of supervision were expressed as contributing to bad decision-making skills.

Fig. 3 Categories generated from experts' free-text comments for "Essential factors in the development of good/poor decision-making skills," ordered by frequency of comments (number of respondents making the comments in brackets, where different to the number of comments)

Responses to all questions varied little between the expert groups. Medical educators were significantly more likely than surgeons or cognitive researchers to include comments in the "collaborative" category regarding the decision maker (p = 0.022). Medical educators also made significantly more comments about the role of clinical and technical skills in the development of decision-making skills compared to surgeons or cognitive researchers (p = 0.021).

Discussion

The critical categories identified from experts' comments have been used to suggest a working definition of a good surgical decision maker:

A good surgical decision maker is a surgeon or trainee who makes well informed and considered, timely, patient focused decisions which are backed by a sound knowledge and appropriate evidence base, while recognising their own limitations, the need for collaboration, reflection and clear communication to bring about an appropriate action.

It is intended that this working definition has the potential to add structure to the teaching, learning, and assessment of surgical trainees' decision-making skills. The need "to bring about an appropriate action" has been highlighted in the above definition because it is seen as a crucial aspect that distinguishes decision making from problem solving and judgment. No decision is of any value unless it is acted upon; even the action of restraint should be undertaken thoughtfully, as watchful waiting, rather than by default as a non-decision. Acting on a decision is an area that needs explicit focus in the teaching and training of surgeons.

The lack of emphasis on the outcome is consistent with the literature [16]. A good decision may be more likely to lead to a better patient outcome, but outcome alone cannot measure the quality of a decision. Good decisions may result in death or morbidity, and in practice a poor patient outcome may be more likely to prompt reflection and encourage consideration of the deficiencies in the decision-making process. However, bad decisions may remain unrecognized or be excused when patients' outcomes are good, and a focus on patient outcome alone, rather than on the criteria identified by the experts, will miss opportunities to identify poor decisions that have a good outcome. Identifying the components of a decision explicitly will enable focused reflection on all decisions a trainee makes and identify areas for improvement regardless of the patient outcome.

The minimal difference in the types of responses from each expert group may reflect the strength of the methodology used; although surgeons and medical educators may not have the deliberate vocabulary of cognitive researchers, the coding of free-text responses enables their own language as expert decision makers to be coded. The exception, medical educators' greater emphasis that a good decision maker should be collaborative and have training in clinical/technical skills, may reflect biases from medical educators toward the encouragement of collaboration in education and the recent emphasis on clinical skills training. Medical educators may also have expectations of surgeons that those in surgery assume to be present or take for granted.

When considering the perspectives of good and poor decision making, most categories show "good" decisions/decision makers to be the inverse of "poor" decisions/decision makers. This was not universal, however, and therefore exploring decision making from both a good and a bad perspective was beneficial. The unique perspective of poor decision making was the dichotomous nature of the comments for three categories: "timing," "informed," and "considered." The skill of getting the timing right appears pertinent to avoiding poor decisions. These comments may simply reflect that decision making is a continuum with regard to these categories, and that good decision making lies somewhere in the middle. When assessing junior trainees, attention should be paid not only to how well informed and considered the decision was, but also to its timing, to identify any poor performance that requires remediation. More respondents considering a good decision made comments related to the categories of "being patient focused" and "evidence based" compared to those considering a poor decision. This may reflect that these categories become redundant in the context of a poor decision if it is poorly informed, poorly considered, and poorly timed.

It is interesting that the personal qualities of the trainee are considered to be the most essential factor influencing the development of good decision-making skills. The respondents did not allude to a specific personality type but to qualities they felt were important. Personality traits are generally thought to be relatively stable [21], encouraging a focus on the selection of trainees with "good qualities." This raises the question of how to select for these personal qualities and whether there is a desirable "surgical personality" [22].

Training and lack of training were also identified as having a contributory role, the former to good decision making and the latter to poor decision making. Research on surgical training in non-technical skills (including decision making) within the operating room has been positively received by surgeons as a way of increasing their understanding of the decision-making process [5]. This environment, however, covers only a fraction of the situations in which a surgeon is expected to make decisions, most of which occur in much less structured and supervised environments. Similar planned programs must be developed to include decision making on the wards, in the outpatient clinic, and in the emergency department.

The present study identified a supportive environment, with good supervision, mentoring, and role modeling, as essential for developing decision-making skills. This is important for those trainees lacking the personal qualities considered essential for good decision-making skills. It has been seen in the aviation industry that the relationship between junior and senior members of a team influences decision making: steep hierarchical relationships inhibit junior decision making and, in turn, have contributed to major airline disasters [23]. This may also apply in the surgical setting. Further work needs to be done to explore hierarchical inhibitions and their impact on surgical trainee decision making.

A major strength of this study is the participation of international experts across three domains. This is the first study to investigate multiple perspectives on what constitutes a good/poor decision in order to gain some uniformity in our understanding. Another strength comes from dividing the experts into two cohorts and elucidating differences in perception when considering good and poor decisions. A limitation may be perceived in the small number of participants. However, a response rate of 52% is relatively good for an online survey, where the mean response rate has usually been found to be about 33% [24, 25], and further participants were not selected because saturation was achieved. A further limitation is the free-text response method used. Closed questions would have allowed interactions between the identified criteria to be analyzed. However, the free-text nature of the questions ensured that the responses from the experts were not biased or cued. A further study using the categories identified, asking the experts to rank them or rate their importance to decision making, may be useful to clarify decision-making components further.

This study identifies the features expected by experts of a good and poor decision, the characteristics of a good/poor decision maker, and essential factors in the development of good/poor decision-making skills in the acute clinical surgical setting. It provides a working definition for a good surgical decision maker. The definition is generic and should be applicable to much of the decision making in clinical surgery. It provides a framework with which to inform the development of teaching, learning, and assessment tools for surgical decision-making skills in surgical trainees. The complexity of these skills demands explicit decision-making teaching programs or modules for surgical trainees to develop competence in this area as surgeons. These modules must take into account the relationship between decision making and other surgical competencies. The relevance to other acute clinical disciplines should also be explored to determine the need for common approaches to teaching, learning, and assessment of clinical decision-making skills, commencing in the undergraduate curriculum and empowering lifelong learning in the medical graduate.

Acknowledgments

The authors are grateful to Research Professor Peter Herbison, MSc (Otago), biostatistician, for advice regarding statistics and for undertaking some of the statistical analysis. Thanks are also due to all the international experts who gave their time to complete the survey. The study was designed by Sarah C. Rennie with the guidance of Andre M. van Rij. Sarah C. Rennie delivered the study and collected the data and, with Katherine H. Hall, undertook the data analysis. All authors were involved in the interpretation of the data. Sarah C. Rennie wrote the first draft of the paper; all authors contributed to the various drafts and approved the final version for submission. Andre M. van Rij is the guarantor.

References

1. Surgical Competence Performance Working Party (2008) Surgical competence and performance. Royal Australasian College of Surgeons, Melbourne, pp 1–36
2. Dickinson I, Watters D, Graham I et al (2009) Guide to the assessment of competence and performance in practising surgeons. ANZ J Surg 79:198–204
3. Yule S, Flin R, Paterson-Brown S et al (2006) Development of a rating system for surgeons' non-technical skills. Med Educ 40:1098–1104
4. Yule S, Flin R, Maran N et al (2008) Surgeons' non-technical skills in the operating room: reliability testing of the NOTSS behavior rating system. World J Surg 32:548–556
5. Flin R, Yule S, Paterson-Brown S et al (2007) Teaching surgeons about non-technical skills. Surgeon 5:86–89
6. Hall J, Ellis C, Hamdorf J (2003) Surgeons and cognitive processes. Br J Surg 90:10–16
7. Flin R, Youngson G, Yule S (2007) How do surgeons make intraoperative decisions? Qual Saf Health Care 16:235–239
8. Rutkow IM (1982) Surgical decision making: the reproducibility of clinical judgment. Arch Surg 117:337–340
9. MacCormick AD, Parry BR (2006) Judgement analysis of surgeons' prioritization of patients for elective general surgery. Med Decis Making 26:255–264
10. Rutkow IM, Gittelsohn AM, Zuidema GD (1979) Surgical decision making: the reliability of clinical judgement. Ann Surg 190:409–417
11. Jacklin R, Sevdalis N, Harries C et al (2008) Judgment analysis: a method for quantitative evaluation of trainee surgeons' judgments of surgical risk. Am J Surg 195:183–188
12. Sarker S, Rehman S, Ladwa M et al (2008) A decision-making learning and assessment tool in laparoscopic cholecystectomy. Surg Endosc 23:197–203
13. Sarker SK, Chang A, Vincent C (2008) Decision making in laparoscopic surgery: a prospective, independent and blinded analysis. Int J Surg 6:98–105
14. Jacklin R, Sevdalis N, Darzi A et al (2008) Mapping surgical practice decision making: an interview study to evaluate decisions in surgical care. Am J Surg 195:689–696
15. Dietz T (2003) What is a good decision? Criteria for environmental decision making. Hum Ecol Rev 10:33–39
16. Ratliff A, Angell M, Dow RW et al (1999) What is a good decision? Eff Clin Pract 2:185–197
17. King N (2004) Using templates in the thematic analysis of text. In: Cassell C, Symon G (eds) Essential guide to qualitative methods in organizational research. Sage, London, pp 256–270
18. Neuendorf KA (2002) The content analysis guidebook. Sage Publications, Thousand Oaks
19. King N (2007) Template analysis: quality checks. Last updated 27 April 2007. Available from: http://www.hud.ac.uk/hhs/research/template_analysis/technique/qualityreflexivity.htm. Accessed 27 March 2011
20. Flin R et al (2006) The non-technical skills for surgeons (NOTSS) system handbook, vol 1.2. University of Aberdeen, Aberdeen
21. American Psychiatric Association Task Force on DSM-IV (2000) Diagnostic and statistical manual of mental disorders: DSM-IV-TR, 4th edn. American Psychiatric Association, Washington, DC, p 686
22. Bann S, Darzi A (2005) Selection of individuals for training in surgery. Am J Surg 190:98–102
23. Foushee HC (1984) Dyads and triads at 35,000 feet: factors affecting group process and aircrew performance. Am Psychol 39:885–893
24. Hamilton MB (2009) Online survey response rates and times. Available at http://www.supersurvey.com. Accessed 27 March 2011
25. Kaplowitz MD, Hadlock TD, Levine R (2004) A comparison of web and mail survey response rates. Public Opin Q 68:94–101