METHODOLOGICAL ISSUES IN NURSING RESEARCH
Knowledge acquisition, synthesis, and validation: a model for decision support systems Eileen S. O’Neill
PhD RN
Professor, College of Nursing, University of Massachusetts Dartmouth, North Dartmouth, Massachusetts, USA
Nancy M. Dluhy
PhD RN
Professor, College of Nursing, University of Massachusetts Dartmouth, North Dartmouth, Massachusetts, USA
Paul J. Fortier
DSc
Associate Professor of Electrical and Computer Engineering, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, Massachusetts, USA
Howard E. Michel
PhD
Assistant Professor of Electrical and Computer Engineering, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, Massachusetts, USA
Submitted for publication 1 May 2003 Accepted for publication 1 December 2003
Correspondence: Eileen O’Neill, College of Nursing, University of Massachusetts Dartmouth, 285 Old Westport Road, North Dartmouth, MA 02747-2300, USA. E-mail:
[email protected]
O’NEILL E.S., DLUHY N.M., FORTIER P.J. & MICHEL H.E. (2004) Journal of Advanced Nursing 47(2), 134–142
Knowledge acquisition, synthesis, and validation: a model for decision support systems
Background. Decision tools such as clinical decision support systems must be built on a solid foundation of nursing knowledge. However, current methods of determining the best evidence do not include a broad range of knowledge sources. As clinical decision support systems will be designed to assist nurses in making critical decisions, methods must be devised to glean the best possible knowledge.
Aims. This paper presents a comprehensive knowledge development process used to develop a nursing clinical decision support system.
Discussion. The Nurse Computer Decision Support Project (N-CODES) is developing a prototype for a prospective decision support system. The prototype is being constructed on rules and cases generated by the best available evidence. To accommodate the range of decisions made in practice, different types of evidence are necessary. The process incorporates procedures to uncover, evaluate, and assimilate information to develop the knowledge domain for a clinical decision support system. Both formal and practice-based knowledge are included. The model contains several innovative approaches, including the use of clinical experts and a network of practising clinicians.
Conclusion. These strategies will assist scientists and practitioners interested in determining the best evidence to support clinical decision support systems.
Keywords: clinical decision support systems, evidence-based practice, point-of-care system, knowledge development, practice maps, evidence evaluation, nursing
134
2004 Blackwell Publishing Ltd
Introduction

As the debate continues over the best evidence to guide nursing practice, developers of clinical decision support systems (CDSS) must devise ways to translate knowledge into practice. Most existing technology uses either textbooks or available clinical guidelines as the support base for diagnoses and interventions. But is this approach adequate? When nurses rely on a system to help them make critical decisions, what evidence should be used to develop and maintain the system? Several other concerns arise, such as locating the best evidence to answer clinical questions, deciding what should be done in the ‘gray zones of nursing practice’ where answers are not apparent, and determining what merit experiential information should have as evidence.

The Nurse Computer Decision Support Project (N-CODES) is developing a prototype for a prospective decision support system. The project began in the autumn of 2002. The N-CODES staff consists of four senior researchers: two computer engineers and two nurses. The engineers have expertise in developing knowledge-based systems and in artificial intelligence. The nurse researchers have considerable experience in decision research and knowledge development. In addition, seven research assistants (RAs) are employed: three engineering graduate students and four nursing graduate students. The engineer RAs are helping to develop the decision architecture and hardware requirements. The nurse RAs were selected because of their experience in acute care and their interest in the project. They are developing practice maps and will work on user issues.

The CDSS being developed differs from traditional expert medical systems in several ways. It will be able to analyse individual client data quickly and recognize trends earlier than the clinician, who, caring for multiple clients, might miss these early signs. The system can then prompt clinicians to intervene or consider other possibilities before more serious problems develop. Clinicians, however, with their understanding of individual clients as well as additional situational information, may elect at any time to manually override the system’s recommendations. The system is being designed to be particularly helpful in assisting the novice in making more focused assessments, anticipating a deleterious client reaction, and initiating appropriate early actions.

The knowledge domain of the N-CODES Project is being constructed from rules and cases generated by the best available evidence. To accommodate the range of decisions made in practice, different types of evidence are necessary. In this paper, we outline the project work to date on a comprehensive framework for knowledge acquisition, validation and synthesis.

Background

An overview of the knowledge development project is presented in Figure 1 (Overview of the knowledge development process), which shows the steps of the process: develop the conceptual framework of novice nurse clinical decision-making; develop a typology for the knowledge domain; identify a specific focus area; search and select evidence; evaluate quality and rate individual evidence; develop individual rules; evaluate the strength of collective evidence; organize rules using rule categories; build practice maps by constructing procedural rules; and submit the results to a practice-based network as an interim evaluative process, consulting a clinical expert if data are inconsistent. Each step will be discussed in depth.

The first step was to develop the conceptual framework of novice nurse decision-making to guide the project. Clinical decision-making is a complex task requiring a knowledgeable practitioner, reliable informational inputs, and a supportive environment. Features of information, such as presentation (Lamond et al. 1996, Mellers et al. 1998), amount (Payne 1982) and complexity (Corcoran 1986), all contribute to decision-making uncertainty. Furthermore, time constraints and patient acuity increase decision-making difficulty (Henry 1991). As decisions become more complex, nurses use less normative thinking, collect fewer data, and rely more on short-cut strategies (O’Neill 1995, Cioffi & Markham 1997). All these strategies increase the likelihood of decision error. This problem is exacerbated in recent nursing school graduates. del Bueno (1994) found that new graduates were
not able to differentiate between clinical problems that needed immediate intervention and those that were less acute. Novices also attend to more irrelevant information (Tabek et al. 1996, Lamond & Farnell 1998), rely heavily on prescribed or standard protocols (Bruya & Demand 1985), and have less precise and/or incomplete patterns in memory (Patel et al. 1988). Clearly, lack of knowledge in a particular domain influences the ability to make accurate decisions. When information is complex, the amount of information is large, or the decision must be made under time pressure, less experienced nurses tend to make errors (Ciafrani 1984, O’Neill 1994).
Conceptual framework Figure 2 outlines the decision-making process of the novice. Pre-encounter data include two types of information – patient knowledge from the chart and communication with other personnel – and the working knowledge of the nurse. Working knowledge is the knowledge that nurses use spontaneously and routinely (Kennedy 1983). An example of this is the knowledge that immobility can lead to pneumonia. Working memory contains patterns, memories of distinct patients or composite representations of many experiences with patients with a given condition, e.g.
pneumonia. As these cognitive patterns are built on experience, a novice has a limited number of these knowledge structures. Early hypothesis generation is a standard feature of clinical decision-making: research clearly shows that clinicians begin generating hypotheses about client conditions early in the pre-encounter phase (Elstein et al. 1990, Offredy 1998). Based on these initial data, a novice nurse initiates standard and/or prescribed nursing care, which is modified by the setting and context and tailored by encounter data and direct knowledge of the patient. Salient concerns may emerge, that is, concerns that stand out as needing attention (Benner & Tanner 1987). A new event or change in patient status leads to further hypothesis generation, further assessment, and hypothesis testing. This process guides the nurse to the best inference or diagnosis about the changing status of the patient, and this inference then drives nursing action. For example, consider a postoperative patient who develops a cough and fever. A novice nurse will form hypotheses, but these may be limited in number and incomplete because of the limited number of patterns in memory. Problem-solving performance is highly dependent on the available knowledge
Figure 2 Novice nurse clinical decision-making model (pre-encounter data and the nurse’s working knowledge and clinical patterns feed hypothesis generation and selection; standard and/or prescribed nursing care, modified by setting and tailored by client condition, is initiated; the nurse monitors for new events or changes such as an acute cough, initiates measures specific to the change, and proceeds through hypothesis-driven assessment, hypothesis testing and revision to determine the best inference, which drives preventive actions and actions based on inference).
relevant to a specific hypothesis. If the novice does generate hypotheses about potential causes such as pneumonia and pulmonary emboli, they will assess lung sounds, etc. and determine the most likely inference. This inference will determine the next step in the process, namely nursing action. Thus, the initial hypotheses that are generated are extremely important as they determine all future actions. If the novice does not generate the correct hypothesis initially, errors often occur (Joseph & Patel 1990). The conceptual framework gives insight into shaping knowledge for practice. Clinical practice involves a certain rhythm, sequencing and direction (Larsen et al. 2002). The conceptual framework assists in identifying the sequencing and direction of decision-making and anticipating the decision points in practice. Identification of the decision points led to the categories that were developed and the questions that were asked to develop the knowledge base. The model reflects our active, longstanding research programs in knowledge synthesis and use, and clinical decision-making (Dluhy 1995, O’Neill 1994, 1995, 1999, O’Neill & Dluhy 1997, 2000).
Typology for development of practice maps

After the conceptual model was developed and refined, the next step of the process was to identify a specific knowledge area for development. As Forbes and Griffiths (2002) point out, this is a critical but often neglected area of evidence development. The focus of this project was the adult hospitalized on an acute (not critical) care unit. In a pilot study conducted by the first author, acute care nurses
indicated that respiratory conditions were commonly encountered in practice and that little decision support was available to help manage these problems (O’Neill 2001). Additionally, nurses on the research team have expertise in respiratory care, and so we decided to concentrate our efforts in this area. Next, project staff developed a typology of the major respiratory problems related to each symptom (see Table 1). These symptoms are cough, haemoptysis, chest pain, and dyspnoea (Albert et al. 1999). Cough was chosen as the initial symptom to develop because we considered it a universal symptom of respiratory illness. Once this choice was made, we adopted the recommendations of the American College of Chest Physicians National Consensus Report (Irwin et al. 1998), which divides cough into acute and chronic based on a 3-week duration. Then, clinical conditions related to each were determined. For example, under acute cough one category was infection. Acute bronchitis and pneumonia were two of the conditions subsumed under infection.
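To illustrate how such a typology could be organized for machine use, the sketch below encodes a fragment of Table 1 as a nested mapping. The condition names follow the paper’s typology; the data structure itself is our illustrative assumption, not part of the published system.

```python
# Illustrative sketch only: a fragment of the respiratory typology as a
# nested mapping, keyed by symptom -> duration -> category -> conditions.
RESPIRATORY_TYPOLOGY = {
    "cough": {
        "acute": {   # cough lasting less than 3 weeks (Irwin et al. 1998)
            "infectious": ["pneumonia", "acute bronchitis", "acute sinusitis"],
            "obstructive": ["acute aspiration", "pulmonary emboli"],
            "drug-related": ["ACE-inhibitor induced"],
        },
        "chronic": {  # 3 weeks or longer
            "infectious": ["TB"],
            "obstructive": ["COPD", "asthma", "chronic aspiration"],
            "restrictive": ["occupational lung disease"],
        },
    },
}

def conditions_for(symptom, duration):
    """Flatten all conditions filed under a symptom/duration pair."""
    categories = RESPIRATORY_TYPOLOGY.get(symptom, {}).get(duration, {})
    return [c for conditions in categories.values() for c in conditions]
```

A lookup such as `conditions_for("cough", "acute")` would return every condition the typology files under acute cough, which is the kind of retrieval a decision support system needs when generating candidate hypotheses.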
Searching and selecting evidence

Once the framework for the knowledge acquisition phase was set, we began devising ways to uncover the best available evidence. Because nursing research is characterized by scientific pluralism and methodological diversity, the search is extensive. Databases such as CINAHL, PubMed, and OVID are searched for processed evidence in the form of clinical guidelines, systematic reviews, narrative reviews, evidence-based practice reports, and meta-analyses. The web is searched for relevant materials. The team maintains a list of the most
Table 1 Typology of respiratory problems

Symptoms: chest pain, cough, dyspnoea, haemoptysis

Cough
Acute
  Infectious: pneumonia, acute bronchitis, acute sinusitis, Pneumocystis pneumonia, pertussis
  Obstructive: pulmonary emboli, acute aspiration, pneumothorax, pulmonary oedema
  Drug-related: ACE-inhibitor induced
Chronic
  Infectious: TB
  Obstructive: COPD, asthma, chronic aspiration, lung cancer, cystic fibrosis
  Restrictive: occupational lung disease
useful web sites. Evidence-based reports such as the Cochrane reviews are accessed (Cochrane Collaboration 2003). Individual studies are located, and recent, respected texts are sought. Only work published in English is considered. To ensure that the information used is up-to-date, the search emphasizes the most recent material.
Evaluating quality and rating individual evidence

Each piece of evidence is then evaluated for quality. This involves critical appraisal of each study’s internal validity. Internal validity is the extent to which the effects detected in the study are a reflection of reality, rather than resulting from the effects of extraneous variables (Burns & Grove 2001). To critically appraise each piece of evidence, preset criteria have been adapted from several sources. The work of the Agency for Healthcare Research and Quality (AHRQ) (West et al. 2002) is used to judge randomized clinical trials (RCTs), and Cook et al.’s (1997a, 1997b) work is used to appraise systematic reviews. The Users’ Guides to the Medical Literature series in JAMA by Guyatt et al. (1993) also provides direction in critical appraisal. An example of the questions developed by the N-CODES Project to evaluate qualitative studies is presented in Table 2. In addition to internal validity, each piece of evidence must be relevant to acute care. Since clinicians will be acting on the information in the CDSS, a measure of the overall strength of the evidence relating to each rule needed to be determined. To meet this objective, each individual piece of evidence is rated. Table 3 presents the evidence grid developed for the project. It outlines four levels of evidence based on the source. While the grid reflects a hierarchy of research design, it also includes preprocessed data such as reviews and reports. Systematic reviews, clinical trials and National Consensus Reports are considered level I evidence. Ledbetter and Stevens (2000) define systematic reviews as ‘an evidence summary that uses a rigorous scientific approach to combine results from a body of original research studies into a clinically meaningful whole’ (p. 102).
Table 2 Judging the quality of qualitative research reports
There is a clear statement of the aims of the research
The sampling strategy is clearly justified and linked to the target population
The description of the data collection and data analysis process is clear
The categories and themes are logical
The findings are clearly delineated
The study’s findings are transferable
The results will assist in the care of patients

Systematic reviews have the
advantage of being more up-to-date than textbooks, which are often outdated in the rapidly changing clinical world. Although these synopses are valuable, they may under-represent some critical nursing aspects of a problem, for example, quality of life (Cook et al. 1997a, 1997b). National Consensus Reports, included with level I evidence, are official statements of recognized government or professional groups. They usually identify relevant clinical issues, evaluate the validity and strength of the evidence, and develop clinically useful recommendations. RCTs are also considered level I evidence. For nursing, these trials are few in number and present some difficulties in translation to actual practice. The rigid control used in RCTs obliterates relational and contextual issues that are often important in nursing. Medical scientists are also beginning to recognize that RCTs may not be able to answer their most pressing clinical questions (Packer 2002). The second level includes narrative reviews, quasi-experimental studies, and published guidelines from respected organizations. Narrative reviews are conducted less rigorously than systematic reviews and include descriptive as well as quasi-experimental studies. These reviews may detail the context of the findings, which helps to determine applicability to practice. The end result is a descriptive summary of the reviewed research (Forbes & Griffiths 2002). Other quasi-experimental studies are also included in level II. These often lack the randomization and/or control of experimental studies, but can provide useful directions for practice. Published practice guidelines such as those developed by the American College of Physicians-American Society of Internal Medicine (ACP-ASIM) (2004) are also included at this level. Level III includes descriptive studies and published opinions of experts. These attempt to identify and understand the nature and attributes of nursing phenomena.
They often provide information important to nursing. Expert opinion is found in recognized texts and agency and committee publications. The fourth evidence level provides for experiential knowledge. As Brailer (1999) points out, the literature cannot be the sole means of evidence acquisition for a CDSS. Literature-based evidence provides only a fraction of the knowledge needed to build the N-CODES system, and quality problems exist in some of it. Many problems that nurses face on a daily basis have not been studied or have been examined only in an isolated study. Furthermore, conflicting findings are fairly common when trying to determine optimum nursing interventions. To resolve informational conflicts, clinician input is sought. The project has four nursing RAs who are employed in acute care and have several years of clinical experience collectively.
Table 3 Nurse Computer Decision Support Project levels of evidence

Level I: Systematic review of experimental, quasi-experimental, and descriptive studies (SR); randomized controlled trial of appropriate size (RCT); clinical trial without randomization, e.g. single group pre/post, cohort, time-series, or meta-analysis of cohort studies (CT); National Institutes of Health Consensus Report or nationally recognized group consensus report (CR)
Level II: Narrative review of non-experimental design studies (NR); case–control study (CC); correlational or case-series study (Cor); published practice guidelines, for example from professional organizations, health care organizations, or federal agencies (CG)
Level III: Descriptive studies, case studies, published opinion of experts, agencies, committees, recognized textbooks (DS, CS, EO)
Level IV: Expert opinion, clinical narratives, clinical observations (CE)
Developing data rules

After the quality of the evidence is determined, data rules are developed for each domain category. A data rule is an IF…THEN… statement that contains biophysical and/or psychosocial information. This is the knowledge that nurses use to monitor disease processes, evaluate therapeutic responses, and care for patients as they experience threatening and uncertain situations. Examples of individual rules for nosocomial pneumonia are:
If decreased level of consciousness, then risk for pneumonia. (CG, NR, EO)
If dental plaque present, then risk for pneumonia. (NR)
The letters in parentheses are the codes for evidence: NR indicating narrative review, CG clinical guideline, and EO published expert opinion.
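A minimal sketch of how such data rules might be represented in software follows; the field names and matching function are our illustrative assumptions, not the project’s actual schema.

```python
from dataclasses import dataclass

# Illustrative sketch of a data rule: an IF...THEN... statement tagged
# with the evidence codes that support it (e.g. NR, CG, EO).
@dataclass(frozen=True)
class DataRule:
    condition: str    # the IF part
    conclusion: str   # the THEN part
    evidence: tuple   # evidence codes backing the rule

RULES = [
    DataRule("decreased level of consciousness", "risk for pneumonia",
             ("CG", "NR", "EO")),
    DataRule("dental plaque present", "risk for pneumonia", ("NR",)),
]

def matching_rules(findings):
    """Return the conclusion of every rule whose condition is among the findings."""
    return [r.conclusion for r in RULES if r.condition in findings]
```

Keeping the evidence codes attached to each rule means the system can later report, for any recommendation it makes, what kind of evidence stands behind it.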
Evaluating the strength of cumulative evidence

After the quality of the evidence has been assured and the evidence rated, the strength of all of the evidence relating to each data rule is evaluated. In addition to quality, quantity and consistency are the other domains identified by the AHRQ (West et al. 2002). The N-CODES team has developed an evidence framework containing these domains (see Table 4). Evidence hierarchies are not new: in 2002, the AHRQ evaluated 40 systems that addressed grading the strength of evidence (West et al. 2002). The parent evidence hierarchy is the one developed by the Agency for Health Care Policy and Research (AHCPR), now the AHRQ, and the N-CODES levels of evidence share features with other ranking grids. The framework outlines three ratings for cumulative evidence: strong, sufficient, and marginal. To
Table 4 Ratings for cumulative evidence

Quantity | Level of evidence                 | Consistency | Strength of evidence
4        | Any level                         | Yes         | Strong
3        | At least one level I              | Yes         | Strong
3        | No level I, at least one level II | Yes         | Strong
3        | At least one level I              | No          | Sufficient
2        | No level I                        | Yes         | Sufficient
1        | One level I                       | –           | Sufficient
3        | No level I                        | –           | Marginal
2        | No level I                        | No          | Marginal
1        | Below level I                     | No          | Marginal
rate cumulative evidence, each source is ranked individually as previously discussed; for example, clinical guidelines are rated level I, expert opinion level IV, etc. Then the evidence for each rule is considered together to determine the cumulative rating. Applying the data rules cited previously, ‘If decreased level of consciousness, then risk for pneumonia’ (CG, NR, EO) would be ranked as strong evidence, while the evidence for the data rule ‘If dental plaque present, then risk for pneumonia’ (NR) would be rated as sufficient. This framework is currently being used and will be evaluated at the end of this year. If the evidence is inconsistent and conflicts cannot be resolved in either the literature-based evidence or among the project staff, expert opinion is sought. Experts who act as project consultants are practising clinicians with at least 5 years of recent experience caring for pulmonary patients.
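The rating step can be sketched as a small function. The thresholds below are one simplified reading chosen to reproduce the paper’s two worked examples; the project’s actual framework (Table 4) is richer, so this is an assumption-laden illustration rather than the published algorithm.

```python
# Rough sketch of cumulative-evidence rating, simplified from Table 4.
# Levels run from 1 (level I) to 4 (level IV); lower number = stronger source.
# The exact thresholds are our assumption, tuned to the paper's two examples.
def rate_evidence(levels, consistent=True):
    """Rate the pooled evidence behind one data rule: strong, sufficient, or marginal."""
    if not levels:
        return "marginal"
    quantity = len(levels)
    best = min(levels)  # strongest single source
    if consistent and quantity >= 3 and best <= 2:
        return "strong"
    if best <= 2 or (consistent and quantity >= 2):
        return "sufficient"
    return "marginal"

# Worked examples from the paper (treating the clinical guideline as level I):
rate_evidence([1, 2, 3])  # CG, NR, EO -> "strong"
rate_evidence([2])        # NR alone   -> "sufficient"
```

Encoding the rating as a function makes the grading policy auditable: when the framework is revised after evaluation, only the thresholds change, not the rules themselves.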
Organizing rules into rule categories

Once data rules are developed and rated, they are organized into categories. Guided by the conceptual framework, the categories were developed around the questions clinicians ask themselves to retrieve information from memory. For example, the risk category question is: Who is at risk for X? The intervention category question is: If I suspect X, what do I do? Rules are presently grouped in 15 categories, including emotional responses (What are the emotional responses that I need to be aware of?) and a ‘living with’ category (What are the issues of a person living with X?).

Figure 3 Partial practice map (a STATE box of patient variables, analogous to a nurse’s worksheet; baseline assessment rules such as assess respiratory rate, body temperature, heart rate, blood pressure, pain, and O2 saturation; and procedural rules such as ‘if pain do pain assessment’ and ‘if cough do cough assessment’ linking to cough assessment rules covering sputum amount, consistency, and colour, recent invasive procedures such as bronchoscopy, dyspnoea, lung sounds, recent upper respiratory infection, positional correlation, nasal secretions, timing, character, smoking history, and cyanosis).
Building practice maps by constructing linking procedural rules

Once data rules are categorized, practice maps for each clinical condition are constructed. A practice map is a template of the IF…THEN… data rules laid out to mimic a nurse’s decision-making process. These data rules are then linked together by procedural rules that connect domains of knowledge. A procedural rule follows the model: if condition X is present, then do Y. The ‘IF’ part of the rule relates to the domain knowledge in the data rules; the ‘THEN’ part invokes another task. Figure 3 shows a small cross-section of a practice map. The box labelled STATE contains the patient variables and is analogous to a nurse’s worksheet. The data rules are grouped in boxes marked ‘base line assessment rules’ and ‘cough assessment’. The procedural rules connect the data rules and patient information (STATE). As illustrated in Figure 3, the procedural rule ‘if cough’ links the data element cough to the new task ‘do cough assessment’. Procedural rules were developed by conducting mental simulation runs
to determine the direction and sequencing of actual nursing practice.
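A practice map can be thought of as a small rule graph in which procedural rules trigger assessment tasks. The toy traversal below is entirely our illustrative sketch, loosely modelled on the fragment in Figure 3, and not the N-CODES implementation.

```python
# Toy sketch of a practice map: procedural rules fire assessment tasks
# when their IF condition appears in the patient STATE. Illustrative only.
PROCEDURAL_RULES = [
    {"if": "pain", "then": "pain assessment"},
    {"if": "cough", "then": "cough assessment"},
]

ASSESSMENT_TASKS = {
    "pain assessment": ["assess pain location", "assess pain intensity"],
    "cough assessment": ["assess sputum amount", "assess lung sounds",
                         "observe for dyspnoea"],
}

def next_tasks(state):
    """Fire every procedural rule whose IF condition is in the patient state."""
    tasks = []
    for rule in PROCEDURAL_RULES:
        if rule["if"] in state:
            tasks.extend(ASSESSMENT_TASKS.get(rule["then"], []))
    return tasks
```

Given a STATE containing `"cough"`, the traversal returns the cough-assessment actions, mirroring the way the map’s procedural rule ‘if cough do cough assessment’ directs the novice toward a more focused assessment.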
Practice-based network evaluation process Acceptability and usability of the CDSS are our central goals. To accomplish these, a network of clinicians recruited from regional hospitals participates in the development and configuration of the knowledge domain. This practice network performs three essential functions: suggesting missing data by providing an experiential perspective, validating the appropriateness of data and procedural rules, and providing insight into the integration of the support system with practice. After orientation to the project, clinicians are asked to consider the cluster of symptoms and baseline data associated with a particular respiratory condition. Focus group and online methods are used to allow individual and group reflection. Based on a limited amount of data, the clinicians are asked to describe how they might proceed and the hypotheses that they might generate about the client’s condition. As the description progresses and new information is needed to move forward, we provide data consistent with the previously constructed practice map. For example, if the condition is pneumonia, the researcher will indicate (if asked) that the white blood count is elevated so that the clinician can continue with actions that would be taken. After generating ‘experiential’ decision pathways, the clinicians are introduced to the evidence-based practice map previously constructed by the research team. Each nurse is asked to determine whether the client scenarios they generated can be traced effectively
through the constructed pathways of data and procedural rules. Data and procedural rules might be added or refined, based on this reflective activity. Finally, the clinicians are asked to examine critically the goodness of fit between this decision support practice map and the constraints of the practice environment for a novice acute care nurse. Recommendations from this practice network are incorporated into each final practice map.

What is already known about this topic
• Most decision support systems are expert medical systems limited to a single decision model and application.
• Current clinical decision support systems use textbooks or clinical guidelines as informational sources.
• Novice nurses have limited support in the rapidly changing, stressful clinical environment.

What this paper adds
• Description of a prospective nursing clinical decision support system.
• A comprehensive method for uncovering, evaluating, and assimilating information for clinical decision support systems.
• Innovative approaches for knowledge development, such as integration of clinical experts and a practice network to incorporate practice knowledge.
Conclusion

As Clancey (1988) points out, ‘Building a large, complex system is necessarily iterative, with early versions serving as sketches for the idealized model. Like artists, we start with an idea, represent it, study what we have done, and try again’ (p. 346). The challenge for nursing is to integrate the full range of nursing knowledge into a point-of-care system such as we have described, and many conceptual and methodological issues remain before the N-CODES project has a working prototype. The process has been challenging, rewarding, time-consuming, and at times frustrating. Nevertheless, we are excited about the potential of the system and eager to test the prototype with practising clinicians.
Acknowledgements The project is funded by the National Science Foundation no. EIA 0218909. We wish to thank the research assistants
on the project: Elizabeth Chin, Veronica Coutu, Elizabeth Kelly, Rekha Madiraju, Sun Dip Pranhan, Jessica Ryan, and Beena Sarangarajan.
References

Albert R.K., Spiro S. & Jett J.R. (1999) Comprehensive Respiratory Medicine. Mosby, London.
American College of Physicians-American Society of Internal Medicine (ACP-ASIM) (2004) Clinical Practice Guidelines. Available at: http://www.acponline.org/sci-policy/guideline.
Benner P.E. & Tanner C.A. (1987) Clinical judgment: how expert nurses use intuition. American Journal of Nursing 87, 23–31.
Brailer D.J. (1999) Management of knowledge in the modern health care delivery system. Joint Commission Journal on Quality Improvement 25, 6–19.
Bruya M.A. & Demand J.K. (1985) Nursing decision making in critical care: traditional versus invasive blood pressure monitoring. Nursing Administration Quarterly 9, 19–31.
Burns N. & Grove S.K. (2001) The Practice of Nursing Research. W.B. Saunders, Philadelphia, PA.
Ciafrani K.L. (1984) The influence of the amount and relevance of data on identifying health problems. In Classification of Nursing Diagnosis: Proceedings From the Fifth Annual Conference (Kim M.J., McFarland G.K. & McLane A.M., eds), pp. 159–161. Mosby Yearbook, St Louis.
Cioffi J. & Markham R. (1997) Clinical decision making by midwives: managing case complexity. Journal of Advanced Nursing 25, 265–272.
Clancey W.J. (1988) Acquiring, representing, and evaluating a competence model of diagnostic reasoning. In The Nature of Expertise (Chi M., Glaser R. & Farr M., eds), pp. 343–418. Erlbaum, Hillsdale, NJ.
Cochrane Collaboration (2003) Cochrane Library. Electronic serial publication issued quarterly by Update Software Ltd. Available at: http://www.cochrane.org.
Cook D.J., Mulrow C.D. & Haynes R.B. (1997a) Systematic reviews: synthesis of best evidence for clinical decisions. Annals of Internal Medicine 126, 376–380.
Cook D.J., Greengold N.L., Ellrodt A.G. & Weingarten S.R. (1997b) The relation between systematic reviews and practice guidelines. Annals of Internal Medicine 127, 210–216.
Corcoran S.A.
(1986) Task complexity and nursing expertise as factors in decision making. Nursing Research 35, 107–112. del Bueno D.J. (1994) Why can’t new grads think like nurses? Nurse Educator 19, 9–11. Dluhy N.M. (1995) Mapping knowledge in chronic illness. Journal of Advanced Nursing 21, 1051–1058. Elstein A.S., Shulman L.S. & Sprafka S.A. (1990) Medical problemsolving: a ten year retrospective. Evaluation and the Health Professions 13, 5–36. Forbes A. & Griffiths P. (2002) Methodological strategies for the identification and synthesis of ‘evidence’ to support decisionmaking in relation to complex healthcare systems and practices. Nursing Inquiry 9, 141–155. Guyatt G., Sackett D. & Cook D. (1993) Users’ Guides to the Medical Literature. II. How to use an article about therapy or
2004 Blackwell Publishing Ltd, Journal of Advanced Nursing, 47(2), 134–142
141
E.S. O’Neill et al. prevention. A. Are the results of the study valid? Evidence-based medicine working group. Journal of the American Medical Association 270, 2598–2601. Henry S.B. (1991) Effects of level of patient acuity on clinical decision making of critical care nurses with varying levels of knowledge. Heart & Lung 20, 478–485. Irwin R.S., Boulet L.P., Cloutier M., Fuller R., Gold P., Hoffstein V., Ing A.J., McCool F.D., O’Byrne P., Poe R.H., Prakash U.B., Pratter M.R., Rubin B.K. (1998) Managing cough as a defense mechanisms and a symptom: A Consensus Panel Report of the College of Chest Physicians. Chest 114, 133–181. Joseph G.M. & Patel V.L. (1990) Domain knowledge and hypothesis generation in diagnostic reasoning. Medical Decision Making 10, 31–46. Kennedy M.M. (1983) Working knowledge. Knowledge: Creation, Diffusion, Utilization 5, 193–211. Lamond D., Crow R., Chase J., Doggen K. & Swinkels M. (1996) Information sources used in decision making: considerations for simulation development. International Journal of Nursing Studies 33, 47–57. Lamond D. & Farnell S. (1998) The treatment of pressure sores: a comparison of novice and expert nurses’ knowledge, information use, and decision accuracy. Journal of Advanced Nursing 27, 280– 286. Larson K., Adamsen L., Bjerregaard L. & Madison J.K. (2002) There is no gap ‘per se’ between theory and practice: Research knowledge and clinical knowledge are developed in different contexts and follow their own logic. Nursing Outlook 50, 204– 212. Ledbetter C.A. & Stevens K.R. (2000) Basics of evidence-based practice part 2: unscrambling the terms and processes. Seminars in Perioperative Nursing 9, 98–104. Mellers B.A., Schwartz A. & Cooke A. (1998) Judgment and decision making. Annual Review of Psychology 49, 447–477.
142
O’Neill E.S. (1994) The influence of experience on community health nurses’ use of the similarity heuristic in diagnostic reasoning. Scholarly Inquiry for Nursing Practice 8, 261–271. O’Neill E.S. (1995) Heuristics reasoning in diagnostic judgment. Journal of Professional Nursing 11, 239–245. O’Neill E.S. (1999) Strengthening clinical reasoning in graduate nursing students. Nurse Educator 24, 11–15. O’Neill E.S. (2001) Mapping nurse decisions in acute care. Unpublished data. O’Neill ES. & Dluhy N.M. (1997) A longitudinal model for fostering critical thinking and diagnostic reasoning. Journal of Advanced Nursing 26, 825–832. O’Neill E.S. & Dluhy N.M. (2000) Utility of structured care approaches in education and clinical practice. Nursing Outlook 48, 132–135. Offredy M. (1998) The application of decision making concepts by nurse practitioners in general practice. Journal of Advanced Nursing 28, 988–1000. Packer M. (2002) The impossible task of developing new treatment for heart failure. Journal of Heart Failure 8, 193–196. Patel V., Evans D. & Groen G. (1988) Biomedical knowledge and clinical reasoning. In Cognitive Science and Medicine (Evans D. & Patel V., eds), pp. 53–106. Massachusetts Institute of Technology, Cambridge, MA. Payne J. (1982) Contingent decision behavior. Psychological Bulletin 92, 382–402. Tabek N., Bar Tal Y. & Cohen-Mansfield J. (1996) Clinical decision making of experienced and novice nurses. Western Journal of Nursing Research 18, 534–547. West S., King V., Carey T.S., Lohr K., McKoy N., Sutton S. & Lux L. (2002) Systems to Rate the Strength of Scientific Evidence. Evidence report/Technology assessment no. 47 AHRQ Publication NO02-E016. Agency for Healthcare Research and Quality, Rockville, MD.
2004 Blackwell Publishing Ltd, Journal of Advanced Nursing, 47(2), 134–142