Future Directions in Evaluation Research: People, Organizational, and Social Issues
B. Kaplan¹, N. T. Shaw²
¹ Yale University School of Medicine, New Haven, CT, USA
² THINK (The Health Informatics NetworK), Centre for Healthcare Innovation and Improvement, BC Research Institute for Children’s and Women’s Health, Vancouver, BC, Canada
Summary
Objective: To review evaluation literature concerning people, organizational, and social issues and provide recommendations for future research. Method: Analyze this research and make recommendations. Results and Conclusions: Evaluation research is key in identifying how people, organizational, and social issues – all crucial to system design, development, implementation, and use – interplay with informatics projects. Building on a long history of contributions and using a variety of methods, researchers continue developing evaluation theories and methods while producing significant, interesting studies. We recommend that future research: 1) Address concerns of the many individuals involved in or affected by informatics applications. 2) Conduct studies at sites of different types and sizes, and with different scopes of systems and different groups of users. Do multi-site or multi-system comparative studies. 3) Incorporate evaluation into all phases of a project. 4) Study failures, partial successes, and changes in project definition or outcome. 5) Employ evaluation approaches that take account of the shifting nature of health care and project environments, and do formative evaluations. 6) Incorporate people, social, organizational, cultural, and concomitant ethical issues into the mainstream of medical informatics. 7) Diversify research approaches and continue to develop new approaches. 8) Conduct investigations at different levels of analysis. 9) Integrate findings from different applications and contextual settings, different areas of health care, studies in other disciplines, and also work that is not published in traditional research outlets. 10) Develop and test theory to inform both further evaluation research and informatics practice.
Keywords
Evaluation; technology assessment; medical informatics; telemedicine; organizational culture; attitudes towards computers; implementation; barriers; people, organizational, and social issues; sociotechnical; ethical issues; qualitative methods; ethnographic methods; multi-method; human-computer interaction
Methods Inf Med 2004; 43: 215–31
With increased pressures for implementing information and communication technologies (ICT), spending on information technology in health care is expected to rise [1]. Introducing these technologies involves change, in at least three respects. Changes occur due to the way introducing new systems is handled. They also occur because of concomitant changes in communication flows and work patterns [2]. Changes occur, too, because of organizational repercussions on the way a new system functions and is used [3]. This complex interplay of changes involves a variety of people, organizational, and social issues, including human-computer interaction, sociotechnical, cultural, and ethical concerns. Evaluation [4-7] can serve multiple purposes during these change processes. Evaluation can help in identifying where change may need fine tuning or major adjustment, thereby preventing harm and minimizing disruption. It can help in providing evidence for decision-making and extending knowledge [8]. Evaluation can explore how changes at different levels of analysis interrelate, so that changes in work roles or communication patterns between clinicians, for example, may affect how a hospital functions. In these ways, evaluation can enhance decision making for managing change in itself, or even better, for managing change as a means for strategic organizational development [5-7]. Evaluation and change management [9-11], then, address similar concerns and involve related theories [5, 12-14]. Evaluation researchers employ numerous approaches, including: randomized controlled trials, experimental designs, simulation, usability testing, cognitive studies, record and playback techniques, network analysis, ethnography, economic and organizational
impacts, action research, content analysis, data mining, studies based on actor-network or structuration theory, ethnomethodology, balanced scorecards, soft systems and participatory design methodologies, surveys, qualitative methods of data collection and interpretive analyses of it, technology assessment, benchmarking, SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis, and social interactionism [5-7, 14-23]. Many evaluations focus on impacts. People, organizational, and social issues are central to understanding those impacts, and hence, to evaluation. The UK’s National Health Service recognized this in advising that evaluations include business, user (i.e., organizational and patient), and technical impact [15]. In this paper, we review streams in evaluation research (sometimes called “assessment”) as related to people, organizational, and social issues, and we suggest directions for further research.
Foundation Studies
Early evaluation studies of medical informatics applications are represented in [4]. These early papers, many by researchers who still are active in the field, report insights that remain relevant and serve as a foundation on which an expanding body of knowledge has been based. One, an evaluation of the PROMIS system at the University of Vermont [24, 25], was a multi-method study by external evaluators. This early study is exemplary both methodologically and for the people, organizational, and social issues insights it produced. The study identified how the implementation of a new medical record and clinical guidance
system was related to issues concerning professional roles and status; change management; user involvement; organizational communication; and relationships between the structure and organization of the medical record, philosophy of health care delivery, and clinical work – all still important in electronic record and clinical decision support systems. Other early evaluations focusing on these kinds of issues include a series of studies at what was then called the Rockland Research Institute in New York (e.g., [26, 27]), and another series at Methodist Hospital in Indiana (e.g., [28, 29]). The new hospital information system at El Camino Hospital in California also was extensively evaluated by independent researchers during the 1970s [30]. Originally developed by Lockheed Missiles and Space, this system became the popular Technicon, then TDS, and, later, the Eclipsys system. During the 1970s, analyses also began to appear of lessons learned and prescriptions for success [31]. Management issues, user acceptance, and diffusion and adoption of information systems have been discussed in the medical informatics literature at least since the early 1980s [32]. Authors have linked diffusion and communication studies, evaluation research, and change management since early in the development of the field (e.g., [33-40]). For example, studies of the diffusion of a hospital information system showed that physicians’ professional networks influenced adoption and could be used to encourage system use [28, 29]. Others analyzed medical informatics applications according to innovation characteristics known to affect adoption [36, 41].
Current Evaluation Research
In a recent literature review, Kaplan summarized people, organizational, and social issues identified in evaluation, and discussed the fit of information and communication technologies with various aspects related to these concerns [42]. These include how information and communication technologies fit other contextual issues surrounding their development, implementation,
and use. Researchers have addressed work flow and routines [14, 43-52]; clinicians’ level of expertise [48]; values and professional norms [53-55]; institutional setting, history, and structure [37, 50, 56]; communication patterns [37-39, 52, 55]; organizational culture, status relationships, control relationships, division of labor, work roles, and professional responsibility [25, 49, 56, 57]; cognitive processes [58]; congruence with existing organizational business models and strategic partners [59]; compatibility with clinical-patient encounter and consultation patterns [50, 52, 60]; and the extent to which models embodied in a system are shared by its users [3, 14, 55, 61, 62]. Authors have also addressed (in various combinations) fit between information technology and how individuals define their work, user characteristics and preferences (e.g., information needs), the clinical operating model under which a system is used, and the organization into which it is introduced [37, 43, 55, 63-68]. Others have focused on interrelationships among key components of an organization, such as organizational structure, strategy, management, people’s skills, and technology [69]; and compatibility of goals, professional values, needs, and cultures of different groups within an organization, including developers, clinicians, administrators, and patients [69-76]. Some have discussed transferring to another country a system designed for use under a different country’s health care system [50, 53, 77, 78]. Some have investigated incorporating user requirements and involvement [79, 80] and patient preferences [81]. In addition, studies have been done on ways in which informatics applications embody values, norms, representations of work and work routines; assumptions about usability, information content and style of presentation, and links between medical knowledge and clinical practice; and how these assumptions influence system design [17, 50, 52, 55, 62, 73, 82-86]. The significance of these research contributions related to people, organizational, and social issues in medical informatics is evidenced by the International Medical Informatics Association (IMIA), American Medical Informatics Association (AMIA),
European Federation for Medical Informatics (EFMI), and European Science Foundation all supporting working groups in this areaᵃ. These working groups have been building on years of activities and research by organizing conference sessions and publications. However, researchers recognize that these concerns have become more challenging because technological and institutional changes in health care contribute to making complex organizational, social, and personal arrangements even more complex. A recent White Paper reflected this when proposing a research agenda for key people and organizational issues [87].
ᵃ IMIA Organizational and Social Issues Working Group (WG13), and Technology Assessment and Quality Development in Health Informatics (WG15) – http://www.imia.org; AMIA People and Organizational Issues Working Group – http://www.amia.org; EFMI Organisational Impact in Medical Informatics Working Group, and Assessment of Health Information Systems Working Group – http://www.efmi.org; ESF sponsored HIS-EVAL (Health Information Systems Evaluation) – http://bisg.umit.at/hiseval/
Barriers to Evaluation
Evaluation is inherently political: what happens when a new technology is introduced is both affected by organizational and implementation processes and affects them. Evaluation, too, is political in nature because it concerns needs, values, and interests of different stakeholders [88]. Further, evaluation is increasingly important because of its role in government regulation and policy concerning technologies in health care [89, 90]. It may provide the political rationale for adoption or discontinuation of projects on apparently disinterested scientific grounds [90]. Some, therefore, resist evaluation for fear of potential disruptiveness of the investigation or its findings [8]. Some fear the blame that may fall on those who point out difficulties [91]. Others question what can and cannot be evaluated [92].
Some recognize that evaluation methods must fit managerial and scientific preconceptions of what constitutes worthwhile methods and measures of success [93]. Still others may view evaluation results as site specific. There are many barriers to evaluation, including methodological complexity of the undertaking, motivation, and ethical considerations [94, 95]. For all these reasons, some discount either conducting evaluation studies or accepting their results [96, 97].
Evaluation Frameworks
A variety of efforts has been undertaken to address these concerns while providing a structure for evaluations. For example, the National Health Service in the UK issued an evaluation framework with the intent to improve the quality of evaluations [15]. In the European Union, the first phase of ATIM, the Accompanying Measure on Assessment of Information Technologies In Medicine, was undertaken as an Accompanying Measure in the Programme for Telematics Systems in Areas of General Interest (DG XIII) in the area of AIM (Advanced Informatics in Medicine). Its goal was to develop consensus on both methods and criteria for assessment [96]. The IMIA Technology Assessment and Quality Development in Health Informatics Working Group (WG15) is directly concerned with these issues [8], as are the aforementioned sister working groups. In addition, a number of authors have suggested frameworks for conducting evaluation studies. Green and Moehr [98] reviewed six major Canadian frameworks alone, noting that documented empirical or theoretical support is lacking. Others draw on a variety of theories, suggest different methods, and address a variety of concerns. For example:
● Aarts and Peel [64, 99] discuss stages of implementation and change.
● Chismar and Wiley-Patton [100] apply a technology acceptance model (TAM) to studying the Internet in pediatrics, and Gagnon et al. [101] use TAM in studying telemedicine adoption.
● Grant, Plante, and LeBlanc [102] argue for standardization in evaluation approach in order to avoid partial evaluation approaches and hence avoid remoteness from actual use. They propose TEAM (Total Evaluation and Acceptance Methodology), based on the three dimensions of role, time, and structure, as a basis for future standardization.
● Kaplan’s 4Cs framework [12, 13, 44, 103] – focusing on communication, control, care, and context – and her set of evaluation guidelines call for flexible multi-method longitudinal designs of formative and summative evaluations that incorporate a variety of concerns. She elsewhere [12] suggests a matrix with one dimension, stages of patient treatment [104], and as a second dimension, beneficiaries of the information resource [105]. Together with other colleagues, Kaplan et al. [87] propose a research agenda of key issues based on a matrix crossing organizational level of analysis with different social science disciplines.
● Kazanjian and Green [106] propose a “Comprehensive Health Technology Assessment” framework.
● Lauer, Joshi, and Browdy [107] illustrate how an equity implementation model can apply to evaluating user satisfaction with a patient scheduling system.
● Kokol [108, 109] recommends specific software tools for evaluation.
● Lehoux and Blume [89] suggest a sociopolitical technology assessment framework that addresses four sets of issues: 1) the potential actors, 2) implications for flow of materials and resources, 3) knowledge production and circulation, and 4) power relations.
● Shaw [110] identifies six aspects in her CHEATS framework for a comprehensive evaluation strategy: clinical, human and organizational, educational, administrative, technical, and social.
● Balanced scorecards also have served as the basis for evaluation. Protti [23] draws on Kaplan and Norton’s Balanced Scorecard [111] and recommends using it to measure and evaluate information management and technology’s efficiency, effectiveness, transformative
potential, and impacts from multiple perspectives and in assorted ways. Both Gordon [112] and Niss [113] also advocate the use of a balanced scorecard.
● A multidisciplinary team evaluating electronic records at multiple sites in the UK [114] developed several frameworks to help guide their study. One of these frameworks, SAFE, was built upon Boloix and Robillard’s framework [115] for evaluating software.
● Others, including an evaluation textbook [7] and the UK’s National Health Service PROBE project [15, 114, 116], use, elaborate, or extend Donabedian’s well-known structure-process-outcome evaluation model [117, 118]. One such elaboration [119] combines it with the DeLone-McLean model of information system success [120]. Another [121, 122] combines the DeLone-McLean model with Fineberg’s model for efficacy of clinical diagnostic technology [123]. Others advocate using the DeLone-McLean model to study both system successes and failures [124].
Earlier authors categorized evaluation methodologies, suggesting which methods are appropriate for which evaluation questions (e.g. [125-128]). The IMIA Technology Assessment and Quality Development in Health Informatics Working Group (WG15) has been creating a framework for assessing the validity of a study [129]. Recently, there has been interest in examining evaluation itself, with some researchers studying the evaluation process and how to evaluate evaluations [90, 130-133]. There is disagreement over the usefulness of frameworks per se [114, 134]. While some argue the need for a unifying evaluation model [79, 102], we prefer diversity in approach, even though we have developed frameworks ourselves. We believe that frameworks need to be applied with care. The power of any model is that it simplifies, necessarily omitting some things while highlighting others. No framework is likely to be comprehensive, appropriate to all projects, or even to all stages of a project.
Evaluation Theory
Jones classifies evaluation approaches into four models: randomized controlled trials, scientific/quantitative/objectivist, project management, and qualitative/interpretive/subjectivist [130], while Friedman and Wyatt pose a basic dichotomous distinction between objectivist and subjectivist [7]. Fulop similarly divides approaches into positivism, interactionism and postmodernism, but also into deductive and inductive approaches [135]. Evaluation approaches also may be classified as a) undertaken either from an organizational or sociotechnical viewpointᵇ [23]; b) being driven by different theories of change or resistance [13, 127]; or c) being grounded in other theories or different social science methods. Different approaches and different evaluation questions are preferred by different groups (e.g., developers, managers, physicians, nurses, and secretaries) within a health care setting and according to whether individuals have a health care background [122, 136]. All these different preferences, theories, and methods have fueled considerable debate about which methods evaluation researchers should use. Most evaluations are based on positivist, rationalist, or rational choice theoretical perspectives and study designs [42]. Technology assessment similarly is dominated by quantitative experimental designs (based on randomized controlled trials) and cost/benefit analysis. The focus is on cost, safety, and efficacy [89]. Both evaluation and technology assessment, therefore, often have not addressed social, ethical, political, organizational, and cultural considerations. This orientation, too, produces atheoretical evaluations [137]. Recently there have been calls to move away from an emphasis
on methods and toward more theory-based evaluation research, including calls for studies informed by social science theories [42, 87, 138]. The social sciences provide the basis for other approaches to evaluation and technology assessment that do address social, ethical, political, organizational, and cultural concerns.
ᵇ An organizational viewpoint involves a rational, mechanistic, explicit, fixed view of an organization operating according to standard operating procedures. A sociotechnical viewpoint involves a view of work as being performed through collaboration among groups and individuals who each have their own goals and aspirations, and who may not act according to the formal rules and procedures [23].
Social and Behavioral Sciences
Evaluations informed by theoretical work in organizational theory, communications, and the social and behavioral sciences have been undertaken for some time [4, 40, 42, 139]. Lorenzi gives an overview of organizational theory influences [10, 140], and Kaplan of social science ones [42, 87]. Earlier examples of work based in the social sciences and communications include works cited in [31, 87] as well as [4, 5, 24, 25, 141-151]. Although there is a tradition of evaluations based on social and behavioral sciences, recently these and post-modernist approaches have become more common. A variety of theoretical approaches have been advanced, including ethnography [152], ethnomethodology [18], action research [137, 153], critical theoretical perspectives [154], feminist theories [54, 55], diffusion of innovation [49, 155-157], and social interactionism based on diffusion of innovation theory (e.g. [29, 37-39, 42, 103, 158-160]). Among other recent examples are a variety of studies drawing on a constructivist tradition emphasizing organizational, political, social, and cultural concerns: constructivist [161] and social constructionist approaches [47, 55, 162, 163]; actor-network theory [76, 164]; a sociological approach that employs sociocultural analyses and sociotechnical design as well as drawing on actor-network theory and situated action/design [3, 17, 165-169]; activity theory [170]; a purely subjective approach [171]; structuration theory and concepts based on it [52, 143, 172], or combinations of constructionist theories and structuration theory [173]; and Zerubavel’s [174] work on sociotemporal order [145]. Kaplan argues that each of these theoretical threads is a form of social interactionist theory [42, 103]. She sees in social interactionism an explanation for the concept of “fit” summarized above, and
also a theoretical base from which to derive evaluation frameworks, principles, and guidelines [12, 13, 42]. Although the social and behavioral sciences have underpinned much work on people, organizational, and social issues, highlighting their contribution and explicitly drawing upon them continues to lead to new directions in evaluation for people, organizational, and social issues.
New Interdisciplinary Directions
There is increasing recognition in evaluation research that many different disciplines can contribute to evaluation efforts and to addressing different questions in technology assessment [135]. Social scientists, computer scientists, and information scientists all are exploring the complexities of people, social, and organizational issues, while individuals from disciplines as diverse as anthropology, communications, computer science, history, library and information science, management information systems, science and technology studies, and sociology are examining implications of the idea that technologies are embedded in social contexts [2]. There is, therefore, a growing tendency to form multidisciplinary evaluation teams, and to incorporate social scientists, computer scientists, health informaticians, and experts in organizational behavior and economics into implementation and evaluation efforts [175]. Evaluations undertaken by such teams may address important new evaluation questions [89]. These evaluations tend to be robust and very comprehensive, producing greater insight than those of less diverse evaluation teams. However, multidisciplinary evaluation research designs and research teams present inherent difficulties that should not be underestimated. Their very value is that of providing a different perspective, but this, in itself, can prove divisive, disheartening, or demoralizing, depending on whether the team views these perspectives in positive or negative ways. Experiences in the UK are demonstrating that teams need to be fluid and flexible as projects develop and progress. It is notable that four of the published UK-based multidisciplinary teams
(Harrison et al., May et al., Peel et al., and Purves et al.) have evolved and reformed in structure over time, which suggests that it is worth striving for the benefits obtained from a multidisciplinary approach. How to conduct such evaluations and manage interdisciplinary project teams is a developing area in itself [43, 114, 135, 176]. Working group and other activities are also promoting an interdisciplinary evaluation approach [129]. New forums and growing interaction between medical informatics evaluation researchers and other research communities are bringing together researchers from different traditions and creating opportunities for findings from evaluation studies of different application areas to enrich each otherᶜ. These working groups also have been instrumental in creating frequent sessions at information systems conferences [180-182]. Researchers in the field of information systems are increasingly publishing evaluation studies conducted in health care environments
(e.g. [43-45, 54, 91, 100, 103, 116, 130, 137, 172, 180, 183-189]). As interest grows in information technology in health care, other information systems conferences recently have included tracks on the topic. These multidisciplinary and interdisciplinary efforts help counteract the insulation that evaluation studies (and researchers) have had from each other. This insulation has resulted in an impoverished understanding of people, organizational, and social issues [42]. Such efforts towards drawing on a variety of disciplines (those we include as well as others) create synergies between medical and health care informatics evaluation research and other fields, enriching both, as researchers act as conduits for work which crosses disciplinary boundaries.
ᶜ Working group activities are promoting and nurturing the growing overlap between related research in medical informatics, the social sciences, information systems, and social informatics (an interdisciplinary field in which researchers consider “the design, uses, and consequences of information technologies that take into account their interaction with social and institutional contexts” [2, 177]). The IMIA, AMIA, and EFMI sister working groups on people, organizational, and social issues sponsor sessions and tutorials at Medinfo and AMIA conferences. These most recently included tutorials on evaluation and a series of sessions on evaluation alternatives to randomized controlled trials in which researchers from several different disciplines participated [16, 19-21]. They collaborate on AMIA’s People and Organizational Issues Working Group’s Diana Forsythe Award, established in 2000 to recognize new publications at the intersection of medical informatics and social science. Winners to date [55, 60, 75, 178] come from a variety of disciplinary backgrounds. The EFMI Organisational Impact in Medical Informatics Working Group held a conference in 2001 that drew on the disciplines of medical informatics, information systems, and social studies of science [138]. Its success led to a special issue of Methods of Information in Medicine [179] and a second conference sponsored by the three sister working groups in 2004.
New Challenges
Changes in Technologies and in Health Care Delivery
New technologies and new ways of organizing health care delivery bring new evaluation challenges and questions. Newer applications of information and communication technologies may be used in settings quite different from those in which evaluation studies have typically been undertaken, potentially not only raising different evaluation questions but also requiring different research designs. We take telehealthcare and telemedicine as an example of the kinds of issues that arise. The telehealth evaluation literature discusses small-scale demonstration projects and feasibility studies. While considered methodologically inadequate [190], this literature discusses a range of problems that relate to evaluation [191-205]. Among these are not only issues about the nature of evaluation and evidence when technologies have not yet become stable [134], but also about changes health care delivery is undergoing. As with previous medical technologies and information systems, as well as newer consumer informatics applications and health management uses, telehealth technologies may be used in ways that
redefine how health care is practiced and delivered [52, 142, 184, 206-209]. Telehealth, too, makes more evident concerns over what “information” consists of in health care [52, 60, 207, 210], and highlights how computer-mediated clinical encounters differ from face-to-face ones [210-214]. Additionally, telehealth provokes ethical questions little considered in evaluation – such as empowerment; equity and equality of services and use of health care resources; trust among clinicians; the role of employers in health care management; monitoring and surveillance of patients at home; the therapeutic or dehumanizing effects of health care technologies; as well as ethical issues of system design and evaluation itself – suggesting that such concerns should be reflected in evaluations of other areas of information technology in health care as well [89, 93, 173, 209]. Further, it is clear that health care itself is changing from single-institution episodic encounters between practitioner and patient to being more patient-focused without regard to physical or geographic boundaries [209, 215, 216], redefining what constitutes a clinical encounter and giving rise to virtual health care organizations [211]. Together with the growth of home health care and Internet health information and services, this increasingly complex and changing environment may require evaluation studies themselves to change from a focus on individual technologies, individual institutions, and individual users, to the changing context of patient-centered care and integrated delivery systems, and the networked technologies that support them [59]. Telehealth and telemedicine, then, are examples of how new technologies, like some older applications, raise new issues related to the different meanings information and communication technologies have for different users and how people make sense of them [50, 55, 61, 66, 75, 85, 103, 143, 200, 217]. Studies of imaging technologies, like ultrasonography [151], CT scanning [143-146], or clinical imaging systems [218, 219], point to the need for newer studies to understand contemporary systems [131]. As other new technologies – such as robots and augmented reality [220] in surgery, visualization used for genetic alterations as well as for medical
research, and other means of virtual or distant care – become more common, evaluation researchers will need ways of studying them and the issues they raise.
Mutual Transformation and Co-Evolution of Technologies, Organizations, and Evaluations
Developing and implementing systems within complex organizations is an inherently unpredictable process. Evaluation involves a variety of dynamic technologies, projects, and organizational settings. Both system and process boundaries can shift and may be hard to determine, while the nature of both the technology and the context may change during project phases [89, 221]. As a result, the focus of evaluation itself may change throughout a study as various actors, reflecting political, professional, and commercial interests, adapt, modify, and transform themselves, the technology, and the evaluation. Because of these moving targets, it is increasingly apparent that causal relationships are not unidirectional or straightforward [42]. Researchers in several fields are moving away from studies based on technological or socio-economic deterministic models. Instead of seeing evaluation as a neutral technical process of applying specific methods, the process of producing evidence of efficacy and utility is recognized as resulting from a series of contingent decisions, even where the design of evaluation or research projects apparently is structured in a quite rigid and “objective” way [60, 161, 206, 222]. As a result, new questions are being raised concerning the nature and role of evaluation itself [90, 223]. A recent trend in evaluation takes a postmodern stance and turns reflexively to examine evaluation itself [222]. Researchers, rather than studying “impacts” of technology, favor viewing technology and society as part of a “seamless web” [2, 224] in which implementation is seen to involve a series of negotiated definitions and actions rather than as a fixed process with pre-defined successes and project uses [62, 150, 161]. Evaluation is
seen as a fluid process that is but one component of extended social and technical networks. These contingent processes present major challenges for evaluators who are dealing with a technology that is applied and deployed in the real world of health care provision, rather than in the laboratory of system developers [134]. Because numerous contextual aspects can change over the course of a project, co-evolving in a process of mutual transformation and thereby changing the context from the one in which the evaluation originally was planned [3, 42, 46, 221], a conceptual model may help illuminate some of the complex dynamics of evaluation. It also might challenge basic assumptions of evaluation itself, viz. whether more information leads to better decisions, what exactly is being evaluated in the face of so many moving targets, what the point of evaluation is once a technology has become stabilized and its use routinized, and whether evaluation – or any other study – provides the “scientific”, evidence-based information that is sought. Recent efforts address such issues [89, 90, 223].
Recommendations
As the many studies cited here indicate, integral to system success is not only system functionality, but also a mix of organizational, behavioral, and social issues involving clinical context, cognitive factors, methods of development and dissemination [42, 63, 87, 95, 225-227], and who defines “success” and when the determination of “success” is made [124]. Numerous studies support the observation that “Sociologic, cultural, and financial issues have as much to do with the success or failure of a system as do technological aspects” [228] because “information technologies are embedded within a complex social and organizational context” [229]. Thus evaluation needs to address more than how well a system works. Evaluation also needs to address how well a system works with particular users in a particular setting, and further, why it works that way there, and what “works” itself means.
Evaluation needs to address how conceptions of whether a system works change with time and with who is making the judgment. This broader scope will help answer such key questions as:
● Why are the outcomes that are studied as they are?
● What might be done to affect outcomes?
● What influences whether information and communication technologies will have the desired effects – and desired by whom?
● Why do individuals use or not use an informatics application?
● What from one study might be generalizable or transferable to other sites or applications?
● How can information and communications technologies be used so as to improve patient care?
To help do this, a number of areas need additional development. We, therefore, propose the following:
1. Address concerns of the many individuals involved in or affected by informatics applications
Many evaluations focus on practitioners, primarily physicians [42], and most studies do not clearly identify primary stakeholders [124], yet different groups have different reactions to a system [25, 230]. Different participants (e.g., hospital specialists, general practitioners, and patients) have different perceptions of the same application, and those perceptions, in turn, can differ from those of the researchers conducting a study [132]. This makes it all the more important to take account of the ways that different individuals and groups may interact, may variously define a project or information needs, and may differ in their views of what constitutes health and health care or a successful informatics project [75, 89, 154]. In addition, assumptions underlying both physician-centric health care and basing informatics applications on what experts (e.g., clinicians, system designers, vendors, insurance companies, legislators) think is best for others are increasingly being challenged. Evaluation research that considers direct effect on patient care or
health outcomes for patients has been scarce [231-235]. Even in studies of patient care information systems, few evaluations focus on impact on care [124]. The notorious difficulty of measuring direct patient outcomes related to information technology leads to proxy measures. Nevertheless, effect on patient care is an important criterion by which health care professionals judge innovations [70, 236]. The fundamental goal of medical informatics, and hence all associated research, is to improve care. Although it is exceptionally difficult to evaluate whether care is improved, the attempt should be made. Health care, and ultimately, health, is a people, organizational, social, and cultural issue [237]. Recently, evaluators have been paying more attention to patients and health care consumers in evaluations. Both patient perceptions and patient outcomes deserve more study. More evaluations are needed to address concerns of the many individuals involved in or affected by informatics applications, including not only more study of nurses, administrators, pharmacists, patients, laboratory technologists, or personal caregivers, but also the interests of vendors, policy makers, third-party payors, etc.
2. Conduct studies at sites of different types and sizes, and with different scopes of systems and different groups of users. Do multi-site or multi-system comparative studies
Most studies are small-scale, single site, and of stand-alone systems [114]. Many are in academic settings in industrialized countries. Variety in kinds of study site is needed, as are large-scale systems studies, multi-site studies, and comparative studies. Studies of different kinds of systems, studies at different kinds of sites, and comparative studies are important for making underlying assumptions more apparent and for illuminating contextual issues. Studies need to be undertaken outside academic medical centers and satellite provider centers. Rural health care facilities, inner city clinics, emergency rescue operations, nursing homes and geriatric centers, remote facilities in developing areas or at military installations, home health management systems, or home use of web-based
health information – some of many diverse health care settings – face problems not common among individuals at academic medical centers [238]. They may raise different health care delivery and evaluation issues. Systems may not be easily transportable across geographic or cultural boundaries [87]. When systems are developed in one context and moved to another, assumptions about development, implementation, and use may be challenged [189]. Environmental differences, such as availability of reliable power, computing resources, a system of patient identifiers, and sufficient economic resources, let alone experience with electronic storage and retrieval systems, also might come into play, as they did for an international team successfully modifying an electronic medical record system developed at a US academic medical center for use in rural Kenya [53, 77, 78]. Reporting these issues and examples of how to adjust to local conditions can help those in similar situations, help increase validity by generalizing across implementations, and point to what evaluation results may be transferable. Comparative studies, too, are needed. Such studies may be designed to compare similar groups using the same technology at different sites, different groups using the same technology at one site, or various other combinations. The value of such research is illustrated by comparative studies of an electronic medical record [239], physician order entry [11, 49, 75, 156, 175, 240-245], pediatric office systems [217], CT scanning [37, 143], physicians’ use of images [218], Community Information Systems (CIS) within the UK’s National Health Service [246], electronic mail use vs. web use among the same provider population [238], a stroke guidance system evidence-based practice tool [247], and a drug distribution system [85]. Multi-site studies can lead to increased ability to generalize from case studies [246]. However, comparative studies are difficult [114], and extending studies not only across sites, but cross-culturally, remains a challenge [248]. Much may be learned from experiences in places with marked differences from where a system was first developed and from multi-site studies. Such research makes
contextual issues more apparent, along with those assumptions on which medical informatics developments are based.
3. Incorporate evaluation into all phases of a project
Evaluation methods and questions depend upon both system development phase and purpose of the evaluation [5, 6, 14, 15, 102, 110, 249]. Recent software engineering methodologies use evaluation methods throughout the software life cycle [22]. Usability testing, iterative design, and prototyping, which blur life cycle phases, are premised on the importance of evaluating a system before finalizing its design. Evaluation may be done concurrently with various project phases. It should be an on-going process throughout the life of a project, and include a variety of approaches [95]. The role of evaluation in design serves as an example. Using evaluation to influence system design enables building into a system an understanding of users’ goals, roles, tasks, and how they think about their work. To do this, situated action and participatory design approaches – often based on the influential work on situated action by anthropologist Lucy Suchman [250-252] – have been undertaken in efforts to link work design and software design, including attempts to model work according to users’ views [42-45, 253, 254]. These themes are apparent in a special issue of Artificial Intelligence in Medicine [255]. The authors draw on successful Scandinavian participatory design approaches [256] and on the writings of Winograd and Flores [257], as well as on Suchman’s research. A stream of work undertaken by Timpka and colleagues promotes action design, a combination of action research, participatory design, and situated action (e.g., [82-84, 148, 258, 259]). An underlying principle is that “knowledge can never be decontextualized” because knowledge “is situated in particular social and physical systems” and “emerges in the context of interactions with other people and with the environment” [260]. Another undertaking uses cognitive approaches to identify major cognitive problems physicians will encounter when dealing with an interface [22, 261-264],
while another employs ethnographic techniques for usability testing [265]. Evaluation can be used to influence system design, development, and implementation. While results of post-hoc or summative assessments may influence future development, formative evaluation, which precedes or is concurrent with the processes of systems design, development, and implementation, can be a helpful way to incorporate people, social, organizational, ethical, legal, and economic considerations into all phases of a project [3, 6, 12, 13, 22, 62, 266-268].
4. Study failures, partial successes, and changes in project definition or outcome
“Success” and “failure” have different and changing definitions in different projects and among different stakeholders [3, 124]. Attention, nevertheless, is needed not only to successes, but also to failures, partial successes, and changes in project definition or outcome. Despite an accumulation of best practices research that has identified a series of success factors, many projects still fail [246]. Some 40% of information technology developments in a variety of sectors are either abandoned or fail [91], while fewer than 40% of large systems purchased from vendors meet their goals [269]. Similar numbers have been estimated for health care, and the number has unfortunately remained approximately the same for at least the last 25 years [14, 231, 270-272]. Some researchers have examined failures, removals, sabotage of systems, or how failures became successes or were otherwise redefined (e.g. [47, 91, 161, 162, 242, 273-280]); and some high-profile failures have been reported (e.g. [130, 246]) – the failure of the London Ambulance Service’s dispatch system, for example, has been studied extensively (e.g., [186, 274, 281, 282]). Nevertheless, over the years, publication bias has provided little opportunity to learn from studies in which technological interventions resulted in null, negative, or disappointing results [231, 283]. These failures need to be studied and reported. The political nature of evaluation and the understandable reluctance to be the institution or project reported as an example
of “failure” also contribute to the aversion to reporting project difficulties. Further, stakeholders have an interest in promoting a project despite evaluation results. Projects may simply be strategically redefined as “successes” [284]. In addition, projects often are defined in terms of the technology [3], a focus which emphasizes how technology is used and the benefits it could bring. This focus may skew notions or reports of “success” in health care, much as with other computerization movements [72, 285]. The combination of publication bias, avoiding embarrassment, project promotion, and ideology contributes to uneasiness in reporting or hearing studies that challenge the prevailing ethos.
5. Develop evaluation approaches that take account of the shifting nature of health care and project environments, and do formative evaluations
Evaluation also is affected by not paying sufficient attention to failure in another way. A failed project is less likely to be evaluated after the fact. Focusing on, or assuming, success may reinforce a tendency towards summative evaluation, i.e., evaluation undertaken after a project is completed. Randomized controlled clinical trials and experimental research designs also increase the tendency towards summative evaluation in that studies of process can be seen to “taint” research involving outcomes of interest. Summative evaluations often assume that systems, organizations, and the environment in which they operate are stable, so that outcomes may be compared to the situation before the intervention, and also compared with a priori expectations and goals, even though these measures may no longer apply. However, summative evaluation approaches do not work well when situations change radically; the assumptions on which they are based do not hold under such conditions [91]. Definitions of both project and evaluation success shift as they are negotiated over time among participants. Evaluation research designs need to take account of the flux in and the dynamic nature both of projects and of evaluations, and also to provide feedback to projects so they can adjust better to what is going on
[286]. Nevertheless, evaluations tend to be summative [124]. Much is to be gained from formative evaluation, i.e., evaluation conducted during the course of a project. Formative evaluation has many advantages, including: a) allowing for changes and uncertainties in project technology and implementation; b) capturing the fluid nature of project and evaluation objectives and of evaluation design; c) enhancing organizational learning and buy-in; d) attracting management attention; e) identifying unexpected or emergent benefits; f) tracking a project longitudinally over time; g) adjusting study design and questions to the site involved and to new insights as they develop; h) allowing for varying and changing definitions among site and research participants of the systems project, the evaluation, or of what constitutes project goals or “success”; i) enabling evaluation at multiple sites each at different stages of implementation; j) tracking ways in which users “re-invent” the system or how a system and organization are mutually transformative, thereby enabling learning by monitoring the many experiments that occur spontaneously as part of the processes of implementation and use; and k) providing the possibility of influencing system design or implementation [3, 12, 13, 44, 91, 102, 110, 114, 221].
6. Incorporate people, organizational, social, cultural, and concomitant ethical issues into the mainstream of medical informatics
Change is needed to incorporate people, organizational, social, and cultural issues into the mainstream of medical informatics in general and into evaluation research in particular. Publication, practice, and curricula should reflect the importance of these issues. Medical informatics tends to “delete the social”, resulting in devaluing non-technical aspects of systems design and implementation [287]. Many evaluations employ study designs to focus on system and information quality and technical issues. They rarely consider social, ethical, political, organizational, or cultural issues; system development processes; or implementation processes [88, 89, 124, 288]. They pay little
attention to the roles of politics [289] or affect [290], to take two rather different examples, in organizational interactions, clinical encounters, and innovation. They scarcely acknowledge how medical informatics privileges particular kinds of information and users, work roles and statuses, and organizational structures; embodies assumptions, such as about clinical work, hierarchy, information flow, responsibility for health and health care, and gender issues [143, 154, 178, 291]; and ignores relationships individuals form not only with each other but also with technologies [292]. More broadly, social concerns, value judgments, relationships with communities and families, the central role of health and disease in social relations and spiritual considerations, and the like, also are ignored. All these, as well as the nature of the assumptions embodied in system design, evaluation research, and medical informatics itself, clearly also have ethical dimensions that need addressing. While it is increasingly recognized that information technology is embedded in a nexus of social, organizational, political, cultural, and personal concerns and norms, more work – and likely a culture change within medical informatics – is needed to examine these issues and to incorporate the findings into design and implementation. Indeed, evaluations and assessments based on randomized clinical trials and experimental designs are considered “gold standard” precisely because they attempt to eliminate all influences except the intervention itself and because they are thought to be more “scientific” and value-free [89, 293, 294].
7. Diversify research approaches and continue to develop new approaches
Experimental designs and randomized controlled trials dominate in medical informatics evaluation [42, 88, 295, 296] and are advocated as the best approaches, particularly if combined with economic analyses [293, 297]. They also are more aligned with management values [93]. For a number of reasons, including lack of attention to a variety of human, contextual, and cultural factors that affect system acceptance in actual use, these approaches have come under increasing criticism [16, 19-22, 42, 88, 89, 114, 221,
229, 295, 298-300]. Other approaches, when used under controlled conditions, have been similarly criticized [42]. Some have called for making it a priority “to develop richer understanding of the effects of [system] benefits in health care and to develop new evaluation methods that help us to understand the process of implementing it” [229]. A school of thought has developed suggesting that neither randomized controlled trials, experimental designs, nor economic impacts are suitable in and of themselves for evaluating informatics applications. Such designs may pinpoint what changed, but they make it hard to assess why changes occurred. Additionally, these traditional designs prove difficult for following changes as they develop, or for determining system design and implementation strategies that are well suited to particular institutional settings and societal considerations. Longer-term field studies and more interpretive approaches are better for investigating processes, multiple dimensions and directions of causality, and relationships among system constituents and actors [12, 13, 161, 301]. Quantitative study designs are considered the gold standard. A review of evaluations of patient information systems indicated few case studies, the only subjectivist approach the authors found [124], with a similar tendency evident in evaluations of clinical decision support systems [288]. Consequently, a special plea is made here for incorporating qualitative/interpretive/subjectivist methods, without prejudice to other approaches. Including such approaches in evaluation research reveals issues that otherwise would not surface [295]. As numerous well-regarded evaluation researchers have pointed out, qualitative methods especially are needed in order to study organizational, social, contextual, cultural, ethical, and people issues, particularly in the complex and dynamic environments in which health care information technology is used. They may be used to: a) explore how participants conceptualize, perceive, understand, and attribute meaning to systems and to their work; b) examine how organizational, social, and historical context influence, and are influenced by,
system implementation and use; c) investigate causal processes by identifying not only what changes occurred, but why; d) identify unexpected consequences; e) form the basis for formative evaluation; f) focus on routine, often tacit, ways people do things in practice, and identify recurring patterns of action and interaction; g) reveal the close relationship between work practices and organizational processes, even if on the surface they appear not to be rational; h) make more evident how technology embodies values and assumptions, and how individuals project values onto its design and use; i) point to different kinds of information and how various kinds and forms of information are conveyed and used in clinical practice; j) allow results to be incorporated throughout the life cycle, from planning through design into implementation; and k) increase utilization of evaluation results, in part because of a focus on what information is relevant to particular situations and individuals [3, 5-7, 16, 18, 22, 23, 42, 88, 114, 138, 149, 173, 175, 221, 299, 301]. For reasons like these, some consider it ethically imperative to incorporate qualitative methods into research design [173]. Related to an emphasis on experimental designs, perhaps, is a search for generalizable lessons or success factors. Generalizability is an important goal, and we argue below for the need for it. Nevertheless, while valuable for pointing to significant considerations in system design and implementation, lists of success factors lose their potency by being highly generalized. Notwithstanding the lists included in this article, how lessons or factors will play out in any given situation is what is most important in that situation. Consequently, insights gained from close study of different implementations are crucial for improving understanding of whatever new situation currently is in question [3]. Here again, qualitative methods are tremendously valuable in providing the richness of such insights and suggesting their applicability to, or how they might be transferable to, various settings and under various guises. We also advocate more than mere tolerance for a variety of approaches. Innovation is needed in evaluation as well as in
Incorporating multiple methods into research design may necessitate a multidisciplinary evaluation project team, with a consequent need for work on how to manage such evaluation projects. Further, as technologies develop, methodological approaches to evaluation also should develop to take account of markedly new scientific and technological capabilities, and ways of promoting health or providing health care. For example, ways of studying the place of images, visuality, and touch in clinical care, and how these are affected by computer-mediated communication, need refinement. In brief, a major theme of our discussion points toward the need for diverse evaluation research approaches and questions, and further work to develop new evaluation approaches as well.

8. Conduct investigations at different levels of analysis

Most evaluation studies focus on groups of individuals that fit familiar categories in health care: nurses, physicians, developers, etc. Researchers collect data at the individual level and aggregate it according to these categories. Important differences among these individuals, however, may be obscured by grouping them together in this way. For example, a study of laboratory technologists indicated that individual technologists could be categorized based on how they conceptualized their jobs, and also that individual laboratories within the institution could be so categorized [43]. Simply considering the category "laboratory technologist" would have lost these findings and revealed little of interest in how laboratory technologists responded to a new laboratory information system. Similarly, focusing on a health care team instead of on individuals helped elaborate the need to support the collaborative nature of work in environments such as intensive care units [178].
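A minimal sketch makes the aggregation point concrete (the orientation labels and attitude scores below are invented, not data from the laboratory study cited as [43]): the same individual-level scores, summarized once by occupational category and once by a finer-grained grouping, tell two different stories.

from statistics import mean

# Hypothetical technologists: (how the job is conceptualized, attitude toward the new system on a 1-7 scale).
technologists = [
    ("sees the job as running instruments", 6.5),
    ("sees the job as running instruments", 6.0),
    ("sees the job as running instruments", 6.2),
    ("sees the job as clinical interpretation", 2.1),
    ("sees the job as clinical interpretation", 2.4),
    ("sees the job as clinical interpretation", 2.0),
]

# Level 1: everyone collapsed into the single category "laboratory technologist".
print("all technologists:", round(mean(score for _, score in technologists), 2))

# Level 2: grouped by how individuals conceptualize their work.
for orientation in sorted({o for o, _ in technologists}):
    scores = [s for o, s in technologists if o == orientation]
    print(f"{orientation}: {round(mean(scores), 2)} (n={len(scores)})")

The overall mean sits near the midpoint of the scale and suggests lukewarm acceptance, while the finer grouping suggests two sharply different responses; the same caution applies as the unit of analysis shifts from individuals to teams, laboratories, communities, or whole organizations.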
Concern with granularity and units of analysis increases in importance as health care changes and information technologies become more widespread. The availability of new technologies may change what individuals do, thereby changing their relationships with others. These changes can affect social, occupational, or organizational structures. For example, micro-level work role changes related to introducing new information technologies resulted in structural organizational changes when CT scanning and subsequent imaging technologies were introduced [144, 146]. Changes at an individual level, as in information and communication technologies to support home health care, also have societal repercussions with respect to health, public health, and the organization of health care. Health care information technologies are diffusing across geographic boundaries, and health care itself is becoming less localized in centralized institutions. Individuals increasingly use these technologies to gain information about their health and health care and to manage it [209]. Community informatics, for example, uses the community as a level of analysis to study how communities pursue their goals in such areas as local economic development, cultural affairs, civic activism, electronic democracy, self-help, advocacy, and cultural enhancement [182]. Formation of on-line communities and other forms of participation in health care can be a part of these activities, with consequent implications both for informatics applications and evaluation research, as well as for health and health care per se. Like cross-cultural and comparative studies, research employing different degrees of granularity and different units and levels of analysis, and research investigating how changes ripple across them, may not only provide new insights, but also challenge basic assumptions derived from traditionally focused studies.

9. Integrate findings from different applications and contextual settings, different areas of health care, studies in other disciplines, and also work that is not published in traditional research outlets

Medical informatics already has been enriched by scholars with degrees in a variety of disciplines. Evaluation researchers include specialists in organizational behavior, education, information systems, communications, history, biology, computer science, anthropology, engineering, sociology, library and information sciences, psychology,
social studies of science, and, of course, numerous health care specialties – to name but some of those disciplines. Researchers who keep up with another field of study enable their, and consequently others', medical informatics work to be informed by new theories, knowledge, and approaches. Different disciplines may ask new research questions about people, organizational, and social issues, and employ different ways of addressing them. In addition, much valuable work is undertaken outside usual academic and research contexts. Mechanisms are needed to report and disseminate work that is not published in traditional research outlets. For example, insights gained through evaluations of governmental projects worldwide, vendor and consultant activities, and experiences in developing countries need to be more widely known. Practitioner knowledge, gained through much experience, also would provide insights not easily obtained through other means. Both evaluation methods and theory would benefit from continued efforts to integrate findings from different applications and contextual settings (including ones not in health care), and to bring together understanding developed through studies and other activities undertaken in different areas of health care as well as in other disciplines. This latter integration is beginning to occur in social informatics, producing generalizations that also are applicable in medical informatics [302]. Medical informatics, as a field, would be enriched by integrative activity across studies, disciplines, and practice. Moreover, findings from these other areas would both enhance, and be enhanced by, related medical informatics evaluation research.

10. Develop and test theory to inform both further evaluation research and informatics practice

Evaluation studies tend to be undertheorized. Compared with other disciplines, there is more emphasis on context than on theory [137]. Although some significant streams of work start from an explicit theoretical base, and use it to organize and analyze results, little has been done to generate theory across studies. Testing or generating
theory need not detract from contextual description and consideration. By integrating findings across different studies, theory can be generated and tested, and individual studies can build into cumulative bodies of generalizable and transferrable knowledge. Many medical informatics studies are atheoretical, or evidence tacit theoretical assumptions rather than explicit discussion [42]. Much medical informatics work is characterized by treating technology, work, specific clinical practices (e.g., ordering laboratory tests), profession, and institution as static and isolated constructs. This approach allows studies to be designed to investigate the impact of one factor, such as a system reporting drug-drug interactions, on another, such as reduction in medication errors. It ignores, however, the complex network of social interactions that surround the introduction and use of such a system, by isolating the system as a static, individual item rather than seeing it as part of a nexus of practices, relationships, history, regulatory requirements, etc., and the meanings that are attributed to all of these [146]. Evaluation researchers could theorize concepts and constructs so that they include the dynamic social and institutional nexuses of which they are a part, and could integrate across studies to develop theory that informs both further research and informatics practice. Evaluation researchers, too, need to turn their theoretical and empirical eyes reflexively on evaluation research itself.
Conclusions

Our review leads us to observe common threads that run through newer orientations towards evaluation. These themes are shared with trends in social informatics studies in other fields [303]. The underlying basis for attention to people, organizational, and social issues is that human and organizational concerns should be taken into account during system design, implementation, and use. International perspectives are converging on a broad and encompassing multi-method approach to evaluation throughout the life of a project, with studies conducted in actual clinical settings so as to allow for complex contextual issues
to be addressed through a variety of theoretical lenses [42, 110]. Considerable work has been undertaken concerning appropriate evaluation paradigms. Newer evaluations build on the growing body of empirical and theoretical evaluation research that focuses on roles of different actors and the connections between them; on contextual, organizational, social, and cultural concerns; on meanings attributed to the experiences by the persons involved; and on the processes and interactions among different aspects of system design, implementation, and use. These trends share a holist view and the conviction that analyses be set in a complex context of work group, professional community, cultural unit, or organization. Moreover, there is a general commitment to methodological pluralism and development of approaches for studying numerous areas, issues, and concerns that have not been well addressed. We have reviewed this body of research and suggested directions for further development:
1) Address concerns of the many individuals involved in or affected by informatics applications.
2) Conduct studies in different type and size sites, and with different scopes of systems and different groups of users. Do multi-site or multi-system comparative studies.
3) Incorporate evaluation into all phases of a project.
4) Study failures, partial successes, and changes in project definition or outcome.
5) Employ evaluation approaches that take account of the shifting nature of health care and project environments, and do formative evaluations.
6) Incorporate people, social, organizational, cultural, and concomitant ethical issues into the mainstream of medical informatics.
7) Diversify research approaches and continue to develop new approaches.
8) Conduct investigations at different levels of analysis.
9) Integrate findings from different applications and contextual settings, different areas of health care, studies in other disciplines, and also work that is not published in traditional research outlets.
10) Develop and test theory to inform both further evaluation research and informatics practice.
Acknowledgments

This work by N. Shaw was funded when she was a National Health Service Executive (North West Region) Research & Development Post-Doctoral Training Fellow. The support of the UK Department of Health, formerly the National Health Service Executive (North West Region) Research & Development Department, and of Oldham Primary Care Trust is therefore gratefully acknowledged.
References 1. Paré G, Sicotte C. Information technology sophistication in health care: An instrument validation study among Canadian hospitals. Int J Med Inform 2001; 63 (3): 205-23. 2. Kling R, Rosenbaum H, Hert C. Social informatics in information science: An introduction. J Am Soc Inform Sci 1998; 49 (12): 1047-52. 3. Berg M. Implementing information systems in health care organizations: Myths and challenges. Int J Med Inform 2001; 64 (2-3): 143-56. 4. Anderson JG, Jay SJ (eds). Use and impact of computers in clinical medicine. New York: Springer; 1987. 5. Anderson JG, Aydin CE, Jay SJ (eds). Evaluating health care information systems: Methods and applications. Thousand Oaks, CA: Sage Publications; 1994. 6. van Gennip EMS, Talmon JL (eds). Assessment and evaluation of information technologies. Amsterdam: IOS Press; 1995. 7. Friedman CP, Wyatt JC. Evaluation methods in medical informatics. New York: Springer; 1997. 8. Rigby M. Evaluation: 16 powerful reasons why not to do it – and 6 over-riding imperatives. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. pp. 1198-201. 9. Lorenzi NM, Riley RT. Organizational aspects of health informatics: Managing technological change. New York: Springer; 1995. 10. Lorenzi N, Riley RT. Managing change: An overview. JAMIA 2000; 7 (2): 116-24. 11. Ash J. Managing change: Analysis of a hypothetical case. JAMIA 2000; 7 (2): 125-34. 12. Kaplan B. Organizational evaluation of medical information systems. In: Friedman CP, Wyatt JC (eds). Evaluation methods in medical informatics. New York: Springer; 1997. pp. 255-80. 13. Kaplan B. Addressing organizational issues into the evaluation of medical systems. JAMIA 1997; 4 (2): 94-101. 14. Kuhn KA, Guise DA. From hospital information systems to health information systems – problems, challenges, perspectives. In: Haux R, Kulikowski C (eds). Yearbook of Medical Informatics. Stuttgart: Schattauer; 2001. pp. 63-76.
15. DoH, NHSE, IMG. Project review: Objective evaluation. Guidance for NHS managers on evaluating information systems projects; 1996. 16. Ellis NT, Aarts J, Kaplan B, Leonard K. Alternatives to the controlled trial: New paradigms for evaluation. Proc AMIA Symp (CD ROM) 2000. 17. Goorman E, Berg M. Modelling nursing activities: Electronic patient records and their discontents. Nurs Inq 2000; 200 (7): 3-9. 18. Greatbatch D, Murphy E, Dingwall R. Evaluating medical information systems: Ethnomethodological and interactionist approaches. Health Serv Manag Res 2001; 14 (3): 181-91. 19. Kaplan B, Aarts J, Hebert M, Klecun-Dabrowska E, Lewis D, Vimlarlund V. New approaches to evaluation: Alternatives to the randomized controlled trial – qualitative approaches to design and evaluation: Theory and practice. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. p. 1531. 20. Kaplan B, Anderson JG, Jaffe C, Leonard K, Zitner D. New approaches to evaluation:Alternatives to the randomized controlled trial – qualitative models for evaluation. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. p. 1456. 21. Kaplan B, Atkinson C, Ellis E, Jones M, KlecunDabrowska E, Teasdale S. Evaluation in the UK’s National Health Service. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. p. 1529. 22. Kushniruk A. Evaluation in the design of health information systems: Application of approaches emerging from usability engineering. Comput Biol Med 2002; 32 (3): 141-49. 23. Protti D. A proposal to use a balanced scorecard to evaluate information for health: An information strategy for the modern NHS (1998-2005). Comput Biol Med 2002; 32 (3): 221-36. 24. Fischer PJ, Stratman WC, Lundsgaarde HP. User reactions to PROMIS: Issues related to acceptability of medical innovations. In: O’Neill JT (ed). Proc Symp Comput Appl Med Care. Silver Spring: IEEE Computer Society; 1980. pp. 1722-30. Reprinted in: Anderson JG, Jay SJ (eds). Use and impact of computers in clinical medicine. New York: Springer; 1987. pp. 284-301. 25. Lundsgaarde HP, Fischer PJ, Steele DJ. Human problems in computerized medicine. Lawrence, KS: The University of Kansas; 1981. 26. Dlugacz YD, Siegel C, Fischer S. Receptivity towards uses of a new computer application in medicine. In: Blum BL (ed.). Proc Symp Comput Appl Med Care. Silver Spring: IEEE Computer Society; 1980. pp. 384-91. 27. Siegel C, Alexander MJ, Dlugacz YD, Fischer S. Evaluation of a computerized drug review system: Impact, attitudes and interactions. Comput Biomed Res 1984; 17: 419-35. 28. Anderson JG, Jay SL. Computers and clinical judgement:The role of physician networks. Soc Sci Med 1985; 20: 967-79. 29. Anderson JG, Jay SL, Schwerr HM, Anderson MM, Kassing D. Physician communication networks and the adoption and utilization of computer applications in medicine. In: Anderson JG, Jay SJ (eds). Use and impact of computers in clinical medicine. New York: Springer; 1987. pp. 185-99. 30. Barrett JP, Barnum RA, Gordon BB, Pesut RN. Evaluation of a medical information system in a
general community hospital. Report no. HSM 110-73-331, PB 233 784 (NTIS PB 248 340). Battelle Columbus Laboratories; 1975.
31. Kaplan BM. Computers in medicine, 1950-1980: The relationship between history and policy [Ph.D.]: University of Chicago; 1983.
32. Collen MF. A history of medical informatics in the United States: 1950-1990. Bethesda, MD: American Medical Informatics Association; 1995.
33. Farlee C, Goldstein B. Hospital organization and computer technology. New Brunswick, NJ: Health Care Systems Research; 1972.
34. Brown B, Harbot B, Kaplan B, Maxwell J. Guidelines for managing the implementation of automated medical systems. In: Heffeman HG (ed.). Proc Symp Comput Applic Med Care. Silver Spring: IEEE Computer Society Press; 1981. pp. 791-96.
35. Brown B, Harbot B, Kaplan B. Management issues in automating medical systems. J Clin Engr 1983; 8 (1): 23-30.
36. Kaplan B. Barriers to medical computing: History, diagnosis and therapy for the medical computing 'lag'. In: Ackerman MJ (ed.). Proc Symp Comput Appl Med Care. Silver Spring: IEEE Computer Society Press; 1985. pp. 400-4.
37. Aydin CE. Occupational adaption to computerized medical information systems. J Health Soc Behav 1989; 30: 163-79.
38. Aydin CR, Rice RE. Bringing social worlds together: Computers as catalysts for new interactions in health care organizations. J Health Soc Behav 1992; 33: 168-85.
39. Aydin C. Computerized order entry in a large medical center: Evaluating interactions between departments. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating health care information systems: Approaches and applications. Thousand Oaks, Cal: Sage; 1994. pp. 260-75.
40. Sigurdardottir H, Skov D, Bartholdy C, Wann-Larsen S. Evaluating the usefulness of monitoring change readiness in organisations which plan on implementing health informatics systems. In: IT in Health Care: Sociotechnical Approaches. International Conference Proc.; September 6-7, 2001; Department of Health Policy and Management, Erasmus University, Rotterdam, The Netherlands; 2001. p. 52.
41. Hanmer JC. Diffusion of medical technologies: Comparison with ADP systems in the medical environment. In: O'Neill JT (ed). Proc Symp Comput Applic Med Care. Silver Spring: IEEE Computer Society Press; 1980. pp. 1731-36.
42. Kaplan B. Evaluating informatics applications – some alternative approaches: Theory, social interactionism, and call for methodological pluralism. Int J Med Inform 2001; 64: 39-56.
43. Kaplan B, Duchon D. Combining qualitative and quantitative approaches in information systems research: A case study. Manag Inform Sys Q 1988; 12 (4): 571-86.
44. Kaplan B. A model comprehensive evaluation plan for complex information systems: Clinical imaging systems as an example. In: Brown A, Remenyi D (eds). Proc Second European Conference on Information Technology Investment Evaluation. Henley on Thames, Birmingham, England: Operational Research Society; 1995. pp. 14-181.
45. Kaplan B, Morelli R, Goethe J. Preliminary findings from an evaluation of the acceptability of an expert system in psychiatry. Extended Proc AMIA Symp (CD-ROM) 1997.
46. Degoulet P, Fieschi M. Critical dimensions in medical informatics. Int J Med Inform 1997; 44: 21-6.
47. Sicotte C, Denis JL, Lehoux P. The computer based patient record: A strategic issue in process innovation. J Med Systs 1998; 22 (6): 431-43.
48. Safran C, Jones PC, Rind D, Booker B, Cytryn KN, Patel VL. Electronic communication and collaboration in a health care practice. Artif Intell Med 1998; 12 (2): 137-51.
49. Ash JS, Lyman J, Carpenter J, Fournier L. A diffusion of innovations model of physician order entry. Proc AMIA Symp 2001: 22-6.
50. Mitev N, Kerkham S. Organization and implementation issues of patient data management systems in an intensive care unit. J End User Comput 2001; 13 (3): 20-9.
51. Wetter T. Lessons learned from bringing knowledge-based decision support into routine use. Artif Intell Med 2002; 24 (3): 195-203.
52. Lehoux P, Sicotte C, Denis JL, Berg M, Lacroix A. The theory of use behind telemedicine: How compatible with physicians' clinical routines? Soc Sci Med 2002; 54 (6): 889-904.
53. Hannan TJ, Rotich JK, Odero WW, Menya D, Esamai F, Einterz RM, et al. The Mosoriot medical record system: design and initial implementation of an outpatient electronic record system in rural Kenya. Int J Med Inform 2000; 60 (1): 21-8.
54. Wilson M, Howcroft D. The role of gender in user resistance and information systems failure. In: Baskerville R, Stage J, DeGross JI (eds). Organizational and social perspectives on information technology. Boston, Dordrecht, London: Kluwer Academic Publishers; 2000. pp. 453-71.
55. Novek J. IT, gender and professional practice: Or, why an automated drug distribution system was sent back to the manufacturer. Sci Technol Hum Val 2002; 27 (3): 379-403.
56. Kaplan B. Development and acceptance of medical information systems: An historical overview. J Health Hum Resour Admin 1988; 11 (1): 9-29.
57. Massaro TA. Introducing physician order entry at a major academic medical center. 1: Impact on organizational culture and behavior. Acad Med 1993; 68 (1): 20-5.
58. Patel VL, Allen VG, Arocha JF, Shortliffe EH. Representing clinical guidelines in GLIF: Individual and collaborative expertise. JAMIA 1998; 5 (5): 467-83.
59. Harrop VM. Virtual healthcare delivery: Defined, modeled and predictive barriers to implementation identified. Proc AMIA Symp 2001: 244-8.
60. May CR, Gask L, Atkinson T, Ellis NT, Mair F, Esmail A. Resisting and promoting new technologies in clinical practice: The case of 'telepsychiatry'. Soc Sci Med 2001; 52 (12): 1889-901.
61. Kaplan B. The computer as Rorschach: Implications for management and user acceptance. In: Dayhoff RE (ed.). Proc Symp Comput Appl Med Care. Silver Spring: IEEE Computer Society Press; 1983. pp. 664-67.
62. van't Reit A, Berg M, Hiddema F, Kees S. Meeting patients' needs with patient information systems: Potential benefits from qualitative research methods. Int J Med Inform 2001; 64: 1-14.
63. Clarke K, O'Moore R, Smeets R, Talmon J, Brender J, McNair P et al. Health systems evaluation of telemedicine: A staged approach. Telemed J 1996; 2 (4): 303-12.
64. Aarts J, Peel V, Wright G. Organizational issues in health informatics: A model approach. Int J Med Inf 1998; 52 (1-3): 235-42. 65. Folz-Murphy N, Partin M, Williams L, Harris CM, Lauer MS. Physician use of an ambulatory medical record system: Matching form and function. Proc AMIA Symp 1998: 260-64. 66. Nordyke RA, Kulikowski CA. An informatics based chronic disease practice – case study of a 35-year computer based longitudinal record. JAMIA 1998; 5 (1): 88-103. 67. Dixon DR. The behavioral side of information technology. Int J Med Inform 1999; 56 (1-3): 117-23. 68. Manning B, Gadd CS. Introducing handheld computing into a residency program: Preliminary results from qualitative and quantitative inquiry. Proc. AMIA Symp 2001: 428-32. 69. Southon G. IT, change and evaluation:An overview of the role of evaluation in health services. Int J Med Inform 1999; 56 (1-3): 125-33. 70. Kaplan B. The influence of medical values and practices on medical computer applications. In: Anderson JG, Jay SJ (eds.). Use and impact of computers in clinical medicine. New York: Springer; 1987. pp. 39-50. 71. Heathfield HA, Wyatt J. Philosophies for the design and development of clinical decision- support systems. Methods Inf Med 1993; 32 (1): 1-8. 72. Kaplan B. The computer prescription: Medical computing, public policy and views of history. Sci Technol Hum Val 1995; 20 (1): 5-38. 73. Forsythe DE. New bottles, old wine: Hidden cultural assumptions in a computerized explanation system for migraine sufferers. Med Anthropol Q 1996; 10 (4): 551-74. 74. Friedman CP. Information technology leadership in academic medical centers: A tale of four cultures. Acad Med 1999; 74 (7): 795-99. 75. Ash JS, Gorman PN, Lavelle M, Lyman J. Multiple perspectives on physician order entry. Proc AMIA Symp 2000: 27-31. 76. Whitley EA, Pouloudi A. Studying the translations of NHSnet. J End User Comput 2001; 13 (3): 31-40. 77. Hannan TJ, Tierney WM, Rotich JK, Odero WW, Smith F, Mamlin JJ et al. The Mosoriot medical record system (MMRS) phase I to phase II implementation: An outpatient computer-based medical record in rural Kenya. In: Patel V, Rogers R, Haux R, editors. Med-info 2001. Amsterdam: IOS Press; 2001. pp. 619-22. 78. Tierney WM, Rotich JK, SmithFE, Bii J, Einterz RM, Hannan TJ. Crossing the ‘digital divide’: Implementing an electronic medical record system in a rural Kenyan health center to support clinical care and research. Proc AMIA Symp 2002: 792-95. 79. Brender J, McNair P. User requirements in a system development & evaluation context. Stud Health Technol Inform 2000; 77: 203-7. 80. Bygholm A. End-user support: A necessary issue in the implementation and use of EPR systems. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. pp. 604-08. 81. Brennan PF, Strombom I. Improving health care by understanding patient preferences: The role of computer technology. JAMIA 1998; 5 (3): 257-62. 82. Graves WI, Nyce JM. Normative models and situated practice in medicine. Inform Decis Technol 1992; 18: 143-49. 83. Nyce JM, Timpka T. Work, knowledge and argument in specialist consultations: Incorporat-
ing tacit knowledge into system design and development. Med and Biol Eng and Comput 1993; 31 (1): HTA16-HTA19.
84. Timpka T, Rauch E, Nyce JM. Towards productive knowledge-based systems in clinical organizations: A methods perspective. Artif Intell Med 1994; 6 (6): 501-19.
85. Novek J. Hospital pharmacy automation: Collective mobility or collective control? Soc Sci Med 2000; 51 (4): 491-503.
86. Covvey HD. When is a system only a system? Healthcare Inform Manag Comm Canada 2001; 15 (1): 46-7.
87. Kaplan B, Brennan PF, Dowling AF, Friedman CP, Peel V. Towards an informatics research agenda: Key people and organizational issues. JAMIA 2001; 8 (3): 234-41.
88. Leys M. Health care policy: Qualitative evidence and health technology assessment. Health Policy 2003; 65 (3): 217-26.
89. Lehoux P, Blume S. Technology assessment and the sociopolitics of health technologies. J Health Politics, Policy, and Law 2000; 25 (6): 1083-120.
90. May C, Mort M, Williams T, Mair F, Gask L. Health technology assessment in its local contexts: Studies of telehealthcare. Soc Sci Med 2003; 57 (4): 697-710.
91. Farbey B, Land F, Targett D. The moving staircase – problems of appraisal and evaluation in a turbulent environment. Inform Tech People 1999; 12 (3): 238-52.
92. Burkle T, Ammenwerth E, Prokosch HU, Dudeck J. Evaluation of clinical information systems. What can be evaluated and what cannot? J Eval Clin Pract 2001; 7 (4): 373-85.
93. Land F. Evaluation in a socio-technical context. In: Orlikowski WJ, Walsham G, Jones MR, DeGross JI (eds.). Information technology and changes in organizational work. London: Chapman & Hall; 1996. pp. 115-26.
94. Friedman CF, Haug P. Report on conference track 5: Evaluation metrics and outcome. Int J Med Inform 2003; 69 (2-3): 307-9.
95. Ammenwerth E, Gräber S, Herrmann G, Bürkle T, König J. Evaluation of health information systems – problems and challenges. Int J Med Inform 2003; 71 (2-3): 125-35.
96. van Gennip EMS, Talmon JL. Introduction. In: van Gennip EMS, Talmon JL (eds). Assessment and evaluation of information technologies. Amsterdam: IOS Press; 1995. pp. 1-5.
97. Fallman H. A computer professor questions computerization: The projects will become institutions which no-one can evaluate. Lakartidningen 1997; 94 (4): 204-6.
98. Green CJ, Moehr JR. Performance evaluation frameworks for vertically integrated health care systems: Shifting paradigms in Canada. Proc AMIA Symp 2000: 315-19.
99. Aarts J, Peel V. Using a descriptive model of change when implementing large scale clinical information systems to identify priorities for further research. Int J Med Inform 1999; 56 (1-3): 43-50.
100. Chismar WG, Wiley-Paton S. Test of the technology acceptance model for the Internet in pediatrics. Proc AMIA Symp 2002: 155-59.
101. Gagnon M-P, Godin G, Gagné C, Fortin J-P, Lamothe L, Reinharz D et al. An adaptation of the theory of interpersonal behaviour to the study of telemedicine adoption by physicians. Int J Med Inform 2003; 71 (2-3): 103-15.
102. Grant A, Plante I, Leblanc F. The TEAM methodology for the evaluation of information systems in biomedicine. Comput Biol Med 2002; 32 (3): 195-207.
103. Kaplan B. Social interactionist framework for information systems studies: The 4C's. In: Larson TJ, Levine L, DeGross JI (eds). IFIP WG8.2 & WG8.6 Joint Working Conference on Information Systems: Current issues and future changes: International Federation for Information Processing; 1998. pp. 327-39. Available at http://www.bi.no/dep2/infomgt/wg82-86/proceedings/, accessed 4 Feb 2003.
104. Dwyer SJ, Templeton AW, Martin NL, Lee KR, Levine E, Batnitzky S et al. The cost of managing digital diagnostic images. Radiology 1982; 144 (2): 313-18.
105. Crowe BL. Overview of some methodological problems in assessment of PACS. Int J Biomed Comput 1992; 30 (3-4): 181-86.
106. Kazanjian A, Green CJ. Beyond effectiveness: The evaluation of information systems using a comprehensive health technology assessment framework. Comput Biol Med 2002; 32 (3): 165-77.
107. Lauer TW, Joshi K, Browdy T. Use of the equity implementation model to review clinical systems implementation efforts: A case report. JAMIA 2000; 7 (1): 91-102.
108. Kokol P. A new microcomputer software system evaluation paradigm: The medical perspective. J Med Syst 1991; 15 (4): 269-75.
109. Kokol P. A tool for software and hardware evaluation. J Med Syst 1996; 20 (3): 167-72.
110. Shaw NT. 'CHEATS': A generic information communication technology (ICT) evaluation framework. Comput Biol Med 2002; 32 (3): 209-20.
111. Kaplan R, Norton D. The balanced scorecard – measures that drive performance. Harv Bus Rev 1992; 70 (1): 71-9.
112. Gordon D, Chapman R, Kunov H, Dolan A, Carter M. Hospital management decision support: A balanced scorecard approach. In: Cesnik B, McCray AT, Scherrer J-R (eds). Medinfo '98. Amsterdam: IOS Press; 1998. Pt 1: pp. 453-56.
113. Niss KU. The use of the balanced scorecard (BSC) in the model for investment and evaluation of medical information systems. Stud Health Technol Inform 1999; 68: 110-4.
114. Heathfield H, Hudson P, Kay S, Roberts R, Williams J. Issues in the multidisciplinary assessment of healthcare information systems. Inform Tech People 1999; 12 (3): 253-75.
115. Boloix G, Robillard P. A software system evaluation framework. Commun ACM 1995; 28 (12): 17-26.
116. Cornford T, Doukidis GI, Forster D. Experience with a structure, process and outcome framework for evaluating an information system. Omega, Int J Manag Sci 1994; 22 (5): 491-504.
117. Donabedian A. The quality of medical care: Methods for assessing and monitoring the quality of care for research and for quality assessment programs. Science 1978; 200: 856-64.
118. Donabedian A. Explorations in quality assessment and monitoring; Vol I: The definition of quality and approaches to its assessment. Ann Arbor: Health Administration Press; 1980.
119. Hebert M. Telehealth success: Evaluation framework development. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. pp. 1145-49. 120. DeLone W, McLean E. Information systems success: The quest for the dependent variable. Inform Syst Res 1992; 3 (1): 60-95. 121. Turunen P. A model for evaluation of health care information systems. In: Remenyi D, Brown A (eds). Procs 8th European Conference on Information Technology Evaluation September 17-18, 2001; Oriel College, Oxford, UK. pp. 183-87. 122. Turunen P, Salmela H. The internal stake holders’ attitudes toward the evaluation methods for medical information systems. In: Pantzar E, Savolainen R, Tynjälä P (eds). In search for a human-centred information society. Tampere: Tampere University Press; 2001. pp. 253-65. 123. Fineberg H, Bauan R, Sosman M. Computerized cranial tomography – effect on diagnostic and therapeutic plans. JAMIA 1997; 3: 244-7. 124. van der Meijden MJ, Tange HH, Troost J, Hasman A. Determinants of success of inpatient clinical information systems: A literature review. JAMIA 2003; 10 (3): 235-43. 125. Krobock J. A taxonomy: Hospital information systems evaluation methodologies. J Med Systs 1984; 8 (5): 419-29. 126. Grémy F, Degoulet P. Assessment of health information technology: Which questions for which systems? Proposal for a taxonomy. Med Inform (Lond) 1993; 18 (3): 185-93. 127. Anderson JG, Aydin CE. Overview: Theoretical perspectives and methodologies for the evaluation of health care information systems. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating health care information systems: Approaches and applications. Thousand Oaks, Cal: Sage; 1994. pp. 5-29. 128. Grant A, Buteau M, Richards Y, Delisle E, Laplante P, Niyonsenga T et al. Informatics methodologies for evaluation research in the practice setting. Methods Inf Med 1998; 37 (2): 178-81. 129. Talmon JL. Workshop WG15: Technology assessment and quality improvement. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. p. 1540. 130. Jones MR. An interpretive method for the formative evaluation of an electronic patient record system. In: Remenyi D, Brown A (eds). Procs of the 8th European conference on IT evaluation 17-18 September 2001; Oriel College, Oxford, UK. pp. 349-56. 131. May CR, Mort M, Williams T, Mair F, Shaw NT. Understanding the evaluation of telemedicine: The play of the social and the technical, and the shifting sands of reliable knowledge. In: IT in Health Care: Sociotechnical Approaches. International Conference Proc.; September 6-7, 2001; Department of Health Policy and Management, Erasmus University, Rotterdam, The Netherlands; 2001. p. 43. 132. MacFarlane A, Harrison R,Wallace P.The benefits of a qualitative approach to telemedicine research. J Telemed Telecare 2002; 8 Suppl 2: 56-7. 133. May C, Williams T, Mair F, MM., M, Shaw N, Gask L. Factors influencing the evaluation of telehealth interventions: Preliminary results from a qualitative study of evaluation projects in the UK. J Telemed Telecare 2002; 8 Suppl 2: 65-7.
134. Williams T, May C, Mair F, MortM, Shaw NT, Gask L. Normative models of health technology assessment and the social production of evidence about telehealth care. Health Policy 2003; 64 (1): 39-54. 135. Fulop N, Allen P, Clarke A, Black N. From health technology assessment to research on the organisation and delivery of health services: Addressing the balance. Health Policy 2003; 63 (2): 155-65. 136. Turunen P, Talmon J. Stakeholder groups in the evaluation of medical information systems. In: Brown A, Remenyi D (eds). Procs 7th European Conference on the Evaluation of Information Technology. September 28-29, 2000; Dublin, Ireland. pp. 329-34. 137. Chiasson MW, Davidson E. Pushing the contextual envelope: Developing and diffusing IS theory for health information systems research. Inform Organization 2004; in press. 138. IT in Health Care: Sociotechnical Approaches. International Conference Proc. September 6-7, 2001; Department of Health Policy and Management, Erasmus University, Rotterdam, The Netherlands. 139. Protti DJ, Haskell AR. Managing information in hospitals: 60% social, 40% technical. In: Proc IMIA Working Conference on Trends in Hospital Information Systems 1992. 140. Lorenzi NM, Riley RT, Blyth AJC, Southon G, Dixon BJ. Antecedents of the people and organizational aspects of medical informatics: Review of the literature. JAMIA 1997; 4 (2): 79-93. 141. Lundsgaarde H. Evaluating medical expert systems. SocSci Med 1987; 24 (10): 805-19. 142. Bosk C, Frader J. The impact of place of decision making on medical decisions. In: O’Neill JT (ed). Proc Symp Comput Applic Med Care. Silver Spring: IEEE Computer Society Press; 1980. pp. 1326-29. 143. Barley SR.Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments. Admin Sci Q 1986; 31: 78-108. 144. Barley SR. The social construction of a machine: Ritual, superstition, magical thinking and other pragmatic responses to running a CT scanner. In: Lock M, Gordon D (eds). Biomedicine examined. Dordrecht: Kluwer Academic Publishers; 1988. pp. 497-539. 145. Barley SR. On technology, time, and social order: Technically induced change in the temporal organization of radiological work. In: Dubinskas FA (ed). Making time: Ethnographies of high-technology organizations. Philadelphia: Temple University Press; 1988. pp. 123-69. 146. Barley SR.The alignment of technology and structure through roles and networks. Admin Sci Q 1990; 35 (1): 61-103. 147. Barney S, Nyce JM, Ackerman M, Graves WX III, King J, Koh E, et al. Visualization in the neurosciences: Addressing problems in research, teaching and clinical practice. In: First Conference on Visualization in Biomedical Computing. Los Alamitos, Calif: IEEE Computer Society Press; 1990. pp. 322-27. 148. Nyce JM, Graves WI. The construction of knowledge in neurology: Implications for hypermedia system development. Artif Intell Med 1990; 29 (2): 315-22. 149. Forsythe DE, Buchanan BG. Broadening our approach to evaluating medical information systems.
In: Proc Symp Comput Appl Med Care; 1991. pp. 8-12.
150. Saetnan AR. Rigid politics and technological flexibility; the anatomy of a failed hospital innovation. Sci Technol Hum Val 1991; 16 (4): 419-47.
151. Yoxen E. Seeing with sound: A study of the development of medical images. In: Bijker WE, Hughes T, Pinch T (eds.). The social construction of technological systems. Cambridge, Mass: MIT Press; 1987. pp. 281-306.
152. Forsythe DE. Using ethnography in the design of an explanation system. Expert Syst Applic 1995; 8 (4): 403-17.
153. Lau F, Hayward R. Building a virtual network in a community health research training program. JAMIA 2000; 7 (4): 361-77.
154. Lee RG, Garvin T. Moving from information transfer to information exchange in health and health care. Soc Sci Med 2003; 56: 449-64.
155. Weaver RR. Evaluating the problem knowledge coupler: A case study. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating health care information systems: Approaches and applications. Thousand Oaks, CA: Sage; 1994. pp. 203-25.
156. Ash J. Organizational factors that influence information technology diffusion in academic health sciences centers. JAMIA 1997; 4 (2): 102-11.
157. Schubart JR, Einbinder JS. Evaluation of a data warehouse in an academic health sciences center. Int J Med Inform 2000; 60: 319-33.
158. Kaplan B. The medical computing 'lag': Perceptions of barriers to the application of computers to medicine. Int J Technol Assess Health Care 1987; 3 (1): 123-36.
159. Anderson JG, Aydin CE, Kaplan B. An analytical framework for measuring the effectiveness/impacts of computer-based patient record systems. In: Nunamaker JF, Sprague RH (eds). IV: Information Systems – collaboration systems and technology, organizational systems and technology. Los Alamitos, Cal: IEEE Computer Society Press; 1995. pp. 767-76.
160. Anderson JG. Evaluation in health informatics: Social network analysis. Comput Biol Med 2002; 32 (3): 179-93.
161. May C, Ellis NT. When protocols fail: Technical evaluation, biomedical knowledge and the social production of 'facts' about a telemedicine clinic. Soc Sci Med 2001; 53 (8): 989-1002.
162. Sicotte C, Dennis JL, Lehoux P, Champagne F. The computer-based patient record: Challenges towards timeless and spaceless medical practice. J Med Systs 1998; 22 (4): 237-56.
163. Kristensen M. Assessing a prerequisite for a constructivist approach – change readiness in organisations implementing health informatics systems. In: IT in Health Care: Sociotechnical Approaches. International Conference Proc.; September 6-7, 2001; Department of Health Policy and Management, Erasmus University, Rotterdam, The Netherlands; 2001. p. 38.
164. Aarts J. On articulation and localization – some sociotechnical issues in design, implementation and evaluation of knowledge based systems. In: Quaglini S, Barahona P, Andreassen S (eds). Artificial Intelligence in Medicine: Proc. of the 8th Conference on Artificial Intelligence in Medicine in Europe AIME 2001. Berlin: Springer; 2001. pp. 16-19.
165. Berg M, Langenberg C, Berg I, Kwakkemaat J. Considerations for sociotechnical design: Experiences with an electronic patient record in a clinical context. Int J Med Inform 1998; 52 (1-3): 243-51.
166. Berg M, Goorman E. The contextual nature of medical information. Int J Med Inform 1999; 56 (1-3): 51-60.
167. Berg M. Patient care information systems and health care work: A sociotechnical approach. Int J Med Inform 1999; 55 (2): 87-101.
168. Reddy M, Pratt W, Dourish P, Shabot MM. Sociotechnical requirements analysis for clinical systems. Methods Inf Med 2003; 42 (4): 437-44.
169. Stoop AP, Berg M. Integrating quantitative and qualitative methods in patient care information system evaluation: Guidance for the organizational decision maker. Methods Inf Med 2003; 42 (4): 458-62.
170. Bygholm A. Activity theory as a framework for conducting end-use support. In: IT in Health Care: Sociotechnical Approaches. International Conference Proc.; September 6-7, 2001; Department of Health Policy and Management, Erasmus University, Rotterdam, The Netherlands; 2001. p. 21.
171. Grémy F, Fessler JM, Bonnin M. Information systems evaluation and subjectivity. Int J Med Inform 1999; 56 (1-3): 13-23.
172. Davidson EJ. Analyzing genre of organizational communication in clinical information systems. Inform Tech People 2000; 33 (3): 196-209.
173. Klecun-Dabrowska E, Cornford T. Evaluating telehealth: The search for an ethical perspective. Camb Q Healthc Ethics 2001; 10 (2): 161-69.
174. Zerubavel E. Patterns of time in hospital life. Chicago and London: University of Chicago Press; 1979.
175. Ash J, Berg M. Report of conference track 4: Socio-technical issues of HIS. Int J Med Inform 2003; 69 (2-3): 305-6.
176. Mathiassen L. Collaborative research practice. In: Orlikowski WJ, Walsham G, Jones MR, DeGross JI (eds). Information technology and changes in organizational work. London: Chapman & Hall; 1996. pp. 127-48.
177. Sawyer S, Rosenbaum H. Social informatics in the information sciences: Current activities and emerging directions. Inform Sci 2002; 3 (2).
178. Reddy M, Pratt W, Dourish P, Shabot MM. Asking questions: Information needs in a surgical intensive care unit. Proc AMIA Symp 2002: 647-51.
179. Methods Inf Med 2003; 42 (4).
180. Kaplan B, Lau F, Aarts J, Forsythe DE. Information systems qualitative research in health care. In: Lee AS, Liebenau J, DeGross JI (eds). Information systems and qualitative research. London: Chapman & Hall; 1997. pp. 569-70.
181. Aarts J, Goorman E, Heathfield H, Kaplan B. Successful development, implementation and evaluation of information systems: Does healthcare serve as a model for networked organizations? In: Baskerville R, Stage J, DeGross JI (eds). Organizational and social perspectives on information technology. Boston, Dordrecht, London: Kluwer Academic Publishers; 2000. pp. 517-20.
182. Kaplan B, Kvasny L, Sawyer S, Trauth EM. New words and old books: Challenging conventional discourses about domain and theory in information systems research. In: Myers MD, Whitley EA, Wynn E, De Gross JI (eds). Global and organizational discourse about information technology. London: Kluwer; 2002. pp. 539-45.
183. Bloomfield B, Coombs R, Cooper D, Rea D. Machines and manoeuvres: Responsibility accounting and the construction of hospital information systems. Accounting Manag Inform Technol 1992; 2: 197-219. 184. Bloomfield BP, McLean C. Madness and organization: Informed management and empowerment. In: Orlikowski WJ, Walsham G, Jones MR, DeGross JI (eds). Information technology and changes in organizational work. London: Chapman & Hall; 1996. pp. 371-93. 185. Inform Tech People 1999; 12 (3). 186. Silva L, Backhouse J. Becoming part of the furniture: The institutionalization of infromation systems. In: Lee AS, Leibenau J, DeGross. JI (eds). Information systems and qualitative research. London: Chapman & Hall; 1997. pp. 389-414. 187. J End User Comput 2001; 13 (3 and 4). 188. Walsham G. IT and changing professional identity: Micro-studies and macro-theory. J Am Soc Inform Sci 1998; 49: 1081-89. 189. Avgerou C. Information systems and global diversity. Oxford: Oxford University Press; 2002. 190. Hersh WR, Patterson PK, Kraemer DF. Telehealth: The need for evaluation redux. JAMA 2002; 9 (1): 89-91. 191. Nykanen P, Chowdhury S, Wigertz O. Evaluation of decision support systems in medicine. Comput Methods Programs Biomed. 1991; 34: 229-38. 192. Allen D. Telemental health services today. Telemed Today 1994; 2 (2): 1-24. 193. Burghgraeve P, De Maeseneer J. Improved methods for assessing information technology in primary health care and an example from telemedicine. J Telemed Telecare 1995; 1: 157-64. 194. Bashur R. On the definition and evaluation of telemedicine. Telemed J 1995; 1 (1): 19-30. 195. Filiberti D, Wallace J, Koteeswaran R, Neft D. A telemedicine transaction model. Telemed J 1995; 1 (3): 237-47. 196. Grigsby J. You got an attitude problem or what? Telemed Today 1995: 32-4. 197. Grigsby J, Schlenker R, Kaehny M, Shaughnessy P. Analytic framework for evaluation of telemedicine. Telemed J 1995; 1 (1): 31-9. 198. Grigsby J. Sentenced to life without parole doing outcomes research. Telemed Today 1996: 40-1. 199. DeChant H, Tohme W, Mun S, Hayes W. Health systems evaluation of telemedicine: A staged approach. Telemed J 1996; 2 (4): 303-12. 200. McIntosh E, Cairns J. A framework for the economic evaluation of telemedicine. J Telemed Telecare 1997; 3: 132-9. 201. Yellowlees P. Practical evaluation of telemedicine systems in the real world. J. Telemed. Telecare 1998; 4 (S1): 56-7. 202. Perednia DM. Telemedicine system evaluation, transaction models and multicentered research. J Am Health Inform Manag Assn 1996; 67 (1): 60-3. 203. Rigby M. Health informatics as a tool to improve quality in non-acute care – new opportunities and a matching need for a new evaluation paradigm. Int J Med Inform 1999; 56 (1-3): 141-50. 204. Tohme WG, Hayes WS, Dai H, Komo D. Evaluation of a telemedicine platform for three medical applications. Comp Assisted Rad 1996: 505-11. 205. Wyatt JC. Commentary: Telemedicine trials – clinical pull or technology push? BMJ 1996; 313: 380-1.
206. May C, Gask L, Ellis NT, Atkinson TA, Mair F, Smith C et al. Telepsychiatry evaluation in the North West of England: Preliminary results. Telemed Telecare 2000; 6 (S1): 20-2. 207. Qavi T, Corley L, Kay S. Nursing staff requirements for telemedicine in the neonatal intensive care unit. J End User Comput 2001; 13 (3): 5-13. 208. Reiser SJ. Medicine and the reign of technology. Cambridge: Cambridge University Press; 1978. 209. Kaplan B, Brennan PF. Consumer informatics supporting patients as co-producers of quality. JAMIA 2001; 8 (4): 309-16. 210. Mort M, May CR,Williams T. Remote doctors and absent patients: Acting at a distance in telemedicine? Sci Technol Hum Val 2003; 28 (2): 274-95. 211. Turner J. Telemedicine: Generating the virtual office visit. In: Eder LB (ed). Managing healthcare information systems with web-enabled technologies. Hershey and London: Idea Group Publishing; 2000. pp. 59-68. 212. Balas EA, Jaffrey F, Kuperman GJ, Boren SA, Brown GD, Pinciroli F et al. Electronic communication with patients. Evaluation of distance medicine technology. JAMA 1997; 278 (2): 152-9. 213. Finkelstein J, Cabrera MR, Hripcsak G. Internetbased home asthma telemonitoring: Can patients handle the technology? Chest 2000; 117 (1): 148-55. 214. Sandelowski M. Visible humans, vanishing bodies, and virtual nursing: Complications of life, presence, place, and identity. Advances in Nursing Science 2002; 24 (3): 58-70. 215. Allaert FA, Weinberg D, Dusserre P, Yvon PJ, Dusserre L, Cotran P. Evaluation of a telepathology system between Boston (USA) and Dijon (France): Glass slides versus telediagnostic tvmonitor. Proc Annu Symp Comput Appl Med Care 1995: 596-600. 216. Allaert FA, Weinberg D, Dusserre P, Yvon PJ, Dusserre L, Retaillau B et al. Evaluation of an international telepathology system between Boston (USA) and Dijon: Glass slides versus telediagnostic television monitor. J Telemed Telecare 1996; 2 Suppl 1: 27-30. 217. Travers DA, Downs SM. Comparing the user acceptance of a computer system in two pediatric offices: A qualitative study. Proc AMIA Symp 2000: 853-7. 218. Kaplan B. Objectification and negotiation in interrupting clinical images: Implications for computerbased patient records. Artif Intell Med 1995; 7: 439-54. 219. Kaplan B, Lundsgaarde HP. Toward an evaluation of a clinical imaging system: Identifying benefits. Methods Inf Med 1996; 35: 221-9. 220. Billinghurst M, Hirokazu K. Collaborative augmented reality. Commun ACM 2002; 45 (7): 64-70. 221. Moehr JR. Evaluation: Salvation or nemesis of medical informatics? Comput Biol Med 2002; 32 (3): 113-25. 222. May CR, Mort M, Mair F, EllisNT, Gask L. Evaluation of new technologies in health care systems: What’s the context? Health Inform J 2000; 6 (2): 67-70. 223. Harrison R, MacFarlane A, Wallace P. Implementation of telemedicine: The problem of evaluation. J Telemed Telecare 2002; 8 Suppl 2: 39-40. 224. Pinch TJ, Bijker WE. The social construction of facts and artifacts: Or how the sociology of science and the sociology of technology might benefit
each other. In: Bijker W, Hughes T, Pinch T (eds). The social construction of technological systems. Cambridge, Mass: MIT Press; 1987. pp. 17-50.
225. Nøhr C. The evaluation of expert diagnostic systems – how to assess outcomes and quality parameters? Artif Intell Med 1994; 6 (2): 123-35.
226. Grémy F, Bonnin M. Evaluation of automatic health information systems. In: van Gennip EMS, Talmon JL (eds). Assessment and evaluation of information technologies. Amsterdam: IOS Press; 1995. pp. 9-20.
227. Jørgensen T. Measuring effects. In: van Gennip EMS, Talmon JL (eds). Assessment and evaluation of information technologies. Amsterdam: IOS Press; 1995. pp. 99-109.
228. Miller RA. Medical diagnostic decision support systems – past, present and future – a threaded bibliography and brief commentary. JAMIA 1994; 1: 8-27.
229. Heathfield HA, Buchan IE. Current evaluations of information technology in health care are often inadequate. BMJ 1996; 313 (7063): 1008.
230. Gosling AS, Westbrook JI, Coiera EW. Variation in the use of online clinical evidence: A qualitative analysis. Int J Med Inform 2003 (1): 1-16.
231. Friedman RB, Gustafson DH. Computers in clinical medicine, a critical review. Comput Biomed Res 1977; 10 (3): 199-204.
232. Bankowitz RA, Lave JR, McNeil MA. A method for assessing the impact of a computer-based decision support system on health care outcomes. Methods Inf Med 1992; 31 (1): 3-10.
233. Goldberg HS, Morales A, Gottlieb L, Meador L, Safran C. Reinventing patient-centered computing for the twenty-first century. In: Patel V, Rogers R, Haux R (eds.). Medinfo 2001. Amsterdam: IOS Press; 2001. pp. 1455-58.
234. Kreps GL. Evaluating new health information technologies: Expanding the frontiers of health care delivery and health promotion. Stud Health Technol Inform 2002; 80: 205-12.
235. Pugh GE, Tan JK. Computerized databases for emergency care: What impact on patient care? Methods Inf Med 1994; 33 (5): 507-13.
236. Banta H. Embracing or rejecting innovations: Clinical diffusion of health care technology. In: Anderson JG, Jay SJ (eds). Use and impact of computers in clinical care. New York: Springer; 1987. pp. 132-60.
237. Miller RA. Why the standard view is standard: People, not machines, understand patients' problems. J Med Philos 1990; 15 (6): 581-91.
238. Harris KD, Donaldson JF, Campbell JD. Introducing computer-based telemedicine in three rural Missouri counties. J End User Comput 2001; 13 (4): 26.
239. Penrod LE, Gadd CS. Attitudes of academic-based and community-based physicians regarding EMR use during outpatient encounters. Proc AMIA Symp 2001: 529-32.
240. Ash JS, Gorman PN, Hersh WR. Physician order entry in U.S. hospitals. Proc AMIA Symp 1998: 235-9.
241. Ash JS, Gorman PN, Hersh WR, Poulson SP. Perceptions of house officers who use physician order entry. Proc AMIA Symp 1999: 471-5.
242. Ash J, Gorman P, Lavelle M, Lyman J, Fournier L. Investigating physician order entry in the field: Lessons learned in a multi-centered study. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. pp. 1107-11.
243. Ash JS, Gorman PN, Lavelle M, Stavri PZ, Lyman J, Fournier L et al. Perceptions of physician order entry: Results of a cross-site qualitative study. Methods Inf Med 2003; 42 (4): 313-23. 244. Ash JS, Gorman PN, Lavelle M, Payne TH, Massaro TA, Frantz GL et al. A cross-site qualitative study of physician order entry. JAMIA 2003; 10 (2): 188-200. 245. Ash JS, Stavri PZ, Kuperman GJ. A consensus statement on considerations for a sucessful CPOE implementation. JAMIA 2003;10 (3): 229-34. 246. Coombs CR, Doherty NF, Loan-Clark J.The importance of user ownership and positive user attitudes in the successful adoption of community information systems. J End User Comput 2001; 13 (4): 5. 247. Lau F, Doze S,Vincent D,Wilson D, Noseworthy T, Hayward R. Patterns of improvisation for evidence-based practice in clinical settings. Inform Tech People 1999; 12 (3): 287-303. 248. Isaacs S. The power distance between users of information technology and experts and satisfaction with the information system. In: Patel V, Rogers R, Haux R (eds). Medinfo 2001. Amsterdam: IOS Press; 2001. pp. 1155-7. 249. Talmon J, Enning J, Castãneda G, Eurlings F, Hoyer D, Nykänen P et al. The VATAM guidelines. Int J Med Inform 1999; 56 (1-3): 107-15. 250. Suchman LA. Plans and situated actions: The problem of human-machine interaction. Cambridge: Cambridge University Press; 1987. 251. Suchman L. Representations of work. Commun ACM 1995; 38 (9): 33-4. 252. Butler KA, Esposito C, Hebron R. Connecting the design of software to the design of work. Commun ACM 1999; 42 (1): 38-46. 253. Fafchamps D, Young CY, Tang PC. Modelling work practices: Input to the design of a physician’s workstation. In: Clayton PD (ed.). Proc Symp Comput Appl Med Care. New York: McGraw Hill; 1991. pp. 788-92. 254. Holtzblatt K, Bryer HR. Apprenticing with the customer. Commun ACM 1995; 28 (1): 45-52. 255. Artif Intell Med 1995; 7. 256. Greenbaum J, Kyng M. Design at work: Cooperative design of computer systems. Hillsdale, NJ: Lawrence Erlbaum Associates; 1991. 257. Winograd T, Flores F. Understanding computers and cognition: A new foundation for design. Norwood, NJ: Ablex; 1986. 258. Timpka T, Hedblom P, Holmgren J. Action design: Using an object oriented environment for group process development of medical software. In: Timmers T (ed.). Software engineering in medical informatics. Amsterdam: Elsevier; 1991. pp. 151-65. 259. Sjöberg C, Timpka T. Participatory design of information systems in health care. J Am Health Inform Manag Assn 1998; 5 (2): 177-83. 260. Musen MA. Architectures for architects. Methods Inf Med 1993; 32: 12-3. 261. Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics: Cognitive approaches to evaluation of information systems and user interfaces. Proc AMIA Symp 1997: 218-22. 262. Kushniruk AW, Patel VL. Cognitive evaluation of decision making processes and assessment of information technology in medicine. Int J Med Inform 1998; 51 (2-3): 83-90. 263. Patel V, Kaufman D. Medical informatics and the science of cognition. JAMIA 1998; 5 (6): 493-502. 264. Beuscart-Zéphir MC, Anceaux AF, Crinquette C, Renarda JM. Integrating users’ activity modeling
in the design and assessment of hospital electronic patient records. Int J Med Inform 2001; 64 (2-3): 157-71.
265. Kaplan B, Drickamer M, Marattoli RA. Deriving design recommendations through discount usability engineering: Ethnographic observation and thinking-aloud protocol in usability testing for computer-based teaching cases. Proc AMIA Symp 2003: 346-50.
266. Gardner RM, Lundsgaarde HP. Evaluation of user acceptance of a clinical expert system. JAMIA 1994; 1 (6): 428-38.
267. Saranummi N. Supporting system development with technology assessment. In: van Gennip EMS, Talmon JL (eds). Assessment and evaluation of information technologies. Amsterdam: IOS Press; 1995. pp. 35-44.
268. Faber M. Design and introduction of an electronic patient record: How to involve users? Methods Inf Med 2003; 42 (4): 371-5.
269. Smith ME. Success rates for different types of organizational change. Performance Improvement 2002; 41 (1): 26-33.
270. Lau F, Hebert M. Experiences from health informatics information system implementation projects reported in Canada between 1991 and 1997. J End User Comput 2001; 13 (4): 17.
271. Dowling AF. Do hospital staff interfere with computer system implementation? Health Care Manag Rev 1980; 5: 23-32.
272. Amatayakul M. The state of the computer-based patient record. JAHIMA 1998; 69: 34.
273. Dambro MR, Weiss BD, McClure C, Vuturo AF. An unsuccessful experience with computerized medical records in an academic medical centre. J Med Ed 1988; 63: 617-23.
274. Flowers S. Software failure, management failure: Amazing stories and cautionary tales. Wiley; 1986.
275. Chessare JB, Torok KE. Implementation of COSTAR in an academic group practice of general pediatrics. MD Comput 1993; 10 (1): 23-27.
276. Tonnesen AS, LeMaistre A, Tucker D. Electronic medical record implementation barriers encountered during implementation. Proc AMIA Symp 1999: 625-66.
277. Jeffcott M. Technology alone will never work: Understanding how organizational issues contribute to user neglect and information systems failure in healthcare. In: IT in Health Care: Sociotechnical Approaches. International Conference Proc.; September 6-7, 2001; Department of Health Policy and Management, Erasmus University, Rotterdam, The Netherlands; 2001. p. 35.
278. Zuiderent T. Changing care and theory: A theoretical joyride as the safest journey. In: IT in Health Care: Sociotechnical Approaches. International Conference Proc.; September 6-7, 2001; Department of Health Policy and Management, Erasmus University, Rotterdam, The Netherlands; 2001. p. 62.
279. Southon F, Sauer C, Dampney C. Information technology in complex health services: Organizational impediments to successful technology transfer and diffusion. JAMIA 1997; 4: 112-24.
280. Balka E. Getting the big picture: The macro-politics of information system development (and failure) in a Canadian hospital. Methods Inf Med 2003; 42 (4): 324-30.
281. Benyon-Davies P. The London ambulance service's computerised dispatch system: A case study in information system failure. University of Glamorgan, Pontyprid; 1993.
282. Benyon-Davies P. Information systems ‘failure’: The case of the London ambulance service’s computer aided despatch project. Eur J Inform Syst 1995; 4: 171-84. 283. Friedman CP, Wyatt JC. Publication bias in medical informatics. JAMIA 2001; 8 (2): 189-91. 284. Keen P. Information systems and organizational change. Commun ACM 1981; 24 (1): 24-33. 285. Kling R, Iacono S. The mobilization of support for computerization: The role of computerization movements. Soc Problems 1988; 35 (3): 226-43. 286. Stead WW. Matching the level of evaluation to a project’s stage of development. JAMIA 1996; 3 (1): 92-94. 287. Forsythe DE.The construction of work in artificial intelligence. In: Hess DJ (eds). Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford, Calif.: Stanford University Press; 2001. pp. 16-34. 288. Kaplan B. Evaluating informatics applications – clinical decision support systems literature review. Int J Med Inform 2001; 64 (1): 15-37. 289. Markus M. Power, politics, and MIS implementation. Commun ACM 1983; 26: 430-44. 290. Barsade SG, Brief AP, Spartaro S. The affective revolution of organizational behavior: The emergence of a paradigm. In: Greenberg J (ed). OB: The state of the science. Second ed. Hillsdale, NJ: L. Erlbaum Associates; 2002.
291. Forsythe DE, Buchanan B, Osheroff J, Miller R. Expanding the concept of medical information: An obsevational study of physicians’ information needs. Comput Biomed Res 1992; 25: 181-200. 292. Kaplan B, Farzanfar R, Friedman RH. Personal relationships with an intelligent interactive telephone health behavior advisor system: A multimethod study using surveys and ethnographic interviews. Int J Med Inform 2003; 71 (1): 33-41. 293. Tierney WM, Overhage JM, McDonald CJ. A plea for controlled trials in medical informatics. JAMIA 1994; 1 (4): 353-5. 294. Wyatt JC, Wyatt SM. When and how to evaluate health information systems? Int J Med Inform 2003; 69 (2-3): 251-9. 295. Mair F, Whitten P. Systematic review of studies of patient satisfaction with telemedicine. BMJ 2000; 320 (7248): 1517-20. 296. Banta D. The development of health technology assessment. Health Policy 2003; 63 (2): 121-32. 297. van der Loo RP. Overview of published assessment and evaluation studies. In: van Gennip EMS, Talmon JL (eds.). Assessment and evaluation of information technologies. Amsterdam: IOS Press; 1995. pp. 261-82. 298. Heathfield HA, Peel V, Hudson P, Kay S, Mackay L, Marley T et al. Evaluating large scale health information systems: From practice towards theory. Proc AMIA Symp 1997: 116-20.
299. Heathfield H, Pitty D, Hanka R. Evaluating information technology in health care: Barriers and challenges. BMJ 1998; 316 (7149): 1959-61. 300. Paré G. Implementing clinical information systems: A multiple-case study within a US hospital. Health Ser Manag Res 2002; 15 (2): 71-92. 301. Kaplan B, Maxwell JA. Qualitative research methods for evaluating computer information systems. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating health care information systems; approaches and applications. Thousand Oaks, CA: Sage; 1994. pp. 45-68. 302. Sawyer S, Eschenfelder KR. Social informatics: Perspectives, examples and trends. In: Cronon B (eds). Annual review of information technology. Medford, NJ; 2002. pp. 427-65.
Correspondence to:
Bonnie Kaplan, PhD
President, Kaplan Associates
59 Morris Street
Hamden, CT 06517, USA
E-mail: [email protected]