
Health Information Technology

By Aaron S. Kesselheim, Kathrin Cresswell, Shobha Phansalkar, David W. Bates, and Aziz Sheikh

Health Affairs 30, no. 12 (2011): 2310–2317
doi: 10.1377/hlthaff.2010.1111
©2011 Project HOPE—The People-to-People Health Foundation, Inc.

Aaron S. Kesselheim ([email protected]) is an assistant professor of medicine at Harvard Medical School, and is on the faculty of the Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, in Boston, Massachusetts. Kathrin Cresswell is a research associate at the Centre for Population Health Sciences at the University of Edinburgh, in the United Kingdom. Shobha Phansalkar is a senior medical informatician at Partners Healthcare, in Boston. David W. Bates is chief of general internal medicine and primary care at Brigham and Women’s Hospital. Aziz Sheikh is a professor of primary care research and development at the Centre for Population Health Sciences.


Analysis & Commentary

Clinical Decision Support Systems Could Be Modified To Reduce ‘Alert Fatigue’ While Still Minimizing The Risk Of Litigation

ABSTRACT Clinical decision support systems—interactive computer systems that help doctors make clinical choices—can reduce errors in drug prescribing by offering real-time alerts about possible adverse reactions. But physicians and other users often suffer “alert fatigue” caused by excessive numbers of warnings about items such as potentially dangerous drug interactions. As a result, they may pay less attention to or even ignore some vital alerts, thus limiting these systems’ effectiveness. Designers and vendors sharply limit the ability to modify alert systems because they fear being exposed to liability if they permit removal of a warning that could have prevented a harmful prescribing error. Our analysis of product liability principles and existing research into the use of clinical decision support systems, however, finds that more finely tailored or parsimonious warnings could ease alert fatigue without imparting a high risk of litigation for vendors, purchasers, and users. Even so, to limit liability in this area, we recommend stronger government regulation of clinical decision support systems and development of international practice guidelines highlighting the most important warnings.

Errors in ordering, dispensing, and administering medications represent a common patient safety concern in inpatient, outpatient, and community settings such as nursing homes. The rate of such errors, as well as associated adverse drug events, can be greatly reduced by computer systems that help doctors make clinical decisions.1 Simple clinical decision support systems can provide medication-specific information to users based on a stored repository of clinical information. Such information includes, for example, recommended dosages to avoid both under- and overdosing and known adverse drug-drug interactions to avoid when prescribing medications. This repository of clinical information is the knowledge base underlying the clinical decision support system. More complex systems integrate the knowledge base with patients’ electronic medical records and use an inference mechanism to generate recommendations that are specific to individual patients. These more complex decision support systems can offer real-time alerts for a variety of prescribing issues—including suggested dosing, drug allergies, and drug-drug interactions—and they can prompt users to order and check specific laboratory tests.

Despite their benefits, clinical decision support systems are sometimes criticized for issuing excessive alerts about possible drug interactions
that are of limited clinical usefulness.2,3 For example, it has been reported that some systems will trigger a medication interaction alert when one of the offending agents is a topical preparation with negligible systemic absorption.4 As a result, physicians in various care settings override or ignore 49–96 percent of all alerts.5 Excessive warnings can result in “alert fatigue,” wherein physicians may inadvertently ignore clinically useful alerts, thus diminishing the systems’ effectiveness and possibly leading to serious adverse consequences for patients.6 Possible remedies include generating a more effective, parsimonious set of warnings—that is, a more limited set that includes messages of the highest importance.7 Alternatively, warnings could be tailored to a particular clinical environment, taking into account the individual care setting, such as adjusting the type or style of a medication interaction alert if a particular community of physicians is found to respond inappropriately to it.4 A tailored system might advise adjusting a patient’s medication dosage only when other patient data, such as age, or specific comorbidities, such as renal function, raise concern, as opposed to alerting indiscriminately for all patients. Both approaches can lead to fewer overrides. Tailoring can also meet the needs of different user groups. For example, systems that allow doctors to order medications electronically can present options based on the varying needs of different specialties. They can even present lists of medications specifically relevant to an individual physician or specialty.8 Similarly, warnings about the administration of drugs—such as the need to give a pill on an empty stomach, thirty minutes before a meal, or while the patient is sitting upright—may be directed to the parties responsible for implementing them. However, clinical decision support systems used by most health care organizations are neither parsimonious nor tailored. Vendors purposefully design knowledge sets that are overly inclusive in nature, and they often prevent subsequent tailoring of alerts, even if the purchasing organization finds that specific alerts are not useful.9,10 Vendors’ fear of litigation appears to be a driving force behind this lack of flexibility.11 Vendors worry that they could be exposed to liability if they remove a warning that could have prevented a harmful prescribing error.12,13 Purchasers and physicians are therefore often unable to implement fewer, more careful, or more tailored alerts. Even when they can, they may be reluctant to do so because of their own concern about the legal implications of ignoring a potentially important warning.14,15 As a result, overalerting is the norm.16,17
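To make the preceding description concrete, the sketch below illustrates one way a knowledge base, an inference step, and tailored, severity-filtered alerting might fit together. It is a minimal illustration only: the rule entries, severity tiers, route-based suppression, and the eGFR threshold are hypothetical values invented for this example, not the content of any vendor knowledge base or published interaction list.

```python
from dataclasses import dataclass

# Hypothetical severity tiers; real knowledge bases use richer classifications.
SEVERITY = {"contraindicated": 3, "serious": 2, "minor": 1}

@dataclass
class InteractionRule:
    drug_a: str
    drug_b: str
    severity: str                  # one of the SEVERITY keys
    ignore_routes: tuple = ()      # e.g., suppress when one agent is topical

@dataclass
class Patient:
    egfr: float                    # estimated glomerular filtration rate, mL/min
    active_meds: tuple = ()        # (drug name, route) pairs from the record

# A toy "knowledge base": two interaction rules and one renal-dosing rule.
KNOWLEDGE_BASE = [
    InteractionRule("ketoconazole", "simvastatin", "serious"),
    InteractionRule("warfarin", "miconazole", "serious", ignore_routes=("topical",)),
]
RENAL_DOSE_REVIEW = {"metformin": 30.0}   # alert only when eGFR falls below threshold

def generate_alerts(patient, new_drug, route, min_severity="serious"):
    """Return alerts at or above min_severity, tailored to the patient's context."""
    alerts = []
    floor = SEVERITY[min_severity]
    for existing_drug, existing_route in patient.active_meds:
        for rule in KNOWLEDGE_BASE:
            if {existing_drug, new_drug} != {rule.drug_a, rule.drug_b}:
                continue
            # Tailoring: skip the alert when either agent uses an ignored route,
            # such as a topical preparation with negligible systemic absorption.
            if route in rule.ignore_routes or existing_route in rule.ignore_routes:
                continue
            # Parsimony: suppress anything below the configured severity floor.
            if SEVERITY[rule.severity] >= floor:
                alerts.append(f"{rule.severity.upper()}: {new_drug} + {existing_drug}")
    # Patient-specific dosing check: fires only when renal function is reduced.
    threshold = RENAL_DOSE_REVIEW.get(new_drug)
    if threshold is not None and patient.egfr < threshold:
        alerts.append(f"DOSING: review {new_drug}; eGFR {patient.egfr:.0f} mL/min")
    return alerts

if __name__ == "__main__":
    pt = Patient(egfr=25.0, active_meds=(("simvastatin", "oral"),))
    print(generate_alerts(pt, "ketoconazole", "oral"))   # interaction alert fires
    print(generate_alerts(pt, "metformin", "oral"))      # renal-dosing alert fires
```

In this sketch the severity floor and the route exclusions are the two levers a purchasing organization might adjust; treating them as explicit, reviewable configuration rather than hard-coded behavior is one way the tailoring described above could be made auditable.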

Thus, although electronic medical records and clinical decision support systems are growing in popularity, legal hurdles may prevent them from being optimally designed and used by health care professionals.18 The result may be similar to the effects of defensive medicine, which emerges from physicians’ concern about liability risk19 and can lead to inefficient use of resources20 and threats to patient safety.21 We do not know of any legal test cases that have explored product liability risk associated with clinical decision support systems. However, in this article we review the principles of product liability to examine whether fears of liability arising from medication-related clinical decision support systems are well grounded. We review these concerns from the standpoint of designers and users, and we conclude that the liability risks of parsimonious and tailored warnings are overstated. Indeed, it may be argued that failure to design systems that issue warnings more sparingly also poses a threat of liability, because of physicians’ tendency to mechanically override alerts when they seem excessive. Finally, we suggest that better coordination and oversight by professional or regulatory groups in this field could help reduce any real or perceived liability on the part of vendors, purchasers, and users.

The Legal Risks

Three parties are involved in the development, implementation, and use of a clinical decision support system for medication safety: the software developers and knowledge base vendors, the purchasing organization, and the health care provider (the user) who interacts with the message. We address each in turn.

Developers And Knowledge Base Vendors The software developers for clinical decision support systems integrate information on risks with the electronic medical record. The risk information may be separately supplied by other vendors who have expertise in processing and applying medical knowledge. Software developers can also create inference mechanisms that supply patient-specific information and alerts. Systems sold in the United States are subject to oversight by the Food and Drug Administration (FDA) as medical devices.22 The FDA has authority to make sure systems adhere to general development standards, including those calling for appropriate testing, planning, configuration, and reports of adverse events.23 Although the FDA has legal authority, the agency “has largely refrained from enforcing [its] regulatory requirements” for health information technology devices such as clinical decision support systems.24 As a result, developers submit few clinical decision support systems for premarket review.
Yet it is not hard to imagine a scenario in which faulty clinical decision support leads physician-users to miss critical information and results in a negative outcome for the patient. For example, in one case, the FDA reported that a software application for “checking a patient’s profile for drug allergies failed to display the allergy information properly…[because of] a missing codeset [the inclusion of certain data elements in the overall code].”24 If that error led to patient harm from prescription of a contraindicated drug, the designer of the knowledge base and inference mechanism might come under legal scrutiny.

The legal—and related financial and reputational—risk to the software developers for such design errors largely turns on whether they are held to a strict liability or traditional negligence standard. Strict liability is common in product liability law and imposes damages on defendants whose consumer goods cause harm.25 Under a strict liability regime, a plaintiff needs only to prove that the defendant’s actions caused the injury; it does not matter whether the defendant followed customary practices or took reasonable precautions.26 Strict liability is therefore highly favorable to plaintiffs. This heightened standard is intended to encourage manufacturers to develop consumer goods that are appropriately safe.

Although strict liability applies to goods, it does not apply to services, such as the medical care that is provided and “sold” in the marketplace. Claims involving services are usually adjudicated under traditional negligence principles,27,28 which require more than proof of causation. Instead, making a case for negligence requires a demonstration that a party did not take appropriate care or that the party’s behavior was not in line with usual practices in the field. Because negligence imposes a higher standard of proof, this standard can favor defendants. In the medical malpractice field, where a negligence standard is the norm, data show that plaintiffs lose the majority of cases that go to trial.29

Therefore, the overall liability risk for developers hinges on whether clinical decision support systems and knowledge bases are considered goods or services. Some computer programs, such as those embedded in pacemakers or radiation therapy machines, resemble products because they cause the device to function in the way directed by the health care provider.30,31 Strict liability may be appropriate in these situations.32 In contrast, clinical decision support systems are intended to help physicians make appropriate decisions about the use of prescription drugs.33 Because the final prescribing decision is left up to the physician, these computer programs are more analogous to services and should be judged under a negligence standard. Thus, exercising reasonable care in the design of clinical decision support systems should help absolve developers from liability. In this context, “reasonable care” includes routine quality checks to ensure that the software performs as intended—avoiding erroneous alerts—and that the knowledge base is accurate and up to date.

Two other principles also limit the liability of vendors and developers of clinical decision support systems. First, the “learned intermediary” rule holds physicians responsible to use technology in combination with their professional knowledge and experience in the provision of care. This rule arose because many courts accepted that physicians’ advanced training provides them with specialized knowledge that separates them from traditional consumers.34 Failure to apply that knowledge is likely to cast scrutiny on the health care professional, rather than the purchaser, vendor, or developer. Second, some vendors have tried to deflect liability by forcing purchasers to sign “hold harmless” clauses that shift responsibility to the system’s users for errors that arise, even when the users are following vendors’ instructions.11 Such clauses may be enforceable, although courts have invalidated similar clauses in other fields if they violated public policy or if the vendors, in imposing them, took advantage of the users’ poor bargaining position.35 Hold-harmless clauses should not exonerate developers from faulty software or other errors of their own creation,11 although to our knowledge, hold-harmless clauses required by clinical decision support vendors have not yet faced court challenges.

Even without relying on hold-harmless clauses, we believe that a traditional negligence standard and the learned intermediary principle substantially limit liability risks for clinical decision support vendors and developers. Allowing tailored or parsimonious warnings in their software should not increase their risk, particularly if such changes are implemented in accordance with defensible standards and with the participation and agreement of the purchasers and users. Indeed, it is worth noting again that the risk of liability could work both ways and could apply to developers of overly conservative systems.36 If too many warnings lead to alert fatigue and increase the risk of prescribing errors, software developers who prevented tailored warnings may face allegations that they failed to design the system with reasonable care, leaving them exposed to charges of negligence.
Systems Purchasers And Users Purchasers of clinical decision support systems for medication safety—usually health care delivery organizations and physician group practices—may have insights about how to optimize the usefulness of inference mechanisms. For example, potential drug-drug interactions between azole drugs to treat fungal infections (for example, ketoconazole) and HMG-CoA reductase inhibitor drugs to treat elevated lipids (for example, simvastatin) have been shown to carry a low risk of adverse effects and are rarely of any clinical consequence.37 If a purchaser decides to exclude that warning and a patient is prescribed both drugs and experiences an adverse event, is the purchaser liable? As with developers and vendors, the inference mechanism would probably be judged to be a service, and a decision by a purchaser to opt out of a warning would be judged under a reasonableness standard: that is, what action would have been appropriate under the given circumstances?

Physicians and other health care professionals, for their part, may worry about the application of the learned intermediary rule. Are they at risk if they choose to override the alerts issued by clinical decision support systems? For example, assume that a physician seeks to prescribe a beta-blocker as secondary prevention of ischemic heart disease for a patient with a history of asthma. The inference mechanism would generate a warning about the potential for an asthma flare-up triggered by the use of a beta-blocker.38 If the physician ignores the warning, and the patient experiences an asthma flare-up that leads to hospitalization, death, or both, the court would evaluate the question on the basis of the given circumstances. The court might reasonably conclude that in a patient with a mild history of asthma, untreated ischemic heart disease poses a more serious risk than an asthma flare-up. The decision to ignore a warning would probably not change the disposition of a malpractice case brought against the physician-user.

The use of clinical practice guidelines in malpractice litigation serves as a reasonable analogy: If clinicians are sued for negligence, evidence of adherence to known guidelines can help them avoid liability.39,40 However, departing from guidelines does not necessarily subject physicians to liability if the departure was medically appropriate.41 The same would be true for clinical decision support system warnings. Prescribers are assumed to have knowledge about the known risks of the drugs they prescribe. Questions facing the physician would include whether the potential for harm should have led to a different prescription or whether the physician adequately informed the patient about the risk of taking the medication. Thus, although purchasers’ and users’ liability risk should not be dismissed, it should also not be overstated. Claims alleging a system’s failure to warn about the side effects of medications have been relatively rare, although with increasing use of clinical decision support systems, such claims may increase in the future. Of course, as with developers and vendors, the risk of liability could work both ways. As clinical decision support systems become more widespread and studies showing their usefulness in reducing patient error mount, liability could increasingly arise from failure to use these technologies.42

Promoting Effective Use Of Clinical Decision Support

The previous section detailed factors that we believe limit liability considerations for developers, purchasers, and users, even in the context of clinical decision support systems that issue parsimonious or tailored warnings. Yet liability concerns may be justified for vendors or users who do not exercise reasonable care in designing, using, or implementing these systems. Unfortunately, the sorts of behaviors that fall short of reasonable care and thus increase liability risk are not well described, in part because there is little consensus about evidence-based system design and implementation.43 A number of expert groups have provided recommendations to increase the effective use of clinical decision support systems.18,44–46 In this section we highlight how to address the liability concerns of stakeholders who develop and use clinical decision support systems that feature tailored or parsimonious warnings.

For developers, the risk is greatest when the system is faulty, which emphasizes the importance of ensuring that the code is written in a competent and reliable way and that the system is tested in a way that mirrors the practical context of intended use as closely as possible. In designing systems, developers should consider the needs of users, possibly by consulting them about system requirements before writing the code and by presenting them with prototypes that can be refined based on user feedback. In addition, the system should be appropriately labeled so that its characteristics and limitations are known.47 Relevant information for users may include how and what evidence was incorporated in the design of the system and how much local configuration and training should be expected.36

Another important goal is to reduce variability in systems performance.
A study evaluating the adequacy of medication-related decision support in US hospitals found high levels of variability in whether the system detected important warnings. In a simulation, only 53 percent of potentially fatal orders triggered a warning, while the system alerted users to 10–82 percent of potentially serious adverse drug events, depending on the individual hospital and system in question.48

One solution would be to increase national and international collaboration among vendors, implementers, and regulators for the coordination and oversight of clinical decision support system software.24,49–51 Some commentators have suggested that regulatory oversight may stifle innovation.12 However, such oversight can protect patients by ensuring that systems are of high quality and include approved parsimonious and tailored decision support. For example, the independent Certification Commission for Healthcare Information Technology launched a software certification program for clinical decision support systems in 2011.52 England’s Department of Health has already developed software design standards, including those for clinical decision support systems, that will be mandatory for all software suppliers.53 Internationally, the World Health Organization’s World Alliance for Patient Safety could emphasize the importance of promoting optimal use in the design of clinical decision support systems. It could do so by providing high-level guidelines, which national governmental bodies of member states could then encourage their members to implement.54

Better postmarketing surveillance can also help prevent clinical decision support systems from inadvertently facilitating medical errors or other unintended consequences that have arisen—particularly from computerized physician order entry—such as inadvertent discontinuation of medications or ignored allergy notices.55–57

To further protect purchasers and users, governments should pay greater attention to and devote additional financial resources to developing consensus guidelines that support parsimonious warnings. Physician input will be essential, although the process will probably be difficult because of a lack of agreement regarding which alerts can be safely switched off.58 Currently, any organization that wishes to have a parsimonious set of warnings needs to develop it—and most lack the resources to do so. To address this, the Office of the National Coordinator for Health Information Technology identified a small set of the most important drug-drug interactions, which will be made publicly available to all system and knowledge base vendors.

Finally, there needs to be appropriate labeling, training, and briefing to help clinicians use clinical decision support systems as intended and understand the relevant liability issues. For example, user training conducted by health care organizations should emphasize that physicians should not blindly follow or unreasonably ignore alerts and that, ultimately, they need to use their best judgment in assessing the evidence. Unfortunately, because of cost implications and inadequate communication of system limitations, many organizations provide limited training regarding these issues.
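As a purely illustrative sketch of the governance practices discussed above, the example below filters vendor-supplied alerts against a locally approved high-priority subset, applies committee-documented suppressions, and writes every decision to an append-only audit log that later surveillance could review. The alert identifiers, rationale text, and log format are hypothetical assumptions, not the ONC list or any vendor's actual interface.

```python
import json
import time

# Hypothetical identifiers for locally approved "high-priority" alerts, for
# example a curated subset of the most important drug-drug interactions.
HIGH_PRIORITY = {"ddi:warfarin+amiodarone", "ddi:methotrexate+trimethoprim"}

# Suppressions approved by a local therapeutics committee, each with a
# recorded rationale (illustrative content only).
APPROVED_SUPPRESSIONS = {
    "ddi:ketoconazole+simvastatin": "Committee review: low observed clinical risk.",
}

def filter_and_log(raw_alerts, audit_log_path="alert_audit.jsonl"):
    """Show high-priority alerts, apply approved suppressions, log every decision."""
    shown = []
    with open(audit_log_path, "a", encoding="utf-8") as log:
        for alert_id in raw_alerts:
            if alert_id in APPROVED_SUPPRESSIONS:
                decision, reason = "suppressed", APPROVED_SUPPRESSIONS[alert_id]
            elif alert_id in HIGH_PRIORITY:
                decision, reason = "shown", "on local high-priority list"
            else:
                # Conservative default: unreviewed alerts are still displayed.
                decision, reason = "shown", "not yet reviewed locally"
            # The audit trail records what the prescriber was (or was not) shown,
            # which supports postmarketing surveillance and documents the
            # rationale behind each local tailoring decision.
            log.write(json.dumps({
                "timestamp": time.time(),
                "alert": alert_id,
                "decision": decision,
                "reason": reason,
            }) + "\n")
            if decision == "shown":
                shown.append(alert_id)
    return shown

if __name__ == "__main__":
    incoming = ["ddi:warfarin+amiodarone", "ddi:ketoconazole+simvastatin"]
    print(filter_and_log(incoming))   # only the high-priority alert is displayed
```

Keeping the suppression list small, versioned, and tied to a written rationale is what would let an organization later show that any departure from the vendor's default alerting was a considered, documented choice.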

Conclusion

The legal situation surrounding clinical decision support systems is still unclear, but our review suggests that parsimonious or tailored warnings do not raise the liability risk of system manufacturers and physicians as long as systems are designed well and providers continue to use their best medical judgment. Yet liability remains a real, although remote, possibility for developers or users. Stronger government regulation and international guidelines—such as practice guidelines highlighting the most important warnings—could help lower both the liability risk and perceptions of risk. Such steps would support the efficient uptake and implementation of this important tool for patient safety. ▪

Aaron Kesselheim is supported by a career development award from the Agency for Healthcare Research and Quality (K08HS18465-01) and a Robert Wood Johnson Foundation Investigator Award in Health Policy Research. Shobha Phansalkar and David Bates acknowledge the support for their work from the Health Information Technology Center for Education and Research in Therapeutics, supported by the Agency for Healthcare Research and Quality (HS11169-01; “Improving safety by computerizing outpatient prescribing”). Kathrin Cresswell and Aziz Sheikh acknowledge the support for their work from the National Health Service (NHS) Connecting for Health’s Evaluation Programme and the Medical Research Council.


NOTES
1. Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280(15):1311–6.
2. Black A, Car J, Pagliari C, Anandan C, Cresswell K. The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Med. 2011;8(1):e1000387.
3. Eslami S, Abu-Hanna A, de Keizer NF. Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc. 2007;14(4):400–6.
4. Kuperman GJ, Bobb A, Payne TH, Avery AJ, Gandhi TK, Burns G, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc. 2007;14(1):29–40.
5. Van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138–47.
6. Avery AJ, Savelyich BS, Sheikh A, Cantrill J, Morris CJ, Fernando B, et al. Identifying and establishing consensus on the most important safety features of GP computer systems: e-Delphi study. Inform Prim Care. 2005;13(1):3–12.
7. Shah NR, Seger AC, Seger DL, Fiskio JM, Kuperman GJ, Blumenfeld B, et al. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006;13(1):5–11.
8. Miller RA, Waitman LR, Chen S, Rosenbloom ST. The anatomy of decision support during inpatient care provider order entry (CPOE): empirical observations from a decade of CPOE experience at Vanderbilt. J Biomed Inform. 2005;38(6):469–85.
9. Fieschi M, Dufour JC, Staccini P, Gouvernet J, Bouhaddou O. Medical decision support systems: old dilemmas and new paradigms? Methods Inf Med. 2003;42(3):190–8.
10. Heathfield HA, Wyatt J. Philosophies for the design and development of clinical decision-support systems. Methods Inf Med. 1993;32(1):1–8.
11. Koppel R, Kreda D. Health care information technology vendors’ “hold harmless” clause: implications for patients and clinicians. JAMA. 2009;301(12):1276–8.
12. Gable JK. An overview of the legal liabilities facing manufacturers of medical information systems. Quinnipiac Health Law J. 2001;5:127–51.
13. Fox J, Thomson R. Clinical decision support systems: a discussion of quality, safety, and legal liability issues. Proc AMIA Symp. 2002:265–9.
14. Mangalmurti SS, Murtagh L, Mello MM. Medical malpractice liability in the age of electronic health records. N Engl J Med. 2010;363(21):2060–7.
15. Office of the National Coordinator for Health Information Technology. Clinical Decision Support Workshop meeting summary, August 25–26, 2009 [Internet]. Washington (DC): ONC; 2009 [cited 2011 Nov 15]. Available from: http://healthit.hhs.gov/portal/server.pt/document/898131/onc_cds_workshop_meeting_summary_pdf
16. Teich JM, Osheroff JA, Pifer EA, Sittig DF, Jenders RA. Clinical decision support in electronic prescribing: recommendations and an action plan. J Am Med Inform Assoc. 2005;12(4):365–76.
17. Burton LC, Anderson GF, Kues IW. Using electronic health records to help coordinate care. Milbank Q. 2004;82(3):457–81.
18. Greenberg M, Ridgely MS. Clinical decision support and malpractice risk. JAMA. 2011;306(1):90–1.
19. Studdert DM, Mello MM, Sage WM, DesRoches CM, Peugh J, Zapert K, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA. 2005;293(21):2609–17.
20. Hermer LD, Brody H. Defensive medicine, cost containment, and reform. J Gen Intern Med. 2010;25(5):470–3.
21. Budetti PP. Tort reform and the patient safety movement: seeking common ground. JAMA. 2005;293(21):2660–2.
22. Miller RA, Gardner RM. Summary recommendations for responsible monitoring and regulation of clinical software systems. Ann Intern Med. 1997;127(9):842–5.
23. Food and Drug Administration. General principles of software validation; final guidance for industry and FDA staff [Internet]. Silver Spring (MD): FDA; 2002 Jan 11 [cited 2011 Apr 30]. Available from: http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm
24. Shuren J. Testimony of Jeffrey Shuren, director of FDA’s Center for Devices and Radiological Health, to Health Information Technology (HIT) Policy Committee [Internet]. Silver Spring (MD): Food and Drug Administration; 2010 Feb 25 [cited 2011 Apr 30]. Available from: http://law2point0.com/wordpress/wp-content/uploads/2010/03/DrShrudenPreparedTestiony.pdf
25. Epstein RA. Cases and materials on torts. 6th ed. Boston (MA): Little, Brown and Company; 1995.
26. American Law Institute. Restatement of the law, second, torts, sec. 402A, comment j. Philadelphia (PA): The Institute; 1965.
27. Rick SL. The key issue in reducing risk of liability with expert systems: product or service. In: IEEE International Conference on Developing and Managing Intelligent System Projects, 1993. Washington (DC): Institute of Electrical and Electronics Engineers; 1993. p. 124–9.
28. Bequai A. Legal status of expert systems. Comput Law Secur Rep. 2005;8(3):136.
29. Studdert DM, Mello MM. When tort resolutions are “wrong”: predictors of discordant outcomes in medical malpractice litigation. J Legal Stud. 2007;36:S47–78.
30. Mancino PB, Siachos PG. Physician liability for malfunctioning equipment. J Am Coll Radiol. 2008;5(4):546–9.
31. Miller RA, Miller SM. Legal and regulatory issues related to the use of clinical software in health care delivery. In: Greenes RA, editor. Clinical decision support: the road ahead. Amsterdam: Elsevier Academic Press; 2007. p. 423–44.
32. Ware A, Castle G. Product liability for medical devices. Regulatory Affairs Journal Devices. 2005;14(Jul/Aug):217–24.
33. Miller RA, Schaffner KF, Meisel A. Ethical and legal issues related to use of computer programs in clinical medicine. Ann Intern Med. 1985;102(4):529–37.
34. Hall TS. Reimagining the learned intermediary rule for the new pharmaceutical marketplace. Seton Hall Law Rev. 2004;35(1):193–261.
35. Hoffman S, Podgurski A. Finding a cure: the case for regulation and oversight of electronic health record systems. Harvard J Law Tech. 2008;22:103–65.
36. Berner ES. Ethical and legal issues in the use of clinical decision support systems. J Healthc Inf Manag. 2002;16(4):34–7.
37. Yu DT, Peterson JF, Seger DL, Werth GC, Bates DW. Frequency of potential azole drug-drug interactions and consequences of potential fluconazole drug interactions. Pharmacoepidemiol Drug Saf. 2005;14(11):755–67.
38. Avery AJ, Rodgers S, Cantrill JA, Armstrong S, Elliott R, Howard R, et al. Protocol for the PINCER trial: a cluster randomised trial comparing the effectiveness of a pharmacist-led IT-based intervention with simple feedback in reducing rates of clinically important errors in medicines management in general practices. Trials. 2009;10:28.
39. Samanta A, Mello MM, Foster C, Tingle J, Samanta J. The role of clinical guidelines in medical negligence litigation: a shift from the Bolam standard? Med Law Rev. 2006;14(3):321–66.
40. Hurwitz B. Clinical guidelines and the law: negligence, discretion and judgment. Oxford: Radcliffe Medical Press; 1998.
41. Recupero PR. Clinical practice guidelines as learned treatises: understanding their use as evidence in the courtroom. J Am Acad Psychiatry Law. 2008;36(3):290–301.
42. Miller RA. Legal issues related to medical decision support systems. Int J Clin Monit Comput. 1989;6(2):75–80.
43. Mollon B, Chong J Jr., Holbrook AM, Sung M, Thabane L, Foster G. Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Med Inform Decis Mak. 2009;9:11.
44. Teich JM, Osheroff JA, Pifer EA, Sittig DF, Jenders RA. Clinical decision support in electronic prescribing: recommendations and an action plan. J Am Med Inform Assoc. 2005;12(4):365–76.
45. Miller RA, Gardner RM, Johnson KB, Hripcsak G. Clinical decision support and electronic prescribing systems: a time for responsible thought and action. J Am Med Inform Assoc. 2005;12(4):403–9.
46. Sittig DF, Classen DC. Safe electronic health record use requires a comprehensive monitoring and evaluation framework. JAMA. 2010;303(5):450–1.
47. Geissbuhler A. Computer-assisted clinical decision support. Chap. 14 in: Chapman GB, Sonnenberg FA, editors. Decision making in healthcare: theory, psychology, and applications. New York (NY): Cambridge University Press; 2000. p. 362–85.
48. Metzger J, Welebob E, Bates DW, Lipsitz S, Classen DC. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood). 2010;29(4):655–63.
49. Coiera EW, Westbrook JI. Should clinical software be regulated? Med J Aust. 2006;184(12):600–1.
50. Australian Department of Health and Ageing. Australian Health Information Council electronic decision support systems report—2008 [Internet]. Canberra: The Department; 2008 [cited 2011 Nov 15]. Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/9B50F6B541401BBACA25753F007C283A/$File/AHIC%20EDSS%20web%20version%2012%20January%202009.pdf
51. Cresswell KM, Bates DW, Phansalkar S, Sheikh A. Opportunities and challenges in creating an international centralised knowledge base for clinical decision support systems in ePrescribing. BMJ Qual Saf. 2011;20(7):625–30.
52. Manos D. Certification Commission accelerates certification development [Internet]. New Gloucester (ME): Healthcare IT News, MedTech Media; 2011 Mar 23 [cited 2011 Apr 30]. Available from: http://healthcareitnews.com/news/certification-commission-accelerates-certification-development
53. NHS Connecting for Health. Health informatics—application of patient safety risk management to the manufacture of health software [Internet]. London: National Health Service; 2009 Apr [cited 2011 Apr 30]. Available from: http://www.connectingforhealth.nhs.uk/systemsandservices/clinsafety/dscn/dscn14.pdf
54. World Alliance for Patient Safety. WHO draft guidelines for adverse event reporting and learning systems: from information to action [Internet]. Geneva: World Health Organization; 2005 [cited 2011 Apr 30]. Available from: http://www.who.int/patientsafety/events/05/Reporting_Guidelines.pdf
55. Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medical errors. JAMA. 2005;293(10):1197–203.
56. Campbell EM, Sittig DF, Ash JS, Guappone KP, Dykstra RH. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2006;13(5):547–56.
57. Strom BL, Schinnar R, Aberra F, Bilker W, Hennessy S, Leonard CE, et al. Unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction: a randomized controlled trial. Arch Intern Med. 2010;170(17):1578–83.
58. Van der Sijs H, Aarts J, van Gelder T, Berg M, Vulto A. Turning off frequently overridden drug alerts: limited opportunities for doing it safely. J Am Med Inform Assoc. 2008;15(4):439–48.

AARON S. KESSELHEIM, KATHRIN CRESSWELL, SHOBHA PHANSALKAR, DAVID W. BATES & AZIZ SHEIKH

Aaron S. Kesselheim is an assistant professor of medicine at Harvard Medical School.

In this month’s Health Affairs, Aaron Kesselheim and coauthors explore the issue of “alert fatigue” from clinical decision support systems—excessive numbers of warnings about potentially dangerous drug interactions that doctors, instead of paying attention to, effectively tune out. System designers and vendors sharply limit the ability to modify these alerts because they fear being exposed to liability if they remove a warning that could have prevented a harmful prescribing error. Kesselheim and colleagues analyzed product liability law and existing research, and they conclude that the systems could be altered so that more-tailored warnings could be issued without greatly increasing litigation risks. To limit liability but still encourage the appropriate use of these important systems, the authors recommend stronger government regulation of the systems and international practice guidelines that would highlight the most important warnings these systems should issue.

Kesselheim is an assistant professor of medicine at Harvard Medical School who is also on the faculty of the Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, where he concentrates on pharmaceutical epidemiology and economics. He is also a primary care doctor and patent attorney whose research focuses on the effects of intellectual property laws and regulatory policies on drug development; the approval process; and the costs, availability and use of prescription drugs in resource-poor areas. In 2010 he received the Alice S. Hersh New Investigator Award from AcademyHealth. Kesselheim is board certified in internal medicine and a member of the New York bar. He earned his medical and law degrees at the University of Pennsylvania and a master of public health degree at Harvard.

Kathrin Cresswell is a research associate at the Centre for Population Health Sciences at the University of Edinburgh.

Kathrin Cresswell is a psychologist who works as a research associate at the Centre for Population Health Sciences at the University of Edinburgh, in Scotland. With an interest in patient safety and a background in qualitative health services, she has been involved in several studies investigating technology-based approaches to improving safety. Currently, Cresswell is project coordinator in a study evaluating the introduction of electronic health records in English hospitals. She earned a master’s degree in health psychology from Queen Margaret University, in Edinburgh.

Shobha Phansalkar is a senior medical informatician at Partners Healthcare.

Shobha Phansalkar is a senior medical informatician—one who collects and studies data to help clinicians decide on a course of treatment—in the clinical informatics research and development group of Partners Healthcare, in Boston. She is also an instructor in the Division of General Medicine and Primary Care at Brigham and Women’s Hospital and Harvard Medical School. Phansalkar’s research focuses mainly on the evaluation of medication decision support in clinical information systems. She received a doctorate in biomedical informatics at the University of Utah and is a registered pharmacist educated at the University of Pune, in India.

David W. Bates is chief of general internal medicine and primary care at Brigham and Women’s Hospital.

David Bates is a widely known expert in the use of information technology to improve clinical decision making, patient safety, health care quality, cost-effectiveness, and outcomes assessments. A practicing internist and member of the Institute of Medicine, Bates is chief of the Division of General Internal Medicine and Primary Care at Brigham and Women’s Hospital, a professor of medicine at Harvard Medical School, and a professor of health policy and management at the Harvard School of Public Health, where he codirects the program in clinical effectiveness. Bates is the author of more than 400 peer-reviewed papers. He earned his medical degree from the Johns Hopkins University.

Aziz Sheikh is a professor of primary care research and development at the Centre for Population Health Sciences.

Aziz Sheikh is a professor of primary care research and development and director of research at the University of Edinburgh’s Centre for Population Health Sciences. He is a methodologist whose research focuses on the use of electronic systems in health care, the cost-effectiveness of electronic prescribing systems, and secondary uses of data derived from electronic health records. He received a doctorate of medicine from Imperial College London.
