Medical Informatics in a United and Healthy Europe
K.-P. Adlassnig et al. (Eds.)
IOS Press, 2009
© 2009 European Federation for Medical Informatics. All rights reserved.
doi:10.3233/978-1-60750-044-5-66


Linking Informaticians and End Users – Using the STARE-HI Evaluation Reporting Framework as a Unifying Design Approach

Michael RIGBY a,1, Jan TALMON b, Jytte BRENDER c, Elske AMMENWERTH d, Nicolette F. DE KEIZER e, Pirkko NYKÄNEN f

a School of Public Policy and Professional Practice, Keele University, UK
b School for Public Health and Primary Care: Caphri, Maastricht University, The Netherlands
c Department of Health Science and Technology, Aalborg University, Denmark
d UMIT – University for Health Sciences, Medical Informatics and Technology, Hall in Tyrol, Austria
e Department of Medical Informatics, Academic Medical Center, Amsterdam, The Netherlands
f Department of Computer Sciences, University of Tampere, Finland

Abstract. There is understandable concern about low uptake and sub-optimal use of health informatics systems, which is often caused by a lack of shared objectives and values among the different stakeholders. Moreover, all parties work to different ethical codes. For future success, all need to work to the same values and objectives, measured by agreed outcomes data, creating robust evidence. The Statement on Reporting of Evaluation Studies in Health Informatics (STARE-HI), recently endorsed by IMIA, EFMI and the EQUATOR Network, may therefore provide a generic objectives framework to help achieve common goals.

Keywords. health informatics, ethics, objective design, outcomes, evaluation

1. Introduction

Health informaticians and system end-users – mainly healthcare professionals – in principle share the same core objective: to improve healthcare delivery and thus the health of citizens. Policy advocates of e-health applications, and organizational executives commissioning systems, must not negate these principles in the drive for quality and efficiency. However, hitherto there has been no recognized framework to enable all parties to establish and work to a common understanding. The recently endorsed Statement on Reporting of Evaluation Studies in Health Informatics (STARE-HI) [1] arguably meets this need when used prospectively as a design objectives method. This paper suggests that its components yield values which may unite all interests.

1 Corresponding Author: Professor Michael Rigby, School of Public Policy and Professional Practice, Chancellor’s Building, Keele University, Keele, Staffordshire, ST5 5BG, UK; E-mail: [email protected].


2. Health and Health Informatics Ethics and Accountability

In a civilized but complex society, such as Europe, individuals and organizations should be strongly guided by ethical principles. In health systems there is a considerable difference between the protocols applying to clinical professions, informatics professions, and organizations, yet informatics systems have to work seamlessly across them all if they are to be well used and effective.

2.1. Ethical Frameworks

Health care professionals (HCPs) are strictly bound by professional codes, generally based on the principles of the Hippocratic Oath [2]. Central to these are the principles of confidentiality, not working outside personal competence, and not causing harm. Much more recently, health informaticians have created a Code of Ethics for their work [3]. Welcome and important though this is, the core thrust of its thirteen underpinning principles concerns the data subject's rights and informaticians' moral duties, and none refers to the effect on end-user HCPs whose practice is directly affected. It is only in the section on discharging these principles that informaticians' duties towards HCPs, towards institutions, and towards society are mentioned – after the duty to data subjects. Thus not only is the duty to HCPs given limited weight, but this Code of Ethics is far less known or promoted than the Hippocratic Oath, and very little enforced.

Health organizations seldom have an ethical code based on either organizational or business ethics. Rather, their functioning is controlled primarily by national regulation (often including accreditation), legislation, and legal liability on the one hand, and by financial and competitive pressures on the other. Yet it is normally the organization which commissions the informatics system, and it is health professionals who have to use it and thus have their practice (and personal liability) affected by it.

2.2. Accountability and Success

What differentiates the health professional from health informaticians or health policy makers and managers is that the HCP is autonomously accountable for every patient-related decision or action. At stake is the possibility of losing their license to practice, and thus their livelihood. This is an accountability not shared at the individual operational decision level by either the informatician or the policy maker. A change initiated by a computer system in the way information is found or displayed may meet the system's definitions of improvement or success, but it may simultaneously change fundamentally the way an HCP practices, and may negate rehearsed (often subconscious) processes accrued over many years of training and practice. Thus if an HCP feels that a software system reduces their competence or confidence, or takes decisions out of their hands in a way which they do not understand, they may believe that the care they give to patients is adversely affected (separately from any purported system benefits), and that patient safety is thus at stake. By contrast, the safety and effectiveness of drugs or other clinical products have to be publicly proven before they can be introduced into general use. With informatics systems, however, the HCP gets no proof of prior testing, no scientific outcomes evidence, and thus no protection. Small wonder, then, that HCPs may be resistant to software systems they do not see as fully consistent with current practice, or which have not been proven unequivocally to be effective and safe.
Understandably, they can see it as their duty to minimize use of a system, based on the precautionary principle of not adopting change without clear evidence of lack of harm – the non nocere Hippocratic aspect. Well-known consequences of this are the low uptake of computerized systems, poor understanding or compliance, and indeed active apathy or even opposition, as documented in [4, 5]. This is not dissimilar to wider commerce, with 68% of large companies in one report feeling that major computer system success was 'improbable' [6], but with health systems the stakes – patient safety and personal professional liability and livelihood – are much higher. Indeed, there is clear published evidence that unsound health informatics can be damaging or even fatal to patients [7, 8].

3. Shared Objectives, Values, and Measures as the Key to Success

How can this unhelpful situation be avoided? The answer lies in sharing views of intention and success. Too often the informatician is designing or implementing a system to move data effectively; the policy maker is seeking 'modernization'; the organization is seeking efficiency; while the HCP is seeking to deliver good care safely. For health informatics to work, all must have common value sets and metrics. But how is this to be achieved, given the different driving forces and ethical positions?

Evaluation is seen as important by those who believe in evidence-based decisions and optimization of investment, but as an expensive irrelevance by others [9]. However, given the spectre of poor uptake, failure, and demonstrable adverse outcomes, the need to consider evaluation as an ethical essential has developed [8, 10]. IMIA and EFMI have now both endorsed a framework for reporting health informatics evaluations – the Statement on Reporting of Evaluation Studies in Health Informatics (STARE-HI) [1], and it is now listed by EQUATOR [11], offering health informatics the same scientific standing as initiatives such as the CONSORT standards for RCT reporting.

This paper suggests the potential of this reporting framework as a means of establishing, at the outset of a health informatics project, a set of open, clear, and shared objectives, but equally importantly shared values, measures, and data items which can assess objectively the outcome and success of a system. This should lead to better shared understanding from the outset through to completion, and to less mutual distrust.

4. The STARE-HI Principles

The STARE-HI principles are shown in Table 1. They have been refined iteratively over three years through international conferences and an open-access web site [1]. Though it might seem initially that they relate primarily to publication, the principles can (and indeed should) be seen as a set of shared values between developer and end-user. Thus they can underpin system specification and function, enabling the system designer to think ahead to end-user and health system interests and constraints.

4.1. Title, Abstract, and Keywords

Knowing the purpose of a development is essential for success and for safety. Even identifying keywords can force analysis and discussion of priorities and values.
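Used prospectively in this way, the STARE-HI headings could in principle be captured as a simple shared design checklist that all stakeholders review and sign off at project initiation. The sketch below is purely illustrative and is not part of STARE-HI itself: the section names follow the published statement [1], while the data structure, field names, and sign-off rule are hypothetical assumptions.

```python
# Illustrative sketch (not from the paper): STARE-HI reporting headings reused
# as a prospective design checklist. The completion rule is an assumption.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    section: str                                        # STARE-HI heading
    agreed_content: str = ""                            # jointly agreed text
    signed_off_by: list = field(default_factory=list)   # e.g. ["HCP lead", "informatician"]

    def is_complete(self) -> bool:
        # Complete once content exists and at least two parties have agreed to it.
        return bool(self.agreed_content) and len(self.signed_off_by) >= 2

STARE_HI_SECTIONS = [
    "Title, abstract and keywords",
    "Scientific background, rationale and objectives",
    "Context",
    "Methods (design, participants, outcome measures, data acquisition, analysis)",
    "Results",
    "Discussion and conclusion",
]

checklist = [ChecklistItem(section=s) for s in STARE_HI_SECTIONS]

# Example: the Context item has been agreed between the clinical and informatics leads.
checklist[2].agreed_content = "Hospital order-entry setting; public provider; data passed to pharmacy system"
checklist[2].signed_off_by = ["HCP lead", "informatician"]

for item in checklist:
    status = "agreed" if item.is_complete() else "outstanding"
    print(f"{item.section}: {status}")
```

Such a checklist would make visible, before any code is written, which shared objectives have actually been agreed and which remain outstanding.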


Table 1. The STARE-HI principles

4.2. Scientific Background, Rationale, and Objectives

All parties should expect to start with a shared collation of prior scientific evidence and formal requirements. The rationale for the work must be clear, as must its objectives.

4.3. Context

Health software and related systems must be sensitive to health system context. The way data are handled and processed will vary according to the organizational setting, with care type and the difference between public and fee-paying services being the greatest factors. There may be different legal or practice requirements. The domain and the clinical or other tasks supported by the system in use, such as order entry, should be stated for clarity. Equally important will be to appreciate whether the system is closed or transmits data to other systems, the degree to which it creates new or value-added data, and any initiation of external action.

4.4. Methods

The essential concepts of an evaluation study can be argued to be, with only minor modification, very relevant for instigating informed system design; hence STARE-HI's Section 6 can be used to underpin an informed user-centered design framework. A system needs a duality of theoretical backgrounds – distillation of best evidence for software and peripherals, and evidence as to optimal health systems and practice. A health informatics system will have many participants, direct and indirect, and they will have little choice but to use the system. Such participants will include HCPs, support and informatics staff, linked organisations receiving data or documentation – and above all patients, whose care should be enhanced by the system. Designers and informaticians should have a full understanding of users' assessed needs from the outset. A project should have a clearly understood flow and lifespan, including initiation, implementation, review, and confirmation for general use. Mutually agreed outcome measures, defined from the outset, should facilitate success – either direct health system measures, such as a specified reduction in medication errors, or indirect measures such as user satisfaction or uptake levels. Planned data acquisition for the outcome measures should be agreed in advance. Finally, the scientific outcome analysis should be an essential part of sign-off.
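To make the idea of mutually agreed outcome measures and planned data acquisition concrete, the following minimal sketch shows how targets defined at the outset could later be checked against collected data at sign-off. The measure names, directions, targets, and observed values are invented for illustration and are not drawn from the paper.

```python
# Minimal sketch of prospectively agreed outcome measures being checked at
# sign-off. All names, targets and observed figures are invented examples.
from dataclasses import dataclass

@dataclass
class OutcomeMeasure:
    name: str
    baseline: float
    target: float
    higher_is_better: bool

    def met(self, observed: float) -> bool:
        # A measure is met when the observed value reaches the agreed target
        # in the agreed direction.
        return observed >= self.target if self.higher_is_better else observed <= self.target

# Agreed before implementation by HCPs, informaticians and the organization.
measures = [
    OutcomeMeasure("medication errors per 1000 orders", baseline=12.0, target=8.0, higher_is_better=False),
    OutcomeMeasure("user satisfaction (1-5 survey score)", baseline=2.9, target=3.5, higher_is_better=True),
    OutcomeMeasure("share of orders entered electronically", baseline=0.40, target=0.80, higher_is_better=True),
]

# Data acquisition plan: values collected during the agreed review period.
observed = {
    "medication errors per 1000 orders": 7.1,
    "user satisfaction (1-5 survey score)": 3.2,
    "share of orders entered electronically": 0.85,
}

for m in measures:
    verdict = "met" if m.met(observed[m.name]) else "not met"
    print(f"{m.name}: baseline {m.baseline}, target {m.target}, observed {observed[m.name]} -> {verdict}")
```

The point of such a structure is not the code itself but that the measures, their direction of improvement, and the data to be collected are all fixed and shared before implementation, rather than argued over afterwards.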


4.5. System Results, Discussions and Conclusions

An informatics view of a system is often quite circumscribed, but if software is to be effective its impact must be studied beyond the pilot site, to yield mutually convincing evidence including coverage and use [12]. Unexpected events must be noted and managed; outcomes assessed against objectives; and unexpected observations, both positive and negative, openly shared. All parties should jointly assess whether a project met its objectives; its strengths and weaknesses; its results in relation to other projects; its significance and generalisability; and any unanswered and new questions.

5. Discussion and Conclusion

Scientific evidence should be recognised as having real weight and value in facilitating health informatics system use, but it is not free and is thus often avoided [9]. By assessing what needs to be measured to judge success, the STARE-HI approach seeks to create a holistic viewpoint which can be built into initial design processes, yielding credible evidence at low net cost while seeking the sharing of values by all stakeholders.

Evidence-based, ethical, and reflective practice is incumbent upon all in healthcare; informaticians should be no exception. This paper argues for an objective approach to design and implementation, considering mutual outcome measures as an integral part of those processes. This should both strengthen end-user trust and yield evidence to encourage uptake [12]. To this end the STARE-HI framework, openly developed and now internationally recognised [1, 11], provides a tool which can well meet this need.

References

[1] Talmon, J., Ammenwerth, E., Brender, J. et al. (2009) STARE-HI – Statement on Reporting of Evaluation Studies in Health Informatics. International Journal of Medical Informatics, 1–9.
[2] Edelstein, L. (2000) The Hippocratic Oath: Text, translation, and interpretation. In Veatch, R.M. (Ed.) Cross-Cultural Perspectives in Medical Ethics, 2nd edition, Jones & Bartlett, Boston.
[3] International Medical Informatics Association. The IMIA Code of Ethics for Health Information Professionals. www.imia.org/pubdocs/Ethics_Eng.pdf
[4] Anderson, J.G., Aydin, C.E. (1994) Overview – Theoretical perspectives and methodologies for the evaluation of health care information systems. In Anderson, J.G., Aydin, C.E., Jay, S.J. (Eds.) Evaluating Health Care Information Systems: Methods and Applications, Sage, Thousand Oaks, 5–29.
[5] Poon, E.G. et al. (2004) Overcoming barriers to adopting and implementing computerized physician order entry systems in U.S. hospitals. Health Affairs 23(4):184–190.
[6] Ellis, K. (2008) Business Analysis Benchmark: The Impact of Business Requirements on the Success of Technology Projects. IAG Consulting, New Castle.
[7] Bad Health Informatics Can Kill. http://iig.umit.at/efmi/
[8] Ammenwerth, E., Shaw, N.T. (2005) Bad Health Informatics Can Kill – Is Evaluation the Answer? Methods of Information in Medicine 44(1):1–3.
[9] Rigby, M. (2001) Evaluation: 16 Powerful Reasons Why Not to Do It – And 6 Over-Riding Imperatives. In Patel, V. et al. (Eds.) Proceedings of MEDINFO 2001, IOS Press, Amsterdam, 1198–1202.
[10] Ammenwerth, E. et al. (2004) Visions and strategies to improve evaluation of health information systems: Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. International Journal of Medical Informatics 73(6):479–491.
[11] EQUATOR Network. http://www.equator-network.org/resource-centre/library-of-health-research-reporting/reportingguidelines/other-reporting-guidelines/
[12] Rigby, M. (2006) Essential Prerequisites to the Safe and Effective Roll-out of E-working in Healthcare. International Journal of Medical Informatics 75:138–147.
