MEDINFO 2015: eHealth-enabled Health, I.N. Sarkar et al. (Eds.), © 2015 IMIA and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-564-7-980
Lessons Learnt from Evaluation of Computer Interpretable Clinical Guidelines Tools and Methods: Literature Review
Soudabeh Khodambashi, Øystein Nytrø
Department of Computer and Information Science, Norwegian University of Science and Technology, Norway
Abstract
Representation of clinical guidelines in a computer interpretable format is an active area of research. Various methods and tools have been proposed and evaluated against different evaluation criteria. The evaluation results reported in the literature, and the lessons learnt from them, are a valuable resource for redesigning and improving the tools. This research therefore investigates the lessons learnt from these evaluation studies. A broad literature search, combined with a purposeful snowball method, was performed to identify papers that report any type of evaluation or comparison. We reviewed and analysed the lessons learnt from the evaluation results and classified them into 17 themes that reflect the concerns raised in the suggestions. The results indicate that the lessons learnt focus mostly on tool functionality, integration, sharing, and maintenance. We provide suggestions for the areas that have received less attention.
Keywords:
Computer interpretable clinical guidelines; Evaluation; Lessons learnt.
Introduction
Clinical guidelines are mostly written in free-text format, which makes it difficult to search for and retrieve recommendations quickly during treatment. Several improvements with respect to computer interpretable clinical guidelines (CIGs) have been reported [1]. Some of the tools and methods have been evaluated independently against sets of criteria in order to assess the extent to which a tool satisfies certain aspects [2]. Because each of these evaluations and their results is valuable, and much can be learned by comparing the findings, the main goal of this research was to review the results of the evaluation studies and their lessons learnt regarding both technical and sociotechnical aspects in the domain of CIGs.
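To make concrete what "computer interpretable" means in this context, the sketch below expresses a single recommendation as a structured rule that software can evaluate against patient data. It is purely illustrative: the class, field names, and threshold are our own assumptions and do not correspond to any specific CIG formalism discussed in the reviewed papers.

# Purely illustrative: one guideline recommendation expressed as a structured rule
# instead of free text. All names and values below are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    condition: Callable[[dict], bool]   # eligibility criterion over a patient record
    action: str                         # recommended clinical action
    evidence_grade: str                 # strength of the underlying evidence

# Free text: "Consider antihypertensive therapy if systolic blood pressure is 140 or above."
rec = Recommendation(
    condition=lambda patient: patient.get("systolic_bp", 0) >= 140,
    action="consider antihypertensive therapy",
    evidence_grade="B",
)

patient = {"systolic_bp": 152}
if rec.condition(patient):
    print(rec.action)   # the recommendation can now be retrieved and checked automatically

Once recommendations are in such a structured form, they can be searched, checked for consistency, and linked to patient data, which is what the tools and methods reviewed here aim to support in much richer ways.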
Results
A total of 27 papers were identified as relevant for full-text review. We grouped similar results into 17 thematic categories; similarity was judged on the basis of the suggestions for improvement in a specific domain. The identified themes are: user support in the modelling and encoding process, standard terminology and data model, consistency checking, knowledge specification step, guideline content and dimensions, visual support and editor features, variation, ontology and semantic tagging, macros, representation at different levels of abstraction, representation primitives, automatic knowledge extraction, concurrent guideline development and formalization, configurable component-based modelling constructs, required skill levels, tool functionality, and didactic and maintenance support.
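As an illustration of the grouping step only, the following sketch assigns a lesson to a theme when its text mentions keywords associated with that theme. The keyword lists and the matching rule are invented for illustration and are not the coding scheme actually used in this review.

# Hypothetical illustration of thematic grouping; keyword lists are invented.
THEME_KEYWORDS = {
    "standard terminology and data model": ["terminology", "data model", "vocabulary"],
    "visual support and editor features": ["visual", "editor", "diagram"],
    "tool functionality": ["functionality", "feature"],
}

def assign_themes(lesson_text: str) -> list:
    """Return the themes whose keywords appear in the lesson text."""
    text = lesson_text.lower()
    return [theme for theme, words in THEME_KEYWORDS.items()
            if any(word in text for word in words)]

print(assign_themes("The editor should offer better visual feedback during encoding"))
# -> ['visual support and editor features']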
Discussion
The results indicate that the lessons learnt focus mostly on the functionalities a tool needs to support, as well as on integration, sharing, and maintenance considerations (i.e., standard terminology and data models). In addition, factors that reduce variability and improve consistency of the encoded guidelines are another area of concern. We found fewer suggestions related to visualization support, especially regarding adaptation to the encoder's skill level. Based on the results, visualization support accompanied by didactic material adapted to the user's skill level can improve usability and reduce variability in encoding clinical guidelines.
Conclusion
The results of this paper are valuable for researchers and designers of tools in the domain of CIGs. To the best of our knowledge, this is the only broad literature review of evaluation results and lessons learnt in the domain of CIGs.
Methods
The research method was a literature review in the domain of CIGs. We searched the Journal of Biomedical Informatics, Journal of the American Medical Informatics Association, Methods of Information in Medicine, International Journal of Medical Informatics, Artificial Intelligence in Medicine, and the Digital Bibliographic Library Browser to find relevant papers. The search keywords were "computer interpretable guidelines", "clinical guideline formal representation", "electronic clinical guideline", and "computerized clinical guideline". In addition, a purposeful "snowball" sampling method enabled us to find further relevant papers cited in the full text of the reviewed papers.
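A minimal sketch of the snowball step described above is given below. It assumes a hypothetical helper get_references(paper) that returns the papers cited in a full text and a relevance check is_relevant(paper); both names, as well as the round limit, are our own assumptions and only outline the general procedure.

# Minimal sketch of purposeful snowball sampling over citations (illustrative only).
def snowball(seed_papers, is_relevant, get_references, max_rounds=3):
    """Expand an initial set of relevant papers by following their citations."""
    included = set(seed_papers)
    frontier = list(seed_papers)
    for _ in range(max_rounds):
        cited = {ref for paper in frontier for ref in get_references(paper)}
        new_hits = {p for p in cited if p not in included and is_relevant(p)}
        if not new_hits:
            break
        included |= new_hits
        frontier = list(new_hits)   # only newly added papers are followed further
    return included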
References
[1] Ciccarese P, Caffi E, Quaglini S, Stefanelli M. Architectures and tools for innovative Health Information Systems: The Guide Project. International Journal of Medical Informatics. 2005;74(7):553-62.
[2] Peleg M, Tu S, Bury J, Ciccarese P, Fox J, Greenes RA, Hall R, Johnson PD, Jones N, Kumar A. Comparing computer-interpretable guideline models: a case-study approach. Journal of the American Medical Informatics Association. 2003;10(1):52-68.
Address for correspondence
[email protected]