RESEARCH IN NURSING PRACTICE
Nurses’ perception of the use of computerised information systems in practice: questionnaire development Cristina Oroviogoicoechea, Roger Watson, Elena Beortegui and Silvia Remirez
Aim. To develop and validate a questionnaire to explore the perceptions of nurses about the implementation of a computerised information system in clinical practice.
Background. A growing interest in understanding nurses' experience of developing and implementing clinically relevant information technology (IT) systems, together with the lack of measurement tools in this area, justifies further research into the development of instruments to provide an insight into nurses' experience.
Design. Survey and questionnaire development.
Method. An initial draft of the questionnaire was developed based on the literature and expert opinion. The questionnaire was piloted by ten nurses to check face validity, reliability and test-retest reliability. A revised version of the questionnaire was distributed to nurses working in the in-patient area of a university hospital in Spain (n = 227). Principal components analysis with oblique rotation was carried out to test theoretically developed underlying dimensions and to test construct validity. Cronbach's alpha coefficient was used to determine internal consistency.
Results. Cronbach's alpha for all the items included in the different scales was 0.88 in the pilot questionnaire and test-retest reliability was adequate. Principal components analysis of items related to mechanisms produced a three-component structure ('IT support', 'usability' and 'information characteristics'). The three factors explained 48.6% of the total variance and Cronbach's alpha ranged from 0.66–0.79. Principal components analysis of items related to outcomes produced a three-factor solution ('impact on patient care', 'impact on communication' and 'image profile'). The factors explained 65.9% of the total variance and Cronbach's alpha ranged from 0.64–0.85.
Conclusion. The study provides a detailed description and justification of an instrument development process. The instrument is valid and reliable for the setting where it has been used.
Relevance to clinical practice. The instrument could provide insight into nurses' experience of IT implementation that will guide further development of systems to enhance clinical practice.

Key words: evaluation, information technology, nurses, nursing, questionnaire, Spain

Accepted for publication: 1 April 2009
Authors: Cristina Oroviogoicoechea, MSc, RGN, General Nurse Manager, Clinica Universitaria, Universidad de Navarra, Pamplona, Spain; Roger Watson, PhD, RN, FAAN, Editor-in-Chief, Journal of Clinical Nursing, Professor of Nursing, Centre for Health & Social Care Studies and Service Development, School of Nursing and Midwifery, University of Sheffield, Sheffield, UK; Elena Beortegui, RGN, Informatics Nurse, Clinica Universitaria, Universidad de Navarra, Pamplona, Spain; Silvia Remirez, RGN, Otorhinolaryngology Nurse, Clinica Universitaria, Universidad de Navarra, Pamplona, Spain

Correspondence: Cristina Oroviogoicoechea, MSc, RGN, General Nurse Manager, Clinica Universitaria, Universidad de Navarra, 31080 Pamplona, Spain. Telephone: +34 948 255 400. E-mail: [email protected]

© 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd, Journal of Clinical Nursing, 19, 240–248. doi: 10.1111/j.1365-2702.2009.03003.x

Introduction

Communication and information management are key elements in healthcare organisations. Quality of care is directly related to the quality of information available to healthcare professionals, and charting and managing clinical information is an essential part of their daily work (Currel & Urquhart 2003). In this context, information technology (IT)
offers tremendous opportunities to enhance clinical practice and the appropriateness of care and to increase efficiency and effectiveness in healthcare organisations (Ammenwerth et al. 2004). Clinically oriented applications are increasingly being developed and introduced to support the daily work of healthcare professionals (Giuse & Kuhn 2003). As a recent review of the literature shows, evaluation of IT system implementation in healthcare is growing (Oroviogoicoechea et al. 2008). Several authors have reviewed the quality and characteristics of evaluation studies (Friedman & Abbas 2003, Van der Meijden et al. 2003) and conclude that evaluation research on IT systems currently faces important challenges. There is no explicit definition of the success of IT systems and the definition fluctuates over time, not only because of changes in the healthcare environment and IT approach, but also because the focus of evaluation changes along the process of IT implementation (Van der Meijden et al. 2003, Nahm et al. 2007).
Background

Success is considered to be a multidimensional concept which encompasses system, individual and organisational factors. System and information quality are the factors most widely analysed in IT evaluation research, and they affect usage and user satisfaction both individually and jointly (Van der Meijden et al. 2003). Studies in nursing focus on electronic record completeness, nurses' satisfaction with information tools and the correlation of nurses' characteristics (such as expertise, level of computer use and age) with satisfaction. Questionnaires are the most widely used method, together with qualitative approaches including observation, interviews and focus groups (Nahm & Poston 2000, Ammenwerth et al. 2001).

Users' acceptance is an important factor in IT effectiveness and, therefore, the need for user involvement in all phases of implementation, including design and evaluation, has been highlighted (Helleso & Ruland 2001, Rodrigues 2001, Van Ginneken 2002, Darbyshire 2004, Currie 2005). According to Nygren et al. (1998) and Wyatt and Wright (1998), information design is about managing the relationship between people and information, making information accessible to and usable by people. This highlights the need to understand how and why clinicians search records and the factors that make it easier. IT effectiveness 'should be judged by its ability to present reliable, relevant data to clinicians in a usable form, when and where needed' (Powsner et al. 1998, p. 1619). Nurses' views on IT effectiveness, despite their relevance, have not been widely investigated (Otieno et al. 2007).
Despite the amount of research carried out to evaluate IT systems in healthcare organisations, the field can be considered to be at an early stage. There is a lack of quality research evidence and of measurement tools (Lee 2004, Otieno et al. 2007). Attempts to conduct systematic reviews make the lack of conclusive research obvious (Moloney & Maggs 1999, Ammenwerth et al. 2003). Friedman and Abbas (2003) found, in a literature review of measurement tools, that only 27 of 414 citations initially retrieved met the inclusion criteria for reporting validity, reliability and replication in other studies, and the criteria were not found to be met completely in any study. Lee (2004) explains how, despite the use of questionnaires in IT research, the focus has been on reporting findings and not on establishing the validity and reliability of the instruments used. Measurement studies, where psychometric properties are analysed and reported, are important to determine the degree of confidence in the values and conclusions obtained with an instrument (Cork et al. 1998, DeVon et al. 2007, Otieno et al. 2007).

This study is part of a project to evaluate nurses' perception of the impact of the use of an IT system in clinical practice. A realistic evaluation design was used to make sense of the complex relationships among the variables included in the evaluation of IT systems. Realistic evaluation emerged in the context of theory-driven perspectives on evaluation research. Context, mechanisms and outcomes are essential parts of evaluation research, and realistic evaluation examines the relationships underlying them: what works, for whom, in what circumstances (Pawson & Tilley 1997). Theory is constructed as different configurations of context-mechanism-outcome that explain the phenomena under study.

The study was carried out in a university hospital in Spain as it provides a setting where a computerised information system for patient records has been in place since 1999.
Although the introduction of the different applications is still in progress, the nursing documentation has been fully computerised since December 2001. The study was discussed with the hospital board and permission to carry out the study was granted. The study was also approved by the hospital ethics committee.
The present study

The aim of this study was to develop and validate a questionnaire to provide an insight into nurses' experience of using a computerised hospital information system in clinical practice in a teaching hospital in Spain. The main objectives of the questionnaire were to:
• Provide a broad perspective of nurses' experiences using the information system in their daily work.
• Specify the contexts, mechanisms and outcomes involved in nurses' use of information systems (Box 1).

Box 1
• Outcomes: Does electronic nursing documentation meet its goals?
• Context: What features within the hospital, the ward units and the users facilitate or limit its effectiveness?
• Mechanism: How does the IT system work to support clinical practice and guarantee quality patient care?
Methods

The questionnaire was developed and validated in three phases:
• Questionnaire design: an initial draft of the questionnaire was developed from the literature and expert review of content and design.
• Pilot test: the initial draft was distributed to a small sample of nurses to test reliability and validity. A revised version of the questionnaire was developed.
• Factor analysis: the new version of the questionnaire was distributed to a larger sample of nurses working in the in-patient area of the hospital and the data were subjected to factor analysis.
Questionnaire design

Principal items were drawn from an overview of the existing literature on determinants of success of inpatient clinical information systems (Oroviogoicoechea et al. 2008). These elements were used to draft a preliminary list of items that were important to measure, distributed within the main areas of the questionnaire:
• Demographic data of nurses and the ward unit where they work.
• Development of the programme and support for users.
• Characteristics and system quality of the running of the programme.
• Adaptation of the programme to the daily work of the unit.
• Quality of the documentation associated with the programme.
• Impact of the use of the programme on nurses' work and on the organisation.
This preliminary list of items was distributed to and discussed with two of the hospital's experts: the information technology nurse and one of the support nurses for information system implementation, who checked the adequacy of the content to represent the objectives of the study. Literature review and discussion with experts are considered appropriate exploratory work when deciding on the content of a questionnaire designed specifically for the research purposes (Murphy-Black 2006). Areas and items from the reviewed list were organised within the context, mechanisms and outcomes classification (Pawson & Tilley 1997) to clarify the theoretical framework and guide further analysis of results.

Preparation and design of the questionnaire is the most important stage: 'The data collected can only be as good as the questions asked' (Murphy-Black 2006, p. 367). Suggestions from the literature were taken into account regarding the length of the survey, the order of the questions and the appearance of the questionnaire (Jackson & Furnham 2000). Both open and closed-ended questions were considered for inclusion. Their advantages and disadvantages were weighed, taking into account the ability of open questions to 'provide forthright and valuable insights into people's perceptions of the issues involved and to get a feel for the words and phrases that they use' (Jackson & Furnham 2000, p. 116). Closed questions comprise all types of responses, from 'yes/no' answers to rating scales. Some of the closed questions have an associated open question giving the possibility of elaborating on the response. The distribution of the questions in the content areas, as well as the wording and comprehensibility of the questions, was checked with the IT experts involved in the item development phase. Changes were introduced and a final draft of the questionnaire was produced.
It comprised a 43-item questionnaire divided into the principal content areas, combining open and closed questions to allow an objective evaluation of satisfaction and attitudes together with a description of the personal experience of nurses. Closed questions about perception of the use of information technology use a five-point Likert rating scale, from 'strongly agree' to 'strongly disagree'; a middle point was included to allow nurses to express a neutral attitude (Jackson & Furnham 2000). Questions were grouped in the six dimensions described above.
Pilot study

A pilot study of the first draft of the questionnaire was carried out to ensure, before distribution, that it was clear and understandable and to check reliability and test-retest
reliability. The pilot involved both qualitative and quantitative approaches (Jackson & Furnham 2000). Nurses from the different units in the hospital that would be involved in the research, one from each ward, were invited to participate in the pilot. Participants took part in two sessions; during the first they were asked to complete the final draft of the questionnaire and were afterwards asked about obvious problems in completing it, considering whether it was clear and understandable as well as the completeness and relevance of the content (Polit & Hungler 1993). The second session took place a week later and participants were asked to complete only the closed questions from the questionnaire for test-retest reliability. It is acknowledged that there is no ideal period for test-retest: too short a period leads to recall; too long a period leads to change. Nevertheless, a standard recommended period for test-retest is one week (Lubans et al. 2008).

Ten nurses agreed to participate and each was informed about the objective and their role during the pilot. Time to answer the questionnaire ranged from 50–60 minutes. The general impression was that the questionnaire was long but not boring, and a general comment from all the participants was that 'You have to think to answer open questions'. Information about the project and the questionnaire, and instructions on how to complete it, were adequate. The questions included were considered relevant to the topic and participants would not exclude any items or add new ones. There were no ambiguous questions or questions they felt uneasy about answering. In relation to the questionnaire design, the layout and the ordering of sections and questions were adequate. Participants made some reasonable suggestions about specific questions and changes were introduced to accommodate them.
Validity and reliability

Validity and reliability issues were addressed during the process of questionnaire design and the pilot study. Validity addresses 'whether the questionnaire measures what is intended to measure' (Murphy-Black 2006, p. 375). Content validity was assessed to guarantee that the items in the questionnaire cover the construct under study (Murphy-Black 2006, DeVon et al. 2007). It was addressed during the questionnaire design phase by contrasting the items generated with the literature on information technology in nursing and through revision of the items by two experts from the hospital.

'Reliability refers to the extent to which a questionnaire would produce the same results if used repeatedly with the same group under the same conditions' (Murphy-Black 2006, p. 376). Bryman and Cramer (2005) differentiate between internal and external reliability. External reliability looks at the 'degree of consistency of a measure over time' (p. 76) and internal reliability looks at the internal consistency of items within a scale. Test-retest reliability and Cronbach's alpha were the methods used to analyse external and internal reliability, respectively. A Cronbach's alpha of 0.7 or above can be considered adequate (Watson & Thompson 2006, DeVon et al. 2007). As the questionnaire was administered twice during the pilot for test-retest purposes, Cronbach's alpha was calculated for both administrations: 0.88 for the first questionnaire and 0.93 for the second. There is, therefore, intercorrelation between the items in the questionnaire, which can be considered to measure perception of information technology.

Looking at the internal consistency of the different dimensions, some differences were observed (Table 1). Although results from the second questionnaire in the pilot were better, only the results from three scales can be considered adequate (Development of the programme and support, Characteristics of the running of the programme, Outcomes and impact). Despite the lower results in the other three scales, which can still be considered acceptable values of Cronbach's alpha, no modifications were introduced to the items included in the different scales of the final questionnaire. Taking into account the high internal consistency of the total set of items measuring nurses' perception, the initial constructs in the questionnaire, drawn from a theoretical approach, were tested after distribution of the questionnaire using factor analysis to enhance internal consistency.

Test-retest reliability was established using the intraclass correlation coefficient (ICC) to assess the strength of
Table 1 Internal consistency for scales within the questionnaire during the pilot test

Question                                                      n    Items   Cronbach's alpha*
Development of the programme and support                      10   7       0.69; 0.77
Characteristics of the running of the programme               10   6       0.70; 0.81
Adaptation of the programme to your daily work                10   6       0.70; 0.62
Characteristics of information of the programme in general    10   6       0.47; 0.58
Quality of nursing documentation                              10   6       0.68; 0.60
Outcomes and impact                                           10   11      0.83; 0.83

*The first value of Cronbach's alpha corresponds to the first administration of the questionnaire and the second to the second.
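Cronbach's alpha values such as those in Table 1 can be reproduced directly from raw item scores. A minimal sketch in Python, using hypothetical five-point Likert responses (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list of scores per item (all the same length, one score
    per respondent). Alpha rises as the items covary more strongly.
    """
    k = len(items)                      # number of items in the scale
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))


# Hypothetical responses from four nurses to a three-item subscale
scale = [
    [4, 5, 3, 2],   # item 1
    [5, 5, 2, 2],   # item 2
    [4, 4, 3, 1],   # item 3
]
alpha = cronbach_alpha(scale)
```

Perfectly consistent items (identical response patterns) give an alpha of 1; values of 0.7 or above are conventionally taken as adequate, as noted above.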
Table 2 Test-retest results

Question                                                      ICC     n    95% CI            F        df    p
Development of the programme and support                      0.725   10   0.219–0.924       6.286    9,9   0.006
Characteristics of the running of the programme               0.652   10   0.082–0.924       4.744    9,9   0.015
Adaptation of the programme to your daily work                0.718   10   0.205–0.922       6.100    9,9   0.006
Characteristics of information of the programme in general    0.469   10   −0.185 to 0.835   2.770    9,9   0.073
Quality of the information of the nursing record              0.893   10   0.630–0.972       17.733   9,9   0.000
Outcomes and impact                                           0.896   10   0.638–0.973       18.234   9,9   0.000

ICC, intraclass correlation coefficient; CI, confidence interval; p, statistical significance.
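ICC values like those in Table 2 derive from a two-way ANOVA decomposition of the subject-by-occasion score matrix. A minimal sketch, assuming the single-measure, absolute-agreement, two-way random effects form often labelled ICC(2,1), with made-up data (not the study's):

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, single measure, absolute agreement.

    scores: one row per subject, one column per occasion (e.g. test, retest).
    """
    n = len(scores)                     # subjects
    k = len(scores[0])                  # occasions
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # occasions
    sse = sum(
        (scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    )
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


# Hypothetical scale scores for five nurses at time 1 and time 2
ratings = [[4, 4], [5, 4], [2, 3], [3, 3], [5, 5]]
icc = icc_2_1(ratings)
```

Perfect agreement between the two occasions yields an ICC of 1; disagreement inflates the error variances and pulls the coefficient down.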
agreement between the scores at time 1 and time 2 (a week apart). The ICC is a better measure for calculating test-retest reliability than Pearson's product-moment correlation as 'this approach uses analysis of variance and allows the calculation of error variances from each source' (Yen & Lo 2002, p. 59). A two-way random effects model was selected as error variances could come from different sources and the variables can be considered random (Yen & Lo 2002). There is no consensus about the interpretation of ICC results; Kanste et al. (2007) indicate that, although values ≥0.80 have been suggested as good, a value >0.50 has been proposed as sufficient. Most of the ICC results are high and only the dimension of characteristics of information falls below this level (Table 2).

Extraction of factors with an Eigenvalue >1 has been widely used. Some authors refer to the risk of accepting results from this criterion unquestioningly (Jackson & Furnham 2000, Watson & Thompson 2006) and suggest a combination of methods that also includes the Scree test. In addition, a subjective evaluation of how meaningfully the items load on factors is needed; 'the point at which a factor analysis can be considered complete is when resulting factors are meaningful and some iteration between the mathematical techniques and common sense is required' (Watson & Thompson 2006, p. 332).
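Both the Eigenvalue >1 criterion and the Scree test operate on the eigenvalues of the items' correlation matrix. A brief illustration with NumPy, using a hypothetical correlation matrix (not the study's):

```python
import numpy as np

# Hypothetical correlation matrix for four questionnaire items:
# items 1 and 2 correlate strongly, the rest only weakly.
R = np.array([
    [1.0, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.2],
    [0.1, 0.1, 0.2, 1.0],
])

# Eigenvalues in descending order; each is the variance explained by a
# principal component, and together they sum to the number of items.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]

# Kaiser criterion: retain components with an eigenvalue > 1. The Scree
# test instead looks for the 'elbow' in a plot of these same values.
n_factors_kaiser = int(np.sum(eigenvalues > 1))
```

Because the two rules can disagree, combining them with a judgement of how meaningfully items load, as the text recommends, is the safer practice.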
Rotation is carried out for adequate identification and characterisation of factors and to improve the interpretation of results (Polit & Hungler 1993, Bryman & Cramer 2005). There are two kinds of rotation, orthogonal and oblique, depending on whether the factors are assumed to be unrelated or related, respectively. Bryman and Cramer (2005) suggest that, although results from orthogonal rotation contain no redundant information, they are more artificial. There is no general rule for choosing between them and it is a case of judgement by each individual researcher (Ferguson & Cox 1993). Deciding which to choose is a matter of preference, or of deciding on the simplest structure: one whereby item loading on putative factors is maximised and other loadings are minimised (Watson & Thompson 2006). In this study, the factors can be considered correlated with each other because all try to measure mechanisms and outcomes involved in the use of IT systems in clinical practice; therefore, oblique rotation was carried out to maximise the loading of items on the factors generated. Items loading on the different factors have to make sense to the researcher and factors have to be labelled to ensure they are not arbitrary (Watson & Thompson 2006). A decision to remove items implies that the whole factor analysis is carried out again without the removed items (Ferguson & Cox 1993).
Results

Factor analysis of mechanisms

Principal components factor analysis with oblique rotation was carried out using SPSS for Windows version 13.0 (SPSS Inc., Chicago, IL, USA). The initial Kaiser–Meyer–Olkin coefficient was 0.81 and the Bartlett test was statistically significant (χ² = 1335.6, df = 300, p < 0.001); therefore, carrying out factor analysis was justified. The sample size can be considered adequate as the item-to-subject ratio is 1:7.1. Extraction of factors was based on visual interpretation of a Scree plot together with interpretation of the first solution based on Eigenvalues >1.

The three latent factors identified include 17 of the initial 25 items. One item ('the number of computers is adequate') was removed because of low factor loading (0.20) and three items ('information is comprehensive', 'the programme does not have unexpected interruptions' and 'it is easy to find the information I need') because of cross-loading: they had similar communality scores in more than one factor and did not load on a unique factor. The rest of the items removed ('programme improves quality of work', 'time I use for documentation is acceptable', 'the programme is quick' and 'I have received adequate training for the use of the information system') did not fit, conceptually, the factor they loaded on.

The three factors explained 48.6% of the total variance. Factor 1 is described as 'Usability' and covers ease of use of the programme and integration of the programme in daily work; it includes six items, explained 28.4% of the total variance and has an Eigenvalue of 4.83. Factor 2 is described as 'IT support' and includes six items such as the relationship with IT personnel, the relevance of the changes introduced and whether nurses' problems with the programme are understood by IT personnel; it explained 11.6% of the total variance and has an Eigenvalue of 1.98. Factor 3 is described as 'Information characteristics' and includes both content and accessibility of information; it explained 8.52% of the total variance and has an Eigenvalue of 1.4. Table 4 illustrates the final solution with the Cronbach's alpha value for each factor.
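The Bartlett test reported above checks whether the items' correlation matrix differs significantly from an identity matrix (in which case factor analysis would be pointless). A sketch of the standard statistic with NumPy, using illustrative values rather than the study's data:

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity.

    R: p x p correlation matrix of the items; n: number of respondents.
    Returns the chi-square statistic and its degrees of freedom.
    """
    p = R.shape[0]
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi_square, df


# Illustrative three-item correlation matrix and a hypothetical sample size
R = np.array([
    [1.0, 0.5, 0.3],
    [0.5, 1.0, 0.4],
    [0.3, 0.4, 1.0],
])
chi2, df = bartlett_sphericity(R, n=178)
```

For an identity matrix the determinant is 1 and the statistic is 0 (non-significant); large, significant values, as in the results above, justify proceeding with factor analysis.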
Factor analysis of outcomes items

For outcomes, principal components factor analysis with oblique rotation was carried out. Carrying out factor analysis can be considered justified by a higher than 1:10 variable-to-subject ratio (1:16.2), a Kaiser–Meyer–Olkin coefficient of 0.833 and a statistically significant Bartlett test (χ² = 760.15, df = 55, p < 0.001). Extraction of factors based on Scree plot visual interpretation and interpretation of the first solution based on Eigenvalues >1 gave the same number of factors. The three latent factors identified include all the items and explained 65.9% of the total variance. Factor 1 is described as 'Impact on patient care'; it includes six items, explained 43.9% of the total variance and has an Eigenvalue of 4.83. Factor 2 is described as 'Impact on communication' and includes three items related to communication within the health team and the nursing team as well as recognition of nurses' work within the team; it explained 12.7% of the total variance and has an Eigenvalue of 1.39. Factor 3 is described as 'Hospital profile' and includes research and the image of the organisation; it explained 9.2% of the total variance and had an Eigenvalue of 1.02. Cronbach's alpha was calculated for each factor and ranged from 0.85 for the first factor to 0.64 for the third. Table 5 presents the final solution with the Cronbach's alpha value for each factor.
Discussion

Questionnaires are the main method of data collection in research on IT implementation; therefore, the development of instruments with description and analysis of psychometric
Table 4 Principal component factor analysis followed by oblique rotation from data of mechanisms

                                                                               Factors
Item label                                                                     1        2        3
It is easy to learn how to use it                                              0.817    0.344    0.308
It is easy to use                                                              0.755    0.203    0.125
Data I register are important for the care of the patients                     0.617    0.136    0.360
The programme is integrated in the daily work                                  0.602    0.372    0.589
It is easy to know how to do what you need to do (request of test,
  record, etc.)                                                                0.601    0.043    0.305
The information I access from the programme makes my work easier               0.591    0.426    0.319
The relationship with the personnel of the department of informatics is good   0.018    0.767    0.211
The suggestions I make are taken into account                                  0.182    0.753    0.186
The attitude of the personnel of the department of informatics is cooperative  0.075    0.733    0.279
The response time to the introduction of an improvement is adequate            0.378    0.718    0.194
The people responsible for developing the programme understand my problems     0.396    0.696    0.268
The changes introduced have importance for my daily work                       0.216    0.437    0.126
I have access to the information where I need it                               0.236    0.280    0.777
I have access to the information when I need it                                0.278    0.132    0.766
I am certain about the reliability of the data documented                      0.161    0.112    0.678
I find all the information I need                                              0.256    0.203    0.517
Information is always updated                                                  0.227    0.242    0.391

Cronbach's alpha                                                               0.77     0.79     0.66

Loadings are shown to three decimal places; the putative loading for each item is the highest one, on the factor to which the item was assigned.
Table 5 Principal component factor analysis followed by oblique rotation from data of outcomes

                                 Factors
Item label                       1        2        3
Coordination of care             0.808    0.237    0.491
Facilitate patient care          0.800    0.402    0.353
Individualised care              0.785    0.338    0.193
Continuity of care               0.772    0.285    0.357
Decision-making                  0.761    0.402    0.245
Quality of information           0.665    0.486    0.263
Communication nursing team       0.364    0.894    0.166
Communication health team        0.393    0.892    0.133
Consideration of nursing work    0.576    0.653    0.238
Hospital image                   0.431    0.220    0.850
Research                         0.276    0.109    0.847

Cronbach's alpha                 0.85     0.75     0.64

Loadings are shown to three decimal places; the putative loading for each item is the highest one, on the factor to which the item was assigned.
properties helps to interpret results in a meaningful way and to advance this field in a coherent and comparable manner (Nahm et al. 2007, Rattray & Jones 2007). The study provides a valid and reliable instrument to evaluate nurses' perception of the use of IT systems in clinical practice that could be used in other studies. The process of questionnaire development and the analysis of validity and reliability have been rigorously defined and described. In addition, despite containing mainly close-ended questions, the questionnaire incorporates qualitative data in the form of open-ended questions, which allows underlying dimensions to be identified for further research and improvement of the instrument.

The process followed has suggested ways to improve the questionnaire. Results from the pilot test produced a new version of the questionnaire and recommended further analysis using factor analysis to test construct validity and enhance reliability. The use of both qualitative and quantitative approaches has enriched the process; nurses' accounts of their experience completing the questionnaire helped to clarify and reword some of the questions included. Dimensions and variables included in the initial questionnaire were modified and factor analysis generated a 'set of posteriori constructs: what, based on the data, the instrument appears to be assessing' (Cork et al. 1998, p. 168).

Mechanisms of IT system implementation were transformed from an initial four-dimension structure to three factors. 'IT support' is the only factor that remains the same and the other three dimensions are reduced to two: 'usability', which includes both ease of use of the system and integration in work dynamics, and 'information characteristics'. The three factors explained 48.6% of the total variance. 'IT support' was the second factor, with 11.6% of the total variance explained. Support has mainly been studied by looking at the adequacy of the training provided, but not from the perspective of developing and improving the system to fit users'
needs. User involvement has been identified as relevant for effective implementation as it favours the incorporation of users' needs, but it is also relevant for users' acceptance as it generates a sense of ownership (Urquhart & Currell 2005). In this study, this issue has not been directly addressed as the design and implementation phases of the project were not analysed. Nevertheless, the perception nurses have of IT support could be considered a consequence of taking users' perspective into account and, therefore, could be used in future research when analysing user involvement in long-term evaluation of IT implementation.

Changes in the initial structure in aspects related to characteristics of the programme, integration in daily work and information characteristics are similar to the results of Otieno et al.'s (2007) study. They suggest that users find it difficult to differentiate between system and information characteristics. Further analysis of these concepts needs to be addressed in future research to understand them better (Nahm et al. 2007). System characteristics and information characteristics, despite being related to each other to some extent, have different implications for evaluation and development.

Variables considered within the outcomes dimension, initially considered as one factor, produced a three-factor solution: 'impact on patient care', 'impact on communication' and 'image profile'. Nurses' perceptions are just one dimension of effectiveness. More objective dimensions, such as completeness or impact on patient safety and quality of care, need to be considered in IT evaluation. In this context, impact on patient care is a significant issue, as measuring the impact on patient care is not easy and very few evaluations focus on it (Kaplan & Shaw 2004). Lee (2005) points out that there is no empirical evidence that a lack of good documentation decreases the quality of care.
Limitations

This study involved a sample from one hospital using one IT system; therefore, further development of the questionnaire and analysis of its reliability and validity in other settings are recommended, which would also increase the generalisability of the questionnaire.
Relevance to clinical practice

There is growing research on the development and validation of IT evaluation instruments (Lee 2004, Otieno et al. 2007). Publication and dissemination of these studies will help to develop measurement tools that could contribute to further understanding of the complexity of IT implementation and,
ultimately, to developing and implementing effective IT systems in clinical practice. Recognition of users’ needs continues to be an important issue to include in IT evaluation. Nurses’ perception of IT support can be considered a valid approach to assessing user involvement after the initial stages of IT implementation.
Conclusion

• The study provides a detailed description of and justification for the development process of an instrument to measure nurses’ perception of an IT system.
• The study contributes to the growing area of research in healthcare on evaluation of IT system implementation.
• Despite the complexity of the phenomena, efforts to develop and validate instruments to evaluate IT systems are worthwhile.
• Comparing results of different studies could help to move forward the concept analysis and evaluation of IT implementation.
Contributions

Study design: CO, RW; data collection and analysis: CO, RW, EB, SR; manuscript preparation: CO, RW.
References

Ammenwerth E, Eichstadter R, Haux R, Kutscha A, Pohl U & Ziegler S (2001) A randomized evaluation of a computer-based nursing documentation system. Methods of Information in Medicine 40, 61–68.
Ammenwerth E, Mansmann U, Iller C & Eichstadter R (2003) Factors affecting and affected by user acceptance of computer-based nursing documentation: results of a two-year study. Journal of the American Medical Informatics Association 10, 69–84.
Ammenwerth E, Brender J, Nykanen P, Prokosch H, Rigby M & Talmon J (2004) Vision and strategies to improve evaluation of health information systems. Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. International Journal of Medical Informatics 73, 479–491.
Bryman A & Cramer D (2005) Quantitative Data Analysis with SPSS 12 and 13. Routledge, Hove, East Sussex.
Cork RD, Detmer WM & Friedman CP (1998) Development and initial validation of an instrument to measure physicians’ use of, knowledge about and attitudes towards computers. Journal of the American Medical Informatics Association 5, 164–176.
Currell R & Urquhart C (2003) Nursing records systems: effects on nursing practice and health care outcomes. Cochrane Database of Systematic Reviews, Issue 3. Article no.: CD002099.
Currie LM (2005) Evaluation frameworks for nursing informatics. International Journal of Medical Informatics 74, 908–916.
2010 The Authors. Journal compilation 2010 Blackwell Publishing Ltd, Journal of Clinical Nursing, 19, 240–248
Darbyshire P (2004) Rage against the machine?: nurses’ and midwives’ experiences of using computerized patient information systems for clinical information. Journal of Clinical Nursing 13, 17–25.
DeVon HA, Block ME, Moyle-Wright P, Ernst DM, Hayden SJ, Lazzara DJ, Savoy SM & Kostas-Polston E (2007) A psychometric toolbox for testing validity and reliability. Journal of Nursing Scholarship 39, 155–164.
Ferguson E & Cox T (1993) Exploratory factor analysis: a users’ guide. International Journal of Selection and Assessment 1, 84–94.
Friedman CP & Abbas UL (2003) Is medical informatics a mature science? A review of measurement practice in outcome studies of clinical systems. International Journal of Medical Informatics 69, 261–272.
Giuse DA & Kuhn KA (2003) Health information systems challenges: the Heidelberg conference and the future. International Journal of Medical Informatics 69, 105–114.
Helleso R & Ruland CM (2001) Developing a module for nursing documentation integrated in the electronic patient record. Journal of Clinical Nursing 10, 799–805.
Jackson C & Furnham A (2000) Designing and Analysing Questionnaires and Surveys: A Manual for Health Professionals and Administrators. Whurr Publishers Ltd, London.
Kanste O, Miettunen J & Kyngäs H (2007) Psychometric properties of the multifactor leadership questionnaire among nurses. Journal of Advanced Nursing 57, 201–212.
Kaplan B & Shaw NT (2004) Future directions in evaluation research: people, organizational and social issues. Methods of Information in Medicine 43, 215–231.
Lee T-T (2004) Evaluation of computerised nursing care plan: instrument development. Journal of Professional Nursing 20, 230–238.
Lee T-T (2005) Nursing diagnoses: factors affecting their use in charting standardized care plans. Journal of Clinical Nursing 14, 640–647.
Lubans DR, Sylva K & Osborn Z (2008) Convergent validity and test–retest reliability of the Oxford physical activity questionnaire for secondary school students. Behavioural Change 25, 23–24.
Moloney R & Maggs C (1999) A systematic review of the relationships between written manual nursing care planning, record keeping and patient outcomes. Journal of Advanced Nursing 30, 51–57.
Murphy-Black T (2006) Using questionnaires. In The Research Process in Nursing, 5th edn (Gerrish K & Lacey A eds). Blackwell Publishing Ltd, Oxford, pp. 367–382.
Nahm R & Poston I (2000) Measurement of the effects of an integrated point-of-care computer system on quality of nursing documentation and patient satisfaction. Computers in Nursing 18, 220–229.
Nahm ES, Vaydia V, Ho D, Scharf B & Seagull J (2007) Outcomes assessment of clinical information system implementation: a practical guide. Nursing Outlook 55, 282–288.
Nygren E, Wyatt JC & Wright P (1998) Helping clinicians to find data and avoid delays. The Lancet 352, 1462–1466.
Oroviogoicoechea C, Elliott B & Watson R (2008) Review: evaluating information systems in nursing. Journal of Clinical Nursing 17, 567–575.
Otieno OG, Toyama H, Asonuma M, Kanai-Pak M & Naitoh K (2007) Nurses’ views on the use, quality and user satisfaction with electronic medical records: questionnaire development. Journal of Advanced Nursing 60, 209–219.
Pawson R & Tilley N (1997) Realistic Evaluation. Sage, London.
Polit DF & Hungler BP (1993) Nursing Research: Methods, Appraisal and Utilization, 3rd edn. Lippincott Company, Philadelphia, Pennsylvania.
Powsner SM, Wyatt JC & Wright P (1998) Opportunities for and challenges of computerisation. The Lancet 352, 1617–1622.
Rattray J & Jones MC (2007) Essential elements of questionnaire design and development. Journal of Clinical Nursing 16, 234–243.
Rodrigues J (2001) The complexity of developing a nursing information system: a Brazilian experience. Computers in Nursing 19, 98–104.
Urquhart C & Currell R (2005) Reviewing the evidence on nursing record systems. Health Informatics Journal 11, 33–44.
Van der Meijden MJ, Tange HJ & Hasman A (2003) Determinants of success of inpatient clinical information systems: a literature review. Journal of the American Medical Informatics Association 10, 235–243.
Van Ginneken AM (2002) The computerized patient record: balancing effort and benefit. International Journal of Medical Informatics 65, 97–119.
Watson R & Thompson DR (2006) Use of factor analysis in Journal of Advanced Nursing: literature review. Journal of Advanced Nursing 55, 330–341.
Wyatt JC & Wright P (1998) Design should help use of patients’ data. The Lancet 352, 1375–1378.
Yen M & Lo L (2002) Examining test-retest reliability: an intra-class correlation approach. Nursing Research 51, 59–62.