Evaluation Strategy for a Web-Based Decision Support System for Healthcare Projects

ABDOU Alaa 1, LEWIS John 2, AL ZAROONI Sameera 3

1 Lecturer, Department of Architectural Engineering, College of Engineering, United Arab Emirates University; PH (971) 050 4727615; email: [email protected]
2 Lecturer, School of Architecture, University of Liverpool, UK; PH (44) 151 794 2609; email: [email protected]
3 Assistant Professor, Department of Architectural Engineering, College of Engineering, United Arab Emirates University; PH (971) 03 7678501; email: [email protected]

Abstract: A Decision Support System (DSS) is a computer-based tool that utilizes data and models to help decision-makers enhance their capabilities in understanding and solving complex or poorly structured decision problems. Evaluation is an important stage in the development cycle of a DSS; its main objective is to assess the system's overall value, and it is usually conducted with the purpose of verifying and validating the system. The currently evolving Internet technologies, with their potential for communication, collaboration and information sharing, provide a unique platform for Decision Support Systems. For a web-based DSS, the potential benefits gained by implementing the system on the web also need to be assessed and examined. A number of approaches to evaluating and measuring the success of DSSs have been advanced in the literature. This paper presents a framework for evaluating a web-based Decision Support System for the appraisal stage of public healthcare projects in the United Arab Emirates, whose main objective is to assist decision-makers in examining different function program alternatives and their associated conceptual budgets. The aim of the proposed evaluation strategy is to verify and validate the developed system prototype and to assess the satisfaction of its potential users. The study is guided by a comprehensive literature review on the validation and verification of DSSs and their basic tools and techniques.
Key words: Evaluation; Verification; Validation; Decision Support System; User Satisfaction.
1. INTRODUCTION

The evaluation of Decision Support Systems is an important stage in their development cycle. According to Borenstein (1998), evaluation is defined as the process of assessing a system's overall value. Miles et al. (2000) assert that omitting the evaluation step may lead to reliance on a system whose outputs are of uncertain quality; it is only by carrying out an evaluation that the strengths and weaknesses of a system can be truly assessed. Evaluation is usually conducted with the purpose of verifying and validating a Decision Support System. A DSS is considered successful when it satisfies not only the needs of the user but also the organisational objectives and the demands of the organisation's environment, which can affect the performance of the system (Papamichail and French 2005).

A web-based Healthcare Decision Support System (HCDSS) was developed and implemented on the World Wide Web, with PHP and MySQL selected as the scripting language and database management system for building the prototype. Its main objectives focus on assisting decision-makers in examining different function program alternatives and their associated conceptual budgets. In addition, the system facilitates reflecting the risk and uncertainty factors associated with healthcare space programming in the cost estimating and forecasting processes. The system consists of three functional modules: a space programming module, a risk assessment and cost estimating module, and a cost bidding module.
The Internet is utilized to provide an efficient data-sharing environment and a mechanism for updating project data and cost information. More information about the design and construction of the system can be found in Abdou, Lewis and Radaideh (2003) and Abdou et al. (2007). An evaluation strategy, implemented on three levels of assessment, was developed to evaluate the prototype. The paper starts by providing a comprehensive review of the literature on the evaluation of decision support systems and their main methods and techniques; the evaluation of web-based Decision Support Systems is also explored. Following that, the proposed evaluation strategy for the developed HCDSS prototype, with its three levels of assessment, is presented.
2. EVALUATION OF DECISION SUPPORT SYSTEMS: A BACKGROUND

A Decision Support System (DSS) is a computer-based tool that utilizes data and models to help decision-makers enhance their capabilities in understanding and solving complex or poorly structured decision problems. According to Sojda (2004), empirical evaluation of Decision Support Systems (DSSs) in some form is essential, and can range from experiments run against a preselected gold standard to simple testing of system components. Furthermore, the opinions of potential users of the system are considered an important part of its evaluation process. Papamichail and French (2005) assert that, because of the people-centred focus of DSS technologies, it is important not only to assess the technical aspects of DSSs and their overall performance but also to seek the views of their potential users. The following sections discuss the different frameworks for DSS evaluation and provide a comprehensive review of the Verification and Validation of DSSs and their different processes and techniques.
2.1. Frameworks for Evaluating Decision Support Systems
A number of approaches to evaluating and measuring the success of DSSs have been advanced in the literature. According to Adelman (1992), the successful implementation of DSSs relies on incorporating three evaluation procedures: (1) examining the logical consistency of system algorithms (verification); (2) empirically testing the predictive accuracy of the system (validation); and (3) documenting its user satisfaction. In his own review, Marakas (1999) identified four main approaches to DSS evaluation: (1) overall system quality assessment, which includes issues such as system efficiency, understandability, reliability and consistency; (2) attitudinal measures of success, which gauge the attitudes of the system users and their satisfaction with various aspects of its use, such as the 'perceived usefulness' and 'perceived ease of use' proposed by Davis (1989); (3) technical measures of success, which determine whether the system does what it is supposed to do; and (4) organizational measures of success, which focus on measuring the degree to which the system meets organizational needs and expectations. Hall, Stranieri and Zeleznikow (2002) introduced comprehensive evaluation criteria for DSSs that not only cover the technical and subjective evaluation of a DSS but also examine the impact of the system on its environment. The criteria framework is arranged in four quadrants: verification and validation, user credibility, technical infrastructure, and the impact of the system upon its environment. The Verification and Validation (V and V) quadrant is concerned with the micro/technical aspects of the technology and the system development process; internal criteria include those concerned with validity and software design, whilst external criteria canvass areas such as efficiency, reliability and security. The Technical Infrastructure quadrant is concerned with the macro/technical infrastructure requirements, such as the technical fit of the system with existing systems, resource requirements and availability, and portability. The User Credibility quadrant is micro/people oriented and is subdivided into three main areas: user satisfaction, utility (fitness for purpose, usefulness) and usability (ease of use). The Impact quadrant is macro/people oriented and is concerned with the impact of the system upon its environment, including tasks, people, the parent organization and beyond.

The authors believe that most currently implemented criteria can readily be classified as belonging to one of these four quadrants. For example, the user credibility quadrant contains the group of criteria used to assess usability, usefulness and user satisfaction. However, some criteria are not attributable to just one quadrant and fit more naturally on the borders between quadrants. For instance, criteria concerned with the satisfaction of user requirements lie naturally between the 'V and V' and 'User Credibility' quadrants, as they are concerned with usefulness. Similarly, personal impact criteria can be considered in both the User Credibility quadrant (user satisfaction subgroup) and the Impact quadrant. In order to evaluate a DSS successfully using the above criteria, the evaluator needs to understand: 1) the context of the problem the system is intended to support and the organization environment it operates in; 2) the development stage/phase the system is in (i.e. construction stage, prototype, or complete system); and 3) the context of the evaluation itself with regard to the availability of data/cases as well as the human resources/respondents needed to conduct the proposed evaluation strategy.
Verification and Validation (V&V) are important processes in the development lifecycle of any computer system, their main purpose is being to assure that the system meets its specifications and the outputs of this system are correct. They establish confidence that the system is appropriate/ valid for its purpose. When reviewing literature on software engineering, several definitions of Verification and Validation can be found. The IEEE Standard Glossary of Software Engineering Terminology (1990) defines Verification as “The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.” Furthermore Boehm (1981), stated that the main purpose of verification is to ensure that the system is coherent and logical from a modelling and programming perspective and conforms to its targeted specification. In the same vein, Miser and Quade (1988) described Verification as the process by which the analyst assures himself and others that the actual model that has been constructed is indeed the one he intended to build. Overall, Verification demonstrates consistency, completeness, and correctness of the software as it is being developed (Adrion, Branstad and Cherniavsky 1982, The Center for Devices and Radiological Health (Cdrh) 2002). On the other hand, Validation is defined in the IEEE same glossary as “The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements”. Finlay (1989) defined it as “the process of testing the agreements between behaviour of the DSS and that of the real world system being modelled”. Furthermore, Sojda (2004), stated that Validation is to examine whether the system achieved the project’s stated purpose related to helping the user(s) reach a decision(s). By carrying out validation, the analyst assures himself and others that the model is a representation of the phenomena being modelled and that it is adequate for the purposes of the study of which it is a part (Miser and Quade 1988). In summary, Boehm (1981), described V and V in simple terms, validation means building the right system while verification means building the system right. These terms have been frequently used by others in much of the literature on software engineering. The authors’ perceptive for verification and validation, in reference to decision support systems, is drawn from the above definitions. Verification ensures that the internal parts for examined system are logically and correctly developed as intended in relation to its structure, algorithm and programming/coding, while Validation assures that the system accomplishes its (stated) objectives. From starting point of view, verification must occur before validation. According to Sojda (Sojda 2004), Verification should be performed prior to any delivery of a working system, even a prototype. This avoids the undesirable scenario where software generates expected outputs simply via calibration and correlation of input and outputs rather than via logical relationships. With the production of the system prototype, general validation can be conducted at this stage as well, with detailed efforts performed later when a complete system is constructed. Papamichail and French (2005) broadly categorised different V and V methods into three types: Technical methods; Empirical methods; and Subjective methods. 
The technical methods include tests for measuring the technical aspects of the system and its components (i.e. internal correctness), and focus on verification issues. Verification tests aim at debugging the logic of a computer system and eliminating any errors it might contain. On the other hand, empirical methods include tests for measuring the
performance of the system and focus on validation issues such as external correctness (i.e. the quality of a system); they check how well a system performs its tasks. Finally, subjective methods include tests for measuring the usefulness and effectiveness of the system, i.e. whether the system addresses an important problem, how logical and systematic its problem-solving approach is, whether it meets the needs of its users, and how well its interface is designed. Testing is a useful technique for both Verification and Validation, examining the behaviour of a DSS or its components over a set of sample data. A successful test is one that finds a "bug"; according to Iftikhar (2004), test engineers can use testing only to show the presence of errors, not their absence. From the point of view of the component being tested, a test can be categorised in two ways: 'Black-Box' testing and 'White-Box' testing. In 'Black-Box' testing, a system is treated as an input-output system of unknown internal components, and the behaviour of the whole system/model is analysed. On the other hand, 'White-Box' testing, sometimes called 'Open-Box' testing (Pidd 1996), recognises that the model/system is not really a black box but comprises components with certain relationships; the detailed internal components of the model/system are examined or tested individually, with reference to their relationships to each other and to the whole model/system. The following two sections review in more detail the different techniques for the Verification and Validation of Decision Support Systems and their main processes.
2.2.1. Verification Techniques

As mentioned earlier, the main objective of verification is to ensure that the system is internally complete and correctly developed. According to Papamichail and French (2005), verification tests aim at debugging the logic of a computer program and eliminating any errors, defects or deficiencies in the system at each step of its life cycle. Debugging includes implementing a repair and verifying that it works as intended. According to Carson (2002), debugging occurs when the developer of a model knows there is a "bug" in the model and uses various techniques to determine its cause and fix it. The verification process starts with requirements verification after completing the system analysis stage; at this stage, the developer examines the requirements for completeness and testability. During the system design stage, the developer defines the problem clearly, hypothesises the detailed system framework and selects a set of development tools to be used in constructing the system. According to Iftikhar (2004), once the developer formulates the design specifications and requirements, he/she verifies their correctness and consistency. During the construction stage, the system components and modules are coded and constructed. Verification during this stage includes testing these modules and their detailed components to prove that the system is internally complete from a modelling and programming perspective and conforms to its targeted specifications. Extensive and systematic verification is essential to find errors, which are expensive and hard to fix at later stages of system development. If the programmer finds any errors and makes changes to the program, he/she must retest; this includes not only re-running the test that failed but also running any other tests associated with the changed code (Iftikhar 2004). A review of the software engineering literature suggests that the main testing techniques for verification are:
Static tests (software inspections): these are concerned with analysis of the static system representation in order to discover problems such as coding errors or incomplete rules, and may be supplemented by a code analysis tool. According to Hailpern and Santhanam (2002), the typical mode of operation of a static test is to make the target code available to a code analysis tool, which will then look for a class of problems and flag them as potential candidates for investigation and fixing.

Dynamic tests (software testing): these are concerned with exercising the system or its components/modules and observing their behaviour. The system is executed with test data and its operational behaviour is observed and judged. 'Black-Box' and 'Open/White-Box' testing are well-known techniques for both verification and validation. From the point of view of the component being tested, a verification test can be categorised as black-box (i.e. functional/specification-based) or white-box (i.e. structural). A black-box verification test focuses on the functional/specification requirements of the system; it identifies interface errors, performance errors, and initialisation and termination errors. A white-box verification test focuses on the detailed structure and components of the system; its objective is to guarantee that all independent components within the system or its modules are inspected or tested. These independent components can be tested statically (inspection) and/or dynamically (behaviour testing). According to Iftikhar (2004), developers generally use more ad hoc testing techniques, such as hand calculations, simulations (manual and/or automated) or alternative solutions to the problem, during the development of the system. Test sets, consisting of inputs and their corresponding expected outputs, should be developed before carrying out testing at the development stage. The developer must create test cases that test both the structural and the functional properties of the system; the test-case inputs should include the extreme values of the program's input range and must exercise all branches of the system.
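To make this concrete, the short sketch below shows how black-box and white-box verification tests might look for a single cost routine. The function conceptualCost() and its inputs are hypothetical stand-ins, not part of the actual HCDSS code; the expected outputs come from independent hand calculations, and the boundary and error-handling cases are chosen to exercise individual branches, in line with the guidance above.

```php
<?php
// Illustrative verification sketch (hypothetical routine, not the actual HCDSS code).
// conceptualCost() multiplies a gross floor area by a unit rate and adds a contingency allowance.
function conceptualCost(float $areaSqm, float $ratePerSqm, float $contingencyPct): float
{
    if ($areaSqm < 0 || $ratePerSqm < 0 || $contingencyPct < 0) {
        throw new InvalidArgumentException('Inputs must be non-negative.');
    }
    $baseCost = $areaSqm * $ratePerSqm;
    return $baseCost * (1 + $contingencyPct / 100);
}

// Black-box (functional) checks: expected values come from independent hand calculations.
assert(abs(conceptualCost(1000, 500, 10) - 550000) < 0.01);
assert(abs(conceptualCost(0, 500, 10) - 0) < 0.01);   // boundary case: zero area

// White-box (structural) check: exercise the error-handling branch explicitly.
try {
    conceptualCost(-1, 500, 10);
    echo "FAIL: negative area was not rejected\n";
} catch (InvalidArgumentException $e) {
    echo "PASS: negative area rejected as expected\n";
}
echo "Verification test set completed.\n";
```

In practice such a test set would be re-run whenever the module's code changes, so that a fix in one branch does not silently break another.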
2.2.2. Validation Techniques

Several possible methods of carrying out validation are reported in the literature. The black-box validation testing technique is concerned with the input-output behaviour of the whole system/model. According to Pidd (1996), in such an approach the idea of validation is to test the degree to which the model's results resemble those produced by the system(s) being modelled; statistical methods can be employed to express and examine the relationship between the two. According to Tomlison (1998), in black-box validation the emphasis is on the predictive power of the model, and the approach can also be applied to individual parts/modules of the system. On the other hand, the white/open-box validation testing technique examines or tests the detailed internal components of the model/system individually, with reference to their relationships to each other and to the whole model/system. According to Pidd (1996), white/open-box validation is part of the system modelling process itself and needs to be conducted throughout the whole modelling process, i.e. it should be conducted hand-in-hand with the client or the user of the model/system. Considering the type of data used for validation testing, using real-time and historic data sets is another approach. According to Sojda (2004), in an ideal world one could create a decision support system and test its performance against actual scenarios as they unfold; however, this is often not possible because implementation of the system may need to be immediate.
One alternative is to build the system using data, information and knowledge from one set of situations and to validate it using an independent set. When a data-driven model is a significant part of the decision support system, the data can sometimes be randomly separated into two parts: one for model development and one for its validation (a brief illustration is sketched at the end of this section). In the same vein, Paulson (1995) stated that for systems which are planned for the future, as is typical when estimating new work, the model results can still be compared with conventional deterministic calculations and tuned with efficiency or contingency factors as appropriate. Another popular approach to validation is subjective evaluation using questionnaire surveys. According to Papamichail and French (2005), questionnaires are a popular technique for determining a user's attitude/satisfaction towards various aspects of a Decision Support System. According to Marakas (1999), the concept of user satisfaction was proposed as a measure of DSS success. Along these lines, Ives, Olson and Baroudi (1985) developed a concise forty-question instrument for measuring user satisfaction, synthesising the work of several researchers in this area. According to Papamichail and French (2005), the technology acceptance model developed by Davis (1989) suggests that 'perceived usefulness' and 'perceived ease of use' are primary factors in predicting user acceptance behaviour. Marakas (1999) described 'perceived usefulness' as focusing on the attitudes of the user with regard to the degree to which the system is perceived as bringing utility or value to the activities within which it is used; 'perceived ease of use', on the other hand, focuses on the degree to which users perceive the tool to be easy to use. A review of the software engineering literature also shows that other approaches to validation are reported and implemented. For example, sensitivity analysis can be a validation tool, especially for heuristic-based systems and for systems where few or no test cases are available for comparison (O'Keefe, Balci and Smith 1987; Bahill 1991). Other approaches include: the gold standard, panels of experts, component testing, Turing tests, comparison of a DSS with other systems in the same domain, and Multi-Criteria Decision Analysis techniques; for more details see, for example, Sojda (2004) and Papamichail and French (2005). In order to select the appropriate validation approach, it is important to take into consideration the context of the problem which the system intends to support, the development stage/phase of the system (i.e. construction stage, prototype, or complete system) and the availability of both the data and the human resources/respondents needed to conduct the selected technique.
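As an illustration of the data-splitting idea mentioned above, the sketch below randomly divides a small set of hypothetical completed-project records into a development half and a held-back validation half, and reports the mean absolute percentage error of the model's estimates on the validation half. The figures, field names and error measure are assumptions made for illustration; they do not represent the UAE healthcare project data or the metrics used in the actual study.

```php
<?php
// Illustrative validation sketch with invented records.
// Each record: actual final cost and the model's estimated cost for a completed project.
$historicalProjects = [
    ['actual' => 12.4, 'estimate' => 11.9],   // costs in millions (hypothetical)
    ['actual' =>  8.1, 'estimate' =>  8.6],
    ['actual' => 15.0, 'estimate' => 14.2],
    ['actual' =>  9.7, 'estimate' => 10.3],
    ['actual' => 20.5, 'estimate' => 19.8],
    ['actual' =>  6.3, 'estimate' =>  6.9],
];

// Randomly split the records: one half to tune the model, one half held back for validation.
shuffle($historicalProjects);
$half = intdiv(count($historicalProjects), 2);
$developmentSet = array_slice($historicalProjects, 0, $half);  // would be used for calibration (omitted here)
$validationSet  = array_slice($historicalProjects, $half);

// Black-box validation measure: mean absolute percentage error on the held-back set.
function meanAbsolutePercentageError(array $records): float
{
    $errors = array_map(
        fn($r) => abs($r['actual'] - $r['estimate']) / $r['actual'] * 100,
        $records
    );
    return array_sum($errors) / count($errors);
}

printf("Validation MAPE on held-back projects: %.1f%%\n",
       meanAbsolutePercentageError($validationSet));
```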
2.3. Evaluation of Web-based Decision Support Systems

When designing a web-based Decision Support System, components such as the interface, functionality and databases need to be considered in order to interact with and take advantage of this evolving technology. The potential benefits of implementing DSSs on the Web include greater accessibility, more efficient distribution, more effective administration, and a greater degree of flexibility across users' individual operating platforms. When evaluating a Web-based system, these potential benefits, along with Web-related issues such as security and accessibility, need to be assessed and examined in addition to its technical verification and validation. According to Murugesan et al. (2001), testing, verification and validation of Web-based systems is an important and challenging task in the Web engineering process, yet very little attention is given by Web developers to testing and evaluation. The authors added that Web-based systems testing differs from conventional software testing
and poses new challenges, as these systems need to be tested from the ultimate user's perspective for important issues such as security and usability. Considering the four quadrants' evaluation criteria (explained earlier in section 2.1), different approaches can be implemented to assess and evaluate these systems. For example, to evaluate a Web-based DSS at the user credibility/satisfaction micro level, the user's attitude/satisfaction towards the potential benefits gained by implementing the system on the Web needs to be measured in order to predict his/her acceptance of such benefits. At the technical infrastructure and impact macro levels, some other issues also need to be examined. According to Hall, Stranieri and Zeleznikow (2002), issues such as maintainability, portability and machine independence are often benefits of using the World Wide Web and should be considered in any evaluation. Other, possibly less beneficial, criteria that could be taken into account include accessibility, availability, performance and response time, and the impact on the efficiency of other systems operating in the same environment at the same time.
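One simple way to look at the response-time and availability criteria mentioned above is to time requests to the prototype's pages. The sketch below is illustrative only: the URL is a placeholder rather than the actual HCDSS address, and a realistic assessment would repeat such measurements from different locations and under representative load.

```php
<?php
// Illustrative sketch of one web-specific check: recording server response time
// and availability for a prototype page. The URL is a placeholder.
$url = 'http://example.com/hcdss/space-programming.php';

$handle = curl_init($url);
curl_setopt($handle, CURLOPT_RETURNTRANSFER, true);   // capture the body instead of printing it
curl_setopt($handle, CURLOPT_TIMEOUT, 30);            // treat requests longer than 30 s as failed

$body    = curl_exec($handle);
$status  = curl_getinfo($handle, CURLINFO_HTTP_CODE);
$seconds = curl_getinfo($handle, CURLINFO_TOTAL_TIME);
curl_close($handle);

if ($body === false || $status >= 400) {
    echo "Availability check failed (HTTP $status).\n";
} else {
    printf("Page returned HTTP %d in %.2f seconds.\n", $status, $seconds);
}
```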
3. THE EVALUATION STRATEGY FOR THE HCDSS PROTOTYPE

In the light of the literature review presented above, which discusses different evaluation strategies for DSSs and their implementation techniques, an evaluation strategy was proposed to evaluate the developed HCDSS prototype. The main considerations behind its development take into account the following: 1) the context of the problem which the HCDSS intends to support, i.e. assisting decision-makers in examining different function program alternatives and their associated conceptual budgets for UAE healthcare projects and, in addition, reflecting the associated uncertainty and risk factors in the space programming and cost estimating processes of these projects; 2) the development stage/phase of the system (i.e. a developed prototype); and 3) the availability of the data/cases as well as the healthcare experts needed to conduct the proposed evaluation strategy. The proposed evaluation strategy has been influenced by the research work of Miles et al. (2000), Hall et al. (2002), and Papamichail and French (2005). Its implementation is proposed on three assessment levels: 1) technical verification; 2) performance validation; and 3) subjective assessment/evaluation. Considering the four quadrants' evaluation criteria developed by Hall et al. (2002), the strategy can be seen to lie on the borders between the 'V and V' and 'User Credibility' quadrants, as its objectives are concerned with satisfying system requirements, including V and V, as well as with user acceptance of the proposed system. It cannot be placed in either the 'Technical Infrastructure' quadrant or the 'Impact' quadrant, as the proposed prototype is not intended to be implemented in a particular organization; there is therefore no need to evaluate its macro/technical infrastructure requirements, such as the technical fit of the system with existing systems and resource requirements, or its impact upon its environment, including tasks, people and the parent organization. The following two sections describe in more detail the objectives of the proposed evaluation strategy and its three levels of assessment.
3.1. Objectives of the Evaluation

The main aim of the proposed evaluation is to verify and validate the HCDSS prototype and to document its potential users' satisfaction. Its detailed objectives are: 1) to assure that the HCDSS prototype is coherent and logical from a modelling and programming perspective and that its code is correctly written; 2) to assess the performance aspects of the system, such as how well it works and performs its intended tasks and how accurate its results are in comparison with real cases; 3) to assess and validate the effectiveness, functionality coverage and scope of the HCDSS model, as well as to explore the applicability and usability of the model for improving healthcare project appraisal; 4) to assess the ease of use/user friendliness of its user interface design; 5) to assess the potential benefits gained from its implementation as a Web-based system; and 6) to identify future directions for developing the proposed system.
3.2. The Assessment Levels of the Proposed Evaluation Strategy

The proposed strategy for the evaluation of the HCDSS is implemented on three assessment levels: technical verification, performance validation, and subjective assessment/evaluation. The objectives and methodology for each of them are described in the following paragraphs. Technical verification involves performing static and dynamic testing methods, which are implemented to achieve the first objective of the evaluation strategy, i.e. to eliminate coding errors and check how well the system has been built and how accurate its output is. Performance validation involves black-box and white-box validation testing to achieve the second objective, i.e. to examine whether the system achieves its stated purpose of helping the user(s) reach a decision(s) and to measure its performance. It focuses on validation issues such
as external correctness, i.e. how well the system performs its tasks and how accurate its results are in comparison with real cases. Subjective assessment/evaluation involves presenting the developed prototype to a set of healthcare industry practitioners and experts for review and assessment. Questionnaires are a popular way of determining a user's attitude towards a DSS and have been used in a variety of applications (Papamichail and French 2005). A questionnaire survey was adopted to collate these practitioners' and experts' opinions in order to achieve objectives three to six, i.e. to measure the effectiveness/usefulness of the system and its functionality coverage and scope; to examine the potential benefits gained from its Web-based implementation; to assess the ease of use of the system; and finally, to identify possible directions for its future development.
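As a sketch of how such questionnaire responses might be collated, the code below averages hypothetical 1-5 Likert ratings grouped under the two constructs discussed earlier, perceived usefulness and perceived ease of use. The respondent identifiers, ratings and question groupings are invented for illustration and do not reflect the actual survey instrument used in the study.

```php
<?php
// Illustrative collation of questionnaire responses (hypothetical data).
// Each respondent rates several statements on a 1-5 Likert scale; statements are grouped
// into the two constructs used in the survey: perceived usefulness and perceived ease of use.
$responses = [
    'expert_1' => ['usefulness' => [5, 4, 4], 'ease_of_use' => [4, 3, 4]],
    'expert_2' => ['usefulness' => [4, 4, 5], 'ease_of_use' => [5, 4, 4]],
    'expert_3' => ['usefulness' => [3, 4, 4], 'ease_of_use' => [3, 3, 4]],
];

// Average all ratings belonging to one construct across all respondents.
function constructMean(array $responses, string $construct): float
{
    $all = [];
    foreach ($responses as $ratings) {
        $all = array_merge($all, $ratings[$construct]);
    }
    return array_sum($all) / count($all);
}

printf("Mean perceived usefulness:  %.2f / 5\n", constructMean($responses, 'usefulness'));
printf("Mean perceived ease of use: %.2f / 5\n", constructMean($responses, 'ease_of_use'));
```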
4. CONCLUSION

This paper began by providing a comprehensive review of the literature concerning the different strategies for evaluating Decision Support Systems and their main methods and techniques. In addition, the evaluation of Web-based decision support systems was explored. An evaluation strategy was developed to evaluate the developed Healthcare Decision Support System (HCDSS) prototype. It relies not only on empirical tests but also considers the opinions of the system's potential users. The proposed strategy is implemented on three assessment levels: technical verification, performance validation, and subjective assessment/evaluation. The verification process occurred in parallel with the development of each module's components within the development life cycle of the HCDSS prototype. Now that the prototype is sufficiently developed to be treated as an input-output device, the performance validation and the subjective evaluation are currently taking place. The implementation of the proposed evaluation strategy, with its three levels of assessment, will be reported in future publications.
References

Abdou, A, Lewis, J and Radaideh, M (2003) An Internet-based decision support system for healthcare project appraisal: a conceptual proposal. International Journal of Construction Innovation: Information Process, 3(3), 145-55.
Abdou, A, Lewis, J, Radaideh, M and AlZarooni, S (2007) Web-based Information Systems in construction: a case-study for healthcare projects. In: Freire, M and Pereira, M (Eds.), Encyclopedia of Internet Technologies and Applications. Hershey, PA, USA: IDEA Group Reference, IDEA Group Inc.
Adelman, L (1992) Evaluating Decision Support and Expert Systems. New York, USA: John Wiley and Sons, Inc.
Adrion, W R, Branstad, M A and Cherniavsky, J C (1982) Validation, verification, and testing of computer software. ACM Computing Surveys, 14(2), 159-92.
Bahill, A T (1991) Verifying and Validating Personal Computer-Based Expert Systems. Englewood Cliffs, New Jersey, USA: Prentice-Hall Inc.
Boehm, B W (1981) Software Engineering Economics. 1st ed. Prentice-Hall Advances in Computing Science & Technology Series, Englewood Cliffs, New Jersey, USA: Prentice-Hall Inc.
Borenstein, D (1998) Towards a practical method to validate decision support systems. Decision Support Systems, 23(3), 227-39.
Carson, J S (2002) Model verification and validation. In: Yücesan, E, Chen, C-H, Snowdon, J L and Charnes, J M (Eds.), Winter Simulation Conference, December 8-11, San Diego, California, USA, 52-8.
Davis, F D (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-40.
Finlay, P (1989) Introducing Decision Support Systems. Oxford, UK: NCC Blackwell.
Hailpern, B and Santhanam, P (2002) Software debugging, testing, and verification. IBM Systems Journal: Software Testing and Verification, 41(1), 4-12.
Hall, M J J, Stranieri, A and Zeleznikow, J (2002) A strategy for evaluating web-based discretionary decision support systems. In: Manolopoulos, Y and Návrat, P (Eds.), Sixth East-European Conference on Advances in Databases and Information Systems, 8-11 September, Bratislava, Slovakia, 108-20.
Iftikhar, B (2004) Validation, Verification and Debugging: The Process and Techniques [cited 10 July 2006]. Available from http://www.cs.utexas.edu/users/almstrum/cs370/iftikhar/ValidationVerificationandTesting.html.
Ives, B, Olson, M H and Baroudi, J J (1985) The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-93.
Marakas, G M (1999) Decision Support Systems in the Twenty-First Century. International ed. New Jersey, USA: Prentice-Hall, Inc.
Miles, J C, Moore, C J, Kotb, A S M and Jaberian-Hamedani, A (2000) End user evaluation of engineering knowledge based systems. Civil Engineering and Environmental Systems, 17(4), 293-318.
Miser, H J and Quade, E S (Eds.) (1988) Handbook of Systems Analysis: Craft Issues and Procedural Choices. Wiley, USA.
O'Keefe, R M, Balci, O and Smith, E P (1987) Validating expert system performance. IEEE Expert, 2(4), 81-9.
Papamichail, K N and French, S (2005) Design and evaluation of an intelligent decision support system for nuclear emergencies. Decision Support Systems, 41(1), 84-111.
Paulson, B C (1995) Computer Applications in Construction. First ed. McGraw-Hill, Inc.
Pidd, M (1996) Tools for Thinking: Modeling in Management Science. West Sussex, UK: John Wiley & Sons Ltd.
Murugesan, S, Deshpande, Y, Hansen, S and Ginige, A (2001) Web engineering: a new discipline for development of Web-based systems. In: Web Engineering: Software Engineering and Web Application Development, pp. 1-9. Springer, Berlin/Heidelberg.
Sojda, R S (2004) Empirical evaluation of decision support systems: concepts and an example for trumpeter swan management. In: Complexity and Integrated Resources Management, 14-17 June, University of Osnabrück, Germany. The International Environmental Modelling and Software Society (iEMSs).
The Center for Devices and Radiological Health (CDRH) (2002) General Principles of Software Validation; Final Guidance for Industry and FDA Staff [cited August 2006]. Available from http://www.fda.gov/cdrh/comp/guidance/938.pdf.
The Institute of Electrical and Electronics Engineers (1990) IEEE Standard Glossary of Software Engineering Terminology. New York, USA: IEEE.
Tomlison, J (1998) A Premises Occupancy Cost Forecasting Model, PhD thesis, School of Architecture and Building Engineering, University of Liverpool.