Understanding The Factors Important To Expert Systems Success
by
Youngohc Yoon Southwest Missouri State University College of Business Administration Springfield, MO 65804-0095
Tor Guimaraes *** J.E. Owen Chair of Excellence Tennessee Technological University Post Office Box 5022 Cookeville, TN 38505
Aaron Clevenson E.I. DuPont de Nemours & Company, Inc. One Kingwood Place, Suite 215 Kingwood, TX 77339
*** Please address all correspondence to this author.
June 28, 1995
Understanding The Factors Important To Expert Systems Success

ABSTRACT

This study empirically tests several determinants for Expert Systems (ES) implementation success.
User satisfaction is used as the surrogate of ES success, affording some basis for
inter-study comparison.
To reduce possible confounding results that may occur due to
interorganizational differences, a case study approach to data collection in a single company has been used. The company is E.I. DuPont de Nemours & Company, Inc. which since 1986 has implemented over 1200 ES within the organization. The results underline the importance of seven of the nine hypothesized determinants of ES success.
One of the most important factors is
increasing system usage by establishing end-user training programs which desensitize the potential user community to ES technology and demonstrate its potential as a business tool and as a source of job improvement. Another important factor is the selection of an appropriate shell which matches the business problem, as well as user and developer requirements. Last, the results show the need for user involvement in the ES development process.

KEYWORDS: Expert Systems, User Satisfaction, Implementation Success, Success Factors
Understanding The Factors Important To Expert Systems Success

INTRODUCTION

For the last two decades, information technologies have played an increasingly significant role in organizations and have become an indispensable tool to perform an organization's daily tasks. In order to effectively assist end-users at different organization levels in solving a wide range of problems, a variety of computer-based systems have emerged: Transaction Processing Systems, Decision Support Systems (DSS), Executive Information Systems (EIS), Expert Systems (ES), and others. ES represent a remarkable technology which has motivated reports of great success, as well as some painful failures. An ES is a computer system which mimics the behavior of human experts by encapsulating their expertise in solving problems in a particular domain. Due to its nature, ES has demonstrated potential for improving the productivity of organizations by supporting the reengineering of business processes and supporting end-user tasks. The rewards from successful ES have been considerable [1] [2]. For example, XSEL and XCON have saved approximately $40 million each year for Digital Equipment Corporation [3]. Unfortunately, constructing an ES is widely known to be a difficult and risky task. Many have failed to be operationalized and/or were not accepted by target end-users [4] [5] [6]. The mixed results call for a better understanding of the major factors leading to ES success or failure. Much of the research on Information Systems (IS) implementation has focused on identifying the factors which appear to be conducive to either success or failure of computer-based IS [7] [8] [9]. However, the above factors have been studied in the context of DSS, EIS, or various computer-based IS other than ES. Due to the unique nature of ES [10] [11] [12] [13], previous results obtained exclusively from studies of DSS or other IS cannot be directly applied to ES, although they may be related. Coleman [14] pointed out that one of the areas to be carefully addressed for ES to reach its full potential deals with business and managerial issues. Despite their importance, little effort has been made to identify the critical success factors. Although ES technology was introduced over two decades ago, the main streams of research on ES have remained focused on technical aspects. A few studies have been conducted to address ES managerial aspects
[15] [16] [17] [18] [19]. Mumford and MacDonald [20] discussed managerial issues to which ES developers should pay more attention, based on the experience of building two of the most successful ES, XCON and XSEL. However, prior studies on ES managerial issues have been based predominantly on the opinion and personal experience of individuals and have not been empirically tested [21] [5] [20] [18] [17]. A few empirical studies have been conducted to identify the factors influencing ES implementation success [22] [23] [24] [25]. Although these studies have provided a better understanding of some important factors,
additional study is needed to systematically
synthesize personal opinion and previous findings, formulate, and empirically test the many factors affecting ES implementation success.
The purpose of this study is to empirically test a
larger collection of determinants for ES implementation success. Due to the wide recognition of user satisfaction as a useful surrogate measure of system success, it is used as such in this study. The nine major factors related to ES success which have been hypothesized in the literature are: problem importance; problem difficulty; developer(s) skill; end-user(s) characteristics; ES impact on end-users' jobs; characteristics of ES building tools; user involvement; management support; and system usage. The next section contains the definitions of and the motivation for the major variables in this study and a set of hypotheses on the relationships between ES success (the dependent variable) and the nine major determinants studied here (the independent variables).
THEORETICAL FRAMEWORK

ES Success. In the context of ES, measures of success are poorly developed. The success of most expert systems has been measured largely by their cost savings and/or benefits [3] [26]. The ES cost savings/benefits have been measured by approximate estimates of the monetary gains made by the ES, frequently overlooking the intangible benefits. Prior research has employed various measures of success for systems other than ES, including user satisfaction [27] [28] [29] [30] [31] [32] [33], level of system usage [34] [35], perceived benefits of systems [36] [37] [38] [39], improved decision quality and performance [40] [41] [42] [43], and business profitability [44] [45] [46]. Among these measures of system success, user satisfaction has been viewed as the most useful surrogate [47]
[29]. Gatian [48] tested the validity of using user satisfaction as a surrogate measure of system effectiveness and confirmed its construct validity. For these reasons, we chose it as the dependent variable for this study. Problem Importance Prior studies have stressed that ES should address a needed and useful task so that the ES solution has a high payoff [49] [16] [50] [20] [51]. A useful task must be non-trivial and important to the organization. The successful ES that were widely reported were designed to solve problems core to the business [15]. For example, Authorizer's Assistant at American Express performs a function critical to the firm by assisting the credit managers in approving or denying credit to customers in a timely manner. XCON deals with a key problem for DEC -- configuring a computer system according to customer request [20]. These ES perform functions which are essential for their host organizations to obtain competitive advantages. ES of the greatest interest to top management are those that are important to a firm and tie in with its strategic objectives. Considering the importance of top-management support for IS success, choosing an application of interest to upper management is also indirectly related to ES success, since it is likely to motivate long-term management support for ES technology in general. For all these reasons, we test the following hypothesis: H1: Problem importance is directly related to ES success. Problem Difficulty Many studies have emphasized the importance of selecting ES application domains with certain characteristics [15] [52] [5] [53] [50] [51] [54] [55] [25]. Some authors expressed opinions that problem simplicity (rather than difficulty) will lead to ES success [56]. We believe the opposite may be true. Waterman [55] described several attributes of appropriate ES problems: not requiring common sense, high payoff, not too difficult, symbolic structure, manageable size, and stability. Smith [56] emphasized simplicity, ease of understanding, and manageable size. Other studies listed stability of task knowledge [52] [50] and a narrow/well-defined focus [49] [25]. As
alluded to earlier, ES are quite different from MIS and DSS. In the case of MIS and DSS, the end-users are the "domain experts;" thus, developers are required to have considerable interaction with end-users to define the nature, functions and features of the system. In the case of ES, the more advanced knowledge of domain experts is used to assist the system end-users in solving problems.
Assuming the experts can properly address the problem, problems perceived by
end-users as being relatively more difficult represent a correspondingly greater opportunity for the ES to be of service to the end-users. Since, in this study, user satisfaction with the ES is the measure of success, we test a direct relationship between problem difficulty and ES success: H2: Problem difficulty is directly related to ES success. Developer Skills The importance of skillful ES developers has been emphasized by several authors [57] [58] [59] [60] [61] [62] [54] [25]. These prior studies have recognized knowledge engineers as the critical members of ES development teams and emphasized the need for qualified knowledge engineers for successful development.
Due to the special nature of ES and its development process, developer skills take on particular importance [61] [62]. Unlike most other system types, the construction of an ES requires developers to elicit the decision rules employed by domain experts. In order to elicit the decision rules, developers must ask relevant questions and quickly comprehend the decision procedures reasoned through by domain experts while problem solving. The knowledge elicitation procedure--the lengthy process of interviews--is widely known to be a bottleneck in ES development. It is obviously desirable for a developer to possess excellent communication skills in order to alleviate the difficult knowledge elicitation process. A developer with poor communication skills may not be able to properly perform the critical knowledge acquisition task, causing project implementation failure [18]. Strong knowledge of various functional areas in an organization improves developers' communication with end-users, as well as domain experts, and also helps save everyone's time and effort. Once knowledge is elicited from domain experts, it is represented and stored in a knowledge base using a programming
language or an ES shell. For this purpose, a developer should be familiar with various knowledge representation paradigms and ES building tools, which are important requirements for technical competence. On the basis of the above discussion, we test the following hypothesis: H3: Developer(s) characteristics are directly related to ES success. End-User(s) Characteristics Prior studies have stressed the importance of end-user characteristics to ES success [16] [63] [51] [56]. The dominant end-user characteristics affecting ES success include user attitude, user expectations, and user knowledge of computer and ES technology [56], user confidence with the system [25], and user commitment to learn how to use the system [16] [63] [51]. User attitude has been considered an important factor in ES success since end-users with a negative attitude toward an ES will not utilize the system, completely wasting development costs. Users often have fears about the ES affecting their job security; thus, they develop negative attitudes and challenge the system implementation [22] [64]. The problem of negative user attitude and resistance is more apparent with ES since they may significantly change the nature and requirements of a job and replace human tasks with artificial systems, e.g., the effect of XCON [26].
Another end-user
characteristic mentioned in previous studies is the user's knowledge of AI, computer, and ES techniques. Unlike domain experts, who are expected to know a great deal about ES techniques, end-users do not need to have much knowledge about AI and ES. However, they should be committed to learning how to use the system and generating suggestions for continuous improvement [16]. End-users should also be trained to interpret the results of an ES and to incorporate them into their job performance [18]. The above discussion strongly indicates the importance of end-users to ES success; thus, we test the following hypothesis: H4: End-user characteristics are directly related to ES success. ES Desirable Impact on End-Users' Jobs ES have significant impacts on end-users' jobs by providing high-level expertise in performing their tasks. ES have often altered the nature of end-users' tasks, significantly changing
tasks and rearranging their responsibilities. Due to the possible changes in end-users' jobs caused by ES, knowledge engineers have recognized the ES impact on end-users' jobs as an important factor in successful ES implementation [22] [23]. Related to the impact of ES, end-users' job stress and loss of control have been observed [65] [66]. The fear of job loss can cause end-users to resist ES implementation and make ES success extremely difficult. Although ES can have a negative impact on end-users' jobs, the positive impact has been far more significant. ES have released end-users from doing repetitive, routine tasks to do more creative work, increasing their job satisfaction [22]. ES have also enabled end-users to accomplish a larger number of complicated tasks within a shorter period of time [16]. The increase in end-users' productivity can cultivate end-users' positive attitude toward the ES, leading to its success. For all these reasons, this study included the ES desirable impact on end-user(s) jobs as one of the major independent variables and the following hypothesis is tested: H5: ES desirable impact on end-user(s) jobs is directly related to ES success. Shell Characteristics An ES can be built with the aid of various building tools. Prior studies have stressed the importance of building tools to ES success [67] [68] [69] [70] [54] [24] [25].
ES shells are now the
most commonly used ES building tool. Furthermore, the shell used to develop an ES has been found to determine its quality [15]. Employing a shell appropriate to the business task at hand is vitally important to ES success. The desirable features of ES shells will vary significantly depending on the tasks to be performed by the ES. For many applications, shells must enable the ES to be easily integrated with existing databases and computer-based systems [71] [63]. However, many ES are capable of only a limited interface with existing systems due to the ES shell used. Similarly, a shell providing friendly interface facilities enables ES developers to build a user-friendly interface. The shell execution time is also very important to success since it determines the response time of the ES [72]. Based on the above discussion, the following hypothesis is tested: H6: Shell characteristics are directly related to ES success.
User Involvement While, for ES development, domain experts are often the only source of knowledge and inference
about the problem, high levels of user involvement are considered important to
implementation success [27] [56] [25]. Because ES applications deal primarily with ill-defined business problems, ES development is thought to require a higher level of user involvement during the verification and validation phases; Turban [11] likewise claimed that user involvement becomes more important in these phases. Overall, user involvement has been deemed very important for ES success. Mumford & MacDonald [20] proposed user involvement as a major contributor to the success of XSEL.
Hayes-Roth and Jacobstein [16] also proposed user
involvement as a critical success factor for ES implementation. Medsker and Liebowitz [50] warned that insufficient user involvement is likely to result in ES failure and suggested the use of project leaders who have good track records with the user community. Users who initiated the ES project and were involved in establishing its goals/objectives are more likely to be satisfied with the system. Keyes [5] argued that, if the end-users were excluded up front, they would exclude themselves at the end and not use the ES. Based on this evidence, we test the hypothesis: H7: User involvement is directly related to ES success. Management Support Management commitment to ES development, utilization, and maintenance has been recognized as a critical ES success factor by many authors [16] [73] [74] [63] [59] [5] [18] [6] [56] [54].
Keyes [5] reported that lack of management support was a critical barrier
to ES success, and Barsanti [15] said that a key predictor of ES success in an organization is the existence of top management support. The surveys conducted by Byrd [22] and Tyran and George [24] also revealed management support as an important factor in ES success. There are several reasons why management support may be important to ES success. First, management support is essential to obtain the personnel and monetary resources necessary for development. Without such support, system development will not be funded, resulting in system failure. Second, the
adoption of a new technology by an organization always results in some change in the manner in which decisions are made, business tasks are performed, and power is allocated. Changes in a work environment frequently increase end-users' fears about their jobs and, in turn, may generate resistance against a new ES [6]. The psychological impact and the organizational changes brought on by ES technology can be far more marked than those created by any other type of system. In such cases, management support is crucial to mitigate end-users' negative attitudes toward the ES and to overcome user resistance. Third, a desirable application area for ES technology is where expertise is scarce or expensive, or where the experts are overworked [64]. In such cases, the direct supervisors of domain experts may find it difficult to share their time for ES development. However, higher-level management may take a longer view and be more amenable to investing substantial amounts of time over the period necessary for effective ES development [18]. Based on the above discussion, the following hypothesis is tested: H8: Management support is directly related to ES success. System Usage The use of a system has frequently been used to measure its success [36] [34] [35]. Prior studies indicated that system use is an important variable affecting user satisfaction [75]; thus, it is included in this study as a determinant of ES success and a final hypothesis is tested: H9: System usage is directly related to ES success.
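Taken together, the nine hypotheses amount to the expectation that each determinant is positively related to ES success. Anticipating the multivariate analysis reported later, they can be summarized compactly as a linear specification; this is only a sketch of the hypothesized framework, not a model estimated in this exact form in the study:

\[ \text{Satisfaction}_i = \beta_0 + \sum_{j=1}^{9} \beta_j X_{ij} + \varepsilon_i, \qquad H_j: \beta_j > 0, \quad j = 1, \dots, 9, \]

where, for respondent i, X_{i1}, ..., X_{i9} denote problem importance, problem difficulty, developer skills, end-user characteristics, ES impact on end-users' jobs, shell characteristics, user involvement, management support, and system usage.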
STUDY METHODOLOGY

This section discusses the study setting, the sampling procedure, and the measurement of variables used in this study. The Study Setting As discussed in the introduction to this paper, the empirical evidence regarding ES success factors specifically is quite limited. For this reason, instead of a multi-company survey, a case study approach to data collection has been used. This approach is expected to reduce the possibility of
confounding results due to inter-organization differences such as ES development sophistication, budget, methodologies used, development resources available, policies, conflicts between user departments and the company's ES development group, etc. The company in this study is E.I. DuPont de Nemours & Company, Inc. (DuPont). In 1986, DuPont made the decision to pursue the use of ES to help make better decisions. At that time an Artificial Intelligence Task Force was created and charged with the broad implementation of ES technology throughout the company. This AI Task Force is still functioning today with the same vision that was identified in 1986: "ALL critical decisions will be made with our cumulative best knowledge and relevant information at the point of decision making."
Since 1986, DuPont has implemented over 1200 expert systems. These systems are used by all functions and almost every location in the company is using at least one ES. DuPont's approach to the implementation of ES was very different from the textbook approach. The plan was to provide low-cost tools (shells) that were easy to use for system development. The domain experts were trained to use the tools to develop their own systems without the need for computer group support. Corporate licenses were set up and training courses were developed. About 700 employees have been trained and approximately 80% of ES have been developed by the domain experts themselves. For most ES development projects at DuPont, the project leaders are usually also the developers. ES development teams are usually very small. The most common team size is one (70-80% of projects). For the rest of the applications, the development team is typically two. Only a small number of ES were developed by teams of more than two people. Also, most ES developed are written by the domain experts to make their expertise more widely available to users who are less knowledgeable in the particular problem area.
Sampling Procedure In an attempt to ensure the acquisition of unbiased data, the questionnaire of this study consists of two parts. One part is designed to collect data from the project leader/developer who is in charge of the development as well as the maintenance of the ES. The other part is designed to gather data directly from the end-users of a particular ES.
The questionnaire for project
leaders/developers consists of questions regarding ES shell characteristics, managerial support for ES development, and ES developers' skills. Due to their intimate involvement in the implementation of specific ES, they should be considered the most appropriate sources of information regarding those areas. On the other hand, the questionnaire for end-users is composed of questions regarding problem importance and difficulty, end-user characteristics, ES impact on end-users' jobs, user satisfaction, user involvement, and system usage. Of the many ES at DuPont, the project leaders/developers and primary end-users of 150 operational ES were invited to participate in this study. One or more frequent end-users of each particular ES were chosen by one of the authors. This should be viewed as a convenience sample since many ES were excluded because their developer was not known or no longer worked for the company, the ES operation had been discontinued, or the project leader/developer could not be reached or was too busy at the time to participate in this study. In order to obtain the end-users' own opinion of the ES without undue influence from other parties, the 150 end-users were asked to return their part of the questionnaire directly to the researchers. Of the 150 questionnaires which were mailed out to all the target respondents, 114 matched sets (project leader/developer and end-user) were returned in time to be processed for this report (a response rate of 76 percent). These 114 pairs of project leaders/developers and end-users who participated in this study have diverse backgrounds. Table 1 shows demographic information for project leaders/developers and end-users. The 114 related ES applications reported on fall into the following areas: service (10), manufacturing (50), finance (5), management (9), personnel (3), marketing (3), research (9), information systems (22), and others (3). The 114 ES deal with the following problem categories:
control procedure (26), planning (8), education (4), configuration (4), selection (24), diagnostic (25), forecasting (4), and others (19). Place Table 1 about Here Measurement of Variables The major variables in this study contain one or more sub-items extracted from the literature. These sub-items were used to construct measuring scales making up the questionnaire which was pretested for content and readability through personal interviews with several practitioners and academics. The questionnaire asked respondents to indicate their agreement or disagreement with each statement using a seven-point rating scale from (7) completely agree to (1) completely disagree.
For example, one question on user satisfaction states: "On the average, the ES provides extremely reliable output." User Satisfaction. In the context of ES, pretested measures for this construct have not been reported in the literature. Therefore, this measure was adapted from pretested and validated instruments used to measure user satisfaction with IS other than ES. The measurement of this variable is a 9-item instrument adapted from the questionnaires used by Bailey & Pearson [76], Raymond [33], and Lucas [77]. Items deemed not applicable to measuring user satisfaction with the output quality and user-friendliness of an ES were excluded. The 9 items include output value, timeliness, reliability, response/turnaround time, accuracy, completeness, ease to use, ease to learn, and the usefulness of the documentation. For each subject, the average response to the nine items is the measure of user satisfaction. The internal consistency reliability coefficient (Cronbach's alpha) of the nine-item scale was .81. Problem Importance.
A single item is used to measure the variable of problem
importance. Respondents were asked to rate the importance of the task which the ES deals with on a seven-point rating scale. Problem Difficulty. The following eight variables were operationalized to measure ES task difficulty: problem size, complexity, interdependence among variables, the level of expertise
required, uncertainty, instability, labor intensity, and problem unstructuredness. Problem size refers to the number of variables and the depth/width of the problem addressed by the ES. Problem complexity refers to the number and variety of ways in which the variables in the problem relate to one another, and how well these relationships are understood by ES developers and end-users. The interdependence among variables is defined as the degree to which a variable is correlated with another. The level of expertise required refers to the level of education and experience domain experts are required to have to effectively solve problems in a specific domain.
Problem
uncertainty is defined as the degree to which the problem is unknown. Problem instability refers to the frequency with which domain knowledge changes over time. The degree of labor intensity deals with the amount of people's time and effort necessary to solve the problem. Degree of problem structure refers to the extent to which the problem is programmable. The internal consistency (Cronbach's alpha) of the eight-item scale was .79. For each subject, the average response for the eight items is the measure of problem difficulty. Developer Skills. Debenham [78] listed four essential skills of knowledge engineers: the ability to extract accurate and complete knowledge from human experts; the ability to represent and implement knowledge; the ability to design an ES for maintenance; and the ability to design an ES that exploits existing investments in information processing. A comprehensive list of knowledge and skills necessary for ES developers has been developed by Payne and Awad [61]. It includes knowledge of computer technology, knowledge of general fact-finding techniques, knowledge of prototype methods, knowledge of human factors, knowledge of functional areas, communication skills, project planning skills, human relations skills, organizational skills, and personal attributes. Behavioral and interpersonal skills of knowledge engineers have been emphasized by Mykytyn et al. [60]. Although the range of developers' skills and abilities varies slightly between studies, they can be classified into six categories, according to Nunamaker, Couger, & Davis [79]: people, models, systems, computers, organizations, and society. People skills refer to communication and interpersonal skills. Models skills are defined as the ability to formulate and solve models of the
operations research type. Systems skills refer to the ability to view and define a situation as a system--specifying components, scopes, and functions. Computer skills refer to knowledge of hardware/software, programming languages, and ES techniques. Organizational skills are defined as knowledge of the functional areas of an organization and organizational conditions. Last, society skills generally refer to the ability to articulate and defend a personal position on important issues of information technology impact on society. More specifically, they refer to the ability to perceive and describe the impact of an ES on a particular part of society. This study used these six items to examine the relationship between developer(s) skills and ES success.
The average response to the six items represents the measure of developer(s) skills.
The internal consistency reliability coefficient (Cronbach's alpha) of the six-item scale was .76. End-User(s) Characteristics. The dominant end-user characteristics affecting ES success include user attitude, user expectations, and user knowledge of computer and ES technology [56], user confidence with the system [25], and user commitment to learn how to use the system [16] [63] [51]. Another end-user characteristic mentioned in previous studies is the user's knowledge of computer and ES techniques [18].
Respondents were asked to indicate their agreement or
disagreement with five items used to assess end-user(s) characteristics: education, experience, positive attitudes toward the ES, expectations on the ES, and computer and AI knowledge. The internal consistency reliability coefficient (Cronbach's alpha) for this scale was .64. The average response to these five items is the measure of end-user(s) characteristics. ES desirable impact on end-user(s) jobs: Byrd [22] developed a measure of ES impact on end-user(s) jobs from two factors: fear of loss of control and fear of job loss. Sviokla [26] analyzed the impact of XCON on end-user(s) jobs by examining the changes on input and output, the increase in the task accuracy and amount of work completed, the shifts in end-user(s) role and responsibilities, and job satisfaction. Based on Byrd [22] and Sviokla [26], this study employed eleven items to measure the desirable impact of an ES on end-user(s) jobs: increase in the importance of the user's job, decrease in the amount of work required to do the job, decrease in the
accuracy demanded on the job, increase in skills needed to do the job, increase in job appeal, increase in feedback on the job performance, increase in freedom to do the job, increase in opportunity for advancement, increase in job security, increase in relationship with fellow employees, and increase in job satisfaction. The average score for the eleven items is computed to assess the ES impact on end-user(s) jobs. The internal consistency reliability coefficient (Cronbach's alpha) for this scale was .73. Shell Characteristics: The desirable features of ES shells listed in the literature include flexibility of knowledge representation,
flexibility of the inference engine, the ability to monitor
session activities, knowledge-base editing facility,
knowledge acquisition facility, debugging aids,
uncertainty management, end-user interface, explanation facility,
integration with external
databases and other systems, integration with other programming languages, ease to use, ease to learn, portability, response time, real-time support, documentation, and vendor support [80] [67] [68] [72] [69] [70]. Obviously, some features are applicable only to specific expert systems; e.g., the capability of real-time support applies exclusively to real-time ES applications. Based on the above studies, the general features deemed to be applicable to a wide range of ES applications were selected for this study. These include flexibility of knowledge representation and of the inference engine; the quality of the developer interface, end-user interface, and system interface; portability among different platforms; system ease to use and to learn; availability of training and vendor support; system response time; and the shell's appropriateness to the problem.
The average
score for the ten items was computed to measure ES shell quality. The internal consistency coefficient for the scale was .86. User Involvement: The measurement of end-user involvement was also adapted from validated measures used in the context of information systems other than ES. The instrument has been validated through several studies of user involvement [81] [82] [83] [84] [85]. The measure was slightly modified to include nine items measuring user involvement in ES implementation: initiating the project, establishing the objective of the project, determining user requirements, assessing ways
to meet user requirements, identifying the sources of data/information, outlining the information flow, developing the input forms/screens, developing the output forms/screens, and determining the system availability/access. The average score for the nine items is the measure of user involvement. The internal consistency coefficient (Cronbach's alpha) for this scale was .95. Management Support: Based on the work by Sloane [6], Prerau [18], and Byrd [22] which was discussed earlier, management support was measured by four items: management understanding of the ES potential benefits, encouragement by management to use ES in their job, providing the necessary help and resources for effective use of ES, and management interest in having employees satisfied with ES technology. The items were averaged to compute a measure of management support.
The internal consistency reliability coefficient for the scale was .89.
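For reference, each multi-item construct described above is scored as the unweighted mean of its items, and the reliabilities reported in this section are Cronbach's alpha. A standard formulation (given here for completeness; it is not reproduced from the original instruments) is:

\[ \bar{x}_i = \frac{1}{k} \sum_{j=1}^{k} x_{ij}, \qquad \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{j=1}^{k} \sigma_j^2}{\sigma_T^2} \right), \]

where x_{ij} is respondent i's rating on item j, k is the number of items in the scale, sigma_j^2 is the variance of item j, and sigma_T^2 is the variance of the k-item total score.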
System Usage: The variable of system usage is measured by a single item. Participants were asked to indicate their agreement or disagreement with the statement "the ES is used all the time" on a seven point rating scale from (7) completely agree to (1) completely disagree. Data Analysis Procedures Three phases of data analysis were performed: (1) To test the relationships hypothesized in this study the correlation coefficients among the major study variables were computed.
(2)
Mann-Whitney (M-W) and Kruskal-Wallis (K-W) tests were performed to test whether there were significant differences in the major study variables based on three potential moderating variables: ES problem category, ES functional area, and ES shell used. For ES problem categories, the 114 responses were classified into four groups: control procedure, selection, diagnostics, and others. The K-W one-way analysis of variance test was performed on the major study variables to control for these potential moderators. For ES functional areas, the responses from the 114 subjects were divided into two groups, non-manufacturing and manufacturing, and Mann-Whitney tests were performed for the same reason. For ES shells used, the responses were divided into two groups: non-RS and RS shells.
(3) Multivariate Regression Analysis was performed to assess the
contribution of the major study variables as a group to the prediction of ES success, as measured by
user satisfaction. The contribution of each independent variable in explaining the variance in the dependent variable was determined by the increment in R squared which occurred when a given variable entered the regression equation.

RESULTS

This section presents the results dealing with hypotheses testing and other interesting intercorrelations (Table 2), results addressing possible moderating variables affecting the relationships between the major variables (Tables 3, 4 and 5), and the results from multivariate regression which explore the power of the independent variables in explaining the variance in the dependent variable as a combined set (Table 6). Place Table 2 about Here Results Regarding Hypotheses Testing The means, standard deviations, and the matrix of intercorrelations among the major study variables are shown in Table 2.
The correlations reveal that four variables are significantly
correlated with user satisfaction at the 0.01 level or better: problem importance, developer skills, shell characteristics, and system usage. Therefore, the following hypotheses are accepted: H1: Problem importance is directly related to ES success. H3: Developer(s) characteristics are directly related to ES success. H6: Shell characteristics are directly related to ES success. H9: System usage is directly related to ES success. User satisfaction is also positively correlated with three other variables at the 0.05 significance level or better: end-user characteristics, ES impact on end-users' jobs, and user involvement. Thus, the following hypotheses are accepted at this level: H4: End-user characteristics are directly related to ES success. H5: ES desirable impact on end-user(s) jobs is directly related to ES success. H7: User involvement is directly related to ES success. Because of the extremely small correlation coefficients between user satisfaction and
problem difficulty and managerial support, the following hypotheses are rejected: H2: Problem difficulty is directly related to ES success. H8: Management support is directly related to ES success. Other Interesting Results Besides providing evidence to support the seven hypotheses formally proposed in this study, Table 2 also reveals other interesting results. The more difficult the problem an ES tackles, the higher the user involvement reported (r = .45), confirming prior study results [27]. However, higher user involvement did not lead to higher system use (r = .09). Developer skill is also correlated with end-user characteristics, ES impact on end-users' jobs, ES building tool characteristics, managerial support, and system usage. Additionally, the results revealed that system usage is significantly correlated with several other independent variables: problem importance, developer skills, end-user characteristics, ES impact on end-users' jobs, ES shell characteristics, and managerial support.
Table 4 revealed
no significant difference between non-manufacturing and manufacturing ES along the major study variables except for management support. The management support of non-manufacturing ES is
higher than the support given to manufacturing ES, at the 0.05 significance level or better. In order to assess possible differences along the major study variables due to ES development with different shells, the responses were divided into two groups: non-RS and RS shells. Each group has 57 subjects. Table 5 presents the results of the M-W test, showing a significant difference between non-RS and RS shells along management support. ES developed with RS shells apparently receive significantly higher management support than ES developed with non-RS shells. Results From Multivariate Regression Analysis The inter-correlation analysis discussed earlier provided evidence about the relationships of each independent variable with the dependent variable. However, such analysis does not address possible interrelations among the independent variables as they affect the dependent variable in combination. In order to test that, an integrated model for user satisfaction was tested via multivariate analysis. Place Table 6 About Here Table 6 shows the results from using the stepwise method and first entering the independent variables making the largest contribution to the R squared.
This integrated model explains
approximately 41 percent of the variance in the dependent variable. The condition index method [86] was used to test for possible multicollinearity among the independent variables in this integrated model. If the condition number is less than 100, multicollinearity is not considered a problem. The results in Table 6 show that the highest condition number is 42.123, well below the safe limit. Table 6 also presents three statistically significant determinants of user satisfaction at the 0.05 level or better: system usage, explaining 23.84 percent of the variance in user satisfaction; management support, explaining 6.46 percent; and ES impact on end-users' jobs, explaining 5.87 percent. The contributions of the other variables in this multivariate model, given the order in which they have entered the regression equation, are not statistically significant. Although managerial support by itself has no significant relationship to user satisfaction, in
conjunction with the two variables entered into the regression equation earlier, it explains 6.46 percent of the variance in user satisfaction. Further, managerial support is highly correlated with developers' skills, end-user characteristics, ES impact on end-user(s) jobs, and ES building tool characteristics.
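The correlation, nonparametric-test, and stepwise-regression procedures described above are standard and can be reproduced with current statistical software. The following sketch is purely illustrative and is not the analysis actually run for this study: the data frame, column names, synthetic data, and fixed variable-entry order are assumptions made only for the example.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical column names for the averaged construct scores.
predictors = ["problem_importance", "problem_difficulty", "developer_skills",
              "user_characteristics", "job_impact", "shell_characteristics",
              "user_involvement", "management_support", "system_usage"]

# Synthetic stand-in for the 114 matched responses (for illustration only).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(4.5, 1.0, size=(114, len(predictors) + 1)),
                  columns=predictors + ["user_satisfaction"])
df["functional_area"] = rng.choice(["manufacturing", "non-manufacturing"], 114)
df["problem_category"] = rng.choice(["control", "selection", "diagnostics", "other"], 114)

# Phase 1: zero-order correlations of each determinant with user satisfaction (H1-H9).
correlations = df[predictors + ["user_satisfaction"]].corr()["user_satisfaction"]

# Phase 2: nonparametric tests for the potential moderating variables.
mfg = df.loc[df["functional_area"] == "manufacturing", "management_support"]
non_mfg = df.loc[df["functional_area"] == "non-manufacturing", "management_support"]
mw_stat, mw_p = stats.mannwhitneyu(mfg, non_mfg)              # Mann-Whitney test
groups = [g["shell_characteristics"].values
          for _, g in df.groupby("problem_category")]
kw_stat, kw_p = stats.kruskal(*groups)                        # Kruskal-Wallis test

# Phase 3: regression of user satisfaction on the nine determinants, with the
# increment in R squared per entered variable and a condition-number check.
X = sm.add_constant(df[predictors])
full_model = sm.OLS(df["user_satisfaction"], X).fit()
r2_increments, included, prev_r2 = {}, [], 0.0
for var in predictors:                                        # entry order is illustrative
    included.append(var)
    r2 = sm.OLS(df["user_satisfaction"],
                sm.add_constant(df[included])).fit().rsquared
    r2_increments[var] = r2 - prev_r2
    prev_r2 = r2
# 2-norm condition number of the design matrix; the paper's rule of thumb is < 100.
condition_number = np.linalg.cond(X.values)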
CONCLUSIONS AND MANAGERIAL RECOMMENDATIONS

The results clearly corroborated the importance of seven of the nine determinants of ES success. The first exception forces the conclusion that problem difficulty (or complexity) does not necessarily lead to ES success as we proposed. The lack of an inverse relationship suggests that neither does problem "simplicity", as suggested by Waterman [55], Smith [56], Casey [49], Beckman [52], Medsker & Liebowitz [50], and Will et al. [25]. The second exception forces the conclusion that the widely held belief in management support as an important determinant of ES success is not always true. A plausible explanation is that in companies where the benefits of management support (i.e., resource availability, training availability, political support for user involvement and ES implementation) exist, such support is common to all ES projects, thus becoming undetectable as a determinant of success. Several insights can be gleaned from the study's results which have important implications for ES technology management in organizations.
The results indicate that ES implementation success,
as measured by user satisfaction, is related to several major factors: business problem importance, developer(s) characteristics (skill), ES shell characteristics, system usage, end-user characteristics, desirable impact on end-users' jobs, and user involvement in the ES development process. While some of these factors cannot be directly controlled in the short run, ES development managers can be more aware of potential ES development difficulties, attempt to pre-empt the likely problems and establish plans to facilitate the development of more successful ES applications. Considered individually, based on their correlation coefficients, the most important major variables affecting user satisfaction with ES are system usage, shell characteristics, developer characteristics, and problem importance.
According to the integrated model (multivariate
regression, stepwise method) developed in this study, based on the percentage of variance (multiple R squared) in the dependent variable explained by the particular independent variable, the order of importance is somewhat different: system usage, managerial support, and ES impact on end-users' jobs.
In general, it behooves managers championing the introduction of ES technology into their organizations to attempt to increase system usage by establishing end-user training programs to desensitize the potential user community to ES technology and demonstrate its potential as a business tool and as a source of job improvement. Further, usage may be increased by developing ES which help with important problems, using developers with the characteristics addressed in this study, cultivating the user community along the characteristics addressed in this study, ensuring that the ES will have a favorable impact on end-users' jobs, using quality shells as defined earlier, and cultivating management support. The selection of an appropriate shell which matches the business problem, as well as user and developer requirements, is an important factor for ES success. Managers should restrict ES development groups to acquiring only shells with the desirable characteristics described earlier. As the collection of ES development techniques and commercially available shells grows, ES developers and project managers must ensure a careful match of development techniques and tools to the business problem at hand [87] [88]. Yoon and Guimaraes [87] have provided guidelines for matching specific problem characteristics with shells supporting four main ES development techniques (knowledge-based, neural network, model-based, or case-based). Unfortunately, most organizations today are woefully unprepared in this area, with few providing the main alternatives. The results from this study regarding shell quality may be applicable only to the more widely used knowledge-based ES which comprise our sample. Nevertheless, developers and project managers should carefully select shells along the important features studied here, i.e., flexibility in knowledge representation and construction of the inference engine, interface with other systems, ease of learning and of use, training and vendor support, response time, and appropriateness to the problem. As the cost of ES shells continues to decrease, managers are quickly running out of excuses for not making appropriate shells available to developers and not encouraging their use (i.e., providing developer training) for the development of new ES applications. This is strongly recommended for applications dealing with problems of major importance to the organization. The need for training developers and end-users is also clear from the results. Developers
must be trained to develop people skills, formulate models of business problems, and be able to use a systems approach to problems. While one may think that because of the presence of the domain expert, the role of end-users in system development is relatively less critical in the case of ES as compared to MIS or DSS, the results indicate that end-user expectations and attitude toward the ES are important factors for successful ES implementation. As suggested by Prerau [18], to improve user expectations and attitudes, companies should establish short seminars to explain the potential and limitations of ES technology and to show users how to interpret ES conclusions and output, incorporate the system into their jobs, and interact effectively with specific ES. Managing end-user attitudes and expectations regarding a specific system should be an important item for ES project managers to include in meeting agendas. Improvement in this area may call for substantial changes from what is going on in industry today since training for ES developers and end users has been found lacking in most organizations [89]. Several general recommendations [64] for the systematic selection of applications have been made to meet the objective of leveraging ES development resources in the long run. The results from this study provide empirical evidence for two more ES application selection criteria, both related to the ES potential contribution to the organization: problem importance and ES desirable impact on users' jobs. ES project managers should deliberately seek applications which will address business problems deemed important by department or top managers, and should thoroughly understand and manage changes due to ES implementation. Despite the fact that the knowledge for an ES will come from a domain expert who is expected to have more knowledge about the problem than most users, user involvement in the ES development process seems to significantly affect user satisfaction. ES developers should strive to give end-users a chance to feel ownership over the particular ES being developed. There are several things that ES project managers can do to increase user involvement [89]. It is interesting to note that at a time when end-users are independently developing their own systems, relatively few end-users develop ES without knowledge engineers. However, as the user interfaces of more advanced ES shells become commercially available, end-users are more likely to independently
develop ES. Meanwhile, user involvement should be cultivated by ES project managers to benefit from the psychology of ownership and for a host of other reasons: to gradually introduce the ES application under development to the end-users' world, to de-sensitize end-users fearful of ES technology and/or change, and to collect feedback on how they feel about the system in general and its features. Project managers should ensure an appropriate level of user participation by discussing the need with department and top managers, and by following many of the recommendations to ensure management support previously proposed in the context of more traditional systems such as DSS and transaction processing systems.
STUDY LIMITATIONS AND RESEARCH OPPORTUNITIES

This study represents a field test surveying the ES applications within one organization instead of many. This approach reduces the possibility of confounding results due to a wide variety of inter-organization differences in ES development and company environment such as expertise available, ES technology implementation phase, budgets, methodologies used, development resources available, policies, and conflicts between user departments and the company's ES development group. On the other hand, this approach raises the question of generalizability. The results from this study provide researchers with more focus for defining hypotheses which can be better tested later with a multi-company survey. The general application of this study's results can also be questioned in terms of industry and the sophistication of the ES development environment. We have no reason to suspect that the results and conclusions are applicable only to manufacturing organizations; however, that needs to be formally tested. On the other hand, one should bear in mind that the organization studied indeed has a sophisticated ES development environment, with hundreds of ES developers and users with many years of experience using ES technology. It may be vital to take that into consideration while attempting to apply some of the results to other organizations. Future studies need to control for industry type, ES application area, development tools used, and the sophistication of the ES development environment.
Despite its limitations, this study represents one of the first systematic attempts to identify a comprehensive list of factors which are likely to influence ES success.
Given the growing
investment, the impressive results many organizations are deriving from effectively applying ES technology, and its versatility and future potential, it is imperative that research on the use and management of this technology be expanded quickly.
REFERENCES

[1] Feigenbaum, E., McCorduck, P., & Nii, P. (1988). The Rise of the Expert Company. Alexandria, VA: Time Life.
[2] Shpilberg, D., Graham, L.E., & Schatz, H. (1986). Expert Tax: An Expert System for Corporate Tax Planning. Expert Systems, 3(3), 136-150.
[3] Liebowitz, J. (1990). Expert Systems for Business & Management. Englewood Cliffs, NJ: Yourdon Press.
[4] Coats, P. (1988). Why Expert Systems Fail. Financial Management, Autumn, 77-86.
[5] Keyes, J. (1989b). Why Expert Systems Fail. AI Expert, November, 50-53.
[6] Sloane, S.B. (1991). The Use of Artificial Intelligence by the United States Navy: Case Study of a Failure. AI Magazine, 12(1), 80-92.
[7] DeLone, W.H. & McLean, E.R. (1992). Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3(1), 60-95.
[8] Guimaraes, T., Igbaria, M., & Lu, M. (1992). The Determinants of DSS Success: An Integrated Model. Decision Sciences, 23(2), 409-430.
[9] Liang, P.L. (1986). Critical Success Factors of Decision Support Systems: An Experimental Study. Data Base, Winter, 3-15.
[10] Jih, W.J.K. (1990). Comparing Knowledge-based and Transaction Processing Systems Development. Journal of Systems Management, May, 23-28.
[11] Turban, E. (1990). Decision Support and Expert Systems. New York, NY: MacMillan Publishing Co.
[12] Turban, E. & Watkins, P.R. (1986). Integrating Expert Systems and Decision Support Systems. MIS Quarterly, June, 121-136.
[13] Wiig, K. (1990). Expert Systems: A Manager's Guide. Geneva: International Labor Office.
[14] Coleman, K. (1993). The AI Marketplace in the Year 2000. AI Expert, January, 35-38.
[15] Barsanti, J.B. (1990). Expert Systems: Critical Success Factors for Their Implementation. Information Executive, 3(1), 30-34.
[16] Hayes-Roth, F. & Jacobstein, N. (1994). The State of Knowledge-based Systems. Communications of the ACM, 37(3), 27-39.
[17] O'Neal, Q. (1990). Planning and Managing Successful KBS Applications. Presented at IAKE.
[18] Prerau, D.S. (1990). Developing and Managing Expert Systems. Reading, MA: Addison-Wesley Publishers.
[19] Turban, E. (1992b). Why Expert Systems Succeed and Fail. In Turban, E. & Liebowitz, J. (Eds.), Managing Expert Systems, 2-13.
[20] Mumford, E. & MacDonald, W.B. (1989). XSEL's Progress: The Continuing Journey of an Expert System. Chichester: John Wiley.
[21] Ignizio, J.P. (1991). Introduction to Expert Systems. New York, NY: McGraw-Hill, Inc.
[22] Byrd, T.A. (1992). Implementation and Use of Expert Systems in Organizations: Perceptions of Knowledge Engineers. Journal of Management Information Systems, 8(4), 97-116.
[23] Byrd, T.A. (1993). Expert Systems in Production and Operations Management: Results of a Survey. Interfaces, 23(2), 118-129.
[24] Tyran, C.K. & George, J.F. (1993). The Implementation of Expert Systems: A Survey of Successful Implementation. Database, Winter, 5-15.
[25] Will, R.P., McQuaig, M.K., & Hardway, D.E. (1994). Identifying Long-term Success Issues of Expert Systems. Expert Systems with Applications, 7(2), 272-279.
[26] Sviokla, J. (1990). The Examination of the Impact of Expert Systems on the Firm: The Case of XCON. MIS Quarterly, June, 126-140.
[27] Amoako-Gyampah, K. & White, K.B. (1993). User Involvement and User Satisfaction. Information & Management, 25, 1-10.
[28] Galletta, D.F. & Lederer, A.L. (1989). Some Cautions on the Measurement of User Information Satisfaction. Decision Sciences, 20(3), 419-438.
[29] Ives, B., Olson, M.H., & Baroudi, J.J. (1983). The Measurement of User Information Satisfaction. Communications of the ACM, 26(10), 785-793.
[30] Kendall, K.E., Buffington, J.R., & Kendall, J.E. (1987). The Relationship of Organizational Subcultures to DSS User Satisfaction. Human Systems Management, 7, 31-39.
[31] Mahmood, M.A. & Sniezek, J.A. (1989). Defining Decision Support Systems: An Empirical Assessment of End-User Satisfaction. Information Systems & Operational Research (INFOR), 27(3), 253-271.
[32] Meador, L.C., Guyote, M.J., & Keen, P.G.W. (1984). Setting Priorities for DSS Development. MIS Quarterly, 8(2), 117-129.
[33] Raymond, L. (1985). Organizational Characteristics and MIS Success in the Context of Small Business. MIS Quarterly, March, 37-53.
[34] Fuerst, W.L. & Cheney, P.H. (1982). Factors Affecting the Perceived Utilization of Computer-based Decision Support Systems in the Oil Industry. Decision Sciences, 13, 554-569.
[35] Mykytyn, P.P. (1988). End-User Perceptions of DSS Training and DSS Usage. Journal of Systems Management, 39(6), 32-35.
[36] Davis, F.D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319-339.
[37] Keen, P.G.W. (1981). Value Analysis: Justifying Decision Support Systems. MIS Quarterly, 5(1), 1-16.
[38] Larcker, D.F. & Lessig, V.P. (1980). Perceived Usefulness of Information: A Psychometric Examination. Decision Sciences, 11(1), 121-134.
[39] Money, A., Tromp, D., & Wegner, T. (1988). The Quantification of Decision Support Benefits Within the Context of Value Analysis. MIS Quarterly, 12(2), 223-236.
[40] Aldag, R.J. & Power, D.J. (1986). An Empirical Assessment of Computer-Assisted Decision Analysis. Decision Sciences, 17, 572-588.
[41] Cats-Baril, W.L. & Huber, G.P. (1987). Decision Support Systems for Ill-Structured Problems: An Empirical Study. Decision Sciences, 18, 350-372.
[42] King, W.L. & Rodriguez, J.I. (1981). Participative Design of Strategic Decision Support Systems: An Empirical Assessment. Management Science, 27(6), 717-726.
[43] Kottemann, J.E. & Remus, W.E. (1989). A Study of the Relationship Between Decision Model Naturalness and Performance. MIS Quarterly, 13(2), 171-181.
[44] Benbasat, I. & Dexter, A.S. (1982). Individual Differences in the Use of Decision Support Aids. Journal of Accounting Research, 20, Spring, 1-11.
[45] Eckel, N.L. (1983). The Impact of Probabilistic Information on Decision Behavior and Performance in an Experimental Game. Decision Sciences, 14, Fall, 483-502.
[46] Sharda, R., Barr, S.H., & McDonnell, J.C. (1988). Decision Support System Effectiveness: A Review and an Empirical Test. Management Science, 34(2), 139-159.
[47] Hamilton, S. & Chervany, N.L. (1981). Evaluating Information System Effectiveness. MIS Quarterly, 5(1), 76-88.
[48] Gatian, A.T. (1994). Is User Satisfaction a Valid Measure of System Effectiveness? Information & Management, 26, 119-131.
[49] Casey, J. (1989). Picking the Right Expert System Application. AI Expert, September, 44-47.
[50] Medsker, L. & Liebowitz, J. (1994). Design and Development of Expert Systems and Neural Networks. New York, NY: Macmillan Publishing Co.
[51] Slagle, J.R. & Wick, M.R. (1988). A Method for Evaluating Candidate Expert System Applications. AI Magazine, 44-53.
[52] Beckman, T.J. (1991). Selecting Expert-System Applications. AI Expert, February, 42-48.
[53] Liebowitz, J. (1989). Problem Selection for Expert Systems Development. Structuring Expert Systems, Liebowitz, J. & De Salvo, D.A. (eds.), Englewood Cliffs, NJ: Prentice Hall, 1-24.
[54] Turban, E. (1992a). Expert Systems and Applied Artificial Intelligence. New York, NY: MacMillan Publishing Co.
[55] Waterman, D.A. (1986). A Guide to Expert Systems. Reading, MA: Addison-Wesley Publishing Co.
[56] Smith, D.L. (1988). Implementing Real World Expert Systems. AI Expert, 3(2), 51-57.
[57] Couger, J.D. & McIntyre, S.C. (1987-1988). Motivation Norms of Knowledge Engineers Compared to Those of Software Engineers. Journal of Management Information Systems, 4(3), 82-93.
[58] Fellers, J.W. (1987). Skills and Techniques for Knowledge Acquisition: A Survey, Assessment, and Future Directions. Proceedings of the Eighth International Conference on Information Systems, Pittsburgh, PA, 118-132.
[59] Liebowitz, J. (1993). The Need for Better Educating Prospective Knowledge Engineers on Knowledge Acquisition. Journal of Computer Information Systems, Fall, 37-40.
[60] Mykytyn, P.P., Mykytyn, K., & Raja, M.K. (1994). Knowledge Acquisition Skills and Traits: A Self-assessment of Knowledge Engineers. Information & Management, 26, 95-104.
[61] Payne, S.C. & Awad, E.M. (1990). The Systems Analyst as A Knowledge Engineer: Can the Transition Be Successfully Made? Proceedings of ...., October, 115-169.
[62] Shacklett, M.E. (1990). In Search of the Knowledge Engineer. UNISPHERE, August, 16-17.
[63] Liebowitz, J. (1991). Institutionalizing Expert Systems: A Handbook for Managers, Englewood Cliffs, NJ: Prentice Hall.
[64] Lu, M. & Guimaraes, T. (1988). A Guide to Selecting Expert Systems Applications. Systems Development Management, 32-03-20, December, 1-11. Reprinted in Journal of Information Systems Management, Spring 1989, 8-15. Reprinted in Expert Systems, Summer 1989.
[65] Argote, L. & Goodman, P.S. (1986). The Organizational Implications of Robotics. Managing Technological Innovation, Davis, D.D. (ed.), San Francisco: Jossey-Bass, 127-153.
[66] Argote, L., Goodman, P.S., & Schkade, L. (1983). The Human Side of Robotics: How Workers React to a Robot. Sloan Management Review, 24, 31-41.
[67] Harmon, P., Maus, R., & Morrisey, W. (1988). Expert Systems Tools and Applications. New York, NY: John Wiley & Sons, Inc.
[68] Kim, C. & Yoon, Y. (1992). Selection of A Good Expert System Shell for Instructional Purposes in Business. Information & Management.
[69] Vedder, R.G. (1989). PC-based Expert System Shells: Some Desirable and Less Desirable Characteristics. Expert Systems, 6(1), 28-42.
[70] Vedder, R.G., Fortin, M.G., Lemmermann, S.A., & Johnson, R.N. (1989). Five PC-based Expert Systems for Business Reference: An Evaluation. Information Technology and Libraries, March, 42-54.
[71] Keyes, J. (1989a). Expert Systems and Corporate Database. AI Expert, May, 50-53.
[72] Plant, R.T. & Salinas, J.P. (1994). Expert Systems Shell Benchmarks: The Missing Comparison Factor. Information & Management, 27, 89-101.
[73] Leonard-Barton, D. (1987). The Case for Integrative Innovation: An Expert System at Digital. Sloan Management Review, 29(1), 7-19.
[74] Leonard-Barton, D. & Deschamps, I. (1988). Managerial Influence in the Implementation of New Technology. Management Science, 34(10), 1252-1265.
[75] Baroudi, J.J., Olson, M.H., & Ives, B. (1986). An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction. Communications of the ACM, 29(3), 232-238.
[76] Bailey, J.E. & Pearson, S.W. (1983). Development of a Tool for Measuring and Analyzing Computer User Satisfaction. Management Science, 29(5), 530-545.
[77] Lucas, H.C. (1978). Empirical Evidence for a Descriptive Model of Implementation. MIS Quarterly, June, 27-42.
[78] Debenham, J.K. (1990). Knowledge Engineering: The Essential Skills. Expert Systems for Management and Engineering, Balagurusamy, E. & Howe, J. (eds.), New York, NY: Ellis Horwood, 36-66.
[79] Nunamaker, J., Couger, J.D., & Davis, G.B. (1982). Information Systems Curriculum Recommendations for the 80s: Undergraduate and Graduate Programs. Communications of the ACM, 25(11), 781-794.
[80] Brody, A. (1989). The Experts. INFOWORLD, June 19, 59-75.
[81] Doll, W.J. & Torkzadeh, G. (1989). A Discrepancy Model of End-user Computing Involvement. Management Science, 35(10), 1151-1171.
[82] Doll, W.J. & Torkzadeh, G. (1990). The Measurement of End-user Software Involvement. OMEGA, 18(4), 399-406.
[83] Doll, W.J. & Torkzadeh, G. (1991). A Congruence Construct of User Involvement. Decision Sciences, 22(2), 443-453.
[84] McKeen, J., Guimaraes, T., & Wetherbe, J. (1994). The Relationship Between User Participation and User Satisfaction: An Investigation. MIS Quarterly, December.
[85] Torkzadeh, G. & Doll, W.J. (1994). The Test-retest Reliability of User Involvement Instruments. Information & Management, 26, 21-31.
[86] Jobson, J.D. (1991). Applied Multivariate Data Analysis: Regression and Experimental Design. New York: Springer-Verlag, 1, 281-282.
[87] Yoon, Y. & Guimaraes, T. (1993). Selecting Expert System Development Techniques. Information & Management, 24, 209-223.
[88] Yoon, Y., Guimaraes, T., & Swales, (1993). Integrating Artificial Neural Networks with Rule-Based ES. DSS Special Issue on Artificial Neural Networks, 1, 497-507.
[89] Wells, S. & Guimaraes, T. (1992). End-User Development of Expert Systems. Emerging Technologies.
Table 1: Demographic Information of Respondents

Project Leader/Developers              No.
Work Experience *
  1-3 years                             42
  4-6 years                             28
  7-10 years                            18
  11-20 years                           15
  over 20 years                         10
  No response                            1
Education
  High School                           10
  Bachelor                              49
  Master                                46
  Doctoral                               7
  No response                            2
Age
  26-30                                 10
  31-40                                 54
  41-50                                 34
  51-60                                 13
  over 60                                1
  No response                            2

End-Users                              No.
Work Experience *
  1-3 years                              8
  4-6 years                             17
  7-10 years                            38
  11-20 years                           41
  over 20 years                         10
Education
  High School                           71
  Bachelor                              34
  Master                                 7
  Doctoral                               2
Type
  Staff                                 33
  Clerical                               7
  Blue Collar                           60
  Supervisors                            8
  Middle Managers                        3
  Top-level managers                     2
  Engineers                              1

* In current jobs.
Table 2: Matrix of Intercorrelationships Among Major Study Variables

                                  Mean   STD     1      2      3      4      5      6      7      8      9     10
 1. User Satisfaction             5.34   0.81   1.00
 2. Problem Importance            5.40   1.37    .31** 1.00
 3. Problem Difficulty            3.66   1.10    .01    .32** 1.00
 4. Developer Characteristics     5.69   0.74    .34**  .10   -.02   1.00
 5. End User Characteristics      3.72   0.89    .19*   .21*   .30**  .36** 1.00
 6. ES Impact on End-Users Jobs   4.12   0.64    .22*   .23*   .21*                1.00
 7. Shell Characteristics         5.13   1.02    .37**  .14   -.06    .48**  .07    .41** 1.00
 8. User Involvement              3.94   1.82    .20*   .20*   .45** -.02    .21*   .31** -.05   1.00
 9. Managerial Support            3.86   1.46    .01    .03   -.11    .34**  .23*   .28**  .34** -.06   1.00
10. System Usage                  4.72   2.01    .50**  .34**  .01    .33**  .43**  .36**  .35**  .09    .25** 1.00

* p