Author's personal copy Decision Support Systems 49 (2010) 386–395
Dynamic interaction in knowledge based systems: An exploratory investigation and empirical evaluation

Brandon A. Beemer ⁎, Dawn G. Gregg

University of Colorado at Denver, United States
Article info

Article history: Received 15 July 2009; Received in revised form 20 February 2010; Accepted 24 April 2010; Available online 29 April 2010.

Keywords: Dynamic interaction; Decision support systems; Knowledge based systems; Advisory systems
Abstract

In response to the need for knowledge based support in unstructured domains, researchers and practitioners have begun developing systems that mesh the traditional attributes of knowledge based systems (KBS) and decision support systems (DSS). One such attribute being applied to KBS is dynamic interaction. To provide a mechanism that enables researchers to quantify this system attribute, and practitioners to prescribe the needed aspects of dynamic interaction in a specific application, a measurement scale was derived from previous literature. Control theory was used to provide the theoretical underpinnings of dynamic interaction and to identify its conceptual substrata. A pretest and exploratory study were conducted to refine the derived scale items, and a confirmatory study was then conducted to evaluate the nomological validity of the measurement scale.
1. Introduction

Information Systems (IS) literature contains two main system architectures designed for decision support: knowledge based systems (KBS) and decision support systems (DSS). A KBS has been defined as a system that uses human knowledge captured in a computer to solve a problem that ordinarily requires human expertise, and KBS have applications in virtually every field of knowledge [6]. KBS are designed to deal with complex problems in narrow, well-defined problem domains. If a human expert can specify the steps and reasoning by which a problem may be solved, then a KBS can be created to solve the same problem [21]. The architectural design of KBS is very different from that of traditional systems because the problems they are designed to solve have no algorithmic solution; instead, KBS use codified heuristics, or decision-making rules of thumb extracted from the domain expert(s), to make inferences and determine a satisfactory solution [7,37].

For unstructured decision domains, DSS are developed: collaborative systems that use various types of formulas and algorithms to synthesize information from various data sources [6]. The architectural design of DSS differs significantly from the heuristics approach of KBS; instead, DSS are designed to track interactively with the user's nonlinear cognition process in unstructured decision domains [23,26,49].

A functional gap has existed between KBS's support of structured decisions and DSS's support of unstructured decisions [6]. Despite this, the need to apply knowledge to unstructured domains exists, and recently system developers have begun meshing the traditional
characteristics of these two system architectures to provide knowledge based support of unstructured decisions. This is being accomplished by incorporating dynamic interaction (traditionally a DSS trait) into KBS [18,28,53]. Fig. 1 illustrates emerging system designs that bridge the described functional gap between knowledge based systems and decision support systems (e.g. iterative expert systems, advisory systems, Web 2.0 mash-ups). As Fig. 1 illustrates, a common feature shared by these three system types is dynamic interaction [7,28,53]. Iterative expert systems closely resemble traditional expert systems but add an iterative decision process that allows users to revisit and revise their inputs and consider alternative solutions [28]. Advisory systems are also driven by an iterative system process, but unlike iterative expert systems, which are typically rule-based, advisory systems couple rule-based reasoning with other forms of logic such as case-based reasoning [7]. Another type of system bridging the gap is the Web 2.0 mash-up, which provides an enhanced interactive web experience that allows a variety of users to share in the generation, organization, distribution, and utilization of knowledge [53].

Generally speaking, when knowledge based systems are applied to unstructured domains they tend to experience low user acceptance, resulting from mistrust of a system that cannot justify its solutions [19]. There are two main schools of thought on addressing this low user acceptance. The first focuses on developing more robust explanation facilities to justify the system's solution in unstructured domains [5].
The second declares that “the need for interaction between knowledge based systems (KBS) and the user has increased, mainly, to enhance the acceptability of the reasoning process and of the solutions proposed by the KBS” [19, pg. 1]. Through dynamic interaction with the
Fig. 1. Dynamic interaction bridges the gap between KBS and DSS.
user, KBS are able to track with the user's iterative cognition process in solving unstructured decisions and to involve the user's opinion in the system's logic, which gives users a sense of ownership in, and ultimately trust of, the solution [26,38].

The purpose of this study is to derive a measurement scale to quantify dynamic interaction in knowledge based systems. It measures dynamic interaction as a construct perceived by the user, because a user's initial attitude towards the system may positively or negatively influence the actual dynamic interaction that takes place during system use. A review of system control theory literature provides the theoretical underpinnings for the substrata and nomological model of dynamic interaction. Upon this foundation a multi-item measurement scale is derived for dynamic interaction, which is then evaluated in an exploratory and a confirmatory study. Lastly, a discussion is presented covering the implications of this study for future research.

2. Theoretical background

Dynamic interaction is neither a new concept nor limited to the decision support architectures previously discussed; in fact, it has appeared in artificial intelligence literature for several decades. In the 1990s the World Wide Web began to take on dynamic characteristics with database-driven web applications that serve custom web pages designed to meet the specific criteria and interests of unique users [25]. Web applications then evolved into virtual working environments, and various studies evaluated groupware architectures in terms of their ability to synthesize and support dynamic interaction between colleagues and between teachers and students in distance education [31]. Most recently, dynamic interaction is being expanded to make systems increasingly proactive with technologies like AJAX (Asynchronous JavaScript and XML), which brings browser-based interaction much closer to application-based interaction [36,44,46].
Despite the popularity of dynamic interaction as a system attribute, IS literature lacks a measure to quantify the role of this construct within the information system nomological net. To address this need, the focus of this paper is to derive such a measurement for dynamic interaction. Dynamic interaction's underpinnings are found in system control theory, which spans many academic disciplines ranging from engineering to economics and is primarily concerned with influencing the behavior of dynamic systems [30]. Specifically stated, “control theory is the area of application-oriented mathematics that deals with the basic principles
underlying the analysis and design of control systems. To control an object means to influence its behavior so as to achieve a desired goal” [48, p. 1]. The majority of control theory applications incorporate some variation of a feedback loop; as Fig. 2 illustrates, a feedback loop has three general phases: (A) input values, (B) process input and calculate output, (C) evaluate output and, if necessary, iterate back to step (A) and adjust the input values [43].

A common application of control feedback loops is machine learning, which includes but is not limited to autonomous robots, fuzzy logic, intelligent systems, neural networks, and database autonomics. For example, database autonomics has become more popular as databases have become increasingly large and complex and the human resource cost to administer them has grown. To help ease the cost of ownership of large databases, researchers have begun developing self-managing (autonomic) databases (ADBMS) that automatically configure and manage their resources [17]. Essentially, an ADBMS is a control feedback loop that oversees the database: it collects and analyzes statistics, determines whether performance is satisfactory, and then takes appropriate action to resolve performance issues if they exist [35].

The control feedback loop is also being applied in the context of dynamic KBS, related to the brain–computer interaction subset of machine learning. While there is no physical connection involved, the KBS includes the user's cognition process in developing a solution through dynamic interaction. KBS can be effective in supporting unstructured decisions when they are designed with feedback loops that allow the user to influence the behavior of the system so as to achieve the desired solution by evaluating alternative solutions [48]. A good example of knowledge based support applied to unstructured decisions via dynamic interaction is an iterative expert system built for solving shipment consolidation problems.
Lau and Tsui [28] developed such a system, which adopted rule-based reasoning to provide expert advice for cargo allocation and included an iterative improvement mechanism that explores different outcomes until an optimal solution is found.

3. Hypothesis model

Examining the predictive ability of a measurement scale and its nomological validity requires identifying the constructs within a nomological network of consequent variables [9]. Fig. 3 illustrates the hypothesized nomological network for dynamic interaction, which has been constructed from prior literature and serves as the research model for this study. The hypothesized consequential constructs of dynamic interaction are Perceived Reliability (trust in the predictions made), Perceived Usefulness, and Behavioral Intention to Use. The hypothesized relationships between dynamic interaction and these consequential constructs are discussed below.

The primary conceptualization of DSS/KBS trust is based on the assumption that users generally adapt their trust levels to accommodate different levels of recommendation quality. That is, the more reliable the system is in providing appropriate decision recommendations, the higher the level of trust the user will have in the system. For
Fig. 2. Control Theory Feedback Loop, derived from [43].
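The three-phase feedback loop in Fig. 2 (input values, process, evaluate, iterate) can be illustrated with a minimal sketch; the function names, the set point, and the convergence criterion below are illustrative assumptions, not part of the original paper:

```python
def control_loop(inputs, process, satisfactory, adjust, max_iters=100):
    """Generic control feedback loop: (A) take input values, (B) process
    them into an output, (C) evaluate the output; if unsatisfactory,
    adjust the inputs and iterate back to phase (A)."""
    for _ in range(max_iters):
        output = process(inputs)          # phase B
        if satisfactory(output):          # phase C
            return output
        inputs = adjust(inputs, output)   # back to phase A
    return output

# Toy usage: nudge a value toward a set point of 10.
result = control_loop(
    inputs=0.0,
    process=lambda x: x,
    satisfactory=lambda y: abs(y - 10) < 0.5,
    adjust=lambda x, y: x + 0.5 * (10 - y),
)
```

The same skeleton covers the ADBMS example (statistics collection as the process phase, performance thresholds as the evaluation phase) and, with a human in the evaluation phase, the dynamically interactive KBS discussed next.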
example, one study examining the relationship between system trust and the reliability of diagnostic and decision support aids found that trust was sensitive to different levels of aid reliability [54]. When knowledge based systems are applied in unstructured domains, a second constraint on user trust comes into play. Users distrust system recommendations because proposed solutions cannot be exhaustively justified; that is, a lack of system transparency negatively impacts trust in the reliability of the recommendations made by the system [19]. The majority of KBS research covering this reliability concern focuses on developing more robust explanation facilities for the KBS (e.g. [5]); however, some researchers are calling for KBS to be more dynamically interactive [19]. Dynamic interaction serves to increase the user's involvement in the decision-making process and awareness of the processes employed by the system when arriving at a decision recommendation. Therefore, it is hypothesized that dynamic interaction will induce trust through enhanced system transparency and thus have a positive effect on the user's perception of the reliability of the recommendations made by the system:

H1. Perceived Dynamic Interaction will have a positive influence on Perceived Reliability.

In unstructured decision domains there is often a particular person who is held legally accountable for the decisions being made (e.g. doctors, stock brokers, and executives). In these situations the KBS no longer performs autonomously but rather serves as a decision support tool, like a DSS (e.g. [42]). As such, recent studies have investigated the user's cognition process in unstructured domains, with the premise that since the system is used to support the user's decision process, it should be designed around that process. Various studies have shown that the user's decision process in unstructured domains is iterative, which suggests that when dynamic interaction is built into the system to support the user's iterative thought process, it will have a positive influence on user perceptions of the decision support tool [25,38,55]. In these cases, the addition of dynamic interaction serves to improve user perceptions of how well a system is performing its tasks, and thus enhances perceptions of the output quality of the system [19]. A theoretical extension to the technology acceptance model (TAM2) found that output quality has a positive effect on the perceived usefulness of systems [52]. Thus it is hypothesized:

H2. Perceived Dynamic Interaction will have a positive influence on Perceived Usefulness.

The relationship between trust and the Technology Acceptance Model (TAM) is well established in IS literature (e.g. [20,51,57]). Perceived Reliability is a dimension of trust that is commonly investigated when a system's ability is a concern [34]. In these cases practitioners are directed to build trust-building mechanisms into the application [20]. The degree to which these trust-building mechanisms enhance Perceived Reliability determines the degree to which the user's Behavioral Intention to Use the system (TAM) is positively affected [20].

H3. Perceived Reliability will have a positive influence on Behavioral Intention to Use.

Both Perceived Usefulness and Behavioral Intention to Use belong to TAM, and the relationship between these two constructs has been confirmed in a plethora of different domains (e.g. email, Lotus Notes, online auctions) [15,16,20,22]. Therefore, it is hypothesized that in the domain of dynamic interaction applied to KBS, Perceived Usefulness will have a positive influence on Behavioral Intention to Use [15,16].

H4. Perceived Usefulness will have a positive influence on Behavioral Intention to Use.

Fig. 3. Research Model for Dynamic Interaction.

To evaluate the nomological validity of dynamic interaction within the research model in Fig. 3, measurements were derived from previous studies and reworded to fit the context of the basketball knowledge based prediction system developed for this study. The measurements for Perceived Usefulness and Behavioral Intention were derived from Davis's [16] measurement development paper for TAM, and the measurement for Perceived Reliability was derived from Madsen and Gregor's [34] study on aspects of trust. The scale items for trust were all worded positively, since negative items reflect distrust, which the literature describes as an entirely different construct rather than the polar opposite of trust [8,29]. The exact wording of the measurements can be found in Table 1.
4. Measurement development and pretest

Despite the long history of dynamic interaction as a system attribute, and its growing popularity via control feedback loops in emerging technologies (e.g. Web 2.0 mash-ups), IS literature lacks a measurement scale to quantify dynamic interaction. The focus of this section is to derive such a measurement in the context of KBS applied to unstructured decision domains, providing a measurement scale that academicians can use to quantify dynamic interaction and that practitioners can use to prescribe the aspects of dynamic interaction warranted in different environments. The validity of a measurement scale begins with the initial item
Table 1
Derived measures.

Scale item | Derived from
(Perceived usefulness)
The system would be helpful to you in filling out your tournament bracket. | [16]
The system would make it easier for you to make your tournament predictions. | [16]
The system would make the time you spent on your bracket more effective. | [16]
Overall, using the system would be better than not using the system. | [16]
(Perceived reliability)
The system performs reliably. | [34]
The system analyzed the problem domain consistently. | [34]
The system provided the advice I required to make my decision. | [34]
I feel that I can rely on the system to function properly. | [34]
(Behavioral intention)
I intend to use this tool to gather information for the NCAA basketball tournament. | [16]
I intend to use this tool to help me fill out a tournament bracket. | [16]
I intend to use this tool to help me predict the tournament outcomes. | [16]
I intend to use this tool to review potential outcomes of the tournament. | [16]
Fig. 4. Substrata of dynamic interaction as derived from control theory.
construction [40]; “Rather than test the validity of measures after they have been constructed, one should ensure the [content] validity by the plan and procedures for [instrument] construction” [16, p. 323]. Therefore, to promote construct validity from the onset, a top-down substrata specification and bottom-up item matching approach was used to promote adequate coverage of the domain by the scale items [9]. First, the substrata of dynamic interaction were conceptualized in the KBS domain from control theory (Section 4.1). Next, an extensive review of IS literature was conducted to derive scale items from studies focused on the particular substrata of dynamic interaction (Section 4.2). Lastly, a Q-sort was conducted to pretest and refine the initial measurement scale (Section 4.3).

4.1. Conceptualization of dynamic interaction substrata

The first step in measurement development consisted of using previous literature to hypothesize the substrata of dynamic interaction. Despite the vast array of contexts in which control theory is applied (e.g. atmospheric science, biology, and economics), the basic process used to control these dynamic systems remains quite close to the basic feedback loop prescribed by control theory [14,41,50]. The same is true when control theory is applied to dynamic information systems through dynamic interaction. Therefore, the control process prescribed by control theory provides theoretical grounding for developing the substrata of dynamic interaction as an information system construct;
these substrata are illustrated in Fig. 4 and include the inclusive, incremental, and iterative system traits [29,48]. The following provides an example of how the substrata of dynamic interaction couple together in a repetitive process when applied to a knowledge based system (as shown in Fig. 3). As mentioned previously, knowledge based systems often incorporate some type of heuristics-based inference engine [6,21]. A knowledge based system that is designed to be dynamically interactive is inclusive: it includes the user's opinion when navigating the rule-based inference engine [47]. After the first and each subsequent pass through the inference engine's logic, an incremental solution is presented to the user. This functionality matches the user's cognition process (phase theorem) in unstructured environments and allows them to consider alternative solutions [38,55]. The user then evaluates each incremental solution; if the user is satisfied the system process terminates, but if not, the system allows them to iterate back to the beginning of the process and adjust their inputs to view a different alternative [26].

4.1.1. Inclusive

Traditional knowledge based systems are designed with a user interface component that enables the user to input the information the system requests [6]. Since these systems are designed for structured domains (e.g. training facilities, diagnostic applications), the user's opinion is generally not included in the system's cognition process, because the problem has a definable right solution that is codified in the system knowledge base and explained by the system's explanation facility [33]. However, as knowledge based systems begin to be designed for unstructured environments, the user's inclusion in the system's cognition process becomes warranted to enhance the acceptability of the reasoning process and of the solutions proposed by the KBS [19].

4.1.2. Incremental

System architects often design incremental systems when dealing with domains that are ever changing, exceedingly complex, or contain uncertainty [2,31,39]. In the context of data mining, maintaining accurate query cost statistics becomes extremely difficult because of continuous changes being made to the database (inserts, updates, and deletes), and the large cost (time) of reanalyzing extremely large
Table 2
Initial scale items for perceived dynamic interaction.

Scale item | Substrata | Source | Synopsis
(1) The system progressively interacted with you | Inclusive | [3] | Developed a methodology to dynamically enhance the interaction with multiple artifacts to suit different usage patterns, user groups, or contexts of use.
(2) The system included you in the decision process | Inclusive | [45] | Developed a search summarization system that included dynamic interaction; their system performed better than systems without interaction.
(3) The system involved you in the decision process | Inclusive | [19] | Investigated a new methodology for interactive KBS that include the user in the process.
(4) Your opinion influenced the system's suggestion | Inclusive | [19] | Investigated a new methodology for interactive KBS that include the user in the process.
(5) The system incorporated your ranking into the solution | Inclusive | [47] | Provided an interactive knowledge representation model for ambient intelligence computing.
(6) The system used a phased approach to develop a solution | Incremental | [31] | Demonstrated phased, incremental updates on database statistics as a method of increasing performance.
(7) The system added to the proposed solution until it was satisfactory. | Incremental | [41] | Discussed incremental strategic information systems planning in an uncertain environment.
(8) The system developed the solution step by step until it was satisfactory. | Incremental | [32] | Developed an incremental text mining technique to efficiently identify the user's current interest by mining the user's information folders.
(9) The system developed the solution in increments | Incremental | [2] | Developed an incremental dialogue system that was faster than and preferred over its non-incremental counterpart.
(10) The system incrementally worked with you to develop your final solution. | Incremental | [26] | Discussed the incremental human decision-making behavior in unstructured environments.
(11) The system allowed you to go back and change your rankings if you wanted to. | Iterative | [4] | Developed a terminological feedback query composer for iterative information seeking.
(12) The system adjusted the computer's prediction if you changed your team rankings. | Iterative | [24] | Developed an iterative dynamic programming approach.
(13) The system allowed you to review and revise your candidate solution. | Iterative | [27] | Proposed an iterative improvement-based de-clustering method that utilizes the available information on query distribution.
(14) The system iteratively worked with you until you were satisfied with your solution. | Iterative | [8] | Discussed various aspects of iteration in a complex decision domain.
(15) The system continually allowed you to revise your solution until you were satisfied. | Iterative | [28] | Developed an iterative heuristics expert system for enhancing the consolidation shipment process in logistics operations.
databases. For example, Lin and Lee [31] present an approach to maintaining these query cost statistics by breaking them into smaller statistics, which are incrementally updated and then aggregated. Similar to domains that are ever changing (e.g. database cost statistics), decisions in domains that contain uncertainty are extremely difficult to quantify; researchers have found that systems designed with an incremental process are able to identify solutions more quickly and accurately in these domains [2,39].

4.1.3. Iterative

Decision theorists long ago determined that the human cognition process in unstructured environments is incremental and iterative in nature [38,55]. Recent literature has found that when decision support systems are designed to track with the user's iterative cognitive process, decision time and accuracy are improved [26]. Kim et al. [26] called for further study investigating both linear and non-linear decision models in unstructured environments, which is addressed by this study.

4.2. Initial item selection

The item selection process was designed to achieve two main goals. The first was to determine the appropriate number of items needed to quantify the influence of the inclusive, incremental, and iterative factors on the decision-making process without producing redundant scale items. The second was to select items that represented the actual dimensions of dynamic interaction as applied to information systems. To allow for scale refinement, 15 scale items (5 per substratum) were generated from past IS studies concerning different aspects of dynamic interaction. One notable observation is the extent to which IS literature covers the substrata of dynamic interaction (inclusive, incremental, and iterative) identified in the previous section; this provides justification for the use of control theory as the theoretical foundation of dynamic interaction in the realm of IS. Table 2 contains the 5 scale items chosen for each substratum, along with details concerning the study from which each item was derived.

4.3. Scale pretest and refinement

Pretest participants consisted of 10 database application developers, who were asked to perform a prioritization task and then a categorization task. For the prioritization task, they were given a definition of dynamic interaction and asked to rank each statement by how well it matched the definition. For the categorization task, participants were given 3 envelopes, each containing the name and definition of one of dynamic interaction's three substrata. Next they were given 15 index cards, each containing one of the scale items, and were instructed to match each item with one of the substrata definitions. A fourth envelope labeled “Does not fit anywhere” was provided so that participants could discard items they felt did not match anywhere. The categorization data was then cluster analyzed by placing in the same cluster items that 6 or more respondents placed in the same category [16]. “The clusters are considered to be a reflection of the domain substrata for each construct and serve as a basis of assessing coverage, or representativeness, of the item pools” [16, p. 325]. The results of both the prioritization and categorization are provided in Table 3. Cluster A contained 3 scale items representing the inclusive substratum, cluster B contained 4 scale items representing the incremental substratum, and cluster C contained 4 scale items representing the iterative substratum. Scale items 1, 5, 8, and 11, which were ranked 15th, 12th, 14th, and 13th respectively, fell outside of the three substrata clusters and thus were removed from the measurement.

Table 3
Pretest results for perceived dynamic interaction.

Item | Substrata | Rank | Cluster
(1) The system progressively interacted with you | Inclusive | 15 | *
(2) The system included you in the decision process | Inclusive | 10 | A
(3) The system involved you in the decision process | Inclusive | 2 | A
(4) Your opinion influenced the system's suggestion | Inclusive | 7 | A
(5) The system incorporated your ranking into the solution | Inclusive | 12 | *
(6) The system used a phased approach to develop a solution | Incremental | 4 | B
(7) The system added to the proposed solution until it was satisfactory. | Incremental | 9 | B
(8) The system developed the solution step by step until it was satisfactory. | Incremental | 14 | *
(9) The system developed the solution in increments | Incremental | 6 | B
(10) The system incrementally worked with you to develop your final solution | Incremental | 8 | B
(11) The system allowed you to go back and change your rankings if you wanted to. | Iterative | 13 | *
(12) The system adjusted the computer's prediction if you changed your team rankings. | Iterative | 1 | C
(13) The system allowed you to review and revise your candidate solution. | Iterative | 11 | C
(14) The system iteratively worked with you until you were satisfied with your solution. | Iterative | 5 | C
(15) The system continually allowed you to revise your solution until you were satisfied. | Iterative | 3 | C
* Removed from the scale.
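The majority-agreement rule used in the categorization step — an item joins a cluster only when 6 or more of the 10 respondents placed it in the same category, and is otherwise dropped — can be sketched as follows (item and category names are illustrative, not the study's data):

```python
from collections import Counter

def cluster_items(placements, threshold=6):
    """placements maps each scale item to the list of categories the
    respondents sorted it into. An item joins a cluster only if at
    least `threshold` respondents agreed on a category; otherwise it
    is flagged for removal, mirroring the pretest procedure."""
    clusters, removed = {}, []
    for item, cats in placements.items():
        category, votes = Counter(cats).most_common(1)[0]
        if votes >= threshold:
            clusters.setdefault(category, []).append(item)
        else:
            removed.append(item)
    return clusters, removed

# Toy usage with 10 simulated respondents per item.
placements = {
    "item2": ["inclusive"] * 8 + ["iterative"] * 2,
    "item1": ["inclusive"] * 4 + ["incremental"] * 3 + ["none"] * 3,
}
clusters, removed = cluster_items(placements)
# clusters -> {"inclusive": ["item2"]}; removed -> ["item1"]
```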
5. Exploratory study

The pretest procedure refined the initial measurement scale from 15 to 11 items, with 3 items remaining in the inclusive substratum and 4 items in each of the incremental and iterative substrata. An exploratory study was conducted to further refine the measurement scale. The decision domain selected for the exploratory study was that of predicting the outcome of the National Collegiate Athletic Association (NCAA) men's basketball tournament. It is very popular amongst office colleagues in the United States to conduct pools in which participants fill out a tournament bracket with the goal of predicting the outcome with the highest accuracy. Participants pay an entrance fee, and the person with the highest prediction accuracy receives a monetary award composed of the aggregated entrance fees. There are numerous variables that can potentially impact the outcome of the NCAA tournament (e.g. win percentage, conference, experience, average height, ranking, and seed), which adds a considerable amount of uncertainty to this decision domain. Additionally, the nature of a tournament bracket (e.g. IF team-A beats team-B AND team-D beats team-C THEN team-A plays team-D) is similar to that of a KBS heuristic decision tree. Therefore, since this decision domain contains uncertainty and is also conducive to a KBS inference engine, it was selected for this study.
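The bracket structure noted above (winners of one round meet in the next, decided by IF-THEN rules) maps naturally onto a decision tree. A minimal sketch, using a hypothetical heuristic that simply prefers the better-seeded team — not the predictors the study's KBS actually used:

```python
def predict_bracket(teams, rank):
    """Propagate winners round by round: adjacent teams play each
    other and a simple heuristic (better rank wins) picks each winner,
    e.g. IF A beats B AND D beats C THEN A plays D."""
    rounds = [teams]
    while len(teams) > 1:
        teams = [min(teams[i], teams[i + 1], key=rank)
                 for i in range(0, len(teams), 2)]
        rounds.append(teams)
    return rounds

# Toy 4-team bracket with hypothetical seeds (lower is better).
seeds = {"A": 1, "B": 8, "C": 4, "D": 5}
rounds = predict_bracket(["A", "B", "C", "D"], rank=seeds.get)
# rounds -> [["A", "B", "C", "D"], ["A", "C"], ["A"]]
```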
Fig. 5. Google Ads Used to Solicit Participation in Exploratory Study.
Fig. 6. Revised 2nd Order Factor PLS Model With 9 Scale Items.
To conduct the experiment, two KBS were developed to aid users in predicting the outcome of the NCAA tournament. The first system was a traditional KBS that included an explanation facility describing the most significant predictors of past tournaments and the current values of these statistics. The second system was built upon the first, but added a dynamically interactive interface with inclusive, incremental, and iterative functionality. To include the user's opinion in the decision process, the system allowed the user to input their own rankings of the tournament teams. Next, the system presented an incremental solution composed of a (user-selected) combination of the KBS's prediction and the user's opinion (e.g. the user could select a solution based 50% on the KBS and 50% on their own rankings). Finally, the system was designed to be iterative by allowing the user to revisit one, or both, of the previous decision steps (inputting rankings and selecting the combination ratio). The iterative decision process of the second KBS continued until the user was satisfied with the solution.

Table 4
Descriptive statistics.

Construct                          Mean (non DI)   Mean (DI)   Standard deviation
(DI) Dynamic Interaction           2.08            6.06        2.29
(PR) Perceived Reliability         2.04            6.23        2.26
(PU) Perceived Usefulness          2.17            6.33        2.25
(BI) Behavioral Intention to Use   2.08            6.25        2.19

All items were measured on a 7-point Likert scale (1 = Strongly Disagree to 7 = Strongly Agree).
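The decision loop of the dynamically interactive system can be sketched as follows. This is a minimal illustration under simplifying assumptions: the team labels, 0-to-1 strength scores, and helper names are hypothetical, as the paper does not describe the deployed system's scoring model at this level of detail.

```python
def blend(kbs_scores, user_scores, kbs_weight):
    """Incremental solution: a user-selected mix of the KBS's
    prediction and the user's own rankings (inclusive step)."""
    return {team: kbs_weight * kbs_scores[team]
                  + (1 - kbs_weight) * user_scores[team]
            for team in kbs_scores}

def pick_winner(scores, team_a, team_b):
    """Resolve one bracket game from the blended scores."""
    return team_a if scores[team_a] >= scores[team_b] else team_b

# hypothetical team-strength scores on a 0-1 scale
kbs = {"A": 0.9, "B": 0.4, "C": 0.6, "D": 0.7}
user = {"A": 0.5, "B": 0.8, "C": 0.6, "D": 0.3}

# iterative step: the user may revisit either input (their rankings
# or the combination ratio) and re-run the blend until satisfied
solution = blend(kbs, user, kbs_weight=0.5)
winner = pick_winner(solution, "A", "B")
```

With an equal 50/50 ratio, the KBS's strong preference for team A outweighs the user's preference for team B; lowering `kbs_weight` would let the user's rankings dominate, which is the essence of the inclusive and incremental substrata working together.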
To solicit participation, a Google AdWords campaign was run for 4 consecutive days leading up to the seeding of the 2009 NCAA basketball tournament. Fig. 5 contains the ads that were used to solicit participation in the study. Ad clicks were bid at $0.10 per click, with a $25.00 daily budget limit. Over the four-day period the campaign ran, the ads produced 1000 clicks (a total restricted by the daily budget limit), with 133 individuals participating in the study, yielding a 13.3% response rate. For scale validation, the Partial Least Squares (PLS)1 Structural Equation Modeling (SEM) method was performed on 67 randomly selected observations from the exploratory test data (a holdout sample of 66 was retained for further analysis). The 11-item scale produced by the pretest in Section 4.3 was modeled as a second order factor model, with items incl2–4, incr1–2 and 4–5, and iter2–5 as reflective indicators of latent first order factors of the Inclusive, Incremental, and Iterative substrata, and these three first order factors as formative indicators of the Dynamic Interaction construct. A second order formative model is reasonable for this study because a change in one aspect of dynamic interaction does not necessarily imply a change in the others [13]. For example, one can include users' opinions without allowing the user to iterate on those opinions throughout the decision-making process. As with other multivariate statistical packages (e.g. AMOS), PLS-Graph does not have built-in functionality to model second order constructs; however, there are methods that can be applied to accomplish a second order
1 PLS-Graph 3.0 was used to perform the PLS analysis.
Table 5
Loadings and cross-loadings (each item's loading on its own construct appears in that construct's column).

Item     DI     PR     PU     BI
incl2    .97    .74    .77    .75
incl3    .97    .76    .78    .77
incl4    .97    .76    .76    .79
incr1    .95    .74    .76    .75
incr2    .96    .76    .79    .77
incr4    .97    .76    .78    .77
iter2    .98    .77    .81    .78
iter4    .97    .77    .81    .78
iter5    .99    .78    .81    .78
pr1      .79    .97    .78    .81
pr2      .75    .99    .77    .76
pr3      .78    .98    .81    .80
pr4      .75    .98    .77    .76
pu1      .81    .78    .99    .80
pu2      .81    .78    .99    .80
pu3      .81    .80    .98    .82
pu4      .81    .81    .98    .79
bi1      .78    .80    .79    .99
bi2      .80    .80    .81    .99
bi3      .79    .79    .82    .99
bi4      .81    .81    .81    .99
construct model in PLS-Graph, the easiest of which is the approach of repeated indicators known as the hierarchical component model [11,12,56]. When using the hierarchical component model, researchers suggest that it is ideal to have an equal number of indicators for each first order construct [11]. Therefore items incr5 and iter3 were removed because they were the lowest loading indicators on the Incremental and Iterative substrata, which each had four indicators, compared to Inclusive's three. The revised second order factor model illustrated in Fig. 6 has an equal number of indicators for each first order factor. The relation between the Inclusive, Incremental, and Iterative substrata and Dynamic Interaction is modeled using a formative measurement model because causality flows from the indicators to the dynamic interaction construct [10]. Removing the incr5 and iter3 scale items reduced the T-statistic for the Inclusive path coefficient from 28 to 20, with Incremental's remaining unchanged at 30 and Iterative's being reduced from 24 to 20. Despite these nominal variations in the path coefficient T-statistics after the items were removed, all three path coefficients remained significant at the .01 level. When modeling the paths from the first order factors to the overall second order construct using PLS, the R-square for the second order construct is always 1.0. However, the reason for using a formative (or molar) model is to examine the relative path weights as this "molar construct" is used to predict other constructs in the model [13]. The results of the second order PLS analysis suggest that the Inclusive, Incremental, and Iterative substrata contribute almost equally to the Dynamic Interaction construct.
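The repeated-indicators idea can be illustrated numerically. In the sketch below (simulated data, with equally weighted composites as a crude stand-in for PLS's iteratively estimated weights), the second-order Dynamic Interaction score reuses all nine items as its own indicators, so regressing it on the three first-order composites necessarily yields an R-square of 1.0.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# simulated standardized item scores, three per first-order substratum
incl = rng.normal(size=(n, 3))
incr = rng.normal(size=(n, 3))
itr = rng.normal(size=(n, 3))

# first-order construct scores as equally weighted composites
f = np.column_stack([incl.mean(axis=1), incr.mean(axis=1), itr.mean(axis=1)])

# hierarchical component model: the second-order Dynamic Interaction
# construct reuses all nine items as its own (repeated) indicators
di = np.column_stack([incl, incr, itr]).mean(axis=1)

# formative paths: regress DI on the three first-order composites
w, *_ = np.linalg.lstsq(f, di, rcond=None)
resid = di - f @ w
r_square = 1 - resid @ resid / ((di - di.mean()) @ (di - di.mean()))
```

Because the second-order score is an exact linear combination of the first-order composites, the regression recovers equal path weights and a perfect R-square; the analytical interest therefore lies in the relative weights, not in the fit, just as noted above.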
6. Confirmatory study

To investigate the nomological validity of the proposed measurement scale, a confirmatory study was conducted to evaluate the hypothesis model developed in Section 3. The same two NCAA knowledge based prediction systems used in the exploratory study were used in the
Table 6
Internal consistency and discriminant validity.

Construct   Composite reliability
DI          .99
PR          .98
PU          .99
BI          .99

AVE (diagonal) and inter-construct correlations:

       DI     PR     PU     BI
DI     .93
PR     .86    .94
PU     .89    .87    .97
BI     .87    .87    .87    .98
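The quantities in Table 6 can be approximately recomputed from the rounded loadings in Table 5: composite reliability as (Σλ)² / ((Σλ)² + Σ(1 - λ²)) and AVE as the mean squared loading. The sketch below performs both checks; because the published loadings are rounded to two decimals, the recomputed values match Table 6 only approximately.

```python
import numpy as np

# each construct's item loadings on its own construct (from Table 5)
loadings = {
    "DI": [.97, .97, .97, .95, .96, .97, .98, .97, .99],
    "PR": [.97, .99, .98, .98],
    "PU": [.99, .99, .98, .98],
    "BI": [.99, .99, .99, .99],
}

def composite_reliability(lam):
    """(sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(lam)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

cr = {c: float(composite_reliability(l)) for c, l in loadings.items()}
ave = {c: float(np.mean(np.square(l))) for c, l in loadings.items()}

# inter-construct correlations (off-diagonal entries of Table 6)
corr = {("DI", "PR"): .86, ("DI", "PU"): .89, ("DI", "BI"): .87,
        ("PR", "PU"): .87, ("PR", "BI"): .87, ("PU", "BI"): .87}

# the paper's discriminant-validity criterion: every AVE must exceed
# every correlation involving that construct
discriminant_ok = all(ave[a] > r and ave[b] > r
                      for (a, b), r in corr.items())
```

All composite reliabilities come out well above the .70 threshold and every AVE exceeds the largest inter-construct correlation (.89), consistent with the conclusions drawn from Table 6.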
Fig. 7. PLS SEM Results.
confirmatory study; however, instead of soliciting participation via a Google ad campaign, email solicitations were used. Five listservs were obtained which together contained 316 email addresses of individuals who had participated in various NCAA March Madness pools in 2008. An email was sent to each of the individuals in the aggregated listserv inviting them to participate in the study. Since the study was conducted during the weeks prior to the seeding of the 2009 tournament, hypothetical teams were seeded into each of the two knowledge based systems. As an incentive for participating in the study, participants were given access to the system loaded with the actual tournament teams once the tournament seeding took place. Of the 316 invitations sent, 86 individuals participated, yielding a 27% response rate for the confirmatory study. 62% of the respondents were male, and the overall participant population had an average age of 35. Respondents were asked to rate their overall familiarity with the NCAA men's basketball tournament on a scale of 1 to 4 (1 = Not at all familiar, 2 = Vaguely familiar, 3 = Familiar, 4 = Very familiar); the average self-evaluation of contextual familiarity was 2.6. Descriptive statistics of the dataset are presented in Table 4. For the system lacking dynamic interaction, the means of the constructs ranged from 2.04 to 2.17; conversely, the means of the constructs for the system that included dynamic interaction ranged from 6.06 to 6.33. The psychometric properties of the research model were evaluated by examining item loadings, internal consistency, and discriminant validity. Researchers suggest that item loadings and internal consistencies greater than .70 are acceptable [1]. As can be seen in Table 5, all item loadings surpass this threshold. Internal consistency is evaluated by a construct's composite reliability score.
The composite reliability scores are provided in Table 6 and are more than adequate for each construct. There are two parts to evaluating discriminant validity: first, each item should load higher on its own construct than on the other constructs in the model, and second, the Average Variance Extracted (AVE) for each construct should be higher than the inter-construct correlations [1]. In Table 5, comparing each item's loading on its own construct with its cross-loadings shows that all items load higher on their respective construct than on the other constructs in the research model. Likewise, in Table 6, the AVE for each construct is higher than the inter-construct correlations without exception. These two comparisons suggest that the model has good discriminant validity. The results of the PLS SEM analysis are presented in Fig. 7. Perceived Reliability had an R-square of .88, with R-square values of .93 and .94 for Perceived Usefulness and Behavioral Intention to Use respectively. This means that 88% of the variance in Perceived Reliability and 93% of the variance in Perceived Usefulness is explained by Dynamic Interaction, and 94% of the variance in Behavioral Intention to Use is explained by Perceived Reliability and Perceived Usefulness [1]. The path coefficients
Table 7
Summary of hypothesis tests.

Hypothesis       Supported
H1: DI → PR      Yes
H2: DI → PU      Yes
H3: PR → BI      Yes
H4: PU → BI      Yes
between Dynamic Interaction, Perceived Reliability, and Perceived Usefulness were significant at .001, while the path coefficients between Perceived Reliability, Perceived Usefulness, and Behavioral Intention to Use were significant at .01. As summarized in Table 7, all four hypotheses were supported, which supports the nomological validity of the dynamic interaction measurement scale.

7. Limitations and future research

While the immediate focus of this paper is the quantification of dynamic interaction as a system attribute, the domain that was investigated (KBS in unstructured domains) raises some interesting questions that are not addressed by this study, namely "Do users really know what's important when making complex (and potentially nonlinear) decisions?" Traditionally, in structured domains KBS perform as autonomous decision makers and outline the relevant factors involved in a decision through the explanation facility. However, unstructured domains often require a human to be held legally responsible for the final decision (e.g., a doctor's diagnosis). This study could benefit from further investigation of this middle ground, where the role of KBS is limited to decision support and to framing the problem for the user by identifying relevant decisional antecedents. Additionally, further study regarding how decision quality is affected when the user's opinion is included in the system's cognition process is warranted. It is common practice to include a mix of negatively worded and positively worded items in measurement scales to help reduce response bias. However, this study included several items related to trust. The literature describes trust and distrust as entirely different constructs and not simply polar opposites [9,29]. Therefore the scale items for perceived reliability (trust) were all worded positively, to ensure that trust, and not distrust, was being measured.
For the purposes of this study, it was decided that all items should be positively worded so that all scales were measured the same way. Future research
may need to examine the impact of including negatively worded scale items, particularly for the dynamic interaction scale. The contribution of this study was limited to the development of a measurement scale for dynamic interaction as a system attribute. To accomplish this, an experiment was developed in which dynamic interaction was controlled for between two NCAA prediction KBS. While the context of the NCAA tournament was interesting and attracted participation in the study, the measurement needs to be applied in a business information system context so that the measurement's nomological net can be further investigated. Additionally, the measurement scale's nomological net should be expanded to see which other constructs are potentially impacted by dynamic interaction. As illustrated by Fig. 8, this study validated the relationships between dynamic interaction, perceived reliability, perceived usefulness, and behavioral intention to use. There are a number of other IS constructs that may be impacted as well. While perceived reliability is an aspect of trust, IS literature could benefit from further investigation into the other trust-related constructs that are influenced by dynamic interaction. In unstructured decision domains, humans iteratively assimilate new information and compare it to existing knowledge [38,55]. Since dynamic interaction is an effort to build the KBS around the user's iterative decision process, it would be interesting to investigate the role dynamic interaction plays in cognitive absorption. From a similar perspective, since dynamic interaction enables the KBS to track with the user's decision process, it would be interesting to investigate how self-efficacy and personal innovativeness are affected. Lastly, while this study focused on knowledge based systems applied to unstructured domains, the measurement should also be evaluated in other architectural domains that model control feedback loops.
For instance, database autonomics would be a suitable architecture in which to apply this measurement scale, because autonomic database systems are viewed as control feedback loops, the concept from which this measurement was derived [35].

8. Conclusion

The use of knowledge based systems in unstructured domains is expanding; thus it is important that researchers understand the factors that impact the use of these systems. Past researchers have hypothesized that including dynamic interaction in these systems will improve trust in the reliability of these systems (e.g. [19]). This study supports this, finding that including dynamic interaction in a KBS
Fig. 8. Potential Expansions of Dynamic Interaction's Nomological Net.
system operating in an unstructured domain will increase both the perceived reliability and the perceived usefulness of the system, ultimately leading to an increased intention to use the system. One of the contributions of this study is the development of a measurement scale for dynamic interaction based upon the theoretical underpinnings of control theory. This phase of the research found that the substrata of dynamic interaction include the ability of the system to include the opinions of the user, to incrementally adjust the solution during the decision process, and to iterate through the decision process until an acceptable solution is reached. Results suggest that when modeling dynamic interaction, a second order formative measurement model is appropriate. As originally suggested by [13], formative second order models are useful in cases where the goal is to assess how much an individual factor contributes to the overall factor. This study also agrees with [10], which argues that to understand these factors they must be demonstrated by embedding the model within a nomological network. In this case, we used the second order Dynamic Interaction model to predict Perceived Reliability, Perceived Usefulness, and Behavioral Intention to Use. Future studies investigating the use of knowledge based systems in unstructured domains may use the proposed scale to evaluate the role played by dynamic interaction in their domains. The proposed scale may be useful in examining the improvements dynamic interaction provides in trust in the recommendations provided by the system, as well as in the perceived usefulness of the system. From a practitioner standpoint, the dynamic interaction scale presented here provides a convenient means for developers to assess their users' perceptions of their KBS, and allows them to assess whether the levels of inclusion, presentation of incremental solutions, and ability to iterate are appropriate to the problem domain.

References

[1] R. Agarwal, E. Karahanna, Time flies when you're having fun: cognitive absorption and beliefs about information technology usage, MIS Quarterly 24 (4) (2000) 665–694.
[2] G. Aist, J. Allen, E. Campana, C.G. Gallo, S. Stoness, M. Swift, M.K. Tanenhaus, Incremental dialogue system faster than and preferred to its non-incremental counterpart, Proceedings of the 2007 Workshop on the Semantics and Pragmatics of Dialogue (DECALOG), Rovereto, Italy, 2007, pp. 761–766.
[3] D. Akoumianakis, A. Savidis, C. Stephanidis, Encapsulating intelligent interactive behavior in unified user interface artifacts, Interacting with Computers 12 (4) (2000) 383–408.
[4] P.G. Anick, S. Tipirneni, The paraphrase search assistant: terminological feedback for iterative information seeking, Annual ACM Conference on Research and Development in Information Retrieval, 1999, pp. 153–159.
[5] V. Arnold, N. Clark, P.A. Collier, S.A. Leech, S.G. Sutton, The differential use and effect of knowledge-based system explanations in novice and expert judgment decisions, MIS Quarterly 30 (1) (2006) 79–97.
[6] J. Aronson, E. Turban, Decision Support Systems and Intelligent Systems, Prentice-Hall, Upper Saddle River, NJ, 2001.
[7] B.A. Beemer, D.G. Gregg, Advisory systems to support decision making, in: F. Burstein, C.W. Holsapple (Eds.), Handbook on Decision Support Systems 1, Springer, Berlin Heidelberg, 2008, pp. 361–377.
[8] N. Berente, K. Lyytinen, What is being iterated? Reflections on iteration in information system engineering processes, in: J. Krogstie, A.L. Opdahl, S. Brinkkemper (Eds.), Conceptual Modeling in Information Systems Engineering, Springer, 2007, pp. 261–278.
[9] A. Bhattacherjee, Individual trust in online firms: scale development and initial test, Journal of Management Information Systems 19 (1) (2002) 211–241.
[10] W. Chin, Commentary: issues and opinion on structural equation modeling, MIS Quarterly 22 (1) (1998) vii–xvi.
[11] W. Chin, Partial least squares for researchers: an overview and presentation of recent advances using the PLS approach, International Conference on Information Systems, 2000, pp. 741–742.
[12] W. Chin, B. Marcolin, P. Newsted, A partial least squares latent variable modeling approach for measuring interaction effects: results from a Monte Carlo simulation study and voice mail emotion/adoption study, International Conference on Information Systems (1996) 21–41.
[13] W. Chin, A. Gopal, Adoption intention in GSS: relative importance of beliefs, Data Base Advances 26 (2/3) (1995) 42–64.
[14] P. Cox, R.A. Betts, C.D. Jones, S.A. Spall, I.J. Totterdell, Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model, Nature 408 (2000) 184–187.
[15] F.D. Davis, A technology acceptance model for empirically testing new end-user information systems: theory and results, Doctoral Dissertation, MIT Sloan School of Management, Cambridge, MA, 1986.
[16] F.D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13 (3) (1989) 319–340.
[17] V. Devraj, Leveraging decision automation in database administration, DM Review, 2007, http://www.datawarehouse.com/article/?articleId=6876.
[18] G. Forslund, Toward cooperative advice-giving systems: a case study in knowledge based decision support, IEEE Expert (1995) 56–62.
[19] V. Furtado, Developing interaction capabilities in knowledge-based systems via design patterns, 2004.
[20] D. Gefen, E. Karahanna, D.W. Straub, Trust and TAM in online shopping: an integrated model, MIS Quarterly 27 (1) (2003) 51–90.
[21] J.C. Giarranto, G.D. Riley, Expert Systems: Principles and Programming, Thompson Course Technology, Boston, MA, 2005.
[22] D.G. Gregg, S. Walczak, Auction advisor: online auction recommendation and bidding decision support system, Decision Support Systems 41 (2) (2006) 449–471.
[23] S. Guerlain, D.E. Brown, C. Mastrangelo, Intelligent decision support systems, IEEE International Conference on Systems, Man, and Cybernetics (2000) 1934–1938.
[24] K. Huang, Demand subscription services: an iterative dynamic programming for the substation suffering from capacity shortage, IEEE Transactions on Power Systems 18 (3) (2003) 947–953.
[25] J. Keyes, Datacasting: How to Stream Databases over the Internet, McGraw-Hill, 1998.
[26] C.N. Kim, K.H. Yang, J. Kim, Human decision-making behavior and modeling effects, Decision Support Systems 45 (1) (2008) 517–527.
[27] M. Koyuturk, C. Aykanat, Iterative-improvement-based declustering heuristics for multi-disk databases, Information Systems 30 (1) (2005) 47–70.
[28] H.C.W. Lau, W.T. Tsui, An iterative heuristics expert system for enhancing consolidation shipment process in logistics operations, in: Z. Shi, K. Shimohara, D. Feng (Eds.), Intelligent Information Processing, Springer, Boston, 2006, pp. 279–289.
[29] R.J. Lewicki, D.J. McAllister, R.J. Bies, Trust and distrust: new relationships and realities, Academy of Management Review 23 (3) (1998) 438–458.
[30] F.L. Lewis, Applied Optimal Control and Estimation, Prentice-Hall, 1992.
[31] M. Lin, S. Lee, Incremental update on sequential patterns in large databases by implicit merging and efficient counting, Information Systems 29 (5) (2004) 385–404.
[32] R.L. Liu, W. Lin, Incremental mining of information interest for personalized Web scanning, Information Systems 30 (8) (2004) 630–648.
[33] G. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Addison Wesley, 2005.
[34] M. Madsen, S. Gregor, Measuring human–computer trust, 11th Australasian Conference on Information Systems, Brisbane, 2000.
[35] P. Martin, S. Elnaffar, T. Wasserman, Workload model for autonomic database management systems, IEEE (2006) 10–16.
[36] P. McCarthy, Ajax for Java developers: build dynamic Java applications, IBM developerWorks, 2005, http://www.ibm.com/developerworks/library/j-ajax1/.
[37] K. McGraw, K.A. Harbison-Briggs, Knowledge Acquisition: Principles and Guidelines, Prentice-Hall, NJ, 1989.
[38] H. Mintzberg, D. Raisinghani, A. Theoret, The structure of 'unstructured' decision processes, Administrative Science Quarterly 21 (2) (1976) 246–275.
[39] H.E. Newkirk, A.L. Lederer, Incremental and comprehensive strategic information systems planning in an uncertain environment, IEEE Transactions on Engineering Management 53 (3) (2006) 380–394.
[40] J. Nunnally, Psychometric Theory, McGraw-Hill, New York, 1978.
[41] M.J. North, C.M. Macal, Managing Business Complexity, Oxford University Press, 2007.
[42] L. Rapanotti, News, Expert Systems 21 (4) (2004) 229–238.
[43] M. Raudsepp, Ideal Feedback Loop, 2007, http://en.wikipedia.org/wiki/File:Ideal_feedback_model.svg#filelinks.
[44] G. Royale, Human computer interaction — AJAX, 2008, http://www.csse.uwa.edu.au/teaching/units/231.325/lectures/hci-ajax-nup4.pdf.
[45] H. Sakai, S. Masuyama, A multiple-document summarization system with user interaction, Proceedings of the 20th International Conference on Computational Linguistics, 2004.
[46] K. Smith, Simplifying Ajax-style Web development, 2006, http://lesia.com/content/articles/SimplifyingAjaxWebDevelopment.pdf.
[47] L. Snidaro, G.C. Foresti, Knowledge representation for ambient security, Expert Systems 24 (5) (2007) 321–333.
[48] E.D. Sontag, Mathematical Control Theory, second ed., Springer, New York, 1998.
[49] P. Subsorn, K. Singh, DSS applications as a business enhancement strategy, Proceedings of the 3rd Annual Transforming Information and Learning Conference, 2007.
[50] R. Thomas, D. Thieffry, M. Kaufman, Dynamic behavior of biological regulatory networks — I. Biological role of feedback loops and practical use of the concept of the loop-characteristic state, Bulletin of Mathematical Biology 57 (2) (1995).
[51] F. Tung, S. Chang, C. Chou, An extension of trust and TAM model with IDT in the adoption of the electronic logistics information system in HIS in the medical industry, International Journal of Medical Informatics 77 (5) (2008) 324–335.
[52] V. Venkatesh, F.D. Davis, A theoretical extension of the technology acceptance model: four longitudinal field studies, Management Science 46 (2) (2000) 186–204.
[53] S. Walczak, D.L. Kellogg, D. Gregg, A Web 2.0 application to support consumer decision-making in multi-criteria environments, work in progress, 2008.
[54] D. Wiegmann, A. Rich, H. Zhang, Automated diagnostic aids: the effects of aid reliability on users' trust and reliance, Theoretical Issues in Ergonomics Science 2 (4) (2001) 352–367.
[55] E. Witte, Field research on complex decision-making processes — the phase theorem, International Studies of Management and Organization 2 (2) (1972) 156–182.
[56] H. Wold, Introduction to the second generation of multivariate analysis, in: H. Wold (Ed.), Theoretical Empiricism, Paragon House, New York, 1989.
[57] I. Wu, J. Chen, An extension of trust and TAM model with TPB in the initial adoption of on-line tax: an empirical study, International Journal of Human–Computer Studies 62 (6) (2005) 784–808.
Brandon A. Beemer is a project manager for the U.S. Department of Defense. His most recent experience involves managing large scale Oracle mid-tier system development projects. He received a B.S. in Computer Information Systems from DeVry University and an M.S. in Information Systems from the University of Colorado, Denver. He is currently a doctoral candidate at the University of Colorado in the Computer Science and Information Systems program. His research interests include decision support systems, human–computer interaction, and database autonomics.
Dawn G. Gregg is an Associate Professor of information systems at the University of Colorado, Denver. She received her Ph.D. in Computer Information Systems and her M.S. in Information Management from Arizona State University, her M.B.A. from Arizona State University West, and her B.S. in Mechanical Engineering from the University of California at Irvine. Her current research seeks to improve the quality and usability of Web-based information. Her work has been published in journals including MIS Quarterly, International Journal of Electronic Commerce, IEEE Transactions on Systems Man and Cybernetics, Communications of the ACM, and Decision Support Systems.