September 29, 2009 13:24 WSPC/173-IJITDM

00349

International Journal of Information Technology & Decision Making, Vol. 8, No. 3 (2009) 407–426. © World Scientific Publishing Company

COMPLEXITY AND FAMILIARITY WITH COMPUTER ASSISTANCE WHEN MAKING ILL-STRUCTURED BUSINESS DECISIONS

DAVID L. MCLAIN
School of Management, State University of New York Institute of Technology, Utica, NY 13504-3050, USA
[email protected]

RAMON J. ALDAG∗
School of Business, University of Wisconsin, 975 University Avenue, Madison, WI 53706, USA
[email protected]

Using high-level tasks typical of managerial decisions, this experimental study examined the influence of computer assistance on solving ill-structured problems. Of specific interest were the influences of complex and simple assistance on decision performance and decision maker attitudes. Our findings suggest that performance and user attitudes can be enhanced by technology that provides clear and simple instruction in good problem-solving practices. However, when that technology adds a complex array of technical options and features, the assistance may fail to improve, and may even diminish, performance and damage user attitudes. Familiarization with such complex technology may not improve these outcomes. The findings regarding the relationship between assistance complexity and decision performance are consistent with those of studies suggesting that complexity has a curvilinear influence on performance. When considering the application of technological assistance to ill-structured decision tasks, the complexity of the assistance should not be ignored.

Keywords: End user computing; decision support systems; decision technology; complexity.

1. Introduction

The job of the managerial decision maker has changed in recent years, making technology a frequent decision partner. Software specifically designed to aid decision making has evolved from desktop programs1,2,67 into web-based decision tools.3–7 However, many of these tools aid only in manipulating data when solving moderately- or well-structured problems.8,9 Also, as technology has advanced, so has the complexity of the problems faced by managers. A full understanding of

∗Corresponding author.


technology-assisted decision making cannot be accomplished by studying only those decisions made in well-structured situations.

Theory predicts conflicting outcomes when technology is used to assist decision making. Normative decision theory argues that applying a systematic, thorough decision process guided by rational problem-solving principles should improve decision performance; the more a technology supports this type of process, the better the outcomes. Studies of computer assistance designed to help solve well-structured problems support this perspective.8 Administrative decision theory, which has long argued that managers fail to make optimal decisions because of limited time and information-processing capabilities (Ref. 10, as reviewed in Ref. 11), offers an explanation for why this perspective may not predict actual performance when technology is used to make ill-structured decisions. Despite a design consistent with good decision principles, the complexity of a decision-assisting tool may tax the decision maker's information-processing capacity, already challenged by an ill-structured problem, and thus limit or weaken decision performance. Evidence of this possibility appears in studies showing curvilinear relationships between influences and performance, with optimal performance achieved at moderate levels of the influence. This pattern has appeared repeatedly in research and is exemplified by the Yerkes–Dodson relationship.66,69 Of course, if a curvilinear relationship does exist between decision-aid complexity and decision performance, perhaps practice can reduce the downturn in performance at higher levels of complexity.12,13

This paper describes an experimental study of the use of software decision aids when solving ill-structured management problems.
Subjects were randomly assigned to one of three groups: unassisted, simple assistance (instructional), and complex assistance (both instructional and process-organizing). Each subject developed solutions to two ill-structured business problems with one week between decisions. Dependent variables included four aspects of decision performance and seven decision attitudes.

2. Background

2.1. Decision technology and managerial decision making

Despite years of success using software to help solve well-defined problems in narrow domains, the development of software technology to help solve complex, ill-structured problems is relatively recent, and the use and effects of such technology are not fully understood.14 Benbasat et al. (1993),62 Benbasat and Nault,15 Eom,16 and Sharda et al.13 collected or reviewed research on software designed to aid individual decision makers. From this work it appears that, to benefit problem solving, decision assistance must enhance the user's ability to accomplish one or more steps in the problem-solving process.17,18 Accordingly, research on computerized assistance has measured not only performance but process coordination and structural variables as well.19 Unfortunately, the performance findings of these


studies are conflicting. Some studies have found benefits while others have found none, or even negative effects.13,20–22 The influence of computer assistance on decision performance appears to be contingent on three factors: the user, the problem, and the assistance.22–28 Studies of decision makers' attitudes toward technology-assisted decision making have also produced conflicting findings, suggesting such attitudes are influenced by more than just the effects of the technology on decision performance.8,13,23,29,30,68

Attitudes leading to the intention to use decision technology are well studied within the domain of the technology acceptance model (TAM), which holds that the intention to use a technology is determined by the technology's perceived usefulness and ease of use.31 However, the gap between intent and effective use of the technology is less well studied, especially in the context of ill-structured problem solving. The absence of a complete theory of performance when making technology-assisted, ill-structured decisions makes performance difficult to predict, but decision performance is believed to be contingent at least in part on task complexity.32

Ill-structured problem solving is inherently difficult.33 Computer assistance, while offering to organize information and help navigate this difficult challenge, can also increase the complexity of the decision process. Despite this threat, practice may reduce this perceived complexity. Longitudinal studies that examine the evolving interaction between the decision maker and the aid point to this possibility (e.g., Ref. 13), finding that performance may improve with familiarization.

2.2. Ill-structured decisions and the complexity of decision assistance

An important component of this study is the incorporation of ill-structured problems typical of many real managerial situations. Ill-structured problems, also called wicked or strategic problems, are novel, unstructured, complex, and consequential. For these problems, routine methods of problem solving are precluded because the problem is complex, its nature elusive, or the issue so important that a unique approach is required.34

The complexity of decision aids is also of interest because decision software is becoming increasingly complex at the same time that problems are increasing in complexity. Even when a wide variety of potentially beneficial features is designed into a decision aid, a natural side effect of increasing the number and variety of features is an increase in perceived complexity. Furthermore, in light of evidence that performance influences often have nonlinear effects (Johns, 2006), the relationship between complexity and decision performance may not be simple.27

Complexity is the minimum amount of information needed to communicate something and is usually invoked relative to something accepted as "complex".65 In organizational decision making, complexity challenges information-processing capacity and gives rise to uncertainty about performance. In systems designed to


manipulate knowledge for the benefit of a decision maker, complexity arises from both the knowledge and the system that manipulates that knowledge.35 When the system is an aid to ill-structured decision making, the considerable complexity that exists in the problem can be magnified by aid complexity.

There are two broad dimensions to aid complexity. The first is breadth: the number of steps in the decision process addressed by the aid. The second is depth: the degree of assistance offered at each step in the decision-making process. Although more complex decision aids promise more power and versatility, the added complexity can induce stress during aid use and can tax the information-handling capacity of the decision maker, especially when solving ill-structured business problems.

2.3. Hypotheses

The first hypothesis accepts that the multiplicity of features of a decision aid increases perceived complexity, which may then affect performance. If the degree of complexity, and not simply the range of features of the aid, affects performance, this might help explain conflicting research findings regarding the performance benefits of decision aids.

Hypothesis 1. Decision performance will be influenced by the complexity of computer assistance.

Even if a complex decision aid does not provide immediate benefits, experience may. Familiarization may speed and facilitate the effective application of useful software features, and positive attitudes toward assisted problem solving might accompany these improvements. Alternatively, familiarization may yield disappointment with the benefits of the technology, which are less direct than those of software that assists well-structured tasks, and therefore reduce the motivation to learn to use the software effectively. User attitudes might then suffer, since complex software includes more options to be identified, mastered, and integrated, as well as more opportunity to encounter difficulty.
It is also possible that user performance and attitudes could diverge with practice if the user is distracted by aid complexity that channels attention away from effective problem solving and toward the technology. In that case, attitudes would be determined by interaction with the technology rather than by the technology's effect on problem solving: while the user may appreciate the complexity of the software system, performance may not correspondingly benefit, and attitudes would derive from engaging the software rather than from effective problem solving.

Research findings are not completely consistent regarding the effects of decision-aid familiarization on performance, but on balance the evidence suggests that familiarization produces favorable results. Whelan et al. (2003)68 provided a non-computer-based aid to female breast cancer patients and found that knowledge about breast cancer and satisfaction with the aid increased with aid use. The benefits of experience should be greater for a relatively more complex technology


because experience enables the selection, adaptation, and integration of more technology features. However, an alternative argument exists for ill-structured problem solving: assistance is frustrating to apply to such problems, making the shortcomings of such assistance especially prominent.36 Those shortcomings do not disappear with practice. In addition, the added complexity and difficulty of integrating computer technology into the decision process may, while seeming relevant, distract the decision maker from a good problem-solving process. Integrating technology into the process might even lead the user to confuse success at navigating the technology with success at making a quality decision. The result is that assisted decision makers may invest time and effort in the technology that would be better spent solving the problem. We therefore propose this hypothesis:

Hypothesis 2. The performance of technology-assisted decision makers will be influenced by familiarization with the decision aid.

Attitudes toward computer-aided decision making differ in part from attitudes toward other aspects of management because computer use is typically more solitary and less socially visible than many other managerial tasks. This suggests attitudes toward the decision process may adhere more to the predictions of classical expectancy theory, which focuses on an individual process,37 than to the predictions of socially shaped theories of attitudes.38,39 If computer assistance improves performance and produces associated intrinsic rewards, the result will be positive attitudes toward the technology and the decision process. However, research findings do not unequivocally support such a straightforward relationship.
Some research finds that nonroutine tasks dampen attitudes toward associated technology.36 This effect seems to result from frustration with difficult-to-use technology and discovery of the technology's limitations when solving complex and uncertain problems. Conflicting and unclear findings in other studies might be explained by the many other factors that influence decision attitudes.8,13,23,29,30,68 In general, we expect that the richness of interaction with computerized assistance will influence decision attitudes and that this effect will be reinforced by repeated decision making. Therefore, we offer these hypotheses:

Hypothesis 3. Decision maker attitudes toward the decision process and toward problem solutions will be influenced by the complexity of computer assistance.

Hypothesis 4. The attitudes of technology-assisted decision makers toward the decision process will change with familiarization more than the attitudes of unassisted decision makers.

3. Method

3.1. Subjects

A total of 90 subjects were recruited for this experiment. All were students or very recent graduates (approximately 82% upper-level undergraduate and


18% master's-level students) in the business school at a large Midwestern university. There were 41 males and 49 females, nearly evenly distributed across conditions. The average age of subjects was 23.2 years. Each subject had considerable familiarity with routine computer use but no experience with decision-assisting software.

3.2. Experimental design

Subjects were randomly assigned to one of three computer-assistance groups: no assistance, simple instructional assistance, and complex assistance. Simple assistance consisted of a software program that provided a bare-bones description of a normative, multi-step decision process. It also provided a tool for listing and scoring potential solutions. The complex assistance was a commercial software program supporting the same rational decision process described by the simple assistance. However, this software differed from the simple assistance in (1) the wide variety of ways assistance was provided at each stage of the decision process and (2) the extent of interactive assistance requiring input from the user.a Some options were interdependent, in that choice of an option might require additional choices among other options to complete a decision step. The complex assistance also offered many utilities to facilitate printing, storage of intermediate work, and rework of earlier steps, and it provided a flow chart of the decision process and help screens. These features added typical complexity that distinguished the complex from the simple assistance.

All subjects analyzed two short business cases, seven days apart, enabling subjects to practice using the decision aid.
Both cases have been used in a previous study and were found to be viewed by subjects as comparable in difficulty and interest.23 Although the hypothesis tests do not depend on the order of case presentation, the order of the cases was controlled to standardize conditions for all subjects.b,c,d

a Our interest was in the role of the complexity of decision-assisting software in determining the effects of familiarization, rather than in familiarization per se. In addition, we were concerned about potential contamination across conditions if a case completed by one group at time 1 was later completed by another group. As such, as noted below, one case was administered to all subjects at time 1 and a second case at time 2. While the cases were chosen because they have been found to be similar on important dimensions, the design nevertheless precludes conclusions regarding learning for all subjects but enables conclusions regarding learning for assisted versus unassisted subjects.

b In the Veterans Day case, the labor relations department of a financially troubled railroad is trying to save $100,000 by arranging for employees to observe Veterans Day on November 11 rather than October 27, as established in the union contracts. One of approximately 30 brotherhoods has scheduled a celebration for October 27 and will not agree to the change. The other brotherhoods seem to want unanimity before agreeing to the change. A deadlock, and the threat of more serious labor problems, face the decision maker. In the World Electronics Company case, a young entrepreneur has recently sold or shut down his successful retail store in order to concentrate his efforts on mail-order sales. However, his catalog is receiving a poor response from customers, and he does not know the reason for this.

c Decision Aid II, Kepner–Tregoe.

d In some situations it may be permissible to test in one decision situation and generalize to another. For instance, while models of man61 are primarily intended for application in situations


3.2.1. Instructions to subjects

Each subject in the two assisted treatment groups was told how to start his or her particular decision aid but was not given additional instructions. This was done to minimize external influence on process attitude measurements.40 Technical help was available from the experimenter to answer subjects' questions regarding program command syntax; in eight instances, a computer failure required a restart of the program. Help was not provided for interpreting or analyzing the problem situation, and similar numbers of technical questions were raised by the users of each aid. Subjects were instructed not to indicate in their solutions whether assistance of any type was used. All cases were typed prior to evaluation.

3.2.2. Dependent variable measures

Two scales, the 14-item decision report characteristics scale and the 28-item decision process and solutions scale, both developed for decision aid research by Aldag and Power,23 were used to measure the dependent variables.

3.2.3. Decision report characteristics (decision quality)

This 14-item scale was designed to measure performance at solving any ill-structured decision problem by assessing the extent to which elements of a normative decision process appear in the solution. Choices in ill-structured situations cannot be judged as correct,41 making multi-item subjective measures of decision quality desirable. Factor analysis of data from previous research identified five dimensions, which were used in this study: (1) logical analysis (items 1, 2, 6, 7, 13, and 14), (2) problem elucidation (items 8 and 11), (3) rater's affect toward the solution (items 3 and 4), (4) decision flaws (items 5 and 10), and (5) alternative generation (items 9 and 12). Responses to scale items ranged from 1 (completely disagree) to 5 (completely agree). The performance measures are not case specific.
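The item-to-subscale mapping above can be sketched as a small scoring routine. The item groupings come directly from the text; the function and variable names are illustrative, not part of the original instrument.

```python
# Hypothetical scoring sketch for the 14-item decision report
# characteristics scale; item-to-subscale groupings are from the text.
SUBSCALES = {
    "logical_analysis":       [1, 2, 6, 7, 13, 14],
    "problem_elucidation":    [8, 11],
    "affect_toward_solution": [3, 4],
    "decision_flaws":         [5, 10],
    "alternative_generation": [9, 12],
}

def score_report(item_responses):
    """Sum 1-5 ratings into the five factor-derived subscale scores.

    item_responses: dict mapping item number (1-14) to a 1-5 rating.
    """
    return {name: sum(item_responses[i] for i in items)
            for name, items in SUBSCALES.items()}

# Example: a rater who marked every item 3 (the scale midpoint).
ratings = {i: 3 for i in range(1, 15)}
scores = score_report(ratings)
```

Summing items this way means subscale ranges differ (6 to 30 for logical analysis, 2 to 10 for the two-item subscales), which is consistent with the differing means reported in Table 2.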
In this study, reliability (coefficient alpha) values for the five decision performance measures, averaged over both cases, were: (1) logical analysis (0.97), (2) problem elucidation (0.99), (3) rater's affect toward the solution (0.93), (4) decision flaws (0.69), and (5) alternative generation (0.94). The decision flaws measure was excluded from further analyses because its reliability was much lower than that of the other quality measures and slightly below the accepted level for research (i.e., 0.70).42

Three doctoral students at a major Midwestern university served as expert raters and used this scale to score each subject's case solutions. Each rater was compensated for evaluating both reports from each of the 90 subjects. Raters were told only the general nature of the study and were blind to information regarding treatment condition. Average interrater reliabilities were 0.80 for case 1 solutions, 0.80 for case 2 solutions, and 0.88 for combined solutions to cases 1 and 2.

d (continued) ...in which there is no objective criterion measure, and for which development of an actuarial model is impossible, the validity of such models is tested in situations where an objective criterion is available. In that case, however, there is no fundamental difference between the decision situation of interest and that employed in the study; they differ only in the presence or absence of a criterion variable. In the situation examined in this study, this is not the case; ill-structured and structured problems differ in critical ways that preclude generalization from one to the other.

3.2.4. Attitudes toward the decision process and solution

This scale measures seven attitudes of the user toward the decision process and his or her case solution: (1) confidence in decision quality (items 2, 8, and 20), (2) enhancement of problem-solving skills (items 6, 14, and 19), (3) satisfaction with resource expenditure (items 4, 12, 18, and 22), (4) perceived acceptability of solution (items 3, 11, 16, and 24), (5) perceived process structure (items 1, 13, and 21), (6) perceived process adequacy (items 7, 10, 15, and 23), and (7) positive affect toward the process (items 5, 9, 17, and 25). The scale was completed twice by each subject, once after solving each case problem. Responses to each item ranged from 1 (completely disagree) to 7 (completely agree).

Reliability values in this study for the seven attitude measures were: (1) confidence in decision quality (0.84), (2) enhancement of problem-solving skills (0.87), (3) satisfaction with resource expenditure (0.83), (4) perceived acceptability of solution (0.74), (5) perceived process structure (0.79), (6) perceived process adequacy (0.70), and (7) positive affect toward the process (0.83).
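As a rough illustration of the reliability statistics reported in this section: Cronbach's alpha follows its standard formula, but the paper does not state which interrater index was used, so the sketch below assumes a mean pairwise Pearson correlation across raters. All data and names here are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_subjects x k_items) array of item responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def mean_pairwise_r(rater_scores):
    """Mean pairwise Pearson correlation across raters: one common
    interrater-reliability index (the paper's exact index is not stated).

    rater_scores: (n_raters x n_solutions) array of scores.
    """
    r = np.corrcoef(rater_scores)
    return r[np.triu_indices_from(r, k=1)].mean()

# Hypothetical data: 90 subjects answering a 4-item subscale, and
# 3 raters scoring 90 case solutions.
rng = np.random.default_rng(1)
trait = rng.normal(size=(90, 1))                   # latent quality/attitude
items = trait + 0.5 * rng.normal(size=(90, 4))     # correlated item responses
raters = trait.T + 0.4 * rng.normal(size=(3, 90))  # noisy rater scores

alpha = cronbach_alpha(items)   # high, since the items share the latent trait
irr = mean_pairwise_r(raters)   # averaged over the three rater pairs
```

With items driven by a common trait plus noise, alpha approaches 1 as the noise shrinks, matching the pattern of the high alphas (0.93–0.99) reported for the multi-item performance measures.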

4. Results

A correlation matrix for all dependent variables is presented in Table 1. Dependent variable means and standard deviations for each treatment condition appear in Table 2. The values in Table 2 indicate that both performance and user attitudes were generally more favorable for subjects in the simple condition than in the complex or unassisted conditions, and this contrast is greater between the simple and complex conditions. The trend in decision performance from case 1 to case 2 favored both simple and complex software users over unassisted decision makers: on every performance measure for the simple group and on three of four measures for the complex group. The exception for the complex group was problem elucidation, for which the complex and unassisted groups exhibited similar change.

Although not of direct interest to the hypotheses in this study, performance for the unassisted subjects generally declined from case 1 to case 2. This is probably not due to case content, because the same cases were analyzed by all three groups and no similar differences between the cases have appeared in other studies.23 It may be due to subject characteristics or to the one-week delay between cases 1 and 2. It is also possible that the experience of solving the first ill-structured case was frustrating and reduced subjects' motivation for case 2.

Table 1. Correlation matrix.

Decision performance variables: 1. Logical analysis, case 1; 2. Logical analysis, case 2; 3. Problem elucidation, case 1; 4. Problem elucidation, case 2; 5. Affect toward solution quality, case 1; 6. Affect toward solution quality, case 2; 7. Alternative generation, case 1; 8. Alternative generation, case 2.

Decision attitude variables: 9. Confidence in decision, case 1; 10. Confidence in decision, case 2; 11. Improvement in problem solving ability, case 1; 12. Improvement in problem solving ability, case 2; 13. Satisfaction with time and effort, case 1; 14. Satisfaction with time and effort, case 2; 15. Decision acceptability, case 1; 16. Decision acceptability, case 2; 17. Process structure, case 1; 18. Process structure, case 2; 19. Process adequacy, case 1; 20. Process adequacy, case 2; 21. Positive affect toward process, case 1; 22. Positive affect toward process, case 2.

∗p < 0.05, two-tailed; ∗∗p < 0.01, two-tailed; ∗∗∗p < 0.001, two-tailed.

Table 2. Variable means and standard deviations by condition.

                                       No Assist.       Simple Assist.   Complex Assist.
Dependent Variable                     Mean    (S.D.)   Mean    (S.D.)   Mean    (S.D.)

Independent Evaluation of Solution
1. Logical analysis
   Case 1                              18.92   (4.85)   20.20   (5.17)   17.62   (5.75)
   Case 2                              15.46   (5.16)   19.10   (5.94)   15.56   (5.27)
2. Problem elucidation
   Case 1                              15.53   (7.58)   15.44   (8.17)   12.65   (7.65)
   Case 2                              14.57   (6.61)   17.41   (8.21)   12.35   (6.79)
3. Affect toward the solution
   Case 1                              17.07   (4.53)   15.81   (6.04)   15.29   (6.00)
   Case 2                              17.00   (4.76)   18.56   (4.64)   16.84   (4.42)
4. Alternative generation
   Case 1                              14.17   (6.69)   14.96   (6.10)   12.03   (5.76)
   Case 2                              10.13   (3.04)   13.59   (7.00)   12.26   (4.37)

Attitudes Toward the Decision Process and Solution
1. Confidence in decision
   Case 1                              14.56   (3.24)   14.28   (3.18)   12.36   (3.39)
   Case 2                              14.16   (4.19)   14.60   (3.83)   12.64   (4.27)
2. Enhanced problem-solving ability
   Case 1                              13.65   (3.67)   13.96   (2.35)   13.79   (3.13)
   Case 2                              13.27   (3.98)   14.96   (2.51)   13.07   (4.09)
3. Satisfaction with resource expenditure
   Case 1                              19.81   (4.40)   19.76   (4.59)   15.54   (4.66)
   Case 2                              20.50   (4.43)   21.24   (3.35)   17.11   (4.15)
4. Perceived acceptability of solution
   Case 1                              17.46   (3.34)   17.20   (2.66)   16.21   (2.92)
   Case 2                              18.50   (4.01)   19.72   (4.64)   17.21   (4.66)
5. Perceived process structure
   Case 1                              14.50   (3.88)   14.56   (3.27)   14.39   (2.86)
   Case 2                              13.54   (4.27)   15.08   (3.48)   13.86   (3.33)
6. Perceived process adequacy
   Case 1                              19.15   (3.68)   18.88   (4.03)   15.39   (4.00)
   Case 2                              19.77   (3.76)   19.52   (2.93)   15.50   (3.85)
7. Positive affect
   Case 1                              20.38   (4.61)   19.92   (4.08)   18.07   (4.24)
   Case 2                              20.38   (5.34)   21.20   (3.58)   17.89   (4.43)

Analyses were performed using the general linear model (GLM) module in SPSS. Because the relationship between aid complexity and the dependent variables was not assumed to be linear, both linear and polynomial effect tests were conducted for complexity. The analyses were first conducted for the performance variables (Hypotheses 1 and 2), then for user attitudes (Hypotheses 3 and 4). Separate analyses


Fig. 1. Decision performance. (Mean scores for logical analysis, problem elucidation, affect toward the solution, and alternative generation, plotted for the no-assistance, simple-assistance, and complex-assistance conditions; vertical axis from 10.00 to 22.00.)

were performed because measures of these dependent variables came from independent sources. Complexity significantly influenced performance (F = 2.976, p < 0.01). When the complexity effect was separated into linear and quadratic components, the quadratic (nonlinear) components better explained logical analysis, problem elucidation, and alternative generation. This nonlinear relationship is visible in Fig. 1, which depicts average performance scores for each of the three treatment groups. Consistent with other studies that have found curvilinear performance relationships, access to simple assistance improved performance, but this positive effect did not appear when the assistance was more complex. Unassisted decision makers and those using complex assistance varied little on any of the four performance dimensions, while those using simple assistance consistently performed better. Hypothesis 1 was supported. To test Hypothesis 2, that familiarization with a decision aid would influence the performance of assisted decision makers, an analysis was performed using a dummy variable to distinguish assisted from unassisted decision makers. Although the performance trend from case 1 to case 2 was better for assisted subjects than for controls on all four performance dimensions, the effect was not significant (F = 0.972, ns), so Hypothesis 2 received insufficient support. A multivariate test for the seven decision attitudes was also significant for the complexity of assistance (F = 2.369, p < 0.01); Hypothesis 3 was therefore supported. The nonlinear relationship between the complexity of assistance and decision maker attitudes is apparent in Fig. 2. Univariate results reveal that the influence of complexity on decision attitudes was significant for

Fig. 2. User attitudes toward the process and solution. [Figure: mean attitude scores (vertical axis 12.00–22.00) on confidence in decision quality, enhancement of problem-solving ability, satisfaction with resource expenditure, acceptability of solution to others, process structure, process adequacy, and affect toward the process for the no assistance, simple assistance, and complex assistance groups.]

the decision maker’s confidence in the decision, enhanced problem-solving ability, perceived acceptability of the solution, process structure, and positive affect. These findings indicate that a broad array of attitudes was influenced by complexity. A closer look shows that those attitudes were more favorably influenced by simple assistance than by complex assistance, which failed to enhance, and in some cases diminished, attitude favorability relative to that of unassisted subjects. With familiarization, attitudes improved more for users of simple assistance than for subjects in either of the other two groups. To test Hypothesis 4, that assisted decision makers would experience a greater attitude change from case 1 to case 2 than unassisted decision makers, another analysis was performed comparing assisted with unassisted decision makers. The change in attitudes among assisted decision makers was more positive than among those who were unassisted; this was consistently true for those using simple assistance but less often so for those using complex assistance. However, the magnitude of improvement was not significant (F = 0.784, ns), failing to provide sufficient support for Hypothesis 4. In sum, simple assistance tended to enhance performance, but complex assistance failed to produce the same benefits. The same pattern appeared for decision attitudes: the most positive attitudes were reported by users of simple assistance. Post-hoc interviews with subjects did not identify any unique technical frustrations or attractions associated with either of the decision aids but confirmed that subjects thought the complex aid added complexity to the decision process while the simple aid was informative.
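The linear and quadratic trend decomposition described above can be sketched with orthogonal polynomial contrasts for a three-level ordered factor. This is an illustrative sketch only: the group means, sample size, and noise level below are hypothetical placeholders, not the study's data, and the exact SPSS GLM computation may differ in detail.

```python
import numpy as np

def trend_contrasts(groups):
    """F-ratios for linear and quadratic trend contrasts across three
    ordered, equally sized groups (none, simple, complex assistance)."""
    n = len(groups[0])                              # per-group sample size
    means = np.array([g.mean() for g in groups])
    # pooled within-group variance (MS within), equal group sizes assumed
    ms_within = np.mean([g.var(ddof=1) for g in groups])
    results = {}
    for name, c in {"linear": (-1, 0, 1), "quadratic": (1, -2, 1)}.items():
        c = np.array(c, dtype=float)
        estimate = c @ means                        # contrast of group means
        ss = estimate**2 / (np.sum(c**2) / n)       # contrast sum of squares (1 df)
        results[name] = ss / ms_within              # F with (1, 3n - 3) df
    return results

rng = np.random.default_rng(0)
# Hypothetical inverted-U pattern: simple assistance outperforms both extremes
none_, simple, complex_ = (rng.normal(mu, 2.0, 30) for mu in (14.0, 17.0, 13.5))
f = trend_contrasts([none_, simple, complex_])
print(f)  # the quadratic F should dominate the linear F
```

With an inverted-U pattern of means, nearly all between-group variation loads on the quadratic contrast, mirroring the finding that the nonlinear component better explained performance.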


5. Discussion Making complex and ill-structured decisions, with or without the assistance of computer technology, is a difficult challenge.43,44 Although research has shown that task-specific software is helpful for unambiguous tasks, the application of computer assistance to complex and ambiguous decisions does not necessarily lead to quick or assured benefits.45 In this study, decision makers not only made better decisions but also viewed the decision process and their solutions more favorably when they used assistance that was simple and taught a normative decision process than when they were provided with a more complex and supposedly more functional aid structured around the same normative process. These results for ill-structured decisions call for caution when generalizing from the findings of studies that incorporate well-structured problems. The findings of those studies may not generalize to actual managerial situations where the problems are ill-structured. The complexity of decision-assisting technology may show a curvilinear relationship with decision performance, initially benefiting but eventually weakening performance as the costs associated with heightened technological complexity overcome the benefits. Experience with the technology may not improve this relationship. The findings of this study discourage responding to increasingly complex problems by increasing the complexity of decision assistance. The very technologies designed to assist managerial problem solving may tax the manager’s limited cognitive resources, as was noted long ago by Simon.10 In addition, the inherent attractiveness of decision technology may draw cognitive resources to its manipulation and mislead the decision maker into believing the benefits of assistance are greater than they really are. These possibilities should be considered by both designers of decision aids and scholars studying the effects of technology on decision making. 
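The curvilinear complexity–performance relationship described here can be illustrated by fitting a quadratic to mean performance at the three complexity levels. The coding and mean scores below are hypothetical numbers chosen to exhibit the inverted-U shape, not the study's measurements.

```python
import numpy as np

# Complexity coded 0 (none), 1 (simple), 2 (complex); hypothetical mean
# performance scores exhibiting the inverted-U: simple assistance peaks.
complexity = np.array([0.0, 1.0, 2.0])
performance = np.array([14.2, 16.9, 13.6])

# Least-squares quadratic fit: performance ≈ b2*x^2 + b1*x + b0
b2, b1, b0 = np.polyfit(complexity, performance, deg=2)

# A negative quadratic coefficient captures "initially benefiting,
# eventually weakening performance"
print(round(b2, 2))       # → -3.0 (exact interpolation through 3 points)
peak = -b1 / (2 * b2)     # vertex of the parabola, near the "simple" level
print(round(peak, 2))     # → 0.95
```

The negative quadratic coefficient, with the fitted peak near the simple-assistance level, is the shape the text attributes to the costs of technological complexity eventually overcoming its benefits.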
It is also possible that the most valuable benefits of assistance when solving ill-structured problems lie in teaching the process rather than in facilitating the elements of the process. Assistance that does not overwhelm the decision maker but provides simple guidance in good decision making can improve decisions, as found here. Although the complex aid was theoretically sound and provided many avenues of support, results were more favorable for the relatively simple, even skeletal, aid that provided uncluttered instruction in decision making. Familiarization benefits also appeared stronger for users of simple assistance than for those using complex assistance. This is especially surprising given that most users had considerable computer, if not decision aid, experience. While these results could be interpreted as evidence of the technical superiority of the simple aid, the technical limitations of that aid argue against this conclusion. The simple aid provided none of the sophisticated breadth and depth of the complex aid. It had a limited options menu, performed primarily by providing textual explanations that could have been easily read in printed form, and provided


a module for scoring alternatives that was very limited. However, unlike the complex aid, the simple aid apparently did not distract the user with its complexity. It is conceivable that subjects felt it was appropriate or equitable to devote a certain amount of time to the task, and that those who spent more time and effort on aid manipulation (i.e., users of the complex aid) spent correspondingly less on the case solution report. Hence, future research must consider motivation to use a decision aid as an influence on effective aided decision making. In addition, attention is drawn to the difference between an aid that instructs the user in making effective decisions and an aid that performs complex tasks but fails to guide the user in understanding the process of effective decision making. It could be argued that additional familiarization might have improved responses to the aids, particularly the complex aid; repeated use of an aid may be required before the consequences of interest emerge.21 However, in view of the difficulty of convincing individuals to use and apply computerized decision aids and the persistence of early negative attitudes toward a technology (e.g., Refs. 46 and 47), it seems unlikely that aid users, unless forced to do so, would persevere in their search for an upswing in the learning curve. As such, early experience with an aid is critical; if that experience is negative, there will likely be no later experience, positive or negative. Considerable attention has been paid to the generalizability of findings from student samples.48–50 All subjects in this experiment were business students with a minimum of two years of college. Each had some work experience and some instruction in case analysis and decision making. Therefore, each had sufficient skill to perform the experimental tasks and to compose a report describing his or her analysis and recommendations.
Despite these points, it cannot be assumed that the findings would be the same if the subjects were more experienced in management and had real relationships with the organizations described in the cases. A manager in one of those organizations would likely consider the implications of solutions more seriously and devote more effort to using and learning the software. For actual managers, the range of possible solutions might differ, as might the speed of evaluating alternatives, which could influence not only decision quality but also attitudes toward the process. Further study with diverse samples is therefore important for establishing the generalizability of findings. Some directions for future research are evident. The literature suggests that a complex array of variables, including, but not limited to, technology characteristics (e.g. Refs. 51, 52, 64), task characteristics (e.g. Ref. 53), and user knowledge, expectations, and motivation (e.g. Refs. 3, 22, and 54), interact to influence the decision process and outcomes. Each warrants study. Future studies of the impacts of decision-assisting technology should also carefully consider which characteristics of the technology most influence outcomes of interest. The point is not to focus exclusively on the technology itself but on those of its characteristics that users choose to apply and that best fit the decision context. For most users, many elements of large software


programs are used little or not at all. The emphasis on features designed into a decision aid may differ from the emphasis on features preferred by users. The aid’s inherent complexity may make useful features slow to access or difficult to use as users seek to incorporate those features in their decision process. Further consideration of the influences of individual differences may also prove insightful (e.g. Refs. 23, 55–59). In addition, the match between the technology and the user’s problem-solving process,3,21 attitude toward the decision technology,46 or gender58 may influence familiarization. While decision-assisting technology advances and takes on new forms,6,9,14,60 the complexity of decision making will continue to increase. Future research on decision technology must not overlook interactions among the user, problem, technology, and the organization, each of which is part of the decision process. It is the interplay among these factors that shapes modern decisions. References 1. J. S. Dyer, P. C. Fishburn, R. E. Steuer, J. Wallenius and S. Zionts, Multiple criteria decision making, multiattribute utility theory: The next ten years, Manag. Sci. 38 (1992) 645–654. 2. T. Saaty, Decision Making for Leaders (Lifelong Learning Publications, Belmont, CA, 1982). 3. R. Al-Aomar and F. Dweiri, A customer-oriented decision agent for product selection in web-based services, Int. J. Inform. Technol. Decision Making 7 (2008) 35–52. 4. W. Chen, T. Hong and R. Jeng, A framework of decision support systems for use on the World Wide Web, J. Netw. Comput. Appl. 22 (1999) 1–17. 5. M. Head, N. Archer and Y. Yuan, World wide web navigation aid, Int. J. Hum. Comput. Stud. 53 (2000) 301–330. 6. X. Huang, Comparison of interestingness measures for web mining usage: An empirical study, Int. J. Inform. Technol. Decision Making 6 (2007) 15–41. 7. H. B. Manguerra, Internet-based decision support systems, Resource 4(4) (1997) 11–12. 8. W. L. Cats-Baril and G. P.
Huber, Decision support systems for ill-structured problems: An empirical study, Decis. Sci. 18 (1987) 350–372. 9. A. DePiante Henriksen and S. W. Palocsay, An Excel-based decision support system for scoring and ranking proposed R&D projects, Int. J. Inform. Technol. Decision Making 7 (2008) 529–546. 10. H. A. Simon, Models of Man, Social and Rational (Wiley, New York, 1957). 11. R. Brown, Consideration of the origin of Herbert Simon’s theory of “satisficing” (1933–1947), Manag. Decis. 42 (2004) 1240–1256. 12. W. R. King, G. Premkumar and K. Ramamurthy, An evaluation of the role and performance of a decision support system in business education, Decis. Sci. 21 (1990) 642–659. 13. R. Sharda, S. H. Barr and J. C. McDonnell, Decision support system effectiveness: A review and empirical test, Manag. Sci. 34 (1988) 139–159. 14. Y. Peng, G. Kou, Y. Shi and Z. Chen, A descriptive framework for the field of data mining and knowledge discovery, Int. J. Inform. Technol. Decision Making 7 (2008) 639–682. 15. I. Benbasat and B. R. Nault, An evaluation of empirical research in managerial support systems, Decis. Support Syst. 6 (1990) 203–226.


16. S. B. Eom, Mapping the intellectual structure of research in decision support systems through author cocitation analysis (1971–1993), Decis. Support Syst. 16 (1996) 315–338. 17. F. C. Sainfort, D. H. Gustafson, K. Bosworth and R. P. Hawkins, Decision support systems effectiveness: Conceptual framework and empirical evaluation, Organ. Behav. Hum. Decis. Process. 45 (1990) 232–252. 18. M. S. Silver, Decision support systems: Directed and nondirected change, Inform. Syst. Res. 1 (1990) 47–70. 19. K. E. Kendall, The significance of information systems research on emerging technologies: Seven information technologies that promise to improve managerial effectiveness, Decis. Sci. 28 (1997) 775–792. 20. P. N. Finley and C. J. Martin, The state of decision support systems: A review, Omega 17 (1989) 525–531. 21. J. E. Kottemann and W. E. Remus, A study of the relationship between decision model naturalness and performance, MIS Quarterly 13 (1989) 171–181. 22. P. Todd and I. Benbasat, Evaluating the impact of DSS, cognitive effort, and incentives on strategy selection, Inform. Syst. Res. 10(4) (1999) 356–374. 23. R. J. Aldag and D. P. Power, An empirical assessment of computer-assisted decision analysis, Decis. Sci. 17 (1986) 572–588. 24. G. Pitz, Human engineering of decision aids, in Analysing and Aiding Decision Processes, eds. P. Humphreys, O. Svenson and A. Vari (North-Holland Publishing Company, Amsterdam, 1983), pp. 205–221. 25. P. C. Chu and J. J. Elam, Induced systems restrictiveness: An experimental demonstration, IEEE Trans. Syst. Man Cybern. 20 (1990) 195–201. 26. W. Huang, K. K. Lai, Y. Nakamori, S. Wang and L. Yu, Neural networks in economics and finance forecasting, Int. J. Inform. Technol. Decision Making 6 (2007) 113–140. 27. D. Te’eni, Determinants and consequences of perceived complexity in humancomputer interaction, Decis. Sci. 20 (1989) 166–181. 28. G. Gal and P. J. 
Steinbart, Interface style and training task difficulty as determinants of effective computer assisted knowledge transfer, Decis. Sci. 23 (1992) 128–143. 29. F. D. Davis and J. E. Kottemann, User perceptions of decision support effectiveness: Two production planning experiments, Decis. Sci. 25 (1994) 57–78. 30. R. J. Morris, Enhancing strategic vision in the strategic management course: An empirical test of two computerized case analysis tools, Acad. Manag. Best Papers Proc. (1990) 122–126. 31. F. D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13 (1989) 319–340. 32. G. W. Dickson, G. DeSanctis and D. J. McBride, Understanding the effectiveness of computer graphics for decision support: A cumulative experimental approach, Commun. ACM 29(1) (1986) 40–47. 33. H. Eskandari and L. Rabelo, Handling uncertainty in the analytical hierarchy process: A stochastic approach, Int. J. Inform. Technol. Decision Making 6 (2007) 177–189. 34. H. A. Simon, The New Science of Management Decision (Prentice-Hall, Upper Saddle River, New Jersey, 1977). 35. M. H. Meyer and K. F. Curley, An applied framework for classifying the complexity of knowledge-based systems, MIS Quarterly 15 (1991) 454–472. 36. D. Goodhue and R. L. Thompson, Task-technology fit and individual performance, MIS Quarterly 19 (1995) 213–236. 37. V. H. Vroom, Work and Motivation (Wiley, New York, 1964).


38. I. Ajzen and M. Fishbein, Understanding Attitudes and Predicting Social Behavior (Prentice-Hall, Englewood Cliffs, N.J., 1980). 39. G. R. Salancik and J. Pfeffer, A social information processing approach to job attitudes and task design, Admin. Sci. Quarterly 23 (1978) 224–253. 40. B. T. Tuttle and M. H. Stocks, The effects of task information and outcome feedback on individuals’ insight into their decision models, Decis. Sci. 28 (1997) 421–442. 41. R. O. Mason and I. I. Mitroff, A program for research on management information systems, Manag. Sci. 19 (1973) 475–485. 42. J. C. Nunnally, Psychometric Theory, 2nd edn. (McGraw-Hill, New York, 1978). 43. U. Cebeci and D. Ruan, A multi-attribute comparison of Turkish quality consultants by fuzzy AHP, Int. J. Inform. Technol. Decision Making 6 (2007) 191–207. 44. X. Tang, Towards meta-synthetic support to unstructured problem-solving, Int. J. Inform. Technol. Decision Making 6 (2007) 491–508. 45. C. Theetranont, P. Haddawy and D. Krairit, Integrating visualization and multiattribute utility theory for online product selection, Int. J. Inform. Technol. Decision Making 6 (2007) 723–750. 46. D. K. Peterson and G. F. Pitz, Effect of input from a mechanical model on clinical judgment, J. Appl. Psychol. 71 (1986) 163–167. 47. V. Venkatesh and C. Speier, Computer technology training in the workplace: A longitudinal investigation of the effect of mood, Organ. Behav. Hum. Decis. Process. 79 (1999) 1–28. 48. J. Greenberg, The college sophomore as guinea pig: Setting the record straight, Acad. Manag. Rev. 12 (1987) 157–159. 49. G. A. Liyanarachchi and M. J. Milne, Comparing the investment decisions of accounting practitioners and students: An empirical study on the adequacy of student surrogates, Account. Forum 29 (2005) 121–135. 50. E. A. Locke, Generalizing from Laboratory to Field Settings (Lexington Books, Lexington, MA, 1986). 51. S. Asghar, D. Alahakoon and L.
Churilov, Categorization of disaster decision support needs for the development of an integrated model for DMDSS, Int. J. Inform. Technol. Decision Making 7 (2008) 115–145. 52. C. Gonzalez and G. M. Kasper, Animation in user interfaces designed for decision support systems: The effects of image abstraction, transition, and interactivity on decision quality, in Emerging Information Technologies: Improving Decisions, Cooperation, and Infrastructure, ed. K. E. Kendall (Sage, Thousand Oaks, CA, 1999), pp. 45–74. 53. T. Guimaraes, M. Igbaria and M. Lu, The determinants of DSS success: An integrated model, Decis. Sci. 23 (1992) 409–430. 54. T. Tyszka, Two pairs of conflicting motives in decision making, Organ. Behav. Hum. Decis. Process. 74 (1998) 189–211. 55. R. Agarwal and J. Prasad, The role of innovation characteristics and perceived voluntariness in the acceptance of information technologies, Decis. Sci. 28 (1997) 557–582. 56. R. Agarwal and J. Prasad, Are individual differences germane to the acceptance of new information technologies? Decis. Sci. 30 (1999) 361–391. 57. D. Ghosh and M. R. Ray, Risk, ambiguity, and decision choice: Some additional evidence, Decis. Sci. 28 (1997) 81–104. 58. E. Karahanna, M. Ahuja, M. Srite and J. Galvin, Individual differences and relative advantage: The case of GSS, Decis. Support Syst. 32 (2002) 327–341.


59. V. Venkatesh, M. G. Morris and P. L. Ackerman, A longitudinal field investigation of gender differences in individual technology adoption decision-making processes, Organ. Behav. Hum. Decis. Process. 83 (2000) 33–60. 60. J. P. Shim, M. Warkentin, J. F. Courtney, D. J. Power, R. Sharda and C. Carlsson, Past, present, and future of decision support technology, Decis. Support Syst. 33 (2002) 111–126. 61. L. R. Goldberg, Man versus models of man: A rationale, plus some evidence, for a method of improving on clinical inferences, Psychol. Bull. 73 (1970) 422–432. 62. I. Benbasat and B. R. Nault, Empirical research in managerial support systems: A review and assessment, in Recent Developments in Decision Support Systems (2nd edn.), NATO ASI Series (F), Computer Systems Sciences, Vol. 101, eds. C. W. Holsapple and A. B. Whinston (Springer, Berlin, 1993), pp. 383–437. 63. F. D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13(3) (1989) 319–339. 64. W. N. Dilla and D. N. Stone, Representations as decision aids: The asymmetric effects of words and numbers on auditors’ inherent risk judgments, Decis. Sci. 28(3) (1997) 709–743. 65. M. Gell-Mann, The Quark and the Jaguar: Adventures in the Simple and the Complex (Freeman, New York, 1995). 66. G. Johns, The essential impact of context on organizational behavior, Acad. Manag. Rev. 31(2) (2006) 386–408. 67. C. H. Kepner and B. B. Tregoe, The New Rational Manager (Princeton Research Press, Princeton, NJ, 1981). 68. T. Whelan, C. Sawka, M. Levine, A. Gafni, L. Reyno, A. Willan, J. Julian, S. Dent, H. Abu-Zahra, E. Chouinard, R. Tozer, K. Pritchard and I. Bodendorfer, Helping patients make informed choices: A randomized trial of a decision aid for adjuvant chemotherapy in lymph node-negative breast cancer, J. Natl. Cancer Inst. 95 (2003) 581–587. 69. R. M. Yerkes and J. D.
Dodson, The relation of strength of stimulus to rapidity of habit-formation, J. Comp. Neurol. Psychol. 18 (1908) 459–482.
