Inf Syst E-Bus Manage (2009) 7:21–37 DOI 10.1007/s10257-007-0049-x ORIGINAL ARTICLE

Trainee reactions and task performance: a study of open training in object-oriented systems development Liping Liu Æ Elizabeth E. Grandon Æ Steven R. Ash

Accepted: 10 February 2007 / Published online: 20 March 2007 © Springer-Verlag 2007

Abstract In this study, we examine two trainee reactions, ease of learning and ease of use, and their relationships with task performance in the context of object-oriented systems development. We conducted an experiment involving over 300 subjects, from which 72 trainees who met all of the criteria were selected for analysis in a carefully controlled study. We found that ease of learning was strongly correlated with task performance whereas ease of use was not. The finding was unexpected: ease of learning and ease of use are two overlapping concepts, yet their effects on task performance are very different. We offer a theoretical explanation for this paradoxical finding and its implications for the improvement of training evaluation.

Keywords Training evaluation · Open learning · Perceived ease of learning · Perceived ease of use · Systems development · Object-oriented analysis

L. Liu (✉)
The University of Akron, College of Business Building 351, Akron, OH 44325-4801, USA
e-mail: [email protected]

E. E. Grandon
University of Bío-Bío, Casilla 5-C, Concepción, Chile
e-mail: [email protected]

S. R. Ash
College of Business and Administration, The University of Akron, Akron, OH 44325-4801, USA
e-mail: [email protected]


1 Introduction

Practitioners generally believe that object-oriented development, although not a silver bullet, is useful in alleviating many troublesome issues associated with structured techniques such as data flow and entity-relationship diagrams (Brooks 1987; Sircar et al. 2001). Among many benefits, object-orientation improves the communication between end-users and systems analysts, increases productivity and reliability, and reduces the burden of code maintenance (Booch 1991; Garceau et al. 1993; Yourdon 1994; Eaton and Gatian 1996; Johnson 2000). It allows a seamless transition among the different phases of software development by using the same language for analysis, design, and implementation (Graham 1994).

Most current systems analysts are trained or experienced in structured techniques. The transition to object-orientation not only brings benefits and opportunities but also forces organizations to make a hard decision: hire outside experts who are skilled in the new techniques, or retrain their existing systems analysts (Nelson et al. 2002). Object-orientation is qualitatively different from its structured counterparts and entails a different mindset (Henderson-Sellers 1992). Thus, new employees trained in object-orientation, if they can be found, seem to be a perfect solution: they have the proper mindset and the needed skill set (Nelson et al. 2002). However, their lack of business domain knowledge greatly discounts their utility. Empirical evidence has shown that it may take years for new hires to become useful to an organization (Lee and Allen 1982). Therefore, a viable alternative for many organizations is to retrain their existing systems developers.

Training can be costly and time-consuming, and, perhaps most importantly, the metrics used to evaluate its success are not completely clear. Training evaluation is consistently identified as one of the greatest challenges in the training literature (Goldstein 1993; Salas and Cannon-Bowers 2001).
In the classic framework of training effectiveness, Kirkpatrick (1959a, b, 1960a, b) categorizes training outcomes into four layers: reactions, learning, behavior, and results. Reactions refer to affective responses of how trainees liked and felt about training. Learning refers to knowledge acquisition and retention. Behavior emphasizes the transfer of training to on-the-job performance. Finally, results measure the organizational impact of the training. Of the four measures, trainee reactions are by far the most commonly collected evaluation criteria (Saari et al. 1988; Bassi et al. 1996; Alliger et al. 1997). Although immediate trainee reaction measures are easy to collect, there is concern over how well trainee reactions act as surrogates of other, more meaningful indicators such as learning or behavior change (Alliger et al. 1997). Whereas some researchers have questioned the value of reactions in gauging other criteria, others have found strong correlations between reactions and actual learning. It is our contention that whether reactions act as good surrogates of learning depends on whether the right reaction measures are selected. Goldstein and Gilliam (1990) draw attention to training consequences arising from greater technological sophistication and an increased requirement for cognitive complexity in work, justifying the importance of training evaluation methods specific to systems development professionals.

Paradoxically, while OO (object-oriented) techniques have many potential benefits, their adoption in organizations is slow and marginal (Glass 1999). This has motivated many to search for a possible explanation, and the usability of OO techniques has been identified as a primary reason. Indeed, in the ongoing debate on whether object-orientation represents an evolution, revolution, or architectural change (Sircar et al. 2001), existing studies have exclusively considered the comparative usability of OO techniques and whether it is easy or difficult to retrain procedure-oriented developers to use the techniques. For example, Vessey and Conger (1994) and Agarwal et al. (1996b) found that OO techniques were more difficult to learn and use than structured methods within a group of novices. In contrast, Lee and Pennington (1994) found that OO analysis was easier and faster to learn. Sharp and Griffyth (1999) found that basic OO concepts were not regarded as difficult regardless of respondents' prior experience. Boehm-Davis and Ross (1992) had a similar finding that OO methods reduced complexity and design time while increasing solution completeness. In these studies, the measurement of ease of use varied, but all were based on objective performance. There has been virtually no research into the role of trainee reactions and their impact on task performance. The objective of this study is to examine this relationship in an open training environment.

2 Theoretical development

2.1 Ease of use and ease of learning

Kirkpatrick's taxonomy is by far the most influential framework of training outcomes. Alliger et al. (1997) proposed an augmented framework, in which trainee reactions are divided into affective responses and utility judgments. Independently, Warr and Bunce (1995) empirically established three categories of trainee reactions: perceived enjoyment, perceived usefulness, and perceived difficulty. Perceived enjoyment measures the extent to which trainees view training as enjoyable, whereas perceived usefulness focuses on the potential applicability of the training material to their job. They correspond to the affective and utility judgments of Alliger et al. (1997). Perceived difficulty, on the other hand, measures the cognitive and emotional effort required to master training materials, including both skill acquisition and application.

In the end-user computing literature, perceived ease of use (PEU) is defined as the degree to which one believes that using a system, technology, or skill is free of effort (Davis 1989). Its measurement, however, often includes as a sub-dimension perceived ease of learning (PEL), which we define as the degree to which one believes that acquiring a skill or comprehending a technology or system is free of effort. For example, in his final list of six items to measure PEU, Davis uses two items to tap into PEL. In this paper, we argue that PEU and PEL are two distinct concepts. Their consolidation is appropriate only when comprehension can lead to immediate ease of use. This is the case for most end-user applications, on which Davis and others have focused. In areas of high cognitive complexity, however, understanding a concept does not automatically translate into a successful application. In systems development, for example, the use of OO techniques requires more than just rote memorization and comprehension of OO concepts and processes. It requires domain knowledge about a business where the techniques are applied. It entails abstracting task elements and relationships into mental models. It also demands cognitive skills that identify business data and functions and optimally allocate them into a community of collaborating objects (Jacobson et al. 1999).

While usability has been an important issue for systems designers for many years (cf. Silver 2006), cognition theories provide further rationales for differentiating PEL from PEU. Taxonomically, learning occurs in one of three overlapping domains: cognitive, affective, and psychomotor. Within the cognitive domain, Bloom (1956) classifies learning into six hierarchical levels: recall, comprehension, application, analysis, synthesis, and evaluation. The most basic level is recall of information, ranging from specific facts to more general patterns and theories. Comprehension refers to a low-level understanding of training materials. At the application, analysis, and synthesis levels, the learner applies training materials to particular situations, breaks down the materials into constituent elements, relationships, and interactions, and combines elements into an integrated whole. At the most sophisticated level, the learner makes a judgment about the value of the materials.
There is some debate concerning the precise distinctions among the six levels (Seddon 1978; Kottke and Schuster 1990). To sidestep some of the debate, other researchers tend to regroup the six levels into a coarser classification (Taylor et al. 2003). For example, the taxonomy is sometimes collapsed into so-called surface and deep learning (Martin and Saljo 1976) to differentiate between rote memorization and the application and appreciation of abstract concepts. According to this taxonomy, our conceptualization of PEL measures the reaction to learning at the recall and comprehension levels, whereas PEU taps the application, analysis, and synthesis of training materials. Furthermore, perceived enjoyment and usefulness are trainee reactions at the evaluation level (Table 1). Despite the distinction between the constructs, ease of learning is a precondition of ease of use. It is difficult to expect one to use OO techniques if he or she does not understand them. Similarly, a good understanding contributes to effective use. Therefore, we anticipate that PEL positively influences PEU, as stated by the following hypothesis:

Hypothesis 1. The perceived ease of learning of object-oriented techniques positively influences the perceived ease of use of the techniques.


Table 1 Mapping trainee reactions to learning levels

Trainee reaction: Ease of learning
Learning experience: Recall + comprehension
Definition: The degree to which one believes that acquiring a skill or comprehending a technology or system is free of effort

Trainee reaction: Ease of use
Learning experience: Application + analysis + synthesis
Definition: The degree to which one believes that using a system, technology, or skill is free of effort

Trainee reaction: Affective and utility reactions
Learning experience: Evaluation
Definition: The extent to which one views training as enjoyable and useful in their job

2.2 Reactions and task performance

Warr and Bunce (1995) forcefully argue that, among all trainee reactions, perceived difficulty is the only one expected to predict actual learning, which in our case is called task performance. They predict that trainees who find the materials particularly difficult are likely to be less able to acquire new knowledge or skill. The prediction does not recognize a role for affective and usefulness reactions in learning. Some may argue that a person learns something because it is useful. However, Warr and Bunce (1995) argue that perceived usefulness is associated with changes in work behavior, because trainees who see training materials as relevant to their work have a greater opportunity to transfer their learning than do those for whom they are of less relevance. These predictions have all been supported by accumulated empirical evidence. In their study of 106 junior managers over a 7-month period, Warr and Bunce (1995) found a modest correlation (–0.14) between perceived difficulty and actual learning. However, the correlation between usefulness and actual learning was very small (0.04). In a meta-analysis of 34 empirical studies and 115 correlations, Alliger et al. (1997) found that, although usefulness had a modest average correlation with immediate post-training knowledge (0.12) and training transfer (0.18), its mean correlation with task performance was not significantly different from zero. In addition, affective reaction (or enjoyment) had small mean correlations across tens of studies with immediate post-training knowledge (0.02), task performance (0.03), and training transfer (0.07). In light of this empirical evidence, in this study we set aside the usefulness and affective reactions and focus on usability and learnability reactions, i.e., PEU and PEL. Since PEL is the opposite of perceived difficulty, the existing empirical evidence supports the following hypothesis:

Hypothesis 2.
The perceived ease of learning of object-oriented techniques positively influences the performance of using the techniques.

It is intuitive that ease of use directly impacts task performance. Following the logic of Warr and Bunce (1995), we argue that trainees who find a technology difficult to use are likely to be less able to use the technology well, which will in turn hinder their task performance in applying the technology. This leads to our next hypothesis:

Hypothesis 3. The perceived ease of use of object-oriented techniques positively influences the performance of using the techniques.

Hypotheses 2 and 3 may also be supported by expectancy theory (Vroom 1964) in addition to the evidence in the training literature. Davis (1989) argues that whether individuals will or will not use an information system depends upon the extent to which they believe it will improve their job performance, i.e., its perceived usefulness. However, even when potential users believe that a technology is useful, they may at the same time believe that it is too hard to learn and too hard to use, and that the performance benefits of usage are outweighed by the perceived effort of using the technology. This argument implicitly assumes a few possible causes for a user to choose not to use a technology. One of them is that PEU and PEL measure the probability that effort leads to performance. Thus, they take on the role of expectancy and predict the desire to learn and to use the system (Vroom 1964). Therefore, when deciding among behavioral options, individuals who have higher usability and learnability beliefs generate greater motivation to participate in training, which in turn will improve their actual learning and post-training task performance. Figure 1 graphically illustrates the expected relationships between the constructs, as outlined in Hypotheses 1-3.

Fig. 1 Relationship between expected findings (ease of learning → ease of use, ease of learning → task performance, and ease of use → task performance; all paths positive)

3 Research method

3.1 Trainee characteristics

Trainee characteristics that are found to influence training outcomes include trainee attitudes (Noe and Schmitt 1986), self-efficacy (Tannenbaum et al. 1991), anxiety (Martocchio and Webster 1992), and other individual difference factors such as age, job tenure, and work experience. In the context of retraining systems analysts, prior knowledge of structured techniques is found to be a particularly salient characteristic.
For example, Agarwal et al. (1996b) found that process-oriented experts performed significantly better than novices on OO tasks. In contrast, Morris et al. (1999) found that those who were previously trained in process-oriented techniques performed worse than novices in a later training of OO techniques.


In structured analysis methodology, a software system is viewed as a collection of data with separate processes that operate on the data. Early structured techniques were largely process-oriented (e.g., using data flow diagrams or program flow charts) while the latest are more data-oriented (e.g., using entity-relationship diagrams) (Fichman and Kemerer 1992). Modern structured methodology uses a blend of these techniques for capturing both data and process requirements (Hoffer et al. 1999). Therefore, to ensure the representativeness of trainee subjects, we recruited subjects with four different characteristics (see Table 2): Group A consisted of subjects familiar with both data- and process-oriented techniques; Group B consisted of those familiar with only data-oriented techniques; Group C consisted of those familiar with only process-oriented techniques; and Group D consisted of novices with no prior exposure to either data- or process-oriented techniques.

To implement the design, we recruited candidates from senior-level classes at a large Midwestern American university. We requested the rosters of all current and previous classes and screened each candidate with respect to his or her prior knowledge of data- or process-oriented techniques. After the screening, we ended up with 131 trainees. We controlled the subjects' prior knowledge according to their prior relevant courses and provided additional pre-training preparation if necessary. The subjects in Group A were selected from a Systems Analysis class, where both data- and process-oriented techniques had been extensively taught and practiced. The subjects in Group B were selected from a Database Management class and Group C from a conceptual MIS class. Under the direction of the program coordinator, these two classes were specially designed to accommodate the needs of the study.
For example, we gave five weeks of extensive lectures and exercises on data-oriented techniques to Group B and the same amount of preparation on process-oriented techniques to Group C. In addition to regular lectures, these subjects were assigned to solve a large number of real business problems to fulfill their course requirements. The special treatment was meant to provide coverage of the same topics equivalent to professional training and to prepare the subjects for entry-level systems analyst positions. The subjects in Group D were selected from two other business classes that had no exposure to any systems analysis topics.

Table 2 Control of pretraining condition

                        Data-oriented: Y    Data-oriented: N
Process-oriented: Y     A                   C
Process-oriented: N     B                   D

3.2 Training evaluation

The training method was an open learning system, modeled after Warr and Bunce (1995), where trainees worked on their own to learn written materials.
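The 2 × 2 assignment in Table 2 can be sketched as a small helper. This is a hypothetical illustration: the function and flag names are ours, not part of the study's materials.

```python
def assign_group(knows_data: bool, knows_process: bool) -> str:
    """Map a candidate's prior knowledge to the four pre-training groups."""
    if knows_data and knows_process:
        return "A"  # both data- and process-oriented techniques
    if knows_data:
        return "B"  # data-oriented techniques only
    if knows_process:
        return "C"  # process-oriented techniques only
    return "D"      # novice: neither family of techniques
```

For example, a candidate who had taken only the Database Management class would map to Group B.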


After finishing pre-training preparation, we provided each of the trainees with an 18-page training binder. The materials covered the essential concepts of object-orientation (e.g., objects, encapsulation, polymorphism, etc.), how to model business problems (e.g., how to draw use case, class, and interaction diagrams), a list of review questions, and exercises for each participant to work through over the following 2 weeks. The trainees were advised that their participation in the training would be awarded credit, determined by their performance, toward fulfilling partial requirements of one designated academic course. After the 2-week open learning period, we conducted training evaluation in an experimental setting, where trainees gathered in a securely managed facility and went through a 2-h process of knowledge assessment, during which trainees were not allowed to consult training materials or talk to others.

Training evaluation consisted of three parts. First, we administered a short quiz consisting of five screening questions to ensure that the trainees had actually read the training materials. A subject was dropped from further study if he or she missed two or more questions. After this screening, we ended up with 72 subjects evenly distributed across the four groups. Among the finalists, 41 were males and 31 were females, with 52% of them majoring in MIS and 48% in other business areas. On average, the subjects had 4 years of work experience and 3 years of computer experience. Some subjects also reported programming and database experience. However, no subject reported prior exposure to OO techniques before the training. Second, we collected trainee reactions, including three questions on PEU and three questions on PEL. In this study, we adapted the reaction scale items from those in Davis (1989). Upon successive pilot testing and validation, we identified six items to measure PEL and PEU (see Table 3).
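The screening rule described above (drop any subject who misses two or more of the five quiz questions) can be sketched as follows. The data structures are purely illustrative, since the study's raw data are not published:

```python
def passes_screening(answers_correct: list[bool]) -> bool:
    """A subject is retained only if he or she missed fewer than
    two of the five screening questions."""
    missed = sum(1 for ok in answers_correct if not ok)
    return missed < 2

# Illustrative records: True = question answered correctly.
quiz = {
    "s01": [True, True, True, True, False],   # missed 1 -> retained
    "s02": [True, False, True, False, True],  # missed 2 -> dropped
}
retained = [sid for sid, ans in quiz.items() if passes_screening(ans)]
```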
For each item, we used a seven-point Likert scale and asked each trainee to provide a response ranging from "strongly disagree" to "strongly agree." Third, we gave the trainees a real systems analysis task and asked them to create an OO analysis model as the blueprint for the system to be developed. Agarwal et al. (1996c) found that the task characteristic, i.e., whether it is process- or object-oriented, affects task performance. To eliminate this potential confound, we selected a problem that included both process and structural features. We measured each trainee's task performance by the quality of his or her analysis model. We defined quality as the extent to which an analysis model captured all data and functional requirements as specified in the problem description, and satisfied the goals of OO modeling, including behavior allocation and code reuse (Jacobson et al. 1999). To ensure the reliability of quality assessment, we had two independent experts evaluate each analysis model blindly using two evaluation procedures. Each expert first evaluated each solution according to its overall look and feel. This procedure focused on the correctness of using OO concepts such as inheritance and encapsulation, whether a model satisfied OO modeling goals such as optimal allocation of behavior, and how close each solution was to the abstract structure of a good design, i.e., the schema of an OO expert (Jeffries et al. 1980). Then, a few days later, each expert focused on model elements, including objects, services, relationships, and attributes, which are the major facets of OO models (Batra et al. 1990), and evaluated each model based on the number of errors present in each facet as compared to a "correct" solution. In particular, the procedure assigned a weight to each facet to measure its relative importance: 30% (major objects), 10% (minor objects), 30% (relationships), 15% (attributes), and 15% (services). Within each facet, the procedure started with 100 points and took a certain number of points off the total score if an element in the facet was missing or incorrect. Finally, the weighted average score across all facets was indexed as the overall performance. Through these two independent procedures, the two experts produced four indices of task performance. The correlation between each pair of the indices ranged from 0.80 to 0.85, suggesting excellent inter-expert and inter-procedure reliability of the indices.

Table 3 Scale items for PEL and PEU

PEL1  It is easy to comprehend object-oriented concepts
PEL2  I felt comfortable in studying object-oriented analysis and modeling
PEL3  The object-oriented concepts seem to be straightforward
PEU1  I felt comfortable in applying object-oriented models
PEU2  Modeling business problems using class diagrams seems to be easy
PEU3  It is easy to apply the object-oriented analysis and modeling

3.3 Measurement validity

All three research constructs were latent; each was reflected by a group of measurable items and indirectly measured as the average of their scores. Thus, before conducting data analysis, we needed to ensure the validity of the scales (Churchill 1979).
In order to assess factorial validity, i.e., that items for the same construct measure a single trait while items for different constructs measure distinct traits, we conducted a principal component factor analysis with varimax rotation on ten items, including the six items for trainee reactions (see Table 3) and the four indices of task performance, labeled P1, P2, P3, and P4. Using the Kaiser eigenvalue criterion, we extracted three factors that collectively explained 82.6% of the variance in all items. The rotated factor matrix in Table 4 shows that all items cleanly loaded onto the correct latent constructs, implying the distinctness of the three constructs. The above factor analysis result may also be interpreted in terms of convergent and discriminant validity. A construct is said to have convergent validity if each of its scale items loads on the construct strongly, with loadings of 0.50 or higher. A construct is said to have discriminant validity if each of its items loads more strongly on the construct than on any other construct (Hair et al. 1998). According to these definitions, the results showed both convergent and discriminant validity of the constructs.
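The convergent and discriminant validity rules just stated can be expressed as simple checks over a factor-loading matrix (rows = items, columns = factors). The helper functions are our own illustration; the two example rows are taken from Table 4, with components ordered performance, PEU, PEL:

```python
def convergent_ok(loadings: dict[str, list[float]],
                  own: dict[str, int]) -> bool:
    """Convergent validity: each item loads at least 0.50 (in absolute
    value) on its own construct (Hair et al. 1998)."""
    return all(abs(row[own[item]]) >= 0.50
               for item, row in loadings.items())

def discriminant_ok(loadings: dict[str, list[float]],
                    own: dict[str, int]) -> bool:
    """Discriminant validity: each item loads more strongly on its own
    construct than on any other construct."""
    return all(abs(row[own[item]]) == max(abs(x) for x in row)
               for item, row in loadings.items())

# Two rows from Table 4 (columns: performance, PEU, PEL):
loadings = {"PEU2": [0.043, 0.888, 0.197],
            "PEL1": [0.219, 0.240, 0.844]}
own = {"PEU2": 1, "PEL1": 2}  # index of each item's own factor
```

On these two rows, both checks pass, consistent with the clean loadings reported in Table 4.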


Table 4 Rotated component matrix: principal component analysis with varimax rotation

Item    Component 1   Component 2   Component 3
P1      0.943         0.142         0.065
P2      0.934         0.045         0.155
P3      0.919         –0.102        0.063
P4      0.912         0.111         0.083
PEU1    0.065         0.834         0.348
PEU2    0.043         0.888         0.197
PEU3    0.042         0.834         0.402
PEL1    0.219         0.240         0.844
PEL2    0.001         0.452         0.721
PEL3    0.121         0.488         0.661

Table 5 Reliability analysis

Construct                          Cronbach's alpha
Perceived ease of use (PEU)        0.87
Perceived ease of learning (PEL)   0.80
Task performance (P)               0.94

Internal validity, or construct reliability, is usually measured by Cronbach's alpha coefficients. Generally, reliability coefficients of 0.70 or higher are considered acceptable (Nunnally 1967; Hair et al. 1998). Table 5 shows that the alpha values for the three constructs ranged from 0.80 to 0.94, demonstrating very good scale reliability for each of the multi-item constructs.
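For reference, Cronbach's alpha for a k-item scale is k/(k − 1) × (1 − Σ item variances / variance of the total score). A minimal sketch (the scores it would be run on are made up; the study's data are not published):

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha; items[i][j] is subject j's score on item i."""
    k, n = len(items), len(items[0])

    def var(xs: list[float]) -> float:
        # Sample variance with n - 1 in the denominator.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each subject's total score across the k items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly consistent items yield alpha = 1; an item with no variance contributes nothing to the numerator of the ratio and drags alpha down.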

4 Research findings

4.1 Regression analysis

To evaluate the research hypotheses, we employed multivariate linear regressions as the primary method for data analysis. We first regressed PEU against the predictor PEL and listed the result in the first row of Table 6. This yielded a positive correlation coefficient (r = 0.80) between PEL and PEU. The correlation was significant at the level α = 0.001. The overall R-square was 0.65, meaning that ease of learning could explain 65% of the variance in ease of use. Thus, Hypothesis 1 was supported.

Table 6 Regression analyses for Hypotheses 1-3

Relationship                R-square   Beta       t value   p value
PEU = PEL                   0.65       0.803***   11.278    0.000
Performance = PEL           0.13       0.358***   3.164     0.002
Performance = PEU           0.02       0.151      1.256     0.214
Performance = PEU + PEL     0.16
  PEU                                  0.226      –1.390    0.169
  PEL                                  0.521***   3.208     0.002

* p < 0.1, ** p < 0.05, *** p < 0.01

Then we regressed Task Performance against PEL and PEU, respectively, and listed the results in the second and third rows of Table 6. A positive correlation (r = 0.36) between task performance and PEL was produced. This correlation was significant at α = 0.05. The R-square value was 0.13; thus, PEL was able to explain 13% of the variance in performance. Therefore, Hypothesis 2 was also strongly supported. However, contrary to what was expected, we found that task performance and PEU were not correlated. Although the correlation was positive (r = 0.15), it was not significant by any standard and hence not statistically distinguishable from zero. The overall R-squared value (0.02) did not even suggest any linear relationship between performance and ease of use. To further confirm these findings on Hypotheses 2 and 3, we regressed Performance against both PEL and PEU. We found a similar pattern of support. That is, PEL was still a significant predictor of performance whereas PEU was not; the regression coefficient between PEU and performance was not significantly different from zero. Overall, PEL and PEU could jointly explain 16% of the variance in Performance.

4.2 Findings explained

If we pool the above three findings together, we notice a paradox. On the one hand, PEL and PEU were statistically correlated with each other (r = 0.80). The strong correlation seems to suggest that ease of learning may simply be a component of PEU, as in most existing studies (Davis 1989). On the other hand, their relationships to the same third construct, task performance, were very different: PEL was highly correlated with performance whereas PEU was not. Thus, PEL and PEU played different roles in predicting performance and were justified to be distinct constructs.

Given that PEL and PEU are empirically correlated and conceptually connected, why are their effects on task performance so different? To explain the phenomenon, we searched a large body of literature linking perceptions to task performance. In social psychology, we found that, although many studies have reported significant correlations between self-efficacy and task performance (Bandura 1982), some found weak or no relationships. To explain the conflicting findings, Gist and Mitchell (1992) suggested that the lack of direct experience with a task, along with task complexity and feedback ambiguity, could degrade the efficacy-performance relationship. In the end-user computing literature, there has been some research into the relationship between PEU and objective usability.
The notion of objective usability (OU) captures the actual level of effort required to complete a specific task using an information system. Based on the responses of 76 student subjects, Venkatesh and Davis (1996) found that OU did not influence PEU before direct hands-on experience with a system. However, the relationship did become significant after the experience. In a more recent study, Venkatesh (2000) found a similar relationship between OU and PEU. Upon scrutiny of these studies, we found that their conception of OU essentially measured the task performance of using a system. For example, when operationalizing the concept, Venkatesh and Davis (1996) measured OU by the ratio of the expert performance on a task to the subject performance on the same task; a technology was objectively more difficult to use if it had a higher performance ratio. As Venkatesh and Davis (1996) indicated, comparative performance acted as the surrogate for OU. Thus, if reinterpreted, their findings suggest that PEU correlates with performance only after direct hands-on experience.

The above two findings exhibit a common pattern: efficacy, or usability perception, correlates with task performance only after individuals have direct experience with a task. To make sense of this moderating role of experience, we reasoned that the correlation becomes strong only if the efficacy or usability perception is assessed on the basis of direct "use" experience. This reasoning suggested a clue to our paradoxical findings: whether PEU or PEL correlated with task performance depended upon whether the trainee's perception was based on experience or speculation. While all trainees in the study did learn the material and passed a quiz to demonstrate their knowledge, their assessment of PEU was highly speculative; the trainees had no experience in actually applying the material to real business problems except for that gained in solving toy problems during the training and in performing the experiment task. Consequently, PEL was related to performance, but PEU was not. As a trainee gains more direct experience, his or her perceptions of ease of learning and ease of use will gradually be adjusted to reflect objective usability and learnability, which are then equivalent to performance (Venkatesh and Davis 1996).
Behavioral decision theory suggests that individuals use heuristics in making subjective judgments (Tversky and Kahneman 1974). Among other heuristics, availability and anchoring-and-adjustment explain the role of experience in forming usability and learnability perceptions. First, the availability heuristic refers to making judgments based on the instances that come most readily to mind. Experience creates specific retrievable instances that exhibit how easy or difficult it is to use or learn training materials. Although they are only snapshots of entire interactions, these instances partially portray objective ease of learning or ease of use. When they are brought to mind, PEL and PEU judgments will be made more objectively. In contrast, without experience, one may vividly construct such instances from imagination (Payne et al. 1992); the resulting judgment is then speculative. Second, according to the anchoring-and-adjustment heuristic, people make estimates by anchoring on an initial value and making adjustments to yield the final answer (Tversky and Kahneman 1974). In particular, without experience, individuals anchor their perceptions to their general beliefs or speculations. With increasing experience, the individuals are expected to adjust their perceptions to reflect that experience (Venkatesh 2000), bringing PEU and PEL
closer to objective usability and learnability, respectively. Of course, adjustments are typically insufficient (Slovic and Lichtenstein 1971); one cannot expect a full alignment between reactions and actual task performance.

To provide further support for our reasoning, we performed a final exercise. Note that, in reference to Bloom's taxonomy of learning, we assumed that, during the training process, trainees had less experience in application, analysis, and synthesis than in recall and comprehension. Thus, PEU had a smaller correlation with task performance than PEL did. If we extend this reasoning to the evaluative learning outcomes, we would expect affective or utility reactions (see Table 1) to have an even smaller correlation with performance, because trainees had even less experience in evaluation than in application, analysis, and synthesis. Surprisingly, this is the case: in their meta-analysis, Alliger et al. (1997) found that both affective and utility reactions had a very small mean correlation (r = 0.03) with behavior/skill demonstration, i.e., task performance. Therefore, our explanation is consistent with empirical evidence in the training literature.

5 Conclusion

To better understand trainee reactions and performance in cognitively complex environments, we proposed splitting the perceived difficulty construct of Warr and Bunce (1995) into two separate constructs: perceived ease of use (PEU) and perceived ease of learning (PEL). We then developed a theoretical basis for, and explored, how these reactions predicted task performance in an open training environment for object-oriented (OO) systems development. Empirically, we found a strong correlation (r = 0.36) between task performance and PEL but none between performance and PEU.

Ease of learning is conventionally considered a subdimension of PEU in the literature. However, we believe there is enough evidence to support separating the two constructs for training evaluation involving cognitively complex topics. First, the taxonomy of learning (Bloom 1956) dictates the hierarchical nature of trainee reactions: PEL addresses recall and comprehension; PEU addresses application, analysis, and synthesis; and affective and utility reactions address evaluation. Second, principal component factor analysis operationally justified the distinction between PEL and PEU. Third, the difference in their power to predict task performance implies that they may play different roles in a nomological network of antecedents and consequences, providing empirical evidence for their distinction in general causal modeling. In fact, had we modeled PEL as a dimension of PEU, we would have obtained a finding of no or marginal significance: the significant relationship between PEL and task performance would have been canceled out by the insignificant effect of PEU on performance. Nevertheless, the conceptual distinction does not reduce the causal effect of PEL on PEU. In fact, we found a strong correlation between the two constructs (r = 0.80); overall, PEL could predict 65% of the variance in PEU.
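The arithmetic behind these figures can be illustrated with a short sketch: a sample Pearson correlation, whose square gives the proportion of shared variance (0.80² = 0.64, consistent with the roughly 65% reported above once rounding of r is accounted for). The trainee ratings below are invented for illustration and are not data from the study.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented PEL and PEU ratings for five trainees (7-point scale).
pel = [6, 5, 7, 3, 4]
peu = [5, 5, 6, 2, 4]
r = pearson_r(pel, peu)
variance_explained = r ** 2  # proportion of PEU variance predictable from PEL
```

Squaring any correlation coefficient in this way converts it into the "variance explained" language used throughout the conclusion.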
Thus, the trainees who found OO techniques easy to learn were likely to find them easy to use.

5.1 Implications

Our findings have several important practical implications for training evaluation and management. First, when collecting trainee reactions, one should focus on perceived ease of learning; it is ease of learning that predicts actual learning outcomes such as task performance. Second, to improve training effectiveness, one may use appropriate intervention mechanisms aimed at improving trainees' learnability reactions. An improved perception of ease of learning enhances actual learning outcomes and the chance that training transfers to on-the-job performance. Third, to make trainee reactions more predictive of actual learning outcomes, trainees must be offered sufficient opportunities for direct experience with the training materials. The questions that elicit reactions must be relevant to what trainees actually experienced; for example, they should pertain to specific tasks rather than cross-domain or general-domain tasks. Reaction measures collected in this way better predict actual learning outcomes because they are adjusted on the basis of direct experience and hence are more reflective of objective enjoyment, utility, and usability.

This study contributes to the literature in several important ways. First, it demonstrates the importance of assessing the social aspects of systems development by clarifying the role of reaction data. Second, it is an important incremental addition to the training literature: since immediate reaction measures are by far the easiest and least costly to obtain from Kirkpatrick's list, it is valuable to find evidence linking them to actual performance criteria. Third, it helps create a specific understanding of information technology (IT) challenges, particularly with regard to object-oriented development; by designing course materials with learnability feedback, specific development goals can be achieved.
In the IT area, most training studies focus on end users (e.g., Martocchio and Webster 1992), and research on the training of IT specialists is limited (Agarwal et al. 1996a). A few studies have addressed retraining computer programmers (e.g., Nelson et al. 2002) and systems analysts (e.g., Morris et al. 1999), but they focused exclusively on how prior knowledge, as a trainee characteristic, may interfere with the learning of new tools. IT is full of rapid changes and skill upgrades in an environment of high cognitive complexity. IT workers possess unique characteristics and respond to unique motivations (Florida 2002). They form the "creative class," which requires managers to use different management styles to ensure optimal job performance. They are also creative trainees, requiring trainers to use different training styles and evaluation criteria to achieve the best training effectiveness. No study has examined training evaluation specific to retraining systems developers. This study represents a first step along this line of inquiry and calls for future research into training development and evaluation for IT professionals.

5.2 Limitations

The use of student trainees may affect the external validity of our findings, a concern that applies to many similar studies. However, student trainees do not pose as significant a threat to validity in the current study as they might in other domains. Our students were neither asked to pretend to operate within an imagined organizational context nor to respond to questions about unrealistic and unfamiliar situations. In addition, since most college programs still teach structured techniques, organizations often have to retrain new graduates to do OO development; the trainees in this study are representative of such new hires. Nevertheless, a further replication involving professional developers is important. Since findings could vary across samples and contexts, such an additional test would have value in its own right.

References

Agarwal R, Prasad J et al (1996a) From needs assessment to outcomes: managing the training of information systems professionals. SIGCPR/SIGMIS '96, Denver, CO
Agarwal R, Sinha A et al (1996b) Cognitive fit in requirements modeling: a study of object and process methodologies. J Manage Inf Syst 13(2):137–162
Agarwal R, Sinha A et al (1996c) The role of prior experience and task characteristics in object-oriented modeling: an empirical study. Int J Hum Comput Stud 45:639–667
Alliger GM, Tannenbaum SI et al (1997) A meta-analysis of the relations among training criteria. Pers Psychol 50(2):341–358
Bandura A (1982) Self-efficacy mechanism in human agency. Am Psychol 37:122–147
Bassi LJ, Benson G et al (1996) The top ten trends. Train Dev 50:28–42
Batra D, Hoffer JA et al (1990) Comparing representations with relational and EER models. Commun ACM 33(2):126–139
Bloom BS (1956) Taxonomy of educational objectives—handbook I: cognitive domain. David McKay, New York
Boehm-Davis D, Ross L (1992) Program design methodologies and the software development process. Int J Man Mach Stud 36:1–19
Booch G (1991) Object-oriented design with applications. Benjamin/Cummings, Redwood City
Brooks FP (1987) No silver bullet: essence and accidents of software engineering. Computer 20:10–19
Churchill GA Jr (1979) A paradigm for developing better measures of marketing constructs. J Mark Res 16(1):64–73
Davis FD (1989) Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Q 13(3):319–340
Eaton T, Gatian A (1996) Organizational impacts of moving to object-oriented technology. J Syst Manage 47(2):18–26
Fichman R, Kemerer C (1992) Object-oriented and conventional analysis and design methodologies. Computer 25:22–39
Florida R (2002) The rise of the creative class and how it's transforming work, leisure, community and everyday life. Basic Books, New York
Garceau L, Jancura E et al (1993) Object-oriented analysis and design: a new approach to systems development. J Syst Manage 44(1):25–33
Gist ME, Mitchell TR (1992) Self-efficacy: a theoretical analysis of its determinants and malleability. Acad Manage Rev 17:183–211
Glass RL (1999) A snapshot of systems development practice. IEEE Software 16:112–121
Goldstein IL (1993) Training in organizations: needs assessment, development, and evaluation. Brooks/Cole, Pacific Grove
Goldstein IL, Gilliam P (1990) Training system issues in the year 2000. Am Psychol 45:134–143
Graham I (1994) Object-oriented methods. Addison-Wesley, Reading
Hair JF, Anderson RR et al (1998) Multivariate data analysis with readings. Prentice Hall, Englewood Cliffs
Henderson-Sellers B (1992) A book of object-oriented knowledge. Prentice Hall, Englewood Cliffs
Hoffer J, George J et al (1999) Modern systems analysis and design. Addison-Wesley, Reading
Jacobson I, Booch G et al (1999) The unified software development process. Addison-Wesley, Reading
Jeffries R, Turner AA et al (1980) The processes involved in designing software. In: Anderson JR (ed) Cognitive skills and their acquisition. Erlbaum, Hillsdale, pp 255–283
Johnson R (2000) The ups and downs of object-oriented systems development. Commun ACM 43(10):68–73
Kirkpatrick DL (1959a) Techniques for evaluating training programs. J Am Soc Train Dev 13:3–9
Kirkpatrick DL (1959b) Techniques for evaluating training programs: part 2—learning. J Am Soc Train Dev 13:21–26
Kirkpatrick DL (1960a) Techniques for evaluating training programs: part 3—behavior. J Am Soc Train Dev 14:13–18
Kirkpatrick DL (1960b) Techniques for evaluating training programs: part 4—results. J Am Soc Train Dev 14:28–32
Kottke JL, Schuster DH (1990) Developing tests for measuring Bloom's learning outcomes. Psychol Rep 66:27–32
Lee DM, Allen TJ (1982) Integrating new staff: implications for acquiring new technology. Manage Sci 28(12):1405–1420
Lee A, Pennington N (1994) The effects of paradigm on cognitive activities in design. Int J Hum Comput Stud 40:577–601
Marton F, Säljö R (1976) On qualitative differences in learning: outcome and process. Br J Educ Psychol 46:4–11
Martocchio JJ, Webster J (1992) Effects of feedback and cognitive playfulness on performance in microcomputer software training. Pers Psychol 45:553–578
Morris M, Speier C et al (1999) An examination of procedural and object-oriented systems analysis methods: does prior experience help or hinder performance. Decis Sci 30(1):107–135
Nelson HJ, Armstrong D et al (2002) Old dogs and new tricks. Commun ACM 45(10):132–137
Noe RA, Schmitt N (1986) The influence of trainee attitudes on training effectiveness: test of a model. Pers Psychol 39:497–523
Nunnally JC (1967) Psychometric theory. McGraw-Hill, New York
Payne JW, Bettman JR et al (1992) Behavioral decision research: a constructive processing perspective. Annu Rev Psychol 43:87–131
Saari LM, Johnson TR et al (1988) A survey of management training and education practices in US companies. Pers Psychol 41:731–743
Salas E, Cannon-Bowers JA (2001) The science of training: a decade of progress. Annu Rev Psychol 52:471–499
Seddon GM (1978) The properties of Bloom's taxonomy of educational objectives for the cognitive domain. Rev Educ Res 45:303–323
Sharp H, Griffyth J (1999) The effect of previous software development experience on understanding the object-oriented paradigm. J Comput Math Sci Teach 18(3):245–265
Silver MS (2006) Browser-based applications: popular but flawed? Inf Syst E Bus Manage 4(4):361–393
Sircar S, Nerur SP et al (2001) Revolution or evolution? A comparison of OO and structured systems development methods. MIS Q 25(4):457–471
Slovic P, Lichtenstein S (1971) Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organ Behav Hum Perform 6:641–744
Tannenbaum SI, Mathieu JE et al (1991) Meeting trainees' expectations: the influence of training fulfillment on the development of commitment, self-efficacy, and motivation. J Appl Psychol 76:759–769
Taylor DS, Goles T et al (2003) Normative perception of the role of IS within the organization: an empirical test of measuring student learning. University of Houston, Houston
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131
Venkatesh V (2000) Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Inf Syst Res 11(4):342–365
Venkatesh V, Davis FD (1996) A model of the antecedents of perceived ease of use: development and test. Decis Sci 27(3):451–481
Vessey I, Conger S (1994) Requirements specifications: learning object, process, and data methodologies. Commun ACM 37(5):102–111
Vroom VH (1964) Work and motivation. Wiley, New York
Warr P, Bunce D (1995) Training characteristics and the outcomes of open learning. Pers Psychol 48(2):347–375
Yourdon E (1994) Object-oriented systems design: an integrated approach. Yourdon, Englewood Cliffs