International Journal of Selection and Assessment
Volume 24 Number 1 March 2016

Examining Applicant Reactions to Different Media Types in Character-based Simulations for Employee Selection

Valentina Bruk-Lee*, Julie Lanz*, Erica N. Drew*, Chris Coughlin**,†, Pamela Levine**,†, Kathy Tuzinski**,† and Kimberly Wrenn**,†

*Florida International University, 11200 SW 8 Street, DM 256, Miami, FL 33199, USA. [email protected]
**CEB, 555 North Point Center E #600, Alpharetta, GA 30022, USA

While the influence of technology and the medium of assessment administration on applicant reactions has been a topic of recent discussion, scant research has considered reactions to various media types in character-based simulations for employee selection. In a series of two studies, we focused on the influence of various media types on a variety of applicant reaction criteria. In Study 1, we explored (1) differences in procedural justice perceptions and company impressions between a text and a 3D animated simulation, (2) spillover mechanisms by which applicant reactions influence company perceptions, and (3) the influence of media richness on perceptions of other assessment types within a battery. In a second study, we focused on applicant reactions to, and rankings of, three media types (i.e., 2D animation, 3D animation, and live-action video) in a character-based simulation. Our results indicated support for a mediated effect of procedural justice rules on company perceptions. Across studies, favorable reaction ratings and rankings for 3D animation and live-action video were found.

1. Introduction

The rise of technology has played an important role in personnel selection, as it has impacted the way in which assessments are developed, utilized, and delivered to applicants. Where assessments used to be static, text-based, and paper-and-pencil, they are now interactive and media-rich, making computer-based testing (CBT) a popular medium for test administration. Indeed, the growing use of CBT has led to a variety of item innovations, such as more efficient scoring, new item response methods, and the inclusion of multiple media types (see Parshall, Harmes, Davey, & Pashley, 2010, for a review). As such, technology has facilitated enhanced realism, increased complexity, and higher engagement in the modern-day media-rich simulations used widely in personnel selection (Fetzer & Tuzinski, 2013).
†CEB authors contributed equally to this work.

Media-rich simulations can provide candidates with a realistic job preview by mimicking the work pressures inherent in the job and, at the same time, assess the knowledge, skills, abilities, and other characteristics (KSAOs) necessary for the job. These can take on various forms, including character-based simulations, desktop simulations, and virtual environment simulations (Hawkes, 2013). While media-rich simulations have been shown to explain incremental variance in performance above other noncognitive predictors (Fluckinger, Dudley, & Seeds, 2014), scant research exists on the impact that media richness can have on applicant reactions to assessment batteries and to the hiring organization, as well as on applicants' preference for specific media types. This is perhaps due to the rapid pace at which the media content of these assessments has evolved, as well as the complexity and costs involved in having various media-rich versions of the same simulation available for research purposes.

© 2016 John Wiley & Sons Ltd, 9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main St., Malden, MA 02148, USA

Media richness, specifically, refers to the combination of audio and/or visual stimuli that can be presented through live-action video or animation (see La Torre & Bucklan, 2013, on multimedia). Media-rich simulations can vary in the percentage of time in which audio and/or visual cues are present; however, character-based simulations like the ones used in this series of studies typically include one media-rich scenario per situational judgment item that is generalizable across organizations and jobs. From a production standpoint, the considerations in choosing live-action versus animation range across cost, time, aesthetic requirements, and flexibility in adaptation (Hawkes, 2013). However, there are potential implications for applicant reactions that should also guide this decision process. Given the timely need and the gap in the existing research, we sought to address multiple questions in a series of two studies. In the first study, we used a counterbalanced repeated measures design with random assignment to test for differences in perceptions of job relatedness, opportunity to perform, and company perceptions across a text and a 3D animated computer-based situational judgment test (SJT) (i.e., character-based simulation) of the same content. This study also assessed the spillover of applicant reactions in predicting company perceptions, as well as the mediated path whereby justice rules impact company perceptions. Third, we assessed whether the use of media-rich simulations would impact applicant perceptions of other assessment types. In the second study, we extend our findings by comparing three types of media-rich simulations (2D, 3D, and live-action video) and exploring differences in various reaction ratings within a working sample.

1.1. Applicant reactions to media-rich simulations

Organizational justice theory has fueled the applicant reactions literature (see Gilliland, 1993). The theory considers both applicant perceptions of hiring decision fairness (distributive justice) and the fairness of selection procedures (procedural justice), with much of the research focusing on the latter. Specifically, Gilliland's (1993) model proposes 10 procedural justice rules that are categorized into three areas: the formal characteristics of the selection procedures (e.g., job relatedness), explanations offered during the selection process (e.g., feedback), and interpersonal treatment (e.g., propriety of questions). Job relatedness comprises perceptions of face validity (the extent to which the content of the selection procedure is perceived to reflect the content of the job) and perceived predictive validity (the extent to which the procedure predicts future job performance) (Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993). Of the 10 rules, it is perhaps the most widely cited, particularly in relation to the influence of technology on applicant reactions to selection assessments (Bauer, Truxillo, Mack, & Costa, 2011; Ryan & Ployhart, 2000). For example, Chan and Schmitt (1997) reported higher ratings of job relatedness for a video-based assessment than for a paper-and-pencil version, albeit the former required the use of a paper booklet to record answers. Indeed, the important distinction between the medium of administration (e.g., paper-and-pencil, CBT) and the media types used (e.g., video, animation) has not always been clearly made in the applicant reactions literature, a need that is now more relevant given the growing media options available and facilitated by the use of CBT (Bruk-Lee, Drew, & Hawkes, 2013). More favorable applicant perceptions of face validity were reported for a video computer-based SJT than for a paper-and-pencil and a text computer-based SJT of the same content (Richman-Hirsch, Olson-Buchanan, & Drasgow, 2000), highlighting the important influence that media richness specifically can have on applicant attitudes. Indeed, researchers have argued that video SJTs produce stronger relationships with job performance because of their close resemblance to the criterion, which in turn increases the relevance of the test as a measure of job performance (Lievens & Sackett, 2006). Research comparing innovative item formats (e.g., graphics, drag-and-drop items) to text-only computer-based SJTs has also found that innovative items produce higher face validity than their text counterparts (Gutierrez, 2010). The use of 3D animation provides additional visual cues that resemble the job conditions and are not available in text-only assessments. We, therefore, hypothesize that:

Hypothesis 1 (H1): Perceived job relatedness will be significantly higher for a 3D animated computer-based SJT than for a text computer-based SJT of the same content.

1.2. Opportunity to perform and media-rich assessments

Given the relationship between procedural justice and important organizational outcomes (e.g., likelihood of accepting a job offer), the inclusion of additional justice rules from Gilliland's (1993) model can be beneficial (Hausknecht, Day, & Thomas, 2004; Schleicher, Venkataramani, Morgeson, & Campion, 2006). Findings suggest that opportunity to perform can be an important factor influencing fairness perceptions, especially when an applicant performs poorly and/or is rejected (Schleicher et al., 2006). While an applicant's perception of job relatedness is dependent on the referent job, opportunity to perform refers to the perception that a selection procedure has provided a chance for the candidate to demonstrate their knowledge, skills, and abilities (KSAs). This distinction is important because the two procedural justice rules are not mutually exclusive. For example, a candidate may perceive a work sample test as relevant to the job content and still have felt unable to demonstrate critical KSAs for job performance. Generally speaking, opportunity to perform has received limited attention in the applicant reactions literature, with some exceptions (e.g., Konradt, Warszta, & Ellwart, 2013; Schleicher et al., 2006). Bauer et al. (2011) suggested that technological advances in the medium of assessment administration may impact reactions of opportunity to perform. For example, Beaty, Dawson, Fallaw, and Kantrowitz (2009) reported favorable opportunity to perform reactions associated with internet CBT due to the flexibility of choosing a testing location and time. Similarly, Konradt et al. (2013) examined applicant reactions to a Web-based selection process and found that opportunity to perform ultimately influenced a candidate's desire to work for the hiring organization. However, to date, no published studies have considered the influence of media types specifically on a candidate's opportunity to perform reactions. Perhaps the closest has been a study by Gutierrez (2011), who found that the ability to control a computer mouse impacted opportunity to perform reactions more for a point-and-click item than for a computerized multiple-choice item format, highlighting the influence of high-fidelity innovative item types on applicant perceptions. Further, the additional ambient detail and information provided by 3D animation is expected to be more engaging than a text computer-based assessment, and thus likely to influence a candidate's feelings regarding the opportunity to display KSAs. Given the general preference for media-rich simulations (Bryant & Malsey, 2012; Richman-Hirsch et al., 2000), and the limited findings associated with the preference for higher fidelity assessments, we hypothesize that:

Hypothesis 2 (H2): Perceptions of opportunity to perform will be significantly higher for a 3D animated computer-based SJT than for a text computer-based SJT of the same content.

1.3. Applicant perceptions of the hiring organization

The selection process is an important source of information for applicants. In fact, an organization's policies, practices, and procedures can be inferred from the selection process (French, 1987). Candidate perceptions of an organization may dictate an applicant's acceptance of a job offer and impact the formation of job attitudes after hire (Hausknecht et al., 2004; Ryan & Ployhart, 2000). Further, the loss of candidates from the selection process resulting from poor applicant reactions toward the hiring organization can mean high recruitment costs (see Murphy, 1986). While the majority of studies supporting these findings have been based on traditional forms of assessment, there is limited support for the influence of media richness in the formation of positive organizational reactions, particularly as it relates to the use of modernized assessments (Richman-Hirsch et al., 2000). However, research in the fields of marketing and consumer behavior has found that the use of visual components on a website (e.g., images) produces more favorable perceptions of a corporation than the use of large blocks of text (Oh, Fiorito, Cho, & Hofacker, 2008).

Hypothesis 3 (H3): Applicant perceptions of a company will be significantly more favorable for a 3D animated computer-based SJT than for a text computer-based SJT of the same content.

In addition to expecting significant differences in company perceptions, it is also likely that candidate reactions of job relatedness and opportunity to perform will directly influence their view of the organization. According to Rynes and Barber (1990), an applicant's interactions with the organization may "spill over" to affect their later reactions or decisions about the company. Indeed, Smither et al. (1993) reported a spillover effect whereby perceptions of job relatedness impacted a candidate's likelihood of recommending the organization to others. Similarly, Hausknecht et al.'s (2004) model of applicant reactions suggests that perceived procedure characteristics impact attitudes and behaviors toward the organization directly and indirectly through generalized beliefs of procedural fairness. Their meta-analytic findings further suggest a significant mean effect for both job relatedness and opportunity to perform on organizational attractiveness. Indeed, empirical studies have supported a significant association between various measures of applicant perceptions (e.g., user-friendliness, job relatedness) and multiple indicators of company perceptions; however, the assessments were either paper-and-pencil (e.g., Bauer, Maertz, Dolen, & Campion, 1998; Smither et al., 1993) or nonanimated computerized assessments (e.g., Sinar, Reynolds, & Paquet, 2003; Wiechmann & Ryan, 2003). We extrapolate earlier findings to media-rich simulations and hypothesize that:

Hypothesis 4 (H4): Procedural justice perceptions of (a) job relatedness and (b) opportunity to perform of the 3D animated computer-based SJT will positively predict unique variance in company perceptions.

Hypothesis 5 (H5): Overall procedural justice perceptions will mediate the relationship between (a) job relatedness and company perceptions, as well as between (b) opportunity to perform and company perceptions, for a 3D animated computer-based SJT.

1.4. The impact of animation on justice perceptions of other assessments

Media-rich simulations are typically one of several types of assessments included in selection batteries that are used to make hiring decisions. For example, a media-rich simulation may be coupled with a cognitive ability test, biographical data, or a personality measure. Ryan and Ployhart (2000) raise the importance of considering the impact of one aspect of the selection process on its other components. This raises an important question: Do animated computer-based simulations influence the procedural justice perceptions of text computer-based personality and cognitive ability measures when placed in the same test battery? Indeed, reactions to personality assessments have been more favorable when used together with cognitive ability tests, supporting a compensatory-evaluation process in which the perceived validity of a test in a battery is influenced by the validity of the others (Rosse, Miller, & Stecher, 1994). However, can media-rich simulations have a similar effect by raising perceptions of job relatedness and opportunity to perform relative to text-based computerized tests? Based on these limited findings, we hypothesize that:

Hypothesis 6 (H6): Perceptions of (a) job relatedness and (b) opportunity to perform for a personality assessment will be significantly higher when included in a test battery containing a 3D animated computer-based SJT than a text computer-based SJT of the same content.

Hypothesis 7 (H7): Perceptions of (a) job relatedness and (b) opportunity to perform for a cognitive ability assessment will be significantly higher when included in a test battery containing a 3D animated computer-based SJT than a text computer-based SJT of the same content.

2. Study 1 method

2.1. Participants

Four hundred and forty undergraduate students from a large research university in the southeastern United States participated in a simulated hiring process for a retail position. The majority of participants were female (65.5%). Participants were Hispanic (75%), White (9.5%), Black or African-American (8%), Asian (3.4%), and of two or more races (2.3%). The mean participant age of 21 years (SD = 4.65) is comparable to that of a typical entry-level retail candidate, suggesting that this sample is representative of the applicant pool for retail positions. Approximately half of the participants reported being currently employed (46%). Additionally, about half (53%) of the participants reported specific interest in obtaining a retail job within the next year. Participants who reported having experience in retail (48%) had an average tenure of 2.42 years (SD = 2.57). Participants were incentivized with research credit in participating university psychology courses. To raise the stakes of the simulated hiring situation, the top two performing participants were awarded a $50 gift card at the end of the study.


2.2. Procedure

Participants were given an assessment battery including a set of proprietary measures that are commonly used to make hiring decisions (an SJT, a personality measure, and a cognitive ability test), as well as publicly available measures of procedural justice and company perceptions. Participants were directed to the study through the Qualtrics online survey platform at two time points. In Qualtrics, participants completed the informed consent and were given information about the study. Participants were told to imagine that they were applying for a retail job that required them to sell merchandise such as furniture, motor vehicles, appliances, or apparel in a retail establishment. Participants were also told that they would be expected to perform daily tasks such as greeting customers, recommending and selecting merchandise for customers, and maintaining records related to sales. After receiving this information, participants were given a link to the global assessment firm's testing platform website, where they completed the demographic information and the assessment battery. Using a within-subjects design, the assessment battery comprised two conditions: (A) a media-rich (3D) computer-based SJT, a personality test, and a quantitative cognitive ability test; and (B) a text computer-based SJT, a personality test, and a quantitative cognitive ability test. Participants completed measures of job relatedness and opportunity to perform immediately following each of the three assessments. Ratings of company perceptions and overall procedural justice were collected at the end of each completed battery at Time 1 and Time 2. Participants completed either A or B at Time 1, and 7–10 days later were contacted to complete Time 2 (the participant response rate at Time 2 was 58%). The assessment battery was counterbalanced such that one group of participants was randomly assigned to receive the AB order, and a second group received the BA order. The number of participants in each condition was roughly equal (AB: n = 225; BA: n = 215). The personality and cognitive ability assessments were identical across all administrations, and no significant differences in scores were found across conditions.

2.3. Measures

2.3.1. Situational judgment tests
Two SJT formats were used: (a) a 3D computer-based SJT currently in use by a global assessment firm for selection purposes; and (b) a text computer-based SJT that was developed solely for the purposes of this research study. The media-rich simulation used was a noncustomized 'off the shelf' assessment that could be applied to a variety of sales jobs across organizations. Assessment content was identical across SJT formats to allow for more accurate comparisons. Both versions of the SJT depicted eight scenarios that a retail sales associate would typically encounter on the job and measured customer service effectiveness and sales effectiveness. Customer service effectiveness refers to the extent to which the candidate provides exceptional customer service and is characterized by displaying a genuine interest in the customer, providing accurate information to meet customer needs, and treating the customer with respect. Sales effectiveness refers to the extent to which a candidate seeks out opportunities to make a sale and is characterized by persuasion, enjoyment in making a sale, and staying abreast of sales and promotions. Before the SJT, participants read a vignette that included a job description of an entry-level retail position and a brief explanation of the scenario. In the text version, participants read the character dialog from a script that appeared on their computer screen. In the media-rich version, avatars acted out the scenarios in an embedded video. For example, in one scenario, an employee is on the phone helping a female customer locate a pair of glasses at another store, and a male customer asks to be helped. The female customer rudely tells the second customer to wait his turn. In response to the scenario, participants were asked to indicate which of four behavioral options would be most and least effective to employ.

2.3.2. Job relatedness
A four-item job relatedness subscale was adapted for use from the selection procedural justice scale (SPJS) (Bauer et al., 2001), such that items did not reference a specific job title (α = .80 for 3D; .81 for text). Two items assessed perceived predictive validity (e.g., 'Doing well on this test means a person can do the job well') and two additional items measured face validity (e.g., 'It would be clear to anyone that this test is related to the job'). The average correlation between them was .55. Items were measured on a 5-point Likert scale from 1 = strongly disagree to 5 = strongly agree.

2.3.5. Company perceptions
Company perceptions (e.g., 'Most people would like to work for this company') were measured with 10 items from Highhouse, Lievens, and Sinar (2003). All items were measured on a 5-point Likert scale from 1 = strongly disagree to 5 = strongly agree. Cronbach's alpha reliability was .93 for both the 3D and the text computer-based SJT.

2.3.6. Personality
The Global Personality Inventory-Adaptive (GPI-A) (CEB, 2010a) is a computer adaptive general assessment of personality for use in the selection and development of employees across a wide range of job levels and types. The measure included eight dimensions of normal adult personality that are relevant to performance in a retail position, such as achievement striving, collaboration, sense of duty, reliability, and thoroughness. Items within each dimension comprised two statements representing different levels of the particular personality trait. Participants were instructed to select which of the two statements was more descriptive of them. They were given an unlimited amount of time to complete this assessment. Given the adaptive format of the assessment, the next item comprised two additional statements, selected using an updated trait level estimate based on the respondent's previous answers. Subsequent statement pairs were selected in a manner that maximized item information for the particular dimension. In variable-length computer adaptive tests such as this one, the test administration engine may be programmed to end the test once a desired level of precision is reached for each individual's theta score. The standard error of measurement threshold used in the GPI-A is .38, which approximates a classical test reliability of .85 for each person on each of the personality traits.

2.3.7. Cognitive ability test
The Global Cognitive Index (GCI) (CEB, 2010b) was used to measure problem-solving and numerical skills. This measure was provided in a variable-length computer adaptive format, and participants were given 3 min per item to complete the assessment. As with the GPI-A, the standard error of measurement threshold used in the GCI was .38.
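The correspondence between a .38 standard-error stopping rule and a classical reliability near .85 follows from the standard IRT relation reliability ≈ 1 − SE², which holds when the theta scale is standardized to unit variance. A quick arithmetic check (our own sketch, not the test vendor's documented computation):

```python
# Classical reliability implied by an IRT standard error of measurement (SEM),
# assuming the theta metric has unit variance: reliability ~= 1 - SEM^2.
# This is a back-of-the-envelope illustration, not the GPI-A/GCI scoring engine.
def reliability_from_sem(sem: float) -> float:
    return 1.0 - sem ** 2

# The .38 stopping threshold reported for the GPI-A and GCI:
print(round(reliability_from_sem(0.38), 2))  # 0.86, i.e., roughly the .85 reported
```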

2.3.3. Opportunity to perform
The four-item opportunity to perform subscale from the SPJS (Bauer et al., 2001) was used (α = .93 for 3D; .92 for text). A sample item is 'This test gives applicants the chance to show what they can really do.' Items were measured on a 5-point Likert scale from 1 = strongly disagree to 5 = strongly agree.

3. Study 1 results

2.3.4. Overall procedural justice
The two-item procedural justice measure from Smither et al. (1993) was used to assess overall perceived fairness of the 3D animated SJT (α = .86). A sample item is 'Overall, I believe that the examination was fair.' Items were measured on a 5-point Likert scale from 1 = strongly disagree to 5 = strongly agree.

Our first research goal was to compare applicant perceptions of job relatedness between the media-rich 3D animated computer-based SJT and the text computer-based SJT formats using a repeated measures design. Means, standard deviations, and correlations are reported in Table 1. We hypothesized that participants would report both (H1) higher job relatedness scores and (H2) higher ratings of opportunity to perform after taking the 3D


Table 1. Means, standard deviations, and correlations for Study 1 variables

Variable                                  M     SD    1     2     3     4     5     6     7
1. 3D SJT – Job relatedness              3.47  .75  (.80)
2. 3D SJT – OTP                          3.34  .95   .54* (.93)
3. 3D SJT – Company perceptions          3.48  .72   .44*  .39* (.93)
4. 3D SJT – Overall procedural justice   3.67  .87   .35*  .32*  .49* (.86)
5. Text SJT – Job relatedness            3.42  .77   .60*  .41*  .38*  .29* (.81)
6. Text SJT – OTP                        3.31  .94   .44*  .55*  .36*  .27*  .59* (.92)
7. Text SJT – Company perceptions        3.39  .72   .40*  .37*  .74*  .36*  .46*  .44* (.93)

Note: *p < .01; OTP = opportunity to perform. Alpha reliabilities are provided along the diagonal.

animated computer-based SJT than after taking the text computer-based SJT. Overall, results regarding procedural justice rating differences across SJT formats were mixed. A paired samples t-test was conducted to investigate mean differences in participant ratings of job relatedness and opportunity to perform across the two SJT formats. A significant difference was found for job relatedness between SJT formats, t(439) = 1.61, one-tailed p = .05, d = .08. Participants perceived the 3D animated computer-based SJT (M = 3.47, SD = .75) to be more job relevant than the text computer-based SJT (M = 3.42, SD = .77). A significant interaction between condition order and SJT format was found, V = .02, F(1, 438) = 8.17, p = .00, which led us to test the hypothesis for each condition separately. A significant effect was found for condition AB, t(224) = 3.27, p = .00, where the 3D animated computer-based SJT was perceived to be more job relevant, but not for condition BA, t(214) = −.88, p = .38. Results indicated that when candidates viewed the 3D animated computer-based SJT first, the contrast effect was greater than when they first viewed the text computer-based SJT. No significant difference was found for opportunity to perform between formats, t(439) = .56, one-tailed p = .29, d = .03. Order effects were present, however, such that participants rated the first simulation presented to them higher, V = .05, F(1, 438) = 20.69, p = .00, perhaps due to the novelty of the task. Alternatively, they may have experienced fatigue effects upon completing the simulation the second time, even though a time gap existed between conditions. H2 was not supported.

A paired samples t-test was conducted to evaluate differences in company perceptions between SJT formats (H3). Participants reported more positive company perceptions after taking the 3D animated computer-based SJT (M = 3.48, SD = .72) than after completing the text computer-based SJT (M = 3.39, SD = .72), t(439) = 3.37, one-tailed p = .00, d = .16. As expected, the order-by-format interaction was not significant, V = .00, F(1, 438) = .25, p = .62, indicating that order effects were not present. H3 was supported.

A multiple regression was employed to determine whether procedural justice perceptions (job relatedness and opportunity to perform) of the 3D animated computer-based SJT predicted positive company perceptions (H4).
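The paired-samples t statistics reported here are computed from the difference scores, and the accompanying effect size d, in one common repeated-measures formulation, is the mean difference divided by the standard deviation of the differences. A minimal pure-Python sketch on made-up ratings (illustrative data only, not the study's):

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic and Cohen's d (mean difference / SD of differences)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the difference scores (n - 1 in the denominator)
    sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))
    cohens_d = mean_d / sd_d
    return t, cohens_d

# Hypothetical ratings: the same five respondents rating a 3D and a text version.
ratings_3d = [4, 5, 3, 4, 5]
ratings_text = [3, 3, 3, 3, 3]
t_stat, effect = paired_t(ratings_3d, ratings_text)
print(round(t_stat, 2), round(effect, 2))  # 3.21 1.43
```

The same difference-score logic underlies the t(439) values in the text, where n = 440 yields 439 degrees of freedom.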


Figure 1. Study 1 mediation between job relatedness and company perceptions by overall procedural justice for the 3D animated SJT.

Results demonstrated that opportunity to perform and job relatedness (β = .22 and .32, respectively) were significant predictors of company perceptions (R² = .23, F(2, 437) = 63.43, p = .00), providing support for H4. H5a predicted that overall procedural justice perceptions would mediate the relationship between job relatedness and perceptions of the company. Consistent with recent scholarship (MacKinnon, Lockwood, & Williams, 2004), this model was tested using a bootstrapping method for deriving indirect effects and standard errors. Bootstrapped confidence intervals (CIs) were bias-corrected and accelerated at the 95% level, using model 4 of the SPSS macro PROCESS developed by Preacher and Hayes (2004; see Preacher & Hayes, 2008). The overall model was significant and accounted for 32% of the variance in company perceptions, p = .00 (see Figure 1). The bootstrapped indirect effect through overall procedural justice perceptions was significant (B = .12, SE = .03, p < .05; 95% CI [.08, .18]). Overall procedural justice partially mediated this relationship, such that participants who felt that the content of the 3D animated computer-based SJT was related to the job reported higher overall perceptions of justice, which had a positive impact on their perceptions of the company. Hypothesis 5a was supported. Similarly, H5b predicted that the relationship between ratings of opportunity to perform on the 3D animated computer-based SJT and perceptions of the company would be mediated by overall procedural justice perceptions. The overall model was also significant and accounted for 30% of the variance in company perceptions, p = .00 (see Figure 2). Overall procedural justice partially mediated the relationship between opportunity to perform and company perceptions, and the bootstrapped indirect effect through overall procedural justice


Figure 2. Study 1 mediation between opportunity to perform and company perceptions by overall procedural justice for the 3D animated SJT.

perceptions was significant (B = .10, SE = .02, p < .05; 95% CI [.06, .14]). Hypothesis 5b was supported; participants who felt that the 3D animated computer-based SJT gave them an opportunity to show their skills reported more positive overall procedural justice perceptions, which had a positive impact on their perceptions of the company.

Paired samples t-tests were conducted to compare applicant reactions of job relatedness (H6a) and opportunity to perform (H6b) for the personality assessment across the 3D animated computer-based SJT and text computer-based SJT batteries. The personality assessment was not perceived as more job related when presented in a battery that included the 3D animated computer-based SJT (M = 3.18, SD = .87) rather than the text computer-based SJT (M = 3.12, SD = .97), t(439) = 1.42, one-tailed p = .08, d = .07. However, ratings of opportunity to perform were higher for the personality assessment given within the 3D animated computer-based SJT battery (M = 3.23, SD = .96) than within the text computer-based SJT battery (M = 3.13, SD = 1.05), t(439) = 2.24, one-tailed p = .01, d = .11. H6 was partially supported. Lastly, H7 predicted more positive perceptions of job relatedness (H7a) and opportunity to perform (H7b) for a cognitive ability test given within the same battery as the 3D animated computer-based SJT than the text computer-based SJT. There were no significant differences in ratings of job relatedness (M_3D = 2.97, SD_3D = .96; M_text = 3.04, SD_text = .91) or opportunity to perform (M_3D = 3.09, SD_3D = 1.00; M_text = 3.05, SD_text = 1.02) for the cognitive ability test across test batteries, t(439) = −1.52, one-tailed p = .07, d = −.07, and t(439) = .82, one-tailed p = .21, d = .04, respectively. H7 was not supported.
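The mediation analyses above amount to bootstrapping the product of two regression slopes: the a path (mediator regressed on the predictor) and the b path (outcome regressed on the mediator, controlling for the predictor). A minimal percentile-bootstrap sketch on synthetic data follows; note that PROCESS itself reports bias-corrected and accelerated intervals, and all variable names and data here are illustrative, not the study's:

```python
import random

def slope(y, x):
    """OLS slope of y regressed on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def partial_slope(y, m, x):
    """Slope of y on m, controlling for x (two-predictor OLS via normal equations)."""
    n = len(x)
    mm, mx, my = sum(m) / n, sum(x) / n, sum(y) / n
    smm = sum((a - mm) ** 2 for a in m)
    sxx = sum((a - mx) ** 2 for a in x)
    smx = sum((a - mm) * (b - mx) for a, b in zip(m, x))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return (smy * sxx - smx * sxy) / (smm * sxx - smx ** 2)

def bootstrap_indirect(x, m, y, reps=2000, seed=1):
    """Point estimate and 95% percentile CI for the indirect effect a*b."""
    rng = random.Random(seed)
    n = len(x)
    estimate = slope(m, x) * partial_slope(y, m, x)
    boots = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        boots.append(slope(ms, xs) * partial_slope(ys, ms, xs))
    boots.sort()
    return estimate, (boots[int(0.025 * reps)], boots[int(0.975 * reps)])

# Synthetic data with a built-in X -> M -> Y path (illustrative only).
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]   # e.g., job relatedness
m = [v + rng.gauss(0, 0.5) for v in x]      # e.g., overall procedural justice
y = [v + rng.gauss(0, 0.5) for v in m]      # e.g., company perceptions
estimate, (lo, hi) = bootstrap_indirect(x, m, y)
```

An indirect effect is deemed significant under this approach when the bootstrap CI excludes zero, which is the criterion behind the reported intervals of [.08, .18] and [.06, .14].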

4. Study 1 discussion
Organizations have seen a 10-fold increase in the use of character-based media-rich simulations in their selection procedures in recent years (Hawkes, 2011). From a practical viewpoint, the use of media-rich SJTs creates cost and time efficiencies in customer branding and makes customization of off-the-shelf simulations more feasible. The influence of 3D animation on applicant reactions remains a novel topic as the published


empirical work in this area has been limited to comparisons of live-action video and paper-and-pencil, oftentimes confounding media type and medium of delivery. This study contributed to the literature by: (1) investigating differences in procedural justice perceptions and company perceptions across two computer-based SJTs of the same content varying in media type (i.e., text and 3D animation), (2) assessing the spillover of justice perceptions on company perceptions specifically for a 3D animated SJT, and (3) testing for the influence of media richness on applicant reactions to personality and cognitive ability tests administered in the same battery of assessments. The use of 3D animation resulted in more favorable perceptions of job relatedness. The added visual cues, ambience details, and context provided by the animation proved beneficial in shaping perceptions of face and predictive validity, particularly when participants were exposed to this media-rich format first. Given the large percentage of participants in our sample with retail experience, even a small effect size like the one found here provides support for the robustness of the finding. That is, the participants' prior retail experience likely also increased their perceptions of job relatedness for the written text computer-based SJT, as they would be able to appraise the scenarios as relevant to the job even without the added media. Ratings of opportunity to perform did not significantly differ across the text and 3D animated computer-based SJTs. Perhaps prior job-relevant experience aided the formation of participant attitudes toward both versions of the assessment, regardless of the added media richness of the 3D version. More importantly, however, this raises an important issue that has been discussed by media-rich simulation developers regarding the distinction across types of simulations (see Fetzer & Tuzinski, 2013).
While character-based simulations, such as the one used in this study, are increasingly popular in employee selection, they are distinct from desktop simulations and virtual environment-based simulations (Hawkes, 2013). The latter two recreate the working environment to varying degrees of virtual reality, aided by animated media components that differ in their richness. As such, desktop and virtual environment-based simulations more accurately reflect simulated work samples (e.g., a call center simulation; a managerial inbox exercise). Consequently, our findings suggest that while a 3D animated character-based simulation did not increase perceptions of opportunity to perform when compared to its text-only counterpart, the use of other types of media-rich simulations is likely to result in different findings given that they actually provide an opportunity for the applicant to use their KSAs in a simulated context. Research on the influence of media use on applicant recruitment efforts has focused on the characteristics of media (e.g., interactivity and vividness) in conveying organizational messages through websites or other mediums (see Allen,

International Journal of Selection and Assessment Volume 24 Number 1 March 2016

Van Scotter, & Otondo, 2004). In the context of applicant reactions, our findings suggest that 3D animation used in selection assessments can enhance applicants' perceptions of the organization, including their desire to work for the hiring company and its perceived reputation. Consistent with Hausknecht et al.'s (2004) model, the overall formation of procedural justice perceptions partially mediated the relationship between procedural justice characteristics (i.e., job relatedness and opportunity to perform) and company perceptions. Our findings add to the existing research demonstrating support for the spillover effect (Smither et al., 1993) of a 3D animated character-based simulation on important outcomes of the selection process. Further, post-test ratings of job relatedness were not higher for the personality test, although ratings of opportunity to perform were. In both conditions, the participant completed the personality and cognitive ability tests, in that order, after the SJT version. Recent research expanding applicant reactions to selection procedures has found evidence that psychological factors such as personal control and stability have a significant impact on applicant perceptions of fairness (Ababneh, Hackett, & Schat, 2014); hence, other mechanisms beyond the system characteristics of the selection process may play a role in applicant reactions. Nonetheless, our results indicate that the use of media-rich simulations may help shape applicants' beliefs regarding their opportunity to perform in a personality test. These findings raise an interesting question: does the media component provide clues as to the type of personality traits that would be needed for the job, and hence influence the test taker's perception of control in displaying such traits? The inclusion of 3D animated media did not impact post-test ratings of procedural justice rules for the cognitive ability test, however.
As previously shown, post-test reactions to cognitive ability tests are instead influenced by the applicant's appraisal of their test performance (Chan, Schmitt, Sacco, & DeShon, 1998); hence, it appears that the added media had no influence in this regard. Further, as was the case with reaction ratings for the personality test, the study was limited to considering post-test differences across conditions only. As testing for the role of individual-level traits in applicant reactions was not a central goal of this study, Study 1 used a repeated measures design, which allowed each subject to act as their own control. Nonetheless, future research should consider the influence of cognitive ability and personality facets in predicting reactions to media-rich simulations. Bauer, Truxillo, Paronto, Weekley, and Campion (2004), for example, tested the interaction between individual-level variables and technologically enhanced screening methods in predicting various applicant reactions of fairness, organizational attractiveness, and job pursuit intentions. While their results, overall, did not indicate preferences by applicants high on cognitive ability


or conscientiousness for a specific screening technology, research looking specifically at variations in media richness is lacking. Further, ad hoc analyses of our data indicated that cognitive ability and adjusting to change were significant predictors of company perceptions above and beyond procedural justice characteristics, but no discernible pattern of prediction was found for reactions to the 3D or text SJT. This is consistent with recent findings by Honkaniemi, Feldt, Metsäpelto, and Tolvanen (2013) concluding that individual difference variables impact perceptions of overall fairness more so than reactions to specific assessments. While Study 1 generally supports favorable applicant reaction outcomes for a 3D animated SJT, widely used selection simulations vary in the type of media they employ. Our research thus raises additional questions regarding the general preference for different media types. Indeed, Chan and Schmitt (2004) suggested that newer technologies would create interest in reaction criteria that we have not previously examined. Consequently, Study 2 focuses not only on perceptions of job relatedness, but also on applicant ratings of engagement in the selection process, as well as rankings of test preference, realism, and overall impressions of the hiring organization across the same character-based simulation using either 2D animation, 3D animation, or live-action video.

5. Study 2
The choice among media types used to develop a character-based simulation is often made with several practical considerations in mind, such as cost and production time (Hawkes, 2013). The availability of several off-the-shelf animation software packages, however, has made the use of 2D and 3D animation grow in popularity among assessment developers. Two-dimensional animation is characterized by flat-looking characters, while 3D animation provides greater depth and character definition that can appear caricatured or realistic (see Bruk-Lee et al., 2013). Live-action video, on the other hand, captures real people in actual behavior, hence benefiting from the display of movements and expressions that appear more natural. These distinctions are important as the depth of the visual image has been related to its quality (Steuer, 1992). To date, no other published research has investigated applicant reactions among working adults to these three forms of media using the same character-based simulation (i.e., SJT) content. Indeed, the most closely related research has focused on the impact of 3D animation use on viewers' attitudes and behaviors from a computer-graphics perspective, and only a handful of unpublished studies have focused primarily on employment-related simulations and contexts. Of these, Hawkes (2012a, 2012b) cites an 'uncanny valley' (Mori, 1970) to explain reactions to animated


characters (see Bruk-Lee et al., 2013). According to Mori (1970), human-likeness in animations engenders feelings of warmth and acceptance up to the point when apparent remaining nonhuman flaws make it unlikeable. The sense of dislike for animations that cross into the uncanny may stem from an evolutionary need to avoid danger (see Moosa & Ud-Dean, 2010) or biological mechanisms meant to aid in threat avoidance (MacDorman, Green, Ho, & Koch, 2009). MacDorman and Ishiguro (2006) advance an expectation violation theory of this phenomenon, suggesting that 'an entity is experienced as uncanny when it elicits the brain's model of a human being but possesses features that violate the model's predictions' (MacDorman & Entezari, 2015, p. 6). Indeed, the greater the congruence between the behavioral fidelity and photorealism of the character, the more positively it will be evaluated (Vinayagamoorthy, Steed, & Slater, 2005). A wide range of robotics studies have investigated the factors that may influence these negative perceptions, such as a mismatch between visual and auditory cues (Mitchell et al., 2011) and individual differences in sensitivity to the uncanny (MacDorman & Entezari, 2015). For example, Tinwell, Grimshaw, Nabi, and Williams (2011) compared facial expressions of six basic emotions (e.g., anger, happiness, disgust) across a video, a full animation, and an animation lacking movement in the upper face (highly uncanny), and found that the absence of emotion may have evoked negative responses in participants because it made it difficult to determine whether the avatars were real. Being able to detect emotion is valuable because it allows people to predict behavior, and incongruences between facial expressions and tone of voice can be distressing and scary for observers because they act as a signal of unpredictability (Tinwell et al., 2011).
While the appearance, motion quality, and interactivity of an animated character had important implications for the behavioral choices made in a simulated ethics dilemma (MacDorman, Coram, Ho, & Patel, 2010), the same has not been found in a simulated employment context. Hawkes (2012a) investigated whether the choice of 2D animation, realistic 3D animation, or video in a SJT used for selection purposes could impact the answers provided by test-takers. While the 3D animation was identified as significantly less human-like, more eerie and unattractive than live-action video, there were no significant differences in test responses across media types in both cognitive and noncognitive items. A second study further showed that test takers did not display varying levels of empathy in a customer service skills media-rich SJT as a result of these three media types, although the 3D animation was again perceived to be more uncanny (Hawkes, 2012b). Overall, research suggests that individuals perceive animation and live-action differently. Given the potential implications for applicant perceptions of media-rich simulations, the dearth of research in this area is surprising


although, as earlier noted, perhaps justified by the costly endeavor of recreating simulations using multiple media types. Hawkes (2012b) noted the importance of studying the impact that the specific media type can have on company perceptions. We seek to answer this call for research by exploring applicant reactions to the use of 2D animation, 3D animation, and live-action video in a SJT simulation. Given the limited existing research and literature in this area, we propose the following hypotheses without making specific predictions as to the direction of significant differences:
Hypothesis 1 (H1): There will be significant differences in ratings of job relatedness across media types (2D animation, 3D animation, or video) using a computer-based SJT of the same content.
Hypothesis 2 (H2): There will be significant differences in levels of engagement in the application process across media types (2D animation, 3D animation, or video) using a computer-based SJT of the same content.
Hypothesis 3 (H3): There will be significant differences in rankings of (a) preference, (b) realism, and (c) overall company impressions across media types (2D animation, 3D animation, or video) using a computer-based SJT of the same content.

6. Study 2 method
6.1. Participants
Participants were recruited through an open access convenience sampling website hosted by a global assessment firm where users can attempt tests in exchange for incentives, practice, getting acquainted with the testing platform, or feedback on the candidate's test performance. The open access convenience sampling website is also utilized to collect validation data for new tests using a global sample of individuals. The online posting for this study asked for research volunteers to view different versions of a new SJT and provide their reactions. No incentive or feedback was offered to participants in this study. Of the 434 individuals who read the informed consent and started the survey, 209 participants completed the study. Of these, three were removed for spending less than 5 min on the survey and four were removed for spending 15 or more hours. The average time spent on the survey was M = 20.21 min, for a total final sample of 202 participants. Of those that responded, gender was evenly split between females (49%) and males (47.5%). Applicants represented a variety of age groups, including under 25 years old (24.3%), 26–30 (18.3%), 31–40 (16.8%), 41–50 (17.8%), and 51+ (13.4%). The majority of participants reported 11+ years of part- and/or full-time work experience (39.1%), although other groups were


also represented: less than 1 year of experience (8.9%), 1–2 years of experience (16.3%), 3–5 years of experience (14.4%), and 6–10 years of experience (12.4%). Over half the sample (59.9%) also reported having previous management experience. We did not contact participants directly; therefore, it is unknown whether any of them were actual job applicants at companies that utilized the simulation content displayed to them, or whether they were working students. Participants resided in the United Kingdom (47%), South Africa (10.4%), the United States (7.9%), India (3.5%), Australia (3.5%), and the Netherlands (2%). Participants from various countries in Asia, Africa, Europe, and the Middle East were also represented (25.7%). Seventeen participants did not report their country of residence.

6.2. Materials
6.2.1. Computer-based scenario
Three versions of the same computer-based scenario were used in this study: (a) 2D animation; (b) 3D animation; and (c) live-action video. All versions depicted a scenario that an applicant would typically encounter on the job. The scenario used in this study was from a managerial assessment measuring coaching skills developed by a global assessment firm for the purpose of applicant selection. The various versions were created by the global assessment firm specifically for the purpose of this research and were deployed from their testing platform. Both animated versions used avatars to act out the scenario, while actors performed the script in the live-action video version. In response to the scenario, participants indicated which behavioral options would be (1) most and (2) least effective for an individual to employ. The test content and composition were held constant across media types.

6.3. Procedure
The participants viewed one scenario from an extended version of a managerial assessment used to measure coaching skills. Participants were provided with a description of the scenario to give them situational context. Participants were asked to imagine they were applying for a job as a manager of a team in a call center and told that the role would involve some degree of coaching. In the scenario, the manager had to inform his or her teammates that their customer service goal had increased from 89% to 92%. Participants were asked to imagine how they would respond if they had to coach the team. To accurately assess changes in applicant reactions due to changes in media format, we adopted a within-subjects approach, such that all participants were able to view and compare the three media types (Chan & Schmitt, 1997). The order of the media presentation was randomized across participants. Participants completed


measures of job relatedness and engagement immediately following each media version. After viewing the three media types, participants were asked to rank order the three media types with regard to their preference, level of realism, and overall impression of the hiring organization. Participants were also asked to provide additional qualitative comments about the media types they viewed, followed by demographic information.

6.4. Measures
6.4.1. Job relatedness
Job relatedness was measured using a 4-item scale adapted from the SPJS (Bauer et al., 2001) (α = .74 for 2D, .78 for video, and .80 for 3D; average = .77), such that items did not reference a specific job title. Two items assessed perceived predictive validity (e.g., 'Doing well on this test means a person can do the job well') and two additional items measured face validity (e.g., 'It would be clear to anyone that this test is related to the job'). Items were measured on a 5-point Likert scale where 1 = strongly disagree and 5 = strongly agree.
6.4.2. Engagement
Engagement in the application process was measured with a 3-item scale (α = .92 for 2D, .91 for video, and .91 for 3D; average = .91) created by the global assessment firm specifically for this study. These three items are based on a pool of items used to measure applicant engagement with simulations. Participants were asked to imagine that this situation was one of 15 different managerial situations in a managerial coaching assessment and that the whole test took about 20 min to complete, and to then answer the engagement questions accordingly. A sample item is 'I would enjoy taking tests like this.' Items were measured on a 5-point Likert scale where 1 = strongly disagree and 5 = strongly agree.
6.4.3. Applicant reaction rankings
Participants were asked to rank order each media type with regard to three reaction criteria, such that a value of 1 = top choice. Preference was measured with the following item: 'Which test would you most prefer to take?' Realism was assessed by asking participants 'Which test seems the most realistic?' The overall impression of the organization was measured by asking 'Which test conveys the best impression to job candidates about the hiring organization?'
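The reliability values reported for these scales are Cronbach's alpha. For readers unfamiliar with the computation, a minimal sketch follows; the item scores are made up for illustration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses to a 4-item scale (6 respondents)
scores = [[4, 4, 5, 4],
          [2, 3, 2, 2],
          [5, 4, 5, 5],
          [3, 3, 3, 4],
          [1, 2, 1, 2],
          [4, 5, 4, 4]]
print(round(cronbach_alpha(scores), 2))
```

Alpha approaches 1 as the items track each other more closely; the .74–.92 values reported above fall in the conventionally acceptable-to-high range for internal consistency.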

7. Study 2 results
Hypotheses 1 and 2 explored significant differences in ratings of job relatedness and engagement across media types (refer to Table 2 for means and standard deviations). For these, significant within-subjects univariate analyses were followed by Bonferroni-adjusted post hoc




Table 2. Study 2 means and standard deviations for reaction ratings across media types

                  2D           3D           Video
                  Mean (SD)    Mean (SD)    Mean (SD)
Job relatedness   3.59 (.79)   3.60 (.85)   3.71 (.80)
Engagement        3.22 (.97)   3.26 (1.02)  3.66 (.96)

Note: Values in parentheses are standard deviations.

comparison tests of the following groups: (1) the video computer-based SJT to the 2D and 3D animated versions and (2) the 2D computer-based SJT to the 3D animated version. A one-way within-subjects analysis of variance (ANOVA) indicated that ratings of job relatedness were significantly different across media types, Wilks' lambda = .95, F(2, 168) = 4.91, p = .01, partial η² = .06. Specifically, paired samples t-tests were run to examine the significant differences found between the video and 2D (t(180) = 2.42, p = .02), as well as the video and 3D media types (t(173) = 2.65, p = .01). In both cases, video was rated as more job related. H1 was supported. A statistically significant effect of media type on engagement was also found, thus supporting H2. As sphericity was violated (χ²(2) = 6.33, p = .04), Pillai's trace is reported (ε = .97). Engagement levels varied by media type, V = .22, F(2, 174) = 32.91, p = .00, partial η² = .22. The video computer-based SJT was perceived as more engaging than the 2D (t(186) = 6.23, p = .00) and 3D (t(180) = 6.06, p = .00) animated versions. Given the nonparametric nature of the ranking data, a Friedman test of differences among repeated measures was used to test H3a–H3c. Post hoc analyses were conducted using Wilcoxon signed-rank tests, adjusting the alpha level with a Bonferroni correction for the number of comparisons. Rankings of preference, realism, and overall impression across media types displayed a similar pattern of results. Figure 3 displays a visual comparison of the mean rankings across media types. Applicant rankings of preference (χ²(2) = 151.25, p = .00), realism (χ²(2) = 188.67, p = .00), and overall impression (χ²(2) = 139.06, p = .00) significantly varied by media type. Overall, video outperformed both the 2D and 3D animations, and 3D outperformed the 2D version.
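The Bonferroni-adjusted paired comparisons used for H1 and H2 can be sketched with SciPy's paired-samples t-test. The engagement ratings below are hypothetical scores from ten respondents, invented for illustration; they are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point engagement ratings from the same ten respondents
# for each media type (illustrative values, not the study's data)
eng_2d    = np.array([3, 2, 3, 4, 3, 2, 3, 3, 4, 2], dtype=float)
eng_3d    = np.array([3, 3, 3, 4, 3, 3, 3, 4, 4, 2], dtype=float)
eng_video = np.array([4, 4, 4, 5, 4, 3, 4, 4, 5, 3], dtype=float)

alpha = .05 / 3  # Bonferroni adjustment for three pairwise comparisons
for a, b, label in [(eng_video, eng_2d, "video vs 2D"),
                    (eng_video, eng_3d, "video vs 3D"),
                    (eng_3d,    eng_2d, "3D vs 2D")]:
    t, p = stats.ttest_rel(a, b)                 # paired-samples t-test
    d = (a - b).mean() / (a - b).std(ddof=1)     # Cohen's d for paired scores
    print(f"{label}: t = {t:.2f}, d = {d:.2f}, "
          f"{'significant' if p < alpha else 'n.s.'}")
```

Because each respondent rates every media type, the paired test removes between-person variability from the error term, which is the advantage of the within-subjects design the study adopted.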
Post hoc comparisons indicated that the 3D animated computer-based SJT received higher rankings than the 2D animated version across all three applicant reaction rankings of preference, realism, and overall impression (Z = −6.09, p = .00, r = .43; Z = −7.27, p = .00, r = .58; and Z = −3.59, p = .00, r = .26, respectively). However, the video version received the highest rankings from participants. Specifically, the video computer-based SJT was preferred over the 2D (Z = −9.94, p = .00, r = .70) and 3D (Z = −7.39, p = .00, r = .52) animated versions. It was also perceived as more realistic than the 2D (Z = −10.67,


Figure 3. Study 2 participant mean rankings of the three types of computer-based SJTs.

p = .00, r = .85) and 3D (Z = −8.34, p = .00, r = .67) animated versions. Lastly, applicants ranked video as making the best overall impression when compared to the 2D and 3D animation versions (Z = −9.13, p = .00, r = .67 and Z = −8.73, p = .00, r = .64, respectively). The results, hence, support H3a–H3c.
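The nonparametric procedure for H3 — a Friedman omnibus test on the rankings followed by Bonferroni-adjusted Wilcoxon signed-rank post hocs — can be sketched as follows. The rankings are hypothetical values from eight respondents, not the study's data, and the effect size r is recovered from the two-sided p-value as an approximation.

```python
import numpy as np
from scipy import stats

# Hypothetical rankings (1 = top choice) of the three media types
# from eight respondents (illustrative values, not the study's data)
rank_2d    = np.array([3, 3, 2, 3, 3, 2, 3, 3])
rank_3d    = np.array([2, 2, 3, 2, 2, 3, 2, 2])
rank_video = np.array([1, 1, 1, 1, 1, 1, 1, 1])

# Omnibus Friedman test for differences among the repeated measures
chi2, p = stats.friedmanchisquare(rank_2d, rank_3d, rank_video)

# Bonferroni-adjusted Wilcoxon signed-rank post hoc comparisons
alpha = .05 / 3
for a, b, label in [(rank_video, rank_2d, "video vs 2D"),
                    (rank_video, rank_3d, "video vs 3D"),
                    (rank_3d,    rank_2d, "3D vs 2D")]:
    w, p_pair = stats.wilcoxon(a, b)
    z = stats.norm.isf(p_pair / 2)     # |Z| recovered from the two-sided p
    r = z / np.sqrt(len(a))            # effect size r = Z / sqrt(N)
    print(f"{label}: r = {r:.2f}, {'significant' if p_pair < alpha else 'n.s.'}")
```

The Friedman test is the appropriate omnibus here because forced rankings violate the interval-scale and normality assumptions of a repeated-measures ANOVA.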

8. Study 2 discussion
This study makes a novel contribution to the applicant reactions literature by investigating differences in reactions across various types of media commonly used in character-based simulations. The research supplements the findings of Study 1 by including a 2D, 3D, and live-action video version of a computer-based SJT in which all but the media was kept constant. Of particular interest was the ability to compare various forms of animation options available to simulation developers. Overall, 2D animation received the least favorable reactions in terms of its level of realism, the overall impression of the hiring organization, and the engagement level in the testing process that it provoked. The caricature form represented by 2D animation also drew negative open-ended feedback from participants, who noted the 'cartoons' as being distracting and unable to portray the affective meaning behind the script being delivered. Hawkes (2012b) investigated a related issue, namely whether the chosen media type could influence the manifestation of important applicant traits being measured. His findings indicated that applicant levels of empathy, associated with the measurement of customer service orientation skills, were not impacted by the perceived uncanniness of the animations used. Hence, there is some preliminary indication that applicants can overcome limitations, such as the 2D character's inability to express realistic emotion, in responding to assessments. Additionally, 3D animation was preferred on average over 2D animation, as well as considered more realistic


and creating a more favorable impression of the organization, despite some participants describing it as 'kind of spooky.' Studies in the area of robotics and animation have concluded that various factors beyond the degree of photorealism (e.g., facial proportions) influence the uncanny valley phenomenon (MacDorman et al., 2009). Some of these more intricate factors may be difficult for simulation developers to manipulate when using common off-the-shelf character animation software (see Hawkes, 2013), although advancements in this area have been notable even from the time of our data collection. Last, our results indicated that video elicited the most positive reactions when compared to both animation options across all five criteria. This is a timely investigation given the growing use of animation technologies in assessment creation. The findings are consistent with the cited advantages of live-action footage (see Hawkes, 2013), including its enhanced realism and ability to convey more convincing sensory cues and emotions. For example, participants noted that the video was the most effective way of communicating the scenario because it made the whole situation real. Here again, we present our results within the boundaries of a character-based simulation, as the preference for video over animation may be less marked in desktop simulations, where the combination of the two may result in an optimal applicant experience.

9. General discussion
The development and use of media-rich simulations for employee selection presents new and exciting research avenues. In a series of two studies, we first investigated the benefits of using 3D animation over text, and later explored differences in reactions across multiple forms of widely used media. Our findings generally indicate that companies using 3D animation and live-action video media content in their character-based simulations may reap the additional benefits of engendering favorable reactions to the selection process. However, the choice of media use and type is clearly one that must be weighed in relation to practical considerations, such as cost, time, development resources, and customer preferences. Further, Popp, Tuzinski, and Fetzer (2016) provide an extensive treatment of the issues that test developers should consider when determining which media format to adopt, such as psychometric, applied, contextual, and logistical considerations. In particular, the use of media-rich assessments should match the construct being measured. We raise important issues in interpreting our results with regard to the various types of simulations currently available. Given the significant cost and expertise involved in creating various versions of the same simulation for research purposes, we limited our study to a character-based simulation. However, the influence of media type may be more significant in desktop and virtual-environment simulations, which are more dependent on media quality and richness to create the desired simulated work scenario and in which the media serves, perhaps, a more central purpose. Research should explore the generalizability of our findings to virtual environment-based simulations in which the role of the virtual character in extracting specific responses may be greater. Indeed, research has shown that visual and behavioral fidelity influences the success of a virtual environment in representing sensory cues that reflect the real world (Vinayagamoorthy et al., 2005). Hence, we stress the importance of differentiating not only medium (e.g., paper-and-pencil, computer-based) and media type (e.g., 2D, 3D, live-action video), but also taking into account the type of media-rich simulation used. While our research design presented several strengths (e.g., random assignment to counterbalanced conditions, within-subjects measures), we recognize the limitations of our findings. For example, our sample in Study 1 was composed of students. In an effort to increase the viability of the sample, screening variables were included to control for random responding. Further, the top performer was offered a $50 gift card as a means of creating a high-stakes situation. Also, the sample demographics are reflective of the applicant pool for the job in question in relation to age and interest/experience in retail. The sample in Study 2, while not made up of job applicants, included working participants in various countries. Indeed, given the design and main research purpose of Study 2, we considered it impractical to use real job applicants. Further, utilizing an international sample of participants is valuable because it provided us with a wide range of demographics, suggesting that our conclusions may be applied to international selection firms seeking best practices in their processes.
Our results in Study 1 were limited to differences in post-test reactions only when testing for support of a spillover effect of favorable reactions generated by the use of 3D animation on the other assessments. Future research should consider the inclusion of pretest reactions as well, which could be used for comparison within each media condition. Further, Study 2 was limited to exploring differences across media types in reaction ratings and rankings. While this fulfilled our study’s purpose of identifying preferences for specific media types, we did not evaluate mechanisms by which these reactions could impact organizational outcomes as was done in Study 1.

9.1. Future research and concluding remarks
Our studies are an early effort at integrating the influence of media selection in simulated assessments into the applicant reactions literature. It is worth noting that enhancements in animation software are happening at an exponential rate, and advancements in 3D animation continue to get us closer to mimicking the realism of video.


Hence, the advantage of live-action video might wane as it could soon be difficult to distinguish it from realistic 3D animation. Indeed, iterations of 3D animation may render video obsolete. Our research raises many questions and areas for future investigation. Generational differences in preferences for the use of technology have been widely cited (Pew Research Center, 2010). What influence do these preferences and usage habits have in forming attitudes and reactions about the use of media-rich assessments? We see this not only as an issue of age differences, but one that includes a complex set of factors, such as normative attitudes about the role of testing for hiring purposes, as well as expectations for the quality of media-based technologies. For example, employers are not only adapting their human resources strategies to attract talented Millennials entering the workforce, but are also investing in presenting a consistent, appealing brand image that relates to this group through various channels of social media. The use of animation facilitates customer branding embedded within employee assessments, hence potentially influencing consumer attitudes. From a practical standpoint, this becomes a critical factor for organizations that consider the overall impact of their selection procedures on applicant behavior beyond that of simply accepting a job offer. Evidence suggests, for example, that 32% of workers would not purchase products from a company if they did not hear a response back from the organization after submitting their application (Grasz, 2012). We are not aware of any research exploring the influence of animation use in simulated selection assessments on consumer attitudes and behaviors. Further, understanding differences in applicant reactions across both branded and noncustomized 'off-the-shelf' media-rich simulations can have important practical implications.
Additionally, while Hawkes (2012a) suggests that media type does not significantly influence test takers' responses, research is needed to assess whether media richness can affect a simulation's validity. Ideally, such a study would consider not only the media types presented here, but also additional moderators such as the constructs assessed and sample characteristics. While this type of research is likely to be challenging given the limited accessibility of media-rich simulations to academic researchers, its contribution would be timely and valuable. Badger, Kaminsky, and Behrend (2014) concluded that media richness can impair information acquisition by creating additional mental workload. Furthermore, research indicates that avatar attractiveness can influence interviewer ratings in a selection context (Behrend, Toaddy, Thompson, & Sharek, 2012); might the attributes and use of animation therefore confound applicant responses across test types and item formats? Moreover, do applicant preferences for media richness vary by the type of construct measured? For interpersonal situations such as those used in the simulations in our studies, a multimedia-based format provides a social interaction stimulus that would otherwise require a large amount of text to describe (Popp, Tuzinski, & Fetzer, 2016). More research into the influence of media richness and type on candidate preferences, information processing, and decision making in the selection process is needed. Overall, the literature on the influence of technology on applicant reactions (see Bauer et al., 2011) has not distinguished among various forms of animated media, which our results indicate may differ significantly on the criteria of interest. Indeed, not all media-rich simulations are perceived the same; hence, we consider media type an important factor to include in future models of applicant reactions. Furthermore, we encourage researchers and practitioners to consider the literature in areas such as informatics and computing when studying the influence of media on employee assessments.

References

Ababneh, K. I., Hackett, R. D., & Schat, A. C. (2014). The role of attributions and fairness in understanding job applicant reactions to selection procedures and decisions. Journal of Business and Psychology, 29, 111–129.

Allen, D. G., Van Scotter, J. R., & Otondo, R. F. (2004). Recruitment communication media: Impact on prehire outcomes. Personnel Psychology, 57, 143–171.

Badger, J. M., Kaminsky, S. E., & Behrend, T. S. (2014). Media richness and information acquisition in internet recruitment. Journal of Managerial Psychology, 29, 866–883.

Bauer, T. N., Maertz, C. P., Jr., Dolen, M. R., & Campion, M. A. (1998). Longitudinal assessment of applicant reactions to employment testing and test outcome feedback. Journal of Applied Psychology, 83, 892–903.

Bauer, T. N., Truxillo, D. M., Mack, K., & Costa, A. B. (2011). Applicant reactions to technology-based selection: What we know so far. In N. T. Tippins, S. Adler, & A. I. Kraut (Eds.), Technology-enhanced assessment of talent (pp. 190–223). San Francisco, CA: Jossey-Bass.

Bauer, T. N., Truxillo, D. M., Paronto, M. E., Weekley, J. A., & Campion, M. A. (2004). Applicant reactions to different selection technology: Face-to-face, interactive voice response, and computer-assisted telephone screening interviews. International Journal of Selection and Assessment, 12, 135–148.

Bauer, T. N., Truxillo, D. M., Sanchez, R. J., Craig, J. M., Ferrara, P., & Campion, M. A. (2001). Applicant reactions to selection: Development of the selection procedural justice scale (SPJS). Personnel Psychology, 54, 388–420.

Beaty, J. C., Dawson, C. R., Fallaw, S. S., & Kantrowitz, T. M. (2009). Recovering the scientist–practitioner model: How IOs should respond to unproctored internet testing. Industrial and Organizational Psychology: Perspectives on Science and Practice, 2, 58–63.

Behrend, T., Toaddy, S., Thompson, L. F., & Sharek, D. J. (2012). The effects of avatar appearance on interviewer ratings in virtual employment interviews. Computers in Human Behavior, 28, 2128–2133.

Bruk-Lee, V., Drew, E. N., & Hawkes, B. (2013). Candidate reactions to simulations and media-rich assessments in personnel selection. In M. Fetzer & K. Tuzinski (Eds.), Simulations for personnel selection (pp. 43–60). New York: Springer Science + Business Media.

Bryant, S. E., & Malsey, S. (2012, April). 21st Century assessment centers: Technology's increasing role and impact. Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, San Diego, CA.

CEB. (2010a). Global personality inventory – Adaptive technical manual. Thames Ditton, UK: CEB.

CEB. (2010b). Global cognitive index test manual. Thames Ditton, UK: CEB.

Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82, 143–159.

Chan, D., & Schmitt, N. (2004). An agenda for future research on applicant reactions to selection procedures: A construct-oriented approach. International Journal of Selection and Assessment, 12, 9–23.

Chan, D., Schmitt, N., Sacco, J. M., & DeShon, R. P. (1998). Understanding pretest and posttest reactions to cognitive ability and personality tests. Journal of Applied Psychology, 83, 471–485.

Fetzer, M., & Tuzinski, K. (2013). Simulations for personnel selection. New York: Springer.

Fluckinger, C. D., Dudley, N. M., & Seeds, M. (2014). Incremental validity of interactive multimedia simulations in two organizations. International Journal of Selection and Assessment, 22, 108–112.

French, W. L. (1987). The personnel management process. Boston, MA: Houghton Mifflin.

Gilliland, S. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694–734.

Grasz, J. (2012, December 31). Candidates who have a bad job search experience can adversely affect a company's bottom line, CareerBuilder study shows. Available at http://www.careerbuilder.com/share/aboutus/pressreleasesdetail.aspx?sd=6/20/2012&id=pr703&ed=12/31/2012 (accessed 21 January 2016).

Gutierrez, S. L. (2010, April). Comparing examinee reactions to multimedia and text-based simulation items. Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Gutierrez, S. L. (2011, February). Moving beyond multiple-choice items: Examining the technological considerations and examinee reaction to a new point and click innovative item format. Poster presented at the annual conference of the Association of Test Publishers, Phoenix, AZ.

Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57, 639–683.

Hawkes, B. J. (2012a, April). Multimedia SJTs: Are animation and live action really equivalent? Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, San Diego, CA.

Hawkes, B. J. (2012b, April). Test-takers' empathy for animated humans in SJTs. Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, San Diego, CA.

Hawkes, B. J. (2013). Simulation technologies. In M. Fetzer & K. Tuzinski (Eds.), Simulations for personnel selection (pp. 61–82). New York: Springer Science + Business Media.


Highhouse, S., Lievens, F., & Sinar, E. F. (2003). Measuring attraction to organizations. Educational and Psychological Measurement, 63, 986–1001.

Honkaniemi, L., Feldt, T., Metsäpelto, R., & Tolvanen, A. (2013). Personality types and applicant reactions in real-life selection. International Journal of Selection and Assessment, 21, 32–45.

Konradt, U., Warszta, T., & Ellwart, T. (2013). Fairness perceptions in web-based selection: Impact on applicants' pursuit intentions, recommendation intentions, and intentions to reapply. International Journal of Selection and Assessment, 21, 155–169.

La Torre, J., & Bucklan, M. A. (2013). Simulations for service roles. In M. Fetzer & K. Tuzinski (Eds.), Simulations for personnel selection (pp. 187–213). New York: Springer Science + Business Media.

Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A comparison in terms of predictive validity. Journal of Applied Psychology, 91, 1181–1188.

MacDorman, K. F., Coram, J. A., Ho, C., & Patel, H. (2010). Gender differences in the impact of presentational factors in human character animation on decisions in ethical dilemmas. Presence, 19, 213–229.

MacDorman, K. F., & Entezari, S. (2015). Individual differences predict sensitivity to the uncanny valley. Interaction Studies, 16, 141–172. doi:10.1075/is.16.2.01mac

MacDorman, K. F., Green, R. D., Ho, C.-C., & Koch, C. T. (2009). Too real for comfort? Uncanny responses to computer generated faces. Computers in Human Behavior, 25, 695–710.

MacDorman, K. F., & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive and social science research. Interaction Studies, 7, 297–337.

MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99–128.

Mitchell, W. J., Szerszen, K. A., Sr., Lu, A. S., Schermerhorn, P. W., Scheutz, M., & MacDorman, K. (2011). A mismatch in the human realism of face and voice produces an uncanny valley. i-Perception, 2, 10–12.

Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33–35.

Moosa, M. M., & Minhaz Ud-Dean, S. M. (2010). Danger avoidance: An evolutionary explanation of the uncanny valley. Biological Theory, 5, 12–14.

Murphy, K. R. (1986). When your top choice turns you down: Effect of rejected job offers on the utility of selection tests. Psychological Bulletin, 99, 133–138.

Oh, J., Fiorito, S. S., Cho, H., & Hofacker, C. F. (2008). Effects of design factors on store image and expectation of merchandise quality in web-based stores. Journal of Retailing and Consumer Services, 15, 237–249.

Parshall, C. G., Harmes, J. C., Davey, T., & Pashley, P. (2010). Innovative items for computerized testing. In W. J. van der Linden & C. A. W. Glas (Eds.), Elements of adaptive testing (pp. 215–230). New York: Springer.

Pew Research Center. (2010). Millennials: A portrait of generation next. Available at http://www.pewsocialtrends.org/files/2010/10/millennials-confident-connected-open-to-change.pdf (accessed 10 April 2015).

Popp, E. C., Tuzinski, K., & Fetzer, M. (2016). Actor or avatar? Considerations in selecting appropriate formats for assessment content. In M. J. Kolen (Series Ed.) & F. Drasgow (Vol. Ed.), NCME application of educational measurement and assessment: Vol. 2. Technology and testing: Improving educational and psychological measurement (pp. 79–103). New York: Taylor & Francis/Routledge.

Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36, 717–731.

Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40, 879–891.

Richman-Hirsch, W. L., Olson-Buchanan, J. B., & Drasgow, F. (2000). Examining the impact of administration medium on examinee perceptions and attitudes. Journal of Applied Psychology, 85, 880–887.

Rosse, J. G., Miller, J. L., & Stecher, M. D. (1994). A field study of job applicants' reactions to personality and cognitive ability testing. Journal of Applied Psychology, 79, 987–992.

Ryan, A. M., & Ployhart, R. E. (2000). Applicant perceptions of selection procedures and decisions: A critical review and agenda for the future. Journal of Management, 26, 565–606.

Rynes, S. L., & Barber, A. E. (1990). Applicant attraction strategies: An organizational perspective. Academy of Management Review, 15, 286–310.


Schleicher, D. J., Venkataramani, V., Morgeson, F. P., & Campion, M. A. (2006). So you didn't get the job… now what do you think? Examining opportunity-to-perform fairness perceptions. Personnel Psychology, 59, 559–590.

Sinar, E. F., Reynolds, D. H., & Paquet, S. L. (2003). Nothing but 'Net? Corporate image and web-based testing. International Journal of Selection and Assessment, 11, 150–157.

Smither, J. W., Reilly, R. R., Millsap, R. E., Pearlman, K., & Stoffey, R. W. (1993). Applicant reactions to selection procedures. Personnel Psychology, 46, 49–76.

Steuer, J. (1992). Defining virtual reality: Dimensions determining telepresence. Journal of Communication, 42, 73–93.

Tinwell, A., Grimshaw, M., Nabi, D. A., & Williams, A. (2011). Facial expression of emotion and perception of the uncanny valley in virtual characters. Computers in Human Behavior, 27, 741–749.

Vinayagamoorthy, V., Steed, A., & Slater, M. (2005, July). Building characters: Lessons drawn from virtual environments. In Toward social mechanisms of android science: A CogSci 2005 workshop (pp. 119–126).

Wiechmann, D., & Ryan, A. M. (2003). Reactions to computerized testing in selection contexts. International Journal of Selection and Assessment, 11, 215–229.
