Not Your Cup of Tea? How Interacting With a Robot Can Increase Perceived Self-efficacy in HRI and Evaluation

Astrid M. Rosenthal-von der Pütten, Nikolai Bock & Katharina Brockmann
Social Psychology: Media and Communication, University of Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
[email protected]; {nikolai.bock, katharina.brockmann}@stud.uni-due.de
ABSTRACT
The goal of this work is to explore the influence of do-it-yourself customization of a robot on technologically experienced students' and inexperienced elderly users' perceived self-efficacy in HRI, uncertainty, and evaluation of the robot and the interaction. We introduce the Self-Efficacy in HRI Scale and present two experimental studies. In study 1 (students, n=60) we found that any interaction with the robot increased self-efficacy, regardless of whether this interaction involved customization or not. Moreover, individual increases in self-efficacy predicted more positive evaluations. In a second study with elderly users (n=60) we could not replicate the general positive effect of the interaction on self-efficacy. Again, we did not find the hypothesized stronger effect of customization on self-efficacy, nor did we find the relationship between self-efficacy increase and evaluation. We discuss limitations of the setting and of the questionnaire design for elderly participants.
Keywords
Human-robot interaction; self-efficacy; experimental study; elderly users; robot teaching; technology acceptance; do-it-yourself
1. INTRODUCTION
The common future vision of companion robots is one of fully adaptive systems that adjust to new tasks, new situational contexts, and new user needs in real time, using diverse sources of information. Ideally, this happens without greater involvement of the user, in the sense that he or she does not need particular skills to adjust or customize the robot to their needs. This is especially important for the predominantly targeted user group of elderly people, who might not have sufficient expertise with complex technology. However, creating fully adaptive robots requires a lot of effort, is resource-intensive, and poses limitations in various ways. Moreover, it is questionable whether perfect adaptivity is needed for the robot to be used efficiently and accepted by users. In contrast, adjusting and customizing a robot to one's own needs might have a positive influence, since do-it-yourself (DIY) experiences can lead to higher self-efficacy, probably creating higher acceptance and thereby possibly leading to more sustained usage of these systems. In the area of human-computer interaction (HCI) it was found that self-efficacy (the extent or strength of one's belief in one's own ability to complete tasks and reach goals, [2]),
amongst other factors such as perceived ease of use and perceived usefulness, predicts usage tendencies [10]. Self-efficacy can be positively influenced by enactive attainment, in other words, the experience of mastery. This has previously been used in pedagogical contexts for the transfer of technological competencies [1, 24] and could also be used to introduce socially-assistive technologies to inexperienced users. By giving users the opportunity to customize systems to their own preferences, we might increase perceived self-efficacy and thereby increase acceptance of that socially-assistive technology. Moreover, across different categories of products, studies have shown that consumers value products they have assembled or designed on their own more highly than equivalent ready-made products, a phenomenon also known as the "IKEA effect" [29]. This phenomenon has also been found regarding the use of technologies and especially robots [17, 22]. The goal of this work is therefore to study the influence of the adaptability or customizability of a robot by its users on their self-efficacy in HRI and, in turn, on their evaluations. For this purpose, we developed a new measure, the Self-Efficacy in Human-Robot Interaction Scale (SE-HRI), which we describe in the following. The questionnaire was then tested with two different samples in two experimental studies. In the first study, we tested the questionnaire and the hypotheses that a) any interaction increases self-efficacy in HRI and that b) this effect is strongest for interactions involving DIY customization of a robot. In the second study, we tested the questionnaire with a group of technologically inexperienced users in the age group 60+, because seniors are a major target group for future companion technologies. In this second study, we further explored possible differences between what we call mediated customization and DIY customization.
1.1 Self-Efficacy in Human-Robot Interaction
Self-efficacy has been defined by Bandura as "people's judgments of their capabilities to organize and execute courses of action required to attain designated types of performances. It is concerned not with the skills one has but with judgments of what one can do with whatever skills one possesses" [3]. Self-efficacy can be altered via various mechanisms, with enactive mastery experiences being the most effective since they serve as indicators of capability. Within the field of HCI, quite a number of studies have shown the influence of computer self-efficacy on diverse outcome variables such as performance [9, 15], ease of use [38], system use [9, 23], and early adoption [7]. Since self-efficacy was found to be predictive of usage intentions in HCI studies, it is reasonable to assume a similar relationship in the field of human-robot interaction (HRI). Self-efficacy estimations are, however, domain specific, meaning that a person can exhibit high self-efficacy in one domain (e.g., general computer use) but low self-efficacy in other domains (e.g., smartphone use, programming, interacting with a robot). Therefore, even users with high scores in computer self-efficacy
might score low in their self-efficacy expectations regarding HRI. This might, however, not have been detected so far due to the lack of an adequate instrument capturing self-efficacy in HRI. For these reasons, we developed the Self-Efficacy in HRI Scale (SE-HRI, [31]). Self-efficacy in HRI is regarded as a state variable that can change according to the influential factors identified by Bandura [2], especially with the experience of mastery in HRI. We created a German and an English version of the scale with initially 50 items and conducted three surveys to develop and validate the measure. In a first survey (n = 201), exploratory factor analysis revealed a two-factorial solution (factors: perceived self-efficacy and loss of control) with good reliability. Confirmatory factor analysis did not confirm the two-factorial structure. Instead, it revealed a better model fit for a one-factorial solution for the German (second survey, n = 450) and the English version (third survey, n = 209) of the scale, with good indices for convergent and divergent validity. The final questionnaire contains 18 items and was used in the studies presented in this paper. For a more detailed description of the scale development process we refer the reader to [31].
1.2 DIY and Technology Acceptance
Since socially-assistive robots are not yet market-ready, there is no information on abandonment rates for this kind of technology, nor is there reliable information on how high initial adoption rates will be. Regarding assistive technologies in general, Hurst et al. [22] discuss reasons for low acceptance rates and how they might be overcome by DIY approaches to the design of assistive technologies, since even essential devices such as hearing aids suffer very high abandonment rates of up to 75%. Phillips and Zhao identified factors for technology abandonment such as how easily users can obtain devices, whether they were involved in the selection process, how well the device performs, and whether or not users' needs changed before or during usage [30]. It is not surprising that customized devices better meet users' needs. Previous work has also discussed the need to personalize robots to their users' needs [12]. Lee [26] summarizes that personalization in HRI includes the customization of a robot's appearance, personality, or task preferences. Lee et al. demonstrated that a robot that offered personalized services (snack delivery) based on users' prior interactions with the robot improved rapport, cooperation, and engagement during service encounters. Saunders et al. [35] enabled participants to teach a robot human activities in a smart home and to teach it to carry out behaviors in response to these activities. The authors state that "rather than passively accepting imposed solutions to a particular need, the user actively participates in formulating with the robot their own solutions and thus remains dominant of the technology and is empowered, physically, cognitively, and socially." (p. 27). These are indicators that automatic personalization (the robot adapts to the user) has positive outcomes, but also that being actively involved in customization can be beneficial. Against this background, it should be our goal to take into account, in advance, mechanisms that might prevent high abandonment rates of socially-assistive robots, and therefore to screen for solutions in related fields. Hurst et al. state "that adoption rates can be improved by empowering individuals to create and modify their own Assistive Technology rather than being forced to rely on "off-the-shelf" products." (p. 11). In this regard, the researchers propose a DIY approach. It is indeed noteworthy that creating and building a device on one's own further creates feelings of increased self-efficacy. Due to the experience of mastery when customizing, users might consequently be more likely to adopt the technology. Hands-on experience is beneficial in many ways. In HRI it is most
often used in educational contexts to increase self-efficacy when teaching students or pre-service teachers how to program [1, 25]. However, hands-on or DIY experiences also foster HRI in more social dimensions. For instance, Groom et al. [17] found that, if participants interacted with a robot they had constructed themselves, compared to interacting with a robot that had been built by another person, they showed greater overlap in personality with the robot, liked the robot more, and reported being more attached to it. In this study, all participants built a robot, but only half of them interacted with this robot afterwards, while the other half interacted with another robot built by someone else. Accordingly, all participants had hands-on experience and the chance to learn about the functions of the robot. As Groom et al. conclude, this suggests that there is something fundamental about using a robot one has built oneself. Stadler et al. [36] transferred this research to a Programming by Demonstration scenario, in which participants interacted with a robot whose behaviors had been trained either by themselves or by another person. Again, participants rated the robot as more similar to themselves when they believed they had trained its behavior modules themselves. This was also superior to interacting with a robot that actually used the self-trained module but was introduced as using a module trained by a stranger. De Graaf (2015) reported a longitudinal study with an entertainment robot. Participants who adopted the technology in the long run and reached the final stage in the technology adoption process were the ones who had adapted the robot to their personal needs, within the limitations given by the device [16]. Similarly, in an ethnographic study with Roombas in people's homes, Fink et al. (2013) found that "some households were not willing to engage in the effort of learning how to optimally use the robot and their process of adoption stopped in this stage" [14]. Only one of the reviewed studies reports age differences. Saunders et al. [35] found that older participants perceived the system as more difficult to use than younger participants did, i.e., teaching the robot via the provided interface was perceived as more complicated by older users. These users also preferred that the robot be set up by someone else before usage. Hence, customizing a robot on one's own is a relatively unexplored topic, especially regarding different age groups. Moreover, it has not been studied how DIY customization differs from mediated customization (e.g., a service provider that sets up the robot according to the wishes of the user).
1.3 Using DIY Customization to Reduce Uncertainty in HRI
HRI is still a mystery to many potential future users. On the one hand, users have many expectations of what a robot can do, how it might communicate, and what actions it is able to perform. On the other hand, users are aware that their knowledge is mainly generated from science fiction [32]. Thus, especially technologically naive users experience high levels of uncertainty when confronted with the possibility of interacting with a robot. Admittedly, every new encounter, including encounters with fellow humans, is affected by uncertainty. Hence, the primary goal in initial interactions is to reduce uncertainty. According to Berger and Calabrese, this can be achieved by finding causal structures to explain one's own behavior and that of others [6] and thereby increasing the predictability of behavior. Actions of oneself or others at one point in time are explained retroactively, i.e., based on what came before. But these actions also serve as the basis for predicting future responses. Every action and reaction is part of an information-gathering process to reduce uncertainty in interaction. Attributional confidence, either in retroactive explanations of behavior or in
proactive predictions of behavior, is discussed as a way of operationalizing uncertainty [8]. Indeed, it has been argued that people have "an intrinsic need to deal with the environment" ([40], p. 318), to develop an "effective familiarity" with it (p. 321), and that they are motivated to master their uncertain environment by increasing its predictability and controllability. Customizing a robot, and thereby understanding and also determining at least part of its behavior, should increase attributional confidence and thereby decrease uncertainty. In other words, the possibility to determine the robot's subsequent behavior should give participants the sensation that they are able to predict the robot's behavior more reliably.
H1: Participants will report higher self-efficacy and lower loss of control at the end of the experiment compared to the baseline measure.
1.4 Research Objectives
The goal of this study is to explore the influence of (DIY) customization of a robot on technologically experienced students' and inexperienced elderly users' perceived self-efficacy in HRI, uncertainty, and system evaluations. We assume that a number of different mechanisms contribute to technology acceptance. Receiving detailed information on a robot's capabilities should enable users to better predict what to expect from the robot. Within the framework of Berger and Calabrese, this necessary information is gathered during interaction. In this regard, any interaction should reduce uncertainty. However, the amount of information or its objective quality does not necessarily correspond to uncertainty reduction [6]; it is the perceived quality of information that contributes to uncertainty reduction. We assume that customizing the robot to one's own preferences is perceived as more valuable information (for instance, compared to an information fact sheet about the robot), because the user can at least in part determine the behavior of the robot and has a basis on which to build his or her predictions. Hence, customizing the robot should decrease uncertainty and result in higher attributional confidence and thus in more favorable evaluations of the robot and the interaction. Moreover, the literature suggests that hands-on experience, and especially interacting with something one has constructed or customized oneself (compared to having someone customize a system for you), leads to more favorable outcomes, such as increased technology adoption and more favorable evaluations [17, 36]. In the following we present two studies to explore these assumptions.
2. STUDY 1
In the first study, student participants engaged in a social interaction with the Nao robot in a household-related setting. The aim of this interaction was to create five different dishes, each based on three different ingredients, with the robot's help. Preceding this social interaction, participants either a) trained the robot on which ingredients fit into which dish (training interaction), b) simply showed the robot all items upon request in order to check whether all study materials were present (non-training interaction), or c) read a fact sheet about the robot's capabilities. We expected that interacting with a robot increases self-efficacy and decreases perceived loss of control. Moreover, we hypothesized that actively teaching or training the robot leads to even stronger effects. However, because of our assumption that the interaction itself, regardless of the customization, affects self-efficacy, we needed to control for the effect of the prior interaction within the training interaction group. Hence, we added the non-training interaction condition, in which participants also engage in two interactions with the robot but have no possibility to customize it.
H2: Participants who actively trained the robot will state higher perceived self-efficacy and lower loss of control directly after the training (H2a) as well as at the end of the experiment (H2b) than participants who engaged in the interaction without training or participants who read a technical information sheet.

H3: Individual changes in self-efficacy predict evaluations of the robot. Greater increases lead to more positive evaluations.

2.1 Experimental Design and Setting
We used a 3x3 mixed design with three experimental groups (training interaction group, non-training interaction group, control group) and repeated measures on participants' self-reported self-efficacy and loss of control in HRI (baseline, after experimental treatment, after social interaction). All participants in both studies engaged with Aldebaran's Nao robot in a household-related task (in the following referred to as the social interaction scenario). Participants were asked to imagine they would have visitors later that day for dinner and wanted to find a recipe based on the foods they have in stock. The aim of the interaction was to create five different dishes, each based on three different ingredients, with the robot's help. Participants took a seat, and the Nao robot and twelve different ingredients were placed on the table in front of them. As the robot had knowledge about different dishes and which ingredients fit which dishes, it was the participants' task to show Nao three of the twelve ingredients. Nao would then give a recommendation for a recipe while pointing out which foods might be missing and would have to be bought before preparation of the dish could begin. Preceding this social interaction scenario, participants underwent different treatments in three experimental groups.

Control group. Participants in the control group read a two-page information sheet about the robot, its technical parameters, and its social abilities.

Training interaction group. Participants in the training group trained the robot themselves to categorize foods according to their preferences. They were told to teach the robot which ingredient fits into which dish, so that Nao could use this knowledge in the social interaction scenario later on. Participants were confronted with Nao (with colored dots on its hands to indicate where to touch the robot to give positive or negative reward for its answer) and twelve different ingredients in front of them on the table (cf. Figure 1). Participants showed each ingredient to Nao at least five times. Each time, Nao would ask whether the ingredient might fit into one of the five recipe categories, and participants had to provide feedback on whether Nao's suggestion was right. They touched the robot's left or right hand to provide negative or positive feedback, respectively. Participants were told that Nao would use this new knowledge in the second interaction. Hence, the robot's recommendations were ostensibly based on the training; in reality, they were generated by a hardwired decision tree to guarantee the same outcome for all participants. In sum, participants could associate one ingredient with several recipe categories, and one recipe category could contain several ingredients. We assumed that participants would take the training seriously and would not intentionally compromise the learning outcome of the robot.
We checked the video material and found that none of the participants trained or indicated an unexpected ingredient-category pair (e.g., chocolate-soup). Hence, their customization of the robot matched our programming.
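To illustrate the structure of this training protocol, the following minimal Python sketch simulates one training loop. It is a schematic illustration only, not the software that ran on the Nao: the helper functions say() and wait_for_hand_touch(), as well as the example ingredient-category table, are hypothetical placeholders standing in for the robot's speech output, its tactile sensors, and the hardwired decision tree mentioned above.

import random

# Illustrative hardwired mapping; in the study a fixed decision tree guaranteed
# that all participants received the same recommendations later on.
RECIPE_TABLE = {
    "soup": {"carrot", "potato", "onion"},
    "pasta": {"pasta", "tomato", "cheese"},
}

def say(text):
    """Placeholder for the robot's text-to-speech output."""
    print(f"NAO: {text}")

def wait_for_hand_touch():
    """Placeholder for the tactile sensors: 'left' = negative, 'right' = positive."""
    return random.choice(["left", "right"])  # in the study, the participant decides

def training_round(ingredient, categories, taught_pairs):
    """One round: Nao proposes a category, the participant confirms or rejects it."""
    guess = random.choice(categories)
    say(f"Does the {ingredient} fit into a {guess}?")
    feedback = wait_for_hand_touch()
    if feedback == "right":              # right hand = positive reward
        taught_pairs.add((ingredient, guess))
    # The taught pairs are recorded, but later recommendations still come from
    # RECIPE_TABLE, keeping the outcome identical across participants.
    return taught_pairs

if __name__ == "__main__":
    pairs = set()
    for ingredient in ["carrot", "pasta", "tomato"]:
        for _ in range(5):               # each ingredient shown at least five times
            pairs = training_round(ingredient, list(RECIPE_TABLE), pairs)
    print("Taught pairs:", pairs)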
Non-training interaction group. Participants in the non-training interaction group were asked to make sure that all ingredients needed for the following social interaction scenario were in place. In this condition, the Nao robot asked participants to show the items and then confirmed that each item was present (e.g., "We have to make sure that the pasta is here. Please show me the pasta." [participant shows the pasta] "Great. The pasta is here.").
Figure 1: Set-up for the training and social interaction
2.2 Participants and Procedure
Sixty (38 female, 22 male) volunteers participated in this study. All participants were university students recruited on campus. They were aged between 18 and 39 years (M=22.42, SD=4.08). All participants owned a personal computer and, with one exception, owned a smartphone. Participants had moderate experience with robots: none of them had a robot at home, 51% had previously seen a real robot (in a museum), 48% had previously interacted with a robot (e.g., Roomba, museum robot), and only two participants stated that they had previously programmed a robot. Upon arrival, participants signed informed consent. They were seated in front of a computer and completed a web-based questionnaire consisting of demographic data, their prior knowledge of robots, and the Self-Efficacy in HRI Scale. Students completed some additional questionnaires that served to validate the developed scale and are not part of this paper. After that, depending on the respective experimental condition, participants either a) read an information sheet about the robot, b) showed the robot the different objects in order to ensure all materials were present, or c) trained the robot on ingredient-recipe matches. Afterwards, they completed the Self-Efficacy in HRI Scale a second time before they interacted with Nao in the household-related task described previously. At the end of the interaction, participants filled in the remaining questionnaires evaluating Nao and the interaction as well as the Self-Efficacy in HRI Scale for a third time. Participants were debriefed, compensated with €10, and thanked for their participation.
2.3 Dependent Variables
The following dependent variables were consistently used in both studies and are therefore described only once. Reliability measures for Self-Efficacy in HRI and loss of control are presented in Table 1. All other reliability scores for both studies are presented in Table 3.
2.3.1 Self-efficacy in HRI & Loss of Control
Participants' self-efficacy was measured using the SE-HRI scale [31]. Self-efficacy in HRI is regarded as a variable that can change according to the influential factors identified by Bandura [2], especially with the experience of mastery. SE-HRI was measured at the beginning of the experiment (T1), after the experimental treatment (T2), and at the end of the experiment after the social interaction scenario (T3). The SE-HRI is a newly developed instrument. In the first phase of scale development, we identified two factors, namely perceived self-efficacy and loss of control. During the process of validation, the second factor was omitted, since loss of control did not seem to be the negative side of self-efficacy but a different concept closer to locus of control. Self-efficacy should be distinguished from locus of control, which refers to the belief whether outcomes are determined by one's own actions or by forces outside one's control [4], and not to the strength of one's belief in one's own ability to complete tasks and reach goals. However, locus of control is a very interesting concept to study in the context of self-efficacy expectations, and we decided to include the omitted subscale in the experiments presented in this paper to explore judgements of locus of control in HRI. The SE-HRI scale comprises 18 items (e.g., "I can make a robot perform a specific task"; "I am very confident in my abilities to control a robot"; "I think I could adjust a robot in a way that it could help me in my daily life"). We assessed people's sense of loss of control with three items (e.g., "I do not have any control over what a robot is doing"; "I do not have any influence on how a robot behaves"). Items on both scales were rated on a 6-point Likert scale from "strongly disagree" to "strongly agree".

Table 1: Reliability scores for SE-HRI and Loss of Control

             SE-HRI (T1/T2/T3)       Loss of Control (T1/T2/T3)
Study 1      .943 / .954 / .965      .822 / .887 / .971
Study 2      .971 / .980 / .974      .811 / .918 / .917
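As a technical illustration, the sketch below shows how scale scores and reliabilities of the kind reported in Table 1 can be computed, assuming (as is common for Likert scales) that the SE-HRI score is the mean of the 18 item responses and that internal consistency is estimated with Cronbach's alpha. The simulated data, item column names (se01-se18), and the pandas/pingouin tooling are hypothetical; this is not the code used to produce the values in Table 1.

import numpy as np
import pandas as pd
import pingouin as pg

# Simulated wide-format data: 30 hypothetical participants, 18 SE-HRI items (6-point Likert).
rng = np.random.default_rng(0)
base = rng.integers(1, 7, size=30)                       # latent response tendency
df = pd.DataFrame({
    f"se{i:02d}": np.clip(base + rng.integers(-1, 2, size=30), 1, 6)
    for i in range(1, 19)
})
item_cols = [f"se{i:02d}" for i in range(1, 19)]

# Scale score: mean across the 18 items (higher = higher perceived self-efficacy).
df["se_hri"] = df[item_cols].mean(axis=1)

# Internal consistency (Cronbach's alpha) for this measurement point.
alpha, ci = pg.cronbach_alpha(data=df[item_cols])
print(f"Example SE-HRI score: {df['se_hri'].iloc[0]:.2f}")
print(f"Cronbach's alpha: {alpha:.3f} (95% CI [{ci[0]:.3f}, {ci[1]:.3f}])")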
2.3.2 Evaluation of the Robot and the Interaction
Evaluation of the robot. For the evaluation of the robot, we used the Godspeed Questionnaire [5], a semantic differential with 20 bipolar items rated on a 5-point scale. It contains five subscales, of which we used four: Anthropomorphism (attribution of a human form, characteristics, or behavior to nonhuman things; 5 items, e.g., machinelike-humanlike, unconscious-conscious), Animacy (perception of lifelikeness; 5 items, e.g., mechanical-organic), Likability (5 items, e.g., dislike-like), and Perceived Intelligence (5 items, e.g., incompetent-competent).

Person perception of the robot. Participants were also asked to evaluate the robot with regard to person perception on three dimensions (Likability, Intelligence, and Autonomy), indicating their agreement with 22 items on a 5-point Likert scale from "I do not agree at all" to "I fully agree." While the Godspeed Questionnaire aims at technology-specific or robot-specific evaluation dimensions (e.g., animacy and anthropomorphism), the person perception scale stems from social psychology research and has been used in prior work (cf. [33, 34]). The dimension Likability was measured with eleven items (likable, warm, cool (reversed), pleasant, approachable, similar to me, familiar, I can ask the robot for advice, trustworthy, I would work together with the robot). Intelligence was measured using five items (dumb (reversed), intelligent, competent, incompetent (reversed), knowledgeable). Autonomy contained six items (not autonomous (reversed), self-dependent, responsible for its actions, restricted in its abilities (reversed), free, self-determined).

Evaluation of the interaction. The general evaluation of the interaction was assessed by eight items that asked for the participants' sense of control during the interaction, their enjoyment of the interaction, and whether they would like to use a system like this for other tasks (cf. prior work [33, 34]), rated on a 5-point Likert scale.
2.3.3 Attributional Confidence We also wanted to know how confident participants were in their belief that they were correctly attributing the robot’s feelings, intentions, and values, or in other words, how well they believed
they knew the robot and could make predictions about its behavior. We used the attributional confidence scale [8], consisting of seven items rated on a 6-point Likert scale from "very unsure" to "very sure".
2.4 Results
Self-efficacy in HRI. To test whether participants differed in initial self-efficacy, an ANOVA with the dependent variables perceived self-efficacy and loss of control at measurement T1 was conducted; it revealed no significant differences between conditions (for means and standard deviations cf. Table 2). In addition, we estimated Bayes factors using Bayesian Information Criteria [39], comparing the fit of the data under the null hypothesis and the alternative hypothesis using R and the package BayesFactor by Richard D. Morey. The estimated Bayes factor (null/alternative) suggested that the data were 2.4 times more likely to occur under a model without an effect of experimental condition than under a model including this factor, i.e., in favor of the null hypothesis. To explore whether the training and the social interaction had a positive effect on participants' perceived self-efficacy in HRI, we conducted split-plot ANOVAs with the group factor experimental condition (non-training vs. training vs. control) and repeated measures for perceived self-efficacy; we did the same for loss of control. We found a main effect of the repeated measures on both scales, indicating that self-efficacy, but also loss of control, increased over the three measuring points (H1), and an interaction effect of the repeated measures with the experimental condition on both scales (perceived self-efficacy: F(38/2) = 7.32, p < .001, η² = .204; loss of control: F(38/2) = 2.61, p = .039, η² = .084; H2). To further explore the nature of the interaction effects, we conducted three separate ANOVAs on the two scales for the three experimental conditions, respectively. While the increase in self-efficacy was not significant for participants in the control group, neither at T2 after the information material nor at T3 after the social interaction, there were significant increases in perceived self-efficacy for participants in the training and in the non-training interaction groups. Participants reported significantly greater self-efficacy after the initial interaction session (either training or non-training) compared to the baseline, and significantly greater self-efficacy after the social interaction compared to the baseline (training & non-training; cf. Table 2). Contrary to our hypothesis regarding loss of control (H1), we found that participants in the control condition reported significantly higher loss of control at the end of the experiment compared to the baseline measure. Moreover, participants in the training group reported significantly higher loss of control directly after the training compared to the baseline measure (cf. Table 2).

Evaluation of the robot and the interaction. We conducted ANOVAs with experimental group as independent variable and the subscales of the Godspeed Questionnaire (Anthropomorphism, Animacy, Perceived Intelligence, Likability), the Person Perception Scale (Likability, Intelligence, Autonomy), and Attributional Confidence as dependent variables. The evaluation of the robot and the interaction did not differ significantly between conditions, except for the two Intelligence variables, where post hoc tests showed that the non-training group rated the robot as significantly more intelligent than the control group (Perceived Intelligence: p = .008; Intelligence: p = .003; cf. Table 3).

Influence of self-efficacy on evaluation. We assumed that changes in perceived self-efficacy predict evaluations of the robot (H3), with greater increases leading to more positive evaluations. To address this hypothesis, we calculated self-efficacy deltas by subtracting perceived self-efficacy scores at T1 from those at T3 and conducted regression analyses with the self-efficacy deltas as predictors and the following dependent variables: Godspeed Questionnaire, Person Perception Scale, Attributional Confidence, and evaluation of the interaction. We found significant regression models for the evaluation of the interaction, for Anthropomorphism, Animacy, Perceived Intelligence, and Likability (all Godspeed), and for Intelligence and Likability (Person Perception Scale), but not for Autonomy and Attributional Confidence (cf. Table 3).
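To make this analysis pipeline concrete, the sketch below reproduces its three main steps on simulated data: a split-plot (mixed) ANOVA over the repeated self-efficacy measurements, a BIC-based approximation of the Bayes factor for the baseline comparison, and the regression of an evaluation score on the T3-T1 self-efficacy delta. The Bayes factors reported above were computed in R with the BayesFactor package; the Python libraries used here (pingouin, statsmodels), the column names, and the simulated values are illustrative substitutes, not the authors' analysis code.

import numpy as np
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant and measurement point.
rng = np.random.default_rng(1)
n = 60
subjects = np.repeat(np.arange(n), 3)
time = np.tile(["T1", "T2", "T3"], n)
condition = np.repeat(rng.choice(["control", "non-training", "training"], size=n), 3)
se = rng.normal(3.5, 1.0, size=3 * n)
df = pd.DataFrame({"subject": subjects, "time": time,
                   "condition": condition, "se": se})

# 1) Split-plot (mixed) ANOVA: within factor time, between factor condition.
aov = pg.mixed_anova(data=df, dv="se", within="time",
                     between="condition", subject="subject")
print(aov[["Source", "F", "p-unc", "np2"]])

# 2) BIC-based Bayes factor (BF01) for the baseline check at T1:
#    compare an intercept-only model with a model including condition.
t1 = df[df["time"] == "T1"]
bic_null = smf.ols("se ~ 1", data=t1).fit().bic
bic_alt = smf.ols("se ~ C(condition)", data=t1).fit().bic
bf01 = np.exp((bic_alt - bic_null) / 2)   # > 1 favors the model without condition
print(f"BF01 (null over condition model): {bf01:.2f}")

# 3) Regression of an evaluation score on the self-efficacy delta (T3 - T1).
wide = df.pivot(index="subject", columns="time", values="se")
wide["se_delta"] = wide["T3"] - wide["T1"]
wide["likability"] = rng.normal(3.5, 0.8, size=n)   # hypothetical evaluation score
model = smf.ols("likability ~ se_delta", data=wide).fit()
print(model.summary().tables[1])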
Table 2: Repeated measures ANOVAs for experimental conditions with dependent factors perceived self-efficacy and loss of control; values are µ (SD) per measurement point

Study 1 (students, n=60)
Perceived self-efficacy
  total           T1 3.02 (.93)    T2 3.66 (.95)    T3 3.84 (1.06)   F=36.51   η²=.382
  control         T1 3.31 (.94)    T2 3.43 (1.12)   T3 3.55 (1.30)   F=1.19    η²=.059
  non-training    T1 2.70 (.83)    T2 3.79 (.73)    T3 4.05 (.92)    F=32.88   η²=.634
  training        T1 3.04 (.96)    T2 3.74 (.97)    T3 3.93 (.89)    F=21.06   η²=.526
Loss of control
  total           T1 2.53 (.97)    T2 2.81 (1.10)   T3 2.94 (1.47)   F=3.28    η²=.053
  control         T1 2.67 (1.07)   T2 2.75 (1.23)   T3 3.50 (1.67)   F=5.95    η²=.227
  non-training    T1 2.60 (1.06)   T2 2.68 (1.08)   T3 2.65 (1.34)   F=.04     η²=.002
  training        T1 2.33 (.75)    T2 3.00 (.99)    T3 2.67 (1.28)   F=3.92    η²=.171

Study 2 (seniors, n=60)
Perceived self-efficacy
  total           T1 3.46 (1.31)   T2 3.46 (1.43)   T3 3.70 (1.42)   F=3.95    η²=.063
  control         T1 3.58 (1.19)   T2 3.63 (1.61)   T3 3.98 (1.46)   F=3.72    η²=.164
  med.-custom.a   T1 3.43 (1.31)   T2 3.16 (1.16)   T3 3.45 (1.20)   F=1.74    η²=.084
  training        T1 3.37 (1.47)   T2 3.58 (1.49)   T3 3.69 (1.57)   F=1.46    η²=.071
Loss of control
  total           T1 3.07 (1.31)   T2 3.09 (1.35)   T3 3.09 (1.63)   F=.01     η²=.000
  control         T1 3.15 (1.25)   T2 2.70 (1.40)   T3 2.83 (1.83)   F=1.06    η²=.053
  med.-custom.    T1 2.80 (1.19)   T2 3.40 (1.25)   T3 2.92 (1.55)   F=1.59    η²=.077
  training        T1 3.25 (1.49)   T2 3.17 (1.37)   T3 3.53 (1.50)   F=.94     η²=.094

Note: all ANOVAs used Bonferroni correction for multiple comparisons; a) mediated-customization