Applied Cognitive Psychology
Appl. Cognit. Psychol. 28: 789–798 (2014)
Published online 13 August 2014 in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/acp.3061

Using Real-World and Standardized Spatial Imagery Tasks: Convergence, Imagery Realism, and Gender Differences

VIRGINIA A. DIEHL*
Psychology Department, Western Illinois University, Macomb, USA

*Correspondence to: Virginia A. Diehl, Psychology Department, Western Illinois University, 1 University Circle, Macomb, IL 61455, USA. E-mail: [email protected]

Summary: Can realistic spatial scenarios be used to measure spatial mental imagery? Can people accurately evaluate their spatial imagery? Does gender moderate the relationship between performance on realistic spatial imagery tasks and ratings of imagery realism? Forty-two female and 31 male undergraduates first rated the realism of their images after reading spatial scenarios based on actual spatial tasks. In Phase 2, they solved closely related spatial scenario problems and then completed the VZ-2 Paper Folding test. Performance on realistic spatial scenarios predicted performance on the VZ-2. Men's evaluation of their spatial imagery realism predicted their actual spatial performance, but women's did not. More task experience was positively related to more realistic images and higher scores on the VZ-2. These results were generally consistent with those found with more artificial stimulus materials, but also demonstrated the importance of considering gender differences in spatial problem strategy and/or rating scale interpretation. Copyright © 2014 John Wiley & Sons, Ltd.

Spatial imagery involves mentally performing spatial tasks, such as determining an object's location or rotating objects in space (Thompson, Slotnick, Burrage, & Kosslyn, 2009). Spatial mental imagery assessment tools that are relevant to the physical world are needed to ensure that research findings on spatial imagery have applicability; one of the goals of the present study was to increase the ecological validity of spatial tasks. In addition, it is important to know the extent to which we are able to evaluate the quality of our spatial imagery, so we can assess the extent to which our predictions of our spatial performance are accurate. The present study was designed to investigate these issues as they apply to men and women.

Different types of spatial tests have been shown to be related to each other, but most studies have used abstract stimuli for their spatial assessments. Linn and Petersen (1985), in a review of the literature, found evidence for several types of spatial abilities: spatial perception (using one's body as the basis for assessing the spatial relationships between given things), mental rotation (speed and accuracy of imagining the rotation of a two- or three-dimensional object), and spatial visualization (involving complex tasks). Dean and Morris (2003) used the Comprehensive Ability Battery Spatial Test (CAB-S; Hakstian & Cattell, 1976) and the Vandenberg tests of mental rotation (Vandenberg & Kuse, 1978) and found the tests to be positively correlated with each other. Burton (2003) demonstrated similar results using three standardized visualization tests (Paper Form Board [VZ-1], Paper Folding [VZ-2], and Surface Development [VZ-3]; Ekstrom, French, Harman, & Dermen, 1976) and the CAB-S and Vandenberg tests. Borst and Kosslyn (2010) showed that spatial test performance (on standard spatial tasks and the spatial subscale of the Object-Spatial Imagery Questionnaire [OSIQ; Blajenkova, Kozhevnikov, & Motes, 2006]) correlated with accuracy in an image scanning task, which required subjects to memorize a pattern of dots and then, without the dots present, answer questions
about whether an arrow shown on the display would have been pointing at one of the dots shown earlier.

Can 'real-world' spatial scenarios be used to assess spatial imagery ability? Demonstrating positive correlations between realistic spatial scenarios and established standardized measures would be a first step toward answering this question. The studies described earlier used many different types of spatial tasks, but none of them had a direct correspondence to everyday spatial activities, and they therefore lacked ecological validity. MacIntyre, Moran, Collet, and Guillot (2013) have noted that tasks in neuroscience research are often chosen for their simplicity, not their correspondence to everyday experience, and this is true for many other fields as well. In the present study, we constructed spatial 'scenarios' with two characteristics: They described spatial problems with which people had limited experience, so they could not rely on long-term memory to solve them, but at the same time, they corresponded to everyday spatial tasks. One of the goals of this study was to see whether the positive relationships that have been found with abstract, fundamental, or 'impoverished' tasks would generalize to the current study's more realistic (and admittedly messier) spatial situations. We expected that people who performed well on the experimental spatial scenarios would also do well on the standardized visualization task (the VZ-2).

Is performance on a visual task related to the vividness of a person's mental imagery? Pearson, Rademaker, and Tong (2011) used a methodology involving binocular rivalry to investigate the relationship between the self-reported vividness of visual imagery (when imagining a similarly oriented visual image) and the extent of bias exhibited in the perception of a subsequently presented ambiguous display. When participants reported a vivid image, they were more likely to perceive the subsequent visual display in a manner consistent with their image. This suggests that people's self-reports of the quality of their visual images can be used to predict performance on a related task. We tested this in the present study using spatial tasks by asking participants to judge the realism of the images they generated (i.e., how similar their image was to perception) when asked to imagine spatial scenarios
and by assessing their performance on closely related spatial tasks. We expected a positive relationship between imagery realism and spatial task performance.

Are the relationships among the spatial imagery variables different for men and women? Vecchi and colleagues (e.g., Bosco, Longoni, & Vecchi, 2004; Vecchi & Girelli, 1998) distinguished between active and passive spatial tasks. Passive tasks require only memorization (e.g., remembering which cells in a 5 × 5 matrix had a dot appear in them), whereas active tasks involve manipulating or transforming visuo-spatial information (e.g., mental rotation tasks). They found minimal gender differences in performance on the passive tasks but found that differences increased as the tasks required more manipulation (i.e., became more active). Of the 10 scenarios used in the present study, all but one were at least somewhat active, but the extent to which they were active (i.e., how much they required the manipulation and integration of elements) varied from one scenario to another (see Table 1), so the present study explored this question of gender differences in mental spatial task performance. Voyer, Voyer, and Bryden (1995), in their meta-analysis of gender differences in spatial performance, reported that the reviewed studies found no significant gender differences on the VZ-2, although there was a small but consistent effect size associated with it, with men having the advantage. The VZ-2, although it involves some mental rotation, is classified as a visualization task, a category in which gender effect sizes tend to be smaller. Therefore, in the present study, we did not expect significant gender effects on the VZ-2.

METHOD

Participants
The participants were 73 undergraduate students (42 women) from a Midwestern university who participated as part of a course requirement. In addition, 14 students served as pilot subjects; we clarified the wording of two of the scenarios as a result of their comments.

Materials and procedure
E-Prime was used to present the stimulus materials and to record responses. Participants were given an introduction to the study and then filled out a paper consent form. One or two participants were run in a session.

During Phase 1 (Self-Rating), participants were asked to imagine a presented spatial scenario, such as juggling fruit. They then rated how realistic their image was (Realism) and how often they had experienced a similar situation (Experience). A third rating scale (Point of View) was included but is not discussed in this paper. The rating scales for the two measures are presented in Appendix A. The subjects received a practice rating scenario and then 10 experimental rating scenarios (see Appendix B for a listing of the practice and experimental items). The experimental items were presented in random order, and four of them (Dance, Blocks, Bus, and Rubik's Cube) were illustrated with related graphics (see Appendix C-1 for an example). The remaining six were composed only of text.

During Phase 2 (Performance), participants were asked to imagine scenarios that were slightly modified from those used in Phase 1 (e.g., the fruit was distributed differently for the juggling scenario). They then answered a multiple-choice question regarding the end state that resulted from the transformation described in the spatial scenario. Again, there was one practice performance scenario, followed by a practice performance question, and 10 experimental performance scenarios, each followed by its corresponding experimental performance question. Four of the items (Dance, Blocks, Bus, and Rubik's Cube) had accompanying graphics, just as in Phase 1 (see Appendices C-2 and C-3). Cronbach's alpha was computed to determine the consistency with which participants answered the Phase 2 items correctly; it was .29, which indicates poor inter-item consistency (a computation of this kind is sketched below).¹

When the participants completed Phase 2, they worked on the VZ-2 Paper Folding test (Ekstrom et al., 1976), which served as the standardized test of spatial imagery ability. Describing a study that used a sample of college students, Ekstrom et al. (1976) reported the test-retest reliability of the VZ-2 to be .84. Each problem of this test shows a piece of paper that has been folded, with a hole punched through all of the layers. This is followed by five unfolded pieces with holes located in different positions. The subject is to choose the option that corresponds to the punched holes. Participants were given the test on paper and recorded their answers on a Scantron sheet. They were given 3 minutes to work on each of the two pages of the VZ-2. The study took about 30 minutes to complete.
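The inter-item consistency statistic above can be computed directly from the matrix of scored answers. The following is a minimal sketch, not the study's analysis code: it assumes a hypothetical participants × items array coded 1 for a correct Phase 2 answer and 0 otherwise, and the function name and demo data are illustrative only.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]                         # number of items (10 scenarios here)
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item across participants
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Stand-in data shaped like the study: 73 participants x 10 binary-scored items.
rng = np.random.default_rng(0)
demo = (rng.random((73, 10)) < 0.55).astype(int)
print(round(cronbach_alpha(demo), 2))
```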

Measures
During Phase 1, participants gave responses to each of the rating scales for each of the rating scenarios. The time the participants spent processing the Phase 1 (Self-Rating) scenarios in milliseconds (Phase 1 Scenario Time) and their responses to the rating scales were recorded. As can be seen in Appendix A, their ratings of the realism of their images (Realism, i.e., how similar their image was to perception) and the frequency with which they had experienced a similar event (Experience) were measured on a scale of 0–4, with 4 indicating extremely realistic and very frequently, respectively. For Phase 2 (Performance), the time participants took to process each performance scenario (Phase 2 Scenario Time) and to respond to the performance questions (Phase 2 Question Time) was recorded, as well as the multiple-choice answers they gave to the performance questions (Phase 2 Correct). Finally, the number correct on the VZ-2 (VZ-2 Correct) served as a standardized measure of spatial imagery ability.

¹ Cronbach's alpha was recalculated, using only 7 of the 10 items, and was equal to .45, which is still low. The analyses were redone using these seven items; the results were the same with two exceptions that are described in the Results section.

RESULTS
Table 1 shows the means and standard deviations for each of the variables broken down by the 10 scenarios. For the three …


Table 1. Means and standard deviations for each of the variables for each scenario

                             Cut Paper (1)          Paper II (2)           Paint Drip (3)         Square (4)             Car (5)
Variable                     M      SD   Words(a)   M      SD   Words(a)   M      SD   Words(a)   M      SD   Words(a)   M      SD   Words(a)
Phase 1 Scenario Time (b)    23.81  10.12  30       17.70   6.49  19       36.90  14.10  81       20.69   6.67  38       22.98   9.40  24
Time/Words (c)                0.79   0.34            0.93   0.34            0.46   0.17            0.54   0.18            0.96   0.39
Phase 2 Scenario Time (b)    19.38   8.55  31       12.64   4.74  16       24.99  12.76  80       15.26   6.38  36       13.80   5.84  24
Time/Words (c)                0.63   0.28            0.79   0.30            0.31   0.16            0.42   0.18            0.57   0.24
Phase 2 Question Time (b)     9.92   5.36  19       11.20   6.83  16       15.04  11.23  24       12.20   5.86  19        9.74   7.26  17
Time/Words (c)                0.52   0.28            0.70   0.43            0.63   0.47            0.64   0.31            0.57   0.43
Experience (d)                1.47   1.13            2.04   1.14            0.58   1.00            1.49   1.41            1.58   1.42
Realism (d)                   2.64   1.11            3.32   0.78            2.12   1.31            3.14   0.96            2.16   1.18
Phase 2 Correct (e)           0.23   0.43            0.78   0.42            0.37   0.49            0.68   0.47            0.78   0.42
VZ-2 Correct (f)              9.90   3.48

                             Juggling (6)           Dance (7)              Blocks (8)             Bus (9)                Rubik's Cube (10)
Variable                     M      SD   Words(a)   M      SD   Words(a)   M      SD   Words(a)   M      SD   Words(a)   M      SD   Words(a)
Phase 1 Scenario Time (b)    26.11   9.66  56       31.01   9.80  66       38.66  13.46  66       40.40  13.53  70       21.77   8.05  14
Time/Words (c)                0.47   0.17            0.47   0.15            0.59   0.20            0.58   0.19            1.56   0.57
Phase 2 Scenario Time (b)    24.16  10.85  56       22.41  10.37  60       28.72  11.95  57       24.31  10.14  74       12.59   7.65  14
Time/Words (c)                0.43   0.19            0.37   0.17            0.50   0.21            0.33   0.14            0.90   0.55
Phase 2 Question Time (b)     7.56   3.41  14        5.77   3.59  13       13.55   5.77   9       30.13  13.26  39       10.91   7.93  20
Time/Words (c)                0.54   0.24            0.44   0.28            1.51   0.61            0.77   0.34            0.55   0.40
Experience (d)                0.86   1.06            1.41   1.29            0.86   0.98            1.48   1.40            1.67   1.29
Realism (d)                   2.71   0.82            2.67   1.16            2.26   1.16            2.85   1.11            2.79   0.94
Phase 2 Correct (e)           0.44   0.50            0.66   0.49            0.44   0.50            0.49   0.50            0.63   0.49

Note: For Phase 1 Scenario Time, N = 71. For VZ-2, N = 72. For all other measures, N = 73.
(a) Number of words in the scenario or multiple-choice question.
(b) Recorded in milliseconds but here presented in seconds for space considerations.
(c) Time (in seconds) divided by number of words in scenario/question.
(d) On a scale of 0–4, where 4 indicates extremely realistically or very frequently for Realism and Experience, respectively.
(e) Proportion of subjects who had that item correct.
(f) Measured once; not specific to individual scenarios. Scale is 0–20 correct.
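As footnote (c) defines, each Time/Words entry is the corresponding mean time divided by the word count of that scenario or question. A worked check against the Cut Paper column above, rounding to two decimals:

\[
\frac{23.81\ \text{s}}{30\ \text{words}} \approx 0.79\ \text{s/word}
\]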


Table 2. Summary of intercorrelations for scores on the measures collapsed across item

Measure      1      2      3
1. P1ST      —
2. P2ST      .46*   —
3. P2QT      .19    .40*   —
4. Exp       .10    .05    .08
5. Real      .01    .21    .21
6. P2C       .16    .25
7. VZ-2      .11    .02
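An intercorrelation table like Table 2 can be produced with a single pairwise Pearson pass over the participant-level measures. The sketch below is illustrative rather than the study's analysis code: the DataFrame, its column names (taken from the table's abbreviations), and the random stand-in values are assumptions.

```python
import numpy as np
import pandas as pd

# One row per participant; columns follow Table 2's abbreviations.
# The normally distributed values are stand-ins for the real measures.
cols = ["P1ST", "P2ST", "P2QT", "Exp", "Real", "P2C", "VZ-2"]
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(73, len(cols))), columns=cols)

# 7 x 7 matrix of pairwise Pearson correlations, as in Table 2.
corr = df.corr(method="pearson")
print(corr.round(2))
```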