Eye-Movements Search for Comprehension during Bridging Inference Generation in Wordless Visual Sequential Narratives
John Hutson¹, Joseph Magliano², and Lester Loschky¹
Kansas State University¹; Northern Illinois University²

INTRODUCTION
• What guides gaze during sequential narrative inference generation?
• According to the Scene Perception & Event Comprehension Theory (SPECT) (Loschky et al., 2014, 2015, 2016), viewers' front-end attentional selection in scenes can be influenced by:
  – Front-end stimulus features (e.g., saliency maps; Itti & Koch, 2000)
  – Back-end goal-driven executive processes (e.g., tasks) (Yarbus, 1967; DeAngelus & Pelz, 2009; Hutson et al., 2016)
  – Back-end event model processes (e.g., mapping incoming information to the event model) (Loschky et al., 2015; Hutson et al., 2016; Foulsham, Wybrow, & Cohn, 2016)
• Comprehension guides gaze during reading (for review, see Rayner, 1998)
• Inference generation increases picture viewing time in sequential narratives (Magliano, Larson, Higgs, & Loschky, 2015; Cohn & Wittenberg, 2015)
• In visual narrative viewing, only weak evidence that back-end event models influence front-end attentional selection (Loschky et al., 2015; Hutson et al., 2016; Foulsham, Wybrow, & Cohn, 2016)
DATA COLLECTION
N = 77
Manipulation: Bridging-Event Present or Absent
Procedure: Press a button to move through the picture story at your own pace while eyes are tracked
• Recall the narrative after each story
Analyses: Multilevel models (Participant and Image random effects)
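The multilevel-model analysis above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code: the data are simulated, the column names are hypothetical, and `statsmodels` is assumed as the modeling library. It fits viewing time with a fixed effect of the Bridging-Event manipulation, a participant random intercept, and an image variance component.

```python
# Sketch of a multilevel model with Participant and Image random effects.
# Simulated data; column names are hypothetical, not from the poster.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_img = 77, 24  # N = 77 participants; 24 images per story

df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_img),
    "image": np.tile(np.arange(n_img), n_subj),
    "bridging_absent": rng.integers(0, 2, n_subj * n_img),
})
# Simulate longer viewing times when the bridging event is absent.
df["viewing_time"] = 1500 + 300 * df["bridging_absent"] \
    + rng.normal(0, 200, len(df))

# Participant random intercepts via groups=; image random effects
# as a variance component (a fully crossed structure in statsmodels).
model = smf.mixedlm(
    "viewing_time ~ bridging_absent", df,
    groups=df["participant"],
    vc_formula={"image": "0 + C(image)"},
)
result = model.fit()
print(result.summary())
```

In practice one model of this form would be fit per dependent measure (viewing time, fixation durations, number of fixations), which is how the three t-statistics in the results panels arise.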
RESULTS 1
[Bar graph: Viewing Time (ms) at End-State and End-State +1, Bridging-Event Absent vs. Present]
ALTERNATIVE COMPETING HYPOTHESES
Will back-end event model processes (inference generation) guide attentional selection in sequential visual narratives?
• Computational Load Hypothesis: During inference generation, eye-movement locations are driven by bottom-up saliency, and fixation durations are longer due to higher computational load.
• Visual Search Hypothesis: During inference generation, eye-movement locations are driven by a search for inference-relevant information, producing more fixations.
DESIGN
Stimulus: Boy, Dog, Frog (Mayer, 1967, 1969, 1973, 1974, 1975; Mayer & Mayer, 1971)
• 6 stories (counterbalanced); 24-26 images per story; 4 target episodes per story
• Complete Target Episode: Beginning-State, Bridging-Event, End-State
• Bridging-Event Absent (Inference Needed): target episode presented without the Bridging-Event image
[Bar graphs: Fixation Durations (ms) and Number of Fixations at End-State and End-State +1, Bridging-Event Absent vs. Present]
Fixation Durations: R² = .32; Bridging-Event: t(3484) = -0.20, p = .841; Bridging-Event × End-State: t(3484) = 0.40, p = .693
Viewing Time: R² = .56; Bridging-Event: t(3484) = 7.63, p < .001; Bridging-Event × End-State: t(3484) = 4.48, p < .001
• Replication of Magliano et al. (2015): Inference generation increased viewing time
• No effect of inference generation on fixation durations
• Inference generation increased number of fixations
Given that inference generation increased the number of fixations, did back-end event model processes also affect front-end attentional selection (fixation locations)?
SCENE REGION INFERENCE GENERATION INFORMATIVENESS ANALYSIS
Correlation of eye-movement fixation density difference heat maps with click (informativeness) heat maps:
• For each image, heat maps created for fixation density and click data
• Bootstrapped correlations run for each image (1,000 iterations)
• Correlation mean and 95% confidence interval calculated
• Differences found between fixation density heat maps by condition
• Shuffle Control: procedure repeated with randomly paired click & fixation maps
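A plausible sketch of the bootstrapped heat-map correlation, including the shuffle control. This is illustrative only: the heat maps are simulated arrays, the function name is hypothetical, and pixel-level resampling is one possible bootstrap scheme (the poster does not specify exactly what was resampled).

```python
# Bootstrapped correlation between two heat maps (e.g., a click
# "informativeness" map and a fixation density map). Simulated data.
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_corr(map_a, map_b, n_boot=1000):
    """Correlate two flattened heat maps, resampling pixels with
    replacement to get a mean correlation and a 95% CI."""
    a, b = map_a.ravel(), map_b.ravel()
    idx = np.arange(a.size)
    rs = []
    for _ in range(n_boot):
        s = rng.choice(idx, size=idx.size, replace=True)
        rs.append(np.corrcoef(a[s], b[s])[0, 1])
    rs = np.asarray(rs)
    return rs.mean(), np.percentile(rs, [2.5, 97.5])

# Simulated example: a click map and a correlated fixation map.
click_map = rng.random((48, 64))
fixation_map = 0.7 * click_map + 0.3 * rng.random((48, 64))
mean_r, ci = bootstrap_corr(click_map, fixation_map)

# Shuffle control: pair the click map with an unrelated fixation map,
# mimicking the randomly re-paired click & fixation maps above.
shuffle_map = rng.random((48, 64))
mean_r_shuffle, ci_shuffle = bootstrap_corr(click_map, shuffle_map)
```

The shuffle control gives a baseline correlation expected from map structure alone, so a real click-fixation pairing must exceed its confidence interval to count as evidence of inference-guided looking.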
RESULTS 2
Heat Map Correlations
Number of Fixations: R² = .55; Bridging-Event: t(3567) = 6.57, p < .001; Bridging-Event × End-State: t(3567) = 4.43, p < .001
INFORMATIVENESS CLICK MAPS
N = 42
Participants were told about the bridging-event manipulation
Task: For each target episode, click on areas of the end-state scene informative for making the inference IF the bridging event was absent
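Turning those clicks into a click (informativeness) heat map can be sketched as below. This is a minimal assumption-laden illustration: clicks are (x, y) pixel coordinates, the smoothing bandwidth is an arbitrary choice, and `scipy` is assumed available; the poster does not describe its map-construction details.

```python
# Build a click heat map by accumulating clicks on a pixel grid and
# Gaussian-smoothing it. Coordinates and sigma are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def click_heat_map(clicks, height, width, sigma=25.0):
    """Accumulate (x, y) clicks into a density grid, smooth it,
    and normalize to a 0-1 range."""
    grid = np.zeros((height, width))
    for x, y in clicks:
        grid[int(y), int(x)] += 1.0
    smooth = gaussian_filter(grid, sigma=sigma)
    return smooth / smooth.max() if smooth.max() > 0 else smooth

# Two clicks near one region, one elsewhere (hypothetical coordinates).
clicks = [(100, 80), (105, 82), (300, 200)]
heat = click_heat_map(clicks, height=480, width=640)
```

Fixation density maps are typically built the same way from fixation coordinates, which is what makes the map-to-map correlation in the analysis above meaningful.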
[Bar graph: End-State correlations (bootstrap) between click maps and fixation density maps for Bridging-Event Absent, Bridging-Event Present, Shuffle Absent, and Shuffle Present conditions; Error Bars = 95% CI]
• Click heat maps showed higher correlations with Bridging-Event Absent fixation density heat maps
• The inference generation process resulted in eye movements to inference-informative scene locations
GENERAL DISCUSSION
• Study tested the role of the back-end event model in front-end attentional selection during wordless visual sequential narratives
• Increased viewing time for generating inferences (Magliano et al., 2015) is due to making 22% more fixations, NOT longer fixation durations
• Extra fixations for making inferences go to regions informative for generating the inference
• Strong support for the Visual Search Hypothesis: inference generation in the back-end event model influenced front-end attentional selection