Altering the Number of Targets During Multiple Object Tracking
Justin Ericson and James Christensen
Air Force Research Laboratory, 711th HPW, RHCP

Background
• Pylyshyn and Annan (2006) found that participants had difficulty inhibiting automatically selected targets from a pre-cued set of tracked objects
• Wolfe, Place, and Horowitz (2007) used a paradigm described as Multiple Object Juggling (MOJ), in which objects in the tracked set were added or removed throughout the duration of a trial, rather than manipulating precues
• Over the course of the entire trial, they found little to no performance cost associated with MOJ
Design
[Figure: trial timeline, Cue (5s) → Motion (10s) → Tone → Motion (10s) → Response, with Offload (4 → 3) and Upload (3 → 4) variants signaled by the tone]
Method
• 10 naïve participants drawn from personnel currently employed at Wright-Patterson Air Force Base
• Task was implemented in MATLAB using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997)
• Participants received instruction in basic task performance as well as the offload and upload conditions
• 16 trials per participant of 6 trial types, randomized:
  o Long (20s) 3- and 4-dot trials
  o Short (10s) 3- and 4-dot trials
  o 3 with upload to 4 (3+); 4 with offload to 3 (4-)
• The display subtended 20.6° of visual angle, with each dot subtending 1.2°
• Dots moved in linear trajectories at approximately 10° per second, occluding each other when passing and repulsing off the white frame border
• Photometrically equiluminant red and green color cues, paired with a tone, were used to direct adding or subtracting a dot
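The dot kinematics described above (linear motion at roughly 10°/s with repulsion off the frame border) can be sketched as follows. This is an illustrative Python reconstruction, not the authors' MATLAB/Psychtoolbox code; the 60 Hz update rate is an assumption, while the frame size and speed are taken from the poster.

```python
import math
import random

FRAME = 20.6   # display size in degrees of visual angle (from the poster)
SPEED = 10.0   # dot speed in degrees per second (from the poster)
DT = 1 / 60    # assumed 60 Hz update rate (frame rate not stated on the poster)

def make_dot(rng):
    """Random starting position inside the frame and a random heading."""
    theta = rng.uniform(0, 2 * math.pi)
    pos = (rng.uniform(0, FRAME), rng.uniform(0, FRAME))
    vel = (SPEED * math.cos(theta), SPEED * math.sin(theta))
    return pos, vel

def step(pos, vel):
    """Advance a dot one frame on a linear trajectory, reflecting off the border."""
    x, y = pos[0] + vel[0] * DT, pos[1] + vel[1] * DT
    vx, vy = vel
    if x < 0.0 or x > FRAME:   # bounce off the left/right edge
        vx, x = -vx, min(max(x, 0.0), FRAME)
    if y < 0.0 or y > FRAME:   # bounce off the top/bottom edge
        vy, y = -vy, min(max(y, 0.0), FRAME)
    return (x, y), (vx, vy)
```

Since dots occlude rather than collide, no dot-to-dot interaction is needed; only the border reflection constrains the trajectories.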
Results
[Figure: Offloading to 3 dots vs. constant 3 dots. Number of dots tracked (0 to 4) plotted against trial length (0s, 10s, 20s) for constant 3 dots, 4-, and Model 4-]
[Figure: Uploading a 4th dot vs. constant 4 dots. Number of dots tracked (0 to 4) plotted against trial length (0s, 10s, 20s) for constant 4 dots, 3+, and Model 3+]
Current Research
• Does modifying the tracked set during tracking result in performance differences, specifically a deficit when inhibiting a previously selected target?
• Separate trials in which objects are added and removed, in order to look for differences
• Control for sequential effects in the analysis by comparing actual performance to predictions based on the assumption of no switch cost
Model Analysis
• Assumptions: performance is a decreasing function of time, number of dots tracked, and individual differences
• Use short (10s) trials and long (20s) unmodified trials to predict performance on upload and offload trials, via slope estimates for each half of long trials
• Model each individual based on their data from the other conditions; also include sequential effects (i.e., the probability of offloading a dot that had already been lost vs. one still being tracked)
• Model predicts 3+ quite well, with slope and intercept not significantly different from 1 and 0 (3+: y = 1.1223x - 0.5493, R² = 0.759); the lower load in the first half benefits performance
• Excellent R² for 4-, but the model overpredicts performance: there is a penalty for switching in 4- (4-: y = 0.4028x + 1.1431, R² = 0.9114)
• The model is not just scaling individual differences: R² drops by .2-.4 when correlating long 3-dot trials with 4- and long 4-dot trials with 3+
[Figure: Actual Data (y-axis) vs. Model Prediction (x-axis), 0 to 4.5, with fitted regression lines for 3+ and 4-]
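The actual-vs.-predicted comparison can be illustrated with a small ordinary-least-squares sketch (Python rather than the authors' MATLAB, and the numbers below are hypothetical placeholders, not the poster's data). A constant switch penalty produces exactly the diagnostic pattern described for 4-: an excellent R², yet systematic overprediction by the no-switch-cost model.

```python
def ols(x, y):
    """Ordinary least squares fit y = slope * x + intercept, plus R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical scores (NOT the poster's data): actual performance sits a
# constant 0.4 dots below the no-switch-cost prediction.
predicted = [2.0, 2.5, 3.0, 3.5, 4.0]
actual = [p - 0.4 for p in predicted]
slope, intercept, r2 = ols(predicted, actual)
# slope ~ 1.0, intercept ~ -0.4, R^2 ~ 1.0: an excellent fit in which the
# model still systematically overpredicts, i.e. a switch cost
```

A slope/intercept pair significantly different from 1/0, or a consistent offset like the one above, indicates that modifying the tracked set carries a cost beyond what unmodified tracking predicts.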
Results
• Offloading and uploading result in opposite effects when compared to unmodified tracking
• This may explain the null result found with MOJ
• A simple model predicts performance well for upload but not for offload
Discussion
• Our results showed a drop in performance when removing a target from the tracked set
• Over the long term, the effects of adding and dropping dots may cancel out, leading to results such as those of Wolfe et al. (2007)
• Similar to Pylyshyn and Annan (2006), we demonstrate that inhibiting an attended item (cued by popout features) is more effortful than adding an object to the tracked set
• Future directions include testing variable-length trials to fill out performance curves, and using eye tracking to investigate disruptions of scan strategy due to the add/drop cues
References
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433-436.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437-442.
Pylyshyn, Z. W., & Annan, V. (2006). Dynamics of target selection in multiple object tracking. Spatial Vision, 19, 485-504.
Wolfe, J. M., Place, S. S., & Horowitz, T. S. (2007). Multiple object juggling: Changing what is tracked during extended multiple object tracking. Psychonomic Bulletin & Review, 14, 344-349.
Acknowledgements
Flight Psychophysiology Laboratory Research Team – Justin Estepp, Iris Davis, Jason Monnin, Bill Miller, and Glenn Wilson
Funding provided by: Consortium Research Fellows Program – 4214 King Street, Alexandria VA 22302; Air Force Research Laboratory – 711th HPW, Human Effectiveness Directorate