P.W. Foos & P. Goolkasian: Presentation Format Effects
Experimental Psychology 2008; Vol. 55(4):215–227, DOI 10.1027/1618-3169.55.4.215
© 2008 Hogrefe & Huber Publishers
Presentation Format Effects in a Levels-of-Processing Task

Paul W. Foos and Paula Goolkasian
University of North Carolina, Charlotte, NC, USA

Abstract. Three experiments were conducted to examine the better long-term memory performance found when stimulus items are pictures or spoken words rather than printed words. Hypotheses regarding the allocation of attention to printed words, the semantic link between pictures and processing, and a rich long-term representation for pictures were tested. Using levels-of-processing tasks eliminated format effects when no memory test was expected and processing was deep (E1), and when study and test formats did not match (E3). Pictures produced superior performance when a memory test was expected (E1 & 2) and when study and test formats were the same (E3). Results of all experiments support the attention allocation model and indicate that picture superiority is due to more direct access to semantic processing and a richer visual code. General principles to guide the processing of stimulus information are discussed.

Keywords: format effects, recognition memory, levels of processing
One of the benefits of living in the information age is that we have the tools to present information in any number of formats and modalities. Which one we choose should be guided by our desire as scientists and educators to maximize the impact of the flow of that piece of information. Although the effect of presentation format has had a long and rich research history, it is somewhat fragmented; as a result, there are few general principles that psychological science can offer as a guide for efficient processing of stimulus information. The present research integrates previous work with picture/word (e.g., Kosslyn, 1980; Paivio, 1975) and auditory/visual (e.g., Greene, 1985; Penney, 1989) comparisons by exploring why printed words are not recalled as well as other presentation formats (e.g., pictures and spoken words; Foos & Goolkasian, 2005; Goolkasian & Foos, 2002). Three experiments extend the investigation of format effects beyond encoding processes and working memory by using a levels-of-processing (LoP) approach (e.g., Craik & Lockhart, 1972) to examine whether effects of presentation format remain in long-term memory even after participants have encoded the stimulus items to the same levels. Previous work that examined format differences in the visual modality found better memory for pictures than for words when explicit memory was tested (e.g., Goolkasian & Park, 1980; Kroll & Corrigan, 1981; Paivio, 1975, 1978; Pellegrino, Rosinski, Chiesi, & Siegel, 1977; Smith & Magee, 1980) and in tests of implicit memory which ask for semantic information (e.g., asking participants to name members of some category; e.g., McBride & Dosher, 2002; Wippich, Melzer, & Mecklenbräuker, 1998). Paivio’s dual coding theory (1975) and Nelson’s (1979) sensory-semantic model of encoding influenced this work and much of our later thinking with respect to picture-word differences. The differences obtained in these studies were thought to
be the result of encoding. Pictures may have dual codes (e.g., Paivio, 1975), more distinctive encoding, or more direct access to semantic coding (e.g., Nelson, 1979) in comparison to printed words. Still another model (Larkin & Simon, 1987) indicates that the picture-word difference lies in the way information is extracted from the different formats. Some features may be directly represented in a picture that must be inferred from a word. They suggest that picture-word differences lie in the efficiency of the search for information and differences in explicitness of the information. The emphasis in this approach is on the richness of the pictorial representation following encoding. Other work examined auditory-visual modality differences. Better memory for auditory presentation, particularly for the last several items in a list, has been found (referred to as the modality effect; e.g., Cowan, Saults, Elliot, & Moreno, 2002; Gardiner, Gardiner, & Gregg, 1983; Gathercole & Conway, 1988; Greene, 1985; Greene, Elliott, & Smith, 1988) and has been shown to consist of sustained superiority for these auditory items, as well as a brief advantage due to echoic storage that can be eliminated by other auditory input (i.e., the suffix effect; see Crowder, 1972; Goolkasian & Foos, 2002; Penney, 1989). The long-term modality effect takes place in long-term memory, and is found for both serial and free recall (Cowan et al., 2002; Gardiner et al., 1983; Greene & Crowder, 1986). Because the superiority of pictures over printed words involves encoding processes to some degree – although the exact nature of the difference in encoding is much debated – we decided to focus on working memory where encoding takes place. This work (Goolkasian & Foos, 2002) examined retrieval from working memory and compared all three types of presentation using a dual task. These studies found little influence of format on the processing task but strong effects on item recall.
Stimulus items presented as
pictures and as spoken words are recalled (and recognized) equally well or better than printed words. These results suggest that the previously found advantages of pictures over printed words and of spoken words (auditory presentation) over printed words may be due to the same underlying mechanism(s). We will refer to the finding of superior performance with pictures and spoken words over printed words as the presentation format effect, even though a change in modality is part of these findings. In several studies (Foos & Goolkasian, 2005) designed to uncover the source of these format effects, we varied the type and difficulty of the sentence verification task. These manipulations had no influence on the obtained format/modality differences. We hypothesized that these obtained differences (the presentation format effect) might lie in a disadvantage for printed words rather than different advantages for pictures (e.g., Nelson, 1979; Paivio, 1978) and spoken words (e.g., Penney, 1989). Posner, Nissen, and Klein (1976) provided some evidence for differential attention effects by modality. They found that visual signals are less alerting than stimuli in other modalities. To test this attention allocation hypothesis, we required participants to articulate each of three or six presented items to ensure some allocation of conscious attention to all items. Results showed that articulating printed words improved performance, while articulation had no influence on the recall of pictures and spoken words. In a second experiment we presented pictures, spoken words, and printed words under normal and under degraded conditions, assuming that degradation would force more conscious attention to underattended printed words but have little influence on well-attended pictures and spoken words.
Results again supported our attention allocation model, as degraded pictures and spoken words were recalled less well while degraded printed words, particularly in the higher memory load condition, were recalled at a much higher level. The recall disadvantage of printed words lies in the fact that ordinarily they are not given full conscious attention, and forcing attention to printed words improved performance and diminished format effects. The present experiments broaden the scope of our investigation by looking at presentation format effects in long-term memory and examining two complementary explanations for picture superiority (i.e., Nelson, 1979; Larkin & Simon, 1987). In Experiment 1 we use an LoP task to examine the three styles of presentation under conditions of shallow and deep processing. The question of main interest is whether effects of presentation format remain in long-term memory even after participants have been forced to encode stimulus items to varying levels of depth. In this experiment we also compare intentional versus incidental learning conditions, since one might expect greater conscious attention to encoding under intentional learning. Experiment 2 repeats some of these conditions in a recall experiment to rule out test format as a possible alternative explanation for improved performance with printed words. In both of these experiments Nelson’s (1979) model predicts better performance with pictures relative to verbally presented items. Finally, in Experiment 3, we force deep processing on items presented as pictures, spoken words, and printed words, and we test long-term recognition under conditions where the test items are presented in the same or a different format. If presentation format is an essential component in processing the stimulus item, that is, if format is encoded together with semantic information, then recognition of test items presented in a different format at study and test should be relatively low. Furthermore, a comparison of performance with same and different study-test formats allows us to examine the effect, if any, of the hypothesized rich, visual, long-term representation for pictures compared to semantic codes for all items processed at a deep level (e.g., Larkin & Simon, 1987).
Experiment 1

In Experiment 1 we investigate the presentation format effects in long-term memory. If we control the level of processing required in response to stimulus items, do we still get a format effect? According to the attention allocation hypothesis, printed words are at somewhat of a disadvantage because of attenuated conscious processing. If we control the processing level we control the amount of conscious encoding and might therefore eliminate format effects in retrieval from long-term memory. At the very least we should eliminate those components that result from differences in semantic coding. Furthermore, we manipulate levels of processing to direct the allocation of attention to meaning as well as to physical features of the presented items. LoP tasks (Hyde & Jenkins, 1969) have been used to show the importance of processing for later memory and the effects of relatively shallow and deep processing (e.g., Craik & Lockhart, 1972; Craik & Tulving, 1975; Craik, 2002). Although there is still considerable debate regarding independent measures of processing levels (e.g., Baddeley, 1978) and whether deep processing implies elaborative/distinctive encoding or organization (e.g., Mandler, 2002), these shallow and deep processing tasks have been a useful tool for occupying conscious attention and at least some of the processing given to incoming information (Lockhart, 2002; Watkins, 2002). These tasks seem then to be appropriate for testing the attention allocation model of format effects. If format effects are due, in part, to attenuated attention to the encoding of printed words, then the deep processing task should reduce such effects because enough conscious attention must be allocated to fulfill the task for each presented item, and that processing comprises most (in an intentional condition) or all (in an incidental condition) of the conscious encoding that takes place.
A major difference among the tasks used in levels-of-processing studies is that deep processing tasks require significantly more conscious processing of meaning than do shallow tasks. Several studies have proposed and found more conscious attentive processing for deep tasks compared to shallow tasks (e.g., Craik & Byrd, 1982; Eysenck, 1974; Eysenck & Eysenck, 1979; Johnston & Heinz, 1978). If deep tasks and semantic processing require more conscious attention, then one would expect an even greater reduction of format effects when deep tasks rather than shallow tasks are used. While the effects of levels-of-processing tasks on memory for words generally show superior memory for words processed at a deep level (e.g., Craik, 2002; Craik & Lockhart, 1972; Craik & Tulving, 1975), the effects of these tasks with pictures have been mixed. In many cases, particularly with faces as stimuli, deep processing has led to superior performance (e.g., Bower & Karlin, 1974; Konstantinou & Gardiner, 2005), while in other cases shallow, physical processing tasks have sometimes led to superior performance (e.g., Intraub & Nicklos, 1985). In sum, the next set of experiments manipulates both format of presented items and the level of processing required for those items. If format effects are due, in part, to an attenuation of attention to printed words, then those effects should be reduced when an attention-demanding processing task is required. Format effects should remain strong in our control condition where no levels-of-processing tasks are required and individuals are simply told to remember the presented items. If more attention must be devoted to accomplish deep than shallow processing, then format effects may be reduced more with a deep processing task than with a shallow processing task. There is, however, another possibility for items presented as pictures. Nelson’s (1979; Nelson, Reed, & McEvoy, 1977) sensory-semantic model of picture memory would argue that pictures are not just well attended (e.g., McBride & Dosher, 2002), but also have a more direct link with semantic processing.
Perhaps as a result of this more direct link or as a result of the rich visual detail present in pictures, the long-term representation of a picture is thought to be far more rich or distinctive than that for verbal items, whether spoken or printed (Larkin & Simon, 1987). Pictures are not named until after they are meaningfully identified, whereas words are named before meaning is accessed. This more direct semantic link coupled with a more distinctive visual code should make pictures more memorable than words, regardless of whether they are processed at a deep or shallow level, but that advantage might be expected to be greater when a shallow processing task is used. That is, with deep processing all items receive semantic coding, but with shallow processing only pictures can be certain of some semantic processing because of their direct link with such processing. Experiment 1 uses recognition memory to test predictions derived from attention allocation and sensory-semantic models. Stimulus items are presented during the study phase in one of three different conditions. For two of the conditions, participants perform a deep or a shallow processing task on each item presented and items are presented as pictures, spoken words, or printed words. In the incidental memory condition, participants respond to the items without any expectation that a test of memory will follow list presentation. The attention allocation model predicts that format effects will be present only in the shallow task because the requirement to process items semantically in the deep task will result in equivalent conscious processing for all presented items. Support for this model will be found if format effects interact with LoP task. However, the sensory-semantic model of picture memory predicts superior performance with pictures in comparison to the verbal formats. The picture superiority effect should be present even when items have received shallow processing because, in this case, semantic processing should occur for pictures but it is far less likely to occur for spoken or printed words. In the second condition (intentional learning), participants are informed that there will be a test of memory following list presentation. Because participants expect a memory test, predictions from both the attention allocation and the sensory-semantic model would be altered by the fact that intentional instructions should promote some semantic processing across all conditions. The attention allocation model again predicts a reduction of format effects to coincide with an increase in semantic processing; the sensory-semantic model would predict some decrease in the picture advantage, as all items, even those processed at a shallow level, are expected to receive some semantic processing since participants expect a test of memory. The third condition is a control condition in which participants study the mixed list of items for a later test. There are no levels-of-processing tasks during the study phase.
The attention allocation model (Foos & Goolkasian, 2005) predicts better performance with pictures and spoken words than with printed words, while the sensory-semantic model (Nelson, 1979) predicts superior performance with pictures over both forms of words. Reaction times (RTs) to the LoP tasks are also recorded in the intentional and incidental conditions in order to assess the amount of study time used for each level of the LoP task. It is expected that RTs to deep processing would be longer than to shallow processing, but the LoP difference in RT is not expected to vary by format.
Method

Participants

Participants were 71 men and women students drawn from the University of North Carolina, Charlotte, NC. They were volunteers who participated in the experiment to obtain credit in psychology. Fifty were randomly assigned to one of the two levels-of-processing groups (incidental and intentional processing) and 21 to a control group that was simply told to remember the presented items. Participation was restricted to those with normal (or corrected-to-normal) color vision.
Materials

The stimuli used in the levels-of-processing task were 60 items randomly selected from a pool of items used in prior work with presentation format effects (Foos & Goolkasian, 2005). The items appeared as either a picture, printed word, or spoken word. Three versions of the stimulus lists were developed so that, across participants, all 60 items appeared equally often as pictures, spoken words, and printed words. Each picture was imported into Adobe Photoshop and its size was adjusted to approximately 4 × 4 cm. A black border (.1 cm) was added to some of the pictures. The printed words were either uppercase or lowercase characters printed with a Geneva font in a character size of 24 points. Spoken words were sound files created by a human female or male voice as a Macintosh system sound file. The items were randomly assigned to either shallow or deep processing tasks. For both, an orienting question preceded the presentation of the item and participants answered by pressing the “F” (for “yes”) or “J” (for “no”) keys with their left or right index fingers. The orienting questions followed the guideline established by Intraub and Nicklos (1985) that shallow questions could be answered without reference to meaning and could be answered for meaningless stimuli. Shallow tasks always focused attention on some physical feature of the items. For pictures, the question asked was, “Is the item framed?” For spoken words the question was, “Is the item in a female voice?” For printed words the question was, “Is the word printed in uppercase?” The deep task for all item types was whether the item belonged to some category (e.g., “Is the item a type of food?” “Is the item found in a garden?”). For half of all items in each condition, the correct answer was yes and for the other half the correct answer was no. A list of the stimulus items is in the Appendix. All stimuli were displayed in the center of a 15-in (38.1 cm) Apple flat-screen monitor.
Stimulus presentation and data collection were controlled by SuperLab running on a Power Mac G4 computer. Memory was measured through written responses to a printed recognition test. The test contained a list of 120 item names presented alphabetically. The 60 old items that appeared in the levels-of-processing task were mixed together with 60 new items matched for concept frequency with the old items and selected from the same item pool. For each item, the participant responded with a yes/no to indicate whether the item had been presented, and indicated the degree of confidence in their answer by using a 3-point scale where 1 = not very confident, 2 = somewhat confident, and 3 = very confident.
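The rotation of items through formats across the three list versions can be sketched as follows. This is a minimal illustration of the counterbalancing logic described above, not the authors' materials-construction code; the placeholder item names and the modulo assignment scheme are our own assumptions (the actual items are listed in the Appendix).

```python
# Sketch of the counterbalancing scheme: three list versions rotate each
# of the 60 items through the three presentation formats, so that across
# participants every item appears equally often in each format.
# Item names are placeholders, not the actual stimulus pool.

FORMATS = ["picture", "spoken word", "printed word"]

def build_list_versions(items):
    """Return three list versions; item i in version v gets format (i + v) mod 3."""
    versions = []
    for version in range(3):
        version_list = [(item, FORMATS[(i + version) % 3])
                        for i, item in enumerate(items)]
        versions.append(version_list)
    return versions

items = [f"item{i}" for i in range(60)]
v0, v1, v2 = build_list_versions(items)

# Across the three versions, any given item appears once in each format:
formats_for_item0 = {version[0][1] for version in (v0, v1, v2)}
print(sorted(formats_for_item0))  # ['picture', 'printed word', 'spoken word']
```

Each version also contains 20 items per format, so format is balanced within a list as well as across participants.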
Procedure
All participants were randomly assigned to one of three conditions and run individually in sessions of around 30 min. In the incidental and intentional processing conditions, an orienting question appeared for 2 s followed by an item presented as a picture, spoken word, or printed word. Participants considered whether the item had a physical feature (shallow processing) or belonged to a semantic category (deep processing), and responded with a yes or no key press. The next trial started as soon as the participant had made a response. The procedure continued until 60 items were presented. Within the list of 60 items, there was random arrangement of shallow and deep items and presentation formats. There were five practice trials before the experimental trials. Following the levels-of-processing task, participants were asked to engage in a filler task for 2 min. They counted backwards by two out loud, starting with the number 99. Participants were then given a printed recognition test and asked to write yes or no to indicate whether or not the item appeared in the study phase task, and to indicate their confidence. The groups differed in their expectation for the memory test. The intentional processing group was instructed to study the items for a recognition test, but the incidental processing group was not and for them the test was a surprise. The third group was a control condition to study recognition memory for the items presented in varied formats without the levels-of-processing task. For these participants, items were presented without the orienting question for 1200 ms each. The exposure duration was determined by averaging response times to the levels-of-processing task in the other two conditions. We wanted to make sure that study time was equated across the three groups of participants. Participants were asked to study the items presented as pictures, printed words, or spoken words for a later memory test. After studying the serial list of 60 items, the participants engaged in the filler task and then took the recognition memory test.
The 60 items in the levels-of-processing task represented five replications of each presentation format (picture, spoken word, printed word) by levels of processing (shallow, deep) and by response type (yes, no) condition. Response times and accuracy were recorded on each trial of the levels-of-processing task. Hit and false alarm rates were calculated from the yes responses to the recognition test. Hit rates were calculated from the number of times the participants correctly identified an old item, and false alarm rate noted the number of times a new item was incorrectly labeled as old. To correct for guessing, recognition memory scores were computed by subtracting the proportion of false alarms from the proportion of hits. Confidence ratings were combined with the recognition responses to produce a 6-point scale where 6 = very confident yes, 5 = somewhat confident yes, 4 = not very confident yes, 3 = not very confident no, 2 = somewhat confident no, 1 = very confident no.
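The scoring described above can be summarized in a short sketch. This is our own illustration of the stated arithmetic (hits minus false alarms, and the folding of yes/no responses with 1–3 confidence ratings onto the 6-point scale), not the authors' analysis code; the function names and example counts are hypothetical.

```python
# Corrected recognition score: proportion of hits minus proportion of
# false alarms, as described in the text.

def recognition_score(hits, old_n, false_alarms, new_n):
    """Hit rate (hits / old items) minus false-alarm rate (FAs / new items)."""
    return hits / old_n - false_alarms / new_n

def combined_confidence(response_yes, confidence):
    """Map a yes/no response plus a 1-3 confidence rating onto the 6-point
    scale (6 = very confident yes ... 1 = very confident no).
    confidence: 1 = not very, 2 = somewhat, 3 = very confident."""
    return 3 + confidence if response_yes else 4 - confidence

# Hypothetical example: 45 hits out of 60 old items and 6 false alarms
# out of 60 new items gives .75 - .10 = .65.
score = recognition_score(45, 60, 6, 60)
print(round(score, 2))  # 0.65
```

Note that `combined_confidence(True, 3)` yields 6 (very confident yes) and `combined_confidence(False, 3)` yields 1 (very confident no), matching the scale endpoints in the text.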
Table 1. Mean (SD) recognition and confidence scores from Experiment 1

Condition               Recognition score   Confidence score

Incidental – Deep
  Picture               .69 (.17)           4.9 (.7)
  Spoken Word           .64 (.22)           4.7 (.8)
  Printed Word          .69 (.19)           5.0 (.8)

Incidental – Shallow
  Picture               .37 (.20)           3.5 (.9)
  Spoken Word           .15 (.20)           2.6 (.9)
  Printed Word          .23 (.20)           3.0 (.9)

Intentional – Deep
  Picture               .68 (.17)           5.3 (.6)
  Spoken Word           .55 (.25)           4.7 (.8)
  Printed Word          .54 (.23)           4.6 (.9)

Intentional – Shallow
  Picture               .35 (.18)           3.9 (.7)
  Spoken Word           .27 (.19)           3.6 (.8)
  Printed Word          .20 (.18)           3.3 (.8)

Control
  Picture               .64 (.18)           5.2 (.6)
  Spoken Word           .47 (.19)           4.4 (.7)
  Printed Word          .34 (.19)           3.8 (.9)

Note. Recognition scores were computed by subtracting the probability of false alarms from the probability of hits. Confidence was measured on a 6-point scale (1 = very confident no, 2 = somewhat confident no, 3 = not very confident no, 4 = not very confident yes, 5 = somewhat confident yes, 6 = very confident yes).
Results

Table 1 presents the means computed from the recognition memory scores and confidence scale data for each participant across the 10 items within each of the presentation format by level of processing conditions. Data from one participant in the intentional study condition were removed prior to analysis because the participant’s performance in the levels-of-processing task was at chance. Data from the remaining 49 participants were analyzed with a 2 × 3 × 2 ANOVA where study condition was between-subjects (i.e., intentional/incidental processing) and presentation format (i.e., pictures, spoken words, and printed words) and level of processing (i.e., deep and shallow) were within-subjects. Mean false alarm rates were also computed by averaging across the participants’ data in each of the study conditions.
Recognition Memory and Confidence Scores

The analyses on recognition memory and confidence scores were similar in showing that study condition was found to interact with presentation format and LoP, memory F(2, 94) = 3.73, p < .03, η2 = .07; confidence scores, F(2, 94) = 4.68, p < .01, η2 = .09. There were also significant two-way interactions of study condition with LoP, memory F(1, 47) = 7.05, p < .01, η2 = .13; confidence scores F(1, 47) = 8.62, p < .01, η2 = .15; and the interaction of condition by format was significant with the confidence score, F(2, 94) = 3.18, p < .05, η2 = .06; but not with the analysis on recognition memory scores, F(2, 94) = 2.33, p = .10. There were also significant main effects of presentation format, memory F(2, 94) = 15.04, p < .01, η2 = .25; confidence scores, F(2, 94) = 16.23, p < .01, η2 = .26, and LoP, memory F(1, 47) = 330.32, p < .01, η2 = .88; confidence scores, F(1, 47) = 280.34, p < .01, η2 = .86. However, there was no main effect of study condition, memory F < 1; confidence scores, F(1, 47) = 3.11, p < .08. To understand what was happening to recognition memory with these complex interactions, simple interactions of presentation format by LoP were conducted for each of the study conditions. Under incidental study, the effects of presentation format were found only when items were shallowly encoded. For those items, pictures were recognized more readily than either spoken or printed words. When items were deeply encoded, there were no differences evident across presentation formats. There were significant interaction effects of format by LoP for both memory F(2, 48) = 3.97, p = .027, η2 = .14, and confidence scores F(2, 48) = 5.33, p = .01, η2 = .18; and follow-up within-subject contrasts (at the p < .05 significance level) showed no format differences with deeply processed items and an advantage for pictures relative to the other format conditions with shallowly processed items.
The analysis on the incidental study condition also showed two significant main effects of presentation format, memory F(2, 48) = 7.31, p = .01, η2 = .23; confidence scores F(2, 48) = 9.62, p = .01, η2 = .29, and LoP, memory F(1, 24) = 198.16, p = .01, η2 = .89, and confidence scores F(1, 24) = 159.22, p = .01, η2 = .87. As expected, the main effect of LoP indicated that recognition memory was higher for deeply rather than shallowly processed items. In the intentional study condition, there were similar main effects but no interaction effect. Pictures were recognized better than the other two formats, memory F(2, 46) = 10.36, p = .01, η2 = .32; confidence scores F(2, 46) = 9.68, p = .01, η2 = .29, and recognition was higher for deeply encoded items, memory F(1, 23) = 134.53, p = .01, η2 = .86; confidence scores F(1, 23) = 124.89, p = .01, η2 = .84. The interaction was not significant, F values < 1. The data from the control condition were also treated with a repeated measures analysis; these data show a strong effect of presentation format, memory F(2, 40) = 24.92, p = .01, η2 = .55; confidence scores F(2, 40) = 34.50, p = .01, η2 = .63. Follow-up within-subject contrasts (at the p < .05 level) showed that pictures (.64) were remembered better than words, and spoken words (.47) better than printed words (.34).
Reaction Times

An analysis of the reaction time data from the correct responses to the LoP task showed no differences between the incidental and intentional study groups, F < 1. As expected, processing times were significantly quicker with a shallow (961.34 ms) than with a deep processing task (1134.91 ms), F(1, 47) = 49.36, p < .0001, η2 = .51. There was also a main effect of format, F(2, 94) = 108.46, p < .0001, η2 = .70. Processing times were significantly quicker with spoken words (818.60 ms) than with pictures (1100.22 ms) or printed words (1225.54 ms). No significant effect of response type (yes vs. no) was obtained, F < 1. Also analyzed were the proportions of incorrect responses in the levels-of-processing task. On average errors occurred 3% of the time and did not differ as a function of study group, F < 1, format, F(2, 94) = 1.15, p = .32, or LoP condition, F(1, 47) = 2.42, p = .13. The only significant effect was a main effect of response type, F(1, 47) = 8.14, p = .01. There were more errors associated with yes than with no responses; the means were .05 vs. .03, respectively.
Discussion

Since the results obtained for confidence scores and recognition memory are consistent (with one minor exception), we will focus our discussion on memory. As expected, under the control condition both pictures and spoken words produced superior performance to printed words. This replicates previous work on picture superiority and long-term modality effects. However, when individuals were directed to process each presented item at a deep or shallow level, the advantage of pictures and spoken words over printed words was greatly reduced and, in the incidental/deep processing condition, entirely eliminated. These results provide strong support for the attention allocation model (Foos & Goolkasian, 2005). When participants’ attention is fully focused on semantically processing the stimulus item, the item’s format does not influence recognition from long-term memory. A small effect of format is found in the intentional learning condition when participants are aware that a memory test will occur; however, the format effect is limited to a picture advantage. Differences are not found for recognition of material presented in spoken and printed word formats. The fact that pictures produced superior performance in both intentional learning conditions and in the incidental shallow processing condition also supports the sensory-semantic model of picture memory (Nelson, 1979). The longer RTs in the deep task than in the shallow task confirmed our basic assumption about the deep task requiring more processing than the shallow task, but it also suggests that participants had more time to study the items and that the differences in study time may have partially explained the differences in recognition between the two LoP tasks. However, the focus of our effort was not on the differences between memory for deep versus shallow items, but rather the effect of presentation format and its interaction with LoP tasks. There was a format effect obtained with the RT data, but it was not consistent with either the memory or the confidence data, and it could not have explained the picture advantage. RTs were faster to spoken words than to pictures and printed words most likely because completion of the processing tasks could begin slightly sooner with auditory presentation. It was not necessary to listen to the entire sound file before responding. While the present findings support the attention allocation model of format effects and Nelson’s sensory-semantic model of picture superiority, there is a need to see if a similar pattern of findings occurs when the test format is changed. A number of recent studies suggest that printed words are coded orthographically as well as semantically and phonologically (e.g., Cleary & Greene, 2002; Gallo, McDermott, Percer, & Roediger, 2001). This orthographic coding has been shown to improve memory for printed words when a visual recognition test is used (e.g., Cleary & Greene, 2002; Gallo et al., 2001; Kellogg, 2001). Perhaps the improved recall of printed words in the present experiment is due, in part, to orthographic coding for printed words and use of a visual recognition test of memory. Experiment 2 is designed to test this hypothesis by changing the test format to eliminate any advantage attributable to orthographic coding.
Experiment 2

In the present experiment, participants were run under the intentional and the control conditions from Experiment 1, with the same procedure except that an auditory recall task replaced the written recognition test. We were interested in whether the type of memory test would influence the presentation format effects that were so evident in the previous findings. Predictions from the attention allocation and sensory-semantic models are the same as in the previous experiment. We did not include the incidental learning condition because, without any expectation of a memory test, recall would have been so low that it may have masked any format differences that were present in the data.
Method

The participants were 59 men and women students drawn from the same participant pool as the previous experiment. Twenty-seven were in the intentional study condition and 32 participated in the control condition. The study materials were the same 60 items used for both conditions in Experiment 1. In our intentional condition, participants were asked to respond to the LoP item on every trial and to remember the items for a memory test. There were 10 items for each of the LoP by format conditions. In the control condition, participants were just told to study each item for a memory test. There were 20 items for each of the format conditions. As in the previous experiment, each item was presented for 1200 ms. After the last item on each study list was presented, participants counted backward by twos from 99 for 2 min. They were then asked to verbally recall as many of the presented words as they could while the experimenter marked their answers on a scoring sheet. Minimum recall time was 5 min.
Results

Table 2 presents the mean proportion of items recalled for each of the groups. The means were computed from each participant's data across the items within each of the conditions. In the intentional study condition, recall data were analyzed with a 3 × 2 ANOVA where Format (i.e., pictures, spoken words, and printed words) and Level of Processing (i.e., deep and shallow) were within subjects. Significantly more items presented as pictures (.27) were recalled than items presented as spoken (.14) or printed (.18) words, F(2, 50) = 12.43, p < .01, η2 = .33. Deep processing led to higher recall (.28) than shallow processing (.12), F(1, 25) = 87.79, p < .01, η2 = .77, and processing level interacted with format condition, F(2, 50) = 6.80, p < .01, η2 = .21. The picture advantage was more evident with deeply processed items than with shallow items. However, consistent with the results of Experiment 1, there were no differences between long-term memory for spoken and printed words. For the control group, there was only a significant effect of format, F(2, 62) = 30.58, p < .01, η2 = .50. Pictures (.37) were recalled at a higher rate than spoken (.24) or printed words (.22).

Table 2. Mean (SD) proportion of items recalled in Experiment 2

Condition               Mean   SD
Intentional – Deep
  Picture               .40    .15
  Spoken Word           .21    .13
  Printed Word          .22    .15
Intentional – Shallow
  Picture               .15    .13
  Spoken Word           .08    .07
  Printed Word          .12    .09
Control
  Picture               .37    .17
  Spoken Word           .24    .12
  Printed Word          .22    .16

The analysis of the RT data showed that spoken words (755 ms) were again processed significantly faster than pictures (1059 ms) or printed words (1154 ms), F(2, 52) = 53.69, p < .0001, η2 = .67. Items processed at a shallow level (943 ms) were processed more rapidly than those at a deep level (1035 ms), F(1, 26) = 10.83, p = .003, η2 = .29. No main effect of response type (i.e., yes or no) was obtained, F(1, 26) = 1.18, p = .29, and none of the interactions were significant. An analysis of the incorrect responses on the LoP task showed that no main effects of format or levels of processing were obtained, F(2, 52) = 1.38, p = .26 and F(1, 26) = 3.47, p = .07, respectively. Significantly more errors were made with yes (.06) than no (.02) responses, F(1, 26) = 18.25, p < .0001. None of the interactions were significant.
Discussion

When a LoP task is required and conscious attention is thereby directed to semantic processing irrespective of presentation format, memory differences between the verbal formats disappear. The only lingering format effects are associated with pictures. The previously obtained reduction (i.e., Experiment 1) cannot be attributed to a test format that offered some advantage to words since, in the present experiment, any such advantage was eliminated by the use of an auditory recall task. The present results offer some modest support for the attention allocation model of format effects, because differences attributable to presentation format are dramatically reduced when conscious attention to semantic encoding is controlled; however, that support is limited by the better recall of pictures under deep processing. The fact that pictures were again remembered better than words under both shallow and deep processing levels provides strong support for the sensory-semantic model of picture superiority.
Experiment 3

In Experiment 3 we again used the levels-of-processing task, but this time the questions required only deep processing and half of the old items presented in the recognition test were in a changed presentation format. As before, we were interested in the degree to which presentation format effects are present in long-term memory and the degree to which format effects are eliminated when conscious processing is the same for all items. When engaged in the levels-of-processing task under incidental instructions, participants are focused on semantically processing the stimulus information. Since they do not expect a memory test, the only task at hand is to answer the question preceding the presentation of the item (i.e., "Is the item found in a garden?" – "Is the item a type of animal?"). Under conditions like these, the attention allocation model predicts diminished format effects, as was found in our Experiment 1. When, however, learning is intentional
and/or processing is shallow (as in some conditions of Experiments 1 and 2), pictures still result in higher performance. We have taken this as support for Nelson's model of picture processing. In the present experiment we attempt to determine whether the long-term representation for pictures contributes to this advantage. We were interested in whether participants would be able to recognize an item as having been presented in the study phase even when it appeared in a changed format at test. Participants were not informed about the memory test and they were not informed about the change in presentation format. Since the instructions for the deep processing task ask participants to focus on processing the items semantically, there is no need to retain any information about presentation format. If only semantic information is preserved, then we would expect to find no differences due to whether the study and test formats are the same or different. In both cases, the semantic information remains the same. If, however, format information is automatically processed and stored, as Intraub and Nicklos (1985) have suggested for pictures, then presenting the same format at study and test should result in better performance because the long-term representation of format would match the item's format at test. One would have physical as well as semantic information to assist performance. If this automatic coding of format occurs for verbal items as well as pictures, then performance should always be better when the same format is used at study and test. If, on the other hand, format information is only coded for pictures, then only pictures should show better performance when formats are the same at study and test. The present study also examines the richness of the visual and semantic codes for pictures compared to verbal items.
If the long-term visual representation of pictures is richer than that for verbal items, as suggested by Larkin and Simon (1987), then one would expect to find picture superiority when study and test formats are the same. Under this condition, the less rich codes for verbal items, whether spoken or printed, should be similar; and if format information is present in the long-term representation, then the richness of the stored visual code should result in picture superiority. Such a finding would be in distinct contrast to the attention allocation model, which predicts that there is a common mechanism underlying the better performance of pictures and spoken words in comparison to printed words. In terms of format effects, the attention allocation model predicts these effects will be diminished because conscious attention is directed to the processing task and no test of memory is expected. With respect to whether study and test formats are the same or different, the model predicts no differences unless format information is stored and picture representations in long-term memory are richer than verbal representations. These are, of course, the primary questions addressed in the present study.
Method

Participants were 27 men and women students drawn from the same participant pool as the previous experiments. The study materials were the same 60 items used in Experiments 1 and 2, and the item list consisted of 20 pictures, 20 spoken words, and 20 printed words. Three versions of the stimulus list were developed to counterbalance item format across participants. As in the previous experiments, an orienting question preceded the item for 2 s and required deep processing (e.g., "Is the item a type of food?" – "Is the item found in a garden?"). Half of all items in each format condition required a yes response and the other half a no response. The recognition memory test consisted of 120 items – 60 old and 60 new items. Half of the old items in each of the format conditions were presented in the same format at study and test, and the remaining half of the items appeared in a different format. When an item appeared in a different format, an effort was made to change the format an equal number of times into each of the two remaining formats so that, across all of the old items appearing with a different format, there were equal numbers of pictures, spoken words, and printed words. The new items were matched in concept frequency with the old items and selected from the same item pool. One third of the new items were pictures, one third spoken words, and one third printed words. The test items were presented in a different random order for each of the participants. Recognition test items appeared one at a time on the screen and participants used the number keypad to respond to the question "Was this item presented in the study list?" For the spoken items, the sound file played while the response scale was shown on the screen. Figure 1 shows an example of a picture item with the response scale used by the participants. All participants were run individually in a single session of around 20 min. They studied each of the items under incidental learning instructions. Participants considered whether the item belonged to a semantic category and responded with a key press. There were five practice trials before the experimental trials. Following the levels-of-processing task, participants were asked to engage in the filler task of counting backwards for 2 min. After that they took the computerized recognition test with 120 items. In all other respects the procedure was the same as in the previous LoP studies.

Figure 1. Sample recognition test item used in Experiment 3.

Figure 2. Mean proportion of yes responses for the interaction of presentation format by test item condition. The error bars represent 95% confidence intervals.
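The counterbalancing constraints described in the Method (60 old and 60 new test items; within each format, half of the old items keep their study format at test while the other half are switched, with switches spread evenly over the two remaining formats) can be sketched as follows. This is an illustrative reconstruction, not the authors' software; the function and item names are hypothetical.

```python
import random

FORMATS = ["picture", "spoken", "printed"]

def build_test_list(old_items, new_items, seed=0):
    """Sketch of the Experiment 3 test-list construction.

    old_items / new_items: lists of (name, format) tuples,
    20 items per format in each list (illustrative assumption).
    Returns (name, test_format, item_type) tuples in random order.
    """
    rng = random.Random(seed)
    test = []
    for fmt in FORMATS:
        studied = [it for it in old_items if it[1] == fmt]
        rng.shuffle(studied)
        half = len(studied) // 2
        # First half: same format at study and test.
        for name, f in studied[:half]:
            test.append((name, f, "same"))
        # Second half: alternate between the two remaining formats,
        # so each receives an equal share of the switched items.
        others = [f for f in FORMATS if f != fmt]
        for i, (name, _) in enumerate(studied[half:]):
            test.append((name, others[i % 2], "different"))
    # New (unstudied) items keep their assigned format.
    test += [(name, fmt, "new") for name, fmt in new_items]
    rng.shuffle(test)  # a different random order per participant
    return test
```

With 20 old and 20 new items per format, this yields 120 test items, and each format appears equally often among the switched items, matching the design described above.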
Results

Mean proportions of yes responses to each of the conditions of the recognition test are presented in Figure 2. To correct for guessing, mean recognition memory scores were calculated by subtracting the proportion of false alarms from the proportion of hits. Hit rates were calculated for each participant across the 10 old items within each of the presentation format by same/different conditions. Means were also computed from the confidence scale data. The false alarm rates were taken across the 20 items within each of the presentation format conditions. A repeated measures analysis was conducted separately on the recognition memory scores and the confidence scale data. The analysis on the confidence scores included three item types (same/different/new), while the analysis on the memory scores included two (same/different). Results of both analyses were the same. There were significant main effects of presentation format, memory F(2, 52) = 6.89, p < .01, η2 = .21; confidence scores, F(2, 52) = 6.84, p < .01, η2 = .21; and same/different test item, memory F(1, 26) = 32.12, p < .01, η2 = .55; confidence scores, F(2, 52) = 120.57, p < .01, η2 = .82. The analyses also showed significant interaction effects of presentation format and test conditions, memory F(2, 52) = 8.25, p < .01, η2 = .24; confidence scores, F(4, 104) = 4.96, p < .01, η2 = .16. When corrected for guessing, recognition of old items presented in the same format at study and test (.73) was significantly higher than recognition of old items presented in a different format (.46). And the old items with the changed format were recognized more often than new items (.09; this is, of course, a false alarm rate). Follow-up within-subject contrasts (at the p < .05 level of significance) showed that the pattern of format differences varied across these item types. As can be seen in Figure 2, format effects are totally absent when the test format is different from the study format. When, however, study and test formats are the same, a distinct advantage for pictures over the two verbal formats is evident. The recognition rate is highest with pictures in the same format compared to all other conditions. Since this study was primarily interested in responses to the old items presented in different formats, we calculated mean recognition scores and confidence scale data for each of the six possible study-test change conditions. These conditions are identified in the first column of Table 3 together with the old items presented in the same format. When the nine conditions were analyzed with a single factor repeated measures design, significant differences were found among the conditions, memory F(8, 208) = 14.84, p < .01; confidence scores, F(8, 208) = 14.77, p < .01. Recognition memory and confidence scale data showed that changes between spoken and printed words were not as disruptive to recognition as changes to, or from, pictures. Recognition scores fell around 11% (from .67 to .56) when verbal formats were switched. However, when the change involved pictures, the fall in recognition was 30% (from .73 to .43). Follow-up within-subject contrasts (at the p < .05 level of significance) showed that these drops in recognition were significantly different from each other and from the recognition score for the old items presented in the same format. RT responses to the study items took on average 1354 ms and there were no significant differences as a function of presentation format of the study items, F < 1. Participants made an average of 2% errors in the LoP task and these errors were not found to vary by presentation format of the items, F(2, 52) = 1.60, p = .21.

Table 3. Mean (SD) recognition and confidence scores from Experiment 3

Condition                Recognition score   Confidence score
Old – Same
  Pictures               .85 (.16)           5.6 (0.5)
  Spoken Word            .66 (.20)           5.0 (0.8)
  Printed Word           .68 (.18)           5.0 (0.7)
Old – Different
  Picture to Spoken      .46 (.30)           4.1 (1.2)
  Picture to Printed     .48 (.33)           4.2 (1.3)
  Spoken to Pictures     .34 (.34)           3.9 (1.4)
  Spoken to Printed      .54 (.29)           4.5 (1.1)
  Printed to Pictures    .39 (.33)           4.0 (1.2)
  Printed to Spoken      .58 (.27)           4.5 (1.0)

Note. Recognition scores were computed by subtracting the probability of false alarms from the probability of hits. Confidence was measured on a 6-point scale (1 = not very confident, 2 = somewhat confident, 3 = very confident, 4 = not very confident, 5 = somewhat confident, 6 = very confident).
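The correction for guessing, and the drop for switched verbal formats reported above, can be reproduced from the Table 3 means. The snippet below is an illustrative sketch, not the authors' analysis code; the example hit and false-alarm counts are hypothetical.

```python
def corrected_recognition(hits, false_alarms, n_old, n_new):
    """Correction for guessing: P(hit) minus P(false alarm)."""
    return hits / n_old - false_alarms / n_new

# Hypothetical counts: 17 hits out of 20 old items,
# 2 false alarms out of 20 new items -> 0.85 - 0.10 = 0.75
score = corrected_recognition(17, 2, 20, 20)

# Verbal items from Table 3: same-format vs. switched-format means
verbal_same = (0.66 + 0.68) / 2       # spoken and printed, format unchanged
verbal_switched = (0.54 + 0.58) / 2   # spoken->printed and printed->spoken
drop = verbal_same - verbal_switched  # the .67 to .56 (11-point) drop
```

Averaging the two same-format verbal cells gives .67, the two verbal-switch cells give .56, matching the drop described in the text.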
Discussion

These findings show that presentation formats are not irrelevant for recognition from long-term memory. Recognition memory scores and confidence scores were significantly higher when old items appeared in the same format in the study and test phases. On average, recognition dropped 27% when old items appeared in a different format. Since an incidental learning task was used, these format results suggest some automatic coding of format for spoken and printed words and not just for pictures (Intraub & Nicklos, 1985). The fact that recognition and confidence scores were higher for old items presented in picture format provides support for those who believe that pictures are more richly coded (e.g., Larkin & Simon, 1987). Moreover, it made a difference whether the change in format at test involved a switch between the two verbal formats or between picture and verbal formats. Changes to or from pictures were more noticeable than changes between the spoken and printed words. The two verbal representations are more similar to one another than either is to the picture code. This finding in particular refutes the notion raised in our previous work (Foos & Goolkasian, 2005) that there is one mechanism that underlies the distinct advantage that pictures and spoken words have over printed words. Quite obviously, the picture advantage, unlike the auditory advantage, cannot be diminished by controlling the level of processing. Apart from this, however, support for the attention allocation model (Foos & Goolkasian, 2005) is found in these results by replicating the prior finding (Experiment 1) of an elimination of format effects with deep processing under incidental learning. When individuals could only use semantic features, because test and study formats differed, format effects were totally eliminated. When study and test formats were the same and participants could use physical as well as semantic features, pictures produced superior performance. This finding reveals a limitation of the attention allocation model. It appears that the allocation of attention to semantic processing eliminates format effects, but when physical processing plays a role at encoding (as in the shallow processing task in Experiment 1) or at retrieval (as in the present experiment), pictures have an advantage over both spoken and printed words.
General Discussion

When taken together, these findings show that varying the manner of encoding by using LoP tasks and instructions for incidental/intentional learning can have significant effects on the pictorial advantage and the advantage of spoken over written words. The pictorial advantage was consistently obtained in all three experiments, while the advantage of spoken over written words was eliminated when participants were asked to process the stimulus items semantically and no memory test was expected. Additionally, the effects were obtained with both visual and auditory tests of recognition and recall. These findings contribute theoretically by providing support for sensory-semantic models to explain picture processing and the attention allocation hypothesis for verbal material. In contrast to our earlier prediction, however, both theories are necessary to explain presentation format effects on memory. The attention allocation hypothesis is not sufficient by itself to explain the picture advantage in long-term memory. On a more practical level, these findings help to identify some principles to guide efficient processing of and memory for stimulus information. When a LoP approach was used to control the level of processing required, we found evidence for a strong although not exclusive role of attention to conscious processing underlying effects of presentation format. In Experiment 1, when participants were required to semantically process the stimulus items with no expectation of a memory test (incidental study), long-term recognition was comparable in spite of presentation format differences in study items. This finding supports the attention allocation hypothesis, because, by controlling the processing level across stimulus formats, we eliminated the attenuated processing that typically disadvantages the recall of printed words. As a result, format effects were removed. In the control condition, when there was no attempt to control processing, printed words were again found to be at some disadvantage relative to both pictures and spoken words. A general principle is that focused processing can diminish format effects, and when that processing is semantic, format effects can be eliminated. An effect of presentation format is found when participants are aware that a memory test will follow. The awareness of a memory test improves recognition for items presented as pictures compared to both spoken and printed words. The picture advantage can be explained by the sensory-semantic model of picture memory (Nelson, 1979; Nelson et al., 1977), because pictures are not just well attended, but also have a more direct link with semantic processing and a richer encoding (Larkin & Simon, 1987). The more direct semantic link together with a more distinct visual code made pictures more memorable than words, regardless of whether they were processed at a deep or shallow level, and also regardless of whether the memory test was recognition (Experiment 1) or recall (Experiment 2). A second general principle is that pictures are remembered better when individuals expect a memory test and/or are occupied by a shallow processing task. The effect of presentation format on long-term memory was particularly evident in the findings from Experiment 3 when we compared recognition and confidence scores for items presented in the same and different formats during study and test phases of the experiment.
Recognition of all items presented in the same format was significantly better than recognition of items presented in a changed format. This finding provides some evidence that information about presentation format is retained together with semantic information, and that this automatic retention of format occurs for spoken and printed words in addition to pictures (Intraub & Nicklos, 1985). Superior performance with pictures was, however, only obtained when the same format was used at study and test. This suggests a richer coding for pictures (Larkin & Simon, 1987). A third general principle is that performance is best when study and test formats match. This is, of course, similar to the principle of transfer appropriate processing (Morris, Bransford, & Franks, 1977), but is here shown to be a stronger effect for pictures than for verbal items. We attribute this stronger effect to the richer coding of pictures. These findings are limited by the nature of the stimulus materials and the tasks used in these experiments. Results may be quite different when more complex material is presented or other formats are used (e.g., tactile). Results may also vary if participants are asked to retain the information for periods of time that exceed one experimental session.
Acknowledgments

These experiments were partially supported by an NSF grant (Award #SES-0552160). Thanks are due to Amina Saadaoui, Taylor Grayson, and Teneya Mormon for their assistance with data collection and data analysis.
References

Baddeley, A.D. (1978). The trouble with levels: A reexamination of Craik and Lockhart's framework for memory research. Psychological Review, 85, 139–152.
Bower, G.H., & Karlin, M.B. (1974). Depth of processing pictures of faces and recognition memory. Journal of Experimental Psychology, 103, 751–757.
Cleary, A.M., & Greene, R.L. (2002). Paradoxical effects of presentation modality on false memory. Memory, 10, 55–61.
Cowan, N., Saults, J.S., Elliott, E.M., & Moreno, M. (2002). Deconfounding serial recall. Journal of Memory and Language, 46, 153–177.
Craik, F.I.M. (2002). Levels of processing: Past, present, and future. Memory, 10, 305–318.
Craik, F.I.M., & Byrd, M. (1982). Aging and cognitive deficits: The role of attentional resources. In F.I.M. Craik & S. Trehub (Eds.), Aging and cognitive processes (pp. 261–280). New York: Plenum.
Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.
Craik, F.I.M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 268–294.
Crowder, R.G. (1972). Visual and auditory memory. In J.F. Kavanaugh & I.G. Mattingly (Eds.), Language by ear and by eye: The relations between speech and learning to read (pp. 251–275). Cambridge, MA: MIT Press.
Eysenck, M.W. (1974). Age differences in incidental learning. Developmental Psychology, 10, 936–941.
Eysenck, M.W., & Eysenck, M.C. (1979). Processing depth, elaboration of encoding, memory stores, and extended processing capacity. Journal of Experimental Psychology: Human Learning and Memory, 5, 472–484.
Foos, P.W., & Goolkasian, P. (2005). Presentation format effects in working memory: The role of attention. Memory and Cognition, 33, 499–513.
Gallo, D.A., McDermott, K.B., Percer, J.M., & Roediger, H.L., III (2001). Modality effects in false recall and false recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 339–353.
Gardiner, J.M., Gardiner, M.M., & Gregg, V.H. (1983). The auditory recency advantage in longer term free recall is not enhanced by recalling prerecency items first. Memory and Cognition, 11, 616–620.
Gathercole, S.E., & Conway, M.A. (1988). Exploring long-term modality effects: Vocalization leads to best retention. Memory and Cognition, 16, 110–119.
Goolkasian, P., & Foos, P.W. (2002). Presentation format and its effects on working memory. Memory and Cognition, 30, 1096–1105.
Goolkasian, P., & Park, D.C. (1980). Processing of visually presented clock times. Journal of Experimental Psychology: Human Perception and Performance, 6, 707–717.
Greene, R.L. (1985). Constraints on the long-term modality effect. Journal of Memory and Language, 24, 526–541.
Greene, R.L., & Crowder, R.G. (1986). Recency effects in delayed recall of mouthed stimuli. Memory and Cognition, 14, 355–360.
Greene, R.L., Elliott, C.L., & Smith, M.D. (1988). When do interleaved suffixes improve recall? Journal of Memory and Language, 27, 560–571.
Hyde, T.S., & Jenkins, J.J. (1969). The differential effects of incidental tasks on the organization of recall of a list of highly associated words. Journal of Experimental Psychology, 82, 472–481.
Intraub, H. (1984). Conceptual masking: The effects of subsequent visual events on memory for pictures. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 115–125.
Intraub, H., & Nicklos, S. (1985). Levels of processing and picture memory: The physical superiority effect. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 284–298.
Johnston, W.A., & Heinz, S.P. (1978). Flexibility and capacity demands of attention. Journal of Experimental Psychology: General, 107, 420–435.
Kellogg, R.T. (2001). Presentation modality and mode of recall in verbal false memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 913–919.
Konstantinou, I., & Gardiner, J.M. (2005). Conscious control and memory awareness when recognizing famous faces. Memory, 13, 449–457.
Kosslyn, S.M. (1980). Image and mind. Cambridge, MA: Harvard University Press.
Kroll, J.F., & Corrigan, A. (1981). Strategies in sentence-picture verification: The effect of an unexpected picture. Journal of Verbal Learning and Verbal Behavior, 20, 515–531.
Larkin, J.H., & Simon, H.A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11, 65–99.
Lockhart, R.S. (2002). Levels of processing, transfer-appropriate processing, and the concept of robust encoding. Memory, 10, 397–403.
Mandler, G. (2002). Organization: What levels of processing are levels of. Memory, 10, 333–338.
McBride, D.M., & Dosher, B.A. (2002). A comparison of conscious and automatic memory processes for picture and word stimuli: A process dissociation analysis. Consciousness and Cognition, 11, 423–460.
Morris, C.D., Bransford, J.D., & Franks, J.J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16, 519–533.
Nelson, D.L. (1979). Remembering pictures and words: Appearance, significance, and name. In L.S. Cermak & F.I.M. Craik (Eds.), Levels of processing in human memory (pp. 45–76). Hillsdale, NJ: Erlbaum.
Nelson, D.L., Reed, V.S., & McEvoy, C. (1977). Encoding strategy and sensory and semantic interference. Memory and Cognition, 5, 462–467.
Paivio, A. (1975). Perceptual comparisons through the mind's eye. Memory and Cognition, 3, 635–647.
Paivio, A. (1978). A dual coding approach to perception and cognition. In H.L. Pick & E. Saltzman (Eds.), Modes of perceiving and processing information (pp. 39–51). Hillsdale, NJ: Erlbaum.
Pellegrino, J.W., Rosinski, R.R., Chiesi, H.L., & Siegel, A. (1977). Picture-word differences in decision latency: An analysis of single and dual memory models. Memory and Cognition, 5, 383–396.
Penney, C.G. (1989). Modality effects and the structure of short-term verbal memory. Memory and Cognition, 17, 398–422.
Posner, M.I., Nissen, M.J., & Klein, R.M. (1976). Visual dominance: An information-processing account of its origins and significance. Psychological Review, 83, 157–171.
Smith, M.C., & Magee, L.E. (1980). Tracing the time course of picture-word processing. Journal of Experimental Psychology: General, 109, 373–392.
Watkins, M.J. (2002). Limits and province of levels of processing: Considerations of a construct. Memory, 10, 339–343.
Wippich, W., Melzer, A., & Mecklenbräuker, S. (1998). Picture or word superiority effects in implicit memory: Levels of processing, attention, and retrieval constraints. Swiss Journal of Psychology, 57, 33–46.
Received February 8, 2007 Revision received April 3, 2007 Accepted April 3, 2007
Paula Goolkasian Professor of Psychology and Director of Cognitive Science 9201 University City Blvd UNC Charlotte Charlotte, NC 28223 USA Tel. +1 704 687-4749 E-mail
[email protected]
Appendix

Stimulus items

Accordion Alligator Apple Bat Belt Bike Blender Broom Button Camera Candle Cannon Car Carrot Cat Celery Chair Cheese Cherries Clock Dish Domino Doorknob Ear Egg Finger Fish Giraffe Glasses Globe
Gloves Grapes Hanger Harp Helmet Hook Kettle Lion Lobster Maze Microscope Nurse Owl Panda Parrot Pencil Penguin Pig Refrigerator Sandwich Scissors Snowman Stamp Toaster Tractor Wagon Whale Wheel Windmill Shovel