Does Perceptual Learning Require Consciousness or Attention?

Julia D. I. Meuwese, Ruben A. G. Post, H. Steven Scholte, and Victor A. F. Lamme

Abstract ■ It has been proposed that visual attention and consciousness are separate [Koch, C., & Tsuchiya, N. Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16–22, 2007] and possibly even orthogonal processes [Lamme, V. A. F. Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12–18, 2003]. Attention and consciousness converge when conscious visual percepts are attended and hence become available for conscious report. In such a view, a lack of reportability can have two causes: the absence of attention or the absence of a conscious percept. This raises an important question in the field of perceptual learning. It is known that learning can occur in the absence of reportability [Gutnisky, D. A., Hansen, B. J., Iliescu, B. F., & Dragoi, V. Attention alters visual plasticity during exposure-based learning. Current Biology, 19, 555–560, 2009; Seitz, A. R., Kim, D., & Watanabe, T. Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700–707, 2009; Seitz, A. R., & Watanabe, T. Is subliminal learning really passive? Nature, 422, 36, 2003; Watanabe, T., Náñez, J. E., & Sasaki, Y. Perceptual learning without perception. Nature, 413, 844–848, 2001], but it is unclear which of the two ingredients—consciousness or attention—is not necessary for learning. We presented textured figure-ground stimuli and manipulated reportability either by masking (which only interferes with consciousness) or with an inattention paradigm (which only interferes with attention). During the second session (24 hr later), learning was assessed neurally and behaviorally, via differences in figure-ground ERPs and via a detection task. Behavioral and neural learning effects were found for stimuli presented in the inattention paradigm and not for masked stimuli. Interestingly, the behavioral learning effect only became apparent when performance feedback was given on the task to measure learning, suggesting that the memory trace that is formed during inattention is latent until accessed. The results suggest that learning requires consciousness, and not attention, and further strengthen the idea that consciousness is separate from attention. ■

University of Amsterdam © 2013 Massachusetts Institute of Technology

INTRODUCTION

Attention and awareness are intimately related processes, as we are generally conscious of stimuli that are in the focus of our attention. It has long been thought that these processes are identical or at least indistinguishable (De Brigard & Prinz, 2010; Prinz, 2010; Chun & Wolfe, 2001; OʼRegan & Noë, 2001; Merikle & Joordens, 1997; Posner, 1994). However, in recent years, evidence has been accumulating that attention and awareness are distinct processes that can be dissociated empirically and on a theoretical basis (Van Boxtel, Tsuchiya, & Koch, 2010a; Koch & Tsuchiya, 2007; Lamme, 2003). Empirical evidence comes from various experiments. An ERP component called visual awareness negativity, a negative difference wave around 200 msec after stimulus onset, correlates with visual awareness but is independent of top–down attention (Railo, Koivisto, & Revonsuo, 2011; Koivisto, Revonsuo, & Lehtonen, 2006). Distinct and independent neural correlates of visual awareness and spatial attention were also revealed by stimulus-induced oscillatory brain activity in

the gamma range (Wyart & Tallon-Baudry, 2008). At the single-cell level, Supèr, Spekreijse, and Lamme (2001) showed that early neural responses in monkey primary visual cortex (V1) were unaffected by the reportability of visual stimuli. A recent study independently manipulated attention and consciousness in a two-by-two design and found that V1 BOLD signals were selectively altered by the one (attention) and not the other (awareness) (Watanabe et al., 2011; see Koch & Tsuchiya, 2011, for a review). Also by means of a two-by-two design, Van Boxtel, Tsuchiya, and Koch (2010b) showed that attention decreases afterimage duration, whereas consciousness prolongs it. This shows that the recently proposed orthogonality of attention and consciousness makes sense and can be tackled experimentally. Separately manipulating attention and consciousness thus leads to the following two-by-two scheme, in which four types of neural processing can be distinguished: unattended and unconscious, attended yet unconscious, unattended yet conscious, and attended and conscious (see Figure 1). The latter condition is the “classical” case, where a conscious visual percept is attended, so that attention and awareness converge and the stimulus becomes

Journal of Cognitive Neuroscience 25:10, pp. 1579–1596, doi:10.1162/jocn_a_00424

Figure 1. Four stages of neural processing emerge when manipulating either attention or consciousness or both. The manipulations that are used in the current experiment are highlighted in white (top right and bottom left quadrants).

available for conscious report. But as can be judged from Figure 1, a lack of reportability can have two causes: the absence of attentional access or the absence of a conscious percept (or, obviously, of both), with different neural consequences. Disrupting consciousness can be done in a variety of experimental ways, the best known of which is masking (Fahrenfort, Scholte, & Lamme, 2007; Enns & Di Lollo, 2000). A properly masked stimulus remains fully invisible, even when it is attended. The lack of reportability of the stimulus in such a case is clearly because of the absence of consciousness, not because of a lack of attention. In our diagram (Figure 1), masked stimuli hence fall in the top right quadrant (“attended yet unconscious”). Taking away attention can be done with manipulations like change blindness or inattentional blindness. The result is attentional blindness, a lack of attentional access to low-level sensory signals (Kanai, Walsh, & Tseng, 2010; Mack & Rock, 1998; Simons & Levin, 1997; Rock, Linnett, Grant, & Mack, 1992). These manipulations have been shown to affect the extent or depth of processing of stimuli, that is, whether stimuli evoke activity in higher, postperceptual levels or not (Scholte, Witteveen, Spekreijse, & Lamme, 2006; Vuilleumier et al., 2001; Rees, Russell, Frith, & Driver, 1999). Perceptual processing typically remains largely intact. We put this kind of processing in the bottom left quadrant of the scheme of Figure 1 and hence call it “unattended yet conscious.” This is obviously a somewhat counterintuitive notion: it implies consciousness without cognitive access and reportability (Lamme, 2003, 2006, 2010; Block, 2007). Is this quadrant a realistic option? Do participants in these conditions simply have no conscious sensation of the stimuli because of the removal of attention (i.e., absence of report implying absence of consciousness)? Or do they still have a conscious (or phenomenal) experience, which


is, however, not available for report—the so-called “overflow argument” (Block, 2007, 2011), somewhat akin to the early versus late selection theories of attention, and related to the distinction between so-called P- versus A-consciousness? Our stance is the latter, and we have provided arguments elsewhere (Lamme, 2003, 2006, 2010), but we acknowledge that the topic is subject to fierce debate (Cohen & Dennett, 2011; Kouider, De Gardelle, Sackur, & Dupoux, 2010). We do not want to enter that complex discussion here but simply note that there are visual representations outside what is attended, that these are rich in content (Vandenbroucke, Sligte, & Lamme, 2011; Sligte, Vandenbroucke, Scholte, & Lamme, 2010), have perceptual characteristics (Vandenbroucke, Sligte, Fahrenfort, Ambroziak, & Lamme, 2012; Vandenbroucke, Fahrenfort, & Lamme, 2010; Moore & Egeth, 1997), and may deserve the status of being called “conscious.” At the least, there is agreement on the fact that these unattended representations are very different from truly unconscious representations, such as when stimuli are masked. We will call these representations “conscious” in this article, without implying a strong claim on our position in this debate. “Visible” or “potentially conscious” might have been equally proper denominations. Either way, we use this term for stimuli that fall into the bottom left quadrant of the scheme of Figure 1. If we indeed acknowledge that attention and awareness are dissociable, the question arises whether this differential effect on stimulus processing has any long-term consequences. A typical long-term consequence of stimulus processing is perceptual learning. Experience and practice with sensory stimulation increase the ability to extract information from the environment (Gibson, 1969). Perceptual learning is stimulus-specific and seems to take place at early cortical stages where low-level stimulus attributes are processed (Ahissar & Hochstein, 1993).
To date, the question whether attention or awareness is required for perceptual learning remains a subject of debate. It is known that attended yet masked stimuli, although fully invisible, can cause short-lived priming effects. Lately, the boundaries have been pushed regarding the duration of these unconscious effects, but at most they seem to extend for about half an hour rather than constituting any long-term learning (Capa, Cleeremans, Bustin, & Hansenne, 2011; see Van Gaal, De Lange, & Cohen, 2012, for a review). On the other hand, to our knowledge there is no evidence yet showing that perceptual learning does not occur for attended yet masked stimuli, as this has simply never been tested. For stimuli that are unmasked yet not attended, however, perceptual processing is still intact, and perceptual learning has been shown (Jeneson, Kirwan, & Squire, 2010; Gutnisky, Hansen, Iliescu, & Dragoi, 2009; Jessup & OʼDoherty, 2009; Seitz, Kim, & Watanabe, 2009; Seitz & Watanabe, 2003; Manns & Squire, 2001; Watanabe, Náñez, & Sasaki, 2001). Also, in studies of conditioning, that is, associating one stimulus with another stimulus, it is often found that this can occur without reportability of

the stimulus contingencies (Custers & Aarts, 2011; Knight, Nguyen, & Bandettini, 2003; Clark, Manns, & Squire, 2001; Clark, 1998). Many studies have claimed that either attention or awareness is (or is not) needed for learning, while assuming that they are identical processes or at least without explicitly distinguishing the two (Jessup & OʼDoherty, 2009; Gilbert, Sigman, & Crist, 2001; Ahissar & Hochstein, 1993, 1997; Shiu & Pashler, 1992). Therefore, it seems relevant to investigate systematically which of the two is really required for perceptual learning, using an experimental design in which both mechanisms are independently manipulated. Here, we tested which ingredient—consciousness or attention—is necessary for learning to detect a texture-defined stimulus. We chose texture-defined figure-ground stimuli, as consistent perceptual learning effects have been observed in the domain of texture segregation (Censor, Bonneh, Arieli, & Sagi, 2009; Pourtois, Rauss, Vuilleumier, & Schwartz, 2008; Ofen, Moran, & Sagi, 2007; Censor, Karni, & Sagi, 2006; Casco, Campana, Grieco, & Fuggetta, 2004; Sagi & Tanne, 1994; Karni & Sagi, 1991). When conscious and attended, figure-ground stimuli (or closely related types) have been shown to induce strong perceptual learning effects at the earliest stages of visual processing, which are retinotopically specific (Schwartz, Maquet, & Frith, 2002; Schoups, Vogels, & Orban, 1995; Karni & Sagi, 1991). We independently manipulated consciousness and attention by exposing participants to these stimuli rendered unreportable in two different ways: either by perceptual blindness (masking) or by attentional blindness (inattention paradigm) (Kanai et al., 2010). Respectively, stimuli were either masked yet attended or unmasked yet not attended. This yields stimuli that are unreportable in both conditions, yet for orthogonal and different reasons.
This enabled us to study learning specific to visual field location in participants who could not report the stimuli that induced the learning (because of the absence of either attention or consciousness) and hence to answer the question whether perceptual learning requires consciousness or attention.

METHODS

Participants

Fifty-four participants (all women) with no relevant psychiatric or neurological history participated in the experiment for course credit or financial compensation. We included only female participants, as it is our personal experience that they are generally more reliable in attending (both) sessions and more likely to follow instructions closely (especially in the Inattention condition, where fixating on the center is important to render participants “inattentionally blind”). Possibly our results do not generalize to male participants, although so far the only known sex effects show more robust (task-irrelevant) perceptual learning effects in male compared with female participants (Leclercq

& Seitz, 2012). Participants were all right-handed and had normal or corrected-to-normal vision based on the Landolt eye chart. We obtained written informed consent from each participant before experimentation. The experiment was approved by the ethical committee of the Psychology Department of the University of Amsterdam. Four participants from the Inattention group were excluded because they reported having seen the figure presented on Day 1 (as assessed by the 10AFC task, see below). Two participants (one from each group) were excluded because of apparatus failure. After exclusion, 24 participants remained in each group (Inattention group: M = 21.31 years, SD = 2.04 years; Masked group: M = 21.62 years, SD = 2.49 years).

Task Design

Stimuli and Setup

Stimuli were textured patterns consisting of a full field of patches of homogeneously oriented line elements, oriented at either 30°, 60°, 120°, or 150°. Target textures (presented at the onset of each white letter; see explanation of the 2-back task below) formed either a figure or a no-figure. Texture-defined figures contained a background with line elements oriented at either 30°, 60°, 120°, or 150°, and six squares (individual width 3.6°) consisting of line elements oriented orthogonal to the background, divided over two diagonally opposite quadrants (three squares in each quadrant; see Figure 2). Another six squares were present in the two remaining quadrants; however, their line elements had the same orientation as the background texture, so that they were not visible (although, because of a different jittering of the line elements, very vague and generally not perceived “boundaries” between square and background are discernible on close scrutiny; see “no-figure” panel in Figure 2). The no-figure texture also contained 12 squares (three in each quadrant); however, all squares had the same orientation as the background texture and were hence invisible (except that, again, vague “boundaries” were created between square and background by the different jittering of the line elements). The reason for doing this was to make figure and no-figure stimuli fully comparable in terms of low-level features, such as line orientation and length (see also Zipser, Lamme, & Schiller, 1996). Namely, when figure trials of all orientations are collapsed, overall (and at each location) the number of line elements is equal for each orientation compared with when all no-figure trials are collapsed. The same is true for the number of boundaries; only the orthogonality of the line elements in the figure squares differs between figures and no-figures.
This enabled us to calculate figure-no-figure difference waves in the EEG and hence isolate neural activity specifically related to figure-ground segregation (and not just boundary detection; for a similar procedure, see Fahrenfort et al., 2007; Scholte et al., 2006; Caputo & Casco, 1999; Lamme, Van Dijk, & Spekreijse, 1992).
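The counterbalancing logic described above can be sketched at the level of patch orientations. The following Python/numpy snippet is an illustrative reconstruction, not the authors' Matlab code: the grid size and square positions are placeholders, and each patch is reduced to a single orientation value (line-element rendering and jitter are omitted).

```python
import numpy as np

ORIENTATIONS = [30, 60, 120, 150]

def make_texture(background_ori, figure, grid=16):
    """One texture as a grid of patch orientations (degrees).

    In a figure texture, six squares in two diagonally opposite
    quadrants carry line elements orthogonal to the background; the
    remaining six squares (and all twelve squares of a no-figure
    texture) share the background orientation and are thus invisible.
    Grid size and square positions are illustrative placeholders.
    """
    tex = np.full((grid, grid), background_ori)
    if figure:
        ortho = (background_ori + 90) % 180
        # three 2x2-patch squares in each of two diagonal quadrants
        for r, c in [(1, 1), (1, 5), (5, 1), (9, 9), (9, 13), (13, 9)]:
            tex[r:r + 2, c:c + 2] = ortho
    return tex

# Collapsed over all four background orientations, figure and no-figure
# textures contain identical numbers of elements per orientation:
fig = np.concatenate([make_texture(o, True).ravel() for o in ORIENTATIONS])
nofig = np.concatenate([make_texture(o, False).ravel() for o in ORIENTATIONS])
assert (np.unique(fig, return_counts=True)[1]
        == np.unique(nofig, return_counts=True)[1]).all()
```

The final assertion mirrors the equal-low-level-features argument: a figure-minus-no-figure difference wave cannot be driven by the distribution of line orientations alone.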


Stimuli were created using Matlab (The MathWorks, Natick, MA, USA). Stimuli were presented using Presentation (Neurobehavioral Systems, Albany, CA, USA) and displayed on a 19-in. Iiyama Vision Master Pro 450 with display settings of 1024 × 768, 32 bits, and 60 Hz. Participants were seated in a comfortable chair at a fixed distance from the screen. An in-house manufactured chin rest positioned the participants at equal height and fixed eye distance (75 cm) to the screen on both days.

General Procedure

The experiment consisted of two separate sessions on consecutive days: a learning phase (on Day 1) and a testing phase (24 hr later, on Day 2). Every participant was assigned to either the Inattention or the Masked group on the basis of a predetermined randomization schedule. On Day 1, both groups were exposed to a rapid serial visual presentation (RSVP) letter stream, presented against a background of textured figures. However, for the Inattention group reportability of the to-be-learned figure was manipulated by inattention, and for the Masked group reportability was manipulated by masking the figure. On Day 2, both groups performed the same (RSVP) task, while in the background figures were presented both at trained and novel positions to assess learning neurally. Finally, a staircased detection task with figures at trained and novel positions was administered to measure behavioral learning effects.

Day 1—learning phase. GRAYSCALES TASK. Within each group (Inattention and Masked), there were two conditions, A and B, referring to the position of the figures presented on Day 1 (see Figure 2). In each group, half of the participants (12 participants) were assigned to Condition A, and the other half were assigned to Condition B. Visual field biases (an asymmetry in representation and perception of/attention for one side of the visual field) may influence learning effects (Dickinson & Intraub, 2009).
Therefore, before being assigned to a condition every participant was tested for visual field bias by performing a computerized version of the Grayscales task (Nicholls, Bradshaw, & Mattingley, 1999; both the original version and a 90° rotated one, so biases could be calculated for both left vs. right and top vs. bottom of the visual field). On the basis of the outcome of this test, participants were

assigned to a condition such that visual field bias differences between conditions were minimized.

INATTENTION GROUP (2-BACK TASK, EEG). We used an inattention paradigm based on that of Scholte et al. (2006). It consisted of a primary stream of stimuli of which the participants were informed and a secondary stream of stimuli of which the participants were not informed. The primary stream consisted of a foveally presented RSVP of letters at a speed of 217 msec per letter (A, G, P, S, R, O, L, B, D, and T were used), which were presented on a red fixation point with a gray center of 1.1° in diameter. These letters were usually black, but once every six to eight letters, a white letter was presented. Participants were instructed to indicate whether each white letter was the same as or different from the white letter presented two instances earlier (2-back task), by pressing one of two buttons of a response box. At the onset of every white letter, a short beep sounded to minimize the number of missed responses. The secondary stream of stimuli consisted of textured patterns that were presented at the onset of each new letter for 50 msec (for the remaining 167 msec, an isoluminant gray dotted pattern was presented). Together with every black letter, a homogeneous pattern was presented. At the onset of each white letter, however, a textured pattern was presented that either formed a figure (containing six squares, presented in 47% of the cases) or a no-figure (containing no squares, presented in 47% of the cases; for a detailed description of these textured patterns, see “Stimuli and Setup” above). In the remaining 6% of the cases, so-called “filler trials” were presented, with homogeneous textured patterns, which acted as a dummy to match the “catch trials” in the Masked group paradigm (see below for further explanation of “catch trials”).
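The response rule of the 2-back task can be made concrete with a short sketch. This is illustrative Python, not the Presentation script used in the experiment; the function name and the same/different labels are our own.

```python
def two_back_answers(white_letters):
    """Correct responses for the 2-back task: for each white letter from
    the third onward, 'same' if it matches the white letter presented
    two instances earlier, otherwise 'different'. The first two white
    letters have no 2-back reference yet, so they get None."""
    answers = []
    for i, letter in enumerate(white_letters):
        if i < 2:
            answers.append(None)
        else:
            answers.append("same" if letter == white_letters[i - 2]
                           else "different")
    return answers

# e.g., for the white-letter sequence A, B, A, B, G:
print(two_back_answers(["A", "B", "A", "B", "G"]))
# [None, None, 'same', 'same', 'different']
```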
For every trial, the background orientation of each textured pattern was chosen randomly from the four orientations (30°, 60°, 120°, or 150°; see “Stimuli and Setup” above), within the following constraints: subsequent textures always contained line elements with a different orientation to ensure that a new image was presented with the onset of each subsequent letter (black or white). The four background orientations were equally divided over figures and no-figures, and each orientation was equally divided over each white letter (A, G, P, S, R,

Figure 2. Schematic representation of the task designs. On Day 1 (learning phase), the two groups perform different tasks. The Inattention group performs an RSVP 2-back letter task while a figure texture (layout A or B) is presented in the background together with every target letter. The Masked group performs a figure detection task, in which they have to detect whether or not a masked figure texture is present in the background while the target letter is presented. Easily detectable, unmasked, luminance-defined “catch” figure trials (layouts A and B) have been added to keep participants motivated and to check whether the task instructions were being followed. On Day 2 (testing phase), both groups perform the same two tasks to measure learning effects neurally (vowel task) and behaviorally (staircase task). In the vowel task, participants have to indicate whether the target letter is a vowel or not while both the trained and novel figures (layouts A and B) are presented in the background. In the staircase task, participants have to detect whether or not the masked target is a figure. Depending on their performance, the SOA between target and mask varies, in a double-staircase manner for each figure independently. Two versions of the staircase task are presented, one without and one with performance feedback.


O, L, B, D, and T). In total, 1,440 trials were presented (680 per figure/no-figure, 80 “filler trials”) in six blocks, with a total duration of approximately 45 min (including breaks). The task was preceded by a short practice version (12 trials, with only no-figure backgrounds) to make participants acquainted with the task. Buttons were counterbalanced across participants. Participants were instructed to perform as well as possible on the primary letter stream task. They were not told about the potential presence of figure textures in the background, so as to induce inattentional blindness for these stimuli. We evaluated the presence or absence of inattentional blindness for the textured figures at the end of the session, by means of a 10-alternative forced-choice (10AFC) task in which participants had to select the figure presented during the 2-back task from a set of nine other figure textures (see Figure 3). They were told that one of the figure textures had been presented regularly during the previous task, at the onset of the white letter, and were instructed to select it. The actual figures (both the stimuli from Conditions A and B) and eight others consisting of circles and rectangles with similarly oriented line elements and textures of 0° and 90° were presented in random order (numbered 1 to 10). If participants failed to select the correct figure, they were considered to be unable to report about the figure, that is, to have suffered from inattentional blindness during the exposure to the texture figures.

MASKED GROUP (DETECTION TASK, EEG). Participants in the Masked group were presented with a paradigm similar to that of the Inattention group; however, there were some crucial differences. Our aim was to manipulate reportability of the figure by masking instead of inattention. Therefore, all background textures were masked (the mask consisting of two superimposed homogeneous textures with orthogonally oriented line elements; see “mask” panel in Figure 2).
Masking was achieved by having each figure/no-figure stimulus (50 msec), presented together with the white letters, followed by an isoluminant gray screen (for 100 msec), which was in turn followed by a mask with a duration of 67 msec. Instead of the 2-back letter task, participants directed their attention toward the secondary stream of stimuli to perform a figure detection task. With every white letter (accompanied by a short beep), they had to indicate whether or not a figure was present in the background by pressing one of two buttons of a response box. Because of the mask, this task was practically impossible (which was indeed our goal: unreportability of the figures by masking, even with attention). To keep our participants motivated and to check whether the task instructions were being followed, “catch trials” were presented in 6% of the trials, which were much easier to detect. These “catch trials” contained unmasked and fairly visible luminance-defined catch figures (both layouts A and B, to prevent any learning effects of location from occurring and to ensure


Figure 3. This set of figure textures was presented (in random order, numbered 1–10) during the 10AFC task, in which participants from the Inattention group had to select the figure presented during the 2-back task from a set of nine other figure textures. If participants failed to select the correct figure, they were considered to have suffered from inattentional blindness for the figure presented in the background on Day 1. Figures A and B are depicted in the top two pictures.

fixation at the center of the screen) and catch no-figures. Note that in the Inattention group, to keep the number of trials and the task length equal, “filler trials” were presented instead of these “catch trials,” containing only catch no-figures (see Figure 2), while participants were performing the 2-back task.

Day 2—testing phase. VOWEL TASK (EEG). Participants of both groups (Inattention and Masked) and of each condition (A and B) performed the same task to measure neural activity evoked by both the trained and

the novel figure (the same spatial configuration, placed in the opposite corners relative to the trained figure). Participants performed an RSVP letter task to ensure fixation at the center while being passively exposed to figures and no-figures in the background. The task was similar to the task performed by the Inattention group on Day 1, except that here, for every white letter (A, E, O, U, S, T, G, and B were used), participants had to indicate whether it was a vowel or a consonant by pressing one of two buttons of a response box. The task differed from the 2-back task (Inattention group, Day 1) to ensure equal task difficulty for both groups. With every white letter (as on Day 1, accompanied by a short beep), either figure A, figure B, or a no-figure was presented, which allowed us to make the relevant subtractions in the EEG to assess neural learning effects. In total, 768 trials were presented (256 each for figure A, figure B, and no-figure) in four blocks, with a total duration of approximately 25 min (including breaks). The task was preceded by a short practice version (12 trials, with only no-figure backgrounds) to make participants acquainted with the task. Buttons were counterbalanced across participants.

STAIRCASE TASK (BEHAVIOR). Participants from both groups (Inattention and Masked) and conditions (A and B) performed the same task to measure behavioral detectability of the trained and the novel figure (its configuration in the opposite corners relative to the trained figure). On every trial, a target, either a figure (A or B) or a no-figure, was presented (50 msec, accompanied by a short beep), followed by an isoluminant screen (variable duration, starting at 300 msec) and a meta-contrast mask consisting of orthogonally oriented line elements (500 msec; see “staircase mask” panel in Figure 2). After presentation of the mask, the red fixation dot turned green to indicate that a response could be made.
Participants then pressed one of two buttons of a response box to indicate whether or not they had seen a figure preceding the mask. The next trial started after a response was made, but when no response was given within 1500 msec after mask presentation, a warning message appeared, “No response! Respond when the fixation dot is green” (but in Dutch), which stayed on screen until a response was made. The SOA between the target and the mask was initially set at 350 msec for both figures but varied in a double-staircase manner (for each figure independently), according to the participantʼs responses. For every two correct responses, the SOA was shortened by 17 msec, and for every incorrect response, it was lengthened by 17 msec, thereby varying the duration of the isoluminant screen between target and mask. The SOA of the no-figures was paired with both figures (in a 50:50 manner), such that at every SOA there was an equal probability of a figure or a no-figure as the target. After 60 trials for each figure/no-figure pair, this resulted in two separate SOA curves, one for the trained figure (figure layout A for participants exposed to Condition A on Day 1 and figure layout B

for participants exposed to Condition B on Day 1) and one for the novel figure (figure layout A for participants exposed to Condition B on Day 1 and figure layout B for participants exposed to Condition A on Day 1). Participants performed this task twice in a row: the first time as described above, the second time with performance feedback given on every trial. After each response, a green or red bar appeared (for 500 msec), with the word “correct” or “incorrect,” respectively. In each group, only 18 of the 24 participants performed these two tasks, as we started out with a more difficult version (target presented for 33 msec instead of 50 msec, no beep, no feedback) for the first six participants in each group. This version of the task turned out to be too difficult (11 of 12 participants finished with an SOA > 350 msec); therefore, we changed the task to the version described above and excluded these 12 participants from the analysis for these tasks. A further three participants from the Inattention group were excluded from the analysis of the Staircase detection task without feedback and two from the one with feedback, as they reported in the exit interview to have been fixating on one corner of the screen instead of on the center. In total, 120 trials were presented in one block, with a duration of approximately 6 min. The task was preceded by a short practice version (10 trials, with only no-figure backgrounds) to make participants acquainted with the task. Buttons were counterbalanced across participants.

Exit Interview

At the end of the second session, participants filled out an exit interview, with questions about the amount of sleep they had had the night before and in between the two sessions, how many units of alcohol they had drunk, and what strategy they had used for the detection task (Day 2).
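The double-staircase rule above (two consecutive correct responses shorten the target-mask SOA by 17 msec; each incorrect response lengthens it by 17 msec) can be sketched as follows. The 350-msec starting SOA and 17-msec step come from the text; resetting the correct-response streak after an error and the floor at 0 msec are our assumptions.

```python
class Staircase:
    """Minimal 2-down / 1-up staircase over the target-mask SOA (msec)."""

    def __init__(self, start_soa=350, step=17):
        self.soa = start_soa
        self.step = step
        self._streak = 0  # consecutive correct responses since last change

    def update(self, correct):
        if correct:
            self._streak += 1
            if self._streak == 2:          # two correct: shorten the SOA
                self.soa = max(0, self.soa - self.step)
                self._streak = 0
        else:                              # one error: lengthen the SOA
            self.soa += self.step
            self._streak = 0               # assumption: errors reset the streak
        return self.soa

# Two independent staircases, one per figure, interleaved as in the task:
trained, novel = Staircase(), Staircase()
for hit_trained, hit_novel in [(True, True), (True, False), (False, True)]:
    trained.update(hit_trained)
    novel.update(hit_novel)
```

A more sensitive observer reaches shorter SOAs within the 60 trials per figure, so the end point of each SOA curve indexes the detectability of that figure.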

Behavioral Data Analysis

Behavioral data were analyzed using SPSS 17.0 (IBM, Armonk, NY). For Day 1, bias scores on the Grayscales task were calculated as in Nicholls et al. (1999), for both left versus right (horizontal) and top versus bottom (vertical), resulting in one overall score (horizontal × vertical). Bias score differences between conditions and groups were tested with independent t tests. Performance on the 2-back task (Inattention group, Day 1) was calculated as percentage correct. For the detection task (Masked group, Day 1), performance (percentage correct and d′) was analyzed separately for the catch stimuli and the masked stimuli. Paired two-tailed t tests were used to test whether these performances differed significantly from chance. Equal performance on the Vowel task (Day 2) was tested with an independent t test between the two groups (percentage correct).

Behavioral learning effects on Day 2 were established by comparing the results of the Staircase task for the trained versus the novel figure, for both the feedback and no-feedback versions. For each group, the trials were divided into five bins (12 trials each), and for the fifth bin (the end point of the SOA curves) a paired t test was performed for the trained versus the novel figure. Learning effects were then compared between the two groups by means of an independent t test on the fifth bin.

EEG Measurements and Analyses

EEG data were recorded using a BioSemi ActiveTwo 64-channel active EEG system (BioSemi, Amsterdam, the Netherlands) during the learning phase (Day 1) and the Vowel task (Day 2). Data were sampled at 1024 Hz and referenced to electrodes placed on both earlobes. Sixty-four scalp electrodes were measured, as well as two electrodes for horizontal and two for vertical eye movements. Data were filtered using a high-pass filter of 0.5 Hz, a low-pass filter of 30 Hz, and a notch filter of 50 Hz before downsampling to 256 Hz. An independent component analysis (Hyvarinen, Karhunen, & Oja, 2001) was used to remove eye blink and eye movement artifacts. Another low-pass filter of 20 Hz was then applied. Data were segmented into epochs starting 217 msec before stimulus onset and ending 500 msec after stimulus onset. Baseline correction was applied to the interval from −100 to 0 msec preceding stimulus onset. Automatic artifact rejection was applied by removing segments falling outside the −250 to 250 μV range or deviating more than two standard deviations from the average. Spherical interpolation was used to create a signal for removed channels. Data were normalized to the peak signal responses of the no-figure stimuli on a per-participant basis.
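The epoch-level cleaning steps described above (baseline correction on the −100 to 0 msec window, rejection of segments outside ±250 μV or deviating more than two standard deviations from the average) can be sketched in a few lines. The array layout and the two criteria follow the text, but the function below is an illustrative reconstruction, not the Brain Vision Analyzer pipeline actually used, and the exact per-epoch deviation metric is an assumption.

```python
import numpy as np

def clean_epochs(epochs, fs=256, tmin=-0.217, base=(-0.100, 0.0), amp=250.0):
    """Baseline-correct and artifact-reject epoched EEG.

    epochs : array (n_epochs, n_channels, n_samples), in microvolts,
             time axis starting at `tmin` seconds relative to stimulus onset.
    Returns the cleaned epochs and a boolean mask of kept epochs.
    """
    times = tmin + np.arange(epochs.shape[-1]) / fs
    base_idx = (times >= base[0]) & (times < base[1])
    # Subtract the mean of the -100..0 msec interval per epoch and channel.
    corrected = epochs - epochs[..., base_idx].mean(axis=-1, keepdims=True)
    # Reject epochs with any sample outside the +/-250 microvolt range.
    in_range = np.abs(corrected).max(axis=(1, 2)) <= amp
    # Reject epochs deviating more than 2 SD from the average epoch
    # (RMS distance to the grand-average epoch; the metric is an assumption).
    mean_ep = corrected.mean(axis=0)
    dev = np.sqrt(((corrected - mean_ep) ** 2).mean(axis=(1, 2)))
    keep = in_range & (dev <= dev.mean() + 2 * dev.std())
    return corrected[keep], keep
```

With a 256-Hz sampling rate and epochs from −217 to 500 msec, each epoch holds roughly 184 samples; an epoch containing a large transient is flagged by both criteria and dropped.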
To investigate the extent of neural processing of the figures on Day 1 (learning phase) in both groups, we averaged ERPs for figure and no-figure trials, and difference waves were obtained by subtracting the average of no-figure trials from the average of figure trials. The remaining signal then selectively reflects the processes underlying figure-ground segregation, whereas signals related to low-level processes such as the detection of line segments and line segment discontinuities (the vague boundaries mentioned above) have been eliminated from the EEG signal (see Fahrenfort et al., 2007; Scholte et al., 2006; Caputo & Casco, 1999; Lamme et al., 1992). ROIs were determined based on the figure-ground difference wave of the Inattention group (because in the Masked group the figure-ground signal was greatly reduced by the mask). Selected electrodes were pooled, and a time window for each ROI was determined based on visual inspection (by two independent observers). This resulted in the following ROIs: an early peri-occipital ROI (P5, P7, PO7, P6, P8, PO8; 100–200 msec) and a late peri-occipital ROI (O1, PO7, PO3, Oz, POz, O2, PO8, PO4; 200–350 msec). For these ROIs and time windows, we performed paired two-tailed t tests on the average (figure vs. no-figure) of each time window and ran an ANOVA for Timing (early, late) × Figure (figure, no-figure). We then tested at which time points the figure trials differed significantly from the no-figure trials for both groups. We conducted a paired two-tailed t test at each time point within the selected time window (treating the average of each participant at that time point as an observation, thresholding with a false discovery rate at the p < .05 level) to determine at which exact time points figure-evoked activity differs from no-figure-evoked activity.

We then analyzed neural learning effects for Day 2 (testing phase). The same ROIs and time windows were used as for the Day 1 (learning phase) analysis, and ERP averages were calculated for trained figure trials and novel figure trials. Similar to the analysis for Day 1, using the same ROIs and time windows, we tested whether the average value of the trained figure (minus no-figure) differed from that of the novel figure (minus no-figure) for each time window, using paired two-tailed t tests and an ANOVA for Timing (early, late) × Figure (trained figure, novel figure). We then tested at which time points within the selected time window the trained figure differed significantly from the novel figure, within both the Inattention group and the Masked group (again thresholding with a false discovery rate at the p < .05 level). Preprocessing was performed using Brain Vision Analyzer (Brain Products
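The per-time-point procedure (a paired two-tailed t test at each sample, thresholded with the Benjamini–Hochberg false discovery rate at p < .05) can be sketched as follows. The function and the synthetic data are illustrative, not the study's analysis code.

```python
import numpy as np
from scipy import stats

def pointwise_fdr(cond_a, cond_b, q=0.05):
    """Paired two-tailed t test at every time point, FDR-thresholded.

    cond_a, cond_b : arrays of shape (n_participants, n_timepoints),
    e.g. per-participant figure and no-figure ERP averages for one ROI.
    Returns a boolean mask of time points surviving Benjamini-Hochberg FDR.
    """
    _, p = stats.ttest_rel(cond_a, cond_b, axis=0)
    m = p.size
    order = np.argsort(p)
    # Largest k such that p_(k) <= (k / m) * q (Benjamini-Hochberg step-up).
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.nonzero(below)[0].max() + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```

Runs of True in the returned mask correspond to the significant time periods reported below (e.g., 117–199 msec for the early ROI on Day 1).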

Figure 4. Behavioral performance on Day 1. For the Masked group, percentage correct does not differ significantly from chance for detection of masked figure textures [50.5% correct, t(1, 23) = 1.60, p = .12], whereas unmasked “catch” figures are detected above chance level [85.1%, t(1, 23) = 14.77, p < .00001], which indicates that participants were performing the task correctly but just were not able to detect the masked figure textures. The Inattention group performs above chance level at the 2-back letter task [90.7%, t(1, 23) = 38.95, p < .00001], indicating that top–down attention was directed at the letter task instead of the background figures.


Figure 5. Neural processing of the figure and no-figure texture on Day 1 for both groups. Activation maps depict figure minus no-figure activity for the early (100–200 msec) and late (200–350 msec) time windows. The difference wave of the figure minus no-figure evoked signal shows early (FFS) and late (RP) figure-related activity in the Inattention group [early ROI: t(1, 23) = −2.700, p = .013; late ROI: t(1, 23) = −3.077, p = .005]. When the difference between figure and no-figure activity is tested per time point, significant differences were found for the following time periods: 117–199 msec (early ROI) and 223–238 and 277–348 msec (late ROI). For the Masked group, late figure-related activity is entirely blocked by the mask [t(1, 23) = 0.226, p = .82], whereas early (FFS) processing is still present, although only marginally significant [148–156 msec, early ROI: t(1, 23) = −1.935, p = .065 (note that this is tested using a two-sided paired t test)]. Please note that prestimulus activity is large for the nonsubtracted ERP signal (figure and no-figure) because of the RSVP design of our experiment, in which the target stimulus is presented amidst a rapid stream of distractors.

GmbH, Munich, Germany). For statistical analysis and visualization of the time courses, Matlab (The MathWorks, Natick, MA, USA) was used.

RESULTS

Learning Phase (Day 1)

Behavioral Results: Manipulation Checks

To minimize visual field bias differences between conditions, a bias score was measured for every participant using a computerized version of the Grayscales task (Nicholls et al., 1999). On the basis of the outcome of this test, participants were assigned to a condition such that visual field bias differences between conditions were minimized. The bias scores indeed did not differ significantly between conditions [Inattention group: t(1, 11) = −0.322, p = .75; Masked group: t(1, 11) = −0.822, p = .43] or groups [t(1, 46) = 0.151, p = .88]. Performance measures on the tasks for Day 1 are presented in Figure 4.

For the Inattention group, attention was successfully directed toward the primary stream of letter stimuli, as performance on the 2-back task was significantly better than chance [90.7%, t(1, 23) = 38.95, p < .00001]. Reportability of the figure presented in the secondary stream of stimuli was measured with a 10-AFC task. Four of 28 participants (14%) from the Inattention group were excluded from further analysis because they chose the correct figure presented on Day 1 (see also the "Participants" section above). For the Masked group, detection of the catch trials was significantly better than chance [85.1%, t(1, 23) = 14.77, p < .00001], indicating that the task instructions were followed correctly. The other trials were successfully masked, as detection of the masked figures did not differ significantly from chance [50.5%, t(1, 23) = 1.60, p = .12; see Figure 4]. However, the d′ value for detection of the masked figures, although very close to zero (0.11), did differ significantly from chance (i.e., a d′ value of 0) [t(1, 23) = 2.48, p = .02], suggesting that the masked figures may have been just barely detectable for some participants on some trials.

EEG Results: More Late Figure-Related Processing in the Inattention Group

As expected, because of the different task manipulations, on Day 1 figure textures were processed differently in both groups (see Figures 5 and 6). In the Inattention group, early (100–200 msec) and late (200–350 msec) figure-ground-specific activity can be observed when the no-figure evoked signal is subtracted from the figure evoked signal. On the basis of earlier work, we interpret the early figure-ground signal as feedforward sweep (FFS) and the late signal as recurrent processing (RP) activity (Fahrenfort & Scholte, 2008; Scholte, Jolij, Fahrenfort, & Lamme, 2008; Lamme & Roelfsema, 2000). For both time windows, figure evoked activity differs significantly from no-figure evoked activity [early ROI: t(1, 23) = −2.700, p = .013; late ROI: t(1, 23) = −3.077, p = .005]. The ANOVA showed main effects of Figure [vs. no-figure, F(1, 23) = 12.383, p = .002] and Timing [F(1, 23) = 30.779, p = .00001], with no Timing × Figure interaction, indicating that figure evoked signal deviates from no-figure evoked signal at both time windows. When the difference between figure and no-figure activity is tested per time point, the following time periods differ significantly: 117–199 msec (early ROI) and 223–238 and 277–348 msec (late ROI). Thus, figure-ground segregation processes proceeded normally during inattention (see also Scholte et al., 2006).
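The d′ measure used in the behavioral manipulation checks above is the difference of z-transformed hit and false-alarm rates, d′ = z(H) − z(FA). The helper below is a generic illustration; the paper does not state how extreme proportions were handled, so the 1/(2n) correction is an assumption.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate, n=None):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    If the trial count `n` is given, clip both rates to the common
    1/(2n) correction so that proportions of exactly 0 or 1 stay finite
    (an assumption; the paper does not specify a correction).
    """
    if n is not None:
        hit_rate = min(max(hit_rate, 1 / (2 * n)), 1 - 1 / (2 * n))
        fa_rate = min(max(fa_rate, 1 / (2 * n)), 1 - 1 / (2 * n))
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(round(dprime(0.85, 0.15), 2))  # a clearly detectable stimulus
print(round(dprime(0.5, 0.5), 2))    # chance performance: d' is exactly 0
```

A d′ of 0.11, as found for the masked figures, thus corresponds to hit and false-alarm rates that are nearly identical, yet the one-sample t test across participants can still distinguish it from exactly 0.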

For the Masked group, late (RP) figure-related activity is entirely blocked by the mask [late ROI: t(1, 23) = 0.226, p = .82; none of the time points differ significantly], whereas early (FFS) processing is still present, although marginally significant [early ROI: t(1, 23) = −1.935, p = .065 (but note that this is tested using a two-sided paired t test); time period 148–156 msec differs significantly]. The ANOVA showed an interaction of Timing × Figure at trend level [F(1, 23) = 3.178, p = .088; no main effects of Figure, F(1, 23) = 0.480, p = .50, or Timing, F(1, 23) = 0.490, p = .49], in line with our finding that figure evoked signal deviates from no-figure evoked signal at the early, but not the late time window. The masks successfully blocked RP of the figures, which is also reflected in the performance on the detection task (see above). This is in line with previous results (Fahrenfort et al., 2007).

Testing Phase (Day 2)

Behavioral Results: Manipulation Checks

During the testing phase, both groups performed the same task (Vowel task), on which performance was equal

Figure 6. Neural processing of the trained figure and novel figure texture on Day 2 for both groups. Activation maps depict trained figure minus novel figure activity for the early (100–200 msec) and late (200–350 msec) time window. When comparing trained figure and novel figure evoked activity on Day 2, learning effects were found only in the Inattention group. For both early and late ROIs, trained figure evoked activity differs significantly from novel figure evoked activity [early ROI: t(1, 23) = −2.213, p = .037, late ROI: t(1, 23) = −3.137, p = .005]. When the difference between trained figure and novel figure activity is tested per time point, significant differences were found for the following time periods: 156–172 msec (early ROI) and 273–328 msec (late ROI). No learning seems to have taken place in the Masked group, as no differences were found between trained figure and novel figure evoked activity, for both ROIs [early ROI: t(1, 23) = −0.566, p = .58, late ROI: t(1, 23) = −0.593, p = .56]. Also none of the time points differ significantly between trained figure and novel figure ERPs. Please note that prestimulus activity is large for the nonsubtracted ERP signal (trained figure and novel figure) because of the RSVP design of our experiment, where the target stimulus is presented amidst a series of distractors in a rapid stream.


for both groups [Inattention group: 95.4%, Masked group: 95.6%; t(1, 46) = 0.174, p = .863].

EEG Results: Early and Late Learning Effects Only for Inattention Group

Neural learning effects were found only in the Inattention group (see Figure 6). For both the early and late ROIs, trained figure minus no-figure evoked activity differs significantly from novel figure minus no-figure evoked activity [early ROI: t(1, 23) = −2.213, p = .037; late ROI: t(1, 23) = −3.137, p = .005]. The ANOVA showed a main effect of Figure [trained vs. novel, F(1, 23) = 10.415, p = .004], with no main effect of Timing [F(1, 23) = 0.693, p = .41] and no Timing × Figure interaction, indicating that trained figure evoked signal deviates from novel figure evoked signal at both time windows. When the difference between trained figure and novel figure activity is tested per time point, the following time periods differ significantly: 156–172 msec (early ROI) and 273–328 msec (late ROI).

For the Masked group, no differences were found between trained figure minus no-figure and novel figure minus no-figure evoked activity for either ROI [early ROI: t(1, 23) = −0.566, p = .58; late ROI: t(1, 23) = −0.593, p = .56]. The ANOVA showed no main effect of Figure [trained vs. novel, F(1, 23) = 0.469, p = .50], no main effect of Timing [F(1, 23) = 0.111, p = .74], and no Timing × Figure interaction, indicating that trained figure evoked signal does not deviate from novel figure evoked signal in either time window. None of the time points differ significantly between trained figure and novel figure ERPs. Thus, no neural learning seems to have taken place in this group.

Figure 7. Behavioral performance on the Staircase task on Day 2. The version without performance feedback (left) does not reveal any learning effects in either group [Inattention group: t(1, 14) = 0.233, p = .819; Masked group: t(1, 17) = 0.010, p = .992]. Once performance feedback is provided on a trial-by-trial basis, a learning effect becomes apparent only for the Inattention group [Inattention group: t(1, 15) = −2.147, p = .049; Masked group: t(1, 17) = 1.891, p = .076]. Note that for the Masked group, this is a trend in the opposite direction: shorter SOAs for the novel figure as opposed to the trained one, whereas in the Inattention group SOAs for the trained figure are significantly shorter than for the novel figure.

Behavioral Results: Learning Effect Only for Inattention Group with Performance Feedback

In Figure 7, behavioral performance on the Staircase task is depicted. In each group, only 18 of the 24 participants performed these two tasks, as we started out with a more difficult version for the first six participants in each group, which proved too difficult. We therefore changed the task and excluded these 12 participants from the analysis of these tasks (see "Staircase task" in the "Methods" section above). Of these 18 participants in the Inattention group, three were excluded from the behavioral analysis of the no-feedback Staircase task and two from that of the feedback Staircase task, as they reported having fixated on one corner of the screen instead of the center during these tasks. No further exclusions were required in the Masked group (see "Exit Interview" in the "Methods" section above).

There was a learning effect in the Inattention group, but not in the Masked group, as measured by the Staircase task with feedback. In the Inattention group, SOAs for the trained figure were significantly shorter than for the novel figure [t(1, 15) = −2.147, p = .049]. For the Masked group, there was a trend in the opposite direction: shorter SOAs for the novel figure as opposed to the trained one [t(1, 17) = 1.891, p = .076]. Overall, SOAs were lower in the Masked group than in the Inattention group. Although this difference was not significant [t(1, 32) = 1.082, p = .29], it might reflect the fact that participants in the Masked group had more prior experience with figure detection tasks, as they had to detect the masked figure on Day 1. Remarkably, these effects were present only for the second version of the Staircase task, during which performance feedback was given on every trial. For the first (no-feedback) version, no learning effects were found in either group [Inattention group: t(1, 14) = 0.233, p = .819; Masked group: t(1, 17) = 0.010, p = .992].
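The end-point comparison on the Staircase task (five bins of 12 trials per participant; paired t test on the fifth bin, trained vs. novel figure) can be sketched as follows. The function and the synthetic SOA traces are illustrative.

```python
import numpy as np
from scipy import stats

def fifth_bin_test(trained_soas, novel_soas, n_bins=5):
    """Bin each participant's SOA staircase trace into `n_bins` equal bins
    and compare the final bin (the end point of the SOA curve) for the
    trained versus the novel figure with a paired two-tailed t test.

    trained_soas, novel_soas : arrays (n_participants, n_trials),
    the SOA presented on each trial, in msec.
    """
    def final_bin_mean(soas):
        bins = np.array_split(np.asarray(soas, dtype=float), n_bins, axis=1)
        return bins[-1].mean(axis=1)  # per-participant mean of the last bin
    trained_end = final_bin_mean(trained_soas)
    novel_end = final_bin_mean(novel_soas)
    t, p = stats.ttest_rel(trained_end, novel_end)
    return trained_end, novel_end, t, p
```

A negative t value indicates shorter end-point SOAs for the trained figure, i.e., the learning pattern seen in the Inattention group with feedback.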

DISCUSSION

Inattentional Learning

We found perceptual learning during inattention, but not for masked stimuli to which attention is directed. Neurally, learning effects were found for both early and late ROIs in the Inattention group, whereas the Masked group did not show any neural learning effects. This learning effect in the Inattention group was also apparent behaviorally, measured with a staircased detection task, but only when performance feedback was provided. These results suggest that different kinds of unreportability, perceptual or attentional blindness, can lead to different neural and behavioral outcomes. Attentional blindness (inattention) to stimuli does not fully preclude that the unreported stimuli induce learning, whereas perceptual blindness (masking) does. The fact that we find a learning effect for the Inattention group, but not for the Masked group, indicates that attention is not required for perceptual learning, whereas consciousness is. This further strengthens the idea that consciousness is separate from attention.

Did we successfully divert attention away from the background figures in the Inattention group and direct attention toward the masked figures in the Masked group? Although not widely used, the Inattentional Blindness paradigm, originally devised by Mack and Rock (1998) and popularized by the famous gorilla-suit movie (Simons & Chabris, 1999), is in fact very effective when it comes to shutting down both bottom–up and top–down attention. Key to the Inattentional Blindness paradigm is that participants do not know in advance about the possible presentation of figures. This prevents any (hidden) top–down motivation for attending these figures. Bottom–up attention is also precluded, as is evidenced by the absence of knowledge of the figures after the task: typically, when bottom–up attention is grabbed by the figure, participants notice the object and are readily able to report it (as happened in a subset of our participants, who were hence excluded from further analysis).
This is also shown by the classical Mack and Rock experiments: bottom–up attention grabbers such as faces typically do "break through," and people readily remember having seen them (Mack & Rock, 1998). This strongly argues that participants who did not report the figures indeed devoted neither top–down nor bottom–up attention to these figures. We therefore think that our Inattention condition sufficiently exhausted all attentional resources.

As for the Masked condition, the question is whether attending to the "catch" stimuli implies that the masked figures were attended as well. We can only show successful detection of the "catch" stimuli, which are identical to the masked stimuli in terms of spatial layout. The situation is similar to paradigms in which masked and hence unconscious primes are presented together with visible stimuli (Dehaene et al., 2001). Such invisible primes at the same location as visible stimuli are known to affect subsequent processing, whereas unconscious primes that are not attended do not (Naccache, Blandin, & Dehaene, 2002). This shows that, in some tasks, attention to visible stimuli at a particular location automatically transfers to other, invisible, stimuli at that same location, suggesting that our masked figures were indeed attended.

Task-irrelevant versus Task-relevant Learning

Several studies are in line with the idea that attention is not necessary for learning. A relevant paradigm is so-called "task-irrelevant learning" (Seitz et al., 2009; Seitz & Watanabe, 2003; Watanabe et al., 2001; for a review, see Choi & Watanabe, 2009). It is claimed that internal rewards gate this type of learning, as learning only occurs when the task-irrelevant stimulus is temporally paired with a task-relevant stimulus, which triggers an internal reward (release of neuromodulatory factors that boost sensory signals), resulting in reinforcement learning (Seitz & Watanabe, 2003, 2005, 2009; Seitz et al., 2009; Seitz, Náñez, Holloway, Koyama, & Watanabe, 2005). This explanation could possibly account for our results, as internal rewards might have been present in our study (triggered by the "task-relevant" letters, presented simultaneously with the "task-irrelevant" background figures). On the one hand, it must be noted that internal rewards paired to the target stimuli might have been weaker in the Masked group than in the Inattention group, as task performance for masked figure detection was at chance level (as opposed to high performance on the 2-back letter task in the Inattention group), which would be in line with our results. On the other hand, Seitz, Náñez, Holloway, Tsushima, and Watanabe (2006) show that correct performance may not always generate internal reinforcement signals, and Seitz and Watanabe (2008) found that learning was greatest in the most difficult task condition, where performance was lowest. They suggest that reinforcement learning may not depend upon correct responses, but rather on target uncertainty (the more difficult target recognition is, the greater the reinforcement signal upon successful recognition).
In our case, there would be more target uncertainty in the Masked group than in the Inattention group (although there are fewer instances of successful recognition), and this should then have resulted in stronger learning for masked than for unattended stimuli: the opposite of our results. Also, learning has been shown to occur merely because of the temporal pairing of a target and a stimulus, such that task-driven and stimulus-driven signals coincide, irrespective of the task-relevancy of the stimulus (Seitz & Watanabe, 2005, 2008). This temporal pairing was equal for both groups in our experiment. Either way, working out the exact relation of our results to other established learning paradigms would require further study.

Task requirements for the Inattention and Masked groups during the learning phase were asymmetrical in terms of task-relevancy, which is potentially an essential difference: for the Inattention group the to-be-learned stimulus was task-irrelevant, whereas for the Masked group it was task-relevant. The fact that the Staircase task with performance feedback revealed a behavioral trend in the opposite direction for the Masked group (lower SOAs for the novel figure compared with the trained figure) might indicate that inhibition of learning took place in this group. This might have been caused by the fact that in the Masked group attention was directed toward the figure stimuli during the learning phase. However, attention is thought to suppress learning only when stimuli are distracting from the main task (or task-irrelevant; Roelfsema, Van Ooyen, & Watanabe, 2010; Choi & Watanabe, 2009; Tsushima & Watanabe, 2009; Tsushima, Seitz, & Watanabe, 2008). In our experiment, the masked figures are task-relevant; therefore, attention should, in general, enhance learning of these figures (Jessup & OʼDoherty, 2009; Gilbert et al., 2001; Ahissar & Hochstein, 1993; Shiu & Pashler, 1992). However, attention may not be a sufficient requirement for learning to occur (Ahissar & Hochstein, 1997), and there are multiple examples where explicitly attending to a task-relevant stimulus leads to decreased learning compared with implicit exposure. This seems to be the case in particular for complex stimuli that cannot be described with simple rules, such as in a recognition task with kaleidoscopic visual stimuli (Voss & Paller, 2009; Voss, Baym, & Paller, 2008), rule discovery in an artificial grammar task (Reber, 1976), and complex auditory stimuli (Vlahou, Protopapas, & Seitz, 2012; Seitz et al., 2010; Wade & Holt, 2005; see Reber, 1989, for a review).

Neural changes associated with task-irrelevant learning are largely unclear (Sasaki, Náñez, & Watanabe, 2009), and one could argue that this type of learning (and thus the learning reported in the current study) may not be wholly perceptual, but associated more with decision-making regions instead.
In general, some have suggested that visual training does not improve how sensory information is represented in the brain, but rather how this sensory representation is interpreted by the higher areas that guide behavior (Law & Gold, 2008; Dosher & Lu, 1998). However, as the task-irrelevant stimulus is actually dissociated from the decision process concerning the task, we believe that task-irrelevant perceptual learning really has a low-level basis (Seitz & Watanabe, 2009). Indeed, here we report neural learning effects recorded by (peri-)occipital electrodes. Further study (using a technique with higher spatial resolution) would be required to investigate the possibility that neural changes took place in (connections with) higher brain areas.

Location Specificity of Learning

Location-specific learning is often considered a hallmark of perceptual as opposed to other types of learning (Pourtois et al., 2008; Ofen et al., 2007; Mednick et al., 2002; Schwartz et al., 2002; Sowden, Rose, & Davies, 2002; Schoups et al., 1995; Sagi & Tanne, 1994; Karni & Sagi, 1991). Recently, however, location specificity of perceptual learning has become a subject of debate (Harris, Gliksberg, & Sagi, 2012; Le Dantec & Seitz, 2012; Zhang, Klein, Levi, & Yu, 2011). If perceptual learning in our experiment were location nonspecific, learning of one figure would transfer to the orthogonal (novel) figure, which would make the comparison of trained figure versus novel figure processing meaningless. Perhaps there was location-nonspecific learning, possibly even in the masked condition, that we did not pick up.

On the one hand, Harris et al. (2012) state that location-specific learning is a consequence of low-level sensory adaptation. They show that, as soon as trials are interleaved with dummy trials suppressing local adaptation, transfer of learning to other locations takes place. This happens for dummy trials containing textures oriented 45° away from the target textures, whereas dummy trials with an orientation 90° orthogonal to the targets have no such effect. In our study, presentation of textured target stimuli was interrupted during learning by "catch" and "filler" trials. These temporal interruptions were equal for both groups, and to induce learning at the level of figure-ground segregation that generalizes across orientations (instead of merely inducing learning at the level of orientation discrimination), we made sure that all stimuli (targets as well as "catch" and "filler" textures) consisted of line elements with the same set of orientations (30°, 60°, 120°, and 150°). However, the luminance properties of these interruptions differed (luminance-defined "catch" figures [both configurations A and B] in the Masked group, no-figure "filler" trials in the Inattention group). This may have caused location-invariant learning in the Masked group, which would be an alternative explanation for our finding that no learning effects were found in this group. On the other hand, according to Zhang et al. (2011), only location-nonspecific additional tasks (so-called "double training") can cause transfer to untrained locations. As our additional task (detection of "catch" trials) is location specific, according to Zhang et al. (2011) no transfer could have taken place. Transfer induced by double training has further been challenged by Le Dantec and Seitz (2012), who found that transfer during double training is incomplete. It is clear that transfer is controlled by many different factors and that further research is needed to know whether transfer could have occurred in our experimental setup.

The Role of Recurrent Processing in Learning

No learning occurred when the figures were masked yet still attended, whereas learning did occur for visible ("conscious") yet unattended stimuli. As masking blocks RP (late, figure-ground segregation-related activity; see Fahrenfort et al., 2007; Ro, Breitmeyer, Burton, Singhal, & Lane, 2003; Lamme, Zipser, & Spekreijse, 2002; Di Lollo, Enns, & Rensink, 2000; Bridgeman, 1980), while inattention does not, RP seems to be a requirement for learning. What are the differences between feedforward processing (early activity, propagating from lower to higher areas) and RP that could explain their differential involvement in perceptual learning? A potentially important difference is that


feedforward propagation is more strongly supported by AMPA receptors, whereas the prolonged recurrent activation of cells will more strongly trigger the activation of N-methyl-D-aspartate (NMDA)-type receptors, following Hebbian rules, and thus mediate synaptic plasticity. This has been shown in monkeys (Self, Kooijmans, Supèr, Lamme, & Roelfsema, 2012), which confirmed predictions from modeling studies (Dehaene, Sergent, & Changeux, 2003; Lumer, Edelman, & Tononi, 1997). The efficacy of NMDA receptors is enhanced if there is a concurrent bottom–up depolarization by sensory inputs, which gives top–down projections a mainly modulatory role (Dehaene et al., 2003). Thus, recurrent connections might gate perceptual learning by activating NMDA receptors (Roelfsema et al., 2010; Fino, Deniau, & Venance, 2009; Berardi, Pizzorusso, Ratto, & Maffei, 2003; Dinse, Ragert, Pleger, Schwenkreis, & Tegenthoff, 2003; Sheng & Kim, 2002; Mori & Mishina, 1995). Several other studies suggest a role for recurrent activity in synaptic plasticity as well: (Perceptual) learning involves establishing a task-specific activation of an appropriate subset of neurons by feedback connections (Gilbert & Sigman, 2007; Crist, Li, & Gilbert, 2001; Gilbert et al., 2001). Moreover, recurrent connections from higher to lower areas seem to underlie persistent use-dependent functional plasticity, whereas feedforward connections are thought to lose much of their plasticity as development proceeds (Gilbert, 1996; Singer, 1995). Also, evidence from lesion studies in rats indicates that feedback from other cortical regions is necessary for the consolidation of learning (Diamond, Petersen, & Harris, 1999; Cauller, 1995). Schäfer, Vasilaki, and Senn (2007) propose a model, consistent with biological data, showing that perceptual learning can modulate top–down input to V1 while feedforward (and lateral) pathways remain unaffected. 
Our claim that RP is needed for perceptual learning may be inconsistent with findings by Seitz et al. (2009), who show learning of a stimulus rendered unreportable by continuous flash suppression. Although the level of (recurrent) processing for the suppressed stimulus during continuous flash suppression is still unclear (Yang & Blake, 2012; Jiang & He, 2006), this would imply that, for these low-level orientation gratings, learning could take place at the level of feedforward processing. As orientation selectivity is thought to arise from feedforward processing (Hubel & Wiesel, 1968), this is plausible. In contrast, figure-ground segregation requires RP (Fahrenfort & Scholte, 2008; Scholte et al., 2008; Lamme & Roelfsema, 2000), which might explain the difference in learning requirements for orientation discrimination (Seitzʼs stimuli) versus figure detection (our stimuli). Instead of explaining the lack of learning effects in the Masked group in terms of (disruption of) RP, these results could also be explained in terms of erasure of iconic memory in this condition. Iconic memory relies on the persistence of neural signals in the visual system, which in turn depends on recurrent signals (Sligte et al., 2010;
Sligte, Scholte, & Lamme, 2008). It is known that masking disrupts RP (Boehler, Schoenfeld, Heinze, & Hopf, 2008; Fahrenfort et al., 2007; Lamme et al., 2002) and therefore erases iconic memory as well.

Effects of Performance Feedback

A surprising outcome of this experiment was that behavioral learning effects were only present when performance feedback was given on the Staircase task (although a neural learning effect was found before any behavioral tasks were conducted). This suggests that some sort of latent learning occurred, which only became apparent after feedback was provided. This finding stresses that the absence of performance enhancement after a learning task does not necessarily mean that the stimulus has not been learned; it only means that learning is not expressed in that particular task (see, e.g., Frensch, Wenke, & Rünger, 1999; Frensch, Lin, & Buchner, 1998). As we included neither a condition in which participants did not receive feedback on the second run of the Staircase task, nor a condition in which participants were not exposed to any figure stimuli on Day 1, we do not know for certain whether the observed behavioral effects are a direct result of the performance feedback. A more conservative interpretation would be that performance feedback led to a better understanding of the task itself, which enabled us to measure behavioral output more reliably, as participants were now executing the task correctly. This explanation fits with the "Eureka effect," in which participants suddenly learn to perform a difficult detection task after being presented with a single long-exposure stimulus (Ahissar & Hochstein, 1997, 2004) or an easy trial (Liu, Lu, & Dosher, 2010). Perhaps the performance feedback acted as top–down guidance of perception, facilitating neuronal pattern recognition. Since our behavioral learning effect seems to be latent, it would be interesting to remeasure neural responses after the behavioral learning effect has been revealed by the Staircase task with performance feedback, to see whether the behavioral manifestation of learning affects its neural manifestation.
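The article does not specify the Staircase task in code; as a general illustration of how such an adaptive procedure tracks a threshold from trial-by-trial feedback, here is a minimal sketch of a standard 2-down-1-up staircase (the function names and parameters are hypothetical, not the authors' implementation).

```python
def staircase_run(respond, start=1.0, step=0.1, n_trials=40):
    """Minimal 2-down-1-up staircase: stimulus intensity decreases after two
    consecutive correct responses and increases after every error, converging
    near the 70.7%-correct threshold."""
    intensity, streak = start, 0
    for _ in range(n_trials):
        if respond(intensity):       # feedback: was the response correct?
            streak += 1
            if streak == 2:          # two correct in a row -> make task harder
                intensity = max(intensity - step, 0.0)
                streak = 0
        else:                        # error -> make task easier
            intensity += step
            streak = 0
    return intensity

# A simulated observer who is always correct drives the intensity down:
easy = staircase_run(lambda i: True)
# One who always fails drives it up:
hard = staircase_run(lambda i: False)
```

Because the procedure adapts to the participant's responses, feedback in such a task does double duty: it drives the threshold estimate and, as argued above, may itself teach participants what to respond to.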
We will do so in our next study, and we expect to find that the behavioral expression of latent learning effects will enhance neural learning effects.

Conclusions

In summary, our results are in line with studies showing location-specific perceptual learning effects for figure-ground stimuli (Censor et al., 2006, 2009; Pourtois et al., 2008; Ofen et al., 2007; Casco et al., 2004; Schwartz et al., 2002; Sagi & Tanne, 1994; Karni & Sagi, 1991) and with studies showing "task-irrelevant" perceptual learning (Seitz et al., 2009; Seitz & Watanabe, 2003; Watanabe et al., 2001; for a review, see Choi & Watanabe, 2009). However, as we only found perceptual learning during inattention, but
not for masked stimuli to which attention was directed, we also show that different kinds of unreportability (perceptual or attentional blindness) can lead to different neural and behavioral outcomes. These results should, however, be interpreted with caution, as the two groups differed in the task relevance of the trained stimulus. Remarkably, behavioral learning effects (for the Inattention group) were only present when performance feedback was given on the Staircase task. This suggests that some sort of latent learning trace may have been formed that is only accessed once feedback is provided. Further study of this phenomenon of "latent learning" seems warranted.

Acknowledgments

This study was supported by an advanced investigator grant from the European Research Council to V. A. F. Lamme. We would like to thank all volunteers for their participation.

Reprint requests should be sent to Julia D. I. Meuwese, Department of Psychology, University of Amsterdam, Weesperplein 4, 1018 XA Amsterdam, The Netherlands, or via e-mail: [email protected].

REFERENCES

Ahissar, M., & Hochstein, S. (1993). Attentional control of early perceptual learning. Proceedings of the National Academy of Sciences, U.S.A., 90, 5718–5722.
Ahissar, M., & Hochstein, S. (1997). Task difficulty and the specificity of perceptual learning. Nature, 387, 401–406.
Ahissar, M., & Hochstein, S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences, 8, 457–464.
Berardi, N., Pizzorusso, T., Ratto, G. M., & Maffei, L. (2003). Molecular basis of plasticity in the visual cortex. Trends in Neurosciences, 26, 369–378.
Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30, 481–548.
Block, N. (2011). Perceptual consciousness overflows cognitive access. Trends in Cognitive Sciences, 15, 567–575.
Boehler, C. N., Schoenfeld, M. A., Heinze, H.-J., & Hopf, J.-M. (2008). Rapid recurrent processing gates awareness in primary visual cortex. Proceedings of the National Academy of Sciences, U.S.A., 105, 8742–8747.
Bridgeman, B. (1980). Temporal response characteristics of cells in monkey striate cortex measured with metacontrast masking and brightness discrimination. Brain Research, 196, 347–364.
Capa, R. L., Cleeremans, A., Bustin, G. M., & Hansenne, M. (2011). Long-lasting effect of subliminal processes on cardiovascular responses and performance. International Journal of Psychophysiology, 81, 22–30.
Caputo, G., & Casco, C. (1999). A visual evoked potential correlate of global figure-ground segmentation. Vision Research, 39, 1597–1610.
Casco, C., Campana, G., Grieco, A., & Fuggetta, G. (2004). Perceptual learning modulates electrophysiological and psychophysical response to visual texture segmentation in humans. Neuroscience Letters, 371, 18–23.
Cauller, L. (1995). Layer I of primary sensory neocortex: Where top–down converges upon bottom–up. Behavioural Brain Research, 71, 163–170.

Censor, N., Bonneh, Y., Arieli, A., & Sagi, D. (2009). Early-vision brain responses which predict human visual segmentation and learning. Journal of Vision, 9, 1–9.
Censor, N., Karni, A., & Sagi, D. (2006). A link between perceptual learning, adaptation and sleep. Vision Research, 46, 4071–4074.
Choi, H., & Watanabe, T. (2009). Selectiveness of the exposure-based perceptual learning: What to learn and what not to learn. Learning & Perception, 1, 89–98.
Chun, M. M., & Wolfe, J. M. (2001). Visual attention. In B. Goldstein (Ed.), Blackwell handbook of perception (pp. 272–310). Oxford, UK: Blackwell Publishers Ltd.
Clark, R. E. (1998). Classical conditioning and brain systems: The role of awareness. Science, 280, 77–81.
Clark, R. E., Manns, J. R., & Squire, L. R. (2001). Trace and delay eyeblink conditioning: Contrasting phenomena of declarative and nondeclarative memory. Psychological Science, 12, 304–308.
Cohen, M. A., & Dennett, D. C. (2011). Consciousness cannot be separated from function. Trends in Cognitive Sciences, 15, 358–364.
Crist, R. E., Li, W., & Gilbert, C. D. (2001). Learning to see: Experience and attention in primary visual cortex. Nature Neuroscience, 4, 519–525.
Custers, R., & Aarts, H. (2011). Learning of predictive relations between events depends on attention, not on awareness. Consciousness and Cognition, 20, 368–378.
De Brigard, F., & Prinz, J. (2010). Attention and consciousness. Wiley Interdisciplinary Reviews: Cognitive Science, 1, 51–59.
Dehaene, S., Naccache, L., Cohen, L., Le Bihan, D., Mangin, J.-F., Poline, J.-B., et al. (2001). Cerebral mechanisms of word masking and unconscious repetition priming. Nature Neuroscience, 4, 752–758.
Dehaene, S., Sergent, C., & Changeux, J.-P. (2003). A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proceedings of the National Academy of Sciences, U.S.A., 100, 8520–8525.
Di Lollo, V., Enns, J. T., & Rensink, R. A. (2000). Competition for consciousness among visual events: The psychophysics of reentrant visual processes. Journal of Experimental Psychology: General, 129, 481–507.
Diamond, M. E., Petersen, R. S., & Harris, J. A. (1999). Learning through maps: Functional significance of topographic organization in primary sensory cortex. Journal of Neurobiology, 41, 64–68.
Dickinson, C. A., & Intraub, H. (2009). Spatial asymmetries in viewing and remembering scenes: Consequences of an attentional bias? Attention, Perception & Psychophysics, 71, 1251–1262.
Dinse, H. R., Ragert, P., Pleger, B., Schwenkreis, P., & Tegenthoff, M. (2003). Pharmacological modulation of perceptual learning and associated cortical reorganization. Science, 301, 91–94.
Dosher, B. A., & Lu, Z. L. (1998). Perceptual learning reflects external noise filtering and internal noise reduction through channel reweighting. Proceedings of the National Academy of Sciences, U.S.A., 95, 13988–13993.
Enns, J., & Di Lollo, V. (2000). Whatʼs new in visual masking? Trends in Cognitive Sciences, 4, 345–352.
Fahrenfort, J. J., & Scholte, H. S. (2008). The spatiotemporal profile of cortical processing leading up to visual perception. Journal of Vision, 8, 1–12.
Fahrenfort, J. J., Scholte, H. S., & Lamme, V. A. F. (2007). Masking disrupts reentrant processing in human visual cortex. Journal of Cognitive Neuroscience, 19, 1488–1497.


Fino, E., Deniau, J.-M., & Venance, L. (2009). Brief subthreshold events can act as Hebbian signals for long-term plasticity. PLoS One, 4, e6557.
Frensch, P. A., Lin, J., & Buchner, A. (1998). Learning versus behavioral expression of the learned: The effects of a secondary tone-counting task on implicit learning in the serial reaction task. Psychological Research, 61, 83–98.
Frensch, P. A., Wenke, D., & Rünger, D. (1999). A secondary tone-counting task suppresses expression of knowledge in the serial reaction task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 260–274.
Gibson, E. J. (1969). Principles of perceptual learning and development (p. 3). New York: Appleton Century Crofts.
Gilbert, C. D. (1996). Plasticity in visual perception and physiology. Current Opinion in Neurobiology, 6, 269–274.
Gilbert, C. D., & Sigman, M. (2007). Brain states: Top–down influences in sensory processing. Neuron, 54, 677–696.
Gilbert, C. D., Sigman, M., & Crist, R. E. (2001). The neural basis of perceptual learning. Neuron, 31, 681–697.
Gutnisky, D. A., Hansen, B. J., Iliescu, B. F., & Dragoi, V. (2009). Attention alters visual plasticity during exposure-based learning. Current Biology, 19, 555–560.
Harris, H., Gliksberg, M., & Sagi, D. (2012). Generalized perceptual learning in the absence of sensory adaptation. Current Biology, 22, 1813–1817.
Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 195, 215–243.
Hyvarinen, A., Karhunen, J., & Oja, E. (2001). Independent component analysis. New York: Wiley.
Jeneson, A., Kirwan, C. B., & Squire, L. R. (2010). Recognition without awareness: An elusive phenomenon. Learning & Memory, 17, 454–459.
Jessup, R. K., & OʼDoherty, J. P. (2009). It was nice not seeing you: Perceptual learning with rewards in the absence of awareness. Neuron, 61, 649–650.
Jiang, Y., & He, S. (2006). Cortical responses to invisible faces: Dissociating subsystems for facial-information processing. Current Biology, 16, 2023–2029.
Kanai, R., Walsh, V., & Tseng, C. (2010). Subjective discriminability of invisibility: A framework for distinguishing perceptual and attentional failures of awareness. Consciousness and Cognition, 19, 1045–1057.
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences, U.S.A., 88, 4966–4970.
Knight, D. C., Nguyen, H. T., & Bandettini, P. A. (2003). Expression of conditional fear with and without awareness. Proceedings of the National Academy of Sciences, U.S.A., 100, 15280–15283.
Koch, C., & Tsuchiya, N. (2007). Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16–22.
Koch, C., & Tsuchiya, N. (2011). Attention and consciousness: Related yet different. Trends in Cognitive Sciences, 16, 103–105.
Koivisto, M., Revonsuo, A., & Lehtonen, M. (2006). Independence of visual awareness from the scope of attention: An electrophysiological study. Cerebral Cortex, 16, 415–424.
Kouider, S., De Gardelle, V., Sackur, J., & Dupoux, E. (2010). How rich is consciousness? The partial awareness hypothesis. Trends in Cognitive Sciences, 14, 301–307.
Lamme, V. A. F. (2003). Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12–18.
Lamme, V. A. F. (2006). Towards a true neural stance on consciousness. Trends in Cognitive Sciences, 10, 494–501.


Lamme, V. A. F. (2010). How neuroscience will change our view on consciousness. Cognitive Neuroscience, 1, 204–240.
Lamme, V. A. F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23, 571–579.
Lamme, V. A. F., Van Dijk, B. W., & Spekreijse, H. (1992). Texture segregation is processed by primary visual cortex in man and monkey. Evidence from VEP experiments. Vision Research, 32, 797–807.
Lamme, V. A. F., Zipser, K., & Spekreijse, H. (2002). Masking interrupts figure-ground signals in V1. Journal of Cognitive Neuroscience, 14, 1044–1053.
Law, C.-T., & Gold, J. I. (2008). Neural correlates of perceptual learning in a sensory-motor, but not a sensory, cortical area. Nature Neuroscience, 11, 505–513.
Le Dantec, C. C., & Seitz, A. R. (2012). High resolution, high capacity, spatial specificity in perceptual learning. Frontiers in Psychology, 3, 1–7.
Leclercq, V., & Seitz, A. R. (2012). Fast-TIPL occurs for salient images without a memorization requirement in men but not in women. PLoS One, 7, e36228.
Liu, J., Lu, Z.-L., & Dosher, B. A. (2010). Augmented Hebbian reweighting: Interactions between feedback and training accuracy in perceptual learning. Journal of Vision, 10, 1–14.
Lumer, E. D., Edelman, G. M., & Tononi, G. (1997). Neural dynamics in a model of the thalamocortical system. I. Layers, loops and the emergence of fast synchronous rhythms. Cerebral Cortex, 7, 207–227.
Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.
Manns, J. R., & Squire, L. R. (2001). Perceptual learning, awareness, and the hippocampus. Hippocampus, 11, 776–782.
Mednick, S. C., Nakayama, K., Cantero, J. L., Atienza, M., Levin, A. A., Pathak, N., et al. (2002). The restorative effect of naps on perceptual deterioration. Nature Neuroscience, 5, 677–681.
Merikle, P. M., & Joordens, S. (1997). Parallels between perception without attention and perception without awareness. Consciousness and Cognition, 6, 219–236.
Moore, C. M., & Egeth, H. (1997). Perception without attention: Evidence of grouping under conditions of inattention. Journal of Experimental Psychology: Human Perception and Performance, 23, 339–352.
Mori, H., & Mishina, M. (1995). Structure and function of the NMDA channel. Neuropharmacology, 34, 1219–1237.
Naccache, L., Blandin, E., & Dehaene, S. (2002). Unconscious masked priming depends on temporal attention. Psychological Science, 13, 416–424.
Nicholls, M. E., Bradshaw, J. L., & Mattingley, J. B. (1999). Free-viewing perceptual asymmetries for the judgement of brightness, numerosity and size. Neuropsychologia, 37, 307–314.
Ofen, N., Moran, A., & Sagi, D. (2007). Effects of trial repetition in texture discrimination. Vision Research, 47, 1094–1102.
OʼRegan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24, 939–973.
Posner, M. I. (1994). Attention: The mechanisms of consciousness. Proceedings of the National Academy of Sciences, U.S.A., 91, 7398–7403.
Pourtois, G., Rauss, K. S., Vuilleumier, P., & Schwartz, S. (2008). Effects of perceptual learning on primary visual cortex activity in humans. Vision Research, 48, 55–62.


Prinz, J. J. (2010). When is perception conscious? In B. Nanay (Ed.), Perceiving the world: New essays on perception. New York: Oxford University Press.
Railo, H., Koivisto, M., & Revonsuo, A. (2011). Tracking the processes behind conscious perception: A review of event-related potential correlates of visual consciousness. Consciousness and Cognition, 20, 972–983.
Reber, A. S. (1976). Implicit learning of synthetic languages: The role of instructional set. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2, 88–94.
Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology: General, 118, 219–235.
Rees, G., Russell, C., Frith, C. D., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science, 286, 2504–2507.
Ro, T., Breitmeyer, B., Burton, P., Singhal, N. S., & Lane, D. (2003). Feedback contributions to visual awareness in human occipital cortex. Current Biology, 13, 1038–1043.
Rock, I., Linnett, C. M., Grant, P., & Mack, A. (1992). Perception without attention: Results of a new method. Cognitive Psychology, 24, 502–534.
Roelfsema, P. R., Van Ooyen, A., & Watanabe, T. (2010). Perceptual learning rules based on reinforcers and attention. Trends in Cognitive Sciences, 14, 64–71.
Sagi, D., & Tanne, D. (1994). Perceptual learning: Learning to see. Current Opinion in Neurobiology, 4, 195–199.
Sasaki, Y., Náñez, J. E., & Watanabe, T. (2009). Advances in visual perceptual learning and plasticity. Nature Reviews Neuroscience, 11, 53–60.
Schäfer, R., Vasilaki, E., & Senn, W. (2007). Perceptual learning via modification of cortical top–down signals. PLoS Computational Biology, 3, e165.
Scholte, H. S., Jolij, J., Fahrenfort, J. J., & Lamme, V. A. F. (2008). Feedforward and recurrent processing in scene segmentation: Electroencephalography and functional magnetic resonance imaging. Journal of Cognitive Neuroscience, 20, 2097–2109.
Scholte, H. S., Witteveen, S. C., Spekreijse, H., & Lamme, V. A. F. (2006). The influence of inattention on the neural correlates of scene segmentation. Brain Research, 1076, 106–115.
Schoups, A., Vogels, R., & Orban, G. (1995). Human perceptual learning in identifying the oblique orientation: Retinotopy, orientation specificity and monocularity. Journal of Physiology, 483, 797–810.
Schwartz, S., Maquet, P., & Frith, C. (2002). Neural correlates of perceptual learning: A functional MRI study of visual texture discrimination. Proceedings of the National Academy of Sciences, U.S.A., 99, 17137–17142.
Seitz, A. R., Kim, D., & Watanabe, T. (2009). Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700–707.
Seitz, A. R., Náñez, J. E., Holloway, S. R., Koyama, S., & Watanabe, T. (2005). Seeing what is not there shows the costs of perceptual learning. Proceedings of the National Academy of Sciences, U.S.A., 102, 9080–9085.
Seitz, A. R., Náñez, J. E., Holloway, S. R., Tsushima, Y., & Watanabe, T. (2006). Two cases requiring external reinforcement in perceptual learning. Journal of Vision, 6, 966–973.
Seitz, A. R., Protopapas, A., Tsushima, Y., Vlahou, E. L., Gori, S., Grossberg, S., et al. (2010). Unattended exposure to components of speech sounds yields same benefits as explicit auditory training. Cognition, 115, 435–443.
Seitz, A. R., & Watanabe, T. (2003). Is subliminal learning really passive? Nature, 422, 36.
Seitz, A. R., & Watanabe, T. (2005). A unified model for perceptual learning. Trends in Cognitive Sciences, 9, 329–334.

Seitz, A. R., & Watanabe, T. (2008). Is task-irrelevant learning really task-irrelevant? PLoS One, 3, e3792.
Seitz, A. R., & Watanabe, T. (2009). The phenomenon of task-irrelevant perceptual learning. Vision Research, 49, 2604–2610.
Self, M. W., Kooijmans, R. N., Supèr, H., Lamme, V. A. F., & Roelfsema, P. R. (2012). Different glutamate receptors convey feedforward and recurrent processing in macaque V1. Proceedings of the National Academy of Sciences, U.S.A., 109, 11031–11036.
Sheng, M., & Kim, M. J. (2002). Postsynaptic signaling and plasticity mechanisms. Science, 298, 776–780.
Shiu, L. P., & Pashler, H. (1992). Improvement in line orientation discrimination is retinally local but dependent on cognitive set. Perception & Psychophysics, 52, 582–588.
Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059–1074.
Simons, D. J., & Levin, D. T. (1997). Change blindness. Trends in Cognitive Sciences, 1, 261–267.
Singer, W. (1995). Development and plasticity of cortical processing architectures. Science, 270, 758–764.
Sligte, I. G., Scholte, H. S., & Lamme, V. A. F. (2008). Are there multiple visual short-term memory stores? PLoS One, 3, e1699.
Sligte, I. G., Vandenbroucke, A. R. E., Scholte, H. S., & Lamme, V. A. F. (2010). Detailed sensory memory, sloppy working memory. Frontiers in Psychology, 1, 1–10.
Sowden, P. T., Rose, D., & Davies, I. R. L. (2002). Perceptual learning of luminance contrast detection: Specific for spatial frequency and retinal location but not orientation. Vision Research, 42, 1249–1258.
Supèr, H., Spekreijse, H., & Lamme, V. A. F. (2001). Two distinct modes of sensory processing observed in monkey primary visual cortex (V1). Nature Neuroscience, 4, 304–310.
Tsushima, Y., Seitz, A. R., & Watanabe, T. (2008). Task-irrelevant learning occurs only when the irrelevant feature is weak. Current Biology, 18, R516–R517.
Tsushima, Y., & Watanabe, T. (2009). Roles of attention in perceptual learning from perspectives of psychophysics and animal learning. Learning & Behavior, 37, 126–132.
Van Boxtel, J. J. A., Tsuchiya, N., & Koch, C. (2010a). Consciousness and attention: On sufficiency and necessity. Frontiers in Psychology, 1, 1–13.
Van Boxtel, J. J. A., Tsuchiya, N., & Koch, C. (2010b). Opposing effects of attention and consciousness on afterimages. Proceedings of the National Academy of Sciences, U.S.A., 107, 8883–8888.
Van Gaal, S., De Lange, F. P., & Cohen, M. X. (2012). The role of consciousness in cognitive control and decision making. Frontiers in Human Neuroscience, 6, 121.
Vandenbroucke, A. R. E., Fahrenfort, J. J., & Lamme, V. A. F. (2010). Seeing without knowing: Qualia are present during inattentional blindness. Poster presented at ASSC 15.
Vandenbroucke, A. R. E., Sligte, I. G., Fahrenfort, J. J., Ambroziak, K. B., & Lamme, V. A. F. (2012). Non-attended representations are perceptual rather than unconscious in nature. PLoS One, 7, e50042.
Vandenbroucke, A. R. E., Sligte, I. G., & Lamme, V. A. F. (2011). Manipulations of attention dissociate fragile visual short-term memory from visual working memory. Neuropsychologia, 49, 1559–1568.
Vlahou, E. L., Protopapas, A., & Seitz, A. R. (2012). Implicit training of nonnative speech stimuli. 141, 363–381.
Voss, J. L., Baym, C. L., & Paller, K. A. (2008). Accurate forced-choice recognition without awareness of memory retrieval. Learning & Memory, 15, 454–459.


Voss, J. L., & Paller, K. A. (2009). An electrophysiological signature of unconscious recognition memory. Nature Neuroscience, 12, 349–355.
Vuilleumier, P., Sagiv, N., Hazeltine, E., Poldrack, R. A., Swick, D., Rafal, R. D., et al. (2001). Neural fate of seen and unseen faces in visuospatial neglect: A combined event-related functional MRI and event-related potential study. Proceedings of the National Academy of Sciences, U.S.A., 98, 3495–3500.
Wade, T., & Holt, L. L. (2005). Incidental categorization of spectrally complex non-invariant auditory stimuli in a computer game task. The Journal of the Acoustical Society of America, 118, 2618.
Watanabe, M., Cheng, K., Murayama, Y., Ueno, K., Asamizuya, T., Tanaka, K., et al. (2011). Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 334, 829–831.
Watanabe, T., Náñez, J. E., & Sasaki, Y. (2001). Perceptual learning without perception. Nature, 413, 844–848.
Wyart, V., & Tallon-Baudry, C. (2008). Neural dissociation between visual awareness and spatial attention. The Journal of Neuroscience, 28, 2667–2679.
Yang, E., & Blake, R. (2012). Deconstructing continuous flash suppression. Journal of Vision, 12, 1–14.
Zhang, J.-Y., Klein, S., Levi, D., & Yu, C. (2011). Perceptual learning transfers to untrained retinal locations after double training: A piggyback effect. VSS 2011 Abstracts (p. 42).
Zipser, K., Lamme, V. A. F., & Schiller, P. H. (1996). Contextual modulation in primary visual cortex. The Journal of Neuroscience, 16, 7376–7389.
