Journal of Experimental Psychology: Animal Behavior Processes, 1989, Vol. 15, No. 2, 124-136
Copyright 1989 by the American Psychological Association, Inc. 0097-7403/89/$00.75
Generalization of Visual Matching by a Bottlenosed Dolphin (Tursiops truncatus): Evidence for Invariance of Cognitive Performance with Visual and Auditory Materials

Louis M. Herman, University of Hawaii
John R. Hovancik, Seton Hall University
John D. Gory, University of Hawaii
Gary L. Bradshaw, University of Colorado
Generalization of a visual matching-to-sample rule was shown for the first time in a bottlenosed dolphin (Tursiops truncatus), normally considered an auditory specialist. The visual items used were all real-world objects. Some objects had acoustic names in an artificial acoustic language taught to the dolphin named Phoenix. Other objects were unnamed but familiar to Phoenix, and still others were objects entirely new to her experience. In Experiments 1 and 2, we demonstrated Phoenix's ability to match these objects, from among two alternative comparison objects, at levels of 87% correct responses or better, after 0-s delay. In Experiment 3, Phoenix's matches of familiar and of new objects were better than 94% correct through delays of 30 s and were 73% correct after a delay of 80 s. In Experiment 4, performance was nearly equivalent for statically displayed and dynamically displayed sample objects. Over the four experiments, Phoenix matched 16 of 18 objects successfully on the first trial that they appeared as samples. From these and other recent findings, it appears that bottlenosed dolphins are capable of carrying out both visual- and auditory-based complex cognitive tasks approximately equally well, a finding at variance with earlier notions of sensory modality limitations in cognitive performance of animals.
Preparation of this report was supported by Contract No. N00014-85-K-0210 from the Office of Naval Research and Grant BNS8109653 from the National Science Foundation. Additional support was from Earthwatch. At the time of these studies John R. Hovancik and Gary L. Bradshaw were postdoctoral research associates at the University of Hawaii. We thank Tony Wright for many helpful suggestions on an earlier version of this article. Correspondence concerning this article should be addressed to Louis M. Herman, Kewalo Basin Marine Mammal Laboratory, University of Hawaii, 1129 Ala Moana Blvd., Honolulu, Hawaii 96814.

Animals may interact with their world through a variety of sensory modes, yet it often appears that many cognitive capacities are modality limited. The concept of modality limitation, often called modality specificity, holds that advanced cognitive skills, or even simple ones, may be largely restricted to the dominant sensory modality of the species. For example, monkeys may solve visual but not auditory tasks easily (D'Amato, 1973; D'Amato & Salmon, 1982; Thompson, 1981), whereas the reverse may hold for dolphins (Herman, 1980). These modality-related differences are not traceable to raw sensory limitations. Monkeys have well-developed hearing abilities (Fobes & King, 1982) and dolphins have a highly evolved visual system (Dawson, 1980; Herman, Peacock, Yunker, & Madsen, 1975). Recent work has begun to dispel or reshape some of the notions about modality specificity. D'Amato and Colombo (1985) and Colombo and D'Amato (1986) reported on the ability of capuchin monkeys (Cebus apella) to learn auditory delayed matching-to-sample (DMTS) tasks, and similar findings were reported by Shyan, Wright, Cook, and Jitsumori (1987) for rhesus monkeys (Macaca mulatta) in a same-different task. DMTS tasks are directed toward an analysis of short-term memory capacities and processes, particularly the ability to encode, update, and maintain new items in working memory (Honig, 1978). Typically, the animal subject is briefly presented with a "sample" stimulus. After a delay, measured in seconds or minutes, the subject is offered two or more comparison stimuli, one of which matches the sample. A response to the matching comparison stimulus is correct. The stimuli are usually either visual or auditory materials or some combination (see Herman & Forestell, 1985, for discussion of possible resulting stimulus codes). The findings of the cited studies with capuchin and rhesus monkeys were that auditory DMTS was attained and that the matching concept transferred to new auditory stimuli. The transfer results showed that the monkeys had learned a general rule for matching, rather than only a stimulus-specific rule. Colombo and D'Amato (1986) reported that the overall performance of the monkeys with auditory materials was inferior to that achieved with comparable visual materials, prompting these authors to suggest that "modality asymmetry" was a better descriptor of monkey auditory and visual capabilities in these tasks than was "modality specificity." The bottlenosed dolphin, long regarded as an acoustic specialist (Herman, 1980; Popper, 1980), has recently been shown capable of high levels of visually based performance. Herman, Richards, and Wolz (1984) reported that a bottlenosed dolphin named Akeakamai learned to interpret the gestures of a trainer's arms and hands as semantic references to real-world objects, object locations, and actions to be taken
to objects. Sequences of gestures comprising "sentences" with structural rules were understood reliably. Forestell and Herman (1988) reported that another bottlenosed dolphin named Puka was able to solve a visual delayed matching-to-sample problem. In that work, the problem was solved by associating unique sounds with each of a pair of visual targets, allowing the dolphin to solve the matching problem through its auditory system. Later, when the sounds were delayed, delayed matching performance was maintained with the visual materials alone, although the limits of the delay achieved were considerably shorter than those attained with another dolphin (Kea) tested on auditory DMTS (Herman, 1975; Herman & Gordon, 1974). These results give some support to the notion of modality asymmetry in dolphins. The subsequent loss of Puka precluded Forestell and Herman from testing for generalization of the matching rule to new visual materials. We now report on a series of studies of the ability of a bottlenosed dolphin named Phoenix, trained previously for receptive competency within an artificial acoustic language (Herman et al., 1984), to form a generalized matching rule with visual materials. In these studies we (a) examined the ability of the dolphin to substitute the visual referent of an acoustic symbol for the symbol itself, within a languagelike task that, in effect, was an analog to a visual matching procedure; (b) compared abilities to visually match objects having names in the dolphin's acoustic language with abilities to match objects that were familiar but unnamed and objects that were totally novel to the dolphin's experience; (c) tested for the ability to sustain the memory for a visual object in delayed matching tests; and (d) examined a visual factor that might influence the dolphin's matching abilities. 
The results of the studies contribute to a concept of modality equivalence in bottlenosed dolphins for the processing of cognitively demanding tasks through the visual and auditory sensory systems.
Experiment 1

In the earlier work of Herman et al. (1984), Phoenix had learned to interpret underwater sounds as semantic references to objects in her tank, to object locations, and to actions that might be taken on objects. Phoenix's language system was the acoustic analog of the gestural language taught to Akeakamai. In the current work, Phoenix was tested for her ability to interpret a shown object, instead of a sound, as a reference to a like object floating in her tank. In concept, this tested whether the referent of an acoustic symbol could be substituted for the symbol itself in directing behavior toward a particular object in the tank. In practice, the procedure was similar to an identity matching-to-sample (MTS) task. In Phoenix's acoustic language, sequences of whistlelike sounds generated by a computer system comprised instructional strings ("imperative sentences") directing her to take named actions relative to named objects and their named modifiers. The simplest sentences were two words in length and were constructed syntactically as object name + action name. For example, if sounds signifying "frisbee" and "over" were inserted into the indicated syntactic frame to produce
the sequence FRISBEE + OVER, Phoenix was directed to swim to a frisbee floating in her tank and jump over it.¹ The same syntactic frame was used for all two-word sentences. During tests reported earlier (Herman et al., 1984) Phoenix was approximately 94% correct in her responses to 82 different two-word instructions, each given once, a result confirmed in additional tests completed just before the start of the present studies. In these present studies, when replacing the object name by its literal referent, a proper two-word instruction was now framed as displayed object + action name. The combination of the visually displayed object plus the immediately following acoustic name of the required action created a mixed-modal instruction.
Method

Subject

Phoenix was an adolescent female bottlenosed dolphin (Tursiops truncatus) of approximately 8 years of age at the beginning of this study. She was housed together with Akeakamai in an outdoor enclosure consisting of two 15.2-m-diameter circular concrete seawater tanks of approximately 6 ft (1.7 m) in depth joined by a 2-m wide × 3-m long channel. The tank walls rose approximately 0.9 m above the deck level. Trainers were able to lean over the wall and interact with the dolphins by standing on a low, portable platform placed on the deck. Phoenix's laboratory experience consisted of approximately 5 years of training in the artificial acoustic language (Herman, 1980, 1986, 1987a; Herman et al., 1984). Additionally, Phoenix responded reliably to nine gestural signs used to indicate actions in the artificial gestural language taught to Akeakamai (e.g., OVER, UNDER, THROUGH). She also responded reliably to approximately 30 other gestural signs controlling nonlanguage motor behaviors (e.g., "backdive," "turn over," etc.). However, she had no previous exposure to gestures or other visual items as symbols for objects. Phoenix was fed a full ration of approximately 8.5 kg of silver smelt (Osmerus sp.) during her twice-daily training sessions.
Apparatus

Training and testing were conducted at tankside in one of the home tanks. There was a central location at which the trainer stood (the training station) and from which a sample object was displayed, as well as two lateral locations (one to each side of the trainer at approximately 2.2-m distance) at which comparison objects were displayed. All of the objects were familiar to the dolphin, and all had names in her acoustic language. Sounds were generated by a microcomputer system controlled from a remote keyboard located in an observation tower at tankside. The sounds were broadcast into the dolphin's tank through a Type J9 underwater sound projector.
¹ Words in all capital letters indicate the vocabulary items in Phoenix's acoustic language. Physically, they are mainly whistlelike in character and bear no resemblance to how they are pronounced in English. Names of objects used in subsequent matching tests, and which are not vocabulary items within Phoenix's acoustic language, are shown in italics.
Procedure

The experiment took place over a 4-month period, with some interruptions. Generally, there were two testing sessions daily, each of 16-20 trials. A trial consisted sequentially of (a) a brief display of a sample object, (b) an underwater sound denoting an action to be taken to that object, and (c) a display of two comparison objects, one of which matched the sample. Phoenix was required to swim to the comparison object matching the sample and to execute the indicated action. This requirement for performance of an acoustically indicated action to the match of a visually displayed object allowed for an assessment of multimodal receptive abilities in a dolphin. The requirement for a specific action response was deleted after Experiment 2. If Phoenix performed the indicated action to the correct comparison object, she heard a three-"word" acoustic string (glossed as YES PHOENIX FISH). She then returned to her station for a fish reward and social interaction with the trainer. If Phoenix was incorrect, she heard only a sound glossed as PHOENIX and returned to her station without reward. All of the sounds were whistlelike in character. Intertrial intervals averaged approximately 20 s in duration. The sample and comparison objects included a large and small plastic hoop (HOOP), a frisbee (FRISBEE), a short length of plastic pipe (PIPE), and a plastic laundry basket (BASKET). The action sounds used commanded the dolphin either to leap over the object (OVER), swim under it (UNDER), toss it with her rostrum (TOSS), touch it with her tail (TAIL-TOUCH), squirt water at it (SPIT), touch it with her pectoral fin (PEC-TOUCH), or place her mouth about it (MOUTH). Appropriate responses to these acoustic commands had been established in earlier training (Herman et al., 1984). Each trial began with Phoenix at her station.
On a verbal cue from the session supervisor located in the observation tower, the trainer reached behind her with her right hand to receive a sample object from an assistant. On receiving the object, the trainer brought it into view of the dolphin by passing it in an aerial arc above the dolphin's head with her right hand and then returning it behind her back with her left hand, where it was retrieved by the assistant. The entire sample presentation time took approximately 3 s. As soon as the sample disappeared from the dolphin's view, the supervisor played one of the action sounds, using his keyboard and referring to a predetermined list of action sounds. At the same time, two additional assistants, stationed one each at two lateral locations beside the trainer, gently placed into the water (to minimize any acoustic cue) two comparison objects, one of which matched the sample. The comparison objects were allowed to float freely until the dolphin responded and were then retrieved by the assistants. Because the comparison objects were placed in the tank immediately after the sample was removed, the procedure is formally identified as zero-delayed matching. Several controls guarded against inadvertent cuing of the dolphin. At the start of each trial, the trainer donned a pair of opaque goggles to guard against eye-gaze cues that might direct Phoenix's behavior. The goggles were removed only on instruction from the session supervisor and after Phoenix had completed her response. The trainer's assistant sat on a low stool behind the trainer, out of sight of the dolphin, and provided objects to the trainer according to the dictates of a written list. The trainer had no access to the list and no knowledge of which object would be handed her. Each of the assistants at the lateral locations had a list indicating which comparison object to present at each trial. The assistants faced away from the trainer and had no knowledge of which sample had been shown to the dolphin.
Training. Training took place in four stages:

1. Phoenix was familiarized with the task by being required to respond to a single comparison object floating in her tank after being shown an identical sample object and hearing an action sound. One of three objects (HOOP, PIPE, or FRISBEE) was used as sample and comparison object during 10 probe trials interspersed among 16 other trials in which objects were acoustically named, but not shown, as in the usual acoustic language training. Phoenix swam to the indicated comparison object on all 10 probe trials and executed the indicated action command correctly on 6 of the trials.

2. Two comparison objects were now placed in the tank, and Phoenix was required to carry out the indicated action to the one that matched the subsequently displayed sample. A total of 72 trials were given over a 6-day period. The sample objects were HOOP or FRISBEE. The comparison objects were either both of these (during the first 4 days) or one of these plus either PIPE, BASKET, or NET. Because the two comparison objects were available at the time the sample was displayed, the procedure was formally identical to a simultaneous matching-to-sample task. During the first 2 days (21 trials), Phoenix made only four errors in her object choice (81.0% correct responses, p < .003, summed binomial test), including one error in nine trials on the first day. Hence, she spontaneously used the sample object in place of the acoustic symbol for selecting a comparison object. There were six action errors over the 21 trials, among a choice of five different actions (p < .0001, summed binomial test). For reasons that remain unclear, but which are not atypical of a dolphin's performance during the early stages of learning new problems, object performance declined on the following 4 days (51 trials) to 64.7% correct choices. At the same time, action responses improved (four errors in 51 trials).

3. To improve the reliability of object matching, acoustic cuing was introduced during the latter portion of Day 6. This consisted of playing the acoustic name of the sample object as it was being shown to Phoenix, followed, as usual, by the acoustic name for an action. Acoustic cuing of objects had been used successfully earlier by Forestell and Herman (1988). The sample and comparison objects were again solely the frisbee and hoop. A total of 37 acoustically cued trials were given across 2 days, with only two object errors occurring. Ten uncued trials (no sound accompanied the display of the sample object) were inserted at random locations among the 37 cued trials, producing three object errors. Throughout, there were only two action errors.

4. Zero-delayed matching was now introduced by presenting the comparison objects only after the sample was withdrawn. As was described earlier, two assistants located laterally to the trainer placed the comparison objects in the tank after the sample was removed from view. The sample and comparison objects were HOOP, PIPE, and BASKET. A goal of this training stage was to reduce and then eliminate reliance on acoustic cuing of sample objects. A total of 350 trials were given over an 11-day period. The probability of an acoustic object name occurring on any trial within a session was reduced gradually across days from 0.65 during the first day to zero during the final day. Object performance during 179 cued trials was 84.9% correct. During the remaining 171 uncued trials, performance was 72.1% correct. There were only nine action errors throughout.

Testing. A period of 47 days intervened between the end of Stage 4 training and the beginning of testing. The hiatus was caused by Phoenix's entering a receptive phase of her estrous cycle and becoming uncooperative in participating in a structured training session. Bottlenosed dolphins may have multiple breeding seasons, with peaks in the spring and fall (Harrison & Ridgway, 1971). In our experience, as well as that of others whom we have contacted that house bottlenosed dolphins, periods of inappetence and unwillingness to engage in the normal routine are common behavioral responses of estrous females.
Because of the indicated long interval since her last training, Phoenix's first day of testing began with acoustic object cuing reintroduced as "refresher" training. A total of 36 trials were given, 18 of which were acoustically cued and 18 uncued. Cued and uncued trials occurred in a random balanced sequence. This was followed by 24
days (564 trials) of testing, all without acoustic cuing of objects. The sample objects were PIPE, HOOP, BASKET, and FRISBEE, all of the objects used in the preceding training stages. Additionally, two exemplars of HOOP were used, a small hoop and a large hoop. During the first 201 uncued trials the frisbee was not included among the sample objects but was used thereafter.
Results and Discussion

Phoenix responded to the correct comparison object on 14 (77.8%) of the 18 acoustically cued refresher trials and on 13 (72.2%) of the 18 remaining uncued trials. Both performance levels significantly exceeded chance values (summed binomial test, p < .05). Phoenix's performance on uncued trials was virtually equivalent to that observed just before the 47-day hiatus. She used the correct action on 33 of the total of 36 trials (91.7%). [...] p > .10. Because there was a ceiling effect with the familiar objects, the effects of the two presentation modes on performance with novel objects were examined further by dividing the data into three blocks of six sessions each, representing early, middle, and late stages of testing. Figure 4 shows that for both the static and dynamic modes, performance improved substantially over the three blocks. The effect of blocks was significant, F(2, 210) = 3.76, p < .05, as was the interaction of mode with blocks, F(2, 210) = 67.19, p < .001. Newman-Keuls tests of simple effects revealed that for the static condition, performance was significantly better (p < .05) during Block 3 than during either Block 1 or Block 2, but the difference between the latter two blocks was not significant. For the dynamic condition, performance during Blocks 2 and 3 was significantly better than during Block 1. Also, performance during Block 2 of the dynamic mode significantly exceeded performance during Blocks 1 and 2 of the static mode. In short, there was significant improvement in performance for both presentation modes with practice, but at a faster rate for the dynamic condition. An examination was also made of performance on the first trial of each pairing of novel objects, in order to assess whether the identity rule generalized immediately to pairs of objects not encountered previously by Phoenix.
During the first two sessions, all possible pairings were made among the four novel objects assigned to the dynamic condition and among the four novel objects assigned to the static condition. For each presentation mode, there were 12 such pairings, because each object appeared with every other object twice, once as S+ and once as S−. For the dynamic condition, there were no errors over the 12 pairings, and for the static condition there were three errors. The chance probability of these occurrences was .073 for the static mode, .0002 for the dynamic mode, and
.0001 for both conditions combined, using the summed binomial test. Overall, the results of Experiment 4 show that the visual matching ability of the dolphin is not dependent on a dynamic mode of presentation. In either mode, sample familiarity aids matching, although matching performance is still highly reliable with novel objects. For novel objects, the dynamic condition aids performance mainly at the earlier stages of practice. Possibly, the dynamic mode lends itself to more rapid familiarization with the novel objects because the salient features of these objects are more clearly revealed as the object passes through the dolphin's visual field in different perspectives.

General Discussion

The results demonstrated the ability of a bottlenosed dolphin to learn and apply a generalized matching rule to a variety of familiar and novel visual objects. This constitutes the first report of this ability in a bottlenosed dolphin, normally considered an auditory specialist. Table 4 summarizes first-trial data for Experiments 2, 3, and 4, showing the response of the dolphin on the first trial for an object as S+ (i.e., as the sample object) and the first trial as S− (i.e., as the distractor). A total of 18 different objects were presented across the three experiments. Phoenix responded correctly in 16 first-trial instances with the objects as S+ (p = .0007) and in 15 instances with the objects as S− (p = .0038). Significance levels are based on the summed binomial test. Data for Experiment 1 are not included because the objects were used previously in training for the matching task. All of the remaining objects had not appeared previously in a matching context, and all but four (NET, SURFBOARD, torpedo, and rope) were novel objects not previously seen by the dolphin. These data offer the strongest support for generalized visual matching ability by a dolphin.
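The "summed binomial" probabilities reported for these first-trial results follow from the upper tail of a binomial distribution at a chance level of .5 (two comparison objects per trial). A minimal sketch in Python reproduces them; the function name `binomial_tail` is ours, introduced for illustration and not part of the original analysis:

```python
from math import comb

def binomial_tail(successes, trials, p_chance=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p_chance):
    the summed (one-tailed) binomial probability."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# First trials as S+: 16 correct of 18 objects
print(round(binomial_tail(16, 18), 4))  # 0.0007
# First trials as S-: 15 correct of 18 objects
print(round(binomial_tail(15, 18), 4))  # 0.0038
# Experiment 4 first pairings, dynamic mode: 12 of 12 correct
print(round(binomial_tail(12, 12), 4))  # 0.0002
# Experiment 4 first pairings, static mode: 9 of 12 correct
print(round(binomial_tail(9, 12), 3))   # 0.073
```

Each printed value matches the corresponding probability quoted in the text, confirming that the reported significance levels are one-tailed exact binomial probabilities against a .5 chance baseline.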
The immediacy of transfer observed to new visual materials and the limits of delay achieved during delayed matching tests rivaled performances observed in a bottlenosed dolphin using auditory materials (Herman, 1975; Herman & Gordon, 1974; Herman & Thompson, 1982). These comparisons suggest that the modality through which information arrives, acoustic or
visual, is not necessarily a limitation on cognitive performance in this species. This conclusion is given further support by findings, discussed earlier, that a bottlenosed dolphin (named Akeakamai) was able to interpret instructions contained wholly within a sequence of gestures given by a trainer (Herman, 1986, 1987a; Herman et al., 1984). Phoenix, the subject of this study, was able to make similar interpretations on hearing sequences of sounds. Other recent data have demonstrated the ability of Akeakamai and Phoenix to interpret immediately gestures or sequences of gestures given by a televised image of a trainer appearing on a 13-in. monochrome monitor displayed behind an underwater window in the dolphins' tank (Herman, 1987b). Our recent findings on visual performance of bottlenosed dolphins are consistent with emerging data on multimodal cognitive abilities of some nonhuman primate species: cebus and macaque monkeys (Colombo & D'Amato, 1986; Shyan et al., 1987) and pygmy chimpanzees, Pan paniscus (Savage-Rumbaugh, 1986). In the latter case, the chimpanzees appear able to understand either visual symbols or spoken English as references to real-world objects. Although the workers with primates either claim or imply some superiority for the visual system relative to the auditory system for the processing of complex cognitive tasks (i.e., they imply modality asymmetry), our results are more suggestive of an invariance in cognitive performance over the visual and auditory modalities for dolphins, or what might be termed modality equivalence or near-equivalence. For the dolphin, it still remains to be shown whether information arriving through one sensory mode is relatively independent of information in another mode. Such studies, for example, may test whether new visual information has retroactive effects on stored auditory information as compared with effects of new auditory information (cf. Colombo & D'Amato, 1986).
Further support for the notion of modality equivalence in the bottlenosed dolphin comes from the results of Experiment 1 of this study. Phoenix was able to organize a correct response by combining visual input referencing an object in her tank with auditory input referencing an action to be taken to that object. Previously, Phoenix had experience only with sound sequences (Herman et al., 1984), rather than with mixed-modal sequences. Experiment 1 thus demonstrated that Phoenix was able to substitute the referent of an acoustic symbol for the symbol itself in a languagelike task within an artificial miniature language system.

Forestell and Herman (1988) reported that a bottlenosed dolphin (named Puka) was able to learn visual delayed matching of a single pair of visual planometric patterns, with the aid of auditory associates of those patterns. These authors were unable to test for generalization of the matching rule to new visual materials before the dolphin subject was abducted from the laboratory. It was not determined, therefore, whether auditory associates would continue to be necessary in subsequent transfer tests using other visual materials. It is now clear that the bottlenosed dolphin is capable of pure visual-visual matching. This suggests that Puka might have learned to match without the continued aid of auditory associates during subsequent transfer tests.

A remaining question is what factors might have been responsible for the success in visual matching seen in this study as compared with earlier studies reporting difficulties with visual matching by dolphins (Chun, 1978; Forestell & Herman, 1988). One factor might be the extensive previous training of Phoenix for receptive competency in an artificial language. This might have served to increase her general level of task sophistication, transferable to any new task, or it might have allowed her to represent the visual objects richly through auditory codes. The latter possibility is discounted by the findings in Experiments 2 and 3 that nonlanguage objects (those without names in the artificial language) were matched as reliably as were language objects. This included objects entirely novel to Phoenix's experience. Her ability to apply the matching principle to these novel objects was nearly immediate.

Table 4
Summary of Performance on First Trial as S+ and First Trial as S− for Each Object Used for the First Time During Experiment 2, 3, or 4

Experiment 2: NET, SURFBOARD
Experiment 3: seed holder, fence, bear chair, refrigerator shelf, torpedo, rope, cone, lifejacket
Experiment 4: fan blades, feathers, rain gutter, tool box, hub cap, lawn mower, monster, skid (all correct on the first trial as S+)

Note. S+ = sample; S− = distractor. Summary: Total objects = 18, S+ correct = 16, S− correct = 15.
Although Experiment 2 showed that language and nonlanguage objects seemed to be categorized differently by Phoenix, in that there was more interference to memory within language and within nonlanguage categories than across categories, there was no evidence that being able to represent an object by an acoustic label facilitated matching under the conditions of the present studies. With respect to task sophistication, there is no doubt that Phoenix was exceptionally experienced in solving problems. Yet, so was Puka, the dolphin studied by Forestell and Herman (1988) that had extreme initial difficulty with visual matching. It is interesting that most of Puka's prior experience was with visual tasks (Herman et al., 1975; Madsen, 1976) whereas Phoenix's was mainly with auditory materials. D'Amato and Colombo (1985) speculated that extensive task experience through one modality might limit performance in another modality. They saw this as an explanation for the success of some of their cebus monkeys in solving auditory matching. The successful monkeys were those that were relatively unexposed previously to visual tasks. This modality "bias" certainly did not operate in the present cases with dolphins. Phoenix, who learned visual-visual matching relatively easily, was trained in auditory tasks primarily, whereas Puka, who had initial difficulty with visual matching, was trained primarily in visual tasks. Also, Shyan et al. (1987) showed that a macaque monkey experienced with visual tasks was nevertheless able to learn auditory matching.
Possibly, as is claimed for some chimpanzees (Premack, 1983), training in a language develops special abilities for forming abstract categories. In this view, training of Phoenix in her artificial language may have allowed her to form a generalized matching concept with visual materials. This interpretation is weakened by findings that animals not trained in language can form a matching concept for both visual and auditory materials, as in the studies with monkeys reviewed earlier. Also, a nonlanguage trained dolphin readily formed a matching concept with auditory materials (Herman & Gordon, 1974). Nevertheless, it may be that the language training with Phoenix was facilitative in that it called special attention to objects. Objects had acoustic "names" in the language used with Phoenix, and evidence suggests that these names came to be understood as references to those objects (Herman, 1987a). This referential quality was used advantageously during the early training for matching when acoustic names were used as necessary to supplement the information provided by the visually displayed sample object. In this manner, Phoenix may have been efficiently directed toward the essential attribute of the displayed sample object: that it referred to a like object in the tank. A remaining factor to be accounted for is the method of presenting the visual objects in the present studies. Initially, through to Experiment 4, objects were always presented in a dynamic mode, by passing the object in an arc in front of the dolphin's head. The movement may have served to call attention of the dolphin to the objects. In the prior visual matching studies with dolphins, the visual materials were presented in a static mode, fixed in place. In Experiment 4, we showed that presentation mode made no difference when matching familiar objects. Phoenix's performance remained near a ceiling level in both the static and the dynamic mode. 
For novel objects, there was an approximate seven-percentage-point advantage for the dynamic mode over the static mode, although the gain was not statistically significant. When blocks of trials were examined, performance with novel objects improved significantly over blocks in both modes, but at a faster rate in the dynamic condition. These results suggest that the dynamic mode can have some influence, probably by making the sample figure more apparent against its background and/or by increasing the number of object dimensions shown to the dolphin. The failure to find an effect of presentation mode with familiar objects may simply mean that everything that needed to be learned perceptually about those objects had already been learned by the time presentation mode was studied in Experiment 4.

As a final caution, the visual materials used in this study were all three-dimensional objects, in contrast with the two-dimensional figures used by Chun (1978) and by Forestell and Herman (1988). It remains to be seen how well Phoenix might perform with two-dimensional visual materials. That such materials might not significantly impair Phoenix's performance is suggested by the findings of Forestell and Herman (1988) and Herman (1980) that the three-dimensional materials initially used with Puka in attempts at visual learning were no panacea. Instead, the key may be to find ways to increase the saliency of the relevant figure to be discriminated by the dolphin by enhancing the contrast of figure and ground or by
differentiating clearly between sample and comparison objects.
References

Chun, N. K. W. (1978). Aerial visual shape discrimination and matching-to-sample problem solving ability of an Atlantic bottlenosed dolphin (NOSC Tech. Rep. 236, May). San Diego: Naval Ocean Systems Center.
Colombo, M., & D'Amato, M. R. (1986). A comparison of visual and auditory short-term memory in monkeys (Cebus apella). Quarterly Journal of Experimental Psychology, 38B, 425-448.
D'Amato, M. R. (1973). Delayed matching and short-term memory in monkeys. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 7, pp. 227-269). New York: Academic Press.
D'Amato, M. R., & Colombo, M. (1985). Auditory matching-to-sample in monkeys (Cebus apella). Animal Learning & Behavior, 13, 375-382.
D'Amato, M. R., & Salmon, D. P. (1982). Tune discrimination in monkeys (Cebus apella) and in rats. Animal Learning & Behavior, 10, 126-134.
Dawson, W. W. (1980). The cetacean eye. In L. M. Herman (Ed.), Cetacean behavior: Mechanisms and functions (pp. 53-100). New York: Wiley.
Fobes, J. L., & King, J. E. (1982). Auditory and chemoreceptive sensitivity in primates. In J. L. Fobes & J. E. King (Eds.), Primate behavior (pp. 245-270). New York: Academic Press.
Forestell, P. H., & Herman, L. M. (1988). Delayed matching of visual materials by a bottlenosed dolphin aided by auditory symbols. Animal Learning & Behavior, 16, 137-147.
Harrison, R. J., & Ridgway, S. H. (1971). Gonadal activity in some bottlenose dolphins (Tursiops truncatus). Journal of Zoology, London, 165, 355-366.
Herman, L. M. (1975). Interference and auditory short-term memory in the bottlenosed dolphin. Animal Learning & Behavior, 3, 43-48.
Herman, L. M. (1980). Cognitive characteristics of dolphins. In L. M. Herman (Ed.), Cetacean behavior: Mechanisms and functions (pp. 363-430). New York: Wiley.
Herman, L. M. (1986). Cognition and language competencies of bottlenosed dolphins. In R. J. Schusterman, J. Thomas, & F. G. Wood (Eds.), Dolphin cognition and behavior: A comparative approach (pp. 221-252). Hillsdale, NJ: Erlbaum.
Herman, L. M. (1987a). Receptive competencies of language-trained animals. In J. S. Rosenblatt, C. Beer, M. C. Busnel, & P. J. B. Slater (Eds.), Advances in the study of behavior (Vol. 17, pp. 1-60). New York: Academic Press.
Herman, L. M. (1987b, December). The visual dolphin. Paper presented at the seventh biennial conference on the biology of marine mammals, Miami, FL.
Herman, L. M. (1988). The language of animal language research: Reply to Schusterman and Gisiner. The Psychological Record, 38, 349-362.
Herman, L. M., & Forestell, P. H. (1985). Short-term memory in pigeons: Modality-specific or code-specific effects? Animal Learning & Behavior, 13, 463-465.
Herman, L. M., & Gordon, J. A. (1974). Auditory delayed matching in the bottlenosed dolphin. Journal of the Experimental Analysis of Behavior, 21, 19-26.
Herman, L. M., Peacock, M. F., Yunker, M. P., & Madsen, C. (1975). Bottlenosed dolphin: Double-slit pupil yields equivalent aerial and underwater diurnal acuity. Science, 189, 650-652.
Herman, L. M., Richards, D. G., & Wolz, J. P. (1984). Comprehension of sentences by bottlenosed dolphins. Cognition, 16, 129-219.
Herman, L. M., & Thompson, R. K. R. (1982). Symbolic, identity, and probe delayed matching of sounds by the bottlenosed dolphin. Animal Learning & Behavior, 10, 22-34.
Honig, W. K. (1978). Studies of working memory in the pigeon. In S. H. Hulse, H. Fowler, & W. K. Honig (Eds.), Cognitive processes in animal behavior (pp. 211-248). Hillsdale, NJ: Erlbaum.
Madsen, C. (1976). Tests for color discrimination and spectral sensitivity in the bottlenosed dolphin, Tursiops truncatus. Unpublished doctoral dissertation, University of Hawaii.
Madsen, C. J., & Herman, L. M. (1980). Social and ecological correlates of cetacean vision and visual appearance. In L. M. Herman (Ed.), Cetacean behavior: Mechanisms and functions (pp. 101-148). New York: Wiley.
Popper, A. N. (1980). Sound emission and detection by delphinids. In L. M. Herman (Ed.), Cetacean behavior: Mechanisms and functions (pp. 1-52). New York: Wiley.
Premack, D. (1983). The codes of man and beast. The Behavioral and Brain Sciences, 6, 125-167.
Savage-Rumbaugh, E. S. (1986). Ape language: From conditioned response to symbol. New York: Columbia University Press.
Shyan, M. R., & Herman, L. M. (1987). Determinants of recognition of gestural signs in an artificial language by Atlantic bottlenosed dolphins (Tursiops truncatus) and humans (Homo sapiens). Journal of Comparative Psychology, 101, 112-125.
Shyan, M. R., Wright, A. A., Cook, R. G., & Jitsumori, M. (1987). Acquisition of the auditory same/different task in a rhesus monkey. Bulletin of the Psychonomic Society, 25, 1-4.
Thompson, R. K. R. (1981, October). Nonconceptual auditory matching by a rhesus monkey reflects biological constraints on cognitive processes? Paper presented at the Northeastern Meeting of the Animal Behavior Society, Kingston, Ontario, Canada.
Wright, A. A., Urcuioli, P. J., & Sands, S. F. (1986). Proactive interference in animal memory. In D. F. Kendrick, M. E. Rilling, & M. R. Denny (Eds.), Theories of animal memory (pp. 101-125). Hillsdale, NJ: Erlbaum.

Received May 25, 1988
Revision received September 9, 1988
Accepted September 13, 1988