COLLEEN CRANGLE
COMMAND SATISFACTION AND THE ACQUISITION OF HABITS
ABSTRACT. This article develops the idea that the meaning of a command can be analyzed by stating conditions on a satisfactory response to the command. It argues that in evaluating command satisfaction, consideration must be given not only to result judgments but also to judgments about process. It identifies two kinds of process judgments - one in terms of action properties, the other in terms of action execution - and two kinds of result judgments - one at the level of events, the other at the level of specific outcomes. It discusses when judgments of each kind are called for and examines the circumstances under which process considerations intrude into judgments that are primarily in terms of result. Finally, it identifies a role for habits in command semantics, drawing on work done to design robots that can understand ordinary English commands.
Over the past few years, in work Patrick Suppes and I have done on robots that learn new tasks through verbal instruction, the notion of a robot's needing to acquire habits has surfaced from time to time. It quickly became apparent to us that robots need repetition and practice in the performance of a new task, just as people do. What has been more surprising, however, is the extent to which repetition and practice, and the concomitant acquisition of habits, are required for robots to understand ordinary English commands requesting action. My purpose in this article is to examine the extent to which a consideration of habits must enter into the semantics of commands, and in so doing also to identify a place for habits in the design of intelligent robots. First, I must extend the analysis of command satisfaction begun in our earlier work (Suppes and Crangle, 1988).

SATISFACTION CONDITIONS FOR COMMANDS
Consider the following simple command:

(1) Put the cup on the table.
P. Humphreys (ed.), Patrick Suppes: Scientific Philosopher, Vol. 3, 223-241. © 1994 Kluwer Academic Publishers. Printed in the Netherlands.
If in response to this command the agent - robot or human - tips the cup, spills its contents, and then places the cup upside down on the table, in ordinary circumstances we would consider the request to have been poorly or only partially satisfied. The intention behind the command is most likely for the cup to be placed right way up, its contents undisturbed. Consider another example, the directive
(2) Go to your room

addressed by a parent to a child. The intention behind this command typically is for the child to go directly to her room, without dawdling, without taking a detour to the playroom, and without stopping to pat the dog. The further intention is typically that she stay in her room for some period of time.

It is characteristic of a great many commands requesting action that they leave unexpressed most of the details required to perform the action. At the same time, however, the command carries with it a host of conditions on what counts as a satisfactory response. These conditions encompass not only considerations of result - the cup is on the table or it is not, the child is in the room or she is not - but also process considerations - how the cup was moved to the table, how and when the child ended up in the room.

It has long been acknowledged that one way to analyze the meaning of an utterance is to state the conditions under which the utterance is satisfied. For a statement, giving satisfaction conditions amounts to providing conditions under which the statement is true. For a question, conditions on a correct answer are sought. For a command, the conditions of interest are conditions on a satisfactory response to the command. The basic semantic notion is that of satisfaction in a model, and while it is relatively easy to make the intuitions behind this notion explicit in the case of elementary formal languages, considerable difficulties arise when the complexities of a natural language such as English are introduced. Still the basic intuition prevails, for commands in part because we generally make the reasonable assumption that the agent's response to a command proceeds from his understanding of the command, not from mere willfulness. One major source of difficulty lies in determining what kinds of satisfaction conditions should apply.
As noted above, the satisfaction of a command can be judged purely in terms of result: the cup is on the table or it is not, the child is in the room or she is not. But if we desire
anything more than this somewhat crude semantic evaluation, process considerations inevitably intrude. In our earlier work we introduced the distinction between result semantics and process semantics but offered little analysis of the distinction and did not examine in any detail the question of when result conditions or process conditions are called for. These issues will be discussed in some detail in the next two sections. First, some straightforward remarks about the two kinds of conditions are needed. While result conditions concern the outcome of a response to a command, process conditions pertain to the action taken to achieve an outcome. Result conditions are most naturally thought of as pairs, with one member of the pair representing a satisfactory outcome, the other an unsatisfactory outcome, as in (the cup is on the table, the cup is not on the table). Note that a satisfactory outcome for a response to a command may itself be the taking of an action, but these action outcomes should not be confused with process conditions. Consider, for example, the following command addressed to a hospital attendant or a mobile robotic courier:
(3) Continue to deliver meals to room 27.

The action outcome pair (meals are being delivered to room 27, meals are not being delivered to room 27) is a result condition. A process condition would refer to the way in which delivery was continued: perhaps there was a break in service, for instance. I turn now to examine these process conditions in greater depth.
PROCESS SEMANTICS
Language is replete with mechanisms for characterizing actions and thereby making process-oriented distinctions. Familiar aspectual distinctions such as progressive and non-progressive, perfective and non-perfective, durative and punctual all serve as indicators of process. In addition, a wide variety of adverbials qualify described actions. However, it is not at all a straightforward matter to specify appropriate process conditions for evaluating the satisfaction of a command. Part of the difficulty lies in the fact that there is a level of detail beyond which language cannot go. A command such as

(4) Walk carefully
is simple and concise, but conditions laying out the requirements of careful locomotion in a particular context may be beyond the descriptive powers of language. In fact, as Patrick Suppes has remarked in various discussions, there are limits on the extent to which the details of an action can be laid out, no matter what descriptive means and depth of detail are relied on. Suppes points out that even for the simple action of throwing a pair of dice, the detailed analysis of the motion of the dice is a matter of great mathematical difficulty, with a full description of the process by which a result such as double sixes is reached in fact being impossible. Furthermore, in order to fully specify conditions on the action an agent takes in response to a command, we need some way to specify details of the agent's cognitive, perceptual, and motor functioning. Consider commands such as
(5) Lift the chair gently

and

(6) Look diligently for the empty bottle on the shelf.

It is hard to see how satisfaction of such commands can be evaluated without including aspects of the agent's functioning. At this point, a retreat to the simpler framework provided by robots often becomes necessary. Later in this article I briefly describe some of our own work using robots. Adverbials of manner such as 'gently' and 'diligently' certainly demand process evaluations in terms of the agent's functioning. But contrast these adverbials, which stipulate details of action execution, with adverbials that simply qualify the event that is to take place by stipulating a property that the action must have. Here are examples.
(7) Cook the spaghetti for 20 minutes.
(8) Ring the bell loudly.
(9) Answer at once.

In each case a property of the requested action is given: there must be a 20-minute spaghetti-cooking activity, a loud bell ringing, an immediate answer. Process evaluations are required for (7), (8) and (9), but it is not clear that these have to include details of the agent's operation.
A further distinction must therefore be introduced, that between action property and action execution. A judgment about process satisfaction may be in terms of action properties or in terms of action execution. Note that even simple commands unadorned by adverbs may demand appraisal in terms of action execution, as evidenced by the following set:

(10) Pound the nail into the plank.
(11) Drive the nail into the plank.
(12) Hammer the nail into the plank.
(13) Tap the nail into the plank.

Verbs are rich in process distinctions, a fact that has long been a challenge to lexical semantics. Nonetheless, adverbials remain central to questions about command satisfaction. It has been notoriously difficult to explain adverbs of action. Familiar problems such as the difference between

(14) Clumsily he trod on the lawn

and

(15) He trod on the lawn clumsily

have received considerable attention - see, for example, Vendler (1984) - but there is as yet no systematic account of the way action descriptions are modified by adverbials. Furthermore, in philosophical analyses of events and actions such as those provided by Davidson (1970) and Bratman (1987), while the notion of intentionality has received much attention there has been less of a concern with properties of actions and details of action execution. The two kinds of process evaluations I have identified are just a beginning in understanding how the plethora of process-oriented distinctions found in language affect judgments about a command's satisfaction. Later in this article I will return to process judgments when I discuss circumstances under which process considerations intrude into judgments that are primarily in terms of result. Table I presents the two kinds of process judgments together with sample commands requiring each kind of evaluation.
TABLE I
Kinds of process judgments and sample commands.

JUDGMENT IN TERMS OF                 JUDGMENT IN TERMS OF
ACTION PROPERTIES                    ACTION EXECUTION

Cook the spaghetti for 20 minutes.   Lift the chair gently.
Ring the bell loudly.                Pound the nail into the plank.
Answer at once.                      Hammer the nail into the plank.
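The distinction drawn in Table I can be caricatured computationally. In the following Python sketch, all names, thresholds, and the trace representation are invented for illustration: a judgment in terms of action properties inspects only gross features of the event, while a judgment in terms of action execution must reach into details of the agent's functioning.

```python
# Illustrative contrast between the two kinds of process judgment.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    event: str
    duration_minutes: float
    motor_trace: List[float] = field(default_factory=list)  # e.g., gripper forces

def property_judgment(a: Action) -> bool:
    # (7) "Cook the spaghetti for 20 minutes": a property of the event suffices;
    # nothing about the agent's internal functioning is consulted.
    return a.event == "cook spaghetti" and abs(a.duration_minutes - 20) <= 1

def execution_judgment(a: Action, max_force: float = 5.0) -> bool:
    # (5) "Lift the chair gently": we must examine how the action was executed,
    # here crudely approximated by peak force in the agent's motor trace.
    return a.event == "lift chair" and max(a.motor_trace) <= max_force

cooking = Action("cook spaghetti", duration_minutes=20.0)
lifting = Action("lift chair", duration_minutes=0.5,
                 motor_trace=[2.1, 3.4, 2.8])

print(property_judgment(cooking))   # True
print(execution_judgment(lifting))  # True
```

The asymmetry is the point: the first judgment could be made by an external observer with a clock, while the second requires access, however approximate, to the agent's operation.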
RESULT SEMANTICS
Despite the importance of process considerations in the evaluation of commands, result semantics has a role to play in the analysis of commands, one that cannot be eliminated. Consider, first of all, commands requesting actions that have well-defined end points, as in

(16) Write the letter.

This command will elicit a result judgment: has the end point been reached, that is, does a terminating condition such as the letter's having been signed and sealed hold? The agent may have been writing and may have since stopped writing, but unless the terminating condition holds the command will not be judged to have been satisfied. Process considerations may also intrude into satisfaction judgments for these commands, but result judgments are assured. This characteristic - an action's having a well-defined end point - is a familiar one in the extensive literature on tense and aspect. See, for example, Comrie (1976), Vendler (1967), and Freed (1979). Vendler's original classification of activities, accomplishments and achievements has spawned much discussion over the years, and the distinctions that have been recognized as a result are significant to judgments about command satisfaction. For instance, consider a punctual event that is the culmination of some process (an achievement in Vendler's terms). Examples are winning the prize, reaching the summit of the mountain, finding the red book on the shelf - respectively the culmination of competing in an event, climbing a mountain, searching for a book.
A command requesting the performance of a punctual end-point event cannot in itself elicit process conditions for there is no process to evaluate. What may be evaluated, however, is the process leading up to the punctual event: how the race was run, for instance, how the mountain was climbed, how the book was searched for. Primarily, though, these commands elicit result judgments. Result semantics plays a role in yet other command evaluations. In many cases of aspectual complementation found in 'stop', 'start', 'continue', 'resume' and 'repeat' commands, result semantics is exactly what is needed. Take the following 'stop' command:
(17) Stop carrying the box.

Its satisfaction is judged primarily in terms of result: the box continues to be carried or it does not. But simple result semantics is not enough. Consider the command

(18) Put the book on the shelf.

A satisfactory response at the result level would have the book on the shelf, the intended result achieved. However, the event of the book's being shelved may be realized by many different specific actions that may produce many different specific outcomes. The book may be grasped clumsily and its cover torn, on placement its spine may face in or out on the shelf, it may be placed upside down or right way up. A second distinction introduced in our earlier work was that between events and specific outcomes. Closer analysis of the 'stop' command in (17) reveals that what is needed is not a simple result semantics that offers pairs such as (the box is being carried, the box is not being carried) but an analysis that is at the level of specific outcomes. A command to stop doing something expresses more than the simple intention that the agent interrupt that activity. It carries with it the usually unexpressed intention that the agent do something else instead (Crangle, 1989). For the command in (17), for instance, the agent should not simply freeze; typically the object being carried should be put down carefully or handed over to someone else. There are even circumstances in which it should be dropped without delay. What is needed for these 'stop' commands is an analysis that offers pairs of action outcomes such as (the box is being carried, the box is put down), (the box is being carried, the box is handed over), or (the box is being carried, the box is dropped). Similarly, for the command
(19) Stop running

a result evaluation is needed, one that offers specific outcome pairs such as (the agent is running, the agent is walking) or (the agent is running, the agent is standing still).

Judgments at the level of events have their own place. They are typically elicited by commands requesting repeated or habitual behavior. Consider, for example, the injunctions found in a book of etiquette.

(20) Be considerate.
(21) Escort your visitor to the front door.
(22) Remove soup plates to the sideboard one at a time.
(23) Place the vegetable spoons at the corner of the table, about ten inches from it, their bowls pointing in opposite directions.
(24) Watch the theater entrance for your escort unobtrusively, not with fixed and desperate gaze.

Here we encounter exhortations to engage in certain kinds of habitual behavior, some expressing broad generalities as in (20), others addressing specific conventions as in (22) and (23). We tend to evaluate satisfaction of such admonishments not so much in terms of any specific outcome as over the long haul, making a judgment, for instance, as to whether a few failures to accompany a visitor to the door mean the injunction is not being obeyed. It is interesting to note that although only some of the above commands make explicit reference to process - 'one at a time', 'with fixed and desperate gaze' - process considerations would probably enter into most satisfaction judgments for these commands. A possible exception is (23); under ordinary circumstances a straightforward evaluation of the location and orientation of the spoons is all that would be required. In terms of the distinction introduced earlier, any process evaluations made would typically be in terms of action properties not action execution. It would be assumed that details of action execution have been thoroughly taken care of by the time a call for habitual behavior is issued.
THE INTRUSION OF PROCESS JUDGMENTS
It is instructive to examine just when it is that process considerations intrude into satisfaction judgments that are primarily in terms of result and at the level of events. Consider the following command addressed to a human or robotic janitor:

(25) Pick up litter.

This command directs the agent to engage repeatedly in the activity of finding and retrieving individual items of refuse. We judge satisfaction in terms of the results achieved - litter is left behind or it is not - and we do so less in terms of specific outcomes - there is no litter Wednesday 2:35 p.m., there are three scattered pieces Friday 5:40 p.m., and so on - than in general terms, that is, at the level of events. Furthermore, we typically do not concern ourselves with how each item of refuse is removed, for the request assumes that all details of action execution at this lower level will be taken care of adequately. The satisfaction of (25) is therefore typically judged in terms of results at the event level.

Process considerations intrude, however, when something can go wrong and does. Suppose certain pieces of litter start getting left behind - discarded wooden ice-cream sticks, for instance, and crumpled candy wrappers. Suppose it turns out that all small objects are being ignored, perhaps because the agent regards only big items as trash, perhaps because the agent has poor vision, or perhaps because the agent lacks the fine motor control needed to retrieve small items. In each case, a flaw in the process of picking up litter produces an unsatisfactory response to the command. Any subsequent evaluation of the agent's response to the command would have to include an evaluation of process to check that the flaw has been eradicated. This evaluation would be in terms of action execution, that is, it would refer to details of the agent's functioning.
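The litter example suggests a simple diagnostic pattern: a result judgment at the event level can trigger an investigation of action execution when failures turn out to be systematic. A hedged Python sketch follows; the data, the failure threshold, and the size heuristic are all invented for illustration and stand in for whatever investigation of the agent's functioning is actually available.

```python
# Sketch: event-level result judgment over a history of litter reports,
# with a process investigation triggered when failures show a pattern.

def event_level_judgment(reports):
    # Satisfied at the event level if litter is only rarely left behind.
    missed = [r for r in reports if r["left_behind"]]
    return len(missed) / len(reports) < 0.1, missed

def diagnose(missed):
    # If every missed item is small, suspect a flaw in perception or
    # fine motor control rather than mere negligence.
    if missed and all(r["size_cm"] < 5 for r in missed):
        return "investigate execution: small objects systematically ignored"
    return "no systematic flaw detected"

reports = [
    {"item": "newspaper", "size_cm": 30, "left_behind": False},
    {"item": "bottle", "size_cm": 20, "left_behind": False},
    {"item": "ice-cream stick", "size_cm": 4, "left_behind": True},
    {"item": "candy wrapper", "size_cm": 3, "left_behind": True},
]

ok, missed = event_level_judgment(reports)
print(ok)                # False: too many failures at the event level
print(diagnose(missed))  # evaluation shifts toward action execution
```

The shape of the sketch mirrors the argument: no detail of execution is consulted until the event-level judgment fails, and only then does the evaluation descend to the agent's functioning.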
Although for human agents there is limited access to the processes by which cognitive, perceptual, and motor functioning are governed, there are various methods for making the appropriate process-oriented appraisals: interrogating the person, subjecting her to tests of visual acuity and manual dexterity, for instance. In the case of robots, investigation of process, though technically complex, is entirely possible and indeed necessary if a satisfactory response to a command is to be achieved.
TABLE II
Kinds of result judgments and sample commands.

JUDGMENT AT THE LEVEL         JUDGMENT AT THE LEVEL
OF EVENTS                     OF SPECIFIC OUTCOMES

Be considerate.               Stop carrying the box.
Pick up litter.               Hit the ball.

PROCESS JUDGMENTS INTRUDE...
whenever flaws surface        during verbal instruction
There is another set of circumstances in which process judgments intrude into evaluations that are primarily result oriented. Consider the command

(26) Hit the ball

addressed to an agent learning how to play tennis. Result judgments are naturally important here and they are primarily at the level of specific outcomes - exactly where the ball lands on each stroke is crucial to evaluating responses to the command. Process considerations inevitably intrude, however, when a new task is being taught. Initially, for tennis instruction, evaluations are made about how the player holds the racquet, uses her wrist and elbow, positions her feet, and so on. These process evaluations fall away as she masters her stroke, but return if she has to adapt her skill to changed circumstances, a different racquet design, for instance. In Table II, I show the two kinds of result judgments I have discussed along with sample commands and an indication of when process judgments intrude. I turn in the next section to examine interactions between the different kinds of satisfaction conditions.
HABITS, LEARNING, AND SATISFACTION CONDITIONS
I began this article with the observation that a great many commands requesting action leave unexpressed many details of action execution and specific outcome. A request as simple as
(27) Put the wrench away

says nothing about how the wrench is to be held and nothing about its precise placement. What has not yet been emphasized, but is crucial to an understanding of commands requesting action, is that the speaker will typically have in mind not some precise spot but a target location: somewhere around the middle of the top shelf, for instance, or anywhere on the countertop. The speaker will often also be quite neutral as to the object's orientation while it is being moved and when it is deposited on the shelf. Even when details of action execution and outcome are specifically intended, because of the difficulty and tedium of expressing them precisely, something like a negotiated settlement is generally arrived at. The agent being addressed offers a best response to the command and the person giving the order determines if that response is good enough. If it is not, she repeats the order or gives corrective instruction. During this interchange, process considerations and judgments about specific outcomes predominate in evaluations of command satisfaction. There is, however, a point at which settlements no longer have to be negotiated. Consider the command

(28) Go to the door.
It carries with it many unexpressed intentions concerning how close to the door the agent should come, and whether she should stop at a point that does not impede the door's opening or perhaps block entry to the door. People typically respond to this request by drawing on habits acquired over time through experience with many comings and goings. But a very young child will not automatically stand in the right place or face the right way in response to this command, for the appropriate habits of arrival and departure have not yet been ingrained. Habits have received relatively little attention from philosophers and cognitive scientists. It is only in the study of animal behavior that habits have been accorded any kind of prominence. The ordinary understanding of a habit is as behavior ingrained by frequent repetition of an act. The behavior is often thought to be almost involuntary, particularly in the case of bad habits. A habit is also sometimes thought
of as a tendency or disposition to act in a certain way brought about by frequent repetition. Although I believe it would be fruitful to analyze habits in probabilistic terms, as suggested by the notions of disposition and tendency, in the context of this discussion it is instructive rather to examine several lines of development in the design of intelligent robots. A recent debate in robotics argues the merits of two different approaches to intelligent robot design. The first approach adopts the perspective characteristic of much mainline research in artificial intelligence over the past few decades. See, for instance, Nilsson (1980) and Latombe (1991). It assumes that an intelligent agent will need to reason about its world and the tasks it undertakes using representations or models of that world. These robots typically include 'task planners' in their design. To do its job, a task planner needs geometrical, physical, and kinematical descriptions of all objects in the robot's environment, including the robot itself. A task planner takes a task-level specification, that is, one that specifies an action largely by its effect on objects, and produces a manipulator-level program, one that directs the detailed movement of the robot. Task-level robot programming languages are thus largely result oriented rather than process oriented. In its most idealized form, this approach assumes that with enough information about the world and itself, the robot can, from scratch, satisfactorily perform any task within its capability. In contrast, the second approach believes in the efficacy of trial and error, that is, in the importance of process (Brooks, 1991). Each robot is designed as a network of relatively independent 'primitive' actions or behaviors. For example, 'avoid hitting things' is one primitive behavior, 'stand up' is another (for legged robots), 'explore' (i.e., wander around) yet another. 
These behaviors operate largely in parallel, and during development of the robot they are added incrementally. This approach abjures centralized or hierarchical control in its robots and makes only limited use of representations of the robot's world. It seeks to rely rather on techniques that through trial and error will organize the primitive behaviors in such a way that they will, in concert, exhibit the desired higher-level behavior. For example, one well publicized experimental robot finds and retrieves empty soda cans, a goal that was built into it through the choice and design of its primitive behaviors. The debate over these two approaches is often thought to be primarily about centralized versus decentralized control in intelligent agents and about the role of representations in intelligent behavior. But at a more
general level it is a debate about whether or not robots need habits. In the second approach, the robot is essentially constructed as a collection of built-in habits, habits of standing, avoiding, and wandering about, for instance. Through repetition and practice a new habit emerges, that of finding and collecting empty soda cans, for instance. The robot's behavior is unreflective and, by all ordinary notions of consciousness, unconscious. In contrast, the first approach builds robots that engage in continual reflection, reasoning about their world, their capabilities, and the tasks they are to undertake. There is no obvious place in this framework for repetition, practice, correction or refinement. There is no obvious place in this framework for habits.

The second approach to robot design - the trial-and-error habit-forming approach - has made a significant contribution to the study of intelligent agency. The development of habits is central to the proper functioning of intelligent agents, not least of all to their ability to respond appropriately to commands requesting action. However, habits alone are not enough. Healthy skepticism should be held toward one of the key assumptions of this second approach, namely that increasingly complex robot behavior can be achieved by incrementally adding new behaviors, that is, new built-in habits, without imposing any form of centralized or hierarchical control. Some organizing scheme is surely needed if these robots are to exhibit high-level skills such as adjudicating between competing goals.

In our own work on intelligent robots, our approach has been to explicitly build into the robot mechanisms for acquiring habits through interaction with a human operator (Crangle et al., 1987; Suppes and Crangle, 1990). These habits are developed over time in response to verbal commands requesting action.
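A crude caricature of such a mechanism can be given in Python. The sketch rests on strong simplifying assumptions that are mine, not features of our actual robot: the habitual response is reduced to a single one-dimensional position, and the operator's qualitative feedback is reduced to a direction plus a coarse magnitude. The point it illustrates is only that non-determinate feedback, repeated over trials, can converge on an acceptable habitual response without the target ever being stated exactly.

```python
# Caricature of acquiring a habitual response from non-determinate feedback.
# The robot never learns the exact target; it only nudges its stored
# response in the direction the corrective feedback indicates.

class HabitLearner:
    def __init__(self, initial_position: float):
        self.position = initial_position  # stored response to the command

    def respond(self) -> float:
        return self.position

    def feedback(self, direction: int, magnitude: float = 1.0):
        # direction: +1 for "further to the right", -1 for "to the left".
        # Congratulatory feedback ("That's fine") simply ends correction.
        self.position += direction * magnitude

robot = HabitLearner(initial_position=0.0)
true_target = 3.0  # known to the operator, never told to the robot

# The operator issues qualitative corrections until the response is close enough.
while abs(robot.respond() - true_target) > 0.5:
    direction = 1 if robot.respond() < true_target else -1
    # "Much further" vs "A little further": magnitude is coarse, not exact.
    magnitude = 2.0 if abs(robot.respond() - true_target) > 2.0 else 0.5
    robot.feedback(direction, magnitude)

print(abs(robot.respond() - true_target) <= 0.5)  # True: habit ingrained
```

Once the loop terminates, the stored position plays the role of an ingrained habit: subsequent issuings of the command draw on it directly, and the negotiated settlement no longer has to be repeated.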
The first time a command such as 'Go to the door' is issued to a robot in a particular set of circumstances, the robot's response typically only approximates the desired response. A process of negotiated settlement then ensues. The operator issues qualitative feedback to correct or confirm the action taken, either a congratulatory command such as 'That's fine' which indicates that the response was acceptable, or a corrective command such as 'Further to the left' which indicates that the response was unacceptable. The corrective feedback is non-determinate in that it does not let the robot know exactly what its response should have been. It merely indicates what the robot can do to improve its response on subsequent attempts. The corrective feedback is either in terms of result - 'Much further to
the right', 'A little bit further' - or in terms of process - 'Be more careful', 'Faster next time'. Over time, the robot's repetition of the action together with the refinement achieved through verbal feedback inculcates in the robot a habit of correct responses to the command.

TABLE III
A dialogue for developing a habit of correct responses to a command.

ORIGINAL COMMAND ISSUED:      Put the wrench on the shelf.
RESULT FEEDBACK:              Not that far.
CONGRATULATORY FEEDBACK:      That's fine.

ORIGINAL COMMAND REPEATED:    Put the wrench on the shelf.
RESULT FEEDBACK:              A little further to the right.
MORE RESULT FEEDBACK:         Further.
PROCESS FEEDBACK:             Be more careful.
RESULT FEEDBACK AGAIN:        A little to the left now.
CONGRATULATORY FEEDBACK:      Good.

In Table III, I show the initial portion of an instruction dialogue used to induce in the robot a habit of correct responses to the following command:
(29) Put the wrench on the shelf.

Before a habit of correct responses has been ingrained in the robot, judgments about action execution and specific outcome predominate in evaluations of the command's satisfaction. The proposition I would defend in general is that as an action becomes ingrained as habit in an agent, satisfaction judgments for commands requesting that action change from judgments of action execution and specific outcome to result judgments at the level of events and process judgments in terms of action properties. A consequence of this proposition is that satisfaction conditions for a command do not inhere in the language itself but are determined by the circumstances of the command's use, including the agent's capabilities and past responses to the command. That is as it should be. Consider, in particular, imperatives that may be used to
[Fig. 1. The four principal kinds of satisfaction judgment: result judgments at the level of events, judgments in terms of action properties, result judgments at the level of specific outcomes, and judgments in terms of action execution, with the shift, as a habit of correct responses is acquired, from judgments of specific outcome and action execution to judgments at the level of events and in terms of action properties.]
express either a request for habitual behavior or a request for specific action. For example, while it is hard to construe

(30) Take regular exercise

as referring to anything other than habitual behavior,

(31) Mow the lawn

may be understood as a request for specific action or a call to practice a certain kind of habit. Past experience is surely crucial in determining the appropriate satisfaction judgments for this command.

In the figure above I show the four principal satisfaction judgments covered in this article and I indicate the changes brought about by the agent's acquiring a habit of correct responses to the command.

I have sought in this article to lay out a framework for evaluating command satisfaction, and in addition to advocate a more prominent place for habits in the semantics of commands. A full account of command satisfaction will undoubtedly turn out to be more complex than demonstrated here, but to be complete it will, I believe, have to encompass the different kinds of satisfaction judgments that are made and acknowledge the role played by habits.
Intelligent Interface Technology, 848 Cambridge Avenue, Menlo Park, CA 94025, U.S.A.
REFERENCES
Bratman, M.: 1987, Intention, Plans, and Practical Reason, Harvard University Press, Cambridge, Massachusetts.
Brooks, R. A.: 1991, 'Intelligence Without Representation', Artificial Intelligence, 47, 139-160.
Comrie, B.: 1976, Aspect: An Introduction to the Study of Verbal Aspect and Related Problems, Cambridge University Press, Cambridge.
Crangle, C.: 1989, 'On Saying "Stop" to a Robot', Language and Communication, 9(1), 23-33.
Crangle, C., Suppes, P., and Michalowski, S.: 1987, 'Types of Verbal Interaction with Instructable Robots', in: G. Rodriguez (Ed.), Proceedings of the Workshop on Space Telerobotics, JPL Publication 87-13, Vol. II, NASA Jet Propulsion Laboratory, Pasadena, California, pp. 393-402. Reprinted in P. Suppes: 1991, Language for Humans and Robots, Blackwell Publishers, Cambridge, Massachusetts, pp. 299-316.
Davidson, D.: 1970, 'The Individuation of Events', in: Nicholas Rescher (Ed.), Essays in Honor of Carl G. Hempel, D. Reidel, Dordrecht.
Freed, A. F.: 1979, The Semantics of English Aspectual Complementation, D. Reidel, Dordrecht.
Latombe, J.: 1991, Robot Motion Planning, Kluwer Academic Publishers, Boston, Massachusetts.
Nilsson, N. J.: 1980, Principles of Artificial Intelligence, Tioga Publishing Company, Palo Alto, California.
Suppes, P. and Crangle, C.: 1988, 'Context-Fixing Semantics for the Language of Action', in: J. Dancy, J. M. E. Moravcsik, and C. C. W. Taylor (Eds.), Human Agency: Language, Duty, and Value, Stanford University Press, Stanford, California, pp. 47-76.
Suppes, P. and Crangle, C.: 1990, 'Robots that Learn: A Test of Intelligence', Revue Internationale de Philosophie, 44(172), 5-23.
Vendler, Z.: 1967, Linguistics in Philosophy, Cornell University Press, Ithaca.
Vendler, Z.: 1984, 'Adverbs of Action', in: David Testen, Veena Mishra, and Joseph Drogo (Eds.), Lexical Semantics, Chicago Linguistic Society, Chicago, IL, pp. 297-307.
COMMENTS BY PATRICK SUPPES
Colleen Crangle's extension of ideas that we have worked on together in the past shows how there is no natural bottom to the depth of semantic interpretation of commands. It is the single most striking feature of ordinary language that any sharp model-theoretic cutoff in the interpretation of the semantics is bound to be unsatisfactory in some respects. This is scarcely surprising, for it is similar to the situation that obtains in the use of mathematical models in almost every domain of science. Increasingly, we have come to learn that, whether it is a matter of physics, economics, psychology, or biology, an exact and complete model can seldom if ever be given for complicated phenomena. It is not unreasonable to think that the simple utterances so familiar as commands in ordinary language actually apply to simple situations, but this is only a characteristic illusion in the understanding of what is implicit in the use of language.

Colleen's examples and arguments strengthen the case for the importance of context in giving a semantics for commands in natural language. There is much to be understood about past experience that is assumed, which comes under the general heading of habits and expectations, and there is also much to be understood about current external conditions when the command is given.

As a reflection of what I have just said, I would mention one minor difference with Colleen. I would not myself be inclined to draw a sharp distinction between events and specific outcomes. It is as if we could really conceive of specific outcomes as definite atoms of experience. This again represents for me an artificial simplification, much used in science and valuable for that reason, but not intrinsic to natural language in its use or in its semantic analysis. For example, we certainly do want to speak of outcomes in a definite theoretical and abstract fashion when we introduce sample spaces in probability theory or when we have an abstract, highly specific theory to consider. But recall the situation in probability theory just as an example. In almost all advanced work in statistics, one considers not a specific probability space but a family of random variables, and relies upon the family of random variables satisfying Kolmogorov's theorem so that there exists a common underlying probability space; but this space is not unique.
Moreover, in all of the conceptual and computational work, specific features of an underlying space scarcely arise. To put the formulation in an even more radical way, a subjectivist such as de Finetti is even skeptical of using a probability space at all, and his random quantities, which he does not call 'random variables' for good reason, are introduced without being defined on any probability space. I would expect de Finetti to have shared my views on this even though we use somewhat different language. If we mention what Colleen would term a 'specific outcome' in natural language, then the generative capacity of the language always permits us to qualify this specific outcome in different ways. Let me consider one simple example.
Specific Outcomes. When someone says 'Hit the ball', usually a range of specific outcomes is expected but unstated. For example, the command might be implicitly qualified in different ways depending upon the sport. In tennis it might mean 'Hit the ball over the net', in baseball it might mean 'Hit the ball at least 50 feet', and in croquet it might mean something still different. The features that are expected arise from current circumstances and past habits and have no natural fixed finite enumeration. For this reason I would much prefer to talk about atomless Boolean algebras in terms of formal models and to speak in terms of one event being more specific than another, specific in the way in which it is more specific to say that in rolling two dice 'at least one 6 came up' as opposed to saying 'at least one even number came up'.

Skepticism about Representations. A more important point is that I very much share Colleen's skepticism about intelligent agents, especially robots we create or humans that we educate, being able simply to think rationally about their world and continually use explicit representations or models to guide their behavior. The strong belief in the psychological reality of such representations is certainly behind the misguided efforts of some philosophers of language to believe that all thinking and reasoning are done in terms of discrete symbols, a mistaken psychological fantasy if ever there was one. Rather, what we need is what Colleen emphasizes: learning and the formation of habits which do not have an explicit representation and which, above all, whether in robots or humans, are not accessible to consciousness. Just as I cannot consciously describe or explicitly represent in my own mind how I hit a tennis ball, so a robot that goes through a process of learning will not naturally have such representations.
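Suppes's dice comparison admits a direct check: in the 36-outcome sample space for two dice, every outcome containing a 6 also contains an even number, so 'at least one 6' is a proper subset of 'at least one even number' and in that sense the more specific event. A minimal sketch, not part of the original text, makes the containment explicit:

```python
from itertools import product

# Sample space for rolling two dice: 36 equally likely outcomes.
omega = set(product(range(1, 7), repeat=2))

# The two events from the dice comparison above.
at_least_one_six = {w for w in omega if 6 in w}
at_least_one_even = {w for w in omega if any(d % 2 == 0 for d in w)}

# 'At least one 6' is more specific: a proper subset of
# 'at least one even number', ruling out more outcomes.
assert at_least_one_six < at_least_one_even
print(len(at_least_one_six), len(at_least_one_even))  # 11 27
```

The subset relation is what 'more specific' amounts to here: specificity is a partial order on events, with no need to descend to a privileged set of atomic outcomes.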
What I can say about hitting a tennis ball, or what even the best experts can say, is, from the standpoint of the physical trajectory of the racket and the ball, crude and qualitative in character. Now it might be thought that we can artificially build into the robot a capacity to describe in mathematical terms what it is doing while it acquires specific habits of motion. But this would be mistaken for an obvious reason. We might be able to build into the robot the ingredients to find the derivation of various differential equations governing various pieces of the trajectory, but we would not at all be able to embed in the robot the ability to solve these differential equations and thereby to describe the trajectories. In most cases nonlinear phenomena are flagrantly present, which makes detailed solutions with realistic boundary conditions infeasible. The lesson to be learned from all the problems of chaos theory, incompleteness of physical theories, and the like, is that explicit representation of the motions of bodies is a mistaken enterprise, whether in thinking about human mental processes or about the construction of robots.

In being so negative about representations, I do not want to give the impression that I think we can therefore introduce habits as a panacea for our ways of thinking about the behavior of intelligent agents. There is still much that we need to learn about the theory of habits, and above all about how they can be learned by intelligent agents of our own construction. All the same, the emphasis on habits rather than representations is, in my view, the right one.