A planning language for embedded systems

Luca Spalazzi
Istituto di Informatica, University of Ancona, Via Brecce Bianche, 60131 Ancona, Italy

Short title: A planning language for embedded systems
Mailing address: Luca Spalazzi, Istituto di Informatica, University of Ancona, Via Brecce Bianche, 60131 Ancona, Italy
phone: +39 71 2204829
fax: +39 71 2204474
e-mail: [email protected]
The research described in this paper has been done as part of MAIA, the integrated AI project developed at IRST (Istituto per la Ricerca Scientifica e Tecnologica - Trento, Italy). Partial support has been provided by CNR (Italian National Research Council), Progetto Finalizzato Robotica (Special Project on Robotics). This work owes a lot to all the members of the Mechanized Reasoning Groups in Trento and Genoa. We thank Fausto Giunchiglia and Paolo Traverso for their invaluable support to the research described in this paper. The acting and sensing level of MAIA has been developed by the people at IRST working on Vision and Speech Recognition. Several people at IRST worked on the planning and user level of MAIA. We thank all of them. In particular we would like to thank Alessandro Cimatti and Roberto Giuri. Michael Georgeff and Sam Steel have provided useful feedback on various aspects of the work described in this paper.
Abstract

Recent research in planning is focusing more and more on planning systems working in "real world" domains. These systems need to act in, sense, and represent the real world. Furthermore, no action, even if apparently simple, is guaranteed to succeed and, therefore, no planning can be "sound" (with respect to the real world) without taking into account possible failures. This is mainly due to the intrinsic complexity of reality. A planning language is therefore required to represent explicitly failures, sensing tasks, planning tasks, and task combinations. In this paper, we propose a planning language (called L) which addresses the above features. L allows representing the basic planning activities, the control structures and the basic operations to deal with failures. As a consequence, a uniform representation is used to describe both acting/sensing in the external world and basic planning activities. In this paper, we give the syntax and the semantics of L. Furthermore, we give some examples from an application (the MAIA project) which uses L as its planning language.
1 Introduction

Autonomous and intelligent systems working in "real world" domains have to deal with the intrinsic complexity of dynamic and unpredictable environments. These systems are called embedded systems. They have to decide when an action has to be performed and what kind of action, i.e. plan formation (or simply planning) and plan execution. As a consequence, these systems need internal models of the external world to reason about problems, to find solutions and to perform high-level tasks. Moreover, they rely heavily on perceiving and acquiring information (e.g. through sensors) from the real world, trying to capture the actual world state, i.e. sensing. They also need to control actuators in order to act in the environment, i.e. acting. Finally, even if acting and sensing capabilities allow systems to work in real world domains, these systems often do not work properly. They fail to execute actions and rarely perceive the external world correctly. This is mainly due to the intrinsic complexity of reality and to the fact that actuators and sensors are not perfect (e.g. in a navigating robot, sonars are not precise enough). Thus no action, even if apparently simple, is guaranteed to succeed and, therefore, no planning can be "sound" (with respect to the real world) without taking into account failures.

Up to now, the languages to describe embedded system activities arise from two different approaches. The first approach starts from a "theoretical" point of view, the so called theories of actions. They are languages focused on what an action is [McCarthy and Hayes, 1969; Rosenschein, 1981; Allen, 1984; Rao and Georgeff, 1991; Gelfond and Lifschitz, 1993; Lifschitz, 1993; Lesperance et al., 1994]. Internal models are usually assumed to be "good" enough to predict all (or most of) the external events. Similarly, information acquired from the environment is often assumed to be truthful (e.g. sensors are assumed to be reliable). In real world applications, autonomous agents cannot rely on these two fundamental hypotheses. For instance, a robot might "think" to be somewhere different from where it actually is, recognize an object as another, or have a model of the world that is inconsistent with the "real world". This is a consequence of the fact that no "perfect" model and no "fully reliable" sensor exist. The second approach starts from a "practical" point of view, the so called planning systems. They
are focused on how a plan can be built. Hence their languages are affected by their architectures and plan formation algorithms. For instance, classical planning [Fikes and Nilsson, 1971; Wilkins, 1988; Biundo et al., 1992; Hammond, 1990] is characterized by a full separation between the plan formation and the plan execution phases. When a failure occurs, a new plan formation is performed. This kind of architecture is based on the assumption that we have a highly predictable domain. The main problem of this architecture is that domains do not usually satisfy this assumption. Moreover, different plan formation algorithms have been used in classical planning. For example, different plan formation activities are based on state space search [Fikes and Nilsson, 1971], partial plan space search [Wilkins, 1988], theorem proving [Biundo et al., 1992] (deductive planning), and case memory search [Hammond, 1990] (case-based planning). Reactive systems have a different kind of architecture. They focus on the necessity of a fast reaction to events in a very dynamic environment [Brooks, 1986; Kaelbling, 1987]. They have precompiled plans or reflexes, one for each kind of stimulus from sensors. These systems do not have reasoning capabilities, even if reasoning can sometimes be useful. Finally, several systems (called situated systems) trying to integrate reasoning and reacting capabilities have been proposed [Georgeff and Lansky, 1986; Simmons, 1990; Firby, 1992; Beetz and McDermott, 1994; Ghallab and Laruelle, 1994]. Any of the previous solutions can be applied to certain classes of problems. For instance, most real world systems need to generate a plan to anticipate predictable events and situations and then to execute the plan (as in classical planning). However, in certain situations the same application has no time to plan ahead, but it needs to respond immediately to environment changes (as in reactive systems). Furthermore, different algorithms could be useful in different domain problems. In conclusion, systems and their planning languages need to satisfy some requirements to deal with different problems.
They must be able to use different planning algorithms depending on the problems they have to solve.
They must be able to deal with failures and to use different and flexible failure handling mechanisms.
They must have strategies considering that information acquired through sensors can be wrong.

In [Traverso et al., 1994], we have proposed a system (called MRG) which is part of the MAIA project (Advanced Model of Artificial Intelligence). It has a very simple but flexible architecture and a very powerful planning language (called L) which addresses the above features. In this paper, we extend L in at least three respects. (1) We provide a more general and powerful failure handling construct with which it is possible to represent both all the previous constructs and new failure handling mechanisms. (2) We provide an explicit representation of successful and failing actions. It is possible to combine them with the failure handling constructs to obtain more complex planning strategies. (3) The language now has a clear formal semantics.

L introduces an extended notion of plan (called tactic). Each basic planning activity is represented explicitly as an expression of the planning language by means of tactics. A tactic can describe different ways to generate a plan for a given goal, to respond to external stimuli, to execute a generated plan, and to sense the external environment. Tactics can also represent actions executable in the external world, like moving a block, moving a robot to a certain position and so on. The basic control structures include the usual constructs such as sequences, conditionals and repetitions. The basic operations for dealing with failure are also represented through appropriate control structures. These operations are the basic constructs for a variety of control mechanisms that can be used to detect and recover from failure. As a consequence, a uniform representation is used to describe both acting/sensing in the external world and basic planning activities. Complex tactics may describe when to generate a plan, when to respond to external stimuli and under which conditions to execute a generated plan. A tactic can state whether it is better to replan a new course of actions after a failure has occurred or to execute directly some precompiled failure handling strategy. In conclusion, in
L a plan represents a planning architecture, since different modules of the architecture are different tactics and the control over the activation of the modules is represented explicitly in tactics by means of the control structures. L has a formal semantics. It formalizes failure and success in plan execution, failure detection and reaction to failure. It takes into account the system's point of view, i.e. how the world is perceived by the system instead of how the world actually is.

The remainder of the paper is organized as follows. In sections 2 and 3 we describe the syntax and semantics of L respectively. Section 4 gives a brief description of the application where L has been used. Section 5 gives an example from the application. Some related works are discussed in section 6. Finally, some conclusions and future developments are in section 7.
2 The language L

L has been designed to represent facts (called expressions), plans (called tactics) and goals. Plans may contain both actions in the external world and planning activities. L is based upon an arity function a and four finite sets of symbols: V, FS, TS, GS. V is the set of variables, FS is the set of function symbols, TS is the set of tactic symbols, GS is the set of goal symbols. a is a total function from FS ∪ TS ∪ GS to the natural numbers. From a, V, FS, TS, GS we inductively construct the set E of expressions, the set T of tactics and the set G of goals as follows.
Definition 2.1 (Syntax of L) E, T, G are the smallest sets closed under the following construction:
Let x ∈ V. Then x ∈ E.
Let f ∈ FS such that a(f) = 0. Then f ∈ E.
Let f ∈ FS such that a(f) = n. Let e1, ..., en ∈ E. Then (f e1 ... en) ∈ E.
Let α ∈ T be a tactic with no free variables occurring in it. Then "α" ∈ E.
Let ω ∈ G be a goal with no free variables occurring in it. Then "ω" ∈ E.
(succeed) ∈ T, (fail) ∈ T.
Let t ∈ TS such that a(t) = n. Let e1, ..., en ∈ E. Then (t e1 ... en) ∈ T.
Let f ∈ FS. Then (get f) ∈ T.
Let e ∈ E. Then (plan for e) ∈ T.
Let e ∈ E. Then (exec e) ∈ T.
Let α, β, γ ∈ T. Then (iffail α β γ) ∈ T.
Let α, β ∈ T. Let p ∈ E. Then (if p α β) ∈ T.
Let x ∈ V. Let e ∈ E. Let α ∈ T. Then (let (x e) α) ∈ T.
Let α, β ∈ T. Then (fork α β) ∈ T.
Let g ∈ GS such that a(g) = n. Let e1, ..., en ∈ E. Then (g e1 ... en) ∈ G.
Some remarks. 0-ary function symbols are called constants. Boolean expressions are a subset of expressions. Boolean expressions are the constants true, false ∈ E and any expression of the form (f e1 ... en) where f is a symbol which belongs to a subset of function symbols called Boolean function symbols. Intuitively these expressions denote a truth value, true or false. Finally, unknown denotes a value which is unknown to the system. Tactics of the form (t e1 ... en) are called primitive tactics. They are the basic building blocks of L plans. Primitive tactics correspond to basic actions and they depend on the application. They are defined on the basis of tactic symbols and expressions which represent entities over which actions are executed.
(if p α β) is the usual conditional construct: α or β is executed depending on the truth value of the Boolean expression p. (let (x e) α) means that the current value of the expression e is assigned to the variable x and during the execution of the action α the value in x is used. It is the usual assignment construct of programming languages. It is useful since the value of an expression might change during the execution of α. Finally, fork is the usual operator to activate parallel tactics. The third component of the language is the set of goals. They are defined on the basis of goal symbols and expressions and represent desired behaviours of the system.

It is possible to extend the language by defining new tactic symbols. Let t be a new tactic symbol. Let α(x1, ..., xn) be a tactic where x1, ..., xn are variables occurring in α. Then the new tactic symbol is defined with the following equation:

(t x1 ... xn) = α(x1, ..., xn)

The new set of tactic symbols is TS ∪ {t}. It is possible to define recursive tactics as well (a sketch is given below).
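For instance (a purely illustrative sketch; the tactic symbol retry is not part of MAIA and is introduced here only as an example), a recursive tactic that keeps executing a plan until the execution succeeds can be written with the constructs introduced so far:

(retry x) = (iffail (exec x)
                    (retry x)
                    (succeed))

Here x is expected to denote the name of a tactic: if (exec x) succeeds, retry simply succeeds; if it fails, retry calls itself and the execution is attempted again.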
Besides the usual features of such a kind of language, L introduces some novelties. First of all, notice that any executable action may fail, since it is a piece of executable code. As a consequence, L allows handling failures. succeed and fail represent basic actions which generate a success and a failure respectively. They do nothing else but end the execution with a success or a failure. iffail is the main construct to detect failures and specify different strategies: in (iffail α β γ), if the action α fails, the action β is executed, and γ is executed otherwise. succeed, fail and iffail are the basic building blocks for failure handling. Indeed, we can use them to build new failure handling strategies ([Spalazzi, 1997]). For example, we may define:

1. (then α β) = (iffail α (fail) β)
2. (orelse α β) = (iffail α β (succeed))

The operator then implements the strategy which is usually adopted by embedded systems for a sequence of actions: the action β is executed only if α succeeds. On the other hand, orelse implements the strategy which tries an alternative action β when a failure of α occurs.

Secondly, in L we have the usual notion of action as "action in the external world". Besides it, we have the notion of action as "basic planning activity", i.e. sensing, plan formation and execution. For this reason, we say that L is an introspective language which has the so called introspective tactics. For instance, an executable action may be the module that acquires information from the environment by means of sensors or from its knowledge by means of reasoning. This kind of action (represented by the tactic get) is very important since it makes the link between actions and the system's capability of updating its knowledge explicit. Another example may be the action that, given a goal and a model of the world, generates a sequence of actions to be carried out by the system. This action (called plan formation) is represented by the tactic plan for. As a third example, we have the module that executes the corresponding actions given a plan. This action is called plan execution and it is represented by the tactic exec. Notice that, as a consequence, plan for and exec need goals and actions themselves to be handled as expressions. For this purpose, we introduce the notion of goal names and action names. For example, the name of the goal (At room 100) (the goal of having a robot at the position denoted by the constant room 100) is "(At room 100)" and the name of the action (goto room 100) (the action of moving the robot to room 100) is "(goto room 100)". Finally notice that get, plan for and exec are sets of tactics rather than single tactics.
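As an illustration of how introspective tactics compose (a sketch only; the tactic symbol achieve is not defined in MAIA and is used here purely as an example), a tactic that first generates a plan for a goal name x and then executes the generated plan could be written as:

(achieve x) = (then (plan for x) (exec plan))

Here plan is assumed to be the constant that denotes the last generated plan (this constant is introduced with the semantics of plan for in section 3.3; the navigation tactic of figure 7 follows the same pattern with the path planner path for).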
3 Semantics of L

The semantics can be given following different approaches. One of them consists of the representation of the "world" as it "actually is", i.e. how it would be perceived by a hypothetical perfect observer. Such a semantics could formalize the fact that the system might have a "bad" model of the world, might have sensors that are not perfect (as actually happens), and so on. However, this work has a different perspective. The semantics is based on the intuition that any system strategy can only take into account the "internal" knowledge, i.e. information which is actually available to the system. Indeed this is the approach followed for the actual implementation of L. Consider the following statement (due to Sam Steel, taken from the abstract of his seminar "Action under uncertainty"):

Go to Fred's. If Fred is in, give him this parcel, otherwise leave it in the garage.

This might be a plan that an autonomous system (Barney from now on) has to execute. Related problems are whether this plan is executable, whether it is correct, what are the features of a planning system able to generate such a plan, and so on. However, there is an even more basic problem. Indeed, the meaning of this statement is only apparently clear to Barney. Suppose he actually manages to go to Fred's. The condition Fred is in has now to be tested. Barney (as anyone) probably has an idea of how to find out whether Fred is at home (e.g. ring the bell, look around). Suppose that after these tests have been performed, he is convinced that Fred is not at home (e.g. he did not see him, and nobody opened the door). Then he does the right thing, leaving the parcel in the garage. He interprets Fred is in taking a subjective view, i.e. he bases his actions on his perception of the world. Suppose that Fred was actually at home, but he did not answer the ring and was not seen by Barney. It is possible to say that Barney has not executed the plan correctly, if an objective view is taken, i.e. the view of a hypothetical observer that perceives the world perfectly and knows everything about Fred and Barney. Now the question is whether the semantics should describe both views of the world or only one (and if one, which of them). The objective view gives more information. It explains Barney's behaviour from his creator's point of view, as it relates Barney's perception with the actual
state of the external world. However, it cannot be used by Barney himself. The information which must be taken into account is not available to him. The aim of this work is to give a semantics to
L so that it can be used by the embedded systems which support the language (see section 4). As a consequence, the semantics has to represent only what the system knows. In the following sections, the semantics of L is given keeping in mind the above considerations.
3.1 Basic definitions

The semantics of L is defined relative to a given structure U of the form U = ⟨D, W, I⟩, where D is the domain of interpretation, W is an abstract set of states and I is the interpretation function.

Domain of interpretation. D represents the set of objects of the world (see figure 1) in which the system is embedded (blocks, rooms, doors, etc.), and the representation of the system itself (plans, goals, etc.). In D it is possible to distinguish the subsets DO, DT and DG. DO is the set of objects of the world, DT is the set of the actions that the system is able to execute and DG is the set of the goals that the system is able to understand. The undefined value (represented by ⊥) and the truth values (true and false) belong to the domain as well. The undefined value is the interpretation of the symbol unknown and of objects and properties whose values are unknown to the system. Object properties are expressed by relationships over the domain denoted by function symbols. Thus, there exists a mapping between any function symbol and a relationship over the domain.
States and behaviours. Object properties may change over time. This fact leads to the introduction of the usual notion of state. Intuitively, any state captures the world in a particular situation. Formally, we have a set W which is the (possibly infinite) set of states. A finite sequence of states, repetitions allowed, is called a behaviour (see figure 1). It represents a possible evolution of the system, i.e. how the system changes while it is executing an action (see for example [Georgeff and Lansky, 1986]). As a matter of fact, during the execution of an action a system transits from a state to another and so on, until the action ends. Formally, the set of all the possible behaviours is B = W^+. The concatenation of two behaviours is defined as follows: if wi ∈ W for i = 1, ..., j−1, j, j+1, ..., n, b1, b2 ∈ B, b1 = w1 ... wj and b2 = wj ... wn, then b1·b2 = w1 ... wj−1 wj wj+1 ... wn. Intuitively it means that a new action starts at the end of the previous one. Concatenation is extended over sets of behaviours as follows: A·B = {b1·b2 | b1 ∈ A and b2 ∈ B}. W is the null element for set concatenation, i.e. A·W = W·A = A. Moreover, in the behaviour algebra there are the usual set theory operations like union (∪), intersection (∩) and difference (\).
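For instance, if b1 = w1 w2 w3 and b2 = w3 w4 w5, then b1·b2 = w1 w2 w3 w4 w5: the final state of b1 and the initial state of b2 coincide and appear only once in the concatenated behaviour.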
Interpretation. I assigns interpretations to variables and symbols. From them, we are able to interpret expressions, tactics and goals.
3.2 Expression semantics

The interpretation of expressions must take into account the fact that values and relationships denoted by variables and function symbols may change over time. For example, let us suppose that robot position is a symbol which denotes the current position of a robot. Its value (i.e. its interpretation) varies according to the actual position of the robot. When it is in the position A (the robot's sensors perceive the position A), the interpretation of the symbol robot position is A ∈ DO. Formally we have the following definitions.
Definition 3.1 (Interpretation of function symbols) Let U = ⟨D, W, I⟩ be a structure. Let w ∈ W. Let f ∈ FS. If a(f) = 0, then the interpretation of f is f_w ∈ D. If a(f) = n, then the interpretation of f is f_w : D^n → D.

Now it is possible to define the value denoted by an expression.

Definition 3.2 (Interpretation of expressions) Let U = ⟨D, W, I⟩ be a structure. Let w ∈ W. Let x ∈ V. Let e1, ..., en ∈ E. Let f ∈ FS such that a(f) = n. Then:
The interpretation of x is x_w ∈ D.
The interpretation of (f e1 ... en) is (f e1 ... en)_w = f_w(e1_w, ..., en_w).

This is a well-known way of giving semantics to expressions (for example see [Harel, 1984]). The Boolean functions are interpreted as functions from D to {true, false} ⊆ D. The names of goals
and tactics are expressions in L. They are interpreted as elements of DG and DT respectively. For example, "α"_w = α ∈ DT and "ω"_w = ω ∈ DG. There is a bijective relationship among tactics in DT (goals in DG) and tactics in T (goals in G) of L. We adopt the convention of writing them in the same manner and saying explicitly which set they belong to when it is important. Besides the names, we may also have expressions which denote goals and tactics, for example an expression e ∈ E such that in the state w, e_w ∈ DT or e_w ∈ DG.

Now we introduce some set restrictions that we use in the rest of the paper. They are an extension of the operators defined by [Georgeff and Lansky, 1986]. Let φ be an expression and let d ∈ D be a value.

!A^φ_d = {b | b ∈ A ∧ b = w0 ... wn ∧ φ_wn = d}    (1)
?A^φ_d = {b | b ∈ A ∧ b = w0 ... wn ∧ φ_w0 = d}    (2)
#A^φ_d = {b | b ∈ A ∧ b = w0 ... wn ∧ ∀wi: φ_wi = d}    (3)

!A^φ_d (?A^φ_d) is the set of the behaviours of A such that in the final state (in the first state) the interpretation of the expression φ is d ∈ D. #A^φ_d is the set of the behaviours of A such that in each state of the behaviour the interpretation of φ is d ∈ D.
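For instance (a direct instantiation of the definitions above), if A is the set of behaviours of the action (goto room 100), then !A^{robot position}_{room 100} is the set of those behaviours whose final state, as perceived by the system, has the robot in room 100, while #A^{robot position}_{room 100} selects the behaviours in which the robot stays in room 100 throughout.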
3.3 Tactic semantics

A lot of works formalize actions as sets of pairs of states (see for example [Harel, 1984]). In this work, actions have been formalized as sequences of states. Indeed the same action may transit through different states depending on the environmental events detected by the system sensors. Two behaviours of an action may start from the same state ws and reach the same final state wf but have different intermediate states (see the first two behaviours in figure 2). Furthermore, a system embedded in the real world has to consider the possibility of failures. We have an action failure when the related executable code aborts. For example, the action which moves a robot to a given position may abort when there is an obstacle along the path. Notice that an action may change the world even if it fails. For example, the robot has changed its position even if it has failed (see the last three behaviours in figure 2). In order to take into account these facts, the behaviour set of an action is divided into two subsets [Georgeff and Lansky, 1986]: a set of successful behaviours (success set) and a set of failing behaviours (failure set). For example, in figure 2, S((goto room 100)) and F((goto room 100)) are the success set and the failure set of (goto room 100) respectively. We prefer to have two subsets instead of a proposition success in the final state of a behaviour since we may have two different behaviours with the same final state; one belongs to the success set and the other one to the failure set. Furthermore, a behaviour may belong to the success set of an action (e.g. (goto room 100)) and to the failure set of another action (e.g. (goto room 200)) since the robot has followed a wrong path. The success and failure sets of tactics are formally defined on the basis of the tactic symbol interpretation which is given by I.
Definition 3.3 (Tactic symbol interpretation) Let U = ⟨D, W, I⟩ be a structure. Let t ∈ TS such that a(t) = n. The interpretation of t is t_w = ⟨S_w(t), F_w(t)⟩, where S_w(t) and F_w(t) are the functions:

S_w(t) : D^n → 2^({w}·B)   and   F_w(t) : D^n → 2^({w}·B)

Notice that any such function generates behaviours which start from w, the state where the symbol is interpreted. Now we can define the interpretation of tactics as follows.
Definition 3.4 (Tactic interpretation) Let U = ⟨D, W, I⟩ be a structure. Let t ∈ TS such that a(t) = n. Let t_w = ⟨S_w(t), F_w(t)⟩ be the interpretation w.r.t. w ∈ W of t. Let p, e, e1, ..., en ∈ E. Let e_w, e1_w, ..., en_w be the interpretations w.r.t. w ∈ W of e, e1, ..., en, respectively. Let x be a variable. Let α, β, γ ∈ T. Let plan ∈ FS such that a(plan) = 0. Then:

S((t e1 ... en)) = ∪_{w∈W} S_w(t)(e1_w, ..., en_w)
F((t e1 ... en)) = ∪_{w∈W} F_w(t)(e1_w, ..., en_w)

S((succeed)) = W        F((succeed)) = ∅
S((fail)) = ∅           F((fail)) = W

S((iffail α β γ)) = S(α)·S(γ) ∪ F(α)·S(β)
F((iffail α β γ)) = S(α)·F(γ) ∪ F(α)·F(β)

S((if p α β)) = ?S(α)^p_true ∪ ?S(β)^p_false
F((if p α β)) = ?F(α)^p_true ∪ ?F(β)^p_false

S((let (x e) α)) = {w s0 ... sn | s0 ... sn ∈ S(α) ∧ s0 = w^x_{e_w} ∧ ∀si: x_si = e_w}
F((let (x e) α)) = {w s0 ... sn | s0 ... sn ∈ F(α) ∧ s0 = w^x_{e_w} ∧ ∀si: x_si = e_w}

where w^x_{e_w} means a state which is equal to w for the interpretation of every function symbol, but for the variable x we have x_s0 = e_w.

S((fork α β)) = S(α) ∩ S(β)
F((fork α β)) = (S(α) ∩ F(β)) ∪ (F(α) ∩ S(β)) ∪ (F(α) ∩ F(β))

S((get f)) = {w t | w, t ∈ W ∧ t = w^f ∧ f_t ≠ ⊥}
F((get f)) = {w t | w, t ∈ W ∧ t = w^f ∧ f_t = ⊥}

where w^f means a state which is equal to w for the interpretation of every function symbol but f, which might have a different interpretation.

S((exec e)) = ∪_{w∈W} S_w(exec)(e_w) = {w w1 ... wn | e_w ∈ DT ∧ w w1 ... wn ∈ S(e_w)}
F((exec e)) = ∪_{w∈W} F_w(exec)(e_w) = {w w1 ... wn | e_w ∈ DT ∧ w w1 ... wn ∈ F(e_w)}

S((plan for e)) = ∪_{w∈W} S_w(plan for)(e_w) = {w w1 ... wn | e_w ∈ DG ∧ plan_wn ∈ DT}
F((plan for e)) = ∪_{w∈W} F_w(plan for)(e_w) = {w w1 ... wn | e_w ∈ DG ∧ plan_wn = ⊥}
Some remarks. The failure set of the tactic succeed is empty, since it always succeeds. Its success set is equal to W since succeed does nothing else but succeed (W is the null element for set concatenation). Vice versa, the success set of fail is the empty set and its failure set is W. In a conditional tactic, when the expression p does not denote a truth value, then the action is not executed. The semantics of let captures the fact that the value denoted by e is assigned to the variable x and x preserves that value till the end of α. The parallel tactic (fork α β) succeeds when both tactics do, otherwise it fails. Its semantics also captures the fact that the two tactics cross the same states. In L, the sensing tactic is an action which assigns a value to the symbol which is its argument and does not change the values of the other symbols. The tactic fails when, at the end of the action, f is undefined (it has the value ⊥), otherwise it succeeds. The previous value of f is not important. For example, f_w may be undefined or equal to f_t. In the interpretation of an execution tactic, S(e_w) and F(e_w) are the behaviours of the tactic denoted by e. Thus the behaviours of (exec e) are the same as those of the tactic executed. When e_w ∉ DT (e does not denote a tactic) the exec does nothing. In the interpretation of plan for, plan is a constant such that its interpretation is the last generated plan (plan_w ∈ DT ∪ {⊥}). The interpretation of plan for is general and does not force any condition over the relationship which must hold between the goal and the action proposed. It also does not give any constraint on how the plan is obtained. As a consequence, we maintain a general notion of plan formation and leave the user the choice of the appropriate plan formation algorithm. When e_w ∉ DG (e does not denote a goal) the plan for tactic does not plan. Now it is possible to give the semantics of the derived tactic operators then and orelse.
S((then α β)) = S(α)·S(β)
F((then α β)) = F(α) ∪ S(α)·F(β)
S((orelse α β)) = S(α) ∪ F(α)·S(β)
F((orelse α β)) = F(α)·F(β)
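As a consistency check (a worked step added here for clarity), the equations for then can be derived from its definition (then α β) = (iffail α (fail) β) and the semantics of iffail in Definition 3.4:

S((then α β)) = S(α)·S(β) ∪ F(α)·S((fail)) = S(α)·S(β) ∪ F(α)·∅ = S(α)·S(β)
F((then α β)) = S(α)·F(β) ∪ F(α)·F((fail)) = S(α)·F(β) ∪ F(α)·W = F(α) ∪ S(α)·F(β)

using the facts that concatenation with the empty set gives the empty set and that W is the null element for concatenation. The equations for orelse are obtained in the same way from (orelse α β) = (iffail α β (succeed)).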
3.4 Goal semantics

Most of the works in AI represent a goal as a condition on the final states of actions (see for example [Biundo et al., 1992; Fikes and Nilsson, 1971]), i.e. the set of desired final states. However, a system must be able both to reach a particular state and to reach it in a particular way [Georgeff and Lansky, 1986]. Thus a goal must be described as a set of desired behaviours instead of a set of final states. For example, [Georgeff and Lansky, 1986] use conditions on the first state or on all the states of a behaviour. For example, the desired behaviours of the goal (Across room 48 room 65 room 100) are all the behaviours with an intermediate state where the robot is in room 48, a state where the robot is in room 65 and a final one where the robot is in room 100. Formally, let Γ((Across room 48 room 65 room 100)) be the desired behaviour set; we have:

Γ((Across room 48 room 65 room 100)) = !B^{robot position}_{room 48} · !B^{robot position}_{room 65} · !B^{robot position}_{room 100}

where robot position is a constant. The desired behaviour set of a goal is formally defined on the basis of the interpretation of goal symbols, which is defined as follows.
Definition 3.5 (Goal symbol interpretation) Let U = ⟨D, W, I⟩ be a structure. Let g ∈ GS such that a(g) = n. Then the interpretation of g is a function g_w : D^n → 2^B.

As a consequence, we have the following definition of goal interpretation.
Definition 3.6 (Goal interpretation) Let U = ⟨D, W, I⟩ be a structure. Let g ∈ GS such that a(g) = n. Let g_w be the interpretation w.r.t. w ∈ W of g. Let e1, ..., en ∈ E. Let e1_w, ..., en_w be the interpretations w.r.t. w ∈ W of e1, ..., en, respectively. Then:

Γ((g e1 ... en)) = ∪_{w∈W} g_w(e1_w, ..., en_w).

Notice that the notion of goal achievement and the notion of successful action are different. The fundamental assumption in this work is that success and failure of actions depend on the actions themselves. They depend on the fact that the related executable code aborts or not. On the other hand, a goal is specified by a set of desired behaviours. We may have different actions with behaviours which satisfy a given goal. Thus the notion of goal achievement depends on both the action and the goal. An action may fail and nevertheless satisfy the goal. For example, the action (goto A) moves the robot to the position A. It may be executed to satisfy the goal (At A'), i.e. the goal of having the robot in the region A' which contains A. If the action aborts when the robot is inside A' but not at the position A, the action fails but the goal is achieved.
4 Application

In this section, we describe the actual implementation of L in MAIA. Even if a detailed discussion about MAIA is out of the scope of the present paper, we give a brief description of it in order to make the paper self-contained. For more details see [Stringa, 1991; Antoniol et al., 1994; Armando et al., 1995]. MAIA (Advanced Model of Artificial Intelligence) is a complex large-scale application developed at IRST by a large team. The application aims at the development of a system able to control and coordinate a mobile robot (figure 3) which navigates in unpredictable environments (such as the inside of a building) and performs high level tasks (such as transportation in offices). Indeed MAIA is a good testbed for L. It requires different kinds of missions (e.g. navigation, interaction with human beings, etc.), both strategic and reactive capabilities (since it must plan missions and navigate in a dynamic and unpredictable environment), and failure handling strategies (since in the environment where MAIA operates no action is guaranteed to succeed). MAIA is realized as a distributed architecture split into three levels (see figure 4).
User Level. Users can request the mobile robot to perform desired tasks (the goals of L) by means of user interfaces (see figures 4 and 5). The user interfaces also allow the system to send and receive data (the facts or expressions of L) in order to monitor the robot activities.

Acting and Sensing Level. Task execution is performed by means of modules controlling the robot's sensors and actuators (see figure 4). Sensorial devices comprise sonars, odometers and cameras. Actuators comprise the robot's engines (the robot has a maximum speed of 1 meter/second and an average speed of 0.5 m/sec).
Planning Level. MRG is in charge of planning activities in order to perform the requested tasks and to control their execution (see figure 4). It comprises several modules, each dedicated to a particular task (see figure 6). Its architecture (see [Traverso et al., 1994; Spalazzi, 1997]) is relatively simple and completely general purpose. It provides the basic mechanisms to acquire user's goals and real world events, to activate modules that execute actions at the acting/sensing level and to manage tactics.

The main components of MRG are the communication system, the reasoning system, the language interpreter, and the knowledge bases (see figure 6). The communication system is composed of interface channels. They provide the basic interaction between MRG and the other modules of MAIA. They allow communication between the system and (human and artificial) external agents (e.g. the user interfaces) through the input and output channels. Moreover they provide a sort of feedback loop, i.e. the output of sensors and actuators can be given in input through internal channels. The reasoning system is composed of two modules (the scheduler and the reasoner) and three data structures (the goal/fact queue, the goal/fact-tactic table and the tactic libraries). The goal/fact queue receives goals and facts through the input and internal channels. The scheduler waits for a goal or a fact in the goal/fact queue, then it activates a reasoner and checks the queue again. The reasoner is one of the plan formation modules of MAIA. It is a reactive planner that, given a goal or a fact, looks
for the corresponding tactic in the goal/fact-tactic table, loads the tactic from a tactic library and executes it through the language interpreter. More than one reasoner can be active at the same time. This provides the capability to deal with asynchronous and parallel events and goals. Given a goal or a fact, there exists one and only one tactic in the goal/fact-tactic table which can be executed. The reasoner does not check any precondition before the activation of a tactic. Notice that the reasoner itself is linked to a tactic symbol of L, according to the principle that any system activity has to be represented in the planning language. Namely, it is a plan formation tactic. The language interpreter is the part of the system which executes a tactic. It is able to trap a module abort (a failure) and eventually propagate it to a failure handling tactic. The interpreter is a tactic as well, i.e. a plan execution tactic. The knowledge bases contain information about the external world, the internal MRG status and its resources. The information is denoted by expressions of L. Any constant is bound to a value in one of the knowledge bases through a pair ⟨symbol, value⟩. Any function symbol is bound to a function (a piece of executable code) which accesses one of the knowledge bases. This means that the knowledge bases (figure 6) have interface functions, each of them linked to a function symbol. Even if any knowledge base has the same kind of interface with the language L, it can have a different representation technique. It may be a general purpose knowledge base, for example using first order logic, or a domain dependent knowledge base, for instance a topological graph representing a navigation domain. Actually, in MAIA, one of the knowledge bases contains a map of the environment (depicted in figure 5), a graph where nodes represent locations (rooms, corridors, etc.) and arcs represent accessibility among locations [Armando et al., 1995]. Each node contains information such as geometric data, items and people that are usually in the location. The tactic libraries are a particular kind of knowledge base. They are sets of tactics devoted to specific tasks. Any tactic symbol is linked to a module of the MAIA architecture (figure 4). There are tactics controlling MAIA's sensors and actuators as well as planning tactics (i.e. plan formation and execution tactics). At the acting/sensing level there is a set of primitive tactics which control actuators, such as "follow the wall until a specific landmark is reached" (the tactic follow wall), "turn right/left" (the tactic rotate), "move for a given distance" (the tactic move), and so on. There are also primitive sensing tactics, such as "control the sonars to detect possible obstacles" (the tactic (get obstacle), where obstacle is a Boolean constant which is true when the system detects an obstacle). The primitive sensing tactics have been implemented in such a way that they return the constant unknown whenever they fail, according to their semantics. Both acting and sensing tactics are connected to output channels to send facts to the user interfaces and to internal channels to allow for reacting. As mentioned above, at the planning level we have plan formation and execution tactics, i.e. the introspective tactics. The code of the tactic interpreter is linked to the tactic symbol exec. In this way, according to its semantics, the execution of a tactic α and the execution of (exec "α") produce the same behaviours. Moreover, MAIA has three basic plan generation modules. With these basic planners, we may define several planning strategies. The first one is the reasoner of MRG described above (see figure 6). It is linked to the tactic symbol plan for. Besides this planner, there are two special purpose classical planners: the path planner and the mission scheduler. The path planner is linked to the tactic symbol path for.
It must look for a path in the building to go from the current position of the robot to the target location. The path planner uses a graph search algorithm (Dijkstra's algorithm [Dijkstra, 1959; van Leeuwen, 1990]) to look for a path in a graph representing the building. The mission scheduler is linked to the tactic symbol mission-for. It allocates time for each task. It must schedule the robot activities, distinguishing "spot" missions and periodic missions. It solves a set of temporal constraints. All these tactics update the status constant plan with the produced tactic, as stated by the semantics. When they fail, plan is unknown. In conclusion, when users combine tactics in different ways, they actually combine the modules of MAIA, obtaining different architectures. Users can flexibly customize the whole planner by means of L to feature different behaviours depending on the requirements of complex, real world, large-scale applications.
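To make this kind of composition concrete (an illustrative sketch only; the tactic symbol run mission is not part of MAIA, and the exact argument expected by the mission scheduler is an assumption here), the mission scheduler and the interpreter can be combined in the same way as the path planner is combined with exec in figure 7:

(run mission x) = (then (mission-for x) (exec plan))

Since mission-for updates the constant plan with the schedule it produces, (exec plan) then executes the scheduled activities; if the scheduler fails, the whole tactic fails (and plan is unknown).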
5 An example

Let us consider an example of a navigation session. A user requests to transport loads (e.g. mail) to a given room through one of the user interfaces (figure 5). The scheduler activates a reasoner which looks for a tactic in the goal-tactic table. The reasoner chooses a tactic which reflects the adopted planning paradigm. All the planning paradigms and techniques depend on their basic planning activities and control mechanisms [Traverso et al., 1992; Spalazzi, 1997]. Namely, they depend on how the basic planning tasks are implemented and on how they activate, combine and control the various basic planning activities. In this section, we see a solution which is the integration of the reactive and classical approaches (the tactic (goto goal) in figure 7). This is the solution adopted in the current implementations of MAIA for this kind of task. Indeed, a purely reactive approach has no global strategy to satisfy a goal or repair a failure. First of all, the tactic activates the path planner (the tactic path for). Given the target location, the knowledge about the current position of the robot, and a topological map of the building, the path planner looks for a path (e.g. the shortest) to reach the target location (see figure 5). A plan is a set of primitive acting and sensing tactics combined in such a way as to implement a reactive navigation system. This provides the system with the ability to react immediately to environmental changes. In MAIA, we have a set of primitive acting tactics (like follow wall and so on), a set of sensing tactics to detect some "dangerous situations", and a set of basic reactions to them. The most common "dangerous" situation is the presence of an obstacle along the path. In order to detect obstacles, we need continuously active sensors which fail when an obstacle is detected, i.e. the tactic (fail when obstacle) (a representation of this tactic in L is given in figure 8). When an obstacle is detected, the system activates the basic reaction which locally changes the trajectory to avoid it (the tactic change trajectory). If it is impossible to change the trajectory or the robot gets lost, then a failure occurs. The tactics which have been obtained by composing the above basic actions are called reflexes. When we express reflexes in L, they look as in figure 9. For a complete description of basic navigation activities and reflexes see [Cattoni et al., 1994]. In conclusion, the plan returned by path for looks like a sequence of reflexes:
(then (reflex "(follow wall)")
      (reflex "(rotate right)")
      (reflex "(follow wall)")
      (reflex "(move 10)") ...)

(see figure 5).
Now we must take into account that reflexes cannot guarantee that their execution will end as expected. In our example, a reflex might find an unavoidable obstacle along its way or it may fail to detect a landmark and get stuck at the end of the corridor. In all these cases, the executable code related to the tactic interrupts the execution and reports an exception message to the system. From a formal point of view, we can represent the possible behaviours of such a tactic using the semantics given in the previous section. We can describe all the successful behaviours as those which lead to a state where the Boolean expression (At landmark) is true, i.e.

S((reflex "(follow wall landmark)")) = !B^{(At landmark)}_true.

The failing behaviours are all the behaviours which violate some conditions, e.g. there is an unavoidable obstacle, the sensor does not detect the landmark and the robot gets lost, etc. Formally,

F((reflex "(follow wall landmark)")) = !B^{obstacle}_true ∪ ...

In the example, there is an unavoidable obstacle along the path (see figure 5), thus the tactic change trajectory fails. As a consequence, the reflex and then the plan (i.e. (exec plan) in figure 7) fail as well. In this situation, the tactic goto tries to generate a new plan in a classical fashion. A new plan consists of a new path to reach the desired room (see figure 10). This is just a very simple example of integration. Actually, more complex strategies are possible. For example, the system can interact with the user to decide what strategy to adopt.
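As a purely hypothetical sketch of such a strategy (the tactic symbol goto or ask and the Boolean constant user answer are not defined in MAIA; here we simply assume that the user interfaces can set user answer through the input channels), interaction with the user upon failure could be expressed with the constructs of section 2:

(goto or ask x) = (orelse (goto x)
                          (then (get user answer)
                                (if user answer
                                    (goto or ask x)
                                    (fail))))

If the navigation tactic of figure 7 fails, the system senses the user's decision; if the answer is true, the whole strategy is retried, otherwise it fails.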
6 Discussion

L may be confused with conventional programming languages (e.g. LISP), but the similarity is only apparent and syntactical. The building blocks of L are primitive tactics. They represent basic activities related to system modules. In this work, the programming constructs that we need to build planners have been circumscribed. For instance, iffail, orelse, succeed and fail allow the user to handle failure. iffail is different from the usual conditional constructs of programming languages, since iffail allows us to detect and manage failures (i.e. internal and external actions which are not able to continue the execution), while conditionals operate with Boolean expressions and detect truth values (we have the control structure if). Analogously, orelse is different from the Boolean operator or. Furthermore, the level of abstraction is much higher than in a general purpose programming language. Within L, the user does not have to develop the application from scratch.

The classical planning languages (see for example [Fikes and Nilsson, 1971; Wilkins, 1988]) usually represent actions in terms of their preconditions and effects. L has no explicit representation of preconditions and effects. In this way, the plan formation tactic can quickly generate a plan and the system can react in a very short time to external events (e.g. with precompiled plans in the goal/fact tactic table). Nevertheless, it is easy to perform precondition checking and effect assertion within a tactic using the building blocks of L (a sketch is given below). On the other hand, classical planning languages do not address the issue of representing and managing failures. Finally, the actions represented by these languages are just external ones. They do not represent basic system activities like plan formation and plan execution. Even from the architectural point of view, classical planning is not flexible enough. As mentioned in section 1, the classical planning architecture divides plan formation time from plan execution time. In L, it is possible to represent a planning architecture inside a tactic, specifying when sensing, plan formation and execution must be activated. In this respect, L is a planning language which is not only able to represent traditional plans, but also real planning architectures such as those of reactive systems and more complex embedded systems. Notice that this feature is not provided by metaplanning systems (see for example [Stefik, 1981; Wilensky, 1979]), since their languages are just able to represent plan formation activities.
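For instance (a minimal sketch; the tactic symbol guarded and the form of the precondition are purely illustrative and not part of MAIA), a precondition check can be wrapped around any tactic name x with the constructs of section 2:

(guarded p x) = (if p (exec x) (fail))

The Boolean expression p plays the role of a precondition: the tactic named by x is executed only when p currently denotes true, and the whole tactic fails otherwise. In the same spirit, the expected effects can be verified after execution by sensing the relevant symbols with get.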
The works closest to ours are those on situated planning. These systems have languages which support their capability of integrating reactivity and reasoning. Let us examine two noteworthy systems. TCA [Simmons, 1990] has a planning language which provides a representation for various planning activities, namely plan formation, plan execution, and sensing (TCA plans are called Task Trees). The system can flexibly perform interleaving of planning and execution, run-time changing of a plan and coordination of multiple tasks. Similarly to L, the TCA exception handling facilities support context-dependent error recovery, since different error handlers can be associated to different nodes in the task tree. Nevertheless, error handlers are not written explicitly in the TCA planning language. This
does not allow the user to define plans and failure handling mechanisms with the same language and to modify them flexibly. In L, failure capturing is explicitly represented by the construct iffail. Recovery from failure is uniformly represented by tactics. Finally, the TCA planning language does not have a formal semantics. In PRS [Georgeff and Lansky, 1986], the planning language is able to describe how certain sequences of actions and tests may be performed to achieve given goals or to react to particular situations. A plan in PRS is called a KA. Metalevel KAs encode various methods for choosing among multiple applicable KAs. They provide a high amount of flexibility in forming plans. The same amount of flexibility is provided in L by tactics. In PRS, plan execution and plan formation are not explicitly denoted by different tactics, like plan for and exec. This distinction is explicit in L tactics; it opens up the possibility to reason about how plan formation and plan execution can be performed to solve a problem; it opens up the possibility to reason about when it is better to generate or to execute a plan. As a further consequence of this fact, different planning techniques can be implemented naturally in L.
7 Conclusions and future work

This paper has presented L, a domain independent language for embedded systems. L can be seen as a step towards general purpose languages to build real world applications, i.e. languages allowing the user to fully customize the whole planner depending on the application requirements. This paper has described L and its semantics. They can be seen as a formal and general account of the fundamental functionalities that embedded systems should offer. L provides a definition of plans (tactics) in terms of possible successful and failing behaviours. Moreover, it provides a general definition of the meaning of plan generation and execution, gives a formal definition of sensing tactics, represents failure explicitly and explains the meaning of recovering from failure in an unpredictable environment. The work described in this paper has been fully inspired by an experimental application developed at IRST [Stringa, 1991]. Most of the examples in this paper have been taken from this application. The formalization of L described here has been extremely useful in the incremental building of the system, since it provides a clear and unambiguous description of its features and behaviours.

Future developments of this research include how to define, theoretically and generally, a way to generate and reason about tactics. Whereas it has been shown how to reason about classical plans, it is not obvious how to reason about tactics. [Traverso and Spalazzi, 1995; Traverso et al., 1996] show how a logical theory can be used to represent expressions that have some similarities with L tactics. The idea is to translate tactics and control structures into terms of the logic. Thus the logic can be used to reason about, prove properties of, and automatically generate tactics via theorem proving. This theory is also able to represent failures.
References

[Allen, 1984] J.F. Allen. Towards a General Theory of Action and Time. Artificial Intelligence, 23:123-154, July 1984.
[Antoniol et al., 1994] G. Antoniol, B. Caprile, A. Cimatti, R. Fiutem, and G. Lazzari. Experiencing real-life interaction with the experimental platform of MAIA. In Proceedings of the 1st European Workshop on Human Comfort and Security, 1994.
[Armando et al., 1995] A. Armando, A. Cimatti, E. Giunchiglia, P. Pecchiari, L. Spalazzi, and P. Traverso. Flexible Planning by Integrating Multilevel Reasoning. Journal of Engineering Application of Artificial Intelligence, 4:401-412, July 1995.
[Beetz and McDermott, 1994] M. Beetz and D. McDermott. Improving Robot Plans During Their Execution. In Proceedings 2nd International Conference on AI Planning Systems (AIPS-94), Chicago, IL, 1994.
[Biundo et al., 1992] S. Biundo, D. Dengler, and J. Kohler. Deductive Planning and Plan Reuse in a Command Language Environment. In Proc. 10th European Conference on Artificial Intelligence, pages 628-632, Vienna, Austria, 1992.
[Brooks, 1986] R. A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2(1):14-23, 1986.
[Cattoni et al., 1994] R. Cattoni, G. Di Caro, M. Aste, and B. Caprile. Bridging the Gap between Planning and Reactivity: a Layered Architecture for Autonomous Indoor Navigation. In Proc. of the International Conference IROS '94, Munich, Germany, 1994.
[Dijkstra, 1959] E. W. Dijkstra. A note on two problems in connexion with graphs. Numer. Math., 1:269-271, 1959.
[Fikes and Nilsson, 1971] R. E. Fikes and N. J. Nilsson. STRIPS: A new approach to the application of Theorem Proving to Problem Solving. Artificial Intelligence, 2(3-4):189-208, 1971.
[Firby, 1992] R. J. Firby. Building Symbolic Primitives with Continuous Control Routines. In J. Hendler, editor, Artificial Intelligence Planning Systems: Proc. of 1st International Conference, pages 62-69, San Mateo, CA, 1992. Morgan Kaufmann.
[Gelfond and Lifschitz, 1993] M. Gelfond and V. Lifschitz. Representing action and change by logic programs. Journal of Logic Programming, 17:301-322, 1993.
[Georgeff and Lansky, 1986] M. Georgeff and A. L. Lansky. Procedural knowledge. Proc. of IEEE, 74(10):1383-1398, 1986.
[Ghallab and Laruelle, 1994] M. Ghallab and H. Laruelle. Representation and Control in IxTeT, a Temporal Planner. In Proceedings 2nd International Conference on AI Planning Systems (AIPS-94), Chicago, IL, 1994.
[Hammond, 1990] K. J. Hammond. Explaining and Repairing Plans that Fail. Artificial Intelligence, 45(1-2):173-228, 1990.
[Harel, 1984] D. Harel. Dynamic Logic. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, volume II, pages 497-604. D. Reidel Publishing Company, 1984.
[Kaelbling, 1987] L.P. Kaelbling. An architecture for intelligent reactive systems. In Reasoning about actions and plans: Proceedings of the 1986 Workshop. Morgan-Kaufmann Publishers, 1987.
[Lesperance et al., 1994] Y. Lesperance, H. J. Levesque, F. Lin, D. Marcu, R. Reiter, and R.B. Scherl. A Logical Approach to High-Level Robot Programming - A Progress Report. In Control of the physical world by intelligent systems, working notes of the 1994 AAAI Fall Symp., 1994.
[Lifschitz, 1993] V. Lifschitz. A Language for Describing Actions. In Proceedings Second Symposium on Logical Formalizations of Commonsense Reasoning, Austin, Texas, 1993.
[McCarthy and Hayes, 1969] J. McCarthy and P. Hayes. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pages 463-502. Edinburgh University Press, 1969. Also in V. Lifschitz (ed.), Formalizing common sense: papers by John McCarthy, Ablex Publ., 1990, pp. 21-63.
[Rao and Georgeff, 1991] A. S. Rao and M. P. Georgeff. Modeling Rational Agents within a BDI-Architecture. In Proc. KR'91, Principles of Knowledge Representation and Reasoning, pages 473-484, Cambridge, Massachusetts, 1991. Morgan Kaufmann.
[Rosenschein, 1981] S. Rosenschein. Plan synthesis: A logical perspective. In Proc. of the 7th International Joint Conference on Artificial Intelligence, pages 331-337, Vancouver, British Columbia, 1981.
[Simmons, 1990] R. Simmons. An Architecture for Coordinating Planning, Sensing and Action. In Proceedings of the Workshop on Innovative Approaches to Planning, Scheduling and Control, pages 292-297, 1990.
[Spalazzi, 1997] L. Spalazzi. An Architecture for Planning in Embedded Systems. Applied Intelligence, Forthcoming, 1997.
[Stefik, 1981] M. J. Stefik. Planning and Meta-Planning. Artificial Intelligence, 16:141-169, 1981.
[Stringa, 1991] L. Stringa. An Integrated Approach to Artificial Intelligence: the MAIA Project at IRST. Technical Report 9103-13, IRST, Trento, Italy, 1991.
[Traverso and Spalazzi, 1995] P. Traverso and L. Spalazzi. A Logic for Acting, Sensing and Planning. In Proc. of the 14th International Joint Conference on Artificial Intelligence, 1995.
[Traverso et al., 1992] P. Traverso, A. Cimatti, and L. Spalazzi. Beyond the single planning paradigm: introspective planning. In Proceedings ECAI-92, pages 643-647, Vienna, Austria, 1992. IRST Technical Report 9204-05, IRST, Trento, Italy.
[Traverso et al., 1994] P. Traverso, A. Cimatti, L. Spalazzi, E. Giunchiglia, and A. Armando. MRG: Building planners for real world complex applications. Applied Artificial Intelligence, 8(3), 1994.
[Traverso et al., 1996] P. Traverso, L. Spalazzi, and F. Giunchiglia. Reasoning about acting, sensing, and failure handling: A logic for agents embedded in the real world. In M. Wooldridge, J. P. Muller, and M. Tambe, editors, Intelligent Agents Volume II - Proceedings of the 1995 Workshop on Agent Theories, Architectures, and Languages (ATAL-95), Lecture Notes in Artificial Intelligence. Springer-Verlag, 1996.
[van Leeuwen, 1990] J. van Leeuwen. Graph Algorithms. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A, pages 526-631. North-Holland, Amsterdam, 1990.
[Wilensky, 1979] R. Wilensky. Meta-Planning: Representing and Using Knowledge About Planning in Problem Solving and Natural Language Understanding. In A. Collins and E. Smith, editors, Readings in Cognitive Science. A Perspective from Psychology and Artificial Intelligence. Morgan Kaufmann, 1979.
[Wilkins, 1988] D. E. Wilkins. Practical Planning: extending the classical AI planning paradigm. Morgan Kaufmann, San Mateo, 1988.
Figure 1: Syntax and Semantics of L
Figure 2: Successful and failing behaviours of the action (goto room 100).
Figure 3: The robot MAIA
Figure 4: The architecture of MAIA
Figure 5: The map of the building and the navigation plan

Figure 6: The system architecture of MRG

(goto goal) = (then (path for goal)
                    (orelse (exec plan)
                            (goto goal)))

plan is a constant which denotes the plan generated by the path planner path for.

Figure 7: An example of navigation tactic
(fail when obstacle) = (then (get obstacle) (if obstacle (fail) (fail when obstacle)))
Figure 8: Obstacle detection
(reflex α) = (orelse (fork (exec α)
                           (fail when obstacle))
                     (then (change trajectory)
                           (reflex α)))

Figure 9: A reflex
Figure 10: The new navigation plan