Automatic Synthesis of Sequential Control Schemes

Inger Klein
Dept. of Electrical Engineering, Linkoping University
S-581 83 Linkoping, Sweden
email: [email protected]

Final Version

Do not distribute or copy without the author's permission.

March 22, 1993

Abstract

Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simpler problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This possibility opens up completely new applications such as operator supervision and simplified error recovery and restart procedures after a plant fault has occurred. Additionally we analyze reachability for a restricted class of problems. For this class we state a reachability criterion that may be checked using a slightly modified version of one of the above mentioned algorithms.


Preface

In this thesis three planning algorithms are presented. Section 5.1 is devoted to Algorithm 5.1, which is a modification of the algorithm presented in

C. Backstrom and I. Klein. Planning in polynomial time: the SAS-PUBS class. Computational Intelligence, 7:181-197, August 1991.

A short version of Section 5.2 appears in

C. Backstrom and I. Klein. Parallel non-binary planning in polynomial time. In Proceedings of the 12th International Joint Conference on Artificial Intelligence, pages 268-273, Sydney, Australia, Aug 1991.

Chapter 7 consists of the report

I. Klein and P. Lindskog. Automatic creation of sequential control schemes in polynomial time. Technical Report LiTH-ISY-I1430, Department of Electrical Engineering, Linkoping University, Linkoping, Sweden, 1992.


Acknowledgements

The years together with the Automatic Control group at Linkoping University have not only given me intellectual inspiration; the group members have also proven to be kind, helpful and true friends. First of all I would like to thank my supervisor Professor Lennart Ljung for excellent guidance. He has been a source of constant encouragement and he has always been willing to share his time in creative discussions during the progress of my work. He and Professor Torkel Glad have succeeded in creating a stimulating atmosphere in the research group. Dr. Christer Backstrom has been of invaluable help in developing the original idea; in fact, the work we have done together would probably not have been possible by either of us alone. Professor Peter Caines helped me rediscover my reachability results, which for some reason I had completely forgotten. Peter Lindskog and Jan-Erik Stromberg have been of great support. Apart from many interesting discussions they have provided invaluable help in reading the manuscript and suggesting improvements. Peter Lindskog has patiently answered all my questions, especially concerning G2. He has kindly given permission to use some of his figures, in particular from the compendium in Digital Control used at our department. The manuscript has also been improved significantly thanks to comments from Professor Lars Nielsen, Dr. Krister Forsman, Roger Germundsson, Tomas McKelvey and Jonas Sjoberg, who have read parts of the manuscript. In addition, I would like to thank my family, especially my parents, for all their love and support. Finally, I would like to thank my husband Christoffer for all his love and support, and for his patience throughout the completion of this work. To him I dedicate this thesis.


Contents

Abstract
Preface
Acknowledgements
1 Introduction
  1.1 Outline of the thesis
2 Sequential control in the past and present
  2.1 Sequential control from a practical viewpoint
    2.1.1 Sequential control in reality
    2.1.2 Designing and implementing sequential control schemes
    2.1.3 Automatic synthesis of control schemes
  2.2 A brief introduction to complexity theory
  2.3 Sequential control from an AI perspective
    2.3.1 Linear planners
    2.3.2 Non-linear planners
    2.3.3 Computational complexity
    2.3.4 Approaches to reduce the computational complexity
    2.3.5 Representational issues
  2.4 Some other related perspectives
    2.4.1 Algorithms based on graph theory search
    2.4.2 Discrete Event Dynamic Systems
    2.4.3 Other related fields
3 A formalism for describing planning problems
  3.1 States
  3.2 Action types and actions
  3.3 Planning
  3.4 Examples
4 Classes of planning problems
  4.1 Restrictions and classes of planning problems
  4.2 A discussion of complexity
  4.3 Modelling a problem to make it fit into a class
5 Polynomial planning algorithms
  5.1 Planning for the SAS(+)-PUBS class
    5.1.1 Existence of SAS(+)-PUBS plans
    5.1.2 The SAS(+)-PUBS planning algorithm
    5.1.3 Complexity analysis
  5.2 Planning for the SAS-PUS class
    5.2.1 Preliminaries
    5.2.2 Existence of SAS-PUS plans
    5.2.3 The SAS-PUS planning algorithm
    5.2.4 Complexity analysis
6 Planning for the SAS(+)-PUB class
  6.1 Deadlock detection
  6.2 The SAS(+)-PUB planning algorithm
  6.3 Examples
  6.4 Test cases
7 Implementation
  7.1 A short introduction to GRAFCET in G2
    7.1.1 The G2 real-time expert system
    7.1.2 GrafcetTool - a GRAFCET implementation
  7.2 The planning tool
  7.3 Example
8 Reachability for the SAS(+)-PUBS class
  8.1 Reachability criterion
  8.2 Examples
9 Conclusions
A Notations
B A brief introduction to the theory of relations
C Proofs of theorems in Chapter 5
  C.1 Proofs of theorems in Section 5.1
  C.2 Proofs of theorems in Section 5.2
    C.2.1 Proof of Theorem 5.35
    C.2.2 Proof of Theorem 5.37
Bibliography

1 Introduction

Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control [14]. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to sequential control, and sequential control programs are therefore still created manually without much support for a systematic approach. Recently sequential function charts (SFC) [45, 84], also known as GRAFCET, have attracted significant interest. GRAFCET shows many similarities with Petri nets [45, 127], but is more aimed at solving practical sequential control problems. Apart from being an international standard, one advantage is that function charts are easy to understand and use by, for example, operators, service personnel etc. In particular, sequential function charts give support during the program modularization phase as compared to, e.g., traditional low-level relay-ladder programming. Although the GRAFCET formalism represents a promising improvement in abstraction, it is important to remember that it is still nothing but a high-level programming language. Alternatively, sequential control can be viewed as a subfield of Discrete Event Dynamic Systems (DEDS). A DEDS is a dynamic system that evolves in accordance with the abrupt occurrence of events or actions. In contrast to the well-known models for dynamical systems, which can be described by differential or difference equations, there is not yet any unifying theory for DEDS. A considerable amount of work has been devoted to describing and analyzing DEDS, and to developing controllers for DEDS. Different types of models focus on different properties, and are thus suited for different applications and different tasks. An overview of the different approaches to DEDS is given in [49, 128].


The automatic synthesis of a sequential control program is in the AI literature called planning, and we will frequently use this as a synonym for automatic synthesis of sequential control schemes. A program called a planner automatically constructs a plan, i.e., a sequence of actions transforming a given initial state of the world (the plant) into a desired final state. In spite of all the work that has been done in this area, most of the methods used today are based on heuristics, and not much is known about the complexity of the different planners. However, lately there has been a growing interest in analyzing the complexity of planning [19, 21, 22, 23, 32, 33, 50, 67, 90], and the area is currently very active. Instead of explicitly describing the state space, the basic idea in AI planning is to describe the available actions that affect the state of the world. This makes it possible to identify and introduce restrictions that reduce the planning complexity, and this is also the approach followed in this thesis. Automatic synthesis of control schemes can only be achieved if a plant model is available. Consequently the designer must first choose a modelling language, then specify the goals of the controller and thereafter model the plant using the selected language. Finally, this model can be used by a systematic procedure to create a control strategy. This can be compared to conventional control theory, where the plant is modelled by differential or difference equations. The plant model can be used to generate a control law when specifying, for example, the desired poles of the closed-loop system (the goal). A system that is able to automatically create control schemes can of course also be used on-line if it is fast enough. This possibility opens up completely new applications such as operator supervision and simplified error recovery and restart procedures after a plant fault has occurred. Model-based synthesis is also useful when, for example, modifying the plant. It can be difficult to realize how the control program should be changed to take the new situation into account. However, using a model-based approach the modifications are limited to the model alone. Here we present algorithms that can handle a restricted class of sequential control problems. For this class the complexity only increases polynomially with the number of state variables. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., the algorithms fail only when no solution exists. The algorithms generate a plan as a partial order on a subset of the available actions. This partial order specifies the execution order. The generated plan is proven to be minimal and to show maximal parallelism. For a larger class of problems, we propose a method to split the original problem into a number of simpler problems that can each be solved using one of the presented algorithms. This extended algorithm is proven to be sound, which is the most important quality of an algorithm. It is also shown how a plan represented as a partial order can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool that takes a plant model and the goals as inputs and produces a GRAFCET chart as an output.


Additionally we analyze reachability for a restricted class of problems. For this class we state a reachability criterion that may be evaluated using a slightly modified version of one of the above mentioned algorithms.

1.1 Outline of the thesis

The outline of the thesis is as follows. Chapter 2 gives a short survey of how sequential control problems have been solved in the past and present, and how this relates to AI planning and other related areas. The formalism Extended Simplified Action Structures, which we use, is described in Chapter 3. In Chapter 4 we introduce some restrictions to reduce the complexity of planning, and discuss how these restrictions affect the computational complexity. These restrictions induce different classes of planning problems, and in Chapter 5 two polynomial time algorithms for solving problems from two such classes are presented. One of these is used in Chapter 6 to develop an algorithm for a larger class of problems. The presented algorithms have been implemented in a planning tool that generates the plans as GRAFCET charts, as is shown in Chapter 7. In Chapter 8 we give a polynomial time reachability criterion for a restricted class of problems. Finally, in Chapter 9, we conclude the presented work.


2 Sequential control in the past and present

In this chapter we briefly describe how sequential control problems have been solved in the past and present. In Section 2.1 we take a control engineer's viewpoint. Before describing how sequential control has been formulated in the domain of artificial intelligence (AI) (Section 2.3), we give a short introduction to complexity theory in Section 2.2. Some other related areas are described in Section 2.4.

2.1 Sequential control from a practical viewpoint

In this section we give a short survey of sequential control in the past and present from a control engineer's viewpoint. Section 2.1.1 gives some examples of sequential control problems, and some techniques to design and implement sequential control schemes are described in Section 2.1.2. In Section 2.1.3 we describe when automatic synthesis of such schemes may be useful, and formulate some requirements for such methods.

2.1.1 Sequential control in reality

As described earlier, a major part of the hard- and software developed for a controller is devoted to sequential control [14]. In this section we give a few examples of sequential control problems. Typically the sequential parts are invoked during startup or shut-down to bring the system into its normal operating region or into some safe standby region, respectively; see e.g. Example 2.1. Among the more obvious examples we find manufacturing systems and assembly systems; see e.g. Example 2.3.


Example 2.1 When handling chemical processes it is, for example, of utmost importance that the reactants have specified pressures and temperatures before the chemical reaction is allowed to start. This might require fairly complex initialization procedures where actions have to be performed in a certain order to prevent uncontrollable reactions, or to get any reaction at all. Consider Figure 2.1.


Figure 2.1: A chemical process plant.

To initialize the process in Figure 2.1 safely we may have to do the following:

1. Open solenoid valve MV1 until the pressure p1 is obtained.
2. Turn on the heater until the temperature T1 is obtained.
3. Open solenoid valve MV2 until the pressure p2 is obtained.
4. Start cooling via valve MV3. When the pressure p3 is obtained the process may be stable and we can activate the continuous input/output flow control. □
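Spelled out in code, such a start-up procedure is just a fixed sequence of actuator commands, each guarded by a sensor condition. The sketch below is only an illustration: the plant-I/O object `io` and the threshold constants P1, T1, P2 and P3 are hypothetical and not part of the example above.

```python
# A minimal sketch of the initialization sequence, assuming a hypothetical
# plant-I/O layer (open_valve, start_heater, start_cooling, read_pressure,
# read_temperature) and threshold values that are not given in the text.
import time

def wait_until(condition, poll_s=0.5):
    """Block until the given sensor condition becomes true."""
    while not condition():
        time.sleep(poll_s)

def initialize_plant(io, P1, T1, P2, P3):
    io.open_valve("MV1")                                  # step 1
    wait_until(lambda: io.read_pressure("p1") >= P1)
    io.start_heater()                                     # step 2
    wait_until(lambda: io.read_temperature("T1") >= T1)
    io.open_valve("MV2")                                  # step 3
    wait_until(lambda: io.read_pressure("p2") >= P2)
    io.start_cooling("MV3")                               # step 4
    wait_until(lambda: io.read_pressure("p3") >= P3)
    # The process should now be stable; continuous flow control can take over.
```

The individual commands are trivial; it is the ordering of the steps that the rest of the thesis is concerned with generating automatically.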

Example 2.2 Steritherm is an Alfa-Laval process for UHT sterilization of liquid food products, for example milk or cream. The liquid is heated to a high temperature during a short time interval. This kills all microorganisms in the product, and the product can then be stored at room temperature. The product is heated indirectly using plate heat exchangers, and cooled in the same way before it enters the packing machine.


In Figure 2.2 a picture of a simulation system is shown. The simulator was developed at the Department of Automatic Control at Lund Institute of Technology using the expert system tool G2 [42] (a short description of G2 is given in Section 7.1.1). A description of the implemented plant can be found in [12].

Figure 2.2: A simulation system of a Steritherm process plant.

The process is continuous but, once again, there are also significant sequential parts. Before production starts the plant must be sterilized, and it must also be cleaned at specific time intervals. These parts are essentially sequential. As can be seen in Figure 2.2 the typical actions are to turn specific motors on or off and to change the fluid flows by closing or opening valves. There are also ordinary PI-controllers and automatic mechanical devices that control the temperature, the input flow and the pressure while the plant is running in its normal operating mode. These can be turned on or off. □

We end this section with a more traditional example of sequential control, namely a system of manufacturing type.

Example 2.3 Suppose we want to design an automatic system for unloading a freight carrier as illustrated in Figure 2.3. The system contains four different


effectors, or motors: for moving the trolley horizontally, for moving the grabber vertically, for opening and closing the grabber, and for opening the hopper hatch. Information about the state of the system is provided through ten sensors indicating the position of the trolley, the position of the grabber and so on. □
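A plant description of this kind amounts to a set of binary state variables (the sensors) and a set of actions (the control signals) with pre- and post-conditions over those variables. The sketch below only illustrates that idea using the signal names of Figure 2.3; the dataclass layout and the chosen pre-/post-conditions are assumptions, not the formalism introduced in Chapter 3.

```python
# A rough sketch: sensors as binary state variables, control signals as actions.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str                                   # control signal, e.g. "E1D" (move grabber down)
    pre: dict = field(default_factory=dict)     # sensor values required before the action may start
    post: dict = field(default_factory=dict)    # sensor values expected when the action has finished

# Two illustrative actions; the chosen conditions are assumptions about the plant.
lower_grabber = Action("E1D", pre={"G1U": True}, post={"G1D": True, "G1U": False})
close_grabber = Action("E3C", pre={"G3O": True, "G1D": True}, post={"G3C": True, "G3O": False})

def applicable(action, state):
    """An action may start when all of its pre-conditions hold in the current sensor state."""
    return all(state.get(var) == val for var, val in action.pre.items())

print(applicable(close_grabber, {"G1U": False, "G1D": True, "G3O": True, "G3C": False}))  # True
```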

2.1.2 Designing and implementing sequential control schemes

Basically, practical sequential control problem solving consists of, or should consist of, four steps, namely:

1. Specifying the goals. Before starting the actual design procedure the goal of the design, i.e. what the resulting controller should achieve, must be clearly stated.

2. Specifying the means. In this step the means to achieve the specified goals are recognized. For instance, we identify the available sensors and actuators of the plant. This is where a model of the plant is being developed.

3. Synthesis. Synthesis of a control scheme, or planning, is the process of deciding upon a control scheme, i.e., finding an order in which the given¹ actions should be performed to achieve the specified goals. By actions we mean operations such as turning motors on or off, or opening or closing valves. Thus the set of actions is defined by the available actuators and control signals.

4. Implementation. In this step the control scheme is implemented, i.e., the actual controller or sequencer hard- and/or software is designed.

In practice these four steps seldom occur as distinct parts in the design procedure. Normally only a minor part of the time is spent on the first two steps, and very seldom are any formal models used. Instead the goal and the plant model are typically described informally in words, and the designer starts the synthesis without having access to a formal plant model. Furthermore, the synthesis and the implementation are often carried out interleaved. One important reason for all this is the absence of systematic design methods. Since the first two steps normally are carried out informally we do not describe them further at this stage.

¹ Usually the set of actions is given and well defined when the control engineer starts to work.


Figure 2.3: Sketch of the freight carrier unloading system. The control signals (outputs) are: E1D Move grabber down; E1U Move grabber up; E2R Move trolley to the right; E2L Move trolley to the left; E3C Close grabber; E3O Open grabber; E4O Open hatch; ALARM Turn on alarm lamp. The sensor signals (inputs) are: G1D Grabber down; G1U Grabber up; G2R Trolley in rightmost position; G2L Trolley in leftmost position; G3C Grabber closed; G3O Grabber open; G5F Storage full; G4O Hatch open; G2AR Trolley almost in rightmost position; START Start push-button affected; STOP Stop push-button affected; EMERGENCY STOP Emergency stop affected; ACKNOWLEDGEMENT Acknowledgement push-button affected.


Synthesis

As stated above, the lack of systematic design methods implies that the synthesis of the controller is often integrated with the implementation. Hopkins and Walker [83] observe that the requirements are rarely expressed formally, but instead given by drawings such as plant layouts, timing diagrams etc. They suggest the use of state transition diagrams when a sequential control scheme is to be designed. Other approaches are mainly focused on using structured high-level languages such as those described below, preferably GRAFCET [25, 53, 122, 129].

Implementation

A sequential control scheme can be implemented in several ways. The first sequential controllers were implemented using some kind of mechanical sequencer. Today electro-mechanical sequencers are still, to some extent, used in for example washing machines, dish-washers and coffee machines. Electro-mechanical relays have been widely used to build sequencers. They can imitate the function of a sequencer, but are more flexible since the program is stored in the electrical wiring. Even if they do not dominate the market any more they are still used in smaller systems and in safety-critical parts of a plant controller. Often relays are used in combination with modern computer-based controllers as an independent security system. Later, but to a small extent, hardwired electronic devices such as transistors came to replace the relays. Nowadays the market is dominated by PLCs (Programmable Logical Controllers). The PLC systems are specialized computers developed for control purposes only. They are flexible and comparably cheap. A major drawback is that the programming languages supplied differ from machine to machine, and they are often at a very low, assembly-like level. These languages are not easy to use, and a program developed for one PLC system cannot easily be moved to another system. However, the PLC manufacturers sometimes supply some kind of higher-level language to make them easier to use. The by far most common "high level" language is "relay ladder logic", but for example Boolean logic and function block diagrams have also been used [108]. The graphical relay ladder programming environment was developed to allow descriptions of the control scheme in the same manner as for relay systems, and the different symbols imitate the physical relay components. It is thus easy to use for staff used to the classical relay technique, and is highly accepted and understood by factory floor personnel, e.g. operators and service personnel. The fact that it is a graphical language has contributed to making relay ladder programming so popular. It is easy to follow the flow of electrical signals, and the activity during execution can be easily displayed. However, it is not always trivial to see the actual flow of control commands that are being represented by the electrical signals. An example of a simple relay ladder diagram is given in Figure 2.4.


Figure 2.4: A simple relay ladder diagram realizing D = (A ∧ ¬B) ∨ C, where ∧ denotes logical and, ¬ logical not and ∨ logical or.

Since relay ladder logic was developed, applications and programs have grown in size and in complexity. Often additional internal relays need to be added to identify events, thus making the logic more complicated and making it even more difficult to follow the actual control sequence, i.e. to realize what actually happens in the plant. Another disadvantage is that a relay ladder logic program is scanned from the beginning to the end, and thus every step in the program is scanned whether it needs to be executed or not. This results in unnecessarily slow program execution, but it can to some extent be dealt with by using subroutines that can be called from the main program when needed. Even if the PLC systems of today are very flexible, the extensive use of relay ladder diagrams has delayed the development of new and efficient programming techniques. Controllers have been developed and implemented using basically the same methods as when electro-mechanical relays dominated the market, forcing the developers to do programming at a very low level. With the introduction of sequential function charts (SFC) [45, 84], also known as GRAFCET, the possibility to use abstraction has to some extent been facilitated. GRAFCET shows many similarities with Petri nets [45, 127], but is more aimed at solving practical sequential control problems, and is therefore basically a restriction of the Petri net formalism. It is a carefully specified international standard, and compilers that can generate PLC code from a GRAFCET chart have been developed and are commercially available [28, 87, 106]. It can be used not only for implementing controllers, but also for modelling, specifying and simulating plants. A GRAFCET chart displays the state of the controller, i.e. which actions are currently being executed. Together with the information supplied by the transition conditions this gives the state of the plant. Thus the process of locating plant faults is simplified. The program itself is a good documentation of the controller, and by using macro steps, as explained below, a top-down design is supported. Even when a compiler that translates a GRAFCET chart to PLC code is not available, it is still advisable to use GRAFCET to modularize and structure the program and thereafter translate it to, e.g., relay ladder logic [75].
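Read as Boolean logic, the rung in Figure 2.4 is nothing more than the expression it realizes. The few lines below simply restate it as a sketch (ordinary Python, not any PLC programming language):

```python
# The rung of Figure 2.4 computes D from the contacts A, B and C.
def rung(A: bool, B: bool, C: bool) -> bool:
    return (A and not B) or C

# Truth-table check:
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            print(A, B, C, "->", rung(A, B, C))
```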


Although the GRAFCET formalism represents a promising improvement in abstraction, it is important to remember that it is still nothing but a high-level programming language.

Basic notions of sequential function charts

We will not give a full description of GRAFCET, but only state the main parts. See, for example, [45, 84] for a more comprehensive account. A GRAFCET chart is mainly composed of steps interconnected via directed links, so called transitions, as in Figure 2.5. The steps can either be active, which is illustrated by a highlighted dot called a token, or inactive. Once a step is activated its associated action is executed (e.g. open a valve, start a motor). Conversely, deactivating a step implies that the step's action is inhibited. The change from an active step to an inactive one is determined by the transition located between the steps in question. More precisely, a state change occurs when the so called transition condition associated with the transition becomes true (e.g. when a sensor is triggered). In addition to the ordinary action type as described above, the GRAFCET standard defines a number of simple action types (stored, conditional, delayed, time limited) as well as combinations of them. For example, a stored action is an action that is initiated by a set step and is continuously performed until a subsequent step executes a reset action. Since most practical sequential control schemes are not strictly sequential, the GRAFCET standard also includes two control structures: simultaneous sequences (or parallel branches) and sequence selection (or alternative branches). The start and the end of parallel branches are represented by special objects consisting of two parallel and horizontal bars, see Figure 2.5. Once a parallel branch is encountered, all underlying sequences are activated simultaneously, and thereafter executed independently of each other. To be able to continue from the convergence point of a branch, all incoming branches must be ready, i.e. all steps connected to the "closing" point must be active at the same time. Alternative branches must start with transition conditions as in Figure 2.5. These conditions should be carefully chosen so that no more than one transition condition is true at the same time. The GRAFCET objects can only be connected to each other according to some basic rules. For example, two steps must be separated by exactly one transition and two transitions must be separated by one step. Again we refer to Figure 2.5, where a syntactically correct graph illustrating many of the ideas behind GRAFCET is given. Apart from the above, a GRAFCET chart can contain loop branches (see Figure 2.5), which make it possible to create iterations, and the standard can be extended to contain macro steps, where a step itself contains a GRAFCET chart. Since macro steps are not included in the original standard, there is no uniform syntax for them. An example of the use and syntax of macro steps is shown in Figure 2.6.
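The step/transition mechanics described above can be made concrete with a small interpreter sketch: active steps hold tokens, and a transition moves the token when its condition becomes true. The class names below are illustrative only, and parallel branches, alternative branches and the special action types are deliberately left out.

```python
# A minimal, illustrative sketch of step/transition execution (not the standard).
class Step:
    def __init__(self, name, action=None):
        self.name = name
        self.action = action          # callable executed while the step is active

class Transition:
    def __init__(self, source, target, condition):
        self.source = source          # step that must be active
        self.target = target          # step that receives the token
        self.condition = condition    # callable returning True/False

class Chart:
    def __init__(self, initial_step, transitions):
        self.active = {initial_step}  # token marking
        self.transitions = transitions

    def scan(self):
        """One controller scan: fire enabled transitions, then run the step actions."""
        for t in self.transitions:
            if t.source in self.active and t.condition():
                self.active.discard(t.source)
                self.active.add(t.target)
        for step in self.active:
            if step.action:
                step.action()
```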


Figure 2.5: Concepts and ideas found in GRAFCET.


Figure 2.6: The use of macro steps in GRAFCET.

To further illustrate the basic ideas of GRAFCET we show what a control chart for the previously presented freight carrier might look like.

Example 2.4 Consider the freight carrier in Example 2.3. A GRAFCET chart for unloading the freight carrier is shown in Figure 2.7. □

2.1.3 Automatic synthesis of control schemes

Automatic synthesis of control schemes can only be achieved if a plant model is available. Thus it emphasizes the first two steps in the design procedure described in Section 2.1.2. Consequently the designer must first choose a modelling language, then specify the goals of the controller and model the plant using this language. Finally, this model can be used by a systematic procedure to create a control strategy. A system able to automatically create control schemes can of course also be used on-line if it is fast enough. This possibility opens up completely new applications such as operator supervision and simplified error recovery and restart procedures, as is exemplified below.



Figure 2.7: Control sequences for the freight carrier unloading system. Here / denotes logical not and + logical or.
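Stripped of the start/stop, alarm and emergency-stop logic, the nominal cycle of Figure 2.7 alternates between issuing a control signal and waiting for the corresponding sensor. The sketch below is a simplified reading of the chart, and the `io` interface is hypothetical.

```python
# A simplified reading of the main unloading cycle (supervisory logic omitted).
UNLOAD_CYCLE = [
    ("E2L", "G2L"),    # move trolley left until it is over the freight carrier
    ("E1D", "G1D"),    # lower the grabber
    ("E3C", "G3C"),    # close the grabber (fill it)
    ("E1U", "G1U"),    # raise the grabber
    ("E2R", "G2R"),    # move trolley right to the storage hopper
    ("E3O", "G3O"),    # open the grabber (empty it)
]

def run_cycle(io):
    for control_signal, sensor in UNLOAD_CYCLE:
        io.set(control_signal, True)
        io.wait_for(sensor)
        io.set(control_signal, False)
```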


Adding new devices to the plant

A situation where systematic synthesis of control schemes is particularly interesting is when dealing with a plant with varying structure. It is common that new devices are added after the control scheme has already been developed, and hence it must be changed to take the new devices in the plant into account. It may be difficult to realize how the plant changes affect different parts of the program. However, if the plant is described by a model and the control scheme is automatically generated from this model, only the model has to be changed when the process has been changed.

Operator support

Automatic plan generation can also be used for operator support, as in Figure 2.8. A supervisor checks if the operations that the operator wants to perform are allowed. If, for example, the operator wants to open a specific valve, the supervisor should quickly tell the operator that to be allowed to open this valve he or she must first, e.g., open another valve. Another possibility is to automatically generate and execute a sequence of actions resulting in the opening of the valve. This can be viewed as semi-automatic control.


Figure 2.8: A supervisor used for operator support.


Plant fault and error recovery

Normally when creating sequential control programs one must take into account not only the ordinary operation of the plant, but also what should be done when a plant fault occurs. For each possible fault situation we must construct a control strategy that eliminates the effects of, or at least reduces the damage caused by, the fault. This control strategy is traditionally constructed off-line. When a fault occurs during operation it must have been decided beforehand which plan to follow. To verify that such a program is correct is very costly, and in practice each case must be simulated, which due to time constraints often is impossible. There are in principle three different situations.

• When a plant fault occurs the plant should be taken to some previously defined "safe state". This "safe state" may depend on the current fault situation, that is, different faults may lead to different "safe states". The problem is to design a control scheme transforming the current fault situation, i.e. the current initial state x0, into a previously defined "safe state" x*. Note that the "safe state" may be the same for many fault situations.

• A slightly different situation occurs if something in the plant breaks down. If possible we want to keep the process running even if there are some "missing actions". If this is not possible, we at least want to reach a "safe state". The initial state x0 can be any state in the state space, and the set of "allowed" actions at a specific time t is

  Ã(t) = A \ {actions which are "out of order"},

  where A is the set of available actions during normal operation. The actions which are out of order may, for example, be due to motors that are broken. The problem is to find a sequence of actions in the set Ã which transforms the current initial state x0 into the desired final state x*. If this is impossible because of some missing actions, we want to come as close as possible. (A small sketch of this reduction of the action set follows after the list.)

• An example of the initialization problem is when there has been an emergency shut-down and we want to re-start the process. Usually this is done manually by the operator. The final state x* and the set of available actions A are given beforehand, but the initial state is not fully specified until the plan is needed. From the beginning we only know that the initial state x0 belongs to a given set S0. Given an initial state x0 ∈ S0 at a certain time t, the problem consists of finding a control strategy transforming the current initial state x0 into the final state x*.
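A minimal sketch of the second situation above: the usable action set Ã(t) is obtained by filtering out the broken actions, and planning is then re-run from the current state. The `planner` argument stands in for the algorithms developed later in the thesis and is purely a placeholder here.

```python
def replan_after_fault(current_state, safe_state, all_actions, broken, planner):
    """Recompute a plan using only the still-available actions, i.e. the set Ã(t)."""
    usable = [a for a in all_actions if a not in broken]
    return planner(current_state, safe_state, usable)   # may report failure if no plan exists
```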


In all the cases above an approach allowing automatic creation of sequential control schemes on-line seems like an attractive solution. Otherwise we must, as stated before, prepare for all possible situations, and the number of plans we would have to store could be very large. Observe that when using on-line development of the control schemes, the plan for each case is generated only if and when needed. Thus there is a need for automatic synthesis of sequential control schemes. A tool capable of automatically developing such control strategies must satisfy three requirements. It must be sound, i.e., the generated solution will solve the stated problem, and complete, i.e., it fails only if no solution exists. Of these two, soundness is the most important property. An algorithm that occasionally fails to find a solution (i.e. is not complete) can sometimes be accepted if the improvement in speed is significant, but soundness is always a critical point. It could cause disaster to use an incorrect control scheme. Additionally it is desirable that the control scheme is generated fast enough to allow for on-line replanning. The last requirement is the one that is most difficult to satisfy. What is considered "fast enough" varies of course from application to application, but normally one means that the maximum time required by the algorithm is polynomial in the size of the input (see Section 2.2). For the type of problems we consider the size can be measured in the number of state variables used when modelling the plant.

2.2 A brief introduction to complexity theory

As described above it is important that an algorithm solving a specific problem is correct and that the solution is computed "fast enough". In this section we describe how the actual speed can be quantified in general. This is referred to as complexity theory. Here we will only give a brief overview; a more thorough presentation may be found in, for example, [15, 56, 123]. We are interested in the time it takes for an algorithm to generate a solution, and we will measure this time in the number of elementary steps required to execute the algorithm until termination. Even for problem instances of the same size the number of steps varies considerably. For this reason we define the complexity of an algorithm as the worst-case behavior when considering all instances of the same size. Thus the complexity of an algorithm is a function of the size of the problem input (the problem instance supplied to the algorithm), and hence the first issue is to state how to measure the size of an input. Before applying an algorithm to a problem instance, we must encode or represent it somehow. In our context this results in a finite string of symbols over some predefined alphabet such as binary symbols or ASCII characters. To


formally define such an encoding goes far beyond our need, and additionally, all reasonable encodings are essentially equivalent from our viewpoint. Hence we just assume that some "reasonable" encoding [56] is used. Formally, assuming the input to be represented as a string of symbols, the size of the input is defined as the length of this string, i.e., the number of symbols in it. As stated above we want to compute the number of steps executed by an algorithm as a function of the size of the input. Our main interest is how the algorithm behaves when applied to large problem instances. If n is the size of the problem input, the difference between 10n^2 and 11n^2 is unimportant, and slowly growing terms will eventually be overwhelmed by faster growing terms. Guided by these observations the definition of the rate of growth of the complexity of an algorithm is quite natural. The definition is given for any positive function.

Definition 2.1 Let f(n) and g(n) be functions from the positive integers to the positive reals. We make the following definitions:

1. f(n) ∈ O(g(n)) if there exists a constant c > 0 such that, for some m > 0 and all n > m, f(n) ≤ c g(n),

2. f(n) ∈ Ω(g(n)) if there exists a constant c > 0 such that, for some m > 0 and all n > m, f(n) ≥ c g(n), and

3. f(n) ∈ Θ(g(n)) if there exist constants c, c′ > 0 such that, for some m > 0 and all n > m, c g(n) ≤ f(n) ≤ c′ g(n). □
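As a small numerical illustration of item 1 in Definition 2.1, the snippet below spot-checks that f(n) = n log n is bounded by c·g(n) with g(n) = n^2 and c = 1 for all tested n > 1 (a finite check, of course, not a proof):

```python
import math

def bound_holds(f, g, c, m, n_max):
    """Check f(n) <= c*g(n) for all integers m < n <= n_max (a finite spot check)."""
    return all(f(n) <= c * g(n) for n in range(m + 1, n_max + 1))

print(bound_holds(lambda n: n * math.log(n), lambda n: n * n, c=1, m=1, n_max=10_000))
# True: the quadratic term dominates for every n tested.
```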

Now, having defined how to measure the complexity of an algorithm, we can focus on what is meant by "fast enough". Let n be the size of the problem instance and let f(n) denote the complexity of the algorithm, i.e., the number of instructions executed. With "fast enough" one usually means that the complexity of the algorithm is polynomial in the size of the input, i.e., f(n) ∈ O(n^k), where k is some integer. Actually it is sufficient that the rate of growth can be bounded by a polynomial. Examples of such functions are n log n and n^1.5, which both belong to the set O(n^2). If there exists a polynomial algorithm solving a class of problem instances, this class is said to be tractable. If no polynomial algorithm can exist, the class is intractable. An algorithm whose complexity increases faster than any polynomial is said to have exponential complexity. Note that any function that increases faster than a polynomial is in this sense regarded as exponential, e.g. functions like n! and n^(log n) are regarded as exponential although they are not strictly exponential in the mathematical sense. An obvious requirement for a problem class to be tractable is that the size of the solution is polynomial in the size of the input. As we will see in Section 4.2 this is not always the case for the kind of problems considered in this


thesis. One might argue that an algorithm whose complexity is, for instance, Θ(n^100) would not be very useful in practice, and hence it may seem strange to regard this as a fast algorithm. However, once a polynomial algorithm is found, the degree of the polynomial often undergoes a series of decrements as various researchers improve on the original algorithm. Often the resulting complexity is O(n^3) or less. It is worth noting that we are discussing the worst-case complexity, and that even if an algorithm is proven to have an exponential worst-case complexity it might be the case that in practice the algorithm is very fast. An example of such an exponential algorithm is the Simplex algorithm solving the linear programming problem [123]. It is well known that in practice the Simplex algorithm works fast for very large problem instances. Unfortunately this is not the case for most other exponential time algorithms. Based on the complexity discussion above it is possible to define different problem classes with respect to complexity. Formally these classes are defined only for decision problems, i.e., problems whose answer is "yes" or "no", but it is straightforward to reformulate for example an optimization problem into a decision problem. The class P contains all problems that, under reasonable encoding, can be solved in polynomial time. NP is the class of all problems that can be solved by a polynomial time nondeterministic algorithm. Such an algorithm first guesses a solution and then checks in polynomial time if the guessed solution in fact solves the stated problem. Thus it can only verify a solution in polynomial time, not find it. Obviously P ⊆ NP, but even if there is a widespread belief that P ≠ NP, nobody has so far been able to prove it. The NP-complete problems are the hardest problems in NP. A problem Π is NP-complete if Π ∈ NP and for all other problems Π′ ∈ NP, there is a polynomial transformation from Π′ to Π. A polynomial transformation is simply a transformation that can be computed in polynomial time. The NP-complete problems cannot be solved in polynomial time unless P = NP. The relationship among these classes is illustrated in Figure 2.9. So far we have only been concerned with the time complexity of an algorithm. In practice one must of course also take the space complexity into account. The class of problems that can be solved in polynomial space is denoted PSPACE. It is easy to realize that if a problem can be solved in polynomial time, then it can be solved using polynomial space, i.e., P ⊆ PSPACE. PSPACE-complete problems are defined in the same way as NP-complete problems, i.e., a problem Π is PSPACE-complete if Π ∈ PSPACE and for all other problems Π′ ∈ PSPACE, there is a polynomial transformation from Π′ to Π. EXPSPACE contains problems whose space complexity is bounded by 2^p(n), where p is some polynomial and n is the size of the input.


Figure 2.9: The relationship between P, NP and NP-complete problems, assuming that P ≠ NP.

2.3 Sequential control from an AI perspective

Automatic synthesis of sequential control schemes is in the domain of AI known as planning². A program called a planner automatically develops a plan (a sequential control scheme) using some kind of world model. AI planning has been a research area for some 30 years, and we do not intend to make a full survey but rather a short overview of the area. More elaborate introductions to AI planning can be found in Rich [135, Section 8.1], Tate et al. [153], Georgeff [58], Allen [7] and Vere [157], as well as in Charniak and McDermott [37, Chapter 9], Nilsson [120, Chapters 7 and 8] and the introductions in [8]. It should also be pointed out that in some aspects the problems dealt with in the context of sequential control differ from the applications that most AI researchers have in mind. Usually their intended application is an autonomous robot moving in a changing environment, and a frequently used example is the blocks world as described below. We use this example to explain some important AI concepts.

Example 2.5 The blocks world consists of a table whereupon a number of blocks are placed, see Figure 2.10. The blocks can be moved by a robot arm. They are of equal size and it is assumed that only one block can be placed on top of another block. An operator (or action type) is a generic description of an action, and an instantiation of an operator is thus an action. In this section we will not make a clear distinction between operators and actions, hoping this will not confuse the reader. The blocks world operators can be defined in slightly different ways. We follow the definitions in Backstrom [18].

22

Sequential control in the past and present A

B C

D

Initial state

A B

C D

Final state

Figure 2.10: The blocks world (Example 2.5). Move-to-table(B,C)

Move-from-table(C,D)

Move-from-table(A,B)

Figure 2.11: A linear plan solving Example 2.5. operators are Move-to-table(x; y), where block x is moved from block y to the table, Move-from-table(x; y), where block x is moved from the table to the top of block y, and Move-direct(x; y; z), where block x is moved from block z to the top of block y. To specify the operators more formally we must decide upon which formalism to use, so we postpone that to Section 2.3.1. The state can, for example, be described by a set of atomic logical formulas, so the initial state in Figure 2.10 can be described as I = fOn(A; T ); On(B; C ); On(C; T ); On(D; T )g; (2.1) where A, B , C and D are the corresponding blocks, T denotes the table and On(x; y) is a binary logical predicate that is true if x is placed immediately on top of y. The nal state, the goal, can be described as G = fOn(A; B ); On(B; T ); On(C; D); On(D; T )g: The planning problem is to nd a sequence of actions transforming the initial state into the nal state. An optimal solution is given in Figure 2.11. This is a linear plan because the actions are totally ordered, i.e., there are no unordered actions. Thus the word linear refers to a linear (total) order and a linear plan is just a sequence of actions. In a non-linear plan the execution order is not yet fully speci ed, and the actions are only partially ordered. A non-linear plan solving the same problem is given in Figure 2.12. The actions Move-from-table(C; D) and Move-from-table(A; B ) can be executed in any order. Usually such a plan is strengthened into a linear plan before execution. For example the non-linear plan in Figure 2.12 can be strengthened into the linear plan in Figure 2.11. If the unordered actions can be performed in parallel without interfering with each other, we call it a parallel plan. This is the case for the plan in Figure 2.12 if there are two robot arms that can move the blocks. These concepts are formally de ned in Section 3.3. 2

2.3 Sequential control from an AI perspective

23

Move-from-table(C,D) Move-to-table(B,C) Move-from-table(A,B)

Figure 2.12: A non-linear plan solving Example 2.5. In Section 2.3.1 linear planners are treated, while non-linear planners are described in Section 2.3.2. Computational complexityis discussed in Section 2.3.3 and some attempts to reduce the complexity are described in Section 2.3.4. Finally, some representational issues are discussed in Section 2.3.5.

2.3.1 Linear planners

Linear planners basically uses some kind of guided search in the state space when trying to nd a plan. They are called linear since they develop plans in a linear fashion, and during the planning phase all actions must be strictly ordered. This means that a linear planner is over-committed as will be described later. Furthermore, a linear planner always generates a linear plan. Two early attempts at solving planning problems was the General Problem Solver (GPS) [118, 119] and QA3 [65]. The General Problem Solver (GPS) [118, 119] introduced the important search strategy means-ends analysis which has been used by many planners since. The idea is to identify the di erence between the initial state3 and the nal state, and then try to reduce the part of the di erence (the subgoal) that is judged hardest to solve. This may in turn create new subgoals. When a subgoal cannot be satis ed backtracking is used. The operators (actions) are not explicitly described. Di erence tables tailored to each application are used to guide the choice of operator. In such a table all operators that can be used to reduce a particular di erence are listed. Another approach was followed by Green [65]. While GPS as a general problem solver was intended to simulate human problem solving, QA3 was more focused on planning. In QA3 the world is modeled as axioms in a rst order logic, and actions are encoded as functions. The goal is represented as a formula expressing that the goal is true in some future state. Letting x? denote this future state, the goal in Figure 2.10 is On(A; B; x?) ^ On(B; C; x?) ^ On(C; D; x?) ^ On(D; T; x?) where ^ denotes logical and. The goal is given as a query to a resolution 3

In GPS states are referred to as objects.

24

Sequential control in the past and present

based theorem prover. Since actions are explicitly encoded it accounts for a uniform and formalized method. However, being based on theorem proving it is inherently inecient since predicate calculus is undecidable [55]. If a proof exists it will eventually be found, but if no such proof exists the theorem prover may never halt. While QA3 represented actions as functions, in situation calculus [110] actions are represented as objects. Thus a single result function can be used instead of one function corresponding to each action as in QA3.4 A major problem with both QA3 and situation calculus is known as the frame problem [73]. The frame problem is the problem of specifying what remains unchanged when an action is executed. It deals with the question of how to know, for example, that the color of the car remains the same when the garage is painted. Using situation calculus everything that is not a ected by an action must be explicitly stated using so-called frame axioms. For a realistic application the number of frame axioms soon becomes overwhelming. A lot of work has been done to characterize di erent kinds of frame problems and suggest how to deal with them [69, 138]. Good collections of papers on the frame problem are [31, 54, 130]. The most well-known planner is probably STRIPS, presented by Fikes and Nilsson [52]. It combines means-ends analysis from GPS with the notion of states used in QA3 and situation calculus. STRIPS has had a large in uence on the development of planners over the years. One of the important contributions in STRIPS is the way that it deals with the frame problem. STRIPS employs the STRIPS assumption that everything that is not explicitly changed by an action remains una ected when performing the action. Most planners developed after STRIPS have used this assumption. In STRIPS actions, or operators, are modeled by a pre-condition, an add list and a delete list. The state is described by a set of logical formulas and the pre-condition of an action must be true in a state to execute the action. The new state is then achieved by deleting the formulas in the delete list from the current state and adding the formulas in the add list. We exemplify with the blocks world.

Example 2.6 Consider the blocks world in Example 2.5. The operators can

be described by the operator schemata in Figure 2.13. Suppose the initial state is as in Figure 2.10, and that we want to apply the operator Move-to-table(B; C ). We see immediately that the pre-condition is satis ed since On(B; C ) is true, C 6= T and there is no z such that On(z; B ) is true. Thus we can apply the operator, and the resulting state is obtained by deleting On(B; C ) from Equation 2.1 and adding On(B; T ), which results 4

This was actually mentioned by Green [65] as an alternative.

2.3 Sequential control from an AI perspective

25

Move-to-table(x,y):

Pre-condition: On(x; y) ^ y 6= T ^ :9z(On(z; x)), Add-list: On(x; T ), Delete-list: On(x; y); Move-from-table(x,y): Pre-condition: On(x; T ) ^ :9z(On(z; y)), Add-list: On(x; y), Delete-list: On(x; T ); Move-direct(x,y,z): Pre-condition: On(x; z) ^ z 6= T ^ :9u(On(u; x)) ^ :9v(On(v; y)), Add-list: On(x; y), Delete-list: On(x; z).

Figure 2.13: Operator schemata for the blocks-world example (^ denotes logical and, and : logical not). in the state

fOn(A; T ); On(B; T ); On(C; T ); On(D; T )g: The resulting state is shown in Figure 2.14. A

C

D

B

Figure 2.14: The resulting state in Example 2.6. 2 One of the problems with STRIPS is that is is based on the linearity assumption that the subgoals can be achieved independently. This is not true in, for example, the blocks world situation depicted in Figure 2.15 which is known as the Sussman anomaly. The goal is only that block A should be on block B and block B on block C , i.e.,

$$\{On(A, B), On(B, C)\}.  \qquad (2.2)$$
If the planner first tries to achieve the goal On(B, C), this has to be destroyed when solving for the second goal, On(A, B). The same is true if the goals are tried in the opposite order. In this case a solution can be obtained by applying

Figure 2.15: The Sussman anomaly.

the planning algorithm once again, but now with the new resulting state as a starting point. This will, however, not be an optimal solution. Additionally, this method only works if all actions have an inverse, i.e., it is always possible to undo an action. STRIPS will also fail on, for example, the problem of swapping the values of two variables [158]. It should be remarked that the word "linear" is used for three different concepts. The linearity assumption is not coupled to linear planning or linear plans. Furthermore, even if a linear planner always produces a linear plan, the opposite is not true, i.e., a linear plan does not have to be generated by a linear planner. To solve some of the problems in STRIPS, Sussman [149] introduced protection intervals in his planner HACKER. A protection interval is used to protect an achieved subgoal from being destroyed before it is needed, and is used to detect interactions. However, the only way that HACKER could try to resolve detected interactions was to backtrack to a point where an alternative goal ordering could be tried. Hence HACKER could not solve the Sussman anomaly in Figure 2.15. A more elaborate way to correct for detected interactions was suggested by Warren in WARPLAN [93, 159]: the interfering action is allowed to be placed earlier and earlier in the plan until there is no interaction. Another solution is to regress not only the action but the subgoal itself when an interaction is detected [158]. Tate [151] introduced a "goal structure" in his planner INTERPLAN. This was used to record the link between an effect of one action that was a pre-condition (sub-goal) of a later one. (This should not be confused with the actual ordering links between the actions.)
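To make the STRIPS machinery above concrete, the following sketch applies the Move-to-table operator of Figure 2.13 to the state of Example 2.6 under the STRIPS assumption. It is only an illustration in Python, not the representation used by STRIPS itself or by the formalism introduced later in this thesis; the helper names are invented, and the initial state is reconstructed from the resulting state given in the example.

```python
# A minimal propositional sketch of STRIPS-style operator application.
# A state is a set of ground atoms; an operator has a pre-condition,
# an add list and a delete list.

def on(x, y):
    return ("On", x, y)

def move_to_table(state, x, y):
    """Move-to-table(x, y) from Figure 2.13: move block x from y onto the table T."""
    clear_x = not any(atom for atom in state if atom[0] == "On" and atom[2] == x)
    if not (on(x, y) in state and y != "T" and clear_x):
        raise ValueError("pre-condition not satisfied")
    # STRIPS assumption: everything not explicitly changed remains unaffected.
    return (state - {on(x, y)}) | {on(x, "T")}

# The initial state of Example 2.6: B on C, the remaining blocks on the table.
state = {on("A", "T"), on("B", "C"), on("C", "T"), on("D", "T")}
print(sorted(move_to_table(state, "B", "C")))
# [('On', 'A', 'T'), ('On', 'B', 'T'), ('On', 'C', 'T'), ('On', 'D', 'T')]
```

The update rule itself is just check, delete, add; interactions such as the Sussman anomaly therefore have to be handled by the search strategy rather than by the operators.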

2.3.2 Non-linear planners

All of the planners discussed above are linear planners since they develop plans in a sequential fashion. In a sense a linear planner is over-committed in that it always has to put everything in a strict sequential order. The basic idea in non-linear planners is to defer decisions on the ordering between actions until such decisions are forced to resolve some conflict. Some authors refer to such


planners as partial order planners [44, 115, 153] to avoid the possible confusion between non-linear planners and non-linear plans (see Example 2.5). The first non-linear planner, NOAH, was presented by Sacerdoti [137]. In NOAH a partial plan is represented using a procedural net, and a Table of Multiple Effects (TOME) is used as an aid in discovering interactions. The TOME is based on the "goal structure" used in INTERPLAN. Sometimes two actions must be ordered but it is not clear how. In such cases NOAH will make what is considered the best choice according to some heuristics, and then stick to it, i.e., no backtracking takes place. Thus NOAH can fail to find a solution even if one exists, and hence it is not complete. Based on NOAH, Tate [152] presented NONLIN, which could consider alternatives, i.e., use backtracking when needed. He claimed that NONLIN is complete, but no formal proof was given. Often heuristics are used to prune the search space. This is the case in, for example, NONLIN, O-PLAN [43] and SIPE [163]. Drummond and Currie [48] presented heuristics for pruning the search space and proved that under certain restrictions a solution will be found even if branches of the search space are cut off. An additional way to defer decisions is to allow objects to be partially specified and pose constraints on the properties these objects must satisfy. Such planners are referred to as constraint-posting planners [145, 162, 163]. Additional improvement of the efficiency is often gained by using plan specialization and plan modification operators. Chapman [36] presented a constraint-posting planner, TWEAK, which is equipped with a complete set of such plan modification operators. He proved that TWEAK is sound and complete. This is important since often not much is known about the soundness and/or completeness of a planner. Furthermore, Chapman analyzed the complexity for different classes of problems that can be handled by TWEAK. This will be further discussed in Section 2.3.3. Hertzberg and Horz [76] theoretically analyzed and classified different kinds of conflicts that can arise using a non-linear planner. Their conflict classification was used by Yang [166], who proposed a method based on constraint propagation for deciding how to resolve a set of conflicts. McAllester and Rosenblitt [109] presented a non-linear planner that is sound, complete and systematic in the sense that the same partial plan is never examined more than once. Minton et al. [115] compared two formal planners, a linear and a corresponding non-linear planner. It has often been claimed that non-linear planning is more efficient than linear planning; however, they concluded that this is not always true.

2.3.3 Computational complexity

With the exceptions of Korf [98] and Chapman [36] there has not been much done theoretically concerning the computational complexity of planning until


recently. Lately there has been a growing interest in the computational aspects of planning. Korf [98] analyzed how different properties of subgoals affect the computational complexity. He showed that the search space can be reduced by creating a subgoal and thus dividing the planning problem into two problems. If the linearity assumption is fulfilled then the subgoals can be solved independently, and are accordingly called independent subgoals. Subgoals are called serializable subgoals if they can be solved independently when solved in some particular order. Non-serializable subgoals are subgoals that cannot be serialized. Korf claimed that non-serializable subgoals may sometimes be due to the goal formulation rather than the goal itself. For example, the Sussman anomaly in Figure 2.15 is non-serializable as originally formulated (Equation 2.2), but is serializable if the goal is expressed as

$$\{On(A, B), On(B, C), On(C, T)\}.  \qquad (2.3)$$

Because TWEAK [36] is based on a formal model it is possible to analyze its properties. Chapman proved that TWEAK is sound and complete. Additionally, he proved that planning is semi-decidable [55], i.e., if a plan exists TWEAK will eventually find it, but if no plan exists TWEAK may never halt. This was proven by showing that a Turing machine [82] can be encoded in the TWEAK formalism. However, it should be noted that the proof requires an infinite initial state. Erol et al. [50] refined this result by showing that planning is semi-decidable if there are function symbols or infinitely many constants in the language. If no function symbols and only finitely many constants are allowed, planning is decidable, with a complexity ranging from EXPSPACE-complete to PSPACE-complete depending on further restrictions. Since STRIPS is based on first-order logic and theorem proving it is of course undecidable, at least in theory. In practice only a finite amount of time is allowed for trying to prove a formula, which makes STRIPS more effective but incomplete. STRIPS, as well as most planners, is not based on a formal model, which makes it difficult to analyze the complexity formally. However, recently some results concerning the complexity of STRIPS planning have been presented. These results refer to the STRIPS formalism, and not to the planner itself. Bylander [32, 33] analyzed the complexity of propositional STRIPS planning. He showed that plan existence and planning are PSPACE-complete in the general case, but NP-complete or tractable for some restricted cases. The restrictions introduced are local restrictions on the operators, such as bounding the number of atoms in the pre-condition. For example, the operator Move-to-table in Figure 2.13 has three atoms in its pre-condition, while Move-from-table only has two. Additional results are presented by Erol et al. [50].


Recently there has been some work regarding the blocks world presented in Example 2.5. Gupta and Nau [67] showed that finding optimal plans in the blocks world is NP-hard. However, Bylander [32] showed that a restricted version of the blocks world can be solved in polynomial time. He requires that the goal is fully specified (i.e., Equation 2.3 instead of 2.2) and excludes the possibility of moving directly from one block to another (i.e., the operator Move-direct in Figure 2.13). Backstrom [18] strengthened this result by showing that it holds even if the goal state is not fully specified. Finally, we have previously presented polynomial time algorithms for restricted classes of planning problems based on the formalism Extended Simplified Action Structures (SAS+), which is formally defined in Chapter 3. In [20, 22, 89, 90] we analyze the SAS-PUBS class of planning problems, and in [21, 91] we analyze the SAS-PUS class, which is an extension of the SAS-PUBS class. These classes, and the restrictions forming them, are defined in Chapter 4. Backstrom [18] showed that the SAS+ formalism is equivalent to propositional STRIPS concerning expressiveness. However, the restrictions forming the subclasses we have analyzed are more natural to express using simplified action structures. An extension of one of the previously mentioned algorithms is presented by Backstrom [18], and in Section 5.1 we present yet another extension. Backstrom and Nebel [23] have analyzed all possible combinations of the previously mentioned restrictions. This is further discussed in Section 4.2, and in Figure 4.1 we show their main result.

2.3.4 Approaches to reduce the computational complexity

Several attempts have been made to reduce the complexity of planning. We give a short overview of some of them.

Domain-dependent planners

So far we have only been concerned with domain-independent planners, i.e., planners that will work for any application. If a planner is instead tailored to a specific application, i.e., a domain-dependent planner is used, the available knowledge about the particular domain can be used to speed up planning. In a domain-dependent planner it is easier to design good heuristics for guiding the search, since good heuristics necessarily take advantage of the specific structure of the problem at hand.

Hierarchical planners

The main idea when using abstraction is to concentrate on the major goals first and then fill in the details. The first planners that were based on abstraction


used a strict search by levels where no backtracking to higher levels was possible. This means that it must always be possible to refine a high-level plan into a low-level plan. However, this is not always the case. An example of a planner implementing this strict search is ABSTRIPS by Sacerdoti [136], which is an extension to STRIPS. In ABSTRIPS the pre-condition of an operator differs at different levels of abstraction. Each atom in the pre-condition is assigned a "criticality" value. Only the most critical atoms are visible at the highest abstraction level, whereas at lower levels less critical atoms will also appear. ABTWEAK [168, 169] by Yang and Tenenberg combines hierarchical planning as in ABSTRIPS with the constraint-posting planner TWEAK [36]. They showed that ABTWEAK satisfies the monotonic property [94], whereby the existence of a lowest-level solution implies the existence of a structurally similar high-level solution. Another TWEAK-based hierarchical planner is PABLO [38]. However, PABLO is based on a different approach and deeper abstraction hierarchies can be developed. Thus PABLO may be more efficient. Korf [98] found that on average the search space can be exponentially reduced when using hierarchical abstraction. His results were extended by Knoblock [94, 95], who tried to examine exactly when such an exponential reduction of the search space is possible. Korf and Knoblock assumed that no backtracking occurs across abstraction levels. Bacchus and Yang [17] examined the benefit of hierarchical planning when allowing backtracking across abstraction levels, and predicted a boundary where no benefit is gained through abstraction. Another type of abstraction is used in, for example, NOAH [137], NONLIN [152] and MOLGEN [145]. Here a high-level action can be expanded into lower-level actions, i.e., a high-level action is a form of macro action. In NONLIN the abstraction level is used as a guide to the search at lower levels, and NONLIN can re-plan or consider alternatives at any level. Some systems can determine when a particular choice (at any level) is sufficiently constrained to be a preferable goal to work on [145]. Yang [167] has made a theoretical investigation of some issues concerning planning with concurrent processing of levels. Lansky's GEMPLAN [100, 101] also allows for simultaneous use of actions at different levels. A theoretical classification of different abstraction hierarchies is given in [96].

Using old plans

Macro operators can also be used to reduce the complexity. When a part of a plan has been successfully achieved it can be turned into a single operator that can later be used on the same basis as all other operators. This approach is used in, for example, MACROP [51], which is an extension to STRIPS. An old plan can also be used as a starting point when developing a new plan. Planners like CHEF [68] put the main effort into guiding the search for the


old plan, and use simple methods to map the old plan onto a new plan. Hanks and Weld [70] presented a domain-independent algorithm for how to map the old plan into a new plan. Their algorithm is proven to be sound, complete and systematic.

Reactive planners

In all the cases considered above the planner is supposed to work in an off-line fashion, developing a plan and then executing the result. Thus the planner can be viewed as a feed-forward controller working without any knowledge about what is actually happening in the real world. (Of course this is not the way planners really work: the systems are often equipped with some kind of supervision checking that the execution of the plan is successful. However, if the execution is unsuccessful, the planner has to start planning from scratch again.) In a reactive planner the idea is that the system should respond immediately to the events that happen in the real world. Thus a reactive system can be viewed as a feed-back controller. An overview of reactive planning can be found in [107]. The first reactive planners did not do any actual planning on-line but only responded immediately to events using a pre-defined plan [5, 30]. One way of designing reactive planners is by using the universal plans proposed by Schoppers [141]. His universal plans are highly conditional plans that can achieve a goal given any initial state. The use of situated automata to design reactive planners is discussed in, for example, [88]. Lately there has been some work on integrating the concept of planning ahead with that of reacting immediately to the environment [29].

Restricting the class of problems

One way to reduce the planning complexity is of course to restrict the class of problems one can handle. This is, as described earlier, the approach we have followed previously and will also follow in this thesis. Algorithms for restricted classes of problems can be found in [18, 21, 22, 91].

2.3.5 Representational issues

Many researchers have focused their attention on how to represent the world using some kind of logic rather than on developing planners. They view planning as a reasoning process within the logic of their choice. One key issue is how to model concurrent (parallel) actions. Concurrent action handling based on situation calculus [110] is treated in, for example, [161, 57, 66]. However, some authors claim that situation calculus is not powerful enough since time is not explicitly represented. Instead, different kinds of temporal logic have been proposed. Often time is represented as intervals, and thus a formula can be


true over a time interval instead of in a specific state. Examples of such logics are found in [9, 10, 111, 126]. These logics are more expressive than first-order logic, and it is well known that satisfiability in first-order logic is undecidable [55]. Thus, to develop an effective planner one would have to introduce some kind of restrictions. Veloso et al. [156] and Regnier and Fade [133] have developed STRIPS-like languages where parallel actions can be treated.

2.4 Some other related perspectives

There are a lot of other areas that are closely related to sequential control, for example graph theory (Section 2.4.1) and Discrete Event Dynamic Systems (DEDS) (Section 2.4.2).

2.4.1 Algorithms based on graph theory search

Given an initial state $x_0$ and a final state $x_\star$, the problem of finding a control sequence transforming $x_0$ into $x_\star$ may be described as a search for a path in the state graph. The state graph is simply a graph where the nodes are states and the arcs denote action transformations between states. Finding a minimal plan, i.e., a control sequence containing as few actions as possible, is equivalent to searching the state graph for the shortest path from $x_0$ to $x_\star$. Using this approach we exclude the possibility of executing actions in parallel, but this may not be a serious restriction, depending on the application. Thus the graph-based approach results in a linear plan. There are several algorithms that can be used to solve the shortest path problem. We will only briefly describe three such algorithms. It is of course possible to use different kinds of search methods, such as depth-first search, breadth-first search or the A* algorithm. These algorithms are basically designed for searching a tree, and when used on a directed graph loop detection must be added to avoid infinite loops. A short survey of different search methods is given in [135], and a more detailed analysis of depth-first search and other related search methods is given in [6]. Depth-first search continues searching a path until it is fully explored, while breadth-first search simultaneously explores several paths. The difference is best understood by an example.

Example 2.7 To illustrate how depth-first and breadth-first search traverse a tree we look at a simple example. Suppose the tree is as in Figure 2.16, and that F is the goal. Let us first consider depth-first search. In depth-first search each node is iteratively expanded until there are no more nodes on the current path. Suppose that we always choose the left node for expansion when there are several nodes to choose between. In our case this means that we first select


the node B for expansion, then D and H. Since there are no more nodes after H, and the goal is not found, backtracking must be applied. This leads to the expansion of E. Again backtracking must be used, resulting in the expansion of node C, whereupon node F is expanded and the goal is found. Let us instead consider breadth-first search applied to the same example. Now, all the nodes that are at the same level must be expanded before expanding any node on the next level. In Figure 2.17 we show how the nodes will be expanded. □

Figure 2.16: Search tree in Example 2.7.

Figure 2.17: Breadth-first search for Example 2.7.
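The two traversal orders of Example 2.7 can be reproduced with a few lines of code. The sketch below assumes the tree structure described in the text (A above B and C, with D and E under B, F and G under C, and H under D); it is an illustration only.

```python
from collections import deque

# The search tree of Example 2.7; the goal is F.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H"], "E": [], "F": [], "G": [], "H": []}

def depth_first(node, goal, visited):
    """Leftmost-first depth-first search; records the expansion order."""
    visited.append(node)
    if node == goal:
        return True
    return any(depth_first(child, goal, visited) for child in tree[node])

def breadth_first(root, goal):
    """Level-by-level search using a FIFO queue."""
    visited, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        visited.append(node)
        if node == goal:
            return visited
        queue.extend(tree[node])
    return visited

order = []
depth_first("A", "F", order)
print(order)                    # ['A', 'B', 'D', 'H', 'E', 'C', 'F']
print(breadth_first("A", "F"))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

Here depth-first visits seven nodes before finding the goal and breadth-first visits six; which strategy wins in general depends on where the goal sits, as discussed next.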

In applications where there are several paths leading to the goal and each of the paths is rather long, the depth-first algorithm is likely to find a solution faster than the breadth-first algorithm. However, when searching for the minimal path, depth-first search results in exhaustive search while breadth-first search is always


guaranteed to find a minimal path. The depth-first algorithm takes $O(k)$ time, where $k$ is the larger of the number of nodes $m$ and the number of arcs $a$ in the graph. An upper bound for the number of arcs is $m^2$, and hence the algorithm takes $O(m^2)$ time. Suppose all state variables are binary. Then the number of nodes in the state graph is $2^n$, where $n$ is the number of state variables, i.e., the dimension of $x$. This means that even if the complexity of the depth-first algorithm seems to be polynomial, it is exponential in the number of state variables. This shows the importance of carefully specifying the size of a problem instance when talking about the complexity of algorithms. In the rest of this thesis we measure the size of a problem in the number of state variables, as defined in Section 2.2.

The A* algorithm, first described in [71, 72], is a kind of best-first search. The A* algorithm assumes that there is a cost attached to reaching the final state, the goal node, from the current node in the graph. In this thesis the cost is simply the number of actions that must be executed, i.e., the length of a path from the current node to the goal. In each step of the algorithm the most promising node is chosen, i.e., the node where the sum of the cost of getting to the node from the initial state and the estimated cost of getting from the node to the goal is minimal. If our estimate of the cost to the goal is perfect, that is, we actually have complete knowledge of the distance from each node in the state graph to the goal, then the algorithm will generate the correct solution immediately without any backtracking at all. If we never overestimate the length of a path to the goal, the algorithm is guaranteed to find an optimal solution, i.e., a minimal path from the initial state to the final state. The catch is how to estimate the length of a path that we have not yet explored. A simple estimate would be to count all state variables that differ in the current state and the desired final state. If every action only affects one state variable (all algorithms developed in this thesis use this assumption, as defined in Definition 4.3), it is easily realized that this is at least a lower bound on the number of actions that must be executed, i.e., the length of a path to the goal node. The problem is that this could be a very poor estimate, because normally there are certain conditions that must be satisfied when executing a specific action. Consider, for example, Rubik's cube [98]. Once a part of the puzzle is solved, in general it must be messed up, at least temporarily, in order to make further progress. The actual cost is obviously greater than the estimated cost, and it is easy to imagine situations where a lot of these "secondary" actions must be performed. Hence, using the A* algorithm results in a possibly exhaustive search of the state graph, and for many practical problems it has worse performance than a depth-first or breadth-first search algorithm as described above. An iterative-deepening version of A* (IDA*) was presented by Korf [97]. IDA* has a better space complexity than A*, and asymptotically it expands the same number of nodes as A*.
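The estimate discussed above, counting the state variables that differ between the current state and the goal, is easy to plug into a generic A* skeleton. The sketch below is a standard textbook formulation and not one of the algorithms developed in this thesis; the successor function is supplied by the caller, and the toy usage at the end is invented for illustration.

```python
import heapq

def h(state, goal):
    """Count differing state variables; a lower bound if every action
    changes exactly one variable."""
    return sum(1 for s, g in zip(state, goal) if s != g)

def a_star(start, goal, successors):
    """successors(state) yields the states reachable by one action (unit cost)."""
    frontier = [(h(start, goal), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in successors(state):
            if g + 1 < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + h(nxt, goal), g + 1, nxt, path + [nxt]))
    return None

# Toy usage: binary state vectors where any single variable may be flipped.
flip = lambda s: [tuple(b ^ (i == j) for i, b in enumerate(s)) for j in range(len(s))]
print(a_star((0, 0, 0), (1, 0, 1), flip))   # a path of two flips
```

In this toy domain the heuristic is exact, so no backtracking occurs; the Rubik's cube discussion above shows why it can be arbitrarily weak in general.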


In dynamic programming [123] the principle of optimality is used. This principle states that a completion of an optimal sequence of decisions must itself be optimal. Dijkstra's algorithm [47] is an example of such an algorithm. A good description is found in [150]. We will not describe it in detail but only remark that the complexity of the algorithm is $O(k_1^2)$, where $k_1$ is the number of nodes in the graph, i.e., the number of states in our application.
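For completeness, a compact version of Dijkstra's algorithm is sketched below. It uses a binary heap rather than the straightforward array implementation whose $O(k_1^2)$ complexity is quoted above, and the graph encoding is just one convenient choice made for the example.

```python
import heapq

def dijkstra(graph, source):
    """Shortest path lengths from source in a graph given as
    {node: [(neighbour, edge_cost), ...]}; unit costs correspond to minimal plans."""
    dist = {source: 0}
    frontier = [(0, source)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue                     # stale queue entry
        for nxt, cost in graph.get(node, []):
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                heapq.heappush(frontier, (d + cost, nxt))
    return dist

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 1)], "b": []}
print(dijkstra(g, "s"))   # {'s': 0, 'a': 1, 'b': 2}
```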

2.4.2 Discrete Event Dynamic Systems

In the area of Discrete Event Dynamic Systems (DEDS) the same type of problems as in sequential control are analyzed. A DEDS is a system where the state of the world is changed by the occurrence of discrete events, or actions. In contrast to the well-known models for dynamical systems, which can be described by differential or difference equations, there is not yet any unifying theory for DEDS. A considerable amount of work has been done in different areas to describe and analyze DEDS, and to develop controllers for DEDS. Different approaches stress different properties of DEDS, and it is probably not possible to design a theory that is well suited for all applications. An overview of different approaches to DEDS is given in the IEEE Proceedings special issue on Dynamics of Discrete Event Systems published in 1989 [49] and by Pollacia [128].

Temporal logic

Temporal logic has not only been used by AI researchers (Section 2.3.5) but also in the field of DEDS [121, 154]. Given a model of the plant and a controller controlling it (both described in temporal logic), it should be proven that the resulting system fulfills some specification. Temporal logic can also be generalized to handle nondeterministic discrete event systems [103].

Automata theory

In the original Ramadge and Wonham [131, 165] formulation the plant is modelled as a state automaton. The events can be divided into controllable and uncontrollable events. A controllable event can be either enabled or disabled, but not actually forced to occur. The behavior of the plant is described in terms of the language generated by the state automaton, i.e., by the strings (sequences) of events. The desired behavior is also specified as a language. The problem is to design a supervisor so that the behavior of the plant equals the specification. Such a supervisor can be described as a state automaton and an output map from the states in this automaton to the controllable events, thus enabling or disabling these events. In this framework the qualitative aspects can be analyzed. As in classical control theory it is possible to define


and analyze controllability [165] and observability [39, 155]. An algorithm for synthesizing a controller when full observations are available is given in [131]. The complexity of this algorithm, as well as of, for example, checking controllability, increases polynomially with the number of states [165] (note, though, that the number of states can be very large). Extensions are described in, for example, [34, 64, 102, 132, 170]. Balemi [26] presents an extension where the plant receives commands and reacts to these commands with responses. A real-world application, a piece of equipment for semiconductor manufacturing, is described in [27].

Stochastic performance models

In a stochastic framework such as Markov chains [62] and queuing networks [16, 142], both quantitative and qualitative aspects can be analyzed. Such models are, however, in general too complex to allow for analytic computations, and the performance is therefore usually evaluated using simulation. Perturbation analysis [78, 79, 148] allows for quantitative analysis of a DEDS via parameter sensitivities. Assuming $Y$ is some kind of performance measurement (e.g., total transmitting time in a communication system) and $\theta$ some parameters that can be adjusted, perturbation analysis deals with estimating the gradient $dY/d\theta$. Such an estimate can in some cases be obtained by observing a single sample path of the system. For a class of systems consistency and efficiency have been shown, and gradient estimates have been obtained using significantly fewer experiments.

Minimax algebra

A timed event graph can be represented as a linear "time-invariant" finite-dimensional system using a special algebra [40, 41]. Such an event graph is a Petri net [127] where there is only one transition upstream and one downstream in every place. Based on the algebraic representation it is possible to analyze the performance and answer questions about the number of events of a particular type that will occur in a certain time interval, or at which time the $k$th occurrence of an event happens.

GRAFCET and Petri nets

As stated before, GRAFCET and Petri nets [45, 84, 127] can be used to model and analyze a DEDS. A GRAFCET chart can be considered as a restricted Petri net where the GRAFCET steps correspond to places in Petri nets. Petri nets can be used to describe systems that are concurrent, asynchronous, distributed, parallel, nondeterministic or stochastic. As for many discrete formalisms, the major weakness is how to deal with complex systems.


The reachability problem, i.e., the problem of deciding if a particular marking can be reached, is decidable [116]. Another interesting concept is liveness [81]. If a Petri net is live then any transition will eventually fire, and no deadlock can exist.

Process algebra

Process algebra [24, 80, 86, 114] is the study of concurrent processes in an algebraic framework. It focuses on concurrency and communication between interacting modules, and originated in computer science.

Polynomial dynamical systems

Polynomial dynamical systems are systems of the form
$$x_{k+1} = f(x_k, u_k)$$
where both $x_k$ and $u_k$ are vectors over some finite field $GF(p^k)$. In order to consider all possible such systems it is enough that the function $f$ is a vector of polynomials. Some well-known special cases include linear systems as used in convolutional codes, and linear sequential circuits. If we consider the field $GF(2)$ we get the set of Boolean systems. In other cases, such as various forms of finite automata, bounded Petri nets and GRAFCET, one can derive the corresponding polynomial description. One can then make use of a strong mathematical basis to analyze and synthesize such systems under sets of design constraints [59, 60].
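As a tiny illustration of the Boolean special case, the sketch below steps a made-up two-variable system over GF(2), where addition is exclusive-or and multiplication is logical and; the particular update polynomials are arbitrary and not taken from the cited work.

```python
# A toy Boolean system over GF(2): x1' = x1 + u, x2' = x1*x2 + 1.
def step(x, u):
    x1, x2 = x
    return ((x1 ^ u) & 1, ((x1 & x2) ^ 1) & 1)

x = (0, 0)
for u in (1, 0, 1):        # apply a short input sequence
    x = step(x, u)
print(x)                   # the resulting state
```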

2.4.3 Other related fields

Other related areas are, for example, scheduling [56, 164] and automated manufacturing systems [46]. Scheduling takes a nonlinear plan and additional constraints on, for example, delivery times as inputs, and produces a new strengthened plan where the actions are given exact starting times. In this thesis we restrict ourselves to a discrete world where the state of a process is changed only by actions, or events. However, such an action can of course in the real world consist of something continuous. Consider, for example, the freight carrier unloading system in Example 2.3. The position of the trolley is of course a continuous variable, but we are only interested in whether the trolley is in some specific positions. Hence, for control purposes it is sufficient to treat the position as a discrete variable. Systems consisting of both discrete and continuous parts are often referred to as hybrid systems [35, 117, 125, 146]. Such systems are difficult to analyze, and often one tries to abstract the continuous behavior to a discrete level, as in the freight carrier example mentioned above.


3 A formalism for describing planning problems

In this chapter we will describe the formalism, i.e., the modelling language, we have chosen for describing the sequential parts of a plant. Let us assume that the plant is described by a state, and that the control action, or event, is chosen from a set of actions which transform the state of the plant into a new state. An action is usually performed by a controller, i.e., a robot or a computer; it has a duration in time and it has a result, i.e., it affects the state of the plant in some way. The formalism presented here, extended simplified action structures (SAS+), is based on the work by Sandewall and Ronnquist [139], but somewhat simplified. It is an extension of the formalism we have previously used [20, 21, 22, 89, 90, 91]. This extension was made by Backstrom [18]. The main advantage of focusing on the actions instead of on the states is that it is possible to reduce the planning complexity. As stated before (Section 2.4), the number of states in the state graph is exponential in the number of state variables, and hence the complexity of a search-based algorithm increases exponentially with the number of state variables. However, using action structures it is possible to construct algorithms whose complexity increases polynomially with the number of state variables, at least for a restricted class of problems. It is also intuitively attractive to describe the actions and how they affect the state of the plant, instead of explicitly describing the state graph. In Section 3.1 we will formalize the state description and in Section 3.2 we will define what we mean by an action. Section 3.3 then formally defines what a plan is, and the planning problem is stated. We will use some concepts about relations, and the reader who is not familiar with relations and partial orders can find an introduction to this topic in Appendix B. A more thorough presentation may be found in, for example, Gill [61].


3.1 States

The state of the plant is described by a state vector $x$ of dimension $n$, i.e., $x = (x_1, x_2, \ldots, x_n)$. Each state variable belongs to a discrete, finite set $S_i$ and thus $x \in S = S_1 \times S_2 \times \ldots \times S_n$. We let $M$ denote the set of state variable indices, i.e., $M = \{1, 2, \ldots, n\}$. Each domain is extended with the values undefined ($u_i$) and contradictory ($k_i$). The undefined value can be interpreted as "don't care", i.e., the value is unknown or does not matter. It is the latter interpretation that we will use when defining actions later on. The contradictory value is added for technical reasons only. A state $x$ is called a partial state if the undefined or contradictory value is allowed, and it is called a total state if no state variable has the undefined or the contradictory value. Thus a total state is also a partial state. This may seem counterintuitive at this stage, but the reason will be clear later on. Note that this means that a state containing the contradictory value is by definition a partial state. With notational abuse we will often only write a state when it is clear from the context whether it is a total state or not, or if this distinction is unimportant. The function $\dim(x)$ picks out the indices of the state variables in $x$ whose values are not undefined. If a state variable does not have the undefined value we say that it is defined.

Definition 3.1

1. For $i \in M$, $u_i$ denotes the value undefined, $k_i$ denotes the contradictory value and $S_i^+ = S_i \cup \{u_i, k_i\}$ is the extended domain for the $i$th state variable.
2. Let $i_1, \ldots, i_n$, where $n = |M|$, be an enumeration of $M$. Then $S = S_{i_1} \times S_{i_2} \times \ldots \times S_{i_n}$ is the total state space, and $S^+ = S_{i_1}^+ \times S_{i_2}^+ \times \ldots \times S_{i_n}^+$ is the partial state space.
3. A state $x \in S^+$ is consistent if $x_i \neq k_i$ for all $i \in M$.
4. The function $\dim : S^+ \to 2^M$ is defined such that for $x \in S^+$, $\dim(x)$ is the set of all state variable indices $i \in M$ such that $x_i \neq u_i$.
5. If $i \in \dim(x)$, then $i$ is defined for $x$.

□

Here $|M|$ denotes the cardinality (i.e., the number of elements) of the set $M$. We also define a reflexive partial order $\sqsubseteq_i$ on each extended domain $S_i^+$. (A partial order can be defined in two ways; we follow the definition given in [113]. By a partial order we mean a relation which is irreflexive, antisymmetric and transitive, and by a reflexive partial order we mean a relation which is reflexive, antisymmetric and transitive. These concepts are defined in Appendix B.) The


ordering is such that the undefined value $u_i$ is less than any other value, and the contradictory value $k_i$ is greater than any other value. All values in the set $S_i$ are mutually incomparable, i.e., they are not related unless they are equal. The order can be interpreted as reflecting information content, and is extended to partial states in the obvious way.

Definition 3.2

1. For $i \in M$, $\sqsubseteq_i$ is defined such that for all $x_i, x_i' \in S_i^+$,
$$x_i \sqsubseteq_i x_i' \iff x_i = u_i \text{ or } x_i = x_i' \text{ or } x_i' = k_i.$$
2. $\sqsubseteq$ is defined such that for all $x, x' \in S^+$,
$$x \sqsubseteq x' \iff \forall i \in M\; (x_i \sqsubseteq_i x_i').$$
If $x \sqsubseteq x'$ then we say that $x'$ is more informative than $x$.

□

Then $\sqsubseteq_i$ is a reflexive partial order on $S_i^+$, and $\langle S_i^+, \sqsubseteq_i \rangle$ forms a flat lattice for each $i$. Additionally, $\sqsubseteq$ is a reflexive partial order on $S^+$. It forms a lattice over the partial state space $S^+$, and $\sqcup$ and $\sqcap$ denote the usual lattice operators join and meet. The definitions of these concepts are given in Appendix B. That $x'$ is more informative than $x$ means that if a state variable is defined in $x$ then $x'$ must have the same value (or the contradictory value) for this state variable, and $x'$ may assign a value to a state variable which is undefined in $x$. Thus a total state $x' \in S$ is more informative than a partial state $x \in S^+$ if the state variables which are defined in $x$ have the same value in $x'$. In the following we will usually drop the subscripts of $u_i$, $k_i$ and $\sqsubseteq_i$ and write $u$, $k$ and $\sqsubseteq$ instead. The domain will be clear from the context. An example is shown in Example 3.1.

Example 3.1 Let $x = (x_1, x_2)$, $S_1 = \{0, 1\}$ and $S_2 = \{0, 1, 2\}$. Then $x \in S$ where $S = \{(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)\}$. The extended domains are given by $S_1^+ = \{0, 1, u, k\}$ and $S_2^+ = \{0, 1, 2, u, k\}$. We make the following observations:
- $x^1 = (0,1)$, $x^2 = (0,u)$ and $x^3 = (1,0)$ are consistent states,
- $\dim(x^1) = \{1, 2\}$ and $\dim(x^2) = \{1\}$,
- $x^1 \sqcup x^2 = (0,1) \sqcup (0,u) = (0,1)$,
- $x^2 \sqcup x^3 = (0,u) \sqcup (1,0) = (k,0)$, which is not consistent,


- $x^2 = (0,u) \sqsubseteq x^1 = (0,1)$,
- $(0,u) \not\sqsubseteq (u,0)$ and $(0,0) \sqsubseteq (k,0)$.
The relation graph for the relation $\sqsubseteq_1$ on the extended domain $S_1^+$ is shown in Figure 3.1a, and the relation $\sqsubseteq_2$ on the extended domain $S_2^+$ is shown in Figure 3.1b. The transitive and reflexive arcs are omitted in the figures. The relation graph for the resulting relation $\sqsubseteq$ on $S^+$ is shown in Figure 3.2. □

Figure 3.1: The relation graphs for the relations (a) $\sqsubseteq_1$ on the extended domain $S_1^+$ and (b) $\sqsubseteq_2$ on the extended domain $S_2^+$ in Example 3.1. Transitive and reflexive arcs are omitted.

Figure 3.2: The relation graph for the relation $\sqsubseteq$ on the extended domain $S^+$ in Example 3.1. Transitive and reflexive arcs are omitted.
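The orderings and lattice operations of Definitions 3.1 and 3.2 are straightforward to implement. The sketch below is one possible Python encoding, with the undefined and contradictory values represented by the strings "u" and "k"; it reproduces the observations of Example 3.1.

```python
U, K = "u", "k"   # undefined and contradictory values

def leq(v, w):
    """v is at most as informative as w on one extended domain (Definition 3.2)."""
    return v == U or v == w or w == K

def more_informative(x, y):
    """x is at most as informative as y, extended componentwise to partial states."""
    return all(leq(v, w) for v, w in zip(x, y))

def join(v, w):
    """The lattice join on one flat extended domain."""
    if leq(v, w):
        return w
    if leq(w, v):
        return v
    return K          # two different defined values are contradictory

def join_states(x, y):
    return tuple(join(v, w) for v, w in zip(x, y))

def consistent(x):
    return K not in x

# The observations of Example 3.1:
x1, x2, x3 = (0, 1), (0, U), (1, 0)
print(join_states(x1, x2))               # (0, 1)
print(join_states(x2, x3), consistent(join_states(x2, x3)))   # ('k', 0) False
print(more_informative(x2, x1))          # True
print(more_informative((0, U), (U, 0)))  # False
```

Note how the join of two states that disagree on a defined value immediately produces the contradictory value, which is exactly why consistency has to be checked after taking joins.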

3.2 Action types and actions


Examples of actions could be MoveWorkpiece, where a robot moves a workpiece from a work-station to storage, OpenValve1, where a valve called Valve1 is opened, and ReadInputChannel, where a computer reads an input channel. An action is formally described by two concepts: an action label and an action type. The action type can be interpreted as a generic action description, and the action label is simply used to distinguish between different actions of the same type. Consequently there can be several actions of the same type, but with different action labels. Thus we can talk about a particular instantiation of an action type, and not just about any instantiation of the considered action type. The action types can thus be viewed as action class definitions (compare type definitions in programming languages such as Pascal). An action type is defined by its pre-, post- and prevail-condition. The pre-condition specifies what must hold when the action starts, the post-condition what holds when the action ends, and the prevail-condition what must be true during the execution of the action. Notice that the state variables in the prevail-condition are not affected by the action. Consider, for example, the action type OpenValve1, where a valve called Valve1 is opened. Suppose that in order to open Valve1 we require that a valve called Valve2 is already open, to avoid overflow. Here, the pre-condition is that Valve1 is closed, and the post-condition that it is open. Thus the pre- and post-conditions describe what is changed by the action. An action of type OpenValve1 can only be started when Valve1 is closed (the pre-condition is fulfilled) and when Valve2 is open. Consequently there is a condition which must be fulfilled while the action is performed, but is not affected by the action. This is the prevail-condition (Valve2 is open). Formally the pre-, post- and prevail-conditions are functions as defined below.

Definition 3.3

1. $H$ is a set of action types.
2. $b : H \to S^+$ gives the pre-condition of an action type.
3. $e : H \to S^+$ gives the post-condition of an action type.
4. $f : H \to S^+$ gives the prevail-condition of an action type.

□

The conditions above are partial states, and for the partial state $b(h)$ we let $b_i(h)$ denote the $i$th state variable in $b(h)$.


We have thus defined an action type, a sort of generic action. As stated before, an action is an instantiation of an action type, and to distinguish actions of the same type from each other, a unique action label is associated with each action. The functions label and type pick out the label and type of an action, respectively.

Definition 3.4 Suppose $H$ is a set of action types. We then make the following definitions.

1. $L$ is an infinite set of action labels.
2. A set $A \subseteq L \times H$ is a set of actions if no two distinct elements in $A$ have identical first components (i.e., elements in $L$).
3. If $A$ is a set of actions we define two functions, $\mathrm{label} : A \to L$ and $\mathrm{type} : A \to H$, such that if $\langle l, h \rangle \in A$ then $\mathrm{label}(\langle l, h \rangle) = l$ and $\mathrm{type}(\langle l, h \rangle) = h$. The function $\mathrm{type}$ is extended to sets of actions so that if $A$ is a set of actions then $\mathrm{type}(A) = \{\mathrm{type}(a) \mid a \in A\}$.
4. If $A$ is a set of actions then we also extend the functions $b$, $e$ and $f$ such that for $a \in A$ we define $b(a) = b(\mathrm{type}(a))$, $e(a) = e(\mathrm{type}(a))$ and $f(a) = f(\mathrm{type}(a))$.

□

If an action $a$ such that $\mathrm{type}(a) = h \in H$ can transform the state $x$ into the state $x'$, then $b(a) \sqcup f(a) \sqsubseteq x$ and $e(a) \sqcup f(a) \sqsubseteq x'$. In other words, $b(a) \sqcup f(a)$ defines a subset of $S$ where the action $a$ can be executed, and $e(a) \sqcup f(a)$ defines a subset we will end up in after executing the action. Another way to define pre- and post-conditions would be to define $b(a) \sqcup f(a)$ as the pre-condition and $e(a) \sqcup f(a)$ as the post-condition. However, by separating the prevail-condition and the pre-/post-conditions we can handle actions performed in parallel, since the prevail-condition states exactly what must be true during the execution of the action. Even for strictly sequential plans it is an advantage to clarify the difference between $b(a)$ and $e(a)$ on one side and $f(a)$ on the other. The conditions $b(a)$ and $e(a)$ define how the action affects the state of the plant, while $f(a)$ is only a condition which tells us in which states the action can be performed. The state variables that are defined in these conditions may be thought of as sharable and non-sharable resources, following the notation used in operating systems theory [105]. The state variables in the pre- and post-conditions correspond to non-sharable resources, while the state variables in the prevail-condition correspond to sharable resources.

A SAS+-structure consists of a set of state variable indices $M$, a total state space $S$ and a set of action types $H$ fulfilling four restrictions introduced in Definition 3.5 below. The first restriction (S1) states that the conditions


must be consistent, i.e., no state variable can have the contradictory value. The second and third restrictions (S2 and S3) assert that if a state variable is defined in the pre-condition then it must be defined in the post-condition, and that for each defined state variable the values are different. Note that the opposite is not true, i.e., a state variable may be defined in the post-condition even if it is not defined in the pre-condition. State variables defined in the pre- and post-conditions cannot be defined in the prevail-condition and vice versa, which is guaranteed by (S4). Finally, a parsimonious SAS+-structure is a SAS+-structure where the possibility of having two different action types that are almost identical is excluded. By almost identical we mean that they have identical post-conditions and that the pre- and prevail-conditions for one of the action types are included in the pre- and prevail-conditions for the other. The formal definitions are given below.

Definition 3.5 A SAS+-structure $\langle M, S, H \rangle$ consists of a set of state variable indices $M$, a total state space $S$ and a set of action types $H$, where the action types satisfy the following restrictions:

(S1) The states $b(h)$, $e(h)$ and $f(h)$ are consistent for all $h \in H$.
(S2) For all $h \in H$, $\dim(b(h)) \subseteq \dim(e(h))$.
(S3) For all $h \in H$ and for all $i \in \dim(b(h))$, $b_i(h) \neq e_i(h)$.
(S4) For all $h \in H$, $\dim(e(h)) \cap \dim(f(h)) = \emptyset$.

A SAS+-structure $\langle M, S, H \rangle$ is parsimonious if

(S5) for all $h, h' \in H$ such that $e(h) = e(h')$ and $(b(h) \sqcup f(h)) \sqsubseteq (b(h') \sqcup f(h'))$, $h = h'$.

□

Note that we allow actions that change an undefined state variable to a defined value, but exclude the possibility of having actions that change a defined value to the undefined value. To illustrate what the restrictions forming a SAS+-structure mean, we look at an example.

Example 3.2 Consider $M$ and $S$ as in Example 3.1. We define six action types according to Table 3.1. From Table 3.1 we see that the first three action types fulfill the restrictions (S1)-(S4). However, the last three action types do not fulfill these restrictions. The action type $h_4$ violates (S3), while $h_5$ violates (S2). Finally, $h_6$ violates (S4). The tuple $\langle M, S, H_1 \rangle$ where $H_1 = \{h_1, h_2, h_3\}$ is a SAS+-structure, but it is not parsimonious. This follows because $e(h_1) = e(h_2)$, $b(h_1) = b(h_2)$ and $f(h_2) \sqsubseteq f(h_1)$. If we instead consider $H_2 = \{h_1, h_3\}$ or $H_3 = \{h_2, h_3\}$ we get a parsimonious SAS+-structure. □

action type h    b(h)      e(h)      f(h)
h1               (0, u)    (1, u)    (u, 1)
h2               (0, u)    (1, u)    (u, u)
h3               (u, u)    (0, u)    (u, u)
h4               (0, u)    (0, u)    (u, u)
h5               (0, u)    (u, u)    (u, u)
h6               (u, 1)    (u, 0)    (1, 1)

Table 3.1: Pre-, post- and prevail-conditions for Example 3.2.

We end this section by giving the intuitively clear definition of when an action affects a state variable, and illustrate this by an example.

Definition 3.6

1. An action $a$ such that $\mathrm{type}(a) = h \in H$ affects the $i$th state variable, where $i \in M$, if $i \in \dim(e(h))$.
2. If $A$ is a set of actions and $i \in M$, then $A[i]$ denotes the set of all $a \in A$ such that $a$ affects the $i$th state variable.
3. If $H$ is a set of action types and $i \in M$, then $H[i]$ denotes the set of all $h \in H$ such that $i \in \dim(e(h))$.

□

Example 3.3 Consider the SAS+-structure $\langle M, S, H_2 \rangle$ where $M$ and $S$ are given by Example 3.1 and $H_2$ is as in Example 3.2. Suppose the set of action labels is the natural numbers, i.e., $L = \{1, 2, 3, \ldots\}$. Let $A = \{a_1, a_2, a_3\}$ be a set of actions such that $a_1 = \langle 1, h_1 \rangle$, $a_2 = \langle 2, h_1 \rangle$ and $a_3 = \langle 3, h_3 \rangle$. We get that $\mathrm{type}(a_1) = \mathrm{type}(a_2) = h_1$ and $\mathrm{type}(a_3) = h_3$. Furthermore, $\mathrm{label}(a_1) = 1$ and $\mathrm{label}(a_2) = 2$. The actions $a_1$ and $a_2$ affect the 1st state variable, while $a_3$ affects the 2nd state variable. Finally, $A[1] = \{a_1, a_2\}$ and $A[2] = \{a_3\}$. □
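The restrictions of Definition 3.5 are easy to check mechanically. The following sketch encodes the action types of Table 3.1 as (pre, post, prevail) tuples and tests (S1)-(S4); it is an illustration in Python, not part of the algorithms presented later in the thesis.

```python
U, K = "u", "k"

def dim(x):
    """Indices of defined state variables (Definition 3.1)."""
    return {i for i, v in enumerate(x) if v != U}

def satisfies_S1_to_S4(b, e, f):
    consistent = all(K not in x for x in (b, e, f))            # (S1)
    pre_in_post = dim(b) <= dim(e)                              # (S2)
    changes_value = all(b[i] != e[i] for i in dim(b))           # (S3)
    disjoint_prevail = not (dim(e) & dim(f))                    # (S4)
    return consistent and pre_in_post and changes_value and disjoint_prevail

# The action types of Table 3.1 as (pre, post, prevail).
H = {
    "h1": ((0, U), (1, U), (U, 1)),
    "h2": ((0, U), (1, U), (U, U)),
    "h3": ((U, U), (0, U), (U, U)),
    "h4": ((0, U), (0, U), (U, U)),   # violates (S3)
    "h5": ((0, U), (U, U), (U, U)),   # violates (S2)
    "h6": ((U, 1), (U, 0), (1, 1)),   # violates (S4)
}
print({name: satisfies_S1_to_S4(*cond) for name, cond in H.items()})
# h1, h2, h3 pass; h4, h5, h6 fail, as stated in Example 3.2
```

A parsimony check (S5) would additionally compare $b(h) \sqcup f(h)$ for pairs of action types with identical post-conditions, which is how $h_1$ and $h_2$ above are found to violate it.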

3.3 Planning

Having the basic definitions given in Sections 3.1 and 3.2 we can formally define the concept of a plan, and state the planning problem. A plan is a control sequence, or a sequence of actions, which transforms a given initial state of the plant $x_0$ into a desired final state $x_\star$. Another


way to describe a plan is to say that it is a set of actions and an ordering on the actions defining the execution order. We will also define the concept of a parallel plan, thus allowing actions to be performed in parallel, as is often the case when solving sequential control problems. First we define the relation $\mapsto$ on a set of actions. If $x \stackrel{a}{\mapsto} x'$ then there is an arrow in the state graph from vertex $x$ to vertex $x'$, i.e., the state $x$ can be transformed into the state $x'$ by performing the action $a$. If $x \stackrel{\langle A, \prec \rangle}{\mapsto} x'$ then there is a path in the state graph from vertex $x$ to vertex $x'$: the total state $x$ can be transformed into the total state $x'$ by performing the actions in the set $A$ in the order given by $\prec$. Since $\prec$ is a total order on $A$, this is just a sequence of actions in a particular order. The formal definition is given below.

Definition 3.7 Let $x, x' \in S$ be total states, let $a$ be an action, let $A \subseteq L \times H$ be a finite set of actions and let $\prec$ be a total order on $A$. The relation $\mapsto$ is then defined as:

1. $x \stackrel{a}{\mapsto} x'$ if
   (a) $b(a) \sqcup f(a) \sqsubseteq x$,
   (b) $e(a) \sqcup f(a) \sqsubseteq x'$ and
   (c) for all $i \notin \dim(e(a) \sqcup f(a))$, $x_i = x_i'$.
2. $x \stackrel{\langle \emptyset, \emptyset \rangle}{\mapsto} x'$ if $x = x'$.
3. $x \stackrel{\langle A, \prec \rangle}{\mapsto} x'$ if there is some action $a$ such that
   (a) $a \in A$,
   (b) there is no $a' \in A$ such that $a' \prec a$, and
   (c) there exists $x'' \in S$ such that $x \stackrel{a}{\mapsto} x''$ and $x'' \stackrel{\langle A \setminus \{a\}, \prec \rangle}{\mapsto} x'$.

□

The relation $\mapsto$ can now be used to give a formal definition of a linear (or a non-linear) plan from an initial state $x_0$ to a final state $x_\star$, a goal state. In the following we will use $x_0$ for the initial state and $x_\star$ for the final state unless otherwise stated. A linear plan is a totally ordered set of actions, that is, a sequence of actions transforming the initial state into the final state. For a non-linear plan the ordering can be a partial order, i.e., the execution order is not fully specified. (We will often abuse the requirement that $\prec$ be an order on $A$, both here and in the condition of Definition 3.8 requiring that $\prec \subseteq A \times A$, by using a relation containing elements not in $A \times A$; in all such cases we mean the restriction of the relation to the set $A \times A$.)


The persistence handling (i.e., the issue of whether or not things can change by themselves in the world; see Section 2.3) is the same as the STRIPS assumption in [52], namely that nothing changes unless explicitly stated in the pre- and post-conditions.

Definition 3.8 Given a SAS+-structure $\langle M, S, H \rangle$ and a set of action labels $L$, we assume that $A \subseteq L \times H$ is a set of actions, $\prec \subseteq A \times A$, and $x_0, x_\star \in S^+$ are consistent states, and make the following definitions:

1. $\langle A, \prec \rangle$ is a linear plan from $x_0$ to $x_\star$ if $\prec$ is a total order on $A$ and there exists $x \in S$ such that $x_0 \stackrel{\langle A, \prec \rangle}{\mapsto} x$ and $x_\star \sqsubseteq x$.
2. $\langle A, \prec \rangle$ is a non-linear plan from $x_0$ to $x_\star$ if $\prec$ is a partial order on $A$ and $\langle A, \prec' \rangle$ is a linear plan for any total order $\prec'$ on $A$ such that $\prec \subseteq \prec'$.

□

If the final state $x_\star$ is a total state, i.e., $x_\star \in S$, then $\langle A, \prec \rangle$ is a linear plan from $x_0$ to $x_\star$ if $x_0 \stackrel{\langle A, \prec \rangle}{\mapsto} x_\star$, assuming that $\prec$ is a total order. When this distinction is of no importance we simply call it a plan. To be able to define the notion of a parallel plan we must first define what is meant by independent actions. Two actions are independent if they can be performed in parallel without interfering with each other. There are three cases where actions are not independent, i.e., cannot be performed in parallel. Two actions cannot be performed in parallel if they affect the same state variable. An action requiring a state variable to be constant during its execution, that is, the state variable is defined in the prevail-condition, cannot be performed in parallel with an action changing this state variable. And if two actions performed in parallel require some state variable to be defined in their prevail-conditions, then the value of this state variable must be the same in both prevail-conditions (obviously a state variable cannot have two values at the same time). This is formally defined in Definition 3.9.

Definition 3.9 Two actions $a$ and $a'$ are independent if, for all $i \in M$, all of the following conditions hold:

1. $e_i(a) = u$ or $(e_i(a') \sqcup f_i(a')) = u$,
2. $e_i(a') = u$ or $(e_i(a) \sqcup f_i(a)) = u$, and
3. $f_i(a) \sqsubseteq f_i(a')$ or $f_i(a') \sqsubseteq f_i(a)$.

□
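Definition 3.9 translates directly into code. The sketch below assumes the same (pre, post, prevail) encoding as in the earlier sketches and tests two tunnel action types from Example 3.4 in Section 3.4; it is an illustration only.

```python
U, K = "u", "k"

def leq(v, w):                 # v is at most as informative as w on one domain
    return v == U or v == w or w == K

def join(v, w):                # the lattice join on one extended domain
    return w if leq(v, w) else (v if leq(w, v) else K)

def independent(a, b):
    """Definition 3.9, with actions given as (pre, post, prevail) triples."""
    (_, ea, fa), (_, eb, fb) = a, b
    return all(
        (ea[i] == U or join(eb[i], fb[i]) == U)
        and (eb[i] == U or join(ea[i], fa[i]) == U)
        and (leq(fa[i], fb[i]) or leq(fb[i], fa[i]))
        for i in range(len(ea))
    )

On2 = ((U, 0, U), (U, 1, U), (1, U, U))
On3 = ((U, U, 0), (U, U, 1), (1, 1, U))
print(independent(On2, On3))   # False: On2 changes x2, which On3's prevail-condition requires
```

On2 and On3 affect different state variables, but On3's prevail-condition mentions the variable that On2 changes, so they must be ordered in any parallel plan.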


Now, a parallel plan is simply a non-linear plan where all unordered actions are independent and hence may be performed in parallel. Furthermore, we say that a plan is maximally parallel if it is parallel and it is impossible to make it "more parallel", i.e., if we remove any arrow in the relation graph there will exist unordered actions that are not independent, i.e., the plan is not parallel any more.

Definition 3.10 A non-linear plan $\langle A, \prec \rangle$ from $x_0$ to $x_\star$ is a parallel plan if all pairs of actions $a, a' \in A$ that are not ordered under $\prec$ are independent. A plan $\langle A, \prec \rangle$ is maximally parallel if there exists no order $\prec'$ on $A$ such that $\prec' \subset \prec$ and $\langle A, \prec' \rangle$ is a parallel plan. □

Since we have not put any cost on the different actions or action types, the natural thing to minimize is the number of actions in a plan. We say that a plan is minimal if it contains fewer actions than any other plan from the given initial state to the desired final state.

Definition 3.11 A plan $\langle A, \prec \rangle$ from $x_0$ to $x_\star$ is minimal if there is no other plan $\langle A', \prec' \rangle$ from $x_0$ to $x_\star$ such that $|A'| < |A|$. □

The SAS+ planning problem upon which we will concentrate can now be stated as follows.

Definition 3.12 The SAS+ planning problem is formulated as follows: Given a tuple $\Pi = \langle \langle M, S, H \rangle, x_0, x_\star \rangle$ such that $\langle M, S, H \rangle$ is a SAS+-structure and $x_0, x_\star \in S^+$ are consistent states, find a plan $\langle A, \prec \rangle$ from $x_0$ to $x_\star$. □

Note that we do not demand the initial state $x_0$ or the final state $x_\star$ to be fully specified. We should finally stress that any partial order can be implemented using GRAFCET (see Section 2.1.2), and the plan (the partial order) might therefore be represented by a GRAFCET chart. The reader who is not familiar with relations and partial orders may think of a plan as a GRAFCET chart without alternative paths. This will be further investigated in Chapter 7.
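The sketch below checks a totally ordered candidate plan directly against Definitions 3.7 and 3.8: each action must be executable in the current total state, unaffected variables keep their values, and the final state must be at least as informative as the goal. It is a straightforward illustration in Python, not one of the planning algorithms of the later chapters.

```python
U, K = "u", "k"

def leq(v, w):
    return v == U or v == w or w == K

def holds(cond, state):
    """A partial condition is satisfied in a total state if cond is less informative."""
    return all(leq(c, s) for c, s in zip(cond, state))

def apply_action(state, action):
    """One transition per Definition 3.7; raises if the action is not executable."""
    pre, post, prevail = action
    if not (holds(pre, state) and holds(prevail, state)):
        raise ValueError("pre- or prevail-condition not satisfied")
    # Variables not mentioned in the post-condition keep their values.
    return tuple(p if p != U else s for p, s in zip(post, state))

def is_linear_plan(x0, xstar, actions):
    """Definition 3.8 for a plan given as a list of actions in execution order."""
    state = x0
    try:
        for a in actions:
            state = apply_action(state, a)
    except ValueError:
        return False
    return holds(xstar, state)

# Trivial usage: one binary variable toggled from 0 to 1.
print(is_linear_plan((0,), (1,), [((0,), (1,), (U,))]))   # True
```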

3.4 Examples

We end this chapter by presenting two systems modeled using SAS+-structures. The examples will be used later to illustrate different concepts and algorithms. The first example is a toy example only chosen to be illustrative, while the second is part of a laboratory process at our department.

Example 3.4 Consider a tunnel as in Figure 3.3 that is divided into n sections, each section equipped with a lamp.



Figure 3.3: The tunnel in Example 3.4. The tunnel is divided into $n$ sections and in each section there is a lamp that may be switched on or off.

Suppose that when switching the lamp in section $k$ on or off, the lamps in all previous sections, that is, in sections 1 to $k-1$, must already be on. We can then define $n$ state variables such that
$$x_i = \begin{cases} 0 & \text{when the lamp in section } i \text{ is off} \\ 1 & \text{when the lamp in section } i \text{ is on} \end{cases}$$
for $i = 1, 2, \ldots, n$. Obviously $M = \{1, 2, \ldots, n\}$ and $S_i = \{0, 1\}$ for $i \in M$. For each $i \in M$ we define an action type $\mathrm{On}_i$ such that
$$b_i(\mathrm{On}_i) = 0, \quad e_i(\mathrm{On}_i) = 1, \quad f_j(\mathrm{On}_i) = 1 \text{ for } j = 1, \ldots, i-1, \quad f_j(\mathrm{On}_i) = u \text{ for } j = i, \ldots, n,$$
which means that when performing an action of type $\mathrm{On}_i$ we switch on the light in section $i$. As stated before, $b_i(\mathrm{On}_i)$ denotes the $i$th state variable in the partial state $b(\mathrm{On}_i)$. In the same way we define the action type $\mathrm{Off}_i$ for each $i \in M$ such that
$$b_i(\mathrm{Off}_i) = 1, \quad e_i(\mathrm{Off}_i) = 0, \quad f_j(\mathrm{Off}_i) = 1 \text{ for } j = 1, \ldots, i-1, \quad f_j(\mathrm{Off}_i) = u \text{ for } j = i, \ldots, n,$$
which consequently means that an action of type $\mathrm{Off}_i$ switches off the light in section $i$. Let $H$ denote the set of action types defined above. Using Definition 3.5 we see that $\langle M, S, H \rangle$ is a parsimonious SAS+-structure. Let us assume that all lamps are switched off in the initial state, i.e., $x_0 = (0, 0, \ldots, 0)$, and that the goal is to switch on the lamp in section $n$ leaving all other lamps unchanged, i.e., $x_\star = (0, 0, \ldots, 0, 1)$. From Definition 3.12 we see that the planning problem $\Pi = \langle \langle M, S, H \rangle, x_0, x_\star \rangle$ is in the SAS+ class of planning problems.


If $n = 3$ we get the set of action types in Table 3.2.

action type    pre          post         prevail
On1            (0, u, u)    (1, u, u)    (u, u, u)
Off1           (1, u, u)    (0, u, u)    (u, u, u)
On2            (u, 0, u)    (u, 1, u)    (1, u, u)
Off2           (u, 1, u)    (u, 0, u)    (1, u, u)
On3            (u, u, 0)    (u, u, 1)    (1, 1, u)
Off3           (u, u, 1)    (u, u, 0)    (1, 1, u)

Table 3.2: Pre-, post- and prevail-conditions for the tunnel when $n = 3$ (Example 3.4).

Figure 3.4: The state graph for the tunnel example (Example 3.4). The state transitions are marked with the action types. The initial state is marked with an arrow and the final state with double lines.


The state graph for the case when $n = 3$ is shown in Figure 3.4. The state transitions are marked with the corresponding action types. From the figure we see that a sequence of actions that transforms the initial state into the desired final state is given by actions of the types
$$\mathrm{On}_1, \mathrm{On}_2, \mathrm{On}_3, \mathrm{Off}_2, \mathrm{Off}_1.$$
Let the set of action labels $L$ be the natural numbers. Then $\langle A, \prec \rangle$, where
$$A = \{\langle 1, \mathrm{On}_1 \rangle, \langle 2, \mathrm{On}_2 \rangle, \langle 3, \mathrm{On}_3 \rangle, \langle 4, \mathrm{Off}_2 \rangle, \langle 5, \mathrm{Off}_1 \rangle\}$$
and $\prec$ is the relation on $A$ shown in Figure 3.5, is a plan from $x_0$ to $x_\star$.

Figure 3.5: The relation $\prec$ on the set $A$ in Example 3.4 (the chain $\langle 1, \mathrm{On}_1 \rangle \prec \langle 2, \mathrm{On}_2 \rangle \prec \langle 3, \mathrm{On}_3 \rangle \prec \langle 4, \mathrm{Off}_2 \rangle \prec \langle 5, \mathrm{Off}_1 \rangle$). Transitive arcs are omitted.

2

Example 3.5 Our application example is an automated assembly line for

LEGO cars [147], which is used for undergraduate laboratory sessions in digital control at the Department of Electrical Engineering at Linkoping University. The students are faced with the task of writing a program to control this assembly line using the graphical language GRAFCET [84]. The main operations for assembling a LEGO car are shown in Figure 3.6. A car is assembled from three types of parts: the chassis, the wheels and the

Figure 3.6: The main operations when assembling a Lego car (Exam-

ple 3.5).

top. We will not describe the whole assembly line here. We just remark that it consists of two very similar halves, the rst mounting the wheels on the chassis and the second mounting the top. We focus on the second half where the top

3.4 Examples

53

1

4 3 2

5

Figure 3.7: The second half of the LEGO car factory in Example 3.5.

The chassis enters the conveyor belt (1) and is transported to the top storage (2) where a top is put onto the chassis. Thereafter it is transported to the top press (3) and, nally, pushed o the conveyor belt (4) into a bu er storage (5). is put onto the chassis and pressed tight to the chassis (see Figure 3.7). The chassis rst enters on the conveyor belt from the rst half of the factory (1). It is then transported to the top storage (2) where the top is put onto the chassis, on to the top press (3) and, nally, pushed o the conveyor belt sideways (4), into a bu er storage (5). The operations are mutually asynchronous in the sense that the conveyor belt used to transport the chassis runs continuously. Hence, a stopper bar is pushed out in front of the chassis at each work-station, holding the car xed, sliding on the belt. The car cannot pass the work-station until this bar is withdrawn. Figure 3.8 shows the second work-station in more detail. The chassis is held xed at the top storage (A) by the stopper bar (B). The tops are stored in a pile and the feeder (C) is used to push out the lower-most top onto the chassis. When the top is on the chassis, the feeder is withdrawn, whereupon the stopper bar is withdrawn, thus allowing the chassis to move on to the next work-station. We will now describe one way of modelling the LEGO assembly line using action structures. Another way will be shown in Example 4.4. Depending on how the state variables are chosen we get di erent behaviors as will be described in Section 4.2. To simplify we focus on one work-station, the one that puts the top onto the chassis, see Figure 3.8. However, this work-station contains all the principal diculties encountered in the assembly line. The rst problem is to decide which state variables to use. Once this is done, we can de ne the action types. Let us introduce four state variables in

54

A formalism for describing planning problems

C

A

B

Figure 3.8: Putting the top onto the chassis (Example 3.5). the following way: 8 > 2 :3 ( x2 = 10 ( x1 = 01 ( x1 = 01

if the chassis is not yet at the top storage if the chassis is at the top storage if the chassis is at the top press if there is no top on the chassis if there is a top on the chassis if the stopper bar is home if the stopper bar is not home if the feeder is home if the feeder is not home When the stopper bar is home it is withdrawn allowing a chassis to pass the work-station, and when the feeder is home it is withdrawn and not feeding a new top onto the chassis. We de ne the action types as shown in Table 3.3. The initial state is when the chassis has just entered this part of the assembly line, i.e., x0 = (1; 0; 0; 0), and the nal state is when there is a top on the chassis and the chassis is at the top press, i.e., x? = (3; 1; 0; 0). From De nition 3.5 it follows that hM; S ; Hi, where M = f1; 2; 3; 4g, S and H as de ned above, is a parsimonious SAS+-structure. Furthermore the planning problem  = hhM; S ; Hi; x0; x?i is in the SAS+ class of planning problems

3.4 Examples

55

action type ToTopStorage ToTopPress PutTop FeederHome

pre (1; u; 0; u) (2; u; 1; u) (u; 0; u; 0) (u; u; u; 1)

post (2; u; 1; u) (3; u; 0; u) (u; 1; u; 1) (u; u; u; 0)

prevail (u; u; u; 0) (u; u; u; 0) (2; u; u; u) (u; u; u; u)

Table 3.3: Pre-, post- and prevail-conditions for the Lego car factory in Example 3.5. according to De nition 3.12. The state graph is given in Figure 3.9. If the set of action lables L is the natural numbers, then a plan for normal operation is given by h ; i where = fh1; ToTopStoragei; h2; PutTopi; h3; Feederhomei; h4; ToTopPressig and the relation  on is given in Figure 3.10. (1,0,0,0) ToTopStorage PutTop (2,0,1,0)

ToTopPress (3,0,0,0)

FeederHome (2,1,1,1)

(2,1,1,0)

ToTopPress (3,1,0,0)

Figure 3.9: State graph for the Lego assembly line in Example 3.5. The transitions are marked with the corresponding action types. The initial state is marked with an arrow and the nal state with double lines.

2

56

A formalism for describing planning problems

1, ToTopStorage

2, PutTop

3, FeederHome

4, ToTopPress

Figure 3.10: A plan for mounting the top on the chassis (Example 3.5). We have thus stated our basic formalism, SAS+, and have given two examples of how systems can be modeled using this formalism. In the following chapters we will analyze di erent properties for systems modelled using the SAS+ formalism, and present algorithms for automatic synthesis of control charts in some restricted cases.

4 Classes of planning problems Since the SAS+ formalism is as expressive as the STRIPS formalism [18] it is clear from the discussions in Sections 2.3 and 2.4 that to reduce the complexity of the planning problem we must introduce some restrictions on the problem class de ned in Chapter 3. In Section 4.1 these restrictions are given and the di erent classes of planning problems are de ned. In Section 4.2 we will see that the complexity of planning highly depends on whether the di erent restrictions are ful lled or not, and in Section 4.3 we will show how some systems can be modelled to t into the classes introduced here.

4.1 Restrictions and classes of planning problems First we reconsider and formally de ne the previously [20, 21, 22, 89, 90, 91] analyzed SAS class. De nition 4.1 A SAS+-structure hM; S ; Hi is a SAS-structure if for all h 2 H, dim(b(h)) = dim(e(h)). A SAS+ planning problem hhM; S ; Hi; x0; x?i is a SAS planning problem if hM; S ; Hi is a SAS-structure and x0 ; x? 2 S . 2 There are thus two di erences between a SAS+ and a SAS planning problem. First, in a SAS-structure the possibility of changing a state variable from the unde ned value to a de ned one is excluded. Second, the initial and the nal states must be completely speci ed, i.e., we do not allow unde ned state variables in the initial or the goal state. We also de ne a less restrictive subclass of the SAS+ class of planning problems which we call the SAS(+) class. The only di erence compared to the SAS+ class is that the initial state must be a total state, i.e., x0 2 S . The reason for introducing this restriction is to keep the binary properties when 57

58

Classes of planning problems

developing planning algorithms in Section 5.1 and Chapter 6, and we do not regard this as a serious restriction. It is more important to be able to handle partial goal states, i.e., x? 2 S + since this means that we only need to specify the values of the state variables we are interested in, and may leave all other state variables unde ned. De nition 4.2 A SAS+ planning problem  = hhM; S ; Hi; x0; x?i is a SAS(+) planning problem if x0 2 S . 2 We introduce four di erent restrictions, one on the state variable domains and three on the set of action types, as in De nition 4.3. De nition 4.3 Given a SAS+-structure hM; S ; Hi we introduce the following restrictions. (B) The domain Si, where i 2 M, is binary if jSij = 2. A set of action types H0  H is (U) unary if for all h 2 H0, dim(e(h)) is a singleton, i.e., the action type changes only one state variable. (P) post-unique if for all h; h0 2 H0,

8i 2 M (ei(h) = ei(h0) 6= ui ) h = h0) i.e., no two distinct action types in H0 can change a particular state

variable to the same value. (S) single-valued if th2H0 f (h) is consistent, i.e., there are no two distinct actions in H0 having di erent but de ned prevail-conditions for the same state variable. These de nitions are extended to a planning problem  in the obvious way. By combining these restrictions we get di erent classes of planning problems. A class that ful lls some of the restrictions is named by appending the corresponding letters to SAS, SAS(+) and SAS+ respectively. For example, the class of SAS(+)-PUBS planning problems contains instances which belong to the SAS(+) class and are post-unique, unary, binary and single-valued, while the class of SAS-PUS contains instances which belong to the SAS class and are post-unique, unary and single-valued. 2 Let us discuss these de nitions. An action type is thus unary if it a ects a single state variable. That a set of action types is post-unique means that no two di erent action types can change a particular state variable to the same value. Thus post-uniqueness means that two action types cannot even partly have the

4.1 Restrictions and classes of planning problems

59

same result. The easiest way to understand what post-unique means is probably to give an example where the set of action types is not post-unique. Suppose that we want to design an assembly line with two work-stations, where different manipulations are done. The workpieces have to pass both stations, but it does not matter in which order the workpieces are sent to the work-stations, and hence we do not want to x the order before the planning starts. This problem contains, for example, the action types MoveFromStorageToWork{ station1 and MoveFromWork{station2to1. The post-conditions to both action types are the same, namely that the workpiece is at work-station 1, and the condition above is not ful lled. The de nition of a single-valued set of action types is very restrictive. This de nition is equivalent to saying that there exists a consistent partial state ^f 2 S + such that for all h 2 H, f (h) v ^f . Let us assume that a set of action types is single-valued, and there is, for example, an action type whose prevailcondition is that a certain valve is open. Then there can be no action type whose prevail-condition is that this particular valve is closed (assuming that one state variable tells if the valve is open or closed). The class of planning problems where the state variable domains are binary and the set of action types is unary and post-unique is likely to include some interesting problems, for example process plants where some uid is transported in pipes. In such a plant the typical actions would be to open or close a speci c valve. Other examples are actuator motors that are on or o . However, we are aware of the fact that many problems of practical interest are excluded when adding the condition that the set of action types H should be single-valued. The following theorem can now be proven.

Theorem 4.4 Any SAS+ -structure hM; S ; Hi such that H is post-unique and unary is also parsimonious.

2

Proof: [18] Let hM; S ; Hi be a SAS+-structure such that H is post-unique and unary. Suppose there exist h; h0 2 H such that e(h) = e(h0). Since H is unary there exists some i 2 M such that ei(h) = ei(h0) = 6 u and, hence, h = h0 since H is post-unique. It follows that hM; S ; Hi is parsimonious. 2 To illustrate the restrictions introduced in De nition 4.3 we end this section by showing three examples.

Example 4.1 Consider the tunnel described in Example 3.4. It follows from De nition 4.1 and Table 3.2 that the planning problem belongs to the SAS class since x0; x? 2 S and dim(b(h)) = dim(e(h)) for all h 2 H. Furthermore,

the four restrictions introduced in De nition 4.3 are satis ed. We immediately see that the state variable domains are binary, and that the set of action types is unary and post-unique. We obtain th2Hf (h) = (1; 1; u), which is a consistent

60

Classes of planning problems

state, and hence the set of action types is single-valued. Thus the planning problem is in the SAS-PUBS class. Suppose our only goal is to switch on the lamp in section 3, and that we do not care if the other lamps are on or o . This corresponds to the nal state x1 = (u; u; 1). The planning problem hhM; S ; Hi; x0; x1i belongs to the SAS(+) class because x1 2 S +. 2

Example 4.2 The Lego car factory as modelled in Example 3.5 is in the SASPS class. First we observe that x0; x? 2 S and consequently the stated planning problem is in the SAS class. From Table 3.3 we see that H is post-unique and single-valued so the planning problem is in the SAS-PS class. However, H is

not unary because three of the action types have two de ned state variables in their post-condition. This means that an action of such an action type a ects two state variables. Furthermore S1 = f0; 1; 2g and hence the state variable domains are not binary. 2

Example 4.3 Let us once again look at the LEGO car factory in Example 3.5.

The position of the chassis is given by the state variable x1. If x1 = 2 then the chassis is at the top storage, and if x1 = 3 it is at the top press. Suppose we extend the set of action types with the action type PressTop and let x2 2 S2 = f0; 1; 2g where x2 is de ned as previous except that x2 = 2 if the top is pressed tight to the chassis. Then the pre-, post- and prevail-conditions for the action type PressTop are given by b(PressTop) = (u; 1; u; u) e(PressTop) = (u; 2; u; u) f (PressTop) = (3; u; u; u) because the chassis must be at the top press when performing an action of this type. We get that th2H f (h) = (k; u; u; 0) since f1(PressTop) = 3 6= f1(PutTop) = 2. Obviously th2Hf (h) is not a consistent state, and thus the set of action types is not single-valued. It follows from this and Example 4.2 that a planning problem with this new set of action types is in the SAS-P class, but not in the SAS-PS class. 2

4.2 A discussion of complexity In this section we focus on the planning problem and discuss how the restrictions introduced in De nition 4.3 a ect the computational complexity. Instead of using the formal measure of the size of an input as de ned in Section 2.2, we let the size of an instance of the planning problem be the number of state variables n = jMj. This is a simpli cation because the size of the input also

4.2 A discussion of complexity

61

depends on the number of available action types and the size of the state variable domains Si. However, when considering post-unique sets of action types there is an upper bound on the number of action types. Given a post-unique SAS+-structure hM; S ; Hi we get X jS j)  jMj jHj  jSij  (max i2M i i2M

and jHj = O((maxi2M jSij)  jMj). If all state variables are binary we simple get jHj  2jMj so jHj = O(jMj) = O(n). Hence when dealing with SAS+-structures that are binary and post-unique n is a good measure of the size of a problem instance, while if the SAS+-structure is only post-unique we must also take the size of the state variable domains into account. From Section 2.2 we know that a requirement for tractable classes is that the size of the solution to every problem instance is polynomial in the size of the input. It is easy to see that even if we restrict ourselves to the SAS-PUB class of planning problem this is not the case as shown in Theorem 4.5 [22]. Theorem 4.5 A lower bound for nding a plan for a SAS-PUB planning problem de ned in De nition 4.3 is (2jMj) operations in the worst case. 2 Proof: We rst note that a plan is not minimal if it passes some state x 2 S more than once. Since there are 2jMj states, no minimal plan can have more than 2jMj ? 1 actions. We will now prove that there are SAS-PUB problems with minimal plans of this size by constructing a generic example. Given an integer n > 0, let M = f1; 2; : : : ; ng and Si = f0; 1g for i 2 M. Construct H = fh1; h01; : : : ; hn; h0ng such that for 1  k  n: ( 0; i = k 0 bi (hk ) = ei(hk ) = u; i 6= k ( 1; i = k 0 ei (hk ) = bi (hk ) = u; i 6= k 8 > < 0; 1  i < k ? 1 0 fi (hk ) = fi (hk ) = > 1; 1  i = k ? 1 : u; k  i  n Also de ne x0 and x? such that x0i = 0 for 1  i  n ( ? xi = 01;; 1i = ni < n

62

Classes of planning problems

It can be proven by induction on n that a minimal plan from x0 to x? requires 2n ? 1 actions. Obviously, minimal plans for SAS-PUB problems are of size (2jMj) in the worst case, so a trivial lower bound for worst case planning is (2jMj) operations. 2 It is thus not possible to construct a polynomial-time planning algorithm for SAS-PUB problems. However, the intractability stem from the plans themselves being of exponential size in the worst case and such plans are unlikely to be of practical interest. Backstrom and Nebel [18, 23] have further analyzed the complexity of the di erent classes of planning problems that are induced by the restrictions introduced in Section 4.1. In Figure 4.11 we show their result. On top is the Unrestricted Intractable (exponential solutions) P

B

S

U

Tractable (polynomial solutions) PB

PU

PUB

UB

PS

PBS

PUS

BS

US

UBS

PUBS

Figure 4.1: Plan search complexity for the SAS+ planning problem and its subclasses.

unrestricted SAS+ planning problem, and at the bottom the most restrictive 1

This gure is from Backstrom [18].

4.3 Modelling a problem to make it t into a class

63

SAS+ problem, that is, the SAS+-PUBS planning problem. With plan search complexity we mean the complexity of the planning problem as de ned in Definition 3.12. The results are valid also for the SAS and the SAS(+) classes of planning problems. From Figure 4.1 we see that we cannot hope to construct planning algorithms whose complexity increases polynomially with the number of state variables other than for a restricted class of problems. We have previously presented polynomial time algorithms for solving the SAS-PUBS [20, 22, 89, 90] and the SAS-PUS [21, 91] planning problems. Backstrom [18, 19] presents a polynomial time algorithm for the SAS+-PUS class. This algorithm is a modi cation of the SAS-PUS algorithm mentioned above. These algorithms generate minimal and maximally parallel plans. Backstrom and Nebel [23] present a polynomial algorithm for the SAS+-US class, and since the SAS+ -UBS class is a subclass of this class the algorithm can of course be used for this class too. This algorithm di ers from the others in that it is not guaranteed to generate a minimal plan. There are, however, many situations where one is satis ed with a non-minimal plan if generated fast enough. Furthermore, nding an optimal plan is NP-equivalent [23] for these two classes. NP-equivalence corresponds to the class of NP-complete decision problems. In Section 5.1 we present an algorithm for solving the SAS(+)-PUBS planning problem. This algorithm is a modi cation of the previously mentioned SAS-PUBS planning algorithm and is in Chapter 6 used to construct an algorithm for the SAS(+)-PUB class. The algorithm splits the original SAS(+)-PUB planning problem into a number of SAS(+)-PUBS planning problems, that can be solved in polynomial time. However, from Theorem 4.5 and Figure 4.1 is is clear that this algorithm cannot show a polynomial worst-case complexity. Still, we believe that for real-world applications the number of splits will be reasonably low. For completeness we also present the previously mentioned SAS-PUS algorithm in Section 5.2.

4.3 Modelling a problem to make it t into a class In this section we will show how a given plant in some cases can be modelled to t into a tractable class. The examples presented here covers three di erent cases:

 An action type may be split ( ctitious actions types added) to preserve unariness (Example 4.4).

 Additional state variables may be added to preserve single-valuedness (Example 4.4).

64

Classes of planning problems  Conditional action types may be split into several action types. For a

conditional action the outcome of the action depends on the state where the action is performed (Examples 4.5 and 4.6).

As can be seen above, this procedure will lead to more state variables and action types than the original problem. If one is forced to add too many ctitious action types or add too many ctitious state variables then there will of course be no gain in complexity. An additional way to preserve singlevaluedness would be to have di erent prevail-conditions in di erent operation modes for the plant.

Example 4.4 This example describes how we have modelled the LEGO assembly line described in Example 3.5 so that it becomes tractable to plan for it. In the same way as before we focus on one work-station, the one that puts the top onto the chassis, see Figure 3.8. The rst problem is to decide which state variables to use. Once this is done, we can de ne the action types. It is rather straightforward to de ne the pre- and post-conditions for the action types, but de ning the prevailconditions requires some care if we are not to violate the restrictions which render tractability. In addition we do not want to over-specify the prevailconditions since we want to be able to execute actions in parallel whenever possible. If we are to succeed in staying close to the restrictions of the SASPUBS problem, we have to consider the action type modelling when chosing state variables, however. We must not violate unariness, but we may, to some extent, violate single-valuedness, as we will see in Section 5.1. As an alternative to Example 3.5 we introduce six state variables, interpreted according to Table 4.1. With stopper bar home we mean that the stopper bar is withdrawn allowing a chassis to pass the work-station. The representation of stopper bar home and chassis at top storage may seem strange at this point, but the reason for choosing this representation will become clear later on. state chassis at top storage chassis passed top storage stopper bar home feeder home top mounted

Yes x1 = 1 and x2 = 0 x2 = 1 x3 = 0 or x4 = 1 x5 = 0 x6 = 1

No x1 = 0 or x2 = 1 x2 = 0 x3 = 1 and x4 = 0 x5 = 1 x6 = 0

Table 4.1: State variables for the Lego car factory (Example 4.4).

4.3 Modelling a problem to make it t into a class action type ToTopStorage ToTopPress StopperForward StopperHome FeederForward FeederHome PutTop

pre x1 = 0 x2 = 0 x3 = 0 x4 = 0 x5 = 0 x5 = 1 x6 = 0

post x1 = 1 x2 = 1 x3 = 1 x4 = 1 x5 = 1 x5 = 0 x6 = 1

65

prevail x3 = 1 x1 = 1; x4 = 1

x1 = 1; x3 = 1 x1 = 1 x1 = x3 = x5 = 1; x4 = 0

Table 4.2: Pre-, post- and prevail-conditions for the Lego car factory

(Example 4.4).

We de ne the action types as shown in Table 4.2. The action type PutTop is a ctitious action type which is introduced in order to keep the action types unary. The physical e ect of an action of this type is actually brought about by an action of type FeederForward and thus nothing will actually be changed when executing an action of type PutTop. This does not matter since these two action types must always occur together. In order not to violate single-valuedness we use one state variable for each possible position of the chassis along the line. Furthermore, each such position variable is interpreted such that it is reset as long as the chassis has not yet reached this position, and it is set if the chassis is at or has been at this position. Hence, they cannot be interpreted in isolation but only together. The initial state is x0 = (0; 0; 0; 0; 0; 0) and the nal state is x? = (1; 1; 1; 1; 0; 1). We see from Tables 4.1 and 4.2 that the problem is in the SAS-PUB class. The action types PutTop and ToTopPress together violate single-valuedness, by requiring x4 = 0 and x4 = 1 in their prevail-conditions respectively. This will be further investigated in Example 5.4, and we will see that even if this problem belongs to the SAS-PUB class (compare Figure 4.1) it can actually be solved using a polynomial time algorithm described in Section 5.1 (Algorithm 5.1). This is of course not always true since the algorithm is developed for the SAS(+)-PUBS class. When using it for the SAS(+)-PUB class of planning problems we only know that if that algorithm generates a plan, it will solve the stated problem, but if the algorithm fails nothing can be said. 2

Example 4.5 Suppose a robot can move a workpiece between three di er-

ent work{stations. It may seem natural to introduce an action type Move that is dependent on the start position of the workpiece and the desired position of it. Such an action type would be a function of these two positions (Move(Position1,Position2)) and the position of the robot (the robot must be

66

Classes of planning problems

at Position1 when starting to move the workpiece and at Position2 after the action). Using the SAS+ formalism such an action type must be split into six di erent action types Move12, Move13, Move23, Move21, Move31 and Move32 with obvious pre- and post-conditions. Another example is the operation Add1, performed on a subset of the natural numbers, for example f0; 1; : : : ; 100g. To model this using simpli ed action structures we must de ne one action type for each number in the given set. 2

Example 4.6 Consider a system consisting of pipes and valves as in Figure 4.2 inspired by Steritherm presented in Example 2.2. As can be seen in the gure, there are two kinds of uids available, and which one that enters the system depends on the position of the two-way valve ValveA. Suppose that one of the

uids is the production uid, for example milk, while the other one is water used to wash the pipes. When producing the uid should go through pipe 4 to a packing machine, while the water should go through pipe 3 to a sink. Production fluid

3

A

1

Water

B

2

Sink

C

4

To packing machine

Figure 4.2: Example 4.6. A rst approach might be do introduce one action type for each valve corresponding to opening the valve, and one for closing the valve. This will lead to conditional actions because the result of an action of, for instance, type OpenValveB depends on what is in pipe 1. We must thus split such an action type into two action types: OpenValveBP and OpenValveBW . Here OpenValveP P can only be performed when there are production uid in pipe 1, and the result will be production uid in pipe 2. In the same way OpenValveBW requires water in pipe 1, and after the execution of an action of this type there will be water in pipe 2.

2

5 Polynomial planning algorithms In this chapter we present two planning algorithms for two di erent classes of planning problems, namely the SAS(+)-PUBS class and the SAS-PUS class de ned in De nition 4.3. The algorithms given here are based on theoretical considerations and the complexity of the algorithms are proven to be polynomial in the number of state variables. We show that the algorithms are sound, i.e., a returned plan will in fact solve the stated planning problem, and complete, i.e., if any plan exists then the algorithm succeeds in constructing a plan. Additionally we show that the algorithms fail if no plan exists, and that the returned plans are minimal with maximal parallelism. In Section 5.1 we consider SAS(+)-PUBS planning problems, and in Section 5.2 we analyze the SAS-PUS class.

5.1 Planning for the SAS(+)-PUBS class In this section we will consider planning for SAS(+)-PUBS problems as de ned in De nition 4.3. The main result is a polynomial time algorithm for such problems which is proven sound and complete. Furthermore the algorithm is proven to return a minimal and maximally parallel plan. It is an extension of the algorithm for solving SAS-PUBS planning problems that we presented in [22, 89]. The speci cation of the algorithm is given in Section 5.1.1, and we show that this de nes a minimal and maximally parallel plan. An algorithm ful lling the speci cation is presented in Section 5.1.2 and the complexity of the algorithm is analyzed in Section 5.1.3. 67

68

Polynomial planning algorithms

5.1.1 Existence of SAS(+) -PUBS plans

In this subsection we give the speci cation of a planning algorithm for the SAS(+)-PUBS class of planning problems. For such a problem the planning can be split into two parts. First we nd which actions we have to perform, i.e., we nd the set of necessary actions. Then we nd the execution order by de ning the relation `precedes' on the set of necessary actions. The set of necessary actions consists of two principally di erent sets, the set of primarily necessary actions (P0) and the set of secondarily necessary actions (P~ ). When nding the set of primarily necessary actions we look at the state variables that are di erent in the the initial state and the nal state, and nd the actions whose pre- and post-conditions are such that if they could be performed, the initial state would be transformed into the nal state. Having no prevail-conditions, only these actions would be needed to transform the initial state into the nal state. However, we cannot be sure that the prevailconditions for these actions are ful lled. Hence we must probably perform some actions to ensure that the prevail-conditions for the di erent actions hold, and some actions to \reset" the thereby transformed state variables. These actions belong to the set of secondarily necessary actions. The formal de nition given below might look rather complicated, but the idea is, as presented above, rather simple. De nition 5.1 Given a SAS(+)-PUB planning problem  = hhM; S ; Hi; x0; x?i, the set (x0; x?) of necessary actions for a plan from x0 to x? is recursively de ned as follows: 1. A = fhg(h); hi j h 2 Hg where g : H ! L is an arbitrary injection. 2. (a) For each i 2 M such that x?i 6vi x0i there is exactly one action a 2 A such that bi(a) vi x0i , ei(a) = x?i and a 2 P0 . No other actions belong to P0. (b) T0 = P0 (c) A0 = A ? P0 3. For k  0: (a) For each a 2 Pk and for each i 2 M: i. If fi(a) 6vi x0i and there is no a0 2 Tk such that ei(a0) = fi(a), then there is one action a1 2 Ak such that bi(a1) vi x0i , ei(a1) = fi (a), and a1 2 Pk+1 . ii. If x?i 6vi fi(a) and there is no a0 2 Tk such that ei(a0) = x?i, then there is one action a2 2 Ak such that bi(a2) vi fi(a), ei(a2) = x?i and a2 2 Pk+1 . No other actions belong to Pk+1 .

5.1 Planning for the SAS(+)-PUBS class (b) (c) 4. (a) (b)

69

Tk+1 = Tk [ Pk+1 Ak+1 = Ak ? Pk+1 P~ = [1k=1Pk , and (x0; x?) = P0 [ P~ .

2 As stated before the set P0 is the set of primarily necessary actions and the set P~ is the set of secondarily necessary actions. The union of these two sets is the set (x0; x?) which is the set of necessary actions. That the action a in step 2(a) above is unique is obvious since the set of action types H is post-unique and the set A is constructed by putting a unique label on every action type in the set H. Hence either the action a exists, and is unique, or no such action exists at all. Note that this is not an algorithm for computing , but a speci cation such of . We have thus given a speci cation of the set of necessary actions (x0; x?), that is, the set of actions needed to transform the initial state x0 into the desired nal state x?. It will be shown in Lemma 5.7 and Theorem 5.8 below that these actions are necessary and sucient for SAS(+)-PUBS planning problems. When the initial and nal states are clear from the context we will often write  instead of (x0; x?). An example is given below. Example 5.1 Consider Example 3.4. Let the set of action labels L be the natural numbers, and

A = fh1; On1i; h2; On2i; h3; On3i; h4; O 1i; h5; O 2i; h6; O 3ig: The rst step is to compute the set of primarily necessary actions P0 according to (2) in De nition 5.1. The initial state x0 = (0; 0; 0) and the nal state x? = (0; 0; 1), and we see that x0 and x? only di er for the third state variable, i.e., x03 = 0 6= x?3 = 1. Now, b3(h3; On3i) = 0 v x03 and e3(h3; On3i) = 1 = x?3 and thus h3; On3i 2 P0. In fact, this is the only action in the set of primarily necessary actions, so P0 = fh3; On3ig. We get that

T0 = P 0 = fh3; On3ig A0 = A ? P0 = fh1; On1i; h2; On2i; h4; O 1i; h5; O 2i; h6; O 3ig according to (2b) and (2c). The next step is to compute the set of secondarily necessary actions as in step (3) above. For k = 0 we get f (h3; On 3 i) = (1; 1; u) 6v x0: For the rst state variable we have f1(h3; On3i) 6v x01. Furthermore there is no action a0 2 T0 such that e1(a0) = f1(h3; On3i). The action h1; On1i 2 A0

70

Polynomial planning algorithms

is such that b1(h1; On1i) v x01 and e1(h1; On1i) = f1(h3; On3i) = 1, and it follows that h1; On1i 2 P1. In the same way we obtain that x?1 6v f1(h3; On3i) and the action h4; O 1i 2 A0 is such that b1(h4; O 1i) v f1(h3; On1i) = 1 and e1(h4; O 1 i) = x?1 = 1, and thus h4; O 1 i 2 P1 . Repeating this for the second and third state variables results in P1 = fh1; On1i; h2; On2i; h4; O 1i; h5; O 2ig T1 = T0 [ P1 = fh1; On1i; h2; On2i; h3; On3i; h4; O 1i; h5; O 2ig A1 = fh6; O 3ig and for every k > 1 we get that Pk = ;. The set of secondarily necessary actions is thus given by P~ = fh1; On1i; h2; On2i; h4; O 1i; h5; O 2ig and nally the set of necessary actions is (x0; x?) = fh1; On1i; h2; On2i; h3; On3i; h4; O 1i; h5; O 2ig:

2 Before de ning the execution order we de ne the transitive closure and the reduction of a relation.

De nition 5.2 Given a relation r we make the following de nitions: i 1. r+ = [1 k=1 r is the transitive closure of r, and 2. r? denotes the reduction1 of r de ned as the minimal q  r such that q + = r+ .

2 Taking the transitive closure of a relation is the same as making the relation transitive, and it corresponds to all vertices in the relation graph that can be reached in any number of steps. The reduction of a relation corresponds to deleting all transitive arcs in the relation graph. The next step is to de ne the execution order, i.e., the relation `precedes' which is formally de ned in De nition 5.3 below. The relation `precedes' () is de ned as the transitive closure of the union of two relations `enables' ( ) and `disables' (). These two relations can be described as follows:  If a1a2 then a1 provides some part of the prevail-condition of the action a2, that is, a1 `enables' a2. This name is from Mehlhorn [112] who de nes the same concept but in a slightly di erent way. We are indebted to Karl-Johan Backstrom for this elegant formulation. 1

5.1 Planning for the SAS(+)-PUBS class

71

 If a1a2 then a2 \destroys" some part of the prevail-condition of the

action a1, i.e., a2 `disables' a1. Thus, that a2 `disables' a1 means that if we are in a state x such that the prevail-condition of a1 is ful lled, i.e., f (a1) v x and the action a2 is performed we get a new state x0 such that f (a1) 6v x0 and we cannot immediately perform a1. If a1 `precedes' a2 then, loosely speaking, either a1 `enables' a2, or a2 `disables' a1. Note that a1a2 means that a1 `enables' a2, but that a1a2 means that a2 `disables' a1, that is,  is actually `inverse disables'. The formal de nition below is given for any set of actions or action types. De nition 5.3 Suppose  is a set of actions or action types, then the relation  on  is de ned as: 1. For all a; a0 2 , a a0 , 9i 2 M such that ei(a) = fi(a0) 6= u. 2. For all a; a0 2 , a a0 , 9i 2 M such that fi(a) 6v ei(a0) 6= u. 3.  =  [  i 4.  = + = [1 k=1 

2

Note that since  is nite + = [1k=1 i = [jk=1j i , and that fi(a) 6v ei(a0) 6= u means that u 6= fi(a) 6= ei(a0) 6= u and either bi(a0) = u or bi(a0) = fi(a). This follows because only binary state variables are used. Sometimes we drop the subscript on , and it is then assumed that the relation  is de ned on the set . The easiest way to get some feeling for these relations is to look at an example. Example 5.2 Consider Example 3.4. According to Example 5.1 the set of necessary actions is (x0; x?) = fh1; On1i; h2; On2i; h3; On3i; h4; O 1i; h5; O 2ig: We want to nd the relation `precedes' on the set (x0; x?). According to De nition 5.3 we must rst compute the two relations  and . Now, aa0 if ei(a) = fi (a0) 6= u for some i 2 M. Consider, for example, the actions h3; On3 i and h1; On1i. From Table 3.2 we see that e1(h1; On1i) = f1(h3; On3i) = 1 and hence h1; On1i h3; On3i. This means that if we are in a state such that the prevail-condition of h3; On3i is not ful lled for the rst state variable, i.e. x1 = 0, then we must perform an action of the same type as h1; On1i before the action h3; On3i because f1(h3; On3i) = 1. Continuing in the same way we get  as in Figure 5.1. Furthermore, aa0 if fi(a) 6v ei(a0) 6= u for some i 2 M. Consider, for example, the actions h3; On3i and h4; O 1i. From Table 3.2 we see that

72

Polynomial planning algorithms 1, On 1

2, On 2

3, On 3

5, Off 2

Figure 5.1: The relation graph for the relation  in Example 5.2. Transitive arcs are omitted.

h3; On3i) = 1 6v e1(h4; O 1i) = 0, and hence h3; On3i h4; O 1i. This means that if we are in a state such that the prevail-condition of h3; On3i is ful lled, i.e. x1 = 1, and we perform the action h4; O 1i we cannot immediately perform the action h3; On3i because we have `destroyed' its prevail-condition f1 (

(in the resulting state x1 = 0). Continuing in the same way we get  as in Figure 5.2. Finally, the relation graph for the relation  is given in Figure 5.3. 3, On 3

2, Off 2

4, Off 1

2, On 2

Figure 5.2: The relation graph for the relation  in Example 5.2. Transitive arcs are omitted.

1, On 1

2, On 2

3, On 3

5, Off 2

4, Off 1

Figure 5.3: The relation graph for the relation  in Example 5.2. Transitive arcs are omitted.

2 Now, given a SAS(+)-PUBS problem, where x0 is the initial state and x? the nal state, a plan from x0 to x? is given by h(x0; x?); i if there exists any plan. Observe that when saying that h(x0; x?); i is a plan it is implicit from De nition 3.8 that  is a partial order, and if this is not the case no plan exists. There are thus two conditions for a plan to exist. First, it must be possible to construct the set of necessary actions (x0; x?), that is, there can be no

5.1 Planning for the SAS(+)-PUBS class

73

\missing" action type. Second, the relation `precedes' () must be a partial order. This means that the relation graph is acyclic, i.e., there are no loops in the relation graph. That this de nes a correct plan that is minimal and shows maximal parallelism is shown in Theorem 5.9. An algorithm according to the speci cations given in De nitions 5.1 and 5.3 is stated in Section 5.1.2 (Algorithm 5.1). Before stating the formal theorems we look at Example 3.4 again. Example 5.3 Consider Example 3.4. It follows from Examples 5.1 and 5.2 that a minimal plan from x0 to x? is given by h(x0; x?); i where (x0; x?) = fh1; On1i; h2; On2i; h3; On3i; h4; O 1i; h5; O 2ig and  is given by Figure 5.3. 2 We have thus given a speci cation of the algorithm, and must now prove that this speci cation is correct, i.e., that it speci es minimal and maximally parallel plans solving the stated problem. The main result in this section is Theorem 5.9 where the speci cation in De nitions 5.1 and 5.3 is proven to de ne a minimal and maximally parallel plan if and only if any plan exists. First we de ne when a function is a relabelling and the concept of isomorphic sets of actions. If two sets of actions are isomorphic then for each action in the rst set there is a corresponding unique action of the same type in the second set, and vice versa. De nition 5.4 Given a set of action labels L and a set of action types H we make the following de nitions. 1. A function g : L ! L is a relabelling if it is a permutation on L. 2. Each relabelling g is extended to a function g : L  H ! L  H de ned as g(hl; hi) = hg(l); hi, and it is further extended to be a function g : 2LH ! 2LH de ned for sets of actions such that g( ) = fg(a) j a 2 g. 3. If g is a relabelling on2 a set of actions and  is a relation on then 2 ( LH ) ( LH g:2 ! 2 ) is de ned as hg(a); g(a0)i 2 g() if ha; a0i 2  for a; a0 2 .

2

De nition 5.5 If and  are sets of actions, and  and  are relations on and  respectively, then h ; i and h; i are isomorphic if there is a

relabelling g such that = g() and  = g(). 2 Given a set of actions A and a relabelling r, r(A) is a set of actions that is isomorphic to A.

74

Polynomial planning algorithms

Proposition 5.6 Consider a SAS+-structure hM; S ; Hi. If A is a set of actions such that type (A)  H and r is a relabelling, then r(A) is a set of actions isomorphic to A.

2

The following lemma shows that (x0; x?) is the set of necessary actions, i.e., that (x0; x?) exists if any plan exists and that (x0; x?) is isomorphic to a subset of any plan. This means that any plan contains actions of the same type as the actions in the set (x0; x?), that is, if h ; i is a plan, then type ((x0; x? ))  type ( ). Observe that we do not require single-valued sets of action types here. If the set of action types is single-valued, the set (x0; x?) will be both necessary and sucient as shown in Theorem 5.9. The di erence when dealing with non-single-valued sets of action types is that a minimal plan might contain several actions of the same type. Lemma 5.7 Consider a SAS(+)-PUB problem  = hhM; S ; Hi; x0; x?i. If there is a plan h ; i from x0 to x?, then the set of necessary actions (x0; x?) de ned in De nition 5.1 exists and there exists a relabeling r such that (x0; x?)  r( ). 2 Proof: The proof is given in Appendix C. 2 In Theorem 5.8 we show that if the set (x0; x?) exists and  is a partial order then h(x0; x?); i is a plan from x0 to x?, i.e., the actions in the set (x0; x?) performed in any total order which includes  transforms the initial state x0 into the desired nal state x?. For future use we observe that singlevaluedness is not required for this theorem either. This means that if the set (x0; x?) can be computed, and  is a partial order, then h; i is a plan even for SAS(+)-PUB planning problems. Theorem 5.8 Consider a SAS(+)-PUB problem  = hhM; S ; Hi; x0; x?i. If (x0; x?) according to De nition 5.1 exists and  de ned in De nition 5.3 is a partial order then h(x0; x?); i is a plan from x0 to x?. 2 Proof: The proof is given in Appendix C. 2 We can now state the main theorem for SAS(+)-PUBS planning problems, namely that (x0; x?) exists and that h(x0; x?); i is a minimal and maximally parallel plan if and only if there is a plan. Thus, either there exists a set (x0; x?) and a relation  that is a partial order, or there is no plan from x0 to x?. Theorem 5.9 Suppose  = hhM; S ; Hi; x0; x?i is a SAS(+)-PUBS planning problem. Then (x0; x?) according to De nition 5.1 exists and h(x0; x?); i, where  is de ned in De nition 5.3, is a minimal and maximally parallel plan from x0 to x? if and only if there is any plan from x0 to x?. 2

5.1 Planning for the SAS(+)-PUBS class

75

Proof: The proof is given in Appendix C.

2

In Example 5.3 we showed an example of a SAS(+)-PUBS planning problem and used De nitions 5.1 and 5.3 to compute a plan. According to Theorem 5.8 this strategy can be used sometimes even for SAS(+)-PUB planning problem. If it is possible to compute the set (x0; x?) and  is a partial order, then h(x0; x?); i is a plan solving the stated problem. If (x0; x?) cannot be computed, then no plan exists according to Lemma 5.7. However, if (x0; x?) can be computed but  is not a partial order there might still exist a plan containing several actions of some action types. This will be further investigated in Chapter 6. In Example 5.4 the set of action types is not single-valued but h; i still speci es a plan.

Example 5.4 Consider the lego car assembly line presented in Example 3.5

modelled as in Example 4.4. From Example 4.4 we know that the problem belongs to the SAS-PUB class. If the set of action labels L is the natural numbers, then a set of actions ful lling De nition 5.1 is (x0; x?) = f h1; ToTopStoragei; h2; ToTopPressi; h3; StopperForwardi; h4; StopperHomei; h5; FeederForwardi; h6; FeederHomei; h7; PutTopig: The relation `precedes' () on the set of necessary actions (x0; x?) is given in Figure 5.4. From the gure we see that  is a partial order, and according to Theorem C.13 h(x0; x?); i is a minimal plan from x0 to x? showing maximal parallelism. 3, StopperForward

1, ToTopStorage

5, FeederForward

7, PutTop

4, StopperHome

6, FeederHome

2, ToTopPress

Figure 5.4: The relation graph for the relation  in Example 5.4. Transitive arcs are omitted.

It is easy to understand why it works for this example. There are two action types that `destroy' single-valuedness, namely PutTop and ToTopPress. This is due to the fact that f4(PutTop) = 0 6= f4(ToTopPress) = 1. From Table 4.2 it follows that there is only one action type a ecting this state variable. Because of this it is obvious that either h(x0; x?); i is a plan ( i.e.  can be constructed and  is a partial order) or no plan exists. 2

76

Polynomial planning algorithms

5.1.2 The SAS(+)-PUBS planning algorithm

Here we present an algorithm according to the speci cations in De nition 5.1 and De nition 5.3. The algorithm is proven correct, i.e., it ful lls the speci cations given in De nition 5.1 and De nition 5.3. An informal description of the algorithm is given below. compute the set of primarily necessary actions P0 (lines 15-24) if the set P0 exists then compute the set of secondarily necessary actions P~ (lines 26-53) if the set P~ exists then  = P0 [ P~ fCompute `precedes' ()(lines 55-68)g for all actions a1; a2 2  do if a1 'enables' a2 then a1a2 if a2 'disables' a1 then a1a2

endfor if there is no loop in  then return h; + i endif endif endif

In the informal description above we have indicated which lines in Algorithm 5.1 below that corresponds to the di erent steps. Before stating the formal algorithm we de ne some functions and procedures used in it.

De nition 5.10 We assume that the following functions and procedures are

available: Insert(a,A): Inserts the action a into the set A. FindActionPost(A,i,x): Searches the set A for an action a such that ei (a) = xi. Returns a if found, otherwise returns nil. FindAndRemove(A,i,x): Like FindActionPost, but also removes a from A if it is found.

2 Now we can describe an algorithm according to the speci cations in De nition 5.1 and De nition 5.3.

5.1 Planning for the SAS(+)-PUBS class

77

Algorithm 5.1 Input: A, a set containing one action for each action type in H, M, a set of state variable indices, and x0 and x?, the initial and nal states, respectively.

Output: D a set of actions, and r a partial order on D. 1 Procedure PlanPUBS(x0 ; x?; M; A);

2 x0; x? : state; 3 M : set of state variable indices; 4 A : set of actions; 5 var 6 i : state variable index; 7 a; a0; a1; a2 : action; 8 P; Q; D : set of actions; 9 r : relation; 10 11 beginfPlanPUBSg 12 finitializationg 13 D := ;; 14 P := ;; 15 fComputation of the set of primarily necessary actionsg 16 for i 2 Mdo 17 if x?i 6v x0i then 18 a :=FindAndRemove(A; i; x?i); 19 if a 6= nil then 20 Insert(a; P ); 21 Insert(a; D); 22 elsefail 23 endif ; 24 endif ; 25 endfor; 26 fComputation of the set of secondarily necessary actionsg 27 while P 6= ;do 28 Q := ;; 29 for a 2 P do 30 for i 2 Mdo 31 if fi(a) 6v x0i then 32 a0 :=FindActionPost(D; i; fi(a)); 33 if a0 = nil then 34 a1 :=FindAndRemove(A; i; fi(a)); 35 if a1 = nil then 36 fail; 37 else 38 Insert(a1 ; Q);

78

Polynomial planning algorithms

39 Insert(a1 ; D); 40 endif ; 41 if x?i 6v fi(a) then 42 a2 :=FindAndRemove(A; i; x?i); 43 if a2 = nil then 44 fail; 45 else 46 Insert(a2; Q); 47 Insert(a2; D); 48 endif ; 49 endif ; 50 endif ; 51 endif ; 52 endfor; 53 endfor; 54 P := Q; 55 endwhile; 56 fComputation of `precedes'g 57 for a 2 D do 58 for a0 2 D do 59 for i 2 Mdo 60 fComputation of `enables'g 61 if ei(a) = fi(a0) 6= u then 62 Order(a; a0; r); 63 endif ; 64 fComputation of `disables'g 65 if fi(a) 6v ei(a0) 6= u then 66 Order(a; a0; r); 67 endif ; 68 endfor; 69 endfor; 70 endfor; 71 fTest if `precedes' is a partial orderg 72 if there is a loop in r then 73 fail; 74 endif ; 75 returnhD; r i; 76 end; fPlanPUBSg

2

In the algorithm above we do not compute the transitive closure of the relation but only test whether r is acyclic, i.e., if there is a loop in the relation graph. This is because the transitive closure is not likely to be of any practical

5.1 Planning for the SAS(+)-PUBS class

79

interest, and it is costly to compute it. In Lemma C.7 we show that  cannot be irre exive if it is antisymmetric. Thus to test if  is a partial order we only need to test if it is antisymmetric, and this is equivalent to being acyclic, i.e., there are no loops in . If desired it is of course possible to add a computation of the transitive closure to the algorithm. The rst part of the algorithm, lines 15{24, compares the states x0 and ? x and for each state variable that di ers if x?i 6= u it searches A for an appropriate action to change this state variable. If such an action is found it is removed from A and inserted into D and P , and otherwise the algorithm fails. Immediately after line 24, P corresponds to the set P0 of De nition 5.1, i.e., the set of primarily necessary actions. The next part, lines 26{53, nds the actions needed to satisfy the prevail-conditions of the actions in the plan. The variables are used so that the kth time through the while loop P = Pk?1 and Q = Pk . The variable D is the union of all Pk :s so far. The while loop terminates as soon as P = ;, that is Pk = ; after the kth time through the loop. It is proven below that this is sucient so no in nite chain of empty Pk :s need be constructed. D =  after the termination of the while loop. The for loops in lines 56{69 then go through all pairs a; a0 of actions in D and mark that ara0 if aa0 or aa0. Finally r+ corresponds to the relation  when the algorithm terminates. The algorithm is not optimized since our goal has only been to prove tractability. The rest of this subsection presents the correctness proof of Algorithm 5.1, which results in Theorem 5.20. In the remainder of this section we assume that A = A where A is the input to Algorithm 5.1 and A is as in De nition 5.1.

Lemma 5.11 Throughout the execution of Algorithm 5.1, A  A and D  A, where A is as de ned in De nition 5.1. 2 Proof: Initially A = A and no actions are ever inserted into A, so clearly A  A. Furthermore, all actions inserted into D are rst found in A by the function FindAndRemove, so D  A and thus also D  A. 2 Lemma 5.12 If Pm = ; for some m  0 then [1k=0Pk = [mk=0Pk , where Pk is as de ned in De nition 5.1.

2

Proof: We prove by induction over k that Pk = ; for k  m. Basis: Pm = ; by assumption. Induction: If Pk = ; then Pk+1 = ; by De nition 5.1. Consequently, Pk = ; for k  m, so [1k=0Pk = [mk=0 Pk [ [1k=m+1 Pk = [mk=0 Pk . 2

80

Polynomial planning algorithms

Lemma 5.13 If P0 exists according to De nition 5.1, then P = P0, D = T0 and A = A0 at line 25 of Algorithm 5.1.

2

Proof: This proof concerns the loop in line 15{24 of the algorithm. We rst observe that FindAndRemove is called at most once for each i 2 M, so, since H is unary, no action a 2 A will be searched for in A more than once. Since

actions can be deleted from A only by FindAndRemove, no attempt will ever be made to delete an action already searched for in A. We will now prove that for each a 2 A we have, at line 25, a 2 P if and only if a 2 P0. For the if case, suppose that a 2 P0. Hence, there must be an i 2 M such that x?i 6v x0i and ei(a) = x?i, so FindAndRemove will be called to search for a in A. Since a 2 P0  A, initially A = A, and, by the observation above, a has not been searched for earlier, we have a 2 A. Consequently, a will be found and inserted into P . For the only if case, suppose that a 62 P0, and i 2 M is the state variable a ected by a. Now, since a 62 P0 , either x?i v x0i or ei(a) 6= x?i, so either FindAndRemove is never called to search for a, or one failed search for a is performed. In neither case is a inserted into P . Since P is initially empty and no actions are removed from P , it is obvious that P = P0 in line 25. We furthermore observe that the actions inserted into D and deleted from A are exactly those actions inserted into P . Since, initially, D = ; and A = A and since nothing is inserted into A and nothing is deleted from D, we have D = P = P0 = T0 and A = A ? P = A ? P0 = A0 in line 25. 2

Lemma 5.14 For k  0, if Pk+1 exists according to De nition 5.1 and if P = Pk , D = Tk and A = Ak before the (k + 1)st iteration of the while-loop

in line 26{54 of Algorithm 5.1, then P = Pk+1 , D = Tk+1 and A = Ak+1 after the (k + 1)st iteration of the loop. 2

Proof: We rst observe that the value of P is not changed until after the double for-loop in line 28{52, so P = Pk during this loop. Also Q = ; immediately before the double for-loop. We further observe that no actions are deleted from Q or D and no actions are inserted into A, so Tk  D and A  Ak during the double for-loop. We now prove that for all a00 2 A, a00 2 Pk+1 if and only if a00 2 Q immediately after the double for-loop. For the if case, suppose that a00 2 Q at line 52, then a00 has been inserted into Q in some iteration of the double for-loop. From the algorithm we get that for some i 2 M either bi (a00) v x0i and ei(a00) = x?i = 6 u, or x?i =6 ei(a00) = x0i . The algorithm further gives that there is an action a 2 P such that fi(a) 6v x0i and there is no action a0 2 D such that ei(a0) = fi(a). However, P = Pk throughout the loop and Tk  D, so a 2 Pk and a0 62 Tk . There are two cases depending on if x?i = u or not.

5.1 Planning for the SAS(+)-PUBS class

81

1. Suppose x?i 6= u. Then since Pk+1 exists, there are, by De nition 5.1, two actions a1; a2 2 Pk+1 such that bi(a1) v x0i , u 6= ei(a1) 6= x0i and ei(a2 ) = x0i . The set A contains at most one action of each type and H is post-unique, so, obviously, a00 = a1 or a00 = a2, and thus a00 2 Pk+1 in either case. 2. Suppose x?i = u. Then obviously a00 has been inserted at line 37 and thus bi(a00) v x0i and ei(a00) = fi(a) 6= u. Since Pk+1 exists there is, by De nition 5.1, an action a1 2 Pk+1 such that bi(a1) v x0i and ei(a1) 6= x0i . The set A contains at most one action of each type and since H is postunique it is clear that a00 = a1 and a00 2 Pk+1 . For the only if case, suppose that a00 2 Pk+1 and that i 2 M is the state variable a ected by a00. Since Si is binary, either ei(a00) 6= x0i or ei(a00) = x0i . 1. Suppose ei(a00) 6= x0i . By De nition 5.1, there is an action a 2 Pk such that fi(a) 6v x0i and there is no action a0 2 Tk such that ei(a0) = fi(a). Now, let m be the number such that the value of the loop variables of the double for-loop are a and i respectively during the mth iteration of the double for-loop. Such an m exists since a 2 Pk and P = Pk during the loop. It follows that fi(a) 6v x0i in the mth iteration, so FindActionPost is called to search D for an action a0 such that ei(a0) = fi(a). This search either succeeds or fails. (a) Suppose it fails. Then FindAndRemove is called to search A for an action a1 such that ei(a1) = fi(a). Now, u 6= ei(a00) 6= x0i and binariness gives that ei(a00) = fi(a). It follows because of postuniqueness that type (a1) = type (a00). According to De nition 5.1, a00 2 Ak since a00 2 Pk+1 , Tk  D, and the search for a0 in D failed. Now, A = Ak immediately before the double for-loop, and no actions are deleted from A without being inserted into D. Thus a00 2 A immediately before the mth iteration. Consequently, the search for a1 in A succeeds, and the action is thus inserted into Q, so a1 = a00 2 Q in line 52. (b) Suppose that the search for a0 in D succeeds. Since a0 62 Tk and Tk  D, a0 = a00 must have been inserted into D in the lth iteration of the double for-loop for some l such that 1  l < m. Hence, also a00 2 Q in line 52. In either case we have a00 2 Q in line 52. 2. The case where ei(a00) = x0i is analogous to the previous case. Consequently, a00 2 Pk+1 if and only if a00 2 Q at line 52, so P = Q = Pk+1 immediately after the (k + 1)st iteration of the while-loop. Furthermore, the

82

Polynomial planning algorithms

actions inserted into Q are exactly those actions inserted into D and deleted from A, so D = Tk [Q = Tk [Pk+1 = Tk+1 and A = Ak ?Q = Ak ?Pk+1 = Ak+1 after the (k + 1)st iteration of the while-loop. 2

Lemma 5.15 Given a SAS(+) planning problem  = hhM; S ; Hi; x0; x?i, sup-

pose (x0; x?) as de ned in De nition 5.1 exists. Then D = (x0; x?) after line 54 of Algorithm 5.1. 2

Proof: If  exists, then Pk exists for all k  0. Using Lemma 5.13 as the

basis and Lemma 5.14 as an induction step it is easily proven by induction that P = Pk , D = Tk and A = Ak after the kth iteration of the loop in line 28{52. Moreover, the loop terminates as soon as P = ;. Let m be the smallest k such that Pk = ;, then the loop terminates after the mth iteration, where we understand the case m = 0 as the case where the loop does not iterate at all. Lemma 5.12 and De nition 5.1 give that D = Tk = [mk=0 Pk = [1k=0 Pk = . 2

Lemma 5.16 Consider a SAS(+) planning problem Π = ⟨⟨M, S, H⟩, x0, x*⟩. If P_0 as defined in Definition 5.1 does not exist, then Algorithm 5.1 fails before line 25. □

Proof: If P0 does not exist, this must be because there is an i 2 M such that x?i 6v x0i , but there is no action a 2 A such that bi(a) v x0i and ei(a) = x?i. Since x?i 6v x0i , the FindAndRemove call in line 17 will search for an action a0 2 A such that ei(a0) = x?i, but, since A  A, Si is binary and H is postunique, there can be no such action in A. Hence, FindAndRemove will return nil and the algorithm will fail. 2 Lemma 5.17 Consider a SAS(+) planning problem  = hhM; S ; Hi; x0; x?i.

Suppose there is a k > 0 such that Pk+1 does not exist where Pk+1 is as de ned in De nition 5.1. Then Algorithm 5.1 fails in the (k +1)st iteration of the loop in line 28{52. 2

Proof: If Pk+1 does not exist, then there is an a 2 Pk such that fi(a) 6v x0i , there is no a0 2 Tk such that ei(a0) = fi(a) and there is either no a1 2 Ak such that ei(a1) = fi(a), or no a2 2 Ak such that ei(a2) = x?i. Since Si is binary, H is post-unique and A contains at most one action of each type, we have a0 = a1. We know that P = Pk during the double for-loop, so a 2 P during the whole loop. Suppose there is no a1 2 Ak such that ei(a1) = fi(a). Then there is no


a1 2 A such that ei(a1) = fi(a), since by De nition 5.1 A = Tk [ Ak , and by assumption there is no a0 2 Tk such that ei(a0) = fi(a). Because a 2 P , FindActionPost will be called to search for an action a00 such that ei(a00) = fi (a), but since D  A the search will fail. Consequently, FindAndRemove will be called to search A for such an action, but this search will fail since A  A, so the algorithm will fail. The case when there is no a2 2 Ak such that ei(a2) = x?i is analogous. 2

Lemma 5.18 Consider a SAS(+) planning problem Π = ⟨⟨M, S, H⟩, x0, x*⟩. If Δ(x0, x*) as defined in Definition 5.1 does not exist, then Algorithm 5.1 fails before line 55. □

Proof: If Δ does not exist, then there is a k ≥ 0 such that P_k does not exist, so, by Lemmas 5.16 and 5.17, the algorithm will fail before line 55. □

Lemma 5.19 Consider a SAS(+) planning problem Π = ⟨⟨M, S, H⟩, x0, x*⟩. If Δ(x0, x*) as defined in Definition 5.1 exists, then r+ = ≺ after line 69. □

Proof: By Lemma 5.18 the algorithm will go through lines 55–69 if Δ exists. For each pair a, a' of actions in D, the pair ⟨a, a'⟩ is added to r if either e_i(a) = f_i(a') or u ≠ e_i(a') ≠ f_i(a) ≠ u, corresponding to the two cases in Definition 5.3. Hence r coincides with the relation of Definition 5.3 restricted to D in line 69. By Lemma 5.15, D = Δ after line 54, so r+ = ≺. □

Theorem 5.20 Given a SAS(+)-PUBS planning problem ⟨⟨M, S, H⟩, x0, x*⟩, Algorithm 5.1 returns a maximally parallel and minimal plan from x0 to x* if there exists a plan from x0 to x*, and otherwise it fails. □

Proof: Immediate from Lemmas 5.15, 5.18, 5.19 and Theorem 5.9. □

5.1.3 Complexity analysis

This subsection is devoted to the complexity analysis of Algorithm 5.1. The first result is Theorem 5.22, stating that the time complexity of Algorithm 5.1 is polynomial in the number of state variables. We also analyze the complexity of deciding whether a given SAS(+) problem is in the SAS(+)-PUBS class or not,


and Theorem 5.25 states that the total complexity of both finding out whether the algorithm is applicable and, if so, applying it is polynomial in the number of state variables. Finally, the space complexity is stated in Theorem 5.26. Our goal is only to prove that Algorithm 5.1 is tractable, so no attempts have been made to reduce the complexity figures further. The notation and the concepts used here are defined in Section 2.2.

Lemma 5.21 For a SAS(+)-PB planning problem Π = ⟨⟨M, S, H⟩, x0, x*⟩, O(|H|) ⊆ O(|M|). □

Proof: Because H is post-unique, H contains at most |S_i| action types affecting i for each i ∈ M. Since S_i is binary for all i ∈ M, |H| ≤ 2|M|, from which the lemma trivially follows. □

For simplicity we assume that states are represented as arrays and sets as unordered linked lists. In addition we assume that relations are stored as adjacency lists. Theorem 5.22 Suppose  = hhM; S ; Hi; x0; x?i belongs to the SAS(+)-PUBS class of planning problems. Then the worst-case execution time of Algorithm 5.1 is O(jMj3). 2 Proof: As basic operations we take variable assignment, elementary pointer operations, and comparison of two state variable values, all of which are constant time operations. Since we assume that states are represented as arrays and sets as unordered linked lists, the number of operations used by FindActionPost and FindAndRemove is linear in the size of the set searched and the number of operations used by Insert is constant. 1. Initializing D and P takes a constant number of operations. 2. The for-loop in line 15{24 does jMj iterations and, in the worst case, the loop body searches A for an action, which takes O(jAj) operations. A  A and, by De nition 5.1, jAj = jHj, giving jAj  jAj = jHj, so the search takes O(jAj)  O(jHj)  O(jMj) operations. Hence, the whole loop does O(jMj2) operations in the worst case. 3. To analyze the while-loop in line 26{54, we must rst determine how many iterations that are done by the while-loop and the outer for-loop (iterating over P ). Let m be the smallest k such that Pk = ;. We observe that the while-loop terminates as soon as P = ;, and, by Lemmas 5.13 and 5.14, P = Pk after the kth iteration of the loop, so the loop terminates after the mth iteration. By Lemma 5.12, [mk=0 Pk =   A, so, since all Pk are disjunct, the body of the combined while-loop and


outer for-loop does Pmk=0 jPk j = jj  jAj = jHj iterations. The inner for-loop does jMj turns, so the body of the inner for-loop is executed O(jHjjMj) times. In worst case, the inner loop body searches D once and A twice, but D  A and A  A so the loop body does O(jAj) = 2 O(jHj) operations. Hence, the while-loop does O(jHj jMj)  O(jMj3) operations in the worst case. 4. The double for-loop in line 56{69 does (jDj2jMj)  O(jMj3) operations. This is true since the relation is stored as an adjacency list, and each action with its corresponding set of related actions, is inserted in the list only once, i.e., we do not have to search the list before inserting a new element. 5. Since the relation is stored as an adjacency list, testing if there is a loop in r can be done in O(jDj + jrj)  O(jHj2)  O(jMj2) time by using topological sorting [112]. Clearly, the algorithm does O(jMj3) operations in the worst case.

2

As stated before, the algorithm does not compute the transitive closure of the relation since this is not likely to be of any real interest. However, even if such a computation is added, the complexity bound remains the same. Using Warshall's algorithm, see, for example, [15], the transitive closure can be computed using O(|D|³) ⊆ O(|M|³) operations. Before stating the space complexity in Theorem 5.26, we prove that the complexity of deciding whether Algorithm 5.1 is applicable is polynomial in the number of state variables. This is stated in Theorem 5.25.
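To make the remark on the closure computation concrete, the following is a minimal Python sketch (not part of the thesis) of how the transitive closure of the execution-order relation could be obtained with Warshall's algorithm. The relation is assumed to be given as a set of ordered pairs over the action set D; all names are illustrative.

    def transitive_closure(actions, relation):
        """Warshall's algorithm: O(|D|^3) closure of a relation given as pairs."""
        closure = set(relation)
        for b in actions:                  # intermediate element
            for a in actions:
                if (a, b) in closure:
                    for c in actions:
                        if (b, c) in closure:
                            closure.add((a, c))
        return closure

    # Tiny usage example with three hypothetical actions.
    D = ["a1", "a2", "a3"]
    r = {("a1", "a2"), ("a2", "a3")}
    assert ("a1", "a3") in transitive_closure(D, r)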

Theorem 5.23 Given a set of state variable indices M, a state space S and a set of action types H, checking that hM; S ; Hi is a parsimonious SAS+structure as de ned in De nition 3.5 takes O(jMjjHj2) time. 2 Proof: Checking whether H ful lls (S1){(S4) requires looping through H and M and thus takes O(jHjjMj) time. Checking that (S5) is ful lled requires checking each i 2 M for each pair h; h0 2 H and thus takes O(jHj2jMj) time. Hence, checking that hM; S ; Hi is a parsimonious SAS+-structure takes O(jMjjHj2) time. 2 Theorem 5.24 Deciding whether a given SAS(+) problem is in the SAS(+)2 PUBS class can be done in O(jMjjHj ) time. 2


Proof: Checking whether S is binary can be done by checking for each i 2 M whether Si contains more than two elements or not. This requires O(jMj) operations. Checking whether H is unary can be done by checking for each h 2 H whether there is more than one i 2 M such that ei(h) 6= u. This can be done in O(jHjjMj) time. To test whether H is post-unique can be done by examining each pair h; h0 2 H to see if ei(h) = ei(h0) for some i 2 M. This can be done in O(jHj2jMj) time. Checking whether H is singlevalued requires checking for each pair h; h0 2 H whether u 6= fi(h) 6= fi(h0) 6= u for some i 2 M. This takes O(jHj2jMj) time. Finally, checking if x0 2 S takes O(jMj) time. Consequently, deciding whether a given SAS(+) problem is SAS(+)-PUBS thus can be done in O(jMjjHj2) time. 2 Theorem 5.25 Consider a set of state variable indices M, a state space S , a set of action type H, and two partial states x0; x? 2 S + . It takes O(jMjjHj2) time to decide whether Algorithm 5.1 is applicable and, if so, nd a minimal plan from x0 to x? or report that no plan at all exists from x0 to x?. 2 Proof: Immediate from Theorems 5.22, 5.23 and 5.24.

2
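As an illustration of the membership tests analyzed in Theorems 5.23–5.25, the following Python sketch (not from the thesis) checks the binary, unary, post-unique and single-valued restrictions for a set of action types. Action types are encoded as dictionaries of partial states (variable index to value, with an absent entry playing the role of u); the encoding and all names are assumptions made for the example.

    def is_pubs(M, S, H):
        """Check the SAS(+)-PUBS restrictions along the lines of Theorem 5.24."""
        binary = all(len(S[i]) <= 2 for i in M)                         # O(|M|)
        unary = all(len(h["post"]) == 1 for h in H)                     # O(|H||M|)
        post_unique = all(not (set(h["post"].items()) & set(g["post"].items()))
                          for h in H for g in H if h is not g)          # O(|H|^2 |M|)
        single_valued = all(h["prevail"][i] == g["prevail"][i]
                            for h in H for g in H
                            for i in h["prevail"] if i in g["prevail"]) # O(|H|^2 |M|)
        return binary and unary and post_unique and single_valued

    # A small unary, binary, post-unique set of action types that is not
    # single-valued, since two prevail-conditions clash on variable 2.
    S = {1: {0, 1}, 2: {0, 1}}
    H = [
        {"pre": {1: 0}, "post": {1: 1}, "prevail": {2: 0}},
        {"pre": {1: 1}, "post": {1: 0}, "prevail": {2: 1}},
        {"pre": {2: 0}, "post": {2: 1}, "prevail": {1: 1}},
        {"pre": {2: 1}, "post": {2: 0}, "prevail": {1: 1}},
    ]
    assert not is_pubs([1, 2], S, H)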

Note that if the state variables are binary, then |H| ≤ 2|M| and O(|M||H|²) ⊆ O(|M|³). Finally, we end this section by stating the space complexity of Algorithm 5.1.

Theorem 5.26 Algorithm 5.1 uses O(|M|²) space. □

Proof: We assume that states are represented as arrays of state variable values. We further assume that actions are represented as tuples ⟨l, b(h), e(h), f(h)⟩, that is, we represent the action type by its corresponding pre-, post- and prevail-conditions. Sets are assumed to be represented as linked lists, and relations as adjacency lists. The set of action types uses O(|M|·|H|) ⊆ O(|M|²) space. State and action variables clearly use O(|M|) space. P, Q, D and A are all subsets of the input action set 𝒜, so each of them contains O(|𝒜|) = O(|H|) ⊆ O(|M|) actions and hence occupies O(|M|) space. The relation r can be represented by an adjacency list of size O(|D| + |r|) ⊆ O(|D|²) ⊆ O(|𝒜|²) ⊆ O(|M|²). The total space required by the algorithm is clearly O(|M|²). □

We have thus given a polynomial time algorithm (Algorithm 5.1) for solving SAS(+)-PUBS planning problems. In Chapter 6 we will use this algorithm to develop an algorithm for the SAS(+)-PUB class of planning problems, i.e., problems where we do not require single-valued sets of action types.


5.2 Planning for the SAS-PUS class

This section presents some theoretical results on SAS-PUS planning (see Definition 4.3) along with an algorithm for finding such plans [21, 91]. There are three differences between the SAS-PUS class and the SAS(+)-PUBS class analyzed in Section 5.1. First, in the SAS-PUS class non-binary state variables are allowed. Second, both the initial and the final states must be total states for SAS-PUS problems, i.e., the possibility to have undefined values in the final state is excluded. Third, actions changing a state variable from the undefined value to a defined one are not allowed. This means that the SAS-PUS class is at the same time both more expressive and more restricted. In [18, 19] Backstrom presents a modification to the algorithm presented here. His algorithm can be used for SAS+-PUS problems, thus allowing undefined values in both the initial and the final states, as well as actions changing an undefined state variable to a defined value. Furthermore, by more explicit bookkeeping, his algorithm shows a better worst-case complexity than the algorithm presented here.

The first subsection introduces some new concepts needed for the rest of the section. Then follows a criterion for the existence of minimal and maximally parallel plans for the SAS-PUS class, together with a correctness proof of this criterion. The third subsection presents an algorithm for finding plans according to the existence criterion, followed by a correctness proof for the algorithm. Finally, we analyze the complexity of planning for the SAS-PUS class. We recommend the reader to first take a look at the algorithm (Section 5.2.3) and Example 5.6 before looking at the rest of this section.

In this section, we implicitly assume that we are talking about the SAS-PUS class, and when talking about the existence of actions or sets of actions with certain properties we mean actions of types in H.

5.2.1 Preliminaries

We first introduce the concept of i-chains. An i-chain is a sequence of actions with the combined effect of changing a certain state variable from one value to another. This concept will be important when developing minimal plans for the SAS-PUS class.

Definition 5.27 A sequence σ = ⟨a1, ..., a_m⟩ of actions is an i-chain from x to x' if either of the two cases below is true.
1. (a) x_i = b_i(a1), (b) x'_i = e_i(a_m), and (c) e_i(a_k) = b_i(a_{k+1}) for 1 ≤ k < m.
2. (a) x_i = x'_i, and (b) σ is the empty sequence.


The empty sequence is called the empty i-chain. For any i-chain σ, ≺_σ denotes the corresponding total order on its actions, and we define b(σ) = b(a1) and e(σ) = e(a_m). An i-chain σ from x to x' is a minimal i-chain from x to x' if there is no other i-chain ρ from x to x' such that |ρ| < |σ|. An i-chain σ passes the state variable value x_i if there is a k such that 1 ≤ k < m and e_i(a_k) = x_i. Given two i-chains σ = ⟨a1, ..., a_m⟩ from x to x' and ρ = ⟨b1, ..., b_n⟩ from x' to x'', the concatenation of σ and ρ is denoted (σ; ρ) and is defined as ⟨a1, ..., a_m, b1, ..., b_n⟩. □

Whenever convenient, we will use decreasing index numbers. We will also frequently view an i-chain as the set of its actions. For example, if σ = ⟨a1, ..., a_m⟩ is an i-chain, then σ can denote either the sequence ⟨a1, ..., a_m⟩ or the set {a1, ..., a_m}, or even both, depending on context. Sometimes, we will even say that a set of actions is an i-chain, meaning, of course, that the set ordered under some implicit ordering is an i-chain. Hopefully, neither of these conventions will cause any trouble for the benevolent reader.

Obviously every i-chain contains a minimal i-chain. This is the same as stating that every non-minimal i-chain contains a loop.

Proposition 5.28 If a set of actions Ω contains an i-chain from x to x' then Ω contains a minimal i-chain from x to x'. □

Since the set of action types is post-unique, all minimal i-chains must be type isomorphic, i.e., only their action labels are different.

Theorem 5.29 Suppose ⟨M, S, H⟩ is a post-unique SAS+-structure and x, x' ∈ S. If σ = ⟨a1, ..., a_m⟩ and ρ = ⟨b1, ..., b_n⟩ are both minimal i-chains from x to x', then m = n and type(a_k) = type(b_k) for 1 ≤ k ≤ m.

2

Proof: Definition 5.27 gives m = n. Suppose the theorem is false and let l be the greatest k ≤ m such that type(a_k) ≠ type(b_k). Obviously type(a_k) = type(b_k) for l < k ≤ m. Definition 5.27 gives e_i(a_l) = b_i(a_{l+1}) and e_i(b_l) = b_i(b_{l+1}) = b_i(a_{l+1}) if l < m, and e_i(a_l) = e_i(b_l) = x'_i if l = m. In both cases, e_i(a_l) = e_i(b_l), so post-uniqueness gives type(a_l) = type(b_l), which contradicts the assumption. Consequently, the theorem holds. □
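The following Python sketch (not part of the thesis) restates Definition 5.27 operationally. Actions are assumed to be encoded as dictionaries with partial pre- and post-conditions, where an absent entry plays the role of the undefined value u; the names are illustrative only.

    def is_i_chain(seq, i, x, x_prime):
        """Check Definition 5.27: does seq change state variable i
        from x[i] to x_prime[i]?"""
        if not seq:                                      # case 2: the empty i-chain
            return x[i] == x_prime[i]
        if seq[0]["pre"].get(i) != x[i]:                 # 1(a)
            return False
        if seq[-1]["post"].get(i) != x_prime[i]:         # 1(b)
            return False
        # 1(c): consecutive actions must match on variable i
        return all(seq[k]["post"].get(i) == seq[k + 1]["pre"].get(i)
                   for k in range(len(seq) - 1))

    # Usage with two hypothetical actions affecting variable 0.
    a1 = {"pre": {0: 0}, "post": {0: 1}}
    a2 = {"pre": {0: 1}, "post": {0: 2}}
    assert is_i_chain([a1, a2], 0, {0: 0}, {0: 2})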

Theorem 5.30 If σ is an i-chain from x to x' and ρ is an i-chain from x' to x'', then (σ; ρ) is an i-chain from x to x''. □

Proof: Immediate from Definition 5.27. □

Finally, the following definition will simplify the proofs later on.


Definition 5.31 If Ω is a set of actions, a ∈ Ω, Γ ⊆ Ω and ≺ is a relation on Ω, we define:
1. a ≺ Γ if a ≺ a' for all a' ∈ Γ.
2. Γ ≺ a if a' ≺ a for all a' ∈ Γ.

2

5.2.2 Existence of SAS-PUS plans

Using the concept of i-chains defined in the previous subsection we can now give a specification of minimal and maximally parallel plans for the SAS-PUS class (Definition 5.32). We also prove that this specifies minimal and maximally parallel plans (Theorem 5.35), and in Section 5.2.3 we give an algorithm for finding such plans (Algorithm 5.2). First we give the specification of the set of necessary actions (Δ) and the execution order (≺).

Definition 5.32 Given a SAS-PUS problem Π = ⟨⟨M, S, H⟩, x0, x*⟩ we define the set of necessary actions Δ and an ordering relation ≺ defined on Δ in the following way.
1. For each i ∈ M there is an i-chain σ_i from x0 to x* such that σ_i ⊆ Δ[i] and ≺_{σ_i} ⊆ ≺.
2. For each a ∈ Δ and for each i ∈ M, if f_i(a) ⋢ x0_i then there is an i-chain α_i from x0 to f(a) such that α_i ⊆ Δ[i], ≺_{α_i} ⊆ ≺, and α_i ≺ a.
3. For each a ∈ Δ and for each i ∈ M, if f_i(a) ⋢ x*_i then there is an i-chain ω_i from f(a) to x* such that ω_i ⊆ Δ[i], ≺_{ω_i} ⊆ ≺, and a ≺ ω_i.
4. Δ is a minimal set of actions satisfying parts 1–3.
5. ≺ is a minimal partial order on Δ satisfying parts 1–3. □

Before proving that the definition above defines a minimal and maximally parallel plan from x0 to x*, we illustrate the definition with a simple example.

Example 5.5 Let M = {1, 2, 3}, S1 = S2 = {0, 1, 2}, S3 = {0, 1}, and let the set of action types H be given by Table 5.1. Suppose the initial state is x0 = (0, 0, 0) and that the final state is x* = (0, 1, 1). Then a set of actions fulfilling Definition 5.32 is Δ = {a1, a2, a3, a4, a5, a6, a7, a12} where type(a_k) = h_k for 1 ≤ k ≤ 7 and type(a12) = h4. The relation ≺ is given in Figure 5.5. We see that ≺ is a partial order, and hence ⟨Δ, ≺⟩ is a minimal

action type   pre        post       prevail
h1            (0,u,u)    (1,u,u)    (u,u,u)
h2            (1,u,u)    (2,u,u)    (u,u,u)
h3            (2,u,u)    (0,u,u)    (u,u,u)
h4            (u,0,u)    (u,1,u)    (u,u,u)
h5            (u,1,u)    (u,2,u)    (1,u,u)
h6            (u,2,u)    (u,0,u)    (u,u,u)
h7            (u,u,0)    (u,u,1)    (u,2,u)
h8            (u,u,1)    (u,u,0)    (u,u,u)

Table 5.1: Pre-, post- and prevail-conditions for Example 5.5.

and maximally parallel plan from x0 to x*. The i-chains that form the plan are the following. For i = 1 there is an i-chain σ_1 = ⟨a1, a2, a3⟩. This i-chain does not permanently change anything since x0_1 = x*_1. It consists of the two i-chains α_1 = ⟨a1⟩ and ω_1 = ⟨a2, a3⟩. These i-chains are needed because f_1(a5) = 1, and thus α_1 ≺ a5 and a5 ≺ ω_1. For i = 2 there is an i-chain σ_2 = ⟨a4, a5, a6, a12⟩ from x0 to x*. If we just look at x0 and x*, one might suggest that ⟨a4⟩ would be a minimal i-chain from x0 to x*. This is true, but this i-chain does not satisfy the other requirements in Definition 5.32. Because f_2(a7) = 2 there must be an i-chain from x0 to f(a7) (part 2 above), and one from f(a7) to x* (part 3 above). These two i-chains are α_2 = ⟨a4, a5⟩ and ω_2 = ⟨a6, a12⟩, and σ_2 = (α_2; ω_2). In other words we must temporarily change x_2 from its desired value in order to satisfy the prevail-condition of a7. Finally, for i = 3 there is an i-chain σ_3 = ⟨a7⟩ from x0 to x*. □

a4

a5

a2

a3

a7

a6

a12

Figure 5.5: The relation graph for the relation  in Example 5.5.

Transitive arcs are omitted. The dashed line indicates the i-chain 2, and the dotted indicates the i-chain 2. It is obvious that all tuples ful lling De nition 5.32 are isomorphic. Lemma 5.33 If two tuples h ; i and h; i are isomorphic and one of them ful lls De nition 5.32, then both do. 2


Proof: Immediate from De nitions 5.5 and 5.32.

2

The subset of Δ containing all actions affecting the ith state variable is an i-chain from x0 to x*, as shown in Lemma 5.34.

Lemma 5.34 Suppose that  = hhM; S ; Hix0; x?i is in the SAS-PUS class, and that h; i is as de ned in De nition 5.32. Then for each i 2 M, [i] is a, possibly empty, i-chain from x0 to x? containing at most two actions of each type in H. 2 Proof: Choose an arbitrary i 2 M. First suppose there is no a 2  such that fi (a) = 6 u. Part 1 of De nition 5.32 requires [i] to contain an i-chain  from

x0 to x? and, since [i] is minimal,  must be minimal. Furthermore, there is no a 2  such that fi(a) 6= u so parts 2 and 3 are trivially ful lled and minimality gives [i] = . Now suppose there is some a 2  such that fi(a) 6= u. Single-valuedness gives fi(a0) = fi(a) for all a0 2  such that fi(a0) 6= u, so parts 2 and 3 only require [i] to contain two minimal i-chains , from x0 to f (a), and , from f (a) to x?. Obviously, ( ; ) is an i-chain from x0 to x? so part 1 is trivially ful lled and minimality give [i] = ( ;  ). In either case, [i] consists of at most two minimal i-chains and can, thus, contain at most two actions of each type in H. 2 The main theorem in this section is Theorem 5.35 below. It states that

h; i is a minimal and maximally parallel plan if and only if there is any plan

solving the stated SAS-PUS problem. Thus, either it is possible to compute  and  ful lling De nition 5.32, or it is impossible to transform the given initial state into the desired nal state. Theorem 5.35 Suppose  = hhM; S ; Hi; x0; x?i is a SAS-PUS planning problem. Then h; i as de ned according to De nition 5.32 is a minimal and maximally parallel plan from x0 to x? if and only if there is any plan from x0 to x?. 2 Proof: The proof is given in Appendix C. 2

5.2.3 The SAS-PUS planning algorithm

This subsection presents an algorithm for finding minimal, maximally parallel plans for SAS-PUS problems. The algorithm works by finding ⟨Δ, ≺⟩ according to Definition 5.32, and is proven to be correct. The complexity of the algorithm is proven in the next subsection.


Before stating the algorithm we give some functions and procedures that are used in the algorithm.

Definition 5.36 We assume that the following functions and procedures are available:

Insert(a,S): Inserts the action a into the set S.
Copy(S): Returns a copy of the set S.
Concat(a,L): Adds the action a to the front of the list L.
First(L): Returns the first action of the list L.
Last(L): Returns the last action of the list L.
SelectAndRemove(S): Removes an arbitrary action from the set S and returns it.
FindActionPost(A,i,x): Searches the set A of actions for a member a such that e_i(a) = x_i, which is returned if it exists. If such an a does not exist, the value nil is returned.
FindActionPre(A,i,x): Like FindActionPost but tests for b_i(a) = x_i.
FindAndRemove(A,i,x): Like FindActionPost but also removes a from A.
Order(a,a',r): Adds a r a' to the relation r.

2
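As a concrete illustration, the primitives of Definition 5.36 could be realized as below. This is a hedged Python sketch with an assumed dictionary encoding of actions and a list-based relation, not the thesis' own data structures.

    def find_action_post(A, i, x):
        """FindActionPost: first action a in A with e_i(a) = x_i, or None (nil)."""
        return next((a for a in A if a["post"].get(i) == x[i]), None)

    def find_action_pre(A, i, x):
        """FindActionPre: like FindActionPost but tests b_i(a) = x_i."""
        return next((a for a in A if a["pre"].get(i) == x[i]), None)

    def find_and_remove(A, i, x):
        """FindAndRemove: like FindActionPost but also removes the action from A."""
        a = find_action_post(A, i, x)
        if a is not None:
            A.remove(a)
        return a

    def order(a, b, r):
        """Order: add the pair (a, b), meaning a before b, to the relation r."""
        r.append((a, b))

    # Small usage example.
    A = [{"label": "a1", "pre": {1: 0}, "post": {1: 1}}]
    assert find_and_remove(A, 1, {1: 1})["label"] == "a1" and A == []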

Algorithm 5.2
Input: M, a set of state variable indices, A, a set containing two actions of each type in H, and x0 and x*, the initial and final states respectively.
Output: D, a set of actions, and r, a relation on D.

 1  Procedure PlanPUS(x0, x*, M, A);
 2    x0, x* : state;
 3    M : set of state variable indices;
 4    A : set of actions;
 5  var
 6    i : state variable index;
 7    a, a' : action;
 8    D, P, T : set of actions;
 9    L : list of actions;
10    r : relation on D;
11
12  Procedure BuildChain(xF, xT, i, A, D, T, r);
13    xF, xT : state;
14    i : state variable index;
15    A : set of actions;
16    D, T : set of actions (in/out parameter);
17    r : relation (in/out parameter);
18  var
19    x : state;
20    a, a' : action;
21    L : list of actions;
22
23  begin {BuildChain}
24    L := nil; a' := nil; x := xT;
25    while x_i ≠ xF_i do
26      a := FindAndRemove(A, i, x);
27      if a = nil then fail;
28      else
29        Insert(a, D); Insert(a, T); Concat(a, L);
30        if a' ≠ nil then Order(a, a', r);
31        endif;
32        a' := a; x := b(a);
33      endif;
34    endwhile;
35    return L;
36  end; {BuildChain}
37
38  begin {PlanPUS}
39    D := ∅; T := ∅; r := ∅;
40    {Part 1 in Definition 5.32}
41    for i ∈ M do
42      {Constructing σ_i}
43      L := BuildChain(x0, x*, i, A, D, T, r);
44    endfor;
45    P := Copy(D);
46    while T ≠ ∅ do
47      a := SelectAndRemove(T);
48      for i ∈ M do
49        {Part 2 in Definition 5.32}
50        if f_i(a) ⋢ x0_i then
51          {Test if α_i already exists}
52          a' := FindActionPost(D, i, f(a));
53          {If α_i exists, order α_i before a}
54          if a' ≠ nil then Order(a', a, r);
55          else
56            {Constructing α_i}
57            L := BuildChain(x0, f(a), i, A, D, T, r);
58            Order(Last(L), a, r);
59          endif;
60        endif;
61        {Part 3 in Definition 5.32}
62        if f_i(a) ⋢ x*_i then
63          {Test if ω_i already exists}
64          a' := FindActionPre(D, i, f(a));
65          {If ω_i exists, order ω_i after a}
66          if a' ≠ nil then Order(a, a', r);
67          else
68            {Constructing ω_i}
69            L := BuildChain(f(a), x0, i, A, D, T, r);
70            Order(a, First(L), r);
71            a' := FindActionPre(P, i, x0);
72            if a' ≠ nil then Order(Last(L), a', r);
73            endif;
74          endif;
75        endif;
76      endfor;
77    endwhile;
78    {Test if r is a partial order}
79    if there is a loop in r then fail;
80    endif;
81    return ⟨D, r⟩;
82  end; {PlanPUS}

2

The main variables are D, T , and r. D is a non-decreasing set of actions, which eventually will be the set of actions in the plan (i.e. the set  in De nition 5.32), if the algorithm succeeds. Every action ever inserted into D is also inserted into T and the use of this set will become evident below. Furthermore r is a relation on D and it will eventually be the execution order of the plan (i.e. r+ =  as de ned in De nition 5.32). The algorithm does not compute the transitive closure of r since this is not likely to be of any practical interest. However, this may of course be added if so required. The function BuildChain has the purpose of trying to nd a, possibly empty, i-chain in A which, if executable, changes the ith state variable from xFi to xTi .


If such a sequence is found, it is removed from A, inserted into D and T , and r is extended to include the implicit order of the i-chain. Otherwise, the algorithm fails. The main body of the algorithm rst (lines 41{44) calls BuildChain once for each state variable i to nd an i-chain changing the ith state variable from x0i to x?i. Afterwards, D contains all actions primarily needed to change x0 into x?, but of course all of these actions do not necessarily have their prevailconditions satis ed. Line 45 stores this set of primary actions in P for future use. The purpose of the main while-loop (lines 46{77) is to assure that all actions have their prevail-conditions satis ed, and that possible side-e ects of actions added for this reason are undone. It works by removing one action at a time from T and guaranteeing that it is executable. Since all actions ever inserted into D, even by the while-loop itself, are also inserted into T and thus eventually removed and processed by the body of the while-loop, all actions in the nal plan will have their prevail-conditions satis ed. For each action a in T and for each state variable i, the rst half of the while-loop body (lines 50{60) tests whether the prevail-condition of a is satis ed in x0. Nothing needs to be done if this is the case but, otherwise, the algorithm tests if there is already an i-chain in D that changes the ith state variable from x0 to f (a). If there is such an i-chain, it is ordered before a; otherwise, BuildChain is called to nd such an i-chain in A. Since the actions in this i-chain might interfere with the primary actions changing x0 into x?, we must also assure that fi(a) is eventually changed into x?i. This is secured by the second half of the while-loop body (lines 62{75). This half is analogous with the rst but with one exception: if an i-chain is inserted, it goes from f (a) to x0, not x?, and if there is already an i-chain from x0 to x? in D, it is ordered after the new i-chain. The reason for this is that either x?i = x0i , and the i-chain is trivially correct, or x?i 6= x0i in which case there must already be an i-chain from x0 to x? in D. In the latter case, the concatenation of the two i-chains constitute an i-chain from f (a) to x? and it is minimal since it will be proven later that any i-chain from f (a) to x? must pass x0. The reader also deserves an explanation of why the algorithm only looks for the rst or last action of an i-chain when testing if it is in D already. Since the algorithm nds only minimal plans, there can be no actions in D satisfying the test criterion unless being part of an i-chain. The last part of the algorithm (line 79) tests if r contains cycles, in which case the algorithm fails as r+ cannot be a partial order. In Theorem 5.37 we show that Algorithm 5.2 is correct, i.e., that it returns a minimal and maximally parallel plan if and only if a plan exists. The proof is quite lengthy, and is given in Appendix C.
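The following Python sketch (not the thesis' implementation) mirrors the backward construction performed by BuildChain in lines 24–35 of Algorithm 5.2: starting from the target value xT_i, it repeatedly removes from A an action whose post-condition produces the current value, records the implicit i-chain order, and steps back through that action's pre-condition. The dictionary encoding of actions is an assumption made for the example; the real procedure also threads the sets D and T and the relation r.

    def build_chain(x_from, x_to, i, available):
        """Sketch of BuildChain: find an i-chain from x_from[i] to x_to[i],
        removing used actions from `available`, or raise on failure."""
        chain = []
        order = []                          # the implicit i-chain order (later part of r)
        value = x_to[i]
        while value != x_from[i]:
            # FindAndRemove: an action whose post-condition sets variable i to `value`
            match = next((a for a in available if a["post"].get(i) == value), None)
            if match is None:
                raise RuntimeError("fail: no i-chain from x_from to x_to")
            available.remove(match)
            if chain:
                order.append((match, chain[0]))   # Order: match before the previous action
            chain.insert(0, match)                # Concat: add to the front of the chain
            value = match["pre"].get(i)
        return chain, order

Applied to an encoding of Example 5.6 (a possible encoding is sketched after that example below), a call such as build_chain(x0, x_star, 2, available) would be expected to return the i-chain corresponding to ⟨a3, a4⟩.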

Theorem 5.37 Suppose Π = ⟨⟨M, S, H⟩, x0, x*⟩ is a SAS-PUS planning problem. If there is a plan from x0 to x*, then Algorithm 5.2 returns a tuple ⟨D, r⟩ such that ⟨D, r+⟩ is a minimal and maximally parallel plan from x0 to x* according to Definition 5.32, and otherwise it fails. □

Proof: The proof can be found in Appendix C. □

To illustrate how the algorithm works we apply it to a simple example.

Example 5.6 Consider a much simplified version of the LEGO car assembly line presented in Example 3.5. The task is still to assemble a LEGO car as was shown in Figure 3.6, but the assembly line is modified in the following way. The assembling takes place at one work-station only, and there is no separate work-station where the parts are pressed tight together. Instead this is assumed to be done at the same time as the top or the wheels are put on the chassis. Additionally, the different parts (chassis, top, wheels) must be fetched from different storages as shown in Figure 5.6.


Figure 5.6: The modified LEGO car assembly line in Example 5.6.

We introduce three state variables such that

x1 = 0 if the chassis is in the chassis storage, 1 if the chassis is at the work-station;
x2 = 0 if the top is in the top storage, 1 if the top is at the work-station, 2 if there is a top on the chassis;
x3 = 0 if the wheels are in the wheels storage, 1 if the wheels are at the work-station, 2 if there are wheels on the chassis;

and thus S1 = {0, 1}, S2 = S3 = {0, 1, 2}, and M = {1, 2, 3}. The set of possible action types H is defined in Table 5.2. In the initial state all parts are in storage, which means that x0 = (0, 0, 0), and in the final state an assembled car should be in the chassis storage, so x* = (0, 2, 2). Obviously

h    b(h)      e(h)      f(h)      Explanation
h1   (0,u,u)   (1,u,u)   (u,u,u)   Move chassis to work-station
h2   (1,u,u)   (0,u,u)   (u,u,u)   Move chassis to chassis storage
h3   (u,0,u)   (u,1,u)   (u,u,u)   Move top to work-station
h4   (u,1,u)   (u,2,u)   (1,u,u)   Mount top
h5   (u,u,0)   (u,u,1)   (u,u,u)   Move wheels to work-station
h6   (u,u,1)   (u,u,2)   (1,u,u)   Mount wheels

Table 5.2: Action types for the modified LEGO car assembly line in Example 5.6.

⟨⟨M, S, H⟩, x0, x*⟩ is a SAS-PUS planning problem according to Definitions 4.1 and 4.3. Consequently Algorithm 5.2 can be applied. First we create a set of actions A consisting of two actions of each type. Thus A = {a1, a2, ..., a12} where type(a_k) = type(a_{k+6}) = h_k for 1 ≤ k ≤ 6. Initially D = P = T = r = ∅. The for-loop in lines 41–44 then calls BuildChain once for each i ∈ M to build an i-chain from x0_i to x*_i. The first call returns the empty i-chain since x0_1 = x*_1. However, x0_2 ≠ x*_2 and x0_3 ≠ x*_3, so BuildChain returns the i-chains a3, a4 and a5, a6, respectively (this is an arbitrary choice since there are two actions of each type in A; we will make more such arbitrary choices without comment). BuildChain also inserts the i-chain orders into r, and the value of D is saved in P for future use. We now have:

D = T = P = {a3, a4, a5, a6}
r = {⟨a3, a4⟩, ⟨a5, a6⟩}

The main while-loop in lines 46–77 now removes one action at a time from T. Let a3 be removed first. Since f_i(a3) = u for all i ∈ M, a3 falls straight through the loop, so T = {a4, a5, a6} and nothing else happens. Now a4 is removed. Since x0_1 ≠ f_1(a4) and there is no action a in D such that e_1(a) = f_1(a4), BuildChain is called to find an i-chain from x0_1 to f_1(a4). It returns the singleton i-chain a1, which it orders before a4. Furthermore, f_1(a4) ≠ x*_1 and there is no action a in D such that b_1(a) = f_1(a4), so BuildChain is called to find an i-chain from f_1(a4) to x*_1. It returns the singleton i-chain a2, which it orders after a4. Nothing more happens since f_2(a4) = f_3(a4) = u. We now have:

D = {a1, a2, a3, a4, a5, a6}
T = {a1, a2, a5, a6}
P = {a3, a4, a5, a6}


r = {⟨a3, a4⟩, ⟨a5, a6⟩, ⟨a1, a4⟩, ⟨a4, a2⟩}

The actions a1, a2, and a5 all fall straight through the loop since their prevail-conditions are undefined for all i. We have T = {a6} and all other variables unchanged. Finally, a6 is removed. We have x0_1 ≠ f_1(a6), and a1 ∈ D satisfies e_1(a1) = f_1(a6), so BuildChain is not called, but a1 is simply ordered before a6. We also have f_1(a6) ≠ x*_1 and a2 ∈ D satisfies b_1(a2) = f_1(a6), so BuildChain is not called but a2 is simply ordered after a6. Nothing more happens since f_2(a6) = f_3(a6) = u, and we now have:

D = {a1, a2, a3, a4, a5, a6}
T = ∅
P = {a3, a4, a5, a6}
r = {⟨a3, a4⟩, ⟨a5, a6⟩, ⟨a1, a4⟩, ⟨a4, a2⟩, ⟨a1, a6⟩, ⟨a6, a2⟩}

The relation r is then tested for cycles and, since there are none, the algorithm returns the tuple ⟨D, r⟩. According to Theorem 5.37 this is a minimal and maximally parallel plan satisfying Definition 5.32, i.e., D = Δ and r+ = ≺. The resulting relation ≺ = r+ is shown in Figure 5.7.

Figure 5.7: The relation graph for ≺ in Example 5.6. For 1 ≤ k ≤ 6, type(a_k) = h_k. Transitive arcs are omitted. □
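To connect Example 5.6 with the sketches above, the following self-contained Python fragment encodes the action types of Table 5.2 and checks that one linearization of the computed partial order r indeed transforms x0 = (0,0,0) into x* = (0,2,2). The encoding (dictionaries indexed by the state variables 1–3, and action-type names used directly in place of the labelled actions a1–a6) is an assumption made for the illustration.

    # Action types h1-h6 from Table 5.2 as (pre, post, prevail) over variables 1..3.
    H = {
        "h1": ({1: 0}, {1: 1}, {}),        # move chassis to work-station
        "h2": ({1: 1}, {1: 0}, {}),        # move chassis back to storage
        "h3": ({2: 0}, {2: 1}, {}),        # move top to work-station
        "h4": ({2: 1}, {2: 2}, {1: 1}),    # mount top (chassis must be present)
        "h5": ({3: 0}, {3: 1}, {}),        # move wheels to work-station
        "h6": ({3: 1}, {3: 2}, {1: 1}),    # mount wheels (chassis must be present)
    }

    def execute(plan, state):
        """Apply a totally ordered plan, checking pre- and prevail-conditions."""
        state = dict(state)
        for name in plan:
            pre, post, prevail = H[name]
            assert all(state[i] == v for i, v in pre.items()), f"pre of {name} violated"
            assert all(state[i] == v for i, v in prevail.items()), f"prevail of {name} violated"
            state.update(post)
        return state

    x0 = {1: 0, 2: 0, 3: 0}
    x_star = {1: 0, 2: 2, 3: 2}
    # One linearization respecting the partial order r computed in Example 5.6.
    linearization = ["h3", "h5", "h1", "h4", "h6", "h2"]
    assert execute(linearization, x0) == x_star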

5.2.4 Complexity analysis

In this subsection, we prove that the SAS-PUS class is tractable by showing that Algorithm 5.2 runs in polynomial time. The time complexity of the algorithm is stated in Theorem 5.41 and the space complexity in Theorem 5.45. We also prove that the complexity of deciding whether the algorithm is applicable is even lower than the complexity of the algorithm (Theorem 5.44).


We do not attempt to prove an optimal bound on the complexity, so we make the following assumptions about data representation: states are represented as arrays of state variable values, action types as triples of states, actions as tuples of labels and action types, sets as lists of pointers to actions, and relations as adjacency lists. We will make frequent use of the following observations without explicitly referring to them:

Lemma 5.38 Given a SAS-PUS structure ⟨M, S, H⟩ the following is true:
1. O(|H|) ⊆ O(Σ_{i∈M} |S_i|)
2. T ⊆ D ⊆ 𝒜
3. A ⊆ 𝒜
4. r ⊆ D²
5. |𝒜| ≤ 2 Σ_{i∈M} |S_i|

2

Proof: 1 follows from post-uniqueness, 2{4 are obvious from the algorithm, and 5 follows from post-uniqueness and Lemma 5.34. 2 Lemma 5.39 Assuming that r is a relation over the set D, the functions and

procedures in De nition 5.36 have the following worst case time complexities: First(L), Last(L), Concat(L), Insert(a,S), and SelectAndRemove(S) runs in O(1) time; and Copy(S) runs in O(jS j) time; and Order(a,a',r) runs in O(jDj) time; 2

Proof: The complexity gures are obvious except for SelectAndRemove. Se-

lectAndRemove can choose any action in S , so letting it always choose the rst action gives the stated complexity. 2

Lemma 5.40 BuildChain runs in O(jSi jjAj) time when called to build an i-chain for some state variable i 2 M. 2 Proof: Each turn through the while-loop, BuildChain tries to nd an action a 2 A a ecting state variable i. Since A  A, Lemma 5.34 and post-uniqueness give that there are at most (jSij) such actions in A. Furthermore, each such action is removed from A, so the while-loop makes O(jSi j) turns. The


body of the while-loop is dominated by the calls to FindAndRemove, Insert and Order. FindAndRemove takes O(jAj)  O(jAj) time, Insert(a,D) takes O(jDj)  O(jAj) time, Insert(a,T) takes O(1) time, and Order(a,a',r) takes O(jDj)  O(jAj) time since r is a relation over D. Hence, BuildChain runs in O(jSi jjAj) time. 2

Theorem 5.41 Algorithm 5.2 runs in O((Σ_{i∈M} |S_i|)³) time.

2

Proof: The main body of the algorithm can be divided into three consecutive parts. The rst part, lines 39{45, starts by initializing some variables, which takes O(1) time. BuildChain Pis then called once for every i 2 M, resulting in a total complexity of O( i2M jSijjAj) which, by Lemma 5.38, is in O((Pi2M jSij)2).P In line 45, Copy is called with the set D  A which takes O(jAj)  O( i2M jSij) time. The second part, lines 46{77, consists of a while-loop that removes one action from T each turn through the loop, so the while-loop makes O(jT j)  O(jAj) turns. Each turn through the while-loop, the body of the for-loop is executed once for each i 2 M. The rst half of

the body, lines 50{60, is dominated by the calls to FindActionPost, Order, and BuildChain. FindActionPost called with the set D runs in O(jDj)  O(jAj) time; Order called with the relation r takes O(jDj)  O(jAj) time since r is a relation over D; and BuildChain runs in O(jSijjAj) time, thus dominating the other two gures. The second half of the loop body can be analyzed analogously, P resulting in the same complexity gure. Hence, the while-loop runs in P 2 O(jAj i2M jSij)  O(( i2M jSij)3) time. The test for acyclicity of r can by done using an algorithm presented by Mehlhorn [112]. The complexity of this algorithm is O(jDj + jrj)  O(jDj2) for topological sorting and acyclicity test of a digraph represented as an adjacency list. He also presents an O(jDj3) time algorithm for nding the transitive closure of a topologically sorted acyclic digraph. Thus the time complexity of the algorithm will not change if a computation of the transitive closure is added. Consequently the test at line 79 takesP O(jDj3)  O((Pi2M jSij)3) time. Clearly, the whole algorithm runs in 2 O(( i2M jSij)3) time. Letting n denote jMj and k denote maxi2M jSij we get the simpler but somewhat looser bound O(n3k3).
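The acyclicity test in line 79 can be realized with topological sorting, as in the Mehlhorn reference. The following Python sketch (illustrative only, not the thesis' code) uses Kahn's algorithm over an adjacency-list representation; the relation r from Example 5.6 is used as test data.

    from collections import deque

    def is_acyclic(nodes, adjacency):
        """Kahn's algorithm: the digraph is acyclic iff every node can be
        output in topological order; this runs in O(|D| + |r|) time."""
        indegree = {n: 0 for n in nodes}
        for n in nodes:
            for m in adjacency.get(n, ()):
                indegree[m] += 1
        queue = deque(n for n in nodes if indegree[n] == 0)
        visited = 0
        while queue:
            n = queue.popleft()
            visited += 1
            for m in adjacency.get(n, ()):
                indegree[m] -= 1
                if indegree[m] == 0:
                    queue.append(m)
        return visited == len(nodes)

    # The relation r from Example 5.6 is acyclic; adding (a2, a1) would close a cycle.
    r = {"a3": ["a4"], "a5": ["a6"], "a1": ["a4", "a6"], "a4": ["a2"], "a6": ["a2"]}
    assert is_acyclic(["a1", "a2", "a3", "a4", "a5", "a6"], r)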

Theorem 5.42 Given a set of state variable indices M, a state space S and a set of action types H, checking that hM; S ; Hi is a parsimonious SAS-structure 2 as de ned in De nition 4.1 takes O(jMj(Pi2M jSij)2) time.


Proof: First we must check if ⟨M, S, H⟩ is a parsimonious SAS+-structure. This can be done in O(|M||H|²) time according to Theorem 5.23. Checking whether dim(b(h)) = dim(e(h)) for all h ∈ H requires looping through H and M and thus takes O(|H||M|) time. Hence, checking that ⟨M, S, H⟩ is a parsimonious SAS-structure takes O(|M||H|²) ⊆ O(|M|(Σ_{i∈M} |S_i|)²) time. □

Theorem 5.43 Deciding whether a given SAS problem is in the SAS-PUS class can be done in O(|M|(Σ_{i∈M} |S_i|)²) time. □

Proof: Immediate from the proof of Theorem 5.24. □

Theorem 5.44 Given a planning problem Π = ⟨⟨M, S, H⟩, x0, x*⟩ it takes O((Σ_{i∈M} |S_i|)³) time to decide whether Algorithm 5.2 is applicable and, if so, find a maximally parallel, minimal plan from x0 to x* or report that no plan at all exists from x0 to x*. □

Proof: Immediate from Theorems 5.37, 5.41, 5.42, and 5.43.

2

Theorem 5.45 Algorithm 5.2 requires O((Σ_{i∈M} |S_i|)²) space. □

Proof: The set of action types uses O(|M|·|H|) ⊆ O(|M| Σ_{i∈M} |S_i|) space. States take O(|M|) space; the sets A, D, P, and T and the list L take O(|𝒜|) ⊆ O(Σ_{i∈M} |S_i|) space; and the relation r takes O(|D| + |r|) space. Finally, the algorithm presented by Mehlhorn [112] for topological sorting takes O(|D| + |r|) space. Since O(|D| + |r|) ⊆ O(|D|²) ⊆ O((Σ_{i∈M} |S_i|)²), the theorem follows. □

Defining n and k as before, a looser but simpler bound on the space complexity is O(n²k²).


6 Planning for the SAS(+)-PUB class

In this chapter we present a planning algorithm for SAS(+)-PUB planning problems. Thus we remove the restriction that the set of action types must be single-valued (see Definition 4.3). It turns out that quite a number of real-world processes can be modelled as SAS(+)-PUB planning problems, for example process plants where some fluid is transported in pipes. The typical actions in such a plant would be to open or close a valve, or to turn on or off a motor. From Section 4.2 we know that the worst-case complexity for this class is exponential in the number of state variables. Hence we cannot expect to find an algorithm of polynomial complexity in the number of state variables for all problems, and of course the worst-case complexity for the algorithm given here is exponential in the number of state variables. Apart from not having polynomial complexity, this algorithm differs from the ones presented in Chapter 5 in that it partly uses the state space when constructing a plan. Actually, it can be viewed as a guided search in the state graph. Furthermore, it is not guaranteed to generate a minimal and maximally parallel plan, but in many situations one is satisfied with any plan, if it is generated fast enough.

The main idea behind the algorithm is to split the original planning problem into a number of simpler problems, each solvable using Algorithm 5.1 for SAS(+)-PUBS planning problems. The key problem is how to do the right splits. If this is not done in the right way it may result in a lot of backtracking. Additionally, we must make sure that the algorithm fails when no solution exists. We have not proven that the algorithm is correct, and hence we give it only as a heuristic search method.

In Section 6.1 we analyze possible deadlock situations, and an algorithm based on detecting deadlocks and splitting the original planning problem is described in Section 6.2. Some examples that illustrate the algorithm are given in Section 6.3. Finally, we illustrate the behaviour of the algorithm by some test cases in Section 6.4.


6.1 Deadlock detection

It is of course desirable to detect deadlock situations before the actual planning starts. Besides saving time, this may avoid infinite loops. We give two simple lemmas stating two cases when a plan from the initial to the final state does not exist. These lemmas result in the deadlock tests in Equations 6.1–6.4, which are used in Algorithm 6.1 described in the next section.

From Lemma 5.7 we know that the set Δ(x0, x*) defined in Definition 5.1 is the set of necessary actions for planning problems in the SAS(+)-PUB class. This means that any plan from x0 to x* must contain actions of the same types as the actions in Δ(x0, x*), but since the set of action types H is not single-valued, a minimal plan may contain more than one action of each type. Consequently, all actions in the set Δ(x0, x*) must be performed to achieve the goal (the final state), and we can use this observation to construct two tests for plan existence.

Let us first consider the actions whose pre-conditions hold in the initial state. We call these actions pre-enabled actions, meaning that their pre-condition is "enabled" in the initial state. Observe that the set of pre-enabled actions depends on the current initial state and on the current set of necessary actions.

Definition 6.1 Given a SAS(+)-PUB planning problem ⟨⟨M, S, H⟩, x0, x*⟩, we define the set of pre-enabled actions as follows:

E(x0) = {a ∈ Δ(x0, x*) : e(a) ⋢ x0}.

2 Note that since the set of action types is unary and the state variables are binary we get that if a 2 E (x0), then b(a) v x0. It might seem logical to use this as a de nition instead, i.e., let E (x0) be the set of all actions in (x0; x?) such that b(a) v x0. Yet this would cause problems if bi(a) = u and ei (a) = x0i . Using the latter de nition, such an action would belong to the set of pre-enabled actions. However, since both b(a) v x0 and e(a) v x0, also the post-condition hold in the initial state, and hence the action does not have to be executed in the initial state. Now, suppose there are two actions a1; a2 2 E (x0) such that a1 `enables' a2 and a2 `enables' a1, i.e., for some i; j 2 M, ei(a1) = fi(a2) and ej (a2) = fj (a1), where `enables' is the relation  as de ned in De nition 5.3. Then neither of these actions can be executed in the initial state since both actions require the other to be executed rst. Obviously this is a kind of deadlock situation, and because all actions in the set E (x0) are required to achieve the the nal state, no plan exists. Extending the above to more than two actions, the condition for deadlock detection can be generalized as in Lemma 6.2.


Lemma 6.2 Suppose the planning problem hhM; S ; Hi; x0; x?i is in the SAS(+)PUB class. Consider the relation E (x0) where E (x0) is de ned as in De nition 6.1 and  is de ned as in De nition 5.3. If +E (x0;x?) is not a partial order then no plan exists from x0 to x?. 2 Proof: Analogously to 2b in the proof of Lemma C.7, if considering E (x0) instead of (x0; x?), and observing that since E (x0)  (x0; x?) it follows from Lemma 5.7 that no plan exists. 2

That δ+_{E(x0)} is a partial order is the same as stating that δ_{E(x0)} is acyclic, that is, there are no loops in the relation graph. Hence testing if δ+_{E(x0)} is a partial order can be done by checking if δ_{E(x0)} contains a loop. Let us look at a simple example to illustrate the lemma above.

Example 6.1 Let M = {1, 2}, i.e., there are two state variables, and let the domains be S_i = {0, 1} for i = 1, 2. We define four action types according to Table 6.1. Let the initial state be x0 = (0, 0) and the final state x* = (1, 1).

action type h   b(h)    e(h)    f(h)
h1              (0,u)   (1,u)   (u,1)
h2              (1,u)   (0,u)   (u,1)
h3              (u,0)   (u,1)   (1,u)
h4              (u,1)   (u,0)   (1,u)

Table 6.1: Pre-, post- and prevail-conditions for Example 6.1.

The set of necessary actions is given by Δ(x0, x*) = {a1, a3}, where type(a1) = h1 and type(a3) = h3. The set of pre-enabled actions in the initial state is E(x0) = {a1, a3}, and the relation `enables' on the set of pre-enabled actions is given in Figure 6.1. From the figure we see that the relation is not antisymmetric, and hence no plan exists. Actually no action can be performed in the initial state, as can be seen from the state graph in Figure 6.2. This is of course not always true. It may be possible to execute some actions, although the actions forming the loop in the relation graph cannot be executed.

Figure 6.1: The relation δ_{E(x0)} in Example 6.1. □

Figure 6.2: The state graph for Example 6.1. The transitions are marked with the corresponding action types.

Above we considered the actions whose pre-conditions hold in the initial state. In the same way we can define the set of all actions in the set of necessary actions whose post-conditions hold in the final state. We call this the set of goal actions.

Definition 6.3 Given a SAS(+)-PUB planning problem ⟨⟨M, S, H⟩, x0, x*⟩, we define the set of goal actions as follows:

D(x*) = {a ∈ Δ(x0, x*) : e(a) ⊑ x*}.

2 Note that the set of goal actions depends on the nal state in the same way as the set of pre-enabled actions depends on the initial state. Now, suppose there are two actions a1; a2 2 D (x?) such that a1 `disables' a2 and a2 `disables' a1, where `disables' is the inverse of the relation  as de ned in De nition 5.3. Then, for some i; j 2 M, fi(a2) 6v ei(a1) 6= u and fj (a1) 6v ej (a2) 6= u. Assuming bi(a1) 6= u and bj (a2) 6= u this is equivalent with fi(a2) = bi(a1) and fj (a1) = bj (a2 ). Obviously it is impossible to reach any state x such that e(a1) t e(a2) v x, and in particular it is impossible to reach our goal, the nal state x?. This is formally stated in Lemma 6.4 for any number of actions. Lemma 6.4 Suppose the planning problem hhM; S ; Hi; x0; x?i is in the SAS(+)PUB class. Consider the relation D (x?), where  is de ned according to Definition 5.3 and D (x?) according to De nition 6.3. If +D (x?) is not a partial order then there is no plan from x0 to x?. 2


Proof: Analogous to 2a(ii) in the proof of Lemma C.7.

2

As stated before, δ+_{D(x*)} is a partial order if and only if δ_{D(x*)} is acyclic, i.e., there are no loops in the relation graph. A simple example is given below.

Example 6.2 Consider Example 6.1 but with the initial and the final states interchanged, i.e., x0 = (1, 1) and x* = (0, 0). The set of necessary actions is given by Δ(x0, x*) = {a2, a4}, where type(a2) = h2 and type(a4) = h4. The set of goal actions equals the set of necessary actions, so D(x*) = {a2, a4}. The relation δ_{D(x*)} is given in Figure 6.3. From the figure we see that it is symmetric, and hence no plan exists. It is also obvious from Table 6.1 that there is no action type h ∈ H such that e(h) ⊔ f(h) ⊑ x*, and we see that there is no arrow leading to the state (0, 0) in the state graph in Figure 6.2.

Figure 6.3: The relation δ_{D(x*)} in Example 6.2. □

From Lemmas 6.2 and 6.4, and from the examples above, it is clear that we can detect when no plan exists before the actual planning starts. Apart from the obvious tests that δ_{E(x0)} and δ_{D(x*)} are acyclic, for each a ∈ Δ(x0, x*) we also compute the set of goal actions for the state e(a) ⊔ f(a). Since all actions in Δ(x0, x*) must be executed, every action a ∈ Δ(x0, x*) must eventually be executed, leading to a state x such that e(a) ⊔ f(a) ⊑ x. But if δ_{D(e(a)⊔f(a))} contains cycles, it is impossible to reach such a state. This can be summarized as follows:

Compute Δ(x0, x*).                                                (6.1)
Test if δ+_{D(x*)} is acyclic.                                    (6.2)
For each a ∈ Δ(x0, x*), test if δ+_{D(f(a)⊔e(a))} is acyclic.     (6.3)
Test if δ+_{E(x0)} is acyclic.                                    (6.4)

As stated above Equations 6.1{6.4 can be used to detect deadlocks before the actual planning starts, and this is used in Algorithm 6.1 presented in the next section.
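A minimal Python sketch of the tests 6.2–6.4 is given below. It assumes that the set of necessary actions Δ(x0, x*) has already been computed (test 6.1, e.g. by the first phase of Algorithm 5.1) and that actions are encoded as dictionaries of partial pre-, post- and prevail-conditions; the relation used for the cycle test follows the two cases quoted in the proof of Lemma 5.19, and all names are illustrative.

    def subsumed(partial, state):
        """e(a) ⊑ x for partial states given as dicts (absent key = u)."""
        return all(state.get(i) == v for i, v in partial.items())

    def related(a, b):
        """(a, b) is in the relation of Definition 5.3, as quoted in the proof of
        Lemma 5.19: e_i(a) = f_i(b) for some i, or u != e_i(b) != f_i(a) != u."""
        for i in set(a["post"]) | set(b["post"]):
            if i in a["post"] and a["post"][i] == b["prevail"].get(i):
                return True
            if i in b["post"] and i in a["prevail"] and b["post"][i] != a["prevail"][i]:
                return True
        return False

    def has_cycle(actions):
        """Depth-first search for a cycle in the `related` graph on `actions`."""
        colour = {id(a): 0 for a in actions}          # 0 = white, 1 = grey, 2 = black
        def visit(a):
            colour[id(a)] = 1
            for b in actions:
                if a is not b and related(a, b):
                    if colour[id(b)] == 1 or (colour[id(b)] == 0 and visit(b)):
                        return True
            colour[id(a)] = 2
            return False
        return any(colour[id(a)] == 0 and visit(a) for a in actions)

    def deadlock(necessary, x0, x_star):
        """Tests 6.2-6.4 over an already computed set of necessary actions."""
        pre_enabled = [a for a in necessary if not subsumed(a["post"], x0)]   # Def. 6.1
        goal = [a for a in necessary if subsumed(a["post"], x_star)]          # Def. 6.3
        per_action = [[b for b in necessary
                       if subsumed(b["post"], {**a["prevail"], **a["post"]})]
                      for a in necessary]                                     # test 6.3
        return (has_cycle(goal) or any(has_cycle(g) for g in per_action)
                or has_cycle(pre_enabled))

    # Example 6.1: the two necessary actions mutually enable each other, so no plan exists.
    necessary = [
        {"pre": {1: 0}, "post": {1: 1}, "prevail": {2: 1}},   # a1 of type h1
        {"pre": {2: 0}, "post": {2: 1}, "prevail": {1: 1}},   # a3 of type h3
    ]
    assert deadlock(necessary, {1: 0, 2: 0}, {1: 1, 2: 1})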


6.2 The SAS(+)-PUB planning algorithm

In this section we present the SAS(+)-PUB planning algorithm (Algorithm 6.1) and prove that it is sound (Theorem 6.8). As earlier pointed out, Lemma 5.7 states that every plan from x0 to x* must contain actions of the same type as in Δ(x0, x*). This means that for every action a ∈ Δ(x0, x*), every plan from x0 to x* must pass some state x such that f(a) ⊔ e(a) ⊑ x. This is the idea we use when splitting the original problem. We choose an action a ∈ Δ(x0, x*) and try to reach a state x such that e(a) ⊔ f(a) ⊑ x, i.e., find a plan ⟨Δ1, δ1⟩ from x0 to e(a) ⊔ f(a). Next a plan ⟨Δ2, δ2⟩ from the intermediate state x to x* should be searched for. The combination of these two plans forms a plan from x0 to x*. When splitting the problem we get two (or more) simpler problems as described below, and eventually these simpler problems will fit into the SAS(+)-PUBS class (or at least the algorithm developed for this class will succeed). From Theorem 5.8 we know that the algorithm sometimes returns a plan even if the planning problem belongs to the SAS(+)-PUB class.

Now the question is how the "split action" should be chosen. To resolve this we define the set of primary split actions. These are the actions that should be tried first when splitting the original problem. From the set of necessary actions Δ(x0, x*) we construct a subset Δ_S1(x0, x*) containing the primary split actions. These actions are the actions in Δ(x0, x*) that "destroy" the single-valuedness, i.e., if it were not for them the set as a whole would have been single-valued. In some sense, the prevail-conditions of the primary split actions are harder to achieve, which is the reason for choosing them first when trying to split the original problem.

Definition 6.5 Given a SAS(+)-PUB planning problem ⟨⟨M, S, H⟩, x0, x*⟩, we define the set of primary split actions as follows:

Δ_S1(x0, x*) = {a1 ∈ Δ(x0, x*) : ∃ a2 ∈ Δ(x0, x*) and i ∈ M such that u ≠ f_i(a1) ≠ f_i(a2) ≠ u}

where Δ(x0, x*) is the set of necessary actions defined in Definition 5.1. □

To illustrate this concept we give two examples.

Example 6.3 Let M = {1, 2}, i.e., there are two state variables, and let the domains be S_i = {0, 1} for i = 1, 2. We define four action types according to Table 6.2. Let the initial state be x0 = (0, 0) and the final state x* = (0, 1). According to Definition 5.1 the set of necessary actions is Δ(x0, x*) = {a1, a2, a3}, where type(a_k) = h_k for 1 ≤ k ≤ 3. The set of necessary actions is not single-valued, because it follows from Table 6.2 that f_2(a1) = 0 ≠ f_2(a2) = 1. Thus the set of primary split actions as in Definition 6.5 is Δ_S1 = {a1, a2}. □

h    b(h)    e(h)    f(h)
h1   (0,u)   (1,u)   (u,0)
h2   (1,u)   (0,u)   (u,1)
h3   (u,0)   (u,1)   (1,u)
h4   (u,1)   (u,0)   (1,u)

Table 6.2: Pre-, post- and prevail-conditions for Example 6.3.

Example 6.4 Consider the LEGO car assembly line presented in Example 3.5 and modelled as in Example 4.4. According to Example 5.4 the set of necessary actions is

Δ(x0, x*) = {⟨h1, ToTopStorage⟩, ⟨h2, ToTopPress⟩, ⟨h3, StopperForward⟩, ⟨h4, StopperHome⟩, ⟨h5, FeederForward⟩, ⟨h6, FeederHome⟩, ⟨h7, PutTop⟩}.

From the example we know that there are two actions that "destroy" the single-valuedness, namely ⟨h2, ToTopPress⟩ and ⟨h7, PutTop⟩. This follows from Table 4.2 because f_4(⟨h2, ToTopPress⟩) = 1 ≠ f_4(⟨h7, PutTop⟩) = 0. Thus these actions are primary split actions according to Definition 6.5, and hence Δ_S1 = {⟨h2, ToTopPress⟩, ⟨h7, PutTop⟩}. □
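The following Python sketch (names and encoding are assumptions, not the thesis' code) computes the primary split actions of Definition 6.5 by looking for pairs of necessary actions whose defined prevail-conditions clash on some state variable; the data mimics Example 6.3.

    def primary_split_actions(necessary):
        """Definition 6.5: actions whose defined prevail-condition clashes with
        the defined prevail-condition of some other necessary action."""
        result = []
        for a in necessary:
            if any(a is not b and
                   any(i in b["prevail"] and b["prevail"][i] != v
                       for i, v in a["prevail"].items())
                   for b in necessary):
                result.append(a)
        return result

    # Example 6.3: the prevail-conditions of a1 and a2 clash on variable 2.
    a1 = {"prevail": {2: 0}}
    a2 = {"prevail": {2: 1}}
    a3 = {"prevail": {1: 1}}
    assert primary_split_actions([a1, a2, a3]) == [a1, a2]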

Apart from the idea of splitting the problem as presented above, we have to add tests to detect deadlocks and avoid in nite loops. For this reason we add the tests in Equations 6.1{6.4. Additionally we must make sure that the number of necessary actions decreases when splitting the original problem, and we must test if the intermediate state x was visited before. Furthermore, we must also take into account that we might choose the wrong split action when splitting the original problem. In other words, even if we did not manage to construct a plan when splitting around the current action, there may be a plan if splitting around some other action, so backtracking may be necessary. This is what makes the algorithm look rather complicated. Examples of when backtracking is necessary are shown in Examples 6.7 and 6.8. Before stating the formal algorithm we give a GRAFCET chart in Figure 6.4 describing the main parts of it, and in De nition 6.6 we de ne some functions and procedures used in the GRAFCET chart.


Definition 6.6 We assume that the following functions and procedures are available:

PlanPUBS(x0, x*, M, H): Algorithm 5.1 (modified so that H itself is an input; see the note below). Returns a plan ⟨Δ, δ⟩ according to the specifications in Definitions 5.1 and 5.3.
Deadlock(x0, x*): Returns true if there is a deadlock as in Equations 6.2–6.4, otherwise false.
SplitActions(Δ): Returns the set of split actions defined in Definition 6.5.
SelectAndRemove(S): Removes an arbitrary action from the set S and returns it.
ApplyPlan(x, ⟨D, r⟩): Returns the state resulting from applying the plan ⟨D, r⟩ in the initial state x.

We assume that the computations carried out in each GRAFCET step are completed before the condition on the transition following the step is tested. We are looping through the sets Δ_S1 and Δ_S2 until either a plan is found (Steps S12 and S14) or no plan exists (Step S3). That no plan exists can be for two reasons: either there are no more actions in Δ_S1 or Δ_S2, or the state x ∈ Visited. The set Δ_S2 contains all actions in Δ that are not in the set of primary split actions. Thus Δ_S2 is the set of secondary split actions. Observe that the GRAFCET chart PlanPUB is recursively called in Steps S9 and S13. Steps marked with double bars show that a new GRAFCET chart is activated when the step is active. The integer ND denotes the number of necessary actions in the previous call, and should be initialized to -1 the first time the GRAFCET chart is activated. In Step S14 the relation δ' is used to compute δ. This relation is defined in the following way:

δ' = {⟨a1, a2⟩ : a1 ∈ Δ1 and a1 is maximal under δ1, a2 ∈ Δ2 and a2 is minimal under δ2}

and is used only to simplify the presentation of the algorithm. That a1 is maximal under δ1 means that a1 is a maximal element, and that a2 is minimal under δ2 means that a2 is a minimal element. The maximal elements of a partial order are simply all elements such that no element is "greater" under the given ordering, and the minimal elements are all elements such that no element is "smaller". These concepts are formally defined in Appendix B. (Note that Algorithm 5.1 takes a set containing one action of each type from the set H as an input; it is straightforward to modify it so that H itself is an input.)

Inspecting the algorithm we notice that first Algorithm 5.1 is called (Step S1). If δ is a partial order, then ⟨Δ, δ⟩ as computed in Step S1 is a solution and

6.2 The SAS(+) -PUB planning algorithm S0

111

PlanPUBS Start

S1

∆ , δ := PlanPUBS(x0, x*, M, H)

δ p.o.

δ not p.o. and |∆| > N D

δ not p.o. and ( |∆| < ND or ND= -1) T := Deadlock(x0, x*)

S2

T = true

T = false ∆ S1:= SplitActions(∆) ∆ S2:= ∆−∆ S1

S4

true N D := |∆|

S5 true

S6

∆ S1=O and ∆ S2=O

∆ S1= O S7

∆ S1=O and ∆ S2= O

a := SelectRemove(∆ ∆ S1)

S8

x0

e(a) f(a) S9

∆1 = O

x0

∆ 1, δ 1 := PlanPUB(x0, e(a) f(a), M, H, ND )

∆1 = O

x ∋Visited Insert x in Visited

S11 S3

e(a) f(a)

x: = ApplyPlan(x0, ∆ 1, δ 1)

S10

x ∋ Visited

a := SelectRemove(∆ ∆ S2)

∆ := O, δ := O x = x*

x = x*

∆ 2, δ 2 := PlanPUB(x, x*, M, H, ND +1)

S13 ∆ := ∆ 1 δ := δ 1

S12

∆2 = O

true

∆ := ∆ 1 U ∆ 2 δ := δ 1 U δ 2 U δ’

S14

∆2 = O

S15

Delete x from Visited

true

return ∆ , δ

S16 true

Figure 6.4: A GRAFCET chart for the procedure PlanPUB (Algorithm 6.1).

112

Planning for the SAS(+) -PUB class

is returned in Step S16. Otherwise we test if the number of necessary actions is decreasing, i.e., if jj < ND . If jj  ND , then we we set  =  = ; in Step S3 to avoid entering an in nite loop. On the other hand, if the number of actions is decreasing, the deadlock test is called in Step S2. If it returns true, there is no plan from xF to xT , and in Step S3 the sets  and  are set to the empty set. If the deadlock test returns false there might still be a solution. In Step S4 the sets of primary and secondary split actions are computed. In Step S5 we set ND = jj for later use. If both the set S1 and the set S2 are empty, i.e. all candidate actions have been tried without nding a plan, no plan exists and we let  =  = ; in Step S3. If, on the other hand S1 6= ; or S2 6= ; then a split action is selected in Step S7 or Step S8. If S1 6= ; then a primary split action is chosen (Step S7), otherwise a secondary one from the set S2 (Step S8). If e(a) t f (a) v xF a plan from xF to e(a) t f (a) is h;; ;i, and a new split action is selected (if the sets are not empty). Suppose e(a) t f (a) 6v xF . Then PlanPUB is recursively called in Step S9 to nd a plan from xF to e(a) t f (a). If no plan is returned, i.e. 1 = ;, we go back to Step S6 again. If a plan h1; 1i is returned, i.e. 1 6= ;, then the state x resulting when applying this plan to the initial state xF is computed (Step S10). If x 2 V isited, then we set  =  = ; in Step S3 to avoid entering an in nite loop in which we repeatedly split around the same actions. Otherwise the state x is inserted into Visited to be used for future tests (Step S11). Note that Visited should only contain states generated on the currently tried path to the goal, and hence if this direction fails, the state x must be deleted from Visited (Step S15). For the same reason Visited should be implemented as an array and not as a list. If x = xT then h1; 1i is the searched-for plan. Otherwise PlanPUB is called to construct a plan from x to xT , but with ND +1 instead of ND . This is due to the fact that in the second part of the split we must allow the new set of necessary actions to be as large as the current one. If no plan is returned, i.e. 2 = ;, we go back to Step S6 again. If a plan h2; 2i is returned, i.e. 2 6= ;, then the combination of h1; 1i and h2; 2i forms a plan from xF to xT (Step S14). Finally the resulting plan, or the empty sets from Step S3, is returned in Step S16. Since e(a) t f (a) can be viewed as a subgoal when trying to achieve the nal state (the main goal), it is clear that the subproblems resulting when splitting a problem, in some sense, are easier than the original problem. The idea is thus the same as when using the technique of divide-and-conquer [6], but since the subproblems we try to solve might lack solution, backtracking must be used as described below. To make sure that the algorithm terminates, the set of necessary actions must decrease monotonically. In the rst part of the split (Step S9), it is obvious that the number of split actions decreases. In the second part we allow j0j = jj where 0 is the new set of necessary actions and  is the set from the previous call. Since the state space is nite we will in the case of creating an in nite chain of split actions 0 such that

6.2 The SAS(+) -PUB planning algorithm

113

j0j = jj eventually generate a state x 2 S such that x 2 V isited and hence

the loop will terminate. It is thus clear that the set of necessary actions  will decrease when splitting the original problem, and since it is nite we will eventually reach a situation where the set of necessary actions  contains no split actions. Then, according to Theorem 5.92, either  on this set is a partial order, and h; i forms a plan from the initial state to the nal state, or no plan exists. If no plan exists, we try a new split action from the set . If there are no more actions to try, there is no plan from xF to xT . If xT is a subgoal, we report failure in achieving this subgoal, and try a new split action on this higher level. In the algorithm depicted in Figure 6.4, the deadlock tests in Step S2 are carried out every time the procedure PlanPUB is called. It is, however, possible to simplify this and thus save some time when applying the algorithm. The test according to Equation 6.4 must be carried out every time the procedure is called with a new initial state. The tests on the set of goal actions according to Lemma 6.4 need only be done once with x? as a goal (Equation 6.2) and once with the post- and prevail-conditions for every action in the set  (Equation 6.3). This follows since all new subgoals (new nal states) computed by the algorithm are such that for some a 2 0, where 0 is any set of necessary actions generated by the algorithm, e(a) t f (a) is the new subgoal. Furthermore, it is easily realized that type (0)  type ((x0; x?)) for all sets of necessary actions 0 constructed by the algorithm. Obviously the algorithm above can lead to a lot of backtracking if choosing the wrong split action (see e.g. Examples 6.7 and 6.8). It is not clear how this action should be chosen to completely avoid backtracking, since whether a particular action in  is a good or bad choice depends heavily on the prevailcondition. Thus one may draw some conclusions in each special case, but nothing can be said in general. Accordingly, the time needed for nding a plan using this algorithm depends heavily on the number of necessary actions and how \dicult" their prevail-conditions are. If the number of necessary actions is high, there is a high probability of chosing the actions in the wrong order, which results in time consuming backtracking. If instead the number of actions is low, this means that the probability of choosing the right action is higher, which results in less backtracking. We can now state the modi ed algorithm solving the SAS(+)-PUB planning problem. Apart from the functions and procedures de ned in De nition 6.6, we need the following ones used in the algorithm.

Theorem 5.9 is only valid for SAS(+) -PUBS planning problem. It is straightforward to de ne a new set of action types H such that H is single-valued and type ()  H , where  is the set of necessary actions containing no primary split actions. 2

0

0

0

114

Planning for the SAS(+) -PUB class

De nition 6.7 We assume that the following functions and procedures are

available: Insert(a,S): Inserts the element a into the set S . Delete(a,S): Deletes the element a from the set S . DeleteSet(S1 ; S2 ): Returns the set S1 ? S2, i.e., the set of all actions in S1 that are not in S2. MinimalElements(h ; i): Returns a list containing the minimal elements of the set ordered under . MaximalElements(h ; i): Returns a list containing the maximal elements of the set ordered under . Order(a,a',r): Adds ara0 to the relation r. ApplyPlan(x; hD; r i): Returns the state resulting from applying the plan hD; ri in the state x. FindEnables(D, M): Returns the relation D on the set D according to De nition 5.3. Compare lines 59{61 in Algorithm 5.1. FindDisables(D, M): Returns the relation D on the set D according to Definition 5.3. Compare lines 63{65 in Algorithm 5.1.

2

Algorithm 6.1 Input: M, a set of state variable indices, H, a set of action types, and x0 2 S and x? 2 S +, the initial and nal states respectively. The integer ND denotes the number of split actions (should be ?1 when rst called) and FirstCall should be true the rst time the procedure is called, otherwise false. NewState should be true when the algorithm is called with a new initial state (i.e. the rst time it is called and when called in the second part of the split), otherwise false. The set Visited should be empty when rst called. Output: D, a set of actions, and r, a relation on D. 1 Procedure PlanPUB(x0 ; x?; M; H; ND ; V isited; FirstCall; NewState); 2 x0; x? : state; 3 M : set of state variable indices;

6.2 The SAS(+) -PUB planning algorithm 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46

115

H : set of action types; ND : integer; FirstCall : boolean;

var

a; a1; a2 : action; D; D1 ; D2; DS ; Min; Max : set of actions; r; r1; r2 : relations on D; D1 and D2 respectively;

Procedure DisablesTest(x; D); x : partialstate; D : set of action types;

var

a : action; D1 : set of actions; r : relation on D1; beginfDisablesTestg fComputing the set of goal actions D1g for a 2 D do if e(a) v x then Insert(a; D1 ); endif ; endfor; fComputing D1 g r :=FindDisables(D1 ); fTest if r+ is a partial orderg if r is not acyclic then fail endif ; end; fDisablesTestg

Procedure EnablesTest(x; D; M);

x : partial state; D : set of action types; M : set of state variable indices;

var

a : action; D1 : set of actions; r : relation on D1; beginfEnablesTestg fComputing the set of pre-enabled actions D1g for a 2 D do for a 2 Mdo if ei(a) 6v x0i then Insert(a; D1 );

116 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89

Planning for the SAS(+) -PUB class endif ; endfor; endfor; fComputing D g r1 :=FindEnables(D1 ); fTest if r+ is a partial orderg if r is not acyclic then fail endif ; end; fEnablesTestg 1

Procedure SplitActions(D; M);

D : set of action types; M : set of state variable indices;

var

a : action; DS : set of actions; x : state; i : state variable index; beginfSplitActionsg fComputing the set of split actions DS g x := (u; : : : ; u); for a 2 D do x := x t f (a); endfor; for i 2 Mdo if xi = k then for a 2 D do if fi(a) 6= u then Insert(a; DS ); endif ; endfor; endif ; endfor; return DS ; end; fSplitActionsg

Procedure AppendPlan(hD1 ; r1i; hD2 ; r2i);

D1; D2 : set of action types; r1; r2 : relations on D1 and D2 respectevily;

var

a1; a2 : action; D; Min; Max : set of actions; r : relation on D;

6.2 The SAS(+) -PUB planning algorithm

117

90 beginfAppendPlang 91 D := D1 [ D2; 92 r := r1 [ r2; 93 Max :=MaximalElements(hD1 ; r1i); 94 Min :=MinimalElements(hD2 ; r2i); 95 for a1 2 Max do 96 for a2 2 Min do 97 Order(a1; a2 ; r); 98 endfor; 99 endfor; 100 returnhD; r i; 101 end; fAppendPlang 102 103 beginfPlanPUBg 104 D1 := ;; 105 hD; ri :=PlanPUBS(x0 ; x?; M; H); 106 if D 6= ;then 107 if r is acyclic then 108 fComputing the state resulting when applying the plan hD; ri to x0g 109 x :=ApplyPlan(x0; hD; ri); 110 else 111 if FirstCall then 112 fTest if D (x?) is acyclicg 113 DisablesTest(x? ; D; M); 114 for a 2 D do 115 fTest if D (e(a)tf (a)) is acyclicg 116 DisablesTest(e(a) t f (a); D; M); 117 endfor; 118 endif ; 119 fTest if the procedure is called with a new initial stateg 120 if NewState then 121 EnablesTest(x0 ; D; M); 122 endif ; 123 fComputing the set of primary split actions DS1 g 124 DS1 :=SplitActions(D); 125 fComputing the set of secondary split actions DS2 g 126 DS2 :=DeleteSet(D; DS1 ); 127 if (jDj < ND or ND < 0) then 128 ND := jDj; 129 while ((DS1 6= ;or DS2 6= ;) and D2 = ;)) do 130 if DS1 6= ;then 131 a :=SelectRemove(DS1 );

118

Planning for the SAS(+) -PUB class

132 133 134 135 136 137

else a :=SelectRemove(DS2 ); endif ; if f (a) t e(a) 6v x0 then fConstructing a plan from x0 to e(a) t f (a)g hhD1 ; r1i; xi := PlanPUB(x0 ; f (a)te(a); M; H; ND ; V isited; false; false); if D1 =6 ;then if x 62 V isited then

138 139 140 141 142

Insert(x; V isited); fConstructing a plan from x to x?g hhD2 ; r2i; xi := PlanPUB(x; x? ; M; H; ND +1; V isited; false; true); if D2 = ;and x 6= x? then Delete(x; V isited); endif ; endif ; endif ; endif ; endwhile; endif ; if D2 = ;and x 6= x? then fNo plan existsg D := ;; r := ;;

143 144 145 146 147 148 149 150 151 152 153 154 else 155 fComputing the resulting plang 156 hD; r i :=AppendPlan(hD1 ; r1i; hD2 ; r2i); 157 endif ; 158 endif ; 159 endif ; 160 returnhD; ri; 161 end; fPlanPUBg

2

Observe that the algorithm returns D = r = ; if either x0 = x? or if no plan exists. However, the case x0 = x? can be checked before calling the algorithm and hence returning the empty sets is the same as failing. The algorithm above may look rather complicated, but the main idea is simple as described earlier. The detection of the cases when no plan exists is carried out in the main procedure PlanPUB. In line 113 the test according to Equation 6.2 is done, and in lines 114{117 the same test but with e(a) t f (a) as goal is carried out (Equation 6.3). This is done using the procedure DisablesTest in lines 13{32. The test in Equation 6.4 is done at lines 120{122 if a new initial state is used.

6.3 Examples

119

Here the procedure EnablesTest in lines 34{55 is used. It is easy to prove that if the algorithm succeeds, then the generated plan will solve the stated planning problem, i.e., the algorithm is sound. This is shown in Theorem 6.8. From Lemmas 6.2 and 6.4 we immediately see that the algorithm fails if any of the situations accounted for in these lemmas occurs. However, we have not proven that the algorithm always nds a solution if there exists one and otherwise fails, and hence we present the algorithm as a heuristically guided search method. Furthermore, the algorithm is not guaranteed to return a minimal and maximally parallel plan. Theorem 6.8 Given a SAS(+)-PUB planning problem hhM; S ; Hi; x0; x?i, if Algorithm 6.1 succeeds then hD; r i is a parallel plan from x0 to x?. 2 Proof: Suppose the algorithm succeeds, i.e., it returns a tuple hD; ri such that D 6= ;. Then there are two cases depending on if hD; r i returned by PlanPUBS (Algorithm 6.1) on line 105 form a plan (i.e., D 6= ; and r is acyclic) or not. 1. Suppose D 6= ; and r is acyclic. It follows from Theorem 5.8 that hD; r i is a plan from x0 to x?, i.e., D = (x0; x?) and r = . Since r is acyclic hD; ri we will not enter lines 110{158, and h(x0; x?); i is returned in line 160. 2. Suppose hD; ri at line 94 does not form a plan from x0 to x?. Then r contains cycles because D must exist since the algorithm succeeds. Since r contains cycles we enter the splitting part in lines 110{158. Now, the returned sets D and r must have been computed at line 156 because D 6= ;. Thus D1 6= ; and D2 6= ; at line 151. This means that hD1; r1i and hD2 ; r2i returned at lines 137 and 142 are such that D1 6= ; and D2 6= ;. Obviously hD; ri is a plan from x0 to x? if hD1 ; r1i is a plan from x0 to x0 and hD2; r2i is a plan from x0 to x?, where x0 is the intermediate state resulting when applying the plan hD1 ; r1i to the initial state x0. Repeating the same argument for hD1 ; r1i and hD2; r2i nally leads to a number of tuples hDk ; rk i computed by PlanPUBS, and it follows from 1 above that each tuple hDk ; rk i forms a plan between the corresponding states, and obviously the resulting tuple hD; ri is a plan from x0 to x?. Thus, if D 6= ;, then hD; ri is a plan from x0 to x?. 2

6.3 Examples In Examples 6.1 and 6.2 we gave two cases when Algorithm 6.1 detects that no plan exists. In this section we will apply the algorithm to some more examples

120

Planning for the SAS(+) -PUB class

to illustrate how it works. In Example 6.5 we show an example where it is not enough to look at the set of goal actions for the nal state x?, but all subgoals e(a) t f (a) must also be checked to detect that no plan exists. Examples 6.6, 6.7 and 6.8 show three cases where the algorithm succeeds in nding a plan. In Example 6.7 some backtracking is required if choosing the wrong split action, and in Example 6.8 we illustrate that it is not enough to only split around the primary split actions. Example 6.5 Let M = f1; 2; 3g, i.e., there are three state variables, and let the domains be Si = f0; 1g for i = 1; 2; 3. We de ne six action types according to Table 6.3. action type h h1 h2 h3 h4 h5 h6

b(h) (0; u; u) (1; u; u) (u; 0; u) (u; 1; u) (u; u; 0) (u; u; 1)

e(h) (1; u; u) (0; u; u) (u; 1; u) (u; 0; u) (u; u; 1) (u; u; 0)

f (h) (u; 0; u) (u; u; u) (0; u; u) (u; u; u) (1; 1; u) (u; u; u)

Table 6.3: Pre-, post- and prevail-conditions for Example 6.5. Let the initial state be x0 = (0; 0; 0) and the nal state x? = (0; 0; 1). It follows from De nitions 4.1 and 4.3 that hhM; S ; Hi; x0; x?i, where H is given by Table 6.3, is in the SAS-PUB class of planning problems. We get the set of necessary actions (x0; x?) = fa1; a2; a3; a4; a5g where type (ak ) = hk for 1  k  5. The set of goal actions for the nal state is D(x?) = fa2; a4; a5g and these actions are not related under the relation D (x0;x?). Now, for each action a in (x0; x?) we should consider the relation  on the set of goal actions for f (a) t e(a). For the two actions a2 and a4 the corresponding sets of goal actions contain only one element. The sets of goal actions corresponding to a1 and a3 are D (f (a1) t e(a1)) = fa1; a4g and D (f (a3) t e(a3)) = fa2; a3g respectively. In both cases the goal actions are unordered under the relation . Thus the only remaining action is a5. For a5 we get that D (f (a5) t e(a5)) = fa1; a3; a5g. The relation D (f (a5)te(a5)) is thus given by Figure 6.5. There is a loop in the relation graph and hence no plan exists.

2

Example 6.6 Let M = f1; 2; 3g, i.e., there are three state variables, and let the domains be Si = f0; 1g for i = 1; 2; 3. We de ne six action types

6.3 Examples

121 a1

a3

a5

Figure 6.5: The relation D (f (a)te(a)) for Example 6.5. according to Table 6.4. Let the initial state be x0 = (0; 0; 0), and the nal state x? = (1; 0; 1). Now, hhM; S ; Hi; x0; x?i is a SAS-PUB problem according to De nitions 4.1 and 4.3, and thus Algorithm 6.1 can be used. action type h h1 h2 h3 h4 h5 h6

b(h) (0; u; u) (1; u; u) (u; 0; u) (u; 1; u) (u; u; 0) (u; u; 1)

e(h) (1; u; u) (0; u; u) (u; 1; u) (u; 0; u) (u; u; 1) (u; u; 0)

f (h) (u; u; u) (u; u; u) (1; u; u) (u; u; u) (0; 1; u) (u; u; u)

Table 6.4: Pre-, post- and prevail-conditions for Example 6.6. The set of necessary actions is (x0; x?) = fa1; a2; a3; a4; a5g where type (ak ) = hk for 1  k  5. The relation `disables' on the sets of goal actions is anti-symmetric, and so is the relation `enables' on the set of pre-enabled actions in the initial state. However, the relation `precedes' () on the set of necessary actions is not a partial order (see Figure 6.6) and we have to split the problem. a1

a3

a5

a2

Figure 6.6: The relation graph for (x ;x?) for Example 6.6. Transitive

arcs are omitted.

0

From Table 6.4 we see that there are two action types that have di erent values in the prevail-conditions for the same state variable, namely h3 and h5. This is due to the fact that f1(h3) = 1 6= f1(h5) = 0. There are two actions of these types in the set of necessary actions (type (a3) = h3 and type (a5) = h5),

122

Planning for the SAS(+) -PUB class

and accordingly f1(a3) = 1 6= f1(a5) = 0. From De nition 6.5 we see that these actions form the set of primary split actions, and hence S1(x0; x?) = fa3; a5g. Let us rst try to split the problem around the prevail- and post-condition for a3 . 1. Find a plan from x0 to f (a3) t e(a3) = (1; 1; u). The set of necessary actions is (x0; f (a3) t e(a3)) = fa6; a7g where type (a6) = h1 and type (a7 ) = h3 . The relation `precedes' on the set (x0; f (a3 ) t e(a3)) is given in Figure 6.7a. a10 a6

a7

a11

a9 a8

(a)

(b)

Figure 6.7: The relation graphs for (a) (x0;f (a3)te(a3)) and (b) (x1;x?) for Example 6.6. Transitive arcs are omitted. We see that (x0;f (a3)te(a3)) is a partial order and hence the tuple h(x0; f (a3) t e(a3)); (x0;f (a3)te(a3))i is a plan from x0 to f (a3) t e(a3)). The resulting intermediate state is x1 = (1; 1; 0). 2. Find a plan from x1 = (1; 1; 0) to x?. The set of necessary actions is (x1; x?) = fa8; a9; a10; a11g where type (a8) = h4, type (a9) = h5, type (a10) = h1 and type (a11) = h2. The relation `precedes' on the set (x1; x?) is given in Figure 6.7b. We see that (x1;x?) is a partial order and hence h(x1; x?); (x1;x?)i is a plan from x1 to x?. Now, a plan from x0 to x? is constructed by combining the two plans above. The result is shown in Figure 6.8.

2

Example 6.7 Suppose M = f1; 2; 3g, Si = f0; 1g for i = 1; 2; 3, and the set of action types H is given in Table 6.5. Let the initial state be x0 = (0; 0; 0) and the nal state be x? = (0; 1; 1). We see from De nitions 4.1 and 4.3 that hhM; S ; Hi; x0; x?i is in the SAS-PUB class and we apply Algorithm 6.1. The set of necessary actions is (x0; x?) = fa1; a2; a3; a4g where type (ak ) = hk for 1  k  4. The relation `disables' on the sets of goal actions is anti-symmetric, and so is the relation

6.3 Examples

123 a10 a6

a7

a9

a11

a8

Figure 6.8: The resulting plan from x0 to x? for Example 6.6. Tran-

sitive arcs are omitted. The dashed line indicates where the problem is split. action type h h1 h2 h3 h4 h5

b(h) (0; u; u) (1; u; u) (u; 0; u) (u; u; 0) (u; u; 1)

e(h) (1; u; u) (0; u; u) (u; 1; u) (u; u; 1) (u; u; 0)

f (h) (u; u; u) (u; u; u) (1; u; u) (0; 0; u) (u; u; u)

Table 6.5: Pre-, post- and prevail-conditions for Example 6.7. `enables' on the set of pre-enabled actions in the initial state. However, the relation `precedes' () on the set of necessary actions is not a partial order as can be seen from Figure 6.9, and we have to split the problem. From Table 6.5 we see that there are two actions in the set (x0; x?) with di erent values for the rst state variable in their prevail-conditions. These actions are a3 and a4, and f1(a3) = 1 6= f1(a4) = 0. De nition 6.5 gives that these actions belong to the set of primary split actions, i.e., S1(x0; x?) = fa3; a4g. a1

a3

a4

a2

Figure 6.9: The relation graph for (x ;x?) for Example 6.7. Transitive

arcs are omitted.

0

124

Planning for the SAS(+) -PUB class

Let us st choose the action a3 as split action. Then we split around the state x = e(a3) t f (a3) = (1; 1; u), and thus construct a plan from x0 to x. Such a plan is given by h1; 1i where 1 = fa5; a6g, type (a5) = h1, type (a6) = h3 and 1 is as in Figure 6.10a. The state resulting when applying this plan to x0 is x0 = (1; 1; 0), and next we try to construct a plan from x0 to x?. To achieve the original goal (the nal state) we must execute an action of type h4, and the prevail-condition of this action type is f (h4) = (0; 0; u). But there is no available action type such that e2(h) = 0, and since x02 6= 0 we cannot construct the set of necessary actions, and there is no plan from x0 to x?. a5

(a)

a8

a6

a10

a9

(b)

Figure 6.10: The relation graphs for (a) 1 and (b) 22 in Example 6.7. Transitive arcs are omitted.

Let us now try the single remaining action in the set S1(x0; x?), namely a4. Here we split around the state x00 = e(a4)tf (a4 ) = (0; 0; 1), and a plan from x0 to x00 is given by h11; 11i where the set 11 = fa7g and type (a7) = h4. Obviously 11 is empty. Next we construct a plan from x00 to x?. Such a plan is given by h22; 22i where 22 = fa8; a9; a10g, type (a8) = h4, type (a9) = h2, type (a10) = h3 and the relation 22 is given by Figure 6.10b. The resulting plan from x0 to x? is given in Figure 6.11. a7

a8

a10

a9

Figure 6.11: The resulting plan in Example 6.7. Transitive arcs are omitted. The dashed line indicates where the problem is split.

2

Example 6.8 Let M = f1; 2; 3g, i.e., there are three state variables, and let the domains be Si = f0; 1g for i = 1; 2; 3. We de ne six action types according to Table 6.6. Let the initial state be x0 = (0; 0; 0), and the nal state x? = (0; 0; 1). Now, hhM; S ; Hi; x0; x?i is a SAS-PUB problem according

to De nitions 4.1 and 4.3, and thus Algorithm 6.1 can be used. The set of necessary actions is (x0; x?) = fa1; a2; a3; a4; a5g where type (ak ) = hk for 1  k  5. The relation `disables' on the sets of goal actions is anti-symmetric, and so is the relation `enables' on the set of pre-enabled

6.3 Examples

125 action type h h1 h2 h3 h4 h5 h6

b(h) (0; u; u) (1; u; u) (u; 0; u) (u; 1; u) (u; u; 0) (u; u; 1)

e(h) (1; u; u) (0; u; u) (u; 1; u) (u; 0; u) (u; u; 1) (u; u; 0)

f (h) (u; u; 0) (u; 0; u) (u; u; u) (1; u; u) (u; 1; u) (0; 0; u)

Table 6.6: Pre-, post- and prevail-conditions for Example 6.8. actions in the initial state. However, the relation `precedes' () on the set of necessary actions is not a partial order (see Figure 6.12a) and we have to split the problem. a4

a2

a8

a6

a1 a5

(a)

a3

a10

a7 a12

(b)

(c)

Figure 6.12: The relation graphs for (a) (x ;x?), (b) (x ;f (a )te(a )) 0

0

and (c) E (x0) for Example 6.8. Transitive arcs are omitted.

5

5

The set of primary split actions is S1 = fa2; a5g, and thus the set of secondary split actions is S2 = fa1; a3; a4g. For the action a2 we get e(a2)

t f (a2) = (0; 0; u) v x0;

so we try with the split action a5. 1. Find a plan from x0 to f (a5) t e(a5) = (u; 1; 1). The set of necessary actions is (x0; f (a5) t e(a5)) = fa6; a7g where type (a6) = h3 and type (a7) = h5 . The relation `precedes' on the set (x0; f (a5) t e(a5)) is given in Figure 6.12b. We see that (x0;f (a5)te(a5)) is a partial order and hence the tuple h(x0; f (a5) t e(a5)); i is a plan from x0 to f (a5) t e(a5)). The resulting state is x1 = (0; 1; 1).

126

Planning for the SAS(+) -PUB class

2. Find a plan from x1 = (0; 1; 1) to x?. The set of necessary actions is (x1; x?) = fa8; a9; a10; a11; a12g where type (a8) = h1, type (a9) = h2, type (a10) = h4 , type (a11) = h5 and type (a12) = h6. The set of pre-enabled actions is relation E (x0) = fa8; a10; a12g, and the relation E (x0) is given in Figure 6.12c. We see that E (x0) contains a cycle, and according to Lemma 6.2 there is no plan from x1 to x?. Thus we have tried all primary split actions, and must now turn to the set S2. Suppose we next try to split around the post- and prevail-conditions for a3 . 1. Find a plan from x0 to f (a3) t e(a3) = (u; 1; u). The set of necessary actions is (x0; f (a3) t e(a3)) = fa13; a14; a15; a16g where type (a13) = h1, type (a14) = h2 , type (a15) = h4 and type (a16) = h5. The relation `enables' on the set of pre-enabled actions is a partial order, but this is not the case for the relation `precedes' on the set (x0; f (a3) t e(a3)) as can be seen from Figure 6.13. a16

a13

a15

a14

Figure 6.13: The relation graph for (x ;f (a )te(a )) for Example 6.8. 0

3

3

Thus we must split the problem, and the set of primary split actions is S1 = fa14; a16g, so the set of secondary split actions is S2 = fa13; a15g. To split around the primary split actions gives the same result as above. Let us instead try to split around a13. 1.1. Find a plan from x0 to f (a13)te(a13) = (u; 1; u). The set of necessary actions is (x0; f (a13) t e(a13)) = fa17g, where type (a17) = h1. The relation  is of course a partial order, and thus the tuple h(x0; f (a13) t e(a13)); i is a plan from x0 to f (a13) t e(a13). The resulting state is x2 = (1; 1; 0). 1.2. Find a plan from x2 = (1; 1; 0) to f (a3) t e(a3) = (u; 1; 1). We see immediately that the set of necessary actions is (x2; f (a3) t e(a3)) = fa18g, where type (a18) = h5. Hence h(x2; f (a3) t e(a3)); i is a plan from x2 to f (a3) t e(a3). The resulting state is x2 = (1; 1; 1). Combining the two plans above gives a plan according to Figure 6.14a.

6.4 Test cases

127 a17

a18

a20

(a)

a19

(b)

Figure 6.14: Plans from (a) x0 to e(a3) t f (a3) and (b) from x2 to x? in Example 6.8. The dashed line indicates where the problem is split.

2. Find a plan from x2 = (1; 1; 1) to x?. The set of necessary actions is (x2; x?) = fa19; a20g where type (a19) = h2 and type (a20) = h4. The relation  is given in Figure 6.14b. We see that  is a partial order and hence h(x2; x?); i is a plan from x2 to x?. Finally, by combining the two plans above we get the resulting plan as shown in Figure 6.15. The state graph is given in Figure 6.16, and we see from the a17

a18

a20

a19

Figure 6.15: The resulting plan in Example 6.8. Transitive arcs are omitted. The dashed line indicates where the problem is split.

gure that from the state (0; 1; 1) no other state can be reached.

2

6.4 Test cases In this section we investigate how Algorithm 6.1 behaves for some test cases. Even if these test cases di ers from real-world problems in that they lack structure, some interesting properties of the algorithm can be illustrated. The test cases di ers in the number of state variables and in the prevail-conditions of the action types. The prevail-conditions are randomly generated, and the probabilities are varied in the di erent cases. For each case 100 randomly chosen examples are investigated. In some cases, due to the probability of the prevail-conditions, a plan exists for only a few of the generated examples. In these cases the average number of steps presented in the tables may not be representative, and more problems should be investigated to get accurate gures. The investigation was carried out using the planning tool described in Chapter 7. The results using an ordinary depth- rst algorithm (see Section 2.4.1) is also presented for comparison.

128

Planning for the SAS(+) -PUB class (0,0,0) h2

h3 h1

(1,0,0)

h5

(0,1,0)

h3

(0,1,1)

h3

h2

h1

h4 (1,1,0)

(0,0,1)

h4 h5

(1,1,1)

(1,0,1) h3

Figure 6.16: State graph for Example 6.8. The initial state is marked

with an arrow and the nal state with double lines. The transitions are marked with the corresponding action types.

The investigated problems can be described as follows. Let M = f1; : : : ; ng and, for each i 2 M, Si = f0; 1g. The set of action types H is de ned in the following way: For each i 2 M there are two action types hi1 and hi2. For the action type hi1 the pre-condition is bi (hi1) bj (hi1)

= 0 = u; j 6= i and 1  j  n

and the post-condition is ei (hi1 ) ej (hi1 )

= 1 = u; j 6= i and 1  j  n

In the same way the pre-condition for the action type hi2 is bi (hi2) bj (hi2)

= 1 = u; j 6= i and 1  j  n

and the post-condition is ei (hi2 ) ej (hi2 )

= 0 = u; j 6= i and 1  j  n

6.4 Test cases

129

The prevail-conditions are randomly generated for each problem. For each h 2 H and j 2 M, P (fj (h) = 1) = P1 and P (fj (h) = 0) = P0: This means that the probability that a state variable equals 1 in the prevailcondition is P1, and the probability that it equals 0 is P0. The probabilities for each case are given below. The initial state is x0 = (0; : : : ; 0) and the nal state is x? = (1; : : : ; 0; 1; 0; : : : ; 0), where the state variable in x? that equals 1 is randomly decided for each problem instance. It follows from De nitions 4.2 and 4.3 that this de nes a SAS(+)-PUB planning problem. The parameters that are varied (probabilities of the prevail-conditions and number of state variables) a ect the execution time and the number of steps in the resulting plan. These can be varied in many ways, making a complete investigation cumbersome. We have selected three principle aspects that presumably covers the major behavior of the algorithm. In Table 6.7 we show how the execution time and the number of steps are a ected by the number of state variables, i.e., n is varied. For each action type h 2 H and each state variable j 2 M the probability for fj (h) = 1 is 0:2, and the probability for fj (h) = 0 is 0:1. From the table we see that Algorithm 6.1 probability

n=4 n=6 n=8 n = 10

Average time Average steps Plan exists Alg. 6.1 Depth- rst Alg. 6.1 Depth- rst % 0.51 4.42 1.60 4.28 92 1.39 25.26 2.01 5.91 79 9.71 76.06 1.80 6.35 51 3.70 251.37 1.63 2.56 54

Table 6.7: Average execution time (in seconds) and average number of steps in the resulting plan when varying n. For each h 2 H and j 2 M, P (fj (h) = 1) = 0:2 and P (fj (h) = 0) = 0:1. In the column \Plan exists" the number of problems (out of 100 problems) where a plan exists is given. is faster than depth- rst in all the cases, and that the returned plan contains fewer steps. It is also interesting to see how the distribution between zeros and ones in the prevail-conditions a ects the execution time. Since Algorithm 6.1 is based on teh SAS(+)-PUBS algorithm (Algorithm 5.1), it seems reasonable to assume

130

Planning for the SAS(+) -PUB class

that for problems that are \almost single-valued" the algorithm would behave well. In Table 6.8 the probability for a de ned value in the prevail-condition is constant, and P (fj (h) 6= u) = 0:3. However, the distribution between zeros and ones varies from no zeros to as many zeros as ones. In the rst case in probability

Average time Average steps Plan exists Alg. 6.1 Depth- rst Alg. 6.1 Depth- rst % P1 = 0:3; P0 = 0 0.71 135.88 1 14.64 100 P 1 = 4P 0 1.84 115.44 1.38 11.54 85 P 1 = 3P 0 4.03 154.63 1.46 10.56 69 P 1 = 2P 0 9.71 76.06 1.80 6.35 51 P1 = P0 7.75 88.29 1.91 7.0 35

Table 6.8: Average execution time (in seconds) and average number of steps in the resulting plan for di erent distributions in the prevailconditions between zeros and ones. For each h 2 H and each j 2 M, P (fj (h) = 6 u) = 0:3, P1 = P (fj (h) = 1) and P0 = P (fj (h) = 0). In the column \Plan exists" the number of problems (out of 100 problems) where a plan exists is given.

Table 6.8 P (fj (h) = 0) = 0, i.e., the problems actually belong to the SAS(+)PUBS class. In this case no splitting is needed in Algorithm 6.1, and the algorithm is very fast compared to the depth- rst algorithm. When P (fj (h) = 0) is increased the problem becomes more dicult and the average execution time for Algorithm 6.1 increases. This could be compared to the decrease in execution time for the depth- rst search. When the probability increases the number of executable actions in each step decreases and thus the number of problems where no plan exists increases. When, for example, no action can be executed in the initial state, the depth- rst algorithm is of course very fast. Algorithm 6.1 on the other hand, detects that no action can be executed using the deadlock-test in Equation 6.4. Table 6.9 shows how the execution time and the number of steps in the resulting plan depend on how many de ned state variables there are in the prevail-conditions. The probability for fi(h) = 0 and fi(h) = 1 are equal, but the total probability for a de ned state variable varies from 0:2 to 1. From Table 6.9 we see that the execution time for Algorithm 6.1 is almost the same for the six cases shown here. Thus it seems like the execution time does not depend on the total probability of having a de ned value in the prevailconditions, but rather on the distribution between zeros and ones. As before the depth- rst search is very fast when no plan exists.

6.4 Test cases probability

P1 = P0 = 0:1 P1 = P0 = 0:15 P1 = P0 = 0:2 P1 = P0 = 0:3 P1 = P0 = 0:4 P1 = P0 = 0:5

131 Average time Average steps Plan exists Alg. 6.1 Depth- rst Alg. 6.1 Depth- rst % 8.61 210.44 2.4 21.7 60 7.75 88.29 1.91 7.0 35 4.54 20.48 1.38 1.67 21 5.48 3.39 1 1 8 5.49 1.78 1 1 2 5.84 1.54 1 1 1

Table 6.9: Average execution time (in seconds) and average number of steps in the resulting plan for di erent numbers of de ned values in the prevail-conditions. For each h 2 H and j 2 M, P1 = P (fj (h) = 1)

and P0 = P (fj (h) = 0). In the column \Plan exists" the number of problems (out of 100 problems) where a plan exists is given.

Thus, from the tables we see that Algorithm 6.1 is fast and generates a short plan.

132

Planning for the SAS(+) -PUB class

7 Implementation Here we present a tool for creating sequential control schemes [92] implemented in a real-time expert system environment, G2 [42]. The implemented planning system contains algorithms for creating plans in form of GRAFCET charts (see Section 2.1.2) using a fairly general G2 package called GrafcetTool [104] in combination with the algorithms described in Chapters 5 and 6. The previously described algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. Such a plan can then automatically be translated to a GRAFCET chart. Before discussing the planning tool (Section 7.2) we give a short introduction to G2 and GrafcetTool (Section 7.1). A simple example is given in Section 7.3.

7.1 A short introduction to GRAFCET in G2 Using the G2 real-time expert system [42] as a programming platform, Lindskog has designed and implemented GrafcetTool [104], which is a fully graphical tool based on the GRAFCET formalism described in Section 2.1.2. We focuse on realizing plans, but GrafcetTool has a much wider applicability including simulation and modelling of discrete event systems.  Arzen [11, 13] has presented a GRAFCET toolbox Grafchart implemented in G2. He has extended the standard with the possibility of using coloured GRAFCET [4] (compare coloured Petri nets [45, 140]). However, in Grafchart it is not possible to dynamically create new GRAFCET objects, a feature needed when generating new plans. Another possibility would of course be to have a number of pre-de ned GRAFCET objects, but the number needed would soon be very large. Before discussing GrafcetTool we start with a short introduction to G2. 133

134

Implementation

7.1.1 The G2 real-time expert system

G2 is an advanced graphical tool for designing and running real-time expert systems for complex applications that require continuous and \intelligent" monitoring, diagnosis and control [42]. It is aimed at a wide range of realtime applications including process control, robotics, network management and nancial analysis. As of today G2 has more or less established itself as the de facto standard for real-time knowledge-base systems. In G2 the objects are ordered in an object-oriented class hierarchy, where each de ned class can inherit attributes, colors and even icons from its superior class. Using the built-in inference engine, it is possible to reason about the current process state and perform actions through rules that are operating on the objects. G2 heavily rely on graphics and every object is represented by and manipulated through its icon. The objects can also be graphically connected to each other, thereby making G2 suitable for problems that have a graphical representation. As indicated above G2 is intended for real-time applications and supports this in a number of ways, including variables with validity intervals, rule scan intervals and priorities, asynchronous event handling, and more. Another quite important concept is the possibility to activate and deactivate subworkspaces. A workspace is just a kind of window upon which various knowledge components are placed. Workspaces can also be assigned to objects, or in other words, be owned by objects. Such a workspace is called a subworkspace and in a way it represents the internal structure of the object. If a subworkspace is activated during run-time, all its knowledge parts (rules, procedures etc) are executable, but when it is deactivated, none of its knowledge pieces are available. Thus, a partial order on a set of actions, or a plan, can be realized by activating and deactivating object subworkspaces in the order prescribed by the plan. Guaranteeing this order is of course crucial. For a number of reasons (standardization, graphic representation, decomposition possibilities, syntax suitable for compilers) we have chosen to use GRAFCET for this purpose.

7.1.2 GrafcetTool - a GRAFCET implementation

As described in Section 2.1.2, a GRAFCET function chart [45, 84, 104] is mainly composed of steps and transitions. With each step and transition we associate a subworkspace, and upon this we place the action that should be executed in the step, or the transition condition, respectively. Once a step is activated its associated subworkspace also becomes active, and all commands or actions (e.g. open a valve, start a motor) upon this workspace are enabled. Conversely, deactivating a step implies that the step's subworkspace also is deactivated. The change from an active step to an inactive one is determined by the transition located between the steps in question. More precisely, a state change occurs when the so called transition condition placed upon the

7.1 A short introduction to GRAFCET in G2

135

subworkspace of the transition becomes true (e.g. when a sensor is triggered). Actions and transition conditions are speci ed using ordinary G2 rules. Since the GRAFCET standard de nes a number of simple action types (stored, conditional, delayed, time limited) as well as combinations of them, it is necessary to combine several rules to obtain the intended function. A stored action (the action is performed until another rule belonging to a subsequent step executes a reset action) is for instance speci ed by an initial rule with the following structure: initially conclude that () The other action types are somewhat more complicated, so to avoid being too technical let us just refer to [104] for a more thorough treatment concerning these matters. A transition condition rule can be implemented as a scanned rule: when () then start Fire-Transition(this workspace) In the example in Section 7.3 these two rule types are in fact the only ones needed. An example of a GRAFCET chart implemented using GrafcetTool is shown in Figure 7.1. Workspace

Subworkspaces Stored Action (Initial rule) initially conclude that the state of MOTOR1 is STARTED

Transition Condition (Scanned rule) when (the state of SENSOR1 is TRIGGERED) then start Fire-Transition(this workspace)

Figure 7.1: A GRAFCET chart implemented using GrafcetTool.

136

Implementation

7.2 The planning tool Using GrafcetTool as described in the previous section we have designed and implemented a planning tool [92]. The components and the relationships amongst them are depicted in Figure 7.2, and throughout this section we recommend the reader to keep this gure fresh in mind. To begin with, the general GrafcetTool package is t into the framework of the planner. The only change needed in GrafcetTool is the adding of some attributes to the ordinary step de nition. The most important of these new attributes are, no doubt, the pre-, post- and prevail-conditions, which are all implemented as lists. With these alterations the steps may be treated as actions or action types. As described in Section 2.1 there are many reasons for using automatic synthesis of control charts. Since it stresses the modelling of the plant it can be very useful when adding new devices to the plant. If fast enough to allow for on-line planning it can be used for example by an operator supervisor or to simplify error recovery. In such an on-line application the supervisor communicates with the process and the operator, and when needed calls the on-line planner. So far only the planner is implemented. Apart from receiving the current state and the wished-for nal state from the supervisor, the planner must know what the plant looks like. Each action type is therefore implemented as an ordinary GRAFCET step and stored in a database describing the plant. The steps in this database are always inactive and may be viewed as action class de nitions, i.e. action types. The database also contains the state variables, which together with the action types completely describe the plant. All items located in the database are of course the result of the modelling phase, which must be done o -line. Formally the database contains a SAS+-structure hM; S ; Hi where the action types in H are implemented as inactive GRAFCET steps. Using the database the planner computes which class the problem belongs to according to De nitions 3.5, 4.1 and 4.3. The resulting problem class is stored in the database and is used later on when deciding which planning algorithm to use. Having access to the plant model along with the initial and the nal states, the planner creates the necessary actions by copying action type objects. The action instantiations as well as the creation of other GRAFCET elements are entirely handled via special interface procedures found in GrafcetTool. Additionally, the planner produces a partial order on the set of necessary actions specifying the execution order. These two steps are carried out using one of the algorithms described in Chapters 5 and 6. Which of the algorithms that will be used depends on the computed planning problem class. For problems outside these classes an ordinary depth- rst algorithm is used (see Section 2.4.1). To improve execution speed the algorithms are implemented using procedures only.

7.2 The planning tool

137 Verification/ Fault detection

Supervisor Actions

Plant Controller Verifier/Fault detector

From Plant To Plant

PLC 0

X , X

*

Planner ready Model of the Plant Action types

Planner ...

State variables

a1 a2

X = (X1 , ... , X n )

Partial Order

a3

a4

Partial Order --> GRAFCET

GrafcetTool

a1 a2

a3

PLC-code

a4

Figure 7.2: Sketch over the planning system. Grey boxes indicate

modules already implemented. Thick arrows show the data ow between implemented modules, whereas thin arrows show the data ow between modules not yet implemented. Grey arrows illustrate modules that are using GrafcetTool procedures.

138

Implementation

Next the partial order is converted to a GRAFCET function chart. Since a plan is described by a partial order on a set of actions the resulting chart lacks alternative branches. However, parallel branches will be used whenever possible. To get a syntactically correct chart the steps must be separated by transitions. For that reason we automatically create a transition with a transition condition (a scanned rule) for each step. Observe that the transitions do not belong to the database. The transition condition is derived as a combination of the post-condition of the current step and the pre-conditions of the steps immediately following the current step. The latter part of this condition ensures that an action is only performed when its pre-condition is ful lled, resulting in simpli ed fault detection. This can of course be modi ed using only the post-condition of the current step as the transition condition. Another syntactically motivated move is the addition of empty steps, which will not change the state. Empty steps are only situated immediately before convergence points of parallel branches, and their purpose is to guarantee a correct synchronization. In Figure 7.2 we have colored the empty steps grey. After adding syntactically necessary objects the graph is completed. Again, this is handled through one of the procedures implemented in GrafcetTool. When it is called, all steps and transitions belonging to the plan are placed on a workspace so that the structure of the graph is clearly recognized. Thereafter, parallel branches are positioned and the elements are connected to each other. Finally, it is possible to translate the outcoming graph to ordinary PLCcode. Thus the cycle is closed and the result is an integrated system able to perform planning in reality.

7.3 Example In this section we will use a simple example to illustrate how the planning system works. The example is a puzzle according to Figure 7.3. As can be seen from the gure the puzzle consists of 16 squares. The squares can be either black or white, and the available action types change the color of each square from black to white or vice versa. For each square we introduce a state variable xij where i is the row and j the column the square is placed in. The state variable xij is interpreted as: ( the color of the square is white xij = 10 ifif the color of the square is black and thus Sij = f0; 1g for 1  i; j  4. There are two action types for changing the color of each square. For the square associated with the state variable xij the action types are called White-To-Blackij and Black-To-White ij . From now on these will be abbreviated as W2Bij and B2Wij , respectively. For 1  i; j  4, the pre-condition for the action type W2Bij is

7.3 Example

139

Figure 7.3: The puzzle.

and the post-condition is

bij (W2Bij ) bkl (W2Bij ) eij (W2Bij ) ekl (W2Bij )

= 0 = u kl 6= ij

= 1 = u kl 6= ij where 1  k; l  4. Notice that no other state variable than xij will be changed when performing an action of this type. In the same way the pre-condition for the action type B2Wij is bij (B2Wij ) = 1 bkl (B2Wij ) = u kl 6= ij and the post-condition is eij (B2Wij ) = 0 ekl (B2Wij ) = u kl 6= ij where k; l = 1; 2; 3; 4. An action can only be performed when both the precondition and the prevail-condition are satis ed. The prevail-conditions for W2Bij are given in Figure 7.4, and for B2Wij in Figure 7.5. The gures should be interpreted as follows. For each action type a partial state of the puzzle is shown corresponding to the prevail-condition for the action type. The square which will be a ected by the action is marked with a cross. A black cross on a white square denotes that the color is changed from white to black (W2Bij ), and a white cross on a black square denotes the opposite (B2Wij ). Thus the state in the upper left corner in Figure 7.4 shows the prevail-condition for the action type W2B11. A black square means that the corresponding state variable should be 1, a white square means that it should be 0 and a grey square means that we have no demands on the value, i.e, the value may be either 0 or 1 and hence the corresponding state variable is unde ned (xkl = u) in the

140

Implementation

prevail-condition. For example, the prevail-condition for W2B11 is x21 = 1 and xkl = u if kl 6= 21.

Figure 7.4: The prevail-conditions for the action types W2Bij . The square which will be a ected by the action is marked with a cross. A black or white square means that the value in the prevail-condition for the corresponding state variable should be 1 or 0 respectively. A grey square means that the corresponding state variable is u, i.e., there is no demand on its value. The initial and the nal states are given in Figure 7.6. Note that even if it seems like the nal state can be achieved by performing one single action this is not true since we must take the prevail-conditions into account. It follows from De nitions 4.1 and 4.3 that the planning problem belongs to the SAS-PUBS class, and Algorithm 5.1 can be used. In Figure 7.7 a minimal plan of maximal parallelity created using Algorithm 5.1 is shown. To avoid making the gure too complicated we have excluded the action labels and only show the action types. This plan can be automatically translated to a maximally parallel GRAFCET chart according to Section 7.2, and the result is shown in Figure 7.8. Each step corresponds to an action as described in Section 7.2 except the ones colored grey, which are empty steps. The example given above is only chosen to be intuitive and small enough to illustrate the ideas behind the planning tool. Yet, for many real applications, where on-line replanning is required, this kind of tool can be successfully used.

7.3 Example

141

Figure 7.5: The prevail-conditions for the action types B2Wij . The square which will be a ected by the action is marked with a cross. A black or white square means that the value in the prevail-condition for the corresponding state variable should be 1 or 0 respectively. A grey square means that the corresponding state variable is u, i.e., there is no demand on its value.

Figure 7.6: The initial and the nal states for the puzzle.

142

Implementation

W2B21

W2B11

W2B12

W2B44

W2B31 W2B22

B2W11 B2W12

B2W21

B2W44 W2B32

B2W33

B2W31

B2W32

B2W22

Figure 7.7: A minimal and maximally parallel plan solving the puzzle

problem. The action labels are omitted and only the action types are shown. Transitive arcs are omitted.

7.3 Example

143

Workspace

W2B-21 initially conclude that X21 = 1

Transition Condition (after W2B-21) when (X21 = 1 and X31 = 0 and X11 = 0) then start Fire-Transition(this workspace) W2B44

W2B21

W2B11

W2B31

W2B12

W2B22

B2W11

B2W12

B2W44 B2W21

W2B32

B2W33

B2W31

B2W32

B2W22

Figure 7.8: A minimal and maximally parallel GRAFCET chart solving the puzzle problem, translated from the plan given in Figure 7.7. The steps are labelled with the corresponding action type. The grey steps are empty steps.

144

Implementation

8 Reachability for the SAS(+) -PUBS class In classical control theory the two concepts controllability and reachanbility are very closely realted. For DEDS the situation is slightly di erent. It is natural to de ne reachability in terms of the state graph [63, 116, 121]. A state x1 is reachable from some other state x2 if there is a path from x2 to x1. As described in Section 2.4.2 a DEDS can be described by the language generated by the system, and controllability can be de ned in terms of the generated language [99, 131, 165]. We focus on reachability, and give a reachability criterion for planning problems in the SAS(+)-PUBS class (see De nition 4.3). The criterion presented here is based on the algorithm presented in Section 5.1 and hence the state graph does not have to be constructed. This is an advantage since the number of states in the state graph normally is exponentially larger than the number of available actions. The complexity of checking if the criterion is ful lled increases polynomially with the number of state variables. In Section 8.1 we formally de ne reachability and we give a reachability criterion for the SAS(+)-PUBS class. A simple example is then given in Section 8.2.

8.1 Reachability criterion As stated above reachability may be de ned in terms of the state graph [63, 116, 121]. Given a SAS(+)-structure hM; S ; Hi, we say that a state x0 2 S is reachable from some other state x 2 M if there is a path in the state graph from x to x0. The formal de nition of reachability is given in De nition 8.1.

De nition 8.1 Given a SAS+ -structure hM; S ; Hi, let x; x0 2 S be any states. Then the pair (x; x0) is reachable if there exists a plan hA; i from x to x0 such that for all a 2 A, type (a) 2 H. If all pairs (x; x0) where x; x0 2 S are reachable, then hM; S ; Hi is reachable. 2 145

146

Reachability for the SAS(+)-PUBS class

Note that the order between x and x0 is important. That (x; x0) is reachable does not imply that (x0; x) is reachable. Reachability as we de ne it here is a very strong concept. Passino and Antsaklis [124] refer to this as completely controllable. We require that there is a path in the state graph from x to x0 for every pair x; x0 2 S . In other words we require that the state graph is be strongly connected, see for example [123]. There exists algorithms for deciding in O(k) time if a graph is strongly connected, where k is the number of nodes in the graph. As stated before, for a SAS(+)-PUBS-structure the state variables are binary and hence the number of nodes in the state graph is 2n where n is the number of state variables. Thus the complexity of existing algorithms increases exponentially with the number of state variables. However, for the SAS(+)-PUBS class of planning problems we can nd a criterion whose complexity increases polynomially with the number of state variables, in fact the complexity is O(n3). This will be illustrated later on. The reachability criterion is based on the relation precedes () de ned in De nition 5.3. Here we consider this relation de ned for the set of available action types. In any minimal plan containing the two actions a1 and a2, if a1a2 then a1 should be performed before a2. Furthermore, if a1a2 then it follows from the de nition that type (a1)type (a2). Using this we can develop a reachability criterion. The system is reachable, i.e., there is a path between any two states in the state graph, if the relation  on the set H is a partial order, and there is no \missing" action type. More speci cally, the criterion says that the SAS+-structure hM; S ; Hi is reachable if and only if the relation H is a partial order and jHj = 2n, where n = jMj, i.e., n is the number of state variables. This is stated in Theorem 8.6 and the proof is based on four lemmas. Lemma 8.2 proves that hM; S ; Hi cannot be reachable if jHj < 2n, because then there is some `missing' action type. In Lemma 8.3 we show that if jHj = 2n, then the set of necessary and sucient actions (x; x0) as de ned in De nition 5.1 exists for any states x; x0 2 S . Lemma 8.4 shows that hM; S ; Hi is reachable if jHj = 2n and H is a partial order, and Lemma 8.5 shows the opposite, i.e., if H is not a partial order then hM; S ; Hi is not reachable. Lemma 8.2 Suppose hM; S ; Hi belongs to the SAS(+)-PUBS class. If jHj < 2n then there exists two states x; x0 2 S such that there is no plan from x to x0, i.e., hM; S ; Hi is not reachable. 2 Proof: Suppose jHj < 2n. Because of unariness and binariness there must exist x0 2 S and i 2 M such that there is no action type h 2 H such that ei (h) = x0i . Let x 2 S be any state such that xi 6= x0i . Then obviously there is no action transforming xi into x0i and hence there cannot be a plan from x to x0 . 2

8.1 Reachability criterion

147

Lemma 8.3 Suppose hM; S ; Hi belongs to the SAS(+)-PUBS class, and that jHj = 2n. Then the set of necessary actions (x; x0) de ned in De nition 5.1 exists for any states x; x0 2 S . 2 Proof: The set of action types H is unary and post-unique, and thus there must be exactly two action types in H a ecting each state variable, i.e., for all i 2 M, jH[i]j = 2. Because of post-uniqueness and binariness we must have that if H[i] = fh1; h2g then ei(h1) = 6 ei(h2), bi(h1) v ei(h2) and bi(h2) v ei(h1). This means that for any state variable xi with any value we can always nd an action a such that type (a) 2 H and ei(a) = xi. Hence the set (x; x0) de ned in De nition 5.1 exists. 2

Lemma 8.4 Suppose ⟨M, S, H⟩ belongs to the SAS(+)-PUBS class, that |H| = 2n and that ≺_H as defined in Definition 5.3 is a partial order. Then ⟨M, S, H⟩ is reachable. □

Proof: Let x, x′ ∈ S be any states. We want to show that there is a plan from x to x′. According to Theorem 5.9 there is a plan from x to x′ if and only if the set of necessary actions (x, x′) exists and the relation ≺ as defined in Definition 5.3 is a partial order. It follows from Lemma 8.3 that the set (x, x′) exists. It only remains to show that ≺ is a partial order. By definition ≺ is transitive, and from the proof of Lemma C.7 we know that if ≺ is anti-symmetric, then it must be irreflexive. Hence we only need to show that it is anti-symmetric. Let A be a set of actions containing one action of each type in H. Then for all a₁, a₂ ∈ A we get that a₁ ≺_A a₂ if and only if type(a₁) ≺_H type(a₂). This follows immediately from Definition 5.3. Hence ≺_A is a partial order if and only if ≺_H is a partial order. Obviously ≺ ⊆ ≺_A. Suppose ≺ is not anti-symmetric. Then there exist a₁, a₂ ∈ (x, x′) such that a₁ ≺ a₂ and a₂ ≺ a₁. But ≺ ⊆ ≺_A gives that a₁ ≺_A a₂ and a₂ ≺_A a₁, which is a contradiction because ≺_A is a partial order. Hence ≺ is a partial order and there exists a plan from x to x′. □

Lemma 8.5 Suppose ⟨M, S, H⟩ belongs to the SAS(+)-PUBS class, |H| = 2n and ≺_H is not a partial order. Then ⟨M, S, H⟩ is not reachable. □

Proof: We show that if ≺_H is not a partial order it is always possible to find states x, x′ ∈ S such that there is no plan from x to x′. According to Theorem 5.9 a plan from x to x′ exists if and only if the set (x, x′) defined in Definition 5.1 exists and the relation ≺ as defined in Definition 5.3 is a partial order. It follows from Lemma 8.3 that the set (x, x′) exists for any states x, x′ ∈ S because |H| = 2n. Thus we must show that if ≺_H is not a partial order there exist states x, x′ ∈ S such that ≺ is not a partial order. The relation ≺_H is transitive by definition, and thus if it is not a partial order it is either not anti-symmetric, or anti-symmetric but not irreflexive.

1. Suppose ≺_H is anti-symmetric but not irreflexive. Then there exist h ∈ H and i ∈ M such that either e_i(h) = f_i(h) ≠ u, or u ≠ f_i(h) ≠ e_i(h) ≠ u. This is a contradiction according to (S4) in Definition 3.5.

2. Suppose ≺_H is not anti-symmetric. Then there must be a loop in the relation graph such that the action types in this loop are directly related to each other, i.e., there exist h₁, h₂, …, h_k ∈ H such that

   h₁ is directly related to h₂, h₂ to h₃, …, h_{k-1} to h_k, and h_k to h₁,   (8.1)

where `directly related' means related by the union of the relations `enables' and `disables' from Definition 5.3. For simplicity we drop the subscript H on these relations in the remainder of the proof. There are three cases depending on which of the two relations occur in Equation 8.1.

(a) Suppose h₁ enables h₂, h₂ enables h₃, …, h_{k-1} enables h_k, and h_k enables h₁. There are two cases depending on whether or not there exist two action types in the loop which affect the same state variable.

 i. Suppose there exist two action types h_l, h_m ∈ H in such a loop which affect the same state variable, i.e., h_l, h_m ∈ H[i] for some i ∈ M and 1 ≤ l, m ≤ k. From Definition 5.3 it follows that e_{i_1}(h₁) = f_{i_1}(h₂), …, e_{i_l}(h_l) = f_{i_l}(h_{l+1}), …, e_{i_m}(h_m) = f_{i_m}(h_{m+1}), …, e_{i_k}(h_k) = f_{i_k}(h₁), where no state variable equals u. Because H is unary and h_l, h_m ∈ H[i] we get i = i_l = i_m. Post-uniqueness gives e_i(h_l) ≠ e_i(h_m) and hence u ≠ f_i(h_{l+1}) ≠ f_i(h_{m+1}) ≠ u, which is a contradiction because H is single-valued.

 ii. Suppose h₁, h₂, …, h_k in Equation 8.1 all affect different state variables. We get e_{i_1}(h₁) = f_{i_1}(h₂), e_{i_2}(h₂) = f_{i_2}(h₃), …, e_{i_k}(h_k) = f_{i_k}(h₁), where no state variable equals u. Let x ∈ S be any state such that e(h₁) ⊔ e(h₂) ⊔ … ⊔ e(h_k) ⊑ x, and let x′ ∈ S be any state such that f_{i_l}(h_l) = x′_{i_l} for 1 ≤ l ≤ k. Then b(h₁) ⊔ b(h₂) ⊔ … ⊔ b(h_k) ⊑ x′.


Since all the actions affect different state variables such states must exist. Because H is post-unique there exist actions a₁, a₂, …, a_k ∈ (x′, x) such that type(a_l) = h_l for 1 ≤ l ≤ k according to Definition 5.1. Obviously we get a loop in ≺, i.e., it is not anti-symmetric and hence not a partial order.

(b) Suppose h₁ disables h₂, h₂ disables h₃, …, h_{k-1} disables h_k, and h_k disables h₁. There are two cases depending on whether or not there exist two action types in the loop which affect the same state variable.

 i. Suppose there exist two action types h_l, h_m ∈ H in such a loop which affect the same state variable, i.e., h_l, h_m ∈ H[i] for some i ∈ M and 1 ≤ l, m ≤ k. From Definition 5.3 it follows that f_{i_1}(h₁) ≠ e_{i_1}(h₂), …, f_{i_{l-1}}(h_{l-1}) ≠ e_{i_{l-1}}(h_l), …, f_{i_{m-1}}(h_{m-1}) ≠ e_{i_{m-1}}(h_m), …, f_{i_k}(h_k) ≠ e_{i_k}(h₁), where no state variable equals u. Because H is unary and h_l, h_m ∈ H[i] we get i = i_{l-1} = i_{m-1}. Post-uniqueness and binary state variables give e_i(h_l) ≠ e_i(h_m) and hence u ≠ f_i(h_{l-1}) ≠ f_i(h_{m-1}) ≠ u, which is a contradiction because H is single-valued.

 ii. Suppose h₁, h₂, …, h_k in Equation 8.1 all affect different state variables. We get f_{i_1}(h₁) ≠ e_{i_1}(h₂), f_{i_2}(h₂) ≠ e_{i_2}(h₃), …, f_{i_k}(h_k) ≠ e_{i_k}(h₁), where no state variable equals u. Let x, x′ ∈ S be defined as above. Then it follows from the same argument as in 2(a)ii that ≺ is not anti-symmetric and hence not a partial order.

(c) Finally, suppose that the chain in Equation 8.1 is of neither of the two forms above, i.e., we have neither h₁ enables h₂, …, h_k enables h₁, nor h₁ disables h₂, …, h_k disables h₁. Then there exists an l, 1 ≤ l ≤ k, such that h_{l-1} disables h_l and h_l enables h_{l+1}. According to the definition it follows that there exists i ∈ M such that f_i(h_{l-1}) ⋢ e_i(h_l) ≠ u and e_i(h_l) = f_i(h_{l+1}) ≠ u. But then u ≠ f_i(h_{l-1}) ≠ e_i(h_l) = f_i(h_{l+1}) ≠ u, which is a contradiction because H is single-valued, so this case cannot occur.

Thus there exist states x, x′ ∈ S such that there is no plan from x to x′ and hence ⟨M, S, H⟩ is not reachable.

□

We can now state our main theorem, giving us a reachability criterion.

Theorem 8.6 Suppose ⟨M, S, H⟩ belongs to the SAS(+)-PUBS class. Then ⟨M, S, H⟩ is reachable if and only if |H| = 2n = 2|M| and ≺_H as defined in Definition 5.3 is a partial order.

Proof: Follows immediately from Lemmas 8.2, 8.4 and 8.5.

□

Checking whether a SAS(+)-PUBS system ⟨M, S, H⟩ is reachable can obviously be done using part of Algorithm 5.1 for finding minimal plans. The algorithm is modified in the following way: the computation of the set of necessary and sufficient actions (x⁰, x⋆) is left out, the set of action types H is considered instead of the set of actions (x⁰, x⋆) when constructing the ordering relation, and a test on the number of elements in H is added.

Algorithm 8.1
Input: H, a set of action types.
Output: t, a Boolean.

procedure CheckReachability(M, H);
  H : set of action types;
var
  i : state variable index;
  h, h′ : action type;
  r : relation on the set H;
begin {CheckReachability}
  if |H| ≠ 2·|M| then
    fail
  endif;
  r := ∅;
  {Computation of `precedes'}
  for h ∈ H do
    for h′ ∈ H do
      for i ∈ M do
        {Computation of `enables'}
        if e_i(h) = f_i(h′) ≠ u then
          Order(h, h′, r);
        endif;
        {Computation of `disables'}
        if u ≠ f_i(h) ⋢ e_i(h′) ≠ u then
          Order(h, h′, r);
        endif;
      endfor;
    endfor;
  endfor;
  if r is acyclic then
    return true;
  else
    return false;
  endif;
end; {CheckReachability}

We do not compute the transitive closure since this is not of any practical interest. This can of course be added if so desired. □

It is easily realized that the algorithm above is correct (Theorem 8.7), and that its complexity is polynomial in the number of state variables (Theorem 8.8).
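To make the procedure concrete, the following Python sketch mirrors Algorithm 8.1. The encoding of an action type as a dictionary with fields "e" and "f", mapping a state variable index to its required binary value and leaving undefined variables absent (playing the role of u), is an assumption made for this sketch and not the thesis's notation; the acyclicity test replaces the call to Order with an explicit Kahn-style pass.

from collections import defaultdict, deque

def check_reachability(M, H):
    """Sketch of Algorithm 8.1. M is a collection of state variable indices and
    H is a list of action-type records {"e": {...}, "f": {...}} as described in
    the lead-in. Returns True iff the criterion of Theorem 8.6 holds."""
    if len(H) != 2 * len(M):
        return False                      # some action type is "missing"

    # Build the relation `precedes' on H as a set of ordered pairs of indices.
    edges = defaultdict(set)
    for a, h in enumerate(H):
        for b, h2 in enumerate(H):
            for i in M:
                e_h, f_h = h["e"].get(i), h["f"].get(i)
                e_h2, f_h2 = h2["e"].get(i), h2["f"].get(i)
                # `enables': e_i(h) = f_i(h') != u
                if e_h is not None and e_h == f_h2:
                    edges[a].add(b)
                # `disables': u != f_i(h), e_i(h') != u and f_i(h) does not
                # subsume e_i(h'); with binary domains this means inequality.
                if f_h is not None and e_h2 is not None and f_h != e_h2:
                    edges[a].add(b)

    # The criterion holds iff the relation is acyclic (Kahn's algorithm).
    indegree = {a: 0 for a in range(len(H))}
    for a in edges:
        for b in edges[a]:
            indegree[b] += 1
    queue = deque(a for a, d in indegree.items() if d == 0)
    visited = 0
    while queue:
        a = queue.popleft()
        visited += 1
        for b in edges[a]:
            indegree[b] -= 1
            if indegree[b] == 0:
                queue.append(b)
    return visited == len(H)

As in the algorithm, the transitive closure of the constructed relation is never computed; only acyclicity is tested, and the three nested loops over H, H and M give the O(|M|³) behaviour of Theorem 8.8, since |H| = 2|M|.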

Theorem 8.7 Given a SAS(+)-PUBS structure ⟨M, S, H⟩, Algorithm 8.1 returns true if and only if ⟨M, S, H⟩ is reachable. □

Proof: Immediate from Algorithms 5.1 and 8.1, and Theorems 5.20 and 8.6.

2

Theorem 8.8 Given a SAS(+)-PUBS structure ⟨M, S, H⟩, the worst-case execution time of Algorithm 8.1 is O(|M|³). □

Proof: Immediate from Theorem 5.22.

2

8.2 Examples

In this section we apply the reachability criterion to the tunnel example presented in Section 3.4. The example is modified in two ways to illustrate what happens if the criterion given in Theorem 8.6 does not hold.


Example 8.1 Consider Example 3.4. As stated in Example 4.1 the problem belongs to the SAS-PUBS class, and the criterion in Theorem 8.6 can be used. We see immediately that |H| = 6 = 2·n, so the first part of the criterion given in Theorem 8.6 is fulfilled. The relation ≺_H is given in Figure 8.1.


Figure 8.1: The relation `precedes' (≺_H) on the set H for the tunnel example (Example 8.1). Transitive arcs are omitted.

We see that ≺_H is a partial order, and it follows from Theorem 8.6 that the system is reachable. The state graph is given in Figure 3.4, and as expected there is a path from any state x to any state x′. □

Example 8.2 To illustrate what happens when the criterion in Definition 8.1 is not satisfied we modify Example 8.1. Suppose S is as in Example 8.1 but that the action type Off2 is removed from the set H. Thus H = {On1, Off1, On2, On3, Off3}, where the action types are defined according to Table 3.2. The problem is still in the SAS-PUBS class, but now |H| = 5 < 2·n = 6 and hence the system is not reachable according to Theorem 8.6. The state graph for this case is given in Figure 8.2. We see that if x₂ = 1 we cannot change its value because of the "missing" action type. □



Figure 8.2: The state graph for the modified tunnel example in Example 8.2.


Example 8.3 Let us modify Example 8.1 in yet another way. Suppose S and H are as defined in Example 8.1, but that the prevail-condition for the action type On2 is modified so that f(On2) = (1, u, 1). The system is still in the SAS-PUBS class. The first part of the criterion in Theorem 8.6 is satisfied according to Example 8.1. For the second part we get the relation ≺_H as in Figure 8.3.


Figure 8.3: The relation `precedes' (≺_H) on the set H for the modified tunnel example (Example 8.3). Transitive arcs are omitted.

We see that ≺_H is not anti-symmetric and hence it is not a partial order, so the system is not reachable according to Theorem 8.6. In Figure 8.4 the state graph for this modification is given. It is easily seen that, starting in either of the two states (0, 0, 0) and (1, 0, 0), it is not possible to reach any other states. □



Figure 8.4: The state graph for the modified tunnel example (Example 8.3).


9 Conclusions

Automatic synthesis of control schemes is in general a complex problem. The complexity of automatic synthesis using a method that will work for any given sequential control problem increases exponentially with the number of state variables. Yet it is important to analyze sequential control problems formally, even if a general, effective algorithm cannot be found. Doing this we may get an understanding of the differences between simple and difficult planning problems. Once we have developed effective algorithms for a class of simple problems, we could use ad hoc approximations for problems that are close to this class. This is, for example, done in system theory where non-linear differential equations are approximated by linear differential equations.

The approach proposed in this thesis is in accordance with the classical control theory paradigm, i.e., to first spend some effort on modelling the plant. The model is then used to automatically construct, or synthesize, the control scheme, i.e., the plan. A model-based approach has many advantages. If the plant is modified after the control scheme has been developed, only the plant model must be modified. Additionally, a system that automatically creates control schemes based on models can be used on-line if it is fast enough. Thus such a system can be used for on-line operator supervision, and for on-line construction of error recovery and restart procedures.

We have presented fast algorithms for two restricted classes of planning problems, called the SAS(+)-PUBS and the SAS-PUS class, respectively. These algorithms are proven to be sound, i.e., the returned plan actually solves the stated problem, and complete, i.e., they always return a solution if any plan exists. Furthermore, their complexity increases polynomially with the number of state variables. It should also be pointed out that the SAS(+)-PUBS algorithm is sound even for SAS(+)-PUB planning problems. This means that if the algorithm returns a plan, then we know that it is correct, but if the algorithm fails, nothing can be said.


Based on the SAS(+)-PUBS algorithm we have also presented an algorithm solving SAS(+)-PUB problems. The SAS(+)-PUB class is proven to be intractable (problems with exponentially sized solutions exist) and an algorithm with polynomial worst-case complexity cannot be constructed. Our algorithm is based on splitting the original problem into a number of simpler problems that can each be solved using the SAS(+)-PUBS algorithm, and we present heuristics to guide the splitting procedure. If the splitting is not done in the right way, it can result in a lot of time-consuming backtracking. Additionally, in a specific application it might be possible to develop a strategy to choose the splits so that the number of splits is minimized. This algorithm is proven to be sound.

It must be admitted that the class of SAS(+)-PUBS and SAS-PUS planning problems is a small class, and in a sense it contains "simple" problems. Nevertheless, important insights are gained since the algorithms presented here are based on theoretical considerations. Additionally, we believe many real-world applications can be modelled as, or approximately modelled as, SAS(+)-PUB planning problems. Based on the SAS(+)-PUBS algorithm, we have developed a polynomial time reachability criterion for this class of planning problems.

To illustrate the ideas covered in this thesis we have implemented a planning tool. The input to the tool is the initial and the final states and a database containing the plant model, i.e., the available action types and the state information. Using one of the algorithms presented here, a plan is constructed and automatically translated to a GRAFCET chart. Since a GRAFCET chart can be translated to ordinary executable PLC code, such a planning tool can be the basis for automatic code generation in real industrial applications.

A Notations

The following notations, some of which are non-standard, are used in the thesis:

∀a ∈ A (p(a))     For all elements a in the set A, p(a) is true.
∃a ∈ A (p(a))     There exists an element a in the set A such that p(a) is true.
∧                 Logical and, or the lattice operator meet.
∨                 Logical or, or the lattice operator join.
⊆                 Subset.
A − B             If A and B are sets then A − B = {x | x ∈ A and x ∉ B}.
|A|               The cardinality (i.e., the number of elements) of the set A.
M                 The set of state variable indices.
S_i               The domain for the ith state variable.
S_i^+             The extended domain for the ith state variable, S_i^+ = S_i ∪ {u_i, k_i}.
S                 The total state space, S = S_{i1} × S_{i2} × ... × S_{in}, where i1, ..., in is an enumeration of M.
S^+               The partial state space, S^+ = S_{i1}^+ × S_{i2}^+ × ... × S_{in}^+, where i1, ..., in is an enumeration of M.
x ∈ S             A total state.
x⁰                The initial state.
x⋆                The final state.
x_i               The ith state variable for the state x.
dim(x)            The set of all state variable indices i ∈ M such that x_i ≠ u_i.
x_i ⊑_i x′_i      x′_i is more informative than x_i.
x ⊑ x′            x′ is more informative than x.
⊔                 The lattice operator join for the lattice defined by ⊑.
⊓                 The lattice operator meet for the lattice defined by ⊑.
H                 The set of action types.
L                 The set of action labels.
label(a)          The label for the action a. If a = ⟨l, h⟩ then label(a) = l, where l ∈ L.
type(a)           The type for the action a. If a = ⟨l, h⟩ then type(a) = h, where h ∈ H.
b(a)              The pre-condition for the action (or the action type) a.
b_i(a)            The ith state variable in b(a).
e(a)              The post-condition for the action (or the action type) a.
e_i(a)            The ith state variable in e(a).
f(a)              The prevail-condition for the action (or the action type) a.
f_i(a)            The ith state variable in f(a).
[i]               Applied to a set of actions: the subset of actions in the set which affect the ith state variable.
s ⟼ s′           The state s can be transformed into the state s′ by performing the actions in the given set in the given total order.
⟨·, ·⟩            A plan from x⁰ to x⋆: a set of actions together with a partial order defined on that set.
ā                 The inverse of the action a.
P_0               The set of primarily necessary actions (SAS(+)-PUB).
P̃                 The set of secondarily necessary actions (SAS(+)-PUB).
(x⁰, x⋆)          The set of necessary actions (SAS(+)-PUB); (x⁰, x⋆) = P_0 ∪ P̃.
`enables'         The relation `enables' defined on the set of necessary actions.
`disables'        The relation `disables' (actually `inverse disables') defined on the same set.
                  The union of the relations `enables' and `disables' defined on the same set.
≺                 The relation `precedes' defined on the same set.
⟨·, ·⟩            A minimal and maximally parallel plan (SAS-PUS).
                  Greek letters with a bar denote an i-chain.
≺⁺                The transitive closure of the relation ≺.
≺⁻                The reduction of the relation ≺.
f(n) ∈ O(g(n))    There exists a constant c > 0 such that, for some m > 0 and all n > m, f(n) ≤ c·g(n).
f(n) ∈ Ω(g(n))    There exists a constant c > 0 such that, for some m > 0 and all n > m, f(n) ≥ c·g(n).
f(n) ∈ Θ(g(n))    There exist constants c, c′ > 0 such that, for some m > 0 and all n > m, c·g(n) ≤ f(n) ≤ c′·g(n).

B A brief introduction to the theory of relations

In this appendix a brief introduction to the theory of relations is given. The interested reader can find a more thorough presentation in, for example, [61].

Definition B.1 A (binary) relation ρ defined on a set A is any set of ordered pairs ⟨a₁, a₂⟩, a₁, a₂ ∈ A. If ⟨a₁, a₂⟩ ∈ ρ then a₁ is related to a₂ under ρ, and for this we introduce the notation a₁ ρ a₂.

2

A relation ρ on A can be displayed by drawing a directed graph. The graph consists of as many vertices as there are elements in A, and there is an arrow from vertex a_i to vertex a_j if and only if a_i ρ a_j. When A is a finite set, a relation ρ on A can be represented by a |A| × |A| matrix called a relation matrix, where |A| denotes the number of elements in the set A. The element in position (i, j) equals 1 if the ith element in the set is related to the jth element in A; otherwise it is zero.

Example B.1 Let ρ be defined on the set A = {1, 2, 3, 4} in the following way:

ρ = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 4), (3, 2)}.

The relation matrix of ρ is given by

    [ 1 1 1 0 ]
    [ 1 0 0 1 ]
    [ 0 1 0 0 ]
    [ 0 0 0 0 ]

and the graph for the relation ρ is given in Figure B.1.

2



Figure B.1: Relation graph for the relation ρ on the set A in Example B.1.

Definition B.2 A relation ρ on A is
- reflexive if for all a ∈ A, a ρ a,
- irreflexive if there is no a ∈ A such that a ρ a,
- antisymmetric if for all a₁, a₂ ∈ A, a₁ ρ a₂ and a₂ ρ a₁ ⇒ a₁ = a₂,
- transitive if for all a₁, a₂, a₃ ∈ A, a₁ ρ a₂ and a₂ ρ a₃ ⇒ a₁ ρ a₃.
A partial order on a set A is a relation which is irreflexive, antisymmetric and transitive. A reflexive partial order on a set A is a relation which is reflexive, antisymmetric and transitive. □

A partial order can be defined in two different ways, with or without reflexivity. The definition given above agrees with the definition in [113]. These definitions can be illustrated in a directed graph. When ρ is reflexive there is an arrow from each vertex in the directed graph back to itself. This is shown in Figure B.2a. When ρ is antisymmetric and there is an arrow from vertex a_i to vertex a_j, there must be no arrow from a_j to a_i. The excluded case is shown in Figure B.2b. When ρ is transitive and there is an arrow from vertex a₁ to vertex a₂ and an arrow from a₂ to a₃, there is also an arrow from vertex a₁ to vertex a₃. This is shown in Figure B.2c.


Figure B.2: (a) A reflexive relation. (b) Excluded if the relation is antisymmetric. (c) A transitive relation.
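The properties in Definition B.2 are easy to test mechanically when the relation is given as a finite set of ordered pairs. The sketch below checks them directly from the definitions; the function names and the set-of-pairs representation are assumptions made for the illustration.

def is_reflexive(A, rho):
    return all((a, a) in rho for a in A)

def is_irreflexive(A, rho):
    return not any((a, a) in rho for a in A)

def is_antisymmetric(rho):
    return all(not ((b, a) in rho and a != b) for (a, b) in rho)

def is_transitive(rho):
    return all((a, c) in rho
               for (a, b) in rho
               for (b2, c) in rho if b == b2)

def is_partial_order(A, rho):
    # Partial order in the sense used here: irreflexive, antisymmetric, transitive.
    return is_irreflexive(A, rho) and is_antisymmetric(rho) and is_transitive(rho)

# The relation of Example B.1 contains (1, 1), so it is not irreflexive, and it
# contains both (1, 2) and (2, 1), so it is not antisymmetric; hence it is not
# a partial order.
rho = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 4), (3, 2)}
A = {1, 2, 3, 4}
assert not is_partial_order(A, rho)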

Example B.2 Let the relation `divides' be defined on the set A = {1, 3, 5, 15, 25}, i.e., a₁ `divides' a₂ if a₁·k = a₂ for some integer k. The relation `divides' is a reflexive partial order. The relation graph for `divides' is given in Figure B.3.


Figure B.3: Relation graph for the relation `divides' defined on the set A in Example B.2. Transitive arcs are omitted.

□

A common example of a reflexive partial order is the relation `≤' on the set of natural numbers. This relation is in fact a total order, since any pair of elements are related to each other: either a₁ ≤ a₂ or a₂ ≤ a₁. The definition of a total order is given in Definition B.3.

Definition B.3 A total order (linear order) is a partial order ≼ on a set A such that for all a₁, a₂ ∈ A, a₁ ≼ a₂ or a₂ ≼ a₁. □

Since a total order is a partial order it is antisymmetric and, hence, if ≼ is a total order we cannot have both a₁ ≼ a₂ and a₂ ≼ a₁. In a relation graph a total order is a straight line, as seen in Example B.3.

Example B.3 Consider the relation `≤' defined on the set A = {1, 2, 3}. The relation graph is given in Figure B.4.


Figure B.4: Relation graph for the total order `≤' on the set A = {1, 2, 3}.

Note that in a total order every pair of elements in the set A are related.


Definition B.4 Let ⟨M, ≼⟩ be a partially ordered set. An element a ∈ M is called the join of a₁ and a₂, or the least upper bound (lub) of a₁ and a₂, if a₁ ≼ a and a₂ ≼ a, and if for any a′ ∈ M, a₁ ≼ a′ and a₂ ≼ a′ ⇒ a ≼ a′. An element a ∈ M is called the meet of a₁ and a₂, or the greatest lower bound (glb) of a₁ and a₂, if a ≼ a₁ and a ≼ a₂, and if for any a′ ∈ M, a′ ≼ a₁ and a′ ≼ a₂ ⇒ a′ ≼ a. The join of a₁ and a₂ is denoted a₁ ∨ a₂, and the meet is denoted a₁ ∧ a₂. □

The definitions of least upper bound and greatest lower bound are used to define a lattice.

Definition B.5 A lattice is a partially ordered set ⟨M, ≼⟩ in which every pair of elements has both a least upper bound (lub) and a greatest lower bound (glb).

2

Any partially ordered set ⟨M, ≼⟩ can be made a lattice by adding two elements to the set M: one element which is greater than any element in the set M, and one element which is smaller than any element in M. This can be seen in Example B.4.

Example B.4 Consider the relation `divides' on the set A = {1, 3, 5, 15, 25} defined in Example B.2. The join of 3 and 5 is 3 ∨ 5 = 15 and the meet is 3 ∧ 5 = 1. The partially ordered set ⟨A, ≼⟩ is not a lattice, since there is no least upper bound for 15 and 25. To get a lattice we add the element 150 to the set A, and consider the partially ordered set ⟨A ∪ {150}, ≼⟩. The relation graph is given in Figure B.5, and we see immediately that this is a lattice.

2
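In a finite partially ordered set, the join and meet of Definition B.4 can, when they exist, be found by scanning the upper (respectively lower) bounds for a least (respectively greatest) element. The sketch below does this for the `divides' order of Examples B.2 and B.4; the function names and the use of a predicate for the order are assumptions made for the illustration.

def divides(a, b):
    return b % a == 0

def join(M, a1, a2, leq=divides):
    """Least upper bound of a1 and a2 in (M, leq), or None if it does not exist."""
    upper = [a for a in M if leq(a1, a) and leq(a2, a)]
    for a in upper:
        if all(leq(a, b) for b in upper):
            return a
    return None

def meet(M, a1, a2, leq=divides):
    """Greatest lower bound of a1 and a2 in (M, leq), or None if it does not exist."""
    lower = [a for a in M if leq(a, a1) and leq(a, a2)]
    for a in lower:
        if all(leq(b, a) for b in lower):
            return a
    return None

A = {1, 3, 5, 15, 25}
assert join(A, 3, 5) == 15 and meet(A, 3, 5) == 1
assert join(A, 15, 25) is None           # <A, divides> is not a lattice
assert join(A | {150}, 15, 25) == 150    # adding 150 repairs this pair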

If we combine two relations we get a composition of the relations. This is defined below.

Definition B.6 Let ρ₁ and ρ₂ be relations defined on the set A. The composition of ρ₁ and ρ₂ is denoted ρ₂ρ₁ and is defined in the following way: (x, z) ∈ ρ₂ρ₁ if there exists y ∈ A such that (x, y) ∈ ρ₁ and (y, z) ∈ ρ₂.

2



Figure B.5: Relation graph for the relation `divides' on the set A ∪ {150} in Example B.4. Transitive arcs are omitted.


Figure B.6: The composition ρ₂ρ₁ of the two relations ρ₁ and ρ₂.

The composition ρ₂ρ₁ of the two relations ρ₁ and ρ₂ is illustrated in Figure B.6. If ρ₁ and ρ₂ are the same relation ρ, then the composition is denoted ρ². From the graph for the relation ρ, the graph for the relation ρ² can be found by taking all vertices which can be reached in two steps. If ρ is a relation on a finite set A, then the relation matrix for the composite relation ρ^i = ρ ⋯ ρ is obtained by raising the relation matrix of ρ to the ith power, where the matrix multiplication is defined through Boolean addition and multiplication. In the graph of the relation this corresponds to the vertices which can be reached in i steps.

Definition B.7 The transitive closure of a relation ρ on A is denoted ρ⁺, and is defined as ρ⁺ = ∪_{i=1}^{∞} ρ^i.

2

When A is finite this is the same as ρ⁺ = ∪_{i=1}^{|A|} ρ^i. The relation matrix for the transitive closure can be obtained by Boolean addition of the relation matrices of ρ, ρ², …, ρ^{|A|}. Taking the transitive closure of a relation is the same as making the relation transitive, and in the graph of the relation it corresponds to the vertices which can be reached in any number of steps. (If the set A is finite the number of steps is finite.)

Example B.5 Consider the relation ρ defined on the set A = {1, 2, 3, 4} given in Example B.1. The relation matrix for the composition ρ² is obtained by Boolean multiplication of the relation matrix of ρ with itself:

    [ 1 1 1 0 ]   [ 1 1 1 0 ]   [ 1 1 1 1 ]
    [ 1 0 0 1 ] × [ 1 0 0 1 ] = [ 1 1 1 0 ]
    [ 0 1 0 0 ]   [ 0 1 0 0 ]   [ 1 0 0 1 ]
    [ 0 0 0 0 ]   [ 0 0 0 0 ]   [ 0 0 0 0 ]

and the graph of the relation is given in Figure B.7.


Figure B.7: Graph of the relation given by the composition ρ².

The relation matrix for the transitive closure ρ⁺ = ρ + ρ² + ρ³ + ρ⁴ is

    [ 1 1 1 1 ]
    [ 1 1 1 1 ]
    [ 1 1 1 1 ]
    [ 0 0 0 0 ]

This means that if we start in 1, 2 or 3 we can reach any vertex, but if we start in 4 we cannot reach any vertex. 2
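The matrix computations in Example B.5 can be reproduced mechanically: Boolean multiplication gives the matrix of ρ², and Boolean addition of the powers up to |A| gives the matrix of ρ⁺ as in Definition B.7. The sketch below does this for the relation of Example B.1; the function names are assumptions made for the illustration.

def bool_mul(P, Q):
    """Boolean matrix product: (P Q)[i][j] = OR_k (P[i][k] AND Q[k][j])."""
    n = len(P)
    return [[int(any(P[i][k] and Q[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def bool_add(P, Q):
    return [[int(P[i][j] or Q[i][j]) for j in range(len(P))] for i in range(len(P))]

def transitive_closure(P):
    """Relation matrix of rho+ = rho OR rho^2 OR ... OR rho^|A| (Definition B.7)."""
    closure, power = P, P
    for _ in range(len(P) - 1):
        power = bool_mul(power, P)
        closure = bool_add(closure, power)
    return closure

# The relation of Example B.1:
P = [[1, 1, 1, 0],
     [1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 0, 0]]
assert transitive_closure(P) == [[1, 1, 1, 1],
                                 [1, 1, 1, 1],
                                 [1, 1, 1, 1],
                                 [0, 0, 0, 0]]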

C Proofs of theorems in Chapter 5

C.1 Proofs of theorems in Section 5.1

In this section we give proofs for Lemma 5.7 (Lemma C.5), Theorem 5.8 (Theorem C.9) and Theorem 5.9 (Theorem C.14). First we make some auxiliary definitions which will simplify the proofs later on. For any binary domain we define a set/reset pair for the ith state variable.

De nition C.1 Regard a SAS(+)-B planning problem  = hhM; S ; Hi; x0; x?i, and suppose that h ; i is a plan from x0 to x?. An action a 2 is a set action for the ith state variable if and only if u = 6 ei(a) =6 x0i and a is a reset action for the ith 0 state variable if ei(a) = xi . We further say that a pair of actions a; a0 2 is

a set/reset pair for the ith state variable if a is a set action and a0 is a reset action for the ith state variable. We usually say only set action (reset action, set/reset pair) if the state variable is implicit from the context. 2 The inverse of an action is de ned by exchanging the pre- and post-conditions if the state variables are binary and the set of action types is post-unique.

Definition C.2 Consider a SAS⁺-structure ⟨M, S, H⟩ such that the state variable domains S_i, i ∈ M, are binary and the set of action types is post-unique. We introduce the following definitions:

1. For any action type h ∈ H, if there exists an action type h̄ ∈ H such that for some i ∈ M, u ≠ e_i(h) ≠ e_i(h̄) ≠ u, then h̄ is called the inverse of h.

2. For any action a such that type(a) ∈ H, if there exists an action ā such that type(ā) is the inverse of type(a), then ā is an inverse of a. □


Note that if bi(h) 6= u then ei(h) = bi (h), and that the inverse of an action need not be unique. In Lemmas C.3 and C.5 we show that the actions in the set (x0; x?) de ned in De nition 5.1 are actually necessary, i.e., that any plan from x0 to x? must have actions of the same type as the actions in the set (x0; x?). Observe that we do not require single-valued sets of action types here. If the set of action types is single-valued, the set (x0; x?) will be both necessary and sucient as shown in Theorem C.10. The di erence when dealing with non-single-valued sets of action types is that a minimal plan might contain several actions of the same type.

Lemma C.3 Given a SAS(+)-PUB planning problem  = hhM; S ; Hi; x0; x?i, suppose that h ; i is a plan from x0 to x? and that (x0; x?) according to De nition 5.1 exists. Then there is a relabelling r such that r((x0; x?))  . 2

Proof: We prove that for all actions a 2 (x0; x?) we can choose a unique action a0 2 such that type (a) = type (a0). Using that (x0; x?) = [1k=0 Pk we make a proof by induction on k. Basis: Let D = fi 2 M j x?i 6v x0i g. By De nition 5.1 P0 contains exactly one action for each i 2 D and no other actions, i.e. jP0j = jDj. Let a1; : : :; ajDj be an enumeration of P0. Since H is unary, must contain at least one action for each i 2 D in order to change x0 to x?. Select one such action from for each i 2 D and let a01; : : : ; a0jDj be an enumeration of these actions such that, for all j such that 1  j  jDj, a0j and aj a ect the same state variable. Since H is post-unique there are no alternative ways to change x0i to x?i for any i 2 D so type (aj ) = type (a0j ) for all j such that 1  j  jDj. We de ne a relabelling r0 such that r0(label (aj )) = label (a0j ) for all j and r0(a) is unde ned for a 62 P0 . It follows that type (r0(a)) = type (a) for all a 2 P0 and therefore also that r0(P0)  . Induction: Suppose there is a relabelling rk such that rk (Pk )  for k > 0. By De nition 5.1, Pk+1 is either empty or consists of set/reset pairs and set actions. The case where Pk+1 = ; is trivial. For the other case, let Pk+1 = fa11; a12; a21; a22; : : :; aq1; aq2; a1; : : :; alg where am1; am2 are set/reset pairs, and ap are set actions. Thus, for each m such that 1  m  q there exists an im 2 M such that am1; am2 is a set/reset pair for the ithm state variable and x?im 6= u, and for each p such that 1  p  l there exists an ip 2 M such that ap is a set action for the ithp state variable and x?ip = u. This follows from De nition 5.1. Let us rst consider the set/reset pairs am1; am2 where 1  m  q. For each m, am1 is in Pk+1 because of some action a 2 Pk such that fi(a) 6v x0i and there


is no action a0 2 Tk such that ei(a0) = fi(a). The action am2 is likewise in Pk+1 to reset the ith state variable so that x0i = x?i is ascertained. By the induction hypothesis, rk (Pk )  and thus also rk (a) 2 . Hence there must be two actions a0m1; a0m2 2 such that ei(a0m1) = fi(rk (am)) = bi(a0m2) in order to ful ll the prevail-condition of rk (am) and to ascertain that x0i = x?i. Since S is binary and H is post-unique, type (am1) = type (a0m1) and type (am2) = type (a0m2). If we instead consider the set actions a1; : : : ; al it follows from the same argument that for each p such that 1  p  l there exists an action a0l 2 such that type (al) = type (a0l). It is thus possible to de ne a relabelling rk+1 such that rk+1(am1) = a0m1 and rk+1(am2) = a0m2 for 1  m  q, rk+1 (ap) = a0p for 1  p  l, rk+1 (a) = rk (a) for a 2 Tk and rk+1 is otherwise unde ned. Obviously, type (rk+1(a)) = type (a) for a 2 Pk+1 and it follows that rk+1 (Pk+1 )  . This proves the induction step. Now, for all k  0, rk (Pk )  for Pk as de ned in De nition 5.1 and rk as de ned above. We de ne r1 = [1k=0rk . Since, for all k, rk+1 always agree with rk on arguments in Tk , and rk is always unde ned for arguments not in Tk , it follows that r1 is a function. Furthermore, since all rk are relabellings, r1 is also a relabelling. Consequently, r1(Pk )  for all k  0 and since 0 ? (x0; x?) = [1 k=0 Pk it follows that r1 ((x ; x ))  , which was to be proven.

2

Corollary C.4 Suppose  = hhM; S ; Hi; x0; x?i is a SAS(+)-PUB planning problem. If h ; i is a plan from x0 to x? and (x0; x?) according to De nition 5.1 exists, then there is a relabelling r such that (x0; x?)  r( ). 2 Proof: Immediate from Lemma C.3. 2 We can now prove Lemma 5.7. Lemma C.5 [Lemma 5.7] Consider a SAS(+)-PUB planning problem  = hhM; S ; Hi; x0; x?i. If there is a plan h ; i from x0 to x? then the set of necessary actions (x0; x?) de ned in De nition 5.1 exists and there exists a relabeling r such that (x0; x?)  r( ). 2 Proof: Suppose that there is a plan h ; i and also suppose that there is no set (x0; x?) ful lling De nition 5.1. Then (x0; x?) may fail to exist for either of two reasons: either the set P0 does not exist or the set P~ does not exist, i.e., there is a k > 0 such that the set Pk does not exist. Suppose that P0 does not exist. The only possible reason for this is that there is an i 2 M such that x?i 6v x0i but there is no a 2 A such that bi(a) v x0i and ei(a) = x?i. It follows from the construction of A that for each h0 2 H there is a unique


a0 2 A such that type (a0) = h0. This means that there is no h 2 H such that bi (h) v x0i and ei (h) = x?i. Now, since h ; i is a plan from x0 to x? there must be an action a00 2 such that ei(a00) = x?i. Thus there exists h00 2 H such that ei(h00) = x?i and type (a00) = h00. It follows from (S2) and (S3) in De nition 3.5 and binariness that either bi(h00) = u or bi(h00) = x0i . In either case bi(h00) v x0i , which is a contradiction, so P0 must exist. The proof for the existence of Pk for k > 0 is analogous. This means that Pk exists for all k  0 and, by De nition 5.1, also (x0; x?) must exist. 2 Before proving that `precedes' de ned on the set of necessary actions is a partial order if any plan exists, we need the following lemma. Lemma C.6 Consider a SAS(+)-P problem  = hhM; S ; Hi; x0; x?i. Suppose that h ; i is a plan from x0 to x? and that there is a nonempty set   such that for all i 2 M, if x?i 6= u then ei(a) v x?i for all a 0 2 . Then there are two states x; x0 2 S and an action a0 2 such that x 7?a! x0; e(a) v x0 for all a 2  and type (a0) = type (a) for some a 2 . 2 Proof: Let I = [a2 dim(e(a)), and let S 0 be the set of all states x 2 S such that x?i = xi for all i 2 I. Since  is nonempty, there must be asome action a 2 and two states x; x0 2 S such that x 62 S 0, x0 2 S 0 and x 7?! x0. That e(a0) v x0 for all a0 2  is immediate, and, since a obviously a ects some j 2 I and H is post-unique, it also follows that type (a) = type (a0) for some a0 2 . 2 We can now show that if there is a plan and the set (x0; x?) exists, then the relation `precedes' () de ned in De nition 5.3 is a partial order.

Lemma C.7 Consider a SAS(+)-PUBS problem  = hhM; S ; Hi; x0; x?i. If there is a plan, then  de ned in De nition 5.3 is a partial order.

2

Proof: With notational abuse we just write  and  in the following. The proof is carried out by contradiction. Suppose there is a plan h ; i, and

that  is not a partial order. From Lemma C.5 we know that since there is a plan,  exists. From Corollary C.4 it follows that there is a relabelling r such that   r( ). Now, if  is not a partial order then it is either not irre exive or not antisymmetric, since it is transitive by de nition. However, non-antisymmetry implies non-irre exitivity because  is transitive. Thus  is either not irre exive but antisymmetric, or not antisymmetric. 1. Suppose  is not irre exive but antisymmetric. Then there must be an action a 2  such that aa, and, by antisymmetry and transitivity, we


get aa. This means that either aa or aa, i.e., there exists an i 2 M such that either ei(a) = fi(a) 6= u or u 6= fi(a) 6= ei(a) 6= u. This is a contradiction according to (S4) in De nition 3.5. 2. Suppose  is not antisymmetric. Then there exists two actions a01; a02 2   r( ) such that a01a02 and a02a01. It follows from De nition 5.3 that there exists a sequence a1; a2; : : :; ak 2  such that

a1a2; a2a3; a3a4; : : :; ak?1ak ; ak a1 where al = a01 and am = a02 for some l; m 2 f1; 2; : : : ; kg and  =  [ . Let A~ = fa1; : : :; ak g. The proof can be split in three parts depending on if  above equals  or . (a) Suppose a1a2; a2a3; a3a4; : : :; ak?1ak ; ak a1. Once again there are two cases: either there exists an action a 2 A~ such that x?i 6v ei(a) or x?i v ei(a) for all a 2 A~. i. Suppose there exists an action a 2 A~ such that x?i 6v ei(a). Then a 62 P0 according to De nition 5.1, and hence a 2 P~ . From De nition 5.1 it follows that there exists an action a0 2  such that

    u ≠ e_i(a) = f_i(a′) ≠ x⋆_i.

Now there exists an action ap 2 A~ such that apa and hence u 6= fj (ap) 6= ej (a) 6= u for some j 2 M according to De nition 5.3. Furthermore H is unary and hence i = j . We thus have

    u ≠ f_i(a′) = e_i(a) ≠ f_i(a_p) ≠ u

which is a contradiction since H is single-valued. ii. Suppose x?i v ei(a) for all a 2 A~. We know, by Lemma C.6 that there aare two states x; x0 2 S such that for some action a 2 ; x0 7?! x; e(a0) v x for all a0 2 A~ and type (a) = type (a00) for some a00 2 A~. By assumption there is an m such that a00am, i.e., u 6= fi(a00) 6= ei(am) 6= u for some i 2 M. Furthermore, fi (a00) = fi (a) v xi , so bi (am) = xi. By hypothesis and (S2) in De nition 3.5, we have ei(am) = xi, so bi (am) = ei(am) which contradicts (S3) in De nition 3.5. (b) Suppose a1a2; a2a3; a3a4; : : :; ak?1ak ; ak a1. First we note that for every a 2 A~ we have b(a) v x0. Either a 2 P0 or a 2 P~ . If a 2 P0 then it is obvious from De nition 5.1 that b(a) v x0. Suppose a 2 P~ . There exists an action ap 2 A~


Proofs of theorems in Chapter 5 such that aap and hence for some i 2 M ei(a) = fi(ap) 6= u. Since a 2 P~ we have u 6= ei (a) = fi (ap ) 6= x0i according to De nition 5.1. Then a is a set action in a set/reset pair for the ith state variable, and b(a) v x0. Hence b(a) v x0 for all a 2 A~. Any plan h ; i from x0 to x? must pass a state x1 2 S such that b(a1) t f (a1) v x1 . Let h 1 ; i be any plan from x0 to x1 . Now ak a1 and there exists ik 2 M such that eik (ak ) = fik (a1) 6= u. Furthermore b(ak ) v x0 and H is post-unique, which means that an action a0k such that type (a0k ) = type (ak ) must be performed before a1, i.e., a0k 2 1. Any plan from x0 to x1 must pass a state x2 2 S such that b(a0k ) t f (a0k ) = b(ak ) t f (ak ) v x2: Let h 2; i be any plan from x0 to x2. Now ak?1ak , ak?1a0 0k and hence for some ik?1 2 M, eik?1 (ak?1) = fik?1 (ak ) = fik?1 (a k ) 6= u. Since b(ak?1) v x0 and H is post-unique an action a0k?1 such that type (a0k?1 ) = type (ak?1 ) must be performed before a0k , that is, a0k?1 2 2. Repeating this we nally get that we must pass a state xk 2 S such that b(a02) t f (a02) = b(a2) t f (a2) v xk where type (a02) = type (a2). Let h k ; i be a plan from x0 to xk . Now a1a2 , a1a02 and for some i1 2 M, ei1 (a1) = fi1 (a2) 6= u. Since b(a1) v x0 and H is post-unique an action a01 such that type (a01) = type (a1) must be performed before a02, i.e., a01 2 k . Any plan from x0 to xk must pass a state xk+1 2 S such that b(a01 ) t f (a0 1) = b(a1) t f (a1 ) v xk+1 . Thus the states x1 and xk+1 ful ll the same condition. This will lead to an in nite loop, i.e., we have a contradiction. (c) According to the above we cannot have a1a2; a2a3; a3a4; : : :; ak?1ak ; ak a1 or a1a2; a2a3; a3a4; : : :; ak?1ak ; ak a1: Hence there exists actions a1; a2; a3 2 A~ (possible enumeration) such that a1a2 and a2a3. Now a1a2 means that for some i 2 M, u 6= fi (a1) 6= ei (a2) 6= u, and a2 a3 means that for some j 2 M, ej (a2) = fj (a3) 6= u. Since H is unary we get i = j . Hence u 6= fi (a1) 6= fi (a3) 6= u which is a contradiction because H is singlevalued. 2


Now, assuming that (x0; x?) exists and that  is a partial order we must prove that h(x0; x?); i actually is a plan from x0 to x?. In Lemma C.8 we show that the actions in (x0; x?) can be performed in the order speci ed by , and in Theorem C.9 we show that the resulting state will subsume the desired nal state. Lemma C.8 Suppose hhM; S ; Hi; x0; x?i is a SAS(+)-PUB planning problem. If  according to De nition 5.1 exists and  de ned in De nition 5.3 is a partial order, then there exists a state x 2 S such that h; i is a plan from x0 to x.

2

Proof: We must show that the actions can be performed in the given order, i.e., that b(a) t f (a) is ful lled when the action a is to be performed. We prove ; that for any total order  such that   , h; i is a linear plan, i.e, x0 7?! x. Let m = jj and let a1; : : : ; am be the actions in th set  ordered according

to . Such a total order always exists. Finding such a total order is called topological sorting, and an algorithm can be found in, for example, Gill [61]. The proof is carried out by induction over k such athat 1  k  m. We prove k k that there are states xk?1; xk 2 S such that xk?1 7?! x. Basis: There are two cases when a1 cannot be performed: either f (a1) 6v x0 or b(a1) 6v x0. 1. Suppose f (a1) 6v x0. Then the following is true: 9i 2 M such that u 6= fi(a1) 6v x0i and 9a 2  such that bi(a) = x0i and ei(a) = fi(a1): The second part follows from step 3 in De nition 5.1 since  exists. Now ei(a) = fi (a1), and according to De nition 5.3 aa1 ) aa1 ) aa1. This is a contradiction because a1 is the rst action under the order . Hence f (a1) v x0. 2. Suppose b(a1) 6v x0. Then the following is true: 9i 2 M such that u 6= bi(a1) 6v x0i : Hence ei(a1) = x0i since Si is binary for all i 2 M and, according to (S3) in De nition 3.5, bi(a1) 6= ei(a1). Now a1 cannot belong to the set P0 since ei(a1) = x0i . Hence a1 2 P~ and a1 is a reset action in a set/reset pair a1; a1. According to step 3 in the de nition of  there exists an action a 2  such that bi (a1) = ei (a1 ) = fi (a) as bi(a1) 6= u. Then, according to De nition 5.3, aa1 ) aa1 ) aa1, which is a contradiction, because a1 is the rst action under the order .


Thus b(a1) t f (a1) v x0, and a1 can be performed in the initial state, i.e., there 1 exists some state x1 2 S such that x0 7?a! x1 . Induction: For 1  k  m suppose there are states xk?1; xk 2 S such ak k k ? 1 that x 7?! x , that is, a1; a2; : : :; ak transforms x0 into xk . Show that ak+1 b(ak+1 ) t f (ak+1 ) v xk , i.e., that there exists xk+1 2 S such that xk 7?! xk+1 . 1. Suppose f (ak+1) 6v xk . Then the following is true:

9i 2 M such that u 6= fi(ak+1) 6v xki : (a) Suppose fi(ak+1) = 6 x0i . Then, since  exists, it follows from step 3 in De nition 5.1 that

9a 2  such that bi(a) v x0i and ei(a) = fi(ak+1): According to De nition 5.3 aak+1 ) aak+1 ) aak+1. This means

that a must be before ak+1 according to the total order which includes . Hence a = ap for some p  k. Furthermore if a 2  then a must be after ak+1. This follows from De nition 5.3, because u 6= fi(ak+1) 6= ei(a) and hence ak+1a ) ak+1a ) ak+1a. No other actions in  a ect the ith state variable. Thus

xki = ei(a) = ei(ap) = fi(ak+1) which is a contradiction because we assumed that xki 6= fi(ak+1). (b) Suppose fi(ak+1) = x0i 6= xki . Then for some p  k: bi(ap) v x0i and ei (ap) = xki . Thus u 6= fi (ak+1) 6= ei (ap) 6= u and according to De nition 5.3 ak+1ap ) ak+1ap ) ak+1ap which is a contradiction. Consequently f (ak+1 ) v xk . 2. Suppose b(ak+1) 6v xk . Then the following is true:

9i 2 M such that u 6= bi(ak+1) 6v xki and it follows from (S3) in De nition 3.5 and the binary assumption that ei (ak+1 ) = xki . (a) Suppose x0i 6= xki . Then there exists an action ap, where p  k, such that ap a ects the ith state variable and ei(ap) = xki . Because of post-uniqueness it follows that type (ap) = type (ak+1). This is a contradiction because by construction  contains at most one action of each type.


(b) Suppose x0i = xki . Then ak+1 is a reset action for i and, since P0 contains only set actions, ak+1 2 P~ . Hence, there must be an action a 2  such that u 6= fi(a) 6= ei(ak+1) 6= u, so aak+1 and also aak+1. We further know that [i] = fak+1; ak+1g because ak+1 is a set action. Now, ei(ak+1) = fi(a), so ak+1a and thus ak+1ak+1. It follows from the induction hypothesis and that no other actions a ect the ith state variable that xki = ei(ak+1), but bi (ak+1 ) v ei (ak+1 ) which is a contradiction. Thus b(ak+1) t f (ak+1) v xk , and ak+1 can be performed in the state xk . a k +1 Hence there exists some state xk+1 2 S such that xk 7?! xk+1, which ends the induction step. Finally, putting x = xm concludes the proof. 2 We can now show Theorems 5.8, namely that that if the set (x0; x?) exists and  is a partial order then h(x0; x?); i is a plan from x0 to x?, i.e., the actions in the set (x0; x?) performed in any total order which includes  transforms the initial state x0 into the desired nal state x?.

Theorem C.9 [Theorem 5.8] Consider a SAS(+)-PUB planning problem  = hhM; S ; Hi; x0; x?i. If (x0; x?) according to De nition 5.1 exists and  de ned in De nition 5.3 is a partial order then h(x0; x?); i is a plan from x0 to x?.

2

Proof: According to Lemma C.8 there exists a state x 2 S such that h; i is a plan from x0 to x. It only remains to show that x? v x. We prove that for each i 2 M, x?i v xi. Let D = fi 2 M : x?i 6v x0i g. By de nition the set of primarily necessary actions P0 contains exactly one action for each i 2 D and no other actions. Furthermore P~ =  ? [i2D[i] = [i62D [i]. Now, there are two cases. 1. Suppose i 2 D. According to the de nition there exists a 2 P0 such that bi(a) v x0i and ei(a) = x?i. Furthermore [i] = fag. Clearly xi = ei(a) = x?i and x?i v xi. 2. Suppose i 62 D. Then there are three cases depending on the size of [i]. By de nition j[i]j  2. (a) Suppose [i] = ;. It is immediate from the de nition that x?i v x0i . Since no actions a ect the ith state variable obviously xi = x0i and hence x?i v xi. (b) Suppose j[i]j = 1. Then x?i = u and obviously x?i v xi.


Proofs of theorems in Chapter 5 (c) Suppose j[i]j = 2. Let a; a be the set/reset pair in [i]. Nothing is permanently changed when performing a set/reset pair, and hence xi = x0i . It follows from the de nition that x?i = x0i , and thus x?i v x.

It follows that x? v x.

2

It now is easy to show that for SAS(+)-PUBS planning problems if there is a plan, then (x0; x?) exists and h(x0; x?); i is a plan. Note that if h(x0; x?); (x0;x?)i is a plan, then  is a partial order according to De nition 3.8.

Theorem C.10 Suppose  = hhM; S ; Hi; x0; x?i is a SAS(+)-PUBS planning problem. If there is a plan from x0 to x? then (x0; x?), de ned in De nition 5.1, exists and h(x0; x?); i, where  is de ned in De nition 5.3, is a plan from x0 to x?.

2

Proof: Immediate from Lemma 5.7, Lemma C.7 and Theorem 5.8.

2

We have thus shown that h(x0; x?); i is a plan, and in Theorem C.11 we show that it is in fact a minimal plan, i.e., any other plan h ; i from x0 to x? is such that the number of actions in the set (x0; x?) is less than or equal to the number of actions in the set .

Theorem C.11 Suppose  = hhM; S ; Hi; x0; x?i is a SAS(+)-PUB planning problem. If (x0; x?) exists and h(x0; x?); i is a plan then h(x0; x?); i is a minimal plan, where (x0; x?) and  are de ned in De nition 5.1 and De nition 5.3. 2

Proof: Lemmas C.4 and 5.7 give that if h ; i is a plan then  exists and there is a relabelling r such that   r( ). According to Proposition 5.6, and r( ) are isomorphic so j j = jr( )j. It follows that jj  j j, so if  exists and there is a partial order  on  such that h; i is a plan, then h; i is a minimal plan. 2 Next we show that h(x0; x?); (x0;x?)i is not only a minimal plan, but also a maximally parallel plan (Theorems C.12 and C.13).

Theorem C.12 Suppose  = hhM; S ; Hi; x0; x?i is a SAS(+)-PUB planning problem. If h(x0; x?); i is a plan from x0 to x? then it is a parallel plan.

2


Proof: Suppose there is a pair of distinct actions a; a0 2  such that neither

aa0 nor a0a and a and a0 are not independent. De nition 3.9 gives that either of the following cases must apply. 1. Suppose ei(a) 6= u and ei(a0) 6= u. Then obviously a; a0 2 P~i and a0 = a. It follows from De nition 5.1 that there is some a00 2  such that ei(a) = fi(a00) = ei(a0) or ei(a0) = fi(a00) = ei(a). Suppose the rst of these holds. Then a0a00 and a00a so De nition 5.3 gives a0a. The other case is symmetrical and results in aa0, so the assumption is violated. 2. Suppose ei(a) 6= u and fi(a0) 6= u. Then either ei(a) = fi(a0) or ei(a) 6= fi(a0) because of binariness, so either a0a or aa0. It follows by De nition 5.3 that either a0a or aa0, so the assumption is violated. 3. The case ei(a0) 6= u and fi(a) 6= u is analogous to the previous case. 4. The case fi(a) 6v fi(a0) and fi(a0) 6v fi(a) is impossible because of singlevaluedness. Consequently, a and a0 are independent and it follows from De nition 3.10 that h(x0; x?); i is a parallel plan. 2

Theorem C.13 Suppose  = hhM; S ; Hi; x0; x?i is a SAS(+)-PUB planning problem. If h(x0; x?); i is a plan from x0 to x? then it is a maximally parallel plan.

2

Proof: That h(x0; x?); i is a parallel plan follows from Theorem C.12, so it only remains to show that it is maximally parallel. According to De nition 3.10 we must prove that there is no relation r   such that h(x0; x?); ri is a

parallel plan from x0 to x?. Suppose such a relation exists. Then r must be a partial order and r?  ?, where r? denotes the reduction of r as de ned in De nition 5.2. We prove that for every pair of actions a; a0 2 (x0; x?), if a?a0 then a and a0 are not independent. It is obvious from De nition 5.3 that if a?a0 then aa0 and thus either aa0 or aa0. 1. Suppose aa0. Then there exists i 2 M such that ei(a) = fi(a0) 6= u. It is obvious from De nition 3.9 that a and a0 are not independent. 2. Suppose aa0. Then there exists i 2 M such that fi(a) 6v ei(a0) 6= u. Thus fi(a) 6= u and it follows from De nition 3.9 that a and a0 are not independent.


Hence, if r?  ? then there must be two actions a; a0 2 (x0; x?) such that neither ar?a0 nor a0r? a but a and a0 are not independent. Consequently, h(x0; x?); ri cannot be a parallel plan which contradicts the assumption. 2 We can now prove the main theorem in Section 5.1, namely Theorem 5.9.

Theorem C.14 [Theorem 5.9] Suppose  = hhM; S ; Hi; x0; x?i is a SAS(+)-

PUBS planning problem. Then (x0; x?) according to De nition 5.1 exists and h(x0; x?); i, where  is de ned in De nition 5.3, is a minimal and maximally parallel plan from x0 to x? if and only if there is any plan from x0 to x?. 2

Proof: The if part follows from Theorems C.10, C.11 and C.13. The only-if part is immediate. 2

C.2 Proofs of theorems in Section 5.2

In this section we give the proofs for two of the theorems in Section 5.2. Subsection C.2.1 is devoted to the proof of Theorem 5.35, and in Subsection C.2.2 the proof of Theorem 5.37 is given.

C.2.1 Proof of Theorem 5.35

In this subsection we give the proof of Theorem 5.35. The proof is quite long and is split into several parts. To show that the tuple defined in Definition 5.32 is a plan from x⁰ to x⋆ we must show that the actions in its action set can be executed in the order specified by its partial order (Lemma C.15) and that the resulting state is the desired final state (Lemma C.18).

Lemma C.15 Given a SAS-PUS problem  = hhM; S ; Hix0; x?i suppose that h; i is as de ned in De nition 5.32. Then there is a state x 2 S such that h;  i is a plan from x0 to x. 2 ; Proof: We prove that there is a state x 2 S such that x0 7?! x for any total order  such that   . Let m = jj and let a1; : : :; am be the actions

in  ordered under . We prove by induction on k that, for each k such that k k 1  k  m, there is xk 2 S such that xk?1 7?a! x. Basis: Lemma 5.34 gives that there is an i 2 M such that a1 2  for some i-chain  = [i] from x0 to x? and, since it is the rst action in , it must also be the rst action in  . Obviously bi(a1) = bi( ) = x0i so unariness gives


v x0. Now suppose that f (a1) 6v x0, then there is some i 2 M such 6 fi(a) 6= x0i and there must, by De nition 5.32, be an i-chain    =  1. This contradicts the assumption, so f (a) v x0. from x0 to f (a) such that a Consequently, b(aa ) t f (a) v x0 and, by De nition 3.7, there must be a state x1 such that x0 7?! x1. Induction: Suppose that b(ak+1) 6v xk . Lemma 5.34 gives that there is an i 2 M such that ak+1 2  for some i-chain  = [i] from x0 to x?. First bi (a1) that u

1

suppose ak+1 is the rst action in  , then bi (ak+1) = bi( ) = x0i . Since ; k  = [i] there is some    such that [i] = ; and x0 7?! x , and th because there are no actions in  a ecting the i state variable obviously xki = x0i = bi ( ) = b(ak+1). Instead, suppose ak+1 is not the rst action in  . Then ak+1 has an immediate predecessor a 2  , according to   , so there ; ; k a 0 must be states x; x0 2 S such that x0 7?! x, x 7?! x and x0 7? ! x where ;   and  \ = ;. Since a immediately precedes ak+1 in the i-chain , [i] = ;. Thus no actions in a ect the ith state variable and hence xki = x0i = ei(a) and, since a immediately precedes ak+1 in the i-chain , also ei(a) = bi (ak+1 ). In either case, bi (ak+1 ) = xki , so unariness gives b(ak+1 ) v xk . Now suppose that f (ak+1) 6v xk , then there must be some i 2 M such that u 6= fi (ak+1 ) 6= xki . De nition 5.32 gives that there is an i-chain    from  k+1, and an i-chain    from f (ak+1) to x? such x0 to f (ak+1) such that a that ak+1 . We know from the proof of Lemma 5.34 that [i] = ( ; ), so ; ; k there are states x; x0 2 S such that x0 7?! x, x 7?a! x0 and x0 7? ! x where a is the last action in ,   ,  , and , and fag are disjunct. Obviously, [i] = ; so no actions in the set a ects the ith state variable and thus xki = x0i = ei(a) = ei( ) = fi(ak+1), which contradicts the assumption. Hence, f (ak+1) v xk . Consequently, b(ak+1) t f (ak+1) v xk , so there is some ak+1 k+1 !x . state xk+1 such that xk 7? ; m This concludes the induction so x0 7?! x . Finally, putting x = xm concludes the proof. 2

Lemma C.16 Suppose  = hhM; S ; Hi; x0; x?i is in the SAS class. If h ; i is a plan from x0 to x?, then [i] is totally ordered under .

2

Proof: Suppose there is an i 2 M such that [i] is not totally ordered under . Then there must be a; a0 2 [i] such that neither aa0 nor a0a. Let  and  be two total orders on that are both strengthenings of  (i.e.    and    ) and equal except that a is the immediate predecessor of a0 in 

and a0 is the immediate predecessor of a in  . De nition0 3.8 gives that there 1 ; x2; x3; x4 2 S such that x1 7?a! x2 7?a! x3, according to  , must be states x 0 a and x1 7?a! x4 7?! x3 according to  . Obviously, bi(a) = x1i = bi(a0) since


a; a0 2 [i]. Furthermore, ei(a) = x2i = bi(a0) = bi(a), thus contradicting (S3) in De nition 3.5. Hence, [i] must be totally ordered under . 2

Lemma C.17 Consider a SAS problem  = hhM; S ; Hi; x0; x?i. If h ; i is a plan from x0 to x?, then, for each i 2 M, there is an i-chain   from x0 2

to x?.

Proof: We know from Lemma C.16 that [i] is totally ordered under  for i 2 M. Suppose [i] is not an i-chain for some i 2 M. The case where [i] = ; is trivial, so assume the opposite. Let a1; : : :; am be [i] ordered under  and let  be an arbitrary total order such that   . There must be sets 1; : : : ; m and states x1; : : : ; xm; z2; : : :; zm 2 S such that 1. = fa1; : : :; amg [mk=1 k , 2. fa1g; : : :; famg; 1; : : :; m are disjunct,  ; 1 3. x0 7? !x , k k k ; k 4. xk?1 7?a! z 7?! x for 1 < k  m, and 1

5. xm = x?. Obviously, k [i] = ; for all k and since no actions in  a ects the ith state variable x0i = x1i and zik = xki for 1 < k  m. It is easily realized that ha1; : : :; ami must be an i-chain from x0 to x?. 2 Using the above lemmas we can now show soundness, i.e., that De nition 5.32 actually de nes a plan from the initial state to the nal state for the stated problem. Lemma C.18 Suppose  = hhM; S ; Hi; x0; x?i is in the SAS-PUS class. Then any tuple h;  i ful lling De nition 5.32 is a plan from x0 to x?.

2

Proof: Lemma C.15 gives that there is a state x 2 S such that h; i is a plan from x0 to x. It only remains to prove that x = x?, which we prove for an arbitrary total order  such that   . For each i 2 M, Lemma 5.34 gives that  = [i] is an i-chain from x0 to x?. For an arbitrary i 2 M, let a be the last action in the i-chain  according to   . Then there are states ; 0 0 a 00 ; x0; x00 2 S such that x0 7?! x , x 7?! x , and x00 7? ! x where  [fag[ =  and , , and fag are disjoint. Obviously, [i] = ; since a and a is the


last action in . Thus there are no actions in a ecting the ith state variable and hence xi = x00i . De nition 3.7 gives x00i = ei(a) = ei( ) = x?i, so xi = x?i. Consequently, xi = x?i for all i 2 M, so x = x?. 2 The next lemma shows that for any plan from x0 to x? it is possible to choose a subset ful lling De nition 5.32.

Lemma C.19 Suppose  = hhM; S ; Hi; x0; x?i is in the SAS-PUS class. If h ; i is a plan from x0 to x?, then there are   and    such that 

and  are as de ned in De nition 5.32. 2 Proof: Suppose h ; i is a plan from x0 to x?. It follows from Lemma C.17 that for each i 2 M there must be a, possibly empty, i-chain   from x0 to x?. Furthermore, for each a 2 such that fi(a) 6v x0i there must be an i-chain  and for each a 2 such that fi(a) 6v x?i   from x0 to f (a) such that a there must be an i-chain   from f (a) to x? such that a . Each of these chains is totally ordered under , which, by De nition 3.8, is a partial order on . Proposition 5.28 assures that minimal  , , and  can be selected such that  [  [  is minimal. Furthermore, if  or  exists as de ned above, then either   ,   , or    [ . Since  [  [   [i], it is obvious that there is some set   satisfying part 4 of De nition 5.32. Furthermore, we select  as the minimal subset of  such that h; i satis es parts 1{3 of De nition 5.32; we know from above that  must include such a . It follows that  satis es also part 5 since  must be a partial order. This proves the lemma. 2 We can now state a theorem proving correctness, i.e., that h; i is a plan from x0 to x? if and only if any plan exists.

Theorem C.20 Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is in the SAS-PUS class. Then there exists ⟨Δ, ≺⟩ as in Definition 5.32 if and only if there is a plan from x^0 to x^*. Furthermore, ⟨Δ, ≺⟩ is a plan from x^0 to x^*. □

Proof: Immediate from Lemmas C.18 and C.19. □

Theorem C.21 Given a SAS-PUS problem Π = ⟨⟨M, S, H⟩, x^0, x^*⟩, any tuple ⟨Δ, ≺⟩ satisfying Definition 5.32 is a minimal plan from x^0 to x^*. □

Proof: Lemma C.18 gives that ⟨Δ, ≺⟩ is a plan from x^0 to x^*. Suppose there is some plan ⟨Γ, ≺_Γ⟩ from x^0 to x^* such that |Γ| < |Δ|. Lemma C.19 gives that there are some Δ' ⊆ Γ and ≺' ⊆ ≺_Γ such that ⟨Δ', ≺'⟩ satisfies Definition 5.32. Definition 5.5 and Lemma 5.33 give that there is a relabelling g such that g(Δ') = Δ. Obviously, |Δ| = |g(Δ')| = |Δ'| ≤ |Γ|, thus contradicting that |Γ| < |Δ|. Hence, ⟨Δ, ≺⟩ must be a minimal plan from x^0 to x^*. □

Theorem C.22 Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is a SAS-PUS planning problem and that ⟨Δ, ≺⟩ is defined according to Definition 5.32. Then ⟨Δ, ≺⟩ is a parallel plan from x^0 to x^*. □

Proof: Let a, a' ∈ Δ be two arbitrary, distinct actions such that neither a ≺ a' nor a' ≺ a. Suppose a and a' are not independent. Then there is an i ∈ M such that at least one of the three cases in Definition 3.9 is false.

1. First suppose e_i(a) ≠ u. Now suppose e_i(a') ≠ u; then a, a' ∈ Δ[i], which, by Lemma 5.34, is an i-chain totally ordered by ≺, so either a ≺ a' or a' ≺ a, which contradicts the assumption. Instead, suppose f_i(a') ≠ u; then parts 2 and 3 of Definition 5.32 give that there are two i-chains β ⊆ Δ from x^0 to f(a') and γ ⊆ Δ from f(a') to x^* such that β ≺ a' and a' ≺ γ. Hence, β ∪ γ ⊆ Δ[i] and, since (β; γ) is an i-chain from x^0 to x^*, part 1 of Definition 5.32 is satisfied, so part 4 of the same definition gives Δ[i] = β ∪ γ. Obviously, a ∈ Δ[i], so either a ∈ β or a ∈ γ. Hence, either a ≺ a' or a' ≺ a, which contradicts the assumption.

2. The case where e_i(a') ≠ u is analogous to the previous case.

3. The case f_i(a) ⋢ f_i(a') and f_i(a') ⋢ f_i(a) is impossible because of single-valuedness.

Since none of these cases apply, the assumption is contradicted. Hence, a and a' must be independent, which proves the theorem. □
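The notion of independence used above comes from Definition 3.9, which is not restated in this appendix; the three cases enumerated in the proof suggest the following approximate pairwise test. It reuses the hypothetical Action record from the earlier sketch and illustrates the case split only, not the thesis' formal definition:

    def independent(a: Action, a2: Action, variables: List[int]) -> bool:
        # Approximate pairwise independence test mirroring the three cases
        # used in the proof of Theorem C.22.
        for i in variables:
            a_writes = a.e.get(i) is not None
            a2_writes = a2.e.get(i) is not None
            a_prevail = a.f.get(i)
            a2_prevail = a2.f.get(i)
            # Cases 1 and 2: one action changes variable i while the other
            # also changes it or has a prevail-condition on it.
            if a_writes and (a2_writes or a2_prevail is not None):
                return False
            if a2_writes and (a_writes or a_prevail is not None):
                return False
            # Case 3: incompatible prevail-conditions on variable i.
            if a_prevail is not None and a2_prevail is not None and a_prevail != a2_prevail:
                return False
        return True

Theorem C.22 states that any two actions left unordered by ≺ pass such a test, and Theorem C.23 below adds that no pair ordered by ≺ is independent, which is why no ordering constraint can be dropped.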

Theorem C.23 Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is a SAS-PUS planning problem and that ⟨Δ, ≺⟩ is defined according to Definition 5.32. Then ⟨Δ, ≺⟩ is a maximally parallel plan from x^0 to x^*. □

Proof: That ⟨Δ, ≺⟩ is a parallel plan follows from Theorem C.22, so it remains to prove that there is no relation ≺' ⊂ ≺ such that ⟨Δ, ≺'⟩ is a parallel plan from x^0 to x^*. Suppose there is such a ≺'. Then ≺' must be a partial order with ≺' ⊂ ≺. We now prove that for every pair of actions a, a' ∈ Δ, if a ≺ a' then a and a' are not independent. It is obvious from Definition 5.32 that if a ≺ a' then one of the following three cases must apply:

1. a, a' ∈ α where α is an i-chain from x^0 to x^* for some i ∈ M, and a ≺ a' since a ≺_α a' and ≺_α ⊆ ≺. Obviously, a, a' ∈ Δ[i] so e_i(a) ≠ u and e_i(a') ≠ u, and a and a' cannot be independent.

2. f_i(a') ≠ u and a ∈ β where β is an i-chain from x^0 to f(a') for some i ∈ M, and a ≺ a' since β ≺ a' and a ∈ β. Obviously, e_i(a) ≠ u so a and a' cannot be independent.

3. f_i(a) ≠ u and a' ∈ γ where γ is an i-chain from f(a) to x^* for some i ∈ M, and a ≺ a' since a ≺ γ and a' ∈ γ. Obviously, e_i(a') ≠ u so a and a' cannot be independent.

Hence, if ≺' ⊂ ≺ then there must be two actions a, a' ∈ Δ such that neither a ≺' a' nor a' ≺' a but a and a' are not independent. Consequently, ⟨Δ, ≺'⟩ cannot be a parallel plan, so the assumption is contradicted and the theorem follows. □

Finally, we can present the proof for Theorem 5.35.

Theorem C.24 [Theorem 5.35] Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is a SAS-PUS planning problem. Then ⟨Δ, ≺⟩ as defined according to Definition 5.32 is a minimal and maximally parallel plan from x^0 to x^* if and only if there is any plan from x^0 to x^*. □

Proof: Immediate from Theorems C.20, C.21 and C.23. □

C.2.2 Proof of Theorem 5.37

This subsection proves the correctness of Algorithm 5.2, i.e., Theorem 5.37. We first prove the correctness of BuildChain (Lemmas C.26 to C.28). Then a number of lemmas prove that the algorithm finds ⟨Δ, ≺⟩ according to Definition 5.32. This provides the basis for the proofs of soundness (Lemma C.35) and completeness (Lemma C.36), which are coalesced into Theorem 5.37. In this subsection, we let A̅ denote the initial value of the parameter A to Algorithm 5.2. We first prove that no minimal i-chain from x^0 to x^* contains an action whose pre-condition is fulfilled in the final state x^*.

Lemma C.25 Given a SAS-PUS problem Π = ⟨⟨M, S, H⟩, x^0, x^*⟩, let i ∈ M. Then a minimal i-chain from x^0 to x^* contains no action a such that b_i(a) = x^*_i. □


Proof: Trivial if x^0_i = x^*_i, so assume x^0_i ≠ x^*_i. Suppose that α = ⟨a_1, ..., a_m⟩ is a minimal i-chain from x^0 to x^*, and that there is a k such that 1 ≤ k ≤ m and b_i(a_k) = x^*_i. Now, k ≠ 1 since b_i(a_k) = x^*_i ≠ x^0_i. Definition 5.27 gives e_i(a_{k-1}) = x^*_i, so ⟨a_1, ..., a_{k-1}⟩ must be an i-chain from x^0 to x^*, thus contradicting that α is minimal. Hence, the lemma must hold. □
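The chain structure that this and the following lemmas rely on can also be stated operationally. The sketch below checks whether a sequence of actions forms an i-chain between two values of variable i in the sense used in the proof above: the first action requires the start value, consecutive post- and pre-conditions on i match, and the last action produces the end value. It reuses the hypothetical Action record from the earlier sketch and only mirrors how Definition 5.27 is applied here, since the definition itself is not restated in this appendix:

    def is_i_chain(chain: List[Action], i: int, v_from: Any, v_to: Any) -> bool:
        # The empty chain is an i-chain only when no change of variable i is needed.
        if not chain:
            return v_from == v_to
        # The first action must be applicable when variable i holds v_from.
        if chain[0].b.get(i) != v_from:
            return False
        # Each action must produce exactly the value on i that its successor requires.
        for prev, nxt in zip(chain, chain[1:]):
            if prev.e.get(i) is None or prev.e.get(i) != nxt.b.get(i):
                return False
        # The last action must leave variable i with the value v_to.
        return chain[-1].e.get(i) == v_to

With this reading, Lemma C.25 says that a minimal chain never revisits the goal value x^*_i before its last action.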

The next lemma shows that the procedure BuildChain in Algorithm 5.2 returns a minimal i-chain as an ordered list.

Lemma C.26 If A contains an i-chain from x^F to x^T then BuildChain finds a minimal i-chain α from x^F to x^T such that α ⊆ A. Furthermore, α is removed from A, inserted into D and T, and ≺_α is inserted into r, where ≺_α is as defined in Definition 5.2. Finally, α is returned as a list ordered under ≺_α. □

Proof: A contains an i-chain from x^F to x^T so, by Proposition 5.28, there is a minimal i-chain β ⊆ A from x^F to x^T. First suppose that x^F_i = x^T_i. Obviously, β = ∅. Since x = x^T initially, x_i = x^T_i = x^F_i and BuildChain never goes through the while-loop, so A, D, T, and r remain invariant and the empty list is returned. This means that BuildChain has found the empty i-chain.

Instead suppose that x^F_i ≠ x^T_i. Let β = ⟨b_m, ..., b_1⟩. We prove by induction over the number of turns, k, through the while-loop that for each k ≤ m the loop makes at least k turns, and, after the kth turn, BuildChain has found a minimal i-chain α_k = {a_k, ..., a_1} ⊆ A from b(b_k) to x^T such that type(a_l) = type(b_l) for 1 ≤ l ≤ k, a' = a_k, and x = b(a_k).

Basis: Before the first turn through the loop, x = x^T so x_i ≠ x^F_i and the loop makes at least one turn. FindAndRemove searches A for an action a such that e_i(a) = x_i = x^T_i. By definition of β, b_1 is such an action so, since b_1 ∈ A, there is at least one such action in A. Let a_1 denote the action returned by FindAndRemove. Post-uniqueness gives type(a_1) = type(b_1). Trivially, a_1 is a minimal i-chain from b(b_1) to x^T. Furthermore, a' = a_1 and x = b(a_1) after the first turn through the loop.

Induction: After the kth turn through the loop, x = b(a_k) and b(a_k) = b(b_k) since type(a_k) = type(b_k). If k < m, Lemma C.25 gives b_i(b_k) ≠ x^F_i since β is a minimal i-chain from x^F to x^T, so x_i ≠ x^F_i and the loop makes a (k+1)st turn. FindAndRemove searches A for an action a such that e_i(a) = x_i = b_i(a_k) = b_i(b_k). Such an action exists since, by definition of β, e_i(b_{k+1}) = b_i(b_k), and b_{k+1} ∈ A as β ⊆ A when BuildChain is called, and only the actions a_k, ..., a_1 can possibly have been removed from A after the k first turns through the loop. Let a_{k+1} denote the action returned by FindAndRemove. Post-uniqueness gives type(a_{k+1}) = type(b_{k+1}). Obviously {a_{k+1}, ..., a_1} is a minimal i-chain from b(b_{k+1}) to x^T. Furthermore, a' = a_{k+1} and x = b(a_{k+1}) after the (k+1)st turn through the loop.


Consequently, x = b(a_m) = b(b_m) after the mth turn through the loop. The definition of β gives that b_i(b_m) = x^F_i and thus also x_i = x^F_i after the mth turn, so the loop will do exactly m turns. Furthermore, after the mth turn a minimal i-chain α = ⟨a_m, ..., a_1⟩ ⊆ A from b(b_m) to x^T such that type(a_l) = type(b_l) for 1 ≤ l ≤ m has been found. Since, by definition of β, b_i(b_m) = x^F_i, it follows that α is a minimal i-chain from x^F to x^T. In the kth turn through the loop, for 1 ≤ k ≤ m, a_k is removed from A by FindAndRemove, inserted into D and T, and added to the front of L. In the first turn, a' is nil so r remains invariant, but a' = a_k when the (k+1)st turn is made, so a_{k+1} r a_k is guaranteed for 1 ≤ k < m. Obviously, BuildChain removes α from A, inserts it into D and T, inserts ≺_α into r, and returns α as a list ordered under ≺_α. □

We must also show that BuildChain fails if it is impossible to construct an i-chain using the actions in the set A.

Lemma C.27 If A does not contain any i-chain from x^F to x^T then BuildChain fails. □

Proof: Obviously, x^F_i ≠ x^T_i since A does not contain the empty i-chain ∅ from x^F to x^T. We prove by induction over the number of turns through the while-loop that BuildChain fails at the latest in the mth turn, where m = |A| + 1.

Basis: Initially, x_i = x^T_i ≠ x^F_i so the loop does at least one turn. A is searched for an action a_1 such that e_i(a_1) = x_i. If such an a_1 exists, it is removed from A and x is set to b(a_1). Furthermore, x_i = b_i(a_1) ≠ x^F_i since a_1 would otherwise be an i-chain from x^F to x^T, which is impossible since a_1 was initially in A. On the other hand, if there is no such a_1 in A, then BuildChain fails.

Induction: Suppose BuildChain has made k turns through the while-loop. It has then found a sequence a_k, ..., a_1 of actions in A and removed these. By the induction hypothesis, x = b(a_k) and b_i(a_k) ≠ x^F_i, so BuildChain makes at least one more turn through the loop and searches A for an action a_{k+1} such that e_i(a_{k+1}) = b_i(a_k). If such an action is found, it is removed from A and x is set to b(a_{k+1}). Furthermore, the sequence a_{k+1}, ..., a_1 is an i-chain from b(a_{k+1}) to x^T, so b_i(a_{k+1}) ≠ x^F_i since A would otherwise have contained an i-chain from x^F to x^T. If, on the other hand, no such action exists, then BuildChain fails.

Since one action is removed from A every turn through the loop and A is finite, it is obvious that BuildChain eventually fails. □

Using the lemmas above we give a lemma stating the correctness of the procedure BuildChain.


Lemma C.28 If BuildChain does not fail, it finds a minimal i-chain from x^F to x^T. □

Proof: Since BuildChain does not fail, the contrapositive of Lemma C.27 gives that A contains an i-chain from x^F to x^T, since A ⊆ A̅ and A̅ is finite by definition. Hence, Lemma C.26 gives that BuildChain finds a minimal i-chain from x^F to x^T. □
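Lemmas C.26–C.28 describe BuildChain operationally: starting from the target value on variable i, it repeatedly lets FindAndRemove pick an action whose post-condition on i equals the value currently needed, and post-uniqueness makes this choice unique up to action type. The following Python sketch re-implements that backward chaining in the same hypothetical setting as the earlier sketches; it returns a chain in execution order or None on failure, and deliberately omits the bookkeeping of D, T and r performed by Algorithm 5.2:

    def build_chain(available: List[Action], i: int, v_from: Any, v_to: Any):
        # Backward chaining from v_to towards v_from, in the spirit of BuildChain.
        chain: List[Action] = []      # assembled so that it ends up in execution order
        needed = v_to                 # value of variable i that still has to be produced
        pool = list(available)        # local copy; selected actions are removed from it
        while needed != v_from:
            # FindAndRemove: some action producing the needed value on variable i.
            candidates = [a for a in pool if a.e.get(i) == needed]
            if not candidates:
                return None           # failure: no i-chain can be built from the pool
            a = candidates[0]         # by post-uniqueness all candidates share one type
            pool.remove(a)
            chain.insert(0, a)        # prepend, so earlier actions end up first
            needed = a.b.get(i)       # its pre-condition value must now be produced
            if needed is None:        # every action in an i-chain must define b_i
                return None
        return chain

Calling such a routine once per state variable, as the for-loop discussed in Lemma C.29 does, yields the initial per-variable chains D[i].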

We can now turn our attention to the main procedure in Algorithm 5.2, namely the procedure PlanPUS.

Lemma C.29 If Algorithm 5.2 does not fail, then D[i] is a minimal i-chain from x^0 to x^* for each i ∈ M, and r = ⋃_{i∈M} ≺_{D[i]} immediately after the for-loop in lines 41–44. □

Proof: The loop calls BuildChain once for each i ∈ M to build a minimal i-chain from x^0 to x^*. By assumption, PlanPUS does not fail so BuildChain does not fail either, and, by Lemma C.28, it finds such a, possibly empty, i-chain α which is inserted into D. Obviously, α ⊆ D[i] so, since BuildChain is called only once for each i ∈ M, D[i] = α immediately after line 44. Moreover, for each minimal i-chain α that BuildChain inserts into D, it also inserts ≺_α into r. Nothing else is inserted into r, so r = ⋃_{i∈M} ≺_{D[i]}. □

Lemma C.30 After line 44 in Algorithm 5.2, for any i ∈ M, D[i] can change only in the body of the for-loop over lines 48–76. Additionally, D[i] can only be changed when the value of the loop variable is i and the action a removed from T this time through the while-loop is such that f_i(a) ≠ u. □

Proof: Obvious from Algorithm 5.2. □

Given the lemmas above we can show that D and r in the algorithm fulfill the first three parts of Definition 5.32. The proof is quite lengthy and is split into several parts depending on the value of the prevail-conditions.

Lemma C.31 Given a SAS-PUS planning problem, if Algorithm 5.2 does not fail, then D and r⁺ fulfill points 1–3 of Definition 5.32 when the algorithm terminates. □

Proof: We first note that no action is ever removed from D, so part 1 of Definition 5.32 is fulfilled by Lemma C.29. We also note that actions are inserted into D only by BuildChain, and every action inserted into D is also inserted into T. Furthermore, no other actions are ever inserted into T and BuildChain is never called after the while-loop in lines 46–77. Hence, exactly those actions ever inserted into D are eventually selected in line 47, and processed by the body of the while-loop.

We first prove that, for any i ∈ M, D[i] and r⁺ fulfill part 1 of Definition 5.32. Lemma C.29 gives that D[i] = α is a, possibly empty, minimal i-chain from x^0 to x^* and that ≺_α ⊆ r immediately after the loop in lines 41–44. Since both D and r are non-decreasing, α ⊆ D[i] and ≺_α ⊆ r when PlanPUS terminates. Hence, D[i] and r⁺ fulfill part 1 of Definition 5.32.

We now prove by cases that, for any i ∈ M, D[i] and r⁺ fulfill parts 2 and 3 of Definition 5.32. Figure C.1 depicts the i-chains in D[i] for each case. To realize that these cases are exhaustive, one should note that single-valuedness gives that if there is any a ∈ D such that f_i(a) ≠ u, then f_i(a') = f_i(a) for all a' ∈ D such that f_i(a') ≠ u.

Figure C.1: The cases in the proof of Lemma C.31. (Figure not reproduced; it sketches the i-chains α, β and γ between x^0, x^* and f(a) for cases 1–8.)

(1,2) Case: f_i(a) = u for all a ∈ D. Trivially fulfilled since f_i(a) ⊑ x^0_i and f_i(a) ⊑ x^*_i for all a ∈ D.

(3) Case: x^0_i = x^*_i = f_i(a) for all a ∈ D such that f_i(a) ≠ u. Trivially fulfilled since f_i(a) ⊑ x^0_i and f_i(a) ⊑ x^*_i for all a ∈ D.


(4) Case: x^0_i = x^*_i ≠ f_i(a) for all a ∈ D such that f_i(a) ≠ u. Let a_1, ..., a_m be all actions a in T, and thus also in D, such that f_i(a) ≠ u, in the order removed from T. Single-valuedness gives that f_i(a_k) = f_i(a_1) for 1 < k ≤ m. Also define D_k, where 1 ≤ k ≤ m, to be D just before a_k is removed from T, and D_{m+1} to be D just after a_m has been processed by the body of the while-loop. Lemma C.29 gives that D[i] = ∅ immediately after the for-loop in lines 40–42, so Lemma C.30 gives D_1[i] = ∅. Now, consider the removal of a_1 from T. By Lemma C.30, D[i] can only change when the loop variable of the for-loop in lines 48–76 has the value i, so D[i] = D_1[i] = ∅ immediately before this turn through the loop. Since f_i(a_1) ⋢ x^0_i, D is searched for an action a' such that e_i(a') = f_i(a_1). If there were such an action in D_1, it would also have to be in D_1[i], but D_1[i] = ∅ so no such action exists. Hence, BuildChain is called to find a minimal i-chain from x^0 to f(a_1) and, since PlanPUS does not fail, it finds such an i-chain β = ⟨b_1, ..., b_{m_β}⟩ which it inserts into D. We now have D = D_1 ∪ β, so β ⊆ D. Furthermore, BuildChain has also inserted ≺_β into r and b_{m_β} r a_1 is established in line 54, so D[i] and r⁺ fulfill part 2 of the definition. Now, f_i(a_1) ⋢ x^*_i so D = D_1 ∪ β is searched for an action a' such that b_i(a') = f_i(a_1). If there were such an action in D, it would have to be in D[i], but D[i] = (D_1 ∪ β)[i] = β, so Lemma C.25 gives that there can be no such action since β is minimal. Hence, BuildChain is called to find a minimal i-chain from f(a_1) to x^* and, since PlanPUS does not fail, it finds such an i-chain γ = ⟨c_1, ..., c_{m_γ}⟩ which it inserts into D. We now have D = D_1 ∪ β ∪ γ, so Lemma C.30 gives D_2[i] = (D_1 ∪ β ∪ γ)[i] = β ∪ γ. Moreover, BuildChain has also inserted ≺_γ into r and a_1 r c_1 is established in line 52, so D[i] and r⁺ fulfill part 3 of the definition. Now consider the removal of a_k from T for any k such that 1 < k ≤ m. Since f_i(a_k) ⋢ x^0_i, D is searched for an action a' such that e_i(a') = f_i(a_k). At least one such action exists since b_{m_β} ∈ β and, since D is non-decreasing, β ⊆ D_2[i] ⊆ D_k[i]. Similarly, f_i(a_k) ⋢ x^*_i so D is searched for an action a'' such that b_i(a'') = f_i(a_k). Once again, at least one such action exists since c_1 ∈ γ and, since D is non-decreasing, γ ⊆ D_2[i] ⊆ D_k[i]. Hence, BuildChain is not called in this turn through the while-loop, so D[i] remains invariant and Lemma C.30 gives D_{k+1}[i] = D_k[i]. Obviously, D[i] must have remained invariant also while processing a_l for 1 < l < k, so Lemma C.30 gives D_k[i] = D_2[i] = (D_1 ∪ β ∪ γ)[i] = β ∪ γ since D_1[i] = ∅. Applying Lemma C.30 to both β and γ gives that the only possible choices for a' and a'' are a' = b_{m_β} and a'' = c_1 respectively. Lemma C.26 gives β, γ ⊆ D, ≺_β ⊆ r and ≺_γ ⊆ r. Additionally, b_{m_β} r a_k and a_k r c_1 are secured in lines 54 and 72 respectively, so D and r⁺ fulfill parts 2–3 of the definition.

(5) Case: x^0_i = f_i(a) ≠ x^*_i for all a ∈ D such that f_i(a) ≠ u. Let a ∈ D be any action such that f_i(a) ≠ u and which has just been removed from T. Part 2 of the definition is trivially fulfilled since f_i(a) ⊑ x^0_i. However, f_i(a) ⋢ x^*_i so D is searched for an action a' such that b_i(a') = f_i(a). At least one such action exists in D since f_i(a) = x^0_i and b_i(a_1) = x^0_i, where a_1 is the first action of the minimal i-chain α = D[i] from x^0 to x^* given by Lemma C.29, so BuildChain is not called and Lemma C.30 gives that D remains invariant this turn through the while-loop. Furthermore, single-valuedness gives that any other action a'' such that f_i(a'') ≠ u and a'' is removed from T before a must obey f_i(a'') = f_i(a), and thus be processed in exactly the same way as a is. Consequently, BuildChain cannot have been called for a'' either and Lemma C.30, once again, gives that D[i] must have remained invariant while processing any such a''. We get D[i] = α throughout the execution of the while-loop. Hence, a_1 must be the only action a' ∈ D such that b_i(a') = f_i(a), and a_1 is the action found in line 64. Since f_i(a) = x^0_i, it follows that α is an i-chain from f(a) to x^*. Moreover, α ⊆ D[i], Lemma C.29 gives ≺_α ⊆ r, and a r a_1 is guaranteed in line 66, so part 3 of the definition is fulfilled for D[i] and r⁺.

(6) Case: x^0_i ≠ f_i(a) = x^*_i for all a ∈ D such that f_i(a) ≠ u. Analogous to case 5.

(7) Case: x^0_i ≠ x^*_i, x^0_i ≠ f_i(a) ≠ x^*_i for all a ∈ D such that f_i(a) ≠ u, and all i-chains from x^0 to x^* pass f_i(a). Let a ∈ D be any action such that f_i(a) ≠ u which has just been removed from T. Since f_i(a) ⋢ x^0_i, D is searched for an action a' such that e_i(a') = f_i(a). By assumption, the minimal i-chain α = ⟨a_1, ..., a_m⟩ = D[i] from x^0 to x^* must pass f_i(a), so e_i(a_k) = f_i(a) for some k such that 1 ≤ k < m. Hence, there is at least one a' such that e_i(a') = f_i(a) in D. Furthermore, f_i(a) ⋢ x^*_i so D is also searched for an action a'' such that b_i(a'') = f_i(a), and Definition 5.27 gives that a_{k+1} ∈ α ⊆ D must be such an action. Consequently, no new actions are inserted into D. Single-valuedness gives that any action a''' such that f_i(a''') ≠ u and a''' is removed from T before a must obey f_i(a''') = f_i(a), and it must thus have been handled in the same way as a. It follows from Lemma C.30 that D remains invariant throughout the algorithm after line 44, so D[i] = α. Since α is minimal, a_k and a_{k+1} are the only possible choices for a' and a'' respectively. There are obviously i-chains β = ⟨a_1, ..., a_k⟩ from x^0 to f(a) and γ = ⟨a_{k+1}, ..., a_m⟩ from f(a) to x^* such that (β; γ) = α ⊆ D. In addition, ≺_β, ≺_γ ⊆ ≺_α ⊆ r, and a_k r a and a r a_{k+1} are secured in lines 54 and 66 respectively, so parts 2 and 3 are fulfilled.


(8) Case: x^0_i ≠ x^*_i, x^0_i ≠ f_i(a) ≠ x^*_i for all a ∈ D such that f_i(a) ≠ u, and not all i-chains from x^0 to x^* pass f_i(a). Let a_1, ..., a_m be all actions a in T, and thus also in D, such that f_i(a) ≠ u, in the order they are removed from T. Also let D_k, where 1 ≤ k ≤ m, denote D just before a_k is removed from T. Since x^0_i ≠ x^*_i, there is, by Lemma C.29, a minimal i-chain α from x^0 to x^* such that α ⊆ D, so Lemma C.30 gives D_1[i] = α. Now consider the removal of a_1 from T. Since f_i(a_1) ⋢ x^0_i, D is searched for an action a such that e_i(a) = f_i(a_1). However, D_1[i] = α and α does not pass f_i(a_1), so Definition 5.27 gives that D_1[i] contains no such action. Consequently, since PlanPUS does not fail, Lemma C.28 gives that BuildChain inserts a minimal i-chain β = ⟨b_1, ..., b_{m_β}⟩ from x^0 to f(a_1) into D and inserts ≺_β into r. Furthermore, b_{m_β} r a_1 is ascertained in line 58, so part 2 is fulfilled for a_1. Also, f_i(a_1) ⋢ x^*_i so D is searched for an action a such that b_i(a) = f_i(a_1). Now, D = D_1 ∪ β so D[i] = α ∪ β. However, a ∉ α since α does not pass f_i(a_1), and Lemma C.25 gives a ∉ β as β is a minimal i-chain from x^0 to f(a_1). Consequently, the search fails and BuildChain inserts a minimal i-chain γ = ⟨c_1, ..., c_{m_γ}⟩ from f(a_1) to x^0 into D and inserts ≺_γ into r. Furthermore, a_1 r c_1 is ascertained in line 70. Now P is searched for an action a such that b_i(a) = x^0_i. Since P[i] = D_1[i] = α and α is a minimal i-chain from x^0 to x^*, there is exactly one such a, namely the first action in α. Line 72 then establishes c_{m_γ} r a to ascertain ≺_{(γ;α)} ⊆ r, so Theorem 5.30 gives that (γ; α) is an i-chain from f(a_1) to x^*, ≺_{(γ;α)} ⊆ r, and a_1 r⁺ (γ; α), so part 3 is fulfilled for a_1. Now consider the removal from T of a_k for 1 < k ≤ m. f_i(a_k) ⋢ x^0_i so D_k is searched for an action a such that e_i(a) = f_i(a_k). Such an action exists since b_{m_β} ∈ D, and single-valuedness implies e_i(b_{m_β}) = f_i(a_1) = f_i(a_k), so BuildChain is not called in line 57. Similarly, f_i(a_k) ⋢ x^*_i so D_k is searched for an action a' such that b_i(a') = f_i(a_k). Once again, such an action exists since c_1 ∈ D, so BuildChain is not called in line 69 either. It follows that D = D_k remains invariant and, since a_l for 1 < l < k must be handled the same way as a_k, Lemma C.30 gives D_l[i] = D_2[i] = α ∪ β ∪ γ for 1 < l < k. Obviously b_{m_β} and c_1 are the only possible choices for a and a' respectively. Now, b_{m_β} r a_k is inserted into r, ≺_β ⊆ r, and β ⊆ D[i] is an i-chain from x^0 to f(a_k), so part 2 is fulfilled for a_k. Furthermore, a_k r c_1 is inserted into r, ≺_γ ⊆ r, ≺_α ⊆ r, and c_{m_γ} r a, where a is the first action in α, has already been inserted into r, thus guaranteeing that ≺_{(γ;α)} ⊆ r, and (γ; α) ⊆ D[i] is an i-chain from f(a_k) to x^*, so part 3 is fulfilled for a_k.

For each i ∈ M, parts 1–3 of Definition 5.32 are fulfilled whichever of these cases applies. □


The next lemma follows because of post-unique sets of action types.

Lemma C.32 Given a SAS-PUS-structure ⟨M, S, H⟩, suppose x, x', x'' ∈ S and i ∈ M. If there is an i-chain from x to x' not passing x''_i, then any i-chain from x'' to x' must pass x_i. □

Proof: Suppose α = ⟨a_{m_α}, ..., a_1⟩ is an i-chain from x to x' not passing x''_i and let β = ⟨b_{m_β}, ..., b_1⟩ be an i-chain from x'' to x'. Post-uniqueness gives type(a_k) = type(b_k) for 1 ≤ k ≤ min(m_α, m_β). Suppose m_α < m_β; then e_i(b_{m_α+1}) = b_i(b_{m_α}) = b_i(a_{m_α}) = x_i, so β passes x_i. If m_α = m_β, then x''_i = b_i(b_{m_β}) = b_i(a_{m_α}) = x_i, so β again passes x_i. Finally, suppose m_α > m_β; then e_i(a_{m_β+1}) = b_i(a_{m_β}) = b_i(b_{m_β}) = x''_i, so α must pass x''_i, which contradicts the assumption. Hence, every i-chain from x'' to x' must pass x_i. □

We can now show that the set D in Algorithm 5.2 is minimal with respect to parts 1–3 in Definition 5.32, thus fulfilling part 4 of the definition.

Lemma C.33 Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is a SAS-PUS planning problem. If Algorithm 5.2 does not fail before line 68, then D fulfills part 4 of Definition 5.32. □

Proof: We prove that for each i ∈ M and for each of the cases in Lemma C.31, D[i] is minimal. In all cases but 4 and 8, D[i] is a minimal i-chain from x^0 to x^*. Since an i-chain from x^0 to x^* is required by part 1 of the definition, D[i] is minimal in these cases. In case 4, D[i] = β ∪ γ where β is a minimal i-chain from x^0 to f(a) and γ is a minimal i-chain from f(a) to x^* for those a ∈ D such that f_i(a) ≠ u. Both these i-chains are required by parts 2 and 3 of the definition so, since they are both minimal, D[i] is minimal. In case 8, D[i] = α ∪ β ∪ γ where β and γ are minimal i-chains from x^0 to f(a) and from f(a) to x^0 respectively for those a ∈ D such that f_i(a) ≠ u, and α is a minimal i-chain from x^0 to x^*. It remains to prove that (γ; α) is a minimal i-chain from f(a) to x^*. By assumption, α does not pass f_i(a), so Lemma C.32 gives that any i-chain from f(a) to x^* must pass x^0_i. Since γ is a minimal i-chain from f(a) to x^0, (γ; α) must be a minimal i-chain from f(a) to x^*. Both β and (γ; α) are required by parts 2 and 3 respectively and they are both minimal, so D[i] is minimal. □

If the algorithm does not fail, then r⁺ is a minimal partial order satisfying parts 1–3 in Definition 5.32.

Lemma C.34 Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is a SAS-PUS planning problem. If Algorithm 5.2 does not fail, then r fulfills part 5 of Definition 5.32. □


Proof: Consider the proof of Lemma C.31. Every time BuildChain inserts a minimal i-chain α into D it also inserts ≺_α into r, which is required by parts 1–3 of the definition, and otherwise does not affect r. The only other possible modifications of r occur in lines 54, 58, 66, 70 and 72. r is modified in line 54 or 58 only if there is some a ∈ D such that f_i(a) ≠ x^0_i for some i ∈ M, and it is then modified such that b_m r a, where β = ⟨b_1, ..., b_m⟩ is an i-chain from x^0 to f(a). This modification is necessary in order to ascertain β r⁺ a, which is required by part 2 of the definition. The modification of r in line 66 or 70 is analogous. Finally, r is modified in line 72 only if case 8 of Lemma C.31 applies. In this case, there are minimal i-chains γ ⊆ D from f(a) to x^0 and α ⊆ D from x^0 to x^*, where the i-chain (γ; α) from f(a) to x^* is required by part 3 of the definition, so ≺_{(γ;α)} ⊆ r is guaranteed in line 72. Obviously, r is not modified unless required by parts 1–3 of the definition. Finally, at line 79 it is tested whether there are any cycles in r, and since the algorithm does not fail there is no such cycle. Now, that r contains no cycles is the same as that r⁺ is a partial order, because r⁺ is irreflexive by definition if there are no cycles in r. It follows that r⁺ is a minimal partial order, and thus r⁺ fulfills part 5 of the definition. □

Using the lemmas above, soundness of Algorithm 5.2 can be shown, i.e., we can show that if the algorithm does not fail it returns ⟨Δ, ≺⟩ as defined in Definition 5.32.

Lemma C.35 Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is a SAS-PUS problem. If Algorithm 5.2 does not fail, then it returns a tuple ⟨D, r⟩ such that D = Δ and r⁺ = ≺, where Δ and ≺ are defined according to Definition 5.32. □

Proof: Immediate from Lemmas C.31, C.33 and C.34. □

That Algorithm 5.2 is complete, i.e., that it always fails if no plan exists, is shown in the next lemma.

Lemma C.36 Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is in the SAS-PUS class. If Algorithm 5.2 fails, then there is no plan from x^0 to x^*. □

Proof: PlanPUS can fail either in BuildChain or in line 79.

1. Suppose BuildChain fails. This must be because BuildChain is called to find a minimal i-chain α between two states x^F and x^T, but for some value x_i ∈ S_i that α must pass there is no a ∈ A such that e_i(a) = x_i. By Lemma C.28, BuildChain constructs only minimal i-chains, so Theorem 5.29 gives that any i-chain from x^F to x^T must contain an action a' such that e_i(a') = x_i. There are two possible reasons why A contains no such action.


(a) Suppose there is no h ∈ H such that e_i(h) = x_i. Then there can be no i-chain from x^F to x^T using only actions in H. It is obvious from the proofs of Lemmas C.31 and C.33 that BuildChain is only called to find i-chains required by Definition 5.32. Obviously, there is no plan fulfilling Definition 5.32.

(b) Suppose there is an h ∈ H such that e_i(h) = x_i. Then A must initially contain two actions of type h. If BuildChain fails when searching for such an action in A, it has obviously already inserted two such actions into D and tries to find a third one, also in A, to insert into D. Lemma C.33 gives that PlanPUS inserts no action into D unless required by Definition 5.32 so, by Lemma 5.34 and post-uniqueness, there is no plan fulfilling Definition 5.32.

2. Suppose PlanPUS fails in line 79; then r⁺ is not irreflexive. Furthermore, suppose there is a plan ⟨Γ, ≺_Γ⟩ from x^0 to x^*; then Lemma C.19 gives that there are Δ ⊆ Γ and ≺ ⊆ ≺_Γ such that ⟨Δ, ≺⟩ satisfies Definition 5.32. Lemma 5.33 gives that there is a relabelling g such that g(D) = Δ and, hence, ⟨g(D), ≺⟩ satisfies Definition 5.32. Lemmas C.31 and C.34 give that r⁺ fulfills parts 1–3 of Definition 5.32 and that it is minimal in this respect. It is obvious that g(r⁺) ⊆ ≺ so, since r⁺ is not a partial order, ≺ cannot be a partial order, thus contradicting that ⟨Δ, ≺⟩, and thus also ⟨Γ, ≺_Γ⟩, is a plan. Hence, if PlanPUS fails in line 79, then there is no plan from x^0 to x^*.

Consequently, if PlanPUS fails, there is no plan fulfilling Definition 5.32 from x^0 to x^*, and thus, by the contrapositive of Lemma C.19, there is no plan at all from x^0 to x^*. □
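Both Lemma C.34 and case 2 above rest on the observation that the transitive closure r⁺ is irreflexive, and hence a partial order, exactly when r contains no cycle, which is what line 79 of Algorithm 5.2 tests. The following is a minimal, self-contained sketch of such an acyclicity test over an edge-list representation of r (the representation is hypothetical, not the one used in the implementation described in the thesis):

    from collections import defaultdict
    from typing import Any, List, Tuple

    def is_acyclic(edges: List[Tuple[Any, Any]]) -> bool:
        # Depth-first search for a cycle in the directed graph induced by r,
        # where an edge (x, y) means x r y, i.e. x must precede y.
        succ = defaultdict(list)
        nodes = set()
        for x, y in edges:
            succ[x].append(y)
            nodes.update((x, y))
        WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on the current path / finished
        colour = {n: WHITE for n in nodes}

        def visit(n) -> bool:          # True if a cycle is reachable from n
            colour[n] = GREY
            for m in succ[n]:
                if colour[m] == GREY or (colour[m] == WHITE and visit(m)):
                    return True
            colour[n] = BLACK
            return False

        return not any(colour[n] == WHITE and visit(n) for n in nodes)

If such a test fails, PlanPUS reports failure, and Lemma C.36 shows that no plan exists in that case.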

Finally, we can state the main theorem in Section 5.2.3 (Theorem 5.37). This theorem states that Algorithm 5.2 returns a minimal and maximally parallel plan if and only if a plan exists.

Theorem C.37 [Theorem 5.37] Suppose Π = ⟨⟨M, S, H⟩, x^0, x^*⟩ is a SAS-PUS planning problem. If there is a plan from x^0 to x^*, then Algorithm 5.2 returns a tuple ⟨D, r⟩ such that ⟨D, r⁺⟩ is a minimal and maximally parallel plan from x^0 to x^* according to Definition 5.32, and otherwise it fails. □

Proof: Immediate from Definition 5.32, Theorem 5.35 and Lemmas C.35 and C.36. □

Subject Index

A
A* 32
action 21, 44
action label 43, 44
action type 21, 43
affects 46
antisymmetric 164

B
binary 58
breadth-first search 32

C
complete 18, 67
composition 167
consistent 40
constraint-posting planner 27
contradictory 40
controllability 145

D
DEDS 35
defined state variables 40
depth-first search 32
Dijkstra's algorithm 35
directed graph 163
disables 70
discrete event dynamic system 35
dynamic programming 35

E
enables 70
event 35
exponential complexity 19
extended simplified action structures 29

F
frame problem 24

G
goal actions 106
GRAFCET 11, 133

H
hybrid systems 37

I
i-chain 87
independent actions 48
initial state 46
intractable 19
inverse 172
irreflexive 164
isomorphic 73

J
join 166

L
lattice 166
linearity assumption 25
linear order 165
linear plan 22, 48
linear planners 23

M
macro steps 11
maximally parallel plan 49
maximal element 110
meet 166
minimal element 110
minimal i-chain 88
minimal plan 49
more informative 41

N
necessary actions (SAS-PUS) 89
necessary actions (SAS(+)-PUB) 68, 69
non-linear plan 22, 48
non-linear planner 26
NP 20
NP-complete 20

O
operator 21

P
P 20
parallel plan 22, 49
parsimonious 45
partial order 164
partial order planner 27
partial state 40
plan 21, 48
planner 21
planning 8, 21
polynomial complexity 19
post-condition 43
post-unique 58
pre-condition 43
pre-enabled actions 104
precedes 70
prevail-condition 43
primarily necessary actions 68, 69
primary split actions 108
PSPACE 20
PSPACE-complete 20

R
rate of growth 19
reachability 37, 145, 146
reactive planner 31
reduction 70
reflexive 164
reflexive partial order 164
relabelling 73
relation 163
reset action 172

S
SAS-structure 57
SAS planning problem 57
SAS-PUBS 29
SAS-PUS 29, 58
SAS(+) 29
SAS(+) planning problem 58
SAS(+)-PUB 58
SAS(+)-PUBS 58
SAS+ 39
SAS+-structure 44, 45
SAS+ planning problem 49
scheduling 37
secondarily necessary actions 68, 69
sequential function charts 11
set/reset pair 171
set action 172
shortest path 32
single-valued 58
size of the input 19
sound 18, 67
state 22
state variable indices 40
steps 12
STRIPS assumption 24
subgoal 112

T
total order 165
total state 40
tractable 19
transitions 12
transitive closure 70, 168

U
unary 58
undefined 40
