Proceedings of ICAR 2003, The 11th International Conference on Advanced Robotics, Coimbra, Portugal, June 30 - July 3, 2003

A software framework for control of multi-sensor, multi-actuator systems

B.J.W. Waarsing, M. Nuttin and H. Van Brussel
Katholieke Universiteit Leuven, Department of Mechanical Engineering, B-3001 Leuven, Belgium
[email protected]

Abstract

This article proposes a software framework for multi-sensor, multi-actuator systems, such as our mobile robot LiAS (Leuven intelligent Autonomous System, see figure 1). The framework is based on an agent-based philosophy, which makes it particularly well suited to programming behaviour-based controllers, although it is applicable to any type of control. The framework implements the communication protocols necessary for such a system and offers easy-to-use interfaces to the user.

1 Introduction

Our research is in the field of behaviour-based robotics. Behaviour-based controllers have been implemented for a range of tasks and on various platforms, which can roughly be grouped into navigational platforms and insect-like robots [1]. We are extending the niche of behaviour-based control to mobile manipulation. A popular objection to behaviour-based control is its so-called lack of determinism in operation. This problem, however, resides mainly in the definition and coordination of the behaviours, and not in behaviour-based control as such. It is nevertheless a serious problem that must be tackled when one wants to implement a behaviour-based controller for manipulation tasks, since manipulators in normal operation are in contact with their environment far more intensively than the well-known examples such as mobile platforms (think of handling objects or executing tasks such as opening a door). This issue was our reason for looking into a formalisation of behaviour coordination in a software framework. The formalisation should define all possible manners of communication and coordination between the different behaviours, so that it becomes possible to reason within this framework about the execution of the behaviours. This will also allow us to create a library of behaviours, from which we can easily select the necessary behaviours and fit them into a controller when needed. This goal corresponds well to the new paradigm of robot controller design, i.e. that robot controllers should be built up from relatively independent components that a developer can easily connect to compose his or her desired controller [2].

1.1 Related work

Many different architectures exist for mobile robots. For example, the AuRA architecture of Arkin [1] was designed for hybrid systems, with a deliberative control component controlling the reactive components below it. Saphira [3] is a well-known environment for programming robots based on the behaviour-based paradigm, where behaviours are defined and coordinated using fuzzy logic. Another system designed for behaviour-based control is DAMN [4], an architecture based on the combination of behaviours by voting. Brooks [5] proposes an integrated system with his subsumption architecture, which is not strictly a software architecture but rather a system architecture. Kasper et al. [6] offer another recent mobile robot architecture, designed especially for behaviour-based learning by demonstration.

1.2 Overview of the paper

This paper focuses on the software part of the framework. Section 2 defines our requirements and section 3 gives the details of our framework implementation. Finally, section 4 evaluates the framework and mentions some possible future research directions.

2 The framework definition

To situate the framework design, let's introduce a first target application: the behaviour-based control of our mobile manipulator LiAS (see figure 1 and also [7]). This is a mobile robot with various sensors (e.g. laser range finder, cameras, force-torque sensor) and actuators (e.g. mobile platform, industrial manipulator). These sensors and actuators each have different characteristics concerning sampling frequency, speed of data acquisition, etc. A nice example clarifying this is the comparison of the vision cameras with the laser range finder. The cameras can work at frame rate (50 Hz), but the processing time of an image ranges from real-time up to minutes, depending on which information needs to be extracted. The laser range finder, on the other hand, only works at a frequency of 2 Hz, and most sensor processing can be done within its sampling period. The same comparison goes for the actuators: the industrial manipulator is controlled at frequencies ranging from 100 Hz to 1000 Hz, while the mobile platform is controlled at a frequency of the order of magnitude of 1 Hz.

Figure 1: Our mobile robot LiAS

2.1 Requirements

First of all, let's define what is meant by a 'software framework' and in what way it is formalised.

Definition 1 (Software framework) A software framework is a collection of base classes from which all components of the controller should inherit. These base classes implement the possible ways the components can be integrated into a control program and formalise their communication.
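To make Definition 1 concrete, here is a minimal sketch of such a base class; the class and method names are purely illustrative, not those of our actual implementation.

```cpp
// Hypothetical sketch of Definition 1: every component of the controller
// inherits from a common base class that fixes how the component is
// integrated into a control program and how it communicates.
class Component {
public:
    virtual ~Component() {}
    // Called by the framework when the component is added to a program.
    virtual void connect() = 0;
    // Called by the framework when the component is removed again.
    virtual void disconnect() = 0;
};
```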

Now let's investigate the requirements of this framework. First of all, as is obvious from our target application, the framework should be able to handle all kinds of sensors and actuators at different frequencies. A second requirement is that the framework should be suitable for behaviour-based control, but it should also be useful for different approaches, since we have done, and are still doing, work on mobile navigation that does not use behaviour-based principles. Ideally, the framework should not restrict the control architecture of the controller in any way, only define its execution and communication. Finally, one of our research goals is to establish a controller library from which users can pick components and add them together (such a component does not necessarily have to be a control module; any kind of useful functionality can be considered). This will enable designers to, for example, program their application in some sort of graphical user interface, in contrast to hard-coding everything over and over again. Furthermore, a combination of the framework with such a library can also provide a basis for learning-by-demonstration algorithms. Learning, in general, is the adaptation of the parameters of some explicit or implicit model. The combination of framework and library provides all the necessary components, which can change the learning problem from low-level skill acquisition to a higher-level skill combination problem. This means that the framework should provide the possibility of changing the configuration of the control program at run-time, i.e. the modules should be dynamically reconfigurable. Of course, this complicates matters, since it cannot be known at design time which skills are available, which skills should be used in certain situations, etc. Therefore, a lot of autonomy should be added to the components, and our framework should be able to handle that. Summarising the requirements:

1. The framework should be able to deal with a multitude of sensors and actuators, each addressed and used at its own sample frequency.

2. The framework should define the execution of the controller, but should not restrict the possible architecture of the controller.

3. The framework should provide the means for easy definition and combination of modules, by clearly defining the interfaces of the modules. These interfaces are used during execution of the controller. The modules do not necessarily need to be control modules in the sense of classical control theory.

4. The framework should consist of hot-swappable modules, i.e. modules that can be loaded and unloaded at runtime (a possible loading mechanism is sketched below). When a module is loaded, it should be able to function (semi-)autonomously, getting all the information it needs (when possible); it should decide for itself whether it can be executed.
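One way to realise the hot-swapping of requirement 4 on a POSIX system is dynamic loading. The following sketch is an assumption about the mechanism, not a description of our implementation; the library name and factory symbol are hypothetical.

```cpp
#include <dlfcn.h>   // POSIX dynamic loading (dlopen, dlsym, dlclose)
#include <cstdio>

// Hypothetical module entry point: each module exports a factory function
// with C linkage that creates its component.
typedef void* (*CreateFn)();

int main() {
    // Load a module at runtime (the library name is illustrative).
    void* handle = dlopen("./libfollow_wall.so", RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    CreateFn create =
        reinterpret_cast<CreateFn>(dlsym(handle, "createComponent"));
    void* component = create ? create() : nullptr;
    // ... hand the component over to the running controller ...
    (void)component;

    dlclose(handle);   // unload the module again at runtime
    return 0;
}
```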

2.2 Design choices

Now that we have defined what the framework should be able to do and cope with, let's further define the responsibilities of the components in the framework. In control systems, there are sensors to be read, control signals to be calculated (possibly using trajectories, possibly using other goal functions) and actuators to be driven (at the correct frequencies). We'll treat all these parts in the subsequent sections.

2.2.1 In- and output. We have opted for sensors and actuators as autonomous components. By this we mean that they control their own internal state and function. You can request the value of a sensor at any time; the sensor itself will make sure you get the most recent value. An actuator component will ask for new values when it is time to steer its physical actuator (this request is considered to take an infinitesimally small amount of time; section 3.1.5 explains why this is necessary and how it is achieved). These choices guarantee that requirement 1 is met. To put it in a definition:

Definition 2 (Sensor component) A sensor component is a software component that represents a physical sensing device in the software program. The sensor component is autonomous in the sense that it keeps its values up to date. Discrete-time sensors have the possibility of firing events when new values are measured. (Discrete-time sensors provide measurements at discrete moments in time, in contrast to continuous-time sensors, which can be read out at any time.)

Definition 3 (Actuator component) An actuator component is a software component that represents a physical actuation device in the software program. The actuator is autonomous in the sense that it actively pulls a new control signal out of the controller to drive the actuator at a fixed control frequency.
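A minimal sketch of Definitions 2 and 3 might look as follows; the interfaces are hypothetical and only illustrate the autonomy of the components: the sensor keeps its own value fresh and can fire an event, while the actuator pulls a new control signal at its own fixed frequency.

```cpp
#include <functional>

// Hypothetical sensor component (Definition 2): it keeps its own value up
// to date and can notify a listener when a new measurement arrives.
class SensorComponent {
public:
    virtual ~SensorComponent() {}
    // Always returns the most recent value; the caller never waits.
    virtual double latestValue() const = 0;
    // Optional event hook, fired by discrete-time sensors on new values.
    void setNewValueHandler(std::function<void(double)> handler) {
        handler_ = handler;
    }
protected:
    std::function<void(double)> handler_;
};

// Hypothetical actuator component (Definition 3): it runs its own loop at
// a fixed control frequency and actively pulls a fresh control signal out
// of the controller on every cycle.
class ActuatorComponent {
public:
    explicit ActuatorComponent(std::function<double()> controlSource)
        : controlSource_(controlSource) {}
    virtual ~ActuatorComponent() {}
    // One cycle of the actuator's loop, e.g. at 100 Hz for a manipulator.
    void tick() { drive(controlSource_()); }
protected:
    virtual void drive(double signal) = 0;  // send the signal to the device
private:
    std::function<double()> controlSource_;
};
```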

2.2.2 Controller. Requirements 2 to 4 conjure up an image of small, autonomous parts in the controller, all busy doing their part of the controlling process. This image is closely linked to agent-based theory (e.g. [8], [9]), which we have adopted. We define our global controller to be built up from basic, (semi-)autonomous components, which we shall call agents. These agents have the possibility to express their appropriateness to the current situation.

Definition 4 (Controller component) The controller component is a component that encapsulates the agglomeration of different (semi-)autonomous sub-controllers. These sub-controllers each perform a certain part of the total calculation, each with its own speciality.

3 The software implementation

Our software framework is closely linked to the work done by Van Breemen [9], who has designed a multiple-controller framework for complex control problems with a focus on SISO systems. We have modified the framework at some crucial points, such as allowing multiple actuators and allowing a single agent to be a member of multiple agencies. This makes the framework much better suited to our target systems. We use the object-oriented design method to define the classes and their relations (see figure 2 for a UML description). The software components are treated further in the following section.

Figure 2: UML diagram of the basic components of the framework (MultiAgentController, Agency, BasicAgent, CoordinationObject, Input and Output, with their multiplicities)

3.1 The framework components

Each program in our framework is built up from basic components (BasicAgents) that implement the communication protocols in the program. These basic agents can implement any function needed in the program. The basic agents are coordinated and combined into an Agency by an appropriate CoordinationObject. The interface that the agency offers to the outside is the same as the interface of a basic agent. This provides the possibility of building even larger agencies from basic agents and agencies. Although this immediately conjures up an image of a hierarchical decomposition, it should be noted that one basic agent or agency is also allowed to be a member of several agencies, which is not strictly hierarchical. The components are treated in more detail below.

3.1.1 Basic agent. A basic agent is an autonomous component, schematically represented in figure 3(a). Each agent has a set of inputs and a set of outputs (see section 3.1.4). The function of the agent is not defined and can range from a simple input-output mapping to a symbolic task planner. The agent has its own opinion on what the correct output should be (again, ranging from an actuator command to a subgoal list, and so forth). The agent is autonomous in the sense that it expresses its belief in its own appropriateness to the current situation, as sensed through its inputs (signal Sapp in figure 3(a)). In the software implementation, the agent object can also be an active object, i.e. in possession of its own thread of execution; this makes the agent even more autonomous, and is essential in cases where the calculation step of the agent takes too much time with respect to the sample period of the actuator, in which case the execution of the agent is no longer synchronous. From its coordination objects, the agent gets information about whether its opinion on the output will be used in the global output (signal Sack in figure 3(a)). Sapp and Sack are defined as:

Sapp ∈ [0, 1], Sack ∈ {true, false}.   (1)

Figure 3: A schematic representation of a basic agent (a) and a coordination object (b)
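As a minimal sketch (with hypothetical names, not the actual class declarations of our implementation), the agent interface implied by the Sapp/Sack protocol could look as follows:

```cpp
#include <vector>

// Hypothetical sketch of the basic agent interface. Sapp is the agent's
// self-assessed appropriateness in [0, 1]; Sack tells the agent whether
// its opinion on the output will actually be used.
class Agent {
public:
    virtual ~Agent() {}
    // Sapp: appropriateness to the current situation, as sensed through
    // the agent's inputs; must lie in [0, 1].
    virtual double appropriateness() const = 0;
    // Sack: the coordinator informs the agent whether its output is used.
    virtual void acknowledge(bool used) = 0;
    // The agent's opinion on the output (here simply a vector of values;
    // in general it could range from an actuator command to a subgoal list).
    virtual std::vector<double> output() const = 0;
};
```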

As an example, let's treat the communication flow for an agent (Agent X) that is coordinated by two coordination objects (A and B) when a new value for 'Output' is demanded. To express the direction of communication, 'pull' communication is preceded by an arrow pointing left (←) and 'push' communication by an arrow pointing right (→):

← Coordination object A asks Agent X for its estimate of its appropriateness Sapp to the current situation.

← Coordination object B asks Agent X for its estimate of its appropriateness Sapp to the current situation.

→ Coordination object A informs Agent X that its opinion on the output will be used, using signal Sack.

→ Coordination object B informs Agent X that its opinion on the output will not be used, using signal Sack.

← Coordination object A asks Agent X for its opinion on 'Output'.

3.1.2 Coordination object. A coordination object can be regarded as the manager of a group of basic agents and agencies (see figure 3(b)). It has access to the same inputs as its agents, and it has all the types of outputs its agents have (notice that it is thus possible for a coordination object to combine agents with different types of outputs). Furthermore, a coordination object has the same interface as an agent, so its communication is the same: an outgoing signal defining its estimate of appropriateness Sapp, and an incoming signal Sack informing the coordination object whether its opinion on the output will be used. The extra communication the coordination object defines is the communication with its agents (again Sapp and Sack signals). The function of a coordination object is to combine the Sapp signals of all its agents and coherently distribute Sack signals to all its agents. When it is asked for the opinion of the agency on the output, it fuses the outputs of all the appropriate agents. Like a real manager, the coordination object has the final decision on which agents are allowed to contribute to the fused output and how much their opinion weighs. The way the coordination object performs these tasks is not defined by the framework. Some examples of coordination mechanisms in behaviour-based systems are priority-based coordination and weighted-sum coordination.
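As an illustration of the weighted-sum mechanism mentioned above, a coordination object could be sketched as follows, reusing the hypothetical Agent interface from section 3.1.1. This is only one possible mechanism; the framework itself does not prescribe it.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical weighted-sum coordination: the fused output is the
// appropriateness-weighted mean of the agents' outputs (all outputs are
// assumed to have the same dimension here), and every contributing agent
// receives Sack = true.
class WeightedSumCoordinator {
public:
    explicit WeightedSumCoordinator(std::vector<Agent*> agents)
        : agents_(agents) {}

    std::vector<double> fuseOutputs() {
        std::vector<double> fused;
        double totalWeight = 0.0;
        for (Agent* agent : agents_) {
            const double w = agent->appropriateness();   // Sapp in [0, 1]
            agent->acknowledge(w > 0.0);                 // distribute Sack
            if (w <= 0.0) continue;                      // agent not used
            const std::vector<double> out = agent->output();
            if (fused.empty()) fused.assign(out.size(), 0.0);
            for (std::size_t i = 0; i < out.size(); ++i)
                fused[i] += w * out[i];
            totalWeight += w;
        }
        if (totalWeight > 0.0)
            for (double& v : fused) v /= totalWeight;    // normalise
        return fused;
    }

private:
    std::vector<Agent*> agents_;
};
```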

3.1.3 Agency. The agency object is an encapsulation of a coordination object and its agents. The extra functionality the agency implements is the possibility for the agency to be coordinated by several coordination objects; it extends the one-to-one communication of the coordination object with its coordinator to a many-to-one communication.

3.1.4 Inputs and Outputs. Inputs and Outputs are only defined by their interface and can represent sensors, buffers, actuators, etc. The Input and Output objects are essentially passive in the framework (i.e. they will not do anything on their own, as agents do); they are just buffers, or they provide access to the components they represent. For example, since a sensor has the responsibility of refreshing its sensor values, the Input object only offers access to those values. Exceptions are the special objects representing the actuators. Such an object is an Output object controlling an actuator, and it is active in the sense that it periodically asks the controller (see section 3.1.5) for new values to send to the physical actuator.

3.1.5 Multi-agent controller. We have defined objects that can be used to incrementally build up a controller. The missing part is an endpoint to this incremental building. This endpoint is the MultiAgentController (MAC). The MAC is the access point for starting and stopping the program. It owns and controls one (the uppermost) agency and the inputs and outputs. The actuators can use the MAC to trigger a new calculation step. In section 2.2.1 we required that the calculation of a new value takes an infinitesimally small amount of time, which translates into the requirement that the calculation step is an atomic operation. This is achieved by the MAC: when an actuator asks for new values, the MAC first freezes the sensor values (so that all agents use the same values). It then executes the calculation (by calling the appropriateness function of its agency, acknowledging its agency and asking for its opinion) and finally passes the opinion of its agency to the actuator. During this calculation, the MAC is locked and not accessible to other actuators.
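A sketch of the atomic calculation step is given below; it reuses the hypothetical Agent interface from section 3.1.1 (the uppermost agency offers that same interface), and a mutex stands in for the locking just described. The names are illustrative.

```cpp
#include <mutex>
#include <vector>

// Hypothetical sketch of the MAC's atomic calculation step. While one
// actuator is being served, the lock keeps other actuators out.
class MultiAgentController {
public:
    explicit MultiAgentController(Agent* uppermostAgency)
        : agency_(uppermostAgency) {}

    // Called by an actuator when it needs new values.
    std::vector<double> calculate() {
        std::lock_guard<std::mutex> lock(mutex_);  // the step is atomic
        freezeSensorValues();             // all agents see the same values
        agency_->appropriateness();       // ask the agency for Sapp
        agency_->acknowledge(true);       // acknowledge the agency (Sack)
        return agency_->output();         // pass its opinion to the actuator
    }

private:
    // Snapshot all sensor inputs so values cannot change mid-calculation.
    void freezeSensorValues() { /* omitted in this sketch */ }

    Agent* agency_;   // the uppermost agency
    std::mutex mutex_;
};
```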

3.2 UML description

Figure 2 shows a UML diagram of the defined classes and their relations. The Agent class implements the interface of the Agency and the BasicAgent. It also has the Inputs and Outputs. Each BasicAgent implements one full function, using some of the possible Inputs to produce some of the possible Outputs. An Agency is built up from one CoordinationObject that manages several (one to many) Agents, each of which is an Agency or a BasicAgent. The Inputs and Outputs of the managing CoordinationObject are determined by its Agents. The Agency that unifies all Agents is owned by the MultiAgentController, which is the access point to the final program.

3.3 Implementation issues

We have implemented the framework in C++ and it is integrated in MoRE [10], our development environment. Since this environment is platform independent, it has been used on several platforms, among which our mobile robot LiAS (see figure 1). LiAS is equipped with several computers. At the moment, we use the framework on a Pentium III machine running Linux in a door-opening program [11]. Notice that this is not a real-time operating system, which means that the framework currently only works in soft real-time. Nothing in the design, however, limits its use to soft real-time. This point has been made clear by Van Breemen [9], who has implemented a similar framework for low-level control of motion systems.

4 Conclusion and future work

4.1 General conclusions

The designed framework defines and implements the basic communication protocols needed for an agent-based controller. It offers clear interfaces to be implemented by the user. Until now, the framework has mainly been used to implement behaviour-based controllers, but work is also being done on implementing a vision-based navigational algorithm with topological maps. In that work, the final controller can be described as a hybrid deliberative/reactive system, with the same structured communication.

4.2 Conclusion with respect to behaviour-based control

The framework has structured the cooperation of agents at the communication level, which allows easy combination of behaviours and reasoning about the resultant observable behaviour of the robot. Although this may seem contradictory to the general behaviour-based approach, it is necessary when one wants to implement a behaviour-based controller on an industrial manipulator. An industrial manipulator is a completely different platform from a mobile platform in terms of speed and power: while it might not be a big problem if a mobile platform does not perform as expected during testing, it is a different matter altogether with an industrial manipulator. Its behaviour must be predictable, even in unpredictable situations. As already stated, the framework has been used for the implementation of a door-opening algorithm on our mobile platform [11]. Furthermore, the framework is also being used in the Ambience project to program the navigational algorithm of an indoor mobile robot [12]. The implementations with the framework were satisfactory. Several behaviours on different actuators with different sample frequencies have been implemented. Neither the combination of these behaviours nor the addition of new behaviours was a problem. The structure provided by the framework also enables experimenting with different kinds of coordination objects without having to change the agents. The combination problem of behaviours is solved in a structured manner by providing a formalised framework in which one can reason about the effects of the interaction between behaviours. To be clear: no scientific work has been done on the combination itself, such as comparing rules and methods. The achievement here lies more in the fact that coordination algorithms can be combined in a structured manner. The problem of finding the correct methods of coordination still exists. For an in-depth overview of existing coordination methods, see [13].

4.3 Future work

The framework is the basis for the development of a library of basic and advanced agents. These agents can be purely reactive input-output relations, but can also be agents mapping the environment, or agents


planning and sequencing the skills to be executed. As motivated above, the framework also provides a solid basis for learning. The next step to be taken is a learning algorithm that learns new skills or behaviours starting from a small basic collection of actions. This may lead to a system that improves its own performance and even learns completely new tasks.

5 Acknowledgements

This research has been carried out with the financial help of the Flemish government under the ITEA project AMBIENCE and of the Belgian programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office, Science Policy Programming. The scientific responsibility is assumed by its authors. The authors are further indebted to Aitzol Astigarraga for the work he has done on this topic as part of his M.Sc. thesis.

References

[1] R. C. Arkin, Behavior-Based Robotics, MIT Press, Cambridge, Massachusetts, U.S.A., 1998.

[2] C. Szyperski, Component Software: Beyond Object-Oriented Programming, Addison Wesley, 1998.

[3] K. Konolige and K. Myers, "The Saphira architecture for autonomous mobile robots", Artificial Intelligence and Mobile Robots, pp. 211–242, 2000.

[4] J. K. Rosenblatt, DAMN: A Distributed Architecture for Mobile Navigation, PhD thesis, The Robotics Institute, Carnegie Mellon University, 1997.

[5] R. A. Brooks, Cambrian Intelligence: The Early History of the New AI, The MIT Press, Cambridge, Massachusetts, U.S.A., 1999.

[6] M. Kasper, G. Fricke, K. Steuernagel, and E. von Puttkamer, "A behavior-based mobile robot architecture for learning from demonstration", Robotics and Autonomous Systems, vol. 34, pp. 153–164, 2001.

[7] B. J. W. Waarsing, M. Nuttin, and H. Van Brussel, "Introducing robots into a human-centred environment - the behaviour-based approach", Proceedings of the 4th International Conference on CLAWAR, pp. 465–470, 2001.

[8] M. Minsky, The Society of Mind, Simon and Schuster, New York, U.S.A., 1986.

[9] A. J. N. Van Breemen, Agent-based multi-controller systems: a design framework for complex control problems, PhD thesis, Twente University, The Netherlands, 2001.

[10] B. J. W. Waarsing, "MoRE documentation", URL http://www.mech.kuleuven.ac.be/pma.

[11] B. J. W. Waarsing, M. Nuttin, and H. Van Brussel, "Behaviour-based mobile manipulation: the opening of a door", Proc. of ASER'03, Bardolino, Italy, pp. 168–175, March 13-15, 2003.

[12] A. J. N. Van Breemen, K. Crucq, B. J. A. Kröse, M. Nuttin, J. M. Porta, and E. Demeester, "A user-interface robot for ambient intelligent environments", Proc. of ASER 2003, Bardolino, Italy, pp. 132–139, March 13-15, 2003.

[13] P. Pirjanian, "Behavior coordination mechanisms - state of the art", Tech. report IRIS-99-375, Institute for Robotics and Intelligent Systems, School of Engineering, University of Southern California, October 1999.
