DAISY: a Distributed Architecture for Intelligent System

A. Chella, V. Di Gesù, S. Gaglio, G. Gerardi, A. Messina, R. Pirrone, P. Storniolo, I. Infantino, D. Intravaia, B. Lenzitti, G. Lo Bosco
Centro Interdipartimentale Tecnologie della Conoscenza, Università di Palermo, Palermo, ITALY 90128

Abstract

Distributed perceptual systems are endowed with different kinds of sensors, from which information flows to suitable modules that perform elaborations useful for decision making. In this paper a new distributed architecture, named "Distributed Architecture for Intelligent System" (DAISY), is proposed. It is based on the concept of co-operating behavioral agents supervised by a "Central Engagement Module". This module integrates the processing of data coming from the behavioral agents with a symbolic level of representation, through the introduction of a "conceptual space" intermediate analogue representation. The DAISY project is under development; experiments on navigation and exploration with an autonomous robot are being carried out to evaluate its performance.


1 Introduction

A new distributed architecture, named Distributed Architecture for Intelligent System (DAISY), is proposed. It is under development as an interdisciplinary project, involving several researchers of the "Centro Interdipartimentale di Tecnologie della Conoscenza" (Inter-Departmental Center for Knowledge Technology) at the University of Palermo, for applications in robotics, artificial intelligence and multi-sensory perception.

Distributed perceptual systems are endowed with different kinds of sensors, from which information flows to suitable modules that perform elaborations useful for decision making. The performance of a perceptual system lies in its ability to focus on the areas of interest, by maximizing a given cost/benefit utility criterion [5]. The selection of interesting regions is relevant; in fact, the ability to select salient features is a basic question of intelligence, both artificial and natural. Moreover, visual perceptual systems should be able to adapt their behavior depending on the current goal and the nature of the input data. Such performance can be obtained in systems able to interact dynamically with the environment.

Information-fusion techniques [9], implemented on distributed systems, are suitable for developing goal-oriented strategies [17]. In fact, the computation can be driven by complementary information sources, and it may evolve on the basis of adaptive internal models and environment transformations [16]. Moreover, the results of several processing elements can be integrated to find an optimal solution. Distributed systems are characterized by a huge number of states and parameters, functionally dependent, that are distributed through several elementary sub-system units. Automatic control assessment and motion detection in risky environments are examples of distributed systems. In these cases, visual data are usually collected from multiple sensors, and their elaboration is carried out on local processing units, which are logically interconnected to share and interchange knowledge (models, data and algorithms). The design and the implementation of distributed algorithms depend on the network topology and on the navigation of both data and processes through the processing elements.

The proposed system architecture is based on the integration of the functional and behavioral approaches traditionally used to design architectures dedicated to perceptual systems. This approach allows us to implement the concept of co-operating agents, which is at the basis of any system based on information fusion; the agents are supervised by a "Central Engagement Module" which maintains a rich and concrete conceptual representation of the environment. The implementation of DAISY results from the integration of the cognitive architecture for artificial vision proposed in [7] into the M-VIF machine [9]. This choice has been motivated by the fact that the architecture of M-VIF is distributed and reconfigurable, and it provides coupling functions between actuators (processing elements) that allow an easy implementation of the co-operating agent mechanism.



Figure 1: The autonomous robot in the working environment.

The DAISY architecture has been designed to face perceptual problems; in this paper we focus on the control architecture for an autonomous robot. In particular, its test-bed is an RWI B-12 autonomous robot equipped with a vision system composed of a CCD video camera mounted on a pan-tilt unit. The robot carries a 486 computer on board running Linux, and it is linked to a network of Unix workstations by a local radio Ethernet. Fig. 1 shows the robot in its working environment.

The remainder of the paper describes the proposed architecture in Sect. 2; Sect. 3 details the low level architecture; Sects. 4 and 5 report applications of DAISY to robotic navigation, based on the fusion of auditory and visual information, and to object classification, with some implementation notes. Final remarks are given in Sect. 6.

2 The DAISY Architecture

The design of DAISY is based on the combination of two main complementary approaches, described in the following.

2.1 The Active Information Fusion Loop

The functional approach, as in the Strips-Planex [13] system, is based on the idea that the architecture of an autonomous agent may be decomposed into a hierarchy of functional layers. The bottom layers are related to the processing of data flowing from sensors and actuators; their main task is the execution of the plan generated by the top layers. The top layers process the information coming out of the bottom layers in order to find a strategy that satisfies the goals of the agent. These layers then feed the commands back to the bottom layers.

In the behavioral approach, on the contrary, the control architecture of an agent may be decomposed as a set of behavior modules arranged in parallel. Each module receives a sensory input and produces a control signal responsible for a specific system behavior (see Maes [22]). The data processing occurring in a single behavior module is generally very simple, as it is essentially based on a list of condition-action rules. In this approach complex tasks of the agent are not explicitly programmed in some sort of central control module, but they emerge from the interactions of these simpler modules (see [4, 1]).

Only a few proposals consider an integration of the two approaches. Brady and Hu [19] propose a robot architecture based on the LICA (Locally Intelligent Control Agent) module. The module is able to act reactively, as in the behavioral approach, but it has some high level reasoning capabilities. Several modules may then be organized in a functional hierarchy, as in the functional approach. Malcolm and Smithers [23] propose the Somass system, a hybrid architecture for an autonomous agent subdivided as in the Strips-Planex system: the high level is a Prolog assembly planner, while the low level is a plan execution agent built up from behavioral modules, which define the symbol grounding for the planner. Arkin [2] proposes a behavioral architecture, named AuRA, for an autonomous agent in which a distinction is made between the "a priori" and the "dynamic" knowledge of the robot; the former is a sort of long term memory of the agent.

An integration of the two approaches can be achieved by introducing the concept of the Active Information Fusion (AIF) loop [10]. The AIF loop may be described by five functionalities (see Fig. 2): Observe, Process, World Model, Choose-Next, and Action. Observe works directly on sensory data, performing the necessary preprocessing tasks; Process works on the flow of data coming out of the previous functionality. Choose-Next selects the process to activate. It should be noted that the processes may run in parallel and may be executed on distributed hardware. The processes are also driven by the knowledge obtained from the World Model. The outputs of the processes feed both the Action, which operates on the environment, and the Observe again, which drives further sensor explorations. The Real World represents the environment on which DAISY operates. Within the system, the information flows in a continuous active fusion loop.

Burt [6] has developed a similar approach to system control, named Dynamic Vision, which is based on information integration. It is analogous to the visual attention mechanisms in humans, and it consists of three elements: foveation, tracking and high level interpretation. This attention mechanism supports efficient analysis by focusing the system's sensing and computing on selected areas of the scene; moreover, these resources are rapidly redirected as the scene and the goals evolve.

The described loop is at the basis of the implementation of the DAISY architecture. Behavioral co-operating agents solve the tasks related to the Observe and Action functionalities, while functional solutions have been adopted for the group of the Process, World Model and Choose-Next functionalities. As described in detail later, these are the kernel of our "Central Engagement Module".
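To make the loop's control flow concrete, the following minimal sketch, in Java, models the five functionalities as pluggable modules. All interface and method names are illustrative assumptions, not the project's actual code.

import java.util.List;

interface Observe    { double[] sense(); }              // preprocess sensory data
interface Processing { double[] run(double[] obs); }    // one "Process" functionality
interface WorldModel { void update(double[] result); int chooseNext(List<Processing> ps); }
interface Action     { void act(double[] result); }     // operate on the environment

class AifLoop {
    private final Observe observe;
    private final List<Processing> processes;
    private final WorldModel world;
    private final Action action;

    AifLoop(Observe o, List<Processing> ps, WorldModel w, Action a) {
        observe = o; processes = ps; world = w; action = a;
    }

    // One turn of the continuous active-fusion loop: Choose-Next picks a
    // process, and the result feeds both Action and further observation.
    void step() {
        double[] obs = observe.sense();
        Processing next = processes.get(world.chooseNext(processes)); // Choose-Next
        double[] result = next.run(obs);
        world.update(result);   // World Model knowledge drives later choices
        action.act(result);     // closes the loop on the environment
    }
}

In a distributed deployment the Processing instances could run in parallel on different machines, as the text notes; the sequential step() above only illustrates the data flow.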


Figure 2: The Active Information Fusion loop.

2.2 The Behavioral Co-operating Agents

The Observe and Action functionalities of the AIF loop have been implemented in DAISY by a repository of behavioral co-operating agents. Fig. 3(a) shows the repository of agents in DAISY in more detail. Each agent is based on the concept of Compound Node (CN), introduced in [9], which is composed of the following modules, shown in Fig. 3(b):

- the C-module: the controller of the CN, responsible for the evolution of the local computation;
- the H-modules: dedicated to specific information processing (actuators);
- the IP-modules: dedicated to input/output data management;
- the LN-module: dedicated to the interconnection of CNs, in order to realize several reconfigurable network topologies.
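A minimal sketch of this structure, with hypothetical Java interfaces for the module types, illustrates how a CN's C-module might coordinate the others; it is an illustration of the description above, not the actual M-VIF implementation.

import java.util.List;

interface HModule  { double[] process(double[] in); }   // specific information processing
interface IPModule { double[] read(); void write(double[] out); }  // input/output management
interface LNModule { void send(String peerCn, double[] data); }    // CN interconnection

class CompoundNode {
    private final List<HModule> hModules;
    private final IPModule ip;
    private final LNModule ln;

    CompoundNode(List<HModule> hs, IPModule ip, LNModule ln) {
        this.hModules = hs; this.ip = ip; this.ln = ln;
    }

    // The C-module's role: drive the local computation by scheduling the
    // H-modules over the local data and routing results to I/O and network.
    void controlStep(String peer) {
        double[] data = ip.read();
        for (HModule h : hModules) data = h.process(data);
        ip.write(data);
        ln.send(peer, data);   // share results with a neighbouring CN
    }
}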

Figure 3: (a) The repository of agents in the DAISY architecture. (b) Structure of a CN.

The co-operating agents are responsible for the reactive operation of the architecture. Some of them are directly connected to the actuators of the robot, i.e. the mobile base and the pan-tilt unit in our implementation. The agents process data coming from the sensors, i.e. the camera and the odometer in our implementation, and feed the Central Engagement Module. We have implemented several behavioral agents.

The self-localization agent is able to define the absolute coordinate system of the agent moving in a room environment, knowing approximately the dimensions of the room. The agent is based on the application of the Hough transform [11] to find the corners of the room delimiting the robot environment. After the localization of three corners, the agent estimates, by simple trigonometric calculations, the position of the robot in the room. Fig. 4 shows the intersection of two approximately horizontal edge lines with a vertical edge line, and the localization of an upper corner of the room by the agent.
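As an illustration of the line-detection step behind self-localization, here is a basic Hough transform accumulator in Java. Corner detection (intersecting the detected lines) and the trigonometric position estimate are omitted, and the resolution parameters are illustrative, not the agent's actual values.

class HoughLines {
    // edges[y][x] = true where an edge pixel was detected.
    static int[][] accumulate(boolean[][] edges, int nTheta, int nRho) {
        int h = edges.length, w = edges[0].length;
        double maxRho = Math.hypot(w, h);
        int[][] acc = new int[nTheta][nRho];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!edges[y][x]) continue;
                for (int t = 0; t < nTheta; t++) {
                    double theta = Math.PI * t / nTheta;
                    double rho = x * Math.cos(theta) + y * Math.sin(theta);
                    int r = (int) ((rho + maxRho) / (2 * maxRho) * (nRho - 1));
                    acc[t][r]++;   // vote for the line (theta, rho)
                }
            }
        return acc;   // peaks correspond to the room's edge lines
    }
}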


The wandering and obstacle detection agent is able to estimate whether there is an obstacle or free space in front of the robot, by analyzing the images acquired by the robot camera. The recognition of the situation in front of the robot is based on a Self-Organizing Map neural network [21] trained on examples of typical obstacles and free-floor situations in the robot environment. The output of the neural network activates a simple motor action to avoid the obstacle, if possible, or to stop the robot while it waits for a new command. Figs. 5 and 6 show the operation of the robot when it encounters an obstacle: it recognizes the obstacle, turns to the right to find a free path, moves to the right and then turns left to restore its original orientation.
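A minimal sketch of the obstacle/free-floor decision with an already-trained Self-Organizing Map follows. Training and the image features themselves are omitted, and all names are illustrative rather than the agent's actual code.

class SomClassifier {
    double[][] weights;    // one prototype vector per map unit
    boolean[] isObstacle;  // label attached to each unit after training

    SomClassifier(double[][] weights, boolean[] isObstacle) {
        this.weights = weights; this.isObstacle = isObstacle;
    }

    // Best-matching unit: the prototype closest to the input vector.
    int bmu(double[] x) {
        int best = 0; double bestDist = Double.MAX_VALUE;
        for (int u = 0; u < weights.length; u++) {
            double d = 0;
            for (int i = 0; i < x.length; i++) {
                double diff = x[i] - weights[u][i]; d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = u; }
        }
        return best;
    }

    // The unit's label decides the motor action (avoid or proceed).
    boolean obstacleInFront(double[] imageFeatures) {
        return isObstacle[bmu(imageFeatures)];
    }
}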



Figure 4: Localization of a corner of the robot environment by the Hough transform, and results of the self-localization operation.


Figure 5: The trajectory followed by the robot when it encounters an obstacle.

Figure 6: The operation of the robot when it encounters an obstacle.

The 3D reconstruction agent acquires images from the vision head of the robot and estimates the parameters of suitable 3D primitives describing the scene [7]. In particular, the agent is able to recover the superquadric parameters of the objects present [24, 25]. Fig. 8 shows a scene acquired by the robot representing a hammer, along with the results of the recovery of the superquadrics.
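For reference, the sketch below shows the superquadric inside-outside function that underlies this kind of recovery (after [24, 25]). The nonlinear least-squares fit over the shape and pose parameters is omitted, and the error measure is a simplified illustration, not the agent's actual procedure.

class Superquadric {
    // a1,a2,a3: axis lengths; e1,e2: shape exponents (squareness).
    double a1, a2, a3, e1, e2;

    Superquadric(double a1, double a2, double a3, double e1, double e2) {
        this.a1 = a1; this.a2 = a2; this.a3 = a3; this.e1 = e1; this.e2 = e2;
    }

    // F < 1: point inside the surface; F = 1: on it; F > 1: outside.
    double insideOutside(double x, double y, double z) {
        double xy = Math.pow(Math.pow(Math.abs(x / a1), 2.0 / e2)
                           + Math.pow(Math.abs(y / a2), 2.0 / e2), e2 / e1);
        return xy + Math.pow(Math.abs(z / a3), 2.0 / e1);
    }

    // Fitting error over a set of 3D points: the quantity a recovery
    // procedure would minimize over (a1..a3, e1, e2) and the pose.
    double fitError(double[][] pts) {
        double err = 0;
        for (double[] p : pts) {
            double d = Math.pow(insideOutside(p[0], p[1], p[2]), e1) - 1.0;
            err += d * d;
        }
        return err;
    }
}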

The object recognition agent is able to extract useful features describing an object in the scene. These features are related to the visual aspect of the object, such as contours and the symmetry transform. The agent acquires a sequence of object images by the robot camera; the sequence represents object views from different locations. The object localization and the robot motion around the object are obtained by the obstacle detection agent. The object's 2D shape is extracted from every frame of the sequence by using b-snakes [20] on the images. A b-snake is a deformable curve that moves in the image under the influence of forces related to the local distribution of the gray levels. When the b-snake reaches an object contour, it assumes its form; in this way it is possible to extract the object shape from the image view. Then the symmetry transform [8] is calculated, and we obtain a set of symmetry axes, each associated with an importance index. The variation of the object's symmetry axes through the sequence has been used as a feature to classify the object. Fig. 7 shows two frames of a sequence related to a vase and two frames related to a mug. The vase we used is a perfectly symmetric object, and its symmetry axes remain unchanged from the different viewpoints. In the case of the mug, when the robot acquires frames going around the object, the direction of the symmetry axes changes. The temporal evolution of the symmetry axes makes it possible to classify this object and distinguish it from the other.
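The classification cue just described can be sketched as follows: the spread of the dominant axis direction across the sequence, computed with standard circular statistics for undirected axes. A near-zero spread suggests the vase-like case, a large one the mug-like case; the data and any decision threshold are illustrative, not the authors' values.

class AxisVariation {
    // anglesRad: dominant symmetry-axis direction in each frame.
    static double circularSpread(double[] anglesRad) {
        double c = 0, s = 0;
        // Axes are undirected, so angles are doubled before averaging;
        // the mean resultant length is 1 for identical directions.
        for (double a : anglesRad) { c += Math.cos(2 * a); s += Math.sin(2 * a); }
        double r = Math.sqrt(c * c + s * s) / anglesRad.length;
        return 1.0 - r;   // 0 = no variation, 1 = uniformly spread
    }

    public static void main(String[] args) {
        double[] vase = {0.01, 0.02, 0.00, 0.01};   // stable axis
        double[] mug  = {0.1, 0.9, 1.7, 2.5};       // axis rotates with viewpoint
        System.out.println("vase spread: " + circularSpread(vase));
        System.out.println("mug spread:  " + circularSpread(mug));
    }
}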

The auditory system agent is implemented in a simple way: it recognizes the direction of a given sound. At this preliminary stage of our experimentation, the sounds are already known to the system and are limited to three different source samples.
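The paper does not detail how the direction is estimated; as a hedged illustration, the sketch below uses one standard two-microphone technique: cross-correlation to find the inter-channel delay, converted to a bearing. The microphone spacing and sample rate are assumptions.

class SoundDirection {
    static final double SPEED_OF_SOUND = 343.0;  // m/s
    static final double MIC_SPACING = 0.2;       // m (assumed)
    static final double SAMPLE_RATE = 8000.0;    // Hz (assumed)

    // Delay (in samples) of the right channel w.r.t. the left that
    // maximizes cross-correlation, over physically possible lags.
    static int bestLag(double[] left, double[] right) {
        int maxLag = (int) (MIC_SPACING / SPEED_OF_SOUND * SAMPLE_RATE) + 1;
        int best = 0; double bestCorr = Double.NEGATIVE_INFINITY;
        for (int lag = -maxLag; lag <= maxLag; lag++) {
            double corr = 0;
            for (int i = 0; i < left.length; i++) {
                int j = i + lag;
                if (j >= 0 && j < right.length) corr += left[i] * right[j];
            }
            if (corr > bestCorr) { bestCorr = corr; best = lag; }
        }
        return best;
    }

    // Bearing in radians from the array's broadside direction.
    static double bearing(int lagSamples) {
        double dt = lagSamples / SAMPLE_RATE;
        double s = dt * SPEED_OF_SOUND / MIC_SPACING;
        return Math.asin(Math.max(-1.0, Math.min(1.0, s)));
    }
}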



Figure 8: A scene acquired by the robot camera and its superquadric reconstruction.

Figure 7: Two objects and their symmetry axes in different frames acquired by the robot camera.

Figure 9: The Central Engagement Module of DAISY.

2.3 The Central Engagement Module

As anticipated in the introduction, in DAISY all CNs are supervised by the "Central Engagement Module" (CEM), which controls the whole behavior of the agents acting in each CN. At the same time it is also responsible for the task distribution and the network reconfiguration. The CEM implements the previously described functionalities of Process, World Model and Choose-Next. The role of the CEM is also to perform the fusion of sound and vision.

The need for a CEM that controls the whole behavior of the architecture has been summarized, among others, by Balkenius [3]. He notes that the behavioral agents may not be executed all at the same time, and that the arbitration between them may not always be local. The agents may be activated instead on the basis of the internal drives, i.e. the goal; the external incentives, which are strictly related to the perceived situations; and the internal incentives, which are related to the previous knowledge.

Fig. 9 shows the main structure of the CEM module of DAISY. The Conceptual Space Representation receives inputs from the agents and acts as the interpretation level for the subsequent World Model Representation.

According to Gardenfors, a conceptual space is a metric space defined by a certain number of cognitive dimensions, such as color, pitch, mass and spatial coordinates, independent of any specific language [14]. The dimensions are "cognitive" in that they correspond to qualities of the represented environment, without reference to any linguistic description. In our architecture we have adopted a very simple conceptual space that corresponds to the set of feature parameters describing the objects in the scene. Some features are related to the visual aspect of the object, such as the symmetry transform and the contours; others are related to 3D geometric features of the object (superquadric parameters [24]); further parameters are related to the possible sounds generated by the object, such as Fourier transform parameters. The fusion among the different perceptual modalities is based on a confirmation paradigm and a reinforcement rule.

The World Model Representation generates the World Model functionality previously introduced, by describing in a first-order logic language the objects and the situations perceived by the system. The interpretation domain of the representation is based on the conceptual space: each concept of the World Model corresponds to a set of points in the conceptual space. As an example, let us consider the class of hammers:


the hammer heads and the hammer handles correspond to sets of points in the conceptual space, as shown in Fig. 10. If the object generates some sound, the features related to the sounds are stored as parameters in the conceptual space.

hammer(x1).
hammer_head(x2,x1).
hammer_handle(x3,x1).

Figure 11: Assertions describing the identification process.
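To make the link between the conceptual space and the World Model concrete, here is a minimal sketch in which each concept is the set of exemplar points it covers, and identification returns the nearest concept. The Euclidean metric and all names are illustrative assumptions, not the system's actual code.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ConceptualSpace {
    // Each concept is represented by the exemplar points it covers.
    private final Map<String, List<double[]>> concepts = new HashMap<>();

    void addExemplar(String concept, double[] point) {
        concepts.computeIfAbsent(concept, k -> new java.util.ArrayList<>()).add(point);
    }

    // Identification: the name of the concept whose region is closest
    // to the feature point delivered by the agents.
    String identify(double[] point) {
        String best = null; double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, List<double[]>> e : concepts.entrySet())
            for (double[] ex : e.getValue()) {
                double d = 0;
                for (int i = 0; i < point.length; i++) {
                    double diff = point[i] - ex[i]; d += diff * diff;
                }
                if (d < bestDist) { bestDist = d; best = e.getKey(); }
            }
        return best;
    }
}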

Figure 10: The link between the Conceptual Space and the World Model Representations.

The choice of which process to activate next, as described in the AIF loop, is performed by the identification and expectation mechanisms in the CEM, as described in [7]. The identification mechanism aims at recognizing the objects and the situations perceived by the architecture. Its input is a structure at the conceptual level; its output is sent to the World Model to produce a sentential description of the scene. As an example, when some of the behavioral agents find a hammer, as in Fig. 8, the World Model generates the assertions reported in Fig. 11.

The expectation mechanism is responsible for the generation of expectations. It receives as input the instances of concepts from the knowledge base and it suitably activates the agents to seek the corresponding expected objects in the scene. For example, when the robot recognizes a hammer, as in Fig. 8, it expects to find nails, since in the World Model hammers are associated with nails and screws by simple rules as in Fig. 12. The expectation process therefore activates the corresponding behavioral agents.

expectation(X,Y) :- X=hammer, Y=nail.
expectation(X,Y) :- X=hammer, Y=screw.

Figure 12: Procedures describing the expectation generation process.

3 Details on the DAISY low level architecture

DAISY has been conceived as a distributed operating system, running on a collection of computers interconnected via a high-speed network. Users can view this collection of computers as a single multiprocessor machine. The critical research issues in this design, as presented by Tilborg [26], are:

- transparency of system resources and distribution,
- fault tolerance and robustness,
- life cycle enhancement,
- a comprehensible model of interaction with users, and standards.

Transparency was mentioned as a key attribute. There are several forms of transparency, as presented by Goscinski [18]: access, location, name, control, data, execution and performance transparency. Access transparency refers to a process having the same type of access mechanism for resources whether they are local or remote. Location transparency means that the location of the resource is not visible as far as the access method is concerned.


In naming transparency, the naming of an object is independent of the node on the network on which the object was issued. In control transparency, all information describing a system should have an identical appearance to a user or application. We have data transparency when the ability to access remote data in a transparent manner is supported. In execution transparency, the load of a system can be distributed: processes and data can be moved around the system, and interprocess communication between remote processes is permitted. Finally, in performance transparency, the user should witness little reduction in performance as a result of remote calls in the system.

Figure 13: The DAISY low level architecture.

In particular, the DAISY hardware layer (see Fig. 13) is based on a TCP/IP LAN of general-purpose workstations with different operating systems: ULTRIX, Digital UNIX, Windows NT, Solaris and IRIX. Each workstation can run all agents except those requiring dedicated hardware, in our case the A110 MultiDSP. Common data exchange is achieved by the use of a shared memory space configured as a RAM disk on a workstation and exported via NFS to the other machines. This approach has two main advantages: fault tolerance and reduction of the computational load of the whole system.

Process scheduling and allocation on the different machines is managed by the so-called Controller. It can be viewed as a super-agent aimed at dynamic resource allocation in order to reduce the computational load. Moreover, the Controller sends commands to the RWI B-12 robot. Dynamic resource allocation is achieved by a knowledge base that stores the status of all workstations with respect to their current computational load and their computational power. All these parameters are normalized to a unique scale and are updated at runtime. The computational power is determined using a benchmark suite test.

Another layer of the low level architecture is based on the collection of servers running on the robot computer. Each server corresponds to one effector of the robot; the effectors currently available are the mobile base, the camera and the pan-tilt unit. The Controller communicates with these servers in client/server mode via one-to-one connections. The design also takes another server into consideration, the emergency server, by which it is possible to stop any robot action at any time. The client/server model is also used in the communications between the Controller and an agent, or between an agent and other agents. The latter occurs when an agent has to achieve a goal that can be split into many simpler goals, achieved by agents executing elsewhere in the network.
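A hedged sketch of the Controller-to-server interaction described above follows: one TCP connection per effector, plus the emergency server. The host name, port numbers and plain-text command syntax are invented for illustration, not the project's actual protocol.

import java.io.PrintWriter;
import java.net.Socket;

class EffectorClient {
    public static void main(String[] args) throws Exception {
        // One-to-one connections, one server per effector on the robot.
        try (Socket base = new Socket("robot", 9001);        // mobile base
             Socket panTilt = new Socket("robot", 9002);     // pan-tilt unit
             Socket emergency = new Socket("robot", 9000)) { // emergency server
            PrintWriter baseOut = new PrintWriter(base.getOutputStream(), true);
            PrintWriter ptOut = new PrintWriter(panTilt.getOutputStream(), true);
            baseOut.println("MOVE FORWARD 0.5");   // meters (assumed syntax)
            ptOut.println("PAN 15");               // degrees (assumed syntax)
            // The emergency server can interrupt any action at any time.
            new PrintWriter(emergency.getOutputStream(), true).println("STOP ALL");
        }
    }
}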

4 An application to robotic navigation

An application regarding the navigation of an autonomous robot in an unknown working environment is under development in DAISY. The task of the robot is to explore its room environment and to describe the "interesting" objects that it finds during its exploration. In this section we sketch the operation of the robot performing this task.

Fig. 14 shows the "monitor" of DAISY, implemented as a Java applet [12]. The applet gives rich information about the position of the robot, its actions and its observations. It is also possible, through the monitor, to override the robot's actions and to manually drive the robot, as in telerobotics applications.

The robot starts by calling the self-localization agent to calibrate its position, as in Fig. 4. Then it starts wandering by calling the wandering and obstacle detection agent. When the space in front of the robot is not free, as in Fig. 5, the robot recognizes the obstacle by means of the Self-Organizing Map neural network and avoids collision by turning around the obstacle, if possible. If the robot is in front of a wall, it simply turns back. During its wandering in the room environment, when it encounters a little object, it supposes it may be an interesting thing. The navigation of the robot is also guided by the detection of the emission direction of interesting sounds related to an object. The combination of visual aspect, 3D reconstruction and sound features is used by the CEM to perform object recognition and description.


Figure 14: The monitor of the DAISY architecture.

In order to find the absolute position of the object, the self-localization module is called again if necessary. The activated agents fill the conceptual level with the parameters describing the object, activating the processes of recognition and expectation generation (see Fig. 9). In our experiments the robot assumes it is in a risky context, in which it has to localize the presence of ringing phones, bells and alarm clocks. All the other objects, which do not belong to the activated contexts, are ignored.

5 An application to classification of objects

Another application under development in DAISY regards the classification of objects. The application is based on cooperating algorithms built on two procedures: b-snakes and sym-axes. In this section we sketch the operations of the robot and of the related agent performing the task.

When the robot wanders in the room environment, its attention is focused on little objects. For each selected zone, it acquires multiple views of the object at different angles. The robot motion is driven by the navigation and obstacle detection agent, and it can be overridden from the monitor described in the previous section. At this point, the object recognition agent is called in order to process the object sequence. In our experiment, we have considered two different objects: a vase and a mug (Fig. 7).

The experimental sequence consists of 16 frames related to viewpoints at the same distance along a circular path. The b-snakes procedure extracts from every frame the 2D shape localizing the object; from this point of view, this procedure can be considered a sort of preliminary segmentation. After that, the sym-axes procedure extracts the most significant symmetry axes of the previously selected area. After the combined activation of these two procedures, the object is described as a list of symmetry axes.
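The two-procedure pipeline can be summarized by the orchestration sketch below, in which both procedures are stubs; only the control flow (b-snakes, then sym-axes, over the 16-frame sequence) reflects the description above, and all names are illustrative.

import java.util.ArrayList;
import java.util.List;

class ClassificationPipeline {
    // Stub: contour extraction by b-snakes (preliminary segmentation).
    static boolean[][] bSnakes(int[][] frame) {
        return new boolean[frame.length][frame[0].length];
    }

    // Stub: most significant symmetry-axis direction of the masked region.
    static double symAxes(boolean[][] mask) { return 0.0; }

    // The object's description: the list of symmetry axes over the sequence,
    // whose temporal evolution is the classification feature.
    static List<Double> describe(List<int[][]> frames) {
        List<Double> axes = new ArrayList<>();
        for (int[][] f : frames) axes.add(symAxes(bSnakes(f)));
        return axes;
    }
}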

6 Conclusions

The proposed architecture is a step toward the effective integration of the functional and behavioral approaches to autonomous agents. The integration is based on cognitive considerations and it maintains the main advantages of both approaches. DAISY is under implementation on a distributed environment, connecting via Internet heterogeneous hardware (UNIX workstations, PC/Windows 95, and dedicated processors [15]). Preliminary results are encouraging.

Acknowledgments

The authors would like to thank Giuseppe Sajeva, Fausto Torterolo, Fulvio Ornato, Nunzio Ingraffia, Domenico Tegolo and Salvatore Vitabile for their contributions to the software implementation of the behavioral agents.

References

[1] P.E. Agre and D. Chapman. What are plans for? Robotics and Autonom. Systems, 6:17-34, 1990.


[2] R.C. Arkin. Integrating behavioral, perceptual, and world knowledge in reactive navigation. Robotics and Autonom. Systems, 6:105-122, 1990.


[3] C. Balkenius. Natural Intelligence in Artificial Creatures. PhD thesis, Lund University Cognitive Studies, Lund, Sweden, 1995.

[4] R.A. Brooks. A robust layered control system for a mobile robot. IEEE J. Robotics and Automation, 2:14-23, 1986.

[5] C.M. Brown. Issues in selective perception. In Proc. 11th IAPR Int. Conf. on Patt. Recog., volume A, pages 21-30, Los Alamitos, CA, 1992. IEEE Computer Society Press.

[6] P.J. Burt. Smart sensing within a pyramid vision machine. Proc. of the IEEE, 76:1006-1015, 1988.


[7] A. Chella, M. Frixione, and S. Gaglio. A cognitive architecture for artificial vision. Artif. Intell., 89:73-111, 1997.

[8] V. Di Gesù, C. Valenti, and L. Strinati. Local operators to detect regions of interest. Pattern Recognition Letters, 1997. (in press).

[9] V. Di Gesù, G. Gerardi, and D. Tegolo. M-VIF: a machine-vision based on information fusion. In M.A. Bayoumi, L.S. Davis, and K.P. Valavanis, editors, Proceedings of CAMP'93, pages 428-435, Los Alamitos, CA, 1993. IEEE Computer Society Press.

[10] V. Di Gesù, F. Isgrò, B. Lenzitti, and D. Tegolo. Visual dynamic environment for distributed systems. In V. Cantoni, L. Lombardi, M. Mosconi, M. Savini, and A. Setti, editors, Proceedings of CAMP'95, pages 359-366, Los Alamitos, CA, 1995. IEEE Computer Society Press.

[11] R.O. Duda and P.E. Hart. Use of the Hough transform to detect lines and curves in pictures. Commun. ACM, 15:11-15, 1972.

[12] B. Eckel. Thinking in Java. Prentice-Hall, Englewood Cliffs, New Jersey, 1997. (in press).

[13] R.E. Fikes, P.E. Hart, and N.J. Nilsson. Learning and executing generalized robot plans. Artif. Intell., 3:251-288, 1972.

[14] P. Gardenfors. Three levels of inductive inference. In D. Prawitz, B. Skyrms, and D. Westerståhl, editors, Logic, Methodology, and Philosophy of Science IX. Elsevier Science, Amsterdam, The Netherlands, 1994.

[15] G. Gerardi and G. Parodi. The programmable and configurable low level vision unit of the HERMIA machine. In Proceedings of IAPR Workshop on Machine Vision Applications, MVA'92, 1992.

[16] E. Gerianotis and Y.A. Chau. Robust data fusion for multisensor detection systems. IEEE Trans. on Inf. Th., 36(6):1265-1279, 1990.

[17] H. Gomaa. Configuration of distributed heterogeneous information systems. In Proceedings Second International Workshop on Configurable Distributed Systems, page 210, Los Alamitos, CA, 1994. IEEE Computer Society Press.

[18] A. Goscinski. Distributed Operating Systems, The Logical Design. Addison-Wesley, 1991.

[19] H. Hu and M. Brady. A parallel processing architecture for sensor-based control of intelligent mobile robots. Robotics and Autonomous Systems, 17:235-257, 1996.

[20] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: active contour models. In Proc. of First Intern. Conf. on Computer Vision, pages 259-268. Springer-Verlag, 1987.

[21] T. Kohonen. Self-Organizing Maps. Springer-Verlag, Berlin, 1995.

[22] P. Maes. Designing autonomous agents. Robotics and Autonom. Systems, 6:1-2, 1990.

[23] C. Malcolm and T. Smithers. Symbol grounding via a hybrid architecture in an autonomous assembly system. Robotics and Autonom. Systems, 6:123-144, 1990.

[24] A.P. Pentland. Perceptual organization and the representation of natural form. Artif. Intell., 28:293-331, 1986.

[25] F. Solina and R. Bajcsy. Recovery of parametric models from range images: The case for superquadrics with global deformations. IEEE Trans. Patt. Anal. Mach. Intell., 12(2):131-146, 1990.

[26] A.M. Van Tilborg. Critical research issues in distributed operating systems. In Proc. of IEEE 7th Int. Conf. on Distributed Computing Systems, 1987.
