Proceedings of the 2004 IEEE International Conference on Robotics & Automation New Orleans, LA • April 2004

Modeling Multiple Robot Systems for Area Coverage and Cooperation

Jindong Tan, Elect. and Comp. Eng., Michigan Technological Univ., Houghton, MI 49931, USA
Ning Xi, Elect. and Comp. Eng., Michigan State Univ., E. Lansing, MI 48824, USA
Weihua Sheng, Elect. and Comp. Eng., Kettering Univ., Flint, MI 48504, USA
Jizhong Xiao, Elect. and Comp. Eng., City College, CCNY, New York, NY 10031, USA

Abstract— This paper presents a distributed model for cooperative multiple mobile robot systems. In a multiple robot system, each mobile robot has sensing, computation and communication capabilities. The mobile robots spread out across a certain area and share sensory information through an ad hoc wireless network; the multiple mobile robot system is therefore a mobile sensor network. In this paper, the Voronoi diagram and the Delaunay triangulation are introduced to model the area coverage and cooperation of mobile sensor networks. Based on this model, the paper discusses a fault-tolerant algorithm for autonomous deployment of the mobile robots, which enables the system to reconfigure itself so that the area covered by the system is enlarged. The proposed formation control algorithms allow the mobile sensor network to track a moving target and to sweep a larger area along specified paths.

I. INTRODUCTION

Distributed wireless sensor networks have recently been emerging as an important research area [1], [11], [12]. Sensor networks, which consist of wirelessly linked sensing devices with a variety of different types of sensors and embedded microprocessors, can be rapidly deployed in hostile environments, inaccessible terrains or disaster relief operations. This paper presents the cooperation and formation control of a mobile sensor network, which consists of a variety of sensors carried by a collection of mobile robots. In a multiple robotic vehicle system, as shown in Figure 1, each mobile robot has sensing, computation and communication capabilities. The mobile robots share sensory information through an ad hoc wireless network; a multiple mobile robot system is therefore a mobile sensor network [5], [10]. In this paper, multiple robot cooperation and formation control are addressed to reconfigure the mobile robots for varying sensor network operations. Various control structures for multiple robot cooperation and formation control have been discussed in the literature, such as [2], [15], [16]. In the present paper, the objective of the cooperation of multiple mobile robots is to cover a certain area using the onboard sensors. Sensor network coverage and sensor deployment problems have been discussed for a variety of static sensor networks, such as [3], [14]. One of the objectives of robot cooperation is to maximize the sensor network coverage area by deploying the mobile sensing nodes; however, only limited work provides sensor deployment approaches for area coverage problems of mobile sensor networks [10]. This paper suggests a distributed model for the formal analysis and design of robot cooperation in a mobile sensor network. The proposed model defines the geographical relationship of different robots

0-7803-8232-3/04/$17.00 ©2004 IEEE

Fig. 1. Block diagram of a mobile sensor network

using the Voronoi diagram and the Delaunay triangulation. Based on this model, the proposed autonomous deployment algorithm enables redeployment of randomly distributed robots. The algorithms are robust to sensor node failures. Formation control of the sensor network system is further discussed. The formation control algorithms allow the mobile sensor network to sweep a larger area along specified paths.

II. MODEL OF MOBILE SENSOR NETWORKS

A. Problem Formulation

In a mobile sensor network, the mobile sensing nodes are wirelessly connected robotic vehicles, denoted by R = {R1, R2, ..., Rn}, where n is the number of robot vehicles. The configuration of robot Ri is denoted by qi(t) = {xi(t), yi(t), θi(t)}^T, i = 1, 2, ..., n. The dynamics of each subsystem can be described by q̇i = fi(qi, ui), where ui is the control input of subsystem Ri. The configuration and control input of the entire system can then be denoted by q = {q1, q2, ..., qn}^T and u = {u1, u2, ..., un}^T, and f is the vector field of the system dynamics. The multiple robotic vehicle system can be modelled as a high-dimensional interconnected system, and the overall system can be denoted by q̇ = f(q, u) [8]. In this paper, the dynamics model of mobile sensor networks is considered in two parts: the dynamical model of each subsystem, and the relationship between a subsystem and its neighboring robots. Assuming that the translational and rotational velocities are controllable, the robot model fi with nonholonomic constraints can be described as:

ẋi = vi · cos θi
ẏi = vi · sin θi
θ̇i = wi                                  (1)

where ui = {vi, wi}^T is the control input of subsystem Ri, vi is the longitudinal velocity and wi is the angular velocity of the robot vehicle.
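As a quick numerical illustration of model (1), the following sketch integrates the unicycle equations with a simple Euler scheme. The step size, inputs and duration are arbitrary choices for illustration, not values from the paper:

```python
import math

def step(q, v, w, dt=0.01):
    """One Euler step of the nonholonomic model (1): q = (x, y, theta)."""
    x, y, th = q
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

# Drive straight at 1 m/s for 1 s, then rotate in place by pi/2.
q = (0.0, 0.0, 0.0)
for _ in range(100):            # 1 s of forward motion
    q = step(q, v=1.0, w=0.0)
for _ in range(100):            # 1 s of pure rotation at pi/2 rad/s
    q = step(q, v=0.0, w=math.pi / 2)
```

After the two phases the robot sits one meter ahead of its start, facing sideways, which matches the model: v drives motion along the heading and w changes the heading only.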


The relationship between neighboring nodes is defined by two graphs, a Delaunay tessellation and a Voronoi diagram, as shown in Figure 2. Coverage has been addressed based on the Delaunay tessellation and Voronoi diagram in a number of research works [14], [5], [13]. Given an open set Ω ⊆ IR^n, the Delaunay tessellation is a triangulation of the space based on a set of points p = {p1, p2, ..., pn}^T, where pi = {x̃i(t), ỹi(t)}^T is the position of robot Ri in an inertial coordinate system. The Delaunay triangulation on a set of nodes p is defined such that any additional edge between two nodes would intersect one of the existing edges. In the graph, the nodes that are directly connected to node Ri are called the one-hop neighbors of Ri. The Delaunay triangulation defines the link properties between one-hop neighbors, which are given by a set of edges E = {dij(t), i, j = 1, 2, ..., n, i ≠ j}. The distance between one-hop neighbors, dij, can be estimated by using local positioning systems or global positioning systems. Motivated by the adjacency matrix for multiple vehicle systems [7], an adjacency matrix A(t), a square matrix, is defined to specify the connectivity of the Delaunay tessellation: Aij = dij if Ri and Rj are one-hop neighbors, and Aij = 0 otherwise. The ith column of A(t), denoted by di(t) = {di1, di2, ..., din}^T, defines the link properties of Ri and its one-hop neighboring robots.
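The adjacency matrix A can be computed directly from the robot positions. The sketch below builds it by brute force using the empty-circumcircle characterization of Delaunay triangles; it assumes points in general position and is meant only to illustrate the definition (a real implementation would use a computational-geometry library):

```python
import itertools
import math

def circumcircle(a, b, c):
    """Circumcenter and squared radius of triangle abc (None if degenerate)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), (ux - ax) ** 2 + (uy - ay) ** 2

def delaunay_adjacency(p):
    """A[i][j] = d_ij if Ri and Rj are one-hop (Delaunay) neighbors, else 0."""
    n = len(p)
    A = [[0.0] * n for _ in range(n)]
    for i, j, k in itertools.combinations(range(n), 3):
        cc = circumcircle(p[i], p[j], p[k])
        if cc is None:
            continue
        (ux, uy), r2 = cc
        # Empty-circumcircle test: the triangle is Delaunay iff no other
        # point lies strictly inside its circumcircle.
        if all((p[m][0] - ux) ** 2 + (p[m][1] - uy) ** 2 >= r2 - 1e-9
               for m in range(n) if m not in (i, j, k)):
            for a, b in ((i, j), (j, k), (i, k)):
                A[a][b] = A[b][a] = math.dist(p[a], p[b])
    return A

robots = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (5.0, 4.0)]  # made-up positions
A = delaunay_adjacency(robots)
```

For these four points the triangulation keeps the diagonal between robots 1 and 2, so A is nonzero exactly on the five Delaunay edges.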

For each robot Ri, the collection <qi, ui, fi, di, Vi> defines the dynamics of the subsystem and its relationship to its one-hop neighbors. The details of this model are discussed as follows.

B. Coordinate Systems and Information Sharing

Sensor network kinematics. It is worth noting that the configuration of the robot, qi, is defined in a local inertial coordinate system, as shown in Figure 2. The relationship between the local coordinate systems of neighboring robots is the key to sharing meaningful information between them. The local coordinate system for each robot may be different and time-variant, such as for robots R1 and R2 shown in Figures 2 and 3. The sensing and data fusion for each robot are carried out with respect to its local coordinate frame. In order to share meaningful information between robots, a transformation matrix between two robots is needed. For instance, robot R2 detects an object whose configuration is denoted by ξ_s^2(t) in coordinate system Σ2. To share this information with robot R1, the relation between the two coordinate systems Σ1 and Σ2 has to be determined.

Fig. 2. Mathematical Model of Sensor Networks: Delaunay tessellation and Voronoi diagram

The coverage area of each sensing node and of the entire sensor network is another important property of sensor networks. In this paper, Voronoi diagrams, shown by the dashed line graph in Figure 2, are used to describe this property. The Voronoi diagram is the dual graph of the Delaunay tessellation in a planar space. Given a number of robots R = {R1, R2, ..., Rn}, each robot is responsible for a certain part of the region Ω. The subregion for sensing node Ri in Ω is defined by:

Vi = {x ∈ Ω | |x − pi| < |x − pj|, ∀ j = 1, ..., n, j ≠ i}.        (2)

V = {V1, V2, ..., Vn} forms a partition of the planar space Ω, where sensing node Ri is responsible for the subregion Vi. The robot Ri is also called a generator of the diagram. Vi is the region of the convex polygon that covers robot Ri in the Voronoi diagram. Distributed algorithms can be found to define the Voronoi region Vi [13]. In summary, for each robot Ri, the model of the mobile sensor network is defined by the collection <qi, ui, fi, di, Vi>.
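The membership test in eq. (2) is simply a nearest-generator comparison: a point belongs to Vi when robot Ri is its closest generator. A minimal sketch (the robot positions are made-up values for illustration):

```python
import math

def voronoi_cell_index(x, generators):
    """Index i of the Voronoi region Vi containing point x, per eq. (2):
    the generator pi nearest to x."""
    return min(range(len(generators)),
               key=lambda i: math.dist(x, generators[i]))

p = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]   # robot positions (generators)
cell = voronoi_cell_index((4.0, 1.0), p)    # (4, 1) is nearest to p[0]
```

A full distributed construction of the cell boundaries, as in [13], is beyond this sketch; the nearest-generator rule is the defining property the construction must realize.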

Fig. 3. Local Coordinate Systems

The relationship between robots R1 and R2 can be defined by transformation matrices T21 and T12, which are represented by the distance between the two robots and their relative orientation. One-hop nodes can communicate to compute the relative orientation of neighboring robots [4] and the transformation matrix. For one-hop robots Ri and Rj, Ri sends Rj its perception of the x and y coordinates of Rj with respect to local reference frame Σi, and Rj sends Ri its perception of the x and y coordinates of Ri with respect to local reference frame Σj. For example, based on the communication between robots R1 and R2, α12 and α21 are known to both robots, where α12 is the orientation of R1 in the coordinate system of robot R2 and α21 is the orientation of R2 in the coordinate system of robot R1. The relative orientation between robots R1 and R2, denoted by θ12, can then be easily computed. Both robots are able to compute the transformation matrices Tij and Tji by sharing information, where

        | cos θij   −sin θij   0   dij cos αji |
Tij =   | sin θij    cos θij   0   dij sin αji |
        |    0          0      1        0      |
        |    0          0      0        1      |

and αji is the orientation of robot Rj in the local coordinate system of robot Ri. Communication is considered as one kind of perception in sensor networks.
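The matrix Tij can be assembled directly from the exchanged quantities dij, θij and αji. A sketch in NumPy (the numeric values are illustrative, not from the paper):

```python
import numpy as np

def T(theta_ij, d_ij, alpha_ji):
    """Homogeneous transform Tij mapping coordinates expressed in frame j
    into frame i, built from the relative orientation theta_ij, the
    inter-robot distance d_ij and the bearing alpha_ji of Rj as seen
    from Ri (all quantities exchanged over one-hop links)."""
    c, s = np.cos(theta_ij), np.sin(theta_ij)
    return np.array([[c, -s, 0.0, d_ij * np.cos(alpha_ji)],
                     [s,  c, 0.0, d_ij * np.sin(alpha_ji)],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Sanity check: Rj's own origin, expressed in Ri's frame, must land at
# (d_ij cos(alpha_ji), d_ij sin(alpha_ji)).
Tij = T(np.pi / 2, 5.0, np.pi / 4)
p_in_i = Tij @ np.array([0.0, 0.0, 0.0, 1.0])
```

Transforms along a multi-hop path compose by ordinary matrix multiplication, which is what the information-sharing chain in the next subsection relies on.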


Based on the localization between one-hop nodes in sensor networks, the resulting transformation matrix Tij is one of the building blocks for global sensor network kinematics. In each local coordinate system, the transformation matrix defines the relation between one-hop neighbors such that meaningful information can be shared, such as between robots R11 and R10 in Figure 4.

Proposition 2.1: For any two robots Ri and Rj in a complete Delaunay tessellation, there exists a path Rp1, ..., Rpm which connects robots Ri and Rj, where Rp1 and Rpm are one-hop neighbors of Ri and Rj respectively. To share information between robots Ri and Rj, there exists a distributed algorithm to pass information represented in the local coordinate system of sensor robot Ri to sensor robot Rj.

Proof: Assume sensor robot Ri detects a moving target; the temporal information of the target is ξ_s^i(t) in the coordinate system of sensor robot Ri. Denote ξ_c^j(t) and ξ_c^pk(t) as the temporal information of target P in the coordinate systems of robots Rj and Rpk respectively, where Rpk is a node on the path Rp1, ..., Rpm. The subscripts s and c here represent information obtained by observation and communication respectively. ξ_c^j(t) can be obtained from ξ_s^i(t) by the following operations:

ξ_c^p1(t) = T_i^p1 ξ_s^i(t),
ξ_c^p2(t) = T_p1^p2 ξ_c^p1(t),
...
ξ_c^j(t) = T_pm^j ξ_c^pm(t).

The transformation matrices T_i^p1, T_p1^p2, ..., T_pm^j are maintained locally by each robot. If the target is in the sensing range of a robot on the path, information about the target from both observation and communication exists. This information can be fused and denoted as ξ_sc^pk(t). The information sharing between robots Rpk and Rpk+1 is then described by:

ξ_c^pk+1(t) = T_pk^pk+1 ξ_sc^pk(t).

It is seen that the sensor fusion and information sharing between two nodes can be done based on the local coordinate systems. Based on the above discussion, the Delaunay graph provides a distributed definition of the relationship between any two robots in the network. A robot maintains its local coordinate system and its relationship with respect to its one-hop neighbors. As shown in Figure 4, information between robots R11 and R10 can be shared without a global coordinate system.

Fig. 4. Information sharing

The Sensor Network Coordinate System. In a mobile sensor network, each robot processes its sensor information with respect to its local coordinate system, such as Σ1 and Σ2 in Figure 2. However, the tasks for the sensor network and sensor fusion are described in a unified coordinate frame Σ. Here we propose to adopt two coordinate systems for each robot: local coordinate systems Σi, i = 1, 2, ..., n, and a global coordinate system Σ, as shown in Figure 2. We call the global coordinate system the sensor network coordinate system; it is the basis for information mapping and fusion, task dissemination, mobile robot cooperation, position-based packet routing and forwarding, etc. In the sensor network coordinate frame, the tasks of the sensor network can be clearly defined. The relationship between the coordinate system of robot Ri and the sensor network coordinate system can be defined by a transformation matrix T_0^i.

C. Sensor Network Coverage Area

In a sensor network, the coverage area of each sensing node can be defined in terms of its communication range and/or its sensing range, as shown in Figure 5. The sensing range of robot Ri is denoted by si. We assume that the maximum region covered by the sensors of a single robot is smaller than the total region to be explored, denoted by Ω; the multiple robot team therefore needs to move dynamically to maximize the coverage of the sensor network and explore a larger area. We further assume that the wireless communication mechanism only allows robots to transmit and receive messages within a certain range. The communication range of robot Ri is denoted by ci and is assumed to be larger than the sensing range si, as shown in Figure 5.

Fig. 5. Sensing and Communication Range

The Voronoi regions of some sensor nodes are not closed, such as that of robot R1 in Figure 6(a). A special line segment de, or an arc de, is used to close such a region, as shown in Figure 6. If an obstacle is present in the sensing range, the definition of the region takes the shape of the obstacle into account; for example, line segments de and ef are used to form region Vi as shown in Figure 6(b). The resulting Voronoi region Vi can be treated as a normal region for the cooperation of robots. Based on the above discussion, the coverage area of a given number of sensor nodes can be formed. Figure 7 shows the coverage area of a group of 16 randomly deployed robots. Figure 7(a) is a Voronoi diagram; some of the Voronoi subregions are not closed. Assuming that each robot has the same sensing range, the coverage area of the sensor network can be determined, as shown in Figure 7(b). In some cases a robot may have only one or two one-hop neighbors,
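Given the robot positions and a common sensing range, the covered portion of a region can be estimated numerically. The following Monte Carlo sketch computes the fraction of a rectangular region covered by the union of the sensing disks; the region bounds, sample count and positions are assumptions for illustration, not from the paper:

```python
import math
import random

def coverage_fraction(robots, s, box, samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of the rectangle
    box = (xmin, ymin, xmax, ymax) covered by the union of sensing
    disks of radius s centered at the robot positions."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    xmin, ymin, xmax, ymax = box
    hits = 0
    for _ in range(samples):
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        if any(math.dist((x, y), r) <= s for r in robots):
            hits += 1
    return hits / samples

robots = [(5.0, 5.0)]                              # single node, for checking
frac = coverage_fraction(robots, s=3.0, box=(0, 0, 10, 10))
# A lone disk of radius 3 in a 10x10 box covers pi*9/100, about 0.28.
```

The same estimator applied to a whole team quantifies how much redeployment enlarges the covered area, which is the objective pursued in the next section.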


Fig. 6. Sensor node coverage and redeployment

Fig. 8. Sensor Node Redeployment

in which case the Voronoi diagram and the sensor coverage area may be different; this case is not discussed here due to space limitations.


Fig. 7. (a) The robots are randomly deployed and the planar space is partitioned. (b) The sensing range si = 8 m is used to form the coverage area of the sensor network.

III. SENSOR DEPLOYMENT AND COOPERATION

In a mobile sensor network, the robots can cooperate to perform spatially distributed tasks such as distributed sensing and cooperative manipulation. One of the objectives of robot cooperation in a mobile sensor network is to maximize the coverage area of the network. As shown in Figure 7(b), the robots can be redeployed to cover a larger area. The mobile sensor network as a whole can also maneuver itself to explore a larger area, which requires coordinated motion of the multiple robotic systems.

A. Problem Formulation

The necessity of autonomous redeployment can be illustrated by several examples. If the Voronoi region Vi is not totally within the sensing range of robot Ri, the robot has the freedom to move such that the sensing range si totally covers the region Vi, as shown in Figure 8. The direction of the motion points toward the geometric center of Vi. Note that Vi is not a fixed set: if robot Ri moves, the relationship between Ri and its neighboring nodes changes, and the region Vi and its geometric center change accordingly. The motion of the robot has to be adjusted based on the new region Vi.

B. Continuous Deployment Algorithm

In Figure 8, it is shown that the mobile robot can move to its geometric center to enlarge the sensor coverage area. We define the geometric center as the centroid of the Voronoi region. For a Voronoi diagram, if the centroid of

each region also serves as the generator for the region, it is called a centroidal Voronoi diagram [6]. For a closed region and a given number of generators, the centroidal Voronoi tessellation is not unique. In a closed region, a discrete Lloyd's method can be used to generate the centroidal Voronoi tessellation. In [5], a distributed control law based on a gradient descent scheme has been discussed; the distributed control generates continuous motion that produces the Voronoi tessellation of a certain closed region. In this paper, we assume the region to be explored, denoted by Ω, is an open set. Ω is much larger than the area covered by the sensor network, and the sensor network has to sweep a certain part of the region Ω to achieve certain tasks. The mathematical model of each robot is defined by the collection <qi, ui, fi, di, Vi>, where Vi is the Voronoi region. The computation of the vertices of Vi involves only the one-hop neighbors of Ri. Assume the kinematics of robot Ri is ṗi = ui, where ui = {vx, vy}^T represents the velocities of the robot along the x-direction and y-direction in its local coordinate frame. The continuous control law based on the Lloyd algorithms [5] can be described as:

ui = −kp (pi − CVi)

where CVi = {CVi,x, CVi,y}^T is the centroid of the region. Notice that CVi is a time-varying vector computed in the robot's local coordinate system; once the vertices of Vi are determined, CVi can be computed locally. Since wheeled mobile robots are often used, the control algorithms and convergence analysis based on the robot model in eq. (1) are addressed in a different paper. The value of CVi is affected not only by the motion of robot Ri but also by that of its neighbors. The centroid CVi evolves continuously with respect to the shape change of Vi. However, Vi can change drastically when a topological event occurs [9]. A topological event occurs when two vertices merge or one vertex splits. For instance, the number of vertices of the Voronoi region of robot 1 in Figure 9 changes from 6 to 5 when two vertices merge. At the moment of merging or splitting, the value of CVi may change discontinuously. The distributed continuous deployment algorithm can be summarized as:
1) Construct the Voronoi tessellation Vi in the open space Ω associated with robot Ri;
2) If Vi is an open set, modify Vi based on the sensing range of Ri to form a closed region;
3) Based on the vertices of Vi, compute the centroid of Vi;
4) Execute controller ui for a certain period of time ti;
5) Return to step 1.
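The deployment loop above can be sketched with a discrete Lloyd iteration: grid samples of Ω stand in for the exact Voronoi cells, each robot computes the centroid of the samples it owns, and moves toward that centroid with a proportional law. This is a centralized simulation of the distributed algorithm, with made-up gains, region bounds and robot count:

```python
import numpy as np

def lloyd_step(p, grid, kp=0.5):
    """One discrete Lloyd iteration. Each grid sample is assigned to its
    nearest robot (a discrete Voronoi partition), each cell's centroid
    C_Vi is its samples' mean, and every robot moves toward its centroid
    with the proportional law u_i = -kp * (p_i - C_Vi)."""
    d2 = ((grid[:, None, :] - p[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)              # discrete Voronoi cell per sample
    for i in range(len(p)):
        cell = grid[owner == i]
        if len(cell):                      # skip robots with empty cells
            p[i] += -kp * (p[i] - cell.mean(axis=0))
    return p

# 16 robots clustered near the center of the region Omega = [0, 20]^2.
rng = np.random.default_rng(1)
p = rng.uniform(8.0, 12.0, size=(16, 2))
xs = np.linspace(0.0, 20.0, 40)
grid = np.array([(x, y) for x in xs for y in xs])
for _ in range(30):
    p = lloyd_step(p, grid)
```

After a few dozen iterations the initially clustered robots spread across the region, approximating a centroidal Voronoi configuration and enlarging the covered area, which mirrors the simulation in Figure 9.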


Fig. 9. Continuous deployment algorithm to maximize the coverage area: (a) random deployment; (b) final configuration

Figure 9 shows the simulation results of the above algorithm. For 16 randomly deployed robots, the initial coverage area of the sensor network is shown in Figure 9(a). In all the following figures, + denotes the centroid of a Voronoi region and a small circle denotes the location of a robot. The units of the x-axis and y-axis are meters. The orientations of the robots are not shown in the figures. Figures 9(b) and (c) are snapshots of the deployment. Figure 9(d) shows the final deployment of the robots, which is a centroidal Voronoi diagram. Since the centroidal Voronoi diagram is formed based on the local information of each node, the result may not be the optimal deployment.

(a) Initial configuration    (b) Lost 1 node

IV. FORMATION CONTROL

The objective of formation control is to coordinate a collection of robots in such a way that they maintain a given formation relative to each other. A team of cooperative robots can often be used to perform tasks that are difficult for a single robot to achieve. Formation is important in search and rescue in hazardous environments and in military applications [2], where a sensor network and robots can cooperate to accomplish complicated tasks. Regardless of the variety of applications of robot formations, the goal of the approaches for coordinating multiple robots is to accomplish a common objective collaboratively.

Asynchronous Formation Control. In this paper, we discuss two kinds of formation control: asynchronous formation control and synchronous formation control. If only one of the robots in the sensor network knows the desired motion of the sensor network, and this motion is not sent to the other robots via communication, we call it asynchronous formation control. The robot that has the desired motion of the formation is called the leading robot. If the leading robot changes its position according to the motion plan of the formation, its neighboring robots detect the motion through the change of their own Voronoi regions. The continuous deployment algorithm discussed in Section III-B is then applied. Figure 11 shows the simulation results of asynchronous formation control. The desired motion of the formation is a straight line, and the robots other than the leading robot are unaware of the motion plan. It can be seen that the formation pattern changes slightly but the robots in the sensor network follow the motion of the leading robot. It is worth noting that the motion of each robot is controlled in its local coordinate system; there is no agreement on a global coordinate system.

In an asynchronous formation system, the motion of the leading robot is restricted by the response time of the sensing system and the number of robots in the sensor network. If the speed of the leading robot is too high, simulation results show that the rest of the robots may not be able to follow and the formation is broken.

(c) Lost 2 nodes    (d) Lost 4 nodes

Fig. 10. Fault Tolerant

One of the most important features of sensor networks is their redundancy and fault tolerance. If one or more robots fail, the other robots can redeploy themselves to adjust the sensor network. Figure 10 shows the validity of the continuous deployment algorithm in this situation. Starting from the sensor network shown in Figure 10(a), Figures 10(b), (c) and (d) show the adjustment of the sensor network after 1, 2, and 4 nodes are lost, respectively. Under the continuous deployment algorithm, the sensor network can reorganize itself if some robots fail.


Fig. 11. Asynchronous formation control; the desired motion of the leading robot is a straight line

Synchronous Formation Control. The sensor network is simultaneously a communication network. If one robot receives a formation motion plan, the network flooding protocol can be used to send the plan to all the other robots in the network. It is worth noting that the interpretation of the same plan differs from one sensor node to another.


Denoting by s the desired path of the formation and by si its interpretation in the coordinate system of robot Ri, the information sharing between neighboring robots Ri and Rj is done through the transformation matrix Tij. The derivatives of si can be shared by the following formula:

{ṡj,x, ṡj,y, ṡj,θ}^T = J · {ṡi,x, ṡi,y, ṡi,θ}^T,

where the Jacobian J is derived from Tij, ṡi,x and ṡi,y denote the desired velocities along the x-direction and y-direction respectively, and ṡi,θ is the desired angular velocity, if applicable. The synchronous formation control algorithm can then be summarized as follows.
1) Construct the Voronoi tessellation Vi in the open space Ω associated with robot Ri;
2) If Vi is an open set, modify Vi based on the sensing range of Ri to form a closed region;
3) Based on the vertices of Vi, compute the centroid of Vi;
4) Receive plans from a neighbor, transform them into the local coordinate system, and transmit this information to other neighbors according to the flooding protocol;
5) Compute and execute controller ui combining both CVi and si, ṡi;
6) Return to step 1.
Figure 12 shows the simulation results of the algorithm. The initial configuration is not yet a centroidal Voronoi tessellation. The desired motion of the formation s is a sinusoidal wave. This information is sent to the leading robot and then flooded to the sensor network. The leading robot keeps sending out its desired motion to the network. From the simulation results, it can be seen that the formation stabilizes itself to a centroidal Voronoi tessellation and follows the desired path of the formation, which is a sinusoidal wave. The synchronous formation control is not a leader-follower system: if the leading robot fails during the operation, any other robot can act as the leading robot instead.
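The paper does not give J explicitly. Assuming J is the rotation block of Tij, so that the linear velocity is rotated by the relative orientation while the angular rate passes through unchanged (the translational offset does not affect velocities when the frames are treated as fixed over one control step), a sketch is:

```python
import numpy as np

def velocity_jacobian(theta_ij):
    """Assumed Jacobian relating desired path velocities expressed in
    frame i to the same velocities expressed in frame j: the 2x2
    rotation block of Tij acts on (s_x, s_y); s_theta is unchanged."""
    c, s = np.cos(theta_ij), np.sin(theta_ij)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A 1 m/s velocity along x in frame i, viewed from a frame rotated by
# 90 degrees, becomes a 1 m/s velocity along y; the angular rate is
# frame-independent.
J = velocity_jacobian(np.pi / 2)
s_dot_i = np.array([1.0, 0.0, 0.2])
s_dot_j = J @ s_dot_i
```

Each robot applies this map when forwarding the flooded plan in step 4, so every node executes the same desired path expressed in its own local frame.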


Fig. 12. The formation follows a sinusoidal wave and maximizes its coverage area during the operation.

Hybrid Formation Control. The purpose of the hybrid formation control algorithm is to improve the intelligence of the network. More complicated hybrid formation control algorithms can be designed based on the continuous algorithm. For example, three neighboring robots in the network can form a special geometric pattern while the rest are freely deployed. The simulation results are shown in Figure 13.


Fig. 13. Synchronized motion with hybrid control; three robots in the sensor network keep a special formation during the operation.

V. CONCLUSION AND FUTURE WORK

This paper proposes a distributed model for mobile sensor networks, which defines the geographical relationship of different robots. Based on this model, continuous and hybrid deployment algorithms are proposed to deploy the robots to cover a larger area. The formation control of a group of mobile robots is further discussed. All the proposed algorithms were simulated, and the simulation results have shown the efficacy and validity of the algorithms. Future work includes the convergence analysis of the algorithms and their implementation on real robotic systems.

REFERENCES

[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci. Wireless sensor networks: a survey. Computer Networks, 38(4):393–422, 2002.
[2] T. Balch and R. Arkin. Behavior-based formation control for multirobot teams. IEEE Trans. on Robotics and Automation, 14(6):926–939, 1998.
[3] N. Bulusu, J. Heidemann, D. Estrin, and T. Tran. Self-configuring localization systems: Design and experimental evaluation. May 2003.
[4] S. Capkun, M. Hamdi, and J.-P. Hubaux. GPS-free positioning in mobile ad hoc networks. Cluster Computing, 5(2):157–167, 2002.
[5] J. Cortes, S. Martinez, T. Karatas, and F. Bullo. Coverage control for mobile sensing networks. In Proc. of IEEE ICRA, pages 1327–1332, Washington DC, May 2002.
[6] Q. Du, V. Faber, and M. Gunzburger. Centroidal Voronoi tessellations: Applications and algorithms. SIAM Review, 41(4):637–676, 1999.
[7] J. A. Fax. Optimal and Cooperative Control of Vehicle Formations. PhD dissertation, California Institute of Technology, Pasadena, California, 2002.
[8] J. F. Feddema, C. Lewis, and D. A. Schoenwald. Decentralized control of cooperative robotic vehicles: Theory and application. IEEE Trans. on Robotics and Automation, 18(5):852–864, 2002.
[9] J. J. Fu and R. C. T. Lee. Voronoi diagrams of moving points in the plane. International Journal of Computational Geometry and Applications, 1(1):23–32, 1991.
[10] A. Howard, M. J. Mataric, and G. S. Sukhatme. An incremental self-deployment algorithm for mobile sensor networks. Autonomous Robots, 13:113–126, 2002.
[11] S. S. Iyengar and S. Kumar. Preface of special issue on distributed sensor networks. The International Journal of High Performance Computing Applications, 16(3):203–205, 2002.
[12] S. Kumar, F. Zhao, and D. Shepherd. Special issue on collaborative signal and information processing in microsensor networks. IEEE Signal Processing Magazine, pages 13–14, 2002.
[13] X.-Y. Li, P.-J. Wan, and O. Frieder. Coverage in wireless ad hoc sensor networks. IEEE Transactions on Computers, 52(6):753–763, 2003.
[14] S. Meguerdichian, S. Slijepcevic, V. Karayan, and M. Potkonjak. Localized algorithms in wireless ad-hoc networks: location discovery and sensor exposure. pages 106–116, Long Beach, CA, July 2001.
[15] L. E. Parker. Distributed algorithms for multi-robot observation of multiple moving targets. Autonomous Robots, 12:231–255, 2002.
[16] J. Tan, N. Xi, A. Goradia, and W. Sheng. Coordination of human and mobile manipulator formation in a perceptive reference frame. In Proc. of IEEE ICRA, 2003.
