Probabilistic Spatial Mapping and Curve Tracking in Distributed Multi-Agent Systems

Ryan K. Williams and Gaurav S. Sukhatme

The authors are with the Departments of Electrical Engineering and Computer Science at the University of Southern California, Los Angeles, CA 90089 USA (email: [email protected]; [email protected]).

Abstract— In this paper we consider a probabilistic method for mapping a spatial process over a distributed multi-agent system, together with a coordinated level curve tracking algorithm for adaptive sampling. As opposed to assuming the independence of spatial features (e.g. an occupancy grid model), we adopt a novel model of spatial dependence based on the grid-structured Markov random field that exploits spatial structure to enhance mapping. The multi-agent Markov random field framework is used to distribute the model over the system and to decompose the problem of global inference into local belief propagation problems coupled with neighbor-wise inter-agent message passing. A Lyapunov stable control law for tracking level curves in the plane is derived, and a method of gradient and Hessian estimation is presented for applying the control in a probabilistic map of the process. Simulation results over a real-world dataset with the goal of mapping a plume-like oceanographic process demonstrate the efficacy of the proposed algorithms. It is shown that a system of agents is able to map the process over a distributed set of heterogeneous observations in an efficient, accurate, and convergent manner. Scalability and complexity results suggest the feasibility of the algorithms in realistic multi-agent deployments.

I. INTRODUCTION

There has been significant interest recently in the study of distributed systems of interconnected agents coordinating to achieve a common objective. Such systems offer numerous advantages over their single-agent counterparts, including enhancements in scale, robustness, sampling, and processing capability. The analysis of multi-agent systems is relevant in various contexts, for example when considering biologically-inspired [1] or cooperative systems [2].

We consider a system of agents operating in the plane, each of which possesses locomotion, communication, computation, and measurement capabilities. The objective for the system is to cooperatively and efficiently map a spatial process over a grid workspace for the purposes of sampling. Common methods in the literature for achieving such goals include occupancy grids and level curve tracking algorithms (e.g. [3] and [4]); however, these approaches exhibit several shortcomings. Such methods are generally inefficient because the spatial structure of the process is ignored [5], and a reliance on directly sampled gradient and Hessian information (e.g. in [6]-[9]) can present difficulties in practice. In this work we propose a method that assumes a probabilistic model of spatial dependence in the process to achieve efficient mapping, and a Lyapunov stable level curve tracking control that operates not through direct process observation,
but by estimating gradient and Hessian information over the resultant process map. A grid-structured pairwise Markov random field model is assumed and the multi-agent Markov random field framework is applied to decompose the problem of global inference into local inference problems coupled with inter-agent message passing (in [5] a single-agent version of this model is explored; see [10] for our previous work on the multi-agent extension). Simulations of virtual autonomous surface vehicles over a real-world dataset demonstrate the efficacy of the proposed methods in mapping and sampling a plume-like oceanographic process (data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) [11]). It is shown that the agents are able to map the plume-like process over a distributed set of heterogeneous observations in an efficient, accurate, and convergent manner. The feasibility of the approach in realistic multi-agent deployments is suggested by acceptable scaling and computational complexity over reasonably sized agent networks and workspaces.

The outline of the paper is as follows. In Section II we provide a model of spatial dependence and a cooperative inference algorithm for generating a probabilistic map of a spatial process. A curve model and a Lyapunov stable control law for tracking level curves in the plane are given in Section III. Section IV discusses gradient and Hessian estimation for tracking the level curves of a process over a discrete grid. Simulation results are provided in Section V, and concluding remarks as well as directions for future work are stated in Section VI.

II. SPATIAL MAPPING IN MULTI-AGENT SYSTEMS

Consider a system of K agents indexed by {1, . . . , K} operating over a fixed spatial grid of N × N cells. To each cell we assign a random variable Xij, termed a process variable, with row and column indices i, j = 1, . . . , N. We associate with each process variable a set of random variables Yijk, termed model variables, with k = 1, . . . , M. The model variables represent a measurable realization of an unobservable (or hidden) process given by the Xij's, with M observables per grid cell. The joint probability distribution over the domain X = {Xij} ∪ {Yijk} is represented by the Markov random field M = (X, H, P), where H = (X, E, Φ) is an undirected graph with nodes labeled by X, edges E, and potential set Φ, and P is the probability distribution over X. We assume a pairwise dependence model over the process variables defined by node potentials φ(Xij) and edge potentials φ(Xij, Xlm). The system observation models are heterogeneous in nature and are defined by edge
potentials φ(Xij, Yijk). The interpretation of the potentials is application-specific; informally, they represent the likelihood of a cell taking a particular value, a pair of adjacent cells having a particular state configuration, and an observation taking a particular value given a known cell state [5].

In order to apply the given spatial model over a distributed multi-agent system we consider the multi-agent Markov random field framework (see [10] for our previous work). The set of domain variables X is partitioned over the system with agent subsets Xi ⊆ X, and local uncertainty is represented by Markov random fields Mi = (Xi, Hi, Pi) managed by each agent. The subgraphs H = {H1 ∪ H2 ∪ . . . ∪ HK} are organized into a tree Ψ = (H, L), where an edge lij ∈ L between agents i and j is labeled by the intersection of the domains Xi ∩ Xj (i.e. an interface). The set of neighbors of agent i is given by Ni = {j ≠ i | lij ∈ L}. Assume also that Ψ satisfies the running intersection property, thereby enforcing d-separation and ensuring probabilistically sound inter-agent message passing. The tree Ψ, called a hypertree over H, defines the communication topology of the multi-agent system [12].

The local domains Xi are defined as follows. To all agents we assign the set of process variables, requiring that the probabilistic interfaces Xi ∩ Xj are uniformly {Xij}. The model variables are then partitioned in an application-specific manner to meet system goals or constraints (e.g. assigning each agent a unique sensor and associated model). Such a configuration allows the agents to retain a homogeneous view of the workspace (via the process variables), while simultaneously enabling varied observation models and, ultimately, fusion. Given a distributed set of heterogeneous observations, the agents must cooperate via local communication to reason over the global distribution P.
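To make the partitioning concrete, the following minimal Python sketch (our own illustration; the class and function names are hypothetical, not the authors' implementation) represents one agent-local view: the shared process variables, a private set of model variables, and the hypertree neighbors, with the interface between two agents recovered as the intersection of their domains.

    from dataclasses import dataclass, field
    from typing import List, Set, Tuple

    Cell = Tuple[int, int]   # grid index (i, j) of a process variable Xij

    @dataclass
    class AgentDomain:
        """Local view Xi of agent i in the multi-agent Markov random field."""
        agent_id: int
        process_vars: Set[Cell]                 # shared grid variables {Xij}
        model_vars: Set[Tuple[Cell, int]]       # private model variables Yijk, keyed by (cell, k)
        neighbors: List[int] = field(default_factory=list)  # Ni, from the hypertree Psi

    def interface(a: AgentDomain, b: AgentDomain) -> Set[Cell]:
        """Probabilistic interface Xa ∩ Xb labeling the hypertree edge between a and b.
        With the domain choice described above this is always the full grid {Xij}."""
        return a.process_vars & b.process_vars

    # Example: two agents sharing a 3 x 3 grid, each owning one observation model (k = 1 or 2).
    grid = {(i, j) for i in range(3) for j in range(3)}
    agent1 = AgentDomain(1, grid, {(c, 1) for c in grid}, neighbors=[2])
    agent2 = AgentDomain(2, grid, {(c, 2) for c in grid}, neighbors=[1])
    assert interface(agent1, agent2) == grid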

In particular, the joint probability distribution

    P(X) = (1/Z) ∏_{i=1}^{K} ∏_{φj ∈ Φi} φj                (1)

is the product of the potentials associated with each Hi = (Xi, Ei, Φi) (with duplication ignored), where Z is a normalizing constant. Fig. 1 illustrates a 3 × 3 grid-structured multi-agent Markov random field with two agents and one set of observables per agent (not all shown for clarity).

Fig. 1. 3 × 3 grid-structured pairwise multi-agent Markov random field with two subgraphs sharing process variables {Xij : i, j = 1, . . . , 3}. Each agent maintains a set of model variables {Yijk : i, j = 1, . . . , 3, k = 1, 2} over which observations are made (not all shown for clarity). The agent hypertree structure is H1 − H2 with interface {Xij}.
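As a concrete illustration of the factorization (1), the short Python sketch below (our own simplified code, assuming binary process variables and the 4-neighbor grid of Fig. 1; the potential values match those used later in (13)) builds the node and edge potential tables and evaluates the unnormalized product of potentials for a full assignment. The normalizing constant Z is left implicit, since belief propagation only requires the potentials up to normalization.

    import numpy as np

    N = 3                                   # grid side length
    node_pot = np.array([0.5, 0.5])         # phi(Xij), uniform prior over {0, 1}
    edge_pot = np.array([[0.7, 0.3],        # phi(Xij, Xlm), favors agreement of
                         [0.3, 0.7]])       # 4-neighbor cells (spatial smoothness)

    def grid_edges(n):
        """Edges E of the pairwise grid MRF (right and down neighbors only)."""
        for i in range(n):
            for j in range(n):
                if j + 1 < n: yield (i, j), (i, j + 1)
                if i + 1 < n: yield (i, j), (i + 1, j)

    def unnormalized_joint(x):
        """Product of potentials in (1) for a full assignment x[i, j] in {0, 1};
        dividing by Z (the sum over all assignments) would give P(X = x)."""
        p = 1.0
        for i in range(N):
            for j in range(N):
                p *= node_pot[x[i, j]]
        for (a, b) in grid_edges(N):
            p *= edge_pot[x[a], x[b]]
        return p

    x = np.zeros((N, N), dtype=int)         # e.g. the all-zeros ("out of process") labeling
    print(unnormalized_joint(x))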

A. Inter-Agent Belief Exchange and Inference

To generate a probabilistic spatial map we must infer the grid of marginal distributions {P(Xij | O), i, j = 1, . . . , N} that represents the posterior beliefs of the multi-agent system conditioned on local observation sets O = {O1 ∪ O2 ∪ . . . ∪ OK}. We consider a distributed loopy belief propagation algorithm to render the problem of global inference feasible and to decompose it into a process of belief exchanges and local inference problems. The cooperative inference process begins with each agent reducing assigned model potentials by the set of local evidence Oi to generate process beliefs ψk (observed model variables indexed linearly by k for notational simplicity). The agents then perform message passing over the hypertree to diffuse the beliefs over the network. We consider an asynchronous message passing scheme to mimic realistic communication conditions, though a structured collect/distribute scheme is equally appropriate [13]. At any given instance an agent i exchanges a message set with a neighboring agent j ∈ Ni only when it is ready; that is, when all messages have been received from all other neighboring agents, excluding j. Each message set, defined by

    {δ^k_{i→j}} = ψk ∏_{n ∈ Ni − {j}} δ^k_{n→i},   ∀ k ∈ Oi                (2)

consists of belief products generated by observed model variables, where the domain of each message is the associated process variable. Message passing terminates when each agent has sent a message to each of its neighbors. Convergence is guaranteed (see [13] for proof) with communication beginning in the hypertree leaves and propagating inward. The structured message-passing process induces an agent consensus with respect to the system-wide process beliefs. Each agent then runs the standard loopy belief propagation algorithm locally over Hi (with belief messages integrated, respecting scope) and extracts cell-wise marginal distributions over the Xij's to generate a probabilistic process map. We omit the details of the local inference process here due to space constraints (see [5], [10] and more generally [13] for details).
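The sketch below is a minimal illustration of the belief exchange in (2), assuming binary process states so that each belief ψk and each message is a length-2 vector over the associated process variable; the hypertree bookkeeping (readiness checks, running intersection) is elided, and the normalization step is our own addition for numerical stability.

    import numpy as np

    def message_set(psi, incoming, exclude):
        """Message set {delta_{i->j}} of (2): for every locally observed model
        variable k, multiply the reduced belief psi_k by the messages received
        from all hypertree neighbors except the destination j.

        psi:      dict k -> length-2 belief vector over the process variable
        incoming: dict neighbor n -> dict k -> length-2 message vector
        exclude:  destination neighbor j
        """
        out = {}
        for k, belief in psi.items():
            msg = belief.copy()
            for n, msgs in incoming.items():
                if n != exclude and k in msgs:
                    msg = msg * msgs[k]
            out[k] = msg / msg.sum()        # normalize for numerical stability
        return out

    # Agent 1 has one observed model variable (k = 0) and hypertree neighbors {2, 3}.
    psi_1 = {0: np.array([0.2, 0.8])}               # evidence-reduced belief
    inbox = {3: {0: np.array([0.6, 0.4])}}          # message already received from agent 3
    print(message_set(psi_1, inbox, exclude=2))     # agent 1 is now ready to send to agent 2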

III. TRACKING LEVEL CURVES IN THE PLANE

In addition to mapping the process of interest, we also desire our agents to actively sample the process in a coordinated manner. Towards this goal we consider tracking the level curves of a planar process p. Assume for now that p satisfies the following assumption:

Assumption 1: The process function p is a C² smooth function on some bounded open set B ⊂ R². There exists a set of closed curves C(p), the level curves of p. On the set B we have ||∇p|| ≠ 0 [4].

We construct a planar curve model by considering the time derivative of p along the trajectory of a moving agent. Applying the directional derivative and assuming the common double integrator agent model, our multi-agent dynamical system is given by

    ṙi = vi
    v̇i = ui                                                (3)
    ṗi = ∇pi · vi

with agent position and velocity ri, vi ∈ R², and process value at the position of the ith agent, p(ri) ∈ R, denoted pi. We require inputs ui that will drive the system (3) to a desired configuration asymptotically in time, i.e. one in which the agents stably track a specified set of level curves. Towards that goal, consider the following desired system configuration:

    pi = Ci
    vi = µi xp,i                                            (4)

where for each agent i, Ci ∈ B is the desired level curve value, µi > 0 is the desired tracking speed, and xp,i is the curve tangent vector with

    yp,i = ∇pi / ||∇pi||
    xp,i = R yp,i                                           (5)

where the rotation matrix R is chosen such that xp,i and yp,i form a right-handed frame. Fig. 2 depicts our assumed level curve tracking model [4].

Fig. 2. Level curve model for tracking control design. An agent i is depicted by a solid dot with state (ri, vi, p(ri)). The dashed ellipses are the level curves of a planar process function p, with the right-handed frame (xp,i, yp,i) depicted. Tracking is achieved when vi aligns with xp,i at desired speed ||vi|| = µi, and p(ri) = Ci with desired level Ci.
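For reference, a short sketch (our own, under the assumption that R is the 90° planar rotation) of the frame construction (5): the normalized gradient gives yp,i, and rotating it yields the level curve tangent xp,i; flipping the sign of R reverses the traversal direction.

    import numpy as np

    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])             # 90-degree counter-clockwise rotation

    def curve_frame(grad):
        """Right-handed frame (x_p, y_p) of (5) from a nonzero process gradient."""
        y_p = grad / np.linalg.norm(grad)   # unit normal, along the gradient
        x_p = R @ y_p                       # unit tangent to the level curve
        return x_p, y_p

    grad = np.array([1.0, 2.0])
    x_p, y_p = curve_frame(grad)
    v_des = 1.5 * x_p                       # desired velocity mu_i * x_{p,i} from (4)
    assert abs(np.dot(x_p, y_p)) < 1e-12    # tangent is orthogonal to the gradient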

To design a control law to satisfy configuration (4) we consider the following Lyapunov candidate function

    V = (1/2) ∑_{i=1}^{K} ηi (Ci − pi)² + (1/2) ∑_{i=1}^{K} ||vi − µi xp,i||²                (6)

where ηi > 0. This function has been constructed such that the desired system configuration is the unique critical point. The time derivative of the Lyapunov function along the trajectories of system (3) is given by

    V̇ = − ∑_{i=1}^{K} ηi (Ci − pi) ∇pi · vi
         + ∑_{i=1}^{K} (vi − µi xp,i) · [ ui − (µi / ||∇pi||) ( R∇²pi vi − (yp,i · ∇²pi vi) xp,i ) ]      (7)

where ∇²pi is the Hessian matrix of the process function p(ri). The agent controls ui are chosen to ensure the negative semi-definiteness of (7) as required by standard Lyapunov control design. Considering control laws for i = 1, . . . , K

    ui = (µi / ||∇pi||) ( R∇²pi vi − (yp,i · ∇²pi vi) xp,i ) + ηi (Ci − pi) ∇pi − αi (vi − µi xp,i)       (8)

with αi > 0 gives

    V̇ = − ∑_{i=1}^{K} αi ||vi − µi xp,i||² ≤ 0                                                           (9)

our desired result. The resulting form of controls (8) is quite intuitive. The target level curve is sought through gradient descent/ascent while tracking is stabilized by a velocity alignment term and a curvature anticipation term. Scalars ηi and αi are tunable control parameters for descent rate and velocity alignment, respectively, while the choice of R controls the direction of curve traversal (i.e. clockwise or counter-clockwise).

Assuming that the initial conditions of the multi-agent system are such that the Lyapunov function V given by (6) is finite and that Assumption 1 is satisfied, the desired system configuration (4) is achieved asymptotically under controls (8). This fact follows directly from (9), LaSalle's Invariance Principle, and Barbalat's lemma [14] (proof omitted due to space constraints).

We also mention briefly collision avoidance between agents in the system. In realistic environments, and particularly in systems with many agents, avoiding collisions is critical. Assume that each agent has a circular collision region of radius di within which other agents are detected and avoided. We consider the following collision control based on a logarithmic potential between agents [15]:

    ui,col = −βi R ∑_{j ∈ Ωi} ( 1/||rij|| − di/||rij||² ) rij                                            (10)

where rij = ri − rj, βi > 0, and Ωi = {j ≠ i : ||rij|| ≤ di} is the set of detected collision threats.
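Putting (5), (8), and (10) together, the following sketch computes one agent's control input from its state and a local gradient and Hessian estimate. It is our own illustrative implementation under the stated assumptions (90° rotation R, scalar gains, a simple threat list), not the authors' code.

    import numpy as np

    R = np.array([[0.0, -1.0], [1.0, 0.0]])   # planar rotation fixing the traversal direction

    def tracking_control(v, p, grad, hess, C, mu, eta, alpha):
        """Level curve tracking control (8) for a single agent."""
        g = np.linalg.norm(grad)               # Assumption 1 guarantees g > 0
        y_p = grad / g
        x_p = R @ y_p
        curvature = (mu / g) * (R @ (hess @ v) - np.dot(y_p, hess @ v) * x_p)
        return curvature + eta * (C - p) * grad - alpha * (v - mu * x_p)

    def collision_control(r, others, d, beta):
        """Logarithmic-potential avoidance term (10) over the detected threats in Omega_i."""
        u = np.zeros(2)
        for r_other in others:
            r_ij = r - r_other
            dist = np.linalg.norm(r_ij)
            if 0.0 < dist <= d:
                u -= beta * (R @ ((1.0 / dist - d / dist ** 2) * r_ij))
        return u

    # Example: one agent tracking the level C = 0.9 with a single nearby neighbor to avoid.
    r, v = np.array([1.0, 0.5]), np.array([0.2, 0.0])
    grad = np.array([0.3, -0.1])
    hess = np.array([[0.05, 0.01], [0.01, 0.04]])
    u = tracking_control(v, p=0.7, grad=grad, hess=hess, C=0.9, mu=1.0, eta=10.0, alpha=1.0)
    u += collision_control(r, others=[np.array([1.5, 0.5])], d=2.0, beta=2.0)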

IV. LOCALIZED GRADIENT AND HESSIAN ESTIMATION

The controls derived in Section III require both the gradient and the Hessian of the process of interest in order to perform level curve tracking. Acquiring both quantities in a directly sampled environment can be difficult and costly, or even impossible. Alternatives to direct sampling have been proposed in [8], [9], wherein localized groups of agents are used to generate gradient and Hessian estimates in order to track level curves. However, such a technique encumbers the agents spatially and hinders globally distributed sampling and exploration. We instead consider tracking probabilistic level curves over the spatial map generated using the results from Section II. In this way a workspace is cooperatively mapped and sampled through distributed global observations by tracking decision boundaries, as opposed to directly tracking process level curves. Not only is this method more robust due to its probabilistic underpinnings (i.e. an informed process/observation model forms its foundation), each agent can also easily generate gradient and Hessian estimates directly from the map that are induced by global information, without the need for coordinated agent localization.

The process function p defined in the context of Section III is now assumed to be the underlying probability function of which the grid generated in Section II is a discrete realization. Considering only a single agent and dropping subscripts for clarity, we examine the second order Taylor expansion of p at a point in the neighborhood of r given by

    p(r + r̄) = p(r) + r̄ᵀ ∇p(r) + (1/2) r̄ᵀ ∇²p(r) r̄                (11)

where we assume r + r̄ is sufficiently close to r such that (11) is a reasonable local approximation. It is clear that with a set of points in the neighborhood of r we can solve a system of equations directly for local approximations of ∇p(r) and ∇²p(r). Considering a grid of points around r with a scale h as depicted by Fig. 3, we form the linear system Ax = b, where the values of p(r) and the neighboring points are interpolated from the probability grid using standard techniques (e.g. bilinear interpolation). Specifically we have

    A = [  0   −h    0      0      0     h²/2
           0    h    0      0      0     h²/2
          −h    0   h²/2    0      0      0
           h    0   h²/2    0      0      0
          −h    h   h²/2  −h²/2  −h²/2   h²/2            (12)
           h    h   h²/2   h²/2   h²/2   h²/2
           h   −h   h²/2  −h²/2  −h²/2   h²/2
          −h   −h   h²/2   h²/2   h²/2   h²/2 ]

    x = [ ∇x p(r), ∇y p(r), ∇²xx p(r), ∇²xy p(r), ∇²yx p(r), ∇²yy p(r) ]ᵀ

    b = [ p(r − h êy), p(r + h êy), p(r − h êx), p(r + h êx), p(r − h êx−y), p(r + h 1), p(r + h êx−y), p(r − h 1) ]ᵀ − p(r) 1

where 1 = [1, 1]ᵀ, êx and êy are the standard Cartesian basis vectors, and êx−y = êx − êy. Solving the overdetermined, rank deficient system (12) can be accomplished by orthogonal decomposition of the coefficient matrix A [16]. Considering the singular value decomposition A = UΣVᵀ, the gradient and Hessian are found easily by computing x = VΣ⁺Uᵀ b, where Σ⁺ is the pseudo-inverse of Σ. Precomputation of VΣ⁺Uᵀ ∈ R^(6×8) allows the agents to efficiently estimate gradient and Hessian information of the spatial grid and track probabilistic level curves using controls (8).

Fig. 3. Model for gradient and Hessian estimation over probabilistic grids generated by the spatial mapping method of Section II. Discrete cell-wise probabilities P(Xij = x) are represented by hollow dots. The probability value p(r) at the agent position (large solid dot) and neighboring values in a grid of scale h (small solid dots) are interpolated from the discrete grid. The interpolants are used in localized second order gradient and Hessian estimation.
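A compact sketch of the estimator (11)-(12): the rows of A are the second order Taylor coefficients for the eight stencil offsets of Fig. 3, and the pseudo-inverse is computed once per scale h (numpy's pinv performs the SVD-based computation described above). The bilinear interpolation of the probability grid is abstracted behind a callable p(r); this is our own illustration of the construction, not the authors' code.

    import numpy as np

    def stencil_offsets(h):
        """The eight neighbors of Fig. 3: +/- h along e_y, e_x, e_x - e_y, and 1 = (1, 1)."""
        e_x, e_y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        dirs = [-e_y, e_y, -e_x, e_x, -(e_x - e_y), e_x + e_y, e_x - e_y, -(e_x + e_y)]
        return [h * d for d in dirs]

    def taylor_matrix(h):
        """Coefficient matrix A of (12); unknowns x = [p_x, p_y, p_xx, p_xy, p_yx, p_yy]."""
        rows = []
        for a, b in stencil_offsets(h):
            rows.append([a, b, 0.5 * a * a, 0.5 * a * b, 0.5 * a * b, 0.5 * b * b])
        return np.array(rows)

    def estimate(p, r, h):
        """Least-squares gradient and Hessian of the interpolated map p at position r."""
        A_pinv = np.linalg.pinv(taylor_matrix(h))       # SVD-based pseudo-inverse (precomputable)
        b = np.array([p(r + d) for d in stencil_offsets(h)]) - p(r)
        x = A_pinv @ b
        grad = x[:2]
        hess = np.array([[x[2], x[3]], [x[4], x[5]]])
        return grad, hess

    # Example with an analytic stand-in for the interpolated probability map.
    p = lambda r: 0.5 + 0.1 * r[0] - 0.2 * r[1] + 0.05 * r[0] * r[1]
    grad, hess = estimate(p, np.array([2.0, 3.0]), h=1.0)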

V. SIMULATION RESULTS

To evaluate the proposed spatial mapping and curve tracking algorithms we consider the problem of mapping oceanographic processes that are plume-like in nature (e.g. harmful algal blooms (HABs) or environmental spills). Such phenomena are not only of environmental importance, but they are also excellent candidates for our method as they exhibit significant spatial structure. For our simulations we assume that each agent notionally represents an autonomous surface vehicle (ASV) on the ocean surface and that there exists some algorithm that translates our control vectors (8), (10) into appropriate locomotion. Measurements are sampled from a real-world dataset taken from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) [11]. Fig. 4 depicts sea surface temperature (°C) and chlorophyll concentration (mg/m³) data over an 80 × 80 region containing a plume-like process, which we select for sampling and simulation.

For our mapping task we concentrate on plume detection, choosing binary process variables with uniform potentials

    φ(Xij) = [0.5, 0.5]ᵀ,   φ(Xij, Xlm) = [ 0.7  0.3 ]
                                           [ 0.3  0.7 ]                (13)

Observations are assumed to be Gaussian distributed with

    P(Yijk | x) ∼ N(µx; σx²)                                           (14)

where for each x ∈ {0, 1} we have a mean and variance, µx and σx². Prior to simulation a portion of the workspace was designated as being in-process and an empirical observation model was calculated, giving

    µsst = (14.5, 13.75),  σ²sst = (0.5, 0.25)
    µcc  = (1.2, 1.9),     σ²cc  = (0.3, 0.2)                          (15)
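As an illustration of how a raw measurement enters the inference of Section II, the sketch below converts a single sea surface temperature sample into the evidence-reduced belief ψk over the binary state of the sampled cell, using the Gaussian model (14) with the empirical parameters (15). The code and the association of the two components of (15) with the out-of-plume (x = 0) and in-plume (x = 1) states are our own reading, not the authors' implementation.

    import numpy as np

    def gaussian_likelihood(y, mu, var):
        """P(Y = y | x) under (14) for each process state x in {0, 1}."""
        mu, var = np.asarray(mu, float), np.asarray(var, float)
        return np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

    def evidence_belief(y, mu, var):
        """Evidence-reduced process belief psi_k: the edge potential phi(Xij, Yijk)
        reduced by the observed value y and normalized over the binary state of Xij.
        The node potential phi(Xij) is applied later, during local belief propagation."""
        psi = gaussian_likelihood(y, mu, var)
        return psi / psi.sum()

    # Sea surface temperature observation of 13.9 C with the empirical model (15):
    # mean (14.5, 13.75) and variance (0.5, 0.25), read as x = 0 (out of plume), x = 1 (in plume).
    psi = evidence_belief(13.9, mu=(14.5, 13.75), var=(0.5, 0.25))
    print(psi)      # belief over {out of plume, in plume} for the sampled cell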

Fig. 4. MODIS data used for measurement sampling in agent simulations: (a) sea surface temperature (°C); (b) chlorophyll concentration (mg/m³).

Fig. 5(a)-(d) depicts an example of a 100 time unit simulation of a 30 agent system performing simultaneous probabilistic mapping and level curve tracking of the process
shown in Fig. 4. The agents are divided into two teams: one is assigned sea surface temperature, Ci = 0.95, and µi = 1.0, while the other is assigned chlorophyll concentration, Ci = 0.87, and µi = 1.5. The initial positions of the agents are random and the control parameters are given by ηi = 10.0 and αi = 1.0, with collision avoidance parameters di = 2 and βi = 2.0. Over the simulation approximately 30% of the workspace is measured.

To assess the performance of our proposed algorithm we consider three metrics: the percentage of agent samples that are in-process (detection performance), the threshold classification error over sampled cells (decisioning performance), and the percentage of in-process cells sampled (mapping performance). By aggregating simulations over the proposed metrics, we consider algorithm performance with varying level curve targets and agent network sizes. Fig. 5e depicts detection (top) and decisioning (bottom) performance over time for varying level curve targets. For each target we generated 20 randomized simulation runs of 100 time units, for a 30 agent system with Ci ∈ {0.95, 0.90, 0.85, 0.80} and µi = 1.0. Fig. 5f shows mapping (top) and detection (bottom) performance over time for a system with a varying number of agents. Again we generated 20 randomized simulations
for each K ∈ {50, 40, 30, 20, 10}, with Ci = 0.9 and µi = 1.0. For all sets of simulations the remaining control parameters are identical to those previously described.

A. Discussion

The performance of the spatial mapping and tracking algorithm is apparent from our simulations. The multi-agent system is able to map the plume-like process over a set of distributed observations in an efficient, accurate, and convergent manner (cf. Fig. 5(c)-(f)). Through the spatial dependence model and gradient controls, the agents quickly detect and stably track the probabilistic level curves of the process, where the target curve acts to modulate performance (cf. Fig. 5e). From Fig. 5d we see that the act of tracking generates a coordinated radial sampling pattern induced by the probabilistic process model.

In comparison to methods that track the level curves of a scalar process directly using gradient controls (e.g. [6]-[9]), our probabilistic approach offers several benefits. It does not require environmental gradients that can be difficult to obtain in practice, and it operates according to global information through an informed model as opposed to only local process information. Additionally, as the entire probability grid is available at all times, issues such as local extrema are easily avoidable and action planning can be pursued on a global scale. Finally, our approach seamlessly fuses observations from heterogeneous agents (sensors) and offers extensibility to increasingly complex process models (e.g. the addition of a sensor noise/failure model to enhance robustness).

We see from Fig. 5f that the mapping performance increases with the size of the multi-agent system, an expected result. Increasing the number of agents in the system leads to better spatial distribution over the workspace and more observations, leading to more informed process beliefs. Fig. 5f also shows that detection (bottom) is marginally impacted in larger systems as there is greater opportunity for misleading observations (e.g. agents that are localized in areas that are out of process), a natural tradeoff. Communication cost should scale well as network size increases given the neighbor-wise agent topology (as opposed to an all-to-all or centralized scheme). In terms of complexity, we have achieved computation rates of 5-10 Hz on 100 × 100 grids with minimal optimization (similar to [5]). With the recent explosion in multi-core processing and the amenability of the belief propagation algorithm to parallel implementation, we believe our method represents a viable option for cooperatively mapping spatial processes in realistic distributed systems.
Fig. 5. Simulation results of a 30 agent system cooperatively mapping and tracking probabilistic level curves Ci ∈ {0.95, 0.87} of the process depicted by Fig. 4: (a) system configuration at t = 10; (b) converged system configuration at t = 100; (c) target curve and tracking speed convergence; (d) agent trajectories; (e) aggregate in-process sample percentage and process classification error rate (threshold 0.8) for Ci ∈ {0.95, 0.90, 0.85, 0.80}; (f) aggregate process sampling and in-process sample percentages for K ∈ {50, 40, 30, 20, 10}.

VI. CONCLUSIONS AND FUTURE WORK

In this paper we considered methods for mapping and tracking the probabilistic level curves of a spatial process in a multi-agent system. A grid-structured pairwise Markov random field model was assumed and the multi-agent Markov random field framework was discussed for inferring a probabilistic process map. A Lyapunov stable curve tracking control was derived and a method for gradient and Hessian estimation was presented for applying the control in a discrete map of the process. Simulation results showed promising performance and feasibility of the algorithms in realistic multi-agent deployments. Directions for future work include investigating more flexible agent communication topologies, varied process and model distributions, as well as coordinated curve tracking control goals to further expand our proposed mapping and tracking methods.

REFERENCES

[1] D. Paley, N. Leonard, R. Sepulchre, and I. Couzin, "Spatial models of bistability in biological collectives," 46th IEEE Conference on Decision and Control, 2007.
[2] R. Olfati-Saber, J. Fax, and R. Murray, "Consensus and Cooperation in Networked Multi-Agent Systems," Proceedings of the IEEE, 2007.
[3] A. Elfes, "Using occupancy grids for mobile robot perception and navigation," Computer, 1989.
[4] F. Zhang and N. Leonard, "Coordinated Patterns on Smooth Curves," Proceedings of the IEEE International Conference on Networking, Sensing, and Control, 2006.
[5] Y. Rachlin, J. Dolan, and P. Khosla, "Efficient mapping through exploitation of spatial dependencies," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.
[6] P. Ogren, E. Fiorelli, and N. Leonard, "Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment," IEEE Transactions on Automatic Control, 2004.
[7] A. Bertozzi, M. Kemp, and D. Marthaler, "Determining Environmental Boundaries: Asynchronous Communication and Physical Scales," in Cooperative Control, 2005.
[8] F. Zhang, E. Fiorelli, and N. Leonard, "Exploring scalar fields using multiple sensor platforms: Tracking level curves," 46th IEEE Conference on Decision and Control, 2007.
[9] K. Dantu and G. Sukhatme, "Detecting and Tracking Level Sets of Scalar Fields using a Robotic Sensor Network," IEEE International Conference on Robotics and Automation, 2007.
[10] R. K. Williams and G. S. Sukhatme, "Cooperative Multi-Agent Inference Over Grid-Structured Markov Random Fields," in IEEE/RSJ International Conference on Intelligent Robots and Systems (to appear), 2011.
[11] B. Maccherone. (2011) MODIS Website. [Online]. Available: http://modis.gsfc.nasa.gov/
[12] Y. Xiang, Probabilistic Reasoning in Multi-Agent Systems: A Graphical Models Approach. Cambridge University Press, 2002.
[13] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
[14] H. K. Khalil, Nonlinear Systems (3rd Edition). Prentice Hall, 2001.
[15] M. De Gennaro, L. Iannelli, and F. Vasca, "Formation Control and Collision Avoidance in Mobile Agent Systems," Proceedings of the IEEE International Symposium on Intelligent Control, 2005.
[16] G. Strang, Linear Algebra and Its Applications. Brooks Cole, 2005.
