A Cooperative Architecture for Target Localization with Underwater Vehicles∗

Assia Belbachir, Félix Ingrand, Simon Lacroix
LAAS/CNRS, Toulouse University, France
e-mail: <firstname.lastname@laas.fr>

Abstract — We are interested in designing and implementing a software architecture for a fleet of underwater and surface vehicles which allows them to be simultaneously cooperative and autonomous. To provide the required autonomy to these vehicles, we build upon an existing system (T-REX, developed at MBARI) which provides an embedded planning and execution control framework. We extend this architecture to allow vehicles to communicate and exchange perception data, which may lead them to modify their trajectories to improve their mission outcomes. The type of scenarios we consider is target localization (plume, algae bloom, etc.). We present some preliminary results obtained on an ad hoc simulator, as well as results obtained on an IFREMER simulator.

1 Introduction

There is a growing research interest in using Autonomous Underwater Vehicles (AUVs) to better understand deep-sea biological and chemical phenomena, as well as the effects the human way of life has on them. This need has pushed the development of new technologies to design more efficient and more autonomous underwater vehicles. In the context of this paper, autonomy refers to the "decisional autonomy" of such vehicles, i.e. the capability to make decisions in uncertain, varying and mostly unknown environments.

A more recent concern in this area is to consider not just one vehicle, but a fleet of cooperating vehicles (Autonomous Underwater Vehicles (AUVs), Autonomous Surface Vehicles (ASVs), etc.). Indeed, multiple robots (with heterogeneous capabilities) have several advantages over a single-robot system [5]. Cooperative robots, when properly managed, have the potential to accomplish tasks faster and better than robots evolving independently. Numerous studies propose interesting architectures for robot cooperation. Nevertheless, most of them focus on terrestrial robots, for which the communication among vehicles is reliable and permanent. For underwater vehicles, it is a different story: the acoustic properties of water greatly restrict the communication range and bandwidth [14]. Indeed, communication between two vehicles can only occur when they are inside a vertical acoustic cone with respect to each other. As a result, the general robot cooperation problem has to be reconsidered to take this particular constraint into account. In this study, we are interested in exploration scenarios, where a fleet of vehicles surveys an unknown zone and tries to localize a target (plume, hot spring, etc.) as fast as possible. For this, we want to use several underwater vehicles that cooperate by sharing their information under communication constraints.

We consider different types of vehicles, AUVs and ASVs, each with a specific role. The AUVs have the goal of localizing the target and must communicate with the ASV at predefined points (to report and share data with the others). Using the ASV as a communication hub greatly reduces the AUVs' energy consumption, as they do not need to surface. The ASV moves to the predefined points, communicates, and then redistributes the collected information among the other AUVs at the next communication.

∗ Part of this work is supported by IFREMER (Institut Français de Recherche pour l'Exploitation de la MER), CNRS (Centre National de la Recherche Scientifique) and AWI (Alfred Wegener Institute for Polar and Marine Research).

Communication rendezvous restrict the ability of the AUVs to explore the area as they wish. Still, the data obtained from the other vehicles, when correlated with their own, can greatly help them localize the target as soon as possible. This paper describes a mono-robot and a multi-robot scenario. In both cases, we extended the T-REX architecture to allow for opportunistic goal posting and to permit some level of cooperative exploration. We have implemented the proposed architecture in a simulation environment to check and validate our approach.

The paper is organized as follows. The first section presents some previous work related to our context. We then describe the T-REX architecture, on top of which we implemented our system. The following section describes the proposed extensions for our cooperative architecture. Finally, we describe the experimentations we have done so far, and then we conclude and present the future work we plan to achieve.

2 State of the Art

Many studies in the robotics field deal with the problem of exploring unknown environments. We can consider two types of approaches: data driven and task driven.

Data driven approaches usually consider that the vehicle chooses its next exploration step according to its sensing. David R. Thompson et al. [18] propose an algorithm that drives the exploration of a robot using the pictures it has collected. G. Dudek et al. [6] represent the environment by a graph; the vehicle chooses the next cell to explore according to a protocol. Other researchers use a technique based on the quantity of pheromone sensed in the environment to track targets [7]: each vehicle deposits and/or evaporates pheromone, and according to this quantity the vehicles choose the next cell to explore. In another area, Wolfram Burgard et al. [5] propose an algorithm where each vehicle computes a utility and a cost to go to a target, the utility being derived from the distance to that target; in the end, each vehicle is assigned to the nearest target. These approaches are interesting when one deals with a single task (exploration, localization of a target, etc.), although the notion of cooperation (being at the same area at the same time, e.g. to communicate) is not taken into account.

Task driven approaches allow the ordering of given goals (multiple tasks). These approaches can generate new plans to allow the robot to perform the specified goals. The generation of new plans is done by a planner, and several types of planners exist in the literature (IxTeT [11], used for terrestrial robotics; HATP [2], applied to human-robot interaction; EUROPA [8], used for space exploration; ...). Anthony Stentz et al. [16] and Dani Goldberg et al. [9] propose a market-based technique that allocates tasks to different robots. In [17], the authors consider the problem of disaster mitigation in the RoboCup Rescue Simulation [10] to allocate tasks; they use a greedy search method to explore the world, where the robot has to visit each node the least number of times. These works take the collaboration between vehicles into account, but in general they neglect two aspects:

• These approaches are task oriented. If the strategy of exploration is unknown, then the predefined tasks cannot perform the exploration.

• The cooperation between vehicles mostly assumes high-bandwidth communication, while in our case (the ocean) the communication between vehicles is restricted because of the acoustic properties of water.

To deal with these constraints, we propose to deploy:

• Some decisional autonomy on each vehicle. Every vehicle is able to accomplish its tasks independently (task driven) and to modify its trajectory to collect more information about the target (data driven), if necessary.

• Some cooperation between vehicles. Each vehicle shares its collected information with the others, at fixed points called "rendezvous", to jointly localize the target as quickly as possible.

3 The T-REX Architecture

To endow each vehicle with some autonomy, we have used an architecture for planning and execution control


called T-REX [12], originally developed at MBARI (Monterey Bay Aquarium Research Institute). This architecture provides each vehicle with the capability to plan and to execute its plan on board. T-REX incorporates planners that can cope with different planning horizons and deliberation times. A T-REX agent is divided into several layers called "reactors" (R = {r1 . . . rn}). Each reactor can be deliberative or reactive, depending on its horizon and deliberation time. The basic data structures used in T-REX are timelines (L). A timeline represents the evolution of a state variable over time, as a set of ordered, mutually exclusive activities called tokens (τ(L)).

Example 1: holds(Location, 10, 20, Going(Hill, Lander)) means that the robot will move from the Hill to the Lander between 10 ut and 20 ut (ut = unit of time). Location is a timeline, and Going(Hill, Lander) from 10 to 20 is a token. Temporal constraints can be associated with tokens. These constraints can be compatibilities (temporal constraints taken among the thirteen temporal relations of Allen [3]) or guards (conditional compatibilities similar to the conditional statements of traditional programming languages). Figure 1 represents the timeline used for Example 1. To perform a "move", the robot has to be at the "Hill" position beforehand; the related constraint is "met by", and it is a compatibility. At the end, the robot should be at the "Lander" place; the associated constraint is "ends": if the activity Going(Hill, Lander) starts, then the robot has to be at the "Lander" place at the end of the "Going". This constraint is a guard. All these components (timelines, activities and constraints) are implemented in NDDL (New Domain Description Language) [4], developed by NASA.

Figure 2: The description of the T-REX Architecture.

In T-REX, each reactor is composed of (see Figure 2):

• A database: a data structure which holds the current plan. This database provides the planner with all the target goals and the constraints between timelines.

• A planner: used to populate timelines with tokens according to what is defined in the database. Each reactor has a planner with a different "planning horizon" λr and "look-ahead" π. The lower a reactor sits in the architecture, the more reactive it is, and vice versa. In the current implementation, T-REX uses EUROPA [8] as the planner.
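To make the timeline and token vocabulary concrete, here is a minimal Python sketch of Example 1 (illustrative only: T-REX describes these structures in NDDL, and the `Timeline`/`Token` names and the overlap check below are our own):

```python
from dataclasses import dataclass

@dataclass
class Token:
    """An activity holding on a timeline over [start, end], in ut."""
    start: int
    end: int
    value: str          # e.g. "Going(Hill, Lander)"

class Timeline:
    """The evolution of one state variable, as ordered, mutually
    exclusive tokens (two tokens may never overlap in time)."""
    def __init__(self, name):
        self.name = name
        self.tokens = []

    def add(self, token):
        # Enforce mutual exclusion: reject any token overlapping an existing one.
        for t in self.tokens:
            if token.start < t.end and t.start < token.end:
                raise ValueError("tokens on a timeline are mutually exclusive")
        self.tokens.append(token)
        self.tokens.sort(key=lambda t: t.start)

# Example 1: holds(Location, 10, 20, Going(Hill, Lander)), with the robot
# at the Hill beforehand ("met by") and at the Lander when Going ends.
location = Timeline("Location")
location.add(Token(0, 10, "At(Hill)"))
location.add(Token(10, 20, "Going(Hill, Lander)"))
location.add(Token(20, 30, "At(Lander)"))
```

The mutual-exclusion check is what makes a timeline a single state variable: at any instant, at most one token describes its value.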

Figure 1: An example that illustrates the use of timelines, activities and constraints in T-REX.

• A dispatcher: allows goals (Gr) to be dispatched to other reactors. This management is done at each unit of time, called a tick.

• A synchronizer: coordinates the observations (Or) coming from other reactors. Those observations are carried by external timelines (Er), which can be observed but cannot be modified by the reactor.

T-REX has been used extensively on one AUV at MBARI to plan its missions and to control it [13]. In that particular setup, the architecture is composed of two reactors (see Figure 3).

Figure 3: The T-REX Architecture used at MBARI [13].

The Mission Manager reactor provides high-level reasoning to satisfy all the mission goals. It is considered as the deliberative reactor: its look-ahead is the whole mission. The second reactor is the Executive reactor. It receives goals from the Mission Manager and plans in order to send behaviors to the functional level of the vehicle. The planning horizon of this reactor is smaller than that of the Mission Manager, which is why it is considered as a reactive reactor.

4 Decisional Autonomy of Each Vehicle

In the scenario we consider, the main objective is to localize a target. We assume that the vehicle has predefined goals to achieve (way points, communication rendezvous). Those goals define the strategy of exploration. To improve this strategy, the vehicle has to reason on its perceived data.

We introduce a new reactor in the T-REX architecture which allows getting up-to-date information on the vehicle state and modifying the trajectory if necessary. This MapReactor is the component that takes the perception of the vehicle into account and generates new goals for the Mission Manager. Two questions arise from using this reactor: the first one is when to generate a goal, and the second is how to compute it.

The proposed idea for generating a goal is based on uncertainty management: the more the vehicle is uncertain about the collected data, the more it has to (re)explore the corresponding zone¹. We suppose that the vehicle, using its sensors (CTD: Conductivity, Temperature and Depth), is able to compute a probability of its proximity to the target (e.g. a hot spring). A threshold, based on the notion of entropy, is defined to decide when the vehicle has to return around ambiguous zones.

¹ To redo the exploration of the same zone, the vehicle has to compute a new goal.

To reduce the time needed to localize the target, we use cooperative vehicles. A dedicated reactor, called the CoopReactor, gets sensory data from the other vehicles and passes them to the MapReactor.

Figure 4 shows our proposed architecture, where each reactor has a specific role. The first two reactors are similar to the ones proposed by MBARI; the last two are our contribution to the addressed problem:

• The Mission Manager is the deliberative reactor. It manages the high-level goals of the whole mission.

• The Executive is the reactive reactor. It takes into account all the goals generated by the Mission Manager, plans, and sends behaviors to the vehicle controller.

• The MapReactor has the role of generating opportunistic goals. Those goals can be taken into account by the Mission Manager.

• The CoopReactor is the cooperative reactor, which communicates with the other vehicles whenever communication is possible.

Figure 4: The general proposed architecture.

To allow the vehicle to change its strategy of exploration, we have to represent the perception of the vehicle (see § 4.1.1), measure the threshold that triggers the computation of a goal (see § 4.1.2) and finally find a function to compute the goal (see § 4.1.3).

4.1 MapReactor

The MapReactor (Figure 5) is composed of a synchronizer, a database and a dispatcher, where the synchronizer and the dispatcher run at the same frequency (tick). The database contains a grid, called Grid_perception, which represents the map of the whole exploration zone. The vehicle changes its strategy of exploration (by generating goals) when it accumulates perceptions that are ambiguous. This ambiguity means that the vehicle cannot definitely decide whether the explored zone "contains" a hot spring or not.

Figure 5: The components of the MapReactor, using the structure of a reactor.

4.1.1 Representation of the Vehicle Perception

We represent the collected perception in a two-dimensional grid, as follows:

G_perception(X, Y) = (EXPLO, V_explored)

Where X, Y ∈ D and EXPLO, V_explored ∈ [0, 1]; D is the mission coverage domain.

When a vehicle explores an area (a cell of the grid), the explored flag (EXPLO) becomes 1 (meaning that the zone has been explored by the vehicle); otherwise EXPLO keeps its default value 0. V_explored represents the probability, sensed by the vehicle, of being near a hot spring. This probability is computed from the data collected by the CTD: a predefined model is used to map each perception to some probability value. For example, in [13] the authors use a Hidden Markov Model to build such a probability map. We assume in our experimentations that the probability map is given for sufficient values of the CTD sensor.
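A minimal sketch of this grid in Python (the `PerceptionGrid` class and `mark_explored` method are illustrative names, not the actual implementation):

```python
class PerceptionGrid:
    """G_perception: maps a cell (X, Y) to [EXPLO, V_explored], where
    EXPLO becomes 1 once the cell has been explored (0 by default) and
    V_explored in [0, 1] is the sensed probability of being near the target."""
    def __init__(self, width, height):
        self.cells = {(x, y): [0, 0.0] for x in range(width) for y in range(height)}

    def mark_explored(self, x, y, probability):
        # One CTD-based observation: flag the cell and store the probability
        # produced by the (assumed, predefined) sensor model.
        assert 0.0 <= probability <= 1.0
        self.cells[(x, y)] = [1, probability]

grid = PerceptionGrid(4, 4)
grid.mark_explored(1, 2, 0.5)   # ambiguous cell: P(near) == P(far)
```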

4.1.2 Measure the Threshold to Compute the Goal

Depending on the sensed value V_explored, the vehicle computes an entropy value that measures its certitude of being near a hot spring. The entropy depends on two computed probabilities: the probability of being close to a hot spring and the probability of being far from it.

When the two computed probabilities are equal (0.5), the entropy reaches its maximum value, which indicates that the vehicle is in an ambiguous state. When the vehicle gets this equiprobable value, it has to compute new goals to reduce the ambiguity.

Nevertheless, the dimension of the cells of the grid affects the decision of the vehicle: the smaller a cell is, the less conclusive the information obtained from the entropy value. For example, if a cell of the grid has a dimension of 5 m * 5 m (i.e. a small cell) and the vehicle gets a probability of 0.5 of being near a hot spring, a new goal will be generated; however, this distance is too small to warrant reacting with a new goal. That is why we decided to use a threshold over several cells: each vehicle computes an average value of entropy over a maximum distance (MaxDist), where MaxDist is given by the operator of the mission. The value of entropy is computed as follows:

Entropy = \frac{\sum_{i=init}^{init+CellNber} entropy_i}{CellNber}    (1)

Where init is the initial cell where the vehicle is at the beginning of the mission and CellNber is the maximum number of cells to traverse before computing the entropy, with CellNber = DistMax / Dim; DistMax is given by the operator of the mission and Dim is the dimension of one cell of the grid.

4.1.3 Change the Strategy of Exploration

The goal (G) is computed as G_x and G_y according to the actual state of the vehicle along the x axis (S_x) and the y axis (S_y):

G:  G_x = S_x − α·Dim,   G_y = S_y − β·Dim    (2)

Where α and β are the numbers of cells the vehicle has to go back over. In our approach, the vehicle has to return near the first ambiguous cell.

4.2 The Cooperative Reactor

To allow cooperation between vehicles, each of them has to be at predefined rendezvous points to communicate with the ASV. Those rendezvous points allow the vehicles to exchange information. To manage those exchanges, we added a new reactor called the "CoopReactor". It is composed of a synchronizer, a dispatcher and a database; this database contains the predefined rendezvous points for the whole mission.

In our experimentations, a client/server protocol is implemented for the data exchange between vehicles. The exchanged data can be the explored grid or the failed tasks. All this exchanged information can help a vehicle to localize the target and to accomplish the tasks of the other AUVs.

Algorithm 1: The Proposed Algorithm.

RUN(R, τ, Π)
  if (τ ≥ Π) then
    Mission end
  end if
  Dispatch(R, τ)
  R′ ← SynchronizeAgent(R, τ)
  if (R′ == φ) then
    return
  end if
  // Deliberate in steps until done or clock transition
  δ ← τ + 1
  done ← ⊥
  if (R′ == MapReactor) then
    Grid_perception ← UpdateGrid(G_perception)
    Compute(fct_sufficiency, G_perception)
    if (fct_sufficiency < Val_sufficiency) then
      Goal ← ChangeStrategy
    end if
  else
    while ((δ > τ) ∧ (¬done)) do
      done ← Deliberate(R′, τ)
    end while
  end if
  while (δ > τ) do
    sleep
  end while
  RUN(R′, τ, Π)
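The ambiguity test of equation (1) and the relocation goal of equation (2) can be sketched as follows (a sketch under our own naming; the actual MapReactor operates on the T-REX plan database rather than on plain lists):

```python
import math

def cell_entropy(p_near):
    """Binary entropy of (P(near), P(far)); maximal (1.0) when p_near == 0.5."""
    h = 0.0
    for p in (p_near, 1.0 - p_near):
        if p > 0.0:
            h -= p * math.log2(p)
    return h

def window_entropy(probs, init, cell_nber):
    """Equation (1): sum of per-cell entropies over cells init .. init + CellNber,
    divided by CellNber (CellNber = DistMax / Dim in the paper)."""
    window = probs[init:init + cell_nber + 1]
    return sum(cell_entropy(p) for p in window) / cell_nber

def relocation_goal(sx, sy, alpha, beta, dim):
    """Equation (2): go back alpha (resp. beta) cells of size dim along x (resp. y)."""
    return (sx - alpha * dim, sy - beta * dim)

# A run of ambiguous readings (probabilities close to 0.5) yields a high
# average entropy, which would trigger a goal near the first ambiguous cell.
probs = [0.9, 0.5, 0.55, 0.5, 0.45]
avg = window_entropy(probs, init=1, cell_nber=3)
goal = relocation_goal(sx=2200, sy=1700, alpha=2, beta=2, dim=5)
```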

All the described reactors compose a general architecture of four reactors (see Figure 4). To manage those reactors, an algorithm is implemented (see Algorithm 1). Each reactor R has an execution time τ, and the total mission execution time is Π. If the actual execution time is greater than or equal to the whole mission time, then the T-REX agent exits. Otherwise, the reactor dispatches its computed goals and synchronizes the observations coming from the other reactors. If the reactor is the MapReactor, it first updates its observations on G_perception; it then computes a function based on equation (1) and compares the computed value to a threshold: a goal is generated when the value is smaller than the threshold. If the reactor has a deliberative component, it may deliberate until the end of the tick (τ+1).
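The control flow of Algorithm 1 can be sketched as an iterative loop (a simplified sketch: the stub reactor and all method names are ours, the recursion is replaced by iteration, and deliberation is stubbed out):

```python
class StubReactor:
    """Minimal reactor stand-in: records dispatch calls; the MapReactor
    variant carries a sufficiency value that drives goal generation."""
    def __init__(self, sufficiency=1.0):
        self._sufficiency = sufficiency
        self.dispatched = 0

    def dispatch(self, tick):             # Dispatch(R, tau)
        self.dispatched += 1

    def synchronize(self, tick):          # SynchronizeAgent(R, tau)
        return tick                       # pretend we always get observations

    def update_grid(self, observations):  # UpdateGrid(G_perception)
        pass

    def sufficiency(self):                # Compute(fct_sufficiency, ...)
        return self._sufficiency

    def change_strategy(self):            # ChangeStrategy
        return "opportunistic-goal"

    def deliberate_until(self, deadline):  # Deliberate(R', tau) until next tick
        pass

def run_agent(reactors, mission_end, map_reactor, threshold):
    """One pass over the mission: for every tick tau < Pi, each reactor
    dispatches its goals and synchronizes observations, then either updates
    the perception grid (MapReactor) or deliberates until the next tick."""
    goals = []
    for tick in range(mission_end):        # exit when tau >= Pi
        for reactor in reactors:
            reactor.dispatch(tick)
            observations = reactor.synchronize(tick)
            if observations is None:
                continue
            if reactor is map_reactor:
                reactor.update_grid(observations)
                # Opportunistic goal when the sufficiency falls under the threshold.
                if reactor.sufficiency() < threshold:
                    goals.append(reactor.change_strategy())
            else:
                reactor.deliberate_until(tick + 1)
    return goals

executive = StubReactor()
mapr = StubReactor(sufficiency=0.2)
goals = run_agent([executive, mapr], mission_end=3, map_reactor=mapr, threshold=0.5)
```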

5 Experiments

In our experimentations, we first used a simulator which is itself a reactor. This simulator reproduces the paths of the vehicles and their communication points. We have done two types of experiments. The first one uses one vehicle and shows how the vehicle can take opportunistic goals into account. The second one shows how vehicles can cooperate to localize the target.

5.1 Using One Vehicle

The vehicle is an AUV. We defined eight points (A, B, C, D, E, F, G and H) that represent the target positions for the vehicle. Those points are represented in Figure 6. Each point is defined along the three dimensions (x, y, z). In our case, all points are in the same plane and have the same depth (z = 10 meters), except the initial point, where the vehicle is at the surface (z = 0 meters). These are way points that the vehicle has to visit within a particular time interval. For example, for the target A, the vehicle has to be in A at the earliest at 4 ut and at the latest at 400 ut.

Figure 6: Initial predefined goals of the AUV.

Figure 7: The executed path of the AUV for the predefined goals.

Figure 7 shows the executed path of the AUV without using the MapReactor. This simulation result shows that the AUV can reach all the predefined points on time.

The second experiment has as objective to show that the vehicle can modify its path according to its perception. We used, in this simulation, a grid where each cell (x, y) has a predefined probability. This probability represents the proximity of the vehicle to the target (as the CTD sensor would measure it). Each vehicle computes an entropy function for each cell (x, y), where x and y represent the actual position of the vehicle. The entropy value computed over the whole mission of the AUV is shown in Figure 8; this figure has three dimensions: x, y and the entropy value computed using equation (1). When the vehicle exceeds a threshold value of entropy (here we chose 0.51), it generates a goal using equation (2). In this case, the generated goal is G1 = (2200, 1700, 10). Figure 9 shows that the generated goal G1 has been taken into account by the vehicle, while, at the same time, it is able to achieve all the predefined goals.

Figure 8: The entropy value computed by the MapReactor over the whole AUV mission.

Figure 9: The path of the AUV taking the opportunistic goal into account.

5.2 With Multiple Vehicles

To extend the obtained results toward cooperative vehicles, we now use one ASV and one AUV. Predefined rendezvous points are specified for each vehicle. The mission specifications of the AUV and of the ASV are given in Figure 10 and Figure 11.

Figure 10: The initial predefined points for the mission of the AUV.

Figure 11: The initial predefined points for the ASV.
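The timed way points used in these mission specifications (e.g. reaching A between 4 ut and 400 ut in § 5.1) can be represented, for instance, as follows (an illustrative sketch; the x and y coordinates below are made up):

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    """A predefined goal: a 3-D point with a visit window expressed in ut."""
    name: str
    x: float
    y: float
    z: float            # depth in meters
    earliest: int
    latest: int

def visit_ok(wp, t):
    """True iff visiting waypoint wp at time t respects its time window."""
    return wp.earliest <= t <= wp.latest

# Target A from section 5.1: depth 10 m, window [4 ut, 400 ut]
# (x and y are hypothetical coordinates, for illustration only).
A = Waypoint("A", 500.0, 300.0, 10.0, earliest=4, latest=400)
```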

The results are shown in Figure 12. The dark lines (respectively the gray lines) represent the path of the ASV (respectively the AUV). This time, the AUV does not modify its trajectory as it did in the previous experiment (Figure 9). This is due to the rendezvous point: because of the limited amount of time for the overall mission, the vehicle chooses to reach the communication point rather than to explore.

Figure 12: The paths of two cooperative vehicles (the AUV in gray and the ASV in dark).

6 Conclusion and Future Work

In this article we presented a cooperative architecture for autonomous underwater vehicles. This architecture allows each vehicle to be autonomous and to use the information collected by the fleet of vehicles to localize the target. We used an existing architecture for planning and execution control (T-REX), which allows for deliberation as well as reactive behaviors. Two new reactors were implemented to improve, on the fly, the predefined strategy for area coverage exploration and to allow for cooperation.

The algorithm was tested on our testbed. The MapReactor was also tested using a simulator platform at IFREMER. This is a "hardware in the loop" simulator which uses the same controller as the one used on Asterix (one of IFREMER's AUVs). Figure 13 shows the mission executed by the vehicle using MIMOSA [15]. All the predefined goals were reached by the vehicle.

Several extensions to this work are being considered. One of them is to improve the generation of opportunistic goals: the vehicle has to compute, online and according to its perception, a predictive model of the environment. The quality of this model has a great influence on the quality of the new target goals generated. Another extension concerns a scenario with several vehicles (two AUVs and one ASV) that exchange data and decide which way to go for target localization. A comparison between the execution of the mission with one vehicle and with several vehicles using our strategy would let us assess the interest of using a fleet of vehicles. Last but not least, if the simulation tests are conclusive, we are definitely interested in implementing our approach on IFREMER platforms.

7 Acknowledgment

Thanks to Kanna Rajan and Frédéric Py (MBARI) and to Conor McGann (Willow Garage) for their help in using the T-REX architecture and for all the fruitful discussions, which greatly helped us in designing and setting up the global architecture of our system. Thanks to Jan Opderbecke and Michel Perrier (IFREMER) for helping us connect the system to their simulator.

Figure 13: The path of the vehicle, displayed on the IFREMER simulator.

References

[1] R. Alami, S. Fleury, M. Herrb, F. Ingrand and F. Robert, Multi-Robot Cooperation in the Martha Project, IEEE Robotics and Automation Magazine (Special Issue on "Robotics and Automation in the European Union"), vol. 5, no. 1, March 1998.

[2] Samir Alili, Rachid Alami and Vincent Montreuil, A Task Planner for an Autonomous Social Robot, Distributed Autonomous Robotic Systems 2008 (DARS 2008), November 17-19, 2008.

[3] J.F. Allen, Maintaining Knowledge about Temporal Intervals, Communications of the ACM, vol. 26, no. 11, November 1983, pp 832-843.

[4] S. Bernardini and D. Smith, Translating PDDL2.2 into a Constraint-based Variable/Value Language, ICAPS-08 Workshop on Knowledge Engineering for Planning and Scheduling (KEPS), 2008.

[5] Wolfram Burgard, Mark Moors, Cyrill Stachniss and Frank Schneider, Coordinated multi-robot exploration, IEEE Transactions on Robotics, vol. 21, 2005, pp 376-386.

[6] G. Dudek and D. Marinakis, Topological Mapping with Weak Sensory Data, AAAI National Conference on Artificial Intelligence, July 2007.

[7] Van Dyke Parunak and Sven Brueckner, Entropy and self-organization in multi-agent systems, AGENTS '01: Proceedings of the Fifth International Conference on Autonomous Agents, 2001, pp 124-130.

[8] Jeremy Frank and Ari Jónsson, Constraint-based attribute and interval planning, Journal of Constraints, Special Issue on Constraints and Planning, vol. 8, 2003, pp 339-364.

[9] Dani Goldberg, Vincent Cicirello, M. Bernardine Dias, Reid Simmons, Stephen Smith, Trey Smith and Anthony Stentz, A distributed layered architecture for mobile robot coordination: Application to space exploration, Proceedings of the 3rd International NASA Workshop on Planning and Scheduling for Space, 2002.

[10] H. Kitano, S. Tadokoro, I. Noda, H. Matsubara, T. Takahashi, A. Shinjoh and S. Shimada, RoboCup Rescue: Search and rescue in large-scale disasters as a domain for autonomous agents research, IEEE SMC, vol. VI, October 1999, pp 739-743.

[11] P. Laborie and M. Ghallab, IxTeT: an integrated approach for plan generation and scheduling, Proceedings of the 1995 INRIA/IEEE Symposium on Emerging Technologies and Factory Automation (ETFA '95), vol. 1, pp 485-495.

[12] C. McGann, F. Py, K. Rajan, H. Thomas, R. Henthorn and R. McEwen, T-REX: A Deliberative System for AUV Control, ICAPS, September 22-26, 2007.

[13] C. McGann, F. Py, K. Rajan, H. Thomas, R. Henthorn and R. McEwen, Preliminary Results for Model-Based Adaptive Control of an Autonomous Underwater Vehicle, 11th Int'l Symp. on Experimental Robotics (ISER), July 2008, Athens, Greece.

[14] Matthias Meyer and Jean-Pierre Hermand, Backpropagation techniques in ocean acoustic inversion: time reversal, retrogation and adjoint model – A review, Acoustic Sensing Techniques for the Shallow Water Environment, 2006.

[15] User's Manual, MIMOSA (MIssion Management fOr Subsea Autonomous vehicles).

[16] Anthony Stentz, M. Bernardine Dias, Robert Zlot and Nidhi Kalra, Market-Based Approaches for Coordination of Multi-Robot Teams at Different Granularities of Interaction, Proceedings of the ANS 10th International Conference on Robotics and Remote Systems for Hazardous Environments, 2004.

[17] Milind Tambe and Stacy Marsella, Task Allocation in the RoboCup Rescue Simulation Domain: A short note, RoboCup 2001: Robot Soccer World Cup V, vol. 2377, 2002, pp 1-22.

[18] David R. Thompson, Trey Smith and David Wettergreen, Information-Optimal Selective Data Return for Autonomous Rover Traverse Science and Survey, IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, May 2008, pp 1923.