Implementation of an Indoor Active Sensor Network

Alex Brooks, Alexei Makarenko, Tobias Kaupp, Stefan Williams, Hugh Durrant-Whyte
ARC Centre of Excellence in Autonomous Systems (CAS)
The University of Sydney, Australia
{a.brooks, a.makarenko, t.kaupp, s.williams, hugh}@cas.edu.au
http://www.cas.edu.au

Abstract. This paper describes an indoor Active Sensor Network (ASN), focussing on the implementation aspects of the system, including communication and the application framework. To make the system description more tangible, we describe the latest in a series of indoor experiments implemented using ASN. The task is to detect and map the motion of people (and robots) in an office space using a network of 12 stationary sensors. The network was operational for several days, with individual platforms coming on and off line. On several occasions the network consisted of 39 components. The paper includes a section on the lessons learned during the project's design and development, which may be applicable to other heterogeneous distributed systems with data-intensive algorithms.

1 Introduction

A large number of autonomous sensing platforms connected into a network promise better spatial coverage, higher responsiveness, survivability, and robustness compared to a single-vehicle solution. The modular design may also lead to lower costs, despite the increase in complexity of the overall system. The need for such systems exists in many applications involving tasks in which timely fusion and delivery of heterogeneous information streams is of critical importance. Examples include military and civilian surveillance, fire fighting, intelligent buildings, etc.

The Active Sensor Network (ASN) project at the University of Sydney aims to combine decentralized data fusion and control algorithms into a unified yet flexible system architecture suitable for a wide range of sensing tasks. The ASN can be described along three dimensions: the architecture [16], the algorithms [17], and the concrete implementation, which is the focus of this paper.

The rest of the paper is organized as follows. Section 2 describes related work. Section 3 provides a brief background on the ASN project. The experiment and implementation details are covered in Section 4. Section 5 discusses lessons (positive and negative) learned during the system development, and comments on current and future work based on these lessons.

2 Related Work

As a rule, distributed systems are more complicated than monolithic ones. To keep the system implementation tractable, our approach employs two design principles: the use of components, which enforces modularity and enables code re-use, and the use of middleware, which hides the complexity of component interaction.

Requirements                µSN   MSN   MRS
Small unit size              X
Long mission duration        X     X
Large team size              X     X
Heterogeneous team                 X     X
High information accuracy          X     X
Low information latency                  X

Table 1. Non-functional requirements for different SN categories. Those having a direct impact on the implementation approach are highlighted.

There has been a strong trend towards using both of these approaches in robotics, but different communities within the Sensor Network (SN) field use them to different degrees. We find it convenient to view SN research as broadly divided into three categories: multi-robot systems (MRS), macro SN (MSN), and micro SN (µSN). The divisions are determined by application domains, expressed as non-functional requirements in Table 1. Different drivers naturally lead to very different solution approaches.

µSN. For various domain-specific reasons, small unit size is considered a requirement in these systems. In their most extreme form factor, these systems are known as "smart dust" [12]. Until recently, most work had been done in simulation, but with the advent of commercially available Berkeley Motes some experiments have been performed [2] [8]. A typical deployment involves a very large number of small identical units designed by a single team. Combined with limited processing capacity, this makes the need for modularity secondary to efficiency (primarily in terms of energy consumption). Severe hardware constraints also preclude the use of standard middleware (as well as operating systems, communication stacks, etc.). Specialized middleware is being developed by several groups [11].

MRS. Collecting environment information is not the main objective in these systems; rather, information gathering and sharing is performed as part of the domain-specific task (e.g. in RoboCup [4]). Platforms are often heterogeneous but the team size is small, so, just as with µSN, component-based design is of limited value [19]. The situation is beginning to change with experimental deployments reaching the 100-robot barrier: the Centibots project [13], for example, uses Jini, a general-purpose Java middleware, for inter-platform communication. Some modern robotic architectures, e.g. the Joint Architecture for Unmanned Systems (JAUS) [9], are defined in terms of components. JAUS is intended to be distributed and open, and could therefore in principle be extended to the sensor network level, but currently is not. It defines its own custom message-based middleware. OROCOS [1] [14] is also primarily intended for implementing single-vehicle architectures. It is component-based from the ground up and is built on top of a CORBA middleware implementation.

MSN. These networks are comprised of large numbers of capable nodes and are required to operate for long periods of time. The nodes are often required to process rich data streams or control arbitrarily complex actuators.
The power required for communication is very small compared with that for locomotion and active sensing. The extra processing power means that complex probabilistic data fusion algorithms can be executed. It also allows the use of standard software tools, including middleware. Large heterogeneous teams, likely to be designed by different organizations, strongly encourage modular component-based design. The system described in this paper is an example of an MSN; we are not aware of any other currently deployed MSNs described in the open literature. There is also substantial military interest in MSNs, where the doctrine of network-centric warfare aims at assembling a global-scale network of very capable platforms [3].

3 The Active Sensor Network Project

We seek a solution to the problem of distributed information gathering (DIG). We consider a distributed phenomenon which can be described by a state vector x. There is a set of heterogeneous robotic platforms equipped with sensors and actuators. There is also a set of operators who monitor the phenomenon, either directly using human senses or by interacting with the network through a user interface (UI). We think of all the entities, human and robotic, as members of a team.

The functional requirements of the DIG problem cover three broad areas: a) collecting information (which includes sensing, information fusion, information dissemination, and sensor management); b) interacting with human operators; and c) reconfiguring the system in dynamic environments. Some of the desirable qualities of a solution are network scalability, robustness to failure of individual components, information accuracy, etc. Additional constraints may limit design choices, e.g. platform size and energy budget, stealth of operation, privacy concerns, etc. It is unlikely that a single approach will be able to solve all possible permutations of this problem. Nevertheless, we are interested in a conceptual framework which is flexible enough to allow wide variations in non-functional requirements.

ASN is an architecture for cooperative autonomous sensing platforms [16]. Autonomy implies that a platform is able to work in isolation and does not rely on infrastructure services, remote control, or other external inputs. Cooperative means that the platforms share common goals and, when possible, work together to achieve them. Platforms are likely to have different capabilities, but each comes equipped with power, processing and communication facilities, sensors, and actuators. Each one fuses local observations with information communicated from neighboring nodes into a synchronized view of the world. Similarly, each one makes local control decisions based on knowledge of local platform capabilities and the global synchronized world view.

The ASN system is composed of software components communicating asynchronously with each other. Several component types are defined, each with specific roles. With respect to environmental information, a component can be a source (producer), a sink (consumer), or a fuser/distributor. Similarly, with respect to control commands, a component can be a source (decision maker) or a sink (controlled object).
A particular component can play several of these roles at once. To make the reference clear, the component types are capitalized throughout this paper.

The fundamental principle of the ASN architectural style is decentralization. Compared to a centralized or a distributed system, a decentralized system is characterized by two key constraints: a) no central services or facilities, and b) no knowledge of global topology. The resulting system offers a number of advantages over other architectures. Scalability: the computational and communication load at each node is independent of the size of the network. Robustness: no element of the system is mission-critical, so the system is survivable in the event of run-time loss of components. Modularity: components can be implemented and deployed independently of each other. The Bayesian Decentralized Data Fusion (BDDF) algorithm and its extension to information-based control are described in [17].
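To make the role taxonomy concrete, the following minimal C++ sketch expresses the information roles as interfaces. A push-style data flow is assumed, and the class and method names are illustrative only; they are not taken from the ASN code base.

```cpp
#include <vector>

// Placeholder for a feature estimate in global coordinates.
struct Observation {};

// A component that produces environmental information (e.g. a Sensor).
class InformationSource {
public:
    virtual ~InformationSource() = default;
    virtual Observation produce() = 0;
};

// A component that consumes environmental information (e.g. an operator UI).
class InformationSink {
public:
    virtual ~InformationSink() = default;
    virtual void consume(const Observation& obs) = 0;
};

// A fuser/distributor (e.g. a Node) plays both roles at once: it consumes
// observations, fuses them into its world view, and re-distributes the result.
class FuserDistributor : public InformationSource, public InformationSink {};
```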

4 Implementation

4.1 The Experiment

To make the system description more tangible, we describe the latest in a series of indoor experiments implemented using ASN. The aim of the experiment is to use a dynamically configured sensor network to monitor motion in an office environment over an extended period of time. Figure 1(a) shows the two types of physical sensors used to detect motion: video cameras and laser scanners. The sensor network was deployed by hand so as to achieve reasonable coverage, as shown in Figure 1(b).

Each sensing platform has a processor and executes three software components: a Frame, a Sensor, and a Node. The Frame is responsible for estimating its position in the global coordinate system. In these experiments the platform poses were surveyed manually and specified on initialization. The Sensor is responsible for reading and pre-processing measurements from the sensing hardware and generating observations using a model of the physical sensor. The pose required to transform local range-bearing measurements into global observations is obtained from the Frame. The Node is responsible for maintaining and sharing a view of the global state using decentralized data fusion algorithms. The estimate is updated using observations from Sensors and information communicated by other Nodes.

The global state of the world is stored in a Decentralized Certainty Grid representation [18], a decentralized version of the original [5]. The space is represented by a grid large enough to cover the 30x25 m building. Each cell contains the probability that something is moving in the 30x30 cm part of the office it represents. Observations increase the certainty of motion in the cell. In the absence of observations, the certainty of the cell state gradually decreases (the entropy of the distribution increases).

When most of the sensing platforms have been positioned, the first Node is started, initializing its certainty grid with an uninformative prior. This moment marks the beginning of Network Up-Time.

Fig. 1. Indoor sensor network: (a) sensor hardware and (b) deployment throughout the office: cameras are shown as triangles, lasers as squares.

Because new Nodes are supplied with the current estimate of the world state on start-up, this is the moment from which no information will be lost as long as at least one Node remains operational.

One or more operators may monitor the network's operation using a Graphical User Interface (GUI). The world view of the GUI, shown in Figure 2(a), displays the state of the world as seen from the platform to which the GUI is connected. The motion grid, from the point of view of the Node in the lower-right corner, is shown in shades of gray and red. The dark red color indicates higher certainty of motion in that cell. The large dark red blob near the centre of the image corresponds to a meeting in progress. The floor plan of the building is overlaid on top of the motion map for clarity. The network view of the GUI, shown in Figure 2(b), shows the topology of the network, which defines communication paths. Blue circles indicate Nodes, and blue lines indicate connections between Nodes. Red squares indicate Frames and yellow triangles indicate Sensors. The right-hand panel shows details of the connections of an individual component.
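As an illustration of the certainty-grid arithmetic described in this section, the following is a minimal C++ sketch assuming a log-odds cell representation with exponential decay toward the uninformative prior. The class, method, and parameter names are illustrative; the actual Decentralized Certainty Grid algorithm is given in [18].

```cpp
#include <cmath>
#include <vector>

class MotionGrid {
public:
    MotionGrid(int w, int h) : width_(w), logOdds_(w * h, 0.0) {}

    // Fuse an observation: p is the sensor model's probability of motion
    // in this cell given the measurement, with 0 < p < 1.
    void update(int x, int y, double p) {
        logOdds_[y * width_ + x] += std::log(p / (1.0 - p));
    }

    // Called periodically; shrinks every cell's log-odds toward zero
    // (p = 0.5), so certainty decays and entropy increases over time.
    void decay(double lambda /* 0 < lambda < 1 */) {
        for (double& l : logOdds_) l *= lambda;
    }

    double probability(int x, int y) const {
        double l = logOdds_[y * width_ + x];
        return 1.0 / (1.0 + std::exp(-l));
    }

private:
    int width_;
    std::vector<double> logOdds_;
};
```

For the 30x25 m building at 30x30 cm resolution, such a grid would hold roughly 100 by 84 cells; calling decay() periodically implements the gradual loss of certainty in the absence of observations.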

4.2 Application Framework

ASN is an application framework implemented as a set of C++ libraries on Linux platforms. Currently the system consists of 250 classes implemented in roughly 50,000 lines of code. Included in this are approximately 40 classes implementing the framework, which provides the base functionality and infrastructure. The bulk of the generic functionality provided by the library is in automatic system configuration: establishing, maintaining, and re-establishing service connections between the components. The rest of the code implements specific component types. The algorithms for data fusion, sensor management, and platform control were largely reused from previous projects.

Fig. 2. A view of the sensor network in operation, as seen from the operator’s GUI: (a) the view of the environment and the platforms and (b) the topology of the network.

A lot of effort was saved by across-the-board reliance on the hardware abstraction and device-level simulation provided by Player/Stage [7], an open-source project originating in the USC robotics group.

Each component is encapsulated within a separate process. Altogether, 24 different component types have been or are being implemented. Five Frames: a stationary "box", a simulated Stage platform, a mobile self-localizing Pioneer, a self-localizing stationary camera, and a pan-tilt-zoom camera. Ten Sensors: a partially filled matrix of three sensor modalities (laser, vision, and sonar), three feature types (stationary target, occupancy, and motion), and three feature representations (Gaussian points, Certainty Grid, and grid-based general pdf). Three Nodes: one for each feature representation. Three Controllers: exploration using an occupancy grid, point-feature information surfing, and Bayesian search. Two Operator UIs: desktop and hand-held versions.

4.3 Inter-Component Communications

ASN uses message-based communication. A set of messages is defined, decipherable by all components according to a MessageType field in the message header. Messages are implemented as C++ classes responsible for their own marshalling and de-marshalling.

Similar to [20], we identify three communication types needed to implement our architecture: 1) global; 2) local; and 3) point-to-point. Global distribution is facilitated by the Nodes and is implemented using component-to-component communications. It is used to propagate environment information, team control priorities, and other information with global scope. Local distribution uses broadcast, which may be constrained by the use of communication channels.
Components subscribe to channels organized by information type. This method is used for run-time system configuration. Addressed point-to-point communication forms the backbone of our system; it is used in sensor-to-node, node-to-node, and most other links.

All three communication types are facilitated by a single library, which works as follows. Each host runs a daemon process which is responsible for message routing. On start-up, component processes register with the daemon and specify the channels to which they want to listen. Daemons maintain a table of other daemons within communication range and the channels to which they subscribe. This is done using a custom low-bandwidth broadcast protocol. The daemon is indistinguishable from any other communicating process except that it provides a "gateway" or forwarding facility to other remote hosts.

Messages are sent from one component process to another in the following manner. If the destination component is local, the process simply writes its message contents to the addressee's shared memory. If the message is inter-host, the process first writes to the daemon's shared memory; the daemon then sends an addressed UDP packet to the daemon on the receiving host, which forwards it to the appropriate process. Unaddressed (broadcast) messages are sent to the daemons in the neighborhood subscribing to the particular channel, which forward them to all subscribing processes on the same host.
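As an illustration of the self-marshalling messages described in this section, the following hedged C++ sketch assumes a fixed-width wire format in network byte order. The message name, fields, and layout are invented for the example; the actual ASN message definitions are not given in the paper.

```cpp
#include <arpa/inet.h>  // htonl, ntohl
#include <cstdint>
#include <cstring>
#include <vector>

struct ObservationMsg {
    uint32_t messageType;  // read first by the receiver to pick a decoder
    uint32_t cellX, cellY; // hypothetical payload: a grid-cell observation

    // Serialize to a 12-byte buffer in network byte order.
    std::vector<uint8_t> marshal() const {
        std::vector<uint8_t> buf(12);
        uint32_t fields[3] = { htonl(messageType), htonl(cellX), htonl(cellY) };
        std::memcpy(buf.data(), fields, sizeof fields);
        return buf;
    }

    // Reconstruct from a received buffer (caller checks length and type).
    static ObservationMsg demarshal(const uint8_t* data) {
        uint32_t fields[3];
        std::memcpy(fields, data, sizeof fields);
        return { ntohl(fields[0]), ntohl(fields[1]), ntohl(fields[2]) };
    }
};
```

A receiver would read the MessageType field first and dispatch to the matching demarshal routine, mirroring the header-based decoding described above.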

5 Lessons Learned

Implementation of any design inevitably highlights its strengths and weaknesses. The purpose of this section is to share some of the insights gained by the project team during this process, by outlining what worked, what did not work, and what direction the project is taking as a result of this experience.

5.1 What Worked

Our experience has validated the benefits of an open, decentralized system architecture. Such systems have a higher implementation entry barrier, but once the basic software infrastructure is in place, it becomes relatively easy to extend the system both physically and functionally. Increasing system capability by adding new platforms is as simple as starting up the necessary components; all connections are made automatically. The system is capable of covering the entire building and supplying enough processing power to process images from many cameras in real time. The system's performance degraded gracefully with inevitable node failures, and the system outlived any given Node. The experiment also demonstrated scalability in terms of the communication requirements at any particular Node.

Breaking up the system into independent interacting components was also a success. This approach allowed us to design, implement, and debug components independently from each other. The message-based middleware approach proved quite adequate for this task. Component-based design also allowed great flexibility in system deployment: each component can execute on any host as long as it is connected to the network. Implementation of this feature did not require much effort due to the use of a location-transparent communication mechanism.

5.2 What Did Not Work

The ASN architecture, as it stands, is fairly inflexible. A small set of fundamental component types is defined, and links between components are defined in terms of the component type on either end. This highly prescriptive architecture is tailor-made for the current system design; however, it does not easily generalize to other problems. A more flexible and extensible approach is to define links between components in terms of services, and to leave the granularity of component implementation up to the designer. This way of structuring the problem, with reference to the ASN framework, is described further in [16]; a sketch of such service-typed links appears at the end of this section.

More importantly, it transpired that a number of issues were hidden behind the minimalist framework requirements given in Section 5.1 [6]:

Marshalling and De-Marshalling. Complex objects need to be converted to serial form before transmission over a network, then re-assembled on receipt. Additionally, the byte order for basic data types depends on the host architecture. The task of writing these functions is tedious and error-prone, and must be done for each and every object type unless a general solution is identified. This is particularly difficult for variable-length objects or objects with complex internal structure.

Naming and Addressing. ASN had to design a system for assigning unique IDs, with no central ID server, and for mapping from names to IDs. These IDs then need to map to the IP address of a host and a process ID on that host.

Sending Large Objects. The ASN communication mechanism cannot re-assemble fragmented datagrams. Components therefore need to ensure their messages are small enough, or deal with fragmentation and re-assembly themselves.

Routing Across Subnets. Since ASN relies on broadcast UDP, this is not possible without designing a special ASN message router.

Type Safety. ASN has no mechanism to ensure that operations requested on a server actually exist on that server, or that the parameters supplied by a client have matching definitions on the server. This is the source of many potential bugs.

Communication Patterns. Both synchronous and asynchronous communication patterns are usually needed. ASN had to design these from scratch.

OS and Language Independence. Since ASN components communicate through a well-defined set of messages, in principle a client could be written in any language on any platform. In practice, writing a Java client would involve substantial effort, including re-implementing solutions to all the problems discussed. This barrier would be lowered if the application framework and the communications framework were not so tightly coupled.

Automatic Service Discovery. ASN offers a simple ping-reply mechanism for components to connect dynamically. When a component starts up, it broadcasts a special message which specifies which of the pre-defined component types it is looking for. A more complete mechanism would allow components to advertise the services they require.

The ASN solutions to these issues involved a great deal of work; the solutions are incomplete and the implementations sometimes buggy. The primary lessons learned from the implementation of this project are an appreciation for the size of the task and a realization that re-inventing the wheel is unnecessary.
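As a sketch of the service-typed links suggested at the start of this section, the following C++ fragment defines a link endpoint by a named service interface rather than by a component type. The interface and method names are hypothetical, not part of the ASN framework.

```cpp
#include <string>

// A service is identified by an interface name rather than by the
// component type at the other end of the link.
class Service {
public:
    virtual ~Service() = default;
    virtual std::string interfaceName() const = 0;
};

// Example: any component able to supply a global pose implements this,
// regardless of whether it is a Frame, a simulator, or something else.
class PoseProvider : public Service {
public:
    std::string interfaceName() const override { return "PoseProvider"; }
    virtual void getPose(double& x, double& y, double& theta) = 0;
};
```

Under such a scheme, a connection is valid whenever the interface names match, leaving the granularity of component implementation to the designer.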

5.3 Future Directions

The challenge of building large distributed applications is not unique to robotics or sensor networks. It has been the subject of intense study in the field of component-based software engineering [10]. A practical solution has emerged in the form of middleware: "a layer between network operating systems and applications that aims to resolve heterogeneity and distribution" [6]. ASN has essentially designed its own middleware, built into the ASN application framework. Unfortunately, the task of designing a robust, flexible, efficient, and well-documented middleware package is not trivial. This fact is highlighted by the numerous deficiencies found in ASN and listed in Section 5.2. In light of this, it would be desirable, at least for some tasks, to be able to adopt a general-purpose middleware package.

In other aspects of hardware and software usage, the trend towards using off-the-shelf components and tools in mainstream robotics is already well established. This applies to processors, networking, serial buses, operating systems, and programming languages. We believe that in the MSN field it is possible to re-use all of the above plus the middleware, and to design only the specialized application code. In some applications, however, with the µSN field being a prominent example, the hardware, operating system, and communication stacks are still highly specialized. It is very likely that for those applications specialized middleware will be required as well.

In view of these lessons, the direction for future research is to re-design the ASN framework on top of a standard middleware package. Three general-purpose distributed component models exist today: OMG's CORBA, Microsoft's COM+, and Sun's Enterprise JavaBeans [15]. CORBA's distinct advantage is that it is language-agnostic, runs on Linux, and has open-source and real-time implementations. It is anticipated that the adoption of a standard middleware package will reduce the effort required to implement communications and to achieve inter-operability of components, producing shorter development cycles and promoting code re-use. In addition, the adoption of standard software will lower the barrier to sharing code between institutions, making collaboration easier.

6 Conclusion

This paper has described a long-term, large-scale experiment with an indoor sensor network, paying particular attention to how the system was implemented. The implementation brought to light some of the complexities that must be dealt with when building distributed, and particularly decentralized, systems. As a result of this work, future effort will be directed towards examining the use of standard middleware packages for sensor networks.

Acknowledgement

This work is supported by the ARC Centre of Excellence programme, funded by the Australian Research Council (ARC) and the New South Wales State Government. The authors would like to thank Matt Ridley for his communications library and continual Linux support.


References

1. H. Bruyninckx. Open robot control software: the OROCOS project. In IEEE ICRA, volume 3, pages 2523–28, 2001.
2. A. Cerpa, J. Elson, D. Estrin, L. Girod, M. Hamilton, and J. Zhao. Habitat monitoring: application driver for wireless communications technology. SIGCOMM Computer Comm. Review, 31(2 supplement):20–41, 2001.
3. Solipsys Corp. Tactical component network: Overview. White paper, www.solipsys.com, 2000.
4. M. Dietl, J.-S. Gutmann, and B. Nebel. Cooperative sensing in dynamic environments. In IEEE/RSJ IROS, pages 1706–13, Maui, Hawaii, USA, 2001.
5. A. Elfes. Robot navigation: Integrating perception, environmental constraints and task execution within a probabilistic framework. In Int. Workshop on Reasoning with Uncertainty in Robotics, pages 93–129, Amsterdam, Netherlands, 1995.
6. W. Emmerich. Engineering Distributed Objects. John Wiley and Sons, Ltd., 2000.
7. B.P. Gerkey, R.T. Vaughan, and A. Howard. The Player/Stage project: Tools for multi-robot and distributed sensor systems. In Int. Conf. on Advanced Robotics, Coimbra, Portugal, 2003.
8. L. Girod, J. Elson, A. Cerpa, T. Stathopoulos, N. Ramanathan, and D. Estrin. EmStar: a software environment for developing and deploying wireless sensor networks. In USENIX Tech. Conf., 2003.
9. JAUS Working Group. The Joint Architecture for Unmanned Systems. Tech. report, www.jauswg.org, Feb 2004.
10. G.T. Heineman and W.T. Councill, editors. Component-Based Software Engineering: Putting the Pieces Together. Addison-Wesley, Boston, 2001.
11. W.B. Heinzelman, A.L. Murphy, H.S. Carvalho, and M.A. Perillo. Middleware to support sensor network applications. IEEE Network Mag., 18(1):6–14, 2004.
12. J. Kahn, R. Katz, and K. Pister. Emerging challenges: Mobile networking for "smart dust". J. of Comm. Networks, pages 188–196, 2000.
13. K. Konolige, C. Ortiz, R. Vincent, A. Agno, M. Eriksen, B. Limketkai, M. Lewis, L. Briesemeister, and E. Ruspini. Centibots: Large scale robot teams. Tech. report, SRI International, 2002.
14. W. Li, H.I. Christensen, A. Oreback, and D. Chen. An architecture for indoor navigation. In IEEE ICRA, New Orleans, LA, 2004.
15. A. Longshaw. Choosing between COM+, EJB, and CCM. In G.T. Heineman and W.T. Councill, editors, Component-Based Software Engineering: Putting the Pieces Together. Addison-Wesley, Boston, 2001.
16. A. Makarenko, A. Brooks, S. Williams, H. Durrant-Whyte, and B. Grocholsky. A decentralized architecture for active sensor networks. In IEEE ICRA, New Orleans, LA, 2004.
17. A. Makarenko and H. Durrant-Whyte. Decentralized data fusion and control in active sensor networks. In Int. Conf. on Information Fusion, Stockholm, Sweden, 2004.
18. A. Makarenko, S.B. Williams, and H.F. Durrant-Whyte. Decentralized certainty grid maps. In IEEE/RSJ IROS, pages 3258–63, Las Vegas, NV, 2003.
19. R.G. Simmons, D. Apfelbaum, W. Burgard, D. Fox, M. Moors, S. Thrun, and H. Younes. Coordination for multi-robot exploration and mapping. In AAAI Nat. Conf. on AI, pages 852–58, Austin, TX, 2000.
20. L. Subramanian and R.H. Katz. An architecture for building self-configurable systems. In Workshop on Mobile and Ad Hoc Networking and Computing, pages 63–73, 2000.
