Networked UAV Command, Control and Communication

Jack Elston∗, Eric Frew†, Brian Argrow‡

University of Colorado, Boulder, CO, 80309, USA
The Networked UAV Command, Control, and Communication project has designed, implemented, and tested a network-centric, intelligent flight management system for UAVs. The system utilizes an existing ad-hoc network to demonstrate the ability of UAVs to make mission-level decisions autonomously, based upon network metrics and operator-specified parameters. Each UAV features a highly modular hardware architecture which provides a standardized interface to the onboard systems and enables system scaling. A high-level bus protocol is used on this interface to transport messages between the avionics package, network node, supervisory computer, and scientific payload. All decisions are made autonomously, but an operator maintains control of the mission parameters and may modify them through the ad-hoc network using a mobile monitoring station. The station features a GUI to display feedback on node status as well as to visualize any science data being collected. The complete system was tested and verified using both a hardware-in-the-loop simulation and an experiment at an outdoor range.
I. Introduction
The ability to collect and transport data across large, mobile networks is central to many humanitarian and scientific efforts. In emergency situations, where power is lost or extra network capacity is needed, deployment of a mobile communications support network is invaluable. In wildland firefighting,1 where both incident commanders and firefighters need real-time situational data, a deployable mobile network is imperative to ensure firefighter survival and minimize loss of property. A deeper understanding of tornado formation will lead to direct improvements in advanced warning systems, saving many lives,2 and will benefit from mobile sensor networks that instantly relay data from sensors placed dangerously close to a storm to researchers. Mobile sensor networks that relay time-sensitive information will also aid investigation of the role of the Arctic as a bellwether for global climate change.3

Aerial platforms have been demonstrated to be ideal for maintaining these mobile networks.3 They enable deployment in areas impassable to other vehicles while retaining the mobility needed to cover highly dynamic or widely dispersed networks. A network-centric methodology is needed to give the UAVs the ability to autonomously position themselves for ideal connectivity, and to coordinate with other UAVs to provide the most effective coverage for the given situation and mission parameters.

Several projects have made progress in demonstrating control and coordination of miniature and small UAVs. One miniature UAV project4 has developed a system that can support several intelligent vehicles and has demonstrated a method for real-time path planning, along with cooperative aerial surveillance.5 This could potentially be expanded so that the UAVs support a mobile network and the applications mentioned above. A small UAV project6 has developed and tested a system that enables a group of vehicles to perform coordinated tasks by assigning the group a mission and allowing the autonomous systems to perform tasking and waypoint planning in real time.
The inter-vehicle communications for this project are done through ground nodes, but by moving this functionality to the vehicles, this project could similarly support a mobile network.

∗Graduate Research Assistant, Department of Aerospace Engineering Sciences. Student Member.
†Assistant Professor, Department of Aerospace Engineering Sciences.
‡Associate Professor, Director, Research and Engineering Center for Unmanned Vehicles. Senior Member.
1 of 9 American Institute of Aeronautics and Astronautics
Figure 1. Onboard Architecture.
Rather than first developing aerial platforms and later adapting the network to meet the UAV’s control demands, the Networked UAV C3 methodology is to use an existing communications schema and adapt the UAVs to the network framework. In this manner, the problem of the construction and maintenance of a mobile network can be more effectively approached. Furthermore, it will facilitate expansion of the network and allow for future investigation of other network based algorithms using both fixed and mobile platforms.
Figure 2. Network Nodes: (a) Ground-deployed MNR, (b) Vehicle-deployed MNR, (c) UAV-deployed MNR
The Ad-hoc UAV Ground Network (AUGNet) was developed at the University of Colorado, Boulder to investigate the performance of airborne mobile ad-hoc networks. The current AUGNet system allows for the connection of many mobile nodes into an ad-hoc network utilizing dynamic source routing and the 802.11b wireless protocol.7 Small nodes may be placed randomly throughout a range, and data may then be relayed to any chosen node on the network. Network topology may change, and in the case of a node deployed on a UAV, it may change at a significant rate. The network operates on IP-based addressing, and the transport layer supports both TCP and UDP. It has been tested and benchmarked in its current configuration with static nodes and with both terrestrial and aerial mobile nodes (Figure 2). All of the nodes are constructed from COTS technology, keeping system cost down and enabling easy system upgrades and integration with other devices.

A small aerial platform (approximately 10 kg) was constructed which can support an onboard network node for approximately one hour.7 The platform contains a Piccolo avionics package from Cloudcap Technologies,8 which enables the plane to be autonomously piloted around a given waypoint pattern and to store several of these waypoint patterns onboard. The system has been tested and verified in several experiments conducted at an outdoor range.

This paper presents the implementation and testing of an advanced Command, Control, and Communication (C3) system for these existing small UAVs, built upon the AUGNet mobile ad-hoc network. Integration of devices is provided through an onboard system that allows for intelligent exchange of information between the existing network and aircraft.1 This synthesis combines network metrics, vehicle status, and mission parameters to create an intelligent node that may perform data-centric tasks while remaining within specified mission parameters. Mission parameters are defined by an operator through a console operated from anywhere within the ad-hoc network. The console utilizes the network to provide real-time status for both the network and each particular node, along with overall situational awareness.
Figure 3. Experiment Block Diagram
II. Networked UAV C3
The goals of the Networked UAV C3 project are to provide the necessary command, control, and communications functions to a group of UAVs to maintain a purely autonomous flock in support of ground communications through an ad-hoc network. To accomplish these goals, a modular architecture has been implemented for the synthesis of the onboard systems needed in mission-level decision making. The architecture, pictured in Figure 1, employs a lightweight interface node that connects to each COTS component and provides an interface through a shared bus. Each node contains some intelligence and is responsible for initialization, data fetching, and any needed data manipulation for a particular device. By enforcing this paradigm, the bus traffic remains high level, enabling significant system scaling with no reprogramming of the existing nodes. System upgrades may also be performed with ease, so long as the high-level data being pushed to the bus remains unchanged. The current setup employs a Soekris single-board computer for 802.11b routing and communications, a Piccolo avionics package from Cloudcap Technologies,8 a flight computer for making abstracted mission-level decisions, and a simulated scientific payload.

A. Naiad Node
The particular interface nodes used in the current implementation are part of the Naiad system, which was developed for use in other aerospace projects.9 The Naiad node features distributed computing based upon the Atmel ATmega128 microcontroller. These full-featured microcontrollers provide six channels of PWM output, several general-purpose I/O lines, external interrupts, eight 10-bit A/D converters, and a suite of bus interfaces including UART, I2C, and SPI. By utilizing one of these communication methods, the node can be interfaced to a large number of COTS components with very little need for additional hardware.

Each node is interconnected through the fault-tolerant, high-speed Controller Area Network (CAN) serial bus. The CAN bus is ideal for this application due to its fault tolerance and message-type-based addressing protocol. CAN supports data exchange rates up to 1 Mbps, using a differential voltage signal that makes the bus much less susceptible to noise. Coupling this with the optional fault-tolerant transceiver provides a system that guarantees transmission so long as the destination node is connected to the bus. Furthermore, each message is broadcast to all nodes in the system, freeing the transmitting node from verifying reception by a particular client. To enable this, messages are addressed using a type rather than a destination. Each receiving node may set a hardware filter so that only the message types it is interested in are placed in its queue. Because particular subsystems transmit and receive a fixed set of message types, the system may be scaled without adding further software to any of the existing nodes. Furthermore, redundant systems may be placed on the bus without creating contention.
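The type-based addressing and acceptance-filter scheme described above can be sketched in software. The following is an illustrative model, not the Naiad firmware: the message-type codes, the filter/mask encoding, and the class names are all assumptions made for the example.

```python
# Software model of CAN-style type-based filtering: every frame is
# broadcast to every node, and each node's filter/mask pair decides
# which frames are queued (mirroring the hardware acceptance filter).

MSG_GPS      = 0x100  # hypothetical message-type codes
MSG_SENSOR   = 0x200

class Node:
    def __init__(self, name, accept_filter, accept_mask):
        self.name = name
        self.filter = accept_filter
        self.mask = accept_mask
        self.queue = []

    def offer(self, msg_id, payload):
        # Accept the frame only if the masked ID bits match the filter.
        if (msg_id & self.mask) == (self.filter & self.mask):
            self.queue.append((msg_id, payload))

class Bus:
    def __init__(self):
        self.nodes = []

    def broadcast(self, msg_id, payload):
        # The sender never addresses a destination; filters do the work.
        for n in self.nodes:
            n.offer(msg_id, payload)

bus = Bus()
flight = Node("flight_computer", MSG_SENSOR, 0xF00)  # wants sensor frames
comms  = Node("comms_node",      MSG_GPS,    0xF00)  # wants GPS frames
bus.nodes += [flight, comms]

bus.broadcast(MSG_SENSOR | 0x01, b"\x10\x20")  # a temperature frame
bus.broadcast(MSG_GPS | 0x02,    b"\x30\x40")  # a position frame
```

Because filtering happens at each receiver, adding a new subsystem is just another `Node` with its own filter; no existing node's code changes, which is the scaling property the text describes.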
Figure 4. Onboard Naiad Interface Nodes and Thalassa Sensor Payload
1. Thalassa Node

The addition of a simulated scientific node allows the system to demonstrate the downlink of real-time experimental data that can be used to reconstruct mission parameters or to provide limits to the onboard flight computer. The particular node used in the UAV C3 system, named Thalassa, is an expanded Naiad node with integrated temperature, pressure, and humidity sensors. The Naiad core of the Thalassa provides communications across the CAN bus and sensor data acquisition, along with device-interface firmware; this resulted in significant savings in both development time and cost. The Thalassa board serves as an interface for all three sensors. Naiad and Thalassa nodes are shown with a battery pack in Figure 4, pictured in the mounting configuration for the flight experiment.

2. Low-level Drivers and Device Interface
Interface to the Piccolo autopilot is provided through libraries from Cloudcap Technologies that have been modified to run within the processing and memory constraints of a Naiad node. Some functionality, such as message logging and queuing, was removed; since the system is assumed to operate with pseudo-real-time scheduling, the caching of messages is not important. Currently, communications to the Piccolo unit are limited to requesting a waypoint change, requesting a turn rate, uploading a set of waypoints, reading GPS data, checking battery and communication states, and reading the current destination waypoint. The communication node does, however, provide some error-handling support and will verify that uploaded waypoint sets are properly stored, and that any waypoint change requests are received and acted upon by the Piccolo unit.

Interface to the Soekris 802.11b payload node was implemented using the same protocol that was used over the TCP/UDP connection to the Soekris board from the user interface. A ping packet was transmitted at a set rate from the Soekris node to verify communications and to provide statistics about the serial link. Software on the Naiad was able to decode the packets, verify checksums, and either transmit over the CAN bus or respond appropriately over the serial link.

B. Remote Monitoring Station
A user interface to network node status and control, utilizing the ad-hoc network, was created. Because each node communicates with its neighbors periodically to maintain a routing table, building the GUI on top of the network allows the system to provide users with statistics on a per-node basis. Communications with a particular node for data requests and control are done through network sockets. Both UDP and TCP protocols are supported, and the system performs its own packetizing and error handling in a layer above the TCP or UDP packetizing. All of these communications are logged and can later be used to verify experiments and extract interesting relationships between the data.

The GUI is divided into several windows, each representing a given context of the system. An overview of the network status is presented to the user in the main window. From this, the operator can quickly deduce the number of nodes connected to the network, their current locations, node types (fixed, mobile, UAV), and some simple node context information. Node context information currently consists of the destination waypoint for UAV-type nodes, and node ID and altitude for all nodes. Furthermore, the flight plan for the UAV node is displayed on top of a geo-referenced TIFF satellite image of the range.

A secondary window exists for each node in the system and provides the operator with a command, control, and communications interface particular to the node type. In the case of the UAV node, this interface is not intended to replace the 900 MHz link and operator interface for the Piccolo unit.8 Instead, it provides control over only those parameters affecting mission-level objectives. For the current experiment, the starting waypoints for the default flight pattern and the pattern to be flown for the experiment can be specified. Furthermore, limits are placed on sensor measurements that will warrant termination of the experiment and a return to the default flight pattern.
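The custom packetizing and error-handling layer above TCP/UDP might look like the following sketch. The paper does not specify the wire format, so the field layout here (sequence number, message type, length, trailing checksum) is an assumption chosen to illustrate the pattern.

```python
# Illustrative application-layer framing above a UDP/TCP byte stream.
# Sequence numbers let the GUI match replies to requests and detect
# drops; the checksum catches corruption the transport missed or, for
# UDP, provides an end-to-end sanity check.
import struct

def pack(seq: int, msg_type: int, body: bytes) -> bytes:
    header = struct.pack("!HBH", seq, msg_type, len(body))
    checksum = sum(header + body) & 0xFF
    return header + body + bytes([checksum])

def unpack(data: bytes):
    # Returns (seq, msg_type, body), or None if the packet is damaged.
    if len(data) < 6:
        return None
    if (sum(data[:-1]) & 0xFF) != data[-1]:
        return None  # checksum mismatch
    seq, msg_type, length = struct.unpack("!HBH", data[:5])
    body = data[5:-1]
    if len(body) != length:
        return None  # truncated packet
    return seq, msg_type, body

pkt = pack(7, 1, b"status?")
assert unpack(pkt) == (7, 1, b"status?")
assert unpack(pkt[:-1] + bytes([pkt[-1] ^ 0xFF])) is None
```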
Also available in the secondary window is an interface to the data being collected by the node. This includes various system health and status variables, round-trip ping times, and any data being collected by sensors connected to the node.

C. Hardware in the Loop Simulation
Initial testing of the system was done through a hardware in the loop simulation performed in the lab. By employing the HIL simulation developed by Cloudcap,8 and placing wireless nodes in the lab, a scenario could be constructed that fully tested the ability of the aircraft to perform its autonomous tasks. Any problems with the onboard system or communication between the aircraft and remote monitoring station using the ad-hoc network were resolved before field deployment.
III. Experimental Results
The system has been fully verified utilizing ground and UAV nodes at an outdoor test range. An experiment was executed that demonstrated the UAV's ability to make mission-level decisions based upon communications status and sensor measurements.

A. Experimental Setup
The experimental setup is given in Figure 3, which provides a detailed component view of the many systems involved in the experiment. The aircraft setup, as previously discussed, is composed of a Soekris SBC for 802.11b packet routing and communications to and from the plane, a Piccolo avionics package, a flight computer, a scientific payload, and interface nodes to tie the systems together. A ground station to support Piccolo operations was based at the airfield and was composed of the Piccolo ground station, a pilot console for manual piloting, and a laptop connected to the ground station serial interface to allow an operator to command changes to the Piccolo system. This ground station was used for takeoff and landing (since these are performed manually); once the UAV was placed under autonomous control, it was maintained only to provide a failsafe in case a problem was encountered during the experiment. A laptop was used as the remote monitoring station to provide an operator with status and control for the various nodes on the ad-hoc network. All of the network packets, along with a periodic status packet sent by each node, were transmitted to a gateway which allowed for transport to an off-site database to be used in analysis. Two network nodes in
Table 1. Experimental Test Plan

1. The UAV is manually piloted for takeoff and correct behavior of the Piccolo unit is verified.
2. Ares is commanded into autonomous mode and flies flight plan 1, which is preloaded.
3. A "start experiment" command is sent from the remote monitoring station to the UAV over the ad-hoc network.
4. The UAV transitions into flight plan 2, where it sends a sensor report every second consisting of temperature, pressure, and humidity data.
5. The UAV maintains flight plan 2 until one of the following conditions is met:
   a. The temperature probe records a "simulated" temperature below 40 degrees Fahrenheit (potential icing).
   b. The communication link between the RMS and UAV has been down for more than 40 sec.
6. Ares transitions back into flight plan 1.
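The return-to-base conditions in Table 1 amount to a small decision function. The sketch below is an illustrative reconstruction: the thresholds come from the test plan, but the function and variable names are assumptions, not the actual flight-computer code.

```python
# Illustrative reconstruction of the mission-level decision in Table 1.
TEMP_LIMIT_F = 40.0      # step 5a: potential-icing threshold (deg F)
COMMS_TIMEOUT_S = 40.0   # step 5b: allowed silence from the RMS (sec)

def next_flight_plan(current_plan, temp_f, secs_since_last_ping,
                     start_experiment):
    """Return which stored flight plan the UAV should track.

    Plan 1 is the preloaded "return to base" pattern; plan 2 is the
    data-gathering pattern flown during the experiment.
    """
    if current_plan == 1:
        # Step 3: a "start experiment" command moves the UAV to plan 2.
        return 2 if start_experiment else 1
    # Step 5: in plan 2, either trigger mandates a return to plan 1.
    if temp_f < TEMP_LIMIT_F or secs_since_last_ping > COMMS_TIMEOUT_S:
        return 1
    return 2
```

Because the triggers are checked each cycle against the operator-set limits, swapping in a different network metric or sensor limit only changes the condition, not the surrounding logic.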
Figure 5. Experiment Test Range and Flight Patterns
addition to the node on the plane were located in the field to provide routing between the various components on the network.

The experimental procedure presents minimal complexity and fully demonstrates the capabilities of the remote monitoring station and the onboard systems in the aircraft. A brief procedural outline is given in Table 1, and a visual representation of the experimental plan is shown in Figure 5. From this diagram it becomes evident that flight plan 1 is a "return to base" pattern, while flight plan 2 represents a much larger track that would be taken to gather experimental data.

The mission parameters mandating a "return to base" were chosen to demonstrate the ability of the plane to conduct a mission based upon any combination of network metrics, scientific data, or aircraft status. Ambient temperature was chosen as a constraint because it represents both an experimental constraint (sensor measurements might not be desirable below a certain temperature) and a platform constraint (problems with wing icing occur below a certain temperature). Communications with the network monitoring station was also chosen since it allows the plane to react to a metric derived from the communications network. Each of these parameters was simulated: it was not desirable to actually lose communications with the monitoring station (mainly for the recording of data), and the temperature in a large outdoor environment cannot be predicted or changed. "Communications" with the monitoring station was defined as the reception of a ping packet through the ad-hoc network. The operator at the station could stop and start the ping packet transmission at will, and thus induce the plane to return to base should the ping packet fail to arrive over a set period of time.
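Because "communications" is defined simply as the reception of a ping packet, the comms-loss check reduces to a watchdog that tracks the time of the most recent ping. The sketch below is illustrative; the class and method names are assumptions, not the actual onboard software.

```python
# Illustrative ping watchdog: the link is declared down once the
# monitoring station has been silent longer than the timeout.
import time

class PingWatchdog:
    def __init__(self, timeout_s=40.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock              # injectable clock eases testing
        self.last_ping = self.clock()

    def ping_received(self):
        # Called whenever a ping arrives from the monitoring station.
        self.last_ping = self.clock()

    def link_down(self):
        # True once the station has been silent past the timeout.
        return (self.clock() - self.last_ping) > self.timeout_s
```

Stopping the ping stream at the station, as the operator did in the experiment, simply lets this timer expire, which then drives the return-to-base decision.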
Temperature was recorded from the sensor on the Thalassa node, but once the experiment was started, the Thalassa subtracted a degree from the reading for each second that passed before transmitting the value on the CAN bus. In this manner a temperature drop could be simulated, eventually reaching the temperature limit for the experiment specified by the operator.

B.
Results

Figure 6. Experimental Results: a. Destination waypoint number, b. Recorded onboard temperature, c. Recorded round-trip ping times from the plane to the monitoring station.
Results were obtained from a 50-minute flight of the UAV, which was autonomously piloted for 30 of those minutes. Most of the other 20 minutes were used to verify correct Piccolo operations and communications between all of the nodes in the network. During the experiment, all of the messages between the network monitoring station and the UAV were recorded, and the network metrics were backhauled to the off-site database and stored for later analysis. The scientific data measured by the Thalassa node was downlinked to the network monitor at a frequency of 2 Hz.

Figure 6 shows the primary results of the experiment. All of the graphs depict a value vs. time in minutes since the plane began communicating with the network monitoring station. The top graph shows the destination waypoint for the UAV. The waypoint plans (as can be seen in Figure 5) consist of waypoints 2-7 for flight plan 1 and waypoints 10-15 for flight plan 2. Any transition between these two flight plans is made evident by a change in the range of the destination waypoints. The second graph depicts the recorded (and simulated) temperature at the Thalassa node during the experiment. Interesting points include the linear transition from the current temperature to 32 degrees Fahrenheit as the Thalassa subtracts from the actual temperature following a start-of-experiment command, and the transition back to the actual temperature following a second start-of-experiment command. The bottom graph shows ping times in milliseconds for the round-trip transmission of a packet from the monitoring station to the plane and back.

By analyzing these graphs, it can be shown that the plane was able to autonomously make decisions based upon parameters set by the operator over the 802.11b ad-hoc link. For both experiments, the temperature limit was set to 40 degrees, as marked by the horizontal dashed line in the second graph. The intersection of the reported temperature and the limit is identified by the left-most vertical dashed line, which is carried through the other two graphs. In the second experiment, the communications timeout between the plane and the monitoring station was set to 40 seconds. The absence of the ping packets transmitted between the plane and monitoring station (simulating a communications dropout) for 40 seconds is marked by the right-most vertical dashed line, which is carried through the other two graphs. At both places where an experiment is being performed (the plane is tracking waypoints from the second pattern) and a limit is reached, an automatic transition to the "return to base" waypoint pattern can be seen. Furthermore, by looking at the second and third graphs, it can be shown that each type of trigger was acted upon, since only one trigger is reached per experiment.
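The flight-plan inference used when reading the top graph, where the active plan is recovered from the destination waypoint's range, is trivially expressed in code. The waypoint ranges come from Figure 5; the function name is illustrative.

```python
# Infer the active flight plan from the destination waypoint number:
# waypoints 2-7 form flight plan 1 ("return to base") and waypoints
# 10-15 form flight plan 2 (the data-gathering pattern).
def plan_from_waypoint(wp: int):
    if 2 <= wp <= 7:
        return 1
    if 10 <= wp <= 15:
        return 2
    return None  # not part of either stored pattern
```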
Figure 7. Humidity Microclimate Interpolated from Flight Data
Following postprocessing of the data, a further interesting result was encountered. The scientific data recorded from the Thalassa node readings included relative humidity. By graphing the humidity vs. the GPS position of the plane, it can be seen that a spatially consistent change of about 2% is present over the test range. Interpolation between the data points using the Kriging technique reveals a humidity contour as shown in Figure 7.
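Producing a gridded field like Figure 7 from scattered flight samples can be sketched as follows. The paper used Kriging; as a lightweight, self-contained stand-in this example uses inverse-distance weighting, and the sample coordinates and humidity values below are made up for illustration.

```python
# Grid scattered humidity samples with inverse-distance weighting (a
# simple stand-in for the Kriging used in the paper).
import numpy as np

def idw(xs, ys, vals, grid_x, grid_y, power=2.0):
    """Interpolate scattered (xs, ys, vals) onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(gx, dtype=float)
    wsum = np.zeros_like(gx, dtype=float)
    for x, y, v in zip(xs, ys, vals):
        d2 = (gx - x) ** 2 + (gy - y) ** 2
        # Clamp d2 so grid points coinciding with a sample don't divide by 0.
        w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2.0)
        num += w * v
        wsum += w
    return num / wsum  # weighted average, bounded by min/max of vals

# Hypothetical GPS-referenced humidity samples (%RH), for illustration:
xs = np.array([0.0, 1.0, 0.0, 1.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])
rh = np.array([43.0, 44.0, 45.0, 44.5])
grid = idw(xs, ys, rh, np.linspace(0, 1, 20), np.linspace(0, 1, 20))
```

Kriging additionally models spatial covariance and yields error estimates; IDW is used here only to keep the sketch short and dependency-free.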
IV. Conclusion
An onboard flight management system was constructed to provide a link between two existing systems, an ad-hoc 802.11b network and a UAV avionics package. This synthesis provides for an intelligent UAV platform which can make high level mission decisions based upon network metrics and specified operating conditions. The modular nature of the onboard system allows for significant system scaling to be performed without requiring a large overhead. Furthermore, it enables the addition of several scientific payloads for taking onboard measurements that may be relayed in near real time to a network monitoring station. The system was fully verified through both a hardware in the loop simulation and a simple experiment conducted at an outdoor range.
References

1 Meissner, A., Luckenbach, T., Risse, T., Kirste, T., and Kirchner, H., "Design Challenges for an Integrated Disaster Management Communication and Information System," Proc. IEEE Workshop on Disaster Recovery Networks (DIREN'02), New York City, New York, June 2002.
2 "VORTEX-2," http://www.vortex2.org, 2005.
3 Argrow, B., Lawrence, D., and Rasmussen, E., "UAV Systems for Sensor Dispersal, Telemetry, and Visualization in Hazardous Environments," Proc. of the 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, Jan. 2005.
4 Beard, R., Kingston, D., Quigley, M., Snyder, D., Christiansen, R., Johnson, W., McLain, T., and Goodrich, M. A., "Autonomous Vehicle Technologies for Small Fixed-Wing UAVs," Journal of Aerospace Computing, Information, and Communication, Vol. 2, 2005, pp. 92–108.
5 Beard, R., McLain, T., Nelson, D., and Kingston, D., "Decentralized Cooperative Aerial Surveillance using Fixed-Wing Miniature UAVs," Proceedings of the IEEE: Special Issue on Multi-Robot Systems, (to appear), 2006.
6 How, J., King, E., and Kuwata, Y., "Flight Demonstrations of Cooperative Control for UAV Teams," Proc. AIAA 3rd Unmanned Unlimited Technical Conference, Workshop and Exhibit, Chicago, Illinois, Sept. 2004.
7 Brown, T. X., Doshi, S., Jadhav, S., and Himmelstein, J., "Test Bed for a Wireless Network on Small UAVs," 2004.
8 "The Cloudcap Website," http://cloudcaptech.com, 2005.
9 Elston, J., Argrow, B., and Frew, E., "A Distributed Avionics Package for Small UAVs," Infotech@Aerospace Technical Conference, AIAA, Arlington, VA, 2005.