Proceedings of the ASME 2009 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2009, August 30 - September 2, 2009, San Diego, California, USA

DETC2009/MESA-87741

PROGRAMMABLE MULTISPECTRAL IMAGER DEVELOPMENT AS LIGHT WEIGHT PAYLOAD FOR LOW COST FIXED WING UNMANNED AERIAL VEHICLES

Yiding Han, Austin Jensen, Huifang Dou Center for Self-Organizing and Intelligent Systems Electrical and Computer Engineering Department Utah State University, 4120 Old Main Hill Logan, UT 84322-4120 Email: [email protected] [email protected] [email protected]

ABSTRACT

In this paper, we present a light-weight, cost-efficient multispectral imager payload for low-cost fixed-wing UAVs (Unmanned Aerial Vehicles) that need no runway for takeoff and landing. The imager is band-reconfigurable, covering both the visual (RGB) and near-infrared (NIR) spectra. The number of RGB and NIR sensors is scalable, depending on the demands of the specific application. The UAV's on-board microcomputer programs and controls the imager system, synchronizing each camera individually to capture airborne imagery. It also bridges the payload to the UAV system by sending and receiving message packets. The airborne imagery is time-stamped with the corresponding local and geodetic coordinate data measured by the on-board IMU (Inertial Measurement Unit) and GPS (Global Positioning System) module, and is subsequently orthorectified using the recorded georeferencing data. Applications of such an imager system include multispectral remote sensing, ground mapping, and target recognition. In this paper, we outline the technologies, demonstrate experimental results from actual UAV flight missions, and compare the results with our previous imager system.

NOMENCLATURE
UAV Unmanned Aerial Vehicle.
IMU Inertial Measurement Unit; typically a 3-axis rate gyro co-axial with a 3-axis accelerometer.
GPS Global Positioning System.
GCS Ground Control Station; typically a Paparazzi GUI program for real-time monitoring and commanding of the Paparazzi onboard autopilot system.
gRAID Geospatial Real-Time Aerial Image Display.

Introduction

A large market is emerging around the applications offered by small unmanned aerial vehicles (UAVs) [1]. They can be used in civilian applications such as traffic control, border patrol, fire-fighting management [2], and agriculture monitoring [3]. In general, all of these applications rely on the UAV's onboard machine vision or remote sensing system. Compared with conventional satellite- and manned-aircraft-based remote sensing platforms, UAV-based systems have several advantages, such as lower cost, higher flexibility, and finer spatial and temporal resolution. In [4, 5], multispectral imagery over a coffee plantation collected by the NASA Pathfinder Plus UAV was used to estimate ripeness status and evaluate harvest readiness.

Copyright © 2009 by ASME

In [6], a UAV from IntelliTech Microsystems, Inc., fitted with five down-looking digital cameras and an up-looking quantum sensor, is utilized for precision agriculture. UAV systems featuring machine vision technologies have also been applied as versatile platforms: [7] presents a vision algorithm based on a video camera on a UAV, where the video stream is transmitted to the ground station for real-time vehicle detection, and [8] presents a vision-based road-following navigation system for a UAV. Clearly, great potential exists in UAV-based remote sensing and vision platforms.

However, due to the size constraints of these small aircraft, their remote sensing systems have much smaller footprints than satellite- or manned-aircraft-based systems. To obtain a complete ground map, the airborne imagery must be stitched together. Errors in mapping these images are inevitable, since miniature aircraft have difficulty maintaining a stable attitude. Therefore, great effort has been devoted to geometrically correcting the raw airborne images, a process called image georeferencing. Traditionally this relies on ground control points within the projected area, but establishing such points is tedious and costly, and in some applications, such as agricultural plantations where the imagery has little variation, ground control points are difficult to define. Efforts have been made to auto-identify ground control points and tie points. In [9], real-time video registration for forest fire mapping is presented, but the authors only show preliminary results. In [10, 11], differential GPS (Global Positioning System) and an IMU (Inertial Measurement Unit) are integrated into the mapping system and high georeferencing accuracy is achieved without ground control points, but this is hardly feasible on miniature fixed-wing UAVs due to the large dimensions of differential GPS equipment.
At the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University, miniature fixed-wing autonomous UAVs are developed for civil applications such as water management, irrigation control, and highway mapping. In our previous system, the UAVs were equipped with a light-weight, high-resolution multispectral optical imager with reconfigurable bands [12]. Georeferenced aerial imagery is retrieved with a system called gRAID, presented in [13]. An inexpensive compact Inertial Measurement Unit (IMU) and GPS module is integrated into the system to provide georeferencing data for the imagery, and a man-in-the-loop approach is used to minimize the error from the IMU and GPS. In [14], a multispectral imager called GhostFinger was developed for our previous UAV system. That imager contains a digital camera and a circuit board that automatically triggers the camera. The system works reliably and captures high-quality images, but the payload cannot communicate with the ground station, which makes it difficult to monitor or control, and only poor accuracy can be achieved in georeferencing the airborne imagery.

As a result, we set out to develop a multispectral imager payload that interfaces with the on-board flight microcomputer and the ground control station in order to automatically record georeferencing information for the airborne imagery, as well as to provide improved image quality. In addition, it is designed for multiple purposes and allows a scalable number of on-board imagers for both remote sensing and machine vision applications. The novel imager system is called GhostFoto (GFoto). In this paper, we outline GFoto's hardware and software architecture, present its features, and explain our methodology in detail. In the results section, multispectral imagery collected by the GFoto remote sensing platform during actual flight missions is demonstrated.

Figure 1. AIRCRAFT LAYOUT WITH UNICORN AIRFRAME.

SYSTEM DESCRIPTIONS

The airframe currently in use is called Unicorn, which is designed for low-speed gliding. The wings are made of resilient foam (EPP) that bounces upon impact. Airframes of different dimensions are used for different purposes; currently the 48, 60, and 72 inch airframes are in service. The layout of the aircraft is shown in Fig. 1.

The on-board autopilot system is a free and open-source hardware and software project known as Paparazzi [15]. A Paparazzi airborne board is installed on the aircraft, and the autopilot acts on flight data measured by the IMU and GPS receiver. Paparazzi also provides wireless communication between the aircraft and the ground station. The ground station is a computer running the open-source software on the Linux operating system. The software included with Paparazzi contains the Paparazzi Center, for configuring the Paparazzi airborne code, and the Ground Control Station, which monitors flight messages and commands the autopilot system in real time to make changes to the flight plan.

An embedded microcomputer called Gumstix [16] (shown in Fig. 2) is mounted on the aircraft. Gumstix is the backbone of the airborne system, establishing the data link from the IMU and GPS module to the Paparazzi autopilot.
It also controls the payload and records flight logs and georeferencing data for the airborne imagery. The Verdex product line of Gumstix is currently used in our UAV system.


Figure 2. GUMSTIX MICROCOMPUTER.

GFoto cameras are remotely controlled by the embedded microcomputer; the digital cameras communicate with the Gumstix Verdex through a USB 1.1 interface. As mentioned above, GFoto remote sensors operate in multiple spectra. Besides the commonly used RGB channels, we also use cameras that work in the near-infrared spectrum: by replacing the visible-light filter with an NIR filter, a CCD digital camera can be modified to take near-infrared images. In our case, a Lee 87C NIR filter, whose transmission starts at 800 nm, is placed in front of the camera's CCD sensor. For the GFoto system we chose the Canon PowerShot SX100 IS CCD camera, illustrated in Fig. 3. This camera features remote capture capability, an 8-megapixel CCD that captures images up to 3264 x 2448 pixels, and a 10x optical zoom lens with an optical image stabilizer. Its compact size and relatively light weight make it suitable as a payload on miniature UAVs. The camera body of the PowerShot SX100 IS weighs approximately 265 grams without the battery, and around 200 grams after the cover and LCD panel are removed. This is light enough for the 48 inch airframe to carry one imager, and for the 72 inch airframe to carry two, as shown in Fig. 4.

Figure 3. CAMERA BODY (LEFT) AND ITS CCD SENSOR (RIGHT).

SOFTWARE DESIGN

We designed a program called GhostEye, which runs on the Linux operating system on the Gumstix. GhostEye is based on libgphoto2 [17], an open-source, portable digital camera library of C functions for UNIX-like operating systems that supports various types of digital cameras, including the Canon PowerShot SX100 IS. With the libgphoto2 library functions, GhostEye can remotely control and configure multiple Canon PowerShot SX100 IS cameras simultaneously through a Picture Transfer Protocol (PTP) driver. PTP is a widely supported protocol developed by the International Imaging Industry Association for transferring images from digital cameras to computers [18]. The version of libgphoto2 currently used on the Gumstix is 2.4.5.

GhostEye not only controls the cameras but also provides the communication link between the payload and the UAV system. Messages can be reported from GhostEye to the ground station, and can even be shared with other UAVs using the same protocol. Conversely, messages from the UAV system can trigger the imagers; for example, once the aircraft reaches a certain altitude, it can command the imager to activate or deactivate capturing. Moreover, the georeferencing data is logged by GhostEye and saved in an XML format that can be directly imported into the gRAID software [13] to orthorectify the imagery.

Multi-Threading Architecture

Cameras are recognized by GhostEye during program initialization: GhostEye searches for Canon PowerShot SX100 IS cameras on the USB 1.1 port and, for each camera found, creates a thread that controls it. Therefore, as shown in Fig. 5, the cameras are controlled separately by individual threads under GhostEye. This multi-threaded architecture ensures the flexibility and robustness of the system: with more than one camera on board, captures can be synchronized very accurately, and if one camera malfunctions, the other cameras are not affected.
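The one-thread-per-camera design can be sketched as follows. This is an illustrative Python sketch, not the actual GhostEye C code; the `CameraThread` class and its command queue stand in for the real libgphoto2 detection and PTP capture calls.

```python
import threading
import queue

class CameraThread(threading.Thread):
    """One control thread per detected camera: a malfunction in one
    thread does not affect the others, mirroring the GhostEye design."""
    def __init__(self, camera_id, commands):
        super().__init__(daemon=True)
        self.camera_id = camera_id
        self.commands = commands   # per-camera command queue
        self.captured = 0

    def run(self):
        while True:
            cmd = self.commands.get()
            if cmd == "capture":
                self.captured += 1  # stand-in for a PTP capture call
            elif cmd == "stop":
                break

def spawn_threads(camera_ids):
    """Create one thread and one command queue per camera found
    during initialization."""
    threads = {}
    for cid in camera_ids:
        q = queue.Queue()
        t = CameraThread(cid, q)
        t.start()
        threads[cid] = (t, q)
    return threads
```

Synchronized capture across cameras then amounts to pushing a "capture" command onto every queue at the same moment.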

Imager Control and Message Feedback

GhostEye provides functions to communicate with each individual thread, so that the correct commands and camera settings are applied to each camera. It also offers interfaces to report the status of each camera, which is fed back to the UAV system. A ghosteye object is created for each camera and its control thread. Given this object, the program can retrieve information about the corresponding camera, such as its status and the number of images captured. Commands can also be sent to the ghosteye object to activate or deactivate shooting, or to modify camera settings such as exposure time and aperture size.

Figure 5. ARCHITECTURE OF GHOSTEYE PROGRAM.

Configure Camera Settings

GhostEye can change the camera settings in order to obtain accurate capture configurations. The Canon PowerShot SX100 IS supports a fully manual shooting mode, which allows the user to configure settings such as aperture size, shutter speed, white balance, and ISO. These settings are configured remotely by GhostEye; the optical and digital zoom, image size, and image quality are also configurable. For purposes like remote sensing, it is important to keep the camera in manual shooting mode for aerial images, because the camera must maintain a constant shooting setup independent of the lighting conditions. For applications like target recognition, however, GhostEye can configure the camera to use automatic shooting mode for optimized image exposure.

Figure 4. TWO GHOSTFOTO CAMERAS MOUNTED ON THE 72 INCH UNICORN AIRFRAME.
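The two shooting configurations can be sketched as capture profiles. This is an illustrative Python sketch; the setting names and values below are invented examples, not the actual PTP property names or values used by libgphoto2 or GhostEye.

```python
# Illustrative manual-mode profile for remote-sensing flights: every
# exposure parameter is pinned so that pixel values are comparable
# across the whole mission, independent of lighting conditions.
REMOTE_SENSING_PROFILE = {
    "mode": "manual",
    "aperture": "f/4.0",
    "shutter_speed": "1/800",
    "iso": 100,
    "white_balance": "daylight",
}

# For target recognition the camera may instead run in automatic
# mode, letting it optimize exposure frame by frame.
TARGET_RECOGNITION_PROFILE = {"mode": "auto"}

def settings_for(task):
    """Pick a capture profile for a mission task (illustrative)."""
    if task == "remote_sensing":
        return REMOTE_SENSING_PROFILE
    return TARGET_RECOGNITION_PROFILE
```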

Continuous Imagery Shooting Mode

In GhostEye threads, cameras are controlled to capture images periodically. The control process consists of initialization, a lens check, capture enabling/disabling, and image capture; the flow chart of this process is illustrated in Fig. 7. During initialization, GhostEye identifies the cameras and creates a ghosteye object for each. After initialization, the camera is asked to check itself by extending and retracting the optical lens. This is usually done on the ground before the UAV is launched, so that the users can visually verify the imager. When the aircraft is in the air, the lens is retracted for protection until certain conditions are met to start capturing images; the condition is usually the altitude of the plane, or a command sent from the GCS. During this process, the camera is configured with settings including aperture size, shutter speed, and ISO.
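The enable-then-capture flow can be sketched as follows. This is an illustrative Python sketch of the loop, not GhostEye's actual code: `capture_one` stands in for the real trigger call, and the altitude threshold is a made-up example value.

```python
import time

def capture_loop(get_altitude_m, capture_one, interval_s=2.5,
                 start_altitude_m=100.0, max_frames=3):
    """Wait until the start condition is met (here: reaching a given
    altitude), then trigger captures at a fixed period, as in the
    flow of Fig. 7. max_frames bounds the sketch; in flight the loop
    runs until capturing is disabled."""
    while get_altitude_m() < start_altitude_m:
        time.sleep(0.01)        # lens stays retracted until this point
    frames = 0
    while frames < max_frames:
        capture_one()           # stand-in for the camera trigger
        frames += 1
        time.sleep(interval_s)
    return frames
```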

GhostEye needs to identify the cameras in order to distinguish them, so that the correct settings and commands can be applied. Since the cameras are completely identical at the hardware level, GhostEye cannot tell them apart unless an identity is saved inside each camera. Therefore, on the Canon PowerShot SX100 IS, a configurable string called "Owner" is used to store the camera's identity. Several keywords are defined, such as "RGB" or "NIR" for the sensor's operating spectrum and "LEFT" or "RIGHT" for the mounting position on the UAV. GhostEye loads these keywords from the "Owner" string inside the camera during program initialization and thus "recognizes" the cameras.
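The keyword scheme can be sketched as a small parser. This is an illustrative Python sketch; the actual keyword vocabulary and separator used by GhostEye may differ.

```python
# Hypothetical keyword vocabulary stored in the camera's "Owner" string.
SPECTRA = {"RGB", "NIR"}
POSITIONS = {"LEFT", "RIGHT"}

def parse_owner(owner):
    """Split an Owner string such as 'NIR LEFT' into the camera's
    operating spectrum and mounting position; unknown or missing
    keywords yield None."""
    tokens = set(owner.upper().split())
    spectrum = (tokens & SPECTRA) or {None}
    position = (tokens & POSITIONS) or {None}
    return spectrum.pop(), position.pop()
```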


Figure 6. IMAGER FOOTPRINT.

Once the camera enters capturing mode, it is triggered to capture pictures periodically. The time interval between pictures is set according to the demands of the flight mission. For example, in a ground mapping task, the aerial imagery must be stitched together to form a complete map, and the stitching algorithm requires a certain amount of overlap between adjacent images, normally a minimum of 30% of the image area. Following [12], Eqn. (1) gives the minimum time interval. For example, when the UAV flies 300 meters above the ground at a ground speed of 15 m/s, the minimum time interval to maintain 30% overlap is 10.8 seconds.

tmin = (1 − p%) × Fy / v    (1)

where p% is the percentage of overlap, Fy is the vertical length of the GFoto imager's footprint, as illustrated in Fig. 6, and v is the ground speed of the plane. Fy is calculated from Eqn. (2):

Fy = (h × Py × PN) / f    (2)

where h is the flight height, Py is the pixel size, PN is the number of pixels along the CCD array, and f is the focal length of the camera. In our case,

Py = 4.31/PN (mm), PN = 2448, f = 6 (mm).    (3)

Figure 7. FLOWCHART OF GHOSTEYE THREADS.

The current GFoto RGB cameras can capture images at a maximum rate of 0.4 pictures per second (a 2.5-second capture interval). For the NIR cameras, a capture interval of 3 seconds is achievable because the NIR filter passes less optical flux. If two RGB cameras are carried on one flight mission and capture images alternately, the maximum capture rate is doubled.

State Machine

A state machine is a device that stores the status of an object and changes that status for a given input at a given time. A state machine is designed inside the ghosteye object to define the status of the GFoto imager; it is illustrated in Fig. 8. Each state is marked with a different color, corresponding to the colors in Fig. 7. Eight states are defined for the imager, including an "Abnormal" state that is entered when a libgphoto2 function returns an error.

Figure 8. LOGIC SCHEMATIC FOR GHOSTEYE STATE MACHINE.

Georeferencing Data Log

When an image is captured, its georeferencing data is logged by GhostEye. The data is the flight information from the on-board IMU and GPS at the moment the picture is taken: the IMU provides the pitch, yaw, and roll angles of the plane, and the GPS module provides the geographical coordinates of the plane, such as altitude, longitude, and latitude. With this information, every pixel in the image can be orthorectified and its geographical coordinates calculated.

Feedback Message

A dedicated message channel is opened in Paparazzi to downlink GFoto messages to the GCS. The messages include the camera's identity, the state machine status, and the number of pictures taken. These messages can also be shared among UAV systems; for instance, other UAVs flying together with the one carrying GFoto may also have access to the payload status.
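Eqns. (1) and (2) can be checked numerically. A minimal sketch using the nominal parameters from Eqn. (3); note that Py × PN equals the physical CCD height (4.31 mm), so the footprint reduces to Fy = h × 4.31/f. At h = 300 m and v = 15 m/s this yields an interval on the order of the 10.8 s quoted above (the exact value depends on the sensor dimensions used).

```python
def footprint_y(h_m, sensor_height_mm=4.31, focal_mm=6.0):
    """Vertical footprint length Fy from Eqn. (2): since Py * PN is
    the physical CCD height, Fy = h * sensor_height / f."""
    return h_m * sensor_height_mm / focal_mm

def min_interval(h_m, v_mps, p=0.30):
    """Minimum capture interval from Eqn. (1) that preserves an
    overlap fraction p between consecutive images."""
    return (1.0 - p) * footprint_y(h_m) / v_mps
```

Flying at exactly this interval leaves precisely the requested overlap: the plane advances v × tmin = (1 − p) × Fy between frames.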


Airborne Imagery

The images taken by the cameras can either be saved on the SD card inside the camera or downloaded immediately onto the Gumstix. A WiFi link is set up between the Gumstix on the UAV and the ground station; its range reaches approximately 2 miles, which allows the airborne images to be transmitted down to the ground station in real time for monitoring and/or image processing.
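The per-image georeferencing record described above can be sketched as follows. The XML element and attribute names here are invented for illustration; the actual schema that GhostEye writes and gRAID imports may differ.

```python
import xml.etree.ElementTree as ET

def geotag_record(image_name, lat, lon, alt, roll, pitch, yaw):
    """Build one XML record pairing an image with the IMU/GPS sample
    logged at the moment of capture (illustrative schema)."""
    rec = ET.Element("image", name=image_name)
    gps = ET.SubElement(rec, "gps")        # position from the GPS module
    gps.set("lat", str(lat))
    gps.set("lon", str(lon))
    gps.set("alt", str(alt))
    imu = ET.SubElement(rec, "imu")        # attitude from the IMU
    imu.set("roll", str(roll))
    imu.set("pitch", str(pitch))
    imu.set("yaw", str(yaw))
    return ET.tostring(rec, encoding="unicode")
```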

RESULTS

The images from GhostEye are processed with a program called Geospatial Real-Time Aerial Image Display (gRAID) [19]. gRAID undistorts, georeferences, and displays the images on a 3D world viewer called World Wind [20]. The images are georeferenced by finding the positions of their corners in the image plane and by rotating and translating them into Earth-Centered Earth-Fixed coordinates, using the orientation and position of the UAV at the time the image was taken. These corners can then be used to transform and position the image correctly on the globe. More information on gRAID can be found in [19].

Several flight tests were conducted on a farm owned by Utah State University (USU) (GPS: 41.8194, -111.9885; nearly 550 acres [13]). The illustrated images were taken in the morning of October 6th, 2008. As shown in Fig. 9, RGB and NIR imagery were captured during the flight. To demonstrate the capability of the GFoto imager in identifying vegetated areas, a Normalized Difference Vegetation Index (NDVI) image was generated from the NIR and RED band images. The NDVI is defined as:

NDVI = (NIR − RED) / (NIR + RED)    (4)

Figure 9. IMAGES TAKEN BY GFOTO IMAGERS: (a) RGB IMAGE, (b) NIR IMAGE, (c) NDVI IMAGE GENERATED USING EQN. (4).
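The corner-projection idea behind the georeferencing can be sketched under a flat-ground assumption. This is an illustrative simplification of what gRAID does, not its actual implementation: rotate each corner ray by the aircraft attitude (ZYX roll-pitch-yaw composition), then intersect it with the ground plane.

```python
import math

def _rx(a):  # roll about x
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def _ry(a):  # pitch about y
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def _rz(a):  # yaw about z
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def _mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def project_corner(cx_mm, cy_mm, f_mm, h_m, roll=0.0, pitch=0.0, yaw=0.0):
    """Project an image-plane corner (cx, cy) through a down-looking
    camera at height h onto flat ground. Returns the (x, y) ground
    offset in meters from the point directly below the UAV, in a
    local level frame with z pointing down."""
    R = _mul(_rz(yaw), _mul(_ry(pitch), _rx(roll)))  # ZYX attitude
    d = (cx_mm, cy_mm, f_mm)                         # corner ray, z down
    dn = [sum(R[i][k] * d[k] for k in range(3)) for i in range(3)]
    t = h_m / dn[2]                                  # scale to ground plane
    return t * dn[0], t * dn[1]
```

In level flight this reduces to the familiar scaling h/f of the sensor coordinates, consistent with the footprint formula of Eqn. (2); a nonzero attitude shifts and skews the projected corners, which is exactly the distortion the georeferencing corrects.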


The NDVI image (Fig. 9 (c)) shows rich spatial detail. The vegetated areas of the farmland, covered by grass or trees, are highlighted so that they can easily be distinguished from the rest of the scene. Calculating this index also removes the shadow cast by a patch of cloud from the figure.
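Eqn. (4) is applied per pixel over the co-registered bands. A minimal sketch, assuming the NIR and red bands are given as nested lists of pixel values; a small epsilon guards against division by zero on dark pixels.

```python
def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalized Difference Vegetation Index, Eqn. (4),
    over two co-registered band images of equal size. Values near +1
    indicate dense vegetation; bare soil and water fall near or
    below zero."""
    return [[(n - r) / (n + r + eps) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]
```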

CONCLUSIONS

In this paper, we have designed and tested a UAV-based multispectral imaging system called GhostFoto. With the current working system, future work will include automatic georeferencing of airborne images, real-time multispectral image processing, real-time target recognition, and multispectral image classification for water management and vegetation identification.

Compared to GhostFinger, the new GhostFoto architecture has several advantages. First, the image quality is greatly improved: the Canon PowerShot SX100 IS has a much better lens system, a higher-resolution CCD, and more manual capture settings. Second, the new platform uses the flight microcomputer as the controller of the imager, enabling messaging from the imager to the UAV system, so that both the UAV and the operator at the Ground Control Station can monitor the status of the sensors; command messages can likewise be sent from the GCS to control the sensors in real time. In addition, the microcomputer synchronizes the georeferencing data with the corresponding images, which greatly improves the effectiveness of further image processing. Moreover, on-board image processing is possible if the captured images are transferred onto the Gumstix.

ACKNOWLEDGMENT

The authors would like to thank Professor YangQuan Chen, Professor Mac McKee, Christopher Hall, Haiyang Chao, and Cal Coopmans for their contributions. Further thanks are due to Daniel Morgan and Dee Long for assisting in the flight tests. This research is sponsored by the Utah Water Research Laboratory (UWRL).

REFERENCES
[1] Chao, H., Cao, Y., and Chen, Y., 2007. "Autopilots for small fixed wing unmanned air vehicles: a survey". Proceedings of the IEEE International Conference on Mechatronics and Automation, August.
[2] Casbeer, D. W., Li, S.-M., Beard, R. W., McLain, T. W., and Mehra, R. K., 2005. "Forest fire monitoring with multiple small UAVs". Proceedings of the American Control Conference, June.
[3] Johnson, L. F., Herwitz, S. R., Dunagan, S. E., Lobitz, B. M., Sullivan, D. V., and Slye, R. E., 2003. "Collection of ultra high spatial and spectral resolution image data over California vineyards with a small UAV". Proceedings of the 30th International Symposium on Remote Sensing of Environment.
[4] Herwitz, S., Johnson, L., Dunagan, S., and Higgins, R., 2004. "Imaging from an unmanned aerial vehicle: agricultural surveillance and decision support". Computers and Electronics in Agriculture, 44, pp. 49–61.
[5] Johnson, L. F., Herwitz, S. R., Lobitz, B. M., and Dunagan, S. E., 2004. "Feasibility of monitoring coffee field ripeness with airborne multispectral imagery". Applied Engineering in Agriculture, 20(6), pp. 845–849.
[6] Hunt, E. R., Walthall, C. L., and Daughtry, C. S. T., 2005. "High-resolution multispectral digital photography using unmanned airborne vehicles". 20th Biennial Workshop on Aerial Photography, Videography, and High Resolution Digital Imagery for Resource Assessment, Weslaco, TX.
[7] Kaaniche, K., Champion, B., Pegard, C., and Vasseur, P., 2005. "A vision algorithm for dynamic detection of moving vehicles with a UAV". Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, April.
[8] Frew, E., McGee, T., Kim, Z., Xiao, X., Jackson, S., Morimoto, M., Rathinam, S., Padial, J., and Sengupta, R., 2004. "Vision-based road-following using a small autonomous aircraft". Proceedings of the IEEE Aerospace Conference, Big Sky, MT, May.
[9] Zhou, G., Li, C., and Cheng, P., 2005. "Unmanned aerial vehicle (UAV) real-time video registration for forest fire monitoring". Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '05), 3, July, pp. 1803–1806.
[10] Toth, C., 2002. "Sensor integration in airborne mapping". IEEE Transactions on Instrumentation and Measurement, 51(6), Dec, pp. 1367–1373.
[11] Xiang, H., and Tian, L., 2007. "Autonomous aerial image georeferencing for a UAV-based data collection platform using an integrated navigation system". 2007 American Society of Agricultural and Biological Engineers Annual International Meeting.
[12] Chao, H., Baumann, M., Jensen, A., Chen, Y., Cao, Y., Ren, W., and McKee, M., 2008. "Band-reconfigurable multi-UAV-based cooperative remote sensing for real-time water management and distributed irrigation control". IFAC World Congress, Seoul, Korea, July.
[13] Jensen, A., Baumann, M., and Chen, Y., 2007. "Low-cost multispectral aerial imaging using autonomous runway-free small flying wing vehicles". IEEE International Geoscience and Remote Sensing Symposium, Oct.
[14] Baumann, M., 2007. "Imager development and image processing for small UAV-based real-time multispectral remote sensing". Master's thesis, University of Applied Sciences Ravensburg-Weingarten and Utah State University.

[15] Paparazzi UAV. URL http://paparazzi.enac.fr/.
[16] Gumstix. URL http://www.gumstix.com/.
[17] gPhoto. URL http://www.gphoto.org/.
[18] Picture Transfer Protocol. URL http://en.wikipedia.org/wiki/Picture_Transfer_Protocol.
[19] Jensen, A., 2009. "gRAID: A geospatial real-time aerial image display for a low-cost autonomous multispectral remote sensing platform (AggieAir)". Master's thesis, Utah State University.
[20] WorldWind. URL http://www.worldwindcentral.com/.


