10th April 2006

Technical Report Number: TR-UoB-WS-Eden-Project-Data-Set

The Eden Project Multi-Sensor Data Set∗

J. J. Lewis, S. G. Nikolov, A. Loza, E. Fernandez Canga, N. Cvejic, J. Li, A. Cardinali, C. N. Canagarajah, D. R. Bull
University of Bristol, Bristol, BS8 1UB, UK
{John.Lewis, Stavri.Nikolov}@bristol.ac.uk

T. Riley, D. Hickman, M. I. Smith
Waterfall Solutions Ltd, Parklands, Guildford, Surrey, GU2 9JX, UK
{Tom.Riley, Duncan.Hickman, Moira.Smith}@waterfallsolutions.co.uk

∗ Available at www.imagefusion.org

Abstract - This report details a data gathering exercise carried out at the Eden Project in Cornwall, UK. A large number of different sensors (including infra-red, standard and high definition, and lower quality visible sensors) were used to film various scenes and scenarios simultaneously. Ground truth data and meta-data were also recorded. We believe we have gathered a rich and varied collection of multi-sensor image and video data that will be very useful to those who undertake research in image and video fusion, multi-source multi-target tracking, target identification, sensor management, etc. The data primarily covers short-range surveillance scenarios filmed under varying illumination conditions. Scenes include people (dressed in both civilian clothes and camouflage; stationary, walking or running; or carrying various objects), vehicles, foliage and buildings/structures. The majority of this data has been made publicly available and can either be accessed directly through, or requested via, the ImageFusion.org website.

Keywords: Multi-modal, Multi-sensor, Video Registration, Video Fusion, Image Fusion, Tracking

1 Introduction

In order to successfully develop new image and video processing algorithms it is important to have a wide variety of high quality data. There are only a few small data sets in the public domain which are suitable for image and video fusion research. Good target tracking data sets are more readily available, but few include data from multiple sensors. To improve this situation, we recently embarked on a number of data gathering exercises, both within the University of Bristol precinct and at the Eden Project in Cornwall [1]. The data has been collected with a number of high quality sensors, including infra-red (IR), standard definition (SD) and high definition (HD) cameras, under varying illumination conditions. Scenarios include people (dressed in both civilian clothes and camouflage; stationary, walking or running; or carrying various objects), vehicles, foliage and buildings/structures. The main aim was to collect data for our image and video fusion studies; however, it was envisaged that such data would also be useful in developing a wide variety of image and video processing methods. To this end the data collected includes:

• Multi-modal data with high complementarity between sensors, for image and video fusion algorithm development and assessment;
• Single and multiple targets with occlusions, viewed from both stationary and moving sensors, for the tracking of targets through video; and
• Multiple sensor locations, for work on sensor distribution and management.

2 The Eden Project Data Gathering Exercise

2.1 The Eden Project

The Eden Project is a large botanical garden situated near St. Austell, Cornwall, built in a disused china clay quarry [1]. A wide variety of plants grow in its three different biomes: "tropical", "warm temperate" and "outdoor". Figure 1 shows some photographs of the Eden Project. As well as being a popular tourist destination, the Eden Project is also concerned with education and research.

Figure 1: The Eden Project. (a) The Eden Project; (b) the Flybot.

The Tropical Biome offers a particularly interesting environment, including lush tropical foliage, running water, a network of pathways and the occasional "hut". The dense foliage gives a texturally rich environment with more vibrant colours than is possible in most environments in the UK. Some of the pathways are significantly higher than the floor of the dome, with views over the canopy, so it was possible to have sensor locations overlooking the target area. Additionally, it was possible to collect some aerial footage from a powered helium dirigible balloon, the "Flybot", shown in Figure 1(b). The majority of filming was done in this biome. The biggest disadvantage of footage from the dome was the short sensor-to-target distances, and hence a longer-distance sequence was filmed outdoors.

The data collection at the Eden Project was jointly undertaken by the University of Bristol and Waterfall Solutions Ltd, both of whom have extensive experience in image processing and image fusion. The project was also supported by Time-Slice Films Ltd and General Dynamics UK. This collaborative partnership enabled a broad range of sensors and instrumentation to be deployed, which has allowed a comprehensive and relevant image data set to be generated.

2.2 Sensors

A complete list of sensors and other equipment used is as follows, although not all sensors were used in all exercises:

• Raytheon Thermal-Eye 250D (UoB IR): IR sensor with a 75mm lens and a sensor resolution of 320x240. The composite PAL output was recorded onto miniDV tapes via a Sony Handycam.

• Panasonic CCD AW-E300A (UoB Viz): A professional SD camera, paired with UoB IR and fixed to the same tripod with a multi-camera bracket. Data was recorded onto miniDV tape in both progressive and interlaced modes.

• miniDV Sony Handycams (UoB SD, GD SD, TS SD): Standard off-the-shelf cameras that recorded onto miniDV tapes at 25fps. GD SD recorded in progressive mode; UoB SD and TS SD recorded in interlaced mode.

• HDW-F900 (UoB HD): A professional HD broadcast quality camera that recorded at 25fps progressive onto HD tape, with a Canon HJ16 8-128mm HDTV lens.

• Indigo MR1 (WS IR1): A 320x240 pixel uncooled thermal imager operating in the 10-12µm LWIR spectral band. It is an NTSC camera giving 30fps. Two different lenses were used: a 25mm, F/1.4 wide angle lens giving a horizontal field of view of 37.5°, and a 100mm, F/1.4 lens with a horizontal field of view of 9.5°.

• JVC-S250 (WS Viz): A monochrome visible band camera with a 1/3" CCD sensor (30fps, up to 570 TV lines resolution). A 25mm lens was used, with an optional 1.5X extender, to give fields of view of 13.5° and 10°. It was boresighted with WS IR1.

Figure 2: Sensors at the Eden Project. (a) UoB IR and UoB Viz mounted on the electric vehicle; (b) UoB HD, UoB IR and UoB Viz by the Malaysian Gardens.

• Low-light monochrome camera (WS LL): Operates at the standard PAL frame rate of 25fps. It has a 1/3" sensor and a resolution of 550 TV lines.

• Low quality cameras: Including mobile phone cameras and a web camera (UoB WC).

• Temperature and humidity sensors.

• GPS and laser distance meter.

Figure 2 shows some of the sensors used at the Eden Project.

Synchronisation is obviously an important issue, and in most cases it was not possible to synchronise all of the sensors electronically. Where possible, sensors captured at 25fps. At the start of each scene an omnidirectional flash, visible from all cameras, was fired (in fact eight camera flash guns attached together). The flash lasted for 1/10000s, appears on at least one frame, and is visible in the IR as well as the visible spectrum. A sketch of how such a flash frame can be located automatically is given below.
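The report does not describe a specific alignment tool, so the following is only a minimal sketch (Python with OpenCV) of one way the flash could be used to align two sequences: find the first frame in each video whose mean brightness jumps sharply, and trim each sequence to start at its own flash frame. The file names and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def find_flash_frame(video_path: str, threshold: float = 30.0) -> int:
    """Return the index of the first frame whose mean brightness jumps by more
    than `threshold` grey levels relative to the previous frame (the sync flash).
    Returns -1 if no such jump is found."""
    cap = cv2.VideoCapture(video_path)
    prev_mean, index, flash_index = None, 0, -1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mean = float(np.mean(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
        if prev_mean is not None and mean - prev_mean > threshold:
            flash_index = index
            break
        prev_mean, index = mean, index + 1
    cap.release()
    return flash_index

# Align two sequences by trimming each to start at its own flash frame, e.g.:
# offset_viz = find_flash_frame("UoB_Viz_Tropical_4.1_i.avi")
# offset_ir  = find_flash_frame("UoB_IR_Tropical_4.1_i.avi")   # hypothetical file name
```

A brightness-jump detector of this kind only needs the flash to be visible in one frame of each sequence, which matches the stated purpose of the omnidirectional flash.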

2.3 Scenarios

The data collection centred around two sites in the Tropical Biome, identified as best fulfilling our aims: the Malaysian Garden, which includes a "hut", and the Tropical South American zone. These are shown in Figure 3. Filming was only possible while the Biome was shut to the public, which suited our requirement for filming in low light conditions. It was felt that imagery gathered during the dawn and dusk periods would be of significant value, in that it offers a good balance between visible and LWIR spectral band properties. Such data is particularly useful for the design and development of military and civil security systems, as well as a range of other image fusion applications.

The main lighting in the dome is from sunlight [2]. Sunrise and sunset times are given in Table 1. There is also some path level lighting, which we could switch on and off. The aim was to record each scenario four times under different lighting conditions: three times in the evening and once early in the morning.

Ground truth data was collected as accurately as possible. GPS coordinates were recorded for all sensor positions and for target areas. Additionally, sensor-sensor and sensor-target distances were measured with a laser distance meter; a simple GPS-based cross-check of such distances is sketched below.
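The report does not state how the GPS fixes and laser measurements were reconciled, so the following is only a minimal sketch (Python) of deriving a sensor-target range from two GPS coordinates with the haversine formula. The coordinates shown are hypothetical, and at the short ranges inside the Biome a laser distance meter will be considerably more accurate than consumer GPS.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS-84 coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical sensor and target fixes (the real GPS coordinates accompany the data set):
sensor = (50.3619, -4.7447)
target = (50.3621, -4.7443)
print(f"GPS-derived range: {haversine_m(*sensor, *target):.1f} m")
```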

Figure 3: Map of the Tropical Biome: H.03: Malaysian Gardens; H.05 & H.06: Tropical Forest

Figure 4: Tropical Forest Sensor Locations

Table 1: Sunset/sunrise times

Date | Set/rise | Sunset/rise¹ | Civil² / Nautical³ / Astronomical⁴ twilight
3/10 | set      | 19:10        | 19:31 / 19:57 / 20:23
4/10 | rise     | 07:15        | 06:53 / 06:28 / 06:02
4/10 | set      | 19:09        | 19:31 / 19:57 / 20:22
5/10 | rise     | 07:16        | 06:54 / 06:28 / 06:02

The Tropical Biome provided an environment of high humidity, a data recording condition which is generally difficult for UK and European based organisations to replicate without recourse to costly trials. Although it is generally recognised that LWIR contrast-range performance is significantly impacted by higher humidity levels, the limited ranges within the Biome ensured that reasonable LWIR performance was still obtained.

The scenarios have been broadly split into two categories:

• The Tropical Forest area: data includes people in both civilian and camouflage dress, and an electric vehicle; and
• The Malaysian Gardens: scenarios involve people dressed as civilians interacting, and a hut.

2.3.1 The Tropical Forest

These scenarios include a number of targets, such as camouflaged and non-camouflaged people, decoy mines placed around the site, and an electric vehicle. Cameras were placed at both elevated and lower points of the dome, with one pair of IR and visible sensors mounted on the vehicle to give drive-through data for some scenes. The targets range from completely unobstructed to partially and fully occluded during the course of a scene. The Flybot provided an aerial view of some of the scenes. The four main scenes are:

1. Civilians walking: Both a single person and many people walking through or loitering in the target area, some talking on mobile phones, interacting, etc.;
2. Camouflaged targets: Both a single person and many people walking through the target area, some of the time moving covertly;
3. Vehicle: The UoB IR and UoB Viz pair of sensors mounted on the vehicle driving through the target area, with people/dummy mines hidden by the side of the path; and
4. Person acting suspiciously: People walking around the target area while one person abandons a rucksack, which is picked up by someone else.

Figure 5: Malaysian Gardens Sensor Locations

¹ Time at which the upper tip of the sun's disk is just at the horizon.
² Limit at which ground objects can be clearly distinguished.
³ Limit at which general outlines of ground objects are visible but detailed operations are not possible.
⁴ No illumination due to the sun.

The detailed descriptions of these scenarios are given in Table 2. The sensor layout is shown on the map in Figure 4.

Table 2: Tropical Forest Scenarios

Sn. | Actor/Object | Action | Time
1   | Civilians walking | |
1.1 | 1 person (civilian) | Walks down the stairs behind foliage and reappears in the target area, crosses the bridge, then turns back across the target area and exits down the path. | ≈2min
1.2 | 4+ people (civilian) | People enter the target area from different directions at different times (stairs, bridge and path), loiter in the target area, and exit. | ≈6min
2   | People in camouflage | |
2.1 | 1 person (camo.) | Similar to 1.1 but in camouflage. | ≈2min
2.2 | 4+ people (camo.) | Similar to 1.2 but in camouflage and moving covertly through the target area. | ≈5min
3   | Vehicle | |
3.1 | Vehicle (+ driver) + 4 people (civilians and camo.) | The vehicle drives through the target area and across the bridge, turns round and drives back across, exiting via the path. People dressed in both camouflage and civilian dress hide close to the path edge. | ≈6min
4   | Suspicious people | |
4.1 | 4+ civilians | People walk into the main target area from the stairs, path and bridge. They interact with one another or walk straight through. One person abandons a rucksack, which is later picked up by two people and put on the bridge. | ≈6min

2.3.2 The Malaysian Gardens

These scenes centre around the Malaysian Gardens. The hut in this area is defined as a sterile zone, meaning that anyone who comes near or enters the building, or interacts with someone who has been in the building, is considered suspicious. The three main scenes are:

1. Inside the hut: People walk into the hut, which is under surveillance, interact and then leave;
2. Investigating the hut: A number of people walk past the hut and one person stops and investigates it; and
3. Briefcase: One person enters the hut carrying a briefcase or package and leaves. A second person enters and removes the briefcase.

The detailed description of these scenarios is given in Table 3. The sensor layout is shown on the map in Figure 5.

Table 3: Malaysian Gardens Scenarios

Sn. | Actor/Object | Action | Time
5   | Inside the hut | |
5.1 | 3 people (civilians) | Walk separately into the hut. Once inside, move around apart and then sit together. Move to the veranda. Leave separately. A camera sneaks up to the hut to film the action inside after the actors have been inside for a short while. | ≈6min
5.2 | 3+ people (civilians) | A number of people walk past the hut and one person stops to investigate the hut, then enters the hut briefly and leaves. | ≈3min
5.3 | 3+ people (civilians) | A number of people walk past the hut. One person enters the hut carrying a briefcase and leaves without it. A second person enters the hut to retrieve the case. | ≈4min

3 The Eden Project Data Set

3.1 Ground Truth Information

Ground truth data is important for validating algorithms as well as for assessing performance. It has been collected for all trials and includes:

• Accurate sensor positions;
• Target locations; and
• Environmental data.

Each data set has associated with it a digital map showing the sensor positions (marked in different colours) and the target area, as shown, for example, in Figure 5. Additional meta-data is stored in an accompanying text file. This includes the sensor description and settings, a scene description and key words. An example of a meta-data file is shown in the following listing.

METADATA
filename: UoB_Viz_Tropical_4.1_i.avi
Date: 04-10-2005
Owner: University of Bristol
Sensor: Panasonic AG-DVX100E Dig SD Camera
Sensor Settings:
  Focus: Auto Focus ~66
  Zoom: 37
  Scene File: F4 (Optimized for filming at dusk)
  Aperture: Automatic
  Gain: 0dB
Sensor Movement: Stationary
Location: The Eden Project: Tropical Area
Scene Name: UoB_Viz_Tropical_4.1_i
Scene Description: One person carries a backpack into the target area and abandons it. Later a second person picks it up and puts it on the bridge. Other people walk into the target area from the bridge, stairs and path, interact, leave and re-enter.
Time: 1814
Environment: diffuse light
Temperature:
  1730: 27.5C at WS area, 26.5C at target area
Humidity:
  1730: 96% at WS area, 92% at target area
Comments: On same tripod as "IR" camera
Resolution: 720x576
Number Frames: 15791
Frame Rate: 25fps
Compression: dvsd
Key Words: multiple people, outdoor, occlusions
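Because the meta-data files follow a simple "Key: value" layout, scenes can be indexed or filtered programmatically. The following is a minimal sketch (Python) assuming the layout shown above; the text file name and the handling of the nested Sensor Settings / Temperature / Humidity entries are illustrative rather than part of the released format.

```python
from pathlib import Path

def parse_metadata(path: str) -> dict:
    """Parse a simple 'Key: value' meta-data file into a dictionary.
    Lines without a colon (e.g. the METADATA banner or wrapped continuation
    lines) are appended to the value of the previous key."""
    fields, last_key = {}, None
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if not line or line.upper() == "METADATA":
            continue
        if ":" in line:
            key, _, value = line.partition(":")
            last_key = key.strip()
            fields[last_key] = value.strip()
        elif last_key is not None:
            fields[last_key] = (fields[last_key] + " " + line).strip()
    return fields

# Example: select scenes by keyword (hypothetical file name for the meta-data text file)
# meta = parse_metadata("UoB_Viz_Tropical_4.1_i.txt")
# if "occlusions" in meta.get("Key Words", ""):
#     print(meta["Scene Name"], meta["Frame Rate"])
```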

3.2 Description of Data Set

Video captured on the UoB sensors, GD SD and TS SD was initially recorded onto magnetic tape and then digitised using an Apple Macintosh G5 computer with a Cinewave card and Final Cut Pro to produce uncompressed AVI files. The data has been split into files based on the scenes described in Tables 2 and 3. These videos include the synchronisation flash, which was usually fired immediately before a scene was acted out. The overall size of the full uncompressed Eden Project data set is around 2TB, and this data is stored on a multi-terabyte file server at the University of Bristol; a rough back-of-envelope size calculation is sketched below.

The data from WS IR1 and WS Viz was captured on miniDV cassette and has since been transferred without compression to hard drive and backed up as AVI files on DVD. Footage was captured in both NTSC and PAL formats. The master record of this data is held by Waterfall Solutions Ltd on their system server, and a copy of key sequences has been provided for inclusion on the ImageFusion.org website.
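As a rough illustration of why the uncompressed set reaches this size, the sketch below (Python) estimates the storage needed for a single SD take; the assumed 2 bytes per pixel (8-bit 4:2:2) is an assumption for illustration, not a statement of the actual capture format.

```python
def uncompressed_size_gb(width: int, height: int, fps: float, seconds: float,
                         bytes_per_pixel: float = 2.0) -> float:
    """Approximate uncompressed video size in gigabytes.
    bytes_per_pixel = 2.0 corresponds to 8-bit 4:2:2 sampling; use 3.0 for full RGB."""
    return width * height * bytes_per_pixel * fps * seconds / 1e9

# A single ~6 minute SD take (720x576 at 25fps) is roughly 7.5 GB:
print(uncompressed_size_gb(720, 576, 25, 6 * 60))
# A few hundred sensor-takes of this size (plus the HD footage) approaches the ~2TB quoted above.
```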

Figure 6: Tropical Forest Scene 1.2(iii). (a) UoB Viz; (b) UoB IR; (c) UoB HD; (d) UoB SD on Flybot; (e) WS Viz; (f) WS IR1.

3.3 The Tropical Forest

The data from the Tropical Forest area was collected on the 4th of October 2005 between 1730 and 2030 hours, and on the 5th of October between 0600 and 0730. Three sets of data were collected for each scene: twice in the evening and once in the morning. The exception is the vehicle scene, which was filmed twice (both in the evening). In addition, two extra scenes were filmed: packing up the equipment in the Tropical Forest area, and an outdoor scene. Temperature and humidity data is given in Table 4. Stills of the data from the Tropical Forest area are shown in Figures 6, 7 and 8. The full list of scene names, together with scene and sensor descriptions (taken from the meta-files), is available from [3].

A thermal sensor provided by Waterfall Solutions (WS IR2) was used on the Flybot for the first two takes of the scenes in the evening. For the third take, in the morning, a Sony Handycam (UoB SD) was added, co-located with the thermal sensor. Due to the large amount of pendulum-type motion, some of the aerial data collected is not as useful as hoped.

Figure 7: Tropical Forest Scene 2.2(iii). (a) UoB Viz; (b) UoB IR; (c) UoB HD; (d) UoB SD on Flybot; (e) WS Viz; (f) WS IR1.

Table 4: Tropical Forest Area Temperature and Humidity

Date, Time       | Location    | T, °C | H, %
Tue 4/10, 5:30pm | 'Path'      | 26.5  | 92
                 | 'Waterfall' | 27.5  | 96
Tue 4/10, 8:00pm | 'Path'      | 24.5  | 96
                 | 'Waterfall' | 25.2  | 98
Wed 5/10, 6:40am | 'Path'      | 21.5  | 91
                 | 'Waterfall' | 21.0  | 91

Table 5: Malaysian Garden Temperature (T) and Humidity (H)

Date, Time       | Location  | T, °C | H, %
Mon 3/10, 5:30pm | 'Hut' S-W | 25.0  | 98
                 | 'Hut' N   | 25.0  | 96
Mon 3/10, 6:45pm | 'Hut' S-W | 24.0  | 96
                 | 'Hut' N   | 24.5  | 96
Tue 4/10, 6:30am | 'Hut' S-W | 21.5  | 96
                 | 'Hut' N   | 21.5  | 96

Figure 8: Tropical Forest Scene 3.1(i). (a) UoB Viz; (b) UoB IR; (c) UoB HD; (d) WS Viz.

Figure 9: Malaysian Gardens Scene 5.1(i). (a) UoB Viz; (b) UoB IR; (c) TS SD; (d) GD SD; (e) WS IR1.

The three different illumination conditions give very different lighting. The first set of data (i) has relatively good illumination and objects are fairly easily recognisable in the visible data. The second take (ii) has very low light and almost no information is available from the visible band sensors. The final take (iii), filmed the following morning, was captured with the floor-level lighting turned on, together with very low ambient sunlight, for scenes 1.1, 1.2 and 4.1 (actors in civilian dress); the lights were switched off for the scenes involving camouflaged actors (2.1 and 2.2). The scenes with actors in civilian dress make tasks such as classification, tracking and target recognition easier to perform than the similar scenes where the actors wear camouflage. This mixture of illumination and actor type gives data of varying degrees of difficulty for image processing tasks, from the relatively easy to the very challenging, which is useful when developing new algorithms.

3.4 The Malaysian Gardens

The data from the Malaysian Gardens area was collected on the 3rd of October 2005 between 1730 and 2030 hours, and on the 4th of October between 0600 and 0730. In general the data collected follows the plan outlined above, and four sets of data under different illumination were captured for each scene. Figure 9 shows a selection of example stills from the Malaysian Gardens area. A full list of scene names, together with scene and sensor descriptions, is available from [3]. The four different illumination conditions under which this data was recorded give varying amounts of information from the visible sensors: fairly well lit, with objects easily distinguished (i); low light levels with limited information (ii); almost no information contained in the visible sensor (iii); and low light levels giving limited information, recorded the following morning (iv).

3.5 Access to and Use of the Data

A large part of the Eden Project data has been made publicly available⁵ to the international multi-sensor image fusion and target tracking research communities through the ImageFusion.org website [3]. It is accessible via a web-based searchable database that allows standard search options, such as the ability to search individual metadata fields or groups of fields using different logical operators. The result of a search is the metadata file for the matching video(s), with link(s) to thumbnail video samples. In addition, for each scene, data can be accessed through a visual interface by clicking on a sensor position on a map of the area. On this map the sensor and target locations are marked (as in Figure 5).

Access to and use of the image data is free, although it is requested that users of the data acknowledge the respective data owners and the ImageFusion.org website. Uncompressed full versions of the videos will be available upon request from the respective data owners, i.e. the University of Bristol and Waterfall Solutions, at a small charge covering the cost of media and postage. It is envisaged that further data from the trial will be added to the website over the coming months.

⁵ Users of the data should note that Bristol University, Waterfall Solutions Ltd or other parties involved in the gathering of the trials data cannot accept responsibility for any errors in the data (imagery or supporting data).

3.6 Initial Fusion Experiments

Some experiments have already been carried out using parts of this data, including recent work at Bristol on motion-based video fusion using optical flow information [4] and scanpath analysis of fused multi-sensor images with luminance change [5]. The data gathered at the Eden Project is particularly useful for the development and assessment of image registration and fusion algorithms for short-range surveillance and security scenarios. Examples of preliminary work on registration and fusion carried out by the University of Bristol are given below.

3.6.1 Video Fusion

A number of short videos have been fused using the Video Fusion Toolbox (VFT) developed at the University of Bristol. Initially, the multi-sensor videos are registered by manually selecting correspondences throughout the video sequence and using OpenCV and Intel's CPP to compute an affine transform mapping. The registered video data was then fused frame by frame with four different pixel-based methods: simple averaging (AVE); contrast pyramids (PYR); the discrete wavelet transform (DWT); and the dual-tree complex wavelet transform (CWT). These methods, for grayscale image fusion, are described in more detail in [6]. To apply them to colour visible images, the infra-red image is fused in turn with each colour plane of the visible image to give a colour fused image; a sketch of this per-colour-plane approach, using the averaging baseline, is given below. Examples of the fusion are shown in Figures 10 and 11.

Perceptually, the CWT algorithm has been found to produce the best results, showing sharp fused images without pulling through or emphasising noise, compared with the DWT algorithm. The AVE fusion produces an image that suffers from poor dynamic range and is not sharp. The PYR fusion, and to a lesser extent the DWT, amplify the noise, which is particularly distracting in the video. The PYR fusion also adds very distracting errors where a feature is not pulled through from all colour planes; this is the cause of the magenta coloured noise in the PYR fused images.

This analysis is backed up by two image fusion metrics: the Xydeas and Petrovic QAB/F metric [7] and the Piella and Heijmans image quality metric (IQM) [8]. These metrics were applied to every 50th frame of the video. The results for the sequences shown in Figures 10 and 11 are given in Table 6. They show the CWT to perform slightly better than the DWT, and both significantly outperform the PYR and AVE fusion methods.
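The VFT implementation is not given in this report, so the following is only a minimal sketch (Python with OpenCV) of the registration-then-fusion pipeline using the simple averaging (AVE) baseline: an affine transform is estimated from manually selected point correspondences, the IR frame is warped into the visible frame's coordinates, and the IR is averaged with each colour plane in turn. The point coordinates and file names are illustrative; the wavelet-based methods that performed best are not reproduced here.

```python
import cv2
import numpy as np

def fuse_average_colour(visible_bgr: np.ndarray, ir_gray: np.ndarray,
                        ir_to_vis_affine: np.ndarray) -> np.ndarray:
    """Warp the IR frame into the visible frame with a 2x3 affine matrix,
    then average the IR with each colour plane (the AVE baseline)."""
    h, w = visible_bgr.shape[:2]
    ir_reg = cv2.warpAffine(ir_gray, ir_to_vis_affine, (w, h))
    fused = visible_bgr.astype(np.float32)
    for c in range(3):                      # fuse the IR with each colour plane in turn
        fused[:, :, c] = 0.5 * (fused[:, :, c] + ir_reg)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Affine mapping from manually selected point correspondences (illustrative values):
ir_pts  = np.float32([[52, 40], [270, 38], [160, 200]])
vis_pts = np.float32([[120, 95], [600, 90], [360, 450]])
affine, _ = cv2.estimateAffine2D(ir_pts, vis_pts)

# vis = cv2.imread("frame_visible.png")
# ir  = cv2.imread("frame_ir.png", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("fused_ave.png", fuse_average_colour(vis, ir, affine))
```

Averaging is the weakest of the four methods reported above (poor dynamic range), but it makes the per-colour-plane structure of the colour fusion explicit: the same warp and per-plane combination would wrap a pyramid or wavelet fusion rule in exactly the same way.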

Figure 10: Examples of Fusion from Scene 2.1iii. (a) Registered UoB Viz; (b) Registered UoB IR; (c) Average Fusion; (d) Contrast Pyramid Fusion; (e) DWT Fusion; (f) CWT Fusion.

Figure 11: Examples of Fusion from Scene 4.1iii. (a) Registered UoB Viz; (b) Registered UoB IR; (c) Average Fusion; (d) Contrast Pyramid Fusion; (e) DWT Fusion; (f) CWT Fusion.

Table 6: Comparison of Fusion Methods

Scene           | Method | QAB/F Mean | QAB/F σ | IQM Mean | IQM σ
Tropical 2.1iii | AVE    | 0.2901     | 0.0031  | 0.6295   | 0.0105
                | PYR    | 0.2310     | 0.0344  | 0.4276   | 0.0807
                | DWT    | 0.4916     | 0.0294  | 0.7398   | 0.0205
                | CWT    | 0.5179     | 0.0337  | 0.7531   | 0.0211
Tropical 4.1iii | AVE    | 0.2866     | 0.0028  | 0.6564   | 0.0082
                | PYR    | 0.2000     | 0.0071  | 0.4835   | 0.0341
                | DWT    | 0.5490     | 0.0221  | 0.7985   | 0.0082
                | CWT    | 0.5874     | 0.0246  | 0.8087   | 0.0211

4 Conclusions

The data collected should prove useful to research in the areas of multi-sensor data fusion and tracking. As far as the authors are aware, this is the first such large and diverse multi-sensor data set, accompanied by ground-truth data, to be made publicly available. The dense foliage in both areas offers targets ranging from easy to very difficult to track. The low light levels offer some interesting opportunities for fusion and image enhancement. The different light levels heavily affect the visible sensors, with the data ranging from very clear in the visible band to containing almost no information. Additionally, the camouflage dress provides not only a more difficult tracking problem (in the visible band) but also an increased need for IR data. The data offers a challenging test case for semi- or fully-automatic video registration algorithms. Initial experiments in registration and colour video fusion using this data were discussed. The authors hope that the Eden Project data will be widely used by the research community to develop and benchmark new and existing multi-sensor image processing algorithms and systems for various applications.

Acknowledgements

Much of this work has been funded by the UK MOD Data and Information Fusion Defence Technology Centre. The authors would like to thank the Tropical Biome team at the Eden Project, particularly Don Murray, for their invaluable help. We would also like to thank all those who helped with the planning of the trials, the operation and provision of sensors, and the acting out of scenes: Tim MacMillan of Time-Slice Films Ltd; Lindsay Hitchin of General Dynamics UK; and the Defence Science and Technology Laboratory (UK MOD), who also kindly supplied camouflage uniforms and dummy mines. Waterfall Solutions Ltd would also like to thank Octec for the loan of a number of cameras and recording equipment.

References

[1] T. Smit. Eden. Corgi Adult, 2005.
[2] B. Kimpton and J. Sproull. Investigation into light levels in the Warm Temperate Biome at the Eden Project. 2002.
[3] The Online Resource for Research in Image Fusion (ImageFusion.org). www.ImageFusion.org, 2005. Viewed December 2005.
[4] J. Li, S. G. Nikolov, C. P. Benton, and N. E. Scott-Samuel. Motion-based video fusion using optical flow information. In 9th International Conference on Information Fusion (Fusion 2006), Florence, Italy, 10-13 July 2006.
[5] T. Dixon, S. G. Nikolov, E. F. Canga, J. J. Lewis, T. Troscianko, J. Noyes, C. N. Canagarajah, and D. R. Bull. Scanpath analysis of fused multi-sensor images with luminance change: A pilot study. In 9th International Conference on Information Fusion (Fusion 2006), Florence, Italy, 10-13 July 2006.
[6] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah. Pixel- and region-based image fusion using complex wavelets. Information Fusion, Special Issue on Image Fusion: Advances in the State of the Art. Elsevier, in press, 2005.
[7] V. Petrovic and C. Xydeas. On the effects of sensor noise in pixel-level image fusion performance. In Proceedings of the Third International Conference on Image Fusion, volume 2, pages 14-19, Paris, France, 2000.
[8] G. Piella and H. Heijmans. A new quality metric for image fusion. In International Conference on Image Processing (ICIP), pages 173-176, Barcelona, Spain, 2003.
