Averaging Aircraft Runway Approach Paths
Tristan Lewis, Robyn Owens, Adrian Baddeley
Department of Computer Science, The University of Western Australia, Nedlands, WA, 6907
[email protected]
Abstract
Aircraft do not follow the same route each flight, so an accurate analysis of the underlying areas affected by noise cannot be achieved by considering individual aircraft tracks. This paper describes a method for combining multiple images of aircraft tracks to obtain an `average' flight path, using the distance transform as a pre-processing step. The procedure used to average the images of aircraft tracks consists of computing the distance transform of the input images (using various neighbourhoods of distance values), combining the distance transformed images via the pixelwise median, and then thresholding at a suitable value to produce a final binary image. If the thresholding is omitted, the final grey-scale image can be used to indicate the degree to which an underlying area will be affected by aircraft noise.
Keywords: Image combination; distance transform; average flight path; noise analysis.
CR Classification: G.3 [Probability and Statistics]: Statistical Computing. I.4.m [Image Processing]: Miscellaneous.
1 Introduction
In remote sensing, a scene can change in some way, allowing multiple, different recordings to be obtained at regular intervals. It is very likely that successive images will differ; such differences occur because objects might change position, the imaging system might be noisy, or random objects might occlude the view of the object under consideration. Effective analysis of the full event cannot (usually) be undertaken by considering each image alone. Analysis is better performed on one image that is a combination (or average) of all the individual images. The averaging procedure must combine the images in such a way that areas of interest are preserved in the final image, and annoyances (such as noise) that appear in any of the original images are not present (or are at least minimised) in the final image. An example in which average images need to be computed is the analysis of aircraft runway approach paths to investigate the most noise-affected underlying areas. Examination of aircraft tracks shows that the aeroplanes do not fly the same path on each sortie, so computation of the
Department of Mathematics, The University of Western Australia
`average' flight path would lead to the identification of the underlying areas that are potentially affected by aircraft noise most often. The most conventional approach to computing an average image is to use the Vorob'ev mean [1, 6]. This method is summarised as follows: Vorob'ev introduced a mean of a random set X defined as a threshold E_V X = {x : p_X(x) ≥ p} of the coverage probability function p_X(x) = P{x ∈ X} at an "optimal" level p, which is chosen in such a way that E_V X has a volume close to the expected volume of X. In practical terms, the Vorob'ev mean is computed by obtaining the average pixel value at all points in the input images and then thresholding this at the value p to produce either a 1 or a 0 (representing object and background pixels respectively) for each pixel in the final combined image. Other less conventional image combination techniques are outlined by Stoyan & Stoyan [6]. The problem with the Vorob'ev mean is that it relies on all the images being exactly aligned. For our images of aircraft tracks, this is not the case: the aircraft do not follow identical flight paths each flight. So, to compute a sensible average flight path, we need a technique that also takes into account the values of neighbouring pixels. This is why we compute the distance transform (DT) of the input images before combining them, and it is this approach that we present in this paper.
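To make the construction concrete, the following is a minimal sketch of the Vorob'ev mean in Python. It is not the paper's implementation (which is written in C within the VIP system); for brevity it uses 1 for object pixels and 0 for background, rather than the 0/255 convention used later in the paper.

```python
import numpy as np

def vorobev_mean(images, p=0.5):
    """Vorob'ev mean of a stack of binary images (1 = object, 0 = background).

    Average the pixelwise coverage, then threshold at level p to
    recover a binary 'mean' set.
    """
    coverage = np.mean(images, axis=0)       # coverage probability per pixel
    return (coverage >= p).astype(np.uint8)  # 1 where >= p of the images had object

# Three roughly aligned vertical "tracks": the common column survives,
# the stray pixels that appear in only one image do not.
imgs = np.array([
    [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    [[0, 1, 0], [0, 1, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 1, 0], [0, 1, 0]],
])
print(vorobev_mean(imgs))
```

The alignment problem discussed above is visible in this formulation: a pixel survives only if at least a fraction p of the images mark that exact pixel, so tracks that are shifted by even one pixel contribute nothing to each other's coverage.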
2 Background Information
The concept of distance between pixels in an image was first investigated by Rosenfeld & Pfaltz [4, 5]. They defined the distance between two pixels as follows: for any two distinct points P and Q in a digitised image, the distance d(P, Q) is the minimum number of steps required to move from P to Q. Steps are taken between neighbouring pixels; we use 8-connectivity between pixels to define our neighbourhood. If we treat our binary image raster as a set E, and the object that we are looking at as a subset S ⊆ E, the distance function d(·, S) classifies all points (or pixels) P in E according to their minimum distance away from S. For example, consider the image (represented as a table of pixel values) shown in Figure 1 (a). Object pixels are denoted by 0 while background pixels are denoted by 255. If we define our vertical and horizontal distances (between pixels) to be 3 and the diagonal distance to be 4, and if we classify all points in the image with respect to their distance from the 0 or object pixels, we get the `distance transformed' image shown in Figure 1 (b).

[Figure 1 appears here: pixel tables for the example binary image (a) and its distance transformed image (b).]

Figure 1: Image (a) is an example binary image; object pixels are denoted by 0 while background pixels are denoted by 255. Image (b) is the distance transformed image. The distance transform computed is known as the Chamfer 3-4 DT.
The use of distances other than 1 or infinity to define the vertical, horizontal, and diagonal distances between neighbouring pixels was first introduced by Borgefors [2, 3]. She introduced the Chamfer DT, which uses a step function s(x, y) to define the distance between neighbouring points x and y. The example above (where the horizontal and vertical distances are defined as 3 and the diagonal distance is defined as 4) is known as the Chamfer 3-4 DT. The matrix of distance values for the Chamfer 3-4 DT is shown in Figure 2. Also shown is the distance matrix for the Chamfer 5-7-11 DT.

     4  3  4          .  11   .  11   .
     3  0  3         11   7   5   7  11
     4  3  4          .   5   0   5   .
                     11   7   5   7  11
                      .  11   .  11   .

Figure 2: The masks of distance values used for the various Chamfer distance transforms. The mask on the left represents the Chamfer 3-4 DT while the mask on the right represents the Chamfer 5-7-11 DT.

Implementation of the 2D DT involves performing two passes (forward and backward) over the input image to compute the distance values for all the pixels. The forward pass is left to right, top to bottom, and the backward pass is right to left, bottom to top. Half of the neighbourhood distance mask is used for each pass, and Figure 3 shows how the distance masks are broken up for the two passes.
     * * *     * * * * *      . . .     . . . . .
     * * .     * * * * *      . * *     . . . . .
     . . .     * * * . .      * * *     . . * * *
               . . . . .                * * * * *
               . . . . .                * * * * *

Figure 3: The top and bottom halves of the 3 x 3 and 5 x 5 neighbourhood distance masks. The cells containing an asterisk make up the mask half. The two images on the left are the top halves and the two images on the right are the bottom halves.
At each stage of the forward and backward passes, a set of two-term sums is formed. These sums are formed wherever a pixel in the image corresponds to an asterisk in the mask: the underlying pixel values are added to the distance values in the mask, and the centre pixel is replaced with the minimum of the sums. The neighbourhood distances used for the implementations of the 2D distance transform in this paper are shown in Figure 2. The image manipulation procedures presented in this paper are implemented in the VIP image processing system, version 4.0, developed by the Robotics and Vision Research Group at the Department of Computer Science, The University of Western Australia. The code is written in the C programming language.
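The two-pass procedure can be sketched as follows. This is an illustrative Python version, not the VIP C implementation; it follows the 0 = object, 255 = background convention of Figure 1 and defaults to the Chamfer 3-4 weights.

```python
import numpy as np

INF = 10**6  # stand-in for "unreached"

def chamfer_dt(binary, local=3, diag=4):
    """Two-pass Chamfer distance transform of a binary image (0 = object)."""
    h, w = binary.shape
    d = np.where(binary == 0, 0, INF).astype(np.int64)
    # Forward pass (left to right, top to bottom), top half of the mask:
    # the row above plus the pixel to the left of centre.
    for y in range(h):
        for x in range(w):
            for dy, dx, cost in ((-1, -1, diag), (-1, 0, local), (-1, 1, diag), (0, -1, local)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y, x] = min(d[y, x], d[ny, nx] + cost)
    # Backward pass (right to left, bottom to top), bottom half of the mask.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, cost in ((1, 1, diag), (1, 0, local), (1, -1, diag), (0, 1, local)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y, x] = min(d[y, x], d[ny, nx] + cost)
    return d

# A single object pixel in the centre of a 5 x 5 background.
img = np.full((5, 5), 255)
img[2, 2] = 0
dt = chamfer_dt(img)
print(dt)
```

Each pixel ends up with the weighted length of the cheapest 8-connected path to an object pixel: orthogonal neighbours of the centre get 3, diagonal neighbours 4, and the corners 8 (two diagonal steps).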
3 The Image Combination Process
The procedure used to produce the images shown in Figures 5 and 6 consists of three steps. These are summarised below:
1. Compute the distance transform of the input images;
2. Combine the distance transformed input images via the pixelwise median;
3. Threshold to produce the final binary image.
Masks of neighbourhood distance values are used to approximate the Euclidean distance between points in an image. Depending on the application, the degree of accuracy of the distance values can affect the output image. However, for our case, it does not. This is explained in Section 4. To give us an idea of the degree to which an underlying area will be affected by noise, only the first two steps (above) are performed. We are then left with a grey-scale image, and we use the intensity of the pixels in this image as the basis for our index: the darker the pixel colour (that is, the lower its value), the greater the exposure of the underlying area to aircraft noise. We define an index of exposure p(x, y) to represent this, where

    p(x, y) = (255 - b(x, y)) / 255.

Here, b(x, y) represents the value of the pixel (x, y) and 255 is the maximum pixel value. If b(x, y) is 0 then p(x, y) is 1, and we conclude that all aircraft will always fly over the point (x, y). For step 3 there are a number of ways that we can threshold the combined grey-scale image to produce the final binary image. We can threshold at 0; that is, all pixels with a value of 0 stay 0, but all pixels with a value greater than 0 have their value changed to 255. But then our output image will be equal to that produced by the Vorob'ev mean (Section 1). We therefore have the following theorem:
Theorem 1 The Vorob'ev mean with a threshold of 0.5 is equivalent to applying the distance transform to the input images, combining these distance transformed images via the pixelwise median, and then thresholding the combined grey-scale image at 0.
Proof: If our total number of images is n and we are thresholding at 50%, then for the Vorob'ev mean, a 0 in the final image at the point (x, y) implies that greater than n/2 input images had a 0 at the point (x, y). Here, a 0 is representative of an object pixel. So, if greater than n/2 images have a 0 at the point (x, y), then when we compute the distance transform of the input images and sort the array of pixel values for the point (x, y) (to find the median pixel value), there will be a 0 in the central position of the array (for an odd n) or in the central position plus 1 (for an even n). This value will be chosen for the final grey-scale image. Then, when we threshold at 0, all the pixels of value 0 stay 0 while those that are not 0 become 255; and hence we get the same final binary image as that produced by using the Vorob'ev mean. ❚
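A small numerical illustration of step 2 and of this theorem, using toy 1-D distance profiles (cross-sections of distance transformed images of three slightly misaligned tracks, with made-up Chamfer 3-4 values; 0 marks the track itself):

```python
import numpy as np

# Three misaligned tracks: their zeros never coincide.
dts = [np.array([6, 3, 0, 3, 6]),
       np.array([3, 0, 3, 6, 9]),
       np.array([9, 6, 3, 0, 3])]

median = np.median(np.stack(dts), axis=0)   # step 2: pixelwise median
print(median)                                # -> [6. 3. 3. 3. 6.]

at_zero = np.where(median == 0, 0, 255)      # threshold at 0: nothing survives,
                                             # exactly the Vorob'ev behaviour
lenient = np.where(median < 4, 0, 255)       # a higher threshold recovers the
                                             # central band shared by the tracks
```

Because no position is 0 in more than half of the profiles, thresholding the median at 0 discards everything, just as the Vorob'ev mean would; the distance values allow a higher threshold to recover the common corridor instead.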
As a result of this, if we were to threshold at 0 then we would still have the problem of alignment. We could threshold at a value greater than 0, but if we were to use the one procedure on a variety of image types, we would have to change this value each time. We overcome this by having our threshold value change automatically depending on the input images. The idea of automatic thresholding, in this context, was introduced by Baddeley & Molchanov [1]. The motivation behind it is to remove the `human' aspect of the image combination procedure and also to reduce the computational load associated with tuning the threshold value. The implementation used here is analogous to histogram matching. All of the input images are first examined and the mean object pixel coverage of the images is computed. This object pixel coverage is the ratio of black (object) pixels to the total number of pixels. The histogram of the resultant grey-scale image is then examined. The numbers of pixels having values of 0, 1, 2, 3, ... are summed to find the value where the percentage of pixels less than or equal to this value is as close as possible to the coverage percentage computed in the first step. The grey-scale image is then thresholded at this value.
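The automatic threshold selection can be sketched as below. This is our own illustrative version, not the VIP code; it assumes 8-bit grey levels and that "thresholding at t" keeps pixels with values less than or equal to t.

```python
import numpy as np

def auto_threshold(grey, target_coverage):
    """Pick t in [0, 255] so that the fraction of pixels <= t best
    matches the mean object coverage of the input images."""
    vals = np.sort(grey.ravel())
    n = vals.size
    best_t, best_err = 0, 1.0
    for t in range(256):
        # fraction of pixels with value <= t (cumulative histogram)
        frac = np.searchsorted(vals, t, side='right') / n
        err = abs(frac - target_coverage)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Toy combined grey-scale image; suppose the inputs averaged 3 object
# pixels out of 9, i.e. a coverage of 1/3.
grey = np.array([[0, 3, 4],
                 [3, 6, 7],
                 [4, 7, 8]])
t = auto_threshold(grey, 3 / 9)
binary = np.where(grey <= t, 0, 255)  # final step-3 thresholding
```

Here exactly three pixels (the 0 and the two 3s) fall at or below the chosen threshold, matching the one-third coverage of the inputs.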
4 Experimental Results
The input image set consists of 76 images, each containing a path that an aircraft has taken to the runway, overlaid on a map of the underlying suburbs. These images have been divided into five sets for the image combination process, where each set represents a particular route that the aircraft have taken to the runway (such as approaching from the north east or north west). There were 3 images in the south west set, 27 in the south east set, 13 in the north set, 6 in the north west set and 27 in the north east set. The proportion of aircraft tracks in each set is relatively typical of the proportion of flights that cover these areas. The proportion of flights arriving from the south west, however, is ever increasing due to the continuing growth of traffic from Asian regions to Perth. Examples of the five sets of input images are shown in Figure 4. Figures 5 and 6 show the average flight path and the final grey-scale image before thresholding for the five sets of input images. All the results for each set of flight paths are shown on the one image for simplicity and to get a better idea of where the aircraft fly. The output images in Figures 5 (b) and 6 (b) are the same: although a different distance transform was used in each, the value at which the grey-scale image was thresholded changed accordingly. This result is useful if a very large number of images are to be averaged and time is a factor. On a Sun Sparc 5 machine, the average increase in time is 30% for the Chamfer 5-7-11 DT over the Chamfer 3-4 DT. The use of the Chamfer 3-4 DT also gives a less localised spread of pixels in the grey-scale image (Figure 5 (a)). This gives a more even indication (assuming a linear spread) of the distribution of the noise from the aircraft's engines.
5 Conclusion
This paper has presented ideas about averaging aircraft runway approach paths to enable monitoring and analysis of the noise from aircraft engines. This is achieved by computing the distance transform of the aircraft track images before combining them via the pixelwise median. Possible problems with the current techniques include the following:
1. Thickening of the flight path in the final image. Thresholding takes place at a value other than 0: it will be a value one greater than an integer multiple of the step length between horizontal neighbouring pixels. For example, thresholding will occur at 4, 7 or 10 for images that have their distance transform computed using the Chamfer 3-4 DT. This means that at least one pixel either side of the 0 pixels in the combined grey-scale image will be preserved in the final image, and hence the characteristic thickening. It is for this reason that the distance transform performs better than the Vorob'ev mean when images are not aligned perfectly.
2. The average flight path is not connected, whereas all the input paths are. This occurs because the thresholding picks out local maxima in the grey-scale combined image. One way to get around this is, instead of thresholding the combined grey-scale image, to perform ridge detection on the image. Figure 7 shows the ridge detected version of the image in Figure 5 (a). We are, however, still left with spurious lines on the image that correspond to localised ridges. In our case, this is particularly evident in the bottom right of the image. There is a need here for averaging techniques that preserve the topological properties of the input images, and this is one area of our current research.
[Figure 4 appears here.]

Figure 4: Examples of aircraft track images. Images (a)-(e) show aircraft approaching the runway from the south west, south east, north west, north and north east of the image respectively. In each image, the aircraft track is shown in black while the background is in grey.
[Figure 5 appears here.]

Figure 5: The combined aircraft track images from using the Chamfer 3-4 DT. Image (a) is the combined grey-scale image and image (b) shows the average flight path. In image (b), the average flight path is shown in black, while the background is grey.
[Figure 6 appears here.]

Figure 6: The combined aircraft track images from using the Chamfer 5-7-11 DT. Image (a) is the combined grey-scale image and image (b) shows the average flight path. In image (b), the average flight path is shown in black, while the background is grey.
[Figure 7 appears here.]

Figure 7: The ridge detected version of the combined grey-scale image shown in Figure 5 (a).
6 Acknowledgements
The authors would like to thank Graham Moyle of the Air Traffic Services Centre at Perth Domestic Airport for providing the data of aircraft runway approach paths, and Chris Pudney for ridge tracing the grey-scale images.
References
[1] A. J. Baddeley and I. S. Molchanov. Averaging of random sets based on their distance functions. Journal of Mathematical Imaging and Vision, 1997. To appear.
[2] Gunilla Borgefors. Distance transformations in arbitrary dimensions. Computer Vision, Graphics, and Image Processing, 27:321-345, 1984.
[3] Gunilla Borgefors. Distance transformations in digital images. Computer Vision, Graphics, and Image Processing, 34:344-371, 1986.
[4] A. Rosenfeld and J. L. Pfaltz. Distance functions on digital pictures. Pattern Recognition, 1:33-61, 1968.
[5] Azriel Rosenfeld and John L. Pfaltz. Sequential operations in digital picture processing. Journal of the Association for Computing Machinery, 13(4):471-494, October 1966.
[6] D. Stoyan and M. Stoyan. Fractals, Random Shapes and Point Fields, pages 108-116. Wiley, Chichester, 1995.