An Image Processing Algorithm for Ground Navigation of Aircraft

Kevin Theuma and David Zammit Mangion

Kevin Theuma, University of Malta, Msida MSD 2080, Malta, e-mail: [email protected]
David Zammit Mangion, University of Malta, Msida MSD 2080, Malta, and Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK, e-mail: [email protected]

© Springer International Publishing Switzerland 2015. J. Bordeneuve-Guibé et al. (eds.), Advances in Aerospace Guidance, Navigation and Control, DOI: 10.1007/978-3-319-17518-8_22

Abstract. Aircraft taxiing can be challenging in periods of bad weather, and pilots tend to face a considerable increase in workload. In order to minimise the impact of adverse weather such as low visibility conditions, we propose a solution that can automatically navigate aircraft by using image processing techniques to determine the aircraft's position relative to the taxiway. The output position is intended to provide feedback to a dedicated controller. The task of the image processing algorithm is to identify the taxiway centreline markings by extracting features from the image and processing this information. The detected centreline markings are then modelled through curve fitting techniques. The cross-track and heading errors of the aircraft are measured from these curves, and these define its position. Results show that the developed algorithm provides the position of the aircraft with centimetric accuracy. The algorithm performs well in various weather conditions, including clear, stormy and foggy weather, and works well in both daytime and night-time conditions.

1 Introduction

As air traffic keeps growing, so does the interest in automating all phases of flight. The aim is to shift tasks from the pilot to an autonomous system so that operations can be carried out more quickly and safely. Statistics published by PlaneCrashInfo.com [1] show that the majority of aircraft accidents are a result of human error, possibly due to fatigue or distraction.


While phases such as cruise and landing have already been automated, aircraft taxi, together with take-off, has not. An autonomous system is particularly helpful in low visibility conditions. In this kind of weather, pilots encounter more challenges than usual and face a considerable increase in workload. There is also an increase in hazards, which could lead to incidents and accidents, and a reduction in the efficiency of the operation of the National Airspace System. In low visibility conditions, it is common for flights to be delayed, rerouted or cancelled. This has a negative effect on the industry's economics as it adds to costs and lowers revenue [2].

Today, the most popular systems for position fixing are satellite navigation systems. The traditional Global Positioning System (GPS) can pinpoint the position of the receiver with inaccuracies of up to 10 m [3]. Unfortunately, GPS is susceptible to various errors, including atmospheric effects, multipath effects, relativistic effects, and ephemeris and clock errors [4]. Additionally, it is disrupted by stormy weather and solar flares [5]. Due to these problems, GPS fails to attain centimetric accuracy without additional hardware and reference ground stations. Therefore, on its own, GPS is not reliable enough for ground navigation of aircraft because an error of a few metres can result in a collision or in the aircraft going off track. The use of advanced satellite navigation systems that rely on reference ground stations, such as Differential GPS (DGPS) [6] and the Wide Area Augmentation System (WAAS) [7], is not feasible because it would require the airport to be equipped with such stations, which is not always the case.

An alternative solution is to use vision-based systems for determining the position of the aircraft relative to the taxiway or runway. Google has already managed to construct a driverless car by equipping it with a Light Detection And Ranging (LIDAR) system together with an image processing system. Unlike satellite navigation systems, image processing systems use relative rather than absolute position fixing, so errors related to absolute position fixing, such as the path definition error, are eliminated. These kinds of systems are promising for achieving position fixing with centimetric accuracy in all weather conditions. With specialised equipment and image processing techniques, the system can be made robust to bad weather conditions, even low visibility.

This work focused on image processing techniques that can be used to guide aircraft on the ground in real time. The algorithm was developed for all weather conditions and for all times of the day. By analysing and processing images, the algorithm calculates the cross-track error (i.e. the position of the aircraft relative to the centreline) and the heading error (i.e. the orientation of the aircraft relative to the centreline). The cross-track error ρ and the heading error θ are illustrated in Figure 1. Centimetric accuracy was desired for use in aircraft guidance on the ground. After the cross-track and heading errors are found, they can be used to provide feedback to the controller developed by Zammit et al. [8].


Fig. 1 An illustration of the cross-track error ρ and the heading error θ

2 Literature Review

Due to the lack of literature concerning the identification of taxiway markings through image processing techniques, the developed system was based on concepts and ideas taken from lane detection and tracking systems intended for road vehicles. These techniques are relevant to this work because road markings are similar to those present on a taxiway. On a road with two lanes, lane markings are analogous to centrelines and road boundary markings are analogous to taxiway side-markings. Therefore, due to these similarities, techniques from lane detection and tracking can be readily applied to systems that identify taxiway markings. The present research focused on work that takes a feature extraction approach in order to accurately identify the edges of the taxiway centreline and use this information to measure the cross-track and heading errors.

Notable work on lane detection and tracking includes that by Bertozzi et al. [9], Wang et al. [10], Yu et al. [11] and Hota et al. [12]. Unfortunately, these systems suffer from shortcomings that make them unsuitable for the intended application (detecting and tracking taxiway centreline markings) without adjustments and modifications. The system proposed by Bertozzi et al. requires the centreline and both side-markings to be present in the input image; also, the intensities of the taxiway markings have to be considerably higher than the rest of the ground. The Canny / Hough Estimation of Vanishing Point (CHEVP) algorithm proposed by Wang et al. did not work correctly when tested on video sequences containing parallel lines: since the CHEVP algorithm divides the input images into five sections, the lines detected in each section did not necessarily belong to the same edge, causing erroneous results. The method proposed by Yu et al. does not use a sufficiently flexible curve model for representing the centreline edges because its parabolic curves do not superimpose the centreline edges at the bottom of the image. The cross-track and heading errors are measured at the bottom of the image so, in that part of the image, the curves of the centreline edges have to accurately represent the edges by superimposing them.


The line clustering approach proposed by Hota et al. assumes that edges are straight or almost straight. However, this work was expected to handle scenarios in which the centreline is curved, so such assumptions could not be made. Considering that no work in the reviewed literature provided the desired results, the concepts and ideas from these systems were analysed, tested and compared in order to choose the best components and integrate them into a system tailor-made for the desired application.

3 Design of the Algorithm

The algorithm was designed and developed in the Matlab environment, which includes an image processing toolbox that facilitates the integration of image processing techniques and provides an interface for debugging and checking the performance of the individual components. The complete system was intended to handle images from an external VGA CCD camera, so the algorithm was designed to handle images with a resolution of 640×480. Colour information is not used throughout the algorithm; consequently, the algorithm was designed to handle grayscale images in order to simplify processing and minimise execution time. Colour images are therefore converted into grayscale before being input into the algorithm.

The flowchart of the proposed system is presented in Figure 2. Once the algorithm reads the image, it increases its contrast using a novel technique referred to as Contrast-Limited Local Histogram Equalisation (CLLHE). Then a Sobel edge detector identifies the left and right edges in the image and produces two separate binary images. These binary images are thinned using the morphological thinning technique [13]. The Hough transform [14] identifies the most dominant lines in the binary images and, after finding the peaks, the line segments are reconstructed in the Cartesian space. From these line segments, the pair that best identifies the centreline edges is chosen. The line segments are mapped from the image plane to the ground plane by using the homographic transform [15]. Other line segments that appear to belong to the same edges described by the chosen pair are found and clustered. Points are sampled from the clustered line segments and input into the Weighted Least Squares Fit (WLSF) [16] technique, which fits the best curves through these points. At this point, the algorithm will have generated two curves that represent the left and right edges of the centreline. The curve describing the actual centreline is derived from these curves by calculating the coefficients of the curve that lies in the middle of the two. The curve describing the actual centreline is used to measure the cross-track and heading errors. These values are tracked using the Kalman filter [17] in order to minimise noise. The individual components are explained in detail in the following sections.


Fig. 2 The flowchart of the developed image processing algorithm

3.1 Contrast Enhancement

The contrast of the input images is enhanced in order to bring out details that might be obscured by bad weather or poor illumination. This ensures that the desired features are extracted in various weather conditions and at different times of the day, thus providing a solution that works even in scenarios of low visibility and at night. The technique used for enhancing the contrast is referred to as Contrast-Limited Local Histogram Equalisation; it is based on Contrast-Limited Adaptive Histogram Equalisation (CLAHE) [18] but is adjusted to execute faster than CLAHE so that it can be used for real-time applications on embedded devices. The CLLHE starts by dividing the image into tiles with dimensions 80×80. The histogram equalisation technique is applied individually to each tile. This consists of constructing the histogram by counting the occurrences of each grey level in the tile. Next, the cumulative histogram is constructed by cumulatively adding the number of occurrences in each bin. This process can be summarised by the cumulative distribution function (cdf) shown in Equation 1.

cdf_x(i) = \sum_{j=0}^{i} p_x(j)                    (1)

where p_x(j) is the occurrence of grey level j and cdf_x(i) is the cumulative distribution function of grey level i. Contrast enhancement is restricted so that in cases where the number of different grey levels is small, the contrast in the image is not enhanced excessively. This helps limit the amount of noise that can be produced by the histogram equalisation technique. The contrast is limited by spreading bin counts that exceed a specified limit, chosen here as 32 counts per bin. Whenever a bin exceeds this limit, the excess is distributed by dividing it by the total number of bins (i.e. 256) and adding the result to each bin. This means that bins that exceed the limit still receive a share of the excess but, since they do not receive the full amount, the contrast enhancement is restricted. The excess is not redistributed one count at a time as in the CLAHE technique in order to minimise execution time. Furthermore, a division by 256 can be translated to 8 logical shifts to the right. The mapped grey levels are found by using Equation 2:

h(k) = \frac{L-1}{MN} \times cdf(k)                    (2)

where L is the total number of grey levels that can appear in the image, M and N are the width and height of the tiles respectively, cdf(k) is the cumulative distribution function of grey level k and h(k) is the mapped value of grey level k. Finally, the pixels in the tile are mapped to the new values by using this transformation. An example of the output produced by the CLLHE process when applied to the image in Figure 3 is presented in Figure 4. Since the CLLHE algorithm works on individual tiles, it produces a block effect. However, this visual effect is not an issue because the output is not meant to be pleasing to the human eye but is meant to be processed further by the algorithm. The downside is that the edge detector tends to detect the boundaries of these blocks as edges. This is unwanted, so the edges at these locations are suppressed and, as a result, some edge information is lost. The contrast can be increased even further by increasing the value of the limit, but this also increases the noise in the image, so a compromise between the two has to be reached.
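To make the procedure concrete, the following is a minimal Matlab-style sketch of the CLLHE step as described above. The function name cllhe is illustrative and the clipping arithmetic follows the description in this section (clip at 32 counts per bin, share the excess equally among all 256 bins); the paper itself gives no reference implementation.

```matlab
function out = cllhe(img)                        % img: uint8 grayscale image
    tileSize = 80; limit = 32; nBins = 256;
    out = zeros(size(img), 'uint8');
    for r = 1:tileSize:size(img,1)
        for c = 1:tileSize:size(img,2)
            rows = r:min(r+tileSize-1, size(img,1));
            cols = c:min(c+tileSize-1, size(img,2));
            tile = img(rows, cols);
            % Histogram of the tile (grey levels 0..255)
            h = accumarray(double(tile(:))+1, 1, [nBins 1]);
            % Clip each bin at the limit and share the excess equally;
            % clipped bins still receive a share, as described above
            excess = sum(max(h - limit, 0));
            h = min(h, limit) + excess / nBins;
            % Cumulative distribution and grey-level mapping (Equations 1 and 2)
            cdf = cumsum(h);
            map = uint8((nBins - 1) * cdf / numel(tile));
            out(rows, cols) = map(double(tile) + 1);
        end
    end
end
```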

3.2 Edge Detection and Morphological Thinning

After enhancing the contrast of the input images, the algorithm identifies edges by using the Sobel edge detector [19]. When compared to other edge detection techniques such as the Roberts Cross and Canny edge detectors [20], the Sobel filter appears to provide the best trade-off between sensitivity to noise and execution time. For example, when compared to the Roberts Cross technique, it takes longer to execute but is less sensitive to noise. On the other hand, when compared to the Canny edge detector, the Sobel edge detector executes faster but is more sensitive to noise.

Fig. 3 An image of a taxiway in low visibility conditions used for testing the CLLHE algorithm

Fig. 4 An image showing the result of the CLLHE algorithm on Figure 3, demonstrating the block effect

The Sobel edge detector works by sliding a 3×3 window over the entire image. For every position of the window, the pixels in the window are convolved with the vertical and horizontal Sobel masks, producing the vertical and horizontal gradients denoted by Gy and Gx respectively. This operation is summarised by Equation 3 and Equation 4:

G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \ast W                    (3)

G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \ast W                    (4)

where W is the 3×3 sliding window. These gradients are used to calculate the gradient magnitude G and the direction of the edges Θ as shown in Equation 5 and Equation 6:

G = \sqrt{G_x^2 + G_y^2}                    (5)

\Theta = \tan^{-1}\left(\frac{G_y}{G_x}\right)                    (6)


In order to distinguish between left and right edges, the thresholding method is replaced by a technique that, apart from checking that the gradient magnitude exceeds the threshold, also checks the sign of Gx. The value of Gx is negative when the intensity along the horizontal axis increases and positive when it decreases. The centreline markings are assumed to be lighter than the rest of the ground; consequently, a negative value of Gx indicates a left edge while a positive value indicates a right edge. The thresholding process is represented by Equation 7 and Equation 8, where T is the threshold and SL and SR are the binary outputs indicating the left and right edges respectively. The value of T is set to 77.

S_L = \begin{cases} 1, & G \geq T \text{ and } G_x < 0 \\ 0, & \text{elsewhere} \end{cases}                    (7)

S_R = \begin{cases} 1, & G \geq T \text{ and } G_x > 0 \\ 0, & \text{elsewhere} \end{cases}                    (8)

The binary outputs are used to construct two new binary images indicating the left and right edges. When Gx is equal to zero, no edge is recorded because the edge is perfectly horizontal and hence is neither a left edge nor a right edge. However, perfectly horizontal edges are not of interest: since the aircraft follows the centreline, the edges of the centreline will normally have a non-zero horizontal gradient. The edges that result from the block effect caused by the CLLHE algorithm are suppressed by ignoring edges at the boundaries of the tiles. The Sobel edge detector can produce thick edges, so the binary images it outputs are thinned using the morphological thinning technique. This removes duplicate edge information in order to reduce the processing time of the Hough transform and minimise redundant lines that it might identify.
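The directional thresholding can be sketched as follows in Matlab. The snippet is illustrative: it assumes img is the contrast-enhanced grayscale image, and it notes (rather than implements) the tile-boundary suppression and the toolbox thinning step.

```matlab
% Directional Sobel edge detection (Equations 3 to 8)
Kx = [-1 0 1; -2 0 2; -1 0 1];     % horizontal gradient mask
Ky = [-1 -2 -1; 0 0 0; 1 2 1];     % vertical gradient mask
I  = double(img);
Gx = conv2(I, Kx, 'same');         % conv2 flips the mask, matching the
Gy = conv2(I, Ky, 'same');         % sign convention used in the text
G  = sqrt(Gx.^2 + Gy.^2);          % gradient magnitude (Equation 5)
T  = 77;                           % threshold from the text
SL = (G >= T) & (Gx < 0);          % left-edge binary image (Equation 7)
SR = (G >= T) & (Gx > 0);          % right-edge binary image (Equation 8)
% Edges on the 80x80 tile boundaries would be zeroed here, and thinning
% would follow, e.g. bwmorph(SL, 'thin', Inf) with the toolbox.
```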

3.3 Detection of Line Segments

The Hough Transform is used to detect the most dominant line segments in the binary images produced by the Sobel edge detector. Considering the Hough Transform equation in Equation 9, θ is incremented from -90° to 89° in steps of 1°.

\rho = x \cos\theta + y \sin\theta                    (9)

where (x,y) are the Cartesian coordinates of the binary images and (ρ,θ) are the coordinates in the Hough space. Once the Hough transform is used to construct the Hough accumulator, the peaks in the Hough space are identified to generate two lists of lines per binary image. One list consists of the lines with the 5 highest vote counts that also exceed 30% of the highest vote count; the other consists of the lines with the 25 highest vote counts that also exceed 10% of the highest vote count.


These lines are reconstructed in the Cartesian space by inputting the ρ and θ values into the Hough transform equation and finding the points in the binary images at which the equation holds true. Wherever the equation holds true, a line segment is present at that point. Line segments separated by gaps smaller than 5 pixels are merged in order to make up for any discontinuities produced by the Sobel edge detector, and line segments shorter than 15 pixels are discarded because these are considered to be a result of noise. Ultimately, this stage of the algorithm produces two lists of line segments per binary image, i.e. two lists for left edges and two lists for right edges.
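A compact sketch of the accumulator construction is given below for one binary edge image S; peak picking, segment reconstruction, merging and length filtering are omitted. The vote counts in H drive the two lists described above.

```matlab
% Hough accumulator for a binary edge image S (Equation 9)
thetas = (-90:89) * pi/180;                  % 1-degree steps
[ys, xs] = find(S);                          % edge pixel coordinates
rhoMax = ceil(hypot(size(S,1), size(S,2)));  % largest possible |rho|
H = zeros(2*rhoMax + 1, numel(thetas));      % accumulator
for k = 1:numel(thetas)
    rho = round(xs*cos(thetas(k)) + ys*sin(thetas(k)));
    H(:,k) = H(:,k) + accumarray(rho + rhoMax + 1, 1, [2*rhoMax+1 1]);
end
% The strongest peaks of H give the dominant lines: the 5 highest counts
% above 30% of the maximum, and the 25 highest counts above 10% of it.
```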

3.4 Centreline Detection

After detecting the most dominant line segments, the pair of line segments that best represents the centreline edges is selected from the list derived from the lines with the 5 highest vote counts. By working on a small set of dominant line segments, the chance of including line segments resulting from noise is smaller than with a larger set. Erroneous line segments can be detrimental to this component, which is why the algorithm works on the smaller list. Line segments that are horizontal or almost horizontal (i.e. having θ between -90° and -81° or between 81° and 89°) are ignored in this component because these normally belong to the holding position markings. First, the pair of line segments closest to the bottom centre point of the image is found by calculating the distances between the line segments and the bottom centre point (320,480). Considering line segment AB and point C, the equation used depends upon the position of the perpendicular projection of C onto AB, denoted by r. The value of r is calculated by Equation 10.

r=

AC • AB AB

2

=

( A y − C y )( A y − B y ) − ( Ax − C x )( B x − Ax ) ( B x − Ax ) 2 + ( B y − A y ) 2

(10)

Next, the distance between point C and line segment AB is found using Equation 11.

d_{min} = \begin{cases} \sqrt{(D_x - C_x)^2 + (D_y - C_y)^2}, & 0 \leq r \leq 1 \\ \sqrt{(A_x - C_x)^2 + (A_y - C_y)^2}, & r < 0 \\ \sqrt{(B_x - C_x)^2 + (B_y - C_y)^2}, & r > 1 \end{cases}                    (11)

where D = (A_x + r(B_x - A_x),\; A_y + r(B_y - A_y)).


There may be situations in which the line segment representing the left edge of the centreline lies to the right of the one representing the right edge. To identify such situations, the points at which the extended line segments intersect the bottom of the image are calculated and compared. When such a situation is identified, the line segment in the pair that is closest to the bottom centre point (320,480) is retained while the other is discarded. The discarded line segment is then replaced by the next line segment lying closest to the bottom centre point and on the correct side of the retained line segment. This ensures that the line segment representing the left edge of the centreline lies to the left of the one representing the right edge.
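The distance computation of Equations 10 and 11 amounts to the standard point-to-segment distance, sketched below with points given as 1-by-2 vectors. The function name is illustrative.

```matlab
function d = pointSegDist(A, B, C)   % A, B: segment endpoints; C: query point
    % Projection parameter of C onto the line through A and B (Equation 10)
    r = dot(C - A, B - A) / sum((B - A).^2);
    if r < 0
        d = norm(A - C);             % closest to endpoint A
    elseif r > 1
        d = norm(B - C);             % closest to endpoint B
    else
        D = A + r * (B - A);         % perpendicular foot (Equation 11)
        d = norm(D - C);
    end
end
% Example: distance from a segment to the bottom centre point (320,480)
% d = pointSegDist([100 480], [200 300], [320 480]);
```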

3.5 Inverse Perspective

The line segments derived from the lines having the 25 highest vote counts are mapped from the image plane to the ground plane by using the homographic transform. Unlike Inverse Perspective Mapping [21], the camera parameters do not have to be known; instead, the transformation matrix is found through a calibration procedure. The most common calibration method makes use of a checkerboard of known dimensions. The coordinates of the squares in the image and on the board are input into Equation 12 in order to derive the homography matrix.

\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -X_1 x_1 & -X_1 y_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -Y_1 x_1 & -Y_1 y_1 \\
x_2 & y_2 & 1 & 0 & 0 & 0 & -X_2 x_2 & -X_2 y_2 \\
0 & 0 & 0 & x_2 & y_2 & 1 & -Y_2 x_2 & -Y_2 y_2 \\
\vdots & & & & & & & \vdots \\
x_n & y_n & 1 & 0 & 0 & 0 & -X_n x_n & -X_n y_n \\
0 & 0 & 0 & x_n & y_n & 1 & -Y_n x_n & -Y_n y_n
\end{bmatrix}
\begin{bmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{bmatrix}
=
\begin{bmatrix} X_1 \\ Y_1 \\ X_2 \\ Y_2 \\ \vdots \\ X_n \\ Y_n \end{bmatrix}                    (12)

where (x,y) are the image plane coordinates, (X,Y) are the ground plane coordinates and a, b, c, d, e, f, g and h are the elements of the Homography matrix. The Homography matrix remains constant as long as the camera parameters are left unchanged. During the execution of the algorithm, the endpoints of the line segments are mapped from the image plane to the ground plane by inputting them into Equation 13:


\begin{bmatrix} XW \\ YW \\ W \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}                    (13)

where the matrix with elements a, b, c, d, e, f, g and h is the Homography matrix.
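The calibration and mapping steps can be sketched as follows, where xy holds n image-plane points and XY the corresponding ground-plane points (n >= 4). The least-squares solve via the backslash operator is an assumption; any linear solver would do.

```matlab
function Hm = calibrateHomography(xy, XY)    % build and solve Equation 12
    n = size(xy, 1);
    M = zeros(2*n, 8); b = zeros(2*n, 1);
    for i = 1:n
        x = xy(i,1); y = xy(i,2); X = XY(i,1); Y = XY(i,2);
        M(2*i-1,:) = [x y 1 0 0 0 -X*x -X*y];  b(2*i-1) = X;
        M(2*i,  :) = [0 0 0 x y 1 -Y*x -Y*y];  b(2*i)   = Y;
    end
    p = M \ b;                               % least-squares solve for a..h
    Hm = [p(1) p(2) p(3); p(4) p(5) p(6); p(7) p(8) 1];
end
% Mapping an image point to the ground plane (Equation 13):
% v = Hm * [x; y; 1];   X = v(1)/v(3);   Y = v(2)/v(3);
```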

3.6 Line Clustering

Up to the centreline detection stage, the algorithm describes the centreline edges using a pair of line segments. This is only suitable for straight centrelines. In order to be able to describe curved centrelines, the algorithm groups other line segments that appear to belong to the centreline edges. The pair of line segments found earlier is used as a starting point for finding the other line segments that lie on the centreline edges. First, the search proceeds iteratively from one segment to another in the upward direction. Then, it is repeated in the downward direction (once again starting from the pair of line segments). Considering the example illustrated in Figure 5, line segment a belongs to the line segment pair representing the centreline edges and is therefore used as a starting point. The algorithm then searches upwards from line segment a for the next line segment that appears to belong to the same edge and finds line segment b. Once again, the algorithm searches upwards, this time from line segment b, and finds line segment c. However, when the search is repeated upwards from line segment c, no other line segments that appear to belong to the same edge are found, so the upward search stops there. The procedure is then repeated in the downward direction: the algorithm searches downwards from line segment a and finds line segment d. However, it fails to find any line segment below line segment d, so the downward search stops there. All of the line segments that appear to belong to the same edge are added to the cluster.

The algorithm selects the line segments by comparing various characteristics and ensuring that they meet certain criteria. The first property checked is the difference in orientation: the angles of successive line segments must not differ by more than 45°. Next, four other conditions are checked and, if any of these is satisfied, the line segment under consideration is selected and added to the cluster. If more than one line segment matches these criteria, only the one closest to the last selected line segment is added to the cluster. One of the four conditions is whether the line segments intersect. The algorithm determines whether two line segments intersect by comparing the relative positions of the line segment endpoints. Considering two intersecting line segments AB and CD, one endpoint of line segment CD should lie on the left of AB while the other endpoint should lie on the right of AB. If this is not the case, then the line segments do not intersect. The positions of the endpoints relative to each other are found by using the cross product. For example, to find the position of endpoint C relative to line segment AB, the cross product is calculated as in Equation 14.


Fig. 5 An illustration depicting the procedure that clusters line segments belonging to the same edge

To find the position of D relative to line segment AB, the same equation is used, but C is replaced with D. Opposite signs indicate that points C and D lie on opposite sides, while identical signs indicate that they lie on the same side [22].

p = (B - A) \times (C - A)                    (14)

The second condition checked is whether the distance between the endpoints of the line segments is smaller than 5 pixels. If the search is upwards, the distance is measured from the upper endpoint of the last selected line segment to the lower endpoint of the line segment under consideration. If the search is downwards, the distance is measured from the lower endpoint of the last selected line segment to the upper endpoint of the line segment under consideration. The distance between the points is calculated by using Pythagoras' theorem. The third condition checked is whether the line segments are collinear or almost collinear. This is done by using the equation that calculates the distance between a point and a line, treating the line segment as an extended line. If the search for the next line segment is upwards, the distance calculated is that between the last selected line segment and the lower endpoint of the segment under consideration; otherwise, if the search is downwards, the distance calculated is that between the last selected line segment and the upper endpoint of the segment under consideration. The maximum accepted distance is 5 pixels. The last of the four conditions is similar to the third, but the acceptable distance increases as the line segments get farther from each other and is set equal to the distance measured whilst checking the second condition. If the fourth condition is met, the line segments must satisfy a further condition that checks whether they form a turn. This is the case when a line passing through the closest endpoints of the two line segments has both line segments on the same side.
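The side test behind the intersection condition (Equation 14) reduces to the sign of a 2D cross product, sketched below; the helper name sideOf is illustrative, and degenerate collinear cases (sign 0) are not handled.

```matlab
function s = sideOf(A, B, P)       % sign of the cross product (B-A) x (P-A)
    v1 = B - A; v2 = P - A;
    s = sign(v1(1)*v2(2) - v1(2)*v2(1));
end
% Segments AB and CD intersect only if C and D lie on opposite sides of AB
% and, symmetrically, A and B lie on opposite sides of CD:
% crosses = sideOf(A,B,C) ~= sideOf(A,B,D) && sideOf(C,D,A) ~= sideOf(C,D,B);
```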


3.7 Curve Fitting

Curves are fitted through the clustered line segments by using the Weighted Least Squares Fit (WLSF). This fitting technique provides smooth polynomial curves for representing the centreline edges and allows priority to be given to the lower part of the curve by assigning higher weights to points in that part. The WLSF takes points as inputs, so points are sampled from the clustered line segments and input into the WLSF. The points are sampled by repeatedly splitting the clustered line segments into two until they are smaller than 1 pixel; when this occurs, the endpoints of the sub-segments are used as the points for the WLSF. The advantages of sampling with this technique are that there are no problems associated with infinite gradients and that divisions by 2 can be translated into logical shifts to the right. This technique is therefore preferred to deriving the line equations. The x and y-coordinates of the sampled points are swapped when input into the WLSF so that the resulting polynomial is a function of the y-coordinate. Fifth-order weights are assigned to the points depending on their vertical position: the lower they are, the higher the weight. Consequently, the weight of a point in row r is r^5. This ensures that the lower part of the curve is characterised well, as this part is of extreme importance since the cross-track and heading errors are derived from it. The curves are fitted as third-order polynomials, which are sufficiently flexible for characterising bends and do not result in the sub-optimal fits that usually arise when fitting high-order polynomials. When the polynomials describing the left and right edges of the centreline have been fitted, the one describing the actual centreline is found by adding corresponding polynomial coefficients and dividing them by two. This gives the polynomial equation of the curve in the middle of those describing the centreline edges.
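A minimal sketch of the weighted cubic fit is shown below, assuming x and y are column vectors of sampled points. The coordinates are swapped (x as a function of y) and fifth-order weights favour points lower in the image, as described above; solving the weighted normal equations directly is an implementation assumption.

```matlab
function c = wlsfCubic(x, y)              % returns c = [c3; c2; c1; c0]
    V = [y.^3, y.^2, y, ones(size(y))];   % cubic design matrix in y
    W = diag(y.^5);                       % weight of a point in row r is r^5
    c = (V' * W * V) \ (V' * W * x);      % weighted least squares solution
end
% The centreline polynomial is the mean of the left and right edge fits:
% cMid = (cLeft + cRight) / 2;
```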

3.8 Measurement of Cross-track and Heading Errors

After obtaining the polynomial equation describing the taxiway centreline, the cross-track and heading errors are measured from it. The cross-track error is the horizontal distance between the bottom of the curve and the middle of the image. It is calculated by substituting variable y with the height of the image (i.e. 480) and subtracting half the image width (i.e. 320) from the result. The resulting equation is Equation 15:

x_t = c_3 h^3 + c_2 h^2 + c_1 h + c_0 - \frac{w}{2}                    (15)

where c_0, c_1, c_2 and c_3 are the polynomial coefficients, h is the height of the image, w is the width of the image and x_t is the cross-track error. The heading error is indicated by the tangent to the bottom of the curve. The slope of the tangent is found by differentiating the curve equation and then substituting variable y with the height of the image. The arctangent function is applied to the resulting slope in order to find the heading error in terms of an angle.


The calculation of the heading error is summarised by Equation 16:

\theta_e = \tan^{-1}(3c_3 h^2 + 2c_2 h + c_1)                    (16)

where c_1, c_2 and c_3 are the polynomial coefficients, h is the height of the image and θ_e is the heading error.
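In code, both measurements follow directly from the fitted coefficients, as in this illustrative snippet (c holds [c3 c2 c1 c0] for the centreline polynomial expressed as x = f(y)):

```matlab
h = 480; w = 640;                    % image height and width
xt     = polyval(c, h) - w/2;        % cross-track error (Equation 15)
slope  = polyval(polyder(c), h);     % dx/dy at the bottom of the curve
thetaE = atan(slope);                % heading error in radians (Equation 16)
```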

3.9 Tracking Filter

The Kalman filter is used to track the cross-track and heading errors in order to filter noise. The Kalman filter reduces noise by comparing two sources of information with each other: in this work, the cross-track and heading errors measured by the algorithm are compared against a mathematical model. The mathematical model of the cross-track error is derived from the SUVAT equations of motion. The mathematical model of the heading error is derived from the equations describing angular motion and results in the same coefficients. The model of the Kalman filter for both cross-track and heading errors is presented in Equations 17 and 18:

x_k = \begin{bmatrix} 1 & t & 0.5t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} x_{k-1} + w_{k-1}                    (17)

z_k = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x_k + v_k                    (18)

where x_k is the prediction value, x_{k-1} is the previous prediction value, w_{k-1} is the process noise, z_k is the measurement value and v_k is the measurement noise. The process noise and measurement noise covariance matrices (Q and R respectively) were chosen by experimentation. These were set as in Equation 19 and Equation 20.

Q = \begin{bmatrix} 0.001 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}                    (19)

R = \begin{bmatrix} 1 \end{bmatrix}                    (20)

3.10 Adapting the Algorithm for Infrared Vision

The components described in this section so far are intended for processing images captured by visible light cameras.


In order to take advantage of infrared technology (considering that infrared cameras penetrate poor visibility and are immune to shadows and variable illumination), the algorithm was adapted for this type of image. When the algorithm was tested on images captured by infrared cameras, artefacts appeared in some scenarios. These artefacts are suppressed by blurring the images so that they are not detected as part of the centreline. Also, since the desired information remained in the same range of grey levels (between 64 and 90) during testing, the contrast enhancement technique was replaced by one that simply stretches the histogram over this range. The contrast enhancement is therefore not affected by pixels that are not of interest. Equation 21 is used for implementing this technique.

g(x,y) = \frac{f(x,y) - f_{min}}{f_{max} - f_{min}} \times 255                    (21)

where f_max denotes the upper boundary of the wanted range (i.e. 90), f_min denotes the lower boundary (i.e. 64), f(x,y) is the original pixel intensity and g(x,y) is the mapped pixel intensity. Light-emitting objects generally have intensities considerably higher than the passive objects making up the rest of the image. This gap in intensities was exploited to suppress edges caused by lights. The lights are identified by binarising the image with a threshold of 128. The binarised image is dilated with a structuring element having the shape of a disc with a radius of 10 pixels. The edges produced by the Sobel edge detector that superimpose the dilated region are removed.
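The infrared pre-processing can be sketched as below: the fixed-range stretch of Equation 21 followed by light suppression, with SL and SR being the left and right edge images from the Sobel stage. The use of the toolbox functions imdilate and strel, and applying the 128 threshold to the stretched image, are assumptions.

```matlab
fmin = 64; fmax = 90;                            % wanted grey-level range
g = (double(img) - fmin) / (fmax - fmin) * 255;  % histogram stretch (Eq. 21)
g = uint8(min(max(g, 0), 255));                  % clamp values outside the range
lights = g > 128;                                % binarise to locate lights
mask = imdilate(lights, strel('disk', 10));      % dilate by a 10-pixel disc
SL(mask) = false;  SR(mask) = false;             % drop edges over the lights
```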

4 Results

As an initial evaluation, the algorithm was tested on videos captured from simulations in X-Plane. A Boeing 737 was taxied around the taxiways in an environment simulating Malta International Airport and the session was recorded. This session was then replayed under different simulated weather conditions (broken, cirrus, clear, foggy, low visibility, overcast, scattered and stormy) and at different times of the day (noon, midnight, 6 am and 6 pm). The recorded videos were each 225 seconds long and had a native resolution of 1280×960 but were saved at a resolution of 640×480. Since the horizon was visible in the captured videos, the area above it was ignored by the algorithm so that it did not interfere with the results. The cross-track and heading errors measured by the algorithm were compared to ones measured manually and the differences were tabulated. These values were measured manually by opening the video frames in image editing software and using an inbuilt ruler to measure the cross-track error and an on-screen protractor to measure the heading error.


Statistical data of the differences observed in 200 successive frames in a simulation involving clear visibility is presented in Table 1. The results indicate that the algorithm managed to attain centimetric accuracy and hence has the desired performance.

Table 1 Statistical data of the discrepancies in cross-track and heading errors measured by the algorithm against ones measured manually when processing the synthetic video simulated in clear weather

                     Discrepancy in cross-track error (mm)   Discrepancy in heading error (°)
Maximum              50.54                                    4.45
Minimum               1.23                                    0.07
Average              32.80                                    1.13
Standard deviation   13.58                                    0.89
95th percentile      49.89                                    2.58

The algorithm was also tested on video captured from simulations generated in the lowest visibility setting available in X-Plane, i.e. 0.10 statute miles. Statistical data of the differences between the cross-track and heading errors measured by the algorithm against ones measured manually in 200 frames is presented in Table 2. The results show that the algorithm works well in poor visibility conditions. It actually performs better than in clear weather, because the coefficients of the algorithm were mostly tuned on tests performed in poor visibility and at night.

Table 2 Statistical data of the discrepancies in cross-track and heading errors measured by the algorithm against ones measured manually when processing the synthetic video simulated in the lowest visibility setting

                     Discrepancy in cross-track error (mm)   Discrepancy in heading error (°)
Maximum              45.97                                    3.85
Minimum               1.14                                    0.01
Average              23.19                                    1.06
Standard deviation   13.26                                    0.82
95th percentile      44.81                                    2.56

The algorithm was also tested on real videos captured at Malta International Airport in field trials using cameras mounted on a van driving around the airfield. These videos were obtained from trials conducted during the EC FP7 ALICIA project. The footage was taken on 13th August 2013 and on 14th August 2013 at night, and was recorded by two types of cameras: visible light and infrared. Two visible light videos were captured by a GoPro Hero 3 camera at a resolution of 1280×720 and at a rate of 25 frames per second; these had durations of 1418 seconds and 1439 seconds. Another two infrared videos were recorded using a FLIR SC7000 thermal IR camera with a resolution of 320×256 recording at a rate of 25 frames per second; these video streams had durations of 893 seconds and 914 seconds. All videos were resized to a resolution of 640×480 before they were input to the algorithm, and the area above the horizon was ignored so that it did not affect the results. When testing the algorithm on the visible light videos, the differences between the cross-track and heading errors measured by the algorithm against ones measured manually were tabulated. Statistical data of the differences in 200 frames is presented in Table 3. The results show that the algorithm still performs with centimetric accuracy, and therefore it performs adequately.

Table 3 Statistical data of the discrepancies in cross-track and heading errors measured by the algorithm against ones measured manually when processing the visible light videos captured at Malta International Airport

                     Discrepancy in cross-track error (mm)   Discrepancy in heading error (°)
Maximum              16.06                                    1.63
Minimum               0.09                                    0.02
Average               4.77                                    0.70
Standard deviation    4.17                                    0.44
95th percentile      14.08                                    1.60

Statistical data of the differences belonging to video sequences in which the centreline is curved is presented in Table 4. The algorithm still performs well, but understandably the accuracy decreases.

Table 4 Statistical data of the discrepancies in cross-track and heading errors measured by the algorithm against ones measured manually when processing video sequences from the visible light video captured at Malta International Airport in which the centreline markings are curved

                     Discrepancy in cross-track error (mm)   Discrepancy in heading error (°)
Maximum              13.48                                    2.50
Minimum               0.41                                    0.04
Average               5.96                                    0.56
Standard deviation    3.68                                    0.48
95th percentile      12.51                                    1.47

Results not presented herein also show that the algorithm only works well when the centreline markings are lighter than the ground. During testing, in some of the frames, the ground was lighter than the centreline and the curves representing the centreline edges were fitted on the edges of other markings. Another problem noticed was that at night, whenever the centreline lights were present, the contrast between the lights and the ground increased while that between the centreline and the ground dropped. As a result, the Sobel edge detector was only detecting the edges of the lights and the curves were fitted on those edges.

To test the algorithm adapted to handle infrared videos, statistical data of the differences in 200 frames was extracted, comparing the cross-track and heading errors estimated by the algorithm against ones measured manually. Results are presented in Table 5 and indicate that the adapted algorithm also performs well, with accuracy well within a few centimetres.

Table 5 Statistical data of the discrepancies in cross-track and heading errors measured by the algorithm against ones measured manually when processing the infrared videos captured at Malta International Airport

                     Discrepancy in cross-track error (mm)   Discrepancy in heading error (°)
Maximum              18.05                                    9.65
Minimum               0.17                                    0.31
Average               6.21                                    3.41
Standard deviation    4.65                                    2.28
95th percentile      16.56                                    7.36

5 Conclusion

In this work, an algorithm that can determine the position of an aircraft with respect to the taxiway centreline has been presented. The algorithm identifies the centreline markings through image processing techniques, models them using curve fitting techniques and then uses this information to measure the cross-track and heading errors. The algorithm was originally developed for processing visible light images and was later adapted to process infrared imagery. Results show that the original algorithm and the adapted one work adequately on straight segments and bends, in low visibility and at night, indicating that the objective of centimetric accuracy may be achieved in real operations. Further work will include improvements to the visible light algorithm so that it can automatically identify situations in which the ground is lighter than the centreline markings and, in these cases, automatically swap the binary images indicating left and right edges. Also, the centreline lights could be suppressed so that they do not affect the algorithm designed for visible light imagery.

References

[1] PlaneCrashInfo.com: Accident statistics, http://www.planecrashinfo.com/cause.htm (accessed August 28, 2014)
[2] Kulesa, G.: Weather and aviation: How does weather affect the safety and operations of airports and aviation, and how does FAA work to manage weather-related effects? In: The Potential Impacts of Climate Change on Transportation (2003)


[3] Clynch, J.R.: GPS Accuracy Levels (2001), http://www.oc.nps.edu/oc2902w/gps/gpsacc.html (accessed October 15, 2014)
[4] Belabbas, B., Hornbostel, A., Sadeque, M.Z.: Error analysis of single frequency GPS measurements and impact on timing and positioning accuracy (2005)
[5] Fox, K.C.: Impacts of Strong Solar Flares (2013), http://www.nasa.gov/mission_pages/sunearth/news/flare-impacts.html (accessed October 15, 2014)
[6] Australian Maritime Safety Authority: Differential Global Positioning System, http://www.amsa.gov.au/navigation/services/dgps/ (accessed October 15, 2014)
[7] Federal Aviation Administration: WAAS - How It Works (2010), http://www.faa.gov/about/office_org/headquarters_offices/ato/service_units/techops/navservices/gnss/waas/howitworks/ (accessed October 15, 2014)
[8] Zammit, C., Zammit-Mangion, D.: An enhanced automatic taxi control algorithm for fixed wing aircraft (2014)
[9] Bertozzi, M., Broggi, A.: GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Trans. Image Process. 7, 62–81 (1998)
[10] Wang, Y., Teoh, E.K., Shen, D.: Lane detection and tracking using B-Snake. Image and Vision Computing 22, 269–280 (2004)
[11] Yu, B., Jain, A.K.: Lane boundary detection using a multiresolution Hough transform. In: Proceedings of the International Conference on Image Processing, pp. 748–751 (1997)
[12] Hota, R.N., Syed, S., Bandyopadhyay, S., Krishna, P.R.: A Simple and Efficient Lane Detection using Clustering and Weighted Regression. In: COMAD (2009)
[13] Gonzalez, R.C., Woods, R.E.: Digital Image Processing, pp. 671–672. Prentice Hall, Upper Saddle River (2007)
[14] Gonzalez, R.C., Woods, R.E.: Digital Image Processing, pp. 755–760. Prentice Hall, Upper Saddle River (2007)
[15] Criminisi, A., Reid, I., Zisserman, A.: A plane measuring device. Image and Vision Computing 17, 625–634 (1999)
[16] Wolfram MathWorld: Least Squares Fitting, http://mathworld.wolfram.com/LeastSquaresFitting.html (accessed October 15, 2014)
[17] Esme, B.: Kalman Filter For Dummies (2009), http://bilgin.esme.org/BitsBytes/KalmanFilterforDummies.aspx (accessed October 15, 2014)
[18] Zuiderveld, K.: Contrast limited adaptive histogram equalization. In: Heckbert, P.S. (ed.) Graphics Gems IV, pp. 474–485. Academic Press Professional, Inc. (1994)
[19] Gonzalez, R.C., Woods, R.E.: Digital Image Processing, p. 189. Prentice Hall, Upper Saddle River (2007)
[20] Gonzalez, R.C., Woods, R.E.: Digital Image Processing, pp. 741–747. Prentice Hall, Upper Saddle River (2007)
[21] Muad, A.M., Hussain, A., Samad, S.A., Mustaffa, M.M., Majlis, B.Y.: Implementation of inverse perspective mapping algorithm for the development of an automatic lane tracking system. In: 2004 IEEE Region 10 Conference, TENCON 2004, pp. 207–210 (2004)
[22] Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, pp. 1015–1019. MIT Press, Cambridge (2009)
