IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 56, NO. 4, JULY 2007
Robust Video/Ultrasonic Fusion-Based Estimation for Automotive Applications

Pubudu N. Pathirana, Allan E. K. Lim, Andrey V. Savkin, and Peter D. Hodgson
Abstract—In this paper, we use recently developed robust estimation ideas to improve object tracking by a stationary or nonstationary camera. Large uncertainties are always present in vision-based systems, particularly in relation to the estimation of the initial state and the measurement of object motion. The robustness of these systems can be significantly improved by employing a robust extended Kalman filter (REKF). System performance can also be enhanced by increasing the spatial diversity of the measurements, i.e., by employing additional cameras for video capture. We compare the performance of various image segmentation techniques for moving-object localization and show that normal-flow-based segmentation yields results comparable to, but requires significantly less time than, optical-flow-based segmentation. We also demonstrate through simulation that dynamic system modeling coupled with the application of an REKF significantly improves the estimation system performance, particularly under large uncertainties.

Index Terms—Collision avoidance, optical flow, robust extended Kalman filter (REKF).
I. INTRODUCTION

THIS PAPER considers the problem of accurately tracking moving objects using a combination of sensors that are mounted on a vehicle undergoing independent motion. Recent developments in the automotive industry and in subsystem technologies have made further improvements in collision-avoidance technology possible. In the past, due to real-time computing limitations, major simplifications of the kinematics model, performance index, and constraints had to be implemented in order to render the solution suitable for mechanization in a real system. With recent technological advances, particularly in computing, these restrictions are no longer necessary. It is now feasible to design tracking strategies that are not only capable of achieving more accurate estimates of position and velocity but are also economical and simple to put into production. Onboard video cameras can be used for collision-avoidance alarm systems. Video imaging is inexpensive in terms of both cost and energy, and is simple to add to onboard electronics. Here, differing sensory mechanisms
Manuscript received December 23, 2005; revised April 4, 2006 and April 11, 2006. This work was supported by GM Holden Ltd., Australia, and the Australian Research Council. The review of this paper was coordinated by Dr. M. Abul Masrur.
P. N. Pathirana, A. E. K. Lim, and P. D. Hodgson are with the School of Engineering and Technology, Deakin University, Geelong, Vic. 3217, Australia.
A. V. Savkin is with the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, N.S.W. 2052, Australia.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TVT.2007.897202
are utilized to improve the overall performance of rapid tracking of moving bodies around a video acquisition camera.

There are two fundamentally distinct approaches to the problem of motion parameter estimation using a video stream. Feature-based techniques rely on the identification and tracking of correspondence points (identified by certain features of the underlying object). Optical-flow-based techniques represent motions in the image plane as sampled velocity fields. These techniques have been used alongside dynamic modeling of the objects involved for parameter estimation. In [1], the object motion was modeled by retaining an arbitrary number of terms in the appropriate Taylor series, while the neglected terms of the series were modeled as process noise. Recursive estimation of structure and motion parameters was performed with an iterative extended Kalman filter (EKF). Roach and Aggarwal [2] introduced and presented numerical solution techniques for a system of nonlinear equations that determined the 3-D motion of an object relative to the camera from a sequence of 2-D images. They, as well as Fang and Huang [3], drew attention to the fact that most existing techniques perform poorly when the images (coordinates of matched points) are noisy. More recently, Blostein et al. [4] demonstrated the feasibility and enhanced performance of a hybrid optical flow/feature point approach, which applied an EKF to a constant-velocity object motion model.

The current work is also motivated by the search for a more realistic kinematics model of multibody dynamics. The aim is to incorporate an image object localization process and enhance the robustness of the overall estimation process by using information fusion ideas to improve system performance. In this application, the primary objective of the video processing stage is iterative target localization, as opposed to object identification.
Therefore, we use optical-flow-based image segmentation, as the feature-based approaches are less appropriate for this particular objective.

In this paper, a possibly moving object is tracked by an onboard video camera mounted on a car. Furthermore, we assume that other video cameras mounted elsewhere on the vehicle can also track the object and send their measurements to a fusion-based form of the estimator. Our problem is then to estimate the coordinates and velocity of the object from the entire set of available video measurements. Automotive systems are subject to large uncertainties and measurement noise. Such uncertainty specifically exists in vision-based systems with respect to the initial conditions and also in image position localization [3]. In addition, the unpredictable maneuvering of the target can make accurate tracking particularly difficult. For this purpose, we employ the robust
EKF (REKF) technique developed in [5] and [6]. This robust version of the EKF takes into account large uncertainties and errors in the measurements and the system model. We show that the system performance can be significantly improved by incorporating additional video measurements. The performance improvement of inherently uncertain systems is addressed in the following respects.

• A more realistic model based on automobile-object kinematics is used, as opposed to stochastic assumptions or series-approximation-based models.
• The robustness of the approach is guaranteed without making any assumptions about the uncertain object maneuver.
• The superiority of the proposed robust estimator over the EKF is demonstrated.
• Improvement due to information fusion from video sensors and an increased number of sensors is demonstrated.

First, we introduce the 2-D car/mobile-object kinematics in Section II. We then present the video-based measurements, taking into account translational dynamics, in Section III. To demonstrate the advantages of using information fusion ideas in video-based object estimation, we present two examples in Section IV: 1) object tracking with a video camera that is mounted on an automobile and 2) object tracking with a stationary video camera. We describe a normal-flow-based segmentation technique to locate the mobile object in a video stream in Section V. The REKF that is used for the nonlinear state estimation problem is presented in Section VI. Finally, in Section VII, we present the simulation results, with brief concluding remarks in Section VIII.

II. AUTOMOBILE/OBJECT KINEMATICS MODEL

In this paper, we use a realistic kinematics model consisting of a car (as a representative automobile) and a second mobile object (which may be another vehicle or a pedestrian). We define the car/object multibody in terms of relative car/object variables (system states) including the car steering commands (control inputs).
As the object maneuverability and behavior are fundamentally unknown, we consider them to form an uncertain input $w$. Using indices $C$ and $O$ to specify car and object, respectively, we denote the position and velocity vectors of the car and the object with respect to earth $E$ as $x_E^{C/O} \in \mathbb{R}^2$ and $v_E^{C/O} \in \mathbb{R}^2$. We define $x_E^{C/O} \in \mathbb{R}^4$ as

$$x_E^{C/O} := \begin{bmatrix} x_E^{C/O} \\ v_E^{C/O} \end{bmatrix} \qquad (1)$$

the car/object absolute position and velocity state vector, and introduce a new state variable (object state relative to the car) as

$$x = x_C^O := x_E^O - x_E^C \in \mathbb{R}^4. \qquad (2)$$
Then, the state equation for the car/object dynamic system is [7]–[9]

$$\dot{x} = Ax + B_1 u + B_2 w \qquad (3)$$
Fig. 1. Reference frames for vehicle navigation.
with system matrices

$$A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}. \qquad (4)$$
Here, $u(t) \in \mathbb{R}^2$ is the control input vector of the car at time $t$, and $w(t) \in \mathbb{R}^2$ is the unknown deterministic object maneuver at time $t$. Notice that the state defined in (2) is with respect to a nonrotating frame positioned on the car. We use $F$ to denote this nonrotating frame moving with the car, with corresponding unit vectors $\{i_e, j_e\}$.
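The double-integrator structure of (3) and (4) is easy to exercise numerically. The sketch below is an illustrative simulation only: the Euler step, the step size, and the input values are our assumptions, not parameters from the paper.

```python
import numpy as np

# System matrices from (4): relative position/velocity double integrator.
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.zeros((2, 2)), np.zeros((2, 2))]])
B1 = np.vstack([np.zeros((2, 2)), -np.eye(2)])  # car control enters with a minus sign
B2 = np.vstack([np.zeros((2, 2)), np.eye(2)])   # uncertain object maneuver input

def step(x, u, w, dt):
    """One Euler step of x_dot = A x + B1 u + B2 w from (3)."""
    return x + dt * (A @ x + B1 @ u + B2 @ w)

# Example: car coasting (u = 0), object accelerating at 0.8 m/s^2 in y.
x = np.array([10.0, 5.0, 0.0, 0.0])  # relative state [p_x, p_y, v_x, v_y]
u = np.zeros(2)
w = np.array([0.0, 0.8])
for _ in range(100):                  # 1 s of simulated time at dt = 0.01
    x = step(x, u, w, dt=0.01)
```

After one simulated second, the relative velocity along y reaches 0.8 m/s, as expected for a constant maneuver input.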
III. MEASUREMENT EQUATION

Unlike in the design of missile control systems, it is sufficient to restrict motion to two dimensions in a typical automotive application. Therefore, we make two reasonable assumptions:

Assumption 1: Neither the car nor the object possesses any vertical motion.

Assumption 2: The car principal axis is along the direction of the car absolute velocity.

Let $F^R$ denote the camera-fixed axis moving with the car, with its y-axis along the principal axis of the car. Its absolute velocity is $v_E^C = [v_1 \ v_2]'$. The unit vectors defined on $F^R$ are $\{i, j\}$. Considering the coordinate transformation $F \Longleftrightarrow F^R$ from Fig. 1, if $v_2 \neq 0$, the following equations can be written:

$$\begin{bmatrix} i \\ j \end{bmatrix} = \Gamma \begin{bmatrix} i_e \\ j_e \end{bmatrix} \qquad (5)$$
where

$$\Gamma = \begin{cases} \dfrac{1}{K}\begin{bmatrix} v_2 & -v_1 \\ v_1 & v_2 \end{bmatrix}, & K \neq 0 \\ I_2, & K = 0 \end{cases} \qquad (6)$$

with $K = \|v_E^C\|$. The position $\tilde{x} \in F$ on the car can be transformed into the relevant position $\tilde{x}' \in F^R$ using the transformation function

$$\tilde{x}' = (\Gamma')^{-1}\tilde{x}, \quad \text{with } |\Gamma| \neq 0. \qquad (7)$$

Taking

$$\tilde{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad (8)$$

according to the principles of geometrical optics [10], the perspective projection concept can be used in the image formation. As shown in Fig. 2, the measurement of $\tilde{x}$, with measurement noise $v$, can be given as

$$\tilde{x}^h = \begin{bmatrix} \dfrac{f x_1}{x_2 - f} \\ \dfrac{f h}{x_2 - f} \end{bmatrix} + v. \qquad (9)$$

This is in the form of

$$y = C(x) + v \qquad (10)$$

where $C(\cdot) = C_1 \circ C_2(\cdot)$, with $C_2(\tilde{x}) = \Gamma^{-1}\tilde{x}$ and $C_1(\tilde{x}) = [f x_1/(x_2 - f) \quad f h/(x_2 - f)]'$.

Fig. 2. Perspective projection of the object onto the car video image frame.

IV. SENSOR FUSION APPLIED TO ROBUST ESTIMATION

Information fusion ideas can successfully be used to significantly improve the performance of vision- or radar-based guidance strategies. Vision information is provided by onboard cameras on the car. Ground-based or onboard ultrasound measurements can provide location estimates of the objects around the car. The state estimation of system (3), together with the measurement equation (10), corresponds to video measurements or to video and ultrasound measurements. Increasing the number of independent measurements by using additional ultrasonic sensors, multiple video cameras, or a combination of both sensory technologies will improve the state estimation and overall system performance.

A. Fusion Based on Multiple Cameras

We employ two cameras to improve the estimator performance. Here, $C(\cdot)$ takes the form

$$C(x) = \begin{bmatrix} \dfrac{f x_2}{x_1 - f} \\ \dfrac{f h}{x_1 - f} \\ \dfrac{f (x_2 - d)}{x_1 - f + c} \\ \dfrac{f h}{x_1 - f + c} \end{bmatrix} \qquad (11)$$

where the lenses of the two cameras are located at $(f, 0) \in F^R$ and $(f - c, d) \in F^R$, respectively.

B. Ultrasound-Assisted Video-Based Estimation

Ultrasound-based distance measurements are used in conjunction with video measurements to estimate the location of a nearby vehicle or pedestrian. In this case, the $C(\cdot)$ in (10) is of the form

$$C(x) = \begin{bmatrix} \sqrt{x_1^2 + x_2^2} \\ \dfrac{f x_2}{x_1 - f} \\ \dfrac{f h}{x_1 - f} \end{bmatrix} \ \text{(ultrasound and video, one camera)}, \quad C(x) = \begin{bmatrix} \sqrt{x_1^2 + x_2^2} \\ \dfrac{f x_2}{x_1 - f} \\ \dfrac{f h}{x_1 - f} \\ \dfrac{f (x_2 - d)}{x_1 - f + c} \\ \dfrac{f h}{x_1 - f + c} \end{bmatrix} \ \text{(ultrasound and video, two cameras)}. \qquad (12)$$
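The stacked measurement maps (11) and (12) translate directly into code. In the sketch below, the focal length f, camera height h, and lens offsets c and d are illustrative placeholder values, not the paper's simulation parameters.

```python
import numpy as np

f, h = 0.05, 1.2   # focal length and camera height (illustrative values)
c, d = 0.02, 0.5   # second-camera lens offsets, as in (11) (illustrative)

def C_two_cameras(x1, x2):
    """Stacked two-camera measurement map of (11)."""
    return np.array([f * x2 / (x1 - f),
                     f * h / (x1 - f),
                     f * (x2 - d) / (x1 - f + c),
                     f * h / (x1 - f + c)])

def C_ultrasound_video(x1, x2, cameras=1):
    """Ultrasound range stacked on top of video measurements, as in (12)."""
    rng = np.hypot(x1, x2)            # ultrasound distance sqrt(x1^2 + x2^2)
    cam = C_two_cameras(x1, x2)
    return np.concatenate([[rng], cam[:2] if cameras == 1 else cam])

y = C_ultrasound_video(4.0, 3.0, cameras=2)  # five stacked measurements
```

Stacking the sensors this way means that adding a camera or an ultrasonic sensor only lengthens the output of C(·); the estimator structure itself is unchanged.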
V. OBJECT LOCALIZATION

Optical flow is the apparent motion of the brightness/intensity patterns observed when the camera is moving relative to the objects being imaged [10], [11]. Let $I(x_1, x_2, t)$ denote the image intensity function at time $t$ at image point $(x_1, x_2)$. Assuming that the overall intensity of the image is time independent, the well-known optical flow constraint equation can be written as

$$\frac{\partial I}{\partial x_1}\frac{dx_1}{dt} + \frac{\partial I}{\partial x_2}\frac{dx_2}{dt} + \frac{\partial I}{\partial t} = 0. \qquad (13)$$

This problem is ill posed, and the optical flow components $(dx_1/dt, dx_2/dt)$ cannot be computed locally without introducing additional constraints [12]. Local and global techniques to calculate the optical flow are presented in [12] and [13], respectively. The normal flow, which is the component of the optical flow in the direction of the brightness gradient, is

$$v_N = \frac{-\dfrac{\partial I}{\partial t}}{\sqrt{\left(\dfrac{\partial I}{\partial x_1}\right)^2 + \left(\dfrac{\partial I}{\partial x_2}\right)^2}}. \qquad (14)$$
It has been shown in [14] that the normal component of the motion field, which is the projection of the 3-D motion onto the image plane, is close to the normal flow field provided that the intensity gradient is sufficiently high. Under these conditions, the normal flow field can be used to infer 3-D motion [15]. Unlike the optical flow field, the normal flow field can be directly estimated without any iterative schemes used by regularization methods [12]. Therefore, in a practical system, where computational efficiency is crucial, we propose that moving object localization be performed using normal flow instead of optical flow, which is more demanding to calculate [12], [13], [16]. The advantage of this approach is illustrated by a direct comparison of average execution times for object localization based on normal flow, the Lucas–Kanade method for estimating optical flow [13], as well as various well-known single-frame image processing techniques.
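The normal flow of (14) can be evaluated pointwise from finite differences of two consecutive frames, with no iteration. The discretization below (numpy's default gradient stencil, unit frame interval, a small regularizer for flat regions) is our assumption; the paper does not specify one.

```python
import numpy as np

def normal_flow(I0, I1, eps=1e-9):
    """Normal flow v_N = -I_t / |grad I|, per (14), from two grayscale frames."""
    It = I1 - I0                    # temporal derivative over a unit frame interval
    Ix1, Ix2 = np.gradient(I0)      # spatial intensity gradients
    mag = np.sqrt(Ix1**2 + Ix2**2)
    return -It / (mag + eps)        # eps guards zero-gradient (flat) regions

# Synthetic check: an intensity ramp brightened as if shifted by one pixel.
cols = np.arange(32, dtype=float)
I0 = np.tile(cols, (32, 1))
I1 = np.tile(cols + 1.0, (32, 1))
vN = normal_flow(I0, I1)            # approximately -1 everywhere on the ramp
```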
VI. SET-VALUE STATE ESTIMATION WITH A NONLINEAR SIGNAL MODEL

We consider a nonlinear uncertain system of the form

$$\dot{x} = A(x, u) + B_2 w, \quad y = C(x) + v, \quad z = K(x, u) \qquad (15)$$

defined on the finite time interval $[0, s]$ as a general form of the system given by (3). Here, $x(t) \in \mathbb{R}^n$ denotes the state of the system, $y(t) \in \mathbb{R}^l$ is the measured output, and $z(t) \in \mathbb{R}^q$ is the uncertainty output. The uncertainty inputs are $w(t) \in \mathbb{R}^p$ and $v(t) \in \mathbb{R}^l$. In addition, $u(t) \in \mathbb{R}^m$ is the known control input. We assume that all of the functions appearing in (15) possess continuous and bounded partial derivatives. Additionally, we assume that $K(x, u)$ is bounded. This assumption simplifies the mathematical derivations but can be removed in practice [6]. Matrix $B_2$ is assumed to be independent of $x$ and of full rank. The uncertainty in the system is defined by the following nonlinear integral constraint [5], [6], [17], [18]:

$$\Phi(x(0)) + \int_0^s L_1(w(t), v(t))\,dt \leq d + \int_0^s L_2(z(t))\,dt \qquad (16)$$

where $d \geq 0$ is a given constant. Here, $\Phi$, $L_1$, and $L_2$ are bounded nonnegative functions with continuous partial derivatives satisfying growth conditions of the type

$$\|\phi(x) - \phi(x')\| \leq \beta(1 + \|x\| + \|x'\|)\|x - x'\| \qquad (17)$$

where $\|\cdot\|$ is the Euclidean norm, $\beta > 0$, and $\phi = \Phi, L_1, L_2$. Uncertainty inputs $w(\cdot), v(\cdot)$ satisfying this condition are called admissible uncertainties. We consider the problem of characterizing the set of all possible states $X_s$ of system (15) at time $s \geq 0$ that are consistent with a given control input $u^0(\cdot)$ and a given output path $y^0(\cdot)$; i.e., $x \in X_s$ if and only if there exist admissible uncertainties such that, if $u^0(t)$ is the control input and $x(\cdot)$ and $y(\cdot)$ are the resulting trajectories, then $x(s) = x$ and $y(t) = y^0(t)$ for all $0 \leq t \leq s$.

A. REKF

Petersen and Savkin [6] presented a characterization of the set $X_s$ as an EKF version of the solution to the set-value state estimation problem for a linear plant with the uncertainty described by an integral quadratic constraint (IQC). This IQC is also presented as a special case of (16). We consider the uncertain system described by (15) and an IQC of the form

$$\frac{1}{2}(x(0) - x_0)' X_0 (x(0) - x_0) + \frac{1}{2}\int_0^s \left( w(t)' Q(t) w(t) + v(t)' R(t) v(t) \right) dt \leq d + \frac{1}{2}\int_0^s z(t)' z(t)\,dt \qquad (18)$$

where $N > 0$, $Q > 0$, and $R > 0$. For the system (15), (18), the REKF generalization presented in [6] can be written as

$$\dot{\tilde{x}}(t) = A(\tilde{x}(t), u^0) + X^{-1}\left[ \nabla_x C(\tilde{x}(t))' R \left( y^0 - C(\tilde{x}(t)) \right) + \nabla_x K(\tilde{x}(t), u^0)' K(\tilde{x}(t), u^0) \right], \quad \tilde{x}(0) = x_0. \qquad (19)$$

$X(t)$ is defined as the solution to the Riccati differential equation

$$\dot{X} + \nabla_x A(\tilde{x}, u^0)' X + X \nabla_x A(\tilde{x}, u^0) + X B_2 Q^{-1} B_2' X - \nabla_x C(\tilde{x})' R \nabla_x C(\tilde{x}) + \nabla_x K(\tilde{x}, u^0)' \nabla_x K(\tilde{x}, u^0) = 0, \quad X(0) = N. \qquad (20)$$

An approximate formula for the set $X_s$ is given by

$$\tilde{X}_s = \left\{ x \in \mathbb{R}^n : \frac{1}{2}\left(x - \tilde{x}(s)\right)' X(s) \left(x - \tilde{x}(s)\right) \leq d - \phi(s) \right\}$$

where

$$\phi(t) = \frac{1}{2}\int_0^t \left[ \left( y^0 - C(\tilde{x}) \right)' R \left( y^0 - C(\tilde{x}) \right) - K(\tilde{x}, u^0)' K(\tilde{x}, u^0) \right] d\tau. \qquad (21)$$
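The coupled recursion (19) and (20) can be integrated forward with simple Euler steps. The sketch below is a generic illustration under two simplifying assumptions stated here explicitly: the uncertainty output K(x, u) of (15) is taken as zero, and the dynamics are the linear car/object model (3), so the Jacobian of A(x, u) is the constant matrix A. The function names, weights, and step size are ours, not the authors' implementation.

```python
import numpy as np

def rekf_step(x_hat, X, y, u, dt, A, B1, B2, C, C_jac, Q, R):
    """One Euler step of the REKF recursion (19)-(20), with K(x, u) = 0."""
    Cx = C_jac(x_hat)
    innov = y - C(x_hat)
    # State equation (19): model prediction plus X^{-1}-weighted robust correction.
    x_dot = A @ x_hat + B1 @ u + np.linalg.solve(X, Cx.T @ R @ innov)
    # Riccati equation (20), rearranged for X_dot.
    X_dot = (-A.T @ X - X @ A
             - X @ B2 @ np.linalg.solve(Q, B2.T) @ X
             + Cx.T @ R @ Cx)
    return x_hat + dt * x_dot, X + dt * X_dot

# Illustrative use with the matrices of (4) and a linear position measurement.
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.zeros((2, 2)), np.zeros((2, 2))]])
B1 = np.vstack([np.zeros((2, 2)), -np.eye(2)])
B2 = np.vstack([np.zeros((2, 2)), np.eye(2)])
H = np.hstack([np.eye(2), np.zeros((2, 2))])  # hypothetical linear measurement map
x_hat, X = np.zeros(4), np.eye(4)             # X(0) = N = I (illustrative weights)
x_hat, X = rekf_step(x_hat, X, y=np.array([1.0, 0.5]), u=np.zeros(2), dt=0.01,
                     A=A, B1=B1, B2=B2, C=lambda x: H @ x, C_jac=lambda x: H,
                     Q=np.eye(2), R=np.eye(2))
```

Note that X enters the state correction through its inverse, so X must remain nonsingular along the trajectory; in practice the Riccati step would be monitored for this.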
In the application of the REKF to video-based tracking, the car/object system is represented during the time interval by the uncertain system (3) and the IQC given by (18), where $Q > 0$, $R > 0$, and $N > 0$ are weighting matrices (of appropriate dimensions) for the system in each case. The initial state $x_0$ is the estimated state of the respective system at the initial time. With an uncertainty relationship of the form of (18), the inherent measurement noise (10), the unknown object maneuver/driving command, and the uncertainty in the initial condition are considered as bounded deterministic uncertain
inputs.

TABLE I
SIMULATION PARAMETERS

In particular, the measurement equation (10) with the standard norm-bounded uncertainty can be written as

$$y = C(x) + \delta C(x) + v_0 \qquad (22)$$
where $|\delta| \leq \xi$, with $\xi$ a constant indicating the upper bound of the norm-bounded portion of the noise. By choosing $z = \xi C(x)$ and $\nu = \delta C(x)$,

$$\int_0^T \nu' \nu\,dt \leq \int_0^T z' z\,dt. \qquad (23)$$

Considering $v_0$, and the corresponding uncertainty in $w$ as $w_0$, satisfying a bound of the form

$$\Phi(x(0)) + \int_0^T \left[ w_0' Q w_0 + v_0' R v_0 \right] dt \leq d \qquad (24)$$
it is clear that this uncertain system leads to satisfaction of the inequality in (16), and hence the constraint in (18) is satisfied (see [6]). This more realistic approach removes any noise-model assumptions from the algorithm development and guarantees the robustness of the solution.

B. Robust Versus Optimal State Estimation

The REKF aims to increase the robustness of the state estimation process and to reduce the chance that a small deviation from the assumed Gaussian process noise causes a significant negative impact on the solution. However, we lose optimality: the solution is only suboptimal. To explain the connection between the REKF and the standard EKF, consider system (15) with

$$K(x, u) = \nu K_0(x, u) \qquad (25)$$
where $K_0(x, u)$ is some bounded function, and $\nu > 0$ is a parameter. Then, the REKF estimate $\tilde{x}(t)$ for the system (15), (18), (25) defined by (19) and (20) converges to $\tilde{x}^0(t)$ as $\nu$ tends to 0. Here, $\tilde{x}^0(t)$ is the extended Kalman state estimate for system (15) with Gaussian noise $[w(t)'\ v(t)']'$ satisfying

$$E\left\{ \begin{bmatrix} w(t) \\ v(t) \end{bmatrix} \begin{bmatrix} w(t)' & v(t)' \end{bmatrix} \right\} = \begin{bmatrix} Q(t) & 0 \\ 0 & R(t) \end{bmatrix}$$
Fig. 3. Motion of the car/object system in 2-D.
see, e.g., [19]. The parameter $\nu$ in (25) describes the size of the uncertainty in the system and measurement noise. For small $\nu$, our robust state estimate becomes close to the Kalman state estimate with Gaussian noise; for larger $\nu$, we achieve more robustness but less optimality. Hence, there is always some tradeoff between robustness and optimality. We show below, via computer simulations, that in the case of larger uncertainty (which is quite realistic), our robust filter still performs better.

VII. SIMULATION

We simulate the dynamic system of a car and a possibly mobile object in order to estimate the position and velocity of the object for the purpose of implementing a safety system for the car. We use a sensor-fusion-based approach to improve the overall estimator performance using ultrasound and video sensing in conjunction. The simulation parameters used are given in Table I. The car and the object are assumed to have accelerations along the Y-direction of 0.5 and 0.8 m/s², respectively. Fig. 3 shows the 2-D motions of the car and the object. Fig. 4 shows that increasing the number of cameras from one to two decreases the error incurred in the object position estimation, while Figs. 5 and 6 show the
Fig. 4. Comparison of mean squared errors (MSEs) in position estimation resulting from the use of one and two cameras.
Fig. 6. Comparison of MSEs in velocity estimation resulting from combining ultrasound sensing with video measurements.
Fig. 5. Comparison of MSEs in position estimation resulting from combining ultrasound sensing with video measurements.
Fig. 7. Comparison of MSEs in position estimation resulting from the use of EKF and REKF.
improvement due to augmenting video sensing with ultrasound measurements. Robustness is crucial in vision-based systems due to the inherent and relatively significant initial-state error. Thus, to demonstrate the superiority of our proposed REKF, we simulate the vision-based dynamic system using both the EKF and the REKF. Fig. 7 shows the performance improvement obtained by using the proposed REKF as opposed to the EKF.

We have also carried out simulations for object localization in the video stream. In a real-time application, both accuracy and speed are important. We compare the effectiveness of object localization using static image processing techniques and optical-flow-based techniques. Figs. 8–10 show the optical-flow-based localization of a moving cyclist, car, and person, respectively, whereas Figs. 11–13 show the results of normal-flow-based localization applied to the same three pairs of frames. For both the optical- and normal-flow techniques, the flow was calculated for each pair of consecutive frames and then low-pass filtered using a filter with a passband below 0.2 of the normalized frequency. Target localization was achieved by drawing a contour at 0.2 of the maximum flow intensity, and in each case, the centroid of the points exceeding this threshold was marked (using a square) on the image.

The pair of cyclist frames (240 pixels × 320 pixels) used to generate Fig. 11 was also used to compare the execution times of various image-segmentation techniques. Timing tests were performed on a workstation with a single Pentium 4 3.00-GHz central processing unit and 504 MB of random access memory. The reported execution times (Table II) are averages over 100 runs. It is noteworthy that normal-flow-based segmentation is capable of producing results very similar to those from full optical-flow-based segmentation but is significantly less computationally demanding. We point out that the single-frame analyses detect all edges in the frame and are unable to identify objects in relative motion with the camera. The noise in the images is assumed to be a bounded function of time and space; hence, the estimated target locations are subject to bounded errors in time. These robustness assumptions
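The threshold-and-centroid step described above (a contour at 0.2 of the maximum flow magnitude, then the centroid of the points above it) can be sketched directly. The 0.2 factor is the paper's; the synthetic flow map and array sizes are illustrative, and the low-pass filtering stage is omitted here.

```python
import numpy as np

def localize(flow_mag, frac=0.2):
    """Centroid of pixels whose flow magnitude exceeds frac * max."""
    mask = flow_mag >= frac * flow_mag.max()
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()   # the centroid marked with a square

# Synthetic flow-magnitude map: a strong 10x10 patch on a quiet background.
flow = np.full((240, 320), 0.01)
flow[100:110, 200:210] = 1.0
r, c = localize(flow)                 # lands in the middle of the patch
```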
Fig. 8. Optical-flow-based localization of a cyclist in a video stream. (a) Optical flow (cyclist). (b) Threshold localization.
Fig. 9. Optical-flow-based localization of a car in a video stream. (a) Optical flow (car). (b) Threshold localization.
Fig. 10. Optical-flow-based localization of a walking person in a video stream. (a) Optical flow (person). (b) Threshold localization.
are in line with the REKF assumptions presented in Section VI.

VIII. CONCLUSION

The kinematics between a car and a pedestrian or another vehicle has been considered for the development of a safety system for collision avoidance in highway conditions. First,
we investigated the possibility of using measurements from an onboard video camera for the estimation of position and velocity of another possibly mobile object. We used the REKF to solve the nonlinear state estimation problem emerging from the geometric perspective projection of the object onto the image plane of the video camera that is mounted on the car. We have demonstrated the superior performance of the REKF compared to that of the EKF with respect to vision-based
Fig. 11. Normal-flow-based localization of a cyclist in a video stream.

Fig. 12. Normal-flow-based localization of a car in a video stream.

Fig. 13. Normal-flow-based localization of a walking person in a video stream.

TABLE II
COMPARISON OF AVERAGE EXECUTION TIMES FOR VARIOUS VIDEO FRAME ANALYSIS TECHNIQUES

systems, which are inherently susceptible to large uncertainties (mainly in the initial conditions). We have also demonstrated the advantage of combining ultrasound measurements with video sensing in vision-based systems. The success of this sensor fusion approach implies that the possibility of multiple cars fitted with this technology sharing information with each other in a highway situation may be a fruitful line of investigation in the future. We compared various target localization techniques in a video stream. Normal-flow-based segmentation was found to yield results comparable to optical-flow-based segmentation. The former technique is therefore more suitable for real-time implementations, as it significantly reduces the computational complexity of the problem while effectively localizing the target image in the video stream.

REFERENCES
[1] T. J. Broida, S. Chandrashekhar, and R. Chellappa, “Recursive 3-D motion estimation from a monocular image sequence,” IEEE Trans. Aerosp. Electron. Syst., vol. 26, no. 4, pp. 639–655, Jul. 1990.
[2] J. W. Roach and J. K. Aggarwal, “Determining the movement of objects from a sequence of images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, no. 6, pp. 554–562, Nov. 1980.
[3] J. Q. Fang and T. S. Huang, “Some experiments on estimating the 3-D motion parameters of a rigid body from two consecutive image frames,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-6, no. 5, pp. 545–554, Sep. 1984.
[4] S. D. Blostein, L. Zhao, and R. M. Chann, “Three-dimensional trajectory estimation from image position and velocity,” IEEE Trans. Aerosp. Electron. Syst., vol. 36, no. 4, pp. 1075–1088, Oct. 2000.
[5] A. V. Savkin and I. R. Petersen, “Recursive state estimation for uncertain systems with an integral quadratic constraint,” IEEE Trans. Autom. Control, vol. 40, no. 6, pp. 1080–1083, Jun. 1995.
[6] I. R. Petersen and A. V. Savkin, Robust Kalman Filtering for Signals and Systems With Large Uncertainties. Boston, MA: Birkhäuser, 1999.
[7] A. V. Savkin, P. N. Pathirana, and F. A. Faruqi, “Problem of precision missile guidance: LQR and H∞ control frameworks,” IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 3, pp. 901–910, Jul. 2003.
[8] P. N. Pathirana, A. V. Savkin, and S. Jha, “Location estimation and trajectory prediction for cellular networks with mobile base stations,” IEEE Trans. Veh. Technol., vol. 53, no. 6, pp. 1903–1913, Nov. 2004.
[9] P. N. Pathirana, A. V. Savkin, S. Jha, and N. Bulusu, “Node localization using mobile robots in disconnected sensor networks,” IEEE Trans. Mobile Comput., vol. 4, no. 3, pp. 285–296, May/Jun. 2005.
[10] T. A. Murat, Digital Video Processing. Upper Saddle River, NJ: Prentice-Hall, 1995.
[11] B. K. P. Horn, Robot Vision. Cambridge, MA: MIT Press, 1986.
[12] B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell., vol. 17, no. 1–3, pp. 185–203, Aug. 1981.
[13] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. DARPA Image Understanding Workshop, 1981, pp. 121–130.
[14] A. Verri and T. Poggio, “Against quantitative optical flow,” in Proc. 1st IEEE Int. Conf. Comput. Vis., London, U.K., Jun. 1987, pp. 171–180.
[15] C. Fermuller and Y. Aloimonos, “Perceptual computational advantages of tracking,” in Proc. 11th IAPR Int. Conf. Pattern Recog., The Hague, The Netherlands, Aug. 1992, pp. 599–602.
[16] J. Weber and J. Malik, “Robust computation of optical flow in a multi-scale differential framework,” Int. J. Comput. Vis., vol. 14, no. 1, pp. 67–81, Jan. 1995.
[17] A. V. Savkin and I. R. Petersen, “Nonlinear versus linear control in the absolute stabilizability of uncertain systems with structured uncertainty,” IEEE Trans. Autom. Control, vol. 40, no. 1, pp. 122–127, Jan. 1995.
[18] A. V. Savkin and R. J. Evans, Hybrid Dynamical Systems: Controller and Sensor Switching Problems. Boston, MA: Birkhäuser, 2002.
[19] B. D. O. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
Pubudu N. Pathirana was born in Matara, Sri Lanka, in 1970. He was educated at Royal College, Colombo, Sri Lanka. He received the B.E. degree (first-class honors) in electrical engineering, the B.Sc. degree in mathematics, and the Ph.D. degree in electrical engineering in 1996, 1996, and 2000, respectively, all from the University of Western Australia, Perth, Australia, sponsored by the government of Australia on EMSS and IPRS scholarships.
From 1997 to 1998, he was a Research Engineer in industry in Singapore and Sri Lanka. In 2001, he was a Postdoctoral Research Fellow at Oxford University, Oxford, U.K., where he worked on a Ford Motor Co. (USA) project. He was subsequently a Research Fellow in the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia, and, in 2002, a Consultant to the Defence Science and Technology Organisation, Australia. He is currently with the School of Engineering and Technology, Deakin University, Geelong, Australia. His current research interests include missile guidance, autonomous systems, target tracking, computer-integrated manufacturing, vision-based navigation systems, quality-of-service management, and mobile/wireless Internet.
Allan E. K. Lim received the B.Sc. degree (firstclass honors) and the Ph.D. degree from Deakin University, Geelong, Australia. He is currently a Research Fellow in the School of Engineering and Technology, Deakin University. His research interests include graph theory, sensor networks, computer vision, modeling, and control.
Andrey V. Savkin was born in Norilsk, Russia, in 1965. He received the M.S. and Ph.D. degrees from Leningrad University, St. Petersburg, Russia, in 1987 and 1991, respectively.
From 1987 to 1992, he was with the All-Union Television Research Institute, St. Petersburg. From 1992 to 1994, he held a postdoctoral position in the Department of Electrical Engineering, Australian Defence Force Academy, Canberra, Australia. From 1994 to 1996, he was a Research Fellow in the Department of Electrical and Electronic Engineering and the Cooperative Research Center for Sensor Signal and Information Processing, University of Melbourne, Melbourne, Australia. From 1996 to 2000, he was a Senior Lecturer and then an Associate Professor in the Department of Electrical and Electronic Engineering, University of Western Australia, Perth, Australia. Since 2000, he has been a Professor in the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia. His current research interests include robust control and filtering, hybrid dynamical systems, missile guidance, networked control systems, computer-integrated manufacturing, control of mobile robots, computer vision, and the application of control and signal processing to biomedical engineering and medicine. He has published four books and numerous journal and conference papers on these topics.
Prof. Savkin has served as an Associate Editor for several international journals and conferences.
Peter D. Hodgson received the B.Eng. (Hons.) degree from Monash University, Victoria, Australia, in 1980 and the Ph.D. degree from the University of Queensland, Brisbane, Australia, in 1994.
He joined Deakin University, Geelong, Australia, in 1996, after 16 years at BHP Research. He has been the Head of the School of Engineering and Technology and Associate Dean (Research) for the Faculty of Science of Deakin University. Since joining Deakin University, he has published more than 230 papers, with his total number of publications, including company reports and patents, exceeding 300. Much of his research relates to the steel and automotive industries, with a focus on materials development and the application of advanced modeling techniques.
Prof. Hodgson is a Fellow of the IEAust and a member of the editorial and advisory boards of a number of journals. In 2004, he was conferred as one of the inaugural Alfred Deakin Professors at Deakin University for outstanding research and contributions to the University and was awarded an Australian Research Council Federation Fellowship.