Sensors 2018, 18, 3344; doi:10.3390/s18103344

Article

IMU-Based Virtual Road Profile Sensor for Vehicle Localization

Juhui Gim and Changsun Ahn *

School of Mechanical Engineering, Pusan National University, Busan 46241, Korea; [email protected]
* Correspondence: [email protected]; Tel.: +82-51-510-2979

Received: 4 September 2018; Accepted: 1 October 2018; Published: 7 October 2018
Abstract: A road profile can be a good reference feature for vehicle localization when a Global Positioning System signal is unavailable. However, cost-effective and compact devices for measuring road profiles are not available for production vehicles. This paper presents a longitudinal road profile estimation method that serves as a virtual sensor for vehicle localization without using bulky and expensive sensor systems. An inertial measurement unit installed in the vehicle provides filtered signals of the vehicle's responses to the longitudinal road profile. A disturbance observer was designed to extract the characteristic features of the road profile from the signals measured by the inertial measurement unit. Design synthesis based on a Kalman filter was used for the observer design. A nonlinear damper is explicitly considered to improve the estimation accuracy, and virtual measurement signals are introduced to guarantee observability. The suggested methodology estimates the road profile with sufficient accuracy for localization. Based on the estimated longitudinal road profile, we generated spectrogram plots as the features for localization. The localization is realized by matching the spectrogram plot with pre-indexed plots. The localization using the estimated road profile shows an accuracy of a few meters, suggesting that the proposed road profile estimation can serve as an alternative sensor for vehicle localization.

Keywords: road profile estimation; unknown input Kalman filter; virtual measurement; inertial sensor signals; localization
1. Introduction

Autonomous ground vehicles recognize the surrounding environment and autonomously generate driving commands. The possible benefits of autonomous vehicles in traffic are the reduction of traffic accidents, increased convenience for drivers, optimized traffic flow, and expeditious driving [1–3]. To enable autonomous driving, various techniques have been developed in the fields of advanced vehicle safety systems and intelligent transportation systems [4,5]. Among them, vehicle localization is one of the most important techniques for autonomous driving.

A typical method for vehicle localization involves using Global Positioning System (GPS) signals. GPS provides absolute position information with reliability within a few meters at a low cost. However, GPS requires a clear line of sight to satellites because GPS signals are vulnerable to visual obstacles and signal reflections [6,7]. Furthermore, the position accuracy of single-frequency GPS receivers is low (up to tens of meters) [8]. By using two-frequency GPS receivers, the accuracy can be improved to within meters, but the cost increases significantly [9–12]. GPS and other methods are therefore fused to obtain accurate position information at an affordable price for production vehicles. For example, GPS is integrated with cameras [13], laser scanners [14], or inertial measurement units (IMUs) [15,16]. When GPS is available, additional sensors can be used to compensate for the GPS error [17,18], which is the main purpose of sensor fusion. If GPS is unavailable, the additional sensors are used as back-up sensors. Some back-up sensors, such as cameras and laser scanners, can act as visual odometers, but they are vulnerable to weather and lighting conditions. When
an IMU is used as a back-up sensor, dead reckoning is mainly used [19,20]. However, dead reckoning is only valid for a short time horizon because of error accumulation. Nevertheless, an IMU has many advantages over other sensors. First, IMU signals are more robust against external interruptions than GPS and vision sensors because they are directly measured from the signal sources, whereas GPS or vision signals are measured through non-contact methods. Therefore, IMU signals are less affected by visual obstacles or weather conditions and well reflect the response of the vehicle to excitations from the surrounding environment. Secondly, an IMU is cheap and readily mounted on most passenger vehicles for standard safety features such as electronic stability control. Therefore, IMUs are available at a very low cost. If we can reduce the effect of error accumulation, an IMU can be an attractive back-up sensor for a localization system using GPS or a vision system when the corresponding signals are interrupted or jammed. Because of these advantages, we focus on an alternative IMU-based localization method that is not based on dead reckoning.

Our approach is based on the fact that the road surface at a position has a characteristic longitudinal profile that is distinct from those of other positions. A heuristic analogy is guessing one's own location by feeling the vibrations and motions of a vehicle when on a familiar road. Some researchers have presented IMU-based localization methods that use pitch rate signals [21,22]. The shapes of measured pitch rate signals from a ground vehicle are compared with the shapes of pre-indexed pitch rate signals to identify the position. This method is based on different road profiles generating different vehicle motions. The method works without error accumulation, but the localization accuracy is affected by measurement noise because the pre-indexed signals and measured signals are compared in the time domain. Furthermore, vehicle dynamics signals, such as the acceleration and angular velocities, are dependent on the vehicle speed, which also affects the estimation accuracy.

The proposed approach is error-accumulation-free IMU-based localization that is less dependent on vehicle speed variations and robust to signal noise. The basic concept of our approach is comparing the profile of the current position with a pre-indexed road profile database, which does not require an additional GPS signal. This method may work as a redundant sensor in the localization process when used in conjunction with GPS, or as a backup sensor when GPS is unavailable, both of which increase the reliability of the navigation system. The suggested localization method consists of two main parts. One is estimating the longitudinal road profile of the current location, and the other is feature matching between the estimated profile and the profiles in the database in the spectrogram framework. The dependency on the vehicle speed can be removed by using the estimated road profile, because the longitudinal road profile is time-independent information. We estimate the road profiles with measured IMU signals because a cost-effective device for direct measurement of a road profile is unavailable. The estimation is based on a disturbance observer. However, estimating disturbance signals using only a body-attached IMU is impossible due to the lack of observability.
Because of the observability issue, some researchers use additional sensors to estimate a longitudinal road profile without concerns about observability. Examples of such sensors are the suspension stroke sensor [23–26] and the velocity or acceleration sensors for the un-sprung mass [26–28], but this approach is not favored for production vehicles. Another approach found in the literature is to assume that the sprung mass position is the same as the road profile [29,30], but this assumption is not valid for general production vehicles. Our remedy is to introduce a virtual measurement that is based on a less stringent assumption than the assumption on the sprung mass position. The virtual measurement is an assumed value of the averaged road heights. The assumption is based on the hypothesis that road surface variations with wavelengths longer than the vehicle wheelbase, such as the averaged road heights, are negligible. Therefore, the virtual measurement is a low-pass-filtered road profile and is set to 0. By introducing the virtual measurement, the observability conditions are satisfied without using additional sensors or an overly stringent assumption.

The second part of the localization is to match the estimated road profile against the pre-indexed profiles in the database. The immunity to noise is improved by a feature extraction
method. We extract the features of a road profile in the frequency-distance domain. Characteristics are more clearly presented in the frequency domain than in the time domain, so we adopt a spectrogram technique for feature extraction [31]. A spectrogram is a two-dimensional plot in the time-frequency domain, so the characteristics of signals are displayed with richer information. Another benefit of a spectrogram is the easy rejection of measurement noise. In addition, by using the spectrogram, we can use image processing techniques for feature matching because the spectrogram is displayed in a two-dimensional domain.

The rest of the paper consists of the following. Section 2 presents the observer design for longitudinal road profile estimation. Section 3 presents the feature extraction and feature matching, and Section 4 shows the experimental validation. Section 5 presents the conclusions.

2. Observer Design for Road Profile Estimation

The pitch rate and vertical acceleration are the most representative signals obtained from road profiles. However, such signals are affected by many factors, such as the longitudinal acceleration, measurement noise, and vehicle velocity. Therefore, a localization method based on the measured pitch rate or vertical acceleration signals may not be robust. An input observer or a disturbance observer is used to achieve a signal that is only affected by the road profile.

Figure 1 shows the concept of the road profile estimation. For straight driving, the main excitations of the vehicle's dynamic responses are the vertical displacement of the tire-road contact points and the longitudinal acceleration. This information is used to estimate a road profile using measured IMU signals.
Figure 1. Concept of road profile estimation.
2.1. Vehicle Vertical Model

To capture the vertical and pitch dynamics, we consider a typical half car model, as shown in Figure 2. The tire and the suspension are modeled as spring-damper systems, but the damper of the suspension is modeled as a non-linear system. The nonlinearity of the suspension damper is one of the most significant factors in the accuracy of the estimation results. Therefore, the nonlinearity of the suspension dampers is explicitly considered in the model design. The damping force model is shown in Figure 3.
Figure 2. Half car model for vertical dynamics.
Figure 3. Damper model.
The half car model with nonlinear dampers is expressed as a state space model:

$$\dot{x} = f(x) + B_1 u + B_2 d, \qquad y = h(x), \qquad (1)$$

where $x$ is a state vector, $u$ is a vector of known inputs, and $d$ is a vector of unknown inputs. The variables are defined as follows:

$$x = \begin{bmatrix} s_f & \dot{s}_f & s_r & \dot{s}_r & z_f & \dot{z}_f & z_r & \dot{z}_r \end{bmatrix}^T, \qquad
u = a_x, \qquad
d = \begin{bmatrix} R_f & \dot{R}_f & R_r & \dot{R}_r \end{bmatrix}^T, \qquad
y = \begin{bmatrix} \dot{\theta} & \ddot{z} \end{bmatrix}^T,$$

where $s_f$ and $s_r$ are the front and rear suspension strokes, respectively, and $R_f$, $R_r$ and their time-derivatives are unknown inputs to the vehicle dynamics. These unknown inputs are determined by the road profiles and the vehicle velocity, and they cannot be easily measured using conventional sensors. $y$ is a vector of the measurement signals and is a tuple of the pitch rate and vertical acceleration signals from the IMU sensor.
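To make the model structure concrete, the following is a minimal sketch of how the state function f(x) and the measurement function h(x) of Equation (1) could be coded for a half car with a nonlinear damper. The parameter values, the damper lookup table, and the sign conventions are illustrative assumptions, not the authors' identified values.

```python
import numpy as np

# Illustrative half-car parameters (placeholders, not the paper's identified values).
ms, Iy, h = 1500.0, 2500.0, 0.5      # sprung mass [kg], pitch inertia [kg m^2], CG height [m]
La, Lb = 1.2, 1.6                    # CG-to-front / CG-to-rear axle distances [m]
mtf, mtr = 40.0, 40.0                # front / rear unsprung masses [kg]
ksf, ksr = 30e3, 28e3                # suspension spring stiffnesses [N/m]
ktf, ktr = 200e3, 200e3              # tire stiffnesses [N/m]
ctf, ctr = 300.0, 300.0              # tire damping coefficients [N s/m]

# Nonlinear damper: force vs. stroke velocity lookup (shape as in Figure 3, values assumed).
damper_vel = np.array([-2.0, -0.5, -0.1, 0.0, 0.1, 0.5, 2.0])
damper_force = np.array([-4500.0, -2500.0, -900.0, 0.0, 1200.0, 3200.0, 5500.0])

def damper(v):
    """Nonlinear suspension damping force as a function of stroke velocity."""
    return np.interp(v, damper_vel, damper_force)

def f(x, u, d):
    """State derivative of the half-car model, x = [sf, sf', sr, sr', zf, zf', zr, zr']."""
    sf, dsf, sr, dsr, zf, dzf, zr, dzr = x
    ax = u
    Rf, dRf, Rr, dRr = d
    # Total suspension forces (spring + nonlinear damper) at the front and rear corners.
    Fsf = ksf * sf + damper(dsf)
    Fsr = ksr * sr + damper(dsr)
    # Sprung-mass heave and pitch accelerations (small-angle half-car equations, deviations
    # from static equilibrium, so gravity terms are omitted).
    zs_dd = -(Fsf + Fsr) / ms
    th_dd = (-La * Fsf + Lb * Fsr - ms * h * ax) / Iy
    # Unsprung-mass accelerations driven by the suspension and the road inputs Rf, Rr.
    zf_dd = (Fsf + ktf * (Rf - zf) + ctf * (dRf - dzf)) / mtf
    zr_dd = (Fsr + ktr * (Rr - zr) + ctr * (dRr - dzr)) / mtr
    # Stroke accelerations follow from the sprung-corner and unsprung-mass accelerations.
    sf_dd = (zs_dd + La * th_dd) - zf_dd
    sr_dd = (zs_dd - Lb * th_dd) - zr_dd
    return np.array([dsf, sf_dd, dsr, sr_dd, dzf, zf_dd, dzr, zr_dd])

def h(x):
    """Measurement model: pitch rate and sprung-mass vertical acceleration."""
    sf, dsf, sr, dsr, zf, dzf, zr, dzr = x
    pitch_rate = ((dsf + dzf) - (dsr + dzr)) / (La + Lb)
    Fsf = ksf * sf + damper(dsf)
    Fsr = ksr * sr + damper(dsr)
    return np.array([pitch_rate, -(Fsf + Fsr) / ms])
```

These two illustrative functions are reused in the linearization sketch given after Equation (5).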
2.2. Synthesis for Unknown Input Estimation

The states of a model can be estimated using conventional observers or Kalman filters. To estimate unknown inputs, Darouach proposed a modified Kalman filter for a linear system [32,33]. The summarized synthesis for a discrete linear system with an unknown input $d_k$ is the following. Consider the discrete system:

$$x_{k+1} = F x_k + G_1 u_k + G_2 d_k + w_k, \qquad y_k = H x_k + v_k, \qquad (2)$$

where $x_k$ is the state, $u_k$ is the known input, $y_k$ is the measurement, and $w_k$ and $v_k$ are zero-mean Gaussian noise with covariance matrices $W$ and $V$, respectively. The state and unknown inputs are estimated from the modified Kalman filter:
$$\begin{aligned}
\bar{x}_{k/k} &= F \hat{x}_{k/k} + G_1 u_k, \\
\hat{x}_{k+1/k+1} &= \bar{x}_{k/k} + G_2 \hat{d}_{k/k+1} + K^x_{k+1}\left\{ y_{k+1} - H\left( \bar{x}_{k/k} + G_2 \hat{d}_{k/k+1} \right) \right\}, \\
\hat{d}_{k/k+1} &= K^d_{k+1}\left( y_{k+1} - H \bar{x}_{k/k} \right), \\
K^x_{k+1} &= \left( P_{k/k}^{-1} + H^T V^{-1} H \right)^{-1} H^T V^{-1}, \\
K^d_{k+1} &= P^{dx}_{k+1/k+1} H^T V^{-1},
\end{aligned} \qquad (3)$$

where

$$\begin{aligned}
P_{k/k} &= F P^{x}_{k/k} F^T + W, \\
P^{dx}_{k+1/k+1} &= P^{d}_{k/k+1}\, G_2^T P_{k/k}^{-1} \left( P_{k/k}^{-1} + H^T V^{-1} H \right)^{-1}, \\
P^{d}_{k/k+1} &= \left\{ G_2^T H^T \left( V + H P_{k/k} H^T \right)^{-1} H G_2 \right\}^{-1}, \\
P^{x}_{k+1/k+1} &= \left[ P_{k/k}^{-1} + H^T V^{-1} H - P_{k/k}^{-1} G_2 \left( G_2^T P_{k/k}^{-1} G_2 \right)^{-1} G_2^T P_{k/k}^{-1} \right]^{-1}.
\end{aligned}$$
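For illustration, the recursion above can be collected into a single update function. The sketch below is a minimal implementation of Equation (3) for one time step under the discrete model (2); the variable names mirror the equations, and the covariance bookkeeping follows the expressions above.

```python
import numpy as np

def unknown_input_kf_step(x_hat, P_x, y_next, u, F, G1, G2, H, W, V):
    """One step of the modified Kalman filter with unknown inputs (Equation (3)).

    Returns the updated state estimate, the unknown-input estimate, and the
    updated state covariance.
    """
    Vi = np.linalg.inv(V)

    # Prediction with the known input only.
    x_bar = F @ x_hat + G1 @ u
    P = F @ P_x @ F.T + W                     # predicted state covariance
    Pi = np.linalg.inv(P)

    # Unknown-input estimate d_hat and its gain K_d.
    P_d = np.linalg.inv(G2.T @ H.T @ np.linalg.inv(V + H @ P @ H.T) @ H @ G2)
    P_dx = P_d @ G2.T @ Pi @ np.linalg.inv(Pi + H.T @ Vi @ H)
    K_d = P_dx @ H.T @ Vi
    d_hat = K_d @ (y_next - H @ x_bar)

    # State gain and corrected state estimate.
    K_x = np.linalg.inv(Pi + H.T @ Vi @ H) @ H.T @ Vi
    x_new = x_bar + G2 @ d_hat + K_x @ (y_next - H @ (x_bar + G2 @ d_hat))

    # Updated state covariance (information form with the unknown-input directions removed).
    P_x_new = np.linalg.inv(Pi + H.T @ Vi @ H
                            - Pi @ G2 @ np.linalg.inv(G2.T @ Pi @ G2) @ G2.T @ Pi)
    return x_new, d_hat, P_x_new
```

In this work the filter runs on the augmented, linearized model built in the following subsections, so F, G1, G2, and H are time-varying.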
2.3. Linearization and Discretization of the Model

The synthesis can also be applied to a nonlinear system using the Jacobian matrices of the state transition and measurement functions. To design a disturbance observer, the nonlinear model (1) is linearized. At an operating point $x_0$, the linearized model is:

$$\delta \dot{x} = A(t)\,\delta x + B_1\,\delta u + B_2\,\delta d, \qquad \delta y = C(t)\,\delta x, \qquad (4)$$

where

$$A(t) = \left[ \frac{\partial f(x)}{\partial x} \right]_{x_0}, \qquad
C(t) = \left[ \frac{\partial h(x)}{\partial x} \right]_{x_0},$$

$$B_1 = \begin{bmatrix} 0 & \dfrac{m h L_a}{I_y} & 0 & -\dfrac{m h L_b}{I_y} & 0 & 0 & 0 & 0 \end{bmatrix}^T,$$

$$B_2 = \begin{bmatrix}
0 & -\dfrac{k_{tf}}{m_{tf}} & 0 & 0 & 0 & \dfrac{k_{tf}}{m_{tf}} & 0 & 0 \\
0 & -\dfrac{c_{tf}}{m_{tf}} & 0 & 0 & 0 & \dfrac{c_{tf}}{m_{tf}} & 0 & 0 \\
0 & 0 & 0 & -\dfrac{k_{tr}}{m_{tr}} & 0 & 0 & 0 & \dfrac{k_{tr}}{m_{tr}} \\
0 & 0 & 0 & -\dfrac{c_{tr}}{m_{tr}} & 0 & 0 & 0 & \dfrac{c_{tr}}{m_{tr}}
\end{bmatrix}^T,$$

where $m$ is the sprung mass, $h$ is the height of the center of gravity, $L_a$ and $L_b$ are the distances from the center of gravity to the front and rear axles, $I_y$ is the pitch moment of inertia, $k_{tf}$, $k_{tr}$, $c_{tf}$, and $c_{tr}$ are the front and rear tire stiffness and damping coefficients, and $m_{tf}$ and $m_{tr}$ are the front and rear unsprung masses.
Finally, the linearized model (4) is discretized as a linear time-varying system for the observer design as follows:

$$x_{k+1} = F_k x_k + G_1 u_k + G_2 d_k, \qquad y_k = H_k x_k. \qquad (5)$$
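As a sketch of how Equations (4) and (5) could be produced in practice, the snippet below computes the Jacobians by finite differences and applies a forward-Euler discretization, reusing the illustrative f and h defined earlier. The finite-difference step, the Euler scheme, and the sampling period are assumptions made for illustration; a matrix-exponential discretization would be an equally valid choice.

```python
import numpy as np

def numerical_jacobian(fun, x0, eps=1e-6):
    """Finite-difference Jacobian of fun evaluated at x0."""
    f0 = np.atleast_1d(fun(x0))
    J = np.zeros((f0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(fun(x0 + dx)) - f0) / eps
    return J

def linearize_and_discretize(f, h, x0, u0, d0, B1, B2, Ts=0.01):
    """Linearize model (1) at x0 (Equation (4)) and discretize it (Equation (5))."""
    A = numerical_jacobian(lambda x: f(x, u0, d0), x0)   # A(t) = df/dx at x0
    C = numerical_jacobian(h, x0)                        # C(t) = dh/dx at x0
    Fk = np.eye(x0.size) + Ts * A                        # forward Euler: x_{k+1} = (I + Ts*A) x_k + ...
    G1 = Ts * B1
    G2 = Ts * B2
    Hk = C
    return Fk, G1, G2, Hk
```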
2.4. Measurement Bias Model

The measurement signals include inevitable biases from the sensors or the installation error that usually occurs in a production vehicle. The measurement bias may cause accumulated errors in the estimation. If the sensor biases are not ignorable, the biases should be explicitly taken into account in the observer design. The biases are defined in the measurement vector $y_m$:

$$y_m = \begin{bmatrix} \dot{\theta}_m \\ \ddot{z}_m \end{bmatrix} = \begin{bmatrix} \dot{\theta} + b_{\dot{\theta}} \\ \ddot{z} + b_{\ddot{z}} \end{bmatrix}, \qquad (6)$$
where $\dot{\theta}_m$ is the measured pitch rate, $b_{\dot{\theta}}$ is the bias of the pitch rate sensor, $\ddot{z}_m$ is the measured vertical acceleration, and $b_{\ddot{z}}$ is the bias of the vertical acceleration sensor. The biases are usually constant or change slowly. Therefore, the bias model is assumed to be constant:

$$b_{\dot{\theta}}(k+1) = b_{\dot{\theta}}(k), \qquad b_{\ddot{z}}(k+1) = b_{\ddot{z}}(k). \qquad (7)$$

2.5. Virtual Measurement for Observability
When the bias model is added to the linearized discrete system (5), the system model becomes larger and thus has 10 states, four unknown inputs, one known input, and two measurement signals. Before the observer design synthesis is applied to the system, the observability is investigated. The observability of an input observer can be tested using the following two conditions [34]:

1. $\mathrm{rank}(O) = \mathrm{rank}\begin{bmatrix} H_k \\ H_k F_k \\ \vdots \\ H_k F_k^{\,n-1} \end{bmatrix} = n,$  (8)

2. $\mathrm{rank}(H_{k+1} G_d) = \mathrm{rank}(G_d) = q,$

where $n$ and $q$ are the number of states and the number of unknown inputs, respectively. The states can be estimated if the observability matrix $O$ has full column rank. The unknown inputs can be estimated if the matrices in the second condition have full column rank. The second condition is not satisfied in our problem. An easy resolution is to add measurement signals, but this requires additional cost. Therefore, we introduce a virtual measurement signal based on a reasonable assumption: the average road grade is zero over a distance horizon that is much longer than the vehicle wheelbase. With this assumption, we introduce two virtual measurements, $\bar{R}_f$ and $\bar{R}_r$, which are the low-pass-filtered vertical displacements of the contact patches of the front and rear tires, respectively. The two virtual measurement signals are expressed as follows:

$$\bar{R}_f(k+1) = \alpha \bar{R}_f(k) + (1-\alpha) R_f(k), \qquad \bar{R}_r(k+1) = \beta \bar{R}_r(k) + (1-\beta) R_r(k), \qquad (9)$$

where $\alpha$ and $\beta$ are the filter parameters. The virtual measurements and $y_m$ are combined into the final measurement vector:

$$y_{aug} = \begin{bmatrix} y_m \\ \bar{R}_f \\ \bar{R}_r \end{bmatrix} = \begin{bmatrix} y_m \\ 0 \\ 0 \end{bmatrix}. \qquad (10)$$

The final model for the road profile estimation is completed by adding the bias model and the virtual measurement model to the discretized vertical model (5):

$$x_{aug}(k+1) = F_{aug}(k)\, x_{aug}(k) + G_{1,aug}\, u(k) + G_{2,aug}\, d(k), \qquad y_{aug}(k) = H_{aug}(k)\, x_{aug}(k), \qquad (11)$$
where

$$x_{aug} = \begin{bmatrix} x^T & b_{\dot{\theta}} & b_{\ddot{z}} & \bar{R}_f & \bar{R}_r \end{bmatrix}^T, \qquad
y_{aug} = \begin{bmatrix} \dot{\theta}_m & \ddot{z}_m & \bar{R}_f & \bar{R}_r \end{bmatrix}^T,$$

$$F_{aug}(k) = \begin{bmatrix} F_k & 0_{8\times4} \\ F_{21} & F_{22} \end{bmatrix}, \qquad
F_{21} = 0_{4\times8}, \qquad
F_{22} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \beta \end{bmatrix},$$

$$G_{1,aug} = \begin{bmatrix} G_1 \\ 0_{4\times1} \end{bmatrix}, \qquad
G_{2,aug} = \begin{bmatrix} G_2 \\ 0_{2\times4} \\ \begin{matrix} 1-\alpha & 0 & 0 & 0 \\ 0 & 0 & 1-\beta & 0 \end{matrix} \end{bmatrix}, \qquad
H_{aug}(k) = \begin{bmatrix} H_k & I_{2\times2} & 0_{2\times2} \\ 0_{2\times8} & 0_{2\times2} & I_{2\times2} \end{bmatrix}.$$

This system satisfies the observability conditions, and the disturbance observer can be designed.
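The two rank tests in Equation (8) are easy to verify numerically for the augmented matrices above. The helper below is a minimal sketch under the assumption that the matrices are evaluated at one time step; note that without the two virtual-measurement rows in H_aug, the second condition necessarily fails, because two physical measurements cannot yield rank(H G_d) = 4.

```python
import numpy as np

def check_input_observability(F, H, Gd):
    """Numerically test the rank conditions of Equation (8) for one time step.

    F and H are the (augmented) state-transition and measurement matrices,
    and Gd is the unknown-input matrix (here G_2,aug).
    """
    n = F.shape[0]          # number of states
    q = Gd.shape[1]         # number of unknown inputs

    # Condition 1: observability matrix [H; HF; ...; HF^(n-1)] must have rank n.
    blocks, HFk = [], H.copy()
    for _ in range(n):
        blocks.append(HFk)
        HFk = HFk @ F
    cond1 = np.linalg.matrix_rank(np.vstack(blocks)) == n

    # Condition 2: rank(H Gd) = rank(Gd) = q.
    cond2 = (np.linalg.matrix_rank(H @ Gd) == q
             and np.linalg.matrix_rank(Gd) == q)
    return cond1, cond2
```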
2.6. Road Profile Estimation Validation

We verified the estimation algorithm with a simulation and experiments. The simulation, including the virtual measurements, was performed with the vehicle dynamics simulation software Carsim® (Mechanical Simulation Corporation, Ann Arbor, MI, USA). The vehicle was a full-sized sedan running at 25 km/h on a simple road profile, as shown in Figure 4. Figure 5 shows the road profile estimation results obtained using the system model with linear dampers and with nonlinear dampers. The measurement signal does not have any noise or bias, so the error in the estimation mainly comes from the model error. Therefore, the result with nonlinear dampers shows a higher accuracy than the result with linear dampers.

Figure 4. Simulation environment.

Figure 5. Estimated vertical displacement of front-tire contact patch.

The experimental validation was performed using a speed bump, as shown in Figure 6. A full-sized sedan was driven on the speed bump, and the profile of the speed bump was measured using the ad-hoc device shown in Figure 7. The estimated results are plotted in Figure 8. The black line
presents the measured profile. The observer with the nonlinear damper model provided more accurate estimates, especially after the vehicle ran over the bump. This occurred because a linear damper model would show a significant discrepancy from a true damper when the suspension stroke velocity is high. A high suspension stroke velocity usually occurs at sudden road profile changes such as speed bumps or pot holes.
Figure 6. Experimental test of longitudinal road profile estimation.
Figure 7. Road profile measurement device.
Figure 8. Estimated longitudinal road profile.
3. Localization Using Estimated Road Profile
3.1. Feature Extraction

Unique and robust features are required to identify a specific location. The spectrogram shows the features of road profiles as an image in the time-frequency domain. By transforming a road profile to a spectrogram, we can apply various image feature extraction methods. Furthermore, we can extract robust features even when there is noise because the spectrogram has two feature dimensions: time and frequency. However, the vehicle speed affects the road profile distribution if the estimated road profiles are expressed in time. Therefore, we transform the road profile indexed in time into one that is indexed by the accumulated distance. By generating a spectrogram in the distance-frequency domain, the time dependency is removed. Figure 9 shows a spectrogram with the estimated road profile result from Figure 5. This spectrogram was generated with a moving window with dimensions of 10 m × 1 round/m. The color changes in the figure represent the road profile changes.
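The distance-indexed spectrogram described above can be sketched as follows. The resampling interval, the window overlap, and the use of scipy.signal.spectrogram are illustrative choices; only the 10 m moving window length comes from the text.

```python
import numpy as np
from scipy.signal import spectrogram

def distance_spectrogram(road_height, vehicle_speed, dt, dx=0.1, window_m=10.0):
    """Distance-frequency spectrogram of an estimated road profile.

    road_height: estimated profile samples over time, vehicle_speed: speed samples [m/s],
    dt: IMU sampling period [s]. dx (resampling interval) is an assumed value; window_m
    is the 10 m moving window mentioned in the text.
    """
    # Re-index the time-sampled profile by accumulated travel distance.
    distance = np.cumsum(vehicle_speed * dt)
    d_grid = np.arange(distance[0], distance[-1], dx)
    profile_d = np.interp(d_grid, distance, road_height)

    # Spatial spectrogram: the "frequency" axis is in cycles per meter.
    fs_spatial = 1.0 / dx                       # samples per meter
    nperseg = int(window_m / dx)                # 10 m moving window
    f, d, Sxx = spectrogram(profile_d, fs=fs_spatial,
                            nperseg=nperseg, noverlap=nperseg // 2)
    return f, d + d_grid[0], Sxx                # cycles/m, distance [m], power
```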
Figure 9. Spectrogram of the estimated road profile.

3.2. Feature Matching

The feature matching involves determining the most corresponding index between a test sample and a portion of the database. We use an image comparison method based on normalized gray-level correlation. The database is a long spectrogram image that is obtained in advance. The test sample signal is transformed to a spectrogram and normalized to a gray-level image. The similarity can be evaluated with the inner product operation:

$$r = \frac{a \cdot b}{|a|\,|b|} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} a(i,j)\, b(i,j)}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N} a(i,j)^2}\;\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N} b(i,j)^2}}, \qquad (12)$$

where $a$ is the normalized database vector and $b$ is the normalized test sample vector. If the correlation coefficient $r$ has a high value near 1, the test sample image is similar to a portion of the database image. Among the $r$ values obtained from scanning the database, the index of the highest value should be the most likely current location. Figure 10 shows the conceptual process of the feature extraction and matching.
Figure 10. Normalized gray-level correlation process.
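A direct way to implement the scan in Equation (12) is shown below. It slides the normalized test image over the database spectrogram and returns the column index with the highest correlation; the function and variable names are illustrative.

```python
import numpy as np

def localize(test_img, database_img):
    """Scan the database spectrogram with Equation (12) and return the best-matching index.

    test_img: normalized gray-level spectrogram of the current window (M x N).
    database_img: pre-indexed spectrogram of the whole route (M x L, with L >= N).
    """
    M, N = test_img.shape
    b = test_img.ravel()
    b = b / (np.linalg.norm(b) + 1e-12)

    best_r, best_idx = -1.0, 0
    for idx in range(database_img.shape[1] - N + 1):
        a = database_img[:, idx:idx + N].ravel()
        r = (a @ b) / (np.linalg.norm(a) + 1e-12)   # normalized gray-level correlation
        if r > best_r:
            best_r, best_idx = r, idx
    return best_idx, best_r
```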
4. Experimental Validation

To verify the localization method, we drove two different cars on the same road: a full-sized sedan and a compact sedan, as shown in Figure 11. The vehicle velocities are shown in Figure 12. The full-sized sedan ran at about 24.4 km/h, and the compact sedan ran at about 16.7 km/h. Figure 13a–c show the pitch rate, vertical acceleration, and longitudinal acceleration signals measured through the IMU for both cases, respectively. The signals in Figure 13a,b are the output y, and the signal in Figure 13c is the known input u in the estimator for the road profile. Figure 13d shows the driving distance with respect to time for both vehicles.
Figure 11. Test location for vehicle localization.
Figure 12. Vehicle velocities for localization tests.
Figure 13. Measured signals from IMU sensor: (a) Pitch rate signals; (b) Vertical acceleration signals; (c) Longitudinal acceleration signals; (d) Location with respect to time.
4.1. Consistency of Road Profile Estimation

The road profile should be extracted consistently for any type of vehicle at any speed. Figure 14 shows the estimated longitudinal road profile. The estimated profiles from the two vehicles show consistent patterns, even though the exact values are not identical. These consistent road profile results can provide the features for the localization in the same location, regardless of the speed or type of vehicle. The spectrograms of the estimated profiles from 750 to 1000 m in Figure 15 show more similar patterns. This occurs because the spectrogram is more affected by the distance and the frequency of the specific signal shapes in the distance domain than their magnitudes.
Figure 14. Estimated longitudinal road profile.
Figure 15. Spectrogram at 750–1000 m: (a) Full-sized sedan; (b) Compact-sized sedan.
4.2. Localization

The estimated road profile from the full-sized sedan was used as a pre-indexed database, and that from the compact sedan was used as the test signal. Figure 16 shows the spectrogram transformed from the estimated road profile of the database signals, with a window size of 250 m for the spectrogram generation. Figure 17 presents the spectrogram of the test runs obtained with this window. The localization was implemented by scanning the database spectrogram for the spectrogram of the test signals. For example, for the spectrogram image obtained from the test vehicle shown in Figure 17a, the current position is about 400 m in the database index. In the same way, Figure 17b shows the image for 1450 m, and Figure 17c shows the image for 2200 m in the database index. The localization results for the total test signals are plotted in Figure 18.
Figure 16. Reference spectrogram (pre-indexed database).
Figure 17. Spectrogram at three positions of test vehicles.
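As a usage example of the matching sketch from Section 3.2, the localization over a whole test drive can be expressed as repeated scans with a fixed window. The 250 m window and the 0.5 m column spacing below are illustrative values showing how a window length in meters maps to spectrogram columns.

```python
import numpy as np

# Assumed spectrogram column spacing in meters (depends on the chosen hop size).
column_spacing_m = 0.5
window_m = 250.0
n_cols = int(window_m / column_spacing_m)

def localize_drive(test_spec, database_spec):
    """Estimate a matched database position for every window of the test spectrogram."""
    positions = []
    for start in range(0, test_spec.shape[1] - n_cols + 1):
        window = test_spec[:, start:start + n_cols]
        idx, _ = localize(window, database_spec)        # from the Section 3.2 sketch
        positions.append((start * column_spacing_m, idx * column_spacing_m))
    return np.array(positions)                           # (current distance, matched database distance)
```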
Each graph in Figure 18 presents the localization result for image moving window lengths of 100, 200, 250, and 500 m. When the window size is larger, the estimated result is more accurate because the larger window includes more features. If the number and variety of features in the window are sufficient for localization, the localization accuracy increases, as shown in the case of the 500 m window. Notably, the localization results do not have accumulating error problems for any window size.
Figure 18. Localization result: (a) Window size = 100 m; (b) Window size = 200 m; (c) Window size = 250 m; (d) Window size = 500 m.
Figure 19 shows the error between the true position and the estimated positions with the measured signals from different environments when the window size is 500 m. The estimated localization results have under 5 m error without error accumulation.
Figure 19. Estimation errors (window size = 500 m).
Kuutti recently surveyed the state-of-the-art localization techniques [35]. He summarized that localization techniques based on only one standalone on-board sensor offer low accuracy and robustness for autonomous vehicles. GPS-only localization has an accuracy of under 10 m [35], the radar-only technique has an average accuracy of 10.5 m [36], and even camera-only localization has up to 20.5 m of cumulative error [37]. The fusion techniques with vision sensors have under a meter of accuracy; however, these techniques are vulnerable to illumination, the observation angle, the weather condition, and dynamic environments. The integrated GPS and IMU sensor localization technique has a relatively low accuracy of 7.2 m and an error accumulation problem [38]. Another error-accumulation-free IMU-based localization method [21] has an accuracy of about 5~10 m, and its estimation error deviations are large due to the signal noise. On the other hand, the suggested localization method has under 5 m error without an error accumulation problem using only an IMU sensor. Therefore, the suggested method can be applied as an alternative localization technique.

We conducted the localization process with pitch rate signals to confirm the effect of the longitudinal road profile estimation in the localization process. The same methodology was used for localization, but the base signal is the pitch rate signal and not the road profile. Because the magnitude and frequency of the pitch rate signals are affected by the vehicle speed, the accuracy is much lower than the result with the estimated road profile, even with a larger window size, as shown in Figure 20.
Figure 20. Localization result without road profile estimation (window size = 250 m).
5. Conclusions

This paper presented a non-dead-reckoning-based localization method using an IMU sensor without the concern of error accumulation. The localization concept matches the obtained signals with a chunk of the pre-indexed database. This method consists of two parts: (1) transforming the IMU signals in the time domain to the road profile in the distance domain through the disturbance observer; and (2) extracting and matching features in the spectrogram of the estimated road profile. The road profile estimation is implemented with the modified Kalman filter framework for unknown input estimation. The observer deals with the nonlinearity of the suspension for accurate estimation and introduces the virtual measurement to satisfy the observability condition without using additional sensors. The features for localization are in a distance-frequency spectrogram, and the current position is identified using an image matching technique. The test results showed the validity of the suggested method, with an accuracy of a few meters and repeatability without error accumulation, which are similar to or superior to those of other single-sensor-based localization techniques without any sensor fusion.

The suggested method has some advantages: (1) no error accumulation, by virtue of the time-independent road-profile-based localization; (2) a robust performance against vehicle speed variations; (3) richer features in a distance-frequency spectrogram than in the time-signal domain; (4) easy noise separation because the road profiles are transformed to the frequency domain; and (5) a higher possibility of implementation than other single-sensor-based localization methods because of the low-cost IMU. Therefore, the suggested method can be applied as a redundant sensor to GPS to improve the reliability of the navigation system, or even as a back-up sensor when GPS is unavailable.
Author Contributions: Writing-Original Draft Preparation, J.G.; Writing-Review & Editing, C.A.

Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2016R1C1B1006540).

Conflicts of Interest: The authors declare no conflict of interest.
References

1. Autonomous Vehicle Implementation Predictions: Implications for Transport Planning. Available online: http://orfe.princeton.edu/~alaink/SmartDrivingCars/Reports&Speaches_External/Litman_AutonomousVehicleImplementationPredictions.pdf (accessed on 3 October 2018).
2. Autonomous Vehicles: Drivers for Change. Available online: https://www.roadsbridges.com/autonomousvehicles-drivers-change (accessed on 3 October 2018).
3. Effects of Next-Generation Vehicles on Travel Demand and Highway Capacity. Available online: http://orfe.princeton.edu/~alaink/Papers/FP_NextGenVehicleWhitePaper012414.pdf (accessed on 3 October 2018).
4. Tao, Z.; Bonnifait, P.; Fremont, V.; Ibanez-Guzman, J. Mapping and localization using GPS, lane markings and proprioceptive sensors. In Proceedings of the International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 406–412.
5. Lu, M.; Wevers, K.; Van Der Heijden, R. Technical feasibility of advanced driver assistance systems (ADAS) for road traffic safety. Transp. Plan. Technol. 2005, 28, 167–187. [CrossRef]
6. Kleusberg, A.; Langley, R.B. The limitations of GPS. GPS World 1990, 1, 50–52.
7. Skone, S.; Knudsen, K.; de Jong, M. Limitations in GPS receiver tracking performance under ionospheric scintillation conditions. Phys. Chem. Earth Part A Solid Earth Geod. 2001, 26, 613–621. [CrossRef]
8. Drawil, N.M.; Amar, H.M.; Basir, O.A. GPS localization accuracy classification: A context-based approach. IEEE Trans. Intell. Transp. Syst. 2013, 14, 262–273. [CrossRef]
9. Blomenhofer, H.; Hein, G.W.; Taveira Blomenhofer, E.; Werner, W. Development of a real-time DGPS system in the centimeter range. In Proceedings of the Position Location and Navigation Symposium, Las Vegas, NV, USA, 11–15 April 1994; pp. 532–539.
10. Farrell, J.; Givargis, T. Differential GPS reference station algorithm-design and analysis. IEEE Trans. Control Syst. Technol. 2000, 8, 519–531. [CrossRef]
11. Cannon, M.E. High-accuracy GPS semikinematic positioning: Modeling and results. J. Inst. Navig. 1990, 37, 53–64. [CrossRef]
12. Jie, D.; Masters, J.; Barth, M. Lane-level positioning for in-vehicle navigation and automated vehicle location (AVL) systems. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems, Washington, WA, USA, 3–6 October 2004; pp. 35–40.
13. Meguro, J.-I.; Murata, T.; Takiguchi, J.-I.; Amano, Y.; Hashizume, T. GPS accuracy improvement by satellite selection using omnidirectional infrared camera. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 1804–1810.
14. Parra Alonso, I.; Fernandez Llorca, D.; Gavilan, M.; Alvarez Pardo, S.; Garcia-Garrido, M.A.; Vlacic, L.; Sotelo, M.A. Accurate global localization using visual odometry and digital maps on urban environments. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1535–1545. [CrossRef]
15. Redmill, K.A.; Kitajima, T.; Ozguner, U. DGPS/INS integrated positioning for control of automated vehicle. In Proceedings of the Intelligent Transportation Systems, Oakland, CA, USA, 25–29 August 2001; pp. 172–178.
16. Farrell, J.A.; Givargis, T.D.; Barth, M.J. Real-time differential carrier phase GPS-aided INS. IEEE Trans. Control Syst. Technol. 2000, 8, 709–721. [CrossRef]
17. Wang, J.; Garratt, M.; Lambert, A.; Wang, J.J.; Hana, S.; Sinclair, D. Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles. In Proceedings of The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 3–11 July 2008; pp. 963–970.
18. Skog, I.; Handel, P. In-Car positioning and navigation technologies-A survey. IEEE Trans. Intell. Transp. Syst. 2009, 10, 4–21. [CrossRef]
19. Wei-Wen, K. Integration of GPS and dead-reckoning navigation systems. In Proceedings of the Vehicle Navigation and Information Systems Conference, Troy, MI, USA, 20–23 October 1991; pp. 635–643.
20. Krakiwsky, E.J.; Harris, C.B.; Wong, R.V. A Kalman filter for integrating dead reckoning, map matching and GPS positioning. In Proceedings of the Position Location and Navigation Symposium, Orlando, FL, USA, 29 November–2 December 1988; pp. 39–46.
21. Laftchiev, E.I.; Lagoa, C.M.; Brennan, S.N. Vehicle localization using in-vehicle pitch data and dynamical models. IEEE Trans. Intell. Transp. Syst. 2015, 16, 206–220. [CrossRef]
22. Dean, A.; Martini, R.; Brennan, S. Terrain-based road vehicle localization using particle filters. Veh. Syst. Dyn. 2011, 49, 1209–1223. [CrossRef]
23. Imine, H.; Delanne, Y.; M'Sirdi, N.K. Road profile input estimation in vehicle dynamics simulation. Veh. Syst. Dyn. 2006, 44, 285–303. [CrossRef]
24. Doumiati, M.; Victorino, A.; Charara, A.; Lechner, D. Estimation of road profile for vehicle dynamics motion: Experimental validation. In Proceedings of the American Control Conference, San Francisco, CA, USA, 29 June–1 July 2011; pp. 5237–5242.
25. Yu, W.; Zhang, X.; Guo, K.; Karimi, H.R.; Ma, F.; Zheng, F. Adaptive real-time estimation on road disturbances properties considering load variation via vehicle vertical dynamics. Math. Probl. Eng. 2013, 2013, 9. [CrossRef]
26. Tudón-Martínez, J.C.; Fergani, S.; Sename, O.; Morales-Menendez, R.; Dugard, L. Online road profile estimation in automotive vehicles. In Proceedings of the European Control Conference, Strasbourg, France, 24–27 June 2014; pp. 2370–2375.
27. Rath, J.J.; Veluvolu, K.C.; Defoort, M. Adaptive Super-Twisting observer for estimation of random road excitation profile in automotive suspension systems. Sci. World J. 2014, 2014, 1–8. [CrossRef] [PubMed]
28. Rath, J.J.; Veluvolu, K.C.; Defoort, M. Estimation of road profile for suspension systems using adaptive super-twisting observer. In Proceedings of the European Control Conference, Strasbourg, France, 24–27 June 2014; pp. 1675–1680.
29. Tudón-Martínez, J.C.; Fergani, S.; Sename, O.; Martinez, J.J.; Morales-Menendez, R.; Dugard, L. Adaptive road profile estimation in semiactive car suspensions. IEEE Trans. Control Syst. Technol. 2015, 23, 2293–2305. [CrossRef]
30. Doumiati, M.; Martinez, J.; Sename, O.; Dugard, L.; Lechner, D. Road profile estimation using an adaptive Youla–Kučera parametric observer: Comparison to real profilers. Control Eng. Pract. 2017, 61, 270–278. [CrossRef]
31. Chassande-Mottin, É.; Auger, F.; Flandrin, P. Reassignment. In Time-Frequency Analysis: Concepts and Methods; Hlawatsch, F., Auger, F., Eds.; ISTE/John Wiley and Sons: London, UK, 2010; pp. 249–277, ISBN 9781848210332.
32. Zasadzinski, M.; Mehdi, D.; Darouach, M. Recursive state estimation for singular systems. In Proceedings of the American Control Conference, Boston, MA, USA, 26–28 June 1991; pp. 2850–2851.
33. Darouach, M.; Zasadzinski, M.; Onana, A.B.; Nowakowski, S. Kalman filtering with unknown inputs via optimal state estimation of singular systems. Int. J. Syst. Sci. 1995, 26, 2015–2028. [CrossRef]
34. Gillijns, S.; De Moor, B. Unbiased minimum-variance input and state estimation for linear discrete-time systems with direct feedthrough. Automatica 2007, 43, 934–937. [CrossRef]
35. Kuutti, S.; Fallah, S.; Katsaros, K.; Dianati, M.; Mccullough, F.; Mouzakitis, A. A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications. IEEE Internet Things J. 2018, 5, 829–846. [CrossRef]
36. Vivet, D.; Gérossier, F.; Checchin, P.; Trassoudaine, L.; Chapuis, R. Mobile Ground-Based Radar Sensor for Localization and Mapping: An Evaluation of two Approaches. Int. J. Adv. Robot. Syst. 2013, 10, 307–319. [CrossRef]
37. Parra, I.; Sotelo, M.A.; Llorca, D.F.; Oca, M. Robust visual odometry for vehicle localization in urban environments. Robotica 2010, 28, 441–452. [CrossRef]
38. Zhang, F.; Stähle, H.; Chen, G.; Simon, C.C.C.; Buckl, C.; Knoll, A. A sensor fusion approach for localization with cumulative error elimination. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Hamburg, Germany, 13–15 September 2012; pp. 1–6.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).