
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 45, NO. 5, MAY 2000

On the Design of a Nonlinear Adaptive Variable Structure Derivative Estimator

Jian-Xin Xu, Qing-Wei Jia, and Tong Heng Lee

Abstract—This paper develops a new derivative estimator using variable structure system (VSS) and adaptive techniques. The proposed nonlinear adaptive variable structure derivative estimator requires little knowledge about the observed signal source and is able to handle observed signals that are the outputs of either linear or nonlinear dynamic systems with uncertainties. In the absence of system noise, the uniform boundedness of the proposed derivative estimate is guaranteed by means of the adaptive control approach. By virtue of VSS, the new derivative estimator is capable of adapting itself to the noise environment, especially to unpredictable changes in the system noise.

Index Terms—Adaptive control, nonlinear derivative estimation, variable structure systems.

I. INTRODUCTION

Signal derivatives are widely used in many applications, and there are many situations that require the determination or estimation of the time derivatives of a given signal. Typically, for example, in radar applications the velocity and acceleration are estimated from the position measurement using a differentiator. Differentiators are also found in industrial applications, such as the estimation of heating rates from temperature data. In the area of control engineering, derivative estimates of state signals are well known to be of particular importance in both controller design and controller structure simplification.

The differentiation problem is normally treated as a "filter design" problem and has been studied intensively over the past two decades. The existing methods can be classified into two categories: model-based and model-free. Typical model-based approaches are the differentiating Wiener filter and the Kalman filter. A limitation of these filters is that the dynamics of the process must be known a priori; it is difficult for them to remain optimal when the observed system has nonlinearities, uncertainties, or disturbances. Other kinds of numerical differentiation methods exist that are essentially model-free. The simplest and most widely used is perhaps a pure differentiator concatenated with a low-pass filter. The difficulty in designing such numerical differentiators lies in the determination of the filter cutoff frequency, i.e., how to choose a proper time constant so as to ensure a significant attenuation of high-frequency noise while retaining the low-frequency useful signal. Generally speaking, in a noise-free environment, or if the signal-to-noise ratio is so large that the noise effect can be ignored, an accurate derivative filter possessing a wide bandwidth is preferred. On the contrary, when the signal to be differentiated is corrupted by noise, which is usually characterized by high frequencies, the derivative estimator should avoid differentiating those high-frequency components, namely, it should work as a pass-through filter. It seems difficult to design a numerical differentiating filter that meets these two requirements simultaneously.
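To make this cutoff-frequency trade-off concrete, the following minimal Python sketch (not from the paper; the test signal, noise level, and time constants are assumed for illustration) implements a pure differentiator concatenated with a first-order low-pass filter, $p/(\tau p + 1)$, and compares two choices of the time constant.

```python
import numpy as np

# Minimal sketch (assumed signal and noise, not from the paper): a pure
# differentiator cascaded with a first-order low-pass filter,
#   D(p) = p / (tau*p + 1),
# discretized with backward Euler.  A small tau gives an accurate but
# noise-sensitive derivative; a large tau attenuates noise but lags.

def dirty_derivative(r, dt, tau):
    """First-order filtered differentiator applied to samples r."""
    d = np.zeros_like(r)
    a = tau / (tau + dt)
    b = 1.0 / (tau + dt)
    for k in range(1, len(r)):
        d[k] = a * d[k - 1] + b * (r[k] - r[k - 1])
    return d

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
rng = np.random.default_rng(0)
r = np.sin(4 * t) + 0.01 * rng.standard_normal(t.size)   # noisy measurement
r_dot_true = 4 * np.cos(4 * t)

for tau in (0.005, 0.05):
    err = dirty_derivative(r, dt, tau) - r_dot_true
    print(f"tau = {tau:5.3f}  rms error = {np.sqrt(np.mean(err[200:]**2)):.3f}")
```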

Manuscript received December 4, 1996; revised April 22, 1999, November 9, 1998, and April 15, 1999. Recommended by Associate Editor, G. Tao. The authors are with the Department of Electrical Engineering, National University of Singapore, 119260. Publisher Item Identifier S 0018-9286(00)04157-X.

Recently, VSS techniques have been used to tackle signal processing problems, including parameter identification and state estimation [1]–[7]. In contrast to control problems, applications of VSS in signal processing actively make use of the “chattering” information, which reflects the system’s uncertainties. In [8], a VSS-type Derivative Estimator (VSDE) is constructed with relay components. Based on the concept of the sliding mode in VSS, the estimator can be designed without using signal and noise models, and it possesses the property of adapting to unpredictable changes in the system noise. The main drawback of the VSDE method is that the upper bound of the derivative signal must be known in advance as a priori knowledge for the VSDE design. This is an obstacle for implementation because, for a signal that is the output of a nonlinear dynamic system with uncertainties, it is almost impossible to know the bound of its derivative at the design stage. This problem has been partly addressed in [9] by incorporating adaptive techniques. Another problem associated with the VSDE approach is its severe chattering in the presence of system noise: owing to the discontinuous nature of the VSDE, the larger the switching gain, the more severe the chattering, which degrades the estimation.

These limitations motivate us to develop a new Nonlinear Adaptive Variable Structure Derivative Estimator (NAVSDE). The NAVSDE is an improved version of the VSDE, obtained by incorporating adaptive techniques and three different kinds of nonlinear components into the estimator. The NAVSDE scheme has the following three features. First, asymptotic convergence of the derivative estimate to a prescribed region is achieved when the system is free from noise perturbation, and the input signals to the estimator can be generated by any nonlinear uncertain system; this implies that the NAVSDE is able to generate derivative signals with a wide power spectrum. Second, if the useful signal is contaminated by noise but the noise spectral density can be more or less separated from that of the useful signal, the NAVSDE can still be designed so that it behaves like a first-order pseudo-differentiator with its cutoff frequency reciprocal to the noise deviation; in other words, the NAVSDE retains the novel property of the VSDE of adapting to possibly unpredictable parametric changes in the system noise. Finally, the NAVSDE significantly reduces chattering, achieving a much smoother derivative estimate than the VSDE.

The paper is organized as follows. The structure of the NAVSDE is presented in Section II, where its properties in the absence of noise are analyzed in detail. The properties of the NAVSDE in the presence of noise are discussed in Section III. In Section IV, the design guideline and simulation examples are provided for NAVSDE design, comparison, and verification.

II. NAVSDE PROPERTIES IN THE ABSENCE OF NOISE

The basic structure of the proposed derivative estimator is illustrated by the block diagram shown in Fig. 1. Here, $r(t)$ is the input signal of the estimator. In the absence of system noise, the first-order derivative of $r(t)$ is assumed to be bounded; that is, an arbitrarily large constant $c_1$ exists such that $|\dot r(t)| < c_1$. It should be noted that the signal source (mechanism) that generates $r(t)$ could be completely unknown and highly nonlinear, so that conventional derivative estimation approaches cannot perform well. Note that in Fig. 1 there are three nonlinear components: $l_0$ is a switching gain, $l_1$ is a feed-through gain with deadzone, and $l_2(t)$ is a saturation gain, which is tuned by an adaptive mechanism.

Fig. 1. Diagram of the NAVSDE.

From the basic structure of Fig. 1, we have

$$ s(t) = r(t) - y(t) \tag{1} $$
$$ s_1(t) = s(t) - \varepsilon\,\mathrm{sat}\!\left(s(t)/\varepsilon\right) \tag{2} $$
$$ z(t) = l_0\,\mathrm{sgn}(s) + l_1 s_1 + l_2(t)\,\mathrm{sat}\!\left(s(t)/\varepsilon\right) \tag{3} $$
$$ \dot y(t) = z(t) \tag{4} $$

where $l_2(t)$ is generated according to the following adaptive law

$$ \dot l_2(t) = \begin{cases} \gamma\,|s(t)|, & \text{for } |s(t)| > \varepsilon \\ 0, & \text{for } |s(t)| \le \varepsilon \end{cases} \tag{5} $$

in which $\gamma > 0$ is an adaptive rate and $\varepsilon > 0$ is the deadzone size. The saturation function $\mathrm{sat}(w)$, for any real number $w$, is defined as

$$ \mathrm{sat}(w) = \begin{cases} \mathrm{sgn}(w), & \text{for } |w| > 1 \\ w, & \text{for } |w| \le 1. \end{cases} $$
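The following is a minimal discrete-time sketch of the estimator (1)–(5) (not from the paper; a forward-Euler implementation with an assumed step size). The derivative estimate is formed by passing $z(t)$ through a first-order filter $F(p) = 1/(\tau p + 1)$, which is our reading of the equivalent-control construction referred to in the theorem and Remark 3 below.

```python
import numpy as np

# Minimal discrete-time sketch of the NAVSDE equations (1)-(5)
# (forward Euler, assumed step size dt; not the authors' code).
# The derivative estimate is taken as a first-order low-pass filtering
# of z(t), i.e. x1 = F(p)[z] with F(p) = 1/(tau*p + 1) -- our reading of
# the equivalent-control construction mentioned in Remarks 3 and 4.

def navsde(r, dt, l0, l1, gamma, eps, tau, l2_init=0.0):
    sat = lambda w: np.clip(w, -1.0, 1.0)
    y, l2, x1 = 0.0, l2_init, 0.0
    est = np.zeros_like(r)
    for k, rk in enumerate(r):
        s = rk - y                                           # (1)
        s1 = s - eps * sat(s / eps)                          # (2)
        z = l0 * np.sign(s) + l1 * s1 + l2 * sat(s / eps)    # (3)
        y += dt * z                                          # (4)  y_dot = z
        if abs(s) > eps:                                     # (5)  deadzone adaptation
            l2 += dt * gamma * abs(s)
        x1 += dt * (z - x1) / tau      # x1 = F(p)[z], F(p) = 1/(tau p + 1)
        est[k] = x1
    return est

# Example run (parameters loosely following Case 1 in Section IV):
dt = 1e-4
t = np.arange(0.0, 3.0, dt)
r = np.sin(4 * t)
est = navsde(r, dt, l0=0.01, l1=10.0, gamma=200.0, eps=0.1, tau=0.01, l2_init=1.0)
print("final error:", abs(est[-1] - 4 * np.cos(4 * t[-1])))
```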

For the proposed derivative estimator, we have the following lemma.

Lemma 1: For any given $l_0$ and $l_1$, the aforementioned estimator with the adaptive law (5) has the following asymptotic property: for any $\delta > 0$, an $\varepsilon < \delta$ and a finite time $t_3$ exist such that $|s(t)| < \delta$ for $t \ge t_3$.

Proof of Lemma: Define the continuous positive definite function

$$ V = \begin{cases} \tfrac{1}{2}s^2 + \tfrac{1}{2}\gamma^{-1}(l_2 - c_1)^2, & \text{for } |s(t)| > \varepsilon \\ \tfrac{1}{2}\varepsilon^2 + \tfrac{1}{2}\gamma^{-1}(l_2 - c_1)^2, & \text{for } |s(t)| \le \varepsilon. \end{cases} \tag{6} $$

Let $E_0 = \{s : |s| \le \varepsilon\}$. From (5) and (6), it is obvious that $\dot V = 0$ when $s \in E_0$. Now consider the situation when $s \in \mathbb{R} - E_0$. Differentiating $V$ with respect to $t$ yields

$$ \dot V = s\dot s + \gamma^{-1}(l_2 - c_1)\dot l_2 = s\left[\dot r - l_0\,\mathrm{sgn}(s) - l_1 s_1 - l_2\,\mathrm{sat}(s/\varepsilon)\right] + (l_2 - c_1)|s|. \tag{7} $$

From the definition of $s_1$ we have $s s_1 \ge 0$, and

$$ \mathrm{sat}(s/\varepsilon) = \mathrm{sgn}(s), \qquad s \in \mathbb{R} - E_0 \tag{8} $$

so it follows that

$$ \dot V \le c_1|s| - l_0|s| - l_1 s s_1 - l_2|s| + (l_2 - c_1)|s| \le -l_0|s| - l_1 s s_1 \le -l_0\varepsilon \stackrel{\mathrm{def}}{=} -c. \tag{9} $$

The above inequality implies that the estimator will enter the region $s \in E_0$ in finite time. Let $S_1 = \{t : |s(t)| \le \varepsilon\}$ and $S_2 = \{t : |s(t)| > \varepsilon\}$. Because $\dot V \le -c$ when $t \in S_2$, the total time for $t \in S_2$, during which adaptation takes place, is finite [10]. In fact, assuming that $t_0 \in S_2$, the total time $T$ for $t \in S_2$ is bounded by $T \le (1/2c)\left[s^2(t_0) + \gamma^{-1}(l_2(t_0) - c_1)^2\right]$. This means that, in the limit, $s(t)$ will remain in $E_0$ and $l_2(t)$ will approach $c_1$. Choosing $\varepsilon < \delta$, the lemma follows directly; that is, for any $\delta > 0$, a finite time $t_3$ exists such that $|s(t)| < \delta$ for all $t \ge t_3$.

Remark 1: Note that in (5) the adaptation ceases as soon as $s$ reaches the zone $E_0$. This avoids the undesirable long-term parameter drift problem and further assures the boundedness of $l_2(t)$, because the total adaptation time is finite with such an adaptive law.

Remark 2: $c_1$ is usually unknown, which makes the VSDE design [8] difficult. On the other hand, it is easy to treat $c_1$ as the unknown bound of the unknown signal $\dot r$, so that robust adaptive control approaches can be applied to estimate $c_1$.

Based on the above lemma, and selecting a filter $F(p) = 1/(\tau p + 1)$, the main property of the proposed derivative estimator in the absence of noise is given in the following theorem.

Theorem: Assume that the second derivative of $r(t)$ is bounded, i.e., $|\ddot r(t)| \le c_2$, where $c_2$ is an arbitrary positive constant. For any given positive number $\varepsilon_0$, a sufficiently small time constant $\tau$ of the filter $F(p)$ and a sufficiently large $t_y$ exist such that

$$ \left| x^{(1)}(t) - \dot r(t) \right| \le \varepsilon_0, \qquad \forall\, t \ge t_y. \tag{10} $$

Remark 3: The proof of this theorem is based on the concept and derivation of equivalent control with a first-order filter and is similar to that in [1]; it is therefore omitted.

Remark 4: In terms of the above theorem, in the absence of noise the derivative signal can be estimated accurately without using prior knowledge of the signal source, in particular the upper bound $c_1$, which is indispensable in the VSDE design. Hence, the proposed derivative estimator can work over a wide spectrum for quite general classes of nonlinear uncertain systems.

III. NAVSDE ANALYSIS IN THE PRESENCE OF NOISE

In order to meet engineering requirements in practical applications, it is imperative to take the noise problem into account when analyzing and evaluating the effectiveness of a filter, especially a derivative estimator. In addition to the nonlinearities and uncertainties, the signal source of $r(t)$ may also be contaminated by either system or measurement noise $n(t)$. Hence, the input signal $r(t)$ of the estimator is the additive mixture of the useful signal $u(t)$ and the noise $n(t)$, i.e.,

$$ r(t) = u(t) + n(t). $$

In the presence of $n(t)$, the estimation process is no longer deterministic, but stochastic. Thus, a stochastic analysis of the proposed derivative estimator is carried out to show its statistical properties. It should be noted that a finite upper bound $c_1$ for $\dot r$ may not exist as in the deterministic case. In such a case, instead of using the deadzone scheme, a damping term (such as the $\sigma$-modification scheme [11]) can be added to the adaptive law. In this paper, we use the following modification scheme [12]:

$$ \dot l_2 = \gamma|s| - \gamma\, l_2\left(1 - \frac{l_2}{l_2^0}\right)^2 f(l_2) $$

where $l_2^0$ is chosen to be a large value, and

$$ f(l_2) = \begin{cases} 1, & \text{if } l_2 > l_2^0 \\ 0, & \text{otherwise.} \end{cases} $$

It can be seen that the same adaptive law as in (5) is used when $l_2 \le l_2^0$, whereas a correction term is used to force the parameter back into the region $l_2 \le l_2^0$.
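A minimal sketch of one Euler step of the modified adaptive law above (not the authors' code; the adaptation gain, step size, and the large bound, written here as l2_max in place of the quantity denoted $l_2^0$, are placeholder values):

```python
# Minimal sketch of the modified adaptive law for the noisy case
# (assumed Euler discretization; gamma, dt and the large bound l2_max,
# standing in for the quantity written l_2^0 above, are placeholder values).

def l2_update(l2, s, dt, gamma=200.0, l2_max=14.0):
    """One Euler step of the modified adaptation for l2."""
    f = 1.0 if l2 > l2_max else 0.0   # correction active only above the bound
    dl2 = gamma * abs(s) - gamma * l2 * (1.0 - l2 / l2_max) ** 2 * f
    return l2 + dt * dl2

# Example: a value of l2 above the bound is pulled back toward l2 <= l2_max.
print(l2_update(l2=20.0, s=0.05, dt=1e-4))
```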

We assume that the noise $n(t)$ is a weakly stationary, Gaussian, uncorrelated process with zero expectation. In addition, a value $\omega_0$ exists such that the spectral densities $S_u(\omega)$ and $S_n(\omega)$ satisfy the following inequalities:

$$ \int_0^{\omega_0} S_u(\omega)\,d\omega \gg \int_0^{\omega_0} S_n(\omega)\,d\omega \tag{11} $$

$$ \int_{\omega_0}^{\infty} S_u(\omega)\,d\omega \ll \int_{\omega_0}^{\infty} S_n(\omega)\,d\omega. \tag{12} $$

Remark 5: From the physical point of view, the above assumption means that the useful signal $u(t)$ can more or less be separated from the noise $n(t)$ at the frequency $\omega_0$. This is a common assumption made for the noise and the signal to be differentiated; most frequency-domain design approaches require it as a priori knowledge.

Let $\sigma_s^2$ denote the variance of the signal $s(t)$. Using the statistical linearization of the nonlinear elements shown in Fig. 1, the equivalent transfer coefficient with respect to the random component is obtained as follows (see the Appendix):

$$ l = l_1 + \sqrt{\frac{2}{\pi}}\,\frac{l_0}{\sigma_s} + \sqrt{\frac{2}{\pi}}\,\frac{l_2 - l_1\varepsilon'}{\sigma_s}\, e^{-\varepsilon^2/2\sigma_s^2}. \tag{13} $$

Note that this equivalent transfer coefficient of the nonlinear elements depends on the standard deviation of the error signal $s(t)$. In the following, we show that $\sigma_s^2 \le \sigma_n^2$. Based on the statistical linearization, the equivalent transfer function from the input $r(t)$ to the signal $s(t)$ in Fig. 1 can be derived as

$$ F_{AB}(\sigma_s, j\omega) = \frac{j\omega}{j\omega + l}. \tag{14} $$

Then the signal $s$ can be expressed as the output of the filter $F_{AB}$,

$$ s(t) = F_{AB}[r(t)] = F_{AB}[u(t)] + F_{AB}[n(t)] = s_u(t) + s_n(t) $$

where $F_{AB}[\cdot]$ stands for the filter output in the time domain, and $s_u(t)$ and $s_n(t)$ are the two components of the filter output driven by $u(t)$ and $n(t)$, respectively. Noting that $s_u(t)$ is deterministic and the noise $n(t)$ is of zero mean, the mathematical expectation of $s(t)$ is

$$ E[s] = E[s_u] + E[s_n] = s_u. $$

Hence, the variance of $s$ is

$$ \sigma_s^2 = E\!\left[(s - E[s])^2\right] = \mathrm{Var}(s_n) = E[s_n^2] - E[s_n]^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left|F_{AB}(\sigma_s, j\omega)\right|^2 S_n(\omega)\,d\omega \tag{15} $$

where $S_n$ is the power spectral density of the noise $n(t)$. In terms of (11), we know that the noise $n(t)$ is dominated by frequencies above $\omega_0$. On the other hand, $l$ can be designed such that

$$ \left|F_{AB}(\sigma_s, j\omega)\right|^2 \ll 1, \qquad \text{for } \omega < \omega_0 \tag{16} $$

$$ \left|F_{AB}(\sigma_s, j\omega)\right|^2 \le 1, \qquad \text{for } \omega > \omega_0. \tag{17} $$

It then follows from (15) that

$$ \sigma_s^2 \le \frac{1}{2\pi}\int_{-\infty}^{\infty} S_n(\omega)\,d\omega = \sigma_n^2. \tag{18} $$
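The bound (18) can be checked numerically; the following sketch (with an assumed band-limited noise spectrum and assumed values of the equivalent gain $l$, none of which come from the paper) integrates $|F_{AB}|^2 S_n$ and confirms that the resulting variance never exceeds $\sigma_n^2$.

```python
import numpy as np

# Numerical illustration of (15)-(18): for the high-pass F_AB(jw) = jw/(jw + l),
# the filtered-noise variance (1/2pi) * integral |F_AB|^2 Sn dw never exceeds
# sigma_n^2 = (1/2pi) * integral Sn dw, and it shrinks as l moves above w0.
# The flat band-limited spectrum Sn and the values of l are assumptions.

w = np.linspace(-500.0, 500.0, 200001)
w0 = 20.0
Sn = np.where(np.abs(w) >= w0, 1.0, 0.0)       # assumed noise PSD above w0

sigma_n2 = np.trapz(Sn, w) / (2 * np.pi)
for l in (5.0, 50.0, 500.0):
    F2 = (w ** 2) / (w ** 2 + l ** 2)           # |F_AB(jw)|^2
    sigma_s2 = np.trapz(F2 * Sn, w) / (2 * np.pi)
    print(f"l = {l:6.1f}: sigma_s^2 = {sigma_s2:8.3f} <= sigma_n^2 = {sigma_n2:8.3f}")
```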

Further applying the statistical linearization, the equivalent transfer function $F_{AC}(\sigma_s, j\omega)$ from the input terminal $A$ to $z$ is obtained as follows:

$$ F_{AC}(\sigma_s, j\omega) = \frac{j\omega\, l}{j\omega + l} = \frac{j\omega}{\tau\, j\omega + 1} \tag{19} $$

where $\tau = l^{-1}$. In terms of (13), we have

$$ \lim_{\sigma_s \to 0} l = l_1 + \lim_{\sigma_s \to 0} \sqrt{\frac{2}{\pi}}\,\frac{l_0}{\sigma_s} = \infty \tag{20} $$

$$ \lim_{\sigma_s \to \infty} l = l_1. \tag{21} $$

It can be seen that, by choosing a relatively small $l_1$, the filter $F_{AC}$ presents a differentiator for small $\sigma_s$ (small $\tau$) and a pass-through filter for large $\sigma_s$ (relatively large $\tau$). The equivalent transfer function $F_{AC}(\sigma_s, j\omega)$ clearly shows that it is a pass-through filter for high frequencies and a differentiator for low frequencies. It should be noted that $\tau$ in $F_{AC}$ is a function of $\sigma_s$ and, therefore, a function of $\sigma_n$. A small $\sigma_n$ leads to a small $\tau$, so $F_{AC}$ behaves more like a pure differentiator; in other words, in a low-noise environment the NAVSDE automatically raises its cutoff frequency so as to detect more useful frequencies in $u(t)$. When $\sigma_n$ is large, i.e., the environment is noisier in the high-frequency domain, the equivalent time constant $\tau$ of the filter $F_{AC}$ is automatically tuned larger. Accordingly, $F_{AC}$ becomes more like a pure pass-through filter, avoiding differentiation in the high-frequency domain, which is dominated by noise. This property shows that $F_{AC}$, namely the proposed NAVSDE, does possess the desired capability of adapting to unpredictable changes in the system noise.
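The limiting behavior (20)–(21) behind this adaptation of the cutoff can be evaluated directly from (13). In the sketch below the gains follow the Case 1 choices of Section IV, $l_2$ is fixed at the level it adapts to there, and $\varepsilon'$ is an assumed value, so the numbers are purely illustrative: $l$ becomes very large for small $\sigma_s$ (a small equivalent time constant, i.e., a wide-band differentiator) and tends to $l_1$ for large $\sigma_s$.

```python
import numpy as np

# Equivalent gain l(sigma_s) from (13) and the induced time constant
# tau = 1/l of F_AC in (19).  Gains follow the Case 1 choices of Section IV;
# eps_prime (the constant 0 < eps' < eps of the Appendix) is an assumed value.

def equivalent_gain(sigma_s, l0=0.01, l1=10.0, l2=4.0, eps=0.1, eps_prime=0.05):
    c = np.sqrt(2.0 / np.pi)
    return (l1
            + c * l0 / sigma_s
            + c * (l2 - l1 * eps_prime) / sigma_s
              * np.exp(-eps**2 / (2 * sigma_s**2)))

for sigma_s in (1e-4, 1e-2, 1.0, 100.0):
    l = equivalent_gain(sigma_s)
    print(f"sigma_s = {sigma_s:8.4f}:  l = {l:9.2f}   tau = 1/l = {1.0/l:.5f}")
```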

IV. DESIGN GUIDELINE AND SIMULATION EXAMPLES

Note that there are quite a few "free" parameters to be chosen in the NAVSDE design. The design guideline is summarized as follows (a simulation sketch applying these choices is given after this list):

1) choose $l_0$ as small as possible to reduce chattering;
2) choose $l_1$ relatively small, such that the NAVSDE performs like a pass-through filter in the high-frequency domain;
3) choose a small initial value $l_2(0)$ and a large adaptation gain $\gamma$, so that $l_2(t)$ can quickly "adapt" itself to the desired level near $c_1$;
4) choose a small deadzone $\varepsilon$ to ensure the estimation accuracy.

Note that an overly small $l_1$ may lead to a larger distortion at the beginning stage of the estimation, and an overly small $\varepsilon$ may also incur chattering. A compromise has to be made in the determination of $\varepsilon$, depending on the size of the sampling period; a proper $\varepsilon$ is approximately 20–200 times the sampling period.
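The following sketch puts the guideline into practice with the Case 1 parameter values quoted below (our own Python reconstruction, not the authors' code; the run length, random seed, and the use of a first-order filter on $z$ to form the displayed estimate are assumptions):

```python
import numpy as np

# Sketch of a Case-1-style run following the guideline above (our own
# reconstruction, not the authors' code): u(t) = sin(4t), Gaussian noise of
# standard deviation sigma_n, sampling interval 0.0001 s, output filter
# time constant tau = 0.01, and NAVSDE gains l0 = 0.01, l1 = 10, l2(0) = 1,
# deadzone eps = 0.1, adaptation gain gamma = 200, matching Case 1.

dt, T = 1e-4, 3.0
t = np.arange(0.0, T, dt)
rng = np.random.default_rng(1)
sigma_n = 0.01
r = np.sin(4 * t) + sigma_n * rng.standard_normal(t.size)
r_dot = 4 * np.cos(4 * t)

l0, l1, gamma, eps, tau = 0.01, 10.0, 200.0, 0.1, 0.01
sat = lambda w: np.clip(w, -1.0, 1.0)
y, l2, x1 = 0.0, 1.0, 0.0
err2 = 0.0
for k, rk in enumerate(r):
    s = rk - y
    z = l0 * np.sign(s) + l1 * (s - eps * sat(s / eps)) + l2 * sat(s / eps)
    y += dt * z
    if abs(s) > eps:
        l2 += dt * gamma * abs(s)
    x1 += dt * (z - x1) / tau             # filtered derivative estimate
    if t[k] > 0.5:                        # skip the initial transient
        err2 += (x1 - r_dot[k]) ** 2

print("adapted l2:", round(l2, 2),
      " rms error after transient:", round(np.sqrt(err2 / np.sum(t > 0.5)), 3))
```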

Two case studies are conducted to compare the proposed NAVSDE approach with the VSDE and with differential-filtering approaches that do not use any stochastic process model. The objective is to estimate the derivative of a sinusoidal input $u(t) = \sin(4t)$ in the presence of Gaussian, uncorrelated, zero-mean noise with standard deviation $\sigma_n$. The sampling interval is 0.0001 s. The filter time constant is chosen to be $\tau = 0.01$ for both the VSDE and the NAVSDE.

Case 1. Comparison with VSDE: Note that the VSDE is a particular case of the NAVSDE and can be obtained by letting $l_1 = l_2 = 0$. As stated before, the main drawbacks of the VSDE approach are the chattering and the demand for the derivative-signal bound $c_1$. Since the signal source is usually unknown, the magnitude of the derivative signal, $c_1 = 4$, is not available at the VSDE design stage. Consider a low-noise circumstance where $\sigma_n = 0.01$. Choosing $l_0 = 5$, Fig. 2 shows that the estimate is "clamped" at the level of 5 due to the improper choice of the switching gain $l_0$. Next, we choose $l_0 = 15$, which is slightly above the actual bound $c_1$; Fig. 3 shows high chattering in this case and, consequently, degraded estimation performance.

Fig. 2. Estimated results of VSDE with $l_0 = 5$ (—: true $\dot x$; – –: estimated $\dot x$).

Fig. 3. Estimated results of VSDE with $l_0 = 15$ (—: true $\dot x$; – –: estimated $\dot x$).

Now we construct the NAVSDE with gains $l_0 = 0.01$ and $l_1 = 10$, the initial value $l_2(0) = 1$, deadzone size $\varepsilon = 0.1$, and adaptation gain $\gamma = 200$. From Fig. 4, we can observe that the NAVSDE works well after a short transient period. Fig. 5(a) shows that, when the system is free from noise, the gain $l_2$ rapidly converges to the desired level of 4 and remains unchanged. Fig. 5(b) shows the gradual adaptation of $l_2$ to a level around 4 when the signal is contaminated with noise ($l_2^0 = 14$).

Fig. 4. Estimated results of NAVSDE with $l_2(0) = 1.0$ (—: true $\dot x$; – –: estimated $\dot x$).

Fig. 5. Adaptation of $l_2(t)$.

Fig. 6. $l_2(t)$ versus $|s(t)|$.

Fig. 6(a) provides a different angle from which to observe the updating process of $l_2(t)$. It is clearly shown in Fig. 6(a) that, when the system is free from noise, parameter updating of $l_2(t)$ takes place only twice before reaching the desired value 4 from the initial value 1.0; thereafter, the system stays inside the deadzone forever. The deadzone scheme also works in the presence of small system perturbations (noise), as long as the size of the perturbation is less than that of the deadzone, as shown in Fig. 6(b).

Case 2. Comparison with a Differential Filter: To show the adaptation capability of the NAVSDE to changes in the system noise, the following differential filter with fixed time constants is used:

$$ \frac{p}{(\tau_1 p + 1)(\tau_2 p + 1)}. $$

For a fair comparison, we choose $\tau_1 = \tau = 0.01$. For simplicity, we fix the gain $l_2 = 15$ in the NAVSDE and equivalently set

$$ \tau_2 = \frac{1}{l_1 + l_2/\varepsilon} = \frac{1}{160} $$

in the differential filter.
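For reference, a sketch of this fixed differential filter, discretized with backward Euler (the noise levels and the numbers it produces are illustrative only and are not those of Table I):

```python
import numpy as np

# Sketch of the fixed differential filter p / ((tau1*p + 1)(tau2*p + 1))
# used for comparison in Case 2, discretized with backward Euler
# (tau1 = 0.01, tau2 = 1/160 as above; the noise levels are illustrative
# and the numbers produced here are not those of Table I).

def diff_filter(r, dt, tau1, tau2):
    d1 = np.zeros_like(r)   # stage 1: p / (tau1*p + 1)
    d2 = np.zeros_like(r)   # stage 2: 1 / (tau2*p + 1)
    for k in range(1, len(r)):
        d1[k] = (tau1 * d1[k - 1] + (r[k] - r[k - 1])) / (tau1 + dt)
        d2[k] = (tau2 * d2[k - 1] + dt * d1[k]) / (tau2 + dt)
    return d2

dt = 1e-4
t = np.arange(0.0, 3.0, dt)
rng = np.random.default_rng(2)
r_dot = 4 * np.cos(4 * t)

for sigma_n in (0.01, 1.0):
    r = np.sin(4 * t) + sigma_n * rng.standard_normal(t.size)
    est = diff_filter(r, dt, tau1=0.01, tau2=1.0 / 160.0)
    err = est[t > 0.5] - r_dot[t > 0.5]
    print(f"sigma_n = {sigma_n:4.2f}: rms error = {np.sqrt(np.mean(err**2)):.3f}")
```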

Table I shows the root-mean-square (rms) values of the derivative estimation with respect to different system noise levels. It can be seen that the NAVSDE works as well as the differential filter at the low noise level and works better in the heavily noise-contaminated environment. To make the rms values equal at $\sigma_n = 1.0$, a larger time constant (lower cutoff frequency) $\tau_2 = 1/35$ has to be used in the differential filter, which yields an rms value of 5.6615. The phase delay, however, is doubled due to the low cutoff frequency; as a consequence, the estimation error of the differential filter is also twice as large as that of the NAVSDE.

TABLE I
COMPARATIVE RESULTS IN rms VALUES

V. CONCLUSIONS

A nonlinear derivative estimator has been developed in this paper. A detailed analysis of the proposed derivative estimator is given for the cases of absence and presence of system and measurement noise. The new NAVSDE shows the property of adapting to possibly unpredictable changes in the system noise. A simple adaptive law is used to tune the switching gain of the estimator and, hence, to remove the requirement of knowing the upper bound of the derivative signal. Both theoretical analysis and simulation results show that the proposed NAVSDE achieves better performance in comparison with other methods, such as the VSDE and numerical differentiators.

APPENDIX
STATISTICAL LINEARIZATION

Assume that the input $s$ is a Gaussian random process with zero mean. The probability density function of the signal $s$ is

$$ P_s = \frac{1}{\sqrt{2\pi}\,\sigma_s}\, e^{-s^2/2\sigma_s^2}. \tag{22} $$

In quantitative terms, let $l s(t)$ be the resulting linear approximation of a nonlinear function $f(s)$. The mean-squared-error criterion chosen to be minimized is

$$ I = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} \left\{ f[s(t)] - l s(t) \right\}^2 dt. \tag{23} $$

This criterion is equivalent to

$$ I = \int_{-\infty}^{\infty} \left[ f(s) - l s \right]^2 P_s\, ds. \tag{24} $$

The minimizing solution can be obtained from $\partial I/\partial l = 0$ (note that $\partial^2 I/\partial l^2 = 2\sigma_s^2 > 0$). It follows that

$$ l = \frac{\displaystyle\int_{-\infty}^{\infty} s f(s) P_s\, ds}{\displaystyle\int_{-\infty}^{\infty} s^2 P_s\, ds} = \frac{1}{\sigma_s^2}\int_{-\infty}^{\infty} s f(s) P_s\, ds. \tag{25} $$

Note that for our case the nonlinear function consists of the following three components:

$$ f(s) = f_0(s) + f_1(s) + f_2(s) $$
$$ f_0(s) = l_0\,\mathrm{sgn}(s) $$
$$ f_1(s) = \begin{cases} l_1(s - \varepsilon), & s > \varepsilon \\ 0, & |s| \le \varepsilon \\ l_1(s + \varepsilon), & s < -\varepsilon \end{cases} $$
$$ f_2(s) = \begin{cases} l_2, & s > \varepsilon \\ l_2 s/\varepsilon, & |s| \le \varepsilon \\ -l_2, & s < -\varepsilon. \end{cases} \tag{26} $$

First, for the signum function $f_0(s) = l_0\,\mathrm{sgn}(s)$, we have

$$ \tilde l_0 = \frac{1}{\sigma_s^2}\left[ \int_{-\infty}^{0} s(-l_0) P_s\, ds + \int_{0}^{\infty} s\, l_0 P_s\, ds \right]. \tag{27} $$

Noting that

$$ \int_0^{t} \frac{s}{\sqrt{2\pi}\,\sigma_s}\, e^{-s^2/2\sigma_s^2}\, ds = \left[-\frac{\sigma_s}{\sqrt{2\pi}}\, e^{-s^2/2\sigma_s^2}\right]_0^{t} = \frac{\sigma_s}{\sqrt{2\pi}}\left(1 - e^{-t^2/2\sigma_s^2}\right) \tag{28} $$


and substituting (28), with $t \to \infty$, into (27) yields

$$ \tilde l_0 = \sqrt{\frac{2}{\pi}}\,\frac{l_0}{\sigma_s}. \tag{29} $$

Similarly, the equivalent gain of $f_1(s)$ is

$$ \tilde l_1 = l_1 - \sqrt{\frac{2}{\pi}}\,\frac{l_1 \varepsilon'}{\sigma_s}\, e^{-\varepsilon^2/2\sigma_s^2} \tag{30} $$

where $0 < \varepsilon' < \varepsilon$ is a constant. The equivalent gain of $f_2(s)$ is

$$ \tilde l_2 = \sqrt{\frac{2}{\pi}}\,\frac{l_2}{\sigma_s}\, e^{-\varepsilon^2/2\sigma_s^2}. $$

Adding the three equivalent gains $\tilde l_0 + \tilde l_1 + \tilde l_2$ gives the overall equivalent transfer coefficient $l$ in (13).
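As a quick numerical check of the linearization formula (25) and the closed-form relay gain (29), the following sketch evaluates the gain integrals by quadrature for assumed parameter values (the constant $\varepsilon'$ appearing in (30) is not needed here):

```python
import numpy as np

# Numerical check of the statistical-linearization formula (25): the gain of
# each nonlinear component is (1/sigma_s^2) * E[s*f(s)] under a zero-mean
# Gaussian s.  The relay gain is compared with the closed form (29).
# Parameter values (l0, l1, l2, eps, sigma_s) are illustrative assumptions.

l0, l1, l2, eps, sigma_s = 0.01, 10.0, 4.0, 0.1, 0.05

s = np.linspace(-10 * sigma_s, 10 * sigma_s, 400001)
Ps = np.exp(-s**2 / (2 * sigma_s**2)) / (np.sqrt(2 * np.pi) * sigma_s)   # (22)

def gain(f_vals):
    """Equivalent gain (25): (1/sigma_s^2) * integral of s*f(s)*Ps ds."""
    return np.trapz(s * f_vals * Ps, s) / sigma_s**2

f0 = l0 * np.sign(s)                                   # relay component
f1 = l1 * (s - eps * np.clip(s / eps, -1, 1))          # deadzone component
f2 = l2 * np.clip(s / eps, -1, 1)                      # saturation component

print("relay gain (quadrature):", gain(f0))
print("relay gain, closed form (29):", np.sqrt(2 / np.pi) * l0 / sigma_s)
print("deadzone gain:", gain(f1), "  saturation gain:", gain(f2))
```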
