
Letters to the Editor

Recurrent-Neural-Network-Based Implementation of a Programmable Cascaded Low-Pass Filter Used in Stator Flux Synthesis of Vector-Controlled Induction Motor Drive

Luiz E. B. da Silva, Bimal K. Bose, and Joao O. P. Pinto

Abstract—The concept of a programmable cascaded low-pass filter (PCLPF) for stator flux vector synthesis by ideal integration of the stator voltages at any frequency was introduced by Bose and Patel. A new implementation of this filter is proposed here that uses a combination of a recurrent neural network trained by a Kalman filter and a polynomial neural network. The proposed structure is simple, permits faster implementation on a digital signal processor, and gives improved performance.

Index Terms—Flux estimation, induction motor drive, polynomial neural network, recurrent neural network, vector control.

I. INTRODUCTION

The difficulty of synthesizing the stator flux vector $(\psi_{ds}^s, \psi_{qs}^s)$ by ideal integration of the stator voltages $(v_{ds}^s, v_{qs}^s)$ at low frequency is well known in the literature. Accurate estimation of the flux vector over the full frequency range is essential in both stator- and rotor-flux-oriented direct vector control of an induction motor drive. Instead of using a single-stage integrator, a programmable cascaded low-pass filter (PCLPF) method of integration was suggested that operates very well from extremely low frequencies up to the high-frequency field-weakening range. The PCLPF concept was applied to a 100-kW electric vehicle drive with stator-flux-oriented vector control, and its performance was found to be very good. The same PCLPF can, of course, also be used to estimate the rotor flux vector for rotor-flux-oriented vector control. A neural-network-based implementation of the same filter is proposed here, which is simpler, has better performance, and executes faster on a digital signal processor (DSP). The proposed filter is based on a combination of a recurrent neural network (RNN) trained by a Kalman filter and a polynomial neural network (PNN). Implemented in this way, the filter behaves more nearly like an ideal integrator at any frequency.

II. REVIEW OF PCLPF

Fig. 1. PCLPF for stator flux synthesis.

The PCLPF, as described in [1] and [2], uses a series of n first-order low-pass filters to obtain a total phase shift of 90° together with an appropriate gain, so as to realize ideal integration at any frequency. The time constant ($\tau$) of the component filters and the amplitude-compensation gain ($G$) of the PCLPF are nonlinear functions of frequency, given by

$$\tau = \frac{1}{\omega_e}\tan\left[\frac{1}{n}\left(90^\circ - \tan^{-1}(\tau_h\omega_e)\right)\right] = f(\omega_e) \tag{1}$$

$$G = \frac{1}{\omega_e}\sqrt{\left[1+(\tau\omega_e)^2\right]^n\left[1+(\tau_h\omega_e)^2\right]} = g(\omega_e) \tag{2}$$

where $\omega_e$ is the frequency, $n$ is the number of filter stages, $\tau_h$ is the time constant of the hardware low-pass filter, and $f(\cdot)$ and $g(\cdot)$ are the respective nonlinear functions, as indicated. Note that the programmable $\tau$ keeps the phase shift and gain of each filter stage identical at any frequency, while the programmable $G$ adjusts the total gain of the PCLPF to achieve ideal integration of the stator voltage signal behind the stator resistance drop. The $\tau_h$ term compensates the phase shift ($\phi_h$) due to the input hardware filter. Fig. 1 shows the block diagram of a three-stage cascaded filter in which the stator resistance drop is subtracted from the machine terminal voltage; the phase shift ($\phi_h$) due to the hardware low-pass filter is also indicated.
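As a numerical illustration of (1) and (2), the following sketch computes the programmable parameters and runs a discrete n-stage cascade. It is a minimal sketch only: it assumes $\omega_e$ in rad/s, a backward-Euler discretization, the 125-μs sample time quoted in Section VI, and an illustrative hardware time constant $\tau_h$; none of these implementation choices are specified in the letter.

```python
import numpy as np

def pclpf_params(w_e, n=3, tau_h=1e-3):
    """Programmable time constant tau and gain G from (1) and (2).
    w_e: electrical frequency in rad/s (assumed); tau_h: hardware LPF
    time constant (illustrative value, not from the letter)."""
    tau = (1.0 / w_e) * np.tan((np.pi / 2 - np.arctan(tau_h * w_e)) / n)
    G = (1.0 / w_e) * np.sqrt((1.0 + (tau * w_e) ** 2) ** n
                              * (1.0 + (tau_h * w_e) ** 2))
    return tau, G

def pclpf(v, w_e, n=3, tau_h=1e-3, Ts=125e-6):
    """Run samples v (stator voltage behind the stator-resistance drop)
    through an n-stage programmable cascaded LPF; returns the flux signal."""
    tau, G = pclpf_params(w_e, n, tau_h)
    a = Ts / (tau + Ts)              # backward-Euler LPF coefficient
    y = np.zeros(n)                  # one state per filter stage
    out = np.empty(len(v))
    for k, x in enumerate(v):
        for j in range(n):           # cascade the n identical stages
            y[j] += a * (x - y[j])
            x = y[j]
        out[k] = G * x               # amplitude compensation
    return out
```

For a 50-Hz sine input, `pclpf(v, 2 * np.pi * 50)` should return a signal lagging the input by close to 90° overall and scaled by roughly $1/\omega_e$, which is the ideal-integrator behavior the PCLPF is designed to approximate.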

III. RNN

The identification and control of dynamical systems using neural networks have been widely studied in recent years. A common approach to realizing an artificial-neural-network (ANN)-based dynamical system is to incorporate tapped delay lines on the applied inputs, on the measured outputs, or in the delayed feedback of a static feedforward neural network [3]. The temporal and dynamic system representation capabilities of RNNs have been shown to be considerably greater than those of purely static networks.

Fig. 2. (a) General structure of RNN. (b) General structure of PNN. (c) Stator flux synthesis by RNN–PNN-based filter. (d) Weights of RNN as function of frequency.

Fig. 2(a) shows the general architecture of an RNN, where $z^{-1}$ indicates a one-sample time delay. The network inputs are defined as an $M \times 1$ vector $u(k)$ applied at the discrete instant $k$; the outputs are defined as an $N \times 1$ vector $Y(k+1)$, corresponding to the neuron activities produced one step ahead at the discrete time $(k+1)$; and $W_{11}$ to $W_{N(N+M)}$ are the weights of the network. The internal activity of the RNN relating input and output is described by

$$U(k) = \begin{bmatrix} U_1(k) & \cdots & U_{N+M}(k) \end{bmatrix}^T = \begin{bmatrix} Y_1(k) & \cdots & Y_N(k) & u_1(k) & \cdots & u_M(k) \end{bmatrix}^T. \tag{3}$$

The net internal activity of neuron $j$ at discrete time $k$ is given by

$$\mathrm{net}_j(k) = \sum_{i=1}^{N+M} W_{ji}(k)\,U_i(k) \tag{4}$$

or, in matrix form,

$$\begin{bmatrix} \mathrm{net}_1 \\ \vdots \\ \mathrm{net}_N \end{bmatrix} = \begin{bmatrix} W_{11} & \cdots & W_{1(N+M)} \\ \vdots & \ddots & \vdots \\ W_{N1} & \cdots & W_{N(N+M)} \end{bmatrix} \begin{bmatrix} U_1 \\ \vdots \\ U_{N+M} \end{bmatrix}. \tag{5}$$

The RNN output $Y(k+1)$ is given by $Y_j(k+1) = \varphi(\mathrm{net}_j(k))$, where $\varphi(\cdot)$ is the activation function.
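The forward pass in (3) through (5), together with the activation above, amounts to a few lines of code. The sketch below is illustrative; the letter does not specify $\varphi$, so tanh is used as a placeholder (for the effectively linear flux filter of Section VI an identity activation would apply).

```python
import numpy as np

def rnn_step(W, Y, u, phi=np.tanh):
    """One RNN update per (3)-(5).  W: N x (N+M) weight matrix;
    Y: length-N state vector; u: length-M input vector.
    phi=np.tanh is a placeholder activation, not specified in the letter."""
    U = np.concatenate([Y, u])   # stacked state and input, eq. (3)
    net = W @ U                  # net internal activity, eqs. (4)-(5)
    return phi(net)              # neuron outputs one step ahead, Y(k+1)
```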

IV. EXTENDED KALMAN FILTER TRAINING

In recent years, investigation of the extended Kalman filter (EKF) as the basis for an RNN training algorithm has shown very good results in terms of the amount of training data and the total training time required. EKF-based training was shown to need significantly less training data than pure gradient-descent algorithms [4]. Using the same derivative information as gradient descent, the EKF algorithm, with appropriate simplifications, has modest computational needs for training an RNN.

RNN training can be viewed as a parameter-estimation problem; the main difficulty lies in computing the derivatives of the network outputs with respect to the trainable weights. Training is formulated as a weighted least-squares minimization problem in which the error vector is the difference between functions of the network's output nodes and the desired values of these functions. The desired vector at time $k$ is $d(k) = [d_1(k) \cdots d_N(k)]^T$. Let $h(k)$ denote a vector of functions of the network outputs $Y(k)$; both $d(k)$ and $h(k)$ are of length $N$. The training cost function is $E(k) = \frac{1}{2}\,\xi(k)^T S(k)\,\xi(k)$, where $S(k)$ is a user-specified nonnegative-definite weighting matrix and $\xi(k)$ is the error vector $\xi(k) = d(k) - h(k)$.

The RNN trainable weights are arranged into an $m$-dimensional vector $W(k)$. The EKF algorithm updates the network weight values and the approximate error covariance matrix $P(k)$, which models the correlation between each pair of weights in the network. At discrete time $k$, the input signals and recurrent node outputs are propagated through the network and the functions $h(k)$ are calculated. The error vector $\xi(k)$ is then evaluated for the current weight estimates $W$. The dynamic derivatives of each component of $h(k)$ with respect to the network weights are arranged in an $m \times N$ matrix $H(k)$

$$H(k) = \begin{bmatrix} \dfrac{\partial \varphi_1(\mathrm{net}_1)}{\partial W_1} & \cdots & \dfrac{\partial \varphi_N(\mathrm{net}_N)}{\partial W_1} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial \varphi_1(\mathrm{net}_1)}{\partial W_m} & \cdots & \dfrac{\partial \varphi_N(\mathrm{net}_N)}{\partial W_m} \end{bmatrix}. \tag{6}$$

Then, $W(k)$ and $P(k)$ are updated using the general EKF recursion equations

$$\begin{aligned} A(k) &= \left[\left(\eta(k)\,S(k)\right)^{-1} + H(k)^T P(k) H(k)\right]^{-1}\\ K(k) &= P(k)\,H(k)\,A(k)\\ W(k+1) &= W(k) + K(k)\,\xi(k)\\ P(k+1) &= P(k) - K(k)\,H(k)^T P(k) + Q(k) \end{aligned} \tag{7}$$

where $\eta(k)$ is the scalar learning parameter and $Q(k)$ is a diagonal covariance matrix that accounts for the noise affecting the signals involved in the training process. The presence of this matrix helps to avoid numerical divergence of the algorithm and also helps keep the algorithm from becoming trapped in local minima [4].
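The recursion in (7) maps directly to code. The following single-update sketch uses the shapes defined above ($W$: $m$ weights; $H$: $m \times N$; $P$: $m \times m$) and is illustrative only; in particular, it omits the computation of the dynamic derivatives in $H(k)$.

```python
import numpy as np

def ekf_update(W, P, xi, H, S, eta, Q):
    """One EKF weight update per (7).
    W: m-vector of weights; P: m x m approximate error covariance;
    xi: N-vector error d(k) - h(k); H: m x N dynamic derivatives;
    S: N x N weighting matrix; eta: scalar learning parameter;
    Q: m x m diagonal covariance modeling training noise."""
    A = np.linalg.inv(np.linalg.inv(eta * S) + H.T @ P @ H)  # N x N
    K = P @ H @ A                                            # m x N Kalman gain
    W = W + K @ xi                                           # weight update
    P = P - K @ H.T @ P + Q                                  # covariance update
    return W, P
```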

V. PNN

The architecture of a PNN is based on an activation function that is an elementary polynomial of arbitrary order, so that the final structure represents an Ivakhnenko polynomial [5]. This feedforward architecture is very well suited to function approximation. Fig. 2(b) shows the general structure of a PNN, where an individual neuron is indicated as a subset. Each neuron output can be represented by a polynomial function of the inputs $x_i$ and $x_j$ with coefficients $A$, $B$, $C$, $D$, $E$, and $F$, which are equivalent to the network weights. The output $y$ of each neuron is given by

$$y = A + Bx_i + Cx_j + Dx_i^2 + Ex_j^2 + Fx_ix_j. \tag{8}$$
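A PNN neuron per (8) is just a six-coefficient quadratic in two inputs; a minimal sketch:

```python
def pnn_neuron(x_i, x_j, A, B, C, D, E, F):
    """Output of one PNN neuron, eq. (8): a second-order (Ivakhnenko)
    polynomial of two inputs; the coefficients are the weights."""
    return A + B * x_i + C * x_j + D * x_i**2 + E * x_j**2 + F * x_i * x_j
```

In GMDH training, such neurons are generated for candidate input pairs, the best performers are retained, and their outputs feed the next layer until a stopping rule halts growth, matching the prediction, selection, and stopping elements described next.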

The second-order polynomial function is easy to implement and provides the desired nonlinearity. The main features of this network, compared with a standard backpropagation network, are automatic definition of the architecture during training, no learning parameters to set, no local-minima problem, and fast convergence. To achieve these features, the group method of data handling (GMDH) algorithm of Ivakhnenko [5] was used to adjust the polynomial coefficients and to define the final architecture of the network. The method consists of multiphase predictive modeling that iterates, subject to a stopping rule, between selecting predictors on the basis of their explanatory power and creating new predictors. In other words, the algorithm has three elements: prediction, selection, and a stopping rule.

VI. RNN-BASED IMPLEMENTATION OF PROGRAMMABLE CASCADED FILTER

The application of the PCLPF in a stator-flux-oriented vector-controlled induction motor drive involves integrating the machine terminal voltage signals behind the stator resistance drop. For sinusoidal signals at any frequency, the integrator outputs $(\psi_{ds}^s, \psi_{qs}^s)$ should be 90° out of phase with the inputs and scaled by a gain factor of $1/\omega_e$. In the RNN-based implementation, the three-stage PCLPF was represented using only two tapped delay lines in each integrator. The transfer function of this two-stage filter, in discrete time, can be derived in the general matrix form as

$$\begin{bmatrix} Y_1(k+1)\\ Y_2(k+1) \end{bmatrix} = \begin{bmatrix} W_{11} & W_{12}\\ W_{21} & W_{22} \end{bmatrix} \begin{bmatrix} Y_1(k)\\ Y_2(k) \end{bmatrix} + \begin{bmatrix} W_{13}\\ W_{23} \end{bmatrix} u(k) \tag{9}$$

where $Y$ is the filter output, $u$ is the filter input, and $W$ contains the trainable weights. It can be shown that $W_{12} = W_{23} = 0$ in (9). The filter implementation for $\psi_{ds}^s$ and $\psi_{qs}^s$ synthesis using the RNN is shown in the upper part of Fig. 2(c). The external hardware low-pass filter and the stator-resistance-drop compensation are the same as in Fig. 1. The RNN-based integrator was trained off-line by the Kalman filter algorithm over the full range of motor frequency (0.01–200 Hz). The weights calculated for each frequency were then used to train the PNN off-line; these weights are shown as a function of frequency in Fig. 2(d). For example, the weights at 32 Hz are $W_{11} = 0.9764$, $W_{22} = 0.9739$, $W_{21} = 0.0254$, and $W_{13} = 0.0002$. The PNN training was found to be very accurate, with an error of less than $10^{-5}$. The final structure of the combined RNN- and PNN-based flux vector estimator is shown in Fig. 2(c). The RNN–PNN system works in such a way that, for each input frequency $\omega_e$, a new set of weights is computed by the PNN and fed to the RNN, as shown, to correctly estimate the fluxes. Note that the PNN essentially acts as a lookup-table generator. The off-line training time of the RNN depends on the frequency ($\omega_e$) and the sample time: the lower the frequency, the longer the training time, and vice versa. For the entire range of motor operation (0.01–200 Hz in steps of 0.1 Hz), training takes 18 h on a Pentium II (200-MHz) PC with a sample time of 125 μs. The PNN training takes just a few minutes.
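To make the structure of Fig. 2(c) concrete, the sketch below runs the two-delay RNN of (9) with $W_{12} = W_{23} = 0$, using the 32-Hz weights quoted above. In a full implementation, the PNN would regenerate these four weights whenever $\omega_e$ changes; taking the second state as the flux output is our reading of the cascade rather than something the letter states explicitly.

```python
import numpy as np

def rnn_flux_filter(v, W11, W21, W22, W13):
    """Two-delay RNN integrator per (9), with W12 = W23 = 0.
    v: input samples (voltage behind the stator-resistance drop)."""
    Y1 = Y2 = 0.0
    out = np.empty(len(v))
    for k, u in enumerate(v):
        # simultaneous state update, eq. (9)
        Y1, Y2 = W11 * Y1 + W13 * u, W21 * Y1 + W22 * Y2
        out[k] = Y2        # second stage taken as the flux estimate
    return out

# Weights at 32 Hz from Section VI (in practice supplied by the PNN):
# flux = rnn_flux_filter(v, 0.9764, 0.0254, 0.9739, 0.0002)
```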

VII. RESULTS

The extensive study of the RNN–PNN-based flux estimator has shown excellent performance over the whole frequency range (0.01–200 Hz). Fig. 3 shows the transient responses of the PCLPF and the RNN–PNN system for one of the flux channels.

Fig. 3. Comparison of response of RNN–PNN filter and PCLPF.

These outputs were equally scaled before plotting for clarity. The responses are identical in steady state, but they differ during the transient: the PCLPF exhibits a sharp discontinuity, whereas the RNN–PNN response is gradual. Since the objective of the filter is to approach the operation of an ideal integrator, the smoother transient response in Fig. 3 indicates that the RNN–PNN performs this task more precisely than the PCLPF. The estimated time to run the RNN–PNN filter on a TMS320C30 DSP is around 13.5 μs.

VIII. CONCLUSION

PCLPF-based stator flux vector synthesis was successfully replaced by an RNN whose weights, as a function of frequency, are updated by another feedforward polynomial-type neural network. The RNN structure was derived on the basis of a two-stage filter and was trained by a Kalman filter. The whole RNN–PNN flux vector estimator was evaluated extensively in the frequency range of 0.01–200 Hz and showed improved performance. The proposed flux estimator is simpler and easier to implement on a DSP. It can also be used for rotor-flux-oriented vector control, as well as in a direct-torque-control-based system.

REFERENCES

[1] B. K. Bose and N. R. Patel, "A programmable cascaded low-pass filter-based flux synthesis for a stator flux-oriented vector-controlled induction motor drive," IEEE Trans. Ind. Electron., vol. 44, pp. 140–143, Feb. 1997.
[2] B. K. Bose and N. R. Patel, "A sensorless stator flux oriented vector controlled induction motor drive with neuro-fuzzy based performance enhancement," in Conf. Rec. IEEE-IAS Annu. Meeting, 1997, pp. 393–400.
[3] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamic systems using neural networks," IEEE Trans. Neural Networks, vol. 1, pp. 4–27, Jan. 1990.
[4] G. V. Puskorius and L. A. Feldkamp, "Neurocontrol of nonlinear dynamic systems with Kalman filter trained recurrent networks," IEEE Trans. Neural Networks, vol. 5, pp. 279–297, Mar. 1994.
[5] A. P. A. Silva, P. C. Nascimento, G. L. Torres, and L. E. Borges da Silva, "Alternative approach for adaptive real-time control using a nonparametric neural network," in Conf. Rec. IEEE-IAS Annu. Meeting, 1995, pp. 1788–1794.

A Startup Control Algorithm for the Split-Link Converter for a Switched Reluctance Motor Drive

Y. Liu and P. Pillay

Abstract—This letter analyzes the motor startup problem that arises when a split-link converter is used for switched reluctance motor drives. A new control algorithm to solve this problem is presented, together with the calculation of the split-link capacitance required during normal operation.

Index Terms—DC-link capacitance, multiphase operation.

I. INTRODUCTION

The switched reluctance motor (SRM) has the advantages of mechanical and thermal robustness, ease of manufacture, and high-speed capability over squirrel-cage induction motors [1]. It is, therefore, attracting considerable research interest, as well as the attention of manufacturers. Four types of converters are widely used in SRM drives: the classic converter with 2n active switches, the (n+1)-switch converter, the C-dump converter, and the split-link converter. The split-link converter has the advantage of the lowest component count compared with the other types [2] and is, therefore, available commercially. While there is very little existing literature on this converter, some drawbacks are mentioned in [2]. These include a reduced ability for soft chopping and difficulties in maintaining voltage balance between phases. It can only be used in SRM drives with an even number of phases.

For industrial applications, there are additional considerations. Since two capacitors divide the dc-link voltage in two and each capacitor supplies the phase voltage, the conduction of a phase causes ripple in the capacitor voltage. At startup and during low-speed operation, the problem is more severe; it is quite possible that the motor does not start. This letter analyzes this problem and presents a new algorithm to solve it, as well as the calculation of the capacitance required for the split-link converter to operate satisfactorily over its speed range.

II. VOLTAGE RIPPLE ANALYSIS AND THE STARTUP ALGORITHM

The converter circuit for a four-phase drive is shown in Fig. 1(a). The two capacitors act like two dc power supplies for the upper two phases and the lower two phases. When Q1 is turned on, current flows through Q1 and phase A; C1 is, therefore, discharged and C2 is charged, as shown in Fig. 1(b). When Q1 is turned off, the current in phase A flows through D1 and C2; hence, C2 is charged again and Vc2 increases further. A similar sequence occurs when phases B, C, and D are excited. During normal operation, the conducting phase alternates between the upper leg and the lower leg, and the capacitors are charged and discharged in turn, so the capacitor voltages remain relatively stable. However, at low speed, the frequency of alternation between the upper leg and the lower leg is low, so the voltage of a capacitor can drop significantly. Fig. 2(a) shows the currents of phases A and C and the voltage of C2 at an operating speed of 79 r/min with 15 (mechanical degrees)
