Direct Torque Control of Induction Motor Based on Artificial Neural Networks Speed Control Using MRAS and Neural PID Controller

M. L. Zegai, M. Bendjebbar, K. Belhadri
University of Science and Technology of Oran
Department of Electrical Engineering, Laboratory of Development of Electrical Devices (LDEE)
Oran, Algeria
[email protected], [email protected], [email protected]

M. L. Doumbia, B. Hamane and P. M. Koumba
Université du Québec à Trois-Rivières
Department of Electrical and Computer Engineering, Industrial Electronic Research Group
Trois-Rivières, QC, Canada
[email protected], [email protected], [email protected]
Abstract—This paper proposes a direct torque control (DTC) scheme for the induction motor (IM) that uses artificial neural networks (ANN) to improve the system's performance. The Model Reference Adaptive System (MRAS) method is used for the estimation and regulation of the rotor speed. The whole DTC structure is designed in Matlab/Simulink, the neural controller is designed with the Neural Network Toolbox, and the system's performance is compared with that of conventional DTC.
Index Terms--Induction Motor, Direct Torque Control, Model Reference Adaptive System, Artificial Neural Networks, Neural Controller.

I. INTRODUCTION

Direct torque control (DTC) of PWM-inverter-fed induction motor drives has received considerable attention in recent years [1-2]. Figure 1 presents the basic configuration of the direct-torque-controlled induction motor drive. The scheme uses stator-flux and torque estimators to control a PWM-inverter-fed drive. The stator flux amplitude and the torque are the command signals; they are compared with their estimated values to give the instantaneous flux error and torque error. These error signals are delivered to two-level and three-level hysteresis comparators, and the stator flux angle determines the sector number, as shown in Figure 1. In general, the conventional DTC scheme has the following disadvantages:
• variable switching frequency;
• current and torque distortions caused by the sector changes;
• starting and low-speed operation problems;
• high sampling frequency needed for digital implementation of the hysteresis comparators.
Figure 1. Basic configuration of conventional DTC scheme
To reduce the torque ripple and increase the performance of the conventional DTC system, the following strategy is proposed: the traditional controller (the look-up table of the switching-state selector given in Table I) is replaced by an artificial-neural-network (ANN) based controller, and the conventional PID regulator is replaced by a neural PID. Moreover, the rotor speed is estimated by the MRAS technique.
TABLE I. LOOK-UP TABLE FOR CONVENTIONAL DTC

dΦ   dT    S1   S2   S3   S4   S5   S6
1     1    V2   V3   V4   V5   V6   V1
1     0    V0   V7   V0   V7   V0   V7
1    -1    V6   V1   V2   V3   V4   V5
0     1    V3   V4   V5   V6   V1   V2
0     0    V7   V0   V7   V0   V7   V0
0    -1    V5   V6   V1   V2   V3   V4

(dΦ: output of the flux hysteresis comparator; dT: output of the torque hysteresis comparator; S1-S6: stator-flux sector)
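Table I is in effect a two-dimensional lookup indexed by the hysteresis outputs and by the sector containing the stator flux angle. A minimal sketch of this selection logic (the Python encoding and the sector-boundary convention below are illustrative, not from the paper):

```python
import math

# Voltage vectors are indexed 0..7; V0 and V7 are the zero vectors.
# Keys: (flux hysteresis output dPhi, torque hysteresis output dT).
# Values: vector index for sectors S1..S6, transcribed from Table I.
SWITCH_TABLE = {
    (1, 1):  [2, 3, 4, 5, 6, 1],
    (1, 0):  [0, 7, 0, 7, 0, 7],
    (1, -1): [6, 1, 2, 3, 4, 5],
    (0, 1):  [3, 4, 5, 6, 1, 2],
    (0, 0):  [7, 0, 7, 0, 7, 0],
    (0, -1): [5, 6, 1, 2, 3, 4],
}

def sector(theta_s: float) -> int:
    """Sector number (1..6) of the stator flux angle theta_s (radians).
    Sector 1 is taken as centred on the alpha axis: [-30 deg, +30 deg)."""
    deg = math.degrees(theta_s) % 360.0
    return int(((deg + 30.0) % 360.0) // 60.0) + 1

def select_vector(flux_err: int, torque_err: int, theta_s: float) -> int:
    """Return the inverter voltage-vector index for the current
    hysteresis outputs and stator-flux position."""
    return SWITCH_TABLE[(flux_err, torque_err)][sector(theta_s) - 1]
```

For example, with the flux vector in sector S1 and both comparators demanding an increase, the table returns V2, which advances the flux while raising the torque.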
II. INDUCTION MOTOR MODELING
The induction machine dynamic model expressed in the synchronous reference frame is given by the stator voltage equation [8-10]:

v̄_s = R_s ī_s + σL_s (dī_s/dt) + (L_m/L_r)(dφ̄_r/dt) + jω_s [σL_s ī_s + (L_m/L_r) φ̄_r],  with σ = 1 − L_m²/(L_s L_r)    (1)

The stator and rotor flux components can be given as [9]:

φ_s = L_s i_s + L_m i_r,   φ_r = L_r i_r + L_m i_s    (2)

The electromagnetic torque and the motion equation are [10]:

T_em = (3/2) p (φ_sd i_sq − φ_sq i_sd),   J (dΩ/dt) = T_em − T_L − f Ω    (3)
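The flux and torque estimates used by the DTC comparators follow directly from these equations, with the stator flux obtained by integrating v_s − R_s i_s. A minimal sketch of one estimation step in the stationary (alpha-beta) frame, assuming forward-Euler integration; the parameter values Rs, p and Ts are placeholders, not the paper's machine data:

```python
import numpy as np

def estimate_flux_torque(v_ab, i_ab, psi_prev, Rs=1.2, p=2, Ts=1e-4):
    """One step of the classical stator-flux and torque estimator for DTC.
    psi_s = integral of (v_s - Rs*i_s) dt, approximated by forward Euler.
    v_ab, i_ab, psi_prev: length-2 arrays of alpha-beta components.
    Returns (psi_new, flux magnitude, flux angle, electromagnetic torque)."""
    psi = psi_prev + Ts * (v_ab - Rs * i_ab)
    psi_mag = np.hypot(psi[0], psi[1])
    theta = np.arctan2(psi[1], psi[0])
    # T_em = (3/2) p (psi_alpha * i_beta - psi_beta * i_alpha)
    torque = 1.5 * p * (psi[0] * i_ab[1] - psi[1] * i_ab[0])
    return psi, psi_mag, theta, torque
```

The magnitude feeds the flux comparator, the torque feeds the torque comparator, and the angle selects the sector for the switching table.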
III. ARTIFICIAL NEURAL NETWORK AS A UNIVERSAL APPROXIMATOR
Universal approximators such as ANNs, fuzzy logic (FL), and hybrid ANN-FL systems have been successfully applied to solve many nonlinear control problems. The design objective of an ANN or FL system is to approximate a nonlinear mapping: an arbitrary function f: ℝⁿ → ℝ is to be approximated by the ANN or FL system. It is well known that any continuous function f defined on a compact set C ⊂ ℝⁿ can be uniformly approximated by polynomials to any degree of accuracy: given ε > 0, there exists a polynomial p such that |p(x) − f(x)| < ε for all x ∈ C. This is the Weierstrass theorem, and its extended form, the Stone-Weierstrass theorem, provides the general framework for designing the approximation function [3]. The ANN is composed of input, output and hidden layers, and because it satisfies the Stone-Weierstrass conditions, it can be used as a universal approximator. A schematic diagram of its general form, consisting of n inputs, a single output and a single hidden layer, is shown in Figure 2. The output of the ANN is given by equation (4):

y(x) = Σ_{j=1}^{m} w_j φ(v_jᵀ x + b_j)    (4)

where x ∈ ℝⁿ is the input vector, w_j (j = 1, …, m) are the weights between the j-th hidden node and the output, φ is the activation function, and v_j and b_j are the input weights and bias of the j-th hidden node. Figure 2 shows the structure of the ANN [12].
Figure 2. Structure of artificial neural network
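Equation (4) can be evaluated directly as one matrix-vector product per layer. A small sketch with tanh as the hidden activation; the sizes and the random (untrained) weights are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 16                   # n inputs, m hidden neurons (illustrative)
V = rng.normal(size=(m, n))    # input-to-hidden weights v_j
b = rng.normal(size=m)         # hidden biases b_j
w = rng.normal(size=m)         # hidden-to-output weights w_j

def ann(x):
    """Single-hidden-layer network of equation (4):
    y = sum_j w_j * phi(v_j . x + b_j), with phi = tanh."""
    return float(w @ np.tanh(V @ x + b))
```

With trained weights, networks of this form can approximate any continuous map on a compact set, which is what licenses their use as switching-state selectors and regulators later in the paper.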
IV. TRAINING PRINCIPLE OF THE ARTIFICIAL NEURAL NETWORKS
The back-propagation algorithm is one of the most popular algorithms for training a network, owing to both its simplicity and its wide applicability. The algorithm operates in two stages: the training stage and the recall stage. During the training phase, the weights of the network are randomly initialized. Then, the output of the network is calculated and compared to the desired value. Next, the training error E of the network is calculated and used to adjust the weights of the output layer. Similarly, the network error is propagated backwards and used to update the weights of the previous layers. Figure 3 [13-14] shows how the error values are generated and propagated to adjust the weights of the network. In the recall phase, only the feed-forward computations, with the weights obtained in the training phase, are used. The feed-forward process is used in both the recall and training phases, as shown in Figure 4 [4-6].
Figure 3. Example of Back propagation in a two layer ANN
There are two different methods of updating the weights:
• the weights can be updated after each input pattern, using an iterative method;
• an overall error for all the input-output patterns of the training set can be calculated first.
In other words, either each input pattern, or all of the patterns together, can be used for updating the weights.
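The training loop described above can be sketched with batch gradient descent (the second update method, where the error over all patterns is accumulated before each weight change). The target mapping, layer size and learning rate below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))   # input patterns
Y = np.sin(np.pi * X)                   # desired outputs (example mapping)

m, lr = 10, 0.05                        # hidden neurons, learning rate
V = rng.normal(scale=0.5, size=(m, 1)); b = np.zeros(m)   # hidden layer
w = rng.normal(scale=0.5, size=(m, 1)); c = 0.0           # output layer

mse0 = float(np.mean((np.tanh(X @ V.T + b) @ w + c - Y) ** 2))

for epoch in range(2000):
    H = np.tanh(X @ V.T + b)            # forward pass, hidden layer
    err = (H @ w + c) - Y               # training error over all patterns
    # back-propagate: output-layer gradients first, then hidden layer
    grad_w = H.T @ err / len(X)
    grad_c = float(err.mean())
    dH = (err @ w.T) * (1.0 - H ** 2)   # tanh derivative
    grad_V = dH.T @ X / len(X)
    grad_b = dH.mean(axis=0)
    w -= lr * grad_w; c -= lr * grad_c
    V -= lr * grad_V; b -= lr * grad_b

mse = float(np.mean((np.tanh(X @ V.T + b) @ w + c - Y) ** 2))
```

The loop mirrors the flow chart of Figure 4: forward pass, error computation, backward pass, weight update, repeated until the error criterion or the iteration limit is reached.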
The training phase is terminated when the error value is less than the minimum value set by the designer. One disadvantage of the back-propagation algorithm (Figure 4) is that the training phase is very time consuming.

During the recall phase, the network with the final weights resulting from the training process is employed. Therefore, for every input pattern in this phase, the output is calculated using both the linear calculations and the nonlinear activation functions. An important advantage of this process is that it yields a very fast network in the recall phase [4-5].

Figure 4. Flow chart of the training process: get input-output example patterns from experimental or simulation results; select the ANN topology (number of layers, nodes and activation functions); initialize with random weights and define Emax and Kmax; then, for K = K + 1, select all input-output patterns, calculate the ANN output, compute the error E and calculate the new weights with the training algorithm, until E < Emax.

The neural switching-state selector has three neurons in the input layer, nine neurons in the hidden layer, and three neurons in the output layer, as shown in Figure 5.

Figure 5. Neural network structure for DTC

After several tests, the architecture 3-9-3 with just a single hidden layer was adopted. The activation function of the hidden layer is Tansig:

f(x) = 2 / (1 + e^(−2x)) − 1    (5)

and the activation function of the output layer is Purelin:

f(x) = x    (6)

The learning of the neural network is done using the Levenberg-Marquardt (LM) algorithm, with 5000 iterations and a maximal error value of 10 .

VI.

The conventional PID speed regulator was replaced by a neural-network-based PID, with the goal of obtaining optimal performance for the closed-loop control scheme. This regulator has one input, which is the error between the reference speed and the process output speed. The output of the regulator is applied to the input of the process. After several tests, the architecture 1-15-21-1 with two hidden layers was adopted [15].
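The Tansig and Purelin activations of equations (5) and (6), and a forward pass through the 1-15-21-1 neural PID, can be sketched as follows; the weights here are untrained random placeholders, not the trained values from [15]:

```python
import numpy as np

def tansig(x):
    """Matlab's tansig, equation (5); numerically equivalent to tanh."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def purelin(x):
    """Matlab's purelin, equation (6): the identity function."""
    return x

rng = np.random.default_rng(2)
# 1-15-21-1 architecture: (weights, biases, activation) per layer.
layers = [
    (rng.normal(size=(15, 1)),  rng.normal(size=15), tansig),
    (rng.normal(size=(21, 15)), rng.normal(size=21), tansig),
    (rng.normal(size=(1, 21)),  rng.normal(size=1),  purelin),
]

def neural_pid(speed_error: float) -> float:
    """Forward pass: one input (speed error) -> one output (control signal)."""
    a = np.array([speed_error])
    for W, b, f in layers:
        a = f(W @ a + b)
    return float(a[0])
```

In the closed loop, the single input would be the speed error Ω* − Ω and the single output the torque (or voltage) command applied to the process, with the weights obtained by training as in Section IV.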