A COURSE IN

NEURAL NETWORKS FOR SYSTEM IDENTIFICATION AND CONTROL
___________________________________________________________________________

Goal: These exercises aim to give the student a fundamental "feeling" for how neural networks work. What are they? How are they trained? How can they be used for modelling of nonlinear dynamic systems? How can they be used for control?

Material: The book "Neural Networks for Modelling and Control of Dynamic Systems" by M. Nørgaard, O. Ravn, N. K. Poulsen, and L. K. Hansen. Springer-Verlag, London, 2000.

Supplementary literature (for exercise 5): "Nonlinear black-box modeling in system identification: a unified overview" by Sjöberg et al., Automatica, 31(12), pp. 1691-1724, 1995.

Software: MATLAB, SIMULINK, and the Neural Network Toolbox must be available, together with the m-functions 'control.m', 'initfile.m', 'dio.m', 'diophant.m', 'siggener.m', 'shift.m', and 'progress.m', and the data set 'actuator.mat'.

Report: The report should be written as a "typical" research paper (two columns, min. 10 pt font). One student: write max. 5 pages. Two students working together: write max. 8 pages.

Workload: 5 ECTS points.
___________________________________________________________________________

1. Introduction to MATLAB If you are not familiar with MATLAB already, you must first learn how the package works. Take a look in the manual or find one of the many tutorials available on the internet.

2. Introduction to MLP Networks 



- Read Chapter 1 in the textbook.
- Try the backprop demos in MATLAB's Neural Network Toolbox.

3. Training of Neural Networks 



In order to understand how neural networks are trained, you should derive the backpropagation algorithm, the Gauss-Newton algorithm, and the Levenberg-Marquardt algorithm for a network with tanh units in the hidden layer and one linear unit in the output layer (see Section 2.4).

Generate the following example in MATLAB:

X = 2*pi*rand(1,300);
Y = sin(X) + 0.2*randn(1,length(X));
plot(X,Y,'+')

and train a network to approximate the underlying sine wave. Use a two-layer network with 7 tanh hidden units and 1 linear output unit. Compare the convergence rate of the following three training algorithms: ordinary backpropagation, backpropagation with adaptive step size, and the Levenberg-Marquardt algorithm. The comparison is made by plotting the value of the criterion after each iteration.
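One way to set up this comparison is sketched below, assuming the standard Neural Network Toolbox interface (feedforwardnet/train), where the training functions traingd, traingda, and trainlm correspond to ordinary backpropagation, backpropagation with adaptive step size, and Levenberg-Marquardt, respectively. If you use the toolbox accompanying the textbook instead, the function names differ; note also that the toolbox criterion is the mean squared error, which differs from the book's criterion only by a constant factor.

% Sketch: compare the convergence of three training algorithms on the noisy sine data
% (X and Y generated as in the exercise text above)
algs = {'traingd','traingda','trainlm'};   % backprop, adaptive-step backprop, Levenberg-Marquardt
perf = cell(1,numel(algs));
iter = cell(1,numel(algs));
for k = 1:numel(algs)
    net = feedforwardnet(7, algs{k});      % 7 tanh hidden units, 1 linear output unit
    net.divideFcn = 'dividetrain';         % use all samples for training (no automatic validation split)
    net.trainParam.epochs = 500;
    net.trainParam.showWindow = false;
    [net, tr] = train(net, X, Y);          % tr is the training record
    perf{k} = tr.perf;                     % criterion value after each iteration
    iter{k} = tr.epoch;
end
semilogy(iter{1}, perf{1}, iter{2}, perf{2}, iter{3}, perf{3})
legend(algs); xlabel('iteration'); ylabel('criterion')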



Reuse the data generated above and choose the fastest of the tested training algorithms. Train a network with 20 hidden units and compare the final value of the criterion (that is, when the network has been trained to the minimum) with the value obtained above.

Introduce a test set generated in exactly the same way as the training set:

X2 = 2*pi*rand(1,300);
Y2 = sin(X2) + 0.2*randn(1,length(X2));
plot(X2,Y2,'o')

Write a small m-file that trains the network with 20 hidden units in the following fashion:
- Train 5 iterations with the Levenberg-Marquardt algorithm.
- Evaluate the criterion on the test set.
- Train 5 more iterations.
- And so on.
Evaluate the criterion on the training set and on the test set, respectively, and compare them. Run a large number of iterations to be sure that the minimum is reached. What happens?
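A possible structure for the m-file is sketched below, again assuming the standard toolbox interface; repeated calls to train continue from the network's current weights.

% Sketch: train in chunks of 5 Levenberg-Marquardt iterations and monitor both criteria
% (X, Y and the test data X2, Y2 generated as in the exercise text above)
net = feedforwardnet(20, 'trainlm');          % 20 tanh hidden units
net.divideFcn = 'dividetrain';
net.trainParam.showWindow = false;

nChunks = 100;                                % 100 x 5 = 500 iterations in total
Etrain  = zeros(1, nChunks);
Etest   = zeros(1, nChunks);
for k = 1:nChunks
    net.trainParam.epochs = 5;
    net = train(net, X, Y);                   % 5 more iterations from the current weights
    Etrain(k) = perform(net, Y,  net(X));     % criterion on the training set
    Etest(k)  = perform(net, Y2, net(X2));    % criterion on the test set
end
semilogy(5*(1:nChunks), Etrain, 5*(1:nChunks), Etest)
legend('training set','test set'); xlabel('iteration'); ylabel('criterion')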

Train the network to the minimum 7 times, each time initializing the weight matrices differently. Compare the final value of the criterion (and the weight matrices). What is the conclusion of this experiment?
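The repeated runs can be scripted along the following lines (same toolbox assumption as above; creating a fresh network in each pass gives a new random initialization of the weight matrices).

% Sketch: train to the minimum 7 times from different random initial weights
% (X and Y as above)
Efinal = zeros(1,7);
for r = 1:7
    net = feedforwardnet(20, 'trainlm');      % a fresh network gets new random initial weights
    net.divideFcn = 'dividetrain';
    net.trainParam.showWindow = false;
    net.trainParam.epochs = 500;              % "to the minimum" -- increase if this is not enough
    net = train(net, X, Y);
    Efinal(r) = perform(net, Y, net(X));      % final value of the criterion for this run
    % the trained weight matrices can be inspected via net.IW{1,1}, net.LW{2,1}, net.b
end
disp(Efinal)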

4. Choice of Network Architecture 

Use the two data sets from before. Train 10 networks with 1-10 hidden units. Evaluate each of the networks on the test data set. What is the optimal number of hidden units? (Use what you learned above when you conduct this experiment!)
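The architecture experiment can be organized as sketched below (same toolbox assumption); in light of the previous exercise you may want to repeat each architecture from several initializations and keep the best run.

% Sketch: test error as a function of the number of hidden units
% (X, Y, X2, Y2 as in exercise 3)
Etest = zeros(1,10);
for h = 1:10
    net = feedforwardnet(h, 'trainlm');
    net.divideFcn = 'dividetrain';
    net.trainParam.showWindow = false;
    net.trainParam.epochs = 300;
    net = train(net, X, Y);                   % training data from the previous exercise
    Etest(h) = perform(net, Y2, net(X2));     % evaluate on the test set
end
plot(1:10, Etest, 'o-')
xlabel('number of hidden units'); ylabel('test criterion')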

5. Modelling of Dynamic Systems



Load the file 'actuator' into the MATLAB workspace. This file contains the data set for the hydraulic actuator that was used in Section 4.2 of the book. The variable "u" contains the control inputs, while the variable "p" contains the observed outputs. Split the data set into a training set consisting of the first 512 samples and a test set consisting of the remaining 512 samples.

Try to estimate a linear ARX model as described in the book. If the System Identification Toolbox for MATLAB is available, you can use it; otherwise use a "neural network" without hidden units (only one linear output unit).

Now train a neural network ARX model with 10 hidden tanh units and a linear output. Train the network using early stopping: evaluate the network on the test set after each iteration (evaluating after every 5 iterations also works and is a little faster) and stop the training when the minimum test error has been reached. Compare the result with that obtained in the book. Notice that the network should be evaluated by simulation, which means that you will have to write some MATLAB code to simulate the neural network model.
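A minimal sketch of the NNARX training and, in particular, of the simulation-based evaluation is given below. The regressor structure [y(t-1), y(t-2), u(t-1), u(t-2)] is an assumption made for illustration, so adapt it to the model order you settle on; the toolbox interface is again the standard one, and the early-stopping loop from exercise 3 should be wrapped around the call to train.

% Sketch: NNARX model of the hydraulic actuator, evaluated by pure simulation
load actuator                                 % gives the input u and the output p
y = p(:)';  u = u(:)';
ytr = y(1:512);    utr = u(1:512);            % training set: first 512 samples
yte = y(513:end);  ute = u(513:end);          % test set: remaining 512 samples

% One-step-ahead regressors [y(t-1); y(t-2); u(t-1); u(t-2)] and targets y(t)
makephi = @(y,u) [y(2:end-1); y(1:end-2); u(2:end-1); u(1:end-2)];
PHItr = makephi(ytr, utr);   Ttr = ytr(3:end);

net = feedforwardnet(10, 'trainlm');          % 10 tanh hidden units, linear output
net.divideFcn = 'dividetrain';
net = train(net, PHItr, Ttr);                 % combine with the early-stopping loop from exercise 3

% Evaluation by simulation: feed the model's own past outputs back into the regressor
ysim = yte(1:2);                              % initialise with the first two measured outputs
for t = 3:length(yte)
    phi     = [ysim(t-1); ysim(t-2); ute(t-1); ute(t-2)];
    ysim(t) = net(phi);
end
plot(1:length(yte), yte, 1:length(yte), ysim)
legend('measured output','simulated model output')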

6. Control with Neural Networks 1

In this and the following exercise you will need the MATLAB script control.m and the associated initialization file initfile.m. You are now going to design and implement a controller for the system

\ddot{y}(t) + \dot{y}(t) + y(t) + y^3(t) = u(t)

which was used throughout Chapter 3 of the book. The system is stable, which means that an open-loop simulation is possible.



Implement the system in SIMULINK. You must use an 'inport' at the input and an 'outport' at the output. Simulate the system using the 'control' script. Use a sampling period of 0.2 seconds. Apply square waves with different amplitudes (for example 0.3, 2, and 10). How would a linear system behave differently?

Identify a neural network model of the system. Select a "sensible" control signal (remember that the system is nonlinear) and simulate for approximately 200 seconds. Split the data set into a training set and a test set of approximately the same size. Train a network g_1 to model the system:

\hat{y}(t) = g_1[y(t-1), y(t-2), u(t-1), u(t-2)]

and use the test set to evaluate the trained model. Now use the data set to train an inverse model, g_2, of the system:

\hat{u}(t) = g_2[y(t+1), y(t), y(t-1), u(t-1)]

(This is called generalized inverse learning.)
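The training data for the forward and inverse models can be arranged as in the sketch below; y and u denote the logged output and input (row vectors) from the open-loop experiment, and the number of hidden units as well as the use of feedforwardnet are assumptions.

% Sketch: arrange regressors for the forward model g1 and the inverse model g2
N = length(y);
t = 3:N-1;                                    % indices for which all regressors exist

PHI1 = [y(t-1); y(t-2); u(t-1); u(t-2)];  T1 = y(t);   % g1 predicts y(t)
PHI2 = [y(t+1); y(t);   y(t-1); u(t-1)];  T2 = u(t);   % g2 predicts u(t) (generalized inverse learning)

g1 = feedforwardnet(8, 'trainlm');  g1.divideFcn = 'dividetrain';
g2 = feedforwardnet(8, 'trainlm');  g2.divideFcn = 'dividetrain';
g1 = train(g1, PHI1, T1);                     % evaluate both models on the test half of the data
g2 = train(g2, PHI2, T2);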



The inverse model can now be used as a controller for the system: instead of y(t+1), the desired output at time t+1 is inserted. Modify the program 'control.m' to do this. Let the "desired" closed-loop system behave as

H_{desired}(z) = z^{-1}

and

H_{desired}(z) = \frac{0.3 z^{-1}}{1 - 0.7 z^{-1}}

The first is obtained by setting y_{desired}(t+1) = ref(t); the second is obtained by a suitable filtering of the reference signal. Evaluate the closed-loop system by applying square waves of different amplitudes as the reference signal.
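The reference filtering for the second choice of H_{desired} can be checked separately with MATLAB's filter function before it is built into 'control.m'; the square-wave reference used below is only illustrative.

% Sketch: desired output obtained by filtering the reference through
% H_desired(z) = 0.3 z^-1 / (1 - 0.7 z^-1)
Ts   = 0.2;                                   % sampling period [s]
tvec = 0:Ts:100;
ref  = 2*sign(sin(2*pi*tvec/25));             % illustrative square wave, amplitude 2
ydes = filter([0 0.3], [1 -0.7], ref);        % y_desired(t) = 0.7*y_desired(t-1) + 0.3*ref(t-1)

% In the control loop the inverse model is then applied as
%   u(t) = g2([ydes(t+1); y(t); y(t-1); u(t-1)])
plot(tvec, ref, tvec, ydes)
legend('reference','filtered (desired) output')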

7. Control with Neural Networks 2

You should now control the same system again, this time with a completely different type of controller. The principle, described in Section 3.7 of the book, is called pole placement based on instantaneous linearization. The linearization principle is briefly repeated below. Assume that the following neural network model of the system under consideration is available:

y(t) = g([y(t-1), \ldots, y(t-n), u(t-1), \ldots, u(t-m)])

and introduce the state vector

\varphi(t) = [y(t-1), \ldots, y(t-n), u(t-1), \ldots, u(t-m)]^T

At time t = \tau, g is linearized about the current state vector φ(τ), resulting in the approximate model

\tilde{y}(t) \approx -a_1 \tilde{y}(t-1) - \ldots - a_n \tilde{y}(t-n) + b_1 \tilde{u}(t-1) + \ldots + b_m \tilde{u}(t-m)

where

a_i = -\left. \frac{\partial g(\varphi(t))}{\partial y(t-i)} \right|_{\varphi(t)=\varphi(\tau)}, \qquad b_i = \left. \frac{\partial g(\varphi(t))}{\partial u(t-i)} \right|_{\varphi(t)=\varphi(\tau)}

and the deviation variables are defined by

\tilde{y}(t-i) \equiv y(t-i) - y(\tau-i), \qquad \tilde{u}(t-i) \equiv u(t-i) - u(\tau-i)

Under the assumption that the approximation holds with equality, and by separating out the part of the expression that contains components of the current state vector, the approximate model can alternatively be written in the form

y(t) = [1 - A(q^{-1})] y(t) + B(q^{-1}) u(t) + \zeta(\tau)

where the bias term ζ(τ) is given by

\zeta(\tau) \approx y(\tau) + a_1 y(\tau-1) + \ldots + a_n y(\tau-n) - b_1 u(\tau-1) - \ldots - b_m u(\tau-m)

and the a and b coefficients have been collected in the polynomials A and B:

A(q^{-1}) = 1 + a_1 q^{-1} + \ldots + a_n q^{-n}
B(q^{-1}) = b_1 q^{-1} + \ldots + b_m q^{-m}

In other words, the approximate model can be interpreted as a linear model affected by an operating-point-dependent DC disturbance, ζ(τ). Utilization of this principle for control of nonlinear systems is straightforward. Consider the following control system:

[Block diagram: an "extract linear model" block supplies linearized model parameters to a "control design" block, which passes controller parameters to the controller; the controller receives the reference and produces the system input, and the system produces the output.]

Notice the structural similarity with the well-known indirect adaptive controller. The only difference is that the linear model is extracted from a nonlinear neural network model instead of being estimated with a recursive estimation algorithm.

Incorporate this method in 'control.m' and design two different pole placement controllers. First try a controller that cancels the zero and places two poles in z = 0.7. Next, let the zero remain and place only the poles. Remember to use integral action to compensate for ζ(τ). The m-function 'dio.m' can be used for solving the Diophantine equation.
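A sketch of the linearization step that has to run at every sample inside 'control.m' is given below. The derivatives are computed by simple numerical differentiation of the trained NNARX model (here called net; for a two-layer tanh network they can also be written out analytically, as in Section 3.7 of the book). phi_tau denotes the current regressor vector [y(t-1); ...; y(t-n); u(t-1); ...; u(t-m)], and the orders n and m used here are assumptions; the controller design from A, B, and ζ(τ) then proceeds with 'dio.m' as described above.

% Sketch: instantaneous linearization of a trained NNARX model 'net' at the
% operating point phi_tau (column vector of past outputs and inputs)
n = 2;  m = 2;                                % model orders (assumed here)
delta = 1e-6;                                 % perturbation for numerical differentiation
g0 = net(phi_tau);                            % model output at the operating point
dg = zeros(1, n+m);
for i = 1:n+m
    e     = zeros(n+m, 1);  e(i) = delta;
    dg(i) = (net(phi_tau + e) - g0) / delta;  % dg/dphi_i
end
a = -dg(1:n);                                 % A(q^-1) = 1 + a1*q^-1 + ... + an*q^-n
b =  dg(n+1:end);                             % B(q^-1) = b1*q^-1 + ... + bm*q^-m
zeta = g0 + a*phi_tau(1:n) - b*phi_tau(n+1:end);   % operating-point dependent bias term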