A HIGH PERFORMANCE, INTERACTIVE, REAL-TIME CONTROLLER FOR A FIELD ORIENTED CONTROLLED AC SERVO DRIVE

G Diana, M R Webster, R G Harley, D C Lev Jr (Senior Member IEEE)
Electrical Engineering Department, University of Natal, King George V Ave, Durban, 4000, South Africa

Keywords.

Transputers, Parallel processing, Real-time, Multiprocessors, Field Oriented Control, Space Vector Modulation, AC Motors, Data flow, Granularity, Load Balancing, Context Switching.

Abstract: Microprocessor-based digital control of high performance AC servo drives offers the potential for a flexible and interactive control tool. Typical Field Oriented Control (FOC) servo drives have current loop sampling rates of 5 to 10 kHz. This implies that the controller must process the control algorithms within 100 to 200 μs. Thus if complex or novel control methods are to be employed, and if user interaction is required, controllers with high computational power are needed. This paper discusses the use of transputers and parallel processing for the implementation of an interactive, high-performance digital controller for a FOC servo drive. Various design issues encountered during the implementation are considered, and the results obtained to date are presented.

INTRODUCTION

The development of high-frequency power switches has enabled inverters to produce frequencies well in excess of 500 Hz. As a result, the realisation of high-performance, high-speed AC servos is no longer exceptional. To facilitate higher stator frequencies and reduce losses, modern high-speed AC motors are being designed with smaller electrical time constants. Consequently, digital control of high performance AC servos using FOC or similar strategies requires controllers with significant computational power and throughput [1-5]. Most AC servo systems to date are controlled by multi-processors, often comprising a mixture of different types [1-5]; custom ASICs, microcontrollers and signal processors have also been used effectively in embedded systems. Servo drives often require additional features such as self-tuning, fault tolerance, on-line diagnostics, protection, data capturing, and user-friendly interfaces, which include medium performance graphics; the latter two features are especially important for development work or commissioning. These additional features are difficult to implement on an ASIC, microcontroller or signal processor based system.
Existing multi-processor solutions based upon Von Neumann architectures, communicating via a common resource (a bus or shared memory), have the following limitations:
i) A shared resource becomes a bottleneck limiting system expansion.
ii) Expansion of the system is difficult due to complexity.
iii) Bus arbitration logic is necessary, which increases complexity, cost and chip count.
iv) Multiprocessor bus layout suffers from high track densities, capacitance and cost problems.
v) Software development is cumbersome, and it is left to the programmer to establish the parallelism using languages inherently designed for sequential processors and programs.

This paper discusses an alternative solution, using the RISC-based transputer, a family of single chip microcomputers, and Occam, a programming language based on the CSP (Communicating Sequential Processes) concept, to achieve a high performance digital controller capable of meeting the computational demands discussed earlier, without suffering from the limitations of conventional multi-microprocessor systems. The paper does not elaborate on FOC itself, and assumes that the reader is familiar with the details of FOC. Space Vector Modulation (SVM) is a well established method of achieving low harmonic, pulse-width-modulated inverter control [1,2]. In this paper both FOC and SVM are implemented entirely in software; the philosophy is to establish a flexible tool with which to investigate servo drive control, which could later be scaled down for an embedded solution. Although the use of transputers to control AC machines has been investigated by others [7-10], the emphasis of this study is to show how the FOC and SVM algorithms can be parallelized, thereby increasing the system's sampling rate. It also shows that other features can be added, thus enhancing the servo drive system.

PARALLEL PHILOSOPHY

The main objective is to examine how to minimize the execution time of the control algorithms, thus increasing the sampling frequency, which in turn enables the control of systems with large bandwidths. There are three distinct alternatives to achieve this:
i) Increase the execution rate (ie clock frequency) of the processing system, thus enabling it to process the algorithms in the required time. This is normally achieved by the use of a single, fast CPU.
ii) Use an implementation specific architecture, for example a DSP based solution or an ASIC chip set; these solutions are free from the overheads imposed by a generalized solution and hence have the potential for the highest raw performance.
iii) Implement a parallel or multiprocessor solution.
This paper discusses the implementation of a parallel solution, assuming that the algorithms/tasks have a sufficient degree of concurrency and that the granularity will allow the sampling criteria to be met. Parallelism has been divided into three main programming paradigms [18,19]:
i) Processor farms - replication of independent tasks.
ii) Geometric - utilisation of data structure.
iii) Algorithmic - utilisation of data flow.
In this paper only algorithmic data flow is examined. The data flow structure of most control loop algorithms takes the following pipeline form:

In → |Transform| → ... → |Control| → ... → |Transform| → Out

Δt = controller delay

Fig 1. Single Processor Controller

Decomposing the elements of the pipeline and allocating them to individual processors, as shown in the following diagram, will increase the data flow through the pipeline, as each

CH2935-5/90/0000-0613$01.00 © 1990 IEEE

produces the pulses must execute at high priority. In practice the transputer is capable of producing a minimum pulse width of approximately 2 to 8 μs; this depends upon two main factors:
i) If the floating point unit (FPU) is in operation, there is a maximum interrupt latency of 78 cycles (3,9 μs @ 20 MHz) from the moment when a high priority process becomes ready to execute until it actually begins execution. If the FPU is not in use the latency is a maximum of 58 cycles (2,9 μs @ 20 MHz).
ii) The communication setup delay for output (!) over a link is 26 cycles (1,3 μs @ 20 MHz).
The above figures are important as they indicate the range of pulse widths, and hence voltages, which the software based PWM generator is capable of producing (ie the minimum duty cycle possible for a particular switching frequency). For systems which require finer resolution several changes can be made:
i) The pulse generation software must execute on a CPU without any floating point usage; this will reduce the minimum pulse width to approximately 5 μs.

processing element will accept new input data as soon as it outputs its results,

Δt = controller delay

Fig 2. Multiprocessor Pipeline

but the delay between sampling and plant actuation remains the same, if not longer, due to the interprocessor communications required. To reduce the controller delay the data flow path must be parallelized to have, ideally, a structure as follows:

In → |Transform| → (parallel control branches) → |Transform| → Out

Δt = controller delay

ii) A dedicated CPU is used to generate the timing pulses, and no other processes are executed in 'parallel' (timesliced) with the pulse timing routine. This implies an approximate 2 μs minimum pulse width.
iii) An off-chip pulse generator is required, and could be addressed as a PORT (ie memory mapped). The pulse widths generated here depend upon the method used to implement the timing strategy.
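The latency and setup figures above come straight from cycle counts at the 20 MHz clock; a quick conversion (the helper name is ours, not the paper's):

```python
# Cycle counts quoted in the text, converted to microseconds at 20 MHz.

CLOCK_HZ = 20e6

def cycles_to_us(cycles: int) -> float:
    return cycles / CLOCK_HZ * 1e6

lat_fpu = cycles_to_us(78)   # max interrupt latency with the FPU busy
lat_int = cycles_to_us(58)   # 2.9 us with the FPU idle
setup = cycles_to_us(26)     # 1.3 us link output setup
```

These three numbers bound the narrowest pulse the software PWM generator can schedule reliably.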

Hard Real-time Constraints. In order for the system to satisfy the real-time constraints, ie regular sampling of the current, speed and position loops at the required frequency, the processor must have a small interrupt latency. This has been discussed above. One of the drawbacks of the transputer, however, is its limited priority structure [11]; this is offset by the transputer's built-in scheduler, which frees the programmer from having to create a rudimentary operating system and provides high performance task switching, which is necessary if a control system is required to have a fast response time.

Fig 3. Multiprocessor Parallel Implementation

The above discussion is not representative of all control problems, and the possibilities for parallelism are not always apparent or efficient to implement; often data dependencies enforce sequential execution. The granularity of the parallelism, that is, the extent to which the algorithms can be broken down into smaller pieces capable of executing concurrently, determines how short the controller delay can become. Control systems which contain several nested control loops can be directly parallelised, and each control loop can be executed concurrently.
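A toy timing model makes the trade-off in Figs 1 to 3 concrete: pipelining raises throughput but leaves the sampling-to-actuation delay unchanged, or worse once link overhead is counted. The stage and communication times below are illustrative assumptions, not measurements from this work:

```python
# Toy timing model of Figs 1 to 3. Stage times (us) are illustrative.

def single_processor(stage_times):
    """One CPU runs every stage in sequence each sample: (period, delay)."""
    total = sum(stage_times)
    return total, total

def pipeline(stage_times, comm_us=0.0):
    """One CPU per stage, linked in a pipe; comm_us is the per-hop cost."""
    hops = len(stage_times) - 1
    period = max(stage_times) + comm_us        # slowest stage sets the rate
    delay = sum(stage_times) + hops * comm_us  # latency through the pipe
    return period, delay

stages = [50.0, 50.0, 50.0]
print(single_processor(stages))       # (150.0, 150.0)
print(pipeline(stages, comm_us=1.3))  # higher rate, but longer delay
```

With three 50 μs stages the pipeline triples the output rate while the controller delay actually grows by the two link hops, which is exactly why the parallel structure of Fig 3 is needed to shorten the delay itself.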

Connectivity. The transputer has only four links with which to interconnect the processors, which implies that a limited amount of interconnection is possible. The more processors that can be directly interconnected, the lower the communication overhead in a distributed system, which decreases the granularity of the system. The system described here uses only three processors in the real-time loop; a fourth is used for user interfacing (Fig 6) but does not form part of the real-time system. Direct interconnection is therefore possible, leaving spare links to interface to I/O peripherals.

IMPLEMENTATION ISSUES

Granularity. The main constraint imposed by the hardware implementation is the granularity of the system. This is limited by the communication setup delay between processors and the data transfer rate. On a transputer the setup delay is approximately 1,3 μs and the data rate is 0,4 to 0,8 μs per byte, depending on the link speed selected. From Fig 4 it is evident that there is a small degree of parallelism in the data flow of the control algorithm. For the conditions for FOC to be met, the dq current loop is required to have sampling delays 10 to 20 times smaller than the stator time constant. Thus the current loop is the critical algorithm which determines the rate at which the system must sample. Moreover, the SVM inverter control algorithm is part of the current loop, and hence must be parallelized with the dq current controllers. Fig 5 shows a task-level description of how the various parts of the current loops and the SVM algorithm can be parallelized and distributed over three processors.
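The setup delay and per-byte rates give a simple lower bound on the cost of distributing a task; the helper below is our sketch, with a message size chosen purely for illustration:

```python
# Lower bound on the cost of moving data between transputers, using the
# 1,3 us setup delay and 0,4-0,8 us per byte quoted above (helper is ours).

def link_cost_us(n_bytes: int, per_byte_us: float = 0.8,
                 setup_us: float = 1.3) -> float:
    return setup_us + n_bytes * per_byte_us

# Two 32-bit values (8 bytes) on the slower link setting:
cost_slow = link_cost_us(8)
cost_fast = link_cost_us(8, per_byte_us=0.4)
```

A sub-task is only worth placing on another processor if its compute time comfortably exceeds this communication cost, which is what limits the achievable granularity.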

Performance, Load Balancing and Task Allocation. Performance maximisation is necessary if the full potential is to be obtained from a particular transputer implementation. The following issues must be addressed [20]:
i) Making use of the on-chip memory. The internal memory cycles are approximately 5 times faster than the external cycles. Hence to maximise performance the most frequently accessed variables should be placed in internal memory. This can be achieved by using the PLACE statement.
ii) Abbreviations. Abbreviations can be used to bring non-local variables into scope, thereby removing static chaining. Abbreviations can also speed up execution by removing the necessity for range checking instructions.
iii) Retyping. This can speed up bit and byte extraction from a word.

Priority, Interrupt Latency and Timing. The SVM requires timers to facilitate the production of pulse-width-modulated inverter control signals; in order to have access to the 1 μs resolution timer, the procedure which


SOFTWARE AND HARDWARE DESIGN

Hardware. The hardware used to implement the prototype is an AIL (Analog In Loop) system developed by the Electronics Institute at Stellenbosch University, South Africa. This hardware is designed around an AIL cell, a pseudo-transputer concept (presently an 8031 microcontroller), which is used to implement a flexible converter interface to a 16 channel digital-to-analog circuit, an 8 channel analog-to-digital circuit, and a 6 by 8 bit bi-directional digital I/O circuit. Included in the AIL concept is the link extended transputer; this board has four memory mapped link adaptors, with driver code, which are used to extend the communication ability of the transputer, hence allowing interfacing to several AIL cells while still allowing interconnection to the rest of the processor network. Also supplied with the system are the necessary Occam primitives plus library routines to handle the interfacing and control of these cards. At present the 8031 microcontroller forms a bottleneck in the I/O system and allows a minimum sampling interval of 125 μs per analog channel (8 kHz for 1 channel, 4 kHz for 2 channels, etc). Although it is not envisaged that the prototype system will exceed this rate, it is felt that, since the granularity of the system is limited by an approximate 2 μs link delay, it should be possible to process signals at frequencies a factor of 2 to 3 times greater than the present limit. Hence the redesign of the AIL concept to incorporate a T222 or a T425 transputer is currently underway. Fig 7 shows an overview of the prototype setup.
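The 8031 bottleneck figure generalises as follows (our helper, using the 125 μs per-channel interval from the text):

```python
# Maximum loop sampling rate permitted by the AIL front end, given the
# 125 us minimum interval per analog channel (helper name is ours).

def ail_max_rate_hz(n_channels: int, per_channel_us: float = 125.0) -> float:
    return 1e6 / (n_channels * per_channel_us)

print(ail_max_rate_hz(1), ail_max_rate_hz(2))  # 8000.0 4000.0
```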

iv) Decoupling communication and computation. To avoid the links waiting on the processor, or vice versa, the communication should be buffered.
v) Load balancing. This is important to ensure maximum efficiency and performance. Occam, the programming language for transputers, is a CSP language; this implies that the synchronization and triggering of processes over a network of CPUs relies on successful communications. It is therefore important to balance the processing time between communications on each processor to ensure maximum throughput. At present there are few tools available to assist the designer in this matter. A load balancing evaluation program is available from Stirling University in the UK, but this requires manual insertion into the code under test and becomes extremely cumbersome as the program increases in size or decreases in granularity. The programmer can adopt a simple methodology to ensure reasonable efficiency by measuring the execution time of each module of code (either by inserting it into a timing harness, or by totalling the instruction execution times), and then manually adjusting the placement of the code to improve the performance. Static allocation of code to processors, with some measure of load balancing, appears at present to be the only way to extract maximum performance from a transputer network. The overheads involved in dynamic task allocation or farming imply inferior performance.
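The static allocation methodology described above can be sketched as a longest-processing-time heuristic. The module names and timings are hypothetical, and the paper prescribes manual placement rather than this particular algorithm:

```python
# Longest-processing-time (LPT) sketch of static task allocation.
# Module names and timings (us) are hypothetical examples.

def balance(module_times, n_cpus):
    """Greedily place the longest modules on the least-loaded CPU."""
    loads = [0.0] * n_cpus
    placement = [[] for _ in range(n_cpus)]
    for name, t in sorted(module_times.items(), key=lambda kv: -kv[1]):
        cpu = loads.index(min(loads))   # least-loaded processor so far
        loads[cpu] += t
        placement[cpu].append(name)
    return loads, placement

times = {"park": 30.0, "pi_d": 20.0, "pi_q": 20.0, "svm": 45.0, "capture": 10.0}
loads, placement = balance(times, 3)
print(loads)  # [45.0, 40.0, 40.0]
```

The residual imbalance (45 vs 40 μs here) is processor idle time per sample, which is exactly what the manual timing-harness measurements are meant to expose.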


Software. Figs 6 and 8 show the hardware structure and the software distribution respectively. In Fig 8 the current loop, which incorporates the SVM, is parallelized in a manner similar to Fig 5, but the sub-tasks and associated channel communications are not shown. The PAI (Parameter Adapt and Inspect) process allows host interaction with each controller process.

Distributed Parameters. Although the controller algorithms are designed using a data flow concept and do not use global memory for variable storage, it is required that the system parameters and variables can be monitored and changed during the controller's execution. There are several ways in which to manipulate data in a distributed network, but this system has three specific requirements:
i) Blocks of data are to be captured at the sampling frequency of the system and sent to the host for analysis.
ii) Sporadic requests from the user to update the present value of a variable resident on an arbitrary processor.
iii) Requests from the user for the system to communicate data values if there is sufficient processing time available.
The first requirement places some restrictions on the real-time response, while the second is a small processing overhead. Implementing the data capture using a message passing system would impose some communication overhead; however, since the system under discussion consists of only three processors in the real-time loop, there can be a direct interconnection from each processor to the host, thus minimizing the communication overhead. Fig 6 shows how the processors are interconnected.
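A minimal sketch of the three interaction types, with each processor keeping its own parameter table and a dedicated capture link to the host; the class and parameter names are our invention, not the paper's:

```python
# Sketch of per-processor parameter handling: block capture on a
# dedicated host link, sporadic updates, best-effort inspection.
from collections import deque

class ControllerNode:
    def __init__(self, params):
        self.params = dict(params)  # variables resident on this processor
        self.capture = deque()      # stands in for the direct host link

    def sample(self, value):
        """Per-sample duty: push captured data towards the host."""
        self.capture.append(value)

    def update(self, name, value):
        """Sporadic host request to change a live parameter."""
        self.params[name] = value

    def inspect(self, name):
        """Best-effort read, serviced when processing time allows."""
        return self.params.get(name)

node = ControllerNode({"Kp": 1.5, "Ki": 0.02})
node.sample(0.73)        # requirement i): capture at the sampling rate
node.update("Kp", 2.0)   # requirement ii): sporadic update
```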

RESULTS

At the time of writing the SVM, FOC and controller harness have been completed, and where possible most of the calculations have been implemented in floating point arithmetic. This, coupled with the high level features of Occam, made program development simple and flexible. The host based interface has been completed, which includes a folded file interface for the setup of the system variable names. The SVM algorithm has been written and tested on a single T414 and on two T414s, and later converted to run on a T800 using floating point arithmetic. Typical results from testing the SVM algorithm appear in Figs 9.1 to 9.4; they were obtained for a load of R = 25 Ω and L = 36 mH per phase of a star-connected load fed by a G.T.O. inverter. The current loop has been closed and at present the sampling frequency is limited to 2 kHz. This limitation is due to:
i) Slow I/O. The AIL system has a minimum sampling interval of 125 μs per channel, as discussed above.
ii) Poor interrupt/event facilities. At present the AIL system does not support multiple external interrupts.
iii) Software based PWM. The SVM algorithm, ie the calculations and the pulse interval timing and generation, is implemented entirely in software. This was done to investigate the transputer's capabilities, as well as to establish a flexible and high bandwidth PWM generator. The minimum pulse width obtainable implies that there is an inherent distortion in the SVM, similar to that caused by minimum on-times, which increases as either the load resistance or the switching period decreases.
iv) Optimisation. The program has not yet been optimised according to the strategy outlined in the preceding text.
v) Overheads. The controller harness, which allows the user to sample, modify and inspect variables, is an additional computational load and adds to the context switching overhead.
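For reference, the dwell-time calculation at the heart of a software SVM, in one common normalisation (this is the textbook form, not necessarily the exact code used in this work), together with the minimum-pulse-width clamping that causes the distortion noted above:

```python
import math

def svm_times(m: float, theta: float, T: float):
    """Dwell times (t1, t2, t0) for the two active vectors and the zero
    vector: reference angle theta (radians) inside a 60-degree sector,
    modulation index m, switching period T. Standard SVM relations."""
    t1 = m * T * math.sin(math.pi / 3 - theta)
    t2 = m * T * math.sin(theta)
    return t1, t2, T - t1 - t2

def clamp_min_pulse(t: float, t_min: float = 8e-6) -> float:
    """Pulses below the achievable minimum are dropped; this is the
    source of the inherent SVM distortion (t_min of 8 us from the text)."""
    return t if t >= t_min else 0.0
```

At m = 0.8, θ = 30° and T = 100 μs both active vectors dwell for 40 μs; as T shrinks, t1 and t2 fall below the clamp and the distortion grows, matching the trend described in item iii).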

Logical Control. Maintaining logical control of a distributed program is complex. A criticism that has been levelled at using parallel processing in real-time control applications is the non-determinism; this means that the state of the controller at a particular point in time cannot be accurately predetermined. This implies that conventional methods of controlling the execution of a program on a single processor are not valid, and if attempted are likely to produce systems which deadlock. Welch [12] describes a scheme whereby reset commands are spread through the network of processors, each process sending a reset command on all its output channels and remaining active until all its input channels have received a reset command. Using a scheme like this ensures that the system will precipitate into a predefined state after a finite time. This scheme has been implemented in two levels to provide resetting and termination.
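The reset wave can be modelled as a flood over the channel graph. The four-node network below is a hypothetical example, and the "remain active until every input has delivered a reset" rule is simplified away in this sketch:

```python
# Flood model of the graceful-reset scheme: each node, on first receiving
# a reset, re-sends it on all of its output channels.
from collections import deque

def graceful_reset(out_edges, start):
    """Return the set of nodes reached by a reset wave begun at start."""
    resetted, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in resetted:
            continue
        resetted.add(node)                    # forward reset on all outputs
        queue.extend(out_edges.get(node, []))
    return resetted

net = {"host": ["p1"], "p1": ["p2", "p3"], "p2": ["p3"], "p3": ["p1"]}
print(graceful_reset(net, "host"))  # every node reached, despite the cycle
```

Because each node forwards the reset exactly once, the wave terminates in finite time even on cyclic channel graphs, which is the property the two-level reset/termination scheme relies on.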

CONCLUSION

This paper has described an entirely software based transputer implementation of FOC using SVM. From the work done to date it is clear that the transputer and Occam offer a flexible platform with which to build powerful control systems. However, there are several pertinent observations:
i) External hardware is required to create accurate, programmable pulse widths of less than 8 μs. Thus an ASIC or an off-chip timer solution must be used to implement the PWM.
ii) The granularity for this implementation was chosen according to the interprocessor communication delay alone. The context switching overhead should also be included; hence, if redesigned, the granularity of the algorithms will be increased.
iii) The AIL system, in its present format, is not suitable for implementing control systems of the required bandwidth, due to slow I/O and poor interrupt facilities.
iv) The controller software must be put through an optimisation process.
v) Other parallelisation strategies should be investigated.

REFERENCES
1. Holtz J, Lammert P and Lotzkat W, "High Speed Drive System with Ultrasonic MOSFET-PWM-Inverter and Single Chip Microprocessor Control", Conference Record IEEE-IAS, Part 1, pp 12-17, 1986.
2. Lessmeier R, Schumacher W and Leonhard W, "Microprocessor-controlled AC Servo Drives with Synchronous or Induction Motors: Which is Preferable?", Conference Record IEEE-IAS, Part 2, pp 529-535, 1988.
3. Leonhard W, "Microcomputer Control of High Dynamic Performance AC-Drives - A Survey", Automatica, Vol. 22, No. 1, pp 1-19, 1986.
4. Bose B K, "Technology Trends in Microcomputer Control of Electrical Machines", IEEE Transactions on Industrial Electronics, Vol. 35, No. 1, pp 160-177, Feb. 1988.
5. Bose B K, "Microcomputer Control of Power Electronics and Drives", IEEE Press, 1987.
6. Novotny D W and Lipo T A, "Vector Control and Field Orientation", WEMPEC Tutorial, Report 53-5, University of Wisconsin, Madison, Wisconsin, 1983.
7. Bowes S R, "Recent Developments in PWM Switching Strategies for Microprocessor-Controlled Inverter Drives", Motor-Con Proceedings, pp 10-22, June 1988.
8. Bowes S R and Clark P R, "Transputer-based Harmonic Elimination PWM Control of Inverter Drives", Conference Record IEEE-IAS, Part 1, pp 744-752, 1989.
9. Asher G M, "Real-time Motor Control Using a Transputer Parallel Processing Network", EPE 89 Proceedings, Aachen, West Germany, pp 433-438, Oct 1989.
10. Jones D I and Fleming P J, "Control Applications of Transputers", personal communication with Jones D I, Bristol University, United Kingdom, March 1988.
11. Welch P H, "Managing Hard Real-time Demands on Transputers", Proceedings of Occam Users Group 7, Grenoble, France, pp 135-145, September 1987.
12. Welch P H, "Graceful Termination - Graceful Resetting", Proceedings of Occam Users Group 10, Enschede, Netherlands, pp 310-317, April 1989.
13. Inmos, "Transputer Reference Manual", Inmos, 1986.
14. May D, "Communicating Processes and Occam", Inmos Technical Note 20.
15. May D, "Occam 2 Language Definition", Inmos, Bristol, 1982.
16. May D and Shepherd R, "The Transputer Implementation of Occam", Inmos Technical Note.
17. Modi J J, "Parallel Algorithms and Matrix Computation", Computing Science Series, Clarendon Press, Oxford, 1988.
18. Capon P C, "Experiments in Algorithmic Parallelism", Proceedings of Occam Users Group 10, Enschede, Netherlands, pp 1-14, 1989.
19. Pritchard D J, "Mathematical Models of Distributed Computation", Proceedings of Occam Users Group, Grenoble, France, pp 25-36, 14-16 September 1987.
20. Inmos, "Performance Maximisation", The Transputer Applications Notebook: Systems and Performance, 1st Edition, pp 280-298, 1989.

[Figures not reproduced in this copy; the Fig 9 plots are labelled in Amps and Norm. Magnitude.]
