Condition Monitoring and Distributed Real-Time Control of Industrial Drive Systems via Ethernet

Tech. Lic. Eng. Lilantha Samaranayake
Dept. of Electrical Engineering, Royal Institute of Technology (KTH), 100 44 Stockholm. E-mail: [email protected]

Eng. Dr. Sanath Alahakoon
Dept. of Electrical and Electronic Engineering, University of Peradeniya, 20400 Peradeniya. E-mail: [email protected]
ABSTRACT

Ethernet today is the most widely used information carrier and the service provider for many important applications in our day-to-day life, such as e-mail, voice and image data, and web-based information. It is also emerging strongly into the area of industrial communication. This paper presents research and development being carried out to enhance the possibilities of using standard TCP/IP for condition monitoring and distributed real-time control of industrial drive systems via Ethernet.

1. Introduction
Condition monitoring and closed-loop control are essential and well-known techniques in any industrial environment. A controller or an observer receives information about the industrial drive system (or the process) to be controlled or observed from the sensors and, in the case of a controller, sends out driving signals to the actuator. Condition monitoring performed from a location remote from the place at which the particular industrial process is commissioned (e.g. the control room of a factory) is at today's cutting edge of technology. Control loops that are closed over a communication network, called Distributed Control Systems (DCS), are also becoming more and more common as the hardware devices for networks and network nodes become cheaper thanks to advanced, cost-effective silicon technology. One important feature of such a distribution is that, instead of hardwiring the control devices with point-to-point connections, the sensors, actuators and controllers are all connected to the local area network (LAN) as nodes. The advantages of this implementation include reduced system wiring, plug-and-play devices, increased system agility, and ease of system diagnosis and maintenance.
In such a system, measurement and control signals are transmitted between the process and the controller/observer modules as encapsulated data packets. These types of industrial applications demand fast, flexible, secure, reliable and robust data communication at a reasonable cost. Employing a suitable fieldbus fulfills some of these demands; Profibus, ControlNet, DeviceNet, Ethernet, Suconet and Interbus are among the commonly used fieldbuses. One major requirement of such a system, regardless of the vendor, is the ability to connect any physical sensor or actuator to the network with minimum system administrative overhead and cost; in other words, interfacing the sensor/actuator node to the communication network without much of a burden.

Being a versatile networking hardware and software solution developed over two decades, Ethernet has received a lot of attention from industry as the future industrial communication medium [1]. One objective of this research is to address this problem of interfacing the sensor/actuator node to the communication network when the communication is done via Ethernet. The second objective is to investigate the possibility of using standard TCP/IP Ethernet for distributed real-time control of industrial drive systems [2].
Figure 1: Ethernet for industrial communication. (a) Condition monitoring: sensor nodes with network interfaces connected via Ethernet to a monitoring computer. (b) Distributed control: sensor and output process nodes with network interfaces connected via Ethernet to a control computer.
The Ethernet-based system topologies shown in Figures 1(a) and 1(b) enable condition monitoring and distributed real-time control, respectively, as depicted.
2. Ethernet

In the mid-1990s, standardization activities were started both in the United States and in Europe. While the U.S. activities (UCA 2.0, the Utility Communication Architecture) primarily focused on standardization between the station and bay levels, the European approach (driven by IEC TC57, WG 10, 11 and 12) included the communication down to the time-critical process level from the beginning. In 1998, the two activities were merged to define one worldwide applicable standard: IEC 61850 [3]. Instead of debating between several competing fieldbuses, an agreement was reached to use Ethernet as the communication base for the station bus. This agreement was based on the fact that Ethernet technology has evolved significantly. Starting out as a network solution for office and business applications, Ethernet today is applied more and more as a solution for high-speed communication backbone applications between PCs and industrial networks [4]. The high-speed properties of current Ethernet technology, together with its dominant position in local area networks (LANs), make Ethernet an interesting communication technology for condition monitoring and distributed control of industrial drive systems [5].
2.1 Traditional Ethernet

Traditional Ethernet is not real-time friendly. The Carrier Sense Multiple Access / Collision Detection (CSMA/CD) scheme of Ethernet makes access to the medium non-deterministic. An Ethernet controller connected to a thin Ethernet (coax, 10BASE-2) or a hub (10BASE-T or 100BASE-T) is not able to send a packet as long as the medium is busy sending another packet. The Ethernet controller is free to send its packet as soon as the Ethernet is idle. While transmitting, the station continues to listen on the wire to ensure successful communication. If two stations attempt to transmit information at the same time, the transmissions overlap, causing a collision. If a collision occurs, the transmitting station recognizes the interference on the network and transmits a bit sequence called a jam. The jam helps to ensure that the other transmitting station recognizes that a collision has occurred. After a random delay, the stations attempt to retransmit the information and the process is repeated. The probability of a collision depends on the collision domain, i.e. the range of the Ethernet, and the network load. A traditional CSMA/CD Ethernet with 20% utilization has less than 0.1% collisions, while as much as 5% of the packets will experience collisions if the network utilization is above 40%. A CSMA/CD network with 40% utilization is in trouble, and the net data rate will in fact decrease due to collisions if the load is further increased. However, these collisions are not errors; collision is a normal part of Ethernet networks. Figures 2-4 illustrate the principles of CSMA/CD.

Figure 2: Carrier Sense

Figure 3: Multiple Access

Figure 4: Collision Detection
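To make the retransmission rule above concrete, the following minimal C sketch reproduces the truncated binary exponential backoff that CSMA/CD stations use to choose the random delay. The slot time and the attempt limits are the standard IEEE 802.3 values for 10 Mbps Ethernet; the print-out harness is purely illustrative.

/* Truncated binary exponential backoff as used by CSMA/CD (IEEE 802.3).
 * After the n-th collision a station waits r slot times, where r is
 * drawn uniformly from 0 .. 2^min(n,10) - 1; after 16 attempts the
 * frame is dropped.  Illustrative sketch, not production driver code. */
#include <stdio.h>
#include <stdlib.h>

#define SLOT_TIME_US   51.2   /* 512 bit times at 10 Mbps      */
#define BACKOFF_LIMIT  10     /* exponent is truncated at 10   */
#define ATTEMPT_LIMIT  16     /* frame is discarded afterwards */

/* Returns the backoff delay in microseconds for collision number n
 * (n = 1 for the first collision), or a negative value once the
 * attempt limit has been exceeded and the frame must be dropped. */
double csma_cd_backoff_us(int n)
{
    if (n > ATTEMPT_LIMIT)
        return -1.0;                      /* excessive collisions     */
    int k = (n < BACKOFF_LIMIT) ? n : BACKOFF_LIMIT;
    long slots = rand() % (1L << k);      /* uniform in 0 .. 2^k - 1  */
    return slots * SLOT_TIME_US;
}

int main(void)
{
    srand(1);                             /* fixed seed for repeatability */
    for (int n = 1; n <= 5; n++)
        printf("collision %d: wait %.1f us\n", n, csma_cd_backoff_us(n));
    return 0;
}

The growing range of the random delay is what makes the medium access non-deterministic: under load, the waiting time of any individual station cannot be bounded in advance.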
2.2 Switched Ethernet

From a functional point of view, switching is exactly the same as bridging [6]. However, switches use specially designed hardware called Application Specific Integrated Circuits (ASICs) to perform the bridging and packet forwarding functionality (as opposed to implementations using a central CPU and special software). As a consequence, switches are much faster than bridges. Ethernet switches provide 10 Mbps, 100 Mbps, 1 Gbps or even 10 Gbps (under development) on each drop link. This represents a scalable and huge bandwidth increase compared to, e.g., an Ethernet hub, where the bandwidth is either 10 or 100 Mbps and is shared between all users connected to the same network segment. Ethernet switches also offer both half and full duplex connectivity, which means that an Ethernet controller will never see a collision if full duplex connectivity is used. However, packets can still be delayed or even lost if one of the following scenarios appears:

1. The total network load exceeds the switching capability of the switch engine, i.e. the switch is not able to handle full wire speed on each drop link.
2. The output buffer capacity is not sufficient, i.e. the amount of packets sent to an output port exceeds the bandwidth of this port for a period longer than the output buffer is able to handle. Thus, packets from several input ports compete for the same output port, causing a non-deterministic buffering delay. Higher protocol layers at the stations must handle lost packets.

These two scenarios can be avoided by using the following Ethernet techniques:

• Back pressure: The switch can send a jam pattern simulating traffic on a port operating in half duplex mode if the amount of packets received on this port is more than the switch can handle.
• Flow control: The switch can send PAUSE packets according to IEEE 802.3x on a port operating in full duplex mode if the amount of packets received on this port is more than the switch can handle.
• Priority: Ethernet packets that are identified as high priority packets are put in a high priority queue. Packets from a high priority queue are sent before the low priority packets; the low priority packets may still be lost. This is the most relevant technique with respect to optimal real-time properties for latency-sensitive real-time data.

2.3 Priority

Ethernet switches today may support priority with two or more output queues per port, where the high priority queue(s) are reserved for time-critical real-time data, offering Quality of Service (QoS). How the switch alternates between the priority queues varies from vendor to vendor. Relevant alternating schemes for a switch with two priority queues could be:

1. Round-robin weighting, i.e. N packets are sent from the high priority queue before one packet is sent from the low priority queue.
2. Strict priority, i.e. all packets are transmitted from the high priority queue; packets from the low priority queue are only sent when the high priority queue is empty.

Thus, a high priority packet will be delayed by a low priority packet if the transmission of the latter has started before the high priority packet enters the output port. The high priority packet will then be delayed by the time it takes to flush the rest of that packet. The worst case is that the transmission of an Ethernet packet with maximum packet length (1518 bytes) has just started; the extra delay will then be 122 µs in the case of 100 Mbps, and 1.22 ms in the case of 10 Mbps. A high priority packet may also be delayed through the switch by other real-time packets that are already queued for transmission in the same high priority queue. However, it is often a straightforward job to calculate the worst-case network load and switch latency such a packet may experience, provided the traffic pattern of the real-time application using the high priority queues is known and all other traffic uses lower priority. The typical worst-case switch latency for a high priority packet in such a system will be a few hundred microseconds in the case of 100 Mbps on each drop link [5]. This demands that the controller (the speed/position controller in the case of a distributed industrial drive system) take necessary actions against the latter unavoidable delays in the node-to-node communication of measurement and control signals.
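The worst-case figures quoted above follow directly from the maximum frame length and the line rate. The short C helper below reproduces that calculation for arbitrary link speeds; like the figures in the text, it ignores the preamble and inter-frame gap, which would add slightly to the result.

/* Worst-case blocking of a high priority packet behind one maximum
 * length frame whose transmission has just started (see text above).
 * Sketch only: preamble and inter-frame gap are ignored, as in the
 * 122 us / 1.22 ms figures quoted in the text. */
#include <stdio.h>

#define MAX_FRAME_BYTES 1518  /* maximum Ethernet frame length */

/* Blocking delay in microseconds on a link of the given rate (bit/s). */
static double blocking_delay_us(double link_bps)
{
    return (MAX_FRAME_BYTES * 8.0) / link_bps * 1e6;
}

int main(void)
{
    printf("10 Mbps : %.2f us\n", blocking_delay_us(10e6));   /* ~1214 us */
    printf("100 Mbps: %.2f us\n", blocking_delay_us(100e6));  /* ~121 us  */
    printf("1 Gbps  : %.2f us\n", blocking_delay_us(1e9));    /* ~12 us   */
    return 0;
}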
3. Distributed Controller

Figure 5 outlines the timing aspects of a distributed real-time control system.
Figure 5: Controller distributed over the communication network. τsc: sensor-to-controller delay; τca: controller-to-actuator delay; τc: controller execution delay.
Figure 6: Timing diagram of the delays involved at the various nodes of the distributed control system. The process state x(kh), sampled at time kh, reaches the estimator after the sensor-to-controller delay τsc,k; the controller produces u(kh + τk) after the execution delay τc,k; and the control signal reaches the actuator after the controller-to-actuator delay τca,k. h is the sampling period and k denotes the kth sample.

Figure 7: Micro-controller to interface the sensors and actuators to Ethernet
Even with the switch, the state measurements x(kh) of a system distributed over an Ethernet network can be delayed in reaching the controller node, as shown in Figure 5. By that time the actual process/plant state may have changed (in Figure 6, the process state x(kh) differs from x(kh + τsc)). Therefore an estimator must be used to predict the states at the instant the control signal is released at the actuator node. This is essential because there is a further delay τca before the control signal reaches the actuator. The total control delay τ is unknown prior to the control signal computation and is therefore estimated from the known τsc and τc. The real-time delay compensation scheme based on time-stamped state measurements is described in detail in [2] by the same author.
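As a rough illustration of the idea (not the exact scheme of [2]), the following C sketch forward-predicts a time-stamped measurement of a scalar first-order plant over the total delay τ, assuming the previous control signal is held constant during the delay. The plant parameters and all numeric values are invented for the example.

/* Minimal sketch of delay compensation by state prediction for a
 * scalar plant  dx/dt = a*x + b*u  (a != 0).  A time-stamped
 * measurement x(kh), delayed by tau_sc, is propagated forward over the
 * total delay tau = tau_sc + tau_c + tau_ca so that the control signal
 * matches the state at the instant it actually reaches the actuator.
 * The scalar model is an illustrative assumption, not the estimator
 * of [2]. */
#include <stdio.h>
#include <math.h>

typedef struct {
    double a, b;       /* continuous-time plant parameters */
} plant_t;

/* Predict the state tau seconds ahead assuming the control signal u
 * is held constant over the prediction horizon (zero-order hold). */
double predict_state(const plant_t *p, double x, double u, double tau)
{
    double phi   = exp(p->a * tau);            /* state transition    */
    double gamma = (phi - 1.0) / p->a * p->b;  /* input contribution  */
    return phi * x + gamma * u;
}

int main(void)
{
    plant_t p = { -2.0, 1.0 };  /* stable first-order test plant      */
    double x_meas = 0.8;        /* time-stamped measurement x(kh)     */
    double u_prev = 0.5;        /* control applied during the delay   */
    double tau    = 3.0e-3;     /* measured tau_sc + estimated tau_c + tau_ca */
    double x_pred = predict_state(&p, x_meas, u_prev, tau);
    printf("x(kh) = %.4f  ->  x(kh + tau) = %.4f\n", x_meas, x_pred);
    return 0;
}

The essential point is that the prediction horizon tau is computed per sample from the time stamps, so the compensation follows the actual, varying network delay rather than a fixed average.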
4. Ethernet-Ready Sensors and Actuators
Among the objectives of this research is to design and implement sensor and actuator nodes that can easily be connected to Ethernet, in order to implement the distributed system at a cost affordable to the local industries. As an interim substitute, the commercially available micro-controller shown in Figure 7 is used to interface the existing sensors and actuators in the test rig of the Brushless DC (BLDC) motor drive system used for this study (Figure 8) [7]. The outputs of most physical sensors can be obtained in the form of an analog voltage (e.g. a tacho generator) or a digital word (e.g. an incremental encoder). Similarly, the inputs of many actuators can be provided in the form of an analog voltage (a linear power amplifier) or a digital word (a PWM inverter). Thus, the whole design problem is reduced to designing and implementing the hardware needed to interface (a) an analog-to-digital converter (ADC), (b) a digital-to-analog converter (DAC) and (c) a digital I/O to the Ethernet.

Figure 8: BLDC motor drive system with speed control loop closed via Ethernet
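For illustration, the following C sketch shows the kind of traffic such a sensor node generates: each ADC reading is packed together with a sequence number and a time stamp (needed by the delay compensation scheme of Section 3) and sent over a standard TCP connection. POSIX sockets are used here for readability; the port number, destination address, packet layout and the read_adc() stub are all assumptions of the example, not the actual firmware.

/* Sketch of a sensor node transmitting time-stamped samples over
 * standard TCP/IP.  On the actual micro-controller the same packet
 * would be handed to the hardware TCP/IP stack. */
#include <stdio.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/time.h>
#include <unistd.h>

struct sample_pkt {                /* one measurement per packet    */
    uint32_t seq;                  /* sequence number                */
    uint32_t t_sec, t_usec;        /* time stamp taken at sampling   */
    int32_t  value;                /* raw ADC reading                */
};

static int32_t read_adc(void) { return 512; }   /* placeholder sensor */

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5000);                     /* assumed port */
    inet_pton(AF_INET, "10.40.16.1", &dst.sin_addr);  /* assumed controller address */
    if (connect(s, (struct sockaddr *)&dst, sizeof dst) < 0) return 1;

    struct sample_pkt pkt;
    for (uint32_t k = 0; k < 10; k++) {       /* ten samples, then stop */
        struct timeval tv;
        gettimeofday(&tv, NULL);              /* stamp at sampling time */
        pkt.seq    = htonl(k);
        pkt.t_sec  = htonl((uint32_t)tv.tv_sec);
        pkt.t_usec = htonl((uint32_t)tv.tv_usec);
        pkt.value  = (int32_t)htonl((uint32_t)read_adc());
        send(s, &pkt, sizeof pkt, 0);
        usleep(1000);                         /* 1 ms sampling period   */
    }
    close(s);
    return 0;
}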
4.1 Design approach

The complete design problem is divided into two phases: Phase I, interfacing of an ADC and a DAC to the Ethernet, and Phase II, interfacing of a digital I/O to the Ethernet.
4.2 Interfacing of an ADC and a DAC to the Ethernet

A hardware module that interfaces an ADC and a DAC to the Ethernet through a hardware TCP/IP stack, as shown in Figure 9, has been designed and implemented; the associated software programming is being done at present. The unit will be capable of first receiving information such as the sampling time from the central control or monitoring computer, and then operating the ADC and DAC accordingly.

Figure 9: Ethernet interface module for ADC and DAC. The module comprises an RJ45 connector, an Ethernet receiver/transmitter, a hardware TCP/IP stack, a micro-controller with additional memory, and an A/D converter and a D/A converter connected to the external system.
The prototype module has been designed to meet the performance demands of hard real-time closed-loop control of the drives; since these demands are the more stringent ones, meeting them also provides necessary and sufficient conditions for implementing real-time condition monitoring. The particular application considered here is closed-loop speed control of motor drives through Ethernet. The prototype module will be capable of completing the one analog-to-digital conversion and one digital-to-analog conversion needed for a single actuation cycle within 1 ms.
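The intended behaviour of the unit can be sketched as the following C pseudo-firmware: the module first blocks until the host supplies the sampling period, then runs the periodic ADC-send-receive-DAC cycle. All driver functions are stubbed placeholders so the sketch compiles on a PC; they are illustrative assumptions, not the real firmware API.

/* Sketch of the module's intended behaviour: receive the sampling
 * period from the host once, then run the ADC -> send -> receive ->
 * DAC cycle at that rate. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t sample_period_us;   /* requested sampling period */
    uint8_t  channel;            /* ADC channel to operate on */
} config_msg_t;

/* --- stubbed drivers (assumptions standing in for firmware) ------- */
static void net_recv_config(config_msg_t *cfg)
{   cfg->sample_period_us = 1000; cfg->channel = 0; }   /* 1 ms cycle */

static uint16_t adc_read(uint8_t ch)
{   (void)ch; return 512; }                             /* dummy ADC  */

static void net_send_sample(uint16_t raw)
{   printf("sample sent: %u\n", raw); }

static int net_recv_actuation(uint16_t *u)
{   *u = 300; return 1; }                               /* dummy host */

static void dac_write(uint16_t v)
{   printf("DAC driven : %u\n", v); }

static void wait_until_next_period(uint32_t us)
{   (void)us; /* hardware timer wait omitted in the stub */ }

int main(void)
{
    config_msg_t cfg;
    net_recv_config(&cfg);           /* 1. host sends sampling time  */

    for (int k = 0; k < 3; k++) {    /* 2. periodic conversion cycle */
        uint16_t raw = adc_read(cfg.channel);
        net_send_sample(raw);        /* measurement to controller    */

        uint16_t u;
        if (net_recv_actuation(&u))  /* control signal, if arrived   */
            dac_write(u);            /* one A/D + D/A pass per cycle */

        wait_until_next_period(cfg.sample_period_us);
    }
    return 0;
}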
4.3 Hardware development

The entire system has been simulated in the KEIL C micro-controller programming and simulation platform. Establishment of the communication link between the module and the remote host has been successfully completed, and operating the ADC and DAC using the micro-controller has also been implemented (see Figure 11). The corresponding communication link diagram obtained from the prototype is given in Figure 10.
Figure 10(a): The implemented Ethernet-connectable module was given the IP address 10.40.16.146 and replies correctly when tested with the "ping" command.
Figure 11: Ethernet connectable hardware module
5. Speed controller strategies

The aim here is to implement different controller strategies for systems with non-deterministic time delays (Figure 12) in order to make a good comparative study between them. The controllers that have been implemented so far are (a) a PI controller specially tuned for time delays [8,9,10], (b) a Smith predictor modified for time-delayed systems [11,12] and (c) a state feedback controller with online delay compensation [2]. Other practical issues, such as actuator saturation and integrator wind-up, have also been considered at the implementation level [13,14]. Figure 13 compares the actual performances of the different controller strategies mentioned.

The PI controller uses parameters tuned off-line, despite the time variations caused by τ (= τsc + τc + τca); it further treats τ as a constant equal to its mean value. The Smith predictor uses the Coefficient Diagram Method (CDM) in its controller parameter calculations; the CDM produces a more stable controller in the presence of model errors, thanks to its structure with a higher degree of freedom. In this case too, τ is taken as constant. Both controllers are equally good at minimizing the average steady-state error, but show tiny bursts even in the steady state, despite the integral action, when τ changes considerably from its average; this is because both controllers have been tuned to the average delay, while in reality there are variations. Further, the PI controller is faster but has oscillations in the transients. The reason is that the numerical integration in the PI controller advances in fixed steps, expecting that the control signal is applied to the actuator periodically with the integrator time step as the period; in practice this is not the case, and the actuation instants are irregular due to the time variations. The Smith predictor is smooth but slower in transients, because the error between the actual system and its model is large enough that the control signal is small and takes time to track the changes. On the other hand, the state feedback controller has a comparatively better steady-state performance despite the lack of integral action, and its transients are smoother and as fast as those of the PI controller, because the time variations are treated in real time.

Figure 12: Time delay distribution between various nodes in the Ethernet network
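As an example of the anti-windup measures mentioned above, the following C sketch implements a discrete PI controller with back-calculation anti-windup of the textbook kind; the gains, saturation limits and toy test plant are illustrative assumptions, not the values tuned for the experiments.

/* Minimal sketch of a discrete PI controller with anti-windup by
 * back calculation. */
#include <stdio.h>

typedef struct {
    double kp, ki;        /* proportional and integral gains  */
    double kt;            /* anti-windup (tracking) gain      */
    double h;             /* nominal sampling period [s]      */
    double i;             /* integrator state                 */
    double u_min, u_max;  /* actuator saturation limits       */
} pi_t;

/* One PI update.  The integrator is corrected with the difference
 * between the saturated and unsaturated outputs, which stops it from
 * winding up while the actuator is at its limit. */
double pi_step(pi_t *c, double ref, double meas)
{
    double e = ref - meas;
    double v = c->kp * e + c->i;                  /* unsaturated output */
    double u = v;
    if (u > c->u_max) u = c->u_max;               /* actuator limits    */
    if (u < c->u_min) u = c->u_min;
    c->i += c->h * (c->ki * e + c->kt * (u - v)); /* back calculation   */
    return u;
}

int main(void)
{
    pi_t c = { 2.0, 10.0, 5.0, 1e-3, 0.0, -1.0, 1.0 };  /* assumed gains */
    double y = 0.0;                    /* crude first-order test plant  */
    for (int k = 0; k < 5; k++) {
        double u = pi_step(&c, 1.0, y);
        y += c.h * (-y + 4.0 * u);     /* plant: dy/dt = -y + 4u        */
        printf("k=%d  u=%.3f  y=%.3f\n", k, u, y);
    }
    return 0;
}

Note that the fixed step h in this sketch is exactly the assumption criticized in the text: with irregular actuation instants caused by the varying network delay, h no longer matches the true inter-actuation interval, which explains the transient oscillations observed with the PI controller.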
Figure 13: Step response comparison
6. Conclusion

The results so far of the hardware implementation of the Ethernet-connectable ADC/DAC module are promising. The local industries will be able to use it for real-time condition monitoring and distributed real-time control of their industrial drive systems via their already existing Ethernet networks at a friendly budget: the cost per module of Figure 11, excluding fabrication capital, is approximately one tenth that of the commercially available micro-controller shown in Figure 7. Still, the non-determinism of standard TCP/IP has to be addressed from a control system perspective. The experimental results so far reveal that the state feedback controller with online delay compensation, which is also an original contribution, has the best step response performance; this is verified by Figure 13.
7. References

[1] N.D. Aakvaag, N.A. Nordbotten, "Implications of the Next-Generation Internet Protocol for ABB – IPv6 the new Internet Protocol," ABB Review, Jan. 2003, pp. 30-35.
[2] L. Samaranayake, "Distributed Control of Electric Drives via Ethernet," Tech. Lic. Thesis, Department of Electrical Engineering, Royal Institute of Technology, Sweden, June 2003.
[3] "IEC 61850 Communication Networks and Systems in Substations, Part 5: Communication Requirements for Functions and Device Models; Part 7-2: Basic Communication Structure for Substations and Feeder Equipment," 1999.
[4] C. LeBlanc, "The future of industrial networking and connectivity," Dedicated Systems Magazine, pp. 9-11, Mar. 2000.
[5] T. Skeie, S. Johannessen, C. Brunner, "Ethernet in Substation Automation," IEEE Control Systems Magazine, pp. 43-51, June 2002.
[6] R. Perlman, Interconnections – Bridges, Routers, Switches and Internetworking Protocols, 2nd ed., vol. 1, Addison-Wesley, 2000, pp. 1-40.
[7] "C-Programmable Single-Board Computer with Ethernet and Operator Interface," User Manual, Z-World Inc., California, USA.
[8] A. O'Dwyer, "Performance and robustness issues in the compensation of FOLPD processes with PI and PID controllers," Irish Signals and Systems Conference, Dublin, Ireland, June 1998.
[9] J. Syder, T. Heeg, A. O'Dwyer, "Dead-time compensators: performance and robustness issues," Dublin Institute of Technology, 1998.
[10] J.G. Ziegler, N.B. Nichols, "Optimum settings for automatic controllers," Trans. ASME, vol. 64, pp. 759-768, 1942.
[11] S.E. Hamamchi, I. Kaya, D.P. Atherton, "Smith Predictor Design by CDM," European Control Conference, 2001.
[12] S. Manabe, "Application of Coefficient Diagram Method to MIMO design in aerospace," IFAC 15th Triennial World Congress, Barcelona, Spain, 2002.
[13] K.S. Walgama, "On the Control Systems with Input Saturation or Periodic Disturbances," Luleå University of Technology, Luleå, Sweden, 1992.
[14] S. Alahakoon, "Digital Motion Control Techniques for Electrical Drives," Royal Institute of Technology, Stockholm, Sweden, 2000.