Ethernet in Substation Automation

Tor Skeie, Svein Johannessen, and Christoph Brunner

Skeie is with ABB Corporate Research, Norway, and with the Simula Research Laboratory; [email protected]. Johannessen is with ABB Corporate Research, Bergerveien 12, N-1375 Billingstad, Norway, and with KODE A/S; [email protected]. Brunner is with ABB High Voltage Technologies Ltd., CH-4050 Zürich, Switzerland.

In the energy distribution world, a substation is an installation where energy is combined, split, or transformed. A substation automation (SA) system is dedicated to the monitoring and protection of the primary equipment of such a substation and its associated feeders. In addition, the SA system has administrative duties such as configuration, communication management, and software management.

In this article we investigate whether Ethernet has sufficient performance characteristics to meet the real-time demands of substation automation. More precisely, the evaluation is carried out with respect to switched Fast Ethernet and UDP/IP as the time-critical protocol. In short:

• The problem: Substation automation is a very demanding and expensive application.
• The opportunity: Ethernet, which has gotten steadily faster and more efficient while remaining low in cost.
• The challenge: Making substation automation run on top of Ethernet using a standard networking protocol.

I. TRADITIONAL HIERARCHICAL SYSTEMS AND THE VISION

Traditionally, the functionality of SA systems has been logically allocated to three distinct levels called the station, bay, and process levels (a classical hierarchical system architecture):

1. The process-level functionality is more or less an interface to the primary equipment. Typical functions specified at this level are data acquisition (sampling) and the issuing of I/O commands.
2. The bay-level functionality is concerned with coordinated measurement and control relating to a well-defined subpart of a substation (usually denoted a bay).
3. At the topmost station level we find the functions that protect and control the entire substation or larger parts of it. Among the station-level functions we often also find human-machine interface (HMI) functions as well as links to remote control centers.

When substation automation systems were first introduced several years ago, the basic architecture was as given in Fig. 1. The connections between station- and bay-level devices were replaced with a communication bus, usually implementing a vendor-specific communication protocol. The connection to the process-level devices was implemented with parallel wiring using standardized interfaces.

Figure 1. Classical substation automation system, in which the process-level devices are hardwired directly to the control and protection units.

Due to progress in technology, new electronic current and voltage sensors became available that might provide significant cost reductions. However, this technology required a different kind of interface to the rest of the system. The fact that these new sensors incorporated a microprocessor provided the opportunity to replace the analog interface with a digital communication link. In addition, the introduction of microprocessor-based electronics in the switching devices made it possible to replace the wiring between the bay and process levels with a communication bus. Such a solution meant a significant reduction in engineering cost and labor and is illustrated in Fig. 2.

Figure 2. Modern substation automation system realizing a process bus for real-time traffic.

Some of the first installations using this new technology were based on vendor-specific communication solutions, with the goal of gaining experience with the new technology. However, since the connection between protection equipment at the bay level and sensors at the process level is typically a multivendor connection (a typical substation will have two redundant protections from different vendors), there has been a strong customer drive for standardized solutions.
II. ENTER THE ETHERNET

In the mid-1990s, standardization activities were started both in the U.S. and in Europe. While the U.S. activities (UCA 2.0, the Utility Communication Architecture) primarily focused on standardization between the station and bay levels, the European approach (driven by IEC TC57, WG 10, 11, and 12) included the communication down to the time-critical process level from the beginning. In 1998, the two activities were merged to define one worldwide applicable standard: IEC 61850 [1]. Instead of debating between several competing fieldbuses, agreement was reached to use Ethernet as the communication base for the station bus. This agreement was based on the fact that Ethernet technology has evolved significantly. Starting out as a network solution for office and business applications, Ethernet today is applied more and more as a solution for high-speed backbone communication between PCs and in industrial networks [2]. The high-speed properties of current Ethernet technology, together with its dominant position in the local area network (LAN) field, make Ethernet an interesting communication technology for substation automation. While classical 10 Mbit/s Ethernet used to have some competition (most notably IEEE 802.5 Token Ring), fast (100 Mbit/s and up) switched Ethernet currently has none. Other communication standards such as asynchronous transfer mode (ATM) [3] and the Fiber Distributed Data Interface (FDDI) [4] have had little or no success in the LAN field. An emerging communication technology, the InfiniBand Architecture, recently specified by a consortium of the world's seven largest computer companies, might present Gigabit Ethernet with some competition in the future. InfiniBand is a high-performance interconnect technology (with link speeds spanning from 2.5 to 30 Gbit/s) for connecting CPUs to I/O devices (in that respect it will replace today's nonscalable PCI bus), as well as for networking of computers, where switches and routers are key building components [5].

For various economic and standardization reasons, the industry would like to eliminate the various (and usually incompatible) communication networks at the traditional SA levels, migrating to a single, all-encompassing network concept. Therefore, it was natural to consider the use of Ethernet technology for the communication between the bay and process levels as well. However, those levels pose two very demanding data transfer challenges:

• The transmission of sampled values from the sensors to the protection devices. Here we have a large amount of data (usually a sampling rate of 1,440 Hz, with current and voltage information from three phases). These data should be transmitted with a maximum delay of about 4 ms, and any loss of data should be detected.
• The transmission of trip signals. This is a short but mission-critical data packet. The maximum admissible transmission delay is in the same range as that for the sampled values.
In this article we shall examine the potential of state-of-the-art switched Ethernet in such a common network concept carrying multiple coexisting traffic types. To our knowledge, very little related work has addressed the performance of switched Ethernet in relation to substation automation. J. Tengdin et al. studied LAN congestion scenarios in Ethernet-based substations in [6]. The research presented here, however, differs from [6] in several key aspects:

• In [6], the substation configurations do not contain any (shared) process bus. Moreover, the current and voltage transformers (the process data sources) are hardwired directly to the protective relays.
• No protocols specified by the Internet Engineering Task Force (IETF) are deployed above the Ethernet MAC (Medium Access Control) layer (examples of such protocols are TCP/IP and UDP/IP).
• There is no real background traffic load; the system in [6] is based on the assumption that a background traffic of five times normal SCADA traffic plus two large file transfers will have little impact on message delivery times. Our results indicate that this is not the case.
In this article, we also discuss the latest achievement of Ethernet technology: traffic class prioritization. This technology makes it possible to give mission-critical data preferential treatment over noncritical data, an important milestone on the road toward deterministic Ethernet [7].

III. JUST HOW DEMANDING CAN THAT APPLICATION BE?

As mentioned earlier, the critical path in a substation automation system is the information flow sensor → protection relay → circuit breaker. The overall communication delay shall remain in the low milliseconds and may not be influenced by other data services, such as file transfers to upload monitoring information stored in a device or to download new parameters. In summary, we have the following tough set of requirements:

• A data sampling rate of 1,440 Hz (IEC TC57 WG 12, protection and control class 4, 60 Hz system).
• Since everything is three-phase, we need three sets of measurements for each measurement point.
• A typical setup has 8 to 12 measurement points.
• The measurement data must be sent to multiple (two to four) destinations.
• In addition to the measurement data, we must be able to handle administrative data, trip data, and file transfers.
A gross estimate of the amount of data traffic in the substation can be made from the information given above and an estimate of the administrative overhead. The final result is that we will have about 140,000 packets per second on the substation network. If a standard payload for the measurement data is 32 bytes with a total protocol overhead of 60 bytes, a standard packet will be 736 bits. Multiplying this number by the estimated number of packets per second gives an estimated data volume of about 103 Mbit/s, slightly more than the gross capacity of Fast Ethernet.
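As a sanity check, the arithmetic above can be reproduced in a few lines of Python. The mid-range measurement-point and destination counts and the administrative overhead factor are our assumptions, chosen from the ranges in the requirements list; only the packet size and sampling rate are fixed by the text.

```python
# Back-of-the-envelope traffic estimate for the substation network.
# Mid-range values are assumed where the requirements give a range; the
# 8% administrative overhead is our guess, tuned to land near the
# article's ~140,000 packets/s figure.
SAMPLE_RATE_HZ = 1_440     # IEC TC57 WG 12, class 4, 60 Hz system
PHASES = 3                 # three-phase measurements
MEASUREMENT_POINTS = 10    # typical setup: 8 to 12
DESTINATIONS = 3           # data goes to two to four receivers

PAYLOAD_BYTES = 32         # standard measurement payload
OVERHEAD_BYTES = 60        # total protocol overhead

measurement_pps = SAMPLE_RATE_HZ * PHASES * MEASUREMENT_POINTS * DESTINATIONS
total_pps = measurement_pps * 1.08     # ~140,000 packets/s including admin traffic

packet_bits = (PAYLOAD_BYTES + OVERHEAD_BYTES) * 8   # 736 bits per packet
volume_mbps = total_pps * packet_bits / 1e6          # ~103 Mbit/s

print(f"{total_pps:,.0f} packets/s -> {volume_mbps:.0f} Mbit/s")
```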
IV. HOW TO SQUEEZE A LARGE AMOUNT OF DATA THROUGH ETHERNET

Since the worst-case data volume is greater than Fast Ethernet can handle, it may seem that there is no point in further investigation. However, Ethernet offers several possibilities that have not yet been examined. Interesting avenues open to exploration include:

• Transmitting measurement data to multiple destinations at the same time (multicasting). This approach will reduce the data volume to about 30 Mbit/s (a minimal sketch of such a publisher follows this list).
• Permitting one measurement node to transmit measurement data for three phases instead of one. This approach will reduce the data volume to about 50 Mbit/s.
• Using switched Fast Ethernet. This approach will not reduce the data volume, but it will increase the available data transfer bandwidth [1], [7], [8], [9].
• Using Gigabit Ethernet. This is the most expensive method, but it has sufficient bandwidth to handle the data volume easily.
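To make the multicast option concrete, here is a minimal Python sketch of a sampled-value publisher using standard IP multicast, so that one send reaches all two to four subscribed protection devices and the wire carries each sample only once. The group address, port, and packet layout are illustrative assumptions, not part of the article or of IEC 61850.

```python
# Minimal sketch of the multicast avenue: one datagram per sample, delivered
# by the network to every subscriber instead of one copy per destination.
import socket
import struct

GROUP = ("239.192.0.1", 50000)   # assumed site-local multicast group and port

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Keep the traffic inside the substation LAN: TTL 1 stops it at any router.
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

def publish_sample(seq: int, ia: float, ib: float, ic: float) -> None:
    """One send reaches every subscriber; no per-destination copies needed."""
    tx.sendto(struct.pack("!Ifff", seq, ia, ib, ic), GROUP)

# A receiver joins the group once and then reads datagrams as usual:
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", GROUP[1]))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP[0]), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
```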
Having several solutions available to us, the next step is to find out whether any of them will work in practice and then select the "best" (least expensive, fastest, most reliable) of the workable solutions.

V. HOW TO TEST NETWORK PERFORMANCE WITHOUT INSTALLING ANYTHING

Traditionally, testing these network alternatives would involve buying or leasing network components and using them to test the communication in a test setup. In our case such a test is impractical for various reasons:

• A high-voltage substation is such an expensive piece of equipment that even the capital interest involved in dedicating it to network tests for a few months might be prohibitive.
• An important parameter is the time from when an abnormal network condition is discovered until the circuit breaker trips. Such a condition is not easy to provoke, and thus usually only a few measurements can be made.
The alternative is to simulate the whole network. Using a competent simulator package, it is possible to model the data nodes and the network traffic for both normal and abnormal traffic situations. It is even possible to simulate abnormal conditions several times per second! Predicting network performance accurately by queuing theory (analytically) is not feasible for switched networks, because it is difficult to model all the interactions between connected network elements at a detailed level (in the literature, the analytical approach has been shown to be practical for predicting the performance of single-switch networks only). In that respect, the only promising approach is simulation [10].

The key word in the paragraph above is competent. Not many network simulators exist, and their capabilities differ significantly. After some research, we decided on OPNET from MIL3 [11]. This simulator, originally developed at the Massachusetts Institute of Technology, has many advantages for our purposes. One advantage is that the simulator is object-oriented; the user can create new objects from existing general-purpose objects at will.

VI. SETTING THE SCENE

Before we start simulating (or measuring) performance, we need to address some important issues:
• Protocol: The de facto protocol standard for network communication is the IETF protocol suite, usually called TCP/IP. This suite contains two transport protocols: TCP and UDP. The main differences are that TCP is slow, reliable, and connection-oriented, whereas UDP is fast, unreliable, and connectionless. For high-speed measurement data, TCP is more or less useless due to the protocol overhead involved. Therefore, we chose UDP as the real-time protocol for measurement data. This turns out to be more than satisfactory, since we sample the measurement values at a very high rate; if a data set is lost, another set will be coming along shortly anyhow. (To achieve reliability with a connectionless protocol like UDP, the IEC 61850 standardization committee is discussing a repeating transmission scheme for mission-critical messages; a minimal sketch of such a scheme follows this list.)
• Disturbances: If measurement data were the only traffic on the network, speed and response would be easy to calculate. At times, however, a node will be upgraded or reconfigured, something that implies the transfer of long, structured data. We must incorporate such a scenario in our simulation, and we chose the File Transfer Protocol (FTP) for such transfers.
• Performance requirements: The normal response requirement for substation automation is 4 ms (event → protection calculation → action). The extra-high-performance requirement is 1 ms.
• Substation topology and message rate: For the experiments performed in this study, these are dictated by actual substation topologies (configurations) and message rates.
• Switched Ethernet characteristics: Switched Ethernet is in some respects fundamentally different from classic Ethernet. The most important differences are:
  – No collisions: Switched Ethernet can still lose packets, but the collision mechanism is not used.
  – Full duplex: A switched Ethernet connection can transmit and receive different packets simultaneously.
  – Store and forward: The time a packet takes to travel through a switched Ethernet fabric is very difficult to predict, as the packet may be forced to wait in a buffer inside the switch.
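As an illustration of this transport choice, the sketch below carries sampled values in sequence-numbered UDP datagrams, so the receiver tolerates loss but can still detect it, and repeats each datagram to mask single losses. The addresses, packet format, and repeat count are our assumptions; this is not the committee's actual scheme.

```python
# Minimal sketch: sequence-numbered sampled values over UDP with loss
# detection and a naive repeat scheme (stand-in for the IEC discussion above).
import socket
import struct

DEST = ("192.0.2.10", 50000)   # assumed controller address and port
REPEATS = 2                    # assumed repeat count for mission-critical data

def send_sample(sock: socket.socket, seq: int, ia: float, ib: float, ic: float) -> None:
    """Send one three-phase current sample; repeat it to mask single losses."""
    payload = struct.pack("!Ifff", seq & 0xFFFFFFFF, ia, ib, ic)
    for _ in range(REPEATS):
        sock.sendto(payload, DEST)

def receive_samples(sock: socket.socket) -> None:
    """Detect gaps in the sequence numbers; duplicates from repeats are dropped."""
    expected = None
    while True:
        data, _ = sock.recvfrom(64)
        seq, ia, ib, ic = struct.unpack("!Ifff", data)
        if expected is not None:
            if seq == expected - 1:
                continue                 # duplicate produced by the repeat scheme
            if seq != expected:
                print(f"lost {seq - expected} sample(s) before #{seq}")
        expected = seq + 1               # a fresher sample supersedes any loss
```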
VII. SIMULATING A SWITCH-BASED “FLAT” NETWORK
Although we could have chosen one tentative solution after another and run a test case for each, for the purpose of this article we will concentrate on a somewhat mixed solution. Since the main difference (from a network point of view) between a multicast solution and a standard solution is reduced network traffic, we will ignore multicast for now. What we will simulate is a medium number of producer nodes (16), called PISAs (Process Interface for Sensors and Actuators) in the SA context, transmitting medium-size packets (60-byte payload) to two different receive nodes. Recall that the sampling rate is 1,440 Hz. In addition, one PISA node will be the subject of FTP upload and download from a dedicated server; the file size is 101 kbytes (50% send and 50% receive). If this succeeds, we can avoid the expense of Gigabit Ethernet or the protocol hassle of multicasting.

Fig. 3 shows the resulting OPNET simulator setup. Both the PISAs and the controllers are simulated using the predefined "Ethernet Advanced Workstation" object (this object is closest to what we are trying to achieve). The important configuration parameter turns out to be the IP processing rate, which is set to a default 5,000 packets/s for the PISAs and a hefty 20,000 packets/s for the controllers.

Figure 3. The measurement setup in the OPNET simulator.
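Before looking at the results, a quick load check of this configuration is instructive. The 60 bytes of protocol overhead per packet are carried over from the earlier estimate and are an assumption here.

```python
# Quick load check (our arithmetic) for the flat setup above: 16 PISAs,
# each sending 1,440 samples/s of 60-byte payloads to two sinks.
PISAS = 16
RATE_HZ = 1_440
SINKS = 2
PACKET_BITS = (60 + 60) * 8        # payload + assumed protocol overhead

pps = PISAS * RATE_HZ * SINKS      # 46,080 packets/s into the switch
print(f"{pps:,} packets/s, {pps * PACKET_BITS / 1e6:.1f} Mbit/s aggregate")
# Each sink must absorb 23,040 packets/s, the same order as the configured
# 20,000 packets/s IP service rate, which is why the IP processing rate
# turns out to be the important configuration parameter.
```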
Most of the simulation application layers tend to emphasize the client-server or request-response paradigms. Since we are focusing on data acquisition traffic (which is more one-way in nature), we settled for a modified videoconference application. This videoconference can be configured for different traffic loads in different directions and runs on top of UDP, making it an excellent simulation vehicle for our purposes. In short, this application layer allows us to specify the amount of UDP traffic to be generated, the destination(s) of the packets, and so on. One caveat: in a videoconference session, just as in SA applications, a packet going in one direction is not a result of a packet going in the other direction. Thus, a round-trip delay must be estimated as the sum of the delay in one direction, the delay in the other direction, plus an estimated reaction time in the controller.

VIII. DISCUSSING THE SIMULATED PERFORMANCE

Figure 4. The end-to-end delay for a packet at the application level.
Fig. 4 shows two important components of the system reaction time:

• The time it takes a measurement packet to travel from the measurement software in the PISA to the application layer in the controller, and
• The corresponding time for a control packet to travel from the controller back to the PISA.

Under normal circumstances, the delay from the PISA to the controller is just below 0.3 ms and the delay going the other way is just above 0.3 ms, resulting in a total round-trip delay of about 0.6 ms. Under abnormal circumstances (heavy FTP traffic), the delay from the controller to the PISA increases to about 0.85 ms, adding up to a total round-trip delay of less than 1.2 ms. This increase stems mostly from additional protocol stack software involvement, since the Ethernet-level end-to-end delay is otherwise constant at 19 µs and increases by only about 20 µs.
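As a small worked example of the round-trip bookkeeping prescribed by the videoconference caveat above, the figures can be combined as follows; the controller reaction time is purely an assumed placeholder, since the article only notes that it must be added.

```python
# Round-trip estimate per the caveat above: sum of the two one-way
# application-level delays plus an assumed controller reaction time.
PISA_TO_CTRL_MS = 0.3        # just below 0.3 ms under normal load
CTRL_TO_PISA_MS = 0.3        # just above 0.3 ms under normal load
CTRL_TO_PISA_FTP_MS = 0.85   # under heavy FTP traffic
REACTION_MS = 0.1            # assumed protection-calculation time

normal = PISA_TO_CTRL_MS + CTRL_TO_PISA_MS + REACTION_MS       # ~0.7 ms
loaded = PISA_TO_CTRL_MS + CTRL_TO_PISA_FTP_MS + REACTION_MS   # ~1.25 ms
print(f"round trip: {normal:.2f} ms normal, {loaded:.2f} ms under FTP load")
```

Either way, the result stays well below the 4 ms requirement, consistent with the simulation results.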
IX. SIMULATION RESULTS FOR A SWITCH-BASED MULTILEVEL NETWORK
Figure 5. A multilevel switch test setup.

In real life, the measurement points in a high-voltage substation may be far apart, and it is thus an advantage to have local "data concentrators" in order to simplify cabling. An example of such a data concentrator is a local Ethernet switch; thus, we need to investigate the effect of multiple switches in the data path. The simulation setup for such a multilevel switch configuration is shown in Fig. 5. In this scenario, the Ethernet end-to-end delay for incoming UDP traffic to a PISA under normal load circumstances is about 29 µs. Thus we observe an increase in latency of 10 µs due to the introduction of one more switch in the data path between the sinks and some of the PISAs. During heavy FTP load, we also notice that the Ethernet delay increases more than in the single-switch network. The reason is that hooking up six PISAs through the drop link between switch 1 and switch 4 causes some buffering delay (head-of-line blocking) on this link. At the same time, the UDP delay from the sinks to the PISAs connected to switch 4 increases during the FTP session (see Fig. 6). A rough model of these per-hop and stack contributions is sketched after Fig. 6.
Figure 6. End-to-end delay at the application level for a multilevel switch setup.
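As promised above, here is a rough additive model of where such a path's latency comes from: a store-and-forward switch cannot begin forwarding a frame before it has fully arrived, so every extra switch adds roughly one frame time plus its internal latency, while the end-node stack adds the reciprocal of its IP service rate. The switch latency figure and the additive composition are simplifying assumptions of ours, not OPNET internals.

```python
# Rough latency decomposition for a switched path (our simplified model).
FRAME_BITS = 92 * 8          # 32-byte payload + 60 bytes of headers
LINK_BPS = 100e6             # Fast Ethernet
SWITCH_LATENCY_S = 3e-6      # assumed internal switch latency

frame_time = FRAME_BITS / LINK_BPS            # ~7.4 us on the wire

def path_delay(switches: int, ip_service_rate: float) -> float:
    """End-to-end delay: stack processing + one store-and-forward per hop."""
    stack = 1.0 / ip_service_rate             # e.g. 5,000 pps -> 200 us
    hops = switches + 1                       # each link re-serializes the frame
    return stack + hops * frame_time + switches * SWITCH_LATENCY_S

# One switch vs. two switches, with the default 5,000 packets/s PISA:
for n in (1, 2):
    print(f"{n} switch(es): {path_delay(n, 5_000) * 1e6:.0f} us")
```

The model reproduces the two observations above: adding a switch costs on the order of 10 µs, while the stack term dominates the total.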
X. THEME AND VARIATIONS

Looking closely at the test results for the previous tests, we see that the "IP delay" (the time a packet spends inside a node traversing the protocol stack) is a major part of the end-to-end delay time. To show the effect of this delay, we reran the multilevel tests with a high-performance PISA test node having an IP service rate of 20,000 packets/s. Fig. 7 shows the results for the UDP end-to-end delay. If we compare it with Fig. 6, we see a dramatic difference: the worst-case delay has been reduced from about 1.3 ms to about 370 µs. Such a high-performance PISA would traditionally require a powerful CPU and a large amount of memory, but recently several companies have succeeded in incorporating large parts of the IP stack in hardware, dramatically lowering the software processing overhead. Fig. 7 also shows the Ethernet delay for the "sink to source" traffic. Observe that the increase in UDP "sink to source" delay closely tracks the increased Ethernet delay.

Figure 7. Data transfer delay with a high-performance PISA.

We have already determined that using a hub instead of a switch in the setup illustrated in Fig. 3 will not work; a short simulation shows more than 60,000 collisions every second and a saturated network. Introducing hubs instead of the lower switches (switch 4 and switch 5) in the setup illustrated in Fig. 5, and keeping the high-performance PISAs discussed above (a low-performance PISA was not even able to keep up with the traffic caused by collisions and subsequent retries), we reran the simulation. The results indicated that it is possible to use such a mixed network configuration, but also that it is much more sensitive to abnormal traffic circumstances than the layered switch solution. The collision count leveled out at about 2,800 collisions per second, and two packets were lost at the Ethernet level (a packet is lost after 16 retransmissions). Packet loss causes retransmission at the TCP level (invoked by FTP), which has an impact on the UDP delay as well. Thus, even if it is possible to use hubs in such a setup, it is not recommended.

XI. SIMULATING A REAL CONFIGURATION

Having practiced simulation on the previous setups, we decided to tackle a realistic setup for substation automation. Fig. 8 shows the setup we decided on, consisting of eight feeder bays and two transformer bays. Each bay is an almost perfect subnetwork; most of the traffic stays within the bay. Unlike the conventional multilevel architectures discussed in the introduction, this configuration applies Ethernet as a single medium for both process and station/interbay communication.
Figure 8. The simulation setup for a normal feeder substation.

A feeder bay consists of:

• Three current transformer (CT)/voltage transformer (VT) PISAs
• Four Distance Earth PISAs
• One Fast Earth PISA (with the same communication specification as the Distance Earth PISAs)
• One Circuit Breaker PISA
• A Bay Controller, a Protection Unit, and a Differential Protection Unit

The traffic pattern of such a feeder bay is fairly complex, but the important data streams follow the pattern mentioned earlier (a tally of the resulting per-bay packet rate is sketched after this list):
1. A high-speed stream of CT/VT data from the CT/VT PISAs to the local Bay Controller, Protection Unit, and Differential Protection Unit, and to the global busbar. In this context the data sampling rate was specified as 1,000 Hz instead of 1,440 Hz.
2. A medium-speed stream of controller data to the Circuit Breaker PISA from the same nodes, with a frequency of 250 Hz and a packet size of 16 bytes.
3. A low-speed data exchange between all PISAs in a bay and the local Bay Controller, Protection Unit, and Differential Protection Unit. The frequency of these data streams is 10 Hz and the packet size is 32 bytes.
4. Each PISA in some of the bays does a file transfer (usually a download), but one PISA also does an upload (25% of the total FTP transfer). The file transfers take place at different times, and the file size is 1 Mbyte. In addition to this pattern, the controller nodes request file downloads.
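For a feel for the numbers, the sketch below tallies the steady-state packet rate of one bay from the stream list above. Treating the low-speed exchange as one-way and leaving out the FTP transfers are simplifications of ours.

```python
# Illustrative per-bay packet tally from the stream list above.
CT_VT_PISAS = 3
CT_VT_RATE = 1_000        # Hz, as specified for this setup (not 1,440 Hz)
CT_VT_DESTS = 4           # Bay Controller, Protection, Diff. Protection, busbar

CB_SOURCES = 3            # the same three local units command the CB PISA
CB_RATE = 250             # Hz

LOW_SPEED_PISAS = 9       # all PISAs in the bay (3 CT/VT + 4 + 1 + 1)
LOW_SPEED_PEERS = 3       # Bay Controller, Protection, Diff. Protection
LOW_SPEED_RATE = 10       # Hz

high = CT_VT_PISAS * CT_VT_RATE * CT_VT_DESTS             # 12,000 packets/s
medium = CB_SOURCES * CB_RATE                             #    750 packets/s
low = LOW_SPEED_PISAS * LOW_SPEED_PEERS * LOW_SPEED_RATE  #    270 packets/s
print(f"per-bay steady state: {high + medium + low:,} packets/s")
```

At roughly 13,000 packets/s per bay, the 10,000 packets/s PISA and 50,000 packets/s busbar service rates specified below are plausible settings rather than generous ones.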
A transformer bay may have a different purpose, but the communication requirements are very similar. Thus, for simulation purposes, the transformer bay has the same traffic pattern as a feeder bay.

In this setup we specified PISAs with IP service rates of 10,000 packets/s. The busbar node must handle a large number of packets, and thus a service rate of 50,000 packets/s was specified for that node. The simulation results fall naturally into two classes: intrabay and interbay traffic. The important intrabay delay (the delay inside a bay) is the delay from when a measurement is finished (CT/VT PISA) to when a trip command arrives at the circuit breaker (CB PISA). The simulations indicate a maximum delay from a CT/VT PISA to the local Protection Unit of 160 µs and a maximum delay from the local Protection Unit to the circuit breaker of about the same length (see Fig. 9). Heavy FTP traffic increases the delays by less than 300 µs.

Figure 9. Intrabay end-to-end delays.

The important interbay delay is the sum of the delay from a CT/VT PISA inside one of the bays to the busbar and the delay from the busbar to the CB PISA inside one of the bays. In our case the simulations indicate a maximum delay from a CT/VT PISA to the busbar (taken over all CT/VT PISAs) of about 180 µs and a maximum delay from the busbar back to the CB PISAs of about 170 µs. Again, heavy FTP traffic increases both delays by less than 300 µs. Fig. 10 shows both delays; the small peaks are due to FTP download and the large peak is due to FTP upload.

Figure 10. Global end-to-end delays.

XII. THE PERFORMANCE IMPACT

The excellent results above led us to suspect that the high-performance nodes might have a large impact on the results. We therefore reran the simulations twice with reduced performance specifications. In the first simulation, the PISA performance was reduced to 5,000 packets/s and the busbar node performance was reduced to 40,000 packets/s. The second simulation maintained the node performance but reduced the network speed inside the bays to 10 Mbit/s.
Figure 11. Global end-to-end delays with slower PISAs.
Fig. 11 shows the resulting delay between a U/I PISA and the busbar node. A comparison with Fig. 10 shows that while the "steady-state" delay is of the same order of magnitude, the transient behavior is very bad. The peaks are due to FTP transmission from a PISA and show up in the same way in the local PISA-to-bay delays. These results emphasize that, for 100 Mbit/s switched Ethernet, the critical part of substation networks is actually the performance of the end nodes.
Figure 12. Global end-to-end delays with slower network.
Fig. 12 shows what happens when the network speed inside a bay is reduced. The low-speed link is fully able to keep up with the "steady-state" traffic, but the FTP transmission from the U/I PISA introduces a long delay due to transmission queue blocking. In summary, the FTP upload becomes very critical for lower-performance end nodes and slower communication links. Clearly, some queue priority mechanism must be introduced at the MAC layer if the added delay is to be kept at a reasonable level.
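The following back-of-the-envelope figures illustrate the blocking: a sampled-value packet arriving behind a burst of full-size FTP frames must wait out their serialization time, which is ten times worse on a 10 Mbit/s link than on Fast Ethernet. The burst length is an assumed value; the serialization times are exact for the given link speeds.

```python
# Transmission-queue blocking behind full-size FTP frames (our numbers).
FTP_FRAME_BITS = 1_518 * 8      # maximum-size Ethernet frame

def blocking_delay_ms(queued_frames: int, link_bps: float) -> float:
    """Waiting time behind queued FTP frames on one outgoing link."""
    return queued_frames * FTP_FRAME_BITS / link_bps * 1e3

for link_bps, name in ((10e6, "10 Mbit/s"), (100e6, "100 Mbit/s")):
    # Assume a modest burst of 8 back-to-back FTP frames ahead in the queue.
    print(f"{name}: {blocking_delay_ms(8, link_bps):.2f} ms")
```

At 10 Mbit/s even a modest burst costs almost 10 ms, matching the order of magnitude of the peaks in Fig. 12 and motivating the priority mechanism called for above.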
We have seen that processing the UDP/TCP/IP layers places great demands on the CPU and, in that respect, could raise objections to this protocol stack. We hasten to add, however, that these demands may now be significantly reduced or even eliminated. The iReady company has recently announced a hardware implementation of TCP/IP [12], while another company, NetSilicon, has an IP implementation that is five times faster than other relevant IP implementations [13].

XIII. INTRODUCING TRAFFIC CLASS EXPEDITING

The IEEE 802.1 Interworking Task Group recently ratified the LAN enhancement standard IEEE 802.1p (Traffic Class Expediting and Dynamic Multicast Filtering) [14]. This standard provides expedited traffic (packet prioritization) capabilities. Such capabilities may help support the transmission of time-critical data in a LAN, as well as defining layer 2 protocols that support efficient multicasting in a switched or bridged LAN environment (the latter feature will not be discussed further here). Standard Ethernet offers no encapsulation of Quality of Service (QoS) information in its packet format; this weakness was remedied by the IEEE 802.1Q standard, which defines an extended Ethernet packet format holding 3 priority bits as part of a dedicated Tag Control Information field [15]. In that respect IEEE 802.1Q complements IEEE 802.1p, but otherwise discusses the operation and administration of Virtual LAN (VLAN) topologies in a switch-based LAN environment.

The main driving force behind these new standards has been the multimedia market. This market area, dominated by applications such as Voice over IP (VoIP), Video on Demand, and Video Conferencing, is expanding rapidly [7]. These applications may be characterized as isochronous traffic, often with multiple recipients. In that respect, the multimedia market has driven the need for LANs to deliver various types of time-critical and non-time-critical data. For that reason, the automation and control industries have started evaluating these new technologies for possible benefits. For a more comprehensive introduction to the prioritization mechanisms of switched Ethernet, refer to [7].

For the purpose of substation automation, our simulations show no immediate need for traffic expediting (because we observe little jitter at the Ethernet level). This is because the Ethernet switches are fairly lightly loaded, causing little internal buffering (head-of-line blocking). We will, however, recommend that IEEE 802.1p-compliant components be chosen for the next-generation network concept. This will ensure that, despite any future system configuration migration, very hard time requirements can still be met.
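For concreteness, this sketch builds the 4-byte 802.1Q tag and shows where the three 802.1p priority bits live; the priority assignments for trip and FTP traffic are our illustrative choices, not values from the standards.

```python
# The IEEE 802.1Q tag: 2 bytes of TPID (0x8100) followed by 2 bytes of
# Tag Control Information, whose top 3 bits carry the 802.1p priority.
import struct

TPID = 0x8100                    # identifies an 802.1Q-tagged frame

def vlan_tag(priority: int, vlan_id: int) -> bytes:
    """Build the 4-byte 802.1Q tag; priority 0-7, VLAN ID 0-4095."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4095
    tci = (priority << 13) | vlan_id   # 3 priority bits, CFI bit = 0, 12-bit VID
    return struct.pack("!HH", TPID, tci)

# Trip signals could ride in the highest class, file transfers in the lowest:
trip_tag = vlan_tag(priority=7, vlan_id=1)
ftp_tag = vlan_tag(priority=0, vlan_id=1)
print(trip_tag.hex(), ftp_tag.hex())   # 8100e001 / 81000001
```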
XIV. CONCLUSION

In this article we have elaborated on Ethernet's usability as a communication technology for substation automation. Through extensive simulations, we have studied whether:

1. Switched Ethernet has sufficient performance characteristics to meet the real-time demands of substation automation.
2. UDP/IP on top of Ethernet may be used as the real-time protocol.
Moreover, we examined Ethernet's potential for use as a common network handling multiple coexisting traffic types. The main conclusions from the simulations are:

• A switch-based Fast Ethernet network handles the various SA configurations under all tested load conditions with ease.
• Connecting fast data sources through hubs instead of switches is not recommended.
• The application end-to-end latency mainly stems from traversing the protocol stacks.
• The stack protocol handling performance of the nodes has a dominating influence on the UDP end-to-end latency.
• UDP/IP as a real-time protocol is able to meet the time requirements, but the end nodes must be fairly high-performance machines. This problem can be reduced or eliminated in the future by recently launched hardware and trimmed Internet stack implementations.
XV. REFERENCES

[1] "IEC 61850, Communication Networks and Systems in Substations, Part 5: Communication Requirements for Functions and Device Models; Part 7-2: Basic Communication Structure for Substations and Feeder Equipment," 1999.
[2] C. LeBlanc, "The future of industrial networking and connectivity," Dedicated Systems Magazine, March 2000.
[3] Asynchronous Transfer Mode [Online]. Available: http://www.atmforum.com
[4] Fiber Distributed Data Interface [Online]. Available: http://www.cisco.com/univercd/cc/td/doc/cisntwk/ito_doc/fddi.htm
[5] "InfiniBand Architecture Specification 1.0," October 2000 [Online]. Available: http://www.infinibandta.com
[6] J. Tengdin, M. S. Simon, and C. R. Sufana, "LAN congestion scenario and performance evaluation," in Proceedings of the IEEE Power Engineering Society Winter Meeting, 1999.
[7] Ø. Holmeide and T. Skeie, "VoIP drives realtime Ethernet," Industrial Ethernet Book, vol. 5, March 2001.
[8] IBM, "Migration to switched Ethernet LANs," technical report, 1998.
[9] "Real time services (QoS) in Ethernet based industrial automation networks," white paper, Hirschmann Rheinmetall Elektronik, 1999.
[10] J. Duato, S. Yalamanchilli, and L. Ni, Interconnection Networks: An Engineering Approach, IEEE Computer Society, 1997.
[11] OPNET Modeler [Online]. Available: http://www.mil3.com/opnet_home.html
[12] iReady, TCP/IP implementation in hardware [Online]. Available: http://www.iready.com/technology
[13] NetSilicon, Net+Fast IP [Online]. Available: http://www.netsilicon.com
[14] "IEEE 802.1D, Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Common Specifications - Part 3: Media Access Control (MAC) Bridges," 1998 (includes IEEE 802.1p).
[15] "IEEE 802.1Q, Standard for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks," 1998.
[16] Cisco Systems, Internetworking Technology Overview [Online]. Available: http://www.lsiinc.com/univercd/cc/td/doc/cisintwk/ito_doc/index.htm