Electrical Power Quality and Utilisation, Magazine Vol. II, No. 1, 2006

Classification Methodologies for Power Quality

J.F.G. COBBEN 1), Jasper F.L. van CASTEREN 2)
1) Continuon and Eindhoven University of Technology; 2) KEMA

Summary: Ongoing processes of restructuring and regulation are taking place in the electricity sectors of all European countries. These processes are intended to bring about more transparent competition in the generation and retailing of electrical energy. In this context, the physical transmission and distribution of electricity is considered to be a natural monopoly, which should be separated from generation and retailing and which needs to be regulated. New performance-based regulatory frameworks are being discussed, such as price caps or revenue caps. In order to prevent transmission and distribution companies from reducing their costs at the expense of power quality, some type of quality regulation has to be put in place. Alongside economic regulation, the object of which is to provide strong incentives for cost reduction and efficiency, quality regulation must at least provide incentives for safeguarding minimum levels of power quality. Ideally, the regulatory framework should create an environment where transmission and distribution companies are automatically inclined to provide their customers with an optimised mix of low price and high quality. As modern societies become more and more sensitive to power quality problems, some kind of PQ regulation will become inevitable in the near future. Such regulation will not be possible without sound and transparent tools for monitoring and reporting PQ levels. These levels are used not only for regulatory purposes, but also for coordination between the manufacturers of electrical equipment, the customers and the transmission and distribution companies. This article looks at methodologies for quantifying PQ levels for a range of power quality problems.

1. INTRODUCTION

In order to avoid the high cost of equipment failures, all customers have to make sure that they obtain an electricity supply of satisfactory quality, and that their electrical equipment is capable of functioning as required when small disturbances occur.
This can only be guaranteed if the limits within which power quality may vary can be specified. Such limits can be defined by standards, by the national regulator, by the customer in a power quality contract, by a manufacturer in a device manual or by the grid operator in an operating guideline. These limits must be meaningful and transparent, and it must be easy to compare actual power quality levels against them. Coordination of all these different limits is necessary, on the one hand to prevent devices or installations from malfunctioning, and on the other hand for clear communication about the quality of supply that is provided or demanded. As a starting point for defining the quality of the supply voltage, the limits set by the national regulator are used. These limits must be met at least at the point of common coupling (PCC) with the customer, which generally is the electrical connection point. The European Standard EN 50160 [1] is normally used in European countries as a basis for the quality of supply, which is often defined as the voltage quality. At the moment, there is no

standard for the current quality at the PCC. The interaction between voltage and current makes it hard to separate the customer as “receiving” and the network company as “supplying” a certain level of power quality. The voltage quality (for which the network is often considered responsible) and the current quality (for which the customer is often considered responsible) affect each other through mutual interaction. The effects of insufficient power quality are normally expressed in terms of “emission”, “immunity” and “compatibility”, as shown in Figure 1. The supply voltage level is used in this example, but the compatibility description method can be applied to other power quality characteristics as well. The “emission” is defined as the causal disturbance, such as the offset of a voltage from its nominal value. The “immunity” is the degree to which the equipment is able to function as intended in spite of the emission. The compatibility level is the level at which the risk of the equipment malfunctioning is sufficiently low.

J.F.G. Cobben and Jasper F.L. van Casteren: Classification Methodologies for Power Quality

Key words: power quality, dip-classification, flicker, voltage variations, dips

Fig. 1. Voltage versus current quality



Fig. 2. Emission, immunity and compatibility level

This article discusses methods for describing the quality levels of the supply voltage. The following quality aspects are distinguished:
— Slow voltage variations
— Voltage flicker
— Voltage dips
The slow voltage variations and the flicker level are continuous phenomena, in contrast with voltage dips, which are short-duration events. Harmonic distortion, phase voltage unbalance, electric frequency and other PQ problems are not dealt with in this article. The quality levels are described using a PQ classification format. This classification should make it clear whether a PQ standard is met or not, but should also express the relative performance for each specific aspect of power quality.

2. CLASSIFICATION FORMAT

When designing a classification format, we have to realise that power quality is not a subject with which many customers are very familiar. For communication with the customer, it is important that the classification is kept very simple and understandable. At the same time, the classification must be meaningful and transparent, and it must allow large amounts of measured data to be aggregated into a single measure of quality. Most customers are familiar with the kind of classification that is already in use to rate the energy efficiency of cars and household appliances such as washing machines. This “ABC” classification uses letters to label the various levels of quality. The same format can be adapted for voltage quality, as depicted in Figure 3.

Fig. 3. Classification of power quality characteristics

The first step towards defining a classification system of this kind is to normalise all power quality characteristics [2]. For each characteristic, we can calculate the normalised power quality level using the formula:

r(v,q,p) = 1 − m(v,q,p) / l(q)    (1)

where:
r(v,q,p) — the normalised power quality characteristic q, at site v, for phase p
m(v,q,p) — the actual level of characteristic q, at site v, for phase p
l(q) — the compatibility level of characteristic q

When there is no disturbance, the normalised value will be 1 (m = 0). If the disturbance level is exactly at the limit specified by the applicable standard, the normalised value will be zero. If the disturbance level exceeds the specified limit, the performance index r becomes negative. The range from “no disturbance” to a level of “twice the acceptable disturbance” is divided into six areas, from very high quality (A) to extremely poor quality (F), as shown in Figure 3.

3. VOLTAGE FLICKER AND THE PERCENTILE METHOD

For the flicker level, as for the slow voltage variations, the Dutch regulator uses a percentile and a level as the limit for the measured average values. For the ten-minute average flicker level (Plt), this means a maximum value of 1.0 for 95% of all measured samples, and a maximum value of 5.0 for 100% of all measured samples. The compatibility level for flicker is thus defined in terms of the 95th percentile being lower than a certain value. This type of compatibility level can be defined for all continuous phenomena, such as flicker, voltage variations, harmonics, etc. These power quality phenomena can be classified accurately using the percentile method. This method is explained below by reference to the specimen flicker measurement illustrated in Figure 4. The graph traces data measured over a one-week period at a PCC, showing 7 × 24 × 6 = 1008 ten-minute average values.


By sorting the measured values, as shown in Figure 5, the 95th percentile value can be established. In the illustrated case, the 95% flicker level works out at 0.48. Normalising this flicker level against the standard limit of 1.0 leads, for this PCC, to a value of:

r(v,q,p) = 1 − 0.48 / 1.0 = 0.52    (2)

This corresponds to a “B” classification (high quality). The classification limits for voltage flicker are calculated using (1). This leads to the classification system illustrated in Figure 6. For checking the quality against the standard, as set by the regulator in most cases, the percentile method is straightforward and accurate. One disadvantage of this classification method, however, is that the end result does not give much information about the actual flicker levels. Figure 7 shows two very different distribution patterns that nevertheless yield the same 95th percentile value; the distribution on the left corresponds to much better flicker performance than the one on the right. Grid operators need good general information about power quality levels, so a 95th percentile value does not on its own satisfy their needs. This problem can be addressed by using a performance indicator that gives more information than the percentile method alone.
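The percentile method and the normalisation of equation (1) can be sketched in a few lines of code. This is an illustrative sketch, not taken from the article: the function names and the nearest-rank percentile convention are my own assumptions, and the class boundaries are taken as equal steps of 1/3 between r = 1 and r = −1.

```python
import math

PLT_LIMIT = 1.0  # Dutch 95th-percentile limit for the flicker level Plt

def percentile(samples, p):
    # Nearest-rank percentile: sort the samples, then pick the value at
    # rank ceil(p*n/100). Other conventions (interpolation) also exist.
    ordered = sorted(samples)
    rank = math.ceil(p * len(ordered) / 100.0)
    return ordered[max(rank, 1) - 1]

def normalised_level(m, l):
    # Equation (1): r = 1 - m/l
    return 1.0 - m / l

def classify(r):
    # Map r onto the six quality classes A..F, assuming the range from
    # r = 1 (no disturbance) to r = -1 (twice the compatibility level)
    # is divided into six equal areas.
    for label, lower in [("A", 2/3), ("B", 1/3), ("C", 0.0),
                         ("D", -1/3), ("E", -2/3)]:
        if r > lower:
            return label
    return "F"
```

A 95th-percentile flicker level of 0.48 gives r = 1 − 0.48/1.0 = 0.52, class "B", matching the worked example of equation (2).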

Fig. 4. Ten-minute average flicker measurements at a PCC

Fig. 5. Sorted flicker level data

4. SLOW VOLTAGE VARIATIONS AND THE STAV METHOD

Depending on the demand level, network power flows and voltage control devices, the voltage level at the PCC changes from minute to minute and from hour to hour. The European Standard and the Dutch regulator define the compatibility level for the supply voltage as 230 V ± 10%, i.e. from 207 to 253 volts. More precisely, the standard states that 95% of all ten-minute measured values for the average RMS voltage over a period of one week shall be within 230 V ± 10%, and 100% of those ten-minute measured values shall be within the range 230 V − 15% to 230 V + 10%. Slow variations in the supply voltage can be classified by the STAV (STandard deviation, AVerage value) classification method. This method provides more information about the actual voltage deviations than can be obtained with a percentile method. The STAV method is explained below by reference to the example illustrated in Figure 8, which shows measured ten-minute average voltage values at a PCC over a one-week period.

Fig. 6. Flicker classification

Fig. 7. Two different distributions of flicker values

From these measurements, the average value and the standard deviation are calculated as:

Um = (1/n) · Σ_{i=1..n} v_i = 225.3 V  and  σ = √( Σ_{i=1..n} (v_i − Um)² / (n − 1) ) = 2.43 V    (3)
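Equation (3) amounts to the ordinary sample mean and sample standard deviation (with the n − 1 denominator). A minimal sketch, with a hypothetical function name:

```python
import math

def stav(voltages):
    # Average value Um and sample standard deviation sigma of a series
    # of ten-minute average RMS voltages, as in equation (3).
    n = len(voltages)
    um = sum(voltages) / n
    sigma = math.sqrt(sum((v - um) ** 2 for v in voltages) / (n - 1))
    return um, sigma
```

For a week of 1008 ten-minute values, `stav` returns the single (Um, σ) point that the STAV method plots and classifies.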

Fig. 8. Voltage measured at a randomly chosen PCC

Assuming the voltage to be normally distributed, a classification method based on the same principles as the percentile method can be used. Figure 9 shows the histogram and the probability function for the measurements in Figure 8, together with the best-fitted normal distribution, from which it is clear that the measured voltage can indeed be modelled as normally distributed.

Fig. 9. Comparing the measured distribution with a normal distribution

In order to create a classification, the boundary between classifications C and D has to be defined. For this, the limits given by the Dutch national regulator are taken. By way of further explanation of the STAV classification method, two different measured voltage distributions are presented in Figure 10. This figure also shows the ±10% voltage limits specified under the national standard. The upper limit may not be exceeded, and no more than 5% of all measured values may fall below the lower limit. Assuming a normally distributed supply voltage, we can calculate the probability of the voltage straying outside these limits. For the upper limit, the requirement of 100% is softened to 99.9% for practical reasons. This results in:

P{X ≤ x} = P{Y ≤ (x − Um)/σ} = P{Y ≤ (253 − Um)/σ} = 99.9%    (4)

Fig. 10. Different low voltage level distributions

Using a statistical table for the normal distribution, we find the following relationship between the average and the standard deviation:

(253 − Um)/σ > 3.1    (5)

We can do the same for the lower limit:

P{X ≤ x} = P{Y ≤ (x − Um)/σ} = P{Y ≤ (207 − Um)/σ} = 5%    (6)

Hence:

(207 − Um)/σ < −1.65    (7)

For both the upper and the lower voltage limit, the relationship between the average and the standard deviation can be drawn as a line in the (Um, σ) plane. This is shown in Figure 11. For a supply voltage measurement such as that shown in Figure 8, we can calculate the average and the standard deviation and locate the resulting value as a single point on the (Um, σ) plane. This point must lie within the triangle


bounded by the upper and lower limits in order to comply with the national standard. This means that with an average supply voltage around the nominal voltage of 230 volts, a larger standard deviation is allowed than with an average voltage that is offset from the nominal value. This is because a high voltage with a large standard deviation gives a risk of exceeding the upper voltage limit, and a low voltage with a large standard deviation gives a risk of falling beneath the lower limit. Better performance is achieved when the voltage is more likely to be around the nominal value. By using the same excess risk values — 99.9% for the upper limit and 5% for the lower limit — and by proportionally reducing the permitted voltage limits, boundaries can be defined for the other “ABC” classification categories. Where the upper voltage boundaries are concerned, this means that:

Fig. 11. Limits for the average voltage and standard deviation in the (Um, σ) plane

r = 1 − m/l = 1 − (Um − 230)/(253 − 230)  ⇒  Um = 253 − r · (253 − 230)

U_A/B = 253 − 2/3 · 23 = 237.7 V
U_B/C = 253 − 1/3 · 23 = 245.3 V
U_C/D = 253 − 0/3 · 23 = 253.0 V
U_D/E = 253 + 1/3 · 23 = 260.7 V
U_E/F = 253 + 2/3 · 23 = 268.3 V

while, where the lower voltage boundaries are concerned:

r = 1 − m/l = 1 − (Um − 230)/(207 − 230)  ⇒  Um = 207 − r · (207 − 230)

U_A/B = 207 + 2/3 · 23 = 222.3 V
U_B/C = 207 + 1/3 · 23 = 214.7 V
U_C/D = 207 + 0/3 · 23 = 207.0 V
U_D/E = 207 − 1/3 · 23 = 199.3 V
U_E/F = 207 − 2/3 · 23 = 191.7 V
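The boundary voltages above follow mechanically from the class boundaries of r. A small sketch (the function and dictionary names are my own, and the equal 1/3 steps of r are an assumption taken from the six-area classification):

```python
# Class boundaries of the normalised level r, assuming six equal areas
# between r = 1 (no disturbance) and r = -1 (twice the limit).
R_BOUNDS = {"A/B": 2/3, "B/C": 1/3, "C/D": 0.0, "D/E": -1/3, "E/F": -2/3}

def boundary_voltages(limit, nominal=230.0):
    # Invert r = 1 - (Um - nominal)/(limit - nominal):
    #   Um = limit - r * (limit - nominal)
    # Works for both the upper limit (253 V) and the lower limit (207 V).
    return {k: limit - r * (limit - nominal) for k, r in R_BOUNDS.items()}
```

`boundary_voltages(253.0)` reproduces the upper boundaries (237.7, 245.3, 253.0, 260.7, 268.3 V) and `boundary_voltages(207.0)` the lower ones.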

Combining these limits results in a set of triangular areas on the (Um, σ) plane, as depicted in Figure 12. From data on the supply voltage over the course of a week, the average and standard deviation can be calculated and plotted as a single point on the (Um, σ) plane. The corresponding classification can then be read

from the figure. By plotting various measurements in this way, trends can be made visible or the relative performance of various locations can be compared. This is a very illuminating way of presenting slow voltage variations as a characteristic of power quality.
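The compliance test against the national standard is easy to automate from the two inequalities derived above. A sketch under the assumption of a normally distributed supply voltage (the function name is hypothetical):

```python
def complies(um, sigma):
    # A (Um, sigma) point lies inside the compliance triangle when at
    # most 0.1% of the probability mass lies above 253 V and at most
    # 5% lies below 207 V, per inequalities (5) and (7).
    upper_ok = (253.0 - um) / sigma > 3.1
    lower_ok = (207.0 - um) / sigma < -1.65
    return upper_ok and lower_ok
```

For the measurement of Figure 8 (Um = 225.3 V, σ = 2.43 V), both inequalities hold comfortably.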

Fig. 12. Low voltage level classification

5. VOLTAGE DIPS

Voltage dips are receiving more and more attention as a characteristic of power quality, since it is now apparent that the annual socio-economic costs attributable to them are comparable to those attributable to power interruptions. One problem with dips is that it is not possible to measure performance over a short period of time. Only by monitoring over a period of years can one describe the level of dip performance in quantitative terms. The compatibility level for voltage dips is defined by tolerance curves, such as the ITI curve depicted in Figure 13. This figure shows three areas. In area 1, electric devices should be able to function according to specification. When the duration of the dip becomes too long and the residual voltage too low (area 2) or too high (area 3), electric devices are likely to malfunction.
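A tolerance curve of this kind can be represented in code as a function from (residual voltage, duration) to one of the three areas. The thresholds below are simplified, illustrative placeholders only; they are not the actual ITI curve coordinates:

```python
def dip_area(residual_pct, duration_s):
    # Area 1: equipment should ride through the event.
    # Area 2: voltage too low for too long (undervoltage risk).
    # Area 3: voltage too high for too long (overvoltage risk).
    # Thresholds are placeholder values, NOT the real ITI envelope.
    if residual_pct > 110.0 and duration_s > 0.003:
        return 3
    if residual_pct < 70.0 and duration_s > 0.02:
        return 2
    return 1
```

An annual count of the events falling in area 2 then gives a SARFI-type dip performance indicator.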



Fig. 13. ITI dip tolerance curve

Fig. 14. Dip types

Fig. 15. A dip compatibility table (left) multiplied by a dip cost table (right)


It is possible to define a compatibility level as the maximum annual number of dips in area 2. This number of dips is known as the SARFI-ITI dip performance indicator, but other dip performance indicators can be used in the same way. The problem, however, is that different devices have different tolerance curves, and the ITI curve was not designed to fit all industrial, commercial or domestic customers. The basic problem with most dip performance indicators is that they do not directly relate to the problems that dips cause. Long shallow dips and short deep dips are very different in their effects on equipment and in their causes and possible mitigations. A good dip classification method should account for these differences in a meaningful and transparent way. The basis for such a classification is found in the voltage dip table, which is a convenient format for reporting measured or simulated

voltage dips. The dip table shows the annual average number of dips of a certain depth and duration. The voltage dip table can be divided into different dip types and regions [4]. Nine dip types and three regions are distinguished in Figure 14. Each region represents an area of responsibility. The upper region, with the dip types K0, M0 and L0, depicts the area where it becomes very hard for network companies to further reduce the voltage dips [5]. Equipment manufacturers therefore have to design their products to withstand these types of dip. The bottom region, with K2, M2 and L2, depicts the area where customers cannot be expected to install equipment that is able to ride through dips of the types in question. It is therefore the responsibility of the network company to minimise the number of these dips. The mid-region, with K1, M1 and L1, depicts the area where a balance has to be found between the customer's willingness to pay, the network investment required, risk financing costs, etc.

Each dip type requires a different approach. A power quality standard along the lines of NRS 048 [3] is best suited to address the differences involved. This method of dip standard definition is based on setting limits for each dip type separately. These limits are based on the estimated costs associated with each type of dip. These costs can be reported in a dip cost table, such as the one on the right in Figure 15. The higher the costs, the fewer dips can be allowed. This leads to a dip compatibility table that is inversely proportional to the dip cost table. Figure 15 shows, on the left side, a compatibility table that is more or less inversely proportional to the cost table, but not exactly, because the capital cost of reducing the number of dips also has to be taken into account when establishing a dip compatibility table.

The example in Figure 15 shows a compatibility table where a customer may experience only eight dips per year of type K1, four of types K2 and M1, two of types M2 and L1, and one of type L2. This dip compatibility table is clear and easy to understand. No limits are set for dip types K0, M0 and L0, as equipment is expected to be compatible with dips of these types. From the compatibility table and the dip cost table, the value of the CARCI index (Customer Average RMS voltage variation Cost Index) is obtained by multiplying the values per dip type and adding the results. The CARCI index expresses the total expected annual costs due to all types of voltage dip. The dip cost table can be normalised so that the compatibility table yields a CARCI value of 1.0.
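The CARCI calculation itself is a weighted sum over dip types. A minimal sketch: the dip counts come from the Figure 15 example, while the per-dip cost weights are invented for illustration (chosen only so that the compatibility table normalises to 1.0).

```python
# Compatibility table from the Figure 15 example (allowed dips/year).
COMPAT = {"K1": 8, "K2": 4, "M1": 4, "M2": 2, "L1": 2, "L2": 1}

# Hypothetical per-dip cost weights (illustrative values only),
# normalised so that the compatibility table yields CARCI = 1.0.
COSTS = {"K1": 0.02, "K2": 0.04, "M1": 0.04,
         "M2": 0.08, "L1": 0.08, "L2": 0.20}

def carci(dip_table, cost_table=COSTS):
    # Multiply annual dip counts by per-dip costs, sum over all types.
    return sum(dip_table[t] * cost_table[t] for t in dip_table)
```

With these weights `carci(COMPAT)` evaluates to 1.0; a site's measured dip table can be fed through the same function and the result normalised via equation (1).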


In order to classify measured voltage dips, one must first create a dip table by calculating the annual average number of dips of each type. This dip table is then multiplied by the dip cost table to obtain the CARCI value. The normalised CARCI value is used as the actual level of the power quality characteristic in equation (1), to produce the normalised level of power quality. The range from “no disturbance” to a level of “twice the acceptable disturbance” is again divided into six areas, from very high quality (A) to extremely poor quality (F), as shown in Figure 16. This method of classification can be tailored to suit specific groups of customers or specific network areas by adjusting the dip cost table. The dip table offers valuable information to the network planner, but it can also be used by any (industrial) customer with its own “personal” cost table. In that case, the customer can calculate its own CARCI value and use the average annual costs for ranking investment options. Questions such as “What would we gain by making this equipment less vulnerable to dips of this type?” can be answered using this method. Because the annual number of dips can be quite low in some countries, it may be necessary to use a sliding average of the annual number of voltage dips over a period of five years or more. Otherwise, the classification may jump categories from year to year, which would undermine the trust that customers and network planners place in this performance indicator.

6. CONCLUSIONS

This article describes methods for classifying flicker, voltage variations and voltage dips. Along with power interruptions, these three characteristics of power quality are responsible for most customer complaints and associated costs. The three classification methods combine transparency with simplicity. The percentile method is relatively simple but requires large amounts of data to be processed.
Its main use is for monitoring against the standard and for calculating the performance at specific PCCs. For planning purposes, the results of the percentile method are less informative. The STAV method requires only average and standard deviation data, and produces an outcome that is useful for planning purposes. It is suited for benchmarking PCCs and for trend analyses. The CARCI method for classifying voltage dips is suitable not only for power quality monitoring but also for planning purposes and decision-making. It can easily be tailored to suit specific groups of customers, voltage levels or geographical network areas.

Fig. 16. Classification of dip tables

All methods produce normalised results, which makes it easier to assess the reported levels of power quality performance. Used in combination with a colour-coded “ABC” classification system, these methods enable information on the difficult subject of power quality to be communicated to all customers in a meaningful and transparent way.

REFERENCES

1. European standard EN 50160: Voltage characteristics of electricity supplied by public distribution systems. CENELEC, Brussels, Belgium, 1994.
2. Meyer J., Schegner P., Winkler G.: Efficient method for power quality surveying in distribution networks. CIRED conference, Turin, 6–9 June 2005.
3. NRS 048-2: Electricity Supply: Power Quality. Part 2: Voltage characteristics, compatibility levels, limits and assessment methods. 2003, http://www.sabs.co.za.
4. van Casteren J.F.L., Cobben J.F.G., et al.: A customer oriented approach to the classification of voltage dips. CIRED international conference, Turin, 6–9 June 2005, http://www.kema.com/nl/Images/Prego;21_tcm108428.pdf (in Dutch).
5. Provoost F., Cobben J.F.G., Myrzik J.M.A.: Advantages of cables regarding network performance. IEEE Young Researchers Symposium, 2004.

Sjef Cobben

was born in Nuth, the Netherlands, in 1956. He received a bachelor's degree in electrical engineering from the Technical University of Heerlen in 1979 and a master's degree in electrical engineering from Eindhoven University of Technology (TU/e) in 2002. In 1979, he joined NUON, one of the largest energy and water organisations in the Netherlands. Since 2000, he has been working for Continuon, where he is concerned with power quality problems and safety requirements. He is currently a part-time PhD student at TU/e. His main research topics are distribution systems, renewable energy sources and power quality problems. Address: Continuon, Arnhem; Eindhoven University of Technology. Tel: +31 26 844 23 53. E-mail: [email protected]; [email protected]



Jasper van Casteren

received his MSc from the University of Technology in the Netherlands in 1997, and his PhD from Chalmers University of Technology in Sweden in 2003. He worked for seven years as a principal software engineer for the DIgSILENT company in Germany, where he implemented reliability assessment calculations and the object-oriented scripting language "DPL". He currently works for the KEMA Transmission and Distribution consultancy. He specialises in reliability and power quality issues, in stochastic network analyses for planning purposes and in automated scenario analyses. Address: KEMA Transmission and Distribution Consultancy. Tel: +31 26 356 3868. Fax: +31 26 351 3683. E-mail: [email protected]

