Measuring Performance of Electric Power Distribution Systems – IEEE Std 1366-2003

Presented at the NARUC Staff Subcommittee on Electric Reliability, Washington, DC, February 13, 2005

Cheryl A. Warren ([email protected])
Chair, IEEE WG on System Design

Topics for Today

• What changed between 2001 and 2003?
• Why use 1366-2003, the Guide for Electric Power Distribution Reliability Indices?
• The Major Event Day definition
• Summary
• Potential impact on regulation
• Benchmarking

Industry Guidelines Developed

• IEEE Std 1366, 2001 Edition, revised in 2003
  – Refines definitions
  – Created the Major Event Day concept
• Ballot passed with 98% affirmative
• Approved by IEEE REVCOM in December 2003
• Approved by ANSI in April 2004
• Published by IEEE-SA in June 2004
• Available at www.ieee.org

Major changes between 2001 & 2003 ‹

Tightened definitions.

‹

Created the Major Event Day Concept.

‹

A tool that can assist company decisions and regulatory policy making by standardizing engineering metrics and approaches. ‹

Standard definitions are offered that will lead to better comparability. –

4

Still need to address data collection methods.

‹

Policy decisions are left to the regulatory community.

‹

All Events are reviewed including Major Events.

Why Use 1366-2003

• Sound basis for measuring performance.
• A clearer view of performance, both on a
  – daily basis and
  – during major events.
• Can form a solid basis for review of operational effectiveness, decision making, and policy making.
• More consistent benchmarking.

States are Moving Toward Reliability Regulation

[Map of the US and Canada, color-coded by jurisdiction: reliability standards in place; reliability standards being considered; MED definition adopted; adoption of the MED definition being considered.]

*Source: NRRI 2001 survey and other regulatory documents.

Three States' Rules

• State 1: "A Major Storm is a period of adverse weather during which service interruptions affect at least 10 percent of the customers in an operating area and/or result in customers being without electric service for durations of at least 24 hours."
• State 2: Major Storms are excluded if 15% of customers served are interrupted over the duration of the event.
• State 3: The interruption was the result of a major weather event which caused more than 10% of a district's or the total company's customers to be without service at a given time.

SAIDI Performance: Different Measurement Methods, Same Company

[Chart: annual SAIDI (min), 1997-2003, for one company calculated under three different measurement methods (Method 1, Method 2, Method 3).]

Methodology Development

• The IEEE WG on System Design, which has over 130 members, developed the "2.5 beta methodology" in IEEE Std 1366-2003.
• Members include utility employees, regulatory staff, manufacturers' employees, consultants, and academics.
• Seven members stepped up to perform the analysis.

Foundations of the Process

• The definition must be understandable by all and easy to apply.
• The definition must be specific and calculated using the same process for all utilities.
• It must be fair to all utilities,
  – large and small, urban and rural.
• SAIDI was chosen as the indicator
  – because it is size independent and is the best indicator of system stresses beyond those that utility staff design, build, and operate the system to minimize.

Foundations of the MED Definition

• To allow appropriate review of both operating conditions, partition the data into
  – daily performance and
  – major event day performance.
• Major event days are days on which the system's operational and/or design limits are exceeded.
• We suggest:
  – using day-to-day events for trending, internal goal setting, and Commission-mandated targets; and
  – separately reporting performance during major events (no exclusions).

Methodology Development

• Several methods were tested and rejected because they did not meet the basic requirements stated in the foundations of the process.
• Epiphanies:
  – SAIDI is a good indicator of major events.
  – Interruption data is most closely represented by the log-normal distribution.

Log-Normal Nature of the Data

[Two histograms of annualized number of occurrences over 8 years: one of daily SAIDI, one of the natural log of daily SAIDI. The raw values are heavily skewed; the log-transformed values are approximately normal.]

• 68.3% of the population is covered by the mean ± 1 standard deviation.
• 95.4% of the population is covered by the mean ± 2 standard deviations.
• 99.7% of the population is covered by the mean ± 3 standard deviations.
• Allows mathematicians to use their statistical toolkits.
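(An aside not on the original slide, but it follows directly from the stated log-normal assumption: the 2.5β threshold used later in this deck has a simple percentile interpretation.)

\[
\ln(\mathrm{SAIDI}) \sim \mathcal{N}(\alpha, \beta^2)
\quad\Rightarrow\quad
P\big(\mathrm{SAIDI} > e^{\alpha + 2.5\beta}\big) = 1 - \Phi(2.5) \approx 0.62\%,
\]

so, if daily SAIDI keeps following its historical distribution, only about 365 × 0.62% ≈ 2.3 days per year would be expected to exceed the major event day threshold on average.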

Two Categories for Measurement

• The 2.5 beta methodology allows segmentation of reliability data into two distinct sets for review.
  – One set represents events of such magnitude that a crisis mode of operation is required to respond adequately (major events).
  – The other set represents the reliability impact of events that the company has built the system to withstand, and staffed to respond to, without a crisis mode of operation (day-to-day operation).

Major Events versus Day-to-Day Operations

[Diagram: all sustained interruptions (including transmission, planned, etc.) are partitioned into two sets: day-to-day operations and major event days.]

• Recommend that targets be set on day-to-day operations.
• Report on both data sets separately.

Results from One Company

[Bar chart of annual SAIDI (minutes) for one company: 167.3 for all 365 days; 64.66 for day-to-day operation (358 days); 102.64 accrued on the major event days of May 3, July 9, July 23, Sept. 11, Nov. 17-18, and Dec. 25. The two components sum to the 167.3-minute total.]

Seven Simple Steps

1. Collect values of daily SAIDI for five sequential years ending on the last day of the last complete reporting period. If fewer than five years of historical data are available, use all available historical data.
2. If any day in the data set has a value of zero for SAIDI, do not include that day in the analysis.
3. Take the natural logarithm (ln) of each daily SAIDI value in the data set.
4. Find α (alpha), the average of the logarithms (also known as the log-average) of the data set.
5. Find β (beta), the standard deviation of the logarithms (also known as the log-standard deviation) of the data set.
6. Compute the major event day threshold, T_MED, using the equation T_MED = e^(α + 2.5β).
7. Any day with daily SAIDI greater than the threshold value T_MED that occurs during the subsequent reporting period is classified as a major event day.

Can be calculated in Excel or an OMS program; a minimal sketch in code follows below.
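As a sketch of the seven steps, here is one way they might look in Python. All names are illustrative rather than from the standard, and the choice of the population (rather than sample) standard deviation for β is an assumption of this sketch.

```python
# Minimal sketch of the IEEE 1366-2003 "2.5 beta" calculation.
# Names and the population standard deviation are illustrative choices.
import math

def med_threshold(daily_saidi):
    """Steps 1-6: compute T_MED = e^(alpha + 2.5*beta) from a history
    of daily SAIDI values (minutes), ideally five sequential years."""
    # Step 2: drop days whose SAIDI is zero.
    nonzero = [s for s in daily_saidi if s > 0]
    # Step 3: natural logarithm of each remaining daily value.
    logs = [math.log(s) for s in nonzero]
    # Step 4: alpha, the log-average.
    alpha = sum(logs) / len(logs)
    # Step 5: beta, the log-standard deviation.
    beta = math.sqrt(sum((x - alpha) ** 2 for x in logs) / len(logs))
    # Step 6: the major event day threshold.
    return math.exp(alpha + 2.5 * beta)

def is_major_event_day(saidi, t_med):
    """Step 7: a day is a major event day if its daily SAIDI exceeds T_MED."""
    return saidi > t_med

if __name__ == "__main__":
    import random
    random.seed(1)
    # Synthetic five-year history drawn from a log-normal distribution,
    # purely for demonstration.
    history = [random.lognormvariate(0.5, 1.2) for _ in range(5 * 365)]
    t_med = med_threshold(history)
    print(f"T_MED = {t_med:.2f} SAIDI-minutes per day")
```

The threshold is computed once from the historical window and then applied to each day of the subsequent reporting period.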

Major Event Days – A Few Facts

• A major event day is a day on which the daily system SAIDI exceeds a threshold value, T_MED, determined using the 2.5 beta method.
  – For example, if T_MED = 3 minutes, then any day on which more than 3 minutes of SAIDI is accrued is declared a major event day (see the sketch below).
• Activities that occur on major event days should be separately analyzed and reported. Nothing is "excluded"!
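Applying the slide's 3-minute example threshold to a few made-up daily values (dates and SAIDI figures here are hypothetical):

```python
# Hypothetical daily SAIDI values (minutes); T_MED = 3 as on the slide.
t_med = 3.0
daily_saidi = {"2004-07-09": 38.2, "2004-07-10": 0.9, "2004-07-11": 3.4}
# Days whose SAIDI exceeds the threshold are major event days.
major_event_days = [day for day, saidi in daily_saidi.items() if saidi > t_med]
print(major_event_days)  # -> ['2004-07-09', '2004-07-11']
```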

Benefits of the Approach

Adoption of the 2.5 beta methodology will:
• allow for consistent calculation of reliability metrics,
• provide companies and commissions with a more accurate indication of a company's controllable service quality results,
• allow a clear review of company response to crisis-mode events, and
• provide a less distorted indication of the reliability results for companies of all sizes.

Summary of the IEEE 2.5 Beta Methodology

• Improves the ability to view system reliability performance, thereby making goal setting and trending more meaningful.
• Provides a mechanism for reporting on both day-to-day performance and performance during major events. A mechanism that:
  – allows review of day-to-day performance without considering the outliers that often mask it, and
  – meaningfully focuses on major event performance in its own right, giving a clear view of this very different operating condition.
• A consistent method that can be applied by all.

Beyond the Standard… PBR and Benchmarking

Ramifications on Commission-Mandated Targets

[Chart: one company's annual SAIDI (min), 1997-2004, under the Commission's exclusion criterion (15% of system), plotted against the Commission's maximum-penalty, minimum-penalty, and start-of-deadband levels.]

Ramifications on Commission-Mandated Targets

[Chart: the same company's annual SAIDI (min), 1997-2004, with major events identified by the IEEE 2.5β method, plotted against log-normal-based maximum-penalty, minimum-penalty, and start-of-deadband levels.]

Ramifications on Commission-Mandated Targets

• Existing PBR targets will require adjustment if the log-normal nature of the data is considered.
• Year-to-year variability in performance should be significantly reduced by using the IEEE methodology; performance bands should therefore be adjusted.
  – Minor weather events will still be captured in day-to-day performance, which will lead to some variability.
• Since we know that reliability data is most closely represented by the log-normal distribution, performance bands should be developed using the log-normal data.
• Targets should also be reviewed if OMS systems/processes are introduced.

Basis for Target Setting

• It's appealing to use an industry average (benchmark data) to set targets for utilities in your state.
• It's not practical to do so because:
  – Each utility has its own historical rates, maintenance and operating practices, system design, customer density, etc.
    – These factors set the underlying performance level of the system.
  – Forcing utilities to change to a radically different performance level has serious rate implications.
• Using the company's own historical performance is a reasonable approach.
  – Issues: introduction of an OMS, changing weather patterns, mergers and acquisitions.

Benchmarking

• Data is never exactly the same!
• Two main reasons for differences:
  – data collection process/system differences, and
  – exclusion criteria differences (basis).
• IEEE 1366-2003
  – addresses data basis issues by clearly defining the rules;
  – it DOES NOT address the data collection issues.

Benchmark

• We have anonymously analyzed data for 95 companies throughout the US and Canada, ranging from rural electric co-ops to IOUs.
• Basic results for 2003, all respondents*:

  Quartile | SAIDI IEEE | SAIDI All | SAIFI IEEE | SAIFI All | CAIDI IEEE | CAIDI All
  1        |      83.59 |    119.45 |       0.94 |      1.29 |      71.04 |     84.77
  2        |     110.99 |    223.68 |       1.23 |      1.59 |      93.69 |    146.73
  3        |     156.10 |    400.24 |       1.50 |      2.14 |     121.77 |    211.73
  4        |     401.55 |   2352.29 |       3.15 |      5.53 |     424.74 |   1154.85

  *Includes results from 70 respondents.

• Performing another benchmark in 2005.

IEEE SAIDI (2003)

[Bar chart: 2003 IEEE SAIDI (min) for each of the anonymized utilities, sorted and divided into quartiles Q1 through Q4.]

Benchmarking data cannot be taken at face value… performance is a function of many things such as rates, service territory, maintenance/operating practices, etc. Q4 may be an acceptable level of performance for some utilities.

SAIDI IEEE Trends

[Line chart: annual IEEE SAIDI (minutes), 1999-2004, for all companies and for small, medium, and large companies separately; annotations flag the influence of OMS systems and weather changes.]

Questions…
