
IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 20, NO. 1, FEBRUARY 2005

Distributed Processing of Reliability Index Assessment and Reliability-Based Network Reconfiguration in Power Distribution Systems

Fangxing Li, Member, IEEE

Abstract—Parallel and distributed processing has been broadly applied to scientific and engineering computing, including various aspects of power system analysis. This paper first presents a distributed processing approach to reliability index assessment (RIA) for distribution systems. Then, it proposes a balanced task partition approach to achieve better efficiency. Next, the distributed processing of RIA is applied to reliability-based network reconfiguration (NR), which employs an algorithm combining local search and simulated annealing to optimize system reliability. Test results are presented to demonstrate the accelerated execution of RIA and NR with distributed processing.

Index Terms—Network reconfiguration, parallel and distributed processing, power distribution systems, reliability index assessment, scalability, simulated annealing.

I. INTRODUCTION

PARALLEL and distributed processing [1], [2] has made significant contributions to scientific and engineering computation, especially for time-critical or time-consuming tasks. Parallel processing is usually carried out on dedicated multiprocessors with a global clock and shared memory, while distributed processing is usually carried out on multiple workstations or computers connected to a network, without a central clock or shared memory. In distributed processing, message passing is a common technique for sharing information, since there is no shared memory. Although the performance of networked computers is not as competitive as that of a dedicated parallel computer, networked computers are less expensive and more broadly available, for example in local area networks (LANs). As such, distributed processing is sometimes referred to as low-end parallel processing. It should be noted that the term "parallel" is used occasionally in this paper to indicate the concurrent execution of a computing task; the discussion in this paper is essentially based on distributed processing. In particular, like many other distributed processing approaches, the proposed approach employs the message-passing scheme for data sharing among collaborating processors.

Manuscript received May 16, 2004. Paper no. TPWRS-00643-2003. The author is with ABB Inc., Raleigh, NC 27606 USA (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/TPWRS.2004.841231

Parallel and distributed processing has been applied to power system computing in various areas [3]–[12], such as load flow [4]–[7], optimal power flow [8], [9], state estimation [10], contingency analysis [11], and reliability evaluation for generation and transmission systems by Monte Carlo simulation [12]. These previous works can be classified into two categories: applications of parallel processing [4]–[6], [12] and applications of distributed processing [7]–[12]. It appears that more recent works focus on distributed processing. This is probably due to the latest developments in network hardware and software, which make distributed processing faster, more broadly available, and easier to implement than before.

This paper presents distributed processing schemes of reliability index assessment (RIA) and reliability-based network reconfiguration (NR) for distribution systems. Here, radial distribution systems are addressed, since the majority of U.S. distribution systems are radial. In addition, system-level reliability indices are addressed because they are the primary reliability concern of utilities. The discussion shows that RIA can be easily decoupled and executed in parallel among different processors to achieve a speedup in wall-clock running time. The discussion also shows that NR is mainly composed of iterative runs of RIA; therefore, NR can be executed in parallel based on the distributed processing of RIA. The proposed implementation of distributed processing considers the unbalanced computing ability of different processors, a typical feature of heterogeneous computers connected to a LAN or a similar network.

This paper is organized as follows. Section II presents a controller-worker model based on message passing to share data and coordinate the activity of different processors. Section III first discusses the principle of an analytical approach to assess distribution reliability and why it is highly parallelizable, and then presents a coarse-grained distributed processing scheme for RIA. Section IV discusses the balanced task partition among different processors to achieve better performance and efficiency. Section V presents and discusses the test results for the distributed processing of RIA. Section VI applies the distributed processing of RIA to reliability-based NR, which employs an annealed local search and RIA; test results are also provided. Section VII concludes the paper.

II. CONTROLLER-WORKER MODEL FOR DISTRIBUTED PROCESSING

Distributed processing has two fundamental units: computation and communication. Computation refers to the CPU activity that carries out the actual computing task, while communication refers to the overhead activity that transfers data or shares information among different collaborating processors.



Although communication is indeed overhead (and should be kept to a minimum), it is normally a necessary part of distributed processing due to the lack of shared memory. This paper employs the controller-worker model [7] to coordinate computation and communication in distributed processing. As described in [7], the model is comprised of two types of processes, controllers and workers, which play different roles in the message-passing scheme.

A controller is a process with the following responsibilities:
• to accept user input;
• to assign initial data to workers;
• to invoke workers to execute tasks;
• to send/receive intermediate data to/from workers;
• to do system-wide, nonintensive calculations;
• to terminate the algorithm at the appropriate time and notify workers to stop.

A worker is a process with the following responsibilities:
• to receive initial data from a controller;
• to do intensive calculations upon a controller's request;
• to send/receive intermediate data to/from a controller;
• to stop when notified.

In the approach here, there is only one controller, while there are multiple workers. The controller is mainly responsible for coordinating all workers and perhaps for some nonintensive computations, while the workers are responsible for the intensive computations [7]. This model implies coarse-grained distributed processing, in which parallelism occurs at the level of a subroutine or a group of subroutines.
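To make the division of roles concrete, the following is a minimal sketch of the controller-worker pattern, written here in Python with multiprocessing queues standing in for the message-passing channel. The implementation described later in this paper uses Visual C++ over TCP/IP; the names below (worker, controller, do_intensive_work) are illustrative assumptions, not the paper's code.

```python
# Minimal controller-worker sketch: queues play the role of the message-passing channel.
from multiprocessing import Process, Queue

def do_intensive_work(data, request):
    # Placeholder for the intensive calculation a worker performs upon request.
    return sum(data[i] for i in request)

def worker(task_q: Queue, result_q: Queue):
    data = task_q.get()                              # receive initial data from the controller
    while True:
        msg = task_q.get()                           # wait for a work request or a stop notice
        if msg is None:                              # controller notifies the worker to stop
            break
        result_q.put(do_intensive_work(data, msg))   # send intermediate data back

def controller(data, assignments):
    task_qs = [Queue() for _ in assignments]
    result_q = Queue()
    procs = [Process(target=worker, args=(q, result_q)) for q in task_qs]
    for p in procs:
        p.start()
    for q, chunk in zip(task_qs, assignments):
        q.put(data)                                  # assign initial data to workers
        q.put(chunk)                                 # invoke workers to execute their tasks
    total = sum(result_q.get() for _ in procs)       # system-wide, nonintensive calculation
    for q in task_qs:
        q.put(None)                                  # terminate: notify workers to stop
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(controller([1.0] * 100, [range(0, 50), range(50, 100)]))
```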

III. DISTRIBUTED PROCESSING OF RIA

This work assumes N-1 contingencies. Further, only permanent component faults are considered in this section, for simplicity and illustration. A survey of 205 U.S. utilities [13] showed that five reliability indices, SAIFI, SAIDI, MAIFI$_E$, CAIDI, and ASAI (or ASUI), have been popularly employed by utilities. The survey also mentioned four other, less popular indices: CAIFI, CTAIDI, ASIFI, and ASIDI. This section takes SAIFI as an example to illustrate why the calculation of a reliability index can be divided into many parallelizable steps. The Appendix briefly illustrates that the other system reliability indices can be assessed in a similar way.

The purpose of RIA for distribution systems is to model each system contingency and compute its reliability impact. It may be carried out through various approaches, such as the analytical approach based on component contributions [14], [15], failure mode and effect analysis (FMEA) [16], Monte Carlo simulation [17], [18], Markov modeling [19], and other practical approaches [20]. This work employs the analytical approach of previous work [14], [15] to evaluate reliability indices such as SAIFI, SAIDI, MAIFI$_E$, etc. Fig. 1 briefly illustrates the input and output of the RIA analysis. Appendix C also provides the fundamental principles of the analytical simulation of RIA; more detailed algorithms can be found in [15].

Fig. 1. Input and output of RIA.

With the assumption of N-1 contingencies, a reliability index can be viewed as the sum of the contributions from each contingency (the failure of a component). Taking SAIFI as an example, this can be expressed as

$$\text{SAIFI} = \sum_{i=1}^{N_C} \text{SAIFI}_i \qquad (1)$$

where $\text{SAIFI}_i$ is the contribution to SAIFI from component $i$ and $N_C$ is the total number of components. Since SAIFI is defined as the number of customer interruptions that an average customer experiences during a year, $\text{SAIFI}_i$ is the number of customer interruptions at an average customer caused by failures of component $i$. Hence, $\text{SAIFI}_i$ can be written as [18]

$$\text{SAIFI}_i = \frac{\lambda_i N_i}{N_T} \qquad (2)$$

where $\lambda_i$ is the failure rate per year of component $i$, $N_i$ is the number of customers experiencing a sustained interruption due to a failure of component $i$, and $N_T$ is the total number of customers. Combining (1) and (2), we have

$$\text{SAIFI} = \frac{1}{N_T} \sum_{i=1}^{N_C} \lambda_i N_i \qquad (3)$$

Since $N_T$ is a constant and $\lambda_i$ is a property of the component itself, the above equation shows that the key to calculating SAIFI is to calculate $N_i$. Since $N_i$ stands for the number of customers experiencing sustained interruptions due to a failure of component $i$, it can be evaluated by identifying which components will be de-energized, based on the system topology, protection scheme, and restoration. Although the evaluation of $N_i$ may be complicated, it is certain that the evaluation of $N_i$ is independent of that of $N_j$. In other words, two processors can evaluate $N_i$ and $N_j$, respectively, in parallel, if both processors know the input system data. Hence, the evaluation of the SAIFI contributions from components $i$ and $j$, i.e., $\text{SAIFI}_i$ and $\text{SAIFI}_j$, can be carried out at different processors independently.
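As a quick numerical illustration of (3), consider a hypothetical two-component system (the numbers are made up for illustration and are not taken from the test systems): $N_T = 1000$ customers, $\lambda_1 = 0.2$ failures/yr with $N_1 = 300$, and $\lambda_2 = 0.1$ failures/yr with $N_2 = 500$. Then

$$\text{SAIFI} = \frac{0.2 \times 300 + 0.1 \times 500}{1000} = \frac{110}{1000} = 0.11 \ \text{interruptions per customer per year},$$

and the two terms in the numerator are exactly the per-component contributions $\lambda_1 N_1$ and $\lambda_2 N_2$ that independent processors could evaluate.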


As shown in the Appendix, similar conclusions can easily be drawn: the value of each of the other indices is either the sum of contributions from all individual components or can be directly obtained as a function of other indices. Hence, the evaluation of the component contributions can be carried out independently and concurrently.

With this simple but very important feature of reliability indices, a distributed processing approach for RIA is proposed. This approach employs the controller-worker model presented in the previous section. Assume there is a unique controller and $m$ workers, and that the system has $N_C$ components. The approach is described as follows (a code sketch of the scheme is given at the end of this section).
1) The controller reads in all input data, including system topology, component reliability rates, etc.
2) The controller sends the input data to all workers.
3) The controller requests each worker to calculate the reliability index contributions from randomly selected components. For instance, with the assumption that the component IDs are randomly distributed, the first worker computes reliability index contributions for components 1 to $N_C/m$; the second worker computes reliability index contributions for components $N_C/m + 1$ to $2N_C/m$; and so on.
4) Each worker concurrently evaluates the reliability index contributions for its assigned components.
5) Each worker sends the reliability index contributions of its assigned components to the controller.
6) The controller adds up all reliability index contributions to obtain the system reliability index.

The proposed coarse-grained distributed algorithm is highly parallelized for the following two reasons.
• The calculations of reliability index contributions among different workers are highly independent. A worker does not need any further communication with the others once the actual calculation starts.
• The communication is kept to a minimum. The major communication occurs only at the very beginning, to distribute the input data (system topology, component reliability data, etc.) to the workers. The collection of the reliability index contributions calculated by the workers needs much less communication time.

It is noteworthy that although this work assumes N-1 contingencies, higher order contingencies can be handled as well based on the component (contingency) contribution approach, because the basic unit of this approach is the evaluation of the impact, or contribution, to reliability indices from each contingency event. Since each contingency event can be assessed independently, RIA can be carried out in parallel. For the same reason, a similar parallelization approach can be applied to meshed-network RIA, although more implementation effort is needed than for radial-system RIA.
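The following Python sketch emulates the six steps above with a process pool in place of the paper's TCP/IP message passing. Here evaluate_N_i is a hypothetical stand-in for the topology/protection/restoration analysis that determines the customers interrupted by a failure of component i; it simply returns a stored value and is not part of the paper's published algorithm.

```python
# Sketch of the six-step distributed RIA scheme (a process pool replaces the
# controller-worker message passing; evaluate_N_i is a trivial stand-in).
from concurrent.futures import ProcessPoolExecutor

def evaluate_N_i(system_data, i):
    # Stand-in: the real algorithm traces topology, protection, and restoration.
    return system_data["customers_interrupted"][i]

def worker_partial_saifi(system_data, component_ids):
    # Steps 4-5: a worker evaluates and returns the SAIFI contributions of its components.
    lam = system_data["failure_rate"]                  # lambda_i, failures per year
    n_t = system_data["total_customers"]               # N_T
    return sum(lam[i] * evaluate_N_i(system_data, i) / n_t for i in component_ids)

def distributed_saifi(system_data, num_workers):
    n_c = len(system_data["failure_rate"])             # steps 1-2: data read in and shared
    chunks = [list(range(n_c))[k::num_workers] for k in range(num_workers)]   # step 3
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        partials = pool.map(worker_partial_saifi, [system_data] * num_workers, chunks)
    return sum(partials)                               # step 6: controller adds everything up

if __name__ == "__main__":
    data = {"failure_rate": [0.2, 0.1], "customers_interrupted": [300, 500],
            "total_customers": 1000}
    print(distributed_saifi(data, 2))                  # 0.11, matching the example above
```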


Fig. 2. Partition of a task with s independent steps.

IV. BALANCED TASK PARTITION WITH THE CONSIDERATION OF COMPUTING ABILITY

Parallel or distributed processing usually partitions a task into many small units processed at individual participating processors. It is common practice to partition the task equally among processors, under the ideal assumption of a balanced computing environment, that is, that all processors can finish the same amount of computing work in the same time. This is typically true for parallel processing carried out in a dedicated supercomputer. However, the ideal environment is sometimes not available, especially in a "loosely coupled" distributed processing environment such as heterogeneous networked computers. In this case, the computing power of different computers may vary. An equal partition may lead to some inefficiency, and task balancing needs to be considered for higher efficiency.

Assume a task consisting of 3000 steps, computationally identical and logically independent, is to be carried out at two different processors, P1 and P2. If running separately, P1 can complete the task in 100 s and P2 in 50 s. Ignoring overhead, an equal task partition (1500:1500) leads P1 to complete its assignment in 50 s and P2 in 25 s. This causes P2 to be idle for 25 s, and the overall wall-clock running time is 50 s. To fully utilize the available computing power, P2 should receive more work than P1. If 1000 steps are assigned to P1 and 2000 steps to P2, then each processor finishes its assignment in 33.3 s, or 16.7 s less than with the equal task partition. It can be easily proved that this 1000:2000 partition is the most efficient partition.

Based on the above analysis, the balanced task partition with computing ability adjusted for two processors is illustrated in Fig. 2. The weighting factors of the task partition for the two processors, $w_1$ and $w_2$, are given by

$$w_1 = \frac{1/T_1}{1/T_1 + 1/T_2} \qquad (4)$$

$$w_2 = \frac{1/T_2}{1/T_1 + 1/T_2} \qquad (5)$$

where $T_1$ is the time to complete the task with the single processor P1 and $T_2$ is the time to complete the task with the single processor P2. Hence, each processor finishes its own assignment in $T_1 T_2/(T_1 + T_2)$ s for 1 p.u. of task. Equations (4) and (5) can be extended to multiple processors. Equation (6) shows the weighting factor for the $i$th processor. It can be easily verified that each processor completes its assignment simultaneously in $1/\sum_{j=1}^{n}(1/T_j)$ s

$$w_i = \frac{1/T_i}{\sum_{j=1}^{n} 1/T_j} \qquad (6)$$

where $w_i$ is the weighting factor for processor $i$ and $T_i$ is the time to complete the task with the single processor $i$.

The above discussion is based on two assumptions.
1) Each step within a task is independent.
2) Each step is computationally equal.
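A small sketch of the weighting rule (4)-(6), using the numbers of the 100 s / 50 s example above (illustrative Python, not the paper's implementation):

```python
# Balanced task partition per (4)-(6): weight each processor by the inverse of its
# stand-alone completion time T_i. The numbers reproduce the 100 s / 50 s example.
def partition_weights(seq_times):
    inv = [1.0 / t for t in seq_times]
    return [x / sum(inv) for x in inv]                     # w_i = (1/T_i) / sum_j (1/T_j)

weights = partition_weights([100.0, 50.0])                 # [0.333..., 0.666...]
steps = [round(w * 3000) for w in weights]                 # [1000, 2000] of the 3000 steps
finish = [w * t for w, t in zip(weights, [100.0, 50.0])]   # both finish in about 33.3 s
```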


The first assumption is true for the RIA task because each RIA step, i.e., the evaluation of the contribution to a reliability index from an individual component, is independent. The second assumption may not be precisely true for the RIA algorithm, because the evaluation of the reliability index contribution from a component depends on connectivity, protection, restoration, etc. For example, it takes less CPU time to evaluate the SAIFI contribution from a component in a feeder with 50 components than from a component in a feeder with 100 components. However, the second assumption should be roughly true for a large distribution system on the scale of thousands of components, because each processor is assigned a large number of randomly selected components. Hence, an average step carried out at one processor should be computationally equal to an average step at another processor.

The above discussion makes it apparent that Step 3 in the distributed processing of RIA should be modified slightly as follows: the $i$th processor should be responsible for $w_i \times N_C$ randomly selected components instead of $N_C/m$ components.

It should be noted that there are alternative ways of task balancing, such as dynamic assignment, which balances the computation among processors at run time based on assignment-completion loops [11]. In general, however, dynamic assignment requires more synchronization and communication overhead; it is more suitable for a dynamic task whose computation time is very unpredictable or varies over a wide range. Since the computation-dominant RIA application completes in a stable manner (in steps of component contribution evaluation), the proposed static approach tends to be effective for the RIA application: there is no iterative run-time communication, because each worker receives its total assignment at the start of the algorithm. Implementation is also simplified, while efficiency is ensured by the task partition adjusted with the weighting factors.

V. TEST RESULTS OF RIA DISTRIBUTED PROCESSING

A. Test Systems, Environment, and Procedure

This section presents test results of the distributed processing of RIA. Here the five popular system-level reliability indices, SAIFI, SAIDI, MAIFI$_E$, CAIDI, and ASAI [13], are considered. The tests are performed on several actual distribution systems with sizes from 3835 components to 11 026 components, as shown in Table I. The test environment is a local area network with up to six computers, each with a Pentium III processor. The speeds of the six processors vary from 450 MHz to 1 GHz, the network speed is 10 Mb/s, and the memory sizes vary from 128 to 256 MB. The distributed RIA processing is implemented in Visual C++ with the TCP/IP protocol for message passing.

The testing procedure is as follows. First, the RIA algorithm is executed in sequential processing mode, that is, on a single processor, for each of the six processors. The running time of each individual processor is used to compute the weighting factors for the balanced task partition based on (6). Then, RIA is executed in distributed processing mode with the balanced task partition on 2, 3, 4, 5, and 6 processors, respectively.


TABLE I SIZES OF FIVE TEST SYSTEMS

TABLE II TIME (SECONDS) TO COMPLETE A SEQUENTIAL RIA AT EACH INDIVIDUAL MACHINE

The results of scalability are presented below. Here scalability is measured as speedup and efficiency, defined by the following equations [1], [2]:

$$\text{speedup} = \frac{T_s}{T_p} \qquad (7)$$

$$\text{efficiency} = \frac{\text{speedup}}{p} \qquad (8)$$

where $T_s$ is the wall-clock time to complete the best sequential algorithm on a single processor, $T_p$ is the wall-clock time to complete the parallel or distributed algorithm on multiple processors, and $p$ is the number of processors. The above speedup definition does not indicate which processor $T_s$ is associated with; this implies that every processor is identical, as in a typical parallel processing environment. Since this work is carried out on processors with different computing capabilities, $T_s$ is taken as the average of the sequential RIA running times of all participating processors. This section also presents an illustrative test to verify the performance improvement with the balanced task partition. The test is performed on the mid-sized system SYS-3.

B. Test Results

Table II shows the running time in seconds to complete a sequential RIA algorithm with a single processor. It shows that these networked computers have different computing power, which is used for the balanced task partition in the distributed processing. Tests for distributed processing are carried out for five scenarios with 2–6 processors working in parallel. The processor assignments are as follows: two processors: P1, P2; three processors: P1, P2, P3; four processors: P1, P2, P3, P4; five processors: P1, P2, P3, P4, P5; six processors: P1, P2, P3, P4, P5, P6. In each distributed processing scenario, each processor runs a worker process, and one of the processors also runs the controller process.
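For reference, a minimal sketch of how (7) and (8) are evaluated in this setting, with $T_s$ taken as the average of the participating processors' sequential times (illustrative Python; the timing values would come from measurements such as those in Table II):

```python
# Speedup (7) and efficiency (8); T_s is averaged over the participating processors
# because their computing capabilities differ.
def speedup_and_efficiency(seq_times, parallel_time):
    t_s = sum(seq_times) / len(seq_times)      # average sequential wall-clock time
    speedup = t_s / parallel_time              # (7)
    return speedup, speedup / len(seq_times)   # (8): efficiency = speedup / p
```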


TABLE III SPEEDUP OF DISTRIBUTED PROCESSING OF RIA

TABLE IV SPEEDUP OF DISTRIBUTED PROCESSING WITH TWO PARTITIONS FOR THE SYS-3 SYSTEM

The purpose of running the controller and a worker on the same processor is to fully utilize the computing ability of that processor, because a controller is not computationally intensive while a worker is. In fact, while the workers are calculating the reliability index contributions, the controller is essentially idle, waiting for feedback from all of the workers. Typically, the most powerful processor is selected to host the controller and a worker.

Fig. 3. Efficiency of distributed processing of RIA.

Table III and Fig. 3 present the speedup and efficiency, respectively, of the distributed processing of RIA. They show that distributed processing can speed up the wall-clock execution of RIA and that, in general, the speedup and efficiency increase as the system size increases. As the number of involved processors increases up to 6, the speedup keeps increasing at a considerable rate, although the efficiency drops.

C. Tests for Equal and Balanced Task Partitions

A test run of distributed RIA processing with the equal task partition is carried out for comparison against the balanced task partition. The test is executed on the mid-sized system SYS-3 for illustrative purposes. In Table IV, the second row shows the speedup with the equal task partition, while the third row shows the speedup with the balanced task partition. The last row shows the improvement of the speedup in percentage. It is clearly demonstrated that the balanced task partition improves the efficiency of the distributed processing of RIA when the processors have different computing capabilities.

VI. DISTRIBUTED PROCESSING OF RELIABILITY-BASED NETWORK RECONFIGURATION

The distributed processing for RIA proposed in Section III, together with the balanced task partition, can be easily applied to reliability-based optimization such as NR. NR is accomplished by closing normally open switches and opening normally closed switches. Since NR is usually very time-consuming (from tens of minutes to hours), the application of distributed processing can be very beneficial in improving the wall-clock running time. Reliability-based NR identifies the network configuration that achieves the highest possible reliability without any financial investment. Although various formulations may be applied to NR, this work takes a weighted sum of multiple reliability indices as the objective function [15], [21]

$$\text{obj} = \sum_{j} w_j \cdot \text{Index}_j \qquad (9)$$

where $\text{Index}_j$ is a system reliability index (e.g., SAIFI or SAIDI) and $w_j$ is its weight. The constraints could be component loadings and voltage-drop limits. Penalties can be added to the objective function if constraints are violated [21]. Since NR is a nonlinear, noncontinuous optimization problem, many combinatorial techniques, such as local search, tabu search, simulated annealing, and genetic algorithms, have been applied to NR. This paper employs an approach presented in [15] and [21], which combines local search and simulated annealing [22], to demonstrate that the previous distributed processing of RIA can be easily applied to NR.

The traditional, sequential algorithm is described in detail in previous works [15], [21]. Here it is sketched as follows.
1) Set the initial parameters for simulated annealing, such as the starting temperature $T$ and the annealing rate $R$.
2) Identify the objective function $\text{obj}$ of the initial configuration.
3) Set the above $\text{obj}$ as the temporary best objective, $\text{OBJ}$.
4) For each tie switch:
   a) identify a new configuration by performing a tie switch shift;
   b) perform an RIA to calculate the new objective function, $\text{obj}'$, for the new configuration;
   c) generate a random number $r \in [0, 1)$;
   d) check whether the new configuration is acceptable: if $\text{obj}' < \text{OBJ}$ or $r < e^{(\text{OBJ} - \text{obj}')/T}$, then set $\text{OBJ} = \text{obj}'$; otherwise, shift the tie switch back to its original position;
   e) go to 4a) to perform the switch shift for the remaining adjacent switches.
5) Repeat 4) for all tie switches.
6) Change the annealing temperature by setting $T = R \times T$.
7) If there has been no change of $\text{OBJ}$ since the last change of $T$, stop. Otherwise, go to step 4).
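The following Python sketch of the annealed local search shows where the RIA evaluations sit in the loop. Here run_ria, moves_for, apply_move, and undo_move are hypothetical callables standing in for the routines of [15], [21]; run_ria(config) would be the (possibly distributed) evaluation of the weighted objective (9).

```python
# Annealed local search for NR (steps 1-7 above); helper routines are passed in
# by the caller as stand-ins for the detailed algorithms of [15], [21].
import math
import random

def network_reconfiguration(config, tie_switches, run_ria, moves_for, apply_move,
                            undo_move, T=0.3, R=0.9):
    OBJ = run_ria(config)                          # steps 1-3: objective of the initial config
    while True:
        changed = False
        for sw in tie_switches:                    # steps 4-5: sweep all tie switches
            for move in moves_for(config, sw):     # 4a), 4e): adjacent switch shifts
                apply_move(config, sw, move)       # 4a): new configuration
                obj_new = run_ria(config)          # 4b): RIA (distributed in this paper)
                r = random.random()                # 4c): random number in [0, 1)
                if obj_new < OBJ or r < math.exp((OBJ - obj_new) / T):
                    OBJ, changed = obj_new, True   # 4d): accept the new configuration
                else:
                    undo_move(config, sw, move)    # 4d): reject and shift back
        T *= R                                     # step 6: lower the temperature
        if not changed:                            # step 7: stop if OBJ unchanged since
            return config, OBJ                     #         the last change of T
```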


As the above description shows, the NR algorithm requires multiple executions of Step 4), especially Step 4b), to complete an RIA run for each new configuration. Normally, there may be hundreds, or even thousands, of new configurations to examine in large systems. Even though network simplification may be employed to reduce the running time to some extent, the NR algorithm is still very time-consuming. Therefore, it is very beneficial if distributed processing can be applied to NR to reduce the wall-clock running time. Practical tests show that the repeated RIA executions in Step 4b) typically consume more than 98% of the CPU time of the overall NR algorithm. Hence, the key to parallelizing NR is to parallelize the RIA executions.

With the annealing algorithm, it is very difficult to carry out multiple instances of RIA in parallel, because a new configuration is determined only after the RIA of the previous configuration is completed; the acceptance or rejection of the previous configuration, together with the perturbation mechanism, may affect the new configuration. However, there is a simple yet effective approach to parallelize the sequential version of NR: parallelize each individual RIA using the distributed RIA processing presented previously, rather than parallelizing different RIA instances. As such, the following three principles are proposed for the distributed processing of NR.
• The controller knows the global parameters such as T, R, and OBJ. All steps except 4b) are carried out sequentially at the controller without the involvement of the workers.
• The responsibility of the workers is to collaboratively complete Step 4b), the RIA, in parallel for each new configuration. The workers know the system data needed to complete RIA and do not have any knowledge of the global parameters for NR. Here, the balanced task partition is applied in the distributed processing of RIA.
• The controller transfers the bulk system data (components, topology, protection schemes, etc.) to the workers only in the first RIA run. In the following RIA runs, the controller only needs to notify the workers of the topological changes after each new configuration is identified. Hence, the workers are able to complete another RIA run for the new configuration without unnecessary bulk data transfer (a message-level sketch of this pattern is given below).

The above distributed processing of NR can be easily implemented by reusing the previous approach of RIA distributed processing. Good efficiency is expected even though Step 4b) is the only step carried out in parallel, because it dominates the other steps in CPU consumption. It should be noted that the annealed local search algorithm is selected for illustrative purposes. Other optimization algorithms, such as a genetic algorithm, may work for NR as well, but reliability-based optimization will likely still depend on the RIA algorithm to evaluate the system reliability of each new configuration. Therefore, the proposed distributed processing for NR may be applied to other, similar reliability-based optimization algorithms.

The distributed processing of reliability-based NR is implemented as an extension of the sequential version of the previous works [15], [21]. It is then tested on the mid-sized system SYS-3 for demonstration purposes. To achieve acceptable solution quality without unnecessary computation time, an annealing rate of 0.9 and a starting temperature of 0.3 are selected.
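A message-level sketch of the third principle (bulk data once, topology deltas afterwards). The worker handle with send()/receive() methods, the message tags, and the switch names are illustrative assumptions, not the actual TCP/IP protocol of the implementation.

```python
# Controller side of one NR-driven RIA request: ship the full model only once,
# then send only switch-state changes for subsequent configurations.
def request_ria(workers, system_data, switch_changes, first_run):
    for w in workers:
        if first_run:
            w.send(("INIT", system_data))          # bulk data: topology, protection, rates
        else:
            w.send(("DELTA", switch_changes))      # e.g. {"TIE-7": "closed", "SW-12": "open"}
        w.send(("RUN_RIA", None))                  # invoke the partial-contribution run
    return sum(w.receive() for w in workers)       # add up the workers' contributions
```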


TABLE V SCALABILITY OF DISTRIBUTED PROCESSING FOR NR ON THE SYSTEM SYS-3

The above description of the distributed processing implies that different executions of NR may yield different results and require different numbers of RIA runs because of the random number generation in Step 4c). To examine the efficiency of the distributed processing, the completion time of an NR test is therefore normalized per RIA run. It should be noted that if the random numbers are generated identically among different NR tests, the result of each NR test is the same regardless of whether distributed processing is used, and the number of RIA runs within each NR test is the same as well.

Table V shows the speedup of the distributed processing of NR. The scalability here is higher than that of the distributed processing of RIA because less relative overhead is involved. The detailed explanation is as follows. The NR algorithm contains many iterative RIA runs, and the initial bulk data transfer (Step 2 in the distributed processing of RIA in Section III) is necessary only in the first RIA run. In the follow-up runs, only a small amount of information, namely the change of network topology such as which switches are opened or closed, needs to be transferred from the controller to the workers; there is no need to re-send the whole system information. Therefore, the initial data transfer, which is a considerable overhead for the RIA algorithm, is negligible in the NR algorithm. Hence, a higher speedup is observed in the distributed processing of the NR algorithm.

VII. CONCLUSION

The RIA for distribution systems can be carried out in parallel, because the contributions to a reliability index from different components are independent. A balanced task partition scheme may be applied to improve efficiency, especially in a distributed processing environment where the computing abilities of the participating processors are usually different. The distributed processing of RIA can be applied to an NR algorithm that optimizes system reliability based on an annealed local search. Since the kernel of the NR algorithm mainly consists of multiple RIA runs, it can be easily implemented on top of the distributed processing of RIA. The proposed distributed processing of RIA and NR should be practically beneficial if applied to utility distribution systems, which may have tens or even hundreds of thousands of components. It should be especially attractive for the NR application, since NR adds a dimension of computational complexity on top of RIA.

APPENDIX

A. System Reliability Indices

The analytical approach considers a reliability index as the sum of the contributions from failures of individual components during the course of a year.


This Appendix gives the following equations as a formal method to calculate reliability indices. These equations are expected to make it easy to understand why the analytical approach is parallelizable. SAIFI, SAIDI, and MAIFI$_E$ can be calculated with (A1)–(A3). Equation (3) is repeated here as (A1) for completeness. These three equations can also be found in [18]:

$$\text{SAIFI} = \frac{1}{N_T} \sum_{i=1}^{N_C} \lambda_i N_i \qquad (A1)$$

$$\text{SAIDI} = \frac{1}{N_T} \sum_{i=1}^{N_C} \lambda_i D_i \qquad (A2)$$

$$\text{MAIFI}_E = \frac{1}{N_T} \sum_{i=1}^{N_C} \lambda_i M_i \qquad (A3)$$

where
$\lambda_i$ = failure rate of component $i$;
$N_i$ = number of customers experiencing a sustained interruption due to a failure of component $i$;
$D_i$ = sustained interruption durations for all customers due to a failure of component $i$, i.e., $D_i = \sum_k d_{ik}$;
$d_{ik}$ = sustained interruption duration for customer $k$ due to a failure of component $i$;
$M_i$ = number of customers experiencing a temporary interruption (event) due to a failure of component $i$;
$N_T$ = total number of customers;
$N_C$ = total number of components.

TABLE AI OCCURRENCE OF INTERRUPTION AT LOAD POINTS DUE TO N-1 CONTINGENCIES

Three other popular reliability indices, CAIDI, ASAI, and ASUI, can be directly derived from SAIFI and SAIDI. They are given as follows (with SAIDI expressed in hours per customer per year):

$$\text{CAIDI} = \frac{\text{SAIDI}}{\text{SAIFI}} \qquad (A4)$$

$$\text{ASAI} = 1 - \frac{\text{SAIDI}}{8760} \qquad (A5)$$

$$\text{ASUI} = 1 - \text{ASAI} = \frac{\text{SAIDI}}{8760} \qquad (A6)$$

Some less popular reliability indices, such as CAIFI, CTAIDI, ASIFI, and ASIDI [13], can still be obtained using the contribution approach. They are given by

$$\text{CAIFI} = \frac{1}{CN} \sum_{i=1}^{N_C} \lambda_i N_i \qquad (A7)$$

$$\text{CTAIDI} = \frac{1}{CN} \sum_{i=1}^{N_C} \lambda_i D_i \qquad (A8)$$

$$\text{ASIFI} = \frac{1}{L_T} \sum_{i=1}^{N_C} \lambda_i L_i \qquad (A9)$$

$$\text{ASIDI} = \frac{1}{L_T} \sum_{i=1}^{N_C} \lambda_i LD_i \qquad (A10)$$

where
$CN$ = total number of customers experiencing at least one interruption per year;
$L_T$ = total connected kVA served;
$L_i$ = interrupted kVA due to a failure of component $i$;
$LD_i$ = interrupted kVA weighted by interruption hours due to a failure of component $i$;
$L_{ik}$ = interrupted kVA for customer $k$ due to a failure of component $i$ (note: $L_i = \sum_k L_{ik}$ and $LD_i = \sum_k L_{ik} d_{ik}$).

Other important reliability indices, though not shown in [13], include the Expected Energy Not Served (EENS) and the Average Energy Not Served (AENS). They can be obtained with

$$\text{EENS} = \sum_{i=1}^{N_C} \lambda_i \sum_{k} pf_k \, L_{ik} \, d_{ik} \qquad (A11)$$

$$\text{AENS} = \frac{\text{EENS}}{N_T} \qquad (A12)$$

where $pf_k$ is the power factor of customer $k$.
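A compact sketch of (A1)–(A6) as they would be accumulated from per-component contributions (illustrative Python; the array names are assumptions, and SAIDI is taken in hours so that (A5)–(A6) use 8760 h/yr):

```python
# System indices from per-component contributions: lam[i] in failures/yr, N[i] customers
# sustained-interrupted, D[i] customer-hours of sustained interruption, M[i] customers
# with momentary events, n_t total customers.
def system_indices(lam, N, D, M, n_t):
    saifi = sum(l * n for l, n in zip(lam, N)) / n_t    # (A1)
    saidi = sum(l * d for l, d in zip(lam, D)) / n_t    # (A2), hours per customer-year
    maifi = sum(l * m for l, m in zip(lam, M)) / n_t    # (A3)
    caidi = saidi / saifi                               # (A4)
    asai = 1.0 - saidi / 8760.0                         # (A5)
    asui = saidi / 8760.0                               # (A6)
    return saifi, saidi, maifi, caidi, asai, asui
```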

B. Load Point Reliability Indices

Although this work does not address load point reliability indices [16], [17], they can be calculated with a slight refinement of the analytical approach based on component contributions. The refinement is as follows: when a component failure is analyzed, a temporary record is kept for each load point to identify 1) whether it will be interrupted and 2) the duration of the interruption. Table AI shows a simple example of the impact on load points from component failures under N-1 contingencies. If a load point experiences an interruption after a component fails, the corresponding entry is set to "1"; otherwise, it is set to "0".


Hence, the annual interruption rate at each load point $k$, $\lambda^{LP}_k$, is the sum of its column, weighted by the component failure rates. If the values are replaced with interruption durations, then the annual load point interruption duration $U^{LP}_k$ is obtained as the weighted column sum, and the average interruption time can be obtained as $U^{LP}_k / \lambda^{LP}_k$. This is summarized as follows:

$$\lambda^{LP}_k = \sum_{i=1}^{N_C} \lambda_i I_{ik} \qquad (A13)$$

$$U^{LP}_k = \sum_{i=1}^{N_C} \lambda_i d_{ik} \qquad (A14)$$

$$r^{LP}_k = \frac{U^{LP}_k}{\lambda^{LP}_k} \qquad (A15)$$

where
$I_{ik}$ = 1 if a failure of component $i$ causes an interruption at load point $k$, and 0 otherwise;
$d_{ik}$ = interruption duration at load point $k$ if component $i$ fails.

It is noteworthy that the sum of the $i$th row in Table AI, weighted by the number of customers at each load point, is $N_i$, the contribution factor used to calculate SAIFI. If the values in the table are outage durations, then the sum of the row weighted by the number of customers is $D_i$, which is used to calculate SAIDI. Since such tables are intermediate results and consume a lot of memory, it is not necessary to implement them. In the actual implementation, it is only necessary to keep records of the accumulated value of the weighted column sum (for load point reliability indices) or the weighted row sum (for system reliability indices, as done in this work).

The interruption rate at load point $k$ can be employed to calculate $CN$, the total number of customers experiencing at least one interruption per year, which is used in (A7)–(A8). This is given by

$$CN = \sum_{k=1}^{N_{LP}} N^{LP}_k \left(1 - e^{-\lambda^{LP}_k}\right) \qquad (A16)$$

where
$N_{LP}$ = total number of load points;
$N^{LP}_k$ = number of customers at load point $k$.

In the above equation, $1 - e^{-\lambda^{LP}_k}$ is the probability that load point $k$ experiences at least one interruption per year. This is because component failures with constant rates follow a Poisson process [15], [18], and hence so do the interruptions at a load point. The probability of load point $k$ being interrupted $n$ times per year is given as follows:

$$P(n) = \frac{\left(\lambda^{LP}_k\right)^n e^{-\lambda^{LP}_k}}{n!} \qquad (A17)$$

Hence, the probability of being interrupted at least once per year is given by

$$P(n \geq 1) = 1 - P(0) = 1 - e^{-\lambda^{LP}_k} \qquad (A18)$$
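A sketch of the load-point bookkeeping of Table AI and (A13)–(A16) (illustrative Python; lam, I, d, and n_lp are assumed array names, with I[i][k] and d[i][k] as defined above):

```python
# Load point indices from the interruption table: lam[i] component failure rates,
# I[i][k] the 0/1 interruption flags, d[i][k] interruption durations, n_lp[k] the
# number of customers at load point k.
import math

def load_point_indices(lam, I, d, n_lp):
    n_comp, n_points = len(lam), len(n_lp)
    lam_lp = [sum(lam[i] * I[i][k] for i in range(n_comp)) for k in range(n_points)]  # (A13)
    u_lp = [sum(lam[i] * d[i][k] for i in range(n_comp)) for k in range(n_points)]    # (A14)
    r_lp = [u / l if l > 0 else 0.0 for u, l in zip(u_lp, lam_lp)]                    # (A15)
    cn = sum(n * (1.0 - math.exp(-l)) for n, l in zip(n_lp, lam_lp))                  # (A16)
    return lam_lp, u_lp, r_lp, cn
```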

C. Description of the Analytical Approach of RIA Simulation

The algorithm of the analytical approach of RIA is outlined as follows.
1) For a permanent fault at component C (repair time MTTR):
   a) Fault isolation: an upstream search is performed to find the nearest protection or reclosing device, P, which operates to isolate the fault.
   b) Upstream restoration: if there is a switching device S between P and C, S is opened to restore the components between P and S. All restored components experience a (sustained) interruption equal to the switching time in minutes. If this time is less than the threshold of momentary interruption, in the case that S is automated, the interruption is classified as a momentary interruption event.
   c) Downstream restoration: if there is an alternate power source through a normally open switch (NOS) and there is another normally closed switch (NCS) between the NOS and C, all components between the two switches experience a (sustained) interruption equal to the switching time in minutes due to downstream restoration. If the time is less than the momentary interruption threshold because of automated switching, the interruption is classified as a momentary interruption event.
   d) All isolated and unrestored components experience a sustained interruption of MTTR minutes. (Note: restored components experience an interruption shorter than the repair time MTTR.)
2) For a temporary fault:
   a) Fuse saving: if the temporary fault can be cleared by an upstream reclosing device, R, with a fuse-saving scheme, all components downstream of R experience a momentary interruption event.
   b) Fuse clearing: if there is no fuse-saving protection, the fuse blows and isolates its downstream components. The interrupted downstream components may be restored through back-feed as described in 1c).

The above approach is applied to each component to identify the interruption type and duration at each load/customer point if that specific component fails. Hence, the contribution factors to reliability from a component failure, such as $N_i$, $D_i$, $M_i$, and so on, can be easily obtained. The above approach can be extended to address more complicated cases such as operation failures, transfer switches, and distributed generation. Details can be found in [15].
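The classification logic of case 1) can be sketched as follows; the topology queries (find_upstream_protection, switch_between, components_between, isolated_components) and the switching-time lookup are hypothetical stand-ins for the detailed algorithms in [15], passed in by the caller so the sketch stays self-contained.

```python
# Per-fault classification for a permanent fault at component comp (case 1 above).
# Durations are in minutes; momentary_threshold separates momentary events from
# sustained interruptions. All topology helpers are supplied by the caller.
def simulate_permanent_fault(comp, mttr, momentary_threshold, find_upstream_protection,
                             switch_between, components_between, isolated_components,
                             switching_time):
    outcome = {}                                          # component -> (type, minutes)
    P = find_upstream_protection(comp)                    # 1a) device isolating the fault
    for c in isolated_components(P):
        outcome[c] = ("sustained", mttr)                  # 1d) default: out until repaired
    S = switch_between(P, comp)                           # 1b) upstream restoration
    if S is not None:
        t = switching_time(S)
        kind = "momentary" if t < momentary_threshold else "sustained"
        for c in components_between(P, S):
            outcome[c] = (kind, t)
    # 1c) downstream restoration through a normally open switch is handled analogously.
    return outcome
```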

ACKNOWLEDGMENT

The author is very grateful to Dr. R. E. Brown for his thoughts and discussions on power distribution reliability issues. The author would also like to thank the reviewers and the editor for their comments and suggestions, which helped the author revise and improve this paper.

REFERENCES

[1] V. Kumar, A. Grama, A. Gupta, and G. Karypis, Introduction to Parallel Computing: Design and Analysis of Algorithms. Reading, MA: Benjamin-Cummings (Addison-Wesley), 1994.
[2] U. Manber, Introduction to Algorithms—A Creative Approach. Reading, MA: Addison-Wesley, 1989.
[3] D. J. Tylavsky et al., "Parallel processing in power systems computation," IEEE Trans. Power Syst., vol. 7, no. 2, pp. 629–638, May 1992.
[4] A. Abur, "A parallel scheme for the forward/backward substitutions in solving sparse linear equations," IEEE Trans. Power Syst., vol. 3, no. 4, pp. 1471–1478, Nov. 1988.
[5] J. Q. Wu and A. Bose, "Parallel solution of large sparse matrix equations and parallel power flow," IEEE Trans. Power Syst., vol. 10, no. 3, pp. 1343–1349, Aug. 1995.
[6] S. D. Chen and J. F. Chen, "Fast load flow using multiprocessors," Int. J. Elect. Power & Energy Syst., vol. 22, no. 4, pp. 231–236, 2000.
[7] F. Li and R. P. Broadwater, "Distributed algorithms with theoretic scalability analysis of radial and looped load flows for power distribution systems," Elect. Power Syst. Res., vol. 65, no. 2, pp. 169–177, 2003.
[8] R. Baldick, B. H. Kim, C. Craig, and Y. Luo, "A fast distributed implementation of optimal power flow," IEEE Trans. Power Syst., vol. 14, no. 3, pp. 858–864, Aug. 1999.
[9] B. H. Kim and R. Baldick, "A comparison of distributed optimal power flow algorithm," IEEE Trans. Power Syst., vol. 15, no. 2, pp. 599–604, May 2000.
[10] R. Ebrahimian and R. Baldick, "State estimation distributed processing," IEEE Trans. Power Syst., vol. 15, no. 4, pp. 1240–1246, Nov. 2000.
[11] J. R. Santos, A. G. Exposito, and J. L. M. Ramos, "Distributed contingency analysis: Practical issues," IEEE Trans. Power Syst., vol. 14, no. 4, pp. 1349–1354, Nov. 1999.
[12] C. L. T. Borges, D. M. Falcao, J. C. O. Mello, and A. C. G. Melo, "Composite reliability evaluation by sequential Monte Carlo simulation on parallel and distributed processing environments," IEEE Trans. Power Syst., vol. 16, no. 2, pp. 203–209, May 2001.
[13] IEEE/PES Working Group on System Design, "A survey of distribution reliability measurement practices in the U.S.," IEEE Trans. Power Delivery, vol. 14, no. 1, pp. 250–257, Jan. 1999.
[14] G. Kjolle and K. Sand, "RELRAD—An analytical approach for distribution system reliability assessment," IEEE Trans. Power Delivery, vol. 7, no. 2, pp. 809–814, Apr. 1992.
[15] R. E. Brown, Electric Power Distribution Reliability. New York: Marcel Dekker, 2001.
[16] R. Billinton and R. N. Allan, Reliability Evaluation of Power Systems, 2nd ed. New York: Plenum, 1996.
[17] R. Billinton and P. Wang, "Teaching distribution system reliability evaluation using Monte Carlo simulation," IEEE Trans. Power Syst., vol. 14, no. 2, pp. 397–403, May 1999.
[18] F. Li, R. E. Brown, and L. A. A. Freeman, "A linear contribution factor model and its applications in Monte Carlo simulation and sensitivity analysis," IEEE Trans. Power Syst., vol. 18, no. 3, pp. 1213–1215, Aug. 2003.
[19] R. E. Brown, S. Gupta, S. S. Venkata, R. D. Christie, and R. Fletcher, "Distribution system reliability assessment using hierarchical Markov modeling," IEEE Trans. Power Delivery, vol. 11, no. 4, pp. 1929–1934, Oct. 1996.
[20] S. R. Gilligan, "A method for estimating the reliability of distribution circuits," IEEE Trans. Power Delivery, vol. 7, no. 2, pp. 694–698, Apr. 1992.
[21] R. E. Brown, "Distribution reliability assessment and reconfiguration optimization," in Proc. IEEE PES Transmission and Distribution Conf., vol. 2, 2001, pp. 994–999.
[22] H. D. Chiang, J. C. Wang, O. Cockings, and H. D. Shin, "Optimal capacitor placements in distribution systems: Part 1: A new formulation and the overall problem," IEEE Trans. Power Delivery, vol. 5, no. 2, pp. 634–642, Apr. 1990.

Fangxing Li (M’01) received the B.S.E.E. and M.S.E.E. degrees from Southeast University, Nanjing, China, in 1994 and 1997, respectively, and the Ph.D. degree from Virginia Tech, Blacksburg, Virginia, in 2001. He is presently a Senior Consulting R&D Engineer at ABB Inc., Raleigh, NC, where he specializes in computational methods and applications in power systems, especially in power distribution analysis and energy market simulation. Dr. Li is a member of Sigma Xi.