Estimation of DC Offset Parameters Using Global Optimization Technique

Fang Duan*, Rastko Zivanovic
School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, SA 5005, Australia
[email protected], [email protected]

Abstract - In this paper we present a global optimization algorithm based on grid search for estimating DC offset time constants from a current recorded during a short circuit on a transmission line. The grid on which the search for the optimum is performed is constructed using Hyperbolic Cross Points (HCP). We are able to estimate three time constants of the DC offset in the fault current signal, namely: the time constant of the Thevenin's equivalent impedance of the network, the Current Transformer (CT) time constant and the auxiliary CT time constant. The global search on the HCP grid is complemented with the Levenberg-Marquardt (LM) or Nelder-Mead (NM) local search algorithm to refine the result. The results presented in the paper show that the LM and NM local algorithms find a better solution and converge faster if the HCP algorithm is used to determine the starting point. The HCP search, as a preprocessing step, helps the LM and NM algorithms converge faster to a global minimum.

Keywords - Global optimization, HCP algorithm, DC offset

I. INTRODUCTION

Operation of a power system is frequently affected by faults, which give rise to disturbances in the energy supply. Power system faults are generally categorized by their severity and duration. In this paper, faults of short duration are considered; these would be classified as transient and short-duration disturbances, as detected by a Digital Fault Recorder (DFR) or a protective relay with disturbance recording capability. It is common practice to analyze waveforms originating from such disturbances using Fourier-related transforms, for example the Discrete Fourier Transform (DFT) [1]. Unfortunately, such transforms are not able to decompose recorded fault signals into realistic components. For example, the DFT technique cannot estimate the magnitude and time constant of the DC offset that is present in current signals recorded during faults on transmission lines. In practical situations the DC offset of the fault current signal is composed of three exponential components representing the network transient, the Current Transformer (CT) transient and the auxiliary transformer (at the relay input) transient [2]. To overcome the problems of the DFT approach when estimating parameters of the DC offset, in this paper we propose the use of a parametric signal processing approach. We use a signal model for the DC offset that has 6 parameters in total: 2 parameters (magnitude and time constant) for each of the 3 exponential functions that represent the transients. Since three of the parameters (the magnitudes) are linear, the optimal approach to fitting the model is the Separable Least Squares technique [3]. For the nonlinear least-squares part we suggest the use of either the Levenberg-Marquardt (LM) method [4] or the Nelder-Mead (NM) method [5], which does not require gradient information. These algorithms are iterative (they require a starting point) and local: the optimum found strongly depends on the starting point. To find the starting point for these local algorithms we propose the use of a global optimization technique [6].

II. THE PARAMETER ESTIMATION PROCEDURE

The structure of the current signal recorded during a fault on a transmission line is assumed to be

s(t) = A_n exp(-t/T_n) + A_ct exp(-t/T_ct) + A_aux exp(-t/T_aux) + A sin(ω_0 t + φ) + noise.    (1)

There are three DC components in the signal:
1) exp(-t/T_n), where T_n = L_n/R_n is the time constant of the Thevenin's equivalent impedance of the network at the measurement location. The typical value is around 30 ms [2];
2) exp(-t/T_ct), where T_ct = (L_b + L_m)/R_b is the Current Transformer (CT) time constant; R_b + jω L_b is the burden impedance and L_m is the magnetizing inductance of the CT. The typical value can be larger than 200 ms, but for air-gap CTs it can be less than 60 ms [2];
3) exp(-t/T_aux), where T_aux = L_maux/R_aux is the auxiliary CT time constant. The typical value is around 100 ms, but for CTs with an air gap it is around 10 ms [2].

The noise component contains measurement errors with a normal distribution as well as higher-frequency components.

In the preprocessing part of the parameter estimation algorithm, the higher-frequency signal components are filtered out using a lowpass linear-phase (FIR) filter with a cutoff frequency of 130 Hz. The fundamental-frequency component (ω_0 in the signal model (1)) is removed by convolving the signal with the following notch filter:

B(z) = Π_{k=1..n} (1 - exp(z_k) z^{-1}) = 1 + Σ_{k=1..n} b_k z^{-k},    (2)

where in our case n = 2 and the z_k are complex conjugate poles representing the fundamental frequency.
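
To make the model (1) and the preprocessing step (2) concrete, the following Python sketch generates a simulated fault current and removes the fundamental with a two-zero FIR notch filter. The amplitudes, phase and noise level are assumed values for illustration (the time constants are those used later in the simulation study), and realizing (2) through zeros at exp(±jω_0/f_s) is our reading of the filter, not code from the paper.

```python
import numpy as np

# Illustrative parameters; the time constants are those of the simulation study
# in Section IV, while the amplitudes, phase and noise level are assumed here.
fs = 1000.0                               # sampling frequency [Hz]
f0 = 50.0                                 # fundamental frequency [Hz]
t = np.arange(0.0, 0.4, 1.0 / fs)
Tn, Tct, Taux = 0.030, 0.300, 0.012       # DC offset time constants [s]
An, Act, Aaux, A = 5.0, 2.0, 1.0, 10.0    # assumed amplitudes

rng = np.random.default_rng(0)
# Signal model (1): three decaying exponentials + fundamental + noise.
s = (An * np.exp(-t / Tn) + Act * np.exp(-t / Tct) + Aaux * np.exp(-t / Taux)
     + A * np.sin(2.0 * np.pi * f0 * t + 0.3)
     + 0.001 * rng.standard_normal(t.size))

# Notch filter (2) with n = 2: zeros at exp(+/- j*w0/fs) remove the 50 Hz component.
# Its coefficients are real: b = [1, -2*cos(w0/fs), 1]. The filter rescales the DC
# components but does not change their time constants.
w0 = 2.0 * np.pi * f0 / fs
b = np.convolve([1.0, -np.exp(1j * w0)], [1.0, -np.exp(-1j * w0)]).real
dc_offset = np.convolve(s, b, mode="full")[: t.size]
```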


According to [3], for a set of recorded signal samples s, the Separable Nonlinear Least Squares problem is defined as one for which the model is a linear combination of nonlinear functions that can depend on multiple parameters. Entry i of the residual vector r for this problem is written as

r_i(a, x) = s(t_i) - Σ_{j=1..d} A_j φ_j(t_i, X_j),    (3)

where φ_j is the exponential function, A_j is a linear parameter (an entry of the vector a) and X_j = -1/T_j is a nonlinear parameter (an entry of the vector x). In our case d = 3, and the linear and nonlinear parameters correspond to the three DC components in the model (1). According to the Least Squares approach, the parameters are determined at the minimum of the sum of squared residuals:

||r(a, x)||₂² = ||s - Φ(x) a||₂²,    (4)

where ||·||₂² denotes the squared second (Euclidean) vector norm. The columns of the matrix Φ in (4) correspond to the nonlinear exponential functions φ_j(t_i, X_j) evaluated at all the t_i values. The vectors a and s in (4) contain the linear parameters (A_j) and the recorded samples (s(t_i)), respectively. If we knew the nonlinear parameters x, the linear parameters a could be obtained by solving the linear Least Squares problem

a = Φ(x)⁺ s,    (5)

where '+' denotes the matrix pseudo-inverse. The reduced-size nonlinear Least Squares problem is obtained by substituting (5) into the original functional (4):

min_x ||(I - Φ(x) Φ(x)⁺) s||₂².    (6)
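
Evaluating the reduced functional (6) only requires building Φ(x) for the current guess of the nonlinear parameters and projecting s onto its column space. The sketch below is one possible NumPy implementation; the function name is ours, and a least-squares solve is used instead of forming the pseudo-inverse in (5) explicitly.

```python
import numpy as np

def vp_residual_norm(x, t, s):
    """Reduced functional (6): ||(I - Phi(x) Phi(x)^+) s||^2.

    x holds the nonlinear parameters X_j = -1/T_j (here d = 3),
    t the sample times and s the preprocessed (notch-filtered) samples.
    """
    Phi = np.exp(np.outer(t, x))                   # columns phi_j(t_i, X_j), see (3)
    a, *_ = np.linalg.lstsq(Phi, s, rcond=None)    # linear step (5), via least squares
    r = s - Phi @ a                                # projection residual
    return float(r @ r)

# Example: evaluate (6) at the nominal time constants 30 ms, 300 ms and 12 ms
# on a noiseless synthetic DC offset (assumed amplitudes).
t = np.arange(0.0, 0.4, 1e-3)
s = 5.0 * np.exp(-t / 0.030) + 2.0 * np.exp(-t / 0.300) + 1.0 * np.exp(-t / 0.012)
print(vp_residual_norm(-1.0 / np.array([0.030, 0.300, 0.012]), t, s))  # ~0
```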

In this parameter estimation problem, the minimum of (6) and the corresponding nonlinear parameters x are found using either the Levenberg-Marquardt (LM) method [4] or the Nelder-Mead (NM) method [5]. Since these are local methods, reaching the global minimum is not guaranteed: the LM and NM iterative algorithms can get stuck in different local minima depending on the starting point. To overcome this problem, a global optimization method is used as a preprocessing tool to find a good starting point for the LM or NM local methods.

III. GLOBAL OPTIMIZATION ALGORITHM

The Hyperbolic Cross Points (HCP) method, implemented as the preprocessing step in the parameter estimation procedure, is an adaptive algorithm that conducts the search on a sparse grid. The main advantage of searching for the optimum on a sparse grid, compared with a full grid search or a random search, is that fewer function evaluations are required, resulting in faster operation. We now present a review of the HCP algorithm [6] and its implementation in solving (6).

The d-dimensional box (hyperrectangle) D := [-0.5, 0.5]^d ⊂ R^d specifies the domain of the d-dimensional continuous objective function f(x). This is the function that we minimize in (6), representing the nonlinear part of the parameter estimation procedure. The HCP algorithm searches for the minimum, i.e. f(x*) = min_{x∈D} f(x), in the domain D. Any point x ∈ R^d in an arbitrary domain [a, b]^d can be transformed into D using the transformation (x - a)/(b - a) - 0.5. In our case the search domain is specified according to the practical values of the time constants of the network and the two CTs [2]. The algorithm uses search points x = (x_1, ..., x_d) ∈ D with dyadic coordinates constructed through a finite binary expansion:

x_i = ± Σ_{j=1..k_i} a_j 2^{-j},  a_j ∈ {0, 1}.    (7)

For x_i ≠ 0, a_{k_i} must be equal to 1 in (7). Under this circumstance, level(x_i) := k_i is called the level of x_i, and the level of the point 0 is defined as 0. The level of a point x is defined as

level(x) := Σ_{i=1..d} level(x_i).    (8)
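
In practice the level of a dyadic coordinate in (7) is simply the exponent of the power-of-two denominator of x_i written in lowest terms, and (8) sums these over the coordinates. A minimal Python sketch of this bookkeeping (the helper names are ours, not from [6]):

```python
from fractions import Fraction

def coordinate_level(xi):
    """Level of a dyadic coordinate as in (7): 0 for xi = 0, otherwise the index
    k_i of the last nonzero bit, i.e. log2 of the denominator of xi in lowest terms."""
    if xi == 0:
        return 0
    return Fraction(xi).denominator.bit_length() - 1   # denominator is a power of two

def point_level(x):
    """Level of a point as in (8): the sum of its coordinate levels."""
    return sum(coordinate_level(xi) for xi in x)

print(point_level((0.25, -0.5, 0.0)))   # 2 + 1 + 0 = 3
```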

In the basic, non-adaptive implementation of the HCP algorithm, the objective function f(x) is evaluated at all hyperbolic cross points x whose level does not exceed a prescribed maximum. To explain the adaptive version of the HCP algorithm, three additional definitions are needed.

Firstly, the coordinates of the neighbors of a point x are defined: a neighbor point y ∈ D of degree m differs from x in exactly one coordinate,

y_i = x_i ± 2^{-level(x_i) - m},    (9)

with all other coordinates equal to the coordinates of x. For any given degree m, an HCP has up to 2d neighbors, except for points on the border of the box D.

Next, we define the rank of a point x. For a set of points X = {x_1, x_2, ..., x_n} and the corresponding set of function evaluations Y = {f(x_1), f(x_2), ..., f(x_n)}, the rank of a point x ∈ X is defined as

rank(x) = #{ y ∈ Y | y ≤ f(x) }.    (10)

That is, the rank of x is the number ('#') of points in the set X, including x itself, whose function values are smaller than or equal to the function value of x. Clearly, if rank(x) = 1, then f(x) is the minimum of the set Y.

Lastly, the quality index g(x) is defined as

g(x) = (level(x) + degree(x))^α · rank(x)^(1-α),  α ∈ [0, 1],    (11)

where α is a control parameter [6]. The main tuning parameters of the adaptive HCP algorithm are the maximum level that can be reached, which serves as the stopping criterion, and the control parameter α. The HCP algorithm has the following steps [6]:


1) Start from the origin x̃ = 0, with the lowest level = 0 and degree = 0, and evaluate the function f(x̃). The rank of x̃ is 1, and the point set and the function evaluation set are X = {x̃} and Y = {f(x̃)}, respectively.
2) Set degree(x̃) = degree(x̃) + 1.
3) Calculate the coordinates of all neighbors X_n of the point x̃ using (9) with m = degree(x̃).
4) For all neighbor points x ∈ X_n calculate the new levels as level(x) = level(x̃) + degree(x̃).

5) Make additional function evaluations f(X_n) for all points in X_n and extend the point and function evaluation sets: X = X ∪ X_n and Y = Y ∪ {f(X_n)}. Calculate new ranks for all points in the extended set X using formula (10).
6) Calculate the quality index (11) for all points in X and find the current minimum point x̃ = arg min_{x∈X} g(x).
7) If the maximum level has not been reached, go to 2). If the maximum level has been reached, the search stops and the solution is x̃ = arg min_{x∈X} f(x).
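
A compact Python sketch of steps 1) to 7) is given below. It tracks level, degree and function value per point, computes ranks and the quality index as in (10) and (11), and stops once the selected point can no longer spawn neighbors within the maximum level. The data structures, tie-breaking and the exact form of the stopping test are our simplifications of [6], not the authors' implementation.

```python
import numpy as np
from fractions import Fraction

def coord_level(xi):
    # Level of a dyadic coordinate, see (7): 0 for 0, else log2 of its denominator.
    return 0 if xi == 0 else Fraction(xi).denominator.bit_length() - 1

def hcp_minimize(f, d=3, max_level=6, alpha=0.5):
    """Adaptive HCP search on D = [-0.5, 0.5]^d following steps 1)-7).

    f maps a length-d NumPy array to a scalar. Stored per point: [level, degree, f].
    The stopping test and tie-breaking are simplified relative to [6].
    """
    origin = (0.0,) * d
    pts = {origin: [0, 0, float(f(np.array(origin)))]}          # step 1

    def g(p):                                                    # quality index (11)
        level, degree, fp = pts[p]
        rank = sum(1 for v in pts.values() if v[2] <= fp)        # rank (10)
        return (level + degree) ** alpha * rank ** (1.0 - alpha)

    cur = origin
    while pts[cur][0] + pts[cur][1] + 1 <= max_level:            # step 7: level cap
        level, degree, _ = pts[cur]
        degree += 1                                              # step 2
        pts[cur][1] = degree
        new_level = level + degree                               # step 4
        for i in range(d):                                       # step 3: neighbors (9)
            h = 2.0 ** (-coord_level(cur[i]) - degree)
            for sign in (1.0, -1.0):
                y = list(cur)
                y[i] = cur[i] + sign * h
                y = tuple(y)
                if abs(y[i]) > 0.5 or y in pts:                  # stay inside D
                    continue
                pts[y] = [new_level, 0, float(f(np.array(y)))]   # step 5
        cur = min(pts, key=g)                                    # step 6: argmin of g
    best = min(pts, key=lambda p: pts[p][2])
    return np.array(best), len(pts)                              # best point, #evaluations

# Example on a smooth test function (not the fault-current objective):
x_best, n_eval = hcp_minimize(lambda x: float(np.sum((x - 0.1) ** 2)))
print(x_best, n_eval)
```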

IV. SIMULATION RESULTS AND DISCUSSION

In our study the fundamental frequency is 50 Hz (sinusoidal signal) and the sampling frequency is 1000 Hz. It is assumed that the signal is polluted with random noise; in the simulation study we vary the noise standard deviation in the range 0 to 0.01 to determine its effect on the parameter estimation. The sinusoidal component of the signal is removed using the notch filter (2). The filter does not affect the DC time constants that are to be estimated. The simulated current signal and the DC offset (the output of the notch filter) are shown in Fig. 1. The time constants of the DC components of the simulated signal are T_n = 30 ms, T_ct = 300 ms and T_aux = 12 ms. The HCP algorithm is used as a global search engine to estimate approximate values for T_n, T_ct and T_aux. The local search method (the LM or NM algorithm) then uses these values as a starting point to refine the search and find the final estimates. The results with and without the HCP pre-processing phase are presented in Table I. In this test case no noise was added.

[Figure 1: Simulated signal. Traces: simulated signal and simulated signal after the notch filter; amplitude (V) versus time.]

The number of iterations, the number of function evaluations and the execution times of the LM and NM algorithms are reduced by more than half when the global search pre-processing is used. The sum of squared residuals at the solution,

Residuals = Σ_{i=1..n} [f(t_i) - s(t_i)]²,    (12)

where f(t_i) and s(t_i) are the simulated and the real signal respectively, is also improved by the HCP global search. Pre-processing with the HCP algorithm clearly helps the local search methods to find a better minimum. The function evaluation points used by the HCP algorithm (305 points in total) are plotted in Fig. 2.

Table I: Impact of the HCP algorithm on the parameter estimation

                                        Without the HCP global search       With the HCP global search
                                        LM algorithm     NM algorithm       LM algorithm     NM algorithm
Number of iterations                    38               251                8                148
Number of function evaluations          302              443                59               268
Sum of squared residuals at solution    5.93×10^-5       9.75×10^-19        1.07×10^-8       2.88×10^-19
Execution time                          0.561            1.882              0.078            0.39
T_n, T_ct, T_aux [s]                    0.0294172        0.03               0.0300078        0.03
                                        0.232755         0.3                0.301076         0.3
                                        0.011083         0.012              0.0120114        0.012

[Figure 2: Search grid points using the HCP algorithm (axes: T_n, T_ct, T_aux).]
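
For the local refinement stage, the sketch below uses SciPy's Levenberg-Marquardt (least_squares with method='lm') and Nelder-Mead (minimize) on the reduced functional, once from a deliberately poor starting guess and once from a better starting point standing in for an HCP result. It mirrors the structure of the comparison in Table I, but the signal, amplitudes and starting points are assumed values, so the numbers it prints are not the paper's results.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

def vp_residual_vec(x, t, s):
    """Projection residual vector of (6) for nonlinear parameters x = -1/T."""
    Phi = np.exp(np.outer(t, x))
    a, *_ = np.linalg.lstsq(Phi, s, rcond=None)
    return s - Phi @ a

# Synthetic DC offset with the paper's time constants (amplitudes assumed).
rng = np.random.default_rng(0)
t = np.arange(0.0, 0.4, 1e-3)
s = (5.0 * np.exp(-t / 0.030) + 2.0 * np.exp(-t / 0.300)
     + 1.0 * np.exp(-t / 0.012) + 0.001 * rng.standard_normal(t.size))

def refine(x0):
    lm = least_squares(vp_residual_vec, x0, args=(t, s), method="lm")
    nm = minimize(lambda x: float(np.sum(vp_residual_vec(x, t, s) ** 2)), x0,
                  method="Nelder-Mead")
    return lm, nm

naive_start = -1.0 / np.array([0.050, 0.150, 0.005])   # a poor guess
hcp_start = -1.0 / np.array([0.031, 0.280, 0.013])     # stand-in for an HCP result
for name, x0 in [("naive start", naive_start), ("HCP-style start", hcp_start)]:
    lm, nm = refine(x0)
    print(name, "LM T:", np.sort(-1.0 / lm.x), "NM T:", np.sort(-1.0 / nm.x))
```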

In a real power system, signals are always corrupted by noise, so normally distributed random noise is added to test the efficiency of the HCP algorithm. The simulation results are tabulated in Table II. They show that the HCP algorithm reduces the computation time and improves the accuracy of the final estimates.

Table II: Impact of the HCP algorithm on the parameter estimation (noise standard deviation = 0.001)

                                        Without the HCP global search       With the HCP global search
                                        LM algorithm     NM algorithm       LM algorithm     NM algorithm
Number of iterations                    29               275                11               141
Number of function evaluations          302              495                86               268
Sum of squared residuals at solution    0.113362         0.10032            0.0900938        0.0900938
Execution time                          1.071            2.614              0.1              0.55
T_n, T_ct, T_aux [s]                    0.0245558        0.029862           0.0302872        0.0302861
                                        0.0707688        0.25717            0.348399         0.348277
                                        0.00131171       0.0120781          0.0125884        0.0125867

The sums of squared residuals for different noise levels are shown in Figures 3 and 4. As the noise level increases, the residuals of LM and NM grow rapidly; however, as can be seen in the figures, the HCP algorithm improves the accuracy of the results.

[Figure 3: Residuals for different noise levels using the LM algorithm, with and without HCP as global search.]

[Figure 4: Residuals for different noise levels using the NM algorithm, with and without HCP as global search.]
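
The noise study summarized in Figures 3 and 4 can be reproduced in outline: for each noise standard deviation in the range 0 to 0.01, generate a noisy DC offset, refine the time constants from a fixed poor start and from a better (HCP-style) start, and record the sum of squared residuals (12) at the solution. The sketch below follows that recipe with assumed amplitudes and starting points; it is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def fit_residual(x0, t, s):
    """Fit x = -1/T with Nelder-Mead on the reduced functional (6) and
    return the sum of squared residuals (12) at the solution."""
    def cost(x):
        Phi = np.exp(np.outer(t, x))
        a, *_ = np.linalg.lstsq(Phi, s, rcond=None)
        return float(np.sum((s - Phi @ a) ** 2))
    return cost(minimize(cost, x0, method="Nelder-Mead").x)

rng = np.random.default_rng(1)
t = np.arange(0.0, 0.4, 1e-3)
clean = 5.0 * np.exp(-t / 0.030) + 2.0 * np.exp(-t / 0.300) + 1.0 * np.exp(-t / 0.012)
naive = -1.0 / np.array([0.050, 0.150, 0.005])   # fixed poor start
good = -1.0 / np.array([0.031, 0.280, 0.013])    # stand-in for an HCP start

for sigma in np.arange(0.0, 0.011, 0.001):       # noise sweep as in Figs. 3 and 4
    s = clean + sigma * rng.standard_normal(t.size)
    print(f"sigma={sigma:.3f}  poor start: {fit_residual(naive, t, s):.3g}"
          f"  HCP-style start: {fit_residual(good, t, s):.3g}")
```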

V. CONCLUSION

This paper demonstrates that the use of a global optimization technique can improve the accuracy of DC offset time constant estimation. Such an algorithm can be used in practice to find good starting points for local search algorithms such as Levenberg-Marquardt and Nelder-Mead. As shown in the paper, by adopting the HCP global optimization algorithm to find starting points, the execution time of the Levenberg-Marquardt and Nelder-Mead algorithms can be reduced by more than half. Furthermore, the test results show that the global search preprocessing improves the estimation accuracy.

REFERENCES

[1] IEEE Tutorial Course, Advancements in Microprocessor Based Protection and Communication, 1997.
[2] G. Ziegler, Numerical Distance Protection: Principles and Applications, Siemens and Publicis MCD Verlag, 1999.
[3] G.H. Golub and V. Pereyra, "The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate", SIAM J. Numer. Anal., vol. 10, 1973, pp. 413-432.
[4] P.E. Gill, W. Murray, and M.H. Wright, Practical Optimization, London, Academic Press, 1981.
[5] J.A. Nelder and R. Mead, "A Simplex Method for Function Minimization", Computer Journal, vol. 7, no. 4, 1965, pp. 308-313.
[6] E. Novak and K. Ritter, "Global Optimization Using Hyperbolic Cross Points", in State of the Art in Global Optimization (C.A. Floudas and P.M. Pardalos, eds.), Kluwer, Dordrecht, 1996, pp. 19-33.
