Interconnect Macromodelling and Approximation of Matrix Exponent

Analog Integrated Circuits and Signal Processing, 40, 5–19, 2004
© 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.

Interconnect Macromodelling and Approximation of Matrix Exponent

V.G. KURBATOV AND M.N. ORESHINA
Lipetsk State Technical University, Russia
E-mail: [email protected]; [email protected]

Received October 3, 2001; Revised March 14, 2003; Accepted May 15, 2003

Abstract. The simplest interconnection model for an integrated circuit is a discrete RC-circuit governed by a system $R\dot x + Dx = \dot u$ of differential equations with positive definite matrices $R$ and $D$. Such a system usually has a high dimension, so it is natural to solve it approximately. Traditionally it is investigated by means of Krylov subspace methods. In this article a closely related approach is discussed, which admits an effective estimation of accuracy. It is based on the approximation of the main factor $e^{At}$ (here $A = -R^{-1}D$) in the impulse response $H(t) = e^{At}R^{-1}$ by a polynomial $r(A,t) = a_0(t)\mathbf{1} + a_1(t)A + \dots + a_n(t)A^n$ with coefficients $a_k(t)$ depending on $t$, or by a rational function of a similar form.

Key Words: interconnect macromodeling, best approximation, impulse response, matrix exponent, Krylov subspaces, order reduction

Introduction

At the current stage of evolution of integrated circuits, the task of accurate and efficient prediction of interconnection effects is vitally important. Interconnection models constructed from physical considerations usually consist of many (hundreds and thousands of) resistors and capacitors, which results in a high order of the corresponding system of differential equations. Solving such a system by direct numerical methods is usually impractical and inefficient, because the design of integrated circuits requires taking an enormous number of interconnections into account. On the other hand, the numerical integration of such a system is usually not very convenient for design purposes. It is much more useful to build a simplified model of the analysed interconnections, which can be combined with linear and non-linear models of integrated circuit components. Usually a simplified model is constructed by replacing the original system of differential equations with a system of an essentially lower dimension. Therefore it is customary to call such approaches reduced order methods.

The most popular approach to building reduced order models is based on rational interpolation [1] of the transfer function. There are many different variants of this approach: AWE [2], PVL [3], PRIMA [4, 5], etc. The high popularity of these techniques is explained by several factors. First, they exploit the rather obvious idea of time domain moment matching, which is equivalent to matching Taylor coefficients in the frequency domain. This approach is usually justified by the reasoning that if our function matches the first several moments of the original system response, then it approximates the response. In practice this usually works, but sometimes such an approximation can be rather inaccurate (e.g., it can lead to an unstable model) even if it matches a large number of moments. Second, time domain moments can be computed efficiently from the original circuit or from the reduced system of differential equations; this computation does not require solving the original system of differential equations. Third, the most sophisticated and successful variants of this approach use such well developed and very popular mathematical ideas as Krylov subspace methods and the Lanczos and Arnoldi algorithms; see, e.g., [6–10]. Unfortunately, in the theory of reduced order methods only estimates in the frequency domain are known (see, e.g., [5, 9, 10]); there are no estimates in the time domain.

In this article we consider an operator approach to the construction of an approximate model for the impulse response of RC-interconnections.


Its main idea consists in the approximation of the matrix exponent $e^{At}$ by polynomials and rational functions of $A$. This idea is natural for operator theory [11, 12]. It has also demonstrated its efficiency in numerical methods; a discussion of such approximations from the computational point of view can be found, e.g., in [13, ch. 11]. The amount of calculation needed for the implementation of the method under discussion is comparable with that for reduced order methods, but it does not require the calculation of eigenvalues and eigenvectors, even for simplified matrices. The main advantage of this method is that it admits explicit estimates of precision (see Section 3). Hence it is possible to control the precision of the approximation; in particular, one can also control the stability of an approximation. Besides, the method covers a wider class of approximations than usual reduced order methods.

In Section 1 we recall the definition of a holomorphic function of a matrix and represent the impulse response $H$ of an RC-circuit in the form $H(t) = e^{At}R^{-1}$. In Section 2 we explain that usual reduced order methods can be interpreted as a special case of the method under discussion. In Section 3 we present the main idea: we approximate the factor $e^{At}$ in the impulse response by a holomorphic function $r(A,t)$ of $A$. The simplest example of such a function is a polynomial $r(A,t) = a_0(t)\mathbf{1} + a_1(t)A + \dots + a_n(t)A^n$ with coefficients depending on $t$. The main results are Theorems 6 and 9, where we give estimates of the error $r(A,t)R^{-1} - H(t)$ in terms of the maximum of $|r(\lambda,t) - e^{\lambda t}|$, where $\lambda$ runs over the spectrum $\sigma(A)$ of the matrix $A$. The comparison of Theorems 6 and 9 shows that the choice of an approximation $r$ depends on the kind of precision one wants. We suppose that we do not know $\sigma(A)$ exactly, so in Section 4 we briefly discuss ways of estimating it. In Section 5 we discuss the case where a polynomial is taken for the approximation $r$. We consider three ways of constructing the polynomial $r$: the Taylor polynomial, the interpolation polynomial, and the polynomial of best approximation. Unfortunately, a polynomial approximation to $e^{At}$ is good only on a small segment of the form $[0, t_0]$. In Section 6 we take rational functions for the approximation $r$. We discuss Padé approximations and best rational approximations. It turns out that the usage of rational approximations in Theorem 6 does not necessarily require an estimate of the spectrum of $A$; moreover, rational approximations give very high precision. Unfortunately, the direct calculation of the Padé approximations and the best rational approximations requires inverting the matrix represented by the denominator at each point $t$ separately. Freezing the denominator allows one to avoid this problem and to construct an approximation to the impulse response on a segment $[t_*, t^*]$, which can be taken in any part of $(0, +\infty)$; but such a segment never contains zero. So, for an approximation of an impulse response near zero, it is better to use polynomials. In Section 7 we present a numerical example.

1. Representation of Impulse Responses by Means of Matrix Exponent

In this section we introduce some notation and give the definitions of impulse responses. It is of decisive importance that the matrix impulse response is a self-conjugate matrix, though its individual factors may not be.

We denote by $\mathbb{C}^N$ the space of all complex $N$-vectors $\varphi = (\varphi_1, \dots, \varphi_N)$. We consider complex (and, in particular, real) $N \times N$ matrices $A = \{A_{ij}\}$. We denote by $\mathbf{1}$ the identity matrix. Each $N \times N$ matrix $A$ induces an operator $(A\varphi)_m = \sum_{k=1}^{N} A_{mk}\varphi_k$ acting from $\mathbb{C}^N$ to $\mathbb{C}^N$. We consider on $\mathbb{C}^N$ the standard inner product

$$\langle \varphi, \psi \rangle = \varphi_1\bar\psi_1 + \varphi_2\bar\psi_2 + \dots + \varphi_N\bar\psi_N,$$

where the bars mean complex conjugation. It induces the standard (Euclidean) norm on $\mathbb{C}^N$ ($\mathbb{R}^N$) and the standard norm of matrices:

$$\|\psi\| = \sqrt{\langle \psi, \psi \rangle}, \qquad \|A\| = \sup\{\|A\psi\| : \|\psi\| \le 1\}.$$

The (Hermitian) conjugate of a matrix $A = \{A_{ij}\}$ is the matrix $A^* = \{\bar A_{ji}\}$. Clearly, $\langle A\varphi, \psi\rangle = \langle\varphi, A^*\psi\rangle$ and $A^{**} = A$. A matrix $A$ is called normal if $A^*A = AA^*$. A matrix $A$ is called self-conjugate if $A^* = A$. Clearly, any self-conjugate matrix is normal. A self-conjugate matrix $A$ is called non-negative (positive, non-positive) definite if $\langle A\varphi, \varphi\rangle \ge 0$ (respectively, $\langle A\varphi, \varphi\rangle > 0$, $\langle A\varphi, \varphi\rangle \le 0$) for all non-zero $\varphi \in \mathbb{C}^N$. If $A\varphi = \lambda\varphi$ for some $\lambda \in \mathbb{C}$ and non-zero $\varphi \in \mathbb{C}^N$, the number $\lambda$ is called an eigenvalue and $\varphi$ is called an eigenvector. The set $\sigma(A) \subseteq \mathbb{C}$ of all eigenvalues is called the spectrum of the matrix $A$. The spectral radius of $A$ is the number $\varrho(A) = \max_{\lambda\in\sigma(A)} |\lambda|$.

Let $U$ be a neighbourhood of the spectrum of a matrix (operator) $A$ and $f: U \to \mathbb{C}$ be a holomorphic function. The function $f$ of the matrix $A$ is the matrix $f(A)$ defined (see for details, e.g., [11, 12]) by the rule

$$f(A) = \frac{1}{2\pi i}\oint_\Gamma f(\lambda)\,(\lambda\mathbf{1} - A)^{-1}\,d\lambda,$$

where $\Gamma$ is a contour in $U$ that surrounds $\sigma(A)$.

Example 1. The following examples of holomorphic functions play an important role in our exposition.

(a) A polynomial of degree $n$ with coefficients depending on a parameter $t$ is the function $p(\lambda, t) = a_0(t) + a_1(t)\lambda + \dots + a_n(t)\lambda^n$. In this case

$$p(A, t) = a_0(t)\mathbf{1} + a_1(t)A + \dots + a_n(t)A^n.$$

(b) A rational function of degree $(k, m)$ has the form

$$r(\lambda) = \frac{a_0 + a_1\lambda + a_2\lambda^2 + \dots + a_k\lambda^k}{1 + b_1\lambda + b_2\lambda^2 + \dots + b_m\lambda^m}.$$

In this case

$$r(A) = (a_0\mathbf{1} + a_1 A + \dots + a_k A^k)(\mathbf{1} + b_1 A + \dots + b_m A^m)^{-1} = (\mathbf{1} + b_1 A + \dots + b_m A^m)^{-1}(a_0\mathbf{1} + a_1 A + \dots + a_k A^k). \qquad (1)$$

(c) The exponential function $f(\lambda, t) = e^{\lambda t}$ of a matrix $A$ is defined as

$$e^{At} = \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k.$$

Clearly,

$$\frac{d}{dt}\,(e^{At}) = Ae^{At} = e^{At}A.$$

It is well known (see, e.g., [14]) that the mesh equations of an RC-circuit can be represented in the form

$$R\dot{x} + Dx = \dot{u}, \qquad (2)$$

where $u$ is the vector of voltages and $x$ is the vector of currents. It is important that (see, e.g., [14]) the matrices $R$ and $D$ are self-conjugate and non-negative definite. In order to make the exposition of the main ideas clearer, we assume that the matrix $R$ is positive definite and, hence, invertible. Then Eq. (2) can be rewritten as

$$\dot{x} = Ax + R^{-1}\dot{u}, \quad \text{where } A = -R^{-1}D. \qquad (3)$$

Usually only some components of the input $u$ and the output $x$ are of interest. In such a case it is reasonable to consider the linear dynamical system

$$R\dot{x} + Dx = b\,u(t), \qquad y(t) = \langle x(t), l\rangle, \qquad (4)$$

where $b, l \in \mathbb{C}^N$ are some vectors. We call the matrix impulse response of the system (4) (and of Eqs. (2) and (3)) the $N \times N$-matrix solution $t \mapsto H(t)$ of the equation

$$\dot H(t) = AH(t) + R^{-1}\dot\vartheta(t), \quad t \in \mathbb{R},$$

that is equal to zero on $(-\infty, 0)$. Here

$$\vartheta(t) = \begin{cases} 1 & \text{for } t > 0, \\ 0 & \text{for } t < 0 \end{cases}$$

is the Heaviside function. We recall that $\dot\vartheta = \delta$, where $\delta$ is the Dirac delta-function. Thus the equation can be rewritten in the form

$$\dot H(t) = AH(t) + R^{-1}\delta(t), \quad t \in \mathbb{R}. \qquad (5)$$

It is straightforward to verify that

$$H(t) = \begin{cases} e^{At}R^{-1} & \text{for } t > 0, \\ 0 & \text{for } t < 0. \end{cases} \qquad (6)$$

Clearly, $H(t)$ is self-conjugate for all $t \in \mathbb{R}$.

We call the (scalar) impulse response of the system (4) the solution $h = y$ of Eqs. (4) that corresponds to the input $u = \delta$ and is equal to zero on $(-\infty, 0)$. Clearly,

$$h(t) = h_{lb}(t) = \langle H(t)b, l\rangle. \qquad (7)$$

We also consider the auxiliary 'diagonal' impulse responses

$$h_{ll}(t) = \langle H(t)l, l\rangle \quad \text{and} \quad h_{bb}(t) = \langle H(t)b, b\rangle. \qquad (8)$$
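The representation (6) is easy to experiment with numerically. The following sketch (ours, not part of the original paper) builds random stand-in matrices $R > 0$ and $D \ge 0$, forms $H(t) = e^{At}R^{-1}$ with a dense matrix exponential, and checks that $H(t)$ is indeed self-conjugate even though its factors $e^{At}$ and $R^{-1}$ are not symmetric products on their own:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 6

# Stand-ins for the mesh matrices of an RC-circuit: any self-conjugate
# R > 0 and D >= 0 will do for this check.
M = rng.standard_normal((N, N))
R = M @ M.T + N * np.eye(N)            # positive definite
K = rng.standard_normal((N, N))
D = K @ K.T                            # non-negative definite

A = -np.linalg.solve(R, D)             # A = -R^{-1} D, cf. (3)
Rinv = np.linalg.inv(R)

def H(t):
    """Matrix impulse response H(t) = e^{At} R^{-1} for t > 0, cf. (6)."""
    return expm(A * t) @ Rinv

b = rng.standard_normal(N)
l = rng.standard_normal(N)

t = 0.5
print(np.allclose(H(t), H(t).T))       # H(t) is self-conjugate
print(l @ H(t) @ b)                    # scalar impulse response h_lb(t), cf. (7)
```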

2. Krylov-Based Order Reduction Models

In this section we explain how the topic of the article arises within the framework of reduced order methods [4, 6, 8–10]. We show that the usual reductions to Krylov subspaces can be interpreted as special approximations of the matrix exponent. For brevity, we restrict ourselves to simple multipoint rational interpolations.

Consider a linear dynamical system (4). In this section we do not suppose that $R$ and $D$ are self-conjugate, but for simplicity we assume that $R$ is invertible. Let $n < N$. Let $W$ and $V$ be $N \times n$ matrices with linearly independent columns. We consider the reduced order system

$$\tilde R\dot{\tilde x} + \tilde D\tilde x = \tilde b\,u(t), \qquad \tilde y(t) = \langle \tilde x(t), \tilde l\rangle, \qquad (9)$$

where

$$\tilde R = W^* R V, \quad \tilde D = W^* D V, \quad \tilde b = W^* b, \quad \tilde l = V^* l.$$

We assume that the matrix $\tilde R$ is invertible, and we set $\tilde A = -\tilde R^{-1}\tilde D$.

The following proposition can be found in almost any book on operator theory (see, e.g., [11, 12]). Book [12] contains the most detailed exposition for the finite-dimensional case.

Proposition 1. Let functions $f$ and $g$ be holomorphic in a neighbourhood of the spectrum of $\tilde A$. Let the spectrum of the matrix $\tilde A$ be simple, i.e., consist of $n$ distinct points $\mu_1, \dots, \mu_n$. Then $f(\tilde A) = g(\tilde A)$ if and only if $f(\mu_i) = g(\mu_i)$, $i = 1, \dots, n$. If the spectrum of $\tilde A$ is multiple, then $f$ and $g$ must coincide at the $\mu_i$ together with the relevant derivatives.

We fix $n$ points $s_1, s_2, \dots, s_n \in \mathbb{C}$ such that the inverses $(s_i R + D)^{-1}$ and $(s_i \tilde R + \tilde D)^{-1}$, $i = 1, \dots, n$, exist. Clearly, $(s_i R + D)^{-1} = (s_i\mathbf{1} - A)^{-1}R^{-1}$ and $(s_i \tilde R + \tilde D)^{-1} = (s_i\mathbf{1} - \tilde A)^{-1}\tilde R^{-1}$. For each $t > 0$, we choose $c_i(t)$ so that the function

$$r(\lambda, t) = \sum_{i=1}^{n} \frac{c_i(t)}{s_i - \lambda} \qquad (10)$$

coincides (for the existence of such an $r$ see, e.g., [1, 15]) with the function $\lambda \mapsto e^{\lambda t}$ at the eigenvalues $\lambda = \mu_i$, $i = 1, \dots, n$, of $\tilde A$ (together with the relevant derivatives if the spectrum of $\tilde A$ is multiple). Then, by (6) and Proposition 1, the value $\tilde H(t)$ of the matrix impulse response $\tilde H$ (see (6)) of the system (9) at the point $t$ coincides with $r(\tilde A, t)\tilde R^{-1}$. Consequently, $\tilde H(t)$ can be represented as

$$\tilde H(t) = \sum_{i=1}^{n} c_i(t)(s_i\mathbf{1} - \tilde A)^{-1}\tilde R^{-1} = \sum_{i=1}^{n} c_i(t)(s_i\tilde R + \tilde D)^{-1}.$$

Hence the impulse response $\tilde h$ of system (9) can be represented in the form

$$\tilde h(t) = \Big\langle \sum_{i=1}^{n} c_i(t)(s_i\tilde R + \tilde D)^{-1}\tilde b,\ \tilde l\Big\rangle. \qquad (11)$$

On the other hand, since $\tilde H(t) = e^{\tilde A t}\tilde R^{-1}$, the function $\tilde h$ can also be represented as

$$\tilde h(t) = \sum_{k=1}^{n} d_k e^{\mu_k t} \qquad (12)$$

with some $d_k$; but we do not need this representation now.

In a similar way one can apply the function $r$ to the matrix $A$ and obtain the matrix

$$G(t) = r(A, t)R^{-1} = \sum_{i=1}^{n} c_i(t)(s_i\mathbf{1} - A)^{-1}R^{-1} = \sum_{i=1}^{n} c_i(t)(s_i R + D)^{-1}.$$

This matrix can be regarded as an approximation to the matrix impulse response $H(t)$ of system (4), because $r$ is an approximation to the function $\lambda \mapsto e^{\lambda t}$. Hence the function

$$g(t) = \Big\langle \sum_{i=1}^{n} c_i(t)(s_i\mathbf{1} - A)^{-1}R^{-1}b,\ l\Big\rangle = \Big\langle \sum_{i=1}^{n} c_i(t)(s_i R + D)^{-1}b,\ l\Big\rangle \qquad (13)$$

is an approximation to the impulse response (7) of the system (4).

Below we assume that $V$ and $W$ are obtained either from Lanczos's process [3, 8] or from Arnoldi's process [4, 8]. In both cases the linear span of the columns of the matrix $V$ is defined as the linear span of the vectors $(s_i R + D)^{-1}b$, $i = 1, 2, \dots, n$.


Theorem 2. Let the linear span of the columns of the matrix $V$ coincide with the linear span of the vectors $(s_i R + D)^{-1}b$, $i = 1, 2, \dots, n$. Then (11) and (13) are equal functions.

Proof: We set

$$b_i = (s_i R + D)^{-1}b, \quad \text{or equivalently} \quad b = (s_i R + D)b_i.$$

Since $b_i$ belongs to the linear span of the columns of the matrix $V$, it can be represented as $b_i = V\tilde b_i$ for some $\tilde b_i \in \mathbb{C}^n$. Hence $b = (s_i RV + DV)\tilde b_i$, which implies

$$\tilde b = W^* b = (s_i W^* RV + W^* DV)\tilde b_i = (s_i\tilde R + \tilde D)\tilde b_i.$$

Therefore

$$\tilde b_i = (s_i\tilde R + \tilde D)^{-1}\tilde b.$$

Finally we arrive at

$$\langle (s_i R + D)^{-1}b, l\rangle = \langle b_i, l\rangle = \langle V\tilde b_i, l\rangle = \langle \tilde b_i, V^* l\rangle = \langle \tilde b_i, \tilde l\rangle = \langle (s_i\tilde R + \tilde D)^{-1}\tilde b, \tilde l\rangle$$

for all $i = 1, 2, \dots, n$, which implies the equality of the functions (11) and (13).

So, Theorem 2 states that the reduced order methods under consideration can be interpreted as an interpolation of the matrix exponent $e^{At}$ by holomorphic functions $r$ of the matrix $A$. In usual implementations of reduced order methods one does not calculate the function $r$ explicitly, and representation (13) is not considered; instead, the points $s_i$ are exploited in conditions of the kind $\tilde\eta(s_i) = \eta(s_i)$, where $\tilde\eta$ and $\eta$ are the Laplace transforms of $\tilde h$ and $h$ respectively (usually it is assumed that $\tilde\eta$ and $\eta$ coincide at $s_i$ together with some derivatives); and the approximation $\tilde\eta$ is sought in the class of rational functions, cf. (12). Nevertheless, the function $r$ can easily be restored from the conditions: $r$ has the form (10) and $r$ interpolates $\lambda \mapsto e^{\lambda t}$ at the spectrum of the matrix $\tilde A$. As soon as the function $r$ is restored, one can use Theorems 6 and 9 (see below) to estimate the precision of a Krylov-based method.
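The algebra of this proof is easy to verify numerically. The sketch below is our illustration (the choice $W = V$ is a simplifying assumption, as in Arnoldi-type projections): it builds $V$ directly from the vectors $(s_iR + D)^{-1}b$ and confirms that the corresponding terms of (11) and (13) coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 12, 4

R = rng.standard_normal((N, N)) + N * np.eye(N)   # invertible, not necessarily self-conjugate
D = rng.standard_normal((N, N))
b, l = rng.standard_normal(N), rng.standard_normal(N)
s = 1.0 + np.arange(n)                            # interpolation points s_1, ..., s_n

# Columns of V span the vectors (s_i R + D)^{-1} b, as in Theorem 2.
V = np.column_stack([np.linalg.solve(si * R + D, b) for si in s])
W = V                                             # simplifying assumption W = V

R_t, D_t = W.T @ R @ V, W.T @ D @ V               # reduced matrices (real case: * becomes T)
b_t, l_t = W.T @ b, V.T @ l

for si in s:
    full = l @ np.linalg.solve(si * R + D, b)         # term of (13)
    red = l_t @ np.linalg.solve(si * R_t + D_t, b_t)  # term of (11)
    print(np.isclose(full, red))                      # True: the terms coincide
```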


It is important that approximation (13) can be calculated without constructing the reduced order system (9), provided the function $r$ is given. Thus the function $r$ can be regarded as an independent object. In this article we consider different holomorphic functions $r$ that approximate the function $e^{\lambda t}$. Any such function induces an approximation (13) to the exact impulse response (7) of the system (4). We discuss how to estimate the precision of such an approximation and how one can choose the function $r$. We notice two features. The calculation by formula (13) requires the calculation of $(s_i R + D)^{-1}b$, $i = 1, \dots, n$, which it has in common with the construction of a Krylov subspace. On the other hand, the majority of functions $r$ are not generated by any reduced order system (9); indirectly this is evident from the fact that the best approximations $g$ are not linear combinations of exponentials.

3. Estimates of Approximations

In this article we discuss an approximate operator method of calculating impulse responses. Assume that the order $N$ of the matrices $R$ and $D$ is high, so that a direct calculation of the impulse responses $H$ and $h_{lb}$ is complicated or even impossible. Hence it is desirable to find a good approximation at least to $h_{lb}$. Since the main factor $e^{At}$ in the matrix impulse response (6) can be interpreted as the function $f(\lambda, t) = e^{\lambda t}$ of $A$, we suggest replacing $f(\lambda, t)$ by a close function $r(\lambda, t)$ that is more suitable from the computational viewpoint. Then we replace $e^{At}$ by $r(A, t)$ and, respectively, we take the matrix $G(t) = r(A, t)R^{-1}$ as an approximation to (6) and the function

$$g(t) = g_{lb}(t) = \langle r(A, t)R^{-1}b, l\rangle \qquad (14)$$

as an approximation to (7). In Sections 5 and 6 we discuss examples of such functions $r$. But before that, in this section we derive some estimates of the precision of the approximation of $h(t)$ by $g(t)$. This permits one to evaluate the capabilities of this approach and to compare different methods of choosing $r$. For instance (compare Theorems 6 and 9), we shall see that the choice of the approximation $r$ depends on the kind of precision one wants. In this section we essentially use the assumption that we deal with RC-circuits.


Let $R$ be a positive definite matrix (we mean by $R$ the coefficient $R$ in (2)). Together with the standard inner product we consider the $R$-inner product defined by the formula

$$\langle\varphi, \psi\rangle_R = \langle R\varphi, \psi\rangle, \qquad (15)$$

where $\langle\cdot,\cdot\rangle$ is the standard inner product. And together with the standard norms we consider the $R$-norms

$$\|\psi\|_R = \sqrt{\langle R\psi, \psi\rangle}, \qquad \|A\|_R = \sup\{\|A\psi\|_R : \|\psi\|_R \le 1\}$$

induced by the inner product (15).

Proposition 3. The matrix $A = -R^{-1}D$ (see Eq. (3)) is self-conjugate and non-positive definite with respect to the $R$-inner product, i.e.,

$$\langle A\varphi, \psi\rangle_R = \langle\varphi, A\psi\rangle_R \quad \text{and} \quad \langle A\varphi, \varphi\rangle_R \le 0 \quad \text{for all } \varphi, \psi \in \mathbb{C}^N.$$

Proof: Actually, we have

$$\langle R^{-1}D\varphi, \psi\rangle_R = \langle R R^{-1}D\varphi, \psi\rangle = \langle D\varphi, \psi\rangle = \langle\varphi, D\psi\rangle,$$
$$\langle\varphi, R^{-1}D\psi\rangle_R = \langle R\varphi, R^{-1}D\psi\rangle = \langle\varphi, R R^{-1}D\psi\rangle = \langle\varphi, D\psi\rangle.$$

The right-hand sides coincide; hence $A = -R^{-1}D$ is $R$-self-conjugate. The matrix $A = -R^{-1}D$ is non-positive definite. Indeed, we have

$$-\langle A\varphi, \varphi\rangle_R = \langle R^{-1}D\varphi, \varphi\rangle_R = \langle D\varphi, \varphi\rangle \ge 0,$$

since $D$ is non-negative definite.

Corollary 4. The spectrum of the matrix $A = -R^{-1}D$ is contained in $(-\infty, 0]$. There exists an $R$-orthonormal basis $\varphi_1, \varphi_2, \dots, \varphi_N$ in $\mathbb{C}^N$ consisting of eigenvectors of $A$.

Proof: This is a well known property of a self-conjugate non-positive definite matrix.

Proposition 5. Let $U$ be a neighbourhood of $\sigma(A)$, where $A = -R^{-1}D$, see (3), and $f: U \to \mathbb{C}$ be a holomorphic function. Assume that for some $\varepsilon > 0$ we have the estimate

$$|f(\lambda)| \le \varepsilon \quad \text{for all } \lambda \in \sigma(A).$$

Then:

(a) For the matrix $Rf(A)$ we have the estimate

$$|\langle Rf(A)\varphi, \psi\rangle| \le \varepsilon\sqrt{\langle R\varphi, \varphi\rangle}\sqrt{\langle R\psi, \psi\rangle}.$$

(b) For the matrix $f(A)R^{-1}$ we have the estimates

$$|\langle f(A)R^{-1}\varphi, \psi\rangle| \le \varepsilon\sqrt{\langle\varphi, R^{-1}\varphi\rangle}\sqrt{\langle\psi, R^{-1}\psi\rangle}, \qquad \|f(A)R^{-1}\| \le \varepsilon\|R^{-1}\|.$$

Proof: (a) It is well known (see, e.g., [11, Theorem 11.28]) that $\|T\|$ equals the spectral radius $\varrho(T)$ for a normal matrix $T$. We also recall that a holomorphic function $f(T)$ of a normal matrix $T$ is normal as well; moreover (see, e.g., [11]), $\sigma(f(T)) = \{f(\lambda) : \lambda \in \sigma(T)\}$. Consequently, we have

$$\|f(A)\|_R = \varrho(f(A)) = \max_{\lambda\in\sigma(A)} |f(\lambda)|.$$

Therefore,

$$|\langle f(A)\varphi, \psi\rangle_R| \le \|f(A)\|_R\,\|\varphi\|_R\,\|\psi\|_R = \max_{\lambda\in\sigma(A)} |f(\lambda)|\,\|\varphi\|_R\,\|\psi\|_R \le \varepsilon\|\varphi\|_R\,\|\psi\|_R,$$

or, equivalently,

$$|\langle Rf(A)\varphi, \psi\rangle| \le \varepsilon\sqrt{\langle R\varphi, \varphi\rangle}\sqrt{\langle R\psi, \psi\rangle}.$$

(b) Replacing in (a) $\varphi$ by $R^{-1}\varphi$ and $\psi$ by $R^{-1}\psi$, we obtain

$$|\langle Rf(A)R^{-1}\varphi, R^{-1}\psi\rangle| \le \varepsilon\sqrt{\langle\varphi, R^{-1}\varphi\rangle}\sqrt{\langle\psi, R^{-1}\psi\rangle}.$$

Then, since $R$ is self-conjugate, from this estimate we have

$$|\langle f(A)R^{-1}\varphi, \psi\rangle| = |\langle f(A)R^{-1}\varphi, RR^{-1}\psi\rangle| = |\langle Rf(A)R^{-1}\varphi, R^{-1}\psi\rangle| \le \varepsilon\sqrt{\langle\varphi, R^{-1}\varphi\rangle}\sqrt{\langle\psi, R^{-1}\psi\rangle},$$

which, in particular, implies $\|f(A)R^{-1}\| \le \varepsilon\|R^{-1}\|$.
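Proposition 3 and Corollary 4 are simple to check numerically. The sketch below (ours) verifies the key identity behind the proof, namely that the $R$-self-conjugacy of $A$ amounts to the symmetry of $RA = -D$, and confirms that the spectrum of $A$ is real and non-positive:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
M = rng.standard_normal((N, N))
R = M @ M.T + N * np.eye(N)            # R > 0
K = rng.standard_normal((N, N - 2))
D = K @ K.T                            # D >= 0, rank-deficient on purpose
A = -np.linalg.solve(R, D)

# <A phi, psi>_R = <R A phi, psi>, so R-self-conjugacy of A is symmetry of RA = -D.
print(np.allclose(R @ A, (R @ A).T))

# Corollary 4: sigma(A) lies in (-inf, 0]; here 0 is an eigenvalue since D is singular.
eig = np.linalg.eigvals(A)
print(np.allclose(eig.imag, 0.0), eig.real.max() <= 1e-10)
```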

Let us come back to the matrix impulse response (6) of an RC-circuit. Assume that for some $t > 0$ we have a holomorphic function $\lambda \mapsto r(\lambda, t)$ that approximates the function $\lambda \mapsto e^{\lambda t}$.

Theorem 6. Let on the spectrum of $A = -R^{-1}D$ the approximation $\lambda \mapsto r(\lambda, t)$ differ from the function $\lambda \mapsto e^{\lambda t}$ by $\varepsilon(t)$, i.e.,

$$|r(\lambda, t) - e^{\lambda t}| \le \varepsilon(t) \quad \text{for all } \lambda \in \sigma(A). \qquad (16)$$

Then for the approximate impulse response (14) we have the estimate

$$|g_{lb}(t) - h_{lb}(t)| \le \varepsilon(t)\sqrt{h_{ll}(0)h_{bb}(0)}. \qquad (17)$$

Proof: We apply Proposition 5(b), taking for $f$ the function $f(\lambda) = r(\lambda, t) - e^{\lambda t}$. Thus we obtain the estimate (recall that $H(t) = e^{At}R^{-1}$)

$$|\langle (r(A, t)R^{-1} - H(t))\varphi, \psi\rangle| \le \varepsilon(t)\sqrt{\langle\psi, R^{-1}\psi\rangle}\sqrt{\langle\varphi, R^{-1}\varphi\rangle}.$$

Then we substitute $\psi = l$ and $\varphi = b$ in this formula and observe that $\langle (r(A, t)R^{-1} - H(t))b, l\rangle = g_{lb}(t) - h_{lb}(t)$. It remains to recall that according to (6), $h_{ll}(0) = \langle l, R^{-1}l\rangle$ and $h_{bb}(0) = \langle b, R^{-1}b\rangle$.

Proposition 7. Let $U$ be a neighbourhood of $\sigma(A)$ and $f: U \to \mathbb{C}$ be a holomorphic function. Then:

(a) For any vectors $\varphi, \psi \in \mathbb{C}^N$ the expression $\langle f(A)R^{-1}\varphi, \psi\rangle$ can be represented as $\sum_{k=1}^{N} c_k f(\lambda_k)$, where $c_k = \langle\varphi, \varphi_k\rangle\,\overline{\langle\psi, \varphi_k\rangle}$ and $\varphi_k$ are $R$-orthonormal eigenvectors of $A$, i.e., $A\varphi_k = \lambda_k\varphi_k$ and $\langle R\varphi_i, \varphi_j\rangle = \delta_{ij}$ (see Corollary 4).

(b) For any vector $\varphi \in \mathbb{C}^N$ the expression $\langle f(A)R^{-1}\varphi, \varphi\rangle$ can be represented in the form $\sum_{k=1}^{N} d_k f(\lambda_k)$ with $d_k \ge 0$ independent of $f$.

Proof: (a) Indeed, let $R^{-1}\varphi = \sum_{k=1}^{N}\alpha_k\varphi_k$ and $R^{-1}\psi = \sum_{k=1}^{N}\beta_k\varphi_k$. Then we can write $\alpha_k = \langle R^{-1}\varphi, \varphi_k\rangle_R = \langle\varphi, \varphi_k\rangle$; similarly $\beta_k = \langle\psi, \varphi_k\rangle$. Thus we have

$$\langle f(A)R^{-1}\varphi, \psi\rangle = \langle R^{-1}Rf(A)R^{-1}\varphi, \psi\rangle = \langle Rf(A)R^{-1}\varphi, R^{-1}\psi\rangle = \langle f(A)R^{-1}\varphi, R^{-1}\psi\rangle_R = \sum_{k=1}^{N}\alpha_k\bar\beta_k f(\lambda_k) = \sum_{k=1}^{N} c_k f(\lambda_k).$$

(b) Indeed, from (a) we have $d_k = |\langle\varphi, \varphi_k\rangle|^2 \ge 0$.

Proposition 8. Let $U$ be a neighbourhood of $\sigma(A)$ and $f, f_1, f_2: U \to \mathbb{C}$ be holomorphic functions. Assume we have the estimate

$$|f(\lambda)| \le f_2(\lambda) \quad \text{for all } \lambda \in \sigma(A), \qquad (18)$$

and assume that a vector $\varphi \in \mathbb{C}^N$ and the function $f_1$ possess the property

$$\langle f_1(A)R^{-1}\varphi, \varphi\rangle = \langle f_2(A)R^{-1}\varphi, \varphi\rangle.$$

Then

$$|\langle f(A)R^{-1}\varphi, \varphi\rangle| \le \langle f_1(A)R^{-1}\varphi, \varphi\rangle. \qquad (19)$$

Assume additionally that $f_1 = f_2$, that the functions $f$ and $f_1$ are entire, and that $f(\lambda)$ and $f_1(\lambda)$ are real for all $\lambda \in \mathbb{R}$. Then for any vectors $\varphi, \psi \in \mathbb{C}^N$ we have

$$|\langle f(A)R^{-1}\varphi, \psi\rangle| \le |\langle f_1(A)R^{-1}\varphi, \psi\rangle| + \sqrt{\langle f_1(A)R^{-1}\varphi, \varphi\rangle}\sqrt{\langle f_1(A)R^{-1}\psi, \psi\rangle}. \qquad (20)$$

Proof: By Proposition 7(b), we have

$$|\langle f(A)R^{-1}\varphi, \varphi\rangle| = \Big|\sum_{k=1}^{N} d_k f(\lambda_k)\Big| \le \sum_{k=1}^{N} d_k |f(\lambda_k)| \le \sum_{k=1}^{N} d_k f_2(\lambda_k) = \langle f_2(A)R^{-1}\varphi, \varphi\rangle = \langle f_1(A)R^{-1}\varphi, \varphi\rangle.$$

Since $f(\lambda)$ and $f_1(\lambda)$ are entire and real for $\lambda \in \mathbb{R}$, we have the expansions $f(\lambda) = \sum_{m=0}^{\infty} a_m\lambda^m$ and $f_1(\lambda) = \sum_{m=0}^{\infty} b_m\lambda^m$ with real $a_m$ and $b_m$. Therefore the matrices $f(A)R^{-1} = \sum_{m=0}^{\infty} a_m(-R^{-1}D)^m R^{-1}$ and $f_1(A)R^{-1} = \sum_{m=0}^{\infty} b_m(-R^{-1}D)^m R^{-1}$ are self-conjugate. Consequently, the formula

$$\langle\varphi, \psi\rangle_* = \langle (f_1(A) - f(A))R^{-1}\varphi, \psi\rangle$$

defines an inner product, which is non-negative definite by (19). For any inner product one has the estimate

$$|\langle\varphi, \psi\rangle_*| \le \sqrt{\langle\varphi, \varphi\rangle_*}\sqrt{\langle\psi, \psi\rangle_*}.$$

Substituting in this formula the definition of $\langle\varphi, \psi\rangle_*$, we obtain

$$|\langle (f_1(A) - f(A))R^{-1}\varphi, \psi\rangle| \le \sqrt{\langle (f_1(A) - f(A))R^{-1}\varphi, \varphi\rangle}\sqrt{\langle (f_1(A) - f(A))R^{-1}\psi, \psi\rangle} \le \sqrt{\langle f_1(A)R^{-1}\varphi, \varphi\rangle}\sqrt{\langle f_1(A)R^{-1}\psi, \psi\rangle},$$

which implies (20).

Now we come to an estimate of a different kind than in Theorem 6: in Theorem 6 we compare the error at the present instant $t$ with the entries of $H(0)$ at the initial moment of time, but in Theorem 9 we compare the error at the present instant $t$ with the entries of $H(t)$ at the same moment. Assume again that for some $t > 0$ we have a holomorphic function $\lambda \mapsto r(\lambda, t)$ that approximates the function $\lambda \mapsto e^{\lambda t}$.

Theorem 9. Let on the spectrum of $A = -R^{-1}D$ the approximation $\lambda \mapsto r(\lambda, t)$ be real for real $\lambda$ and differ from the function $\lambda \mapsto e^{\lambda t}$ by $\varepsilon(t)e^{\lambda t}$, i.e.,

$$|r(\lambda, t) - e^{\lambda t}| \le \varepsilon(t)e^{\lambda t} \quad \text{for all } \lambda \in \sigma(A). \qquad (21)$$

Then for the approximate 'diagonal' impulse response $g_{ll}(t) = \langle r(A, t)R^{-1}l, l\rangle$ we have the estimate

$$|g_{ll}(t) - h_{ll}(t)| \le \varepsilon(t)h_{ll}(t). \qquad (22)$$

And for the approximate impulse response (14) we have

$$|g_{lb}(t) - h_{lb}(t)| \le \varepsilon(t)\big(|h_{lb}(t)| + \sqrt{h_{ll}(t)h_{bb}(t)}\big). \qquad (23)$$

Proof: We set $f(\lambda) = r(\lambda, t) - e^{\lambda t}$, $f_1(\lambda) = \varepsilon(t)e^{\lambda t}$, and $f_2 = f_1$, and substitute them in (19). Then we arrive at the estimate

$$|\langle (r(A, t) - e^{At})R^{-1}\varphi, \varphi\rangle| \le \varepsilon(t)\langle e^{At}R^{-1}\varphi, \varphi\rangle,$$

or

$$|\langle (r(A, t)R^{-1} - H(t))\varphi, \varphi\rangle| \le \varepsilon(t)\langle H(t)\varphi, \varphi\rangle.$$

It remains to substitute $\varphi = l$. Next we take the same $f$ and $f_1$, and $\varphi = l$ and $\psi = b$, and substitute them in (20). In a similar way we obtain (23).

Remark 1. We show that estimates (17) and (22) are unimprovable. To get started, we derive some auxiliary identities. Let $b = l = R\varphi_1$, where $\varphi_1$ is the first eigenvector of $A$, $\|\varphi_1\|_R = 1$. In this case

$$h_{lb}(t) = \langle e^{At}R^{-1}b, l\rangle = \langle e^{At}R^{-1}R\varphi_1, R\varphi_1\rangle = \langle e^{At}\varphi_1, R\varphi_1\rangle = e^{\lambda_1 t}\langle\varphi_1, R\varphi_1\rangle = e^{\lambda_1 t}\|\varphi_1\|_R^2 = e^{\lambda_1 t}.$$

In a similar way $g_{lb}(t) = r(\lambda_1, t)$. We also note that $h_{lb}(0) = \langle R^{-1}b, l\rangle = \langle\varphi_1, R\varphi_1\rangle = 1$.

First, for definiteness we assume that $|r(\lambda_1, t) - e^{\lambda_1 t}| = \varepsilon(t)$, i.e., $\lambda_1$ is just the point where estimate (16) turns into an equality. We suppose that $b = l = R\varphi_1$. Then we have $|g_{lb}(t) - h_{lb}(t)| = |r(\lambda_1, t) - e^{\lambda_1 t}| = \varepsilon(t)$. Thus estimate (17) is unimprovable.

Next, for definiteness we assume that $|r(\lambda_1, t) - e^{\lambda_1 t}| = \varepsilon(t)e^{\lambda_1 t}$, i.e., $\lambda_1$ is just the point where estimate (21) turns into an equality. We suppose that $b = l = R\varphi_1$. Then we have $|g_{ll}(t) - h_{ll}(t)| = |r(\lambda_1, t) - e^{\lambda_1 t}| = \varepsilon(t)e^{\lambda_1 t}$. Thus estimate (22) is unimprovable. Taking $b = l$ in (23), we see that estimate (23) is only twice worse than the unimprovable estimate (22).
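To see Theorem 6 at work on a small example, the following sketch (ours) takes the Taylor polynomial as a crude $r(\lambda, t)$, computes $\varepsilon(t)$ directly from the spectrum (explicitly computable here), and checks estimate (17):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(3)
N = 8
M = rng.standard_normal((N, N)); R = M @ M.T + N * np.eye(N)
K = rng.standard_normal((N, N)); D = K @ K.T
A = -np.linalg.solve(R, D)
Rinv = np.linalg.inv(R)
b, l = rng.standard_normal(N), rng.standard_normal(N)

t, n = 0.7, 6
a = [t**k / factorial(k) for k in range(n + 1)]   # r(lambda, t): Taylor polynomial of e^{lambda t}

rA = np.zeros((N, N))                             # evaluate r(A, t) by Horner's scheme
for ak in reversed(a):
    rA = rA @ A + ak * np.eye(N)

g = l @ (rA @ Rinv) @ b                           # approximation (14)
h = l @ (expm(A * t) @ Rinv) @ b                  # exact h_lb(t), cf. (6) and (7)

lam = np.linalg.eigvals(A).real                   # real spectrum, Corollary 4
eps = np.max(np.abs(np.polyval(a[::-1], lam) - np.exp(lam * t)))   # epsilon(t) in (16)
bound = eps * np.sqrt((l @ Rinv @ l) * (b @ Rinv @ b))             # right side of (17)
print(abs(g - h) <= bound + 1e-12, abs(g - h), bound)
```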

4. Estimates of the Spectrum of A

According to the results of Section 3, the precision of an approximation to an impulse response is characterised by the functions $\varepsilon$, which, in turn, depend on a successful choice of the function $r$. In the subsequent sections we discuss some ways of constructing the function $r$. But before applying the estimates, it is useful (see (16) and (21)) to localise the spectrum $\sigma(A)$ of the matrix $A$.

By Corollary 4, the spectrum of $A$ is contained in a segment of the form $[-\beta, -\alpha]$, where $0 \le \alpha \le \beta$. Clearly, the minimal value of $\beta$ such that the segment $[-\beta, -\alpha]$ contains the spectrum $\sigma(A)$ coincides with the spectral radius $\varrho(A)$ of $A$. We call the minimal value of $\alpha$ such that the segment $[-\beta, -\alpha]$ contains the spectrum $\sigma(A)$ the lower spectral radius $\mathrm{r}(A)$ of $A$. In order to estimate the spectral radius $\varrho(A)$, the well known formula (see, e.g., [11, Theorem 10.13])

$$\varrho(A) = \inf_n \|A^n\|^{1/n} = \lim_{n\to\infty} \|A^n\|^{1/n}$$

can be applied. (The process can be sped up if one uses the powers $n = 2^m$.) In particular, we can set $\beta = \|A^n\|^{1/n}$ for some $n$. It is convenient to take for the norm of $A$ a norm induced by the $l_\infty$- or $l_1$-norms on $\mathbb{C}^N$, which are easily calculated:

$$\|A\|_{l_\infty\to l_\infty} = \max_i \sum_{j=1}^{N} |A_{ij}|, \qquad \|A\|_{l_1\to l_1} = \max_j \sum_{i=1}^{N} |A_{ij}|.$$

We can estimate the lower spectral radius $\mathrm{r}(A)$ in a similar way. We consider the matrix $T = A + \beta\mathbf{1}$. Clearly, $\mathrm{r}(A) = \beta - \varrho(T)$, and we can estimate $\varrho(T)$ using the above rule. If we obtain the estimate $\alpha < 0$, then probably $0 \in \sigma(A)$, and we can set $\alpha = 0$ anyway.

The following proposition shows that the case of an arbitrary segment $[-\beta, -\alpha]$ can be reduced to the case of $[-1, 0]$.

Proposition 10. Let the spectrum of $A$ be contained in a segment $[-\beta, -\alpha]$. We define the auxiliary matrix $A_1$ related to $A$ by the linear transformation

$$A_1 = \frac{A + \alpha\mathbf{1}}{\beta - \alpha}, \quad \text{or equivalently} \quad A = -\alpha\mathbf{1} + (\beta - \alpha)A_1. \qquad (24)$$

Then, clearly, the spectrum of $A_1$ is contained in the segment $[-1, 0]$, and

$$e^{At} = e^{-\alpha t}e^{A_1(\beta-\alpha)t}, \quad \text{or equivalently} \quad e^{A_1 t} = e^{(\alpha t)/(\beta-\alpha)}e^{At/(\beta-\alpha)}. \qquad (25)$$

If a holomorphic function $r_1(\lambda, t)$ of the matrix $A_1$ possesses the property

$$|\langle (r_1(A_1, t) - e^{A_1 t})b, l\rangle| \le \varepsilon(t)$$

with some $\varepsilon(t) \ge 0$ and $b, l \in \mathbb{C}^N$, then the function

$$r(\lambda, t) = e^{-\alpha t}\,r_1\Big(\frac{\lambda + \alpha}{\beta - \alpha},\ (\beta - \alpha)t\Big) \qquad (26)$$

of the matrix $A$ possesses the property

$$|\langle (r(A, t) - e^{At})b, l\rangle| \le e^{-\alpha t}\varepsilon((\beta - \alpha)t).$$

Proof: The proof of formula (25) is based on direct calculations. We prove the second assertion, making use of formula (25). By virtue of (26) we have $r(A, t) = e^{-\alpha t}r_1\big(\frac{A + \alpha\mathbf{1}}{\beta - \alpha}, (\beta - \alpha)t\big)$. Since $A_1 = \frac{A + \alpha\mathbf{1}}{\beta - \alpha}$ (see (24)), we can rewrite this formula as $r(A, t) = e^{-\alpha t}r_1(A_1, (\beta - \alpha)t)$. Hence

$$r(A, t) - e^{At} = e^{-\alpha t}r_1(A_1, (\beta - \alpha)t) - e^{-\alpha t}e^{A_1(\beta - \alpha)t}.$$

Therefore

$$|\langle (r(A, t) - e^{At})b, l\rangle| = e^{-\alpha t}\big|\big\langle\big(r_1(A_1, (\beta - \alpha)t) - e^{A_1(\beta - \alpha)t}\big)b, l\big\rangle\big| \le e^{-\alpha t}\varepsilon((\beta - \alpha)t).$$

According to this proposition we develop the further discussion mainly for a matrix $A$ whose spectrum is contained in the segment $[-1, 0]$.
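A sketch (ours) of the suggested procedure: $\beta = \|A^n\|^{1/n}$ with $n = 2^m$ computed by repeated squaring, using the easily calculated $l_\infty$-induced norm; rescaling at each squaring guards against overflow and underflow. The returned value is an upper bound for $\varrho(A)$ (up to rounding), since $\|A^n\| \ge \varrho(A)^n$ for any induced norm.

```python
import numpy as np

def beta_estimate(A, m=6):
    """Return ||A^n||^{1/n} for n = 2^m with the l_inf-induced norm
    (maximum absolute row sum), computed by repeated squaring."""
    P = np.array(A, dtype=float)
    log_scale = 0.0                       # invariant: A^(2^k) = P * exp(log_scale)
    for _ in range(m):
        P = P @ P
        s = np.abs(P).max()
        if s == 0.0:
            return 0.0                    # A is nilpotent: rho(A) = 0
        P /= s
        log_scale = 2.0 * log_scale + np.log(s)
    norm = np.abs(P).sum(axis=1).max()    # ||P|| in the l_inf -> l_inf norm
    return float(np.exp((np.log(norm) + log_scale) / 2 ** m))

# Usage: beta >= rho(A), and the gap shrinks as m grows.
rng = np.random.default_rng(4)
M = rng.standard_normal((40, 40)); R = M @ M.T + 40 * np.eye(40)
K = rng.standard_normal((40, 40)); D = K @ K.T
A = -np.linalg.solve(R, D)
print(beta_estimate(A), np.abs(np.linalg.eigvals(A)).max())
```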

5. Polynomial Approximations

In this section we take a polynomial $p(\lambda, t)$ in $\lambda$ as the approximation $r$ (see the discussion at the beginning of Section 3) to the function $f(\lambda, t) = e^{\lambda t}$. A polynomial is the simplest function from the computational point of view. To construct a polynomial approximation to the matrix impulse response $H(t)$, one must take a suitable polynomial $\lambda \mapsto p(\lambda, t)$, calculate the powers $A, A^2, \dots, A^n$ of $A$, and substitute them into the polynomial.

Remark 2. Assume $p$ is a polynomial $p(\lambda, t) = a_0(t) + a_1(t)\lambda + \dots + a_n(t)\lambda^n$. If we are interested in approximation (14), there is no need to calculate the matrix powers $A^k$ completely. It suffices to calculate only the vectors $A^k R^{-1}b$, or the vectors $(A^*)^k l$. Indeed, we have the identity

$$g_{lb}(t) = \langle p(A, t)R^{-1}b, l\rangle = \sum_{k=0}^{n} a_k(t)\langle A^k R^{-1}b, l\rangle = \sum_{k=0}^{n} a_k(t)\langle R^{-1}b, (A^*)^k l\rangle.$$

This observation is well known in the theory of Krylov methods.

The simplest method of polynomial approximation is the approximation of the function $f(\lambda, t) = e^{\lambda t}$ by its Taylor polynomial $p_n(\lambda, t) = \sum_{k=0}^{n} \frac{t^k}{k!}\lambda^k$. This means that we take $G(t) = p_n(A, t)R^{-1} = \sum_{k=0}^{n} \frac{t^k}{k!}A^k R^{-1}$ as an approximation to $H(t) = e^{At}R^{-1}$.

Let $f$ be a given function of (real or) complex argument, and let $\lambda_0, \lambda_1, \dots, \lambda_n$ be given $n + 1$ points in the domain of the function $f$. We recall (see, e.g., [13, 15, 16]) that the interpolation polynomial of the function $f$ with the points of interpolation $\lambda_0, \lambda_1, \dots, \lambda_n$ is the polynomial $p(\lambda) = a_0 + a_1\lambda + a_2\lambda^2 + \dots + a_n\lambda^n$ of degree $n$ coincident with the function $f$ at the points $\lambda_0, \lambda_1, \dots, \lambda_n$:

$$p(\lambda_k) = f(\lambda_k), \quad k = 0, 1, \dots, n.$$

We consider the fundamental interpolation polynomials

$$l_k(\lambda) = \frac{\prod_{i=0}^{k-1}(\lambda - \lambda_i)\,\prod_{i=k+1}^{n}(\lambda - \lambda_i)}{\prod_{i=0}^{k-1}(\lambda_k - \lambda_i)\,\prod_{i=k+1}^{n}(\lambda_k - \lambda_i)}, \quad k = 0, 1, \dots, n. \qquad (27)$$

It is not difficult to see that $l_k(\lambda_j) = 0$ if $k \ne j$, and $l_k(\lambda_j) = 1$ if $k = j$, $k, j = 0, 1, \dots, n$. We recall the Lagrange representation of the interpolation polynomial:

$$p(\lambda) = \sum_{k=0}^{n} f(\lambda_k)l_k(\lambda). \qquad (28)$$
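A sketch (ours) of the evaluation scheme of Remark 2: the value $g_{lb}(t)$ for a polynomial $r$ is accumulated from the vectors $A^kR^{-1}b$, so only matrix-vector operations (here, linear solves with $R$) are needed, never the matrix powers $A^k$ themselves. For many evaluations one would factorize $R$ once; that refinement is omitted here.

```python
import numpy as np
from math import factorial

def g_lb_taylor(R, D, b, l, t, n):
    """g_lb(t) = <p_n(A, t) R^{-1} b, l> for the Taylor polynomial p_n of e^{lambda t},
    computed from the vectors A^k R^{-1} b alone (Remark 2), with A = -R^{-1} D."""
    term = np.linalg.solve(R, b)               # term = A^0 R^{-1} b
    total = 0.0
    for k in range(n + 1):
        total += t**k / factorial(k) * (l @ term)
        term = -np.linalg.solve(R, D @ term)   # next vector: A^(k+1) R^{-1} b
    return total
```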

Let us come back to the approximation of a holomorphic function $f$ by a polynomial $p$ on the segment $[-\beta, -\alpha]$. A comparatively good universal method of constructing the polynomial $p$ is the following: $p$ is taken to be the interpolation polynomial whose points of interpolation are the roots of the Chebyshev polynomial of the first kind (see, e.g., [15, pp. 99 and 190]) mapped (cf. Proposition 10) to the segment $[-\beta, -\alpha]$. It is known (see, e.g., [15, p. 190]) that the roots of the Chebyshev polynomials are asymptotically optimal points of interpolation.

A polynomial $p$ of degree $n$ is said (see, e.g., [15, ch. 3, Section 1]) to be a polynomial of best approximation to a given continuous function $f$ on $[a, b]$ if the infimum of the value $\max_{\lambda\in[a,b]} |f(\lambda) - p(\lambda)|$ over all polynomials $p$ of the given degree $n$ is attained at the considered $p$. There exists an effective algorithm [16] for the construction of the polynomial of best approximation.

Example 2. In Figs. 1–3 we present the results of numerical experiments with polynomial approximations of the exponential function. We consider the Taylor polynomials (Fig. 1), the interpolation polynomials with the zeros of the Chebyshev polynomials taken as points of interpolation (Fig. 2), and the polynomials of best approximation (Fig. 3). In all figures we present the graphs of the functions $\varepsilon$ from Theorems 6 (left) and 9 (right) corresponding to polynomials of the indicated degrees, provided the spectrum of $A$ is contained in $[-1, 0]$ (see Proposition 10).

Fig. 1. The graphs of the functions $\varepsilon(t) = \max_{\lambda\in[-1,0]} |e^{\lambda t} - p(\lambda, t)|$ (left) and $\varepsilon(t) = \max_{\lambda\in[-1,0]} |p(\lambda, t)/e^{\lambda t} - 1|$ (right), where $p$ is the $(n-1)$-th Taylor polynomial of $\lambda \mapsto e^{\lambda t}$, $n = 4, 8, 16, 32$.

Fig. 2. The graphs of the functions $\varepsilon(t) = \max_{\lambda\in[-1,0]} |e^{\lambda t} - p(\lambda, t)| = |1 - p(0, t)|$ (left) and $\varepsilon(t) = \max_{\lambda\in[-1,0]} |p(\lambda, t)/e^{\lambda t} - 1| = 1 - p(-1, t)/e^{-t}$ (right), where the roots of Chebyshev polynomials are taken as the points of interpolation for the interpolation polynomial $p$ of degree $n - 1$, $n = 4, 8, 16, 32$.

Fig. 3. The graphs of the functions $\varepsilon(t) = \max_{\lambda\in[-1,0]} |e^{\lambda t} - p(\lambda, t)|$ (left) and $\varepsilon(t) = \max_{\lambda\in[-1,0]} |p(\lambda, t)/e^{\lambda t} - 1|$ (right), where $p$ is the polynomial of best approximation of degree $n - 1$.

In terms of Krylov subspaces (see Section 2), polynomial approximation corresponds to the case where the linear span of the columns of the matrix $V$ is defined as the linear span of the vectors $A^i R^{-1}b$, $i = 0, 1, \dots, n$, cf. (13). This approach is equivalent to the Padé approximation of the transfer function at infinity (see, e.g., [9, p. 17]). According to Remark 1, Fig. 3 shows the best possible approximation that can be achieved in this way. Moreover, reduced order methods can give only an approximation of the kind (12), whereas the best polynomial approximation leads to an approximation of the form $\tilde h(t) = \sum_{k=1}^{n} d_k e^{\mu_k(t)t}$ with $\mu_k$ depending on $t$. Thus, actually, the Padé approximation of the transfer function at infinity can give only a precision comparable with that of the interpolation polynomials with the roots of the Chebyshev polynomial, since the latter give the best asymptotic rate of convergence in the class of interpolation polynomials.
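The interpolation at Chebyshev points used in Fig. 2 is easy to reproduce. The sketch below (ours) maps the roots of the Chebyshev polynomial from $[-1, 1]$ to $[-1, 0]$, interpolates $\lambda \mapsto e^{\lambda t}$ there, and evaluates $\varepsilon(t) = \max_{\lambda\in[-1,0]} |e^{\lambda t} - p(\lambda, t)|$ on a fine grid, the quantity needed by Theorem 6 when $\sigma(A) \subseteq [-1, 0]$:

```python
import numpy as np

def chebyshev_interp_error(t, n, grid=2001):
    """epsilon(t) for the interpolation polynomial of degree n-1 of exp(lambda*t)
    whose nodes are the n Chebyshev roots mapped from [-1, 1] to [-1, 0]."""
    k = np.arange(n)
    nodes = (np.cos((2 * k + 1) * np.pi / (2 * n)) - 1.0) / 2.0   # nodes in [-1, 0]
    coeffs = np.polyfit(nodes, np.exp(nodes * t), n - 1)          # exact fit: n points, degree n-1
    lam = np.linspace(-1.0, 0.0, grid)
    return np.max(np.abs(np.exp(lam * t) - np.polyval(coeffs, lam)))

for t in (1.0, 5.0, 20.0):
    print(t, chebyshev_interp_error(t, 8))    # the error grows with t, cf. Fig. 2
```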

6. Rational Approximations

In this section we discuss the usage of rational functions for the construction of an approximation $r$ to the impulse response. We begin with Padé approximations. Padé approximations are often used (see, e.g., [3]) for the construction of approximations of impulse responses, but in a somewhat different way: namely (cf. the end of Section 5), usually they are used for an approximation of the frequency response, i.e., the Laplace transform of the impulse response, whereas we consider Padé approximations to the impulse response itself.

Let $f$ be a function holomorphic in a neighbourhood of zero. The Padé approximation (at zero) to the function $f$ of degree $(k, m)$ is (see, e.g., [1]) the rational function

$$\pi_{k,m}(\lambda) = \frac{p_{k,m}(\lambda)}{q_{k,m}(\lambda)} = \frac{a_0 + a_1\lambda + a_2\lambda^2 + \dots + a_k\lambda^k}{1 + b_1\lambda + b_2\lambda^2 + \dots + b_m\lambda^m} \qquad (29)$$

that possesses the property

$$f(\lambda) = \pi_{k,m}(\lambda) + O(\lambda^{k+m+1}) \quad \text{as } \lambda \to 0.$$

Clearly, Padé approximations of degree $(k, 0)$ are Taylor polynomials. An example of the Padé approximation to the matrix exponent $e^{At}$ has the form

$$\pi_{1,3}(At) = \Big(\mathbf{1} + \frac{At}{4}\Big)\Big(\mathbf{1} - \frac{3At}{4} + \frac{A^2t^2}{4} - \frac{A^3t^3}{24}\Big)^{-1} = \Big(\mathbf{1} - \frac{3At}{4} + \frac{A^2t^2}{4} - \frac{A^3t^3}{24}\Big)^{-1}\Big(\mathbf{1} + \frac{At}{4}\Big). \qquad (30)$$
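Formula (30) can be checked directly. The sketch below (ours) evaluates $\pi_{1,3}(At)$ for a small matrix with spectrum in $(-\infty, 0]$ and compares it with the exact exponential; by Table 1 the resulting error should stay below about $0.0224$:

```python
import numpy as np
from scipy.linalg import expm

def pade_1_3(B):
    """pi_{1,3}(B) = (1 + B/4)(1 - 3B/4 + B^2/4 - B^3/24)^{-1}, cf. (30) with B = At."""
    I = np.eye(B.shape[0])
    num = I + B / 4.0
    den = I - 3.0 * B / 4.0 + (B @ B) / 4.0 - (B @ B @ B) / 24.0
    return np.linalg.solve(den, num)      # den^{-1} num; the factors commute, cf. (1)

A = np.diag([-0.1, -1.0, -3.0, -30.0])    # toy matrix with spectrum in (-inf, 0]
t = 1.0
print(np.abs(pade_1_3(A * t) - expm(A * t)).max())   # below the (1, 3) entry of Table 1
```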

Table 1. The value of $\max_{\lambda\in(-\infty,0]} |\pi_{k,m}(\lambda) - e^{\lambda}|$ for some rational functions.

The degree (k, m) of approximation:   (1, 3)     (2, 6)          (3, 9)          (4, 12)          (5, 15)
The Padé approximation:               0.022393   8.406 · 10^{-4}  3.053 · 10^{-5}  1.120 · 10^{-6}   4.122 · 10^{-8}
The best rational approximation:      0.001737   6.684 · 10^{-6}  2.575 · 10^{-8}  9.903 · 10^{-11}  3.803 · 10^{-13}

There is known [1] an effective formula for Padé approximations to $e^{\lambda}$ of any degree. The following proposition shows that the Padé approximations $\pi_{k,m}$ with $m = 3k$ are the most convenient for our problem.

Proposition 11 ([1, part 1, Section 5.7]). The sequence $\pi_{k,\alpha k}$ of Padé approximations to the function $\lambda \mapsto e^{\lambda}$ has the best rate of convergence on $(-\infty, 0]$ if $\alpha = 3$.

A better result can be obtained if one uses rational functions of best approximation instead of Padé approximations. An algorithm for constructing the best rational approximation can be found, e.g., in [17]. The best rational approximations can also approximate $e^{\lambda}$ on the whole of $(-\infty, 0]$, so we write them in the form (29) as well.

Remark 3. If one is interested only in approximation (14) and uses a rational function $r(\lambda, t) = \frac{p(\lambda,t)}{q(\lambda,t)}$ as an approximation to $\lambda \mapsto e^{\lambda t}$, then there is no need to calculate the numerator $p(A, t)$ and the inverse of the denominator $q(A, t)$ completely. Namely, one may act as in Remark 2. Indeed,

$$g_{lb}(t) = \langle r(A, t)R^{-1}b, l\rangle = \langle p(A, t)(q(A, t))^{-1}R^{-1}b, l\rangle = \langle (q(A, t))^{-1}R^{-1}b, (p(A, t))^*l\rangle = \langle p(A, t)R^{-1}b, ((q(A, t))^{-1})^*l\rangle.$$

Example 3. In Table 1 and Fig. 4 we present the results of numerical experiments with the Padé approximations and the best rational approximations to the exponential function. In Table 1 we compare the estimates (16) from Theorem 6 for the Padé approximations and the best rational approximations; in both cases the estimates do not require a localisation (see Section 4) of the spectrum $\sigma(A)$, i.e., one can obtain an estimate that holds for all $\lambda \in (-\infty, 0]$. Figure 4 shows the graphs of the function $\varepsilon$ from Theorem 9, provided the spectrum of $A$ is contained in $[-1, 0]$.

Fig. 4. The graphs of the functions $\varepsilon(t) = \max_{\lambda\in[-1,0]} |\pi_{k,3k}(\lambda t)/e^{\lambda t} - 1|$, where $\pi_{k,3k}$ is the Padé approximation (left) or the best rational approximation (right), $k = 1, 2, 3, 4, 5$.

Unfortunately, to compute the best rational approximation $\pi_{k,m}(At)$ (or the Padé approximation) we are forced to calculate the inverse of the denominator $q_{k,m}(At)$ at each new point $t$ all over again, cf. formula (30). To overcome this problem, we suggest the following modification of formulae of the kind (30). We fix a point $t_0$. For all $t$'s close to $t_0$, we suggest using a denominator $q_{k,m}(\lambda t_0)$ independent of $t$, but a numerator $p(\lambda, t)$ dependent on $t$ (and not necessarily equal to $p_{k,m}(\lambda t)$).

In more detail, let us fix a point $t_0$. We describe how to construct a good rational approximation $\lambda \mapsto r(\lambda, t)$ to $\lambda \mapsto e^{\lambda t}$ for $t$'s in a neighbourhood of $t_0$. We set $r(\lambda, t) = p(\lambda, t)/q_{k,m}(\lambda t_0)$, where $q_{k,m}$ is the denominator of the best rational approximation $\pi_{k,m}$ with given $k$ and $m$, and $\lambda \mapsto p(\lambda, t)$ is a polynomial of degree $k$ whose coefficients depend on $t$. We choose the numerator $p(\lambda, t)$ in such a way that $\lambda \mapsto \frac{p(\lambda,t)}{q_{k,m}(\lambda t_0)}$ is the generalised polynomial of best approximation to the function $\lambda \mapsto e^{\lambda t}$ with respect to the Chebyshev system [15, ch. 3, Section 1]

$$\lambda \mapsto \frac{1}{q_{k,m}(\lambda t_0)}, \quad \lambda \mapsto \frac{\lambda}{q_{k,m}(\lambda t_0)}, \quad \dots, \quad \lambda \mapsto \frac{\lambda^k}{q_{k,m}(\lambda t_0)}.$$

Finally, we use $r(A, t) = p(A, t)(q_{k,m}(At_0))^{-1} = (q_{k,m}(At_0))^{-1}p(A, t)$ as an approximation to $e^{At}$.
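The computational payoff of freezing the denominator is that $q_{k,m}(At_0)$ is factorized once and then reused for every $t$. The sketch below (ours) shows the pattern only; the callable `p_coeffs(t)`, which should return the coefficients $a_0(t), \dots, a_k(t)$ of the best-approximation numerator, is a placeholder whose actual construction (via a Remez-type algorithm for the Chebyshev system above) is not reproduced here.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def impulse_response_gate(A, Rinv_b, l, ts, q_coeffs, p_coeffs):
    """g_lb(t) = <p(A, t) (q(A t0))^{-1} R^{-1} b, l> for many values of t.
    q_coeffs: coefficients of the frozen denominator q_{k,m}(lambda t0), constant term first.
    p_coeffs(t): callable returning the numerator coefficients a_0(t), ..., a_k(t)."""
    Q = sum(c * np.linalg.matrix_power(A, j) for j, c in enumerate(q_coeffs))
    lu = lu_factor(Q)                      # factorize q(A t0) once for all t
    w = lu_solve(lu, Rinv_b)               # w = (q(A t0))^{-1} R^{-1} b; p(A, t) commutes with q
    out = []
    for t in ts:
        v, term = 0.0, w.copy()
        for a in p_coeffs(t):              # accumulate p(A, t) w from matrix-vector products
            v = v + a * term
            term = A @ term
        out.append(l @ v)
    return np.array(out)
```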

Example 4. We take $k = 10$ and $m = 15$, and $t_0 = 1$. We construct the rational function $r(\lambda, t) = p(\lambda, t)/q_{10,15}(\lambda t_0)$ as described above. We find points $t_*$ and $t^*$ such that for the function $\varepsilon(t) = \max_{\lambda\in(-\infty,0]} |r(\lambda, t) - e^{\lambda t}|$ (see (16)) the estimate $\varepsilon(t) \le 10^{-3}$ holds for all $t \in [t_*, t^*]$. It turns out that $t_* = 0.084$ and $t^* = 2.876$. Now we can apply Theorem 6 on the segment $[t_*, t^*]$. If one wants better accuracy $\varepsilon$, one must shrink the segment $[t_*, t^*]$ or enlarge $m$ and $k$.

The following proposition states that it suffices to construct a numerator $p(\lambda, t)$ for $t$'s in a neighbourhood of the single point $t_0$.

Proposition 12. Let a polynomial $p$ satisfy the estimate

$$\sup_{\lambda\in(-\infty,0]} \left|\frac{p(\lambda, t)}{q_{k,m}(\lambda)} - e^{\lambda t}\right| \le \varepsilon \quad \text{for all } t \in [t_*, t^*]$$

with some $t_*$ and $t^*$ (which corresponds to the case $t_0 = 1$). Let $t_0 > 0$ be an arbitrary point. Then for the polynomial $\lambda \mapsto p(\lambda t_0, t/t_0)$ we have

$$\sup_{\lambda\in(-\infty,0]} \left|\frac{p(\lambda t_0, t/t_0)}{q_{k,m}(\lambda t_0)} - e^{\lambda t}\right| \le \varepsilon \quad \text{for all } t \in [t_* t_0, t^* t_0].$$

Proof: Indeed, if one substitutes $t = t'/t_0$ and $\lambda = \lambda' t_0$ into the first formula, one obtains the second formula.

According to this proposition one may try to cover the real half-axis $\{t : t \ge 0\}$ by segments of the form $[t_* t_0, t^* t_0]$ with different $t_0$'s. We note that covering a neighbourhood of 0 requires infinitely many such segments. So if one is interested in an impulse response at points very close to $t = 0$, it is better to use a polynomial approximation.

7. An Example of an RC-Circuit

Fig. 5. A discrete model of an RC transmission line.

We consider the simplest RC-line shown in Fig. 5. It consists of $n$ capacitors and $n + 1$ resistors. We assume that $R = R_0/(n+1)$ and $C = C_0/n$, and $n = 100$. For definiteness we set $R_0 = 120$ and $C_0 = 0.3$. We assume that the right contacts are connected together and a voltage $U_0$ is applied to the left ones, and we are interested in the currents $I_0$ and $I_n$ between the left and right contacts, respectively. Taking the internal squares in Fig. 5 as basis contours, we arrive at the mesh Kirchhoff equations

$$I'(t) + \frac{1}{RC}DI(t) = \frac{1}{R}U'(t), \qquad (31)$$

where

$$I = \begin{pmatrix} I_0 \\ I_1 \\ \vdots \\ I_{n-1} \\ I_n \end{pmatrix}, \quad U = \begin{pmatrix} U_0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix}, \quad D = \begin{pmatrix} 1 & -1 & 0 & \cdots & 0 & 0 \\ -1 & 2 & -1 & \cdots & 0 & 0 \\ 0 & -1 & 2 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 2 & -1 \\ 0 & 0 & 0 & \cdots & -1 & 1 \end{pmatrix}.$$

We take for $U_0$ the Heaviside function $\vartheta$; thus $\dot U_0$ is the $\delta$-function. We denote the matrix $-\frac{1}{RC}D$ by $A$. In this notation the matrix impulse response of Eq. (31) has the form $H(t) = \frac{1}{R}e^{At}$ for $t > 0$.
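The setting of this example is easy to reproduce. The sketch below (ours) assembles the matrix $D$ of Eq. (31) and evaluates the two end currents from the matrix impulse response; for brevity the reference values are computed with a dense matrix exponential rather than with the rational gates of Section 6:

```python
import numpy as np
from scipy.linalg import expm

n = 100
R0, C0 = 120.0, 0.3
R, C = R0 / (n + 1), C0 / n

# Tridiagonal mesh matrix D of order n + 1: diagonal (1, 2, ..., 2, 1), off-diagonals -1.
N = n + 1
D = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
D[0, 0] = D[-1, -1] = 1.0
A = -D / (R * C)

# U_0 is the Heaviside step, so U'(t) = delta(t) e_0 and I(t) = H(t) e_0 with H(t) = e^{At}/R.
for t in (0.1, 1.0, 10.0):
    H = expm(A * t) / R
    print(t, H[0, 0], H[n, 0])    # input current I_0(t) and output current I_n(t)
```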

In Fig. 6 we present the results of the estimation (constructed according to Example 4) of the currents $I_0$ and $I_n$, which correspond to the ends of the line. In each graph we show the functions $\tilde I(t) \pm \varepsilon(t)$ on the segment $[t_*, t^*]$, where $\tilde I$ is an approximation to the corresponding current; the actual current $I$ lies somewhere between them (the approximation $\tilde I$ is just in the middle).

Fig. 6. The gates for Y-impulse responses of the circuit presented in Fig. 5. The graphs show the upper and lower estimates for the input current $I_0$ (left column) and the output current $I_n$ (right column) induced by the input voltage $U_0 = \vartheta$ (see Section 6). Rows correspond to different segments $[t_*, t^*]$ (see Example 4).

The precision $\varepsilon(t)$ is less than $10^{-3}$ for all $t \in [t_*, t^*]$. Indeed, Theorem 6 guarantees that the difference between the actual and approximate values of $I_0(t)$ and $I_n(t)$ is less than $\varepsilon(t)\cdot\frac{101}{120} \le 10^{-3}\cdot\frac{101}{120} \le 10^{-3}$ for $t \in [t_*, t^*]$, because $H_{00}(0) = H_{nn}(0) = \frac{1}{R} = \frac{n+1}{R_0} = \frac{101}{120}$.

We conclude with the following remark. The matrix $q_{10,15}(At_0)$ is badly conditioned, so it is hard to invert. To avoid this problem, we represented the rational function $\lambda \mapsto p(\lambda, t)/q_{10,15}(\lambda t_0)$ as a sum of rational functions with denominators of low degree, and only then substituted $A$ for $\lambda$. This idea is borrowed from [18].

Acknowledgment

The authors wish to express their gratitude to V.P. Zolotov for helpful discussions.

References

1. G.A. Baker and P. Graves-Morris, Padé Approximants. Addison-Wesley: London–Amsterdam–Sydney–Tokyo, 1981.
2. L.T. Pillage and R.A. Rohrer, "Asymptotic waveform evaluation for timing analysis." IEEE Trans. Computer-Aided Design, vol. 9, no. 4, pp. 352–366, 1990.
3. P. Feldmann and R.W. Freund, "Efficient linear circuit analysis by Padé approximation via the Lanczos process." IEEE Trans. Computer-Aided Design, vol. 14, no. 5, pp. 639–649, 1995.
4. A. Odabasioglu, M. Celik, and L.T. Pileggi, "PRIMA: Passive reduced-order interconnect macromodeling algorithm." IEEE Trans. Computer-Aided Design, vol. 17, no. 8, pp. 645–654, 1998.
5. A. Odabasioglu, M. Celik, and L.T. Pileggi, "Practical considerations for passive reduction of RLC circuits," in Proc. IEEE/ACM Int. Conf. on Computer-Aided Design, Nov. 1999, pp. 214–219.
6. J.I. Aliaga, D.L. Boley, R.W. Freund, and V. Hernandez, A Lanczos-Type Method for Multiple Starting Vectors. Numerical Analysis Manuscript no. 98-3-05, Bell Laboratories, Murray Hill, New Jersey, Sept. 1998. http://cm.bell-labs.com/cs/doc/98.
7. D.L. Boley, "Krylov space methods on state-space control models." Circuits Systems Signal Processing, vol. 13, no. 6, pp. 733–758, 1994.
8. R.W. Freund, Passive Reduced-Order Modelling via Krylov Subspace Methods. Numerical Analysis Manuscript no. 00-3-02, March 2000. http://cm.bell-labs.com/cs/doc/00.
9. E.J. Grimme, Krylov Projection Methods for Model Order Reduction. Ph.D. Thesis, University of Illinois at Urbana-Champaign, 1997.
10. D. Skoogh, A Rational Method for Model Order Reduction. Report no. 1998-47. http://www.math.chalmers.se/Math/Research/Preprints.
11. W. Rudin, Functional Analysis. McGraw-Hill: New York, 1990.
12. G.E. Shylov, Finite-Dimensional Linear Spaces. Nauka: Moscow, 1969 (in Russian).
13. G.H. Golub and C.F. Van Loan, Matrix Computations. The Johns Hopkins University Press: Baltimore–London, 1989.
14. N. Balabanian, Network Synthesis. Prentice-Hall, 1958.
15. K.I. Babenko, Foundations of Numerical Analysis. Nauka: Moscow, 1986 (in Russian).
16. E.Ya. Remez, Foundations of Numerical Methods of Chebyshev Approximation. Naukova Dumka: Kiev, 1969 (in Russian).
17. E.W. Cheney and H.L. Loeb, "Two new algorithms for rational approximation." Numer. Math., vol. 3, no. 1, pp. 72–75, 1961.
18. A. Dounavis, X. Li, M.S. Nakhla, and R. Achar, "Passive closed-form transmission-line model for general-purpose circuit simulators." IEEE Trans. Microwave Theory and Techniques, vol. 47, no. 12, pp. 2450–2459, 1999.

Vitalii Gennad'evich Kurbatov received his Candidate degree in 1978 and his Doctoral degree in 1992, both in physical and mathematical sciences. He was with Voronezh State University, Russia, from 1969 to 1994. Since 1994 he has been with Lipetsk State Technical University, Russia. His research interests: operator theory, differential equations, control, electrical engineering.

Maria Nikolaevna Oreshina was born in 1980. She is a postgraduate student at Lipetsk State Technical University, Russia. Her research interests: operator theory, numerical methods, electrical engineering.
