Model Reduction for Time-Varying Descriptor Systems Using Krylov-Subspace Projection Techniques

Mohammad-Sahadet Hossain, Peter Benner
Department of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany

Abstract

We present a projection approach for model reduction of linear time-varying descriptor systems, based on earlier ideas in the work of Phillips and others. The idea behind the proposed procedure is a multipoint rational approximation of the monodromy matrix of the corresponding differential-algebraic equation, realized by orthogonal projection onto a rational Krylov subspace. The algorithmic realization of the method employs recycling techniques for shifted Krylov subspaces and their invariance properties. The proposed method works efficiently for macromodels, such as time-varying circuit systems and models arising in network interconnection, on limited frequency ranges. Bode plots and step responses are used to illustrate both the stability and accuracy of the reduced-order models.

Keywords: Krylov subspace techniques, linear time-varying systems, recycled Krylov subspaces, model-order reduction.

1 Introduction

Model reduction using projection formulations has become a popular and well-accepted technique in the fields of signal analysis and electrical interconnect. Today, one of the best choices of projection subspace for model reduction of LTI systems is the Krylov subspace. In this approach, the lower-order model is obtained such that some of the first moments (and/or Markov parameters) [4, 1] of the original and reduced systems are matched, where the moments are the coefficients of the Taylor series expansion of the transfer function at a suitable point. Methods based on multipoint rational approximations [2, 1] are also efficient in particular cases.

However, model reduction for time-varying systems is more involved in this projection approach. Balanced truncation methods [14, 5] have been applied, but there is some uncertainty about the efficient implementation of these techniques. Therefore, considerable effort has gone into developing model reduction techniques based on rational approximations and projection formulations. The first task in a projection technique is to analyze the given LTV system in this framework. Once the framework is identified, the next step is to determine appropriate subspaces and to compute them efficiently. In this paper, we focus on RF problems, and our interest is in periodically linear time-varying (PLTV) systems; our implementation and application examples also focus on these systems. In Section 2, we discuss the essentials of LTI model reduction, along with some definitions and theorems related to our projection approach. In Section 3, we present the LTV signal analysis for a small response and discuss the frequency-domain matrix formulation of the system, which underlies the model reduction procedure. In the next section, we choose a projection framework using a finite discretization method, approximate the appropriate subspaces, and discuss how to compute them efficiently. We also show that the model based on an approximate multipoint Krylov subspace can be obtained efficiently from the approximated subspace. In the last section we give numerical examples of the proposed model reduction technique. Throughout, we generalize the method from [11] to differential-algebraic equations.

2 Background

2.1 LTI systems

Before going into the details of linear time-varying systems, we discuss some essentials of linear time-invariant systems and model reduction based on Krylov-subspace projection methods. Consider the linear time-invariant multi-input multi-output (MIMO) system in differential-algebraic form

    E ẋ(t) = A x(t) + B u(t),
    y(t) = C^T x(t),                                    (1)

where x(t) ∈ R^n is the system state, u(t) ∈ R^p (p ≤ n) the system input, y(t) ∈ R^q the system output, n the system order, and p and q the numbers of system inputs and outputs, respectively. The matrices E, A ∈ R^{n×n}, B ∈ R^{n×p}, C ∈ R^{n×q} are time-invariant. For single-input single-output (SISO) systems, p = q = 1, and the matrices B and C become vectors b and c, respectively. Applying the Laplace transform to the system, we obtain

    s E x̃(s) = A x̃(s) + B ũ(s),
    ỹ(s) = C^T x̃(s),                                   (2)

where x̃(s), etc., denote the Laplace transforms of x(t), etc. It is easily shown that the output of the above system is ỹ(s) = C^T(sE − A)^{−1}B ũ(s), where C^T(sE − A)^{−1}B is called the transfer function of the system. Since the transfer function is a rational function in s, it is natural to approximate it by a rational approach, such as Padé approximation [4, 1, 15]. In Padé approximation, the reduced-order model is obtained such that 2r + 1 moments (SISO) of the original and reduced systems match, where r is the order of the reduced system. In the MIMO case the number of matched moments may vary: it is shown in [9] that, in the time domain, a reduced rth-order p-input q-output model can match (r/p) + (r/q) time-moment matrices of its original system; for p = q, the number of matched moment matrices is 2r/p. Suppose that a rational approximation of the LTI system (1) is given by

    Ê ż(t) = Â z(t) + B̂ u(t),
    ŷ(t) = Ĉ^T z(t),

where Ê, Â ∈ R^{r×r}, z(t) ∈ R^r, B̂ ∈ R^{r×p}, Ĉ ∈ R^{r×q}, and r ≪ n for a successful order reduction. The reduced matrices are obtained from the projection matrices W and V by

    Ê = W^T E V,   Â = W^T A V,   B̂ = W^T B,   Ĉ = V^T C.     (3)

One can show that the kth derivative at zero, or moment, of the transfer function is given by C^T A^{−k−1} B. Clearly, the desired approximations are connected with powers of the matrix A^{−1} acting on B, or powers of A^{−T} acting on C. Hence the main task of the reduction technique is to connect the moments with the projection matrices W and V. More of these ideas will be discussed in the next section.
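The moment expansion can be checked numerically. The following sketch (NumPy, with randomly generated matrices of our own choosing, not taken from the paper) compares H(s) = c^T(sE − A)^{−1}b against a truncated Taylor series built from the moments; in the general formulation with a nontrivial E, the kth moment about zero is −c^T(A^{−1}E)^k A^{−1}b, which reduces to the form C^T A^{−k−1}B quoted above when E = I (up to sign convention).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Randomly generated test matrices (our own choices, not from the paper);
# A is shifted to keep it safely invertible.
A = rng.standard_normal((n, n)) - 5 * np.eye(n)
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def H(s):
    """Transfer function H(s) = c^T (sE - A)^{-1} b."""
    return c @ np.linalg.solve(s * E - A, b)

def moment(k):
    """k-th moment about s = 0: m_k = -c^T (A^{-1}E)^k A^{-1} b."""
    M = np.linalg.solve(A, E)            # A^{-1} E
    v = np.linalg.solve(A, b)            # A^{-1} b
    return -c @ (np.linalg.matrix_power(M, k) @ v)

# The moments are the Taylor coefficients of H(s) about s = 0
s = 1e-3
series = sum(moment(k) * s**k for k in range(8))
print(H(s), series)   # the two values agree to roughly machine precision
```

A reduced-order model that matches the leading moments therefore reproduces H(s) accurately near the expansion point, which is exactly what the projection framework below exploits.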

2.2 The Krylov-Subspace Projection

Definition 1 (Krylov subspace). The Krylov subspace of order m is the space defined as

    κ_m(A, b) = span{b, Ab, A²b, …, A^{m−1}b},           (4)

where A ∈ R^{n×n} and b ∈ R^n is called the starting vector. The vectors b, Ab, A²b, …, A^{m−1}b that construct the subspace are called basic vectors.


If the ith basic vector in the Krylov subspace (4) is a linear combination of the previous vectors, then all subsequent basic vectors can be written as linear combinations of the first i − 1 vectors. Therefore, the first independent basic vectors can be taken as a basis of the Krylov subspace.

Definition 2 (block Krylov subspace). The block Krylov subspace of order m is the space defined as

    κ_m(A, B) = span{B, AB, A²B, …, A^{m−1}B},           (5)

where A ∈ R^{n×n} and B ∈ R^{n×m} contains the starting vectors. The block Krylov subspace with m starting vectors can be considered as a union of m Krylov subspaces, one for each starting vector.

Theorem 1. If the columns of the matrix V form a basis for the order-m Krylov subspace κ_m(A^{−1}E, A^{−1}b) and the matrix W ∈ R^{n×m} is chosen such that Â is nonsingular, then the first m moments (about zero) of the original and reduced-order systems match. Proof: see [18, 9].

Theorem 2. If the columns of the matrix W form a basis for the order-n Krylov subspace κ_n(A^{−T}E^T, A^{−T}c) and the matrix V ∈ R^{n×m} is chosen such that Â is nonsingular, then the first n moments (about zero) of the original and reduced-order systems match. Proof: see [18, 9].

Theorem 3. Assume that A and Â are invertible. If the columns of the matrices V and W form bases for the Krylov subspaces κ_m(A^{−1}E, A^{−1}b) and κ_n(A^{−T}E^T, A^{−T}c), respectively, then the first m + n moments of the original and reduced-order systems match. Proof: see [18, 9].

The projection matrices play an important role in the formulation of the reduced model. For the Padé approximation [6, 9, 15] and the Lanczos approximation technique [1, 18, 6], the choice of W and V is W^T = W_l^T A^{−1}, V = V_l, where W_l and V_l contain the biorthogonal Lanczos vectors; for the Arnoldi method [1, 18], V_a = W_a, where V_a is the orthonormal matrix generated by the Arnoldi process. In our projection technique we take W = V, because in that case the reduced model inherits structural properties, such as stability and passivity, from the original model. We extend this approach to multipoint approximations, where the moments are matched at several points in the complex plane [9, 3]; in that case W and V contain a basis for the union of the Krylov subspaces constructed at the different frequency points.
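Theorem 1 with the structure-preserving choice W = V can be illustrated numerically. The sketch below (toy random matrices, our own construction) builds an orthonormal basis of κ_m(A^{−1}E, A^{−1}b) by Gram-Schmidt and checks that the first m moments of the one-sided projection match those of the original system.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
A = rng.standard_normal((n, n)) - 4 * np.eye(n)   # invertible by construction
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Orthonormal basis of K_m(A^{-1}E, A^{-1}b) via Gram-Schmidt
M = np.linalg.solve(A, E)
v0 = np.linalg.solve(A, b)
V = np.zeros((n, m))
for j in range(m):
    w = v0 if j == 0 else M @ V[:, j - 1]
    for i in range(j):
        w = w - (V[:, i] @ w) * V[:, i]
    V[:, j] = w / np.linalg.norm(w)

# One-sided projection W = V (the structure-preserving choice)
Ar, Er = V.T @ A @ V, V.T @ E @ V
br, cr = V.T @ b, V.T @ c

def moments(A_, E_, b_, c_, k):
    """First k moments about zero of c^T (sE - A)^{-1} b."""
    v = np.linalg.solve(A_, b_)
    M_ = np.linalg.solve(A_, E_)
    out = []
    for _ in range(k):
        out.append(-c_ @ v)
        v = M_ @ v
    return np.array(out)

print(moments(A, E, b, c, m))
print(moments(Ar, Er, br, cr, m))   # the first m moments coincide
```

The reduced system here has order m = 3 instead of n = 8, yet reproduces the first three Taylor coefficients of the transfer function exactly.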

3 LTV systems

The time-varying analogue of (1) is the state-space model

    E(t) ẋ(t) = A(t) x(t) + B(t) u(t),
    y(t) = C^T(t) x(t),                                 (6)

where E(t), A(t), B(t), C(t) are matrices of orders compatible with x(t), u(t), and y(t), assumed to be continuous functions of time. In integrated-circuit applications, LTV systems most commonly arise from linearization of a nonlinear system of equations around a time-varying operating point. We discuss this linearization scheme in the next subsection.

3.1 Analysis of LTV signals

We consider a nonlinear system driven by a large signal b_l and a small signal u(t), producing an output z(t). For simplicity we assume that both u(t) and z(t) are scalars. We model the nonlinear system using differential-algebraic equations, which describe the circuit equations [16, 13, 7, 8] and many other applications:

    dq(v(t))/dt + f(v(t)) = b_l(t) + b u(t),
    z(t) = c^T v(t),                                    (7)

where v(t) represents the nodal voltages and branch currents, u(t) the input source, and q(·) and f(·) are nonlinear functions describing the charge/flux and resistive terms, respectively, in the circuit; b and c are vectors that link the input and output to the rest of the system. In equation (7), the voltage v(t) and input variable u(t) represent total quantities. To obtain an LTV model, we split them into a large-signal part and a small-signal part [16, 17, 12],

    u = u^(L) + u^(s),   v = v^(L) + v^(s).             (8)

Linearizing around the large-signal part v^(L) [16, 12, 13], we obtain a linear time-varying system of the form

    Ḡ(t) v^(s) + d/dt ( C̄(t) v^(s) ) = b u^(s)(t),      (9)

where Ḡ(t) = ∂f(v^(L))(t)/∂v^(L) and C̄(t) = ∂q(v^(L))(t)/∂v^(L) are the time-varying conductance and capacitance matrices for the small response v^(s). We can write (9) in the more general form

    Ḡ(t) v + d/dt ( C̄(t) v ) = b u(t)                   (10)

for a small signal v. To relate this to the standard notation, we may make the identification

    E(t) = C̄(t),   A(t) = −( Ḡ(t) + d C̄(t)/dt ).

3.2 Frequency-domain matrix of the LTV system

Most of the work in model reduction for LTI systems has been based on rational approximations of the frequency-domain transfer function. Thus motivated, we adopt the formalism of L. Zadeh's variable transfer functions, which were developed to describe time-varying systems. In this formalism the response v(t) can be written as an inverse Fourier transform of the product of a time-varying transfer function and the Fourier transform u(ώ) of u(t):

    v(t) = ∫_{−∞}^{∞} h(iώ, t) u(ώ) e^{iώt} dώ,        (11)

where h(iώ, t) is the time-varying transfer function. To obtain the frequency-by-frequency response, we take u to be a single-frequency input, expressed in terms of a delta function, u(ώ) = u(ω)δ(ω − ώ). Equation (11) then becomes

    v(t) = h(iω, t) u(ω) e^{iωt}.                       (12)

Writing s = iω and substituting into (10), we get an equation for h(s, t):

    Ḡ(t) h(s, t) + d/dt ( C̄(t) h(s, t) ) + s C̄(t) h(s, t) = b.   (13)

Now defining the operator

    K = Ḡ(t) + (d/dt) C̄(t),

(13) can be written as

    [K + sC̄] h(s, t) = b.                               (14)

It is also possible to obtain this expression from finite-difference formulations [see [16, 19]] in the limit as the timestep goes to zero. Equation (14) is the frequency-domain analogue of the LTI transfer-function expression. Although h(s, t) comes from a lumped linear system, it is a rational function with an infinite number of poles: in a time-varying system with fundamental frequency T, if ξ is a pole of the system, then ξ + kT, k an integer, is also a pole of h(s, t). The reason is that signals can be shifted by a harmonic factor in an LTV system.

4 Model Reduction of LTV Systems

4.1 Discretization of rational functions

We assume here that the time-varying transfer functions are rational, so it is reasonable to obtain the reduced-order model from the same sorts of rational approximations that have proved suitable for reduction of LTI systems. We therefore first seek a representation of the transfer functions in terms of finite-dimensional matrices. The rational matrix function can be obtained by discretizing the operators K and C̄. We focus on periodically time-varying linear (PTVL) systems. In this case we must specify C(t) and B(t) over a fundamental period, though we do not have much freedom of choice: in circuit problems, since the output ports are fixed, we may write C(t) = C[c₁(t), c₂(t), …, c_p(t)] and B(t) = B[b₁(t), b₂(t), …, b_q(t)] for some periodic scalar functions c_i(t), b_i(t). The most common choice, as in [13, 12], is to take c(t) and b(t) to be single-tone sinusoids. Since the focus of this paper is on the PTVL systems that occur in RF applications, at this point we introduce explicit assumptions about the time variation of the system and discuss how the input and output functions interact. For simplicity we only discuss the SISO case. In [13, 12], a backward-Euler discretization is used to solve the periodically time-varying linear system described by (13). Following this approach, we obtain

    (K + sC̄) h(s) = B

with

K = \begin{bmatrix}
\frac{\bar C_1}{\Delta_1}+\bar G_1 & & & -\frac{\bar C_M}{\Delta_1}\\
-\frac{\bar C_1}{\Delta_2} & \frac{\bar C_2}{\Delta_2}+\bar G_2 & & \\
 & \ddots & \ddots & \\
 & & -\frac{\bar C_{M-1}}{\Delta_M} & \frac{\bar C_M}{\Delta_M}+\bar G_M
\end{bmatrix}    (15)

C̄ = diag( C̄₁, C̄₂, …, C̄_M ),                            (16)

h(s) = [ h₁(s)  h₂(s)  …  h_M(s) ]^T,                   (17)

B = [ b₁  b₂  …  b_M ]^T,                               (18)

where Ḡ_j = Ḡ(t_j), C̄_j = C̄(t_j), b_j = b(t_j), h_j(s) = h(s, t_j), and Δ_j is the jth timestep. More generally we make the identification

    K = −A_TV,   C̄ = E_TV,                              (19)

with additionally

    c = [ c₁  c₂  …  c_M ]^T,                           (20)

where c_j = c(t_j). We then have the matrix of baseband-equivalent transfer functions H_TV in the usual way:

    H_TV = c^T h(s) = c^T (sE_TV − A_TV)^{−1} B.        (21)

It is clear from the above expression that the frequency-domain matrix (sC̄ + K) in (14) is equivalent to (sE_TV − A_TV) through the discretization. The discretization procedure has converted the time-varying system (14) into an equivalent LTI system whose dimension is larger by a factor equal to the number of time steps in the discretization. At this point, any algorithmic approach used for model reduction of lumped LTI systems can be applied to the matrices defined in (15)-(21).

4.2 Approximation Scheme of the Krylov Subspace

Following the work in [13], the transfer function for the small-signal steady-state response of the periodically time-varying system is obtained by solving the finite-difference equations

\begin{bmatrix}
\frac{\bar C_1}{\Delta_1}+\bar G_1 & & & -\alpha(s)\frac{\bar C_M}{\Delta_1}\\
-\frac{\bar C_1}{\Delta_2} & \frac{\bar C_2}{\Delta_2}+\bar G_2 & & \\
 & \ddots & \ddots & \\
 & & -\frac{\bar C_{M-1}}{\Delta_M} & \frac{\bar C_M}{\Delta_M}+\bar G_M
\end{bmatrix}
\begin{bmatrix} \tilde v(t_1)\\ \tilde v(t_2)\\ \vdots\\ \tilde v(t_M) \end{bmatrix}
=
\begin{bmatrix} \tilde B(s,t_1)\\ \tilde B(s,t_2)\\ \vdots\\ \tilde B(s,t_M) \end{bmatrix}    (22)

where α(s) ≡ e^{−sT} and T is the fundamental period. The transfer function h(s, t) is then given by

    h(s, t) = e^{−st} ṽ(t).                             (23)

Although (22) can be solved using sparse matrix techniques, a more efficient approach can be applied. To describe it, we decompose the matrix of (15) into two triangular parts, A_TV = L + U, where L is the lower triangular portion and U the upper triangular portion of A_TV in (22), i.e.,

L = \begin{bmatrix}
\frac{\bar C_1}{\Delta_1}+\bar G_1 & & \\
-\frac{\bar C_1}{\Delta_2} & \frac{\bar C_2}{\Delta_2}+\bar G_2 & \\
 & \ddots & \ddots \\
 & & -\frac{\bar C_{M-1}}{\Delta_M} \quad \frac{\bar C_M}{\Delta_M}+\bar G_M
\end{bmatrix}    (24)

and

U = \begin{bmatrix}
0 & \cdots & 0 & -\frac{\bar C_M}{\Delta_1}\\
\vdots & & & \vdots\\
0 & \cdots & 0 & 0
\end{bmatrix}    (25)
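The block structures in (15)-(16) and the splitting (24)-(25) are straightforward to assemble explicitly. The following sketch (arbitrary sample matrices Ḡ_j, C̄_j and uniform timesteps, both our assumptions) builds K, C̄, L and U, verifies K = L + U, and evaluates the lifted transfer function c^T(K + sC̄)^{−1}b as in (14) and (21):

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 3, 4                        # block size and number of timesteps (toy values)
dt = np.full(M, 0.25)              # uniform timesteps Delta_j (our assumption)
Gj = [rng.standard_normal((n, n)) for _ in range(M)]   # samples of Gbar(t_j)
Cj = [rng.standard_normal((n, n)) for _ in range(M)]   # samples of Cbar(t_j)

N = n * M
K = np.zeros((N, N))
L = np.zeros((N, N))               # block lower-triangular part, eq. (24)
U = np.zeros((N, N))               # periodic corner block, eq. (25)
Cbar = np.zeros((N, N))            # block-diagonal matrix, eq. (16)
for j in range(M):
    rows = slice(j * n, (j + 1) * n)
    K[rows, rows] = Cj[j] / dt[j] + Gj[j]        # diagonal blocks C_j/D_j + G_j
    L[rows, rows] = Cj[j] / dt[j] + Gj[j]
    Cbar[rows, rows] = Cj[j]
    if j > 0:
        cols = slice((j - 1) * n, j * n)
        K[rows, cols] = -Cj[j - 1] / dt[j]       # subdiagonal blocks -C_{j-1}/D_j
        L[rows, cols] = -Cj[j - 1] / dt[j]
K[:n, (M - 1) * n:] = -Cj[M - 1] / dt[0]         # corner block -C_M/D_1
U[:n, (M - 1) * n:] = -Cj[M - 1] / dt[0]

print(np.allclose(K, L + U))                     # True: the splitting (24)-(25)

# Lifted transfer function c^T (K + s*Cbar)^{-1} b, cf. (14) and (21)
b = rng.standard_normal(N)
c = rng.standard_normal(N)
Htv = lambda s: c @ np.linalg.solve(K + s * Cbar, b)
print(Htv(0.5))
```

The matrix K here is only N = nM dimensional for the toy sizes; in the circuit setting it inherits its sparsity from the backward-Euler stencil, which is what makes the iterative solves below attractive.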

Using the expressions for L and U, (22) becomes

    (L + α(s)U) ṽ = B̃(s).                               (26)

If we define the small-signal modulation operator

    ψ(s) = diag( I e^{st₁}, I e^{st₂}, …, I e^{st_M} ),  (27)

then we obtain an expression for the transfer function,

    h(s) = ψ^H(s) ṽ(s),   B̃(s) = ψ(s) B,                (28)

and we can write

    K + sC̄ ≈ ψ(s) [L + α(s)U] ψ^H(s),                   (29)

or, more specifically, we can make the identification

    sE_TV − A_TV ≈ ψ(s) [L + α(s)U] ψ^H(s).             (30)

The left-hand side of (29) represents a spectral discretization, and the right-hand side a finite-difference discretization; the difference between the two sides depends on how the applied small signal is treated. Suppose we need to solve (26) for several different B̃. Again following [13], consider preconditioning with the matrix L. The system can then be written as

    (I + α(s) L^{−1} U) ṽ = L^{−1} B̃(s).                (31)

The structure of (31) suggests using a recycled Krylov-subspace technique to solve (26), because of the "shift-invariance" property of Krylov subspaces [see [15]]: the Krylov subspace of a matrix A is invariant under shifts of the form A → A + βI, so the same Krylov subspace can be used to solve (26) at different frequency points. Recycling the Krylov space also accelerates the solution for different right-hand sides. Our interest is in how the finite-difference method approximates an appropriate basis V for projecting the matrices A_TV, E_TV, c and b. Suppose the projection matrix V is not a basis for the Krylov subspace of (sE_TV − A_TV)^{−1}, but of a nearby matrix. The reduced model would still be a projection of the original, with some small error. As long as the model is not evaluated in the neighborhood of a pole, the additional errors introduced into the model can be expected to remain small. This suggests obtaining the basis for the projector in the model-reduction procedure from the finite-difference equations, since a good approximation of the Krylov subspace can be achieved this way. Moreover, thanks to the recycling scheme, obtaining the projectors from expansions about multiple frequency points incurs no extra cost compared with a single-frequency expansion. This leads to the proposed model reduction method, Algorithm 1. To simplify the presentation, only real-valued expansion points are used in Algorithm 1, and the input function b(t) is taken as a constant vector over the whole period. The overall algorithm proceeds in two stages. In the first stage, it produces a matrix approximating the Krylov subspace for each frequency point s_i.
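The shift-invariance property underlying the recycling can be checked directly: κ_m(A, b) and κ_m(A + βI, b) span the same subspace, since (A + βI)^k b is a linear combination of b, Ab, …, A^k b. A small numeric illustration (all values arbitrary):

```python
import numpy as np

def krylov_basis(A, b, m):
    """Orthonormal basis of span{b, Ab, ..., A^{m-1} b} (Gram-Schmidt)."""
    Q = np.zeros((len(b), m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ Q[:, j - 1]
        for i in range(j):
            w = w - (Q[:, i] @ w) * Q[:, i]
        Q[:, j] = w / np.linalg.norm(w)
    return Q

rng = np.random.default_rng(3)
n, m, beta = 10, 4, 2.7            # arbitrary sizes and shift
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

Q1 = krylov_basis(A, b, m)
Q2 = krylov_basis(A + beta * np.eye(n), b, m)   # shifted matrix, same subspace
# Projecting one basis onto the other loses nothing
resid = Q2 - Q1 @ (Q1.T @ Q2)
print(np.linalg.norm(resid))       # ~ machine precision
```

Because of this invariance, a basis built at one frequency point can be reused (recycled) for the shifted systems arising at the other expansion points.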
Note that at each iteration i ∈ [1, n], Algorithm 1 generates m columns of the projection matrix V, where m is the approximation order of the Krylov subspace. Once the projection columns of V for a particular s_i have been obtained, the algorithm moves on to the next frequency point. The projection matrix V is the union of all the projections obtained for the points s_i, i = 1, …, n; the number of columns of V is therefore mn. This can be expressed as

    range(V) = ⋃_{i=1}^{n} κ_i(A_TV, B).

In Algorithm 1, step 9 uses the recycling technique to produce the projection columns of V. Each new vector is obtained by an inner Krylov iteration with the matrix A_TV. Due to the shift-invariance property [13], and since each new right-hand side u_i in the model reduction procedure is drawn from a Krylov subspace κ_m(A_TV, B), it is reasonable to use the same Krylov subspace to solve step 9 at different frequency points. The recycling also ensures that the next term in κ_i(A_TV, B) is related to κ_m(A_TV, B). Initially, step 9 takes B as its right-hand side and generates the first column of V; for the second column, it takes C̄v, where v is the previously orthogonalized column of V. The total number of orthogonalized columns is counted by k, which is initialized at the start of the algorithm. The net result is a projection matrix V with mn orthonormal columns.

Algorithm 1: Approximate Multipoint Krylov-subspace Model Reduction
INPUT:  K, E, b, c, n, m, s, α(s)
OUTPUT: V, Â, Ê, b̂, ĉ
 1  k = 1
 2  for i = 1 to n do
 3      for j = 1 to m do
 4          if j = 1 then
 5              w = B
 6          else
 7              w = C̄ v_{k−1}
 8          end
 9          u = ψ^H(s_i) [L + α(s_i)U]^{−1} ψ(s_i) w
10          for l = 1 to k − 1 do
11              u = u − (v_l^T u) v_l
12          end
13          v_k = u / ‖u‖
14          k = k + 1
15      end
16  end
17  Â = −V^T K V
18  Ê = V^T E V
19  b̂ = V^T b
20  ĉ = V^T c
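For illustration, Algorithm 1 can be transcribed almost line by line into dense NumPy. The sketch below is ours, not the authors' implementation: ψ, L, U and all sizes are toy stand-ins built from a small backward-Euler discretization as in (15), (24), (25) and (27), with real expansion points.

```python
import numpy as np

def algorithm1(Lmat, Umat, Cbar, K, E, B, b, c, s_pts, m, alpha, psi):
    """Dense transcription of Algorithm 1 (toy sketch, not the authors' code)."""
    cols = []
    for si in s_pts:
        P = psi(si)
        Minv = np.linalg.inv(Lmat + alpha(si) * Umat)   # [L + alpha(s_i)U]^{-1}
        for j in range(m):
            w = B if j == 0 else Cbar @ cols[-1]        # steps 4-8
            u = P.T @ (Minv @ (P @ w))                  # step 9 (real s_i: psi^H = psi^T)
            for v in cols:                              # steps 10-12: orthogonalize
                u = u - (v @ u) * v
            cols.append(u / np.linalg.norm(u))          # steps 13-14
    V = np.column_stack(cols)
    return V, -V.T @ K @ V, V.T @ E @ V, V.T @ b, V.T @ c   # steps 17-20

# Toy periodic backward-Euler data as in (15), (24), (25), (27)
rng = np.random.default_rng(4)
n, M, m = 3, 4, 2
N, T = n * M, 1.0
dt = T / M
t = dt * np.arange(1, M + 1)
Gj = [rng.standard_normal((n, n)) + 3 * np.eye(n) for _ in range(M)]
Cj = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(M)]

Lmat, Umat, Cbar = np.zeros((N, N)), np.zeros((N, N)), np.zeros((N, N))
for j in range(M):
    r = slice(j * n, (j + 1) * n)
    Lmat[r, r] = Cj[j] / dt + Gj[j]
    Cbar[r, r] = Cj[j]
    if j > 0:
        Lmat[r, slice((j - 1) * n, j * n)] = -Cj[j - 1] / dt
Umat[:n, (M - 1) * n:] = -Cj[M - 1] / dt
K = Lmat + Umat

B = np.tile(rng.standard_normal(n), M)      # b(t) constant over the period
c = np.tile(rng.standard_normal(n), M)
alpha = lambda s: np.exp(-s * T)
psi = lambda s: np.kron(np.diag(np.exp(s * t)), np.eye(n))   # eq. (27)

V, Ahat, Ehat, bhat, chat = algorithm1(Lmat, Umat, Cbar, K, Cbar, B, B, c,
                                       [0.1, 1.0], m, alpha, psi)
print(V.shape)                              # (12, 4): m columns per expansion point
print(np.allclose(V.T @ V, np.eye(V.shape[1])))   # columns are orthonormal
```

Note that the Gram-Schmidt loop orthogonalizes each new vector against all previously generated columns, across expansion points, exactly as the global counter k in the pseudocode implies.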

In addition, to compare the efficiency of the proposed recycling technique, we also consider a block Arnoldi algorithm, Algorithm 2, which can be viewed as an adaptation of the block rational Arnoldi algorithm introduced in [10, 8] in the context of multipoint RLC networks. The innermost iteration of Algorithm 2, which runs over k ∈ [1, m], generates a block matrix for a particular frequency point. In that case, the projection matrix V is the union of these matrix blocks V_i with orthonormal columns. Note that no recycling technique is used in Algorithm 2. Pictorial and numerical comparisons are given in the next section.

Algorithm 2: Block Arnoldi Method to Approximate the Multipoint Krylov Subspace
INPUT:  K, E, b, c, n, m, s, α(s)
OUTPUT: V, Â, Ê, b̂, ĉ
 1  for i = 1 to n do
 2      for k = 1 to m do
 3          if k = 1 then
 4              w = B
 5          else
 6              w = C̄ v_{k−1}
 7          end
 8          u = ψ^H(s_i) [L + α(s_i)U]^{−1} ψ(s_i) w
 9          for l = 1 to k − 1 do
10              u = u − (v_l^T u) v_l
11          end
12          v_k^i = u / ‖u‖
13      end
14      V_i = [V_1^i, …, V_m^i]
15  end
16  V = [V_1, V_2, …, V_n]
17  Â = −V^T K V
18  Ê = V^T E V
19  b̂ = V^T b
20  ĉ = V^T c

5 Numerical experiments

In this section we present numerical examples to illustrate the effectiveness of the described model reduction method for linear time-varying systems. We test the proposed algorithm on three different cases of time-varying systems, applied to a constructed test problem described as follows:

    A(t) =    

\begin{bmatrix}
2+\sin t & 0 & 5\cos t & 1 & 0 & 0\\
0 & 3+\sin t & 0 & 0 & 1 & 1+\sin t\\
0 & 0 & 4\cos t & 1 & 0 & 1\\
3e^t & 0 & 3 & e^t & 1 & 0\\
1 & 0 & 2 & 0 & \sin 2t & 1\\
2 & 1 & 0 & 0 & 1 & 2\cos 2t
\end{bmatrix},

E(t) = \begin{bmatrix}
2\sin t & 0 & \cos t & 0 & 1 & -\sin t\\
2 & 3t & 2 & t\sin t & 0 & 0\\
0 & 1 & 0 & 3 & 0 & 1\\
0 & 1 & 1 & 1 & 0 & 1\\
1 & 2 & t & e^t & 1 & \sin t\\
1 & \sin 2t & 0 & e^t & 0 & \cos 2t
\end{bmatrix},

b(t) = [\,-2 \;\; 1 \;\; 0 \;\; 1 \;\; 4 \;\; 0\,]^T,

c(t) = [\,2-\sin t \;\; t \;\; 1-\cos t \;\; 0 \;\; 3+\sin t \;\; t\,]^T.

5.1 LTV system in generalized state-space form

First we consider a generalized LTV model with n = 6, and we compute reduced models for different values of m (m = 2, 3, 4, 5). We took six frequency points in the range from 10^{-6} Hz to 10^{6} Hz, and the sample problem was taken over a 200 ns period. Since the matrix K is a block matrix with 6 × 6 blocks, it is a 36 × 36 matrix. The vectors b_i and c_i were of order 6 × 1 for i = 1, …, 6. We used a rank-revealing QR factorization in the formation of the projected matrix, because the matrix V obtained from the direct use of the proposed algorithm had linearly dependent columns; the rank-revealing QR factorization truncated those redundant columns and produced an orthonormal basis of the projected matrix for the reduced system. The algorithm produced a very good approximation of the original model for m = 3, and the computing time for the reduced model was very small compared to the original model. The original projected matrix V was of order 36 × 18; after the rank-revealing QR factorization, 14 of its columns were deflated. For m = 4, 5 the approximation improved further, but the computing time increased. The Bode diagram and the step response illustrate the quality of the reduced model.
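The deflation described above can be carried out with a column-pivoted (rank-revealing) QR factorization, e.g. `scipy.linalg.qr` with `pivoting=True`. The sketch below uses a synthetic rank-deficient matrix of the same 36 × 18 shape (our own construction, not the paper's actual V):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(5)
# Synthetic 36 x 18 matrix whose last 14 columns are combinations of the first 4
V0 = rng.standard_normal((36, 4))
V = np.hstack([V0, V0 @ rng.standard_normal((4, 14))])

Q, R, piv = qr(V, mode="economic", pivoting=True)
tol = 1e-10 * abs(R[0, 0])
r = int(np.sum(np.abs(np.diag(R)) > tol))   # numerical rank from |diag(R)|
V_deflated = Q[:, :r]                       # orthonormal basis, redundancy removed
print(V.shape, "->", V_deflated.shape)      # (36, 18) -> (36, 4)
```

With column pivoting, the magnitudes of the diagonal of R are non-increasing, so thresholding them gives a reliable numerical rank and the leading columns of Q span the column space of V.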


Figure 1: The descriptor system transfer function HN(s) and reduced Hn(s) with relative error for m=3.

5.2 The standard LTV system

For a standard LTV model, we assumed every block of C̄ to be an identity matrix of order 6 × 6, i.e., E_TV is the identity matrix of order 36 × 36. The timesteps were taken constant, and we used the same frequency points as for the generalized system. The reduced-order model produced a very good approximation of the original model for m = 3, with a very small, negligible error. The algorithm was also tested for m = 4, 5; a good approximation of the original model was obtained with an even smaller error, but the computing time increased.

Table 1 shows the computational costs of reduced-model extraction and evaluation. Both approaches took little time to generate a reduced model from the original one, but the recycled Krylov technique took much less time than the block Arnoldi method. Increasing the order also increases the computational cost.

5.3 Descriptor system

We then tested the algorithm on a descriptor system. In that case, every block of the matrix C̄, i.e., of E_TV, is a singular matrix of order 6 × 6; the other parameters were the same as for the LTV system. A good approximation of the original model was obtained, and the relative error was very small. To test the efficiency of the algorithm, we again chose m = 4, 5; in those cases the reduced-order models were also very accurate.


Figure 2: The Bode diagram of the descriptor system.

Figure 3: Step response of the descriptor system.


Figure 4: The LTV transfer function HN(s) and reduced Hn(s) with relative error for m=3.

    Systems        Order   Exact     MOR/Recycle   MOR/Block Arnoldi
    Descriptor     m=3     0.0793s   0.0043s       0.0120s
                   m=4     0.0803s   0.0047s       0.0131s
                   m=5     0.0831s   0.0049s       0.0133s
    Standard LTV   m=3     0.0778s   0.0042s       0.0112s
                   m=4     0.0807s   0.0044s       0.0118s
                   m=5     0.0817s   0.0048s       0.0124s
    DAE            m=3     0.0812s   0.0050s       0.0108s
                   m=4     0.0830s   0.0054s       0.0132s
                   m=5     0.0855s   0.0053s       0.0143s

Table 1: Comparison of computational costs for LTV model reduction procedures.

In the work of J. Phillips [11], the reduction techniques were implemented in a time-domain RF circuit simulator. The large-signal periodic steady state is calculated using the shooting method, and backward-difference formulae are used for discretization. The input function b(t) was taken constant over a period. The reduced model was a very good approximation of the original model.


Figure 5: The Bode diagram of the LTV system.

Figure 6: Step response of the LTV system.


Figure 7: Transfer function HN(s) and reduced Hn(s) with relative error for the DAE system (m=3).

Figure 8: The Bode diagram of the DAE system.


Figure 9: Step response of the DAE system.

6 Concluding remarks

The model reduction approach applied here is efficient for models with very small signals and time parameters, and is therefore capable of representing very complicated physical dynamics in circuit problems. Step responses for the reduced models were also generated. We observed that the reduced models described in this paper work over specific frequency ranges: when the frequency is greater than 10^8 Hz, the methods gave unstable results. There are several possible future extensions of the ideas in this paper; in particular, the formalism and algorithms can be extended straightforwardly to quasi-periodic small-signal analysis.

References

[1] Z. Bai. Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Appl. Numer. Math., 43(1-2):9–44, 2002.
[2] P. Benner. Solving large-scale control problems. IEEE Control Systems Magazine, 24:44–59, 2004.
[3] P. Benner. Control theory. In L. Hogben (ed.), Handbook of Linear Algebra, Chapter 57. Chapman and Hall/CRC, 2006.
[4] P. Benner. Numerical linear algebra for model reduction in control and simulation. GAMM Mitteilungen, 23:275–296, 2006.
[5] P. Benner. Model reduction algorithms using spectral projection methods. Householder Symposium XV, 12:23–33, June 17–21, 2002.
[6] B. N. Datta. Numerical Methods for Linear Control Systems. Elsevier Academic Press, 2004.
[7] E. Gad and M. Nakhla. Efficient model reduction of linear time-varying systems via compressed transient system function. In Proceedings of the Conference on Design, Automation and Test in Europe, page 916, 2002.
[8] E. Gad and M. Nakhla. Efficient model reduction of linear periodically time-varying systems via compressed transient system function. IEEE Transactions on Circuits and Systems, 52, June 2005.
[9] E. J. Grimme. Krylov Projection Methods for Model Reduction. PhD thesis, University of Illinois at Urbana-Champaign, 1997.
[10] I. M. Elfadel and D. D. Ling. A block Arnoldi algorithm for multipoint passive model-order reduction of multiport RLC networks. IEEE Transactions on Circuits and Systems, pages 291–299, March 1997.
[11] J. Phillips. Model reduction of time-varying linear systems using multipoint Krylov-subspace projectors. In Proc. ICCAD, pages 44–59, Nov. 1998.
[12] R. Telichevesky, K. Kundert, and J. White. Efficient steady-state analysis based on matrix-free Krylov-subspace methods. In Proceedings of the 32nd Design Automation Conference, June 1995.
[13] R. Telichevesky, K. Kundert, and J. White. Efficient AC and noise analysis of two-tone RF circuits. In Proceedings of the 33rd Design Automation Conference, June 1996.
[14] R. E. Skelton, M. Oliveira, and J. Han. System modeling and model reduction. Paper available at http://maeweb.ucsd.edu/skelton/publications.
[15] R. Freund. Model reduction methods based on Krylov subspaces. Numerical Analysis Manuscript, 43(1-2), 2003.
[16] J. Roychowdhury. Reduced-order modelling of linear time-varying systems. In Proceedings of the ASP-DAC '99, Asia and South Pacific Design Automation Conference, 1:53–56, Jan. 1999.
[17] J. Roychowdhury. Reduced-order modeling of time-varying systems. IEEE Transactions on Circuits and Systems II, 46:1273–1288, Oct. 1999.
[18] B. Salimbahrami. Structure Preserving Order Reduction of Large Scale Second Order Models. PhD thesis, Technische Universität München, Fakultät für Maschinenwesen, Germany, 2005.
[19] X. Lai and J. Roychowdhury. Macromodelling oscillators using Krylov-subspace methods. In Proceedings of the 2006 Asia and South Pacific Design Automation Conference, Yokohama, Japan, pages 527–532, 2006.