An overview of balanced truncation for large-scale systems

BALANCING-RELATED MODEL REDUCTION FOR LARGE-SCALE SYSTEMS
Peter Benner
Professur Mathematik in Industrie und Technik, Fakultät für Mathematik, Technische Universität Chemnitz

Recent Advances in Model Order Reduction Workshop — TU Eindhoven, November 23, 2007

Overview

1 Introduction
  Model Reduction for Linear Systems
  Application Areas
  Goals
2 Balanced Truncation
  The Basic Ideas
  Balancing-Related Model Reduction
  Numerical Algorithms for Balanced Truncation
3 Solving Large-Scale Matrix Equations
  ADI Method for Lyapunov Equations
  Newton's Method for AREs
4 Balanced Truncation Implementations of Complexity < O(n^3)
5 Systems with a Large Number of Terminals
6 Examples
7 Software
8 References

Introduction: Dynamical Systems

Σ:  ẋ(t) = f(t, x(t), u(t)),   x(t_0) = x_0,
    y(t) = g(t, x(t), u(t)),

with states x(t) ∈ R^n, inputs u(t) ∈ R^m, and outputs y(t) ∈ R^p.

Model Reduction for Dynamical Systems

Original system
Σ:  ẋ(t) = f(t, x(t), u(t)),
    y(t) = g(t, x(t), u(t)),
with states x(t) ∈ R^n, inputs u(t) ∈ R^m, outputs y(t) ∈ R^p.

Reduced-order system
Σ̂:  d x̂(t)/dt = f̂(t, x̂(t), u(t)),
    ŷ(t) = ĝ(t, x̂(t), u(t)),
with states x̂(t) ∈ R^r, r ≪ n, inputs u(t) ∈ R^m, outputs ŷ(t) ∈ R^p.

Goal: ‖y − ŷ‖ < tolerance · ‖u‖ for all admissible input signals.


Model Reduction for Linear Systems: Linear, Time-Invariant (LTI) Systems

f(t, x, u) = Ax + Bu,   A ∈ R^{n×n}, B ∈ R^{n×m},
g(t, x, u) = Cx + Du,   C ∈ R^{p×n}, D ∈ R^{p×m}.

Frequency domain: applying the Laplace transform (x(t) ↦ x(s), ẋ(t) ↦ s x(s)) to the linear system with x(0) = 0,

  s x(s) = A x(s) + B u(s),   y(s) = C x(s) + D u(s),

yields the I/O relation in the frequency domain

  y(s) = ( C (s I_n − A)^{−1} B + D ) u(s) =: G(s) u(s).

G is the transfer function of Σ.
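For small dense models, G(s) can be evaluated pointwise directly from the state-space data. A minimal NumPy sketch (the matrices and the evaluation point below are placeholder data, not taken from the talk):

    import numpy as np

    def transfer_function(A, B, C, D, s):
        """Evaluate G(s) = C (s I_n - A)^{-1} B + D at a single point s."""
        n = A.shape[0]
        return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

    # Small stable test system (placeholder data).
    rng = np.random.default_rng(0)
    n, m, p = 6, 2, 2
    M = rng.standard_normal((n, n))
    A = -(M @ M.T) - np.eye(n)        # symmetric negative definite, hence stable
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((p, n))
    D = np.zeros((p, m))
    print(transfer_function(A, B, C, D, 0.5j))    # G(i*0.5)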


Model Reduction for Linear Systems: Problem

Approximate the dynamical system
  ẋ = Ax + Bu,   A ∈ R^{n×n}, B ∈ R^{n×m},
  y = Cx + Du,   C ∈ R^{p×n}, D ∈ R^{p×m},
by a reduced-order system
  d x̂/dt = Â x̂ + B̂ u,   Â ∈ R^{r×r}, B̂ ∈ R^{r×m},
  ŷ = Ĉ x̂ + D̂ u,        Ĉ ∈ R^{p×r}, D̂ ∈ R^{p×m},
of order r ≪ n, such that
  ‖y − ŷ‖ = ‖G u − Ĝ u‖ ≤ ‖G − Ĝ‖ ‖u‖ < tolerance · ‖u‖.
⇒ Approximation problem: min_{order(Ĝ) ≤ r} ‖G − Ĝ‖.


Application Areas

– Feedback control: controllers designed by LQR/LQG, H_2, or H_∞ methods are LTI systems of order ≥ n, but technological implementation needs order ≈ 10.
– Optimization/open-loop control: time discretization of already large-scale systems leads to a huge number of equality constraints in the mathematical program.
– Microelectronics: verification of VLSI/ULSI chip designs requires a high number of simulations for different input signals; various effects due to progressive miniaturization lead to large-scale systems of differential(-algebraic) equations (order ≈ 10^8).
– MEMS/microsystem design: smart system integration needs compact models for efficient coupled simulation.
– ...

Goals

– Automatic generation of compact models.
– Satisfy a desired error tolerance for all admissible input signals, i.e., want
    ‖y − ŷ‖ < tolerance · ‖u‖   ∀ u ∈ L_2(R, R^m).
  ⇒ Need a computable error bound/estimate!
– Preserve physical properties:
    – stability (poles of G in C^−, i.e., Λ(A) ⊂ C^−),
    – minimum phase (zeros of G in C^−),
    – passivity: ∫_{−∞}^{t} u(τ)^T y(τ) dτ ≥ 0  ∀ t ∈ R, ∀ u ∈ L_2(R, R^m)
      ("the system does not generate energy").


Balanced Truncation: The Basic Ideas

Idea: A system Σ, realized by (A, B, C, D), is called balanced if the solutions P, Q of the Lyapunov equations
  A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0,
satisfy P = Q = diag(σ_1, ..., σ_n) with σ_1 ≥ σ_2 ≥ ... ≥ σ_n > 0.
{σ_1, ..., σ_n} are the Hankel singular values (HSVs) of Σ.
Compute a balanced realization of the system via a state-space transformation T:
  (A, B, C, D) ↦ (T A T^{−1}, T B, C T^{−1}, D)
              = ( [ A_{11}  A_{12} ; A_{21}  A_{22} ], [ B_1 ; B_2 ], [ C_1  C_2 ], D ).
Truncation ⇒ (Â, B̂, Ĉ, D̂) = (A_{11}, B_1, C_1, D).
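For a small dense realization, both Gramians and the Hankel singular values can be computed directly with SciPy's Lyapunov solver. A rough sketch, assuming A is stable and the realization is minimal (so that P and Q are positive definite); all names are illustrative:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def hankel_singular_values(A, B, C):
        # Controllability Gramian: A P + P A^T + B B^T = 0
        P = solve_continuous_lyapunov(A, -B @ B.T)
        # Observability Gramian:   A^T Q + Q A + C^T C = 0
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        # HSVs are the square roots of the eigenvalues of P Q.
        hsv = np.sqrt(np.abs(np.linalg.eigvals(P @ Q).real))
        return np.sort(hsv)[::-1], P, Q

The decay of the sorted HSVs indicates how far the state dimension can be truncated.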



Balanced Truncation: The Basic Ideas

Motivation: The HSVs are system invariants: they are preserved under T and determine the energy transfer given by the Hankel map
  H : L_2(−∞, 0) → L_2(0, ∞) : u_− ↦ y_+.
In balanced coordinates, the energy transfer from u_− to y_+ is
  E := sup_{u ∈ L_2(−∞,0], x(0)=x_0} ( ∫_0^∞ y(t)^T y(t) dt ) / ( ∫_{−∞}^0 u(t)^T u(t) dt )
     = (1 / ‖x_0‖_2^2) Σ_{j=1}^{n} σ_j^2 x_{0,j}^2.
⇒ Truncate the states corresponding to "small" HSVs
⇒ complete analogy to best approximation via the SVD!

Balanced Truncation: Implementation (SR Method)

1. Compute Cholesky factors of the solutions of the Lyapunov equations, P = S^T S, Q = R^T R.
2. Compute the SVD (thanks, Gene!)
     S R^T = [ U_1  U_2 ] [ Σ_1  0 ; 0  Σ_2 ] [ V_1^T ; V_2^T ].
3. Set W = R^T V_1 Σ_1^{−1/2} and V = S^T U_1 Σ_1^{−1/2}.
4. The reduced-order model is (W^T A V, W^T B, C V, D).
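A dense prototype of the four steps of the SR method, based on SciPy; it assumes a stable, minimal realization (so that the Cholesky factorizations of P and Q exist) and that the reduced order r is chosen by the user, e.g., from the HSV decay. This is only a sketch of the method above, not the SLICOT/SpaRed implementation:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky

    def balanced_truncation_sr(A, B, C, D, r):
        # Step 1: Gramians and Cholesky factors P = S^T S, Q = R^T R.
        P = solve_continuous_lyapunov(A, -B @ B.T)
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        S = cholesky(P)                    # upper triangular, P = S^T S
        R = cholesky(Q)                    # upper triangular, Q = R^T R
        # Step 2: SVD of S R^T.
        U, sig, Vt = np.linalg.svd(S @ R.T)
        U1, V1, s1 = U[:, :r], Vt[:r, :].T, sig[:r]
        # Step 3: projection matrices W = R^T V1 Sigma1^{-1/2}, V = S^T U1 Sigma1^{-1/2}.
        W = (R.T @ V1) / np.sqrt(s1)
        V = (S.T @ U1) / np.sqrt(s1)
        # Step 4: reduced-order model (W^T A V, W^T B, C V, D).
        return W.T @ A @ V, W.T @ B, C @ V, D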


Balancing-Related Model Reduction

Basic Principle of Balanced Truncation
Given positive semidefinite matrices P = S^T S, Q = R^T R, compute a balancing state-space transformation so that
  P = Q = diag(σ_1, ..., σ_n) = Σ,   σ_1 ≥ ... ≥ σ_n ≥ 0,
and truncate the corresponding realization at size r with σ_r > σ_{r+1}.

Classical Balanced Truncation (BT)   [Mullis/Roberts '76, Moore '81]
P = controllability Gramian of the system given by (A, B, C, D).
Q = observability Gramian of the system given by (A, B, C, D).
P, Q solve the dual Lyapunov equations
  A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0.

LQG Balanced Truncation (LQGBT)   [Jonckheere/Silverman '83]
P/Q = controllability/observability Gramian of the closed-loop system based on an LQG compensator.
P, Q solve the dual algebraic Riccati equations (AREs)
  0 = A P + P A^T − P C^T C P + B B^T,
  0 = A^T Q + Q A − Q B B^T Q + C^T C.
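For moderate n, the two LQG AREs can be solved with SciPy's dense Riccati solver; a sketch assuming (A, B) stabilizable and (C, A) detectable (placeholder names, dense computation only):

    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqg_gramians(A, B, C):
        m, p = B.shape[1], C.shape[0]
        # P solves A P + P A^T - P C^T C P + B B^T = 0  (filter ARE).
        P = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(p))
        # Q solves A^T Q + Q A - Q B B^T Q + C^T C = 0  (regulator ARE).
        Q = solve_continuous_are(A, B, C.T @ C, np.eye(m))
        return P, Q

The pair (P, Q) then enters the same square-root balancing and truncation step as in classical BT.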

Balanced Stochastic Truncation (BST)   [Desai/Pal '84, Green '88]
P = controllability Gramian of the system given by (A, B, C, D), i.e., the solution of the Lyapunov equation A P + P A^T + B B^T = 0.
Q = observability Gramian of the right spectral factor of the power spectrum of the system given by (A, B, C, D), i.e., the solution of the ARE
  Â^T Q + Q Â + Q B_W (D D^T)^{−1} B_W^T Q + C^T (D D^T)^{−1} C = 0,
where Â := A − B_W (D D^T)^{−1} C and B_W := B D^T + P C^T.

Positive-Real Balanced Truncation (PRBT)   [Green '88]
Based on the positive-real equations, related to the positive-real (Kalman-Yakubovich-Popov-Anderson) lemma.
P, Q solve the dual AREs
  0 = Ā P + P Ā^T + P C^T R̄^{−1} C P + B R̄^{−1} B^T,
  0 = Ā^T Q + Q Ā + Q B R̄^{−1} B^T Q + C^T R̄^{−1} C,
where R̄ = D + D^T and Ā = A − B R̄^{−1} C.

Other Balancing-Based Methods
– Bounded-real balanced truncation (BRBT), based on the bounded-real lemma [Opdenacker/Jonckheere '88].
– H_∞ balanced truncation (HinfBT), closed-loop balancing based on an H_∞ compensator [Mustafa/Glover '91].
Both approaches require the solution of dual AREs.
Frequency-weighted versions of the above approaches exist as well.

Balancing-Related Model Reduction: Properties

Guaranteed preservation of physical properties like
  – stability (all methods),
  – passivity (PRBT),
  – minimum phase (BST).
Computable error bounds, e.g.,
  BT:    ‖G − G_r‖_∞ ≤ 2 Σ_{j=r+1}^{n} σ_j^{BT},
  LQGBT: ‖G − G_r‖_∞ ≤ 2 Σ_{j=r+1}^{n} σ_j^{LQG} / √(1 + (σ_j^{LQG})^2),
  BST:   ‖G − G_r‖_∞ ≤ ( Π_{j=r+1}^{n} (1 + σ_j^{BST}) / (1 − σ_j^{BST}) − 1 ) ‖G‖_∞.
Can be combined with singular perturbation approximation for steady-state performance.
Computations can be modularized.


Numerical Algorithms for Balanced Truncation

General misconception: complexity O(n^3). True for several implementations (e.g., Matlab, SLICOT)!
Algorithmic ideas from numerical linear algebra (since ca. 1997):
  – Instead of the Gramians P, Q, compute factors S, R ∈ R^{n×k}, k ≪ n, such that P ≈ S S^T and Q ≈ R R^T.
  – Compute S, R directly with problem-specific Lyapunov/Riccati solvers of "low" complexity.
⇒ Need a solver for large-scale matrix equations that computes S, R directly!

Solving Large-Scale Matrix Equations: Large-Scale Algebraic Lyapunov and Riccati Equations

General form for given A, G = G^T, W = W^T ∈ R^{n×n} and unknown X ∈ R^{n×n}:
  0 = L(X) := A^T X + X A + W,
  0 = R(X) := A^T X + X A − X G X + W.
In large-scale applications, typically
  – n = 10^3 – 10^6 (⇒ 10^6 – 10^12 unknowns!),
  – A has a sparse representation (A = −M^{−1} K for FEM),
  – G, W are of low rank, with G, W ∈ {B B^T, C^T C}, where B ∈ R^{n×m}, m ≪ n, and C ∈ R^{p×n}, p ≪ n.
Standard (eigenproblem-based) O(n^3) methods are not applicable!


ADI Method for Lyapunov Equations

For A ∈ R^{n×n} stable and B ∈ R^{n×m} (m ≪ n), consider the Lyapunov equation
  A X + X A^T = −B B^T.
ADI iteration [Wachspress '88]:
  (A + p_k I) X_{k−1/2} = −B B^T − X_{k−1} (A^T − p_k I),
  (A + p_k I) X_k^T     = −B B^T − X_{k−1/2} (A^T − p_k I),
with shift parameters p_k ∈ C^− and p_{k+1} = p̄_k (complex conjugate) if p_k ∉ R.
For X_0 = 0 and a proper choice of the p_k: lim_{k→∞} X_k = X, with superlinear convergence.
Re-formulation using X_k = Y_k Y_k^T yields an iteration for the factors Y_k ...


Factored ADI Iteration

Lyapunov equation A X + X A^T = −B B^T. Setting X_k = Y_k Y_k^T, some algebraic manipulations yield the following algorithm [Penzl '97, Li/White '02, B./Li/Penzl '99/'07]:

  V_1 ← √(−2 Re(p_1)) (A + p_1 I)^{−1} B,   Y_1 ← V_1
  FOR k = 2, 3, ...
     V_k ← √( Re(p_k) / Re(p_{k−1}) ) ( V_{k−1} − (p_k + p̄_{k−1}) (A + p_k I)^{−1} V_{k−1} ),
     Y_k ← rrqr( [ Y_{k−1}  V_k ] )   % column compression
  END FOR

At convergence, Y_{k_max} Y_{k_max}^T ≈ X, where
  range(Y_{k_max}) = range( [ V_1 ... V_{k_max} ] ),   V_k ∈ C^{n×m}.
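A minimal dense sketch of the factored ADI iteration above, restricted to real negative shifts so that everything stays real; the shift vector is assumed to be provided by the user (e.g., from a Penzl-type heuristic), shifts are reused cyclically, and the rank-revealing column compression is omitted for brevity:

    import numpy as np

    def lr_adi_lyapunov(A, B, shifts, max_iter=50, tol=1e-10):
        """Low-rank ADI for A X + X A^T = -B B^T; returns Y with X ~ Y Y^T.
        Assumes A stable and all shifts real and negative."""
        n = A.shape[0]
        I = np.eye(n)
        p = shifts[0]
        V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
        Y = V.copy()
        for k in range(1, max_iter):
            p_old, p = p, shifts[k % len(shifts)]          # cycle through the shifts
            V = np.sqrt(p / p_old) * (V - (p + p_old) * np.linalg.solve(A + p * I, V))
            Y = np.hstack([Y, V])
            if np.linalg.norm(V) <= tol * np.linalg.norm(Y):   # crude stopping criterion
                break
        return Y

In a truly large-scale setting, the dense solves would be replaced by one sparse factorization of A + p_k I per shift, and [Y_{k−1} V_k] would be compressed by a rank-revealing QR as in the algorithm above.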


Newton's Method for AREs
[Kleinman '68, Mehrmann '91, Lancaster/Rodman '95, B./Byers '94/'98, B. '97, Guo/Laub '99]

Consider 0 = R(X) = C^T C + A^T X + X A − X B B^T X.
Fréchet derivative of R(X) at X:
  R'_X : Z ↦ (A − B B^T X)^T Z + Z (A − B B^T X).
Newton-Kantorovich method:
  X_{j+1} = X_j − (R'_{X_j})^{−1} R(X_j),   j = 0, 1, 2, ...

Newton's method (with line search) for AREs:
  FOR j = 0, 1, ...
    1. A_j ← A − B B^T X_j =: A − B K_j.
    2. Solve the Lyapunov equation A_j^T N_j + N_j A_j = −R(X_j).
    3. X_{j+1} ← X_j + t_j N_j.
  END FOR
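A dense sketch of the Newton (Kleinman) iteration above with the full step t_j = 1 and X_0 = 0, which is admissible when A itself is stable; each step solves the Lyapunov equation with SciPy's dense solver (illustrative only, no line search):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def newton_kleinman_are(A, B, C, max_iter=30, tol=1e-12):
        """Solve C^T C + A^T X + X A - X B B^T X = 0 by Newton's method."""
        n = A.shape[0]
        X = np.zeros((n, n))                     # X_0 = 0 is stabilizing if A is stable
        for _ in range(max_iter):
            K = B.T @ X                          # K_j = B^T X_j
            A_j = A - B @ K                      # closed-loop matrix A_j
            res = C.T @ C + A.T @ X + X @ A - X @ B @ K    # residual R(X_j)
            if np.linalg.norm(res, 'fro') <= tol:
                break
            N = solve_continuous_lyapunov(A_j.T, -res)     # A_j^T N + N A_j = -R(X_j)
            X = X + N                            # t_j = 1 (no line search)
        return X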


Newton's Method for AREs: Properties and Implementation

Convergence for a stabilizing K_0:
  – A_j = A − B K_j = A − B B^T X_j is stable for all j ≥ 0.
  – lim_{j→∞} ‖R(X_j)‖_F = 0 (monotonically).
  – lim_{j→∞} X_j = X_* ≥ 0 (locally quadratic).
Need a large-scale Lyapunov solver; here, the ADI iteration: the linear systems have dense, but "sparse + low-rank" coefficient matrices A_j = A − B·K_j (a sparse n×n matrix minus the product of an n×m and an m×n factor).
m ≪ n ⇒ efficient "inversion" using the Sherman-Morrison-Woodbury formula:
  (A − B K_j)^{−1} = (I_n + A^{−1} B (I_m − K_j A^{−1} B)^{−1} K_j) A^{−1}.
BUT: X = X^T ∈ R^{n×n} ⇒ n(n+1)/2 unknowns!
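The Sherman-Morrison-Woodbury formula above lets every solve with A_j = A − B K_j reuse a single sparse factorization of A. A sketch with SciPy's sparse LU (hypothetical data: A sparse n×n, B of size n×m, K_j of size m×n with m ≪ n):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def closed_loop_solver(A_sparse, B, K):
        """Return a callable v -> (A - B K)^{-1} v via Sherman-Morrison-Woodbury."""
        lu = spla.splu(sp.csc_matrix(A_sparse))   # factor the sparse matrix A once
        AinvB = lu.solve(B)                       # A^{-1} B, dense n x m
        m = B.shape[1]
        small = np.eye(m) - K @ AinvB             # I_m - K A^{-1} B, small m x m matrix
        def solve(v):
            w = lu.solve(v)                       # A^{-1} v
            return w + AinvB @ np.linalg.solve(small, K @ w)
        return solve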


Low-Rank Newton-ADI for AREs

Re-write Newton's method for AREs:
  A_j^T N_j + N_j A_j = −R(X_j)
  ⇔ A_j^T (X_j + N_j) + (X_j + N_j) A_j = −C^T C − X_j B B^T X_j =: −W_j W_j^T,
where X_{j+1} = X_j + N_j.
Set X_j = Z_j Z_j^T with rank(Z_j) ≪ n ⇒
  A_j^T (Z_{j+1} Z_{j+1}^T) + (Z_{j+1} Z_{j+1}^T) A_j = −W_j W_j^T.

Factored Newton Iteration [B./Li/Penzl '99/'07]
Solve the Lyapunov equations for Z_{j+1} directly by the factored ADI iteration and exploit the 'sparse + low-rank' structure of A_j.


Balanced Truncation Implementations of Complexity < O(n^3)

Parallelization: (B./Quintana-Ortí/Quintana-Ortí, since 1999)
  – Efficient parallel algorithms based on the matrix sign function.
  – Complexity O(n^3/q) on a q-processor machine.
  – Software library PLiCMR.
Formatted arithmetic: For special problems from PDE control, an implementation based on hierarchical matrices and the matrix sign function method (Baur/B.), complexity O(n log^2(n) r^2).


Sparse balanced truncation: (Badía/B./Quintana-Ortí/Quintana-Ortí, since '03)
  – Sparse implementation using a sparse Lyapunov solver (ADI + MUMPS/SuperLU).
  – Complexity O(n(k^2 + r^2)).
  – Software:
      + Matlab toolbox LyaPack (Penzl '99),
      + software library SpaRed with a WebComputing interface.


Systems with a Large Number of Terminals (m, p = O(n))

Efficient BT implementations are based on the assumption n ≫ m, p. For on-chip clock distribution networks, power grids, and wide buses, this assumption is not justified; here m, p = O(n), e.g., m = p = n/2 or n/4.
Cure: BT can easily be combined with SVDMOR [Feldmann/Liu '04]: for G(s) = C (sE − A)^{−1} B, let
  G(s_0) = C (s_0 E − A)^{−1} B = [ U_1  U_2 ] [ Σ_1  0 ; 0  Σ_2 ] [ V_1^T ; V_2^T ] ≈ U_1 Σ_1 V_1^T   (rank-k approximation),
so that ‖G(s_0) − U_1 Σ_1 V_1^T‖_2 = σ_{k+1}.
Now define B̃ := B V_1 and C̃ := U_1^T C; then
  G(s) ≈ U_1 C̃ (sE − A)^{−1} B̃ V_1^T = U_1 G̃(s) V_1^T,   with G̃(s) := C̃ (sE − A)^{−1} B̃.
Now apply BT to G̃(s) ⇒ Ĝ(s); then G(s) ≈ U_1 Ĝ(s) V_1^T.
Multi-point truncated SVD and error bounds → Master's thesis A. Schneider, 2007.


Alternative for medium-size m: superposition of reduced-order SIMO models using Padé-type approximation [Feng/B./Rudnyi '07].
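A small dense sketch of the SVDMOR terminal compression described above: sample the transfer function at one expansion point s_0, compute its SVD, and compress the terminal matrices; the compressed system is then passed to any BT routine, and the full response is recovered from U_1, V_1. All names and the choice of s_0 and k are placeholders:

    import numpy as np

    def svdmor_compress(A, E, B, C, s0, k):
        """Compress m, p = O(n) terminals to k virtual terminals (SVDMOR idea)."""
        G0 = C @ np.linalg.solve(s0 * E - A, B)   # G(s_0) = C (s_0 E - A)^{-1} B
        U, sig, Vt = np.linalg.svd(G0)
        U1, V1 = U[:, :k], Vt[:k, :].T            # dominant output/input terminal directions
        B_t = B @ V1                              # B~ = B V_1, size n x k
        C_t = U1.T @ C                            # C~ = U_1^T C, size k x n
        return B_t, C_t, U1, V1

    # Afterwards: reduce G~(s) = C~ (s E - A)^{-1} B~ by balanced truncation to G^(s);
    # the original transfer function is then approximated by U1 @ G^(s) @ V1.T.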

Examples (Optimal Control): Cooling of Steel Profiles

Mathematical model: boundary control for the linearized 2D heat equation,
  c ρ ∂x/∂t = λ Δx,          ξ ∈ Ω,
  λ ∂x/∂n  = κ (u_k − x),    ξ ∈ Γ_k, 1 ≤ k ≤ 7,
  ∂x/∂n    = 0               on the remaining part of the boundary,
⇒ m = 7, p = 6.
FEM discretization; different models for the initial mesh (n = 371) and for 1, 2, 3, 4 steps of mesh refinement ⇒ n = 1357, 5177, 20209, 79841.
[Figure: 2D cross-section of the steel profile with the boundary segments Γ_k and the measurement nodes.]
Source: physical model courtesy of Mannesmann/Demag; mathematical model: Tröltzsch/Unger 1999/2001, Penzl 1999, Saak 2003.


Examples (Optimal Control): Cooling of Steel Profiles

[Figure: absolute error, n = 1357] BT model computed with the sign function method; MT (modal truncation) without static condensation, same order as the BT model.
[Figure: absolute error, n = 79841] BT model computed using SpaRed; computation time: 8 min.

Examples (MEMS): Microthruster

Co-integration of solid fuel with a silicon micromachined system. Goal: ignition of solid fuel cells by an electric impulse. Application: nano satellites.
Thermo-dynamical model; ignition via heating of an electric resistance by applying a voltage source.
Design problem: reach the ignition temperature of a fuel cell without firing the neighboring cells.
Spatial FEM discretization of the thermo-dynamical model ⇒ linear system, m = 1, p = 7.
Source: The Oberwolfach Benchmark Collection, http://www.imtek.de/simulation/benchmark; courtesy of C. Rossi, LAAS-CNRS / EU project "Micropyros".

Examples (MEMS): Microthruster

Axial-symmetric 2D model; FEM discretization using linear (quadratic) elements ⇒ n = 4,257 (11,445); m = 1, p = 7.
Reduced models computed using SpaRed, modal truncation using ARPACK, and Z. Bai's PVL implementation.
[Figures: relative errors for n = 4,257 and n = 11,445; frequency response BT vs. PVL; frequency response BT vs. MT.]


Examples (MEMS): Microgyroscope (Butterfly Gyro)

Vibrating micro-mechanical gyroscope for inertial navigation; rotational position sensor.
By applying an AC voltage to the electrodes, the wings are forced to vibrate in anti-phase in the wafer plane.
Coriolis forces induce a motion of the wings out of the wafer plane, yielding the sensor data.
Source: The Oberwolfach Benchmark Collection, http://www.imtek.de/simulation/benchmark; courtesy of D. Billger (Imego Institute, Göteborg), Saab Bofors Dynamics AB.

Examples (MEMS): Butterfly Gyro

FEM discretization of the structural dynamics model using quadratic tetrahedral elements (ANSYS SOLID187) ⇒ n = 34,722, m = 1, p = 12.
Reduced model computed using SpaRed, r = 30.
[Figures: frequency response analysis; Hankel singular values.]


Examples (Robust Control): Heat Flow in Copper

Boundary control problem for 2D heat flow in copper on a rectangular domain; the control acts on two sides via Robin boundary conditions.
FDM ⇒ n = 4496, m = 2; 4 sensor locations ⇒ p = 4.
Numerical ranks of the BT Gramians are 68 and 124, respectively; for LQG BT both have rank 210.
Computed reduced-order model: r = 10.
Source: COMPleib v1.1, www.compleib.de.


Model Reduction Software

SLICOT Model and Controller Reduction Toolbox
Stand-alone Matlab toolbox based on the Subroutine Library in Systems and Control Theory (SLICOT), which provides more than 400 Fortran 77 routines for systems- and control-related computations; other Matlab toolboxes:
  – SLICOT Basic Systems and Control Toolbox,
  – SLICOT System Identification Toolbox.
Much enhanced functionality compared to Matlab's control toolboxes, in particular coprime factorization methods, frequency-weighted BT methods, and controller reduction.
Maintained by NICONET e.V.; distributed by Synoptio GmbH, Berlin.
For more information, visit http://www.slicot.org and http://www.synoptio.de.

LyaPack
Matlab toolbox for computations involving large-scale Lyapunov equations with a sparse coefficient matrix A and a low-rank constant term; contains a BT implementation and a dominant subspace approximation method.
Available as additional software (no registration necessary) from http://www.slicot.org.
A new version (algorithmic improvements, easier to use) is in progress.

PLiCMR, SpaRed
Parallel implementations of BT and some balancing-related methods for dense and sparse linear systems.

MorLAB
Model reduction LABoratory: contains Matlab functions for balancing-related methods; available from http://www.tu-chemnitz.de/~benner/software.php.

References

1. G. Obinata and B.D.O. Anderson. Model Reduction for Control System Design. Springer-Verlag, London, UK, 2001.
2. Z. Bai. Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Appl. Numer. Math., 43(1–2):9–44, 2002.
3. R. Freund. Model reduction methods based on Krylov subspaces. Acta Numerica, 12:267–319, 2003.
4. P. Benner, E.S. Quintana-Ortí, and G. Quintana-Ortí. State-space truncation methods for parallel model reduction of large-scale systems. Parallel Comput., 29:1701–1722, 2003.
5. P. Benner. Numerical linear algebra for model reduction in control and simulation. GAMM Mitt., 29(2):275–296, 2006.
6. P. Benner, V. Mehrmann, and D. Sorensen (editors). Dimension Reduction of Large-Scale Systems. Lecture Notes in Computational Science and Engineering, Vol. 45, Springer-Verlag, Berlin/Heidelberg, Germany, 2005.
7. A.C. Antoulas. Approximation of Large-Scale Dynamical Systems. SIAM Publications, Philadelphia, PA, 2005.
8. P. Benner, R. Freund, D. Sorensen, and A. Varga (editors). Special issue on Order Reduction of Large-Scale Systems. Linear Algebra Appl., Vol. 415, June 2006.
