The Newton-Raphson algorithm

Quiescent Steady State (DC) Analysis
The Newton-Raphson Method

J. Roychowdhury, University of California at Berkeley

Slide 1

Solving the System's DAEs

\[ \frac{d}{dt}\,\vec{q}(\vec{x}(t)) + \vec{f}(\vec{x}(t)) + \vec{b}(t) = \vec{0} \]

● DAEs: many types of solutions useful
  ➔ DC steady state: no time variations
  ➔ transient: circuit waveforms changing with time
  ➔ periodic steady state: changes periodic with time
    ● linear(ized): all sinusoidal waveforms: AC analysis
    ● nonlinear steady state: shooting, harmonic balance
  ➔ noise analysis: random/stochastic waveforms
  ➔ sensitivity analysis: effects of changes in circuit parameters


QSS: Quiescent Steady State ("DC") Analysis

\[ \frac{d}{dt}\,\vec{q}(\vec{x}(t)) + \vec{f}(\vec{x}(t)) + \vec{b}(t) = \vec{0} \]

● Assumption: nothing changes with time
  ➔ \(\vec{x}\), \(\vec{b}\) are constant vectors; the d/dt term vanishes

\[ \overbrace{\vec{f}(\vec{x}) + \vec{b}}^{\vec{g}(\vec{x})} = \vec{0} \]

● Why do QSS?
  ➔ quiescent operation: first step in verifying functionality
  ➔ stepping stone to other analyses: AC, transient, noise, ...
● Nonlinear system of equations
  ➔ the problem: solving them numerically
  ➔ most common/useful technique: the Newton-Raphson method


The Newton-Raphson Method

● Iterative numerical algorithm to solve \(\vec{g}(\vec{x}) = \vec{0}\)
  1. start with some guess for the solution
  2. repeat
     a. check if current guess solves the equation
        i. if yes: done!
        ii. if no: do something to update/improve the guess
● Newton-Raphson algorithm
  ➔ start with initial guess \(\vec{x}_0\); i = 0
  ➔ repeat until "convergence" (or max #iterations)
    ➔ compute Jacobian matrix: \(J_i = \frac{d\vec{g}}{d\vec{x}}(\vec{x}_i)\)
    ➔ solve for update \(\delta\vec{x}\): \(J_i\,\delta\vec{x} = -\vec{g}(\vec{x}_i)\)
    ➔ update guess: \(\vec{x}_{i+1} = \vec{x}_i + \delta\vec{x}\); i++
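The loop above can be sketched in Python for a small 2-D system. The system \(\vec{g}\) (a circle intersected with a line), the starting guess, and the tolerances below are illustrative assumptions, not from the slides; the Jacobian is formed analytically and the 2×2 linear solve uses Cramer's rule.

```python
# Minimal sketch of the Newton-Raphson loop above for a 2-D system.
# The system g, starting guess, and tolerances are illustrative
# assumptions, not from the slides:
#   g1(x, y) = x^2 + y^2 - 4   (a circle)
#   g2(x, y) = x - y           (a line)
def g(x, y):
    return (x * x + y * y - 4.0, x - y)

def jacobian(x, y):
    # J_i = dg/dx evaluated at the current guess:
    # [[dg1/dx, dg1/dy], [dg2/dx, dg2/dy]]
    return ((2.0 * x, 2.0 * y), (1.0, -1.0))

def newton_raphson(x, y, tol=1e-12, max_iters=50):
    for _ in range(max_iters):
        g1, g2 = g(x, y)
        if abs(g1) < tol and abs(g2) < tol:   # current guess solves g = 0?
            break
        (a, b), (c, d) = jacobian(x, y)
        det = a * d - b * c
        # solve J_i * delta = -g(x_i)  (2x2 case, via Cramer's rule)
        dx = (-g1 * d + b * g2) / det
        dy = (-a * g2 + c * g1) / det
        x, y = x + dx, y + dy                 # x_{i+1} = x_i + delta_x
    return x, y

# converges to the intersection point (sqrt(2), sqrt(2))
root = newton_raphson(1.0, 1.0)
```

In a real circuit simulator the Jacobian entries come from device-model derivatives, and the linear solve would use a sparse LU factorization rather than Cramer's rule.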


Newton-Raphson Graphically

[Figure: Newton-Raphson iterations on a scalar g(x)]

● Scalar case above
● Key property: generalizes to vector case
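In the scalar case pictured above, each step jumps to the point where the tangent line at the current guess crosses zero, i.e. \(x_{i+1} = x_i - g(x_i)/g'(x_i)\). A minimal sketch (the test function g(x) = x³ − x − 2 is an illustrative choice, not from the slides):

```python
# Scalar form of the picture above: each step jumps to where the
# tangent line at the current guess crosses zero,
#   x_{i+1} = x_i - g(x_i) / g'(x_i).
# The test function g(x) = x^3 - x - 2 is an illustrative assumption.
def newton_scalar(g, dg, x, iters=30):
    for _ in range(iters):
        x = x - g(x) / dg(x)   # tangent at x_i intersects zero at x_{i+1}
    return x

root = newton_scalar(lambda x: x ** 3 - x - 2,
                     lambda x: 3 * x ** 2 - 1,
                     x=2.0)
```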


Newton-Raphson (contd.)

● Does it always work? No.
● Conditions for NR to converge reliably
  ➔ g(x) must be "smooth": continuous, differentiable
  ➔ starting guess "close enough" to solution
● practical NR: needs application-specific heuristics


NR: Convergence Rate

● Key property of NR: quadratic convergence
● Suppose \(x^*\) is the exact solution of \(g(x) = 0\)
● At the \(i\)-th NR iteration, define the error \(\epsilon_i = x_i - x^*\)
● Meaning of quadratic convergence: \(\epsilon_{i+1} < c\,\epsilon_i^2\) (where c is a constant)
● NR's quadratic convergence properties
  ➔ if g(x) is smooth (at least continuous 1st and 2nd derivatives), and
  ➔ \(g'(x^*) \neq 0\), and
  ➔ \(\|x_i - x^*\|\) is small enough,
  then: NR features quadratic convergence
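Quadratic convergence is easy to observe numerically. A small sketch (the example g(x) = x² − 2, with exact solution x* = √2, is an assumption, not from the slides): each error is roughly a constant times the square of the previous one, so the number of correct digits roughly doubles per iteration.

```python
import math

# Illustration of quadratic convergence.  The example g(x) = x^2 - 2,
# with exact solution x* = sqrt(2), is an assumption, not from the
# slides.  The error eps_i = x_i - x* roughly squares each iteration.
x, exact = 2.0, math.sqrt(2.0)
errors = []
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)   # NR update: x - g(x)/g'(x)
    errors.append(abs(x - exact))
# eps_{i+1} < c * eps_i^2: correct digits roughly double per step
```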


Convergence Rate in Digits of Accuracy

[Figure: digits of accuracy vs. iteration number, comparing quadratic convergence against linear convergence]

NR: Convergence Strategies

● reltol-abstol on deltax
  ➔ stop if norm(deltax) ...
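The stopping criterion above is cut off in this copy of the slides, so the exact formula is not shown. A combined reltol-abstol test can nevertheless be sketched; the mixed per-component criterion below is an assumption modeled on common SPICE-style practice, not necessarily the slides' formula.

```python
# Sketch of a combined reltol-abstol stopping test on delta_x.  The
# slide's exact formula is cut off above; the mixed criterion below
# (each component of the update small relative to reltol*|x| + abstol)
# is an assumption modeled on common SPICE-style practice.
def converged(x, delta_x, reltol=1e-3, abstol=1e-6):
    return all(abs(d) <= reltol * abs(xi) + abstol
               for xi, d in zip(x, delta_x))
```

The absolute floor abstol keeps the test meaningful for components of x near zero, where a purely relative test would never be satisfied.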