Interior Point Methods for Linear Programming: Just Call Newton, Lagrange, and Fiacco and McCormick! Author(s): Roy Marsten, Radhika Subramanian, Matthew Saltzman, Irvin Lustig, David Shanno. Source: Interfaces, Vol. 20, No. 4, The Practice of Mathematical Programming (Jul.-Aug., 1990), pp. 105-116. Published by: INFORMS. Stable URL: http://www.jstor.org/stable/25061374


Interior Point Methods for Linear Programming: Just Call Newton, Lagrange, and Fiacco and McCormick!

Roy Marsten, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332
Radhika Subramanian, School of Industrial and Systems Engineering, Georgia Institute of Technology
Matthew Saltzman, Department of Mathematical Sciences, Clemson University, Clemson, SC 29631
Irvin Lustig, Department of Civil Engineering and Operations Research, Princeton University, Princeton, NJ 08544
David Shanno, RUTCOR, Rutgers University, New Brunswick, NJ 08903

Interior point methods are the right way to solve large linear programs. They are also much easier to derive, motivate, and understand than they at first appeared. Lagrange told us how to convert a minimization with equality constraints into an unconstrained minimization. Fiacco and McCormick told us how to convert a minimization with inequality constraints into a sequence of unconstrained minimizations. Newton told us how to solve unconstrained minimizations. Voila!

An Interfaces issue on the current state of the art in linear programming could not be complete without a status report on the new interior point methods. We are heavily involved in implementing these methods, but the rapid progress and constant excitement have made it difficult to pick a time to pause and explain things to a wider audience. Interfaces, as an informal forum, is ideal for this.

In 1984, Narendra Karmarkar [1984] began the "new era of mathematical programming" with the publication of his landmark paper. Shortly thereafter his employer, AT&T, invited the professional mathematical programming community to roll over and die. Speaking as representatives of this community, we took this rather as a challenge.

Copyright 1990, The Institute of Management Sciences. 0091-2102/90/2004/0105$01.25. This paper was refereed. PROGRAMMING-LINEAR; TUTORIAL. INTERFACES 20:4, July-August 1990 (pp. 105-116).

Some of us proceeded to make dramatic improvements to the simplex method; for example, Robert Bixby with CPlex, John Forrest with OSL, and Michael Saunders with the new MINOS. Many others worked on bringing Karmarkar's method, which at first appeared to be coming completely out of left field, into our classical framework for optimization. The first and most important result in this effort was Gill et al. [1986], which demonstrated an equivalence between Karmarkar's method and projected Newton barrier methods.

A great deal of theoretical and computational work has been done since the excellent Hooker [1986] Interfaces article. This is a good time because the whole subject has just recently gotten much clearer and simpler. This report will be very up to date (as of late March 1990) but brief and to the point. We will focus on the algorithms, their implementation, and their performance, rather than on complexity results. This is really a dispatch from the battle rather than a scholarly paper.

The theoretical foundation for interior point methods consists of three crucial building blocks. First, Newton's [1687] method for solving nonlinear equations and hence for unconstrained optimization. Second, Lagrange's [1788] method for optimization with equality constraints. Third, Fiacco and McCormick's [1968] barrier method for optimization with inequality constraints. Using these three pieces, we will show how to construct the primal-dual interior point method. The primal-dual is theoretically the most elegant of the many variants and is, in fact, also the most successful computationally. See, for example, Marsten et al. [1989] and Lustig, Marsten, and Shanno [1989]. The primal-dual method has been developed by Megiddo [1986], Kojima, Mizuno, and Yoshise [1988], Monteiro and Adler [1989], McShane, Monma, and Shanno [1989], Choi, Monma, and Shanno [1988], and Lustig, Marsten, and Shanno [1989].

Newton

Recall Newton's method for finding a zero of a function of a single variable: f(x) = 0. Given an initial estimate x^0, we compute a sequence of trial solutions

x^{k+1} = x^k - f(x^k)/f'(x^k)

for k = 0, 1, 2, ..., stopping when |f(x^k)| < ε, where ε is some stopping tolerance; for example, ε = 10^-8.
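The one-dimensional iteration can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the function names (`newton_1d`, `fprime`) and the example f(x) = x^2 - 2 are my own choices.

```python
# Minimal sketch of the 1-D Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k),
# stopping when |f(x_k)| falls below a tolerance eps.

def newton_1d(f, fprime, x0, eps=1e-8, max_iter=50):
    """Return an approximate zero of f, starting from the estimate x0."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < eps:          # stopping test |f(x^k)| < eps
            break
        x = x - f(x) / fprime(x)     # Newton step
    return x

# Example: the zero of f(x) = x^2 - 2 (i.e., sqrt(2)), starting from x0 = 1.
root = newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Starting from x^0 = 1, the iterates 1, 1.5, 1.4167, ... converge quadratically to sqrt(2).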

Suppose we have n equations in n variables: f(x) = 0, where x = (x_1, ..., x_n). The Jacobian at the point x, J(x), is defined as the matrix with (i, j) component

∂f_i(x)/∂x_j,

and Newton's method looks like

x^{k+1} = x^k - [J(x^k)]^{-1} f(x^k).

Or, if we let dx = x^{k+1} - x^k denote the displacement vector and move the Jacobian matrix to the other side:

J(x^k) dx = -f(x^k).   (1)

Newton's method can be applied to the unconstrained minimization problem as well: to minimize g(x), take f(x) = g'(x) and use Newton's method to search for a zero of f(x).
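Equation (1) can be sketched for a small system. This example, including the 2x2 solver and the particular pair of equations, is illustrative and not taken from the paper.

```python
# Sketch of the Newton step J(x^k) dx = -f(x^k) for a system of 2 equations
# in 2 unknowns, with the linear solve written out explicitly (Cramer's rule).

def solve_2x2(J, r):
    """Solve J @ dx = r for a 2x2 matrix J given as ((a, b), (c, d))."""
    (a, b), (c, d) = J
    det = a * d - b * c
    return ((r[0] * d - b * r[1]) / det, (a * r[1] - r[0] * c) / det)

def newton_system(f, jac, x, eps=1e-10, max_iter=50):
    """Newton's method for f(x) = 0, x a pair, using equation (1) each step."""
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < eps:
            break
        dx = solve_2x2(jac(x), (-fx[0], -fx[1]))   # J(x^k) dx = -f(x^k)
        x = (x[0] + dx[0], x[1] + dx[1])           # x^{k+1} = x^k + dx
    return x

# Example: x1^2 + x2^2 = 2 and x1 - x2 = 0 have the solution (1, 1).
f = lambda x: (x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1])
jac = lambda x: ((2.0 * x[0], 2.0 * x[1]), (1.0, -1.0))
sol = newton_system(f, jac, (2.0, 0.5))
```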

Each step of the method can be interpreted as constructing a quadratic approximation of g(x) and stepping directly to the minimum of this approximation. If g is a real valued function of n variables, a local minimum of g will satisfy the following system of n equations in n variables:

∂g(x)/∂x_j = 0 for j = 1, ..., n.

In this case the Newton iteration (1) looks like

H(x^k) dx = -∇g(x^k),

where H(x) is the Hessian matrix with (i, j) component ∂²g(x)/∂x_i∂x_j. If g is convex, then any local minimizer is also a global minimizer. If x* is a local minimizer of g(x), that is, ∇g(x*) = 0, and if ∇²g(x*) has full rank, then Newton's method will converge to x* if started sufficiently close to x*.

Lagrange

Lagrange discovered how to transform a constrained optimization problem, with equality constraints, into an unconstrained problem. To solve the problem

minimize f(x) subject to g_i(x) = 0 for i = 1, ..., m,

form a Lagrangian

L(x, y) = f(x) - Σ_{i=1}^m y_i g_i(x)

and then minimize the unconstrained function L(x, y) by solving the system of (n + m) equations in (n + m) variables:

∂L/∂x_j = ∂f(x)/∂x_j - Σ_{i=1}^m y_i ∂g_i(x)/∂x_j = 0 for j = 1, ..., n

∂L/∂y_i = g_i(x) = 0 for i = 1, ..., m.
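For a quadratic objective with a linear constraint, the Lagrange system is itself linear, so it can be solved in one shot. The problem below (minimize x1^2 + x2^2 subject to x1 + x2 = 1) is my own illustrative choice, not an example from the paper.

```python
# Sketch of solving the Lagrange stationarity system for
#   minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
# L(x, y) = x1^2 + x2^2 - y*(x1 + x2 - 1) gives the linear system
#   2*x1 - y = 0,  2*x2 - y = 0,  x1 + x2 = 1.

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[2.0, 0.0, -1.0],    # dL/dx1 = 2*x1 - y = 0
     [0.0, 2.0, -1.0],    # dL/dx2 = 2*x2 - y = 0
     [1.0, 1.0,  0.0]]    # dL/dy  = -(x1 + x2 - 1) = 0
b = [0.0, 0.0, 1.0]
x1, x2, y = gauss_solve(A, b)
```

The solution is x1 = x2 = 1/2 with multiplier y = 1, the closest point to the origin on the constraint line.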

Fiacco and McCormick

Fiacco and McCormick showed how to handle inequality constraints, in particular nonnegativity conditions: x ≥ 0. The idea of the barrier function approach is to start from a point in the strict interior of the inequalities (x_j^0 > 0 for all j) and construct a barrier that prevents any variable from reaching the boundary (x_j = 0). For example, adding "-log(x_j)" to the objective function will keep x_j strictly positive. Of course, if the constrained optimum is on the boundary (that is, some x_j* = 0, which is always true for linear programming), then the barrier will prevent us from reaching it. The solution is to use a barrier parameter that balances the contribution of the true objective function against that of the barrier. To solve the problem

minimize f(x) subject to x ≥ 0,

f(x) is replaced by the family of unconstrained functions

B(x|μ) = f(x) - μ Σ_{j=1}^n log(x_j),

which is parameterized on the positive barrier parameter μ. Let x(μ) be the minimizer of B(x|μ). Fiacco and McCormick [1968] show that x(μ) → x*, the constrained minimizer, as μ → 0. The set of minimizers x(μ) is called the "central trajectory."

As a simple example, consider the problem

minimize (x_1 + 1)² + (x_2 + 1)² subject to x_1 ≥ 0, x_2 ≥ 0.

The unconstrained minimum would be at (-1, -1), but the constrained minimum is at the origin (x_1*, x_2*) = (0, 0). For any μ > 0:

B(x|μ) = (x_1 + 1)² + (x_2 + 1)² - μ log(x_1) - μ log(x_2).

Setting the partial derivatives to zero,

∂B/∂x_1 = 2(x_1 + 1) - μ/x_1 = 0
∂B/∂x_2 = 2(x_2 + 1) - μ/x_2 = 0,

gives

x_1(μ) = x_2(μ) = (-1 + √(1 + 2μ))/2,

which approaches (0, 0) as μ approaches 0. In general we can't get a closed-form solution for the central trajectory, but can use the following algorithm:
1. Choose μ^0 > 0, set k = 0.
2. Find x(μ^k), the minimizer of B(x|μ^k).
3. Choose μ^{k+1} < μ^k. Set k = k + 1 and go to 2.
As μ^k → 0, we will have x^k(μ^k) → x*.
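The central trajectory can be traced numerically in one coordinate. This sketch assumes the barrier-function setup for the objective (x + 1)² with x ≥ 0: setting the derivative of (x + 1)² - μ log(x) to zero and taking the positive root gives the closed form x(μ) = (-1 + √(1 + 2μ))/2. The function name and the schedule μ ← μ/10 are my own illustrative choices.

```python
# Sketch of a central trajectory: the minimizer x(mu) of
# (x + 1)^2 - mu*log(x) over x > 0, driven toward the constrained
# optimum x* = 0 by shrinking the barrier parameter mu.

import math

def x_of_mu(mu):
    """Positive root of 2*(x + 1) - mu/x = 0, i.e., 2x^2 + 2x - mu = 0."""
    return (-1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

mu, trajectory = 1.0, []
for _ in range(10):
    trajectory.append(x_of_mu(mu))   # step 2: minimizer for current mu
    mu *= 0.1                        # step 3: choose a smaller mu
```

The recorded points decrease monotonically toward 0, illustrating x(μ) → x* as μ → 0.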

In step 2, we can find x(μ) by using Newton's method to solve the system of n equations in n variables:

∂B(x|μ)/∂x_j = ∂f(x)/∂x_j - μ/x_j = 0 for j = 1, ..., n.

In practice, we do not have to compute x(μ) very accurately before reducing μ. Fiacco and McCormick extended the earlier work on barrier methods of Frisch [1954, 1955], Carroll [1961], and Huard [1964].

Primal-Dual Interior Point Method

Armed with Newton's method for unconstrained minimization, and the Lagrangian and barrier methods for converting constrained minimizations into unconstrained minimizations, we are ready for linear programming. Consider the primal-dual pair of linear programs:

(P) minimize c^T x subject to Ax = b, x ≥ 0

(D) maximize b^T y subject to A^T y + z = c, z ≥ 0.

The equations can be handled by Lagrange's method and the nonnegativity conditions by Fiacco and McCormick's barrier method. The resulting unconstrained functions can then be optimized by Newton's method.
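Before deriving the method, it helps to see the primal-dual pair on a concrete instance. The tiny LP below, with its hand-solved optimum, is my own illustrative example, not one from the paper; it checks the familiar optimality conditions Ax = b, A^T y + z = c, and x_j z_j = 0.

```python
# An illustrative primal-dual pair:
#   (P) minimize  x1 + 2*x2  subject to  x1 + x2 = 1,  x >= 0
#   (D) maximize  y          subject to  y + z1 = 1,  y + z2 = 2,  z >= 0
# Hand-solved optima: x* = (1, 0), y* = 1, z* = (0, 1).

A = [[1.0, 1.0]]          # one equality constraint, two variables
b = [1.0]
c = [1.0, 2.0]
x = [1.0, 0.0]            # primal optimum
y = [1.0]                 # dual optimum
z = [0.0, 1.0]            # dual slacks

# Residuals of the three optimality conditions (all should be zero).
primal_feas = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(1)]
dual_feas = [sum(A[i][j] * y[i] for i in range(1)) + z[j] - c[j]
             for j in range(2)]
comp_slack = [x[j] * z[j] for j in range(2)]
```

Note that the primal and dual objectives agree at the optimum: c^T x = 1 = b^T y.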

Suppose that we first introduce the barriers and then form two Lagrangians:

L_P(x, y) = c^T x - μ Σ_{j=1}^n log(x_j) - y^T (Ax - b)

L_D(x, y, z) = b^T y + μ Σ_{j=1}^n log(z_j) - x^T (A^T y + z - c).

If we take all of the partial derivatives and set them to zero, we get just three sets of equations (because of duplication):

Ax = b (primal feasibility)

A^T y + z = c (dual feasibility)

x_j z_j = μ for j = 1, ..., n (μ-complementary slackness).

That is, the usual complementary slackness condition x_j z_j = 0 is relaxed to x_j z_j = μ; we will start μ at some positive value and let it approach zero. If we let diag{ } denote a diagonal matrix with the listed elements on its diagonal, and set X = diag{x_1, ..., x_n} and Z = diag{z_1, ..., z_n}, then the Jacobian of this system of equations has the block form

[ A    0    0 ]
[ 0   A^T   I ]
[ Z    0    X ].
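The block Jacobian [A, 0, 0; 0, A^T, I; Z, 0, X] quoted above is easy to assemble explicitly. This sketch uses my own variable ordering (x, then y, then z) and an illustrative one-constraint instance; it is not the paper's implementation.

```python
# Sketch of assembling the Jacobian of the three optimality-condition sets
#   F(x, y, z) = (Ax - b, A^T y + z - c, XZe - mu*e)
# which has the block structure [[A, 0, 0], [0, A^T, I], [Z, 0, X]].

def assemble_jacobian(A, x, z):
    """Dense (m + 2n) x (m + 2n) Jacobian, variables ordered (x, y, z)."""
    m, n = len(A), len(A[0])
    J = [[0.0] * (2 * n + m) for _ in range(2 * n + m)]
    for i in range(m):                     # block row 1: [A, 0, 0]
        for j in range(n):
            J[i][j] = A[i][j]
    for j in range(n):                     # block row 2: [0, A^T, I]
        for i in range(m):
            J[m + j][n + i] = A[i][j]
        J[m + j][n + m + j] = 1.0
    for j in range(n):                     # block row 3: [Z, 0, X]
        J[m + n + j][j] = z[j]
        J[m + n + j][n + m + j] = x[j]
    return J

A = [[1.0, 1.0]]                # illustrative data: m = 1 constraint, n = 2
x, z = [0.6, 0.4], [0.5, 0.7]   # a strictly interior point (x > 0, z > 0)
J = assemble_jacobian(A, x, z)
```

Each Newton step of the primal-dual method solves a linear system with this matrix for the displacements (dx, dy, dz).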