THEOREM 4. If n > 2 and T : H_n → H_n is a rank-2 preserver, then T is of the form (*) or (**).

On the Proximal Minimization Algorithm with D-Functions

by YAIR CENSOR¹ and STAVROS A. ZENIOS³

¹ Department of Mathematics and Computer Science, University of Haifa, Mt. Carmel, Haifa 31999, Israel.
³ Department of Decision Sciences, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104-6366.

1. Introduction. The proximal minimization algorithm deals with the convex optimization problem

(1.1)    minimize F(x)
         subject to x ∈ X,

where F : R^n → R is a given proper convex function and X ⊆ R^n is a nonempty closed subset of the n-dimensional Euclidean space R^n. The approach is based on converting (1.1) into a sequence of optimization problems with strictly convex objective functions, obtained by adding quadratic terms to F(x). The origins of this algorithm go back to Minty [14], Moreau [15], and Rockafellar [16, 17]. In addition to considerable theoretical interest in the family of proximal point algorithms, the latter is also an important computational tool. This is so because the dual problem of a strictly convex optimization problem can be solved by simple iterative procedures like dual coordinate ascent. For several important classes of problems, e.g., linear programming and transportation problems, good special-purpose dual algorithms exist; see, e.g., [5, 6, 13]. Such an approach leads to purely row-action algorithms and can be relevant for parallel computations.
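For concreteness, the kth step of the classical quadratic proximal minimization method takes the following standard form (a sketch in our own notation, with current iterate x^k and parameter λ_k > 0; these symbols are not taken from the text above):

    \[
      x^{k+1} \in \arg\min_{x \in X}
        \Bigl\{\, F(x) + \tfrac{\lambda_k}{2}\, \lVert x - x^k \rVert^2 \,\Bigr\},
      \qquad \lambda_k > 0 .
    \]

The added quadratic term makes each subproblem strictly convex even when F itself is not, which is what opens the door to the dual procedures mentioned above.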

In [7] we generalize the proximal minimization algorithm by replacing the quadratic additive term with a function D : R^n × R^n → R and specifying several classes of D-functions for which the convergence, structure, and properties of the original algorithm can be preserved. These D-functions were introduced by Bregman [3] and studied further by Censor and Lent [4] and De Pierro and Iusem [8]. The idea of replacing the quadratic term by one special choice of a D-function was heuristically suggested by Eriksson [12]. Further work is contained in [1, 2, 9-11, 18].

A different choice of D-function obtained from our scheme leads to a proximal minimization algorithm with an entropy additive term. In the case of a linear programming problem (F and the constraints are all linear), this gives rise to a sequence of entropy optimization problems which can be decomposed and solved on some specialized parallel machines. See also the synopsis [19] in this issue, which reports on some such D-functions for a class of nonlinear problems.
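A minimal numerical sketch of such a D-function iteration is given below, under simplifying assumptions of our own choosing: F is linear, the feasible set X is taken to be the standard simplex, and D is the Kullback-Leibler divergence induced by the entropy kernel f(x) = Σ_j x_j ln x_j. The function names and data are illustrative only, and the closed-form subproblem solution is specific to these choices; this is not the algorithm analyzed in [7].

    import numpy as np

    def entropic_proximal_step(c, x, lam):
        """One proximal step for F(x) = <c, x> over the simplex with the
        Kullback-Leibler D-function: argmin_x { <c, x> + lam * D(x, x_prev) }
        subject to sum(x) = 1, x > 0, which has this closed form."""
        z = x * np.exp(-c / lam)    # multiplicative (entropic) update
        return z / z.sum()          # renormalize onto the simplex

    def kl_divergence(x, y):
        """Kullback-Leibler divergence; equals D_f(x, y) for the entropy
        kernel f(x) = sum_j x_j ln x_j when x and y lie on the simplex."""
        return float(np.sum(x * np.log(x / y)))

    c = np.array([3.0, 1.0, 2.0])   # toy objective coefficients (assumed data)
    x = np.full(3, 1.0 / 3.0)       # start at the barycenter of the simplex

    for k in range(100):
        x_next = entropic_proximal_step(c, x, lam=1.0)
        if kl_divergence(x_next, x) < 1e-12:   # stop when successive iterates coincide
            x = x_next
            break
        x = x_next

    print("approximate solution:", np.round(x, 6))   # mass concentrates where c_j is smallest

With the quadratic choice D(x, y) = (1/2)‖x − y‖², the same outer loop reduces to the classical proximal step displayed above.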

2. The Proximal Minimization Algorithm with D-Functions. Let S be a nonempty open convex set in R^n such that S̄ ⊆ A, where S̄ is the closure of S and A is the domain of a function f : A ⊆ R^n → R. Assume that f(x) is twice continuously differentiable at every x ∈ S, and denote by ∇f(x) and ∇²f(x) its gradient and its Hessian matrix at x, respectively. Furthermore, assume that f(x) is continuous and strictly convex on S̄. The set S is called the zone of f, and f obeying these assumptions is referred to as an auxiliary function. From f(x) construct the D-function D_f(x, y), D_f : S̄ × S ⊆ R^{2n} → R, by

(2.1)    D_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩,

where ⟨·, ·⟩ denotes the usual inner product in R^n.
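Two standard kernels, given here only for illustration (they are not part of the synopsis), show how (2.1) specializes:

    % (i) Quadratic kernel: recovers the usual proximal term.
    f(x) = \tfrac12 \lVert x \rVert^2
      \;\Longrightarrow\;
    D_f(x,y) = \tfrac12\lVert x\rVert^2 - \tfrac12\lVert y\rVert^2 - \langle y,\, x-y\rangle
             = \tfrac12 \lVert x - y \rVert^2 .

    % (ii) Entropy kernel on the positive orthant: gives the Kullback-Leibler divergence.
    f(x) = \sum_{j=1}^{n} x_j \ln x_j
      \;\Longrightarrow\;
    D_f(x,y) = \sum_{j=1}^{n} \bigl[\, x_j \ln (x_j / y_j) - x_j + y_j \,\bigr].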

D_f-functions are instrumental in defining D_f-projections onto convex sets and play a key role in the primal-dual optimization algorithms in [3, 4, 8]. The following additional properties need to be postulated for the auxiliary functions, their zones, and the D_f-functions constructed from them. Denote, for any α ∈ R, the partial level sets of D_f(x, y) by

    L_1(x, α) = { y ∈ S | D_f(x, y) ≤ α },
    L_2(y, α) = { x ∈ S̄ | D_f(x, y) ≤ α }.
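As a small illustration of these level sets (again our own example, not part of the synopsis): for the quadratic kernel f(x) = (1/2)‖x‖² with zone S = R^n one has D_f(x, y) = (1/2)‖x − y‖², so that

    L_1(x,\alpha) = \{\, y \in \mathbb{R}^n : \lVert x - y \rVert^2 \le 2\alpha \,\},

a Euclidean ball. Boundedness of such partial level sets is among the properties typically postulated for auxiliary functions and their D_f-functions.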