Journal of Mathematical Sciences, Vol. 132, No. 2, 2006

COMPUTER-ALGEBRA IMPLEMENTATION OF THE LEAST-SQUARES METHOD ON THE NONNEGATIVE ORTHANT

Kh. D. Ikramov∗ and M. Matin far∗

UDC 512

Maple procedures for solving the so-called NNLS (Nonnegative Least Squares) problem are described. The NNLS problem is to minimize ||Ex − f||_2 subject to x ≥ 0. The solution of an NNLS problem is the crucial step of the conventional algorithm for solving linear least-squares problems with linear inequality constraints. Bibliography: 5 titles.

1. Introduction

This is the fifth among our publications devoted to different aspects of the computer-algebra implementation of the least-squares method. In [1–3], the unconstrained linear least-squares problem (Problem LS)

    min_{x ∈ R^n} ||Ax − b||_2    (1)

was addressed, where A is a given m × n matrix and b is a given vector of dimension m. The least-squares problem with linear equality constraints (Problem LSE) was considered in [4].

The present paper dwells on the least-squares problem with linear inequality constraints (Problem LSI). It is stated as follows. Given a real m_1 × n matrix G, a real m_2 × n matrix E, and real vectors h and f of dimensions m_1 and m_2, respectively, find a vector x ∈ R^n that minimizes the quantity ||Ex − f||_2 and satisfies the system of linear inequalities

    Gx ≥ h.    (2)

Hereafter, inequalities interrelating real vectors of the same dimension are understood as componentwise inequalities.

It is well known (see, e.g., [5]) that Problem LSI can be reduced to its own particular case, namely, the so-called Problem NNLS (Nonnegative Least Squares): minimize

    ||Ex − f||_2    (3)

subject to

    x ≥ 0.    (4)

A method for performing this reduction, different from the method advised in [5], will be discussed in a subsequent paper. However, the intention to solve Problem LSI via the auxiliary problem (3), (4) means that the algorithm for solving Problem NNLS must be the core of a program package for solving Problem LSI. It is the computer-algebra implementation of such an algorithm (namely, Algorithm NNLS) that is the subject of this paper.

In Sec. 2, we recall the structure of Algorithm NNLS and discuss means of increasing its efficiency. They are related to the fact that the matrices of the unconstrained least-squares problems solved at consecutive steps of the algorithm are low-rank (and, as a rule, even rank-one) corrections of each other. In Sec. 3, we describe test examples and present numerical results obtained with our implementation of Algorithm NNLS as a Maple procedure.

∗Moscow State University, Russia, e-mail: [email protected].

Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 309, 2004, pp. 23–29. Original article submitted February 2, 2004.

© 2006 Springer Science+Business Media, Inc. 1072-3374/06/1322-0156

2. Algorithm NNLS

Essentially, Algorithm NNLS (see [5]) looks for a vector x and its dual vector w that satisfy the Kuhn–Tucker conditions for Problem NNLS (namely, x ≥ 0, w = E^T(f − Ex) ≤ 0, and x^T w = 0). The input data of the algorithm are the integers m_2 and n, the m_2 × n matrix E, and the m_2-vector f. The algorithm uses working space consisting of two n-vectors w and z and two subsets P and L of the index set {1, 2, ..., n}. The components of the current vector x with indices from the set L are zero. The components with indices from the set P can take nonzero values. If such a component is nonpositive, then the algorithm will either change its value to a positive one or set it to zero and move its index from P to L. On termination, the vector x contains the solution of Problem NNLS, whereas w contains the dual vector.

Algorithm NNLS

1. Set P = ∅, L = {1, 2, ..., n}, and x = 0.
2. Compute the n-vector w = E^T(f − Ex).
3. If the set L is empty or w_j ≤ 0 for all j ∈ L, then go to Step 12.
4. Find an index t ∈ L such that w_t = max{w_j | j ∈ L}.
5. Move the index t from L to P.
6. Let E_P be the m_2 × n matrix defined as follows:

       column j of E_P = column j of E,  if j ∈ P,
       column j of E_P = 0,              if j ∈ L.

   Compute the n-vector z as the minimum-norm solution of the least-squares problem

       E_P z = f.    (5)

   Note that the components z_j, j ∈ L, are zero.

7. If z_j > 0 for all j ∈ P, then set x = z and go to Step 2.
8. Find an index q ∈ P such that

       x_q / (x_q − z_q) = min{ x_j / (x_j − z_j) | z_j ≤ 0, j ∈ P }.

9. Set

       α = x_q / (x_q − z_q).

10. Reset x to x + α(z − x).
11. Move all the indices j ∈ P for which x_j = 0 from the set P to the set L. Go to Step 6.
12. Terminate the process.

Algorithm NNLS may be regarded as consisting of a main loop (Loop A) and an inner loop (Loop B). Loop B consists of Steps 6–11; it has a single entry point at Step 6 and a single exit point at Step 7. Loop A consists of Steps 2–5 and Loop B; it starts at Step 2 and exits from Step 3. It was shown in [5, Chap. 23, Sec. 3] that Algorithm NNLS is finite because Loop A must terminate after a finite number of iterations, and the number of repetitions of Loop B within Loop A is finite as well. In practice, the exit from Loop B usually occurs immediately upon reaching Step 7. As for Loop A, it was observed that the average number of its repetitions is about n/2.

Note that the least-squares problem solved at Step 6 differs from the problem previously solved at Step 6 in that either another nonzero column has been added to the matrix E_P or one or several nonzero columns of E_P have been deleted at Step 11. In both cases, the matrices E_P of two adjacent steps are low-rank (typically, even rank-one) corrections of each other. This situation and the computational benefits it offers were thoroughly discussed in [2, 3].

3. Numerical results

We have implemented Algorithm NNLS, described in the preceding section, as a Maple procedure. There are two versions of this procedure. In the first version, NNLS1, the unconstrained least-squares problem at Step 6 is solved by invoking the least-squares procedure from the standard library of Maple. In the second version, NNLS2, between every two successive executions of Step 6 the minimum-norm solution is updated using the Maple procedures AC and DC (for "add column" and "delete column") developed in [3].
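For concreteness, here is a minimal sketch of Algorithm NNLS in Python with NumPy, given purely for illustration (the procedures NNLS1 and NNLS2 themselves are Maple code and are not reproduced in the paper). The sketch follows the NNLS1 strategy of recomputing the minimum-norm solution of Step 6 from scratch with np.linalg.lstsq; the function name nnls_sketch, the tolerance tol, and the iteration bound are our own additions, serving as floating-point safeguards that are not part of the algorithm as stated above.

    import numpy as np

    def nnls_sketch(E, f, tol=1e-12):
        # Algorithm NNLS of Sec. 2: minimize ||E x - f||_2 subject to x >= 0.
        # Returns the solution x and the dual vector w = E^T (f - E x).
        n = E.shape[1]
        P = []                                  # indices allowed to take nonzero values
        L = list(range(n))                      # indices whose components are kept at zero
        x = np.zeros(n)                         # Step 1
        for _ in range(3 * n):                  # Loop A (finite; the bound is a round-off guard)
            w = E.T @ (f - E @ x)               # Step 2: the dual vector
            if not L or max(w[j] for j in L) <= tol:
                break                           # Step 3: the Kuhn-Tucker conditions hold
            t = max(L, key=lambda j: w[j])      # Step 4: most positive dual component
            L.remove(t)                         # Step 5
            P.append(t)
            while True:                         # Loop B: Steps 6-11
                EP = np.zeros_like(E)
                EP[:, P] = E[:, P]              # Step 6: columns with indices in L are zero
                z = np.linalg.lstsq(EP, f, rcond=None)[0]   # minimum-norm solution of (5)
                if all(z[j] > tol for j in P):
                    x = z                       # Step 7: z is feasible; return to Loop A
                    break
                # Steps 8-9; in exact arithmetic the denominators cannot vanish
                alpha = min(x[j] / (x[j] - z[j]) for j in P if z[j] <= tol)
                x = x + alpha * (z - x)         # Step 10
                for j in [j for j in P if x[j] <= tol]:
                    P.remove(j)                 # Step 11: drop components driven to zero
                    L.append(j)
                x[L] = 0.0                      # keep the pinned components exactly zero
        return x, E.T @ (f - E @ x)

On the test problems of Sec. 3, this sketch reproduces the expected solutions (9) and (11); what it deliberately omits is the rank-one updating of the subproblem solutions that distinguishes NNLS2.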

The performance of these two versions was examined in two series of numerical experiments. Test problems were constructed using M-matrices. Recall that a real n × n matrix A with nonpositive off-diagonal entries is called an M-matrix if it is nonsingular and its inverse A^{-1} is (entrywise) nonnegative. A well-known example of M-matrices is given by Stieltjes matrices, i.e., positive-definite matrices with nonpositive off-diagonal entries. For a symmetric matrix A with positive diagonal, positive definiteness is implied, for instance, by diagonal dominance. In our experiments, we used the n × n matrices

    A_n = [  n  −1  −1  ...  −1 ]
          [ −1   n  −1  ...  −1 ]    (6)
          [ ...  ...  ...  ... ]
          [ −1  −1  −1  ...   n ]

with different values of n.

Let the matrix of a system of linear equations

    Ax = b    (7)

be an M-matrix, and let b be a nonnegative vector. Then the unique solution of system (7), i.e., the vector x_0 = A^{-1}b, is nonnegative as well. Therefore, if one sets E = A and f = b in problem (3), (4), then its solution must be the vector x_0. For example, if E = A_n and

    b = (1, 1, ..., 1)^T,    (8)

then

    x_0 = A_n^{-1} b = (1, 1, ..., 1)^T = b.    (9)
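This construction is easy to verify numerically. The following fragment is an illustration of ours, not part of the paper (whose experiments used Maple throughout); it builds A_n, checks the M-matrix property, and solves problem (3), (4) with scipy.optimize.nnls, an independent implementation of the same Lawson–Hanson algorithm:

    import numpy as np
    from scipy.optimize import nnls

    n = 10
    A = (n + 1) * np.eye(n) - np.ones((n, n))   # the matrix A_n of (6)
    b = np.ones(n)                              # the right-hand side (8)

    assert np.all(np.linalg.inv(A) >= 0)        # A_n is an M-matrix
    x, rnorm = nnls(A, b)                       # problem (3), (4) with E = A_n, f = b
    print(np.allclose(x, b), rnorm)             # per (9): x_0 = b, zero residual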

In the first series of experiments, we solved system (7) with the matrix A_n and the right-hand side (8) for n = 10, 20, 30, 40, 50. The computations were performed on a 650 MHz Pentium III, using version 7 of Maple. The run times (in seconds) for both versions of procedure NNLS are indicated in Table 1.

    Table 1

     n    NNLS1   NNLS2
    10      1.6     0.8
    20     11.1     5.2
    30     44.2    19.5
    40    126.6    52.7
    50    296.2   120.3

Relation (9) means that the vector (8) is an eigenvector of the matrix A_n^{-1} associated with its (largest) eigenvalue 1. It is easy to show that A_n^{-1} has no other nonnegative eigenvectors (noncollinear to b). Similarly, up to collinearity, b is the only nonnegative eigenvector of the matrix A_n. (It corresponds to its smallest eigenvalue 1.) Therefore, Problem NNLS with

    E = [ A_n − I_n ]    and    f = (0, ..., 0, 1)^T,    (10)
        [ 1  1 ... 1 ]

where f is the vector of dimension n + 1 with one nonzero component in the last position, has the solution

    x_0 = (1/n, 1/n, ..., 1/n)^T.    (11)

The last equation of the problem, namely,

    x_1 + x_2 + ... + x_n = 1,

can be regarded as a normalization condition for an eigenvector.

In the second series of experiments, the matrix and vector in problem (3), (4) were defined by formulas (10) with n = 10, 20, 30, 40, 50. The run times for both versions of procedure NNLS are provided in Table 2. We have put a counter for the number of repetitions of Step 6 in our programs. For all test runs, the value of this counter at the exit from the algorithm was equal to the dimension n of the vector x. This is related to the fact that the solution is a positive vector (namely, the vector (9) or (11)), whereas the initial approximation is always a zero vector (see Step 1).

    Table 2

     n    NNLS1   NNLS2
    10      1.3     0.8
    20      9.7     5.4
    30     38.8    20.0
    40    111.6    54.4
    50    264.5   123.3
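The second test family can be checked in the same way. This fragment, again ours and for illustration only, assembles the matrix and vector of (10) and confirms the solution (11):

    import numpy as np
    from scipy.optimize import nnls

    n = 10
    A = (n + 1) * np.eye(n) - np.ones((n, n))        # A_n as in (6)
    E = np.vstack([A - np.eye(n), np.ones((1, n))])  # the (n+1) x n matrix of (10)
    f = np.zeros(n + 1); f[-1] = 1.0                 # f = (0, ..., 0, 1)^T

    x, rnorm = nnls(E, f)
    print(np.allclose(x, np.full(n, 1.0 / n)))       # the solution (11)
    print(rnorm)                                     # zero: x_0 satisfies E x = f exactly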

For the reasons explained in Sec. 2, we expected NNLS2 to be more efficient than NNLS1. Tables 1 and 2 show that, to a certain extent, these expectations were confirmed: NNLS2 takes 2 to 2.5 times less time than NNLS1. That the superiority of NNLS2 did not prove to be still more pronounced can be explained as follows: our experiments were not completely fair, because the library procedure, which makes use of all the capabilities of Maple, was competing with the imperfect experimental procedures AC and DC.

Translated by Kh. D. Ikramov.

REFERENCES

1. Kh. D. Ikramov and M. Matin far, “Computer-algebra procedures for matrix pseudoinversion,” Zh. Vychisl. Mat. Mat. Fiz., 43, 163–168 (2003).
2. Kh. D. Ikramov and M. Matin far, “Rank-one modifications and updating pseudoinverse matrices,” Vestn. Mosk. Gos. Univ., Ser. 15, Vychisl. Mat. Kibern., No. 4, 12–17 (2003).
3. Kh. D. Ikramov and M. Matin far, “Updating the minimum-norm least-squares solution under rank-one modifications of a matrix,” Zh. Vychisl. Mat. Mat. Fiz., 43, 493–505 (2003).
4. Kh. D. Ikramov and M. Matin far, “On computer-algebra procedures for linear least-squares problems with linear equality constraints,” Zh. Vychisl. Mat. Mat. Fiz., 44, 206–212 (2004).
5. C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs (1974).
