Solving Variational Inequality Problems with Linear Constraints Based on a Novel Recurrent Neural Network

Youshen Xia¹ and Jun Wang²

¹ College of Mathematics and Computer Science, Fuzhou University, China
² Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, China

Abstract. Variational inequalities with linear inequality constraints are widely used in constrained optimization and engineering problems. By extending a new recurrent neural network [14], this paper presents a recurrent neural network for solving variational inequalities with general linear constraints in real time. The proposed neural network has a one-layer projection structure and is amenable to parallel implementation. As special cases, it includes two existing recurrent neural networks for solving convex optimization problems and monotone variational inequality problems with box constraints, respectively. The proposed neural network is stable in the sense of Lyapunov and globally convergent to a solution under a monotonicity condition on the nonlinear mapping, without requiring the Lipschitz condition. Illustrative examples show that the proposed neural network is effective for solving this class of variational inequality problems.

1 Introduction

Many engineering problems, including robot control, electrical network control, and communications, can be formulated as optimization problems with linear constraints [1]. Because of the time-varying nature of these optimization problems, they have to be solved in real time. One application of real-time optimization in robotics is robot motion control [2]. Another suitable application of real-time optimization in signal processing is adaptive beamforming [3]. Because of the nature of digital computers, conventional numerical optimization techniques are usually not competent for such real-time applications. As parallel computational models, neural networks possess many desirable properties, such as real-time information processing. Therefore, recurrent neural networks for optimization, control, and signal processing have received tremendous interest [3-18]. These neural network models have been theoretically shown to be globally convergent under various conditions and extensively simulated to further demonstrate their operating characteristics in solving various optimization problems. For example, by the penalty method Kennedy and Chua [4] extended Hopfield and Tank's neural network for solving nonlinear programming problems.


To avoid penalty parameters, Rodríguez-Vázquez et al. [6] proposed a switched-capacitor neural network for solving a class of nonlinear optimization problems. In terms of Lagrangian function methods, Zhang and Constantinides [8] developed a Lagrange programming neural network for solving a class of nonlinear optimization problems with equality constraints. Based on projection formulations, Xia, and Xia and Wang [10-13] developed several recurrent neural networks for solving constrained optimization and related problems. Recently, based on a normal mapping equation, Xia and Feng [14] developed a recurrent neural network for solving a class of projection equations and variational inequality problems with box constraints. In this paper, we are concerned with the following variational inequality problem with general linear constraints:

    (x − x∗)ᵀ F(x∗) ≥ 0,  x ∈ Ω0,    (1)

where F : Rⁿ → Rⁿ is differentiable, Ω0 = {x ∈ Rⁿ | Bx = b, Ax ≤ d, x ∈ X}, B is an r×n matrix, A is an m×n matrix, b ∈ Rʳ, d ∈ Rᵐ, and X = {x ∈ Rⁿ | l ≤ x ≤ h}. Problem (1) has been viewed as a natural framework for unifying the treatment of a large class of constrained optimization problems [1,19,20]. The objective of this paper is to extend the existing recurrent neural network [14] to solve (1). The proposed neural network is thus a significant extension, for variational inequality problems, from box constraints to general linear constraints. Compared with the existing projection neural network [12], the proposed neural network does not require the initial point to lie in the feasible set X. Compared with the modified projection-type numerical method for solving (1) [20], which requires a varying step length, the proposed neural network not only has lower complexity but also requires no Lipschitz condition on the mapping F.

2 Neural Network Models

A. Reformulation

For convenience of discussion, we assume that problem (1) has at least one solution and denote the solution set by X∗ = {x∗ ∈ Rⁿ | x∗ solves (1)}. From the optimization literature (Bertsekas, 1989) we know that the Karush-Kuhn-Tucker (KKT) conditions for (1) can be written as

    y ≥ 0,  Ax ≤ d,  Bx = b,  l ≤ x ≤ h,  yᵀ(Ax − d) = 0


and


    (F(x) + Aᵀy − Bᵀz)i ≤ 0  if xi = hi,
    (F(x) + Aᵀy − Bᵀz)i = 0  if li < xi < hi,
    (F(x) + Aᵀy − Bᵀz)i ≥ 0  if xi = li,

where y ∈ Rᵐ and z ∈ Rʳ. According to the projection theorem (Kinderlehrer and Stampacchia, 1980), the above KKT conditions can be equivalently represented as

    PX(x − (F(x) + Aᵀy − Bᵀz)) = x,
    (y + Ax − d)+ = y,    (2)
    Bx = b,

where (y)+ = [(y1)+, ..., (ym)+]ᵀ with (yi)+ = max{0, yi}. Thus x∗ is a solution to (1) if and only if there exist y∗ ∈ Rᵐ and z∗ ∈ Rʳ such that (x∗, y∗, z∗) satisfies (2). Furthermore, let Ω = X × Rᵐ₊ × Rʳ and PΩ(u) = [PX(x), (y)+, z]ᵀ, where Rᵐ₊ = {y ∈ Rᵐ | y ≥ 0} and

    PX(xi) = li  if xi < li,
    PX(xi) = xi  if li ≤ xi ≤ hi,
    PX(xi) = hi  if xi > hi,

and let

    u = (x, y, z)ᵀ,   T(u) = ( F(x) + Aᵀy − Bᵀz,  −(Ax − d),  Bx − b )ᵀ.

Then the projection equations can be simply written as

    PΩ[u − T(u)] = u.    (3)
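To make the reformulation concrete, the following sketch (ours, in Python with NumPy, not part of the original paper) implements PΩ and T for given problem data, so that the residual of the projection equation (3) can be evaluated numerically; the function and variable names are illustrative assumptions.

```python
import numpy as np

def make_projection_and_T(F, A, B, b, d, l, h):
    """Return callables P_Omega(u) and T(u) for the stacked variable u = (x, y, z).

    F : callable, the VI mapping R^n -> R^n
    A : (m, n) inequality matrix,  B : (r, n) equality matrix
    b : (r,) equality right-hand side, d : (m,) inequality right-hand side
    l, h : (n,) lower/upper bounds defining the box X
    """
    n, m, r = len(l), len(d), len(b)

    def split(u):
        return u[:n], u[n:n + m], u[n + m:]

    def P_Omega(u):
        x, y, z = split(u)
        # P_X clips x into the box [l, h]; (y)_+ clips y onto the nonnegative orthant
        return np.concatenate([np.clip(x, l, h), np.maximum(y, 0.0), z])

    def T(u):
        x, y, z = split(u)
        return np.concatenate([F(x) + A.T @ y - B.T @ z,
                               -(A @ x - d),
                               B @ x - b])

    return P_Omega, T

def residual(u, P_Omega, T):
    """Residual of the projection equation (3); it vanishes exactly at a solution u*."""
    return np.linalg.norm(P_Omega(u - T(u)) - u)
```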

Therefore, we have the following lemma.

Lemma 1. x∗ is a solution to (1) if and only if there exist y∗ ∈ Rᵐ and z∗ ∈ Rʳ such that u∗ = (x∗, y∗, z∗) is a solution of the projection equation (3).

B. Dynamical Equations

Based on the novel structure of the recurrent neural network developed in [14], we present a recurrent neural network for solving problem (1) as follows.

State equation:
    dw/dt = λ{−T(PΩ(w)) + PΩ(w) − w},    (4)
Output equation:
    u = PΩ(w),    (5)
where λ > 0 is a design constant, w = (x, y, z) ∈ Rⁿ × Rᵐ × Rʳ is the state vector, and u is the output vector.


Substituting the representation of T(u) into (4), we can represent the state equation of the proposed neural network as

    dw/dt = λ ( −F[PX(x)] − Aᵀ(y)+ + Bᵀz + PX(x) − x,
                A PX(x) − y − d + (y)+,
                −B PX(x) + b ).    (6)

It can be seen that the state equation (6) can be realized by a recurrent neural network with a one-layer projection structure, which consists of n + m + r integrators, m + n piecewise-linear activation functions for PX(x) and (y)+, n processors for F(x), 2n(m + r) connection weights, and some summers. Therefore, the network complexity depends only on the mapping F(x). It can be seen that the proposed neural network has the same network complexity as the projection neural network given in [13]. Moreover, the proposed neural network is a significant generalization of existing neural networks.

C. Two Special Cases

Case 1: When A = O, B = O, b = 0, and d = 0, the considered variational inequality becomes

    (x − x∗)ᵀ F(x∗) ≥ 0,  x ∈ X.    (7)

The corresponding neural network is then given by

State equation:
    dŵ/dt = λ{−F(PX(ŵ)) + PX(ŵ) − ŵ},    (8)
Output equation:
    x = PX(ŵ),    (9)
where ŵ ∈ Rⁿ. This neural network model was presented in [14].

Case 2: F(x) = ∇f(x), where f(x) is a continuously differentiable function. In this case, (1) becomes the well-known nonlinear programming problem with general linear constraints

    minimize f(x)  subject to  Bx = b, Ax ≤ d, l ≤ x ≤ h.    (10)

The corresponding neural network is then given by

State equation:
    dw/dt = λ ( −∇f[PX(x)] − Aᵀ(y)+ + Bᵀz + PX(x) − x,
                A PX(x) − y − d + (y)+,
                −B PX(x) + b ),    (11)
Output equation:
    v = PX(x).
In particular, when A = O, B = O, b = 0, and d = 0, the nonlinear program (10) becomes the bound-constrained nonlinear program

    minimize f(x)  subject to  l ≤ x ≤ h.    (12)


The corresponding neural network is then given by

State equation:
    dw/dt = λ{−∇f[PX(w)] + PX(w) − w},    (13)
Output equation:
    x = PX(w),
where w ∈ Rⁿ. This neural network model was presented in [10].
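The continuous-time models above can be explored numerically with a simple forward-Euler discretization of the state equation (4). The sketch below is one such illustration (ours, not the authors' implementation); it reuses make_projection_and_T from the earlier sketch, and the step size dt, horizon, λ, and the small Case-2 quadratic program are our own assumed choices.

```python
import numpy as np

def simulate_network(F, A, B, b, d, l, h, w0, lam=10.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the state equation (4):
       w <- w + dt * lam * (-T(P_Omega(w)) + P_Omega(w) - w),
    returning the output u = P_Omega(w) after the last step."""
    P_Omega, T = make_projection_and_T(F, A, B, b, d, l, h)  # from the sketch above
    w = w0.astype(float).copy()
    for _ in range(steps):
        u = P_Omega(w)
        w += dt * lam * (-T(u) + u - w)
    return P_Omega(w)

# Illustrative Case-2 instance (assumed data, not from the paper):
# minimize f(x) = 0.5 ||x - c||^2  subject to  x1 + x2 = 1,  x1 - x2 <= 0.5,  0 <= x <= 1
c = np.array([1.0, 0.2])
F = lambda x: x - c                       # F = grad f
A = np.array([[1.0, -1.0]]); d = np.array([0.5])
B = np.array([[1.0, 1.0]]);  b = np.array([1.0])
l = np.zeros(2); h = np.ones(2)
w0 = np.zeros(2 + 1 + 1)                  # state dimension n + m + r
x_hat = simulate_network(F, A, B, b, d, l, h, w0)[:2]
print(x_hat)                              # should approach the constrained minimizer (about [0.75, 0.25])
```

A smaller step size, or an adaptive ODE solver, may be needed when F has large gradients.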

3 Convergence Results

As for the convergence of the proposed neural network in (4) and (5), by combining the analysis techniques of [10, 11, 14] we can obtain the following main results.

Theorem 1. (i) For any initial point, there exists a unique solution trajectory of (4). (ii) The proposed neural network model in (4) and (5) has at least one equilibrium point. Moreover, if w∗ = (x∗, y∗, z∗) is an equilibrium point of the proposed neural network model in (4) and (5), then PX(x∗) is a solution of (1).

Theorem 2. Assume that the Jacobian matrix ∇F(x) of F is positive semidefinite. If ∇F(x∗) is positive definite, then the proposed neural network in (4) and (5) is stable in the sense of Lyapunov and its output trajectory converges globally to a solution of (1).

The following result is an improvement on the existing one given in [14].

Theorem 3. Assume that F(x) is pseudomonotone: F(x∗)ᵀ(x − x∗) ≥ 0 ⇒ F(x)ᵀ(x − x∗) ≥ 0 for all x ∈ X. If (x − x∗)ᵀF(x) = 0 ⇒ x ∈ X∗, then the output trajectory of the neural network defined in (8) and (9) converges globally to a solution of (7).

Remark 1. The convergence condition of Theorem 3 is weaker than the one given in [14], where F(x) is a strictly monotone mapping or F(x) is a monotone gradient mapping. As pointed out in [18], pseudomonotonicity is a generalization of monotonicity: the monotonicity of F implies the pseudomonotonicity of F, but not conversely. As a direct corollary of Theorem 3, we have the following result.

Corollary 1. Assume that F(x) is strictly pseudomonotone: F(x∗)ᵀ(x − x∗) ≥ 0 ⇒ F(x)ᵀ(x − x∗) > 0 for all x ∈ X with x ≠ x∗. Then the output trajectory of the neural network defined in (8) and (9) converges globally to a solution of (7).

Remark 2. Since the nonlinear mapping F(x) is only required to be differentiable, the above results do not require the local Lipschitz condition on F.

4 Simulation Examples

Example 1. Let us consider the variational inequality problem (1), where

    F(x) = ( 5(x1)+ + x1² + x2 + x3,
             5x1 + 3x2² + 10(x2)+ + 3x3,
             10x1² + 8x2² + 4(x3)+ + 3x3² ),

the feasible set is {x ∈ R³ | x1 + x2 + x3 ≥ 6, x ≥ 0}, and F(x) is monotone. This problem has a unique solution x∗ = [4.5, 1.5, 0]ᵀ. We use the proposed neural network in (4) and (5) to solve this problem. All simulation results show that the neural network in (4) and (5) is always convergent to x∗. For example, let λ = 5 and let the initial point be zero. The obtained solution is x̂ = [4.499, 1.499, 0]ᵀ.


Fig. 1. The transient behavior of the output trajectory of the proposed neural network in (4) and (5) with a zero initial point in Example 1

Fig. 2. The convergence behavior of the output trajectory error based on the neural network in (4) and (5) with 5 random initial points in Example 1


Fig. 1 displays the transient behavior of the output trajectory of the neural network in (4) and (5). Fig. 2 displays the convergence behavior of the output trajectory error based on the proposed neural network in (4) and (5) with 5 random initial points.
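For readers who want to reproduce this example numerically, the sketch below (ours, not the authors' code) feeds the data of Example 1 into the simulate_network helper sketched in Section 2. The rewriting of x1 + x2 + x3 ≥ 6 as −x1 − x2 − x3 ≤ −6, the step size, and the horizon are our own assumptions and may need tuning.

```python
import numpy as np

# Mapping F of Example 1; (t)_+ denotes max{t, 0}
def F(x):
    x1, x2, x3 = x
    pos = lambda t: max(t, 0.0)
    return np.array([
        5 * pos(x1) + x1**2 + x2 + x3,
        5 * x1 + 3 * x2**2 + 10 * pos(x2) + 3 * x3,
        10 * x1**2 + 8 * x2**2 + 4 * pos(x3) + 3 * x3**2,
    ])

# Feasible set {x >= 0, x1 + x2 + x3 >= 6} written in the form of (1):
# box X = [0, inf)^3, inequality A x <= d with A = -[1 1 1], d = -6, no equalities
A = -np.ones((1, 3)); d = np.array([-6.0])
B = np.zeros((0, 3)); b = np.zeros(0)
l = np.zeros(3); h = np.full(3, np.inf)

w0 = np.zeros(3 + 1 + 0)                      # zero initial state, as in the paper
x_hat = simulate_network(F, A, B, b, d, l, h, w0,
                         lam=5.0, dt=1e-4, steps=200_000)[:3]
print(np.round(x_hat, 3))                     # expected to approach [4.5, 1.5, 0]
```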


Fig. 3. The transient behavior of the neural network in (8) and (9) with initial point x0 = [−2, −2, −2, 2]ᵀ in Example 2


Fig. 4. The transient behavior of the projection neural network in [12] with initial point x0 = [−2, −2, −2, 2]ᵀ in Example 2

Example 2. Let us consider the nonlinear complementarity problem (NCP)

    x ≥ 0,  F(x) ≥ 0,  xᵀF(x) = 0,

where

    F(x) = ( 3x1² + 2x1x2 + 2x2² + x3 + 3x4 − 6,
             2x1² + x1 + x2² + 10x3 + 2x4 − 2,
             3x1² + x1x2 + 2x2² + 2x3 + 9x4 − 9,
             x1² + 3x2² + 2x3 + 3x4 − 3 ).



Fig. 5. The transient behavior of the neural network in (8) and (9) with initial point x0 = [2, 2, 2, −2]T in Example 2


Fig. 6. The transient behavior of the projection neural network with initial point x0 = [2, 2, 2, −2]T in Example 2

This problem has two solutions, x¹ = [1, 0, 3, 0]ᵀ and x² = [√6/2, 0, 0, 1/2]ᵀ. The NCP can be converted into the variational inequality problem (7), where X = {x ∈ R⁴ | x ≥ 0}, and F(x) is not monotone on X. We apply the neural network in (8) and (9) and the projection neural network given in [12] to solve this problem. Let the initial point be x0 = [−2, −2, −2, 2]ᵀ and λ = 10. The neural network in (8) and (9) converges to x¹, as shown in Fig. 3, and the projection neural network also converges to x¹, as shown in Fig. 4. Next, let the initial point be x0 = [2, 2, 2, −2]ᵀ and λ = 10. The neural network in (8) and (9) converges to x², as shown in Fig. 5, while the projection neural network still converges to x¹, as shown in Fig. 6.
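As a quick sanity check (our sketch, independent of the paper's simulations), the two reported solutions can be verified directly against the complementarity conditions x ≥ 0, F(x) ≥ 0, xᵀF(x) = 0:

```python
import numpy as np

def F(x):
    x1, x2, x3, x4 = x
    return np.array([
        3 * x1**2 + 2 * x1 * x2 + 2 * x2**2 + x3 + 3 * x4 - 6,
        2 * x1**2 + x1 + x2**2 + 10 * x3 + 2 * x4 - 2,
        3 * x1**2 + x1 * x2 + 2 * x2**2 + 2 * x3 + 9 * x4 - 9,
        x1**2 + 3 * x2**2 + 2 * x3 + 3 * x4 - 3,
    ])

for x in (np.array([1.0, 0.0, 3.0, 0.0]),
          np.array([np.sqrt(6) / 2, 0.0, 0.0, 0.5])):
    Fx = F(x)
    # NCP conditions: x >= 0, F(x) >= 0 (up to rounding), and x^T F(x) = 0
    feasible = bool(np.all(x >= 0) and np.all(Fx >= -1e-9))
    complementary = abs(x @ Fx) < 1e-9
    print(x, Fx, feasible, complementary)
```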

5 Concluding Remarks

We have presented a recurrent neural network for solving variational inequality problems with general linear constraints. The proposed neural network has the same network complexity as the existing projection neural network but does not require the initial point to lie in the feasible set. Moreover, the proposed neural network is a significant generalization of several existing neural networks for constrained optimization. It is stable in the sense of Lyapunov and globally convergent to a solution under a monotonicity condition on the nonlinear mapping, without the Lipschitz condition. Further investigations will be aimed at the convergence rate of the proposed neural network and its engineering applications.

References

1. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms (2nd Ed.). John Wiley, New York (1993)
2. Yoshikawa, T.: Foundations of Robotics: Analysis and Control. MIT Press, Cambridge, MA (1990)
3. Cichocki, A., Unbehauen, R.: Neural Networks for Optimization and Signal Processing. Wiley, England (1993)
4. Kennedy, M.P., Chua, L.O.: Neural Networks for Nonlinear Programming. IEEE Transactions on Circuits and Systems 35(5) (1988) 554-562
5. Lillo, W.E., Loh, M.H., Hui, S., Żak, S.H.: On Solving Constrained Optimization Problems with Neural Networks: A Penalty Method Approach. IEEE Transactions on Neural Networks 4(6) (1993) 931-939
6. Rodríguez-Vázquez, A., Domínguez-Castro, R., Rueda, A., Huertas, J.L., Sánchez-Sinencio, E.: Nonlinear Switched-capacitor 'Neural Networks' for Optimization Problems. IEEE Transactions on Circuits and Systems 37 (1990) 384-397
7. Żak, S.H., Upatising, V., Hui, S.: Solving Linear Programming Problems with Neural Networks: A Comparative Study. IEEE Transactions on Neural Networks 6 (1995) 94-104
8. Zhang, S., Constantinides, A.G.: Lagrange Programming Neural Networks. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 39 (1992) 441-452
9. Bouzerdoum, A., Pattison, T.R.: Neural Network for Quadratic Optimization with Bound Constraints. IEEE Transactions on Neural Networks 4(2) (1993) 293-304
10. Xia, Y.S.: ODE Methods for Solving Convex Programming Problems with Bounded Variables. Chinese Journal of Numerical Mathematics and Applications (English edition) 18(1) (1995)
11. Xia, Y.S., Wang, J.: On the Stability of Globally Projected Dynamic Systems. Journal of Optimization Theory and Applications 106(1) (2000) 129-150
12. Xia, Y.S., Leung, H., Wang, J.: A Projection Neural Network and its Application to Constrained Optimization Problems. IEEE Transactions on Circuits and Systems - Part I 49(4) (2002) 447-458
13. Xia, Y.S.: An Extended Projection Neural Network for Constrained Optimization. Neural Computation 16(4) (2004) 863-883
14. Xia, Y.S., Feng, G.: A New Neural Network for Solving Nonlinear Projection Equations. Accepted in Neural Networks (2007)


15. Sun, C.Y., Feng, C.B.: Neural Networks for Nonconvex Nonlinear Programming Problems: A Switching Control Approach. Lecture Notes in Computer Science 3496 (2005) 694-699
16. Hu, S.Q., Liu, D.R.: On the Global Output Convergence of a Class of Recurrent Neural Networks with Time-varying Inputs. Neural Networks 18 (2005) 171-178
17. Liao, X.X., Zeng, Z.G.: Global Exponential Stability in Lagrange Sense of Continuous-time Recurrent Neural Networks. Lecture Notes in Computer Science 3971 (2006) 115-121
18. Hu, X., Wang, J.: Solving Pseudomonotone Variational Inequalities and Pseudoconvex Optimization Problems Using the Projection Neural Network. IEEE Transactions on Neural Networks 17(6) (2006) 1487-1499
19. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
20. Solodov, M.V., Tseng, P.: Modified Projection-type Methods for Monotone Variational Inequalities. SIAM Journal on Control and Optimization 34(5) (1996) 1814-1830
