LINEAR INEQUALITIES by: Sergei N. Chernikov* Translated from Russian by Mohamed El-Hodiri and Bulat Mukhamadeyev University of Kansas, Al-Farabi Kazakh National University
Abstract Chernikov's seminal and neglected book was translated and edited by us as a service to researchers in systems of linear inequalities and optimization who would rather go to original sources than to sources that are allergic to citing their original sources.
*Nauka, Moscow 1967
Table of Contents

FOREWORD
INTRODUCTION

Chapter I: The Boundary Solution Principle
1. Nodal Solutions and Nodal Subsystems
2. Boundary Solutions and Boundary Subsystems
3. The Solution Polyhedron of a System of Linear Inequalities
4. Consistency Conditions for Systems of Linear Inequalities in Pn
5. Generalization of the Kronecker-Capelli Theorem
6. Some Criteria for the Existence of Positive and of Negative Solutions of Systems of Linear Inequalities
7. Conditions for the Nondegeneracy of the Solution Polyhedron of a System of Linear Inequalities
8. Conditions for the Unboundedness of Solution Polyhedra of Systems of Linear Inequalities
9. The Modular Forms of Systems of Linear Inequalities. Conditions for their Reducibility

Chapter II: The Principle of Duality
1. The Minkowski-Farkas Theorem on the Consequences of a System of Linear Inequalities
2. Systems of Rank r that Consist of r + 1 Linear Inequalities
3. Some Applications of the Results of Section 1
4. Weyl's Theorem on Convex Cones with Finite Sets of Generating Elements, and Some of its Corollaries
5. The Conjugate Cone of an Arbitrary System of Linear Inequalities
6. The Class of Finitely Generated Convex Cones in Pn
7. Separability of Convex Polyhedral Sets

Chapter III: Methods of Deriving General Forms of Solutions for Systems of Linear Inequalities
1. Definition of the Fundamental Solution System for a System of Homogeneous Linear Inequalities
2. A Computational Scheme for Finding the General Form of Nonnegative Solutions for a System of Linear Inequalities
3. Eliminating Dependent Inequalities from a System of Linear Inequalities
4. A Computational Scheme for Finding the General Forms of Nonnegative Solutions of a System of Linear Equations
5. On the Equivalence of Systems of Linear Inequalities

Chapter IV: Systems of Linear Strict Inequalities. Mixed Systems
1. Systems of Linear Strict Inequalities and the Stably Consistent Systems Associated with Them
2. Mixed Systems of Linear Inequalities
3. Some Properties of Independent Stable Inequalities
4. Combining Compatible Systems of Linear Inequalities
5. Matrix Criteria for Stability of Systems of Linear Inequalities

Chapter V: Convolutions of Systems of Linear Inequalities: Elimination of Unknowns
1. Splicing Cones for a System of Linear Inequalities and their Convolutions
2. Iterated Convolutions: Elimination of Unknowns
3. Nonnegative Solutions of Systems of Linear Equations
4. Convolutions of Special Types of Systems of Linear Inequalities
5. Convolutions of Systems of Linear Equations: An Algorithm for the Elimination of Unknowns
6. Convolutions of Systems of Linear Inequalities that Include Strict Inequalities
7. Homomorphic Equivalence of Systems of Linear Inequalities

Chapter VI: Theory of Linear Programming
1. The General Problem of Linear Programming
2. Linear Programming in Pn
3. The Canonical Problem in Linear Programming
4. A Measure of the Inconsistency in a System of Linear Inequalities
5. Application of the Method of Convolutions of Systems of Linear Inequalities to Linear Programming
6. Optimization of Vector-Valued Linear Functions

Chapter VII: Some Infinite Systems of Linear Inequalities
1. Polyhedrally Closed Systems of Linear Inequalities
2. Convolutions of Polyhedrally Closed Systems of Linear Inequalities
3. Separation Theorems for Convex Sets
4. Linear Programming Problems for Polyhedrally Closed Systems of Linear Inequalities
PREFACE

Distinct properties of systems of linear inequalities were already being investigated during the first half of the last century, in connection with some problems of analytical mechanics whose solutions reduced to solving certain systems of linear inequalities. The systematic study of these properties started at the end of the last century. However, it could not be said that a theory of linear inequalities existed until the end of the twenties of the present century. The thirties witnessed the emergence of a sketch of Weyl's theory of finite linear inequality systems which did not make use of the topological properties of the field of real numbers. Further research established the theory of finite systems of linear inequalities as a branch of linear algebra under appropriate conditions on the underlying ordered fields of these systems (i.e., the fields of coefficients). This book is devoted to the basic theory of finite systems of linear inequalities and the algebraic methods of their solution. The theory we present (Chapters I-VI) is based
on linear algebraic methods and on appropriate finitary methods which follow from assumptions about the ordering of the underlying fields. These methods are also used in the study of certain classes of infinite systems (Chapter VII). The algebraic theory of linear inequalities presented in this book includes all the basic results about finite systems of linear inequalities with real coefficients and free terms, and in this way provides a unified algebraic approach to these systems. In particular, it includes the results of the theory of linear programming. The book also provides an algebraic method of solving linear inequalities. The passage from the field of real numbers to an arbitrary ordered field is accomplished by means of a clear exposition of the theory of linear inequalities and does not present any complication. It is self-evident that no part of the exposition would have to change if we replaced the underlying field by the field of real numbers. A deficiency of this book might be the absence of the algorithms that are currently used in linear programming. However, that material would require a book of substantial size. Another reason is that the inclusion of this material here might violate our adopted principle of finiteness of the exposition. A more serious deficiency of the book may be that it does not include the extensions of the results about finite systems of linear inequalities to linear topological spaces (see, e.g., Arrow, Hurwicz, Uzawa [1] and Braunschweiger [1]). However,
there is a lot of such material, and its exposition would require a separate book. Furthermore, it is clear that this subject is not fundamentally related to the material presented in this book. This book grew out of lectures on linear inequalities given by the author during 1956-1959 at the University of Perm and 1961-1963 at the University of Sverdlovsk. The central proposition of these lectures was the principle of bounding solutions established by the author. It states that from each consistent finite system of linear inequalities over the field of real numbers having nonzero rank, it is possible to choose a subsystem with the same rank and a number of inequalities equal to that rank, such that any solution of the subsystem which turns all of its inequalities into equations is a solution of the original system. In the lectures, many theorems on finite systems of linear inequalities were derived from the principle of bounding solutions and from the Minkowski-Farkas theorem on dependent inequalities. In this book, the principle of bounding solutions is proved (by purely algebraic finitary methods) for finite systems of linear inequalities over an arbitrary ordered field and is used as the basis of the theory of these systems. The noted Minkowski-Farkas theorem is also extended to such systems. The background required for understanding the material presented here is no more than the elements of linear algebra, e.g., the first three chapters of the book by L. Ya. Okunev [1]. The author takes this opportunity to express his sincere gratitude to Academician A. I. Mal'tsev for his support of the author's initiative in connection with writing this book.

S. N. Chernikov
Kiev, April 1966
Introduction

The term "system of linear inequalities" usually denotes a finite or infinite system of the form

fα(x) - aα = aα1 x1 + ... + aαn xn - aα ≥ 0 (α ∈ M), (1)

where the aαk and aα are given real numbers, the xk are the unknowns, and α is an element of an index set M. The study of systems of this type started as early as the time of Fourier (see Fourier [1]) who, in the twenties of the last century, proposed the elimination method as a method of solution for such systems. M. V. Ostrogradskii, in his work in analytical
mechanics, devoted a great deal of attention to such systems. Ostrogradskii [1] showed that the stability of systems of coupled mass points reduces to the study of systems of type (1). Ostrogradskii's work was later extended by specialists in analytical mechanics (see Farkas's article [7] and the literature cited there), who established many properties of systems of linear inequalities. The problem of best approximation of tabulated functions by means of a polynomial of the form

y = c1 φ1(x) + ... + cn φn(x),
where φ1(x), ..., φn(x) are given functions, reduces to the study of a finite system of type (1). In case α ranges through a continuum of values, the classical problem of approximation of a function on a given interval likewise reduces to the study of system (1). Originating with the work of P. L. Chebyshev, these considerations have led the study of linear inequalities into close association with the study of extremal problems (see Kirchberger [1], [2], Remez [1] and Ivanov [1]). This direction (extremal problems) touches upon the theories of approximation of functions, minimax problems, the theory of moments, etc. Also related to the study of systems of linear inequalities is the work of Voronoi [1] on quadratic forms in integral variables. To study such problems, it was necessary to study the convex polyhedral manifolds in Rn that are defined by the solutions of system (1). Here, the basic questions are those of the consistency of finite systems (1), the independence of the inequalities of the system, and questions about the geometric structure of the solution set: its dimension, its boundedness, the position of its vertices, edges and faces. In particular, Voronoi [1] established a necessary and sufficient condition for the set of solutions of the finite system

fα(x) = aα1 x1 + ... + aαn xn ≥ 0 (α ∈ M), (2)

a system of homogeneous linear inequalities, to be n-dimensional. The condition is the nonexistence of a positive linear combination (i.e., with positive coefficients) of the left hand sides of (2) which is identically equal to zero. This theorem may be viewed as a theorem about the consistency of the finite system

aα1 x1 + ... + aαn xn < 0 (α ∈ M)

of strict linear inequalities. Carver's [1] theorem about the consistency of the finite system

aα1 x1 + ... + aαn xn - aα < 0 (α ∈ M) (3)
follows easily from Voronoi's theorem. Results associated with the above geometrical approach to the theory of linear inequalities are closely related to the theory of systems of
linear equations, to the theory of convex bodies in Rn, and to some questions in functional analysis and some general problems in linear topological spaces. The first significant result in the theory of linear inequalities was obtained by Minkowski who, in his well-known book Geometry of Numbers, published in 1896 (see Minkowski [1]), established the following theorems. Every inequality

b1 x1 + ... + bn xn ≥ 0

that is a consequence of the finite system (2) may be represented in the form of a linear combination of the inequalities of the system with appropriately chosen nonnegative coefficients. The cone of solutions of the finite system (2) in Rn has a finite set of generators. The linear inequality

f(x) - b = b1 x1 + ... + bn xn - b ≥ 0
is called a consequence of system (1) if it is satisfied by all solutions of the system. The above two theorems represent the beginning of the algebraic theory of linear inequalities and will be referred to as the first and second Minkowski theorems. The first is analogous to the theorem expressing a consequence of a system of homogeneous linear equations in terms of the equations of the system. The second is analogous to the theorem about the existence of a finite set of generators of the set of solutions of a system of linear homogeneous equations. After the publication of Minkowski's two theorems, there appeared numerous papers, in America, Europe, and Japan, that dealt mostly with these theorems, among them the well known paper of Farkas. The contents of these papers were reviewed in 1933 by Dines and McCoy [1]. The first Minkowski theorem may be easily extended to apply to system (1) of nonhomogeneous linear inequalities. This results in a theorem that is analogous to the corresponding theorem for the nonhomogeneous equation system of the form

fα(x) - aα = aα1 x1 + ... + aαn xn - aα = 0 (α ∈ M), (1′)

with real aαk and aα. The relationship between the two theorems is explained in what
follows. All solutions of the finite system (1) (of the finite system (1′)) satisfy a given inequality f(x) - b ≥ 0 (a given equation f(x) - b = 0) if and only if there exist nonnegative numbers (respectively, some numbers) pα, α ∈ M, such that for all x ∈ Rn we have

f(x) - b = Σα∈M pα (fα(x) - aα)

and, respectively,

f(x) - b = Σα∈M pα (fα(x) - aα).
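The first Minkowski theorem lends itself to a direct computational check: finding nonnegative multipliers pα amounts to a linear feasibility problem. Below is a minimal sketch in Python (assuming NumPy and SciPy are available; the three-inequality system and the candidate consequence are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Homogeneous system (2): rows are the forms a_alpha.
# Here: x1 >= 0, x2 >= 0, x1 - x2 >= 0 (invented example).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
b = np.array([3.0, 1.0])   # candidate consequence: 3*x1 + 1*x2 >= 0

# First Minkowski theorem: b.x >= 0 is a consequence of (2) iff
# b = sum_alpha p_alpha * a_alpha for some p_alpha >= 0,
# i.e. a linear feasibility problem in the multipliers p.
res = linprog(c=np.zeros(len(A)), A_eq=A.T, b_eq=b,
              bounds=[(0, None)] * len(A), method="highs")
print(res.success)   # True: e.g. p = (3, 1, 0) gives 3*(1,0) + 1*(0,1) = (3,1)
```

If no such p exists, the solver reports infeasibility, which by the theorem certifies that the candidate inequality is not a consequence of (2).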
It follows from the first Minkowski theorem that if the vectors (bβ1, ..., bβn) (β ∈ N) are the elements of the set of solutions of system (2), then the set of solutions of the system

bβ1 x1 + ... + bβn xn ≥ 0 (β ∈ N)

coincides with the cone generated, in Rn, by the vectors (aα1, ..., aαn) (α ∈ M).
This assertion, together with the second theorem of Minkowski, constitutes what is known as the duality principle of the theory of linear inequalities. Investigations of questions related to this assertion and theorem, and of geometric considerations related to them, were carried out in Weyl's paper "The Elementary Theory of Convex Polyhedra" (see Weyl [1]). However, the central theme of Weyl's paper was another matter: the construction of convex polyhedra by way of exclusively finitary methods, i.e., by the methods of linear algebra with the real numbers as an ordered field. This is done without any use of infinitary results, e.g., closedness and compactness, and in particular without methods that depend on the separation theorem for closed convex sets in Rn. If S is a closed bounded set of points in an n-dimensional affine space, then the elements of the convex closure of S may be defined in two ways: 1) they are the centroids of the points of S when a unit mass is distributed among them in all possible ways, and 2) they belong to all supports of S, i.e., to all half spaces

a1 x1 + ... + an xn - a ≥ 0 ((a1, ..., an) ≠ (0, ..., 0))

that contain all points of S. The fundamental theorem of convex closed sets asserts the equivalence of these two definitions. That theorem was proved by using nonfinitary methods, namely the topological properties of the space of real numbers (see Caratheodory [1]). If the set S consists of a finite number of points, then its convex closure is a convex polyhedron. For this case, Weyl's paper gives a finitary proof of the theorem without recourse to topological properties of the reals. This result of Weyl is clearly definitive in the algebraic (finitary) theory of finite systems of linear inequalities, in that it is one of the results related to the parametric representation of the set of solutions of such systems. The first step on the road to finding an algorithm for the solution of this problem was taken by a theorem (see Chernikov [2]) which asserts that every consistent system of nonzero rank r possesses a subsystem of rank r which consists of r inequalities such that all solutions of the bounding equations of this subsystem satisfy system (1). This theorem, under the name of the "principle of bounding solutions" (see Fan [1]), is in fact equivalent to the assertion that any consistent finite system of type (1) of rank r has at least one solution (a nodal solution)
where r of its inequalities, with linearly independent forms, are satisfied as equalities (see Chernikov [5]). In the Chernikov [1] paper the principle of bounding solutions, in this weak form, was established for finite systems of the form

|aj1 x1 + ... + ajn xn - aj| ≤ εj (j = 1, 2, ..., m),

where the ajk and aj are arbitrary complex or real numbers. The principle obviously applies to finite systems of equations. Examining the theory of finite systems of linear inequalities, one finds that the principle of bounding solutions is one of the fundamental principles of this theory. From this principle we may (see Nefedev [1]) deduce such important results as the first Minkowski theorem and the theorem of Voronoi. Also we may infer the theorem of Alexandrov [1] on the consistency of finite systems (3) with nonzero rank when the following is satisfied: each positive linear combination of the form

Σ_{k=1}^{s+1} pk fαk(x)

that is identically zero and which includes s + 1 functions (of which s are linearly independent) satisfies the relation

Σ_{k=1}^{s+1} pk aαk > 0.
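The principle of bounding solutions suggests a brute-force way to exhibit a nodal solution: take each subsystem of r inequalities with linearly independent forms, solve its bounding equations, and test the result against the whole system. A small illustrative sketch in Python for n = r = 2, with an invented three-inequality system and exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import combinations

# System (1) in the form a1*x1 + a2*x2 - a0 >= 0, stored as (a1, a2, a0).
# Invented example of rank 2: x1 >= 0, x2 >= 0, x1 + x2 - 1 >= 0.
rows = [(F(1), F(0), F(0)),
        (F(0), F(1), F(0)),
        (F(1), F(1), F(1))]

def solve_equalities(r, s):
    """Solve the bounding equations of {r, s} by Cramer's rule;
    returns None when the two forms are linearly dependent."""
    det = r[0] * s[1] - r[1] * s[0]
    if det == 0:
        return None
    x1 = (r[2] * s[1] - r[1] * s[2]) / det
    x2 = (r[0] * s[2] - r[2] * s[0]) / det
    return (x1, x2)

nodal = None
for r, s in combinations(rows, 2):
    x = solve_equalities(r, s)
    if x is not None and all(a1 * x[0] + a2 * x[1] - a0 >= 0
                             for a1, a2, a0 in rows):
        nodal = x   # two independent inequalities hold as equations here
        break

print(nodal)   # finds x1 = 0, x2 = 1, bounded by {x1 >= 0, x1 + x2 - 1 >= 0}
```

The principle guarantees that for any consistent system of nonzero rank this search succeeds; the sketch merely makes the existence claim tangible on one example.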
Employing the principle of bounding solutions, it was possible (see Chernikov [5]) to establish that a finite system (1) of rank r > 0 is consistent if and only if, for some nonzero minor of its matrix A and for all determinants Δα obtained by bordering that minor by the column of free terms aα and by arbitrary rows of A, the relation

Δα ≥ 0 (α ∈ M)

is satisfied.
For the finite system of equations (1′) this last inequality changes to an equality. A series of propositions about the analogous properties of systems of linear inequalities and of systems of equations may be found in the paper by Kuhn [1] and in the author's paper [17]. In the late thirties a new area of applications of linear inequalities opened up. This was the area of economic planning, and the work was initiated by Kantorovich [1], [2]. At that time it appeared that many problems of economic planning could be formulated as problems of maximization or minimization of some linear form over a set of points determined by a finite system of linear inequalities; problems of this type also arise in the theory of games and in operations research. To solve these problems, effective algorithms were required. This was the impetus for developing a new direction in the area of linear inequalities, namely that of linear programming. The first results were established by the Soviet mathematician Kantorovich in the late thirties and early forties, but the broader development of linear programming came at least ten years later. This was effected by the works of American
mathematicians and economists (see Dantzig, Koopmans, Tucker, Charnes, etc.). In their research (see Koopmans [1]) practical applications of linear programming were emphasized, together with establishing its relationship to game theory and with refining the numerical algorithms. An example of this last direction is the development of the simplex method (Dantzig [1], [2]). The result of this research was a broad extension of the subject. The simplex method is an algebraic (finitary) method of solving linear programming problems. With its help one may investigate some problems related to the theory of finite systems of linear inequalities, and occasionally one may find specific solutions. However, the simplex method and similar finitary and infinitary methods of solving linear programming problems can never accomplish the expression of all of the solutions (in parametric form) of finite systems of linear inequalities. In order to solve this problem, which is of theoretical and practical importance, Motzkin proposed a method called the double description method (see Motzkin, Raiffa, Thompson, Thrall [1]). The idea of the method had been used earlier in purely theoretical investigations (for a proof of Minkowski's second theorem for arbitrary systems (1) see Motzkin [1]). The method of double description was not formally established by Motzkin but rather geometrically illustrated. It was proved by purely algebraic (finitary) methods by Burger [1]. Utilizing the results of Burger's paper, Chernikova [1], [2] obtained a simple numerical algorithm for finding the general (parametric) formulae for nonnegative solutions of systems of equations and of finite systems of linear inequalities. Both algorithms may be used to find the set of all optimal solutions of a linear programming problem. In order to use the second algorithm for this purpose, one may use, e.g., a method analogous to that of Uzawa [1].
In that case we effect a significant reduction of the size of the Uzawa matrix (see Uzawa [1]). In connection with the search for algorithms to solve a different type of problem originating in the area of linear inequalities, it was natural to return to the method of elimination of unknowns proposed by Fourier [1]. This algorithm requires a great many elementary operations, whose number grows exceedingly fast with the increase of the number of unknowns and inequalities. This is why the algorithm is used only for theoretical purposes (see Kuhn [1]). In order to use the algorithm for numerical purposes, it is essential first of all to clarify the question of the feasibility of cutting the number of required operations by using a sequential method of elimination of the unknowns. In the course of studying this topic, we are led to investigate the more general question: under what conditions could one form positive linear combinations of some inequalities of a consistent arbitrary system (1) without changing the consistency of the system?
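Fourier's elimination step can be sketched compactly: pairing each inequality with a positive coefficient of xj against each with a negative one yields a system in the remaining unknowns whose solution set is the projection of the original. A minimal Python sketch (the inequality data are invented for illustration; the quadratic growth in the number of pairs at each step is exactly the cost noted above):

```python
def fm_eliminate(rows, j):
    """One Fourier elimination step on inequalities a.x <= a0, each stored as
    a tuple (a1, ..., an, a0); returns a system without the variable xj whose
    solutions are exactly the projections of the original solutions."""
    pos = [r for r in rows if r[j] > 0]
    neg = [r for r in rows if r[j] < 0]
    out = [r[:j] + r[j + 1:] for r in rows if r[j] == 0]
    # each (positive, negative) pair combines, with positive multipliers,
    # into one inequality in which xj cancels
    for p in pos:
        for q in neg:
            comb = [p[j] * qi - q[j] * pi for pi, qi in zip(p, q)]
            out.append(tuple(comb[:j] + comb[j + 1:]))
    return out

# Invented demo: x >= 1, x <= y, y <= 3, as a.x <= a0 rows (a1, a2, a0).
rows = [(-1, 0, -1), (1, -1, 0), (0, 1, 3)]
print(fm_eliminate(rows, 0))   # [(1, 3), (-1, -1)], i.e. y <= 3 and y >= 1
```

Iterating the step over all variables leaves only constant inequalities 0 <= c, so the original system is consistent exactly when every surviving c is nonnegative.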
In connection with this problem, a property of positive linear combinations of inequalities of a finite system (1) was obtained: one combines some of the inequalities of the finite system (1) in such a way that the solution set of the resulting system coincides with the projection of the set of solutions of the original system onto a given subspace of Rn (see Chernikov [6], [10], [12], [13], [16]). Obviously, positive linear combinations of inequalities of system (1) formed for the purpose of eliminating one or several variables represent a partial solution of combination problems of this type. By investigating the question of reducing system (1), an affirmative answer to the question of reducing the number of combinations in Fourier's method was obtained. Also, a simpler way of eliminating redundant inequalities was found. The author's algorithm shortened the solution of a host of problems related to the finite system of inequalities (1), specifically those of obtaining general forms of solutions to the finite system (1), together with the related problem of finding the set of all optimal solutions of linear programming problems. The method of convolution may be used to obtain the general form of nonnegative solutions to systems of linear equations; in fact, the method implies the numerical scheme used by Chernikova to solve that problem (see Chernikova [1]). The method of convolution is also used to solve some problems in the theory of finite systems of linear inequalities. For instance, we use it in Chapter V of this book to prove the Dines theorem (Dines [1]), which provides an algebraic form of the conditions for consistency of a system of linear inequalities of type (3).* In the last 10-15 years the finitary, algebraic theory of finite systems of linear inequalities has accomplished a task that had occupied it a great deal.
In earlier times, in order to prove certain results in the theory of finite linear systems, one had to use the theorems on separation of closed convex sets (not a finitary method). Now this is no longer necessary. Furthermore, finitary methods have succeeded in obtaining theorems about separation (strong separation) of convex sets defined in Rn by way of finite and of some infinite systems of linear inequalities (Chernikov [7], [14], [18]). With the growth of the algebraic (finitary) direction in the theory of linear spaces came the realization that some of the theorems are independent of the assumption that the scalar field is that of the reals. Such theorems are those related to finite systems of linear inequalities. Thus came the generalization of these theorems to spaces over arbitrary ordered fields. Remarks on this may be found in the Kuhn-Tucker collection [1]. Later on, Voronoi's theorem (see Charnes and Cooper [1]) was added to that class of theorems. In this book we present an algebraic (finitary) exposition of the theory of finite systems
* Later on, the method of convolution was applied to solving cybernetic problems of pattern recognition (note added in proof).
of linear inequalities, including the theory of linear programming, over an arbitrary ordered field. The construction is based on two finitary theorems: the principle of bounding solutions and the first theorem of Minkowski. In fact, it is based only on the principle of bounding solutions, since the first Minkowski theorem may be obtained from that principle by finitary methods. We shall not present those methods of solving finite systems of linear inequalities that are based on infinitary methods, such as the relaxation method (Agmon [1], Motzkin and Schoenberg [1], Eremin [1]), which depend on the topological properties of the field R of real numbers. Nor do we include the generalized simplex method, which applies to problems in arbitrary ordered fields; there are, no doubt, excellent treatments of that subject in linear programming books (see Gass [1]). Those algorithms that relate to the parametric representation of the set of all optimal solutions of a given finite system of linear inequalities are presented, as are those algorithms that relate to the elimination of variables. It is clear that such questions are of theoretical as well as practical interest. Infinite systems of linear inequalities receive only one chapter of this book (Chapter VII). In it we study only those infinite systems in which every consequence inequality is a consequence of some finite subsystem. The basic field here is an arbitrary ordered field. The basic problem concerns those systems (1) with real coefficients aαk and free terms aα for which the convex cone C, generated by the (n + 1)-vectors

(aα1, ..., aαn, aα) (α ∈ M) and (0, ..., 0, -1)

(the conjugate cone of system (1)), is topologically closed in Rn+1. In particular, we are concerned with systems of type (1) which are stably consistent and which have a bounded and topologically closed set A of vectors (aα1, ..., aαn, aα) (α ∈ M).
System (1) is said to be stably consistent if it possesses at least one solution that satisfies all of its inequalities as strict inequalities. In Rn+1 a topologically closed convex cone is either the intersection of all half spaces containing it or coincides with Rn+1 (polyhedral closedness; see Bourbaki [1]). Hence, the transition to systems (1) with coefficients aαk and free terms aα in an arbitrary ordered field is accomplished when we find conditions for the polyhedral closedness of the cone generated by them. Algebraic (finitary) methods are used in Chapter VII to study polyhedral closedness for such systems and for more general analogous systems.
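Over the reals, stable consistency of a finite system can be tested numerically by maximizing a common slack t in all inequalities: the system is stably consistent exactly when the optimum is positive. A sketch in Python (assuming NumPy and SciPy are available; the three-inequality system is invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Invented finite system a_alpha . x - a_alpha0 >= 0:
# x1 >= 0, x2 >= 0, -x1 - x2 + 3 >= 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
a0 = np.array([0.0, 0.0, -3.0])

# Maximize t subject to A x - a0 >= t; stably consistent iff optimum > 0.
# In linprog form (variables x1, x2, t): minimize -t s.t. -A x + t <= -a0.
res = linprog(c=[0.0, 0.0, -1.0],
              A_ub=np.hstack([-A, np.ones((len(A), 1))]),
              b_ub=-a0,
              bounds=[(None, None)] * 3,
              method="highs")
t_star = -res.fun
print(t_star > 0)   # True: e.g. x = (1, 1) satisfies every inequality strictly
```

A nonpositive optimum would mean every solution turns at least one inequality into an equation, i.e., the system is consistent but not stably so.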
The first attempt to study infinite systems of type (1) was that of Haar [1], nearly forty years ago, in which the use of the first Minkowski theorem was attempted. He considered systems of type (1) with a set A that is topologically closed in Rn+1 (closed systems). However, Haar's assertion of the applicability of the first Minkowski theorem has been shown to be incorrect (see Chernikov [1]). Speaking of systems of linear inequalities, we have had in mind finite or infinite systems of type (1). However, this does not limit our discussion to systems of that type. It turns out that all of the theorems about finite systems of linear inequalities and the theory of polyhedrally closed infinite systems are easily extendable to systems of the more general type

fα - aα ≥ 0 (α ∈ M),

where fα is a linear function defined on an arbitrary linear space L = L(P) over an ordered field P and where aα ∈ P. This is why a considerable portion of this book is devoted to the study of systems of this type. This, clearly, did not prolong the exposition,
and did not always lead to the exclusion of the general algebraic form of the basic results of the theory of …nite systems of linear inequalities which are obtainable in the manner indicated above. Also not included are results relating the system jf (x)
a j
"
( 2 M)
on the …led of complex numbers, our exposition, here minimizing the sue of residuals that are not based on …nitary (algebraic) constructs (see Chernikov [1] and Ky Fan [1]). The general purpose of this book is to emphasize those aspects of current research on systems of linear inequalities over arbitrary ordered …eld, which could be carried out by the exclusive use of algebraic (…nitary) methods. The bibliography at the end of the book enumerates only cited literates. A more complete bibliography of theory of linear inequalities up to 1958 may be found in Kuhn and Tucker [1]. A bibliography containing 37 titles may be found the Denis-McCoy [1] survey.
Chapter I – THE PRINCIPLE OF BOUNDING SOLUTIONS

In this chapter we study some problems that are directly related to one of the fundamental propositions of the theory of linear inequalities: the proposition of the existence of nodal solutions of consistent systems of linear inequalities with nonzero rank. This proposition is proved under very general assumptions about the linear space L in which the problem is considered. It is assumed that the field for L is an arbitrary ordered field P, for which we have:

1) for each a ∈ P one and only one of the relations a = 0, a > 0, −a > 0 holds;

2) if a > 0 and b > 0, then a + b > 0 and ab > 0.

An exposition of the basic properties of ordered fields can be found in van der Waerden [1] (see also Fuchs [1]). Having chosen the ordered field P over which L is defined, we write L = L(P) (or L = L(R) if P = R is the field of real numbers). If L is the space of n-vectors (x_1, ..., x_n) whose coordinates x_i belong to the field P, we use the notation P^n for it (R^n if P is the field of real numbers).

For an arbitrary linear space L(P), a finite system of linear inequalities may be written in the form

f_j(x) − a_j ≥ 0 (j = 1, 2, ..., m),   (1)

where the f_j(x) are linear (i.e., additive and homogeneous) functions on L(P) with values in P. We note that the free terms of (1) will be referred to not as −a_j but as a_j. An element x of L(P) that satisfies all the inequalities of the system (1) is called a solution of that system. System (1) is said to be consistent if it has at least one solution; otherwise it is said to be inconsistent. The maximum number of the functions f_j(x) that are linearly independent (over P) is called the rank of the system (1). In the case of the space P^n (and, in particular, of R^n), system (1) has the form

f_j(x) − a_j = l_j(x_1, ..., x_n) − a_j = a_{j1} x_1 + ... + a_{jn} x_n − a_j ≥ 0 (j = 1, 2, ..., m),   (2)
where all the coefficients a_{ji} and the free terms a_j are elements of P (respectively of R, the field of real numbers). The greater part of this chapter (sections 4-9) is devoted to the study of such systems. Arbitrary systems of type (1) are studied in sections 1-3. In §1 we establish the principle of bounding solutions; in §2 the principle is used to study the properties of the related extremal subsystems; in §3 we present some related geometric notions. In sections 4-9 the principle of bounding solutions is used to study systems of type (2). This study attempts to find the properties of the matrix of system (2), i.e., of the matrix

( a_11 ... a_1n a_1 )
( ...  ...  ...  ... )
( a_m1 ... a_mn a_m )

that correspond to certain properties of the solution set of the system (2). In §4 we present those properties of that matrix that are necessary and sufficient for that set not to be empty (i.e., for the consistency of system (2)). In sections 7-9 we present properties of the matrix that are necessary and sufficient for the boundedness and for the non-degeneracy of the solution set, and for the existence of nonnegative (in particular, positive) elements in that set. The basic condition is the consistency condition for system (2) studied in §4; all the other conditions are derived by using it. The basic tool that enables us to pass from a system of linear inequalities to a system of linear equations (i.e., to a system of linear inequalities in which each inequality appears twice, with opposite senses) turns out to be a generalization of the Kronecker-Capelli theorem on the consistency of a system of linear equations. This will be the topic of §5.

The principle of bounding solutions of the theory of linear inequalities was established by the author in Chernikov [1]-[3] and [5], and partially by Ky Fan [1]. Sections 4-9 are based on the author's [2] and [3]. In this connection we note that the presentation here is independent of the assumption that P is the field of real numbers R.
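Before turning to §1, it may help to fix ideas computationally. A system of type (2) is determined by the matrix of its coefficients and free terms; the following sketch (ours, not the book's, using the rational field Q as a concrete ordered field, with exact arithmetic) stores a system as its rows (a_{j1}, ..., a_{jn}, a_j) and tests candidate solutions:

```python
# A small illustration (ours, not the book's): a system of type (2),
# a_j1*x_1 + ... + a_jn*x_n - a_j >= 0, stored as rows (a_j1, ..., a_jn, a_j)
# over the ordered field Q of rationals.
from fractions import Fraction as F

def is_solution(rows, x):
    """Check whether x in Q^n satisfies every inequality of the system."""
    return all(
        sum(c * xi for c, xi in zip(row[:-1], x)) - row[-1] >= 0
        for row in rows
    )

# Example system in Q^2: x1 >= 0, x2 >= 0, and 1 - x1 - x2 >= 0
# (the last written in the form -x1 - x2 - (-1) >= 0).
system = [
    (F(1), F(0), F(0)),
    (F(0), F(1), F(0)),
    (F(-1), F(-1), F(-1)),
]
print(is_solution(system, (F(1, 4), F(1, 2))))  # True: inside the triangle
print(is_solution(system, (F(2), F(0))))        # False: violates the third row
```

The choice of Q rather than floating point matters: the book's arguments are finitary and exact, and rational arithmetic keeps the tight/non-tight distinction of the later definitions unambiguous.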
§1. NODAL SOLUTIONS AND NODAL SUBSYSTEMS
DEFINITION 1.1. A solution of a consistent system (1) of nonzero rank is said to be a bounding solution if it satisfies at least one of the system's inequalities, with a nonzero function f_j(x), as an equation. A bounding solution of a system (1) of rank r > 0 is said to be a nodal solution if it satisfies r of its inequalities, whose functions f_j are linearly independent over P, as equations.

DEFINITION 1.2. A subsystem of a system (1) whose rank is not zero is said to be a boundary subsystem if its rank is nonzero and equal to the number of inequalities in the subsystem, and if at least one of its nodal solutions is a solution of the system (1). A boundary subsystem is said to be a nodal subsystem of the system (1) if all of its nodal solutions are solutions of system (1).

We note here the following properties of boundary subsystems:

1. Every consistent system (1) with nonzero rank has at least one boundary subsystem.

2. If the rank of a given boundary subsystem is different from the rank of the system, then the boundary subsystem can be included in another boundary subsystem with a larger number of inequalities.

3. A boundary subsystem is a nodal subsystem of the system (1) iff its rank is the same as that of system (1).
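Definition 1.1 can be made concrete in P^n: a solution is a bounding solution when at least one inequality with a nonzero coefficient row is tight at it, and nodal when the tight rows contain r linearly independent ones. The sketch below (our illustration over Q, not the book's code) classifies a given solution accordingly:

```python
# Illustration of Definition 1.1 over Q (ours, not the book's): classify a
# solution of a system of type (2) by the inequalities it turns into equations.
from fractions import Fraction as F

def rank(mat):
    """Rank of a list of rows over Q, by Gaussian elimination."""
    m = [list(row) for row in mat]
    rk = 0
    cols = len(m[0]) if m else 0
    for c in range(cols):
        piv = next((i for i in range(rk, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][c] / m[rk][c]
            for k in range(c, cols):
                m[i][k] -= f * m[rk][k]
        rk += 1
    return rk

def classify(rows, x):
    """Return 'nodal', 'bounding', or 'interior' for a solution x."""
    tight = [row[:-1] for row in rows
             if sum(c * xi for c, xi in zip(row[:-1], x)) == row[-1]
             and any(c != 0 for c in row[:-1])]
    if not tight:
        return "interior"                   # not a bounding solution
    r = rank([row[:-1] for row in rows])    # rank of the whole system
    return "nodal" if rank(tight) == r else "bounding"

# Triangle system: x1 >= 0, x2 >= 0, 1 - x1 - x2 >= 0; its rank is 2.
sys2 = [(F(1), F(0), F(0)), (F(0), F(1), F(0)), (F(-1), F(-1), F(-1))]
print(classify(sys2, (F(0), F(0))))        # a vertex: 'nodal'
print(classify(sys2, (F(0), F(1, 2))))     # an edge point: 'bounding'
print(classify(sys2, (F(1, 4), F(1, 4))))  # 'interior'
```

Geometrically (anticipating §3), nodal solutions of a full-rank system in the plane are vertices of the solution polyhedron, while bounding solutions that are not nodal lie on edges.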
Property 1 is established as follows. Let x′ be a solution of system (1) and let x″ be an element of L(P) that does not satisfy the system. The existence of such an x″ follows from the assumption about the rank of the system. Consider the interval [x′, x″] defined by

x(t) = x′ + t(x″ − x′),

where t is an element of P with 0 ≤ t ≤ 1 (the set of such elements of P will be denoted by P[0, 1]). Substituting x = x(t) in system (1) we get the system

t f_j(x″ − x′) − (a_j − f_j(x′)) ≥ 0 (j = 1, 2, ..., m),

in which at least one of the coefficients f_j(x″ − x′) of t is negative (otherwise x″ would satisfy system (1), contrary to our hypothesis). If

t_0 = min (a_j − f_j(x′)) / f_j(x″ − x′)

(the minimum being taken over all j for which the ratio is positive), then t = t_0 is one of the solutions of this system. Denoting by j_0 the indices of those inequalities at which that minimum is attained, we have

t_0 f_{j_0}(x″ − x′) − (a_{j_0} − f_{j_0}(x′)) = 0

and

t_0 f_j(x″ − x′) − (a_j − f_j(x′)) ≥ 0 (j = 1, 2, ..., m; j ≠ j_0),

or, equivalently,

f_{j_0}(x(t_0)) − a_{j_0} = 0

and

f_j(x(t_0)) − a_j ≥ 0 (j = 1, 2, ..., m; j ≠ j_0).

Consequently x = x(t_0) is a solution of system (1) which satisfies its j_0-th inequalities as equations. Since f_{j_0}(x″ − x′) ≠ 0, the function f_{j_0} is not a zero function, and the subsystem consisting of any one of the j_0-th inequalities is a boundary subsystem of system (1). This proves property 1.

The proofs of properties 2 and 3 are based on the following proposition:

(*) Let f_1(x), ..., f_m(x) be linearly independent (over P) linear functions defined on L(P) with values in P. Then for any given a_j ∈ P the system of equations

f_j(x) − a_j = 0 (j = 1, 2, ..., m)

has at least one solution in L(P).

The ordering of P is not significant in this context: here P could be any field and L(P) any linear space over it. We prove the following lemma, from which proposition (*) easily follows.
LEMMA 1.1. The rank of a system of linear functions f_1(x), f_2(x), ..., f_m(x), defined on a linear space L(P) with values in its field P, is equal to the dimension of the linear space onto which L(P) is mapped by the linear transformation

F: x → (f_1(x), f_2(x), ..., f_m(x)) ∈ P^m (x ∈ L(P)).

Proof: Obviously the image F(L(P)) of L(P) under F is a subspace of P^m. Let m′ (m′ ≤ m) denote the dimension of that subspace, and suppose m′ > 0 (for m′ = 0 the assertion is obvious). Let U be the maximal subspace of L(P) on which the functions f_j(x) take the value zero, and let V be a direct complement of U in L(P), i.e., a subspace V such that U + V = L(P) and U ∩ V is the null subspace. The transformation F maps V in a one-to-one manner onto F(L(P)), and the image has dimension m′. Let x_1, ..., x_{m′} be a maximal system of linearly independent elements of V. Then the vectors (f_1(x_i), ..., f_m(x_i)), i = 1, 2, ..., m′, form a maximal system of linearly independent vectors in F(L(P)). But then the rank of the matrix

( f_1(x_1)    ... f_m(x_1)    )
( ...         ...  ...        )
( f_1(x_{m′}) ... f_m(x_{m′}) )

is m′. Without loss of generality we may assume that the first m′ columns of that matrix are linearly independent. In this case the functions f_1(x), ..., f_{m′}(x) are linearly independent (over P) on the set of elements x_1, ..., x_{m′} (and thus on all of L(P)), and each function f_j(x) (j = 1, 2, ..., m) may be linearly expressed in terms of these functions on that set. Since the elements x_1, ..., x_{m′} span the space V, and since L(P) = U + V, the linear expressions of the functions f_j obtained on the set of elements x_1, ..., x_{m′} must hold at any point x ∈ L(P). Thus m′ is the largest number of linearly independent functions among f_1(x), ..., f_m(x), i.e., the rank of the system of functions.
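For L(P) = Q^n with f_j(x) = a_{j1} x_1 + ... + a_{jn} x_n, the map F is x ↦ Ax, the rank of the functional system is the row rank of A, and dim F(L(P)) is the column rank, so Lemma 1.1 reduces to the familiar equality of row rank and column rank. A quick check over Q (our illustration, not the book's):

```python
# For L(P) = Q^n, Lemma 1.1 says: the rank of the functional system (row rank
# of A) equals dim F(L(P)) (column rank of A). A check over the rationals.
from fractions import Fraction as F

def rank(mat):
    """Rank of a list of rows over Q, by Gaussian elimination."""
    m = [list(row) for row in mat]
    rk = 0
    cols = len(m[0]) if m else 0
    for c in range(cols):
        piv = next((i for i in range(rk, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][c] / m[rk][c]
            for k in range(c, cols):
                m[i][k] -= f * m[rk][k]
        rk += 1
    return rk

A = [[F(1), F(2), F(3)],
     [F(2), F(4), F(6)],   # = 2 * row 1: the three functionals are dependent
     [F(0), F(1), F(1)]]
At = [list(col) for col in zip(*A)]  # transpose: columns of A as rows

print(rank(A), rank(At))  # row rank and column rank agree: 2 2
```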
If f_1(x), ..., f_m(x) are linearly independent, then it follows from Lemma 1.1 that the image of L(P) under F coincides with P^m. Thus for any element (a_1, ..., a_m) of P^m it is possible to find an element x_0 of L(P) whose image (f_1(x_0), ..., f_m(x_0)) coincides with (a_1, ..., a_m). This proves the assertion (*).

REMARK. We state here a proposition which was used in the proof of Lemma 1.1: if U is the maximal subspace of L(P) on which the functions f_j(x), j = 1, 2, ..., m, of a system (1) of rank r > 0 take zero values (U is the kernel of system (1)), and if V is a direct complement of U in L(P), then the dimension of V is r.

Using this proposition, we may reduce the study of arbitrary systems of type (1) to the study of systems of type (2). Indeed, if v_1, ..., v_r is a maximal system of linearly independent vectors in V, then any element x ∈ L(P) may be (uniquely) written in the form

x = t_1 v_1 + ... + t_r v_r + u (u ∈ U),

where t_1, ..., t_r are elements of the field P. Thus the system (1) may be written as

t_1 f_j(v_1) + ... + t_r f_j(v_r) − a_j ≥ 0 (j = 1, 2, ..., m)

with r unknowns t_1, ..., t_r in P, i.e., system (1) may be written in the form of a system (2) in P^r. To simplify our further use of this result we introduce the following definition.

DEFINITION 1.3. The kernel of a system of type (1) is the maximal subspace U of the linear space L(P) on which the functions f_j(x) take the value zero. Let v_1, ..., v_r be a maximal system of linearly independent elements in a direct complement of U in L(P). For these vectors, the system (1) becomes

t_1 f_j(v_1) + ... + t_r f_j(v_r) − a_j ≥ 0 (j = 1, 2, ..., m),

where t_1, ..., t_r are unknowns taking values in the field P. This latter form is called the skeletal system of system (1).

We now prove property 2. Let

f_{j_k}(x) − a_{j_k} ≥ 0 (k = 1, 2, ..., ℓ)
be a boundary subsystem of system (1) with rank ℓ < r (r being the rank of system (1)), and let x_0 be one of its nodal solutions satisfying (1). Since ℓ < r, there exists an inequality f_{j′}(x) − a_{j′} ≥ 0 in the system (1) such that the functions f_{j_k}(x) (k = 1, 2, ..., ℓ) and f_{j′}(x) are linearly independent. By proposition (*), the system

f_{j_k}(x) − a_{j_k} = 0 (k = 1, 2, ..., ℓ),
f_{j′}(x) − (a_{j′} + ε) = 0 (ε > 0)

is consistent. Let x̄ ∈ L(P) be one of its solutions. Substituting

x(t) = x_0 + t(x̄ − x_0) (t ∈ P[0, 1])

into system (1) we get the system

t f_j(x̄ − x_0) − (a_j − f_j(x_0)) ≥ 0 (j = 1, 2, ..., m)

of rank one in the one unknown t. As in the proof of property 1, it is easy to see that this system has a solution t ∈ P[0, 1] that satisfies at least one of its inequalities, with a nonzero coefficient of t, as an equation. Let j_{ℓ+1} be the index of one of these inequalities. Since f_{j_{ℓ+1}}(x̄ − x_0) ≠ 0, and since x̄ − x_0 is a solution of

f_{j_k}(x) = 0 (k = 1, 2, ..., ℓ),

the function f_{j_{ℓ+1}} cannot be expressed as a linear combination of the functions f_{j_k}(x) (k = 1, 2, ..., ℓ). Consequently the rank of the system

f_{j_k}(x) − a_{j_k} = 0 (k = 1, 2, ..., ℓ + 1)

equals the number of its equations. Since the solution x(t) of that system satisfies system (1), the system

f_{j_k}(x) − a_{j_k} ≥ 0 (k = 1, 2, ..., ℓ + 1)

(with x(t) as a nodal solution) is a boundary subsystem of system (1). This completes the proof of property 2.

Property 3 is shown in the following way. Suppose the rank of the subsystem

f_{j_k}(x) − a_{j_k} ≥ 0 (k = 1, 2, ..., r)
is the same as the rank of system (1), and suppose x′ is a nodal solution of the subsystem which satisfies all of system (1). Then any nodal solution of the subsystem may be expressed in the form x′ + u, where u is a solution of the system

f_{j_k}(x) = 0 (k = 1, 2, ..., r).

Since r is the rank of system (1), each of the functions f_j(x) may be expressed as a linear combination of the functions f_{j_k}(x) (k = 1, 2, ..., r), and thus f_j(u) = 0 (j = 1, 2, ..., m). But then

f_j(x′ + u) − a_j = f_j(x′) − a_j ≥ 0 (j = 1, 2, ..., m).

Thus every nodal solution of that boundary subsystem satisfies (1); consequently it is a nodal subsystem of (1).

Suppose, conversely, that

f_{j_k}(x) − a_{j_k} ≥ 0 (k = 1, 2, ..., ℓ)

is some boundary subsystem all of whose nodal solutions satisfy system (1). If there were a function f_{j_0}(x) that cannot be expressed linearly in terms of the functions f_{j_k}(x) (k = 1, 2, ..., ℓ), then for any ε > 0 the system

f_{j_k}(x) − a_{j_k} = 0 (k = 1, 2, ..., ℓ),
f_{j_0}(x) − (a_{j_0} − ε) = 0

would be consistent, in view of proposition (*). But this means that the boundary subsystem (of system (1)) has a nodal solution x_0 for which

f_{j_0}(x_0) − a_{j_0} = −ε < 0,

which contradicts our assumption that all nodal solutions of the boundary subsystem satisfy system (1). Thus the functions f_j(x) (j = 1, 2, ..., m) may be linearly expressed in terms of the functions f_{j_k}(x) (k = 1, 2, ..., ℓ) (with coefficients in P). Since the latter are linearly independent (over P), it follows that the rank of system (1) is ℓ. This is what we wanted to show.

Properties 1-3 of boundary subsystems result in the following theorem.

THEOREM 1.1. Every consistent system (1) of nonzero rank possesses at least one nodal subsystem and, hence, at least one nodal solution.
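In P^n, Theorem 1.1 can be illustrated computationally: when system (2) has rank n, its nodal solutions can be enumerated by solving each n-by-n subsystem of equations with independent rows and keeping the solutions that satisfy the whole system. The following sketch (ours, not the book's; for simplicity it assumes the rank equals n) does this over Q:

```python
# Enumerating the nodal solutions of a system of type (2) in Q^n, under the
# simplifying assumption that the system's rank is n (our sketch, not the
# book's algorithm).
from fractions import Fraction as F
from itertools import combinations

def solve_square(A, b):
    """Solve A x = b over Q by Gauss-Jordan; return None if A is singular."""
    n = len(A)
    M = [list(A[i]) + [b[i]] for i in range(n)]
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c] != 0), None)
        if piv is None:
            return None
        M[c], M[piv] = M[piv], M[c]
        for i in range(n):
            if i != c:
                f = M[i][c] / M[c][c]
                for k in range(c, n + 1):
                    M[i][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def is_solution(rows, x):
    return all(sum(c * xi for c, xi in zip(r[:-1], x)) - r[-1] >= 0
               for r in rows)

def nodal_solutions(rows):
    n = len(rows[0]) - 1
    found = []
    for idx in combinations(range(len(rows)), n):
        x = solve_square([list(rows[j][:-1]) for j in idx],
                         [rows[j][-1] for j in idx])
        if x is not None and is_solution(rows, x) and x not in found:
            found.append(x)
    return found

# Triangle: x1 >= 0, x2 >= 0, 1 - x1 - x2 >= 0.
tri = [(F(1), F(0), F(0)), (F(0), F(1), F(0)), (F(-1), F(-1), F(-1))]
print(nodal_solutions(tri))  # the three vertices (0,0), (0,1), (1,0)
```

This brute-force enumeration is only illustrative; the book's own computational treatment of such questions (via supporting solutions) appears in Chapter V.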
Based on that proposition and the properties of boundary solutions, we easily obtain the following generalization of Theorem 1.1.

THEOREM 1.2. Suppose system (1) has a solution in common with the system of linear equations

f_j(x) − a_j = 0 (j = m + 1, ..., m + n)   (3)

(on L(P)), but at least one of the solutions of the latter does not satisfy (1). Then system (1) has a boundary subsystem each of whose solutions that satisfies all of its inequalities as equations and also satisfies system (3) is a solution of system (1).

PROOF: By the hypothesis of the theorem, system (1) is consistent, and there is a point of L(P) that does not satisfy it; thus system (1) has nonzero rank. If (3) has zero rank, then all the elements of L(P) are solutions of it, in which case the present theorem reduces to Theorem 1.1. Let the rank r′ of system (3) be nonzero, and let f_{j_k}(x) (k = 1, 2, ..., r′) be r′ linearly independent functions among the functions f_j(x) (j = m + 1, ..., m + n). Then the system

f_j(x) − a_j ≥ 0 (j = 1, 2, ..., m),
f_j(x) − a_j ≥ 0 (j = m + 1, ..., m + n),
−f_j(x) + a_j ≥ 0 (j = m + 1, ..., m + n)

has the boundary subsystem

f_{j_k}(x) − a_{j_k} ≥ 0 (k = 1, 2, ..., r′).

But then, by virtue of properties 2 and 3 of boundary subsystems, there exists a nodal subsystem S of that system which contains this boundary subsystem. Since not all solutions of (3) are solutions of (1), the subsystem S must contain at least one of the inequalities f_j(x) − a_j ≥ 0 with j = 1, 2, ..., m. Let S′ be the maximal subsystem of S consisting of all such inequalities included in S. Since the subsystem S is nodal, each solution of its subsystem S′ that satisfies all of the inequalities of S′ as equations and satisfies the equation system

f_{j_k}(x) − a_{j_k} = 0 (k = 1, 2, ..., r′)

is a solution of the whole system of inequalities displayed above; in particular, such a solution satisfies all the inequalities with indices j = 1, 2, ..., m, i.e., all the inequalities of system (1). By consistency, the rank of the last system of equations equals the rank of the equation system (3) containing it; thus the solution sets of the two systems coincide, and the truth of the conclusion of the theorem is established.

§2. BOUNDING AND EXTREMAL SUBSYSTEMS
DEFINITION 1.4. A subsystem of a system (1) with nonzero rank, consisting of inequalities that are satisfied as equations by some solution of system (1), is called a bounding subsystem of system (1). A bounding subsystem that is not contained in any different bounding subsystem is called an extremal subsystem.

Since every boundary subsystem of system (1) is a bounding subsystem, property 1 of boundary subsystems implies that every consistent system (1) with nonzero rank contains at least one bounding subsystem. Using properties 2 and 3 of boundary subsystems, we can easily establish the following properties of bounding subsystems.

1. Every bounding subsystem S of a system (1) of rank r (r > 0) is contained in some bounding subsystem of rank r.

2. The rank of a bounding subsystem of system (1) equals the rank of the latter iff each solution of the subsystem that satisfies all of its inequalities as equations also satisfies the system (1).

3. For each solution of an extremal subsystem of system (1) that turns all of its inequalities into equations, the left-hand sides of the inequalities not included in it attain positive values.

REMARK. In view of property 1, every extremal subsystem of a system (1) of rank r (r > 0) has rank r.

Property 1 is obtained as follows. Let S′ be some nodal subsystem of the system S. It is, obviously, a boundary subsystem of system (1) and, by properties 2 and 3 of boundary subsystems, is contained in some nodal subsystem S″ of system (1). By joining the systems S and S″ we easily obtain a bounding subsystem of system (1) of rank r. Indeed, its rank is r, since this is the rank of its subsystem S″; and if a solution x_0 of system (1) turns all of the inequalities of the system S″ into equations, then it turns all of the inequalities of its subsystem S′ into equations, and hence all those of S (since S′ is a nodal subsystem of S).

To obtain property 2, assume first that the rank of the bounding subsystem S of system (1) coincides with that of the latter. A nodal subsystem S′ of the system S is then a boundary subsystem of rank r of system (1) and thus is a nodal subsystem of the latter (see property 3 of boundary subsystems). Since the subsystems S′ and S of system (1) have the same rank, it is possible, by turning their inequalities into equations, to obtain two equivalent systems of equations (i.e., two systems that have one and the same solutions). Since S′ is a nodal subsystem of system (1), it follows that all solutions of S that turn its inequalities into equations also satisfy system (1). Conversely, suppose system (1) is satisfied by all solutions of S that turn the inequalities of S into equations; then it is satisfied by all nodal solutions of an arbitrary nodal subsystem S′ of S. Since S′ is, obviously, a boundary
subsystem of system (1), it follows, by property 3 of boundary subsystems, that it has the same rank as system (1). But then the rank of the latter coincides with that of the system S.

Property 3 of bounding subsystems follows from properties 1 and 2 just proved. Indeed, by property 1 (and the Remark), the rank of an extremal subsystem is that of system (1). But then, by property 2, each of its solutions that turns its inequalities into equations satisfies system (1). From this it follows, by the definition of extremal subsystems, that for all of these solutions none of the inequalities of (1) that are not included in the extremal subsystem can hold as an equation; since all of these solutions satisfy system (1), the left-hand sides of those inequalities must be positive. Thus follows property 3.

Now every solution of an extremal subsystem of the system (1) that turns all of its inequalities into equations is, obviously, a nodal solution of system (1). Thus, in addition to establishing property 3, we have proved the necessity part of the following proposition:

A subsystem T of a system (1) with nonzero rank is extremal iff there exists a nodal solution of system (1) which turns into equations those and only those inequalities included in T.

The sufficiency of this proposition is proved as follows. Suppose x_0 is a nodal solution of system (1) which turns into equations those and only those inequalities included in T. Then the rank of the system T must coincide with r, the rank of system (1). Since T, satisfying the above condition, is a bounding subsystem of system (1), it follows from property 2 that any of its solutions that turns all of its inequalities into equations also satisfies system (1). Each such solution of T may be written as the sum of x_0 and some element u that gives the value zero to the functions f_j(x) appearing in the inequalities of T; and since the rank of T is that of system (1), u gives the value zero to all the functions of system (1). Hence any solution of T that turns all of its inequalities into equations does not satisfy any inequality of system (1) excluded from T as an equation. This implies that T is extremal. Indeed, if T were not maximal, then some solution of system (1) would turn all of T's inequalities into equations together with some of (1)'s inequalities that are excluded from T.

From the proposition we just proved it follows that, once all of the nodal subsystems of system (1) have been found, it is possible to find all of its extremal subsystems by proceeding in the following manner. Let x_1 be a nodal solution of one of them. If there is a nodal subsystem which does not have x_1 as a nodal solution, we denote by x_2 a nodal solution of that subsystem. If there is a nodal subsystem which has neither x_1 nor x_2 as a nodal solution, then we denote one of its nodal solutions by x_3, and so on. It is clear that of the elements x_i (i = 1, 2, ...) chosen in this manner, no two turn the inequalities of one and the same nodal subsystem of system (1) into equations.
Let S(x_i) denote the subsystem of all inequalities of system (1) turned into equations by x_i (i = 1, 2, ...). By the above proposition, each of the subsystems S(x_i) is an extremal subsystem of (1), and in this way we exhaust the class of extremal subsystems of system (1), since each extremal subsystem of (1) obviously contains a nodal subsystem. Now suppose i_1 ≠ i_2 and S(x_{i_1}) is the same as S(x_{i_2}). Then it follows that x_{i_1} and x_{i_2} turn the inequalities of one and the same nodal subsystem into equations. But this contradicts the way the x_i were chosen. Hence the systems S(x_{i_1}) and S(x_{i_2}) with different i_1 and i_2 must be different.

Let us now move on to the study of one of the fundamental methods of finding the extremal subsystems of (1). The method is based on a correspondence between extremal subsystems and certain solutions of systems of linear equations associated with them. Let the rank r of system (1) be different from zero and from m. Without loss of generality we may assume that the first r of the functions f_j are linearly independent. If x_0 is a solution of the equation system

f_j(x) − a_j = 0 (j = 1, 2, ..., r)

(by proposition (*) of §1, the system is consistent), then after substituting x = x̄ + x_0, system (1) takes the form

f_j(x̄) ≥ 0 (j = 1, 2, ..., r),
f_j(x̄) − a′_j ≥ 0 (j = r + 1, ..., m),

where a′_j = a_j − f_j(x_0). Forming the expressions

f_j(x̄) = c_{j1} f_1(x̄) + ... + c_{jr} f_r(x̄) (j = r + 1, ..., m)   (4)

and introducing the notation

f_j(x̄) = u_j (j = 1, 2, ..., r), f_j(x̄) − a′_j = u_j (j = r + 1, ..., m),   (5)

we obtain the equation system

c_{j1} u_1 + ... + c_{jr} u_r − u_j = a′_j (j = r + 1, ..., m).   (6)

We shall show below that the extremal subsystems of system (1) are in one-to-one correspondence with a particular set of solutions of that system: the set of its supporting solutions.

DEFINITION 1.5. A nontrivial solution of an equation system

a_{j1} x_1 + ... + a_{jn} x_n = a_j (j = 1, 2, ..., m)   (7)

(in P^n), all of whose coordinates are nonnegative (a nonnegative solution), is called a supporting solution iff the columns of the matrix

( a_11 ... a_1n )
( ...  ...  ... )
( a_m1 ... a_mn )

that correspond to its positive coordinates are linearly independent over P.
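In modern terminology, a supporting solution is what linear programming calls a basic feasible solution of Ax = a, x ≥ 0. Under that reading, the supporting solutions of a small system (7) can be enumerated by trying column subsets; the following is our sketch over Q, not an algorithm from the book:

```python
# Enumerating supporting solutions (Definition 1.5) of A x = a over Q by
# trying column subsets; our illustrative sketch, not the book's method.
from fractions import Fraction as F
from itertools import combinations

def solve_columns(cols, b):
    """Solve sum_k cols[k]*x_k = b when the columns are independent;
    return None if they are dependent or the system is inconsistent."""
    m, k = len(b), len(cols)
    aug = [[cols[j][i] for j in range(k)] + [b[i]] for i in range(m)]
    r = 0
    for c in range(k):
        piv = next((i for i in range(r, m) if aug[i][c] != 0), None)
        if piv is None:
            return None                    # dependent columns
        aug[r], aug[piv] = aug[piv], aug[r]
        for i in range(m):
            if i != r and aug[i][c] != 0:
                f = aug[i][c] / aug[r][c]
                for t in range(c, k + 1):
                    aug[i][t] -= f * aug[r][t]
        r += 1
    if any(aug[i][k] != 0 for i in range(r, m)):
        return None                        # inconsistent
    return [aug[i][k] / aug[i][i] for i in range(k)]

def supporting_solutions(A, a):
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    out = []
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            sol = solve_columns([cols[j] for j in S], a)
            if sol is None or any(v <= 0 for v in sol):
                continue
            x = [F(0)] * n
            for j, v in zip(S, sol):
                x[j] = v
            out.append(x)
    return out

# Example: x1 + x2 = 2, x2 + x3 = 1.
A = [[F(1), F(1), F(0)], [F(0), F(1), F(1)]]
a = [F(2), F(1)]
for x in supporting_solutions(A, a):
    print(x)  # the two supporting solutions, (1, 1, 0) and (2, 0, 1)
```

Requiring strictly positive values on the chosen columns, with independence of those columns, matches Definition 1.5: the positive coordinates of a supporting solution single out an independent set of columns.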
THEOREM 1.3. If a consistent system (1) with nonzero rank does not have solutions that turn all of its inequalities into equations, then there is a one-to-one correspondence between its extremal subsystems and the supporting solutions of system (6). In this correspondence, an extremal subsystem corresponds to a supporting solution in such a way that the indices of the zero coordinates of the solution are the indices of the inequalities of the extremal subsystem.

REMARK. If a system (1) with nonzero rank has a solution that turns all of its inequalities into equations, then it has a unique extremal subsystem, which coincides with the system itself.

The proof of Theorem 1.3 is based on the following two lemmas.

LEMMA 1.2. Suppose some element of a given space L(P) can be expressed linearly, with nonnegative coefficients in P, in terms of a system of elements of that space having nonzero rank. Then that element may be expressed linearly, with nonnegative coefficients in P, in terms of a linearly independent subsystem of that system.

Proof. Let the element c ∈ L(P) be obtainable as a nonnegative linear combination of the elements of the system c_1, ..., c_m belonging to L(P). Let c_{j_1}, ..., c_{j_ℓ} be a subsystem with the smallest number of elements such that

c = p_1 c_{j_1} + ... + p_ℓ c_{j_ℓ}

for some nonnegative elements p_1, ..., p_ℓ in P. If the elements c_{j_1}, ..., c_{j_ℓ} were linearly dependent, there would exist elements q_1, ..., q_ℓ in the field P, not all zero, such that

0 = q_1 c_{j_1} + ... + q_ℓ c_{j_ℓ}.

Multiplying this relation by an element t ∈ P and subtracting the result from the previous relation, we get

c = (p_1 − t q_1) c_{j_1} + ... + (p_ℓ − t q_ℓ) c_{j_ℓ}.

Let us consider the linear inequality system

p_k − q_k t ≥ 0 (k = 1, 2, ..., ℓ)

in the one unknown t. Since it is obviously consistent (the zero element of P satisfies it) and since its rank is nonzero (it is one), it follows from Theorem 1.1 that it has a nodal solution t_0. By the definition of nodal solutions (see Definition 1.1) there exists an index k = k_0 such that q_{k_0} ≠ 0 and

p_{k_0} − q_{k_0} t_0 = 0.

If c is not the zero element, then in the expression

c = (p_1 − t_0 q_1) c_{j_1} + ... + (p_ℓ − t_0 q_ℓ) c_{j_ℓ},

all of whose coefficients are nonnegative, we may drop the term with index k_0. Consequently, in that case, the element c is expressed with nonnegative coefficients in terms of the system c_{j_1}, ..., c_{j_{k_0 − 1}}, c_{j_{k_0 + 1}}, ..., c_{j_ℓ}, whose number of elements is less than ℓ. This contradiction shows that c_{j_1}, ..., c_{j_ℓ} are linearly independent. The lemma follows from the fact that a nonzero c is expressed as a nonnegative linear combination of these elements and
from the fact that the result is trivial for a zero c.

REMARK. Using Lemma 1.2 we may show that if an equation system (7) with at least one nonzero free term a_j has a solution with nonnegative components, then it has a supporting solution. Indeed, by hypothesis, the vector (a_1, ..., a_m) may be expressed as a nonnegative linear combination of the system of vectors (a_{1i}, ..., a_{mi}) (i = 1, 2, ..., n), whose rank is obviously nonzero (since the vector of free terms is a nonzero vector). We now apply Lemma 1.2 and obtain the desired expression.

LEMMA 1.3. Two supporting solutions of system (7) are distinct iff each of them has a positive coordinate at a place where the corresponding coordinate (with the same index) of the other is zero.

PROOF. Let (x′_1, ..., x′_n) and (x″_1, ..., x″_n) be two supporting solutions of (7) such that the first has no positive coordinates with indices corresponding to zero coordinates of the second. Assume, without loss of generality, that the positive coordinates of the second solution occupy its last n − p places (p ≥ 0). Then the indices of the positive coordinates of the first are also among its last n − p indices. Substituting these solutions into (7), we thus get

a_{j, p+1} x′_{p+1} + ... + a_{jn} x′_n = a_j (j = 1, 2, ..., m)

and

a_{j, p+1} x″_{p+1} + ... + a_{jn} x″_n = a_j (j = 1, 2, ..., m).

Since the last n − p coordinates of the supporting solution (x″_1, ..., x″_n) are positive, it follows from the definition of supporting solutions that the corresponding coefficient columns are linearly independent. But the column of free terms a_j (j = 1, 2, ..., m) cannot be expressed in two different ways in terms of these columns. Thus x′_i = x″_i for i = p + 1, ..., n, and it follows that the two solutions coincide. Since any two solutions that satisfy the condition of the lemma are obviously distinct, we have completed the proof of the lemma.

PROOF OF THEOREM 1.3. Let

f_{j_k}(x) − a_{j_k} ≥ 0 (k = 1, 2, ..., s)   (8)
be an extremal subsystem of system (1) and let x′ be a solution of system (1) that turns all the inequalities of the subsystem into equations. Suppose (u′_1, ..., u′_m) is the nonnegative solution of system (6) defined by formulae (5) for x̄ = x′ − x_0. Then u′_{j_k} = 0 (k = 1, 2, ..., s) by the definition of extremal subsystems, and, by property 3 of bounding subsystems, u′_j > 0 for j ≠ j_1, ..., j_s. If the columns of the matrix

( c_{r+1,1} ... c_{r+1,r} −1 ...  0 )
( ...       ...  ...      ... ... ... )
( c_{m,1}   ... c_{m,r}    0 ... −1 )

of coefficients of the equations (6) corresponding to the positive u′_j were linearly dependent, then by Lemma 1.2 it would be easy to show the existence of a nonnegative solution (u″_1, ..., u″_m) of (6) which has zero coordinates with the indices j_1, ..., j_s as well as with at least one further index j′ ≠ j_1, ..., j_s. Lemma 1.2 is applied here to the column vector (a′_{r+1}, ..., a′_m) and to the system of columns of the above matrix whose indices are distinct from j_1, ..., j_s; the rank of that system is not zero, since the nonzero vector (a′_{r+1}, ..., a′_m) may be linearly expressed in terms of it. (By the condition of the theorem on the absence of solutions of system (1) turning all of its inequalities into equations, it is obvious that the vector (a′_{r+1}, ..., a′_m) cannot be a zero vector.) By proposition (*) of §1, the equation system

f_j(x̄) = u″_j (j = 1, 2, ..., r)

of rank r has at least one solution on L(P). If x̄ = x″ − x_0 is one of its solutions, then by using formulae (4) and (5) it is easy to see that x″ is a solution of system (1) that turns into equations the inequalities of that system with the indices j_1, ..., j_s, j′. But this contradicts the assumed maximality of the subsystem (8).

Thus the columns of the above matrix that correspond to the positive u′_j are linearly independent, and (u′_1, ..., u′_m) is a supporting solution of system (6) in which the indices of the zero coordinates correspond to those inequalities that are included in our extremal subsystem of system (1).

Now let (u′_1, ..., u′_m) be a supporting solution of system (6) and let x̄ = x′ − x_0 be a solution of the system

f_j(x̄) = u′_j (j = 1, 2, ..., r).

Then, using formulae (4) and (5), it is not hard to conclude that x′ is a solution of system (1) which turns into equations those of its inequalities whose indices are the indices of the zero coordinates of the supporting solution (u′_1, ..., u′_m). If the system S of these inequalities were not an extremal subsystem of system (1), there would exist an extremal subsystem of system (1) different from S and containing S. But then (by the part of the theorem already proved) the supporting solution (ū_1, ..., ū_m) corresponding to that subsystem would have zero coordinates at the same places where (u′_1, ..., u′_m) has zeros, and in at least one more place. But by Lemma 1.3 this is not possible. Hence the subsystem S is extremal.

Thus between the extremal subsystems of system (1) and the supporting solutions of system (6) there is a one-to-one correspondence such that the indices of the inequalities included in an extremal subsystem are the indices of the zero coordinates of the corresponding supporting solution. This completes the proof of the theorem.

The following problem is fundamentally related to the one we just resolved. Consider
the linear equation system

f_j(x) − a_j = 0   (j = 1, 2, ..., r),

on L(P). Find in it the subsystem containing the smallest number of equations such that, for at least one of its solutions, the left-hand sides of all equations not included in it assume negative (positive) values. In both cases, the problem reduces to the choice of a maximal subsystem with the largest number of inequalities, i.e. (by Theorem 1.3), to the selection of a supporting solution of the corresponding linear equation system (of the form (6)) with the largest number of zero components or, equivalently, with the smallest number of positive components. The problem of determining all distinct supporting solutions of the equation system

a_{j1} x_1 + ... + a_{jn} x_n = a_j   (j = 1, 2, ..., m),   (9)
(and in particular, that of determining those supporting solutions that interest us here) in the space P^n (with P an ordered field) will be studied later in Chapter V.

Concluding the present section, we prove the following proposition: supporting solutions of system (9) coincide with the nonzero nodal solutions of the system

a_{j1} x_1 + ... + a_{jn} x_n − a_j = 0   (j = 1, 2, ..., m),
x_1 ≥ 0, ..., x_n ≥ 0.   (10)

Furthermore, system (9) has a supporting solution iff system (10) has a nonzero nodal solution. We recall that "a nodal solution of system (10)" means a nodal solution of the equivalent system

a_{j1} x_1 + ... + a_{jn} x_n − a_j ≤ 0   (j = 1, 2, ..., m),
−a_{j1} x_1 − ... − a_{jn} x_n + a_j ≤ 0   (j = 1, 2, ..., m),
−x_1 ≤ 0, ..., −x_n ≤ 0.
In order to prove our proposition, it will be convenient to use the following definition.

DEFINITION 1.6. A solution of the system

f_j(x) − a_j ≤ 0   (j = 1, 2, ..., m),
f_j(x) − a_j = 0   (j = m + 1, ..., m + n),   (11)

of rank r > 0 on L(P) is said to be a nodal solution of that system if, from among the equations

f_j(x) − a_j = 0   (j = 1, 2, ..., m + n)

satisfied by it, we may select a system

f_{j_k}(x) − a_{j_k} = 0   (k = 1, 2, ..., r)

of rank r. We note here that if each equation of system (11) is replaced by the two inequalities
equivalent to it, then the nodal solutions of the resulting system of inequalities, and only they, are the nodal solutions of (11) in the sense of the definition.

Now we present the proof of the proposition. Let (x_1^0, ..., x_n^0) be a supporting solution of system (9). Without loss of generality we may assume that the last n − p (p ≥ 0, p < n) coordinates of that vector are different from zero. But then, by Definition 1.3 of supporting solutions, the last n − p columns of the matrix

( a_11  ...  a_1n )
( ...   ...  ...  )
( a_m1  ...  a_mn )

are linearly independent. Thus the rank of the equation system

a_{j1} x_1 + ... + a_{jn} x_n = a_j   (j = 1, 2, ..., m),
x_1 = 0, ..., x_p = 0   (12)

(if p = 0 the system consists of the first m equations only) equals n. Since the number n equals the rank of system (10) and since our supporting solution (x_1^0, ..., x_n^0) satisfies system (10) as well as system (12), it follows (in view of Definition 1.6) that it is a nodal solution of system (10) and, obviously, a nonzero nodal solution of it.

Now suppose (x_1^0, ..., x_n^0) is some nonzero nodal solution of system (10). Without loss of generality we may assume that the last n − q (q ≥ 0, q < n) of its components are distinct from zero. Thus, again without loss of generality, we may conclude that the equation system of rank n satisfied by that solution has, in view of Definition 1.6, the form

a_{j1} x_1 + ... + a_{jn} x_n = a_j   (j = 1, 2, ..., m),
x_1 = 0, ..., x_q = 0.

Since the rank of that system must be n, which is the rank of system (10) (see Definition 1.6), it follows that the last n − q columns of its matrix of coefficients or, equivalently, the last n − q columns of the matrix of system (9) must be linearly independent. Since (x_1^0, ..., x_n^0) is a solution of system (9) with nonnegative coordinates, it follows, in view of Definition 1.5, that it is a supporting solution of that system. This completes our proof.
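The proposition just proved is easy to exercise numerically when P = R. The sketch below is ours, not the book's: the matrix, right-hand side, and candidate solutions are invented, and Definition 1.5 is paraphrased as "a nonnegative solution of (9) whose nonzero coordinates select linearly independent columns of the coefficient matrix".

```python
import numpy as np

def is_supporting_solution(A, a, x, tol=1e-9):
    """Check the paraphrased Definition 1.5: x solves A @ x = a with
    nonnegative coordinates, and the columns of A at the nonzero
    coordinates of x are linearly independent."""
    x = np.asarray(x, dtype=float)
    if not np.allclose(A @ x, a, atol=tol) or np.any(x < -tol):
        return False
    support = np.where(np.abs(x) > tol)[0]
    if support.size == 0:
        return True  # the zero vector is trivially supporting when a = 0
    return np.linalg.matrix_rank(A[:, support]) == support.size

# A 2x3 system with several nonnegative solutions (made-up numbers).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
a = np.array([2.0, 3.0])
print(is_supporting_solution(A, a, [2.0, 3.0, 0.0]))  # True
print(is_supporting_solution(A, a, [0.0, 1.0, 2.0]))  # True
print(is_supporting_solution(A, a, [1.0, 2.0, 1.0]))  # False: 3 columns, rank 2
```

The third candidate solves the system with nonnegative coordinates but its three nonzero coordinates cannot select independent columns of a rank-2 matrix, so it is not supporting.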
§3. THE SOLUTION POLYHEDRON OF A SYSTEM OF LINEAR INEQUALITIES
Let f(x) be any nonzero linear function defined on a linear space L(P) and let a be an element of P. The set of elements of L(P) that satisfy the equation f(x) − a = 0 (a ∈ P) is called a plane in L(P) defined by that equation. For each element x ∈ L(P) we have one of the following relations:

f(x) − a < 0,   f(x) − a = 0,   f(x) − a > 0.

Moreover, each of these relations is satisfied by at least one element of L(P). The set of points that satisfy one of the first two relations (i.e., f(x) − a ≤ 0) and the set of points that satisfy one of the last two (i.e., f(x) − a ≥ 0) are called halfspaces of L(P) defined by these relations. The set of points of a given halfspace that satisfy f(x) − a < 0 (or f(x) − a > 0) is called the interior region of the halfspace.
It is not hard to show that each halfspace is a convex set, i.e., for any two elements x' and x'' in the halfspace, the line segment connecting them also lies in the halfspace; that is, the halfspace includes the set of points defined by

x(t) = x' + t·(x'' − x'),

where t is an element of P that satisfies the inequality 0 ≤ t ≤ 1 (or, in the notation of §1, t ∈ [0, 1]).
It is easy to show that if one of the elements, x', belongs to one of the halfspaces defined by f(x) − a = 0 and x'' belongs to the other, then the segment [x', x''] has at least one point in common with the plane f(x) − a = 0. Indeed, suppose

f(x') − a ≤ 0 and f(x'') − a ≥ 0.

Since our proposition is obvious in the case

f(x') − a = 0 and f(x'') − a = 0,

we shall assume that

f(x') − a < 0 and f(x'') − a > 0.

Hence f(x') ≠ f(x''), and thus there exists an element t* ∈ P such that

t* = (−f(x') + a) / (f(x'') − f(x')).

Obviously, t* ∈ [0, 1] and f(x(t*)) − a = 0. But this means that the segment [x', x''] and the plane have a point in common, namely x(t*). From these considerations it is not hard to see that a necessary and sufficient condition for the uniqueness of such a point is that at least one of the two inequalities f(x') − a < 0 and f(x'') − a > 0 be satisfied.
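In R^n, with f(x) = c · x, the point of intersection can be computed directly from the formula for t* given above. A minimal numerical sketch (the vector c, the constant a, and the endpoints are made-up values, chosen so that f(x') − a < 0 and f(x'') − a > 0):

```python
import numpy as np

# Intersection of the segment [x1, x2] with the plane f(x) - a = 0,
# where f(x) = c . x, using t* = (-f(x1) + a) / (f(x2) - f(x1)).
c = np.array([1.0, 1.0])
a = 2.0
x1 = np.array([0.0, 0.0])   # f(x1) - a = -2 < 0
x2 = np.array([3.0, 1.0])   # f(x2) - a =  2 > 0

t_star = (a - c @ x1) / (c @ x2 - c @ x1)
x_star = x1 + t_star * (x2 - x1)
print(t_star)          # 0.5, which lies in [0, 1]
print(c @ x_star - a)  # 0.0: x(t*) lies on the plane
```

Since the endpoints sit strictly on opposite sides of the plane, t* is the unique parameter in [0, 1] at which the segment meets it, exactly as the uniqueness condition above states.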
Let us now introduce some definitions and propositions of a geometric nature that are related to the boundary solution principle. Suppose that system (1) is consistent and has nonzero rank r. Then at least one of its inequalities f_j(x) − a_j ≤ 0 has a nonzero function f_j(x) and is associated with one of the two halfspaces of L(P) defined by f_j(x) − a_j = 0. We say that an inequality f_j(x) − a_j ≤ 0 is an identity if f_j(x) is the zero function; otherwise it is called a nonidentity. Thus we may give the following geometric interpretation of the solution set of system (1): the set M of solutions of a consistent system (1) in L(P) of nonzero rank is the intersection of all the halfspaces in L(P) determined by its nonidentity inequalities.
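This intersection-of-halfspaces description gives an immediate membership test in R^n. A small sketch of ours, with an invented system (rows C_j of coefficients and constants a_j; the function name is hypothetical):

```python
import numpy as np

def in_polyhedron(C, a, x, tol=1e-12):
    """x lies in M iff it lies in every halfspace C_j . x - a_j <= 0."""
    return bool(np.all(C @ x - a <= tol))

# Three halfspaces in R^2: x <= 1, y <= 1, -x - y <= 0 (made-up system).
C = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
a = np.array([1.0, 1.0, 0.0])
print(in_polyhedron(C, a, np.array([0.5, 0.2])))  # True
print(in_polyhedron(C, a, np.array([2.0, 0.0])))  # False: violates x <= 1
```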
In connection with the geometric constructs introduced here, it is possible to provide geometric interpretations of the results of §§1 and 2. To this end we introduce the following.

DEFINITION 1.7. The set of solutions of a given consistent system (1) is called a convex polyhedral set in the space L(P). If the rank of a consistent system (1) is different from zero, then the set of its solutions is called the polyhedron of its solutions or simply its polyhedron. If system (1) has at least one solution that turns all of its inequalities into equations, the polyhedron M is called the cone of its solutions.* The intersection of the set of nodal solutions of a boundary subsystem of system (1) with the polyhedron M of its solutions is called a face of that polyhedron. A face that does not include a face distinct from itself is called a minimal face. If j_1, ..., j_k are the indices of those inequalities of (1) that are included in one of its boundary subsystems, the associated face of M will be denoted by M(j_1, ..., j_k).

* A set C ⊆ L(P) is called a convex cone if x, y ∈ C implies x + y ∈ C and x ∈ C, p > 0 (p ∈ P) imply px ∈ C. The set of solutions of system (1) is a cone in that sense if all the a_j in it are zero.

In view of the properties of boundary subsystems of system (1) presented in §1, the following properties of the faces of the polyhedron of system (1) are valid. If r > 0 is the rank of system (1), then its polyhedron M has faces M(j_1, ..., j_k), k = 1, ..., r. Each face M(j_1, ..., j_k) with k < r contains another face M(j_1, ..., j_k, j_{k+1}). Each face M(j_1, ..., j_k) with k = r is a minimal face of the polyhedron M.

From the second property noted above it is clear that the polyhedron M contains at least one minimal face and that each minimal face coincides with one of the faces M(j_1, ..., j_r). These properties of the faces of the polyhedron M acquire more intuitive interpretations when system (1) is defined on R^n, i.e., in case it is of the form (2) and P = R. When r = n, all the minimal faces of the polyhedron M are zero-dimensional and are, naturally, called its vertices. In order to interpret one of the noted geometric properties of the faces of the polyhedron M in the general case, we present the following definition.

DEFINITION 1.8. Let all solutions of a consistent system (1) of nonzero rank satisfy either the inequality f(x) − a ≤ 0 or the inequality f(x) − a ≥ 0, where f(x) is a linear function on L(P) and a ∈ P, and suppose that at least one of these solutions satisfies the equation f(x) − a = 0. Then we say that the plane f(x) − a = 0 and the polyhedron M are in contact. The set of all points common to both of them is called the track of contact.

The problem that interests us now may be formulated in terms of the following property: the track of contact of the polyhedron M of system (1) and a given plane (with which it is in contact) coincides with one of its faces, and conversely, each face of the polyhedron
M is the track of contact between it and some plane. This property will be proved in Chapter II.

DEFINITION 1.9. The linear continuation of a given face of the polyhedron M of system (1) is the intersection of all planes in L(P) containing that face.

It is not hard to show that the linear continuation of a given face of the polyhedron M of system (1) coincides with the intersection of all planes f_j(x) − a_j = 0 that contain it and that correspond to the inequalities of system (1).

Indeed, let S be a given face of the polyhedron M, let M' be its linear continuation, and let M'' be the intersection of all planes f_j(x) − a_j = 0 that contain it. Let j_1, ..., j_k be the indices of these planes. Since it is obvious that M' ⊆ M'', we need only show that M'' ⊆ M'. Suppose x'' is an element of M'' that is not an element of M'. We show that there exists an element x' of M' such that among the elements of the line defined by x' and x'' in L(P), i.e., the elements of L(P) defined by

x(t) = x' + t·(x'' − x'),   (13)

where t is any element of P, there exists an element of M that is distinct from x'.

Let us note first that, for any t ∈ P and any x' ∈ M', the relations

f_{j_ℓ}(x(t)) − a_{j_ℓ} = 0   (ℓ = 1, 2, ..., k)   (14)
must hold. Indeed, the validity of this assertion is easily established with the help of the following manipulation:

f_{j_ℓ}(x(t)) − a_{j_ℓ} = f_{j_ℓ}(x') + t·f_{j_ℓ}(x'' − x') − a_{j_ℓ}
                        = (f_{j_ℓ}(x') − a_{j_ℓ}) + t·(f_{j_ℓ}(x'') − a_{j_ℓ}) − t·(f_{j_ℓ}(x') − a_{j_ℓ}).

For k = m, the relation M'' ⊆ M' is obvious. Thus we may assume that k < m. Hence, for each j ≠ j_ℓ (1 ≤ j ≤ m; ℓ = 1, 2, ..., k) there exists an element x_j of the face S such that

f_j(x_j) − a_j < 0.

If j'_1, ..., j'_h (h = m − k) are the indices j so considered, then the condition of our assertion is satisfied by the element

x' = (1/h)·(x_{j'_1} + ... + x_{j'_h}).

Indeed, by the relations

f_j(x') − a_j = (1/h) · Σ_{s=1}^{h} (f_j(x_{j'_s}) − a_j)   (j = 1, 2, ..., m),

f_j(x_{j'_s}) − a_j ≤ 0   (j = 1, 2, ..., m; s = 1, 2, ..., h)

and

f_{j_ℓ}(x_{j'_s}) − a_{j_ℓ} = 0   (ℓ = 1, 2, ..., k; s = 1, 2, ..., h),

it is not hard to show that x' belongs to the face S and hence to M'. In view of the first of these relations and of the relation

f_j(x_{j'_s}) − a_j < 0   (j = 1, 2, ..., m; j ≠ j_1, ..., j_k)

we have

f_j(x') − a_j < 0   (j = 1, 2, ..., m; j ≠ j_1, ..., j_k).

But then there exists an element t = t_0 > 0 such that

f_j(x(t_0)) − a_j < 0   (j = 1, 2, ..., m; j ≠ j_1, ..., j_k)

(see Lemma 1.4 below). Since for any x' ∈ M' and any t ∈ P the element x(t) satisfies equations (14), it follows that x(t_0) ∈ M. The element x(t_0) is obviously distinct from x', and thus our assertion is established. Going back to the beginning of our proof, we conclude that the element x' of formula (13) may be chosen in accordance with the assertion just established, and that for t = t_0 we get an element x_0 = x(t_0) in M that is distinct from x'. By the definition of the set M', all elements defined by the formula

x_0(t) = x' + t·(x_0 − x')

must belong to it. Since for t = 1/t_0 (the element t_0 is obviously not zero) that formula gives us the element x'', we have reached a contradiction. Consequently M'' ⊆ M' and hence M' = M'', which was to be proved.
REMARK. Since each minimal face S of the polyhedron M of system (1) coincides with one of the faces M(j_1, ..., j_r), where r is the rank of system (1), the corresponding boundary subsystem

f_{j_k}(x) − a_{j_k} ≤ 0   (k = 1, 2, ..., ℓ)

is a nodal subsystem of system (1), and S consists of all the solutions of the corresponding equation system

f_{j_k}(x) − a_{j_k} = 0   (k = 1, 2, ..., ℓ).

The subsystem of inequalities so obtained is, obviously, a bounding subsystem of system (1). Thus its rank, in view of property 2 of bounding subsystems of §2, equals r, the rank of (1). But then each of its subsystems of rank r with r inequalities must be a nodal subsystem of system (1). Since the set of all nodal solutions of any of these subsystems coincides with S, it follows that S is a minimal face of the polyhedron M.

LEMMA 1.4. If x_0 is a solution of system (1) satisfying the inequalities

f_j(x_0) − a_j < 0   (j = 1, 2, ..., m),

then for each x in L(P) we may find a positive element t' of the field P such that

f_j(x_0 + t·(x − x_0)) − a_j < 0   (j = 1, 2, ..., m)

for all t in P satisfying 0 ≤ t ≤ t'.
PROOF. If for each j = 1, 2, ..., m we have f_j(x − x_0) ≤ 0, then for each t ≥ 0 we have

f_j(x_0 + t·(x − x_0)) − a_j = f_j(x_0) − a_j + t·f_j(x − x_0) < 0,

and in that case we may take t' to be any positive element of P. If f_{j'}(x − x_0) > 0 for some j = j', then we may take t' to be a positive element of P that is smaller than the least of the elements

(−f_j(x_0) + a_j) / f_j(x − x_0)   (1 ≤ j ≤ m, f_j(x − x_0) > 0)
by some positive element.

DEFINITION 1.10. Let S be a face of the polyhedron M of system (1) of nonzero rank, and let

f_{j_k}(x) − a_{j_k} ≤ 0   (k = 1, 2, ..., ℓ)   (15)

be the subsystem of system (1) that consists of those inequalities that are turned into equations by all elements of S. The set M(S) of solutions of this subsystem is called the boundary cone of M determined by its face S or, for short, the boundary S-cone of the polyhedron M. If S is a minimal face of the polyhedron M, then that cone is called a minimal boundary cone of the polyhedron M.

Using the results of §2, it is not hard to show that the system (15) defining a minimal boundary cone of the polyhedron M is an extremal subsystem of system (1). We may also show that the polyhedron of solutions of an extremal subsystem of system (1) is one of the minimal boundary cones of the polyhedron M of system (1); the minimal face that defines it coincides with the set of all nodal solutions of that extremal subsystem. Let us now note the following geometric property of boundary cones of the polyhedron M, obvious in the case of R^n.

THEOREM 1.4. If the polyhedron M of solutions of system (1) is in contact with some plane along one of its faces S (S being the track of their contact), then its boundary S-cone M(S) is in contact with that plane as well, and the track of their contact coincides with the linear continuation of S.

PROOF. Indeed, let the polyhedron M be in contact with the plane f(x) − a = 0 and be completely contained in the halfspace f(x) − a ≤ 0, while (15) is the subsystem of system (1) that defines the boundary S-cone M(S). If M(S) ≠ M, then it is not hard to show that the face S of the polyhedron M contains an element x_0 which does not turn into an equation any of the inequalities not included in the subsystem defining the cone M(S). Indeed, for each j ≠ j_1, ..., j_ℓ we may find an element x_j of the face S such that f_j(x_j) − a_j < 0. But then the element

x_0 = (1/(m − ℓ)) · Σ_{j=1, j≠j_1,...,j_ℓ}^{m} x_j

(if M(S) ≠ M, then m − ℓ ≠ 0) belongs to the face S and satisfies all of the inequalities

f_j(x_0) − a_j < 0   (j = 1, 2, ..., m; j ≠ j_1, ..., j_ℓ).
If the cone M(S) were not in contact with the plane f(x) − a = 0, then at least one of its elements x̄ would satisfy the inequality f(x̄) − a > 0. It is not hard to establish that this inequality is then satisfied by all the elements x(t), distinct from x_0, that are given by

x(t) = x_0 + t·(x̄ − x_0),

where t is an arbitrary positive element of the field P. On the other hand, applying Lemma 1.4 to the system

f_j(x) − a_j < 0   (j = 1, 2, ..., m; j ≠ j_1, ..., j_ℓ)

and the element x_0, and noting that x(t) satisfies all of the inequalities of system (15) defining the cone M(S) for any nonnegative t ∈ P, and in particular for t = t', we conclude that x(t') is a solution of system (1) distinct from x_0. But then

f(x(t')) − a ≤ 0,

which contradicts the inequality

f(x(t)) − a > 0,   t > 0 (t ∈ P).

The obtained contradiction implies that in the present case the cone M(S) is in contact with the plane f(x) − a = 0. Since in the case M = M(S) the first part of the theorem needs no proof, this completes the proof of the first part of the theorem.

To prove the second part of the theorem, let us consider, as above, the case M ≠ M(S).
We note first that, in view of the proposition about the linear continuation of the faces of the polyhedron M proved in connection with Definition 1.9, the linear continuation S̄ of S coincides with the set of solutions of the equation system

f_{j_k}(x) − a_{j_k} = 0   (k = 1, 2, ..., ℓ),

i.e., with the unique minimal boundary cone M(S). On the other hand, all elements of the linear continuation S̄ satisfy the equation f(x) − a = 0, since all the elements of S satisfy this equation (see Definition 1.9). Consequently, the linear continuation S̄ is contained in the track of contact between the cone M(S) and the plane f(x) − a = 0. Suppose that this track of contact, S*, is different from S̄ and that x̄ is an element of S* which does not belong to S̄. Consider

x̄(t) = x_0 + t·(x̄ − x_0)

(where t is an arbitrary element of the field P). Using an approach similar to the one used above, we may prove the existence of a positive element t = t̄ of P such that the element x̄(t̄) satisfies system (1) and the equation f(x) − a = 0. But this means that it belongs to the track of contact between the polyhedron M and the plane f(x) − a = 0, i.e., to S. But then x̄(t̄) ∈ S̄. Since x̄(t̄) ≠ x_0, it follows that x̄ ∈ S̄. This, however, contradicts the choice of x̄. Since in the case M = M(S) our proposition needs no proof, the proof is complete. The theorem is proved.

§4. CONSISTENCY CONDITIONS FOR SYSTEMS OF LINEAR INEQUALITIES IN THE SPACE P^n
In this section we present conditions for the consistency of systems of the form (2) that follow from the boundary solution principle (i.e., from Theorem 1.1).

THEOREM 1.5. A necessary and sufficient condition for the consistency of a linear inequality system (2) on P^n of rank r > 0 is the existence, in its coefficient matrix, of a nonzero minor

Δ = | a_{j_1 i_1}  ...  a_{j_1 i_r} |
    | ...          ...  ...         |
    | a_{j_r i_1}  ...  a_{j_r i_r} |

of order r such that the following relations hold:

(1/Δ) · | a_{j_1 i_1}  ...  a_{j_1 i_r}  a_{j_1} |
        | ...          ...  ...          ...     |
        | a_{j_r i_1}  ...  a_{j_r i_r}  a_{j_r} |
        | a_{j i_1}    ...  a_{j i_r}    a_j     |  ≥ 0   (j = 1, 2, ..., m).   (16)

PROOF. If the system (2) of rank r > 0 is consistent then, by Theorem 1.1, it has at least one nodal subsystem. By property 3 of boundary subsystems (see §1), the rank of the latter is r. Thus among the forms ℓ_j(x_1, ..., x_n) of the inequality system (2) there are r linearly independent forms ℓ_{j_k}(x_1, ..., x_n) such that each solution of the equation system

ℓ_{j_k}(x_1, ..., x_n) − a_{j_k} = a_{j_k 1} x_1 + ... + a_{j_k n} x_n − a_{j_k} = 0   (k = 1, 2, ..., r)   (17)

satisfies system (2). Without loss of generality we may assume that j_1 < j_2 < ... < j_r. Since the chosen forms are linearly independent, the coefficient matrix of system (2) has a nonzero minor

Δ = | a_{j_1 i_1}  ...  a_{j_1 i_r} |
    | ...          ...  ...         |
    | a_{j_r i_1}  ...  a_{j_r i_r} |

of order r. Let us introduce the notation x_j = ℓ_j(x_1, ..., x_n) and consider the determinant

| a_{j_1 i_1}  ...  a_{j_1 i_r}  x_{j_1} |
| ...          ...  ...          ...     |
| a_{j_r i_1}  ...  a_{j_r i_r}  x_{j_r} |
| a_{j i_1}    ...  a_{j i_r}    x_j     |   (j = 1, 2, ..., m),

which is identically (with respect to x_1, ..., x_n) equal to zero. From it we obtain an expression for x_j in terms of x_{j_1}, ..., x_{j_r}. Introducing these expressions into system (2) transforms it into the form

(1/Δ) · | a_{j_1 i_1}  ...  a_{j_1 i_r}  x_{j_1} |
        | ...          ...  ...          ...     |
        | a_{j_r i_1}  ...  a_{j_r i_r}  x_{j_r} |
        | a_{j i_1}    ...  a_{j i_r}    a_j     |  ≥ 0   (j = 1, 2, ..., m).   (18)

For j = j_1, ..., j_r the inequalities of that system, after an obvious transformation, become

x_{j_k} − a_{j_k} ≤ 0   (k = 1, 2, ..., r).

Since, on replacing the unknowns x_{j_k} (k = 1, 2, ..., r) in system (18) with their expressions x_{j_k} = ℓ_{j_k}(x_1, ..., x_n) in terms of the unknowns x_1, ..., x_n, system (18) turns into system (2), and since the latter is satisfied by all solutions of system (17), it follows that the inequalities (16) are not violated if we substitute a_{j_k} for x_{j_k}. Thus the relations (16) are a necessary condition for the consistency of system (2).

We now show that if the relations (16) are satisfied for some nonzero minor

Δ = | a_{j_1 i_1}  ...  a_{j_1 i_r} |
    | ...          ...  ...         |
    | a_{j_r i_1}  ...  a_{j_r i_r} |

of order r, then the system (2) is consistent. Indeed, from (16) it follows that system (18) has the solution

(x_{j_1}, ..., x_{j_r}) = (a_{j_1}, ..., a_{j_r}).   (19)

Substituting for the x_{j_k} their expressions in terms of x_1, ..., x_n, we get system (17). Since the nonzero determinant Δ is a minor of order r of the matrix of system (17), the rank of that system equals the number of its equations. Thus the system is consistent. Each of its solutions, in view of the existence of the solution (19) of the system (18), satisfies

(1/Δ) · | a_{j_1 i_1}  ...  a_{j_1 i_r}  ℓ_{j_1}(x_1, ..., x_n) |
        | ...          ...  ...          ...                   |
        | a_{j_r i_1}  ...  a_{j_r i_r}  ℓ_{j_r}(x_1, ..., x_n) |
        | a_{j i_1}    ...  a_{j i_r}    a_j                   |  ≥ 0   (j = 1, 2, ..., m),   (20)

which differs from system (2) only in the form in which the inequalities are written. Consequently system (2) is consistent. Thus relation (16) is a necessary and sufficient condition for consistency. The theorem is proved.

REMARK. If r = m, then relation (16) will be satisfied by any nonzero minor Δ of order r of the matrix of system (2). Indeed, in that case the index j may take only one of the values j_1, ..., j_r, for each of which the relation holds.

In connection with Theorem 1.5, we introduce:

DEFINITION 1.11. A nonzero minor Δ of order r of the matrix of a system (2) of rank r > 0 is called a node of system (2) if it satisfies relation (16). The determinants Δ_j (j = 1, 2, ..., m), obtained from Δ by bordering it from below and on the right, respectively, with the elements of a row of the matrix of system (2) and the constant a_j, are called the companion determinants of Δ.
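Over P^n = R^n, Theorem 1.5 suggests a brute-force consistency test: search the r×r minors of the matrix for a node. The sketch below is ours, not the book's (the function names and the example system are invented); it checks relation (16) in the equivalent form "each companion determinant divided by the minor is nonnegative":

```python
import numpy as np
from itertools import combinations

def is_node(A, a, rows, cols, tol=1e-9):
    """Test relation (16) for the minor of A in the given r rows and
    columns (system l_j(x) - a_j <= 0 with matrix A, constants a):
    the minor must be nonzero and every companion determinant must be
    zero or of the same sign as the minor."""
    M = A[np.ix_(rows, cols)]
    D = np.linalg.det(M)
    if abs(D) < tol:
        return False
    r = len(rows)
    for j in range(A.shape[0]):
        B = np.zeros((r + 1, r + 1))
        B[:r, :r] = M
        B[:r, r] = a[list(rows)]       # bordering column of constants
        B[r, :r] = A[j, list(cols)]    # bordering row of coefficients
        B[r, r] = a[j]
        if np.linalg.det(B) / D < -tol:
            return False
    return True

def consistent(A, a, r):
    """System (2) of rank r > 0 is consistent iff it has at least one node."""
    m, n = A.shape
    return any(is_node(A, a, rows, cols)
               for rows in combinations(range(m), r)
               for cols in combinations(range(n), r))

# Made-up example in R^2: x <= 1, y <= 1, -x - y <= -3 is inconsistent
# (it demands x + y >= 3); relaxing -3 to -1 makes it consistent.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
print(consistent(A, np.array([1.0, 1.0, -3.0]), 2))  # False
print(consistent(A, np.array([1.0, 1.0, -1.0]), 2))  # True
```

This enumeration is exponential in general; it is meant only to make the statement of the theorem concrete, not as a practical algorithm.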
Using this definition, we obtain the following simple form of Theorem 1.5: a system (2) of linear inequalities of rank r > 0 is consistent iff, for at least one nonzero minor of order r of its matrix, every companion determinant is either zero or has the same sign as the minor. An even shorter form of Theorem 1.5 is: a system (2) of linear inequalities of rank r > 0 is consistent if and only if it has at least one node.

COROLLARY 1.1. If system (2) of rank r > 0 is consistent, then so is each system

a_{j i_1} x_{i_1} + ... + a_{j i_r} x_{i_r} − a_j ≤ 0   (j = 1, 2, ..., m)   (21)

of rank r defined by it.

PROOF. Indeed, let system (2) of rank r > 0 be consistent and let (x_1^0, ..., x_n^0) be one of its solutions. Then any system

a_{j i_1} x_{i_1} + ... + a_{j i_r} x_{i_r} − (a_j − ℓ_j(x_1^0, ..., x_n^0) + a_{j i_1} x_{i_1}^0 + ... + a_{j i_r} x_{i_r}^0) ≤ 0   (j = 1, 2, ..., m)

defined by it has (x_{i_1}^0, ..., x_{i_r}^0) as a solution. If its rank is r then, by Theorem 1.5, there exists a nonzero determinant

Δ = | a_{j_1 i_1}  ...  a_{j_1 i_r} |
    | ...          ...  ...         |
    | a_{j_r i_1}  ...  a_{j_r i_r} |

for which the relations

(1/Δ) · | a_{j_1 i_1}  ...  a_{j_1 i_r}  a'_{j_1} |
        | ...          ...  ...          ...      |
        | a_{j_r i_1}  ...  a_{j_r i_r}  a'_{j_r} |
        | a_{j i_1}    ...  a_{j i_r}    a'_j     |  ≥ 0   (j = 1, 2, ..., m)

hold, where a'_j = a_j − ℓ_j(x_1^0, ..., x_n^0) + a_{j i_1} x_{i_1}^0 + ... + a_{j i_r} x_{i_r}^0. It is easy to see, however, that it is possible to exchange a'_j for a_j in these relations without violating any of them. By Theorem 1.5, the relations thus obtained mean that the system (21) is consistent. This ends the proof.

In connection with the proposition just proved, it is appropriate to introduce the following.

DEFINITION 1.12. A system of the form

a_{j i_1} x_{i_1} + ... + a_{j i_r} x_{i_r} − a_j ≤ 0   (j = 1, 2, ..., m)

defined by system (2) is called a section of that system. A section of system (2) with nonzero rank is called a regular section. A proper section whose rank coincides with the rank of system (2) is called a principal section of that system. Now we may state Corollary 1.1 in the following simple form: if a system (2) with nonzero rank is consistent, then so are all of its principal sections.

Since each node of a given principal section of system (2) is a node of the latter, it follows that any r linearly independent columns of the coefficient matrix of a consistent system (2)
of rank r > 0 contain a node of the latter. If

Δ = | a_{j_1 i_1}  ...  a_{j_1 i_r} |
    | ...          ...  ...         |
    | a_{j_r i_1}  ...  a_{j_r i_r} |

is one of the nodes of a consistent system (2) of rank r > 0 then, by (16), the corresponding system (18) has (a_{j_1}, ..., a_{j_r}) for a solution. Since, on substituting for the x_{j_k} their expressions x_{j_k} = ℓ_{j_k}(x_1, ..., x_n) (k = 1, 2, ..., r) in terms of the unknowns x_1, ..., x_n, system (18) reduces to system (2), it follows that each solution of the system

ℓ_{j_k}(x_1, ..., x_n) − a_{j_k} = 0   (k = 1, 2, ..., r),

which is consistent in view of Δ ≠ 0, satisfies system (2). Thus the node Δ corresponds to the nodal subsystem

ℓ_{j_k}(x_1, ..., x_n) − a_{j_k} ≤ 0   (k = 1, 2, ..., r)

of system (2), namely to that nodal subsystem whose matrix contains the minor Δ. From the proof of Theorem 1.5 it is easy to see that each nonzero minor of order r of that matrix is a node of system (2). In connection with the constructs presented here we introduce:

DEFINITION 1.13. Two minors of a matrix that are of the same order are said to be horizontally (vertically) compatible if they are contained in the same rows (columns) of that matrix.

Now we present a property of the nodes of system (2). All nonzero minors of the matrix of system (2) that are horizontally compatible with one of its nodes are nodes of system (2). All of them correspond to one nodal subsystem of system (2), namely to that nodal subsystem whose matrix contains them.

We also note the following property of minors of the matrix of system (2) that are vertically compatible with some node of system (2). An arbitrary minor of the matrix of an arbitrary nodal subsystem of system (2) that is vertically compatible, in the matrix of the latter, with one of its nodes is a node of system (2). Since all nonzero minors of the highest order of matrices of nodal subsystems of (2) are nodes of the latter, it suffices to show that the minors in which we are interested are nonzero. Suppose i_1, ..., i_r (r > 0 is the rank of system (2)) are the indices of the columns of system (2) containing the elements of such a minor, and let

ℓ_{j_k}(x_1, ..., x_n) − a_{j_k} ≤ 0   (k = 1, 2, ..., r)

be the nodal subsystem of system (2) whose matrix contains those columns. If (x_1^0, ..., x_n^0) is a nodal solution of that subsystem (we speak here of a solution that turns all of its inequalities into equations), then all solutions of the equation system

a_{j_k i_1} x_{i_1} + ... + a_{j_k i_r} x_{i_r} − (a_{j_k} − ℓ_{j_k}(x_1^0, ..., x_n^0) + a_{j_k i_1} x_{i_1}^0 + ... + a_{j_k i_r} x_{i_r}^0) = 0   (k = 1, 2, ..., r)

(which is obviously consistent) satisfy the system

a_{j i_1} x_{i_1} + ... + a_{j i_r} x_{i_r} − (a_j − ℓ_j(x_1^0, ..., x_n^0) + a_{j i_1} x_{i_1}^0 + ... + a_{j i_r} x_{i_r}^0) ≤ 0   (j = 1, 2, ..., m).   (22)

Since the rank of the latter is obviously nonzero (it equals r > 0), the equation system under study has nonzero rank. But then the inequality system

a_{j_k i_1} x_{i_1} + ... + a_{j_k i_r} x_{i_r} − (a_{j_k} − ℓ_{j_k}(x_1^0, ..., x_n^0) + a_{j_k i_1} x_{i_1}^0 + ... + a_{j_k i_r} x_{i_r}^0) ≤ 0   (k = 1, 2, ..., r)

is a bounding subsystem of system (22), all of whose solutions satisfy system (22). By property 2 of bounding subsystems (see §2), its rank coincides with the rank of system (2) and is equal to r. But then our minor turns out to be the determinant of that bounding subsystem, and it is therefore nonzero. This is what we set out to prove.

The above properties of nodes yield our next proposition.
Suppose r > 0 is the rank of a consistent system (2) and suppose S is one of its principal sections. Then a subsystem of (2) is nodal iff the largest minor of the matrix of that subsystem contained in the matrix of S is a node of the latter. Thus we get, in particular, that if some r rows of the matrix of a principal section of system (2) constitute a node of it, then the corresponding r rows (having the same indices) of the matrix of any of its other principal sections constitute a node of the latter.

In view of the above proposition, in order to find the set of all nodal subsystems of a consistent system (2) of rank r > 0, it is sufficient to find all the nodes of one of its principal sections; the nodal subsystems will then be those subsystems with r inequalities whose matrices contain those nodes. Clearly, in this regard, several nodal subsystems may have one and the same nodal solution. The question that arises, about the conditions under which two nodes define two nodal subsystems with the same nodal solution, is related to that of isolating those inequalities of system (2) for which the relation holds as an equation. Taking up that question, let us take a node

Δ = | a_{j_1 i_1}  ...  a_{j_1 i_r} |
    | ...          ...  ...         |
    | a_{j_r i_1}  ...  a_{j_r i_r} |

of system (2) and the subsystem

ℓ_{j_k}(x_1, ..., x_n) − a_{j_k} ≤ 0   (k = 1, 2, ..., r)

corresponding to it, and suppose that for some j = j' we have
(1/Δ) · | a_{j_1 i_1}  ...  a_{j_1 i_r}  a_{j_1} |
        | ...          ...  ...          ...     |
        | a_{j_r i_1}  ...  a_{j_r i_r}  a_{j_r} |
        | a_{j' i_1}   ...  a_{j' i_r}   a_{j'}  |  = 0.   (23)

Transforming system (2) as we did in the proof of Theorem 1.5, its j'-th inequality may be written as

ℓ_{j'}(x_1, ..., x_n) − a_{j'} = −(1/Δ) · | a_{j_1 i_1}  ...  a_{j_1 i_r}  ℓ_{j_1}(x_1, ..., x_n) |
                                          | ...          ...  ...          ...                   |
                                          | a_{j_r i_1}  ...  a_{j_r i_r}  ℓ_{j_r}(x_1, ..., x_n) |
                                          | a_{j' i_1}   ...  a_{j' i_r}   a_{j'}                |  ≤ 0.

In view of the preceding relation (23), it follows from this that all solutions of the equation system

ℓ_{j_k}(x_1, ..., x_n) − a_{j_k} = 0   (k = 1, 2, ..., r)

satisfy the equation

ℓ_{j'}(x_1, ..., x_n) − a_{j'} = 0.

From these considerations it is not hard to see that the converse of this proposition also holds, i.e., if all solutions of the preceding equation system turn a given inequality of system (2) into an equation, then that inequality satisfies relation (23) with j = j'. Thus, for a fixed node Δ, relation (23) holds for, and only for, the indices j = 1, ..., m of those inequalities of system (2) that are turned into equations by all nodal solutions of the nodal subsystem of (2) corresponding to Δ. It is appropriate here to introduce:

DEFINITION 1.14. The maximal subsystem of inequalities of system (2) whose coefficients satisfy (23) for a given fixed node Δ of system (2) is called the subsystem of (2) pertaining to the node Δ, or simply the Δ-subsystem of system (2).
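For a concrete system in R^2, the indices of a Δ-subsystem can be read off by checking which companion determinants vanish, exactly as relation (23) prescribes. A sketch of ours (the system and the function name are invented for illustration):

```python
import numpy as np

def delta_subsystem(A, a, rows, cols, tol=1e-9):
    """Indices j whose companion determinant vanishes (relation (23)),
    i.e. the inequalities of the Delta-subsystem pertaining to the node
    located in the given rows/columns. Assumes that minor is nonzero."""
    r = len(rows)
    M = A[np.ix_(rows, cols)]
    assert abs(np.linalg.det(M)) > tol
    out = []
    for j in range(A.shape[0]):
        B = np.zeros((r + 1, r + 1))
        B[:r, :r] = M
        B[:r, r] = a[list(rows)]
        B[r, :r] = A[j, list(cols)]
        B[r, r] = a[j]
        if abs(np.linalg.det(B)) < tol:
            out.append(j)
    return out

# Made-up system: x <= 1, y <= 1, -x - y <= -2.  The node in rows (0, 1)
# has nodal solution (1, 1), which also turns the third inequality into
# an equation, so all three indices belong to the Delta-subsystem.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
a = np.array([1.0, 1.0, -2.0])
print(delta_subsystem(A, a, (0, 1), (0, 1)))  # [0, 1, 2]
```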
Associating the properties of the inequalities of a Δ-subsystem with the properties of extremal subsystems established in §2, we have: each Δ-subsystem of system (2) is an extremal subsystem of the latter; more precisely, it is an extremal subsystem that contains the nodal subsystem of (2) defined by the node Δ.

Using the properties of subsystems presented in §2, it is not difficult to prove that each extremal subsystem of system (2) is a Δ-subsystem for some node Δ of the latter. Indeed, any extremal subsystem includes at least one nodal subsystem of system (2). It has been noted above that a nonzero minor of maximal order in the matrix of a nodal subsystem of system (2) is a node of the latter. But then, in view of the proposition stated after Definition 1.14, it follows that our extremal subsystem is a Δ-subsystem for any of these minors.

Using the established extremality of Δ-subsystems, we conclude that the Δ-subsystems defined by nodes that are horizontally compatible must coincide. Thus the indices of the zeros in relation (16) coincide for any two horizontally compatible nodes. Hence, in order to find all
of the extremal subsystems of (2) we may use its principle sections. Now the question we are considering, i.e., the question of conditions under which two nodes
0
and
de…ne two nodal subsystems with one and the same solution, may be
formulated as a question about conditions under which the nodes
and
0
of system (2)
de…ne one and the same extremal subsystem of the latter, In view of the above properties of nodes de…ned by principle sections of system (2) those conditions will be in terms of vertical compatibility of the nodes
and
0
. In order to solve our problem we introduce the next
proposition. Two vertically compatible nodes
0
and
of system (2) de…ne the same extremal sub-
system of (2) i¤ all the rows of the nodes de…ne zero companion determinants of the other. Indeed, the necessary part of the proposition follow easily from the property of
-
subsystem formulated after De…nition 1.14. We now prove su¢ ciency. Suppose the rows of one node (say
) determine the zero companion determinants of the other. Then the ex-
tended matrix of the system of inequalities that is obtained by joining the nodal subsystems de…ned by
and
0
has the rank of r (r is the rank of system (2)), Since the rank of the
matrix of that system is r, it follows that changing the inequalities of the latter to equations will result in a consistent system of equations. Since the set of solutions of that system coincides with the sets of solutions of subsystems of it with rank r, we conclude that the joined nodal subsystems have the same nodal solution. But then, in view of the extremality of
-determinants, it follows that the nodes
and
0
must coincide. This completes the
proof of the proposition. It was noted that indices of relations (16) de…ned by two horizontally compatible nodes coincides. Acutally a stronger proposition is valid. If
j
and
j
0
(j = 1; 2; :::; m) are two sets of companion determinants for two horizontally
compatible nodes j
=
0
j 0
and
0
of a consistent system 2, then
(j = 1; 2; :::; m):
Indeed, let

Δ = | a_{j_1 i_1} ... a_{j_1 i_r} |
    | ...                         | ≠ 0.
    | a_{j_r i_1} ... a_{j_r i_r} |

As in the proof of Theorem 1.5, system (2) may be transformed to (see (20))

(1/Δ) · | a_{j_1 i_1} ... a_{j_1 i_r}   ℓ_{j_1}(x_1,...,x_n) − a_{j_1} |
        | ...                                                          |
        | a_{j_r i_1} ... a_{j_r i_r}   ℓ_{j_r}(x_1,...,x_n) − a_{j_r} | ≤ 0   (j = 1,2,...,m).
        | a_{j i_1}   ... a_{j i_r}    ℓ_j(x_1,...,x_n) − a_j          |

If we substitute here a nodal solution (x⁰_1,...,x⁰_n) of the nodal subsystem

ℓ_{j_k}(x_1,...,x_n) − a_{j_k} = 0   (k = 1,2,...,r)

corresponding to the node Δ, then we get

a_j − ℓ_j(x⁰_1,...,x⁰_n) = Δ_j/Δ   (j = 1,2,...,m).

Doing the same with respect to the node Δ′, which is horizontally compatible with the node Δ, we get

a_j − ℓ_j(x⁰_1,...,x⁰_n) = Δ′_j/Δ′   (j = 1,2,...,m).

Consequently

Δ_j/Δ = Δ′_j/Δ′   (j = 1,2,...,m),
which is what we wanted to prove.

Let us note here that, in the case of a normalized consistent system (2) in R^n, these arguments imply that the ratios Δ_j/Δ coincide with the distances of the minimal face of the system's solution polyhedron (defined by Δ) from the bounding planes defined by the system's inequalities. System (2) on R^n is said to be normalized if the relation

a_{j1}^2 + ... + a_{jn}^2 = 1   (j = 1,2,...,m)

is satisfied by the coefficients of the system's inequalities. Normalizing an arbitrary consistent system (2) of linear inequalities (on R^n) not containing inequalities with null linear forms ℓ_j(x_1,...,x_n), and using the stated geometric notions related to Δ_j/Δ, we get the formula

Δ_j/Δ = H_j(Δ) · √(a_{j1}^2 + ... + a_{jn}^2)   (j = 1,2,...,m),

where H_j(Δ) is the distance of the minimal face of the solution polyhedron of system (2), defined by the node Δ, from the plane of the half-space defined in R^n by the j-th inequality of system (2). This formula was derived by Eremin (see Eremin [1]).
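On R^n the geometric content of this formula is the elementary point-to-plane distance: after normalizing row j, the ratio Δ_j/Δ is the distance H_j. A small numerical sketch (the system, the point, and the helper name are illustrative assumptions of ours):

```python
import math

# hypothetical system (2) in R^2:
# 3x1 + 4x2 - 12 <= 0,  x2 - 1 <= 0,  -x1 <= 0,  -x2 <= 0
A = [[3, 4], [0, 1], [-1, 0], [0, -1]]
a = [12, 1, 0, 0]

def distance_to_plane(j, x):
    """Distance in R^n from the point x to the bounding plane l_j(x) = a_j;
    this is what the formula above computes once row j is normalized."""
    value = sum(A[j][k] * x[k] for k in range(len(x))) - a[j]
    norm = math.sqrt(sum(c * c for c in A[j]))
    return abs(value) / norm

x0 = (0, 0)  # a vertex of the solution polyhedron (a minimal face)
print([distance_to_plane(j, x0) for j in range(4)])
```

The two zero distances correspond to the bounding planes on which the chosen vertex lies.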
§5. GENERALIZATION OF THE KRONECKER-CAPELLI THEOREM

Here we study the problem of the consistency (in P^n) of the system

ℓ_j(x_1,...,x_n) − a_j = a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0   (j = 1,2,...,m)
h_j(x_1,...,x_n) − b_j = b_{j1}x_1 + ... + b_{jn}x_n − b_j = 0   (j = 1,2,...,p)   (24)

which is, in general, a system of linear inequalities augmented with a system of linear equations; the case where system (24) coincides with the preceding one is not excluded. We denote the rank of system (24) and that of the linear equation system included in it by r and r′ respectively. In this connection, we assume that r′ > 0. Replacing each equation in (24) by the two inequalities equivalent to it, we get

ℓ_j(x_1,...,x_n) − a_j ≤ 0    (j = 1,2,...,m)
h_j(x_1,...,x_n) − b_j ≤ 0    (j = 1,2,...,p)    (25)
−h_j(x_1,...,x_n) + b_j ≤ 0   (j = 1,2,...,p)
The sought conditions for the consistency of that system follow from Theorem 1.5. Now, obviously, each extremal subsystem of that system contains all the inequalities obtained from the equations. Also, each subsystem of rank r of any extremal subsystem is a nodal subsystem of system (25). Thus, there exists a nodal subsystem that contains r′ inequalities

h_{j_k}(x_1,...,x_n) − b_{j_k} ≤ 0   (k = 1,2,...,r′)

whose linear forms h_{j_k}(x_1,...,x_n) are independent. Hence, there must exist a node Δ of system (25) which contains the rows of coefficients of the linear forms h_{j_k}(x_1,...,x_n). In the augmented matrix of (25), the rows defined by the coefficients and the constants of the inequalities h_{j_k}(x_1,...,x_n) − b_{j_k} ≤ 0 reappear with the opposite sign. Thus, the determinant Δ and its companion determinants corresponding to these rows reappear with the opposite sign. Hence, in the relations (16) of Theorem 1.5 (with respect to the node Δ of system (25)), each of the companion determinants of Δ corresponding to these rows is zero. Thus, we have proved the following theorem.

THEOREM 1.6. System (24) of rank r > 0 is consistent iff its matrix contains a nonzero minor Δ of order r, containing r′ rows that consist of the coefficients of the linear equations of (24), such that those companion determinants of Δ corresponding to equations are zero and those that correspond to inequalities are nonnegative.

If system (24) does not contain inequalities, then the theorem reduces to a consistency condition for the linear equation system

h_j(x_1,...,x_n) − b_j = b_{j1}x_1 + ... + b_{jn}x_n − b_j = 0
(j = 1,2,...,p)

(with nonzero rank), and is equivalent (for systems with nonzero rank) to the well-known Kronecker-Capelli theorem, which asserts that an arbitrary system of linear equations is consistent iff its rank coincides with the rank of its augmented matrix. Thus, excluding the case of linear equation systems of zero rank, Theorem 1.6 is a generalization of the Kronecker-Capelli theorem. We note here that we consider systems in the space P^n over an ordered field P.

As a further generalization of the Kronecker-Capelli theorem, we present conditions for the consistency of the system

|a_{i1}x_1 + ... + a_{in}x_n − a_i| ≤ ε_i   (i = 1,2,...,m)   (26)

where the a_{i1},...,a_{in} and a_i (i = 1,2,...,m) are arbitrary elements of an ordered field P, the ε_i (i = 1,2,...,m) are some of its nonnegative elements, and |·| is the absolute value sign in that field. For any two elements a and b of P the following relations must hold (see van der Waerden [1]):

|ab| = |a| |b| and
|a + b| ≤ |a| + |b|.

Using the second of these, we get

|a + b| ≥ |a| − |b|.

Let us write system (26) in the form

a_{i1}x_1 + ... + a_{in}x_n − (a_i + ε_i) ≤ 0     (i = 1,2,...,m),
−a_{i1}x_1 − ... − a_{in}x_n − (ε_i − a_i) ≤ 0    (i = 1,2,...,m).

Introducing the notation

a′_{jk} = a_{ik}  for j = 2i − 1,   a′_{jk} = −a_{ik}  for j = 2i   (i = 1,2,...,m; k = 1,2,...,n),

a′_j = a_i + ε_i  for j = 2i − 1,   a′_j = ε_i − a_i  for j = 2i   (i = 1,2,...,m),

the last system may be written in the form
a′_{j1}x_1 + ... + a′_{jn}x_n − a′_j ≤ 0   (j = 1,2,...,2m).   (27)
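The passage from (26) to the doubled system (27) can likewise be sketched; the rows are indexed as in the notation just introduced (the function name and data are ours):

```python
def doubled_system(A, a, eps):
    """Pass from (26) to (27): |A[i].x - a[i]| <= eps[i] becomes the pair
    of rows j = 2i - 1 (coefficients kept, constant a_i + eps_i) and
    j = 2i (coefficients negated, constant eps_i - a_i)."""
    Ap, ap = [], []
    for Ai, ai, ei in zip(A, a, eps):
        Ap.append(list(Ai)); ap.append(ai + ei)          # row j = 2i - 1
        Ap.append([-c for c in Ai]); ap.append(ei - ai)  # row j = 2i
    return Ap, ap

# hypothetical instance: |x1 + x2 - 3| <= 1
A2, a2 = doubled_system([[1, 1]], [3], [1])
print(A2, a2)
```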
In view of Theorem 1.5, this system (and hence the equivalent system (26)) is consistent iff it possesses at least one node

Δ′ = | a′_{j_1 k_1} ... a′_{j_1 k_r} |
     | ...                           | ≠ 0
     | a′_{j_r k_1} ... a′_{j_r k_r} |

(where r > 0 is the rank of the matrix of coefficients of system (26)); in our notation it must satisfy

(1/Δ′) · | a′_{j_1 k_1} ... a′_{j_1 k_r}   a′_{j_1} |
         | ...                                      |
         | a′_{j_r k_1} ... a′_{j_r k_r}   a′_{j_r} | ≥ 0   (j = 1,2,...,2m).   (28)
         | a′_{j k_1}   ... a′_{j k_r}    a′_j      |

Writing these relations out for j = 2i − 1 and for j = 2i (the last entry of the bordering row being a_i + ε_i and ε_i − a_i respectively), and taking out the −1 factors of the even-indexed rows, each pair may be combined into the single relation

mod[ (1/Δ) · | a_{i_1 k_1} ... a_{i_1 k_r}   u_{i_1} |
             | ...                                   |
             | a_{i_r k_1} ... a_{i_r k_r}   u_{i_r} | ] ≤ ε_i   (i = 1,2,...,m),   (29)
             | a_{i k_1}   ... a_{i k_r}    a_i      |

where the expression "mod" denotes the absolute value, Δ is the corresponding minor of the matrix of system (26), and (u_{i_1},...,u_{i_r}) is a solution of the system

|u_{i_k} − a_{i_k}| = ε_{i_k}   (k = 1,2,...,r).   (30)
Therefore, if system (26), whose rank r is nonzero, is consistent, then for some nonzero minor of its matrix and for some solution of system (30), relation (29) is satisfied. Conversely, beginning with relation (29) and a solution (u⁰_{i_1},...,u⁰_{i_r}) that satisfies it, and using our notation, we can, going the other way, obtain relation (28) for system (27). But then (27) would be consistent, and so would the equivalent system (26). We have proved the following.

THEOREM 1.7. A necessary and sufficient condition for the consistency of the system (26) of rank r > 0 (r is the rank of the matrix of coefficients of system (26)) is the existence of a nonzero minor

Δ = | a_{i_1 k_1} ... a_{i_1 k_r} |
    | ...                         | ≠ 0
    | a_{i_r k_1} ... a_{i_r k_r} |

of order r in its matrix and a solution (u⁰_{i_1},...,u⁰_{i_r}) of the system

|u_{i_k} − a_{i_k}| = ε_{i_k}   (k = 1,2,...,r),

such that, for all i = 1,2,...,m, we have

mod[ (1/Δ) · | a_{i_1 k_1} ... a_{i_1 k_r}   u⁰_{i_1} |
             | ...                                    |
             | a_{i_r k_1} ... a_{i_r k_r}   u⁰_{i_r} | ] ≤ ε_i.
             | a_{i k_1}   ... a_{i k_r}    a_i       |

For ε_1 = ε_2 = ... = ε_m = 0, system (26) reduces to the equation system

a_{i1}x_1 + ... + a_{in}x_n − a_i = 0   (i = 1,2,...,m),

and the relations in Theorem 1.7 express the equality of the rank of the system and the rank of its augmented matrix. Thus, if we exclude the case of linear equation systems of zero rank, Theorem 1.7 is a generalization of the Kronecker-Capelli theorem to the more general system (26) of nonzero rank.
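The classical Kronecker-Capelli rank test recovered here for ε = 0 is easy to check mechanically in exact arithmetic; a sketch with helper names of our own:

```python
from fractions import Fraction

def rank(rows):
    """Rank over the rationals by Gaussian elimination (exact arithmetic)."""
    M = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def consistent(A, b):
    """Kronecker-Capelli: Ax = b is solvable iff rank A = rank [A | b]."""
    return rank(A) == rank([row + [bi] for row, bi in zip(A, b)])

print(consistent([[1, 1], [2, 2]], [1, 2]))   # True
print(consistent([[1, 1], [2, 2]], [1, 3]))   # False
```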
§6. SOME CRITERIA FOR THE EXISTENCE OF POSITIVE AND NEGATIVE SOLUTIONS FOR SYSTEMS OF LINEAR INEQUALITIES
This section is devoted to applying some results of §4 to investigating the existence of positive and negative solutions of systems of linear equations and systems of linear inequalities.

DEFINITION 1.15. A solution (x⁰_1,...,x⁰_n) of system (2) is nonnegative (nonpositive) relative to the unknowns x_{k_1},...,x_{k_ℓ} if the coordinates x⁰_{k_1},...,x⁰_{k_ℓ} are nonnegative (nonpositive). If the solution (x⁰_1,...,x⁰_n) is nonnegative (nonpositive) relative to all of the unknowns x_1,...,x_n, then it is simply called nonnegative (nonpositive). A solution that is nonnegative (nonpositive) with respect to the unknowns x_{k_1},...,x_{k_ℓ} is said to be positive (negative) relative to them if at least one of the components x⁰_{k_1},...,x⁰_{k_ℓ} is different from zero. In particular, a nonnegative (nonpositive) solution of system (2) is said to be positive (negative) if at least one of its components is nonzero. A positive (negative) solution of system (2) is said to be strictly positive (strictly negative) if none of its components is zero.

THEOREM 1.8. Suppose system (2) of nonzero rank does not have any zero solutions. Then the system has a nonnegative (nonpositive) solution relative to the unknowns x_{k_1},...,x_{k_ℓ} (ℓ ≤ n) iff at least one of its proper sections has a nodal solution all of whose coordinates are nonzero and whose coordinates with indices among the numbers k_1,...,k_ℓ are positive (negative).

PROOF. Since it is always possible to append zero components to any solution of any section of system (2) until it becomes a solution of system (2), the sufficiency of the condition is obvious. To show the necessity, let us limit ourselves to the case of nonnegative solutions relative to the unknowns x_{k_1},...,x_{k_ℓ}. Let (x⁰_1,...,x⁰_n) be one of those solutions of system (2). Consider the equation system

a_{j1}x_1 + ... + a_{jn}x_n = a_j + u_j   (j = 1,2,...,m),
in which

a_{j1}x⁰_1 + ... + a_{jn}x⁰_n = a_j + u_j   (j = 1,2,...,m).

Introducing the notation

a′_{ji} = (x⁰_i/|x⁰_i|) · a_{ji}   (j = 1,2,...,m; i = 1,2,...,n),   (31)

and taking 0/|0| = 1 (0 ∈ P), we construct the new system

a′_{j1}x_1 + ... + a′_{jn}x_n = a_j + u_j   (j = 1,2,...,m).
The latter obviously has the nonnegative solution (|x⁰_1|,...,|x⁰_n|), and hence the vector

(a_1 + u_1, ..., a_m + u_m) ∈ P^m

may be expressed linearly with nonnegative coefficients |x⁰_1|,...,|x⁰_n| (from the field P) in terms of the vectors

(a′_{1i},...,a′_{mi})   (i = 1,2,...,n)

of the space P^m. But then, by Lemma 1.2, it may be expressed linearly with nonnegative coefficients in terms of one of their linearly independent subsystems. Let

(a′_{1 i_s},...,a′_{m i_s})   (s = 1,2,...,p)

be one of them. Then the system of equations

a′_{j i_1}x_{i_1} + ... + a′_{j i_p}x_{i_p} = a_j + u_j   (j = 1,2,...,m),

and hence the system of inequalities

a′_{j i_1}x_{i_1} + ... + a′_{j i_p}x_{i_p} − a_j ≤ 0   (j = 1,2,...,m),

must have at least one nonnegative solution. In view of the notation (31), and by our assumption about the absence of zero solutions of system (2), it follows that the system

a_{j i_1}x_{i_1} + ... + a_{j i_p}x_{i_p} − a_j ≤ 0   (j = 1,2,...,m),   (32)
being, obviously, a proper section of system (2), has a nonzero solution which is nonnegative relative to those of the unknowns x_{k_1},...,x_{k_ℓ} that are included in it. Clearly, the case where it includes none of these unknowns is not excluded. From the proper sections of system (2) that possess this property, let us take one which includes the smallest number of unknowns. Without loss of generality, we may assume that system (32) is a section of this type. If it does not include any of the unknowns x_{k_1},...,x_{k_ℓ}, then our proposition is proved, since none of the solutions of the chosen system (32) (and in particular none of its nodal solutions) may have any zero coordinates. In the opposite case, let x_{i_1},...,x_{i_q} (q ≤ p) denote those of the unknowns x_{k_1},...,x_{k_ℓ} which are included in system (32) (it is obvious that no generality is lost by this), and consider the system

a_{j i_1}x_{i_1} + ... + a_{j i_p}x_{i_p} − a_j ≤ 0   (j = 1,2,...,m),
−x_{i_s} ≤ 0   (s = 1,2,...,q).                                        (33)

In view of the choice of system (32), this system is consistent. But then, by Theorem 1.5, it possesses at least one node Δ. If at least one of the rows of this node has common elements with a row of the matrix of the system

−x_{i_s} ≤ 0   (s = 1,2,...,q),

then the system we are considering has a solution in which at least one of the coordinates x_{i_s} (s = 1,2,...,q) is zero. But this is not possible, since it would mean that there is a proper section of (2) with the stated property that includes a smaller number of unknowns than system (32), which contradicts our choice. Thus the node Δ
is a minor of the matrix of system (32) and hence is a node of that system (since all of its inequalities are included in (33), its node is the determinant corresponding to the node Δ). From the fact that the nodal subsystem of (32) defined by Δ is also a nodal subsystem of (33), it follows that the nodal solution of system (32) (defined by Δ) is nonnegative relative to the unknowns x_{i_1},...,x_{i_q}. Since all coordinates of that solution are different from zero (this follows from our choice of (32)), this proves the necessity part.

COROLLARY 1.2. If system (2) of nonzero rank has no zero solutions, then it has a positive (negative) solution iff at least one of its proper sections has a strictly positive (strictly negative) nodal solution.

DEFINITION 1.16. A nonzero minor Δ of the matrix A of system (2) is said to be a nodal minor of that system if the ratios, to it, of the determinants obtained by bordering it with elements of arbitrary rows of A and with elements of the column of constants a_j are nonnegative. A nodal minor Δ of system (2) is said to be nonnegatively (nonpositively) oriented if the ratios, to it, of the determinants obtained from it by replacing one of its columns by elements of the column of constants a_j are nonnegative (nonpositive).

THEOREM 1.9. Let the linear inequality system (2) of nonzero rank have no zero solutions. A necessary and sufficient condition for the system to have a positive (negative) solution is that at least one of the minors of the matrix of the system is a nonnegatively (nonpositively) oriented nodal minor of that system.

Necessity follows from Corollary 1.2. Let us prove sufficiency. If

Δ = | a_{j_1 i_1} ... a_{j_1 i_p} |
    | ...                         | ≠ 0
    | a_{j_p i_1} ... a_{j_p i_p} |

is a nonnegatively (nonpositively) oriented nodal minor of system (2) (according to the conditions of the theorem), then there exists a solution of the equation system

a_{j_k i_1}x_{i_1} + ... + a_{j_k i_p}x_{i_p} − a_{j_k} = 0   (k = 1,2,...,p).   (34)
Augmenting that solution with zeros up to a solution of system (2), we get a positive (negative) solution of the latter. Indeed, the determinant Δ is, obviously, a node of the system

a_{j i_1}x_{i_1} + ... + a_{j i_p}x_{i_p} − a_j ≤ 0   (j = 1,2,...,m).

Let us note that Δ ≠ 0 implies the consistency of system (34) and the uniqueness of its solution. This solution could not be zero, for then we would have

−a_j ≤ 0   (j = 1,2,...,m),

and system (2) would have a zero solution, contradicting our hypothesis. It is not hard to convince oneself that the solution is positive (negative) by solving system (34) by Cramer's rule and noting the signs of the ratios of determinants used by that formula,
that are given in Definition 1.16. Augmenting our solution of system (34) with zeros up to a solution of system (2), we thus get a positive (negative) solution of the latter.

In addition to Theorem 1.9, we note that the question of the existence of a positive (negative) solution for a linear inequality system that has a zero solution, together with the question of the existence of a strictly positive (strictly negative) solution for an arbitrary system of linear inequalities, reduces to the question of the existence of nonnegative (nonpositive) solutions of systems of linear inequalities and, actually, to Theorem 1.9. Indeed, this follows from the following propositions.

1. System (2) has at least one positive solution iff among the numbers 1,2,...,n there exists at least one number k_0 such that the system

a_{j1}x_1 + ... + a_{jn}x_n − a_j x_{n+1} − a_j + a_{jk_0} ≤ 0   (j = 1,2,...,m)

has a nonnegative solution. For the case of negative solutions, that system is replaced by the system

a_{j1}x_1 + ... + a_{jn}x_n + a_j x_{n+1} − a_j − a_{jk_0} ≤ 0   (j = 1,2,...,m).
has at least one nonnegative solution. For the case of strictly negative solutions, that system is replaced by aj1 x1 + ::: + ajn xn + aj xn+1
aj
n X
aji
0
(j = 1; 2; :::; m):
i=1
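The converse direction of Proposition 2 amounts to a concrete map from a nonnegative solution of the augmented system back to a strictly positive solution of (2). A sketch in exact arithmetic for a hypothetical one-inequality system (the data and helper names are ours, with the +1 shift as in the converse step of the proof):

```python
from fractions import Fraction

def back_map(sol):
    """Map a nonnegative solution (x_1,...,x_{n+1}) of the augmented system
    of Proposition 2 to the candidate x''_i = (x_i + 1)/(x_{n+1} + 1),
    a strictly positive solution of system (2)."""
    *x, t = sol
    return [Fraction(xi + 1, t + 1) for xi in x]

# hypothetical system (2): x1 + x2 - 2 <= 0, i.e. A = [[1, 1]], a = [2], n = 2;
# its augmented system reads x1 + x2 - 2*x3 - 2 + 2 <= 0
A, a = [[1, 1]], [2]
x_aug = (0, 0, 1)               # a nonnegative solution of the augmented system
x = back_map(x_aug)
print(x)
assert all(xi > 0 for xi in x)  # strictly positive ...
assert sum(c * xi for c, xi in zip(A[0], x)) - a[0] <= 0  # ... and solves (2)
```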
PROOF OF PROPOSITION 1. Indeed, let (x⁰_1,...,x⁰_n) be a positive solution of system (2), let x⁰_{k_0} be one of its positive coordinates, and let t_0 ∈ P be an element with t_0 > 1/x⁰_{k_0}. Then, in view of the relations

a_{j1}(t_0 + 1)x⁰_1 + ... + a_{jk_0}((t_0 + 1)x⁰_{k_0} − 1) + ... + a_{jn}(t_0 + 1)x⁰_n − a_j t_0 − a_j + a_{jk_0} ≤ 0   (j = 1,2,...,m),

which are obvious for t_0 > 0, the elements

x′_i = (t_0 + 1)x⁰_i   (i = 1,2,...,n; i ≠ k_0),
x′_{k_0} = (t_0 + 1)x⁰_{k_0} − 1,
x′_{n+1} = t_0

constitute a nonnegative, even a positive, solution (x′_1,...,x′_{n+1}) of the first of the two systems of Proposition 1. Conversely, if (x_1,...,x_{n+1}) is a nonnegative solution of that system, then the elements

x″_i = x_i/(x_{n+1} + 1)   (i = 1,2,...,n; i ≠ k_0),
x″_{k_0} = (x_{k_0} + 1)/(x_{n+1} + 1)

constitute a positive solution (x″_1,...,x″_n) of system (2) with a positive coordinate x″_{k_0}.

PROOF OF PROPOSITION 2. Indeed, let (x⁰_1,...,x⁰_n) be a strictly positive solution of system (2). If t_0 is an element of P satisfying the inequalities t_0 >
1/x⁰_i   (i = 1,2,...,n),

then, in view of the relations

a_{j1}((t_0 + 1)x⁰_1 − 1) + ... + a_{jn}((t_0 + 1)x⁰_n − 1) − a_j t_0 − a_j + Σ_{i=1}^{n} a_{ji} ≤ 0   (j = 1,2,...,m),

which are obvious for t_0 > 0, the elements

x′_i = (t_0 + 1)x⁰_i − 1   (i = 1,2,...,n)

and

x′_{n+1} = t_0

constitute a nonnegative (even a strictly positive) solution (x′_1,...,x′_{n+1}) of the first of the two systems of Proposition 2. Conversely, if (x_1,...,x_{n+1}) is some nonnegative solution of that system, then the elements

x″_i = (x_i + 1)/(x_{n+1} + 1)   (i = 1,2,...,n)

constitute a strictly positive solution (x″_1,...,x″_n) of system (2).

Let us now go on to consider the question of the existence of positive and negative solutions of the linear equation system

a_{j1}x_1 + ... + a_{jn}x_n − a_j = 0   (j = 1,2,...,m)   (35)
in the space P^n. We introduce:

DEFINITION 1.17. We say that a nonzero minor Δ of the matrix A of system (35) is a nodal minor of that system if all of the ratios, to it, of the determinants obtained by bordering it with elements of rows of A and elements of the column of constants a_j are zero. A nodal minor Δ of system (35) is called nonnegatively (nonpositively) oriented if the ratios, to it, of the determinants obtained from it by replacing one of its columns by elements of the column of constants a_j are nonnegative (nonpositive).

It is not hard to see that a nonnegatively (nonpositively) oriented nodal minor of system (35) is a nonnegatively (nonpositively) oriented nodal minor of the equivalent linear inequality system

a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0,
−a_{j1}x_1 − ... − a_{jn}x_n − (−a_j) ≤ 0   (j = 1,2,...,m).

Using Theorem 1.9 here, we get the following characterization of the existence of positive (negative) solutions of a system of linear equations.

THEOREM 1.10. If the linear equation system (35) of nonzero rank has no zero solution,
then it has a positive (negative) solution iff at least one minor in the matrix of system (35) is a nonnegatively (nonpositively) oriented nodal minor of that system.

We note here that the definitions of positive (negative) and nonnegative (nonpositive) solutions of linear equation systems are the same as the definitions of those terms for linear inequality systems. In addition to Theorem 1.10 we establish the next proposition.

If the linear equation system (35) of rank r > 0 has at least one nodal minor that is nonnegatively (nonpositively) oriented, then it has a nodal minor of order r that is nonnegatively (nonpositively) oriented.

PROOF. We shall prove the proposition for the case where system (35) has a nodal minor

Δ = | a_{j_1 i_1} ... a_{j_1 i_p} |
    | ...                         | ≠ 0
    | a_{j_p i_1} ... a_{j_p i_p} |

that is nonnegatively oriented. Since the system

a_{j i_1}x_{i_1} + ... + a_{j i_p}x_{i_p} − a_j = 0   (j = 1,2,...,m)

defined by it obviously has a nonnegative solution, it follows that each system

a_{j i_1}x_{i_1} + ... + a_{j i_r}x_{i_r} − a_j = 0   (j = 1,2,...,m)   (36)

containing it whose rank is r (r ≥ p) has a nonnegative solution. But then each subsystem of such a system has a nonnegative solution and hence, in particular, so does any subsystem of rank r with r equations. Thus, the determinant of any of these subsystems satisfies the condition that the ratios, to it, of the determinants obtained by replacing any of its columns by the column of constants are nonnegative. From here it is not hard to see that each nonzero minor of order r of system (36) is a nodal minor of the system satisfying the requirement of the proposition.

REMARK. For an arbitrary system of linear inequalities, the analogous proposition is not true. Indeed, it is not hard to show that the system

x_1 + x_2 − 1 ≤ 0,
−4x_1 − 3x_2 + 2 ≤ 0

of rank 2 has a nonnegatively oriented nodal minor of order 1 and does not have a nonnegatively oriented nodal minor of order 2.

In addition to Theorem 1.10 we note also that the problem of the existence of positive (negative) solutions of linear equation systems (35) that possess zero solutions, and the problem of the existence of strictly positive (strictly negative) solutions for an arbitrary linear equation system (35), reduce to the problem of the existence of nonnegative (nonpositive) solutions of systems of linear equations and, in fact, to Theorem 1.10. Indeed, this is made possible through the following two propositions.

1. For system (35) to have at least one positive solution, it is necessary and sufficient that among the numbers 1,2,...,n there exists a number k_0 such that the system

a_{j1}x_1 + ... + a_{jn}x_n − a_j x_{n+1} − a_j + a_{jk_0} = 0   (j = 1,2,...,m)
has a nonnegative solution. For the case of negative solutions, that system is replaced by the system

a_{j1}x_1 + ... + a_{jn}x_n + a_j x_{n+1} − a_j − a_{jk_0} = 0   (j = 1,2,...,m).

2. For system (35) to have at least one strictly positive solution, it is necessary and sufficient that the system

a_{j1}x_1 + ... + a_{jn}x_n − a_j x_{n+1} − a_j + Σ_{i=1}^{n} a_{ji} = 0   (j = 1,2,...,m)

has at least one nonnegative solution. For the case of strictly negative solutions, that system is replaced by the system

a_{j1}x_1 + ... + a_{jn}x_n + a_j x_{n+1} − a_j − Σ_{i=1}^{n} a_{ji} = 0   (j = 1,2,...,m).
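The determinant conditions of Definitions 1.16 and 1.17 are mechanical to verify for small systems. Below is a brute-force sketch in exact arithmetic, applied to the two-inequality system of the Remark above (with the signs as we read them; all helper names are ours):

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion (exact; fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k]
               * det([row[:k] + row[k + 1:] for row in M[1:]])
               for k in range(len(M)))

def nonneg_oriented_nodal_minor(A, a, rows, cols):
    """Check Definition 1.16 for the system sum_k A[j][k]*x_k - a[j] <= 0:
    is the minor on (rows, cols) a nonnegatively oriented nodal minor?"""
    D = det([[A[j][k] for k in cols] for j in rows])
    if D == 0:
        return False
    # nodal: border with each row of A and the column of constants
    for j in range(len(A)):
        B = [[A[i][k] for k in cols] + [a[i]] for i in rows]
        B.append([A[j][k] for k in cols] + [a[j]])
        if Fraction(det(B), D) < 0:
            return False
    # orientation: replace each column of the minor by the constants
    for c in range(len(cols)):
        R = [[a[i] if t == c else A[i][k] for t, k in enumerate(cols)]
             for i in rows]
        if Fraction(det(R), D) < 0:
            return False
    return True

# the system of the Remark: x1 + x2 - 1 <= 0, -4x1 - 3x2 + 2 <= 0
A, a = [[1, 1], [-4, -3]], [1, -2]
order1 = any(nonneg_oriented_nodal_minor(A, a, (j,), (k,))
             for j in range(2) for k in range(2))
order2 = nonneg_oriented_nodal_minor(A, a, (0, 1), (0, 1))
print(order1, order2)
```

The run agrees with the Remark: an order-1 nonnegatively oriented nodal minor exists, while the single order-2 minor fails the orientation test.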
These propositions are proved in a way that is analogous to the proofs of the propositions following Theorem 1.9. From Theorem 1.10 and the first proposition following it we have

COROLLARY 1.3. For system (35) of linear equations, with rank r > 0, to have at least one nonnegative (nonpositive) solution, it is necessary and sufficient that at least one of the systems of rank r, obtained from it by eliminating the terms involving n − r of its unknowns, has a nonnegative (nonpositive) solution.

REMARK. This proposition may be obtained as a direct consequence of Lemma 1.2.

From Corollary 1.3 follows the next characterization of the existence of strictly positive (strictly negative) solutions of systems of linear equations with a_j = 0 (j = 1,2,...,m) (of systems of homogeneous linear equations).

LEMMA 1.5. For a homogeneous linear equation system (in P^n) of rank r > 0 to have (for r < n) at least one strictly positive (strictly negative) solution, it is necessary and sufficient that for each unknown x_i included with a nonzero coefficient in at least one of its equations, at least one of the systems of rank r obtained from our system by eliminating all the terms containing n − r − 1 unknowns distinct from x_i has a positive (negative) solution with a nonzero x_i coordinate.

PROOF. We shall present a proof for the case of negative solutions. Let (x⁰_1,...,x⁰_n) be a strictly negative solution of the system

a_{j1}x_1 + ... + a_{jn}x_n = 0   (j = 1,2,...,m).

Let x_i be one of the unknowns whose coefficients a_{ji} (j = 1,2,...,m) are not all zero. Then the system

a_{j1}x_1 + ... + a_{j,i−1}x_{i−1} + a_{j,i+1}x_{i+1} + ... + a_{jn}x_n − a_{ji} = 0   (j = 1,2,...,m)

possesses the nonpositive (in fact strictly negative) solution

(x⁰_1/|x⁰_i|, ..., x⁰_{i−1}/|x⁰_i|, x⁰_{i+1}/|x⁰_i|, ..., x⁰_n/|x⁰_i|).

Since its rank is, obviously, r, it follows from Corollary 1.3 that some system

a_{j i_1}x_{i_1} + ... + a_{j i_r}x_{i_r} − a_{ji} = 0   (i ≠ i_1,...,i_r; j = 1,2,...,m)

of rank r has a nonpositive solution. But then the system

a_{j i_1}x_{i_1} + ... + a_{j i_r}x_{i_r} + a_{ji}x_i = 0   (j = 1,2,...,m)   (37)
would have a negative solution with an x_i coordinate that is distinct from zero. This establishes the necessity part of the proposition. The sufficiency is proved as follows. For each i among the numbers for which the coefficients a_{ji} (j = 1,2,...,m) are not all zero, let (37) be the corresponding system of rank r that has a negative solution with x_i ≠ 0. Substitute, in each of these systems, one of its negative solutions with x_i ≠ 0. Then, for each j = 1,2,...,m, add the resulting relations, combining terms with the same a_{ji}, e.g.,

a_{ji}x¹_i + a_{ji}x²_i = a_{ji}(x¹_i + x²_i) = a_{ji}x_i.

From these operations it follows that the resulting homogeneous system of linear equations has a strictly negative solution. Lemma 1.5 is proved. The proof for the case of positive solutions is the same as the above.

DEFINITION 1.18. A system of homogeneous linear equations is said to be simple if the number of its unknowns is greater than its rank.

THEOREM 1.11. A system of homogeneous linear equations of rank r > 0 has a strictly positive (strictly negative) solution iff among the simple systems determined by it (by eliminating all terms containing certain unknowns) there exists a system with a strictly positive (strictly negative) solution whose augmented matrix has rank r.

PROOF. We shall prove the theorem for the case of strictly positive solutions. Consider an arbitrary simple system that has a positive solution and the system S of linear homogeneous equations defined by it. In these systems, eliminate all terms involving unknowns that correspond to zero coordinates of the positive solution vector. The new simple system thus obtained has a strictly positive solution. If the system S has a strictly positive solution and its rank is r, then, by Lemma 1.5, its augmented matrix would have rank r; the necessity part of the theorem is proved. Now suppose a homogeneous linear equation system

a_{j1}x_1 + ... + a_{jn}x_n = 0   (j = 1,2,...,m)   (38)
of rank r meets the requirements of the theorem, and let the first k columns of its matrix constitute the augmented matrix referred to in the theorem. Then the system

a_{j1}x_1 + ... + a_{jk}x_k = 0   (j = 1,2,...,m)

must have at least one strictly positive solution. Let

a_{j1}x_1 + ... + a_{jℓ}x_ℓ = 0   (j = 1,2,...,m)   (39)

be a system having a strictly positive solution with the largest ℓ (ℓ ≥ k). We shall show that it coincides with system (38). Indeed, suppose this is not the case, i.e., ℓ < n. Since systems (38) and (39) have one and the same rank, the system

a_{j1}x_1 + ... + a_{jℓ}x_ℓ = −a_{j,ℓ+1}   (j = 1,2,...,m)   (40)

is consistent. If (x̄_1,...,x̄_ℓ) is one of the solutions of system (40) and (x⁰_1,...,x⁰_ℓ) is a strictly positive solution of (39), then there exists t > 0 such that

(x̄_1 + tx⁰_1, ..., x̄_ℓ + tx⁰_ℓ)

is a strictly positive solution of system (40). But this means that the system

a_{j1}x_1 + ... + a_{jℓ}x_ℓ + a_{j,ℓ+1}x_{ℓ+1} = 0   (j = 1,2,...,m)

has the strictly positive solution

(x̄_1 + tx⁰_1, ..., x̄_ℓ + tx⁰_ℓ, 1).

However, this is contrary to the choice of system (39). Consequently, ℓ = n, and hence system (38) has a strictly positive solution. The above theorem is due to Gummer [1].

REMARK 1. A system of homogeneous linear equations of rank r > 0 (for r < n) has a positive (negative) solution iff at least one of the simple systems determined by it has a strictly positive (strictly negative) solution. Indeed, let (x⁰_1,...,x⁰_n) be a positive solution of the homogeneous linear equation system (38). Without loss of generality, we may assume that x⁰_n > 0. Then the system

a_{j1}x_1 + ... + a_{j,n−1}x_{n−1} + a_{jn} = 0   (j = 1,2,...,m)
would have the nonnegative solution

(x⁰_1/|x⁰_n|, ..., x⁰_{n−1}/|x⁰_n|).

Consequently, the column vector (−a_{jn}) (j = 1,2,...,m) can be expressed linearly with nonnegative coefficients in terms of the column vectors (a_{j1}), ..., (a_{j,n−1}) (j = 1,2,...,m). Since the rank of this system is r and hence nonzero, using Lemma 1.2 we easily obtain the necessity part of the proposition. Sufficiency is obvious. For the case of negative solutions, the same method of proof may be applied.

REMARK 2. It should not be supposed that a system of linear homogeneous equations of rank r > 0 that has a strictly positive (strictly negative) solution must have at least one simple system of rank r with a strictly positive (strictly negative) solution (such an assertion is, e.g., implied by the basic theorem of Mikhelson's [1] article). Indeed, the system

x_1 − x_4 = 0,   x_2 − x_3 = 0

of rank 2 has the strictly negative solution (−1, −1, −1, −1), but none of the simple systems of rank 2 determined by it has a strictly negative solution.

§7. CONDITIONS FOR THE NONDEGENERACY OF THE SOLUTION POLYHEDRON OF A SYSTEM OF LINEAR INEQUALITIES
DEFINITION 1.19. We say that system (1) is stably consistent, or simply stable, if it has a solution that satisfies all of its inequalities as strict inequalities (a stable solution). If system (1) becomes stably consistent after we remove from it those inequalities

f_j(x) − a_j ≤ 0

with null forms f_j(x) and zero constants a_j (null inequalities), then we say that system (1) is nondegenerate. The set of stable solutions of that system is called the interior region of the polyhedron M. A consistent but not stably consistent system (1) is said to be unstably consistent or simply unstable. Analogously, a solution of system (1) that is not stable is called an unstable solution of it.

In this section we present some conditions for the nondegeneracy of the solution polyhedron of system (1) on P^n (P is an arbitrary ordered field) that follow from Theorem 1.5. An arbitrary system of this type has the form (2). In the special case of system (2) on R^n, the definition of nondegeneracy of the polyhedron of solutions of the system is equivalent to the following definition: the polyhedron of solutions of system (2) on R^n is said to be nondegenerate if it has an interior point (relative to R^n). Since the problem of nondegeneracy conditions for a polyhedron of solutions reduces to the problem of stable consistency of the system, we shall present our conditions in terms of the latter. We first note the following almost trivial proposition.

LEMMA 1.6. A system of linear homogeneous inequalities

f_j(x) ≤ 0   (j = 1,2,...,m)   (41)

is stably consistent iff the system

f_j(x) + 1 ≤ 0   (j = 1,2,...,m)   (42)

is consistent.
Indeed, if system (41) is stably consistent, then there exists a solution x0 of it such that fj(x0) < 0 (j = 1, 2, ..., m). But then the system

fj(x) − t0 ≤ 0 (j = 1, 2, ..., m),

where t0 is the greatest of the elements fj(x0) (j = 1, 2, ..., m), has the solution x = x0. Hence system (42) has the solution (−1/t0)x0 (note that t0 < 0). Conversely, if system (42) is consistent and x0 is one of its solutions, then from the inequalities fj(x0) + 1 ≤ 0 (j = 1, 2, ..., m) it follows that x0 is a stable solution of system (41), and the lemma is proved.

REMARK. System (1) is stably consistent iff the corresponding homogeneous system

fj(x) − aj t ≤ 0 (j = 1, 2, ..., m),  −t ≤ 0

is stably consistent. Thus, by Lemma 1.6, system (1) is stably consistent iff the system

fj(x) − aj t + 1 ≤ 0 (j = 1, 2, ..., m),  −t + 1 ≤ 0

is consistent.

LEMMA 1.7. If an extremal subsystem of a system (1) (of nonzero rank) is stably consistent, then so is the whole system (1).

PROOF. Let

fjk(x) − ajk ≤ 0 (k = 1, 2, ..., ℓ) (43)
be a stably consistent extremal subsystem of system (1), let x0 be a solution of system (1) that turns the inequalities of that subsystem into equations, and let x′ be a stable solution of the subsystem. Then all elements of the ray

x(t) = x0 + t(x′ − x0) (t ≥ 0)

that are distinct from x0 are, obviously, stable solutions of subsystem (43). On the other hand, in view of the extremality of subsystem (43), either all those inequalities of (1) that are not included in it hold as strict inequalities at x = x0, or system (1) coincides with (43) and hence is stably consistent. In the former case, applying Lemma 1.4 to the system consisting of those inequalities, we conclude that for some t = t′ > 0, x(t′) is a stable solution of that system. Since x(t) is a stable solution of (43) for any t > 0, x(t′) is a stable solution of system (1), which is what we were to show.

REMARK. Since every Δ-subsystem of a consistent system (2) (of nonzero rank) is an extremal subsystem (see §4), Lemma 1.7, for systems of type (2), may be stated as follows: if some Δ-subsystem of system (2) is stably consistent, then so is the whole system (2).
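Lemma 1.6 makes stable consistency effectively decidable: one only has to test the ordinary consistency of system (42). As an illustration (our own sketch, not part of the original text), the following fragment decides consistency of a rational system by Fourier-Motzkin elimination — a method independent of the nodal technique used in this chapter — and applies it to system (42) for two small systems of type (41):

```python
from fractions import Fraction as F

def fm_consistent(rows):
    """Decide over the rationals whether a1*x1 + ... + an*xn + c <= 0
    (one row (a1, ..., an, c) per inequality) has a solution, by
    Fourier-Motzkin elimination of one variable at a time."""
    rows = [list(map(F, r)) for r in rows]
    n = len(rows[0]) - 1
    for _ in range(n):
        pos = [r for r in rows if r[0] > 0]
        neg = [r for r in rows if r[0] < 0]
        rows = [r[1:] for r in rows if r[0] == 0]
        # each pair with opposite leading signs yields one new inequality
        rows += [[p[0] * qk - q[0] * pk for pk, qk in zip(p[1:], q[1:])]
                 for p in pos for q in neg]
        if not rows:
            return True            # no constraints left: anything works
    return all(r[0] <= 0 for r in rows)   # only constant rows c <= 0 remain

# Lemma 1.6: x1 - x2 <= 0, -x1 <= 0 is stably consistent (e.g. x = (1, 2)),
# and indeed the corresponding system (42) is consistent:
print(fm_consistent([(1, -1, 1), (-1, 0, 1)]))   # True
# x1 <= 0, -x1 <= 0 forces x1 = 0, so its system (42) is inconsistent:
print(fm_consistent([(1, 1), (-1, 1)]))          # False
```

Exact rational arithmetic is used so that the sign tests are not disturbed by rounding; the example systems are ours.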
THEOREM 1.12. If system (2) of nonzero rank is stably consistent, then it has a node Δ such that, after replacing every element of the right-hand column of each of its zero-valued companion determinants by one, those determinants either remain zero-valued or assume the sign opposite to that of Δ. Conversely, if a system (2) has such a node, then it is stably consistent.

PROOF. Let Δ ≠ 0 be a node of system (2), formed from the rows j1, ..., jr and the columns i1, ..., ir of its matrix, and let

ajk1 x1 + ... + ajkn xn − ajk ≤ 0 (k = 1, 2, ..., ℓ; ℓ ≥ r) (44)

be the Δ-subsystem defined by it. If (x1^0, ..., xn^0) is a solution of system (2) that turns all the inequalities of that subsystem into equations, then from that subsystem we get

ajk1(x1 − x1^0) + ... + ajkn(xn − xn^0) = ajk1 x1 + ... + ajkn xn − ajk ≤ 0 (k = 1, 2, ..., ℓ).

If system (2) is stably consistent, then so is subsystem (44) and, with it, the homogeneous system just written in the unknowns x̄s = xs − xs^0. But then, by Lemma 1.6, the system

ajk1 x̄1 + ... + ajkn x̄n + 1 ≤ 0 (k = 1, 2, ..., ℓ) (45)
is consistent. Thus it has a node Δ′. It is not hard to show that Δ′ is also a node of system (2). Indeed, the subsystem of system (44) corresponding to Δ′ and the subsystem of system (2) consisting of the inequalities of (44) that correspond to the rows of the determinant Δ′ have one and the same nodal solution; since the first of these is a nodal subsystem of system (44), the second is a nodal subsystem of system (2). But then the nonzero minor Δ′ of order r of the matrix of the latter is a node of system (2), and the Δ′-subsystem of system (2) corresponding to it coincides with (44) (in view of the maximality of the latter). Hence the determinants companion to Δ′ in system (2) that are zero-valued are the same as those companion to Δ′ in system (44). Substituting the element 1 of the field P for each element of the right-hand column of all of these determinants, we get the corresponding determinants companion to Δ′ in system (45); since system (45) is consistent and Δ′ is a node of it, these either vanish or have the sign opposite to that of Δ′ (Theorem 1.5). This proves the necessity part.

Assume now that system (2) possesses a node with the property prescribed by the theorem. To avoid introducing new notation, we take the above-defined Δ′ to be that node and let system (44) be the Δ′-subsystem of system (2). Since Δ′ is then, obviously, a node of system (45), it follows from Theorem 1.5 that the latter is consistent. By Lemma 1.6, this implies the stable consistency of system (44); by the remark to Lemma 1.7, this in turn implies the stable consistency of system (2). The theorem is proved.

REMARK. In view of the proof of Theorem 1.12, it is not difficult to see the validity of the following proposition: if system (2) of nonzero rank is stably consistent, then so is each of its principal sections.

Indeed, by the results of §4, the matrix of each principal section of (2) contains at least
one of its nodes. Thus we may take a node of a given principal section S of system (2) for the node Δ of the proof of Theorem 1.12. Substituting one for each of the aj's (j = 1, 2, ..., m) of S, we get a principal section of system (45). Since system (45) is consistent, the matrix of that section contains a node of system (45), and the node Δ′ of the proof may be taken from it. The determinants companion to Δ′ in system (2) are then companion determinants of Δ′ in S (since Δ′ is a node of S). Repeating the arguments of the first part of the proof of Theorem 1.12, we conclude that the latter determinants satisfy the requirement of the theorem; by the second part of the theorem, it follows that the system S is stably consistent.

In conclusion of the present section we consider the following problem. A consistent system (1) (and, in particular, (2)) is given; can we assert that it has a stable solution, i.e., that it is stably consistent, and, if so, how can such a solution be found? We shall study the problem in the context of a given boundary solution x0 of system (1). To solve it, it is necessary to introduce:

LEMMA 1.8. Let x0 be a boundary solution of system (1). If the subsystem consisting of all those inequalities of (1) that hold as equations at x = x0 is stably consistent, then so is the whole system (1).

PROOF. In the proof of Lemma 1.7, denote a given boundary solution by x0 and let

fjk(x) − ajk ≤ 0 (k = 1, 2, ..., ℓ) (46)

denote the subsystem consisting of all those inequalities of system (1) that hold as equations at x = x0. The proof of Lemma 1.7 then becomes a proof of Lemma 1.8.

We now proceed to the solution of our problem. Let x0 be a boundary solution of system (1) and let (46) be the subsystem determined by it as in Lemma 1.8. The stable consistency of that subsystem is, by Lemma 1.8, a sufficient condition for the stable consistency of system (1); clearly, the condition is also necessary. Thus, by Lemma 1.6, we may assert that system (1) is stably consistent iff the system

fjk(x̄) + 1 ≤ 0 (k = 1, 2, ..., ℓ), (47)

where x̄ = x − x0, is consistent.
Furthermore, we shall show that to a boundary solution x0 of a stably consistent system (1) there corresponds at least one stable solution of the system. Let x̄′ be a solution of system (47). Introduce the notation

x(ε) = x0 + ε x̄′,

where ε is an arbitrary nonnegative element of P. It is not hard to show that x(ε) is a stable solution of system (46) for ε > 0. If ℓ = m, i.e., if system (46) coincides with system (1), then we are done. Suppose ℓ ≠ m. For ε = 0, x(ε) (x(0) = x0) is obviously a stable solution of the system

fj(x) − aj ≤ 0 (j = 1, 2, ..., m; j ≠ j1, ..., jℓ). (48)

Thus, by Lemma 1.4, there exists ε = ε0 > 0 such that x(ε0) is also a stable solution of that system; but then x(ε0) is a stable solution of system (1). Substituting x = x(ε) in system (48), we obtain a system

ε fj(x̄′) + fj(x0) − aj ≤ 0 (j = 1, 2, ..., m; j ≠ j1, ..., jℓ)

in the one unknown ε, which has ε = ε0 as a positive stable solution. It is not hard to see that for any positive stable solution ε = ε′ of that system, x(ε′) is a stable solution of system (48) and thus of system (1).
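The passage from a boundary solution x0 and a solution x̄′ of system (47) to a stable solution x(ε) = x0 + ε x̄′ is easy to carry out numerically. In the following sketch (ours, not the author's; the direction x̄′ is supplied by hand rather than computed from (47)), ε is halved until all inequalities hold strictly — Lemma 1.4 guarantees this happens for a stably consistent system:

```python
from fractions import Fraction as F

def is_stable(A, a, x):
    """True if x satisfies every inequality sum_i A[j][i]*x[i] - a[j] <= 0
    strictly (a stable solution in the sense of Definition 1.19)."""
    return all(sum(F(c) * xi for c, xi in zip(row, x)) - F(aj) < 0
               for row, aj in zip(A, a))

def stable_from_boundary(A, a, x0, xbar):
    """Starting from a boundary solution x0 and a direction xbar (a solution
    of system (47), supplied by hand here), halve eps until x0 + eps*xbar is
    stable; Lemma 1.4 guarantees termination for a stably consistent system."""
    eps = F(1)
    while True:
        x = [xi + eps * d for xi, d in zip(x0, xbar)]
        if is_stable(A, a, x):
            return x
        eps /= 2

# x1 <= 1, x2 <= 1, -x1 - x2 <= 0, with boundary solution x0 = (1, 1)
# (the first two inequalities hold there as equations):
A, a = [(1, 0), (0, 1), (-1, -1)], (1, 1, 0)
print(stable_from_boundary(A, a, (F(1), F(1)), (F(-1), F(-1))))
```

The run returns the stable solution (1/2, 1/2): halving once from ε = 1 (which lands on the unstable point (0, 0)) already produces a point satisfying all three inequalities strictly.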
§8. CONDITIONS FOR THE UNBOUNDEDNESS OF SOLUTION POLYHEDRA
OF SYSTEMS OF LINEAR INEQUALITIES.
DEFINITION 1.20. Let h = (h1, ..., hn) be a nonzero element of P^n. The set whose elements are ph = (ph1, ..., phn), where p is any element of P, is called the axis of the space P^n defined by the element h. A set M ⊆ P^n is said to be unbounded with respect to the axis defined by an element h of P^n if for each positive element a ∈ P there exists an element (x1, ..., xn) in M such that

|h1 x1 + ... + hn xn| > a.

The set M is said to be unbounded in the positive (negative) direction of that axis if for each positive a ∈ P there is an element of M with h1 x1 + ... + hn xn > a (respectively, h1 x1 + ... + hn xn < −a). In particular, for the coordinate axis Xi these conditions read |xi| > a, xi > a (xi < −a).

LEMMA 1.9. If all solutions of a consistent system (1) satisfy the inequality |f(x)| ≤ c (c ∈ P) with a linear function f(x), then all solutions of the linear homogeneous system

fj(x) ≤ 0 (j = 1, 2, ..., m) (49)

corresponding to it satisfy the equation f(x) = 0.

PROOF. Let x̄′ be a solution of system (49) and let x0 be a solution of system (1). Then the element x(t) = x0 + t x̄′ is also a solution of system (1) for t ≥ 0. Hence, by the hypothesis of the lemma, we have

|f(x(t))| = |f(x0) + t f(x̄′)| ≤ c

and, consequently, the inequality

t |f(x̄′)| − |f(x0)| ≤ c.

If |f(x̄′)| > 0 then for

t = (|f(x0)| + c + 1) / |f(x̄′)| > 0

the latter inequality is violated, and we have a contradiction. Thus every solution x̄′ of (49) satisfies |f(x̄′)| ≤ 0, i.e., the equation f(x̄′) = 0, and the lemma is proved.
REMARK. If the set of solutions of system (2) is bounded with respect to the axis Xi, then all of its elements satisfy (for some c ≥ 0, c ∈ P) the inequality |xi| ≤ c. In view of the above lemma, it follows that the i-th coordinate of all solutions of the system

aj1 x1 + ... + ajn xn ≤ 0 (j = 1, 2, ..., m),

which corresponds to system (2), is zero. This means, in particular, that the solution set of the above system is then bounded with respect to the coordinate axis Xi.

THEOREM 1.13. Suppose the rank ri of the matrix Ai, obtained by eliminating the i-th column of the matrix A of a consistent system (2), is not zero. Then a necessary and sufficient condition for the unboundedness of the polyhedron M of system (2) with respect to the coordinate axis Xi of P^n is the existence in Ai of a nonzero minor Δ of order ri such that all of the determinants obtained by bordering Δ by elements of the i-th column and an arbitrary row
of A are of the same sign.

PROOF. Sufficiency: Let the matrix Ai contain a determinant Δ that satisfies the requirement of the theorem. Then Δ is a node of one of the systems

aj1 x1 + ... + aj,i−1 xi−1 + aj,i+1 xi+1 + ... + ajn xn + aji ≤ 0 (j = 1, 2, ..., m)

or

aj1 x1 + ... + aj,i−1 xi−1 + aj,i+1 xi+1 + ... + ajn xn − aji ≤ 0 (j = 1, 2, ..., m),

and hence one of them is consistent. Suppose, for instance, that the first of them is consistent and let (x1^0, ..., x(i−1)^0, x(i+1)^0, ..., xn^0) be one of its solutions. Then the element obtained from it by inserting 1 as the i-th coordinate is a solution of the system

aj1 x1 + ... + ajn xn ≤ 0 (j = 1, 2, ..., m).

But then the set of solutions of the latter system is unbounded with respect to the coordinate axis Xi. Hence, in view of the consequence of Lemma 1.9 formulated in the remark following it, the set M is unbounded with respect to that axis. This is what was to be shown.

Necessity: Suppose the polyhedron of system (2) is unbounded with respect to the axis Xi. Then it is unbounded in at least one of the two directions of that axis; let it be unbounded in the positive direction. In that case we may associate with each p ∈ P a solution (x1(p), ..., xn(p)) of system (2) such that xi(p) > p. But then, for each p ∈ P, the system
aj1 x1 + ... + aj,i−1 xi−1 + aj,i+1 xi+1 + ... + ajn xn − aj ≤ −aji xi(p) (j = 1, 2, ..., m) (50)

is consistent (since it has the solution formed from (x1(p), ..., xn(p)) by omitting the i-th coordinate) and hence, by Theorem 1.5, possesses at least one node. If Δ (of order h = ri, with columns i1, ..., ih ≠ i) is one of these nodes, then for at least one p ∈ P we have the relations

(1/Δ) · Dj(p) ≤ 0 (j = 1, 2, ..., m),

where Dj(p) denotes the determinant obtained by bordering Δ with the row of coefficients of the j-th inequality of system (50) and with the column of its right-hand sides, i.e., the column of the elements ajk − ajk,i xi(p) (k = 1, ..., h) and aj − aji xi(p).

Let us denote by P(Δ) the set of all elements p ∈ P for which these relations are satisfied. Since the matrix of system (50) determines only a finite number of minors Δ, at least one of the sets P(Δ) is not bounded from above. Indeed, otherwise there would exist an element p = p′ ∈ P such that, for every Δ, at least one of the relations fails; in view of Theorem 1.5, this would contradict the consistency of system (50) for p = p′. In order not to introduce more notation, we take Δ in the above relations to be a node with unbounded P(Δ). Expanding the last column, let us rewrite these relations in the form

(1/Δ) · Dj − (xi(p)/Δ) · Dj(i) ≤ 0 (j = 1, 2, ..., m),

where Dj is the determinant obtained by bordering Δ with the j-th row of A and the column of the elements aj, and Dj(i) is the determinant obtained by bordering Δ with the j-th row and the i-th column of A.

Note that the set of elements xi(p) with p ∈ P(Δ) is unbounded from above (this follows from the inequality xi(p) > p and from the unboundedness from above of the set P(Δ)). It then easily follows that all nonzero coefficients −Dj(i)/Δ of xi(p) in these relations must be negative. But then all the nonzero determinants Dj(i), being factors in these coefficients, have one and the same sign, as we were to prove.
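The sign criterion of Theorem 1.13 can be checked mechanically; only the coefficient matrix A is involved. The sketch below is ours (brute force over all minors; zero bordering determinants are disregarded, which is our reading of the criterion, and the degenerate all-zero case would need separate care):

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion (fine for the tiny matrices here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det([row[:k] + row[k + 1:] for row in M[1:]])
               for k in range(len(M)))

def unbounded_along_axis(A, i, ri):
    """Theorem 1.13, brute force: the polyhedron of a consistent system (2)
    with coefficient matrix A is unbounded along x_i iff A_i (A without
    column i) has a nonzero minor of order ri all of whose borderings by
    column i and a row of A share one sign (nonzero borderings only)."""
    cols = [c for c in range(len(A[0])) if c != i]
    for rset in combinations(range(len(A)), ri):
        for cset in combinations(cols, ri):
            if det([[A[r][c] for c in cset] for r in rset]) == 0:
                continue                      # not a candidate minor
            signs = set()
            for j in range(len(A)):           # border by row j and column i
                B = [[A[r][c] for c in cset] + [A[r][i]] for r in rset]
                B.append([A[j][c] for c in cset] + [A[j][i]])
                d = det(B)
                if d != 0:
                    signs.add(1 if d > 0 else -1)
            if len(signs) == 1:
                return True
    return False

# the cone x1 >= x2 >= 0 (-x1 + x2 <= 0, -x2 <= 0): unbounded along x1
print(unbounded_along_axis([[-1, 1], [0, -1]], 0, 1))                  # True
# the unit square (x1 <= 1, -x1 <= 0, x2 <= 1, -x2 <= 0): bounded along x1
print(unbounded_along_axis([[1, 0], [-1, 0], [0, 1], [0, -1]], 0, 1))  # False
```

The example systems and the helper names are ours, chosen only to exercise both outcomes of the criterion.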
REMARK 1. If ri = 0, then either all the coefficients of the unknowns of system (2) are zeros, or all the coefficients of the variables distinct from xi are zeros and at least one coefficient of xi is not zero. In the first case system (2) is not interesting (its set of solutions is unbounded with respect to each axis Xi). In the second case, a necessary and sufficient condition for the unboundedness of the polyhedron M with respect to the axis Xi is that the nonzero coefficients of xi in system (2) all have the same sign.

REMARK 2. In view of Theorem 1.5, Theorem 1.13 means that the polyhedron of solutions of system (2) is unbounded with respect to the axis Xi iff the polyhedron of solutions of the corresponding system

aj1 x1 + ... + ajn xn ≤ 0 (j = 1, 2, ..., m)

is unbounded with respect to that axis. (The sufficiency of that condition was noted in the remark following Lemma 1.9.) Indeed, in view of Theorem 1.5, Theorem 1.13 means that the polyhedron M is unbounded with respect to the axis Xi iff one of the systems introduced at the beginning of the proof of Theorem 1.13 is consistent or, equivalently, iff the above system has a solution with a nonzero i-th coordinate, i.e., when the set of solutions of that system is unbounded with respect to the axis Xi.

We also note that the validity of the following proposition follows from the latter formulation of Theorem 1.13 and from the results of §4.
Let at least one of the coefficients of the variable xi in a consistent system (2) be nonzero, and consider a principal section of system (2) that contains the i-th column of the matrix of system (2). The unboundedness with respect to the axis Xi of the polyhedron of that principal section follows from the unboundedness, with respect to that axis, of the polyhedron M of system (2); the converse is also valid. Furthermore, the unboundedness of the polyhedron M follows even from the unboundedness with respect to the axis Xi of the polyhedron of a single section of that type.

The analysis of the proof of Lemma 1.9 easily leads to the following analogous result.

LEMMA 1.10. If all the solutions of a consistent system (1) satisfy the inequality f(x) ≤ c (c ∈ P), then all solutions of the linear homogeneous system fj(x) ≤ 0 (j = 1, 2, ..., m) corresponding to it satisfy the inequality f(x) ≤ 0.
If the set of solutions of system (2) is bounded in the positive (negative) direction of the axis Xi, then all of its elements satisfy the relation xi ≤ c (respectively −xi ≤ c) for some c ≥ 0. In view of Lemma 1.10, this implies that the i-th coordinates of all solutions of the homogeneous linear inequality system corresponding to system (2) are nonpositive (nonnegative). Using this fact, it is easy to turn the proof of Theorem 1.13 into a proof of the following theorem.

THEOREM 1.14. Suppose the rank ri of the matrix Ai, obtained by eliminating the i-th column of the matrix A of a consistent system (2), is not zero. Then a necessary and sufficient condition for the unboundedness of the polyhedron M of system (2) in the positive (negative) direction of the axis Xi in P^n is the existence in Ai of a nonzero minor Δ of order ri such that the determinants obtained by bordering Δ with elements of the i-th column of A and of an arbitrary row of A have signs different from (the same as) the sign of Δ.

In connection with Theorem 1.13 we note further that if the rank r of system (2) is less than the number n of its unknowns, then the following proposition is implied by that theorem: if one of the nodes of system (2) has no common elements with the i-th column of its matrix, then the polyhedron M of that system is unbounded with respect to the axis Xi. This proposition follows directly from the properties of the linear equations that define the minimal faces of the polyhedron M of system (2). If we take into account the fact that, for a system (2) of nonzero rank r, any r linearly independent columns of its matrix contain at least one node of system (2) (see §4), then the above proposition may be formulated as follows:

If after eliminating the i-th column of the matrix of a consistent system (2) of rank r > 0 we get a matrix of the same rank, then the polyhedron M of the system is unbounded with respect to the axis Xi.

If the rank r of a consistent system (2) coincides with the number n of its unknowns, and if n > 1, then for each i = 1, 2, ..., n the rank ri of the matrix Ai obtained from the matrix A of the system by eliminating the i-th column is equal to n − 1. Consequently, each of the matrices Ai possesses a nonzero minor of order n − 1. If the conditions of Theorem 1.13 are not satisfied for any of these minors, then the polyhedron M is bounded with respect to each axis Xi. Thus the following theorem is valid.

THEOREM 1.15. Suppose the rank of a consistent system (2) is equal to the number n of its unknowns and suppose n > 1. Then its polyhedron M is bounded iff for each nonzero minor Δ of order n − 1 in a matrix Ai of system (2) we have: among the determinants obtained by bordering Δ with elements of the column of A that has no common elements with Δ and elements of a row of A, at least one has a sign different from the others.

Let us finally note that if the rank of system (2) is the same as the number of its unknowns as well as the number of its inequalities (such a system is obviously consistent), then its polyhedron M is unbounded with respect to every axis. Indeed, in this case, for each nonzero minor Δ of order n − 1 in the matrix A there exists only one determinant without repeated rows that is obtained by bordering Δ with elements of the column of A having no common elements with Δ. Thus, by Theorem 1.13, the polyhedron M is unbounded with respect to each axis Xi of P^n.

If the number n of the unknowns of system (2) equals 1, then the question of the unboundedness of the polyhedron of its solutions is settled by Remark 1 to Theorem 1.13.
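For the full-rank case of Theorem 1.15 the check specializes nicely: with n = 2, the minors of Ai are single entries and the borderings are 2×2 determinants. A small sketch of ours for n = 2 only (`bounded_rank_n` is a hypothetical helper name, not the author's notation):

```python
def det2(a, b, c, d):
    return a * d - b * c        # | a b ; c d |

def bounded_rank_n(A):
    """Theorem 1.15 for n = 2 (our sketch): a consistent system (2) of rank 2
    has a bounded polyhedron iff for every axis i and every nonzero order-1
    minor of A_i, the borderings by column i and the rows of A do not all
    share one sign (otherwise Theorem 1.13 gives unboundedness along x_i)."""
    m = len(A)
    for i in range(2):
        o = 1 - i                           # the single column of A_i
        for r in range(m):
            if A[r][o] == 0:
                continue                    # zero minor: skip
            signs = set()
            for j in range(m):
                d = det2(A[r][o], A[r][i], A[j][o], A[j][i])
                if d != 0:
                    signs.add(1 if d > 0 else -1)
            if len(signs) == 1:
                return False                # unbounded along x_i
    return True

# triangle -x1 <= 0, -x2 <= 0, x1 + x2 <= 1: bounded
print(bounded_rank_n([[-1, 0], [0, -1], [1, 1]]))   # True
# wedge -x1 <= 0, -x2 <= 0 (rank 2 but a full cone): unbounded
print(bounded_rank_n([[-1, 0], [0, -1]]))           # False
```

The wedge example also illustrates the closing note: there m = n = r = 2, and the polyhedron is unbounded with respect to every axis.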
§9. MODULAR FORMS OF SYSTEMS OF LINEAR INEQUALITIES;
RELATED QUESTIONS.
The problem of the boundedness and unboundedness of the polyhedron of system (2), studied in the last section, is related to the problem of reducing that system to modular form, i.e., to the form

|bj1 x1 + ... + bjn xn − bj| ≤ εj (j = 1, 2, ..., ℓ),

where the bji and bj are elements of P and all the εj are nonnegative elements of that field. It is not hard to show that not every system (2) can be reduced to modular form. Such is, e.g., the system

x1 ≤ 0,  x2 ≤ 0.

Our problem may be naturally formulated for an arbitrary system (1). To this end we introduce the following.
DEFINITION 1.21. A consistent system of linear inequalities on L(P) is said to be reducible (on L(P)) if it has an equivalent (i.e., with the same set of solutions) system of the form

|Fj(x) − bj| ≤ εj (j = 1, 2, ..., ℓ), (51)

where all the functions Fj(x) are linear functions on L(P), all the bj are elements of the field P, and all the εj are nonnegative elements of that field. In that case the latter system is called a modular form of the linear inequality system (1).

The question of the reducibility of system (1) to modular form rests basically on the next proposition.

LEMMA 1.11. If all solutions of a consistent system (51) of rank r > 0 satisfy the inequality f(x) − c ≤ 0 (on L(P)) with a nonnull linear function f(x), then there exists an element c″ ≤ c such that all solutions of system (51) satisfy the inequality −f(x) + c″ ≤ 0.
PROOF. We shall show that the function f(x) may be expressed as a linear combination, with coefficients in P, of the functions Fj(x). Indeed, the system of inequalities

Fj(x) − (bj + εj) ≤ 0 (j = 1, 2, ..., ℓ),
−Fj(x) − (εj − bj) ≤ 0 (j = 1, 2, ..., ℓ) (52)

of rank r is equivalent to system (51). In view of Theorem 1.1, it has at least one nodal subsystem. Let

Fjk(x) − (bjk + εjk) ≤ 0 (k = 1, 2, ..., r′),
−Fjk(x) − (εjk − bjk) ≤ 0 (k = r′ + 1, ..., r)

be one of these subsystems. Then all of its nodal solutions, i.e., all solutions of

Fjk(x) − (bjk + εjk) = 0 (k = 1, 2, ..., r′),
−Fjk(x) − (εjk − bjk) = 0 (k = r′ + 1, ..., r),

must satisfy the inequality f(x) − c ≤ 0. From this we easily conclude (with the help of proposition (*) of §1, Chapter I) that the functions f(x) and Fjk(x) (k = 1, 2, ..., r) cannot be linearly independent (over the field P). Consequently, for any x in L(P) we have the relation

f(x) = pj1 Fj1(x) + ... + pjr Fjr(x).

Replacing each of the elements pjk by the difference p′jk − p″jk of two nonnegative elements p′jk and p″jk, we write the relation in the form

f(x) = Σk p″jk (−Fjk(x)) + Σk p′jk Fjk(x)

and, furthermore, in the form

−f(x) + c″ = Σk p′jk (−Fjk(x) − (εjk − bjk)) + Σk p″jk (Fjk(x) − (bjk + εjk)),

where

c″ = −Σk p′jk (εjk − bjk) − Σk p″jk (bjk + εjk)

(all sums being over k = 1, 2, ..., r). From this last relation it follows that all solutions of system (52), and hence all solutions of system (51), must satisfy the inequality −f(x) + c″ ≤ 0. If x0 is one of them, then −f(x0) + c″ ≤ 0, i.e., c″ ≤ f(x0); and since f(x0) − c ≤ 0, we get c″ ≤ c, and the lemma is
proved.

THEOREM 1.16. A consistent system (1) is reducible iff for each j = 1, 2, ..., m it is possible to find an εj > 0 such that upon replacing aj by aj − εj the resulting system has no solution.

PROOF. If the rank of system (1) is zero, then the proposition requires no proof. Assume, then, that it is nonzero. If system (1) is reducible, then for each j for which the function fj(x) is not identically zero, the existence of an εj > 0 (εj ∈ P) such that the system obtained from system (1) by replacing aj by aj − εj is inconsistent follows from Lemma 1.11. For inequalities with an identically zero function fj(x) the assertion is obvious. The necessity part of the theorem is proved.

If some consistent system (1) satisfies the requirement of the theorem, then the set of its solutions coincides with the set of solutions of the system

|fj(x) − (aj − εj/2)| ≤ εj/2 (j = 1, 2, ..., m), (53)

whereby it is reducible to the form (53).

From this it follows that if a reducible system (1) consists of homogeneous linear inequalities fj(x) ≤ 0 (j = 1, 2, ..., m), then it is equivalent to the system of equations fj(x) = 0 (j = 1, 2, ..., m) (i.e., the set of solutions of the inequality system coincides with that of the equation system). Indeed, if that inequality system is reducible, then it is equivalent to a system of the form

fj(x) ≤ 0 (j = 1, 2, ..., m),
−fj(x) ≤ εj (j = 1, 2, ..., m),

where the εj are some positive elements of P. If x0 is a solution of the system that does not satisfy one of the equations fj(x) = 0, say the equation fj0(x) = 0, then −fj0(tx0) ≤ εj0 for any t > 0. Since −fj0(tx0) = t(−fj0(x0)) > 0 grows without bound as t grows, this is impossible, and the obtained contradiction proves our assertion.

Using Lemma 1.10 of the previous section and the above remark to Theorem 1.16, we may prove the following more general proposition: if a consistent system (1) is reducible, then the homogeneous inequality system fj(x) ≤ 0 (j = 1, 2, ..., m) corresponding to it is equivalent to the equation system fj(x) = 0 (j = 1, 2, ..., m). Indeed, by Lemma 1.10, if two systems of type (1) are equivalent (i.e., if they have one and the same solution set), then the corresponding homogeneous inequality systems are equivalent. Since the reducibility of system (1) implies that it is equivalent to a system of type (53), it follows that the homogeneous inequality system corresponding to system (1) is equivalent to the homogeneous inequality system

fj(x) ≤ 0 (j = 1, 2, ..., m),
−fj(x) ≤ 0 (j = 1, 2, ..., m),

corresponding to that system of type (53). Since the latter system is equivalent to the equation system fj(x) = 0 (j = 1, 2, ..., m), our proposition is proved.

THEOREM 1.17. A consistent system (1) of linear inequalities is reducible iff the homogeneous system of inequalities

fj(x) ≤ 0 (j = 1, 2, ..., m)

corresponding to it is reducible.

PROOF. The necessity follows from the proposition just proved, in view of the fact that the equation system fj(x) = 0 (j = 1, 2, ..., m) may be written in the form of the system of inequalities |fj(x)| ≤ 0 (j = 1, 2, ..., m) and hence is reducible. Sufficiency is equivalent to the assertion that the nonreducibility of system (1) implies the nonreducibility of the homogeneous inequality system corresponding to it. To prove the assertion, suppose an inequality system

fj(x) − aj ≤ 0 (j = 1, 2, ..., m)

of type (1) is irreducible. By Theorem 1.16, there exists a number j = j0 such that the system

fj(x) − aj ≤ 0 (j = 1, 2, ..., m; j ≠ j0),
fj0(x) − aj0 + ε ≤ 0,
−ε ≤ 0 (54)

is consistent for any ε ≥ 0. Let U be the subset of L(P) on which all the functions fj(x) take only the value zero, and let V be its direct complement in L(P). If the rank of our system (1) is r, then there exist r linearly independent generating elements v1, ..., vr in V. We note here that the irreducibility of the system implies that its rank is positive. Since any element
in L(P) may be written in the form

x = t1 v1 + ... + tr vr + u,

where t1, ..., tr are elements of P and u ∈ U, it follows that system (54) reduces to the system

t1 fj(v1) + ... + tr fj(vr) − aj ≤ 0 (j = 1, 2, ..., m; j ≠ j0),
t1 fj0(v1) + ... + tr fj0(vr) − aj0 + ε ≤ 0,
−ε ≤ 0

in P^(r+1), with unknowns t1, ..., tr, ε, which is consistent for any nonnegative value of ε; that is, its solution set is unbounded with respect to the (r+1)-st coordinate axis of P^(r+1). By Remark 2 to Theorem 1.13, this implies the unboundedness, with respect to that axis, of the solution set of the homogeneous system

t1 fj(v1) + ... + tr fj(vr) ≤ 0 (j = 1, 2, ..., m; j ≠ j0),
t1 fj0(v1) + ... + tr fj0(vr) + ε ≤ 0,
−ε ≤ 0

corresponding to it. Taking ε as a nonnegative parameter and using Theorem 1.16, we conclude that the system

t1 fj(v1) + ... + tr fj(vr) ≤ 0 (j = 1, 2, ..., m)

is not reducible. But then, obviously, the system

fj(x) ≤ 0 (j = 1, 2, ..., m)
is not reducible. This finishes our proof.

THEOREM 1.18. The polyhedron of a consistent system (2) whose rank equals the number n of its unknowns is bounded when and only when system (2) is reducible.

PROOF. Necessity: If the polyhedron of the system is bounded then, by Definition 1.20, it is bounded with respect to all axes of P^n and, in particular, in the negative direction of the axes determined by those elements (aj1, ..., ajn) (j = 1, 2, ..., m) that correspond to inequalities of the system whose linear forms are not identically zero. Thus, for each j = 1, 2, ..., m there exists an εj > 0 such that the system obtained from system (2) by replacing aj by aj − εj has no solution. But then, by Theorem 1.16, system (2) is reducible, as we had to prove.

Sufficiency: Let system (2) be reducible. Then, by the remark to Theorem 1.16, it is reducible to the form

|aj1 x1 + ... + ajn xn − (aj − εj/2)| ≤ εj/2 (j = 1, 2, ..., m), (55)

where εj > 0 (j = 1, 2, ..., m). Choosing from the forms aj1 x1 + ... + ajn xn (j = 1, 2, ..., m) n linearly independent forms

ajk1 x1 + ... + ajkn xn (k = 1, 2, ..., n)

and denoting them by Xjk (k = 1, 2, ..., n), we get

|Xjk − (ajk − εjk/2)| ≤ εjk/2 (k = 1, 2, ..., n). (56)
Since the determinant of the system of forms Xjk (k = 1, 2, ..., n) is nonzero, it is possible to express each of the unknowns x1, ..., xn as a linear expression in the unknowns Xjk (k = 1, 2, ..., n). Obviously, each of these forms is bounded on the set of solutions of (56). Thus the polyhedron of solutions of system (55) is bounded relative to each of the coordinate axes of P^n and hence is bounded in P^n (see Definition 1.20). Since our system (2) is equivalent to system (55), the sufficiency part of the theorem is proved.

The above theorem means that, for a consistent system (2) whose rank equals the number of its unknowns, reducibility is algebraically equivalent to the boundedness of the system's polyhedron of solutions.

THEOREM 1.19. If a consistent system (2) of rank r > 0 is reducible and if Δ is any nonzero minor of order r in its matrix, then r < m and, for each j = 1, 2, ..., m, there exists, among the characteristic determinants corresponding to Δ, at least one determinant in which the cofactor of aj has a sign different from that of Δ. Conversely, if these conditions are satisfied by all the nonzero r-order minors of the matrix of some principal section of a system (2), then the system is reducible.

By "characteristic determinants" of system (2) corresponding to a minor Δ we mean those determinants obtained by bordering Δ with elements of a row of the matrix of system (2) having no common elements with Δ and with the column of the elements aj (j = 1, ..., m).
PROOF. The relation r < m follows immediately from Theorem 1.16, since if r = m the system is consistent for arbitrary values of its constant terms. In view of Theorem 1.17, system (2) is reducible iff the system

aj1 x1 + ... + ajn xn ≤ 0 (j = 1, 2, ..., m)

of homogeneous linear inequalities that corresponds to it is reducible. In view of Theorem 1.16, the latter system is reducible if and only if each of the systems

aj1 x1 + ... + ajn xn + āj xn+1 ≤ 0 (j = 1, 2, ..., m),  −xn+1 ≤ 0, (57)

where one āj is unity and the rest are zeros, has a solution set that is bounded with respect to xn+1.

Suppose now that system (2) is reducible and consider one of the systems (57), say the one with āj ≠ 0. If Δ is an arbitrary nonzero minor of order r in the matrix of system (2), then, in view of Theorem 1.13, the condition for unboundedness relative to the axis Xn+1 must be violated; i.e., upon bordering Δ with the elements of the column of coefficients of xn+1 in the chosen system (57) we must get at least two determinants with opposite signs. One of the determinants obtained by bordering Δ in this manner is obtained by bordering it with the row of coefficients of the inequality −xn+1 ≤ 0, and it has the same sign as Δ. Hence we must find, among the remaining determinants, one that has a sign different from that of Δ.

Note that the set of nonzero determinants obtained by bordering Δ with the help of the column of coefficients of xn+1 contains, in addition to the determinant already considered, only determinants with āj in their last column. Thus at least one of the latter has a sign different from that of Δ. But any such determinant coincides with the cofactor of aj in the characteristic determinant obtained by replacing its last column by the corresponding elements aj. Hence the necessity part is established.

Sufficiency is obtained as follows. Suppose all nonzero determinants of order r of a
principal section S of system (2) satisfy the requirement of Theorem 1.19. If system (2) is not reducible, then at least one of the systems (57), say the one with āj ≠ 0, has a solution set that is unbounded with respect to the axis Xn+1. But then the system

aj1 x1 + ... + ajn xn + āj xn+1 ≤ 0 (j = 1, 2, ..., m)

is consistent for any xn+1 ≥ 0, and this means that the system

aj1 x1 + ... + ajn xn + āj ≤ 0 (j = 1, 2, ..., m)

is consistent. Hence, by Theorem 1.5 and its Corollary 1.1, the principal section of that system defined by the matrix of system S has at least one node Δ′, and all nonzero determinants companion to Δ′ have the sign opposite to that of Δ′. Hence the cofactors of aj in the characteristic determinants corresponding to Δ′ must all have the same sign as Δ′. This, however, contradicts our assumption about the nonzero determinants of order r of system S. This establishes sufficiency, and the theorem is proved.

COROLLARY 1.4. If a consistent system (2) of rank r > 0 is reducible, then so is each of its principal sections. Conversely, it is reducible if at least one of its principal sections is reducible.
CHAPTER II. THE DUALITY PRINCIPLE

The principle of duality with which we are concerned here is characterized by way of the following assertions.
I. If

a_{j1}x_1 + ... + a_{jn}x_n ≥ 0 (j = 1, 2, ..., m)

is a finite system of homogeneous linear inequalities on P^n (P^n is defined over an arbitrary ordered field P) with solution set M, then the solution set of the system

b_1x_1 + ... + b_nx_n ≥ 0 ((b_1, ..., b_n) ∈ M)

(we shall call it the dual of our system) coincides with the convex cone generated in P^n by the elements (a_{j1}, ..., a_{jn}) (j = 1, 2, ..., m) (i.e., with the dual cone of the system).

II. Any convex cone with a finite set of generating elements of P^n (i.e., with a finite generating set) is the solution set of at least one finite system of linear (homogeneous) inequalities on P^n.

In the case of infinite systems, Assertion I is not correct. Indeed, consider the system

x_2 ≥ 0, x_1 − kx_2 ≥ 0 (k = 1, 2, ...)

on R^2 (R is the field of real numbers). The set of solutions of this system is the ray from the origin in the positive direction of the X_1 axis. Hence, the dual of our system consists of all inequalities of the form ax_1 + 0x_2 ≥ 0, where a is any nonnegative number. The solution set of that system is, obviously, the halfspace x_1 ≥ 0 of the space R^2. This is, obviously, distinct from the dual cone of the system, which is generated in R^2 by the elements (1, −k) (k = 1, 2, ...) and (0, 1).

Assertion I is, basically, equivalent to the Minkowski-Farkas Theorem on the consequence inequality of a finite system of linear inequalities, which we present in Section 1 of the present chapter (see Lemma 2.4 and Theorem 2.4). Assertion II is Weyl's Theorem about convex cones with finite sets of generating elements, which is presented in §4 (see Lemma 2.7 and Theorem 2.16). Using the Minkowski-Farkas Theorem and Assertion II, we easily conclude that the set of solutions of a finite system of homogeneous linear inequalities is a convex cone with a finite generating set. This proposition implies that if

(b_{i1}, ..., b_{in}) (i = 1, 2, ..., m′)

is any system of generating elements of the cone of solutions to the system

a_{j1}x_1 + ... + a_{jn}x_n ≥ 0 (j = 1, 2, ..., m),

then the dual cone of the latter coincides with the solution set of the finite subsystem

b_{i1}x_1 + ... + b_{in}x_n ≥ 0 (i = 1, 2, ..., m′)
of the dual system.

Let A be a finitely generated cone in P^n, let (a_{j1}, ..., a_{jn}) (j = 1, 2, ..., m) be a system of generating elements of that cone, and let B be the cone of solutions of the system

a_{j1}x_1 + ... + a_{jn}x_n ≥ 0 (j = 1, 2, ..., m).

Then, by the Minkowski-Farkas Lemma and by the above-noted proposition, the relation A → B is a single-valued map from the set of finitely generated cones in P^n into itself. In §6 we show that this set has the structure of a lattice and that the above relation is an automorphism of it.
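The map A → B can be traced on a small concrete example (ours, not the book's): in R^2, take A to be the cone generated by (1, 0) and (1, 1); its image B is the solution cone of x_1 ≥ 0, x_1 + x_2 ≥ 0, which is generated by (0, 1) and (1, −1), and applying the map once more returns A. A minimal numerical sketch, with helper names of our own choosing:

```python
# Polar-of-polar check on a sample: A is the cone generated by (1, 0) and
# (1, 1) in R^2.  Its image B (solutions of x1 >= 0, x1 + x2 >= 0) is
# generated by (0, 1) and (1, -1); the image of B should coincide with A.

def in_A(x, y):
    """A = cone{(1,0),(1,1)} = {(x, y): y >= 0 and x >= y}."""
    return y >= 0 and x >= y

def in_polar_of_B(x, y):
    """Solutions of (0,1).x >= 0 and (1,-1).x >= 0, i.e. the image of B."""
    return y >= 0 and x - y >= 0

# the two membership tests agree on a grid of sample points
pts = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]
assert all(in_A(x, y) == in_polar_of_B(x, y) for (x, y) in pts)
```

The agreement on the grid is, of course, only an illustration of the involution, not a proof.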
Assertion I means that the presented map is its dual automorphism (its involution). Indeed, if we call the image B of the cone A the polar of the latter, then Assertion I implies that the polar of the polar of a finitely generated cone coincides with it.

As in the last chapter, we study systems of linear inequalities under the assumption that the field P is an arbitrary ordered field. A central role in the exposition is played by the Minkowski-Farkas Theorem and Weyl's Theorem mentioned above. The first of these theorems is presented, together with some of its corollaries and some directly related propositions, for linear inequality systems in L(P) in general. Weyl's Theorem and its corollaries are presented for the spaces P^n. The proof of Weyl's Theorem, below, reduces to an application of the Minkowski-Farkas Theorem and of the basic theorem of Chapter I, namely the theorem about nodal solutions (Theorem 1.1). The latter theorem is also used in the proof of the Minkowski-Farkas Theorem. As consequences of some corollaries of the Minkowski-Farkas Theorem we obtain some well-known theorems (for P = R) about linear inequality systems. For instance, in §1 we obtain the A. D. Alexandrov and Ky Fan theorem about the condition for consistency of a system of linear inequalities (see Theorem 2.3); in §3 we get the theorem of Voronoi about the stable consistency of a system of linear inequalities (see Theorem 2.12 and its Corollary 2.8) and Tucker's Theorem (Theorem 2.13), which plays a central role in the theory of linear programming.

We shall also present some results that sharpen the Minkowski-Farkas Theorem. One of these is Corollary 2.1, with the help of which the question of the inconsistency of a system of linear inequalities of rank r is reduced to the question of the existence of an inconsistent subsystem of it of rank r with r + 1 inequalities. Since other questions are also related to the study of subsystems of rank r with r + 1 inequalities, a special section (Section 2) is devoted to these subsystems. As one of the geometric implications of the Minkowski-Farkas Theorem and of some of its direct consequences, we study here (in Section 7) the question of separating two convex polyhedral sets by planes. We study polyhedral convex sets in L(P) with an arbitrary ordered field P; it is shown that the results that are true, in particular, for L(R) are preserved here (see Theorems 2.27 and 2.28).

The duality principle and some of its implications are presented in §4 through §7. Specifically, in §4 it is shown that the polar of a finitely generated cone in P^n has a finite set of generating elements and that the polar of the polar of that cone coincides with it. This proposition is used in §4 to obtain the general form of the solution of an arbitrary finite system of linear homogeneous inequalities in P^n (see Theorem 2.19). For systems of homogeneous linear inequalities whose rank is equal to the number of their unknowns, the theorem was proved by Minkowski (see the remark following Theorem 2.19). With the help of the results of §4, we study (in §5) arbitrary (generally speaking, nonhomogeneous) systems of linear inequalities in P^n. For this purpose, each nonhomogeneous system is associated in a definite manner with the homogeneous system that corresponds to it (see the beginning of §5). In this way we obtain a theorem about the decomposition of the solution set of an arbitrary linear inequality system into a sum of a finitely generated cone and a finitely generated centroid (see Theorem 2.23). For the case of systems of linear inequalities on R^n, this theorem is known (see the remark on that theorem). In §6, it is used to study the class of arbitrary convex polyhedral sets in P^n.
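The infinite-system counterexample given earlier in this introduction can also be verified numerically. The sketch below (our code; the helper names are ours) checks that the point (0, −1) solves every inequality of the dual system but lies in no finitely generated approximation of the dual cone spanned by (1, −k) and (0, 1):

```python
# For the infinite system x2 >= 0, x1 - k*x2 >= 0 (k = 1, 2, ...), the
# point (0, -1) lies in the solution set of the dual system (the halfspace
# x1 >= 0) but not in the cone generated by (1, -k) and (0, 1).

def in_halfspace(pt):
    """Solution set of the dual system a*x1 >= 0 (a >= 0) is x1 >= 0."""
    return pt[0] >= 0

def in_cone(pt, K):
    """Membership in the convex cone generated by (1, -k) for k = 1..K
    and (0, 1): a conic combination (x, y) has x >= 0 and y >= -K*x,
    and conversely every such point is attainable."""
    x, y = pt
    return x >= 0 and y >= -K * x

pt = (0.0, -1.0)
assert in_halfspace(pt)                                # solves the dual system
assert all(not in_cone(pt, K) for K in range(1, 100))  # but is in no such cone
```

Since every truncation of the generator list gives a cone contained in {(x, y): x ≥ 0, y ≥ −Kx}, the gap between the halfspace and the dual cone persists at every finite stage, which is exactly the failure of Assertion I for infinite systems.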
§1. THE MINKOWSKI-FARKAS THEOREM ON THE CONSEQUENCE OF SYSTEMS OF LINEAR INEQUALITIES

DEFINITION 2.1. The linear inequality f(x) − a ≤ 0 on L(P) is said to be a consequence of the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m) (1)

of linear inequalities on L(P) if it is satisfied by all the solutions of that system.

The content of this section is determined by two problems: the problem of finding the algebraic conditions that must be satisfied by the left-hand side of the above inequality in order for it to be a consequence of system (1) with nonzero rank, and the question of the existence of a nodal subsystem of the latter that has that inequality as a consequence.

LEMMA 2.1. If the linear inequality

f(x) − a ≤ 0 (2)

is a consequence of system (1) of nonzero rank and if it is turned into an equation by a solution x = x_0 of the latter, then it is a consequence of that subsystem of (1) which consists of all the inequalities turned into equations by x = x_0.

PROOF. Without loss of generality we may assume that subsystem to be the subsystem
f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m′; m′ ≤ m) (3)

consisting of the first m′ inequalities of system (1). Suppose one of its solutions x = x′ does not satisfy inequality (2); then we have f(x′) − a > 0. It is not hard to show that the inequality f(x) − a > 0 is satisfied by all the elements of the ray

x(t) = x_0 + t(x′ − x_0)

distinct from x_0, where t is an arbitrary nonnegative element of the field P: indeed, since f(x_0) − a = 0, we have f(x(t)) − a = t(f(x′) − a) > 0 for t > 0. On the other hand, among those elements there is an element x(t′) (t′ > 0) that satisfies system (1). Indeed, for x = x_0 the strict inequalities

f_j(x) − a_j < 0 (j = m′ + 1, ..., m)

hold, and hence (by Lemma 1.4) there exists an element t = t′ > 0 such that the element x(t′) satisfies all of these inequalities. Since x(t′), like each element x(t) (t ≥ 0), satisfies system (3), it follows that x(t′) is a solution of system (1) with x(t′) ≠ x_0. But all elements x(t) (t > 0) satisfy the inequality f(x) − a > 0, and all solutions of system (1) satisfy f(x) − a ≤ 0. This is a contradiction. Consequently, the lemma is proved.
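Lemma 2.1 can be illustrated on a small instance (our example, with the inequalities written in the f(x) − a ≤ 0 form used here): for the system x_1 − 1 ≤ 0, x_2 − 1 ≤ 0, the consequence 2x_1 − 2 ≤ 0 becomes an equation at x_0 = (1, 0), where only the first inequality is active; the lemma then predicts that the consequence already follows from that single active inequality.

```python
import random

def active_subsystem(x1, x2):
    """The single inequality x1 - 1 <= 0 active at x0 = (1, 0)."""
    return x1 - 1 <= 0

def consequence(x1, x2):
    """The consequence 2*x1 - 2 <= 0, turned into an equation at (1, 0)."""
    return 2 * x1 - 2 <= 0

random.seed(0)
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
# every sampled solution of the active subsystem already satisfies the
# consequence, as Lemma 2.1 asserts
assert all(consequence(x1, x2) for (x1, x2) in samples if active_subsystem(x1, x2))
```

The sampling is, of course, only a sanity check of the conclusion on this toy system, not a proof of the lemma.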
The system (3) under consideration is a boundary subsystem of system (1); thus it contains an extremal subsystem. Clearly, the inequality f(x) − a ≤ 0 is a consequence of that extremal subsystem. Thus Lemma 2.1 implies the validity of the following proposition.

If the inequality (2) is a consequence of a consistent system (1) with nonzero rank and if f(x) − a = 0 for at least one of that system's solutions, then inequality (2) is a consequence of at least one of its extremal subsystems.

LEMMA 2.2. If the inequality (2) is a consequence of a consistent system (1) of nonzero rank, then it is a consequence of at least one of its extremal subsystems.

PROOF. We shall assume that f(x) is not null, since the case of a null f(x) requires no proof. If system (1) has a solution with f(x) − a = 0, then our assertion is valid by the previous proposition. Hence, assume all solutions of (1) satisfy the inequality f(x) − a < 0. From among the minimal faces of the polyhedron M of system (1) (the number of which is finite), let us pick one, S, on which f(x) attains its greatest value. This is possible because, obviously, f(x) has the same value at all the elements of each of these faces. Let c be the value attained by f(x) on S (clearly c < a). We shall show that f(x) − c ≤ 0 is a consequence of system (1). Suppose it is not so, i.e., suppose there exists a solution of system (1) such that f(x) − c > 0. Thus the system

f_j(x) − a_j ≤ 0 (j = 1, ..., m), f(x) − c > 0, f(x) − a < 0 (4)

is consistent.
Let x′ be a solution of this system and let x″ be an element of L(P) that satisfies f(x) − a > 0, i.e., one that does not satisfy system (1). Considering the interval

x(t) = x″ + t(x′ − x″), t ∈ P[0, 1],

and using a method that is analogous to the method used in proving Property 1 of boundary subsystems (see Chapter I, §1), we can easily show that system (1) has a boundary solution that satisfies system (4). Thus, system (1) has a boundary subsystem (possibly consisting of one inequality) with at least one nodal solution that satisfies (4). Let

f_{jk}(x) − a_{jk} ≤ 0 (k = 1, 2, ..., ℓ) (5)

be one of those boundary subsystems, let x_0 be one of its nodal solutions that satisfies (4), and let ℓ, the rank of (5), be less than the rank r of system (1). If all solutions of

f_{jk}(x) − a_{jk} = 0 (k = 1, 2, ..., ℓ) (6)

satisfy the last two inequalities of (4), then we may argue as we did in establishing Property 2 of boundary subsystems in §1, Chapter I: the boundary subsystem of rank ℓ + 1 so obtained will have a nodal solution that satisfies (4).
Suppose it is not true that all of the solutions of (6) satisfy the last two inequalities in (4). It is not possible for all of them to satisfy f(x) − a < 0, for then they would all satisfy f(x) − c ≤ 0, which would contradict our hypothesis about x_0. Thus, at least one solution x* of (6) satisfies the inequality f(x) − a ≥ 0. Since x*, obviously, does not satisfy system (1), we conclude, as we did in the proof of Property 2 of boundary subsystems, that there exists an element x(t*) (t* > 0) on the segment

x(t) = x_0 + t(x* − x_0), t ∈ P[0, 1],

which satisfies (1) and turns one of the inequalities of system (1) outside (5), say

f_{jℓ+1}(x) − a_{jℓ+1} ≤ 0,

into an equation. As in the proof of Property 2, we conclude that the rank of the system

f_{jk}(x) − a_{jk} ≤ 0 (k = 1, 2, ..., ℓ + 1)

is ℓ + 1. Hence, the latter is a boundary subsystem of system (1). But its nodal solution x(t*) satisfies system (1). Thus, we must have f(x) − a < 0 there. On the other hand, we also have f(x) − c > 0 there (since f(x_0) − c > 0 and f(x*) − c > f(x*) − a ≥ 0). Consequently, the boundary subsystem we obtained has the property that we sought, i.e., it has a nodal solution that satisfies (4).

It follows from these arguments that, among the boundary subsystems of (1) having this property, there exists a boundary subsystem of rank r, i.e., a nodal subsystem. Thus, system
(1) has at least one nodal solution x = x* such that f(x*) > c, in contradiction with the way of choosing c ∈ P. This contradiction means that system (4) is not consistent. Hence, all solutions of system (1) satisfy f(x) − c ≤ 0, which is what we wanted to show. But then, by Lemma 2.1, this inequality, and hence the inequality f(x) − a ≤ 0, is a consequence of at least one extremal subsystem of system (1). The lemma is proved.

The validity of the next theorem follows from the proof of Lemma 2.2.

THEOREM 2.1. If the linear function f(x) (x ∈ L(P)) is bounded above on the polyhedron M of solutions of a consistent system (on L(P)) of nonzero rank, then f(x) attains its greatest value on M, and does so at all the elements of at least one minimal face of M.

LEMMA 2.3. If the inequality f(x) ≤ 0 (x ∈ L(P)) is a consequence of the linear inequality system f_j(x) ≤ 0 (j = 1, 2, ..., m) (on L(P)) of rank r > 0, then it is a consequence of one of its subsystems of rank r′, 0 < r′ ≤ r, that consists of r′ inequalities.

PROOF. Since the assertion is trivial for a null function f(x), we shall assume that f(x) is nonnull. By the hypothesis of the lemma, the system

f_j(x) ≤ 0 (j = 1, 2, ..., m), −f(x) + 1 ≤ 0

is inconsistent. Let

f_{jk}(x) ≤ 0 (k = 1, 2, ..., ℓ), −f(x) + 1 ≤ 0 (7)

be one of its inconsistent subsystems all of whose proper subsystems are consistent. Let r′ be its rank and let r′ + s (s ≥ 1) be the number of its inequalities (obviously r′ ≠ 0). In view of the inconsistency of (7), the corresponding equation system

f_{jk}(x) = 0 (k = 1, 2, ..., ℓ), −f(x) + 1 = 0 (8)

is inconsistent, since each of its solutions would be a solution of (7). The rank of the system

f_{jk}(x) = 0 (k = 1, 2, ..., ℓ)

cannot be distinct from r′: otherwise, augmenting its maximal linearly independent subsystem by the equation −f(x) + 1 = 0, we would get a consistent system (by proposition (*) of §1, Chapter I), and then, obviously, system (8) would, contrary to what was just shown, be consistent. Without loss of generality, we may assume that the first r′ equations of system (8) are linearly independent. In this case, its subsystem

f_{jk}(x) = 0 (k = 1, 2, ..., r′), −f(x) + 1 = 0 (9)

is, obviously, inconsistent.
Assume now that s > 1. Since system (7) is inconsistent, the inequality −f_{jℓ}(x) < 0, and hence the inequality −f_{jℓ}(x) ≤ 0, is a consequence of the system

f_{jk}(x) ≤ 0 (k = 1, 2, ..., ℓ − 1), −f(x) + 1 ≤ 0, (10)

which is consistent in view of the method of choosing system (7). But then it must be a consequence of one of the extremal subsystems, T, of the latter (see Lemma 2.2). Clearly the strict inequality −f_{jℓ}(x) < 0 is a consequence of the subsystem T. Indeed, otherwise zero would be the maximum value of the function −f_{jℓ}(x) on the polyhedron of the subsystem T. By Theorem 2.1 this maximum is attained on one of the minimal faces of its polyhedron. But that face coincides with one of the minimal faces of the polyhedron of system (10), and the elements corresponding to that face would satisfy the equality f_{jℓ}(x) = 0; but then system (7), contrary to our assertion, would be consistent. Consequently, −f_{jℓ}(x) < 0 is a consequence of system T. Thus, by adjoining f_{jℓ}(x) ≤ 0 to T we obtain an inconsistent subsystem of system (7); since all proper subsystems of (7) are consistent, this is not possible unless T coincides with system (10). Hence, system (10) does not have an extremal subsystem distinct from itself. Thus, the equation system

f_{jk}(x) = 0 (k = 1, 2, ..., ℓ − 1), −f(x) + 1 = 0

corresponding to it is consistent. But this contradicts the inconsistency of system (9), since for s > 1 we have ℓ − 1 ≥ r′. Thus s = 1, and the number, ℓ, of inequalities in the first part of system (7) coincides with its rank r′. The inconsistency of system (7) implies the inconsistency of the system

f_{jk}(x) ≤ 0 (k = 1, 2, ..., ℓ; ℓ = r′), −f(x) + ε ≤ 0,

corresponding to (7), for any ε > 0. Thus f(x) < ε is a consequence of the system

f_{jk}(x) ≤ 0 (k = 1, 2, ..., r′)

for any ε > 0. But this means, obviously, that the latter has f(x) ≤ 0 as a consequence. Since the rank of that system coincides with the number r′ of its inequalities, our lemma is proved.

THEOREM 2.2. If the linear inequality f(x) − a ≤ 0 (on L(P)) is a consequence of a consistent system (1) of nonzero rank, then it is a consequence of at least one of its nodal subsystems.

PROOF. By Lemma 2.2, the inequality f(x) − a ≤ 0 is a consequence of an extremal subsystem

f_{jk}(x) − a_{jk} ≤ 0 (k = 1, 2, ..., ℓ) (11)

of system (1). Let x = x_0 be a solution of system (1) that turns all the inequalities of the subsystem into equations and let f(x_0) = c. Then, in view of the uniqueness of the minimal face of the polyhedron of system (11), Theorem 2.1 implies that f(x) − c ≤ 0 is a consequence of system (11). For x = u + x_0, system (11) and the inequality f(x) − c ≤ 0 can be written in the form

f_{jk}(u) ≤ 0 (k = 1, 2, ..., ℓ)

and f(u) ≤ 0. The inequality f(u) ≤ 0 is, obviously, a consequence of this system of inequalities. Thus, by Lemma 2.3, it is also a consequence of one of its subsystems of rank r′ > 0 that consists of r′ inequalities (r′ does not exceed the rank of system (11), which coincides with the rank r of system (1)). But this means that the inequality f(x) − c ≤ 0, and hence f(x) − a ≤ 0, is a consequence of the subsystem of system (11) that corresponds to the latter subsystem. From this it follows easily that the latter inequality is a consequence of a subsystem of rank r that consists of r inequalities from system (11). But such a subsystem is, clearly, a nodal subsystem of system (1). Thus, the proof is complete.

LEMMA 2.4. If the linear inequality f(x) ≤ 0 on L(P) is a consequence of

f_j(x) ≤ 0 (j = 1, 2, ..., m), (12)

then there exist nonnegative elements p_1, ..., p_m in P such that the relation

f(x) = Σ_{j=1}^{m} p_j f_j(x)

holds as an identity in x for all x ∈ L(P).

PROOF. If the rank r of system (12) is zero, the assertion is validated by choosing p_1 = p_2 = ... = p_m = 0. Thus, we shall assume that r > 0. Then, by Lemma 2.3, the inequality f(x) ≤ 0 must be a consequence of a subsystem of rank r′ ≤ r of system (12) that consists of r′ inequalities. Without loss of generality, we may assume that subsystem to consist of the first r′ inequalities of system (12). The rank of the system

f_j(x) ≤ 0 (j = 1, 2, ..., r′), f(x) ≤ 0

is r′. This is so, for otherwise the system

f_j(x) = 0 (j = 1, 2, ..., r′), f(x) = 1

would be consistent. But this is impossible, since all solutions of the subsystem f_j(x) ≤ 0 (j = 1, 2, ..., r′) must satisfy f(x) ≤ 0. Hence, there exist elements q_1, ..., q_{r′} in P such that the relation

f(x) = Σ_{j=1}^{r′} q_j f_j(x)

holds as an identity in x for x ∈ L(P). If all the q_j's are nonnegative, then the lemma is obviously proved. Suppose one of the elements, say q_1, is negative, and let x_0 be a solution of the system

f_1(x) = −1, f_j(x) = 0 (j = 2, ..., r′)
(in view of proposition (*) of §1, Chapter I, this system is consistent). Then

f(x_0) = Σ_{j=1}^{r′} q_j f_j(x_0) = −q_1 > 0.

Since x_0 satisfies the system

f_j(x) ≤ 0 (j = 1, 2, ..., r′),

which has f(x) ≤ 0 as a consequence, we get a contradiction. Thus all of the q_j's are nonnegative, and the lemma is proved.

REMARK. The proof of Lemma 2.4 may also be obtained by induction on the indices of the inequalities of system (12) (see Chernikov [14]). For the case where system (12) and the inequality f(x) ≤ 0 are on R^n, the lemma was first proved by Minkowski [1], and somewhat later by Farkas [1].

Lemma 2.4 implies, in particular, that the Minkowski-Farkas Theorem may be applied to the systems

f_j(x) = a_{j1}x_1 + ... + a_{jn}x_n ≤ 0 (j = 1, 2, ..., m) (13)

and

f(x) = b_1x_1 + ... + b_nx_n ≤ 0 (14)

with coefficients in an arbitrary ordered field P. Thus, we have the following proposition. If inequality (14), with coefficients in an arbitrary ordered field P, is a consequence of system (13) with coefficients in that field P, then there exist (in P) nonnegative elements p_1, ..., p_m such that for all x_1, ..., x_n with values in P we have the relation

b_1x_1 + ... + b_nx_n = Σ_{j=1}^{m} p_j(a_{j1}x_1 + ... + a_{jn}x_n).
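A certificate of this kind can be checked mechanically on a toy instance in R^2 (our numbers, not from the book): for the rows a_1 = (1, 0), a_2 = (1, 1) and b = (3, 2), the nonnegative multipliers p = (1, 2) give b = 1·a_1 + 2·a_2, so b_1x_1 + b_2x_2 ≤ 0 holds whenever both a_j-inequalities do.

```python
import random

# Toy Minkowski-Farkas certificate in R^2: b is a nonnegative combination
# of the rows a1, a2, hence b.x <= 0 follows from a1.x <= 0, a2.x <= 0.
a = [(1, 0), (1, 1)]
b = (3, 2)
p = [1, 2]                     # nonnegative multipliers

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# the certificate reproduces b coordinate by coordinate
assert all(b[i] == sum(p[j] * a[j][i] for j in range(2)) for i in range(2))

# consequence check on random samples
random.seed(1)
for _ in range(1000):
    x = (random.uniform(-4, 4), random.uniform(-4, 4))
    if all(dot(aj, x) <= 0 for aj in a):
        assert dot(b, x) <= 0
```

The point of the certificate is that the multipliers p give a finite, directly verifiable witness that the consequence relation holds, with no quantification over all solutions.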
We note here the next two formulations of the above proposition (for P = R, see Beckenbach and Bellman [1], page 119, and Gale [2], page 72).

1. Suppose x̄ and ȳ are n-dimensional vectors with coordinates in an ordered field P and suppose

A = | a_{11} ... a_{1n} |
    | ...          ...  |
    | a_{m1} ... a_{mn} |

is a matrix with elements in P. Then a necessary and sufficient condition for the nonnegativity of the scalar product (x̄, ȳ) for all x̄ satisfying Ax̄ ≥ 0 is the existence of a nonnegative m-dimensional vector p̄ = (p_1, ..., p_m) with coordinates in the field P (p_1 ≥ 0, p_2 ≥ 0, ..., p_m ≥ 0) such that ȳ = A^T p̄ (A^T is the transpose of the matrix A).* [*To multiply a row-vector by a matrix, we write it here as a column-vector.]

2. One and only one of the following two alternatives occurs: either the equation system

a_{1i}u_1 + ... + a_{mi}u_m = b_i (i = 1, ..., n)

has a nonnegative solution, or the inequality system

a_{j1}x_1 + ... + a_{jn}x_n ≤ 0 (j = 1, 2, ..., m), b_1x_1 + ... + b_nx_n > 0

has a solution.

From Lemma 2.4 follows:

THEOREM 2.3. System (1) is inconsistent if and only if there exist nonnegative elements p_1, ..., p_m of P such that for all x ∈ L(P) we have the relation

Σ_{j=1}^{m} p_j f_j(x) = 0

and the inequality

Σ_{j=1}^{m} p_j a_j < 0.
PROOF. Suppose system (1) is inconsistent. Then the inequality t ≤ 0 is a consequence of the system

f_j(x) − a_j t ≤ 0 (j = 1, 2, ..., m).

Applying Lemma 2.4, we get: for some nonnegative elements p_1, ..., p_m in the field P and for all x ∈ L(P) and all t ∈ P we have the relation

t = Σ_{j=1}^{m} p_j (f_j(x) − a_j t),

which, for t = 1, implies the relation and the inequality of the theorem. Sufficiency here is obvious.

For P = R, Theorem 2.3 was proved in the paper of Ky Fan [1], where it was the fundamental theorem. For systems of linear inequalities in R^n it was proved earlier by A. D. Alexandrov [1]. Theorem 2.3, as is clear from its proof, is a direct corollary of Lemma 2.4. On the other hand, it is not hard to show that Lemma 2.4 may be obtained as a direct corollary of Theorem 2.3. Indeed, let f(x) ≤ 0 be a consequence of the system

f_j(x) ≤ 0 (j = 1, 2, ..., m).

Then the system

f_j(x) ≤ 0 (j = 1, 2, ..., m), −f(x) + t ≤ 0,

where t is some positive element of P, is inconsistent. Applying Theorem 2.3 we get, for some nonnegative elements p_0, p_1, ..., p_m in P and for any x ∈ L(P), the relation

−p_0 f(x) + Σ_{j=1}^{m} p_j f_j(x) = 0

and the inequality p_0 t > 0. From the latter we get p_0 > 0. But then, for any x ∈ L(P), we have

f(x) = Σ_{j=1}^{m} (p_j / p_0) f_j(x),

and hence Lemma 2.4.

For systems of linear inequalities in P^n, Theorem 2.3 may be formulated in the following form (for P = R see Gale [2]): one and only one of the following alternatives occurs: either the system

a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0 (j = 1, 2, ..., m)

is consistent, or the system

a_{1i}u_1 + ... + a_{mi}u_m = 0 (i = 1, ..., n), a_1u_1 + ... + a_mu_m = −1

has a nonnegative solution.

LEMMA 2.5. The linear inequality

f(x) − a ≤ 0 (15)

is a consequence of a consistent system (1) if and only if the inequality

f(x) − at ≤ 0 (16)

is a consequence of the system

f_j(x) − a_j t ≤ 0 (j = 1, 2, ..., m), −t ≤ 0. (17)

PROOF. Sufficiency. Suppose inequality (16) is a consequence of system (17). If x = x_0 is an arbitrary solution of system (1), then (x_0, 1) is a solution of system (17), and hence f(x_0) − a ≤ 0. In view of the arbitrariness of x_0 as a solution of system (1), inequality (15) is a consequence of system (1).

Necessity. Suppose inequality (15) is a consequence of system (1), and let (x_0, t_0) be a solution of system (17). If t_0 ≠ 0, then from the relations

f_j(x_0) − a_j t_0 ≤ 0 (j = 1, 2, ..., m), t_0 > 0,

it follows that

f_j(x_0/t_0) − a_j ≤ 0 (j = 1, 2, ..., m).

But since then

f(x_0/t_0) − a ≤ 0,

this implies f(x_0) − at_0 ≤ 0. Thus the solution (x_0, t_0) satisfies the inequality (16). If t_0 = 0, then x_0 is a solution of

f_j(x_0) ≤ 0 (j = 1, 2, ..., m).

But then, by Lemma 1.10, f(x_0) ≤ 0. Thus, also in this case, the solution (x_0, t_0) of system (17) satisfies inequality (16). Consequently, the inequality (16) is a consequence of system (17), and the lemma is proved.
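Lemma 2.5 can be traced on a one-variable example (ours): the system x_1 − 1 ≤ 0 has the consequence 2x_1 − 3 ≤ 0, and, correspondingly, the homogenized inequality 2x_1 − 3t ≤ 0 is a consequence of the homogenized system x_1 − t ≤ 0, −t ≤ 0.

```python
import random

def original_system(x1):        # system (1): x1 - 1 <= 0
    return x1 - 1 <= 0

def original_conseq(x1):        # inequality (15): 2*x1 - 3 <= 0
    return 2 * x1 - 3 <= 0

def homogenized_system(x1, t):  # system (17): x1 - t <= 0, -t <= 0
    return x1 - t <= 0 and -t <= 0

def homogenized_conseq(x1, t):  # inequality (16): 2*x1 - 3*t <= 0
    return 2 * x1 - 3 * t <= 0

random.seed(2)
for _ in range(1000):
    x1 = random.uniform(-5, 5)
    t = random.uniform(-5, 5)
    if original_system(x1):
        assert original_conseq(x1)
    if homogenized_system(x1, t):
        assert homogenized_conseq(x1, t)
```

The check runs both directions of the lemma on random samples; on this instance the homogenized consequence follows because 2x_1 ≤ 2t ≤ 3t whenever x_1 ≤ t and t ≥ 0.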
REMARK. The following proposition, to be used later, follows from Lemma 2.5. Two consistent systems of linear inequalities on L(P),

f_j^(1)(x) − a_j^(1) ≤ 0 (j = 1, 2, ..., m_1)

and

f_j^(2)(x) − a_j^(2) ≤ 0 (j = 1, 2, ..., m_2),

have one and the same solution set (are equivalent) if and only if the systems

f_j^(1)(x) − a_j^(1) t ≤ 0 (j = 1, 2, ..., m_1), −t ≤ 0,

and

f_j^(2)(x) − a_j^(2) t ≤ 0 (j = 1, 2, ..., m_2), −t ≤ 0,

corresponding to them have one and the same solution set.

From Lemmas 2.4 and 2.5 follows:

THEOREM 2.4. If the linear inequality (15) is a consequence of a consistent system (1), then there exist nonnegative elements p_0, p_1, ..., p_m in P such that the relation

f(x) − a = Σ_{j=1}^{m} p_j (f_j(x) − a_j) − p_0 (18)

holds as an identity in x ∈ L(P).

PROOF. Indeed, by Lemma 2.5, in this case inequality (16) is a consequence of system (17). But then, by Lemma 2.4, there exist nonnegative elements p_0, p_1, ..., p_m in P such that the relation

f(x) − at = Σ_{j=1}^{m} p_j (f_j(x) − a_j t) − p_0 t

holds as an identity in x ∈ L(P) and t ∈ P. We get, for t = 1, that relation (18) holds as an identity in x ∈ L(P).
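A concrete instance of representation (18) (our example): for the single-inequality system x_1 − 1 ≤ 0 and its consequence 2x_1 − 3 ≤ 0, the choice p_1 = 2, p_0 = 1 gives the identity 2x_1 − 3 = 2(x_1 − 1) − 1. The sketch verifies the identity pointwise:

```python
p1, p0 = 2, 1                     # nonnegative multipliers in (18)

def lhs(x1):
    """f(x) - a for the consequence 2*x1 - 3 <= 0."""
    return 2 * x1 - 3

def rhs(x1):
    """p1 * (f1(x) - a1) - p0 for the system x1 - 1 <= 0."""
    return p1 * (x1 - 1) - p0

# the identity holds at every sampled value of x1
assert all(abs(lhs(x / 10) - rhs(x / 10)) < 1e-9 for x in range(-50, 51))
```

Note that the constant p_0 is what distinguishes the nonhomogeneous representation (18) from the homogeneous one in Lemma 2.4: on solutions of the system the right-hand side is at most −p_0, so a positive p_0 witnesses a strict consequence.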
The theorem we just proved generalizes the result of Lemma 2.4 to the case of nonhomogeneous inequalities. In what follows we shall refer to it as the Generalized Minkowski-Farkas Theorem, or even simply as the Minkowski-Farkas Theorem.

REMARK. For L(P) = R^n the following generalization of that theorem is well known (see Berge and Ghouila-Houri [1]). Let f(x), f_1(x), ..., f_m(x) be concave real-valued functions defined on R^n. Suppose there exists a number q ≤ m such that the functions f_j(x) (q < j ≤ m) are affinely linear. If the system

f_j(x) ≥ 0 (j = 1, 2, ..., m), f(x) > 0

does not have a solution in R^n, but the system

f_j(x) > 0 (1 ≤ j ≤ q), f_j(x) ≥ 0 (q < j ≤ m)

has at least one solution in R^n, then there exist nonnegative numbers p_1, ..., p_m such that

f(x) + Σ_{j=1}^{m} p_j f_j(x) ≤ 0

for all x in R^n.

A function F(x) on R^n is said to be affinely linear if the function F(x) − F(0) is linear. A function F(x) is said to be concave if the function φ(x) = −F(x) is convex, i.e., if φ(x) satisfies the condition: if x_1, x_2 ∈ R^n and p_1, p_2 ≥ 0 with p_1 + p_2 = 1, then

φ(p_1x_1 + p_2x_2) ≤ p_1φ(x_1) + p_2φ(x_2).

From Theorems 2.2 and 2.4 follows:

COROLLARY 2.1. Suppose the linear inequality (15) is a consequence of a consistent system (1) of rank r > 0. Then system (1) has a subsystem

f_{jk}(x) − a_{jk} ≤ 0 (k = 1, 2, ..., r)

of rank r containing r inequalities such that, for some nonnegative elements p_0, p_{j1}, ..., p_{jr} in P, the relation

f(x) − a = Σ_{k=1}^{r} p_{jk}(f_{jk}(x) − a_{jk}) − p_0

holds as an identity for x ∈ L(P).
From Theorems 2.2 and 2.4 also follows:

COROLLARY 2.2. If the inequality f(x) − a < 0 on L(P) is a consequence of a consistent system (1) of nonzero rank, then it is a consequence of at least one of its nodal subsystems.

PROOF. In this case f(x) − a ≤ 0 is a consequence of system (1). Thus, by Theorem 2.2, it is a consequence of one of its nodal subsystems

f_{jk}(x) − a_{jk} ≤ 0 (k = 1, 2, ..., r),

and hence, by Theorem 2.4, for some nonnegative elements p_0, p_{j1}, ..., p_{jr} of P we have

f(x) − a = Σ_{k=1}^{r} p_{jk}(f_{jk}(x) − a_{jk}) − p_0.

Here p_0 ≠ 0, since for p_0 = 0 each nodal solution of the chosen nodal subsystem would satisfy the equation f(x) − a = 0, and then it would not satisfy the inequality f(x) − a < 0. But for p_0 > 0 it follows from the preceding relation that the inequality f(x) − a < 0 is satisfied by all solutions of the chosen subsystem. Thus f(x) − a < 0 is a consequence of a nodal subsystem of system (1).

Concluding this section, we consider the case where the inequality f(x) − a ≤ 0, with a nonnull function f(x), being a consequence of a consistent system (1) of nonzero rank, is turned into an equation by one of the solutions of that system; i.e., we consider the case where the boundary plane f(x) − a = 0 of the halfspace defined by f(x) − a ≤ 0 is in contact with the polyhedron of our system (1). This case is completely characterized in the next proposition.

Let f(x) − a ≤ 0 be a consequence of a consistent system (1) of nonzero rank. It is turned into an equation by one of the solutions of the system if and only if there do not exist nonnegative elements p_1, ..., p_m in P and a positive element p_0 in P such that the relation

f(x) − a = Σ_{j=1}^{m} p_j(f_j(x) − a_j) − p_0

holds as an identity in x ∈ L(P).

Indeed, the necessity of the condition follows directly from Theorem 2.4 and its sufficiency follows from Theorem 2.2.

The problem we just considered leads to an interesting question about the properties of the collection of points for which f(x) − a = 0. A fundamental property of this collection is
given by: THEOREM 2.5. The track of contact between the polyhedron of solutions of system (1) (of nonzero rank) and a plane, is a face of that polyhedron. Conversely, each face of the polyhedron M is the track of contact between it and some plane.4 * PROOF. Let S be the track of contact between the polyhedron M and some plane a = 0. Assume, without loss of generality, that M is completely contained in the
F (x)
halfspace de…ned by F (x)
a
0
0. But then S coincides with the set, M , of solutions of
the system fj (x) aj 0 (j = 1; 2; :::; m); g F (x) + a 0 But all solutions of this system (together with all solutions of system (1)) satisfy the inequality: F (x) F (x)
≤ 0. Thus, by Theorem 2.4, the relation

  F(x) - a = p(-F(x) + a) + Σ_{j=1}^{m} p_j (f_j(x) - a_j) - p_0

holds for any x ∈ L(P), where p, p_0, ..., p_m are nonnegative elements of P. From this relation we have

  F(x) - a = (1/(1+p)) Σ_{j=1}^{m} p_j (f_j(x) - a_j) - p_0/(1+p).

Substituting for x an arbitrary element x' of M', we get

  (1/(1+p)) Σ_{j=1}^{m} p_j (f_j(x') - a_j) - p_0/(1+p) = 0.

Since all the summands are nonpositive, they are all zero. Thus p_0 = 0, and among the coefficients p_1, ..., p_m only those for which f_j(x') - a_j = 0 for an arbitrary x' ∈ M' can be distinct from zero. Thus, if p_{j_1}, ..., p_{j_h} are those coefficients, with

  f_{j_ℓ}(x') - a_{j_ℓ} = 0 (ℓ = 1, 2, ..., h)*

for an arbitrary element x' in M', then the relation

  F(x) - a = (1/(1+p)) Σ_{ℓ=1}^{h} p_{j_ℓ} (f_{j_ℓ}(x) - a_{j_ℓ})

is an identity in x ∈ L(P).

* From this theorem it follows that the definition of a face of a polyhedron, given towards the end of §3 of Chapter I for finite linear inequality systems, is independent of the choice of the defining system.

By the choice of the differences f_{j_ℓ}(x) - a_{j_ℓ} and by that relation, the set M' coincides with the set of solutions of the system

  f_j(x) - a_j ≤ 0 (j = 1, 2, ..., m; j ≠ j_1, ..., j_h),
  f_{j_ℓ}(x) - a_{j_ℓ} = 0 (ℓ = 1, 2, ..., h).

If f_{j_1}(x), ..., f_{j_k}(x) is a maximal linearly independent subsystem of the system of functions f_{j_ℓ}(x) (ℓ = 1, 2, ..., h), the solution set of the latter system coincides with the solution set of the system

  f_j(x) - a_j ≤ 0 (j = 1, 2, ..., m; j ≠ j_1, ..., j_k),
  f_{j_ℓ}(x) - a_{j_ℓ} = 0 (ℓ = 1, 2, ..., k).

This implies that the set M' coincides with the face M(j_1, ..., j_k) of the polyhedron M. Since M' = S, the first part of the theorem is proved.

Suppose now that M(j_1, ..., j_k) is some face of the polyhedron M. It may be considered as the set M* of solutions of the system

  f_j(x) - a_j ≤ 0 (j = 1, 2, ..., m),
  f_{j_ℓ}(x) - a_{j_ℓ} = 0 (ℓ = 1, 2, ..., k),

where the functions f_{j_ℓ}(x) (ℓ = 1, 2, ..., k) are linearly independent. Obviously, the set of solutions of the equation system

  f_{j_ℓ}(x) - a_{j_ℓ} = 0 (ℓ = 1, 2, ..., k)

coincides with the solution set of the inequality system

  f_{j_ℓ}(x) - a_{j_ℓ} ≤ 0 (ℓ = 1, 2, ..., k),
  -F(x) + a = - Σ_{ℓ=1}^{k} p_{j_ℓ} (f_{j_ℓ}(x) - a_{j_ℓ}) ≤ 0,

with positive coefficients p_{j_ℓ}. Thus the set M* coincides with the set of solutions of the system

  f_j(x) - a_j ≤ 0 (j = 1, 2, ..., m),
  -F(x) + a ≤ 0.

Since the functions f_{j_ℓ}(x) (ℓ = 1, 2, ..., k) are linearly independent, the function F(x) is not a null function. On the other hand, all solutions of system (1) obviously satisfy F(x) - a ≤ 0. Hence the set M* = M(j_1, ..., j_k) coincides with the set of solutions of the system

  f_j(x) - a_j ≤ 0 (j = 1, 2, ..., m),
  F(x) - a = 0,

i.e., with the track of contact between the polyhedron M and the plane F(x) - a = 0. The second part (and hence the whole) of the theorem is proved.
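For P = R, the correspondence just proved between a face and the track of contact with a supporting plane is easy to check numerically. The following sketch (the polyhedron and the weights p_j are illustrative choices of ours, not from the book) builds F as a positive combination of the constraints defining a face and verifies that the maximum of F over M is attained exactly at that face.

```python
# A numerical sketch of the face / contact-plane correspondence for P = R.
# The polyhedron M and the weights p_j are illustrative, not from the text.
import numpy as np
from scipy.optimize import linprog

# M: x <= 1, y <= 1  (f_1 = x, f_2 = y, a = (1, 1))
A = np.array([[1.0, 0.0], [0.0, 1.0]])
a = np.array([1.0, 1.0])

# F = p_1 f_1 + p_2 f_2 with positive weights p = (1, 1), so F(x, y) = x + y,
# and the contact value is p_1 a_1 + p_2 a_2 = 2.
p = np.array([1.0, 1.0])

# Maximize F over M (linprog minimizes, so we negate F).
res = linprog(c=-(p @ A), A_ub=A, b_ub=a, bounds=[(None, None)] * 2)
assert res.status == 0
print(res.x, -res.fun)  # optimum attained on the face M(1, 2) = {(1, 1)}, F = 2
```

The optimizer returns the vertex (1, 1), which is precisely the face M(1, 2) on which both constraints are active, i.e. the track of contact between M and the plane x + y = 2.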
§2. SYSTEMS OF RANK r THAT CONSIST OF r + 1 LINEAR INEQUALITIES

THEOREM 2.6. If system (1) of rank r is inconsistent, then so is at least one of its subsystems of rank r that contains r + 1 inequalities.

PROOF. Suppose system (1) is not consistent. Then the inequality t ≤ 0 is a consequence of the system

  f_j(x) - a_j t ≤ 0 (j = 1, 2, ..., m).

It follows from the inconsistency of system (1) that the largest number of linearly independent functions among the functions f_j(x) - a_j t cannot be equal to r. Indeed, otherwise the system

  f_j(x) - a_j = 0 (j = 1, 2, ..., m)

of linear equations would have a solution in L(P), and so would system (1), as can be seen by using the transformation

  x → (f_1(x), ..., f_m(x)) ∈ P^m (x ∈ L(P))

(see Lemma 1.1). But then the rank of the system of functions

  f_j(x) - a_j t (j = 1, 2, ..., m)

must be r + 1 (this is shown by using that transformation). Hence, by Corollary 2.1, there exists a relation

  t = Σ_{k=1}^{r+1} p_{j_k} (f_{j_k}(x) - a_{j_k} t)

with nonnegative p_{j_1}, ..., p_{j_{r+1}}, where the system of functions

  f_{j_k}(x) - a_{j_k} t (k = 1, 2, ..., r + 1)

is linearly independent. This, in turn, means that the inequality t ≤ 0 is a consequence of the system

  f_{j_k}(x) - a_{j_k} t ≤ 0 (k = 1, 2, ..., r + 1).

But then the system

  f_{j_k}(x) - a_{j_k} ≤ 0 (k = 1, 2, ..., r + 1)

is not consistent. Since its rank is, obviously, equal to r, the theorem is proved.

COROLLARY 2.3. System (1) of rank r is stably consistent if all of its subsystems of rank r that contain r + 1 inequalities are stably consistent.

PROOF. Suppose there exists a positive element ε in P such that all subsystems of rank r that contain r + 1 inequalities of the system

  f_j(x) - (a_j - ε) ≤ 0 (j = 1, 2, ..., m)

are consistent. But then, by Theorem 2.6, the whole system is consistent. Thus system (1) is stably consistent.
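For P = R, Theorem 2.6 can be illustrated by brute-force feasibility checking. The sketch below (the example system is ours, not from the book) uses scipy's `linprog` as a feasibility oracle and locates an inconsistent subsystem of r + 1 inequalities inside a larger inconsistent system.

```python
# Numerical sketch of Theorem 2.6 for P = R (example system is illustrative):
# an inconsistent system of rank r contains an inconsistent subsystem of
# r + 1 inequalities of rank r.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def feasible(A, b):
    """Is {x : A x <= b} nonempty?  (Zero objective, free variables.)"""
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.status == 0          # 0 = solved, 2 = infeasible

# System: x <= 0, y <= 0, -x - y <= -1 (i.e. x + y >= 1), x <= 5.
A = np.array([[1., 0.], [0., 1.], [-1., -1.], [1., 0.]])
b = np.array([0., 0., -1., 5.])
r = np.linalg.matrix_rank(A)        # r = 2

assert not feasible(A, b)           # the whole system is inconsistent
bad = [S for S in combinations(range(len(b)), r + 1)
       if not feasible(A[list(S)], b[list(S)])]
print(bad)                          # [(0, 1, 2)]: x <= 0, y <= 0, x + y >= 1
```

The only inconsistent 3-inequality subsystem is {x ≤ 0, y ≤ 0, x + y ≥ 1}, whose rank is r = 2, exactly as the theorem predicts.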
DEFINITION 2.2. An inconsistent system (1) with m inequalities, m > 1, is said to be irreducibly inconsistent if any of its proper (i.e., not equal to it) subsystems is consistent.

THEOREM 2.7. System (1) (with m > 1) is irreducibly inconsistent iff any of its subsystems with m - 1 inequalities has rank m - 1 and there exist positive elements p_1, ..., p_m in P such that

  Σ_{j=1}^{m} p_j f_j(x) = 0

and

  Σ_{j=1}^{m} p_j a_j < 0.

PROOF. Necessity: Suppose system (1) is irreducibly inconsistent. Then for any of its inequalities f_{j'}(x) - a_{j'} ≤ 0 the inequality -f_{j'}(x) + a_{j'} < 0 is a consequence of the subsystem consisting of its remaining inequalities and, hence, is a consequence of a nodal subsystem

  f_{j_k}(x) - a_{j_k} ≤ 0 (k = 1, 2, ..., ℓ)

of the latter. But then the system

  f_{j_k}(x) - a_{j_k} ≤ 0 (k = 1, 2, ..., ℓ),
  f_{j'}(x) - a_{j'} ≤ 0

is inconsistent. Hence, by the irreducible inconsistency of system (1), ℓ = m - 1. Since the number j' is an arbitrary number in the set 1, 2, ..., m, this implies that any subsystem of system (1) consisting of m - 1 of its inequalities has rank m - 1. The second property of an irreducibly inconsistent system (1) follows directly from Theorem 2.3. Necessity is proved.

Sufficiency: Let system (1) satisfy the two conditions of the theorem. The first condition implies that each proper subsystem of the system is consistent. The second condition, by Theorem 2.3, implies that the system itself is inconsistent. The theorem is proved.

In view of Theorem 2.6, the question of the consistency of a system (1) of rank r reduces to the question of the consistency of its subsystems of rank r that consist of r + 1 inequalities. In connection with this, we study the consistency of systems of rank r > 0 that consist of r + 1 inequalities (such systems with rank r = 0 are not interesting). We introduce the following.

DEFINITION 2.3. (See Shkolnik [1], [2].) A characteristic determinant of the system

  a_{j1} x_1 + ... + a_{jn} x_n - a_j ≤ 0 (j = 1, 2, ..., r + 1)   (19)

of inequalities with rank r > 0 is the determinant obtained by bordering a nonzero minor of order r of its matrix with the column of the elements a_j (j = 1, 2, ..., r + 1) and the row of the matrix that is not included in the minor. If D is an arbitrary characteristic determinant of system (19) and if A_1, ..., A_{r+1} are the cofactors of its elements a_j, then the number of cofactors with the same sign as D, in case D ≠ 0, or the number of contrasts, if D = 0, is called the characteristic of system (19). A contrast is a cofactor whose sign differs from the sign of no fewer than half of the nonzero cofactors among the cofactors A_1, ..., A_{r+1}.

We note here that the definition of the characteristic of system (19) is independent of the choice of the minor D. Indeed, the cofactors A_1, ..., A_{r+1} of the elements a_j of an arbitrary characteristic determinant D satisfy the relation

  Σ_{j=1}^{r+1} a_{ji} A_j = 0 (i = 1, 2, ..., n).

Since the equation system

  Σ_{j=1}^{r+1} a_{ji} u_j = 0 (i = 1, 2, ..., n)
has, obviously, a rank that differs by one from the number of its unknowns, it follows that, of any two nonzero solutions of it, one is a constant multiple of the other. Thus, if any one of the characteristic determinants is nonzero, then all the characteristic determinants are nonzero.

THEOREM 2.8. System (19) of rank r > 0 is inconsistent iff all of its characteristic determinants are nonzero and its characteristic is zero.

PROOF. If system (19) is inconsistent, then an arbitrary characteristic determinant D of it is nonzero. Suppose, then, that the characteristic of system (19) is nonzero and, thus, that one of the cofactors A_k of an element a_k in D has the same sign as D. Let D' be the determinant obtained from D by shifting its kth row to the bottom position and let A'_k be the complementary minor of a_k in D'. Then we have

  D'/A'_k = ((-1)^{r-k+1} D) / ((-1)^{r-k+1} A_k) > 0.

But then, by Theorem 1.5, system (19) is consistent. This contradiction proves the necessity part of the theorem.

Suppose now that the characteristic of system (19) is zero and that D ≠ 0. Then for any nonzero cofactor A_k (A_k is the cofactor of a_k in D) we have D/A_k < 0. Shifting the kth row of D to the bottom position and going from the cofactor A_k to the complementary minor A'_k, we have D'/A'_k < 0. But then, by Theorem 1.5, system (19) is inconsistent. The sufficiency part of the theorem is proved.

This proposition was established (for P = R) by Shkolnik [2].

COROLLARY 2.4. (See Shkolnik [2].) For a system (19) of rank r > 0 with r + 1 inequalities to have a stable solution, it is necessary and sufficient that the system's characteristic be nonzero.

PROOF. Necessity: Suppose system (19) has a stable solution and suppose its characteristic is zero. By Theorem 2.8, any characteristic determinant D of the system is zero. But
this implies that all of the cofactors A_k of the elements a_k in D have the same sign (in view of the assumption about the rank of system (19), at least one of the cofactors A_k is nonzero). Thus, we have the relations

  Σ_{j=1}^{r+1} a_{ji} |A_j| = 0 (i = 1, 2, ..., n)

and

  Σ_{j=1}^{r+1} a_j |A_j| = 0.

Using these relations we get, as an identity in x⃗ = (x_1, ..., x_n), the relation

  Σ_{j=1}^{r+1} |A_j| (a_{j1} x_1 + ... + a_{jn} x_n - a_j) = 0.

Suppose A_{r+1} ≠ 0. Then, using this relation, we conclude that all solutions of system (19) satisfy the equation

  a_{r+1,1} x_1 + ... + a_{r+1,n} x_n - a_{r+1} = 0,

which contradicts our assumption that system (19) has a stable solution. The necessity part of our proposition is proved.

Sufficiency: Suppose the characteristic of system (19) is nonzero. Then, by Theorem 2.8, system (19) is consistent. If it is not stably consistent, then all of its solutions turn at least one of its inequalities, say the (r+1)st, into an equation. But then the inequality

  -a_{r+1,1} x_1 - ... - a_{r+1,n} x_n + a_{r+1} ≤ 0

is a consequence of the system. Thus, by Theorem 2.4, the relation

  -a_{r+1,1} x_1 - ... - a_{r+1,n} x_n + a_{r+1} = Σ_{j=1}^{r+1} p_j (a_{j1} x_1 + ... + a_{jn} x_n - a_j) - p_0,

where all the coefficients p_1, ..., p_{r+1} are nonnegative and the free term p_0 is zero, holds as an identity in x⃗ = (x_1, ..., x_n). Using this relation, we conclude that the system

  Σ_{j=1}^{r+1} a_{ji} u_j = 0 (i = 1, 2, ..., n),
  Σ_{j=1}^{r+1} a_j u_j = 0   (20)

has a nonnegative solution u⁰_j = p_j (j = 1, ..., r), u⁰_{r+1} = p_{r+1} + 1 with a nonzero last coordinate. Thus the rank of system (20) is distinct from r + 1, the number of its unknowns, and hence is smaller by one than the latter. Hence, in particular, any characteristic determinant D of (19) is zero. But then the cofactors A_1, ..., A_{r+1} of the elements a_1, ..., a_{r+1} of D must satisfy system (20). Since the rank of system (20) is smaller than the number of its unknowns by one, any two nonzero solutions of it are constant multiples of one another. Comparing its two nonzero solutions (u⁰_1, ..., u⁰_{r+1}) and (A_1, ..., A_{r+1}), we find that any two nonzero cofactors A_j have the same sign. Since D = 0, this means that the characteristic of
system (19), contrary to our assumption, is zero. Thus, system (19) is stably consistent.

REMARK. By Theorem 2.8 and Corollary 2.4, we have: System (19) of rank r > 0 is unstably consistent iff its characteristic as well as all of its characteristic determinants are zero.

COROLLARY 2.5. (See Fan Ky [1].) Let the first r columns of the matrix A of system (19) be linearly independent and let A* be A's submatrix consisting of them. Let M_j (j = 1, 2, ..., r + 1) denote the minor of order r obtained from A* by deleting its jth row. Then system (19) is irreducibly inconsistent iff the following two conditions are satisfied:

  M_j M_{j+1} < 0 (j = 1, 2, ..., r)   (21)

and

  Σ_{j=1}^{r+1} a_j |M_j| < 0.   (22)

PROOF. Sufficiency: Condition (21) implies that all minors of order r of A* are nonzero. But then all subsystems of system (19) that consist of r inequalities and, hence, all of its proper subsystems, are consistent. In view of (21), all the cofactors of the elements a_j of the determinant

        | a_{11}     ...  a_{1r}     a_1     |
  D =   | ...        ...  ...        ...     |
        | a_{r+1,1}  ...  a_{r+1,r}  a_{r+1} |

have the same sign. If this is a plus sign, then

  D = Σ_{j=1}^{r+1} a_j |M_j|

and, by (22), the cofactors are all of the opposite sign of D. They are of the opposite sign of D also in the case of a minus sign. Indeed, in this case

  D = - Σ_{j=1}^{r+1} a_j |M_j|

and, in view of relation (22), D > 0. Hence the characteristic of our system is zero. Since its characteristic determinants are nonzero, it follows from Theorem 2.8 that the system is inconsistent. This proves sufficiency.

Necessity: Suppose system (19) is irreducibly inconsistent. Then, by Theorem 2.8, its characteristic determinant D is nonzero and all the nonzero cofactors among the cofactors A_1, ..., A_{r+1} of its elements a_1, ..., a_{r+1} have sign different from the sign of D. Consequently

  D = Σ_{j=1}^{r+1} a_j |M_j|

if they are positive, and

  D = - Σ_{j=1}^{r+1} a_j |M_j|

if they are negative. In either case relation (22) follows. If all the minors M_j are nonzero, then relation (21) follows from the fact that all the nonzero cofactors A_j have the same sign. We now show that all the cofactors A_j are indeed nonzero. In view of the already proved constancy of the sign of the nonzero A_j's, we must have

  Σ_{j=1}^{r+1} a_{ji} |M_j| = 0 (i = 1, 2, ..., n)

and, hence, the identity

  Σ_{j=1}^{r+1} |M_j| (a_{j1} x_1 + ... + a_{jn} x_n - a_j) = - Σ_{j=1}^{r+1} a_j |M_j|.

If at least one of the minors M_j, say the minor M_{r+1}, is zero, then these relations together with inequality (22) imply that the system

  a_{j1} x_1 + ... + a_{jn} x_n - a_j ≤ 0 (j = 1, 2, ..., r)

is inconsistent. But this contradicts our assumption that (19) is irreducibly inconsistent. Consequently, all of the A_j's are nonzero.

COROLLARY 2.6. Consider a system (19) of rank r > 0 all of whose proper subsystems are stably consistent. Such a system is unstably consistent iff (in the notation of Corollary 2.5) the following two conditions are satisfied:

  M_j M_{j+1} < 0 (j = 1, 2, ..., r)   (23)

and

  Σ_{j=1}^{r+1} a_j |M_j| = 0.   (24)

PROOF. Sufficiency: Let system (19) satisfy conditions (23) and (24). Then, by (23), all of the nonzero cofactors of the elements a_1, ..., a_{r+1} in the determinant D have the same sign. But then D, except possibly for the sign, is equal to the sum Σ_{j=1}^{r+1} a_j |M_j| and, in view of (24), is equal to zero. Hence the characteristic of system (19) is zero. Sufficiency is proved (see the remark following Corollary 2.4).

Necessity: Suppose our system is unstably consistent. By the remark following Corollary 2.4, its characteristic as well as any of its characteristic determinants is zero. Hence, in particular, the determinant D is zero. Thus, the signs of all the nonzero A_j coincide and, hence, we get relation (24). Almost exactly as in the proof of the previous proposition, we may show that all the A_j are nonzero. But then we get relation (23). Necessity is established.

A system

  ℓ_j(x⃗) - a_j = a_{j1} x_1 + ... + a_{jn} x_n - a_j ≤ 0 (j = 1, 2, ..., m)   (25)
on P^n, which does not contain any inequalities with null linear forms, is inconsistent only if at least one of its subsystems is irreducibly inconsistent. Hence, from Corollary 2.5 follows:

THEOREM 2.9. (See Fan Ky [1].) System (25), containing no inequalities with null linear forms, is inconsistent iff its matrix contains a submatrix

  ( a_{j_1 i_1}  ...  a_{j_1 i_{s-1}} )
  ( ...          ...  ...             )
  ( a_{j_s i_1}  ...  a_{j_s i_{s-1}} )

of rank s - 1 such that

  N_k N_{k+1} < 0 (1 ≤ k ≤ s - 1),
  Σ_{k=1}^{s} a_{j_k} |N_k| < 0,

where N_k (1 ≤ k ≤ s) is the determinant of order s - 1 obtained from that submatrix by eliminating its kth row.

From Corollaries 2.5 and 2.6 follows:

THEOREM 2.10. System (25), which does not contain any inequalities with null linear forms, is unstably consistent iff its matrix contains a submatrix

  ( a_{j_1 i_1}  ...  a_{j_1 i_{s-1}} )
  ( ...          ...  ...             )
  ( a_{j_s i_1}  ...  a_{j_s i_{s-1}} )

of rank s - 1 such that, in the notation of the preceding theorem, we have

  N_k N_{k+1} < 0 (1 ≤ k ≤ s - 1)

and

  Σ_{k=1}^{s} a_{j_k} |N_k| = 0.

Indeed, a system (25) that satisfies the conditions of the theorem is unstably consistent iff it contains an unstably consistent subsystem S all of whose proper subsystems are stably consistent. Using Theorem 2.7, it is not hard to show that the rank of such a subsystem is less by one than the number of its inequalities. Actually, the stable consistency of a system (1) implies the existence of a positive ε ∈ P such that the system obtained from system (1) by replacing the latter's a_j's by (a_j - ε)'s is consistent. If we take such an ε for each proper subsystem of S and choose the least among them, ε_0, then the system S(ε_0), obtained by replacing the a_j's of system S by (a_j - ε_0)'s, would not be consistent. In view of Theorem 2.7, the rank of the system S(ε_0) is less by one than the number of its inequalities. Hence, the assertion of Corollary 2.6 applies to the system S.

COROLLARY 2.7. (See Motzkin [1].) The system

  a_{j1} x_1 + ... + a_{jn} x_n < 0 (j = 1, 2, ..., m)

of homogeneous linear inequalities, which does not contain any inequalities with a null linear form, is inconsistent iff its matrix contains a submatrix

  ( a_{j_1 i_1}  ...  a_{j_1 i_{s-1}} )
  ( ...          ...  ...             )
  ( a_{j_s i_1}  ...  a_{j_s i_{s-1}} )

of rank s - 1 which, in the notation of Theorem 2.9, satisfies

  N_k N_{k+1} < 0 (1 ≤ k ≤ s - 1).
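For P = R, the determinantal criteria of Definition 2.3, Theorem 2.8, and Corollary 2.4 are easy to test numerically. The sketch below (the example systems are ours, not from the book) computes a characteristic determinant D of a system (19) with rank r = n, the cofactors A_j of the free terms a_j, and the characteristic, for the case D ≠ 0.

```python
# Characteristic determinant and characteristic of a system (19)
# a_j1 x_1 + ... + a_jn x_n - a_j <= 0 (j = 1, ..., r + 1), for P = R with
# rank r = n (the bordered determinant then uses all n columns).
# Example systems are illustrative, not taken from the book.
import numpy as np

def characteristic(A, a):
    """Return (D, characteristic) for the bordered determinant D = det[A | a].

    Assumes D != 0; the D = 0 case (counting 'contrasts') is not handled.
    """
    m = len(a)                                  # m = r + 1
    M = np.column_stack([A, a])
    D = np.linalg.det(M)
    # cofactors A_j of the entries a_j in the last column
    cof = np.array([(-1) ** (j + m + 1) * np.linalg.det(
        np.delete(np.delete(M, j, axis=0), m - 1, axis=1)) for j in range(m)])
    return D, int(np.sum(np.sign(cof) == np.sign(D)))

# Inconsistent system: x <= 0, y <= 0, -x - y <= -1 (i.e. x + y >= 1).
D, chi = characteristic(np.array([[1., 0.], [0., 1.], [-1., -1.]]),
                        np.array([0., 0., -1.]))
assert abs(D) > 1e-9 and chi == 0   # Theorem 2.8: D != 0, characteristic 0

# Stably consistent system: x <= 1, y <= 1, -x - y <= 0.
D, chi = characteristic(np.array([[1., 0.], [0., 1.], [-1., -1.]]),
                        np.array([1., 1., 0.]))
assert chi != 0                     # Corollary 2.4: nonzero characteristic
```

The inconsistent system passes Theorem 2.8's test (nonzero determinant, characteristic zero), while the stably consistent one has nonzero characteristic, as Corollary 2.4 requires.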
DEFINITION 2.10. An inequality of a consistent system (1) is said to be a dependent inequality in that system if the set of solutions of the system is unchanged by deleting that inequality from the system. A consistent system (1) that contains at least one dependent inequality is said to be a dependent system.

Since any dependent inequality of system (1) is a consequence of the subsystem consisting of the other inequalities in (1), the following proposition follows from Corollary 2.1: A consistent system (1) with nonzero rank r is dependent iff at least one of its subsystems of rank r with r + 1 inequalities is dependent. In this regard, the following theorem is of interest.

THEOREM 2.11. (See Shkolnik [2].) Consider a consistent system (25) with rank r > 0, m = r + 1 inequalities, and no inequalities that have null linear forms. This system is dependent iff its characteristic is equal to one. Let D be a characteristic determinant of such a system with characteristic equal to one and let A_1, ..., A_{r+1} be the cofactors of the elements a_1, ..., a_{r+1} in D. If D ≠ 0, then the inequalities for which A_j has the same sign as D are dependent. If D = 0, the inequalities for which A_j is a contrast are dependent (see Definition 2.3).

PROOF. Suppose, for example, the last inequality of (25) (m = r + 1) is dependent. Then, by Theorem 2.4, for some nonnegative p_0, p_1, ..., p_{m-1} the relation

  ℓ_m(x⃗) - a_m = Σ_{j=1}^{m-1} p_j (ℓ_j(x⃗) - a_j) - p_0

holds as an identity in x⃗ = (x_1, ..., x_n). From the relation, it follows that the system

  a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., n),
  a_1 u_1 + ... + a_m u_m ≤ 0   (26)

has a solution (p_1, ..., p_{m-1}, -1). Hence, any two nonzero solutions of the system

  a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., n)

differ only by a constant multiple (this follows from the fact that the system's rank is one less than the number of its unknowns). Also it, obviously, has the solution (A_1, ..., A_m) (m = r + 1). Thus, we must have either

  A_1 ≥ 0, ..., A_{m-1} ≥ 0, A_m < 0

and

  D = a_1 A_1 + ... + a_m A_m ≤ 0,

or

  A_1 ≤ 0, ..., A_{m-1} ≤ 0, A_m > 0

and

  D = a_1 A_1 + ... + a_m A_m ≥ 0.

The fact that system (25) contains no inequalities with null linear forms implies that at least one of the elements p_1, ..., p_{m-1} is nonzero. Hence, at least one of the cofactors A_1, ..., A_{m-1} is nonzero.

The assertion just proved implies that the characteristic of system (25) is one (see Definition 2.3). This proves the necessity in the first (fundamental) part of the theorem. The sufficiency will be proved simultaneously with the proof of the second part of the theorem, which is presented below.

Suppose the characteristic of our system (25) is one and that, e.g., A_m < 0 and A_1 ≥ 0, ..., A_{m-1} ≥ 0. Then, by the assumption that the system's characteristic is one, we have

  D = a_1 A_1 + ... + a_m A_m ≤ 0.

Consequently (A_1, ..., A_m) is a solution of system (26) and hence

  ℓ_m(x⃗) - a_m = Σ_{j=1}^{m-1} (-A_j/A_m)(ℓ_j(x⃗) - a_j) - D/A_m.   (27)

Since -A_j/A_m ≥ 0 (j = 1, 2, ..., m - 1) and D/A_m ≥ 0, it follows from relation (27) that the inequality ℓ_m(x⃗) - a_m ≤ 0 is a dependent inequality in system (25). If we here take instead A_m > 0 and A_1 ≤ 0, ..., A_{m-1} ≤ 0, then

  D = a_1 A_1 + ... + a_m A_m ≥ 0,

and hence (-A_1, ..., -A_m) will be a solution of system (26). But then relation (27) would also be satisfied in this case. Since all its coefficients -A_j/A_m are nonnegative and since D/A_m ≥ 0, the inequality ℓ_m(x⃗) - a_m ≤ 0 is again a dependent inequality. The theorem is proved.
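For P = R, Theorem 2.11's criterion can be cross-checked against a direct redundancy test by linear programming. In the sketch below (the example system is ours, not from the book), the characteristic equals one, the flagged inequality is the one whose cofactor shares the sign of D, and an LP confirms it is indeed dependent.

```python
# Sketch of Theorem 2.11 for P = R with the illustrative system
# x <= 1, y <= 1, x + y <= 3 (rank r = 2, m = r + 1 = 3).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
a = np.array([1., 1., 3.])

M = np.column_stack([A, a])
D = np.linalg.det(M)
cof = np.array([(-1) ** (j + 4) * np.linalg.det(
    np.delete(np.delete(M, j, axis=0), 2, axis=1)) for j in range(3)])
same = [j for j in range(3) if np.sign(cof[j]) == np.sign(D)]
print(D, cof, same)          # characteristic = len(same) = 1; same == [2]

# Cross-check by LP: the third inequality is dependent iff the maximum of
# x + y subject to the remaining inequalities does not exceed 3.
res = linprog(c=-A[2], A_ub=A[:2], b_ub=a[:2], bounds=[(None, None)] * 2)
assert -res.fun <= 3 + 1e-9  # max x + y = 2 <= 3: the inequality is redundant
```

Here D = 1 and only the cofactor of a_3 is positive, so the characteristic is one and the third inequality x + y ≤ 3 is the dependent one, in agreement with the LP check.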
§3. SOME APPLICATIONS OF THE RESULTS OF §1

THEOREM 2.12. A consistent system (1) is stably consistent iff there do not exist nonnegative elements q_0, q_1, ..., q_m of the field P, not all of which are zero, such that the relation

  Σ_{j=1}^{m} q_j (f_j(x) - a_j) - q_0 = 0   (28)

holds as an identity in x ∈ L(P).

PROOF. Necessity: Suppose system (1) is stably consistent. Then it has a solution x_0 such that

  f_j(x_0) - a_j < 0 (j = 1, 2, ..., m).

But then, for any nonnegative elements q_0, q_1, ..., q_m of the field P, not all of them zero, we have

  Σ_{j=1}^{m} q_j (f_j(x_0) - a_j) - q_0 < 0.

Consequently, for a stably consistent system (1), relation (28) does not hold for any nonnegative elements q_0, q_1, ..., q_m of the field P that are not all zero.

Sufficiency: Let a consistent system (1) satisfy the condition of the theorem. We shall show that it is stably consistent. Indeed, if it is not stably consistent, then all of its solutions turn some one of its inequalities into an equation. Actually, if for an arbitrary inequality f_j(x) - a_j ≤ 0 of the system we could find a solution x^j of the system such that f_j(x^j) - a_j < 0, then the element

  x' = (x^1 + ... + x^m)/m

would satisfy all the inequalities

  f_j(x) - a_j < 0 (j = 1, 2, ..., m),

which contradicts our hypothesis of the unstable consistency of system (1). Suppose

  f_{j'}(x) - a_{j'} ≤ 0

is one of the inequalities of system (1) that is turned into an equation by all of its solutions. Then the inequality -f_{j'}(x) + a_{j'} ≤ 0 is a consequence of system (1). Using Theorem 2.4, we conclude that for some nonnegative elements q'_1, ..., q'_m, q'_0 the relation

  -f_{j'}(x) + a_{j'} = Σ_{j=1}^{m} q'_j (f_j(x) - a_j) - q'_0

holds as an identity in x ∈ L(P). But then for the nonnegative elements

  q_j = q'_j (j ≠ j'),  q_{j'} = q'_{j'} + 1  (j = 1, 2, ..., m),

of which q_{j'} is nonzero, relation (28) holds with q_0 = q'_0. But this contradicts our assumption about system (1). This proves sufficiency.

In the case where system (1) consists of homogeneous inequalities on R^n, the above theorem reduces to the well-known Voronoi criterion for the n-dimensionality of the polyhedron of solutions of the system. In this case we need only consider relation (28) with q_0 = 0, and Voronoi's theorem becomes equivalent to the next proposition (for the special case P = R).

COROLLARY 2.8. The system

  a_{j1} x_1 + ... + a_{jn} x_n < 0 (j = 1, 2, ..., m)

of linear inequalities on P^n is inconsistent iff the equation system

  a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., n)
has a positive solution.

THEOREM 2.13. For an arbitrary matrix

      ( a_11  ...  a_1n )
  A = ( ...   ...  ...  )
      ( a_m1  ...  a_mn )

over an arbitrary ordered field P, at least one solution (x⃗, u⃗) = (x_1, ..., x_n, u_1, ..., u_m) of the system

  ℓ_j(x⃗) = a_{j1} x_1 + ... + a_{jn} x_n ≤ 0 (j = 1, 2, ..., m),
  ℓ'_i(u⃗) = a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., n)   (29)

that is nonnegative relative to the unknown tuple u⃗ = (u_1, ..., u_m) satisfies the system

  ℓ_j(x⃗) + u_j = a_{j1} x_1 + ... + a_{jn} x_n + u_j > 0 (j = 1, 2, ..., m).   (30)

PROOF. It suffices to show that for an arbitrary inequality

  ℓ_j(x⃗) + u_j = a_{j1} x_1 + ... + a_{jn} x_n + u_j > 0

there exists a solution (x⃗, u⃗) of system (29), nonnegative relative to u⃗, that satisfies it. Indeed, then the solution

  (x⃗^0, u⃗^0) = Σ_{j=1}^{m} (x⃗^j, u⃗^j)

of system (29), nonnegative relative to u⃗, would satisfy (30). Let

  ℓ_{j_0}(x⃗) + u_{j_0} = a_{j_0 1} x_1 + ... + a_{j_0 n} x_n + u_{j_0} > 0

be an inequality in system (30) which is not satisfied by any of the solutions of system (29) that are nonnegative relative to u⃗. Then all of the solutions of this type must satisfy

  ℓ_{j_0}(x⃗) + u_{j_0} ≤ 0.

Using Theorem 2.4, we get, for some nonnegative elements p_1, ..., p_m of the field P, that the relation

  ℓ_{j_0}(x⃗) = - Σ_{j=1}^{m} p_j ℓ_j(x⃗)   (31)

is satisfied as an identity in x⃗. Hence

  p⃗ = (p_1, ..., p_{j_0 - 1}, p_{j_0} + 1, p_{j_0 + 1}, ..., p_m)

is a nonnegative solution of the system

  ℓ'_i(u⃗) = 0 (i = 1, 2, ..., n)

with a nonzero j_0-th coordinate. On the other hand, it follows from relation (31) that all solutions of the system

  ℓ_j(x⃗) = 0 (j = 1, 2, ..., m)

satisfy the equation ℓ_{j_0}(x⃗) = 0. But then, for any solution x⃗ of that system, the solution (x⃗, u⃗) of system (29) with u⃗ = p⃗, nonnegative relative to u⃗, would not satisfy the inequality ℓ_{j_0}(x⃗) + u_{j_0} ≤ 0. This contradiction proves our assertion and, hence, the theorem.
For P = R, Theorem 2.13 was proved by Tucker [1]. Zoutendijk [1] proved that, for P = R, it follows directly from the Minkowski-Farkas Theorem. Tucker's theorem may be used as a fundamental theorem in the theory of linear programming.

COROLLARY 2.9. The equation system

  ℓ'_i(u⃗) = a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., n)

has a positive solution if the system

  ℓ_j(x⃗) = a_{j1} x_1 + ... + a_{jn} x_n ≤ 0 (j = 1, 2, ..., m)

does not have a stable solution. Furthermore, the first system has a strictly positive solution if the second system is equivalent to the equation system ℓ_j(x⃗) = 0 (j = 1, 2, ..., m).

PROOF. By Theorem 2.13, the system

  ℓ_j(x⃗) ≤ 0 (j = 1, 2, ..., m),
  ℓ'_i(u⃗) = 0 (i = 1, 2, ..., n),
  ℓ_j(x⃗) + u_j > 0 (j = 1, 2, ..., m)

has a solution (x⃗^0, u⃗^0) = (x^0_1, ..., x^0_n, u^0_1, ..., u^0_m) that is nonnegative relative to u⃗. In the first case ℓ_j(x⃗^0) = 0 for at least one j, and hence we have

  u^0_j = ℓ_j(x⃗^0) + u^0_j > 0

for at least one j. Hence, in this case, the system ℓ'_i(u⃗) = 0 (i = 1, 2, ..., n) has a positive solution (u^0_1, ..., u^0_m). In the second case ℓ_j(x⃗^0) = 0 for all j and hence all the coordinates of the solution (u^0_1, ..., u^0_m) are positive. Hence, in the second case, the system has a strictly positive solution.

This proposition is contained in the paper of Tucker [1]. It was published earlier in Stiemke [1], and its first part was published still earlier in Gordan [1].

REMARK. The second part of Corollary 2.9 may be extended in the following manner: All solutions of the system of inequalities

  ℓ_j(x⃗) = a_{j1} x_1 + ... + a_{jn} x_n ≤ 0 (j = 1, 2, ..., m)

satisfy the equation system

  ℓ_j(x⃗) = 0 (j = 1, 2, ..., m)

iff the equation system

  ℓ'_i(u⃗) = a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., n)

possesses a strictly positive solution. Indeed, if the equation system ℓ'_i(u⃗) = 0 (i = 1, 2, ..., n) has a strictly positive solution u⃗^0 = (u^0_1, ..., u^0_m), then the relation

  Σ_{j=1}^{m} u^0_j ℓ_j(x⃗) = Σ_{j=1}^{m} u^0_j (a_{j1} x_1 + ... + a_{jn} x_n) = 0

holds as an identity in x_1, ..., x_n. Hence, each solution of the system ℓ_j(x⃗) ≤ 0 turns all of its inequalities into equations.
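For P = R, the remark (Stiemke's theorem) can be tested with a small linear program. The sketch below (the matrices are illustrative choices of ours, not from the book) checks both directions; strict positivity of u is imposed via the normalization u ≥ 1, which is harmless because the solution set of ℓ'_i(u⃗) = 0 is a cone.

```python
# Numerical sketch of the remark (Stiemke's theorem) for P = R;
# the example matrices are illustrative, not from the book.
import numpy as np
from scipy.optimize import linprog

def has_strictly_positive_kernel_vector(A):
    """Is there u > 0 with A^T u = 0?  By scale-invariance we may demand u >= 1."""
    m = A.shape[0]
    res = linprog(c=np.zeros(m), A_eq=A.T, b_eq=np.zeros(A.shape[1]),
                  bounds=[(1, None)] * m)
    return res.status == 0

# Here A x <= 0 forces x = 0, so every solution turns all inequalities
# into equations:
A1 = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
assert has_strictly_positive_kernel_vector(A1)      # u = (1, 1, 1, 1) works

# Here A x <= 0 has solutions with A x != 0 (e.g. x = (-1, 0)):
A2 = np.array([[1., 0.], [0., 1.], [0., -1.]])
assert not has_strictly_positive_kernel_vector(A2)  # u_1 > 0 is impossible
```

For A1 the inequality system pins x down to the origin and a strictly positive kernel vector of the transpose exists; for A2 it does not, exactly as the equivalence asserts.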
THEOREM 2.14. One and only one of the following alternatives occurs: Either the system

  a_{1i} u_1 + ... + a_{mi} u_m - b_i ≤ 0 (i = 1, 2, ..., n)   (32)

has a nonnegative solution, or the system

  a_{j1} x_1 + ... + a_{jn} x_n ≥ 0 (j = 1, 2, ..., m),
  b_1 x_1 + ... + b_n x_n < 0   (33)

has a nonnegative solution.

PROOF. If system (32) has a nonnegative solution, then the system

  a_{1i} u_1 + ... + a_{mi} u_m - b_i ≤ 0 (i = 1, 2, ..., n),
  -u_j ≤ 0 (j = 1, 2, ..., m)   (34)

is consistent. Hence, by Theorem 2.3, for any nonnegative elements x_1, ..., x_n, x_{n+1}, ..., x_{n+m} of the field P for which the relation

  Σ_{i=1}^{n} x_i (a_{1i} u_1 + ... + a_{mi} u_m) - x_{n+1} u_1 - ... - x_{n+m} u_m = 0   (35)

holds as an identity in u_1, ..., u_m, we have

  b_1 x_1 + ... + b_n x_n ≥ 0.

Hence system (33) has no nonnegative solution. If system (32) does not have a nonnegative solution, then system (34) is inconsistent and hence, by Theorem 2.3, there exist nonnegative elements x^0_1, ..., x^0_n, x^0_{n+1}, ..., x^0_{n+m} which satisfy relation (35) and the inequality

  b_1 x^0_1 + ... + b_n x^0_n < 0.

This implies that (x^0_1, ..., x^0_n) is a nonnegative solution of system (33). The theorem is proved. For P = R, this theorem was proved in the book by Gale [2].

COROLLARY 2.10. One and only one of the following alternatives occurs: Either the system

  a_{1i} u_1 + ... + a_{mi} u_m ≤ 0 (i = 1, 2, ..., n)

has a positive solution, or the system

  a_{j1} x_1 + ... + a_{jn} x_n > 0 (j = 1, 2, ..., m)

has a nonnegative solution.

PROOF. Suppose the first system does not have a positive solution. Then the system

  a_{1i} u_1 + ... + a_{mi} u_m ≤ 0 (i = 1, 2, ..., n),
  -u_1 - ... - u_m ≤ -1

has no nonnegative solution. Thus, in view of Theorem 2.14, the system

  a_{j1} x_1 + ... + a_{jn} x_n - x_{n+1} ≥ 0 (j = 1, 2, ..., m),
  -x_{n+1} < 0

has a nonnegative solution. But then the system

  a_{j1} x_1 + ... + a_{jn} x_n > 0 (j = 1, 2, ..., m)

has a nonnegative solution.

Suppose now that the first system has a positive solution (u^0_1, ..., u^0_m) and that the second system has a nonnegative solution (x^0_1, ..., x^0_n). Then, multiplying the inequalities

  a_{j1} x^0_1 + ... + a_{jn} x^0_n > 0 (j = 1, 2, ..., m)

respectively by u^0_1, ..., u^0_m and summing, we get

  Σ_{j=1}^{m} Σ_{i=1}^{n} a_{ji} x^0_i u^0_j = Σ_{j=1}^{m} (a_{j1} x^0_1 + ... + a_{jn} x^0_n) u^0_j > 0.

On the other hand, multiplying the inequalities

  a_{1i} u^0_1 + ... + a_{mi} u^0_m ≤ 0 (i = 1, 2, ..., n)

respectively by x^0_1, ..., x^0_n and summing, we get

  Σ_{i=1}^{n} Σ_{j=1}^{m} a_{ji} u^0_j x^0_i = Σ_{i=1}^{n} x^0_i (a_{1i} u^0_1 + ... + a_{mi} u^0_m) ≤ 0.

Since

  Σ_{j=1}^{m} Σ_{i=1}^{n} a_{ji} x^0_i u^0_j = Σ_{i=1}^{n} Σ_{j=1}^{m} a_{ji} u^0_j x^0_i,

we get a contradiction. Consequently the existence of a positive solution of the first system precludes the existence of a nonnegative solution of the second system. The corollary is proved.

THEOREM 2.15. Let x⃗ = (x_1, ..., x_n) and y⃗ = (y_1, ..., y_n) be elements of P^n (P is an arbitrary ordered field) that satisfy the relations

  x_i ≥ 0 (i = 1, 2, ..., n), Σ_{i=1}^{n} x_i = 1,   (36)
  y_i ≥ 0 (i = 1, 2, ..., n), Σ_{i=1}^{n} y_i = 1,   (37)

and let (a_{ij}) be a matrix of order n with elements in the field P. Then

  min_y max_x Σ_{j,i=1}^{n} a_{ij} x_i y_j and max_x min_y Σ_{j,i=1}^{n} a_{ij} x_i y_j

exist and are equal to each other.

For P = R this theorem is known as the minimax theorem of J. von Neumann (see Beckenbach and Bellman [1]).

PROOF. For given values of y_1, ..., y_n that satisfy condition (37), the linear form

  Σ_{j,i=1}^{n} a_{ij} x_i y_j = Σ_{i=1}^{n} (a_{i1} y_1 + ... + a_{in} y_n) x_i

with unknowns x_1, ..., x_n is, obviously, bounded above on the polyhedron M_1 of solutions of system (36) (replacing the equation in it by a pair of inequalities). Thus, in view of Theorem 2.1, its value on at least one of the vertices

  (1, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, ..., 0, 1)

is its maximum value on M_1. Hence

  max_{x ∈ M_1} Σ_{j,i=1}^{n} a_{ij} x_i y_j = max_i (a_{i1} y_1 + ... + a_{in} y_n).   (38)

Analogously, for fixed values of x_1, ..., x_n we get the relation

  min_{y ∈ M_2} Σ_{j,i=1}^{n} a_{ij} x_i y_j = min_j (a_{1j} x_1 + ... + a_{nj} x_n)   (39)

(M_2 is the polyhedron of system (37)).

Suppose further that T_1 is the smallest value of the unknown y_{n+1} for which the system

  y_i ≥ 0 (i = 1, 2, ..., n), Σ_{i=1}^{n} y_i = 1,
  a_{i1} y_1 + ... + a_{in} y_n - y_{n+1} ≤ 0 (i = 1, 2, ..., n)   (40)

is consistent, and let T_2 be the greatest value of the unknown x_{n+1} for which the system

  x_i ≥ 0 (i = 1, 2, ..., n), Σ_{i=1}^{n} x_i = 1,
  -a_{1j} x_1 - ... - a_{nj} x_n + x_{n+1} ≤ 0 (j = 1, 2, ..., n)   (41)

is consistent. In view of Theorem 2.1, the existence of T_1 and T_2 follows from the boundedness of the linear functions y_{n+1} and x_{n+1}, respectively from below and from above, on the solution sets of systems (40) and (41) (this, in turn, follows from the boundedness of the sets M_1 and M_2). Now

  T_1 = min_{y ∈ M_2} max_i (a_{i1} y_1 + ... + a_{in} y_n),
  T_2 = max_{x ∈ M_1} min_j (a_{1j} x_1 + ... + a_{nj} x_n).

Thus, in view of (38) and (39), this establishes the existence of

  min_y max_x Σ_{j,i=1}^{n} a_{ij} x_i y_j and max_x min_y Σ_{j,i=1}^{n} a_{ij} x_i y_j,

which are equal to T_1 and T_2, respectively.
We now show that T_1 = T_2. Note first that the inequality -y_{n+1} + T_1 ≤ 0 is a consequence of system (40) and that the inequality x_{n+1} - T_2 ≤ 0 is a consequence of system (41).

Applying Theorem 2.4 to the first inequality and system (40) (replacing the equation by two inequalities), we get the relation

  -y_{n+1} + T_1 = - Σ_{i=1}^{n} p_i y_i + q'(Σ_{i=1}^{n} y_i - 1) + q''(- Σ_{i=1}^{n} y_i + 1) + Σ_{i=1}^{n} x_i (a_{i1} y_1 + ... + a_{in} y_n - y_{n+1}) - p_0,

in which all the coefficients p_0, p_1, ..., p_n, x_1, ..., x_n, q' and q'' are nonnegative. Since T_1 is attained on the set of solutions of system (40), it follows that p_0 = 0. From the preceding relation we get

  Σ_{i=1}^{n} x_i = 1,

  -a_{1j} x_1 - ... - a_{nj} x_n + (-q' + q'') = -p_j ≤ 0 (j = 1, 2, ..., n),

  T_1 = -q' + q''.

But this means that for x_{n+1} = T_1 system (41) is consistent. Hence T_1 ≤ T_2 (since T_2 is the largest value of x_{n+1} for which it is consistent).

On the other hand, for any fixed x⃗ satisfying (36) we have

  min_y Σ_{j,i=1}^{n} a_{ij} x_i y_j ≤ Σ_{j,i=1}^{n} a_{ij} x_i y_j,

and for any given y⃗ satisfying (37) we have

  Σ_{j,i=1}^{n} a_{ij} x_i y_j ≤ max_x Σ_{j,i=1}^{n} a_{ij} x_i y_j.

Hence

  min_y Σ_{j,i=1}^{n} a_{ij} x_i y_j ≤ max_x Σ_{j,i=1}^{n} a_{ij} x_i y_j.

The left-hand side is independent of y⃗. Thus

  min_y Σ_{j,i=1}^{n} a_{ij} x_i y_j ≤ min_y max_x Σ_{j,i=1}^{n} a_{ij} x_i y_j.

Also, the right-hand side is independent of x⃗. Hence

  max_x min_y Σ_{j,i=1}^{n} a_{ij} x_i y_j ≤ min_y max_x Σ_{j,i=1}^{n} a_{ij} x_i y_j,

i.e., T_2 ≤ T_1. But we have shown that T_1 ≤ T_2. Thus T_1 = T_2. Consequently

  min_y max_x Σ_{j,i=1}^{n} a_{ij} x_i y_j = max_x min_y Σ_{j,i=1}^{n} a_{ij} x_i y_j,

as we were to show.
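For P = R, the quantities T_1 and T_2 of the proof are the optimal values of two linear programs built from systems (40) and (41), and their equality can be checked directly. The sketch below uses an illustrative payoff matrix (matching pennies, our choice, not from the book).

```python
# Computing T_1 and T_2 of the proof by linear programming (P = R);
# the payoff matrix is an illustrative choice (matching pennies).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., -1.], [-1., 1.]])
n = A.shape[0]

# T_1 = min y_{n+1} subject to system (40): y >= 0, sum y = 1, A y <= y_{n+1}.
res1 = linprog(c=np.r_[np.zeros(n), 1.0],
               A_ub=np.c_[A, -np.ones(n)], b_ub=np.zeros(n),
               A_eq=[np.r_[np.ones(n), 0.0]], b_eq=[1.0],
               bounds=[(0, None)] * n + [(None, None)])
T1 = res1.fun

# T_2 = max x_{n+1} subject to system (41): x >= 0, sum x = 1, A^T x >= x_{n+1}.
res2 = linprog(c=np.r_[np.zeros(n), -1.0],
               A_ub=np.c_[-A.T, np.ones(n)], b_ub=np.zeros(n),
               A_eq=[np.r_[np.ones(n), 0.0]], b_eq=[1.0],
               bounds=[(0, None)] * n + [(None, None)])
T2 = -res2.fun

assert abs(T1 - T2) < 1e-9    # the common value of the game (here 0)
```

The two optima coincide, as Theorem 2.15 guarantees; for this symmetric matrix the common value is 0, attained at the mixed strategies x = y = (1/2, 1/2).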
x4. WEYL’S THEOREM ON CONVEX CONES WITH FINITE SETS OF GENERA-
TION ELEMENTS, AND SOME OF ITS COROLLARIES
DEFINITION 2.4. A convex cone, generated by a finite set of elements $\vec{x}^k = (x_{k1}, \dots, x_{kn})$ $(k = 1, 2, \dots, \ell)$ of the space $P^n$ ($P$ is an arbitrary ordered field), is defined to be the set of elements $\vec{x} \in P^n$ such that
$$\vec{x} = p_1\vec{x}^1 + \dots + p_\ell\vec{x}^\ell,$$
where $p_1, \dots, p_\ell$ are arbitrary nonnegative elements of the field $P$. Here, the elements $\vec{x}^1, \dots, \vec{x}^\ell$ are called the generating elements of the cone. A convex cone with a finite set of generating elements is said to be a finitely generated cone.

A basis of a finitely generated convex cone is a finite subset $K$ of the set of generating elements of the cone which does not contain any element that could be expressed as a linear combination of some other elements of $K$ with nonnegative coefficients from $P$. The dimension of a finitely generated convex cone is the rank of any finite set that generates it. A convex cone is said to be acute if it does not contain any subspace of nonzero dimension.

DEFINITION 2.5. A support of a finite set $S$ of elements of $P^n$ is defined by way of an inequality
$$a_1x_1 + \dots + a_nx_n \ge 0 \quad ((a_1, \dots, a_n) \ne (0, \dots, 0))$$
on $P^n$ that is satisfied by all elements of $S$. If $r > 0$ is the rank of $S$, then a support of $S$ is called a boundary support if its defining inequality is turned into an equation by no less than $r - 1$ linearly independent elements of $S$. If $r = 0$, then any half space of $P^n$ whose bounding plane passes through the zero element $(0, \dots, 0)$ may be taken as a boundary support of $S$.

A boundary support of a set $S$ is said to be a boundary support of the first type if its defining inequality is satisfied as an equality by $r$ linearly independent elements of $S$; otherwise, it is called a boundary support of the second type. Two boundary supports of the second type are considered nonessentially distinct if the inequalities defining them are turned into equations by the same subset of $S$ with $r - 1$ linearly independent elements.

If $r = n$, then $S$ cannot have any boundary supports of the first type. Clearly, in this case a finite set $S$ can have only a finite number of boundary supports. For $n > 1$, they may be obtained from all possible combinations of $n - 1$ linearly independent elements of $S$ according to the following criterion: each of the two half spaces
$$\pm(a_1x_1 + \dots + a_nx_n) \ge 0,$$
defined by the hyperplane $a_1x_1 + \dots + a_nx_n = 0$ passing through such a combination, is a boundary support of $S$ whenever it is a support of $S$. For $n = 1$, the inequality that defines a boundary support of $S$ is not turned into an equation by any nonzero element of $S$, i.e., it is turned into an equation by $n - 1 = 0$ linearly independent elements of $S$. Analogously, for $r = 1$ the inequality that defines a boundary support of a set $S$ need not be turned into an equation by any element of $S$.

LEMMA 2.6. If the inequality
$$f(x) = a_1f_1(x) + \dots + a_mf_m(x) \le 0,$$
where $a_1, \dots, a_m$ are elements of the field $P$, is not a consequence of the system
$$f_j(x) \ge 0 \quad (j = 1, 2, \dots, m)$$
of linear inequalities (on $L(P)$) with rank $r > 0$, then there exists a nonnodal solution $x = x_0$ of the system, turning $r - 1$ of its inequalities with linearly independent functions $f_j(x)$ into equations, such that $f(x_0) > 0$. For $r = 1$, $x_0$ does not turn any inequality with a nonnull function $f_j(x)$ into an equation.
PROOF. By the hypothesis of the lemma, there exists a solution $x = x'$ of the above system such that $f(x') > 0$. But then the system
$$f_j(x) \ge 0 \quad (j = 1, 2, \dots, m), \qquad f(x) - t' \ge 0$$
is consistent for $t' = \frac{1}{2}f(x')$. Since its rank is different from zero, it follows from Theorem 1.1 that it has a nodal solution $x = x_0$. Now $x_0$ cannot turn $r$ inequalities $f_j(x) \ge 0$ with linearly independent functions $f_j(x)$ into equations: if it did, then, the rank of the system being $r$, every $f_j$, and with it $f$, would vanish at $x_0$, so that the inequality
$$f(x_0) - t' = -t' \ge 0$$
would hold, which is not possible. But then, by the definition of nodal solutions, $x = x_0$ must turn $f(x) - t' \ge 0$ and $r - 1$ inequalities $f_j(x) \ge 0$ with linearly independent functions $f_j(x)$ into equations. In particular $f(x_0) = t' > 0$. The lemma is proved.

REMARK. If in Lemma 2.6 we take $f_j(\vec{x}) = a_{j1}x_1 + \dots + a_{jn}x_n$ $(j = 1, 2, \dots, m)$ and $f(\vec{x}) = b_1x_1 + \dots + b_nx_n$, where $a_{ji}$ and $b_i$ are elements of the field $P$, then for $r = n$ there exist $a_1, \dots, a_m \in P$ such that
$$f(\vec{x}) = a_1f_1(\vec{x}) + \dots + a_mf_m(\vec{x}).$$
Hence the following proposition follows from Lemma 2.6. If the inequality
$$f(\vec{x}) = b_1x_1 + \dots + b_nx_n \le 0$$
(on $P^n$) is not a consequence of the system
$$f_j(\vec{x}) = a_{j1}x_1 + \dots + a_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m)$$
(on $P^n$), which has rank $n$, then there exists a solution $\vec{x}^0 = (x_1^0, \dots, x_n^0)$ of the system that turns $n - 1$ of its inequalities, with linearly independent linear forms, into equations and satisfies $f(\vec{x}^0) > 0$.

LEMMA 2.7. (See Weyl [1].) Let $C$ be a convex cone generated by a finite set of elements $S$ in the space $P^n$. Suppose $S$ has rank $n$ and suppose $C$ is distinct from $P^n$. Then $S$ has boundary supports (a finite number of them) and their intersection coincides with $C$.
PROOF. Let
$$(a_{j1}, \dots, a_{jn}) \quad (j = 1, 2, \dots, m)$$
be the elements of $S$. If $\vec{b} = (b_1, \dots, b_n)$ is an arbitrary element of $P^n$ which is not an element of $C$, then by Theorem 2.4 the inequality
$$b_1x_1 + \dots + b_nx_n \ge 0$$
could not be a consequence of the system
$$a_{j1}x_1 + \dots + a_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m)$$
corresponding to the set $S$. But then, applying the corollary of Lemma 2.6 (stated in the remark following that lemma) to $f(\vec{x}) = -(b_1x_1 + \dots + b_nx_n)$, there exists a solution $\vec{x}^0 = (x_1^0, \dots, x_n^0)$ of that system, turning $n - 1$ of its inequalities with linearly independent forms into equations, with $b_1x_1^0 + \dots + b_nx_n^0 < 0$. Clearly, the half space
$$x_1^0x_1 + \dots + x_n^0x_n \ge 0$$
corresponding to it is a boundary support of $S$ which contains the cone $C$ and does not contain the element $\vec{b}$. But $\vec{b}$ was an arbitrary element of $P^n$ not in $C$. Thus the existence of such boundary supports of $S$ implies that the cone $C$ is the intersection of all of them. The lemma is proved.

For $P = \mathbb{R}$, Lemma 2.7 follows from Weyl's well-known theorem on convex pyramids.

THEOREM 2.16. Let $C$ be a convex cone generated by a finite set $S$ of elements of $P^n$, and suppose $C$ is distinct from $P^n$. Then the set $S$ possesses boundary supports, and it is possible to choose from these supports a finite number whose intersection is the cone $C$.

PROOF. If the set $S$ does not contain any nonzero elements, then the proposition is trivial. Thus we shall assume that it has at least one nonzero element. If the rank $r$ of $S$ is equal to $n$, then the theorem coincides with Lemma 2.7. Hence we will assume that $r < n$. Let $\bar{C}$ denote the minimal linear subspace of $P^n$ that contains $S$, i.e., the subspace that consists of all finite linear combinations of the elements of $S$ with coefficients in $P$. The subspace $\bar{C}$ may be represented as the intersection of a finite number of planes in $P^n$. In case $C = \bar{C}$, the assertion of the theorem easily follows from this last remark. Now suppose the cone $C$ is distinct from $\bar{C}$. The rank of the latter obviously coincides with the rank $r$ of the set $S$, so Lemma 2.7 applies within $\bar{C}$. Consequently the cone $C$, as a cone in the subspace $\bar{C}$, may be represented as the intersection of a finite number of boundary supports of the second type of $S$, extended to half spaces of $P^n$ (such an extension makes algebraic sense). Now we adjoin, to the boundary supports thus obtained, the half spaces of $P^n$ bounded by the planes whose intersection is $\bar{C}$; each one of these is a boundary support of $S$ in $P^n$. We now have a finite number of boundary supports of $S$ (in $P^n$) whose intersection is $C$. The theorem is proved.
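Over $P = \mathbb{R}$, the separation used in the proofs of Lemma 2.7 and Theorem 2.16 can be computed. The sketch below (our illustration, not from the book) tests membership of a vector $\vec{b}$ in a finitely generated cone as a linear-programming feasibility problem and, when $\vec{b}$ lies outside, finds a half space containing the cone but not $\vec{b}$. The generating set is an illustrative choice; SciPy is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Membership of b in the cone C generated by a finite set S (Definition 2.4),
# and a separating half space when b is outside C (Lemma 2.7).
S = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # rows are the generating elements

def in_cone(S, b):
    # is there p >= 0 with S^T p = b ?
    m = S.shape[0]
    res = linprog(np.zeros(m), A_eq=S.T, b_eq=b, bounds=[(0, None)] * m)
    return res.status == 0

assert in_cone(S, np.array([2.0, 1.0]))       # (2,1) = (1,0) + (1,1)
assert not in_cone(S, np.array([-1.0, 0.0]))  # outside C

# A separating functional x0: s . x0 >= 0 for every s in S, but b . x0 < 0,
# so the half space x0 . x >= 0 contains C and excludes b.
b = np.array([-1.0, 0.0])
res = linprog(b, A_ub=-S, b_ub=np.zeros(S.shape[0]), bounds=[(-1, 1)] * 2)
x0 = res.x
assert np.all(S @ x0 >= -1e-9) and b @ x0 < 0
```

The first LP is exactly Definition 2.4 read as a feasibility question; the second is the computational form of the boundary-support construction.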
REMARK. It follows from the proof of Theorem 2.16 that a set $S$ of elements of $P^n$ with rank $r < n$ has no boundary supports of the second type iff the cone $C$ generated by the elements of $S$ coincides with the subspace $\bar{C}$ of $P^n$ generated by these elements. It also follows from that proof that if the set $S$ has boundary supports of the second type, then the cone $C$ generated by the elements of $S$ could not be expressed as the intersection of supports of the first type of $S$. Finally, we note that it also follows from the proof of Theorem 2.16 that the finite collection of boundary supports of $S$ whose intersection is the cone $C$ may be chosen so that it does not include any nonessentially distinct boundary supports of the second type.

Utilizing our remark about Corollary 2.9, we further obtain the proposition: consider a cone generated in $P^n$ by a set $S$ of elements $(a_{j1}, \dots, a_{jn})$ $(j = 1, 2, \dots, m)$ which has rank $r < n$. Such a cone coincides with the subspace of $P^n$ generated by $S$ iff the set of solutions of the inequality system
$$a_{j1}x_1 + \dots + a_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m)$$
coincides with the set of solutions of the equation system
$$a_{j1}x_1 + \dots + a_{jn}x_n = 0 \quad (j = 1, 2, \dots, m).$$
Hence the set $S$ of elements $(a_{j1}, \dots, a_{jn})$ $(j = 1, 2, \dots, m)$ of rank $r < n$ has no boundary supports of the second type iff each solution of the system
$$a_{j1}x_1 + \dots + a_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m)$$
turns all of its inequalities into equations.

If the set of half spaces of $P^n$ is augmented by the singular half space $0x_1 + \dots + 0x_n \ge 0$, then Theorem 2.16 yields:

COROLLARY 2.11. Every finitely generated convex cone in $P^n$ is the intersection of a finite set of half spaces of $P^n$.

We also note:

COROLLARY 2.12. The system
$$a_{j1}x_1 + \dots + a_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m)$$
on $P^n$ has no nonzero solutions iff the cone $C'$ generated in $P^n$ by the elements $(a_{j1}, \dots, a_{jn})$ $(j = 1, 2, \dots, m)$ coincides with $P^n$.

PROOF. Indeed, suppose $P^n = C'$, and suppose also that the system has a nonzero solution $(x_1^0, \dots, x_n^0)$. Then the half space of $P^n$ corresponding to the inequality
$$x_1^0x_1 + \dots + x_n^0x_n \ge 0$$
contains the cone $C'$. But this is not possible, since that half space is a proper subset of $P^n$ (as $(x_1^0, \dots, x_n^0)$ is not zero). Hence, in this case, our system could not have a nonzero solution.

Now suppose $C' \ne P^n$. Then by Theorem 2.16 there exists a nontrivial half space of $P^n$ that contains $C'$. Let
$$c_1x_1 + \dots + c_nx_n \ge 0 \qquad ((c_1, \dots, c_n) \ne (0, \dots, 0))$$
be the inequality which defines this half space. Since all the generating elements of $C'$ satisfy this inequality, it follows that $(c_1, \dots, c_n)$ is a nonzero solution of our system. The corollary is proved.
DEFINITION 2.6. The convex cone $C'$ generated in $P^n$ by the set of elements $(a_{j1}, \dots, a_{jn})$ $(j = 1, 2, \dots, m)$ is called the dual cone of the system
$$a_{j1}x_1 + \dots + a_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m) \tag{42}$$
of linear inequalities. Let $H$ be a finitely generated convex cone in $P^n$, and let $H'$ be the dual cone of a finite system of linear inequalities whose set of solutions is $H$. Then $H'$ is called the dual cone of the cone $H$.

It follows from Theorem 2.4 that the cone $C'$ consists of all elements $(a_1, \dots, a_n)$ of $P^n$ for which the inequality
$$a_1x_1 + \dots + a_nx_n \ge 0$$
is a consequence of system (42). Using this, it is not hard to show that the cone $H'$ is independent of the choice of the system whose solution set is $H$.

In view of Corollary 2.11, the dual cone $C'$ of system (42) may be expressed as the intersection of a finite number of half spaces of $P^n$ (here we include the singular half space). Let
$$a'_{j1}x_1 + \dots + a'_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m') \tag{43}$$
be the system of inequalities corresponding to these half spaces, and let $C''$ be the dual cone of this system. It is easy to see that system (43) is equivalent to the system
$$b_{\mu 1}x_1 + \dots + b_{\mu n}x_n \ge 0 \quad (\mu \in M), \tag{43*}$$
where $(b_{\mu 1}, \dots, b_{\mu n})$, $\mu \in M$, runs over the solutions of system (42). Indeed, any solution of system (43), i.e., any element of the cone $C'$, obviously satisfies system (43*), because the latter is satisfied by every element $(a_{j1}, \dots, a_{jn})$ $(j = 1, 2, \dots, m)$. Conversely, for any solution $(x_1^0, \dots, x_n^0)$ of system (43*), the inequality
$$x_1^0x_1 + \dots + x_n^0x_n \ge 0$$
is obviously a consequence of system (42). Hence, by Theorem 2.4, we have $(x_1^0, \dots, x_n^0) \in C'$. In view of the definition of system (43), this means that $(x_1^0, \dots, x_n^0)$ is one of its solutions.

Now, system (43*) is satisfied by each of the elements $(a_{j1}, \dots, a_{jn})$ $(j = 1, 2, \dots, m)$. Thus these elements are solutions of system (43), which is a subsystem of the system (43*) and is equivalent to the latter. Using Theorem 2.4, this implies that the elements $(a'_{j1}, \dots, a'_{jn})$ $(j = 1, 2, \dots, m')$ are generating elements of the cone $C$ of solutions $(b_{\mu 1}, \dots, b_{\mu n})$ $(\mu \in M)$ of system (42). Thus we have proved:
THEOREM 2.17. The cone $C$ of solutions of an arbitrary system (42) coincides with the dual cone $C''$ of its dual cone $C'$ and hence has a finite set of generating elements.

From Theorem 2.17 follows:

THEOREM 2.18. The dual cone $(H')'$ of the dual cone $H'$ of an arbitrary finitely generated cone $H$ in $P^n$ coincides with $H$.

Indeed, let $H$ be the solution set of a system (42), let $H'$ be the dual cone of $H$, and let system (43) be a system whose solution set is $H'$. Then, in view of Theorem 2.17, the dual cone $H''$ of system (43), being (by definition) the dual cone $(H')'$ of the cone $H'$, coincides with $H$.

Under the assumptions that system (42) has rank $r$ equal to the number $n$ of its unknowns, that $P = \mathbb{R}$, and that the system is stable, Theorem 2.17 was proved by Weyl [1]. Under the assumptions that the rank $r$ of system (42) equals the number $n$ of its unknowns and that $P = \mathbb{R}$, the assertion of Theorem 2.17 that there exists a finite set that generates the solution set of (42) was proved by Minkowski [1]. For the case of system (42) on $P^n$ (where $P$ is an arbitrary ordered field) and without the assumption that the rank of the system equals the number of its unknowns, the assertion was proved in the article by Charnes and Cooper [1].

In connection with Theorems 2.17 and 2.18, it is appropriate to introduce:

DEFINITION 2.7. The polar cone $A^*$ of a cone $A$ with generating elements
$$(a_{j1}, \dots, a_{jn}) \quad (j = 1, 2, \dots, m) \tag{44}$$
is the cone of solutions of the system
$$a_{j1}x_1 + \dots + a_{jn}x_n \ge 0 \quad (j = 1, 2, \dots, m).$$
It is not hard to show that the polar cone $A^*$ of a finitely generated cone does not depend on the choice of generating elements. Now we may formulate the basic assertion of Theorem 2.17 as follows: the dual cone of a finitely generated cone in $P^n$ coincides with its polar.
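In $\mathbb{R}^2$ the polarity of Definition 2.7, and the identity $(A^*)^* = A$ implied by Theorem 2.18, can be made concrete. The following sketch is our illustration, not the book's: the generators are an arbitrary choice, and the generators of the polar are computed by hand and then verified pointwise.

```python
import numpy as np

# A is the cone generated by (1,0) and (1,1). By Definition 2.7 its polar A*
# is the solution cone {x : x1 >= 0, x1 + x2 >= 0}; by hand, A* is generated
# by (0,1) and (1,-1). We check this, and (A*)* = A, on random test points.
G = np.array([[1.0, 0.0], [1.0, 1.0]])     # generators of A
Gp = np.array([[0.0, 1.0], [1.0, -1.0]])   # generators of A* (hand-computed)

def in_polar(gens, x):
    # x lies in the polar of cone(gens) iff g . x >= 0 for every generator g
    return bool(np.all(gens @ x >= 0))

def in_cone(gens, x):
    # membership in a 2-generator cone in R^2: solve x = p1 g1 + p2 g2
    p = np.linalg.solve(gens.T, x)
    return bool(np.all(p >= -1e-12))

for x in np.random.default_rng(0).uniform(-1.0, 1.0, size=(200, 2)):
    assert in_polar(G, x) == in_cone(Gp, x)    # A* = cone(Gp)
    assert in_polar(Gp, x) == in_cone(G, x)    # (A*)* = A
```

The two assertions are exactly the two directions of the duality: the polar of a finitely generated cone is again finitely generated, and taking the polar twice returns the original cone.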
Theorem 2.18 becomes equivalent to: for an arbitrary finitely generated cone $A$ in $P^n$ we have $(A^*)^* = A$.

The next definition plays an essential role in what follows.

DEFINITION 2.8. A solution of system (42) of nonzero rank $r$ is called an essential solution if it does not turn all of the system's inequalities into equations. An essential solution that turns $r - 1$ linearly independent inequalities of the system (i.e., inequalities with linearly independent left-hand sides) into equations is called a fundamental solution. In this context, for $r = 1$, any essential solution is a fundamental solution. Consider two fundamental solutions of system (42), and suppose the system does not contain $r - 1$ linearly independent inequalities that are turned into equations by both of them. Such two fundamental solutions are said to be essentially distinct. A maximal system of pairwise essentially distinct fundamental solutions of system (42) is called a fundamental system of solutions of (42). For $r = 1$, obviously, there could not exist two essentially distinct fundamental solutions.

THEOREM 2.19. If system (42) possesses essential solutions, then it possesses at least one fundamental solution. Let
$$\vec{x}^k = (x_1^k, \dots, x_n^k) \quad (k = 1, 2, \dots, \ell)$$
be a fundamental solution system of (42). Then any solution of system (42) may be expressed in the form
$$\vec{x} = \bar{\vec{x}} + \sum_{k=1}^{\ell} p_k\vec{x}^k,$$
where $\bar{\vec{x}}$ is a nonessential solution of (42), i.e., a solution of the system of bounding equations
$$a_{j1}x_1 + \dots + a_{jn}x_n = 0 \quad (j = 1, 2, \dots, m), \tag{45}$$
and $p_1, \dots, p_\ell$ are nonnegative elements of the field $P$. The elements of two arbitrary fundamental systems of solutions of system (42) coincide up to positive constant multiples (in $P$) and up to terms that satisfy (45).

REMARK. If
$$\bar{\vec{x}}^k = (x_1^k, \dots, x_n^k) \quad (k = 1, 2, \dots, h)$$
is a maximal system of linearly independent solutions of system (45), then any solution of (45) may be written in the form
$$\vec{x} = \sum_{k=1}^{h} q_k\bar{\vec{x}}^k,$$
where $q_k$ $(k = 1, 2, \dots, h)$ are arbitrary elements of $P$, or in the form
$$\vec{x} = \sum_{k=1}^{h} p_k\bar{\vec{x}}^k + \sum_{k=1}^{h} p'_k(-\bar{\vec{x}}^k),$$
where $p_k, p'_k$ $(k = 1, 2, \dots, h)$ are arbitrary nonnegative elements of the field $P$.
PROOF. By hypothesis, the rank $r$ of system (42) is nonzero. For $r = 1$ the theorem is obviously true; hence we shall assume that $r > 1$. By hypothesis system (42) has essential solutions, so not all of its solutions satisfy (45). Thus the set $S$ of elements
$$(a_{j1}, \dots, a_{jn}) \quad (j = 1, 2, \dots, m)$$
that generate the cone $C'$ has boundary supports of the second type, and at least one of these supports is included in a finite system of boundary supports of $S$ whose intersection is $C'$ (see the remark about Theorem 2.16). Let $T$ be one of these systems and let (43) be the system of inequalities that define the supports included in $T$. In this connection, we shall assume that system $T$ does not include boundary supports of the second type that are nonessentially distinct (see the remark about Theorem 2.16).

In view of Theorem 2.17, the dual cone $(C')'$ of the cone $C'$ coincides with the cone $C$ of solutions of system (42). Hence all the elements
$$(a'_{j1}, \dots, a'_{jn}) \quad (j = 1, 2, \dots, m') \tag{46}$$
defining the inequalities of system (43) satisfy system (42). But then we have the relations
$$a_{j1}a'_{i1} + \dots + a_{jn}a'_{in} \ge 0 \quad (j = 1, 2, \dots, m;\ i = 1, 2, \dots, m').$$
As we already noted, system $T$ contains at least one boundary support of the second type of the set $S$. Let the inequality
$$a'_{i_0 1}x_1 + \dots + a'_{i_0 n}x_n \ge 0$$
define one of these supports. This implies that among the preceding relations there are $r - 1$ relations
$$a_{j_s 1}a'_{i_0 1} + \dots + a_{j_s n}a'_{i_0 n} = 0 \quad (s = 1, 2, \dots, r - 1)$$
with linearly independent elements $(a_{j_s 1}, \dots, a_{j_s n})$ $(s = 1, 2, \dots, r - 1)$. It follows from here that $(a'_{i_0 1}, \dots, a'_{i_0 n})$ is a fundamental solution of system (42). Thus each inequality of system (43) that defines a boundary support of the second type of $S$ corresponds to a fundamental solution of system (42). If an inequality
$$a'_{i1}x_1 + \dots + a'_{in}x_n \ge 0$$
of system (43) defines a boundary support of the first type of the set $S$, then the element $(a'_{i1}, \dots, a'_{in})$ obviously satisfies system (45).

Now the elements (46) generate the cone $C$. Thus, for any solution $\vec{x}$ of system (42) we have the formula
$$\vec{x} = \sum_{j=1}^{m'} q_j(a'_{j1}, \dots, a'_{jn}),$$
where $q_1, \dots, q_{m'}$ are nonnegative elements of the field $P$. If $\bar{\vec{x}}$ is the sum of those terms of this formula that satisfy (45), then, introducing new indices for the remaining terms and new values for the coefficients, we have
$$\vec{x} = \bar{\vec{x}} + \sum_{k=1}^{\ell} p_k(a'_{j_k 1}, \dots, a'_{j_k n}).$$
But, obviously, essentially distinct boundary supports of the second type of $S$ correspond to essentially distinct fundamental solutions of system (42). Furthermore, system $T$ does not contain any nonessentially distinct boundary supports of the second type of $S$. Hence the elements $(a'_{j_k 1}, \dots, a'_{j_k n})$ $(k = 1, 2, \dots, \ell)$ constitute a system of pairwise essentially distinct fundamental solutions of (42).

Suppose now that $\vec{x}^0$ is a fundamental solution of (42) and that
$$\vec{x}^0 = \bar{\vec{x}}^0 + \sum_{k=1}^{\ell} p_k^0(a'_{j_k 1}, \dots, a'_{j_k n})$$
is its form, determined by the formula obtained above. Substitute this expression for $\vec{x}^0$ in those inequalities of system (42) that are turned into equations by $\vec{x}^0$. It is obvious that these inequalities will be turned into equations by every element $(a'_{j_k 1}, \dots, a'_{j_k n})$ that is included in this expression with a positive coefficient $p_k^0$. But all the fundamental solutions $(a'_{j_k 1}, \dots, a'_{j_k n})$ are essentially distinct. Thus among the coefficients $p_k^0$ $(k = 1, 2, \dots, \ell)$ only one is different from zero. Hence any fundamental solution of system (42) coincides, up to a positive multiple and up to a term satisfying (45), with one of the elements $(a'_{j_k 1}, \dots, a'_{j_k n})$. Thus these elements constitute a fundamental system of solutions of system (42), and the number of essentially distinct fundamental solutions of (42) coincides with their number $\ell$.

Let $\vec{x}^k = (x_1^k, \dots, x_n^k)$ $(k = 1, 2, \dots, \ell)$ be a fundamental solution system of system (42). Then, up to the numbering of its elements, we have
$$(a'_{j_k 1}, \dots, a'_{j_k n}) = \bar{\vec{x}}^k + t_k\vec{x}^k \quad (k = 1, 2, \dots, \ell),$$
where $\bar{\vec{x}}^k$ is a solution of system (45) and where $t_k$ is a positive element of $P$. Using these and the above formula, it is easy to obtain the formula for the solutions of system (42) asserted by the theorem. The last assertion of the theorem also follows from these relations. The theorem is proved.

For systems (42) on $\mathbb{R}^n$ and assuming $r = n$, the above theorem was proved by Minkowski [1]. In this case, in the formula introduced in the theorem, the element $\bar{\vec{x}}$ is dropped.
COROLLARY 2.13. Two fundamental solutions $\vec{x}'$ and $\vec{x}''$ of system (42) are nonessentially distinct iff there exists a positive element $p \in P$ such that $p\vec{x}' - \vec{x}''$ is a nonessential solution of system (42).

In other words, two fundamental solutions of system (42) are nonessentially distinct iff there exists a solution of system (42) which is a positive multiple of each of them (with a scalar from $P$) such that its difference from either is a nonessential solution of (42).

PROOF. Sufficiency is obvious. We prove necessity. Let $\vec{x}'$ and $\vec{x}''$ be two nonessentially distinct fundamental solutions of (42). Let $F$ be a subsystem of (42) with rank $r - 1$ ($r$ is the rank of (42)) with $r - 1$ inequalities that are turned into equations by both $\vec{x}'$ and $\vec{x}''$. Finally, let
$$\vec{x}^k = (x_1^k, \dots, x_n^k) \quad (k = 1, 2, \dots, \ell)$$
be a system of fundamental solutions of (42) that contains $\vec{x}'$. Applying Theorem 2.19, write
$$\vec{x}'' = \bar{\vec{x}} + \sum_{k=1}^{\ell} p''_k\vec{x}^k.$$
It is not hard to show that all the coefficients $p''_k$ attached to elements $\vec{x}^k$ other than $\vec{x}'$ are zero (for otherwise $\vec{x}''$ would not be able to turn all the inequalities of $F$ into equations). If $p$ is the one coefficient $p''_k$ which is not zero, then $\vec{x}'' = \bar{\vec{x}} + p\vec{x}'$, and hence $p\vec{x}' - \vec{x}'' = -\bar{\vec{x}}$ turns all the inequalities of (42) into equations. Necessity is proved.

From the proof it follows that, for $r = n$, two fundamental solutions of (42) are nonessentially distinct iff they differ only by a positive scalar multiple which is an element of $P$.

COROLLARY 2.14. Let $\vec{x}^0 = (x_1^0, \dots, x_n^0)$ be an essential solution of (42). It is a fundamental solution of that system iff for any decomposition
$$\vec{x}^0 = \vec{x}^1 + \dots + \vec{x}^k \tag{47}$$
of it into solutions of system (42) there exist elements $p_1, \dots, p_k$ of the field $P$, depending on the choice of the decomposition, such that the differences $p_i\vec{x}^0 - \vec{x}^i$ $(i = 1, 2, \dots, k)$ are nonessential solutions of (42).

PROOF. Indeed, let $\vec{x}^0$ be a fundamental solution of system (42). Then there exists a subsystem $F$ of it with rank $r - 1$ consisting of $r - 1$ inequalities that are all turned into equations by $\vec{x} = \vec{x}^0$. However, this is only possible if they are turned into equations by all the terms of any decomposition (47) of $\vec{x}^0$. Hence any of the terms of (47) is either a fundamental solution of (42) that is nonessentially distinct from $\vec{x}^0$, or a nonessential solution of (42). In the second case, the difference $p_i\vec{x}^0 - \vec{x}^i$ is a nonessential solution of (42) for $p_i = 0$. In the first case, $p_i\vec{x}^0 - \vec{x}^i$ is a nonessential solution of (42) for some $p_i$ (see Corollary 2.13). This establishes necessity.
Now suppose all the terms of an arbitrary decomposition (47) of a given essential solution $\vec{x}^0$ of (42) satisfy the condition of the corollary. We shall show that $\vec{x}^0$ is a fundamental solution of system (42). In view of Theorem 2.19 we have the decomposition
$$\vec{x}^0 = \bar{\vec{x}} + \sum_{k=1}^{\ell} p_k^0\vec{x}^k \quad (p_1^0 \ge 0, \dots, p_\ell^0 \ge 0),$$
where $\vec{x}^k$ $(k = 1, 2, \dots, \ell)$ is a fundamental solution system of (42) and where $\bar{\vec{x}}$ is a nonessential solution of the system. Suppose $p_1^0 > 0$. By hypothesis, all the terms of this decomposition satisfy the condition of the present corollary. Hence there exists a nonnegative element $p_1 \in P$ such that $\bar{\vec{x}}' = p_1\vec{x}^0 - p_1^0\vec{x}^1$ turns all the inequalities of the system into equations. Here $p_1 \ne 0$, for otherwise the fundamental solution $\vec{x}^1$ of system (42) would have this property, which is not possible in view of the definition of fundamental solutions. From the decomposition $\bar{\vec{x}}' = p_1\vec{x}^0 - p_1^0\vec{x}^1$ we have
$$\vec{x}^0 = \frac{p_1^0}{p_1}\vec{x}^1 + \frac{1}{p_1}\bar{\vec{x}}'.$$
If $F$ is an $(r-1)$-inequality subsystem of (42) all of whose inequalities are turned into equations by $\vec{x}^1$, then, by the present decomposition, they are also turned into equations by $\vec{x}^0$. Hence $\vec{x}^0$ is a fundamental solution of (42), as we were to show.

For systems with rank $r = n$, the above proposition was proved by Weyl [1], and in a somewhat different form by Minkowski [1]. From Corollary 2.14 follows:

COROLLARY 2.15a. None of the elements of a fundamental solution system of (42) may be expressed as the sum of a nonessential solution of system (42) and a linear combination of the other elements of the fundamental solution system with nonnegative coefficients in $P$.

Utilizing Theorem 2.19 and Corollary 2.14, it is not hard to prove the following proposition, which is, in some sense, a converse of the above.

COROLLARY 2.15b. Consider a maximal subsystem, possessing the property stated in Corollary 2.15a, of any finite system of generating elements for the cone $C$ of solutions of system (42) that contains at least one essential solution of (42). Such a maximal subsystem is a fundamental solution system of (42).

Indeed, let $D$ be a finite system of generating elements of the cone $C$, and let $\{\vec{c}^k\}$ $(k = 1, 2, \dots, h)$ be a maximal subsystem of $D$ with the stated property. Let $\{\vec{x}^k\}$ $(k = 1, 2, \dots, \ell)$ be a fundamental solution system of (42). Since $D$ is a system of generating elements of $C$, each $\vec{x}^k$ may be expressed, with nonnegative coefficients, in terms of elements of $D$. But then, using the property of the subsystem $\{\vec{c}^k\}$ of the system $D$, we may write $\vec{x}^k$ in the form
$$\vec{x}^k = \vec{c}^{\,0}_k + p_1^k\vec{c}^1 + \dots + p_h^k\vec{c}^h \quad (k = 1, 2, \dots, \ell),$$
where $p_1^k, \dots, p_h^k$ are nonnegative elements of $P$ and where $\vec{c}^{\,0}_k$ is an element that turns all of the inequalities of (42) into equations. Applying Corollary 2.14, and using the fact that $\{\vec{c}^k\}$ contains no element that could be expressed as the sum of a nonessential solution of (42) and a nonnegative linear combination of the other elements of $\{\vec{c}^k\}$, we easily conclude that for each $k$ at least one of the coefficients $p_1^k, \dots, p_h^k$ is nonzero. Hence $\{\vec{c}^k\}$ has as a subsystem a fundamental solution system of system (42). If any element of $\{\vec{c}^k\}$ were not included in that subsystem, then by Theorem 2.19 it could be expressed as the sum of a nonessential solution of (42) and a nonnegative linear combination of elements of the latter, with coefficients in $P$. Since this is contrary to our assumption about $\{\vec{c}^k\}$, that system must coincide with its subsystem. Hence it is a fundamental solution system of (42), which we wished to show.
h X
k ! x
k=1
of the cone C (see Theorem 2.19 and the remark about it) is generating system of C with the smallest number of elements. If a system (42) of nonzero rank possesses no nonessential nonzero solutions, then any of its fundamental solutions would be a generating system of the above type. DEFINITION 2.9. The system 0
0
aj1 x1 + ::: + ajn xn
0
0 (j = 1; 2; :::; m ) (48)
is called the dual of system (42), i.e., of the system aj1 x1 + ::: + ajn xn
0 (j = 1; 2; :::; m);
if its dual cone coincides with the solution cone of the latter. A system of this type is, obviously, equivalent to the system (43*) which was introduced at the beginning of this section and which was called, provisionally, the dual of system (42). If system (42) possesses no nonzero solutions, then its dual system (48) contains no 111
inequalities that are distinct from the null inequality 0x1 + ::: + 0xn
0. We note that, in
view of Theorem 2.18, the dual system of a dual system is equivalent to the original system. Hence, systems (42) and (48) are mutually dual to each other. Any fundamental solution (x01 ; :::; x0n ) of system (42) de…nes a boundary support x01 x1 + ::: + x0n xn
0 of the second type of the set (aj1 ; :::; ajn ) (j = 1; 2; :::; m). Conversely, any 0
0
boundary support x1 x1 + ::: + xn xn 0
0 of that set corresponds to a fundamental solution
0
(x1 ; :::; xn ) of system (42). It is not hard to show, thus, that essential distinct fundamental solutions correspond to essentially distinct boundary supports and that essentially distinct boundary supports correspond to essentially distinct fundamental solutions. In view of Corollary 2.15b we obtain the following proposition. Suppose system (42) has at least one essential solution. Then any of its dual systems has a subsystem whose inequalities de…ne the maximal system of essentially distinct boundary supports of the second type of the set (aj1 ; :::; ajn ) (j = 1; 2; :::; m): In view of Corollary 2.15b we also get: LEMMA 2.8. Suppose that the set of solutions of system (42) is a nonempty pointed one. Consider an independent subsystem of a dual system of (42) that is equivalent to that dual system. Then such a subsystem consists of the inequalities that de…nes all of the essentially distinct supports of the second type of the set (aj1 ; :::; ajn ) corresponding to the inequalities of system (42). From the discussion that preceded De…nition 2.9, we conclude that the dual system xk1 x1 + ::: + xkn xn
0 (k = 1; 2; :::; `);
xk1 x1 + ::: + xkn xn h X ( xk1 )x1 :::
0 (k = 1; 2; :::; `); h X ( xkn )xn 0
k=1
k=1
of system (42) is the dual system with the smallest number of inequalities. In conclusion of this section we introduce some propositions of a geometric nature, which are connected to duality, about …nitely generated convex cones, First of all we introduce: 0
LEMMA 2.9. The dual cone A of a …nitely generated convex n-dimensional pointed cone 0
A has these properties. If the cone A is not pointed, then A is not n-dimensional. 0
PROOF. Convexity of A and its being …nitely generated were noted in the de…nition of 0
a dual cone; De…nition 2.6. We now prove that A is pointed. Let f(aj1 ; :::; ajn )g (j = 1; 2; :::; m): be a …nite set of generating elements of the cone A. 0
Then the dual cone A of the cone A coincides with the cone C of solutions of system (42) corresponding to our set of generating elements of A (see Theorem 2.17). If the cone C is not pointed, the system 112
aj1 x1 + ::: + ajn xn = 0 (j = 1; 2; :::; m); obviously, has a nonzero solution. But then its rank, and hence the rank of system (42), may not coincide with the number of the unknowns. But this contradicts our hypothesis 0
about the dimension of A. Thus the cone C, and hence the cone A , is pointed. 0
If the cone A is not n-dimensional, then the rank s of the dual system (48) to our system (42) is less than n. But then the set A of solutions to it must have a subspace of dimension n
s > 0 (a set of solutions that turn all of the inequalities to equations). However, this 0
contradicts the assumption that A is pointed. Consequently, A is n-dimensional. To prove the second assertion of the lemma, assume that the cone A is not pointed. Then the system obtained by replacing the inequalities of system (48) by equations has a nonzero solution. Consequently the rank of that equation system, and hence of system (48), is less 0
than n. But the dual cone of system (48) coincides with A , and hence the assertion is valid. This completes the proof of the lemma. 0
REMARK. It follows from Lemma 2.9 that if the dual cone A' of a finitely generated convex n-dimensional cone A is n-dimensional, then both of the cones A and A' are pointed.

DEFINITION 2.10. Two elements of a convex cone A are said to be nonessentially distinct if they differ only by a multiple which is a positive element of P. The set of all elements of a pointed cone A that are nonessentially distinct from an element a of A is called the ray in A defined by the element a. The rays defined by a basis of a finitely generated convex pointed cone A are called the edges of the cone A. Since the basis of such a cone is unique up to nonessentially distinct elements, this definition of an edge is independent of the choice of basis.

THEOREM 2.20. (See Weyl [1].) Let the finite set S of elements of P^n generate a pointed convex cone A. Then through an arbitrary edge of the cone A there pass the bounding planes of n - 1 linearly independent boundary supports of S. On the other hand, in the bounding plane of any boundary support of the set S there lie n - 1 linearly independent edges of the cone A.
PROOF. Let (a_{j1}, ..., a_{jn}) (j = 1, 2, ..., m) be the elements of S and let (a'_{j1}, ..., a'_{jn}) (j = 1, 2, ..., m') be a system, say S', of generating elements of the cone of solutions of the system (48) corresponding to S. From the n-dimensionality and pointedness of the convex cone A generated by S follow the n-dimensionality and pointedness of its dual cone A' generated by S' (see Lemma 2.9). Thus, in view of Lemma 2.8, the dual system (48) of the system (42) may be reduced to an equivalent independent subsystem whose inequalities define all of the essentially distinct supports of the set S. In order not to introduce new notation, let us say that this subsystem coincides with system (48). Noting that a basis of the cone A coincides with a fundamental solution system of system (48), it now follows (using the definition of fundamental solutions) that the first assertion of the theorem is valid.

Let us now reduce system (42) to an independent equivalent subsystem of itself. In order not to introduce new notation, let us say that the reduced system coincides with system (42). The elements (a_{j1}, ..., a_{jn}) corresponding to the reduced system (42) uniquely determine the edges of the cone A. Hence, substituting in them an element (a'_{j1}, ..., a'_{jn}) defined by any inequality of the reduced system (48) (generating a fundamental solution of (42)) and using the definition of fundamental solutions, we obtain the second part of the theorem, and the proof is complete.

§5. THE CONJUGATE CONE OF AN ARBITRARY SYSTEM OF LINEAR INEQUALITIES
DEFINITION 2.11. The conjugate cone of an arbitrary system

ℓ_j(x⃗) - a_j = a_{j1}x_1 + ... + a_{jn}x_n - a_j ≥ 0 (j = 1, 2, ..., m)   (49)

of linear inequalities in P^n is the dual cone of the system

ℓ_j(x⃗) - a_j t = a_{j1}x_1 + ... + a_{jn}x_n - a_j t ≥ 0 (j = 1, 2, ..., m),  t ≥ 0   (50)

of homogeneous linear inequalities in P^{n+1}. The introduction of the inequality t ≥ 0 in system (50) is justified by the property of the consequence inequality introduced in Lemma 2.5. We note here that, in view of Definition 2.11, Lemma 2.5 (for L(P) = P^n) may be restated in the more compact form: the inequality

ℓ(x⃗) - b = b_1x_1 + ... + b_nx_n - b ≥ 0
is a consequence of system (49) if and only if its conjugate cone contains the element (b_1, ..., b_n, -b).

THEOREM 2.21. The system (49) is inconsistent when and only when its conjugate cone K contains the element -e⃗_{n+1} = (0, ..., 0, -1) of the space P^{n+1}.

PROOF. Suppose -e⃗_{n+1} ∈ K. Then there exist nonnegative elements p_0, p_1, ..., p_m of the field P such that the relation

-e⃗_{n+1} = Σ_{j=1}^{m} p_j (a_{j1}, ..., a_{jn}, -a_j) + p_0 e⃗_{n+1}   (51)

holds. This implies that the relation

-t = Σ_{j=1}^{m} p_j (a_{j1}x_1 + ... + a_{jn}x_n - a_j t) + p_0 t   (52)

holds as an identity in x_1, ..., x_n, t and, furthermore, that the relation

-t = Σ_{j=1}^{m} p'_j (a_{j1}x_1 + ... + a_{jn}x_n - a_j t)

holds as an identity in x_1, ..., x_n, t, where

p'_j = p_j / (p_0 + 1).

From the fact that all p'_1, ..., p'_m are nonnegative it follows, in view of this relation, that -t ≥ 0 is a consequence of the system

ℓ_j(x⃗) - a_j t ≥ 0 (j = 1, 2, ..., m).

But this implies the nonexistence of any positive t for which that system is consistent. Thus system (49), which corresponds to t = 1, is inconsistent.

Assume now that system (49) is inconsistent. Then -t ≥ 0 must be a consequence of the preceding system and, hence, of system (50). Thus, in view of Theorem 2.4, there exist nonnegative elements p_0, p_1, ..., p_m of P such that relation (52) holds as an identity in x_1, ..., x_n, t. But then so does (51). Hence -e⃗_{n+1} is included in the cone K. The theorem is proved.

REMARK. It follows from Theorem 2.21 that the conjugate cone K of a consistent system (49) is always distinct from P^{n+1}.

We now note some properties of the conjugate cone K of a consistent system (49) of nonzero rank r.

1. If x⃗^0 = (x_1^0, ..., x_n^0) is any nodal solution of system (49), then

x_1^0 x_1 + ... + x_n^0 x_n + x_{n+1} ≥ 0
is a boundary support of the second type of the set T of elements

(a_{j1}, ..., a_{jn}, -a_j) (j = 1, 2, ..., m),  e⃗_{n+1} = (0, ..., 0, 1)

that generate the cone K. Conversely, if

x_1^0 x_1 + ... + x_n^0 x_n + x_{n+1} ≥ 0

is a boundary support of the second type of the set T, then x⃗^0 = (x_1^0, ..., x_n^0) is a nodal solution of system (49).

Indeed, if

ℓ_{j_k}(x⃗) - a_{j_k} ≥ 0 (k = 1, 2, ..., r)

is a subsystem of (49) of rank r, then the subsystem

ℓ_{j_k}(x⃗) - a_{j_k} t ≥ 0 (k = 1, 2, ..., r)

of (50) has rank r. Let x⃗^0 = (x_1^0, ..., x_n^0) be a solution of system (49) that turns all of the inequalities of the first subsystem into equations. Then (x_1^0, ..., x_n^0, 1) is a solution of system (50) that turns all the inequalities of the second subsystem into equations. But the rank of system (50) is r + 1, and that solution does not turn all of the inequalities of system (50) into equations. Thus follows the validity of the first assertion.

The second assertion is proved as follows. Let

x_1^0 x_1 + ... + x_n^0 x_n + x_{n+1} ≥ 0

be a boundary support of the second type of T. Then (x_1^0, ..., x_n^0, 1) is a solution of system (50) that turns into equations the inequalities of one of the subsystems Q of (50) of rank r consisting of r inequalities. Obviously, the inequality t ≥ 0 is not included in Q (otherwise we would have 1 = 0). Thus, for t = 1, the inequalities of Q represent a subsystem of (49) of rank r with r inequalities. Since all of the inequalities of the latter are turned into equations by the solution (x_1^0, ..., x_n^0) of system (49), it follows that (x_1^0, ..., x_n^0) is a nodal solution of system (49). This is what we wanted to prove.

REMARK. All nodal solutions of one and the same nodal subsystem of (49) define boundary supports of T whose bounding planes pass through the same elements of T.

2. The inequality

x̄_1 x_1 + ... + x̄_n x_n + x̄_{n+1} x_{n+1} ≥ 0  ((x̄_1, ..., x̄_n) ≠ (0, ..., 0))
defines a boundary support of the first type of T iff x̄_{n+1} = 0 and (x̄_1, ..., x̄_n) is a nonzero solution of the homogeneous equation system

ℓ_j(x⃗) = 0 (j = 1, 2, ..., m),   (53)

where ℓ_j(x⃗) are the linear forms of the inequalities of system (49). The above inequality defines a boundary support of the second type of T with a bounding plane that passes through the element e⃗_{n+1} iff x̄_{n+1} = 0 and (x̄_1, ..., x̄_n) is a fundamental solution of the system ℓ_j(x⃗) ≥ 0 (j = 1, 2, ..., m), i.e., iff x̄_{n+1} = 0 and (x̄_1, ..., x̄_n) is a solution of ℓ_j(x⃗) ≥ 0 that does not satisfy the system (53) but satisfies one of its subsystems of rank r - 1 with r - 1 equations.

The sufficiency of the first condition is obvious. Its necessity is proved as follows. Let the above inequality define a boundary support of the first type of the set T and let T' be a maximal subset of T with the property that all of its elements turn our inequality into equations. Obviously T' = T. From e⃗_{n+1} ∈ T' = T it follows, obviously, that x̄_{n+1} = 0 and hence

x̄_1 x_1 + ... + x̄_n x_n = 0  for  (x_1, ..., x_n) = (a_{j1}, ..., a_{jn}) (j = 1, 2, ..., m).

Consequently (x̄_1, ..., x̄_n) is a solution of the equation system (53). This proves the necessity of the first condition.

We now move on to the second condition. Necessity is proved as follows. If the inequality defines a support with the sought property, then x̄_{n+1} = 0. Furthermore, we note that its bounding plane passes through r linearly independent elements of T but not through r + 1 linearly independent elements of T. Thus, the element (x̄_1, ..., x̄_n) satisfies the condition of our assertion. This ends our proof.

3. There is a one-to-one correspondence between the nodal subsystems of (49) and the subsystems of elements of T of rank r containing r elements which do not include the element e⃗_{n+1} and through each of which passes the bounding plane of one of the boundary supports of the second type of this set. Indeed, this property follows easily from Property 1. Let

ℓ_{j_k}(x⃗) - a_{j_k} ≥ 0 (k = 1, 2, ..., r)

be a nodal subsystem of (49) and let x⃗^0 = (x_1^0, ..., x_n^0) be some nodal solution of that subsystem. Then the nonzero solution (x_1^0, ..., x_n^0, 1) of system (50) turns all of the inequalities of the subsystem

ℓ_{j_k}(x⃗) - a_{j_k} t ≥ 0 (k = 1, 2, ..., r)

of system (50) into equations. But this means that the chosen nodal solution of system (49) corresponds to the subsystem (a_{j_k 1}, ..., a_{j_k n}, -a_{j_k}) (k = 1, 2, ..., r) of rank r of r elements of the set T that are distinct from e⃗_{n+1} and that lie in the bounding plane of the boundary support

x_1^0 x_1 + ... + x_n^0 x_n + x_{n+1} ≥ 0

of the second type of T. It is obvious that the correspondence is one-to-one.

4. The extremal subsystems of system (49) are in one-to-one correspondence with the maximal subsystems of elements of T, not including e⃗_{n+1}, through each of which passes the bounding plane of at least one boundary support of the second type of the set T. This property follows directly from Property 3 (see the remark about Property 1).

5. If the inequality

ℓ(x⃗) - b = b_1x_1 + ... + b_nx_n - b ≥ 0

is a consequence of a consistent system (49), then the element (b_1, ..., b_n, -b) is an element of a subcone of the cone K which is generated by the element e⃗_{n+1} and r linearly independent elements of T distinct from e⃗_{n+1} through which passes the bounding plane of a boundary support of the second type of T. This property follows directly from Property 3 and Theorem 2.2. It is geometrically equivalent to Theorem 2.2 for linear inequality systems on P^n.
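Read over P = R (or Q), Theorem 2.21 is a Farkas-type criterion: system (49) is inconsistent precisely when some nonnegative combination of the generators (a_{j1}, ..., a_{jn}, -a_j) and e⃗_{n+1} of the cone K equals (0, ..., 0, -1). A hedged sketch that merely verifies such a certificate for a toy system of our own (the helper name is illustrative, not the book's):

```python
from fractions import Fraction
from itertools import product

def certificate_shows_inconsistent(rows, p, p0):
    """Check the relation -e_{n+1} = sum_j p_j*(a_j1..a_jn, -a_j) + p0*e_{n+1}.

    rows -- the homogenized rows (a_j1, ..., a_jn, -a_j) of system (50);
    p    -- candidate nonnegative multipliers p_1..p_m;
    p0   -- candidate nonnegative multiplier of e_{n+1}.
    """
    if p0 < 0 or any(pj < 0 for pj in p):
        return False
    dim = len(rows[0])
    combo = [sum(Fraction(pj) * Fraction(r[i]) for pj, r in zip(p, rows))
             for i in range(dim)]
    combo[-1] += Fraction(p0)
    target = [Fraction(0)] * dim
    target[-1] = Fraction(-1)
    return combo == target

# System (49): x >= 1 and -x >= 0, plainly inconsistent.
# Homogenized rows of (50): (a_11, -a_1) = (1, -1) and (-1, 0).
rows = [(1, -1), (-1, 0)]

# p = (1, 1), p0 = 0 gives 1*(1,-1) + 1*(-1,0) = (0, -1) = -e_{n+1}.
assert certificate_shows_inconsistent(rows, [1, 1], 0)

# For the consistent system x >= 1 (row (1, -1) alone), no nonnegative
# multipliers produce (0, -1): brute-force over a small integer grid.
assert not any(certificate_shows_inconsistent([(1, -1)], [pj], p0)
               for pj, p0 in product(range(5), repeat=2))
```

Verifying a certificate is a finite computation even though finding one is, in general, a linear-programming problem.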
THEOREM 2.22. Let x⃗^k = (x_1^k, ..., x_n^k) (k = 1, 2, ..., ℓ) be the nodal solutions of the extremal subsystems of system (49) (of nonzero rank), taken one for each extremal subsystem. Let

y⃗^k = (y_1^k, ..., y_n^k) (k = 1, 2, ..., h)

be a maximal system of linearly independent solutions of the system ℓ_j(x⃗) = 0 (j = 1, 2, ..., m) (system (53)), let

y⃗^0 = (y_1^0, ..., y_n^0) = -Σ_{k=1}^{h} (y_1^k, ..., y_n^k),

and let z⃗^k = (z_1^k, ..., z_n^k) (k = 1, 2, ..., g) be a fundamental solution system of the system

ℓ_j(x⃗) ≥ 0 (j = 1, 2, ..., m).   (54)

Then the set M of elements

(x⃗^k, 1) (k = 1, 2, ..., ℓ),  (y⃗^k, 0) (k = 0, 1, ..., h),  (z⃗^k, 0) (k = 1, 2, ..., g)

of the space P^{n+1} is a generating set of elements for the dual cone K' of the conjugate cone K of system (49) (in view of Theorem 2.17, the cone K' coincides with the set of solutions of (50)) that has the smallest number of elements.

PROOF. In view of Properties 1 and 2, each element of M defines a boundary support of the set T of elements

(a_{j1}, ..., a_{jn}, -a_j) (j = 1, 2, ..., m),  e⃗_{n+1} = (0, ..., 0, 1).

Let K̄ be the intersection of these supports. In view of Theorem 2.16 (see also the remark about Theorem 2.21) there exists a finite set Q of supports of T whose intersection is the cone K. By the definition of a support, each element of K satisfies every inequality that defines a support of T. Hence K ⊆ K̄.

On the other hand, it is not hard to see that if

x̄_1 x_1 + ... + x̄_n x_n + x̄_{n+1} x_{n+1} ≥ 0   (55)

is an inequality defined by one of the supports included in the set Q, then it is satisfied by all elements of K̄. Actually, if that inequality is defined by a support of the first type then, in view of Property 2, x̄_{n+1} = 0 and (x̄_1, ..., x̄_n) is a nonzero solution of system (53). But then there exist nonnegative elements p_0, p_1, ..., p_h of the field P such that

(x̄_1, ..., x̄_n, 0) = Σ_{k=0}^{h} p_k (y⃗^k, 0).

But then this inequality is a consequence of the inequality system

y_1^k x_1 + ... + y_n^k x_n + 0·x_{n+1} ≥ 0 (k = 0, 1, ..., h).

Hence all the elements of K̄ satisfy it.

Suppose (55) is defined by a support of the second type. Then, in view of Properties 1 and 2, two cases are possible: x̄_{n+1} = 0 and x̄_{n+1} ≠ 0. In the first case, in view of Property 2, x̄ = (x̄_1, ..., x̄_n) is a fundamental solution of system (54). Hence there exists a positive element p ∈ P such that, for some k = k', the element x̄' = x̄ - p z⃗^{k'} satisfies system (53) (see Corollary 2.13). Thus the inequality (55) (with x̄_{n+1} = 0) may be written in the form

p(z_1^{k'} x_1 + ... + z_n^{k'} x_n + 0·x_{n+1}) + (x̄'_1 x_1 + ... + x̄'_n x_n + 0·x_{n+1}) ≥ 0.

But all elements of K̄ satisfy the inequality

z_1^{k'} x_1 + ... + z_n^{k'} x_n + 0·x_{n+1} ≥ 0

by definition of that cone. Also, in view of the above discussion, they satisfy

x̄'_1 x_1 + ... + x̄'_n x_n + 0·x_{n+1} ≥ 0

(because the element (x̄'_1, ..., x̄'_n) is a nonnegative linear combination of the elements y⃗^k (k = 0, 1, ..., h)). Hence, in this case (of x̄_{n+1} = 0), all elements of K̄ satisfy the inequality (55).

In the second case (of x̄_{n+1} ≠ 0), obviously x̄_{n+1} > 0. Hence the inequality (55) reduces to

x_1^0 x_1 + ... + x_n^0 x_n + x_{n+1} ≥ 0.

But then, in view of Property 1, x⃗^0 = (x_1^0, ..., x_n^0) is a nodal solution of system (49). Now, each nodal solution of (49) is a nodal solution of one of its extremal subsystems. Hence, among the elements x⃗^k (k = 1, ..., ℓ) we may find an element x⃗^{k'} such that both x⃗^0 and x⃗^{k'} are nodal solutions of one and the same extremal subsystem. But then x̄' = x⃗^0 - x⃗^{k'} is a solution of system (53). Hence our inequality may be written in the form

(x_1^{k'} x_1 + ... + x_n^{k'} x_n + x_{n+1}) + (x̄'_1 x_1 + ... + x̄'_n x_n) ≥ 0.

Clearly, this inequality, and hence our inequality (55), is satisfied by all the elements of K̄. This proves that K̄ ⊆ K. Since K ⊆ K̄, this means that the intersection of the boundary supports defined by the elements of the set M coincides with the cone K. This proves the first assertion of the theorem.

The minimality of the number of elements of M follows from the discussion below. The elements (x⃗^k, 1) (k = 1, 2, ..., ℓ) and (z⃗^k, 0) (k = 1, 2, ..., g) constitute a fundamental solution system of system (50). Indeed, the fundamentality of the solutions (x⃗^k, 1) is established in the auxiliary proposition introduced below. The fundamentality of the (z⃗^k, 0) follows directly from their definition. It is not hard to see that any fundamental solution of (50) is nonessentially distinct either from one of the (x⃗^k, 1) (if its last coordinate is nonzero) or from one of the (z⃗^k, 0) (if its last coordinate is zero). Finally it is not hard to show that all fundamental solutions introduced here are essentially distinct from each other. It is also obvious that the elements (y⃗^k, 0) (k = 1, 2, ..., h) constitute a maximal linearly independent solution system of the equations

ℓ_j(x⃗) - a_j t = 0 (j = 1, 2, ..., m),  t = 0.

Hence, by the second of the remarks about Corollary 2.15b, it follows that the number of elements of the set M of generating elements of K' is minimal.

We now formulate our auxiliary proposition.

A solution x⃗ = (x_1, ..., x_n) of system (49) of rank r > 0 is a nodal solution of that system if and only if (x⃗, 1) is a fundamental solution of system (50). This proposition is, obviously, equivalent to Property 1 of the conjugate cone K of system (49) as established above.

To formulate a useful corollary of the above proposition, we introduce our next definition.

DEFINITION 2.12. An extremal element of a set S ⊆ P^n is an element x⃗ of S such that the relation

x⃗ = (x⃗_1 + x⃗_2)/2  (x⃗_1 ∈ S, x⃗_2 ∈ S)

implies that x⃗ = x⃗_1 and x⃗ = x⃗_2. An extremal element of the set of solutions of system (49) is called an extremal solution of that system.

It is easy to show that the extremal solutions of a system (49) of rank r = n are those, and only those, solutions of (49) that are nodal solutions of it. Indeed, let x⃗^0 = (x_1^0, ..., x_n^0) be a
nodal solution of (49) and let x⃗_1 and x⃗_2 be two of its solutions with

x⃗^0 = (x⃗_1 + x⃗_2)/2.

If

ℓ_{j_k}(x⃗) - a_{j_k} ≥ 0 (k = 1, 2, ..., n)

is a nodal subsystem of (49) such that ℓ_{j_k}(x⃗^0) - a_{j_k} = 0 (k = 1, 2, ..., n), then substituting x⃗^0 = (x⃗_1 + x⃗_2)/2 we get

(1/2)(ℓ_{j_k}(x⃗_1) - a_{j_k}) + (1/2)(ℓ_{j_k}(x⃗_2) - a_{j_k}) = 0 (k = 1, 2, ..., n).

Since x⃗_1 and x⃗_2 are solutions of (49) we have

ℓ_{j_k}(x⃗_1) - a_{j_k} ≥ 0 (k = 1, 2, ..., n)

and

ℓ_{j_k}(x⃗_2) - a_{j_k} ≥ 0 (k = 1, 2, ..., n).

Hence

ℓ_{j_k}(x⃗_1) - a_{j_k} = 0 (k = 1, 2, ..., n)  and  ℓ_{j_k}(x⃗_2) - a_{j_k} = 0 (k = 1, 2, ..., n).

Hence all three elements x⃗^0, x⃗_1 and x⃗_2 are solutions of the system

ℓ_{j_k}(x⃗) - a_{j_k} = 0 (k = 1, 2, ..., n)

of rank n. Since that system has a unique solution, we have x⃗^0 = x⃗_1 = x⃗_2. Consequently x⃗^0 is an extremal solution of system (49).

Now let x⃗^0 be an extremal solution of system (49) of rank r = n. If it is not a nodal solution of (49), then the maximal subsystem of (49) that x⃗^0 turns into equations has rank ℓ̄ ≠ n. If

ℓ_{j_k}(x⃗) - a_{j_k} = 0 (k = 1, 2, ..., m^0)

is that subsystem, then x⃗^0 satisfies the system

ℓ_j(x⃗) - a_j > 0 (j = 1, 2, ..., m; j ≠ j_1, ..., j_{m^0}).

Since the rank ℓ̄ ≠ n, the equation system

ℓ_{j_k}(x⃗) - a_{j_k} = 0 (k = 1, 2, ..., m^0)

has a solution x⃗'' ≠ x⃗^0. In view of Lemma 1.4, there exists t = t' > 0 such that, for all nonnegative t ≤ t', the element x⃗^0 + t(x⃗'' - x⃗^0) satisfies the preceding system of inequalities. Using the lemma again (taking 2x⃗^0 - x⃗'' for x⃗'') we conclude easily that there exists t = t'' > 0 such that for all nonnegative t ≤ t'' the element x⃗^0 - t(x⃗'' - x⃗^0) satisfies that system. If t^0 = min(t', t''), then both elements

x⃗_1 = x⃗^0 + t^0(x⃗'' - x⃗^0),  x⃗_2 = x⃗^0 - t^0(x⃗'' - x⃗^0)

satisfy that system. Since x⃗_1 and x⃗_2 are, obviously, solutions of our equation system, it follows that they are solutions of (49). But then

x⃗^0 = (x⃗_1 + x⃗_2)/2

and the extremality of x⃗^0 imply that x⃗^0 = x⃗_1 = x⃗_2. However, for t^0 > 0 this is obviously not possible. The obtained contradiction implies that x⃗^0 is a nodal solution of system (49).

In case m^0 = 0 the argument is analogous (x⃗'' is then an arbitrary element of P^n distinct from x⃗^0).
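For rank r = n, the equivalence just proved yields a finite procedure for finding all extremal solutions: solve each subsystem of n inequalities of rank n as equations and keep the solutions that satisfy the whole system. A sketch for the unit square in Q^2 (a toy example of ours; `solve2` and `satisfies` are illustrative helpers):

```python
from fractions import Fraction
from itertools import combinations

# Rows (a_j1, a_j2, a_j) of a rank-2 system (49) in two unknowns,
# meaning a_j1*x1 + a_j2*x2 - a_j >= 0; the four inequalities below cut
# out the unit square (a toy example of ours, not one from the book).
SYSTEM = [(1, 0, 0), (0, 1, 0), (-1, 0, -1), (0, -1, -1)]

def satisfies(x):
    return all(a1 * x[0] + a2 * x[1] - a >= 0 for a1, a2, a in SYSTEM)

def solve2(r1, r2):
    """Unique solution of the 2x2 equation subsystem, or None if singular."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if det == 0:
        return None
    return (Fraction(r1[2] * r2[1] - r2[2] * r1[1], det),
            Fraction(r1[0] * r2[2] - r2[0] * r1[2], det))

# Nodal solutions: solutions of the whole system that turn some rank-2
# subsystem of two inequalities into equations -- here, the vertices.
nodal = {x for r1, r2 in combinations(SYSTEM, 2)
         if (x := solve2(r1, r2)) is not None and satisfies(x)}
assert nodal == {(0, 0), (1, 0), (0, 1), (1, 1)}

# Midpoint test of the extremality definition: the center of the square
# solves the system but is the midpoint of two distinct solutions, so it
# is not extremal -- and indeed it is not nodal.
mid = tuple((p + q) / Fraction(2) for p, q in zip((0, 0), (1, 1)))
assert satisfies(mid) and mid not in nodal
```

Enumerating all n-element subsystems is exponential in general; this brute force is only meant to mirror the proof's case analysis.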
Now the proposition which we used in the proof of the last theorem may be restated in the following form (see Goldman [1]).
LEMMA 2.10. A solution x⃗ of system (49) of rank n is extremal iff (x⃗, 1) is a fundamental solution of system (50).

To introduce our next proposition we need one more definition.

DEFINITION 2.13. The centroid of a finite set of elements x⃗_1, ..., x⃗_ℓ of P^n is the set of elements of P^n defined by the formula

x⃗ = p_1 x⃗_1 + ... + p_ℓ x⃗_ℓ,

where p_1, ..., p_ℓ are nonnegative elements of P whose sum p_1 + ... + p_ℓ is one. Such a centroid is said to be finitely generated, and the elements x⃗_1, ..., x⃗_ℓ are called its generating elements.

THEOREM 2.23. (See Goldman [1], Theorem 1.) The set D of solutions of a consistent system (49) of nonzero rank coincides with the algebraic sum of the cone C of solutions of the corresponding system (54) and the centroid E of the set of nodal solutions of all possible extremal subsystems of (49), taken so that one nodal solution corresponds to each extremal subsystem. If a set F in the space P^n can be decomposed into the sum of a finitely generated centroid H and a finitely generated convex cone A, then it is the solution set of at least one system of linear inequalities on P^n, i.e., of a system of the form (49); moreover, the solution set of the system (54) corresponding to any such system coincides with the cone A.

PROOF. Let K be the conjugate cone of a consistent system (49). In view of Theorem 2.17, its dual cone K' coincides with the solution set of the corresponding system (50). Hence the set D of solutions of our system (49) coincides with the set K'_1 of those elements x⃗ of P^n for which (x⃗, 1) ∈ K'. By Theorem 2.22 the cone K' coincides with the algebraic sum of two cones: the cone generated by the elements (y⃗^k, 0) (k = 0, 1, ..., h) and (z⃗^k, 0) (k = 1, 2, ..., g), and the cone generated by the elements (x⃗^k, 1) (k = 1, 2, ..., ℓ). But in view of Theorem 2.19, the elements y⃗^k (k = 0, 1, ..., h) and z⃗^k (k = 1, 2, ..., g) generate the cone C of solutions of system (54). Hence any element of K'_1 may be written in the form

c⃗ + p_1 x⃗^1 + ... + p_ℓ x⃗^ℓ,

where c⃗ is an element of C and where p_1, ..., p_ℓ are nonnegative elements of P whose sum p_1 + ... + p_ℓ is unity. Consequently D = C + E. This proves the first part of the theorem.

To prove the second assertion of the theorem, denote by Ā the cone in P^{n+1} with elements (a⃗, 0), a⃗ ∈ A, and denote by H̄ the cone generated by the elements (h⃗^k, 1) (k = 1, 2, ..., ℓ), where the h⃗^k (k = 1, 2, ..., ℓ) generate the centroid H. Denote the sum Ā + H̄ by F̄. Obviously F̄ is a finitely generated cone in P^{n+1}, and the last coordinate x_{n+1} of each of its elements is nonnegative. Hence, in view of Corollary 2.11, there exists at least one system of linear inequalities of the form
a'_{j1}x_1 + ... + a'_{jn}x_n - a'_j x_{n+1} ≥ 0 (j = 1, 2, ..., m'),  x_{n+1} ≥ 0,

whose solution set is F̄. Clearly, for x_{n+1} = 1 it reduces to a system of the form (49) with the solution set F = A + H. This proves the second assertion of the theorem.

The third assertion is proved in the following manner. Let

ℓ'_j(x⃗) - a'_j = a'_{j1}x_1 + ... + a'_{jn}x_n - a'_j ≥ 0 (j = 1, 2, ..., m')   (56)

be a system of the form (49) whose solution set is F. It is not hard to show that the solution set of the system

ℓ'_j(x⃗) - a'_j x_{n+1} = a'_{j1}x_1 + ... + a'_{jn}x_n - a'_j x_{n+1} ≥ 0 (j = 1, 2, ..., m'),  x_{n+1} ≥ 0   (57)

coincides with F̄ = Ā + H̄ (see the notation in the preceding part of the proof). Indeed, in view of Corollary 2.11, there exists at least one system of linear inequalities

a''_{j1}x_1 + ... + a''_{jn}x_n - a''_j x_{n+1} ≥ 0 (j = 1, 2, ..., m''),  x_{n+1} ≥ 0,

whose solution set is F̄. But the set of solutions of its corresponding system

a''_{j1}x_1 + ... + a''_{jn}x_n - a''_j ≥ 0 (j = 1, 2, ..., m'')

obviously coincides with F. Hence, in view of the remark about Lemma 2.5, the first system is equivalent to system (57). Thus the solution set of the latter coincides with F̄. But the set of solutions of (57) whose last coordinate x_{n+1} is zero coincides with Ā. Hence the set of solutions of the system

ℓ'_j(x⃗) ≥ 0 (j = 1, 2, ..., m')

coincides with A. The third assertion of the theorem is proved.

REMARK ABOUT THEOREM 2.23. In view of the first part of Theorem 2.23, any solution x⃗ of system (49) may be written in the form of a sum u⃗ + v⃗, where u⃗ is a solution of the homogeneous linear inequality system corresponding to (49) (obtained by setting a_1 = ... = a_m = 0) and where v⃗ is an element of the centroid generated by the nodal solutions of all extremal subsystems of (49), taken so that one nodal solution corresponds to each extremal subsystem. Let v⃗_1, ..., v⃗_ℓ be those nodal solutions. Let u⃗'_1, ..., u⃗'_h (h = n - r, where r is the rank of (49))
be a maximal system of linearly independent solutions of the equation system

ℓ_j(x⃗) = a_{j1}x_1 + ... + a_{jn}x_n = 0 (j = 1, 2, ..., m),

and let u⃗_1, ..., u⃗_k be a fundamental solution system of the inequality system

ℓ_j(x⃗) ≥ 0 (j = 1, 2, ..., m).

Then, using the decomposition x⃗ = u⃗ + v⃗ and Theorem 2.19, we obtain the following parametric representation of an arbitrary solution of system (49):

x⃗ = u⃗ + v⃗ = Σ_{i=1}^{h} c_i u⃗'_i + Σ_{i=1}^{k} p_i u⃗_i + Σ_{i=1}^{ℓ} q_i v⃗_i,   (58)

where c_1, ..., c_h are arbitrary elements of P, p_1, ..., p_k are nonnegative elements of P, and q_1, ..., q_ℓ are nonnegative elements of P which add up to one.

For a system (49) with the field P = R, the representation (58) was obtained by Rubinstein [1]. For linear homogeneous inequality systems (in R^n) whose rank equals the number of unknowns, the representation was obtained by Minkowski [1]. The general case may be specialized to yield the representation for these special cases (see Chernikov [4]).

In view of Theorem 2.19 and its remark, the assertion of the first part of Theorem 2.23 implies, in particular, that we may select a finite number of elements a⃗_1, ..., a⃗_k, b⃗_1, ..., b⃗_ℓ from the solution set of a consistent system (49) such that every element x⃗ of that set can be written as

x⃗ = p_1 a⃗_1 + ... + p_k a⃗_k + q_1 b⃗_1 + ... + q_ℓ b⃗_ℓ,

where p_1, ..., p_k are nonnegative elements of P and where q_1, ..., q_ℓ are nonnegative elements of P that add up to one. For P = R, this assertion is essentially equivalent to the well-known Motzkin theorem on the existence of a finite basis for any nonempty convex polyhedral set (see Motzkin [1]). The validity of the converse of this theorem follows from the assertion of the second part of Theorem 2.23 (for the case P = R it was proved in the paper of Goldman [1]).

COROLLARY 2.16. If F = A_1 + H^1 = A_2 + H^2 are two decompositions of the set F
⊆ P^n into the sum of a finitely generated cone and a finitely generated centroid, then A_1 = A_2. Suppose, further, that the sets H_1 and H_2 of generating elements of the centroids H^1 and H^2 do not include any element that can be expressed as the sum of an element of the centroid generated by the remaining elements of the set and an element of the cone A = A_1 = A_2 (in that case the sets H_i are called A-minimal). Then the elements of H_1 and H_2 coincide to within a term in the maximal linear subspace A' of the cone A. We express this fact by writing H_1 = H_2 (A').

PROOF. In view of Theorem 2.23, there exists a system of linear inequalities of the form (49) whose solution set coincides with F. Also, in view of that theorem, the solution set of that system's associated system (54) coincides with the cone A_1 as well as with the cone A_2. This proves our first assertion. The second assertion is obviously true if F = P^n. Hence we assume, for the remainder of this proof, that F ≠ P^n.
In view of the second assertion of Theorem 2.23, there exists a system (49) whose solution set coincides with F. Hence, for F ≠ P^n, the second assertion of our corollary follows from the next proposition.

Suppose the set F of solutions of a system (49) of nonzero rank can be decomposed into the sum of a finitely generated convex cone A and a finitely generated centroid H with an A-minimal generating set H'. Then the latter coincides with a set W' of nodal solutions of all possible extremal subsystems of (49), taken in such a way that one solution corresponds to each subsystem.

Let W' be a set of nodal solutions of system (49) satisfying this condition. We shall show first that W' ⊆ H' (A'), i.e., that there exists a subset H'_1 of H' such that H'_1 = W' (A'). Indeed, let w⃗' be an element of W'. Since w⃗' ∈ F = A + H, it follows that w⃗' = a⃗ + h⃗ (a⃗ ∈ A, h⃗ ∈ H). Substituting the element w⃗' in the inequalities of system (49) we get

ℓ_j(w⃗') - a_j = ℓ_j(a⃗) + ℓ_j(h⃗) - a_j ≥ 0 (j = 1, 2, ..., m).

If r is the rank of (49) then, in view of the definition of nodal solutions, there exist r linearly independent linear forms ℓ_{j_k}(x⃗) (k = 1, 2, ..., r) such that

ℓ_{j_k}(w⃗') - a_{j_k} = ℓ_{j_k}(a⃗) + ℓ_{j_k}(h⃗) - a_{j_k} = 0 (k = 1, 2, ..., r).   (59)

Now

ℓ_j(h⃗) - a_j ≥ 0 (j = 1, 2, ..., m)

(this follows from the obvious fact that h⃗ ∈ F). Furthermore,

ℓ_{j_k}(a⃗) ≥ 0 (k = 1, 2, ..., r),

in view of the second part of Theorem 2.23. Thus (59) yields

ℓ_{j_k}(a⃗) = 0 (k = 1, 2, ..., r)

and

ℓ_{j_k}(h⃗) - a_{j_k} = 0 (k = 1, 2, ..., r).

Consequently a⃗ ∈ A' (since the linear forms ℓ_{j_k}(x⃗) (k = 1, 2, ..., r) are linearly independent and r is the rank of system (49)) and h⃗ is one of the nodal solutions of system (49).

The element h⃗, as an element of the centroid H, has the decomposition

h⃗ = p_1 h⃗'_1 + ... + p_s h⃗'_s  (h⃗'_1, ..., h⃗'_s ∈ H'),

where the coefficients p_1, ..., p_s are positive and add up to one. Since h⃗'_1, ..., h⃗'_s are in F, substituting this expression in the equations ℓ_{j_k}(h⃗) - a_{j_k} = 0 yields

ℓ_{j_k}(h⃗'_1) - a_{j_k} = ... = ℓ_{j_k}(h⃗'_s) - a_{j_k} = 0 (k = 1, 2, ..., r).

But this means that all of the elements h⃗'_1, ..., h⃗'_s are nodal solutions of (49) and that they are distinct from each other by at most a term in A'. In view of the A-minimality of the set H', this implies that s = 1. But then h⃗ ∈ H'. Since a⃗ ∈ A', this means that w⃗' ∈ H' (A'). Hence W' ⊆ H' (A').

Now suppose h⃗' ∈ H' and h⃗' ∉ W' (A'). In view of the first part of Theorem 2.23,

F = A + B,

where B is the centroid with W' as a set of generating elements. Hence

h⃗' = a⃗ + q_1 w⃗'_1 + ... + q_t w⃗'_t  (a⃗ ∈ A; w⃗'_1, ..., w⃗'_t ∈ W'),

where q_1, ..., q_t are positive elements of P that add up to one. The set H' has been shown to contain the elements

w⃗'_1 + a⃗'_1, ..., w⃗'_t + a⃗'_t

for some elements a⃗'_1, ..., a⃗'_t in A'. Thus we get the decomposition

h⃗' = (a⃗ - q_1 a⃗'_1 - ... - q_t a⃗'_t) + q_1(w⃗'_1 + a⃗'_1) + ... + q_t(w⃗'_t + a⃗'_t),

in which a⃗ - q_1 a⃗'_1 - ... - q_t a⃗'_t ∈ A. But this contradicts the A-minimality of the set H'. Consequently H' ⊆ W' (A'). But w⃗' + a⃗' (a⃗' ∈ A') is a nodal solution of (49) whenever w⃗' is a nodal solution of (49). Hence the set H' coincides with a set W' that satisfies the condition of the proposition. As we already noted, this implies the validity of the second assertion of the present corollary to Theorem 2.23.

Following Minkowski (see Minkowski [2]; see also Bonnesen and Fenchel [1]) we introduce:

DEFINITION 2.14. Let H^1, ..., H^s be the solution sets of s consistent inequality systems of the form (49) and let p_1, ..., p_s be nonnegative elements of P. Then the set H of elements

h⃗ = p_1 h⃗^1 + ... + p_s h⃗^s,

where h⃗^i is an element of H^i (i = 1, ..., s), is called a linear combination of the sets H^1, ..., H^s and is denoted by

H = p_1 H^1 + ... + p_s H^s.

COROLLARY 2.17. Any linear combination H = p_1 H^1 + ... + p_s H^s of the sets H^1, ..., H^s of solutions of s consistent systems of the form (49) is the solution set of at least one system of the form (49).

PROOF. In view of Theorem 2.23,

H^i = C^i + E^i
(i = 1, ..., s),

where C^i is a finitely generated cone in P^n and E^i is a finitely generated centroid in P^n. Let

c⃗_1^i, ..., c⃗_{ℓ_i}^i (i = 1, ..., s)

be the generating elements of C^i and let

e⃗_1^i, ..., e⃗_{h_i}^i (i = 1, ..., s)

be the generating elements of E^i. The set H is the sum of the cone generated by the elements c⃗_1^i, ..., c⃗_{ℓ_i}^i (i = 1, ..., s) and the centroid generated by all possible elements

p_1 e⃗_{k(1)}^1 + ... + p_s e⃗_{k(s)}^s,

where k(i) is any of the numbers 1, ..., h_i. But then, in view of Theorem 2.23, there exists a system of the form (49) for which H is the solution set. This is what we were to prove.

In connection with the above proposition there arises the question of finding a system of the form (49) which has the solution set H = p_1 H^1 + ... + p_s H^s. For the sake of simplicity, we answer this question for the special case H = H^1 + H^2, where H^1 and H^2 are, respectively, the solution sets of the systems

ℓ_j^1(x⃗) - a_j^1 = a_{j1}^1 x_1 + ... + a_{jn}^1 x_n - a_j^1 ≥ 0 (j = 1, 2, ..., m_1)

and

ℓ_j^2(x⃗) - a_j^2 = a_{j1}^2 x_1 + ... + a_{jn}^2 x_n - a_j^2 ≥ 0 (j = 1, 2, ..., m_2)

(on P^n). Let

x⃗^k = (x_1^k, ..., x_n^k) (k = 1, 2, ..., ℓ_1)  and  y⃗^k = (y_1^k, ..., y_n^k) (k = 1, 2, ..., ℓ_2)

be, respectively, nodal solutions of the first system and of the second system, one solution corresponding to each extremal subsystem, and let

u⃗^k = (u_1^k, ..., u_n^k) (k = 1, 2, ..., h_1)  and  v⃗^k = (v_1^k, ..., v_n^k) (k = 1, 2, ..., h_2)

be, respectively, the generating elements of the cones C^1 and C^2 of solutions of the systems

ℓ_j^1(x⃗) ≥ 0 (j = 1, 2, ..., m_1)  and  ℓ_j^2(x⃗) ≥ 0 (j = 1, 2, ..., m_2).

Let us recall here the minimal set of generating elements for the cone of solutions of a homogeneous linear inequality system, established in Theorem 2.19 and its remark. Let C be the cone generated by the elements u⃗^k (k = 1, 2, ..., h_1) and v⃗^k (k = 1, 2, ..., h_2), and let E be the centroid of all possible elements of the form

z⃗(k_1, k_2) = x⃗^{k_1} + y⃗^{k_2},

where k_1 is any of the numbers 1, 2, ..., ℓ_1 and k_2 is any of the numbers 1, 2, ..., ℓ_2. Consider the cone D in the space P^{n+1} with generating elements
(u⃗^k, 0) (k = 1, 2, ..., h_1),  (v⃗^k, 0) (k = 1, 2, ..., h_2),  (z⃗(k_1, k_2), 1) (k_1 = 1, 2, ..., ℓ_1; k_2 = 1, 2, ..., ℓ_2).

In view of Theorem 2.11 there exists a (still undetermined) system

a_{j1}x_1 + ... + a_{jn}x_n - a_j x_{n+1} ≥ 0 (j = 1, 2, ..., m)   (60)

whose solution set coincides with D. But then, in view of Theorem 2.17, the cone F generated by the elements

(a_{j1}, ..., a_{jn}, -a_j) (j = 1, 2, ..., m)   (61)

coincides with the solution set of the system

u_1^k x_1 + ... + u_n^k x_n ≥ 0 (k = 1, 2, ..., h_1),
v_1^k x_1 + ... + v_n^k x_n ≥ 0 (k = 1, 2, ..., h_2),
z_1(k_1, k_2)x_1 + ... + z_n(k_1, k_2)x_n + x_{n+1} ≥ 0 (k_1 = 1, 2, ..., ℓ_1; k_2 = 1, 2, ..., ℓ_2).

Thus we may take, for the set (61), those solutions of that system which satisfy the conditions of Theorem 2.19 and its remark. System (60) is then determined. Setting x_{n+1} = 1 in it we obtain a system of the form (49) whose solutions consist of those elements x⃗ ∈ P^n for which (x⃗, 1) ∈ D, i.e., of the elements of the set H. Thus, for x_{n+1} = 1, system (60) reduces to the system of the form (49) that we sought.
Let us now move on and consider some properties of centroids generated by a finite number of elements. First of all we note a property that follows directly from Theorem 2.23.

COROLLARY 2.18. Let $D$ be the set of solutions of a consistent system (49) of nonzero rank. Consider the centroid generated by the set of nodal solutions of all possible extremal subsystems of (49), taken so that one nodal solution corresponds to each extremal subsystem. The set $D$ coincides with this centroid iff the system $\ell_j(\vec{x}) \le 0$ $(j = 1, 2, \dots, m)$ associated with (49) has no nonzero solutions. Any finitely generated centroid in $P^n$ is the solution set of such a system (49).

REMARK. If the solution set of a consistent system (49) is bounded, then its associated system $\ell_j(\vec{x}) \le 0$ $(j = 1, 2, \dots, m)$ possesses no nonzero solutions. Thus, from Corollary 2.18 there follows the proposition: if the set $D$ of solutions of system (49) is bounded, then the set of its nodal solutions is finite and its centroid coincides with $D$.

A system (49) whose solution set coincides with a given finitely generated centroid may be obtained in the following way. Let
$$\vec{x}^{\,k} = (x_1^k, \dots, x_n^k) \quad (k = 1, 2, \dots, \ell) \eqno(62)$$
be the generating elements of a finitely generated centroid $E$, and let $\overline{E}$ be the cone generated by the elements
$$(\vec{x}^{\,k}, 1) = (x_1^k, \dots, x_n^k, 1) \quad (k = 1, 2, \dots, \ell). \eqno(63)$$
Consider the system
$$x_1^k x_1 + \dots + x_n^k x_n + x_{n+1} \le 0 \quad (k = 1, 2, \dots, \ell). \eqno(64)$$
In view of Theorem 2.17, the solution set of that system coincides with the dual cone $\overline{E}'$ of the cone $\overline{E}$. Finding, according to Theorem 2.19 and its remark, a fundamental solution system
$$(a_{j1}, \dots, a_{jn}, a_j) \quad (j = 1, 2, \dots, m) \eqno(65)$$
of the system (64), and a maximal linearly independent solution system $(b_{j1}, \dots, b_{jn}, b_j)$ $(j = 1, 2, \dots, m')$ of the equation system
$$x_1^k x_1 + \dots + x_n^k x_n + x_{n+1} = 0 \quad (k = 1, 2, \dots, \ell), \eqno(66)$$
we obtain the system
$$\left.\begin{aligned} a_{j1} x_1 + \dots + a_{jn} x_n - a_j x_{n+1} &\le 0 \quad (j = 1, 2, \dots, m)\\ b_{j1} x_1 + \dots + b_{jn} x_n - b_j x_{n+1} &\le 0 \quad (j = 1, 2, \dots, m')\\ -\Big(\sum_{j=1}^{m'} b_{j1}\Big) x_1 - \dots - \Big(\sum_{j=1}^{m'} b_{jn}\Big) x_n + \Big(\sum_{j=1}^{m'} b_j\Big) x_{n+1} &\le 0 \end{aligned}\right\} \eqno(67)$$
whose solution set is $\overline{E}$, and also the system
$$\left.\begin{aligned} a_{j1} x_1 + \dots + a_{jn} x_n - a_j &\le 0 \quad (j = 1, 2, \dots, m)\\ b_{j1} x_1 + \dots + b_{jn} x_n - b_j &\le 0 \quad (j = 1, 2, \dots, m')\\ -\Big(\sum_{j=1}^{m'} b_{j1}\Big) x_1 - \dots - \Big(\sum_{j=1}^{m'} b_{jn}\Big) x_n + \sum_{j=1}^{m'} b_j &\le 0 \end{aligned}\right\} \eqno(68)$$
whose solution set is $E$.
DEFINITION 2.15. The vertices of a centroid $E$ with a finite system $H$ of generating elements in $P^n$ are those elements of $H$ that are not contained in the centroid generated by the remaining elements of $H$. The centroid $E$ is said to be nondegenerate if there does not exist a plane in $P^n$ that contains all elements of $H$. The bounding plane of a half space of $P^n$ that contains $E$ and which passes through at least $n$ of the centroid's vertices is called a face of the centroid $E$; such a half space is called a boundary support of the centroid $E$. A finite system of elements of $P^n$ that coincides with the system of vertices of the centroid generated by it in $P^n$ is said to be centroidally independent. By a "plane" we mean, here, a set of elements of $P^n$ satisfying an equation $c_1 x_1 + \dots + c_n x_n = c$ $((c_1, \dots, c_n) \ne (0, \dots, 0);\ c_1, \dots, c_n \in P)$.
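Definition 2.15 can be tried on a small example. The following sketch is ours, not the author's (all function names and data are our own): it tests, with exact rational arithmetic, which generating elements of a centroid in the plane are vertices, i.e., which are not contained in the centroid generated by the remaining elements. By Carathéodory's theorem, membership in a planar centroid reduces to membership in a point, a segment, or a triangle spanned by the generators.

```python
from fractions import Fraction
from itertools import combinations

def on_segment(p, a, b):
    """Exact test: does p lie on the segment from a to b?"""
    cross = (b[0]-a[0])*(p[1]-a[1]) - (p[0]-a[0])*(b[1]-a[1])
    if cross != 0:
        return False
    dot = (p[0]-a[0])*(b[0]-a[0]) + (p[1]-a[1])*(b[1]-a[1])
    return 0 <= dot <= (b[0]-a[0])**2 + (b[1]-a[1])**2

def in_triangle(p, a, b, c):
    """Exact test: is p a convex combination of a, b, c (barycentric coords)?"""
    det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])
    if det == 0:  # degenerate triangle: fall back to the three segments
        return any(on_segment(p, u, v) for u, v in combinations((a, b, c), 2))
    lb = Fraction((p[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(p[1]-a[1]), det)
    lc = Fraction((b[0]-a[0])*(p[1]-a[1]) - (p[0]-a[0])*(b[1]-a[1]), det)
    return lb >= 0 and lc >= 0 and 1 - lb - lc >= 0

def in_centroid(p, gens):
    """Is p contained in the centroid (convex hull) generated by gens?"""
    return (p in gens
            or any(on_segment(p, a, b) for a, b in combinations(gens, 2))
            or any(in_triangle(p, a, b, c) for a, b, c in combinations(gens, 3)))

def vertices(gens):
    """Vertices per Definition 2.15: generators not in the hull of the rest."""
    return [g for g in gens
            if not in_centroid(g, [h for h in gens if h != g])]

H = [(0, 0), (2, 0), (0, 2), (1, 1)]   # (1,1) is the midpoint of (2,0)-(0,2)
print(vertices(H))                     # [(0, 0), (2, 0), (0, 2)]
```

The generator $(1,1)$ is discarded because it lies in the centroid of the other three, in agreement with the fact (established below via Lemma 2.11) that the vertex set does not depend on the chosen generating system.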
LEMMA 2.11. If the set of solutions of system (49) is a finitely generated centroid, then the system of its nodal solutions (a finite system in this case) is centroidally independent.

PROOF. Indeed, let
$$\vec{z}^{\,k} \quad (k = 1, 2, \dots, \ell;\ \ell > 1)$$
be the set of all nodal solutions of (49), and suppose one of them, say $\vec{z}^{\,1}$, satisfies
$$\vec{z}^{\,1} = q_2 \vec{z}^{\,2} + \dots + q_{\ell_1} \vec{z}^{\,\ell_1},$$
where $q_2 > 0, \dots, q_{\ell_1} > 0$, $q_2 + \dots + q_{\ell_1} = 1$, and $\ell_1 \le \ell$. If
$$\ell_{j_k}(\vec{x}) - a_{j_k} = a_{j_k 1} x_1 + \dots + a_{j_k n} x_n - a_{j_k} \le 0 \quad (k = 1, 2, \dots, s)$$
is an extremal subsystem of (49) with nodal solution $\vec{z}^{\,1}$, then
$$\ell_{j_k}(q_2 \vec{z}^{\,2} + \dots + q_{\ell_1} \vec{z}^{\,\ell_1}) - a_{j_k} = 0 \quad (k = 1, 2, \dots, s).$$
Hence we get
$$q_2\big(\ell_{j_k}(\vec{z}^{\,2}) - a_{j_k}\big) + \dots + q_{\ell_1}\big(\ell_{j_k}(\vec{z}^{\,\ell_1}) - a_{j_k}\big) = 0 \quad (k = 1, 2, \dots, s),$$
and, since every term here is nonpositive and every $q_i$ is positive, also
$$\ell_{j_k}(\vec{z}^{\,2}) - a_{j_k} = 0,\ \dots,\ \ell_{j_k}(\vec{z}^{\,\ell_1}) - a_{j_k} = 0 \quad (k = 1, 2, \dots, s).$$
By the hypothesis of the lemma, any extremal subsystem of (49) has a unique solution. Thus we have a contradiction, and the lemma is proved.

REMARK. Analogously, we may obtain the following more general proposition: the set of nodal solutions, taken so that one solution corresponds to each extremal subsystem of (49), is centroidally independent.

Using Lemma 2.11, it is easy to conclude that the above definition of the vertices of a centroid is independent of the choice of generating elements for that centroid. Indeed, this follows from the fact that the set of vertices of a finitely generated centroid $E$ coincides with the set of nodal solutions of a system (49) whose solution set coincides with $E$. The validity of this proposition is established in the following manner: in view of Corollary 2.18 (see also its remark) and Lemma 2.11, the set of nodal solutions of a system (49) with solution set $E$ is a centroidally independent generating system of $E$, and it is not hard to show that the centroid $E$ does not possess two distinct systems of generating elements satisfying this condition. From this the assertion follows.

THEOREM 2.24. A nondegenerate finitely generated centroid $E$ in $P^n$ coincides with the intersection of its boundary supports, and through each of its vertices pass no fewer than $n$ of its faces.

PROOF. Let (62) be a system of generating elements of the centroid $E$. In view of the nondegeneracy of $E$, the system (63) of generating elements of the cone $\overline{E}$ has rank $n + 1$.
Hence, that cone is $(n+1)$-dimensional. Since it is, obviously, pointed, its dual cone $\overline{E}'$ is also pointed and $(n+1)$-dimensional (see Lemma 2.9). In view of the pointedness of $\overline{E}'$, system (66) has only the zero solution. Therefore the generating system (65) coincides with the fundamental system of solutions of (64), and in view of the $(n+1)$-dimensionality, this system has rank $n + 1$. Hence systems (67) and (68) take, respectively, the forms
$$a_{j1} x_1 + \dots + a_{jn} x_n - a_j x_{n+1} \le 0 \quad (j = 1, 2, \dots, m) \eqno(67')$$
$$a_{j1} x_1 + \dots + a_{jn} x_n - a_j \le 0 \quad (j = 1, 2, \dots, m) \eqno(68')$$
and their ranks are, respectively, $n + 1$ and $n$. Let
$$\vec{x}^{\,k} = (x_1^k, \dots, x_n^k) \quad (k = 1, 2, \dots, \ell)$$
be the set of all distinct nodal solutions of system (68$'$). As we already noted, this set coincides with the set of vertices of the centroid $E$ which is the solution set of (68$'$). The centroid $E$ is generated by the elements $\vec{x}^{\,k}$; hence the cone $\overline{E}$, obviously, is generated by
$$(\vec{x}^{\,k}, 1) = (x_1^k, \dots, x_n^k, 1) \quad (k = 1, 2, \dots, \ell).$$
Furthermore, these elements constitute a basis for the cone $\overline{E}$; for otherwise one of the elements $\vec{x}^{\,k}$ could be expressed linearly in terms of the other elements with nonnegative coefficients adding up to one, which is impossible in view of Lemma 2.11. Since two bases of a finitely generated convex pointed cone coincide up to positive multiples of their elements, and since the system (63) of generating elements of $\overline{E}$ contains a basis of it, the elements $(\vec{x}^{\,k}, 1)$ of the present basis must coincide with elements of system (63) (they cannot differ from them by a positive multiple, the last coordinate of each being 1). Hence system (64) contains the extremal subsystem
$$x_1^k x_1 + \dots + x_n^k x_n + x_{n+1} = 0 \quad (k = 1, 2, \dots, \ell). \eqno(64')$$
The rank of (64$'$), obviously, coincides with the rank of system (64); furthermore, its fundamental solution system coincides with that of (64). Since the rank of (64$'$) is $n + 1$, it follows, using the definition of a fundamental solution, that each equation
$$a_{j1} x_1 + \dots + a_{jn} x_n - a_j = 0 \quad (j = 1, 2, \dots, m)$$
is satisfied by no fewer than $n$ of the elements $\vec{x}^{\,k} = (x_1^k, \dots, x_n^k)$, which are vertices of the centroid $E$. Hence the inequalities of (68$'$) define its boundary supports, and since the centroid $E$ is the solution set of (68$'$), the first assertion of the theorem is proved. In view of the coincidence of the nodal solutions of that system with the vertices of the centroid $E$, the second assertion of the theorem follows from the definition of fundamental solutions (together with the fact that the rank of (68$'$) equals the number $n$ of its unknowns). The theorem is proved.

§6. THE COLLECTION OF FINITELY GENERATED CONVEX CONES IN $P^n$ AS A STRUCTURE
In the present section we study some properties of the collection of finitely generated convex cones in the space $P^n$, together with those of the polar cones $C^*$ of convex finitely generated cones $C$ introduced in §4 (we denote this collection by $\mathfrak{C}(P^n)$). We first note the following properties of it.

1. For any two elements $C_1$ and $C_2$ of $\mathfrak{C}(P^n)$, the intersection $C_1 \cap C_2$ is an element of that collection.

2. For any two elements $C_1$ and $C_2$ of $\mathfrak{C}(P^n)$, the algebraic sum $C_1 + C_2$, i.e., the smallest convex cone $C_1 \cup C_2$ containing the cones $C_1$ and $C_2$, is an element of the collection $\mathfrak{C}(P^n)$.

The first property is obtained in the following manner. In view of Corollary 2.11, $C_1$ and $C_2$ are solution sets of some finite linear inequality systems in $P^n$. If $S_1$ and $S_2$ are such systems, with solution sets $C_1$ and $C_2$, then the system obtained by adjoining one of them to the other has solution set $C_1 \cap C_2$. But then, in view of Theorem 2.19 and its remark, $C_1 \cap C_2$ is a finitely generated convex cone.

The second property is simple and obvious, since the union of a set of generating elements of $C_1$ with a set of generating elements of $C_2$ is a set of generating elements of $C_1 + C_2 = C_1 \cup C_2$.
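Property 2 can be probed computationally. The sketch below is our illustration (names and example data are ours, not the author's): in the plane, the conic form of Carathéodory's theorem lets one test membership in a finitely generated cone against single generators and pairs of generators, so the algebraic sum $C_1 + C_2$ is handled simply by concatenating the generator lists.

```python
from fractions import Fraction
from itertools import combinations

def in_cone_2d(x, gens):
    """Exact membership test for a finitely generated cone in the plane.
    By the conic Caratheodory theorem, x lies in the cone iff it is a
    nonnegative combination of at most two of the generators."""
    if x == (0, 0):
        return True
    for g in gens:                       # nonnegative multiple of one generator
        cross = g[0]*x[1] - g[1]*x[0]
        dot = g[0]*x[0] + g[1]*x[1]
        if cross == 0 and dot > 0:
            return True
    for a, b in combinations(gens, 2):   # combination of two generators
        det = a[0]*b[1] - a[1]*b[0]
        if det == 0:
            continue
        p = Fraction(x[0]*b[1] - x[1]*b[0], det)
        q = Fraction(a[0]*x[1] - a[1]*x[0], det)
        if p >= 0 and q >= 0:
            return True
    return False

# Property 2: the sum C1 + C2 is generated by the union of the generators.
C1 = [(1, 0)]            # the nonnegative x1-axis
C2 = [(0, 1)]            # the nonnegative x2-axis
C_sum = C1 + C2          # generators of the smallest cone containing C1, C2
print(in_cone_2d((3, 5), C_sum))   # True: (3,5) = 3*(1,0) + 5*(0,1)
print(in_cone_2d((-1, 2), C_sum))  # False: outside the nonnegative quadrant
```

Here the Python list concatenation `C1 + C2` mirrors the statement proved above: a generating system for the algebraic sum is the union of the two generating systems.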
Properties 1 and 2 mean that the collection $\mathfrak{C}(P^n)$ is a structure (a lattice) relative to the partial order induced by set-theoretic inclusion (see Birkhoff [1]). The null cone is the smallest element of this structure and the whole space $P^n$ is its largest element.

We now consider the correspondence $C \to C^*$. It is not hard to see that the correspondence $C \to C^*$ is a dual automorphism (an involution) of $\mathfrak{C}(P^n)$, i.e.:

a) $C^{**} = C$;

b) $C_1 = C_2$ iff $C_1^* = C_2^*$;

c) for any element $\overline{C} \in \mathfrak{C}(P^n)$ there exists an element $C_0 \in \mathfrak{C}(P^n)$ such that $\overline{C} = C_0^*$;

d) $C_1 \subseteq C_2$ iff $C_2^* \subseteq C_1^*$;

e) $(C_1 \cap C_2)^* = C_1^* \cup C_2^*$;

f) $(C_1 \cup C_2)^* = C_1^* \cap C_2^*$.

Property a) was noted in connection with Definition 2.7 of the polar cone of a finitely generated cone. If $C_1 = C_2$ then, obviously, $C_1^* = C_2^*$; if $C_1^* = C_2^*$ then $(C_1^*)^* = (C_2^*)^*$ and hence, by a), $C_1 = C_2$. This proves property b). In view of a), the element $C_0 = \overline{C}^*$ (so that $\overline{C} = (\overline{C}^*)^*$) may be taken as the element required by property c). If $C_1 \subseteq C_2$, then by the definition of polar cones $C_2^* \subseteq C_1^*$; conversely, if $C_2^* \subseteq C_1^*$, then $(C_1^*)^* \subseteq (C_2^*)^*$, i.e., $C_1 \subseteq C_2$. This proves property d).
Let $C_1$ be the solution set of the system $S_1$ given by
$$(\vec{A}_j^1, \vec{x}) = a_{j1}^1 x_1 + \dots + a_{jn}^1 x_n \le 0 \quad (j = 1, 2, \dots, m_1)$$
and let $C_2$ be the solution set of the system $S_2$ given by
$$(\vec{A}_j^2, \vec{x}) = a_{j1}^2 x_1 + \dots + a_{jn}^2 x_n \le 0 \quad (j = 1, 2, \dots, m_2)$$
(here $\vec{A}_j^1 = (a_{j1}^1, \dots, a_{jn}^1)$ and $\vec{A}_j^2 = (a_{j1}^2, \dots, a_{jn}^2)$). Then the cone $C_1 \cap C_2$ is the solution set of the union of the systems $S_1$ and $S_2$ and, hence, is the polar of the cone $A$ generated by the elements
$$\vec{A}_1^1, \dots, \vec{A}_{m_1}^1, \vec{A}_1^2, \dots, \vec{A}_{m_2}^2,$$
i.e., $C_1 \cap C_2 = A^*$. Hence
$$(C_1 \cap C_2)^* = A = A_1 \cup A_2,$$
where $A_1$ and $A_2$ are the cones with generating elements $\vec{A}_1^1, \dots, \vec{A}_{m_1}^1$ and $\vec{A}_1^2, \dots, \vec{A}_{m_2}^2$, respectively. Since $C_1 = A_1^*$ and $C_2 = A_2^*$, we have
$$(C_1 \cap C_2)^* = C_1^* \cup C_2^*.$$
Consequently property e) is proved.

Property f) is obtained in the following manner. In view of property c) there exist convex finitely generated cones $A_1$ and $A_2$ such that $C_1 = A_1^*$ and $C_2 = A_2^*$. But then, as above, $(A_1 \cap A_2)^* = A_1^* \cup A_2^*$, and hence
$$\big((A_1 \cap A_2)^*\big)^* = (A_1^* \cup A_2^*)^*.$$
From here we get
$$A_1 \cap A_2 = (A_1^* \cup A_2^*)^* = (C_1 \cup C_2)^*,$$
and hence, since $A_1 = C_1^*$ and $A_2 = C_2^*$,
$$(C_1 \cup C_2)^* = C_1^* \cap C_2^*.$$

We now prove an additional property:

g) $C \cap C^*$ is the null cone and $C \cup C^* = P^n$ (orthocomplementarity).

Indeed, if $\vec{x} = (x_1, \dots, x_n) \in C \cap C^*$ then $x_1^2 + \dots + x_n^2 \le 0$ and hence $\vec{x} = 0$. Furthermore, we have $(C \cup C^*)^* = C^* \cap C^{**} = C^* \cap C = 0$. Consequently $C \cup C^* = P^n$ (since, in view of Corollary 2.12, the cone $P^n$ is the polar of the null cone).
The above properties of convex finitely generated cones in $P^n$ are summarized in the proposition below (for $P = R$, see Gale [1], Goldman and Tucker [1]).

THEOREM 2.25. The finitely generated cones in $P^n$ constitute a structure $\mathfrak{C}(P^n)$ relative to set-theoretic inclusion. The transformation $C \leftrightarrow C^*$ is a dual automorphism of it, associating each cone $C$ in $\mathfrak{C}(P^n)$ with its orthocomplement $C^* \in \mathfrak{C}(P^n)$.

REMARK. For $n \ge 2$, the structure $\mathfrak{C}(P^n)$ is not a Dedekind structure. Indeed, let $C_1$ be the cone of solutions of the system
$$x_1 \le 0, \quad x_2 \le 0, \quad x_1 - x_2 \le 0,$$
let $C_2$ be the solution cone of the system
$$-x_1 \le 0, \quad x_2 \le 0, \quad -x_2 \le 0,$$
and let $C_3$ be the solution cone of the system
$$x_1 \le 0, \quad x_2 \le 0$$
(all systems here are supposed to be on $P^n$ with $n \ge 2$). Then
$$C_1 \cup (C_2 \cap C_3) = C_1,$$
while $(C_1 \cup C_2) \cap C_3 \ne C_1$ is the solution cone of the system
$$x_1 \le 0, \quad x_2 \le 0.$$
Consequently
$$C_1 \cup (C_2 \cap C_3) \ne (C_1 \cup C_2) \cap C_3.$$
Since $C_1 \subseteq C_3$, this means (Birkhoff [1]) that the structure $\mathfrak{C}(P^n)$ of finitely generated cones in $P^n$ is not a Dedekind structure.

Since the normal divisors of a group constitute a Dedekind structure (see Birkhoff [1]), it is worth noting that the substructure of $\mathfrak{C}(P^n)$ consisting of all subspaces of $P^n$ is a Dedekind structure.

Let us now study some properties of the collection $\mathfrak{D}(P^n)$ of convex polyhedral subsets of $P^n$, i.e., of the sets of solutions of arbitrary consistent systems of linear inequalities on $P^n$. We shall assume that this collection is partially ordered by set-theoretic inclusion. Clearly, it is not a structure, since it contains pairs of elements that do not intersect. If two elements $D_1$ and $D_2$ of $\mathfrak{D}(P^n)$ intersect, then their intersection $D_1 \cap D_2$ is a convex polyhedral set and hence, in that case, is an element of $\mathfrak{D}(P^n)$. In what follows, the sign $\cap$ is put only between those elements of $\mathfrak{D}(P^n)$ whose intersection is not empty; thus, if we write $D_1 \cap D_2$, then in particular the intersection of the elements $D_1$ and $D_2$ is not empty.
We now introduce the union operation $\cup$ for elements of the collection $\mathfrak{D}(P^n)$. Let $D_1$ and $D_2$ be in $\mathfrak{D}(P^n)$. In view of Theorem 2.23,
$$D_1 = C_1 + E_1 \quad \text{and} \quad D_2 = C_2 + E_2,$$
where $C_1$ and $C_2$ are finitely generated convex cones and $E_1$ and $E_2$ are finitely generated centroids. If $C_1 \cup C_2$ is the smallest convex cone containing both $C_1$ and $C_2$, and if $E_1 \cup E_2$ is the centroid generated by the set of vertices of both $E_1$ and $E_2$, then in view of Theorem 2.23 the set $(C_1 \cup C_2) + (E_1 \cup E_2)$ is a convex polyhedral set and hence is an element of $\mathfrak{D}(P^n)$. We define this element to be the union $D_1 \cup D_2$ of the elements $D_1$ and $D_2$ of that collection.
In the preceding paragraph, elements of $\mathfrak{D}(P^n)$ were associated with the finitely generated convex cones of $P^{n+1}$ which lie in the half space $x_{n+1} \ge 0$ and are not contained in the plane $x_{n+1} = 0$. Denote the collection of these cones by $\mathfrak{C}^+(P^{n+1})$. We shall now consider a more appropriate relationship between them and the elements of our set $\mathfrak{D}(P^n)$. We shall assume that the set $\mathfrak{C}^+(P^{n+1})$ is partially ordered by set-theoretic inclusion. To this end we introduce the mapping
$$\varphi(x_1, \dots, x_n, t) = \Big(\frac{x_1}{t}, \dots, \frac{x_n}{t}\Big)$$
of the set of elements of $P^{n+1}$ with a positive last coordinate into $P^n$. If $D^+$ is an element of $\mathfrak{C}^+(P^{n+1})$, then we shall use $\varphi(D^+)$ to denote the collection of images under the mapping $\varphi$ of the elements of the cone $D^+$ with positive last coordinate. By definition of the collection $\mathfrak{C}^+(P^{n+1})$, the set $\varphi(D^+)$ is not empty. If
$$\ell_j(\vec{x}) - a_j t = a_{j1} x_1 + \dots + a_{jn} x_n - a_j t \le 0 \quad (j = 1, 2, \dots, m), \qquad t \ge 0 \eqno(69)$$
is a linear inequality system on $P^{n+1}$ whose solution set coincides with the cone $D^+$ (the inequality $t \ge 0$, obviously, may be added to such a system without altering its solution set), then the solution set of the system
$$\ell_j(\vec{x}) - a_j = a_{j1} x_1 + \dots + a_{jn} x_n - a_j \le 0 \quad (j = 1, 2, \dots, m) \eqno(70)$$
coincides, obviously, with the image $\varphi(D^+)$ of the cone $D^+$. Consequently $\varphi(D^+) \in \mathfrak{D}(P^n)$. Hence the mapping $\varphi$ induces the mapping $\psi$, defined by $\psi(D^+) = \varphi(D^+)$, of the set $\mathfrak{C}^+(P^{n+1})$ into the set $\mathfrak{D}(P^n)$.

If $D$ is an element of $\mathfrak{D}(P^n)$, then in view of Corollary 2.11 there exists a system (70) of which $D$ is the solution set. By Theorem 2.23, $D = C + E$, where $C$ is the (finitely generated convex) cone of solutions of the system $\ell_j(\vec{x}) \le 0$ $(j = 1, 2, \dots, m)$ and $E$ is a finitely generated centroid. Let
$$\vec{c}^{\,k} = (c_1^k, \dots, c_n^k) \quad (k = 1, 2, \dots, \ell)$$
be the generating elements of the cone $C$, let
$$\vec{e}^{\,k} = (e_1^k, \dots, e_n^k) \quad (k = 1, 2, \dots, h)$$
be the generating elements of the centroid $E$, and let $D^+$ be the cone of elements $(\vec{x}, t)$ which may be written in the form
$$(\vec{x}, t) = \sum_{k=1}^{\ell} p_k (\vec{c}^{\,k}, 0) + \sum_{k=1}^{h} q_k (\vec{e}^{\,k}, 1),$$
where all $p_k$ and $q_k$ are nonnegative. Clearly $D^+ \in \mathfrak{C}^+(P^{n+1})$; clearly, also, $D^+$ is the solution cone of the system (69) associated with system (70). Hence $\psi(D^+) = D$, and thus $\psi$ maps $\mathfrak{C}^+(P^{n+1})$ onto $\mathfrak{D}(P^n)$.

It is not hard to show that the mapping $\psi$ is independent of the choice of the system (69) whose solution set coincides with $D^+$. Indeed, if
$$\ell'_j(\vec{x}) - a'_j t \le 0 \quad (j = 1, 2, \dots, m'), \qquad t \ge 0$$
is any other system whose solution set is $D^+$, then, in view of the remark to Lemma 2.5, the solution set of its associated system
$$\ell'_j(\vec{x}) - a'_j \le 0 \quad (j = 1, 2, \dots, m') \eqno(70')$$
coincides with the set $D$ of solutions of system (70) associated with the system (69) initially chosen. We now show that the mapping $\psi$
is one-to-one from $\mathfrak{C}^+(P^{n+1})$ onto $\mathfrak{D}(P^n)$. Indeed, let $D_1^+$ and $D_2^+$ be two elements of $\mathfrak{C}^+(P^{n+1})$ for which $\psi(D_1^+) = \psi(D_2^+)$, and let
$$\ell_j(\vec{x}) - a_j t \le 0 \quad (j = 1, 2, \dots, m), \qquad t \ge 0$$
and
$$\ell'_j(\vec{x}) - a'_j t \le 0 \quad (j = 1, 2, \dots, m'), \qquad t \ge 0$$
be two systems having solution sets $D_1^+$ and $D_2^+$ respectively. By hypothesis $\psi(D_1^+) = \psi(D_2^+)$, hence $\varphi(D_1^+) = \varphi(D_2^+)$. Thus the system
$$\ell_j(\vec{x}) - a_j \le 0 \quad (j = 1, 2, \dots, m)$$
is equivalent to the system
$$\ell'_j(\vec{x}) - a'_j \le 0 \quad (j = 1, 2, \dots, m').$$
But then, in view of the remark to Lemma 2.5, the two original systems are equivalent as well. Consequently $D_1^+ = D_2^+$.

LEMMA 2.12. The mapping $\varphi(x_1, \dots, x_n, t) = (x_1/t, \dots, x_n/t)$ of the set of elements $(x_1, \dots, x_n, t)$ $(t > 0)$ of the space $P^{n+1}$ induces a one-to-one mapping $\psi$ of the set $\mathfrak{C}^+(P^{n+1})$ onto the set $\mathfrak{D}(P^n)$, under which the image $\psi(D^+)$ of an element $D^+ \in \mathfrak{C}^+(P^{n+1})$ coincides with the element $D \in \mathfrak{D}(P^n)$.
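The correspondence of Lemma 2.12 can be sketched in a few lines of code (our illustration; the generator data are hypothetical): a polyhedral set $D = C + E$ is lifted to the cone $D^+$ by appending a last coordinate $0$ to the cone generators and $1$ to the centroid generators, and the mapping $\varphi$ divides out the positive last coordinate.

```python
from fractions import Fraction

def lift(cone_gens, centroid_gens):
    """Generators of the cone D+ in P^{n+1} associated with D = C + E:
    cone generators get last coordinate 0, centroid generators get 1."""
    return ([tuple(c) + (0,) for c in cone_gens] +
            [tuple(e) + (1,) for e in centroid_gens])

def phi(point):
    """The mapping (x1, ..., xn, t) -> (x1/t, ..., xn/t), defined for t > 0."""
    *x, t = point
    assert t > 0
    return tuple(Fraction(xi, t) for xi in x)

# Hypothetical example: D = C + E with C generated by (1, 0) and E generated
# by (0, 0) and (0, 1) (a vertical unit segment swept to the right).
D_plus = lift([(1, 0)], [(0, 0), (0, 1)])
print(D_plus)            # [(1, 0, 0), (0, 0, 1), (0, 1, 1)]

# A point of D+ with positive last coordinate, e.g.
# 3*(1,0,0) + 2*(0,0,1) + 1*(0,1,1) = (3, 1, 3), maps into D:
print(phi((3, 1, 3)))    # the point (1, 1/3) of D
```

The printed point decomposes as $(1, 0) + \tfrac{2}{3}(0, 0) + \tfrac{1}{3}(0, 1)$, i.e., a cone element plus a centroid element, in agreement with $\psi(D^+) = D$.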
We now note some properties of the mapping $\psi$.

a) If $D_1^+, D_2^+ \in \mathfrak{C}^+(P^{n+1})$, then $D_1^+ \subseteq D_2^+$ implies $\psi(D_1^+) \subseteq \psi(D_2^+)$. Conversely, if the elements $D_1, D_2 \in \mathfrak{D}(P^n)$ have the relation $D_1 \subseteq D_2$, then the corresponding elements $D_1^+$ and $D_2^+$ of $\mathfrak{C}^+(P^{n+1})$ satisfy $D_1^+ \subseteq D_2^+$.

Indeed, the first assertion follows directly from the definition of $\psi$. To prove the second assertion, consider two systems of the form (70) with solution sets $D_1$ and $D_2$. Then, in view of Lemma 2.5, the set $D_1^+$ of solutions of the system (69) associated with the first of these two systems satisfies each inequality of the system (69) associated with the second. Since the solution set of the latter coincides with $D_2^+$, it follows that $D_1^+ \subseteq D_2^+$.

b) If $D_1^+$ and $D_2^+$ are two elements of the set $\mathfrak{C}^+(P^{n+1})$ whose intersection $D_1^+ \cap D_2^+$ lies in that set, then
$$\psi(D_1^+ \cap D_2^+) = \psi(D_1^+) \cap \psi(D_2^+).$$
Indeed, since $D_1^+ \cap D_2^+ \subseteq D_1^+$ and $D_1^+ \cap D_2^+ \subseteq D_2^+$, we have $\psi(D_1^+ \cap D_2^+) \subseteq \psi(D_1^+)$ and $\psi(D_1^+ \cap D_2^+) \subseteq \psi(D_2^+)$, and hence
$$\psi(D_1^+ \cap D_2^+) \subseteq \psi(D_1^+) \cap \psi(D_2^+).$$
On the other hand, $\psi^{-1}(\psi(D_1^+) \cap \psi(D_2^+)) \subseteq D_1^+$ and $\psi^{-1}(\psi(D_1^+) \cap \psi(D_2^+)) \subseteq D_2^+$, and hence
$$\psi^{-1}(\psi(D_1^+) \cap \psi(D_2^+)) \subseteq D_1^+ \cap D_2^+.$$
Thus we get
$$\psi(D_1^+) \cap \psi(D_2^+) = \psi\big(\psi^{-1}(\psi(D_1^+) \cap \psi(D_2^+))\big) \subseteq \psi(D_1^+ \cap D_2^+).$$
Consequently
$$\psi(D_1^+ \cap D_2^+) = \psi(D_1^+) \cap \psi(D_2^+),$$
which we were to prove.

c) For any two elements $D_1^+$ and $D_2^+$ of $\mathfrak{C}^+(P^{n+1})$,
$$\psi(D_1^+ \cup D_2^+) = \psi(D_1^+) \cup \psi(D_2^+).$$
Indeed, let
$$\psi(D_1^+) = C_1 + E_1 \quad \text{and} \quad \psi(D_2^+) = C_2 + E_2,$$
where $C_1$ and $C_2$ are finitely generated convex cones and $E_1$ and $E_2$ are finitely generated centroids. Let us denote by $\vec{c}^{\,k}_1$ $(k = 1, 2, \dots, \ell_1)$ and $\vec{c}^{\,k}_2$ $(k = 1, 2, \dots, \ell_2)$ the sets of generating elements of $C_1$ and $C_2$ respectively, and by $\vec{e}^{\,k}_1$ $(k = 1, 2, \dots, h_1)$ and $\vec{e}^{\,k}_2$ $(k = 1, 2, \dots, h_2)$ the sets of generating elements of $E_1$ and $E_2$ respectively. Since $D_1^+ \cup D_2^+$ is the smallest finitely generated convex cone that contains the cones $D_1^+$ and $D_2^+$, any of its elements $(\vec{x}, t)$ is given by the formula
$$(\vec{x}, t) = \sum_{k=1}^{\ell_1} p_k^1 (\vec{c}^{\,k}_1, 0) + \sum_{k=1}^{\ell_2} p_k^2 (\vec{c}^{\,k}_2, 0) + \sum_{k=1}^{h_1} q_k^1 (\vec{e}^{\,k}_1, 1) + \sum_{k=1}^{h_2} q_k^2 (\vec{e}^{\,k}_2, 1),$$
where the elements $p_k^1, p_k^2, q_k^1, q_k^2$ are nonnegative elements of $P$. But then
$$D_1^+ \cup D_2^+ = (\overline{C}^1 \cup \overline{C}^2) + (\overline{E}^1 \cup \overline{E}^2),$$
where $\overline{C}^1, \overline{C}^2$ and $\overline{E}^1, \overline{E}^2$ are the cones defined, respectively, by the first two sums and by the second two sums of that formula. Hence we easily get
$$\psi(D_1^+ \cup D_2^+) = (C_1 \cup C_2) + (E_1 \cup E_2),$$
where $E_1 \cup E_2$ is the centroid generated by the vertices of both centroids $E_1$ and $E_2$. Since
$$\psi(D_1^+) \cup \psi(D_2^+) = (C_1 \cup C_2) + (E_1 \cup E_2)$$
by definition of the operation $\cup$ on the set $\mathfrak{D}(P^n)$, our relation is valid.

THEOREM 2.26. The mapping $\psi$ of Lemma 2.12 is an isomorphism between the two partially ordered sets $\mathfrak{C}^+(P^{n+1})$ and $\mathfrak{D}(P^n)$, i.e., it is a one-to-one order-preserving mapping.

Since the mapping $\psi$ preserves the operation $\cup$ of taking the union of two elements of $\mathfrak{D}(P^n)$, and since the union of two elements of $\mathfrak{C}^+(P^{n+1})$ is the smallest (by inclusion) element of that set containing these two elements, it follows from the definition of the operation $\cup$ that the smallest (by inclusion) polyhedral set containing two given polyhedral sets $D_1, D_2 \in \mathfrak{D}(P^n)$ coincides with the union $D_1 \cup D_2$ of these two sets in the sense defined above.
We now show how to obtain a system of inequalities (on $P^n$) whose solution set is $D_1 \cup D_2$, when the inequality systems
$$\ell_j^1(\vec{x}) - a_j^1 = a_{j1}^1 x_1 + \dots + a_{jn}^1 x_n - a_j^1 \le 0 \quad (j = 1, 2, \dots, m_1)$$
and
$$\ell_j^2(\vec{x}) - a_j^2 = a_{j1}^2 x_1 + \dots + a_{jn}^2 x_n - a_j^2 \le 0 \quad (j = 1, 2, \dots, m_2)$$
have $D_1$ and $D_2$, respectively, as solution sets. Let
$$\vec{x}^{\,k} = (x_1^k, \dots, x_n^k) \quad (k = 1, 2, \dots, \ell_1)$$
and
$$\vec{y}^{\,k} = (y_1^k, \dots, y_n^k) \quad (k = 1, 2, \dots, \ell_2)$$
be nodal solutions of the first and the second system respectively, taken so that one nodal solution corresponds to each extremal subsystem. Let
$$\vec{u}^{\,k} = (u_1^k, \dots, u_n^k) \quad (k = 1, 2, \dots, h_1)$$
and
$$\vec{v}^{\,k} = (v_1^k, \dots, v_n^k) \quad (k = 1, 2, \dots, h_2)$$
be, respectively, the generating sets of elements for the cones $C_1$ and $C_2$ of solutions of the systems
$$\ell_j^1(\vec{x}) \le 0 \quad (j = 1, 2, \dots, m_1)$$
and
$$\ell_j^2(\vec{x}) \le 0 \quad (j = 1, 2, \dots, m_2),$$
taken in connection with Theorem 2.19 and the remark to it. If $C$ is the cone generated by the elements $\vec{u}^{\,k}$ $(k = 1, \dots, h_1)$ and $\vec{v}^{\,k}$ $(k = 1, \dots, h_2)$, and if $E$ is the centroid generated by $\vec{x}^{\,k}$ $(k = 1, \dots, \ell_1)$ and $\vec{y}^{\,k}$ $(k = 1, \dots, \ell_2)$, then $D_1 \cup D_2 = C + E$. Thus the generating elements for the cone $\psi^{-1}(D_1 \cup D_2)$ would be
$$(\vec{u}^{\,k}, 0)\ (k = 1, \dots, h_1), \quad (\vec{v}^{\,k}, 0)\ (k = 1, \dots, h_2),$$
$$(\vec{x}^{\,k}, 1)\ (k = 1, \dots, \ell_1), \quad (\vec{y}^{\,k}, 1)\ (k = 1, \dots, \ell_2).$$
Let $S$ be a (yet unknown) system
$$a_{j1} x_1 + \dots + a_{jn} x_n - a_j x_{n+1} \le 0 \quad (j = 1, 2, \dots, m), \qquad x_{n+1} \ge 0,$$
whose solution set is $\psi^{-1}(D_1 \cup D_2)$. In view of Theorem 2.17, the cone $F$ generated by the elements
$$(0, \dots, 0, 1), \quad (a_{j1}, \dots, a_{jn}, a_j) \quad (j = 1, 2, \dots, m)$$
in the space $P^{n+1}$ is the solution set of the system
$$\left.\begin{aligned} u_1^k x_1 + \dots + u_n^k x_n &\le 0 \quad (k = 1, \dots, h_1)\\ v_1^k x_1 + \dots + v_n^k x_n &\le 0 \quad (k = 1, \dots, h_2)\\ x_1^k x_1 + \dots + x_n^k x_n + x_{n+1} &\le 0 \quad (k = 1, \dots, \ell_1)\\ y_1^k x_1 + \dots + y_n^k x_n + x_{n+1} &\le 0 \quad (k = 1, \dots, \ell_2).\end{aligned}\right\}$$
Taking, in connection with Theorem 2.19 and the remark to it, the elements
$$(a'_{j1}, \dots, a'_{jn}, a'_j) \quad (j = 1, 2, \dots, m')$$
as generating elements of the cone of solutions of the above system, we obtain the system
$$a'_{j1} x_1 + \dots + a'_{jn} x_n - a'_j x_{n+1} \le 0 \quad (j = 1, 2, \dots, m'),$$
which is, obviously, equivalent to $S$. Since, for $x_{n+1} = 1$, the system $S$ reduces to a system whose solution set is $D_1 \cup D_2$, it follows that the set $D_1 \cup D_2$ coincides with the solution set of the system
$$a'_{j1} x_1 + \dots + a'_{jn} x_n - a'_j \le 0 \quad (j = 1, 2, \dots, m'),$$
which we obtain from the preceding system for $x_{n+1} = 1$.

§7. SEPARABILITY OF CONVEX POLYHEDRAL SETS
In this section we study the question of separating planes of two convex polyhedral sets in the space $L(P)$ that have no points in common or that are contiguous. Recall that a convex polyhedral set in $L(P)$ (see Definition 1.7) is the solution set of an arbitrary consistent system (1), i.e., of a system of the form
$$f_j(x) - a_j \le 0 \quad (j = 1, 2, \dots, m)$$
on $L(P)$. The main result of this section was stated without proof by the author in [7].

DEFINITION 2.16. Let $A$ and $B$ be two sets in $L(P)$, let $f(x)$ be a nonnull linear function on $L(P)$, and let $a$ be an element of the field $P$. If $f(x) - a \le 0$ for all $x \in A$ and $f(x) - a \ge 0$ for all $x \in B$, then the plane $f(x) - a = 0$ is said to separate $A$ and $B$. If, in addition, the equality $f(x) - a = 0$ never holds for any element of $A$ and never holds for any element of $B$, then we say that the plane $f(x) - a = 0$ strictly separates $A$ and $B$. We shall say that the sets $A$ and $B$ are contiguous if they have elements in common and if there exists an element $c \in L(P)$ such that translating either set by the element $ct$ for $t > 0$ $(t \in P)$ results in a set that has no common elements with the other set. It should be noted that the same result could then be obtained by translating the other set by $-ct$.
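The notions just defined can be illustrated on a one-dimensional toy instance of our own: $A_1 = \{x \le 0\}$ and $A_2 = \{x \ge 1\}$ have no common points, and an inconsistency certificate for the adjoined systems yields a strictly separating plane, in the spirit of the proof of Theorem 2.27 below. All names and the choice of midpoint level are ours.

```python
from fractions import Fraction

# Two disjoint polyhedral sets on the rational line (a toy instance of ours):
#   A1 = {x : x <= 0},  written as f1(x) - a1 <= 0 with f1(x) = x,  a1 = 0
#   A2 = {x : x >= 1},  written as f2(x) - a2 <= 0 with f2(x) = -x, a2 = -1
# Adjoining the two systems is inconsistent, with certificate p1 = p2 = 1:
p1, p2 = Fraction(1), Fraction(1)
a1, a2 = Fraction(0), Fraction(-1)
assert p1 * 1 + p2 * (-1) == 0      # p1*f1 + p2*f2 vanishes identically
assert p1 * a1 + p2 * a2 < 0        # while p0 = p1*a1 + p2*a2 < 0

# Split the defect p0 evenly between the two sets: take f(x) = p1*f1(x) = x
# and the level a = (p1*a1 - p2*a2) / 2 = 1/2.
a = (p1 * a1 - p2 * a2) / 2

def strictly_separates(level, pts_A1, pts_A2):
    """Check strict separation of sample points by the plane x - level = 0."""
    return (all(x - level < 0 for x in pts_A1) and
            all(x - level > 0 for x in pts_A2))

print(strictly_separates(a, [0, -3], [1, 7]))   # True
```

On $A_1$ the function $f(x) - a = x - \tfrac{1}{2}$ is at most $-\tfrac{1}{2}$, and on $A_2$ it is at least $\tfrac{1}{2}$, so the separation is indeed strict.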
THEOREM 2.27. If two convex polyhedral sets in $L(P)$ have no points in common, then there exists a plane in $L(P)$ which strictly separates them.

PROOF. Indeed, let $A_1$ and $A_2$ be any two such sets and let
$$f_j^1(x) - a_j^1 \le 0 \quad (j = 1, 2, \dots, m_1),$$
$$f_j^2(x) - a_j^2 \le 0 \quad (j = 1, 2, \dots, m_2)$$
be linear inequality systems defining them. If the two sets have no points in common, then the system obtained by adjoining one system to the other is inconsistent. But then, in view of Theorem 2.3, there exists an identically zero linear combination
$$\sum_{j=1}^{m_1} p_j^1 f_j^1(x) + \sum_{j=1}^{m_2} p_j^2 f_j^2(x)$$
with nonnegative $p_j^1$ and $p_j^2$ such that
$$\sum_{j=1}^{m_1} p_j^1 a_j^1 + \sum_{j=1}^{m_2} p_j^2 a_j^2 < 0.$$
Introducing the notation
$$f(x) = \sum_{j=1}^{m_1} p_j^1 f_j^1(x), \qquad p_0 = \sum_{j=1}^{m_1} p_j^1 a_j^1 + \sum_{j=1}^{m_2} p_j^2 a_j^2, \qquad a = \sum_{j=1}^{m_1} p_j^1 a_j^1 - \frac{p_0}{2},$$
we obtain the relations
$$f(x) - a = \sum_{j=1}^{m_1} p_j^1 \big(f_j^1(x) - a_j^1\big) + \frac{p_0}{2} = -\sum_{j=1}^{m_2} p_j^2 \big(f_j^2(x) - a_j^2\big) - \frac{p_0}{2},$$
from which it easily follows that the plane $f(x) - a = 0$ strictly separates the sets $A_1$ and $A_2$.

REMARK. It is well known (see Bourbaki [1]) that in the space $R^n$ two disjoint closed convex sets may be separated by a plane, but the case of nonstrict separability is not excluded there. For convex polyhedral sets in $R^n$ (which are obviously closed), Theorem 2.27 guarantees strict separability.

In view of Corollary 2.2, Theorem 2.27 implies:

COROLLARY 2.19. If two consistent systems of type (1) with nonzero ranks have no common solution, then it is possible to select from them, respectively, two nodal subsystems which have no solution in common.

LEMMA 2.13. Let $A_1$ and $A_2$ be convex polyhedral sets in $L(P)$, and let $T_1$ and $T_2$ be the linear inequality systems
$$f_j^1(x) - a_j^1 \le 0 \quad (j = 1, 2, \dots, m_1),$$
$$f_j^2(x) - a_j^2 \le 0 \quad (j = 1, 2, \dots, m_2)$$
that define $A_1$ and $A_2$. If $A_1$ and $A_2$ are contiguous, then each of $T_1$ and $T_2$ has a subsystem whose rank equals the number of its inequalities, such that the solution sets of the two subsystems are contiguous.

PROOF. Let $a$ be an element of $L(P)$ such that translating the set $A_2$ by $at$, with positive $t \in P$, yields a set $A_2(t)$ which has no elements in common with $A_1$ (such an element exists by the definition of contiguity). Thus the system $T_2(t)$:
$$f_j^2(x) - \big(a_j^2 + t f_j^2(a)\big) \le 0 \quad (j = 1, 2, \dots, m_2)$$
has no common solutions with $T_1$ for any $t > 0$. Hence the inequality $-t \le 0$ is a consequence of the system $T_1 + T_2(t)$, where $T_1 + T_2(t)$ may be considered a linear inequality system on $L(P) + P^1$ in the pairs $(x, t)$, $x \in L(P)$, $t \in P^1$.

Since the rank $r$ of that system is, obviously, distinct from zero, it follows by Theorem 2.2 that it has a nodal subsystem $T_1' + T_2'(t)$:
$$f_{j_k}^1(x) - a_{j_k}^1 \le 0 \quad (k = 1, 2, \dots, r_1), \eqno(T_1')$$
$$f_{j_k}^2(x) - \big(a_{j_k}^2 + t f_{j_k}^2(a)\big) \le 0 \quad (k = 1, 2, \dots, r_2), \eqno(T_2'(t))$$
where $r = r_1 + r_2$, all of whose solutions satisfy the inequality $-t \le 0$. But then the system $T_2'(t)$, $t > 0$, has no common solution with $T_1'$. Hence the set of solutions of the system $T_2'$:
$$f_{j_k}^2(x) - a_{j_k}^2 \le 0 \quad (k = 1, 2, \dots, r_2)$$
is contiguous with the set of solutions of the system $T_1'$ (their common solutions are, obviously, common solutions of $T_1$ and $T_2$). Since the ranks of $T_1'$ and $T_2'$ are, obviously, equal to the numbers of inequalities in them, the lemma is proved.

THEOREM 2.28. Two contiguous convex polyhedral sets $A_1$ and $A_2$ in $L(P)$ are separable by a plane in $L(P)$.

PROOF. In view of Lemma 2.13, it suffices to prove the theorem for the case when the systems $T_1$ and $T_2$ defining the sets $A_1$ and $A_2$ have ranks equal to the numbers of their inequalities. This means that it suffices to consider the case where the functions $f_j^1(x)$ $(j = 1, 2, \dots, m_1)$ and $f_j^2(x)$ $(j = 1, 2, \dots, m_2)$ are linearly independent. By the contiguity of $A_1$ and $A_2$, the system $T_2(t)$ has no common solutions with $T_1$ for $t > 0$; hence for $t > 0$ the system $T_1 + T_2(t)$ is not consistent. Thus, as in the proof of Lemma 2.13, this system may be considered a linear inequality system on $L(P) + P^1$. Applying Theorem 2.4, we conclude that the relation
$$-t = \sum_{j=1}^{m_1} p_j^1 \big(f_j^1(x) - a_j^1\big) + \sum_{j=1}^{m_2} p_j^2 \big(f_j^2(x) - a_j^2 - t f_j^2(a)\big) - p_0 \eqno(71)$$
holds as an identity in $(x, t) \in L(P) + P^1$, with nonnegative $p_j^1$ $(j = 1, 2, \dots, m_1)$, $p_j^2$ $(j = 1, 2, \dots, m_2)$ and $p_0$. Hence the relation
$$\sum_{j=1}^{m_1} p_j^1 f_j^1(x) + \sum_{j=1}^{m_2} p_j^2 f_j^2(x) = 0 \eqno(72)$$
holds as an identity in $x \in L(P)$, and the inequality
$$h = \sum_{j=1}^{m_1} p_j^1 a_j^1 + \sum_{j=1}^{m_2} p_j^2 a_j^2 \le 0$$
is valid.

On the other hand, the system $T_1 + T_2$ obtained by combining the systems $T_1$ and $T_2$ is consistent (since $A_1$ and $A_2$ are contiguous). Thus, in view of Theorem 2.3, for any elements $p_j^1$ and $p_j^2$ for which relation (72) holds, we have
$$h = \sum_{j=1}^{m_1} p_j^1 a_j^1 + \sum_{j=1}^{m_2} p_j^2 a_j^2 \ge 0.$$
Consequently $h = 0$. But then the relation
$$\sum_{j=1}^{m_1} p_j^1 \big(f_j^1(x) - a_j^1\big) + \sum_{j=1}^{m_2} p_j^2 \big(f_j^2(x) - a_j^2\big) = 0$$
holds as an identity in $x \in L(P)$.

Introducing the notation
$$f(x) - c = \sum_{j=1}^{m_1} p_j^1 \big(f_j^1(x) - a_j^1\big),$$
we get
$$f(x) - c = -\sum_{j=1}^{m_2} p_j^2 \big(f_j^2(x) - a_j^2\big). \eqno(73)$$
Using these two expressions for $f(x) - c$ we get
$$f(x) - c \le 0 \ \text{ for any } x \in A_1, \qquad f(x) - c \ge 0 \ \text{ for any } x \in A_2. \eqno(74)$$
By relation (71) we have
$$t = t \sum_{j=1}^{m_2} p_j^2 f_j^2(a)$$
identically in $t$, i.e., $\sum_{j=1}^{m_2} p_j^2 f_j^2(a) = 1$; hence at least one $p_j^2$ is nonzero. But then, in view of the linear independence of the functions $f_j^2(x)$ $(j = 1, 2, \dots, m_2)$, the expression (73) shows that the function $f(x)$ is nonnull. Consequently $f(x) - c = 0$ is the equation of a plane in $L(P)$, and in view of (74) our proof is complete.

DEFINITION 2.17 (see Chernikov [2]). An inequality $f_{j_0}(x) - a_{j_0} \le 0$ of a consistent system (1) is said to be stable if some solution of the system satisfies the strict inequality $f_{j_0}(x) - a_{j_0} < 0$; otherwise it is said to be unstable.
aj0
0 of a consistent system (1) is unstable i¤
aj )uj = 0 (75)
j=1
in the unknowns u1 ; :::; um has a nonnegative solution (u1 ; :::; um ) (independent of x 2
L(P )) with a positive coordinates uj0 . PROOF. Necessity. If fj0 (x) inequality
(fj0 (x)
aj0
0 is an unstable inequality in system (1), then the
0 is a consequence of the system. Applying Theorem 2.4 we
aj0 )
conclude that the relation m X (fj0 (x) aj0 ) = pj (fj (x)
aj )
p0
j=1
holds as an identity in x 2 L(P ), where p0 and pj (j = 1; 2; :::; m1 ) are nonnegative. Now 0
p0 = 0 since fj0 (x) aj0 = 0 for any solution x0 of system (1). But then, the elements pj = pj 0
(j = 1; 2; :::; m1 ; j 6= j0 ) and pj0 = pj0 + 1 constitute a nonnegative solution of equation (75) with a positive j0 th coordinate. Necessity is proved. 0
0
0
Su¢ ciency. Let (u1 ; :::; um ) be a nonegative solution of (75) wit positive coordinate uj0 . For each solution of system (1), all the terms of the sum: 143
m X
0
(fj (x)
aj )uj = 0
j=1
are nonnegative. Hence (fj0 (x)
0
aj0 )uj0 = 0
(k = 1; 2; :::; `)
holds for all solutions of system (1). This means that fj0 (x)
aj0 = 0 for all solutions of
0
system (1) since uj0 6= 0. This proves su¢ ciency. COROLLAY 2.20. For the subsystem fjk (x)
ajk
0
of system (1) to coincide with the subsystems of all unstable inequalities of system (1), it is necessary and su¢ cient that the equation (75) possess a nonnegative solution with positive coordinates ujk (k = 1; 2; :::; `) and that it has no nonnegative solutions with other positive coordinates. In particular, for all the inequalities of system (1) to be unstable, it is necessary and su¢ cient that (75) has a strictly positive solution. To obtain this inequality, it su¢ ces to use the fact that the sum of two nonnegative solutions of (75) is also a nonnegative solution of it for which the number of positive coordinates equals the sum of positive coordinates in those nonnegative solutions. LEMMA 2.15. Suppose f (x)
0 is a consequence of system (1) which is turned to an
c
equation by a set D of solutions to (1). Then it is a consequence of the maximal subsystem of (1) consisting of those inequalities of (1) that are turned to equations by all elements of the set D. PROOF. In view of Theorem 2.4, there exist nonnegative elements p0 ; p1 ; :::; pm of the …eld P such that m the relation X f (x) c = pj (fj (x) aj )
p0
j=1
holds as an identity for x 2 L(P ) (by hypothesis of the lemma p0 = 0). If some inequality
fj 0 (x)
aj 0
0
0 of system (1) is not turned to an equality by some x 2 D then, for some
pj 0 6= 0 we have: pj 0 (fj 0 (x)
aj 0 ) < 0
0
But this leads to a contradiction since for x = x all the other terms on the right hand
side of the above relation are nonpositive while its left hand side is zero. Thus it could not be trued that pj 0 6= 0. This proves that the only nonzero pj ’s are those that correspond to the equations fj (x)
aj = 0 for all x 2 D. In view of the fact that the relation is an identity
for x 2 L(P ), our assertion is proved.
Lemma 2.15 is essentially a generalization of Lemma 2.1 used in x1.
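Lemma 2.14 makes instability checkable by exhibiting a certificate: a nonnegative solution of (75) whose j_0-th coordinate is positive. The sketch below is our own illustration on an invented toy system (not one from the text); it verifies such a certificate for the system x_1 − x_2 ≤ 0, x_2 − x_1 ≤ 0, x_1 + x_2 − 1 ≤ 0, in which the first two inequalities are unstable (every solution has x_1 = x_2):

```python
# Toy illustration of Lemma 2.14 / Corollary 2.20 (hypothetical example system).
# System (1): f_j(x) - a_j <= 0, stored as rows (coefficients, constant a_j):
#   x1 - x2      <= 0
#   x2 - x1      <= 0
#   x1 + x2 - 1  <= 0
rows = [([1, -1], 0), ([-1, 1], 0), ([1, 1], 1)]

def is_certificate(u, rows):
    """Check that u >= 0 solves equation (75): the identity
    sum_j (f_j(x) - a_j) u_j = 0 in x, i.e. the u_j cancel both the
    variable coefficients and the constant terms."""
    if any(uj < 0 for uj in u):
        return False
    n = len(rows[0][0])
    coeff_ok = all(sum(uj * row[k] for uj, (row, _) in zip(u, rows)) == 0
                   for k in range(n))
    const_ok = sum(uj * a for uj, (_, a) in zip(u, rows)) == 0
    return coeff_ok and const_ok

u = (1, 1, 0)                  # candidate certificate
assert is_certificate(u, rows)
# Positive coordinates 1 and 2: by Corollary 2.20 the first two
# inequalities are unstable.
print([j + 1 for j, uj in enumerate(u) if uj > 0])   # [1, 2]
```

In general, finding such a certificate is a linear feasibility problem; here we only verify a given one.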
THEOREM 2.29. Let A_1 and A_2 be two contiguous polyhedral sets in L(P) defined by the systems T_1 and T_2 of linear inequalities on L(P) (see the notation of Lemma 2.13), and let D be the intersection of these sets. Then D is contained in at least one face of each of the two sets A_1 and A_2. Furthermore, there exists a plane S separating the two sets such that each of its traces of contact S_i with A_i (i = 1, 2) (obviously S_i ⊇ D) coincides with the smallest (by inclusion) face of the latter that contains D.

PROOF. Let f(x) − c = 0 be a plane separating the two sets A_1 and A_2. If x^0 ∈ D then f(x^0) − c ≤ 0 and f(x^0) − c ≥ 0. Hence f(x^0) − c = 0. Thus the set D is contained in the trace of contact of the plane f(x) − c = 0 with each of the sets A_i (i = 1, 2). In view of Theorem 2.5, this implies that D is contained in at least one face of each of the two sets. Let

    f_{j_k}^1(x) − a_{j_k}^1 ≤ 0 (k = 1, 2, ..., k_1)   (T_1')
    f_{j_k}^2(x) − a_{j_k}^2 ≤ 0 (k = 1, 2, ..., k_2)   (T_2')

be the maximal subsystems of T_1 and T_2 consisting of all the inequalities turned into equations by all elements x ∈ D. In view of Lemma 2.15, the plane f(x) − c = 0 separates the sets A_1' and A_2' of solutions of these subsystems. Since the sets A_1' and A_2' have common elements, they are contiguous. Since T_1' + T_2' coincides, obviously, with the maximal subsystem of T_1 + T_2 consisting of all unstable inequalities in it, it follows from Corollary 2.20 that there exist positive elements p_1^1, ..., p_{k_1}^1 and p_1^2, ..., p_{k_2}^2 of P such that the relation

    Σ_{k=1}^{k_1} p_k^1 (f_{j_k}^1(x) − a_{j_k}^1) = − Σ_{k=1}^{k_2} p_k^2 (f_{j_k}^2(x) − a_{j_k}^2)

holds as an identity in x ∈ L(P).
If at least one of the systems T_i' (i = 1, 2) contains at least one stable inequality, then the function

    F(x) = Σ_{k=1}^{k_1} p_k^1 f_{j_k}^1(x)

cannot be the null function. Thus the equation

    F(x) − a = Σ_{k=1}^{k_1} p_k^1 (f_{j_k}^1(x) − a_{j_k}^1) = 0

defines a plane on L(P). Using the above relation, it is easy to see that this plane separates A_1 and A_2. Using the definition of a face of the solution polyhedron of a linear inequality system, and recalling that the systems T_i' (i = 1, 2) are the maximal subsystems of T_i (i = 1, 2) consisting of their inequalities that are turned into equations by all x ∈ D, we get, furthermore, that the traces of contact of F(x) − a = 0 with the sets A_i (i = 1, 2) coincide with their smallest faces that contain D, as was to be shown.

We now consider the case where neither of the systems T_1' and T_2' contains any stable inequalities. In this case the solution set of either system coincides with the solution set of its bounding equation system. Hence the system

    f_{j_k}^1(x) − a_{j_k}^1 = 0 (k = 1, 2, ..., k_1),
    f_{j_k}^2(x) − a_{j_k}^2 = 0 (k = 1, 2, ..., k_2)

is consistent. From the consistency of this system, from the fact that the plane f(x) − c = 0 separates the sets A_1' and A_2' of solutions of the systems T_1' and T_2', and from the fundamental property of the continuation of faces of polyhedra of linear inequality systems (see page 43), it follows that the plane contains the sets of solutions of the systems

    f_{j_k}^1(x) − a_{j_k}^1 ≤ 0 (k = 1, 2, ..., k_1)

and

    f_{j_k}^2(x) − a_{j_k}^2 ≤ 0 (k = 1, 2, ..., k_2).

Using the definition of the solution polyhedron of a linear inequality system and the definition of the systems T_1' and T_2', the assertion of the theorem is valid in the present case as well. This completes our proof.

COROLLARY 2.21. If a subspace H of P^n and a cone C in P^n are contiguous at a vertex of the cone, then there exists a plane T in P^n which passes through H and which is in contact with the cone C at the vertex.

For the case where C is the nonnegative orthant,^5 Corollary 2.21 implies:

COROLLARY 2.22. A linear subspace H of P^n has only one point 0 = (0, ..., 0) in common with the nonnegative orthant K of P^n if and only if there exists a plane T (in P^n) passing through H which has no points in common with the orthant K aside from 0.

This proposition may be considered geometrically equivalent to the theorem of Voronoi (see Corollary 2.8). Indeed, the subspace H corresponds to an equation system

    a_{1i}u_1 + ... + a_{ni}u_n = 0 (i = 1, 2, ..., m),   (76)

and the plane T corresponds to an equation

    b_1u_1 + ... + b_nu_n = 0.

The requirements about H ∩ K and T ∩ K mean that the equation system and the equation
have no positive solutions. We note here that the equation above has no positive solutions if and only if its coefficients are all different from zero and of the same sign. Without loss of generality, let us assume those coefficients to be negative. The passage of the plane T through the subspace H implies the existence of elements x_1^0, ..., x_m^0 of P for which the relation

    b_1u_1 + ... + b_nu_n = Σ_{i=1}^m (a_{1i}u_1 + ... + a_{ni}u_n) x_i^0

holds identically in u_1, ..., u_n, which is equivalent to

    a_{j1}x_1^0 + ... + a_{jm}x_m^0 = b_j (j = 1, 2, ..., n).

Using these relations, it is not hard to obtain the following algebraic form of Corollary 2.22: the system (76) of linear equations has no positive solutions iff the inequality system

    a_{j1}x_1 + ... + a_{jm}x_m < 0 (j = 1, 2, ..., n)

is consistent; conversely, the inequality system is consistent iff the equation system (76) has no positive solutions. This formulation of Corollary 2.22 is equivalent, except for notation, to the theorem of Voronoi introduced in Corollary 2.8.

From Corollary 2.22 follows the next proposition, which is essentially equivalent to it: if a subspace H of P^n of dimension h ≤ n − 2 is contiguous with the nonnegative orthant only at the point 0 = (0, ..., 0), then there exists a linear subspace H' of P^n containing H, of dimension h + 1, which has no common points with the nonnegative orthant other than 0.

This proposition is the basic result of the paper by Ben-Israel [1] and was used by him to generalize various theorems about linear inequalities on R^n to systems on P^n (P any ordered field), in particular Tucker's theorem (see Theorem 2.13).

^5 The nonnegative (respectively, positive) orthant of the space P^n is the set of all its elements (x_1, ..., x_n) that satisfy x_i ≥ 0 (respectively, x_i > 0) (i = 1, ..., n).
CHAPTER III
METHODS OF DERIVING GENERAL FORMS OF SOLUTIONS OF SYSTEMS OF LINEAR INEQUALITIES

The problem of constructing the solution set of a system of linear inequalities of the form

    a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0 (j = 1, 2, ..., m)

on P^n (P any ordered field) is completely solved by Theorems 2.22 and 2.23. However, deriving the general formula for this system's solutions directly from these theorems is very cumbersome. In this chapter we present a simpler method. The basic idea of the method is, given the general form of the solutions of a homogeneous system of linear inequalities, to adjoin a further homogeneous linear inequality and to construct the formula for the solutions of the augmented system. We note that the problem under consideration reduces completely to a problem about homogeneous systems. Indeed, if we know the general form of the solutions of the system

    a_{j1}x_1 + ... + a_{jn}x_n − a_j x_{n+1} ≤ 0 (j = 1, 2, ..., m),
    −x_{n+1} ≤ 0,

then, setting x_{n+1} = 1 (which is legitimate since the system is homogeneous: any solution with x_{n+1} > 0 can be rescaled so that x_{n+1} = 1), we obtain from it the general formula for the solutions of the nonhomogeneous system. Hence, in this chapter we shall mainly be interested in homogeneous systems of linear inequalities.

The method of constructing the general form of solutions of systems of homogeneous linear inequalities by successively obtaining the general forms of solutions of the systems produced by adjoining appropriate homogeneous linear inequalities to the original system is one of the basic results of the paper by Motzkin, Raiffa, Thompson and Thrall [1]. However, the method proposed in that paper is not established rigorously, but only illustrated by way of descriptive geometric constructions related to the geometric interpretation of the problem. A strict, purely algebraic basis for this method (for P = R) was given by Burger [1]. In §1 of this
chapter we base the method on obtaining the systems of fundamental solutions of a system of linear inequalities in P^n. In §1 we prove, in particular, that our problem reduces to the problem of determining the fundamental solution system of a system of linear homogeneous inequalities, related to the original system, containing an inequality −x_i ≤ 0 for each unknown x_i, and to the problem of finding the minimal generating elements of the (pointed) cone of nonnegative solutions of a given system of linear inequalities.

This particular problem is dealt with in §2. There we introduce an algorithm for solving it which was obtained by Chernikov [2] using the Motzkin-Burger method. Using the algorithm it is not only possible to get the general form of the solutions of a system of linear inequalities, but also to find those inequalities that may be eliminated from the system without changing its solution set (i.e., dependent inequalities). This problem is treated in §3. The Chernikov numerical algorithm is introduced in §2. Later, in §4, we turn to the related scheme of Chernikova (see Chernikova [1]) for finding the general form of nonnegative solutions of linear equation systems. These considerations were presented in the author's articles [10] and [16].

Finally, in §5, we discuss the problem of determining the conditions under which two systems of linear inequalities have one and the same solution set (the equivalence problem). For the most part the section is devoted to an exposition of Chernikov's version of the solution of this problem. In this version each linear inequality system (on P^n) is associated, in a definite way, with a canonical system; the problem of equivalence of two systems then reduces to the easier problem of equivalence of their associated canonical systems. A direct solution of the equivalence problem, for any two systems of linear inequalities with coefficients in P, is presented at the beginning of §5 as an application of the algorithm developed in §2.

The problem of equivalence of two systems of linear inequalities was first formulated in Kuznetsov's article [1]. His solution is not presented here.

In §5, systems of linear inequalities are equivalent if their solution sets are equal. The equivalence of systems may also be formulated in terms of sequences of linear transformations carrying them (or their solution sets) into each other. This sense of equivalence was brought out in the interesting articles by Heller [1]. It is clear from his results that such an equivalence may be used to reduce some problems of linear optimization to well-defined, easily solved canonical problems. A concrete example of this is the transportation problem.

§1. DEFINITION OF FUNDAMENTAL SOLUTIONS FOR SYSTEMS OF HOMOGENEOUS LINEAR INEQUALITIES
Let C be the cone of solutions of the system

    ℓ_j(x) = a_{j1}x_1 + ... + a_{jn}x_n ≤ 0 (j = 1, 2, ..., m)   (1)

of homogeneous linear inequalities of rank r > 0 defined on P^n. Let H(C) be the maximal subspace of C and let

    U(C) = {u^1, ..., u^s} (s = n − r)

be a basis of the latter space (if r = n then U(C) is an empty set). Let

    V(C) = {v^1, ..., v^ℓ}

be a fundamental solution system of system (1) (in case system (1) has no fundamental solutions, V(C) is an empty set). Adjoining the inequality

    ℓ(x) = a_1x_1 + ... + a_nx_n ≤ 0

to system (1) we obtain (on P^n) the system

    ℓ_j(x) ≤ 0 (j = 1, 2, ..., m),  ℓ(x) ≤ 0,   (2)

whose solution cone C̄ is a subset of the solution cone C of system (1) and is, in general, distinct from C. The problem of determining even one basis U(C̄) of the maximal linear subspace H(C̄) of the cone C̄ and even one fundamental system V(C̄) of solutions of system (2), starting from a basis U(C) and a fundamental system V(C), is of significant theoretical and practical interest (see Motzkin, Raiffa, Thompson and Thrall [1]). This section is devoted to this problem, which is called the Motzkin-Burger problem.

The problem of finding the general form of the solutions of system (1), for instance, reduces to Motzkin-Burger problems. In fact, augmenting the trivial inequality

    0x_1 + ... + 0x_n ≤ 0
by one of the inequalities of system (1), then solving, and repeating the process, leads to the determination of U(C) and V(C) for system (1). From these, in view of Theorem 2.19, we obtain the general form of the solutions of system (1).

We noted above that the algorithm for solving our problem (for P = R) introduced in the paper of Motzkin, Raiffa, Thompson and Thrall [1] has an intuitive geometric interpretation. The formal justification of the algorithm is given in Burger's paper [1]. Burger's results are based, in addition to some theorems of linear algebra, on the Minkowski-Farkas theorem (see Theorem 2.4) and on Minkowski's theorem (see Theorem 2.19). It is easy to show that the results are independent of the choice of the field P. The Motzkin-Burger algorithm may be stated in the following form (see Burger [1]).

THEOREM 3.1.
1. If s ≥ 1 and ℓ(u^i) = 0 (i = 1, 2, ..., s), then H(C̄) = H(C) and hence U(C̄) = U(C). If s > 1 and, for instance, ℓ(u^1) ≠ 0, then the s − 1 elements

    u^i ℓ(u^1) − u^1 ℓ(u^i) (i = 2, ..., s)

constitute a basis U(C̄) of the space H(C̄). For s = 1 and ℓ(u^1) ≠ 0, and also for s = 0, the set U(C̄) is empty.

2a. If s ≥ 1 and ℓ(u^i) = 0 (i = 1, 2, ..., s), and also if s = 0, then the system V(C̄) consists of the elements v^i for which ℓ(v^i) ≤ 0 and, in case ℓ > 1, of the elements, distinct from each other in at least one of the indices (p, q), of the form

    v(p, q) = v^q |ℓ(v^p)| + v^p |ℓ(v^q)|

with ℓ(v^p)/ℓ(v^q) < 0, such that for each of them there exist r − 2 linearly independent inequalities of system (1) (i.e., r − 2 inequalities ℓ_j(x) ≤ 0 with linearly independent forms ℓ_j(x)) which hold as equations at v^p as well as at v^q (for r = 2 this condition is dropped).

2b. If, for instance, ℓ(u^1) ≠ 0, then the system V(C̄) consists of the elements

    u^0,  −v^p ℓ(u^0) + u^0 ℓ(v^p) (p = 1, 2, ..., ℓ),   (3)

where u^0 is any element of H(C) with ℓ(u^0) < 0.

3. Among the inequalities of system (1) that hold as equations at the elements v^p and v^q of the system V(C) there are r − 2 linearly independent ones if and only if no other element of V(C) turns all of them into equations.

The next lemma is needed for the proof of the above theorem.
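Before turning to the proof, we note that parts 1, 2a and 2b of Theorem 3.1, together with the adjacency criterion of part 3, can be turned directly into a small program. The sketch below is our own illustration (the function names `adjoin` and `fundamental_system` are ours, exact integer arithmetic stands in for a general ordered field P, and the combinatorial test of part 3 is used in place of a rank computation); it carries the pair (U, V) through successive adjunctions, starting from the trivial system:

```python
# One Motzkin-Burger step (Theorem 3.1); illustrative sketch, not the
# author's notation.  Vectors are integer tuples; A holds the rows already
# adjoined, U a basis of H(C), V the fundamental solutions.

def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

def comb(c1, x1, c2, x2):
    """The linear combination c1*x1 + c2*x2."""
    return tuple(c1 * a + c2 * b for a, b in zip(x1, x2))

def adjoin(A, U, V, l):
    """Adjoin l(x) <= 0 and return the new pair (U, V) per Theorem 3.1."""
    lu = [dot(l, u) for u in U]
    if any(c != 0 for c in lu):                       # parts 1 and 2b
        k = next(i for i, c in enumerate(lu) if c != 0)
        u0 = U[k] if lu[k] < 0 else tuple(-c for c in U[k])   # l(u0) < 0
        U2 = [comb(dot(l, u0), u, -dot(l, u), u0)     # u*l(u0) - u0*l(u)
              for i, u in enumerate(U) if i != k]
        V2 = [u0] + [comb(-dot(l, u0), v, dot(l, v), u0) for v in V]
        return U2, V2
    lv = [dot(l, v) for v in V]                       # part 2a
    V2 = [v for v, c in zip(V, lv) if c <= 0]
    for p in range(len(V)):
        for q in range(p + 1, len(V)):
            if lv[p] * lv[q] < 0:
                # combinatorial adjacency test of part 3
                tight = [a for a in A
                         if dot(a, V[p]) == 0 and dot(a, V[q]) == 0]
                if any(all(dot(a, V[j]) == 0 for a in tight)
                       for j in range(len(V)) if j not in (p, q)):
                    continue
                V2.append(comb(abs(lv[p]), V[q], abs(lv[q]), V[p]))
    return U, V2

def fundamental_system(rows, n):
    """Start from the trivial system on P^n and adjoin the rows in turn."""
    U = [tuple(int(i == j) for j in range(n)) for i in range(n)]
    V, A = [], []
    for l in rows:
        U, V = adjoin(A, U, V, l)
        A.append(l)
    return U, V

U, V = fundamental_system([(-1, 0), (0, -1), (1, -1)], 2)
print(V)   # [(0, 1), (1, 1)]: the cone 0 <= x1 <= x2, with empty lineality
```

The rays returned may carry large integer factors; dividing each by the gcd of its coordinates is a natural refinement.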
LEMMA 3.1. Let x^1 and x^2 be essentially distinct fundamental solutions of system (1), of rank r (r > 1). Then the following assertions are true:

a) If at least one of the coefficients in the combination p_1x^1 + p_2x^2 (p_1, p_2 ∈ P) is different from zero, then p_1x^1 + p_2x^2 ∉ H(C).

b) If x^1 and x^2 turn some r − 2 linearly independent inequalities of system (1) into equations, then for any x^0 ∈ P^n turning those inequalities into equations there exist elements p_1 and p_2 of P such that x^0 − p_1x^1 − p_2x^2 ∈ H(C). (For r = 2, x^0 is any element of P^n.)

c) If in b) the elements x^0, x^1 and x^2 belong to the same fundamental system of solutions of (1), then x^0 coincides with either x^1 or x^2.

PROOF.
a) Suppose p_1x^1 + p_2x^2 ∈ H(C) and p_1 ≠ 0. Without loss of generality we may assume that p_1 > 0. Since the fundamental solutions x^1 and x^2 are essentially distinct, it follows from Corollary 2.13 that p_2 ≥ 0. But then both terms of each of the relations

    p_1 ℓ_j(x^1) + p_2 ℓ_j(x^2) = 0 (j = 1, 2, ..., m)

are nonpositive, and hence ℓ_j(x^1) = 0 (j = 1, 2, ..., m) since p_1 > 0. But this contradicts the fundamentality of x^1 (see Definition 2.8). This proves a).

b) For r = 2 the assertion is obviously true. Let r > 2 and let

    ℓ_{j_k}(x) ≤ 0 (k = 1, 2, ..., r − 2)

be linearly independent inequalities turned into equations by x^0, x^1 and x^2. Then the set N of solutions of the equation system

    ℓ_{j_k}(x) = 0 (k = 1, 2, ..., r − 2)

has dimension n − r + 2. The subspace {H(C), x^1, x^2} of N, generated by H(C) and the elements x^1 and x^2, has, by part a), the same dimension. Thus N coincides with {H(C), x^1, x^2}. Hence there exist elements p_1 and p_2 of P such that

    x^0 = p_1x^1 + p_2x^2 + h (h ∈ H(C)).

But then x^0 − p_1x^1 − p_2x^2 ∈ H(C), as we wanted to prove.

c) By the conditions of b) and c) we have

    x^0 = p_1x^1 + p_2x^2 + h (h ∈ H(C)).

Thus, by Corollary 2.14, there exist nonnegative elements p_1' and p_2' of P such that p_1'x^0 − p_1x^1 ∈ H(C) and p_2'x^0 − p_2x^2 ∈ H(C). If x^0 ≠ x^1 and x^0 ≠ x^2 then, by a) above, p_1 = p_2 = 0. But then x^0 ∈ H(C), which is not possible, since x^0 is a fundamental solution of system (1). The contradiction establishes c).
PROOF OF THEOREM 3.1.
1. The first assertion is obviously true. The second part of 1 is true since the rank of the indicated system of elements coincides with the dimension s − 1 of the space H(C̄).

2a. Let us note first that all elements considered in this part of the theorem are elements of the cone C̄. We now show that each of them is a fundamental solution of system (2). For the elements v^i with ℓ(v^i) ≤ 0 this is obvious, since in the present case the rank of system (2) coincides with the rank r of system (1) (because H(C̄) = H(C)). For the elements v(p, q) it is established in the following way. By a) of Lemma 3.1 we have v(p, q) ∉ H(C̄) = H(C). On the other hand, it follows from the definition of the elements v(p, q) that they are solutions of system (2) that turn r − 2 linearly independent inequalities of system (1) into equations in addition to the inequality ℓ(x) ≤ 0; and ℓ(x) cannot be expressed linearly (over P) in terms of those r − 2 independent forms, since the latter vanish at v^p while ℓ(v^p) ≠ 0. Consequently, the element v(p, q) turns r − 1 linearly independent inequalities of system (2) into equations, while the rank of that system is r. This proves that v(p, q) is a fundamental solution.

We now show that all of our fundamental solutions v^i and v(p, q) are essentially distinct. Since H(C̄) = H(C), it is obvious that any two elements v^i are essentially distinct. Now suppose two elements v^i and v(p, q) are not essentially distinct. Then it follows from Corollary 2.13 that there exists a positive element t ∈ P such that v^i − t v(p, q) ∈ H(C̄) = H(C). This shows that the element v^i turns into equations those and only those inequalities of system (1) that are so turned by both of the elements v^p and v^q. Hence there exist r − 1 linearly independent inequalities of system (1) which are turned into equations by v^p as well as by v^q. But this is not possible, since v^p and v^q belong to the same system of fundamental solutions and hence are essentially distinct. Thus any two fundamental solutions v^i and v(p, q) are essentially distinct.

Suppose two fundamental solutions v(p_1, q_1) and v(p_2, q_2) have distinct index pairs but are not essentially distinct. Then it follows from Corollary 2.13 that there exists a positive element d ∈ P such that v(p_1, q_1) − d v(p_2, q_2) ∈ H(C̄) = H(C). Thus the element v(p_1, q_1) (the element v(p_2, q_2)) turns into equations those and only those inequalities of system (1) which are turned into equations by the elements v^{p_2} and v^{q_2} (the elements v^{p_1} and v^{q_1}). But then there exist r − 2 linearly independent inequalities of system (1) which are turned into equations by the elements v^{p_1}, v^{q_1}, v^{p_2}, v^{q_2}. By part c) of Lemma 3.1 it thus follows that the pairs (v^{p_1}, v^{q_1}) and (v^{p_2}, v^{q_2}) are identical. Thus the two elements v(p_1, q_1) and v(p_2, q_2) could not have distinct index pairs (see the definition of those elements in the statement of the theorem), contrary to our assumption. Thus all fundamental solutions v(p, q) of (2) considered in part 2a of the theorem are essentially distinct.

Finally we show that the elements considered in part 2a of the theorem constitute a fundamental system of solutions of system (2), i.e., that any fundamental solution v̄ of system (2) is essentially nondistinct from at least one of those elements. First we note that such an element does not belong to the subspace H(C̄) (this follows from the definition of fundamental solutions) and that it turns r − 1 linearly independent inequalities of system (2) into equations. Consider first the case where v̄ turns r − 1 linearly independent inequalities of system (1) into equations. In this case it is a fundamental solution of system (1) (since v̄ ∉ H(C̄) = H(C)), and hence is essentially nondistinct from one of the elements of V(C). It is not hard to show that this element (denote it by v^{i_1}) satisfies system (2) and hence is one of the elements v^i of 2a. Indeed, in view of Corollary 2.13 there exists a positive element d ∈ P such that d v̄ − v^{i_1} ∈ H(C̄) = H(C). But then v^{i_1} = d v̄ + h^1 (h^1 ∈ H(C)). Since d v̄ is a solution of system (2), it follows that v^{i_1} is a solution of system (2). Thus v^{i_1} is one of the elements v^i of 2a.

Now consider the case where v̄ is not a fundamental solution of system (1), and thus turns only r − 2 linearly independent inequalities of system (1) into equations, in addition to the inequality ℓ(x) ≤ 0, which is independent of the others. Since v̄ ∈ C, there exist nonnegative elements d_1, ..., d_ℓ of P such that

    v̄ = d_1v^1 + ... + d_ℓv^ℓ + h (h ∈ H(C) = H(C̄)).

In this expression at least two coefficients d_{j_1} and d_{j_2} are positive; indeed, if this were not so, then either v̄ ∈ H(C) or v̄ would turn r − 1 linearly independent inequalities of system (1) into equations. Since d_{j_1} and d_{j_2} are positive, the elements v^{j_1} and v^{j_2} turn into equations the r − 2 linearly independent inequalities of system (1) that are turned into equations by v̄. In view of part b) of Lemma 3.1 we have

    v̄ = q_1v^{j_1} + q_2v^{j_2} + h' (h' ∈ H(C) = H(C̄)).

If q_1 were zero, then v̄ would be essentially nondistinct from the fundamental solution v^{j_2} of system (1) and would hence turn r − 1 linearly independent inequalities of system (1) into equations, contradicting our assumption about v̄. Thus q_1 (and similarly q_2) is positive.

It is now easy to see that the elements v^{j_1} and v^{j_2} turn into equations the r − 2 linearly independent inequalities of system (1) that are turned into equations by v̄. Thus, if ℓ(v^{j_1}) = 0, then v^{j_1} and v̄ are essentially nondistinct fundamental solutions of (2); hence, by Corollary 2.13, there exists a positive element d ∈ P such that d v^{j_1} − v̄ ∈ H(C̄). Since H(C̄) = H(C), it follows that v̄ is a fundamental solution of system (1). But this contradicts our assumption. Hence ℓ(v^{j_1}) ≠ 0, and similarly ℓ(v^{j_2}) ≠ 0.

Now ℓ(v̄) = 0 and the positivity of q_1 and q_2 imply ℓ(v^{j_1})/ℓ(v^{j_2}) < 0. Thus the element

    v(j_2, j_1) = v^{j_1}|ℓ(v^{j_2})| + v^{j_2}|ℓ(v^{j_1})|

is one of the elements v(p, q) of the theorem we are proving. From ℓ(v̄) = 0 it follows that

    q_1|ℓ(v^{j_1})| − q_2|ℓ(v^{j_2})| = 0.

But then the expression v̄ = q_1v^{j_1} + q_2v^{j_2} + h' reduces to

    v̄ = q v(j_2, j_1) + h',

where q = q_1/|ℓ(v^{j_2})|. Since h' ∈ H(C̄), this means that the fundamental solution v̄ of system (2) is essentially nondistinct from the fundamental solution v(j_2, j_1) of that system.
This concludes the proof of 2a.

2b. In view of the condition ℓ(u^1) ≠ 0, the linear form ℓ(x) cannot be obtained by linearly combining the forms ℓ_j(x) (j = 1, 2, ..., m); hence the rank of system (2) is r + 1. All elements of (3), with the exception of u^0, satisfy ℓ(x) = 0, and u^0 ∈ H(C). Thus all of these elements (obviously being solutions of (2)) turn r linearly independent inequalities of system (2) into equations. Furthermore, none of the elements of (3) is in H(C̄). These two facts imply that all of these elements are fundamental solutions of system (2).

We now show that every two of these elements are essentially distinct. Indeed, if u^0 were essentially nondistinct from one of the elements −v^p ℓ(u^0) + u^0 ℓ(v^p) then, by Corollary 2.13, there would exist a positive element d ∈ P such that

    d u^0 − (−v^p ℓ(u^0) + u^0 ℓ(v^p)) ∈ H(C̄) ⊆ H(C).

But this is not possible, since v^p ∉ H(C) while ℓ(u^0) ≠ 0. Now suppose two elements of (3) that are essentially distinct from u^0 are not essentially distinct from each other; let these be, for instance, the solutions with p = p_1 and p = p_2. Then for some positive t ∈ P we have

    t[−v^{p_1} ℓ(u^0) + u^0 ℓ(v^{p_1})] − [−v^{p_2} ℓ(u^0) + u^0 ℓ(v^{p_2})] ∈ H(C̄) ⊆ H(C),

and hence −t v^{p_1} ℓ(u^0) + v^{p_2} ℓ(u^0) ∈ H(C), and further t v^{p_1} − v^{p_2} ∈ H(C), which is not possible since v^{p_1} and v^{p_2} are essentially distinct fundamental solutions of system (1). Thus the elements of (3) are essentially distinct fundamental solutions of system (2).

Finally we show that the elements of (3) constitute a fundamental system of solutions of system (2). Let v̄ be any fundamental solution of system (2). Consider first the case v̄ ∈ H(C). Since, in the present case, H(C) is generated by the space H(C̄) and the element u^0, we have

    v̄ = a u^0 + h (a ∈ P, h ∈ H(C̄)).

On the other hand, since v̄ ∉ H(C̄), we have ℓ(v̄) ≠ 0, which implies ℓ(v̄) < 0. Hence

    ℓ(v̄) = a ℓ(u^0) + ℓ(h) = a ℓ(u^0) < 0,

and thus a > 0. But then the relation v̄ = a u^0 + h implies (see Corollary 2.13) that the fundamental solutions v̄ and u^0 of system (2) are not essentially distinct.

Now consider the case v̄ ∉ H(C). In that case the element v̄ turns r − 1 linearly independent inequalities of system (1), besides ℓ(x) ≤ 0, into equations. Hence it is a fundamental solution of system (1), and is therefore essentially nondistinct from one of the elements v^1, ..., v^ℓ, say v^1. In view of Corollary 2.13 this means that

    v̄ = q v^1 + h^1 (h^1 ∈ H(C))

for some positive q ∈ P. On the other hand, as we already noted, H(C) is generated by H(C̄) and u^0. This yields the relation h^1 = b u^0 + u', where b ∈ P and u' ∈ H(C̄). Consequently

    v̄ = q v^1 + b u^0 + u'.

Using the equation ℓ(v̄) = 0 and the positivity of q, it is not hard to show that v̄ is essentially nondistinct from the fundamental solution −ℓ(u^0)v^1 + ℓ(v^1)u^0. Indeed, since

    ℓ(v̄) = q ℓ(v^1) + b ℓ(u^0) = 0,

we have b = −q ℓ(v^1)/ℓ(u^0). Letting k = −q/ℓ(u^0) (k > 0) and using the above relation, we have

    v̄ = q v^1 + b u^0 + u' = −k ℓ(u^0) v^1 + k ℓ(v^1) u^0 + u' = k(−ℓ(u^0)v^1 + ℓ(v^1)u^0) + u'.

This relation implies that v̄ is essentially nondistinct from the fundamental solution −ℓ(u^0)v^1 + ℓ(v^1)u^0 of system (2). This completes the proof of 2b of the theorem.
3. The necessity of the condition of part 3 follows from part c) of the lemma. We now prove sufficiency. For the case r = 2 the condition is trivial. Let v^p and v^q be two elements of the fundamental system V(C) such that no other element of V(C) turns into equations all of those inequalities that are turned into equations by both v^p and v^q. Let

    ℓ_{j_i}(x) ≤ 0 (i = 1, 2, ..., k)

be the subsystem of (1) consisting of those inequalities, and let K be the cone of solutions of the system

    ℓ_j(x) ≤ 0 (j = 1, 2, ..., m),  −ℓ_{j_i}(x) ≤ 0 (i = 1, 2, ..., k).   (4)

It is obvious that H(K) = H(C) and that, in view of 2a of the theorem, the fundamental system of solutions of (4) consists of all elements of the fundamental system V(C) that turn all of the inequalities ℓ_{j_i}(x) ≤ 0 (i = 1, 2, ..., k) into equations, i.e., of the elements v^p and v^q. But then

    K = H(C) + {v^p, v^q},   (5)

where {v^p, v^q} is the cone generated by the elements v^p and v^q. By a) of Lemma 3.1, the dimension of the linear space spanned by K exceeds the dimension of the subspace H(C) by 2 and hence equals n − r + 2. We show further that this space coincides with the set of solutions of the system of equations

    ℓ_{j_i}(x) = 0 (i = 1, 2, ..., k).   (6)

In fact, let ℓ̄(x) = a_1x_1 + ... + a_nx_n = 0 be an equation on P^n which holds at all elements of K. In view of (5), the inequality ℓ̄(x) ≤ 0 is a consequence of system (4). But then, by the Minkowski-Farkas lemma (see Theorem 2.4), there exist nonnegative elements p_1, ..., p_m, q_1, ..., q_k of the field P such that the relation

    ℓ̄(x) = Σ_{j=1}^m p_j ℓ_j(x) − Σ_{i=1}^k q_i ℓ_{j_i}(x)

holds identically in x. The form ℓ̄(x) and all the forms ℓ_{j_i}(x) vanish at v^p and at v^q. Thus the coefficients p_j of those forms ℓ_j(x) that do not vanish at at least one of the elements v^p and v^q are zero. Hence the form ℓ̄(x) is a linear combination of the forms ℓ_{j_i}(x) (i = 1, 2, ..., k).

Thus any equation ℓ̄(x) = 0 which holds on K is a consequence of (6), and the set of solutions of system (6) coincides with the linear space spanned by K. Since the dimension of that space is n − r + 2, it follows that the rank of system (6) is n − (n − r + 2) = r − 2, as we wanted to show. This
completes the proof of the theorem.

The algorithm described in Theorem 3.1 becomes considerably shorter in the case where the rank r of system (1) equals the number n of its unknowns. This is so because, in that case, the system has no nonzero solutions turning all of its inequalities into equations, and hence its space H(C) is the null space. For this case Theorem 3.1 reduces to our next theorem.

THEOREM 3.2. Let v^1, ..., v^ℓ (ℓ > 1) be a fundamental system V(C) of solutions of the linear inequality system

    ℓ_j(x) = a_{j1}x_1 + ... + a_{jn}x_n ≤ 0 (j = 1, 2, ..., m)   (1')

of rank n on P^n, and let

    ℓ(x) = a_1x_1 + ... + a_nx_n ≤ 0

be a linear inequality on P^n. Consider the elements v^i with ℓ(v^i) ≤ 0 and the elements

    v^q |ℓ(v^p)| + v^p |ℓ(v^q)| with ℓ(v^p)/ℓ(v^q) < 0

(distinct from each other in at least one of the indices (p, q) and such that n − 2 linearly independent inequalities ℓ_j(x) ≤ 0 turn into equations at v^p and at v^q). These elements constitute a fundamental system of solutions of the system

    ℓ_j(x) ≤ 0 (j = 1, 2, ..., m),  ℓ(x) ≤ 0.

If no such elements exist, then the latter system has no fundamental solutions. Among the inequalities of system (1') that turn into equations at the elements v^p and v^q of V(C) there are n − 2 linearly independent ones if and only if no other element of V(C) turns all of them into equations.

If we wish to determine the nonnegative solutions of a system (1), then all we have to do is augment the system by the inequalities

    −x_i ≤ 0 (i = 1, 2, ..., n).
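As a small illustration of Theorem 3.2 (an example of ours, not the book's), take P = R and n = 2:

```latex
% System (1'): -x_1 \le 0,\ -x_2 \le 0; it has rank 2 and V(C) = \{(0,1),(1,0)\}.
% Adjoin \ell(x) = x_1 - x_2 \le 0:
\begin{align*}
\ell((0,1)) &= -1 \le 0, & &\text{so } (0,1) \text{ is kept;}\\
\ell((1,0)) &= 1 > 0,    & &\text{so } (1,0) \text{ is dropped;}\\
v^q\lvert\ell(v^p)\rvert + v^p\lvert\ell(v^q)\rvert
  &= (0,1)\cdot 1 + (1,0)\cdot 1 = (1,1), & &\text{from } (v^p, v^q) = ((1,0),(0,1)).
\end{align*}
% For n = 2 the (n-2)-independence condition is dropped, so the augmented
% system has the fundamental system \{(0,1),(1,1)\}, generating the cone
% 0 \le x_1 \le x_2.
```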
We also note here that a special case of the above problem is the problem mentioned at the beginning of this section: the question of determining the general form of the solutions of system (1). In fact, let (y_1, …, y_{n+1}) be any non-negative solution of the system

  a_{j1}y_1 + … + a_{jn}y_n + a_{j,n+1}y_{n+1} ≤ 0  (j = 1, 2, …, m),   (7)

where

  a_{j,n+1} = −Σ_{i=1}^{n} a_{ji}.

Then the element (x_1, …, x_n) with x_i = y_i − y_{n+1} (i = 1, 2, …, n) is a solution of system (1). On the other hand, if (x_1, …, x_n) is any solution of system (1) and y_{n+1} is an element of P satisfying the condition

  |x_i| ≤ y_{n+1}  (i = 1, 2, …, n),

then the element (y_1, …, y_{n+1}) with y_i = x_i + y_{n+1} (i = 1, 2, …, n) is a non-negative solution of system (7). Thus if y^k = (y_1^k, …, y_{n+1}^k) (k = 1, 2, …, h) is a system of generating elements of the cone of non-negative solutions of system (7), and hence a fundamental system of solutions of system (7) satisfying the condition y_i^k ≥ 0 (i = 1, 2, …, n + 1), then the formula

  x = Σ_{k=1}^{h} p_k x^k,  where x^k = (y_1^k − y_{n+1}^k, …, y_n^k − y_{n+1}^k)  (k = 1, 2, …, h)

and where p_1, …, p_h are non-negative elements of P, gives all of the solutions of system (1) and hence is the general form of the solutions. Hence the problem of finding the general form of the solutions of system (1) reduces to a problem of determining the general form of
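The passage between system (1) and system (7) is mechanical; the following small sketch shows it in code (the function names are ours, not the book's):

```python
def homogenize(A):
    """Rows of system (7): append the coefficient a_{j,n+1} = -sum_i a_{ji}."""
    return [row + [-sum(row)] for row in A]

def solution_from_nonneg(y):
    """A nonnegative solution y of (7) yields the solution x_i = y_i - y_{n+1} of (1)."""
    *head, last = y
    return [yi - last for yi in head]

def nonneg_from_solution(x):
    """A solution x of (1) yields a nonnegative solution of (7), taking y_{n+1} = max |x_i|."""
    t = max((abs(xi) for xi in x), default=0)
    return [xi + t for xi in x] + [t]
```

For instance, for the single inequality x_1 − x_2 ≤ 0 the solution (−1, 3) corresponds to the non-negative solution (2, 6, 3) of the homogenized system, and conversely.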
non-negative solutions. This problem is discussed in the following section. At the conclusion of the present section we present an example of a direct solution of the first problem.

Example: Find the general form of the solutions of the system

  ℓ_1(x) = x_1 − x_2 + 3x_3 − 8x_4 ≤ 0,
  ℓ_2(x) = −x_1 + 2x_2 − x_3 + x_4 ≤ 0,
  ℓ_3(x) = 2x_1 − x_2 − 2x_3 + x_4 ≤ 0,
  ℓ_4(x) = −3x_1 + x_2 − x_3 + 6x_4 ≤ 0,
  ℓ_5(x) = x_1 + x_2 − 3x_3 + 2x_4 ≤ 0

on R^4.

1) Adjoin the inequality ℓ_1(x) ≤ 0 to the trivial inequality ℓ_0(x) = 0x_1 + … + 0x_4 ≤ 0. In this step the basis U(C_0) consists of the four vectors

  u_1^0 = (1, 0, 0, 0),  u_2^0 = (0, 1, 0, 0),  u_3^0 = (0, 0, 1, 0),  u_4^0 = (0, 0, 0, 1).

The basis U(C_1) consists of the vectors

  u_1^1 = u_2^0 ℓ_1(u_1^0) − u_1^0 ℓ_1(u_2^0) = u_2^0 + u_1^0 = (1, 1, 0, 0),
  u_2^1 = u_3^0 ℓ_1(u_1^0) − u_1^0 ℓ_1(u_3^0) = u_3^0 − 3u_1^0 = (−3, 0, 1, 0),
  u_3^1 = u_4^0 ℓ_1(u_1^0) − u_1^0 ℓ_1(u_4^0) = u_4^0 + 8u_1^0 = (8, 0, 0, 1).

The system V(C_0) is empty. Since H(C_1) ≠ H(C_0), it follows from part 2b of the theorem that the system V(C_1) consists of a single vector. Since ℓ_1(u_2^0) < 0, we conclude that v_1^1 = u_2^0 is that vector.

2) Adjoin the inequality ℓ_2(x) ≤ 0 to the system ℓ_0(x) ≤ 0, ℓ_1(x) ≤ 0. The basis U(C_1) here consists of the vectors u_1^1, u_2^1, u_3^1. The basis U(C_2) consists of the vectors

  u_1^2 = u_2^1 ℓ_2(u_1^1) − u_1^1 ℓ_2(u_2^1) = u_2^1 − 2u_1^1 = (−5, −2, 1, 0),
  u_2^2 = u_3^1 ℓ_2(u_1^1) − u_1^1 ℓ_2(u_3^1) = u_3^1 + 7u_1^1 = (15, 7, 0, 1).

The system V(C_1) consists of the vector v_1^1. The system V(C_2) consists of the vectors v_1^2 = u_3^1 and

  v_2^2 = v_1^1 |ℓ_2(u_3^1)| + u_3^1 ℓ_2(v_1^1) = 7v_1^1 + 2u_3^1 = (16, 7, 0, 2).

3) To the system ℓ_0(x) ≤ 0, ℓ_1(x) ≤ 0, ℓ_2(x) ≤ 0 adjoin the inequality ℓ_3(x) ≤ 0. The basis U(C_3) here is

  u_1^3 = u_2^2 ℓ_3(u_1^2) − u_1^2 ℓ_3(u_2^2) = −10u_2^2 − 24u_1^2 = (−30, −22, −24, −10).

The system V(C_3) is

  v_1^3 = u_1^2,
  v_2^3 = v_1^2 |ℓ_3(u_1^2)| + u_1^2 ℓ_3(v_1^2) = 10v_1^2 + 17u_1^2 = (−5, −34, 17, 10),
  v_3^3 = v_2^2 |ℓ_3(u_1^2)| + u_1^2 ℓ_3(v_2^2) = 10v_2^2 + 27u_1^2 = (25, 16, 27, 20).

4) To the system ℓ_0(x) ≤ 0, ℓ_1(x) ≤ 0, ℓ_2(x) ≤ 0, ℓ_3(x) ≤ 0 adjoin the inequality ℓ_4(x) ≤ 0. The basis U(C_4) here is empty, since ℓ_4(u_1^3) ≠ 0. Since ℓ_4(−½u_1^3) < 0, the vector v_1^4 = −½u_1^3 = (15, 11, 12, 5) may be included in the system V(C_4). In addition, that system includes

  v_2^4 = v_1^3 |ℓ_4(−½u_1^3)| + (−½u_1^3) ℓ_4(v_1^3) = 16v_1^3 − 6u_1^3 = (100, 100, 160, 60),
  v_3^4 = v_2^3 |ℓ_4(−½u_1^3)| + (−½u_1^3) ℓ_4(v_2^3) = 16v_2^3 − 12u_1^3 = (280, −280, 560, 280),
  v_4^4 = v_3^3 |ℓ_4(−½u_1^3)| + (−½u_1^3) ℓ_4(v_3^3) = 16v_3^3 − 17u_1^3 = (910, 630, 840, 490).

5) All of the vectors v_1^4, v_2^4, v_3^4, v_4^4 satisfy the inequality ℓ_5(x) ≤ 0. Thus it may be eliminated from the system without changing the solution set. The process is now complete. Introducing the notation

  v^1 = v_1^4 = (15, 11, 12, 5),
  v^2 = (1/20)v_2^4 = (5, 5, 8, 3),
  v^3 = (1/280)v_3^4 = (1, −1, 2, 1),
  v^4 = (1/70)v_4^4 = (13, 9, 12, 7),

we may write the general form of the solutions of our system in the form

  x = p_1v^1 + p_2v^2 + p_3v^3 + p_4v^4
    = (15p_1 + 5p_2 + p_3 + 13p_4, 11p_1 + 5p_2 − p_3 + 9p_4, 12p_1 + 8p_2 + 2p_3 + 12p_4, 5p_1 + 3p_2 + p_3 + 7p_4)
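The arithmetic of the five steps can be checked mechanically. The following sketch (our code; the coefficient signs and generators are as read from the example above) verifies that every generator satisfies every inequality of the system, and that each generator turns at least n − 1 = 3 of them into equations, as Theorem 3.2 requires:

```python
# Coefficient rows of l_1, ..., l_5 and the generators v^1, ..., v^4 found above.
A = [[1, -1, 3, -8],
     [-1, 2, -1, 1],
     [2, -1, -2, 1],
     [-3, 1, -1, 6],
     [1, 1, -3, 2]]
V = [(15, 11, 12, 5), (5, 5, 8, 3), (1, -1, 2, 1), (13, 9, 12, 7)]

def value(a, v):
    """l_j(v) for the coefficient row a."""
    return sum(ai * vi for ai, vi in zip(a, v))

# Every generator satisfies every inequality of the system ...
assert all(value(a, v) <= 0 for a in A for v in V)
# ... and lies on at least n - 1 = 3 of the bounding hyperplanes.
assert all(sum(value(a, v) == 0 for a in A) >= 3 for v in V)
# l_5 holds at every generator, which is why it could be eliminated.
assert all(value(A[4], v) <= 0 for v in V)
```

The assertions pass, confirming the computation of the five steps.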
where p_1, p_2, p_3, p_4 are any non-negative real numbers.

§2. A NUMERICAL SCHEME FOR FINDING THE GENERAL FORM OF NON-NEGATIVE SOLUTIONS OF A SYSTEM OF LINEAR INEQUALITIES

Using Theorem 3.2, Chernikova (see [2]) devised a scheme for finding the general form of the non-negative solutions of system (1). The scheme consists of operating on the table

  T^1 = (T_1^1 | T_2^1) =
  ( 1 … 0 | a_{11} … a_{m1} )
  ( … … … | …             … )
  ( 0 … 1 | a_{1n} … a_{mn} )

The left part T_1^1 of the table T^1 is the identity matrix E_n of order n. The right part T_2^1 of the table is the transpose A′ of the matrix A of system (1). Chernikova's scheme will be presented here with some slight changes. We assume that the tables T^i (i = 2, 3, …) obtained by sequentially transforming T^1 have the form of the table T^1 and carry the same dividing line. Thus the left and right parts of T^i will be denoted by T_1^i and T_2^i, and the columns of T_2^i will be numbered 1, 2, …, m from left to right. The description of the transformation scheme starts with the way in which table T^2 is obtained from table T^1.
If all the elements of a given column of T_2^1 are non-positive (i.e., if we have a non-positive column), then it is replaced by a zero column, since the inequality represented by that column is obviously dependent on the inequalities of the system

  ℓ_j(x) = a_{j1}x_1 + … + a_{jn}x_n ≤ 0 (j = 1, 2, …, m),  x_i ≥ 0 (i = 1, 2, …, n).   (8)

We now choose a basic column of T^1. As a basic column we may take any column of T_2^1 which has at least one positive element. If there is no such column, the next table does not exist. Let t_1 be a basic column of T^1. The first step in obtaining T^2 is to carry over those rows of T^1 whose intersection with t_1 is a non-positive element. The other rows of T^2 are obtained by combining admissible pairs of rows of T^1. Two rows are admissible if their elements in t_1 are nonzero and have opposite signs. If table T^1 has more than two rows, an admissible pair of rows is called an equilibrium pair if T_1^1 has columns that intersect both rows of the pair at zero elements and no other row of T^1 intersects all of these columns at zero elements. If table T^1 has only two rows and they form an admissible pair, then they are an equilibrium pair. Stable equilibrium rows are those combinations, with positive coefficients, of equilibrium pairs of rows of T^1 which intersect t_1 at zero elements. The collection of all stable equilibrium rows of T^1 is brought into T^2. After that, all nonzero elements of the column of T^2 arising from t_1 are replaced by −1, and the other non-positive columns of its right part are replaced by zero columns. This completes the construction of table T^2.

To go from table T^2 to T^3, or from T^i to T^{i+1} (i ≥ 2), we start by finding the equilibrium pairs among the admissible pairs. If T^i has more than two rows, the equilibrium pairs are those admissible pairs of rows for which no other row of T^i has zero elements in all those columns — columns of T_1^i and columns arising from the basic columns t_1, …, t_{i−1} of T^1, …, T^{i−1} — in which both rows of the pair have zero elements. If T^i has two rows and those two rows form an admissible pair, then this pair is an equilibrium pair. The second step consists of replacing by −1 the nonzero elements of the column of T^{i+1} arising from the basic column t_i of T^i, and also replacing by −1 the nonzero elements of its columns arising from t_1, …, t_{i−1}. We then replace by zero columns the non-positive columns among the remaining columns of the right-hand part T_2^{i+1} of T^{i+1}. We note here that all admissible pairs of T^1 are equilibrium pairs.

After a finite number of steps we end up with a table T^k whose right side either consists of non-positive columns or has a strictly positive column (i.e., one all of whose elements are positive). In the first case the process terminates, since we cannot choose a basic column. In the second case we take the strictly positive column as the basic column. In both cases the next table does not exist. In the first case the row vectors x^1, …, x^ℓ of the left side T_1^k of table T^k constitute a fundamental system of solutions of system (8), and hence the general form of the non-negative solutions of system (1) is

  x = Σ_{h=1}^{ℓ} p_h x^h,  p_h ≥ 0  (h = 1, 2, …, ℓ).

In the second case system (1) does not have a non-negative solution that differs from the zero vector. In order to prove these assertions, referred to as assertions A and B respectively, we note the following properties of the tables T^i.
a) Let j₀ be the index of some column in the right part T_2^i of table T^i which has at least one positive element, and let x be any row of the left part of T^i. Then the element of that column contained in the extension of the row to the right side of T^i coincides with ℓ_{j₀}(x).

In fact, for T^1 this property holds for any column of its right part. Clearly, for the other tables T^i, the columns obtained by changing elements to zero or to −1 need not have this property. For the other columns, i.e., for the columns containing positive elements, it is easy to show that our assertion is true in view of the relation

  a(x, ℓ_{j₀}(x)) + b(y, ℓ_{j₀}(y)) = (ax + by, ℓ_{j₀}(ax + by)),

where x and y are any two rows in the left side of table T^{i−1} and a, b ∈ P.

b) If j₀ is the index of a column of T_2^i which arises from one of the basic columns of T^1, …, T^{i−1}, then assertion a) holds for its zero elements.

c) If j₀ is the index of a non-positive column of table T_2^i, then any row x of table T_1^i satisfies ℓ_{j₀}(x) ≤ 0.

For table T^1 the assertion is obviously true. For the other tables T^i it follows easily from assertion a) and the obvious relation

  aℓ_{j₀}(x) + bℓ_{j₀}(y) = ℓ_{j₀}(ax + by).

We now relate the transition from table T^i to table T^{i+1} to the method of Theorem 3.2 used to obtain the fundamental solutions of the system

  x_i ≥ 0 (i = 1, 2, …, n),  ℓ_j(x) = a_{j1}x_1 + … + a_{jn}x_n ≤ 0 (j = 1, 2, …, i)   (9)

from the fundamental solutions of the system

  x_i ≥ 0 (i = 1, 2, …, n),  ℓ_j(x) = a_{j1}x_1 + … + a_{jn}x_n ≤ 0 (j = 1, 2, …, i − 1).   (9′)

Clearly, without loss of generality, we may assume that the indices of the basic columns t_1, …, t_{i−1} are 1, 2, …, i − 1. The association of the two processes requires that we show that the rows of the left-hand side of table T^i form a fundamental system of solutions of system (9′). Indeed, if i = 1 the proposition is trivial; for arbitrary i it follows from a), b) and c).
Using property c) we may easily establish one further property of the tables:

d) Let j₀ be the index of a non-positive column of table T_2^i. The inequality ℓ_{j₀}(x) ≤ 0 is a consequence of the system of inequalities made up of the inequalities x_i ≥ 0 (i = 1, 2, …, n) and those inequalities of system (1) whose indices coincide with the indices of the columns of T_2^i arising from the basic columns t_1, …, t_{i−1}.

Let T^k be a table such that all the columns of its right side are non-positive. Then, in view of c), it follows from the preceding assertion that the rows of its left-hand side constitute a fundamental system of solutions of (8). This proves proposition A. Proposition B is easily established by using property a). Indeed, it follows from property a) that every row of T_1^k gives a positive value to the inequality ℓ_{j₀}(x) ≤ 0 corresponding to the strictly positive column of T^k associated with the basic column t_k; hence no nonzero non-negative solution of system (8) exists.

REMARK: In the numerical scheme, the elements of non-positive columns arising from non-basic columns are changed to zeros (clearly, this shortens the computations). The above considerations imply that the rows of the left side of a table resulting from such a change still make up a fundamental system of solutions of system (9′). Since properties a) and d) remain valid, so do assertions A and B.

The numerical scheme may also be used to derive the general form of the non-negative solutions of nonhomogeneous systems of linear inequalities of the form

  a_{j1}x_1 + … + a_{jn}x_n − a_j ≤ 0  (j = 1, 2, …, m)

(see Chernikova [2]). To do this we apply the scheme to find a fundamental system of solutions of the homogeneous system of linear inequalities

  a_{j1}x_1 + … + a_{jn}x_n − a_jx_{n+1} ≤ 0 (j = 1, 2, …, m),  x_i ≥ 0 (i = 1, 2, …, n + 1).   (8′)

If the (n+1)-st coordinates of all the obtained fundamental solutions are zero, then the original system obviously has no non-negative solutions. If even one of the obtained fundamental solutions has a nonzero (n+1)-st coordinate, then the general form of the non-negative solutions of the original nonhomogeneous system is given by

  v = (p_1v^1 + … + p_ℓv^ℓ) / (p_1v_{n+1}^1 + … + p_ℓv_{n+1}^ℓ),

where v^1, …, v^ℓ are the fundamental solutions of the associated homogeneous system, where v_{n+1}^1, …, v_{n+1}^ℓ are their (n+1)-st coordinates, and where p_1, …, p_ℓ are non-negative elements in P such that

  p_1v_{n+1}^1 + … + p_ℓv_{n+1}^ℓ > 0

(the first n coordinates of v then constitute the solution).
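Stripped of the table bookkeeping, the scheme amounts to the following ray computation. The sketch below is our reconstruction, not the book's code: the rays maintained are the rows of the left parts T_1^i, the values a·r are what the right parts record (property a)), and the equilibrium-pair test is replaced by the equivalent zero-pattern adjacency test. Integer input keeps the arithmetic exact.

```python
def nonneg_rays(A):
    """Fundamental system of solutions of system (8):
    extreme rays of the cone {x >= 0 : a . x <= 0 for every row a of A}."""
    n = len(A[0])
    rays = [[int(i == k) for k in range(n)] for i in range(n)]
    done = []                                    # inequalities already adjoined

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def tight(r):                                # constraints turned into equations at r
        t = {('x', i) for i, x in enumerate(r) if x == 0}
        t |= {('l', j) for j, b in enumerate(done) if dot(b, r) == 0}
        return t

    for a in A:
        vals = [dot(a, r) for r in rays]
        keep = [r for r, v in zip(rays, vals) if v <= 0]
        for p, vp in zip(rays, vals):
            if vp <= 0:
                continue
            for q, vq in zip(rays, vals):
                if vq >= 0:
                    continue
                common = tight(p) & tight(q)
                if any(r is not p and r is not q and common <= tight(r)
                       for r in rays):
                    continue                     # not an equilibrium (adjacent) pair
                # positive combination turning the new inequality into an equation
                keep.append([vp * qi - vq * pi for pi, qi in zip(p, q)])
        rays = keep
        done.append(a)
    return rays
```

For the system x_1 − x_2 ≤ 0, x ≥ 0 it returns the rays (0, 1) and (1, 1), the edges of the cone 0 ≤ x_1 ≤ x_2.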
Example: Find the general form of the non-negative solutions of the system (in R^5):

  x_1 − x_2 + 3x_3 − 8x_4 + 5x_5 ≤ 0,
  −x_1 + 2x_2 − x_3 + x_4 − x_5 ≤ 0,
  2x_1 − x_2 − 2x_3 + x_4 + 0x_5 ≤ 0,
  −3x_1 + x_2 − x_3 + 6x_4 − 3x_5 ≤ 0,
  x_1 + x_2 − 3x_3 + 2x_4 − x_5 ≤ 0.

According to our numerical scheme we form the table

  T^1 = (T_1^1 | T_2^1):
  1 0 0 0 0 |  1 -1  2 -3  1
  0 1 0 0 0 | -1  2 -1  1  1
  0 0 1 0 0 |  3 -1 -2 -1 -3
  0 0 0 1 0 | -8  1  1  6  2
  0 0 0 0 1 |  5 -1  0 -3 -1

Taking the first column of the right side as the basic column we obtain the following table:

  T^2 = (T_1^2 | T_2^2):
  0 1 0 0 0 | -1  2  -1   1   1
  0 0 0 1 0 | -1  1   1   6   2
  1 1 0 0 0 |  0  1   1  -2   2
  0 3 1 0 0 |  0  5  -5   2   0
  0 5 0 0 1 |  0  9  -5   2   4
  8 0 0 1 0 |  0 -7  17 -18  10
  0 0 8 3 0 |  0 -5 -13  10 -18
  0 0 0 5 8 |  0 -3   5   6   2

Taking the last column of table T^2 as the basic column we get the next table; for this choice the equilibrium pairs of table T^2 are the pairs of rows (2,7), (6,7) and (7,8), where the numbers in brackets are the indices of those rows:

  T^3 = (T_1^3 | T_2^3):
   0 3  1  0  0 |  0   5  -5    2  0
   0 0  8  3  0 |  0  -5 -13   10 -1
  72 0 40 24  0 |  0 -88  88 -112  0
   0 0  8 48 72 |  0 -32  32   64  0
   0 0  8 12  0 | -1   4  -4   64  0

In T^3 we take the second column of the right side as the basic column and go to T^4. For this choice all of the admissible pairs of table T^3 are equilibrium pairs:

  T^4 = (T_1^4 | T_2^4):
    0   0   8   3   0 |  0 -1 -13   10 -1
   72   0  40  24   0 |  0 -1  88 -112  0
    0   0   8  48  72 |  0 -1  32   64  0
    0   3   9   3   0 |  0  0 -18   12 -1
  360 264 288 120   0 |  0  0   0 -384  0
    0  96  72 240 360 |  0  0   0  384  0
    0   0  72  72   0 | -1  0 -72  360 -1
   72   0 216 288   0 | -1  0   0 1296  0
    0   0  72 144  72 | -1  0   0  576  0

For this table the equilibrium pairs are the two pairs (1,2) and (1,3) of rows of T^4. We take the third column of its right side T_2^4 as the basic column and transform it into the table

  T^5 = (T_1^5 | T_2^5):
    0   0    8   3   0 |  0 -1 -1   10 -1
    0   3    9   3   0 |  0  0 -1   12 -1
  360 264  288 120   0 |  0  0  0 -384  0
    0  96   72 240 360 |  0  0  0  384  0
    0   0   72  72   0 | -1  0 -1  360 -1
   72   0  216 288   0 | -1  0  0 1296  0
    0   0   72 144  72 | -1  0  0  576  0
  936   0 1224 576   0 |  0 -1  0 -576 -1
    0   0  360 720 936 |  0 -1  0 1152 -1

Take, for the basic column of T^5, the fourth column of its right side T_2^5. The equilibrium pairs of rows are those pairs of rows of T^5 whose indices are (1,8), (2,3), (3,4), (3,6), (6,8) and (8,9). We then obtain the table

  T^6 = (T_1^6 | T_2^6):
    360  264   288  120   0 |  0  0  0 -1  0
    936    0  1224  576   0 |  0 -1  0 -1 -1
    360  360   576  216   0 |  0  0 -1  0 -1
    360  360   360  360 360 |  0  0  0  0  0
  10296 7128  9504 5544   0 | -1  0  0  0  0
   8712    0 11880 6336   0 | -1 -1  0  0 -1
   1872    0  2808 1872 936 |  0 -1  0  0 -1
   4680    0  8424 3744   0 |  0 -1 -1  0 -1

and the process ends.

The rows of the left side T_1^6 of this table constitute a minimal system of generating elements for the cone of non-negative solutions of our system of linear inequalities. Dividing each row by a positive number, we may write the system in the following form:

  y^1 = (15, 11, 12, 5, 0),  y^2 = (13, 0, 17, 8, 0),  y^3 = (5, 5, 8, 3, 0),  y^4 = (1, 1, 1, 1, 1),
  y^5 = (13, 9, 12, 7, 0),  y^6 = (11, 0, 15, 8, 0),  y^7 = (2, 0, 3, 2, 1),  y^8 = (5, 0, 9, 4, 0).

Thus, the general form of the non-negative solutions of the system is given by

  y = Σ_{i=1}^{8} p_i y^i,

where p_1, …, p_8 are non-negative real numbers.
The system in the above example is related to the example in §1 in the same way that system (7) is related to system (1). Thus the cone generated by the elements

  x^i = (y_1^i − y_5^i, y_2^i − y_5^i, y_3^i − y_5^i, y_4^i − y_5^i)  (i = 1, 2, …, 8),

that is, by

  x^1 = (15, 11, 12, 5),  x^2 = (13, 0, 17, 8),  x^3 = (5, 5, 8, 3),  x^4 = (0, 0, 0, 0),
  x^5 = (13, 9, 12, 7),  x^6 = (11, 0, 15, 8),  x^7 = (1, −1, 2, 1),  x^8 = (5, 0, 9, 4),

should coincide with the cone of solutions of the example in §1.

To show that, we associate the above elements with the generating elements of the cone of solutions for the example in §1. Those are among the above elements: x^1, x^3, x^5, x^7. On the other hand, the remaining x^i may be obtained as non-negative linear combinations of these four elements as follows:

  x^2 = (1/2)x^1 + (11/2)x^7,
  x^4 = 0·x^1 + 0·x^3 + 0·x^5 + 0·x^7,
  x^6 = (1/2)x^5 + (9/2)x^7,
  x^8 = (1/2)x^3 + (5/2)x^7.

§3. EXCLUSION OF DEPENDENT INEQUALITIES FROM A SYSTEM OF SIMULTANEOUS INEQUALITIES
In this section we study the problem of selecting a subsystem that is equivalent to the whole system and that contains no dependent inequalities. A system of linear inequalities that contains no dependent inequalities is said to be irreducible. Obviously, by property d) of the tables T^i of the preceding section, the scheme described there may be used to solve the following problem:

(*) From system (8) choose a subsystem

  ℓ_{j_k}(x) = a_{j_k1}x_1 + … + a_{j_kn}x_n ≤ 0 (k = 1, 2, …, ℓ),  x_i ≥ 0 (i = 1, 2, …, n)

which is equivalent to (8) and which contains no inequalities of the form ℓ_{j_k}(x) ≤ 0 that are dependent. In fact, it follows from property d) of the numerical scheme of §2 that, for properly chosen basic columns, every inequality ℓ_j(x) ≤ 0 of system (8) whose column becomes non-positive is a dependent inequality.

It is easy to see that, for a homogeneous system of the form (1), our problem reduces entirely to problem (*). In fact, as we noted at the end of §1, the solutions of system (1) are defined in terms of the non-negative solutions of the system (7) introduced there or, equivalently, of the system

  a_{j1}y_1 + … + a_{j,n+1}y_{n+1} ≤ 0 (j = 1, 2, …, m),  y_i ≥ 0 (i = 1, 2, …, n + 1),   (10)

where

  a_{j,n+1} = −Σ_{i=1}^{n} a_{ji}.

Thus finding the general form of the solutions of system (1) is the same as finding a fundamental system of solutions of system (10). Let

  a_{j_k1}y_1 + … + a_{j_k,n+1}y_{n+1} ≤ 0 (k = 1, 2, …, ℓ),  y_i ≥ 0 (i = 1, 2, …, n + 1)   (11)

be a subsystem of (10) satisfying the requirement of problem (*). We show then that the subsystem

  a_{j_k1}x_1 + … + a_{j_kn}x_n ≤ 0  (k = 1, 2, …, ℓ)   (12)

of system (1) contains no dependent inequalities and that its solution set coincides with that of system (1). In view of the Minkowski–Farkas theorem, the first of these assertions follows from the absence of dependent inequalities in the system

  a_{j_k1}y_1 + … + a_{j_k,n+1}y_{n+1} ≤ 0  (k = 1, 2, …, ℓ).

The second assertion follows from the fact that a fundamental system of solutions of system (11) coincides, obviously, with a fundamental system of solutions of system (10). In fact, if y^k = (y_1^k, …, y_{n+1}^k) (k = 1, 2, …, h) is a fundamental system of solutions of system (10), the formula

  x = Σ_{k=1}^{h} p_k (y_1^k − y_{n+1}^k, …, y_n^k − y_{n+1}^k),
where p_1, …, p_h are non-negative numbers in P, is the general formula for the solutions of system (1) as well as of system (12). Thus, in the case of system (1), the solution to our problem is a corollary of the solution of problem (*).

Now let

  ℓ_j(x) − a_j = a_{j1}x_1 + … + a_{jn}x_n − a_j ≤ 0  (j = 1, 2, …, m)   (13)

be a system of nonhomogeneous linear inequalities on P^n. In view of Lemma 2.5, one of its inequalities ℓ_{j_0}(x) − a_{j_0} ≤ 0 is dependent if and only if it is a consequence of one of its subsystems. This is true if and only if the inequality ℓ_{j_0}(x) − a_{j_0}x_{n+1} ≤ 0 is a consequence of a subsystem of

  ℓ_j(x) − a_jx_{n+1} ≤ 0 (j = 1, 2, …, m; j ≠ j_0),  x_{n+1} ≥ 0;

but this means that it is a dependent inequality of the system

  ℓ_j(x) − a_jx_{n+1} ≤ 0 (j = 1, 2, …, m),  x_{n+1} ≥ 0.   (13′)

Hence the problem of choosing an irreducible subsystem of (13) is the same as that of choosing a subsystem

  ℓ_{j_k}(x) − a_{j_k}x_{n+1} ≤ 0 (k = 1, 2, …, ℓ),  x_{n+1} ≥ 0

of system (13′) which contains no dependent inequalities with the indices j_1, …, j_ℓ and which is equivalent to system (13′). Using the above arguments, it is easy to see that this problem reduces to the problem of choosing, from the system

  a_{j1}x_1 + … + a_{jn}x_n − a_jx_{n+1} + (a_j − Σ_{i=1}^{n} a_{ji})x_{n+2} ≤ 0 (j = 1, 2, …, m),
  x_i ≥ 0 (i = 1, 2, …, n + 2),  −x_{n+1} + x_{n+2} ≤ 0   (13″)

corresponding to system (13′), a subsystem

  a_{j_k1}x_1 + … + a_{j_kn}x_n − a_{j_k}x_{n+1} + (a_{j_k} − Σ_{i=1}^{n} a_{j_ki})x_{n+2} ≤ 0 (k = 1, 2, …, ℓ),
  x_i ≥ 0 (i = 1, 2, …, n + 2),  −x_{n+1} + x_{n+2} ≤ 0   (13‴)

which is equivalent to system (13″) and which contains no dependent inequalities with one of the indices j_1, …, j_ℓ. In that case the subsystem

  ℓ_{j_k}(x) − a_{j_k}x_{n+1} ≤ 0 (k = 1, 2, …, ℓ),  x_{n+1} ≥ 0

of system (13′) is equivalent to system (13′) and, with the possible exception of x_{n+1} ≥ 0, does not contain any dependent inequalities. But then the subsystem

  a_{j_k1}x_1 + … + a_{j_kn}x_n − a_{j_k} ≤ 0  (k = 1, 2, …, ℓ)

is equivalent to system (13) and is an irreducible subsystem of it. Consequently, the problem of this section for system (13) reduces to problem (*) for the corresponding system (13″), and hence to the numerical scheme of §2. Furthermore, one of the columns now corresponds to the inequality −x_{n+1} + x_{n+2} ≤ 0.
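Dependence of a single inequality can also be tested directly from the Minkowski–Farkas criterion used above, without building the tables. The sketch below is our code, not the book's: for the system {Ax ≤ 0, x ≥ 0}, the inequality ℓ_{j₀}(x) ≤ 0 is a consequence of the remaining ones exactly when a_{j₀} is dominated componentwise by a non-negative combination of the other rows, and that feasibility question is decided here by Fourier–Motzkin elimination in exact arithmetic:

```python
from fractions import Fraction

def fm_feasible(C, d):
    """Fourier-Motzkin elimination: is {lam : C @ lam <= d} nonempty?"""
    rows = [[Fraction(x) for x in r] + [Fraction(y)] for r, y in zip(C, d)]
    for k in range(len(C[0])):
        pos = [r for r in rows if r[k] > 0]
        neg = [r for r in rows if r[k] < 0]
        rows = [r for r in rows if r[k] == 0]
        # each (upper bound, lower bound) pair on lam_k yields one new row
        rows += [[p[i] / p[k] + q[i] / -q[k] for i in range(len(p))]
                 for p in pos for q in neg]
    return all(r[-1] >= 0 for r in rows)      # only constraints 0 <= const remain

def is_dependent(A, j0):
    """Is inequality j0 of {A x <= 0, x >= 0} a consequence of the others?

    Criterion: a_{j0} <= sum_j lambda_j a_j componentwise for some lambda >= 0.
    """
    others = [row for j, row in enumerate(A) if j != j0]
    m, n = len(others), len(A[0])
    C = [[-int(j == k) for k in range(m)] for j in range(m)]   # lambda_j >= 0
    d = [0] * m
    for i in range(n):            # sum_j lambda_j a_{ji} >= a_{j0,i}
        C.append([-row[i] for row in others])
        d.append(-A[j0][i])
    return fm_feasible(C, d)
```

The function reports, for instance, that in a system containing the rows ℓ, ℓ′ and ℓ + ℓ′ the last row is dependent.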
Example: Consider the system

  x_1 + x_2 + x_3 − x_4 ≤ 0,
  −x_1 + x_2 − x_3 + x_4 ≤ 0,
  −2x_1 + x_2 − x_3 − 2x_4 ≤ 0,
  x_1 + 2x_2 − 2x_3 + x_4 ≤ 0,
  x_1 + 4x_2 − 2x_3 + x_4 ≤ 0,
  −2x_1 + 4x_2 − 4x_3 + 0x_4 ≤ 0

in R^4. Choose an equivalent subsystem with the smallest number of inequalities. Does the chosen subsystem include any dependent inequalities?

To answer the question we investigate the dependence of the six inequalities of the system

  x_1 + x_2 + x_3 − x_4 − 2x_5 ≤ 0,
  −x_1 + x_2 − x_3 + x_4 + 0x_5 ≤ 0,
  −2x_1 + x_2 − x_3 − 2x_4 + 4x_5 ≤ 0,
  x_1 + 2x_2 − 2x_3 + x_4 − 2x_5 ≤ 0,
  x_1 + 4x_2 − 2x_3 + x_4 − 4x_5 ≤ 0,
  −2x_1 + 4x_2 − 4x_3 + 0x_4 + 2x_5 ≤ 0,
  x_i ≥ 0 (i = 1, 2, …, 5).

According to the numerical scheme of §2, we form the table

  T^1 = (T_1^1 | T_2^1):
  1 0 0 0 0 |  1 -1 -2  1  1 -2
  0 1 0 0 0 |  1  1  1  2  4  4
  0 0 1 0 0 |  1 -1 -1 -2 -2 -4
  0 0 0 1 0 | -1  1 -2  1  1  0
  0 0 0 0 1 | -2  0  4 -2 -4  2

Choosing the first column of T_2^1 as the basic column we get the table

  T^2 = (T_1^2 | T_2^2):
  0 0 0 1 0 | -1  1 -2  1  1  0
  0 0 0 0 1 | -1  0  4 -2 -4  2
  1 0 0 1 0 |  0  0 -4  2  2 -2
  0 1 0 1 0 |  0  2 -1  3  5  4
  0 0 1 1 0 |  0  0 -3 -1 -1 -4
  2 0 0 0 1 |  0 -2  0  0 -2 -2
  0 2 0 0 1 |  0  2  6  2  4 10
  0 0 2 0 1 |  0 -2  2 -6 -8 -6

Choosing the second column of T_2^2 as the basic column, with the equilibrium pairs (6,7) and (7,8) of rows of T^2, we pass to the table T^3. Taking the third column of T_2^3 as the basic column, with the equilibrium pairs (1,2), (1,3), (3,5), (3,6) and (2,7), we pass to T^4. Finally, the fourth column of T_2^4 is chosen as the basic column, with the equilibrium pairs (1,2) and (7,8), and we arrive at the table T^5.

Since the 5th and 6th columns of the right side of T^5 have no positive elements, and since they arise from non-basic columns, it follows from property d) of §2 and from the above arguments that the 5th and 6th inequalities are dependent and may be eliminated from the system without affecting the set of solutions. It is easy to see that the 5th inequality is the sum of the 1st, 2nd and 4th inequalities, and that the 6th is the sum of the 2nd, 3rd and 4th. Since the first four inequalities have linearly independent left-hand sides, they constitute an irreducible subsystem.

REMARK: In the above example the answer to our question is independent of the magnitudes of the numbers in the left side of the table; hence each of the positive numbers there could have been set equal to unity. More precisely, we could have used the rules: the sum of a positive number and a non-negative number equals one, and the sum of two positive numbers equals one.

§4. A NUMERICAL SCHEME FOR FINDING THE GENERAL EXPRESSION FOR NON-NEGATIVE SOLUTIONS OF SYSTEMS OF LINEAR EQUATIONS
In this section we apply the numerical scheme of §2 (more precisely, a variant of that scheme) to find the general expressions for the non-negative solutions of systems of the form

  ℓ_j(x) = a_{j1}x_1 + … + a_{jn}x_n = 0  (j = 1, 2, …, m)   (14)

or, equivalently, general expressions for the solutions of systems

  ℓ_j(x) = a_{j1}x_1 + … + a_{jn}x_n ≤ 0 (j = 1, 2, …, m),
  −ℓ_j(x) = −a_{j1}x_1 − … − a_{jn}x_n ≤ 0 (j = 1, 2, …, m),
  x_i ≥ 0 (i = 1, 2, …, n).   (15)

The table T^1, in this case, takes the form

  T^1 = (T_1^1 | T_2^1) =
  ( 1 … 0 | a_{11} −a_{11} … a_{m1} −a_{m1} )
  ( … … … | …                           … )
  ( 0 … 1 | a_{1n} −a_{1n} … a_{mn} −a_{mn} )

If A_j^1 = (a_{ji}) (i = 1, 2, …, n) is a nonzero column in the right side of table T^1, then either A_j^1 or −A_j^1 may be taken as a basic column for T^1. Let A_j^1 be the chosen basic column. We now apply the procedure of §2 to accomplish the transition to T^2, and let A_j^2 be the column of T^2 arising from the paired column −A_j^1. Obviously, the column A_j^2 contains only zero or positive elements. If A_j^2 contains nonzero elements, it may be chosen as a basic column for T^2, since it then contains a positive element. Thus, according to the transition rule, table T^3 will contain those and only those rows that intersect A_j^2 at zero elements. It is not hard to show that the set of such rows consists of those rows of table T^1 that intersect A_j^1 at zero elements, together with the stable equilibrium combinations of admissible pairs of rows of table T^1. If we take −A_j^1 as the basic column for T^1, the discussion is analogous.

The above arguments show that the transition from table T^1 to table T^3 may be accomplished without the use of table T^2. The rule of transition is as follows. Choose, as a basic column for T^1, one of the two nonzero paired columns A_j^1 or −A_j^1; let us pick A_j^1. Include in table T^3 = (T_1^3 | T_2^3) those rows of T^1 that intersect A_j^1 at zero elements. In addition, include in T^3 the stable equilibrium combinations of admissible pairs of rows of T^1; here we use the procedures and rules of §2. In the completed T^3 the columns A_j^3 and −A_j^3 that arise from the basic column of the preceding table and its pair are zero columns. Clearly, the procedure outlined above may be followed in the subsequent steps as well: the columns of T^3, T^5, …, T^{2k+1} that arise from the basic columns of the preceding tables contain only zero elements, and hence do not affect the choice of equilibrium admissible pairs.

This observation leads us to the following process for finding a fundamental system of solutions of (15) or, equivalently, a minimal system of generating elements of the cone of non-negative solutions of system (14). For system (14), set up the table

  S^1 = (S_1^1 | S_2^1) =
  ( 1 … 0 | a_{11} … a_{m1} )
  ( … … … | …             … )
  ( 0 … 1 | a_{1n} … a_{mn} )

Suppose the tables S^2, …, S^{i−1} have been obtained from S^1. To construct the table S^i, find a nonzero column in the right-hand side of S^{i−1}, if one exists; let it be, say, the first such column, and call it the basic column of S^{i−1}. Take into table S^i = (S_1^i | S_2^i) those rows of S^{i−1} whose intersections with the basic column are zero elements. Now consider pairs of rows of S^{i−1} whose intersections with the basic column are nonzero and of opposite signs (for each pair); such pairs are called admissible pairs. Clearly, such pairs need not exist. If S^{i−1} contains more than two rows, if there are columns of S^{i−1} whose intersections with both rows of an admissible pair are zeros, and if no other row of S^{i−1} intersects all such columns at zeros, then the admissible pair is called an equilibrium pair. If S^{i−1} contains only two rows and they constitute an admissible pair, then we agree to call this pair an equilibrium pair. Stable equilibrium rows are defined as in §2. The stable equilibrium rows of S^{i−1} are brought into S^i, and its construction is, thus, completed.
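The relation between the S-tables and the T-tables of §2 is just a doubling of columns; in code it is the following one-line reduction (a sketch; the function name is ours):

```python
def equations_to_inequalities(B):
    """Rows of system (15) for the equation system (14): each equation
    l_j(x) = 0 becomes the pair l_j(x) <= 0 and -l_j(x) <= 0, which is why
    the right part of S^1 carries one column a_j where T^1 carries the
    paired columns a_j and -a_j."""
    out = []
    for row in B:
        out.append(list(row))
        out.append([-x for x in row])
    return out
```

Feeding the result to the scheme of §2 finds the non-negative solutions of the equation system; the S-table process avoids the duplication.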
In a finite number of steps the above process terminates. In the final step the right side of the table consists of columns which either are zero columns or do not lead to equilibrium pairs of rows when chosen as basic columns. Comparing the above process with the process of constructing the tables T^i for (15), it is easy to see that the first possibility corresponds to a final table whose right side consists of non-positive columns, and that the second possibility corresponds to the choice of a basic column that leads to a strictly positive column. Thus, in view of the arguments of §2, in the first case the minimal system of generating elements of the cone of non-negative solutions of (14) consists of the rows of the left side of the last table S^i, while in the second case that cone contains no nonzero elements.

The scheme described here was worked out by Chernikova (see Chernikova [1]) on the basis of the following proposition, which follows from Theorem 3.2. Let v^1, …, v^ℓ (ℓ > 1) be a fundamental system of solutions of the linear inequality system

  ℓ_j(x) = a_{j1}x_1 + … + a_{jn}x_n ≤ 0  (j = 1, 2, …, m)

of rank n on P^n, and let ℓ(x) = a_1x_1 + … + a_nx_n = 0 be a linear equation on P^n. Then the minimal system of generating elements of the solution cone of the system

  ℓ_j(x) ≤ 0 (j = 1, 2, …, m),  ℓ(x) = 0

consists of the elements v^i with ℓ(v^i) = 0 and of those elements

  v^q|ℓ(v^p)| + v^p|ℓ(v^q)|  with ℓ(v^p)ℓ(v^q) < 0

which are distinct in at least one of the indices p and q and for each of which there are n − 2 linearly independent inequalities ℓ_j(x) ≤ 0 that are turned into equations by v^p as well as by v^q. Among the inequalities ℓ_j(x) ≤ 0 that are turned into equations by v^p as well as by v^q there are n − 2 linearly independent ones if and only if no other v^k turns all of them into equations.

The question of developing algorithms for finding non-negative solutions to systems of
homogeneous linear equations is important from the point of view of the theory of linear inequalities as well as from the point of view of applications. For instance, as shown in Chernikova's papers [10] and [16], a closely related question is the question of eliminating unknowns in a system of linear inequalities. This relationship between the two problems is used in constructing algorithms for both of them. In fact, the algorithm for the problem of eliminating unknowns coincides with the algorithm described in the present section.

We now show that the above algorithm may be used to obtain the general expression of the non-negative solutions of the system
$$a_{j1}x_1+\dots+a_{jn}x_n=a_j \quad (j=1,2,\dots,m) \tag{16}$$
of nonhomogeneous linear equations (see Chernikova [1]). Actually, any non-negative solution $(x_1^0,\dots,x_n^0)$ of that system defines a non-negative solution $(x_1^0,\dots,x_n^0,1)$ of the system
$$a_{j1}x_1+\dots+a_{jn}x_n-a_jx_{n+1}=0 \quad (j=1,2,\dots,m) \tag{17}$$
of homogeneous equations. Conversely, any non-negative solution $(x_1^0,\dots,x_n^0,1)$ of system (17) defines a non-negative solution $(x_1^0,\dots,x_n^0)$ of system (16) of nonhomogeneous linear equations. Thus our problem reduces to finding the non-negative solutions of system (17) that have last coordinate equal to one.

To accomplish our task, we apply the algorithm of this section to system (17). If it turns out that it has no nonzero non-negative solutions, then system (16) does not have any non-negative solutions. In the opposite case, let
$$\vec v_i=(v_1^i,\dots,v_{n+1}^i) \quad (i=1,2,\dots,\ell)$$
be the minimal system of generating elements for the cone of non-negative solutions of system (17). If $v_{n+1}^i=0$ $(i=1,2,\dots,\ell)$, then system (17) has no non-negative solutions whose last coordinate is unity and, hence, system (16) has no non-negative solutions. Let $\vec v_{i_1},\dots,\vec v_{i_k}$ denote those elements of $\vec v_1,\dots,\vec v_\ell$ whose last coordinates are positive. The general formula for non-negative solutions of (17) whose last coordinates are positive is, thus, given by
$$\vec v=p_1\vec v_1+\dots+p_\ell\vec v_\ell,$$
where all $p_1,\dots,p_\ell$ are non-negative and at least one among $p_{i_1},\dots,p_{i_k}$ is positive. Hence, all the non-negative solutions of system (17) with one as a last coordinate satisfy the condition
$$\vec v^{(1)}=\frac{p_1\vec v_1+\dots+p_\ell\vec v_\ell}{p_1v_{n+1}^1+\dots+p_\ell v_{n+1}^\ell},$$
where $p_1,\dots,p_\ell$ are non-negative and at least one among $p_{i_1},\dots,p_{i_k}$ is positive. Thus the
expression for non-negative solutions of (16) is
$$\vec v=\frac{p_1\vec v^{\,1}+\dots+p_\ell\vec v^{\,\ell}}{p_1v_{n+1}^1+\dots+p_\ell v_{n+1}^\ell},$$
where $\vec v^{\,i}=(v_1^i,\dots,v_n^i)$ $(i=1,2,\dots,\ell)$, $\vec v=(v_1,\dots,v_n)$, $p_i\ge 0$ $(i=1,2,\dots,\ell)$, and at least one among $p_{i_1},\dots,p_{i_k}$ is positive.
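As an illustration of the pairing rule in Chernikova's proposition above, the step of adjoining one equation $\ell(\vec x)=0$ to a cone given by generating elements can be sketched in exact rational arithmetic. This is a simplified sketch only: the test involving $n-2$ linearly independent inequalities, which guarantees minimality, is omitted, so the output may be a redundant generating system; all names are ours.

```python
from fractions import Fraction

def adjoin_equation(generators, ell):
    """Given generating elements of a solution cone and a linear form `ell`
    (a list of coefficients), build generating elements of the part of the
    cone on which ell(x) = 0: keep the generators annihilated by ell, and
    pair up generators on which ell takes opposite signs."""
    def val(v):
        return sum(Fraction(c) * Fraction(x) for c, x in zip(ell, v))
    kept = [v for v in generators if val(v) == 0]
    pos = [v for v in generators if val(v) > 0]
    neg = [v for v in generators if val(v) < 0]
    for p in pos:
        for q in neg:
            a, b = abs(val(p)), abs(val(q))
            # ell of the combination is |ell(p)|*ell(q) + |ell(q)|*ell(p) = 0
            kept.append(tuple(a * Fraction(qi) + b * Fraction(pi)
                              for pi, qi in zip(p, q)))
    return kept
```

For the non-negative quadrant generated by $(1,0)$ and $(0,1)$, adjoining $x_1-x_2=0$ yields the single ray through $(1,1)$, as expected.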
At the conclusion of this section we note that the algorithm for finding the general formula for non-negative solutions of a system of linear equations, explained here, may be used for the more general problem of obtaining the general formula for the solutions of a system of linear inequalities. Indeed, a homogeneous linear inequality system (1) may be written in the equivalent form of a linear equation system
$$\ell_j(\vec x)=a_{j1}x_1+\dots+a_{jn}x_n=u_j \quad (j=1,2,\dots,m)$$
with non-negative parameters $u_j$. Assuming the rank $r$ of the system to be nonzero, we apply the Gauss algorithm for eliminating unknowns to our system. Without loss of generality, we may assume that the formulas thus obtained for the unknowns and for the parameters are given by
$$x_s=b_{s1}u_1+\dots+b_{sr}u_r+a'_{s1}x_{r+1}+\dots+a'_{s,n-r}x_n \quad (s=1,2,\dots,r)$$
and
$$c_{1i}u_1+\dots+c_{ri}u_r+u_{r+i}=0 \quad (i=1,2,\dots,m-r).$$
If $m=r$ then we would not have the last equations, and we would have obtained the general form of the solutions of system (1). If $m>r$ then we use the algorithm of this section, and the non-negative solutions of the equation system, in $\vec u$, have the form
$$(u_1,\dots,u_m)=\sum_{k=1}^{\ell}p_k(u_1^k,\dots,u_m^k),$$
where $p_1,\dots,p_\ell$ are non-negative. Substituting these for $u_1,\dots,u_m$ in the preceding formula we get
$$x_s=\sum_{k=1}^{\ell}p_k(b_{s1}u_1^k+\dots+b_{sr}u_r^k)+\sum_{i=1}^{n-r}a'_{si}x_{r+i} \quad (s=1,2,\dots,r),$$
which is the expression for all the solutions of system (1) of homogeneous linear inequalities. The case of an arbitrary (non-homogeneous) system of linear inequalities reduces, as we have repeatedly noted, to the present case.

EXAMPLE. Find the general formula for the solutions of the system
$$x_1+x_2+x_3-1\ge 0,\quad x_1+x_2-x_3+1\ge 0,\quad 2x_1+x_2-x_3-2\ge 0,$$
$$x_1+2x_2-2x_3+1\ge 0,\quad 2x_1-2x_2+x_3-2\ge 0.$$
First we find the general formula for the solutions of the system
$$x_1+x_2+x_3-x_4\ge 0,\quad x_1+x_2-x_3+x_4\ge 0,\quad 2x_1+x_2-x_3-2x_4\ge 0,$$
$$x_1+2x_2-2x_3+x_4\ge 0,\quad 2x_1-2x_2+x_3-2x_4\ge 0,\quad x_4\ge 0.$$
To do this, consider the equation system
$$x_1+x_2+x_3-x_4=u_1,\quad x_1+x_2-x_3+x_4=u_2,\quad 2x_1+x_2-x_3-2x_4=u_3,$$
$$x_1+2x_2-2x_3+x_4=u_4,\quad 2x_1-2x_2+x_3-2x_4=u_5,\quad x_4=u_6.$$
Applying the Gaussian algorithm for the elimination of unknowns we get
$$x_1+x_2+x_3-x_4=u_1,\quad 2x_2+0x_3+0x_4=u_1-u_2,\quad 2x_3-8x_4=u_1+3u_2-2u_3,$$
$$20x_4=10u_2-6u_3-2u_4,\quad 0=-5u_1-5u_2+6u_3-8u_4-10u_5,\quad 0=-10u_2+6u_3+2u_4-20u_6.$$
From this we get the formulas
$$x_1=\tfrac12u_1+\tfrac1{10}u_3+\tfrac3{10}u_4,\qquad x_2=\tfrac12u_1-\tfrac12u_2,$$
$$x_3=\tfrac12u_1-\tfrac12u_2+\tfrac2{10}u_3+\tfrac4{10}u_4,\qquad x_4=\tfrac12u_2+\tfrac3{10}u_3+\tfrac1{10}u_4,$$
and the equation system
$$-5u_1-5u_2+6u_3-8u_4-10u_5=0,\qquad -5u_2+3u_3+u_4-10u_6=0$$
in the non-negative parameters $u_1,u_2,u_3,u_4,u_5,u_6$. We now find the general formula for non-negative solutions of that system. To do that we construct the table
$$S^1=(S_1^1\mid S_2^1)=\left(\begin{array}{cccccc|cc}
1&0&0&0&0&0&-5&0\\
0&1&0&0&0&0&-5&-5\\
0&0&1&0&0&0&6&3\\
0&0&0&1&0&0&-8&1\\
0&0&0&0&1&0&-10&0\\
0&0&0&0&0&1&0&-10
\end{array}\right)$$
Let the first column of table $S_2^1$ be the basic column for table $S^1$. Then
we get the table $S^2=(S_1^2\mid S_2^2)$, and from it the table
$$S^3=(S_1^3\mid S_2^3)=\left(\begin{array}{cccccc|cc}
12&0&10&0&0&3&0&0\\
0&0&8&6&0&3&0&0\\
0&0&10&0&6&3&0&0\\
6&6&10&0&0&0&0&0\\
0&6&9&3&0&0&0&0\\
0&6&10&0&3&0&0&0
\end{array}\right)$$
Here, all admissible pairs of rows of $S^2$ are equilibrium pairs. The rows of table $S^3$ constitute a minimal system of generating elements of the cone of non-negative solutions of the system of equations in the parameters. Hence the general formula for these solutions is given by
$$(u_1,u_2,u_3,u_4,u_5,u_6)=(12,0,10,0,0,3)p_1+(0,0,8,6,0,3)p_2+(0,0,10,0,6,3)p_3$$
$$\qquad\qquad+(6,6,10,0,0,0)p_4+(0,6,9,3,0,0)p_5+(0,6,10,0,3,0)p_6,$$
where $p_1,p_2,p_3,p_4,p_5,p_6$ are non-negative. Thus we get the formulas
$$u_1=12p_1+6p_4,\quad u_2=6p_4+6p_5+6p_6,$$
$$u_3=10p_1+8p_2+10p_3+10p_4+9p_5+10p_6,\quad u_4=6p_2+3p_5.$$
Substituting for $\vec u$ in the expressions for $x_1,x_2,x_3$ and $x_4$ we get the expressions
$$x_1=p_1-p_2+p_3+4p_4+3p_5+4p_6,$$
$$x_2=6p_1-6p_4-3p_5-3p_6,$$
$$x_3=4p_1+4p_2+2p_3-4p_4-p_6,$$
$$x_4=3p_1+3p_2+3p_3$$
for the coordinates of the solutions of the system of homogeneous linear inequalities. Setting $x_4=3p_1+3p_2+3p_3=1$ in the above expressions, we obtain the formulas for the coordinates $x_1,x_2,x_3$ of the solutions of the non-homogeneous system. In other words, if $x_4>0$, the expressions for these coordinates are given by
$$x_1=\frac{p_1-p_2+p_3+4p_4+3p_5+4p_6}{3p_1+3p_2+3p_3},\qquad
x_2=\frac{6p_1-6p_4-3p_5-3p_6}{3p_1+3p_2+3p_3},\qquad
x_3=\frac{4p_1+4p_2+2p_3-4p_4-p_6}{3p_1+3p_2+3p_3},$$
in which the $p_i$ are non-negative and not all of $p_1,p_2,p_3$ are zero. If we introduce a new, non-negative, set of parameters
$$t_1=3p_1/x_4,\quad t_2=3p_2/x_4,\quad t_3=3p_3/x_4,\qquad v_1=p_4/x_4,\quad v_2=p_5/x_4,\quad v_3=p_6/x_4 \quad (x_4>0),$$
then the general formula for the solutions of our system is given by
$$(x_1,x_2,x_3)=\left(\tfrac13,2,\tfrac43\right)t_1+\left(-\tfrac13,0,\tfrac43\right)t_2+\left(\tfrac13,0,\tfrac23\right)t_3+(4,-6,-4)v_1+(3,-3,0)v_2+(4,-3,-1)v_3,$$
where $t_1,t_2,t_3$ are non-negative parameters satisfying $t_1+t_2+t_3=1$, and $v_1,v_2,v_3$ are non-negative parameters. These expressions show that the solution set of our system is the (algebraic) sum of the cone $C$ whose generating elements are $(4,-6,-4)$, $(3,-3,0)$, $(4,-3,-1)$ and the centroid $E$ whose generating elements are $\left(\tfrac13,2,\tfrac43\right)$, $\left(-\tfrac13,0,\tfrac43\right)$, $\left(\tfrac13,0,\tfrac23\right)$.
Since the latter is the nodal solution set of the system, this conforms with the basic assertion of Theorem 2.23.

§5. ON EQUIVALENT SYSTEMS OF LINEAR INEQUALITIES.
The following question is intimately related to the questions discussed in the preceding sections: under what conditions do the two systems
$$p_{j1}^1x_1+\dots+p_{jn}^1x_n-p_j^1\ge 0 \quad (j=1,2,\dots,m_1),$$
$$p_{j1}^2x_1+\dots+p_{jn}^2x_n-p_j^2\ge 0 \quad (j=1,2,\dots,m_2) \tag{17$'$}$$
have one and the same set of solutions (i.e., under what conditions are the two systems equivalent)? Two systems of linear inequalities are equivalent if and only if each inequality in either one is a consequence of the other system. Thus, in view of Lemma 2.5, the two systems (17′) are equivalent if and only if the two systems
$$p_{j1}^1x_1+\dots+p_{jn}^1x_n-p_j^1x_{n+1}\ge 0 \quad (j=1,2,\dots,m_1),\qquad x_{n+1}\ge 0$$
and
$$p_{j1}^2x_1+\dots+p_{jn}^2x_n-p_j^2x_{n+1}\ge 0 \quad (j=1,2,\dots,m_2),\qquad x_{n+1}\ge 0 \tag{18}$$
of homogeneous linear inequalities are equivalent. Hence, the question of the equivalence of two arbitrary systems of linear inequalities on $P^n$ reduces to a question of the equivalence of two systems of homogeneous linear inequalities. We study the latter question in this section.
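The reduction of (17′) to the homogeneous form (18) is purely mechanical; a small sketch (the function name is ours):

```python
def homogenize(rows):
    """Turn each inequality p_j1*x1 + ... + p_jn*xn - p_j >= 0 into
    p_j1*x1 + ... + p_jn*xn - p_j*x_{n+1} >= 0 and adjoin x_{n+1} >= 0,
    as in the passage from (17') to (18); the solutions of the original
    system are exactly the solutions of the result having x_{n+1} = 1.
    `rows` is a list of pairs (coefficient list, constant term p_j)."""
    n = len(rows[0][0])
    hom = [list(coeffs) + [-p] for coeffs, p in rows]
    hom.append([0] * n + [1])  # the adjoined inequality x_{n+1} >= 0
    return hom
```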
Let
$$\ell_j^1(\vec x)=a_{j1}^1x_1+\dots+a_{jn}^1x_n\ge 0 \quad (j=1,2,\dots,m_1),$$
$$\ell_j^2(\vec x)=a_{j1}^2x_1+\dots+a_{jn}^2x_n\ge 0 \quad (j=1,2,\dots,m_2) \tag{19}$$
be two systems of homogeneous linear inequalities on $P^n$. Clearly, the two systems are equivalent if and only if the two systems
$$\ell_j^1(\vec y)+a_{j,n+1}^1y_{n+1}\ge 0 \quad (j=1,2,\dots,m_1),\qquad y_i\ge 0 \quad (i=1,2,\dots,n+1),$$
$$\ell_j^2(\vec y)+a_{j,n+1}^2y_{n+1}\ge 0 \quad (j=1,2,\dots,m_2),\qquad y_i\ge 0 \quad (i=1,2,\dots,n+1) \tag{20}$$
are equivalent, where $\vec y=(y_1,\dots,y_{n+1})\in P^{n+1}$ and
$$a_{j,n+1}^1=-\sum_{i=1}^n a_{ji}^1,\qquad a_{j,n+1}^2=-\sum_{i=1}^n a_{ji}^2. \tag{21}$$
Indeed, it follows from the Minkowski–Farkas theorem that the two systems (19) are equivalent if and only if each inequality in either system is a non-negative linear combination of the inequalities of the other system. Using formulas (21), it is easy to show that in that case the same condition holds for the systems (20); thus the systems (20) are equivalent. Now suppose the two systems (20) are equivalent. Since the systems (20) obviously have nonzero solutions, it follows from their equivalence that they have a common fundamental system of solutions. Let us denote the elements of that system by
$$\vec y^{\,k}=(y_1^k,\dots,y_{n+1}^k) \quad (k=1,2,\dots,h).$$
Then the formula
$$\vec x=\sum_{k=1}^h p_k\,(y_1^k-y_{n+1}^k,\ \dots,\ y_n^k-y_{n+1}^k),$$
where $p_1,\dots,p_h$ are non-negative elements of $P$, is the general formula for the solutions of each of the two systems (19) (see the end of §1). Hence they have one and the same solution set and are, thus, equivalent.

Therefore, to answer the question of the equivalence of two systems (19), we need first to find a fundamental system of solutions of each of the systems (20). To do that, we apply the algorithm of §2 to each of them and compare the resulting fundamental systems of solutions. These would be the same if each solution in one is a positive multiple (in $P$) of a solution in the other. As we already noted at the beginning of the section, to resolve the question of the equivalence of two non-homogeneous systems (17′) of linear inequalities, it suffices to resolve the question of the equivalence of the two homogeneous systems corresponding to them. Thus the above argument applies to non-homogeneous systems as well.
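Comparing two fundamental systems of solutions "up to positive multiples", as required here, can be sketched by scaling each solution so that the absolute values of its coordinates sum to one (the quasi-unit normalization that reappears in Definition 3.1 below); the names are ours:

```python
from fractions import Fraction

def quasi_unit(v):
    """Scale a nonzero vector so that the sum of the absolute values of
    its coordinates is 1; two vectors are positive multiples of each
    other exactly when they have the same quasi-unit form."""
    s = sum(abs(Fraction(c)) for c in v)
    return tuple(Fraction(c) / s for c in v)

def same_up_to_positive_multiples(gens1, gens2):
    """Two fundamental systems describe the same solution set here when
    each element of one is a positive multiple of an element of the other."""
    return {quasi_unit(v) for v in gens1} == {quasi_unit(v) for v in gens2}
```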
We note here that, in order to solve the problem of the equivalence of the two systems (17′), it is not at all necessary to presuppose that they are consistent. Indeed, let us transform them first to systems of type (18) and then to the systems
$$p_{j1}^1y_1+\dots+p_{jn}^1y_n-p_j^1y_{n+1}-\Bigl(\sum_{i=1}^n p_{ji}^1-p_j^1\Bigr)y_{n+2}\ge 0 \quad (j=1,2,\dots,m_1),$$
$$y_{n+1}-y_{n+2}\ge 0,\qquad y_i\ge 0 \quad (i=1,2,\dots,n+2),$$
and
$$p_{j1}^2y_1+\dots+p_{jn}^2y_n-p_j^2y_{n+1}-\Bigl(\sum_{i=1}^n p_{ji}^2-p_j^2\Bigr)y_{n+2}\ge 0 \quad (j=1,2,\dots,m_2),$$
$$y_{n+1}-y_{n+2}\ge 0,\qquad y_i\ge 0 \quad (i=1,2,\dots,n+2). \tag{22}$$
Applying the algorithm of §2, we find the fundamental system of solutions for each of the above systems. If in all of the solutions in one of them the difference $y_{n+1}-y_{n+2}$ is zero, then the corresponding system in (18) does not have any solutions with positive last coordinate. But then the corresponding system of (17′) is not consistent. If the difference $y_{n+1}-y_{n+2}$ is nonzero for at least one element in each of the two fundamental systems of solutions, then each of the systems (17′) is consistent.

As exposited above, the solution of the problem of the equivalence of two systems of linear inequalities reduces to the problem of finding the generating elements of the cones of solutions of the two systems (22) that correspond to the original systems, or of the two systems (20) in the special case of homogeneous systems. Some other aspects of the solution were suggested to the author by N.V. Chernikova. An examination of these aspects will occupy us for the rest of this section.

DEFINITION 3.1. Consider a nonempty finite system
$$(\vec b_j^{\,0},\vec x)=b_{j1}^0x_1+\dots+b_{jn}^0x_n=0 \quad (j\in J_1),$$
$$(\vec c_j^{\,0},\vec x)=c_{j1}^0x_1+\dots+c_{jn}^0x_n\ge 0 \quad (j\in J_2), \tag{23}$$
where $J_1$ and $J_2$ are finite sets of indices, at least one of which is nonempty. The above system is said to be a canonical system if it satisfies the following conditions:
1) All the vectors $\vec b_j^{\,0}$ $(j\in J_1)$ are nonzero, and each of the vectors $\vec c_j^{\,0}$ $(j\in J_2)$ has the property that the sum of the absolute values of its coordinates is one (such vectors will be called quasi-unit vectors).
2) The vectors $\vec b_j^{\,0}$ $(j\in J_1)$ are orthogonal to each other and to the vectors $\vec c_j^{\,0}$ $(j\in J_2)$.
3) No one of the vectors $\vec c_j^{\,0}$ $(j\in J_2)$ is a non-negative linear combination of the others, and the cone generated by them is acute.
If one of the sets $J_1$ and $J_2$ is empty, we need only verify those parts of conditions 1), 2) and 3) that make sense for the case under consideration.
REMARK 1. The vectors considered above are elements of $P^n$. As in the case of $R^n$, two vectors $(x_1,\dots,x_n)\in P^n$ and $(y_1,\dots,y_n)\in P^n$ are said to be orthogonal if and only if $x_1y_1+\dots+x_ny_n=0$.

REMARK 2. Conditions 1) and 2) may be directly verified. The first part of condition 3) may be verified by using the algorithm of §2 (see the elimination of dependent inequalities in §3). The second part of condition 3) may be verified by using the algorithm of §4 together with the following proposition. The set of nonzero vectors $\vec a_j$ $(j=1,2,\dots,m)$ in $P^n$ generates an acute cone if and only if the equation
$$\vec a_1x_1+\dots+\vec a_mx_m=\vec 0$$
has no nonzero non-negative solution. Recall that an acute cone is a cone such that the maximal subspace included in it is the zero subspace.

REMARK 3. It follows from the second part of condition 3) that none of the inequalities $(\vec c_j^{\,0},\vec x)\ge 0$ of system (23) may be exchanged for an equation without changing the solution set of the system.
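The directly verifiable conditions 1) and 2) of Definition 3.1 amount to finitely many exact arithmetic checks; a sketch follows (condition 3) requires the algorithms of §2 and §4 and is not attempted here; the names are ours):

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def check_conditions_1_and_2(b_vecs, c_vecs):
    """Check: every b_j is nonzero; the b_j are mutually orthogonal and
    orthogonal to every c_j; every c_j is a quasi-unit vector (the sum of
    the absolute values of its coordinates equals 1)."""
    ok = all(any(x != 0 for x in b) for b in b_vecs)
    ok = ok and all(dot(b_vecs[i], b_vecs[j]) == 0
                    for i in range(len(b_vecs)) for j in range(i))
    ok = ok and all(dot(b, c) == 0 for b in b_vecs for c in c_vecs)
    ok = ok and all(sum(abs(Fraction(x)) for x in c) == 1 for c in c_vecs)
    return ok
```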
DEFINITION 3.2. Two canonical systems
$$(\vec b_j^{\,0},\vec x)=0 \ (j\in J_1),\quad (\vec c_j^{\,0},\vec x)\ge 0 \ (j\in J_2)
\qquad\text{and}\qquad
(\vec{\bar b}_j^{\,0},\vec x)=0 \ (j\in \bar J_1),\quad (\vec{\bar c}_j^{\,0},\vec x)\ge 0 \ (j\in \bar J_2) \tag{24}$$
are nonessentially distinct if the sets of vectors $\vec b_j^{\,0}$ $(j\in J_1)$ and $\vec{\bar b}_j^{\,0}$ $(j\in \bar J_1)$ corresponding to them generate the same subspace, and if the sets of vectors $\vec c_j^{\,0}$ $(j\in J_2)$ and $\vec{\bar c}_j^{\,0}$ $(j\in \bar J_2)$ are the same for the two systems. Two canonical systems (24) are equivalent if and only if they are nonessentially distinct.
Indeed, the proof of sufficiency is trivial. We now prove necessity. Suppose the systems (24) are equivalent. Then the systems
$$(\vec b_j^{\,0},\vec x)\ge 0,\quad (-\vec b_j^{\,0},\vec x)\ge 0 \ (j\in J_1),\quad (\vec c_j^{\,0},\vec x)\ge 0 \ (j\in J_2)$$
and
$$(\vec{\bar b}_j^{\,0},\vec x)\ge 0,\quad (-\vec{\bar b}_j^{\,0},\vec x)\ge 0 \ (j\in \bar J_1),\quad (\vec{\bar c}_j^{\,0},\vec x)\ge 0 \ (j\in \bar J_2)$$
are equivalent. Hence, in view of the Minkowski–Farkas theorem, their dual cones coincide, i.e., the cones $K_1$ and $K_2$ generated, respectively, by the vectors
$$\pm\vec b_j^{\,0} \ (j\in J_1),\quad \vec c_j^{\,0} \ (j\in J_2)
\qquad\text{and}\qquad
\pm\vec{\bar b}_j^{\,0} \ (j\in \bar J_1),\quad \vec{\bar c}_j^{\,0} \ (j\in \bar J_2)$$
coincide. But then their maximal linear subspaces $H(K_1)$ and $H(K_2)$ also coincide.

It follows from the definition of canonical systems and from Remark 3 following that definition that the subspace $H(K_1)$ is generated by the vectors $\vec b_j^{\,0}$ $(j\in J_1)$ and that the subspace $H(K_2)$ is generated by the vectors $\vec{\bar b}_j^{\,0}$ $(j\in \bar J_1)$. Indeed, suppose, for instance, that the subspace $H(K_1)$ is not generated by the vectors $\vec b_j^{\,0}$ $(j\in J_1)$. Then it would contain an element of the form $\vec b^{\,0}+\vec c^{\,0}$, where $\vec b^{\,0}$ is a linear combination (with coefficients from $P$) of the elements $\vec b_j^{\,0}$ $(j\in J_1)$ and where $\vec c^{\,0}$ is a positive (with coefficients which are positive elements of $P$) linear combination of some of the elements $\vec c_j^{\,0}$ $(j\in J_2)$. Since $\vec b^{\,0}\in H(K_1)$, it follows that $\vec c^{\,0}\in H(K_1)$, and then $-\vec c^{\,0}\in H(K_1)\subseteq K_1$. Thus the cone $K_1$ contains the negative $-\vec c_{j_0}^{\,0}$ of at least one of the elements $\vec c_j^{\,0}$ $(j\in J_2)$. In fact, if this were not so, then expressing $-\vec c^{\,0}$ as a non-negative linear combination of some of the vectors $\vec c_j^{\,0}$ and $\pm\vec b_j^{\,0}$, and adding that expression to the expression of $\vec c^{\,0}$ as a positive linear combination of some of the vectors $\vec c_j^{\,0}$, we would obtain a contradiction to conditions 2) and 3) of Definition 3.1. If $-\vec c_{j_0}^{\,0}\in K_1$, then the solution set of system (23) does not change if we adjoin the inequality $(-\vec c_{j_0}^{\,0},\vec x)\ge 0$ to the system, i.e., the solution set is unchanged by exchanging the inequality $(\vec c_{j_0}^{\,0},\vec x)\ge 0$ for an equation. But this is contrary to the property of system (23) noted in Remark 3 following the definition of a canonical system. Consequently the observation about the subspaces $H(K_1)$ and $H(K_2)$ is valid, and the sets of vectors $\vec b_j^{\,0}$ $(j\in J_1)$ and $\vec{\bar b}_j^{\,0}$ $(j\in \bar J_1)$ generate one and the same subspace of $P^n$.
We now show that the sets of vectors $\vec c_j^{\,0}$ $(j\in J_2)$ and $\vec{\bar c}_j^{\,0}$ $(j\in \bar J_2)$ coincide. Let $H^\perp(K_1)=H^\perp(K_2)$ be the subspace of $P^n$ which is orthogonal to $H(K_1)=H(K_2)$. Then the relation
$$C(K_1)=K_1\cap H^\perp(K_1)=K_2\cap H^\perp(K_2)=C(K_2)$$
is valid. Using the definition of a canonical system, it is easy to show that the cone $C(K_1)$ is generated by the elements $\vec c_j^{\,0}$ $(j\in J_2)$ and that the cone $C(K_2)$ is generated by the elements $\vec{\bar c}_j^{\,0}$ $(j\in \bar J_2)$. Otherwise $C(K_1)$, say, would contain an element of the form $\vec b^{\,0}+\vec c^{\,0}$ with $\vec b^{\,0}\ne\vec 0$; but since $\vec b^{\,0}+\vec c^{\,0}$ and $\vec c^{\,0}$ are orthogonal to $\vec b^{\,0}$, this would mean that $\vec b^{\,0}$ is orthogonal to itself, which is impossible for $\vec b^{\,0}\ne\vec 0$. Consequently, our remark about $C(K_1)$ and $C(K_2)$ is valid. But then, in view of the properties of the vectors $\vec c_j^{\,0}$ and $\vec{\bar c}_j^{\,0}$ noted in Definition 3.1, these sets of vectors constitute minimal systems of generating elements for these cones. It follows from the acuteness of the cones $C(K_1)$ and $C(K_2)$ that such a minimal system is unique up to positive multiples, and the vectors $\vec c_j^{\,0}$ and $\vec{\bar c}_j^{\,0}$ are quasi-unit vectors. Thus, since $C(K_1)$ and $C(K_2)$ coincide, the sets of vectors $\vec c_j^{\,0}$ $(j\in J_2)$ and $\vec{\bar c}_j^{\,0}$ $(j\in \bar J_2)$ coincide, and the assertion about the equivalence of canonical systems is established.
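The proof rests on comparing the subspaces generated by two orthogonal families; whether they coincide can be checked mechanically by mutual projection, as in the projection relations used in the text. A sketch in exact rational arithmetic (the names are ours):

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def in_span(v, ortho_basis):
    """v lies in the span of an orthogonal family iff v equals the sum
    of its projections onto the members of the family."""
    proj = [Fraction(0)] * len(v)
    for b in ortho_basis:
        coef = dot(v, b) / dot(b, b)
        proj = [p + coef * Fraction(x) for p, x in zip(proj, b)]
    return all(Fraction(a) == p for a, p in zip(v, proj))

def same_subspace(basis1, basis2):
    """Two orthogonal families generate the same subspace iff every
    vector of each family lies in the span of the other."""
    return (all(in_span(v, basis2) for v in basis1)
            and all(in_span(v, basis1) for v in basis2))
```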
In view of the proposition we just proved, in the case where the sets of vectors $\vec c_j^{\,0}$ and $\vec{\bar c}_j^{\,0}$ coincide, in order to check whether two canonical systems are equivalent we need only check whether the sets of vectors $\vec b_j^{\,0}$ and $\vec{\bar b}_j^{\,0}$ generate one and the same subspace of $P^n$. In other words, we only have to verify the validity of the relations
$$\vec{\bar b}_j^{\,0}=\sum_{i\in J_1}\frac{(\vec{\bar b}_j^{\,0},\vec b_i^{\,0})}{(\vec b_i^{\,0},\vec b_i^{\,0})}\,\vec b_i^{\,0} \quad (j\in \bar J_1)$$
and
$$\vec b_j^{\,0}=\sum_{i\in \bar J_1}\frac{(\vec b_j^{\,0},\vec{\bar b}_i^{\,0})}{(\vec{\bar b}_i^{\,0},\vec{\bar b}_i^{\,0})}\,\vec{\bar b}_i^{\,0} \quad (j\in J_1).$$
Clearly, the vectors $\vec b_j^{\,0}$ and the vectors $\vec{\bar b}_j^{\,0}$ generate one and the same subspace of $P^n$ if and only if all of these relations hold.

Consider, now, the system
$$(\vec a_j,\vec x)=a_{j1}x_1+\dots+a_{jn}x_n\ge 0 \quad (j=1,2,\dots,m) \tag{25}$$
of homogeneous linear inequalities on $P^n$. Let $K$ be its dual cone, let $H(K)$ be the maximal subspace of $K$, and let $C(K)$ be the intersection of the cone $K$ with the orthogonal complement $H^\perp(K)$ of that subspace. If the subspace $H(K)$ contains nonzero elements, then let $\vec b_1^{\,0},\dots,\vec b_r^{\,0}$ be an orthogonal basis for it. If $C(K)$ contains nonzero elements, then let $\vec c_1^{\,0},\dots,\vec c_s^{\,0}$ be a minimal system of quasi-unit generating elements for it. Clearly, under these conditions, the system
$$(\vec b_j^{\,0},\vec x)=0 \quad (j=1,2,\dots,r),\qquad (\vec c_j^{\,0},\vec x)\ge 0 \quad (j=1,2,\dots,s) \tag{26}$$
is a canonical system.

DEFINITION 3.3. The system (26) is called the canonical system for system (25). If $H(K)$ is a null subspace or if $C(K)$ is a null cone, then the canonical system for system (25) is either the system
$$(\vec c_j^{\,0},\vec x)\ge 0 \quad (j=1,2,\dots,s)$$
or the system
$$(\vec b_j^{\,0},\vec x)=0 \quad (j=1,2,\dots,r).$$
The case where both $H(K)$ and $C(K)$ are null, i.e., the case where the rank of system (25) is zero, is not considered here.

It follows from the definition we introduced here that the dual cone of system (25) coincides with the dual cone of system (26). The dual cone of a mixed system of equations and inequalities is defined to be the dual cone of the system of inequalities obtained by replacing each equation of the mixed system by the two inequalities equivalent to it. The coincidence of the dual cones of (25) and (26) means in particular that system (25) is equivalent to its canonical system. The above observations about the conditions for the equivalence of two canonical systems lead to the next proposition.

THEOREM 2.2 (N.V. Chernikova). Two systems of the form (25) are equivalent if and only if their canonical systems are nonessentially distinct.
We now describe an algorithm for deriving the canonical system for a system (25). We shall assume that the system contains no trivial inequalities, i.e., inequalities with a null linear form. In view of the above discussion, the algorithm starts by finding the maximal subspace $H(K)$ of the dual cone $K$ of system (25). It is not hard to show that this subspace coincides with the cone generated by those of the vectors
$$\vec a_j=(a_{j1},\dots,a_{jn}) \quad (j=1,2,\dots,m) \tag{27}$$
each of which is obtainable as a non-positive linear combination of the others. Indeed, if an element $\vec a_{j_0}$ satisfies this condition, then we have $-\vec a_{j_0}\in K$ and, hence, the cone $K$ contains the subspace $\{\vec a_{j_0}\}$ generated by the element $\vec a_{j_0}$. But then, in view of the maximality of the subspace $H(K)$, we have $\{\vec a_{j_0}\}\subseteq H(K)$ and, hence, $\vec a_{j_0}\in H(K)$. Suppose $\vec a_{j_1},\dots,\vec a_{j_k}$ are the elements of the set (27) that satisfy our condition. If $\vec h$ is an element of $H(K)$, then $-\vec h\in H(K)\subseteq K$, and hence there exist non-negative elements $p_1,\dots,p_m$ of $P$ such that
$$-\vec h=p_1\vec a_1+\dots+p_m\vec a_m. \tag{28}$$
On the other hand, $\vec h\in K$, and hence there exist non-negative elements $q_1,\dots,q_m$ such that
$$\vec h=q_1\vec a_1+\dots+q_m\vec a_m. \tag{29}$$
Consequently
$$(q_1+p_1)\vec a_1+\dots+(q_m+p_m)\vec a_m=\vec 0.$$
Hence, in particular, every element $\vec a_j$ with a positive coefficient in either (28) or (29) may be expressed as a non-positive linear combination of the remaining elements, and so belongs to the collection $\vec a_{j_1},\dots,\vec a_{j_k}$. Furthermore, we have shown that each element $\vec h$ of the subspace $H(K)$ may be expressed as a non-negative linear combination of the elements $\vec a_{j_1},\dots,\vec a_{j_k}$. Consequently the subspace $H(K)$ is the cone with the generating elements $\vec a_{j_1},\dots,\vec a_{j_k}$, which is what we wanted to prove.

From the definition of the elements $\vec a_{j_1},\dots,\vec a_{j_k}$ it follows that the search for them coincides with finding a non-negative solution of the equation
$$\vec a_1u_1+\dots+\vec a_mu_m=\vec 0,$$
or else, of the equation system
$$a_{1i}u_1+\dots+a_{mi}u_m=0 \quad (i=1,2,\dots,n) \tag{30}$$
with the largest number of positive coordinates. The vectors $\vec a_{j_1},\dots,\vec a_{j_k}$ are those vectors of the collection (27) for which the coordinates $u_j$ of such a solution are positive. To find the non-negative solution of system (30) with the largest number of positive coordinates, we may use the algorithm of §4. Indeed, if by using that algorithm we find that
the minimal system of generating elements of its cone of solutions is
$$\vec u^{\,i}=(u_1^i,\dots,u_m^i) \quad (i=1,2,\dots,\ell),$$
then
$$\vec u=\vec u^{\,1}+\dots+\vec u^{\,\ell}$$
is a non-negative solution of (30) with the greatest number of positive coordinates.

Since we have found the elements $\vec a_{j_1},\dots,\vec a_{j_k}$ already, in order to construct an orthogonal basis for $H(K)$ we only have to apply the orthogonalization process to these elements. Namely, let $\vec b_{j_1}=\vec a_{j_1}$. Then the element $\vec b_{j_2}$ orthogonal to it is given by
$$\vec b_{j_2}=\vec a_{j_2}+\lambda_1\vec b_{j_1},\qquad \lambda_1=-\frac{(\vec a_{j_2},\vec b_{j_1})}{(\vec b_{j_1},\vec b_{j_1})};$$
the element $\vec b_{j_3}$ orthogonal to both $\vec b_{j_1}$ and $\vec b_{j_2}$ is given by
$$\vec b_{j_3}=\vec a_{j_3}+\lambda_1\vec b_{j_1}+\lambda_2\vec b_{j_2},\qquad \lambda_1=-\frac{(\vec a_{j_3},\vec b_{j_1})}{(\vec b_{j_1},\vec b_{j_1})},\quad \lambda_2=-\frac{(\vec a_{j_3},\vec b_{j_2})}{(\vec b_{j_2},\vec b_{j_2})}$$
if $\vec b_{j_2}\ne\vec 0$, or by
$$\vec b_{j_3}=\vec a_{j_3}-\frac{(\vec a_{j_3},\vec b_{j_1})}{(\vec b_{j_1},\vec b_{j_1})}\,\vec b_{j_1}$$
if $\vec b_{j_2}=\vec 0$. Continuing this process we obtain a system $\vec b_{j_1},\dots,\vec b_{j_k}$ of orthogonal vectors. Eliminating the zero vectors from this system, we obtain the sought orthogonal basis $\vec b_1^{\,0},\dots,\vec b_r^{\,0}$ for the subspace $H(K)$.

Now, to construct the canonical system for system (25), we find the minimal system of generating elements of the cone $C(K)$. First, let us find a generating (not necessarily minimal) system of elements. We show that the vectors
$$\vec c_j=\vec a_j-\sum_{i=1}^r\frac{(\vec a_j,\vec b_i^{\,0})}{(\vec b_i^{\,0},\vec b_i^{\,0})}\,\vec b_i^{\,0} \quad (j=1,2,\dots,m)$$
may be chosen as such a system. Indeed, we note first of all that
$$(\vec c_j,\vec b_k^{\,0})=(\vec a_j,\vec b_k^{\,0})-\sum_{i=1}^r\frac{(\vec a_j,\vec b_i^{\,0})}{(\vec b_i^{\,0},\vec b_i^{\,0})}\,(\vec b_i^{\,0},\vec b_k^{\,0})=0$$
(since $\vec b_1^{\,0},\dots,\vec b_r^{\,0}$ are orthogonal). This implies that the vectors $\vec c_j$ $(j=1,2,\dots,m)$ are orthogonal to every one of the vectors $\vec b_1^{\,0},\dots,\vec b_r^{\,0}$. We now show that the vectors $\vec c_j$ $(j=1,2,\dots,m)$ generate the cone $C(K)$. Let $\vec c$ be an element of that cone. Since $C(K)\subseteq K$, there exist non-negative elements $p_1,\dots,p_m$ of $P$ such that $\vec c=p_1\vec a_1+\dots+p_m\vec a_m$. On the other hand, $\vec c$ is orthogonal to the subspace $H(K)$ and hence $(\vec c,\vec b_i^{\,0})=0$ $(i=1,2,\dots,r)$. Hence
$$\sum_{j=1}^m p_j(\vec a_j,\vec b_i^{\,0})=0 \quad (i=1,2,\dots,r).$$
Consequently
$$\sum_{j=1}^m p_j\,\frac{(\vec a_j,\vec b_i^{\,0})}{(\vec b_i^{\,0},\vec b_i^{\,0})}\,\vec b_i^{\,0}=\vec 0 \quad (i=1,2,\dots,r).$$
Adding all of these equations we get
$$\sum_{j=1}^m p_j\sum_{i=1}^r\frac{(\vec a_j,\vec b_i^{\,0})}{(\vec b_i^{\,0},\vec b_i^{\,0})}\,\vec b_i^{\,0}=\vec 0.$$
From the last equation we get
$$\vec c=\sum_{j=1}^m p_j\vec a_j=\sum_{j=1}^m p_j\Bigl(\vec a_j-\sum_{i=1}^r\frac{(\vec a_j,\vec b_i^{\,0})}{(\vec b_i^{\,0},\vec b_i^{\,0})}\,\vec b_i^{\,0}\Bigr)=\sum_{j=1}^m p_j\vec c_j,$$
which is what we wanted to prove.
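The two computations just described — the orthogonalization process yielding a basis for $H(K)$, and the residual vectors $\vec c_j$ generating $C(K)$ — can be sketched as follows (exact rational arithmetic; the names are ours):

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def orthogonalize(vectors):
    """The orthogonalization process described above: subtract from each
    vector its projections on the previously built ones; vectors that
    reduce to zero are dropped, leaving an orthogonal basis of the span."""
    basis = []
    for a in vectors:
        w = [Fraction(x) for x in a]
        for b in basis:
            coef = dot(w, b) / dot(b, b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        if any(x != 0 for x in w):
            basis.append(w)
    return basis

def residuals(a_vecs, ortho_basis):
    """The vectors c_j = a_j - sum_i ((a_j, b_i)/(b_i, b_i)) b_i, which
    are orthogonal to the span of the b_i."""
    out = []
    for a in a_vecs:
        w = [Fraction(x) for x in a]
        for b in ortho_basis:
            coef = dot(w, b) / dot(b, b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        out.append(w)
    return out
```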
Let us pick out the essentially distinct nonzero vectors in the collection of vectors $\vec c_j$ which we have found. Two elements of an acute cone are nonessentially distinct if one is a positive multiple (from $P$) of the other. Let
$$\vec c_{j_i}=(c_{j_i1},\dots,c_{j_in}) \quad (i=1,2,\dots,k) \tag{31}$$
be the collection of chosen elements. Choose an element of (31). If it cannot be expressed as a non-negative linear combination of the other elements, then it is included in the minimal system of generating elements of the cone $C(K)$. Since the cone $C(K)$ is acute, this system is unique up to positive multiples. To accomplish the desired reduction of system (31) we introduce the system
$$c_{j_i1}x_1+\dots+c_{j_in}x_n+c_{j_i,n+1}x_{n+1}\ge 0 \quad (i=1,2,\dots,k),\qquad x_i\ge 0 \quad (i=1,2,\dots,n+1)$$
of linear inequalities, where
$$c_{j_i,n+1}=-\sum_{\ell=1}^n c_{j_i\ell},$$
and apply to it the algorithm of §2, eliminating from it, according to §3, the dependent inequalities with indices $j_i$. If
$$c'_{j1}x_1+\dots+c'_{jn}x_n+c'_{j,n+1}x_{n+1}\ge 0 \quad (j=1,2,\dots,s),\qquad x_i\ge 0 \quad (i=1,2,\dots,n+1)$$
are the remaining inequalities, then the vectors
$$\vec c_j^{\,\prime}=(c'_{j1},\dots,c'_{jn}) \quad (j=1,2,\dots,s)$$
constitute a minimal system of generating elements for $C(K)$. Finally, normalizing these vectors according to the rule
$$\vec c_j^{\,0}=\vec c_j^{\,\prime}\,/\,\bigl(|c'_{j1}|+\dots+|c'_{jn}|\bigr),$$
we obtain a minimal system of quasi-unit generating elements of the cone $C(K)$. This ends the process of constructing a canonical system for the system (25).

From the description of the relation between system (25) and its canonical system given above, we derive one further result. Consider the vectors $\vec a_{j_1},\dots,\vec a_{j_k}$ obtained in the first stage of deriving the canonical system of system (25), i.e., those vectors of (27) which may be expressed as non-positive linear combinations of the others. These vectors are in one-to-one correspondence with the non-nodal inequalities of (25), i.e., with those inequalities that are turned into equations by all solutions of the system (see Definition 2.17). Indeed, it follows from the definition of the vectors $\vec a_{j_1},\dots,\vec a_{j_k}$ that all solutions of (25) satisfy each of the inequalities
$$-a_{j_i1}x_1-\dots-a_{j_in}x_n\ge 0 \quad (i=1,2,\dots,k).$$
But then they turn each of the inequalities
$$a_{j_i1}x_1+\dots+a_{j_in}x_n\ge 0 \quad (i=1,2,\dots,k)$$
into an equation. Conversely, if all solutions of (25) turn one of its inequalities, say
$$a_{j_01}x_1+\dots+a_{j_0n}x_n\ge 0,$$
into an equation, then the inequality
$$-a_{j_01}x_1-\dots-a_{j_0n}x_n\ge 0$$
is a consequence of the system. Thus, by the Minkowski–Farkas theorem, there exist non-negative elements $p_1,\dots,p_m$ of $P$ such that the relation
$$-a_{j_01}x_1-\dots-a_{j_0n}x_n=\sum_{j=1}^m p_j(a_{j1}x_1+\dots+a_{jn}x_n)$$
holds identically in $\vec x=(x_1,\dots,x_n)\in P^n$. Thus we have
$$(-p_{j_0}-1)\,\vec a_{j_0}=p_1\vec a_1+\dots+p_{j_0-1}\vec a_{j_0-1}+p_{j_0+1}\vec a_{j_0+1}+\dots+p_m\vec a_m,$$
which proves that $\vec a_{j_0}$ is a non-positive linear combination of the other vectors of system (25).

Thus, the first step of the process of transition from system (25) to its canonical system results in an algorithm for determining the non-nodal inequalities of system (25). If it is known that system (25) is stably consistent, i.e., that it has no non-nodal inequalities, then the first step of the transition process is not needed. Indeed, in that case $C(K)=K$ and, hence, the vectors of (27) are generating elements of the cone $C(K)$; the second stage of the process then becomes the last one, i.e., it reduces to eliminating the dependent inequalities of (25). Thus, we have proved the next proposition.

The canonical system for system (25) (containing no trivial inequalities) does not have any equations if and only if system (25) is stably consistent. From the above discussion it is also easy to see that: the canonical system for system (25) has no inequalities if and only if system (25) has no nodal inequalities.
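The first stage — determining the non-nodal inequalities — ultimately reads off the support of the solution of (30) with the most positive coordinates, obtained by summing a minimal system of generating elements of the cone of non-negative solutions; a sketch (the name is ours):

```python
from fractions import Fraction

def non_nodal_indices(u_generators):
    """Given a minimal system of generating elements of the cone of
    non-negative solutions of (30), their sum is a solution with the
    largest number of positive coordinates; the indices where that sum
    is positive mark the non-nodal inequalities of (25)."""
    total = [sum(Fraction(g[i]) for g in u_generators)
             for i in range(len(u_generators[0]))]
    return [i for i, t in enumerate(total) if t > 0]
```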
CHAPTER IV
SYSTEMS OF STRICT LINEAR INEQUALITIES. MIXED SYSTEMS.

This chapter is devoted to a fundamental question that comes up in the process of isolating, in a consistent system of linear inequalities on the space $L(P)$, those inequalities that are turned into equations by all solutions of the system (unstable inequalities). We distinguish two extreme possibilities: the case where the system contains no non-nodal inequalities, and the case where all of the inequalities in the system are non-nodal. In the first case the system is stably consistent or, equivalently, the system into which it turns if all of its inequalities are made strict is consistent. Several properties of stably consistent systems have been discussed in the preceding chapters. In this chapter we discuss some other properties of stably consistent systems, which are formulated in terms of properties of systems of strict inequalities (see §1).

In the second case the system is equivalent to the system of equations into which it turns if all of its inequalities are made equations. Here the question is: what conditions are satisfied by such systems? Questions of this type are discussed in §2. In §2 we also discuss a method of constructing, for a given equation system, an equivalent system of inequalities with the smallest number of inequalities. In general, a system of linear inequalities is equivalent to the system derived from it by making all of its non-nodal inequalities equations. In this connection we discuss a group of questions, in §2, related to mixed systems consisting of inequalities and equations.

Dividing the inequalities of a system into nodal and non-nodal sets is accomplished by way of tests of independence for the system's inequalities. Some such tests are introduced in §3.

In §4 we show that if a system contains more than one inequality and contains no inequalities with a null linear form, then it can be written as the union of two stably consistent subsystems with no inequalities in common. In this connection we investigate, in §4, conditions under which a consistent (strictly consistent) system can be expressed as the union of two consistent (strictly consistent) systems of linear inequalities.

The classification of the inequalities of a system into nodal and non-nodal inequalities was, apparently, first brought up in Chernikova's article [2]. In that article the nodality or non-nodality of the inequalities of a given system on $R^n$ was determined in terms of the system's matrix. The criterion obtained there was used to determine the dimension of the solution set of a given system. In particular, it was used to obtain conditions under which the dimension of that set is zero, i.e., conditions for the existence of a unique solution to the system. These questions are investigated in §5 for systems over $P^n$, where $P$ is any ordered field.

§1. SYSTEMS OF STRICT LINEAR INEQUALITIES AND STABLE CONSISTENCY OF SYSTEMS ASSOCIATED WITH THEM.
This section is devoted, essentially, to the study of systems of linear inequalities of the form

    f_j(x) − a_j < 0   (j = 1, 2, ..., m)   (1)

on the space L(P). To distinguish such inequalities from inequalities of the form f_j(x) − a_j ≤ 0, they are called strict inequalities. The problem of determining the consistency of such systems reduces to the problem of determining the stable consistency of the system

    f_j(x) − a_j ≤ 0   (j = 1, 2, ..., m)   (2)

associated with it, i.e. to the problem of determining whether the latter system has a solution x = x^0 such that

    f_j(x^0) − a_j < 0   (j = 1, 2, ..., m).
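The reduction of strict consistency of (1) to stable consistency of (2) can be probed numerically for systems over R^n. A minimal sketch (our own illustration with example data, not from the book), using scipy.optimize.linprog: stable consistency holds exactly when a common slack eps in all inequalities can be made positive.

```python
import numpy as np
from scipy.optimize import linprog

# Example system (our data): x1 < 1, x2 < 1, -x1 - x2 < 0 over R^2,
# written as rows f_j(x) <= a_j of A x <= a.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
a = np.array([1.0, 1.0, 0.0])

# Maximize a common slack eps subject to A x + eps <= a; the associated
# system (2) is stably consistent (and (1) consistent) iff max eps > 0.
m, n = A.shape
res = linprog(c=[0.0] * n + [-1.0],
              A_ub=np.hstack([A, np.ones((m, 1))]), b_ub=a,
              bounds=[(None, None)] * n + [(None, 1.0)],  # cap eps so the LP is bounded
              method="highs")
strictly_consistent = res.status == 0 and -res.fun > 1e-9
print(strictly_consistent)
```

Here the optimal slack is 2/3 (attained at x1 = x2 = 1/3), so the strict system is consistent.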
We note, further, that system (2) is stably consistent if for each inequality in it there exists a solution of the system that satisfies that inequality strictly. Indeed, if x^j (j = 1, 2, ..., m) are solutions of system (2) for which f_j(x^j) − a_j < 0, then

    x^0 = (1/m) (x^1 + ... + x^m)

is a stable solution of that system, i.e. a solution that satisfies all of the inequalities as strict inequalities.

THEOREM 4.1. The system (1) is consistent if and only if the equation

    Σ_{j=1}^{m} u_j f_j(x) = 0   (x ∈ L(P))   (3)

in the unknowns (u_1, ..., u_m) has no positive solution satisfying the relation

    a_1 u_1 + ... + a_m u_m ≤ 0.   (4)
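The alternative in theorem 4.1 can be cross-checked numerically on a small inconsistent system (our own example; the normalization sum(u) = 1 merely picks a representative among the positive solutions of (3)):

```python
import numpy as np
from scipy.optimize import linprog

# Inconsistent strict system (our data): x1 < 1 and -x1 < -1 (i.e. x1 > 1).
A = np.array([[1.0, 0.0], [-1.0, 0.0]])   # rows are f_1 = x1, f_2 = -x1
a = np.array([1.0, -1.0])

# Search for u >= 0 with sum(u) = 1, A^T u = 0 (a positive solution of (3)),
# minimizing a . u; a minimum <= 0 exhibits a solution satisfying (4).
res = linprog(c=a,
              A_eq=np.vstack([A.T, np.ones((1, len(a)))]),
              b_eq=np.array([0.0, 0.0, 1.0]),
              bounds=[(0.0, None)] * len(a),
              method="highs")
blocking = res.status == 0 and res.fun <= 1e-9
print(blocking)   # a blocking u exists, so theorem 4.1 declares (1) inconsistent
```

In this example u = (1/2, 1/2) satisfies (3) with a_1 u_1 + a_2 u_2 = 0, certifying the inconsistency of the strict system.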
Theorem 4.1 paraphrases the assertion of theorem 2.12 so that it can be used here; relations (3) and (4) correspond to relation (28) in the statement of that theorem. For systems (1) on R^n, theorem 4.1 was proved by Alexandrov [1], and for systems (1) on the space L(R) it was proved by Fan Ky [1].

COROLLARY 4.1. The system

    f_j(x) − a_j = a_{j1} x_1 + ... + a_{jn} x_n − a_j < 0   (j = 1, 2, ..., m)   (5)

on P^n containing no null inequalities is consistent if and only if the conjugate cone of the system

    f_j(x) − a_j = a_{j1} x_1 + ... + a_{jn} x_n − a_j ≤ 0   (j = 1, 2, ..., m)   (6)

associated with it is an acute cone.

PROOF: We note first that system (5) is consistent if and only if the homogeneous system

    f_j(x) − a_j t < 0   (j = 1, 2, ..., m),   −t < 0

in the unknowns x_1, ..., x_n, t is consistent.
… satisfies system (1), and the inequality f(x) − b ≤ 0 is a consequence of system (2). Since all solutions of system (1) satisfy the inequality f(x) − b < 0, the necessity part is proved.

Suppose now that all the solutions of a consistent system (1) satisfy the inequality f(x) − b < 0. Then they also satisfy the inequality f(x) − b ≤ 0. Assume that at least one of the solutions, say x^0, of system (2) does not satisfy this inequality. Let x̄ be a solution of system (1), and let t be a nonzero element of P[0, 1]. Then the element x(t) = x^0 + t(x̄ − x^0) satisfies system (1) and hence the inequality f(x(t)) − b < 0. On the other hand, f(x^0) − b > 0, and hence there exists a nonzero element t_0 ∈ P[0, 1] such that f(x(t_0)) − b > 0. The contradiction proves the sufficiency part of the lemma.
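For systems over R^n the consequence relation appearing here can be tested numerically (a sketch under our own conventions and with our own data, not the book's method): f(x) − b ≤ 0 holds on every solution of a system of type (2) exactly when the maximum of f over the solution polyhedron is at most b.

```python
import numpy as np
from scipy.optimize import linprog

# System of type (2) (our data): x1 + x2 <= 2, -x1 <= 0, -x2 <= 0.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
a = np.array([2.0, 0.0, 0.0])
f, b = np.array([1.0, 0.0]), 2.0     # test whether f(x) = x1 satisfies x1 - 2 <= 0

# Maximize f over the solution set; consequence holds iff the maximum <= b.
res = linprog(c=-f, A_ub=A, b_ub=a,
              bounds=[(None, None)] * 2, method="highs")
is_consequence = res.status == 0 and -res.fun <= b + 1e-9
print(is_consequence)
```

Here the maximum of x1 over the triangle is exactly 2, attained at the boundary solution (2, 0), so the weak inequality is a consequence while the strict one fails only at that boundary point.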
THEOREM 4.2. Consider two consistent and irreducible (i.e. containing no dependent inequalities) systems of type (1) with nonzero ranks. If they have the same solution set (i.e. are equivalent), then the inequalities included in them differ at most by positive multiples (i.e. they are non-essentially distinct).

PROOF: We prove theorem 4.2 first for two systems of the form

    f^1_j(x) − a^1_j = a^1_{j1} x_1 + ... + a^1_{jn} x_n − a^1_j < 0   (j = 1, 2, ..., m_1),
    f^2_j(x) − a^2_j = a^2_{j1} x_1 + ... + a^2_{jn} x_n − a^2_j < 0   (j = 1, 2, ..., m_2)   (9)

on P^n. Suppose these systems are consistent, equivalent and irreducible. Then, in view of lemma 4.1, the systems

    f^1_j(x) − a^1_j ≤ 0   (j = 1, 2, ..., m_1),
    f^2_j(x) − a^2_j ≤ 0   (j = 1, 2, ..., m_2)   (10)

associated with them are irreducible and equivalent. Furthermore, by corollary 4.1, the conjugate cones K_1 and K_2 of the latter are acute. In view of the equivalence of the systems in (10), the elements

    (0, ..., 0, 1), (a^1_{j1}, ..., a^1_{jn}, a^1_j)   (j = 1, 2, ..., m_1),
    (0, ..., 0, 1), (a^2_{j1}, ..., a^2_{jn}, a^2_j)   (j = 1, 2, ..., m_2)   (11)

constitute, respectively, two systems of generating elements for one and the same cone K = K_1 = K_2. The irreducibility of the systems in (10) implies, in particular, that each of the systems in (11) contains a unique element of the form (0, ..., 0, a), namely (0, ..., 0, 1), and that each system differs from the basis for the cone K included in it at most by this element. Since the basis of an acute cone is unique up to positive multiples of its elements, and in view of the noted uniqueness of the element (0, ..., 0, 1), it is not possible for one of the two systems in (11) to coincide with the basis for K included in it while the other does not. But the uniqueness of the basis for the cone K implies that the elements of the two systems distinct from (0, ..., 0, 1) differ only by positive scalar multiples. This proves theorem 4.2 for systems (9) on P^n.

Now let

    f'_j(x) − a'_j < 0   (j = 1, 2, ..., m'),
    f''_j(x) − a''_j < 0   (j = 1, 2, ..., m'')   (12)

be two consistent, irreducible and equivalent systems of type (1) on a space L(P). Let U be the maximal subspace of L(P) on which all the functions f'_j(x) (j = 1, 2, ..., m') are zero valued, and let V be the direct complement of U in L(P). Using theorem 2.4, it is easy to show that U is also the maximal subspace of L(P) on which the functions f''_j(x) (j = 1, 2, ..., m'') are zero valued. Let x^1, ..., x^r denote a basis of the space V. Then any element x ∈ L(P) can be written in the form x = u + t_1 x^1 + ... + t_r x^r, where u ∈ U and t_1, ..., t_r are elements of P. Hence the systems (12) may be written in the form

    f'_j(x) − a'_j = a'_{j1} x_1 + ... + a'_{jn} x_n − a'_j < 0   (j = 1, 2, ..., m')

and

    f''_j(x) − a''_j = a''_{j1} x_1 + ... + a''_{jn} x_n − a''_j < 0   (j = 1, 2, ..., m'').
These systems (being skeletal systems for (12); see definition 1.3) are obviously consistent, irreducible and equivalent. Thus it follows from the established case of theorem 4.2 that the inequalities included in the two systems are non-essentially distinct. This completes the proof.

In view of lemma 4.1, the above theorem is identical to the next proposition.

THEOREM 4.2*. If two stably consistent systems of type (2) of nonzero rank are equivalent, then the inequalities included in them are non-essentially distinct.

COROLLARY 4.2. Consider a consistent system (1) (a stably consistent system (2)) with nonzero rank which does not contain nonessentially distinct inequalities with non-null functions f_j(x). Then its equivalent irreducible subsystem is unique.

LEMMA 4.2. The inequality f_{j_0}(x) − a_{j_0} < 0 in a consistent system (1) is a dependent inequality in that system (i.e. may be removed without affecting the solution set of the system) if and only if, upon replacing it with the equation f_{j_0}(x) − a_{j_0} = 0, the system becomes inconsistent.

PROOF: In case the function f_{j_0}(x) is a null function, the lemma is trivial. We assume, thus, that f_{j_0}(x) is not a null function. The proof of necessity is obvious; we now prove sufficiency. Suppose the system

    f_j(x) − a_j < 0   (j = 1, 2, ..., m; j ≠ j_0),
    f_{j_0}(x) − a_{j_0} = 0   (13)

is inconsistent. If, then, the inequality f_{j_0}(x) − a_{j_0} < 0 were not a consequence of the system

    f_j(x) − a_j < 0   (j = 1, 2, ..., m; j ≠ j_0),   (14)

then at least one of its solutions x^0 would satisfy the inequality f_{j_0}(x^0) − a_{j_0} > 0. Let x'' be a solution of system (1), so that f_{j_0}(x'') − a_{j_0} < 0. All the elements of the segment

    x(t) = x'' + t(x^0 − x'')   (t ∈ P[0, 1])

obviously satisfy system (14). Hence, for

    t_0 = (a_{j_0} − f_{j_0}(x'')) / ((f_{j_0}(x^0) − a_{j_0}) + (a_{j_0} − f_{j_0}(x''))) ∈ P[0, 1],

the element x(t_0) of that segment satisfies the equation f_{j_0}(x(t_0)) − a_{j_0} = 0. But this is not
possible, in view of the inconsistency of system (13). The lemma is proved.

From lemmas 4.1 and 4.2 we get:

THEOREM 4.3. A consistent system (1) is irreducible if and only if it stays consistent when any one of its inequalities is replaced by an equation. A stably consistent system (2) is irreducible if and only if the system (1) associated with it is irreducible. An inequality in a stably consistent system (2) is a dependent inequality in that system if and only if none of the system's solutions turns it into an equation.

COROLLARY 4.3. Let a stably consistent system (2) with nonzero rank be irreducible. Then each of its inequalities is turned into an equation by some solution of the system that turns no other inequality of the system into an equation.

Consider a stably consistent system (2) with nonzero rank which has no nonessentially distinct inequalities with non-null functions. In view of corollary 4.2, such a system contains at least one inequality which cannot be removed without changing the solution set of the system (an independent inequality). Hence, by theorem 4.3, such a system has at least one boundary solution that turns one and only one of its inequalities into an equation. In view of lemma 1.8, a consistent system (2) is stably consistent if its subsystem consisting of those inequalities turned into equations by some boundary solution of the system is stably consistent. Thus a system (2) which has a boundary solution with the noted property is always stably consistent. Thus we have:

COROLLARY 4.4. Consider a consistent system (2) with nonzero rank whose inequalities with non-null functions f_j(x) are essentially distinct. Such a system is stably consistent if and only if it has a boundary solution that turns one and only one of its inequalities into an equation.

§2. MIXED SYSTEMS OF LINEAR INEQUALITIES.

Let

    f_j(x) − a_j ≤ 0   (j = 1, 2, ..., m)   (15)
be a consistent system of linear inequalities on L(P). Upon replacing one of its inequalities by a strict inequality f_j(x) − a_j < 0, it may become inconsistent. The extreme case where it stays consistent after each of its inequalities is replaced by a strict inequality was discussed in the previous section. In this section we investigate conditions under which replacing some of the inequalities of a system by strict inequalities affects the consistency of the system.

THEOREM 4.4. If system (15) is consistent, then the system

    f_j(x) − a_j < 0   (j = 1, 2, ..., m'; m' ≤ m),
    f_j(x) − a_j ≤ 0   (j = m' + 1, ..., m)   (16)

associated with it is consistent if and only if the equation

    Σ_{j=1}^{m} u_j f_j(x) = 0   (x ∈ L(P))   (17)

in the unknowns u_1, ..., u_m has no positive solution satisfying the conditions

    a_1 u_1 + ... + a_m u_m = 0,
    u_1 + ... + u_{m'} > 0.   (18)

PROOF: The necessity is obvious; sufficiency will be proven presently. Suppose equation (17) has no positive solution that satisfies the inequality u_1 + ... + u_{m'} > 0. Then the equation

    Σ_{j=1}^{m} u_j (f_j(x) − a_j) = 0   (x ∈ L(P))   (19)
does not have any positive solutions that satisfy that inequality. But system (15) is consistent by hypothesis. Thus, by lemma 2.14, it follows that each of the first m' inequalities of that system is nodal. Thus, for each inequality

    f_j(x) − a_j < 0   (j = 1, 2, ..., m')

there exists a solution x^j of system (15) such that f_j(x^j) − a_j < 0. But then

    x^0 = (1/m') (x^1 + ... + x^{m'})

is a solution of system (16). Consequently our system is consistent.

Suppose now that equation (17) has a positive solution that satisfies the inequality u_1 + ... + u_{m'} > 0, but no such solution that also satisfies the equation a_1 u_1 + ... + a_m u_m = 0. If system (16) is not consistent, then at least one of the first m' inequalities of system (15) is non-nodal. But then, in view of lemma 2.14, one of the positive solutions of (17) satisfies the inequality u_1 + ... + u_{m'} > 0 and the equation a_1 u_1 + ... + a_m u_m = 0. But this contradicts the hypothesis. This completes the proof of the theorem.

By theorem 2.13, it is possible to express the condition for the consistency of system (16) formulated in theorem 4.4 in terms of the non-existence of positive solutions of equation (17) satisfying the inequality a_1 u_1 + ... + a_m u_m ≤ 0. Thus we may write theorem 4.4 as follows:

THEOREM 4.4*. In order that system (16) be consistent, it is necessary and sufficient that equation (17) have no positive solution that satisfies the inequality a_1 u_1 + ... + a_m u_m ≤ 0 and the condition u_1 + ... + u_{m'} > 0.

In this form theorem 4.4 implies theorem 4.1. In conjunction with lemma 2.14, theorem 4.4 yields the following obvious fact:
Upon changing some of the inequalities of a consistent system (15) to strict inequalities, the system stays consistent if and only if none of the changed inequalities is a non-nodal inequality.

If, in going from system (15) to system (16), we change to strict inequalities those and only those inequalities that are stable, then the resulting system (16) is equivalent to the mixed system

    f_j(x) − a_j < 0   (j = 1, 2, ..., m'),
    f_j(x) − a_j = 0   (j = m' + 1, ..., m)

of equations and inequalities. Indeed, this follows from the fact that the set of solutions of system (15) is unaffected by changing some or all of its unstable inequalities to equations.

The property of system (16) noted here may be formulated in terms of system (15). To this end we introduce:

DEFINITION 4.1. Suppose the system

    f_{j_k}(x) − a_{j_k} ≤ 0   (k = 1, 2, ..., ℓ),
    f_j(x) − a_j = 0   (j = 1, 2, ..., m; j ≠ j_1, ..., j_ℓ)   (20)

is consistent and is equivalent to system (15), where

    f_{j_k}(x) − a_{j_k} ≤ 0   (k = 1, 2, ..., ℓ)

is a nodal subsystem of (15) with a nodal solution that satisfies all of system (15). Then system (20), or any system differing from it only in that some of its equations are multiplied by −1, is called a representation of system (15). System (20) itself will be called a proper representation of system (15) in what follows. System (20) does not contain equations (respectively, inequalities) if system (15) does not contain non-nodal (respectively, nodal) inequalities.

Now the property of system (15) that interests us may be formulated.

THEOREM 4.5. Each consistent system (15) has a proper representation. The proper representation is unique.

The uniqueness of the proper representation follows from the fact that the first part of system (20) coincides with the system of all nodal inequalities of system (15).

REMARK. Using corollary 2.20 we obtain the following fact. Consider a subsystem of (15) that contains all the non-nodal inequalities of the latter. Then the set of equations contained in a proper representation of the subsystem coincides with the set of equations contained in a proper representation of system (15).
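The split underlying a proper representation — nodal inequalities kept as inequalities, non-nodal ones turned into equations — can be computed for small systems over R^n. A hedged sketch with scipy.optimize.linprog, on data of our own choosing: an inequality is nodal exactly when its slack can be made positive while the rest of the system stays satisfied.

```python
import numpy as np
from scipy.optimize import linprog

# System (15) (our data): x1 <= 0, -x1 <= 0, x2 <= 1.  Rows 0 and 1 force
# x1 = 0 on every solution (non-nodal); row 2 can be made strict (nodal).
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
a = np.array([0.0, 0.0, 1.0])

def is_nodal(j):
    # maximize a slack eps in row j while the other rows stay satisfied
    m, n = A.shape
    col = np.zeros((m, 1)); col[j, 0] = 1.0
    res = linprog(c=[0.0] * n + [-1.0],
                  A_ub=np.hstack([A, col]), b_ub=a,
                  bounds=[(None, None)] * n + [(0.0, 1.0)],
                  method="highs")
    return res.status == 0 and -res.fun > 1e-9

print([is_nodal(j) for j in range(3)])
```

For this system the proper representation keeps x2 <= 1 as an inequality and replaces the first two rows by the single equation x1 = 0.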
In connection with definition 4.1, it is natural to investigate conditions under which the system S obtained by adjoining a stably consistent system

    f^1_j(x) − a^1_j ≤ 0   (j = 1, 2, ..., m_1)

of inequalities to a consistent system

    f^2_j(x) − a^2_j = 0   (j = 1, 2, ..., m_2)

of equations may be considered a (not necessarily proper) representation of a system of type (15). Such a system (15), clearly, does not always exist, and if one does exist then it is not unique. Clearly, the problem reduces to the question of the consistency of the system

    f^1_j(x) − a^1_j < 0   (j = 1, 2, ..., m_1),
    f^2_j(x) − a^2_j = 0   (j = 1, 2, ..., m_2)   (21)

and to the question of the conditions under which the equation system

    f^2_j(x) − a^2_j = 0   (j = 1, 2, ..., m_2)

may be considered a consequence of a system of linear inequalities. If each of the equations of (21) is replaced by two inequalities of opposite sense, then the answer to the first question follows from theorem 4.4*. We have:

THEOREM 4.6. System (21) is consistent if and only if the equation

    Σ_{j=1}^{m_1} u^1_j f^1_j(x) + Σ_{j=1}^{m_2} u^2_j f^2_j(x) = 0   (x ∈ L(P))

in the unknowns u^1_1, ..., u^1_{m_1}, u^2_1, ..., u^2_{m_2} has no solution with non-negative coordinates u^1_1, ..., u^1_{m_1} satisfying the relation

    Σ_{j=1}^{m_1} u^1_j a^1_j + Σ_{j=1}^{m_2} u^2_j a^2_j < 0,

and no solution with positive coordinates u^1_1, ..., u^1_{m_1} satisfying

    Σ_{j=1}^{m_1} u^1_j a^1_j + Σ_{j=1}^{m_2} u^2_j a^2_j = 0.

The solution to the second question follows easily from corollary 2.20. We have:

THEOREM 4.7. In order that the system

    F_j(x) − a_j = 0   (j = 1, 2, ..., m)

of equations be a representation of a system of inequalities on L(P), it is necessary and sufficient that the equation

    Σ_{j=1}^{m} u_j (F_j(x) − a_j) = 0   (x ∈ L(P))   (22)

have at least one solution (u_1, ..., u_m) with no zero coordinates u_j (a strictly nonzero solution). If (u^0_1, ..., u^0_m) is a strictly nonzero solution of that equation with u^0_{j_k} > 0 (k = 1, 2, ..., ℓ), then the equation system is a representation of the inequality system

    F_{j_k}(x) − a_{j_k} ≤ 0   (k = 1, 2, ..., ℓ),
    −F_j(x) + a_j ≤ 0   (j = 1, 2, ..., m; j ≠ j_1, ..., j_ℓ).

For the case of systems of the form

    f_j(x) − a_j = a_{j1} x_1 + ... + a_{jn} x_n − a_j = 0   (j = 1, 2, ..., m)
(L(P) = P^n), equation (22) is equivalent to the equation system

    a_{1i} u_1 + ... + a_{mi} u_m = 0   (i = 1, 2, ..., n),
    a_1 u_1 + ... + a_m u_m = 0.   (23)

The question of the existence of a strictly nonzero solution of the latter is easy to answer if we know a fundamental system of solutions of the system. System (23) has a strictly nonzero solution if for every j = 1, 2, ..., m there exists a fundamental solution with a nonzero j-th coordinate. Indeed, let

    u^s = (u_{s1}, ..., u_{sm})   (s = 1, 2, ..., r)

be a fundamental system of solutions of system (23) satisfying this condition. Using the solutions u^1 and u^2 we obtain, for a suitable nonzero q_1 ∈ P, a solution u^1 + q_1 u^2 which has zero coordinates only where both u^1 and u^2 have zero coordinates. Taking u^1 + q_1 u^2 and u^3, we obtain, for a suitable nonzero q_2 ∈ P, a solution u^1 + q_1 u^2 + q_2 u^3 which has zero coordinates only where u^1, u^2 and u^3 all have zero coordinates. Clearly, in a number of steps h < r we end up with a strictly nonzero solution of system (23).

REMARK. A consistent system of equations of the type considered in theorem 4.7 may be augmented in many ways so that the resulting equivalent system is a proper representation of a system of linear inequalities. The easiest way is to add the equations

    −(F_j(x) − a_j) = 0   (j = 1, 2, ..., m),

or even the single equation

    −Σ_{j=1}^{m} (F_j(x) − a_j) = 0.

In the first case the corresponding linear inequality system is

    F_j(x) − a_j ≤ 0   (j = 1, 2, ..., m),
    −(F_j(x) − a_j) ≤ 0   (j = 1, 2, ..., m).

In the second case it is

    F_j(x) − a_j ≤ 0   (j = 1, 2, ..., m),
    −Σ_{j=1}^{m} (F_j(x) − a_j) ≤ 0.
§3. SOME PROPERTIES OF INDEPENDENT NODAL INEQUALITIES.

Suppose all of the inequalities of system (15) are nodal or, equivalently, that the system is stably consistent. It was shown in §1 (see theorem 4.3) that an inequality in such a system is independent if and only if there exists a solution of the system turning that inequality, and only it, into an equation. We recall that an inequality in a given system is independent if, upon eliminating that inequality, the solution set of the system changes. Clearly, for a system with only one inequality, the independence of that inequality means that it is defined by a non-null function. For an arbitrary consistent system (15) (containing nodal as well as non-nodal inequalities) the property of nodal inequalities characterized above is expressed in the next theorem.

THEOREM 4.8. A nodal inequality in a consistent system (15) is an independent inequality if and only if there exists a solution of the system which turns that inequality into an equation and which does not turn any of the system's other nodal inequalities into equations.

PROOF. Let the first m' inequalities of system (15) be its nodal inequalities, and let the last of these be independent. If m' = 1 then, by theorem 4.5, it is obvious that the desired solution exists. Suppose, then, that m' > 1. Denote by S the system composed of the first m' − 1 strict inequalities

    f_j(x) − a_j < 0   (j = 1, 2, ..., m' − 1)

and the last m − m' inequalities of system (15).

First we show that if f_{m'}(x) − a_{m'} ≤ 0 is a consequence of the system S, then it must be a consequence of the system S' consisting of the first m' − 1 inequalities and the last m − m' inequalities of system (15). Indeed, let x^0 be a solution of S' that does not satisfy this inequality. If x̄ is a solution of system S and t is a nonzero element of the segment P[0, 1], then the element x'(t) = x^0 + t(x̄ − x^0) is, obviously, a solution of system S and hence satisfies the inequality f_{m'}(x) − a_{m'} ≤ 0. On the other hand, f_{m'}(x^0) − a_{m'} > 0, and hence there exists a nonzero element t_0 ∈ P[0, 1] such that f_{m'}(x'(t_0)) − a_{m'} > 0. The contradiction proves our assertion.
By hypothesis, the inequality f_{m'}(x) − a_{m'} ≤ 0 is an independent inequality in the system (15). Thus, it follows from the assertion just established that it cannot be a consequence of S. Hence there exists a solution x^1 of system S which satisfies the inequality f_{m'}(x^1) − a_{m'} > 0. On the other hand, it follows from the nodality of each of the first m' inequalities of system (15) that there exists a solution x^2 of system (15) such that f_{m'}(x^2) − a_{m'} < 0. Thus there exists an element t_1 ∈ P[0, 1] such that x̄ = (x^1 − x^2) t_1 + x^2 satisfies the equation f_{m'}(x̄) − a_{m'} = 0. But for every t ∈ P[0, 1] the element (x^1 − x^2) t + x^2 is a solution of system S'. Thus x̄ is a solution of system (15) which satisfies only one of the equations f_j(x) − a_j = 0 (j = 1, 2, ..., m'), namely the equation f_{m'}(x) − a_{m'} = 0. The necessity part is now established.

Now let the inequality f_{m'}(x) − a_{m'} ≤ 0 be one of the nodal inequalities of system (15) indicated above, and assume it to be dependent. Let x^0 be a solution of system (15) that turns this inequality into an equation and that turns no other inequality among the first
m' − 1 nodal inequalities of the system into an equation. By theorem 2.4, for x ∈ L(P) we have

    f_{m'}(x) − a_{m'} = Σ_{j=1, j≠m'}^{m} p_j (f_j(x) − a_j) − p_0

with non-negative elements p_0, p_1, ..., p_m. Substituting x = x^0, we get p_0 = 0 and p_j = 0 (j = 1, 2, ..., m' − 1). But then all solutions of system (15) turn the m'-th inequality into an equation, which contradicts the nodality of that inequality. Hence this inequality cannot be a dependent inequality in system (15). This completes the proof of the theorem.

Theorem 4.8 implies, in particular, that if a system (15) has at least one independent nodal inequality and s non-nodal inequalities, then the minimum number of its inequalities turned into equations by any of its boundary solutions is s + 1. A sufficient condition for system (15) to have at least one independent nodal inequality is given by:

COROLLARY 4.5. Let system (15) contain nodal and non-nodal inequalities. Suppose at least one of the nodal inequalities is not a consequence of the subsystem Q of non-nodal inequalities of the system. Suppose also that, using such nodal inequalities, one cannot form a difference

    p_{j''} (f_{j''}(x) − a_{j''}) − p_{j'} (f_{j'}(x) − a_{j'})   (1 ≤ j' ≤ m, 1 ≤ j'' ≤ m, j' ≠ j'')
which, for some positive p_{j'} and p_{j''}, is a non-negative linear combination of the inequalities of the subsystem Q. Then system (15) has an independent nodal inequality.

PROOF. Let H denote the subsystem of (15) consisting of those nodal inequalities which are not consequences of the subsystem Q, and let T be the union H + Q of H and Q. Let us remove from T, if possible, one of its dependent inequalities lying in the subsystem H. From the resulting subsystem let us eliminate another dependent inequality lying in H, if possible. Repeating this process, we get, in a finite number of steps, a subsystem T' = H' + Q (H' ⊆ H) in which every inequality of H' is independent (clearly H' is not empty). We show that each of these inequalities is independent in T.

Let j' be the index of one such inequality. This inequality is nodal in the subsystem T'. Thus, by theorem 4.8, there exists a solution x(j') of T' such that the j'-th inequality is the only inequality of H' that it turns into an equation. Clearly, x(j') is a solution of T. Let j'' be the index of an inequality of H that is not in H' and that is turned into an equation at x = x(j'). If there is no such inequality, then, by theorem 4.8, the j'-th inequality is independent in T. Suppose, then, that such a j'' exists. Using theorem 2.4, we get, for any x ∈ L(P),

    f_{j''}(x) − a_{j''} = Σ_{H'} p_j (f_j(x) − a_j) + Σ_{Q} p_j (f_j(x) − a_j) − p_0,

where all the p_j are non-negative. For x = x(j') we conclude that p_0 and all the p_j in the first sum, except possibly p_{j'}, are zeros. But then the relation has the form

    f_{j''}(x) − a_{j''} = p_{j'} (f_{j'}(x) − a_{j'}) + Σ_{Q} p_j (f_j(x) − a_j).

Since the j''-th inequality is not a consequence of Q, we have p_{j'} ≠ 0, and the relation contradicts the assumption of the corollary. Consequently each inequality of H' is independent in T. But none of the inequalities of T contained in H' is dependent in (15), since T is equivalent to (15) and the inequalities of the subsystem Q are non-nodal in any subsystem of (15) containing them (see corollary 2.20). This completes the proof of the corollary.

Consider a consistent system (15) of nonzero rank which does not include any non-nodal inequalities. Adjoining the inequality f_{m+1}(x) − a_{m+1} ≤ 0 with null function f_{m+1}(x) and null free term a_{m+1} to such a system, we obtain a system which satisfies the first two conditions of corollary 4.5. The third condition of the corollary, in this case, is equivalent to requiring that the augmented system contain no non-null inequalities which differ from each other only by a positive multiple. In this special case, then, corollary 4.5 may be written in the form:

Suppose a consistent system (15) with nonzero rank does not contain nonessentially distinct inequalities with non-null functions f_j(x). Then it has at least one independent inequality.

This proposition also follows from corollary 4.2.

§4. UNIONS OF CONSISTENT SYSTEMS OF LINEAR INEQUALITIES.

DEFINITION 4.2. The linear inequality system

    f_j(x) − a_j ≤ 0   (j = 1, 2, ..., m)   (24)

on L(P) is said to be a free system if the equation

    Σ_{j=1}^{m} u_j f_j(x) = 0   (x ∈ L(P))   (25)
in the unknowns u_1, ..., u_m has no positive solutions.

In view of theorem 4.1, a free system (24) is stably consistent for any values of the free terms a_j in P. The converse is also true: a system (24) that is stably consistent for all values of the a_j is a free system. Indeed, if, under this assumption, equation (25) has a positive solution (u^0_1, ..., u^0_m), then the equation

    u^0_1 a_1 + ... + u^0_m a_m = −1,

and hence the inequality

    u^0_1 a_1 + ... + u^0_m a_m < 0,

has a solution (a^0_1, ..., a^0_m). But then, in view of theorem 2.3, system (24) with free terms a_j = a^0_j (j = 1, 2, ..., m) is inconsistent, contrary to the hypothesis.

By theorem 4.1, system (24) is free if and only if the system

    f_j(x) ≤ 0   (j = 1, 2, ..., m)

associated with it is stably consistent.

DEFINITION 4.3. The transformation of the system

    a_{j1} x_1 + ... + a_{jn} x_n − a_j ≤ 0   (j = 1, 2, ..., m)

into the system

    a_{j1} x_1 + ... + (a_{jk} + a·a_{jℓ}) x_k + ... + a_{jℓ} u_ℓ + ... + a_{jn} x_n − a_j ≤ 0   (j = 1, 2, ..., m),

where a is an arbitrary element of P and u_ℓ = x_ℓ − a·x_k, is called an elementary
transformation.

THEOREM 4.9. Consider a system (24) with more than one inequality and with no inequalities whose functions are null functions. The system is then the union of two of its subsystems, each of which is free and has no inequalities in common with the other.

PROOF. Let

    f_j(x) − a_j = f_j(x^1) t_1 + ... + f_j(x^r) t_r − a_j ≤ 0   (j = 1, 2, ..., m)   (26)

be a skeletal system (see definition 1.3) for system (24). Here r is the rank of system (24), and x^1, ..., x^r is a basis of the subspace V of L(P) which is a direct complement of the subspace U ⊆ L(P) on which the functions f_j(x) (j = 1, 2, ..., m) take only zero values; every element of L(P) may be written

    x = x^1 t_1 + ... + x^r t_r + u   (u ∈ U),

where t_1, ..., t_r are elements of P. It is not hard to show that a system (26) with non-null linear forms f_j(x^1) t_1 + ... + f_j(x^r) t_r may, by a sequence of elementary transformations with appropriately chosen elements, be transformed into a system in which all of the coefficients of at least one unknown are nonzero. If all of these coefficients have the same sign, the new system is a free system. But then system (26), and hence system (24), is a free system. This follows from the fact that an elementary transformation takes a consistent system into a consistent system, and from the fact that a system is free if it is consistent for any values of the free terms. Hence, in this case, the requirement of the theorem is fulfilled. If not all of the coefficients have the same sign, then the new (transformed) system may be divided into two subsystems, in each of which these coefficients have the same sign. The subsystems of system (24) corresponding to them are free. This completes the proof of the theorem.

COROLLARY 4.6. Suppose the matrix of the system

    a_{j1} x_1 + ... + a_{jn} x_n = 0   (j = 1, 2, ..., m),

which has a positive solution, does not contain any zero columns. Then, for n > 1, the array of unknowns x_1, ..., x_n may be partitioned into two non-intersecting parts such that every positive solution of the system has nonzero components in the one part as well as in the other.

PROOF. Consider the inequality system

    a_{1i} x_1 + ... + a_{mi} x_m ≤ 0   (i = 1, 2, ..., n).

By theorem 4.9 it is the union of two free subsystems. Let N' and N'' be the index sets of the inequalities included in these two subsystems. Then every positive solution of the equation (25) corresponding to this system (such solutions exist by hypothesis) has
components in N' as well as in N''. Turning from this system of inequalities back to our equation system, this assertion becomes the assertion that we wished to prove.

DEFINITION 4.4. The systems

    f^1_j(x) − a^1_j ≤ 0   (j = 1, 2, ..., m_1),
    f^2_j(x) − a^2_j ≤ 0   (j = 1, 2, ..., m_2)   (27)

containing no null inequalities are said to be mutually combined if either, for every identically zero positive combination

    Σ_{k=1}^{s_1} p^1_k f^1_{j_k}(x) + Σ_{k=1}^{s_2} p^2_k f^2_{j_k}(x)   (s_1 ≥ 1, s_2 ≥ 1)   (28)

of systems of functions

    f^1_{j_k}(x)   (k = 1, 2, ..., s_1),   f^2_{j_k}(x)   (k = 1, 2, ..., s_2)

of rank s_1 + s_2 − 1, we have the relation

    Σ_{k=1}^{s_1} p^1_k a^1_{j_k} + Σ_{k=1}^{s_2} p^2_k a^2_{j_k} ≥ 0,   (29)

or it is not possible to form a combination (28) of the functions f^1_j(x) and f^2_j(x), with positive coefficients, which is identically zero. In the latter case, and in the case where every inequality (29) holds as a strict inequality, we say that the systems (27) are stably mutually combined.

THEOREM 4.10. Suppose each of the two systems (27) is consistent (stably consistent) and does not contain inequalities with null functions. If the systems (27) are mutually combined (stably mutually combined), then the system obtained by combining them is consistent (stably consistent). Conversely, if this system is consistent (stably consistent), then the two systems (27) are mutually combined (stably mutually combined).

The proof of the theorem follows directly from the next lemma.

LEMMA 4.3. The system f_j(x)
− a_j ≤ 0 (j = 1, 2, ..., r + 1) (on L(P)) of rank r is consistent (stably consistent) if and only if either there exists an identically zero linear positive combination

    Σ_{k=1}^{s} p_{j_k} f_{j_k}(x)   (s ≥ 1)   (30)

of a system of functions f_{j_1}(x), ..., f_{j_s}(x) of rank s − 1 such that

    p_{j_1} a_{j_1} + ... + p_{j_s} a_{j_s} ≥ 0   (respectively > 0),

or there does not exist an identically zero combination (30), with positive coefficients, of such a system of functions.

PROOF. First we note that, in the present case, a positive solution (u_1, ..., u_{r+1}) of the equation

    u_1 f_1(x) + ... + u_{r+1} f_{r+1}(x) = 0   (x ∈ L(P))
is unique up to a positive scalar multiple in P. Indeed, let (u'_1, ..., u'_{r+1}) and (u''_1, ..., u''_{r+1}) be two positive solutions of this equation, and assume u'_{r+1} ≠ 0. Then the function f_{r+1}(x) is a linear combination of the functions f_1(x), ..., f_r(x), and the latter are then linearly independent. Hence the expression

    f_{r+1}(x) = −(u'_1 / u'_{r+1}) f_1(x) − ... − (u'_r / u'_{r+1}) f_r(x)

is unique. If u''_{r+1} ≠ 0 then, similarly, we have

    f_{r+1}(x) = −(u''_1 / u''_{r+1}) f_1(x) − ... − (u''_r / u''_{r+1}) f_r(x),

and therefore

    u'_j / u'_{r+1} = u''_j / u''_{r+1}   (j = 1, 2, ..., r).

Consequently, for u''_{r+1} ≠ 0 the assertion is established. If u''_{r+1} = 0, then

    u''_1 f_1(x) + ... + u''_r f_r(x) = 0,

which contradicts the linear independence of the functions f_1(x), ..., f_r(x). Thus it is not possible to have u''_{r+1} = 0.

In view of the established assertion, the linear combination in the statement of the lemma is unique up to a positive multiple. Thus the assertion of the lemma concerning stable consistency follows directly from theorem 4.1, and the other assertion, concerning simple consistency, follows from theorem 2.3.

PROOF OF THEOREM 4.10. Let the system

    f^1_j(x) − a^1_j ≤ 0   (j = 1, 2, ..., m_1),
    f^2_j(x) − a^2_j ≤ 0   (j = 1, 2, ..., m_2)   (31)

obtained by combining two consistent systems (27) satisfying the assumptions of the theorem be inconsistent. Then, by theorem 2.6, some subsystem of (31) of rank r consisting of r + 1 inequalities is inconsistent (r being the rank of system (31)). Hence, in view of lemma 4.3, there exists an identically zero linear positive combination (30) such that

    Σ_{k=1}^{s} p_{j_k} a_{j_k} < 0

(where the a_{j_k} are free terms of system (31)). Since both subsystems (27) of (31) are consistent, the combination (30) must involve functions f^1_j as well as functions f^2_j, and hence it is of the form (28). Since the systems (27) are mutually combined, we then have

    Σ_{k=1}^{s} p_{j_k} a_{j_k} ≥ 0

(see relation (29)). But this contradicts the preceding inequality, which was derived from the
non-consistency of (31). Thus system (31) is consistent. The assertion relation the stably consistency of (31) to the stably consistency of its, stably mutually combined, component subsystems can be proved analogous by using corollary 2.3 of theorem 2.6. We now prove the converse. If system (31) is consistent then, by theorem 2.6, all of its subsystems of rank r with r +1 inequalities are consistent. But the, in view of lemma 4.3, for each of these subsystems, either every linear positive combination (28) which is identically zero satis…es (29) or there do not exist any such combinations. But that means that systems (27) are mutually combined. If system (31) is stably consistent then the stable mutual combinability of the systems (27) may be established analogously by using corollary 2.3. This complete the proof of the theorem. Arguing analogously we obtain further: THEOREM 4.11. The system fj (x)
aj
0 (j = 1; 2; :::; m)
on L(P) is stably consistent (simply consistent) if and only if: Either every identically zero linear positive combination s X pjk fjk (x) (s 1) (30) k=1
(s not …xed, s
1) of the system of functions fj1 (x); :::; fjs (x) of rank s
relation s s X X pjk fjk (x) > 0 ( pjk fjk (x) k=1
1 satis…es the
0)
k=1
or these does not exist such a combination. The assertion of the theorem about the stable consistency follows from lemma 4.3 by
corollary 2.3 to theorem 2.6. The assertion about consistency follows from theorem 2.6. x5. MARTIX CRITERIA FRO NODALITY OF LINEAR INEQUALITIES.
In this section we investigate conditions on the matrix of the consistent system

ℓ_j(x⃗) − a_j = a_{j1} x_1 + ... + a_{jn} x_n − a_j ≤ 0   (j = 1, 2, ..., m)   (32)

(on P^n) under which one or another of the system's inequalities is nodal or not nodal. To do that, we have to solve the problem of determining the dimension of the set of solutions of the system. In particular, we wish to determine conditions under which that set is zero-dimensional, i.e. conditions under which the solution of system (32) is unique. As we noted above, for systems (32) on …, this question was considered in the author's article [2].

DEFINITION 4.5. Consider a system (32) with nonzero rank. The h-th row of the matrix A of the system is called unstable relative to a node of the system (see definition 1.11) if there is a companion determinant to that row which is zero (see definition 1.11), containing one and only one row with an element a_h whose algebraic complement has the same sign as the node. The rows of A that are unstable relative to some node are called the unstable rows of the matrix A.

THEOREM 4.12. An inequality in a consistent system (32) with nonzero rank is unstable if and only if the row in the system's matrix A corresponding to that inequality is unstable.

To prove the theorem we first establish:

LEMMA 4.4. The unstable inequalities of the linear inequality system

f_j(x) − a_j ≤ 0   (j = 1, 2, ..., m)
on L(P) are contained in each of its extremal subsystems and are unstable in them.

PROOF OF THEOREM 4.12. Necessity. Let the h-th inequality of system (32) be unstable, let ∆ be an arbitrary node of system (32), and consider its ∆-subsystem, i.e. the subsystem of inequalities whose companion determinants with respect to ∆ are zero (see definition 1.14). This subsystem is extremal (see the assertion following definition 1.14). Without loss of generality we may assume that

∆ = | a_{11} ... a_{1r} ; ... ; a_{r1} ... a_{rr} | ≠ 0,

where r is the rank of system (32), and that

ℓ_j(x⃗) − a_j = a_{j1} x_1 + ... + a_{jn} x_n − a_j ≤ 0   (j = 1, 2, ..., m')   (33)

is the ∆-subsystem of (32).

By lemma 4.4 the h-th inequality of system (32) is an unstable inequality of system (33). Thus for t > 0 the system

ℓ_j(x⃗) − (a_j − α_j t) = a_{j1} x_1 + ... + a_{jn} x_n − (a_j − α_j t) ≤ 0   (j = 1, 2, ..., m'),   (34)

where

α_j = 0 for j ≠ h,   α_j = 1 for j = h,

is inconsistent. If x⃗ = x⃗_0 is a nodal solution of system (33) and if y⃗ = x⃗ − x⃗_0, then that system yields the inconsistent system

ℓ_j(y⃗) + α_j t ≤ 0   (j = 1, 2, ..., m').   (35)

By theorem 1.5 this implies that for each nonzero minor of order r of the system's matrix, and hence in particular for ∆, one of its companion determinants has a sign opposite to that of the minor. But then the algebraic complement of the element −α_h t in that companion determinant has the same sign as the chosen minor. By definition of the ∆-subsystem, all the companion determinants corresponding to the inequalities of system (33) are zero. Thus the h-th row of matrix A is unstable with respect to the node ∆. Since ∆ is arbitrary, this completes the proof of necessity.

Sufficiency. Let the h-th row of the matrix A be unstable, let ∆ be an arbitrary node of system (32), and let system (33) be the corresponding ∆-subsystem of system (32). Then the instability of the h-th row with respect to every node of system (32), together with the fact that all nonzero minors of order r of the matrix of system (33) are nodes of system (32), implies that system (35) is inconsistent. But then system (34) is inconsistent for t > 0. Thus, as is easy to see, the h-th inequality of system (32) is unstable. This completes the proof of sufficiency and of the theorem.

Below we indicate two possible ways of sharpening the above theorem. For the h-th inequality of system (32) to be unstable it is necessary and sufficient that the system

ℓ_j(x⃗) − (a_j − α_j t) = a_{j1} x_1 + ... + a_{jn} x_n − (a_j − α_j t) ≤ 0   (j = 1, 2, ..., m),

where α_j = 0 for j ≠ h and α_j = 1 for j = h, be inconsistent for every positive t in P. Assume that

| a_{j_1 i_1} ... a_{j_1 i_r} ; ... ; a_{j_r i_1} ... a_{j_r i_r} |

is a nonzero minor of the matrix A of system (32). Then, by theorem 1.1, this condition holds if and only if the system

a_{j i_1} x_{i_1} + ... + a_{j i_r} x_{i_r} − (a_j − α_j t) ≤ 0   (j = 1, 2, ..., m)

is inconsistent for every t > 0. But this means that the h-th inequality of the system

a_{j i_1} x_{i_1} + ... + a_{j i_r} x_{i_r} − a_j ≤ 0   (j = 1, 2, ..., m)

is unstable. This establishes the following proposition: Consider a system (32) with nonzero rank. An inequality in such a system is unstable if and only if the inequality it defines in the system's principal section is unstable (see definition 1.12). Hence a row in the matrix A of system (32) is unstable if and only if it is unstable with respect to all nodes of system (32) defined by some r linearly independent columns of the matrix A (r > 0 is the rank of system (32)).
The above assertion represents the first possibility of sharpening theorem 4.12. The second possibility stems from lemma 4.4. Indeed, since the extremal subsystems of system (32) coincide with its ∆-subsystems, it follows from lemma 4.4 that all the unstable inequalities of (32) are included in all of its ∆-subsystems and are unstable with respect to them. Hence a row in the matrix A of system (32) is unstable if and only if it is unstable with respect to all the nodes of system (32) defined by the matrix of any of its ∆-subsystems.
DEFINITION 4.6. An affine subspace of the space P^n is the set of solutions of an arbitrary consistent finite system of equations in n unknowns with coefficients in P. If r is the rank of such a system then the number (n − r) is the dimension of the (affine) subspace of solutions of the system. The dimension of the polyhedron M of solutions of a consistent inequality system (32) (on P^n) is the dimension of the minimal affine subspace of P^n containing M (i.e. the intersection of all affine subspaces containing it).

Two equivalent (i.e. having one and the same set of solutions) systems of linear equations in n unknowns have the same rank. Thus the dimension of an affine subspace of P^n does not depend on the choice of the finite linear system mentioned in the definition. It follows from the uniqueness of the minimal affine subspace containing the polyhedron M that the dimension of that subspace is well defined. In particular, if no affine subspace other than P^n contains M, then the dimension of the minimal subspace is n.

LEMMA 4.5. Suppose the polyhedron M of solutions of system (32) is not n-dimensional and let

ℓ_{j_k}(x⃗) − a_{j_k} ≤ 0   (k = 1, 2, ..., s)

be the subsystem consisting of all the unstable inequalities of (32). Then the minimal affine subspace M̄ of P^n containing M coincides with the set M' of solutions of the equation system

ℓ_{j_k}(x⃗) − a_{j_k} = 0   (k = 1, 2, ..., s).
PROOF. For s = m the assertion is obvious, since M' coincides with M in that case. So it remains to prove the assertion for s < m. Without loss of generality we may assume that j_k = k (k = 1, 2, ..., s). Since M̄ ⊆ M', we only have to show that M' ⊆ M̄. Suppose then that x⃗' ∈ M' is not an element of M̄. Then there exists an element x⃗_0 in M such that among the elements of the line

x⃗'(t) = x⃗_0 + t(x⃗' − x⃗_0)   (t ∈ P)

there is at least one element x⃗'(t_0) of M distinct from x⃗_0. The last m − s inequalities of (32) are stable, hence there exist elements x⃗_j (j = s+1, ..., m) of M such that

ℓ_j(x⃗_j) − a_j < 0   (j = s+1, ..., m).

But then the element

x⃗'' = (1/(m − s)) Σ_{j=s+1}^{m} x⃗_j

obviously satisfies system (32) (i.e. x⃗'' ∈ M). In addition we have

ℓ_j(x⃗'') − a_j < 0   (j = s+1, ..., m).

It is easy to show that the element x⃗'' + t(x⃗' − x⃗'') satisfies each of the equations ℓ_j(x⃗) − a_j = 0 (j = 1, 2, ..., s). But then, for suitable t, it satisfies system (32) and hence lies in M. Since, for x⃗' ∉ M̄, the element x⃗'' + t(x⃗' − x⃗'') is distinct from x⃗'', it follows that x⃗'' satisfies all of the assumptions imposed on x⃗_0 and may be chosen as x⃗_0 (which we shall do). By definition of M̄, the element defined by

x⃗'(t) = x⃗_0 + t(x⃗'(t_0) − x⃗_0)   (t ∈ P)

should belong to M̄ (since x⃗_0 and x⃗'(t_0) belong to M̄). On the other hand, for t = 1/t_0 (t_0 ≠ 0) this formula defines the element x⃗' ∉ M̄. This is a contradiction. Consequently M' ⊆ M̄, which is what we wanted to prove.
By lemma 4.5 the following proposition is true: if the dimension of the polyhedron M of solutions of system (32) is distinct from the number n of its unknowns, then the dimension of M equals the dimension of the affine subspace of P^n defined by the subsystem formed by the unstable inequalities of system (32). By theorem 4.12 this implies:

THEOREM 4.13. The polyhedron M of a consistent system (32) with nonzero rank has dimension (n − k) (0 ≤ k ≤ n) if and only if the maximum number of linearly independent unstable rows of the system's matrix is k.

COROLLARY 4.7. A necessary and sufficient condition for the uniqueness of the solution of a consistent system of linear inequalities with nonzero rank is that its matrix have n linearly independent unstable rows.

COROLLARY 4.8. A system of homogeneous linear inequalities on P^n (with nonzero rank) has a nonzero solution if and only if the number of linearly independent unstable rows in its matrix is less than the number of its unknowns.
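As a small illustration of theorem 4.13 and corollary 4.7 (an example added here; it does not appear in the original), consider the system x_1 ≤ 0, −x_1 ≤ 0, x_2 ≤ 1 on R^2:

```latex
% Example (not from the original text).
% System: x_1 \le 0,\; -x_1 \le 0,\; x_2 \le 1 on R^2.
% Tightening x_1 \le 0 to x_1 \le -t (t>0) contradicts -x_1 \le 0, and
% symmetrically for -x_1 \le 0, so the first two inequalities are unstable;
% x_2 \le 1 - t remains consistent for every t>0, so the third is stable.
M = \{(0, x_2) : x_2 \le 1\}, \qquad
\operatorname{rank}\{(1,0),\,(-1,0)\} = k = 1, \qquad
\dim M = n - k = 2 - 1 = 1.
% Adjoining x_2 \le 0 and -x_2 \le 0 yields n = 2 linearly independent
% unstable rows, and the solution (0,0) is then unique (corollary 4.7).
```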
CHAPTER V

CONVOLUTIONS OF SYSTEMS OF LINEAR INEQUALITIES. ELIMINATION OF UNKNOWNS.
In this chapter we discuss problems that are more or less connected with the elimination of unknowns from a system of linear inequalities. Already Fourier [1] recognized the elimination of unknowns as a natural way of solving systems of linear inequalities on R^n. He arrived at a simple algorithm which reduces to a rule for combining linear inequalities with non-negative multipliers in such a way that the resulting system contains fewer unknowns than the original system and its solutions are solutions of the original system. In the absence of appropriate restrictions on the type of combinations, such an algorithm requires an enormous number of elementary operations, and this number grows at a great rate with the number of unknowns and the number of inequalities.

Algorithms of this type aimed at eliminating unknowns were developed by more recent authors (see, for instance, Dines [1], Kuhn [1] and Chernikov [6]). In Dines [1] and Kuhn [1] the elimination of unknowns was used as a method of investigating the solvability and consistency of a system of linear inequalities (in the second paper, consistency and solvability were defined as distinct concepts). The analysis was based on definitions that are analogous to those used for linear inequalities.

Some attempts at restricting the types of non-negative combinations used to eliminate unknowns from a system of linear inequalities were presented by the author in [10] and [16]. For the most part, these articles were devoted to the search for restrictions on the rules of combining inequalities with the objective of minimizing the number of combinations.
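Fourier's elimination rule just described can be sketched in modern terms as follows (an illustration added here, not part of Chernikov's text: the encoding of an inequality a_1x_1 + ... + a_nx_n ≤ b as the row (a_1, ..., a_n, b) and the function names are choices made for this sketch, and Python's exact rationals stand in for an ordered field):

```python
from fractions import Fraction

def eliminate(ineqs, k):
    """One Fourier step: eliminate unknown k from rows (a_1..a_n, b) meaning a.x <= b."""
    zero, pos, neg = [], [], []
    for row in ineqs:
        (zero if row[k] == 0 else pos if row[k] > 0 else neg).append(row)
    out = list(zero)
    # each pair with opposite signs at position k yields one combination free of x_k
    for p in pos:
        for q in neg:
            lam, mu = -q[k], p[k]          # both multipliers are positive
            out.append([lam * pi + mu * qi for pi, qi in zip(p, q)])
    return out

def consistent(ineqs, n):
    """Eliminate all n unknowns; the system is consistent iff no row 0 <= b with b < 0 remains."""
    rows = [[Fraction(c) for c in r] for r in ineqs]
    for k in range(n):
        rows = eliminate(rows, k)
    return all(r[-1] >= 0 for r in rows)

# x1 <= 1 together with -x1 <= -2 (i.e. x1 >= 2): inconsistent
print(consistent([[1, 1], [-1, -2]], 1))                    # False
# x1 + x2 <= 4, -x1 <= 0, -x2 <= 0: consistent
print(consistent([[1, 1, 4], [-1, 0, 0], [0, -1, 0]], 2))   # True
```

Combining every inequality in which x_k enters positively with every one in which it enters negatively is exactly the step whose unrestricted use causes the rapid growth noted above; the convolutions of this chapter restrict which combinations need to be kept.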
The problem of combining the inequalities of a system in order to reduce the number of its unknowns was studied, in these papers, as a special case of a more general problem: the problem of combining the inequalities of a system so as to obtain inequality combinations whose set of solutions, in a given subspace, coincides with the projection of the set of solutions of the original system into that subspace. Projections are taken here along a direct complement of the subspace. Here we formulate this more general problem, in connection with eliminating unknowns, in terms of linear inequality systems on an arbitrary linear space L(P), where P is an arbitrary ordered field. In the author's articles [10] and [16] the problem is solved for the case of L(R).

Consider a subspace V of L(R) and one of its direct complements U in L(R), and form those non-negative linear combinations of the inequalities of a given system whose defining functions take only the value zero on U, chosen so that none of them is obtainable as a non-negative linear combination of the others. Each such combination is then fundamental with respect to the inequalities that define it. In the author's article [16] it was shown that such a system of combination-inequalities (the
fundamental U-convolution of the system) is a solution to the problem formulated above. On the basis of the above rule an optimal algorithm (in the sense of minimizing the number of combinations required to eliminate unknowns) for the sequential elimination of unknowns was introduced in the author's article [1]. This was done for systems on … where the free term is a free parameter. For a system with a given free term, the optimal algorithm is not always valid. These questions are discussed in §1 and §2 of this chapter. In §1 and §2, as well as in all of this chapter (except for §5), the problem is formulated for systems over L(P), where P is an ordered field, and in particular over P^n.
In §2 we show that the fundamental convolution of any fundamental convolution of a system of inequalities on L(P) contains a fundamental convolution of the latter, and we describe a procedure for obtaining such a convolution. This procedure is the basis of the optimal algorithm for sequentially eliminating unknowns from a system of linear inequalities on P^n. In §2 the algorithm is called the algorithm of reduced fundamental convolutions.
In §3 we show that the algorithm of reduced fundamental convolutions may be viewed as an algorithm for obtaining the general formula for the non-negative solutions of an arbitrary system of linear equations on the space ….

In §4 the general algorithm for a fundamental convolution is considered in connection with systems of inequalities of a special type: systems that contain inequalities defined by functions that differ only in their signs. Examples of such systems are mixed systems of equations and inequalities, or even simple systems of equations. For such systems the fundamental convolution is transformed into a more compact system equivalent to it. If each equation of a given equation system is replaced by two inequalities with left hand sides of opposite signs, then the transformation noted above of the convolution for the resulting system will consist of pairs of inequalities with left hand sides of opposite signs; hence it may be regarded as a fundamental convolution of the system of linear equations under consideration. Clearly the fundamental convolution for an equation system on L(P) (where P is an arbitrary ordered field) satisfies the "projection property" of fundamental convolutions for inequality systems on L(P): namely, if L(P) = U + V is some direct decomposition of the space L(P), then the solution set, in the subspace V, of a fundamental U-convolution of a given equation system coincides with the U-projection on V of the solution set of that system. The order on P plays no role in determining whether the fundamental convolution of an equation system on L(P) satisfies the projection property. Thus the following question arises: is it possible to apply the definition to a system of equations defined on a linear space
L(P) where P is not necessarily ordered? This question is considered in §5 of the present chapter (the author's article [19] was devoted to it).
In §5 we discuss systems of linear equations on a linear space L(P) over an arbitrary (not necessarily ordered) field P and the fundamental convolutions of such systems. We show that the fundamental convolution of an arbitrary convolution of a linear equation system on P^n is defined without assuming that P is ordered; this assertion leads to a series of algorithms for the elimination of unknowns. One of these algorithms is a generalization of Gauss's elimination algorithm.

In §6 we consider an algorithm for sequential convolution and elimination of unknowns for systems of linear inequalities which contain strict inequalities. The results are used to derive a well-known theorem of Dines [1] on the consistency of a system of strict linear inequalities.
mining conditions under which two consistent systems of linear inequalities on L(P) (P is an ordered …eld) have solution sets that agree on a subspace V. I.e. conditions under which the elements of the two sets are the same up to inclusion in a subspace V. x1: CONES OF COMPOSITIOS OF SYSTEMS OF LINEAR INEQUALITIES AN D
THESE CONVOLUTIONS. DEFINITION 5.1. Let f1 (x); :::; fm (x)
(1)
be a system of linear (i.e. additive and homogeneous) functions de…ned on a linear space L(P) and let U be subspace of L(P). The cone C(U) of non-negative solutions (u1 ; :::; um ) of the equation u1 f1 (x) + ::: + um fm (x) = 0 (x 2 U )
is said to the cone of U-composition of the system (1) of functions. The set of indices of
nonzero coordinates of a given element of C(U) is called the index of that element. DEFINITION 5.2. A U-composition of a system (1) of functions is a positive linear combination, of an arbitrary (non-empty) subsystem of that system, which is identically zero on U. A linear combination is said to be a positive linear combination if all of its coe¢ cients are non-negative and at least one of them is positive. A U-decomposition u0j1 fj1 (x) + ::: + u0js+1 fjs+1 (x) with a positive coe¢ cient u0jk (k = 1; 2; :::; s + 1) is called a fundamental U-composition if s of the s + 1 functions involved in it are linearly independent on U. (Clearly, in this case, any s of the s + 1 functions involved in it are linearly independent on U.) In this case an 209
element (u_1, ..., u_m) of the cone C(U) with u_{j_k} = u'_{j_k} (k = 1, 2, ..., s + 1), and with the other u_j equal to zero, is called a fundamental element of that cone.

It follows from lemma 1.2 that if a cone C(U) contains nonzero elements then it contains fundamental elements. It is also easy to show that two fundamental elements of C(U) having the same index differ from one another by a positive multiple (i.e. they are non-essentially distinct). Two fundamental elements are essentially distinct only when they have different indices. We note here that the index of a fundamental element cannot be a proper subset of the index of another fundamental element (see the parenthetical remark after the definition of a fundamental U-composition). It follows from this discussion that none of the elements of a system of essentially distinct fundamental elements of a cone C(U) can be expressed as a positive linear combination of the other elements (i.e. the system is irreducible).

DEFINITION 5.3. Consider the linear inequality system

f_j(x) + t_j ≤ 0   (j = 1, 2, ..., m),   (2)

where f_j(x) (j = 1, 2, ..., m) are linear functions defined on L(P) and the t_j are parameters taking values in P. The cone of U-composition for such a system is the cone C(U) of U-composition for the system f_j(x) (j = 1, 2, ..., m) of functions that defines system (2). In particular, if the values of the parameters t_1, ..., t_m are, respectively, the elements −a_1, ..., −a_m of P, then that cone is the cone of U-composition for the system of linear inequalities

f_j(x) − a_j ≤ 0   (j = 1, 2, ..., m)   (2')
on L(P).

DEFINITION 5.4. If

u^i = {u_{i1}, ..., u_{im}}   (i = 1, 2, ..., h)

is a maximal system of essentially distinct fundamental elements of the cone C(U) of U-composition of system (2), then the system

Σ_{j=1}^{m} u_{ij} f_j(x) + Σ_{j=1}^{m} u_{ij} t_j ≤ 0   (i = 1, 2, ..., h)   (3)

on L(P) is said to be a fundamental U-convolution of system (2). In particular, if t_j = −a_j it is called a fundamental U-convolution of system (2'). If the cone C(U) is null then the fundamental U-convolution of system (2) is said to be empty.

In view of the properties of the fundamental elements of the cone C(U) noted above, the maximal system of essentially distinct fundamental elements of C(U) is unique up to positive scalar multiples of its elements. But then the fundamental U-convolution of system (2) is, for fixed U, unique in that sense. Thus, in what follows, we shall not
distinguish between fundamental convolutions of that system corresponding to the same U, and we will simply speak of the fundamental U-convolution of the system.

Consider a system (2), in which the parameters t_j are independent of one another and take all values in P (i.e. are free parameters), as a system of linear inequalities on L(P) + P^m (where P^m is the set of vectors (t_1, ..., t_m) with components in P). Since every system of essentially distinct fundamental elements of the cone C(U) is irreducible, the fundamental U-convolution of such a system is irreducible, i.e. it contains no dependent inequalities. In other cases it may be reducible.

THEOREM 5.1. The maximal system of essentially distinct fundamental elements of the non-null cone C(U) of U-composition of the system (1) of functions, and only it, is a basis for that cone.

PROOF. Consider system (2) with functions (1) and free parameters t_j, and let system (3) be its fundamental convolution. We shall show that if V is a direct complement of the subspace U in L(P) and if x^0 = u^0 + v^0 (u^0 ∈ U, v^0 ∈ V) is a solution of system (3) for some values t_j = t_j^0 (j = 1, 2, ..., m), then the system (2) with t_j = t_j^0 has a solution u' + v^0 (u' ∈ U). Indeed, the system (3) with t_j = t̄_j = t_j^0 + f_j(v^0), as a system of linear inequalities on U, is consistent (having the solution u^0 ∈ U). But then, by theorem 4.11, system (2) with t_j = t̄_j is consistent on U. If u' ∈ U is one of its solutions, then system (2) with t_j = t_j^0 has u' + v^0 as a solution, as we wanted to show.

By the assertion just proved, we have the inequalities

f_j(u' + v^0) + t_j^0 ≤ 0   (j = 1, 2, ..., m).

If Σ_{j=1}^{m} p_j f_j(x) is any U-composition of the system f_j(x) (j = 1, 2, ..., m) of functions, then we get

Σ_{j=1}^{m} p_j f_j(x^0) + Σ_{j=1}^{m} p_j t_j^0 ≤ 0   (x^0 = u' + v^0).

Since x^0 and the t_j^0 are arbitrary, this implies that the inequality

Σ_{j=1}^{m} p_j f_j(x) + Σ_{j=1}^{m} p_j t_j ≤ 0

is a consequence of system (3) on the space L(P) + P^m (where P^m is the set of vectors (t_1, ..., t_m)). Using theorem 2.4 we conclude that the form Σ_{j=1}^{m} p_j t_j is a linear combination, with non-negative coefficients, of those linear forms in the parameters t_j which appear in
system (3). This means that any element (p_1, ..., p_m) of the cone C(U) is a linear combination, with non-negative coefficients, of the elements of the maximal system of essentially distinct fundamental elements of C(U) used to form system (3), i.e. of the elements of any maximal system of this type. Thus each maximal system of essentially distinct fundamental elements of the cone C(U) generates it. As we already noted, each of these systems is irreducible. Hence every one of them is a basis for that cone. Since the cone C(U) is acute, its basis is unique up to positive multiples of its elements. Thus it has no other basis and the theorem is proved.

The first part of this proof may be used to prove the first part of the next theorem.

THEOREM 5.2. Consider a system (2) with a non-empty fundamental U-convolution with respect to some subspace U of L(P). Such a system has a solution in L(P) for those values of t_j
for which its fundamental U-convolution has a solution on L(P). System (2) with an empty U-convolution has a solution in U for all values of the parameters involved in it.

THEOREM 5.2'. Every system of nonzero generating elements of the cone C(U) contains the basis of that cone, which, by theorem 5.1, coincides with the maximal system of essentially distinct fundamental elements of C(U) contained in it.

If a system of essentially distinct generating elements of the cone C(U) is a basis of that cone, then it coincides with the maximal system of essentially distinct fundamental elements of C(U) contained in it; hence it satisfies the condition of our proposition. Conversely, if a system of nonzero generating elements of a cone C(U) satisfies this condition, then it includes no element which could be expressed as a linear combination with positive coefficients of elements contained in a basis of the cone; hence the system coincides with the basis. This completes the proof.

DEFINITION 5.5. If

c^i = {c_{i1}, ..., c_{im}}   (i = 1, 2, ..., k)

is a finite system of nonzero generating elements of the cone C(U) of U-composition of system (2), then the system

Σ_{j=1}^{m} c_{ij} f_j(x) + Σ_{j=1}^{m} c_{ij} t_j ≤ 0   (i = 1, 2, ..., k)   (3)

is said to be a U-convolution of system (2) (in case t_j = −a_j (j = 1, 2, ..., m) it is called a U-convolution of system (2')). Furthermore, the index of an inequality of the U-convolution is the index of the element of the cone C(U) defining it. If the cone C(U) is null then we say that the U-convolution of system (2) is empty. For U = L(P) a U-convolution is called a total convolution.

COROLLARY 5.2. Every U-convolution of system (2) contains a fundamental U-convolution and is equivalent to it for any choice of values of the parameters t_j (j = 1, 2, ..., m) such that
the system (2) is consistent. By theorem 5.2 we thus have:

COROLLARY 5.3. Suppose, for some subspace U of L(P), that the cone C(U) of the U-composition of system (2) is non-null. Then system (2) and each of its U-convolutions, for that subspace U, have solutions in L(P) for the same sets of values of the parameters involved in them, and only for those sets of values. If the cone C(U) is null, then system (2) has a solution in U for any values of the parameters.

Corollary 5.3 is equivalent to:

COROLLARY 5.3'. If the linear inequality system (2') is consistent then each of its U-convolutions is empty or consistent, for any subspace U of L(P). If a U-convolution is consistent or empty for at least one subspace U of L(P), then system (2') is consistent.

From this last proposition follows:

COROLLARY 5.4. System (2') is consistent if its total convolution is consistent or empty. If system (2') is consistent then its total convolution is empty or consistent.

Consider a system of generating elements of the cone C(U). Suppose we eliminate each element whose index contains the index of at least one of the remaining elements, and repeat the process. In the end, by theorem 5.1, we obtain a basis for C(U). Hence, upon eliminating from a system of generating elements of a cone C(U) some elements with indices that include the indices of other elements of the system, we still have a generating system of the cone. Consequently we have:

COROLLARY 5.5. Consider a U-convolution of a system (2) and eliminate from it any inequality whose index includes the index of another inequality of the convolution. What remains is still a U-convolution of system (2).

DEFINITION 5.6. Let U be a subspace of L(P) and let V be a direct complement of U.
A system S of linear inequalities is said to be a UV-combination of system (2) with free parameters t_j if it satisfies the following conditions: a) each inequality of S is a positive linear combination of inequalities of system (2) whose combined left hand side takes only the value zero on U; b) for each set of values of the parameters t_j, the solution set of S in V coincides with the U-projection on V of the solution set of system (2). (Under this projection, an empty set is projected into an empty set.)

THEOREM 5.3. If the cone C(U) of a U-composition of system (2) with free parameters is non-null, then each of its U-convolutions is a UV-combination of it for any direct complement
V of U in L(P). Conversely, if a linear inequality system is a UV-combination of system (2) for some direct complement V of the subspace U of L(P), then it coincides with a U-convolution of that system (2).

PROOF. Let V be any direct complement of U in L(P) and let x = v^0 be an element of V which is a solution of a U-convolution S of system (2) for values t_j = t_j^0 (j = 1, 2, ..., m) for which that system is consistent. Then for t_j = t̄_j = t_j^0 + f_j(v^0) system S has a solution in U. If system (2) and system S are considered as linear inequality systems on U, then, by the first part of corollary 5.3, the system

f_j(x) + t̄_j ≤ 0   (j = 1, 2, ..., m)

has at least one solution u^0 in U. But then system (2) with t_j = t_j^0 (j = 1, 2, ..., m) has the solution u^0 + v^0. Now, system (2) is inconsistent for those values of t_j for which system S is inconsistent, and every nonempty convolution of system (2) satisfies the first condition of the definition of UV-combinations. Hence the first part of the theorem is proved.

Suppose now that T is some UV-combination of system (2) for V = V^0 and suppose Σ_{j=1}^{m} q_j f_j(x) is a U-composition of the system of functions f_j(x) (j = 1, 2, ..., m). If x^0 = u^0 + v^0 (u^0 ∈ U, v^0 ∈ V) is an arbitrary solution of system T for arbitrary values t_j = t_j^0 (j = 1, 2, ..., m) for which it is consistent, then by the definition of a UV-combination v^0 is a solution of it. But then, by that definition, system (2) with t_j = t_j^0 has a solution u' + v^0 (u' ∈ U) and hence

f_j(u' + v^0) + t_j^0 ≤ 0   (j = 1, 2, ..., m).

Thus we have

Σ_{j=1}^{m} q_j f_j(x^0) + Σ_{j=1}^{m} q_j t_j^0 ≤ 0   (x^0 = u^0 + v^0).

Since x^0 is an arbitrary solution of system T for values of the parameters for which it is consistent, the inequality

Σ_{j=1}^{m} q_j f_j(x) + Σ_{j=1}^{m} q_j t_j ≤ 0

is a consequence of system T taken as a system of linear inequalities on L(P) + P^m (where P^m is the set of vectors (t_1, ..., t_m)). Thus, as in the proof of theorem 5.1, the vectors of coefficients of the linear combinations defining system T from system (2) generate the cone C(U). Consequently, system T is a U-convolution of system (2). This completes the proof of the theorem.

REMARK. It follows from theorem 5.3 that the solution set, in a direct complement V of U in L(P), of a non-empty U-convolution of system (2') coincides with the projection (the U-projection) of the solution set of (2') on V. It is easy to show that for an empty U-convolution
that projection coincides with V. Indeed, let .. be an element of V. Since the U-convolution of system (2’) is empty, then so is the U-convolution of the system fj (x)
(aj
fj (v 0 ))
0 (j = 1; 2; :::; m);
if we consider it as a system of linear inequalities on U. But then, by corollary 5.3’, this system has at least one solution u = u0 2 u and hence the element x0 = u0 + v 0 is a solution
of (2’) whose projection on V is v 0 2 V . Since v 0 is arbitrary, it follows that the projection of the solution set of (2’) coincides with V.
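The projection property stated in this remark can be illustrated numerically. The sketch below (Python; the two-variable system, the function names, and the data are our own illustration, not from the text) performs a one-variable U-convolution by combining inequalities with opposite signs and then tests membership in the resulting projection:

```python
from fractions import Fraction as F

def u_convolution(rows, j):
    """Eliminate variable j from rows (coeffs, const), each meaning
    sum(coeffs[i]*x[i]) + const <= 0; a one-variable U-convolution."""
    pos = [r for r in rows if r[0][j] > 0]
    neg = [r for r in rows if r[0][j] < 0]
    out = [r for r in rows if r[0][j] == 0]
    for cp, bp in pos:
        for cn, bn in neg:
            # positive combination cp[j]*(neg row) - cn[j]*(pos row)
            co = [cp[j] * cn[i] - cn[j] * cp[i] for i in range(len(cp))]
            out.append((co, cp[j] * bn - cn[j] * bp))
    return out

def satisfies(rows, point):
    return all(sum(c * v for c, v in zip(cs, point)) + b <= 0
               for cs, b in rows)

# x + y - 1 <= 0,  -x + y - 1 <= 0,  -y <= 0  (projection on y: 0 <= y <= 1)
rows = [([F(1), F(1)], F(-1)), ([F(-1), F(1)], F(-1)), ([F(0), F(-1)], F(0))]
proj = u_convolution(rows, 0)
print(satisfies(proj, [F(0), F(1)]))   # True: y = 1 lies in the projection
print(satisfies(proj, [F(0), F(2)]))   # False: y = 2 does not
```

A point of V lies in the projection exactly when some completion of it solves the original system, which is what the convolution tests without ever producing that completion.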
The kernel of system (2) is the maximal subspace H of L(P) on which all the functions f_j(x) take only the value zero. We conclude this section with:

THEOREM 5.4. If some U-convolution of system (2) with respect to a subspace U ⊄ H is not empty, then its rank is less than the rank of system (2) by no less than the dimension of a direct complement H_1 of H ∩ U in U.

The rank of system (2) is the rank of the system of functions f_j defining it.

PROOF. From the definition of a U-convolution of system (2) it follows that its kernel H' contains the subspaces H and U. Clearly, the rank of the U-convolution equals the dimension of a direct complement Q' of its kernel H' in L(P), and the rank of system (2) coincides with the dimension of a direct complement of its kernel H. Form the direct decomposition

L(P) = H' + Q' = (H_2 + H_1 + H) + Q' = H + (H_1 + H_2 + Q'),

where H_2 is some direct complement of the subspace H + U = H + H_1 in H'. From this we conclude that the rank of system (2) exceeds the rank of the U-convolution by the dimension of H_1 + H_2. But this means that the latter rank is less than the rank of (2) by no less than the dimension of H_1, as we were to show.
§2. ITERATED CONVOLUTIONS. ELIMINATION OF UNKNOWNS.
DEFINITION 5.7. Let U and U' be two subspaces of L(P). If a U-convolution S of system (2) is not empty, then we call a U'-convolution of S (as a system on L(P)) an iterated convolution of system (2); more precisely, the latter is an iterated (U, U')-convolution of system (2). If system S is empty, or if its U'-convolution is empty, the iterated (U, U')-convolution of system (2) is taken to be empty. An analogous definition may be stated for the iterated (U, U', U'')-convolution of system (2), etc.

THEOREM 5.5. Each iterated (U, U')-convolution of system (2) coincides with some (U + U')-convolution of that system.

PROOF. Clearly, it suffices to prove the theorem for the case where the parameters of system (2) are free parameters. Let S be some U-convolution of system (2) and let S' be some U'-convolution of S. First we consider the case where both of them are not empty. Let W be a direct complement of the intersection U ∩ U' in U' and let V be a direct complement of the subspace U'' = U + U' = U + W in L(P). It is not hard to show that the system S' is a W-convolution of system S. In view of the first part of theorem 5.3, the set M(U, t) (t = (t_1, ..., t_m)) of solutions of system S on the subspace W + V, for arbitrary values of the parameters t_j in system (2), is the projection of the solution set M(t) of the latter on that subspace. If system S is considered as a system on the subspace W + V then, by theorem 5.3, the set of solutions of the system S' on V coincides with the projection of the set M(U, t) on that subspace, and hence with the projection of the set M(t) of solutions of system (2) on that subspace. System S', clearly, satisfies the first condition in the definition of a UV-convolution of system (2) for U = U''. Thus S' is a U''V-convolution of system (2). But then, by the second part of theorem 5.3, system S' must be a U''-convolution of system (2). Thus, for the present case, the theorem is proved.
Now suppose that at least one of the two systems S and S' is empty. If system S is empty then, clearly, system (2) has an empty (U + U')-convolution. If system S' is empty and system S is not then, by the second part of corollary 5.3, system S has a solution in U + U' for any values of the parameters t_j involved in it. But then, by the first part of that corollary, system (2) has a solution in U + U' for all values of the parameters t_j involved in it (here system (2) and system S are considered as systems on U + U'). It is clear then that the system of functions f_j(x) (j = 1, 2, ..., m) cannot have a (U + U')-composition. Hence, for this case, the (U + U')-convolution of system (2) is empty. This completes the proof of the theorem.

By corollary 5.5, theorem 5.5 implies:
COROLLARY 5.6. From an iterated (U, U')-convolution of system (2) it is possible to eliminate any inequality containing a non-fundamental (U + U')-composition of the functions f_j(x) (the index of such an inequality contains the index of at least one of the remaining inequalities), without changing the fact that it is a (U + U')-convolution of (2).

This possibility is used in the algorithm, presented below, for reducing fundamental convolutions by sequentially obtaining fundamental U-convolutions of the system

f_j(x) + t_j = a_{j1} x_1 + ... + a_{jn} x_n + t_j ≤ 0 (j = 1, 2, ..., m)  (4)

on P^n with respect to the subspaces

U = U_1, U = U_1 + U_2, ..., U = U_1 + ... + U_k (k ≤ n),

where U_1, ..., U_n are the subspaces generated in P^n by the coordinate vectors e_1 = (1, 0, ..., 0), e_2 = (0, 1, ..., 0), ..., e_n = (0, 0, ..., 1), respectively. To justify the algorithm we prove the next
proposition.

LEMMA 5.1. Consider a (U_{i_1} + ... + U_{i_k})-convolution of system (4) in which the coefficients of some unknown x_ℓ (ℓ ≠ i_1, ..., i_k) are all non-negative (non-positive). Then there exists a linear combination of the k columns numbered i_1, ..., i_k of the system's matrix such that the sum of that combination and the ℓ-th column of that matrix is a column vector with non-negative (non-positive) coordinates. Conversely, if such a combination exists for the columns numbered i_1, ..., i_k, then the coefficient of x_ℓ is non-negative (non-positive) in every (U_{i_1} + ... + U_{i_k})-convolution of system (4), provided the latter has a non-empty convolution of this type.

Indeed, under the conditions of the first part of the proposition, all solutions of the system

a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = i_1, ..., i_k),
u_j ≥ 0 (j = 1, 2, ..., m)

satisfy the inequality

a_{1ℓ} u_1 + ... + a_{mℓ} u_m ≥ 0 (≤ 0).

Thus the first part follows directly from theorem 2.4. The second part is obvious.

From the above proposition follows:

COROLLARY 5.7. Suppose the coefficient column for x_ℓ in a (U_{i_1} + ... + U_{i_k})-convolution of system (4) with ℓ ≠ i_1, ..., i_k contains elements of opposite signs. Then each linear combination of the columns numbered i_1, ..., i_k results, when added to the ℓ-th column, in a column vector which contains elements of opposite signs.

Next we prove a fundamental lemma.

LEMMA 5.2. Consider a non-empty iterated (U_1, ..., U_k)-convolution of system (4). Let A_1 be the column of coefficients of x_1 in system (4) and let A_ℓ be the column of coefficients of x_ℓ in some (U_1, ..., U_{ℓ−1})-convolution of system (4). Suppose s (s ≤ k) of the k columns A_1, ..., A_k contain elements of opposite signs. Then none of the fundamental elements of the cone C = C(U_1, ..., U_k) for system (4) contains more than s + 1 nonzero coordinates.

PROOF. Consider the system

a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, ..., k)  (5)

that defines the cone C. If s = k then the rank of system (5) does not exceed s, and our proposition follows directly from lemma 1.2. Thus we assume s < k. In this case at least one of the columns A_1, ..., A_k contains no elements of opposite signs. Let k_1, ..., k_ℓ be the numbers (in increasing order) of all such columns. By lemma 5.1, the solution set of system (5) is not changed when we exchange (sequentially) the k_ℓ-th, k_{ℓ−1}-th, ..., k_1-st equations in it for equations whose coefficients are all of one sign; by lemma 5.1 this is attained by adding, to each of the changed equations, a linear combination of the equations preceding it.

By hypothesis, the (U_1, ..., U_k)-convolution of system (4) is non-empty. Hence system (5) has a non-null non-negative solution. Since the transformed equations have coefficients of constant sign, any non-negative solution of system (5) can have nonzero coordinates only in the column positions in which all of these equations have zero coefficients. Change all the coefficients of system (5) in the remaining column positions to zeros and denote the resulting system by K. In system K the equations numbered k_1, ..., k_ℓ contain no nonzero terms, so its rank does not exceed the number s of its other equations. Applying lemma 1.2, we easily obtain our proposition.

To present the algorithm of reduced convolutions we introduce:

DEFINITION 5.8. Let A be a system of m elements a_1, ..., a_m of the field P and let Z be a system of m unknowns Z_1, ..., Z_m. To each pair a_p > 0, a_q < 0 assign the form a_p Z_q − a_q Z_p, and to each zero element a_s assign the form Z_s. The set of all such forms is called an A-deformation of system Z. If A contains elements of opposite signs then the A-deformation is called a composite deformation.

RESTRICTED FUNDAMENTAL CONVOLUTION ALGORITHM.

1. Let A_1 be a column of coefficients of system (4), say the one corresponding to the unknown x_1. Construct an A_1-deformation of the left-hand sides of the inequalities of system (4) and combine these inequalities accordingly. We thus get a system S_1 which is a U_1-convolution of system (4) (clearly it is fundamental). Associate with each inequality of S_1 its index: the index, obviously, coincides with the numbers of the parameters involved in the inequality or, equivalently, with the numbers of those inequalities of (4) which form the linear combination defining it.

2. Suppose we already have a fundamental (U_1 + ... + U_k)-convolution S_k of system (4). Let A_{k+1} be a nonzero column of coefficients in that system, say the one corresponding to the unknown x_{k+1}. If the A_{k+1}-deformation is not a composite deformation, then we obtain a system S_{k+1} from system S_k by constructing an A_{k+1}-deformation of the left-hand sides of the inequalities of S_k. If the A_{k+1}-deformation is a composite deformation, and if s of the k+1 deformations A_i (i = 1, ..., k+1) are composite, then the system S_{k+1} is obtained as follows:

a) Perform the A_{k+1}-deformation of the left-hand sides of the inequalities of system S_k, but do not combine the left-hand sides of those inequalities whose union of indices contains more than s + 1 distinct elements.

b) From the system obtained in a), eliminate every inequality whose index contains the index of another inequality, and repeat the process.
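The elimination step just described can be sketched in code. The following Python fragment (data layout and names are our own; rule a) is enforced through a cardinality bound supplied by the caller, rule b) through a proper-subset test on indices, and the normalization of "essentially distinct" rows is omitted):

```python
from fractions import Fraction as F

def restricted_step(rows, j, bound):
    """One A-deformation on rows (coeffs, const, index_set), eliminating
    variable j; `bound` plays the role of s + 1 in rule a)."""
    pos = [r for r in rows if r[0][j] > 0]
    neg = [r for r in rows if r[0][j] < 0]
    out = [r for r in rows if r[0][j] == 0]
    for cp, bp, ip in pos:
        for cn, bn, i_n in neg:
            idx = ip | i_n
            if len(idx) > bound:            # rule a): too many originals
                continue
            co = [cp[j] * cn[i] - cn[j] * cp[i] for i in range(len(cp))]
            out.append((co, cp[j] * bn - cn[j] * bp, idx))
    # rule b): drop rows whose index properly contains another row's index
    return [r for r in out if not any(q[2] < r[2] for q in out)]

# x1 - 1 <= 0 (index {1}), -x1 + x2 <= 0 ({2}), -x2 <= 0 ({3})
rows = [([F(1), F(0)], F(-1), {1}),
        ([F(-1), F(1)], F(0), {2}),
        ([F(0), F(-1)], F(0), {3})]
s1 = restricted_step(rows, 0, 2)   # first composite deformation: s = 1
s2 = restricted_step(s1, 1, 3)     # second composite deformation: s = 2
print([(b, sorted(i)) for _, b, i in s2])   # [(Fraction(-1, 1), [1, 2, 3])]
```

The final system consists of the single inequality −1 ≤ 0, so the toy system is consistent; its index records which of the three original inequalities were combined.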
In either case, the system S_{k+1} is a fundamental (U_1 + ... + U_{k+1})-convolution of system (4). This can easily be verified by means of corollary 5.6 and lemma 5.2 above. As above, associate with each inequality its index. Repeating this process, and taking every convolution of an empty convolution to be empty, we obtain, in a finite number of steps, a P^n-convolution of system (4) (P^n = U_1 + ... + U_n).

REMARK 1. If the A_i-deformations introduced here are unrestricted with respect to the indices of the inequalities used in them, then the above algorithm coincides with the algorithm of Chernikov [6], which is called, appropriately, the free convolution algorithm. In view of theorem 5.5, the free convolution algorithm results in (U_1 + ... + U_k)-convolutions of system (4). Such a convolution, however, need not be fundamental. If we impose only condition a) then we have another restricted convolution algorithm; it too yields (U_1 + ... + U_k)-convolutions of system (4).

REMARK 2. Since in performing the A_i-deformations in the course of eliminating the unknowns of a system (4) we take a nonzero column each time, it follows from theorem 5.4 that the number of steps required to eliminate all of the unknowns of a system (4) with a non-empty P^n-convolution does not exceed the rank of the system.

REMARK 3. The algorithm of restricted convolutions may be associated with the following recursive process of constructing linear forms in the parameters t_1, ..., t_m. Let T_1 be a system of linear forms in (t_1, ..., t_m), and suppose we have already constructed a system T_k of linear forms in (t_1, ..., t_m). Divide system T_k into three subsystems T_k^1, T_k^2 and T_k^3 (one of them possibly empty). Now construct the sum T_k^2 + T_k^3 by adding those forms of T_k^2 and T_k^3 such that, if s of the systems T_i^2 + T_i^3 (i = 1, 2, ..., k) are not empty, each resulting form involves no more than s + 1 unknowns, and such that the resulting forms are distinct in the sets of unknowns involved in them (the system T_i^2 + T_i^3 is empty if either T_i^2 or T_i^3 is empty). Define T_{k+1} as the union T_k^1 ∪ (T_k^2 + T_k^3).

Clearly, for given k and m, the number of inequalities in a fundamental (U_1 + ... + U_k)-convolution of system (4) does not exceed the number p_{m,k} of forms in the system T_{k+1} with the greatest number of forms. Thus the number p_m = max_k p_{m,k} bounds the number of inequalities that can arise at any step of eliminating the unknowns of system (4) by the method of restricted fundamental convolutions and by the method of reduced convolutions (for instance, p_m = 36 for m = 10). By remark 2, this bound is independent of the number of unknowns of the system (4).

REMARK 4. Choose k columns, say the first k, of the matrix of system (4). Consider the system

a_{j1} x_1 + ... + a_{jk} x_k + t_j ≤ 0 (j = 1, 2, ..., m).

If

Σ_{j=1}^{m} d_j^{(i)} t_j ≤ 0 (i = 1, 2, ..., s)

is the system obtained from it after eliminating the unknowns x_1, ..., x_k by the method described above, then the system

Σ_{j=1}^{m} d_j^{(i)} (a_{j,k+1} x_{k+1} + ... + a_{jn} x_n + t_j) ≤ 0 (i = 1, 2, ..., s)

coincides, obviously, with the fundamental (U_1 + ... + U_k)-convolution of system (4). This
property of our method facilitates its use in actual computations (in the case of a system (4) with a large number of unknowns). The algorithm of restricted fundamental convolutions may also be used to find solutions of system (4) for given values of the parameters t_j.

EXAMPLE. Given the system

x_1 + x_2 − x_3 + x_4 − 3 ≤ 0,
2x_1 − x_2 − x_3 − x_4 + 1 ≤ 0,
−x_1 + 2x_2 + x_3 − x_4 − 2 ≤ 0,
−x_1 − x_2 + 2x_3 + x_4 − 2 ≤ 0,
3x_1 + x_2 − 3x_3 − 2x_4 + 1 ≤ 0,
−2x_1 − x_2 + x_3 + x_4 + 0 ≤ 0.

The U_1-convolution of this system has the form:

3x_2 + 0x_3 + 0x_4 − 5 ≤ 0 (1, 3),
0x_2 + x_3 + 2x_4 − 5 ≤ 0 (1, 4),
x_2 − x_3 + 3x_4 − 6 ≤ 0 (1, 6),
3x_2 + x_3 − 3x_4 − 3 ≤ 0 (2, 3),
−3x_2 + 3x_3 + x_4 − 3 ≤ 0 (2, 4),
−2x_2 + 0x_3 + 0x_4 + 1 ≤ 0 (2, 6),
7x_2 + 0x_3 − 5x_4 − 5 ≤ 0 (3, 5),
−2x_2 + 3x_3 + x_4 − 5 ≤ 0 (4, 5),
−x_2 − 3x_3 − x_4 + 2 ≤ 0 (5, 6).

Alongside each inequality we noted its index. Let us form the fundamental (U_1 + U_3)-convolution:

3x_2 + 0x_4 − 5 ≤ 0 (1, 3),
x_2 + 5x_4 − 11 ≤ 0 (1, 4, 6),
−2x_2 + 0x_4 + 1 ≤ 0 (2, 6),
−3x_2 + 0x_4 − 3 ≤ 0 (4, 5, 6),
7x_2 − 5x_4 − 5 ≤ 0 (3, 5).

The fundamental (U_1 + U_3 + U_4)- and (U_1 + U_3 + U_4 + U_2)-convolutions are given by

3x_2 − 5 ≤ 0 (1, 3),
−2x_2 + 1 ≤ 0 (2, 6),
−3x_2 − 3 ≤ 0 (4, 5, 6)

and

−7 ≤ 0 (1, 2, 3, 6),
−8 ≤ 0 (1, 3, 4, 5, 6).

Since the last of these systems is consistent, it follows from corollary 5.3' that the original system is consistent. Substituting one of the solutions of the (U_1 + U_3 + U_4)-convolution, e.g. x_2 = 1, into the (U_1 + U_3)-convolution we get x_4 = 1. Substituting x_2 = x_4 = 1 into the U_1-convolution we get x_3 = 0. Finally, substituting x_2 = x_4 = 1 and x_3 = 0 into the original system we get (0, 1, 0, 1) as a solution of the latter.

To conclude §2 we give the computational scheme for the restricted fundamental
convolution algorithm. Let

H^0 = ( a_11 ... a_1n | 1 ... 0 )
      ( ............. | ....... )
      ( a_m1 ... a_mn | 0 ... 1 )

denote the matrix whose left part is the matrix A^0 of system (4) and whose right part is an identity matrix of order m. The restricted fundamental convolution algorithm is given in terms of the following procedure.

1. Take any nonzero column of A^0, say the first column A_1^0. Performing an A_1^0-deformation on the system of rows of the matrix H^0, we obtain the system of rows of a matrix H^1, whose columns are partitioned as in H^0 and whose left part is the matrix of a fundamental U_1-convolution of system (4). We denote the right part of H^1 by E^1. If all elements of A_1^0 are positive, or if all of them are negative, we say that the matrix H^1 is empty; in this case the U_1-convolution of system (4) is empty and the process ends. If H^1 is not empty (all of the elements of its first column are zeros) the process continues.

2. Suppose we have obtained the matrix H^k, whose columns are partitioned as in H^{k−1} and whose left part A^k is the matrix of a fundamental (U_1 + ... + U_k)-convolution of system (4). Denote the right part of H^k by E^k. With each row of the matrix H^k we associate an index, namely the set of numbers of those columns of E^k which contain positive elements of that row. Take a nonzero column of A^k, say the (k+1)-st column A_{k+1}^k (the first k columns of A^k are zero columns). If the A_{k+1}^k-deformation of the system of rows of H^k is not a composite deformation, then we obtain the system of rows of H^{k+1} by performing this deformation; if H^{k+1} is empty, the process ends. If the A_{k+1}^k-deformation is a composite deformation, and if s of the deformations performed so far (the current one included) are composite, then we take into H^{k+1} those rows of H^k which have zero elements in the column A_{k+1}^k, together with those row combinations resulting from the deformation which satisfy the conditions:

1) their indices do not have more than s + 1 elements;

2) the index of none of them contains the index of any other.

The first requirement is automatically satisfied if, in performing the A_{k+1}^k-deformation, we do not combine any two rows the union of whose indices contains more than s + 1 elements. The second is, obviously, satisfied if we do not combine two rows the union of whose indices contains the index of a third row. Under these restrictions the matrix H^{k+1} will not contain any rows that are unnecessary for the construction of the fundamental (U_1 + ... + U_{k+1})-convolution of system (4). On the other hand, it is not hard to show that no row necessary for such a construction is excluded. Indeed, a row of H^{k+1} arises in one of two ways: it is brought into H^{k+1} unchanged (if it has a zero element in the column A_{k+1}^k), or it is a combination of two rows whose elements in that column have opposite signs. In the second case, if the union of the indices of the combined rows contained the index of a third row, the resulting combination would be unnecessary for the construction, so no necessary combination is lost; and a row with a zero element in the column A_{k+1}^k is always brought, unchanged, into H^{k+1}.

The computational scheme of convolution of system (4) for fixed values of the parameters t_j (say t_j = a_j, j = 1, ..., m) is essentially the same as the above process. In this case we insert the column with elements a_1, ..., a_m into the matrix H^0 to the left of the vertical line. If we perform the above transformations on this matrix H^0, then the (n+1)-st column of the resulting matrix represents the free terms of the corresponding fundamental convolution of system (4) with t_j = a_j; as we noted, the matrix of coefficients of this convolution is formed by the first n columns of the transformed matrix. Since the positive values of the elements of E^k do not interest us (only their column numbers matter, for determining the indices of the rows of H^k), we may set each of them equal to unity.

REMARK 5. For U = U_1 + ... + U_k the maximal system of essentially distinct fundamental elements of the cone C(U) of U-compositions of system (4) coincides with the basis of the cone of non-negative solutions of the system

a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., k).

Hence, to obtain a fundamental U-convolution of system (4) for U = U_1 + ... + U_k it is also possible to use the computational scheme of §4 of chapter III. If the latter is restated as a solution of this problem, we obtain a computational scheme which essentially coincides with the scheme presented here. However, the present scheme has the advantage of making it possible to bound the number of inequalities in the resulting convolution (see remark 3 on the algorithm of restricted fundamental convolutions).

The above computational scheme may be used to obtain a fundamental U-convolution of system (4) only in the case U = U_1 + ... + U_k (k = 1, 2, ...). It is not hard to show that obtaining a fundamental U-convolution of system (4) for an arbitrary U ⊆ P^n reduces to this case. We show this to be true in the more general situation of obtaining a fundamental U-convolution of a system (2) on L(P) for any U ⊆ L(P). Indeed, let U be a finite-dimensional subspace of L(P) and let u^1, ..., u^r be a basis for it. Then we obtain a fundamental U-convolution of (2) by applying the above procedure to the system

x_1 f_j(u^1) + ... + x_r f_j(u^r) + t_j ≤ 0 (j = 1, 2, ..., m)

and obtaining the full fundamental convolution from the result by replacing t_j with f_j(x) + t_j (x ∈ L(P)). If the subspace U is infinite-dimensional (clearly this is not possible if L = P^n), let H denote the kernel of system (2) and form the direct decomposition

U = (U ∩ H) + U'.

Here U' is a finite-dimensional subspace of L(P). Since a fundamental U-convolution of system (2) obviously coincides with a fundamental U'-convolution of the latter, the computational scheme just described may be used to obtain the former, as we asserted.
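In matrix form, the scheme above amounts to carrying an identity block alongside A^0 and reading each row's index off the positive entries of the right part. A minimal sketch (Python, with toy data of our own; only the free deformation is performed, without the index restrictions):

```python
from fractions import Fraction as F

# Toy system A x + t <= 0 with three inequalities in two unknowns.
A = [[F(1), F(0)], [F(-1), F(1)], [F(0), F(-1)]]
m, n = len(A), len(A[0])
# H^0: left part A^0, right part an identity matrix of order m.
H = [row + [F(int(i == j)) for j in range(m)] for i, row in enumerate(A)]

def deform(H, j):
    """A-deformation of the rows of H on column j."""
    pos = [r for r in H if r[j] > 0]
    neg = [r for r in H if r[j] < 0]
    out = [r for r in H if r[j] == 0]
    out += [[p[j] * q[i] - q[j] * p[i] for i in range(len(p))]
            for p in pos for q in neg]
    return out

H1 = deform(H, 0)
# Index of a row = numbers of the columns of E^1 holding its positive entries.
indices = [[k + 1 for k, e in enumerate(row[n:]) if e > 0] for row in H1]
print(indices)   # [[3], [1, 2]]
```

Keeping the multipliers in the identity block rather than only the index sets also yields, at the end, the explicit non-negative combinations of the original inequalities that produce each row of the convolution.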
§3. NON-NEGATIVE SOLUTIONS OF SYSTEMS OF LINEAR EQUATIONS.
The algorithm presented in the preceding section may be used to obtain the general formula for the non-negative solutions of a system of homogeneous linear equations. Indeed, let

a_{1i} u_1 + ... + a_{mi} u_m = 0 (i = 1, 2, ..., n)  (6)

be such a system. Clearly, the set M of its non-negative solutions may be regarded as the cone C(U), with U = P^n, for the inequality system

f_j(x) + t_j = a_{j1} x_1 + ... + a_{jn} x_n + t_j ≤ 0 (j = 1, 2, ..., m)  (7)

with unknowns x_1, ..., x_n and parameters t_1, ..., t_m. But then, in view of theorem 5.1 and the definition of a fundamental U-convolution (see §1), we have:

THEOREM 5.6. If

u_{k1} t_1 + ... + u_{km} t_m ≤ 0 (k = 1, 2, ..., ℓ)

is a fundamental P^n-convolution of system (7), then (u_{k1}, ..., u_{km}) (k = 1, 2, ..., ℓ) is the maximal system of essentially distinct fundamental elements of the cone C(P^n) of non-negative solutions of system (6), and hence the formula

(u_1, ..., u_m) = Σ_{k=1}^{ℓ} p_k (u_{k1}, ..., u_{km}),  (8)

where p_1, ..., p_ℓ are non-negative parameters (arbitrary non-negative elements of P), defines all the non-negative solutions of system (6). If the fundamental P^n-convolution of system (7) is empty, then the zero solution is the only non-negative solution of system (6).

COROLLARY 5.8. The numbers of the nonzero coordinates of a positive (non-negative) solution of system (6) that has the largest number of nonzero coordinates coincide with the numbers of those inequalities of system (7) that are included in the indices of the inequalities of the fundamental P^n-convolution of system (7). If all of the parameters p_1, ..., p_ℓ in formula (8) are positive, then we get a solution of system (6) with the largest number of nonzero coordinates.

The problem of obtaining all of the non-negative solutions of an arbitrary system

a_{1i} u_1 + ... + a_{mi} u_m = a_i (i = 1, 2, ..., n)  (9)

of linear equations is solved by obtaining formula (8) for the homogeneous system

a_{1i} u_1 + ... + a_{mi} u_m − a_i u_{m+1} = 0 (i = 1, 2, ..., n)  (10)

and choosing those solutions for which the last coordinate u_{m+1} is equal to one. Here system (7) takes the form

f_j(x) + t_j = a_{j1} x_1 + ... + a_{jn} x_n + t_j ≤ 0 (j = 1, 2, ..., m),
f_{m+1}(x) + t_{m+1} = −a_1 x_1 − ... − a_n x_n + t_{m+1} ≤ 0.

If

u_{k1} t_1 + ... + u_{km} t_m + u_{k,m+1} t_{m+1} ≤ 0 (k = 1, 2, ..., h)

is its fundamental P^n-convolution then, by theorem 5.6, (u_{k1}, ..., u_{km}, u_{k,m+1}) (k = 1, 2, ..., h) is the maximal system of essentially distinct fundamental elements of the cone of non-negative solutions of system (10). Given that system, it is not hard to find all supporting solutions of system (9) (see definition 1.5). Indeed, they clearly coincide with the nonzero vectors

(u_{k1}/u_{k,m+1}, ..., u_{km}/u_{k,m+1})

defined by the solutions (u_{k1}, ..., u_{km}, u_{k,m+1}) with u_{k,m+1} > 0.
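The passage from the fundamental elements of the homogenized cone (10) to the supporting solutions of (9) is a simple rescaling. A sketch under assumed data (Python; the one-equation system and its generators are our own toy example, with the generators taken as already known):

```python
from fractions import Fraction as F

# Toy system (9): u1 - u2 = 1. Its homogenization (10): u1 - u2 - u3 = 0.
# Assume the fundamental elements of the cone of non-negative solutions
# of (10) are already known:
fundamental = [(F(1), F(1), F(0)), (F(1), F(0), F(1))]

# Supporting solutions of (9): rescale the generators whose last
# coordinate is positive so that it becomes one, then drop it.
supporting = [tuple(c / g[-1] for c in g[:-1])
              for g in fundamental if g[-1] > 0]
print(supporting)   # [(Fraction(1, 1), Fraction(0, 1))]
```

The generator (1, 1, 0), with last coordinate zero, contributes no supporting solution; it describes the recession direction u_1 = u_2 of the solution set of (9).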
If all the coefficients of system (6) are rational then so, obviously, are all the solutions (u_{k1}, ..., u_{km}) (k = 1, 2, ..., ℓ) appearing in formula (8). It is not hard to show that in this case formula (8), for rational values of the parameters p_k (k = 1, 2, ..., ℓ), determines all the non-negative solutions of system (6) with rational coordinates. Multiplying the solutions (u_{k1}, ..., u_{km}) by appropriately chosen positive integers, we obtain from formula (8) with rational coefficients the positive integral solutions of (6) as well. In order to solve the related problem of finding non-negative rational solutions, and in particular positive integral solutions, of an equation system with rational coefficients and free terms, we proceed in an analogous manner.

To illustrate theorem 5.6 consider:

EXAMPLE. Find the general formula for the non-negative solutions of the system

u_1 + 2u_2 − u_3 − u_4 + 3u_5 − 2u_6 = 0,
u_1 − u_2 + 2u_3 − u_4 + u_5 − u_6 = 0,
−u_1 − u_2 + u_3 + 2u_4 − 3u_5 + u_6 = 0,
u_1 − u_2 − u_3 + u_4 − 2u_5 + u_6 = 0.

By theorem 5.6, the question reduces to finding the fundamental R^4-convolution of the system

x_1 + x_2 − x_3 + x_4 + t_1 ≤ 0,
2x_1 − x_2 − x_3 − x_4 + t_2 ≤ 0,
−x_1 + 2x_2 + x_3 − x_4 + t_3 ≤ 0,
−x_1 − x_2 + 2x_3 + x_4 + t_4 ≤ 0,
3x_1 + x_2 − 3x_3 − 2x_4 + t_5 ≤ 0,
−2x_1 − x_2 + x_3 + x_4 + t_6 ≤ 0.

For that system the fundamental U_1-convolution is:

3x_2 + 0x_3 + 0x_4 + t_1 + t_3 ≤ 0,
0x_2 + x_3 + 2x_4 + t_1 + t_4 ≤ 0,
x_2 − x_3 + 3x_4 + 2t_1 + t_6 ≤ 0,
3x_2 + x_3 − 3x_4 + t_2 + 2t_3 ≤ 0,
−3x_2 + 3x_3 + x_4 + t_2 + 2t_4 ≤ 0,
−2x_2 + 0x_3 + 0x_4 + t_2 + t_6 ≤ 0,
7x_2 + 0x_3 − 5x_4 + 3t_3 + t_5 ≤ 0,
−2x_2 + 3x_3 + x_4 + 3t_4 + t_5 ≤ 0,
−x_2 − 3x_3 − x_4 + 2t_5 + 3t_6 ≤ 0.

We now construct the fundamental (U_1 + U_3)-convolution:

3x_2 + 0x_4 + t_1 + t_3 ≤ 0,
x_2 + 5x_4 + 3t_1 + t_4 + t_6 ≤ 0,
−2x_2 + 0x_4 + t_2 + t_6 ≤ 0,
−3x_2 + 0x_4 + 3t_4 + 3t_5 + 3t_6 ≤ 0,
7x_2 − 5x_4 + 3t_3 + t_5 ≤ 0.

The fundamental (U_1 + U_3 + U_4)- and (U_1 + U_3 + U_4 + U_2)-convolutions have the forms

3x_2 + t_1 + t_3 ≤ 0,
−2x_2 + t_2 + t_6 ≤ 0,
−3x_2 + 3t_4 + 3t_5 + 3t_6 ≤ 0

and

2t_1 + 3t_2 + 2t_3 + 3t_6 ≤ 0,
t_1 + t_3 + 3t_4 + 3t_5 + 3t_6 ≤ 0.

By theorem 5.6, the general formula for the non-negative solutions of the system has the form

(u_1, u_2, u_3, u_4, u_5, u_6) = p_1 (2, 3, 2, 0, 0, 3) + p_2 (1, 0, 1, 3, 3, 3).
§4. CONVOLUTIONS OF SOME SPECIAL SYSTEMS OF LINEAR INEQUALITIES.
This section is devoted to convolutions of inequality systems on L(P) of the form

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m),
f_j(x) − a_j ≤ 0 (j = m+1, ..., m+p),  (11)
−f_j(x) + b_j ≤ 0 (j = m+1, ..., m+p),

of systems of the form

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., p),
−f_j(x) + b_j ≤ 0 (j = 1, ..., p),  (12)

and of systems of the form

|f_j(x) − d_j| ≤ ε_j (j = 1, 2, ..., p),  (13)

where the ε_j (j = 1, 2, ..., p) are non-negative elements of P. Each system (12) with b_j ≤ a_j (j = 1, ..., p) is, clearly, equivalent to the system

|f_j(x) − d_j| ≤ ε_j (j = 1, 2, ..., p)

with d_j = (a_j + b_j)/2 and ε_j = (a_j − b_j)/2. It is also clear that every system (13) may be written as a system (12).

Applying the definition of fundamental U-convolutions of system (2) of §1 to system (11) results in an awkward situation, since we would be trying to find non-negative solutions (u_1, ..., u_m, ..., u_{m+2p}) of the equation

u_1 f_1(x) + ... + u_m f_m(x) + u_{m+1} f_{m+1}(x) + ... + u_{m+p} f_{m+p}(x) − u_{m+p+1} f_{m+1}(x) − ... − u_{m+2p} f_{m+p}(x) = 0 (x ∈ L(P)),

whose terms can be grouped in the obvious manner. We consider here a variant of that definition which is better suited to the present problem. Let

q_{j_1} f_{j_1}(x) + ... + q_{j_{s+1}} f_{j_{s+1}}(x)  (14)

be a linear combination, with nonzero coefficients, involving s + 1 functions of which s are linearly independent on U, and such that the coefficients of the functions f_{j_k}(x) with j_k ≤ m are positive. Consider a combination (14) which is identically zero on U and which includes at least one function f_{j_k}(x) with j_k ≤ m. Such a combination is, obviously, unique up to a positive multiple (in P). In the contrary case it is possible to construct also the combination, essentially distinct from it (and likewise unique up to a positive multiple), identically zero on U, of the form

q_{j_1}(−f_{j_1}(x)) + ... + q_{j_{s+1}}(−f_{j_{s+1}}(x)).

In both cases a combination (14) which is identically zero on U results in the inequality

q_{j_1} f_{j_1}(x) + ... + q_{j_{s+1}} f_{j_{s+1}}(x) − (q_{j_1} a'_{j_1} + ... + q_{j_{s+1}} a'_{j_{s+1}}) ≤ 0,  (15)

where

a'_{j_k} = a_{j_k} for q_{j_k} > 0, a'_{j_k} = b_{j_k} for q_{j_k} < 0,

and, in the second case, we have also

−q_{j_1} f_{j_1}(x) − ... − q_{j_{s+1}} f_{j_{s+1}}(x) + (q_{j_1} a''_{j_1} + ... + q_{j_{s+1}} a''_{j_{s+1}}) ≤ 0,  (16)

where

a''_{j_k} = a_{j_k} for q_{j_k} < 0, a''_{j_k} = b_{j_k} for q_{j_k} > 0.

Now consider the system of all essentially distinct inequalities (15) and (16) obtained in this manner, and adjoin to it the inequality b_j − a_j ≤ 0 for each function f_j(x) (j > m) that is identically zero on U. Using the definition of fundamental U-convolution of system (2) introduced in §1, it can be shown that we thus obtain a fundamental U-convolution of system (11).

If we assume that m = 0, the convolution just obtained turns into a fundamental U-convolution of system (12). In this case each combination (14) which is identically zero on U results in both of the inequalities (15) and (16). Using this variant of the definition of the fundamental U-convolution of system (12), it is possible to show that the fundamental U-convolution of system (13) may be taken in the form

q_{j_1} f_{j_1}(x) + ... + q_{j_{s+1}} f_{j_{s+1}}(x) − q_{j_1} d_{j_1} − ... − q_{j_{s+1}} d_{j_{s+1}} ≤ |q_{j_1}| ε_{j_1} + ... + |q_{j_{s+1}}| ε_{j_{s+1}},  (17)

corresponding to all possible combinations (14) that are identically zero on U and which differ from each other by more than a positive or a negative multiple in P.

We note yet another variant of the definition of the fundamental U-convolution for system (11) with a_j = b_j (j = m+1, ..., m+p), i.e. for the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m),
f_j(x) − a_j = 0 (j = m+1, ..., m+p).

Here a combination (14) that is identically zero on U results in the inequality

q_{j_1} f_{j_1}(x) + ... + q_{j_{s+1}} f_{j_{s+1}}(x) − (q_{j_1} a_{j_1} + ... + q_{j_{s+1}} a_{j_{s+1}}) ≤ 0

whenever it includes at least one function f_{j_k}(x) with j_k ≤ m. Otherwise it results in the equation

q_{j_1} f_{j_1}(x) + ... + q_{j_{s+1}} f_{j_{s+1}}(x) − (q_{j_1} a_{j_1} + ... + q_{j_{s+1}} a_{j_{s+1}}) = 0.

The system of all essentially distinct inequalities and equations thus obtained (two equations are non-essentially distinct if one of them is a nonzero multiple of the other) is a fundamental U-convolution of the system under consideration. For m = 0 we get the following definition of the fundamental U-convolution of the equation system

f_j(x) − a_j = 0 (j = 1, 2, ..., p).  (18)

The fundamental U-convolution of the equation system (18) is the equation system

q_{j_1} f_{j_1}(x) + ... + q_{j_{s+1}} f_{j_{s+1}}(x) − (q_{j_1} a_{j_1} + ... + q_{j_{s+1}} a_{j_{s+1}}) = 0

given by all possible combinations (14) which are identically zero on U. This definition may be restated as follows. Let (u_{k1}, ..., u_{kp}) (k = 1, 2, ..., ℓ) be essentially distinct solutions of the equation

u_1 f_1(x) + ... + u_p f_p(x) = 0 (x ∈ U)

such that, for each of them, the functions f_j(x) corresponding to its nonzero coordinates constitute a system whose rank on U is smaller by one than the number of its elements. Then the system

Σ_{j=1}^{p} u_{kj} f_j(x) − Σ_{j=1}^{p} u_{kj} a_j = 0 (k = 1, 2, ..., ℓ)

is a fundamental U-convolution of system (18). The next section is devoted to the question of convoluting systems of equations. So the
remainder of this section is devoted to stating some propositions that follow from theorem 5.2' (for U = L(P)) in view of the definitions introduced above.

THEOREM 5.7. System (11) with b_j ≤ a_j (j = m+1, ..., m+p) is inconsistent if and only if there exists an identically (on L(P)) zero combination (14) with nonzero coefficients, with positive coefficients for the functions f_{j_k}(x) with j_k ≤ m, involving s + 1 functions f_j(x) of which s are linearly independent (on L(P)), such that, for

a'_{j_k} = a_{j_k} for q_{j_k} > 0, a'_{j_k} = b_{j_k} for q_{j_k} < 0,

the inequality

q_{j_1} a'_{j_1} + ... + q_{j_{s+1}} a'_{j_{s+1}} < 0

holds.

REMARK. For the special cases of systems (12) and (13), the requirement that the coefficients q_{j_k} with j_k ≤ m be positive is dropped.

THEOREM 5.8. System (13) is inconsistent if and only if there exists an identically (on L(P)) zero combination (14) with nonzero coefficients, involving s + 1 functions f_j(x) of which s are linearly independent on L(P), such that

q_{j_1} d_{j_1} + ... + q_{j_{s+1}} d_{j_{s+1}} > |q_{j_1}| ε_{j_1} + ... + |q_{j_{s+1}}| ε_{j_{s+1}}.
§5. CONVOLUTIONS OF SYSTEMS OF LINEAR EQUATIONS. AN ALGORITHM
FOR THE ELIMINATION OF UNKNOWNS.
In the preceding section, we presented a definition of the fundamental U-convolution for a system of inequalities introduced in §2. The fundamental U-convolution of system (18), thus obtained, is equivalent to the fundamental U-convolution of the inequality system

    f_j(x) − a_j ≤ 0,    −f_j(x) + a_j ≤ 0    (j = 1, 2, …, m).   (12)

Thus, by the remark about theorem 5.3, we have the proposition:
Let V be a subspace of L(P) and let U be a direct complement of V in L(P). Then the projection of the solution set of system (18) on the subspace V (the U-convolution of that set on the subspace V) coincides with the set of solutions of the system's fundamental U-convolution (defined in the preceding section) on V if the latter is non-empty, and it coincides with all of the subspace V if the latter is empty.

Thus we are able to solve the following problem: using linear combinations of the equations of system (18), construct a system of equations whose solution set on a given subspace of L(P) coincides with the U-convolution of the solution set of system (18) on that subspace, where U is a direct complement of that subspace in L(P).

It is easy to show that the fundamental U-convolution of system (18) is not the system with the smallest number of equations which solves the above problem. Below, we study a more general approach to this problem, which leads to a solution involving a system with a smaller number of equations than the fundamental U-convolution of system (18). We do not have to assume that P is an ordered field here; thus, in this section, P is an arbitrary field.

The basic results of this section are stated in terms of the system

    f_j(x) + t_j = 0  (j = 1, 2, …, m),   (19)

which is more general than system (18); here the f_j(x) are linear functions on an arbitrary linear space L(P) over a field P, and the t_j are parameters with values in P. For the sake of simplicity we shall speak of system (19) on L(P). We will also consider system (18) on the space L(P) with an arbitrary field P (not necessarily ordered).

DEFINITION 5.9. Let U be a subspace of L(P) and let P^m(U) be the set of all solutions (p_1, …, p_m) ∈ P^m (P^m is the linear space of m-dimensional vectors over P) of the equation

    p_1 f_1(x) + … + p_m f_m(x) = 0,   (20)
where the functions f_j(x) are considered on U. For any finite system

    p^i = (p_1^i, …, p_m^i)  (i = 1, 2, …, ℓ)

of nonzero generating elements of the space P^m(U) we construct the system

    p_1^k f_1(x) + … + p_m^k f_m(x) + (p_1^k t_1 + … + p_m^k t_m) = 0  (k = 1, 2, …, ℓ).   (21)
This system is called a U-convolution of system (19). If the space P^m(U) has no nonzero elements, then we say that the U-convolution of system (19) is empty.

THEOREM 5.9. Let U be a subspace of L(P) for which system (19) has a non-empty U-convolution (21), and let V be a direct complement of U in L(P). Let x⁰ = u⁰ + v⁰ (u⁰ ∈ U, v⁰ ∈ V) be a solution of system (21) for some values of the parameters t_j. Then, for these values of the t_j, system (19) has at least one solution of the form u + v⁰ (u ∈ U). If the U-convolution of system (19) is empty then, for any values of the t_j, system (19) has at least one solution in U.

PROOF. Let t_j = t_j⁰ be values of the parameters for which system (21) has the solution x⁰ = u⁰ + v⁰. We shall show that, for t_j = t_j⁰ (j = 1, 2, …, m), system (19) has a solution u + v⁰ (u ∈ U). Suppose this is not true. Then for t_j = t′_j = t_j⁰ + f_j(v⁰) (j = 1, 2, …, m) system (19) has no solutions in U, and hence the solution pairs [x, t] (x ∈ U, t ∈ P¹) of the system

    f_j(x) + t′_j t = 0  (j = 1, 2, …, m)

on the space U + P¹ satisfy the condition t = 0. But then there exists a nonzero vector p = (p_1, …, p_m) ∈ P^m such that for all x ∈ U and t ∈ P¹ we have the relation

    t = Σ_{j=1}^m p_j (f_j(x) + t′_j t)   (22)

and hence (20). But then p ∈ P^m(U). Since the p^i (i = 1, 2, …, ℓ) are generating elements of P^m(U), for some elements h_1, …, h_ℓ of P we have

    p = h_1 p^1 + … + h_ℓ p^ℓ,  and hence  p_j = h_1 p_j^1 + … + h_ℓ p_j^ℓ  (j = 1, 2, …, m).

Substituting for the p_j in (22) we get

    t = Σ_{i=1}^ℓ ( Σ_{j=1}^m p_j^i f_j(x) + Σ_{j=1}^m p_j^i t′_j t ) h_i.

Since, by hypothesis, x⁰ = u⁰ + v⁰ is a solution of system (21) for t_j = t_j⁰, u⁰ is a solution of that system for t_j = t′_j. But then substituting x = u⁰ and t = 1 in the last relation leads to a contradiction. This proves the first assertion of the theorem.

If system (19), for some values t″_j of the parameters t_j, had no solution in U, then we would have relation (22) with t″_j substituted for t′_j. But then the corresponding equation (20) would have a nonzero solution (p_1, …, p_m). If the U-convolution of system (19) is empty, this is not possible. This proves the second assertion of the theorem.
COROLLARY 5.9. A system (19) with a non-null space P^m(U) is consistent for given values of the parameters t_j if and only if one of its U-convolutions is consistent for those values.

COROLLARY 5.10. Let V be a direct complement of the subspace U in L(P). The solution set on V of a U-convolution of system (19), for given t_j, coincides with the projection on V of the solution set of system (19). If P^m(U) is a null space, the projection of the latter set coincides with V.

All the U-convolutions of system (19) having a non-null space P^m(U) have one and the same solution set for fixed values of the parameters. If W is a subspace of L(P) and U is a direct complement of W in L(P), then it follows that any U-convolution of system (18) satisfies the requirement of the problem stated above.

REMARK. Let v⁰ be a solution, in the subspace V, of a U-convolution of system (18). Then, taking a solution u⁰ of the system

    f_j(u) − (a_j − f_j(v⁰)) = 0  (j = 1, 2, …, m),

we get a solution u⁰ + v⁰ of system (18). The existence of such a solution u⁰ follows from theorem 5.9.

DEFINITION 5.10. Let U and U′ be two subspaces of L(P). If the U-convolution S of system (19) is non-empty and its U′-convolution S′ is non-empty, then S′ is called a convolution of system (19), or, more precisely, an iterated (U, U′)-convolution of the latter. If system S is empty or if its U′-convolution is empty, then the iterated (U, U′)-convolution of system (19) is empty. We can analogously define iterated (U, U′, U″)-convolutions of system (19), etc.
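Definition 5.9 and theorem 5.9 can be illustrated numerically: the generators of P^m(U) are the vectors annihilating the coefficients of the eliminated coordinate, and the resulting U-convolution retains the projections of the solutions. A sketch over Q with U = span(e_1), i.e. eliminating x_1 (the 3×3 data are hypothetical):

```python
from fractions import Fraction as F

# Coefficient rows of the linear functions f_j on Q^3 (hypothetical data).
A = [[F(1), F(2), F(-1)],
     [F(2), F(1), F(1)],
     [F(4), F(5), F(-1)]]
x_star = [F(1), F(2), F(3)]                                # a solution fixed in advance
t = [-sum(c * x for c, x in zip(row, x_star)) for row in A]  # so that f_j(x*) + t_j = 0

# Generators of P^m(U), U = span(e_1): all p with p . (first column) = 0.
# With a_11 != 0 the rows (a_21, -a_11, 0) and (a_31, 0, -a_11) suffice.
P = [[A[1][0], -A[0][0], F(0)],
     [A[2][0], F(0), -A[0][0]]]

conv_coeffs, conv_consts = [], []
for p in P:
    conv_coeffs.append([sum(p[j] * A[j][i] for j in range(3)) for i in range(3)])
    conv_consts.append(sum(p[j] * t[j] for j in range(3)))

# x_1 has been eliminated from every convolved equation ...
assert all(row[0] == 0 for row in conv_coeffs)
# ... and x* still solves the U-convolution, as theorem 5.9 predicts.
assert all(sum(c * x for c, x in zip(row, x_star)) + const == 0
           for row, const in zip(conv_coeffs, conv_consts))
```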
THEOREM 5.10. Every iterated (U, U′)-convolution of system (19) coincides with some (U + U′)-convolution of that system.

PROOF. For our proof we may suppose the parameters of system (19) to be free. Let S be some U-convolution of system (19) and let S′ be a U′-convolution of S. First we consider the case where both of them are non-empty. Let Ū be a direct complement of the intersection U ∩ U′ in U′, and let V be a direct complement of the subspace U″ = U + U′ = U + Ū in L(P). It is not hard to show that S′ is a Ū-convolution of S. By theorem 5.9 the set M(U, t) (t = (t_1, …, t_m)) of solutions of system S on the subspace Ū + V, for any values of the parameters t_j of system (19), is the projection of the solution set M(t) of the latter on that subspace. If system S is considered as a system of linear equations on Ū + V then, by theorem 5.9, the set of solutions of S′ on the subspace V coincides with the projection of the set M(U, t) on that subspace, and hence with the projection of M(t) on it.

If x⁰ = u⁰ + v⁰ (u⁰ ∈ Ū, v⁰ ∈ V) is a solution of system S′ for arbitrary values of the parameters t_j = t_j⁰ (j = 1, 2, …, m) for which it is consistent, then system (19) has a solution u″ + v⁰, and

    f_j(u″ + v⁰) + t_j⁰ = 0  (j = 1, 2, …, m).

Suppose now that q = (q_1, …, q_m) is an arbitrary solution of equation (20) with the functions f_j(x) considered on U″. Using the above equations and the relation

    q_1 f_1(x) + … + q_m f_m(x) = 0  (x ∈ U″)

we get

    Σ_{j=1}^m q_j f_j(x⁰) + Σ_{j=1}^m q_j t_j⁰ = 0,

since x⁰ is a solution of system S′ for values of the parameters t_j for which it is consistent. Then all solutions of system S′, regarded as a system on L(P) + P^m of elements [x, t_1, …, t_m] (x ∈ L(P), (t_1, …, t_m) ∈ P^m), satisfy the equation

    Σ_{j=1}^m q_j f_j(x) + Σ_{j=1}^m q_j t_j = 0.

Thus the left hand side of the latter may be expressed as a linear combination, with coefficients in P, of the left hand sides of system S′. But then the linear form q_1 t_1 + … + q_m t_m may be expressed as a linear combination, with coefficients in P, of the linear forms of the parameters t_1, …, t_m involved in S′. But q = (q_1, …, q_m) is an arbitrary element of the set P^m(U″) of solutions of (20) on U″. Thus the coefficient vectors of these forms constitute a generating system of that set. But then system S′ must be a U″-convolution of system (19), which is what we wanted to prove (since U″ = U + U′).

Suppose now that at least one of the two systems S and S′ is empty. Then system (19) has an empty U″-convolution. If system S′ is empty and system S is not empty then, by the second part of theorem 5.9, system S has a solution in U′, for any values of the t_j parameters
involved in it, and hence in U″ = U + U′. But then, by the first part of theorem 5.9, system (19) has a solution in U″ for any values of the parameters t_j involved in it (here systems S and (19) are considered as systems on U″). But this is possible only if the functions f_j(x) (j = 1, 2, …, m) are independent on U″, i.e. if the set P^m(U″) is null. In this case the U″-convolution of system (19) is empty. The theorem is proved.

We now present an algorithm for convoluting systems

    f_j(x) − a_j = a_{j1}x_1 + … + a_{jn}x_n − a_j = 0  (j = 1, 2, …, m)   (23)

on L(P) = P^n. Let U_1, …, U_n be the subspaces of P^n generated respectively by the vectors e_1 = (1, 0, …, 0), …, e_n = (0, 0, …, 1). Consider the process of obtaining the U_i-convolution of system (23). Here equation (20) takes the form

    a_{1i}p_1 + … + a_{mi}p_m = 0.

If a_{1i} ≠ 0, then the rows of the matrix

    ( a_{2i}  −a_{1i}    0     …     0    )
    ( a_{3i}    0     −a_{1i}  …     0    )
    (  …        …       …      …     …    )
    ( a_{mi}    0       0      …  −a_{1i} )

may be taken as the nonzero generating elements of the space P^m(U_i). With that choice, the U_i-convolution of system (23) coincides essentially with the Gaussian algorithm for the elimination of the unknown x_i. However, the generating elements of the space P^m(U_i) may be chosen in a different manner. For instance, if a_{ji} ≠ 0 (j = 1, 2, …, m), they may be taken to be the rows of the matrix

    ( a_{2i}  −a_{1i}    0      …     0         0       )
    (   0      a_{3i}  −a_{2i}  …     0         0       )
    (  …        …        …      …     …         …       )
    (   0       0        0      …   a_{mi}  −a_{m−1,i}  )

The algorithm for determining a U-convolution thus obtained does not coincide with the Gauss algorithm for eliminating the unknown x_i.

By theorem 5.10 the (U_i + U_k)-convolution reduces to a repeated application of the present algorithm: the unknowns x_i and x_k are eliminated sequentially. Of more interest, however, is the elimination of both unknowns at once by a single (U_i + U_k)-convolution, solving the auxiliary equation (20). For the present case it has the form

    p_1(a_{1i}x_i + a_{1k}x_k) + … + p_m(a_{mi}x_i + a_{mk}x_k) = 0

and reduces to the system

    a_{1i}p_1 + … + a_{mi}p_m = 0,
    a_{1k}p_1 + … + a_{mk}p_m = 0.   (24)

We assume that the rank of the latter is 2, since otherwise the U_i-convolution of system (23) eliminates the variables x_i and x_k at the same time. For simplicity we assume that i = 1 and k = 2; this can obviously be done without loss of generality.

Let Δ_{rs} denote the determinant

    Δ_{rs} = | a_{r1}  a_{s1} |
             | a_{r2}  a_{s2} |

Then, for Δ_{12} ≠ 0, we may take the rows of the matrix

    ( Δ_{23}  −Δ_{13}  Δ_{12}    0     …     0    )
    ( Δ_{24}  −Δ_{14}    0     Δ_{12}  …     0    )   (25)
    (   …        …       …       …     …     …    )
    ( Δ_{2m}  −Δ_{1m}    0       0     …   Δ_{12} )

as nonzero generating elements of the space P^m(U_1 + U_2) of solutions of system (24) (with i = 1 and k = 2). Analogously, the (U_i + U_k + U_ℓ)-convolution leads to a process for the simultaneous elimination of the unknowns x_i, x_k and x_ℓ, etc.
However, for the simultaneous elimination of more than two unknowns, this process is more complicated than the process of eliminating two unknowns at a time. On the other hand, the sequential pairwise elimination of unknowns may sometimes have disadvantages when it is compared with eliminating the unknowns one by one. Thus the optimal algorithm is one which combines pairwise with single elimination of unknowns.

EXAMPLE. Solve the system

    x_1 + x_2 − x_3 − x_4 + x_5 − 1 = 0,
    2x_1 − x_2 − x_3 + 2x_4 − x_5 − 3 = 0,
    −x_1 + 2x_2 + x_3 − 3x_4 + 4x_5 + 1 = 0,
    3x_1 + x_2 − 3x_3 + x_4 − 2x_5 = 0,
    4x_1 − 3x_2 + 4x_3 + 4x_4 − x_5 = 0.

Construct the matrix (25):

    ( 3  −3  3  0  0 )
    ( 1  −4  0  3  0 )
    ( 2  −7  0  0  3 )

Then we get the (U_1 + U_2)-convolution:

    3x_3 + 0·x_4 − 6x_5 + 3 = 0,
    6x_3 + 4x_4 + 3x_5 − 13 = 0,
    −3x_3 − 28x_4 + 12x_5 + 19 = 0.

For it, the matrix (25) has the form (−156, 84, 12), and the (U_1 + U_2 + U_3 + U_4)-convolution is

    1332x_5 − 1332 = 0.

We then get x_5 = 1. Substituting x_5 = 1 into the first two equations of the (U_1 + U_2)-convolution we get x_3 = x_4 = 1. Substituting x_3 = x_4 = x_5 = 1 into the first two equations of the original system, we get x_1 = x_2 = 1. Thus our system has the (unique) solution (1, 1, 1, 1, 1).

Now it only remains to note that convoluting a system (23) with respect to an arbitrary subspace U of P^n reduces to the present process. Indeed, if u_1, …, u_r is a minimal system of generating elements of the subspace U, then the U-convolution of system (23) reduces to eliminating the unknowns x′_1, …, x′_r from the system

    f_j(u_1)x′_1 + … + f_j(u_r)x′_r + f_j(u) − a_j = 0  (j = 1, 2, …, m).
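The double-elimination step via the matrix (25) can be sketched as follows. The sign convention (Δ_{2k}, −Δ_{1k}, Δ_{12}) used here is one choice that makes the x_1- and x_2-columns cancel; the four-equation data are hypothetical and built to vanish at x = (1, 1, 1, 1):

```python
from fractions import Fraction as F

def delta(A, r, s):
    """2x2 determinant formed from the x1 and x2 columns of rows r and s."""
    return A[r][0] * A[s][1] - A[s][0] * A[r][1]

def eliminate_two(A):
    """One (U1+U2)-convolution step: rows with x1 and x2 cancelled."""
    d12 = delta(A, 0, 1)
    assert d12 != 0                      # the rank-2 assumption from the text
    out = []
    for k in range(2, len(A)):
        d2k, d1k = delta(A, 1, k), delta(A, 0, k)
        out.append([d2k * a + (-d1k) * b + d12 * c
                    for a, b, c in zip(A[0], A[1], A[k])])
    return out

# Hypothetical rows (coefficients of x1..x4 | constant term).
rows = [[F(1), F(1), F(-1), F(2), F(-3)],
        [F(2), F(-1), F(1), F(1), F(-3)],
        [F(1), F(2), F(2), F(-1), F(-4)],
        [F(3), F(1), F(-2), F(1), F(-3)]]
conv = eliminate_two(rows)
assert all(r[0] == 0 and r[1] == 0 for r in conv)   # x1 and x2 eliminated at once
x = [F(1), F(1), F(1), F(1)]
assert all(sum(c * v for c, v in zip(r[:4], x)) + r[4] == 0 for r in conv)
```

Sequential single elimination would reach the same solution set, by theorem 5.10, at the cost of an intermediate system.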
§6. CONVOLUTIONS OF LINEAR INEQUALITY SYSTEMS THAT CONTAIN STRICT
INEQUALITIES.

In this section we consider the system

    f_j(x) − a_j < 0  (j = 1, 2, …, m′; m′ > 0),
    f_j(x) − a_j ≤ 0  (j = m′ + 1, …, m)   (26)

on L(P), where P is an ordered field.
DEFINITION 5.11. Let U be a subspace of L(P) and let C(U) be the composition cone of the system

    f_j(x) − a_j ≤ 0  (j = 1, …, m)   (27)

associated with system (26). Let

    c^i = (c_1^i, …, c_m^i)  (i = 1, …, k)

be a finite system of nonzero generating elements of the cone C(U). With an element c^i we associate the inequality

    Σ_{j=1}^m c_j^i f_j(x) − Σ_{j=1}^m c_j^i a_j ≤ 0

if c_1^i = c_2^i = … = c_{m′}^i = 0, and the inequality

    Σ_{j=1}^m c_j^i f_j(x) − Σ_{j=1}^m c_j^i a_j < 0

if at least one of the elements c_1^i, …, c_{m′}^i is nonzero. The system of all inequalities obtained in this way is called the U-convolution of system (26). If the cone C(U) is null, then the U-convolution defined by it is said to be empty. If the elements c^i (i = 1, …, k) constitute a basis of C(U), then the U-convolution of system (26) that they define is called a fundamental U-convolution of system (26). If U = L(P), the U-convolution is said to be full. For iterated convolutions of system (26), definition 5.7 remains valid.

Noting that system (26) is consistent if and only if, for some t > 0, the system

    f_j(x) − (a_j − t) ≤ 0  (j = 1, 2, …, m′),
    f_j(x) − a_j ≤ 0  (j = m′ + 1, …, m)   (28)

is consistent, and using the above definition, we may introduce for system (26) a set of propositions corresponding to the ones in §1 and §2. To this end, we first note the following obvious relationship between the U-convolutions of systems (26) and (28).
Consider all the inequalities of a non-empty U-convolution S(t) of system (28) that contain the parameter t (with nonzero coefficients). Turn them into strict inequalities and set t = 0; the resulting system is a U-convolution S of system (26). Conversely, if every strict inequality

    Σ_{j=1}^m c_j^i f_j(x) − Σ_{j=1}^m c_j^i a_j < 0

of a U-convolution S of (26) is changed into

    Σ_{j=1}^m c_j^i f_j(x) − Σ_{j=1}^m c_j^i a_j + t Σ_{j=1}^{m′} c_j^i ≤ 0,

then system S becomes a U-convolution S(t) of (28).

DEFINITION 5.12. The convolutions S and S(t) are said to be adjoint.

THEOREM 5.11. If system (26) is consistent, then every U-convolution of it, for any subspace U of L(P), is either empty or consistent. System (26) is consistent if at least one U-convolution of it, for at least one subspace U of L(P), is either empty or consistent.

PROOF. The first assertion follows directly from the definition of a U-convolution of system (26). We now prove the second assertion. Suppose a U-convolution S of system (26) is consistent and suppose x⁰ is one of its solutions. Then, clearly, there exists a t = t₀ > 0
such that the U-convolution S(t) of system (28) adjoint to S has x⁰ as a solution for t = t₀, and hence is consistent. But then so is system (26). If some U-convolution of system (26) is empty, then so is the corresponding U-convolution of system (28). But then system (28) is consistent for any value of t. Taking t to be positive, the consistency of system (26) follows directly. This completes the proof of the theorem.

COROLLARY 5.11. System (26) is inconsistent if and only if there exists an identically (on L(P)) zero combination

    Σ_{k=1}^s p_{j_k} f_{j_k}(x)   (s not fixed, s ≥ 1)

with positive coefficients which involves a system of functions f_{j_1}(x), …, f_{j_s}(x) of rank s − 1, includes at least one of the functions f_j(x) (j = 1, 2, …, m′), and satisfies the condition

    Σ_{k=1}^s p_{j_k} a_{j_k} ≤ 0.
The corollary follows from theorem 5.11 for U = L(P). Corollary 5.11 leads to a corollary of theorem 4.4.

COROLLARY 5.12. If system (26) is consistent, then each of its full convolutions is either empty or consistent. System (26) is consistent if at least one of its full convolutions is either empty or consistent.

REMARK. It follows from corollary 5.12 that the system f_j(x) < 0 (j = 1, 2, …, m) of homogeneous strict linear inequalities on L(P) is consistent if and only if it does not have any non-empty full convolution, i.e. if and only if the cone C(L(P)) of its L(P)-composition is null. For systems of linear inequalities on R^n, this proposition leads to the well-known theorem of Voronoi presented in corollary 2.8.

COROLLARY 5.13. If a consistent system (26) has a non-empty U-convolution with respect to some subspace U of L(P), then all of its U-convolutions are equivalent.

Let S be a non-empty U-convolution of a consistent system (26) (then all U-convolutions with respect to this particular subspace are non-empty) and let V be a direct complement of the subspace U in L(P). If x⁰ = u⁰ + v⁰ (u⁰ ∈ U, v⁰ ∈ V) is any solution of S, then so is v⁰.
If x + v⁰ is substituted for x in all of the inequalities of S, then S turns into a U-convolution S₀ of the system

    f_j(x) − (a_j − f_j(v⁰)) < 0  (j = 1, 2, …, m′),
    f_j(x) − (a_j − f_j(v⁰)) ≤ 0  (j = m′ + 1, …, m),

considered as a system on U. Since S₀ is consistent, it follows from theorem 5.11 that the latter system is consistent. If u⁰ is a solution of that system, then u⁰ + v⁰ is a solution of system (26), and hence, by the definition of U-convolutions of system (26), a solution of each of them. But then, by the same definition, the element x⁰ = u⁰ + v⁰ is a solution of each of the U-convolutions. Hence all U-convolutions, for this U, have the same solutions. This completes our proof.

REMARK. From the above arguments follows the proposition: if a U-convolution of system (26) is empty, and if v⁰ is any element of a direct complement V of U in L(P), then there exists a u⁰ ∈ U such that x⁰ = u⁰ + v⁰ is a solution of system (26).
COROLLARY 5.14. If the system

    f_j(x) − a_j ≤ 0  (j = 1, …, m)

of linear inequalities on L(P) is stably consistent, then every U-convolution of that system, with respect to any subspace U of L(P), is either empty or stably consistent. Conversely, such a system is stably consistent if at least one U-convolution of it, with respect to at least one subspace U of L(P), is either empty or stably consistent.
In view of theorem 5.5, any iterated (U, U′)-convolution of system (28) coincides with a (U + U′)-convolution of it. If we pass from these convolutions to the (U, U′)- and (U + U′)-convolutions of system (26) adjoint to them, then theorem 5.5 asserts that an iterated (U, U′)-convolution of system (26) coincides with a (U + U′)-convolution of that system. Hence the assertion of theorem 5.5 is valid for system (26).

Suppose that the index of an inequality in a U-convolution S of system (26) (the definition of index introduced in §1 remains valid for (26)) contains the index of another inequality in S. Then,
by corollary 5.5, the corresponding inequality in the U-convolution S(t) of system (28) adjoint to S may be eliminated from S(t) without changing the fact that S(t) is a U-convolution of system (28). Since the U-convolution of system (28) obtained in this way corresponds to an adjoint convolution of system (26), we have: if a U-convolution of system (26) includes an inequality whose index contains the index of another inequality in the convolution, then the former may be eliminated, and the remaining inequalities still constitute a U-convolution of system (26). Hence corollary 5.5 holds for system (26); but then so does corollary 5.6, which follows from it and from theorem 5.5.

The algorithm of restricted fundamental convolutions presented in §2 may be converted, by making the appropriate changes, into an algorithm for obtaining such convolutions of system (26) on P^n. It then yields sequentially the fundamental (U_1 + … + U_k)-convolutions of system (26) (see §2). Clearly the same changes convert the free convolution algorithm so that it can be used on system (26) (see remark 1 about the algorithm of restricted fundamental convolutions). The sequences of (U_1 + … + U_k)-convolutions thus obtained are, here as well as there, not necessarily fundamental.

At the end of this section we investigate the use of the algorithm of free convolutions to derive certain conditions for the consistency of systems of strict linear inequalities that are contained in the paper of Dines [1]. We give these conditions for systems over P^n, while Dines gives them for systems over R^n. Following Dines [1] we introduce:

DEFINITION 5.13. A matrix is said to be I-positive or I-negative relative to one of its columns if all elements of that column are positive or negative, respectively. In either case the matrix is said to be I-definite with respect to that column. If a matrix has a column with respect to which it is I-positive (I-negative), then it is said, simply, to be I-positive (I-negative), or I-definite.

DEFINITION 5.14. Assume that the matrix

    M = ( a_{11} … a_{1n} )
        (  …    …    …   )
        ( a_{m1} … a_{mn} )

is not I-definite with respect to its r₁-st column, that the elements a_{i r₁} of the latter are positive for i = i_1, …, i_p, negative for i = j_1, …, j_q, and zero for i = k_1, …, k_s. Then for each pair of elements a_{i r₁} and a_{j r₁}, where i is any of the numbers i_1, …, i_p and j is any of the numbers j_1, …, j_q, we form the rows of determinants

    a_t(i, j) = | a_{i r₁}  a_{i t} |   (t = 1, 2, …, r₁ − 1, r₁ + 1, …, n),
                | a_{j r₁}  a_{j t} |

and for the elements a_{k r₁}, where k is one of the numbers k_1, …, k_s, we form the rows of elements

    a_{k t}  (t = 1, 2, …, r₁ − 1, r₁ + 1, …, n).

From the rows thus obtained we form the matrix M₁^{r₁}, whose rows are ranked as follows: 1) any (i, j)-row precedes any k-row; 2) the (i, j)-rows are ordered lexicographically with respect to the pairs (i, j); 3) the k-rows are ordered according to k.

If M is a matrix which is I-definite with respect to its r₁-st column, then M₁^{r₁} denotes a matrix with one row and n − 1 columns, whose elements are all +1 or all −1 according to whether M is I-positive or I-negative with respect to its r₁-st column. The matrices M₁^{r₁} (r₁ = 1, 2, …, n) will be referred to as the (n−1)-column I-minors of the matrix M, or the I-complements of its r₁-st column. Let
    M₁^{r₁} = ( a^{r₁}_{1,1} … a^{r₁}_{1,r₁−1}  a^{r₁}_{1,r₁+1} … a^{r₁}_{1,n} )
              (     …              …                  …             …      )
              ( a^{r₁}_{m₁,1} … a^{r₁}_{m₁,r₁−1}  a^{r₁}_{m₁,r₁+1} … a^{r₁}_{m₁,n} )

The I-complement of that matrix determined by the column whose elements are a^{r₁}_{i r₂} is denoted by M₂^{r₁ r₂}. Proceeding analogously we can define the I-minors M₃^{r₁ r₂ r₃}, etc. By M_{p+1}^{r₁ r₂ … r_{p+1}} we denote the I-minors of M_p^{r₁ r₂ … r_p} (p = 1, 2, …, n − 1) determined by the column whose elements are a^{r₁ r₂ … r_p}_{i r_{p+1}}. We thus have a system of I-minors M_p^{r₁ r₂ … r_p} (p = 1, 2, …, n − 1), where r₁, r₂, …, r_p are p of the numbers 1, 2, …, n.

A matrix M has I-rank ℓ if it has at least one ℓ-column I-definite I-minor and does not have any I-definite (ℓ + 1)-column I-minors. If M has no I-definite I-minors, then its I-rank is zero. The I-rank of an I-definite matrix is n.

The results of Dines, noted above, may now be formulated in the following theorem.

THEOREM 5.12. A necessary and sufficient condition for the consistency of the system

    a_{j1}x_1 + … + a_{jn}x_n < 0  (j = 1, 2, …, m)   (29)

on P^n is that the I-rank of its matrix be different from zero. If the I-rank of the matrix of system (29) is ℓ (ℓ > 0), then system (29) has a solution in which ℓ − 1 of the unknowns may take
predetermined values.

PROOF. We note that if the matrix M is not I-definite with respect to its r₁-st column, then its I-complement M₁^{r₁} coincides with the matrix of the U_{r₁}-convolution of system (29) once the terms involving the unknown x_{r₁} are eliminated from this convolution's inequalities (their coefficients are equal to zero). If the matrix M₁^{r₁} is not I-definite with respect to the column whose elements are a^{r₁}_{i r₂}, then the I-complement M₂^{r₁ r₂} of that column coincides with the matrix of the (U_{r₁}, U_{r₂})-convolution of system (29) once the terms involving x_{r₁} and x_{r₂} are eliminated from the inequalities of that convolution (the coefficients of x_{r₁} and x_{r₂} are zero), etc. Notice that if s is the smallest number of components for which the (U_{r₁}, …, U_{r_s})-convolutions of (29) are empty, then n − s + 1 is the I-rank of the matrix M of system (29). Conversely, if n − s + 1 is the I-rank of M, then s is the smallest number of components for which these convolutions are empty. Here r₁, r₂, … are arbitrary numbers chosen from 1, 2, …, n.

Suppose now that system (29) is consistent. Then, since every (U_{r₁}, U_{r₂}, …, U_{r_p})-convolution of it coincides with the corresponding (U_{r₁} + U_{r₂} + … + U_{r_p})-convolution (this follows from the fact that theorem 5.5 applies to system (26)), it follows from the remark about corollary 5.12 that there exists a natural number h ≤ n such that the (U_{r₁}, U_{r₂}, …, U_{r_h})-convolution of system (29) is empty. We take h to be the smallest such number; then the column of coefficients of x_{r_h} in the system involved in the U_{r_h}-convolution (denote the latter by S) is I-definite. As we already noted, this means that the I-rank of M is n − h + 1, which is different from zero (h ≤ n). This completes the proof of necessity in the first part of the theorem.

Let ℓ − 1 unknowns other than x_{r_h} be given arbitrary values. In view of the I-definiteness of the column of coefficients of x_{r_h}, the value of the latter may be chosen so that it, together with the ℓ − 1 values of the other unknowns, satisfies system S. Each solution of S may be extended to a solution of system (29). Indeed, if S coincides with (29), this needs no proof. If S differs from (29), then it coincides with a (U_{r₁}, U_{r₂}, …, U_{r_{h−1}})-convolution of the latter and hence, in view of corollary 5.13, each of its solutions may be extended to a solution of (29). This establishes the second assertion of the theorem.

It remains to prove the sufficiency in the first part of the theorem. Assume the I-rank of M to be different from zero. Then, as we noted at the beginning of this proof, either we can find a (U_{r₁}, U_{r₂}, …, U_{r_p})-convolution of the system such that the following (U_{r₁}, U_{r₂}, …, U_{r_{p+1}})-convolution is empty, or it is the case that the column of coefficients of at least one of the unknowns consists entirely of positive elements or entirely of negative elements. In the first instance the consistency of system (29) follows from theorem 5.11; in the second case it is obvious. This proves the theorem.

From the proof it is clear that the theorem holds for non-homogeneous linear inequalities as well. Thus the following result (see Dines [1]) is valid: if the I-rank of the matrix of the system

    a_{j1}x_1 + … + a_{jn}x_n − a_j < 0  (j = 1, 2, …, m)   (30)

is ℓ (ℓ > 0), then this system has a solution in which ℓ − 1 of the unknowns may take arbitrary values.

As a corollary to theorem 5.12 we note the following result (included in Dines [1]): a necessary and sufficient condition for the stable consistency of system (30) is that the I-rank of the matrix

    ( a_{11} … a_{1n}  −a_1 )
    (  …    …    …     …  )
    ( a_{m1} … a_{mn}  −a_m )
    (  0    …    0     −1  )

be different from zero.
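Definition 5.14 together with theorem 5.12 amounts to a recursive consistency test for strict homogeneous systems: form I-complements until an I-definite column appears (consistent) or the columns are exhausted (inconsistent). A sketch in integer arithmetic; the elimination order (always the first column) and the examples are my own choices:

```python
def i_definite(M, c):
    """Is the matrix I-definite with respect to column c (Definition 5.13)?"""
    return all(r[c] > 0 for r in M) or all(r[c] < 0 for r in M)

def i_complement(M, c):
    """Rows of the I-complement of column c (Definition 5.14), for M not
    I-definite with respect to c; rows with a zero entry are kept as is."""
    pos = [r for r in M if r[c] > 0]
    neg = [r for r in M if r[c] < 0]
    zer = [r for r in M if r[c] == 0]
    rows = [[p[c] * n[t] - n[c] * p[t] for t in range(len(p)) if t != c]
            for p in pos for n in neg]
    rows += [[v for t, v in enumerate(r) if t != c] for r in zer]
    return rows

def strictly_consistent(M):
    """Does a_j1 x1 + ... + a_jn xn < 0 (all j) have a solution?"""
    if not M:
        return True
    if any(all(v == 0 for v in r) for r in M):
        return False                 # 0 < 0 is impossible
    if any(i_definite(M, c) for c in range(len(M[0]))):
        return True                  # push that unknown toward +- infinity
    if len(M[0]) == 1:
        return False
    return strictly_consistent(i_complement(M, 0))

assert strictly_consistent([[1, -1], [-1, 0]])    # x1 < x2 together with x1 > 0
assert not strictly_consistent([[1], [-1]])       # x < 0 and -x < 0
```

Each (i, j)-row is the positive combination a_{i r₁}·(row j) − a_{j r₁}·(row i), so the recursion is exactly the free-convolution (Fourier–Motzkin) elimination described above.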
This assertion follows from applying theorem 5.12 to the system

    a_{j1}x_1 + … + a_{jn}x_n − a_j t < 0  (j = 1, 2, …, m),
    −t < 0.

Recall here (see definition 1.5) that a supporting solution of system (20) is a positive solution (i.e. non-negative and non-zero) of system (20) such that the columns of the system's matrix corresponding to the solution's non-zero components are linearly independent. In
chapter I, at the end of §2, it was shown that the supporting solutions of system (20) coincide with the non-zero nodal solutions of the inequality system

    f_j(x) − a_j ≤ 0  (j = 1, 2, …, m),
    −f_j(x) + a_j ≤ 0  (j = 1, 2, …, m),
    −x_i ≤ 0  (i = 1, 2, …, n)   (21)

associated with it. It was also shown that system (20) has a supporting solution if and only if system (21) has a non-zero nodal solution.

If system (20) has a non-negative solution and at least one a_j is different from zero, it follows that it has at least one supporting solution. Indeed, if system (20) has a non-negative solution, then system (21) is consistent. Then, by theorem 1.1, it has at least one nodal solution, which cannot be a zero solution since, with some a_j ≠ 0, system (21) has no zero solutions. Since the set of non-negative solutions of system (20) coincides with the solution set M of system (21), the problem of minimizing f(x) considered here reduces to the problem of minimizing it on M. If the first problem has optimal solutions, they obviously coincide with the optimal solutions of the second. But then, by corollary 6.1 and the remark about it, at least one of the optimal solutions of the first problem coincides with a non-zero nodal solution of system (21). This, in view of the relationship between supporting solutions of system (20) and non-zero nodal solutions of system (21), establishes the proposition: if a canonical linear programming problem having no zero feasible solution has optimal solutions, then one of the supporting solutions of system (20) is an optimal solution.

If system (20) has zero solutions, then it does not have supporting solutions (for then all the a_j are zero), and the zero solution is the unique nodal solution of system (21). If the problem of minimizing f(x) on the set of non-negative solutions of system (20) or, equivalently, on the set of solutions of system (21), has optimal solutions, then, by corollary 6.1 and the remark about it, the zero vector is one of them. This proves:

THEOREM 6.8. Consider a canonical linear programming problem that has optimal solutions. At least one supporting solution is optimal if and only if the zero vector is not a feasible solution. If the zero vector is a feasible solution, then it is one of the optimal solutions.

The question of the existence of zero solutions of system (20), i.e. of the feasibility (and potential optimality) of the zero vector for the canonical linear programming problem, is equivalent to the question of the consistency of the system

    a_{1i}u_1 + … + a_{mi}u_m − b_i ≤ 0  (i = 1, 2, …, n).

In other words, the zero vector is optimal if and only if the above system is consistent. Some conditions for the optimality of a supporting solution of system (20) are given in
the next two theorems, which are well known for P = R.
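For P = R, the conditions below are, in modern terms, the reduced-cost test and the pivot step of the simplex method. A minimal sketch in exact rational arithmetic; the three-column problem is hypothetical, with the basis columns chosen to be identity columns so the decompositions (22) can be read off directly:

```python
from fractions import Fraction as F

# Hypothetical canonical problem: minimize b . x subject to A x = a, x >= 0,
# with the supporting solution (2, 3, 0) whose basis is columns 1 and 2.
A = [[F(1), F(0), F(1)],
     [F(0), F(1), F(1)]]
a = [F(2), F(3)]
b = [F(1), F(1), F(1)]
basic = [0, 1]                     # indices of the basis columns
x0 = [F(2), F(3), F(0)]

# Decomposition (22) of column 3 in the basis: A_3 = 1*A_1 + 1*A_2,
# read off directly here because the basis columns form the identity.
x_dec = {2: [A[0][2], A[1][2]]}

# The optimality test: z_i <= b_i for every non-basic i would mean optimal.
z = {i: sum(xj * b[j] for xj, j in zip(x_dec[i], basic)) for i in x_dec}
i0 = 2
assert z[i0] > b[i0]               # the test fails, so x0 is not optimal

# Pivot step: ratio test t0 = min x0_j / x_{j i0} over positive x_{j i0} ...
t0 = min(x0[j] / x_dec[i0][k] for k, j in enumerate(basic) if x_dec[i0][k] > 0)
x1 = [x0[0] - t0 * x_dec[i0][0], x0[1] - t0 * x_dec[i0][1], t0]
assert all(sum(A[r][j] * x1[j] for j in range(3)) == a[r] for r in range(2))

# ... and the objective drops by exactly t0*(z_i0 - b_i0).
f = lambda x: sum(c * v for c, v in zip(b, x))
assert f(x1) == f(x0) - t0 * (z[i0] - b[i0]) and f(x1) < f(x0)
```

For a general basis, the decomposition coefficients x_{ji} would be obtained by solving the linear systems (22), e.g. by the elimination algorithm of §5.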
THEOREM 6.9. Let (x⁰_1, …, x⁰_{r₀}, 0, …, 0) (r₀ ≤ r) be a supporting solution of system (20), and let A_i = (a_{1i}, …, a_{mi}) (i = 1, 2, …, r) be a maximal system of linearly independent vectors containing the vectors A_1, …, A_{r₀}. Let

    x_{1i}A_1 + … + x_{ri}A_r = A_i  (i = 1, 2, …, n)   (22)

be the decompositions of the vectors A_i defined by that system. If

    z_i = x_{1i}b_1 + … + x_{ri}b_r ≤ b_i  (i = 1, 2, …, n),

then the solution (x⁰_1, …, x⁰_{r₀}, 0, …, 0) is an optimal solution of the problem of minimizing the function f(x) = b_1x_1 + … + b_nx_n on the set of non-negative solutions of system (20).

PROOF. Let (x′_1, …, x′_n) be any non-negative solution of system (20) and let A_0 = (a_1, …, a_m), so that

    x′_1A_1 + … + x′_nA_n = A_0.   (23)

Furthermore let

    x′_1b_1 + … + x′_nb_n = z′.   (24)

We show that

    z_0 = x⁰_1b_1 + … + x⁰_{r₀}b_{r₀} ≤ z′.

In (24) replace each b_i by z_i. Since z_i ≤ b_i by hypothesis, we have

    x′_1z_1 + … + x′_nz_n ≤ z′.   (24′)

Substituting for the A_i in (23) from (22), we get

    Σ_{i=1}^n x′_i ( Σ_{j=1}^r x_{ji}A_j ) = A_0.

Changing the order of summation we also get

    ( Σ_{i=1}^n x′_i x_{1i} ) A_1 + … + ( Σ_{i=1}^n x′_i x_{ri} ) A_r = A_0.

Analogously, substituting for the z_i in (24′), we get

    ( Σ_{i=1}^n x′_i x_{1i} ) b_1 + … + ( Σ_{i=1}^n x′_i x_{ri} ) b_r ≤ z′.

Since the vectors A_1, …, A_r are linearly independent, the former decomposition of A_0 coincides with the decomposition

    A_1x⁰_1 + … + A_{r₀}x⁰_{r₀} + A_{r₀+1}·0 + … + A_r·0 = A_0

of that vector. Hence our inequality takes the form

    z_0 = x⁰_1b_1 + … + x⁰_{r₀}b_{r₀} ≤ z′.

Consequently z_0 is the minimal value of f(x) on the set of non-negative solutions of system (20), and thus (x⁰_1, …, x⁰_{r₀}, 0, …, 0) is an optimal solution of our problem. The theorem is proved.

THEOREM 6.10. Let a supporting solution x⁰ of system (20) have exactly r non-zero
coordinates, say $\vec x^0 = (x_1^0, \dots, x_r^0, 0, \dots, 0)$. If, in the notation of theorem 6.9, there exists an $i = i_0$ for which $z_{i_0} > b_{i_0}$, then the supporting solution is not optimal. Let, e.g., $i_0 = r + 1$. If at least one of the coefficients $x_{ji_0}$ in formula (22), taken for $i = i_0$, is positive, then for

$$t_0 = \min_{1\le j\le r}\frac{x_j^0}{x_{ji_0}}$$

(with the minimum taken over all $j$ for which $x_{ji_0}$ is positive) the vector

$$\vec x{\,}' = (x_1^0 - t_0x_{1i_0}, \dots, x_r^0 - t_0x_{ri_0}, t_0, 0, \dots, 0) \qquad (25)$$

is a supporting solution for which $f(\vec x{\,}') < f(\vec x^0)$. If all the $x_{ji_0}$ are non-positive, then the minimization problem has no optimal solution.

PROOF. For our choice of $t_0$, the vector $\vec x{\,}'$ is a non-negative solution of system (20) with $r' \le r$ positive coordinates. The vectors $\vec A_i$ corresponding to it are linearly independent, for otherwise the vector $\vec A_{i_0}$ would have two distinct decompositions in terms of the linearly independent vectors $\vec A_1, \dots, \vec A_r$ (one of them is defined by (22) for $i = i_0$). Consequently, $\vec x{\,}'$ is a supporting solution of system (20). Furthermore we have

$$f(\vec x{\,}') = f(\vec x^0) - t_0(z_{i_0} - b_{i_0}). \qquad (26)$$

Since $t_0 > 0$ and $z_{i_0} - b_{i_0} > 0$, we get $f(\vec x{\,}') < f(\vec x^0)$. Consequently, $\vec x^0$ is not an optimal solution of our problem.

If all the $x_{ji_0}$'s $(j = 1, 2, \dots, r)$ are non-positive, then we take the vector $\vec x{\,}'$ to be defined by formula (25) for an arbitrary $t_0 > 0$. Clearly, then, $\vec x{\,}'$ is a non-negative (generally non-supporting) solution of system (20). By (26) it is possible, for any $q \in P$, to find a positive $t_0$ such that $f(\vec x{\,}') < q$. But this means that, in this case, our problem has no optimal solutions.

For the case where each supporting solution of system (20) of rank $r > 0$ has exactly $r$ positive coordinates, theorems 6.9 and 6.10 define an algorithm which leads, in a finite number of steps, either to a supporting solution which is optimal or to the conclusion that no optimal solutions exist. This algorithm is known as the simplex method. A description of it may be found in any book on linear programming, e.g. the book by Gass [1]. For the canonical linear programming problem on $R^n$, the restriction concerning the number of positive coordinates of supporting solutions is not substantial. As is well known, we may take $\varepsilon > 0$ sufficiently small so that every supporting solution of the system

$$a_{j1}x_1 + \dots + a_{jn}x_n = a_j + \varepsilon a_{j1} + \varepsilon^2a_{j2} + \dots + \varepsilon^na_{jn} \quad (j = 1, 2, \dots, m)$$

whose rank is $r > 0$ has exactly $r$ positive coordinates (see Gass [1]). Thus by slightly changing the free terms of system (20) we obtain a system that satisfies our restriction.

The application of the simplex method starts by choosing a supporting solution of system
(20). We have given above (in chapter V, §3) a method for finding all the supporting solutions of this system. However, with that method, in order to find even a single supporting solution it is necessary to find all of them; thus the method of chapter V may require more computation than necessary. On the other hand, the method used in linear programming to obtain supporting solutions does not always accomplish this easily and quickly enough. Thus there may be some interest in the method presented below (see also Chernikov [9]) for finding nodal solutions of a system

$$T_j(\vec x) = a_{j1}x_1 + \dots + a_{jn}x_n - a_j \le 0 \quad (j = 1, 2, \dots, m) \qquad (27)$$

of linear inequalities on $P^n$ whose rank $r$ is not zero. As we noted at the beginning of this section, the problem of finding supporting solutions of system (20), in which we are interested, reduces to the problem of finding nodal solutions of a system of type (27). For $r = m$ the nodal solutions of system (27) coincide with the solutions of the equation system $T_j(\vec x) = 0$ $(j = 1, 2, \dots, m)$ and hence can be found in the usual way. Thus we assume that $r < m$.

Let $\vec x^0 = (x_1^0, \dots, x_n^0)$ be a solution of system (27) and let $\vec x = (x_1, \dots, x_n)$ be a vector that satisfies at least one of the inequalities $T_j(\vec x) \ge 0$. Let

$$\vec x(t) = \vec x^0 + t(\vec x - \vec x^0),$$

where $t^0$ is the smallest of the non-negative values of the parameter $t$ satisfying at least one of the equations $T_j(\vec x(t)) = 0$. Let

$$\vec x{\,}' = (x_1(t^0), \dots, x_n(t^0)) = (x_1', \dots, x_n')$$

and let $j = j'$ be the index of an equation for which $T_{j'}(\vec x(t^0)) = 0$. The vector $\vec x{\,}'$ is, obviously, a solution of system (27). We say that the solution $\vec x{\,}'$ is obtained by varying the solution $\vec x^0$ of system (27). If $a_{j'\ell} \ne 0$, then we say that varying $\vec x^0$ leads to the system

$$T_j^1(\vec x) = T_j(\vec x) - \frac{a_{j\ell}}{a_{j'\ell}}T_{j'}(\vec x) \le 0 \quad (1 \le j \le m,\ j \ne j') \qquad (28)$$

(not involving the unknown $x_\ell$) and to its solution $\vec x{\,}'$ with coordinates $x_i'$ $(1 \le i \le n,\ i \ne \ell)$.

Every solution of system (28) may, obviously, be extended to a solution of system (27) by adjoining to it the value of the coordinate $x_\ell$ obtained from the equation $T_{j'}(\vec x) = 0$ by substituting into it the coordinates of the solution of system (28).

If $\vec x^1$ is a solution of system (28), say the one obtained by varying the solution $\vec x^0$ of system (27), then the process of variation may be applied to it in turn. The procedure of sequential variation described here ends, obviously, at the $r$-th step ($r$ being the rank of system (27)), resulting in a system whose left-hand sides no longer contain the unknowns $x_1, \dots, x_n$. The system obtained at the $k$-th step $(k = 1, 2, \dots, r)$ may, without loss of generality, be written in the form
$$T_j^k(\vec x) = T_j^{k-1}(\vec x) - \frac{a_{jk}^{k-1}}{a_{kk}^{k-1}}T_k^{k-1}(\vec x) \le 0 \quad (j = k + 1, \dots, m), \qquad (29)$$

where the $a_{ji}^{k-1}$ are the coefficients of the forms $T_j^{k-1}(\vec x)$, and we assume here that $T_j^0(\vec x) = T_j(\vec x)$. A solution of this system, e.g. the solution obtained by varying the solution $\vec x^{k-1} = (x_k^{k-1}, \dots, x_n^{k-1})$ of the preceding system, is denoted by $\vec x^k = (x_{k+1}^k, \dots, x_n^k)$.

In the variation process we encounter the equations

$$T_k^{k-1}(\vec x) = 0 \quad (k = 1, 2, \dots, r). \qquad (30)$$

Since system (30) is obtained from

$$T_k(\vec x) = 0 \quad (k = 1, 2, \dots, r) \qquad (31)$$

by combining the equations of the latter according to the Gauss method, the two systems are equivalent, i.e. they have the same solutions.

Let $\vec x^{r-1} = (x_r^{r-1}, \dots, x_n^{r-1})$ be a solution of system (29) with $k = r - 1$ satisfying the condition $T_r^{r-1}(\vec x) = 0$; e.g. let it be the solution obtained in the $r$-th variation. Substituting this solution into the equation $T_{r-1}^{r-2}(\vec x) = 0$ we get a solution of that equation that satisfies system (29) for $k = r - 2$. Substituting the solution so obtained into the equation $T_{r-2}^{r-3}(\vec x) = 0$ we get a solution of that equation that satisfies system (29) for $k = r - 3$, etc. As a result we obtain a solution of system (30) that, obviously, satisfies system (27). Here each equation of system (30) may be considered as an equation in the $n$ unknowns $x_1, \dots, x_n$ with zero coefficients for the unknowns excluded from it. Since system (30) is equivalent to system (31), the solutions obtained here are solutions of (31) as well. Since the rank of the latter is clearly equal to the rank of system (27), the solutions we get are nodal solutions of system (27).

In connection with system (21), corresponding to system (20), some shortcuts are possible. In the first place, instead of system (21) we may take the system

$$a_{j1}x_1 + \dots + a_{jn}x_n - a_j \le 0 \quad (j = 1, 2, \dots, m), \qquad x_i \ge 0 \quad (i = 1, 2, \dots, n)$$

and assume that $\vec x^0$ is a solution of the latter that turns the inequalities with indices $j = 1, 2, \dots, m$ into equations. Second, we may take the first $r$ variations to be the first $r$ steps of the Gauss elimination algorithm as applied to the left-hand sides of the inequalities of system (21); the variation of the vector $\vec x^0 = (x_1^0, \dots, x_n^0)$ then simply consists of crossing out $r$ of its coordinates, one at each step, in some order. Third, in the further variations, in connection with the resulting system of inequalities, a new solution is obtained from the old one by changing only one of its coordinates.
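The optimality criterion of theorem 6.9 can be checked mechanically once a supporting solution and a basis of columns are fixed. The sketch below is an illustration under stated assumptions (a small hypothetical system (20) over the rationals; the names `cols`, `basis`, `solve` are ours, not the book's): it computes the decomposition coefficients of formula (22) by exact Gaussian elimination and tests $z_i \le b_i$ for every column.

```python
from fractions import Fraction as F

def solve(A, b):
    """Solve the square system A y = b by exact Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

# Hypothetical data for a small system (20): columns A_i, free terms a_j,
# and the cost coefficients b_i of f(x) = b1 x1 + ... + bn xn.
cols = [[F(1), F(0)], [F(0), F(1)], [F(1), F(1)]]
rhs = [F(2), F(3)]
b = [F(1), F(1), F(3)]
basis = [0, 1]                     # the supporting solution uses A_1, A_2

B = [[cols[j][r] for j in basis] for r in range(2)]   # basis matrix, by rows
x_basic = solve(B, rhs)            # basic coordinates of the supporting solution
checks = []
for i, col in enumerate(cols):
    d = solve(B, col)              # decomposition coefficients of formula (22)
    z = sum(dj * b[j] for dj, j in zip(d, basis))
    checks.append(z <= b[i])       # criterion z_i <= b_i of theorem 6.9
optimal = all(checks)
print("supporting solution", x_basic, "optimal:", optimal)
```

When some check fails with $z_{i_0} > b_{i_0}$, theorem 6.10 prescribes the pivot (25) with step $t_0$, which strictly decreases $f$; repeating the two steps is one way of phrasing an iteration of the simplex method.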
§4. DEGREE OF INCONSISTENCY OF A SYSTEM OF LINEAR INEQUALITIES.*

Let

$$f_j(x) - a_j \le 0 \quad (j = 1, 2, \dots, m) \qquad (32)$$

be an arbitrary system of linear inequalities on $L(P)$. If it is inconsistent, it is always possible to find a positive element $p \in P$ such that the associated system

$$f_j(x) - a_j \le p \quad (j = 1, 2, \dots, m)$$

is consistent. Hence the system

$$f_j(x) - a_j \le t \quad (j = 1, 2, \dots, m), \qquad (33)$$

where $t$ is a parameter with values in $P$, is consistent for suitable $t$. System (33) may be considered as a system of linear inequalities on $L(P) + P^1$, the direct sum of $L(P)$ and $P^1$. On the set $N$ of its solutions, the function

$$f(x, t) = t \quad ((x, t) \in L(P) + P^1)$$

is bounded below (since system (32) is inconsistent by hypothesis, $t$ cannot fall below zero on $N$). Hence, by theorem 2.1, $\min f(x, t)$ exists and is attained on $N$. Let

$$\min_{(x,t)\in N} f(x, t) = f(x^0, t_0) = t_0 \quad (x^0 \in L(P),\ t_0 \in P^1).$$

System (32) is inconsistent, hence $t_0 > 0$. By the definition of $t_0$ and by the attainment of $\min f(x, t)$ on $N$, the system

$$f_j(x) - a_j \le t_0 \quad (j = 1, 2, \dots, m) \qquad (34)$$

is consistent, while the system

$$f_j(x) - a_j \le t' \quad (j = 1, 2, \dots, m)$$

is inconsistent for any $t' < t_0$.

DEFINITION 6.5. If system (32) is inconsistent, then the element $t_0 \in P$ for which the system (34) associated with (33) is consistent but not stably consistent is called the degree of inconsistency of system (32).

From the above argument we conclude:

THEOREM 6.11. To each inconsistent system of linear inequalities on $L(P)$ there corresponds a unique positive element of $P$ which is the degree of inconsistency of that system.

To determine the degree of inconsistency of system (32) we may also proceed in the following manner. Consider the system

$$f_j(x) - a_j - t_j \le 0, \qquad t_j - t \le 0 \quad (j = 1, 2, \dots, m)$$

on the space $L(P) + P^{m+1}$, which is obviously consistent. On the set $K$ of its solutions, the function
*The present section is devoted, essentially, to the exposition of a result which is stated without proof in the article by Eremin [2]. Furthermore, the result is extended here to systems on an arbitrary linear space $L(P)$.
$$f(x, t_1, \dots, t_m, t) = t \quad (x \in L(P),\ (t_1, \dots, t_m, t) \in P^{m+1})$$

is bounded below (by the zero element of $P$). Hence, by theorem 2.1, $\min f(x, t_1, \dots, t_m, t)$ exists and is attained on $K$. Let

$$\min f(x, t_1, \dots, t_m, t) = f(x^0, t_1^0, \dots, t_m^0, \bar t_0) = \bar t_0.$$

It is easy to show that $\bar t_0 = t_0$. Indeed, clearly $\bar t_0 \le t_0$. On the other hand, $\bar t_0 = \max_{1\le j\le m} t_j^0$, and hence the system

$$f_j(x) - a_j \le \bar t_0 \quad (j = 1, 2, \dots, m)$$

is consistent. But this means $t_0 \le \bar t_0$, and hence $\bar t_0 = t_0$. Thus if $T$ is the set of vectors $(t_1, \dots, t_m)$ for which the system

$$f_j(x) - a_j \le t_j \quad (j = 1, 2, \dots, m)$$

is consistent, then

$$t_0 = \min_{(t_1, \dots, t_m)\in T}\ \max_{1\le j\le m} t_j.$$
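For systems over the reals the characterization of $t_0$ as a minimax can be evaluated numerically. The following sketch is our own illustration, not an algorithm from the book: it computes $\min_x \max_j (f_j(x) - a_j)$ for affine inequalities on $R^1$; since a maximum of affine functions is convex, a ternary search over a bracketing interval suffices.

```python
def degree_of_inconsistency(terms, lo, hi, iters=200):
    """min over x of max_j (f_j(x) - a_j) for affine f_j on R^1.

    terms: list of (c, a) pairs, each representing f_j(x) - a_j = c*x - a.
    The pointwise maximum of affine functions is convex, so ternary search
    on [lo, hi] locates its minimum; for an inconsistent system the minimum
    is positive and equals the degree of inconsistency.
    """
    def g(x):
        return max(c * x - a for c, a in terms)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) <= g(m2):
            hi = m2
        else:
            lo = m1
    return g((lo + hi) / 2)

# x - 1 <= 0 and -x + 2 <= 0 (i.e. x <= 1 and x >= 2) are inconsistent;
# relaxing both right-hand sides by 1/2 makes them jointly satisfiable.
t0 = degree_of_inconsistency([(1.0, 1.0), (-1.0, -2.0)], -10.0, 10.0)
print(t0)
```

For this pair the value found is $1/2$, attained at $x = 3/2$, where both relaxed inequalities hold with equality — in agreement with the unstable consistency of (34).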
THEOREM 6.12. The degree of inconsistency of an inconsistent system (32) of linear inequalities of rank $r > 0$ coincides with the largest of the degrees of inconsistency of its inconsistent subsystems of rank $r$ containing $r + 1$ inequalities. Among those subsystems there is a system

$$f_{j_k}(x) - a_{j_k} \le 0 \quad (k = 1, 2, \dots, r + 1)$$

whose degree of inconsistency is $t_0$ and such that the system

$$f_{j_k}(x) - a_{j_k} = t_0 \quad (k = 1, 2, \dots, r + 1) \qquad (35)$$

is consistent and its solutions satisfy system (34).

PROOF. Adjoin the inequality $-t \le 0$ to system (33). In view of the inconsistency of system (32) this leads to the system

$$f_j(x) - a_j - t \le 0 \quad (j = 1, 2, \dots, m), \qquad -t \le 0,$$

which is equivalent to system (33). Using theorem 6.2 we choose a nodal subsystem $S$ of that system such that the function $f(x, t) = t$ attains its minimum $t_0$ on the set of solutions of $S$. Since that minimum is attained at every nodal solution of $S$ (see theorem 6.2) and is distinct from zero, $S$ does not include the inequality $-t \le 0$. In view of the fact that the system under consideration has rank $r + 1$, $S$ has rank $r + 1$ and contains $r + 1$ inequalities. Hence it is of the form

$$f_{j_k}(x) - a_{j_k} - t \le 0 \quad (k = 1, 2, \dots, r + 1).$$

Since the minimum of the function $f(x, t)$ on the solution set of $S$ is attained and is equal to $t_0$, the inequality system

$$f_{j_k}(x) - a_{j_k} \le t_0 \quad (k = 1, 2, \dots, r + 1)$$

is consistent, while the system

$$f_{j_k}(x) - a_{j_k} \le t' \quad (k = 1, 2, \dots, r + 1)$$

is inconsistent for any $t' < t_0$. Consequently, the system

$$f_{j_k}(x) - a_{j_k} \le 0 \quad (k = 1, 2, \dots, r + 1)$$

is inconsistent (since $t_0 > 0$), and $t_0$ is its degree of inconsistency. But the degree of inconsistency of a subsystem of system (32) cannot exceed the degree of inconsistency of system (32). This completes the proof of the first part of the theorem.

By theorem 6.2, the boundary equations of the system

$$f_{j_k}(x) - a_{j_k} \le t_0 \quad (k = 1, 2, \dots, r + 1)$$

obtained here, i.e. the equations

$$f_{j_k}(x) - a_{j_k} = t_0 \quad (k = 1, 2, \dots, r + 1),$$
constitute a consistent system all of whose solutions satisfy system (34). This proves the second part of the theorem.

DEFINITION 6.6. Suppose $\ell_j(x)$ $(j = 1, 2, \dots, m)$ is a system of non-null linear functions of rank $r > 0$ defined on the space $L(P)$, and let $a_j$ $(j = 1, 2, \dots, m)$ be elements of $P$. Then the smallest variation of the equation system

$$\ell_j(x) - a_j = 0 \quad (j = 1, 2, \dots, m)$$

is the minimax

$$\ell_0 = \min_{x\in L(P)}\ \max_{1\le j\le m}\ |\ell_j(x) - a_j|.$$

COROLLARY 6.5. The smallest variation of an inconsistent linear equation system

$$\ell_j(x) - a_j = 0 \quad (j = 1, 2, \dots, m)$$

of rank $r > 0$ on $L(P)$ with non-null functions $\ell_j(x)$ coincides with the smallest variation of one of its subsystems of rank $r$ containing $r + 1$ equations.

PROOF. The minimax $\ell_0$ obviously coincides with the degree of inconsistency of the system

$$\ell_j(x) - a_j \le 0, \qquad -(\ell_j(x) - a_j) \le 0 \quad (j = 1, 2, \dots, m).$$

Suppose

$$\epsilon_{j_k}(\ell_{j_k}(x) - a_{j_k}) \le 0 \quad (k = 1, 2, \dots, r + 1),$$

where $\epsilon_{j_k}$ is a function of $j_k$ with values $+1$ or $-1$, is an inconsistent subsystem of the latter system having rank $r$ and satisfying the conditions of the second part of theorem 6.12. Then, by that theorem, the system

$$|\ell_{j_k}(x) - a_{j_k}| = \ell_0 \quad (k = 1, 2, \dots, r + 1)$$

is consistent, and the system

$$|\ell_{j_k}(x) - a_{j_k}| < \ell_0 \quad (k = 1, 2, \dots, r + 1)$$

is inconsistent. But then we have
$$\ell_0 = \min_{x\in L(P)}\ \max_{1\le k\le r+1}\ |\ell_{j_k}(x) - a_{j_k}|,$$

which completes the proof of the corollary.

REMARK. The existence and attainment of the minimax $\ell_0$ follow from theorem 6.12 (since, for $\ell_0 \ne 0$, $\ell_0$ is the degree of inconsistency of the system of linear inequalities at the beginning of the above proof). For the case of $R^n$ corollary 6.5 is well known (see Remez [1]).

DEFINITION 6.7. We say that a linear inequality system of rank $r > 0$ satisfies Haar's condition if every one of its subsystems with $r$ inequalities has rank $r$.

LEMMA 6.1. For an inconsistent system (32) satisfying Haar's condition, system (34) is equivalent to any system (35) that satisfies the conditions of theorem 6.12.

PROOF. Let
$$f_{j_k}(x) - a_{j_k} \le 0 \quad (k = 1, 2, \dots, r + 1)$$

be an inconsistent subsystem of rank $r$ of system (32) satisfying the requirement of the second part of theorem 6.12. Since $t_0$ is the degree of its inconsistency, the system

$$f_{j_k}(x) - a_{j_k} \le t_0 \quad (k = 1, 2, \dots, r + 1) \qquad (36)$$

is unstably consistent. Hence all of its solutions satisfy at least one of the equations $f_{j_k}(x) - a_{j_k} = t_0$, say the first one. But then the inequality $-f_{j_1}(x) + a_{j_1} + t_0 \le 0$ is a consequence of system (36). Thus, by theorem 2.4, there exist non-negative elements $p_1, \dots, p_{r+1}$ of $P$ such that, for all $x \in L(P)$, we have

$$-f_{j_1}(x) = p_1f_{j_1}(x) + p_2f_{j_2}(x) + \dots + p_{r+1}f_{j_{r+1}}(x),$$

and hence the relation

$$(1 + p_1)f_{j_1}(x) + p_2f_{j_2}(x) + \dots + p_{r+1}f_{j_{r+1}}(x) = 0$$

is an identity in $x \in L(P)$. By the Haar property, all of the coefficients here are nonzero.

By theorem 6.12, system (35), i.e. the system

$$f_{j_k}(x) - a_{j_k} = t_0 \quad (k = 1, 2, \dots, r + 1),$$

is consistent. Hence the relation

$$(1 + p_1)\big(f_{j_1}(x) - (a_{j_1} + t_0)\big) + p_2\big(f_{j_2}(x) - (a_{j_2} + t_0)\big) + \dots + p_{r+1}\big(f_{j_{r+1}}(x) - (a_{j_{r+1}} + t_0)\big) = 0$$

is an identity in $x \in L(P)$. But all of the coefficients here are positive. Hence all solutions of (36) satisfy (35), and hence (34) (see theorem 6.12). Thus systems (34) and (35) are equivalent, as we were to show.

The lemma implies the validity of the following proposition: if the degree of inconsistency of an inconsistent subsystem

$$f_{j_k}(x) - a_{j_k} \le 0 \quad (k = 1, 2, \dots, r + 1)$$

of rank $r > 0$ containing $r + 1$ inequalities of a system (32) of rank $r > 0$ satisfying Haar's condition coincides with the degree of inconsistency $t_0$ of the latter, then the system

$$f_{j_k}(x) - a_{j_k} = t_0 \quad (k = 1, 2, \dots, r + 1) \qquad (37)$$

is consistent and all of its solutions satisfy (34). Indeed, let

$$f_{j_k'}(x) - a_{j_k'} \le 0 \quad (k = 1, 2, \dots, r + 1)$$

be an inconsistent subsystem of system (32) which satisfies the condition of the second part of theorem 6.12. Then, by lemma 6.1, the system

$$f_{j_k'}(x) - a_{j_k'} = t_0 \quad (k = 1, 2, \dots, r + 1)$$

is equivalent to system (34). By theorem 6.12 and lemma 6.1, system (37) is equivalent to the system

$$f_{j_k}(x) - a_{j_k} \le t_0 \quad (k = 1, 2, \dots, r + 1). \qquad (38)$$

Since all solutions of system (38) satisfy system (34), they satisfy the system just displayed and hence satisfy system (37), which is equivalent to it. Since (37) and (38) are of equal rank, this means that they are equivalent. Consequently, all solutions of system (37) satisfy system (34).

By lemma 6.1 and its corollary we have:

THEOREM 6.13. If the degree of inconsistency of a subsystem

$$f_{j_k}(x) - a_{j_k} \le 0 \quad (k = 1, 2, \dots, r + 1)$$

of rank $r > 0$ with $r + 1$ inequalities of a system (32) of rank $r > 0$ satisfying Haar's condition coincides with the degree of inconsistency $t_0$ of the latter, then the corresponding system (37) is consistent and equivalent to system (34).
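Haar's condition of definition 6.7 is finitely checkable for a concrete system on $R^n$: every choice of $r$ rows of the coefficient matrix must itself have rank $r$. A small exact-arithmetic sketch (our own illustration; the helper names are not from the book):

```python
from fractions import Fraction as F
from itertools import combinations

def rank(rows):
    """Rank of a rational matrix, by Gaussian elimination."""
    M = [list(map(F, r)) for r in rows]
    rk, c = 0, 0
    ncols = len(M[0]) if M else 0
    while rk < len(M) and c < ncols:
        piv = next((r for r in range(rk, len(M)) if M[r][c] != 0), None)
        if piv is None:
            c += 1
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for r in range(len(M)):
            if r != rk and M[r][c] != 0:
                f = M[r][c] / M[rk][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[rk])]
        rk += 1
        c += 1
    return rk

def satisfies_haar(rows):
    """Definition 6.7: every r rows have rank r, where r = rank of the system."""
    r = rank(rows)
    return all(rank(sub) == r for sub in combinations(rows, r))

print(satisfies_haar([(1, 0), (0, 1), (1, 1)]))   # any two rows independent
print(satisfies_haar([(1, 0), (2, 0), (0, 1)]))   # first two rows dependent
```

The first system satisfies Haar's condition, the second does not; only for systems of the first kind do lemma 6.1 and theorem 6.13 apply.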
Concluding this section we consider the special case where system (32) has rank $r > 0$, contains $r + 1$ inequalities, and is of the form

$$f_j(x) - a_j = a_{j1}x_1 + \dots + a_{jn}x_n - a_j \le 0 \quad (j = 1, 2, \dots, r + 1). \qquad (39)$$

THEOREM 6.14. Let $\Delta$ be the determinant of the matrix obtained by bordering a nonzero minor of order $r$ of the matrix of system (39) with the column of the elements $a_j$ and the row of the matrix not represented in that minor, and let $A_j$ be the algebraic complement of $a_j$ in $\Delta$. If the system (39) is inconsistent, then the degree $t_0$ of its inconsistency is the element

$$t_0 = -\Delta\Big/\sum_{j=1}^{r+1}A_j$$

of the field $P$.

PROOF. From the facts that the system

$$f_j(x) - a_j \le t_0 \quad (j = 1, 2, \dots, r + 1)$$

is consistent and that the system

$$f_j(x) - a_j < t_0 \quad (j = 1, 2, \dots, r + 1)$$

is inconsistent, it follows that all solutions of the former satisfy at least one of the equations

$$f_j(x) - a_j = t_0 \quad (j = 1, 2, \dots, r + 1).$$

Thus if, in the determinant $\Delta$, we replace each $a_j$ by $a_j + t_0$, we get a zero determinant (see the remark about corollary 2.4). Expanding along the column of free terms, that determinant equals $\Delta + t_0\sum_{j=1}^{r+1}A_j$, and our assertion about $t_0$ follows immediately. Since $t_0 > 0$, theorem 6.14 implies that for an inconsistent system (39)

$$\Delta\Big/\sum_{j=1}^{r+1}A_j < 0.$$
However, this assertion also follows directly from theorem 2.8. In the article of Eremin [2] it was stated that this inequality is not only necessary but also sufficient for the inconsistency of system (39). The following example shows that this is not correct.

EXAMPLE. Consider the consistent system

$$x - 1 \le 0, \qquad y - 2 \le 0, \qquad \tfrac13x + \tfrac12y - 1 \le 0,$$

for which

$$\Delta = \begin{vmatrix} 1 & 0 & 1\\ 0 & 1 & 2\\ \tfrac13 & \tfrac12 & 1\end{vmatrix} = -\tfrac13, \qquad A_1 = \begin{vmatrix} 0 & 1\\ \tfrac13 & \tfrac12\end{vmatrix} = -\tfrac13, \qquad A_2 = -\begin{vmatrix} 1 & 0\\ \tfrac13 & \tfrac12\end{vmatrix} = -\tfrac12, \qquad A_3 = \begin{vmatrix} 1 & 0\\ 0 & 1\end{vmatrix} = 1,$$

so that

$$\Delta\Big/\sum_{j=1}^{3}A_j = \big(-\tfrac13\big)\Big/\big(-\tfrac13 - \tfrac12 + 1\big) = -2 < 0.$$
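The numbers in this example are easy to verify mechanically. The sketch below is an illustration only (`det` is a naive cofactor expansion, adequate for a $3\times 3$ matrix): it builds the bordered matrix of the example, the cofactors $A_j$ of the free-term column, and the ratio $\Delta/\sum A_j$.

```python
from fractions import Fraction as F

def det(M):
    """Determinant by cofactor expansion along the first row (exact, small M)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# Rows (a_j1, a_j2, a_j) of the example system of the text.
M = [[F(1), F(0), F(1)],
     [F(0), F(1), F(2)],
     [F(1, 3), F(1, 2), F(1)]]

delta = det(M)                               # the bordered determinant
cof = [(-1) ** (i + 2) * det([r[:2] for k, r in enumerate(M) if k != i])
       for i in range(3)]                    # cofactors A_j of the a_j column
ratio = delta / sum(cof)
print(delta, cof, ratio)                     # delta = -1/3, sum(cof) = 1/6
```

The ratio is $-2 < 0$ although the system is consistent, which is exactly the point of the example.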
In Eremin's paper [2] the assertion that the inequality in question is sufficient for inconsistency was said to follow from a corollary of theorem 3 of the article of Chernikov [2]. This reference is evidently an oversight, since that corollary is correct (it coincides with theorem 2.8 of this book). As a corollary of theorem 6.14 we have:

COROLLARY 6.6. If $\ell_j(x) = a_{j1}x_1 + \dots + a_{jn}x_n$ $(j = 1, 2, \dots, r + 1)$ is a system of non-null linear functions on $P^n$ of rank $r > 0$, and if $a_j$ $(j = 1, 2, \dots, r + 1)$ are some elements of $P$, then the minimax $\ell_0$ of the system of functions

$$|\ell_j(x) - a_j| \quad (j = 1, 2, \dots, r + 1)$$

coincides with

$$|\Delta|\Big/\sum_{j=1}^{r+1}|A_j|.$$

PROOF. If the equation system

$$\ell_j(x) - a_j = 0 \quad (j = 1, 2, \dots, r + 1)$$

is consistent, then the assertion is trivial since $\ell_0 = 0$ in that case. If that system is inconsistent, then $\ell_0$ coincides with the degree of inconsistency of the system
$$\ell_j(x) - a_j \le 0, \qquad -(\ell_j(x) - a_j) \le 0 \quad (j = 1, 2, \dots, r + 1),$$

and hence (see theorem 6.12) with the degree of inconsistency of one of its inconsistent subsystems

$$\epsilon_{j_k}(\ell_{j_k}(x) - a_{j_k}) \le 0 \quad (k = 1, 2, \dots, r + 1)$$

of rank $r$ ($\epsilon_{j_k}$ is a function of $j_k$ with values $+1$ and $-1$) which satisfies the requirement of the second part of theorem 6.12. But then, by theorem 6.14, we have

$$\ell_0 = -\Delta'\Big/\sum_{j=1}^{r+1}A_j',$$

where $\Delta'$ and $A_j'$ $(j = 1, 2, \dots, r + 1)$ are defined for this subsystem as $\Delta$ and $A_j$ are in theorem 6.14. Since the subsystem in question is inconsistent, all the nonzero $A_j'$'s have the same sign (see theorem 2.8). It follows, then, that

$$\ell_0 = |\Delta'|\Big/\sum_{j=1}^{r+1}|A_j'|.$$

If we associate with this subsystem the subsystem

$$\ell_{j_k}(x) - a_{j_k} \le 0 \quad (k = 1, 2, \dots, r + 1),$$

and if the determinants $\Delta$ and $A_j$ $(j = 1, 2, \dots, r + 1)$ are related to it in the same way as $\Delta'$ and $A_j'$ are related to the former, then $|\Delta| = |\Delta'|$ and $|A_j| = |A_j'|$ $(j = 1, 2, \dots, r + 1)$. But then

$$\ell_0 = |\Delta|\Big/\sum_{j=1}^{r+1}|A_j|,$$

which is what we were to show.

For $R^n$ the above proposition is well known (see Krein [1]). Furthermore, for that case all the results of this section are known (see Eremin [2]), except possibly the second part of theorem 6.12 (it is not included in Eremin's article).

§5. APPLICATION OF THE METHOD OF CONVOLUTIONS OF LINEAR INEQUALITY SYSTEMS TO LINEAR PROGRAMMING.
The maximization problem for a linear function $f(x)$ $(x \in L(P))$ and an inequality system

$$f_j(x) - a_j \le 0 \quad (j = 1, 2, \dots, m) \qquad (40)$$

on $L(P)$ may be considered as the problem of finding a solution $(x, t)$ of the system

$$f_j(x) - a_j \le 0 \quad (j = 1, 2, \dots, m), \qquad -f(x) + t \le 0 \qquad (41)$$

on $L(P) + P^1$ $(x \in L(P),\ t \in P^1)$ having the greatest value of $t$ or, equivalently, as the maximization problem for the function

$$\varphi(x, t) = t$$

and system (41) (the consistency of system (40) is not postulated here). We recall once again that minimizing a function $f(x)$ is the same as maximizing $-f(x)$. The solution of the problem of maximizing $\varphi(x, t)$ on system (41) may be obtained by applying the method of convolutions presented in the preceding chapter. Such an approach is based on the following theorem.

THEOREM 6.15. If the maximization problem for the function $f(x)$ and system (40) is solvable, then so is the maximization problem for the linear function $\varphi(x, t) = t$ $(x \in L(P),\ t \in P^1)$ and every U-convolution (in particular, a fundamental U-convolution) of system (41) for an arbitrary subspace U of $L(P)$; furthermore, the optimal values of the two problems are equal. If any U-convolution of system (41) for any subspace U of $L(P)$ is empty, then the maximization problem for the function $f(x)$ and the system (40) is unsolvable. If the maximization problem for the function $\varphi(x, t) = t$ and some U-convolution (in particular, a fundamental U-convolution) of system (41) for some subspace U of $L(P)$ is solvable, then so is the maximization problem for the function $f(x)$ and the system (40).

PROOF. Suppose the original maximization problem is solvable. Then there exists a solution $x = x^0$ of system (40) such that

$$f(x^0) = \max_{x\in M}f(x) = t^0,$$

where $M$ is the set of solutions of system (40). Thus $(x^0, t^0)$ is a solution of system (41) with the greatest value of $t$. Applying corollary 5.3', we conclude that every U-convolution of the system

$$f_j(x) - a_j \le 0 \quad (j = 1, 2, \dots, m), \qquad -f(x) + t^0 \le 0$$

for an arbitrary subspace U of $L(P)$ is either consistent or empty. If a given U-convolution of that system for a given subspace U were empty, then so would be the corresponding U-convolution of system (41) for any $t = t' > t^0$; but then, by corollary 5.3', system (41) would be consistent for $t = t'$, contradicting the maximality of $t^0$. Consequently, the U-convolution in question is not empty. If it were consistent for some $t'' > t^0$, then system (41) would be consistent for $t = t'' > t^0$, again contradicting the maximality of $t^0$. This proves the first assertion of the theorem. By the same arguments it is not hard to prove the second assertion. If the maximization problem for the function $\varphi(x, t) = t$ and some non-empty U-convolution with respect to some subspace U of $L(P)$ is solvable and $t^*$ is its optimal value, then by corollary 5.3' system (41) has a solution with $t = t^*$. It cannot have a solution with $t > t^*$, for otherwise that U-convolution (by the same corollary) would have a solution with $t > t^*$, which would contradict the maximality of $t^*$. This proves the third assertion of the theorem.

THEOREM 6.16. If the maximization problem for the linear function $f(x)$ and system (40) is solvable, then so are the maximization problems for the function $\varphi(x, t) = t$ and each total convolution of system (41); furthermore, the optimal values of these problems coincide. If any total convolution of system (41) is empty (in that case they are all empty), then the maximization problem for the function $f(x)$ and system (40) is unsolvable. If the maximization problem for the function $\varphi(x, t) = t$ and some total convolution of system (41) is solvable, then so is the maximization problem for the function $f(x)$ and the system (40).

The proof is analogous to the proof of the preceding theorem; it uses corollary 5.4 instead of corollary 5.3'.

COROLLARY 6.7. (For the case of $L(R)$ see Chernikov [6].) If the maximization problem for the function $f(x)$ and system (40) is solvable, then each total convolution of system (41) contains the unknown $t$ (i.e. at least one coefficient of $t$ in it is non-zero) and is consistent. If some total convolution of system (41) contains the unknown $t$ and is consistent, then the maximization problem for the function $f(x)$ and system (40) is solvable and its optimal value is the maximum value of $t$ satisfying the convolution in question. This total convolution is consistent, obviously, if system (40) is consistent.

To illustrate theorem 6.16 and its corollary 6.7 consider the following

EXAMPLE. Solve the maximization problem for the function

$$f(x) = 2x_1 + x_2 - x_3 - x_4$$

and the system

$$\begin{aligned} x_1 + x_2 - x_3 + x_4 - 3 &\le 0,\\ 2x_1 - x_2 - x_3 - x_4 + 1 &\le 0,\\ -x_1 + 2x_2 + x_3 - x_4 - 2 &\le 0,\\ -x_1 - x_2 + 2x_3 + x_4 - 2 &\le 0,\\ 3x_1 + x_2 - 3x_3 - 2x_4 + 1 &\le 0.\end{aligned}$$

First we construct a fundamental total convolution of the system

$$\begin{aligned} x_1 + x_2 - x_3 + x_4 - 3 &\le 0,\\ 2x_1 - x_2 - x_3 - x_4 + 1 &\le 0,\\ -x_1 + 2x_2 + x_3 - x_4 - 2 &\le 0,\\ -x_1 - x_2 + 2x_3 + x_4 - 2 &\le 0,\\ 3x_1 + x_2 - 3x_3 - 2x_4 + 1 &\le 0,\\ -2x_1 - x_2 + x_3 + x_4 + t &\le 0.\end{aligned}$$

Using the algorithm for a fundamental convolution presented in §2, chapter V, we get,
sequentially, a $U_1$-convolution, a $U_1 + U_3$-convolution, a $U_1 + U_3 + U_4$-convolution and a $U_1 + U_3 + U_4 + U_2$-convolution of the system. The last of these is a total fundamental convolution of the system. The fundamental $U_1$-convolution has the form

$$\begin{aligned} 3x_2 + 0x_3 + 0x_4 - 5 &\le 0 &&(1,3),\\ 0x_2 + x_3 + 2x_4 - 5 &\le 0 &&(1,4),\\ x_2 - x_3 + 3x_4 + t - 6 &\le 0 &&(1,6),\\ 3x_2 + x_3 - 3x_4 - 3 &\le 0 &&(2,3),\\ -3x_2 + 3x_3 + x_4 - 3 &\le 0 &&(2,4),\\ -2x_2 + 0x_3 + 0x_4 + t + 1 &\le 0 &&(2,6),\\ 7x_2 + 0x_3 - 5x_4 - 5 &\le 0 &&(3,5),\\ -2x_2 + 3x_3 + x_4 - 5 &\le 0 &&(4,5),\\ -x_2 - 3x_3 - x_4 + 3t + 2 &\le 0 &&(5,6).\end{aligned}$$

(Alongside each inequality we have listed its index; see chapter V, §2.) The fundamental $U_1 + U_3$-convolution has the form

$$\begin{aligned} 3x_2 + 0x_4 - 5 &\le 0 &&(1,3),\\ -2x_2 + 0x_4 + t + 1 &\le 0 &&(2,6),\\ 7x_2 - 5x_4 - 5 &\le 0 &&(3,5),\\ x_2 + 5x_4 + t - 11 &\le 0 &&(1,4,6),\\ -3x_2 + 0x_4 + 3t - 3 &\le 0 &&(4,5,6).\end{aligned}$$

The fundamental $U_1 + U_3 + U_4$-convolution has the form

$$\begin{aligned} 3x_2 - 5 &\le 0 &&(1,3),\\ -2x_2 + t + 1 &\le 0 &&(2,6),\\ -3x_2 + 3t - 3 &\le 0 &&(4,5,6).\end{aligned}$$

Finally, the fundamental $U_1 + U_3 + U_4 + U_2$-convolution has the form

$$\begin{aligned} 3t - 7 &\le 0 &&(1,2,3,6),\\ 3t - 8 &\le 0 &&(1,3,4,5,6).\end{aligned}$$

From this we find the optimal value $t_0 = 7/3$ of the original maximization problem. Substituting $t_0 = 7/3$ into the $U_1 + U_3 + U_4$-convolution, we find that $x_2 = 5/3$. Substituting these values into the $U_1 + U_3$-convolution we get $x_4 = 4/3$. Substituting these values into the $U_1$-convolution we find that $x_3 = 2$. Substituting all these values into the original system, we get $x_1 = 2$. Thus an optimal solution of the problem is $(2, 5/3, 2, 4/3)$.

For the canonical maximization problem for the function $f(x)$ and the system

$$a_{j1}x_1 + \dots + a_{jn}x_n - a_j = 0 \quad (j = 1, 2, \dots, m), \qquad x_i \ge 0 \quad (i = 1, 2, \dots, n),$$

the method of this section may be applied upon replacing the above system by the inequality system

$$a_{j1}x_1 + \dots + a_{jn}x_n - a_j \le 0 \quad (j = 1, 2, \dots, m), \qquad -\sum_{j=1}^m(a_{j1}x_1 + \dots + a_{jn}x_n - a_j) \le 0, \qquad x_i \ge 0 \quad (i = 1, 2, \dots, n).$$
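The convolution steps of the example above can be reproduced by a program. The sketch below is a plain Fourier–Motzkin elimination in exact arithmetic: unlike the fundamental convolution of chapter V it keeps every pairwise combination, and it eliminates the unknowns in natural order, so the intermediate systems are larger than the ones printed in the example — but the projection onto $t$, and hence the optimal value, is the same.

```python
from fractions import Fraction as F

def eliminate(rows, k):
    """One convolution step: eliminate unknown k from rows meaning c.x + d <= 0."""
    zero = [r for r in rows if r[k] == 0]
    pos = [r for r in rows if r[k] > 0]
    neg = [r for r in rows if r[k] < 0]
    out = list(zero)
    for p in pos:
        for n in neg:
            # the positive multipliers -n[k] and p[k] cancel the k-th unknown
            out.append([-n[k] * a + p[k] * b for a, b in zip(p, n)])
    return out

# Rows (x1, x2, x3, x4, t, 1) of the example system, together with
# -f(x) + t <= 0 for f(x) = 2x1 + x2 - x3 - x4.
system = [[F(v) for v in row] for row in [
    [ 1,  1, -1,  1, 0, -3],
    [ 2, -1, -1, -1, 0,  1],
    [-1,  2,  1, -1, 0, -2],
    [-1, -1,  2,  1, 0, -2],
    [ 3,  1, -3, -2, 0,  1],
    [-2, -1,  1,  1, 1,  0],
]]
rows = system
for k in range(4):                 # convolve away x1, x2, x3, x4 in turn
    rows = eliminate(rows, k)
# what remains are inequalities c*t + d <= 0; upper bounds come from c > 0
t_max = min(-r[5] / r[4] for r in rows if r[4] > 0)
print("optimal value:", t_max)
```

The smallest upper bound on $t$ surviving the four eliminations is $7/3$, the optimal value found in the text.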
Theorem 6.15 allows us to reduce a linear programming problem to a linear programming problem with a smaller number of unknowns (passing to a U-convolution, in particular a fundamental U-convolution, of system (41)). Corollary 6.7 may be used to solve the following problem. Let the free terms of the system

$$a_{j1}x_1 + \dots + a_{jn}x_n - t_j \le 0 \quad (j = 1, 2, \dots, m)$$

be parameters with values in $P$. We are required to determine the set of values of these parameters for which the maximization problem for the function $b_1x_1 + \dots + b_nx_n$ and the above system is solvable. First we construct a fundamental $U_1 + \dots + U_n$-convolution of the system

$$a_{j1}x_1 + \dots + a_{jn}x_n - t_j \le 0 \quad (j = 1, 2, \dots, m), \qquad -b_1x_1 - \dots - b_nx_n + t \le 0. \qquad (42)$$

Let us write this convolution in the form

$$-h_{j1}t_1 - \dots - h_{jm}t_m + h_jt \le 0 \quad (j = 1, 2, \dots, k) \qquad (43)$$

(all $h_{ji} \ge 0$ and $h_j \ge 0$). By corollary 6.7, the posed problem is solvable if and only if the convolution in question contains $t$ as an unknown (i.e. at least one coefficient of $t$ in it is non-zero). We assume that system (43) is derived from system (42) via a process of fundamental convolutions, and we denote the convolutions involved in this process by

$$S_1(x_2, \dots, x_n, t_1, \dots, t_m, t), \dots, S_{n-1}(x_n, t_1, \dots, t_m, t).$$

Let $(t_1^0, \dots, t_m^0, t^0)$ be a solution of system (43) with

$$t^0 = \min_j\frac{h_{j1}t_1^0 + \dots + h_{jm}t_m^0}{h_j},$$

where the minimum is taken over the $j$ for which $h_j$ is positive. Substituting the vector $(t_1^0, \dots, t_m^0, t^0)$ into the convolution $S_{n-1}(x_n, t_1, \dots, t_m, t)$, we get a vector $(x_n^0, t_1^0, \dots, t_m^0, t^0)$ that satisfies it. Substituting that vector into $S_{n-2}(x_{n-1}, x_n, t_1, \dots, t_m, t)$, we get a vector $(x_{n-1}^0, x_n^0, t_1^0, \dots, t_m^0, t^0)$ that satisfies that convolution, and so on. Finally we get a vector $(x_1^0, \dots, x_n^0)$ which is a solution of the maximization problem for the function $b_1x_1 + \dots + b_nx_n$ and the system

$$a_{j1}x_1 + \dots + a_{jn}x_n - t_j^0 \le 0 \quad (j = 1, 2, \dots, m),$$

and the optimal value of that problem is $t^0$.
From the above considerations we conclude that if $(t_1^0, \dots, t_m^0, t^0)$ is an arbitrary solution of system (43), then the maximization problem for the function $b_1x_1 + \dots + b_nx_n$ and the system

$$a_{j1}x_1 + \dots + a_{jn}x_n - t_j^0 \le 0 \quad (j = 1, 2, \dots, m)$$

is solvable and has the optimal value

$$\min_j\frac{h_{j1}t_1^0 + \dots + h_{jm}t_m^0}{h_j},$$

where the minimum is taken over the $j$ with $h_j > 0$. Thus the region which we wish to determine is defined, in that sense, by system (43). If all of the $h_j$'s are positive, then our argument leads to the conclusion that the maximization problem in question is solvable for all values of the parameters $(t_1, \dots, t_m)$ in $P$. If $h_{j_k}$ $(k = 1, 2, \dots, \ell)$ are all the zero $h_j$'s, then the region in question coincides with the set of solutions of the system

$$-h_{j_k1}t_1 - \dots - h_{j_km}t_m \le 0 \quad (k = 1, 2, \dots, \ell).$$
We now go on to consider a well-known generalization of the linear programming problem.

DEFINITION 6.8. The problem of finding the greatest value $t_0$ of the parameter $t$ for which the system

$$f_j(x) - a_j \le 0 \quad (j = 1, 2, \dots, m), \qquad -f^{(k)}(x) + t \le 0 \quad (k = 1, 2, \dots, \ell) \qquad (44)$$

on $L(P) + P^1$ is consistent is called the problem of upper optimization for the system of linear functions

$$f^{(1)}(x), \dots, f^{(\ell)}(x) \quad (x \in L(P)) \qquad (45)$$

and system (40). The value $t_0$ is called the optimal value of this problem. For $L(P) = R^n$ this problem was studied in Kantorovich's monograph [3]. The problem may be considered as a maximization problem for the function $\varphi(x, t) = t$ $((x, t) \in L(P) + P^1)$ and system (44); thus we may apply to it the same arguments that we used in proving theorem 6.15. Doing this we get:

THEOREM 6.15*. If the upper optimization problem for the system (45) of functions and system (40) is solvable, then so is the maximization problem for the linear function $\varphi(x, t) = t$ $((x, t) \in L(P) + P^1)$ and every U-convolution (in particular, a fundamental U-convolution) of system (44) with respect to any subspace U of $L(P)$; furthermore, the optimal values of all these problems are the same. If any U-convolution of system (44) for any subspace U of $L(P)$ is empty, then the upper optimization problem in question is not solvable. If the maximization problem for $\varphi(x, t) = t$ and some U-convolution (in particular, a fundamental U-convolution) of system (44) with respect to some subspace U of $L(P)$ is solvable, then so is the upper optimization problem for the system (45) of functions and the system (40) of inequalities.

It is not hard to show that theorem 6.16 and its corollary 6.7 also carry over to the upper optimization problem. The latter is stated as follows:
COROLLARY 6.7*. If the upper optimum problem for the system (45) of functions and the system (40) of inequalities is solvable, then every total convolution of system (44) contains the unknown t (i.e., at least one of the coefficients of t is nonzero) and is consistent. If a total convolution of system (44) contains the unknown t and is consistent, then the upper optimum problem in question is solvable and its optimal value is the maximum value of t satisfying that total convolution.

This total convolution is, clearly, consistent if system (40) is consistent. Using theorem 6.1, it is not hard to prove the following generalization of it.

THEOREM 6.17. In order that the upper optimum problem for the system (45) of functions and the system (40) of inequalities be solvable, it is necessary and sufficient that there exist non-negative elements p_1, ..., p_m of P and non-negative elements q_1, ..., q_ℓ, not all of which are zero, such that the relation

∑_{k=1}^{ℓ} q_k f^(k)(x) = ∑_{j=1}^{m} p_j f_j(x) (46)

holds as an identity in x ∈ L(P).
PROOF. Sufficiency. The relation

t = ∑_{k=1}^{ℓ} (q_k/(q_1 + ... + q_ℓ))(−f^(k)(x) + t) + ∑_{j=1}^{m} (p_j/(q_1 + ... + q_ℓ)) f_j(x),

being equivalent to (46), implies, by theorem 6.1, that the function f(x, t) = t has a maximum value on the set of solutions of the consistent system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m), −f^(k)(x) + t ≤ 0 (k = 1, 2, ..., ℓ). (47)

By theorem 6.1, this maximum is attained in this set. But this means that one of the solutions (x, t) has the greatest value of t, which proves sufficiency.

Necessity. If our problem is solvable, then the function f(x, t) = t has (and attains) a largest value on the set of solutions of the consistent system (47). But then, by theorem 6.1, there exist non-negative elements p_1, ..., p_m, q_1, ..., q_ℓ of P such that the relation

t = ∑_{k=1}^{ℓ} q_k(−f^(k)(x) + t) + ∑_{j=1}^{m} p_j f_j(x)

is an identity in (x, t). But then the relation

t = ∑_{k=1}^{ℓ} q_k t

is an identity in t, which implies that q_1 + ... + q_ℓ = 1. Thus at least one q_k is distinct from zero. Hence, if our problem is solvable, then there exists a relation (46) which holds identically in x ∈ L(P) with non-negative p_1, ..., p_m, q_1, ..., q_ℓ and at least one positive q_k. This proves necessity.
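The setting of definition 6.8 — find the greatest t with f_j(x) − a_j ≤ 0 and t ≤ f^(k)(x) — is, over R^n, an ordinary linear program. The following sketch is purely illustrative: the data are hypothetical, and scipy's LP solver stands in for the convolution arguments of the text.

```python
import numpy as np
from scipy.optimize import linprog

# Upper-optimum problem over R^n: maximize t subject to
# A x <= a (the rows f_j(x) - a_j <= 0) and t <= (B x)_k (the -f^(k)(x) + t <= 0).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # hypothetical f_j rows
a = np.array([4.0, 0.0, 0.0])
B = np.array([[1.0, 0.0], [0.0, 1.0]])                # hypothetical f^(k) rows

m, n = A.shape
l = B.shape[0]
c = np.zeros(n + 1)
c[-1] = -1.0                                   # variables z = (x, t); minimize -t
A_ub = np.block([[A, np.zeros((m, 1))],        # A x <= a
                 [-B, np.ones((l, 1))]])       # -B x + t <= 0
b_ub = np.concatenate([a, np.zeros(l)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
print(-res.fun)  # optimal value T0 = max_x min_k f^(k)(x); 2.0 for this data
```

For this data the optimum is attained at x = (2, 2) with T_0 = 2, the largest value of min(x_1, x_2) on the triangle x_1 + x_2 ≤ 4, x ≥ 0.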
THEOREM 6.18. (See Chernikov [15].) The element T_0 = min_{i ≤ ℓ} f^(i)(x_0), where x_0 is a solution of the consistent system (40), is the optimal value of the upper optimum problem for the system (45) of functions and system (40) if and only if there exist non-negative elements p_1, ..., p_m, q_1, ..., q_ℓ of P satisfying q_1 + ... + q_ℓ > 0, p_j = 0 for f_j(x_0) − a_j ≠ 0 and q_i = 0 for f^(i)(x_0) ≠ T_0, and such that the relation (46) is an identity in L(P).
PROOF. Let T_0 be an optimal value. First we note that the dual of our problem is the minimization problem for the function a_1 p_1 + ... + a_m p_m and the system

∑_{j=1}^{m} p_j f_j(x) + ∑_{i=1}^{ℓ} q_i(−f^(i)(x) + t) = t,
p_j ≥ 0 (j = 1, 2, ..., m), q_i ≥ 0 (i = 1, 2, ..., ℓ).

If the original problem has a solution, then so does this problem, and its optimal value coincides with the optimal value T_0 of the original problem (see theorem 6.4). Hence, there exist non-negative numbers p_1, ..., p_m, q_1, ..., q_ℓ that satisfy (46) and are such that

a_1 p_1 + ... + a_m p_m − T_0(q_1 + ... + q_ℓ) = 0, q_1 + ... + q_ℓ = 1

(and hence q_1 + ... + q_ℓ > 0). Using relation (46) and the first of these equations, we conclude that

∑_{j=1}^{m} p_j(f_j(x) − a_j) + ∑_{i=1}^{ℓ} q_i(−f^(i)(x) + T_0) = 0 (48)

is an identity in x ∈ L(P). Substituting x = x_0 in this relation, we conclude that the coefficients of all of those terms that are not zero at x = x_0 must themselves be zero. This proves necessity.
Now let p_1, ..., p_m, q_1, ..., q_ℓ be non-negative numbers satisfying the conditions of the theorem. Then for x = x_0, relation (48) holds, and hence, by (46), we have

∑_{j=1}^{m} p_j a_j − T_0 ∑_{i=1}^{ℓ} q_i = 0.

From this and from (46), it follows that relation (48) is an identity in x ∈ L(P). If T_0 is not the optimal value for our upper optimum problem, then there exists an element x′ such that

f_j(x′) − a_j ≤ 0 (j = 1, 2, ..., m),
−f^(i)(x′) + T_0 < 0 (i = 1, 2, ..., ℓ).

It follows from the fact that q_1 + ... + q_ℓ > 0 that at least one of q_1, ..., q_ℓ is not zero. Thus, setting x = x′ in (48), the left hand side of (48) becomes negative, which is not possible. This proves our theorem.
For the inequality system

f_j(x) − a_j = a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0 (j = 1, 2, ..., m) (49)

on the space P^n and the system

f^(i)(x) = b_{i1}x_1 + ... + b_{in}x_n (i = 1, 2, ..., ℓ) (50)

of functions, the above theorem has the following form.

THEOREM 6.18*. The element T_0 = min_{i ≤ ℓ} f^(i)(x_0), where x_0 = (x_1^0, ..., x_n^0) is a solution of system (49), is an optimal value of the upper optimum problem for the system (50) of functions and system (49) of inequalities if and only if there exist non-negative elements p_1, ..., p_m, q_1, ..., q_ℓ of the field P satisfying the conditions q_1 + ... + q_ℓ > 0, p_j = 0 for f_j(x_0) ≠ a_j and q_i = 0 for f^(i)(x_0) ≠ T_0, and such that the relations

∑_{j=1}^{m} a_{jk} p_j = ∑_{i=1}^{ℓ} b_{ik} q_i (k = 1, 2, ..., n)

are satisfied.
For the system

f_j(x) − a_j = a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0 (j = 1, 2, ..., m), −x_i ≤ 0 (i = 1, 2, ..., n) (51)

(on P^n) the following is valid.

COROLLARY 6.8. In order that the element T_0 = min_{i ≤ ℓ} f^(i)(x_0), where x_0 = (x_1^0, ..., x_n^0) is a solution of system (51), be an optimal value for the upper optimum problem for the system (50) of functions and the system (51) of inequalities, it is necessary and sufficient that there exist non-negative numbers p_1, ..., p_m, q_1, ..., q_ℓ satisfying q_1 + ... + q_ℓ > 0, p_j = 0 for f_j(x_0) ≠ a_j and q_i = 0 for f^(i)(x_0) ≠ T_0, such that

∑_{j=1}^{m} a_{jk} p_j ≥ ∑_{i=1}^{ℓ} b_{ik} q_i (k = 1, 2, ..., n)

and

∑_{j=1}^{m} a_{jk} p_j = ∑_{i=1}^{ℓ} b_{ik} q_i for x_k^0 > 0.
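The complementary-slackness certificate of theorem 6.18* and corollary 6.8 can be tested mechanically: fix the multipliers to zero off the active constraints and search for a feasible (p, q). A sketch with hypothetical data, using scipy's LP solver purely as a feasibility oracle:

```python
import numpy as np
from scipy.optimize import linprog

# Certificate check at a candidate x0: seek p, q >= 0 with sum q = 1,
# A^T p = B^T q, p_j = 0 off the tight rows of A x <= a, and q_i = 0
# wherever f^(i)(x0) exceeds T0.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # hypothetical data
a = np.array([4.0, 0.0, 0.0])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
x0 = np.array([2.0, 2.0])
T0 = min(B @ x0)

m, l, n = len(a), len(B), len(x0)
active_p = np.isclose(A @ x0, a)       # p_j may be nonzero only on tight rows
active_q = np.isclose(B @ x0, T0)      # q_i may be nonzero only where the min is hit
A_eq = np.vstack([np.hstack([A.T, -B.T]),
                  np.hstack([np.zeros(m), np.ones(l)])[None, :]])
b_eq = np.concatenate([np.zeros(n), [1.0]])
bounds = [(0, None) if act else (0, 0)
          for act in np.concatenate([active_p, active_q])]
res = linprog(np.zeros(m + l), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("certificate found:", res.status == 0)  # True: x0 is optimal here
```

For this data the certificate p = (1/2, 0, 0), q = (1/2, 1/2) exists, confirming that x_0 = (2, 2) with T_0 = 2 is optimal.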
For the case of system (51) on R^n, this theorem is not new (see Kantorovich [3], p. 282).

In this section we shall also consider the maximization (minimization) problem for the linear function f(x) (on L(P)) and the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m′), f_j(x) − a_j < 0 (j = m′ + 1, ..., m) (52)

on L(P), which includes m − m′ strict inequalities, m′ < m. We also consider the more general problem of upper optimum for such a system. The maximum problem here reduces to the problem of finding a solution (x, t) of the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m′), f_j(x) − a_j < 0 (j = m′ + 1, ..., m), −f(x) + t ≤ 0 (53)

with the greatest t. The solution of the above problem may be obtained by using the results of §6 of the preceding chapter. In particular, using theorem 5.11 and corollary 5.12 of §6 we can show that theorems 6.15 and 6.16 of the present section apply to the
maximum problem for the linear function f(x) and system (52). Further, it follows that the solvability of the maximum problem implies that system (52) is consistent and that there exists a solution x = x_0 of that system such that f(x_0) is the greatest value of f(x) on the set of solutions of system (52). While the solvability of our problem for a consistent system (40) follows from the boundedness of the function f(x) over the set of solutions of that system, this solvability criterion loses its force here. Using theorem 6.16 for the problem at hand, corollary 6.7 may be reformulated as follows.

THEOREM 6.19. (For L(R), see Chernikov [12].) If the maximum problem for the linear function f(x) and the system (52) is solvable, then each total convolution of system (53) includes t as an unknown (i.e., at least one of the coefficients of t in it is nonzero), is consistent, and among the values of t satisfying that convolution there is a greatest element. If a given total convolution of system (53) includes the unknown t, is consistent, and the set of values of t satisfying it has a largest element t = T_0, then the maximum problem for the function f(x) and the system (52) is solvable and its optimal value equals T_0.

Clearly, the theorem stays valid for the maximum problem for the function f(x) and the system (40). In this case it is simply equivalent to corollary 6.7.

EXAMPLE. Solve the maximum problem for the function

f(x) = 2x_1 + x_2 − x_3 − x_4
and the system

x_1 + x_2 − x_3 + x_4 − 3 < 0,
2x_1 − x_2 − x_3 − x_4 + 1 ≤ 0,
x_1 + 2x_2 − x_3 − x_4 − 2 ≤ 0,
x_1 − x_2 + 2x_3 + x_4 − 2 ≤ 0,
3x_1 + x_2 − 3x_3 − 2x_4 + 1 ≤ 0.

This system differs from the system used in the example following corollary 6.7 by the fact that the first inequality is strict. A total (fundamental) convolution of the system

x_1 + x_2 − x_3 + x_4 − 3 < 0,
2x_1 − x_2 − x_3 − x_4 + 1 ≤ 0,
x_1 + 2x_2 − x_3 − x_4 − 2 ≤ 0,
x_1 − x_2 + 2x_3 + x_4 − 2 ≤ 0,
3x_1 + x_2 − 3x_3 − 2x_4 + 1 ≤ 0,
−2x_1 − x_2 + x_3 + x_4 + t ≤ 0
has the form

−7 + 3t < 0 (1, 2, 3, 6),
−8 + 3t < 0 (1, 3, 4, 5, 6).
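The total convolutions used throughout are computed by successively eliminating unknowns, i.e., Fourier-Motzkin elimination. A minimal sketch of one elimination step, applied to a small hypothetical system (not the book's example) in exact integer arithmetic:

```python
def fm_eliminate(rows, var):
    """One Fourier-Motzkin step: eliminate variable `var` from a system of
    rows [c_1, ..., c_n, c_0], each meaning c_1*x_1 + ... + c_n*x_n + c_0 <= 0."""
    pos = [r for r in rows if r[var] > 0]
    neg = [r for r in rows if r[var] < 0]
    out = [r for r in rows if r[var] == 0]
    for p in pos:
        for q in neg:
            lam, mu = -q[var], p[var]       # positive multipliers cancelling x_var
            out.append([lam * a + mu * b for a, b in zip(p, q)])
    return out

# maximize f(x) = x1 + x2 subject to x1 <= 1, x2 <= 2: append -f(x) + t <= 0
rows = [[1, 0, 0, -1],    # x1 - 1 <= 0
        [0, 1, 0, -2],    # x2 - 2 <= 0
        [-1, -1, 1, 0]]   # -x1 - x2 + t <= 0
for var in (0, 1):
    rows = fm_eliminate(rows, var)
print(rows)  # [[0, 0, 1, -3]], i.e. the total convolution t - 3 <= 0
```

The surviving inequality t − 3 ≤ 0 has the greatest element t = 3, which is indeed the maximum of x_1 + x_2 on the box. (Tracking which strict inequalities enter each combination, as the book's example does, would additionally decide whether a greatest element exists.)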
In the previous example, these inequalities were weak. Among the values of t satisfying this last system there is no greatest value. Hence the maximum problem considered here has no solution.

All we said about the maximum problem for the function f(x) and the system (52) may be repeated, essentially, for the upper optimum problem with respect to that system. If system (45) is the system of functions that defines the upper optimum problem and if

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m′), f_j(x) − a_j < 0 (j = m′ + 1, ..., m), −f^(i)(x) + t ≤ 0 (i = 1, 2, ..., ℓ) (54)

is the system defined for that problem, then upon replacing, in theorem 6.15*, system (40) by system (52) and system (44) by system (54), that theorem remains valid. Corollary 6.7* may be reformulated along the lines of theorem 6.19 (the corresponding result for P = R was given in Chernikov [8]).

At the conclusion of this section we study the relation between the optimal values of the maximum problems for systems (40) and (52).

THEOREM 6.20. If the inequality f(x) − c ≤ 0 is a consequence of a consistent system (52), then it is also a consequence of the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m) (55)
associated with it.

PROOF. If the inequality f(x) − c ≤ 0 is not a consequence of system (55), then either the maximum problem for the function f(x) and the system (55) is unsolvable or its optimal value T_0 exceeds c. In the first case, by corollary 6.7, there exists a total convolution of the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m), −f(x) + t ≤ 0 (56)

which does not include the unknown t. But then the corresponding total convolution of system (53) does not include t. This contradicts the boundedness of the function f(x) on the set of solutions of system (52), which is implied by the hypothesis of the theorem. Consequently, the maximum problem for the function f(x) and the system (55) is solvable; suppose that its optimal value T_0 exceeds c. Let S be a total convolution of system (56). In view of the solvability of the maximum problem for the function f(x) and the system (55), the convolution S includes t as an unknown (see corollary 6.7), and hence the corresponding total convolution S′ of system (53) includes t. Since S′ may be obtained from S by changing some of the inequalities of the latter to strict inequalities, it follows that any t < T_0 satisfies both S and S′. Using corollary 5.12 for the system S′, we conclude that system (52) has solutions for which f(x) = T_0 − (1/2)(T_0 − c), which is impossible since T_0 − (1/2)(T_0 − c) > c. Thus T_0 ≤ c. This completes the proof.
COROLLARY 6.9. If the maximum problem for the linear function f(x) and the system (52) has a solution, then so does the maximum problem for the function f(x) and the system (55), and the optimal values of the two problems are equal. Conversely, if the second problem is solvable and if its optimal value T_0 satisfies a total convolution of system (53), then the first problem has a solution.

The first part of this corollary follows immediately from the preceding theorem, and the second part follows from the theorem and corollary 5.12.

§6. OPTIMIZATION OF LINEAR VECTOR FUNCTIONS.
We consider, here, a well-known (for P = R) generalization of the basic linear programming problem.

DEFINITION. The maximum (minimum) problem for the linear vector-function

F(x) = (f^1(x), ..., f^ℓ(x)) (57)

(the f^i(x) are linear functions on L(P)) and the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m) (58)

on L(P) is the problem of finding a solution x = x_0 of that system such that no solution x of the system satisfies the inequality F(x_0) < F(x) (respectively, F(x) < F(x_0)).
An element x_0 satisfying this condition is called an optimal solution, and F(x_0) is called the optimal value of this problem. Such a problem is said to be solvable if it has an optimal solution. The minimization problem for the vector-function (57) and system (58) will be considered as a maximum problem for −F(x) and system (58).

The relation x ≤ y for x and y in P^ℓ means that all coordinates of y − x are non-negative. If at least one of these coordinates is positive, then we have x < y.

DEFINITION 6.10. A solution x_0 = (x_1^0, ..., x_n^0) of the system

f_j(x) − a_j = a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0 (j = 1, 2, ..., m) (59)
is said to be upper extremal if for any ε = (ε_1, ..., ε_n) > 0 (ε in P^n) the element x_0 + ε is not a solution of that system.

LEMMA 6.2. A consistent system (59) with non-negative coefficients a_{ji} has an upper-extremal solution if and only if at least one coefficient of each x_i is nonzero.

PROOF. Indeed, necessity follows from the fact that if all the coefficients of a given unknown x_i are zero, then any solution of system (59) remains a solution after adding an arbitrary element of P to its i-th coordinate. We now prove sufficiency. Let x_0 = (x_1^0, ..., x_n^0)
be a solution of a system (59) with non-negative coefficients that satisfies the conditions of the lemma. Substitute the vector (x_1^0 + t, x_2^0, ..., x_n^0) into the inequalities of (59) and find the greatest value t = t_1^0 such that the vector satisfies the system. Next, find the largest value t = t_2^0 such that the vector (x_1^0 + t_1^0, x_2^0 + t, x_3^0, ..., x_n^0) satisfies the system, and so on. At the end of this process we get a vector (x_1^0 + t_1^0, ..., x_n^0 + t_n^0) which is, obviously, an upper-extremal solution of system (59).

Next, we note the following lemma, which follows directly from corollary 5.4.

LEMMA 6.3. Every optimal value of the maximum problem for the vector function (57) and the system (58) is an upper-extremal solution of any total convolution of the system

f_j(x) − a_j ≤ 0 (j = 1, 2, ..., m), −f^k(x) + t_k ≤ 0 (k = 1, 2, ..., ℓ). (60)

Conversely, every upper-extremal solution of any total convolution of the latter is an optimal value of that problem.

By lemmas 6.2 and 6.3 we have:

THEOREM 6.21. If the maximum problem for the vector function (57) and system (58) is solvable, then every total convolution of system (60) includes each of the unknowns t_k (k = 1, 2, ..., ℓ) (i.e., at least one coefficient of each t_k in it is nonzero) and is consistent. Conversely, if a total convolution of system (60) includes all of the unknowns t_k and is consistent, then the maximum problem at hand is solvable and its optimal value coincides with an upper-extremal solution of that convolution.

The convolution referred to in the theorem is consistent if system (58) is consistent.

COROLLARY 6.10. The maximum problem for the vector function (57) and a consistent system (58) is solvable if and only if there exist non-negative elements p_1, ..., p_m and positive elements q_1, ..., q_ℓ of P such that

∑_{k=1}^{ℓ} q_k f^k(x) = ∑_{j=1}^{m} p_j f_j(x) (61)

holds for all x ∈ L(P) (i.e., is an identity in x in L(P)).
PROOF. Necessity. Suppose the problem at hand is solvable. Then by theorem 6.21, a total convolution S (i.e., any U-convolution with U = L(P)) of system (60) involves each of the unknowns t_k (k = 1, 2, ..., ℓ). We shall assume that S is a fundamental total convolution. By the above definition (see Chapter V, §1) of fundamental convolutions, this implies that for each k = 1, 2, ..., ℓ there exists a fundamental element of the cone C(L(P^n)) of non-negative solutions (u_1, ..., u_m, v_1, ..., v_ℓ) of the equation

∑_{j=1}^{m} u_j f_j(x) − ∑_{k=1}^{ℓ} v_k f^k(x) = 0 (x ∈ L(P))

such that the coefficient v_k of that fundamental solution is positive. The sum (p_1, ..., p_m, q_1, ..., q_ℓ) of all essentially distinct fundamental elements of the cone C(L(P^n)) is, obviously, a non-negative solution of the above equation with positive q_1, ..., q_ℓ. This proves necessity.

Sufficiency. Suppose relation (61) holds for some non-negative elements p_1, ..., p_m and positive elements q_1, ..., q_ℓ of P. Then, by lemma 1.2, it is possible to express each function f^k(x) (k = 1, 2, ..., ℓ) as a linear combination, with non-negative coefficients, of a linearly independent subsystem of the system f_1(x), ..., f_m(x), f^1(x), ..., f^ℓ(x) of functions. But then a total convolution of system (60) must include each of the unknowns t_k (k = 1, 2, ..., ℓ) (see Chapter V, §1). Thus, by theorem 6.21, the maximum
problem in question is solvable. This proves sufficiency.
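Over R^n, the criterion of corollary 6.10 is itself checkable by linear programming: search for p ≥ 0 and strictly positive q with ∑_k q_k f^k = ∑_j p_j f_j. Since the relation is homogeneous, positivity can be normalized to q_k ≥ 1. A sketch with hypothetical data:

```python
import numpy as np
from scipy.optimize import linprog

# Solvability test of the vector maximum problem: feasibility of
# F^T q = A^T p with p >= 0 and q >= 1 (rows of A are the f_j, rows of F the f^k).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])   # hypothetical f_j rows
F = np.array([[1.0, 0.0], [0.0, 1.0]])                 # hypothetical f^k rows
m, n = A.shape
l = F.shape[0]
A_eq = np.hstack([-A.T, F.T])          # unknowns (p, q): F^T q - A^T p = 0
b_eq = np.zeros(n)
bounds = [(0, None)] * m + [(1, None)] * l
res = linprog(np.zeros(m + l), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("solvable:", res.status == 0)
```

Here q = (1, 1) and p = (1, 0, 0) satisfy (61), so the corresponding vector maximum problem is solvable.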
REMARK. If there exists an identically zero linear combination, with positive coefficients, of all of the coordinate functions of the vector function (57), then all the values of the vector function on the set M of solutions of a consistent system (58) are optimal values of the maximum problem for the vector function (57) and system (58). Indeed, let

q_1 f^1(x) + ... + q_ℓ f^ℓ(x) = 0

be such a relation, and let x_0 be in M. Let x′ be an element in M such that F(x_0) < F(x′). Then

q_1 f^1(x_0) + ... + q_ℓ f^ℓ(x_0) < q_1 f^1(x′) + ... + q_ℓ f^ℓ(x′).

However, this is not possible, since both sides of this inequality are zero.

THEOREM 6.22. Suppose that there does not exist an identically zero (on L(P)) linear combination with positive coefficients of all coordinates of the vector function (57). Then all optimal values of the maximum problem for the vector function (57) and a consistent system (58), if the problem is solvable, are attained only on the set of boundary solutions of system (58).

Recall that a solution of system (58) is said to be boundary if it turns at least one inequality with a non-null function f_j(x) into an equation.

PROOF. By the hypothesis of the theorem, at least one of the unknowns t_k of the system

−f^k(x) + t_k ≤ 0 (k = 1, 2, ..., ℓ), (62)

say t_1, is not included in its total convolution (here we do not exclude the case where
the latter is empty). Suppose an optimal value T = (T_1, ..., T_ℓ) of the maximum problem for the vector function (57) and the consistent system (58) is attained at a non-boundary solution x_0 of the latter. Since the system (62) with t_k = T_k (k = 1, 2, ..., ℓ) is consistent (x = x_0 is one of its solutions), it follows from theorem 5.2 that its total convolution is either consistent or empty. Since the latter does not, by hypothesis, include the unknown t_1, it follows (by theorem 5.2) that system (62) with t_k = T_k (k = 2, ..., ℓ) and t_1 = T_1′ > T_1 has solutions. Let x′ be one of them. Since x_0 is a non-boundary solution of (58), there exists an element t_0 > 0 such that

x(t_0) = x_0 + t_0(x′ − x_0)

is a solution of (58). It is not hard to show that

f^k(x(t_0)) ≥ T_k (k = 1, 2, ..., ℓ) and T_1 < f^1(x(t_0)).

But, by hypothesis, T is an optimal value of our problem. Thus we have a contradiction.
This proves the theorem.

THEOREM 6.23. Suppose the maximum problem for the vector function (57) and the consistent system (58) is solvable, and that one of its optimal solutions is a boundary solution x_0 of system (58). Let S be the maximal subsystem of (58) whose inequalities are turned into equations by x = x_0. Then every solution of (58) turning all of the inequalities of S into equations is an optimal solution of this problem.

PROOF. Indeed, let x′ be a solution of (58), other than x_0, which turns all of the inequalities of S into equations. If x′ is not an optimal solution of our problem, then there exists a solution x = x̄ of system (58) that satisfies the inequality F(x′) < F(x̄). That inequality is, obviously, satisfied by all the elements

x′(t) = x′ + t(x̄ − x′) (0 < t ≤ 1, t ∈ P),

all of which are solutions of (58). Adding the term F(x_0) − F(x′) to both sides of F(x′) < F(x′(t)), we get

F(x_0) < F(x′(t) + x_0 − x′) (0 < t ≤ 1, t ∈ P).

It is not hard to show that there exists an element t_0, 0 < t_0 ≤ 1, such that

x″(t) = x′(t) + x_0 − x′ = x_0 + t(x̄ − x′)

is a solution of system (58) for all positive t ≤ t_0. In fact, if the inequality f_j(x) − a_j ≤ 0 is not included in S, then f_j(x_0) − a_j < 0. Clearly, for that inequality there exists an element t_0, 0 < t_0 ≤ 1, such that

f_j(x″(t)) = f_j(x_0 + t(x̄ − x′)) < a_j for 0 < t ≤ t_0.

If an inequality f_j(x) − a_j ≤ 0 is included in S, then

f_j(x″(t)) = f_j(x_0) + t(f_j(x̄) − f_j(x′)) = a_j + t(f_j(x̄) − a_j) ≤ a_j

for any t > 0. Thus, for 0 < t ≤ t_0, x″(t) is a solution of system (58). Since

F(x_0) < F(x″(t)),

this contradicts the assumed optimality of the solution x_0 of system (58). The contradiction shows that x′ is an optimal solution of the problem under consideration. This proves the theorem.

The system S considered in theorem 6.23 is a boundary subsystem of system (58) (see definition 1.4). If r > 0 is the rank of system (58), then system S may be included in a boundary subsystem S′ of system (58) having rank r (see property 1 of boundary subsystems in Chapter I). But then any subsystem of rank r involving r inequalities of S′ must be a nodal subsystem of system (58). Hence, by theorem 6.22 and the remark that preceded it, we have:

THEOREM 6.24. If a maximum problem for the vector function (57) and system (58) with nonzero rank is solvable, then at least one nodal solution of system (58) is an optimal solution of that problem.

For the case where P is R, the field of real numbers, this result and essentially all the other results of this section are contained in the author's article [13]. Lemma 6.3 and theorem 6.2 were published (without proof) in the author's article [17].
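Over R^n, the optimality notion of this section — no feasible x dominates x_0 coordinate-wise with at least one strict gain — can be tested with an auxiliary LP that maximizes the total slack by which a feasible point improves on F(x_0). The instance below is hypothetical, a sketch rather than the book's method:

```python
import numpy as np
from scipy.optimize import linprog

# Domination check for the vector maximum problem of (57)-(58): x0 is optimal
# iff no feasible x has F x >= F x0 with a strict gain in some coordinate,
# i.e. iff the LP "maximize sum s, A x <= a, F x - s >= F x0, s >= 0" has value 0.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # hypothetical data
a = np.array([4.0, 0.0, 0.0])
F = np.array([[1.0, 0.0], [0.0, 1.0]])                # rows are the f^k
x0 = np.array([2.0, 2.0])

n, l = A.shape[1], F.shape[0]
c = np.concatenate([np.zeros(n), -np.ones(l)])        # minimize -(sum of slacks)
A_ub = np.block([[A, np.zeros((len(a), l))],          # A x <= a
                 [-F, np.eye(l)]])                    # -F x + s <= -F x0
b_ub = np.concatenate([a, -F @ x0])
bounds = [(None, None)] * n + [(0, None)] * l
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("x0 optimal:", abs(res.fun) < 1e-7)             # True for this data
```

Here x_0 = (2, 2) lies on the boundary row x_1 + x_2 ≤ 4 and no feasible point dominates it, in line with theorem 6.22.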
CHAPTER VII

SOME INFINITE SYSTEMS OF LINEAR INEQUALITIES

In the study of finite systems of linear inequalities on L(P) (where P is an arbitrary ordered field), a major role was played by the boundary solution principle (theorem 1.1) and the theorem about consequence inequalities (theorem 2.4). For infinite systems of linear inequalities these propositions are, generally, no longer valid, and they hold only for infinite systems of a very special type. They do not even hold for systems of the form

a_{j1}x_1 + ... + a_{jn}x_n − a_j ≤ 0 (j = 1, 2, ...) (*)

on R^n. As an example, consider the system

x_1 cos 2πθ_j + x_2 sin 2πθ_j − 1 ≤ 0 (j = 1, 2, ...),

where θ_j ranges over all the rational numbers in the half-segment [0, 1), taking each of them as a value only once. The system is obviously consistent and has rank r = 2. The set of its solutions may be represented in the x_1 x_2 plane by the points of a disk with unit radius and center at the origin. Obviously, no solution of the system turns more than one of its inequalities into an equation, and hence the system has no nodal solutions. The inequality

x_1 cos 2πθ + x_2 sin 2πθ − 1 ≤ 0,

where θ is an irrational number in the half-segment [0, 1), is, obviously, a consequence of
our system. Suppose there exist non-negative numbers p_0, p_1, ..., p_ℓ for which the relation

x_1 cos 2πθ + x_2 sin 2πθ − 1 = ∑_{k=1}^{ℓ} p_k(x_1 cos 2πθ_{j_k} + x_2 sin 2πθ_{j_k} − 1) − p_0

is an identity in x_1 and x_2. Since the solution (cos 2πθ, sin 2πθ) of our system turns the left side of that relation into zero, we conclude that p_0 = 0 and

p_k[cos 2πθ cos 2πθ_{j_k} + sin 2πθ sin 2πθ_{j_k} − 1] = 0 (k = 1, 2, ..., ℓ)

or, equivalently, p_0 = 0 and

p_k[cos 2π(θ − θ_{j_k}) − 1] = 0 (k = 1, 2, ..., ℓ).

But the relations θ ≠ θ_{j_k} (k = 1, 2, ..., ℓ) imply cos 2π(θ − θ_{j_k}) − 1 ≠ 0, and hence p_k = 0 (k = 1, 2, ..., ℓ), so that no such representation exists.
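The failure of finite determination here can also be seen numerically. The sketch below (the symbol names and the particular finite batch of rational angles are our own, purely illustrative) builds a point just outside the unit disk in the irrational direction: it satisfies every inequality of the chosen finite subsystem, yet violates the tangent inequality at θ.

```python
import math

# The tangent inequality at an irrational direction theta holds on the whole
# disk, yet every finite batch of rational-angle inequalities admits a point
# that violates it, so the consequence cannot come from a finite subsystem.
theta = math.sqrt(2) - 1                      # an irrational number in [0, 1)
rationals = [k / 97 for k in range(97)]       # a finite batch of rational angles

gap = min(1 - math.cos(2 * math.pi * (theta - t)) for t in rationals)  # > 0
r = 1 + gap / 2                               # slightly outside the unit disk
x1 = r * math.cos(2 * math.pi * theta)
x2 = r * math.sin(2 * math.pi * theta)

finite_ok = all(x1 * math.cos(2 * math.pi * t) + x2 * math.sin(2 * math.pi * t) - 1 <= 0
                for t in rationals)
violates_theta = x1 * math.cos(2 * math.pi * theta) + x2 * math.sin(2 * math.pi * theta) - 1 > 0
print(finite_ok, violates_theta)  # True True
```

Because θ is irrational, the angular gap to any finite set of rational angles is positive, which is what lets the point slip past the finite subsystem.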
and f_0(x_0) > 0 imply that the inequality f_0(x) ≤ 0 (x_0 ∈ L) is not a consequence of system (1). If, for f_0 ∈ D(C), the inequality f_0(x) ≤ 0 is not a consequence of system (1), then for a solution x_0 ∈ L of

f_α(x_0) ≤ 0 (α ∈ M)

the L-half-space x_0(f) ≤ 0 contains D(C). This completes the proof.
In Minkowski’s theorem D(C) = C. DEFINITION 7.2. Let L = L + P 1 be a direct sum of L and P 1 ; i.e. the linear space 0
0
of the pairs [x; t] (x 2 L; t 2 P 1 ) Let L = L + P 1 be the linear space of the pairs [f; p] 0
(f 2 L ; p 2 P 1 ) and let [f; p]([x; t]) = f (x) + pt: The system (2) of linear inequalities on L 0
is said to be the polyhedrally (L; L )-closed if the dual cone of the system f (x) a t 0 ( 2 M ) g (3) t 0 0 0 associated with it is equal to its the polyhedral (L; L )-closure. If L is the space of all 0
linear functions on L, then a the polyhedrally (L; L )-closed system (2) will be simply called polyhedrally closed. 0
THEOREM 7.2. A system (1) is polyhedrally (L̄, L̄′)-closed if and only if its dual cone C coincides with its polyhedral (L, L′)-closure D(C).

PROOF. Indeed, let system (1) be polyhedrally (L̄, L̄′)-closed and let f_0 be an arbitrary element of D(C). By definition 7.2, the polyhedral (L̄, L̄′)-closedness of system (1) implies that the dual cone C̄ of the system

f_α(x) ≤ 0 (α ∈ M), −t ≤ 0 (4)

coincides with its polyhedral (L̄, L̄′)-closure D(C̄), i.e., with the intersection of all the sets containing it and defined on L̄′ by inequalities of the form

[x, t]([f, p]) ≤ 0 ([f, p] ∈ L̄′).

By theorem 7.1 such a coincidence implies that for an arbitrary inequality f(x) + kt ≤ 0 (f ∈ L′, k ∈ P) that is a consequence of system (4) we have [f, k] ∈ C̄. Since f_0 ∈ D(C), theorem 7.1 also implies that the inequality f_0(x) ≤ 0 is a consequence of system (1). Thus we have [f_0, 0] ∈ C̄. But then there exist non-negative elements p_{α_1}, ..., p_{α_n}, p_0 of the field P such that

f_0(x) + 0·t = p_{α_1} f_{α_1}(x) + ... + p_{α_n} f_{α_n}(x) − p_0 t

is an identity in [x, t]. Hence p_0 = 0 and

f_0(x) = p_{α_1} f_{α_1}(x) + ... + p_{α_n} f_{α_n}(x).

Hence f_0 ∈ C. But then D(C) = C. This proves necessity.
Now let D(C) = C and let [f, k] be an arbitrary element of D(C̄). Since [f, k] ∈ D(C̄), it follows from theorem 7.1 that the inequality

f(x) + kt ≤ 0

is a consequence of system (4). Clearly k ≤ 0, and the inequality f(x) ≤ 0 is a consequence of system (1). Hence f ∈ D(C). In view of the relation D(C) = C, there exist non-negative elements p_{α_1}, ..., p_{α_n} of P such that the relation

f(x) = p_{α_1} f_{α_1}(x) + ... + p_{α_n} f_{α_n}(x)

is an identity in x ∈ L. Hence the relation

f(x) + kt = p_{α_1} f_{α_1}(x) + ... + p_{α_n} f_{α_n}(x) + p_0(−t)

is an identity in [x, t] ∈ L̄, where p_0 = −k ≥ 0. Hence [f, k] ∈ C̄. Consequently system (1) is polyhedrally (L̄, L̄′)-closed. The theorem is proved.
COROLLARY 7.1. System (2) is polyhedrally (L̄, L̄′)-closed if and only if system (3) associated with it is polyhedrally (L̄, L̄′)-closed.
DEFINITION 7.3. A consistent system (2) is said to be finitely L′-determined if every linear inequality

f(x) − a ≤ 0 (f ∈ L′)

which is a consequence of it is a consequence of one of its finite subsystems.

If L′ is the space of all linear functions on L, then a finitely L′-determined system will simply be called finitely determined.

LEMMA 7.1. The linear inequality

f(x) − a ≤ 0 (f ∈ L′)

is a consequence of a consistent system (2) if and only if the inequality

f(x) − at ≤ 0
is a consequence of system (3) associated with that system.

The proof of the lemma is analogous to the proof of lemma 2.5. Lemma 1.10, used in the proof of lemma 2.5, is valid for systems of type (2). Indeed, suppose all the solutions of system (2) satisfy the linear inequality f(x) − a ≤ 0, and suppose at least one solution x′ of the system

f_α(x) ≤ 0 (α ∈ M) (5)

associated with system (2) satisfies the inequality f(x′) > 0. If x_0 is a solution of system (2), then the elements x(t) = x_0 + tx′ are solutions of system (2) for t > 0. Hence, for any t > 0 we have

f(x_0 + tx′) − a ≤ 0.

Since for

t = (−f(x_0) + a + 1)/f(x′)

the inequality is obviously reversed, we have a contradiction. Consequently, if all solutions of system (2) satisfy the inequality f(x) − a ≤ 0, then all solutions of (5) satisfy f(x) ≤ 0. Thus lemma 1.10 used in the proof of lemma 2.5 is valid for systems of the form (2).
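The reversal step in the argument above is a one-line computation with linear functions. A small numeric sketch, with a hypothetical function and points:

```python
# If x' is a direction of the associated homogeneous system with f(x') > 0,
# then t = (-f(x0) + a + 1)/f(x') pushes f(x0 + t x') up to a + 1 > a,
# reversing the consequence inequality f(x) - a <= 0.
def f(x):                      # a sample linear function on R^2
    return 2.0 * x[0] - x[1]

x0 = [1.0, 3.0]                # f(x0) = -1
xp = [1.0, 0.0]                # f(xp) = 2 > 0
a = 5.0
t = (-f(x0) + a + 1) / f(xp)   # = 3.5
xt = [x0[i] + t * xp[i] for i in range(2)]
print(f(xt))                   # 6.0 == a + 1, so the inequality is reversed
```

The choice of t is exactly the one in the text: linearity gives f(x_0 + t x′) = f(x_0) + t f(x′) = a + 1.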
THEOREM 7.3. A consistent system (2) is finitely L′-determined if and only if it is polyhedrally (L̄, L̄′)-closed.

PROOF. Let a consistent system (2) be finitely L′-determined, and let C̄ be the dual cone of system (3) associated with that system. Let D(C̄) be the polyhedral (L̄, L̄′)-closure of C̄, and let [f, k] be an arbitrary element of D(C̄). Then the inequality

f(x) + kt ≤ 0

is a consequence of system (3), and hence the inequality

f(x) + k ≤ 0

is a consequence of system (2) (in view of lemma 7.1). Since system (2) is finitely L′-determined, that inequality is a consequence of at least one of its finite subsystems, and hence, by theorem 2.4, there exist non-negative elements p_{α_1}, ..., p_{α_n}, p_0 of the field P such that the relation

f(x) + k = ∑_{i=1}^{n} p_{α_i}(f_{α_i}(x) − a_{α_i}) − p_0

is an identity in x ∈ L. But then the relation

f(x) + kt = ∑_{i=1}^{n} p_{α_i}(f_{α_i}(x) − a_{α_i}t) − p_0 t

is an identity in [x, t] ∈ L̄. Since that relation implies that [f, k] ∈ C̄, the arbitrariness of [f, k] implies that D(C̄) = C̄. This proves the polyhedral (L̄, L̄′)-closedness of system (2). The necessity is proved.
Now let a consistent system (2) be polyhedrally (L̄, L̄′)-closed, and let

f(x) + k ≤ 0 (f ∈ L′, k ∈ P)

be an inequality which is a consequence of it. Then the inequality

f(x) + kt ≤ 0

is a consequence of system (3). Hence [f, k] ∈ D(C̄). Since, in view of the polyhedral (L̄, L̄′)-closedness of system (2), D(C̄) = C̄, there exist non-negative elements p_{α_1}, ..., p_{α_n}, p_0 such that the relation

f(x) + kt = ∑_{i=1}^{n} p_{α_i}(f_{α_i}(x) − a_{α_i}t) − p_0 t

is an identity in [x, t] ∈ L̄. Hence the relation

f(x) + k = ∑_{i=1}^{n} p_{α_i}(f_{α_i}(x) − a_{α_i}) − p_0

is an identity in x ∈ L. Thus f(x) + k ≤ 0 is a consequence of the finite subsystem

f_{α_i}(x) − a_{α_i} ≤ 0 (i = 1, 2, ..., n)

of system (2), which proves that system (2) is finitely L′-determined. This proves the theorem.

It follows from the above theorem that every consistent finite system of type (2) is polyhedrally (L̄, L̄′)-closed. The polyhedral (L̄, L̄′)-closedness of an arbitrary finite subsystem follows from that theorem by corollary 7.1.
Let L be a finite-dimensional Euclidean space R^n, and let L′ be its conjugate space, i.e., the space of all linear functions on L. Identifying the latter with R^n, system (2) has the form

f_α(x) − a_α = (x, x_α^0) − a_α ≤ 0 (α ∈ M), (6)

where x_α^0 = (a_{α1}, ..., a_{αn}) ∈ L (L = R^n) and (x, x_α^0) is the scalar product a_{α1}x_1 + ... + a_{αn}x_n. System (3) has the form

f_α(x) − a_α t = ([x, t], [x_α^0, −a_α]) ≤ 0 (α ∈ M), ([x, t], [0, −1]) ≤ 0,

where [x, t], [x_α^0, −a_α] and [0, −1] are elements of the space L̄ = L + R^1 = R^{n+1}, 0 = (0, ..., 0) ∈ L = R^n, and the notation ( , ) means scalar multiplication.

With reference to definition 7.3, a finitely L′-determined system will here simply be called finitely determined. In connection with definition 7.2, a polyhedrally (L̄, L̄′)-closed system (6) is simply called polyhedrally closed. The cone K generated by the elements

[x_α^0, −a_α] (α ∈ M) and [0, −1]

in the space L̄, in connection with definition 2.11, will be called the conjugate cone of system (6).

To obtain the conditions for system (6) to be finitely determined that are implied by theorem 7.3, we first note the following proposition: a system (6) is polyhedrally closed if and only if its conjugate cone is topologically closed in L̄ = R^{n+1}.

This proposition follows from the fact that in any space R^n a convex cone with vertex at 0 = (0, ..., 0) is polyhedrally closed if and only if it is topologically closed. The topological closedness of a polyhedrally closed cone is obvious, since it is the intersection of closed sets. The converse may be obtained by using the following theorem (see Bourbaki [1], p. 102): for any space R^n, for any nonempty cone C that is distinct from R^n, is topologically closed, and has vertex at 0 = (0, ..., 0), and for any point of R^n not belonging to C, there exists a supporting hyperplane of C that does not contain that point and which separates it from the cone C. Now, the condition for system (6) to be finitely determined which follows from theorem 7.3 is:

THEOREM 7.4. A consistent system (6) is finitely determined if and only if its conjugate cone is topologically closed.

REMARK 1. The assertion of theorem 7.4 is true for the more general case where L is a
real Hilbert space H and where L is its conjugate space. In that case, and in system (6), x0 ( 2 M ) are elements of H and (x; x0 ) are scalar products of the elements x and x0 de…ned for H. The scalar product in the space L = L + R1 = Rn+1 is de…ned by 0
0
([x; t ]; [y; t" ]) = (x; y) + t t" where (x,y) is a scalar product in H. Thus the space L is a real Hilbert space. 0
REMARK 2. Suppose the space L′ is total on L, i.e. f(x) = 0 for all f ∈ L′ implies that x is the zero element of L. If the spaces L and L′ are endowed with dual topologies making them locally convex linear topological spaces, and these topologies are extended in the natural way to topologies on L̄ and L̄′ (preserving local convexity), then the condition of polyhedral (L, L′)-closedness in theorem 7.3 becomes a condition of topological closedness of the dual cone of system (3) corresponding to system (2), i.e. of the cone generated by [f_α, −a_α] (α ∈ M) and [θ′, 1] in the space L̄′ = L′ + R¹ (it may be called the conjugate cone of system (2)). Here θ′ is the zero element of L′.

REMARK 3. As is known, for a topologically closed infinite set of vectors in Rⁿ the cone generated by the set need not be topologically closed. Hence, by theorem 7.4, Haar's assertion (see Haar [1]) that a system (6) with a topologically closed (in Rⁿ⁺¹) set of vectors [x_α^0, −a_α] (α ∈ M, x_α^0 ∈ Rⁿ) is finitely determined, is not correct. This problem was dealt with in the author's note
[1]. There, in part, the following example was introduced.

EXAMPLE. For the system

(1/(k(k+1))) u1 + u2 − ((2k+1)/(k(k+1))) u3 ≥ 0 (k = 1, 2, …), u2 ≥ 0, −u2 ≥ 0,

the set of vectors in question is obviously closed in R³. All of its solutions satisfy u3 ≤ 0. However, the left hand side of this inequality cannot be expressed as a linear combination with non-negative coefficients of a finite number of the left hand sides of the system's inequalities. Thus, by Minkowski's theorem, the inequality −u3 ≥ 0 cannot be a consequence of any of the system's finite subsystems.

It is known that if a set of elements of the space Rⁿ is not only closed but also bounded, and if its convex hull (i.e. the intersection of all convex sets containing it) does not contain the zero element, then the cone generated by that set is topologically closed (see Bourbaki [1], pages 112 and 110). Hence, by theorem 7.4, we get:

COROLLARY 7.2 (see Chernikov [11]). If the system

f_α(x) − a_α = a_α1 x_1 + … + a_αn x_n − a_α ≥ 0 (α ∈ M) (7)
of linear inequalities on Rⁿ, having a bounded and closed set of vectors (a_α1, …, a_αn, a_α) (α ∈ M), is stably consistent, then it is polyhedrally closed and hence is finitely determined.

Indeed, by the stable consistency of system (7), the convex hull of the set of vectors

(a_α1, …, a_αn, a_α) (α ∈ M) and [θ, 1] (8)

may not contain the zero vector. In fact, if system (7) has a stable solution (x_1^0, …, x_n^0), then the system (3) corresponding to it has a stable solution (x_1^0, …, x_n^0, 1). It is not possible to construct a zero-valued linear combination with positive coefficients of the left hand sides of the inequalities a_α1 x_1^0 + … + a_αn x_n^0 − a_α ≥ 0 (α ∈ M) and 1 > 0; the latter inequality is obviously true. Thus [x^0, t^0] is a stable solution of system (3). But then x^0 is a stable solution of system (2). Hence system (2) is stably consistent, as we were to show.
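The counterexample of Remark 3 can be checked mechanically. The sketch below uses our reconstruction of the example (the coefficient signs are an assumption made consistent with the surrounding claims): every finite subsystem admits a solution with u3 = 1, so the inequality −u3 ≥ 0, though valid for the full system, follows from no finite subsystem.

```python
from fractions import Fraction as F

# Inequalities of the Remark 3 example, written as f(u) >= 0:
#   f_k(u) = u1/(k(k+1)) + u2 - (2k+1)/(k(k+1)) * u3   (k = 1, 2, ...),
# together with u2 >= 0 and -u2 >= 0 (forcing u2 = 0).
def f(k, u):
    u1, u2, u3 = u
    return F(u1, k * (k + 1)) + u2 - F(2 * k + 1, k * (k + 1)) * u3

# Every solution satisfies u3 <= 0: with u2 = 0, f_k(u) >= 0 means
# u1 >= (2k+1) u3 for every k, impossible when u3 > 0.  Yet each finite
# subsystem (k <= N) has a solution with u3 = 1:
for N in range(1, 50):
    u = (2 * N + 1, 0, 1)            # u1 large enough for k = 1..N
    assert all(f(k, u) >= 0 for k in range(1, N + 1))

# -u3 is no nonnegative finite combination of left-hand sides: the
# u1-coefficients 1/(k(k+1)) are all positive, so a combination with zero
# u1-coefficient uses no f_k, and u2, -u2 alone cannot produce -u3.
assert all(F(1, k * (k + 1)) > 0 for k in range(1, 50))
```

This matches the Minkowski-theorem argument in the text: no finite subsystem implies −u3 ≥ 0.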
REMARK 1. It is not true, in general, that an arbitrary polyhedrally (L, L′)-closed system (2) whose finite subsystems are all stably consistent is itself stably consistent. Indeed, let L be the space of all numerical sequences (x_1, …, x_n, …) of elements of P, each of which contains only a finite number of nonzero elements. Denote by L′ the space of the linear forms

(a, x) = a_1 x_1 + … + a_n x_n + …

with coefficients a_i in P such that only a finite number of the a_i in each of them are nonzero. The space L′ may be naturally identified with the space L by associating each linear form with the sequence of its coefficients. Consider the system

x_j ≥ 0 (j = 1, 2, …)

on L. The set of its solutions is the cone C defined in L by the elements x with non-negative coordinates. Consequently C = D(C). Hence the cone C is polyhedrally (L, L′)-closed. But then our system is polyhedrally (L, L′)-closed (see theorem 7.2). Clearly all of the finite subsystems of this system are stably consistent. But it is also clear that each of its solutions is unstable. This proves the remark.

REMARK 2. If the conjugate cone of system (7) is not topologically closed, then theorem 7.6 may not be valid for it, even if the set of vectors (a_α1, …, a_αn, a_α) (α ∈ M) associated with it is topologically closed.
EXAMPLE. For the system

x + ny ≤ 0 (n = 1, 2, …), y ≥ 0,

the set of vectors (0, 1), (−1, −n) (n = 1, 2, …) is clearly closed. Each of the system's finite subsystems is obviously stably consistent. However, the system itself is not stably consistent, since y = 0 for every solution of it.

§2. CONVOLUTIONS OF POLYHEDRALLY CLOSED SYSTEMS OF LINEAR INEQUALITIES.
DEFINITION 7.4. Let

f_α(x) (α ∈ M) (9)

be a system of linear functions defined on the linear space L = L(P) (P an ordered field) and let U be a subspace of L. The cone C(U) of non-negative solutions {u_α} (α ∈ M, u_α ∈ P) of the equations

Σ_{α∈M} u_α f_α(x) = 0 (x ∈ U),

having only a finite number of nonzero coordinates u_α, is called the cone of U-composition of the system (9) of functions. The collection of indices α corresponding to the nonzero coordinates of a given element of the cone C(U) will be called the index set of that element.

DEFINITION 7.5. An element {u_α^0} (α ∈ M) of the cone C(U) is said to be a fundamental element of it if the rank of the system of functions f_α whose indices α correspond to the nonzero coordinates u_α^0 of that element is smaller by one than the number of functions in that system.

DEFINITION 7.6. The cone of U-composition of the linear inequality system

f_α(x) + t_α ≥ 0 (α ∈ M), (10)

where the f_α(x) (α ∈ M) are linear functions on L and the t_α are parameters with values in P, is the cone C(U) of the system f_α(x) (α ∈ M) of functions defining the inequalities of that system. In particular, if t_α = −a_α (a_α ∈ P, α ∈ M), the cone is called the cone of composition for the linear inequality system

f_α(x) − a_α ≥ 0 (α ∈ M) (11)

on L = L(P).

DEFINITION 7.7. If

u_β = {u_βα} (β ∈ N, α ∈ M)

is a maximal system of essentially distinct fundamental elements of the cone C(U) of U-composition of the system (10), then the linear inequality system

Σ_{α∈M} u_βα f_α(x) + Σ_{α∈M} u_βα t_α ≥ 0 (β ∈ N) (12)
on L is called a fundamental U-convolution of system (10). In particular, if t_α = −a_α (α ∈ M), it is called a fundamental U-convolution of system (11). If the cone C(U) has no fundamental elements, the fundamental U-convolution of system (10) is empty.

Two elements of the cone C(U) which agree up to a scalar multiple are not essentially distinct. Two fundamental elements of the cone C(U) with the same index set are not essentially distinct. It is not possible for the index set of a fundamental element to be a proper subset of the index set of another fundamental element (see the remark presented in §1 of Chapter V in connection with definition 5.2). From these arguments it follows that the maximal system of essentially distinct fundamental elements of the cone C(U) is unique up to scalar multiples of its elements.

DEFINITION 7.8. If

c_β = {c_βα} (β ∈ N, α ∈ M)

is a system of nonzero generating elements of the cone C(U) of U-composition of system (10), then the system

Σ_{α∈M} c_βα f_α(x) + Σ_{α∈M} c_βα t_α ≥ 0 (β ∈ N)

is called a U-convolution of system (10). In particular, if t_α = −a_α (α ∈ M), it is a U-convolution of system (11). The indices of the inequalities of the U-convolution are the indices of the elements of the cone C(U) that define them. For U = L, a U-convolution of system (10) is called a total convolution of the latter. The definitions presented here are, essentially, repetitions of the corresponding definitions we introduced in Chapter V (see §1 of that chapter).
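For a finite system on Rⁿ these definitions reduce to classical Fourier–Motzkin elimination: when U is the coordinate axis of x_1, the non-negative combinations that cancel the x_1-terms are exactly the pairings of an inequality with positive x_1-coefficient against one with negative x_1-coefficient. A minimal sketch (the helper name and the example system are ours, not the book's):

```python
from fractions import Fraction as F

def u_convolution_x1(rows):
    """One U-convolution step for U = span(e1); rows = [(a1, ..., an, c)]
    encode a1*x1 + ... + an*xn + c >= 0.  Positive combinations with zero
    x1-coefficient are the pairings of a row with a1 > 0 against a row
    with a1 < 0, plus the rows that already have a1 = 0."""
    pos = [r for r in rows if r[0] > 0]
    neg = [r for r in rows if r[0] < 0]
    out = [r for r in rows if r[0] == 0]
    for p in pos:
        for q in neg:
            # coefficients -q[0] and p[0] are positive and cancel x1
            out.append(tuple(-q[0] * a + p[0] * b for a, b in zip(p, q)))
    return out

# x1 - x2 >= 0,  -x1 + 1 >= 0,  x2 >= 0   (i.e. 0 <= x2 <= x1 <= 1)
rows = [(F(1), F(-1), F(0)), (F(-1), F(0), F(1)), (F(0), F(1), F(0))]
conv = u_convolution_x1(rows)
assert (F(0), F(1), F(0)) in conv       # x2 >= 0
assert (F(0), F(-1), F(1)) in conv      # 1 - x2 >= 0
```

The two surviving inequalities describe exactly the projection of the solution set onto the x_2-axis, as the remark after theorem 5.3 of Chapter V asserts for finite systems.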
THEOREM 7.7. The maximal systems of essentially distinct fundamental elements of a non-null cone C(U) of U-composition of the system (9) of functions, and only those maximal systems, are bases for it.

PROOF. Every element of the cone C(U) is included in the cone of U-composition of some finite subsystem of the system f_α(x) of functions (the elements of the second cone being augmented by zero coordinates). By theorem 5.1, it can be expressed as a linear combination with positive coefficients of fundamental elements of the latter. Since these elements are also fundamental elements of the cone C(U), it follows that any maximal system of essentially distinct fundamental elements of the cone C(U) is a basis for it.

If a given fundamental element of the cone C(U) can be expressed as a linear combination with positive coefficients of some of the elements of that cone, then its index set includes the index set of each of these elements. Since, for α ranging over a proper subset of the index set of a fundamental element, the functions f_α are linearly independent, it follows that none of the fundamental elements among the elements in question can be essentially distinct from our fundamental
element. Hence each basis for the cone C(U) coincides with some maximal system of its fundamental elements. This proves the theorem.

DEFINITION 7.9. Two consistent linear inequality systems of type (11) are linearly equivalent if every inequality in each of them may be expressed as a linear combination with non-negative coefficients of a finite number of the inequalities of the other.

COROLLARY 7.5. Every U-convolution of system (10) contains a fundamental U-convolution of it and is linearly equivalent to that fundamental convolution.

THEOREM 7.8. A system (10) is consistent for any choice of the parameters t_α for which it is polyhedrally (L, L′)-closed and has a consistent U-convolution. A system (10) with an empty U-convolution is consistent for arbitrary values of the parameters t_α for which it is polyhedrally (L, L′)-closed.

This theorem is equivalent to the proposition that a polyhedrally (L, L′)-closed system (11) with a consistent or empty U-convolution is consistent.

PROOF. Let system (12) be a U-convolution of system (10). Suppose, for some values t_α = t_α^0 of the parameters for which system (10) is polyhedrally (L, L′)-closed, that system (12) is consistent. Then all finite subsystems of the latter are consistent. Hence, for t_α = t_α^0 (corollary 5.2′), all finite subsystems of (10) are consistent. By theorem 7.5 it follows that system (10) with t_α = t_α^0 (α ∈ M) is consistent. This proves the first part of the theorem. The second part may be proved analogously (using corollary 5.3′).
COROLLARY 7.6. A system (7) with a topologically closed conjugate cone is consistent if at least one of its U-convolutions, for some subspace U of L, is either empty or consistent.

We note here that the consistency of a U-convolution of system (10) for a given value of t_α does not, in general, imply that system (10) is consistent for that value.

EXAMPLE. The system

(1/(n(n+1))) x − y − ((2n+1)/(n(n+1))) ≥ 0 (n = 1, 2, …), y ≥ 0, −y ≥ 0

is clearly inconsistent. However, its U-convolution

y ≥ 0, −y ≥ 0,

for the subspace U consisting of the vectors (x, 0), has solutions (x, y) with y = 0.

The consistency of system (10) is also not guaranteed by the fact that it has an empty U-convolution.

EXAMPLE. The system

x + n ≤ 0 (n = 1, 2, …)

is inconsistent, despite the fact that its U-convolution for U = P¹ is empty.

To apply theorem 7.8 in obtaining sequential convolutions of system (11), it is necessary to resolve the question of the polyhedral closedness of a convolution of a polyhedrally closed
system of type (11). To do that we first introduce:

DEFINITION 7.10. Let

A = {a_α} (α ∈ M)

be an arbitrary system of elements of a given linear space L = L(P); let

L = R + S

be an arbitrary direct decomposition of it, and let a′_α (α ∈ M) and a″_α (α ∈ M) be the projections of the elements a_α into R and S respectively, a_α = a′_α + a″_α. Let

p_1 a′_{α_1} + … + p_{r+1} a′_{α_{r+1}} = θ (13)

be a zero-valued linear combination with positive coefficients of the elements a′_α which involves a finite subsystem of them whose rank is less by one than the number of its elements. Then the collection of all essentially distinct combinations

p_1 a_{α_1} + … + p_{r+1} a_{α_{r+1}}

associated with all possible combinations (13) will be called an RS-convolution of the system A of elements. If, for a system A, it is not possible to construct a zero-valued combination with the above properties, then we say that the RS-convolution of system A is empty.

For fixed R and S, the RS-convolution of system A is unique up to scalar multiples of its elements. Thus, for given R and S, every RS-convolution of system A generates one and the same cone C_S in L.

LEMMA 7.3. Let C be the convex cone generated in L by the elements of system A. Then, with the notation of definition 7.10, the relation C_S = C ∩ S holds if the RS-convolution of system A is nonempty, and the relation C ∩ S = θ (where θ is the null element of L) holds if the RS-convolution of A is empty.
PROOF. Clearly every non-null element of C ∩ S is a linear combination

p_1 a_{α_1} + … + p_m a_{α_m} (14)

with positive coefficients such that

p_1 a′_{α_1} + … + p_m a′_{α_m} = θ.

It is not hard to show that such a linear combination is a linear combination with positive coefficients of the elements of an RS-convolution of the finite system a_{α_1}, …, a_{α_m} (and hence is such a combination of the elements of an RS-convolution of A). Indeed, if the rank r of the system

a′_{α_1}, …, a′_{α_m}

is zero, then our assertion is obviously true. Let r be positive and suppose the first r elements of that system are linearly independent. Then we have

a′_{α_i} = Σ_{j=1}^r c_{ji} a′_{α_j} (i = 1, 2, …, m).

Substituting in the equation system

a′_{α_1} x_1 + … + a′_{α_m} x_m = θ,

we obtain the equivalent system

c_{j1} x_1 + … + c_{jm} x_m = 0 (j = 1, 2, …, r).

By theorem 5.2, every nonzero non-negative solution of that system may be expressed as a linear combination with non-negative coefficients of the fundamental elements of the cone of its non-negative solutions. Theorem 5.1 applies here, since that cone may be considered as a cone of composition of the system

c_{1i} u_1 + … + c_{ri} u_r (i = 1, 2, …, m)

of functions. For any fundamental element (x_1^0, …, x_m^0) of the cone in question, the zero-valued combination

x_1^0 a′_{α_1} + … + x_m^0 a′_{α_m}

becomes a combination of the form (13) of definition 7.10 upon eliminating from it the terms with zero coefficients. This proves our assertion. Hence the combination (14) may be expressed as a linear combination with positive coefficients of the elements of an RS-convolution of A, and hence is included in the cone C_S. Thus C ∩ S ⊆ C_S. On the other hand, C_S ⊆ C and C_S ⊆ S. Thus C ∩ S = C_S, as we were to prove. The second assertion of the lemma follows easily from the above arguments.
Now let L′ be the space of all linear functions on the space L (on which we are considering the system (10)). Let U be a subspace of L and let V be a direct complement of U in L. Let U′ and V′ denote the collections of elements f ∈ L′ such that, respectively, f(x) = 0 (x ∈ V) and f(x) = 0 (x ∈ U). Clearly L′ is the direct sum of U′ and V′. Hence lemma 7.3 implies the validity of:

THEOREM 7.9. Let L be a linear space (over an ordered field P) and let L′ be the space of all linear functions on L. Let U be any subspace of L and let V′ be the set of all elements f ∈ L′ such that

f(x) = 0 (x ∈ U).

Then the dual cone of any U-convolution of the linear inequality system

f_α(x) ≥ 0 (α ∈ M)

coincides with the intersection of the dual cone of that system and the subspace V′ of L′, if that convolution is not empty. If the convolution is empty, then that intersection coincides with the zero element of L′.

For a fundamental U-convolution, theorem 7.9 follows directly from lemma 7.3 with R = U′ and S = V′. For an arbitrary U-convolution, the theorem follows from lemma 7.3 and corollary 7.5.

REMARK. For an arbitrary space L′ of linear functions on L which contains the elements f_α (α ∈ M), the theorem is valid under the condition that L′ = U′ + V′.
COROLLARY 7.7. Let L′ be the space of all linear functions on L and let U be a subspace of L for which a U-convolution of system (11) is not empty. If system (11) is polyhedrally (L, L′)-closed, then each of its U-convolutions is polyhedrally (L, L′)-closed.

PROOF. By definition 7.2 and corollary 7.1, system (11) is polyhedrally (L, L′)-closed if and only if the system

f_α(x) − a_α t ≥ 0 (α ∈ M), t ≥ 0 (15)

associated with it is polyhedrally (L̄, L̄′)-closed. Here L̄ = L + P¹ is the set of pairs [x, t] (x ∈ L, t ∈ P¹) and L̄′ = L′ + P¹ is the set of pairs [f, p] (f ∈ L′, p ∈ P¹); L̄′ is the space of all linear functions on L̄.

If U is a given subspace of L and V is a direct complement of U in L, then we have the direct decomposition

L̄ = [U, 0] + [V, P¹],

where [U, 0] is the space of pairs [u, 0] (u ∈ U, 0 ∈ P¹) and [V, P¹] is the space of pairs [v, t] (v ∈ V, t ∈ P¹). Let [V, P¹]′ be the collection of elements [f, p] ∈ L̄′ such that

f(x) + p·0 = 0 ([x, 0] ∈ [U, 0]).

The subspace [V, P¹]′ may be considered as the intersection of the solution sets of the pairs of inequalities

[x, 0]([f, p]) ≥ 0, −[x, 0]([f, p]) ≥ 0 ([x, 0] ∈ [U, 0])

defined on L̄′. Thus the subspace [V, P¹]′ is polyhedrally (L̄′, L̄)-closed. Since the dual cone of system (15) is also polyhedrally (L̄′, L̄)-closed, its intersection with the subspace [V, P¹]′ is polyhedrally (L̄′, L̄)-closed. But, by theorem 7.9, this implies the polyhedral (L̄, L̄′)-closedness of the [U, 0]-convolution of system (15).

It is easy to show that a [U, 0]-convolution of system (15) may be obtained from a U-convolution of system (11) by multiplying all the free terms of the latter by the parameter t and adjoining the inequality t ≥ 0 to it. In connection with definition 7.2 (and corollary 7.1) this implies that the U-convolution of system (11) is polyhedrally (L, L′)-closed.

COROLLARY 7.8. If system (7) on the space Pⁿ (P an ordered field) is polyhedrally closed, and if U is a subspace of Pⁿ for which it has a non-empty U-convolution, then every U-convolution of the system is polyhedrally closed.
THEOREM 7.10. Let L′ be a subspace of the space of linear functions on L that contains the elements f_α (α ∈ M) associated with system (11). Let U be a subspace of L such that, for a direct complement V of it in L, we have L′ = U′ + V′, where U′ and V′ are the collections of elements f ∈ L′ such that

f(x) = 0 (x ∈ V) and f(x) = 0 (x ∈ U),

respectively. If a system (11) satisfying these conditions is polyhedrally (L, L′)-closed and has a non-empty U-convolution, then every U-convolution of the system is polyhedrally (L, L′)-closed.

The proof is analogous to the proof of corollary 7.7.

COROLLARY 7.9. For any two subspaces U1 and U2 of L, a U2-convolution of a U1-convolution of system (10) coincides with a U1 + U2-convolution of system (10).

PROOF. By the definition of a U-convolution of system (10), it suffices to prove the assertion for the system

f_α(x) ≥ 0 (α ∈ M) (16)
with linearly independent functions f_α(x) on L. Let V12 be a direct complement of U1 + U2 in L and let Ū2 be a direct complement of U1 ∩ U2 in U2. Then V1 = Ū2 + V12 is a direct complement of the subspace U1 in L. Since all linear combinations of the functions f_α(x) (α ∈ M) included in a U1-convolution of system (16) are equal to zero on U1 ∩ U2, it follows that the cone of U2-composition of these functions is also their cone of Ū2-composition. Thus a U2-convolution of a U1-convolution of system (16) is a Ū2-convolution of its U1-convolution.

By theorem 7.9, the dual cone of a U1-convolution of system (16) is equal to the intersection of the dual cone C′ of system (16) with the subspace V1′ of L′ (where L′ is the space of all linear functions on L) whose elements f satisfy

f(x) = 0 (x ∈ U1).

By the same theorem, the dual cone of a Ū2-convolution of a U1-convolution of system (16) coincides with the intersection of the cone C′ ∩ V1′ and the subspace (U1 + V12)′ of those elements f ∈ L′ such that f(x) = 0 (x ∈ Ū2). Hence it coincides with the intersection of the cone C′ and the subspace V1′ ∩ (U1 + V12)′. It is not hard to show that the latter coincides with the subspace of those elements f ∈ L′ such that

f(x) = 0 (x ∈ U1 + Ū2 = U1 + U2).

Indeed, V1′ is the subspace of those elements f ∈ L′ for which

f(x) = 0 (x ∈ U1),

and (U1 + V12)′ is the subspace of those elements f ∈ L′ for which

f(x) = 0 (x ∈ Ū2).

Hence V1′ ∩ (U1 + V12)′ is the subspace of those elements f ∈ L′ for which

f(x) = 0 (x ∈ U1) and f(x) = 0 (x ∈ Ū2).

Hence

C′ ∩ V1′ ∩ (U1 + V12)′ = C′ ∩ V12′.

On the other hand, by theorem 7.9, the intersection C′ ∩ V12′ coincides with the dual cone of a U1 + U2-convolution of system (16). But then the U2-convolution of a U1-convolution of system (16) coincides with a U1 + U2-convolution of that system. As we noted in the beginning of the proof, this proves our corollary.

In Chapter V we proved (see the remark after theorem 5.3) that if V is a direct complement of a subspace U of L, then the set of solutions of a U-convolution of system (11) on the subspace V coincides with the projection of the solution set of system (11) on that subspace.
Recall that the projections of the element x = u + v (u ∈ U, v ∈ V) on the subspaces U and V are the elements u and v, respectively.
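For finite systems this projection property can be seen directly on a small instance (the system below is our illustration, not the book's):

```python
# Finite system on R^2:  x1 - x2 >= 0,  1 - x1 >= 0,  x2 >= 0.
# Its U-convolution for U = span(e1) is  x2 >= 0,  1 - x2 >= 0.
# The Chapter V assertion: on V = span(e2), the solutions of the
# convolution are exactly the projections of solutions of the system.
def solves_system(x1, x2):
    return x1 - x2 >= 0 and 1 - x1 >= 0 and x2 >= 0

def solves_convolution(x2):
    return x2 >= 0 and 1 - x2 >= 0

grid = [i / 10 for i in range(-20, 21)]
for x2 in grid:
    projected = any(solves_system(x1, x2) for x1 in grid)
    assert projected == solves_convolution(x2)
```

The example immediately following in the text shows that this equality can fail for infinite systems, even polyhedrally closed ones.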
For an infinite system (11), this assertion is not true even if the system is polyhedrally closed.

EXAMPLE. The system

(1/(n(n+1))) x − y + ((2n+1)/(n(n+1))) z ≥ 0 (n = 1, 2, …), y ≥ 0, z ≥ 0, −y + t ≥ 0

(on R⁴) is polyhedrally closed, since it is stably consistent and since the set of vectors

(1/(n(n+1)), −1, (2n+1)/(n(n+1)), 0) (n = 1, 2, …), (0, 1, 0, 0), (0, 0, 1, 0), (0, −1, 0, 1)

defining its inequalities on R⁴ is bounded and closed. If U is the subspace of R⁴ consisting of the vectors (x, 0, 0, 0), then the U-convolution of that system consists of the inequalities

y ≥ 0, z ≥ 0, −y + t ≥ 0,

whose solution set is obviously inconsistent with the assertion above.
A sufficient condition for a solution of a U-convolution of a polyhedrally (L, L′)-closed system (11) on the subspace V (where V is the direct complement of the subspace U in L) to be the projection of a solution of system (11) is given by:
THEOREM 7.11. Let v_0 be a solution of some U-convolution of a polyhedrally closed system (11) on the direct complement V of the subspace U of L, or let it be an arbitrary element of V if that convolution is empty. If the system

f_α(u) − (a_α − f_α(v_0)) ≥ 0 (α ∈ M) (17)

on U is polyhedrally closed (as a system of linear inequalities on U), then system (11) has a solution x_0 = u_0 + v_0 (u_0 ∈ U).
PROOF. Since upon substituting x = v_0 in a U-convolution of system (11) it becomes a total convolution of system (17), it follows from theorem 7.8 that system (17) is consistent. By the same theorem, the system is also consistent in the case where the total convolution is empty. If u_0 is a solution of system (17), then x_0 = u_0 + v_0 is a solution of system (11). This proves our theorem.

The example which we present below shows that the polyhedral closedness of system (11) does not, in general, imply the polyhedral closedness of the system (17) corresponding to it.

EXAMPLE. Let U be the subspace given by the vectors (x, y, z, 0), and let v_0 be the vector (0, 0, 0, 0). Then the system (17) corresponding to the polyhedrally closed system of the preceding example is

(1/(n(n+1))) x − y + ((2n+1)/(n(n+1))) z ≥ 0 (n = 1, 2, …), y ≥ 0, z ≥ 0, −y ≥ 0.

The latter is not polyhedrally closed. Indeed, it was shown in remark 3 to theorem 7.4 that such a system is not finitely determined, and hence is not polyhedrally closed (see theorem 7.4).

For an arbitrary system (11), only the following assertion, which follows from the definition of a U-convolution, is valid: let V be a direct complement of a proper subspace U of L; then the projection of the set of solutions of a consistent system (11) on the subspace V is contained in the set of solutions of any U-convolution of the system on that subspace. In the case of an empty U-convolution we take the solution set of that convolution on V to be the whole of V.

§3. A SEPARATION THEOREM FOR CONVEX SETS.
DEFINITION 7.11. Let L = L(P) be an arbitrary linear space (P an ordered field) and let L′ be some subspace of the space L* of all linear functions defined on L. If a non-empty set Q of the space L is the intersection of a given system S of half-spaces (L′-half-spaces) defined on L by inequalities

f(x) − a ≥ 0,

where f is a non-zero element of L′ and a ∈ P, then we say that Q has a polyhedral L′-representation S. If the system S is finite, then the set Q is said to be an L′-polyhedral set. An arbitrary system of L′-half-spaces of L is said to be polyhedrally (L, L′)-closed if the system of inequalities defining it is polyhedrally (L, L′)-closed.
DEFINITION 7.12. Two sets A and B in the space L are said to be totally L′-separated if they are nonempty and if it is possible to find two planes f(x) − a = 0 and f(x) − b = 0 (a < b, f ∈ L′) such that

f(x) − a ≤ 0 for all x ∈ A

and

f(x) − b ≥ 0 for all x ∈ B.

For L′ = L*, the L′ is dropped from all the names introduced here (half-spaces, polyhedral representations, etc.).

THEOREM 7.12. Let two nonempty convex sets A and B in the space L which do not have common elements have, respectively, polyhedral L′-representations S(A) and S(B)
whose union is polyhedrally (L, L′)-closed. Then they are totally L′-separated.

PROOF. Let

f_α(x) − a_α ≥ 0 (α ∈ M) (18)

and

g_β(x) − b_β ≥ 0 (β ∈ N) (19)

be systems of linear inequalities on L, with f_α, g_β ∈ L′, which define the L′-representations S(A) and S(B), and such that the system

f_α(x) − a_α ≥ 0 (α ∈ M), g_β(x) − b_β ≥ 0 (β ∈ N) (20)

is polyhedrally (L, L′)-closed. Since system (20) is inconsistent, it follows from theorem 7.5 that at least one of its finite subsystems

f_{α_i}(x) − a_{α_i} ≥ 0 (i = 1, 2, …, k), g_{β_i}(x) − b_{β_i} ≥ 0 (i = 1, 2, …, ℓ)

is not consistent. Thus the inequality −t ≥ 0 is a consequence of the system

f_{α_i}(x) − a_{α_i} t ≥ 0 (i = 1, 2, …, k), g_{β_i}(x) − b_{β_i} t ≥ 0 (i = 1, 2, …, ℓ).

Hence the relation

−t = Σ_{i=1}^k p_i (f_{α_i}(x) − a_{α_i} t) + Σ_{i=1}^ℓ q_i (g_{β_i}(x) − b_{β_i} t),

with non-negative coefficients p_i and q_i, is an identity in x ∈ L and t ∈ P. Since each of the systems (18) and (19) is consistent, neither of these sums is empty. Setting t = 1 and introducing the notation

f(x) − a = −Σ_{i=1}^k p_i (f_{α_i}(x) − a_{α_i}) − 1/2, (21)

we get

f(x) − a = Σ_{i=1}^ℓ q_i (g_{β_i}(x) − b_{β_i}) + 1/2.

From these two relations it follows that

f(x) − a < 0 for all x ∈ A

and

f(x) − a > 0 for all x ∈ B.

If, in this discussion, we replace (21) by

f(x) − b = −Σ_{i=1}^k p_i (f_{α_i}(x) − a_{α_i}) − 2/3,

we get

f(x) − b < 0 for all x ∈ A

and

f(x) − b > 0 for all x ∈ B.

Since

a = −Σ_{i=1}^k p_i a_{α_i} + 1/2

and

b = −Σ_{i=1}^k p_i a_{α_i} + 2/3,

it follows that a < b, and hence the sets A and B are totally L′-separated.
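The construction in the proof can be traced on a one-dimensional instance (the concrete sets, and the sign conventions, are our reading of the text):

```python
# S(A): -x >= 0 defines A = (-inf, 0];  S(B): x - 1 >= 0 defines B = [1, inf).
# Their union is inconsistent, and the identity
#     -t = p*(-x - 0*t) + q*(x - 1*t)
# holds with p = q = 1.  Following the proof, set
#     f(x) = -p*(-x) = x,   a = -p*0 + 1/2 = 1/2,   b = -p*0 + 2/3 = 2/3.
def f(x):
    return x

a, b = 0.5, 2 / 3

for x in [0.0, -0.5, -3.0]:          # sample points of A
    assert f(x) - a < 0
for x in [1.0, 1.5, 10.0]:           # sample points of B
    assert f(x) - b > 0
assert a < b                          # two distinct separating planes
```

The planes f(x) = a and f(x) = b realize the total separation required by definition 7.12.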
COROLLARY 7.10. If two non-empty convex L′-polyhedral sets in the space L = L(P) do not have common elements, then they are totally L′-separated, and hence there exists a plane f(x) − a = 0 in the space L, with f ∈ L′, which strictly separates the two sets, i.e. for the elements of one of the two sets we have f(x) − a < 0, and for the elements of the other f(x) − a > 0.

REMARK. If at least one of the two sets considered here is not L′-polyhedral, then the proposition about their strict separability is no longer valid, even if we assume that one of them is L′-polyhedral and that the other has a polyhedrally (L, L′)-closed polyhedral L′-representation.

EXAMPLE. The consistent system

(1/(n(n+1))) x − y + ((2n+1)/(n(n+1))) ≤ 0 (n = 1, 2, …), y ≥ 0

is polyhedrally closed, and hence the set A of its solutions has a polyhedrally closed representation. It obviously has no points in common with the polyhedral set B defined by the inequality −y ≥ 0. The sets A and B cannot, obviously, be strictly separated by a plane.
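The geometry behind this example can be checked numerically (coefficients as reconstructed here): A = {(x, y) : y ≥ g(x)} with g(x) = max_n (x + 2n + 1)/(n(n+1)), and B = {(x, y) : y ≤ 0}. The function g is positive everywhere, so the sets are disjoint, but g(x) → 0 as x → −∞, so the gap between them vanishes and no plane separates them strictly.

```python
# g(x) = max over n of (x + 2n + 1) / (n(n+1)); a large finite range of n
# suffices for the x-values sampled below.
def g(x, N=500000):
    return max((x + 2 * n + 1) / (n * (n + 1)) for n in range(1, N))

assert g(0) > 0 and g(-100) > 0 and g(-10**5) > 0   # A lies strictly above B
assert g(-10**5) < 1e-4                              # ...but the gap vanishes
```

A sketch under our sign assumptions; the qualitative picture (disjoint sets approaching each other at infinity) is exactly what defeats strict separation.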
On the other hand, by corollary 7.10, strictly L′-separated L′-polyhedral sets are always totally L′-separated. For two arbitrary sets that are strictly L′-separated and that have polyhedrally (L, L′)-closed L′-representations, this assertion is not always true.

EXAMPLE. Let A be the set of solutions of the preceding system and let C be the set of solutions of the consistent polyhedrally closed system

(1/(n(n+1))) x + y + ((2n+1)/(n(n+1))) ≤ 0 (n = 1, 2, …), −y ≥ 0.

Clearly, the sets A and C are strictly separated, but they are not totally separated.

DEFINITION 7.13. The outward normal of the half-space defined in Rⁿ by the inequality

(x, x^0) − a = x_1^0 x_1 + … + x_n^0 x_n − a ≤ 0, where x^0 = (x_1^0, …, x_n^0) ≠ (0, …, 0) (x_1^0, …, x_n^0, a ∈ R),

is the vector x^0 = (x_1^0, …, x_n^0).
Using the topological properties of the space R of real numbers (more precisely, the topological properties of Rⁿ that follow from those of R), we get from theorem 7.12, upon setting P = R:

COROLLARY 7.11. If two non-empty convex sets A and B in the space Rⁿ which do not have any points in common have polyhedral representations S(A) and S(B) such that the outward normals of the half-spaces involved in these representations generate all of Rⁿ as a cone, then A and B are totally separated.

PROOF. Let

(x, x_α^0) − a_α ≤ 0 (α ∈ M), (x, y_β^0) − b_β ≤ 0 (β ∈ N) (22)

be the two systems of inequalities associated, respectively, with the representations S(A) and S(B). By the inconsistency of the union of these systems, the system

(x, x_α^0) − a_α t ≤ 0 (α ∈ M), (x, y_β^0) − b_β t ≤ 0 (β ∈ N), t ≥ 0 (23)

has no solution with t ≠ 0. Thus an arbitrary solution [x*, t*] of it (necessarily with t* = 0) defines a solution x* of the system

(x, x_α^0) ≤ 0 (α ∈ M), (x, y_β^0) ≤ 0 (β ∈ N).

Since the elements x_α^0 (α ∈ M) and y_β^0 (β ∈ N) generate Rⁿ as a cone, that system has no nonzero solution. But then x* = θ, and system (23) has no nonzero solutions. Thus, by the topological properties of Rⁿ, the dual cone of system (23) (on Rⁿ⁺¹) coincides with Rⁿ⁺¹. This means, in particular, that system (22) is polyhedrally closed (is polyhedrally (L, L′)-closed for L = Rⁿ and L′ the conjugate space of Rⁿ). But then our proposition follows from theorem 7.12.

In Rⁿ, any closed non-empty convex set H that does not coincide with Rⁿ has a polyhedral
representation (this being the intersection of a system of half-spaces in Rⁿ). Furthermore, a bounded set H, under these conditions, has a polyhedral representation such that the outward normals of the half-spaces involved in it generate all of Rⁿ as a cone. Thus corollary 7.11 may be viewed as a generalization of the existing theorems on the strict separation of two closed convex sets in Rⁿ that have no elements in common, when one of the two sets is assumed to be bounded.

§4. THE LINEAR PROGRAMMING PROBLEM FOR POLYHEDRALLY CLOSED SYSTEMS OF LINEAR INEQUALITIES.

Let

f_α(x) − a_α ≥ 0 (α ∈ M) (24)

be a consistent polyhedrally (L, L′)-closed system of linear inequalities with a set of solutions H, and let f(x) (x ∈ L) be a linear function with f ∈ L′. Here L′ is a subspace of the space L* of all linear functions on L that includes the elements f_α (α ∈ M). If there is a least upper bound T (T ∈ P) of the values of the function f(x) on H, then we say that T is the largest value of f on H. Clearly T is the least upper bound of the values of the parameter t for which the system

f_α(x) − a_α ≥ 0 (α ∈ M), f(x) − t ≥ 0 (25)

is consistent.
THEOREM 7.13. If the linear function f(x) (x ∈ L, f ∈ L′) has a greatest value T on the set H of solutions of a consistent polyhedrally (L, L′)-closed system (24) with nonzero rank, then system (24) has a finite subsystem S, whose rank equals the number of its inequalities, such that the function f(x) has a greatest value on the solution set of S which is attained there and coincides with T. If the subsystem S has no subsystem with this property that is distinct from it, and if the value T is attained in H, then it is attained at those and only those solutions of system (24) which satisfy the boundary equations of all the inequalities of S.

PROOF. The inequality f(x) − T ≤ 0 is a consequence of the polyhedrally (L, L′)-
closed system (24). Thus, by theorem 7.3 and corollary 2.1 there exist linearly independent functions f k (x) (k = 1; 2; :::; `) and non-negative elements p0 ; p 1 ; :::; p ` in P such that the relation ` X f (x) T = p k (f k (x) a k ) p0 (26) k=1
is an identity in $x \in L$.

Suppose $p_0 \ne 0$. By definition of T there exists an element $x_0$ in H such that
$$f(x_0) - T > -p_0.$$
By the preceding relation and (26), we have
$$\sum_{k=1}^{\ell} p_k \left( f_{\alpha_k}(x_0) - a_{\alpha_k} \right) = f(x_0) - T + p_0 > -p_0 + p_0,$$
and further
$$\sum_{k=1}^{\ell} p_k \left( f_{\alpha_k}(x_0) - a_{\alpha_k} \right) > 0,$$
which is not possible, since $x_0$ satisfies
$$f_{\alpha_k}(x) - a_{\alpha_k} \le 0 \quad (k = 1, 2, \dots, \ell) \qquad (27)$$
and all of the elements $p_k$ are non-negative. Consequently $p_0 = 0$. But then, by relation (26), $f(x) = T$ for all solutions of the system
$$f_{\alpha_k}(x) - a_{\alpha_k} = 0 \quad (k = 1, 2, \dots, \ell) \qquad (28)$$
of boundary equations associated with system (27). Since $f(x) \le T$ for all solutions of system (27), system (27) is a subsystem S that satisfies the conditions of the first part of the theorem.

Now suppose the conditions of the second part of the theorem are satisfied, and suppose $x^*$ is an element of H such that $f(x^*) = T$. Using relation (26) (with $p_0 = 0$) we get
$$\sum_{k=1}^{\ell} p_k \left( f_{\alpha_k}(x^*) - a_{\alpha_k} \right) = f(x^*) - T = 0.$$
Since S (i.e. system (27)) is minimal, as is assumed in the second part of the theorem, we conclude that $x^*$ is a solution of the system (28) associated with it. Indeed, otherwise at least one of the elements $p_k$ would be zero (each term of the sum is non-positive, so a term with $f_{\alpha_k}(x^*) - a_{\alpha_k} < 0$ forces $p_k = 0$), and system (27) could be replaced by a proper subsystem of itself (i.e. by a subsystem distinct from (27)). Since, on the other hand, every $x^* \in H$ that satisfies system (28) also satisfies $f(x^*) = T$ (this follows from relation (26)), the second part of the theorem is proved.
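In $R^n$ the content of theorem 7.13 can be illustrated numerically. The sketch below is our illustration, not from the original: the system (three inequalities in $R^2$), the objective, and the tolerances are invented. It enumerates the vertices of the solution polyhedron for the objective $f(x, y) = x + y$ and checks that the greatest value $T = 3/2$ is attained exactly at the solutions satisfying the boundary equation $x + y = 3/2$ of the one-inequality subsystem S.

```python
from itertools import combinations

# Hypothetical finite system (24): a_i . (x, y) <= b_i, objective c . (x, y).
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [1.0, 1.0, 1.5]
c = (1.0, 1.0)

def vertex(i, j):
    """Intersection of the boundary equations of inequalities i and j, if unique."""
    (a11, a12), (a21, a22) = A[i], A[j]
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None
    return ((b[i] * a22 - a12 * b[j]) / det, (a11 * b[j] - b[i] * a21) / det)

def feasible(p, tol=1e-9):
    return all(a[0] * p[0] + a[1] * p[1] <= bi + tol for a, bi in zip(A, b))

verts = [v for i, j in combinations(range(3), 2)
         if (v := vertex(i, j)) is not None and feasible(v)]
T = max(c[0] * v[0] + c[1] * v[1] for v in verts)  # greatest value of f on H

# The maximizing vertices all satisfy the boundary equation x + y = 3/2 of
# the single-inequality subsystem S, as in the second part of theorem 7.13.
maximizers = [v for v in verts if abs(c[0] * v[0] + c[1] * v[1] - T) < 1e-9]
assert abs(T - 1.5) < 1e-9
assert all(abs(v[0] + v[1] - 1.5) < 1e-9 for v in maximizers)
```

Here S consists of the single inequality $x + y \le 3/2$; its rank (one) equals its number of inequalities, and the set of maximizers is the whole edge cut out by its boundary equation.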
REMARK. The existence of a largest value T of the function $f(x)$ $(x \in L, f \in L')$ on the set H of solutions of a consistent polyhedrally $(L, L')$-closed system (24) does not imply that the value is attained on the set H. To show this, consider the system
$$-y \le 0, \qquad \frac{1}{n(n+1)}\, x - y + \frac{2n+1}{n(n+1)} \le 0 \quad (n = 1, 2, \dots),$$
which, in view of corollary 7.2, is polyhedrally closed. The greatest value T of the function $-y$ on the set of solutions of this system is zero. It is clear that this value is not attained on that set.
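The non-attainment in this example can be checked by computation. The sketch below is ours (the truncation bound n_max is an artifact of the computation, not of the infinite system): each inequality of the family is rewritten as $x \le y\,n(n+1) - (2n+1)$, so a pair $(x, y)$ with $y \ge 0$ solves the system exactly when x does not exceed the infimum of these bounds.

```python
def x_upper_bound(y, n_max=10_000):
    # Each inequality x/(n(n+1)) - y + (2n+1)/(n(n+1)) <= 0 is equivalent to
    # x <= y*n*(n+1) - (2n+1); a pair (x, y) with y >= 0 solves the first
    # n_max inequalities iff x is at most the minimum of these bounds.
    return min(y * n * (n + 1) - (2 * n + 1) for n in range(1, n_max + 1))

# For y > 0 the minimum stabilizes at a small n (the term y*n*(n+1)
# eventually dominates), so a feasible x exists for every y > 0: values of
# -y arbitrarily close to the supremum 0 are taken on H.
assert x_upper_bound(1.0) == x_upper_bound(1.0, n_max=100)

# For y = 0 the bound is -(2*n_max + 1): it decreases without limit as more
# inequalities are added, so no x satisfies all of them and -y = 0 is never
# attained.
assert x_upper_bound(0.0, n_max=100) == -201
assert x_upper_bound(0.0, n_max=200) < x_upper_bound(0.0, n_max=100)
```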
THEOREM 7.14. In order for a non-null linear function $f(x)$ $(x \in L, f \in L')$ having a greatest value T on the set H of solutions of a consistent system (24) with nonzero rank to attain that value on H, it is sufficient that the corresponding system (25) with $t = T$ be polyhedrally $(L, L')$-closed.

PROOF. Suppose the system (25) with $t = T$ is inconsistent. Then the convex set H and the convex set K of solutions of the inequality $-f(x) + T \le 0$ are two sets in L that have no elements in common. But system (25) with $t = T$ is polyhedrally $(L, L')$-closed. Hence theorem 7.12 applies here. Thus the sets H and K are totally $L'$-separated. Moreover, any plane separating them has the form $-f(x) + c = 0$. If
$$-f(x) + a = 0 \quad \text{and} \quad -f(x) + b = 0 \quad (a < b)$$
are two such separating planes (their existence follows from theorem 7.12), then $f(x) - a \le 0$ for all $x \in H$, which contradicts the definition of the element T (since then clearly $a < b \le T$, so $f$ would be bounded on H by $a < T$). The contradiction proves the theorem.
REMARK. The condition of the theorem is not necessary.

EXAMPLE. The system
$$\frac{1}{n(n+1)}\, x - y + \frac{2n+1}{n(n+1)}\, z \le 0 \quad (n = 1, 2, \dots), \qquad -y \le 0, \qquad -z \le 0$$
is polyhedrally closed (here $L = L' = R^3$). The largest value T of the function $-y$ on the set of solutions of the system is, obviously, zero. It is attained at every solution $(x, 0, 0)$ with $x \le 0$ of the system. The system (25) with $t = T$ for the present case has the form
$$\frac{1}{n(n+1)}\, x - y + \frac{2n+1}{n(n+1)}\, z \le 0 \quad (n = 1, 2, \dots), \qquad -y \le 0, \qquad y \le 0, \qquad -z \le 0.$$
In view of theorem 7.3 it is not polyhedrally closed, since the inequality $z \le 0$, being a consequence of it, cannot be expressed as a linear combination with non-negative coefficients of a finite number of its inequalities.

We now proceed to consider the problem dual to the maximum problem associated with the linear function $f(x)$ $(x \in L, f \in L')$ and the polyhedrally $(L, L')$-closed system (24). To this end, we first define the space M(P) as the direct sum of the spaces $P_\alpha$ $(\alpha \in M)$, each isomorphic to the space $P^1$. An element $\vec{x}$ of M(P) is a system $(\dots, x_\alpha, \dots)$ with a finite number of nonzero coordinates, which includes one and only one element $x_\alpha$ from each given $P_\alpha$.
DEFINITION 7.14. If $c_\alpha$ $(\alpha \in M)$ are arbitrary elements of the field P, then the expression $\sum_{\alpha \in M} c_\alpha x_\alpha$ is said to be a linear form on the space M(P). An element $\vec{x}$ of M(P) is said to be non-negative if all of its components $x_\alpha$ are non-negative, and is called positive if, in addition, at least one of them is not zero. The problem of minimizing the linear form $\sum_{\alpha \in M} a_\alpha u_\alpha$ on the set N of non-negative elements $(\dots, u_\alpha, \dots)$ of M(P), for each of which the relation
$$\sum_{\alpha \in M} u_\alpha f_\alpha(x) = f(x)$$
is an identity in $x \in L$, is called the dual problem of the problem of maximizing the linear function $f(x)$ $(x \in L, f \in L')$ on the set of solutions of a polyhedrally $(L, L')$-closed system (24).
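Returning to the example preceding this definition: that the consequence $z \le 0$ follows from no finite subsystem can also be verified directly, since for every N the point $(-(2N+1),\, 0,\, 1)$, with $z = 1 > 0$, satisfies the first N inequalities of the family together with $-y \le 0$, $y \le 0$ and $-z \le 0$. A sketch (ours; the tolerance is an artifact of floating-point arithmetic):

```python
def truncation_witness(N):
    # Point with z = 1 > 0 that satisfies -y <= 0, y <= 0, -z <= 0 and the
    # first N inequalities x/(n(n+1)) - y + (2n+1)z/(n(n+1)) <= 0.
    x, y, z = -(2 * N + 1), 0.0, 1.0
    family_ok = all(
        x / (n * (n + 1)) - y + (2 * n + 1) * z / (n * (n + 1)) <= 1e-12
        for n in range(1, N + 1))
    return family_ok and -y <= 0 and y <= 0 and -z <= 0

# z <= 0 fails at each witness, so it is a consequence of no finite
# subsystem, although it does follow from the whole infinite system.
assert all(truncation_witness(N) for N in (1, 10, 1000))
```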
THEOREM 7.15. If there exists a largest value T of the linear function $f(x)$ $(x \in L, f \in L')$ on the set of solutions of a polyhedrally $(L, L')$-closed consistent system (24), then the set N is not empty, a smallest value of the linear form
$$a(\vec{u}) = \sum_{\alpha \in M} a_\alpha u_\alpha$$
exists and is attained on it, and that value equals T. Conversely, if the set N is not empty and a smallest value T of the linear form $a(\vec{u})$ on it exists, then there exists a largest value of the function $f(x)$ on the set H, and that value coincides with T.

PROOF. If T is the largest value of the function $f(x)$ on the set H, then the inequality $f(x) - T \le 0$ is clearly a consequence of system (24). In view of theorem 7.3 it is a consequence of some finite subsystem
$$f_{\alpha_k}(x) - a_{\alpha_k} \le 0 \quad (k = 1, 2, \dots, \ell)$$
of system (24). Using theorem 2.4 we conclude that the relation
$$f(x) - T = \sum_{k=1}^{\ell} u^0_{\alpha_k} \left( f_{\alpha_k}(x) - a_{\alpha_k} \right) - u_0 \qquad (29)$$
is an identity in $x \in L$, where $u^0_{\alpha_k}$ $(k = 1, 2, \dots, \ell)$ and $u_0$ are non-negative elements of P. The element $u_0$ here cannot be nonzero. Otherwise it follows from the relation
$$f(x) - (T - u_0) = \sum_{k=1}^{\ell} u^0_{\alpha_k} \left( f_{\alpha_k}(x) - a_{\alpha_k} \right)$$
that
$$f(x) \le T - u_0 < T$$
for all x in H. But this contradicts the definition of the element T. If $\vec{u}^0 = (\dots, u^0_\alpha, \dots)$ is the element of M(P) defined by the elements $u^0_{\alpha_k}$ $(k = 1, 2, \dots, \ell)$ (in which all $u^0_\alpha$ with $\alpha \ne \alpha_k$ are zero), then relation (29) yields
$$T = \sum_{\alpha \in M} u^0_\alpha a_\alpha \qquad (30)$$
and the relation
$$\sum_{\alpha \in M} u^0_\alpha f_\alpha(x) = f(x)$$
is an identity in $x \in L$. This means that $\vec{u}^0 \in N$. Hence N is not empty.

Suppose $\vec{u}' = (\dots, u'_\alpha, \dots)$ is an element of N. Consider the identity in $x \in L$:
$$f(x) - a(\vec{u}') = \sum_{\alpha \in M} u'_\alpha \left( f_\alpha(x) - a_\alpha \right).$$
It implies, for each $x \in H$, that $f(x) \le a(\vec{u}')$. Thus $T \le a(\vec{u}')$. On the other hand, if $\varepsilon > 0$ ($\varepsilon$ in P), then the inequality $f(x) - (T + \varepsilon) \le 0$ is obviously a consequence of system (24) (since the inequality $f(x) - T \le 0$ is a consequence of it). By theorem 7.3, it is a consequence of one of that system's finite subsystems. But then, by theorem 2.4, the relation
$$f(x) - (T + \varepsilon) = \sum_{\alpha \in M} u_\alpha \left( f_\alpha(x) - a_\alpha \right) - u_0$$
is an identity in $x \in L$, where $\vec{u} = (\dots, u_\alpha, \dots)$ is a non-negative element of M(P) and $u_0 \ge 0$ $(u_0 \in P)$. Thus we have
$$\sum_{\alpha \in M} a_\alpha u_\alpha \le T + \varepsilon.$$
In view of the relation $T \le a(\vec{u}')$ for any $\vec{u}' \in N$, this implies that T is the greatest lower bound of the function $a(\vec{u})$ on N or, equivalently, the smallest value of the function $a(\vec{u})$ on N. By relation (30), this value is attained on N.

Now suppose N is not empty and there exists a smallest value T of the linear form
$$a(\vec{u}) = \sum_{\alpha \in M} a_\alpha u_\alpha$$
on it. Then, for any $\vec{u} \in N$, the relation
$$f(x) - a(\vec{u}) = \sum_{\alpha \in M} u_\alpha \left( f_\alpha(x) - a_\alpha \right)$$
is an identity in x. It implies that $f(x) \le a(\vec{u})$ for all $x \in H$. But then the inequality $f(x) \le T$ holds on H. Suppose the inequality $f(x) \le T - \varepsilon$ holds on H for some $\varepsilon > 0$ ($\varepsilon$ in P). Then, as in the proof of the first part of the theorem, the relation
$$f(x) - (T - \varepsilon) = \sum_{\alpha \in M} u_\alpha \left( f_\alpha(x) - a_\alpha \right) - u_0$$
is an identity in $x \in L$, where $\vec{u} = (\dots, u_\alpha, \dots)$ and $u_0 \ge 0$ $(u_0 \in P)$. This implies
$$\sum_{\alpha \in M} a_\alpha u_\alpha \le T - \varepsilon,$$
contrary to the definition of T. Consequently, T is the largest value of f on H. This completes the proof of the theorem.
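In finite dimensions theorem 7.15 is the duality theorem of linear programming: for a system $Ax \le b$ in $R^n$ and objective $c \cdot x$, the set N consists of the vectors $u \ge 0$ with $A^{\mathsf{T}} u = c$, and the smallest value of $b \cdot u$ on N equals the largest value of $c \cdot x$ on H. A sketch on an invented two-variable system (all data below are our illustration, not from the original; the coarse grid search stands in for a proper dual solver):

```python
from itertools import product

# Hypothetical finite system (24) in R^2: a_i . x <= b_i, objective c . x.
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [1.0, 1.0, 1.5]
c = (1.0, 1.0)

# A maximizer of c . x on H lies on the edge x + y = 1.5:
p = (0.5, 1.0)
assert all(a[0] * p[0] + a[1] * p[1] <= bi + 1e-9 for a, bi in zip(A, b))
T = c[0] * p[0] + c[1] * p[1]  # largest value, here 1.5

# The set N: vectors u >= 0 with sum_i u_i a_i = c (the identity in x).
# Grid search for the smallest value of the linear form b . u on N.
best = None
for u in product([i / 10 for i in range(11)], repeat=3):
    if all(abs(sum(u[i] * A[i][k] for i in range(3)) - c[k]) < 1e-9
           for k in range(2)):
        val = sum(u[i] * b[i] for i in range(3))
        if best is None or val < best:
            best = val

# Per theorem 7.15 the smallest value of b . u on N exists, is attained
# (here at u = (0, 0, 1)), and equals T.
assert best is not None and abs(best - T) < 1e-9
```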
COROLLARY 7.12. The linear function $f(x)$ $(x \in L, f \in L')$ has a largest value on the set of solutions of a polyhedrally $(L, L')$-closed consistent system (24) if and only if the set N of non-negative solutions $(\dots, u_\alpha, \dots)$ of the equation
$$\sum_{\alpha \in M} u_\alpha f_\alpha(x) = f(x) \quad (x \in L)$$
having a finite number of nonzero coordinates is not empty, and there is a smallest value of
$$a(\vec{u}) = \sum_{\alpha \in M} a_\alpha u_\alpha$$
on that set.
THEOREM 7.16. The linear function $f(x)$ $(x \in L, f \in L')$ has a largest value on the set of solutions of a polyhedrally $(L, L')$-closed system (24) if and only if the system
$$\begin{aligned}
&f_\alpha(x) - a_\alpha \le 0 \quad (\alpha \in M),\\
&\sum_{\alpha \in M} u_\alpha f_\alpha(t) = f(t) \quad (t \in L),\\
&\sum_{\alpha \in M} a_\alpha u_\alpha - f(x) \le 0,\\
&-u_\alpha \le 0 \quad (\alpha \in M)
\end{aligned} \qquad (31)$$
has at least one solution $(x, \vec{u})$, where $x \in H$ and $\vec{u}$ is a non-negative vector $(\dots, u_\alpha, \dots)$ $(\alpha \in M)$ with a finite number of positive components (which are elements of P).

PROOF. Let $f(x_0)$ be the largest value of $f(x)$ on H, i.e. $f(x) \le f(x_0)$ $(x, x_0 \in H)$. Then, by the first part of theorem 7.15, there exists an element $\vec{u}^0 = (\dots, u^0_\alpha, \dots)$ of the set N such that
$$f(x_0) = \sum_{\alpha \in M} a_\alpha u^0_\alpha.$$
But this means that system (31) has a solution $(x_0, \vec{u}^0)$ that satisfies the condition of the theorem.

Conversely, let $(x', \vec{u}')$ be a solution of system (31) that satisfies the conditions of the theorem. Then, for $\vec{u}' = (\dots, u'_\alpha, \dots)$ and $x'$, we have
$$\sum_{\alpha \in M} u'_\alpha f_\alpha(t) = f(t) \quad (t \in L)$$
identically in $t \in L$, and
$$\sum_{\alpha \in M} a_\alpha u'_\alpha \le f(x').$$
Hence
$$f(t) - f(x') \le \sum_{\alpha \in M} u'_\alpha \left( f_\alpha(t) - a_\alpha \right) \quad (t \in L).$$
But then the inequality
$$f(x) - f(x') \le 0$$
holds on H. Hence $f(x')$ is the largest value of $f(x)$ on H (since $x' \in H$).
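In finite dimensions a solution $(x', \vec{u}')$ of system (31) is precisely an optimality certificate that can be checked mechanically. The sketch below is our illustration (the system and the candidate pair are invented, not from the original):

```python
# Hypothetical data: inequalities a_i . x <= b_i and objective c . x in R^2.
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [1.0, 1.0, 1.5]
c = (1.0, 1.0)

x_prime = (0.5, 1.0)       # candidate maximizer
u_prime = (0.0, 0.0, 1.0)  # candidate dual vector

def solves_31(x, u, tol=1e-9):
    # x solves the system (24): a_i . x <= b_i for all i.
    feas = all(a[0] * x[0] + a[1] * x[1] <= bi + tol for a, bi in zip(A, b))
    # u is non-negative.
    nonneg = all(ui >= 0 for ui in u)
    # sum_i u_i a_i = c, i.e. the identity sum u_alpha f_alpha(t) = f(t).
    identity = all(abs(sum(u[i] * A[i][k] for i in range(3)) - c[k]) < tol
                   for k in range(2))
    # sum_i b_i u_i - c . x <= 0.
    gap = sum(u[i] * b[i] for i in range(3)) - (c[0] * x[0] + c[1] * x[1]) <= tol
    return feas and nonneg and identity and gap

assert solves_31(x_prime, u_prime)

# As in the proof, the certificate bounds every solution of (24):
# c . x <= sum_i b_i u_i = c . x'; spot-check a few feasible points.
for x in [(0.0, 0.0), (1.0, 0.5), (-2.0, 1.0)]:
    if all(a[0] * x[0] + a[1] * x[1] <= bi for a, bi in zip(A, b)):
        assert c[0] * x[0] + c[1] * x[1] <= c[0] * x_prime[0] + c[1] * x_prime[1] + 1e-9
```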
REFERENCES
Agmon, S.
1. The relaxation method for linear inequalities, Canadian J. Math. 6 (1954), 382-392.
Alexandrov, A.D.
1. Convex polyhedra. Gostekhizdat 1950 (Russian).
Arnoff, E.L. and Sengupta, S.S.
1. Mathematical programming, Progress in Operations Research, Vol. 1 (ed. Ackoff, R.L.), Wiley, New York-London, 1961, 105-210.
Arrow, K.J., Hurwicz, L. and Uzawa, H.
1. Studies in Linear and Non-linear Programming, Stanford University Press, Stanford, California, 1958.
Borosov, A.C.
1. Linear programming in techno-economic problems. Nauka, 1974 (Russian).
Beckenbach, E. and Bellman, R.
1. Inequalities, Springer-Verlag, Berlin-Göttingen-Heidelberg, 1961.
Ben-Israel, A.
1. Notes on linear inequalities I. J. Math. Analysis and Applications 9, No. 2 (1964), 303-314.
Berge, C. and Ghouila-Houri, A.
1. Programmes, jeux et réseaux de transport, Dunod, Paris 1962.
Birkhoff, G.
1. Lattice Theory. American Mathematical Society, Providence, Rhode Island 1948.
Bonnesen, T. and Fenchel, W.
1. Theorie der konvexen Körper. Springer, Berlin 1934.
Braunschweiger, C.C.
1. An extension of the nonhomogeneous Farkas theorem. Amer. Math. Monthly 69, No. 10 (1962), 969-975.
Bourbaki, N.
1. Espaces vectoriels topologiques (Éléments de mathématique, XV). Hermann et Cie, Paris 1953.
Burger, E.
1. Über homogene lineare Ungleichungssysteme, Z. angew. Math. und Mech. 36, Nr. 3/4 (1956), 135-139.
Carathéodory, C.
1. Über den Variabilitätsbereich der Fourierschen Konstanten von positiven harmonischen Funktionen, Rend. Circ. Mat. Palermo 32 (1911), 193-217.
Carver, W.B.
1. Systems of linear inequalities, Ann. of Math. 23 (1921-1922), 193-217.
Charnes, A. and Cooper, W.W.
1. The strong Minkowski-Farkas-Weyl theorem for vector spaces over ordered fields, Proc. Nat. Acad. Sci. U.S.A. 44, No. 9 (1958), 914-916.
Chernikov, S.N.
1. A generalization of the Kronecker-Capelli theorem on systems of linear inequalities. Mat. Sb. 15 (57), No. 3 (1944), 437-448.
2. Systems of linear inequalities, Usp. Mat. Nauk 8, No. (54) (1953), 7-73.
3. Positive and negative solutions of systems of linear inequalities. Mat. Sb. 38 (80) (1956), 479-508.
4. On the general solutions of systems of linear inequalities. Uchen. Zap. Permskovo Un-ta 16, Vip. 3 (1958), 3-12.
5. Nodal solutions of systems of linear inequalities. Mat. Sb. 50 (92), No. 1 (1960), 518-521.
6. Convolutions of systems of linear inequalities. DAN SSSR 131, No. 3 (1960), 518-521.
7. A theorem on the separability of convex polyhedral sets. DAN SSSR 138, No. 6 (1961), 1298-1300.
8. A solution of linear programming problems by the method of elimination of unknowns. DAN SSSR 139, No. 6 (1961), 1314-1317.
9. Algorithm for finding nodal solutions of systems of linear inequalities, DAN SSSR 145, No. 1 (1962), 41-44.
10. Convolutions of finite systems of linear inequalities. DAN SSSR 145, No. 5 (1963), 1075-1078.
11. On Haar's theorem for infinite systems of linear inequalities, Usp. Mat. Nauk 18, No. 5 (113) (1963), 199-200.
12. A method of convolution for a system of linear inequalities. Usp. Mat. Nauk 18, No. 5 (199) (1964), 149-155.
13. Convolutions of systems of linear inequalities. Sb. Pamyati N.G. Chebotareva, Izd. Kazansk. Un-ta, Kazan 1964, 93-112.
14. On the basic theorems in the theory of linear inequalities. Sibirsk. Matem. J., No. 5 (1964), 1181-1190.
15. Linear inequalities, Tr. IV All-Union Mathematical Congress, T. 2, Izd. AN SSSR 1964, 36-46.
16. Convolutions of finite systems of linear inequalities. J. of Computational Math. and Math. Phys. 5, No. 1 (1965), 3-20.
17. Linear inequalities. Tr. of the first Kazakhstan interdisciplinary scientific conference in mathematics and mechanics, Alma-Ata 1965, 134-137.
18. Polyhedral closedness of systems of linear inequalities. DAN SSSR 161, No. 1 (1965), 55-58.
19. Convolutions of systems of linear equations. J. of Computational Math. and Math. Phys. 5, No. 2 (1965), 329-334.
Chernikova, N.V.
1. An algorithm for finding the general formulae of non-negative solutions of systems of linear inequalities. J. of Computational Math. and Math. Phys. 4 (1964), 733-738.
2. An algorithm for finding the general formulae of non-negative solutions of systems of linear inequalities. J. of Computational Math. and Math. Phys. 4, No. 2 (1965), 334-337.
Dantzig, G.B.
1. Maximization of a linear function of variables subject to linear inequalities, Chap. XXI of the book "Activity Analysis of Production and Allocation," ed. T.C. Koopmans, New York 1951, 339-347.
2. Application of the simplex method to a transportation problem, Chap. XXIII of the above book, 359-373.
Dines, L.L.
1. Systems of linear inequalities, Ann. of Math. (2) 20, No. 3 (1918-1919), 191-199.
Dines, L.L. and McCoy, N.H.
1. On linear inequalities, Trans. Roy. Soc. Canada (3) 27 (1933), 33-70.
Dunford, N. and Schwartz, J.T.
1. Linear Operators, Part I: General Theory, John Wiley & Sons, New York, 1958.
Eremin, I.I.
1. On certain properties of nodal systems of linear inequalities. Usp. Mat. Nauk 11, Vip. 2 (68) (1958), 169-172.
2. On inconsistent systems of linear inequalities. DAN SSSR 138, No. 6 (1961), 1280-1283.
3. Relaxation methods in solving systems of inequalities with convex functions on the
left-hand side, DAN SSSR 160, No. 6 (1965), 1280-1283.
Fan, Ky
1. On systems of linear inequalities, in "Linear Inequalities and Related Systems," ed. Kuhn and Tucker, Princeton, New Jersey, 1956.
Farkas, J.
1. Über die Theorie der einfachen Ungleichungen. J. reine und angew. Math. 124 (1901), 1-24.
Fourier, J.B.
1. Solution d'une question particulière du calcul des inégalités. Nouveau bulletin des sciences par la Société philomathique de Paris, 1826, p. 99. Oeuvres II, Gauthier-Villars, Paris 1890.
Gale, D.
1. Convex polyhedral cones and linear inequalities, in "Activity Analysis of Production and Allocation," ed. T.C. Koopmans, New York 1951, 287-297.
2. The Theory of Linear Economic Models. McGraw-Hill Book Company, New York 1960.
Gass, S.
1. Linear Programming: Methods and Applications, McGraw-Hill Book Company, New York 1958.
Goldman, A.J.
1. Resolution and separation theorems for polyhedral convex sets. In: Linear Inequalities and Related Systems, Kuhn & Tucker (eds), Princeton, New Jersey 1956.
Goldman, A.J. and Tucker, A.W.
1. Polyhedral convex cones. In: Linear Inequalities and Related Systems, Kuhn & Tucker (eds), Princeton, New Jersey 1956.
Gordan, P.
1. Über die Auflösung linearer Gleichungen mit reellen Coefficienten, Math. Ann. 6 (1873), 22-28.
Gummer, C.F.
1. Sets of linear equations in positive unknowns. Amer. Math. Monthly 33 (1938), 488.
Haar, A.
1. Über lineare Ungleichungen, Acta Litt. Scient. Univ. Hung. 2 (1924), 1-14.
Heller, I.
1. On a class of equivalent systems of linear inequalities, Pacific J. Math. 13, No. 4 (1963), 1209-1227.
Ivanov, V.K.
1. The minimax problem for systems of linear functions. Mat. Sb. 28 (70), No. 3 (1951), 685-706.
Kantorovich, L.V.
1. Mathematical methods in the organization and planning of production. Izd. LGU 1939.
2. On an effective method for solving a class of extremum problems, DAN SSSR 28, No. 3 (1940), 212-215.
3. Economic computation of the best utilization of resources. Izd. AN SSSR 1959.
Koopmans, T.C., ed.
1. Activity Analysis of Production and Allocation (Cowles Commission Monograph No. 13), Wiley, New York, 1951.
Krein, M.G.
1. Conditions for the equivalence of systems of linear inequalities, Nauch. Dokl. Vyssh. Shkoly (Phys.-Mat. Nauki), 3 (1958), 73-74.
Kuhn, H.W.
1. Solvability and consistency for linear equations and inequalities, Amer. Math. Monthly 63, No. 4 (1956), 217-232.
Kuhn, H.W. and Tucker, A.W., eds.
1. Linear Inequalities and Related Systems, Princeton University Press, Princeton, New Jersey 1956.
Minkowski, H.
1. Geometrie der Zahlen, Teubner, Leipzig 1896.
2. Gesammelte Abhandlungen, Bd. 2, Teubner, Leipzig-Berlin 1911.
Michelson, V.C.
1. On the signs of solutions of systems of linear inequalities. Usp. Mat. Nauk 9, Vip. 3 (61) (1954), 163-170.
Motzkin, T.S.
1. Beiträge zur Theorie der linearen Ungleichungen (Dissertation, Basel 1933), Jerusalem 1936.
Motzkin, T.S. and Schoenberg, I.J.
1. The relaxation method for linear inequalities, Canad. J. Math. 6 (1954), 393-404.
Nefedev, G.V.
1. On the equivalence of certain theorems about systems of linear inequalities. Usp. Mat.
Nauk 12, No. 4 (76) (1957), 187-192.
Okunev, L.Ya.
1. Higher Algebra, Gostekhizdat 1949.
Ostrogradskii, M.V.
1. Considérations générales sur les moments des forces. Mém. de l'Acad. imp. des Sc. de St. Pétersbourg 1834.
Remez, E.Ya.
1. On a more direct method of solving the Chebyshev approximation problem in function spaces. Vid. Ukr. Akad. Nauk, Kiev 1935.
Rubinstein, G.Sh.
1. General solutions of systems of linear inequalities. Usp. Mat. Nauk 9, No. 2 (60) (1954), 171-177.
Shkolnik, A.G.
1. Linear inequalities, DAN SSSR 70, No. 2 (1950), 189-192.
2. Linear inequalities, Uchen. Zap. Phys.-Mat. Fac. MGPI 16 (1951), 127-174.
Stiemke, E.
1. Über positive Lösungen homogener linearer Gleichungen, Math. Ann. 76 (1915), 340-342.
Tucker, A.W.
1. Dual systems of homogeneous linear relations. In: Linear Inequalities and Related Systems, Kuhn and Tucker, eds, Princeton, New Jersey 1956.
Uzawa, H.
1. An elementary method of linear programming. In: Studies in Linear and Non-linear Programming, Arrow, Hurwicz, Uzawa, eds, California 1962.
van der Waerden, B.L.
1. Moderne Algebra, Bd. 1, Berlin 1930.
Voronoi, G.F.
1. Sur quelques propriétés des formes quadratiques positives parfaites. J. reine und angew. Math. 133 (1908), 97-178.
2. Recherches sur les paralléloèdres primitifs, J. reine und angew. Math. 134 (1908), 198-207; 136, 67-181.
Zoutendijk, G.
1. Methods of Feasible Directions. Elsevier Publishing Co., Amsterdam-London-New York-Princeton 1960.