Factorization of Almost Periodic Matrix Functions of Several Variables and Toeplitz Operators

Leiba Rodman, Ilya M. Spitkovsky, Hugo J. Woerdeman

Dedicated with respect and affection to Professor M. A. Kaashoek on the occasion of his 60th birthday

We study connections between operator theoretic properties of Toeplitz operators acting on suitable Besikovitch spaces and factorizations of their symbols, which are matrix valued almost periodic functions of several real variables. Among other things, we establish the existence of a twisted canonical factorization for locally sectorial symbols, and characterize one-sided invertibility of Toeplitz operators in terms of their symbols. In addition, we study stability of factorizations, and factorizations of hermitian valued almost periodic matrix functions of several variables.

1. Introduction
Operator theoretic properties of Toeplitz operators (invertibility, Fredholmness, description of the kernel and of the range), and the closely related Wiener-Hopf equations and Riemann boundary value problems, have been studied using factorization techniques for a long time. Starting with the groundbreaking papers by Krein [38] and Gohberg-Krein [22], factorization properties of the Wiener-Hopf type of the symbol function, which may be scalar, matrix, or operator valued, and may have symmetries, e.g., Hermitian valued or unitary valued, played a major role in these studies. The literature on this subject is extensive; we mention here only a few relevant books. Singular integral equations on a (more or less) general curve are studied using generalized factorizations (in the $L^p$ classes) in [23] (scalar symbols) and [11, 41] (matrix symbols). The book [9] provides a comprehensive treatment of Toeplitz operators with matrix symbols (including various classes of discontinuous symbols) on the unit circle or the real line. A graduate level textbook exposition of the basic theory of Toeplitz operators and their symbols is given in [19]. For rational matrix (and operator) valued symbols, an approach using realization (a concept borrowed from control and systems theory) leads to explicit descriptions of the corresponding Toeplitz operators and solutions of systems of Wiener-Hopf equations; see [20], [21]. Recently, in connection with convolution type equations on finite intervals, factorization of almost periodic matrix functions became a subject of considerable attention; see [32, 33]. Factorization properties of $2\times 2$ triangular almost periodic
matrix functions (motivated by applications to Wiener-Hopf equations on finite intervals) were considered in [35, 34, 8, 45]. As it happens, the factorization problem for such matrices is closely related to the almost periodic version of the corona problem; see [6, 7, 46]. General relations between Fredholm properties of Toeplitz operators with almost periodic matrix symbols and factorization of the symbols were discussed in [31, 36, 4]. Very recently, almost periodic factorization found still another area of application, to positive and contractive extension problems in the context of the abstract band method: see [53] (scalar functions), [47] (matrix functions), [2, 3] (matrix periodic functions of two, and in certain cases more than two, variables). So far, factorization of almost periodic symbols of Toeplitz operators has been studied exclusively for functions of one variable. In the present paper the main theme is connections between Toeplitz operators and factorizations of their symbols, which are matrix valued almost periodic functions of several real variables. In our approach, it is advantageous to allow the consideration of functions with Fourier spectrum in a given additive subgroup of $\mathbb{R}^k$. By doing so, we are able to treat simultaneously many particular cases, for example, periodic functions, or functions of mixed type (e.g., periodic in one variable and non-periodic in other variables). One of our main results states that for locally sectorial matrix valued functions a factorization exists that is canonical up to a scalar multiple (a so-called twisted canonical factorization). Other results pertain to Hermitian matrix functions and to continuity properties of canonical factorizations. We conclude the introduction with a brief description of the contents section by section. Basic properties of algebras of almost periodic functions are addressed in Section 2, where the notion of canonical factorization with respect to a halfspace is also introduced.
Section 3 contains the invertibility criteria for Toeplitz operators whose matrix symbols belong to Wiener algebras of almost periodic functions of several variables, in terms of their (canonical) factorization. In Section 4, the existence of such a factorization is established for sectorial matrix functions. For locally sectorial matrix functions we prove existence of canonical factorizations up to multiplication by a scalar elementary exponential. Specifics of the factorization of Hermitian matrix functions, both definite and indefinite, are discussed in Section 5. Results of Sections 4 and 5 are then used in Section 6, where one-sided invertibility of Toeplitz operators with almost periodic matrix symbols of several variables is characterized, again in terms of factorizations of the symbols. The factorizations here are generally non-canonical, and may involve non-diagonalizable middle factors. In the last Section 7 we deal with continuous families of almost periodic matrix functions, and discuss the continuity of their canonical factorizations.


2. Algebras of Almost Periodic Functions and Factorizations
In this section we present some background results on algebras of almost periodic functions, and introduce the notion of a factorization with respect to a halfspace, which is central to this paper. We let $(AP^k)$ denote the algebra of complex valued almost periodic functions of $k$ real variables, i.e., the closed subalgebra of $L^\infty(\mathbb{R}^k)$ (with respect to the standard Lebesgue measure) generated by all the functions $e_\lambda(t) = e^{i\langle\lambda,t\rangle}$, where $\lambda = (\lambda_1,\dots,\lambda_k)\in\mathbb{R}^k$. Here the variable $t = (t_1,\dots,t_k)\in\mathbb{R}^k$, and
$$\langle\lambda,t\rangle = \sum_{j=1}^{k} \lambda_j t_j$$
is the standard inner product of $\lambda$ and $t$. The norm in $(AP^k)$ will be denoted by $\|\cdot\|_\infty$. The next proposition is standard (see Section 1.1 in [43]).

Proposition 2.1. $(AP^k)$ is a commutative unital $C^*$-algebra, and therefore can be identified with the algebra $C(\mathcal{B})$ of complex valued continuous functions on a compact Hausdorff topological space $\mathcal{B}$. Moreover, $\mathbb{R}^k$ is dense in $\mathcal{B}$.

The space $\mathcal{B}$ is called the Bohr compactification of $\mathbb{R}^k$. Recall that for any $f(t)\in(AP^k)$ its Fourier series is defined by the formal sum
$$\sum_{\lambda} f_\lambda e^{i\langle\lambda,t\rangle}, \qquad (2.1)$$
where
$$f_\lambda = \lim_{T\to\infty} \frac{1}{(2T)^k} \int_{[-T,T]^k} e^{-i\langle\lambda,t\rangle} f(t)\,dt, \quad \lambda\in\mathbb{R}^k, \qquad (2.2)$$
and the sum in (2.1) is taken over the set $\sigma(f) = \{\lambda\in\mathbb{R}^k : f_\lambda\neq 0\}$, called the spectrum of $f$. The spectrum of every $f\in(AP^k)$ is at most a countable set. The mean $M\{f\}$ of $f\in(AP^k)$ is defined by $M\{f\} = f_0 = \lim_{T\to\infty}\frac{1}{(2T)^k}\int_{[-T,T]^k} f(t)\,dt$. The Wiener algebra $(APW^k)$ is defined as the set of all $f\in(AP^k)$ such that the Fourier series of $f$ converges absolutely. The Wiener algebra is a Banach algebra with respect to the Wiener norm $\|f\|_W = \sum_{\lambda\in\mathbb{R}^k} |f_\lambda|$ (the multiplication in $(APW^k)$ is pointwise). Note that $(APW^k)$ is dense in $(AP^k)$. For the general theory of almost periodic functions of one and several variables we refer the reader to the books [12, 39, 40] and to Chapter 1 in [43]. Let $\Sigma$ be a non-empty subset of $\mathbb{R}^k$. Denote
$$(AP^k)_\Sigma = \{f\in(AP^k) : \sigma(f)\subseteq\Sigma\}, \qquad (APW^k)_\Sigma = \{f\in(APW^k) : \sigma(f)\subseteq\Sigma\}.$$
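The limit (2.2) can be explored numerically. The following sketch (ours, not from the paper) approximates Bohr-Fourier coefficients for $k=1$ by a finite midpoint Riemann sum, using a hypothetical trigonometric polynomial as test function; the error of truncating the limit at a finite $T$ is of order $1/T$.

```python
import cmath

# Numerical sketch: approximate the Bohr-Fourier coefficient (2.2) for k = 1,
# using the illustrative trigonometric polynomial f(t) = 3 e^{i sqrt(2) t} + 2 e^{-i t}.

def f(t):
    return 3 * cmath.exp(1j * 2 ** 0.5 * t) + 2 * cmath.exp(-1j * t)

def fourier_coeff(f, lam, T=150.0, n=150_000):
    """Approximate f_lambda = (2T)^{-1} * integral_{-T}^{T} e^{-i lam t} f(t) dt."""
    h = 2 * T / n
    total = 0j
    for j in range(n):
        t = -T + (j + 0.5) * h          # midpoint rule
        total += cmath.exp(-1j * lam * t) * f(t)
    return total * h / (2 * T)

# The coefficients 3 and 2 are recovered at the exponents sqrt(2) and -1
# (up to O(1/T) error), and the mean M{f} = f_0 is near zero since 0 is not
# in the spectrum of f.
print(abs(fourier_coeff(f, 2 ** 0.5) - 3) < 0.05)
print(abs(fourier_coeff(f, -1.0) - 2) < 0.05)
print(abs(fourier_coeff(f, 0.0)) < 0.05)
```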


If $\Sigma$ is an additive subgroup of $\mathbb{R}^k$, then $(AP^k)_\Sigma$ (resp. $(APW^k)_\Sigma$) is a unital subalgebra of $(AP^k)$ (resp. $(APW^k)$).

Proposition 2.2. Let $\Sigma$ be an additive subgroup of $\mathbb{R}^k$. Then:
(a) $f\in(AP^k)_\Sigma$ is invertible in $(AP^k)_\Sigma$ if and only if $f$ is invertible in $(AP^k)$, if and only if there exists $\delta>0$ such that $|f(t)|\ge\delta$ for every $t\in\mathbb{R}^k$.
(b) $f\in(APW^k)_\Sigma$ is invertible in $(APW^k)_\Sigma$ if and only if $f$ is invertible in $(APW^k)$, if and only if there exists $\delta>0$ such that $|f(t)|\ge\delta$ for every $t\in\mathbb{R}^k$.

Proof. The second "if and only if" statement of (a) is easily obtained using the identification of $(AP^k)$ with $C(\mathcal{B})$ and the denseness of $\mathbb{R}^k$ in $\mathcal{B}$. The first "if and only if" statement of (a) follows from a general result on $C^*$-algebras: if $\mathcal{A}_2$ is a unital $C^*$-subalgebra of a unital $C^*$-algebra $\mathcal{A}_1$, then $a\in\mathcal{A}_2$ is invertible in $\mathcal{A}_2$ if and only if $a$ is invertible in $\mathcal{A}_1$ (see, e.g., Theorem 2.1.11 in [42]). Part (b) for one variable ($k=1$) is a classical result which can be found, for example, in [17] (Corollary 2, p. 175). For several variables the result follows using the approach of Section 29 of [17]. We briefly outline this approach. First of all, the algebra $(APW^k)$ is identified with $\ell^1(\mathbb{R}^k_d)$, where by $\mathbb{R}^k_d$ we denote the Abelian group $\mathbb{R}^k$ with the discrete topology. Next, the maximal ideal space of $\ell^1(\mathbb{R}^k_d)$ is identified with the group $\Gamma_k$ of characters on $\mathbb{R}^k_d$, when $\Gamma_k$ is given the weak topology (cf. Theorem 5.1 in [17]); recall that a character of $\mathbb{R}^k_d$ is a group homomorphism from $\mathbb{R}^k_d$ into the multiplicative group of complex numbers of absolute value one. Namely, the value of $x\in\ell^1(\mathbb{R}^k_d)$ on the maximal ideal $M_\chi$ of $\ell^1(\mathbb{R}^k_d)$ that corresponds to the character $\chi\in\Gamma_k$ is
$$x(M_\chi) = \sum_{\lambda\in\mathbb{R}^k_d} x(\lambda)\chi(\lambda).$$
Next, one proves that the set of characters
$$\{\chi_t\}_{t\in\mathbb{R}^k} \qquad (2.3)$$
defined by $\chi_t(\lambda) = e^{i\langle\lambda,t\rangle}$, $t\in\mathbb{R}^k$, is dense in $\Gamma_k$. For $k=1$, this is Theorem 6.2 in [17]; for $k>1$ this follows from Theorem 23.18 in [29], which asserts that the character group of a direct product of finitely many locally compact Abelian groups coincides with the direct product of the corresponding character groups. Therefore, if $f\in(APW^k)$ is such that
$$|f(t)|\ge\delta, \quad t\in\mathbb{R}^k, \qquad (2.4)$$
where $\delta>0$ is independent of $t$, in other words, $|f(M_{\chi_t})|\ge\delta$, $t\in\mathbb{R}^k$, then the density of the set (2.3) in $\Gamma_k$ implies that $f$ takes nonzero values on the set of maximal ideals of $(APW^k)$, and hence is invertible in $(APW^k)$. This
proves the second "if and only if" statement in (b). If (2.4) holds, and moreover $f\in(APW^k)_\Sigma$, then by the already proved parts of Proposition 2.2 the inverse of $f$ belongs to $(APW^k)\cap(AP^k)_\Sigma = (APW^k)_\Sigma$, which proves the first "if and only if" statement in (b). $\Box$

If $(X)$ is a set (typically a Banach space or an algebra), we denote by $(X)^{m\times n}$ the set of $m\times n$ matrices with entries in $(X)$. Many properties of the algebras $(AP^k)^{n\times n}_\Sigma$ and $(APW^k)^{n\times n}_\Sigma$ can be extended from the one-variable almost periodic functions without difficulties. We state one such property, which is especially useful.

Proposition 2.3. Let $\Sigma$ be an additive subgroup of $\mathbb{R}^k$. Let $f\in(AP^k)^{n\times n}_\Sigma$ (resp. $f\in(APW^k)^{n\times n}_\Sigma$), and let
$$\Omega = \{z\in\mathbb{C} : z \text{ is an eigenvalue of } f(t) \text{ for some } t\in\mathbb{R}^k\}.$$
If $\phi$ is an analytic function in an open neighborhood of the closure of $\Omega$, then $\phi(f)\in(AP^k)^{n\times n}_\Sigma$ (resp. $\phi(f)\in(APW^k)^{n\times n}_\Sigma$).

Here, for every fixed $t\in\mathbb{R}^k$, $\phi(f(t))$ is understood as the $n\times n$ matrix defined by the standard functional calculus.

Proof. We give the proof for the case $f\in(APW^k)^{n\times n}_\Sigma$; if $f\in(AP^k)^{n\times n}_\Sigma$, the proof is analogous. The proof is modeled after the proof of Proposition 2.3 in [47]. Since $f(t)$ is a bounded function, the set $\Omega$ is also bounded. Let $z_0\notin\overline{\Omega}$ (the closure of $\Omega$). Then $z_0 I - f(t)$ has eigenvalues $z_0-\lambda_1(t),\dots,z_0-\lambda_n(t)$, where $\lambda_1(t),\dots,\lambda_n(t)$ are the eigenvalues of $f(t)$, so
$$|\det(z_0 I - f(t))| = \prod_{j=1}^{n} |z_0-\lambda_j(t)| \ge \gamma^n,$$
where $\gamma$ is the (positive) distance from $z_0$ to $\overline{\Omega}$. By Proposition 2.2,
$$\big(\det(z_0 I - f(t))\big)^{-1}\in(APW^k)_\Sigma,$$
and therefore the matrix function $z_0 I - f$ is invertible in $(APW^k)^{n\times n}_\Sigma$. Thus, $z_0$ belongs to the resolvent set of $f$ as an element of the Banach algebra $(APW^k)^{n\times n}_\Sigma$. Now we can define $\phi(f)\in(APW^k)^{n\times n}_\Sigma$ using the functional calculus:
$$\phi(f) = \frac{1}{2\pi i}\int_\Gamma \phi(z)(zI-f)^{-1}\,dz$$
for a suitable contour $\Gamma$ that contains $\overline{\Omega}$ in its interior, where the integral converges in the norm of $(APW^k)^{n\times n}_\Sigma$. Since convergence in $(APW^k)^{n\times n}_\Sigma$ implies pointwise convergence, for every $t\in\mathbb{R}^k$ we have
$$(\phi(f))(t) = \frac{1}{2\pi i}\int_\Gamma \phi(z)(zI-f(t))^{-1}\,dz. \qquad (2.5)$$


But the right-hand side of (2.5) is just the definition of $\phi(f(t))$. It follows that $\phi(f(t))$ (defined pointwise for every $t\in\mathbb{R}^k$) is the value of $\phi(f)$ at $t$; since $\phi(f)\in(APW^k)^{n\times n}_\Sigma$, we are done. $\Box$

Next, we present a theorem that introduces the notion of mean motion of a multivariable almost periodic function.
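The pointwise contour integral (2.5) is easy to check numerically for a single fixed matrix. The sketch below (ours, not from the paper) discretizes the Cauchy integral over a circle enclosing the eigenvalues; the trapezoidal rule converges rapidly here because the integrand is analytic in $z$ on the contour.

```python
import numpy as np

# Numerical sketch of the functional-calculus integral (2.5) at one fixed
# matrix M: phi(M) = (1/2 pi i) * contour integral of phi(z) (zI - M)^{-1} dz,
# with the contour a circle of given center and radius enclosing spec(M).

def holomorphic_calculus(phi, M, center, radius, nodes=400):
    n = M.shape[0]
    total = np.zeros((n, n), dtype=complex)
    for theta in 2 * np.pi * np.arange(nodes) / nodes:
        z = center + radius * np.exp(1j * theta)
        # phi(z) (zI - M)^{-1} dz / (2 pi i), with dz = i r e^{i theta} d theta
        total += phi(z) * np.linalg.inv(z * np.eye(n) - M) * radius * np.exp(1j * theta)
    return total / nodes

M = np.array([[2.0, 1.0], [0.0, 3.0]])           # eigenvalues 2 and 3
E = holomorphic_calculus(np.exp, M, center=2.5, radius=2.0)

# For this triangular M, exp(M) is known in closed form.
expected = np.array([[np.e ** 2, np.e ** 3 - np.e ** 2], [0.0, np.e ** 3]])
print(np.allclose(E, expected))
```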

Theorem 2.4. Let $f\in(AP^k)$ and let $\Sigma$ be the minimal additive subgroup of $\mathbb{R}^k$ that contains $\sigma(f)$. Assume that $|f(t)|\ge\delta>0$ for all $t\in\mathbb{R}^k$, where $\delta$ is independent of $t$. Define the continuous real valued function $y(t)$, $t\in\mathbb{R}^k$, by the conditions $f(t) = |f(t)|e^{iy(t)}$, $-\pi < y(0)\le\pi$. Then there exists a unique $c\in\mathbb{R}^k$ such that $y(t) = \langle c,t\rangle + u(t)$, where $u\in(AP^k)$. Moreover, $c\in\Sigma$, $\sigma(u)\subseteq\Sigma$ and, if $f\in(APW^k)$, then also $u\in(APW^k)$.

As in the one variable ($k=1$) case, we shall call $c$ the mean motion of $f$, and denote it by $w(f)$. In the one variable case, Theorem 2.4 is the celebrated theorem of Bohr (see, e.g., [39]). Theorem 2.4 (without the $(APW^k)$ part) was proved in [44]. The proof in [44] involves considering $e^{iy(st)}$, $s\in\mathbb{R}$, for a fixed $t$ as an almost periodic function of one variable $s$, applying Bohr's theorem, and proving that the mean motion of $e^{iy(st)}$ is a linear function of $t$. The uniqueness is easy: if
$$y(t) = \langle c_1,t\rangle + u_1(t) = \langle c_2,t\rangle + u_2(t),$$
where $c_j$ and $u_j$ satisfy the requirements of Theorem 2.4, then $\langle c_1-c_2,t\rangle\in(AP^k)$, and since the functions in $(AP^k)$ are bounded, we must have $c_1=c_2$. To obtain the $(APW^k)$ part of this theorem, argue as follows. Represent the function $u$ as the sum $u = u_0+u_1$, where $u_0\in(APW^k)$ and $\|u_1\|_{AP^k} < \frac{\pi}{2}$. Then
$$e^{iu_1(t)} = f(t)|f(t)|^{-1} e^{-i\langle c,t\rangle} e^{-iu_0(t)}. \qquad (2.6)$$

Assume $f\in(APW^k)$; then $|f|^2 = f\bar{f}\in(APW^k)$ as well. Since the function $\phi(z) = z^{-1/2}$ is analytic in a (complex) neighborhood of $[\delta^2,\infty)$ and $|f|^{-1} = \phi(|f|^2)$, by Proposition 2.3 the function $|f|^{-1}$ belongs to $(APW^k)$. The other two factors on the right hand side of (2.6) obviously belong to $(APW^k)$; hence, so does the function $\psi(t) = e^{iu_1(t)}$. On the other hand, the values of $\psi$ lie in the open right half-plane. Using Proposition 2.3 again, we may define $u_2\in(APW^k)$ so that $e^{iu_2} = \psi$. Since $u_1$ and $u_2$ are both continuous on $\mathbb{R}^k$, this means that they differ by a constant summand. But then $u_1$ belongs to $(APW^k)$ simultaneously with $u_2$. Finally, the function $u$ ($= u_0+u_1$) belongs to $(APW^k)$ as well. $\Box$

We now introduce the notion of canonical factorization with respect to a halfspace. A subset $S$ of $\mathbb{R}^k$ is called a halfspace if it has the following properties:
(i) $\mathbb{R}^k = S\cup(-S)$;
(ii) $S\cap(-S) = \{0\}$;
(iii) if $x,y\in S$ then $x+y\in S$;
(iv) if $x\in S$ and $\alpha$ is a nonnegative real number, then $\alpha x\in S$.

A standard example of a halfspace is given by
$$E_k = \{(x_1,\dots,x_k)^T\in\mathbb{R}^k\setminus\{0\} : x_1=x_2=\dots=x_{j-1}=0,\ x_j\neq 0 \Rightarrow x_j>0\}\cup\{0\}.$$
(The vectors in $\mathbb{R}^k$ are understood as column vectors; the superscript $T$ denotes the transpose.) Clearly, when $k=1$ the only halfspaces are $[0,\infty)$ $(=E_1)$ and $(-\infty,0]$. In general, we have the following statement.
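Unwinding the definition, a vector lies in $E_k$ exactly when it is zero or its first nonzero coordinate is positive (lexicographic nonnegativity). A small sketch (ours, not from the paper):

```python
# Membership test for the standard halfspace E_k: a vector belongs to E_k
# iff it is the zero vector or its first nonzero coordinate is positive.

def in_Ek(x):
    for coord in x:
        if coord > 0:
            return True
        if coord < 0:
            return False
    return True  # the zero vector belongs to E_k

# Spot checks of the definition and of halfspace axiom (i):
print(in_Ek((0.0, 0.0, 5.0)))                      # first nonzero coordinate positive
print(in_Ek((0.0, -1.0, 7.0)))                     # first nonzero coordinate negative
x = (0.0, 2.0, -3.0)
print(in_Ek(x) or in_Ek(tuple(-c for c in x)))     # axiom (i): R^k = S u (-S)
```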

Proposition 2.5. A set $S\subseteq\mathbb{R}^k$ is a halfspace if and only if there exists a real invertible $k\times k$ matrix $A$ such that
$$S = AE_k \overset{\mathrm{def}}{=} \{Ax : x\in E_k\}. \qquad (2.7)$$
Moreover, for a given halfspace $S$ the matrix $A$ satisfying (2.7) is unique up to multiplication on the right by a lower triangular real matrix with positive entries on the diagonal.

Proof. The existence of $A$ is a special case of basic results on linearly ordered real vector spaces; see [14] or Section IV.5 in [15]. Indeed, a halfspace $S$ induces a linear order $x\ge_S y$ in $\mathbb{R}^k$ by the rule: $x\ge_S y$ if and only if $x-y\in S$. Conversely, every linear order in $\mathbb{R}^k$ which is compatible with the addition and multiplication by positive scalars determines a halfspace, consisting of the vectors that are greater than or equal to zero with respect to the linear order. By the results of [14], for every linear order $x\ge y$ in $\mathbb{R}^k$ (compatible with addition and multiplication by positive scalars) there exists a basis $b_1,\dots,b_k$ of $\mathbb{R}^k$ such that $x\ge y$ if and only if the vectors of coefficients $\alpha = (\alpha_1,\dots,\alpha_k)^T$ and $\beta = (\beta_1,\dots,\beta_k)^T$ taken from the linear combinations $x = \sum_{j=1}^k \alpha_j b_j$ and $y = \sum_{j=1}^k \beta_j b_j$ satisfy $\alpha-\beta\in E_k$. Now $A = [b_1\, b_2\,\cdots\, b_k]$ satisfies (2.7). To prove the uniqueness of $A$, first note that every lower triangular real matrix $T$ with positive entries on the diagonal has the property that $TE_k = E_k$. Conversely, let $T$ be a real matrix such that $TE_k = E_k$. Then $T$ maps the subspace $\{(0,x_2,\dots,x_k)^T : x_j\in\mathbb{R}\} = (\mathrm{closure}\,E_k)\cap(-\mathrm{closure}\,E_k)$ into itself, and by repeating this argument we obtain that the subspaces $\{(0,\dots,0,x_p,\dots,x_k)^T : x_j\in\mathbb{R}\}$ $(p=1,2,\dots,k)$ are all $T$-invariant; in other words, $T$ is lower triangular. It is easy to see then that all diagonal entries of $T$ must be positive. $\Box$

Let $G\in(AP^k)^{n\times n}$. A representation
$$G(t) = G_+(t)G_-(t), \quad t\in\mathbb{R}^k, \qquad (2.8)$$
where
$$G_+^{\pm 1}\in(AP^k)^{n\times n}_S, \qquad G_-^{\pm 1}\in(AP^k)^{n\times n}_{-S}, \qquad (2.9)$$
is called a left $AP_S$ canonical factorization of $G$. Using instead of (2.8) the equality $G(t) = G_-(t)G_+(t)$ (with the same properties of $G_\pm(t)$), we obtain a right $AP_S$ canonical factorization of $G$. Note that the right $AP_S$ canonical factorization coincides with the left $AP_{-S}$ canonical factorization. Therefore it is in principle not necessary to introduce both left and right $AP_S$ canonical factorizations. However, to allow easy comparison with the classical results we will use both notions. Canonical factorizations, their generalizations and applications for one variable ($k=1$ and $S=[0,\infty)$) have been studied in [32, 52, 33, 36], for example. We are not aware of previous studies of $AP_S$ canonical factorizations for functions of several variables. We say that (2.8) is a left $APW_S$ canonical factorization of $G$ if $G_\pm$ satisfy the conditions, stronger than (2.9), $G_+^{\pm 1}\in(APW^k)^{n\times n}_S$, $G_-^{\pm 1}\in(APW^k)^{n\times n}_{-S}$. If $\Sigma$ is an additive subgroup of $\mathbb{R}^k$, then a representation (2.8) is called a left canonical $(AP_S)_\Sigma$ factorization if $G_+^{\pm 1}\in(AP^k)^{n\times n}_{S\cap\Sigma}$, $G_-^{\pm 1}\in(AP^k)^{n\times n}_{(-S)\cap\Sigma}$, and a left canonical $(APW_S)_\Sigma$ factorization if $G_+^{\pm 1}\in(APW^k)^{n\times n}_{S\cap\Sigma}$, $G_-^{\pm 1}\in(APW^k)^{n\times n}_{(-S)\cap\Sigma}$. Of course, $G$ must belong to $(APW^k)^{n\times n}$ (respectively, $(AP^k)^{n\times n}_\Sigma$ or $(APW^k)^{n\times n}_\Sigma$) in order to potentially admit a left canonical $APW_S$ (respectively, $(AP_S)_\Sigma$ or $(APW_S)_\Sigma$) factorization. The notions of right $APW_S$ canonical factorizations etc. are introduced analogously.
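A scalar canonical factorization can be written down explicitly in a special case. The sketch below (our own illustration, under the assumption that the symbol has the form $g = e^u$ with $u$ a trigonometric polynomial; the paper does not spell this case out) splits the Bohr exponents of $u$ between $S$ and $-S\setminus\{0\}$ for $k=1$, $S=[0,\infty)$; then $G_+^{\pm 1} = e^{\pm u_+}$ has spectrum in $S$, $G_-^{\pm 1} = e^{\pm u_-}$ in $-S$, and $g = G_+G_-$.

```python
import cmath

# Illustrative scalar canonical AP_S factorization for g = e^u, k = 1,
# S = [0, inf): split u by the sign of its Bohr exponents and exponentiate.

u_coeffs = {2 ** 0.5: 0.3, 0.0: 1.0, -1.0: 0.2j}    # Bohr exponent -> coefficient

def eval_trig(coeffs, t):
    return sum(c * cmath.exp(1j * lam * t) for lam, c in coeffs.items())

u_plus = {lam: c for lam, c in u_coeffs.items() if lam >= 0}
u_minus = {lam: c for lam, c in u_coeffs.items() if lam < 0}

g = lambda t: cmath.exp(eval_trig(u_coeffs, t))
G_plus = lambda t: cmath.exp(eval_trig(u_plus, t))
G_minus = lambda t: cmath.exp(eval_trig(u_minus, t))

# Pointwise check that g = G_plus * G_minus.
print(all(abs(g(t) - G_plus(t) * G_minus(t)) < 1e-12 for t in (-3.0, 0.5, 7.1)))
```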

3. Toeplitz Operators
We start with general and well-known remarks concerning Toeplitz operators. The exposition in the first part of this section follows that in [4]. Let $H$ be a Hilbert space, and let $B(H)$ stand for the $C^*$-algebra of (bounded linear) operators acting on $H$. For any orthoprojection $P\in B(H)$ and an arbitrary $A\in B(H)$, an abstract Toeplitz operator $T_P(A)$ $(\in B(L)$, where $L = \mathrm{Im}\,P)$ is defined by the formula
$$T_P(A)x = PAx, \quad x\in L.$$
The next lemma is a basic result connecting invertibility of abstract Toeplitz operators and operator factorizations.

Lemma 3.1. Let $A\in B(H)$ be invertible, and let $P$ and $Q$ $(= I-P)$ be a pair of complementary orthoprojections on $H$. Then the following statements are equivalent:
i) the operator $T_P(A)$ is invertible;
ii) the operator $T_Q(A^{-1})$ is invertible;
iii) $A = A_+A_-$, where $A_\pm$ are invertible elements of $B(H)$ and $A_+^{\pm 1}\,\mathrm{Im}\,Q\subseteq\mathrm{Im}\,Q$, $A_-^{\pm 1}\,\mathrm{Im}\,P\subseteq\mathrm{Im}\,P$.

Lemma 3.1 is a combination of Theorem 5 and Corollary 3 from [13]; see also [9], [5] or Section III.4 in [18] for a simpler exposition from the point of view of matricially coupled operators. In general (as was also observed in [13]), the invertibility of $T_P(A)$ does not imply the invertibility of $A$. However, this is the case for classical Toeplitz operators, in other words, when $H$ is the Lebesgue space $(L^2)^{n\times 1}$ on the unit circle $\mathbb{T}$, $P$ is the Riesz orthoprojection onto the Hardy space $(H^2)^{n\times 1}$, and $A$ is the multiplication operator $M_f$ defined by $(M_f x)(t) = f(t)x(t)$, with $f\in(L^\infty(\mathbb{T}))^{n\times n}$; see the original paper [50] or the monographs [9, 11, 41]. Moreover, for certain classes of functions $f$ it can be guaranteed that $A_\pm$ in statement iii) of Lemma 3.1 are also multiplication operators: $A_\pm = M_{f_\pm}$. This is true, in particular, when $f$ has an absolutely convergent Fourier series; the same property is then inherited by $f_\pm$ (see the original paper [22] or later expositions in [11, 41]). Thus, a basic approach to expressing invertibility of Toeplitz operators in terms of factorization of symbol functions consists of using Lemma 3.1 and interpreting, if possible, the operators $A$, $A_+$ and $A_-$ as multiplication operators. As we shall see in Theorem 3.2 below, all the properties mentioned above (and much more) are valid in the context of the algebras $(AP^k)$ and $(APW^k)$. Introduce an inner product on $(AP^k)$ by the formula
$$\langle f,g\rangle = M\{f\bar{g}\}, \quad f,g\in(AP^k). \qquad (3.1)$$
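Lemma 3.1 can be checked in a finite-dimensional toy model (ours, not from the paper): with $H=\mathbb{C}^4$ and $P$ the orthoprojection onto the first two coordinates, a factorization $A = A_+A_-$ with the invariance properties of iii) amounts to block triangularity of the factors, and the compressions in i) and ii) are then invertible.

```python
import numpy as np

# Toy model of Lemma 3.1: H = C^4, Im P = span{e1, e2}, Im Q = span{e3, e4}.
# A_plus leaves Im Q invariant (block lower triangular w.r.t. Im P + Im Q);
# A_minus leaves Im P invariant (block upper triangular).  Then
# T_P(A) = P A|_{Im P} is the top-left 2x2 block of A = A_plus A_minus.

A_plus = np.array([[2.0, 1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [1.0, 3.0, 1.0, 0.0],
                   [0.0, 2.0, 2.0, 1.0]])
A_minus = np.array([[1.0, 0.0, 2.0, 1.0],
                    [1.0, 3.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0, 1.0],
                    [0.0, 0.0, 0.0, 2.0]])
A = A_plus @ A_minus

T_P = A[:2, :2]                      # statement i): compression of A to Im P
T_Q = np.linalg.inv(A)[2:, 2:]       # statement ii): compression of A^{-1} to Im Q

# Statements i) and ii) hold together, as the lemma asserts.
print(abs(np.linalg.det(T_P)) > 1e-9 and abs(np.linalg.det(T_Q)) > 1e-9)
```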

The completion of $(AP^k)$ with respect to this inner product is called the Besikovitch space and is denoted by $(B^k)$. Thus $(B^k)$ is a Hilbert space. For a nonempty set $\Lambda\subseteq\mathbb{R}^k$, define the projection
$$\Pi_\Lambda\Big(\sum_{\lambda\in\sigma(f)} f_\lambda e^{i\langle\lambda,t\rangle}\Big) = \sum_{\lambda\in\sigma(f)\cap\Lambda} f_\lambda e^{i\langle\lambda,t\rangle},$$
where $f\in(APW^k)$. The projection $\Pi_\Lambda$ extends by continuity to the orthogonal projection (also denoted $\Pi_\Lambda$) on $(B^k)$. We denote by $(B^k)_\Lambda$ the range of $\Pi_\Lambda$, or, equivalently, the completion of $(AP^k)_\Lambda$ with respect to the inner product (3.1). The vector valued Besikovitch space $(B^k)^{n\times 1}$ consists of $n\times 1$ columns with components in $(B^k)$, with the standard Hilbert space structure. Similarly, $(B^k)^{n\times 1}_\Lambda$ is the Hilbert space of $n\times 1$ columns with components in $(B^k)_\Lambda$. In the periodic case ($\Sigma = \mathbb{Z}^k$) we may identify $(B^k)_\Sigma$ with $L^2(\mathbb{T}^k)$. We let $A$ be a multiplication operator $M_f$ with $f\in(APW^k)^{n\times n}$. Denote by $\Sigma$ the smallest additive subgroup of $\mathbb{R}^k$ which contains $\sigma(f)$. Then $A$ can be considered as an operator on $H = (B^k)^{n\times 1}_{\Sigma_0}$ for any additive subgroup $\Sigma_0$ of $\mathbb{R}^k$ which
contains $\Sigma$ (the cases $\Sigma_0=\Sigma$ and $\Sigma_0=\mathbb{R}^k$ are not excluded). For a fixed halfspace $S$, the Toeplitz operators $T_P(M_f)$ corresponding to $P = \Pi_{S\cap\Sigma_0}$, $\Pi_{(S\setminus\{0\})\cap\Sigma_0}$, $\Pi_{(-S)\cap\Sigma_0}$ and $\Pi_{((-S)\setminus\{0\})\cap\Sigma_0}$ will be denoted by $T(f)^{[S}_{\Sigma_0}$, $T(f)^{(S}_{\Sigma_0}$, $T(f)^{-S]}_{\Sigma_0}$ and $T(f)^{-S)}_{\Sigma_0}$, respectively. Thus, for example, $T(f)^{-S]}_{\Sigma_0}$ is defined on $(B^k)^{n\times 1}_{(-S)\cap\Sigma_0}$. We denote by $G^T$ the transpose of a matrix function $G$.

Theorem 3.2. Assume that $G\in(APW^k)^{n\times n}$ and let $\Sigma = \Sigma(G)$ be the minimal additive subgroup of $\mathbb{R}^k$ that contains $\sigma(G)$. Then the following statements are equivalent:

i) the operator $T(G)^{-S]}_{\Sigma_0}$ is invertible for some additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
ii) the operator $T(G)^{-S]}_{\Sigma_0}$ is invertible for every additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
iii) the operator $T(G)^{-S)}_{\Sigma_0}$ is invertible for some additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
iv) the operator $T(G)^{-S)}_{\Sigma_0}$ is invertible for every additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
v) the operator $T(G^T)^{[S}_{\Sigma_0}$ is invertible for some additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
vi) the operator $T(G^T)^{[S}_{\Sigma_0}$ is invertible for every additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
vii) the operator $T(G^T)^{(S}_{\Sigma_0}$ is invertible for some additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
viii) the operator $T(G^T)^{(S}_{\Sigma_0}$ is invertible for every additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
ix) $G$ is invertible (in $(L^\infty(\mathbb{R}^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{[S}_{\Sigma_0}$ is invertible for some additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
x) $G$ is invertible (in $(L^\infty(\mathbb{R}^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{[S}_{\Sigma_0}$ is invertible for every additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
xi) $G$ is invertible (in $(L^\infty(\mathbb{R}^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{(S}_{\Sigma_0}$ is invertible for some additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
xii) $G$ is invertible (in $(L^\infty(\mathbb{R}^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{(S}_{\Sigma_0}$ is invertible for every additive subgroup $\Sigma_0\supseteq\Sigma(G)$;
xiii) $G$ admits a left canonical $AP_S$ factorization;
xiv) $G$ admits a left canonical $APW_S$ factorization;
xv) $G$ admits a left canonical $(AP_S)_\Sigma$ factorization;
xvi) $G$ admits a left canonical $(APW_S)_\Sigma$ factorization.


Remark. A left canonical $AP_S$ factorization, if it exists, is unique up to the transformation
$$G_+\mapsto G_+C, \qquad G_-\mapsto C^{-1}G_-, \qquad (3.2)$$
where $C$ is an arbitrary constant invertible matrix. Hence, the equivalence of statements xiii)-xvi) in Theorem 3.2 means that a left canonical $AP_S$ factorization of the matrix function $G\in(APW^k)^{n\times n}_\Sigma$ is in fact its left canonical $(APW_S)_\Sigma$ factorization. Observe also that the product $M\{G_+\}M\{G_-\}$ is invariant under the transformation (3.2), and is therefore defined uniquely by the matrix function $G$ itself. We denote it by $d_S(G)$ and call it the ($S$-) geometric mean of $G$.

The particular case $k=1$ and $S=[0,\infty)$ was considered in [4, Theorem 2.3]. Most of the proof of Theorem 2.3 of [4] goes through in the more general situation of Theorem 3.2 without essential changes. Several equivalences in Theorem 3.2 are simply particular instances of i) $\Leftrightarrow$ ii) of Lemma 3.1. We give below only some details of the proof of the implication i) $\Rightarrow$ xvi), which is a difficult part of Theorem 3.2. As in the proof of Theorem 2.3 of [4], we can restrict the proof to the case $\Sigma_0 = \Sigma$. Analogously, we show that the generalized Riemann operators $R_1 = \Pi_{(S\setminus\{0\})\cap\Sigma} + M_G\Pi_{(-S)\cap\Sigma}$ and $R_2 = \Pi_{((-S)\setminus\{0\})\cap\Sigma} + M_{G^T}\Pi_{S\cap\Sigma}$ are invertible on $(B^k)^{n\times 1}_\Sigma$, and letting $\phi_j = R_1^{-1}E_j$, $\psi_j = R_2^{-1}E_j$, where $E_j$ is the $j$-th column of $I$, the functions $\phi_j$ and $\psi_j$ belong to $(APW^k)^{n\times 1}_\Sigma$. Denote by $\Phi_+$ ($\Phi_-$, $\Psi_+$, $\Psi_-$) the $n\times n$ matrix function the $j$-th column of which is $\Pi_{S\setminus\{0\}}\phi_j$ (respectively, $\Pi_{-S}\phi_j$, $\Pi_S\psi_j$, $\Pi_{(-S)\setminus\{0\}}\psi_j$). Then
$$\Phi_+\in(APW^k)^{n\times n}_{\Sigma\cap(S\setminus\{0\})}, \qquad \Phi_-\in(APW^k)^{n\times n}_{\Sigma\cap(-S)},$$
$$\Psi_+\in(APW^k)^{n\times n}_{\Sigma\cap S}, \qquad \Psi_-\in(APW^k)^{n\times n}_{\Sigma\cap((-S)\setminus\{0\})}.$$
Using the definition of $\phi_j$, $\psi_j$, it also follows that
$$\Phi_+ + G\Phi_- = I \qquad (3.3)$$
and
$$\Psi_- + G^T\Psi_+ = I. \qquad (3.4)$$
Introduce now a matrix function $C = (\Psi_+)^T G\Phi_-$. On the one hand, (3.3) implies that
$$C = (\Psi_+)^T(I-\Phi_+)\in(APW^k)^{n\times n}_{\Sigma\cap S}.$$
On the other hand, from (3.4),
$$C = (G^T\Psi_+)^T\Phi_- = (I-\Psi_-)^T\Phi_-\in(APW^k)^{n\times n}_{\Sigma\cap(-S)}.$$
Hence, $C$ is a constant matrix: $C\in\mathbb{C}^{n\times n}$. We will show that $C$ is non-singular. To this end, introduce matrix functions $\widehat{\Phi}_\pm$, $\widehat{\Psi}_\pm$ of one complex variable $z$ according to the formulas
$$\widehat{\Phi}_\pm(z) = \Phi_\pm\big((A^{-1})^T[z,0,\dots,0]^T\big), \qquad \widehat{\Psi}_\pm(z) = \Psi_\pm\big((A^{-1})^T[z,0,\dots,0]^T\big) \qquad (k-1 \text{ zeros}),$$
where $A$ is defined by (2.7). Then $\widehat{\Phi}_+$, $\widehat{\Psi}_+$ (respectively, $\widehat{\Phi}_-$, $\widehat{\Psi}_-$) can be extended analytically into the upper half plane $\Pi_+$ (respectively, the lower half plane $\Pi_-$), and $\lim_{y\to+\infty}\widehat{\Phi}_+(iy)\ (= \lim_{y\to-\infty}\widehat{\Psi}_-(iy)) = 0$. If $\det C = 0$, then $\det\widehat{\Psi}_+\cdot\det(I-\widehat{\Phi}_+) = 0$, and therefore either $\det\widehat{\Psi}_+$ or $\det(I-\widehat{\Phi}_+)$ vanishes identically in $\Pi_+$. Since $\lim_{y\to+\infty}(I-\widehat{\Phi}_+(iy)) = I$, $\det(I-\widehat{\Phi}_+(iy))$ differs from zero for $y$ large enough. Hence, $\det\widehat{\Psi}_+(z) = 0$ for all $z\in\Pi_+$. From here and (3.4) we conclude that $\det(I-\widehat{\Psi}_-) = 0$ identically (on $\mathbb{R}$, and therefore in $\Pi_-$). This, however, contradicts the property $\lim_{y\to-\infty}(I-\widehat{\Psi}_-(iy)) = I$. After the nonsingularity of $C$ is established, the proof of i) $\Rightarrow$ xvi) is completed as in the proof of [4, Theorem 2.3]. $\Box$
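The geometric mean $d_S(G)$ introduced in the Remark above can be computed by hand in a scalar example (ours, not from the paper): for $k=1$, $S=[0,\infty)$, the periodic symbol $g(t) = 2+e^{it}$ has $\sigma(g)\subseteq S$, and $g = G_+\cdot 1$ with $G_+ = g$ is a left canonical $AP_S$ factorization, since $G_+^{-1}(t) = \frac{1}{2}\sum_n(-e^{it}/2)^n$ also has spectrum in $S$; thus $d_S(g) = M\{G_+\}M\{G_-\} = 2$.

```python
import cmath

# Scalar example of the S-geometric mean d_S(g) = M{G_plus} M{G_minus}
# for g(t) = 2 + e^{it}, factored as G_plus = g, G_minus = 1.

def mean(f, n=4096):
    """Mean of a 2*pi-periodic function, computed over one period."""
    return sum(f(2 * cmath.pi * j / n) for j in range(n)) / n

G_plus = lambda t: 2 + cmath.exp(1j * t)
G_minus = lambda t: 1.0

d_S = mean(G_plus) * mean(G_minus)
print(abs(d_S - 2) < 1e-9)
```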

4. Factorization of Sectorial Matrix Functions
In this section we prove several results on sectorial (in particular, positive definite) Toeplitz operators. Recall that an operator $A\in B(H)$ is called $\alpha$-sectorial for a given $\alpha\in[0,\frac{\pi}{2})$ if
$$|\mathrm{Im}\,\langle Af,f\rangle| \le (\tan\alpha)\,\mathrm{Re}\,\langle Af,f\rangle \qquad (4.1)$$
and there exists an $\epsilon>0$ such that
$$\mathrm{Re}\,\langle Af,f\rangle \ge \epsilon\|f\|^2 \qquad (4.2)$$
for all $f\in H$. More precisely, we say that an operator $A\in B(H)$ is $(\alpha,\epsilon)$-sectorial if (4.1) and (4.2) hold for every $f\in H$. Condition (4.2) clearly implies
$$|\langle Af,f\rangle| \ge \epsilon\|f\|^2, \quad f\in H; \qquad (4.3)$$
conversely, (4.3) and (4.1) imply $\mathrm{Re}\,\langle Af,f\rangle \ge \epsilon(\cos\alpha)\|f\|^2$. For $\alpha=0$, condition (4.1) means that $\mathrm{Im}\,\langle Af,f\rangle = 0$, $\mathrm{Re}\,\langle Af,f\rangle\ge 0$, and condition (4.2) can therefore be rewritten as
$$\langle Af,f\rangle \ge \epsilon\|f\|^2 \quad \text{for all } f\in H. \qquad (4.4)$$
Hence, the operator $A$ is $0$-sectorial if and only if it is positive definite (notation: $A>0$).
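For a single matrix, conditions (4.1)-(4.2) reduce to eigenvalue inequalities for the Hermitian real and imaginary parts. The following sketch (ours, not from the paper) checks $(\alpha,\epsilon)$-sectoriality of a fixed matrix via $A_R\ge\epsilon I$ together with $(\tan\alpha)A_R\pm A_J\ge 0$, where $A_R = \frac{1}{2}(A+A^*)$ and $A_J = \frac{1}{2i}(A-A^*)$.

```python
import numpy as np

# Check (alpha, eps)-sectoriality of a fixed matrix A via its Hermitian
# real part A_R and imaginary part A_J:
#   (4.2)  <=>  A_R >= eps * I,
#   (4.1)  <=>  tan(alpha) * A_R + A_J >= 0  and  tan(alpha) * A_R - A_J >= 0.

def is_sectorial(A, alpha, eps):
    A_R = (A + A.conj().T) / 2
    A_J = (A - A.conj().T) / 2j
    t = np.tan(alpha)
    ok_re = np.linalg.eigvalsh(A_R).min() >= eps
    ok_plus = np.linalg.eigvalsh(t * A_R + A_J).min() >= -1e-12
    ok_minus = np.linalg.eigvalsh(t * A_R - A_J).min() >= -1e-12
    return ok_re and ok_plus and ok_minus

A = np.array([[2.0, 0.5j], [0.5j, 2.0]])   # hypothetical test matrix
print(is_sectorial(A, np.pi / 4, 1.0))               # sectorial
print(is_sectorial(1j * np.eye(2), np.pi / 4, 0.1))  # purely imaginary form: not sectorial
```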

Proposition 4.1. Let $\Sigma$ be an additive subgroup of $\mathbb{R}^k$, and let $G\in(APW^k)^{n\times n}_\Sigma$. Then the following statements are equivalent for any $\alpha\in[0,\frac{\pi}{2})$, $\epsilon>0$, and for a given halfspace $S$:
(i) $T(G)^{[S}_\Sigma$ is $(\alpha,\epsilon)$-sectorial;
(ii) $T(G)^{[S}_{\Sigma_0}$ is $(\alpha,\epsilon)$-sectorial for some additive subgroup $\Sigma_0\supseteq\Sigma$;
(iii) $T(G)^{[S}_{\Sigma_0}$ is $(\alpha,\epsilon)$-sectorial for every additive subgroup $\Sigma_0\supseteq\Sigma$;
(iv) the matrix function $G(t)$ is $(\alpha,\epsilon)$-sectorial for every $t\in\mathbb{R}^k$.

Proof. We first consider the case $\alpha=0$. In that case, condition (iv) means that $M_G\ge\epsilon I$ on $(B^k)^{n\times 1}$. Since $T(G)^{[S}_{\Sigma_0}$ is a compression of $M_G$, the implication (iv) $\Rightarrow$ (iii) holds. The implications (iii) $\Rightarrow$ (ii) and (iii) $\Rightarrow$ (i) are obvious. It remains to show that (i) $\Rightarrow$ (iv); the implication (ii) $\Rightarrow$ (iv) would follow from there by considering $\Sigma_0$ in place of $\Sigma$. The following proof of the implication (i) $\Rightarrow$ (iv) is a simple generalization of the respective reasoning from [27] for the case of classical Toeplitz operators on the Hardy space $H^2(\mathbb{T})$. Suppose that $T(G)^{[S}_\Sigma \ge \epsilon I$, that is,
$$\langle T(G)^{[S}_\Sigma\phi,\phi\rangle \ge \epsilon\|\phi\|^2$$
for all $\phi\in(B^k)^{n\times 1}_{S\cap\Sigma}$. Since
$$\langle\Pi_{S\cap\Sigma}(G\phi),\phi\rangle = \langle G\phi,\Pi_{S\cap\Sigma}\phi\rangle = \langle G\phi,\phi\rangle,$$
this implies that
$$\langle G\phi,\phi\rangle \ge \epsilon\|\phi\|^2 \quad \text{for all } \phi\in(B^k)^{n\times 1}_{S\cap\Sigma}.$$
Denote by $W$ the multiplication operator
$$(W\phi)(t) = e^{i\langle t,a\rangle}\phi(t), \quad \phi\in(B^k)^{n\times 1}_\Sigma,$$
where $a$ is the first column of the matrix $A$ in (2.7). Then $W^* = W^{-1}$, $WM_G = M_GW$, and
$$\langle M_GW^m\phi, W^m\phi\rangle = \langle W^{-m}M_GW^m\phi,\phi\rangle = \langle M_G\phi,\phi\rangle, \quad m\in\mathbb{Z}.$$
Therefore, the inequality $\langle M_G\phi,\phi\rangle\ge\epsilon\|\phi\|^2$ holds not only for $\phi\in(B^k)^{n\times 1}_{S\cap\Sigma}$, but also for $\phi\in W^m(B^k)^{n\times 1}_{S\cap\Sigma}$, $m\in\mathbb{Z}$. Since the lineal $\{W^m\phi : \phi\in(B^k)^{n\times 1}_{S\cap\Sigma},\ m\in\mathbb{Z}\}$ is dense in $(B^k)^{n\times 1}_\Sigma$, this proves that $M_G\ge\epsilon I$ on $(B^k)^{n\times 1}_\Sigma$. That $M_G\ge\epsilon I$ holds on the space $(B^k)^{n\times 1}$ now follows as in the proof of Lemma 8.2 of [47]. Indeed, let $\{\sigma_j\}_{j\in J}$ be the complete collection of the cosets of $\Sigma$ in $\mathbb{R}^k$; then we have the orthogonal decomposition $(B^k)^{n\times 1} = \oplus_{j\in J}(B^k)^{n\times 1}_{\sigma_j}$, and $M_G = \oplus_{j\in J}(M_G)_j$, where $(M_G)_j$ is an operator on $(B^k)^{n\times 1}_{\sigma_j}$ which is unitarily similar to $M_G$. Recall that $(B^k)$ and $(AP^k)$ can be thought of as $L^2(\mathcal{B})$ and $C(\mathcal{B})$, respectively, where $\mathcal{B}$ is, as before, the Bohr compactification of $\mathbb{R}^k$. Therefore, $M_G$ is a multiplication operator on the space $(L^2(\mathcal{B}))^{n\times 1}$ with a continuous matrix function
$G\in(C(\mathcal{B}))^{n\times n}$. The positivity of the operator $M_G-\epsilon I$ implies that $G-\epsilon I$ itself is positive semi-definite (on $\mathcal{B}$, and therefore on $\mathbb{R}^k$). This finishes the proof in the case that $\alpha=0$. For general $\alpha$, observe that condition (4.1) can be rewritten in terms of the real part $A_R = \frac{1}{2}(A+A^*)$ and imaginary part $A_J = \frac{1}{2i}(A-A^*)$ of the operator $A$ as follows:
$$\langle (A_R\tan\alpha \pm A_J)f,f\rangle \ge 0. \qquad (4.5)$$
Therefore, an operator $A$ is $\alpha$-sectorial if and only if the operators $cA_R+A_J$ and $cA_R-A_J$ are both positive definite for all $c>\tan\alpha$. Moreover, $A$ is $(\alpha,\epsilon)$-sectorial if and only if
$$cA_R\pm A_J \ge \epsilon(c-\tan\alpha)I \qquad (4.6)$$
for every $c\ge\tan\alpha$. Indeed, if (4.5) and (4.2) hold, then (4.6) is immediate. Conversely, (4.6) clearly implies (4.5); dividing both sides of (4.6) by $c$ and passing to the limit as $c\to\infty$, we obtain (4.2). Applying the $\alpha=0$ case to the matrix functions $cG_R\pm G_J$ and the respective Toeplitz operators $T(cG_R\pm G_J)^{[S}_{\Sigma_0}$ $\big(= c(T(G)^{[S}_{\Sigma_0})_R \pm (T(G)^{[S}_{\Sigma_0})_J\big)$, we arrive at the general case. $\Box$

Corollary 4.2. Let $\Sigma\subseteq\Sigma_0$ be additive subgroups of $\mathbb{R}^k$. If $G\in(APW^k)^{n\times n}_{\Sigma_0}$ is such that $G(t)$ is $(\alpha,\epsilon)$-sectorial for every $t\in\mathbb{R}^k$, then $\Pi_\Sigma G\in(APW^k)^{n\times n}_\Sigma$ is also $(\alpha,\epsilon)$-sectorial for every $t\in\mathbb{R}^k$.

Proof. Note that $T(G)^{[S}_{\Sigma_0}$ is $(\alpha,\epsilon)$-sectorial by Proposition 4.1. Now for every $g\in(B^k)^{n\times 1}_{S\cap\Sigma}$ we have
$$\langle T(G)^{[S}_{\Sigma_0}g,g\rangle = \langle\Pi_{S\cap\Sigma_0}(Gg),g\rangle = \big\langle\Pi_{S\cap\Sigma_0}\big((\Pi_\Sigma G)g+(\Pi_{\Sigma_0\setminus\Sigma}G)g\big),g\big\rangle. \qquad (4.7)$$
But because the Fourier spectrum of $g$ is contained in $\Sigma$, the Fourier spectrum of $(\Pi_{\Sigma_0\setminus\Sigma}G)g$ does not intersect $\Sigma$. The expression (4.7) is therefore equal to
$$\langle\Pi_{S\cap\Sigma}\big((\Pi_\Sigma G)g\big),g\rangle = \langle T(\Pi_\Sigma G)^{[S}_\Sigma g,g\rangle.$$
Comparing with the left hand side of (4.7), we see that the operator $T(\Pi_\Sigma G)^{[S}_\Sigma$ is $(\alpha,\epsilon)$-sectorial, and by the same Proposition 4.1 we are done. $\Box$

Using the sectoriality of the Toeplitz operators, we obtain the following factorization result.

Theorem 4.3. Let $G\in(APW^k)^{n\times n}$, and let $\Sigma$ be the minimal additive subgroup of $\mathbb R^k$ that contains $\Omega(G)$. If $G(t)$ is $(\alpha,\epsilon)$-sectorial for every $t\in\mathbb R^k$, where $\alpha\in[0,\pi/2)$, $\epsilon > 0$ are independent of $t$, then $G$ admits left and right canonical $(APW_{S,\Sigma})$ factorizations.

Proof. According to Proposition 4.1, the operator $T(G)^{[S}_\Sigma$ is $(\alpha,\epsilon)$-sectorial, and therefore invertible, simultaneously with $G$. Theorem 3.2 then implies that $G$ admits a left canonical $(APW_{S,\Sigma})$ factorization. The matrix functions $G$ and $G^T$ are $(\alpha,\epsilon)$-sectorial only simultaneously. Thus, the reasoning above, applied to $G^T$ in place of $G$, yields a left canonical $(APW_{S,\Sigma})$ factorization $G^T = X_+X_-$ of $G^T$. But then $G = X_-^T X_+^T$ is a right canonical $(APW_{S,\Sigma})$ factorization of $G$. □

Theorem 4.3 can be extended to locally sectorial matrix functions, provided that one allows for the following modification in the definition of the canonical $AP_S$ factorization. Let us say that $G$ admits a left twisted canonical $AP_S$ factorization if there exist $c\in\mathbb R^k$ and $G_\pm$ satisfying (2.9) such that, in place of (2.8), the following representation holds:

$$G(t) = e^{i\langle c,t\rangle}\, G_+(t)\, G_-(t), \qquad t\in\mathbb R^k. \qquad (4.8)$$

The right twisted canonical $AP_S$ factorization, as well as the $(AP_{S,\Sigma})$, $(APW_S)$, $(APW_{S,\Sigma})$ variations of both left and right twisted canonical factorizations, are defined in the obvious way. For the left twisted canonical $(AP_{S,\Sigma})$ and $(APW_{S,\Sigma})$ factorizations we require, in particular, that $c$ in formula (4.8) belong to $\Sigma$, and analogously for their right counterparts.

Theorem 4.4. Let $G\in(APW^k)^{n\times n}$, and let $\Sigma$ be the minimal additive subgroup of $\mathbb R^k$ that contains $\Omega(G)$. Assume that the numerical range of $G(t)$ is bounded away from zero, that is, there exists $\epsilon > 0$ such that

$$|\langle G(t)\xi, \xi\rangle| \ge \epsilon\|\xi\|^2 \qquad (4.9)$$

for all $\xi\in\mathbb C^n$ and $t\in\mathbb R^k$. Then $G$ admits left and right twisted canonical $(APW_{S,\Sigma})$ factorizations.

For the one variable case Theorem 4.4 was proved in [1]. Taking determinants of both sides in (4.8), we observe that $c = w(\det G)/n$, where $w(\cdot)$ denotes the mean motion. Hence, the vector $c$ in the left/right twisted canonical factorization of $G$ is defined by $G$ uniquely and is the same for both factorizations (provided, of course, that they exist).

Proof of Theorem 4.4. Consider $G$ as a continuous matrix function on $B$. Condition (4.9) can be extended from $t\in\mathbb R^k$ to all $t\in B$. Denote by $S(t)$ the smallest sector with vertex at the origin containing the numerical range of $G(t)$, by $\omega(t)$ the unit vector in the direction of its bisector, and by $l(t)$ the angle between the sides of $S(t)$. The numerical range is a continuous function of the matrix (see [27]); hence, $\omega\colon B\to\mathbb T$ and $l\colon B\to[0,\pi)$ also are continuous functions. But then $\sup_{t\in B} l(t) = 2\alpha < \pi$, $\omega^{\pm1}|_{\mathbb R^k}\in(AP^k)$, and the matrix function $F = \omega^{-1}G\in(AP^k)^{n\times n}$ is $(\alpha,\epsilon)$-sectorial. Choose a function $\omega_0\in(APW^k)$ so close to $\omega$ that $F_0 = \omega_0^{-1}G$ is $(\alpha_0,\epsilon_0)$-sectorial (perhaps with $\alpha_0$ a little bigger than $\alpha$ and $\epsilon_0$ a little smaller than $\epsilon$). On top of that, $F_0$ belongs to $(APW^k)$ together with $\omega_0^{-1}$ and $G$, and therefore (Theorem 4.3) admits left and right canonical $APW_S$ factorizations: $F_0 = X_+X_- = Y_-Y_+$. There also exists a canonical $APW_S$ factorization of the scalar positive definite function $|\omega_0|$ in the form

$$|\omega_0| = \omega_+\overline{\omega_+}. \qquad (4.10)$$

(Indeed, the scalar case of Theorem 4.3 gives the existence of a canonical $APW_S$ factorization of $|\omega_0|$; passing to complex conjugates and comparing with the original factorization produces the form (4.10).) From Theorem 2.4 we then obtain:

$$\omega_0(t) = \omega_+(t)\overline{\omega_+(t)}\; e^{i\langle c,t\rangle}\, e^{i(P_S u)(t)}\, e^{i(P_{-S\setminus\{0\}}u)(t)}.$$

Hence,

$$G = \omega_0 F_0 = e^{i\langle c,t\rangle}\big(e^{i(P_S u)(t)}\omega_+(t)X_+(t)\big)\big(e^{i(P_{-S\setminus\{0\}}u)(t)}\overline{\omega_+(t)}X_-(t)\big)$$
$$= e^{i\langle c,t\rangle}\big(e^{i(P_{-S\setminus\{0\}}u)(t)}\overline{\omega_+(t)}Y_-(t)\big)\big(e^{i(P_S u)(t)}\omega_+(t)Y_+(t)\big) \qquad (4.11)$$

are, respectively, left and right twisted canonical $APW_S$ factorizations of $G$.

To finish the proof of Theorem 4.4, it suffices to show that every left twisted canonical $APW_S$ factorization (4.8) of a locally sectorial matrix function $G$ is automatically an $(APW_{S,\Sigma})$ factorization; the case of right factorizations can then be tackled by taking transposes. Since the diagonal entries of any matrix are contained in its numerical range, the left upper entry $g_{11}$ of the locally sectorial matrix function $G$ is invertible. The matrix function $g_{11}^{-1}G$ is locally sectorial simultaneously with $G$. In addition, its numerical range for all $t\in\mathbb R^k$ contains the point 1, and therefore avoids the non-positive half-axis. Hence, a continuous branch of $\arg\det(g_{11}^{-1}G)$ is bounded, and the mean motion of $\det(g_{11}^{-1}G)$ equals zero. But then $w(\det G) = n\,w(g_{11})$, and therefore $c = w(g_{11})$. Since $g_{11}\in(APW^k_\Sigma)$, Theorem 2.4 implies that $c\in\Sigma$. Observe now that (4.8) leads to a left canonical $APW_S$ factorization of the matrix function $e^{-i\langle c,t\rangle}G(t)$. Since $c\in\Sigma$, the latter matrix function belongs to $(APW^k_\Sigma)$ simultaneously with $G$. But then $G_+^{\pm1}\in(APW^k_{S\cap\Sigma})^{n\times n}$, $G_-^{\pm1}\in(APW^k_{(-S)\cap\Sigma})^{n\times n}$ (see the remark after Theorem 3.2). □
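The scalar factorization (4.10) can be seen concretely in the simplest periodic case ($\Sigma = \mathbb Z$, $S = [0,\infty)$). The toy example below is ours, not from the paper: the positive function $5 + 4\cos t$ factors as $\omega_+\overline{\omega_+}$ with $\omega_+(t) = 2 + e^{it}$, whose Fourier spectrum lies in $S$ (and $2 + e^{iz}$ has no zeros with $\operatorname{Im} z \ge 0$, so $\omega_+^{-1}$ is again of this type). Trigonometric polynomials are stored as frequency-to-coefficient dictionaries:

```python
# Almost periodic (here: trigonometric) polynomials as {frequency: coefficient}.

def ap_mul(f, g):
    """Multiply two trigonometric polynomials given as frequency dicts."""
    out = {}
    for lf, cf in f.items():
        for lg, cg in g.items():
            out[lf + lg] = out.get(lf + lg, 0) + cf * cg
    return {l: c for l, c in out.items() if abs(c) > 1e-12}

def ap_conj(f):
    """Complex conjugation: frequencies flip sign, coefficients conjugate."""
    return {-l: c.conjugate() for l, c in f.items()}

w_plus = {0: 2 + 0j, 1: 1 + 0j}            # 2 + e^{it}, spectrum in [0, infinity)
prod = ap_mul(w_plus, ap_conj(w_plus))     # (2 + e^{it})(2 + e^{-it})

# The product is 5 + 2e^{it} + 2e^{-it} = 5 + 4*cos(t), as claimed.
assert prod == {0: 5 + 0j, 1: 2 + 0j, -1: 2 + 0j}
```

The zero-frequency coefficient of $\omega_+$ (its mean value) is the datum that the geometric-mean constructions later in the paper extract from such factors.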

Corollary 4.5. Let $G$ be as in Theorem 4.4. Then $G$ admits a canonical $AP_S$ factorization if and only if the mean motion of $\det G$ (or, equivalently, of any diagonal entry of $G$) equals zero. If this condition is satisfied, then the numerical range of $d_S(G)$ does not contain zero.


Proof. The first statement follows from the existence of the representation (4.8) with $c = \frac1n w(\det G) = w(g_{jj})$. To prove the second statement, we turn to equality (4.11). According to (4.11), a factorization of $G$ can be obtained by multiplying out factorizations of a scalar function $\omega_0$ and an $(\alpha_0,\epsilon_0)$-sectorial matrix function $F_0$. But then $d_S(G)$ differs from $d_S(F_0)$ by a scalar (non-zero) multiple $d_S(\omega_0)$. Thus, it suffices to show that the numerical range of $d_S(F_0)$ does not contain zero. To this end, observe that the matrix function

$$X_-^{*-1}X_+ = X_-^{*-1}(X_+X_-)X_-^{-1}$$

is $\alpha_0$-sectorial simultaneously with $F_0 = X_+X_-$. Hence, its mean value $M\{X_-^{*-1}X_+\}$ is $\alpha_0$-sectorial as well. Next we note the equality $M\{X_-^{*-1}X_+\} = M\{X_-^*\}^{-1}M\{X_+\}$, which follows from the fact that the mean is an additive and multiplicative functional on $(APW^k)_S^{n\times n}$ (the multiplicativity is a consequence of $S$ being a halfspace). So $M\{X_-^*\}^{-1}M\{X_+\}$ is $\alpha_0$-sectorial. This, in its turn, implies the $\alpha_0$-sectoriality of

$$M\{X_-^*\}\big(M\{X_-^*\}^{-1}M\{X_+\}\big)M\{X_-\} = M\{X_+\}M\{X_-\} = d_S(F_0). \qquad \Box$$

For certain classes of matrices, an explicit description of the numerical range is known, see [26, 37]. Whenever the values of $G$ belong to such classes, condition (4.9) can be recast accordingly. For instance, for $2\times2$ matrix functions the following result holds.

Corollary 4.6. Let $G\in(APW^k)^{2\times2}$. Suppose that the eigenvalues $\lambda_1(t)$, $\lambda_2(t)$ of $G(t)$, the Frobenius norm $\|G_0(t)\|_F$ of $G_0(t) = G(t) - \frac12(\lambda_1(t)+\lambda_2(t))I$, and its determinant $\det G_0(t)$ satisfy, uniformly on $\mathbb R^k$, the inequality

$$|\lambda_1(t)| + |\lambda_2(t)| > \sqrt{\,|\lambda_1(t)-\lambda_2(t)|^2 + \|G_0(t)\|_F^2 - 2\,|\det G_0(t)|\,}.$$

Then $G$ admits left and right twisted canonical $(APW_{S,\Sigma})$ factorizations.

The one variable ($k = 1$) version of Corollary 4.6 for $\Sigma = \mathbb R$ was stated in [52]; its discrete ($\Sigma = \mathbb Z$) prototype goes back to [51].
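For a constant matrix, the hypothesis of Corollary 4.6 and the conclusion of condition (4.9) can both be checked by brute force. The numbers below are our own illustration: $G = \begin{bmatrix}3&1\\0&3\end{bmatrix}$ has numerical range equal to the disc $|z-3|\le 1/2$, hence bounded away from zero:

```python
import math, cmath

G = [[3, 1], [0, 3]]

def quad_form(G, xi):
    """<G xi, xi> for a 2-vector xi."""
    (a, b), (c, d) = G
    x, y = xi
    return (a * x + b * y) * x.conjugate() + (c * x + d * y) * y.conjugate()

# Sample unit vectors xi = (cos s, e^{ip} sin s) over a grid; every value of
# |<G xi, xi>| must stay away from zero (the true minimum here is 2.5).
vals = []
for i in range(60):
    for j in range(60):
        s, p = math.pi * i / 59, 2 * math.pi * j / 59
        xi = (math.cos(s), cmath.exp(1j * p) * math.sin(s))
        vals.append(abs(quad_form(G, xi)))
assert min(vals) >= 2.49

# Data for Corollary 4.6: lam1 = lam2 = 3, G0 = [[0, 1], [0, 0]],
# ||G0||_F = 1, det G0 = 0, so the inequality reads 6 > 1 and holds.
assert 3 + 3 > math.sqrt(abs(3 - 3) ** 2 + 1.0 - 2 * 0.0)
```

The grid sampling only probes the numerical range from inside, which is enough here since the range of a matrix is the closure of the sampled values.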

5. Factorization of Hermitian Matrix Functions

Another class of almost periodic matrix functions for which factorization results are often useful is the class of Hermitian valued functions. Before we formulate


the theorem, recall that $J\in\mathbb C^{n\times n}$ is a signature matrix if $J = J^* = J^{-1}$, that is, if $J$ is simultaneously Hermitian and unitary. An example of a signature matrix is delivered by

$$J = \begin{bmatrix} I_{p_+} & 0 \\ 0 & -I_{p_-} \end{bmatrix}, \qquad p_+ + p_- = n. \qquad (5.1)$$

If $J$ is a signature matrix, $U\in\mathbb C^{n\times n}$ is called $J$-unitary provided that $U^*JU = J$. For an invertible Hermitian matrix $G$ we define its signature (notation: $\operatorname{sign} G$) as the number of positive eigenvalues minus the number of negative eigenvalues (counting multiplicities).
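These definitions are easy to verify computationally; the checks below (our own, for $n = 2$, $p_+ = p_- = 1$) confirm that $J = \operatorname{diag}(1,-1)$ is a signature matrix, exhibit one $J$-unitary matrix, and compute a signature:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

J = [[1, 0], [0, -1]]
assert mat_mul(J, J) == [[1, 0], [0, 1]]        # J = J^{-1}; J = J* is clear

# U = [[cosh s, sinh s], [sinh s, cosh s]] is J-unitary: U* J U = J.
s = 0.7
c, h = math.cosh(s), math.sinh(s)
U = [[c, h], [h, c]]                             # real symmetric, so U* = U
UJU = mat_mul(U, mat_mul(J, U))
assert all(abs(UJU[i][j] - J[i][j]) < 1e-12 for i in range(2) for j in range(2))

# sign G for G = [[0, 1], [1, 0]]: eigenvalues +1 and -1, signature 0.
lam = (1.0, -1.0)
assert sum((x > 0) - (x < 0) for x in lam) == 0
```

The $J$-unitary example works because $\cosh^2 s - \sinh^2 s = 1$; the group of all such matrices reappears in Section 7.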

Theorem 5.1. Let $G\in(APW^k)^{n\times n}$ and assume that the matrix $G(t)$ is Hermitian for every $t\in\mathbb R^k$. Let also $\Sigma$ be the minimal additive subgroup of $\mathbb R^k$ which contains $\Omega(G)$. If the Toeplitz operator $T(G)^{-S]}_{\Gamma_0}$ is invertible for some additive subgroup $\Gamma_0\supseteq\Sigma$, then $G$ admits a factorization

$$G(t) = A_+(t)\, J\, (A_+(t))^*, \qquad (5.2)$$

where $A_+^{\pm1}\in(APW^k_{S\cap\Sigma})^{n\times n}$, $J$ is given by (5.1), and the sizes $p_+$ and $p_-$ are uniquely determined by the additional condition $p_+ - p_- = \operatorname{sign} G$ (the signature of $G(t)$, which is constant for $t\in\mathbb R^k$). The matrix function $A_+$ is unique up to a constant right $J$-unitary multiple.

Proof. Using Theorem 3.2, the proof is not much different from that for the canonical Wiener-Hopf factorization of matrix functions continuous on the unit circle [49]. Namely, from the invertibility of $T(G)^{-S]}_{\Gamma_0}$ it follows that $G$ has a canonical $(APW_{S,\Sigma})$ factorization (2.8). Since $G$ is Hermitian, we have at the same time

$$G(t) = G_-(t)^* G_+(t)^*, \qquad t\in\mathbb R^k,$$

so that $(G_-^*)^{-1}G_+ = G_+^* G_-^{-1}$. According to the latter equality, the matrix function $H = (G_-^*)^{-1}G_+$ is Hermitian, invertible, and belongs to $(AP^k)_S^{n\times n}\cap(AP^k)_{-S}^{n\times n} = \mathbb C^{n\times n}$. As such, $H$ admits a representation $H = C^*JC$. From here, $G_- = C^{-1}J(C^*)^{-1}G_+^*$, so that

$$G = (G_+C^{-1})\, J\, (G_+C^{-1})^*,$$

and (5.2) holds with $A_+ = G_+C^{-1}$. In particular, $p_+ - p_-$ coincides with the signature of $G(t)$ for all $t$.

Finally, let also $G = B_+JB_+^*$, where $B_+^{\pm1}\in(AP^k)_S^{n\times n}$. Then $A_+^{-1}B_+J = J(A_+^{-1}B_+)^{*-1}$. In other words, the matrix function $U = A_+^{-1}B_+$ satisfies the condition $UJ = JU^{*-1}$, and is therefore $J$-unitary. It is also constant, because $U\,(= JU^{*-1}J)$ belongs simultaneously to $(AP^k)_S^{n\times n}$ and $(AP^k)_{-S}^{n\times n}$. It remains to observe that $B_+ = A_+U$. □
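The step "$H$ admits a representation $H = C^*JC$" is the finite-dimensional inertia decomposition: diagonalize $H = Q\Lambda Q^*$ and take $C = |\Lambda|^{1/2}Q^*$, $J = \operatorname{diag}(\operatorname{sign}\lambda_i)$. A worked check with our own numbers ($H = \begin{bmatrix}0&1\\1&0\end{bmatrix}$, eigenvalues $\pm1$):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Orthonormal eigenvectors of H = [[0, 1], [1, 0]] are (1, 1)/sqrt(2) (eigenvalue +1)
# and (1, -1)/sqrt(2) (eigenvalue -1); |lambda|^{1/2} = 1, so C = Q*.
r = 1 / math.sqrt(2)
C = [[r, r], [r, -r]]
J = [[1, 0], [0, -1]]

Ct = [[C[j][i] for j in range(2)] for i in range(2)]   # C* = C^T (real entries)
H = mat_mul(Ct, mat_mul(J, C))                          # reconstruct C* J C

target = [[0, 1], [1, 0]]
assert all(abs(H[i][j] - target[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

In the proof above the same decomposition is applied to the constant matrix $H = (G_-^*)^{-1}G_+$, and $C$ is then absorbed into the analytic factor $G_+$.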

Corollary 5.2. Let $G\in(APW^k)^{n\times n}$ be Hermitian valued on $\mathbb R^k$. If $G$ admits a canonical $AP_S$ factorization, then $d_S(G)$ is a Hermitian matrix with the same signature as $G(t)$.

Indeed, (5.2) implies that $d_S(G) = M\{A_+\}\,J\,M\{A_+\}^*$. Combining Theorem 5.1 with Theorem 4.3, we obtain the following result.

Corollary 5.3. Let $G\in(APW^k)^{n\times n}$ and assume that the matrix $G(t)$ is positive definite for every $t\in\mathbb R^k$, and $|\det G(t)|\ge\epsilon$ for every $t\in\mathbb R^k$, where $\epsilon > 0$ is independent of $t$. Let also $\Sigma$ be the minimal additive subgroup of $\mathbb R^k$ which contains $\Omega(G)$. Then $G(t)$ admits left and right canonical factorizations of the forms

$$G(t) = A_+(t)\,(A_+(t))^* = (\widetilde A_+(t))^*\,\widetilde A_+(t), \qquad (5.3)$$

where $A_+^{\pm1}, \widetilde A_+^{\pm1}\in(AP^k)_S^{n\times n}$. The factors $A_+^{\pm1}$, $\widetilde A_+^{\pm1}$ are defined up to a right/left constant unitary multiple, and in fact satisfy the condition $A_+^{\pm1}, \widetilde A_+^{\pm1}\in(APW^k_{S\cap\Sigma})^{n\times n}$.

For the case $\Sigma = \mathbb Z^k$, the result of Corollary 5.3 is well known, see [28]. For the one-variable case (with $S = [0,\infty)$ and $\Sigma = \mathbb R$) Corollary 5.3 was proved in [52]. The one variable periodic ($\Sigma = \mathbb Z$) case was considered much earlier in [48]. As soon as Corollary 5.3 is established, the factorization of any (invertible) matrix function in $(APW^k_\Sigma)^{n\times n}$ can be reduced to the factorization of unitary matrix functions. The reasoning is exactly the same as in the case of classical Wiener-Hopf factorization (see [41, Chapter 7]) or when $k = 1$, $S = [0,\infty)$ [52]. Namely, for a fixed subgroup $\Sigma$ of $\mathbb R^k$, if $G\in(APW^k_\Sigma)^{n\times n}$ is invertible, then $GG^*$, $(GG^*)^{1/2}$ and $(G^*G)^{1/2}$ are invertible in $(APW^k_\Sigma)^{n\times n}$ and are positive definite (this follows by applying Proposition 2.3 first with $\varphi(z) = \sqrt z$, and then with $\varphi(z) = z^{-1}$). Then, according to Corollary 5.3, $GG^* = XX^*$, $(GG^*)^{1/2} = X_1X_1^*$ and $(G^*G)^{1/2} = X_2^*X_2$, where $X$, $X_1$ and $X_2$ are invertible in $(APW^k_{S\cap\Sigma})^{n\times n}$. The matrix functions $U = X^{-1}G$ and $V = X_1^{-1}GX_2^{-1}$ are unitary, and they both admit left (twisted) canonical factorization only simultaneously with $G$. If $G$ itself is Hermitian, then $(G^*G)^{1/2} = (GG^*)^{1/2}$ and it is possible to choose $X_2 = X_1^*$. But then $V$ is Hermitian as well (observe that $U$ does not necessarily have this property; this is our main reason for introducing $V$ along with the more straightforwardly defined $U$). In other words, the factorization problem for any Hermitian matrix function reduces to such a problem for a signature matrix function, the values of which are simultaneously Hermitian and unitary. A necessary and sufficient condition for the factorability of such matrix functions is presently not known. We will give here one sufficient condition of a geometrical nature.
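The reduction to unitary symbols can be checked on constant matrices, where the role of the invertible factor $X$ with $GG^* = XX^*$ is played by a Cholesky factor. The numbers below are our own toy example, not from the paper:

```python
import math

G = [[2.0, 1.0], [0.0, 1.0]]

# G G^T = [[5, 1], [1, 1]]; its lower-triangular Cholesky factor X satisfies
# X X^T = G G^T, so U = X^{-1} G must be unitary (here: real orthogonal).
x11 = math.sqrt(5.0)
x21 = 1.0 / x11
x22 = math.sqrt(1.0 - x21 ** 2)

# U = X^{-1} G by forward substitution on the rows of G.
u_row1 = [G[0][0] / x11, G[0][1] / x11]
u_row2 = [(G[1][0] - x21 * u_row1[0]) / x22, (G[1][1] - x21 * u_row1[1]) / x22]
U = [u_row1, u_row2]

# Check U U^T = I up to roundoff.
for i in range(2):
    for j in range(2):
        dot = U[i][0] * U[j][0] + U[i][1] * U[j][1]
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```

In the almost periodic setting the Cholesky factor is replaced by the factor $X\in(APW^k_{S\cap\Sigma})^{n\times n}$ supplied by Corollary 5.3, but the algebra $UU^* = X^{-1}(GG^*)X^{-*} = I$ is the same.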


Theorem 5.4. Let $V\in(APW^k)^{n\times n}$ be a signature matrix function. Suppose that there exists a maximal $V$-positive subspace $L$ of $\mathbb C^n$, i.e.,

$$\langle V(t)\xi, \xi\rangle \ge \epsilon\|\xi\|^2 \qquad (5.4)$$

for all $\xi\in L$ and $t\in\mathbb R^k$ (with $L$ and $\epsilon > 0$ not depending on $t$), and that the dimension of $L$ is equal to the multiplicity of the eigenvalue 1 of $V(t)$. Then $V$ admits left and right canonical $(APW_{S,\Sigma})$ factorizations.

Proof. Introduce an orthonormal basis of $\mathbb C^n$ in such a way that the first $p_+ = \dim L$ vectors of it constitute a basis of $L$, and denote the corresponding change of coordinates matrix by $U_0$. Since $U_0$ is a constant matrix, the factorability properties of $V$ and $U_0^*VU_0$ are the same. Consider the partition

$$U_0^*VU_0 = \begin{bmatrix} X & Y \\ Y^* & Z \end{bmatrix}$$

with a $p_+\times p_+$ upper left block $X$. Due to (5.4), $X$ is a uniformly positive definite matrix function. From the unitarity of $V$ (and therefore of $U_0^*VU_0$) it follows that

$$X = (I - YY^*)^{1/2}, \qquad Z^2 = I - Y^*Y, \qquad\text{and}\qquad Y\big((I - Y^*Y)^{1/2} + Z\big) = 0. \qquad (5.5)$$

The latter two equalities of (5.5), combined with the maximal positivity of $L$, imply that $Z = -(I - Y^*Y)^{1/2}$. (Indeed, if $Z$ had a positive eigenvalue $\lambda$ with a normalized eigenvector $x$, then we would have $Yx = 0$, and therefore $L \dotplus \operatorname{span}\begin{bmatrix}0\\x\end{bmatrix}$ would be a $U_0^*VU_0$-positive subspace of dimension larger than $p_+$, a contradiction with the hypothesis that $L$ is a maximal positive subspace.) Then, due to the first equality of (5.5), $-Z$ and $X$ are unitarily equivalent on the orthogonal complements of the subspaces of their fixed vectors. The matrix function

$$W = \begin{bmatrix} I_{p_+} & 0\\ 0 & -I_{n-p_+}\end{bmatrix} U_0^*VU_0 = \begin{bmatrix} X & Y\\ -Y^* & -Z\end{bmatrix}$$

then has a uniformly positive definite real part $W_R = \begin{bmatrix} X & 0\\ 0 & -Z\end{bmatrix}$ and a contractive imaginary part $W_J = \begin{bmatrix} 0 & -iY\\ iY^* & 0\end{bmatrix}$. The existence of left and right canonical $(APW_{S,\Sigma})$ factorizations of $W$ now follows from Theorem 4.3. Indeed, $W$ is $(\arctan(1/\epsilon), \epsilon)$-sectorial, where $\epsilon > 0$ is such that $W_R \ge \epsilon I$. Since $W$ and $V$ differ by constant multiples only, the same is true for $V$. □
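The identities (5.5) and the sign of $Z$ can be verified on the smallest nontrivial example. The numbers are ours: $V = \begin{bmatrix}x&y\\y&-x\end{bmatrix}$ with $x = \sqrt{1-y^2} > 0$ is Hermitian unitary, $L = \operatorname{span}\{e_1\}$ is a maximal $V$-positive subspace, and the blocks are the scalars $X = x$, $Y = y$, $Z = -x$:

```python
import math

y = 0.6
x = math.sqrt(1 - y * y)            # x = 0.8, so <V e1, e1> = 0.8 > 0
X, Y, Z = x, y, -x

assert abs(X - math.sqrt(1 - Y * Y)) < 1e-12          # X = (I - Y Y*)^{1/2}
assert abs(Z * Z - (1 - Y * Y)) < 1e-12               # Z^2 = I - Y* Y
assert abs(Y * (math.sqrt(1 - Y * Y) + Z)) < 1e-12    # Y((I - Y*Y)^{1/2} + Z) = 0
assert Z < 0                                          # Z = -(I - Y*Y)^{1/2}

# W = diag(1, -1) V = [[x, y], [-y, x]]: real part diag(x, x) is positive
# definite, and the imaginary part [[0, -iy], [iy, 0]] has norm |y| < 1.
assert x > 0 and abs(y) < 1
```

With $\epsilon = x = 0.8$ this $W$ is $(\arctan(1/0.8),\,0.8)$-sectorial, exactly as the last step of the proof asserts.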

6. One-sided Invertibility of Toeplitz Operators


In this section, we use Theorem 5.1 to establish one-sided invertibility conditions for Toeplitz operators in terms of factorizations. As in [36], where the case $k = 1$, $\Sigma = \mathbb R$ was considered, we take advantage of a simple but useful observation: $T$ is right invertible if and only if $\begin{bmatrix} I & T^* \\ T & 0 \end{bmatrix}$ is two-sided invertible. The main result here is the following extension of Theorem 3.2.

Theorem 6.1. Assume that $G\in(APW^k)^{n\times n}$ and let $\Sigma = \Sigma(G)$ be the minimal additive subgroup of $\mathbb R^k$ that contains $\Omega(G)$. Then the following statements are equivalent:

i) the operator $T(G)^{-S]}_{\Gamma_0}$ is right invertible for some additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

ii) the operator $T(G)^{-S]}_{\Gamma_0}$ is right invertible for every additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

iii) the operator $T(G)^{-S)}_{\Gamma_0}$ is right invertible for some additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

iv) the operator $T(G)^{-S)}_{\Gamma_0}$ is right invertible for every additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

v) the operator $T(G^T)^{[S}_{\Gamma_0}$ is left invertible for some additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

vi) the operator $T(G^T)^{[S}_{\Gamma_0}$ is left invertible for every additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

vii) the operator $T(G^T)^{(S}_{\Gamma_0}$ is left invertible for some additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

viii) the operator $T(G^T)^{(S}_{\Gamma_0}$ is left invertible for every additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

ix) $G$ is invertible (in $(L^\infty(\mathbb R^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{[S}_{\Gamma_0}$ is right invertible for some additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

x) $G$ is invertible (in $(L^\infty(\mathbb R^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{[S}_{\Gamma_0}$ is right invertible for every additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

xi) $G$ is invertible (in $(L^\infty(\mathbb R^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{(S}_{\Gamma_0}$ is right invertible for some additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

xii) $G$ is invertible (in $(L^\infty(\mathbb R^k))^{n\times n}$ and therefore in $(APW^k)^{n\times n}$) and the operator $T(G^{-1})^{(S}_{\Gamma_0}$ is right invertible for every additive subgroup $\Gamma_0\supseteq\Sigma(G)$;

xiii) $G$ admits a representation $G(t) = G_+(t)G_-(t)$, $t\in\mathbb R^k$, where
$$G_+\in(AP^k)_S^{n\times n},\quad G_-^{-1}\in(AP^k)_{-S}^{n\times n},\quad G_+^{-1}, G_-\in(AP^k)^{n\times n};$$

xiv) $G$ admits a representation $G(t) = G_+(t)G_-(t)$, $t\in\mathbb R^k$, with
$$G_+\in(APW^k_{S\cap\Sigma})^{n\times n},\quad G_-^{-1}\in(APW^k_{(-S)\cap\Sigma})^{n\times n},\quad G_+^{-1}, G_-\in(APW^k)^{n\times n}; \qquad (6.1)$$

xv) $G$ admits a representation
$$G = \Phi_+\, U\, \Phi_-, \qquad (6.2)$$
where $\Phi_+^{\pm1}\in(AP^k)_S^{n\times n}$, $\Phi_-^{\pm1}\in(AP^k)_{-S}^{n\times n}$, and where $U\in(AP^k)_S^{n\times n}$ is unitary valued;

xvi) $G$ admits a representation (6.2), where $\Phi_+^{\pm1}\in(APW^k_{S\cap\Sigma})^{n\times n}$, $\Phi_-^{\pm1}\in(APW^k_{(-S)\cap\Sigma})^{n\times n}$, and $U = \begin{bmatrix} V & 0 \\ 0 & I \end{bmatrix}$ with unitary valued $V\in(APW^k_{S\setminus\{0\}})$.

Proof. Obviously, xvi)⇒xv)⇒xiii) and xvi)⇒xiv)⇒xiii). If xiii) holds, then

$$T(G)^{-S]}\; G_-^{-1}P_{-S}\,G_+^{-1}P_{-S} = P_{-S}\,G_+G_-\;G_-^{-1}P_{-S}\,G_+^{-1}P_{-S} = P_{-S}\,G_+\,P_{-S}\,G_+^{-1}P_{-S}. \qquad (6.3)$$

On the other hand, using the equality $G_+P_{S\setminus\{0\}} = P_{S\setminus\{0\}}G_+P_{S\setminus\{0\}}$, we have

$$P_{-S} = P_{-S}\,G_+G_+^{-1}P_{-S} = P_{-S}\,G_+\,P_{-S}\,G_+^{-1}P_{-S} + P_{-S}\,G_+\,P_{S\setminus\{0\}}\,G_+^{-1}P_{-S} = P_{-S}\,G_+\,P_{-S}\,G_+^{-1}P_{-S}.$$

Comparing with (6.3), we obtain that $G_-^{-1}P_{-S}G_+^{-1}P_{-S}$ is a right inverse of the operator $T(G)^{-S]}$. In other words, xiii)⇒i) for $\Gamma_0 = \mathbb R^k$.

We will show now that i)⇒xiv). To this end, introduce the matrix function

$$\widehat G = \begin{bmatrix} I & G^* \\ G & 0 \end{bmatrix}$$

and consider the Toeplitz operator $T(\widehat G)^{-S]}_{\Gamma_0}$ on the direct sum of two copies of the space $(B^k_{\Gamma_0})^{n\times1}$. Since

$$T(\widehat G)^{-S]}_{\Gamma_0} = \begin{bmatrix} I & (T(G)^{-S]}_{\Gamma_0})^* \\ T(G)^{-S]}_{\Gamma_0} & 0 \end{bmatrix},$$

the right invertibility of $T(G)^{-S]}_{\Gamma_0}$ implies the two-sided invertibility of $T(\widehat G)^{-S]}_{\Gamma_0}$ (see [36, Lemma 1.4]). According to the implication i)⇒xvi) of Theorem 3.2, $\widehat G$ admits a left canonical $(APW_{S,\Sigma})$ factorization. But $\widehat G$ is a Hermitian matrix function with zero signature. Due to Theorem 5.1, its left canonical factorization can be written in the form $\widehat G = XJX^*$ with $J = \begin{bmatrix} I_n & 0 \\ 0 & -I_n \end{bmatrix}$ and $X^{\pm1}\in(APW^k_{S\cap\Sigma})^{2n\times2n}$. We now partition the matrix function $X$ into $n\times n$ blocks, $X = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}$, and let

$$G_+ = X_{22}, \qquad G_- = \big(X_{11}X_{21}^{-1}X_{22} - X_{12}\big)^*.$$

The proof of the invertibility of $X_{21}$ (in $(APW^k_\Sigma)$), of the equality $G = G_+G_-$, and of the properties (6.1) goes through in exactly the same way as in the implication 5)⇒2) of Theorem 1.5 in [36], where the case $k = 1$, $S = [0,\infty)$ was considered. The proof of xiv)⇒xvi) is the same as for the implication 2)⇒3)⇒4) in the same theorem of [36], the only difference being that, when factoring positive definite matrix functions in $(APW^k_\Sigma)^{n\times n}$, one should refer to Corollary 5.3 instead of its one-variable version from [52].

Hence, statements xiii)-xvi) and i) (for $\Gamma_0 = \mathbb R^k$) are pairwise equivalent. To prove the equivalence of statements i) (for general subgroups $\Gamma_0\supseteq\Sigma$) through xii), one simply applies the respective equivalences of Theorem 3.2 to the matrix function $\widehat G$. □

Note that the left twisted canonical factorization (4.8) is of the form (6.2) if $c\in S$. Comparing with Theorem 4.4, the following corollary results.

Corollary 6.2. Let $G\in(APW^k)^{n\times n}$, and let $\Sigma$ be the minimal additive subgroup of $\mathbb R^k$ that contains $\Omega(G)$. Assume that the numerical range of $G(t)$ is bounded away from zero. Then $T(G)^{-S]}_{\Gamma_0}$ is right (resp. left) invertible for every additive supergroup $\Gamma_0$ of $\Sigma$ if the mean motion of the determinant of $G$ belongs to $S$ (resp. $w(\det G)\in -S$).
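The block-matrix observation used at the beginning of this section already works in finite dimensions, where Toeplitz structure plays no role. A quick check with our own matrices: $T = [1\;\,0]$ is right invertible (with right inverse $[1\;\,0]^T$), and the corresponding $3\times3$ block matrix is invertible, while the non-right-invertible transpose produces a singular block matrix:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# T = [1, 0] : C^2 -> C^1.  Block matrix [[I_2, T*], [T, 0]]:
M_right = [[1, 0, 1],
           [0, 1, 0],
           [1, 0, 0]]
assert det3(M_right) != 0          # invertible, matching right invertibility of T

# T = [[1], [0]] : C^1 -> C^2 is not right invertible; [[I_1, T*], [T, 0_2]]:
M_not = [[1, 1, 0],
         [1, 0, 0],
         [0, 0, 0]]
assert det3(M_not) == 0            # singular, as expected
```

In the proof of Theorem 6.1 the same device is applied to $T = T(G)^{-S]}_{\Gamma_0}$, producing the Hermitian symbol $\widehat G$ of zero signature.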

7. Robustness and Continuity of Factorizations

In this section we consider continuity properties of canonical and twisted canonical factorizations. It turns out that these factorizations persist under small perturbations of the original function, and the factors are well behaved. A basic result in this direction is the following theorem.

Theorem 7.1. If $G\in(APW^k)^{n\times n}$ admits a left, resp. right, canonical $AP_S$ factorization, then there exists $\epsilon > 0$ such that every $\widetilde G\in(APW^k)^{n\times n}$ with $\|\widetilde G - G\|_\infty < \epsilon$ admits a left, resp. right, canonical $(APW_S)$ factorization. Moreover, there exists $\epsilon_1 \le \epsilon$, $\epsilon_1 > 0$, such that for every $\widetilde G\in(APW^k)^{n\times n}$ satisfying $\|\widetilde G - G\|_W < \epsilon_1$ the left, resp. right, canonical $(APW_S)$ factorization of $\widetilde G$ can be chosen so that its factors are continuous functions of $\widetilde G$ (in the Wiener norm $\|\cdot\|_W$).

Note that we have used the $(AP^k)$ norm in the first statement and the Wiener norm in the second statement. Note also that Proposition 2.2 guarantees invertibility (in $(APW^k)^{n\times n}$) of every $\widetilde G\in(APW^k)^{n\times n}$ which is sufficiently close to $G$ in the $(AP^k)$ norm. This is a necessary condition for the existence of a canonical $(APW_S)$ factorization.


Proof. First, we verify the equality

$$\|T(G)^{-S]}_{\Gamma_0}\| = \|G\|_\infty \qquad (7.1)$$

for any supergroup $\Gamma_0$ of the minimal additive subgroup that contains $\Omega(G)$. Let $M_G\colon(B^k_{\Gamma_0})^{n\times1}\to(B^k_{\Gamma_0})^{n\times1}$ be the operator of multiplication by $G$. Then a standard argument (see, for example, the proof of Lemma 4.11 in [4]) shows that

$$\|M_G\| = \|G\|_\infty. \qquad (7.2)$$

Since $T(G)^{-S]}_{\Gamma_0}$ is a compression of $M_G$, we have $\|T(G)^{-S]}_{\Gamma_0}\| \le \|G\|_\infty$. Conversely, fix $\epsilon > 0$, and let $x\in(B^k_{\Gamma_0})^{n\times1}$, $\|x\| = 1$, be such that $\|y\| \ge \|M_G\| - \epsilon$, where $y = M_Gx$. We may also assume that the Fourier spectrum of $x$ is a finite set. Let $\{\sigma_1,\dots,\sigma_q\} = \Omega(x)\cap S$. Then for every $\sigma_0\in\Gamma_0\cap(-S)$ such that $\sigma_0 + \sum_{j=1}^q\sigma_j\in -S$ we have that the spectrum of $e^{i\langle\sigma_0,t\rangle}x$ belongs to $-S$. Now, denoting by $\sum_{\lambda\in\Omega(y)} y_\lambda e^{i\langle\lambda,t\rangle}$ the Fourier series of $y$:

$$\big\|T(G)^{-S]}_{\Gamma_0}\big(e^{i\langle\sigma_0,t\rangle}x\big)\big\|^2 = \big\|P_{\Gamma_0\cap(-S)}\big(M_G e^{i\langle\sigma_0,t\rangle}x\big)\big\|^2 = \big\|P_{\Gamma_0\cap(-S)}\big(e^{i\langle\sigma_0,t\rangle}M_Gx\big)\big\|^2 = \sum_{\lambda\in\Omega(y),\ \lambda+\sigma_0\in -S}\|y_\lambda\|^2. \qquad (7.3)$$

Choosing a suitable $\sigma_0$, we can make $\sum_{\lambda\in\Omega(y),\ \lambda+\sigma_0\in -S}\|y_\lambda\|^2$ as close as we wish to $\|y\|^2$. Thus, the expression (7.3) can be made greater than or equal to $(\|M_G\| - \epsilon)^2 - \epsilon$. Since $\epsilon > 0$ was arbitrary, we obtain $\|T(G)^{-S]}_{\Gamma_0}\| \ge \|M_G\|$, and in view of (7.2), the equality (7.1) follows.

Once (7.1) is established, the proof of the first part of Theorem 7.1 for left canonical factorizations follows by applying Theorem 3.2. For right canonical factorizations, use the already obtained result for the transposed matrix function $G^T$.

We now prove continuity of the factors (in the Wiener norm) for the left canonical factorization (the proof for the right one is completely analogous). Let $G = G_+G_-$ be a left canonical $(APW_S)$ factorization. Replacing $\widetilde G$ by $G_+^{-1}\widetilde GG_-^{-1}$, we reduce the proof to the case $G = I$. Now the continuity follows from general results on factorization of elements close to unity in abstract Banach algebras (see [10]). Alternatively, use the explicit construction of the factors in the proof of i)⇒xvi) of Theorem 3.2. □

Theorem 7.1 leads to a global continuity result for canonical factorizations:

Corollary 7.2. Let $\mathcal G_\ell$ (resp. $\mathcal G_r$) be the set of all functions $G\in(APW^k)^{n\times n}$ that admit a left, resp. right, canonical $(APW_S)$ factorization $G = G_+G_-$, resp. $G = G_-G_+$. Then the factors $G_\pm$ in this factorization can be chosen to be continuous functions of $G\in\mathcal G_\ell$, resp. $G\in\mathcal G_r$ (in the Wiener norm).

Proof. Consider the case of the left factorization. By Theorem 7.1, $G_\pm$ are continuous functions of $G\in\mathcal G_\ell$ locally, in a neighborhood of every given $G$. Multiplying $G_+$ on the right by $(M\{G_+\})^{-1}$ (which is a locally continuous function of $G$), we may assume without loss of generality that $M\{G_+\} = I$. But a left canonical $(APW_S)$ factorization with this additional property is unique, so in fact $G_\pm$ is a (globally) continuous function of $G\in\mathcal G_\ell$. □

Recall that $M\{G_+\}M\{G_-\}$ is the geometric mean $d_S(G)$ of the matrix function $G$. Hence, for the factorizations constructed in the proof of Corollary 7.2, $M\{G_-\} = d_S(G)$. This proves that $d_S(G)$ also is a continuous function of $G$ on $\mathcal G_\ell$. Splitting this factor out of $G_-$, we can rewrite the (left) canonical factorization of $G\in\mathcal G_\ell$ as

$$G = G_+\, d_S(G)\, G_-, \qquad (7.4)$$

where all three factors $G_+\in(APW^k_{S\cap\Sigma})^{n\times n}$, $d_S(G)\in\mathbb C^{n\times n}$, $G_-\in(APW^k_{(-S)\cap\Sigma})^{n\times n}$ are defined by $G$ uniquely and depend on it continuously.

We note that, with obvious changes, Theorem 7.1 and Corollary 7.2, and their proofs, extend to twisted canonical factorizations. In particular, the mean motion of $\widetilde G\in(APW^k)^{n\times n}$ remains constant in a $\|\cdot\|_\infty$-neighborhood of $G\in(APW^k)^{n\times n}$, provided $G$ admits a twisted canonical $(AP_S)$ factorization.

The results of Theorem 7.1 and Corollary 7.2 apply to all matrix functions having a canonical $APW_S$ factorization, independently of their additional algebraic properties. However, it is a nontrivial question whether or not the symmetric factorizations of Hermitian matrix functions considered in Theorem 5.1 and Corollary 5.3 are also continuous, locally or globally. The answer is affirmative for positive definite matrix functions.

Corollary 7.3. Let $\mathcal P$ be the set of all uniformly positive definite matrix functions in $(APW^k)^{n\times n}$. Then the factors $A_+$, $\widetilde A_+$ in the factorizations (5.3) can be chosen to be continuous functions of $G\in\mathcal P$ (in the Wiener norm).

Proof. As usual, it suffices to consider the case of the left factorization. According to Corollary 7.2, $G$ can be represented in the form (7.4). Since $G$ is positive definite, the matrix $d_S(G)$ is positive definite as well (Corollary 5.2), and the representation $G_-^*\, d_S(G)\, G_+^*$ also delivers a left canonical factorization of $G\,(= G^*)$. But then $G_+$ and $G_-^*$ differ only by a constant right multiple. Since both these matrix functions have mean value $I$, it follows that they are in fact equal: $G_+ = G_-^*$, so that

$$G = G_+\, d_S(G)\, G_+^*. \qquad (7.5)$$


Let $H$ be the positive square root of $d_S(G)$. Then $H$, and therefore $A_+ = G_+H$, depend continuously on $G$ in the Wiener norm. It remains to observe that (7.5) can be rewritten as $G = A_+A_+^*$. □

Local continuity persists also for symmetric factorizations of arbitrary (not necessarily definite) Hermitian matrix functions.

Theorem 7.4. Let $G\in(APW^k)^{n\times n}$, and assume that $G$ admits a left canonical $AP_S$ factorization of the form (5.2) (in particular, $G$ is Hermitian valued). Then there exists $\epsilon > 0$ such that every $\widetilde G\in(APW^k)^{n\times n}$ having Hermitian values and satisfying $\|\widetilde G - G\|_W < \epsilon$ admits a left canonical $(APW_S)$ factorization

$$\widetilde G(t) = \widetilde A_+(t)\, J\, (\widetilde A_+(t))^*, \qquad (7.6)$$

in which $\widetilde A_+$ is a continuous function of $\widetilde G$ (in the Wiener norm). An analogous result holds for right canonical factorizations.

Proof. Using Theorem 7.1, and arguing as in the proof of Theorem 5.1, we obtain a left canonical factorization of the form

$$\widetilde G(t) = \widetilde A_+(t)\, H\, (\widetilde A_+(t))^*, \qquad (7.7)$$

where $\widetilde A_+\in(APW^k_S)^{n\times n}$ and the constant (that is, independent of $t\in\mathbb R^k$) invertible matrix $H = H^*$ are continuous functions of $\widetilde G$. To make the dependence of $H$ on $\widetilde G$ explicit, write $H = H(\widetilde G)$. Without loss of generality, we assume that $H(G) = J$. Taking a smaller $\epsilon$, if necessary, we may also assume that the determinants of the upper left $j\times j$ blocks of $H(\widetilde G)$ are nonzero, for $j = 1,\dots,n$, and for every $\widetilde G\in(APW^k)^{n\times n}$ having Hermitian values and satisfying $\|\widetilde G - G\|_W < \epsilon$. One can then apply to $H(\widetilde G)$ the method of Lagrange (which is essentially based on the Gaussian algorithm; see [16], for example) for the reduction of a Hermitian form to a sum of signed squares. As a result, we obtain a continuous (as a function of $H$ in a neighborhood of $H(G)$) invertible matrix $S$ such that $H = S^*JS$. Substituting this expression for $H$ in (7.7), the claim of the theorem follows. □

It turns out, however, that for Hermitian valued functions that are not definite, the global continuity of canonical symmetric factorizations fails. This failure occurs already for $2\times2$ constant matrices:

Lemma 7.5. There does not exist a continuous function $f$ on the set $\mathcal H_-$ of $2\times2$ Hermitian matrices with negative determinant, with $f$ taking values in the group $GL_2$ of all invertible (complex) $2\times2$ matrices, and such that

$$(f(X))^*\, X\, f(X) = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad (7.8)$$

for every $X\in\mathcal H_-$.


Proof. We show that a function $f$ with the required properties does not exist even on the subset $\mathcal H_{hu}$ of $\mathcal H_-$ consisting of all $2\times2$ matrices that are simultaneously Hermitian and unitary and have determinant $-1$. Arguing by contradiction, suppose that $f\colon\mathcal H_{hu}\to GL_2$ is a continuous function such that (7.8) holds for every $X\in\mathcal H_{hu}$. Taking determinants, we see that $|\det(f(X))| = 1$.

We claim that in fact $f(X)$ can be chosen unitary for every $X\in\mathcal H_{hu}$. To prove the claim, consider the singular value decomposition $f(X) = U_1DU_2$, where $U_1$, $U_2$ are unitary and $D = \begin{bmatrix} d_1 & 0 \\ 0 & d_2 \end{bmatrix}$ with $d_1, d_2 > 0$. In fact, $d_1d_2 = 1$. Substituting the singular value decomposition into (7.8), and denoting the Hermitian unitary matrix $U_1^*XU_1$ by $Y$, we obtain that $DYD$ is unitary. Writing out $(DYD)^2 = I$, a simple algebra yields $YD^2 = D^{-2}Y$, which in turn implies that either $d_1 = d_2 = 1$ (and then our claim is proved), or $Y = \begin{bmatrix} 0 & \bar\omega \\ \omega & 0 \end{bmatrix}$ for some $|\omega| = 1$. In the latter case a calculation shows that

$$\left(U_1\begin{bmatrix} x_1 & 0 \\ 0 & x_2 \end{bmatrix}U_2\right)^* X \left(U_1\begin{bmatrix} x_1 & 0 \\ 0 & x_2 \end{bmatrix}U_2\right) = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad (7.9)$$

for any positive number $x_1$ and $x_2 = x_1^{-1}$.

At this point we note that if $Z = V_1DV_2$ is a singular value decomposition of an invertible matrix $Z$ ($V_1$, $V_2$ unitary, $D$ positive diagonal), then the product $V_1V_2$ is uniquely defined by $Z$ and is a continuous function of $Z$. We indicate a proof. Let $Z = \sqrt{ZZ^*}\,V$ be the polar decomposition of $Z$, and let $\sqrt{ZZ^*} = WDW^*$, where $W$ is unitary and $D$ is positive diagonal. Then $Z = WD(W^*V)$ is a singular value decomposition, and $W\cdot W^*V = V = (ZZ^*)^{-1/2}Z$ is a continuous function of $Z$. The uniqueness of $V_1V_2$ follows from the uniqueness of the polar decomposition of $Z$.

Returning to the proof of Lemma 7.5, we let $\widetilde f(X) = U_1U_2$, which in view of the preceding paragraph is a continuous function of $f(X)$, and therefore also of $X\in\mathcal H_{hu}$. By (7.9), the equality $\widetilde f(X)^*\,X\,\widetilde f(X) = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ holds, and replacing $f$ by $\widetilde f$, we prove the claim. We assume therefore that $f(X)$ is unitary. The equation (7.8) is now a similarity as well.
Hence the first column of $f(X)$ is an eigenvector of $X$ corresponding to the eigenvalue 1. There is a homeomorphism between the set $\mathcal H_{hu}$ and the set $GR_1$ of one-dimensional subspaces of $\mathbb C^2$, with the standard gap topology on $GR_1$. The homeomorphism matches a matrix $X\in\mathcal H_{hu}$ with the (one-dimensional) eigenspace of $X$ corresponding to the eigenvalue 1. Using this homeomorphism, we now have a continuous nowhere zero function $g$ from $GR_1$ into $\mathbb C^2$ such that $g(M)\in M$ for every $M\in GR_1$. The value $g(M)$ is simply the first column of $f(X)$, when $X$ is identified with its eigenspace $M$ corresponding to the eigenvalue 1 via the homeomorphism. Write $M = \operatorname{Span}\begin{bmatrix} 1 \\ z \end{bmatrix}$, where $z\in\mathbb C$; then $g(M) = \begin{bmatrix} r(z) \\ z\,r(z) \end{bmatrix}$,

28

Rodman, Spitkovsky, Woerdeman

for some continuous nowhere  0  zero function r : C ! C . As z ! 1, the subspace M converges to Span 1 . Therefore, zr(z) ! z0 for some nonzero z0 2 C as

n 



o

=2 z ! 1. We now obtain that the index 21 arg r(Rei ) =0 is equal to ?1 for R > 0 suciently large, and (because r(0) 6= 0 and r(z) is continuous at 0) is equal to 0 for R > 0 suciently small. Since the index (as a function of the radius R) is locally constant for continuous nowhere zero complex valued functions, a continuous function r(z) with the indices as above cannot exist. 2 If we restrict consideration to a Hermitian function parameterized by an interval, then global continuity of canonical factorizations still holds:

Theorem 7.6. Let $G\in(APW^k)^{n\times n}$ be a continuous function (with respect to the Wiener norm) of a parameter $\tau\in[0,1]$: $G = G_\tau$. Assume that for every $\tau\in[0,1]$ the function $G_\tau$ is Hermitian valued and admits a left canonical $AP_S$ factorization. Then $G_\tau$ admits a left canonical $(APW_S)$ factorization

$$G_\tau(t) = A_{\tau,+}(t)\, J\, (A_{\tau,+}(t))^*, \qquad (7.10)$$

where $J$ is given by (5.1) and $A_{\tau,+}\in(APW^k_S)^{n\times n}$ is a continuous function of $\tau\in[0,1]$ (in the Wiener norm).

Again, an analogous result holds for the right canonical factorizations.

Proof. All matrix functions $G_\tau$ are invertible. Therefore, their signatures do not depend on $\tau$. Theorem 5.1 then implies that for each $\tau\in[0,1]$ the factorization (7.10) exists and $J$ there does not depend on $\tau$. According to the same theorem, for a fixed $\tau$, if (7.10) and $G_\tau(t) = B_{\tau,+}(t)J(B_{\tau,+}(t))^*$ are left canonical $(APW_S)$ factorizations, then $B_{\tau,+}(t) = A_{\tau,+}(t)S$, where $S$ is a constant $J$-unitary matrix (which depends on $\tau$, of course). Denote by $\mathcal U$ the multiplicative group of all $J$-unitary matrices $S\in\mathbb C^{n\times n}$, with $J$ given by (5.1). Using Theorem 7.4, find points $0 = \tau_0 < \tau_1 < \dots < \tau_q < \tau_{q+1} = 1$ so that for each closed interval $[\tau_{j-1},\tau_j]$ ($j = 1,\dots,q+1$) the function $G_\tau$ admits a left canonical $(APW_S)$ factorization

$$G_\tau(t) = A^{(j)}_{\tau,+}(t)\, J\, (A^{(j)}_{\tau,+}(t))^*, \qquad (7.11)$$

where the factor $A^{(j)}_{\tau,+}$ is continuous on $[\tau_{j-1},\tau_j]$. Because of the uniqueness of the factors in a canonical symmetric factorization, up to multiplication by $S\in\mathcal U$, the matrices $S_j = \big(A^{(j)}_{\tau_j,+}\big)^{-1}A^{(j+1)}_{\tau_j,+}$ ($j = 1,\dots,q$) are constant (i.e., independent of $t\in\mathbb R^k$) and belong to $\mathcal U$. Using the fact that $\mathcal U$ is connected (see, e.g., Theorem

Factorization and Toeplitz Operators

29

IV.3.1 in [24], where a topological description of this group is given), for each j = 1;    ; q, select a continuous path of matrices S (j ) 2 U , 2 [ j ?1; j ], such that S (jj)?1 = I and S (jj) = Sj . Now de ne A + = A( j+) S (j ) for 2 [ j ?1; j ], (j = 1;    ; q), and A + = A( q++1) for 2 [ q ; 1]. The construction of S (j ) shows that A + is a well-de ned continuous function of on the whole interval [0; 1], and the factorization (7.10) holds. 2 Theorem 7.6 can be extended to the case when a paracompact Hausdor contractible topological space X is used in place of the interval [0; 1]. Recall that a topological space is called contractible if all its homotopy groups are trivial. This extension of Theorem 7.6 is based on the triviality of the set of principal U -bundles on X, where U is the Lie group of J-unitary matrices. In turn, this set can be identi ed with the set of U -valued cocycles on X modulo equivalence, after passing to the limit with respect to re nements of open coverings of X. See the book [30], for example, for more details. An elementary proof of triviality of cocycles, for the case when X is a contractible compact of a nite dimensional Euclidean space, is found in [25] (Anhang).
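The gluing step in the proof selects, for each $j$, a continuous path in the connected group $U$ of $J$-unitary matrices joining $I$ to $S_j$. As an illustration (the hyperbolic-rotation path below is our own choice for $J = \mathrm{diag}(1, -1)$; the argument only uses the existence of some such path):

```python
import numpy as np

# Signature matrix J = diag(1, -1); U is the group of S with S* J S = J.
J = np.diag([1.0, -1.0])

def path(t):
    """Continuous path S(t) in U with S(0) = I; S(1) is a hyperbolic rotation."""
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[c, s],
                     [s, c]])

# Check that the whole path stays inside U, i.e., S(t)* J S(t) = J for all t;
# paths of this kind let the local factors be glued into a continuous family.
for t in np.linspace(0.0, 1.0, 11):
    S = path(t)
    assert np.allclose(S.conj().T @ J @ S, J)
print("path stays J-unitary")
```

Connectedness of $U$ (Theorem IV.3.1 in [24]) guarantees such a path for any endpoint in $U$; the specific parameterization used here is immaterial.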

Acknowledgements

The research of all three authors is partially supported by NSF Grant DMS 9800704. The research of LR is also partially supported by a Faculty Research Assignment Grant of the College of William and Mary. We thank Prof. C. Scheiderer of Universität Regensburg for elucidation of some material concerning principal bundles and cocycles.

References

[1] Babadzhanyan, R. G., Rabinovich, V. S., On factorization of almost periodic matrix functions, Differential and Integral Equations, and Complex Analysis, University Press, Elista (1986), 13-22 (in Russian).
[2] Bakonyi, M., Rodman, L., Spitkovsky, I. M., Woerdeman, H. J., Positive extensions of matrix functions of two variables with support in an infinite band, C. R. Acad. Sci. Paris Ser. I Math. 323(8) (1996), 859-863.
[3] Bakonyi, M., Rodman, L., Spitkovsky, I. M., Woerdeman, H. J., Positive matrix functions on the bitorus with prescribed coefficients in a band, J. Fourier Analysis and Applications 5 (1999), 789-812.
[4] Ball, J. A., Karlovich, Yu. I., Rodman, L., Spitkovsky, I. M., Sarason interpolation and Toeplitz corona theorem for almost periodic matrix functions, Integral Equations and Operator Theory 32 (1998), 243-281.
[5] Bart, H., Gohberg, I., Kaashoek, M. A., The coupling method for solving integral equations, Operator Theory: Advances and Applications 12 (1984), 39-73.
[6] Bastos, M. A., Karlovich, Yu. I., dos Santos, F. A., Tishin, P. M., The Corona theorem and the existence of canonical factorization of triangular AP-matrix functions, J. Math. Anal. Appl. 223 (1998), 494-522.

[7] Bastos, M. A., Karlovich, Yu. I., dos Santos, F. A., Tishin, P. M., The Corona theorem and the canonical factorization of triangular AP matrix functions - Effective criteria and explicit formulas, J. Math. Anal. Appl. 223 (1998), 523-550.
[8] Bastos, M. A., Karlovich, Yu. I., Spitkovsky, I. M., Tishin, P. M., On a new algorithm for almost periodic factorization, Operator Theory: Advances and Applications 103 (1998), 53-74.
[9] Böttcher, A., Silbermann, B., Analysis of Toeplitz Operators, Springer-Verlag, Berlin, Heidelberg, New York 1990.
[10] Budjanu, M. S., Gohberg, I. C., The factorization problem in abstract Banach algebras. I. Splitting algebras, Amer. Math. Soc. Transl. 110 (1977), 107-123.
[11] Clancey, K. F., Gohberg, I., Factorization of Matrix Functions and Singular Integral Operators, Birkhäuser, Basel and Boston 1981.
[12] Corduneanu, C., Almost Periodic Functions, J. Wiley & Sons 1968.
[13] Devinatz, A., Shinbrot, M., General Wiener-Hopf operators, Trans. Amer. Math. Soc. 145 (1969), 467-494.
[14] Erdos, J., On the structure of ordered real vector spaces, Publ. Math. Debrecen 4 (1956), 334-343.
[15] Fuchs, L., Partially Ordered Algebraic Systems, Pergamon Press, Oxford 1963.
[16] Gantmacher, F. R., The Theory of Matrices, volume 1, Chelsea Publishing Company, New York, N.Y. 1959.
[17] Gelfand, I., Raikov, D., Shilov, G., Commutative Normed Rings, Chelsea, Bronx, N.Y. 1964.
[18] Gohberg, I., Goldberg, S., Kaashoek, M. A., Classes of Linear Operators. I, Birkhäuser, Basel and Boston 1990.
[19] Gohberg, I., Goldberg, S., Kaashoek, M. A., Classes of Linear Operators. II, Birkhäuser, Basel and Boston 1993.
[20] Gohberg, I., Kaashoek, M. A. (eds.), Constructive Methods of Wiener-Hopf Interpolation, Birkhäuser, Basel and Boston 1986.
[21] Gohberg, I., Kaashoek, M. A., Block Toeplitz operators with rational symbols, Operator Theory: Advances and Applications 35 (1988), 385-440.
[22] Gohberg, I., Krein, M. G., Systems of integral equations on a half-line with kernel depending upon the difference of the arguments, Amer. Math. Soc. Transl. 14 (1960), 217-287.
[23] Gohberg, I., Krupnik, N., One-Dimensional Linear Singular Integral Equations. Introduction, volumes 1 and 2, Birkhäuser, Basel and Boston 1992.
[24] Gohberg, I., Lancaster, P., Rodman, L., Matrices and Indefinite Scalar Products, Birkhäuser, Basel and Boston 1983.
[25] Gohberg, I., Leiterer, J., Über Algebren stetiger Operatorfunktionen, Studia Mathematica 17 (1976), 1-26.
[26] Gustafson, K. E., Rao, D. K. M., Numerical Range. The Field of Values of Linear Operators and Matrices, Springer, New York 1997.
[27] Halmos, P., A Hilbert Space Problem Book, Van Nostrand, Princeton, N.J. 1967.


[28] Helson, H., Lowdenslager, D., Prediction theory and Fourier series in several variables. I, Acta Math. 99 (1958), 165-202.
[29] Hewitt, E., Ross, K. A., Abstract Harmonic Analysis, volume 1, Springer-Verlag, Berlin-Göttingen-Heidelberg 1963.
[30] Husemoller, D., Fibre Bundles, Springer-Verlag, New York-Berlin-Heidelberg 1994.
[31] Karlovich, Yu. I., On the Haseman problem, Demonstratio Math. 26 (1993), 581-595.
[32] Karlovich, Yu. I., Spitkovsky, I. M., Factorization of almost periodic matrix-valued functions and the Noether theory for certain classes of equations of convolution type, Mathematics of the USSR, Izvestiya 34 (1990), 281-316.
[33] Karlovich, Yu. I., Spitkovsky, I. M., (Semi)-Fredholmness of convolution operators on the spaces of Bessel potentials, Operator Theory: Advances and Applications 71 (1994), 122-152.
[34] Karlovich, Yu. I., Spitkovsky, I. M., Almost periodic factorization: An analogue of Chebotarev's algorithm, Contemporary Math. 189 (1995), 327-352.
[35] Karlovich, Yu. I., Spitkovsky, I. M., Factorization of almost periodic matrix functions, J. Math. Anal. Appl. 193 (1995), 209-232.
[36] Karlovich, Yu. I., Spitkovsky, I. M., Semi-Fredholm properties of certain singular integral operators, Operator Theory: Advances and Applications 90 (1996), 264-287.
[37] Keeler, D., Rodman, L., Spitkovsky, I. M., The numerical range of 3 × 3 matrices, Linear Algebra Appl. 252 (1997), 115-139.
[38] Krein, M. G., Integral equations on a half-line with kernel depending upon the difference of the arguments, Amer. Math. Soc. Transl., Series 2, 22 (1962), 163-288.
[39] Levitan, B. M., Almost Periodic Functions, GITTL, Moscow 1953 (in Russian).
[40] Levitan, B. M., Zhikov, V. V., Almost Periodic Functions and Differential Equations, Cambridge University Press 1982.
[41] Litvinchuk, L. S., Spitkovsky, I. M., Factorization of Measurable Matrix Functions, Birkhäuser Verlag, Basel and Boston 1987.
[42] Murphy, G. J., C*-Algebras and Operator Theory, Academic Press 1990.
[43] Pankov, A. A., Bounded and Almost Periodic Solutions of Nonlinear Differential Operator Equations, Kluwer, Dordrecht/Boston/London 1990.
[44] Perov, A. I., Kibenko, A. V., A theorem on the argument of an almost periodic function of several variables, Litovskii Matematicheskii Sbornik 7 (1967), 505-508 (in Russian).
[45] Quint, D., Rodman, L., Spitkovsky, I. M., New cases of almost periodic factorization of triangular matrix functions, Michigan Math. J. 45(1) (1998), 73-102.
[46] Rodman, L., Spitkovsky, I. M., Almost periodic factorization and corona theorem, Indiana Univ. Math. J. (1998), to appear.
[47] Rodman, L., Spitkovsky, I. M., Woerdeman, H. J., Carathéodory-Toeplitz and Nehari problems for matrix valued almost periodic functions, Trans. Amer. Math. Soc. 350 (1998), 2185-2227.
[48] Shmulyan, Yu. L., The Riemann problem with a positive definite matrix, Uspekhi Matem. Nauk 8(2) (1953), 143-145 (in Russian).

[49] Shmulyan, Yu. L., The Riemann problem with a Hermitian matrix, Uspekhi Matem. Nauk 9(4) (1954), 243-248 (in Russian).
[50] Simonenko, I. B., The Riemann boundary value problem for n pairs of functions with measurable coefficients and its application to the investigation of singular integrals in the spaces L^p with weight, Izv. Akad. Nauk SSSR, Ser. Mat. 28(2) (1964), 277-306 (in Russian).
[51] Spitkovsky, I. M., Stability of partial indices of the Riemann boundary value problem with a strictly nondegenerate matrix, Soviet Math. Dokl. 15 (1974), 1267-1271.
[52] Spitkovsky, I. M., On the factorization of almost periodic matrix functions, Math. Notes 45(5-6) (1989), 482-488.
[53] Spitkovsky, I. M., Woerdeman, H. J., The Carathéodory-Toeplitz problem for almost periodic functions, J. Functional Analysis 115(2) (1993), 281-293.

Department of Mathematics College of William and Mary Williamsburg, VA 23187-8795 USA

1991 Mathematics Subject Classification. Primary 47A68; Secondary 26B99, 43A60, 47A53, 47B35. Received: Date inserted by the Editor
