Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 4, Number 1, pp. 55–71 (2009) http://campus.mst.edu/adsa
Solution of a q-deformed Linear System Containing Zero Conditions at Infinity

Adil Huseynov
Ankara University, Department of Mathematics, 06100 Tandogan, Ankara, Turkey
[email protected]
Abstract. In this study, we investigate the eigenvalues and eigenvectors of a quadratic pencil of q-difference equations. The results obtained are then used to solve the corresponding system of differential equations with boundary and initial conditions and zero conditions at infinity.
AMS Subject Classifications: 39A10. Keywords: Quadratic pencil, q-difference equation, eigenvalue, eigenvector.
1 Introduction
We consider the system of linear second order differential equations
$$c_n \frac{d^2 u_n(t)}{dt^2} = r_n \frac{du_n(t)}{dt} + a_{n-1} u_{n-1}(t) + b_n u_n(t) + q a_n u_{n+1}(t), \quad n \in \{0, 1, \dots, N-1\},\ t \ge 0, \tag{1.1}$$
with the "boundary" conditions
$$u_{-1}(t) = 0, \quad u_N(t) + h u_{N-1}(t) = 0, \quad t \ge 0, \tag{1.2}$$
and the conditions (initial conditions at $t = 0$ and zero "end" conditions at $t = \infty$)
$$u_n(0) = f_n, \quad \lim_{t \to \infty} u_n(t) = 0, \quad n \in \{0, 1, \dots, N-1\}, \tag{1.3}$$

Received March 27, 2008; Accepted October 18, 2008. Communicated by Martin Bohner.
where $N \ge 2$ is a positive integer, $q > 0$ is a fixed real number, $\{u_n(t)\}_{n=-1}^{N}$ is a desired solution, $f_n$ ($n = 0, 1, \dots, N-1$) are given complex numbers, the coefficients $c_n, r_n, a_n, b_n$ of equation (1.1) and the number $h$ in the boundary conditions (1.2) are real, and
$$a_n \ne 0, \quad c_n > 0, \tag{1.4}$$
$$b_0 - q|a_0| \ge 0, \quad b_{N-1} - hqa_{N-1} - |a_{N-2}| \ge 0, \quad b_n - |a_{n-1}| - q|a_n| \ge 0, \quad n \in \{1, 2, \dots, N-2\}, \tag{1.5}$$
with strict inequality in at least one relation of (1.5).

If $\{u_n(t)\}_{n=-1}^{N}$ is a solution of problem (1.1)–(1.3), then taking the boundary conditions (1.2) into account, we have
$$\begin{aligned}
c_0 \frac{d^2 u_0(t)}{dt^2} &= r_0 \frac{du_0(t)}{dt} + b_0 u_0(t) + q a_0 u_1(t),\\
c_n \frac{d^2 u_n(t)}{dt^2} &= r_n \frac{du_n(t)}{dt} + a_{n-1} u_{n-1}(t) + b_n u_n(t) + q a_n u_{n+1}(t), \quad n = 1, 2, \dots, N-2,\\
c_{N-1} \frac{d^2 u_{N-1}(t)}{dt^2} &= r_{N-1} \frac{du_{N-1}(t)}{dt} + a_{N-2} u_{N-2}(t) + (b_{N-1} - hqa_{N-1}) u_{N-1}(t).
\end{aligned} \tag{1.6}$$
Consequently, finding a solution $\{u_n(t)\}_{n=-1}^{N}$ of problem (1.1)–(1.3) is equivalent to the problem of finding a solution $\{u_n(t)\}_{n=0}^{N-1}$ of system (1.6) that satisfies the conditions (1.3). Setting
$$u(t) = \begin{bmatrix} u_0(t)\\ u_1(t)\\ \vdots\\ u_{N-1}(t) \end{bmatrix}, \quad
f = \begin{bmatrix} f_0\\ f_1\\ \vdots\\ f_{N-1} \end{bmatrix}, \quad
C = \begin{bmatrix} c_0 & 0 & \cdots & 0\\ 0 & c_1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & c_{N-1} \end{bmatrix}, \quad
R = \begin{bmatrix} r_0 & 0 & \cdots & 0\\ 0 & r_1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & r_{N-1} \end{bmatrix},$$
$$J = \begin{bmatrix}
b_0 & qa_0 & 0 & \cdots & 0 & 0 & 0\\
a_0 & b_1 & qa_1 & \cdots & 0 & 0 & 0\\
0 & a_1 & b_2 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & b_{N-3} & qa_{N-3} & 0\\
0 & 0 & 0 & \cdots & a_{N-3} & b_{N-2} & qa_{N-2}\\
0 & 0 & 0 & \cdots & 0 & a_{N-2} & b_{N-1} - hqa_{N-1}
\end{bmatrix}, \tag{1.7}$$
we can write problem (1.6), (1.3) in the form
$$C \frac{d^2 u(t)}{dt^2} = R \frac{du(t)}{dt} + J u(t), \quad 0 \le t < \infty, \tag{1.8}$$
$$u(0) = f, \quad \lim_{t \to \infty} u(t) = 0. \tag{1.9}$$
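To make the matrix formulation (1.7), (1.8) concrete, here is a minimal Python sketch, not part of the original paper, that assembles $C$, $R$, and $J$; the values of $q$, $h$, $c_n$, $r_n$, $a_n$, $b_n$ below are hypothetical, chosen only so that the conditions (1.4), (1.5) hold.

```python
import numpy as np

# Hypothetical data chosen to satisfy (1.4), (1.5); not from the paper.
q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0])      # c_n > 0
r = np.array([0.3, -0.2, 0.1, 0.0])     # damping-type coefficients r_n
a = np.array([1.0, -0.5, 0.8, 0.6])     # a_n != 0 (a_{N-1} enters only via h)
b = np.array([2.0, 2.0, 2.0, 2.0])      # chosen so that (1.5) holds

C = np.diag(c)
R = np.diag(r)

# Tridiagonal J of (1.7): subdiagonal a_n, diagonal b_n, superdiagonal q*a_n,
# with the corner entry modified by the boundary condition (1.2).
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

# Quick check of the left-hand sides of (1.5) for this data.
lhs = np.array([b[0] - q*abs(a[0]),
                *[b[n] - abs(a[n-1]) - q*abs(a[n]) for n in range(1, N-1)],
                b[N-1] - h*q*a[N-1] - abs(a[N-2])])
assert np.all(lhs >= 0) and np.any(lhs > 0)
```

The corner entry $b_{N-1} - hqa_{N-1}$ is where the second boundary condition of (1.2) has been folded into the matrix, exactly as in the derivation of (1.6).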
Thus problem (1.1)–(1.3) is equivalent to problem (1.8), (1.9), i.e., if $\{u_n(t)\}_{n=-1}^{N}$ is a solution of problem (1.1)–(1.3), then the vector-function $u(t) = \{u_n(t)\}_{n=0}^{N-1}$ forms a solution of problem (1.8), (1.9); conversely, if $u(t) = \{u_n(t)\}_{n=0}^{N-1}$ is a solution of (1.8), (1.9), then $\{u_n(t)\}_{n=-1}^{N}$, where $u_{-1}(t) = 0$ and $u_N(t) = -h u_{N-1}(t)$, forms a solution of problem (1.1)–(1.3).

Our principal aim in this paper is to prove that problem (1.1)–(1.3) (or, equivalently, problem (1.8), (1.9)) has a unique solution and to investigate the structure of the solution, that is, to give an effective formula for it. To do so, we seek a nontrivial solution of equation (1.8) of the form
$$u(t) = e^{\lambda t} y, \tag{1.10}$$
where $\lambda$ is a complex constant and $y$ is a constant vector (an element of the space $\mathbb{C}^N$) which depends upon $\lambda$ and which we desire to be nontrivial, that is, not equal to $0$, the null vector. Substituting (1.10) into (1.8), we obtain
$$(\lambda^2 C - \lambda R - J)y = 0. \tag{1.11}$$
Definition 1.1. A complex number $\lambda_0$ is said to be an eigenvalue of equation (1.11) (or of the quadratic pencil $\lambda^2 C - \lambda R - J$) if there exists a nonzero vector $y \in \mathbb{C}^N$ satisfying equation (1.11) for $\lambda = \lambda_0$. The vector $y$ is called an eigenvector of equation (1.11), corresponding to the eigenvalue $\lambda_0$.

Thus the vector-function (1.10) is a nontrivial solution of equation (1.8) if and only if $\lambda$ is an eigenvalue and $y$ is a corresponding eigenvector of equation (1.11). Note that (1.11) is equivalent to the discrete boundary value problem
$$(\lambda^2 c_n - \lambda r_n - b_n) y_n - a_{n-1} y_{n-1} - q a_n y_{n+1} = 0, \quad n = 0, 1, \dots, N-1, \tag{1.12}$$
$$y_{-1} = 0, \quad y_N + h y_{N-1} = 0, \tag{1.13}$$
that is, if $\{y_n\}_{n=-1}^{N}$ is a solution of problem (1.12), (1.13), then the vector $y = \{y_n\}_{n=0}^{N-1}$ forms a solution of equation (1.11); conversely, if $y = \{y_n\}_{n=0}^{N-1}$ is a solution of (1.11), then $\{y_n\}_{n=-1}^{N}$, where $y_{-1} = 0$ and $y_N = -h y_{N-1}$, forms a solution of problem (1.12), (1.13).

Let us denote by $\lambda_1, \dots, \lambda_m$ all eigenvalues of equation (1.11), and by $y^{(1)}, \dots, y^{(m)}$ the corresponding eigenvectors. Then by linearity of equation (1.8) the vector-function
$$u(t) = \sum_{j=1}^{m} \alpha_j e^{\lambda_j t} y^{(j)} \tag{1.14}$$
will be a solution of equation (1.8), where $\alpha_1, \dots, \alpha_m$ are arbitrary constants independent of $t$. Now we must try to choose the constants $\alpha_j$ so that (1.14) will also satisfy the conditions in (1.9):
$$\sum_{j=1}^{m} \alpha_j y^{(j)} = f, \quad \lim_{t \to \infty} \sum_{j=1}^{m} \alpha_j e^{\lambda_j t} y^{(j)} = 0. \tag{1.15}$$
In this paper we show that under the conditions (1.4), (1.5), such a choice of constants $\alpha_j$ is possible if we pick out suitable eigenvalues $\lambda_j$. We also find formulas for the constants $\alpha_j$. To this end we have to examine the eigenvalue problem (1.11) in detail.

The paper is organized as follows. In Section 2 we present some needed facts about second order self-adjoint q-difference equations, because (1.12) is such an equation. In Section 3 reality of the eigenvalues and "orthogonality" of the eigenvectors are established. Section 4 contains further properties of the eigenvalues and eigenvectors. In Section 5 we find the form of a general solution of equation (1.8). Section 6 determines the number of the negative eigenvalues. Section 7 is devoted to the proof of the basisness of "half" of the eigenvectors. Finally, in Section 8, using the results obtained for the eigenvalue problem, we prove existence and uniqueness of the solution to problem (1.1)–(1.3) and present an effective formula for the solution.

Note that a comprehensive treatment of general matrix polynomials, to which our quadratic pencil $\lambda^2 C - \lambda R - J$ belongs, is given in [4]. However, due to the special structure (1.7) of the matrices $C$, $R$ and $J$, and the conditions (1.4), (1.5), we have succeeded in obtaining, in this paper, more specific results. Similar problems involving usual difference equations ($q = 1$) were investigated earlier in [5, 6].
2 Auxiliary Facts on Linear q-difference Equations
For a treatment of q-calculus, we refer the reader to [7]. The theorems given below in this section, related to second order self-adjoint q-difference equations, are similar to those for usual second order difference equations [9] and are not difficult to verify.

Definition 2.1. Let $q$ be a fixed real number such that $q \ne 0$ and $q \ne 1$. Let us set
$$q^{\mathbb{Z}} = \{q^n : n \in \mathbb{Z}\} = \{\dots, q^{-2}, q^{-1}, q^{0}, q^{1}, q^{2}, \dots\}.$$
Let $y(x)$ be a complex-valued function defined for $x \in q^{\mathbb{Z}}$. The "q-difference" operator $D_q$ is defined by
$$D_q y(x) = \frac{y(qx) - y(x)}{(q-1)x}, \quad x \in q^{\mathbb{Z}}. \tag{2.1}$$
The expression in (2.1) is called the q-derivative of the function $y$ at $x$.
Higher order q-derivatives are defined by repeated application of the operator $D_q$. For example, the second order q-derivative is
$$D_q^2 y(x) = D_q(D_q y(x)) = \frac{y(q^2 x) - (q+1)y(qx) + q\,y(x)}{q(q-1)^2 x^2}.$$
Fundamental properties of $D_q$ are given in the following theorem.

Theorem 2.2. Assume $y, z : q^{\mathbb{Z}} \to \mathbb{C}$ are functions. Then

(i) $D_q(y(x) + z(x)) = D_q y(x) + D_q z(x)$,

(ii) $D_q(c\,y(x)) = c\,D_q y(x)$ if $c$ is a constant,

(iii) $D_q(y(x)z(x)) = (D_q y(x))z(x) + y(qx)D_q z(x) = y(x)D_q z(x) + (D_q y(x))z(qx)$,

(iv) $D_q\left(\dfrac{y(x)}{z(x)}\right) = \dfrac{(D_q y(x))z(x) - y(x)D_q z(x)}{z(x)z(qx)}$ if $z(x)z(qx) \ne 0$.

Example 2.3. We have the following:

(i) $D_q c = 0$ if $c$ is a constant,

(ii) $D_q x = 1$,

(iii) $D_q x^2 = (q+1)x$,

(iv) $D_q x^3 = (q^2+q+1)x^2$,

(v) $D_q \ln x = \dfrac{\ln q}{(q-1)x}$.
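The following minimal Python sketch, not from the paper, implements the q-derivative (2.1) and spot-checks a few identities of Example 2.3; the value of $q$ and the test point are illustrative.

```python
import numpy as np

def Dq(y, x, q):
    """q-derivative of y at a point x of the lattice q^Z, per (2.1)."""
    return (y(q * x) - y(x)) / ((q - 1) * x)

q = 0.5                                   # illustrative value, q != 0, 1
x = q ** 3                                # a point of the lattice q^Z

assert np.isclose(Dq(lambda t: t**2, x, q), (q + 1) * x)            # Example 2.3 (iii)
assert np.isclose(Dq(lambda t: t**3, x, q), (q**2 + q + 1) * x**2)  # Example 2.3 (iv)
assert np.isclose(Dq(np.log, x, q), np.log(q) / ((q - 1) * x))      # Example 2.3 (v)
```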
Theorem 2.4. If $D_q y(x)$ is identically zero on $q^{\mathbb{Z}}$, then $y(x)$ is constant on $q^{\mathbb{Z}}$.

Let $p(x)$ and $r(x)$ be given functions defined on $q^{\mathbb{Z}}$ with $p(x) \ne 0$ for all $x \in q^{\mathbb{Z}}$. The second order self-adjoint linear homogeneous q-difference equation is defined to be
$$D_q\left[p\left(\frac{x}{q}\right) D_q y\left(\frac{x}{q}\right)\right] + r(x) y(x) = 0, \quad x \in q^{\mathbb{Z}}, \tag{2.2}$$
where $y(x)$ is a desired solution. We can also write equation (2.2) in the form
$$a\left(\frac{x}{q}\right) y\left(\frac{x}{q}\right) + b(x) y(x) + q a(x) y(qx) = 0, \quad x \in q^{\mathbb{Z}}, \tag{2.3}$$
where
$$a(x) = \frac{p(x)}{(q-1)^2 q^2 x^2}, \quad b(x) = r(x) - q a(x) - a\left(\frac{x}{q}\right).$$
Note that any equation written in the form of equation (2.3), where $a(x) \ne 0$ on $q^{\mathbb{Z}}$, can be written in the self-adjoint form of equation (2.2) by taking
$$p(x) = (q-1)^2 q^2 x^2 a(x), \quad r(x) = b(x) + q a(x) + a\left(\frac{x}{q}\right).$$
Setting $x = q^n$ ($n \in \mathbb{Z}$) in equation (2.3), we get
$$a(q^{n-1}) y(q^{n-1}) + b(q^n) y(q^n) + q a(q^n) y(q^{n+1}) = 0, \quad n \in \mathbb{Z}.$$
Finally, denoting $a_n = a(q^n)$, $b_n = b(q^n)$, $y_n = y(q^n)$ for $n \in \mathbb{Z}$, we can write the last equation in the form
$$a_{n-1} y_{n-1} + b_n y_n + q a_n y_{n+1} = 0, \quad n \in \mathbb{Z}. \tag{2.4}$$
Further we will deal with equations of the form (2.4), assuming that $q$ is any fixed positive real number. Note that equation (1.12) has the form of equation (2.4). Take a fixed integer $n_0 \in \mathbb{Z}$ and consider the initial conditions
$$y_{n_0} = c_0, \quad y_{n_0+1} = c_1, \tag{2.5}$$
where $c_0$ and $c_1$ are given numbers.

Theorem 2.5 (Existence and Uniqueness Theorem). The initial value problem (IVP) (2.4), (2.5) has exactly one solution $y = (y_n)$.

Corollary 2.6. Let $(y_n)$ be a solution of equation (2.4). If $y_n$ is zero for two successive integer values of $n$, then $y_n = 0$ for all $n \in \mathbb{Z}$.

Definition 2.7. Let $y = (y_n)$ and $z = (z_n)$ be solutions of equation (2.4). The Wronskian of these solutions is defined to be
$$W_n(y, z) = \begin{vmatrix} y_n & z_n \\ y_{n+1} & z_{n+1} \end{vmatrix} = y_n z_{n+1} - y_{n+1} z_n, \quad n \in \mathbb{Z}.$$

Theorem 2.8. If $y = (y_n)$ and $z = (z_n)$ are solutions of equation (2.4), then
$$W_n(y, z) = \frac{c}{q^n a_n}, \quad n \in \mathbb{Z},$$
where $c$ is a constant.

Corollary 2.9. If $y = (y_n)$ and $z = (z_n)$ are solutions of equation (2.4), then either $W_n(y, z) = 0$ for all $n \in \mathbb{Z}$ or $W_n(y, z) \ne 0$ for all $n \in \mathbb{Z}$.

Theorem 2.10. Any two solutions of equation (2.4) are linearly independent if and only if their Wronskian is different from zero.

Theorem 2.11. Equation (2.4) has two linearly independent solutions and every solution of equation (2.4) is a linear combination of these solutions.
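As a quick illustration of Theorem 2.8, the sketch below, with illustrative random coefficients not from the paper, propagates two solutions of the three-term recursion (2.4) forward from initial data of the type (2.5) and checks that $q^n a_n W_n(y, z)$ stays constant.

```python
import numpy as np

q = 2.0
M = 10                                   # lattice indices n = 0, 1, ..., M
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.5, M + 1)         # hypothetical a_n != 0
b = rng.uniform(-1.0, 1.0, M + 1)        # hypothetical b_n

def solve(y0, y1):
    """Propagate a_{n-1} y_{n-1} + b_n y_n + q a_n y_{n+1} = 0 forward in n."""
    y = np.empty(M + 1)
    y[0], y[1] = y0, y1                  # initial conditions of type (2.5)
    for n in range(1, M):
        y[n + 1] = -(a[n - 1] * y[n - 1] + b[n] * y[n]) / (q * a[n])
    return y

y = solve(1.0, 0.0)
z = solve(0.0, 1.0)

W = y[:-1] * z[1:] - y[1:] * z[:-1]      # Wronskian W_n for n = 0, ..., M-1
const = W * q ** np.arange(M) * a[:M]    # q^n a_n W_n should be constant
assert np.allclose(const, const[0])      # Theorem 2.8: W_n = c / (q^n a_n)
```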
3 Properties of the Eigenvalues and Eigenvectors
In this section we establish the reality of the eigenvalues and obtain several "orthogonality" relations for the eigenvectors of problem (1.11), assuming that the eigenvalues and eigenvectors exist. Existence of the eigenvalues and eigenvectors and their further properties will be established in the next section. Note that there are only a few works concerning spectral analysis of q-difference equations, see [1–3].

Consider the eigenvalue problem (1.11), where the matrices $C$, $R$, and $J$ have the form (1.7), and we assume throughout that the conditions (1.4), (1.5) are satisfied. We will investigate the equation (1.11) in the linear space
$$\mathbb{C}^N = \left\{ y = (y_n)_{n=0}^{N-1} : y_n \in \mathbb{C},\ n = 0, 1, \dots, N-1 \right\}$$
with the inner product
$$(y, z)_q = \sum_{n=0}^{N-1} q^n y_n \overline{z_n}, \tag{3.1}$$
where $\mathbb{C}$ denotes the set of complex numbers and the bar over a number denotes complex conjugation. The following two lemmas are not difficult to prove.

Lemma 3.1. The matrices $C$, $R$, and $J$ are self-adjoint with respect to the inner product (3.1), that is, each of them satisfies the relation
$$(Ty, z)_q = (y, Tz)_q, \quad \forall y, z \in \mathbb{C}^N.$$

Lemma 3.2. The matrices $C$ and $J$ are positive, that is,
$$(Cy, y)_q > 0, \quad (Jy, y)_q > 0, \quad \forall y \in \mathbb{C}^N,\ y \ne 0.$$

Note that the positiveness of $J$ follows from the condition (1.5) by virtue of the following equality: for any real vector $y = \{y_n\}_{n=0}^{N-1} \in \mathbb{R}^N$,
$$\begin{aligned}
(Jy, y)_q = {}& (b_0 - q|a_0|)y_0^2 + q^{N-1}(b_{N-1} - hqa_{N-1} - |a_{N-2}|)y_{N-1}^2\\
& + \sum_{n=1}^{N-2} q^n (b_n - |a_{n-1}| - q|a_n|) y_n^2 + \sum_{n=1}^{N-1} q^n |a_{n-1}| (y_{n-1} \pm y_n)^2,
\end{aligned}$$
where the $\pm$ sign in $(y_{n-1} \pm y_n)^2$ is taken to be that of $a_{n-1}$.

Theorem 3.3. Each eigenvalue $\lambda$ of equation (1.11) is real, nonzero, and has the same sign as
$$2\lambda(Cy, y)_q - (Ry, y)_q \ne 0, \tag{3.2}$$
where $y$ is an eigenvector corresponding to $\lambda$.
Proof. Let $\lambda \in \mathbb{C}$ be an eigenvalue of equation (1.11) and $y = \{y_n\}_{n=0}^{N-1} \ne 0$ be a corresponding eigenvector. By forming the inner product of both sides of equation (1.11) with the vector $y$, we get
$$\lambda^2 (Cy, y)_q - \lambda (Ry, y)_q - (Jy, y)_q = 0. \tag{3.3}$$
Since, by Lemma 3.2, $(Jy, y)_q > 0$, we get from (3.3) that $\lambda \ne 0$. Also, since $(Cy, y)_q > 0$, we have
$$\lambda = \frac{(Ry, y)_q \pm \sqrt{(Ry, y)_q^2 + 4(Cy, y)_q (Jy, y)_q}}{2(Cy, y)_q}. \tag{3.4}$$
Since $(Ry, y)_q$ is real by Lemma 3.1 and $(Ry, y)_q^2 + 4(Cy, y)_q (Jy, y)_q > 0$, we get from (3.4) that $\lambda$ is real. Further, the product of $\lambda$ with $2\lambda(Cy, y)_q - (Ry, y)_q$ is, by (3.3),
$$\lambda \left[2\lambda(Cy, y)_q - (Ry, y)_q\right] = \lambda^2 (Cy, y)_q + (Jy, y)_q > 0,$$
so that (3.2) holds and the sign of $\lambda$ is the same as the sign of the expression in (3.2). The theorem is proved.

Theorem 3.4. The eigenvectors $y$ and $z$ of equation (1.11) corresponding to the distinct eigenvalues $\lambda$ and $\mu$, respectively, satisfy the "orthogonality" relations
$$(\lambda + \mu)(Cy, z)_q - (Ry, z)_q = 0, \tag{3.5}$$
$$\lambda\mu(Cy, z)_q + (Jy, z)_q = 0, \tag{3.6}$$
$$\lambda\mu(Ry, z)_q + (\lambda + \mu)(Jy, z)_q = 0. \tag{3.7}$$

Proof. Multiplying, in the sense of the inner product, the first of the equalities
$$\lambda^2 Cy - \lambda Ry - Jy = 0, \quad \mu^2 Cz - \mu Rz - Jz = 0$$
from the right by $z$ and the second one from the left by $y$, and using the reality of the eigenvalues and Lemma 3.1, we get
$$\lambda^2 (Cy, z)_q - \lambda (Ry, z)_q - (Jy, z)_q = 0, \quad \mu^2 (Cy, z)_q - \mu (Ry, z)_q - (Jy, z)_q = 0.$$
Eliminating from these two equations in turn $(Jy, z)_q$, $(Ry, z)_q$, and $(Cy, z)_q$, we obtain respectively the "orthogonality" relations (3.5), (3.6), and (3.7).
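The orthogonality relations (3.5)–(3.7) can be checked numerically. The sketch below uses the same hypothetical coefficients as the earlier sketch; the eigenpairs of the pencil are obtained from a companion-type linearization that anticipates the matrix $A(\varepsilon)$ of Section 6 with $\varepsilon = 1$. This is an illustration only, not part of the paper.

```python
import numpy as np

q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0]); r = np.array([0.3, -0.2, 0.1, 0.0])
a = np.array([1.0, -0.5, 0.8, 0.6]); b = np.array([2.0, 2.0, 2.0, 2.0])
C, R = np.diag(c), np.diag(r)
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

# Linearization of the pencil lambda^2 C - lambda R - J (cf. Section 6, eps = 1):
# A [y; lambda y] = lambda [y; lambda y].
A = np.block([[np.zeros((N, N)), np.eye(N)],
              [np.linalg.solve(C, J), np.linalg.solve(C, R)]])
lam, V = np.linalg.eig(A)

w = q ** np.arange(N)                    # weights of the inner product (3.1)
def ip(y, z):
    return np.sum(w * y * np.conj(z))    # (y, z)_q

# Take the two most widely separated eigenvalues and their eigenvectors.
idx = np.argsort(lam.real)
y, z = V[:N, idx[0]], V[:N, idx[-1]]
l, m = lam[idx[0]], lam[idx[-1]]

assert np.isclose((l + m) * ip(C @ y, z) - ip(R @ y, z), 0)          # (3.5)
assert np.isclose(l * m * ip(C @ y, z) + ip(J @ y, z), 0)            # (3.6)
assert np.isclose(l * m * ip(R @ y, z) + (l + m) * ip(J @ y, z), 0)  # (3.7)
```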
4 Existence of Eigenvalues
To investigate the existence and further properties of the eigenvalues and eigenvectors of equation (1.11), we note that equation (1.11) is equivalent to the problem of finding a vector $\{y_n\}_{n=-1}^{N}$ that satisfies the boundary value problem (1.12), (1.13). We define the solution $y = \{\varphi_n(\lambda)\}_{n=-1}^{N}$ of equation (1.12) satisfying the initial conditions
$$\varphi_{-1}(\lambda) = 0, \quad \varphi_0(\lambda) = 1. \tag{4.1}$$
Using (4.1), we can recursively find $\varphi_n(\lambda)$, $n = 1, 2, \dots, N$, from the equation (1.12), and $\varphi_n(\lambda)$ is a polynomial in $\lambda$ of degree $2n$. In fact, we can find that
$$\varphi_1(\lambda) = \frac{1}{q a_0}(\lambda^2 c_0 - \lambda r_0 - b_0),$$
$$\varphi_2(\lambda) = \frac{1}{q^2 a_0 a_1}(\lambda^2 c_0 - \lambda r_0 - b_0)(\lambda^2 c_1 - \lambda r_1 - b_1) - \frac{a_0}{q a_1},$$
$$\varphi_n(\lambda) = \frac{c_0 c_1 \cdots c_{n-1}}{q^n a_0 a_1 \cdots a_{n-1}} \lambda^{2n} + \dots.$$
It is easy to see that every solution $\{y_n(\lambda)\}_{n=-1}^{N}$ of equation (1.12) satisfying the initial condition $y_{-1} = 0$ is equal to $\{\varphi_n(\lambda)\}_{n=-1}^{N}$ up to a constant factor:
$$y_n(\lambda) = \alpha \varphi_n(\lambda), \quad n = -1, 0, 1, \dots, N, \tag{4.2}$$
with $\alpha = y_0(\lambda)$. Indeed, both sides of (4.2) are solutions of equation (1.12) and they coincide for $n = -1$ and $n = 0$. Hence (4.2) holds by the uniqueness of solution. We get
$$y_N(\lambda) + h y_{N-1}(\lambda) = \alpha[\varphi_N(\lambda) + h\varphi_{N-1}(\lambda)].$$
Consequently, setting
$$\chi(\lambda) = \varphi_N(\lambda) + h\varphi_{N-1}(\lambda), \tag{4.3}$$
we have the following lemma.

Lemma 4.1. The eigenvalues of equation (1.11) are roots of the recursively constructed polynomial $\chi(\lambda)$. To each eigenvalue $\lambda_0$ corresponds, up to a constant factor, a single eigenvector which can be taken to be the vector $\{\varphi_n(\lambda_0)\}_{n=0}^{N-1}$.

The function $\chi(\lambda)$ is called the characteristic function of problem (1.12), (1.13) (or of (1.11)). By Lemma 4.1 the eigenvalues of equation (1.11) coincide with the roots of the function $\chi(\lambda)$. On the other hand, the eigenvalues of equation (1.11) coincide, obviously, with the roots of the polynomial $\det(\lambda^2 C - \lambda R - J)$. Since both $\chi(\lambda)$ and $\det(\lambda^2 C - \lambda R - J)$ are polynomials in $\lambda$ of degree $2N$, it follows that they differ by at most a constant factor from each other. This factor is easily found: it suffices to compare the coefficients of $\lambda^{2N}$ in these polynomials. This yields
$$\det(\lambda^2 C - \lambda R - J) = q^N a_0 a_1 \cdots a_{N-1} \chi(\lambda).$$
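A minimal sketch of the recursive construction of $\varphi_n(\lambda)$ and $\chi(\lambda)$, together with a spot check of the determinant identity above at one sample point, is given below; the coefficients are the same hypothetical values used in the earlier sketches and are not from the paper.

```python
import numpy as np

q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0]); r = np.array([0.3, -0.2, 0.1, 0.0])
a = np.array([1.0, -0.5, 0.8, 0.6]); b = np.array([2.0, 2.0, 2.0, 2.0])

def phi(lam):
    """Values [phi_{-1}, phi_0, ..., phi_N] at lam, built from (1.12), (4.1)."""
    p = [0.0, 1.0]                               # phi_{-1} = 0, phi_0 = 1
    for n in range(N):
        a_prev = a[n - 1] if n >= 1 else 0.0     # the a_{-1} phi_{-1} term vanishes
        p.append(((lam**2 * c[n] - lam * r[n] - b[n]) * p[-1] - a_prev * p[-2])
                 / (q * a[n]))
    return p

def chi(lam):
    p = phi(lam)
    return p[N + 1] + h * p[N]                   # chi = phi_N + h phi_{N-1}   (4.3)

C, R = np.diag(c), np.diag(r)
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

lam0 = 1.7                                       # arbitrary sample point
lhs = np.linalg.det(lam0**2 * C - lam0 * R - J)
rhs = q**N * np.prod(a) * chi(lam0)
assert np.isclose(lhs, rhs)
```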
Lemma 4.2. There exist $2N$ distinct eigenvalues.

Proof. Since $\varphi_n(\lambda)$ for each $n$ is a polynomial of degree $2n$, by (4.3) $\chi(\lambda)$ is a polynomial of degree $2N$. Therefore $\chi(\lambda)$ has $2N$ roots. Now we show that the roots of $\chi(\lambda)$ are simple. Hence the statement of the lemma will follow. Differentiating the equation
$$(\lambda^2 c_n - \lambda r_n - b_n)\varphi_n(\lambda) - a_{n-1}\varphi_{n-1}(\lambda) - q a_n \varphi_{n+1}(\lambda) = 0$$
with respect to $\lambda$, we get
$$(2\lambda c_n - r_n)\varphi_n(\lambda) + (\lambda^2 c_n - \lambda r_n - b_n)\dot\varphi_n(\lambda) - a_{n-1}\dot\varphi_{n-1}(\lambda) - q a_n \dot\varphi_{n+1}(\lambda) = 0,$$
where the dot over a function indicates the derivative with respect to $\lambda$. Multiplying the first equation by $\dot\varphi_n(\lambda)$ and the second one by $\varphi_n(\lambda)$, and subtracting the left and right members of the resulting equations, we get
$$(2\lambda c_n - r_n)\varphi_n^2(\lambda) + a_{n-1}\left[\varphi_{n-1}(\lambda)\dot\varphi_n(\lambda) - \dot\varphi_{n-1}(\lambda)\varphi_n(\lambda)\right] - q a_n\left[\varphi_n(\lambda)\dot\varphi_{n+1}(\lambda) - \dot\varphi_n(\lambda)\varphi_{n+1}(\lambda)\right] = 0.$$
Summing the last equation multiplied by $q^n$ over the values $n = 0, 1, \dots, m$ ($m \le N-1$) and using the initial conditions (4.1), we get
$$q^{m+1} a_m \left[\varphi_m(\lambda)\dot\varphi_{m+1}(\lambda) - \dot\varphi_m(\lambda)\varphi_{m+1}(\lambda)\right] = \sum_{n=0}^{m} q^n (2\lambda c_n - r_n)\varphi_n^2(\lambda). \tag{4.4}$$
Let us assume $\chi(\lambda_0) = 0$. In particular, setting in (4.4) $m = N-1$ and $\lambda = \lambda_0$, and using the equality $\varphi_N(\lambda_0) = -h\varphi_{N-1}(\lambda_0)$, which follows from the assumption $\chi(\lambda_0) = 0$, we have
$$q^N a_{N-1} \dot\chi(\lambda_0)\varphi_{N-1}(\lambda_0) = \sum_{n=0}^{N-1} q^n (2\lambda_0 c_n - r_n)\varphi_n^2(\lambda_0). \tag{4.5}$$
The right-hand side of (4.5) is not zero by virtue of Theorem 3.3. Consequently $\dot\chi(\lambda_0) \ne 0$, that is, the root $\lambda_0$ of the function $\chi(\lambda)$ is simple.

We can summarize the results obtained above in the following theorem.

Theorem 4.3. The equation (1.11) has precisely $2N$ real distinct eigenvalues $\lambda_j$ ($j = 1, \dots, 2N$). These eigenvalues are different from zero. To each eigenvalue $\lambda_j$ there corresponds, up to a constant factor, a single eigenvector, which can be taken to be $\varphi^{(j)} = \{\varphi_n(\lambda_j)\}_{n=0}^{N-1}$, where $\{\varphi_n(\lambda)\}_{n=-1}^{N}$ is the solution of equation (1.12) satisfying the initial conditions (4.1).
5 General Solution of the Problem (1.1), (1.2)
Now we determine a general form of solutions of problem (1.1), (1.2).

Lemma 5.1. If $\varphi^{(1)}, \dots, \varphi^{(2N)}$ are eigenvectors of equation (1.11), corresponding to the eigenvalues $\lambda_1, \dots, \lambda_{2N}$, respectively, then the vectors $\Phi_j = [\varphi^{(j)}, \lambda_j\varphi^{(j)}] \in \mathbb{C}^N \times \mathbb{C}^N$ ($j = 1, \dots, 2N$) form a basis for $\mathbb{C}^N \times \mathbb{C}^N$.

Proof. Consider the linear space $\mathbb{C}^N \times \mathbb{C}^N$ of vectors denoted by $[y, z]$, where $y, z \in \mathbb{C}^N$. Define on this space the bilinear form by the formula
$$\langle [y, z], [u, v] \rangle = (Cy, v)_q + (Cz, u)_q - (Ry, u)_q, \tag{5.1}$$
where $(\cdot,\cdot)_q$ in the right-hand side denotes the inner product in $\mathbb{C}^N$ defined by the formula (3.1). Note that the formula (5.1) does not define an inner product in the space $\mathbb{C}^N \times \mathbb{C}^N$, because for nonzero vectors $[y, z]$ the numbers $\langle [y, z], [y, z] \rangle$ are not necessarily positive (they may also be zero or negative). In view of formula (3.5) of Theorem 3.4, the vectors $\Phi_j = [\varphi^{(j)}, \lambda_j\varphi^{(j)}]$, $j = 1, \dots, 2N$, are orthogonal with respect to the bilinear form $\langle\cdot,\cdot\rangle$:
$$\langle \Phi_j, \Phi_l \rangle = 0, \quad j \ne l. \tag{5.2}$$
Further, it is remarkable that, by virtue of Theorem 3.3, we have
$$\rho_j = \langle \Phi_j, \Phi_j \rangle = 2\lambda_j (C\varphi^{(j)}, \varphi^{(j)})_q - (R\varphi^{(j)}, \varphi^{(j)})_q \ne 0 \tag{5.3}$$
and the sign of $\rho_j$ coincides with the sign of $\lambda_j$:
$$\operatorname{sign}\rho_j = \operatorname{sign}\lambda_j. \tag{5.4}$$
From (5.2) and (5.3) it follows that $\Phi_1, \dots, \Phi_{2N}$ are linearly independent in the space $\mathbb{C}^N \times \mathbb{C}^N$. Since their number is equal to $2N$ and $\dim(\mathbb{C}^N \times \mathbb{C}^N) = 2N$, they form a basis for the space $\mathbb{C}^N \times \mathbb{C}^N$. The lemma is proved.

By Lemma 5.1, for an arbitrary vector $[f, g]$ that belongs to $\mathbb{C}^N \times \mathbb{C}^N$, we have the unique expansion
$$[f, g] = \sum_{j=1}^{2N} \beta_j \Phi_j, \quad \text{i.e.,} \quad f = \sum_{j=1}^{2N} \beta_j \varphi^{(j)}, \quad g = \sum_{j=1}^{2N} \beta_j \lambda_j \varphi^{(j)}, \tag{5.5}$$
and
$$\beta_j = \frac{1}{\rho_j} \langle [f, g], \Phi_j \rangle = \frac{1}{\rho_j}\left\{\lambda_j (Cf, \varphi^{(j)})_q + (Cg, \varphi^{(j)})_q - (Rf, \varphi^{(j)})_q\right\}, \tag{5.6}$$
where $\rho_j$ is defined by formula (5.3).
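The following sketch, using the same hypothetical coefficients as before and an arbitrary pair $[f, g]$, computes the coefficients $\beta_j$ by formula (5.6) and verifies the expansion (5.5); the eigenpairs are obtained from the linearization introduced later in Section 6 (with $\varepsilon = 1$). It is an illustration only.

```python
import numpy as np

q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0]); r = np.array([0.3, -0.2, 0.1, 0.0])
a = np.array([1.0, -0.5, 0.8, 0.6]); b = np.array([2.0, 2.0, 2.0, 2.0])
C, R = np.diag(c), np.diag(r)
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

A = np.block([[np.zeros((N, N)), np.eye(N)],
              [np.linalg.solve(C, J), np.linalg.solve(C, R)]])
lam, V = np.linalg.eig(A)
Phi = V[:N, :]                                  # columns are eigenvectors phi^(j)

w = q ** np.arange(N)                           # weights of the inner product (3.1)
def ip(y, z):
    return np.sum(w * y * np.conj(z))

f = np.array([1.0, -2.0, 0.5, 3.0])             # arbitrary data [f, g]
g = np.array([0.0, 1.0, 1.0, -1.0])

beta = []
for j in range(2 * N):
    phi_j = Phi[:, j]
    rho_j = 2 * lam[j] * ip(C @ phi_j, phi_j) - ip(R @ phi_j, phi_j)    # (5.3)
    beta.append((lam[j] * ip(C @ f, phi_j) + ip(C @ g, phi_j)
                 - ip(R @ f, phi_j)) / rho_j)                           # (5.6)
beta = np.array(beta)

assert np.allclose(Phi @ beta, f)               # first expansion in (5.5)
assert np.allclose(Phi @ (beta * lam), g)       # second expansion in (5.5)
```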
Remark 5.2. To prove Lemma 5.1 we could also use the orthogonality relation (3.6) or (3.7). In the case of (3.6) we must use on $\mathbb{C}^N \times \mathbb{C}^N$ the bilinear form
$$\langle [y, z], [u, v] \rangle = (Jy, u)_q + (Cz, v)_q, \tag{5.7}$$
and in the case of (3.7)
$$\langle [y, z], [u, v] \rangle = (Jy, v)_q + (Jz, u)_q + (Rz, v)_q.$$
The bilinear form (5.7), in contrast to the bilinear form (5.1), is an inner product in $\mathbb{C}^N \times \mathbb{C}^N$. But in connection with the formulas (5.3) and (5.6) for $\rho_j$ and $\beta_j$, the bilinear form (5.1) has more advantages, since both the matrices $C$ and $R$ presented in it are diagonal.

Theorem 5.3. The general solution $\{u_n(t)\}_{n=-1}^{N}$ of problem (1.1), (1.2) has the form
$$u_n(t) = \sum_{j=1}^{2N} \gamma_j e^{\lambda_j t} \varphi_n(\lambda_j), \quad n = -1, 0, 1, \dots, N, \tag{5.8}$$
where $\gamma_1, \dots, \gamma_{2N}$ are arbitrary constants.

Proof. From the definitions of $\lambda_j$ and $\varphi_n(\lambda)$ it follows that (5.8) satisfies (1.1), (1.2) for arbitrary constants $\gamma_1, \dots, \gamma_{2N}$. Conversely, assume that $\{u_n(t)\}_{n=-1}^{N}$ is an arbitrary solution of (1.1), (1.2). Define the vectors $f = \{f_n\}_{n=0}^{N-1}$ and $g = \{g_n\}_{n=0}^{N-1}$ by setting
$$u_n(0) = f_n, \quad \frac{du_n(0)}{dt} = g_n, \quad n \in \{0, 1, \dots, N-1\}, \tag{5.9}$$
then determine the constants $\beta_1, \dots, \beta_{2N}$ using the expansion (5.5) of the vector $[f, g]$, and put
$$v_n(t) = \sum_{j=1}^{2N} \beta_j e^{\lambda_j t} \varphi_n(\lambda_j), \quad n = -1, 0, 1, \dots, N.$$
Then the vector-function $v(t) = \{v_n(t)\}_{n=0}^{N-1}$ satisfies the initial value problem
$$C\frac{d^2 v(t)}{dt^2} = R\frac{dv(t)}{dt} + Jv(t), \quad 0 \le t < \infty, \tag{5.10}$$
$$v(0) = f, \quad \frac{dv(0)}{dt} = g. \tag{5.11}$$
On the other hand, $u(t) = \{u_n(t)\}_{n=0}^{N-1}$ also satisfies (5.10), (5.11), and it is well known that a problem of the kind (5.10), (5.11), which can be written in the form of a first order linear system, has a unique solution. Hence $u(t) = v(t)$, so that we have (5.8) with $\gamma_j = \beta_j$ ($j = 1, \dots, 2N$).
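As a quick numerical check of Theorem 5.3, the sketch below (same hypothetical coefficients as before, arbitrary constants $\gamma_j$, eigenpairs from the linearization of Section 6) verifies that the superposition (5.8) satisfies equation (1.8) at a sample time.

```python
import numpy as np

q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0]); r = np.array([0.3, -0.2, 0.1, 0.0])
a = np.array([1.0, -0.5, 0.8, 0.6]); b = np.array([2.0, 2.0, 2.0, 2.0])
C, R = np.diag(c), np.diag(r)
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

A = np.block([[np.zeros((N, N)), np.eye(N)],
              [np.linalg.solve(C, J), np.linalg.solve(C, R)]])
lam, V = np.linalg.eig(A)
Phi = V[:N, :]                                   # eigenvectors phi^(j)

gamma = np.linspace(-1.0, 1.0, 2 * N)            # arbitrary constants gamma_j
t0 = 0.7
u0 = Phi @ (gamma * np.exp(lam * t0))            # u(t0) from (5.8)
u1 = Phi @ (gamma * lam * np.exp(lam * t0))      # u'(t0)
u2 = Phi @ (gamma * lam**2 * np.exp(lam * t0))   # u''(t0)
assert np.allclose(C @ u2, R @ u1 + J @ u0)      # equation (1.8) holds
```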
6 Number of the Negative Eigenvalues
Above in Section 4 we showed that under the conditions (1.4), (1.5) equation (1.11) has exactly $2N$ eigenvalues $\lambda_j$ ($j = 1, \dots, 2N$) which are real, distinct, and different from zero. We will assume that these eigenvalues are arranged in increasing order:
$$\lambda_1 < \lambda_2 < \dots < \lambda_{2N}. \tag{6.1}$$
Note also that since $a_n \ne 0$ ($n = 0, 1, \dots, N-1$) and $q > 0$, it follows from the condition (1.5) that
$$b_0 > 0,\ b_1 > 0,\ \dots,\ b_{N-2} > 0,\ b_{N-1} - hqa_{N-1} > 0. \tag{6.2}$$

Theorem 6.1. Half of the eigenvalues of equation (1.11) are negative and the other half positive, that is, under the assumption (6.1) we have
$$\lambda_j < 0 \text{ for } j = 1, \dots, N \quad \text{and} \quad \lambda_j > 0 \text{ for } j = N+1, \dots, 2N.$$

Proof. Consider the auxiliary eigenvalue problem
$$[\lambda^2 C - \lambda\varepsilon R - J(\varepsilon)]y = 0 \tag{6.3}$$
depending on a parameter $\varepsilon \in [0, 1]$, where the matrix $J(\varepsilon)$ is obtained from the matrix $J$ by means of multiplication of all its nondiagonal elements by $\varepsilon$. It is obvious that the analog of the conditions (1.4) and (1.5) is fulfilled for all $\varepsilon \in (0, 1]$. The eigenvalues of equation (6.3) are nonzero for all $\varepsilon \in [0, 1]$ and coincide with the roots of the polynomial
$$\det[\lambda^2 C - \lambda\varepsilon R - J(\varepsilon)]. \tag{6.4}$$
For each $\varepsilon \in (0, 1]$, the roots of the polynomial (6.4) are distinct by virtue of Lemma 4.2, which is applicable to the equation (6.3). Denote them by
$$\lambda_1(\varepsilon) < \lambda_2(\varepsilon) < \dots < \lambda_{2N}(\varepsilon).$$
The equation (6.3) is equivalent to the pair of equations
$$z = \lambda y, \quad \varepsilon C^{-1}Rz + C^{-1}J(\varepsilon)y = \lambda z.$$
Therefore, the eigenvalues of equation (6.3) coincide with the eigenvalues of the matrix
$$A(\varepsilon) = \begin{bmatrix} 0 & I \\ C^{-1}J(\varepsilon) & \varepsilon C^{-1}R \end{bmatrix}$$
of dimension $2N \times 2N$. Since $\lambda_j(\varepsilon)$ ($j = 1, \dots, 2N$) are the eigenvalues of the matrix $A(\varepsilon)$, which is continuous in $\varepsilon \in [0, 1]$, they are continuous functions of $\varepsilon$ (see [8, Chapter 2, §5]). Note that at the point $\varepsilon = 0$ we do not state that $\lambda_1(\varepsilon), \lambda_2(\varepsilon), \dots, \lambda_{2N}(\varepsilon)$ are distinct.

Now we show that for all values of $\varepsilon \in (0, 1]$ half of the $\lambda_j(\varepsilon)$ ($j = 1, \dots, 2N$) are negative and the other half positive:
$$\lambda_j(\varepsilon) < 0 \ (j = 1, \dots, N), \quad \lambda_j(\varepsilon) > 0 \ (j = N+1, \dots, 2N).$$
Hence, in particular, for $\varepsilon = 1$ the statement of the theorem will follow. Assume the contrary: let for some value of $\varepsilon \in (0, 1]$
$$\lambda_j(\varepsilon) < 0 \ (j = 1, \dots, k), \quad \lambda_j(\varepsilon) > 0 \ (j = k+1, \dots, 2N), \tag{6.5}$$
where $0 \le k \le 2N$ and $k \ne N$ (for $k = 0$ all the eigenvalues $\lambda_j(\varepsilon)$ are understood to be positive, and for $k = 2N$ to be negative). Since the $\lambda_j(\varepsilon)$ ($j = 1, \dots, 2N$) are different from zero and are distinct and continuous functions for all values of $\varepsilon \in (0, 1]$, the inequalities (6.5) are valid for all values of $\varepsilon \in (0, 1]$ with the same value of $k$. Passing in (6.5) to the limit as $\varepsilon \to 0$, we get
$$\lambda_j(0) \le 0 \ (j = 1, \dots, k), \quad \lambda_j(0) \ge 0 \ (j = k+1, \dots, 2N).$$
But this is a contradiction, since for $\varepsilon = 0$ the roots of the polynomial (6.4) are the numbers
$$\pm\sqrt{\frac{b_j}{c_j}} \ (j = 0, 1, \dots, N-2), \quad \pm\sqrt{\frac{b_{N-1} - hqa_{N-1}}{c_{N-1}}},$$
half of which are negative and the other half positive by virtue of (6.2). Thus the theorem is proved.
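The homotopy used in the proof can be illustrated numerically. The sketch below, with the same hypothetical data as before, builds $A(\varepsilon)$ for several values of $\varepsilon$, checks that exactly $N$ eigenvalues are negative, and checks that at $\varepsilon = 0$ the eigenvalues are the square roots indicated above.

```python
import numpy as np

q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0]); r = np.array([0.3, -0.2, 0.1, 0.0])
a = np.array([1.0, -0.5, 0.8, 0.6]); b = np.array([2.0, 2.0, 2.0, 2.0])
C, R = np.diag(c), np.diag(r)
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

def A(eps):
    """Linearization of the pencil lambda^2 C - lambda*eps*R - J(eps)."""
    J_eps = np.diag(np.diag(J)) + eps * (J - np.diag(np.diag(J)))
    return np.block([[np.zeros((N, N)), np.eye(N)],
                     [np.linalg.solve(C, J_eps), eps * np.linalg.solve(C, R)]])

# For every eps in (0, 1] exactly N eigenvalues are negative (Theorem 6.1).
for eps in (1e-3, 0.25, 0.5, 1.0):
    lam = np.linalg.eigvals(A(eps))
    assert np.sum(lam.real < 0) == N

# At eps = 0 the eigenvalues are +- sqrt(diag(J)/c), as in the proof.
lam0 = np.linalg.eigvals(A(0.0))
expected = np.concatenate([np.sqrt(np.diag(J) / c), -np.sqrt(np.diag(J) / c)])
assert np.allclose(np.sort(lam0.real), np.sort(expected))
```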
7 Basisness of "Half" of the Eigenvectors

Theorem 7.1. The eigenvectors of equation (1.11) corresponding to the negative (or positive) eigenvalues form a basis for $\mathbb{C}^N$.

Proof. We may assume, by Theorem 6.1, that
$$\lambda_1 < \dots < \lambda_N < 0 < \lambda_{N+1} < \dots < \lambda_{2N}.$$
To see that the eigenvectors $\varphi^{(1)}, \dots, \varphi^{(N)}$ corresponding to the eigenvalues $\lambda_1, \dots, \lambda_N$, respectively, form a basis for $\mathbb{C}^N$, let $z = \{z_n\}_{n=0}^{N-1} \in \mathbb{C}^N$ and
$$(z, \varphi^{(j)})_q = 0, \quad j = 1, \dots, N. \tag{7.1}$$
It suffices for us to establish that then $z = 0$, the null vector. Applying (5.5) and (5.6) to the vectors $f = 0$ and $g = C^{-1}z$, we have
$$0 = \sum_{j=1}^{2N} \beta_j \varphi^{(j)}, \quad C^{-1}z = \sum_{j=1}^{2N} \beta_j \lambda_j \varphi^{(j)}, \tag{7.2}$$
where
$$\beta_j = \frac{1}{\rho_j} (z, \varphi^{(j)})_q, \quad j = 1, \dots, 2N. \tag{7.3}$$
From (7.3), in view of (7.1), we have $\beta_j = 0$, $j = 1, \dots, N$, and therefore (7.2) takes the form
$$0 = \sum_{j=N+1}^{2N} \beta_j \varphi^{(j)}, \quad C^{-1}z = \sum_{j=N+1}^{2N} \beta_j \lambda_j \varphi^{(j)}.$$
Multiplying the last equalities by $z$ in the sense of the inner product in $\mathbb{C}^N$, we get
$$0 = \sum_{j=N+1}^{2N} \beta_j (\varphi^{(j)}, z)_q = \sum_{j=N+1}^{2N} \rho_j |\beta_j|^2, \tag{7.4}$$
$$(C^{-1}z, z)_q = \sum_{j=N+1}^{2N} \beta_j \lambda_j (\varphi^{(j)}, z)_q = \sum_{j=N+1}^{2N} \rho_j \lambda_j |\beta_j|^2. \tag{7.5}$$
By virtue of (5.4) we have $\rho_j > 0$, $j = N+1, \dots, 2N$. Consequently, from (7.4) it follows that $\beta_j = 0$, $j = N+1, \dots, 2N$, and from (7.5) we then get $(C^{-1}z, z)_q = 0$. Hence $z = 0$, since
$$(C^{-1}z, z)_q = \sum_{n=0}^{N-1} \frac{q^n}{c_n} |z_n|^2$$
and $q > 0$, $c_n > 0$. The theorem is proved.
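A short numerical illustration of Theorem 7.1 with the same hypothetical coefficients as before: the eigenvectors belonging to the negative eigenvalues form a nonsingular $N \times N$ matrix, hence a basis of $\mathbb{C}^N$.

```python
import numpy as np

q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0]); r = np.array([0.3, -0.2, 0.1, 0.0])
a = np.array([1.0, -0.5, 0.8, 0.6]); b = np.array([2.0, 2.0, 2.0, 2.0])
C, R = np.diag(c), np.diag(r)
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

A = np.block([[np.zeros((N, N)), np.eye(N)],
              [np.linalg.solve(C, J), np.linalg.solve(C, R)]])
lam, V = np.linalg.eig(A)

neg = lam.real < 0                           # exactly N of them (Theorem 6.1)
assert np.sum(neg) == N
Phi_neg = V[:N, neg]                         # eigenvectors phi^(j) with lambda_j < 0
assert np.linalg.matrix_rank(Phi_neg) == N   # they form a basis of C^N (Theorem 7.1)
```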
8 Solution of the Problem (1.1)–(1.3)
We now give an application of the results obtained above.

Theorem 8.1. The problem (1.1)–(1.3) has a unique solution $\{u_n(t)\}_{n=-1}^{N}$, which is representable in the form
$$u_n(t) = \sum_{j=1}^{N} \alpha_j e^{\lambda_j t} \varphi_n(\lambda_j), \quad n = -1, 0, 1, \dots, N, \tag{8.1}$$
in which $\lambda_1, \dots, \lambda_N$ are the negative eigenvalues of equation (1.11), and $\{\varphi_n(\lambda)\}_{n=-1}^{N}$ is the solution of equation (1.12) satisfying the initial conditions (4.1). Further, the coefficients $\alpha_1, \dots, \alpha_N$ are defined with the help of the unique expansion
$$f = \sum_{j=1}^{N} \alpha_j \varphi^{(j)}, \tag{8.2}$$
in which $\varphi^{(j)} = \{\varphi_n(\lambda_j)\}_{n=0}^{N-1}$.
Proof. By Theorem 7.1, the vectors $\varphi^{(1)}, \dots, \varphi^{(N)}$ form a basis of $\mathbb{C}^N$. Therefore, for the vector $f = \{f_n\}_{n=0}^{N-1} \in \mathbb{C}^N$ appearing in the first condition of (1.3), there exist numbers $\alpha_1, \dots, \alpha_N$ uniquely determined by the expansion (8.2). Next we define $u_n(t)$ by formula (8.1). Then $\{u_n(t)\}_{n=-1}^{N}$ is a solution of problem (1.1), (1.2) satisfying the first condition in (1.3) in view of (8.2), and the second one in view of the fact that $\lambda_1, \dots, \lambda_N$ are negative.

To prove the uniqueness of the solution, we note that by Theorem 5.3 the general solution of problem (1.1), (1.2) has the representation
$$u_n(t) = \sum_{j=1}^{2N} \gamma_j e^{\lambda_j t} \varphi_n(\lambda_j), \quad n = -1, 0, 1, \dots, N, \tag{8.3}$$
where $\gamma_1, \dots, \gamma_{2N}$ are arbitrary constants. Let (8.3) satisfy the conditions (1.3). Then from the second condition in (1.3) it follows that $\gamma_{N+1} = \dots = \gamma_{2N} = 0$, since $\lambda_j > 0$ for $j = N+1, \dots, 2N$ and the $\lambda_j$ are distinct. Further, setting $t = 0$ in (8.3) and using the first condition in (1.3) together with (8.2), we get $\gamma_j = \alpha_j$, $j = 1, \dots, N$. The theorem is proved.

Remark 8.2. The expansion (8.2), written in the form
$$f_n = \alpha_1 \varphi_n(\lambda_1) + \alpha_2 \varphi_n(\lambda_2) + \dots + \alpha_N \varphi_n(\lambda_N), \quad n = 0, 1, \dots, N-1,$$
gives us a linear nonhomogeneous system of equations in the unknowns $\alpha_1, \alpha_2, \dots, \alpha_N$.

Remark 8.3. As follows from (8.1), for the solution of problem (1.1)–(1.3) we have
$$u_n(t) = O(e^{-\delta t}), \quad n = -1, 0, 1, \dots, N, \quad \text{as } t \to \infty,$$
where $\delta = \min\{|\lambda_1|, \dots, |\lambda_N|\}.$
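Putting the pieces together, the following end-to-end sketch, using the hypothetical data of the earlier sketches and an arbitrary initial vector $f$, computes the negative eigenvalues, solves the linear system of Remark 8.2 for $\alpha_1, \dots, \alpha_N$, assembles the solution (8.1), and checks the initial condition, the differential equation (1.8), and the decay estimate of Remark 8.3. It is an illustration only, not part of the paper.

```python
import numpy as np

q, h, N = 0.5, 1.0, 4
c = np.array([1.0, 2.0, 1.5, 1.0]); r = np.array([0.3, -0.2, 0.1, 0.0])
a = np.array([1.0, -0.5, 0.8, 0.6]); b = np.array([2.0, 2.0, 2.0, 2.0])
C, R = np.diag(c), np.diag(r)
J = np.diag(b) + np.diag(a[:N-1], -1) + q * np.diag(a[:N-1], 1)
J[N-1, N-1] = b[N-1] - h * q * a[N-1]

A = np.block([[np.zeros((N, N)), np.eye(N)],
              [np.linalg.solve(C, J), np.linalg.solve(C, R)]])
lam, V = np.linalg.eig(A)
neg = lam.real < 0
assert np.sum(neg) == N
lam_neg, Phi_neg = lam[neg].real, V[:N, neg].real   # N negative eigenpairs

f = np.array([1.0, -2.0, 0.5, 3.0])                 # given initial data f_n
alpha = np.linalg.solve(Phi_neg, f)                 # linear system of Remark 8.2

def u(t):
    """The solution (8.1), restricted to n = 0, ..., N-1."""
    return Phi_neg @ (alpha * np.exp(lam_neg * t))

assert np.allclose(u(0.0), f)                       # first condition in (1.3)

t0 = 1.0                                            # residual of (1.8) at t0
u1 = Phi_neg @ (alpha * lam_neg * np.exp(lam_neg * t0))
u2 = Phi_neg @ (alpha * lam_neg**2 * np.exp(lam_neg * t0))
assert np.allclose(C @ u2, R @ u1 + J @ u(t0))

delta = -np.max(lam_neg)                            # delta of Remark 8.3
t1 = 30.0                                           # decay: |u_n(t)| = O(e^{-delta t})
assert np.max(np.abs(u(t1))) <= 1.001 * np.sum(np.abs(alpha)) * np.exp(-delta * t1)
```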
References

[1] Murat Adıvar and Martin Bohner. Spectral analysis of q-difference equations with spectral singularities. Math. Comput. Modelling, 43(7-8):695–703, 2006.

[2] Murat Adıvar and Martin Bohner. Spectrum and principal vectors of second order q-difference equations. Indian J. Math., 48(1):17–33, 2006.

[3] Martin Bohner and Thomas Hudson. Euler-type boundary value problems in quantum calculus. Int. J. Appl. Math. Stat., 9(J07):19–23, 2007.

[4] I. Gohberg, P. Lancaster, and L. Rodman. Matrix polynomials. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1982. Computer Science and Applied Mathematics.

[5] G. Sh. Guseinov and G. Oturanç. On the theory of a certain class of quadratic pencils of matrices and its applications. Turkish J. Math., 21(4):409–422, 1997.

[6] Gusein Guseinov and Mehmet Terziler. The eigenvalue problem for a matrix of a special form and its applications. Linear Algebra Appl., 259:329–346, 1997.

[7] Victor Kac and Pokman Cheung. Quantum calculus. Universitext. Springer-Verlag, New York, 2002.

[8] Tosio Kato. Perturbation theory for linear operators. Springer-Verlag, Berlin, second edition, 1976. Grundlehren der Mathematischen Wissenschaften, Band 132.

[9] Walter G. Kelley and Allan C. Peterson. Difference equations. Harcourt/Academic Press, San Diego, CA, second edition, 2001. An introduction with applications.