Lp-convergence of subdivision schemes: joint spectral radius versus restricted spectral radius

Maria Charina, Costanza Conti, Thomas Sauer
Abstract. In this paper we compare two approaches for investigating the Lp-convergence of multivariate scalar and vector subdivision schemes. The first approach is based on the so-called joint spectral radius, the second one on what we call the restricted spectral radius, a quantity that can be used to characterize the restricted contractivity of the corresponding difference subdivision schemes. We show that the two approaches are not only equivalent, but that in fact the joint spectral radius and the restricted spectral radius are equal for 1 ≤ p ≤ ∞. One of the advantages of working with the restricted spectral radius is that the restricted p-norms can be computed by means of classical optimization methods, the restricted ∞-norm even by means of linear programming. This also allows for an estimation of the restricted spectral radii.
§1. Introduction

Refinable functions and refinable function vectors have been a topic of active research in recent years and are connected to many different areas, for example signal and image processing. The existence and regularity of such functions, mostly in terms of the refinement parameters, has been investigated in a multitude of papers, some of which we list in the references; due to the amount of available material, that list can by no means be complete or exhaustive. Subdivision schemes are computational means to recursively generate discrete functions defined on denser and denser grids in R^s where, in the context of vector subdivision, the values of these functions are allowed to belong to R^n, n ≥ 1. At each step of the subdivision recursion the new
values are obtained simply as local averages of the previously computed values on the coarser grid. The averaging coefficients form the so-called refinement mask. If they are (real) numbers, the scheme is said to be a scalar subdivision scheme acting on scalar sequences. If the averaging coefficients are matrices, the scheme is called either a matrix subdivision scheme or a vector subdivision scheme, as it now maps vector valued sequences to vector valued sequences. If a subdivision scheme converges, then the limit function automatically satisfies the refinement equation whose coefficients are the mask coefficients.

One approach for checking the Lp-convergence of a subdivision scheme is to consider the so-called joint spectral radius (p-JSR) introduced in [13] and further investigated in [4, 5, 9, 10]. Another possibility to characterize Lp-convergence is based on the restricted contractivity of the corresponding difference subdivision scheme; see [8] for details on scalar and [2, 16] on vector multivariate difference subdivision schemes. The use of restricted contractivity was first discussed in [2] for the case of L∞-convergence, where the so-called restricted spectral radius (p-RSR) was defined for p = ∞.

In this paper, we first recall in Section 2 some basic facts about multivariate vector subdivision schemes. Then, in Section 3, we extend the definition of the p-RSR to the case 1 ≤ p < ∞ and show that p-JSR and p-RSR coincide for 1 ≤ p ≤ ∞ in both the scalar and the vector multivariate case. Thus the two approaches can be used according to one's preference. One advantage of studying the p-RSR is that for p = ∞ the restricted norm can be computed easily by means of linear programming, while for 1 ≤ p < ∞ the p-RSR approach allows us to compute the restricted p-norm by solving a concave minimization problem. Due to the compact support of the mask, the optimization problems at hand depend on only finitely many parameters. For p = ∞, the resulting linear optimization problem can be solved, for example, by using MATLAB optimization routines. In the case 1 ≤ p < ∞, the optimization problem permits, for example, the application of an outer approximation method as stated in [12, Algorithm 3.3], though at present we have not yet gathered numerical experience in this respect. Finally, in Sections 4 and 5 we discuss one of the advantages of the RSR approach by applying it to an example of an L∞-convergent vector multivariate subdivision scheme.
§2. Notation and Background

Let ℓ^{n×k}(Z^s) denote the linear space of all sequences of n×k matrices indexed by Z^s. In addition, let ℓ_p^{n×k}(Z^s) denote the Banach space of sequences of n×k matrices with finite p-norm

$$ \|C\|_p := \Big( \sum_{\alpha\in\mathbb{Z}^s} |C(\alpha)|_p^p \Big)^{1/p}, \quad 1 \le p < \infty, \qquad \|C\|_\infty := \sup_{\alpha\in\mathbb{Z}^s} |C(\alpha)|_\infty, $$

where |C(α)|_p is the p-operator norm on R^n if k > 1 and the p-vector norm if k = 1. For notational simplicity, we write ℓ_p^n(Z^s) for ℓ_p^{n×1}(Z^s) and denote vector sequences by lowercase letters. Moreover, let ℓ_0^{n×k}(Z^s) ⊂ ℓ_p^{n×k}(Z^s) be the space of finitely supported matrix valued sequences. A specific example of such a sequence is the scalar delta sequence δ ∈ ℓ_0(Z^s) defined by

$$ \delta(\alpha) := \begin{cases} 1, & \alpha = 0, \\ 0, & \alpha \in \mathbb{Z}^s \setminus \{0\}. \end{cases} $$

For a finite matrix sequence A ∈ ℓ_0^{n×k}(Z^s) we define the associated symbol as the Laurent polynomial

$$ A^*(z) := \sum_{\alpha\in\mathbb{Z}^s} A(\alpha)\, z^\alpha, \qquad z \in (\mathbb{C}\setminus\{0\})^s, $$
where, in the usual multi-index notation, z^α = z_1^{α_1} ··· z_s^{α_s}. Let A ∈ ℓ_0^{n×n}(Z^s) be a finitely supported sequence, shifted so that supp A ⊆ [0,N]^s for some N ∈ N and 0 ∈ supp A, i.e., A(0) ≠ 0. The subdivision operator S_A : ℓ^n(Z^s) → ℓ^n(Z^s) associated to the mask A is defined by

$$ S_A c(\alpha) := \sum_{\beta\in\mathbb{Z}^s} A(\alpha - 2\beta)\, c(\beta), \qquad \alpha \in \mathbb{Z}^s. $$
The subdivision scheme then corresponds to a repeated application of S_A to an initial vector sequence c ∈ ℓ^n(Z^s), yielding

$$ c^{(0)} := c, \qquad c^{(r+1)} := S_A c^{(r)}, \qquad r \ge 0. \tag{1} $$
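For concreteness, one step of (1) in the simplest scalar, univariate setting (s = 1, n = 1) can be sketched in MATLAB as follows; the function name and the storage of mask and data as plain row vectors are our illustration, not part of the paper:

    % One subdivision step: S_a c(alpha) = sum_beta a(alpha - 2*beta) c(beta),
    % realized as convolution of the mask with the 2-upsampled data.
    function cnew = subdivision_step(a, c)
        u = zeros(1, 2*numel(c) - 1);   % upsample c by a factor of 2
        u(1:2:end) = c;                 % old values sit on the even indices
        cnew = conv(a, u);              % local averaging with the mask a
    end

Iterating this function r times produces S_a^r c, whose values are attached to the grid 2^{-r} Z.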
The iteration in (1), when applied to a starting matrix sequence C ∈ ℓ^{n×k}(Z^s), is understood as the application of S_A to the columns of C^{(r)} separately, storing the results accordingly in C^{(r+1)}. The subdivision scheme (1) or, for short, the subdivision scheme S_A, is said to be L∞-convergent if for any vector sequence c ∈ ℓ_∞^n(Z^s) there exists a uniformly continuous vector field f_c : R^s → R^n such that

$$ \lim_{r\to\infty}\ \sup_{\alpha\in\mathbb{Z}^s} \big| f_c(2^{-r}\alpha) - S_A^r c(\alpha) \big|_\infty = 0 \tag{2} $$

and f_c ≠ 0 for some initial data c. The latter condition is imposed to exclude trivialities such as the always convergent subdivision scheme based on A = 0.
In the Lp-setting, 1 ≤ p < ∞, condition (2) is replaced by

$$ \lim_{r\to\infty} 2^{-rs/p}\, \big\| \mu_r(f_c) - S_A^r c \big\|_p = 0, $$

cf. [7], where the mean value operator at level r ∈ N is defined for f_c ∈ L_p(R^s) as

$$ \mu_r(f_c)(\alpha) := 2^{rs} \int_{2^{-r}(\alpha+[0,1]^s)} f_c(t)\, dt, \qquad \alpha \in \mathbb{Z}^s. $$
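For s = 1 the mean value operator is just the normalized average over a dyadic interval; a one-line MATLAB sketch using the built-in quadrature routine integral (our illustration, with f a vectorized function handle for one component of f_c):

    % mu_r(f)(alpha) = 2^r * integral of f over 2^{-r}[alpha, alpha+1]
    mu = @(f, r, alpha) 2^r * integral(f, 2^(-r)*alpha, 2^(-r)*(alpha + 1));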
If the subdivision scheme converges, then the limit function always has the form

$$ f_c = F * c = \sum_{\alpha\in\mathbb{Z}^s} F(\cdot - \alpha)\, c(\alpha) $$

with the basic refinable function F ∈ L_p^{n×n}(R^s) being the limit function obtained from the initial sequence δI_n. F is called refinable since it satisfies the refinement equation

$$ F = \sum_{\alpha\in\mathbb{Z}^s} F(2\,\cdot - \alpha)\, A(\alpha) $$
relative to A. Next, for all ε ∈ {0,1}^s we define the n×n matrices

$$ A_\varepsilon := \sum_{\alpha\in\mathbb{Z}^s} A(\varepsilon - 2\alpha), \qquad \varepsilon \in \{0,1\}^s, $$

and their joint 1-eigenspace

$$ E_A := \{ v \in \mathbb{R}^n : A_\varepsilon v = v,\ \varepsilon \in \{0,1\}^s \}. $$

We also set m := dim E_A and recall that for a convergent subdivision scheme we always have 1 ≤ m ≤ n. For simplicity we assume that

$$ E_A = \operatorname{span}\{e_1, \dots, e_m\}, \qquad 1 \le m \le n. $$
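In the scalar bivariate case (n = 1, s = 2) the numbers A_ε are simply the coset sums of the mask; a small MATLAB sketch (our illustration; the mask is assumed to be stored as a matrix a with a(1,1) = a((0,0))):

    % Coset sums A_eps = sum_alpha a(eps - 2*alpha) of a bivariate scalar
    % mask supported on [0,N]^2. For a convergent scalar scheme all four
    % sums equal 1, so that E_A = span{1} and m = 1.
    function A_eps = coset_sums(a)
        A_eps = zeros(2, 2);
        for e1 = 0:1
            for e2 = 0:1
                A_eps(e1+1, e2+1) = sum(sum(a(1+e1:2:end, 1+e2:2:end)));
            end
        end
    end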
For C ∈ ℓ^{n×k}(Z^s) and D ∈ ℓ^{k×n}(Z^s) we define the j-th column of ∇_ℓ C, 1 ≤ ℓ ≤ s, 1 ≤ j ≤ k, as

$$ [\nabla_\ell C]_{\cdot,j} := \begin{bmatrix} C_{1,j}(\cdot - e_\ell) - C_{1,j} \\ \vdots \\ C_{m,j}(\cdot - e_\ell) - C_{m,j} \\ C_{m+1,j} \\ \vdots \\ C_{n,j} \end{bmatrix}, $$
and ∇_ℓ D := (∇_ℓ D^T)^T, respectively. The backwards difference operator ∇ : ℓ^{n×k}(Z^s) → ℓ^{ns×k}(Z^s) is then defined by

$$ \nabla := \begin{bmatrix} \nabla_1 \\ \vdots \\ \nabla_s \end{bmatrix}. \tag{3} $$

Note that the structure of the operator ∇ depends on m which, however, we do not indicate explicitly to keep the notation uncluttered. It has been shown, even for more general dilation matrices, in [16] that if m = dim E_A satisfies 1 ≤ m ≤ n, then there exists a mask B ∈ ℓ_0^{ns×ns}(Z^s) such that

$$ \nabla S_A = S_B \nabla. \tag{4} $$

Remark 1. The factorization in (4) is obtained by applying to A^*(z) componentwise the process of reduction, which is a generalization of division with remainder, cf. [6]. In particular, this algorithm yields a solution B of (4) such that supp B ⊂ supp A. Thus, in the rest of the paper we may assume that also supp B ⊆ [0,N]^s.

We continue by recalling the definition of the p-norm joint spectral radius of a finite collection of matrices given, for example, in [13]. For ε ∈ {0,1}^s, we denote by A_ε the linear operators on ℓ_0^{1×n}(Z^s) defined as

$$ A_\varepsilon v(\alpha) := \sum_{\beta\in\mathbb{Z}^s} v(\beta)\, A(\varepsilon + 2\alpha - \beta). \tag{5} $$
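In the univariate case (s = 1) the action (5) is easy to code directly; the following MATLAB sketch is our illustration (the cell-array storage and the function name are assumptions), with the mask A{k+1} = A(k), k = 0, ..., N, and v a cell array of 1×n row vectors supported on K = {0, ..., N}:

    % Apply the operator A_eps of (5): w(alpha) = sum_beta v(beta) A(e + 2*alpha - beta),
    % where e is the digit eps in {0,1}. By Lemma 1 below, K = [0,N] is
    % admissible, so the result w is again supported on K.
    function w = A_eps_apply(A, v, e)
        N = numel(A) - 1;
        w = cell(1, N + 1);
        for alpha = 0:N
            acc = zeros(size(v{1}));
            for beta = 0:N
                k = e + 2*alpha - beta;          % index into the mask
                if k >= 0 && k <= N
                    acc = acc + v{beta+1} * A{k+1};
                end
            end
            w{alpha+1} = acc;
        end
    end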
Denote by 𝒜 the finite collection of linear operators

$$ \mathcal{A} := \{ A_\varepsilon : \varepsilon \in \{0,1\}^s \}. $$

For a finite set K ⊂ Z^s we denote by ℓ^{n×k}(K) ⊂ ℓ_0^{n×k}(Z^s) the linear space of all sequences supported in K.

Definition 1. A finite set K ⊂ Z^s is called admissible for 𝒜, or 𝒜-admissible for short, if ℓ^{1×n}(K) is invariant under all A_ε, ε ∈ {0,1}^s, i.e., if v ∈ ℓ^{1×n}(K) implies that A_ε v ∈ ℓ^{1×n}(K) for any ε ∈ {0,1}^s.

A block matrix representation of A_ε : ℓ^{1×n}(K) → ℓ^{1×n}(K) is of the form

$$ \big[ A^T(\varepsilon + 2\alpha - \beta) \big]_{\alpha,\beta\in K}, \qquad \varepsilon \in \{0,1\}^s. $$

For a finite dimensional subspace V ⊂ ℓ^{1×n}(K) denote by

$$ \mathcal{A}|_V := \{ A_\varepsilon|_V : \varepsilon \in \{0,1\}^s \} $$
the finite collection of matrix representations A_ε|_V of A_ε with respect to a basis of V. For a positive integer r consider the r-th Cartesian power of 𝒜|_V,

$$ \mathcal{A}^r|_V = \big\{ (A_{\varepsilon_1}|_V, \dots, A_{\varepsilon_r}|_V) : \varepsilon_1, \dots, \varepsilon_r \in \{0,1\}^s \big\}. $$

For each 1 ≤ p ≤ ∞, the joint spectral radius (p-JSR) of 𝒜|_V is defined by

$$ \rho_p(\mathcal{A}|_V) := \begin{cases} \displaystyle\lim_{r\to\infty} \Big( \sum_{\varepsilon_1,\dots,\varepsilon_r\in\{0,1\}^s} \big| A_{\varepsilon_1}|_V \cdots A_{\varepsilon_r}|_V \big|^p \Big)^{1/rp}, & p < \infty, \\[1ex] \displaystyle\lim_{r\to\infty}\ \max_{\varepsilon_1,\dots,\varepsilon_r\in\{0,1\}^s} \big| A_{\varepsilon_1}|_V \cdots A_{\varepsilon_r}|_V \big|^{1/r}, & p = \infty. \end{cases} $$
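Truncating the p = ∞ limit at a finite r gives a simple numerical approximation; a MATLAB sketch (our illustration, assuming the matrix representations A_ε|_V are available as a cell array M; the cost grows exponentially with r, so this is feasible only for small r):

    % Approximate rho_inf(A|V) by max |A_{eps_1} ... A_{eps_r}|^(1/r)
    % over all products of length r.
    function val = jsr_inf_approx(M, r)
        P = {eye(size(M{1}, 1))};              % products of length 0
        for k = 1:r
            Q = cell(1, numel(P) * numel(M));
            t = 0;
            for i = 1:numel(P)
                for j = 1:numel(M)
                    t = t + 1;
                    Q{t} = P{i} * M{j};        % extend each product by one factor
                end
            end
            P = Q;
        end
        val = max(cellfun(@(X) norm(X, inf), P))^(1/r);
    end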
The limits in the definition above exist and are independent of the choice of the matrix norm |·|. Moreover, it has been shown in [1] that

$$ \rho_\infty(\mathcal{A}|_V) = \sup\big\{ |\lambda|^{1/r} : r > 0,\ \lambda \in \sigma\big( A_{\varepsilon_1}|_V \cdots A_{\varepsilon_r}|_V \big),\ \varepsilon_j \in \{0,1\}^s \big\}, $$

where σ(M) denotes the spectrum, i.e., the set of all eigenvalues, of the matrix M. In the following we write A ∼ B to indicate that the expression A is equivalent to the expression B, i.e., that there exist positive constants C_1, C_2 such that C_1 A ≤ B ≤ C_2 A. Furthermore, for a ∈ R we use the symbol [a]_s to denote the point (a, ..., a) ∈ R^s with all components equal to a.

§3. Convergence of Multivariate Subdivision Schemes

We start by extending the definitions of the restricted ∞-norm and the restricted spectral radius (∞-RSR) given in [2] to the case 1 ≤ p < ∞. For the difference subdivision operator S_B in (4) we consider the iterated operator S_B^r associated with the iterated mask B^{(r)}, where B^{(1)} := B and B^{(r)} := S_B B^{(r-1)}, r > 1. We define the restricted p-norm as

$$ \| S_B^r|_\nabla \|_p := \sup\bigg\{ \frac{\| S_B^r \nabla c \|_p}{\| \nabla c \|_p} : c \in \ell_p^n(\mathbb{Z}^s),\ \nabla c \ne 0 \bigg\} $$

and the restricted p-spectral radius (p-RSR) as

$$ \rho_p(S_B|_\nabla) := \lim_{r\to\infty} \| S_B^r|_\nabla \|_p^{1/r}. $$
We have chosen the ambiguous notation S_B|_∇ deliberately, as the above restricted norm and spectral radius can also be seen as the norm and spectral radius of S_B
restricted to the difference space ∇ℓ_p^n(Z^s) ⊂ ℓ_p^{ns}(Z^s). That the range of S_B restricted to this space is again the difference space, so that iterations and the spectral radius are well defined, is an immediate consequence of (4). It is, however, worthwhile to point out that the restricted norm and spectral radius are well defined even if (4) is not valid; in that case, however, the interpretation as a restricted operator does not make sense. Also note that, due to (4), the restricted norm does not depend on the specific choice of the difference mask B.

It has been shown in [2] that for G := [−N, 1]^s ∩ Z^s

$$ \| S_B^r|_\nabla \|_\infty = \max\Big\{ \max_{\alpha\in[0,2^r-1]^s} \big| S_B^r \nabla c(\alpha) \big|_\infty : c \in \ell_\infty^n(\mathbb{Z}^s),\ \| \nabla c|_G \|_\infty = 1 \Big\}. \tag{6} $$

Note that G is not admissible for 𝒜: for β ∈ G we get

$$ \varepsilon + 2\alpha - \beta \in [0,N]^s \ \Longrightarrow\ \alpha \in 2^{-1}[-N-1,\, N+1]^s, $$

and α is not necessarily in G, i.e., if v ∈ ℓ^{1×n}(G), then A_ε v is not necessarily in ℓ^{1×n}(G). Nevertheless, due to the periodicity of S_B^r we can modify (6) and get the following result for p = ∞.

Lemma 1. Let K := [0,N]^s ∩ Z^s. Then K is 𝒜-admissible and

$$ \| S_B^r|_\nabla \|_\infty = \sup\Big\{ \max_{\alpha\in 2^{r+1}K} \big| S_B^r \nabla c(\alpha) \big|_\infty : c \in \ell_\infty^n(\mathbb{Z}^s),\ \| \nabla c|_K \|_\infty = 1 \Big\}. \tag{7} $$

Proof: For any ε ∈ {0,1}^s we get by the definition of A_ε ∈ 𝒜 a nonzero entry in (5) if and only if ε + 2α − β ∈ K and β ∈ K. Hence

$$ \alpha \in \tfrac12\,(K - \varepsilon + K) \cap \mathbb{Z}^s \subseteq K, $$

which proves the admissibility of K. Now we can replace G by [−N,0]^s in (6): if one of the components of β is positive, then for α ∈ [0, 2^r−1]^s the corresponding component of α − 2^r β is always negative, and so α − 2^r β does not belong to the set 2^r[0,N]^s; on the other hand, if a component of β is less than −N, then the corresponding component of α − 2^r β is bigger than 2^r N. But the set 2^r[0,N]^s includes the support of the iterated mask B^{(r)}. Next, by Remark 1, for any fixed α ∈ [0, 2^r−1]^s and β running over [−N,0]^s we have for the iterated mask

$$ B^{(r)}(\alpha - 2^r\beta) = B^{(r)}(\tilde\alpha - 2^r\tilde\beta) $$

for α̃ = α + 2^r [N]_s, which lies in 2^{r+1}K, and β̃ = β + [N]_s, which runs over [0,N]^s. Thus, a sequence c maximizes (6) if and only if the sequence d with d(β̃) = c(β̃ − [N]_s), β̃ ∈ [0,N]^s, maximizes (7).
We continue by expressing the restricted p-norm, 1 ≤ p < ∞, of a subdivision operator S_B^r in a different way. By definition of S_B^r we get

$$ \| S_B^r \nabla c \|_p = \bigg( \sum_{\alpha\in\mathbb{Z}^s} \Big| \sum_{\beta\in\mathbb{Z}^s} B^{(r)}(\alpha - 2^r\beta)\, \nabla c(\beta) \Big|_p^p \bigg)^{1/p}, $$

where the iterated mask B^{(r)} is supported on 2^r K. From these support considerations we get that B^{(r)}(α − 2^r β) ≠ 0 if and only if

$$ \alpha - 2^r\beta \in 2^r K \iff \beta \in 2^{-r}\alpha - K. $$

Thus

$$ \| S_B^r \nabla c \|_p^p = \sum_{\alpha\in\mathbb{Z}^s} \Big| \sum_{\beta\in\mathbb{Z}^s} B^{(r)}(\alpha-2^r\beta)\,\nabla c(\beta) \Big|_p^p = \sum_{\alpha\in 2^r K}\ \sum_{\gamma\in\mathbb{Z}^s} \Big| \sum_{\beta\in 2^{-r}(\alpha+2^r N\gamma)-K} B^{(r)}(\alpha + 2^r N\gamma - 2^r\beta)\,\nabla c(\beta) \Big|_p^p $$
$$ = \sum_{\gamma\in\mathbb{Z}^s}\ \sum_{\alpha\in 2^r K} \Big| \sum_{\beta\in 2^{-r}\alpha-K} B^{(r)}(\alpha - 2^r\beta)\,\nabla c(\beta + N\gamma) \Big|_p^p = \sum_{\gamma\in\mathbb{Z}^s} \big\| S_B^r (\nabla c|_{K+N\gamma}) \big\|_p^p. $$
Defining for 1 ≤ p < ∞ the quantity

$$ \| S_B^r|_{\nabla,K} \|_p^p := \max\Big\{ \sum_{\alpha\in 2^r K} \big| S_B^r \nabla c(\alpha) \big|_p^p : \nabla c \in \ell_p^{ns}(K),\ \| \nabla c \|_p = 1 \Big\}, $$

we get from

$$ \| \nabla c \|_p^p = \sum_{\gamma\in\mathbb{Z}^s} \| \nabla c|_{K+N\gamma} \|_p^p $$

and

$$ \| S_B^r \nabla c \|_p^p = \sum_{\gamma\in\mathbb{Z}^s} \big\| S_B^r(\nabla c|_{K+N\gamma}) \big\|_p^p \le \| S_B^r|_{\nabla,K} \|_p^p \sum_{\gamma\in\mathbb{Z}^s} \| \nabla c|_{K+N\gamma} \|_p^p = \| S_B^r|_{\nabla,K} \|_p^p\, \| \nabla c \|_p^p $$

that

$$ \| S_B^r|_\nabla \|_p \le \| S_B^r|_{\nabla,K} \|_p. $$

The converse of this estimate holds by a simple standard argument, so that ||S_B^r|_∇||_p = ||S_B^r|_{∇,K}||_p and, consequently,

$$ \rho_p(S_B|_\nabla) = \rho_p(S_B|_{\nabla,K}) := \lim_{r\to\infty} \| S_B^r|_{\nabla,K} \|_p^{1/r}. \tag{8} $$
We are now ready to compare the JSR and the RSR, first in the scalar and then in the vector case. This allows us to present the basic ideas common to both cases without being distracted by the slightly more intricate situation that we are confronted with in the vector case.

3.1. The Scalar Case

We now consider a scalar mask denoted by a ∈ ℓ_0(Z^s). The difference operator from (3) is now simply ∇ : ℓ(Z^s) → ℓ^s(Z^s). For K = [0,N]^s ∩ Z^s we define

$$ V := \Big\{ v \in \ell(K) : \sum_{\alpha\in K} v(\alpha) = 0 \Big\}. \tag{9} $$
We state two elementary properties of V in the following lemma.

Lemma 2.
1) V is 𝒜-invariant.
2) V = span{∇_ℓ δ(· − β) : β ∈ Z^s, 1 ≤ ℓ ≤ s} ∩ ℓ(K).

Proof: Part 1) is immediate from the fact that E_a = span{1}. For part 2) we begin by taking a sequence v ∈ V. From (9) we know that v^*([1]_s) = 0, where v^*(z) is the Laurent polynomial associated with v. Now, the ideal of all polynomials that vanish at [1]_s is ⟨z_1 − 1, ..., z_s − 1⟩, and so v^* can be written as

$$ v^*(z) = \sum_{\ell=1}^s (z_\ell - 1)\, v_\ell^*(z) = \sum_{\ell=1}^s (\nabla_\ell v_\ell)^*(z), $$

where each sequence v_ℓ is also supported on K and can be expressed as v_ℓ = Σ_{β∈K} v_ℓ(β) δ(· − β).
Using Lemmas 1 and 2 we obtain the main result of this subsection.

Proposition 1. Let a ∈ ℓ_0(Z^s) and B ∈ ℓ_0^{s×s}(Z^s) satisfy (4). Then ρ_p(S_B|_∇) = ρ_p(𝒜|_V), 1 ≤ p ≤ ∞, for V defined in (9).

Proof: It has been shown in Section 2 of [11] that

$$ \rho_p(\mathcal{A}|_V) = \lim_{r\to\infty}\ \max_{1\le\ell\le s} \| \nabla_\ell S_a^r \delta \|_p^{1/r}, \qquad 1 \le p \le \infty. $$

To prove the claim of this proposition we show that

$$ \| S_B^r|_\nabla \|_p \sim \max_{1\le\ell\le s} \| \nabla_\ell S_a^r \delta \|_p, \qquad 1 \le p \le \infty. $$
We start with the case p = ∞. Certainly, ℓ(K) contains ∇_ℓ δ, 1 ≤ ℓ ≤ s, and ||∇_ℓ δ|_K||_∞ = 1. Then by (7) we get

$$ \| S_B^r|_\nabla \|_\infty \ \ge\ \max_{\alpha\in 2^{r+1}K} \big| S_B^r \nabla\delta(\alpha) \big|_\infty \ =\ \max_{\alpha\in 2^{r+1}K} \big| \nabla S_a^r \delta(\alpha) \big|_\infty $$
and, due to supp a^{(r)} ⊂ (2^r − 1)[0,N]^s and the definition of K,

$$ \max_{\alpha\in 2^{r+1}K} \big| \nabla S_a^r\delta(\alpha) \big|_\infty = \max_{\alpha\in 2^{r+1}K} \left| \begin{bmatrix} a^{(r)}(\alpha - e_1) - a^{(r)}(\alpha) \\ \vdots \\ a^{(r)}(\alpha - e_s) - a^{(r)}(\alpha) \end{bmatrix} \right|_\infty = \| \nabla S_a^r \delta \|_\infty. $$

Note also that by the definition of ∇ we have ||∇S_a^r δ||_∞ = max_{1≤ℓ≤s} ||∇_ℓ S_a^r δ||_∞.
Let c̃ ∈ ℓ(K + [−1,0]^s) be a maximizing sequence such that ||∇c̃|_K||_∞ = 1 and

$$ \| S_B^r|_\nabla \|_\infty = \max_{\alpha\in 2^{r+1}K} \big| S_B^r \nabla\tilde c(\alpha) \big|_\infty = \max_{\alpha\in 2^{r+1}K} \Big| \sum_{\beta\in K} B^{(r)}(\alpha - 2^r\beta)\, \nabla\tilde c(\beta) \Big|_\infty. $$

Note that

$$ \nabla_\ell\, \tilde c = \nabla_\ell \sum_{\beta\in\mathbb{Z}^s} \tilde c(\beta)\,\delta(\cdot-\beta) = \sum_{\beta\in\mathbb{Z}^s} \tilde c(\beta)\, \nabla_\ell\delta(\cdot-\beta) \tag{10} $$

and that, since ∇ is not sensitive to modifying c̃ by constant sequences, we may assume that c̃([−1]_s) = 0. Consequently, ∇c̃ ∈ ℓ^s(K + [−1,1]^s) implies that max_{β∈K+[−1,1]^s} |c̃(β)| ≤ sN. Note also that, due to the compact support of B^{(r)},

$$ \Big| \sum_{\beta\in K} B^{(r)}(\alpha-2^r\beta)\,\nabla\tilde c(\beta) \Big|_\infty = \Big| \sum_{\beta\in K+[-1,1]^s} B^{(r)}(\alpha-2^r\beta)\,\nabla\tilde c(\beta) \Big|_\infty. $$

Thus, for C := s(N+2)^{s+1} we get by (10) that

$$ \| S_B^r|_\nabla \|_\infty = \max_{\alpha\in 2^{r+1}K} \Bigg| S_B^r \sum_{\beta\in K+[-1,1]^s} \tilde c(\beta) \begin{bmatrix} \nabla_1\delta(\cdot-\beta) \\ \vdots \\ \nabla_s\delta(\cdot-\beta) \end{bmatrix} (\alpha) \Bigg|_\infty \le sN \max_{\alpha\in 2^{r+1}K} \sum_{\beta\in K+[-1,1]^s} \big| S_B^r\nabla\delta(\cdot-\beta)(\alpha) \big|_\infty $$
$$ \le C \max_{\alpha\in 2^{r+1}K} \big| S_B^r \nabla\delta(\alpha) \big|_\infty = C \max_{1\le\ell\le s} \| \nabla_\ell S_a^r \delta \|_\infty. $$
The proof for 1 ≤ p < ∞ proceeds as above, with the slight difference that, due to

$$ \| \nabla\delta \|_p = \| \nabla\delta|_K \|_p = \bigg( \sum_{\ell=1}^s \sum_{\alpha\in K} |\nabla_\ell\delta(\alpha)|^p \bigg)^{1/p} = (2s)^{1/p}, $$

the identity (8) yields that

$$ (2s)^{1/p}\, \| S_B^r|_\nabla \|_p \ \ge\ \bigg( \sum_{\alpha\in 2^r K} \sum_{\ell=1}^s |\nabla_\ell S_a^r\delta(\alpha)|^p \bigg)^{1/p} \ \ge\ \max_{1\le\ell\le s} \bigg( \sum_{\alpha\in 2^r K} |\nabla_\ell S_a^r\delta(\alpha)|^p \bigg)^{1/p} = \max_{1\le\ell\le s} \| \nabla_\ell S_a^r \delta \|_p. $$
The rest is as in the case p = ∞.

Proposition 1 together with Theorem 3.2 in [11] immediately implies the following convergence result.

Theorem 1. Let a ∈ ℓ_0(Z^s). Then S_a converges in the p-norm, 1 ≤ p ≤ ∞, if and only if there exists B ∈ ℓ_0^{s×s}(Z^s) satisfying (4) and

$$ \rho_p(S_B|_\nabla) < 2^{s/p}, \quad 1 \le p < \infty, \qquad \rho_\infty(S_B|_\nabla) < 1. $$
3.2. The Vector Case

Let A ∈ ℓ_0^{n×n}(Z^s) with E_A = span{e_1, ..., e_m}. The difference operator is as in (3). To be consistent with [4] we define K := [−2, N+1]^s ∩ Z^s and

$$ V := \Big\{ v \in \ell^{1\times n}(K) : \sum_{\alpha\in K} v(\alpha)\, e_j = 0,\ 1 \le j \le m \Big\}. \tag{11} $$
Let δe_j ∈ ℓ_0^n(Z^s) and δI_n ∈ ℓ_0^{n×n}(Z^s) be the sequences defined by δe_j(α) = e_j δ_{α,0} and δI_n(α) = I_n δ_{α,0}, respectively. In analogy with the scalar case we again collect some properties of V in the following lemma.

Lemma 3.
1) V is 𝒜-invariant.
2) V = span{∇_ℓ(δe_j^T)(· − β) : β ∈ Z^s, 1 ≤ ℓ ≤ s, 1 ≤ j ≤ n} ∩ ℓ^{1×n}(K).

Proof: For part 1), by (5) we get for any ε ∈ {0,1}^s, v ∈ V and 1 ≤ j ≤ m that

$$ \sum_{\alpha\in\mathbb{Z}^s} A_\varepsilon v(\alpha)\, e_j = \sum_{\alpha\in\mathbb{Z}^s} \sum_{\beta\in\mathbb{Z}^s} v(\beta)\, A(\varepsilon + 2\alpha - \beta)\, e_j = \sum_{\beta\in\mathbb{Z}^s} v(\beta)\, e_j = 0, $$

which implies that V is 𝒜-invariant. For part 2) we take any sequence v ∈ V. For the scalar sequences v e_j it holds that v^*([1]_s) e_j = 0, 1 ≤ j ≤ m. Thus, we can repeat the argument of Lemma 2 and write

$$ v^*(z)\, e_j = \sum_{\ell=1}^s (z_\ell - 1)\, v_\ell^*(z)\, e_j = \sum_{\ell=1}^s (\nabla_\ell v_\ell)^*(z)\, e_j, $$
where each of the vector sequences v_ℓ is supported on K. Next, by (11), the last n − m components of the vector sequence v lie in the span of the corresponding elements of the vector sequences (δe_j^T)(· − β), m+1 ≤ j ≤ n, β ∈ [−2, N]^s ∩ Z^s. And, by the definition of ∇_ℓ, 1 ≤ ℓ ≤ s, we get ∇_ℓ(δe_j^T) = δe_j^T, m+1 ≤ j ≤ n, 1 ≤ ℓ ≤ s. This yields the claim.

The following lemma is crucial for comparing the JSR and the RSR in the vector case.

Lemma 4. For 1 ≤ p ≤ ∞ it holds that

$$ \rho_p(\mathcal{A}|_V) = \lim_{r\to\infty}\ \max_{\substack{1\le\ell\le s \\ 1\le j\le n}} \| \nabla_\ell S_A^r\, \delta e_j \|_p^{1/r}. $$
Proof: The difference to the scalar case is that now equation (4.1) from [4] implies that for β ∈ Z^s, 1 ≤ j ≤ n and 1 ≤ ℓ ≤ s we have

$$ \big\| \mathcal{A}^r\, \nabla_\ell(\delta e_j^T)(\cdot-\beta) \big\|_p = \Big\| \sum_{\gamma\in\mathbb{Z}^s} \nabla_\ell(\delta e_j^T)(\gamma-\beta)\, A^{(r)}(\cdot-\gamma) \Big\|_p = \Big\| \sum_{\gamma\in\mathbb{Z}^s} \nabla_\ell(\delta e_j^T)(\gamma-\beta)\, S_A^r\,\delta I_n(\cdot-\gamma) \Big\|_p $$
$$ = \big\| e_j^T\, \nabla_\ell S_A^r\, \delta I_n(\cdot-\beta) \big\|_p = \big\| e_j^T\, \nabla_\ell S_A^r\, \delta I_n \big\|_p. $$

This, by part 2) of Lemma 3 and Lemma 4.2 in [4], implies that

$$ \rho_p(\mathcal{A}|_V) := \lim_{r\to\infty} \| \mathcal{A}^r|_V \|_p^{1/r} = \lim_{r\to\infty}\ \max_{\substack{1\le\ell\le s\\1\le j\le n}} \| e_j^T\, \nabla_\ell S_A^r\, \delta I_n \|_p^{1/r}. $$

As for p = ∞ we are looking for the maximal entry of the same matrix sequence ∇_ℓ S_A^r δI_n, we have for any r ∈ N

$$ \max_{1\le j\le n} \| e_j^T\, \nabla_\ell S_A^r\, \delta I_n \|_\infty = \max_{1\le j\le n} \| \nabla_\ell S_A^r\, \delta e_j \|_\infty. $$

For 1 ≤ p < ∞ we have for fixed ℓ and any r ∈ N

$$ \max_{1\le j\le n} \| e_j^T\, \nabla_\ell S_A^r\, \delta I_n \|_p^p = \max_{1\le j\le n} \sum_{\alpha\in\mathbb{Z}^s} \big| e_j^T\, \nabla_\ell S_A^r\, \delta I_n(\alpha) \big|_p^p = \Big| \sum_{\alpha\in\mathbb{Z}^s} \big| \nabla_\ell S_A^r\, \delta I_n(\alpha) \big|^{(p)} \Big|_\infty $$

and

$$ \max_{1\le j\le n} \| \nabla_\ell S_A^r\, \delta e_j \|_p^p = \max_{1\le j\le n} \sum_{\alpha\in\mathbb{Z}^s} \big| \nabla_\ell S_A^r\, \delta e_j(\alpha) \big|_p^p = \Big| \sum_{\alpha\in\mathbb{Z}^s} \big| \nabla_\ell S_A^r\, \delta I_n(\alpha) \big|^{(p)} \Big|_1 $$
with the elements of the n×n matrices |∇_ℓ S_A^r δI_n(α)|^{(p)} being equal to the absolute values of the corresponding elements of the matrices ∇_ℓ S_A^r δI_n(α) raised to the p-th power. Due to the equivalence of n×n matrix norms, the claim follows.
Similarly to the scalar case we show the equality of the p-JSR and the p-RSR for 1 ≤ p ≤ ∞.

Proposition 2. Let A ∈ ℓ_0^{n×n}(Z^s) and B ∈ ℓ_0^{ns×ns}(Z^s) satisfy (4). Then ρ_p(S_B|_∇) = ρ_p(𝒜|_V), 1 ≤ p ≤ ∞, for V defined in (11).

Proof: Note that for K = [0,N]^s and due to the periodicity of S_B^r we have, as in the scalar case,

$$ \| S_B^r|_\nabla \|_\infty = \sup\Big\{ \max_{\alpha\in 2^{r+1}K} \big| S_B^r \nabla c(\alpha) \big|_\infty : c \in \ell_\infty^n(\mathbb{Z}^s),\ \| \nabla c|_K \|_\infty = 1 \Big\}. \tag{12} $$

The proof proceeds very much as in the scalar case, so we just highlight the differences between the scalar and the vector case for p = ∞. Using the observations above, we have to show that

$$ \| S_B^r|_\nabla \|_\infty \sim \max_{\substack{1\le\ell\le s\\1\le j\le n}} \| \nabla_\ell S_A^r\, \delta e_j \|_\infty. $$

Since ℓ^n(K) contains ∇_ℓ δe_j, 1 ≤ ℓ ≤ s, 1 ≤ j ≤ n, we get from (12) for 1 ≤ j ≤ n that

$$ \| S_B^r|_\nabla \|_\infty \ \ge\ \max_{\alpha\in 2^{r+1}K} \big| S_B^r \nabla\,\delta e_j(\alpha) \big|_\infty = \max_{\alpha\in 2^{r+1}K} \big| \nabla S_A^r\, \delta e_j(\alpha) \big|_\infty. $$

Due to supp A^{(r)} ⊂ (2^r − 1)[0,N]^s = (2^r − 1)K we have for 1 ≤ j ≤ n

$$ \max_{\alpha\in 2^{r+1}K} \big| \nabla S_A^r\, \delta e_j(\alpha) \big|_\infty = \| \nabla S_A^r\, \delta e_j \|_\infty. $$

Note also that by the definition of ∇ we have ||∇S_A^r δe_j||_∞ = max_{1≤ℓ≤s} ||∇_ℓ S_A^r δe_j||_∞ for 1 ≤ j ≤ n. Therefore,

$$ \| S_B^r|_\nabla \|_\infty \ \ge\ \max_{\substack{1\le\ell\le s\\1\le j\le n}} \| \nabla_\ell S_A^r\, \delta e_j \|_\infty. $$

To show that

$$ \| S_B^r|_\nabla \|_\infty \ \le\ C \max_{\substack{1\le\ell\le s\\1\le j\le n}} \| \nabla_\ell S_A^r\, \delta e_j \|_\infty $$

we use the argument of Proposition 1 with C := snN^{s+1} and get

$$ \| S_B^r|_\nabla \|_\infty \le sN \max_{\alpha\in 2^{r+1}K} \sum_{\beta\in K} \sum_{j=1}^n \big| S_B^r \nabla(\delta e_j)(\cdot-\beta)(\alpha) \big|_\infty \le C \max_{\substack{\alpha\in 2^{r+1}K\\1\le j\le n}} \big| S_B^r \nabla(\delta e_j)(\alpha) \big|_\infty = C \max_{\substack{1\le\ell\le s\\1\le j\le n}} \| \nabla_\ell S_A^r\, \delta e_j \|_\infty. $$
The result for 1 ≤ p < ∞ is proved similarly.

Extending Theorem 4, given in [2] for p = ∞, to the case 1 ≤ p < ∞, we obtain the following convergence result.

Theorem 2. Let A ∈ ℓ_0^{n×n}(Z^s) with E_A = span{e_1, ..., e_m}. Then S_A converges in the p-norm, 1 ≤ p ≤ ∞, if and only if there exists B ∈ ℓ_0^{ns×ns}(Z^s) satisfying (4) and

$$ \rho_p(S_B|_\nabla) < 2^{s/p}, \quad 1 \le p < \infty, \qquad \rho_\infty(S_B|_\nabla) < 1. $$
This theorem has been proved in [2] for the case p = ∞. Here we give the proof for 1 ≤ p < ∞, separating the necessary and the sufficient part into the two propositions that follow.

Proposition 3. Let A ∈ ℓ_0^{n×n}(Z^s) with E_A = span{e_1, ..., e_m}, and let B ∈ ℓ_0^{ns×ns}(Z^s) satisfy (4). If ρ_p(S_B|_∇) < 2^{s/p}, then S_A converges in the p-norm, 1 ≤ p < ∞.

Proof: The proof for 1 ≤ p < ∞ does not differ much from the one for p = ∞ given in [2]. It starts by recalling that the cardinal tensor product B-spline φ is an L_p-stable, compactly supported scalar refinable function whose integer translates form a partition of unity. We set Φ^{(0)} := Φ := φ I_n and define for r ∈ N

$$ \Phi^{(r)} := \Phi^{(r-1)} * A(2\cdot) := \sum_{\alpha\in\mathbb{Z}^s} \Phi^{(r-1)}(2\,\cdot - \alpha)\, A(\alpha). $$

Using the L_p-stability of Φ we get that

$$ \big\| \Phi^{(r+1)} * c - \Phi^{(r)} * c \big\|_p \le 2^{-rs/p}\, C_1\, \| S_B^r \nabla c \|_p, $$

where the factor 2^{-rs/p} is due to the L_p-setting. Moreover, we write the integer r as r = t + qR, 0 ≤ t < R, q = ⌊r/R⌋ ≥ 0, and set C_2 := max_{0≤t<R} ||S_B^t||_p. Since ρ_p(S_B|_∇) < 2^{s/p}, we can choose R so large that ρ := 2^{-Rs/p} ||S_B^R|_∇||_p < 1. Then for any k > 0 we obtain that

$$ \big\| \Phi^{(r+k)} * c - \Phi^{(r)} * c \big\|_p \le 2\, C_1 C_2\, \frac{\rho^{\lfloor r/R\rfloor}}{1-\rho}\, \| c \|_p. \tag{13} $$

Choosing c = δe_j for j = 1, ..., n proves that Φ^{(r)}, r ∈ N, is a Cauchy sequence in L_p. Thus, there exists a limit matrix function Ψ := lim_{r→∞} Φ^{(r)}, refinable with respect to A, i.e., Ψ = Ψ ∗ A(2·).
We finally show that for any initial data c ∈ ℓ_p^n(Z^s) the subdivision scheme S_A converges to the function f_c := Ψ ∗ c. For that purpose we note that the stability of Φ implies the existence of a constant C_3 > 0 such that

$$ 2^{-rs/p}\, \| S_A^r c - \mu_r(\Psi * c) \|_p \le C_3\, \big\| \Phi^{(r)} * c - \Psi * c \big\|_p + C_3\, \big\| \Psi * c - \Phi * \mu_r(\Psi * c)(2^r \cdot) \big\|_p. $$

The first term on the right goes to zero by (13), while the second term goes to zero by (10) in [16].

Proposition 4. Let A ∈ ℓ_0^{n×n}(Z^s) with E_A = span{e_1, ..., e_m}. If S_A converges in the p-norm for some 1 ≤ p < ∞, then there exists B ∈ ℓ_0^{ns×ns}(Z^s) satisfying (4) and ρ_p(S_B|_∇) < 2^{s/p}.

To prove this result we need some more notation. Denote by A ⊗ B the Kronecker product of the matrices A and B, i.e., the matrix with block representation [A_{jk} B]_{jk}. Moreover, let τ_j be the backwards shift operator with respect to e_j, that is, τ_j c = c(· − e_j), and write τ = [τ_1, ..., τ_s]^T. Define

$$ J_m := \begin{bmatrix} I_m & 0 \\ 0 & 0 \end{bmatrix}. $$

With this notation at hand, we get the convenient representation

$$ \nabla = \tau \otimes J_m - \mathbf{1}_n \otimes (2J_m - I_n), \qquad \mathbf{1}_n = [1, \dots, 1]^T. $$

Based on the partial summation operators σ_j, j = 1, ..., s,

$$ \sigma_j c = -\sum_{k=-\infty}^{0} c(\cdot - k e_j), \qquad c \in \ell_0^n(\mathbb{Z}^s), $$

we can define a left inverse of ∇ as

$$ \Sigma := \frac{1}{s} \begin{bmatrix} \sigma_1 I_m & 0 & \cdots & \sigma_s I_m & 0 \\ 0 & I_{n-m} & \cdots & 0 & I_{n-m} \end{bmatrix}, $$

i.e., Σ∇c = c for any c ∈ ℓ_0^n(Z^s). Recall, in addition, that ℓ_0^n(Z^s) is a dense subspace of ℓ_p^n(Z^s) for 1 ≤ p < ∞, but not for p = ∞.

Proof: That convergence of S_A implies the existence of B can be found, for example, in [16]. Moreover, the necessary convergence condition
(A_ε − I_n) F = 0 from [7] and the assumption that E_A = span{e_1, ..., e_m} yield that the basic refinable function F has the block form

$$ F = \begin{bmatrix} F_1 & F_2 \\ 0 & 0 \end{bmatrix}, \qquad\text{hence}\qquad J_m F = F. $$

By (4) we now get for any c ∈ ℓ_p^n(Z^s) that

$$ \| S_B^r \nabla c \|_p = \| \nabla S_A^r c \|_p = \big\| (\tau\otimes J_m)\, S_A^r c - \mathbf{1}_n \otimes (2J_m - I_n)\, S_A^r c \big\|_p $$
$$ \le \big\| (\tau\otimes J_m)\big( S_A^r c - \mu_r(F*c) \big) \big\|_p + \big\| \mathbf{1}_n \otimes \big( S_A^r c - \mu_r(F*c) \big) \big\|_p + \big\| (\tau\otimes J_m)\, \mu_r(F*c) - \mathbf{1}_n \otimes \mu_r(F*c) \big\|_p $$
$$ \le 2s\, \big\| S_A^r c - \mu_r(F*c) \big\|_p + \big\| \tau \otimes \mu_r(F*c) - \mathbf{1}_n \otimes \mu_r(F*c) \big\|_p, $$

where we took into account the translation invariance of p-norms and the fact that J_m F = F. Since S_A preserves all constant sequences in E_A and since F is the canonical limit function, the operator G : ℓ_p^n(Z^s) → ℓ_p^n(Z^s) defined by Gc := S_A^r c − μ_r(F ∗ c) has the property that Gc = 0 if c is a constant sequence in E_A; hence there exists G_0 : ℓ_p^{ns}(Z^s) → ℓ_p^n(Z^s) such that G = G_0 ∇. This G_0 can be obtained by representing G^*(z) with respect to the ideal ⟨z² − 1⟩. In addition, we have G_0 = GΣ with the partial summation operator from above. By the same methods as in [14, Lemma 2.3], see also [15, Lemma 2.18], it then follows that

$$ \| Gc \|_p \le C \max_{1\le j\le n} \| G\,\delta e_j \|_p\, \| \nabla c \|_p \tag{14} $$

for some constant C > 0 that depends on p and the support size of A, and so we get that

$$ \| S_A^r c - \mu_r(F*c) \|_p \le C_1\, \| S_A^r\, \delta I_n - \mu_r(F) \|_p\, \| \nabla c \|_p. \tag{15} $$

Similar arguments also lead to the estimate

$$ \big\| \tau \otimes \mu_r(F*c) - \mathbf{1}_n \otimes \mu_r(F*c) \big\|_p \le 2^{r/p}\, C_2\, \omega_p\big(F, 2^{-r}\big)\, \| \nabla c \|_p \tag{16} $$

with the L_p-modulus of continuity ω_p. Substituting (15) and (16), we thus obtain that

$$ 2^{-rs/p}\, \| S_B^r \nabla c \|_p \le \Big( C_1\, 2^{-rs/p}\, \| S_A^r\, \delta I_n - \mu_r(F) \|_p + C_2\, \omega_p\big(F, 2^{-r}\big) \Big)\, \| \nabla c \|_p, $$

and since

$$ \lim_{r\to\infty} 2^{-rs/p}\, \| S_A^r\, \delta I_n - \mu_r(F) \|_p = \lim_{r\to\infty} \omega_p\big(F, 2^{-r}\big) = 0 $$

by convergence and standard properties of L_p functions, the claim finally follows.
§4. Computing the restricted p-norm

As presented in [2], the restricted ∞-norm is easily computed by means of linear optimization problems: for α ∈ 2^{r+1}K and i = 1, ..., ns, solve for j = 1, ..., m

$$ \max \sum_{\beta\in K} \sum_{\ell=1}^s B^{(r)}_{i,(\ell-1)n+j}(\alpha - 2^r\beta)\, \nabla_\ell c_j(\beta), \qquad -1 \le \nabla_\ell c_j(\beta) \le 1, \quad 1 \le \ell \le s, \tag{17} $$

and for j = m+1, ..., n

$$ \max \sum_{\beta\in K} \bigg( \sum_{\ell=1}^s B^{(r)}_{i,(\ell-1)n+j}(\alpha - 2^r\beta) \bigg) c_j(\beta), \qquad -1 \le c_j(\beta) \le 1. \tag{18} $$

It is easy to see that the maximum for j = m+1, ..., n is attained for

$$ c_j(\beta) := \operatorname{sgn} \bigg( \sum_{\ell=1}^s B^{(r)}_{i,(\ell-1)n+j}(\alpha - 2^r\beta) \bigg), \qquad \beta \in K. $$

For j = 1, ..., m the above problem can be solved by standard methods of linear optimization, as only a finite number of parameters and side conditions is involved, even though this number increases dramatically with r. An example of how the restricted norm works is provided in the next section. The individual optimal solutions for 1 ≤ j ≤ n are then added together, and the resulting values, which depend on i = 1, ..., ns and α ∈ 2^{r+1}K, are finally maximized with respect to these two parameters.

Remark 2. It is easy to see that ||S_B^r|_∇||_∞ = ||S_B^r||_∞ if for any i, α all the entries B^{(r)}_{i,(ℓ-1)n+j}(α − 2^r β), β ∈ Z^s, are of the same sign for the same ℓ, j.
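The explicit maximizer for (18) requires no optimization routine at all; a tiny MATLAB sketch (our illustration; w is a hypothetical row vector collecting the summed coefficients Σ_ℓ B^{(r)}_{i,(ℓ-1)n+j}(α − 2^r β) over the finitely many β ∈ K, and the numbers below are made up, not taken from the paper):

    w = [0.5 -0.25 0 0.75];   % assumed coefficient data for one pair (i, alpha)
    c_opt = sign(w);          % optimal c_j(beta) = sgn( sum of coefficients )
    val = w * c_opt.';        % optimal value of (18), equal to sum(abs(w))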
For 1 ≤ p < ∞ the computation of the restricted p-norm translates into the nonlinear optimization problem

$$ \max \sum_{\alpha\in 2^r K} \Big| \sum_{\beta\in K} B^{(r)}(\alpha - 2^r\beta)\, d(\beta) \Big|_p^p $$

with side conditions

$$ \sum_{j=1}^s \sum_{\beta\in K} |d_j(\beta)|^p \le 1, \qquad \nabla_j d_\ell = \nabla_\ell d_j, \quad 1 \le j < \ell \le s, $$

where it is convenient to write d ∈ ℓ_p^{ns}(Z^s) in the form d = [d_1, ..., d_s]^T with d_ℓ ∈ ℓ_p^n(Z^s). The conditions ∇_j d_ℓ = ∇_ℓ d_j, 1 ≤ j < ℓ ≤ s, imply that d_ℓ = ∇_ℓ c and d_j = ∇_j c for some c ∈ ℓ_p^n(Z^s), and thus convert the requirement d ∈ ∇ℓ_p^n(Z^s) into linear side conditions on d.

We now point out that this problem lies within the scope of concave minimization, i.e., the function to be minimized is concave and is minimized over a convex compact set E described by a finite number of linear and nonlinear convex constraints of the type g_i(·) ≤ 0, 1 ≤ i ≤ M, M ∈ N. Problems of this type can be solved, for example, by means of outer approximation methods as given in [12, Algorithm 3.3]. To be able to apply this method we must verify that the functions f_α, defined for α ∈ 2^r K as

$$ f_\alpha\big( [d_j(\beta) : 1 \le j \le s,\ \beta\in K]^T \big) := \Big| \sum_{\beta\in K} B^{(r)}(\alpha - 2^r\beta)\, d(\beta) \Big|_p, $$

are convex with respect to the parameters d_j(β), which follows immediately from the triangle inequality. Since the functions g(x) = x^p are convex and increasing for x ≥ 0, so are the compositions g ∘ f_α, and hence so is the target function of the optimization problem. We are thus indeed facing a convex maximization problem, which is equivalent to a concave minimization problem.
§5. Examples

In this section we discuss an example that illustrates the computational advantage of the RSR approach. Consider the convergent bivariate vector subdivision scheme associated with the mask A introduced and studied in [3]. As the joint 1-eigenspace of A is spanned by (1,1)^T, we start by defining a modified mask Ã to ensure that E_Ã = span{e_1}. We choose the E-generator to be

$$ V = \frac{1}{\sqrt 2} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $$

and, after an appropriate shift to get polynomial entries in Ã^*(z), we obtain the mask

$$ \tilde A = V^T A V, $$

whose 2×2 matrix coefficients Ã(α) have the common denominator 8; in the tabulated coefficient array the boldface matrix corresponds to α = (0,0), with α₁ changing from left to right and α₂ from the bottom up. It is easily checked that E_Ã = span{e_1}. Thus, we get

$$ \begin{bmatrix} z_1^2-1 & 0 \\ 0 & 1 \\ z_2^2-1 & 0 \\ 0 & 1 \end{bmatrix} \tilde A^*(z) = B^*(z) \begin{bmatrix} z_1-1 & 0 \\ 0 & 1 \\ z_2-1 & 0 \\ 0 & 1 \end{bmatrix} $$

with B^*(z) of the form

$$ B^*(z) = \frac{1}{8} \begin{bmatrix} b_{11}(z) & 0 & 0 & 0 \\ b_{21}(z) & \tilde a_{22}(z) & b_{23}(z) & 0 \\ 0 & 0 & b_{33}(z) & 0 \\ b_{21}(z) & 0 & b_{23}(z) & \tilde a_{22}(z) \end{bmatrix}, $$

where b_{14}(z) = b_{34}(z) = 0 due to ã_{12}(z) = 0, and b_{13}(z) = b_{31}(z) = 0. The remaining elements of B^*(z) are

$$ b_{11}(z) = 1 + 2z_2 + z_2^2 + (1 + 3z_2 + 3z_2^2 + z_2^3)\, z_1 + (z_2 + 2z_2^2 + z_2^3)\, z_1^2, $$
$$ b_{33}(z) = 1 + 2z_1 + z_1^2 + (1 + 3z_1 + 3z_1^2 + z_1^3)\, z_2 + (z_1 + 2z_1^2 + z_1^3)\, z_2^2, $$
$$ b_{21}(z) = 1 + 2z_2 - z_2^3 + (z_2 + z_2^2)\, z_1, \qquad b_{23}(z) = -1 - z_1 + (-1 - z_1)\, z_2. $$

The non-restricted norm of S_B is easily computed to be

$$ \| S_B \|_\infty = \frac{1}{8} \left| \begin{bmatrix} 4 & 0 & 0 & 0 \\ 3 & 0 & 1 & 4 \\ 0 & 0 & 4 & 0 \\ 3 & 0 & 1 & 4 \end{bmatrix} \right|_\infty = 1, $$

i.e., it is determined by the second row of |B_{(0,1)}| := Σ_β |B((0,1) + 2β)|. The remaining rows of all the B_ε, ε ∈ {0,1}^s, yield numbers smaller than 1. This alone does not allow us to conclude whether the scheme S_Ã is convergent.
Let us now compute the restricted norm of S_B and compare it with the value of the non-restricted norm obtained above. To compute the restricted norm we formulate, as shown in (17), the following linear optimization problem for ε = (0,1), i = 2 and j = 1:

$$ \max_x\ [-1\ \ 2\ \ 0\ \ {-1}]\, x =: \max_x\ f^T x, \qquad x := [\, c_1(0,0)\ \ c_1(-1,0)\ \ c_1(0,-1)\ \ c_1(-1,-1) \,]^T \in \mathbb{R}^4, $$

with side conditions

$$ Ax := \begin{bmatrix} -1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 1 & -1 & 0 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{bmatrix} x \ \le\ \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} =: b. $$
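In MATLAB this reads as follows (a sketch assuming the Optimization Toolbox; note that linprog minimizes, so the objective is negated in order to maximize f^T x subject to Ax ≤ b):

    f = [-1; 2; 0; -1];
    A = [ -1  1  0  0
          -1  0  1  0
           0 -1  0  1
           0  0 -1  1
           1 -1  0  0
           1  0 -1  0
           0  1  0 -1
           0  0  1 -1 ];
    b = ones(8, 1);
    [x, fval] = linprog(-f, A, b);   % maximize f'*x
    maxval = -fval;                  % contribution of j = 1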
Solving this problem with the MATLAB routine linprog, computing f^T x and adding to it the result we get from (18) for ε = (0,1), i = 2 and j = 2, we obtain ||S_B|_∇||_∞ = 7/8. To estimate the JSR in this case we would instead have to work with matrices A_ε of dimension 98 × 98, as |K| = 49.

§6. References

1. M. A. Berger and Y. Wang, Bounded semigroups of matrices, Linear Algebra Appl. 166 (1992), 21–27.
2. M. Charina, C. Conti and T. Sauer, Regularity of multivariate vector subdivision schemes, submitted.
3. C. Conti and K. Jetter, A new subdivision method for bivariate splines on the four-directional mesh, J. Comput. Appl. Math. 119 (2000), 81–96.
4. D. R. Chen, R. Q. Jia and S. D. Riemenschneider, Convergence of vector subdivision schemes in Sobolev spaces, Appl. Comput. Harmon. Anal. 12 (2002), 128–149.
5. C. Cabrelli, C. Heil and U. Molter, Self-similarity and multiwavelets in higher dimensions, Mem. Amer. Math. Soc. 170 (2004), no. 807, 1–82.
6. D. Cox, J. Little and D. O'Shea, Ideals, Varieties and Algorithms, 2nd ed., Springer, 1996.
7. W. Dahmen and C. A. Micchelli, Biorthogonal wavelet expansions, Constr. Approx. 13 (1997), 294–328.
8. N. Dyn, Subdivision schemes in CAGD, in: Advances in Numerical Analysis, Vol. II: Wavelets, Subdivision Algorithms and Radial Basis Functions (W. A. Light, ed.), 36–104, Oxford University Press, Oxford, 1992.
9. B. Han, Vector cascade algorithms and refinable function vectors in Sobolev spaces, J. Approx. Theory 124 (2003), 44–88.
10. B. Han, Solutions in Sobolev spaces of vector refinement equations with a general dilation matrix, submitted.
11. B. Han and R. Q. Jia, Multivariate refinement equations and convergence of subdivision schemes, SIAM J. Math. Anal. 29 (1998), 1177–1199.
12. R. Horst, P. M. Pardalos and N. V. Thoai, Introduction to Global Optimization, 2nd ed., Kluwer Academic Publishers, 2000.
13. R. Q. Jia, Subdivision schemes in L_p spaces, Adv. Comput. Math. 3 (1995), 309–341.
14. C. A. Micchelli and T. Sauer, Regularity of multiwavelets, Adv. Comput. Math. 7 (1997), 455–545.
15. C. A. Micchelli and T. Sauer, On vector subdivision, Math. Z. 229 (1998), 621–674.
16. T. Sauer, Stationary vector subdivision – quotient ideals, differences and approximation power, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. (2002).

Maria Charina
Institut für Angewandte Mathematik
Universität Dortmund
D-44221 Dortmund, Germany

Costanza Conti
Dipartimento di Energetica "Sergio Stecco"
Università di Firenze
Via Lombroso 6/17, I-50134 Firenze, Italy

Thomas Sauer
Lehrstuhl für Numerische Mathematik
Justus-Liebig-Universität Gießen
Heinrich-Buff-Ring 44, D-35392 Gießen, Germany