Regularity of multivariate vector subdivision schemes

Maria Charina ([email protected])
Institut für Angewandte Mathematik, Universität Dortmund, D–44221 Dortmund, Germany

Costanza Conti ([email protected])
Dipartimento di Energetica “Sergio Stecco”, Università di Firenze, Via Lombroso 6/17, I–50134 Firenze, Italy

Thomas Sauer ([email protected])
Lehrstuhl für Numerische Mathematik, Justus–Liebig–Universität Gießen, Heinrich–Buff–Ring 44, D–35392 Gießen, Germany

September 28, 2003

Abstract. In this paper we discuss methods for investigating the convergence of multivariate vector subdivision schemes and the regularity of the associated limit functions. Specifically, we consider difference vector subdivision schemes whose restricted contractivity determines the convergence of the original scheme, and we describe the connection between the regularity of the limit functions of the difference subdivision scheme and those of the original subdivision scheme.
© 2004 Kluwer Academic Publishers. Printed in the Netherlands.

cfinal7.tex; 19/11/2004; 7:05; p.1

1. Introduction

Subdivision schemes are computational means for recursively generating discrete functions defined on denser and denser grids in IR^s. The values of these functions lie in IR^n, n ≥ 1. At each step of the subdivision recursion the new values are obtained simply by local averaging of the previously computed values on the coarser grid. The averaging coefficients form the so–called refinement mask. If they are (real) numbers, the scheme is said to be a scalar subdivision scheme acting on scalar sequences. If the averaging coefficients are matrices, the scheme is called either a matrix subdivision scheme or a vector subdivision scheme, as it now maps vector valued sequences to vector valued sequences. Vector subdivision schemes play an important role in the convergence and regularity analysis even of scalar multivariate subdivision schemes, in the analysis of Hermite–type subdivision schemes, and in the context of multi–wavelets.

When dealing with subdivision schemes, it is natural to study their convergence and to investigate the smoothness of the associated limit functions. These problems have been addressed by various authors over the recent two decades and have led to a variety of results. Many facts and references, though mainly for the scalar case, can already be found in the classical monograph [1] as well as in the survey [6]. In particular, both provide factorization methods for subdivision operators and an analysis of the associated vector difference subdivision scheme. More recently, approaches based on studying the transition operator [14, 15, 21] and the joint spectral radius [3, 9, 11, 13] have been pursued. We remark, however, that the list of references we give is by no means exhaustive or complete.

In this paper, we further pursue the factorization approach. We give a characterization of the convergence of multivariate vector subdivision schemes and derive sufficient conditions for a refinable function to possess a certain order of differentiability. This is done by classifying the subdivision schemes with respect to the dimension of a certain finite dimensional subspace depending on the matrices that form the mask of the subdivision scheme, as introduced in [16, 17]. Based on this dimension, we use a suitable difference operator to pass to a difference scheme whose mask consists of larger matrices. The conditions on the mask of the original scheme ensuring the existence of the difference scheme allow for an algebraic interpretation that naturally generalizes the well–known “zero at −1” property from the univariate case. The concept of the restricted spectral radius, which we introduce and investigate, enables us to characterize the convergence of the original subdivision scheme in terms of the spectral properties of the difference scheme. Finally, we also show that convergence of the difference scheme implies that the original scheme converges to a smoother limit function.
2. Notation and background

We denote by ℓ^n(ZZ^s) and ℓ^{n×n}(ZZ^s) the linear spaces of all sequences of n–vectors and of n×n matrices, respectively. We consider these sequences as discrete functions defined on the integer grid ZZ^s and write

  c = (c(α) : α ∈ ZZ^s) ∈ ℓ^n(ZZ^s),   A = (A(α) : α ∈ ZZ^s) ∈ ℓ^{n×n}(ZZ^s)

for the vector and matrix valued sequences. In addition, we write ℓ^n_∞(ZZ^s) and ℓ^{n×n}_∞(ZZ^s) for the Banach spaces of all bounded vector and matrix valued sequences, respectively, equipped with the norms

  ‖c‖_∞ = sup_{α∈ZZ^s} |c(α)|_∞,   ‖A‖_∞ = sup_{α∈ZZ^s} |A(α)|_∞,

where |·|_∞ denotes the ∞–norm on IR^n for vectors and the associated operator norm for matrices. Moreover, we denote by ℓ^n_0(ZZ^s) and ℓ^{n×n}_0(ZZ^s) the finitely supported vector valued and matrix valued
sequences, respectively. A specific example of such a sequence is the scalar delta sequence δ ∈ ℓ_0(ZZ^s) defined by

  δ(α) := 1 for α = 0,   δ(α) := 0 for α ∈ ZZ^s \ {0}.
For a finite matrix sequence A and a finite vector sequence c we define the associated symbols as the Laurent polynomials

  A*(z) := Σ_{α∈ZZ^s} A(α) z^α,   c*(z) := Σ_{α∈ZZ^s} c(α) z^α,   z ∈ (C \ {0})^s,

respectively, where, in the usual multiindex notation, z^α = z_1^{α_1} · … · z_s^{α_s}. The canonical unit multiindices are denoted by ε_j, j = 1, …, s, i.e. z_j = z^{ε_j}. In addition, let Λ denote the ring of Laurent polynomials with real coefficients.

The subdivision operator S_A : ℓ^n(ZZ^s) → ℓ^n(ZZ^s) associated with the matrix sequence or “mask” A ∈ ℓ^{n×n}_0(ZZ^s) is defined by

  S_A c(α) = Σ_{β∈ZZ^s} A(α − 2β) c(β),   α ∈ ZZ^s,  c ∈ ℓ^n(ZZ^s).
It follows from the finite support of A that the linear operator S_A is a bounded and thus continuous operator from ℓ^n_∞(ZZ^s) to itself. Alternatively, using the symbol calculus notation we can also write the subdivision scheme in the form

  (S_A c)*(z) = A*(z) c*(z²),   z ∈ (C \ {0})^s.   (1)
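In the scalar, univariate case (n = s = 1) a subdivision step is exactly the local averaging described above. The following is a minimal Python sketch of the operator S_A acting on finitely supported sequences stored as dicts; the choice of Chaikin's mask a*(z) = (1 + 3z + 3z² + z³)/4 as an example and all function names are ours, not the paper's.

```python
def subdivide(mask, c):
    """One subdivision step (S_A c)(alpha) = sum_beta A(alpha - 2 beta) c(beta)
    for finitely supported sequences; scalar univariate case n = s = 1."""
    out = {}
    for gamma, a in mask.items():
        for beta, v in c.items():
            idx = 2 * beta + gamma
            out[idx] = out.get(idx, 0.0) + a * v
    return out

# Chaikin's corner-cutting mask, a*(z) = (1 + 3z + 3z^2 + z^3)/4
mask = {0: 0.25, 1: 0.75, 2: 0.75, 3: 0.25}

c = {0: 1.0}              # the delta sequence
for _ in range(2):        # iterate towards the basic limit function
    c = subdivide(mask, c)
```

Applying S_A once to the delta sequence reproduces the mask itself; iterating generates the values of the basic limit function on finer and finer grids. For matrix masks the products a * v simply become matrix–vector products.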
The subdivision scheme S_A is said to be convergent in ℓ^n_∞(ZZ^s) if for any vector sequence c ∈ ℓ^n_∞(ZZ^s) there exists a uniformly continuous vector field f_c : IR^s → IR^n, f_c ∈ C_u(IR^s)^n, such that

  lim_{r→∞} ‖f_c(2^{−r} ·) − S_A^r c‖_∞ = lim_{r→∞} sup_{α∈ZZ^s} |f_c(2^{−r} α) − (S_A^r c)(α)|_∞ = 0

and f_c ≠ 0 for some initial data c. The latter condition excludes trivialities like the subdivision scheme associated with A = 0. It is an immediate consequence of the linearity of the subdivision operator S_A that the limit function f_c is of the form

  f_c = Ψ ∗ c := Σ_{α∈ZZ^s} Ψ(· − α) c(α),   (2)

where the basic limit function Ψ ∈ C(IR^s)^{n×n} is obtained by iterating the subdivision scheme on the matrix sequence δ I_n. Equivalently, the j–th column of Ψ is generated by applying the operator S_A to the j–th
column of δ I_n, i.e. to δ e_j, j = 1, …, n, where e_j is the j–th standard unit vector in IR^n.

Next, for ε ∈ {0,1}^s we define the n×n matrices

  A_ε := Σ_{α∈ZZ^s} A(ε − 2α),   ε ∈ {0,1}^s,

and their joint 1–eigenspace

  E_A := { v ∈ IR^n : A_ε v = v, ε ∈ {0,1}^s }.

We also list two simple facts about E_A for convergent subdivision schemes S_A, cf. [17, 19].

LEMMA 1. If the subdivision operator S_A is convergent, then

1. E_A ≠ {0}, that is, there exists at least one v ∈ IR^n \ {0} such that A_ε v = v, ε ∈ {0,1}^s;

2. a sequence c ∈ ℓ^n(ZZ^s) is an eigensequence of S_A with respect to the eigenvalue 1 if and only if c(α) = v, α ∈ ZZ^s, for some v ∈ E_A.

As in [17] we define m := dim E_A and note that for a convergent subdivision scheme we always have 1 ≤ m ≤ n. Moreover, let V = {v_1, …, v_m} be a basis of E_A. Without loss of generality, we assume that V is an orthonormal basis of E_A. Let v_{m+1}, …, v_n ∈ IR^n be any set of vectors that completes V to an orthonormal basis of IR^n, so that the matrix V = [v_1 ⋯ v_n] is orthogonal and E_A = V span{e_j : j = 1, …, m} =: V IR^m. Any such matrix V ∈ IR^{n×n} is called an E_A–generator. For an E_A–generator we define a modified mask Ã by

  Ã = V^T A V,   i.e.   Ã(α) = V^T A(α) V,   α ∈ ZZ^s,
and recall that the convergence of S_A and S_Ã are equivalent. More precisely, the limit function of S_Ã for c ∈ ℓ^n_∞(ZZ^s) is V^T f_{Vc}.

Using the concept of E_A–generators, we introduce the difference operator associated with a vector subdivision scheme. To that end, we denote by ∇_j, j = 1, …, s, the j–th partial backwards difference operator acting on scalar sequences c ∈ ℓ(ZZ^s) as follows:

  (∇_j c)(α) = c(α − ε_j) − c(α),   α ∈ ZZ^s.

The backwards difference operator ∇_V : ℓ^n(ZZ^s) → ℓ^{ns}(ZZ^s) is then defined by

  ∇_V = ∇_{m,V} = [ diag(∇_1 I_m, I_{n−m}) ; … ; diag(∇_s I_m, I_{n−m}) ] V^T,

the s blocks diag(∇_j I_m, I_{n−m}), j = 1, …, s, being stacked on top of each other,
where V is an E_A–generator. Note that this difference operator is a discrete analogue of a derivative and maps eigensequences associated with the eigenvalue 1 to zero (provided that the scheme is convergent). Moreover, we use the abbreviation ∇ for ∇_{m,I}. The proof of the following observation is straightforward.

LEMMA 2. Suppose that S_A is a convergent subdivision scheme. Then, for c ∈ ℓ^n(ZZ^s) the following are equivalent:

1. ∇_V c = 0.

2. S_A c = c.

3. c(α) = v, α ∈ ZZ^s, for some v ∈ E_A.

Assume for a mask A ∈ ℓ^{n×n}_0(ZZ^s) that 1 ≤ m := dim E_A and let V be any E_A–generator. It has been shown in [2], and for more general dilation matrices in [19], that there exists a mask B ∈ ℓ^{ns×ns}_0(ZZ^s) such that

  ∇_V S_A = S_B ∇_V.   (3)

This is the aforementioned factorization property. Indeed, in the scalar univariate case this is equivalent to A*(z) having a factor of the form z + 1. In higher dimensions this property corresponds to A*(z) belonging to the quotient ideal ⟨z² − 1⟩ : ⟨z − 1⟩, cf. [20]. We write

  I = ⟨z² − 1⟩ = ⟨z_j² − 1 : j = 1, …, s⟩   and   J = I : ⟨z − 1⟩ = I : ⟨z_j − 1 : j = 1, …, s⟩

for the two polynomial ideals in question and assume that the mask A is shifted such that A*(z) is a polynomial in z. Then (3) is equivalent to the algebraic property that

  V^T A*(z) V ∈
    [ J  I  …  I | I  …  I ]
    [ I  J  …  I | I  …  I ]
    [ …  …  …  … | …  …  … ]
    [ I  I  …  J | I  …  I ]
    [ I  I  …  I | ∗  …  ∗ ]
    [ …  …  …  … | …  …  … ]
    [ I  I  …  I | ∗  …  ∗ ]   (4)
where the two column and row blocks are of size m and n − m, respectively, and there is no restriction on V^T A* V in the lower right (n − m)×(n − m) block.

We finally recall how to obtain a matrix mask B of minimal support (in the sense of total degree) that satisfies (3). To that end, we use the following block representation of the matrix V^T A*(z) V:

  V^T A*(z) V = [ F*(z)  G_0*(z) ; G_1*(z)  H*(z) ],
where F* is an m×m matrix valued Laurent polynomial (this determines the sizes of the other blocks). Next, we compute matrix valued Laurent polynomials B*_{jk} and C*_k, j, k = 1, …, s, such that

  (z_j − 1) F*(z) = Σ_{k=1}^{s} (z_k² − 1) B*_{jk}(z),   (5)

  G_1*(z) = Σ_{k=1}^{s} (z_k² − 1) C*_k(z).   (6)

The existence of the (Laurent) polynomials B*_{jk} and C*_k, j, k = 1, …, s, follows from (4), and once they are determined one can choose
  B*(z) =
    [ B*_{11}(z)   ((z_1 − 1)/s) G_0*(z)   …   B*_{1s}(z)   ((z_1 − 1)/s) G_0*(z) ]
    [ C_1*(z)      (1/s) H*(z)             …   C_s*(z)      (1/s) H*(z)           ]
    [ …                                                                            ]
    [ B*_{s1}(z)   ((z_s − 1)/s) G_0*(z)   …   B*_{ss}(z)   ((z_s − 1)/s) G_0*(z) ]
    [ C_1*(z)      (1/s) H*(z)             …   C_s*(z)      (1/s) H*(z)           ]   (7)
cf. [19]. It is well known that B is not defined uniquely by (3): in general, neither the polynomials B*_{jk} nor C*_k from (5) and (6), respectively, are unique, and also the choice of B* in (7) is only one out of many possible ones for the same B*_{jk} and C*_k, j, k = 1, …, s. We also remark that the decompositions (5) and (6) are obtained most conveniently by applying componentwise the process of reduction, a well–known basic algorithm in Computer Algebra, cf. [4].
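In the scalar univariate case (n = m = s = 1) the whole construction collapses to a single polynomial division: on the symbol level, ∇S_A = S_B∇ reads (z − 1)A*(z) = B*(z)(z² − 1), so B*(z) = A*(z)/(z + 1), and the "zero at −1" property is precisely divisibility by z + 1. A minimal sketch of this reduction step by synthetic division (Chaikin's mask as an example; names and layout are ours):

```python
def divide_by_z_plus_1(a):
    """Divide the polynomial a(z) = sum_k a[k] z^k (coefficients in ascending
    order, degree >= 1) by z + 1; returns (quotient, remainder).
    The remainder equals a(-1), so it vanishes iff a has a zero at -1."""
    n = len(a) - 1
    q = [0.0] * n
    q[n - 1] = a[n]
    for k in range(n - 1, 0, -1):
        q[k - 1] = a[k] - q[k]   # matching coefficients: a_k = q_{k-1} + q_k
    rem = a[0] - q[0]
    return q, rem

# Chaikin's mask a*(z) = (1 + 3z + 3z^2 + z^3)/4:
b, rem = divide_by_z_plus_1([0.25, 0.75, 0.75, 0.25])
# b = [0.25, 0.5, 0.25], i.e. B*(z) = (1 + z)^2 / 4, and rem = 0.0
```

In several variables this single division is replaced by the componentwise reduction modulo ⟨z_j² − 1 : j = 1, …, s⟩ described above.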
3. Convergence Analysis

In this section we investigate the property of the difference subdivision scheme S_B that characterizes the convergence of the original scheme S_A independently of the choice of the mask B. For simplicity we, if necessary, change the coordinate system so that

  E_A = span{e_1, …, e_m},   1 ≤ m ≤ n,   (8)

and pass from A to the modified mask Ã as described in the previous section. Under the assumption (8) we then have that A and B are related by

  ∇ S_A = S_B ∇.   (9)
Based on the operator ∇ we define the restricted norm of the operator S_B, B ∈ ℓ^{ns×ns}_0(ZZ^s), to be

  ‖S_B|_∇‖_∞ := sup { ‖S_B ∇c‖_∞ / ‖∇c‖_∞ : c ∈ ℓ^n_∞(ZZ^s), ∇c ≠ 0 },

and the restricted spectral radius of the operator S_B as

  ρ_∞(S_B|_∇) := limsup_{r→∞} ‖S_B^r|_∇‖_∞^{1/r}.
We have chosen the ambiguous notation S_B|_∇ deliberately, as the above restricted norm and spectral radius can also be seen as the norm of S_B restricted to the difference space ∇ℓ^n_∞(ZZ^s) ⊂ ℓ^{ns}_∞(ZZ^s). That the range of S_B is also the difference space, and that therefore iterations and the spectral radius are well defined, is an immediate consequence of (9). It is, however, worthwhile to point out that the restricted norm and spectral radius are well defined even if (9) is not valid; the interpretation as a restricted operator then no longer makes sense. Also note that, due to (9), the restricted norm does not depend on the specific choice of the difference mask B. Moreover, we remark that the symbol of the operator ∇ is of the form

  ∇*(z) = [ diag((z_1 − 1) I_m, I_{n−m}) ; … ; diag((z_s − 1) I_m, I_{n−m}) ],   (10)

the s blocks being stacked on top of each other, and we start with the following simple but crucial fact.

LEMMA 3. Let D ∈ ℓ^{n×n}_0(ZZ^s) satisfy

  S_D e_j = 0,   j = 1, …, m,   (11)

where e_j(α) = e_j, α ∈ ZZ^s. Then there exists F ∈ ℓ^{n×ns}_0(ZZ^s) such that S_D = S_F ∇ and ‖S_F‖_∞ is bounded.
Proof. It follows immediately from (11) that for ε ∈ {0,1}^s

  0 = (S_D e_j)(ε) = ( Σ_{α∈ZZ^s} D(ε − 2α) ) e_j = D_ε e_j,   j = 1, …, m,

implying that

  D_ε = [ 0_{m×m}  ∗_{m×(n−m)} ; 0_{(n−m)×m}  ∗_{(n−m)×(n−m)} ],   ε ∈ {0,1}^s.

Thus, D*(z) ∈ [ ⟨z² − 1⟩^{n×m}  Λ^{n×(n−m)} ] and it follows from (10) that

  D*(z) = F*(z) ∇*(z²)

for some finitely supported mask F ∈ ℓ^{n×ns}_0(ZZ^s). The boundedness of the norm is an immediate consequence of the finite support of F. □
THEOREM 4. Let A ∈ ℓ^{n×n}_0(ZZ^s) satisfy (8) and let B ∈ ℓ^{ns×ns}_0(ZZ^s) be given by (9). Then the following statements are equivalent:

1. S_A converges.

2. lim_{r→∞} ‖S_B^r|_∇‖_∞ = 0.

3. ρ_∞(S_B|_∇) < 1.

Proof. First note that the operators S_D := ∇_j S_A, j = 1, …, s, satisfy the hypothesis (11) of Lemma 3 and thus there exists B ∈ ℓ^{ns×ns}_0(ZZ^s) satisfying ∇S_A = S_B∇. Moreover, let N ∈ IN be such that A(α) = 0 for α ∉ [−N, N]^s.

(1 ⇒ 2) The proof is analogous to the one given in [1, Theorem 2.3] for the scalar case. Therefore, we restrict ourselves to an outline of the proof and to pointing out the differences between the scalar and the vector case. Recall first that the support of the basic limit function Ψ is included in the support of A. Next, choose c ∈ ℓ^n_∞(ZZ^s) and µ ∈ (0,1). We begin by estimating |f_c(x) − f_c(y)|_∞ for x, y ∈ IR^s such that |x − y|_∞ < µ. Since for any x, y ∈ IR^s and w ∈ E_A we have

  f_c(x) − f_c(y) = Σ_{α∈ZZ^s} (Ψ(x − α) − Ψ(y − α)) (c(α) − w),

we get the estimate

  |f_c(x) − f_c(y)|_∞ ≤ ω(Ψ; µ) #Γ_{x,y} max_{α∈Γ_{x,y}} |c(α) − w|_∞.

Here

  ω(Ψ; µ) := sup_{0<|h|_∞≤µ} ‖Ψ(· + h) − Ψ(·)‖_∞
  ‖(Φ^{(r+k)} − Φ^{(r)}) ∗ c‖_∞ ≤ 2 C_1 C_2 ‖S_F‖_∞ (ρ^{⌊r/R⌋} / (1 − ρ)) ‖c‖_∞.   (15)
Choosing c = δ e_j for j = 1, …, s proves that Φ^{(r)}, r ∈ IN, is a Cauchy sequence. Thus, there exists a uniformly continuous limit matrix function

  Ψ := lim_{r→∞} Φ^{(r)},
refinable with respect to A, i.e., Ψ = (Ψ ∗ A)(2·). We finally show that for any initial data c ∈ ℓ^n_∞(ZZ^s) the subdivision scheme S_A converges to the function f_c := Ψ ∗ c. For that purpose we take into account that the stability of Φ implies the existence of a constant C_3 > 0 such that

  ‖S_A^r c − Ψ ∗ c(2^{−r} ·)‖_∞ ≤ C_3 ‖Φ^{(r)} ∗ c − Ψ ∗ c‖_∞ + C_3 ‖f_c(2^{−r} ·) − (Φ ∗ f_c(2^{−r} ·))(2^r ·)‖_∞.

The first term goes to zero by (15), while the second term goes to zero by standard properties of the quasi–interpolant for test functions, cf. [5]. The equivalence of 2 and 3 follows from standard arguments. □
Theorem 4 and its proof were based on the assumption (8), which guaranteed the simple relationship (9) between S_A and S_B. In the case of a general E_A of positive dimension that is not spanned by the first m unit vectors, one has to return to the E_A–generating matrix V ∈ IR^{n×n} and its associated difference operator. Thus, the general result reads as follows.

COROLLARY 5. The subdivision scheme S_A associated with A ∈ ℓ^{n×n}_0(ZZ^s) is convergent if and only if dim E_A ≥ 1 and there exists some B ∈ ℓ^{ns×ns}_0(ZZ^s) such that ∇_V S_A = S_B ∇_V for some E_A–generating matrix V ∈ IR^{n×n} and ρ_∞(S_B|_{∇_V}) < 1.

Remark 1. The proof of Theorem 4 and the aforementioned fact that the differenced mask B is not unique have an interesting effect on Corollary 5. If there exist some E_A–generating matrix V and some mask B such that ∇_V S_A = S_B ∇_V and ρ_∞(S_B|_∇) < 1, then the subdivision scheme converges, and also ρ_∞(S_C|_∇) < 1 for any other mask C and E_A–generator W that satisfy ∇_W S_A = S_C ∇_W. In this respect, the choice of V and B in Corollary 5 is irrelevant.

It is important to emphasize that the restricted spectral properties of any difference operator S_B derived from a given subdivision scheme associated with the mask A fully characterize the convergence of the original scheme S_A, while the non–restricted ones can fail to do so. To see this, let A and B satisfy (9), which in symbol notation reads

  ∇*(z) A*(z) = B*(z) ∇*(z²).   (16)

Moreover, let C ≠ 0 satisfy C*(z) ∇*(z²) = 0. Such matrix sequences can be easily parameterized by a basis for the module of syzygies on
the vector (z_j² − 1 : j = 1, …, s), cf. [4]. Consider the family B(t) := B + tC, t ∈ IR, and note that S_A and S_{B(t)} still satisfy ∇S_A = S_{B(t)}∇ = S_B∇. On the other hand, the spectral radius of S_{B(t)} can be made arbitrarily large by choosing t sufficiently large, while the restricted norms satisfy ‖S_B|_∇‖_∞ = ‖S_{B(t)}|_∇‖_∞.
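In the scalar univariate case, the criterion of Theorem 4 can be tested numerically by estimating ‖S_B^r‖_∞^{1/r}, which dominates the restricted quantity ‖S_B^r|_∇‖_∞^{1/r}. A sketch under these assumptions (the mask of S_B^r has symbol B*(z)B*(z²)⋯B*(z^{2^{r−1}}), and ‖S_B^r‖_∞ is the largest absolute coefficient sum over the residue classes modulo 2^r; Chaikin's difference mask B*(z) = (1 + z)²/4 serves as example, and the helper names are ours):

```python
def upsample(p, t):
    """Coefficients of p(z^t) from those of p(z)."""
    out = [0.0] * ((len(p) - 1) * t + 1)
    for k, c in enumerate(p):
        out[k * t] = c
    return out

def polymul(p, q):
    """Coefficient list of the product polynomial p(z) q(z)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def iterated_norm(b, r):
    """||S_B^r||_inf for a scalar univariate mask b: build the mask of S_B^r
    via its symbol b(z) b(z^2) ... b(z^(2^(r-1))), then take the maximum of
    the absolute coefficient sums over the residue classes modulo 2^r."""
    br = list(b)
    for j in range(1, r):
        br = polymul(br, upsample(b, 2 ** j))
    sums = [0.0] * (2 ** r)
    for k, c in enumerate(br):
        sums[k % (2 ** r)] += abs(c)
    return max(sums)

b = [0.25, 0.5, 0.25]   # difference mask of Chaikin's scheme
roots = [iterated_norm(b, r) ** (1.0 / r) for r in (1, 2, 3)]
# each value equals 1/2 < 1, certifying convergence of the original scheme
```

Since here even the non-restricted norms are contractive, convergence follows without solving the linear programs of Section 5; the restricted norm is only needed when this simple bound fails to drop below 1.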
4. Smoothness Analysis

In this section we state a sufficient condition for the limit functions of a subdivision scheme to have a certain degree of smoothness. Again, the restriction to ∇ℓ^n_∞(ZZ^s) ⊂ ℓ^{ns}_∞(ZZ^s) will play a crucial role. For that reason, we will call the subdivision scheme S_B, B ∈ ℓ^{ns×ns}_0(ZZ^s), restrictedly convergent if for any sequence c ∈ ∇ℓ^n_∞(ZZ^s) there exists a function f_c ∈ C(IR^s)^{ns} such that

  lim_{r→∞} ‖S_B^r c − f_c(2^{−r} ·)‖_∞ = 0,

and f_c ≠ 0 for some c ∈ ∇ℓ^n_∞(ZZ^s). Alternatively, we will also say that the restricted scheme S_B|_∇ is convergent. An analysis of restricted convergence, though similar to that in the preceding section, is more intricate and thus will not be presented here. However, it is worthwhile to mention that the convergence can again be described in terms of spectral properties of S_C satisfying ∇_V S_B = S_C ∇_V with C ∈ ℓ^{ns²×ns²}_0(ZZ^s) and V ∈ IR^{ns×ns}. Clearly, C will contain further redundancies that stem from the commuting of partial difference operators, i.e., ∇_j ∇_k = ∇_k ∇_j, j, k = 1, …, s. Since such a factorization approach relies heavily on the performance of the subdivision scheme on constant sequences, recall for example Lemma 1, it is necessary that the space ∇ℓ^n_∞(ZZ^s) still contains the constant sequences, and fortunately it does. More precisely, all constant sequences of the form

  y(α) = (y_j : j = 1, …, s),   α ∈ ZZ^s,   y_j ∈ E_A,

can be written as y = ∇_V c, where

  c(α) = − Σ_{j=1}^{s} y_j α_j,   α ∈ ZZ^s.

For more information see [19]. A detailed study of restricted convergence shall be presented in a forthcoming paper.
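In the simplest scalar univariate setting (s = n = m = 1, V = 1) this construction is easy to verify numerically: the constant sequence y is the backward difference of c(α) = −yα. A minimal sketch under these assumptions (dict-based sequences, names ours):

```python
def nabla(c):
    """Backward difference (nabla c)(alpha) = c(alpha - 1) - c(alpha),
    computed at all indices alpha where both entries of the dict exist."""
    return {a: c[a - 1] - c[a] for a in c if a - 1 in c}

y = 2.5
c = {a: -y * a for a in range(-5, 6)}   # c(alpha) = -y * alpha
d = nabla(c)                            # every computed difference equals y
```

Indeed, c(α − 1) − c(α) = −y(α − 1) + yα = y, so the constant sequence lies in the range of ∇ as claimed.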
Here we will give a simple sufficient condition that guarantees that a subdivision scheme S_A converges to limit functions of some given smoothness. To keep the presentation simple, we again restrict ourselves to the case when E_A is spanned by the first m unit vectors for some 1 ≤ m ≤ n.

THEOREM 6. Suppose that A ∈ ℓ^{n×n}_0(ZZ^s) satisfies (8) and admits a convergent subdivision scheme. Let B be given by (9). If 2 S_B|_∇ is convergent with limit functions in C^k(IR^s)^{ns}, then S_A is convergent and its limit functions belong to C^{k+1}(IR^s)^n.

Proof. Let γ ∈ IN^s be such that γ_j > k, j = 1, …, s, and let

  N_γ(x) = Π_{j=1}^{s} (χ ∗ ⋯ ∗ χ)(x_j)   (γ_j convolution factors of χ in the j–th coordinate)   (17)
denote the tensor product B–spline of order γ, which has the property that

  (∂/∂x_j) N_γ = −∇_j N_{γ−ε_j},   j = 1, …, s.

Let us denote by C_D(IR^s) := C^1(IR^s)^m ⊕ C(IR^s)^{n−m} the space of n–vector valued functions f ∈ C(IR^s)^n such that

  f_j ∈ C^1(IR^s) for j = 1, …, m,   and   f_j ∈ C(IR^s) for j = m+1, …, n.
Next, we define for any f ∈ C_D(IR^s) the operators

  D_j f = ( ∂f_1/∂x_j, …, ∂f_m/∂x_j, f_{m+1}, …, f_n )^T,   j = 1, …, s,

and

  Df = ( D_1 f ; … ; D_s f ) ∈ C(IR^s)^{ns}.

With the n×n and ns×ns diagonal matrix valued functions Φ := N_γ I_n and

  Ξ := diag( [ −N_{γ−ε_j} I_m, 0 ; 0, N_γ I_{n−m} ] : j = 1, …, s ),

we get for any c ∈ ℓ^n_∞(ZZ^s) by (17) that

  D(Φ ∗ c) = Ξ ∗ ∇c,   (18)

from which it follows that

  D(Φ ∗ S_A^r c)(2^r ·) = 2^r (Ξ ∗ ∇S_A^r c)(2^r ·) = (Ξ ∗ (2S_B)^r ∇c)(2^r ·).
Since Ξ is a matrix test function, the convergence of S_B|_∇ yields that the sequence g^{(r)} := D(Φ ∗ S_A^r c)(2^r ·) converges, at least, to a continuous function. Since S_A converges by assumption, the sequence of functions f^{(r)} := (Φ ∗ S_A^r c)(2^r ·) is a Cauchy sequence with respect to the norm

  ‖f‖_D := ‖f‖_∞ + ‖Df‖_∞,

which makes C_D(IR^s) a Banach space. Let f_c and g_c be the limit functions for S_A^r c and S_B^r ∇c, respectively; then f_c ∈ C_D(IR^s), and therefore Df_c exists and satisfies Df_c = g_c. But since, by assumption, g_c ∈ C^k(IR^s)^{ns}, we get that (f_c)_j ∈ C^{k+1}(IR^s), j = 1, …, m, for any initial sequence c, while the necessary condition for convergence yields that (f_c)_j = 0, j = m+1, …, n. Thus f_c ∈ C^{k+1}(IR^s)^n as claimed. □

Since any convergent subdivision scheme is also restrictedly convergent, we immediately have the following, weaker criterion.

COROLLARY 7. If, under the assumptions of Theorem 6, 2 S_B converges to limit functions in C^k(IR^s)^{ns}, then the limit functions of S_A belong to C^{k+1}(IR^s)^n.

Theorem 6 and Corollary 7 give us a method for verifying that a limit function of a vector subdivision scheme is k times continuously differentiable. We emphasize once more that Theorem 6 and Corollary 7 provide only sufficient conditions for checking the differentiability of a limit function. We can iterate Corollary 7 to obtain a criterion for checking smoothness by means of repeated differencing, as is done in the univariate scalar case, cf. [1, 6], and in the univariate vector case, cf. [16, 17]. However, this becomes more intricate, as the assumption (8) does not assure that E_B is also spanned by unit vectors. Quite the contrary, this happens only under very rare circumstances. Therefore, we must again incorporate the more general difference operators ∇_V as used in Corollary 5. By iterating Corollary 7 we then obtain the following result.

COROLLARY 8.
Suppose that for A ∈ ℓ^{n×n}_0(ZZ^s) there exist B^{(j)} ∈ ℓ^{ns^j×ns^j}_0(ZZ^s), j = 0, …, k, and V_j ∈ IR^{ns^j×ns^j}, j = 0, …, k−1, such that B^{(0)} = A, V_j is an E_{B^{(j)}}–generating matrix and

  ∇_{V_j} ⋯ ∇_{V_0} S_A = S_{B^{(j+1)}} ∇_{V_j} ⋯ ∇_{V_0},   j = 0, …, k−1.

If S_A and 2^k S_{B^{(k)}} converge, then all limit functions of S_A belong to C^k(IR^s)^n.
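The B–spline derivative identity ∂N_γ/∂x_j = −∇_j N_{γ−ε_j} underlying the proof of Theorem 6 can be checked numerically in one variable, where N_k is the cardinal B–spline of order k with support [0, k]. A sketch using the Cox–de Boor recursion (the function names and sample points are ours):

```python
def bspline(k, x):
    """Cardinal B-spline of order k >= 1 via the Cox-de Boor recursion."""
    if k == 1:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x * bspline(k - 1, x) + (k - x) * bspline(k - 1, x - 1.0)) / (k - 1)

def num_deriv(k, x, h=1e-6):
    """Central finite-difference approximation of N_k'(x)."""
    return (bspline(k, x + h) - bspline(k, x - h)) / (2.0 * h)

# N_k' = N_{k-1} - N_{k-1}(. - 1), i.e. minus the backward difference of
# N_{k-1}; check the identity for k = 4 at a few sample points:
max_err = max(
    abs(num_deriv(4, x) - (bspline(3, x) - bspline(3, x - 1.0)))
    for x in (0.3, 1.1, 2.7, 3.5)
)
# max_err is of the order of the finite-difference error
```

This is the mechanism by which one factor of smoothness is traded against one application of the difference operator in Theorem 6 and Corollary 8.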
5. Computing restricted norms

In this section we present a method for estimating the restricted spectral radius ρ_∞(S_B|_∇) of the “difference scheme” S_B. Since by standard properties of spectral radii one has

  ρ_∞(S_B|_∇) = inf_{r∈IN} ‖S_B^r|_∇‖_∞^{1/r},

the usual way to do so is to compute the restricted norms of the iterates S_B^r. In particular, the spectral radius is less than 1 if there exists some R ∈ IN such that ‖S_B^R|_∇‖_∞ < 1. Clearly, ‖S_B^r|_∇‖_∞ ≤ ‖S_B^r‖_∞ for any r ∈ IN, where the non–restricted norm on the right hand side can be computed efficiently but only gives a non–sharp upper estimate. To resolve this problem, we present a systematic way of computing the restricted norm of the iterates of a subdivision scheme by means of linear optimization.

To that end, given B ∈ ℓ^{ns×ns}_0(ZZ^s) and r ∈ IN, we define a sequence C^{(r)} ∈ ℓ^{ns×ns}_0(ZZ^s) by setting, for α ∈ ZZ^s, j, k = 1, …, ns and r ≥ 0,

  C^{(r)}_{jk}(α) := B^{(r)}_{jk}(α − 2^r ε_ℓ) − B^{(r)}_{jk}(α)   if k − (ℓ−1)n ∈ {1, …, m},
  C^{(r)}_{jk}(α) := B^{(r)}_{jk}(α)   otherwise.
With the matrix J = [ I_n ; … ; I_n ] ∈ IR^{ns×n} (s copies of I_n stacked on top of each other) it follows from straightforward computations that for any c ∈ ℓ^n_∞(ZZ^s)

  ‖S_B^r ∇c‖_∞ = sup_{α∈ZZ^s} | Σ_{β∈ZZ^s} C^{(r)}(α − 2^r β) Jc(β) |_∞.   (19)

Replacing α by α + 2^r γ, γ ∈ ZZ^s, on the right hand side of (19) and taking into account the shift invariance of the norm ‖∇c‖_∞, we obtain that

  ‖S_B^r ∇c‖_∞ = max_{α∈[0,2^r−1]^s} | Σ_{β∈ZZ^s} C^{(r)}(α − 2^r β) Jc(β) |_∞.
We set G = ([−1, 1]^s − supp B) ∩ ZZ^s. The finite support of C^{(r)}, r ≥ 0, yields that

  ‖S_B^r|_∇‖_∞ = max_{‖∇c|_G‖_∞=1} max_{α∈[0,2^r−1]^s} | Σ_{β∈G} C^{(r)}(α − 2^r β) Jc(β) |_∞.   (20)
The restriction on ∇c|_G in (20) is decomposed into the (pairs of) linear side conditions

  −1 ≤ ∇_ℓ c_k(β) ≤ 1,   k = 1, …, m,   ℓ = 1, …, s,   β ∈ G,

and

  −1 ≤ c_k(β) ≤ 1,   k = m+1, …, n,   β ∈ G.
For fixed j = 1, …, ns and α ∈ [0, 2^r − 1]^s we then compute

  [ Σ_{β∈G} C^{(r)}(α − 2^r β) Jc(β) ]_j
    = Σ_{β∈G} Σ_{k=1}^{ns} C^{(r)}_{jk}(α − 2^r β) (Jc(β))_k
    = Σ_{β∈G} Σ_{k=1}^{n} [ Σ_{ℓ=1}^{s} C^{(r)}_{j,(ℓ−1)n+k}(α − 2^r β) ] c_k(β)
    =: Σ_{β∈G} Σ_{k=1}^{m} D^α_{jk}(β) c_k(β) + Σ_{β∈G} Σ_{k=m+1}^{n} D^α_{jk}(β) c_k(β).

This decoupled problem is maximized by choosing, for any fixed j = 1, …, ns and α ∈ [0, 2^r − 1]^s,

  c^{α,j}_k(β) = sgn D^α_{jk}(β),   k = m+1, …, n,   β ∈ G,   (21)

and solving the linear programs

  max Σ_{β∈G} Σ_{k=1}^{m} D^α_{jk}(β) c_k(β),   −1 ≤ ∇_ℓ c_k(β) ≤ 1,   ℓ = 1, …, s,   (22)

which can be done by standard methods of linear optimization, as there are only a finite number of parameters and side conditions involved. Finally, we take the maximum over j = 1, …, ns and α ∈ [0, 2^r − 1]^s. Observe that the computationally involved part (22) can be further decomposed into solving

  max Σ_{β∈G} D^α_{jk}(β) c_k(β),   −1 ≤ ∇_ℓ c_k(β) ≤ 1,   ℓ = 1, …, s,

separately for each k = 1, …, m and then adding the results. Together, (21) and (22) maximized over j and α yield the desired restricted norm. Clearly, the computational cost of this procedure increases exponentially with the iteration level r, but this is a typical phenomenon of all spectral radius estimates based on iterating a subdivision scheme.
References

1. A. S. Cavaretta, W. Dahmen, C. A. Micchelli, Stationary Subdivision, Mem. Amer. Math. Soc. 93, No. 453 (1991).
2. M. Charina, C. Conti, Regularity of multivariate vector subdivision schemes, Preprint, Dipartimento di Energetica, Università di Firenze, 2002.
3. D. R. Chen, R. Q. Jia, S. D. Riemenschneider, Convergence of vector subdivision schemes in Sobolev spaces, Appl. Comput. Harmon. Anal. 12 (2002), 128–149.
4. D. Cox, J. Little, D. O’Shea, Ideals, Varieties and Algorithms, 2nd ed., Springer, 1996.
5. W. Dahmen, C. A. Micchelli, Biorthogonal wavelet expansions, Constr. Approx. 13 (1997), 294–328.
6. N. Dyn, Subdivision schemes in CAGD, in: Advances in Numerical Analysis, vol. II: Wavelets, Subdivision Algorithms and Radial Basis Functions (W. A. Light, ed.), pp. 36–104, Oxford University Press, 1992.
7. N. Dyn, D. Levin, Subdivision schemes in geometric modelling, in: Acta Numerica, pp. 1–72, Cambridge University Press, 2002.
8. N. Dyn, D. Levin, Matrix subdivision – analysis by factorization, in: Approximation Theory (B. D. Bojanov, ed.), pp. 187–211, DARBA, Sofia, 2002.
9. B. Han, Vector cascade algorithms and refinable function vectors in Sobolev spaces, J. Approx. Theory 124 (2003), 44–88.
10. B. Han, R. Q. Jia, Multivariate refinement equations and convergence of subdivision schemes, SIAM J. Math. Anal. 29 (1998), 1177–1199.
11. R. Q. Jia, Subdivision schemes in Lp spaces, Adv. Comput. Math. 3 (1995), 309–341.
12. R. Q. Jia, S. D. Riemenschneider, D. X. Zhou, Vector subdivision schemes and multiple wavelets, Math. Comp. 67 (1998), 1533–1563.
13. R. Q. Jia, S. D. Riemenschneider, D. X. Zhou, Smoothness of multiple refinable functions and multiple wavelets, SIAM J. Matrix Anal. Appl. 21 (1999), 1–28.
14. Q. T. Jiang, Multivariate matrix refinable functions with arbitrary matrix, Trans. Amer. Math. Soc. 351 (1999), 2407–2438.
15. Q. T. Jiang, Z. Shen, On the existence and weak stability of matrix refinable functions, Constr. Approx. 15 (1999), 337–353.
16. C. A. Micchelli, T. Sauer, Regularity of multiwavelets, Adv. Comput. Math. 7 (1997), 455–545.
17. C. A. Micchelli, T. Sauer, On vector subdivision, Math. Z. 229 (1998), 621–674.
18. H. M. Möller, T. Sauer, Multivariate refinable functions of high approximation order via quotient ideals of Laurent polynomials, Adv. Comput. Math. (2003), to appear.
19. T. Sauer, Stationary vector subdivision – quotient ideals, differences and approximation power, RACSAM Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 229 (2002), 621–674.
20. T. Sauer, Polynomial interpolation, ideals and approximation order of refinable functions, Proc. Amer. Math. Soc. 130 (2002), 3335–3347.
21. Z. Shen, Refinable function vectors, SIAM J. Math. Anal. 29 (1998), 235–250.