Differentiability of multivariate refinable functions and factorization

Thomas Sauer ([email protected])
Lehrstuhl für Numerische Mathematik, Justus-Liebig-Universität Gießen, Heinrich-Buff-Ring 44, D-35392 Gießen, Germany

February 16, 2004

Abstract. The paper develops a necessary condition for the regularity of a multivariate refinable function in terms of a factorization property of the associated subdivision mask. The extension to arbitrary isotropic dilation matrices necessitates the introduction of the concepts of restricted and renormalized convergence of a subdivision scheme, as well as the notion of subconvergence, i.e., the convergence of only a subsequence of the iterations of the subdivision scheme. Since, in addition, factorization methods pass even from scalar to matrix valued refinable functions, those results have to be formulated in terms of matrix refinable functions or vector subdivision schemes, respectively, in order to be suitable for iterated application. Moreover, it is shown for a particular case that the condition is not only a necessary but also a sufficient one.

Dedicated to Charles A. Micchelli, a unique person, friend, mathematician and collaborator, on the occasion of his sixtieth birthday

1. Introduction

The theory of refinable functions and the associated subdivision schemes that emerged from the almost classical monograph [1] has been a very active field of research in the past decade. One particular question to be considered was how the differentiability of a refinable function can be described by means of the associated refinement mask. Let us recall how this looks in the univariate case. Here, a compactly supported function ϕ is called refinable, a term actually coined by C. A. Micchelli, if there exists a finitely supported sequence a such that

  ϕ = Σ_{k∈Z} a_k ϕ(2· − k),    (1)

where the above sum is finite due to the compact support of ϕ and a. Usually, the function ϕ is only given implicitly by (1), and all that is known is the mask a, so properties of ϕ have to be read off from a. A natural operator associated to a is the stationary subdivision operator S_a that maps a sequence c to the sequence

  S_a c = ( Σ_{k∈Z} a_{j−2k} c_k : j ∈ Z ),
© 2004 Kluwer Academic Publishers. Printed in the Netherlands.

representing a discrete function on the grid Z/2. Iterating the subdivision operator yields sequences S_a^r c corresponding to functions on 2^{−r} Z, r ∈ N_0, and one can introduce a notion of convergence by requiring the existence of a limit function f_c such that S_a^r c → f_c. This intuitive idea will be made precise later. Finally, associate to the mask a the Laurent polynomial

  a*(z) = Σ_{k∈Z} a_k z^k,    z ∈ C \ {0},
often called the symbol of a. Then the following result is well-known and can be found, for example, in [1, 3, 5, 12]; this list is in no way complete, and I apologize to those not mentioned, as this was not done with any unfriendly intention.

THEOREM 1. If ϕ is stable, then ϕ is k times differentiable if and only if there exists a finitely supported sequence b such that

  a*(z) = 2^{−k} (z + 1)^k b*(z)

and the subdivision scheme S_b converges.

This theorem relates differentiability of refinable functions to factorizations of the symbol or, equivalently, to the existence of a multiple zero at z = −1. It is well-known that a straightforward transition of this concept to the multivariate case is impossible, as there is no such factorization in several variables; one must instead consider containment in quotient ideals as a replacement for the zero at −1, as already indicated in [1] for dyadic scaling, see [16, 18, 19] for more details. A more convenient way to express factorizations is to consider them in terms of difference operators, which is an equivalent description in the scalar univariate case described above as well as in univariate vector subdivision; there, the description of differentiability in terms of factorization is given in [14].

This paper considers the part "⇒" of Theorem 1 for vector subdivision in several variables and with an arbitrary isotropic scaling matrix, making use of difference schemes. Since passing to a difference scheme turns even scalar subdivision schemes into vector subdivision schemes, as was already shown in [1, 5], the iterated application of the factorization process invariably necessitates that the results be developed and formulated for vector subdivision schemes of arbitrary dimension. This makes the results more general, but also more intricate and more complicated in the details. To derive the extension of Theorem 1, which is stated as Theorem 15, we will first introduce the necessary notation in Section 2.
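The factorization in Theorem 1 is easy to check numerically. The sketch below is illustrative only (it is not from the paper); it uses the standard cubic B-spline mask, divides the symbol by 2^{−k}(z + 1)^k, and verifies that the quotient mask still satisfies the sum rules.

```python
import numpy as np

# Cubic B-spline mask: a*(z) = (1 + z)^4 / 8, so a*(1) = 2 and the symbol
# has a fourfold zero at z = -1.
a = np.array([1, 4, 6, 4, 1]) / 8.0

def factor_symbol(mask, k):
    """Return b with a*(z) = 2^{-k} (z+1)^k b*(z), or None if (z+1)^k
    does not divide the symbol."""
    b = np.asarray(mask, dtype=float) * 2.0**k
    for _ in range(k):
        b, rem = np.polydiv(b, np.array([1.0, 1.0]))  # divide by (z + 1)
        if not np.allclose(rem, 0.0):
            return None
    return b

b = factor_symbol(a, 2)  # b*(z) = (1 + z)^2 / 2, the hat-function mask
```

Here S_b is again a convergent scheme (its limit is the hat function), matching the factorization direction of Theorem 1 for k = 2.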
Section 3 will be devoted to introducing some modified concepts of convergence and to proving some basic facts about them, which will be extended to modified notions of stability and refinability in Section 4. The heart of the matter is Section 5, which uses the previous results to establish the main result, Theorem 15. In the final part, Section 6, we briefly remark for a special case that


the conditions derived before are not only necessary but also sufficient for the differentiability of the limit function, and hint at how this is proved. A full proof of the converse would be too lengthy and will not appear in this paper. The results of this paper do not claim to provide practical means for checking the regularity of a given refinable function or for constructing new refinable functions with prescribed smoothness; the difficulties of using factorization methods to "smoothen" a given function in several variables have already been pointed out in [20]. The sole aim is to show that the analogies with the simple univariate case still exist in the more general situation, but necessitate more intricate concepts, whose origin will be pointed out in Remark 4 following Theorem 15.

2. Preliminaries

We begin by setting up the notational conventions used throughout this paper. For N, M ∈ N, we denote by ℓ^{N×M}(Z^s) the set of all N×M matrix valued functions from Z^s to R or, equivalently, all matrix valued sequences indexed by Z^s; those will be written as A = (A(α) : α ∈ Z^s). The case M = 1 corresponds to column vectors, for which we will write c ∈ ℓ^N(Z^s). By ℓ_p^{N×M}(Z^s), 1 ≤ p ≤ ∞, we denote the Banach subspaces of ℓ^{N×M}(Z^s) for which the norms

  ‖A‖_p := ( Σ_{α∈Z^s} |A(α)|_p^p )^{1/p},    1 ≤ p < ∞,

and

  ‖A‖_∞ := sup_{α∈Z^s} |A(α)|_∞,

respectively, are finite, where for A ∈ R^{N×M} we denote by |A|_p the p-operator norm if M > 1 and the p-vector norm if M = 1. Moreover, we use ℓ_{00}^{N×M}(Z^s) for the sequences with finite support.

As in [13, 14], we consider the function spaces H_p(R^s), where we set H_p(R^s) = L_p(R^s) for 1 ≤ p < ∞ and H_∞(R^s) = C_u(R^s), the space of uniformly continuous and uniformly bounded functions on R^s, all of them equipped with their natural norms. For k ∈ N_0 we use H_p^k(R^s) to denote the functions that have a k-th derivative in H_p(R^s), while the space of vector fields with all components in H_p(R^s) will be denoted by H_p(R^s)^N and the matrix valued functions by H_p(R^s)^{M×N}. We will use convolution notation also for discrete objects, defining

  A ∗ B = Σ_{α∈Z^s} A(· − α) B(α),

where A belongs either to ℓ^{L×N}(Z^s) or to H_p(R^s)^{L×N} and B ∈ ℓ^{N×M}(Z^s). Usually, we will consider the cases L = N and M = 1, N.

Let Ξ ∈ Z^{s×s} be a matrix all of whose eigenvalues are greater than one in modulus or, equivalently, ‖Ξ^{−r}‖ → 0 as r → ∞, where ‖·‖ is an arbitrary norm for s×s matrices. Such a matrix is called an expanding matrix. Obviously, ξ := |det Ξ| then satisfies ξ > 1. Moreover, Ξ is called isotropic, cf. [6], if it is similar to a diagonal matrix all of whose entries have the same modulus or, equivalently, if its powers Ξ^r, r ∈ N, are equiconditioned, that is, if the following holds true:

  C_Ξ := sup_{r∈N} ‖Ξ^r‖ ‖Ξ^{−r}‖ < ∞.    (2)

We write Q = [0, 1]^s for the unit cube of R^s and note that for any r ∈ N we have the decomposition

  R^s = ∪_{α∈Z^s} Ξ^{−r}(α + Q)

since Ξ is nonsingular. It is well-known that the quotient group Z^s/ΞZ^s has order ξ and that it is isomorphic to the set Γ = Ξ[0, 1)^s ∩ Z^s. The structure of this group is most conveniently exploited by considering the Smith factorization of Ξ, cf. [10, 11].

To A ∈ ℓ_{00}^{N×M}(Z^s) we associate the symbol A*, which is the matrix valued Laurent polynomial defined as

  A*(z) = Σ_{α∈Z^s} A(α) z^α,    z ∈ (C \ {0})^s,

as well as, for γ ∈ Γ, the subsymbols

  A*_γ(z) = Σ_{α∈Z^s} A(γ + Ξα) z^α,    z ∈ (C \ {0})^s,
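For a concrete instance of the coset set Γ = Ξ[0, 1)^s ∩ Z^s, the following sketch (illustrative; the quincunx matrix is one of the standard examples mentioned later in the paper) enumerates it by brute force:

```python
import itertools
import numpy as np

Xi = np.array([[1, 1], [1, -1]])      # quincunx dilation matrix
xi = abs(round(np.linalg.det(Xi)))    # xi = |det Xi| = 2
XiInv = np.linalg.inv(Xi)

# Gamma = Xi [0,1)^s  intersected with  Z^s: integer points whose
# Xi^{-1}-preimage lies in the half-open unit cube [0,1)^s
eps = 1e-12
Gamma = [g for g in itertools.product(range(-3, 4), repeat=2)
         if np.all((XiInv @ g >= -eps) & (XiInv @ g < 1 - eps))]
```

The ξ = 2 representatives found here, (0, 0) and (1, 0), represent the quotient group Z²/ΞZ².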
where the s-vector z^Ξ is defined by the property (z^Ξ)^α = z^{Ξα}, α ∈ Z^s.

The topic of this paper is the investigation of stationary vector subdivision schemes, which start with an initial sequence c^0 = c ∈ ℓ^N(Z^s) and inductively compute sequences c^r, r ∈ N. Normally, this is done by setting

  c^{r+1} = S_A c^r := Σ_{α∈Z^s} A(· − Ξα) c^r(α),    r ∈ N_0,    (3)

cf. [1, 2, 4, 13, 14, 19], but in the context of this paper we will need more general types of such sequences, which arise necessarily in the context of derivatives of refinable functions. What all these notions have in common is that the data c^r is related to the grid Ξ^{−r} Z^s, i.e., for α ∈ Z^s the value c^r(α)


is thought to approximate the value of a function at the point Ξ^{−r}α. Since the grids become denser and denser as r increases, there is a natural notion of convergence of the subdivision scheme to a limit function. Following [4], we define the mean value operator μ_r : H_p(R^s)^N → ℓ_p^N(Z^s) as

  (μ_r f)(α) = ξ^r ∫_{Ξ^{−r}(α+Q)} f(t) dt,    α ∈ Z^s,

and recall, cf. [4, 13], that its operator norm satisfies ‖μ_r‖_p = ξ^{r/p}. Now, the subdivision scheme is said to be convergent on ℓ_p^N(Z^s) if for any c ∈ ℓ_p^N(Z^s) there exists a vector field f_c ∈ H_p(R^s)^N such that

  lim_{r→∞} ξ^{−r/p} ‖c^r − μ_r f_c‖_p = 0.    (4)

It has been shown in [4] that whenever the subdivision scheme converges, there exists a function F ∈ H_p(R^s)^{N×N} such that f_c = F ∗ c. Moreover, F is a refinable function, that is,

  F = (F ∗ A)(Ξ ·).    (5)

A convenient tool when dealing with matrices will be the Kronecker product A ⊗ B of two matrices A ∈ R^{M1×N1} and B ∈ R^{M2×N2}, which is defined as the block matrix

  A ⊗ B = ( A_{jk} B : j = 1, …, M1, k = 1, …, N1 ) ∈ R^{M1 M2 × N1 N2}.

Among other properties that can be found for example in [21], the Kronecker product has the property that, whenever the multiplication of A and C as well as of B and D is well-defined (which is a matter of the matrix dimensions being compatible), we have

  (A ⊗ B)(C ⊗ D) = AC ⊗ BD.    (6)

Here C and D can be either matrices or vectors. For a matrix A ∈ R^{N×M} we denote by R(A) the range of A, i.e., the vector subspace A·R^M of R^N. Moreover, if F ∈ H_p(R^s)^{N×M} is a matrix valued function, then we denote by R(F) the smallest linear subspace of R^N such that R(F(x)) is contained in this subspace almost everywhere with respect to x.
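Property (6) is easy to confirm numerically with NumPy's `kron`; the following is a quick sanity check on random matrices, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A, C = rng.standard_normal((3, 4)), rng.standard_normal((4, 2))
B, D = rng.standard_normal((2, 5)), rng.standard_normal((5, 3))

# (A x B)(C x D) = AC x BD whenever AC and BD are defined
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
```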


3. Restricted and renormalized convergence

The concept of restricted convergence, which will turn out to be crucial for the investigation of differentiability, will be introduced and studied in this section. Here, "restricted" will always refer to an S_A-invariant subspace L of ℓ_p^N(Z^s), that is, S_A L ⊆ L. Taking into account the iterative structure of subdivision schemes, this is nothing but a natural requirement: instead of initializing the subdivision scheme with c^0, it should also be possible to start it with any intermediate result c^r, r ∈ N, without affecting convergence or the limit function. In addition, we require that L is generated by the translates of a finite number of finitely supported sequences in ℓ^N(Z^s), a property that all the spaces ℓ_p^N(Z^s) have. As a consequence, L is always translation invariant, i.e., c ∈ L implies that c(· + α) ∈ L for any α ∈ Z^s. Moreover, the assumption also ensures that L contains some nontrivial finitely supported sequences, i.e., L ∩ ℓ_{00}^N(Z^s) ≠ {0}. All those requirements shall be met by the specific choices of L that appear later, as those consist of the full sequence space or differences applied to it.

Moreover, let X ∈ R^{N×N} be an expanding isotropic N×N matrix, i.e., there exists a constant C_X > 0 such that |X^r|_p |X^{−r}|_p ≤ C_X. A renormalized stationary subdivision scheme based on the mask A ∈ ℓ_{00}^{N×N}(Z^s) and the normalization matrix X starts with an initial sequence c^0 = c ∈ ℓ_p^N(Z^s) and computes a sequence c^r, r ∈ N, by

  c^r := X^r S_A^r c = X^r S_A X^{1−r} c^{r−1}.    (7)

We now consider the behavior of this sequence c^r for initial data chosen only from the subspace L. The concept of convergence transfers in a straightforward way.

Definition 1. We say that A ∈ ℓ_{00}^{N×N}(Z^s) admits a convergent subdivision restricted to L and normalized by X if for any c ∈ L there exists a vector field f_c ∈ H_p(R^s)^N such that

  0 = lim_{r→∞} ξ^{−r/p} ‖c^r − μ_r f_c‖_p = lim_{r→∞} ξ^{−r/p} ‖X^r S_A^r c − μ_r f_c‖_p    (8)

and if there exists at least one c ∈ L such that f_c ≠ 0.

Remark 1. The introduction of restricted and renormalized subdivision is enforced by two different aspects of multivariate subdivision schemes. Indeed, restricted convergence is needed to handle the fact that the span of the difference operators, see (19), is a proper subspace of the respective vector sequence space, while renormalization is a consequence of dilation matrices Ξ other than multiples of the identity matrix and appears immediately when the refinement equation F = (F ∗ A)(Ξ ·) is differentiated; see Remark 2, Example 1 and Lemma 17.


The usual notion of convergence occurs when X = I is the identity matrix and L = ℓ_p^N(Z^s) is the full sequence space. We continue by giving a necessary condition for the convergence of renormalized subdivision schemes.

PROPOSITION 2. If A ∈ ℓ_{00}^{N×N}(Z^s) admits a convergent subdivision scheme restricted to L and normalized by X ∈ R^{N×N}, then there exists an invertible matrix Y ∈ R^{N×N} such that for any c ∈ L we have that

  Y X A*_γ(1) Y^{−1} f_c(x) = f_c(x),    γ ∈ Γ, a.e. x ∈ R^s.    (9)

We precede the proof of Proposition 2 by making an observation on the isotropic renormalization matrix X. To that end, let σ = ρ(X) denote the spectral radius of X and define

  W := σ^{−1} X = |det X|^{−1/N} X.    (10)

Then the matrix W has a useful property.

LEMMA 3. Let X be isotropic and let W be defined by (10). For any matrix norm ‖·‖ we have that

  sup_{r∈N} ‖W^{±r}‖ < ∞.    (11)

Proof. Recall, e.g. from [22, Theorem 3.8], that ‖X^{−r}‖ ≥ ρ(X^{−1})^r = σ^{−r} for any r ∈ N. Since X is isotropic, it then follows for r ∈ N that

  C_X ≥ ‖X^r‖ ‖X^{−r}‖ ≥ σ^{−r} ‖X^r‖ = ‖W^r‖,

and the same argument also shows that ‖W^{−r}‖ ≤ C_X.  □
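Lemma 3 can be observed numerically. In the sketch below (an illustration with an assumed example matrix, not from the paper), X is twice a rotation, hence isotropic with σ = ρ(X) = 2, and the powers of W = σ^{−1}X stay bounded in both directions:

```python
import numpy as np

t = 0.7
X = 2.0 * np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])      # eigenvalues 2 e^{+-it}
sigma = abs(np.linalg.det(X)) ** (1 / X.shape[0])  # = rho(X) = 2
W = X / sigma                                      # a rotation matrix

# spectral norms of W^r and W^{-r} for many exponents
norms = [np.linalg.norm(np.linalg.matrix_power(W, r), 2)
         for r in range(-40, 41) if r != 0]
```

Since W is orthogonal here, every norm equals 1, the extreme case of the uniform bound in (11).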

In the sequel, we will have to make use of subsequences of c^r. For a convenient formulation of this concept, we denote by

  R := { r : N → N : r(k+1) > r(k), k ∈ N }

the set of all indexing functions for subsequences and use, for r, r′ ∈ R, the notation r ⊆ r′ if r(N) ⊆ r′(N), indicating that r denotes a subsequence of r′.

Proof of Proposition 2. We consider only 1 ≤ p < ∞; the case p = ∞ is similar and even easier. For r ∈ N consider

  ‖X^{r+1} S_A^{r+1} c − μ_{r+1} f_c‖_p
    = ‖X^{r+1} S_A X^{−r} X^r S_A^r c − X^{r+1} S_A X^{−r} μ_r f_c + X^{r+1} S_A X^{−r} μ_r f_c − μ_{r+1} f_c‖_p
    ≥ ‖X^{r+1} S_A X^{−r} μ_r f_c − μ_{r+1} f_c‖_p − ‖X^{r+1} S_A X^{−r} (X^r S_A^r c − μ_r f_c)‖_p.


We examine the first term more closely by expanding its p-th power into

  ‖X^{r+1} S_A X^{−r} μ_r f_c − μ_{r+1} f_c‖_p^p
    = Σ_{α∈Z^s} | Σ_{β∈Z^s} X^{r+1} A(α − Ξβ) X^{−r} μ_r f_c(β) − μ_{r+1} f_c(α) |_p^p
    ≥ Σ_{α∈Z^s} | ( Σ_{β∈Z^s} X^{r+1} A(α − Ξβ) X^{−r} − I ) μ_{r+1} f_c(α) |_p^p
      − Σ_{α∈Z^s} | Σ_{β∈Z^s} X^{r+1} A(α − Ξβ) X^{−r} ( μ_r f_c(β) − μ_{r+1} f_c(α) ) |_p^p.

By Lemma 3 we have that |W^r|_p ≤ C_X, r ∈ N, and since closed and bounded sets in finite dimensional spaces are compact, there exist r ∈ R and a matrix Y ∈ R^{N×N} such that

  lim_{k→∞} W^{r(k)} = Y.

Therefore, there exists a constant C > 0 such that for k ∈ N and r = r(k) we have

  Σ_{α∈Z^s} | ( Σ_{β∈Z^s} X^{r+1} A(α − Ξβ) X^{−r} − I ) μ_{r+1} f_c(α) |_p^p
    = Σ_{γ∈Γ} Σ_{α∈Z^s} | ( X^{r+1} ( Σ_{β∈Z^s} A(γ + Ξβ) ) X^{−r} − I ) μ_{r+1} f_c(γ + Ξα) |_p^p
    = Σ_{γ∈Γ} Σ_{α∈Z^s} | ( W^r X A*_γ(1) W^{−r} − I ) μ_{r+1} f_c(γ + Ξα) |_p^p
    ≥ Σ_{γ∈Γ} Σ_{α∈Z^s} | ( Y X A*_γ(1) Y^{−1} − I ) μ_{r+1} f_c(γ + Ξα) |_p^p
      − C |Y|_p^p |W^r − Y|_p^p ( Σ_{γ∈Γ} | X A*_γ(1) − I |_p^p ) ‖μ_{r+1} f_c‖_p^p.

Combining all the preceding estimates we thus get for r = r(k), k ∈ N, that

  Σ_{γ∈Γ} ξ^{−r} ‖ ( Y X A*_γ(1) Y^{−1} − I ) μ_{r+1} f_c(γ + Ξ ·) ‖_p^p
    ≤ C |Y|_p^p |W^r − Y|_p^p ( Σ_{γ∈Γ} | X A*_γ(1) − I |_p^p ) ‖f_c‖_p^p    (12)


      + ξ^{−r} Σ_{α∈Z^s} | Σ_{β∈Z^s} X^{r+1} A(α − Ξβ) X^{−r} ( μ_r f_c(β) − μ_{r+1} f_c(α) ) |_p^p    (13)

      + C_X^p ‖S_A‖_p^p |X|_p^p ξ^{−r} ‖X^r S_A^r c − μ_r f_c‖_p^p    (14)

      + ‖X^{r+1} S_A^{r+1} c − μ_{r+1} f_c‖_p^p.    (15)

The term (12) converges to zero since W^r → Y, the one in (13) by a support size argument identical to the one in [4], and the ones in (14) and (15) due to the convergence of the subdivision scheme. This in turn implies that for γ ∈ Γ the series

  ξ Σ_{α∈Z^s} ∫_{Ξ^{−r}α + Ξ^{−r−1}Q} | ( Y X A*_γ(1) Y^{−1} − I ) f_c(t + Ξ^{−r−1}γ) |_p^p dt

converges to zero for r = r(k), k → ∞. On the other hand, arguments as in the proof of [15, Lemma 3.2] yield that the above sum also converges to ‖( Y X A*_γ(1) Y^{−1} − I ) f_c‖_p^p; hence the function ( Y X A*_γ(1) Y^{−1} − I ) f_c must be zero almost everywhere.  □

At this point, some remarks are worthwhile. We first note that in the simplest and most common case of renormalization by a scalar factor, X = s I, s > 1, the matrix Y can be chosen to be the identity matrix I and no subsequence is needed. In general, however, the matrix Y depends on X. Proposition 2 motivates and justifies the introduction of the joint 1-eigenspace

  E_XA := ∩_{γ∈Γ} { y ∈ R^N : X A*_γ(1) y = y }    (16)

by stating that for a convergent subdivision scheme we always have E_XA ≠ {0}, i.e., this space is nontrivial. Therefore, the number n = dim E_XA satisfies 1 ≤ n ≤ N. Multiplying both sides of (9) by Y^{−1}, we get the equivalent statement that Y^{−1} f_c(x) must be such an eigenvector for almost any x ∈ R^s and every c ∈ L. It follows immediately from (16) that y ∈ R^N belongs to E_XA if and only if X S_A y = y, where we use y also to denote the constant sequence identically equal to y. Observe, however, that constant sequences need not belong to L; they even belong to the superspace ℓ_p^N(Z^s) of L if and only if p = ∞.

Moreover, Proposition 2 and its proof suggest that it is reasonable to look at a weaker form of convergence which requires only subsequences of X^r S_A^r c to converge. To that end, we say that A ∈ ℓ_{00}^{N×N}(Z^s) admits a subconvergent


scheme relative to L and normalized by X if there exists r ∈ R such that for any c ∈ L the sequence X^{r(k)} S_A^{r(k)} c converges to f_c in the sense of Definition 1 as k → ∞. Note, however, that the subsequence r ∈ R must be independent of the data sequence c and thus is exclusively a property of the scheme, more precisely, a property of the renormalization matrix.

Remark 2. Subconvergence is not such an eccentric property as it may appear at first. Indeed, it is trivial that whenever S_a is a convergent subdivision scheme and b = −a, then S_b = −S_a is a subconvergent scheme, as S_b^r = (−1)^r S_a^r and thus the sequences S_b^{2r} and S_b^{2r+1} both converge. Such schemes occur naturally in connection with negative scaling factors in refinement equations, as can be seen from the following simple scalar example in one variable. While in the univariate case refinement equations with negative dilation factors have not played a significant role – most likely for good reasons – negative entries in the dilation matrices are not uncommon at all in the multivariate case. Actually, the best-studied examples, like quincunx refinement equations or √3-subdivision schemes, cf. [9, 8], have dilation matrices with some negative entries and the property that Ξ^r is a multiple of the identity matrix for some r > 1.

Example 1. Let ϕ be a centered B-spline of even order and let a be the associated symmetric refinement mask. The symmetry of ϕ and a implies that ϕ̂(ω) = ϕ̂(−ω), ω ∈ R, and a*(z^{−1}) = a*(z), z ∈ C \ {0}. Consequently, ϕ̂ also satisfies the identity ϕ̂(ω) = ½ a*(e^{iω/2}) ϕ̂(−ω/2), ω ∈ R, which is the Fourier transform of the refinement equation

  ϕ = Σ_{k∈Z} a_k ϕ(−2· − k)

that incorporates the scaling factor −2. Taking derivatives of the refinement equation, it follows that ψ, defined by ψ̂(ω) = (iω/(e^{iω} − 1)) ϕ̂(ω), is refinable with respect to the mask −b and the dilation factor −2, where b*(z) = (2/(z + 1)) a*(z). Thus, the subdivision scheme associated to −b and dilation factor −2 is precisely of the type mentioned in Remark 2. In particular, it is only subconvergent, with odd and even iterates differing by sign.

With this terminology at hand, similar arguments as in the proof of Proposition 2 yield the following result.

THEOREM 4. If A ∈ ℓ_{00}^{N×N}(Z^s) admits a subconvergent subdivision restricted to L and normalized by X ∈ R^{N×N}, then there exists a nonsingular matrix Y ∈ R^{N×N} such that for any c ∈ L we have that

  Y X A*_γ(1) Y^{−1} f_c(x) = f_c(x),    γ ∈ Γ, a.e. x ∈ R^s.    (17)
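The sign-flip mechanism of Remark 2 rests on the one-line identity S_{−a}^r = (−1)^r S_a^r. The sketch below (illustrative; it uses the cubic B-spline mask as a stand-in convergent scheme) checks it on a few iterations:

```python
import numpy as np

def subdivide(mask, c):
    """(S_a c)_j = sum_k a_{j-2k} c_k: upsample by 2, then convolve."""
    up = np.zeros(2 * len(c))
    up[::2] = c
    return np.convolve(mask, up)

a = np.array([1, 4, 6, 4, 1]) / 8.0      # convergent univariate scheme
c = np.array([0.0, 1.0, 0.0])
pos, neg = c.copy(), c.copy()
for _ in range(4):                       # an even number of steps
    pos, neg = subdivide(a, pos), subdivide(-a, neg)
```

After an even number of steps the two runs agree; one further step flips the sign again, so only the even (or only the odd) subsequence of the renormalized iterates can converge.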


The next result shows that, as long as only subconvergence is considered, the actual structure of the isotropic renormalization matrix X is irrelevant: subconvergence depends only on the spectral radius σ = |det X|^{1/N} of X. Indeed, we have the following fact.

PROPOSITION 5. Let A ∈ ℓ_{00}^{N×N}(Z^s) and let X ∈ R^{N×N} be isotropic. Then S_A is subconvergent relative to L with renormalization by X if and only if it is subconvergent with renormalization by σI.

Proof. Suppose S_A is subconvergent renormalized by X, i.e., there is r ∈ R such that for any c ∈ ℓ_p^N(Z^s) there exists f_c such that

  lim_{k→∞} ξ^{−r(k)/p} ‖X^{r(k)} S_A^{r(k)} c − μ_{r(k)} f_c‖_p = 0.

Set W^r := σ^{−r} X^r and note that by Lemma 3 there exists a subsequence r′ ⊆ r such that W^{r′(k)} → Y for k → ∞. Since

  ‖σ^{r′(k)} S_A^{r′(k)} c − μ_{r′(k)} Y^{−1} f_c‖_p
    ≤ |W^{−r′(k)}|_p ‖X^{r′(k)} S_A^{r′(k)} c − W^{r′(k)} Y^{−1} μ_{r′(k)} f_c‖_p
    ≤ |W^{−r′(k)}|_p ( ‖X^{r′(k)} S_A^{r′(k)} c − μ_{r′(k)} f_c‖_p + |I − W^{r′(k)} Y^{−1}|_p ‖f_c‖_p ),

the subconvergence normalized by σI follows immediately. The converse is proved by exactly the same argument.  □

Remark 3. It should be mentioned that, though subconvergence with renormalization by X is equivalent to subconvergence with renormalization by σI, the associated limit functions are not the same! However, they differ only by left multiplication by the matrix Y or Y^{−1}, respectively.

Combining Theorem 4 and Proposition 5, which allows us to replace X by σI, σ = |det X|^{1/N}, we thus obtain the following result.

COROLLARY 6. If A ∈ ℓ_{00}^{N×N}(Z^s) admits a subconvergent subdivision restricted to L and normalized by X ∈ R^{N×N}, then there exists a nonsingular matrix Y ∈ R^{N×N} such that for any c ∈ L we have that

  σ Y A*_γ(1) Y^{−1} f_c(x) = f_c(x),    γ ∈ Γ, a.e. x ∈ R^s.    (18)

We finally arrive at the concept of factorization of a (convergent) subdivision scheme. Indeed, let V ∈ R^{N×N} be any orthogonal matrix such that the first n columns of V span E_A, which we write as E_A = V R^n. Such a matrix will be called an E_A-generator. We now define the partial difference operators


D_j, j = 1, …, s, as D_j c = c(· + ε_j) − c, where ε_j denotes the j-th unit vector, and, depending on n and V, the difference operator D_{n,V} : ℓ^N(Z^s) → ℓ^{Ns}(Z^s) as

  D_{n,V} = ( diag( D_j I_n, I_{N−n} ) : j = 1, …, s ) V^T,    (19)

that is, the s blocks diag(D_j I_n, I_{N−n}) V^T, j = 1, …, s, stacked on top of each other, where I_k denotes the k×k identity matrix. With this notation at hand, and with Theorem 4 and Corollary 6 yielding n ≥ 1, we obtain the following result.

THEOREM 7. Suppose that A ∈ ℓ_{00}^{N×N}(Z^s) admits a subconvergent subdivision restricted to L and normalized by X ∈ R^{N×N}, and let V be an E_A-generator. Then there exist B, B′ ∈ ℓ_{00}^{Ns×Ns}(Z^s) such that

  D_{n,V} X S_A = S_B D_{n,V}    (20)

and

  D_{n,V} S_A = S_{B′} D_{n,V},    (21)

respectively.

A proof of Theorem 7, of (21) to be precise, can be found in [19], where it is also pointed out how (21) and (20) can be understood as an ideal theoretic generalization of the "zero at −1" property from the univariate case, a property involving quotient ideals of Laurent ideals. Since a more detailed description would require the introduction of further concepts and notation, we will not develop this connection any further here and just refer to [19]. On the other hand, the equivalence of (20) and (21) follows immediately from Proposition 5.
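In the scalar univariate case with dilation 2, a commutation relation of the form (21) reduces to Δ S_a = S_b Δ with b*(z) = z a*(z)/(1 + z), where Δc = c(· + 1) − c is the forward difference. The sketch below is illustrative (the concrete masks are not from the paper); it verifies the operator identity on zero-padded finite sequences:

```python
import numpy as np

def subdivide(mask, c):
    """(S_a c)_j = sum_k a_{j-2k} c_k: upsample by 2, then convolve."""
    up = np.zeros(2 * len(c))
    up[::2] = c
    return np.convolve(mask, up)

def fdiff(c):
    """Forward difference c(. + 1) - c on a zero-padded array."""
    return np.append(c[1:], 0.0) - c

a = np.array([1, 4, 6, 4, 1]) / 8.0   # a*(z) = (1 + z)^4 / 8
b = np.array([0, 1, 3, 3, 1]) / 8.0   # b*(z) = z a*(z) / (1 + z)

# zero padding keeps the finite arrays faithful to two-sided sequences
c = np.concatenate(([0.0, 0.0], [1.0, -2.0, 3.0], [0.0, 0.0]))
lhs = fdiff(subdivide(a, c))          # Delta S_a c
rhs = subdivide(b, fdiff(c))          # S_b Delta c
```

On the symbol level the identity amounts to z a*(z) = (1 + z) b*(z), so b is obtained by factoring one zero at z = −1 out of the symbol, the univariate shadow of (19)-(21).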

4. Stability and refinability

In this section, we give a simple sufficient criterion for the convergence of a restricted and normalized stationary subdivision scheme which is based on the existence of a (relatively) stable and refinable function. To that end, let us recall that a matrix valued function F ∈ H_p(R^s)^{N×N} is called stable if there exist two constants A, B > 0 such that for any c ∈ ℓ_p^N(Z^s) the bounds

  A ‖c‖_p ≤ ‖F ∗ c‖_p ≤ B ‖c‖_p    (22)

In other words, the sequence norm of c and the function norm of F ∗ c are equivalent. It is now straightforward how to define restricted stability: the


function F is called stable restricted to L, or (restrictedly) stable relative to L, if there exist A, B > 0 such that (22) holds for any c ∈ L. It is obvious that there are functions that are restrictedly stable but not stable: just combine stable and non-stable scalar functions on the diagonal of F and choose L accordingly.

The definition of restricted refinability is not so obvious. Note, however, that the refinement equation (5) is equivalent to

  F ∗ c = (F ∗ S_A c)(Ξ ·),    c ∈ ℓ_p^N(Z^s).    (23)

Indeed, using (5) we obtain that

  F ∗ c = Σ_{α∈Z^s} F(· − α) c(α) = Σ_{α,β∈Z^s} F(Ξ(· − α) − β) A(β) c(α)
        = Σ_{β∈Z^s} F(Ξ · − β) Σ_{α∈Z^s} A(β − Ξα) c(α),

which yields (23), while the converse is obtained by setting c = e_j δ, j = 1, …, N. Referring to (23), we introduce the following concept.

Definition 2. The function F ∈ H_p(R^s)^{N×N} is called A-refinable relative to L and normalized by X if

  F ∗ c = X (F ∗ S_A c)(Ξ ·),    c ∈ L.    (24)

In all refinement equations we make the standing assumption that Ξ is an isotropic matrix. Our goal in this section is to generalize the well-known result that the existence of a stable vector refinable function implies the convergence of the associated subdivision scheme, cf. [4]. In the case of matrix refinable functions and vector subdivision, however, an additional condition has to be satisfied, as was pointed out in [14]. In the case of restricted and renormalized refinability, the situation becomes even more intricate, as the remainder of this section will show.

We begin with a few preliminary remarks. Define the Fourier transform of a matrix valued function as

  F̂(ω) := ∫_{R^s} F(x) e^{−iω^T x} dx,    ω ∈ R^s,

and that of a matrix valued sequence A ∈ ℓ_1^{N×M}(Z^s) as

  Â(ω) := Σ_{α∈Z^s} A(α) e^{−iα^T ω},    ω ∈ R^s,


respectively – the Fourier transform of vector valued sequences is then obvious. Note that the Fourier transform of a sequence is a 2π-periodic function and not the discrete Fourier transform of this sequence. With this notation at hand, it follows by straightforward computations that for c ∈ L_00 := L ∩ ℓ_{00}^N(Z^s) the refinement equation (24) becomes

  F̂(ω) ĉ(ω) = (1/ξ) X F̂(Ξ^{−T}ω) Â(Ξ^{−T}ω) ĉ(ω),    ω ∈ R^s.    (25)

Taking into account that Â(ω) = A*(e^{−iω}), equation (25) reads for ω = 0 as

  F̂(0) ĉ(0) = (1/ξ) X F̂(0) A*(1) ĉ(0) = (σ/ξ) W F̂(0) A*(1) ĉ(0).

Motivated by this identity, we define the linear subspace C_L of R^N as

  C_L := { ĉ(0) : c ∈ L_00 }.

Let dim C_L =: M ≤ N and let u_1, …, u_M be an orthonormal basis of C_L; then we define the symmetric projection matrix U_L from R^N to C_L as

  U_L = Σ_{j=1}^M u_j u_j^T.    (26)
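The projection (26) can be assembled from any spanning set of C_L via orthonormalization. A minimal sketch, with an assumed, hypothetical spanning set standing in for the vectors ĉ(0):

```python
import numpy as np

# Columns span a (here 2-dimensional) subspace standing in for C_L;
# the third column is the sum of the first two, so the set is dependent.
S = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]]).T

Q, R = np.linalg.qr(S)
rank = int(np.sum(np.abs(np.diag(R)) > 1e-10))
U = Q[:, :rank]          # orthonormal basis u_1, ..., u_M of the span
UL = U @ U.T             # symmetric projection U_L = sum_j u_j u_j^T
```

By construction U_L is symmetric, idempotent, and acts as the identity on the subspace, exactly the properties used of U_L below.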

A first small observation on L_00 is as follows.

LEMMA 8. For any y ∈ C_L and any M ∈ N there exists a sequence c_y ∈ L_00 such that c_y(α) = y, α ∈ [−M, M]^s.

Proof. For y ∈ C_L choose c ∈ L_00 such that y = ĉ(0). Since (c(· + α))^∧(0) = ĉ(0) for any α ∈ Z^s, we can assume that c is supported on [0, K]^s for some K ∈ N. Set

  c_y = Σ_{α∈[−M,M+K]^s} c(· + α)

to obtain for α ∈ [−M, M]^s that

  c_y(α) = Σ_{β∈[−M,M+K]^s} c(α + β) = Σ_{β∈[−M,M+K]^s+α} c(β) = Σ_{β∈Z^s} c(β) = ĉ(0) = y,

which verifies the claim.  □

Also, we write for T ∈ R^{N×N} and C ⊂ R^N

  ρ := ρ(T; C) := lim sup_{r→∞} ( max_{y∈C\{0}} |T^r y|_2 / |y|_2 )^{1/r}


for the spectral radius of the matrix T relative to C. Related to this spectral radius is the set

  E_ρ(T; C) := { y ∈ C + iC : T y = a y, |a| = ρ(T; C), |y|_2 = 1 } ⊂ C^N

of all eigenvectors belonging to eigenvalues of maximal modulus. Note that E_ρ need not be a vector space if there is more than one eigenvalue of maximal modulus.

LEMMA 9. Suppose that F is a stable solution of the refinement equation (24) and let ρ := ρ((σ/ξ) A*(1); C_L). Then

1. ρ = 1,
2. for any y ∈ E_1((σ/ξ) A*(1); C_L) we have that F ∗ y = F̂(0) y,
3. we have that dim R(F̂(0)) ≥ dim E_1((σ/ξ) A*(1); C_L),
4. with the matrix U_L from (26) we have that dim R(F̂(0) U_L) = dim E_1((σ/ξ) A*(1); C_L).

Proof. Substituting ω = 2πΞ^Tα, α ∈ Z^s, in (25), we get for c ∈ L_00

  F̂(2πΞ^Tα) ĉ(0) = (σ/ξ) W F̂(2πα) A*(1) ĉ(0).    (27)

Left multiplying (27) by W^{−1} and iterating the resulting identity, we obtain for y ∈ E_ρ := E_ρ((σ/ξ) A*(1); C_L) and r ∈ N that

  W^{−r} F̂(2π(Ξ^T)^r α) y = F̂(2πα) ((σ/ξ) A*(1))^r y = ρ^r F̂(2πα) y.    (28)

For α ≠ 0, the Riemann-Lebesgue lemma and Lemma 3 yield that the left hand side converges to zero, and therefore

  F̂(2πα) y = 0,    α ∈ Z^s \ {0},  y ∈ E_ρ.    (29)

Since F is compactly supported, the translation invariant function F ∗ y is well-defined and locally integrable. Moreover, for any M > 0 there exists, by Lemma 8, a sequence c_y ∈ L_00 such that F ∗ c_y = F ∗ y on [−M, M]^s. From the stability of F relative to L it then follows that F ∗ y ≠ 0, and the Poisson summation formula yields for any y ∈ E_ρ that

  0 ≠ F ∗ y = ( Σ_{α∈Z^s} F(· − α) ) y = Σ_{α∈Z^s} e^{−2πiα^T ·} F̂(2πα) y = F̂(0) y


is a nonzero constant function. Setting α = 0 in (28) and taking into account that F̂(0) y ≠ 0, we thus find that F̂(0) y is an eigenvector of W^{−1} with respect to an eigenvalue λ that satisfies |λ| = 1, since W^{−1} is an isotropic matrix with |det W^{−1}| = 1. Let y_1, …, y_m be a basis of the finite dimensional vector space spanned by E_1. The stability of F yields that the vectors F̂(0) y_j are linearly independent eigenvectors of W^{−1} with respect to eigenvalues ρ_j, j = 1, …, m, of modulus 1. Hence, dim R(F̂(0)) ≥ m.

Let w_1, …, w_N be a basis of eigenvectors of W^{−1} with associated eigenvalues ρ_1, …, ρ_N of modulus 1, chosen such that w_j = F̂(0) y_j, j = 1, …, m. Writing

  F = Σ_{j=1}^N w_j f_j^T,    f_j ∈ H_p(R^s)^N,

we observe that f̂_j^T(0) y_k = δ_{jk}, j = 1, …, N, k = 1, …, m. By means of (27) with α = 0 we get that

  Σ_{j=1}^N ρ_j w_j f̂_j^T(0) ĉ(0) = Σ_{j=1}^N w_j f̂_j^T(0) (σ/ξ) A*(1) ĉ(0),

i.e.,

  ((σ/ξ) A*(1))^T f̂_j(0) − ρ_j f̂_j(0) ⊥ C_L,    j = 1, …, N.    (30)

Since $L$ is generated by compactly supported sequences and is invariant with respect to $S_A$, it follows immediately that $F' := F U_L$ and $A' := U_L A U_L$ have the same behavior on $L$ as $F$ and $A$, i.e., we have for $c \in L$ that
$$F' * c = F * c, \qquad S_{A'} c = S_A c, \qquad F' * c = X\, F' * S_{A'} c\, (\Xi\,\cdot).$$

With $A'$ instead of $A$ and $F'$ instead of $F$, (30) now yields that
$$\left( \frac{\sigma}{\xi} A'^{*}(1) \right)^T \hat f'_j(0) - \rho_j\, \hat f'_j(0) = 0, \qquad j = 1, \dots, N,$$
which implies that
$$m = \dim E_1\!\left( \frac{\sigma}{\xi} A'^{*}(1);\, C_L \right) = \dim E_1\!\left( \frac{\sigma}{\xi} A^*(1);\, C_L \right) \ge \dim \mathcal{R}\big(\hat F'(0)\big),$$
and the last term is bounded from below by $m$ since
$$\hat F'(0)\, y_j = \hat F(0)\, U_L\, y_j = \hat F(0)\, y_j = w_j, \qquad j = 1, \dots, m,$$
which proves the final claim. $\Box$


Following [14], we say that $G \in H_p(\mathbb{R}^s)^{N \times N}$ is a test function relative to some subspace $V \subseteq \mathbb{R}^N$ if $G$ is compactly supported and satisfies $G * v \equiv v$ for $v \in V$. A straightforward modification of the arguments from [4] then yields the following result, cf. [13, 14].

LEMMA 10. If $G \in H_p(\mathbb{R}^s)^{N \times N}$ is a test function relative to $\mathcal{R}(f)$ for a given $f \in H_p(\mathbb{R}^s)^N$, then
$$\lim_{r \to \infty} \left\| f - G * \mu_r f\, (\Xi^r\,\cdot) \right\|_p = 0.$$

PROPOSITION 11. If $F \in H_p(\mathbb{R}^s)^{N \times N}$ is

1. $A$–refinable relative to $L$ and normalized by $X$,
2. stable relative to $L$,
3. such that $\mathcal{R}\big(\hat F(0)\big) = \mathcal{R}(F)$,

then the subdivision scheme defined by $A$ is subconvergent relative to $L$ and normalized by $X$.

Proof. Let the matrices $W, Y \in \mathbb{R}^{N \times N}$ and $r \in R$ be chosen as in the proof of Proposition 2. As in the proof of Lemma 9, we first use $F' := F U_L$, which is indistinguishable from $F$ relative to $L$. Since by assumption and Lemma 9 it follows that $\dim \mathcal{R}(F') = \dim E_1\!\left(\frac{\sigma}{\xi} A^*(1);\, C_L\right)$, there exists an invertible matrix $V$



such that $Y^{-1} V \mathcal{R}(F') = E_1\!\left(\frac{\sigma}{\xi} A^*(1);\, C_L\right)$ and therefore $G := V F U_L$ is stable relative to $L$. In addition, $G$ is refinable with respect to $A$ and relative to $L$ since
$$G * c = V F U_L * c = V X F U_L * S_A c\, (\Xi\,\cdot) = V X V^{-1}\, G * S_A c\, (\Xi\,\cdot),$$
where the matrix $V X V^{-1}$ is also an isotropic expanding one. Finally, Lemma 9 also yields that $G$ is a test function relative to $\mathcal{R}\big(Y^{-1} G\big)$, i.e., relative to $\mathcal{R}\big(Y^{-1} G * c\big)$ for all $c \in L$. Therefore, replacing $F$ by $G$, we can assume that, in addition to the assumptions of this proposition, $F$ is also a test function relative to the space $\mathcal{R}\big(Y^{-1} F\big)$. Using (24) in the rewritten form $X^{-1} F * c = F * S_A c\, (\Xi\,\cdot)$ as well as the relative stability of $F$ and the $S_A$–invariance of $L$, we obtain for $r \in \mathbb{N}$ that
$$\begin{aligned}
\xi^{-r/p} \left\| X^r S_A^r c - \mu_r (F * c) \right\|_p
&= \xi^{-r/p} \left| X^r \right|_p \left\| S_A^r c - X^{-r} \mu_r (F * c) \right\|_p \\
&\le \left| X^r \right|_p\, C_A^{-1} \left\| F * S_A^r c\, (\Xi^r\,\cdot) - F * \mu_r\!\left( X^{-r} F * c \right) (\Xi^r\,\cdot) \right\|_p \\
&= \left| X^r \right|_p\, C_A^{-1} \left\| X^{-r} F * c - F * \mu_r\!\left( X^{-r} F * c \right) (\Xi^r\,\cdot) \right\|_p \\
&\le C_X\, C_A^{-1} \left\| F * c - X^r F X^{-r} * \mu_r (F * c)\, (\Xi^r\,\cdot) \right\|_p.
\end{aligned}$$




For $r = r(k)$, $k \in \mathbb{N}$, we thus get that
$$\left\| F * c - X^r F X^{-r} * \mu_r (F * c)\, (\Xi^r\,\cdot) \right\|_p$$
$$\le \left\| F * c - Y F Y^{-1} * \mu_r (F * c)\, (\Xi^r\,\cdot) \right\|_p \qquad (31)$$
$$\quad + \xi^{-r/p} \left\| \left( W^r F W^{-r} - Y F Y^{-1} \right) * \mu_r (F * c) \right\|_p. \qquad (32)$$

Since
$$\left\| \left( W^r F W^{-r} - Y F Y^{-1} \right) * \mu_r (F * c) \right\|_p = \left\| \left( W^r F W^{-r} - Y F W^{-r} + Y F W^{-r} - Y F Y^{-1} \right) * \mu_r (F * c) \right\|_p$$
$$\le \left( \left| W^r - Y \right|_p \left| W^{-r} \right|_p + \left| W^{-r} - Y^{-1} \right|_p \left| Y \right|_p \right) \left\| F * \mu_r (F * c) \right\|_p,$$
the term (32) converges to zero for $r = r(k)$ and $k \to \infty$. To finish the proof, we have to show that (31), or equivalently
$$\left\| Y^{-1} F * c - F * \mu_r\!\left( Y^{-1} F * c \right) (\Xi^r\,\cdot) \right\|_p \qquad (33)$$

converges to zero as $k \to \infty$, which is done by an application of Lemma 10, taking into account the remarks made at the beginning of this proof. $\Box$

We next adapt a characterization of stability, due to Jia and Micchelli [7], to our purposes. For that purpose, however, we must restrict ourselves to subspaces $L$ of a particular form, namely those that can be obtained from some $\ell_p^N(\mathbb{Z}^s)$ by repeated differencing. To that end, we define a class $I\!L_p^N$ of linear subspaces of $\ell_p^N(\mathbb{Z}^s)$ inductively by saying that $L \in I\!L_p^N$ if either $L = \ell_p^N(\mathbb{Z}^s)$ or if there exists $1 \le n \le N/s$ such that $L = D_n L'$ for some $L' \in I\!L_p^{N/s}$. Analogously, we define the class $I\!L^N$ by requiring that $L \in I\!L^N$ if either $L = \ell^N(\mathbb{Z}^s)$ or $L = D_n L'$ for some $n$ and $L' \in I\!L^{N/s}$. Based on this recursive definition we define for $L \in I\!L^N$ the stability space $S(L) \subset \mathbb{C}^N$ as
$$S(L) := \left\{ y \in \mathbb{C}^N : e_\theta\, y \in L \right\}, \qquad e_\theta(\alpha) = e^{i \theta^T \alpha}, \quad \alpha \in \mathbb{Z}^s. \qquad (34)$$

With this definition the results in [7, 13] can then be extended as follows.

THEOREM 12. A compactly supported matrix valued function $F \in H_p(\mathbb{R}^s)^{N \times N}$ is stable relative to $L \in I\!L_p^N$ if and only if for any $\omega \in \mathbb{R}^s$ and any $y \in S(L)$
$$\left( \hat F(\omega + 2\pi\alpha)\, y : \alpha \in \mathbb{Z}^s \right) \neq 0. \qquad (35)$$
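In the scalar case ($N = 1$), criterion (35) reduces to the classical condition that the $2\pi\mathbb{Z}^s$-shifts of $\hat F$ have no common zero. A small numerical illustration with the univariate hat function, whose Fourier transform is $\operatorname{sinc}^2$, checked on a finite grid (my sketch, assuming numpy; the hat function stands in for $F$ and is not an example from the paper):

```python
import numpy as np

# Scalar illustration of criterion (35) (my sketch; the hat function stands
# in for F): its Fourier transform is sinc^2, and for every omega on a test
# grid at least one shift omega + 2*pi*alpha gives a nonzero value, so the
# integer translates of the hat function are stable.
def hat_ft(w):
    w = np.asarray(w, dtype=float)
    out = np.ones_like(w)
    nz = w != 0
    out[nz] = (np.sin(w[nz] / 2) / (w[nz] / 2)) ** 2
    return out

omegas = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
alphas = np.arange(-5, 6)
vals = np.abs(hat_ft(omegas[:, None] + 2 * np.pi * alphas[None, :]))
print(bool(vals.max(axis=1).min() > 1e-6))   # True: (35) holds on this grid
```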


The proof of this result follows almost literally the one given in [7] and the matrix extensions in [13]. Recall from [7] that violation of linear independence, i.e., the existence of a sequence $c \in \ell^N(\mathbb{Z}^s)$ such that $F * c = 0$, requires $c$ to be an exponential sequence, that is, $c = e_\theta\, x$ for some $\theta \in \mathbb{R}^s \setminus \{0\}$ and $x \in \mathbb{R}^s \setminus \{0\}$. In fact, if (35) is violated for some pair $\omega, y$, then one chooses $\theta = \omega$ and $x = y$. Thus, the space $S(L)$ arises from considering the structure of the exponential sequences in spaces $L \in I\!L_\infty^N$. This relationship is expressed in the following lemma which, together with the methods from [7, 13], completes the proof of Theorem 12.

LEMMA 13. Suppose that $L \in I\!L^N$ has the form $L = D_n L'$, $L' \in I\!L_\infty^{N/s}$. Then $c \in L$ is an exponential sequence if and only if there exists an exponential sequence $c' \in L'$ such that $c = D_n c'$.

Proof. Let $c' = e_\theta\, y$; then $D_j c(\alpha) = e^{i \theta^T \alpha} \left( e^{-i \theta_j} - 1 \right) y$, and thus the difference of an exponential sequence is exponential again. For the converse, suppose that $e_\theta\, y =: c = D_n c'$, $y = (y_j : j = 1, \dots, s)$, is an exponential sequence. Since $D_j D_k = D_k D_j$ for $j, k \in \{1, \dots, s\}$, it follows that
$$\begin{bmatrix} \left( e^{i\theta_k} - 1 \right) I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} y_j\, e_\theta = \begin{bmatrix} D_k I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} \left( e_\theta\, y_j \right) = \begin{bmatrix} D_k I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} \begin{bmatrix} D_j I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} c'$$
$$= \begin{bmatrix} D_j I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} \begin{bmatrix} D_k I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} c' = \begin{bmatrix} D_j I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} \left( e_\theta\, y_k \right) = \begin{bmatrix} \left( e^{i\theta_j} - 1 \right) I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} y_k\, e_\theta,$$
i.e.,
$$y_j = \begin{bmatrix} \dfrac{e^{i\theta_j} - 1}{e^{i\theta_k} - 1}\, I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} y_k. \qquad (36)$$
Thus, the sequence $c' := e_\theta\, y'$ with
$$y' = \begin{bmatrix} \left( e^{-i\theta_j} - 1 \right)^{-1} I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} y_j, \qquad j = 1, \dots, s,$$
is well defined by (36) and satisfies $D_n c' = c$. $\Box$
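The first half of Lemma 13, namely that a coordinate difference maps the exponential sequence $e_\theta\, y$ to a constant multiple of itself, can be checked numerically. Below is a small sketch (my illustration, assuming numpy and the backward-difference convention $D_j c(\alpha) = c(\alpha - e_j) - c(\alpha)$, which matches the factor $e^{-i\theta_j} - 1$ above):

```python
import numpy as np

# Companion to Lemma 13 (my illustration, not the paper's code): for the
# exponential sequence c(alpha) = e^{i theta^T alpha} * y on a patch of Z^2,
# a backward difference in direction 1 multiplies c by the constant
# e^{-i theta_1} - 1, so the difference is again an exponential sequence
# with the same frequency theta.
theta = np.array([0.7, 2.1])
y = 1.3 - 0.4j
a1, a2 = np.meshgrid(np.arange(6), np.arange(6), indexing="ij")
c = np.exp(1j * (theta[0] * a1 + theta[1] * a2)) * y

d = c[:-1, :] - c[1:, :]                 # (D_1 c)(alpha) for alpha_1 = 1,...,5
factor = np.exp(-1j * theta[0]) - 1.0    # the constant from Lemma 13
print(np.allclose(d, factor * c[1:, :])) # D_1 c = (e^{-i theta_1} - 1) c
```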

Moreover, the proof of the second part of Lemma 13 also suggests the following helpful result.

LEMMA 14. Suppose that $L \in I\!L^N$ satisfies $L = D_n L'$. For any $y \in S(L)$ and any $\theta \in (0, 2\pi)^s$ there exists $y' \in S(L')$ such that
$$y = \left( \begin{bmatrix} \left( e^{i\theta_j} - 1 \right) I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} y' : j = 1, \dots, s \right).$$


Proof. The assumption $y \in S(L)$ means that $e_\theta\, y \in L = D_n L'$, i.e., $e_\theta\, y = D_n c'$ for some $c' \in L'$. By Lemma 13, $c'$ is an exponential sequence of the form $e_\theta\, y'$, which implies that $y' \in S(L')$. Evaluating the relation $c = D_n c' = D_n e_\theta\, y'$ then yields the claim. $\Box$

5. Differentiability implies factorization

In this section we finally apply the concepts of the preceding sections to obtain a necessary condition for a stable refinable function to belong to $H_p^k(\mathbb{R}^s)^{N \times N}$.

THEOREM 15. Suppose that $F \in H_p^k(\mathbb{R}^s)^{N \times N}$ is a compactly supported stable solution of the refinement equation $F = F * A\, (\Xi\,\cdot)$ and that $n = \dim \mathcal{R}\big(\hat F(0)\big) = \dim \mathcal{R}(F)$. Then there exist orthogonal matrices $V_j \in \mathbb{R}^{N s^{j-1} \times N s^{j-1}}$, $j = 1, \dots, k$, a mask $B \in \ell_{00}^{N s^k \times N s^k}(\mathbb{Z}^s)$ and an isotropic normalization matrix $Y$ with $\rho(Y) = \xi^{k/s}$ such that
$$D_{n s^{k-1}, V_k} \cdots D_{n, V_1}\, S_A = S_B\, D_{n, V_1} \cdots D_{n s^{k-1}, V_k}$$
and $B$ admits a subconvergent subdivision scheme relative to the $S_B$–invariant sequence space $D_{n s^{k-1}, V_k} \cdots D_{n, V_1}\, \ell_p^N(\mathbb{Z}^s)$ and normalized by $Y$.

Before we attack the proof of this theorem, let us first have a look at the meaning of its different ingredients.

Remark 4. Compared to Theorem 1, Theorem 15 incorporates three "new" concepts: the matrices $V_j$, restricted convergence and renormalized convergence, which are due to different aspects of the generalizations we consider here. The matrices $V_j$ are an effect of the vector subdivision we consider here and, like in [14], they are needed to transform a subdivision scheme into a form that makes it accessible to the difference operator; see [17] for a non–subdivision, non–difference approach which comes to the same conclusion. Restricted convergence, on the other hand, is a multivariate effect, as multivariate difference operators are no longer surjective (or "onto") if $s > 1$. Finally, renormalization, and thus subconvergence, stems from the use of dilation matrices $\Xi$ that are not multiples of the identity matrix.

The proof of Theorem 15 will essentially be an inductive application of the case $k = 1$. Therefore, we will first consider this case in detail and explain the main ideas in this particular situation. Nevertheless, in order to be able to iterate it, we must state the case $k = 1$ for a slightly more general situation, as is done in the following result.


PROPOSITION 16. Suppose that $F \in H_p(\mathbb{R}^s)^{N \times N}$ is a compactly supported stable solution of the refinement equation
$$F * c = X\, F * S_A c\, (\Xi\,\cdot), \qquad c \in L, \quad L \in I\!L_p^N,$$
and that $n = \dim \mathcal{R}\big(\hat F(0)\big) = \dim \mathcal{R}(F)$. If $F \in H_p^1(\mathbb{R}^s)^{N \times N}$, then the mask $B \in \ell_{00}^{N s \times N s}(\mathbb{Z}^s)$ defined by $D_{n,V}\, S_A = S_B\, D_{n,V}$ admits a subconvergent subdivision scheme relative to $D_n V L$ and with an appropriate normalization matrix $Y$.

Let us begin by supposing that $F \in H_p^1(\mathbb{R}^s)^{N \times N}$ satisfies the restricted refinement equation (24), i.e., $F * c = X\, F * S_A c\, (\Xi\,\cdot)$, $c \in L$. Moreover, we assume that $F$ has the "normal form"
$$F = \begin{bmatrix} F_1 & F_2 \\ 0_{N-n,n} & 0_{N-n,N-n} \end{bmatrix}, \qquad F_1 \in H_p^1(\mathbb{R}^s)^{n \times n}, \quad F_2 \in H_p^1(\mathbb{R}^s)^{n \times (N-n)}, \qquad (37)$$
for an appropriate $n \in \mathbb{N}$. We then define $G \in H_p(\mathbb{R}^s)^{N s \times N s}$ as
$$\hat G(\omega) = \operatorname{diag}\!\left( \begin{bmatrix} i \omega_j I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} : j = 1, \dots, s \right) \times \left( I_s \otimes \hat F(\omega) \right) \times \operatorname{diag}\!\left( \begin{bmatrix} \left( e^{i\omega_j} - 1 \right)^{-1} I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} : j = 1, \dots, s \right), \qquad (38)$$
from which it follows immediately that $G D_n = \nabla_n F$. On the other hand, (37) implies that
$$\nabla_n F = (\nabla \otimes I_N)\, F = \begin{bmatrix} \frac{\partial}{\partial x_1} I_N \\ \vdots \\ \frac{\partial}{\partial x_s} I_N \end{bmatrix} F = \begin{bmatrix} \frac{\partial}{\partial x_1} F \\ \vdots \\ \frac{\partial}{\partial x_s} F \end{bmatrix}.$$
Also note that due to (37) the function $\hat G$ is a block diagonal matrix with blocks
$$\hat G_j := \begin{bmatrix} \hat G_{j,1} & \hat G_{j,2} \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} \dfrac{i \omega_j}{e^{i\omega_j} - 1}\, \hat F_1 & i \omega_j\, \hat F_2 \\ 0 & 0 \end{bmatrix}, \qquad j = 1, \dots, s. \qquad (39)$$

We now show that $G$ is a stable function, refinable with respect to the mask $B$ determined by $D_n S_A = S_B D_n$, from which we will conclude that $B$ induces a subconvergent subdivision scheme, renormalized by some matrix $Y$ and relative to $M = D_n L$.

LEMMA 17. There exists a matrix $Y \in \mathbb{R}^{N s \times N s}$ such that the function $G$ is refinable with respect to $B$, normalized by $Y$ and relative to $M = D_n L$, i.e.,
$$G * c = Y\, G * S_B c\, (\Xi\,\cdot), \qquad c \in D_n L. \qquad (40)$$


Proof. We first note that for any scalar function $f \in H_p^1(\mathbb{R}^s)$ one has $\nabla \left( f(\Xi\,\cdot) \right) = \Xi^T\, \nabla f\, (\Xi\,\cdot)$. To make use of this formula, we let $P \in \mathbb{R}^{N s \times N s}$ be a (row) permutation matrix such that
$$(\nabla \otimes I_N)\, (X\, F(\Xi\,\cdot)) = P \left[ \nabla \left( X F(\Xi\,\cdot) \right)_{jk} : j, k = 1, \dots, N \right] = P \left[ \Xi^T\, \nabla (X F)_{jk}\, (\Xi\,\cdot) : j, k = 1, \dots, N \right]$$
$$= P \left( I_N \otimes \Xi^T \right) \left[ \nabla \sum_{r=1}^{N} X_{jr} F_{rk}\, (\Xi\,\cdot) : j, k = 1, \dots, N \right] = P \left( I_N \otimes \Xi^T \right) (X \otimes I_s) \left[ \nabla F_{jk}\, (\Xi\,\cdot) : j, k = 1, \dots, N \right]$$
$$= P \left( I_N \otimes \Xi^T \right) (X \otimes I_s)\, P^{-1}\, (\nabla \otimes I_N)\, F\, (\Xi\,\cdot) = Y\, \nabla_n F\, (\Xi\,\cdot),$$
where (6) yields
$$Y := P \left( I_N \otimes \Xi^T \right) (X \otimes I_s)\, P^{-1} = P \left( X \otimes \Xi^T \right) P^{-1}. \qquad (41)$$
With this identity at hand, we now obtain for $c \in D_n L$, written as $c = D_n c'$, $c' \in L$, that
$$G * c = G * D_n c' = \nabla_n F * c' = (\nabla \otimes I_N)\, (X\, F(\Xi\,\cdot)) * S_A c' = Y\, \nabla_n F * S_A c'\, (\Xi\,\cdot)$$
$$= Y\, G * D_n S_A c'\, (\Xi\,\cdot) = Y\, G * S_B D_n c'\, (\Xi\,\cdot) = Y\, G * S_B c\, (\Xi\,\cdot),$$
which is (40). $\Box$
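The matrix $Y = P \left( X \otimes \Xi^T \right) P^{-1}$ from (41) is similar to a Kronecker product, so its spectrum consists of the pairwise products of the eigenvalues of $X$ and $\Xi^T$; this is the fact exploited in Lemma 18 below. A numerical check with arbitrary stand-in matrices (my illustration, assuming numpy; `P`, `X`, `Xi` here are not the paper's specific matrices):

```python
import numpy as np

# Verify: eig(P kron(X, Xi^T) P^{-1}) = {lambda * mu}, and for the isotropic
# dilation Xi = 2*I_2 (s = 2, xi = |det Xi| = 4) the spectral radius satisfies
# rho(Y) = xi^(1/s) * rho(X) = 2 * rho(X), as in Lemma 18.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))            # stand-in for the normalization matrix
Xi = 2.0 * np.eye(2)                       # a simple isotropic dilation matrix
P = np.eye(6)[rng.permutation(6)]          # some permutation matrix

Y = P @ np.kron(X, Xi.T) @ np.linalg.inv(P)
eigY = np.linalg.eigvals(Y)
prods = np.array([lam * mu
                  for lam in np.linalg.eigvals(X)
                  for mu in np.linalg.eigvals(Xi.T)])

# every product lambda*mu occurs among the eigenvalues of Y ...
dist = np.abs(eigY[None, :] - prods[:, None]).min(axis=1).max()
# ... and the spectral radii scale as claimed
rho_ok = np.isclose(np.max(np.abs(eigY)),
                    2.0 * np.max(np.abs(np.linalg.eigvals(X))))
print(bool(dist < 1e-8), bool(rho_ok))
```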

Moreover, (6) tells us that the eigenstructure of a Kronecker product is the Kronecker product of the individual eigenstructures, which allows us to draw the following conclusion.

LEMMA 18. If $X \in \mathbb{R}^{N \times N}$ is an isotropic matrix, then so is the matrix $Y$ defined in (41), and we have that $\rho(Y) = \xi^{1/s}\, \rho(X)$.

Proof. Let $x \in \mathbb{C}^N$ and $z \in \mathbb{C}^s$ be eigenvectors of $X$ and $\Xi^T$ with respect to the eigenvalues $\lambda, \mu \in \mathbb{C}$, respectively, and set $y = P (x \otimes z)$. Then, by (6),
$$P^{-1}\, Y y = \left( X \otimes \Xi^T \right) (x \otimes z) = X x \otimes \Xi^T z = \lambda \mu\, (x \otimes z),$$
that is, $y$ is an eigenvector of $Y$ with respect to the eigenvalue $\lambda \mu$ with
$$|\lambda| = |\det X|^{1/N}, \qquad |\mu| = |\det \Xi|^{1/s} = \xi^{1/s}.$$
Since both matrices are also similar to a diagonal matrix, the Kronecker products of their eigenvectors span the eigenspaces of $Y$, and thus all eigenvectors of $Y$ can be written in the form $y = P (x \otimes z)$. This completes the proof of the lemma. $\Box$

Finally, it just remains to prove the stability of $G$ relative to $D_n L$.

LEMMA 19. If $F$ is stable relative to $L$, then the function $G$ is stable relative to $M := D_n L$.

Proof. Suppose that $F$ were stable and $G$ were not. By Theorem 12 there exists a vector
$$y = \begin{bmatrix} y_1 \\ \vdots \\ y_s \end{bmatrix} \in S(M) \subseteq \mathbb{C}^{N s}, \qquad y_j \in \mathbb{C}^N, \quad j = 1, \dots, s,$$
and $\omega \in \mathbb{R}^s$ such that $\hat G(\omega + 2\pi\alpha)\, y = 0$, i.e., we have for $j = 1, \dots, s$ that
$$\hat G_j(\omega + 2\pi\alpha)\, y_j = 0, \qquad \alpha \in \mathbb{Z}^s. \qquad (42)$$

Due to Lemma 14 there exists $x \in S(L)$ such that
$$y_j = \begin{bmatrix} \left( e^{i\omega_j} - 1 \right) I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} x, \qquad j = 1, \dots, s,$$
where the periodicity of (42) allows us to assume that $\omega \in [0, 2\pi)^s$. Moreover, because of (39) we find for $j = 1, \dots, s$ and $\alpha \in \mathbb{Z}^s$ that
$$\hat F(\omega + 2\pi\alpha) = \begin{bmatrix} \left( i (\omega_j + 2\pi\alpha_j) \right)^{-1} I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} \hat G_j(\omega + 2\pi\alpha) \begin{bmatrix} \left( e^{i\omega_j} - 1 \right) I_n & 0 \\ 0 & I_{N-n} \end{bmatrix}.$$
If now $\omega \neq 0$ and therefore $\omega_j \neq 0$ for some $j \in \{1, \dots, s\}$, it follows that $e^{i\omega_j} - 1 \neq 0$. Thus,
$$\hat F(\omega + 2\pi\alpha)\, x = \hat F(\omega + 2\pi\alpha) \begin{bmatrix} \left( e^{i\omega_j} - 1 \right)^{-1} I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} y_j = \begin{bmatrix} \left( i (\omega_j + 2\pi\alpha_j) \right)^{-1} I_n & 0 \\ 0 & I_{N-n} \end{bmatrix} \hat G_j(\omega + 2\pi\alpha)\, y_j = 0$$
for any $\alpha \in \mathbb{Z}^s$, contradicting the stability of $F$. Hence, it follows that $\omega = 0$. Write
$$y_j = \begin{bmatrix} y_{j,1} \\ y_{j,2} \end{bmatrix} \qquad \text{and} \qquad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},$$
where $y_{j,2} = x_2$, $j = 1, \dots, s$,


by the structure of the difference operator. Now we obtain for $\alpha \neq 0$ and $j = 1, \dots, s$ that
$$\hat F(2\pi\alpha) = \begin{bmatrix} 0 & \left( 2\pi i \alpha_j \right)^{-1} \hat G_{j,2}(2\pi\alpha) \\ 0 & 0 \end{bmatrix},$$
while for $\alpha = 0$ passing to the limit $\omega_j \to 0$ and reference to (39) yield that
$$\hat F(0) = \begin{bmatrix} \hat G_{j,1}(0) & \hat F_2(0) \\ 0 & 0 \end{bmatrix} = \hat G_j(0) = \begin{bmatrix} \hat G_{j,1}(0) & \hat G_{j,2}(0) \\ 0 & 0 \end{bmatrix}, \qquad (43)$$
yielding that
$$\hat F(2\pi\alpha) \begin{bmatrix} 0 \\ x_2 \end{bmatrix} = 0, \qquad \alpha \in \mathbb{Z}^s,$$
which again contradicts the stability of $F$. This proves the lemma. $\Box$

A useful by–product of this proof is equation (43), which tells us that the additional condition on stable refinable functions which ensures the convergence of the subdivision scheme is also inherited from $F$ by $G$.

LEMMA 20. If $\mathcal{R}\big(\hat F(0)\big) = \mathcal{R}(F)$ and if $G$ is defined as above, then $\mathcal{R}\big(\hat G(0)\big) = \mathcal{R}(G)$.

Proof. From (43) it follows that $\hat G(0) = I_s \otimes \hat F(0)$ and therefore we get that $\dim \mathcal{R}\big(\hat G(0)\big) = s \dim \mathcal{R}\big(\hat F(0)\big) \ge \dim \mathcal{R}(G)$. $\Box$

Now we have collected the tools to show how to pass from a stable refinable function to its derivative function. Let $F \in H_p^k(\mathbb{R}^s)^{N \times N}$ be stable and refinable relative to $L$ and $X$, and assume that
$$n := \dim \mathcal{R}\big(\hat F(0)\big) = \dim \mathcal{R}(F). \qquad (44)$$
Since $\hat F(0) = \int F(t)\, dt$, it follows that $\mathcal{R}\big(\hat F(0)\big) \subseteq \mathcal{R}(F)$, and thus (44) is equivalent to $\mathcal{R}\big(\hat F(0)\big) = \mathcal{R}(F)$. Let $V$ be any orthogonal matrix such that $V^T \mathcal{R}(F) = \begin{bmatrix} \mathbb{R}^n \\ 0 \end{bmatrix}$; then $F' := V^T F V$ has the normal form (37). Moreover, $F'$ is refinable:
$$F' * V^T c = V^T F V * V^T c = V^T F * c = V^T F * X\, S_A c\, (\Xi\,\cdot) = V^T F V * V^T X V\, S_{V^T A V}\, V^T c\, (\Xi\,\cdot) = V^T F V * X'\, S_{A'}\, V^T c,$$
and replacing $c$ by $V c$, which maps $L$ isomorphically to $L' = V L$, we get that
$$F' * c = F' * X'\, S_{A'}\, c, \qquad (45)$$

F0 ∗ VT c = VT FV ∗ VT c = VT F ∗ c = VT F ∗ X SA c (Ξ ·) = VT FV ∗ VT XV SVT AV VT c (Ξ ·) = VT FV ∗ X0 SA0 VT c, and replacing c by Vc, which maps L isomorphically to L0 = VL, we get that F0 ∗ c = F0 ∗ X0 SA0 c, (45)

paperfinal.tex; 9/03/2004; 7:22; p.24

25

Refinable functions and factorization

where $X' = V^T X V$ is still an isotropic expanding matrix. By Proposition 11 the mask $A'$ admits a subconvergent subdivision scheme normalized by $X'$ and relative to $L'$, from which Theorem 7, or more specifically equation (21), allows us to conclude that there exists $B \in \ell_{00}^{N s \times N s}(\mathbb{Z}^s)$ such that $D_n S_{A'} = S_B D_n$. Since $L'$ is invariant under $S_{A'}$, it also follows that $D_n L'$ is invariant under $S_B$, so that our fundamental assumption is satisfied again. Now define $G$ from $F'$ as in (38). Lemma 17 and Lemma 18 tell us that $G$ is refinable relative to $D_n L'$ and normalized by an isotropic expanding matrix $Y \in \mathbb{R}^{N s \times N s}$, while Lemma 19 ensures the stability of $G$, so that we can once more appeal to Proposition 11 to conclude that $S_B$ is subconvergent, too. Thus, taking into account the role of the matrix $V$, we have finally proved Proposition 16.

By Lemma 20, Lemma 19 and Lemma 17, the properties of $F$ in Proposition 16 carry over to $G$, so that we can apply this result repeatedly if $F$ has derivatives of higher degree. Therefore, we have proved the following necessary condition for differentiability.



b and that n = dim R F(0)

c ∈ L,

L ∈ ILN p ,

= R (F). If F ∈ Hpk (IRs )N ×N , then there

exist orthogonal matrices V∈ IRN s sk ×N sk `N (ZZ s ) such that 00

j−1 ×N sj−1

, j = 1, . . . , k, and a mask B ∈

Dnsk−1 ,Vk · · · Dn,V1 SA = SB Dn,V · · · Dnsk−1 ,Vk and SB admits a subconvergent subdivision scheme relative to the SB –invariant sequence space Dnsk−1 ,Vk · · · Dn,V1 L and with a normalization matrix Y such that ρ(Y) = ξ k/s ρ(X). Specializing to X = I and L = `N Z s ), Theorem 15 follows immedip (Z ately.

6. A hint on the converse

We finally look at a converse of Theorem 15 that has been given in [20]. For a full converse, an iteration of this result will be needed which takes into account the intricate process by which the renormalization matrices develop.

THEOREM 22. Suppose that $f \in H_p(\mathbb{R}^s)$ is a stable function, refinable with respect to $a \in \ell_{00}(\mathbb{Z}^s)$. If there exists $B \in \ell_{00}^N(\mathbb{Z}^s)$ such that $D S_a = S_B D$, if $S_B$ is subconvergent relative to $D \ell_p(\mathbb{Z}^s)$ and normalized by $\Xi$, and if all limit functions $g_c$ of $S_B$ belong to $H_p^k(\mathbb{R}^s)^s$ for any $c \in D \ell_p(\mathbb{Z}^s)$, then $f \in H_p^{k+1}(\mathbb{R}^s)$.

The idea of the proof is rather simple. The stability of $f$ guarantees convergence of $S_a$. On the other hand, the assumptions on the restricted subconvergence of $S_B$ can be used to show that $S_a$ also converges in the Sobolev norm $\|\cdot\|_p + \|\nabla \cdot\|_p$, and thus $f$ must belong to $H_p^1(\mathbb{R}^s)$; the arguments from [20], which were based on normalized convergence of $S_B$ only, can easily be extended to restricted normalized subconvergence. Moreover, one finds that $\nabla f * c$ equals the limit function $g_{Dc}$, and so $f$ is $k+1$ times differentiable if the limit functions all have a $k$th derivative. For more details the reader is referred to [20].
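The factorization $D S_a = S_B D$ behind Theorem 22 can be made concrete in the simplest univariate setting $s = 1$, $\Xi = 2$, where $D$ reduces to the ordinary backward difference and the symbols satisfy $b^*(z) = a^*(z)/(1+z)$. A toy check with the hat-function mask (my illustration, assuming numpy; the helper names are mine, and the extra normalization by $\Xi = 2$ that Theorem 22 tracks is left out here):

```python
import numpy as np

# Univariate toy version of the factorization D S_a = S_B D from Theorem 22
# (my illustration, not the paper's multivariate operators).  For the
# hat-function mask a with symbol a*(z) = (1 + z)^2 / 2, differencing
# commutes with subdivision through the divided mask b with symbol
# b*(z) = a*(z) / (1 + z) = (1 + z) / 2.
def subdivide(mask, c):
    up = np.zeros(2 * len(c) - 1)
    up[::2] = c                          # upsample, then filter with the mask
    return np.convolve(mask, up)

def diff(c):                             # (D c)(alpha) = c(alpha) - c(alpha - 1)
    return np.convolve([1.0, -1.0], c)

a = np.array([0.5, 1.0, 0.5])            # hat-function refinement mask
b = np.array([0.5, 0.5])                 # a*(z) / (1 + z)

rng = np.random.default_rng(1)
c = rng.standard_normal(8)
lhs = diff(subdivide(a, c))              # D S_a c
rhs = subdivide(b, diff(c))              # S_b D c
print(np.allclose(lhs, rhs))             # True: the two schemes commute
```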

Acknowledgements I am very grateful to the referee for a careful and critical reading and a lot of valuable suggestions.

References

1. Cavaretta, A. S., W. Dahmen, and C. A. Micchelli: 1991, Stationary Subdivision, Vol. 93 (453) of Memoirs of the AMS. Amer. Math. Soc.
2. Cohen, A., N. Dyn, and D. Levin: 1996, 'Stability and interdependence of matrix subdivision schemes'. In: F. Fontanella, K. Jetter, and P.-J. Laurent (eds.): Advanced Topics in Multivariate Approximation.
3. Dahmen, W. and C. A. Micchelli: 1990, 'Stationary subdivision, fractals and wavelets'. In: W. Dahmen et al. (ed.): Computation of Curves and Surfaces. pp. 3–26.
4. Dahmen, W. and C. A. Micchelli: 1997, 'Biorthogonal wavelet expansion'. Constr. Approx. 13, 294–328.
5. Dyn, N.: 1992, 'Subdivision schemes in Computer Aided Geometric Design'. In: W. Light (ed.): Advances in Numerical Analysis – Volume II, Wavelets, Subdivision Algorithms and Radial Basis Functions. pp. 36–104.
6. Jia, R.-Q.: 1999, 'Characterization of smoothness of multivariate refinable functions in Sobolev spaces'. Trans. Amer. Math. Soc. 351, 4089–4112.
7. Jia, R. Q. and C. A. Micchelli: 1992, 'On the linear independence for integer translates of a finite number of functions'. Proc. Edinburgh Math. Soc. 36, 69–85.
8. Jiang, Q. and P. Oswald: 2003, 'Triangular √3–subdivision schemes: the regular case'. J. Comput. Appl. Math. 156, 47–75.
9. Kobbelt, L.: 2000, '√3–subdivision'. In: Proceedings of SIGGRAPH 2000. pp. 103–112.
10. Latour, V., J. Müller, and W. Nickel: 1998, 'Stationary subdivision for general scaling matrices'. Math. Z. 227, 645–661.
11. Marcus, M. and H. Minc: 1969, A Survey of Matrix Theory and Matrix Inequalities. Prindle, Weber & Schmidt. Paperback reprint, Dover Publications, 1992.
12. Micchelli, C. A.: 1995, Mathematical Aspects of Geometric Modeling, Vol. 65 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM.
13. Micchelli, C. A. and T. Sauer: 1997, 'Regularity of Multiwavelets'. Advances Comput. Math. 7(4), 455–545.
14. Micchelli, C. A. and T. Sauer: 1998, 'On vector subdivision'. Math. Z. 229, 621–674.
15. Micchelli, C. A., T. Sauer, and Y. Xu: 2001, 'Subdivision schemes for iterated function systems'. Proc. Amer. Math. Soc. 129, 1861–1872.
16. Möller, H. M. and T. Sauer: 2004, 'Multivariate refinable functions of high approximation order via quotient ideals of Laurent polynomials'. Advances Comput. Math. 20, 205–228.
17. Plonka, G.: 1997, 'Approximation order provided by refinable function vectors'. Constr. Approx. 13, 221–244.
18. Sauer, T.: 2002a, 'Polynomial interpolation, ideals and approximation order of refinable functions'. Proc. Amer. Math. Soc. 130, 3335–3347.
19. Sauer, T.: 2002b, 'Stationary vector subdivision – quotient ideals, differences and approximation power'. Rev. R. Acad. Cien. Serie A. Mat. 96, 257–277.
20. Sauer, T.: 2003, 'How to generate smoother refinable functions from given ones'. In: W. Haussmann, K. Jetter, M. Reimer, and J. Stöckler (eds.): Modern Developments in Multivariate Approximation, Vol. 145 of International Series of Numerical Mathematics. pp. 279–294.
21. Steeb, W.-H.: 1991, Kronecker Product of Matrices and Applications. BI–Wiss.–Ver.
22. Taylor, A. E. and D. C. Lay: 1980, Introduction to Functional Analysis. John Wiley & Sons, 2nd edition.
