Multiresolution Approximation of Fractal Transform

Bing Cheng, Institute of Mathematics and Statistics, University of Kent at Canterbury, England
Xiaokun Zhu, Department of Aerospace Engineering, University of Glasgow, Scotland

July 6, 1995

Correspondence: Dr. Bing Cheng, Institute of Mathematics and Statistics, University of Kent at Canterbury, Kent CT2 7NF, UK.
Email: [email protected]
Telephone: (44) 01227 76400 Ext 7675
Fax: (44) 01227 475453
Abstract

In this paper, we show that Barnsley's local iterated function system, or fractal transform (FT) (Barnsley and Hurd, 1993), constitutes a multiresolution approximation to the square-integrable space $L^2(T^d)$ for $d \ge 1$, where $T$ is the interval $(-\infty, \infty)$. This provides an alternative multiresolution approximation to that of wavelets and a theoretical basis for the successful applications of the fractal transform algorithm in signal/image compression.
Key words: Multiresolution; Fractal transform; Wavelet; Signal/Image compression.
1 Introduction

Following the pioneering work of Mandelbrot (Mandelbrot, 1982), there has recently been considerable interest in the use of fractals to model natural phenomena. Two ways of constructing a fractal have been popular. The first is to use fractional Brownian motion (FBM) (Mandelbrot, 1982). However, fractional Brownian motion is defined in a one-dimensional framework and is very difficult to generalize to higher dimensions. The second is to use the iterated function systems (IFS) developed by Barnsley and his collaborators (Barnsley, 1988). IFS theory has several advantages over FBM: IFS modelling is more flexible than FBM modelling; generalization from one dimension to higher dimensions is natural and easy; and, with the help of IFS, highly complex spatial information can be derived from a temporal iteration governed by only a small set of parameters. This led Barnsley to suggest the use of fractals for data compression. However, the attractor of an IFS is self-affine, whereas real-world data are not always self-affine; indeed, most of the time they are not. The breakthrough on this problem was made in 1988 by Barnsley (op. cit.), who generalized IFS to local IFS, which resulted in the so-called fractal transform (FT). The coding of real-world data by the fractal transform has generally achieved high compression ratios with high fidelity to the data. In recent years there have been expanding theoretical and applied studies of fractal transforms (see Fisher, 1994, for example).

Mathematically, let f be a real-valued function on $T^d$ ($d \ge 1$). Define the graph of f by $G = \{(x, f(x)) \mid \forall x \in T^d\}$, where $T$ is the interval $(-\infty, \infty)$. The FT coding process provides an optimal approximation to G by the attractor of a local IFS or, equivalently, to f by the fixed point of a FT. At the algorithmic level, such FT approximation works quite well. However, a question remains: can a function of arbitrary shape be approximated by such a FT scheme? Equivalently, how strong is the representation ability of the FT for complex data? In this paper, we show that the FT does provide a multiresolution approximation framework which is, in some respects, similar to the framework provided by wavelets. In Section 2 we discuss the fractal transform in some detail. In Section 3, we briefly introduce wavelet-based multiresolution approximation. In Section 4, we prove the main result of this paper. In Section 5, we show that our result also holds for the fractal transform in higher dimensions. In Section 6, we run some simulations. In Section 7, we give some discussion and raise some unsolved problems.

Now we introduce some notation. Denote by $Z$ ($Z^+$) the set of integers (positive integers). Define $L^2(S)$ to be the space of measurable and square-integrable functions on S, where S is a subset of $T^d$ ($d \ge 1$), and $L^\infty(S)$ to be the space of all measurable functions f on S such that $\|f\|_\infty < \infty$, where $\|f\|_\infty = \sup\{|f(x)| : x \in S\}$.
2 Fractal Transform

Let $\Box$ be a compact subset of $T$. For a positive integer $L \ge 1$, let $\{D_1, \ldots, D_{2^L}\}$ be disjoint sub-intervals of $\Box$ such that $\bigcup_{i=1}^{2^L} D_i = \Box$, and let $\{R_1, \ldots, R_{2^L}\}$ be another group of sub-intervals of $\Box$. Define functions $v_i(y) = P_i y + Q_i$ and $w_i(x) = a_i x + b_i$ for $i = 1, \ldots, 2^L$, where $P_i, Q_i, a_i, b_i$ are real numbers and $w_i$ satisfies

  $w_i(R_i) = D_i, \quad i = 1, \ldots, 2^L.$   (2.1)

For $i = 1, \ldots, 2^L$, define affine maps $W_i$ from $R_i \times T$ to $D_i \times T$ by

  $W_i \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_i & 0 \\ 0 & P_i \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} b_i \\ Q_i \end{pmatrix}, \quad i = 1, \ldots, 2^L.$   (2.2)

Define a graph space of functions, $\partial(T \times \Box)$ say, by

  $\partial(T \times \Box) = \{(x, \psi(x)) \mid \forall x \in \Box \text{ and } \psi \in L^2(\Box)\}.$

We define a map $W$ on $\partial(T \times \Box)$ by

  $W(A) = \bigcup_{i=1}^{2^L} W_i\big(A \cap (R_i \times \psi(R_i))\big),$   (2.3)

where $A = \{(x, \psi(x)) \mid \forall x \in \Box\}$. Then we can introduce an iteration of the map $W$ by:

1. $A_0 \in \partial(T \times \Box)$;
2. $A_n = W(A_{n-1}), \quad n = 1, 2, \ldots$

Let $A_0 = \Box \times \psi(\Box)$. Then

  $A_1 = \bigcup_{i=1}^{2^L} W_i\big(A_0 \cap (R_i \times \psi(R_i))\big) = \bigcup_{i=1}^{2^L} W_i\big(R_i \times \psi(R_i)\big) = \bigcup_{i=1}^{2^L} \big(w_i(R_i) \times v_i(\psi(R_i))\big) = \bigcup_{i=1}^{2^L} \big(D_i \times v_i(\psi(w_i^{-1}(D_i)))\big).$

This creates a map $F$ on $L^2(\Box)$ by defining, for all $\psi \in L^2(\Box)$,

  $F\psi(x) = v_i\big(\psi(w_i^{-1}(x))\big), \quad \forall x \in D_i, \; i = 1, \ldots, 2^L.$   (2.4)

Then we have

  $A_1 = \Box \times F\psi(\Box).$

Barnsley (Barnsley and Hurd, 1993) called this $F$ a fractal transform (FT for short), $\{D_i\}$ domain blocks and $\{R_i\}$ range blocks. Generally, denoting the iteration of functions by

  $\psi^{(n)} = F\psi^{(n-1)}, \quad n = 1, 2, \ldots,$   (2.5)

with $\psi^{(0)} = \psi$, we have

  $A_n = \Box \times F\psi^{(n-1)}(\Box).$   (2.6)
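To make the operator F in (2.4) and the iteration (2.5) concrete, the following Python sketch applies them to a uniformly sampled signal. It is only an illustrative discretization, not the exact construction used later in the experiments: the block layout, the parameter values, and the choice of pairwise averaging to realize $\psi(w_i^{-1}(\cdot))$ on the grid are assumptions made for this example, with each range block twice the length of its domain block, as in Section 6.

import numpy as np

def apply_ft(psi, domain_blocks, range_blocks, P, Q):
    """One application of F in (2.4): (F psi)(x) = v_i(psi(w_i^{-1}(x))) on each domain block D_i.

    psi           : sampled signal (1-D numpy array)
    domain_blocks : list of (start, stop) index pairs; disjoint, covering the support
    range_blocks  : list of (start, stop) index pairs, each twice the domain-block length
    P, Q          : vertical map parameters, v_i(y) = P_i * y + Q_i
    """
    out = np.zeros_like(psi, dtype=float)
    for (d0, d1), (r0, r1), p, q in zip(domain_blocks, range_blocks, P, Q):
        # w_i^{-1} maps D_i onto R_i; on the grid we realise psi(w_i^{-1}(x)) by
        # shrinking the range block to the domain-block length (averaging pairs).
        shrunk = psi[r0:r1].reshape(d1 - d0, 2).mean(axis=1)
        out[d0:d1] = p * shrunk + q          # v_i(psi(w_i^{-1}(x)))
    return out

# Example: 4 domain blocks of length 64 on a 256-point grid, range blocks of length 128.
n = 256
domain_blocks = [(0, 64), (64, 128), (128, 192), (192, 256)]
range_blocks  = [(0, 128), (64, 192), (128, 256), (0, 128)]
P = [0.4, -0.3, 0.5, 0.2]                    # |P_i| < 1
Q = [10.0, -5.0, 0.0, 20.0]

psi = np.random.randn(n)                     # an arbitrary starting function psi^(0)
for _ in range(20):                          # the orbit psi^(n) = F psi^(n-1) of (2.5)
    psi = apply_ft(psi, domain_blocks, range_blocks, P, Q)

Iterating apply_ft produces the orbit $\psi^{(n)}$ of (2.5); the lemma below gives conditions under which this orbit converges.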
Then, under the assumption that $|P_i| < 1$ for $i = 1, \ldots, 2^L$, Barnsley and Hurd (1993, page 186) have shown that there exists a fixed point, $f$ say, of $F$ in $L^\infty(\Box)$ such that $f = Ff$, that is,

  $f(x) = v_i\big(f(w_i^{-1}(x))\big), \quad \forall x \in D_i, \; i = 1, \ldots, 2^L.$   (2.7)

Now we generalize Barnsley's fractal transform. Suppose that the space $\Box \subset \Omega \subset T$, $\bigcup_{i=1}^{2^L} D_i = \Box$ and $\bigcup_{i=1}^{2^L} R_i \subset \Omega$. Obviously $L^2(\Omega) \supset L^2(\Box)$. Now define a generalized fractal transform by, for all $\psi \in L^2(\Omega)$,

  $\tilde F\psi(x) = \begin{cases} v_i\big(\psi(w_i^{-1}(x))\big) & \text{if } x \in D_i, \; i = 1, \ldots, 2^L, \\ 0 & \text{if } x \in \Omega - \Box. \end{cases}$   (2.8)

Lemma (Convergence of the generalized fractal transform (GFT)).

1. Suppose $|P_i| < 1$ for $i = 1, \ldots, 2^L$. Then there exists a fixed point, $f$ say, of $\tilde F$ in $L^\infty(\Omega)$ such that $f = \tilde F f$, that is,

  $f(x) = \begin{cases} v_i\big(f(w_i^{-1}(x))\big) & \forall x \in D_i, \; i = 1, \ldots, 2^L, \\ 0 & \forall x \in \Omega - \Box. \end{cases}$

2. Suppose $\sum_{i=1}^{2^L} P_i^2 |a_i| < 1$. Then there exists a fixed point, $f$ say, of $\tilde F$ in $L^2(\Omega)$ such that $f = \tilde F f$, that is,

  $f(x) = \begin{cases} v_i\big(f(w_i^{-1}(x))\big) & \forall x \in D_i, \; i = 1, \ldots, 2^L, \\ 0 & \forall x \in \Omega - \Box. \end{cases}$

Proof of (i): For all $\psi_1, \psi_2 \in L^\infty(\Omega)$,

  $\|\tilde F\psi_1 - \tilde F\psi_2\|_\infty = \sup\{|\tilde F\psi_1(x) - \tilde F\psi_2(x)| : x \in \Omega\} = \sup\{|\tilde F\psi_1(x) - \tilde F\psi_2(x)| : x \in \Box\} \le \max_{1 \le i \le 2^L}\{|P_i|\}\, \|\psi_1 - \psi_2\|_\infty.$

So $\tilde F$ is a contraction operator on $L^\infty(\Omega)$, and therefore there exists a unique fixed point, $f$ say, of $\tilde F$ in the space $L^\infty(\Omega)$.

Proof of (ii): For all $\psi_1, \psi_2 \in L^2(\Omega)$,

  $\|\tilde F\psi_1 - \tilde F\psi_2\|^2 = \int_\Omega [\tilde F\psi_1(x) - \tilde F\psi_2(x)]^2\,dx = \sum_{i=1}^{2^L} \int_{D_i} [\tilde F\psi_1(x) - \tilde F\psi_2(x)]^2\,dx = \sum_{i=1}^{2^L} P_i^2 \int_{D_i} [\psi_1(w_i^{-1}(x)) - \psi_2(w_i^{-1}(x))]^2\,dx = \sum_{i=1}^{2^L} P_i^2 |a_i| \int_{R_i} [\psi_1(x) - \psi_2(x)]^2\,dx \le \sum_{i=1}^{2^L} P_i^2 |a_i|\, \|\psi_1 - \psi_2\|^2.$

Again $\tilde F$ is a contraction operator on $L^2(\Omega)$, and therefore there exists a unique fixed point, $f$ say, of $\tilde F$ in the space $L^2(\Omega)$.

Clearly such a function f is only locally affine. Given an arbitrary function g, the aim of FT approximation to g is to find the best FT such that its fixed point f (a FT function, say) is closest to g. In Section 4 we will provide a theoretical basis for such FT approximation.

Note: In the rest of this paper we still use FT to denote a generalized FT, for simplicity.
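As a quick numerical check of part (i) of the lemma, the sketch below (the same illustrative discretization as before, with arbitrary block choices of our own) iterates the generalized FT from two different starting functions; the sup-norm gap between the two orbits shrinks by at least a factor of $\max_i |P_i|$ per step, so both orbits approach the same fixed point.

import numpy as np

def apply_gft(psi, maps):
    """One application of the GFT (2.8) on a uniformly sampled signal.
    maps: list of ((d0, d1), (r0, r1), P_i, Q_i) with r1 - r0 == 2 * (d1 - d0)."""
    out = np.zeros_like(psi, dtype=float)      # zero outside the domain blocks
    for (d0, d1), (r0, r1), p, q in maps:
        shrunk = psi[r0:r1].reshape(d1 - d0, 2).mean(axis=1)
        out[d0:d1] = p * shrunk + q
    return out

maps = [((0, 64), (0, 128), 0.5, 5.0),
        ((64, 128), (128, 256), -0.4, 0.0),
        ((128, 192), (64, 192), 0.3, -3.0),
        ((192, 256), (0, 128), 0.6, 8.0)]      # max |P_i| = 0.6 < 1

psi1, psi2 = np.random.randn(256), 100 * np.random.randn(256)
for n in range(1, 11):
    psi1, psi2 = apply_gft(psi1, maps), apply_gft(psi2, maps)
    print(n, np.max(np.abs(psi1 - psi2)))      # shrinks by a factor <= 0.6 per step

The printed gap decays at least geometrically with ratio $\max_i |P_i| = 0.6$, in line with the contraction bound in the proof of (i).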
3 Wavelet-based multiresolution approximation

Resolution is a fundamental feature when viewing images and, of course, when extracting information from them; indeed, it is an inherent quality of any process of physical observation and measurement. According to Mallat's definition (Mallat, 1989), a multiresolution approximation of $L^2(T)$ is a sequence $\{V_j\}_{j \in Z}$ of closed subspaces of $L^2(T)$ such that the following conditions hold:

M1. $V_j \subset V_{j+1}$ for $j \in Z$.
M2. $\bigcup_{j \in Z} V_j$ is dense in $L^2(T)$.
M3. $f(x) \in V_j \Leftrightarrow f(2x) \in V_{j+1}$ for all $j \in Z$.

The other two conditions for the multiresolution approximation are:

M4. If $f(x) \in V_j$, then $f(x - 2^{-j}n) \in V_j$ for each $n \in Z$.
M5. There is an unconditional basis of $V_j$.

In addition there is one more condition, namely that the intersection of all the $V_j$ contains only the zero element. In the following we only use the index set $Z^+ = \{1, 2, 3, \ldots\}$, so it would be somewhat unrealistic to impose the assumption that the intersection of all the $V_j$ is $\{0\}$. Secondly, in this paper we are not interested in the translation property of the approximation, which is useful for the construction of wavelets. Thirdly, it is easy to see that a linear combination of FT functions need not be a FT function, so it is somewhat unrealistic to hope for a system of FT basis functions. Wavelet-based multiresolution approximation has developed rapidly in recent years, both in theory and in applications; see, for example, Chui (1992) and Daubechies (1992).
4 Fractal multiresolution approximation

Multiresolution approximation often provides better and faster encoding results; see Fisher (1994), for example, where quadtree-partition and HV-partition fractal transform algorithms are developed. In the following, we will see that the sizes of the domain and range blocks and the number of affine maps play an important role in the fractal transform encoding method. For an integer $L > 0$, define $\hat V_L$ to be a function subspace of $L^2(T)$ by

  $\hat V_L = \big\{\, f : T \to T \mid \forall j \in Z$, on the interval $[j, j+1]$ there exists a generalized FT function $g$ with domain blocks $D_1, \ldots, D_{2^L}$, range blocks $R_1, \ldots, R_{2^L}$ and affine maps $W_1, \ldots, W_{2^L}$ such that $\bigcup_{i=1}^{2^L} D_i = [j, j+1]$, $\bigcup_{i=1}^{2^L} R_i \subset T$ and $f = g$ on $[j, j+1] \,\big\},$   (4.1)

where we call L the level of resolution. Since $\hat V_L$ is not a linear subspace of $L^2(T)$, let $V_L$ be the closure of the linear span of $\hat V_L$ in $L^2(T)$. Then we have the following conclusion.

Theorem 1 (Multiresolution Fractal Transform Approximation Theorem):

1. $V_L \subset V_{L+1}$ for $L \in Z^+$.
2. $f(\cdot) \in V_L \Leftrightarrow f(2\,\cdot) \in V_{L+1}$ for $L \in Z^+$.
3. $\overline{\bigcup_{L \in Z^+} V_L} = L^2(T)$.

Property (i) states that every FT function at level L is a FT function at the finer level L+1, and the total number of FT functions at level L+1 is larger than that at level L. Thus, when we do approximation or representation, there are more candidate FT functions at level L+1 than at level L.

Property (ii) states that, having viewed a (continuous-time) image f in the space $V_L$, i.e. at level L, if we wish to zoom into f to see more detail, then we have to go to the space $V_{L+1}$, i.e. to level L+1. Equivalently, we have to double the number of affine maps, domain blocks and range blocks at level L+1 relative to level L. The process also works in the other direction: if the image f sits in the space $V_{L+1}$, then its coarser version lies in the space $V_L$, with half the number of affine maps, domain blocks and range blocks.

Property (iii) states a very important fact: every function in $L^2(T)$ can be approximated progressively by a sequence of FT functions with increasing levels of resolution. Since the members of $L^2(T)$ are plentiful enough to cover almost every real-world image, this provides a theoretical justification for the success of FT encoding algorithms on real-world images.
Proof of (i): For all $f \in V_L$, without loss of generality we only consider f on the interval [0, 1]. By formula (2.7), we have

  $f(x) = v_i\big(f(w_i^{-1}(x))\big), \quad \forall x \in D_i, \; i = 1, \ldots, 2^L.$

Divide the $\{D_i\}$, $i = 1, \ldots, 2^L$, into finer domain blocks $\{\tilde D_j\}$, $j = 1, \ldots, 2^{L+1}$, such that

  $\tilde D_{2i-1} \cup \tilde D_{2i} = D_i \quad \text{for } i = 1, \ldots, 2^L.$

Correspondingly, set range blocks $\tilde R_j = w_{\lceil j/2 \rceil}^{-1}(\tilde D_j)$ and affine maps $\tilde w_j = w_{\lceil j/2 \rceil}$ and $\tilde v_j = v_{\lceil j/2 \rceil}$ for $j = 1, \ldots, 2^{L+1}$. Therefore we have

  $f(x) = \tilde v_j\big(f(\tilde w_j^{-1}(x))\big), \quad \forall x \in \tilde D_j, \; j = 1, \ldots, 2^{L+1}.$

This implies that $V_L \subset V_{L+1}$.
Proof of (ii): For all $f \in V_L$, we still consider only the function f on [0, 1]. We know that f is a FT function on [0, 1] and on [1, 2]. There are two groups of domain blocks, range blocks and affine maps such that

  $f(x) = v^{(1)}_i\big(f(\{w^{(1)}_i\}^{-1}(x))\big), \quad \forall x \in D^{(1)}_i, \; i = 1, \ldots, 2^L, \quad \text{with } \bigcup_{i=1}^{2^L} D^{(1)}_i = [0, 1],$   (4.2)

and

  $f(x) = v^{(2)}_i\big(f(\{w^{(2)}_i\}^{-1}(x))\big), \quad \forall x \in D^{(2)}_i, \; i = 1, \ldots, 2^L, \quad \text{with } \bigcup_{i=1}^{2^L} D^{(2)}_i = [1, 2].$   (4.3)

Define $D^{(1)}_i/2 = \{x \mid 2x \in D^{(1)}_i\}$ and $D^{(2)}_i/2 = \{x \mid 2x \in D^{(2)}_i\}$. From (4.2), we have

  $f(2 \cdot 2^{-1}x) = v^{(1)}_i\big(f(2 \cdot 2^{-1}\{w^{(1)}_i\}^{-1}(2 \cdot 2^{-1}x))\big), \quad \forall x \in D^{(1)}_i.$

Let $y = 2^{-1}x$. Then, when $x \in D^{(1)}_i$, $y \in D^{(1)}_i/2$. We obtain

  $f(2y) = v^{(1)}_i\big(f(2 \cdot 2^{-1}\{w^{(1)}_i\}^{-1}(2y))\big), \quad \forall y \in D^{(1)}_i/2.$

Define $\{\tilde w^{(1)}_i\}^{-1}(\cdot) = 2^{-1}\{w^{(1)}_i\}^{-1}(2\,\cdot)$. We have

  $f(2y) = v^{(1)}_i\big(f(2\{\tilde w^{(1)}_i\}^{-1}(y))\big), \quad \forall y \in D^{(1)}_i/2, \; i = 1, \ldots, 2^L.$

Similarly, we have

  $f(2y) = v^{(2)}_i\big(f(2\{\tilde w^{(2)}_i\}^{-1}(y))\big), \quad \forall y \in D^{(2)}_i/2, \; i = 1, \ldots, 2^L.$

For $i = 1, \ldots, 2^L, \ldots, 2^{L+1}$, denote

  $\tilde R_i = \begin{cases} R^{(1)}_i & \text{if } i \le 2^L, \\ R^{(2)}_{i-2^L} & \text{if } 2^L < i \le 2^{L+1}, \end{cases} \qquad \tilde D_i = \begin{cases} D^{(1)}_i & \text{if } i \le 2^L, \\ D^{(2)}_{i-2^L} & \text{if } 2^L < i \le 2^{L+1}, \end{cases}$

and

  $\tilde w_i = \begin{cases} w^{(1)}_i & \text{if } i \le 2^L, \\ w^{(2)}_{i-2^L} & \text{if } 2^L < i \le 2^{L+1}, \end{cases} \qquad \tilde v_i = \begin{cases} v^{(1)}_i & \text{if } i \le 2^L, \\ v^{(2)}_{i-2^L} & \text{if } 2^L < i \le 2^{L+1}. \end{cases}$

Then we have

  $f(2y) = \tilde v_i\big(f(2\tilde w_i^{-1}(y))\big), \quad \forall y \in \tilde D_i/2, \; i = 1, \ldots, 2^{L+1}.$

Since $\bigcup_{i=1}^{2^L} D^{(1)}_i/2 = [0, 1]/2 = [0, 1/2]$ and $\bigcup_{i=1}^{2^L} D^{(2)}_i/2 = [1, 2]/2 = [1/2, 1]$, $f(2\,\cdot)$ is a FT function on [0, 1] using $2 \cdot 2^L = 2^{L+1}$ domain blocks $\{\tilde D_i/2\}$ and range blocks $\{\tilde R_i\}$. Therefore $f(2\,\cdot) \in V_{L+1}$.

Conversely, without loss of generality we still consider FT functions on [0, 1]. If $f(2\,\cdot) \in V_{L+1}$, then on [0, 1] there are affine maps $\{w_i\}$, domain blocks $\{D_i\}$ and range blocks $\{R_i\}$ such that

  $f(2y) = v_i\big(f(2 w_i^{-1}(y))\big), \quad \forall y \in D_i, \; i = 1, \ldots, 2^{L+1}.$

Letting $x = 2y$, we have

  $f(x) = v_i\big(f(2 w_i^{-1}(2^{-1}x))\big), \quad \forall x \in 2D_i, \; i = 1, \ldots, 2^{L+1},$

where $2D_i = \{2y \mid \forall y \in D_i\}$. Since $\bigcup_{i=1}^{2^{L+1}} D_i = [0, 1]$, we know that $\bigcup_{i=1}^{2^{L+1}} 2D_i = [0, 2]$. Therefore there exists an integer $\tilde L \le L$ such that $[0, 1] \subset \bigcup_{i=1}^{2^{\tilde L}} 2D_i$. Define affine maps $\tilde w_i^{-1}(x) = 2 w_i^{-1}(2^{-1}x)$, domain blocks $\tilde D_i = [0, 1] \cap 2D_i$ and range blocks $\tilde R_i = \tilde w_i^{-1}(\tilde D_i)$. So the restriction of f to [0, 1] is a FT function. This implies that $f \in V_{\tilde L}$. Since $\tilde L \le L$, by property (i) above, we obtain $f \in V_L$.
Proof of (iii): We will show this fact by proving that any step function is a FT function. For $j \in Z$ and $L > 0$, on the interval $[j, j+1]$ let the domain block $D_i$ be $[\,j + (i-1)2^{-L},\; j + i\,2^{-L}\,]$ for $i = 1, \ldots, 2^L$. Now, for each $1 \le i \le 2^L$, set the contraction factor $P_i = 0$, i.e.,

  $W_i \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_i & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} b_i \\ Q_i \end{pmatrix}, \quad i = 1, \ldots, 2^L,$

and let the range block be $R_i = w_i^{-1}(D_i)$. Define a step function f on T by $f(x) = Q_i$ if $x \in D_i$, and let G be the graph of f, that is,

  $G = \Big\{ \begin{pmatrix} x \\ f(x) \end{pmatrix} \,\Big|\, \forall x \in T \Big\}.$

We have

  $W_i\big(R_i \times f(R_i)\big) = \Big\{ W_i \begin{pmatrix} x \\ f(x) \end{pmatrix} \,\Big|\, x \in R_i \Big\} = \Big\{ \begin{pmatrix} a_i & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ f(x) \end{pmatrix} + \begin{pmatrix} b_i \\ Q_i \end{pmatrix} \,\Big|\, x \in R_i \Big\} = \Big\{ \begin{pmatrix} a_i x + b_i \\ Q_i \end{pmatrix} \,\Big|\, x \in R_i \Big\}.$

Since $a_i(R_i) + b_i = w_i(R_i) = D_i$, we obtain $W_i(R_i \times f(R_i)) = D_i \times f(D_i)$. Therefore, we have

  $W(G) = \bigcup_{i=1}^{2^L} W_i\big(R_i \times f(R_i)\big) = \bigcup_{i=1}^{2^L} D_i \times f(D_i) = G.$

This implies that the step function f is a FT function. Since such step functions are dense in $L^2(T)$, property (iii) follows.
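The construction in the proof of (iii) can be checked directly on a grid: with $P_i = 0$ and $Q_i$ equal to the step values, a single application of the FT sends any starting function to the step function, which is therefore the fixed point. The grid size, block layout and step heights in the Python sketch below are arbitrary choices made only for the illustration.

import numpy as np

n, L = 256, 4                                  # 2^L = 16 domain blocks of equal width
width = n // 2**L
Q = np.random.uniform(-100, 100, 2**L)         # the step heights Q_i
step = np.repeat(Q, width)                     # the target step function f

def apply_ft_zero_contraction(psi):
    """F psi with P_i = 0: on D_i the output is simply the constant Q_i,
    so the content of the range blocks is irrelevant."""
    out = np.empty_like(psi, dtype=float)
    for i in range(2**L):
        out[i * width:(i + 1) * width] = 0.0 * psi[i * width:(i + 1) * width] + Q[i]
    return out

start = np.random.randn(n)                     # any starting function
print(np.allclose(apply_ft_zero_contraction(start), step))   # True: f = Ff in one step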
5 FT in high dimensions

For a still image such as a grey-level image, we can define a graph space

  $\partial(S) = \{(x_1, x_2, \psi(x_1, x_2)) \mid \forall (x_1, x_2) \in S \text{ and } \psi \in L^2(S)\},$

where S is a closed subset of $T^2$. For simplicity, let S be a tensor product of two subsets $\Box_1$ and $\Box_2$ of T, i.e., $S = \Box_1 \times \Box_2$. For k = 1, 2, let $\{D^{(k)}_1, \ldots, D^{(k)}_{2^L}\}$ be disjoint sub-regions (domain blocks) of $\Box_k$ such that $\bigcup_{i=1}^{2^L} D^{(k)}_i = \Box_k$, and let the domain blocks $D_{i,j}$ in S be the tensor products $D_{i,j} = D^{(1)}_i \times D^{(2)}_j$ for $i, j = 1, \ldots, 2^L$. Define functions $v_{i,j}(y) = P_{i,j}\, y + Q_{i,j}$ and

  $w_{i,j}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a_{i,j} & b_{i,j} \\ c_{i,j} & d_{i,j} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} R_x^{(i,j)} \\ R_y^{(i,j)} \end{pmatrix}, \quad i, j = 1, \ldots, 2^L.$   (5.1)

For $i, j = 1, \ldots, 2^L$, let the range block $R_{i,j}$ in $T^2$ be

  $R_{i,j} = w_{i,j}^{-1}(D_{i,j}), \quad i, j = 1, \ldots, 2^L.$   (5.2)

For $i, j = 1, \ldots, 2^L$, define affine maps $W_{i,j}$ from $R_{i,j} \times T$ to $D_{i,j} \times T$ by

  $W_{i,j}\begin{pmatrix} x_1 \\ x_2 \\ y \end{pmatrix} = \begin{pmatrix} a_{i,j} & b_{i,j} & 0 \\ c_{i,j} & d_{i,j} & 0 \\ 0 & 0 & P_{i,j} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ y \end{pmatrix} + \begin{pmatrix} R_x^{(i,j)} \\ R_y^{(i,j)} \\ Q_{i,j} \end{pmatrix}, \quad i, j = 1, \ldots, 2^L.$   (5.3)

Similar to the discussion in Section 2, we can define a fractal transform on $L^2(S)$ by, for all $\psi \in L^2(S)$,

  $F\psi(x_1, x_2) = v_{i,j}\big(\psi(w_{i,j}^{-1}(x_1, x_2))\big), \quad \forall (x_1, x_2) \in D_{i,j}, \; i, j = 1, \ldots, 2^L.$   (5.4)

Similarly, we have a FT function f such that

  $f(x_1, x_2) = v_{i,j}\big(f(w_{i,j}^{-1}(x_1, x_2))\big), \quad \forall (x_1, x_2) \in D_{i,j}, \; i, j = 1, \ldots, 2^L.$   (5.5)

Similar to the discussion in Section 4, we can prove that this constitutes a FT multiresolution approximation to the space $L^2(T^2)$, and obviously such an approximation also applies to higher-dimensional cases such as image sequences and 3-D images.
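The following is a minimal Python sketch of the two-dimensional transform (5.4) for a sampled grey-level image, under simplifying assumptions of our own: the blocks are tensor products aligned with the pixel grid, each range block has twice the side length of its domain block, and $w_{i,j}^{-1}$ is realized on the grid by 2x2 pixel averaging. All block positions and parameter values below are illustrative.

import numpy as np

def apply_ft_2d(img, maps, B):
    """One application of the 2-D FT (5.4): on each BxB domain block the 2Bx2B range
    block is shrunk by 2x2 averaging, scaled by P_{i,j} and offset by Q_{i,j}.
    maps: list of ((dr, dc), (rr, rc), P, Q) giving the top-left corners of the
    domain and range blocks and the vertical map parameters."""
    out = np.zeros_like(img, dtype=float)
    for (dr, dc), (rr, rc), p, q in maps:
        block = img[rr:rr + 2 * B, rc:rc + 2 * B]
        shrunk = block.reshape(B, 2, B, 2).mean(axis=(1, 3))   # grid version of w^{-1}
        out[dr:dr + B, dc:dc + B] = p * shrunk + q             # v_{i,j}
    return out

# Example: a 64x64 image split into 8x8 domain blocks, each paired with a randomly
# placed 16x16 range block; with all |P_{i,j}| < 1 the iteration converges.
rng = np.random.default_rng(0)
N, B = 64, 8
maps = [((r, c),
         (int(rng.integers(0, N - 2 * B + 1)), int(rng.integers(0, N - 2 * B + 1))),
         0.4,
         float(rng.uniform(0, 50)))
        for r in range(0, N, B) for c in range(0, N, B)]
img = np.zeros((N, N))
for _ in range(15):
    img = apply_ft_2d(img, maps, B)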
6 Simulations

In this section we select several one-dimensional functions and data sets as examples to see how the multiresolution fractal transform (MFT) approach provides a progressive approximation to them. Here our first concern is how well the MFT approximates or, equivalently, how strong the MFT's representation ability is for any type of data; compressing the data is only our second concern. Representation ability is one of the fundamental bases for good compression. Although we have proved this representation ability theoretically, we still wish to see how it behaves for different types of data.

First we define two measures of MFT approximation. The signal-to-noise ratio is defined as

  $\mathrm{SNR} = -10 \log\!\left( \frac{\sum (\text{Original Data} - \text{Approximating Data})^2}{\sum (\text{Original Data})^2} \right),$   (6.1)

and the Hausdorff error is defined by

  $\mathrm{Error}_H = H(G, \hat G),$   (6.2)

where H is the Hausdorff distance, G is the graph of the true function f, i.e., $G = \{(x, f(x)) \mid x \in R\}$, and $\hat G$ is the graph of the approximant $\hat f$, i.e., $\hat G = \{(x, \hat f(x)) \mid x \in R\}$.

In all our simulations we use a data size of 256 uniformly, namely $f(1), \ldots, f(256)$, and start the search with 4 domain blocks of equal size 64. Given the optimal domain blocks at level k, we split each domain block into two sub-blocks to form the domain-block pool for the level k+1 approximation, as illustrated in Figure 1.

Figure 1 is about here.

Let S be a sub-interval of [0, 255] and denote by $f_S$ the restriction of f to S. The new optimal sub-domain blocks are obtained by the following minimization procedures:

  $\min\big\{\, \| v^{k+1}_{2i}(f_{R^{k+1}_{2i}}) - f_{D^{k+1}_{2i}} \| \,\big\}$  and  $\min\big\{\, \| v^{k+1}_{2i+1}(f_{R^{k+1}_{2i+1}}) - f_{D^{k+1}_{2i+1}} \| \,\big\},$   (6.3)

where $R^{k+1}_{2i}$ and $R^{k+1}_{2i+1}$ are the range blocks corresponding to the domain blocks $D^{k+1}_{2i}$ and $D^{k+1}_{2i+1}$, respectively. Each minimization involves searching for the optimal splitting point of the domain block and the locations of the two range blocks. We have used a range block size which is always twice the size of its domain block; this means that when the two optimal domain sub-blocks have unequal sizes, so do their range blocks. We accept the new domain blocks $D^{k+1}_{2i}$ and $D^{k+1}_{2i+1}$ as a replacement for $D^k_i$ provided that the following internal stopping condition is satisfied:

  $\min\big\{\, \| v^{k+1}_{2i}(f_{R^{k+1}_{2i}}) - f_{D^{k+1}_{2i}} \| \,\big\} + \min\big\{\, \| v^{k+1}_{2i+1}(f_{R^{k+1}_{2i+1}}) - f_{D^{k+1}_{2i+1}} \| \,\big\} \le \min\big\{\, \| v^{k}_{i}(f_{R^{k}_{i}}) - f_{D^{k}_{i}} \| \,\big\};$   (6.4)

otherwise we discard these two new domain sub-blocks and do not process the domain block $D^k_i$ further. In addition, we use three external stopping conditions in our search algorithm: (a) a minimum length of the domain blocks; (b) a minimum Hausdorff error of approximation; (c) a maximum number of partition levels. In all of the following examples we have used 10, 0.1 and 10 for these three stopping conditions, respectively.
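The Python sketch below illustrates one refinement pass of the search just described, going from the level-k domain blocks to level k+1. It is a simplified reading of (6.3)-(6.4), with assumptions of our own: the vertical map parameters P, Q are fitted by least squares, candidate range-block positions are scanned on a coarse step to keep the example cheap, and the external stopping conditions are reduced to a minimum block length.

import numpy as np

def collage_error(f, d0, d1, r0):
    """Best ||v(f_R shrunk to domain length) - f_D|| over v(y) = P*y + Q,
    for a range block of length 2*(d1 - d0) starting at index r0."""
    target = f[d0:d1]
    shrunk = f[r0:r0 + 2 * (d1 - d0)].reshape(d1 - d0, 2).mean(axis=1)
    A = np.column_stack([shrunk, np.ones_like(shrunk)])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)     # least-squares fit of (P, Q)
    return np.linalg.norm(A @ coef - target)

def best_range(f, d0, d1):
    """Search candidate range-block locations (of twice the domain length) for this block."""
    length = 2 * (d1 - d0)
    return min(collage_error(f, d0, d1, r0)
               for r0 in range(0, len(f) - length + 1, 4))  # coarse search step

def refine(f, d0, d1, min_len=10):
    """Split [d0, d1) at the best point; keep the split only if it lowers the error, cf. (6.4)."""
    parent_err = best_range(f, d0, d1)
    best_split, best_total = None, parent_err
    for s in range(d0 + min_len, d1 - min_len + 1):
        total = best_range(f, d0, s) + best_range(f, s, d1)
        if total < best_total:
            best_split, best_total = s, total
    if best_split is None:                                  # internal stop: keep the parent
        return [(d0, d1)]
    return [(d0, best_split), (best_split, d1)]

# One pass from the level-2 initialisation (four blocks of length 64) on a 256-point signal.
f = 255 * np.sin(2 * np.pi * np.arange(256) / 255)          # an example signal
blocks = [(0, 64), (64, 128), (128, 192), (192, 256)]
blocks = [child for d0, d1 in blocks for child in refine(f, d0, d1)]

Repeating such passes, and stopping on the Hausdorff-error or maximum-level criteria, would give a multiresolution encoding of the kind whose results are reported in Tables 1-7.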
Example 1. A sinusoid function,

  $f(x) = 255 \sin\!\left(\frac{2\pi x}{255}\right).$   (6.5)

The original data are produced by sampling Equation (6.5) over the region [0, 255]. The results are reported in Table 1 and the approximants are shown in Figure 2. As shown in Table 1, when the algorithm stops the sinusoid function is approximated with 8 domain blocks, the SNR is 46.23 dB and $\mathrm{Error}_H = 0.002913$. The result is very good.

Table 1 is about here.

Figure 2 is about here.
Example 2. A regular function consisting of a rectangle and a triangle. The original data are produced by sampling the regular function over the region [0, 255]. We find that the MFT converges at level 6, the SNR is 32.25 dB, and $\mathrm{Error}_H = 0.094524$, as shown in Table 2. At level 6 we have used 30 domain blocks. The graphs of several approximants at different resolution levels are shown in Figures 3 and 4. We can observe the convergence behaviour from the graphs of the approximants, except at some small parts. When the resolution level is increased, the approximant provides a better approximation to the original graph.

Table 2 is about here.

Figures 3 and 4 are about here.
Example 3. Fractional Brownian motion data. A fractional Brownian motion, $B_H(t)$ say, is a stochastic process whose increments $B_H(t_2) - B_H(t_1)$ have a Gaussian distribution and, for $r > 0$, $B_H(t_2) - B_H(t_1)$ and $r^{-H}(B_H(rt_2) - B_H(rt_1))$ have the same statistical distribution. We produce the original data with parameters H = 0.85 and r = 3. The MFT approximant converges at level 6, except for some very fine details of the original data which are not recovered. The corresponding SNR is 27.58 dB and $\mathrm{Error}_H = 0.057237$. We have used 27 domain blocks. The results at other levels are shown in Table 3. The graphs of the approximants at different resolution levels are shown in Figures 5 and 6.

Table 3 is about here.

Figures 5 and 6 are about here.
Example 4: Monthly New York measles data (1928-1950). There is potential chaotic behaviour in these measles data (see Cheng and Tong, 1992, for example). It has proved unsatisfactory to fit these data by such conventional methods as kernel smoothing, splines and non-linear autoregression; the difficulty comes from the sharp data pattern. The MFT copes with this difficulty very easily, especially for the highest peak in the middle. As far as we understand, conventional methods cannot preserve this peak without sacrificing the peaks in other parts, whereas the MFT treats every peak, even the edge peaks, equally well. At level 7 the corresponding SNR is 11.08 dB and $\mathrm{Error}_H = 0.377015$, and we have used 27 domain blocks. The results at other levels are shown in Table 4. The graphs of the other approximants at different resolution levels are shown in Figures 7 and 8.

Table 4 is about here.

Figures 7 and 8 are about here.
Example 5: Daily exchange rate between the US dollar and the German deutschmark in 1989. Stock market data are chaotic in very high dimensions, and current low-dimensional chaotic dynamical systems do not fit this type of data very well. However, one advantage of fractal modelling is its adaptability to such irregular data; more strongly, some people even claim that "the stock market is fractal!" (see Peters, 1994, for example). Not surprisingly, the MFT converges at level 5. The corresponding SNR is 45.67 dB and $\mathrm{Error}_H = 0.114177$. We have used 17 domain blocks. The results at other levels are shown in Table 5. The graphs of several approximants at different resolution levels are shown in Figures 9 and 10.

Table 5 is about here.

Figures 9 and 10 are about here.
Example 6: Male speech data. The MFT converges at level 6. The corresponding SNR is 23.09 dB and $\mathrm{Error}_H = 0.416158$. We have used 26 domain blocks. The results at other levels are shown in Table 6. The graphs of several approximants at different resolution levels are shown in Figures 11 and 12.

Table 6 is about here.

Figures 11 and 12 are about here.
Example 7: Daubechies' wavelet, for which we use Daubechies' smoothness order 2 (see Daubechies, 1992). There are many ways to generate wavelets; one is to use an IFS method based on the dilation equation. Here, however, we treat the wavelet as data of unknown type and see how the MFT approximates it. The MFT converges at level 6. The corresponding SNR is 35.63 dB and $\mathrm{Error}_H = 0.016611$. We have used 15 domain blocks. The results at other levels are shown in Table 7. The graphs of several approximants at different resolution levels are shown in Figures 13 and 14.

Table 7 is about here.

Figures 13 and 14 are about here.
7 Discussion

In this paper we have provided a theoretical justification for the successful application of the FT coding algorithm to real-world data, namely a wavelet-like multiresolution approximation (or, equivalently, representation) framework. Compared with the wavelet-based approximation, the FT has a finer and more compact parametric representation and can be generalized very easily to high-dimensional cases. The level of approximation resolution is controlled by the number of affine maps, domain blocks and range blocks.

However, there is a fundamental difference between function approximation or representation by the fractal transform method and by the wavelet method. The wavelet method is a basis-function method which localizes a function in both the time and frequency domains (the Fourier transform can localize a function only in the frequency domain); after such localization, the function is recovered as a linear combination of these local components. The fractal transform works the other way around. Since a linear combination of FT functions need not be a FT function, we cannot expect a system of FT basis functions. The fractal transform method handles representation complexity by increasing its model complexity rather than, like the wavelet method, reducing representation complexity by decomposing the function into simple components. In this sense, the fractal transform is self-representative and grows its own complexity; it provides us with a new way to represent complexity.

Finally, we would like to discuss the possibility of lossless MFT encoding. Looking at Tables 1-7, as the level of resolution increases we see the SNR increase and $\mathrm{Error}_H$ decrease. However, after passing some level, the SNR decreases slightly and $\mathrm{Error}_H$ increases slightly, which we did not expect to happen. One possible reason for this negative phenomenon is that we have always kept the range block size at twice the domain block size during MFT encoding. Nevertheless, we feel that achieving lossless MFT encoding, i.e., zero approximation error, is an extremely difficult task. Our future work in this direction includes providing a rate of convergence for such MFT approximation and representation.
References

[1] M. F. Barnsley, Fractals Everywhere. Academic Press, New York, 1988.

[2] M. F. Barnsley and L. P. Hurd, Fractal Image Compression. AK Peters, Ltd., Wellesley, Massachusetts, 1993.

[3] B. Cheng and H. Tong, "On consistent nonparametric order determination and chaos (with discussion)." J. Royal Statistical Society B 54, 1992, pp. 427-449.

[4] C. K. Chui, An Introduction to Wavelets. Academic Press, London, 1992.

[5] I. Daubechies, Ten Lectures on Wavelets. CBMS-NSF Regional Conference Series in Applied Mathematics 61, SIAM, 1992.

[6] Y. Fisher (Ed.), Fractal Image Compression: Theory and Application to Digital Images. Springer-Verlag, New York, 1994.

[7] S. Mallat, "Multiresolution approximation and wavelet orthonormal bases of L^2(R)." Trans. Amer. Math. Soc. 315, 1989, pp. 69-87.

[8] S. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation." IEEE Trans. Pattern Anal. and Machine Intell. 11, 1989, pp. 674-693.

[9] B. Mandelbrot, The Fractal Geometry of Nature. W. H. Freeman and Co., San Francisco, 1982.

[10] E. E. Peters, Fractal Market Analysis. John Wiley & Sons, Inc., London, 1994.
Table 1: MFT approximation of the sinusoid function.

  Level   DB number   SNR (dB)    Error_H
    2         4       45.049286   0.005915
    3         8       46.224854   0.002913
Table 2: MFT approximation of the regular function.

  Level   DB number   SNR (dB)    Error_H
    2         4        2.879622   0.730478
    3         7        4.655092   0.716075
    4        14        9.782963   0.729920
    5        27       18.719585   0.352425
    6        30       32.247147   0.094524
Table 3: MFT approximation of FBM.

  Level   DB number   SNR (dB)    Error_H
    2         4       18.014635   0.111197
    3         8       23.393778   0.069140
    4        16       25.951838   0.057549
    5        22       27.406368   0.055696
    6        27       27.575560   0.057237
Table 4: MFT approximation of monthly New York measles data (1928-1950).

  Level   DB number   SNR (dB)    Error_H
    2         4        1.215073   0.933693
    3         8        3.997457   0.629130
    4        12        5.444898   0.592882
    5        15        7.223487   0.481565
    6        23       10.372748   0.379159
    7        26       11.075181   0.377015
Table 5: MFT approximation of daily exchange rate data.

  Level   DB number   SNR (dB)    Error_H
    2         4       38.609196   0.189381
    3         8       42.936199   0.131992
    4        11       44.224030   0.114170
    5        17       45.671921   0.114177
Table 6: MFT approximation of male speech data.

  Level   DB number   SNR (dB)    Error_H
    2         4        9.551127   0.558415
    3         8       13.892967   0.599532
    4        16       19.296848   0.496144
    5        24       23.080330   0.416164
    6        26       23.088985   0.416158
Table 7: MFT approximation of Daubechies wavelet data.

  Level   DB number   SNR (dB)    Error_H
    2         4       14.379082   0.236895
    3         7       27.816370   0.059374
    4        13       35.731339   0.016563
    5        14       36.021088   0.016602
    6        15       35.727436   0.016611
Figure 1: Partition of domain blocks. [Diagram: at initialization (L = 2) the data coordinate range 0-256 is divided into four domain blocks of length 64, each paired with candidate range blocks and affine maps; at the further partition (L = 3) each domain block may be split at a possible splitting point. No matter how the domain block size varies, the range block size is always kept at twice the domain block size.]
Figure 2: MFT approximation of the sinusoid function (levels 2-3).

Figure 3: MFT approximation of the regular function (levels 2-4).

Figure 4: MFT approximation of the regular function (levels 5-6).

Figure 5: MFT approximation of FBM (levels 2-4).

Figure 6: MFT approximation of FBM (levels 5-6).

Figure 7: MFT approximation of monthly New York measles data (1928-1950) (levels 2-4).

Figure 8: MFT approximation of monthly New York measles data (1928-1950) (levels 5-7).

Figure 9: MFT approximation of daily US dollar-to-German mark exchange rate data in 1989 (levels 2-4).

Figure 10: MFT approximation of daily US dollar-to-German mark exchange rate data in 1989 (level 5).

Figure 11: Male speech data approximation (levels 2-4).

Figure 12: Male speech data approximation (levels 5-6).

Figure 13: Daubechies' wavelet data approximation (levels 2-4).

Figure 14: Daubechies' wavelet data approximation (levels 5-6).