International Journal of Computer Mathematics Vol. 84, No. 11, November 2007, 1567–1590
Grey video compression methods using fractals

MEIQING WANG*† and CHOI-HONG LAI‡

†College of Mathematics and Computer Science, Fuzhou University, Fuzhou, Fujian 350002, China
‡School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, Greenwich, London SE10 9LS, UK

(Received 03 June 2006; revised version received 25 November 2006; accepted 04 December 2006)

The authors' experience in the treatment of grey video compression using fractals is summarized and compared with other research in the same field. Experience with parallel and distributed computing is also discussed.

Keywords: Fractals; Video compression; Adaptive partition; Parallel algorithm

AMS Subject Classifications: 65K05; 94A08

*Corresponding author. Email: [email protected]
1. Introduction
Image compression methods using fractals have been widely used because of their advantages of a high compression ratio and simple decompression, which makes them particularly suited to the situation of one encoding and many decodings. Fractal theory studies self-similarity in objects. At the beginning of the 1980s, it was used to simulate natural scenes in computer graphics. The self-similarity used in fractal theory is formalized as an iterated function system (IFS), first proposed by Hutchinson in 1981 [1]. Later, Barnsley and colleagues [2, 3] used IFS to simulate natural scenes, such as clouds, trees, and leaves, and implemented the concept using computer programs. Subsequently, fractal image compression, in which a complete image is compressed using IFS, was developed. The first fractal image compression system was described by Jacquin [4] in 1989. From the early 1990s to the mid-1990s, studies of fractal image compression focused on different partition methods, including square, triangular, and rectangular partitions, to improve the quality of decompressed images, and on classifications of range blocks and domain blocks to reduce the run times of compression algorithms [5]. Recently, researchers have focused on two aspects of fractal image compression: fractal image compression for still images, concentrating on hybrid combinations of fractal compression methods and other compression methods, and parallel or distributed algorithms for
[email protected]
International Journal of Computer Mathematics ISSN 0020-7160 print/ISSN 1029-0265 online © 2007 Taylor & Francis http://www.tandf.co.uk/journals DOI: 10.1080/00207160601178299
1568
M. Wang and C.-H. Lai
fractal image compression to reduce run times. Fractal compression methods are used in pseudo-spiral architecture [6, 7], the quad-tree partition method in fractal compression has been substituted by binary partition [8, 9], and a combined fractal and wavelet compression method has been studied [10, 11]. Hartenstein and Saupe [12] proposed a combination of fractal compression and the fast Fourier transform, and Charfi et al. [13] described joint source-channel fractal image coding with unequal error protection. Implementation and performance analysis of parallel fractal image compression algorithms are described in [14, 15].

There have also been a large number of studies of fractal video compression. There are many applications that would benefit from a reliable and fast system of data transmission, such as entertainment applications, video conferencing, video archives and libraries, remote learning, multimedia presentations, and video on demand. Fractal image compression methods can easily be extended to video compression because a video can be considered as a sequence of images in continuous motion. At present, fractal image compression is not as good as state-of-the-art compression technologies, such as MPEG [16], JPEG [17], and H.263 [18]. However, the main advantage of decompressing a fractal compressed image is that it is only necessary to compute the fixed point of a fractal transform operator equation, which is very simple and well suited to the situation of one encoding and many decodings. This provides our motivation for concentrating on fractal video compression techniques.

Two fractal compression methods are used in video compression: cube-based compression [19-21] and frame-based compression [22, 23]. In cube-based compression a sequence of images is divided into groups of frames, each of which is partitioned into non-overlapping cubes. A compression code is computed and stored for every cube. This method, proposed in 1993, has a high computing complexity which made it difficult to implement because of limits on computing power, random access memory, etc. In frame-based compression, the compression code is computed and stored for each frame, and intra-frame or inter-frame self-similarity can be used. The cube-based method gives high-quality decompressed images but the compression ratio is relatively low; the frame-based method has a high compression ratio but the current frame relates to the previous frame in such a way that errors are introduced and spread between frames.

We have proposed a hybrid compression algorithm [24] which is based on an interaction between the frame-based algorithm and the cube-based compression algorithm. An adaptive partition method, which can be combined with either the cube-based method or the hybrid compression method, was introduced in [25, 26]. These methods improve the compression ratio while maintaining a good peak signal-to-noise ratio (PSNR). In addition, they are implemented solely in software, unlike the methods proposed in [27, 28] which need special hardware.

Most state-of-the-art video compression methods [29-32] are based on motion compensation. Fang [33] attempted to combine fractal methods with motion compensation and tested a few image sequences. The experimental results showed that the hybrid combined method performed better than pure fractal compression methods for video sequences with relatively slow movement.
Other combination methods have been reported in the literature, including combinations of domain blocks [34], fractal coding and neighbourhood vector quantization [35], combination of fractals and wavelets [36], and combination of fractals and prediction mapping [37, 38]. The long run time required by fractal video compression algorithms is a major problem. In our recent studies [24–26], we found that compression of a VCD sequence consisting of 16 frames of 8-bit grey images each of 720 × 576 pixels required approximately 2 hours. Such algorithms are impractical for the media industry. In addition to optimizing fractal compression
algorithms [39-42], many researchers use parallel fractal compression algorithms in order to achieve a high speed-up on high-performance machines. Parallel fractal video compression algorithms have been proposed and implemented on supercomputers or on distributed systems of PCs connected by a local network [43-45]. Pommer et al. [46] concentrated on implementing the algorithm on shared-memory systems.

In this paper, we review some basic fractal video compression methods and methods that combine fractal compression with other compression techniques. In particular, we present a hybrid algorithm using cube-based and frame-based compression, an algorithm combining fractal compression and motion compensation, and the use of an adaptive partition method to improve the compression ratio. A parallel algorithm for the fractal cube-based method which reduces the run time for compression is described. Numerical tests are performed to test the compression qualities of fractal video compression algorithms and the speed-up and scalability performance of the parallel fractal video compression algorithm.

The paper is organized as follows. The background mathematical theory for fractal image compression is presented in section 2. The two basic fractal compression algorithms for motion images, the cube-based method and the frame-based method, are discussed, and their advantages and disadvantages are highlighted. In section 3, some combined methods are discussed, in particular a hybrid cube- and frame-based algorithm and a combined fractal compression and motion compensation algorithm. The development of adaptive partition methods is described. A parallel algorithm and parallel environments for fractal video compression methods are presented in section 4, and a brief description of how to extend a fractal grey video compression method to colour video is given in section 5. Finally, the results of numerical tests are given, and some conclusions are presented.
2. Mathematical theory
In this section we give a brief overview of the basic mathematical theory of fractal image compression [24], which has not changed since the late 1980s. Let I be a square monochrome digital image of size 2^N × 2^N, N ≥ 0, defined over the region X = {0, …, 2^N − 1} × {0, …, 2^N − 1}. The image function u : X → R defines the intensity of the pixel (i, j) ∈ X of I. Define the matrix P_X formed by the pixel intensities of X collocated as

\[
P_X = \begin{bmatrix}
u(0,0) & u(0,1) & \cdots & u(0,2^N-1) \\
u(1,0) & u(1,1) & \cdots & u(1,2^N-1) \\
\vdots & & & \vdots \\
u(2^N-1,0) & u(2^N-1,1) & \cdots & u(2^N-1,2^N-1)
\end{bmatrix}. \tag{1}
\]
Suppose that PX is partitioned into non-overlapping submatrices Rs,t , 0 ≤ s, t ≤ 2N −n − 1, known as range blocks, each of size 2n × 2n . Then ⎡
R0,0 R1,0 .. .
⎢ ⎢ PX = ⎢ ⎣ R2N −n −1,0
R0,1 R1,1 R2N −n −1,1
··· ···
R0,2N −n −1 R1,2N −n −1
· · · R2N −n −1,2N −n −1
⎤ ⎥ ⎥ ⎥ ⎦
(2)
where

\[
R_{s,t} = \begin{bmatrix}
u(2^n s, 2^n t) & u(2^n s, 2^n t+1) & \cdots & u(2^n s, 2^n t + 2^n - 1) \\
u(2^n s+1, 2^n t) & u(2^n s+1, 2^n t+1) & \cdots & u(2^n s+1, 2^n t + 2^n - 1) \\
\vdots & & & \vdots \\
u(2^n s + 2^n - 1, 2^n t) & u(2^n s + 2^n - 1, 2^n t+1) & \cdots & u(2^n s + 2^n - 1, 2^n t + 2^n - 1)
\end{bmatrix}.
\]
Suppose that each range block is associated with a set of larger submatrices D̃_k, k = 1, …, n_D, known as domain blocks of P_X, which are usually of size 2^{n+1} × 2^{n+1}. Here, n_D is calculated as 2^{2(N−n)}. Simple neighbouring operations, say A, can be applied to the submatrix D̃_k by averaging the intensities of pairwise disjoint groups of neighbouring pixel intensities. This procedure leads to a 2^n × 2^n matrix denoted symbolically by D_k = A D̃_k, which is also known as a codebook block. For convenience in the development of the mathematical theory the pixels of the matrix P_X are collocated using a row-wise data structure which leads to the pixel vector p ∈ R^{2^{2N}}. The set of all such pixel vectors is denoted R^p, i.e. p ∈ R^p ⊆ R^{2^{2N}}. The submatrices R_{s,t} and D_k are collocated in the same data structure to give the vectors r and d_k, respectively. The sets of all r and d_k are denoted R^r and R^d, respectively, i.e. r ∈ R^r ⊆ R^{2^{2n}} and d_k ∈ R^d ⊆ R^{2^{2n}}. The concepts of range blocks and domain blocks are shown in figure 1.

Let w_i : R^r → R^{2^{2n}} be a contractive mapping such that w_i has a unique fixed point r* ∈ R^r, i.e. w_i(r*) = r*, and define the set of transformations S_i(B) = {w_i(r) : r ∈ B}, for all B ⊆ R^r, 1 ≤ i ≤ K. Let H(R^r) denote the space whose elements are non-empty compact subsets of R^r. The fractal transformation W : H(R^r) → H(R^r) is defined by W(B) = ⊕_{i=1}^{K} S_i(B), for all B ∈ H(R^r), where ⊕ is the direct sum. For any B_1, B_2 ∈ H(R^r),

\[
\begin{aligned}
\| W(B_1) - W(B_2) \| &= \Big\| \bigoplus_{i=1}^{K} S_i(B_1) - \bigoplus_{i=1}^{K} S_i(B_2) \Big\| \\
&= \Big\| \bigoplus_{i=1}^{K} \{ w_i(r_{s,t}) : r_{s,t} \in B_1 \} - \bigoplus_{i=1}^{K} \{ w_i(r_{u,v}) : r_{u,v} \in B_2 \} \Big\| \\
&= \Big\| \bigoplus_{i=1}^{K} \{ w_i(r_{s,t}) - w_i(r_{u,v}) : r_{s,t}, r_{u,v} \in B_1 \cup B_2 \} \Big\| \\
&\le \bigoplus_{i=1}^{K} \{ \| w_i(r_{s,t}) - w_i(r_{u,v}) \| : r_{s,t}, r_{u,v} \in B_1 \cup B_2 \}.
\end{aligned}
\]
Hence W is also a contractive mapping because each w_i is a contractive mapping. The contraction property ensures that there exists a unique B* ∈ H(R^r) such that W(B*) = B*, and that the sequence {B^(0), B^(1), B^(2), …, B^(q), …}, where B^(q) = W(B^(q−1)), converges to B* from an arbitrary starting value B^(0).
Figure 1. Image I partitioned into range blocks and domain blocks.
In image compression the encoding problem is to find W such that W(p) = p, i.e. p = ⊕_{i=1}^{K} w_i(r_{s,t}) = W(p). We consider the transformation defined by

\[
w_i(\mathbf{r}) = \alpha \mathbf{r} + \beta \mathbf{I} \tag{3}
\]

where I = (1, …, 1)^T ∈ R^{2^{2n}}, and α and β are known as the scaling and offset factors, respectively. For any r_{s,t}, r_{u,v} ∈ R^r,

\[
\| w_i(r_{s,t}) - w_i(r_{u,v}) \|_2 = \| \alpha r_{s,t} + \beta I - (\alpha r_{u,v} + \beta I) \|_2 = |\alpha| \, \| r_{s,t} - r_{u,v} \|_2 < \| r_{s,t} - r_{u,v} \|_2,
\]

which leads to −1 < α < 1, ensuring that w_i and W are contractive mappings. For every r ∈ R^r, equation (3) provides a combination of scaling and offset to approximate the transformation W. This can be achieved by comparing the range block r with the codebook blocks d_k, k = 1, …, 2^{2(N−n)}, by solving the minimization problem

\[
\min_{\alpha, \beta} \| \mathbf{r} - (\alpha \mathbf{d}_k + \beta \mathbf{I}) \|_2
\]
where, by the collage theorem, w_i(d_k) ≈ w_i(r). By defining m = 2^{2n} (the number of pixels in a range block) it is possible to derive the relations below for α and β:

\[
\alpha = \begin{cases}
\dfrac{m\, \mathbf{d}_k \cdot \mathbf{r} - \sum_{i=1}^{m} d_{ki} \sum_{i=1}^{m} r_i}{m\, \mathbf{d}_k \cdot \mathbf{d}_k - \left( \sum_{i=1}^{m} d_{ki} \right)^2} & \text{if } m\, \mathbf{d}_k \cdot \mathbf{d}_k - \left( \sum_{i=1}^{m} d_{ki} \right)^2 \neq 0 \\[2ex]
0 & \text{if } m\, \mathbf{d}_k \cdot \mathbf{d}_k - \left( \sum_{i=1}^{m} d_{ki} \right)^2 = 0
\end{cases}
\]

\[
\beta = \frac{1}{m} \left( \sum_{i=1}^{m} r_i - \alpha \sum_{i=1}^{m} d_{ki} \right)
\]

and the rms error E(d_k, r) between αd_k + βI and r is given by
\[
E(\mathbf{d}_k, \mathbf{r}) = \frac{1}{m} \left[ \mathbf{r} \cdot \mathbf{r} + \alpha \left( \alpha\, \mathbf{d}_k \cdot \mathbf{d}_k - 2\, \mathbf{d}_k \cdot \mathbf{r} + 2\beta \sum_{i=1}^{m} d_{ki} \right) + \beta \left( m\beta - 2 \sum_{i=1}^{m} r_i \right) \right]. \tag{4}
\]

Decoding can easily be achieved using the relation B^(q) = W(B^(q−1)). The quality of decompressed images compared with the original images is given by the PSNR. The PSNR of an 8-bit grey image is given by [5]

\[
\mathrm{PSNR} = 10 \times \log_{10} \frac{255^2}{(1/2^{2N}) \sum_{i,j} \left( \hat{u}(i,j) - u(i,j) \right)^2} \tag{5}
\]

where u(i, j) and û(i, j) are the intensities of the original image and the decompressed image, respectively, at the pixel (i, j). It has been found that the PSNR of a decompressed image can be as high as 38-40 dB.
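As an illustration of how equations (3)-(5) are used in an encoder, the following Python sketch computes the least-squares scaling and offset factors and the error E(d_k, r) for one range/codebook block pair. The NumPy-based implementation and the function names are our own illustrative choices, not part of the original algorithms.

import numpy as np

def fit_block(r, d):
    """Least-squares scaling/offset (alpha, beta) and error E for one
    range-block vector r and codebook-block vector d (equations (3)-(4))."""
    r = r.astype(np.float64).ravel()
    d = d.astype(np.float64).ravel()
    m = r.size
    sum_r, sum_d = r.sum(), d.sum()
    denom = m * np.dot(d, d) - sum_d ** 2
    alpha = (m * np.dot(d, r) - sum_d * sum_r) / denom if denom != 0 else 0.0
    beta = (sum_r - alpha * sum_d) / m
    # Error between alpha*d + beta*1 and r, equation (4)
    err = (np.dot(r, r)
           + alpha * (alpha * np.dot(d, d) - 2.0 * np.dot(d, r) + 2.0 * beta * sum_d)
           + beta * (m * beta - 2.0 * sum_r)) / m
    return alpha, beta, err

def psnr(original, decoded):
    """PSNR of an 8-bit grey image, equation (5)."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)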
3. Fractal grey video compression
In this section we summarize current fractal video compression methods for grey images. Two basic fractal video compression methods, the cube-based method and the frame-based method, which are direct extensions of fractal compression methods for still images, are described. A number of combined methods recently proposed in the literature, including the hybrid cube- and frame-based method, combined fractal compression and motion compensation, and combined fractal compression and wavelet transform, are also briefly explained. Finally, adaptive partition methods, which can be used to improve compression ratios, are briefly discussed.

3.1 Extensions for fractal video compression

The mathematical theory of fractal compression of still images can easily be extended to video image sequences. Two different extensions are used to reduce the spatial and temporal redundancy in a video image sequence: cube-based compression and frame-based compression.

3.1.1 Cube-based fractal compression. Let Seq = {f_t, 1 ≤ t ≤ S} be a sequence of images in continuous motion, where f_t is a single image frame as described in section 2. Hence Seq can be considered as a cubic monochrome digital image of size 2^N × 2^N × 2^N, N ≥ 0, defined over the region X = {0, …, 2^N − 1} × {0, …, 2^N − 1} × {0, …, 2^N − 1}. The image function u : X → R defines the intensity of the pixel (i, j, k) ∈ X of Seq. Therefore it is easy to extend the matrices defined in equations (1), (2), and (3) to three-dimensional arrays. In cube-based compression the sequence is divided into groups of frames (GOF), i.e. Seq = {GOF_g : 1 ≤ g ≤ k_s}, each of which can then be compressed and decompressed as an entity. The three-dimensional array of pixels P_X, which is partitioned into non-overlapping sub-arrays R_{i,j,t}, 0 ≤ i, j, t ≤ 2^{N−n} − 1, known as range cubes, each of size 2^n × 2^n × 2^n, is shown in figure 2 (the notation is the same as that used in section 2). Each range cube is associated with a set of larger sub-arrays D̃_k, k = 1, …, n_D, known as domain cubes of P_X, which are usually of size 2^{n+1} × 2^{n+1} × 2^{n+1}. Simple neighbouring operations, say A, can be applied to the sub-array D̃_k by averaging the intensities of disjoint groups of eight neighbouring pixel intensities. This leads to a 2^n × 2^n × 2^n array denoted symbolically as D_k = A D̃_k, which is also known as a codebook cube. P_X is collocated using a row-wise and depth-wise data structure, giving the pixel vector p ∈ R^{2^{3N}}. The set of all such pixel vectors is denoted R^p, i.e. p ∈ R^p ⊆ R^{2^{3N}}. The sub-arrays R_{s,t,g} and D_k are collocated in the same data structure to give the vectors r and d_k, respectively. The sets of all r and d_k are denoted R_g^r and R_g^d, respectively, i.e. r ∈ R_g^r ⊆ R^{2^{3n}} and d_k ∈ R_g^d ⊆ R^{2^{3n}}. Therefore the mathematical theory described in section 2 is valid in the present situation.

In the description above, we consider the image to be square and the lengths of the three edges (along x, y, and the time t) of a range cube to be the same. In practice, the two edge lengths N and M of an image may be different. In general, the size of a range cube R is n × m × l, where n and m can be 16, 8, or 4, and l can be 8, 4, 2, or 1, depending on the rate of image motion. In the following, we define N = n × k_n, M = m × k_m, and the number of frames in a GOF as S = l × k_l. In order to reduce the compression time, the search for an optimal codebook
Figure 2. Indication of range cubes and a domain cube in a GOF.
cube for a given range cube is limited to the adjacent area of the range cube. The set Ω_g includes all the codebook cubes of GOF_g (1 ≤ g ≤ k_s). The cube-based fractal compression method is given by the following algorithm.

ALGORITHM Al_3.1.1 Cube-based fractal compression for video.
Given the image sequence Seq:
  Prepare GOF_g (1 ≤ g ≤ k_s);
  For g = 1, …, k_s
  Begin
    Prepare Ω_g;
    For t = 1 to k_l^g do
      For j = 1 to k_m^g do
        For i = 1 to k_n^g do
        Begin
          For each D_k ∈ Ω_g do
          Begin
            (α_k, β_k) := Solve min_{α,β} ‖V_{R_{i,j,t}} − (αV_{D_k} + βI)‖;
            Compute E(D_k, R_{i,j,t});
          End
          Compute the compression code:
            E(D_opt, R_{i,j,t}) = ‖V_{R_{i,j,t}} − (α_opt V_{D_opt} + β_opt I)‖ := min_{D_k} {E(D_k, R_{i,j,t})};
          Store α_opt, β_opt and the index of D_opt;
        End
    End-For (i, j, t)
  End-For (g).

It should be noted that high-quality decompressed images can be obtained with the cube-based fractal compression algorithm, but the compression ratio is relatively low.
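To make the structure of Al_3.1.1 concrete, the sketch below encodes one GOF with fixed cubic range cubes; the 2 × 2 × 2 averaging that produces codebook cubes and the exhaustive search are written in plain NumPy, and fit_block is the helper shown in section 2. The cube size, the domain-cube grid, and the variable names are illustrative assumptions rather than the exact implementation used in the paper.

import numpy as np

def codebook_cube(domain):
    """Average disjoint 2x2x2 groups of pixels: the operator A applied to a domain cube."""
    t, h, w = domain.shape
    return domain.reshape(t // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))

def encode_gof(gof, rsize=4):
    """Cube-based encoding of one GOF (a t x h x w array of grey values).
    Returns one (alpha, beta, domain_index, error) record per range cube."""
    t, h, w = gof.shape
    dsize = 2 * rsize
    # Candidate domain cubes on a coarse grid; shrink each to a codebook cube.
    positions = [(z, y, x)
                 for z in range(0, t - dsize + 1, rsize)
                 for y in range(0, h - dsize + 1, rsize)
                 for x in range(0, w - dsize + 1, rsize)]
    codebook = [codebook_cube(gof[z:z+dsize, y:y+dsize, x:x+dsize]) for z, y, x in positions]
    codes = []
    for z in range(0, t, rsize):
        for y in range(0, h, rsize):
            for x in range(0, w, rsize):
                r = gof[z:z+rsize, y:y+rsize, x:x+rsize]
                # Pick the codebook cube with the smallest fitting error E.
                best = min((fit_block(r, d) + (k,) for k, d in enumerate(codebook)),
                           key=lambda rec: rec[2])
                alpha, beta, err, k = best
                codes.append((alpha, beta, k, err))
    return codes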
3.1.2 Frame-based fractal compression. In frame-based compression, the compression codes are computed and stored for every frame. The approximate transformation for each range block is obtained by means of inter-frame similarity, i.e. the domain blocks from the previous frame are used to compute the approximate transformation for the range blocks of the current frame. Therefore the transformation is not necessarily a contractive map, and the domain blocks from the previous frame, which are not required to be shrunk, may be of the same size as the range blocks. In the decompression process, the approximate transformation is applied to the previous frame once only, without the need for iteration. We denote the decompressed image sequence by {DF_0, DF_1, …, DF_S}. The range blocks and the domain blocks are shown in figure 3.

ALGORITHM Al_3.1.2
Frame-based fractal compression for video.
Given the image sequence Seq = {f_0, f_1, f_2, …, f_S}:
  Apply {Algorithm Al_3.1.1} to the sequence {f_0};
  Do g = 1, …, S
    Compute DF_{g−1};
    Prepare Ω_g;
    For j = 1 to k_m^g do
      For i = 1 to k_n^g do
      Begin
        For each D_k ∈ Ω_g do
        Begin
          (α_k, β_k) := Solve min_{α,β} ‖V_{R_{i,j,g}} − (αV_{D_k} + βI)‖;
          Compute E(D_k, R_{i,j,g});
        End
        Compute the compression code:
          E(D_opt, R_{i,j,g}) = ‖V_{R_{i,j,g}} − (α_opt V_{D_opt} + β_opt I)‖ := min_{D_k} {E(D_k, R_{i,j,g})};
        Store α_opt, β_opt and the index of D_opt;
      End
    End-For
  End-Do

Two obvious disadvantages of frame-based compression are that the error due to the use of the previous frame will inevitably spread to later frames and that there will be a delay between frames during decompression. However, frame-based compression has a high compression ratio, which makes it particularly suitable for video transmission over the internet.
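Because the frame-based transformation is applied only once to the previously decoded frame, decoding needs no fixed-point iteration. The sketch below reconstructs frame g from stored codes in the spirit of Al_3.1.2; the block size, the code layout, and the helper names are illustrative assumptions.

import numpy as np

def decode_frame(prev_decoded, codes, bsize=8):
    """Rebuild one frame from its frame-based codes.
    codes[(i, j)] = (alpha, beta, (dy, dx)) maps range block (i, j) to a
    same-sized domain block taken from the previously decoded frame."""
    frame = np.empty_like(prev_decoded, dtype=np.float64)
    for (i, j), (alpha, beta, (dy, dx)) in codes.items():
        domain = prev_decoded[dy:dy + bsize, dx:dx + bsize].astype(np.float64)
        block = alpha * domain + beta          # single application, no iteration
        frame[i * bsize:(i + 1) * bsize, j * bsize:(j + 1) * bsize] = block
    return np.clip(frame, 0, 255).astype(np.uint8)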
Figure 3. Range and domain blocks in a frame-based compression.
3.2 Combined methods
Experimental results show that the cube-based and frame-based methods are not satisfactory. Recently, combination methods have been proposed in order to improve the results.

Gharavi-Alkhansari and Huang [34] proposed a fractal video coding method using matching pursuit, a combination method of domain blocks. In this method, instead of using one optimal domain block for every range block, an optimal set of domain blocks obtained from the previous frame was used.

Zhen and Wilson [35] proposed a volume-based hybrid video coding scheme which combines fractal coding with neighbourhood vector quantization (VQ) to give a form of predictive VQ [47]. Unlike conventional motion compensation, where block matching is only performed within the previous frame, the neighbourhood blocks (the virtual codebook) of a range block are simply selected from the connected neighbourhood; nine are in the previous time slice, and four are in the same time slice. Neighbourhood VQ can exploit both temporal and spatial redundancy; therefore the local signal coherence can be well captured.

Zhang et al. [36] proposed a wavelet video compression scheme using adaptive fractal coding. First, each video frame is decomposed into multi-resolution sub-bands using the pyramidal wavelet transform. The sub-bands are organized into a set of wavelet subtrees to represent motion activities. The subtrees are classified as motion and non-motion subtrees by using multi-resolution motion detection. Non-motion subtrees are compressed by simply using the previous frame, and motion subtrees are compressed by variable-tree fractal coding. The main advantage of this method is that it can describe large object motion activities instead of piecewise motion activities and hence less computation is required.

Kim et al. [37] proposed a fractal video compression algorithm based on circular prediction mapping and non-contractive inter-frame mapping. In this algorithm, which is similar to the frame-based method described in section 3.1.2, each range block is approximated by a domain block in the adjacent frame. Therefore the algorithm can effectively exploit temporal correlation in the real image sequence. Zhu and Belloulata [38] proposed the CPM/NCIM fractal coding scheme, a region-based fractal coding for monocular and stereo video sequences which combines fractals with the region-based functionality of MPEG-4. The first n frames of the right video sequence are compressed as a 'set' using circular prediction mapping (CPM) and the remaining frames are compressed using non-contractive inter-frame mapping (NCIM). CPM and NCIM accomplish the motion estimation/compensation which can exploit the high temporal correlations between the adjacent frames of the right sequence. The spatial correlations between the left and right frames are exploited by searching similar blocks using quadtree-based disparity estimation/compensation.

The aim of studying various combination methods is to improve compression ratios while maintaining high resolution of the corresponding decompressed images. The method proposed in [34] needs to store or transmit codebook blocks, which requires extra storage space. The method in [35] is an improvement on the VQ method, but does not benefit from the simplicity and effectiveness of fractals during its decompression process. The method described in [36] is more suitable for image sequences from videoconferences, because the motion of a large object in a videoconference is usually small, thereby reducing the compression codes.
On the other hand, this method has no advantage when applied to image sequences from a movie, because too many details change between frames. The methods in [37, 38] combine fractal and CPM methods, and the latter needs special hardware, so the advantage of fractal compression being a pure software algorithm is lost.
In [24, 33] two combination methods, the hybrid fractal method and the combination of fractal compression and motion compensation, were proposed in an attempt to address the above shortcomings. They are discussed in detail below. 3.2.1 Hybrid fractal method. A hybrid compression method combining the cube-based and frame-based techniques may achieve the aim of utilizing the advantages of these two methods. Suppose that each GOFg of Seq = {GOFg : 1 ≤ g ≤ ks } is partitioned into disjoint subsets forming the series {G1 , G2 , G3 , . . .}, which can be decoupled into two disjoint series sub-GOFe = {G2 , G4 , G6 , . . .} and sub-GOFo = {G1 , G3 , G5 , . . .} (figure 4). A cube-based compression algorithm is applied to sub-GOFo = {G1 , G3 , G5 , . . .} and its decompressed image series is denoted {DG1 , DG3 , DG5 , . . .}. Frame G2i of sub-GOFe is then compressed by applying the frame-based algorithm to DG2i−1 of sub-GOFo . ALGORITHM Al_3.2.1
Hybrid compression algorithm.
Given the image sequence Seq = {GOF_g : 1 ≤ g ≤ k_s}:
  Do g = 1, 2, …, k_s
    Obtain {G_1, G_2, G_3, …};
    Construct sub-GOF_o = {G_1, G_3, G_5, …}, sub-GOF_e = {G_2, G_4, G_6, …};
    Apply {Al_3.1.1} to sub-GOF_o;
    Compute {DG_1, DG_3, DG_5, …};
    Do i = 1, 2, …
      Apply {Al_3.1.2} to G_{2i} (the previous frame is DG_{2i−1});
    End-Do
  End-Do.

Figure 4. Typical partition of a GOF.

It should be noted that the elements in {DG_1, DG_3, DG_5, …} and {DG_2, DG_4, DG_6, …} can be computed simultaneously. Therefore the time delay between frames when the video is being displayed is minimized.
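A compact way to read Al_3.2.1 is as an alternation between the two earlier encoders: odd groups are cube-coded and decoded, and each even group is frame-coded against the decoded odd group before it. The following sketch shows that control flow only; encode_gof, decode_gof, and encode_frame_based are assumed wrappers around Al_3.1.1, its decoder, and Al_3.1.2, rather than functions defined in the paper.

def hybrid_encode(gof_groups, encode_gof, decode_gof, encode_frame_based):
    """Hybrid coding of one GOF split into groups G1, G2, G3, ...
    Odd groups are cube-coded; even groups are frame-coded against the
    decoded odd group that precedes them (Al_3.2.1)."""
    codes = {}
    for idx in range(0, len(gof_groups), 2):          # G1, G3, G5, ... (0-based even idx)
        codes[idx] = ('cube', encode_gof(gof_groups[idx]))
    for idx in range(1, len(gof_groups), 2):          # G2, G4, G6, ...
        reference = decode_gof(codes[idx - 1][1])     # DG_{2i-1}
        codes[idx] = ('frame', encode_frame_based(gof_groups[idx], reference))
    return codes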
3.2.2 Combined fractal compression and motion compensation. Most state-of-the-art video compression methods [16, 18, 42] are based on motion compensation. The idea is that each small block in the current frame is formed by moving a small block in the previous frame. In [33], an attempt was made to combine the fractal compression method with motion compensation. As in the MPEG method, a video sequence Seq = {f_0, f_1, f_2, …, f_S} is divided into two subsets, the I-frame (intra-frame) subset S_I = {I_1, I_2, …} and the P-frame (predicted frame) subset S_P = {P_1, P_2, …} (figure 5). An I-frame image is quite different from the previous frame and needs independent compression; a P-frame image can be compressed using the data of the previous frame or the previous I-frame image. We did not use B-frames, which in MPEG are interpolated from their previous and succeeding frames, because they may affect decompression time.

The I-frame subset S_I = {I_1, I_2, …} can be compressed using either fractal video compression methods or fractal still image compression methods for each I-frame. Each P-frame is partitioned into non-overlapping small blocks f = R_1 ∪ R_2 ∪ … ∪ R_h. The optimal approximate block D_i for every small block R_i, i = 1, …, h, can be found from the previous frame or the previous I-frame of f. The index of D_i is considered as a motion vector and is saved to a file named addr_file. The error component E_i = R_i − D_i is computed and used to construct the error image E = E_1 ∪ E_2 ∪ … ∪ E_h, which is compressed using a fractal compression method. The compression codes are saved in a file named err_file. During decompression, err_file is decompressed as an error image, and then the error image and the motion vectors from the file addr_file are used to reconstruct the image f.

ALGORITHM Al_3.2.2
Combined fractal compression and motion compensation algorithm.
Given the image sequence Seq = {f_0, f_1, f_2, …, f_S}:
  Prepare S_I and S_P: Seq = S_I ∪ S_P;
  For each f ∈ S_I do
    Apply {Algorithm: a fractal compression method for still images} to f;
  End-For
  Or: Apply {Algorithm: a fractal video compression method} to S_I;
  For each f ∈ S_P do
    Partition f into 16 × 16 non-overlapping blocks: f = R_1 ∪ R_2 ∪ … ∪ R_h;
    For R_i, i = 1, …, h do
      Compute the best approximate block D_i from its previous I-frame or its previous P-frame;
      Write the address of D_i to the file addr_file;
      Compute the error block E_i = R_i − D_i;
    End-For;
    Construct the error image E = E_1 ∪ E_2 ∪ … ∪ E_h;
    Compress the error image E with {Algorithm: a fractal compression method for still images};
    Write the compression codes to the file err_file;
  End-For.

Figure 5. A series of images divided into I-frames and P-frames.

The combination of fractal compression and motion compensation can improve the PSNR values of decompressed images and the compression ratios, but it generates two compression files, which increases the complexity of the decompression algorithm. This is not as good as pure fractal compression, in which the decompression process only requires simple iterations.

3.3 Adaptive method

The compression ratio obtained using fractal video compression with a fixed partition is not very high. In practice, the rate of inter-frame and intra-frame variation is not constant. The compression quality may be unchanged, but the compression ratio may be improved, if the partition varies in different regions of a video sequence, i.e. larger cubes can be used in regions where pixel intensities change slowly, and smaller cubes in regions where they change rapidly. This is the motivation underlying the concept of adaptive partition.

Adaptive methods have been used in many studies. In the field of image compression, Jacobs et al. [48] proposed the adaptive quadtree partition method, which was subsequently combined with many other image compression methods. More recently, Engel and Uhl [49] used adaptive technology in wavelet-based image compression to make it suitable for objects of arbitrary shape. Nguyen and Saupe [50] studied adaptive post-processing technology for fractal image decompression. Franco and Malah [51] proposed adaptive image partitioning for fractal coding that achieves designated rates under a complexity constraint, which limits the computational complexity by evaluating a weight for every range block acquired by the adaptive quadtree partition. Kassler [52] proposed an image compression method similar to the frame-based method in which adaptive quadtree partition is used but only blocks which have undergone conspicuous change compared with the previous frame are compressed. This method gives a high compression ratio for videoconference images and can be used in real-time communication, but the quality of the decompressed images is not very high. In our studies, we extend the adaptive quadtree partition for the compression of still images and video image sequences and obtain an octant-tree partition.

3.3.1 Adaptive cube-based compression algorithm. First, a sequence of images is partitioned into larger non-overlapping cubes (e.g. 16 × 16 × 8). For a given cube, if the rms
error related to the optimal codebook cube is less than the initial tolerance, the resulting compression code is the final code. Otherwise the cube is partitioned into eight smaller subcubes, and an optimal codebook cube is sought for each of these. The partition continues until the rms error is sufficiently small or the cubes cannot be partitioned again. The algorithm below encapsulates the concept of adaptive cube-based compression. In the algorithm, ρ^max denotes the maximal partition, i.e. the partition whose range cubes are the largest of all the partitions. Similarly, ρ^min denotes the minimal partition, i.e. the partition whose range cubes are the smallest of all the partitions. The set of all codebooks of GOF_g for a given partition ρ is denoted by Ω_g^ρ.

ALGORITHM Al_3.3.1 Adaptive cube-based compression algorithm.

Given the image sequence Seq:
  Prepare Seq = {GOF_g : 1 ≤ g ≤ k_s};
  Prepare the tolerance ε, the maximal partition ρ^max, and the minimal partition ρ^min;
  For g = 1, …, k_s
  Begin
    For every possible partition ρ, prepare Ω_g^ρ;
    For t = 1 to k_l^g do
      For j = 1 to k_m^g do
        For i = 1 to k_n^g do
        Begin
          ρ = ρ^max; R^ρ = R_{i,j,t}^{max};
          Call Octant(ρ, R^ρ)
        End-For (i, j, t)
  End-For (g).

Procedure Octant(ρ, R^ρ):
  ε^ρ = 10000;
  While (ε^ρ > ε) and (ρ ≠ ρ^min) do
  Begin
    For each D_k ∈ Ω_g^ρ do
    Begin
      (α, β) := Solve min_{α,β} ‖V_{R^ρ} − (αV_{D_k} + βI)‖;
      Compute E(D_k, R^ρ);
    End;
    Compute the minimal rms error:
      ε^ρ = E(D_opt, R^ρ) = min {E(D_k, R^ρ) | D_k ∈ Ω_g^ρ};
    If (ε^ρ ≤ ε) or (ρ = ρ^min) then
      Store tag bit 0;
      Store α_opt, β_opt and the index of D_opt;
    Else
    Begin
      Store tag bit 1;
      New partition λ: partition R^ρ into 8 small subcubes;
      For each subcube R̃
        Call Octant(λ, R̃);
    End-If
  End-While.
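The recursion in Octant is essentially an octree subdivision driven by the fitting error. A minimal sketch of that control flow is given below; it reuses the illustrative codebook_cube and fit_block helpers from the earlier sketches, uses a plain Python list as the tag/coefficient stream, and halves all three cube dimensions at each split, which are simplifying assumptions rather than the paper's exact implementation.

def encode_adaptive(cube, codebook_fn, tol, min_size, out):
    """Adaptive (octree) coding of one range cube.
    codebook_fn(size) returns the list of codebook cubes for cubes of this size;
    out collects (tag, payload) records in depth-first order."""
    size = cube.shape[0]
    best = min((fit_block(cube, d) + (k,) for k, d in enumerate(codebook_fn(size))),
               key=lambda rec: rec[2])
    alpha, beta, err, k = best
    if err <= tol or size <= min_size:
        out.append((0, (alpha, beta, k)))          # tag bit 0: accept this cube
        return
    out.append((1, None))                          # tag bit 1: split into 8 subcubes
    half = size // 2
    for z in (0, half):
        for y in (0, half):
            for x in (0, half):
                encode_adaptive(cube[z:z+half, y:y+half, x:x+half],
                                codebook_fn, tol, min_size, out)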
3.3.2 Decompression of adaptive cube-based compression. The difference between fixed partition and adaptive partition methods in fractal decompression is that the compression files produced by the latter method contain certain tag bits which are used to indicate the location of the range cubes. Fractal decompression is an iterative process which uses the approximation formula R ≈ αD + βI, where R is a range cube, D is a codebook cube whose location forms part of the compression codes, and α and β are the other compression codes. The following algorithm is an iterative decompression process. ALGORITHM Al_3.3.2
Decompression algorithm for adaptive cube-based compression.
Read the initial information from the compression file: {k_s, the number of GOFs; S, the number of frames in a GOF; the size of every image; the sizes of the maximal and minimal partitions};
Initialize Seq = {GOF_g : 1 ≤ g ≤ k_s}; every frame of Seq may be a black or white frame;
For g = 1, …, k_s
  Prepare the maximal partition ρ^max and the minimal partition ρ^min;
  Begin
    For every possible partition ρ, prepare Ω_g^ρ;
    For t = 1 to k_l^g do
      For j = 1 to k_m^g do
        For i = 1 to k_n^g do
        Begin
          Current_partition ρ = ρ^max; current_range_cube R^ρ = R_{i,j,t}^{max};
L:        Read the tag bit a from the compression file;
          If (a = 0) then
          Begin
            Read the compression codes α_opt, β_opt and the index of a codebook cube for current_range_cube R^ρ;
            Find the codebook cube D_opt from the codebook set Ω_g^ρ with the index;
            Compute R^ρ ⇐ α_opt · D_opt + β_opt I;
          End
          Else if (a = 1) then
          Begin
            New partition ρ̃: partition R^ρ into 8 small subcubes;
            Goto label L;
          End-If
        End-For (i, j, t)
End-For (g).

3.3.3 Adaptive hybrid compression method. Since the adaptive partition method results in a higher compression ratio than the algorithm based on a fixed partition while maintaining the quality of the decompressed image, the obvious move is to combine the hybrid compression method with an adaptive partition, i.e. the sub-sequence sub-GOF_o = {G_1, G_3, G_5, …} is compressed using the adaptive partition cube method. Numerical experiments validate the above statements regarding compression ratio and quality.
ALGORITHM Al_3.3.3 Hybrid compression algorithm.
Given the image sequence Seq = {GOF_g : 1 ≤ g ≤ k_s}:
  For g = 1, 2, …, k_s do
    Prepare {G_1, G_2, G_3, …};
    Construct sub-GOF_o = {G_1, G_3, G_5, …}, sub-GOF_e = {G_2, G_4, G_6, …};
    Apply {Al_3.3.1} to sub-GOF_o;
    Use {Al_3.3.2} to compute {DG_1, DG_3, DG_5, …};
    For i = 1, 2, … do
      Apply {Al_3.1.2} to G_{2i};
    End-For (i)
  End-For (g).
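Both Al_3.3.2 and the adaptive hybrid variant above recover the partition from the tag-bit stream while decoding. The sketch below mirrors the recursive structure of encode_adaptive; the iterator-based record stream, the cubic subcubes, and the helper names are illustrative assumptions only.

def decode_adaptive(dest, records, codebook_fn):
    """Fill the cube view `dest` in place from an iterator of (tag, payload)
    records produced in the same depth-first order as encode_adaptive."""
    tag, payload = next(records)
    if tag == 0:                                   # leaf: apply alpha*D + beta
        alpha, beta, k = payload
        dest[...] = alpha * codebook_fn(dest.shape[0])[k] + beta
        return
    half = dest.shape[0] // 2                      # tag 1: recurse into 8 subcubes
    for z in (0, half):
        for y in (0, half):
            for x in (0, half):
                decode_adaptive(dest[z:z+half, y:y+half, x:x+half], records, codebook_fn)

In an iterative decoder this routine would be applied to every range cube of the current approximation, and the whole pass repeated until the fixed point is reached.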
4. Parallel computing
The major difficulty with fractal compression is the long run time of the algorithms. In our recent studies, we have found that compression of a VCD sequence consisting of 16 frames of 8-bit grey images, each of 720 × 576 pixels, takes approximately 2 hours. Such algorithms are impractical for the media industry. Apart from optimizing the compression algorithms, in ways similar to those used in fractal still image compression, parallel computing is generally used to reduce the run times of fractal video compression algorithms. The data parallelism present in fractal video compression algorithms makes them easy to parallelize. For example, a sequence in the cube-based algorithm is partitioned into independent GOFs, which can be compressed in parallel. For each GOF, the computational complexity is concentrated in searching for an optimal approximate codebook cube for every range cube, with which the compression codes can be computed. This process can also be run in parallel because the search processes for different range cubes are independent [42, 44].

4.1 Parallel algorithm
A parallel algorithm for the sequential cube-based method (Al_3.1.1) is presented in this section. The parallelization of other fractal video compression algorithms is similar. In order to implement the parallel matching search procedure, we partition the procedure into a series of tasks.

4.1.1 Task partition. Although the matching search procedures for any two range cubes do not need to communicate with each other, it is necessary to keep all the compression codes generated by the matching procedures in the same output file. This allows easy handling of the compression codes. Hence it is not a good idea to generate very small partitions. In our distributed computing system, we use two, four, or eight computing nodes. Therefore the problem is partitioned into four, eight, or 16 tasks (figure 6).

4.1.2 Communication. Although it is not necessary to communicate between tasks, every task requires initial data which include the intensities of the range cubes and all the codebook cubes. These can be sent to the corresponding computing nodes as arrays after the codebook cubes are computed. It is then necessary to return the compression codes from every computing node when its computing task is finished. Each task executes the following code.
Figure 6. Task partition.
For each R_{i,j,t} in the task
  For each D_k ∈ Ω_g do
    (α_k, β_k) := Solve min_{α,β} ‖V_{R_{i,j,t}} − (αV_{D_k} + βI)‖;
    Compute E(D_k, R_{i,j,t});
  End-For
  Compute the compression code:
    E(D_opt, R_{i,j,t}) = ‖V_{R_{i,j,t}} − (α_opt V_{D_opt} + β_opt I)‖ := min_{D_k} {E(D_k, R_{i,j,t})};
  Store α_opt, β_opt and the index of D_opt as arrays;
End-For;

The complete algorithm is as follows.

ALGORITHM Al_4.2.1
Parallel cube-based fractal compression for video.
Given the image sequence Seq:
  Prepare {GOF_g : 0 ≤ g ≤ 2^{N−n} − 1};
  For g = 0, …, 2^{N−n} − 1 do
    Prepare Ω_g;
    Send the initial data to the computing nodes;
    For each computing node: execute its tasks in parallel;
    Receive the compression codes from the computing nodes;
    Store the compression codes in an output file according to some given order;
  End-For.

4.2 Parallel computing environments

Parallel algorithms can be implemented on tightly coupled high-performance computers [15, 45], on cluster systems consisting of supercomputers [44], or in distributed computing
systems consisting of PCs [42, 43]. The first two systems give high speed-up ratios, and the parallelization is easily carried out using a message-passing mechanism such as MPI or PVM. If no supercomputers are available, PCs and local networks can be used to construct distributed computing systems in order to implement parallel computing. A distributed computing system consists of loosely coupled PCs on a local area network and provides low-cost high-performance computing. These computing nodes are usually low cost and can be extended to grid computing environments. The state-of-the-art technologies used to build distributed computing systems are CORBA, Java RMI/EJB, DCOM, .NET Remoting, and Web Services. Because of ease of access we have used DCOM and .NET Remoting to build two distributed computing systems on which we have examined a parallel fractal video compression algorithm [43]. A detailed description of the parallel performance of the fractal video compression method in cluster systems consisting of supercomputers is given in section 6.
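As a concrete illustration of the message-passing structure of Al_4.2.1, the following mpi4py sketch distributes GOFs over MPI ranks and gathers the codes at the root. The paper's experiments used MPI with Fortran on Sun machines, so this Python version, and the encode_gof helper it calls, are illustrative assumptions only.

from mpi4py import MPI

def parallel_compress(gofs, encode_gof):
    """Master-worker style compression: rank 0 scatters the GOFs,
    every rank encodes its share, and rank 0 gathers the codes."""
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        # Split the list of GOFs into one chunk per rank (round-robin split).
        chunks = [gofs[i::size] for i in range(size)]
    else:
        chunks = None

    my_gofs = comm.scatter(chunks, root=0)          # send initial data to nodes
    my_codes = [encode_gof(g) for g in my_gofs]     # independent matching searches
    all_codes = comm.gather(my_codes, root=0)       # return codes to the root

    if rank == 0:
        return all_codes                            # root writes these to the output file
    return None

Run with, for example, mpiexec -n 4 python compress.py; because of the round-robin split, reordering the gathered codes to match the original GOF order remains the root's responsibility.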
5. Fractal colour video compression
In digital image processing, a colour image is often expressed in RGB mode, i.e. each pixel intensity consists of three values corresponding to the colours R, G, and B. Thus a frame of a colour image in RGB space f can be decomposed into three single-colour channels fr, fb, and fg. Similarly, a colour video sequence can be decomposed into three single-channel sequences. A single colour channel is treated as a grey image. Therefore fractal grey video compression methods can easily be extended to colour video compression.
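A minimal sketch of this channel decomposition, assuming frames are stored as H × W × 3 NumPy arrays, is:

def split_channels(colour_sequence):
    """Turn a list of H x W x 3 RGB frames into three grey-image sequences,
    each of which can be fed to any of the grey video compression methods."""
    red   = [frame[:, :, 0] for frame in colour_sequence]
    green = [frame[:, :, 1] for frame in colour_sequence]
    blue  = [frame[:, :, 2] for frame in colour_sequence]
    return red, green, blue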
6. Numerical experiments and results
Two sequences of motion images were used in the experiments, one from a videoconference and the other from a VCD movie. The original sequence of motion images from the videoconference consists of frames of 8-bit grey images, each of 256 × 256 pixels. A subset of the sequence, frames f = 1, 2, 13, and 14, is shown in figure 7. The decompressed images are shown in figures 8-12. The original sequence of motion images from the movie extract consists of frames of 8-bit grey images, each of 720 × 576 pixels. A subset of the sequence, frames f = 1, 2, and 3, is shown in figure 13. The decompressed images are shown in figures 14-18. The compression ratio, which is defined as the ratio of the storage size of the original image to the storage size of the compressed image, is used to compare the efficiency of compression. Two sets of experiments were performed, one designed to test the quality of the fractal compression algorithms in a sequential environment and the other to test the performance of the parallel implementations.
Figure 7. Original motion images from a video conference (256 × 256 pixels, 8-bit grey image).
Figure 8. Decompressed images in test 1_1 (compression ratio, 6.72).
Figure 9. Decompressed images in test 1_2 (compression ratio, 11.76).
Figure 10. Decompressed images in test 1_3 (compression ratio, 43.50).
Figure 11. Decompressed images in test 1_4 (compression ratio, 17.47).
6.1 Experiments on sequential algorithms

The experimental environment is a PC with a 1.7 GHz CPU and 256 MB of RAM. The operating system is Windows 2000 Professional. All the algorithms were implemented in VC++. A summary of the experimental parameters and the results obtained is given in table 1.
Figure 12. Decompressed images in test 1_5 (compression ratio, 17.47).
Figure 13. The original motion images from a movie (720 × 576 pixels, 8-bit grey image).
Figure 14. Decompressed images in test 2_1 (compression ratio, 3.65).
Figure 15. Decompressed images in test 2_2 (compression ratio, 6.06).
Figure 16. Decompressed images in test 2_3 (compression ratio, 16.12).
Figure 17. Decompressed images in test 2_4 (compression ratio, 8.62).
Figure 18. Decompressed images in test 2_5 (compression ratio, 12.12).
Table 1. Compression ratios and PSNRs obtained with the various methods.

Test       Image             Algorithm   Decompressed images   Compression ratio   PSNR (dB)
Test 1.1   Video conference  Al_3.1.1    Figure 8              6.72                36.15
Test 1.2   Video conference  Al_3.2.1    Figure 9              11.76               33.79
Test 1.3   Video conference  Al_3.2.2    Figure 10             43.50               33.60
Test 1.4   Video conference  Al_3.3.1    Figure 11             17.47               34.86
Test 1.5   Video conference  Al_3.3.2    Figure 12             16.02               35.4
Test 2.1   VCD1              Al_3.1.1    Figure 14             3.65                31.96
Test 2.2   VCD1              Al_3.2.1    Figure 15             6.06                30.70
Test 2.3   VCD1              Al_3.2.2    Figure 16             16.12               27.46
Test 2.4   VCD1              Al_3.3.1    Figure 17             8.62                30.33
Test 2.5   VCD1              Al_3.3.2    Figure 18             12.12               29.37

6.2 Numerical experiments on the parallel environment

The algorithm used in the parallel environment is based on Al_4.2.1, which is a parallel version of the fractal cube-based video compression algorithm.
6.2.1 Speed-up performance. Three tests were used to study the speed-up performance of the parallel fractal video compression algorithm. The first two tests used 12 and 15 frames of the videoconference, and the third used six frames of the video movie. The numerical experiments were performed on two Sun Sparc machines, Prun1 and Prun2, which communicated via a 100 Mbit Ethernet switch. Prun1 and Prun2 have dual UltraSparc (V9) processors with 2 GB of RAM; each processor runs at 1 GHz. The software support environment includes the Sun Solaris 8 (SunOS 5.8) operating system [53], MPI version 1.2.6, and a Sun Fortran 90 version 9 compiler. The efficiency of using different numbers of processors to run the parallel program was examined using the execution time, which includes the compression time, the communication time, and the idle time. The speed-up of a parallel program is defined as the ratio of the time required for a job to be run on one processor to the time required for the same job to be run on n identical processors. Let P be the number of processors and T(n) be the time required to complete the task on n processors. Thus, using the above definition, the speed-up S(n) is given by

\[
S(n) = \frac{T(1)}{T(n)}.
\]
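For example, for the movie sequence in table 2, T(1) = 1891.87 s and T(16) = 123.65 s, so S(16) = 1891.87/123.65 ≈ 15.3.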
The execution time and speed-up obtained in the tests are listed in table 2. Figure 19 shows the execution time for compression of six frames of 8-bit images, each of 720 × 576 pixels. The histogram shows that the execution time decreases as the number of processors increases. When there is only one processor the compression algorithm is equivalent to a sequential algorithm. Figure 20 shows the speed-up ratios obtained with different numbers of processors. When the number of processors increases, the speed-up increases. High speed-up ratios can be obtained by using the parallel algorithm.

Table 2. Execution time and speed-up of the experiment.

                       Movie                    Video conference 1        Video conference 2
                       (720 × 576, 6 frames)    (256 × 256, 12 frames)    (256 × 256, 15 frames)
No. of processors P    Time (s)    Speed-up     Time (s)    Speed-up      Time (s)    Speed-up
1                      1891.87     -            1539.97     -             2506.05     -
2                      947.83      1.996        770.10      1.9997        1254.18     1.998
4                      476.78      3.968        385.72      3.992         629.21      3.983
8                      239.63      7.895        197.49      7.798         321.58      7.793
16                     123.65      15.300       100.35      15.346        162.49      15.423
24                     85.04       22.247
36                     60.04       31.510
Figure 19. Execution time versus number of processors.
Figure 20. Speed-up versus number of processors.

Table 3. Execution time of the scalability test.

Number of processors / image size    Execution time (s)
P = 1, 360 × 288 pixels              471.93
P = 4, 720 × 576 pixels              476.78
P = 16, 1440 × 1152 pixels           491.72

Figure 21. Scalability of the parallel algorithm.
6.2.2 Scalability performance. Another test was designed to analyse the scalability of the parallel algorithm. The test again used six frames of the video movie, but the image size was changed in order to compare execution times. For p = 1 the image size was 360 × 288 pixels, for p = 4 it was 720 × 576 pixels, and for p = 16 it was 1440 × 1152 pixels. The results of the test are shown in table 3 and figure 21.
7. Conclusion
Video compression methods using fractals have been reviewed. Numerical tests show that combination methods, such as the hybrid method, the adaptive partition method, and combined fractal compression and motion compensation, can achieve higher compression ratios and better compression qualities (high PSNR) than the two basic fractal video compression methods. Combined fractal compression and motion compensation is the best method for videoconference images, whereas the hybrid method and the adaptive partition method are more suitable for general motion images.
Parallel tests show that the parallelization of fractal video compression methods can achieve a high speed-up ratio with high scalability. The parallel fractal video compression algorithms are practical for industrial applications and are a promising tool in the field of image compression.

Acknowledgements

The images from the movie extract were provided by Meiah Entertainment Ltd, Hong Kong, China. The numerical tests were performed at the laboratory of the School of Computing and Mathematical Sciences, University of Greenwich, UK. H. Peng tested the parallel algorithm and Y. Fang tested the combined fractal video compression and motion compensation algorithm.

References

[1] Hutchinson, J.E., 1981, Fractals and self-similarity. Indiana University Mathematics Journal, 30, 713-747.
[2] Barnsley, M.F. and Sloan, A.D., 1987, Chaotic compression. Computer Graphics World, November, 107-108.
[3] Barnsley, M.F., Ervin, V., Hardin, D. and Lancester, J., 1986, Solution of an inverse problem for fractals and other sets. Proceedings of the National Academy of Sciences of the USA, 83, 1975-1977.
[4] Jacquin, A.E., 1989, A fractal theory of iterated Markov operators with applications to digital image coding. PhD Thesis, Georgia Institute of Technology.
[5] Saupe, D., Hamzaoui, R. and Hartenstein, H., 1996, Fractal image compression: an introductory overview. In: D. Saupe and J. Hart (Eds.), Fractal Models for Image Synthesis, Compression and Analysis, ACM SIGGRAPH'96 Course Notes 27 (New York: ACM).
[6] Wang, H., Wang, M., Hintz, T., Wu, Q. and He, X., 2005, VSA-based fractal image compression. In: Journal of the WSCG, 13th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2005.
[7] Wang, H., Wang, M., Hintz, T., He, X. and Wu, Q., 2005, Fractal image compression on a pseudo-model. In: Computer Science 2005. Proceedings of the 28th Australasian Computer Science Conference (ACSC 2005) (Sydney: ACS).
[8] Ong, G.-H., Chew, C.-M. and Cao, Y., 2001, A simple partitioning approach to fractal image compression. In: Proceedings of the 2001 ACM Symposium on Applied Computing (New York: ACM).
[9] Ong, G.-H. and Yang, K., 2004, A binary partitioning approach to image compression using weighted finite automata for large images. In: DCABES 2004 Proceedings (Wuhan, China: Wuhan University of Technology).
[10] Vrscay, E.R., 1998, A generalized class of fractal-wavelet transforms for image representation and compression. Canadian Journal of Electrical and Computer Engineering, 23, 69-83.
[11] Kim, T., Van Dyck, R.E. and Miller, D.J., 2002, Hybrid fractal zero tree wavelet image coding. Signal Processing: Image Communication, 17, 347-360.
[12] Hartenstein, H. and Saupe, D., 2000, Lossless acceleration of fractal image encoding via the fast Fourier transform. Signal Processing: Image Communication, 16, 383-394.
[13] Charfi, Y., Stankovic, V., Hamzaoui, R., Haouari, A. and Saupe, D., 2002, Joint source-channel fractal image coding with unequal error protection. Optical Engineering, 41, 3168-3176.
[14] Bodo, Z.P., 2004, Maximal processor utilization in parallel quadtree-based fractal image compression on MIMD architectures. Studia Universitatis Babes-Bolyai Informatica, 49, 3-16.
[15] Jason, D.J. and Tinney, G.S., 1996, Performance analysis of distributed implementations of a fractal image compression algorithm. Concurrency: Practice and Experience, 8, 357-380.
[16] Koenen, R., 2000, Overview of the MPEG-4 standard: coding of moving pictures and audio. Report ISO/IEC JTC1/SC29/WG11 N4668, Hewlett Packard, Palo Alto, CA.
[17] Lee, D.T., 2000, JPEG2000 requirements and profiles: coding of still pictures. Report ISO/IEC JTC1/SC29/WG1 N1803, Hewlett Packard, Palo Alto, CA.
[18] Rijkse, K., 1996, Draft ITU-T Recommendation H.263 (Geneva: International Telecommunication Union).
[19] Lazar, M.S. and Bruton, L.T., 1994, Fractal coding of digital video. IEEE Transactions on Circuits and Systems for Video Technology, 4, 297-308.
[20] Barthel, K.U. and Voyé, T., 1995, Three-dimensional fractal video coding. In: Proceedings of ICIP-95 (Washington, DC: IEEE).
[21] Wang, M., 2004, Cuboid method of fractal video compression. In: ICITA 2004 Proceedings (Sydney: Macquarie Scientific Publishing).
[22] Fisher, Y., Rogovin, D. and Shen, T.P., Fractal (self-VQ) encoding of video sequences. In: Proceedings of the SPIE Visual Communications and Image Processing (Bellingham, WA: SPIE).
[23] Kim, C.-S. and Lee, S.-U., 1995, Fractal coding of video sequence by circular prediction mapping. In: NATO ASI Conference on Fractal Image Encoding and Analysis (Berlin: Springer-Verlag).
[24] Wang, M. and Lai, C.-H., 2005, A hybrid fractal video compression method. Computers and Mathematics with Applications, 50, 611-621.
[25] Wang, M., Liu, R. and Lai, C.H., 2006, Adaptive partition and hybrid method in fractal video compression. Computers and Mathematics with Applications, 51, 1715-1726.
[26] Wang, M. and Liu, R., Adaptive partition and hybrid method in fractal video compression. In: DCABES 2004 Proceedings (Wuhan, China: Wuhan University of Technology).
[27] Uzun, I.S. and Amira, A., 2005, Real-time 2-D wavelet transform implementation for HDTV compression. Real-Time Imaging, 11, 151-165.
[28] Lee, T.-C., Robinson, P., Gubody, M. and Henne, E., 1999, Software/hardware co-design implementation for fractal image compression. In: Proceedings of the 37th Annual ACM Southeast Regional Conference (CD-ROM) (New York: ACM).
[29] Pau, G., Tillier, C., Pesquet-Popescu, B. and Heijmans, H., 2004, Motion compensation and scalability in lifting-based video coding. Signal Processing: Image Communication, 19, 577-600.
[30] Li, X., 2004, Scalable video compression via overcomplete motion compensated wavelet coding. Signal Processing: Image Communication, 19, 637-651.
[31] Venkatachalapathy, K., Krishnamoorthy, R. and Viswanath, K., 2004, A new adaptive search strategy for fast block based motion estimation algorithms. Journal of Visual Communication and Image Representation, 15, 203-213.
[32] Zhang, L., Wu, X. and Bal, P., 2005, Real-time lossless compression of mosaic video sequences. Real-Time Imaging, 11, 370-377.
[33] Fang, Y., 2006, A fractal video compression algorithm based on motion compensation. Master Thesis (in Chinese), Fuzhou University, China.
[34] Gharavi-Alkhansari, M. and Huang, T.S., 1996, Fractal video coding by matching pursuit. In: Proceedings of the IEEE International Conference on Image Processing (ICIP'96), Vol. 1, pp. 157-160 (Washington, DC: IEEE).
[35] Zhen, Y. and Wilson, R., 2004, Hybrid fractal video coding with neighbourhood vector quantisation. In: Proceedings of the Data Compression Conference (Los Alamitos, CA: IEEE Computer Society).
[36] Zhang, Y., Po, L.M. and Yu, Y.L., 1997, Wavelet transform based variable tree fractal video coding. In: Proceedings of the IEEE International Conference on Image Processing, Vol. 2, pp. 294-297 (Washington, DC: IEEE).
[37] Kim, C.-S., Kim, R.-G. and Lee, S.-U., 1998, Fractal coding of video sequence using circular prediction mapping and noncontractive interframe mapping. IEEE Transactions on Image Processing, 7, 601-605.
[38] Zhu, S. and Belloulata, K., 2004, Region-based fractal coding of monocular and stereo video sequences. In: Proceedings of the IASTED International Conference on Signal and Image Processing 2004 (Calgary: Acta Press).
[39] Belloulata, K., 2005, Fast fractal coding of subbands using a non-iterative block clustering. Journal of Visual Communication and Image Representation, 16, 55-67.
[40] Shen, F. and Hasegawa, O., 2004, A fast no search fractal image coding method. Signal Processing: Image Communication, 19, 393-404.
[41] Hamzaoui, R., Saupe, D. and Hiller, M., 2001, Distortion minimization with fast local search for fractal image compression. Journal of Visual Communication and Image Representation, 12, 250-468.
[42] Truong, T.-K., Jeng, J.-H., Reed, I.S., Lee, P.C. and Li, A.Q., 2000, A fast encoding algorithm for fractal image compression using the DCT inner product. IEEE Transactions on Image Processing, 9, 529-534.
[43] Wang, M., Huang, Z. and Lai, C.-H., 2006, Matching search in fractal video compression and its parallel implementation in distributed computing environments. Applied Mathematical Modelling, 30, 677-687.
[44] Wang, M., Zheng, W. and Zheng, S., 2000, Distributed computing system with Java RMI. Mini-Micro Systems, 21, 908-910 (in Chinese).
[45] Peng, H., 2006, Design of parallel algorithms for fractal video compression. Master Thesis (in Chinese), Fuzhou University, China.
[46] Pommer, A., Zinterhof, P., Vajtersic, M. and Uhl, A., 1999, Fractal video compression on shared memory systems. In: Lecture Notes in Computer Science, Vol. 1557, pp. 317-326 (Berlin: Springer-Verlag).
[47] Hamzaoui, R., Saupe, D. and Wagner, M., 1998, Rate-distortion based video coding with adaptive mean-removed vector quantization. In: Proceedings of ICIP'98 (Washington, DC: IEEE).
[48] Jacobs, E.W., Fisher, Y. and Boss, R.D., 1992, Image compression: a study of the iterated transform method. Signal Processing, 29, 251-263.
[49] Engel, D. and Uhl, A., 2003, Adaptive image compression of arbitrarily shaped objects using wavelet packets. In: Proceedings of the 23rd International Picture Coding Symposium 2003 (PCS 2003), pp. 283-288 (Washington, DC: IEEE).
[50] Nguyen, G.K. and Saupe, D., 2000, Adaptive postprocessing for fractal image compression. In: Proceedings of the IEEE International Conference on Image Processing 2000 (ICIP'2000) (Washington, DC: IEEE).
[51] Franco, R. and Malah, D., Adaptive image partitioning for fractal coding achieving designated rates under a complexity constraint. In: Proceedings of the International Conference on Image Processing 2001 (ICIP'01), Vol. 2, pp. 435-438 (Washington, DC: IEEE).
[52] Kassler, A., 1998, Fractal video compression for real-time communication. In: Proceedings of the IASTED International Conference on Signal and Image Processing 1998 (SIP'98) (Calgary: Acta Press).
[53] Gilly, D. and O'Reilly & Associates, 1994, Unix in a Nutshell: System V Edition (2nd edn) (Sebastopol, CA: O'Reilly & Associates).