Random Wavelet Transforms, Algebraic Geometric Coding, and Their Applications in Signal Compression and De-Noising

Man Kam Kwong
Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439-4844
E-mail: [email protected]

Tomasz Bielecki, Li M. Song, Stephen S. T. Yau
Department of Mathematics, Statistics and Computer Science, The University of Illinois at Chicago, Chicago, IL 60607-7045

Abstract
The concepts of random wavelet transforms and discrete random wavelet transforms are introduced. It is shown that these transforms can lead to simultaneous compression and de-noising of signals that have been corrupted with fractional noises. Potential applications of algebraic geometric coding theory to encode the ensuing data are also discussed.
Introduction

In this paper, we first outline the main ideas behind classical wavelet-based algorithms for compression and de-noising of signals. We then introduce the concepts of random wavelet transforms and discrete random wavelet transforms, which are more effective than their classical counterparts in handling fractional noises. Finally, we discuss potential applications of the method of algebraic geometric coding to the data produced by our new algorithms.

Since this article is not a detailed survey, many aspects of both compression and coding algorithms will not be presented here. For example, we do not discuss thresholding algorithms; we limit the discussion of wavelet bases to the orthonormal ones generated by the well-known Daubechies scaling functions (see [13,14] for a discussion of a new method to construct biorthogonal bases); and we do not give a detailed analysis of coding algorithms. Instead, this paper is meant to be an overview of a methodology for dealing with a specific class of signal+noise models that we have in mind.

In Section 1, we describe a class of noisy signal models. In particular, we focus on fractional random fields, which occur naturally in many applications. In Section 2, we formulate a signal de-noising problem. In solving this problem, we achieve, in addition, a compression of the signal. In Section 3, we present some ideas from wavelet analysis that motivate our approach. In particular, we lay the groundwork for the so-called discrete wavelet transform that we introduce in Section 4. Section 4 is pivotal for the paper: it presents the essence of the simultaneous compression/de-noising algorithms. In the last two sections (5 and 6), we briefly present some new ideas from algebraic geometric coding theory which, we believe, can be used beneficially in source and channel coding for image-based information, especially that resulting from our compression/de-noising algorithms. The novelty and strength of algebraic geometric codes are discussed.
1. Noisy Signal Models
We begin by describing the class of noise processes that we are concerned with. Since we are going to deal with multiparameter signals in general, the corrupting noise will be modeled as a random field parametrized by $t \in \mathbb{R}^d$. The cases $d = 1$ and $d = 2$ correspond to one- and two-dimensional signals, respectively. For each $t \in \mathbb{R}^d$, let $C_t$ denote the cuboid determined by $t$ and $0$, and let $V_{C_t}$ denote the Euclidean volume of $C_t$. We suppose that a probability space $(\Omega, \mathcal{F}, P)$ has been defined.

Definition 1.1  We define a fractional Wiener sheet (fWs) on $\mathbb{R}^d$ with parameter $H \in (0, 1]$ to be a real-valued Gaussian random field $W^H = (W^H(t))_{t \in \mathbb{R}^d}$ such that

i) $W^H(0) = 0$, $P$-a.s.,

ii) $E\,W^H(t) = 0$, for all $t \in \mathbb{R}^d$,

iii) $E(W^H(t) - W^H(s))^2 = k\,|V_{C_t} + V_{C_s} - 2 V_{C_t \cap C_s}|^{2H}$, for all $t, s \in \mathbb{R}^d$, $k = \mathrm{const} > 0$,

where $E$ denotes the expectation with respect to the probability measure $P$.

[Footnotes from the title page: The Control and Information Laboratory. Work supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Computational and Technology Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.]
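For $d = 1$ and $t, s \ge 0$, property iii) reduces to $E(B^H(t) - B^H(s))^2 = k\,|t - s|^{2H}$, which together with i) and ii) gives the covariance $E[B^H(t)B^H(s)] = (k/2)(t^{2H} + s^{2H} - |t - s|^{2H})$. A path of such a field can be sampled directly from this covariance. The sketch below is our own illustration (with $k = 1$ and a Cholesky factorization), not code from the paper:

```python
import numpy as np

def fbm_path(n, H, T=1.0, k=1.0, seed=0):
    """Sample fractional Brownian motion B_H on n grid points in (0, T].

    Uses the exact covariance E[B_H(t) B_H(s)] = (k/2)(t^2H + s^2H - |t-s|^2H)
    and a Cholesky factorization (O(n^3), fine for modest n).
    """
    t = np.linspace(T / n, T, n)          # grid, excluding t = 0 where B_H = 0
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * k * (tt**(2*H) + ss**(2*H) - np.abs(tt - ss)**(2*H))
    L = np.linalg.cholesky(cov)           # covariance is positive definite here
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

t, b = fbm_path(256, H=0.75)
```

For $H > 1/2$ the increments are positively correlated (persistent noise), which is the regime where fractional noise differs most visibly from white noise.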
Remark 1.1  A fractional Wiener sheet on $\mathbb{R}^d$ was first constructed and studied in [2].

Remark 1.2  In the case $d = 1$, a fractional Wiener sheet coincides with fractional Brownian motion (fBm) and is denoted by $B^H = (B^H(t))_{t \in \mathbb{R}}$. For the classical definition of fBm see, for example, [18]. A discussion of various real-life phenomena giving rise to fBm can be found in [12]. The fWs extends the concept of fBm to the multiparameter case and is very useful in modeling multiparameter disturbances whose intensities depend on the size of the perturbed object.

Remark 1.3  Note that in the case $H = 1/2$, the fWs is equivalent to an ordinary Brownian motion with a $d$-dimensional "time" parameter.

Let $\dot{W}^H = (\dot{W}^H(t))_{t \in \mathbb{R}^d}$ be a random field consisting of distributional derivatives of the fWs $W^H$ with respect to $t \in \mathbb{R}^d$. We shall call such a random field a fractional Wiener noise (fWn). In the case $H = 1/2$, $\dot{W}^H$ is the Gaussian white noise on $\mathbb{R}^d$. The class of noisy signals that we are considering in this paper consists of random fields $(R(t))_{t \in \mathbb{R}^d}$ such that

    R(t) = S(t) + W^H(t),   t ∈ R^d,    (1.1)

or

    R(t) = S(t) + Ẇ^H(t),   t ∈ R^d,    (1.1')

where $S(\cdot) \in L^2(\mathbb{R}^d)$. The function $S(\cdot)$ is called the original signal and $R(\cdot)$ the recorded (or measured, or observed) signal.

2. The Problem of Signal De-noising

Simply put, the abstract problem of signal de-noising amounts to estimating $S(\cdot)$ from the noisy observation $R(\cdot)$. From the standpoint of statistical theory, it is a regression problem and, most typically, a nonparametric one. In practice, one does not have access to the entire $R(\cdot)$, but only to a finite number of sampled values: either $R(t_i)$ or some average of $R(\cdot)$ in a small neighborhood of $t_i$, for $i = 1, \ldots, I$. The practical problem, therefore, is to estimate $S(\cdot)$ given the data $R(t_i)$, $i = 1, \ldots, I$, and the relation (1.1) or (1.1'). Let $F = \{f_k,\ k \in K\}$ be a basis for $L^2(\mathbb{R}^d)$. We can write

    R(t_i) = Σ_{k∈K} c_k f_k(t_i) + W^H(t_i),   i = 1, …, I,    (2.1)

or

    R(t_i) = Σ_{k∈K} c_k f_k(t_i) + Ẇ^H(t_i),   i = 1, …, I,    (2.1')

where the $c_k$, $k \in K$, are the coefficients of $S(\cdot)$ in the basis $F$. The estimation problem for $S(\cdot)$ may therefore be reduced to that of the $c_k$'s. Obviously, one cannot hope to get good estimates of all the $c_k$'s when using only a finite number $I$ of observations $R(t_i)$. Therefore the expansions in (2.1) and (2.1') are truncated to a finite number of coefficients, and one estimates only the coefficients that are retained. It is clear that the success of this method depends, to a large extent, on the accuracy of the approximation of $S(\cdot)$ by the reduced expansion. It turns out that very good results can be obtained if one uses a so-called wavelet basis to carry out the expansions in (2.1) or (2.1').

3. Wavelet Transform and Random Wavelet Transform

We will discuss, for simplicity, only one-dimensional wavelets. Higher-dimensional wavelets can be treated in a similar way. In her seminal paper [5], Ingrid Daubechies presented the construction of two sequences $(p_k)$ and $(q_k)$ of real numbers that solve the two-scale equations

    φ(x) = Σ_k p_k φ(2x − k),    (3.1)

    ψ(x) = Σ_k q_k φ(2x − k),    (3.2)

for all $x \in \mathbb{R}$. She proved the existence of compactly supported solutions in $L^2(\mathbb{R})$. For each solution, the functions $\varphi(\cdot)$ and $\psi(\cdot)$ are called the scaling function and the mother wavelet, respectively. The name wavelet for $\psi(\cdot)$ is motivated by the following property:

    ∫_R ψ(x) dx = 0.    (3.3)

A very important role in wavelet theory is played by the translated and dilated versions of the basic functions $\varphi(\cdot)$ and $\psi(\cdot)$,

    φ_{j,k}(x) := 2^{j/2} φ(2^j x − k),    (3.4)

    ψ_{j,k}(x) := 2^{j/2} ψ(2^j x − k),    (3.5)

for all $x \in \mathbb{R}$, $j, k \in \mathbb{Z}$. Let us denote, for $J \in \mathbb{Z}$,

    Φ_J = {φ_{J,k}(·) : k ∈ Z}   and   Ψ_J = {ψ_{j,k}(·) : j ≥ J, k ∈ Z}.

Then, as shown in [5], $(\Phi_J, \Psi_J)$ is an orthonormal basis for $L^2(\mathbb{R})$ for each $J \in \mathbb{Z}$, under an appropriate choice of the coefficients $(p_k)$ and $(q_k)$. Such a choice of $(p_k)$ and $(q_k)$ will be assumed from now on. In particular, $\Psi_{-\infty} = \{\psi_{j,k} : j, k \in \mathbb{Z}\}$ is an orthonormal wavelet basis for $L^2(\mathbb{R})$.
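As a numerical sanity check of these claims, the conditions on $(p_k)$ and $(q_k)$ can be verified directly for the Daubechies D4 filter (our illustrative choice; the paper does not fix a particular member of the family). With the normalization of (3.1), orthonormality of the translates of $\varphi$ is equivalent to $\Sigma_k p_k p_{k-2m} = 2\delta_{m0}$:

```python
import numpy as np

# Daubechies D4 two-scale coefficients, normalized so that sum(p) = 2,
# matching phi(x) = sum_k p_k phi(2x - k) in (3.1).
s3 = np.sqrt(3.0)
p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0
# A standard choice for the wavelet coefficients in (3.2): q_k = (-1)^k p_{3-k}.
q = np.array([(-1)**k * p[3 - k] for k in range(4)])

assert abs(p.sum() - 2.0) < 1e-12            # integrating (3.1) forces sum(p) = 2
assert abs(np.dot(p, p) - 2.0) < 1e-12       # m = 0 orthonormality condition
assert abs(p[0]*p[2] + p[1]*p[3]) < 1e-12    # m = 1 shift is orthogonal
assert abs(q.sum()) < 1e-12                  # integral of psi vanishes, cf. (3.3)
```

All four checks pass exactly (up to floating-point roundoff), which is what makes the pyramid algorithms below lossless.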
For each locally square integrable function $f$, we define the scaling coefficients $(a_{j,k})$ and wavelet coefficients $(b_{j,k})$ as the respective integral transforms

    a_{j,k} := ∫_R f(x) φ_{j,k}(x) dx,    (3.6)

    b_{j,k} := ∫_R f(x) ψ_{j,k}(x) dx,    (3.7)

for all $j, k \in \mathbb{Z}$. The right-hand side of (3.7) is called the wavelet transform of $f(\cdot)$. If $f \in L^2(\mathbb{R})$, then we have

    f(x) = Σ_{k=−∞}^{∞} a_{J,k} φ_{J,k}(x) + Σ_{j=J}^{∞} Σ_{k=−∞}^{∞} b_{j,k} ψ_{j,k}(x)    (3.8)

and

    f(x) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} b_{j,k} ψ_{j,k}(x),    (3.9)

where the expansions are in the $L^2(\mathbb{R})$ sense. The following notation will be useful:

    V_j := closure_{L²(R)} span{φ_{j,k} : k ∈ Z},   W_j := closure_{L²(R)} span{ψ_{j,k} : k ∈ Z},    (3.10)

with the convention $V_{-\infty} := \{0\}$. Then we have (see [5] or [4])

    V_{j+1} = V_j ⊕ W_j.    (3.11)

Therefore, if we denote by $f_j$ the projection of $f$ on $V_j$ and by $g_j$ the projection of $f$ on $W_j$, we see that for each $f \in L^2(\mathbb{R})$ and all $M \ge N$ we have

    f_M = f_N + g_N + g_{N+1} + ⋯ + g_{M−1}.    (3.12)

The projection $f_M$ is thus obtained by adding various levels of "detail," encoded in the $g_j$'s, to the "blurred" version $f_N$ of $f$. In particular, we have

    f = f_N + Σ_{j=N}^{∞} g_j,    (3.13)

which is the same as (3.8), and

    f = Σ_{j=−∞}^{∞} g_j,    (3.14)

which is the same as (3.9). Let us denote by $(c^j_k,\ k \in \mathbb{Z})$ and $(d^j_k,\ k \in \mathbb{Z})$ the coefficients of $f$ in the bases of $V_j$ and $W_j$, respectively, for each $j \in \mathbb{Z}$. That is,

    f_j = Σ_k c^j_k φ_{j,k},    (3.15)

    g_j = Σ_k d^j_k ψ_{j,k},    (3.16)

for each $j \in \mathbb{Z}$. There exists a unique pair of numerical sequences $\{(\tilde{p}_k), (\tilde{q}_k)\}$ such that one can obtain the coefficients $(c^j_k,\ k \in \mathbb{Z})$, $(d^j_k,\ k \in \mathbb{Z})$, $j = M−1, M−2, \ldots, N$, from the coefficients $(c^M_k,\ k \in \mathbb{Z})$, by means of the following "decomposition algorithm":

    c^{j−1}_k = Σ_l p̃_{l−2k} c^j_l,    (3.17)

    d^{j−1}_k = Σ_l q̃_{l−2k} c^j_l,   j = M, M−1, …, N+1.    (3.18)

It is also possible to "reconstruct" the $c^M_k$'s from the sequences $(c^N_k,\ k \in \mathbb{Z})$ and $(d^j_k,\ k \in \mathbb{Z})$, $j = N, \ldots, M−1$, by means of the "reconstruction algorithm":

    c^j_k = Σ_l (p_{k−2l} c^{j−1}_l + q_{k−2l} d^{j−1}_l),   j = N+1, …, M,    (3.19)

where $p_k$, $q_k$ are the original Daubechies coefficients. The above algorithms are the Mallat pyramidal algorithms ([17]).

It is important to note that the coefficients $(c^N_k,\ k \in \mathbb{Z})$, $(d^j_k,\ k \in \mathbb{Z})$, $j = N, N+1, \ldots, M−1$, can be obtained from $(c^M_k,\ k \in \mathbb{Z})$ in one shot. Let us denote by $(r^{M,N,k}_l,\ l \in \mathbb{Z})$ and $(s^{M,j,k}_l,\ l \in \mathbb{Z})$ the coefficients of $\varphi_{N,k}$ and of $\psi_{j,k}$, $j = N, N+1, \ldots, M−1$, respectively, in the basis of $V_M$. That is,

    φ_{N,k} = Σ_l r^{M,N,k}_l φ_{M,l},    (3.20)

    ψ_{j,k} = Σ_l s^{M,j,k}_l φ_{M,l},    (3.21)

for $k \in \mathbb{Z}$ and $j = N, N+1, \ldots, M−1$. Then

    c^N_k = Σ_l r^{M,N,k}_l c^M_l,    (3.22)

    d^j_k = Σ_l s^{M,j,k}_l c^M_l,    (3.23)

for $k \in \mathbb{Z}$ and $j = N, N+1, \ldots, M−1$. The formulas (3.22) and (3.23) express the simultaneous projections of $f_M$ on the spaces $V_N, W_N, W_{N+1}, \ldots, W_{M−1}$. We can write (3.22) and (3.23) in matrix form as

    [ c^N      ]   [ r^{M,N}   ]
    [ d^N      ]   [ s^{M,N}   ]
    [ d^{N+1}  ] = [ s^{M,N+1} ] c^M,    (3.24)
    [   ⋮      ]   [    ⋮      ]
    [ d^{M−1}  ]   [ s^{M,M−1} ]

where $c^i = [c^i_l]_{l \in \mathbb{Z}}$, $d^j = [d^j_l]_{l \in \mathbb{Z}}$, $r^{M,N} = [r^{M,N,k}_l]_{l,k \in \mathbb{Z}}$, $s^{M,j} = [s^{M,j,k}_l]_{l,k \in \mathbb{Z}}$, for $i = M, N$ and $j = N, N+1, \ldots, M−1$. If we denote the left-hand side of (3.24) by $w_{M,N}$ and the matrix on the right-hand side by $W_{M,N}$, we can write it as

    w_{M,N} = W_{M,N} c^M.    (3.25)
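One level of the decomposition (3.17)-(3.18) and of the reconstruction (3.19) can be sketched in a few lines. This is our own illustration, not the authors' implementation: it uses the Daubechies D4 filter (in the unit-norm convention $h_k = p_k/\sqrt{2}$) and periodized boundary handling as a simplifying assumption:

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4.0 * np.sqrt(2.0))  # D4 low-pass
g = np.array([(-1)**k * h[3 - k] for k in range(4)])                   # high-pass

def analyze(c):
    """One level of (3.17)-(3.18): periodized D4 decomposition."""
    n = len(c)
    approx = np.array([sum(h[m] * c[(m + 2*k) % n] for m in range(4))
                       for k in range(n // 2)])
    detail = np.array([sum(g[m] * c[(m + 2*k) % n] for m in range(4))
                       for k in range(n // 2)])
    return approx, detail

def synthesize(approx, detail):
    """One level of the reconstruction (3.19): transpose of the analysis step."""
    n = 2 * len(approx)
    c = np.zeros(n)
    for k in range(len(approx)):
        for m in range(4):
            c[(m + 2*k) % n] += h[m] * approx[k] + g[m] * detail[k]
    return c

x = np.sin(np.linspace(0, 2*np.pi, 16, endpoint=False))
a, d = analyze(x)
assert np.allclose(synthesize(a, d), x)   # perfect reconstruction
```

Because the one-level analysis operator is orthogonal, synthesis is simply its transpose, which is exactly the structure exploited in (3.25) and in the discrete transform of Section 4.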
Equation (3.25) is the key relation for the discrete wavelet transform to be introduced in Section 4.

In the remaining part of this section, we shall briefly describe transforms of the fractional Brownian motion $B^H$. The results given below can be extended to the case of the Wiener sheet by using the results of [2]. The scaling and wavelet coefficients of $B^H$ are random sequences given by

    a^H_{j,k} = ∫_R B^H(x) φ_{j,k}(x) dx    (3.26)

and

    b^H_{j,k} = ∫_R B^H(x) ψ_{j,k}(x) dx,    (3.27)

respectively, for $j, k \in \mathbb{Z}$. Using the results of [1], we can also define what we call random scaling and random wavelet coefficients of $B^H$, via stochastic integrals of the scaling and wavelet functions with respect to $B^H$. That is, we define

    ã^H_{j,k} = ∫_R φ_{j,k}(x) dB^H(x)    (3.28)

and

    b̃^H_{j,k} = ∫_R ψ_{j,k}(x) dB^H(x)    (3.29)

for $j, k \in \mathbb{Z}$. Several properties of the random wavelet coefficients of $B^H$ are demonstrated in [3]. The transform in (3.29) is called the random wavelet transform. The scaling and wavelet transforms of $\dot{B}^H$ can be defined in terms of (3.28) and (3.29) as

    ∫_R φ_{j,k}(x) Ḃ^H(x) dx = ã^H_{j,k}    (3.30)

and

    ∫_R ψ_{j,k}(x) Ḃ^H(x) dx = b̃^H_{j,k}    (3.31)

for $j, k \in \mathbb{Z}$.

For $d = 1$, the models (1.1) and (1.1') can be written as

    R(t) = S(t) + B^H(t)    (3.32)

or

    R(t) = S(t) + Ḃ^H(t)    (3.32')

for $t \in \mathbb{R}$. This implies the following relations for the respective scaling and wavelet coefficients:

    a^R_{j,k} = a^S_{j,k} + a^H_{j,k},   b^R_{j,k} = b^S_{j,k} + b^H_{j,k},    (3.33), (3.34)

or

    a^R_{j,k} = a^S_{j,k} + ã^H_{j,k},   b^R_{j,k} = b^S_{j,k} + b̃^H_{j,k},    (3.33'), (3.34')

for $j, k \in \mathbb{Z}$. The above relations are used in the compression and de-noising algorithms given in the next section.

4. Discrete Wavelet Transform and Signal De-Noising

Suppose that the $I = 2^M$ coefficients of the projection $f_M$ of $f$ on $V_M$, $M > 0$, are known. Using these $I$ coefficients, we can compute approximations to $c^0_0, d^0_0, d^1_0, d^1_1, \ldots, d^{M−1}_0, \ldots, d^{M−1}_{2^{M−1}−1}$, the coefficients introduced in the preceding section. We shall use a finite-dimensional counterpart of (3.25). In other words, denoting by $c_I$ the vector of $I$ known coefficients of $f_M$ and by $W_I$ the appropriate finite-dimensional part of $W_{M,0}$, we define the vector $w_I$ of "discrete" wavelet coefficients of $f_M$ by

    w_I := W_I c_I.    (4.1)

The matrix transform in (4.1) is called the discrete wavelet transform. The vector $w_I$ approximates the coefficient vector $[c^0_0, d^0_0, d^1_0, d^1_1, \ldots, d^{M−1}_0, \ldots, d^{M−1}_{2^{M−1}−1}]$. It is not difficult to see that $W_I$ is an orthogonal matrix. Therefore we have

    c_I = W_I^T w_I,    (4.2)

where $T$ denotes transposition. The matrix transform in (4.2) is called the inverse discrete wavelet transform.
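The pair (4.1)-(4.2), together with the shrinkage step discussed below, can be made concrete with a small sketch. The Haar filter and the soft-threshold rule here are our simplifying choices for illustration, not the wavelet or the thresholding rule used in the authors' simulations:

```python
import numpy as np

r2 = np.sqrt(2.0)

def haar_dwt(x):
    """Full Haar analysis, cf. (4.1); len(x) must be a power of 2.

    Returns the single coarse coefficient and the detail vectors
    ordered from coarse to fine, i.e. [c^0, d^0, d^1, ...].
    """
    c, out = np.asarray(x, float), []
    while len(c) > 1:
        a = (c[0::2] + c[1::2]) / r2
        d = (c[0::2] - c[1::2]) / r2
        out.append(d)
        c = a
    return c, out[::-1]

def haar_idwt(c, details):
    """Inverse transform, cf. (4.2)."""
    c = np.asarray(c, float)
    for d in details:
        up = np.empty(2 * len(c))
        up[0::2] = (c + d) / r2
        up[1::2] = (c - d) / r2
        c = up
    return c

def denoise(x, thr):
    """Soft-threshold ("shrink") the detail coefficients, then invert."""
    c, details = haar_dwt(x)
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    return haar_idwt(c, details)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(256)
est = denoise(noisy, thr=0.3)
```

Note that the same shrinkage both compresses (many details become exactly zero) and de-noises, which is precisely the point made below for $w^R_I$.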
An important feature of (4.1) is that, typically, many of the elements of $w_I$ are near zero and can therefore be discarded for compression and estimation purposes without significantly altering the outcome of the inverse transformation (4.2). This property is the basis for the idea of simultaneous signal compression and de-noising that we shall briefly discuss below.

Let $c^R_I$, $c^S_I$, $c^B_I$, and $c^{\dot{B}}_I$ denote the vectors of coefficients obtained by convolving $R(\cdot)$, $S(\cdot)$, $B^H(\cdot)$, and $\dot{B}^H(\cdot)$, respectively, with $I$ elements of the basis of $V_M$. Only the coefficients $c^R_I$ are observed in practice; in other words, the elements of $c^R_I$ are approximated by sampled measurements of $R(\cdot)$ at $I$ points. Now, we can obviously write

    c^R_I = c^S_I + c^B_I    (4.3)

or

    c^R_I = c^S_I + c^{Ḃ}_I.    (4.3')

Applying (4.1) to (4.3) or (4.3'), we get

    w^R_I = w^S_I + w^B_I    (4.4)

or

    w^R_I = w^S_I + w^{Ḃ}_I.    (4.4')

As we indicated earlier, many elements of $w^R_I$ will be near zero. Using appropriate thresholding, we first set to zero those elements of $w^R_I$ that are appropriately small. In other words, we compress $w^R_I$. After that, we estimate the elements of $w^S_I$ by using whatever information is left in the compressed version of $w^R_I$. There are, of course, numerous estimation procedures possible. The one that we used in our simulations was based on an appropriate shrinkage of the elements of the compressed $w^R_I$. In this way we achieve a simultaneous compression and estimation (de-noising) in one shrinkage operation. Let us denote by $\hat{w}^S_I$ the estimate of $w^S_I$. Applying (4.2) to $\hat{w}^S_I$, we obtain the estimate of $c^S_I$ as

    ĉ^S_I = W_I^T ŵ^S_I.    (4.5)

Note that (4.5) represents a simultaneous decompression and estimation of the original, unobserved signal $S(\cdot)$. If we denote by $\hat{c}^0_0, \hat{d}^0_0, \hat{d}^1_0, \hat{d}^1_1, \ldots, \hat{d}^{M−1}_0, \ldots, \hat{d}^{M−1}_{2^{M−1}−1}$ the elements of $\hat{c}^S_I$, then the estimate $\hat{S}(\cdot)$ is obtained by

    Ŝ(t) = ĉ^0_0 φ_{0,0}(t) + d̂^0_0 ψ_{0,0}(t) + d̂^1_0 ψ_{1,0}(t) + d̂^1_1 ψ_{1,1}(t) + ⋯ + d̂^{M−1}_0 ψ_{M−1,0}(t) + ⋯ + d̂^{M−1}_{2^{M−1}−1} ψ_{M−1,2^{M−1}−1}(t)    (4.6)

for $t \in \mathbb{R}$.

5. Digital Communication

The methodology of Sections 1 through 4 produces a vector of data $\hat{w}^S_I$ that one may wish to store and/or transmit. In this and the next section, we present some basic principles of algebraic geometric coding that we intend to use to code the data $\hat{w}^S_I$ and transmit them via digital communication channels.

Digital communication offers various advantages and has become increasingly important. Among its advantages are its flexibility, reliability, and the availability of wide-band channels such as optical fibers and satellite. A typical (memoryless) digital communication system has these stages:

    digital source → source encoder → channel encoder → modulator ⇒ Noisy Channel ⇒ detector → channel decoder → source decoder → digital sink

where the effect of the detector is a "demodulator." Source coding (here a form of lossless data compression) is a means to remove redundancy. It maps the digital source into some code, where full recovery is possible, seeking to represent the source efficiently in the sense that the average length (bits per symbol) is minimal. This average length is bounded below by the entropy, the information that the source carries (known as the source coding theorem; see [10]). The average length can generally be made close to the entropy at the price of decoding complexity.

In the approach presented in this paper, the random wavelet representation is the main tool used in signal compression/de-noising. This compression procedure, accomplished with various thresholding techniques, is an entropy-reduction transformation. It actually takes place before the steps depicted in the above diagram and produces the "digital source." Understanding the statistics of the source (which in our case is the statistical distribution of the wavelet representation of the signals), or the statistics of the wavelet coefficients, can lead to significant entropy reduction, thus increasing the compression ratio. This feature reinforces the importance of the noise modeling. The actual source coding involves encoding the coefficients in the random wavelet representation of the signal.

6. Algebraic Geometric Codes

1. Channel Coding.

The objective of channel coding (error control) is to achieve reliability in transmitting information through a noisy channel. Channel coding introduces redundancy in order to control the errors caused by channel noise during transmission. It seems to be the only practical way to achieve both reliability and efficiency.

A channel is described by the conditional probability of correct reception. The channel capacity, $C$, is introduced as the maximal mutual information, to measure the capacity of the channel ([10]). A classical theorem of Shannon (the channel coding theorem; compare the source coding theorem mentioned above) states that if

    H(S)/T_s ≤ C/T_c,

where $T_s$ is the duration of a source symbol, $T_c$ is the duration of a channel symbol, and $H(S)$ is the source entropy, then there exists some code with arbitrarily small error probability.

Two parameters are apparent to the designer: transmitted signal power and bandwidth. These parameters in turn, through a modulation scheme, determine the ratio of the signal energy per bit $E_b$ to the noise power density $N_0$. The reduction of this ratio $E_b/N_0$ can mean a lower transmitted power requirement and hardware cost, and thus serves as a measure of performance. Coding is the only practical way to achieve a small error probability. Furthermore, careful design and choice of the coding scheme can lead to a reduction of $E_b/N_0$, known as the coding gain. This is one factor that prompts us to consider algebraic geometric codes.
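Before turning to algebraic geometric codes, the idea of adding redundancy to correct channel errors can be illustrated with the classical [7, 4, 3] Hamming code over GF(2). This is a toy example of our choosing, not an AG code: four message bits are expanded to seven, and any single bit flipped by the channel can be located and corrected.

```python
import numpy as np

# Generator and parity-check matrices of the [7, 4, 3] Hamming code over GF(2).
# (H here is the parity-check matrix, unrelated to the Hurst parameter H above.)
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=int)
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]], dtype=int)

def encode(msg):
    return (msg @ G) % 2

def correct(word):
    """Single-error correction: a nonzero syndrome equals the column of H
    at the flipped position, since the columns of H are distinct and nonzero."""
    syn = (H @ word) % 2
    if syn.any():
        pos = int(np.where((H.T == syn).all(axis=1))[0][0])
        word = word.copy()
        word[pos] ^= 1
    return word

msg = np.array([1, 0, 1, 1])
sent = encode(msg)
received = sent.copy()
received[5] ^= 1                      # channel flips one bit
assert (correct(received) == sent).all()
```

The price of this reliability is rate: only 4/7 of the transmitted bits carry information. The tension between rate $k/n$ and distance $d/n$ as $n$ grows is exactly what motivates the AG codes discussed next.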
2. Algebraic Geometric Codes.

There are two types of codes in common practice: block codes and convolutional codes. Among block codes, linear codes are the most common, because of the easy manipulation afforded by the extra structure they carry. In this paper, we will not be concerned with convolutional codes, and we treat this topic informally.

The economics of a linear code is measured by the parameters $[n, k, d]$ (sometimes $[n, k, d]_q$, to signify the presence of the underlying finite field): length $n$, dimension $k$, and minimal distance $d$. The code is a $k$-dimensional vector subspace sitting inside some standard $n$-dimensional vector space over a given finite field, say $\mathbb{F}_q$, through an encoding process. Various ingenious techniques (such as concatenation) can improve code performance.

Classical constructions (typically cyclic codes) usually depend in some way on the parameters $n$ and $k$ (and $d$). It has been the case that, as the codeword length $n$ increases, either $k$ or $d$ gains poorly [15]; that is, $d/n$ or $k/n$ will tend to 0. In other words, the code cannot maintain both its error-correcting ability and its information rate. Algebraic geometric (AG) codes come to the rescue. They were constructed about fifteen years ago, an interdisciplinary fruit of coding theory and algebraic geometry, and were soon proven to be of theoretical importance. They are the first examples to exceed the Gilbert-Varshamov bound; both $d/n$ and $k/n$ are bounded away from 0 as $n$ becomes large (indeed, with far better bounds). Such measures are important so that we can check that the code rate and error-correcting capability do not diminish miserably. The complexity incurred from aspects such as the increase of codeword length (hence bandwidth requirement) can now be analyzed.

3. Construction of Algebraic Geometric Codes.

An AG code is constructed ([16,23]) from a smooth projective curve over a finite field $\mathbb{F}_q$. In one formulation it is the image of the evaluation map from a certain space of sections $L(G)$ at some $n$ rational points $P_i$. Here $G$ is a divisor with support disjoint from the divisor $P := P_1 + \cdots + P_n$. In most practical situations these sections are represented by polynomials, and the evaluation is done at specific points (e.g., on the projective plane). By Weil's conjecture ([9]) for curves, there are approximately on the scale of $2q$ (rational) points available on the curve. Clearly, such a construction provides us with numerous choices for constructing linear codes. Although the actual minimal distance is hard to determine, because the theorem of Riemann-Roch does not tell us how to compute the dimension of the space of sections associated with a special divisor, we can obtain a comparable quantity

    d* = deg(G) − 2g + 2,

which plays a role in measuring the code's error-correcting ability, where $g$ is the genus of the curve. This is called the designed minimal distance. To spell out some detail, let the above-defined code have parameters $[n, k, d]$. If we assume that $2g − 2 < \deg(G) < n$, then $k = \deg(G) − g + 1$ and $d \ge d^*$ ([17]). So for a choice of $\deg(G)$ on the scale of $n$, it is clear that both $k$ and $d$ will be of the same scale.

4. Decoding Algorithms.

Decoding algorithms are one of the most important factors in channel coding. The study in this area began late in the last decade; the efforts are summarized in [23]. The basic algorithm is similar to the idea of decoding cyclic codes: (1) write down a parity check matrix; (2) determine an error locator; (3) solve a system of linear equations to determine the error locations; and (4) finally evaluate the correct values. But the error locator does not consist exactly of error locations; extra places may be introduced. The point has always been how to solve a system of linear equations (not an arbitrary one) effectively.

Three recent results in the area of decoding AG codes are worth attention. They have comparable error-correcting capacity (by which we mean the number of errors the code is capable of correcting) and similar complexity, as illustrated below, and are given respectively in Justesen et al. [11], Feng and Rao [7], and Ehrhard [6].

    Ref.   Curve    Capacity                      Complexity
    [11]   planar   d*/2 − m²/8 + m/4 − 9/8       O(n^{7/3})
    [7]    any      (d* − 1)/2                    O(n³)
    [6]    any      (d* − 1)/2                    O((d*)² n)

In the above, $m$ is the degree of the plane curve, and $d^*$ should be considered as proportional to $n$. Each result was able to make use of an algorithm generalizing some standard ones for cyclic codes (in the cases of [11] and [7]), thanks to the underlying algebraic structure. An equivalent description using differentials was the presentation in [7] and [6]. They all have the appeal of implementability.

5. Perspective.

AG codes are easy to construct, have good rates and error-correcting capacity, and admit fast decoding algorithms. In [24] the authors constructed a code, for channels with or without memory, with a rate arbitrarily close to the channel capacity, and with the remarkable property (an example of the identification coding theorem of Ahlswede and Dueck) that the transmission can achieve a double exponential rate. Two constructions were given. One was a triple-layered concatenated code; the other used algebraic geometric codes equipped with extraordinary bounds. It is also known [21] that algebraic geometric codes can be used to rewrite a nonlinear code into a linear code. We conclude that AG codes can be used to provide optimal performance.

We have performed some computer simulations of the decoding algorithms mentioned above and compared their performance with other existing ones. The results will be reported in the future.
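The evaluation construction of Section 6.3 can be made concrete in the genus-zero case, where the curve is the projective line and $L(G)$ is simply the polynomials of degree less than $k$: evaluating them at $n$ distinct points yields the classical Reed-Solomon codes. The field GF(7) and the parameters below are illustrative choices of ours, not from the paper; since $g = 0$, the evaluation code meets the familiar distance $n − k + 1$:

```python
# A tiny evaluation (Reed-Solomon) code over GF(7): the genus-0 case of the
# AG construction. L(G) is the set of polynomials of degree < k, evaluated
# at n distinct rational points of the line.
P = 7                        # field size (prime, so arithmetic is mod P)
points = [0, 1, 2, 3, 4, 5]  # n = 6 distinct evaluation points in GF(7)
k = 2                        # dimension: message = polynomial of degree < 2

def encode(msg):
    """Evaluate the message polynomial msg[0] + msg[1]*x + ... at each point."""
    return [sum(m * pow(x, i, P) for i, m in enumerate(msg)) % P
            for x in points]

cw1 = encode([3, 5])
cw2 = encode([3, 6])
dist = sum(a != b for a, b in zip(cw1, cw2))
# Two distinct polynomials of degree < k agree in at most k - 1 points,
# so distinct codewords differ in at least n - k + 1 = 5 positions.
assert dist >= len(points) - k + 1
```

Replacing the line with a higher-genus curve trades a loss of $g$ in the designed distance for many more rational points, hence much longer codes over the same small field, which is the advantage of the general AG construction.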
7. References

[1] Bielecki, T., On integration with respect to fractional Brownian motions, Statistics and Probability Letters, to appear 1995.
[2] Bielecki, T., On integration with respect to fractional Wiener sheet, preprint, 1995.
[3] Bielecki, T., Chen, J., and Yau, Stephen S. T., Random wavelet transformation and its properties, in Wavelet Applications in Signal and Image Processing II, eds. A. F. Laine and M. A. Unser, SPIE Proc. vol. 2303 (1994).
[4] Chui, C., An Introduction to Wavelets, Academic Press, 1992.
[5] Daubechies, I., Orthonormal bases of compactly supported wavelets, Commun. Pure Appl. Math., vol. 41, pp. 909-996 (1988).
[6] Ehrhard, D., Achieving the designed error capacity in decoding algebraic-geometric codes, IEEE Transactions on Information Theory, vol. 39, no. 3 (1993).
[7] Feng, G.-L. and Rao, T. R. N., Decoding algebraic-geometric codes up to the designed minimal distance, IEEE Transactions on Information Theory, vol. 39, no. 1 (1993).
[8] Gray, R. M., Source Coding Theory, Kluwer Academic Publishers, 1990.
[9] Hartshorne, R., Algebraic Geometry, Springer-Verlag, 1990.
[10] Haykin, S., Digital Communications, John Wiley & Sons, 1988.
[11] Justesen, J., Larsen, K. J., Jensen, H. E., and Høholdt, T., Fast decoding of codes from algebraic plane curves, IEEE Transactions on Information Theory, vol. 38, no. 1 (1992).
[12] Keshner, M. S., 1/f noise, Proceedings of the IEEE, vol. 70, no. 3, pp. 212-218 (1982).
[13] Kwong, M. K. and Tang, P. T. Peter, W-matrices, nonorthogonal multiresolution analysis, and finite signals of arbitrary length, Argonne National Laboratory Preprint MCS-P449-0794.
[14] Kwong, M. K., MATLAB implementation of W-matrix multiresolution analyses, Argonne National Laboratory Preprint MCS-P462-0894.
[15] van Lint, J. H., Introduction to Coding Theory, Springer-Verlag, 1991.
[16] van Lint, J. H. and van der Geer, G., Introduction to Coding Theory and Algebraic Geometry, Birkhäuser, 1988.
[17] Mallat, S., A theory for multiresolution signal decomposition: the wavelet representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, pp. 674-693 (1989).
[18] Mandelbrot, B. B. and Van Ness, J. W., Fractional Brownian motions, fractional noises and applications, SIAM Review, vol. 10, no. 4 (1968).
[19] Morgera, S. D. and Krishna, H., Digital Signal Processing, Academic Press, 1989.
[20] Oppenheim, A. V. and Schafer, R. W., Discrete-Time Signal Processing, Prentice Hall, 1989.
[21] SIAM News, vol. 27, no. 2, Feb. 1994.
[22] Song, L., Lectures on error correcting codes, algebraic geometric codes and the decoding algorithms, Lecture Notes, the University of Illinois at Chicago, fall 1993 and spring 1994.
[23] Tsfasman, M. A. and Vladut, S. G., Algebraic-Geometric Codes, Kluwer Academic Publishers, 1991.
[24] Verdú, S. and Wei, V. K., Explicit construction of optimal constant-weight codes for identification via channels, IEEE Transactions on Information Theory, vol. 39, no. 1 (1993).