Meyer Wavelets with Least Uncertainty Constant

ISSN 0001-4346, Mathematical Notes, 2008, Vol. 84, No. 5, pp. 680–687. © Pleiades Publishing, Ltd., 2008.
Original Russian Text © E. A. Lebedeva, V. Yu. Protasov, 2008, published in Matematicheskie Zametki, 2008, Vol. 84, No. 5, pp. 732–740.

E. A. Lebedeva¹ and V. Yu. Protasov²

¹ Kursk State University
² Moscow State University

Received August 28, 2007

Abstract—In the present paper, we construct a system of Meyer wavelets with the least possible uncertainty constant. The uncertainty constant minimization problem is reduced to a convex variational problem whose solution satisfies a second-order nonlinear differential equation. Solving this equation numerically, we obtain the desired system of wavelets.

DOI: 10.1134/S0001434608110096

Key words: Meyer wavelet, uncertainty constant, variational problem, second-order nonlinear differential equation, Sobolev space, Fourier transform.

1. INTRODUCTION

The system of Meyer wavelets, which is one of the first examples of wavelet bases, was constructed by Meyer in 1986 [1]. At present, this wavelet family and its modifications find numerous applications in the theory of functions, numerical methods for solving differential equations, signal processing, etc. (see, e.g., the vast bibliographies in [2], [3]). Systems of Meyer wavelets are constructed as follows. Let θ(ω) be an odd absolutely continuous function equal to π/4 for ω ≥ π/3, and let λ be an even function such that

$$
\lambda(\omega)\big|_{[0,\infty)} =
\begin{cases}
\dfrac{\pi}{4} + \theta(\omega - \pi), & \omega \in \Big[\dfrac{2\pi}{3}, \dfrac{4\pi}{3}\Big], \\[2mm]
\dfrac{\pi}{4} - \theta\Big(\dfrac{\omega}{2} - \pi\Big), & \omega \in \Big[\dfrac{4\pi}{3}, \dfrac{8\pi}{3}\Big], \\[2mm]
0, & \omega \in \Big[0, \dfrac{2\pi}{3}\Big] \cup \Big[\dfrac{8\pi}{3}, +\infty\Big).
\end{cases}
\tag{1}
$$
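For concreteness, here is a minimal numerical sketch of formula (1); the helper name `meyer_lambda` and the example profile are assumptions of ours (any admissible θ may be substituted), only the branch on [0, ∞) is coded explicitly, and λ is extended to negative frequencies by evenness.

```python
import numpy as np

def meyer_lambda(omega, theta):
    """lambda(omega) from formula (1) for an admissible profile theta on [0, pi/3].

    theta is extended by theta(w) = pi/4 for w >= pi/3 and by oddness;
    lambda itself is even, so only |omega| matters."""
    def theta_ext(w):
        s = np.sign(w)
        w = np.abs(w)
        return s * np.where(w >= np.pi / 3, np.pi / 4, theta(w))

    w = np.abs(np.atleast_1d(np.asarray(omega, dtype=float)))
    out = np.zeros_like(w)
    band1 = (w >= 2 * np.pi / 3) & (w <= 4 * np.pi / 3)
    band2 = (w > 4 * np.pi / 3) & (w <= 8 * np.pi / 3)
    out[band1] = np.pi / 4 + theta_ext(w[band1] - np.pi)
    out[band2] = np.pi / 4 - theta_ext(w[band2] / 2 - np.pi)
    return out

# example: the linear profile theta(w) = (3/4) w, i.e. x(t) = (3/2) t in the notation of Sec. 2
theta_lin = lambda w: 0.75 * w
```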

The Meyer wavelet function ψ and its Fourier image $\hat\psi(\omega) = \int_{-\infty}^{\infty}\psi(t)e^{-it\omega}\,dt$ are defined as

$$
\psi(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\cos\Big(\Big(t - \frac{1}{2}\Big)\omega\Big)\sin(\lambda(\omega))\,d\omega, \qquad
\hat\psi(\omega) = e^{-i\omega/2}\sin(\lambda(\omega)).
\tag{2}
$$

The quantity J = Δψ·Δψ̂, where

$$
\Delta_\psi = \frac{1}{\|\psi\|_{L_2(\mathbb R)}^2}\int_{-\infty}^{\infty}(t - t_0)^2|\psi(t)|^2\,dt, \qquad
t_0 = \frac{1}{\|\psi\|_{L_2(\mathbb R)}^2}\int_{-\infty}^{\infty} t\,|\psi(t)|^2\,dt,
$$

$$
\Delta_{\hat\psi} = \frac{1}{\|\hat\psi\|_{L_2(\mathbb R)}^2}\int_{-\infty}^{\infty}(\omega - \omega_0)^2|\hat\psi(\omega)|^2\,d\omega, \qquad
\omega_0 = \frac{1}{\|\hat\psi\|_{L_2(\mathbb R)}^2}\int_{-\infty}^{\infty}\omega\,|\hat\psi(\omega)|^2\,d\omega,
$$

is called the uncertainty constant of the function ψ.
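To connect these definitions with the computations reported in Sec. 3, the sketch below (hypothetical helper names, reusing `meyer_lambda` from the previous snippet; the grid sizes and the truncation T of the time axis are arbitrary choices) evaluates ψ by the cosine integral in (2), uses |ψ̂(ω)|² = sin²(λ(ω)), and approximates J = Δψ·Δψ̂ by quadrature. For the linear profile θ(ω) = (3/4)ω the result should come out roughly equal to the value 6.886 quoted in Sec. 3; the slow decay of ψ for this non-smooth profile makes the truncated time integral converge slowly.

```python
import numpy as np
from scipy.integrate import trapezoid

def meyer_psi(theta, t_grid, n_omega=4000):
    """psi(t) via formula (2); the integrand vanishes for |omega| > 8*pi/3 and is even in omega."""
    w = np.linspace(0.0, 8 * np.pi / 3, n_omega)
    s = np.sin(meyer_lambda(w, theta))
    return np.array([trapezoid(np.cos((t - 0.5) * w) * s, w) / np.pi for t in t_grid])

def uncertainty_constant(theta, T=80.0, n_t=8001, n_omega=4000):
    """Approximate J = Delta_psi * Delta_psihat, truncating the time axis to |t - 1/2| <= T."""
    t = np.linspace(0.5 - T, 0.5 + T, n_t)
    psi2 = meyer_psi(theta, t) ** 2
    nt = trapezoid(psi2, t)
    t0 = trapezoid(t * psi2, t) / nt
    delta_time = trapezoid((t - t0) ** 2 * psi2, t) / nt

    w = np.linspace(-8 * np.pi / 3, 8 * np.pi / 3, n_omega)
    psihat2 = np.sin(meyer_lambda(w, theta)) ** 2        # |psihat(omega)|^2 = sin^2(lambda(omega))
    nw = trapezoid(psihat2, w)
    w0 = trapezoid(w * psihat2, w) / nw
    delta_freq = trapezoid((w - w0) ** 2 * psihat2, w) / nw
    return delta_time * delta_freq

print(uncertainty_constant(lambda w: 0.75 * w))          # roughly 6.886 for the linear profile
```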



The uncertainty constant is a quantitative characteristic in problems related to the uncertainty principle used in quantum mechanics, in harmonic analysis, and in problems of time–frequency localization [4]. The uncertainty constant shows to what extent a function is localized in the time domain (the factor Δψ) and in the frequency domain (the factor Δψ̂). The smaller each of these factors, the better the function is localized in the corresponding domain. Thus, for the Haar system (see, e.g., [5]), Δψ = 5/6 and Δψ̂ = ∞; therefore, the Haar system is localized better in time than in frequency. The system of Meyer wavelets is the earliest example of an orthonormal basis in the space L₂(ℝ) whose elements are sufficiently smooth, have finite uncertainty constants, and are very well localized both in the time domain (the wavelet decays at infinity faster than any power if ψ̂ ∈ C∞(ℝ)) and in the frequency domain (ψ̂ is compactly supported). Good time–frequency localization is one of the main advantages of the Meyer wavelets.

In the present paper, we find an optimal system of Meyer wavelets with the least possible uncertainty constant J(ψ) and hence with the best possible (in this sense) time–frequency localization. In the next section, we reduce the uncertainty constant minimization problem to a convex variational problem and show that this problem has a solution satisfying the corresponding differential equation (Theorem 1). In Sec. 3, we present a numerical solution of this equation and thus obtain a function θ(ω) for constructing the optimal Meyer wavelet by formulas (1) and (2). The same scheme is used to solve the similar problem of minimizing Δψ for a given value of Δψ̂ (i.e., the problem of best possible localization in the time domain for a given localization in the frequency domain), and conversely. The Meyer wavelet functions optimal in this sense can be found from the same variational problem and the same differential equation (Remark 2).

2. MAIN RESULTS

In [6], the following simplified expression for the uncertainty constant of the family of Meyer wavelets was obtained:

$$
J(\theta) = \bigg(\frac{14\pi}{3} - \frac{21}{\pi}\int_0^{\pi/3}\omega\sin(2\theta(\omega))\,d\omega\bigg)\int_0^{\pi/3}(\theta'(\omega))^2\,d\omega;
\tag{3}
$$

in this case, the function θ is chosen from the set

$$
\Big\{\theta \in C^1\Big[0, \frac{\pi}{3}\Big],\ \ \theta(0) = \theta'\Big(\frac{\pi}{3}\Big) = 0,\ \ \theta\Big(\frac{\pi}{3}\Big) = \frac{\pi}{4}\Big\}.
$$

The representation of the functional in the form (3) remains valid if the functions θ are chosen not from the space C¹[0, π/3] but from the Sobolev space

$$
W_2^1\Big[0, \frac{\pi}{3}\Big] = \Big\{f :\ f \text{ is absolutely continuous},\ f' \in L_2\Big[0, \frac{\pi}{3}\Big]\Big\}.
$$

Indeed, the condition θ ∈ C¹[0, π/3] was used to obtain formula (3) because it permits applying the property of the Fourier transform of the derivative, $\widehat{f'}(\omega) = i\omega\hat f(\omega)$, which also holds in the space W₂¹. We use the notation x(t) = 2θ(t) to reduce the functional (3) to the form

$$
I(x) = \frac{1}{4}\bigg(\frac{14\pi}{3} - \frac{21}{\pi}\int_0^{\pi/3} t\sin x(t)\,dt\bigg)\int_0^{\pi/3}(x'(t))^2\,dt,
\tag{4}
$$

where

$$
x \in W_2^1\Big[0, \frac{\pi}{3}\Big], \qquad x(0) = 0, \qquad x\Big(\frac{\pi}{3}\Big) = \frac{\pi}{2}.
\tag{5}
$$
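As a quick sanity check on (4)–(5): for the admissible linear function x(t) = (3/2)t one gets ∫₀^{π/3} t sin((3/2)t) dt = 4/9 and ∫₀^{π/3}(x′(t))² dt = 3π/4, so I = 7(π² − 2)/8 ≈ 6.886, exactly the value quoted for the dotted line in Sec. 3. The short sketch below (a hypothetical helper, not code from the paper) evaluates (4) by quadrature and confirms this.

```python
import numpy as np
from scipy.integrate import quad

def functional_I(x, dx):
    """I(x) from formula (4) for a profile x with derivative dx on [0, pi/3]."""
    moment, _ = quad(lambda t: t * np.sin(x(t)), 0.0, np.pi / 3)
    energy, _ = quad(lambda t: dx(t) ** 2, 0.0, np.pi / 3)
    return 0.25 * (14 * np.pi / 3 - 21.0 / np.pi * moment) * energy

print(functional_I(lambda t: 1.5 * t, lambda t: 1.5))   # I at the linear baseline x(t) = (3/2) t
print(7 * (np.pi ** 2 - 2) / 8)                         # exact value, approximately 6.886
```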

Theorem 1. There is a function x(t) on which the functional (4) attains its absolute minimum under conditions (5). This is an analytic, increasing, concave function on the interval [0, π/3], and it satisfies the differential equation

$$
x''(t) = -pt\cos x(t)
\tag{6}
$$

for a certain value of the parameter p ≥ 0.
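Equation (6) with the boundary conditions from (5) is a standard two-point boundary value problem, so one natural numerical approach is a collocation solver. The sketch below (the helper name `solve_eq6`, the mesh size, and the linear initial guess are our choices, not the authors') uses `scipy.integrate.solve_bvp` and calls it with the near-optimal parameter value p = 0.676 found in Sec. 3.

```python
import numpy as np
from scipy.integrate import solve_bvp

def solve_eq6(p, n_mesh=200):
    """Solve x'' = -p*t*cos(x), x(0) = 0, x(pi/3) = pi/2, written as a first-order system."""
    def rhs(t, y):                                     # y[0] = x, y[1] = x'
        return np.vstack([y[1], -p * t * np.cos(y[0])])

    def bc(ya, yb):
        return np.array([ya[0], yb[0] - np.pi / 2])

    t = np.linspace(0.0, np.pi / 3, n_mesh)
    y0 = np.vstack([1.5 * t, np.full_like(t, 1.5)])    # linear initial guess x = (3/2) t
    return solve_bvp(rhs, bc, t, y0)                   # result.sol(t) evaluates x and x'

sol = solve_eq6(0.676)
print(sol.status, sol.sol(np.pi / 3)[0])               # status 0 and a value close to pi/2
```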


We divide the proof of the theorem into several auxiliary statements.

Lemma 1. If, for some p ≥ 0, an increasing function x̂(t) satisfies Eq. (6) and the boundary conditions x̂(0) = 0 and x̂(π/3) = π/2, then it provides the absolute minimum in the variational problem

$$
\begin{cases}
F(x) = -\displaystyle\int_0^{\pi/3} t\sin x(t)\,dt \to \min, \\[2mm]
x \in W_2^1\Big[0, \dfrac{\pi}{3}\Big], \qquad x(0) = 0, \qquad x\Big(\dfrac{\pi}{3}\Big) = \dfrac{\pi}{2}, \\[2mm]
G(x) = \displaystyle\int_0^{\pi/3}(x'(t))^2\,dt = a,
\end{cases}
\tag{7}
$$

where a = G(x̂), and this minimum point is unique.

Remark 1. For given values of the function x(t) at the ends of the interval, the functional $\int_a^b (x'(t))^2\,dt$ attains its minimum on a linear function [7, p. 93]. Thus, the functional G(x) attains its minimum at x(t) = (3/2)t. Hence G(x) ≥ 3π/4, and the equality holds only for a linear function.

Proof of Lemma 1. If p = 0, then x̂(t) = (3/2)t, a = 3π/4, and the admissible set in problem (7) consists in this case of the single point x̂ (Remark 1). Therefore, the statement of the lemma is trivial in this case, and, in what follows, we assume that p > 0.

Replacing the last relation in problem (7) by the corresponding inequality G(x) ≤ a, we obtain a new variational problem (7′). We shall show that x̂ is the unique point of absolute minimum in problem (7′), which implies the statement of the lemma. First, we note that it suffices to consider only functions x such that x(t) ∈ [0, π/2] for all t ∈ [0, π/3]. Indeed, if x(t₀) ∉ [0, π/2] for some t₀, then there exists a function x̃ satisfying all the conditions of problem (7′), the condition 0 ≤ x̃(t) ≤ π/2 for t ∈ [0, π/3], and the condition F(x̃) < F(x), namely,

$$
\tilde x(t) =
\begin{cases}
\dfrac{\pi}{2}, & x(t) \le -\dfrac{3\pi}{2} \ \text{ or } \ x(t) \ge \dfrac{\pi}{2}, \\[2mm]
x(t), & 0 \le x(t) \le \dfrac{\pi}{2}, \\[2mm]
0, & -\pi \le x(t) \le 0, \\[2mm]
-x(t) - \pi, & -\dfrac{3\pi}{2} \le x(t) \le -\pi.
\end{cases}
$$

On each of the distinguished sets, we have G(x̃) ≤ G(x) and sin x̃(t) ≥ sin x(t). Therefore, F(x̃) ≤ F(x), and this inequality is strict whenever x̃ ≠ x. Thus, it suffices to minimize the functional F on the convex set

$$
M_a = \Big\{x \in W_2^1\Big[0, \frac{\pi}{3}\Big] :\ G(x) \le a,\ \ x(0) = 0,\ \ x\Big(\frac{\pi}{3}\Big) = \frac{\pi}{2},\ \ 0 \le x(t) \le \frac{\pi}{2}\Big\}.
$$

On this set, F is strictly convex, i.e., F((x + y)/2) < (F(x) + F(y))/2 for all x and y such that x ≠ y, because the function f(x) = −sin x is strictly convex on the interval [0, π/2]. Therefore, the minimum point of F (if it exists) is unique on the set M_a. Necessary conditions for the function x̂ to provide a local minimum in problem (7′) are the following: there exist multipliers λ₀, λ₁ ≥ 0, not simultaneously zero, for which the Lagrangian

$$
L(x', x, t) = -\lambda_0\, t\sin x + \lambda_1 (x')^2
$$

satisfies the Euler–Lagrange equation (d/dt)L_{x′} = L_x on the extremal x(t) = x̂(t), together with the complementary slackness condition λ₁(G(x̂) − a) = 0. Moreover, if λ₀ > 0, then these conditions are also sufficient for x̂ to provide the absolute minimum; this follows from the convexity of the functional F and of the set M_a (see, e.g., [8, pp. 51–52]). In our case, the complementary slackness condition is satisfied automatically, and the Euler–Lagrange equation has the form 2λ₁x″ = −λ₀t cos x. Setting λ₁ = 1/2 and λ₀ = p > 0 and using Eq. (6), we obtain the desired statement. Thus, the function x̂ provides the absolute minimum in problem (7′), and hence in problem (7).
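The Euler–Lagrange computation above is easy to verify symbolically; the following small sketch (purely illustrative, with symbol names of our choosing) derives the Euler–Lagrange equation of the Lagrangian L(x′, x, t) = −λ₀ t sin x + λ₁(x′)² with sympy and recovers 2λ₁x″ = −λ₀ t cos x.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, lam0, lam1 = sp.symbols("t lambda_0 lambda_1", positive=True)
x = sp.Function("x")

# Lagrangian of problem (7'): L = -lambda_0 * t * sin(x) + lambda_1 * (x')^2
L = -lam0 * t * sp.sin(x(t)) + lam1 * x(t).diff(t) ** 2

# expect -lambda_0*t*cos(x(t)) - 2*lambda_1*x''(t) = 0,
# i.e. 2*lambda_1*x'' = -lambda_0*t*cos(x): this is Eq. (6) for lambda_1 = 1/2, lambda_0 = p
eq, = euler_equations(L, [x(t)], [t])
print(eq)
```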

Lemma 2. For any p ≥ 0, Eq. (6) has a unique solution x(t) such that x(0) = 0 and x(π/3) = π/2. This solution is an analytic, increasing, concave function continuously depending on the parameter p. Moreover, G(x) → 3π/4 as p → +0, and G(x) → +∞ as p → +∞. For any a ≥ 3π/4, there exists a unique value of the parameter p ≥ 0 for which the corresponding solution of (6) satisfies the relation G(x) = a.

Proof. We consider the half-strip

$$
\Pi = \Big\{(t, y) \in \mathbb R^2 \ \Big|\ t \ge 0,\ 0 \le y \le \frac{\pi}{2}\Big\}.
$$

For an arbitrary b > 0, we let y_b(t) denote the solution of the equation y″(t) = −t cos y(t) with the initial conditions y(0) = 0 and y′(0) = b; this solution issues from the point (0, 0) and is continued up to the intersection with the boundary of Π. The function y_b(t) is uniquely continued in Π either to infinity, or to the intersection with the straight line y = 0, or to the intersection with the straight line y = π/2 at a point t = t_b. In the last case, contracting the argument, we obtain a solution of Eq. (6): the function x(t) = y_b(3t_b t/π) is the solution of (6) for p = p(b) = (3t_b/π)³ with x(0) = 0 and x(π/3) = π/2.

If b₁ > b₂, then y_{b₁}(t) > y_{b₂}(t) for every t > 0 for which both functions are defined. Otherwise, there would exist a u > 0 such that y_{b₁}(u) = y_{b₂}(u) and y_{b₁}(t) > y_{b₂}(t) for all t ∈ (0, u); but then it would follow from the differential equation that y_{b₁}″(t) > y_{b₂}″(t) for all t ∈ (0, u), which makes the equality y_{b₁}(u) = y_{b₂}(u) impossible. Thus, t_b decreases as b increases; therefore, the function p(b) = (3t_b/π)³ also decreases.

Let us show that p(b) → 0 as b → ∞. To this end, we note that, since cos y_b < 1 in the interior of the half-strip Π, the function y_b(t) majorizes the solution z_b(t) of the equation z″ = −t with the initial conditions z(0) = 0 and z′(0) = b. Thus, y_b(t) > −(1/6)t³ + bt, and hence t_b is less than the positive root of the equation

$$
-\frac{1}{6}t^3 + bt = \frac{\pi}{2},
$$

which, as is easy to verify, tends to zero as b → ∞. Hence t_b → 0 and p(b) → 0 as b → ∞. We shall also show that the function x(t) = y_b(3t_b t/π) tends to the linear function (3/2)t uniformly on the interval [0, π/3] as b → ∞.

We let l denote the ray {(t, π/2), t ≥ 0}, and we let r denote the greatest lower bound of the values b ≥ 0 for which the graph of the solution y_b intersects the ray l. Since the solution depends monotonically on the parameter b, the graph of y_b intersects the ray l for every b > r and does not intersect it for any b < r. Let us consider the boundary case b = r.
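The construction of y_b, t_b, and p(b) described above is easy to mimic numerically; the sketch below (an illustration under our own choices of integration horizon and step control, not the authors' code) shoots the auxiliary equation y″ = −t cos y from slope b, detects the first crossing of the line y = π/2, and returns t_b and p(b) = (3t_b/π)³.

```python
import numpy as np
from scipy.integrate import solve_ivp

def shoot(b, t_max=50.0):
    """Integrate y'' = -t*cos(y), y(0) = 0, y'(0) = b; return (t_b, p(b)) if the graph
    reaches the line y = pi/2 before t_max, and None otherwise."""
    hit = lambda t, y: y[0] - np.pi / 2
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(lambda t, y: [y[1], -t * np.cos(y[0])],
                    (0.0, t_max), [0.0, b], events=hit, max_step=0.01)
    if sol.t_events[0].size == 0:
        return None
    t_b = sol.t_events[0][0]
    return t_b, (3 * t_b / np.pi) ** 3

# slopes below the threshold r from the proof never reach y = pi/2 (None is returned);
# above it, the crossing time t_b, and hence p(b), decreases as b grows
for b in (1.0, 2.0, 3.0, 5.0):
    print(b, shoot(b))
```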

Case 1. Suppose that the graph of the function y_r intersects the ray l at a point t = s. Since y_r is concave, we have y_r′(s) ≥ 0. If y_r′(s) = 0, then two distinct solutions of the equation y″(t) = −t cos y with the initial conditions y(s) = π/2 and y′(s) = 0 pass through the point t = s (the first is y ≡ π/2, the second is y = y_r), which is impossible. But if y_r′(s) > 0, then the solution y_r can be continued through the point s, and there exists an ε > 0 for which y_r(s + ε) > π/2. Since the solution depends continuously on the initial data, for a sufficiently small δ > 0 we still have y_{r−δ}(s + ε) > π/2. Hence the solution y_{r−δ} also intersects the ray l, which contradicts the choice of r. Thus, y_r does not intersect l. Since this function is concave, the following two cases are possible.

Case 2. The function y_r attains its maximum at a point s ≥ 0. In this case, y_r(s) < π/2, and hence y_r′(s) = 0 and y_r′(s + ε) < 0 for ε > 0. Then we choose a sufficiently small δ > 0 for which y_{r+δ}′(s + ε) < 0 and y_{r+δ}(t) < π/2 for all t ∈ [0, s + ε]. Since the solution y_{r+δ} is concave, it does not intersect the ray l, which contradicts the choice of r.

Case 3. The solution y_r can be continued to the entire ray [0, +∞), increases monotonically, and is bounded from above by the constant π/2. If, in addition, h = lim_{t→+∞} y_r(t) < π/2, then y_r″(t) ≤ −t cos h for all t ≥ 0. Hence y_r(t) ≤ rt − (cos h/6)t³, which contradicts the fact that the solution y_r increases on the ray [0, +∞). Thus, h = π/2. Then t_b → ∞ as b → r + 0. Indeed, since y_r(N) < π/2 for any N > 0, there exists an ε > 0 such that y_{r+ε} < π/2 on the interval [0, N], and hence t_{r+ε} > N. Therefore,

$$
p(b) = \Big(\frac{3t_b}{\pi}\Big)^3 \to +\infty \quad \text{as } b \to r + 0.
$$

Thus, Eq. (6) has a unique solution for any p ≥ 0, and this solution is concave and increasing. Since the solution depends continuously on the initial data, the quantity t_b and the functions y_b and x_b depend continuously on b. Therefore, the solution x_b depends continuously on t_b and hence on p, because t_b = πp^{1/3}/3. Thus, the solution x depends continuously on p. As was already shown, the solution tends uniformly to the function (3/2)t as p → 0. As p → +∞, the solution tends to the constant π/2 uniformly on each interval [ε, π/3], ε > 0. Indeed, for any t ∈ [ε, π/3], we have

$$
x_b(t) = y_b\Big(\frac{3t_b t}{\pi}\Big) > y_r\Big(\frac{3t_b t}{\pi}\Big),
$$

and since b → r + 0 and t_b → ∞ as p(b) → ∞, the quantity y_r(3t_b t/π) tends to y_r(+∞) = π/2.

Next, since x → (3/2)t as p → 0, we see that G(x) → 3π/4. On the other hand, as p → ∞, we see that x → π/2 and hence G(x) → ∞. Indeed, since the functional $\int_a^b (x'(t))^2\,dt$ attains its minimum on a linear function (Remark 1), we have

$$
G(x) > \int_0^{\varepsilon}(x'(t))^2\,dt \ \ge\ \Big(\frac{x(\varepsilon)}{\varepsilon}\Big)^2\varepsilon \ =\ \frac{x^2(\varepsilon)}{\varepsilon}
$$

for any ε > 0. Since x(ε) → π/2 as p → ∞, we have

$$
\lim_{p\to\infty} G(x) \ \ge\ \frac{\pi^2}{4\varepsilon}.
$$

Since ε is arbitrary, this limit is equal to +∞. Since G(x) depends continuously on the parameter p, the quantity G(x) ranges over all values from 3π/4 to +∞. By Lemma 1, each value of G(x) corresponds to a single value of p. Combining Lemmas 1 and 2, we obtain the following statement.

Proposition 1. For any a ≥ 3π/4, there exists a unique function x ∈ W₂¹[0, π/3] providing the absolute minimum in the variational problem (7). This function is the solution of Eq. (6) for a value of the parameter p = p(a); moreover, the function p(a) is continuous and increasing.



Proof of Theorem 1. For a fixed value a = G(x), the functional F(x) attains its minimum on the solution of Eq. (6) for the value of the parameter p = p(a), which depends continuously on a (Proposition 1). Hence the value of the functional

$$
I(x) = \frac{1}{4}\Big(\frac{14\pi}{3} + \frac{21}{\pi}F(x)\Big)G(x)
$$

also depends continuously on a. It follows from Lemma 2 that

$$
G(x) \to +\infty \quad\text{and}\quad F(x) \to F\Big(\frac{\pi}{2}\Big) = -\frac{\pi^2}{18} \qquad\text{as } a \to \infty.
$$

Therefore, I(x) → +∞ as a → ∞. Hence the minimal value of the functional I(x) is attained for some a and the corresponding p and x(t).

Remark 2. The same scheme permits solving the following two similar problems of finding optimal Meyer wavelets: to find the minimal value of Δψ for a given value Δψ̂ = A and, conversely, to find the minimal value of Δψ̂ for a given value Δψ = B. The solutions give the Meyer wavelets with the best possible localization in the time domain for a given localization in the frequency domain, and conversely. It follows from Proposition 1 that, for any admissible values of A and B, the optimal Meyer wavelet can be constructed by using the solutions of Eq. (6) with conditions (5) and the additional condition Δψ̂ = A (respectively, Δψ = B).

Fig. 1.

Fig. 2.


Fig. 3.

3. NUMERICAL CALCULATION OF THE PARAMETER p AND OF THE MINIMIZING FUNCTION x(t)

Theorem 1 and Proposition 1 suggest a method for the numerical calculation of the parameter p and of the function x for which the functional I(x) given by formula (4) takes its least possible value. For each p ≥ 0, the solution x(t) of the corresponding equation (6) with the boundary conditions x(0) = 0 and x(π/3) = π/2 is first substituted into the functional I(x), and then the minimal value of the functional is sought over all p ∈ [0, +∞).

The numerical solution was obtained by using the MathCAD 14 package and then verified by using the Mathematica 5.0 package; the computation was performed according to the following scheme. First, the solution of Eq. (6) was obtained for p = 20. Then Eq. (6) was solved numerically as a partial differential equation, with the parameter p treated as an additional argument of the unknown function x(t, p). Then the values of the functional I(x_p) were computed on the family of solutions x_p(t) = x(t, p) thus obtained. The graph of the dependence of the functional I on the parameter p is shown in Fig. 1 by the solid line, while the dotted line corresponds to the value of the uncertainty constant on the function x(t) = (3/2)t; this value is equal to 6.886. The least possible value of the functional I(x), to within 0.001, is equal to 6.874 and is attained for p = 0.676. Figures 2 and 3 present the graphs of the line x(t) = (3/2)t (the dotted line) and of the solution of Eq. (6) for p = 0.676 (the solid line); Fig. 3 is the part of Fig. 2 over the interval 0.4 ≤ t ≤ 0.6.

As an approximation of the desired function, we can propose the polynomial

$$
\tilde x(t) = 1.580\,t - 0.113\,t^3 + 0.042\,t^5 - 7.037\cdot 10^{-3}\,t^7 + 1.582\cdot 10^{-3}\,t^9,
$$

which is obtained by a recursive calculation of the coefficients of the Taylor expansion of the function x(t) at the point t₀ = 0. The value of the uncertainty constant for x̃ is equal to 6.875, and the maximal deviation from the desired function, max_{t∈[0,π/3]} |x̃(t) − x(t)|, does not exceed 1.091 · 10⁻³.
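As an illustration of this scheme (not a reproduction of the authors' MathCAD/Mathematica computation), one can scan a grid of values of p, solve the boundary value problem for Eq. (6) for each of them, and evaluate the functional (4); with the hypothetical helpers `solve_eq6` and `functional_I` sketched earlier, the loop takes a few lines, and its minimum should land near I ≈ 6.874 at p ≈ 0.676. The last two lines evaluate (4) on the polynomial approximation x̃ given above; the paper reports approximately 6.875 for it.

```python
import numpy as np

best = (np.inf, None)
for p in np.linspace(0.0, 2.0, 201):                 # grid scan of I(x_p) over p
    sol = solve_eq6(p)
    if sol.status != 0:
        continue
    x  = lambda t, s=sol: s.sol(t)[0]                # x_p(t)
    dx = lambda t, s=sol: s.sol(t)[1]                # x_p'(t)
    best = min(best, (functional_I(x, dx), p))
print(best)                                          # expected close to (6.874, 0.676)

x_tilde  = lambda t: 1.580*t - 0.113*t**3 + 0.042*t**5 - 7.037e-3*t**7 + 1.582e-3*t**9
dx_tilde = lambda t: 1.580 - 3*0.113*t**2 + 5*0.042*t**4 - 7*7.037e-3*t**6 + 9*1.582e-3*t**8
print(functional_I(x_tilde, dx_tilde))               # approximately 6.875 per the text
```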

Fig. 4.


Fig. 5.

4. CONCLUSIONS. OPTIMAL MEYER WAVELETS

The system of Meyer wavelets with the least possible uncertainty constant is constructed as follows. First, a solution x(t) of Eq. (6) is found for p = 0.676 with the boundary conditions x(0) = 0 and x(π/3) = π/2. Then the Meyer wavelet function ψ and its Fourier transform ψ̂ are calculated by formulas (1) and (2), where θ(ω) = (1/2)x(ω). The function ψ generates a system of Meyer wavelets with the least possible uncertainty constant J = 6.874. The graphs of the functions ψ and ψ̂ are shown in Figs. 4 and 5, respectively.

ACKNOWLEDGMENTS

The authors wish to express their gratitude to Professor I. Ya. Novikov for posing the problem. The authors also thank S. V. Konyagin, A. S. Kochurov, A. V. Rozhdestvenskii, and K. S. Ryutin for valuable comments and suggestions. The work of the second author was supported by the Russian Foundation for Basic Research (grant no. 08-01-00208), by the program "Leading Scientific Schools" (grant no. NSh-3233.2008.1), and by grant MD-2195.2008.1.

REFERENCES

1. Y. Meyer, "Principe d'incertitude, bases Hilbertiennes et algèbres d'opérateurs," in Séminaire Bourbaki, 1985/86, Astérisque (Soc. Math. France, Paris, 1987), Vols. 145–146, pp. 209–223.
2. I. M. Johnstone, G. Kerkyacharian, D. Picard, and M. Raimondo, "Wavelet deconvolution in a periodic setting," J. R. Stat. Soc. Ser. B Stat. Methodol. 66 (3), 547–573 (2004).
3. Ajith Kumarayapa and Ye Zhang, "More efficient ground truth ROI image coding technique: implementation and wavelet based application analysis," J. Zhejiang Univ. Sci. A 8 (6), 835–840 (2007).
4. G. B. Folland and A. Sitaram, "The uncertainty principle: a mathematical survey," J. Fourier Anal. Appl. 3 (3), 207–238 (1997).
5. I. Ya. Novikov, V. Yu. Protasov, and M. A. Skopina, Theory of Wavelets (Fizmatlit, Moscow, 2005) [in Russian].
6. E. A. Lebedeva, "Minimization of the uncertainty constant of the family of Meyer wavelets," Mat. Zametki 81 (4), 553–560 (2007) [Math. Notes 81 (3–4), 489–495 (2007)].
7. V. M. Alekseev, É. M. Galeev, and V. M. Tikhomirov, A Collection of Problems in Optimization (Nauka, Moscow, 1984) [in Russian].
8. V. M. Alekseev, V. M. Tikhomirov, and S. V. Fomin, Optimal Control (Nauka, Moscow, 1979) [in Russian].
