Instantaneous Frequency Estimation Based on Synchrosqueezing Wavelet Transform Qingtang Jiang and Bruce W. Suter



February 2016; 1st revision July 2016; 2nd revision February 2017

Abstract

Recently, the synchrosqueezing transform (SST) was developed as an alternative to the empirical mode decomposition scheme to separate a non-stationary signal with time-varying amplitudes and instantaneous frequencies (IFs) into a superposition of frequency components each having a well-defined IF. The continuous wavelet transform (CWT)-based SST sharpens the time-frequency representation of a non-stationary signal by reassigning the scale variable of the signal's CWT to the frequency variable via a reference IF function. Since the SST method estimates the IFs of all frequency components of a signal from one single reference IF function, its estimates may not be very accurate. In this paper we introduce the instantaneous frequency-embedded synchrosqueezing wavelet transform (IFE-SST). IFE-SST uses a rough estimate of the IF of a targeted component to produce an accurate IF estimate: the reference IF function of IFE-SST is associated with the targeted component. Our numerical experiments show that IFE-SST outperforms the CWT-based SST in IF estimation and in the separation of multicomponent signals.

Keywords: Instantaneous frequency, Empirical mode decomposition (EMD), Synchrosqueezing transform (SST), Signal separation

1 Introduction

Recently the study of modeling a non-stationary signal as a superposition of Fourier-like oscillatory modes has been an active research area. To model a non-stationary signal x(t) as
\[
x(t)=A_0(t)+\sum_{k=1}^{K}A_k(t)\cos\big(2\pi\phi_k(t)\big) \tag{1}
\]

∗ Qingtang Jiang is with the Department of Mathematics and Computer Science, University of Missouri-St. Louis, MO 63141, USA, e-mail: [email protected]; Bruce W. Suter is with The Air Force Research Laboratory, AFRL/RITB, Rome, NY 13441, USA, e-mail: [email protected].


is important for extracting information, such as the underlying dynamics, hidden in x(t). The representation of x(t) in (1), with A_k(t), φ_k′(t) > 0 and A_k(t), φ_k′(t) varying more slowly than φ_k(t), is called an adaptive harmonic model (AHM) representation of x(t), where the A_k(t) are called the instantaneous amplitudes (IAs) and the φ_k′(t) the instantaneous frequencies (IFs), which can be used to describe the underlying dynamics. AHM representations of non-stationary signals have been used in many applications, including geophysics (seismic waves), atmospheric and climate studies, oceanographic studies, medical data analysis, speech recognition, and non-stationary dynamics in financial systems; see for example [1]-[4].

The empirical mode decomposition (EMD) introduced by Huang et al. is a popular method to decompose a non-stationary signal as a superposition of intrinsic mode functions (IMFs) [5]. It is an efficient data-driven approach in which no function basis is used. It has been widely used in many applications; see [1] and the references therein. Many time-frequency methods have been developed to study the time-varying spectral properties of a given signal x(t) [6].

Recently the synchrosqueezing transform (SST), also called the synchrosqueezed wavelet transform, was developed by Daubechies, Lu and Wu [7], who provided mathematical theorems guaranteeing the recovery of oscillatory modes from the SST of x(t). SST, which was first introduced by Daubechies and Maes in 1996 for speech signal separation [8], is based on the continuous wavelet transform (CWT), which has scale and time variables. SST re-assigns the scale variable to the frequency variable to sharpen the time-frequency representation of a signal, in the spirit of the time and frequency re-assignment method studied by Auger and Flandrin in 1995 [9] (see also [10] for time-frequency and time-scale representations of signals by the re-assignment method).
In addition, the original signal can be recovered from its SST. SST provides an alternative to the EMD method and its variants such as the ensemble EMD (EEMD) scheme [11], and it overcomes some limitations of the EMD and EEMD schemes, such as mode-mixing and the possible negativeness of the IFs which arises in the EMD and EEMD schemes. See [12]-[14] for comparisons between EMD and SST. The generalized SST was introduced in [15] for the time-frequency representation of signals with significantly oscillating IFs. The stability of SST was studied in [16]. The SST with vanishing-moment wavelets with stacked knots was introduced in [17] to process signals on bounded or half-infinite time intervals for real-time signal processing. [18] introduced a hybrid EMD-SST computational scheme by applying the modified SST to the IMFs of the EMD. [19] provided the AHM representation of oscillatory signals composed of multiple components with fast-varying instantaneous frequencies by optimization. [20] proposed a new method to determine the time-frequency content of time-dependent signals consisting of multiple oscillatory components. The SST introduced in [8] and studied in the above papers is also referred to as the wavelet-based SST. The short-time Fourier transform (STFT)-based SST was studied in [21]-[24]. Also, the S-transform-based SST was introduced in [25] and has been applied to seismic spectral decomposition. Other methods to decompose a non-stationary signal as a superposition of adaptive IMFs/subbands include optimization methods [26]-[31], the empirical wavelet transform, which produces an adaptive wavelet frame system for the decomposition of a given signal [32], an alternative algorithm to EMD with iterative filtering replacing the sifting process in EMD [33, 34], and an STFT-based signal separation method in [35].

In this paper we introduce the instantaneous frequency-embedded synchrosqueezing wavelet transform. Our work is motivated by [23] on the demodulation transform with STFT. Our approach gives IFs more directly.

This paper is organized as follows. In Section 2, we review the SST. In Section 3, we introduce the instantaneous frequency-embedded wavelet synchrosqueezing transform (IFE-SST) and study its properties. In Section 4, we consider implementation issues. In Section 5, we use IFE-SST for the separation of multicomponent signals. Our experimental results show that IFE-SST outperforms the wavelet-based SST in the estimation of IFs and in signal separation. The conclusion is given in Section 6.

2 Synchrosqueezed wavelet transform

The synchrosqueezed wavelet transform (SST) separates a Fourier-like oscillatory mode A_k(t) cos(2πφ_k(t)) from a superposition in (1), with A_k(t) and φ_k′(t) positive and slowly varying compared to φ_k(t), and satisfying certain conditions. The SST approach in [8, 7] is based on the continuous wavelet transform (CWT).

2.1 Continuous wavelet transform (CWT)

A function ψ(t) ∈ L²(R) is called a continuous (or an admissible) wavelet if it satisfies (see e.g. [36, 37]) the admissibility condition:
\[
0<C_\psi=\int_{-\infty}^{\infty}\frac{|\hat{\psi}(\xi)|^2}{|\xi|}\,d\xi<\infty.
\]
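Since the admissibility condition only involves ψ̂, it can be checked numerically. The sketch below (an illustrative check, not part of the paper's method) evaluates C_ψ for an assumed analytic Gaussian wavelet ψ̂(ξ) = e^{−2π²(ξ−1)²} restricted to ξ > 0, representative of the wavelets used later in the paper:

```python
import numpy as np

# Illustrative numerical check of the admissibility condition: evaluate
#   C_psi = int |psi_hat(xi)|^2 / |xi| d(xi)
# for an assumed analytic Gaussian wavelet psi_hat(xi) = exp(-2*pi^2*(xi-1)^2),
# restricted to xi > 0 (a hypothetical but representative choice).
xi = np.linspace(1e-6, 10.0, 200_000)       # positive frequencies only
dxi = xi[1] - xi[0]
psi_hat = np.exp(-2.0 * np.pi**2 * (xi - 1.0)**2)
C_psi = np.sum(psi_hat**2 / xi) * dxi       # rectangle-rule quadrature

print(f"C_psi = {C_psi:.6f}")               # finite and positive => admissible
```

Since the computed C_ψ is finite and nonzero, this ψ̂ qualifies as a continuous wavelet in the above sense.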

In this paper the Fourier transform of a function x(t) ∈ L¹(R) is defined by
\[
\hat{x}(\xi)=\int_{-\infty}^{\infty}x(t)e^{-i2\pi\xi t}\,dt,
\]
which can be extended to functions in L²(R). Denote ψ_{a,b}(t) = \frac{1}{a}ψ\big(\frac{t-b}{a}\big). The continuous wavelet transform (CWT) of a signal x(t) ∈ L²(R) with a continuous wavelet ψ is defined by
\[
W_x(a,b)=\langle x,\psi_{a,b}\rangle=\int_{-\infty}^{\infty}x(t)\,\frac{1}{a}\overline{\psi\Big(\frac{t-b}{a}\Big)}\,dt.
\]

The variables a and b are called the scale and time variables respectively. The signal x(t) can be recovered by the inverse wavelet transform (see e.g. [36, 37, 38])
\[
x(t)=\frac{1}{C_\psi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}W_x(a,b)\,\psi_{a,b}(t)\,db\,\frac{da}{|a|}.
\]
The Fourier transform and the CWT given above can be extended to x(t) in the class of tempered distributions, denoted by S′, which includes the Dirac delta function, sinusoidal functions and polynomials on R. For x(t) ∈ S′, x̂ is defined as the Fourier transform of x(t) if x̂ satisfies
\[
\int_{-\infty}^{\infty}\hat{x}(\xi)\,y(\xi)\,d\xi=\int_{-\infty}^{\infty}x(\xi)\,\hat{y}(\xi)\,d\xi,\quad\forall y(t)\in\mathcal{S},
\]

where S is the Schwartz space, the space of test functions all of whose derivatives of any order exist and are rapidly decreasing. Refer to [39] for mathematically rigorous definitions of S and S′. For x(t) = e^{i2πct} with frequency c, its Fourier transform is δ(ξ − c) (see [40]). If the continuous wavelet ψ is in S, then the CWT W_x(a,b) of x ∈ S′ with ψ is well defined. For x(t) = A(t)e^{i2πφ(t)}, its CWT W_x(a,b) is well defined as long as ψ has certain decay as |t| → ∞ to assure A(t)ψ(t) ∈ L¹(R). In addition, if ψ̂(ξ) has enough decay as |ξ| → ∞, we have
\[
W_x(a,b)=\langle\hat{x},\hat{\psi}_{a,b}\rangle=\int_{-\infty}^{\infty}\hat{x}(\xi)\,\overline{\hat{\psi}(a\xi)}\,e^{i2\pi b\xi}\,d\xi.
\]

A function x(t) is called an analytic signal if it satisfies x̂(ξ) = 0 for ξ < 0. In this paper, we consider analytic continuous wavelets. In addition, we assume ψ also satisfies
\[
0\ne c_\psi=\int_0^{\infty}\overline{\hat{\psi}(\xi)}\,\frac{d\xi}{\xi}<\infty. \tag{2}
\]
An analytic signal x(t) ∈ L²(R) can be recovered by another inverse wavelet transform which does not involve the time variable b (refer to [8, 7]):
\[
x(b)=\frac{1}{c_\psi}\int_0^{\infty}W_x(a,b)\,\frac{da}{a},
\]
where c_ψ is defined by (2). Furthermore, a real signal x(t) ∈ L²(R) can be recovered by the following formula (see [7]):
\[
x(b)=\mathrm{Re}\Big(\frac{2}{c_\psi}\int_0^{\infty}W_x(a,b)\,\frac{da}{a}\Big).
\]
Again, the above two formulas hold for x(t) = A(t)e^{i2πφ(t)} as long as ψ has certain decay as |t| → ∞. The "bump wavelet" defined by
\[
\hat{\psi}(\xi)=e^{1-\frac{1}{1-\sigma^2(\xi-\mu)^2}}\,\chi_{(\mu-\frac{1}{\sigma},\,\mu+\frac{1}{\sigma})}(\xi), \tag{3}
\]

and the (scaled) Morlet's wavelet defined by
\[
\hat{\psi}(\xi)=e^{-2\sigma^2\pi^2(\xi-\mu)^2}-e^{-2\sigma^2\pi^2(\xi^2+\mu^2)}, \tag{4}
\]

where σ > 0, µ > 0, are commonly used continuous wavelets. The parameter σ in (3) and (4) controls the shape of ψ and affects the CWT of a signal. For a multicomponent signal x(t) = Σ_{k=1}^K A_k cos(2πφ_k t) with positive constants A_k, φ_k, 0 < φ_k < φ_{k+1}, there is no interference among the components A_k cos(2πφ_k t) in |W_x(a,b)| with the bump wavelet as long as the parameter σ is large enough. For a superposition (1) of AHMs with φ_k satisfying the separation condition, a larger σ does not necessarily provide a better separation of AHMs. See [13] for a detailed discussion of the effect of σ on the CWT of a signal with the bump wavelet. Here we illustrate that for Morlet's wavelet, a larger σ also does not necessarily result in a sharper representation of the CWT in the time-scale plane, just as illustrated for the "bump wavelet" in [13]. Let
\[
y(t)=e^{i2\pi(9t+5t^2)}+e^{i2\pi(13t+10t^2)},\quad 0\le t\le 1, \tag{5}
\]
which is sampled uniformly with 128 sample points. The CWT of y(t) with Morlet's wavelet with σ = 1, µ = 1 and with σ = √5, µ = 1 is shown in Fig. 1. Observe that the wavelet with a larger σ results in a more blurred representation of y(t) in the time-scale plane.

Figure 1: |W_y(a,b)|: CWT of y(t) = e^{i2π(9t+5t²)} + e^{i2π(13t+10t²)}, 0 ≤ t ≤ 1, using Morlet's wavelet ψ with σ = 1, µ = 1 (left picture) and with σ = √5, µ = 1 (middle picture); CWT of s(t) = cos(2π(10t)), 0 ≤ t ≤ 1, with wavelet ψ given by (6) (right picture)

The "bump wavelet" is bandlimited, and hence it has better frequency localization than Morlet's wavelet. On the other hand, Morlet's wavelet in the time domain is
\[
\psi(t)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{t^2}{2\sigma^2}}\big(e^{i2\pi\mu t}-e^{-2\pi^2\sigma^2\mu^2}\big).
\]
Thus Morlet's wavelet enjoys nice localization in both the time and frequency domains. The "bump wavelet" is analytic, but Morlet's wavelet is not. Observe that the second term in (4) is very small for σ ≥ 1 and µ ≥ 1; e.g., with µ = 1, σ = 1,
\[
e^{-2\sigma^2\pi^2(\xi^2+\mu^2)}\le\exp(-2\pi^2)=2.6753\times 10^{-9}.
\]
Thus the second term in (4) can be dropped in practice. In addition, the first term of ψ̂(ξ) in (4) is also very small for any ξ ≤ 0 if σ ≥ 1, µ ≥ 1. Thus in practice one may use ψ defined by
\[
\hat{\psi}(\xi)=\begin{cases}e^{-2\pi^2(\xi-1)^2},&\text{for }\xi>0,\\[2pt]0,&\text{for }\xi\le 0.\end{cases} \tag{6}
\]

ψ defined by (6) is one of the three wavelets used in [16]. In this paper, unless specifically stated otherwise, we will use this ψ and also call it Morlet's wavelet.

Observe that ψ̂(ξ) of Morlet's wavelet ψ given in (6) concentrates at ξ = 1. If an input signal x(t) concentrates around ξ = c in the frequency domain, then its CWT concentrates around the line a = 1/c in the scale-time plane. For example, let us consider x(t) = A cos(2πct), where c > 0 is a constant. Then x̂(ξ) = \frac{A}{2}(δ(ξ − c) + δ(ξ + c)). Thus for a > 0,
\[
W_x(a,b)=\int_{-\infty}^{\infty}\hat{x}(\xi)\,\overline{\hat{\psi}(a\xi)}\,e^{i2\pi b\xi}\,d\xi=\frac{A}{2}\,\overline{\hat{\psi}(ac)}\,e^{i2\pi bc}.
\]
Therefore, W_x(a,b) concentrates around ac = 1, i.e. a = 1/c. See Fig. 1 for |W_s(a,b)| with s(t) = cos(2π(10t)). Observe that |W_s(a,b)| does concentrate around a = 0.1, the reciprocal of the IF 10 of s(t). However, |W_s(a,b)| spreads out around a = 0.1, and what we see in the scale-time plane is a zone, not a sharp line, around a = 0.1. This property causes the problem that we cannot separate the IFs from the CWTs when two signals have close IFs, though for a superposition of signals such as A cos(2πct) or Ae^{i2πct} there is no such issue as long as we choose the parameter σ of the wavelet large enough. For example, as demonstrated in Fig. 1, the CWTs of the two components of y(t) given in (5) are mixed. SST re-assigns the scale variable a to the frequency variable so that it sharpens the time-frequency representation of a signal.
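This concentration property is easy to check numerically. The following sketch (grid choices are illustrative; `psi_hat` implements (6)) computes the CWT of s(t) = cos(2π·10t) through the frequency-domain formula for W_x(a,b) and locates the ridge scale:

```python
import numpy as np

# Sketch: CWT of s(t) = cos(2*pi*10*t) via the frequency-domain formula,
# with psi_hat implementing the analytic Morlet-type wavelet (6).
# Grid choices (N, scale range) are illustrative.
def psi_hat(xi):
    return np.where(xi > 0, np.exp(-2.0 * np.pi**2 * (xi - 1.0)**2), 0.0)

N, dt = 128, 1.0 / 128
t = np.arange(N) * dt
s = np.cos(2 * np.pi * 10 * t)

X = np.fft.fft(s)
eta = np.fft.fftfreq(N, d=dt)               # frequency samples eta_k
scales = np.linspace(0.02, 0.5, 200)        # scale grid a_j

# Row j holds W_s(a_j, b_n) for all time samples b_n
W = np.array([np.fft.ifft(X * np.conj(psi_hat(a * eta))) for a in scales])

ridge = scales[np.argmax(np.abs(W[:, N // 2]))]  # maximizing scale at mid-time
print(ridge)                                      # near 0.1 = 1/IF
```

The ridge scale comes out near a = 0.1, the reciprocal of the IF of s(t), but as noted above the modulus forms a zone rather than a sharp line around that scale.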

2.2 Synchrosqueezed wavelet transform (SST)

The idea of SST is to re-assign the scale variable a to the frequency variable. As in [7], we first look at the CWT of x(t) = A cos(2πct), where c is a positive constant. As shown above, the CWT of x(t) is W_x(a,b) = \frac{A}{2}\overline{\hat{\psi}(ac)}e^{i2\pi bc}. Observe that the IF of x(t), which is c, can be obtained by
\[
\frac{\frac{\partial}{\partial b}W_x(a,b)}{2\pi i\,W_x(a,b)}=c.
\]
Thus, for a general x(t), at (a,b) for which W_x(a,b) ≠ 0, a good candidate for the instantaneous frequency (IF) of x(t) is \frac{\partial}{\partial b}W_x(a,b)\big/\big(2\pi i\,W_x(a,b)\big). In the following, denote
\[
\omega_x(a,b)=\frac{\frac{\partial}{\partial b}W_x(a,b)}{2\pi i\,W_x(a,b)},\quad\text{for }W_x(a,b)\ne 0.
\]

ω_x(a,b) is called the "reference IF function" in [18] and the "phase transform" in [16]. SST transforms the CWT W_x(a,b) of x(t) to a quantity, denoted by T_x(ξ,b), on the time-frequency plane:
\[
T_x(\xi,b)=\int_{\{a:\,W_x(a,b)\ne 0\}}W_x(a,b)\,\delta\big(\omega_x(a,b)-\xi\big)\,\frac{da}{a}, \tag{7}
\]
where ξ is the frequency variable.

Figure 2: Assignment of a to ξ in SST for x(t) = A cos(2π(ct))

Fig. 2 illustrates the definition of SST for the special case x(t) = A cos(2π(ct)). See Fig. 3 for the SSTs of s(t) = cos(2π(10t)), 0 ≤ t ≤ 1, and y(t) given in (5).

Figure 3: Left: SST of s(t) = cos(2π(10t)); Right: SST of y(t) = e^{i2π(9t+5t²)} + e^{i2π(13t+10t²)}

The input signal x(t) can be recovered from its SST as shown in the following theorem.

Theorem 1. ([7]) Let c_ψ be the constant defined by (2). Then for a real-valued x(t),
\[
x(b)=\mathrm{Re}\Big(\frac{2}{c_\psi}\int_0^{\infty}T_x(\xi,b)\,d\xi\Big); \tag{8}
\]
for an analytic x(t),
\[
x(b)=\frac{1}{c_\psi}\int_0^{\infty}T_x(\xi,b)\,d\xi. \tag{9}
\]

In practice, a, b, ξ are discretized. Suppose a_j, b_n, ξ_k, j, n, k = 1, ⋯, are the sampling points of a, b, ξ respectively. Here we assume ξ_{k+1} − ξ_k = Δξ for all k. Then the SST of x(t) is given by
\[
T_x(\xi_k,b_n)=\sum_{j:\,|\omega_x(a_j,b_n)-\xi_k|\le\Delta\xi/2,\ |W_x(a_j,b_n)|\ge\gamma}W_x(a_j,b_n)\,a_j^{-1}(\Delta a)_j,
\]
where (Δa)_j = a_{j+1} − a_j, and γ > 0 is a threshold for the condition |W_x(a,b)| > 0. The recovery formula (8) for a real signal x(t) leads to
\[
x(b_n)=\mathrm{Re}\Big(\frac{2}{c_\psi}\sum_k T_x(\xi_k,b_n)\Big),\quad n=1,2,\cdots,
\]
while for an analytic x(t), we have, from (9),
\[
x(b_n)=\frac{1}{c_\psi}\sum_k T_x(\xi_k,b_n),\quad n=1,2,\cdots.
\]
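A minimal re-implementation of this discretization (not the authors' code; the wavelet is (6), and the scale grid, bins and threshold γ are illustrative) is sketched below for a pure tone, whose SST energy should pile up in the frequency bin of its IF:

```python
import numpy as np

# Minimal sketch of the discretized SST above: CWT and its b-derivative via
# the frequency-domain formula, phase transform omega_x, then reassignment.
def psi_hat(xi):
    return np.where(xi > 0, np.exp(-2.0 * np.pi**2 * (xi - 1.0)**2), 0.0)

N, dt = 256, 1.0 / 256
t = np.arange(N) * dt
x = np.cos(2 * np.pi * 20 * t)              # pure tone, IF = 20 Hz

X = np.fft.fft(x)
eta = np.fft.fftfreq(N, d=dt)
scales = 1.0 / np.linspace(5.0, 60.0, 120)  # a_j spanning IFs 5..60 Hz

W = np.array([np.fft.ifft(X * np.conj(psi_hat(a * eta))) for a in scales])
dW = np.array([np.fft.ifft(X * np.conj(psi_hat(a * eta)) * 2j * np.pi * eta)
               for a in scales])

gamma = 1e-8                                # threshold for |W_x(a,b)| > 0
mask = np.abs(W) > gamma
omega = np.full(W.shape, np.nan)            # phase transform omega_x(a_j, b_n)
omega[mask] = np.real(dW[mask] / (2j * np.pi * W[mask]))

# Reassign: accumulate W into the frequency bin nearest omega (1 Hz bins)
n_bins = N // 2
T = np.zeros((n_bins, N), dtype=complex)
da = np.abs(np.diff(scales, append=scales[-1]))
for j in range(len(scales)):
    for n in range(N):
        if mask[j, n]:
            k = int(np.round(omega[j, n]))
            if 0 <= k < n_bins:
                T[k, n] += W[j, n] * da[j] / scales[j]

peak_bin = int(np.argmax(np.abs(T[:, N // 2])))
print(peak_bin)                             # SST energy concentrates at 20
```

For this tone the phase transform equals the IF wherever |W_x| is above the threshold, so the reassigned energy lands in a single frequency bin rather than in a whole zone of scales.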

2.3 Generalized SST

Figure 4: Left: z(t); Right: IF of z(t)

As shown in the above examples, SST works well for some signals such as those of constant frequency. However, SST does not work well for signals whose frequencies change significantly with time. For example, let z(t) be the signal given by
\[
z(t)=\begin{cases}0.8\cos 16\pi t,&0\le t<2.5,\\[2pt]\cos\big(2\pi(15t+\cos(2\pi t))\big),&2.5\le t\le 4.\end{cases} \tag{10}
\]
z(t) is a variant of a signal considered in [15]. Here t is uniformly sampled with step 4/511. z(t) and its IF are shown in Fig. 4. The SST of z(t) is presented in Fig. 5. Observe that the IF of z(t) is blurred for t > 2.5.

The generalized SST was introduced by Li and Liang in [15]. The idea is to transform a signal x(t) = A(t) cos(2πφ(t)) or x(t) = A(t) exp(2πiφ(t)) to a signal with a constant frequency by
\[
x(t)\;\longrightarrow\;x(t)\exp(-2\pi i\phi_0(t)),
\]
where φ₀(t) is a function such that φ₀′(t) = φ′(t) − ξ₀, with ξ₀ being the target frequency. If we choose φ₀(t) = φ(t) − ξ₀t, then x(t) exp(−2πiφ₀(t)) is a signal with constant frequency ξ₀. The problem with this approach is that in practice φ₀(t) is unknown, so one needs to estimate it.
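The demodulation step can be verified numerically. In the sketch below (with an illustrative chirp and target frequency), choosing φ₀(t) = φ(t) − ξ₀t turns the chirp into a pure tone at ξ₀:

```python
import numpy as np

# Numerical check of the demodulation idea: for x(t) = exp(2*pi*i*phi(t))
# with phi(t) = 10*t + 10*t^2 (an illustrative chirp), choosing
# phi0(t) = phi(t) - xi0*t makes x(t)*exp(-2*pi*i*phi0(t)) a pure tone at
# the target frequency xi0.
N, dt = 512, 1.0 / 512
t = np.arange(N) * dt
phi = 10 * t + 10 * t**2
x = np.exp(2j * np.pi * phi)

xi0 = 40.0                                  # target frequency (illustrative)
phi0 = phi - xi0 * t
demod = x * np.exp(-2j * np.pi * phi0)      # equals exp(2*pi*i*xi0*t) exactly

freqs = np.fft.fftfreq(N, d=dt)
peak = freqs[np.argmax(np.abs(np.fft.fft(demod)))]
print(peak)                                 # 40.0
```

With the exact φ the demodulated signal has all its spectral energy in the single bin at ξ₀; in practice φ₀ must be estimated, which is exactly the difficulty noted above.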

Figure 5: SST of z(t) given in (10)

3 Instantaneous frequency embedded SST

Motivated by the work of S. Wang et al. [23] on the demodulation transform with STFT, we define the instantaneous frequency-embedded CWT (IFE-CWT) as follows. Let ϕ(t) be a differentiable function with ϕ′(t) > 0. For x(t) ∈ L²(R), we define
\[
x_{\varphi,b,\xi_0}(t):=x(t)\,e^{-i2\pi\left(\varphi(t)-\varphi(b)-\varphi'(b)(t-b)-\xi_0 t\right)}, \tag{11}
\]
where ξ₀ ≥ 0. Observe that if x(t) = A(t) exp(i2πφ(t)) for some φ(t) with φ′(t) > 0, then x_{ϕ,b,ξ₀}(t) with ϕ(t) = φ(t) has IF φ′(b) + ξ₀. Also note that in the definition of the generalized SST in [15], the frequency demodulation of x(t) is x(t) exp(−i2π(ϕ(t) − ξ₀t)).

Definition 1. Suppose ϕ(t) is a differentiable function with ϕ′(t) > 0. The IFE-CWT of x(t) ∈ L²(R) with a continuous wavelet ψ is defined by
\[
W_x^{\rm IFE}(a,b)=\langle x_{\varphi,b,\xi_0},\psi_{a,b}\rangle=\int_{-\infty}^{\infty}x(t)\,e^{-i2\pi\left(\varphi(t)-\varphi(b)-\varphi'(b)(t-b)-\xi_0 t\right)}\,\frac{1}{a}\overline{\psi\Big(\frac{t-b}{a}\Big)}\,dt. \tag{12}
\]

In the above definition, we assume x(t) ∈ L²(R). Actually, the definition of the IFE-CWT can be extended to slowly growing functions x(t). Next, we give the following property of the IFE-CWT.

Proposition 1. Let W_x^{IFE}(a,b) be the IFE-CWT of x(t) defined by (12). Then
\[
W_x^{\rm IFE}(a,b)=e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}\hat{\tilde{x}}(\xi)\,\overline{\hat{\psi}\big(a\xi+a\varphi'(b)\big)}\,e^{i2\pi b\xi}\,d\xi, \tag{13}
\]
where
\[
\tilde{x}(t)=x(t)\,e^{-i2\pi\varphi(t)+i2\pi\xi_0 t}. \tag{14}
\]

Proof. Let ψ₁(t) = ψ(t)e^{−i2πϕ′(b)at}. Then the Fourier transform of ψ₁(t) is ψ̂₁(ξ) = ψ̂(ξ + aϕ′(b)). With x̃(t) given by (14), we have
\begin{align*}
W_x^{\rm IFE}(a,b)&=e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}x(t)\,e^{-i2\pi\varphi(t)+i2\pi\xi_0 t}\,e^{i2\pi\varphi'(b)(t-b)}\,\frac{1}{a}\overline{\psi\Big(\frac{t-b}{a}\Big)}\,dt\\
&=e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}\tilde{x}(t)\,\frac{1}{a}\overline{\psi_1\Big(\frac{t-b}{a}\Big)}\,dt
=e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}\hat{\tilde{x}}(\xi)\,\overline{\hat{\psi}_1(a\xi)}\,e^{i2\pi b\xi}\,d\xi\\
&=e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}\hat{\tilde{x}}(\xi)\,\overline{\hat{\psi}\big(a\xi+a\varphi'(b)\big)}\,e^{i2\pi b\xi}\,d\xi,
\end{align*}
as desired. □

We note that the proof of (13) is straightforward. However, formula (13) plays an important role in our discussion and implementation of IFE-SST; thus we state it as a proposition.

If x(t) = Ce^{i2πφ(t)} for a constant C and we choose ϕ(t) = φ(t), then x̃̂(ξ) = Cδ(ξ − ξ₀). Thus
\[
W_x^{\rm IFE}(a,b)=Ce^{i2\pi\phi(b)}\,\overline{\hat{\psi}\big(a\xi_0+a\phi'(b)\big)}\,e^{i2\pi b\xi_0}.
\]
Observe that for Morlet's wavelet given in (6), ψ̂(aξ₀ + aφ′(b)) (and hence W_x^{IFE}(a,b)) concentrates along a(ξ₀ + φ′(b)) = 1. Thus the IFE-CWT gives a more straightforward scale-time representation of a signal. Let u(t) be the signal given by
\[
u(t)=e^{i2\pi(10t+10t^2)},\quad 0\le t\le 1. \tag{15}
\]
u(t) is a chirp considered in [24]. Here we uniformly sample u(t) with 128 sample points. In Fig. 6, with ϕ(t) = 9.6982t² + 10.5970t and ϕ′(t) = 19.3964t + 10.5970, which can be estimated from the CWT of u(t), the IFE-CWT of u(t) with ξ₀ = 20 is shown. Observe that the IF of u(t) is 10 + 20t, 0 < t < 1, a line segment. The IFE-CWT of u(t) displayed in Fig. 6 is indeed a zone concentrating around a line segment, while the CWT does not give a clear picture of the IF of u(t). The picture of the IFE-CWT of u(t) with a different ξ₀ is similar to that with ξ₀ = 20. In the following experiments, we simply set ξ₀ = 0. Next, we show that x(t) can be recovered from its IFE-CWT.
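The concentration property stated after Proposition 1 can be checked numerically. The sketch below (illustrative grids; the reference function ϕ is taken equal to the true phase φ for simplicity) evaluates (13) at a fixed time b and locates the maximizing scale, which should be close to 1/(ξ₀ + φ′(b)):

```python
import numpy as np

# Sketch checking the concentration of the IFE-CWT via formula (13):
# for x(t) = exp(2*pi*i*phi(t)) with the reference function chosen as the
# true phase, |W^IFE(a, b)| should peak near a = 1/(xi0 + phi'(b)).
# psi_hat implements (6); grids are illustrative.
def psi_hat(xi):
    return np.where(xi > 0, np.exp(-2.0 * np.pi**2 * (xi - 1.0)**2), 0.0)

N, dt = 128, 1.0 / 128
t = np.arange(N) * dt
phi = 10 * t + 10 * t**2                    # phi'(t) = 10 + 20*t
x = np.exp(2j * np.pi * phi)

xi0 = 20.0
x_tilde = x * np.exp(-2j * np.pi * phi + 2j * np.pi * xi0 * t)   # (14)
Xt = np.fft.fft(x_tilde) * dt               # samples of the FT of x_tilde
eta = np.fft.fftfreq(N, d=dt)
d_eta = 1.0 / (N * dt)

n = N // 2                                  # fixed time b = t[n] = 0.5
dphi_b = 10 + 20 * t[n]                     # phi'(b) = 20
scales = np.linspace(0.01, 0.1, 300)

# (13):  W^IFE(a,b) = e^{2*pi*i*phi(b)} * int x_tilde_hat(xi)
#          * conj(psi_hat(a*xi + a*phi'(b))) * e^{2*pi*i*b*xi} d(xi)
Wife = np.array([
    np.exp(2j * np.pi * phi[n]) * d_eta *
    np.sum(Xt * np.conj(psi_hat(a * eta + a * dphi_b)) *
           np.exp(2j * np.pi * t[n] * eta))
    for a in scales
])

best = scales[np.argmax(np.abs(Wife))]
print(best, 1.0 / (xi0 + dphi_b))           # both near 1/40 = 0.025
```

Because x̃ here is a pure tone at ξ₀, the modulus is exactly a shifted copy of ψ̂, peaking where a(ξ₀ + φ′(b)) = 1.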

Figure 6: CWT (left picture) and IFE-CWT (right picture) with ξ₀ = 20, ϕ(t) = 9.6982t² + 10.5970t, ϕ′(t) = 19.3964t + 10.5970 of u(t) = e^{i2π(10t+10t²)}, 0 ≤ t ≤ 1

Theorem 2. Let x(t) be a function in L²(R). Then
\[
x(b)=\frac{1}{c_\psi}\exp(-i2\pi\xi_0 b)\int_{-\infty}^{\infty}W_x^{\rm IFE}(a,b)\,\frac{da}{|a|}, \tag{16}
\]
where c_ψ is defined by (2).

When x(t) satisfies a certain condition, x(t) can be recovered from its IFE-CWT with the scale variable a restricted to a > 0.

Theorem 3. Let x(t) be a function in L²(R). Suppose there is ϕ with ϕ′(t) > 0 such that y(t) defined by y(t) = x(t) exp(−i2πϕ(t)) satisfies ŷ(ξ) = 0, ξ ≤ A, for some constant A. Let W_x^{IFE}(a,b) be the IFE-CWT of x(t) defined by (12) with ϕ(t) and ξ₀ > −A. Then
\[
x(b)=\frac{1}{c_\psi}\exp(-i2\pi\xi_0 b)\int_0^{\infty}W_x^{\rm IFE}(a,b)\,\frac{da}{a}, \tag{17}
\]
where c_ψ is defined by (2).

The proof of Theorem 2 is similar to that of Theorem 3. Here we give the proof of Theorem 3.

Proof of Theorem 3. Let x̃(t) be the function defined by (14). Observe that x̃(t) = y(t)e^{i2πξ₀t}. Thus x̃̂(ξ) = ŷ(ξ − ξ₀). Hence, by (13) in Proposition 1, we have
\begin{align*}
\int_0^{\infty}W_x^{\rm IFE}(a,b)\,\frac{da}{a}
&=e^{i2\pi\varphi(b)}\int_0^{\infty}\int_{-\infty}^{\infty}\hat{\tilde{x}}(\xi)\,\overline{\hat{\psi}\big(a(\xi+\varphi'(b))\big)}\,e^{i2\pi b\xi}\,d\xi\,\frac{da}{a}\\
&=e^{i2\pi\varphi(b)}\int_0^{\infty}\int_{-\infty}^{\infty}\hat{y}(\xi-\xi_0)\,\overline{\hat{\psi}\big(a(\xi+\varphi'(b))\big)}\,e^{i2\pi b\xi}\,d\xi\,\frac{da}{a}\\
&=e^{i2\pi\varphi(b)}\int_0^{\infty}\int_{-\infty}^{\infty}\hat{y}(\xi)\,\overline{\hat{\psi}\big(a(\xi+\xi_0+\varphi'(b))\big)}\,e^{i2\pi b(\xi+\xi_0)}\,d\xi\,\frac{da}{a}\\
&=e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_{-\infty}^{\infty}\hat{y}(\xi)\Big(\int_0^{\infty}\overline{\hat{\psi}\big(a(\xi+\xi_0+\varphi'(b))\big)}\,\frac{da}{a}\Big)e^{i2\pi b\xi}\,d\xi\\
&=e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_A^{\infty}\hat{y}(\xi)\Big(\int_0^{\infty}\overline{\hat{\psi}(a)}\,\frac{da}{a}\Big)e^{i2\pi b\xi}\,d\xi\quad(\text{since }\xi+\xi_0+\varphi'(b)>0\text{ when }\xi>A)\\
&=c_\psi\,e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_A^{\infty}\hat{y}(\xi)\,e^{i2\pi b\xi}\,d\xi
=c_\psi\,e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_{-\infty}^{\infty}\hat{y}(\xi)\,e^{i2\pi b\xi}\,d\xi\\
&=c_\psi\,e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\,y(b)=c_\psi\,e^{i2\pi\xi_0 b}\,x(b).
\end{align*}
This shows (17). □

Remark 1. If the condition ŷ(ξ) = 0, ξ ≤ A, is not satisfied, then for large ξ₀ we have
\[
x(b)\approx\frac{1}{c_\psi}\exp(-i2\pi\xi_0 b)\int_0^{\infty}W_x^{\rm IFE}(a,b)\,\frac{da}{a}. \tag{18}
\]

This can be obtained as follows. Following the proof of Theorem 3 and noting that ŷ(ξ)\,\overline{\hat{\psi}\big(a(\xi+\xi_0+\varphi'(b))\big)} = 0 if ξ + ξ₀ + ϕ′(b) ≤ 0, we have
\begin{align*}
\int_0^{\infty}W_x^{\rm IFE}(a,b)\,\frac{da}{a}
&=e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_{-\infty}^{\infty}\hat{y}(\xi)\Big(\int_0^{\infty}\overline{\hat{\psi}\big(a(\xi+\xi_0+\varphi'(b))\big)}\,\frac{da}{a}\Big)e^{i2\pi b\xi}\,d\xi\\
&=e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_{-\xi_0-\varphi'(b)}^{\infty}\hat{y}(\xi)\Big(\int_0^{\infty}\overline{\hat{\psi}\big(a(\xi+\xi_0+\varphi'(b))\big)}\,\frac{da}{a}\Big)e^{i2\pi b\xi}\,d\xi\\
&=e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_{-\xi_0-\varphi'(b)}^{\infty}\hat{y}(\xi)\Big(\int_0^{\infty}\overline{\hat{\psi}(a)}\,\frac{da}{a}\Big)e^{i2\pi b\xi}\,d\xi\\
&=c_\psi\,e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_{-\xi_0-\varphi'(b)}^{\infty}\hat{y}(\xi)\,e^{i2\pi b\xi}\,d\xi\\
&=c_\psi\,e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\Big(\int_{-\infty}^{\infty}\hat{y}(\xi)\,e^{i2\pi b\xi}\,d\xi-\int_{-\infty}^{-\xi_0-\varphi'(b)}\hat{y}(\xi)\,e^{i2\pi b\xi}\,d\xi\Big)\\
&=c_\psi\,e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\,y(b)-c_\psi\,e^{i2\pi\varphi(b)+i2\pi\xi_0 b}\int_{-\infty}^{-\xi_0-\varphi'(b)}\hat{y}(\xi)\,e^{i2\pi b\xi}\,d\xi\\
&\approx c_\psi\,e^{i2\pi\xi_0 b}\,x(b).
\end{align*}
Thus (18) holds with error |c_ψ| \int_{-\infty}^{-\xi_0-\varphi'(b)}|\hat{y}(\xi)|\,d\xi. □

For x(t), at (a,b) for which W_x^{IFE}(a,b) ≠ 0, we need to define the reference IF function ω_x^{IFE}(a,b). Following the definition of ω_x(a,b),
\[
\frac{\frac{\partial}{\partial b}W_x^{\rm IFE}(a,b)}{2\pi i\,W_x^{\rm IFE}(a,b)} \tag{19}
\]
may be a good candidate for the reference IF function. First we look at \frac{\partial}{\partial b}W_x^{\rm IFE}(a,b). From (13), we have
\begin{align*}
\frac{\partial}{\partial b}W_x^{\rm IFE}(a,b)
&=i2\pi\varphi'(b)\,e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}\hat{\tilde{x}}(\xi)\,\overline{\hat{\psi}\big(a\xi+a\varphi'(b)\big)}\,e^{i2\pi b\xi}\,d\xi\\
&\quad+e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}\hat{\tilde{x}}(\xi)\,\overline{\hat{\psi}\big(a\xi+a\varphi'(b)\big)}\,e^{i2\pi b\xi}\,i2\pi\xi\,d\xi\\
&\quad+e^{i2\pi\varphi(b)}\int_{-\infty}^{\infty}\hat{\tilde{x}}(\xi)\,\overline{\hat{\psi}'\big(a\xi+a\varphi'(b)\big)}\,a\varphi''(b)\,e^{i2\pi b\xi}\,d\xi\\
&=:I_1+I_2+I_3, \tag{20}
\end{align*}
where x̃ is given by (14). Clearly, I₁ = i2πϕ′(b)W_x^{IFE}(a,b).

Consider the case x(t) = Ce^{i2πφ(t)} again. As shown above, with ϕ(t) = φ(t), we have x̃(t) = C exp(i2πξ₀t), and hence x̃̂(ξ) = Cδ(ξ − ξ₀), and
\[
W_x^{\rm IFE}(a,b)=Ce^{i2\pi\phi(b)}\,\overline{\hat{\psi}\big(a\xi_0+a\phi'(b)\big)}\,e^{i2\pi b\xi_0}.
\]
Note that in this case
\begin{align*}
I_1+I_2&=i2\pi\phi'(b)W_x^{\rm IFE}(a,b)+Ce^{i2\pi\phi(b)}\,\overline{\hat{\psi}\big(a\xi_0+a\phi'(b)\big)}\,e^{i2\pi b\xi_0}\,i2\pi\xi_0\\
&=i2\pi\phi'(b)W_x^{\rm IFE}(a,b)+i2\pi\xi_0 W_x^{\rm IFE}(a,b)=i2\pi\big(\phi'(b)+\xi_0\big)W_x^{\rm IFE}(a,b).
\end{align*}
Therefore,
\[
\frac{I_1+I_2}{2\pi i\,W_x^{\rm IFE}(a,b)}=\phi'(b)+\xi_0
\]
is the IF of x(t) plus the target frequency ξ₀. Hence, for a general x(t), we define the reference IF function ω_x^{IFE}(a,b) of the IFE-CWT of x(t) to be
\[
\omega_x^{\rm IFE}(a,b):=\frac{I_1+I_2}{2\pi i\,W_x^{\rm IFE}(a,b)}=\varphi'(b)+\frac{I_2}{2\pi i\,W_x^{\rm IFE}(a,b)}, \tag{21}
\]
where I₁ and I₂ are defined by (20). Observe that the reference IF function ω_x^{IFE}(a,b) defined by (21) is not the quantity defined by (19); instead, it is
\[
\frac{\frac{\partial}{\partial b}W_x^{\rm IFE}(a,b)-I_3}{2\pi i\,W_x^{\rm IFE}(a,b)}.
\]
Clearly, ω_x^{IFE}(a,b) depends on ϕ.

Definition 2. The instantaneous frequency-embedded wavelet synchrosqueezing transform (IFE-SST) of a signal x(t) with ϕ and ξ₀ is defined by
\[
T_x^{\rm IFE}(\xi,b)=\int_{\{a:\,W_x^{\rm IFE}(a,b)\ne 0\}}W_x^{\rm IFE}(a,b)\,\delta\big(\omega_x^{\rm IFE}(a,b)-\xi\big)\,\frac{da}{a},
\]
where W_x^{IFE} is the IFE-CWT of x(t) defined by (12) and ω_x^{IFE}(a,b) is defined by (21).

By Theorem 3, the input signal x(t) can be recovered from its IFE-SST, as shown in the following theorem. We refer the reader to Theorem 3.3 in [7] for the recovery of components of a signal from the original SST.

Theorem 4. Let x(t) be a function as in Theorem 3 with A = 0. Then
\[
x(b)=\frac{1}{c_\psi}\exp(-i2\pi\xi_0 b)\int_0^{\infty}T_x^{\rm IFE}(\xi,b)\,d\xi. \tag{22}
\]

In practice, a, b, ξ are discretized. Suppose a_j, b_n, ξ_k, j, n, k = 1, ⋯, are the sampling points of a, b, ξ respectively. Again, we assume ξ_{k+1} − ξ_k = Δξ for all k. Then the IFE-SST of x(t) is given by
\[
T_x^{\rm IFE}(\xi_k,b_n)=\sum_{j:\,|\omega_x^{\rm IFE}(a_j,b_n)-\xi_k|\le\Delta\xi/2,\ |W_x^{\rm IFE}(a_j,b_n)|>\gamma}W_x^{\rm IFE}(a_j,b_n)\,a_j^{-1}(\Delta a)_j,
\]
where (Δa)_j = a_{j+1} − a_j, and γ > 0 is a threshold parameter for the condition |W_x^{IFE}(a,b)| ≠ 0. The recovery formula (22) for x(t) implies
\[
x(b_n)=\frac{1}{c_\psi}\exp(-i2\pi\xi_0 b_n)\sum_k T_x^{\rm IFE}(\xi_k,b_n),\quad n=1,2,\cdots. \tag{23}
\]
We will discuss the discretization and the implementation of IFE-SST further in the next section.

Another issue to consider about IFE-CWT and IFE-SST is that, in practice, to estimate the IFs of x(t) from its IFE-SST, we need ϕ(t) and ϕ′(t) close to φ(t) and φ′(t) respectively. We first use the (regular) CWT/STFT to obtain a rough estimate of φ, φ′ to be used as the input ϕ, ϕ′ for IFE-CWT and IFE-SST. Then we use IFE-CWT or IFE-SST to get a more accurate estimate of φ, φ′. See the next section for more details.

4 Implementation

For the implementation of the IFE-SST, one may modify the procedures in [16]. Suppose x(t) is discretized uniformly at points t_n = t₀ + nΔt, n = 0, 1, ⋯, N − 1. Let b_n = nΔt, n = 0, 1, ⋯, N − 1, and let x̃ ∈ C^N denote the discretization of x̃ in (14):
\[
\tilde{\mathbf x}=\big(\tilde x_0,\tilde x_1,\cdots,\tilde x_{N-1}\big)^T,
\]
where T denotes the transpose of a vector/matrix, and
\[
\tilde x_n=\tilde x(t_n)=x(t_n)\,e^{-i2\pi\varphi(t_n)+i2\pi\xi_0 t_n}.
\]
Let Δη = \frac{1}{N\Delta t} and
\[
\eta_k=\begin{cases}k\Delta\eta,&\text{for }0\le k\le[\frac{N}{2}],\\[2pt](k-N)\Delta\eta,&\text{for }[\frac{N}{2}]+1\le k\le N-1,\end{cases}
\]
be the sampling points for the frequency variable η. Let \hat{\tilde x}_k = \hat{\tilde x}(\eta_k), 0 ≤ k ≤ N − 1, denote the discretization of the Fourier transform x̃̂ of x̃. One may obtain (\hat{\tilde x}_0,\hat{\tilde x}_1,\cdots,\hat{\tilde x}_{N-1})^T by applying the FFT to x̃:
\[
\hat{\tilde{\mathbf x}}=\Delta t\,\mathrm{FFT}\,\tilde{\mathbf x}.
\]
The scale variable can be discretized as
\[
a_j=\nu_j\Delta t,\quad\nu_j=2^{j/n_\nu},\quad j=1,2,\cdots,n_\nu(\log_2 N-1),
\]
where n_ν is a parameter the user can choose. One may choose n_ν = 32 or n_ν = 64 as suggested in [16]. Then we have
\[
\int_{-\infty}^{\infty}\hat{\tilde x}(\eta)\,\overline{\hat{\psi}\big(a\eta+a\varphi'(b)\big)}\,e^{i2\pi b\eta}\,d\eta
\approx\sum_{k=0}^{N-1}\Delta\eta\,\hat{\tilde x}(\eta_k)\,\overline{\hat{\psi}\big(a_j\eta_k+a_j\varphi'(b_n)\big)}\,e^{i2\pi b_n\eta_k}
=\frac{1}{N}\sum_{k=0}^{N-1}(\mathrm{FFT}\tilde{\mathbf x})(k)\,\overline{\hat{\psi}\big(a_j\eta_k+a_j\varphi'(b_n)\big)}\,e^{i2\pi b_n\eta_k}.
\]
Thus the IFE-CWT of x(t) can be discretized as
\[
W_x^{\rm IFE}(a_j,b_n)=e^{i2\pi\varphi(b_n)}\,\frac{1}{N}\sum_{k=0}^{N-1}(\mathrm{FFT}\tilde{\mathbf x})(k)\,\overline{\hat{\psi}\big(a_j\eta_k+a_j\varphi'(b_n)\big)}\,e^{i2\pi b_n\eta_k}. \tag{24}
\]

Similarly, the integral I₂ in (20) can be discretized as
\[
I_2(a_j,b_n)=i2\pi\,e^{i2\pi\varphi(b_n)}\,\frac{1}{N}\sum_{k=0}^{N-1}(\mathrm{FFT}\tilde{\mathbf x})(k)\,\overline{\hat{\psi}\big(a_j\eta_k+a_j\varphi'(b_n)\big)}\,e^{i2\pi b_n\eta_k}\,\eta_k.
\]
Therefore, ω_x^{IFE}(a,b) defined by (21) can be approximated, for W_x^{IFE}(a_j,b_n) ≠ 0, by
\[
\omega_x^{\rm IFE}(a_j,b_n)=\varphi'(b_n)+\frac{\displaystyle\sum_{k=0}^{N-1}(\mathrm{FFT}\tilde{\mathbf x})(k)\,\overline{\hat{\psi}\big(a_j\eta_k+a_j\varphi'(b_n)\big)}\,e^{i2\pi b_n\eta_k}\,\eta_k}{\displaystyle\sum_{k=0}^{N-1}(\mathrm{FFT}\tilde{\mathbf x})(k)\,\overline{\hat{\psi}\big(a_j\eta_k+a_j\varphi'(b_n)\big)}\,e^{i2\pi b_n\eta_k}}. \tag{25}
\]

The frequency variable ξ > 0 of the IFE-SST can be discretized as follows. Let Δξ be the frequency resolution parameter (one may set Δξ = \frac{1}{n_\nu}\cdot\frac{1}{2(\log_2 N-1)\Delta t}). Partition the time-frequency region {(ξ,b): 0 < ξ ≤ \frac{1}{2\Delta t}, b ≥ 0} into K₀ := \big[\frac{1}{2\Delta t\,\Delta\xi}\big] non-overlapping zones
\[
\Omega_k:=\Big\{(\xi,b):\ \xi_k-\frac{\Delta\xi}{2}<\xi\le\xi_k+\frac{\Delta\xi}{2},\ b\ge 0\Big\},\quad k=1,2,\cdots,K_0,
\]
where ξ_k = kΔξ. Let γ > 0 be a parameter to set the condition |W_x(a,b)| ≠ 0. One may choose γ to be a number between 10⁻⁸ and 10⁻⁴. Then we obtain the IFE-SST of x(t):
\[
T_x^{\rm IFE}(\xi_k,b_n)=\sum_{j:\ \xi_k-\frac{\Delta\xi}{2}<\omega_x^{\rm IFE}(a_j,b_n)\le\xi_k+\frac{\Delta\xi}{2},\ |W_x^{\rm IFE}(a_j,b_n)|>\gamma}W_x^{\rm IFE}(a_j,b_n)\,\frac{\log 2}{n_\nu}, \tag{26}
\]
where we have used the fact that a_j^{-1}(\Delta a)_j\approx\frac{\log 2}{n_\nu}.

Finally, x(b) can be recovered by (23). In the following we summarize our calculation of IFE-SST as Algorithm 1.

Algorithm 1. (Calculation of IFE-SST of a monocomponent signal)
Input: x(t_n), ϕ(t_n), ϕ′(t_n), 0 ≤ n ≤ N − 1, ξ₀, and γ > 0.
Step 1. Calculate W_x^{IFE}(a_j, b_n) by (24).
Step 2. Calculate ω_x^{IFE}(a_j, b_n) by (25) for all j, n with |W_x^{IFE}(a_j, b_n)| > γ.
Step 3. Calculate T_x^{IFE}(ξ_k, b_n) by (26).

Next we give the procedure to estimate the IF of a monocomponent signal with IFE-SST.

Algorithm 2. (IFE-SST-based IF estimation of a monocomponent signal)
Input: x(t_n), ξ₀, γ > 0, and initial ϕ(t_n), ϕ′(t_n), 0 ≤ n ≤ N − 1, estimated from the CWT/SST of x(t_n).
Step 1. Calculate the IFE-SST by Algorithm 1.
Step 2. Estimate the IF φ̃′(t_n) from the IFE-SST and set φ̃(t_n) = Σ_{k=0}^{n} φ̃′(t_k)Δt.
Step 3. Repeat Step 1 (with φ̃(t_n), φ̃′(t_n) obtained in Step 2 as the initial ϕ(t_n), ϕ′(t_n)) and Step 2 until the error criterion is reached.

Figure 7: Left: SST of u(t) = e^{i2π(10t+10t²)}, 0 ≤ t ≤ 1; Middle: IFE-SST of u(t) with ξ₀ = 0, ϕ(t) = 9.8305t² + 10.3153t, ϕ′(t) = 19.6610t + 10.3153; Right: IFE-SST of u(t) with ξ₀ = 0, ϕ(t) = 10.0061t² + 9.9960t, ϕ′(t) = 20.0122t + 9.9960

In Algorithms 1 and 2, we need initial estimates ϕ(t_n), ϕ′(t_n) for the phase function φ and the IF φ′. As mentioned above, we may use the CWT or SST to obtain a rough estimate of them. For a monocomponent signal x(t_n), 0 ≤ n ≤ N − 1, one may estimate its IF from its CWT/SST as follows. Let
\[
M_n=\max_j\{\omega_x(a_j,b_n)\},\quad 0\le n\le N-1
\]
(M_n will be max_j{|W_x(a_j, b_n)|} if we use the CWT for the estimation). Then a curve obtained by approximating (b_n, M_n), 0 ≤ n ≤ N − 1, gives the SST-based estimate of the IF of x(t_n). For a monocomponent signal, one may simply use least-squares polynomial fitting to obtain the IF estimate. The IF estimation from the IFE-SST in Step 2 of Algorithm 2 can be carried out similarly. More precisely, denote
\[
M_n^{\rm IFE}=\max_j\{\omega_x^{\rm IFE}(a_j,b_n)\},\quad 0\le n\le N-1.
\]

Then a curve obtained by approximating (b_n, M_n^{IFE}), 0 ≤ n ≤ N − 1, gives the estimated IF of x(t_n) with IFE-SST.

Next let us look at u(t) given in (15) as an example of how to obtain its IF by IFE-SST. Throughout this paper, we set γ = 10⁻⁵ and n_ν = 32 for SST and IFE-SST. From the SST of u(t), which is shown in the left picture of Fig. 7, we obtain M_n, 0 ≤ n ≤ 127. Then we obtain the linear least-squares approximation with (b_n, M_n), 4 ≤ n ≤ 123:
\[
\varphi'(t)=19.3426t+10.7704. \tag{27}
\]
One could use a higher-order polynomial least-squares approximation. Here we use the linear least-squares approximation in order to compare the estimate to the true IF of u(t): φ′(t) = 20t + 10. Also, we consider n from 4 to 123 to reduce the boundary effect. Then we use ϕ′(t) in (27) and its integral as the initial input IF and phase function to calculate the IFE-SST of u(t). From the IFE-SST, we then obtain M_n^{IFE} and an IF estimate, denoted by φ̃′(t), by linear least-squares approximation with (b_n, M_n^{IFE}), 4 ≤ n ≤ 123. We continue this procedure iteratively as in Algorithm 2. In Table 1, we list the estimated φ̃′(t). Observe that the best estimate we can get is φ̃′(t) = 20.0122t + 9.9960, which is quite close to the true IF φ′(t). The IFE-SST of u(t) with ϕ′(t) = 19.6610t + 10.3153 and that with ϕ′(t) = 20.0122t + 9.9960 are shown in Fig. 7. Note that the IFE-SST of u(t) with ϕ′(t) = 19.6610t + 10.3153 already gives a sharper representation of the IF than SST. Here we also provide the differences |M_n − φ′(t_n)| and |M_n^{IFE} − φ′(t_n)| to show the performance of SST and IFE-SST, where φ′(t) is the true IF of u(t). The comparisons between |M_n − φ′(t_n)|, n = 5, ⋯, 124, and |M_n^{IFE} − φ′(t_n)| with ξ₀ = 0, ϕ′(t) = 19.6610t + 10.3153 and with ξ₀ = 0, ϕ′(t) = 20.0122t + 9.9960 are shown in the left and middle pictures of Fig. 8.
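The least-squares fitting step used here can be sketched as follows (the ridge samples M_n are synthesized from the true IF plus a small perturbation, as a stand-in for values measured from an SST):

```python
import numpy as np

# Sketch of the least-squares IF fit: ridge samples (b_n, M_n) are
# synthesized as the true IF 20*t + 10 plus a small perturbation (a
# hypothetical stand-in for measured SST ridge values), and a line is
# fitted while discarding boundary samples n < 4 and n > 123.
rng = np.random.default_rng(0)
N, dt = 128, 1.0 / 128
b = np.arange(N) * dt
M = 20 * b + 10 + 0.05 * rng.standard_normal(N)   # hypothetical ridge samples

keep = slice(4, 124)                               # n = 4, ..., 123
slope, intercept = np.polyfit(b[keep], M[keep], deg=1)
print(slope, intercept)                            # near 20 and 10
```

Dropping a few samples at each end mirrors the boundary-effect precaution above; a higher fitting degree can be used for IFs that are not close to linear.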

Iteration   φ̃′(t)
1           19.6610t + 10.3153
2           19.8390t + 10.1597
3           19.9180t + 10.0764
4           19.9648t + 10.0322
5           19.9890t + 10.0117
6           19.9997t + 10.0043
7           20.0122t + 9.9960
8           20.0122t + 9.9960

Table 1: Estimated IF of u(t) = e^{i2π(10t+10t²)} by IFE-SST with Algorithm 2

Figure 8: Left: IF estimation errors with SST and with IFE-SST for u(t) = e^{i2π(10t+10t²)}, 0 ≤ t ≤ 1, with ξ_0 = 0, ϕ_0(t) = 19.6610t + 10.3153; Middle: IF estimation errors with SST and with IFE-SST for u(t) = e^{i2π(10t+10t²)}, 0 ≤ t ≤ 1, with ξ_0 = 0, ϕ_0(t) = 20.0122t + 9.9960; Right: IF estimation errors with SST and with IFE-SST for x(t) = e^{i2π(10t+100t²)}, 0 ≤ t ≤ 1

Our IFE-SST-based IF estimation also works well for chirps of high frequency. As an example, we show in the right picture of Fig. 8 the IF estimation errors |M_n − φ_0(t_n)| and |M_n^IFE − φ_0(t_n)|, n = 13, · · · , 500, for x(t) = e^{i2π(10t+100t²)}, 0 ≤ t ≤ 1, uniformly sampled with 512 sample points, where φ_0(t) = 10 + 200t is the true IF.

5  IFE-SST-based signal separation

We will apply the IFE-SST to signal separation. We consider the adaptive harmonic model (AHM) after the trend removal process:

    x(t) = Σ_{k=1}^K x_k(t) + ε(t),   x_k(t) = A_k(t) cos(2πφ_k(t)),

where ε(t) is the noise. We may separate the components of x(t) as follows. Use the CWT/STFT to identify the component with the highest frequency, say x_1(t), and estimate initial φ'_1(t) and φ_1(t). Then we use Algorithm 2 to obtain accurate estimates of φ'_1(t), φ_1(t) and to recover x_1(t). After that we remove x_1(t) from x(t) and repeat the same procedure on the new signal to recover the component with the second highest frequency, and then the remaining components. Our method is different from that in [13], where an optimization method is used to reconstruct the components of a multicomponent signal simultaneously.

Next we modify the definitions of IFE-CWT and IFE-SST for the purpose of estimating the IF of a particular component of a multicomponent signal. Consider the case x_k(t) = A_k e^{i2πφ_k(t)} and ξ_0 = 0. We assume the IFs of the different x_k(t) lie in non-overlapping zones in the time-frequency plane. Suppose we want to estimate the IF of the ℓth component x_ℓ(t). We should choose ϕ'(t) close to φ'_ℓ(t). Assume it happens that ϕ(t) = φ_ℓ(t). Then x̃_k(t) = x_k(t) e^{−i2πϕ(t)} = A_k e^{i2π(φ_k(t)−φ_ℓ(t))}. Thus (x̃_ℓ)^∧(ξ) = A_ℓ δ(ξ), and for k ≠ ℓ, the IF of x̃_k(t) lies in a zone away from the line ξ = 0, so that (x̃_k)^∧(ξ) ≈ 0 for |ξ| ≤ ε, where ε is a small positive number. Therefore, if we define the modified IFE-CWT W̃_x^IFE(a, b) by

    W̃_x^IFE(a, b) := e^{i2πφ_ℓ(b)} ∫_{|ξ|≤ε} (x̃)^∧(ξ) ψ̂(a(ξ + φ'_ℓ(b))) e^{i2πbξ} dξ,

then

    W̃_x^IFE(a, b) = e^{i2πφ_ℓ(b)} Σ_{k=1}^K ∫_{|ξ|≤ε} (x̃_k)^∧(ξ) ψ̂(a(ξ + φ'_ℓ(b))) e^{i2πbξ} dξ
                  ≈ e^{i2πφ_ℓ(b)} ∫_{|ξ|≤ε} A_ℓ δ(ξ) ψ̂(a(ξ + φ'_ℓ(b))) e^{i2πbξ} dξ
                  = A_ℓ e^{i2πφ_ℓ(b)} ψ̂(a φ'_ℓ(b)).

Hence, W̃_x^IFE(a, b) concentrates along a φ'_ℓ(b) = 1, the IF curve of x_ℓ(t) in the time-scale plane. Numerically, we consider

    W̃_x^IFE(a_j, b_n) = (e^{i2πϕ(b_n)}/N) (Σ_{k=0}^U + Σ_{k=N−L}^{N−1}) (FFT x̃)(k) ψ̂(a_j η_k + a_j ϕ'(b_n)) e^{i2πb_n η_k},   (28)

and

    ω̃_x^IFE(a_j, b_n) = ϕ'(b_n) + [(Σ_{k=0}^U + Σ_{k=N−L}^{N−1}) (FFT x̃)(k) ψ̂(a_j η_k + a_j ϕ'(b_n)) e^{i2πb_n η_k} η_k] / [(Σ_{k=0}^U + Σ_{k=N−L}^{N−1}) (FFT x̃)(k) ψ̂(a_j η_k + a_j ϕ'(b_n)) e^{i2πb_n η_k}]   (29)

instead of W_x^IFE(a_j, b_n) and ω_x^IFE(a_j, b_n) defined by (24) and (25), respectively. Then we define the modified IFE-SST of x(t):

    T̃_x^IFE(ξ_k, b_n) = Σ_{j: |ω̃_x^IFE(a_j,b_n) − ξ_k| ≤ ∆ξ/2, |W̃_x^IFE(a_j,b_n)| > γ} W̃_x^IFE(a_j, b_n) (log 2)/n_ν.   (30)

Here U and L are some nonnegative integers, which are not large. In addition, if the φ'_ℓ to be estimated is the IF of the component with the highest frequency, one may choose L = 0, while if φ'_ℓ is the IF of the component with the lowest frequency, one may choose U = 0.
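As a concrete illustration of the truncated sums (28) and (29), the sketch below evaluates both with NumPy's FFT for a pure chirp whose reference phase is chosen exactly. The Morlet-type window ψ̂(ξ) = e^{−20π²(1−ξ)²} χ_{(0,∞)}(ξ) used later for the bat example is borrowed here; the function names, the threshold value, and the real-part convention for ω̃ are our own choices, not prescribed by the paper:

```python
import numpy as np

def psi_hat(xi):
    # Frequency-domain Morlet-type window (the one used for the bat signal).
    return np.exp(-20 * np.pi**2 * (1 - xi)**2) * (xi > 0)

def ife_cwt_mod(x, phi, dphi, b_idx, scales, U=2, L=2):
    """Evaluate (28) and (29): demodulate by the reference phase phi,
    then keep only FFT bins 0..U and N-L..N-1 (the band |xi| <= eps)."""
    N = len(x)
    t = np.arange(N) / N
    X = np.fft.fft(x * np.exp(-2j * np.pi * phi(t)))   # FFT of x-tilde
    eta = np.fft.fftfreq(N, d=1.0 / N)                 # bin frequencies eta_k
    keep = np.r_[0:U + 1, N - L:N]
    b = t[b_idx]
    osc = np.exp(2j * np.pi * b * eta[keep])
    W = np.empty(len(scales), dtype=complex)
    omega = np.full(len(scales), np.nan)
    for j, a in enumerate(scales):
        terms = X[keep] * psi_hat(a * (eta[keep] + dphi(b))) * osc
        W[j] = np.exp(2j * np.pi * phi(b)) * terms.sum() / N
        if abs(terms.sum()) > 1e-12:                   # plays the role of gamma
            omega[j] = dphi(b) + ((terms * eta[keep]).sum() / terms.sum()).real
    return W, omega

# Pure chirp with the reference phase chosen exactly: the demodulated signal
# is constant, so |W| peaks where a * phi'(b) = 1 and omega equals phi'(b).
N = 128
t = np.arange(N) / N
x = np.exp(2j * np.pi * (10 * t + 10 * t**2))
phi = lambda s: 10 * s + 10 * s**2
dphi = lambda s: 10 + 20 * s
scales = np.linspace(0.01, 0.2, 381)
W, omega = ife_cwt_mod(x, phi, dphi, 64, scales)   # b = 0.5, phi'(b) = 20
j = int(np.argmax(np.abs(W)))
print(scales[j], omega[j])                         # ~0.05 = 1/20 and ~20
```

With an exact reference phase only the DC bin carries energy, so the peak scale recovers 1/φ'_ℓ(b) and ω̃ reproduces φ'_ℓ(b); with an estimated phase, the iteration of Algorithm 2 drives the result toward this ideal case.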

2

Figure 9: Right: IFE-SST of z(t) with Algorithm 3 and U = L = 20; Middle: v(t) = ei2π(10t+10t ) + 2

ei2π(9t+5t ) , 0 ≤ t ≤ 1 (real part); Right: SST of v(t)

In the following, we describe the procedure to estimate the IF of a particular component and to separate the components of a multicomponent signal.

Algorithm 3. (IFE-SST-based IF estimation of the ℓth component of a multicomponent signal)
Input: x(t_n), γ > 0, ξ_0, and initial ϕ_ℓ(t_n), ϕ'_ℓ(t_n) for the ℓth component estimated from the CWT/SST of x(t_n). Choose integers U, L ≥ 0.
Step 1. Calculate the modified IFE-SST T̃_{x,ℓ}^IFE(ξ_j, b_n) of the ℓth component by (30).
Step 2. Estimate the IF φ̃'_ℓ(t_n) from T̃_{x,ℓ}^IFE(ξ_j, b_n) and set φ̃_ℓ(t_n) = Σ_{k=0}^n φ̃'_ℓ(t_k) ∆t.
Step 3. Repeat Step 1 (with φ̃_ℓ(t_n), φ̃'_ℓ(t_n) obtained in Step 2 as the initial ϕ_ℓ(t_n), ϕ'_ℓ(t_n)) and Step 2 until the error criterion is reached.

Algorithm 4. (IFE-SST-based signal separation)
Step 1. Use the CWT/STFT/SST to identify the number K of frequency components. Choose a (targeted) frequency component, say x_ℓ(t). Use the CWT/STFT/SST to obtain initial estimates φ̃'_ℓ, φ̃_ℓ of φ'_ℓ, φ_ℓ.
Step 2. With ϕ' = φ̃'_ℓ and ϕ = φ̃_ℓ, use Algorithm 3 to obtain accurate estimates of φ'_ℓ(t), φ_ℓ(t) and obtain x̃_ℓ(t), the recovered x_ℓ(t).
Step 3. Remove x̃_ℓ(t) from x(t) and repeat Steps 1-2 to recover the second targeted component.
Step 4. Repeat Step 3 for the other components.
Step 5. If time permits, set y_ℓ(t) = x(t) − Σ_{0≤k≤K, k≠ℓ} x̃_k(t). Apply Algorithm 3 to y_ℓ(t) to obtain more accurate estimates of φ'_ℓ(t), φ_ℓ(t) and x̃_ℓ(t); and do the same for the other components.
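To make the peeling idea of Steps 2-3 concrete, here is a deliberately simplified Python sketch: demodulate by the (here, exactly known) phase of the targeted component, keep only the near-DC FFT bins as in (28), and re-modulate; the residual is the other component. The two hypothetical chirps are chosen so the demodulated interference sits exactly on one FFT bin — in practice ϕ would come from Algorithm 3's iteration and U, L must be tuned to the IF separation:

```python
import numpy as np

N = 128
t = np.arange(N) / N
v1 = np.exp(2j * np.pi * (10 * t + 10 * t**2))   # targeted component
v2 = np.exp(2j * np.pi * (50 * t + 10 * t**2))   # hypothetical second chirp
v = v1 + v2

def extract(sig, phase, U=3, L=3):
    # Demodulate by the reference phase, keep FFT bins 0..U and N-L..N-1,
    # then re-modulate: recovers the component whose phase matches `phase`.
    N = len(sig)
    X = np.fft.fft(sig * np.exp(-2j * np.pi * phase))
    mask = np.zeros(N)
    mask[:U + 1] = 1.0
    mask[N - L:] = 1.0
    return np.fft.ifft(X * mask) * np.exp(2j * np.pi * phase)

r1 = extract(v, 10 * t + 10 * t**2)   # Step 2: recover the targeted component
r2 = v - r1                           # Step 3: remove it; repeat on the rest
```

After demodulation, v1 becomes the constant 1 (bin 0) while v2 becomes e^{i2π·40t} (bin 40), so the mask separates them exactly; with estimated phases the recovery is only approximate, which is why Step 5 of Algorithm 4 refines each component again.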

Figure 10: Left: IFE-SST of estimated 1st component with estimated IF φ̃'_1(t) = 19.9813t + 10.0093, 0 ≤ t ≤ 1; Right: IFE-SST of estimated 2nd component with estimated IF φ̃'_2(t) = 10.0337t + 8.9831, 0 ≤ t ≤ 1

Before we consider multicomponent signals, we remark that Algorithm 3 can also be used to estimate the IF of a monocomponent signal. For example, for the signal z(t) with significantly changing frequency given by (10), we show in Fig. 9 its IF estimation obtained by Algorithm 3 with U = L = 20.

Next we consider a signal consisting of two chirps:

    v(t) = v_1(t) + v_2(t),  v_1(t) = e^{i2π(10t+10t²)},  v_2(t) = e^{i2π(9t+5t²)},  0 ≤ t ≤ 1.   (31)

v(t) is uniformly sampled with 128 sample points. The real part of v(t) is shown in the middle picture of Fig. 9 and the SST of v(t) is shown in the right picture of Fig. 9. Fig. 10 shows the IFE-SSTs of the 1st and 2nd components estimated by Algorithm 4. From the IFE-SSTs, we can recover v_1 and v_2 by the reconstruction formula (22) in Theorem 4. The error between v_1 and the recovered v_1 and that between v_2 and the recovered v_2 are shown in Fig. 11.

2

Figure 11: Left: Error between v1 (t) = ei2π(10t+10t ) , 0 ≤ t ≤ 1 and recovered v1 (t); Right: Error between 2

v2 (t) = ei2π(9t+5t ) , 0 ≤ t ≤ 1 and recovered v2 (t)


Figure 12: Left: v(t) and noisy v(t) with noise (10dB) (real part); Right: SST of noisy v(t)

We also consider signal separation in a noisy environment. The signal-to-noise ratio (SNR) of a noisy signal x̃ = x + ε with noise ε is defined by

    SNR = 20 log_10( ‖x − x̄‖_2 / ‖ε‖_2 ) (dB),

where x̄ is the mean of x. We consider the case ṽ(t) = v(t) + ε(t) with v(t) defined by (31) and ε(t) a Gaussian white noise at 10dB. ṽ(t) is shown in Fig. 12, where its SST is also provided. With Algorithm 4 (we choose L = 10, U = 0 for v_2(t), the component with the lower frequency, choose L = 0, U = 10 for v_1(t), the component with the higher frequency, and also perform Step 5), the estimated IFE-SSTs of v_1(t) and v_2(t) in the noisy environment are shown in the top row of Fig. 13. We also provide the recovered v_1(t) and v_2(t) in Fig. 13.

The next example is w(t) = w_1(t) + w_2(t) with

    w_1(t) = log(2 + t/2) cos(2π(5t + 0.1t²)),  w_2(t) = exp(−0.1t) cos(2π(4t + 0.5 cos t)),   (32)

for 0 ≤ t ≤ 8. We sample w(t) uniformly with 1024 sample points. We discuss the IF estimation of w(t) in a noisy environment. Let w̃(t) = w(t) + ε(t), where ε(t) is a Gaussian white noise at 10dB. w̃(t) and its SST are shown in Fig. 14. With Algorithm 4, the estimated IFE-SSTs of w_1(t) and w_2(t) in the noisy environment are shown in the top row of Fig. 15, and the recovered w_1(t) and w_2(t) are shown in the bottom row of Fig. 15. From the SSTs and IFE-SSTs provided in Figs. 13 and 15, we see that IFE-SST gives a better IF representation of a signal than SST. These examples also show that our IFE-SST works well in noisy environments.

Our last example uses IFE-SST to separate the components of a bat echolocation signal. Fig. 16 shows an echolocation pulse emitted by the Large Brown Bat (Eptesicus fuscus). The data can be downloaded from the website of DSP at Rice University: http://dsp.rice.edu/software/bat-echolocation-chirp. There are 400 samples; the sampling step is 7 microseconds. The IF representation of this bat signal has been studied in [23] and
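The SNR convention above (with the base-10 logarithm that the dB unit implies) and the scaling of white Gaussian noise to a prescribed 10 dB level can be sketched as follows; the helper names are ours, not the paper's:

```python
import numpy as np

def snr_db(x, eps):
    # SNR = 20 log10( ||x - mean(x)||_2 / ||eps||_2 )  (dB)
    return 20 * np.log10(np.linalg.norm(x - np.mean(x)) / np.linalg.norm(eps))

def add_noise(x, target_db, rng):
    # Scale complex white Gaussian noise so that snr_db(x, eps) == target_db.
    eps = rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x))
    eps *= np.linalg.norm(x - np.mean(x)) / (
        np.linalg.norm(eps) * 10 ** (target_db / 20))
    return x + eps, eps

rng = np.random.default_rng(0)
t = np.arange(128) / 128
v = (np.exp(2j * np.pi * (10 * t + 10 * t**2))
     + np.exp(2j * np.pi * (9 * t + 5 * t**2)))   # the two chirps of (31)
v_noisy, eps = add_noise(v, 10.0, rng)
print(round(snr_db(v, eps), 6))   # 10.0
```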

Figure 13: Top row: IFs of v_1(t) and v_2(t) recovered by Algorithm 4; Bottom row: Recovered v_1(t) and v_2(t) in the noisy environment by Algorithm 4

[24] by the matching demodulation transform SST (MDT-SST) and the second-order SST, respectively. Here we remark that, since IFE-SST is based on the CWT, in this paper we only compare our method with the CWT-based SST, not with the STFT-based SSTs, including MDT-SST and the second-order SST. For the bat signal, we simply show that using Algorithm 4 we can estimate the IFs and separate the components. The SST of the bat signal is shown in Fig. 16, where Morlet's wavelet with ψ̂(ξ) = e^{−20π²(1−ξ)²} χ_{(0,∞)}(ξ) is used. The IFE-SSTs of the four main components obtained by Algorithm 4 are shown in Fig. 17, and the recovered components are provided in Fig. 18.

6  Conclusion

In this paper we introduce the instantaneous frequency-embedded continuous wavelet transform (IFE-CWT). We establish that the original signal can be reconstructed from its IFE-CWT. Then, based on the IFE-CWT, we introduce the instantaneous frequency-embedded synchrosqueezing transform (IFE-SST). IFE-SST preserves the IF of a monocomponent signal. For each component of a multicomponent signal, IFE-SST uses a reference IF function associated with that component. Our numerical experiments show that IFE-SST performs better than the CWT-based SST in the separation of the components of non-stationary signals. The experimental results also show that IFE-SST works well in noisy environments.

Figure 14: Left: w(t) and noisy w(t) with noise (10dB); Right: SST of noisy w(t)

Figure 15: Top row: IFs of w_1(t) and w_2(t) recovered by Algorithm 4; Bottom row: w_1(t) and w_2(t) recovered in the noisy environment by Algorithm 4

ACKNOWLEDGMENT OF SUPPORT AND DISCLAIMER: (a) Contractor acknowledges Government’s support in the publication of this paper. This material is based upon work funded by AFRL, under AFRL Contract No. FA8750-15-3-6000 and FA8750-15-3-6003. (b) Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of AFRL. The authors thank the anonymous reviewers for their valuable comments. The authors wish to thank Curtis Condon, Ken White, and Al Feng of the Beckman Institute of the University of Illinois for the bat data in Fig.16 and for permission to use it in this paper.


Figure 16: Left: Bat echolocation chirp; Right: SST of bat signal

Figure 17: IFE-SSTs of four main components of bat signal

References [1] N. E. Huang and Z. Wu, “A review on Hilbert-Huang transform: Method and its applications to geophysical studies,” Rev. Geophys., vol. 46, no. 2, June 2008. [2] J. B. Tary, R. H. Herrera, J. J. Han, and M. van der Baan, “Spectral estimation–What is new? What is next?”, Review of Geophys., vol. 52, no. 4, pp. 723–749, Dec. 2014. [3] H.-T. Wu, Y.-H. Chan, Y.-T. Lin, and Y.-H. Yeh, “Using synchrosqueezing transform to discover breathing dynamics from ECG signals,” Appl. Comput. Harmon. Anal., vol. 36, no. 2, pp. 354–459, Mar. 2014.


Figure 18: Four main components of bat signal obtained by Algorithm 4

[4] S. K. Guharay, G. S. Thakur, F. J. Goodman, S. L. Rosen, and D. Houser, “Analysis of non-stationary dynamics in the financial system,” Economics Letters, vol. 121, no. 3, pp. 454–457, Dec. 2013.

[5] N. E. Huang, Z. Shen, S. R. Long, M. L. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, “The empirical mode decomposition and Hilbert spectrum for nonlinear and nonstationary time series analysis,” Proc. Roy. Soc. London A, vol. 454, no. 1971, pp. 903–995, Mar. 1998.

[6] L. Cohen, Time-frequency Analysis, Prentice Hall, New Jersey, 1995.

[7] I. Daubechies, J. Lu, and H.-T. Wu, “Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool,” Appl. Comput. Harmon. Anal., vol. 30, no. 2, pp. 243–261, Mar. 2011.

[8] I. Daubechies and S. Maes, “A nonlinear squeezing of the continuous wavelet transform based on auditory nerve models,” in A. Aldroubi and M. Unser, Eds., Wavelets in Medicine and Biology, CRC Press, 1996, pp. 527–546.

[9] F. Auger and P. Flandrin, “Improving the readability of time-frequency and time-scale representations by the reassignment method,” IEEE Trans. Signal Proc., vol. 43, no. 5, pp. 1068–1089, 1995.


[10] E. Chassande-Mottin, F. Auger, and P. Flandrin, “Time-frequency/time-scale reassignment,” in Wavelets and Signal Processing, Appl. Numer. Harmon. Anal., Birkhäuser Boston, Boston, MA, 2003, pp. 233–267.

[11] Z. Wu and N. E. Huang, “Ensemble empirical mode decomposition: A noise-assisted data analysis method,” Adv. Adapt. Data Anal., vol. 1, no. 1, pp. 1–41, Jan. 2009.

[12] H.-T. Wu, P. Flandrin, and I. Daubechies, “One or two frequencies? The synchrosqueezing answers,” Adv. Adapt. Data Anal., vol. 3, no. 1–2, pp. 29–39, Apr. 2011.

[13] S. Meignen, T. Oberlin, and S. McLaughlin, “A new algorithm for multicomponent signals analysis based on synchrosqueezing: With an application to signal sampling and denoising,” IEEE Trans. Signal Proc., vol. 60, no. 11, pp. 5787–5798, Nov. 2012.

[14] F. Auger, P. Flandrin, Y. Lin, S. McLaughlin, S. Meignen, T. Oberlin, and H.-T. Wu, “Time-frequency reassignment and synchrosqueezing: An overview,” IEEE Signal Process. Mag., vol. 30, no. 6, pp. 32–41, 2013.

[15] C. Li and M. Liang, “A generalized synchrosqueezing transform for enhancing signal time-frequency representation,” Signal Proc., vol. 92, no. 9, pp. 2264–2274, 2012.

[16] G. Thakur, E. Brevdo, N. Fučkar, and H.-T. Wu, “The synchrosqueezing algorithm for time-varying spectral analysis: Robustness properties and new paleoclimate applications,” Signal Proc., vol. 93, no. 5, pp. 1079–1094, 2013.

[17] C. K. Chui, Y.-T. Lin, and H.-T. Wu, “Real-time dynamics acquisition from irregular samples with application to anesthesia evaluation,” Anal. Appl., vol. 14, no. 4, pp. 537–590, Jul. 2016.

[18] C. K. Chui and M. D. van der Walt, “Signal analysis via instantaneous frequency estimation of signal components,” Int'l J. Geomath., vol. 6, no. 1, pp. 1–42, Apr. 2015.

[19] M. Kowalski, A. Meynard, and H.-T. Wu, “Convex optimization approach to signals with fast varying instantaneous frequency,” Appl. Comput. Harmon. Anal., in press.

[20] I. Daubechies, Y. Wang, and H.-T. Wu, “ConceFT: Concentration of frequency and time via a multitapered synchrosqueezed transform,” Phil. Trans. Royal Soc. A, vol. 374, no. 2065, Apr. 2016.

[21] H.-T. Wu, “Adaptive analysis of complex data sets,” Ph.D. dissertation, Princeton Univ., Princeton, NJ, 2012.


[22] T. Oberlin, S. Meignen, and V. Perrier, “The Fourier-based synchrosqueezing transform,” in Proc. 39th Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 2014, pp. 315–319.

[23] S. Wang, X. Chen, G. Cai, B. Chen, X. Li, and Z. He, “Matching demodulation transform and synchrosqueezing in time-frequency analysis,” IEEE Trans. Signal Proc., vol. 62, no. 1, pp. 69–84, 2014.

[24] T. Oberlin, S. Meignen, and V. Perrier, “Second-order synchrosqueezing transform or invertible reassignment? Towards ideal time-frequency representations,” IEEE Trans. Signal Proc., vol. 63, no. 5, pp. 1335–1344, Mar. 2015.

[25] Z.-L. Huang, J. Z. Zhang, T. H. Zhao, and Y. B. Sun, “Synchrosqueezing S-transform and its application in seismic spectral decomposition,” IEEE Trans. Geosci. Remote Sensing, vol. 54, no. 2, pp. 817–825, Feb. 2016.

[26] S. Meignen and V. Perrier, “A new formulation for empirical mode decomposition based on constrained optimization,” IEEE Signal Proc. Letters, vol. 14, no. 12, pp. 932–935, Dec. 2007.

[27] T. Y. Hou and Z. Shi, “Adaptive data analysis via sparse time-frequency representation,” Adv. Adapt. Data Anal., vol. 3, no. 1, pp. 1–28, Apr. 2011.

[28] T. Oberlin, S. Meignen, and V. Perrier, “An alternative formulation for the empirical mode decomposition,” IEEE Trans. Signal Proc., vol. 60, no. 5, pp. 2236–2246, May 2012.

[29] T. Y. Hou and Z. Shi, “Data-driven time-frequency analysis,” Appl. Comput. Harmon. Anal., vol. 35, no. 2, pp. 284–308, Sep. 2013.

[30] K. Dragomiretskiy and D. Zosso, “Variational mode decomposition,” IEEE Trans. Signal Proc., vol. 62, no. 3, pp. 531–544, Feb. 2014.

[31] N. Pustelnik, P. Borgnat, and P. Flandrin, “Empirical mode decomposition revisited by multicomponent non-smooth convex optimization,” Signal Proc., vol. 102, pp. 313–331, Sep. 2014.

[32] J. Gilles, “Empirical wavelet transform,” IEEE Trans. Signal Proc., vol. 61, no. 16, pp. 3999–4010, Aug. 2013.

[33] L. Lin, Y. Wang, and H. M. Zhou, “Iterative filtering as an alternative algorithm for empirical mode decomposition,” Advances in Adaptive Data Analysis, vol. 1, no. 4, pp. 543–560, Oct. 2009.


[34] A. Cicone, J. F. Liu, and H. M. Zhou, “Adaptive local iterative filtering for signal decomposition and instantaneous frequency analysis,” Appl. Comput. Harmon. Anal., vol. 41, no. 2, pp. 384–411, Sep. 2016. [35] C. K. Chui and H. N. Mhaskar, “Signal decomposition and analysis via extraction of frequencies,” Appl. Comput. Harmon. Anal., vol. 40, no. 1, pp. 97–136, 2016. [36] Y. Meyer, Wavelets and Operators Volume 1, Cambridge University Press, 1993. [37] I. Daubechies, Ten Lectures on Wavelets, SIAM, CBMS-NSF Regional Conf. Series in Appl. Math, 1992. [38] C.K. Chui and Q.T. Jiang, Applied Mathematics—Data Compression, Spectral Methods, Fourier Analysis, Wavelets and Applications, Amsterdam: Atlantis Press, 2013. [39] E. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton Univ. Press, Princeton, NJ, 1971. [40] R. Allen and D. Mills, Signal Analysis: Time, Frequency, Scale, and Structure, Wiley-IEEE Press, Piscataway, NJ, 2004.

