Efficient Adaptive Algorithm for Modeling a Speech Signal

A. Maddi and A. Guessoum

D. Berkani

Department of Electronics University of Blida Road of Soumaa, PB 270, Blida, ALGERIA [email protected], [email protected]

Electrical Engineering National Polytechnic School of El Harrach, Road of Hassan Badi, PB 182, Algiers, ALGERIA [email protected]

Abstract—This paper presents a time-domain prediction-error method based on Newton algorithm identification for the identification of autoregressive moving average (ARMA) models of linear structural systems. The proposed identification method can incorporate as much a priori structural information as possible into the identification process to improve the identification accuracy. The method uses a recursive procedure that makes successive corrections in determining a linear mathematical model, based on the observation data, to represent the system considered. To avoid the practical difficulty often associated with biased prediction-error measurements, a stochastic Newton method is proposed. The numerical results show that the proposed method is capable of obtaining a minimal standard deviation of the prediction error, and the estimated parameters are close to the actual values. An application to a speech signal is then provided in order to identify the parameters of the AR model corresponding to the speech signal.

Keywords—Modeling; Identification; Prediction; Unbiased estimator; Newton method; Speech signal.

I. INTRODUCTION

The majority of identification techniques were developed for identifying models of dynamic systems whose parameters are either unknown or time-varying. A parametric identification method thus consists in determining, in a recursive way, a linear mathematical model, based on the available observation data, to represent the system under consideration. The model must provide a faithful approximation of the behavior of the physical system, with the aim of identifying the system parameters or designing simulation algorithms. The essential steps consist in formalizing the knowledge available a priori, collecting experimental data, then estimating the structure, parameters and uncertainties of a model, and finally validating (or invalidating) it. The use of experimental data to determine the parameters of mathematical models with unbiased estimates is an important subject for researchers in automatic control [1] [5]. In this paper we propose a stochastic Newton algorithm to identify the parameters of an ARMA model with unbiased estimates. We show how this method can be seen as a low-complexity approximation compared with the least mean squares (LMS) algorithm.

II. PROBLEM FORMULATION

The Newton algorithm is used in several signal processing applications. Its general formula can be written as [6]:

    x(t) = x(t-1) - \frac{f[x(t-1)]}{f'[x(t-1)]}                        (1)

Consider a quadratic functional J(x) whose optimum we want to find. The stochastic Newton method is based on:

    \min_x J(x) \;\Rightarrow\; \left.\frac{\partial J(x)}{\partial x}\right|_{x=\hat{x}} = 0        (2)

We set:

    f(x) = \frac{\partial J(x)}{\partial x} = J'(x)                     (3)

with

    f'(x) = \frac{\partial^2 J(x)}{\partial x^2} = J''(x)               (4)

Substituting these into Eq. 1, we obtain:

    x(t) = x(t-1) - \frac{J'[x(t-1)]}{J''[x(t-1)]}                      (5)

This last equation is called the scalar Newton algorithm.
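To make the recursion concrete, the following minimal Python sketch (not part of the original paper) iterates Eq. 5 on a hypothetical quadratic criterion J(x) = (x - 3)^2; for a quadratic J the iteration reaches the optimum in a single step.

```python
def newton_scalar(J_prime, J_second, x0, n_iter=20):
    """Iterate x(t) = x(t-1) - J'(x(t-1)) / J''(x(t-1))  (Eq. 5)."""
    x = x0
    for _ in range(n_iter):
        x = x - J_prime(x) / J_second(x)
    return x

# Hypothetical example: J(x) = (x - 3)^2, so J'(x) = 2(x - 3) and J''(x) = 2.
x_opt = newton_scalar(lambda x: 2.0 * (x - 3.0), lambda x: 2.0, x0=0.0)
print(x_opt)  # 3.0 -- the minimizer of the quadratic criterion
```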

III. IDENTIFICATION OF THE ARMA MODEL

The autoregressive moving average model, denoted ARMA(n, m), is defined by the following equation [1]:

    \sum_{i=0}^{n} a_i y(t-i) = \sum_{i=0}^{m} b_i u(t-i) + e(t)        (6)

where a_0 = 1, b_0 = 0, and

u(t): input signal
y(t): output signal
e(t): white noise

By developing Eq. 6, we obtain:

    y(t) = -a_1 y(t-1) - a_2 y(t-2) - \dots - a_n y(t-n) + b_1 u(t-1) + \dots + b_m u(t-m) + e(t)        (7)

This last equation can be represented in vector form:

    y(t) = \theta^T \varphi(t) + e(t)                                   (8)

with

\theta: parameter vector to be identified
\varphi^T(t): data (regression) vector

The identification of the parameter vector \theta in the mean-square-error sense relies on the criterion:

    J(\theta) = \frac{1}{2} E\{ e^2(t) \}                               (9)

where the prediction error is defined as:

    e(t) = y(t) - \hat{\theta}^T(t-1)\,\varphi(t)                       (10)

The minimization of the criterion J(\theta) consists in finding the optimum, that is, the point where the derivative vanishes:

    \frac{\partial J(\theta)}{\partial \theta} = -E\{ \varphi(t)\,[\, y(t) - \varphi^T(t)\theta \,] \}\Big|_{\theta=\hat{\theta}} = 0        (11)

    \frac{\partial^2 J(\theta)}{\partial \theta^2} = E\{ \varphi(t)\varphi^T(t) \} = R        (12)

Using the stochastic approximation procedure, Eq. 5 becomes:

    \hat{\theta}(t) = \hat{\theta}(t-1) + \gamma_1(t)\, R^{-1}(t)\, \varphi(t)\, [\, y(t) - \varphi^T(t)\hat{\theta}(t-1) \,]        (13)

where \gamma_1(t) is a scalar gain function.

If R is unknown, it can be estimated by noting that:

    E\{ \varphi(t)\varphi^T(t) \} = R \;\Rightarrow\; E\{ \varphi(t)\varphi^T(t) - R \} = 0        (14)

By applying the approximation procedure, we write:

    R(t) = R(t-1) + \gamma_2(t)\, [\, \varphi(t)\varphi^T(t) - R(t-1) \,]        (15)

From Eqs. 10, 13 and 15, the stochastic Newton algorithm can be summarized as:

    e(t) = y(t) - \hat{\theta}^T(t-1)\,\varphi(t)                       (16)

    \hat{\theta}(t) = \hat{\theta}(t-1) + \gamma_1(t)\, R^{-1}(t)\, \varphi(t)\, e(t)        (17)

    R(t) = R(t-1) + \gamma_2(t)\, [\, \varphi(t)\varphi^T(t) - R(t-1) \,]        (18)
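As an illustration only (the paper gives no code), the recursion of Eqs. 16-18 can be sketched in Python as follows; the decreasing gains γ1(t) = γ2(t) = 1/t and the initialization of R are assumptions, since the paper does not specify them.

```python
import numpy as np

def stochastic_newton(phi_seq, y_seq, n_params, delta=1e-2):
    """Run the stochastic Newton recursion of Eqs. 16-18 over a sequence of
    regressors phi(t) and outputs y(t); returns the final parameter estimate."""
    theta = np.zeros(n_params)
    R = delta * np.eye(n_params)   # small initial R so that R(t) is invertible (assumption)
    for t, (phi, y) in enumerate(zip(phi_seq, y_seq), start=1):
        gamma = 1.0 / t            # gamma1(t) = gamma2(t) = 1/t (assumption)
        e = y - phi @ theta                                   # Eq. 16: prediction error
        R = R + gamma * (np.outer(phi, phi) - R)              # Eq. 18: estimate of E{phi phi^T}
        theta = theta + gamma * np.linalg.solve(R, phi) * e   # Eq. 17: parameter update
    return theta
```

Updating R before the parameter vector at each step simply makes R(t) available for the update of Eq. 17; this ordering is a design choice of the sketch.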

IV. SIMULATION EXAMPLE

Let us consider a stable, non-minimum-phase system of order 2 [5], [9] with the following transfer function:

    H_2(z) = z^{-1}\,\frac{1 + 2 z^{-1}}{1 + 0.3 z^{-1} + 0.8 z^{-2}}

that is, a_1 = 0.3, a_2 = 0.8, b_1 = 1, b_2 = 2.

The results obtained with the stochastic Newton algorithm are summarized in Table I; a sketch of how such simulation data could be generated is given below.
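As a hedged illustration of how the data behind the simulation could be produced and fed to the recursion sketched in Section III, the following assumes a zero-mean white Gaussian excitation u(t) (the paper does not describe the input signal) and additive noise of variance σ_v².

```python
import numpy as np

def simulate_example(N, noise_var=0.25, seed=0):
    """Generate N samples of the order-2 example system
    y(t) = -0.3 y(t-1) - 0.8 y(t-2) + u(t-1) + 2 u(t-2) + e(t)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(N)                        # excitation, assumed zero-mean white noise
    e = np.sqrt(noise_var) * rng.standard_normal(N)   # additive white noise of variance sigma_v^2
    y = np.zeros(N)
    for t in range(2, N):
        y[t] = -0.3 * y[t-1] - 0.8 * y[t-2] + u[t-1] + 2.0 * u[t-2] + e[t]
    return u, y

u, y = simulate_example(256 * 10)
# Regressor phi(t) = [-y(t-1), -y(t-2), u(t-1), u(t-2)]^T, matching Eq. 8,
# so that theta = [a1, a2, b1, b2]^T.
phi_seq = [np.array([-y[t-1], -y[t-2], u[t-1], u[t-2]]) for t in range(2, len(y))]
theta_hat = stochastic_newton(phi_seq, y[2:], n_params=4)  # sketch from Section III
print(theta_hat)  # should approach [0.3, 0.8, 1.0, 2.0]
```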

TABLE I. EFFECT OF THE NUMBER OF SAMPLES N (σ_v² = 0.25)

N        â1       â2       b̂1       b̂2       Average Bias
256*2    0.2821   0.7967   1.0041   1.9707   0.0474
256*4    0.2924   0.7953   0.9974   2.0027   0.0300
256*10   0.3004   0.7994   0.9988   2.0070   0.0103
256*20   0.2989   0.8003   0.9881   2.0078   0.0103
256*50   0.3002   0.8004   1.0002   2.0090   0.0032

TABLE II. EFFECT OF THE NOISE VARIANCE σ_v² (N = 256*10)

σ_v²   â1       â2       b̂1       b̂2       Average Bias
1.00   0.3004   0.8028   1.0349   2.0104   0.0163
0.25   0.2987   0.7994   0.9970   1.9983   0.0115
0.04   0.2986   0.7971   0.9970   2.0026   0.0102
0.01   0.2995   0.8007   1.0001   1.9984   0.0100

V. INTERPRETATION OF THE RESULTS

It is noticed from the results obtained that when the number of samples N increases, or when the variance of the additive noise decreases, the standard deviation of the prediction error decreases and the estimated parameters get closer to the actual values. The position of the zeros of the system does not influence the parameter estimation error; on the other hand, the poles must be located inside the unit circle to ensure the stability of the system and the convergence of the stochastic Newton algorithm. The stochastic Newton algorithm is a commonly used adaptive algorithm because of its simplicity and reasonable performance. Since it is an iterative algorithm, it can be used in a highly time-varying signal environment, and it shows a stable and robust performance under different signal conditions. However, it may not have a very fast convergence speed [5], [9] compared with other algorithms such as RLS and LMS.

VI. APPLICATION TO THE SPEECH SIGNAL

To verify the above theoretical developments, a speech-signal modeling experiment was carried out to show the abilities of the stochastic Newton (SN) algorithm. The speech data file used here, named mf3.dat, corresponds to the French sentence « un loup s'est jeté immédiatement sur la petite chèvre » ("a wolf immediately pounced on the little goat") [5]. The model was validated by comparing the simulation results with the MATLAB AR function. Applying the SN algorithm to the speech signal gave the following results:

TABLE III. ESTIMATED PARAMETERS OF THE AR MODEL

Parameter   Stochastic Newton algorithm   MATLAB AR function
a0           1.0000                        1.0000
a1          -0.7757                       -0.7738
a2          -0.1305                       -0.1313
a3           0.0379                        0.0371
a4          -0.0566                       -0.0550
a5          -0.0055                       -0.0075
a6           0.0348                        0.0364
a7           0.0066                        0.0038
a8          -0.0587                       -0.0589
a9           0.0498                        0.0531
a10          0.0635                        0.0619

Figure 1. Prediction error and its autocorrelation (amplitude versus number of samples).

Figure 2. (a) Evolution of the AR model parameters, (b) autocorrelation of the prediction error (amplitude versus number of samples).
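For illustration only, the AR(10) identification of the speech signal could be reproduced along the following lines, reusing the stochastic_newton sketch from Section III; the loading of mf3.dat and the least-squares reference (standing in for the MATLAB AR function) are assumptions, not part of the original paper.

```python
import numpy as np

def ar_regressors(s, order=10):
    """Build AR regressors phi(t) = [-s(t-1), ..., -s(t-order)]^T and targets s(t)."""
    phi_seq = [np.array([-s[t - i] for i in range(1, order + 1)])
               for t in range(order, len(s))]
    return phi_seq, s[order:]

# Hypothetical usage, assuming the speech samples of "mf3.dat" can be loaded
# into a 1-D array (the file format is not described in the paper):
# speech = np.loadtxt("mf3.dat")
# phi_seq, targets = ar_regressors(speech, order=10)
# a_sn = stochastic_newton(phi_seq, targets, n_params=10)   # [a1, ..., a10], cf. Table III
#
# An ordinary least-squares fit can stand in for the MATLAB AR function used
# for validation:
# Phi = np.vstack(phi_seq)
# a_ls, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
```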

VII. CONCLUSION

Most identification methods for discrete- or continuous-time models rely only on the assumption of additive noise on the system output, the input being assumed perfectly known. In many practical cases, however, the input is also measured and is therefore corrupted by measurement noise, and it turns out that traditional methods that do not take the input noise into account lead to biased estimates. The identification of systems whose input and output signals are both corrupted by noise, but are not related through a closed loop, is known as the "errors-in-variables" problem. This type of model is very useful when the goal of the identification is to determine the internal laws governing the process rather than to predict its future behavior, and many disciplines use such models. We have shown that the identification of the parameters of a speech signal using the SN algorithm gives good results, but these performances degrade quickly in the presence of disturbances compared with the recursive least squares (RLS) algorithm. In the time-frequency representation of the disturbed signal, some of the parameters of the speech signal are masked by the noise [7], [3]. In such a case the parameters of the clean speech cannot be estimated from the disturbed signal and are therefore regarded as unreliable.

REFERENCES

[1] M. Kunt, M. Bellanger, F. de Coulon, « Techniques modernes de traitement numérique des signaux », collection électricité, traitement de l'information, Volume 1, 1991.
[2] D. Arzelier, « Introduction à l'étude des processus stochastiques et au filtrage optimal », version 2, INSA, 1998.
[3] A. Maddi, A. Guessoum, D. Berkani, "A Comparative Study of Speech Modeling Methods", 11th WSEAS International Conference on SYSTEMS, Agios Nikolaos, Crete Island, Greece, July 23-25, 2007, pp. 104-109.
[4] T. Chonavel, "Statistical Signal Processing: Modeling and Estimation", Springer, January 2002.
[5] A. Maddi, A. Guessoum, D. Berkani, "Application of the Recursive Extended Least Squares Method for Modelling a Speech Signal", Second International Symposium on Communications, Control and Signal Processing (ISCCSP'06), Marrakech, Morocco, March 13-15, 2006, CD-ROM proceedings.
[6] R. Ben Abdennour, P. Borne, « Identification et commande numérique des procédés industriels », April 2001.
[7] P. Renevey, "Speech recognition in noisy conditions using missing feature approach", doctoral thesis, École Polytechnique Fédérale de Lausanne, 2000.
[8] D. Landau, « Identification et commande des systèmes », Hermès, Paris, 1998.
[9] A. Maddi, A. Guessoum, D. Berkani, "Using Recursive Least Squares Estimator for Modelling a Speech Signal", 6th WSEAS International Conference on Signal, Speech and Image Processing (SSIP '06), Lisbon, Portugal, September 22-24, 2006.
[10] O. Siohan, « Reconnaissance automatique de la parole continue en environnement bruité : application à des modèles stochastiques de trajectoires », doctoral thesis, Université Henri Poincaré, Nancy I, September 1995.
