DESIGNING A NOVEL EQUALIZER FOR DIGITAL COMMUNICATION SYSTEM IN NEURAL NETWORK PARADIGM

Prof. K. R. Subhashini (a), B. Pavan Kumar (b), K. Pavan Kumar (c)
(a, b) Department of Electrical Engg., NIT, Rourkela-769008, India
(c) Department of Computer Science, University of Memphis, USA
(a) [email protected], (b) [email protected], (c) [email protected]

Abstract: This paper presents a new approach to the equalization of communication channels using an RBF neural network as a classifier. Abundant research has been done on the use of neural networks for the channel equalization problem. The classical gradient-based methods suffer from getting trapped in local minima, while the stochastic methods that can deliver a globally optimal solution need long computation times. This paper presents a novel method in which the task of the equalizer is decentralized: an FIR filter studies the channel characteristics and an RBF neural network classifies the received data. The results show that this method of equalization provides performance comparable to the optimum obtainable with Tabu Search. Moreover, because an FIR filter is used, training is much faster, and the LMS algorithm is computationally very simple.

KEY WORDS: Artificial Neural Networks, Tabu Search, Equalization, Local Minima, Global Solution, Radial Basis Function

1. INTRODUCTION

The Back Propagation (BP) algorithm revolutionized the use of Artificial Neural Networks (ANNs) in diverse fields of science and engineering, such as pattern recognition, function approximation, system identification, data mining, and time-series forecasting. These ANNs construct a functional relationship between input and output patterns through the learning process, and memorize that relationship in the form of weights for later application [1, 2]. The BP algorithm belongs to the family of gradient-based algorithms, which provide an easy way of performing supervised learning in multilayered feed-forward ANNs. The premise of gradient descent [3] is that a downhill movement in the direction of the negative gradient will eventually reach a minimum of the performance surface over the parameter space. Since gradient techniques converge locally, they often get trapped at suboptimal solutions [4], depending on the luck of the initial random starting point. To avoid this, a number of stochastic methods have been developed from evolutionary and nature-inspired metaheuristics, such as Particle Swarm Optimization, Simulated Annealing, Genetic Algorithms, and Ant Colony Optimization. But most of these evolutionary algorithms are complex and need long computation times.
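To make the local-minima issue above concrete, the generic gradient-descent update that BP applies to each weight can be written (standard form, not taken from the paper) as

$$w(n+1) = w(n) - \eta \left.\frac{\partial E}{\partial w}\right|_{w = w(n)}$$

where $\eta$ is the learning rate and $E$ is the error surface; when $E$ is multimodal, the iteration settles into whichever local minimum's basin of attraction contains the random starting point $w(0)$.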

Present-day data communication systems, however, demand a faster algorithm for an adaptive equalizer. In this paper a novel method of equalization is introduced in which the task of an adaptive equalizer is decentralized: an FIR filter adapts to the environment, i.e., estimates the channel condition, and an RBF neural network classifies the received data. The FIR filter is trained using the LMS algorithm. The motivation for using the RBF NN for pattern classification can be seen directly from the channel-state diagram of a practical channel: if a Gaussian kernel is defined for each cluster, the classification will be optimal. Hence this method can serve as a fast and efficient way of equalizing a communication channel. In the results, we compare our method with the traditional Back Propagation (BP) algorithm and with the Tabu Search (TS) algorithm, which has in the recent past earned a reputation for producing near-globally-optimal solutions [5, 6]. Section 2 introduces the concept of channel equalization. Section 3 gives a brief introduction to the Tabu Search algorithm. Section 4 explains the RBF neural network and its role as a pattern classifier. Section 5 presents the proposed algorithms, and Section 6 is dedicated to results and discussion.

2. CHANNEL EQUALIZATION

The two principal causes of distortion [7] in a digital communication channel are Inter-Symbol Interference (ISI) and additive noise. The ISI can be characterized by a Finite Impulse Response (FIR) filter [7, 8]. The noise can be internal or external to the system. At the receiver, the distortion must therefore be compensated in order to reconstruct the transmitted symbols; this process of suppressing channel-induced distortion is called channel equalization. Equalization in a digital communication scenario is illustrated in Fig. 1,

Fig. 1. Schematic of a Digital Communication System

where $s(k)$ is the symbol sequence transmitted through the channel, $N(k)$ represents Additive White Gaussian Noise (AWGN), $r(k)$ is the received data, and $\hat{s}(k-d)$ is an estimate of the transmitted data; $d$ is called the decision delay. The channel can be modeled as an FIR filter with transfer function

$$A(z) = \sum_{i=0}^{n_a - 1} a_i z^{-i} \qquad (1)$$

where $n_a$ is the length of the communication channel impulse response. The symbol sequence $s(k)$ and the channel taps $a_i$ can be complex valued. In this study, however, channels and symbols are restricted to be real valued. This corresponds to the use of multilevel pulse amplitude modulation (M-ary PAM) with a symbol constellation defined by

$$s_i = 2i - M - 1, \quad 1 \le i \le M \qquad (2)$$

For example, $M = 4$ gives the constellation $\{-3, -1, +1, +3\}$. Concentrating on the simpler real case allows us to highlight the basic principles and concepts. In particular, the case of binary symbols ($M = 2$) provides a very useful geometric visualization of the equalization process.
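To make the model concrete, the following C sketch (illustrative, not from the paper) generates M-ary PAM symbols via Eq. (2) and passes them through the FIR channel of Eq. (1) with AWGN. The taps shown are the 3-tap channel $H_1(z)$ used later in Section 6; the noise routine is a standard Box-Muller sketch.

```c
#include <stdlib.h>
#include <math.h>

#define NA    3   /* channel impulse-response length n_a */
#define M_PAM 2   /* PAM order M; M = 2 gives binary +/-1 symbols */

/* Example taps: H1(z) = 0.3482 + 0.8704 z^-1 + 0.3482 z^-2 (from Section 6). */
static const double a[NA] = { 0.3482, 0.8704, 0.3482 };

/* Eq. (2): s_i = 2i - M - 1, 1 <= i <= M; here i is drawn uniformly. */
static double pam_symbol(void)
{
    int i = 1 + rand() % M_PAM;
    return 2.0 * i - M_PAM - 1.0;
}

/* Illustrative Gaussian noise via the Box-Muller transform. */
static double gauss(double sigma)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sigma * sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

/* Eq. (1) as a time-domain convolution: r(k) = sum_i a_i s(k-i) + N(k).
   s[0] = s(k), s[1] = s(k-1), ... */
double channel_output(const double s[NA], double sigma)
{
    double r = 0.0;
    for (int i = 0; i < NA; i++)
        r += a[i] * s[i];
    return r + gauss(sigma);
}
```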

The task of the equalizer is to reconstruct the transmitted symbols as accurately as possible from the noisy channel observations $r(k)$. Equalizers fall into two categories: symbol-decision equalizers and sequence-estimation equalizers. The latter type is rarely used, as it is computationally very expensive. Symbol-decision equalizers were initially implemented using linear transversal filters; later, the advent of ANNs made possible equalizer models with superior performance in terms of Bit Error Rate (BER) compared to FIR modeling.

3. TABU SEARCH ALGORITHM

Tabu search can be thought of as an iterative descent method [9]. An initial solution is randomly generated and a neighborhood around that solution is examined. If a new solution found in the neighborhood is preferred to the original, the new solution replaces the old and the process repeats. If no solution improves upon the old function evaluation, then, unlike a gradient descent procedure, which would stop at that point (a local minimum), the TS algorithm may continue by accepting a new value that is worse than the old one. A collection of solutions in a given neighborhood is therefore generated, and the final solution is the best found so far for that particular neighborhood. To avoid cycling, an additional step prohibits solutions from recurring (hence the name Tabu) for a user-defined number of iterations. This Tabu List (TL) is maintained by adding the latest solution to the beginning of the list and discarding the oldest solution from the end. During this procedure, the best solution found so far is retained. If a newly generated weight vector is not in the TL, the search goes on and the weight vector is added to the TL; a solution is rejected when all of its weights lie within the Tabu Area (TA) of some entry in the TL.
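A minimal C sketch of the tabu loop just described, under several assumptions: the cost function (e.g., the equalizer's MSE) is supplied externally, the neighborhood is sampled by random perturbation, and the tabu list is a fixed-length circular buffer. The aspiration criterion, discussed next, is omitted for brevity; all sizes are placeholders.

```c
#include <stdlib.h>
#include <string.h>
#include <math.h>

#define DIM    10     /* number of weights being optimized (placeholder) */
#define TL_LEN 20     /* tabu list length (placeholder) */
#define N_NBR  15     /* neighborhood samples per iteration (placeholder) */
#define TA     0.05   /* tabu area around each list entry (placeholder) */

extern double cost(const double *w);   /* e.g. equalizer MSE; supplied elsewhere */

static double tl[TL_LEN][DIM];         /* circular tabu list */
static int tl_count = 0, tl_head = 0;

static double rnd(void) { return 2.0 * rand() / RAND_MAX - 1.0; }

/* A candidate is rejected when every weight lies within TA of some TL entry. */
static int is_tabu(const double *w)
{
    for (int e = 0; e < tl_count; e++) {
        int inside = 1;
        for (int i = 0; i < DIM; i++)
            if (fabs(w[i] - tl[e][i]) > TA) { inside = 0; break; }
        if (inside) return 1;
    }
    return 0;
}

static void tl_add(const double *w)
{
    memcpy(tl[tl_head], w, DIM * sizeof(double));
    tl_head = (tl_head + 1) % TL_LEN;   /* oldest entry gets overwritten */
    if (tl_count < TL_LEN) tl_count++;
}

/* Core loop: examine a neighborhood, move to its best non-tabu point
   (even if worse than the current one), and retain the best found so far. */
void tabu_search(double *best, int iters)
{
    double cur[DIM], cand[DIM], nbr[DIM];
    for (int i = 0; i < DIM; i++) cur[i] = best[i] = rnd();
    double best_cost = cost(best);

    for (int it = 0; it < iters; it++) {
        double nbr_cost = HUGE_VAL;
        for (int n = 0; n < N_NBR; n++) {
            for (int i = 0; i < DIM; i++)
                cand[i] = cur[i] + 0.1 * rnd();   /* random perturbation */
            if (is_tabu(cand)) continue;          /* skip tabu solutions */
            double c = cost(cand);
            if (c < nbr_cost) { nbr_cost = c; memcpy(nbr, cand, sizeof nbr); }
        }
        if (nbr_cost == HUGE_VAL) continue;       /* entire neighborhood tabu */
        memcpy(cur, nbr, sizeof cur);             /* accept, better or worse */
        tl_add(cur);
        if (nbr_cost < best_cost) {               /* keep best-so-far */
            best_cost = nbr_cost;
            memcpy(best, cur, sizeof cur);
        }
    }
}
```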

The Aspiration Criterion (AC) is used to reactivate solutions that are tabu but around which superior solutions exist. Owing to these characteristics, Tabu Search serves as a very efficient search algorithm and has attracted wide attention for use in constrained optimization problems [5, 6].

4. RBF NEURAL NETWORK EQUALIZER

In this section, an introduction to the RBF neural network is given first; the novel use of the RBF as a pattern classifier in the equalization problem is then explained.

4.1 Introduction to RBF Neural Network

Radial Basis Functions emerged as a variant of the Artificial Neural Network in which they are embedded in a two-layered network following the input layer. In detail, a radial basis function neural network is a three-layered network with an input layer, a hidden layer with nonlinear RBF activation functions, and a linear output layer that forms a weighted sum of all the hidden outputs [1].
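To fix ideas, here is a minimal C sketch of the forward pass of such a network: Gaussian kernels in the hidden layer and a linear weighted sum at the output. Dimensions, centers, and weights are placeholders.

```c
#include <math.h>

#define M_IN  2    /* input dimension m (placeholder) */
#define N_CTR 4    /* number of hidden (RBF) nodes (placeholder) */

/* phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2)); output = sum_j w_j phi_j(x) */
double rbf_forward(const double x[M_IN],
                   const double c[N_CTR][M_IN],   /* centers */
                   const double w[N_CTR],         /* output-layer weights */
                   double sigma2)                 /* spread (variance) */
{
    double y = 0.0;
    for (int j = 0; j < N_CTR; j++) {
        double d2 = 0.0;
        for (int i = 0; i < M_IN; i++) {
            double diff = x[i] - c[j][i];
            d2 += diff * diff;
        }
        y += w[j] * exp(-d2 / (2.0 * sigma2));
    }
    return y;
}
```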

Figure 2: Radial Basis Function Neural Network

4.2 RBF Based Equalizer

In this method the task of an adaptive equalizer is divided into two sub-tasks: adapting to the environment (estimating the channel) and classifying the received symbols into one of the channel states. Accordingly, two different structures are used for the two sub-tasks, so the task of the adaptive equalizer is decentralized. Since, from the classical theory of equalization, practical channels are modeled as FIR filters, a linear transversal filter adapted to the channel characteristics is used for the first sub-task. For the pattern-classification problem in equalization, earlier research shows that an RBF provides near-Bayesian classification [10]; hence the RBF is chosen here as the classifier. The resulting structure is shown in Fig. 3.

Fig. 3 Schematic of a Digital Communication System with decentralized equalization.

During the training process, the channel is estimated by training the linear transversal filter with the LMS algorithm. The estimated channel information is then used to calculate the centers of the RBF structure. Since the task of the equalizer is decentralized, it has the added advantage of faster training, and because the RBF is used for classification, the performance is near-Bayesian. As mentioned earlier, a linear transversal filter is used to study the channel. During the training phase, a known data sequence is fed both to the channel to be estimated and to the FIR filter, and the error between the two outputs is used to adjust the filter weights. Since the LMS algorithm is used, training is simple and fast: in normal channel conditions, 20-30 training iterations of the FIR filter are sufficient, compared with a minimum of 200 or more iterations required by gradient-based or stochastic training of a full neural equalizer. Once training is complete, the weights of the FIR filter are fixed and the centers of the RBF nodes are calculated; the center calculation is explained in the algorithm below. Once the centers are calculated, the spread of each node is set equal to the noise variance, the values are fixed, and the RBF performs its task of classification. The classification is not as simple as in other pattern-classification problems: the node giving the maximum output is noted, and the transmitted symbol is then estimated using the decision delay and the signal sequence responsible for that center.

5. PROPOSED ALGORITHMS

The algorithms for the proposed equalizer are given below. There are two: (a) trains the FIR filter using the LMS algorithm, and (b) classifies the data using the RBF neural network.

Fig. 4 FIR transversal filter

a. Channel estimation using the FIR filter trained with the LMS algorithm:

1. Initialize the weights to random values.
2. Calculate the output of the channel, $y(n) = h(n) * s(n) + v(n)$, which serves as the desired response $d(n)$.
3. Calculate the output of the filter, $\hat{y}(n) = \hat{h}(n) * s(n)$, where $\hat{h}(n)$ represents the weight vector $[\hat{h}(0), \hat{h}(1), \ldots, \hat{h}(p-1)]$, $p$ is the filter order, and $s(n)$ is the input vector $[s(n), s(n-1), \ldots, s(n-p)]$.
4. Calculate the error $e = y(n) - \hat{y}(n)$.
5. Update the weights using $\hat{h}(n+1) = \hat{h}(n) + \mu\, e\, s(n)$.
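As a minimal illustration, steps 3-5 of algorithm (a) reduce to a few lines of C. This is a sketch under the paper's notation; the training sequence, the channel output $y(n)$, and the step size $\mu$ are assumed to be supplied by the caller.

```c
#include <stddef.h>

/* One LMS iteration (steps 3-5 above).
   h_hat : current weight estimate [h_hat(0) ... h_hat(p-1)]
   s     : input vector [s(n), s(n-1), ..., s(n-p+1)]
   y     : desired response, i.e. the observed channel output y(n)
   mu    : LMS step size
   Returns the error e = y(n) - y_hat(n). */
double lms_update(double *h_hat, const double *s, size_t p,
                  double y, double mu)
{
    double y_hat = 0.0;
    for (size_t i = 0; i < p; i++)   /* step 3: y_hat(n) = h_hat(n) * s(n) */
        y_hat += h_hat[i] * s[i];

    double e = y - y_hat;            /* step 4: error */

    for (size_t i = 0; i < p; i++)   /* step 5: h_hat(n+1) = h_hat(n) + mu e s(n) */
        h_hat[i] += mu * e * s[i];
    return e;
}
```

In line with Section 4.2, 20-30 calls to such an update are typically enough for the FIR filter to settle in normal channel conditions.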

b. Classification algorithm for the RBF:

1. Generate the matrix $SC$, of order $2^{n_a+m-1} \times (n_a+m-1)$, whose rows are all possible combinations of $\pm 1$ values:

$$SC = \begin{bmatrix} -1 & -1 & \cdots & -1 \\ -1 & -1 & \cdots & +1 \\ \vdots & \vdots & & \vdots \\ +1 & +1 & \cdots & +1 \end{bmatrix}$$

2. Calculate all possible centers by convolving the matrix $SC$ with the estimated channel. The $2^{n_a+m-1}$ centers, each $m$-dimensional, are given by the rows of the matrix obtained as the result of the convolution. For example, for a channel with transfer function $1 + 0.5z^{-1}$ (so $n_a = 2$) and for $m = 2$, the centers are formed as follows:

$$\begin{bmatrix} -1 & -1 & -1 \\ -1 & -1 & +1 \\ -1 & +1 & -1 \\ -1 & +1 & +1 \\ +1 & -1 & -1 \\ +1 & -1 & +1 \\ +1 & +1 & -1 \\ +1 & +1 & +1 \end{bmatrix} * \begin{bmatrix} 1 \\ 0.5 \end{bmatrix} = \begin{bmatrix} -1.5 & -1.5 \\ -1.5 & -0.5 \\ -0.5 & 0.5 \\ -0.5 & 1.5 \\ 0.5 & -1.5 \\ 0.5 & -0.5 \\ 1.5 & 0.5 \\ 1.5 & 1.5 \end{bmatrix}$$

3. Depending on the feedback element, eliminate half of the centers. For example, if $\hat{s}(k-d-1)$ is $+1$, i.e., if the previous feedback element is $+1$, eliminate all the centers corresponding to the transmitted bit $s(k-d-1)$ being $-1$, and vice versa. Similarly, eliminate centers for all the feedback elements to get the dynamic centers (the centers remaining after elimination).
4. The number of dynamic centers is $2^{n_a+m-1} / 2^{n_b}$.
5. These dynamic centers become the centers of the RBF nodes, as shown in Fig. 5.
6. Calculate the spread for each RBF node (spread = noise variance).
7. Give the received noisy signal as input to the RBF classifier.
8. Note the RBF node that gives the maximum output, and correspondingly note its center (channel state).
9. Extract the transmitted sequence, and hence the transmitted bit responsible for this channel state; this is the estimate of the transmitted data.
10. Compare the estimated data with the actual transmitted data and calculate the BER.
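The center construction and classification (steps 1-2 and 7-8 above) can be sketched in C as follows. The sizes match the worked example ($n_a = 2$, $m = 2$); the feedback-based elimination of step 3 is omitted for brevity, and all names are illustrative.

```c
#include <math.h>

#define NA_HAT 2                    /* estimated channel length (example) */
#define M_EQ   2                    /* equalizer input dimension m (example) */
#define NSEQ   (NA_HAT + M_EQ - 1)  /* symbols per input sequence */
#define NCTR   (1 << NSEQ)          /* 2^(na+m-1) candidate centers */

/* Steps 1-2: enumerate the +/-1 rows of SC and convolve each with the
   estimated channel h_hat; row bit b (MSB first) maps to s(k-b). */
void make_centers(const double h_hat[NA_HAT], double ctr[NCTR][M_EQ])
{
    for (int row = 0; row < NCTR; row++) {
        double s[NSEQ];
        for (int b = 0; b < NSEQ; b++)
            s[b] = ((row >> (NSEQ - 1 - b)) & 1) ? 1.0 : -1.0;
        for (int j = 0; j < M_EQ; j++) {        /* j-th component = r(k-j) */
            double v = 0.0;
            for (int i = 0; i < NA_HAT; i++)
                v += h_hat[i] * s[j + i];
            ctr[row][j] = v;
        }
    }
}

/* Steps 7-8: evaluate the Gaussian response of every node for the received
   vector r and return the index of the winning center (channel state). */
int classify(const double r[M_EQ], const double ctr[NCTR][M_EQ], double sigma2)
{
    int best = 0;
    double best_out = -1.0;
    for (int j = 0; j < NCTR; j++) {
        double d2 = 0.0;
        for (int i = 0; i < M_EQ; i++) {
            double diff = r[i] - ctr[j][i];
            d2 += diff * diff;
        }
        double out = exp(-d2 / (2.0 * sigma2));
        if (out > best_out) { best_out = out; best = j; }
    }
    return best;   /* step 9 maps this state back to the transmitted bit */
}
```

For the worked example, calling make_centers with h_hat = {1.0, 0.5} reproduces the eight centers shown above.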

Fig. 5 Schematic of an RBF NN equalizer

6. SIMULATION AND RESULTS

The results obtained using the proposed algorithms, with BP, TS, and the RBF scheme used to train the neural network, are shown here, and the improvement in performance in terms of average bit error rate and standard deviation is discussed. The experimental results for the RBF neural network with the LMS-trained FIR filter are compared with the previously proposed structures (an ANN equalizer trained with both BP and Tabu) in terms of SNR (dB) and average BER (dB). All the programs were written in C and compiled with Microsoft VC++ 6.0, and the plots were produced with Microsoft Excel 2003/07. Two channels are considered, a simple 3-tap channel and a 5-tap channel, with two different neural network structures. Each of the three algorithmically trained equalizers was executed 10 times, and the average bit error rate (BER) and the corresponding standard deviation were calculated for different signal-to-noise ratios (SNR).

From the graph shown in Fig. 6 it can be noted that similar performance is achieved by all three equalizers (RBF, Tabu-based, and BP-trained) for a simple channel such as $H_1(z) = 0.3482 + 0.8704z^{-1} + 0.3482z^{-2}$, with $m = 2$, $n_b = 1$, $d = 2$ and one hidden layer containing a single neuron. Better performance is observed for a higher-order channel such as the 5-tap channel $H_2(z) = 0.9413 + 0.3841z^{-1} + 0.5684z^{-2} + 0.4201z^{-3} + 1.0z^{-4}$, with the RBF neural network structure taking $m = 5$, $n_b = 4$ and one neuron in the hidden layer. For testing, $10^5$ samples were used. The graph in Fig. 7 shows that at 18 dB SNR, BP ends with an average BER of -0.79 dB, whereas the RBF-based equalizer gives an average BER of -3.4 dB.

Fig. 6 SNR vs BER for RBF, Tabu-based weight adaptation, and BP algorithm for 2-1-1 structures.

Fig. 7 SNR vs BER for RBF, Tabu-based weight adaptation, and BP algorithm for 5-1-1 structures.

Standard deviation values of the BER were also calculated for both channels and are displayed in the tables below: Table 1 for H1(z) and Table 2 for H2(z).

Table 1 Standard deviations for H1(z)

SNR(dB)   BP-sd      TABU-sd    RBF-sd
0         0.002763   0.000746   0.000312
2         0.002265   0.003184   0.000375
4         0.005029   0.001993   0.000441
6         0.003622   0.004854   0.000614
8         0.008399   0.01125    0.001146
10        0.013847   0.011062   0.000954
12        0.021207   0.021043   0.002795
14        0.052868   0.050981   0.010687
16        0.122811   0.097648   0.035266
18        0.243315   0.162811   0.109448

Table 2 Standard deviations for H2(z)

SNR(dB)   BP-sd      Tabu-sd    RBF-sd
0         0.003603   0.007777   0.001016
2         0.004516   0.006363   0.001348
4         0.009458   0.007509   0.001626
6         0.005127   0.011989   0.001505
8         0.010456   0.015149   0.002191
10        0.029949   0.020044   0.004627
12        0.051304   0.022374   0.004893
14        0.081974   0.026624   0.007269
16        0.134614   0.026974   0.008246
18        0.212204   0.029719   0.011015

From the above tables it can be seen that when noise bursts occur at higher SNR, the channel effect is much smaller for the RBF-based equalizer than for the Tabu-based and BP-trained equalizers, so the RBF equalizer also copes well with noise bursts. The SNR vs standard deviation plots for these tables are shown in Figs. 8 and 9 for quick reference.

Fig. 8 SNR vs Std Dev for RBF, Tabu-based weight adaptation, and BP algorithm for 2-1-1 structures, H1(z)

Fig. 9 SNR vs Std Dev for RBF, Tabu-based weight adaptation, and BP algorithm for 5-1-1 structures, H2(z)

7. CONCLUSION

In this paper a novel method of equalizing a communication channel using an RBF neural network is presented. Tabu Search (TS) can be used as a global search technique for optimizing the synaptic weights and has the potential to escape local minima; results show that the TS algorithm gives superior performance compared to the BP algorithm. But every algorithm has its pros and cons, and TS is no exception: it takes more time to search for the optimal solution and is computationally more complicated than the BP algorithm. TS is also sensitive to the chosen search range and, being a search technique, may not find the optimal solution within the given number of iterations. In the proposed equalizer structure, an RBF NN classifies the received data while an FIR filter keeps track of the channel, and the results show that this gives better performance than both the BP algorithm and the Tabu-based algorithm. The method is computationally simple, since the LMS algorithm is used to train the FIR filter, and the number of adjustable weight parameters is much smaller than in the gradient-based and global-search algorithms.

References

[1] S. Haykin, Neural Networks: A Comprehensive Foundation (2nd Ed., Pearson Education, 2001).
[2] R. P. Lippmann, An Introduction to Computing with Neural Nets, IEEE ASSP Magazine, 1987, 4-22.
[3] B. Widrow and S. D. Stearns, Adaptive Signal Processing (Englewood Cliffs, NJ: Prentice Hall, 1985).
[4] S. Haykin, Adaptive Filter Theory (4th Ed., Pearson Education, 2002).
[5] F. Glover, Tabu Search - Part I, ORSA Journal on Computing, vol. 1, 1989, 190-206.
[6] F. Glover, Tabu Search - Part II, ORSA Journal on Computing, vol. 2, 1990, 4-32.
[7] S. Qureshi, Adaptive Equalization, Proc. IEEE, 1985, 1349-1387.
[8] J. G. Proakis, Digital Communications (New York: McGraw-Hill, 1983).
[9] A. Hertz, E. Taillard, and D. de Werra, A Tutorial on Tabu Search, Proc. of Giornate di Lavoro AIRO'95 (Enterprise Systems: Management of Technological and Organizational Changes), 1995, 13-24.
[10] S. K. Patra, Theoretical derivation of minimum mean square error of RBF based equalizer, Signal Processing, vol. 87, 2007, 1613-1625.
