ICCTA 2013, 29-31 October 2013, Alexandria, Egypt

ECG Signal Compression Using A Proposed Inverse Technique

Hadeel Adel 1, O. Zahran 1, Taha E. Taha 1, Waleed Al-Nauimy 2, S. El-Halafawy 1, El-Sayed M. El-Rabaie 1, Fathi E. Abd El-Samie 1

1 Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt.
2 Department of Electrical and Electronic Engineering, University of Liverpool, UK.
E-mail: [email protected]

Abstract—In this paper, a new electrocardiogram (ECG) data compression algorithm is proposed. The algorithm first performs a preprocessing operation to convert the 1-D ECG signal into a 2-D array. This preprocessing operation includes detecting the QRS complexes, after which alignment and period sorting are used to convert the ECG signal into a matrix. Normalization is then performed to scale the values of the matrix, so that the 2-D ECG can be treated as a grayscale image. The algorithm then applies decimation to this image for compression. The reconstruction of the original ECG signal can be performed using inverse interpolation techniques such as the linear minimum mean square error (LMMSE), the maximum entropy, and the regularization theory approaches.

Keywords: ECG; Decimation; Interpolation; LMMSE; Maximum Entropy; Regularization Theory.

I. INTRODUCTION

Efficient compression of ECG signals has become a necessary tool in telemedicine, both for data transfer and for archiving in conventional diagnostic systems, which require a unified standard for ECG coding and transmission. ECG compression algorithms can be classified into direct methods, transform-domain methods, and parameter extraction methods. Direct methods attempt to remove redundancy by acting directly on the temporal consecutive samples of an ECG record, such as Amplitude Zone Time Epoch Coding (AZTEC) [1], Turning Point (TP) [2], the Coordinate Reduction Time Encoding System (CORTES) [3], and the FAN algorithm [4]. In transform-domain methods, the original samples are transformed to another domain to achieve better compression, as is the case with the Discrete Cosine Transform (DCT) [5]-[6] and the Discrete Wavelet Transform (DWT) [7]. Parameter extraction methods include a preprocessor to extract some features that are used to reconstruct the signal, as in long-term prediction [8] and vector quantization [9]. ECG signals have both sample-to-sample (intra-beat) and beat-to-beat (inter-beat) correlation. Some 2-D ECG compression approaches have been proposed for better compression performance, such as the Lee and Buckley algorithm, which applies the DCT to a constructed 2-D ECG array [10], and the work of Bilgin et al., which applies JPEG2000 to a similarly constructed image; both works achieved better results.

In this paper, we discuss 2-D ECG compression using decimation and interpolation with three main techniques, namely the linear minimum mean square error (LMMSE) [11]-[13], maximum entropy [12]-[13], and regularization theory [14]-[15] approaches, and we compare their performances. The paper is organized as follows. Section II discusses how to convert the 1-D ECG signal into a 2-D ECG data array. Section III gives details about the decimation process. Section IV explains the image interpolation details. Section V shows the simulation results, followed by the conclusions and the most relevant references.


II. 1-D ECG SIGNAL CONVERSION TO 2-D ECG DATA ARRAY

Because of the intra-beat and inter-beat correlation of ECG signals, 2-D ECG compression algorithms have better performance [16]. To map the 1-D ECG signal to a 2-D array, the peaks of the QRS complexes should be detected to identify each heartbeat period (namely, the R-R interval). After that, the original 1-D ECG signal is cut before each R peak. Because the period irregularity of ECG signals presents a challenge to 2-D array construction, we apply resampling to normalize the time duration of each cycle and set it to a constant number of samples, i.e., 256 samples in each period (heartbeat). By this method, we obtain a matrix in which each row is one heartbeat of the ECG. Then, the values of the 2-D array (matrix) are normalized by scaling the amplitude of each sample to the range 0 to 255, so that the 2-D ECG can be treated as a grayscale image. These processes are named cut and align (C&A). The block diagram of the construction of the 2-D ECG data array is shown in Fig. 1.

Fig. 1. Block diagram for the construction of the 2-D ECG data array.
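A minimal sketch of this cut-and-align construction is given below, assuming the ECG record is available as a NumPy array. The simple threshold-based peak picker is only a stand-in for a proper QRS detector such as [16], and R-to-R segments are used as the heartbeat periods.

# A sketch of the cut-and-align (C&A) construction: detect R peaks, cut the
# signal into beats, resample each beat to a fixed length, and scale to 0..255.
# The crude peak picker below is an assumption, not the detector of the paper.
import numpy as np
from scipy.signal import find_peaks, resample

def ecg_to_2d(ecg, fs, beat_len=256):
    """Convert a 1-D ECG signal into a 2-D array with one beat per row."""
    # Crude R-peak detection: peaks above 60% of the maximum, at least 0.4 s apart.
    r_peaks, _ = find_peaks(ecg, height=0.6 * np.max(ecg), distance=int(0.4 * fs))
    beats = []
    for start, stop in zip(r_peaks[:-1], r_peaks[1:]):
        # Resample every R-to-R segment to a fixed length to handle period irregularity.
        beats.append(resample(ecg[start:stop], beat_len))
    array_2d = np.vstack(beats)
    # Normalize amplitudes to 0..255 so the array can be treated as a grayscale image.
    array_2d -= array_2d.min()
    array_2d = (255.0 * array_2d / array_2d.max()).astype(np.uint8)
    return array_2d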

III. IMAGE DECIMATION

The decimation algorithm scans through lines of pixels, or groups of pixels, according to the decimation downscale factor (M). As a result, a Gaussian pyramid of varying image resolutions is obtained. The decimation process is shown in Fig. 2.


Fig. 2. Block diagram of the image decimation process. Here, x(i,j) is the input image, h(n3,n4) is the convolution averaging mask, and c(n1,n2) is the convolved image without zero padding. y(m,n) is the output decimated image, given by y(m,n) = c(mM, nM), where M is the decimation downscale factor, 0 ≤ m ≤ (n1/M), and 0 ≤ n ≤ (n2/M). The resulting image is a reduced-size mirror of the original image, faithful in tonality to the original but smaller in size.
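As a rough sketch of this decimation step, the code below blurs the image with a small averaging mask and then keeps every M-th pixel in each direction; the 3 x 3 mask size is an assumption, since the paper does not specify the mask.

# Image decimation by a downscale factor M: convolve with an averaging mask h,
# then keep every M-th row and column of the convolved image.
import numpy as np
from scipy.ndimage import convolve

def decimate_image(x, M=2, mask_size=3):
    """Decimate image x by factor M after averaging-mask convolution."""
    h = np.ones((mask_size, mask_size)) / mask_size**2   # averaging mask h(n3, n4)
    c = convolve(x.astype(float), h, mode='nearest')     # convolved image c(n1, n2)
    y = c[::M, ::M]                                      # y(m, n) = c(mM, nM)
    return y

Applying this step repeatedly with increasing downscale factors yields the pyramid of progressively lower-resolution images mentioned above.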



IV. INTERPOLATION

Image interpolation is conventionally defined as the representation of an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions, which amounts to a mixed convolution operation. Image interpolation has a wide range of applications because it allows the user to change the size of an image interactively, to concentrate on some details, or to get a better overview. The classical approach to image interpolation is to use the sinc basis function to map a discrete signal to a continuously defined signal [17]-[18]. The sinc-based approach suffers from several problems, such as its infinite support, the slow rate of convergence of a sum of shifted sincs, and its instability in the presence of noise [18]. B-spline interpolation [17]-[19] is used to avoid these problems, and it has been implemented in the form of digital filtering [17]-[19]. Because of the computational complexity of B-spline interpolation of orders greater than one, researchers have suggested a modified cubic image interpolation algorithm that does not require such computational complexity [20]. Other image interpolation approaches have also been introduced [21]-[22]. Since interpolation allows the user to vary the size of images interactively and to concentrate on details, it is widely used in image processing systems. Hence, interpolation can be used to obtain a high-resolution (HR) image from a low-resolution (LR) one. HR images are required in many fields, such as medical imaging, remote sensing, satellite imaging, image compression and decompression, and HDTV. Image interpolation is also required in almost every geometric transformation, such as translation, rotation, scaling, and registration.

The process of image interpolation aims at estimating intermediate pixels between the known pixel values. To estimate the intermediate pixel at position x, the neighboring pixels and the distance s are incorporated into the estimation process. For an equally spaced 1-D sampled data sequence f(x_k), many interpolation functions can be used. The value to be estimated, f̂(x), can, in general, be written in the form [23]-[24]:

\hat{f}(x) = \sum_{k=-\infty}^{\infty} c(x_k)\, \beta(x - x_k)    (1)

where β(x) is the interpolation basis function, and x and x_k represent continuous and discrete spatial distances, respectively. The values of c(x_k) are called the interpolation coefficients and need to be estimated prior to the interpolation process.
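As an illustration of Eq. (1), the sketch below evaluates the weighted sum for a 1-D sequence with a linear (triangular) basis, for which the interpolation coefficients reduce to the samples themselves; the choice of basis is an assumption made only for illustration.

# Weighted-sum interpolation model of Eq. (1) with a linear (triangular) basis
# beta(x) = max(0, 1 - |x|). For this basis the coefficients c(x_k) equal the
# samples f(x_k); higher-order bases (e.g. cubic B-splines) would require a
# prefiltering step to obtain the coefficients.
import numpy as np

def beta_linear(x):
    return np.maximum(0.0, 1.0 - np.abs(x))

def interpolate_1d(samples, x, spacing=1.0):
    """Evaluate f_hat(x) = sum_k c(x_k) * beta(x - x_k) for equally spaced samples."""
    xk = np.arange(len(samples)) * spacing   # sample positions x_k
    c = samples                              # coefficients (linear-basis case)
    return np.sum(c * beta_linear((x - xk) / spacing))

# Example: estimate the value half-way between the second and third samples.
f = np.array([0.0, 1.0, 4.0, 9.0])
print(interpolate_1d(f, 1.5))                # -> 2.5 for the linear basis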

A. Linear Minimum Mean Square Error (LMMSE) Interpolation Approach

The LMMSE criterion requires the mean square error of the estimation to be minimum over the entire ensemble of all possible estimates of the image. The LMMSE estimate of the reconstructed image is given by [11]-[13]:

\hat{f} = R_f D^{T} \left( D R_f D^{T} + R_v \right)^{-1} g    (2)

where f̂ is the estimate of the reconstructed image, g is the decimated image, D is the decimation operator, and R_f and R_v are the autocorrelation matrices of the image and the noise, respectively. There are three problems in this algorithm. The first problem is the estimation of the autocorrelation matrix of the reconstructed image; it can be solved by approximating R_f by a diagonal sparse matrix. The second problem is the noise variance estimation, which can be solved by estimating this value from the available decimated image. The third problem is the matrix inversion process required for estimating the reconstructed image; its solution depends on the approximation of R_f as a diagonal matrix.
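For concreteness, the following is a minimal 1-D sketch of the estimate in Eq. (2). The exponential autocorrelation model, decimation factor, and noise variance are assumptions made for illustration; the paper instead approximates R_f by a diagonal sparse matrix to keep the matrix inversion cheap, as discussed above.

# 1-D sketch of the LMMSE reconstruction of Eq. (2):
#   f_hat = R_f D^T (D R_f D^T + R_v)^(-1) g
# The exponential correlation model for R_f and the parameter values below are
# illustrative assumptions, not the paper's settings.
import numpy as np

def lmmse_reconstruct(g, M, signal_var=1.0, noise_var=1e-3, rho=0.95):
    """Reconstruct a length N = M*len(g) signal from its decimated version g."""
    N = M * len(g)
    D = np.zeros((len(g), N))
    D[np.arange(len(g)), np.arange(0, N, M)] = 1.0                  # decimation operator
    idx = np.arange(N)
    Rf = signal_var * rho ** np.abs(idx[:, None] - idx[None, :])    # image autocorrelation
    Rv = noise_var * np.eye(len(g))                                 # diagonal noise autocorrelation
    gain = Rf @ D.T @ np.linalg.inv(D @ Rf @ D.T + Rv)              # LMMSE gain of Eq. (2)
    return gain @ g

# Example: reconstruct 8 samples from 4 decimated samples (M = 2).
g = np.array([0.0, 1.0, 0.5, -0.2])
print(lmmse_reconstruct(g, M=2))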

B. Maximum Entropy Interpolation Approach

This is a mathematical model for image interpolation based on the maximization of the entropy of the reconstructed image a priori. If the samples of the required reconstructed image are assumed to have unit energy, they can be treated as if they were probabilities, possibly of the numbers of photons present at the ith sample of the required reconstructed image [12]-[13]. The required image is thus treated as light quanta associated with each pixel value. Hence, the entropy of the required image is defined as follows:

H_e = -\sum_{i=1}^{N^2} f_i \log_2(f_i)    (3)

where H_e is the entropy and f_i is the sampled signal.
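As a small illustration of Eq. (3), the sketch below evaluates the entropy of an image whose normalized pixel values are treated as probabilities; it computes only the objective that the maximum entropy approach maximizes, not the full interpolation algorithm.

# Entropy measure of Eq. (3): pixel values are normalized so that they sum to
# one (the unit-energy assumption above) and then treated as probabilities.
import numpy as np

def image_entropy(image):
    f = image.astype(float).ravel()
    f = f / f.sum()                    # treat normalized pixel values as probabilities
    f = f[f > 0]                       # avoid log2(0); zero entries contribute nothing
    return -np.sum(f * np.log2(f))     # H_e = -sum_i f_i log2(f_i)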


C. Regularized Image Interpolation

Regularization theory, which was basically introduced by Tikhonov and Miller [12]-[25], provides a formal basis for the development of regularized solutions to ill-posed problems. The stabilizing functional approach is one of the basic methodologies for the development of regularized solutions. The estimate of the reconstructed image is given by [13]-[26]:

\hat{f} = \left( D^{T} D + \lambda Q^{T} Q \right)^{-1} D^{T} g    (4)

The role of the regularization operator Q is to move the small eigenvalues of D away from zero while leaving the large eigenvalues unchanged. It also incorporates prior knowledge about the required degree of smoothness of f into the interpolation process. The regularization operator Q is a finite-difference matrix chosen to minimize the second-order (or higher-order) difference energy of the estimated image. The 2-D Laplacian is preferred for minimizing the second-order difference energy, and it is the most popular regularization operator [14]-[15]. The regularization parameter λ controls the trade-off between fidelity to the data and the smoothness of the solution. In this paper, we suggest another solution to the regularized image interpolation problem. This solution is implemented by segmenting the decimated image into overlapping segments and interpolating each segment separately using Eq. (4) as an inversion process. It is clear that, if a global regularization parameter is used, only a single matrix inversion for a matrix of moderate dimensions is required, because the term (D^T D + λ Q^T Q)^{-1} is independent of the image to be interpolated. Thus, the suggested solution is efficient from the point of view of computation cost.
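A minimal 1-D sketch of Eq. (4) follows; the second-difference form of Q and the value of λ are illustrative assumptions (the paper uses the 2-D Laplacian on images).

# 1-D sketch of the regularized reconstruction of Eq. (4):
#   f_hat = (D^T D + lambda * Q^T Q)^(-1) D^T g
# Q is built as a second-difference (discrete Laplacian) operator; lam is an
# arbitrary illustrative value.
import numpy as np

def regularized_reconstruct(g, M, lam=0.1):
    N = M * len(g)
    D = np.zeros((len(g), N))
    D[np.arange(len(g)), np.arange(0, N, M)] = 1.0    # decimation operator
    Q = (np.diag(-2.0 * np.ones(N))                   # second-difference operator
         + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1))
    A = D.T @ D + lam * (Q.T @ Q)                     # independent of the image g
    return np.linalg.solve(A, D.T @ g)

# Example: reconstruct 8 samples from 4 decimated samples (M = 2).
g = np.array([0.0, 1.0, 0.5, -0.2])
print(regularized_reconstruct(g, M=2))

Since (D^T D + λ Q^T Q) does not depend on g, its factorization could be computed once and reused for all overlapping segments, which is the computational advantage noted above.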

V. SIMULATION RESULTS

Simulation results are carried out on a 2-D ECG image using Matlab (R2007b). Three interpolation techniques, namely LMMSE, maximum entropy, and regularization, are tested on the decimated image, and the final image is then compared with the original image. Table I shows the relation between the mean square error (MSE) and the compression ratio (CR) for the three techniques at different CR values. Figure 3 shows the images before and after applying the LMMSE technique at a CR of 2:1. Figure 4 shows the images before and after applying the maximum entropy technique at a CR of 2:1. Figure 5 shows the images before and after applying the regularization technique at a CR of 2:1. Figure 6 shows the images before and after applying the LMMSE technique at a CR of 4:1. Figure 7 shows the images before and after applying the maximum entropy technique at a CR of 4:1. Figure 8 shows the images before and after applying the regularization technique at a CR of 4:1. Results show that the regularization technique gives the best MSE results at different CR values.

The distortion between the original and reconstructed signals is measured by the percent root mean square difference (PRD):

\mathrm{PRD}\,\% = \sqrt{\frac{\sum_{i=1}^{n}\left(x_{org}(i) - x_{rec}(i)\right)^{2}}{\sum_{i=1}^{n} x_{org}^{2}(i)}} \times 100    (5)

where x_org denotes the original data, x_rec denotes the reconstructed data, and n is the number of samples. The compression ratio (CR) is calculated as the number of bits in the original signal divided by the number of bits in the compressed signal. Table II shows the relation between the PRD and the CR for the three techniques at different CR values.

Table I: Relation between CR and MSE for the three techniques.

CR      MSE (LMMSE)     MSE (Maximum Entropy)     MSE (Regularization)
2:1     9.16 x 10^-4    4.73 x 10^-4              1.03 x 10^-4
4:1     0.002           5.32 x 10^-4              4.23 x 10^-4
8:1     0.005           0.0018                    9.78 x 10^-4

Table II: Relation between CR and PRD for the three techniques.

CR      PRD (LMMSE)     PRD (Maximum Entropy)     PRD (Regularization)
2:1     2.7764          1.1556                    0.9613
4:1     3.4175          2.2778                    1.9834
8:1     4.9702          4.8713                    3.334
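The quality measures reported in Tables I and II might be computed along the following lines; this is only a sketch of the metrics themselves and does not reproduce the reported values.

# Evaluation metrics used above: MSE, PRD (Eq. (5)) and CR. It assumes the
# original and reconstructed records are available as arrays and that both
# are stored with a known number of bits.
import numpy as np

def mse(x_org, x_rec):
    return np.mean((x_org - x_rec) ** 2)

def prd_percent(x_org, x_rec):
    return 100.0 * np.sqrt(np.sum((x_org - x_rec) ** 2) / np.sum(x_org ** 2))

def compression_ratio(n_bits_original, n_bits_compressed):
    return n_bits_original / n_bits_compressed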



Fig. 3. (a) The original signal. (b) Original 2-D ECG. (c) The decimated image. (d) Reconstructed 2-D ECG (CR = 2:1) using the LMMSE technique.

Fig. 4. (a) The original signal. (b) Original 2-D ECG. (c) The decimated image. (d) Reconstructed 2-D ECG (CR = 2:1) using the maximum entropy technique.


Fig. 5. (a) The original signal. (b) Original 2-D ECG. (c) The decimated image. (d) Reconstructed 2-D ECG (CR = 2:1) using the regularization technique.

Fig. 6. (a) The original signal. (b) Original 2-D ECG. (c) The decimated image. (d) Reconstructed 2-D ECG (CR = 4:1) using the LMMSE technique.


Fig. 7. (a) The original signal. (b) Original 2-D ECG. (c) The decimated image. (d) Reconstructed 2-D ECG (CR = 4:1) using the maximum entropy technique.

Fig. 8. (a) The original signal. (b) Original 2-D ECG. (c) The decimated image. (d) Reconstructed 2-D ECG (CR = 4:1) using the regularization technique.


VI. CONCLUSION

In this paper, we have discussed how to compress ECG signals using image processing concepts. This is done by converting the ECG signal into a 2-D ECG data array and applying the image decimation process. The image reconstruction is performed using three interpolation techniques, namely LMMSE, maximum entropy, and regularization. The MSE and PRD are computed for every case, and the results show that the regularization technique gives the best MSE and PRD results.

REFERENCES

[1] J. Cox, F. Noelle, H. Fozzard, and G. Oliver, "AZTEC: A pre-processing program for real-time ECG rhythm analysis," IEEE Trans. Biomed. Eng., vol. BME-15, pp. 128-129, 1968.
[2] W. Mueller, "Arrhythmia detection program for an ambulatory ECG monitor," Biomed. Sci. Instrum., vol. 14, pp. 81-85, 1978.
[3] J. Abenstein and W. Tompkins, "New data-reduction algorithm for real-time ECG analysis," IEEE Trans. Biomed. Eng., vol. BME-29, pp. 43-48, 1982.
[4] D. A. Dipersio and R. C. Barr, "Evaluation of the FAN method of adaptive sampling on human electrocardiograms," Med. Biol. Eng. Comput., pp. 401-410, Sept. 1985.
[5] N. Ahmed, P. J. Milne, and S. G. Harris, "Electrocardiographic data compression via orthogonal transform," IEEE Trans. Biomed. Eng., vol. 22, pp. 484-487, June 1975.
[6] B. R. Shankara and I. S. N. Murthy, "ECG data compression using Fourier descriptors," IEEE Trans. Biomed. Eng., vol. 33, pp. 428-433, Apr. 1986.
[7] G. Nave and A. Cohen, "ECG compression using long-term prediction," IEEE Trans. Biomed. Eng., vol. 40, pp. 877-885, Sept. 1993.
[8] N. Thakor, Y. Sun, H. Rix, and P. Caminal, "Multiwave: A wavelet-based ECG data compression algorithm," IEICE Trans. Inform. Syst., vol. E76-D, no. 12, pp. 1462-1469, 1993.
[9] C. P. Mammen and B. Ramamurthi, "Vector quantization for compression of multichannel ECG," IEEE Trans. Biomed. Eng., vol. 37, pp. 821-825, Sept. 1990.
[10] H. Lee and K. M. Buckley, "ECG data compression using cut and align beats approach and 2-D transform," IEEE Trans. Biomed. Eng., vol. 46, no. 5, May 1999.
[11] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, and F. E. Abd El-Samie, "Optimization of image interpolation as an inverse problem using the LMMSE algorithm," in Proc. MELECON, pp. 247-250, May 2004.
[12] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, and F. E. Abd El-Samie, "Efficient implementation of image interpolation as an inverse problem," Digital Signal Processing, vol. 15, no. 2, pp. 137-152, March 2005.
[13] H. C. Andrews and B. R. Hunt, Digital Image Restoration. Englewood Cliffs, NJ: Prentice-Hall, 1977.
[14] N. B. Karayiannis and A. N. Venetsanopoulos, "Regularization theory in image restoration - the stabilizing functional approach," IEEE Trans. Acoustics, Speech and Signal Processing, vol. 38, no. 7, pp. 1155-1179, July 1990.
[15] N. P. Galatsanos and R. T. Chin, "Digital restoration of multichannel images," IEEE Trans. Acoustics, Speech and Signal Processing, vol. 37, no. 3, pp. 415-421, March 1989.
[16] J. Pan and W. J. Tompkins, "A real-time QRS detection algorithm," IEEE Trans. Biomed. Eng., vol. BME-32, pp. 230-236, 1985.
[17] J. S. Lim, Two-Dimensional Signal and Image Processing. Prentice Hall Inc., 1990.
[18] W. K. Pratt, Digital Image Processing. John Wiley & Sons Inc., 1991.
[19] M. Xia and B. Liu, "Image registration by super curves," IEEE Trans. Image Processing, vol. 13, no. 5, pp. 720-732, May 2004.
[20] J. H. Shin, J. H. Jung, J. K. Paik, and M. A. Abidi, "Data fusion-based spatio-temporal adaptive interpolation for low-resolution video," in Proc. ICIP, 2001.
[21] P. Guillemain and R. Kronland-Martinet, "Characterization of acoustic signals through continuous linear time-frequency representations," Proceedings of the IEEE, vol. 84, no. 4, pp. 561-585, April 1996.
[22] [entry garbled in the source]
[23] G. W. Wornell, "Emerging applications of multirate signal processing and wavelets in digital communications," Proceedings of the IEEE, vol. 84, no. 4, pp. 586-603, April 1996.
[24] M. Unser, A. Aldroubi, and M. Eden, "B-spline signal processing: Part I - Theory," IEEE Trans. Signal Processing, vol. 41, no. 2, pp. 821-833, February 1993.
[25] J. K. Han and H. M. Kim, "Modified cubic convolution scaler with minimum loss of information," Optical Engineering, vol. 40, no. 4, pp. 540-546, April 2001.
[26] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, and F. E. Abd El-Samie, "Sectioned implementation of regularized image interpolation," in Proc. MWSCAS, 2003.
