Mathematical Methods and Techniques in Engineering and Environmental Science

On Low Resolution Ultrasonic Image Processing for Target Recognition Purposes

DOREL AIORDACHIOAIE, MARIUS MAZAREL, CLAUDIU CHICULITA, RAZVAN SOLEA, SILVIU EPURE, and LAURENTIU FRANGU
Electronics and Telecommunications Department, Dunarea de Jos Galati University, Domneasca-47, Galati-800008, ROMANIA
[email protected], http://www.etc.ugal.ro/daiordachioaie/index.htm

Abstract: The paper presents preliminary results from the time and frequency domain analysis of low resolution ultrasonic images acquired in air, i.e. 20 by 20 image elements. For each considered domain, a discriminant function is defined. The time domain analysis uses a function whose arguments are the maximum correlation coefficients between experimental sonar images and elementary (pre-defined) image patterns. The frequency domain analysis uses a distance function whose arguments are the coefficients of the Discrete Cosine Transform (DCT). The results show good performance for object detection and recognition (size, position and shape) in the field of mobile robots, using discrete ultrasound sensors for image acquisition and requiring minimum processing time.

Key-Words: Ultrasound, Ultrasonic Image, Sonar, Signal Processing, Pattern Recognition.

1 Introduction

Ultrasounds in general, and ultrasound images in particular, have many applications in various domains, starting with the relatively old areas of robotics and medical and industrial applications, and continuing with new areas such as through-wall sensing. Ultrasounds are well described and understood from both theoretical and experimental points of view, covering generation, propagation through the medium, scattering, etc. We can mention here some classical references on the physics of acoustics, such as [1] or [2], and continue with works from specialized journals, such as those establishing the absorption coefficients [3], and various tutorials such as [4]. In robotics, map building for obstacle avoidance and navigation is mainly solved by mixtures of signal processing and pattern recognition, as in [5] for the detection of reflectors, or for feature extraction, e.g. [6], [7], [8], and [9]. In medical applications, ultrasound images are intensively used in echography, providing a rapid and efficient method for diagnosis and correct treatment. Industrial applications mainly aim to detect non-uniformities inside materials; the field is covered both by experiments and by simulation, as in [10] and [11]. Through-wall sensing looks to see inside the structures of buildings, to identify the materials within the building and its layout, and to perform detection and localization as well. The goal of through-wall sensing is thus to provide vision into otherwise obscured areas, e.g. [12] and [13].

Ultrasound imaging means a 2D (static images) or 3D (video or dynamic images) representation of the environment obtained with the help of ultrasound waves. An element of the image (which can be assimilated to a pixel) has an intensity proportional to the signal reflected by the surface of the objects in the explored environment. The modern way to do ultrasound imaging is to use an array of ultrasound sensors, as described e.g. in [14] or [15], [16], [17], [18], [19], [20], and [21]. This solution requires high power signal processors and it is complex, expensive, and difficult to maintain. Sometimes solutions based on discrete sensors are more suitable from a cost point of view. The approach of building ultrasound images by discrete composition (i.e. pixel by pixel) is not new. The major drawback of the discrete solution is the processing time necessary to obtain the echo signals, considering that an echo is associated with a pixel. In the field of mobile ground robots, ultrasound imaging is drastically limited by the absorption of ultrasound in air, and the number of applications and results decreases accordingly. Practically, at least in robotics and for target recognition, frequencies over 300 kHz are rarely used.

The generation of ultrasound images is a challenge, and we have considered both the methodology of obtaining ultrasonic images (in air) and the image processing algorithms for object and texture recognition. The need for such an investigation comes from the fact that sonar images have a poor content of information about the explored medium and, more importantly, differ very much from visual images. In fact, this is the effect of two components: generation and processing. The first is a matter of resolution or granularity, i.e. a sonar image is obtained from echoes generated by the targets in the explored (insonified) environment with poor selectivity, while a visual image uses optical waves (light). Next, sonar (acoustic) images are virtual (estimated inside the brain) and processed by the auditory apparatus, which is designed for one-dimensional processing, while the visual system is adapted for image processing, i.e. for two-dimensional data processing. The sonar head used here is simple: it has a biomimetic functional behaviour, with one emitter and two receivers, and it is mounted on a pan and tilt structure. The discrete solution for obtaining ultrasound images, i.e. pixel by pixel, is a time consuming task, so detection and recognition should be carried out on the smallest possible processed images. In this work we consider time to be the prohibitive resource, so we work with an imposed low resolution of the ultrasound images. It should be mentioned that the time constraints also restrict the categories of detection and classification methods, in the sense that these should have the simplest structure and the shortest working time. This is the major constraint for the set of processing methods considered. This work presents preliminary results from sonar image generation and analysis using a fixed ultrasonic frequency of around 40 kHz. Section 2 presents the details of the experiments and the algorithm used to build sonar images. Sections 3 and 4 present the results of the time domain and, respectively, frequency domain analysis. Finally, the two approaches are compared and the main conclusions are drawn.


2 Description of the Experiment

The objective of the experiment is to generate elementary signals in order to build ultrasound images in three scenarios, with different objects (targets): case #1 - a ball of 20 cm diameter; case #2 - a rectangular box with an edge oriented towards the sonar head; case #3 - the same rectangular box as in case #2, oriented perpendicular to the sonar head. The set of three objects is presented in Fig. 1.

Fig. 1. The objects used in the sonar experiment

The acquisition and processing board contains two ADC circuits and a powerful DSP module. The board performs acquisition of four simultaneous signals at a rate of maximum 2 MSps, with 12 bit resolution. The processing is performed by a DSP module based on the dual core BF561 DSP from Analog Devices. The module contains 64 MB of SDRAM and 8 MB of addressable Flash. Each core of the DSP runs at 600 MHz and has two MAC units, the whole device being capable of 2.4 GMACs for demanding signal processing. The signal processing board connects the ultrasound transducers, discrete or in an array structure, to a PC. The sensor block is built separately in order to allow modularity and to keep the amplified and filtered signals as close as possible to the ultrasonic sensors. The board is directly controlled from the MATLAB environment running on the PC. It receives commands from the PC regarding the number of emitted pulses for transmission and the sampling rate for acquisition. The board then drives the emitter and acquires signals from the ultrasonic receivers for a limited time window. Data from the ADC circuits is transferred to the SDRAM memory using DMA. Currently, the data is transferred to the PC for offline processing in MATLAB, at a rate of max. 3 Mbps.

Image generation is done element by element, by exploring various directions of the environment, on the directions where the objects are located. Exploring means pulse emission plus waiting for and storing the received echoes from the environment. Fig. 2 presents the pseudo code for data acquisition. Fig. 3 presents the structure of the data block obtained after running the data acquisition process during the exploration of the environment.


The sampling frequency is Fs = 450 kHz and the distance between the sonar head and the target is R = 1.07 m. The 3D data set has N = 8192 images (frames), each image having nl = 20 lines and nc = 20 columns.

Fig. 3. The image data block used for image processing

3D_DATA_ACQUISITION PSEUDOCODE
#0: Declarations / definitions:
    nl = current line index; nc = current column index; time = time variable;
#1: Initializations:
    #1.1: the maximum number of lines, nlmax;
    #1.2: the maximum number of columns, ncmax;
    #1.3: define the recording time, T;
#2: while nl < nlmax,
#3:   while nc < ncmax,
        #3.1: pulse transmission;
        #3.2: while time < T,
        #3.3:   sample and hold;
        #3.4:   convert on 12 bits;
        #3.5:   store samples;
              end;
        #3.6: nc = nc + 1;
      end;
      nl = nl + 1;
    end;
END.

Fig. 2. The simplified pseudo code for 3D data acquisition.

The recording window is T = N \cdot T_s = 8192 / (450\ \text{kHz}) \approx 18.2\ \text{ms}, and it corresponds to a maximum range of R = v_s \cdot T / 2 = 340 \cdot 0.0182 / 2 \approx 3.09\ \text{m}, where v_s is the estimated speed of sound in air. The exploring parameters for a single image are: azimuth ±20° and elevation ±40°, with 0° on the direction of the target.

The image generation process uses time as a variable; its meaning is the distance to the explored plane. Consequently, the ultrasonic images are three-dimensional. Once the image set is finished, all classical image processing algorithms can be applied.

Fig. 4 presents a set of four echoes e_xy(t), which correspond to the pixels of coordinates (2,2), (2,15), (15,2) and (15,15). Depending on the target, the relative positions and amplitudes of the received echoes change in time.

Fig. 4. Examples of echoes; target = box.

The intensity of the pixels is scaled to the [0-255] integer interval and is proportional to the amplitude of the envelope of the echoes. Fig. 5 shows, as an example, three such images from the considered cases, with frame index 2700 (from a set of indices starting at 2500 and ending at 3500), built from the unprocessed received signals. Fig. 6 shows the same frame index, but for the envelope of the received signals (echoes). The low resolution and the poor information content about the explored environment are visible.

Fig. 5. Examples of raw instantaneous ultrasonic images, frame #2700


Fig. 6. Examples of ultrasonic images from envelopes, frame #2700
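As an illustration of the acquisition and pre-processing chain described above, the following MATLAB sketch computes the recording window, the corresponding maximum range, and an envelope based frame scaled to [0-255]. The variable names (rawData, envData, frame) are hypothetical, and the use of the Hilbert transform for envelope extraction is an assumption for illustration; the paper does not specify the exact envelope detector.

```matlab
% Sketch (assumptions): echoes organized as [N samples x nl lines x nc columns],
% one echo of N samples per image element (pixel).
Fs = 450e3;            % sampling frequency [Hz]
vs = 340;              % assumed speed of sound in air [m/s]
N  = 8192;             % samples per echo (= number of frames)
nl = 20; nc = 20;      % image size: 20 x 20 elements

T    = N / Fs;         % recording window, about 18.2 ms
Rmax = vs * T / 2;     % maximum range, about 3.09 m (round trip)

% Placeholder standing in for the measured data block.
rawData = randn(N, nl, nc);

% Envelope extraction per pixel (one possible choice: Hilbert transform,
% applied column-wise after reshaping the block to 2D).
env2d   = abs(hilbert(reshape(rawData, N, nl*nc)));
envData = reshape(env2d, N, nl, nc);

% Instantaneous image for a given frame (time) index, scaled to [0, 255]
% integers, as done for the images of Figs. 5 and 6.
k     = 2700;                                % frame index
frame = squeeze(envData(k, :, :));           % 20 x 20 envelope image
img   = uint8(255 * frame / max(frame(:)));  % intensity scaling
```

The same scaling applied directly to the raw samples, instead of the envelope, would correspond to the raw instantaneous images of Fig. 5.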


3 A Time Domain Solution

The departure point of this solution is to compute the correlation coefficients between pre-defined (image) prototypes of the expected object classes and the image frames from the 3D data set. The set of artificial prototype images is presented in Fig. 7. They are vertical sections through the 3D image and represent a square, a disc (circle) and a rectangular surface with different orientations and parameters. At the bottom of the figure, the 2D autocorrelation functions are presented as images. The prototype images are normalized in order to assure equal weights for all prototypes. In order to build a classifier, a discriminant function is defined by

f(IP_i, ID_j) = \max \, \mathrm{xcorr}(IP_i, ID_j)    (1)

where IP_i, i = 1, ..., nip are matrices from the set of nip prototype images and ID_j, j = 1, 2, ..., nid are matrices from the set of measured (data) images. The correlation function xcorr(A, B) returns the cross-correlation of matrices A and B with no scaling. It reaches its maximum value when the two matrices are aligned so that they are shaped as similarly as possible. If matrix A has dimensions (Ma, Na) and matrix B has dimensions (Mb, Nb), the equation of the two-dimensional discrete cross-correlation is

C(i, j) = \sum_{u=1}^{Ma} \sum_{v=1}^{Na} A(u, v) \cdot B(u+i, v+j)    (2)

where 1 \le i \le Ma + Mb - 1 and 1 \le j \le Na + Nb - 1.

Fig. 7. The set of prototype images (p1=s, p2=b1, p3=b2, and p4=b3)

The structure of the time domain processing algorithm is presented in Fig. 8, and the results of the correlation based analysis in Fig. 9. It should be noted that the maximum correlation function of Eq. (1) (represented in Fig. 8) was pre-processed by mean removal.

Fig. 8. Time domain recognition processor

By using the maximum values of the time correlation analysis, a single good result is obtained, as in Fig. 9.a, where the similarity between the ball and the circle (disc) in the image pattern set is correctly indicated. The next two figures (9.b and 9.c) show some uncertainty in the classification, mainly because the selected reference patterns are not dissimilar enough. From the point of view of the shapes of the discriminant function, it seems that there is enough information to say that Fig. 9.b corresponds to a target with two (or more than two) surfaces and Fig. 9.c corresponds to an object with one face, close to a circle in cross section, which is of course quite close to reality.

Fig. 9. Results of the correlation based image analysis
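A minimal MATLAB sketch of the discriminant function of Eq. (1) could be written as a function file like the one below. The function name, the cell-array packaging of the prototypes and the application of mean removal to both operands are assumptions about a possible implementation; only the use of the unscaled 2D cross-correlation and the mean removal step follow directly from the text.

```matlab
function [classIdx, f] = timeDomainDiscriminant(IP, ID)
% Hypothetical implementation sketch of Eq. (1).
%   IP : cell array of nip prototype images (e.g. 20 x 20 matrices), normalized
%   ID : one measured sonar image frame (20 x 20 matrix)
    nip = numel(IP);
    f   = zeros(1, nip);
    ID  = ID - mean(ID(:));                 % mean removal, as mentioned for Fig. 8
    for i = 1:nip
        P    = IP{i} - mean(IP{i}(:));      % assumed: mean removal of the prototype too
        C    = xcorr2(P, ID);               % unscaled 2D cross-correlation, Eq. (2)
        f(i) = max(C(:));                   % discriminant value of Eq. (1)
    end
    [~, classIdx] = max(f);                 % winning prototype (class) index
end
```

Evaluating f over all frames ID_j, j = 1, ..., nid would give, in structure, curves of the kind shown in Fig. 9.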


The results obtained by the time correlation analysis forced us to search for other methods of analysis and other descriptors, in order to improve the classification results. The next section considers frequency domain based methods.

4 Frequency Domain Solution

Frequency domain analysis is considered here starting from the fact that different variable content (signals) is observed when the set of images is projected on the xOy plane (see Fig. 3). The Discrete Cosine Transform (DCT) is used for image processing, and two discriminant functions were considered. The first one is based on the sum of the AC coefficients of each frame (image). Its mathematical expression is

J_1(i) = \arg\min_k \sum \left( IP(k) - ID \right), \quad k = 1, ..., 4, \; i = 1, ..., N    (3)

with

IP(k) = DCT(P(k))    (3.a)

the matrix of Discrete Cosine Transform coefficients of the pattern image P(k), k = 1, 2, 3, 4. The ID matrix is of size 20x20 and contains the coefficients of the same transform applied to image (frame) number i. The evolution of J1 shows the non-uniformity of the explored environment. It cannot be used for classification purposes, but only for obstacle detection and size discrimination. Fig. 10 shows the graphical aspect of this discriminant function for the three considered cases.

Fig. 10. Frequency domain information

The second discriminator function is based on the evaluation of the distance between the set of prototype vectors and all frames of the considered cases. The weighted distance was defined as

d_W(IP, ID) = \left| IP - ID \right| \cdot w    (4)

where w is the weight matrix. The element w(1,1) (upper left corner), which corresponds to the DC coefficient of the image transforms, is zero, and the weights of all other elements (which correspond to the AC coefficients) decrease with their index. Thus, the high frequency components of the analyzed images have a small contribution to the values of the computed distance. The second discriminant function is defined as

J_2(i) = \arg\min_k d_W(IP, ID), \quad k = 1, ..., 4, \; i = 1, ..., N    (5)

The results of the evaluation of the distances between the prototype vectors and each experimental case are presented in Fig. 11. The results indicate pattern #1 (the square) as winner for almost all considered cases. It is not possible to distinguish among the images with curved surfaces. Consequently, it is not possible to classify all targets, but only to detect obstacles (targets) and give a rough description of the target surface.


Fig. 11. Distances between prototype set and recorded images
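The two frequency domain discriminants of Eqs. (3)-(5) could be sketched in MATLAB as below. The exact form of the weight matrix w is not given beyond w(1,1) = 0 and weights decreasing with the index, so the 1/(1+u+v) decay used here is an assumption; the element-wise weighting followed by summation, the absolute differences, and all variable names (P, ID, J1terms, J2terms) are likewise assumptions for illustration.

```matlab
% Placeholders standing in for the prototype set and one measured frame.
P  = {rand(20), rand(20), rand(20), rand(20)};   % 4 hypothetical prototype images
ID = rand(20);                                   % hypothetical 20 x 20 sonar frame

nP = numel(P);
[nl, nc] = size(ID);

% Weight matrix: zero for the DC term, decreasing with index for AC terms (assumed decay).
[U, V] = meshgrid(0:nc-1, 0:nl-1);
w = 1 ./ (1 + U + V);
w(1, 1) = 0;                                   % suppress the DC coefficient

IDdct = dct2(ID);                              % DCT of the measured frame

J1terms = zeros(1, nP);
J2terms = zeros(1, nP);
for k = 1:nP
    IPk = dct2(P{k});                          % Eq. (3.a): IP(k) = DCT(P(k))
    J1terms(k) = sum(abs(IPk(:) - IDdct(:)));  % argument of Eq. (3), assumed absolute differences
    dW = abs(IPk - IDdct) .* w;                % weighted distance, Eq. (4), element-wise weighting
    J2terms(k) = sum(dW(:));
end

[~, J1] = min(J1terms);                        % Eq. (3): winning prototype index
[~, J2] = min(J2terms);                        % Eq. (5): winning prototype index
```

Run frame by frame over the 3D data set, the evolutions of J1terms and J2terms would correspond, in structure, to the curves of Fig. 10 and Fig. 11.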


5 Conclusion

Some results of airborne ultrasound image processing were presented and discussed. The objective of the study was twofold: first, to check the discrete methodology for ultrasound image generation and, second, to check some classical target recognition algorithms on low resolution images. An overall constraint, processing time, was accepted for all processing tasks. This was the reason to evaluate the algorithms on low resolution ultrasound images and to search for simple but sufficient target recognition algorithms. The proposed analysis methods, in the time and frequency domains, show good results for detection and acceptable results for classification. Both approaches are simple to implement on ordinary mobile robots. The time domain method based on correlation seems more efficient, from both the computation time and the computational complexity points of view. The frequency domain methods need more work on the selection of the prototype images. Further research might consider more complex image processing algorithms and higher resolution images, obtained by using narrow beam formers based on arrays of ultrasonic transceivers.

Acknowledgment
The work was partly supported by the Romanian Council for Research (UEFISCDI) under Grant 12079 / 2008, URL: http://www.adbiosonar.ugal.ro.

References:
[1] Filippi P., et al., Acoustics: Basic Physics, Theory, and Methods, Academic Press, 1998.
[2] Blackstock D.T., Fundamentals of Physical Acoustics, Wiley-Interscience, 2000.
[3] Bass H.E., Shields F.D., Absorption of sound in air: High-frequency measurements, JASA, Vol. 62, No. 3, 1977, pp. 571-581.
[4] Leighton T.G., What is ultrasound?, Progress in Biophysics and Molecular Biology, Vol. 93, 2007, pp. 3-83.
[5] Urena J., et al., Classification of Reflectors with an Ultrasonic Sensor for Mobile Robot Applications, Robotics and Autonomous Systems, Vol. 29, 1999, pp. 269-279.
[6] Sejdic E., et al., Time-frequency feature representation using energy concentration: An overview of recent advances, Digital Signal Processing, Vol. 19, 2009, pp. 153-183.
[7] Reijniers J. and Peremans H., Biomimetic Sonar System Performing Spectrum-Based Localization, IEEE Trans. on Robotics, Vol. 23, No. 6, 2007, pp. 1151-1159.
[8] Araujo E.G. and Grupen R.A., Feature Extraction for Autonomous Navigation using an Active Sonar Head, IEEE Int. Conf. on Robotics and Automation, San Francisco, 2000.
[9] Rikoski R.J., et al., On Correlating Sonar Images, Robotics: Science and Systems I, MIT Press, Cambridge, Massachusetts, 2005.
[10] Gudra T., et al., Airborne Ultrasonic Transducers for Ultrasonic Transmission Tomography in Gaseous Media, Proc. of 20th Int. Congress on Acoustics, ICA-2010, Sydney, pp. 1-5.
[11] Gomez F., Althoefer K., Seneviratne L.D., Simulation of ultrasound imaging inside fully charged pipes, Automation in Construction, Vol. 15, 2006, pp. 355-364.
[12] Baranoski E.J., Through-wall imaging: Historical perspective and future directions, Journal of the Franklin Institute, Vol. 345, 2008, pp. 556-569.
[13] Borek S.E., An overview of through the wall surveillance for homeland security, 34th Applied Imagery and Pattern Recognition Workshop, Vol. 6, 2005, pp. 19-21.
[14] Bernus L., et al., Sampling Phased Array: A New Technique for Signal Processing and Ultrasonic Imaging, INDTCM, Vol. 50, No. 3, NDT.net, 2008, pp. 153-157.
[15] Drinkwater B.W., Wilcox P.D., Ultrasonic Arrays for Non-Destructive Evaluation: A Review, NDT&E International, Vol. 39, 2006, pp. 525-541.
[16] Doron M.A. and Nevet A., Robust Wavefield Interpolation for Adaptive Wideband Beamforming, Signal Processing, Vol. 88, 2008, pp. 1579-1594.
[17] South H.M., et al., Technologies for Sonar Processing, Johns Hopkins APL Technical Digest, Vol. 19, No. 4, 1998, pp. 459-469.
[18] Jensen J.A., et al., Synthetic Aperture Ultrasound Imaging, Ultrasonics, Vol. 44, 2006, pp. e5-e15.
[19] Jensen J.A., Ultrasound Imaging and its Modeling, in Imaging of Complex Media with Acoustic and Seismic Waves, Springer Verlag, Vol. 84, 2002, pp. 135-165.
[20] South H.M., et al., Technologies for Sonar Processing, Johns Hopkins APL Technical Digest, Vol. 19, No. 4, 1998, pp. 450-469.
[21] Ossant F., et al., Airborne ultrasonic imaging system for parallelepipedic object localization, IEEE Trans. on Instrumentation and Measurement, Vol. 45, No. 1, 1996, pp. 107-111.
