The Implementation of Wavelet-based Medical Image Compression Using JPEG2000 Image Coding Standard on a Grid Computing Scheme Utilizing Condor Distributed Batch System

P.R. Bangun (1), N. Surbakti (2), A.B. Suksmono (3), T.L.R. Mengko (4)

(1) Telecommunication Laboratory, Institut Teknologi Nasional, Bandung, Indonesia
(2,3,4) Image Processing Research Group, Institut Teknologi Bandung, Bandung, Indonesia

Abstract — In today's modern medical world, digital medical imaging is becoming increasingly common. A powerful processing system is needed to handle large data sets for which both accuracy and processing time are of the essence. This paper presents wavelet-based medical image coding adopted from the JPEG2000 Part 1 image coding standard to provide lossless image compression. The compression scheme is implemented on a grid computing system to obtain powerful computing capability and to cut down computing time. Wavelet-based image processing is a natural fit for grid computing because the original image can be split into smaller blocks that can then be processed independently.

Keywords — grid computing, JPEG2000, lossless, wavelet

I. INTRODUCTION

Digital medical imaging is important for supporting a correct diagnosis. Since it deals with human life, accuracy is crucial, and consequently the data volumes involved are large. As medical records, these data need to be stored for years, so image compression is important. In this paper we use the JPEG2000 image coding standard to benefit from its lossless property. JPEG2000 is designed to compress different types of images (bi-level, gray-level, color, and multi-component) under various imaging models (real-time transmission, image library archival, limited buffer and bandwidth resources, etc.) [1]. It utilizes two types of wavelet filter: the Daubechies 9/7 floating-point filter provides lossy compression, and the biorthogonal 5/3 integer filter supports lossless compression.

Grid computing introduces the concept of the virtual organization to enable dynamic, autonomic resource sharing and thereby obtain powerful computing capability. It takes advantage of under-utilized computers on the network, which collectively can provide huge computing power and storage capacity. In this paper we develop a grid computing system using Condor, which is developed by the Condor Team at the Computer Sciences Department of the University of Wisconsin-Madison. Condor works as a resource management system: it can place jobs in a batch and distribute them to the resources attached to it. The Condor Team provides binaries for many popular platforms, including Linux, Solaris, and Microsoft Windows. The purpose of Condor is to enable utilization of all computers connected to the network. This is achieved by combining two concepts implemented in Condor: high-throughput computing and opportunistic computing. The goal of high-throughput computing is to provide large amounts of fault-tolerant computational power over a long period of time, and the goal of opportunistic computing is to use computer resources whenever they are available, without requiring one hundred percent availability [2]. Together these concepts enable Condor to utilize the available resources on the network effectively.

II. METHODS

2.1 Discrete Wavelet Transform (DWT)

Wavelet techniques are adopted by the JPEG2000 Part 1 coding standard. The DWT decomposes an image into its frequency subbands, and the subband components are generated over gradual levels of decomposition. The DWT is implemented by passing a signal through a lowpass filter (LPF) and a highpass filter (HPF) and downsampling the output of each filter. Fig. 1 depicts the N-level wavelet decomposition of a one-dimensional signal.

Fig.1 N-level discrete wavelet decomposition of one-dimensional signal
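The recursion in Fig. 1 can be sketched in a few lines: at each level the signal is lowpass/highpass filtered and downsampled by two, and the next level recurses on the lowpass branch only. The Haar averaging/differencing filters used here are an assumption purely for readability; JPEG2000 itself uses the 9/7 and 5/3 filters discussed in the Introduction.

```python
# Minimal sketch of N-level dyadic wavelet analysis (Haar filters assumed).
def analyze(x, levels):
    """Return (approximation, [detail subband per level]) after `levels` stages."""
    details = []
    for _ in range(levels):
        lo = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]  # LPF + down-sample
        hi = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]  # HPF + down-sample
        details.append(hi)
        x = lo                       # recurse on the lowpass branch only
    return x, details

approx, det = analyze([10, 10, 20, 20, 30, 30, 40, 40], levels=2)
assert approx == [15.0, 35.0]          # coarse approximation after 2 levels
assert det[0] == [0.0, 0.0, 0.0, 0.0]  # level-1 details: signal is pairwise flat
assert det[1] == [-5.0, -5.0]          # level-2 details capture the ramp
```

Each level halves the signal length, which is why the level-d approximation is a reduced-resolution version of the input.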

N.A. Abu Osman, F. Ibrahim, W.A.B. Wan Abas, H.S. Abd Rahman, H.N. Ting (Eds.): Biomed 2008, Proceedings 21, pp. 570–574, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2008


Fig. 2 N-level discrete wavelet reconstruction of one-dimensional signal

The output of a filter with impulse response $h(n)$ and input $x(n)$ is

$$x(n) * h(n) = \sum_{k=-\infty}^{\infty} x(k)\,h(n-k) \qquad (1)$$

The outputs of the HPF and LPF after downsampling are

$$y_{HPF}(k) = \sum_{n} x(n)\,g(2k-n) \qquad (2)$$

$$y_{LPF}(k) = \sum_{n} x(n)\,h(2k-n) \qquad (3)$$

where $g(n)$ and $h(n)$ are the impulse responses of the HPF and LPF respectively. Fig. 2 depicts the N-level wavelet reconstruction of a one-dimensional signal. After the additive operation we obtain the output of each reconstruction level:

$$\hat{x}(n) = \sum_{k=-\infty}^{\infty} \left[\, y_{HPF}(k)\,g(-n+2k) + y_{LPF}(k)\,h(-n+2k) \,\right] \qquad (4)$$

For a two-dimensional image, a single decomposition level consists of one-dimensional filtering of the image data along the rows followed by one-dimensional filtering along the columns, using the wavelet lowpass and highpass analysis filters. The detailed wavelet decomposition diagram is shown in Fig. 3. The main information of each level is contained in that level's lowest-frequency subband image (LL), referred to below as the approximation component. The three higher-frequency subband images form the detail components, containing the horizontal (LH), vertical (HL), and diagonal (HH) detail information. Subband images of the next level are generated by performing the same decomposition on the approximation component of the previous level, so that the approximation component at decomposition level d is a reduced-resolution version of the original image, with width and height reduced by a factor of 2^d [3]. The complete image representation at every level is synthesized from its approximation component together with the three corresponding detail components.

2.2 JPEG2000 Coding Standard

Fig. 3 Two-dimensional DWT Decomposition
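The reversible 5/3 integer filter mentioned in the Introduction is usually implemented with lifting steps rather than explicit convolution. The sketch below uses integer arithmetic throughout, so the inverse recovers the input exactly; the clamped boundary indices stand in for the symmetric extension used in practice, and the step details follow the common formulation of the 5/3 lifting scheme rather than the full standard.

```python
# Sketch: one level of the reversible Le Gall 5/3 lifting transform.
def fwd_53(x):
    """Single-level forward 5/3 lifting on an even-length integer signal."""
    s, d = list(x[0::2]), list(x[1::2])    # split into even/odd samples
    # predict step: detail (highpass) coefficients
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    # update step: approximation (lowpass) coefficients
    s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    return s, d

def inv_53(s, d):
    """Invert the lifting steps in reverse order."""
    s = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d                # interleave back to one signal
    return x

x = [12, 15, 20, 18, 30, 90, 88, 87]
s, d = fwd_53(x)
assert inv_53(s, d) == x                   # lossless: exact reconstruction
```

Because every step adds or subtracts an integer quantity computed from the other branch, the transform is exactly invertible, which is what makes lossless compression of medical images possible.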


Some of the most important features relevant to the application presented in this paper are as follows [4]:

- Superior low bit-rate performance: JPEG2000 offers performance superior to the previous standards at low bit rates (e.g., below 0.25 bpp for highly detailed gray-scale images), without sacrificing performance on the rest of the rate-distortion spectrum. Network image transmission is one application that needs this feature.
- Continuous-tone and bi-level compression: JPEG2000 is capable of compressing both continuous-tone and bi-level images with similar system resources, handling various dynamic ranges (e.g., 1 to 16 bits) for each color component.
- Lossless and lossy compression: JPEG2000 provides both lossless and lossy compression. An application that needs the lossless feature is medical imaging, where loss is not always tolerated.
- Progressive transmission by pixel accuracy and resolution: progressive transmission, which allows images to be reconstructed with increasing pixel accuracy or spatial resolution, is essential for many applications such as web browsing, image archival, and printing.
- Open architecture: an open architecture allows the system to be optimized for different image types and applications.


2.2.1 The JPEG2000 Compression Engine

The JPEG2000 compression engine consists of an encoder and a decoder. At the encoder, the discrete wavelet transform is first applied to the source image data. The transform coefficients are then quantized and entropy coded to form the output codestream (bit stream). The decoder reverses the encoder: the codestream is entropy decoded, dequantized, and inverse transformed, resulting in the reconstructed image data.

2.2.2 JPEG2000 Codestream Formation

The JPEG2000 codestream structure supports random partial access and processing by design. The codestream is constructed by concatenating coded codeblocks, each of which has been processed through the coding system independently, so that there is no information dependency between codeblocks. Fig. 4 describes the JPEG2000 codestream structure. Coding is performed at the level of codeblocks. Codeblocks are collected into larger units called precincts, each containing the codeblocks, from every subband image of one decomposition level, that represent the same spatial region of the original image. Precincts are further grouped into tiles, and tiles finally make up the whole image. In the codestream domain, the collection of coded codeblocks corresponding to the same precinct forms a packet. The collection of all the packets that build up a decomposition level forms a resolution layer. The first layer is occupied by the highest-decomposition-level subband images, i.e., one approximation component and three detail components. The consecutive layers are occupied by the three detail components of successively lower decomposition levels.
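The containment relations just described (codeblocks grouped into precincts, precinct contents forming packets, packets forming resolution layers) can be sketched as plain data; the tags below are hypothetical illustrations of the grouping, not the actual JPEG2000 marker syntax.

```python
# Sketch of codestream grouping: codeblocks -> packets -> resolution layers.
from collections import defaultdict

# Each coded codeblock is tagged with its resolution level, subband, and
# precinct index (tags and counts assumed for illustration).
codeblocks = [
    {"level": 1, "subband": sb, "precinct": p}
    for sb in ("LL", "HL", "LH", "HH")
    for p in (0, 1)
]

# A packet collects the coded codeblocks of one precinct at one level,
# i.e. the blocks covering the same spatial region across subbands.
packets = defaultdict(list)
for cb in codeblocks:
    packets[(cb["level"], cb["precinct"])].append(cb)

# A resolution layer is the collection of all packets of one level.
layer1 = [cb for (lvl, _), cbs in packets.items() if lvl == 1 for cb in cbs]

assert len(packets) == 2                               # 2 precincts -> 2 packets
assert all(len(cbs) == 4 for cbs in packets.values())  # 4 subbands per packet
assert len(layer1) == 8                                # the whole level
```

Because each codeblock is coded independently, any subset of packets can be decoded on its own, which is what enables both random partial access and the distributed processing pursued in this paper.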

III. CONDOR DISTRIBUTED BATCH SYSTEM

Condor works using five mechanisms:
1. Claim mechanism: Condor determines whether a computer can be claimed as part of a pool.
2. Matchmaking: performed by a centralized manager, Condor matches the jobs in the queue with the available computers, and the matchmaker ranks the queued jobs by priority.
3. Remote execution mechanism: provides I/O communication (e.g., file transfer and re-linking) between the submitting machine and the execution machine.
4. Listening activity: Condor listens for user activity, such as keyboard or mouse input or CPU load.
5. Checkpointing mechanism: Condor takes a checkpoint of the job at a specified time interval, so that the job can be moved to another computer when it is interrupted.

Fig. 5 shows how Condor matches demand and supply in a distributed environment. The Condor system consists of several daemons, each playing a different role. The following is a list of the daemons with a short description of their roles [5]:

- Condor_master: responsible for the life of the other daemons, starting and stopping the services, and upgrading binaries.
- Condor_startd: represents a machine in the Condor pool. This daemon holds information about the machine's capabilities and preferences (ClassAd).
- Condor_starter: prepares the execution environment and starts the job. It monitors the job's progress, sends reports to the submitting machine, and exits when the job is finished or stopped.

Fig. 4 JPEG2000 codestream formation


Fig. 5 Condor resource management architecture


- Condor_schedd: represents a job in the Condor pool for a given submit machine. This daemon holds the jobs in a queue and can be used to start, halt, or cancel a job.
- Condor_shadow: spawned on the submit machine when the submitted job is executed on a remote machine. This daemon is responsible for handling system calls (e.g., file I/O), logging, and monitoring progress.
- Condor_collector: runs on the central manager. This daemon collects information about offered and requested resources.
- Condor_negotiator: acts as the matchmaker in the pool and is also responsible for enforcing user priorities.

In the Condor system, the user must define the universe of a job. There are several types of universe, each defining an execution environment. The earliest versions of Condor used the standard universe, which supports checkpointing. The Condor version for Microsoft Windows 2000/XP (and some other operating systems) is built with the vanilla universe, which has no support for checkpointing. In late 2001 Condor was equipped with the Java universe, which helps users run Java-based jobs in the Condor pool. Fig. 6 shows the components of the Java universe. The experiments described in the next chapter use Java programming, hence they use the Java universe.

Fig. 6 Java Universe

IV. EXPERIMENTS AND DISCUSSION

We made small-scale experiments to simulate distributed image compression using Condor. We used JJ2000, an open-source, Java-based image compression software package, and installed JRE 1.4.2 from Sun Microsystems. The submitted jobs used the Java universe. Three computers were set up as a tiny Condor pool, connected by 10 Mbps Ethernet. We named the pool Condor@I2PRG; it is composed of these computers:

- CON#1, Fedora Core 4 (Linux 2.6.11-1), AMD Athlon XP 800 MHz, 512 MB RAM. This computer acts as central manager, execute host, and submit host.
- CON#2, Microsoft Windows XP, AMD Athlon XP 2000 MHz, 256 MB RAM. This computer acts as an execute host.
- CON#3, Microsoft Windows 2000, Intel Pentium 4 1800 MHz, 256 MB RAM. This computer acts as an execute host.
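A Java-universe encoding job of the kind described above would be described to Condor in a submit description file. The sketch below is an assumption for illustration: `universe`, `jar_files`, and the file-transfer directives are standard Condor submit-file keywords, but the class name, jar name, and encoder flags are hypothetical (JJ2000's real entry point and options may differ).

```
# Hypothetical submit description file for one JJ2000 encoding job.
universe   = java
executable = JJ2KEncoder.class          # assumed entry class name
arguments  = JJ2KEncoder -i taz_ref.ppm -o taz_ref.jp2   # assumed flags
jar_files  = jj2000.jar                 # assumed jar name

should_transfer_files   = YES
transfer_input_files    = taz_ref.ppm
when_to_transfer_output = ON_EXIT

output = encode.out
error  = encode.err
log    = encode.log
queue
```

Submitting such a file with `condor_submit` queues the job; the matchmaking and remote execution mechanisms of Section III then place it on any available execute host in the pool.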

In this experiment, we transformed raw images into JPEG2000 format. We used two different image files, taz_ref.ppm (160,060 bytes) and scrshot.ppm (2,359,357 bytes). For each image we ran two experiments of 100 trials each, first on a single computer and then on Condor@I2PRG. For the larger image (scrshot.ppm), the average processing time on the Condor pool is shorter than on the single computer, but for the smaller image (taz_ref.ppm) the grid is not faster than the single computer.

Table 1 Experiment results: encoding time in seconds (average of 100 trials)

IMAGE                         Single Computer    Condor@I2PRG
taz_ref.ppm (160,060 B)       1.63               2.67
scrshot.ppm (2,359,357 B)     12.86              5.88

Fig. 7 Condor pool used for experiments
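The trade-off in these measurements can be made explicit with a quick calculation (times taken from Table 1):

```python
# Speedup of the Condor pool over the single computer, per Table 1.
single = {"taz_ref.ppm": 1.63, "scrshot.ppm": 12.86}   # seconds
condor = {"taz_ref.ppm": 2.67, "scrshot.ppm": 5.88}    # seconds

speedup = {name: single[name] / condor[name] for name in single}

# Small image: pool overhead (matchmaking, file transfer) dominates.
assert speedup["taz_ref.ppm"] < 1.0      # about 0.61x, i.e. slower
# Large image: distribution pays off.
assert speedup["scrshot.ppm"] > 2.0      # better than 2x faster
```

The crossover point between these two regimes depends on how long the standard grid mechanisms take relative to the compression work itself.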


V. CONCLUSIONS AND FUTURE WORK

The grid computing scheme developed here is not suitable for small images, because the standard grid mechanisms introduce overhead that is significant compared to the compression time itself. Future work is planned along the following lines:
- to develop the grid system using different computer architectures and operating systems;
- to define collaboration policies between virtual organizations in the architecture, in order to ease collaboration on applications and data storage.

VI. REFERENCES

1. Anastassopoulos, G.K., Skodras, A.N., "JPEG2000 ROI coding in medical imaging applications," Democritus University of Thrace, Greece; University of Patras, Greece.
2. Thain, D., Tannenbaum, T., Livny, M., "Distributed computing in practice: the Condor experience," Concurrency and Computation: Practice and Experience, Vol. 17, No. 2-4, 2005, pp. 323-356.
3. Taubman, D., "Remote browsing of JPEG2000 images," The University of New South Wales, Sydney, Australia.
4. Skodras, A.N., Christopoulos, C.A., Ebrahimi, T., "JPEG2000: the upcoming still image compression standard," Proceedings of the 11th Portuguese Conference on Pattern Recognition, Porto, Portugal, 2000, pp. 359-366.
5. Tannenbaum, T., Wright, D., Miller, K., Livny, M., "Condor - a distributed job scheduler," in T. Sterling (ed.), Beowulf Cluster Computing with Linux, The MIT Press, 2002. Available: http://www.cs.wisc.edu/condor/doc/beowulf-chapter-rev1.pdf
