A review of software fault detection and correction ...

JSEIS EDITORIAL BOARD

Founding Editor in Chief

Muhammad Imran Babar Army Public College of Management & Sciences (APCOMS), University of Engineering & Technology Taxila, Pakistan. [email protected]

Co-Editor in Chief Masitah Ghazali Universiti Teknologi Malaysia, Malaysia. [email protected]

Dayang N.A. Jawawi Universiti Teknologi Malaysia, Malaysia. [email protected]

Advisory Editorial Board

Vahid Khatibi Bardsiri Bardsir Branch, Islamic Azad University, Iran. [email protected]

Zeljko Stojanov University of Novi Sad, Serbia. [email protected]

Muhammad Siraj King Saud University, Saudi Arabia. [email protected]

Basit Shahzad National University of Modern Languages, Pakistan. [email protected]

Vladimir Brtka University of Novi Sad, Serbia. [email protected]

Editors

Rafa E. Al-Qutaish Ecole de Technologie Superieure, Canada. [email protected]

Ahmed Hamza Usman King Abdulaziz University, Saudi Arabia. [email protected]

Alessandra PIERONI Marconi International University, Florida, USA. [email protected]

Arta M. Sundjaja Binus University, Indonesia. [email protected]

Nur Eiliyah Wong Universiti Teknologi Malaysia, Malaysia. [email protected]

Noemi Scarpato Università Telematica, Rome, Italy. [email protected]

Manu Mitra Alumnus of University of Bridgeport, USA [email protected]

Hikmat Ullah Khan COMSATS, WAH, Pakistan. [email protected]

Venkata Druga Kiran Kasula K L University, Vaddeswaram, India druga [email protected]

Kirti Seth INHA University, Tashkent, Uzbekistan [email protected]

Mustafa Bin Man Universiti Malaysia Terengganu, Malaysia. [email protected]

Anitha S. Pillai Hindustan University, India. [email protected]

Gule Saman Shaheed Benazir Bhutto Women University, Pakistan. [email protected]

Farrukh Zeeshan COMSATS, Lahore, Pakistan. [email protected]

Mohammed Elmogy Mansoura University, Egypt. [email protected]

Abid Mehmood King Faisal University, Saudi Arabia. [email protected]

Nadir Omer Fadi Elssied Hamed University of Khartoum, Sudan. [email protected]

Vladimir Brtka University of Novi Sad, Serbia. [email protected]

Ashraf Alzubier Mohammad Ali International University of Africa, Khartoum, Sudan. [email protected]

Abubakar Elsafi International University of Africa, Sudan. [email protected]

Sim Hiew Moi Southern University College, Johor Bahru, Malaysia. [email protected]

Vikas S. Chomal The Mandvi Education Society, Institute of Computer Studies, India. [email protected]

Mohd Adham Isa Universiti Teknologi Malaysia, Malaysia. [email protected]

Ashraf Osman Alzaiem Alazhari University, Sudan. [email protected]

Awad Ali Abder Rehman University of Kassala, Sudan. [email protected]

Shahid Kamal Gomal University, Pakistan. [email protected]

Philip Achimugu Lead City University Ibadan, Nigeria. [email protected]

Shafaatunnur Hasan Universiti Teknologi Malaysia, Malaysia. [email protected]

Arafat Abdulgader University of Bisha, Saudi Arabia. [email protected]

Golnoosh Abaei Shahab Danesh University, Iran. [email protected]

Hemalatha K.L. Dept of ISE,SKIT, Bangalore, India [email protected]

Raad Ahmed Hadi Iraqia University, Baghdad, Iraq [email protected]

Mohammed Abdul Wajeed Keshav Memorial Institute of Technology, Hyderabad, India [email protected]

Mohd. Muntjir Taif University, Saudi Arabia. [email protected]

Razieh Haghighati Universiti Teknologi Malaysia. [email protected]

Adila Firdaus Limkokwing University, Cyberjaya, Malaysia. [email protected]

Wasef Al-matarneh Petra University, Amman, Jordan. [email protected]

Anwar Yahya Ebrahim Babylon University, Iraq. [email protected]

Neelam Gohar Shaheed Benazir Bhutto Women University, Peshawar, Pakistan. [email protected]

Managing Editor/Linguist

Summaya Amra Army Public College of Management & Sciences, Rawalpindi, Pakistan. [email protected]

Regional Steering Committee

Muhammad Imran Babar APCOMS, Rawalpindi, Pakistan. [email protected]

Kashif Naseer Qureshi Bahria University Islamabad, Pakistan. [email protected]

Hikmat Ullah Khan COMSATS, WAH, Pakistan. [email protected]

Sheikh Muhammad Jahanzeb APCOMS, Rawalpindi, Pakistan. [email protected]

Khalid Mehmood Awan COMSATS, Attock, Pakistan. [email protected]

Muhammad Zahid Abbas COMSATS, Vehari, Pakistan. [email protected]

JOURNAL OF SOFTWARE ENGINEERING & INTELLIGENT SYSTEMS ISSN 2518-8739 30th April 2018, Volume 3, Issue 1, JSEIS, CAOMEI Copyright © 2016-2018 www.jseis.org

Contour extraction for medical images using bitplane and gray level decomposition

1Ali Abdrahman M Ukasha, 2Ahmed B. Abdurrhman, 3Alwaleed Alzaroog Alshareef
1,2,3Department of Electrical and Electronics Engineering, Sebha University, Libya
Email: [email protected]

ABSTRACT In this paper we implement contour extraction and compression for digital medical images (X-ray & CT scan) using the most significant bit (MSB), maximum gray level (MGL), discrete cosine transform (DCT), and discrete wavelet transform (DWT). The transforms are combined with different contour extraction methods, namely the Sobel, Canny and SSPCE (single step parallel contour extraction) methods. To remove noise from the medical image, a pre-processing stage (filtering by median & enhancement by linear contrast stretch) is performed. The extracted contour is compressed using the well-known Ramer method. Experimental results and analysis show that the proposed algorithm is trustworthy in establishing ownership. Signal-to-noise ratio (SNR), mean square error (MSE), and compression ratio (CR) values obtained from the MSB, MGL, DCT & DWT methods are compared. Experimental results show that the contours of the original medical image can be extracted easily with few contour points, at compression exceeding 87% in some cases. The simplicity of the method with an accepted level of reconstruction is the main advantage of the proposed algorithm. The results indicate that this method improves the contrast of medical images and can help with better diagnosis after contour extraction. The proposed method is very useful for real-time applications. Keywords: bit-planes and gray level decomposition; contour edge extraction and compression; image compression; DCT; DWT;

1. INTRODUCTION

In recent years, a huge amount of digital information has been circulating all over the world by means of the World-Wide Web. Most of this data is exposed and can easily be forged or corrupted, so the need for intellectual property rights protection arises. The use of multimedia technology and computer networking is worldwide. Image resolution enhancement is the process of manipulating an image so that the resulting image is of good quality, and enhancement can be done in various domains. The conventional method using bit-plane decomposition [1] gives an image that is better in terms of visual quality and PSNR. For better resolution, a new method using gray-level decomposition is employed, and its results are compared with the existing methods. Medical image contour extraction based on the most significant bit / maximum gray level has been proposed as one possible way to deal with this problem and keep information safe. Feature extraction in medical imaging (i.e. magnetic resonance imaging MRI, computed tomography CT, and X-ray) is very important in order to perform diagnostic image analysis [2]. Edge detection reduces the amount of data and filters out useless information, while protecting the important structural properties in an image [3]. Contour extraction from digital data has become a very popular approach for data reduction. Several contour compression techniques have been developed and a large number of methods proposed, but the best known method for compressing contours is the Ramer method [4], which has high quality compared with others such as the Centroid [5], Triangle [6, 7], and Trapezoid [8, 9] methods. The contour can be extracted from a binary image using the single step parallel contour extraction (SSPCE) method [10, 11], or simply using the Sobel & Canny edge detectors [12-14].

2. THE ANALYSED ALGORITHM

Figure 1 shows the sequence of steps to be followed before contour compression of the CT / X-ray images. When CT / X-ray images are viewed on a computer screen they appear black and white, but in fact they contain some primary color (RGB) content. So, for further processing, these images must be converted to a perfect grayscale image in which the red, green and blue components all have equal intensity in RGB space. The pre-processing step is required for gray-level decomposition so that edges can be calculated efficiently and accurately from the medical images. This step is carried out to improve the quality of the image and make it ready for further processing; the improved and enhanced image helps in detecting edges and improves the quality of the overall image. The edge detection step is used for contour extraction. Finally, the extracted contours can be compressed using the well-known Ramer method with different threshold values.

3. PRE-PROCESSING STAGE

This stage is very necessary when gray level decomposition is used. Usually the medical images are captured with some undesired components, which the median filter can remove. In this work the medical images (CT scan or X-ray) captured in foggy weather conditions are highly degraded, suffering from poor contrast and loss of color characteristics [15]. This task uses a contrast enhancement algorithm for such degraded medical images to obtain contour extraction with high quality and, later, good compression. Besides the proposed method being simple, experimental results show that it is very effective for the contrast and color of the image after resizing to 256 × 256 pixels at 8 bit/pixel (bpp) precision. Each pixel has a gray value between 0 and 255; for example, a dark pixel may have a value of 10 and a bright pixel a value of 230.
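A minimal sketch of this pre-processing stage, assuming OpenCV and NumPy are available (the 3×3 median kernel and the file name are illustrative choices, not values from the paper):

```python
import cv2
import numpy as np

def preprocess(path):
    # Read the medical image as a single-channel grayscale image
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Resize to the 256 x 256, 8 bpp format assumed in the paper
    img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)
    # Median filtering removes the undesired (impulsive) components
    img = cv2.medianBlur(img, 3)
    # Linear contrast stretch to the full 0..255 range
    lo, hi = int(img.min()), int(img.max())
    img = ((img.astype(np.float32) - lo) / max(hi - lo, 1) * 255.0).astype(np.uint8)
    return img

enhanced = preprocess("hand_xray.png")  # hypothetical file name
```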

Figure 1. Block diagram of the analysed algorithm: image acquisition (CT / X-ray) → preprocessing (resizing, filtering, contrast adjustment) → bit-planes / gray-levels / DCT / DWT transforms → binary image → morphological operations → edge detectors → contour compression using Ramer → comparisons

4. BIT-PLANE DECOMPOSITION

We assume a 256 × 256 pixel medical image given at 8 bit/pixel (bpp) precision. The entire image can be considered as a two-dimensional array of pixel values. We consider the 8 bpp data in the form of 8 bit-planes, each bit-plane associated with a position in the binary representation of the pixels; 8-bit data is thus a set of 8 bit-planes. Each bit-plane may have a value of 0 or 1 at each pixel, but together all the bit-planes make up a byte with a value between 0 and 255. Shown in Figures 3 & 4 are the most significant bit-planes of the two tested images (shown in Figure 2).
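A short sketch of this bit-plane split (NumPy assumed; the random stand-in image is only there to make the snippet runnable):

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bpp image into its 8 binary bit-planes (index 0 = LSB)."""
    return [(img >> k) & 1 for k in range(8)]

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
planes = bit_planes(img)
# The figures number planes from 1, so "bit-plane no. 8" is index 7 here (MSB)
msb = (planes[7] * 255).astype(np.uint8)
```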

Figure 2. Test images: (a) X-ray, and (b) CT scan

Figure 3. X-ray hand skeleton images: (a), (b), and (c) bit-planes no. 7, 8, and (7 & 8) respectively

Figure 4. CT scan images: (a), (b), and (c) bit-planes no. 7, 8, and (7 & 8) respectively

5. GRAY-LEVEL DECOMPOSITION

The idea is as follows. Given a set of gray-level patterns to be memorized: (1) decompose them into their corresponding binary patterns, and (2) build the corresponding binary associative memory (one memory for each binary layer) with each training pattern set (by layers). A given pattern, or a distorted version of it, is recalled in three steps: (1) decomposition of the pattern by layers into its binary patterns, (2) recall of each of its binary components, also layer by layer, and (3) reconstruction of the pattern from the binary patterns recalled in step 2. The proposed methodology operates in two phases: training and recalling. Conditions for perfect recall of a pattern, either from the fundamental set or from a distorted version of one, are also given. Figures 5 and 6 illustrate the gray level decomposition.
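The memory-based scheme is not fully specified here; as one plausible sketch of the layer split used in Figures 5 and 6, the image can be thresholded at selected gray levels (taking 128 and 64 as the half and quarter levels is an assumption on our part):

```python
import numpy as np

def gray_level_layers(img, levels):
    """Decompose a grayscale image into binary layers, one per selected
    gray level; a simple reading of the layered decomposition."""
    return {g: (img >= g).astype(np.uint8) for g in levels}

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
# Layers analogous to Figures 5 and 6: maximum, half, and quarter gray levels
layers = gray_level_layers(img, levels=[int(img.max()), 128, 64])
```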

Figure 5. X-ray hand images: (a), (b), and (c) gray-level decomposition using maximum gray level, quarter gray levels, and half gray levels respectively

Figure 6. CT scan images: (a), (b), and (c) gray-level decomposition using maximum gray level, quarter gray levels, and half gray levels respectively

This paper compares this zonal method with another zonal sampling method that consists in selecting one block of the spectral images (i.e. the shadow region) as an LPF for image compression, while the other coefficients are taken into account in the contour reconstruction stage. This algorithm is referred to as algorithm II and is shown in Figure 2 of [16].

6. SINGLE STEP PARALLEL CONTOUR EXTRACTION

Detection of the edge points (pixels) of a 3-dimensional physical object in a 2-dimensional image, and of its contour, is one of the main research areas of computer vision. The extraction of object contours and object recognition depend on the correctness and completeness of edges [17]. Edge detection is required to simplify images and to facilitate image analysis and interpretation [18]. Edge detection extracts and localizes points (pixels) around which a large change in image brightness has occurred, based on the relationship a pixel has with its neighbors: if the grey levels around a pixel are similar, the pixel is unsuitable to be recorded as an edge point; otherwise, the pixel may represent an edge point.

• Sobel Metric

The Sobel metric is defined as the square root of the sum of Gx squared and Gy squared, where Gx and Gy are obtained by convolving the image with a row mask and a column mask respectively. The Sobel operator performs a 2-D spatial gradient measurement on an image, and so emphasizes regions of high spatial gradient that correspond to edges. Typically, it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. The Sobel edge detector uses a pair of 3×3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows). A convolution mask is usually much smaller than the actual image; as a result, the mask is slid over the image, manipulating a square of pixels at a time. In theory at least, the operator consists of a pair of 3×3 convolution masks as shown in Figure 7, where one mask is simply the other rotated by 90°.

Gx:
-1 -2 -1
 0  0  0
+1 +2 +1

Gy:
+1  0 -1
+2  0 -2
+1  0 -1

Figure 7. Sobel cross convolution masks
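A sketch of the Sobel gradient magnitude using the two masks of Figure 7 (SciPy assumed; the threshold of 100 is an illustrative value, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import convolve

# The two 3x3 Sobel masks from Figure 7
GX = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=np.float32)
GY = np.array([[ 1,  0, -1],
               [ 2,  0, -2],
               [ 1,  0, -1]], dtype=np.float32)

def sobel_magnitude(img):
    gx = convolve(img.astype(np.float32), GX)
    gy = convolve(img.astype(np.float32), GY)
    # Gradient magnitude: square root of the sum of the squared responses
    return np.hypot(gx, gy)

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
edges = sobel_magnitude(img) > 100  # hypothetical threshold
```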

• Canny Metric

The Canny metric is optimal for step edges corrupted by white noise. In evaluating the performance of various edge detectors, Canny defined three criteria [19-22] for optimal edge detection in a continuous domain: ✓ Good detection: maximize the ratio of edge points to non-edge points on the edge map. ✓ Good localization: the detected edge points must be as close as possible to their true locations. ✓ Low response multiplicity: maximize the distance between two non-edge points on the edge map. The Canny operator was designed to be an optimal edge detector (according to particular criteria; there are other detectors around that also claim to be optimal with respect to slightly different criteria). It takes a grey scale image as input and produces as output an image showing the positions of tracked intensity discontinuities.

• SSPCE Metric

The SSPCE (single step parallel contour extraction) method is applied to the binary image obtained by applying a suitable threshold value to the noisy digital watermarked image [10, 11]. The eight rules of edge extraction are applied and coded using an 8-directional chain code, as shown in Listing 1.

LISTING (1): IMPLEMENTATION OF THE EIGHT RULES FOR CONTOUR EXTRACTION (3×3 WINDOWS)

a(i,j) ← 0;  i = 1,2,…,N;  j = 1,2,…,N;
for i = 2,3,…,N-1;  j = 2,3,…,N-1;
{
  if b(i,j) and b(i+1,j) and [b(i,j+1) or b(i+1,j+1)] and [not [b(i,j-1) or b(i+1,j-1)]]
    then a(i,j) ← a(i,j) or 2^0   { edge 0 }
  if b(i,j) and b(i+1,j) and b(i+1,j-1) and [not [b(i,j-1)]]
    then a(i,j) ← a(i,j) or 2^1   { edge 1 }
  if b(i,j) and b(i,j-1) and [b(i+1,j) or b(i+1,j-1)] and [not [b(i-1,j) or b(i-1,j-1)]]
    then a(i,j) ← a(i,j) or 2^2   { edge 2 }
  if b(i,j) and b(i,j-1) and b(i-1,j-1) and [not [b(i-1,j)]]
    then a(i,j) ← a(i,j) or 2^3   { edge 3 }
  if b(i,j) and b(i-1,j) and [b(i,j-1) or b(i-1,j-1)] and [not [b(i,j+1) or b(i-1,j+1)]]
    then a(i,j) ← a(i,j) or 2^4   { edge 4 }
  if b(i,j) and b(i-1,j) and b(i-1,j+1) and [not [b(i,j+1)]]
    then a(i,j) ← a(i,j) or 2^5   { edge 5 }
  if b(i,j) and b(i,j+1) and [b(i-1,j) or b(i-1,j+1)] and [not [b(i+1,j) or b(i+1,j+1)]]
    then a(i,j) ← a(i,j) or 2^6   { edge 6 }
  if b(i,j) and b(i,j+1) and b(i+1,j+1) and [not [b(i+1,j)]]
    then a(i,j) ← a(i,j) or 2^7   { edge 7 }
}
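The listing translates directly into the following sketch (NumPy assumed; a plain double loop, kept unoptimized for clarity):

```python
import numpy as np

def sspce(b):
    """Single step parallel contour extraction: a direct transcription of the
    eight rules of Listing 1. b is a binary image (0/1); the result a holds an
    8-bit chain-code mask per pixel (bit k set = edge in direction k)."""
    b = b.astype(bool)
    a = np.zeros(b.shape, dtype=np.uint8)
    for i in range(1, b.shape[0] - 1):
        for j in range(1, b.shape[1] - 1):
            if not b[i, j]:
                continue
            if b[i+1, j] and (b[i, j+1] or b[i+1, j+1]) and not (b[i, j-1] or b[i+1, j-1]):
                a[i, j] |= 1 << 0  # edge 0
            if b[i+1, j] and b[i+1, j-1] and not b[i, j-1]:
                a[i, j] |= 1 << 1  # edge 1
            if b[i, j-1] and (b[i+1, j] or b[i+1, j-1]) and not (b[i-1, j] or b[i-1, j-1]):
                a[i, j] |= 1 << 2  # edge 2
            if b[i, j-1] and b[i-1, j-1] and not b[i-1, j]:
                a[i, j] |= 1 << 3  # edge 3
            if b[i-1, j] and (b[i, j-1] or b[i-1, j-1]) and not (b[i, j+1] or b[i-1, j+1]):
                a[i, j] |= 1 << 4  # edge 4
            if b[i-1, j] and b[i-1, j+1] and not b[i, j+1]:
                a[i, j] |= 1 << 5  # edge 5
            if b[i, j+1] and (b[i-1, j] or b[i-1, j+1]) and not (b[i+1, j] or b[i+1, j+1]):
                a[i, j] |= 1 << 6  # edge 6
            if b[i, j+1] and b[i+1, j+1] and not b[i+1, j]:
                a[i, j] |= 1 << 7  # edge 7
    return a

b = np.zeros((64, 64), dtype=np.uint8)
b[16:48, 16:48] = 1                # stand-in binary object
chain = sspce(b)                   # non-zero entries lie on the contour
```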

Morphological filters [18] are used for sharpening medical images. In this method, after locating edges with gradient-based operators, a class of morphological filter is applied to sharpen the existing edges. In fact, morphology operators, by increasing and decreasing intensities in different parts of an image, play an important role in processing and detecting the various objects in the image. Locating edges in an image using the morphological gradient is an example that has performance comparable to that of classic edge detectors such as Canny and Sobel [23, 36].

7. DISCRETE COSINE TRANSFORM

Spectral domain transforms such as Karhunen-Loève [24], Fourier, Haar [25], Periodic Haar Piecewise-Linear (PHL) [26], Walsh-Hadamard [27, 28], Discrete Cosine (DCT) [29], and, recently, wavelets [30, 31] can be used to extract the medical contour points; image compression using a low-pass filter (LPF) and contour extraction using a high-pass filter (HPF) are investigated and compared with the Sobel and Canny detectors in this section. The algorithm uses the Discrete Cosine Transform (DCT), and the effectiveness of contour extraction for different classes of images is evaluated. The procedure performs both contour extraction and image compression. To compare the results, the mean square error and signal-to-noise ratio criteria were used. The simplicity and the small number of operations are the main advantages of the proposed algorithms. A high pass filter passes high frequencies and attenuates low frequencies. In high pass filtering the objective is to remove the low-frequency, slowly changing areas of the image and to bring out the high-frequency, fast-changing details; if we were to high pass filter an image of a box, we would only see an outline of the box, since the edge of the box is the only place where neighbouring pixels differ from one another. Contour representation and compression are required in many applications, e.g. computer vision, topographic or weather map preparation, medical images, and image compression. The results are compared with the Sobel and Canny edge detectors for contour extraction [12, 18, 32-34].
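A sketch of the DCT zonal split into LPF and HPF parts (SciPy assumed; the block size 150 and threshold 33 follow Figure 12, while the overall arrangement is our reading of the zonal method):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):  return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(X): return idct(idct(X, axis=0, norm='ortho'), axis=1, norm='ortho')

def zonal_split(img, block=150):
    """Zonal sampling in the DCT domain: the top-left block x block square of
    coefficients acts as the LPF (image compression); the remaining
    coefficients act as the HPF used for contour extraction."""
    X = dct2(img.astype(np.float32))
    lpf, hpf = np.zeros_like(X), X.copy()
    lpf[:block, :block] = X[:block, :block]
    hpf[:block, :block] = 0.0
    return idct2(lpf), idct2(hpf)

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
low, high = zonal_split(img, block=150)       # block size as in Figure 12
contour_candidates = np.abs(high) > 33        # threshold = 33 as in Figure 12
```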

8. DISCRETE WAVELET TRANSFORM

Wavelet analysis is an exciting method for solving difficult problems in mathematics, physics, and engineering, with modern applications as diverse as wave propagation, data compression, signal processing, image processing, pattern recognition, computer graphics, the detection of aircraft and submarines, and other medical image technology [31, 35]. Wavelets allow complex information such as music, speech, images and patterns to be decomposed into elementary forms at different positions and scales and subsequently reconstructed with high precision. Wavelets are obtained from a single prototype wavelet, called the mother wavelet, by dilations and shifts, using equation (3).

$$\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right) \qquad (3)$$
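A sketch of the single-level Haar decomposition used here, assuming the PyWavelets package; combining the three detail sub-bands by absolute sum is an assumption on our part:

```python
import numpy as np
import pywt  # PyWavelets

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image

# Single-level 2-D Haar DWT: the approximation coefficients (LL) give the
# compressed image; the detail coefficients (LH, HL, HH) form the high-pass
# band that is thresholded for contour extraction.
LL, (LH, HL, HH) = pywt.dwt2(img.astype(np.float32), 'haar')
details = np.abs(LH) + np.abs(HL) + np.abs(HH)
binary_edges = details > 10   # threshold = 10 as in Figure 13
```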

9. ZONAL SAMPLING METHOD

Of the many zonal sampling methods described in [16], the best scheme for compression and contour extraction is the one illustrated in Figure 8. The fit criterion of the algorithm consists in selecting one squared block of the spectral image (e.g. the shadow region) as the LPF filter for image compression, while the other coefficients are taken into account in the contour reconstruction stage, as shown in Figure 2 of [16]. This method is mainly used in this work with the DCT transform.

Figure 8. LPF & HPF zonal method for the spectral image using algorithm I

10. RAMER METHOD

A contour is represented as a polygon when its edge points are fitted with a sequence of line segments. Several algorithms are available for determining the number and location of the vertices and for computing the polygonal approximation of a contour. The best known is the Ramer method, which is based on a polygonal approximation scheme [4]. The simplest approach to polygonal approximation is a recursive process (splitting method). Splitting methods work by first drawing a line from one point on the boundary to another; then we compute the perpendicular distance from each point along the segment to the line. If this exceeds some threshold, we break the line at the point of greatest error, and the process repeats recursively for each of the two new lines until no further breaks are needed. For a closed contour, we can find the two points that lie farthest apart and fit two lines between them, one for each side, and then apply the recursive splitting procedure to each side. In short: first use a single straight line to connect the end points; then find the edge point with the greatest distance from this straight line and split the line into two straight lines that meet at this point; repeat this process for each of the two new lines until the maximum distance of any point to the polyline falls below a certain threshold. Finally, the lines between the vertices of the reconstructed contour edge are drawn to obtain the polygonal approximating contour, as shown in Figure 9.
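A sketch of this recursive splitting (the Ramer, or Ramer-Douglas-Peucker, scheme) for an open contour segment; for a closed contour, split first at the two farthest-apart points as described above (NumPy assumed):

```python
import numpy as np

def ramer(points, threshold):
    """Recursive Ramer polygonal approximation of a contour segment.
    points: (N, 2) array of (x, y) contour points; returns retained vertices."""
    points = np.asarray(points, dtype=np.float64)
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    ab = b - a
    norm = np.hypot(ab[0], ab[1])
    if norm == 0.0:
        norm = 1.0  # degenerate chord (coincident end points)
    # Perpendicular distance of every point to the chord a-b
    d = np.abs(ab[0] * (points[:, 1] - a[1]) - ab[1] * (points[:, 0] - a[0])) / norm
    k = int(np.argmax(d))
    if d[k] <= threshold:
        return np.vstack([a, b])                 # the chord fits well enough
    left = ramer(points[:k + 1], threshold)      # split at the farthest point
    right = ramer(points[k:], threshold)
    return np.vstack([left[:-1], right])         # drop the duplicated split vertex
```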

Figure 9. Contour compression using the Ramer algorithm (contour points A-D in the x-y plane, with the maximum perpendicular distance d marked)

11. APPLIED MEASURES

The compression ratio of the analyzed methods is measured using equation (4):

$$CR = \frac{L_{CC} - L_{AC}}{L_{CC}} \times 100\% \qquad (4)$$

where $L_{CC}$ is the input contour length and $L_{AC}$ is the approximating polygon length.

The quality of a contour approximation during the approximating procedure is measured using the mean square error (MSE) and signal-to-noise ratio (SNR) criteria, given by relations (5) and (6) respectively [11, 35]:

$$MSE = \frac{1}{L_{CC}} \sum_{i=1}^{L_{CC}} d_i^2 \qquad (5)$$

where $d_i$ is the perpendicular distance between the $i$-th point on the curve segment and the straight line between each two successive vertices of that segment;

$$SNR = -10 \cdot \log_{10}(MSE / VAR) \qquad (6)$$

where $VAR$ is the variance of the input sequence. The mean square error (MSE) and peak signal-to-noise ratio (PSNR) criteria were used to evaluate the distortion introduced during the image compression and contour extraction procedures. The MSE criterion is defined by:

$$MSE(I,\tilde{I}) = \frac{1}{n \cdot m} \sum_{i=0}^{n} \sum_{j=0}^{m} \big(I(i,j) - \tilde{I}(i,j)\big)^2 \qquad (7)$$

where $I$ and $\tilde{I}$ are the grey-level and reconstructed images respectively. The PSNR is defined by:

$$PSNR(I,\tilde{I}) = 10 \log_{10} \frac{(L-1)^2}{MSE(I,\tilde{I})} \qquad (8)$$

where $L$ is the number of grey levels.
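Equations (4)-(8) translate into the following helper functions (NumPy assumed):

```python
import numpy as np

def compression_ratio(l_cc, l_ac):
    # Equation (4): contour length before vs. after approximation
    return (l_cc - l_ac) / l_cc * 100.0

def contour_mse_snr(distances, variance):
    # Equations (5) and (6): distances holds the perpendicular distances d_i
    # between contour points and the approximating segments
    mse = np.mean(np.square(distances))
    snr = -10.0 * np.log10(mse / variance)
    return mse, snr

def image_mse_psnr(img, rec, levels=256):
    # Equations (7) and (8) for an 8-bpp image (L = 256 grey levels)
    mse = np.mean((img.astype(np.float64) - rec.astype(np.float64)) ** 2)
    psnr = 10.0 * np.log10((levels - 1) ** 2 / mse)
    return mse, psnr
```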

12. RESULTS OF THE EXPERIMENTS

To visualize the experimental results, a CT scan image and an X-ray hand image were selected; the selected images are shown in Figure 2. Some selected results for the tested images are shown in Figures 10 to 19, with the related results in Tables 1 to 10. Here CE is contour extraction, CC is contour compression, BP is bit plane, GL is grey level, and PN is the number of contour points.

Figure 10. Contour extraction & compression using the most significant bit (MSB) and Sobel, Canny, and SSPCE respectively: (a) binary image using MSB; (b) CE using Sobel; (c) CC using Sobel; (d) CE using Canny; (e) CC using Canny; (f) CE using SSPCE; (g) CC using SSPCE

Table 1. Results of hand medical image contour extraction & compression using MSB, Ramer, and the Sobel, Canny & SSPCE methods with threshold = 0.1

Method | Original contour points (MSB) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel  | 1865                          | 401                       | 0.0337 | 14.7248 | 78.4987
Canny  | 2079                          | 443                       | 0.2467 | 6.0782  | 87.1054
SSPCE  | 1759                          | 1399                      | 0.0155 | 18.1044 | 20.4662

Figure 11. Contour extraction & compression using maximum gray level (MGL) with input image intensity values adjusted to [0.3 0.5], based on Sobel, Canny, and SSPCE respectively: (a) binary image using MGL; (b) CE using Sobel; (c) CC using Sobel; (d) CE using Canny; (e) CC using Canny; (f) CE using SSPCE; (g) CC using SSPCE


Table 2. Results of hand medical image contour extraction & compression using MGL, Ramer, and the Sobel, Canny & SSPCE methods with threshold = 0.1

Method | Original contour points (MGL) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel  | 1379                          | 365                       | 0.0155 | 18.1044 | 73.5315
Canny  | 1662                          | 366                       | 0.0198 | 17.0387 | 77.9783
SSPCE  | 1392                          | 701                       | 0.0109 | 19.6339 | 49.6408

Figure 12. Contour extraction & compression using DCT with zonal block 150 & threshold = 33, and Ramer based on Sobel, Canny, and SSPCE respectively: (a) binary image using DCT; (b) CE using Sobel; (c) CC using Sobel; (d) CE using Canny; (e) CC using Canny; (f) CE using SSPCE; (g) CC using SSPCE


Table 3. Results of hand medical image contour extraction & compression using DCT, Ramer, and the Sobel, Canny & SSPCE methods with zonal sampling block = 150

Method (threshold) | Original contour points (DCT) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel (0.1)        | 1623                          | 415                       | 0.0184 | 17.3441 | 74.4301
Canny (0.1)        | 1881                          | 416                       | 0.0224 | 16.5064 | 77.8841
SSPCE (0.5)        | 1667                          | 672                       | 0.0154 | 18.1259 | 59.6881

Figure 13. Contour extraction & compression using DWT with threshold = 10, and Ramer based on Sobel, Canny, and SSPCE respectively: (a) binary image using DWT; (b) CE using Sobel; (c) CC using Sobel; (d) CE using Canny; (e) CC using Canny; (f) CE using SSPCE; (g) CC using SSPCE


Table 4. Results of hand medical image contour extraction & compression using DWT (Haar) detail coefficients and the Sobel, Canny & SSPCE methods with threshold = 10

Method | Original contour points (DWT) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel  | 1454                          | 562                       | 0.0282 | 15.5025 | 61.3480
Canny  | 1800                          | 696                       | 0.0271 | 15.6753 | 61.3333
SSPCE  | 1521                          | 696                       | 0.0126 | 19.0003 | 54.2406

Figure 14. Image compression using a) DCT (zonal sampling), and b) DWT (Haar)

Table 5. Results of hand medical image compression using DCT and DWT

Method | MSE      | PSNR    | CR [%]
DCT    | 19.1210  | 35.3157 | 34.30
DWT    | 255.4064 | 24.0585 | 69.50

Figure 15. Contour extraction & compression using the most significant bit (MSB) and Sobel, Canny, and SSPCE respectively (panels show CE, CE after morphological operations, and CC for each detector)


Table 6. Results of chest medical image contour extraction & compression using MSB, Ramer, and the Sobel, Canny & SSPCE methods with threshold = 0.1

Method | Original contour points (MSB) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel  | 2039                          | 1649                      | 0.0331 | 14.8082 | 19.1270
Canny  | 2705                          | 1729                      | 0.0430 | 13.6685 | 36.0813
SSPCE  | 1427                          | 1664                      | 0.0261 | 15.8323 | 14.2428

Figure 16. Contour extraction & compression using maximum gray level (MGL) with input image intensity values adjusted to [0.4 0.44], based on Sobel, Canny, and SSPCE respectively (panels: binary image using MGL; CE, CE after morphological operations, and CC for each detector)

Table 7. Results of chest medical image contour extraction & compression using MGL, Ramer, and the Sobel, Canny & SSPCE methods with threshold = 0.1

Method | Original contour points (MGL) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel  | 1681                          | 1450                      | 0.0263 | 15.7969 | 13.7418
Canny  | 2258                          | 1490                      | 0.0123 | 19.1176 | 35.5624
SSPCE  | 1720                          | 1525                      | 0.0299 | 15.2400 | 11.3372


Figure 17. Contour extraction & compression using DCT with zonal block 150 & threshold = 33, and Ramer based on Sobel, Canny, and SSPCE respectively (panels: binary image using DCT; CE, CE after morphological operations, and CC for each detector)

Table 8. Results of chest medical image contour extraction & compression using DCT, Ramer, and the Sobel, Canny & SSPCE methods with zonal sampling block = 100

Method (threshold) | Original contour points (DCT) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel (0.1)        | 1874                          | 1640                      | 0.0307 | 15.1242 | 12.4867
Canny (0.1)        | 2501                          | 1640                      | 0.0418 | 13.7921 | 34.4262
SSPCE (0.5)        | 1758                          | 1510                      | 0.0280 | 15.5213 | 14.1069

Figure 18. Contour extraction & compression using DWT with threshold = 10, and Ramer based on Sobel, Canny, and SSPCE respectively (panels: binary image using DWT; CE, CE after morphological operations, and CC for each detector)

Table 9. Results of chest medical image contour extraction & compression using DWT (Haar) detail coefficients and the Sobel, Canny & SSPCE methods with threshold = 10

Method | Original contour points (DWT) | Compressed points (Ramer) | MSE    | SNR     | CR [%]
Sobel  | 2233                          | 1847                      | 0.0365 | 14.3772 | 17.2862
Canny  | 2817                          | 1847                      | 0.0453 | 13.4372 | 34.4338
SSPCE  | 1943                          | 1551                      | 0.0294 | 15.3182 | 20.1750

Figure 19. Image compression using a) DCT (zonal sampling), and b) DWT (Haar)

Table 10. Results of chest medical image compression using DCT and DWT

Method | MSE     | PSNR    | CR [%]
DCT    | 72.4032 | 29.5332 | 15.30
DWT    | 70.9038 | 29.6241 | 80.40


13. CONCLUSIONS

Medical image enhancement technologies have attracted much attention since advanced medical equipment was put into use in the medical field. Enhanced medical images are desired by surgeons to assist diagnosis and interpretation, because medical image quality is often deteriorated by noise, data acquisition devices, illumination conditions, etc.; for these reasons we use a pre-processing stage (filtering & enhancement) before the main processing. We have implemented the proposed methods on CT / X-ray sample images. The Sobel and Canny edge detection operators and single step parallel contour extraction (SSPCE) have been applied to these images. The contours of the tested images can also be extracted using the DCT high-pass-filter coefficients through zonal sampling, or a single level of the DWT with the high-pass-filter detail coefficients. The extracted contours are compressed using the well-known Ramer method. Simulation results using MATLAB programming show that this kind of algorithm has satisfactory performance, with a good compression ratio exceeding 87% (see Table 1) for the hand X-ray. Using the DWT, the compressed image can be obtained through the approximation coefficients (low pass filter) with a compression ratio exceeding 80% and good quality approaching 30 decibels (see Figure 19 & Table 10). In the future, the proposed strategy can be used to detect, analyze and extract tumors from patients' CT scan images.

REFERENCES
1. M. Petrou and C. Petrou, "Image Processing: The Fundamentals", Wiley, Amsterdam, 2010.
2. D. W. McRobbie, E. A. Moore, M. J. Graves, M. R. Prince, "MRI: From Picture to Proton", 2nd ed., New York: Cambridge University Press, 2007.
3. Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", 3rd ed., New Jersey: Pearson Prentice Hall, 2008.
4. Ramer U., "An iterative procedure for the polygonal approximation of plane curves", Computer Graphics and Image Processing, Academic Press, Volume 1, Issue 3, pp. 244-256, 1972.
5. Dziech A., Baran R. & Ukasha A., "Contour compression using centroid method", WSEAS Int. Conf. on Electronics, Signal Processing and Control (ESPOCO 2005), Copacabana, Rio de Janeiro, Brazil, pp. 225-229, 2005.
6. Dziech A., Ukasha A. and Baran R., "Fast method for contour approximation and compression", WSEAS Transactions on Communications, Volume 5, Issue 1, pp. 49-56, 2006.
7. Ukasha A., Dziech A. & Baran R., "A New Method for Contour Compression", WSEAS Int. Conf. on Signal, Speech and Signal Processing (SSIP 2005), Corfu Island, Greece, pp. 282-286, 2005.
8. Ukasha A., Dziech A., Elsherif E. and Baran R., "An efficient method of contour compression", International Conference on Visualization, Imaging and Image Processing (IASTED/VIIP), Cambridge, United Kingdom, pp. 213-218, 2009.
9. Ukasha A., "Arabic Letters Compression using New Algorithm of Trapezoid method", International Conference on Signal Processing, Robotics and Automation (ISPRA'10), Cambridge, United Kingdom, pp. 336-341, 2010.
10. Dziech A., W. S. Besbas, "Fast Algorithm for Closed Contour Extraction", Proc. of the Int. Workshop on Systems, Signals and Image Processing, Poznań, Poland, pp. 203-206, 1997.
11. W. Besbas, "Contour Extraction, Processing and Recognition", Ph.D. Thesis, Poznan University of Technology, 1998.
12. Scott E. Umbaugh, "Computer Vision and Image Processing", Prentice-Hall, 1998.
13. Nalini K. Ratha, Tolga Acar, Muhittin Gokmen, and Anil K. Jain, "A distributed edge detection and surface reconstruction algorithm", Proc. Computer Architectures for Machine Perception (Como, Italy), pp. 149-154, 1995.
14. Yali Amit, "2D Object Detection and Recognition", MIT Press, 2002.
15. Veysel Aslantas, "An SVD based digital image watermarking using genetic algorithm", IEEE, 2007.
16. A. Ukasha, "An Efficient Zonal Sampling Method for Contour Extraction and Image Compression using DCT Transform", The 3rd Conference on Multimedia Computing and Systems (ICMCS'12), Tangier, Morocco, May 2012.
17. Nalini K. Ratha, Tolga Acar, Muhittin Gokmen, and Anil K. Jain, "A distributed edge detection and surface reconstruction algorithm", Proc. Computer Architectures for Machine Perception (Como, Italy), pp. 149-154, 1995.

18. G. Economou, S. Fotopoulos, and M. Vemis, "A novel edge detector based on non-linear local operations", Proc. IEEE International Symposium on Circuits and Systems (London), pp. 293-296, 1994.
19. Kim L. Boyer and Sudeep Sarkar, "On the localization performance measure and optimum edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence 16, pp. 106-108, 1994.
20. J. F. Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence 8, pp. 679-698, 1986.
21. Didier Demigny and Tawfik Kamle, "A discrete expression of Canny's criteria for step edge detector performances evaluation", IEEE Transactions on Pattern Analysis and Machine Intelligence 19, pp. 1199-1211, 1997.
22. Hemant D. Tagare and Rui J.P. deFigueiredo, "Reply to 'On the localization performance measure and optimal edge detection'", IEEE Transactions on Pattern Analysis and Machine Intelligence 16, pp. 108-110, 1994.
23. Chen, T., Wu, Q.H., Rahmani-Torkaman, R. and Hughes, J., "A Pseudo Top-Hat Mathematical Morphological Approach to Edge Detection in Dark Regions", Pattern Recognition, 35, pp. 199-210, 2002.
24. A. K. Jain, "Fundamentals of Digital Image Processing", New Jersey: Prentice Hall International, 1989.
25. Brigham E.O., "The Fast Fourier Transform", Prentice-Hall, Englewood Cliffs, 1974.
26. A. Dziech, F. Belgassem & H. J. Nern, "Image data compression using zonal sampling and piecewise-linear transforms", Journal of Intelligent and Robotic Systems: Theory & Applications, 28(1-2), Kluwer Academic Publishers, pp. 61-68, June 2000.
27. Walsh, J. L., "A Closed Set of Normal Orthogonal Functions", Amer. J. Math. 45, pp. 5-24, 1923.
28. Wolfram, S., "A New Kind of Science", Champaign, IL: Wolfram Media, pp. 573 and 1072-1073, 2002.
29. Clarke R. J., "Transform Coding of Images", Academic Press, 1985.
30. Gerhard X. Ritter and Joseph N. Wilson, "Computer Vision Algorithms in Image Algebra", CRC Press, New York, 1996.
31. Vetterli, Martin & Kovacevic, Jelena, "Wavelets and Subband Coding", Prentice Hall Inc., 1995.
32. D.H. Ballard and C.M. Brown, "Computer Vision", Prentice Hall, Englewood Cliffs, NJ, 1982.
33. R.M. Haralick and L. G. Shapiro, "Computer and Robot Vision", Addison-Wesley Publishing Co., 1992.
34. B.K.P. Horn, "Robot Vision", The MIT Press, Cambridge, MA, 1986.
35. Gonzalez R. C., "Digital Image Processing", Second Edition, Addison Wesley, 1987.
36. Mahmoud, T.A. and Marshall, S., "Medical Image Enhancement Using Threshold Decomposition Driven Adaptive Morphological Filter", Proceedings of the 16th European Signal Processing Conference, Lausanne, Switzerland, 25-29 August 2008, pp. 1-5.

AUTHORS PROFILE


An appraisal for features selection of offline handwritten signature verification techniques

1Anwar Yahy Ebrahim, 2Hoshang Kolivand, 3Mohd Shafry Mohd Rahim
1Babylon University, Babylon, Iraq
2Department of Computer Science, Liverpool John Moores University, Liverpool, UK, L3 3AF
3Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
Email: [email protected], [email protected], [email protected]

ABSTRACT This research provides a summary of widely used feature selection techniques for handwritten signature verification. The focus is on selecting the best features for signature verification, characterized by the number of features represented for each signature, with the aim of discriminating whether a given signature is genuine or a forgery. We present how the discussion of the advantages and drawbacks of feature selection techniques has been handled by several researchers over the past few decades, together with recent advancements in the field. Keywords: signature verification; feature extraction; dimension reduction; feature selection; handwritten signature;

1. INTRODUCTION

The handwritten signature is a widely utilized and recognized authentication technique throughout the world; thorough examination of the signature image is important before drawing a conclusion about the writer. Variation within original signatures makes it difficult to distinguish between original and forged signatures. Signature identification and verification methods can improve authentication procedures and distinguish between genuine and forged signatures [1]. The handwritten signature also has considerable importance in online banking implementations and cheque processing mechanisms [2]. For the authentication of passports, biometric methods can be utilized, notably signature verification [3]. Extracted features can be defined as the characteristics derived from the signature itself. These extracted features play an important role in developing a robust system, as all other phases are based on them. A large number of features may decrease the FRR (overall number of genuine signatures discarded by the system), but at the same time it will increase the FAR (number of forged signatures accepted by the system). However, little effort has been spent on measuring the consistency of these attributes. This consistency measurement is important for determining the effectiveness of a method; in order to measure the consistency of these features, the best attribute set among them must be chosen [4]. There are two major procedures of signature identification and authentication: one is the actual identification of the signer, and the other is the classification of a sample as original or forged [5]. The focus of this research is on off-line signature authentication methods. The remainder of this study is organized as follows: Section 2 states the problem, the following sections review the already published methods of off-line signature verification, and the conclusion of the research is given at the end.

2. PROBLEM STATEMENT

In the literature on offline handwritten signature verification, we can find multiple ways of defining the problem. In particular, one matter is critical for comparing related work: whether or not skilled forgeries are used for training. Some authors do not use skilled forgeries at all for training [7, 8]; other researchers use skilled forgeries for training writer-independent classifiers, testing these classifiers on a separate set of users [9]; lastly, some papers use skilled forgeries for training writer-dependent classifiers, and test these classifiers on a separate set of original signatures and forgeries from the same set of users.


Boosting feature selection is achieved by attribute selection methods that choose the single most discriminant attribute of a set and find a threshold to separate the two categories in training, effectively a decision stump. Attributes are then chosen in a greedy fashion according to the weighting while training is conducted by the feature selection techniques. With a very large number of features, the result is a committee built on the best selected attributes representing the training samples [10]. Based on the concept of feature selection, a system for signatory recognition has been proposed that relies on a reduced number of features from the signature [11]; this is a good feature selection approach which, when applied to signatures, provides a good way of compressing the signature while maintaining acceptable identification rates.

3. SIGNATURE VERIFICATION

Handwritten signatures have been applied as biometric features that distinguish persons. It has been confirmed that signature samples are a very accurate biometric feature with a low conflict proportion. Some signature samples might be similar, but there are different technical methods to distinguish between them and to disclose forged signatures. There are two classes of handwritten signature verification systems:

3.1 Verification system of offline (static) signature

The signature is written offline, like a signature written on bank cheques; the technique reads the scanned sample of the signature and compares it with the signature samples stored in the database. Off-line signatures are shown in Figure 1.

Figure 1. Offline signatures [12]

3.2 Verification system of online (dynamic) signature

The signature is signed onto a reactive electronic device and read on-line, and the signature samples are compared with those on file for the individual to test for validity. Several selected best features are used with on-line signature samples that are not accessible for the off-line ones. An online handwritten signature is displayed in Figure 2.

Figure 2. Online signatures [12]

4. DATASETS

The availability of datasets is one of the most important requirements in any research area, and the same is the case with signature analysis and recognition. A number of datasets comprising signature samples have been developed over the years, mainly to support signature verification, signature segmentation and signer recognition tasks. Especially during the last few years, a number of standard datasets in different scripts and languages have been developed, allowing researchers to evaluate their systems on the same databases for meaningful comparisons. Some notable signature datasets along with their measurements are presented in Table 1.

Table 1. Summary of notable signature datasets

Dataset Name              | Language | Signatures
GPDS [13]                 | English  | 8640
CEDAR [14]                | English  | 2640
Arabic dataset [15]       | Arabic   | 330
Japanese dataset [16]     | Japanese | 2000
Persian dataset [17]      | Persian  | 2000
Chinese NZMI dataset [18] | Chinese  | 1200

5. PRE-PROCESSING

For effective recognition of a signatory from offline signature samples, the signature must be distinguishable from the background, allowing proper segmentation of the two. Most signatory identification techniques developed to date depend on features extracted from binary signatures with a white background and black ink trace. An exception is the work of Wirotius [19], where the authors argue that, like online signature samples, grayscale images also contain information about pen pressure, the intensity of the gray value at a particular pixel being proportional to the pen pressure. Zuo et al. [20] also supported this idea and conducted a series of signatory identification experiments on both gray scale and binary images. The experiments on gray scale images reported slightly better results than the binary images, with an overall identification rate of 98%. It should, however, be noted that feature extraction from the gray ink trace is quite complex as opposed to the binary version. A large set of useful attributes can be extracted from the binarized version of a signature, and consequently most contributions to signatory identification are based on binary signature images [20, 24]. A number of standard thresholding systems have been developed to binarize images into foreground and background [21], and these methods can also be applied to signature samples. Most of the research employs the well-known Otsu thresholding algorithm [21] to compute a global threshold for the signature image and convert the gray scale image into binary [22]. Signature images may present variations in terms of pen thickness, scale, rotation, etc., even among authentic signatures of a person. Common pre-processing techniques are: signature extraction, noise removal, application of morphological operators, size normalization, centering and binarization [23].

6. FEATURE EXTRACTION

6.1 Global and local feature extraction

Local and global features contain data which are efficient for signature recognition; the choice of features matters because various features are necessary for any pattern recognition and classification method. Global attributes are extracted from the complete signature. The set of these local and global attributes is then applied to report the identity of genuine and forged signature samples from the dataset. The global attributes extracted from a sample are described as follows [25]. Width (length): for a binary image, the width is the distance between 2 pixels in the horizontal projection and must include more than three points of the signature. Height: the height is the distance between 2 pixels in the vertical projection and must include more than three points of the signature for a binary signature. Aspect ratio: a global attribute that represents the ratio of the width to the height of the signature image [26].


Horizontal projection: the horizontal projection is calculated from both the binary and the skeletonized signature; the set of dark points is calculated from the horizontal projections of the binary and skeletonized images. Vertical projection: the vertical projection is the set of dark points obtained from the vertical projections of the binary and skeletonized images. Local attributes are extracted from gray level and binary signatures: over small areas of the whole signature, local attributes represent height, width, aspect ratio, and horizontal and vertical projections. To get a group of global and local attributes, both attribute groups are collected into a feature vector that serves as input to the classification techniques for matching [27, 28].

6.2 Orientation

Orientation represents the direction of the image lines. This attribute is necessary and helps to know how the signatory signed the image, which letters come first, and where corners and peaks lie. Orientation is obtained using the proportion of the angle at the main axis [29].
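A sketch of these global attributes for a binary signature image (NumPy assumed; reading the "more than three points" rule as a per-row/per-column ink count is our interpretation):

```python
import numpy as np

def global_features(sig):
    """Global attributes of a binary signature image (ink = 1, background = 0)."""
    rows = np.where(sig.sum(axis=1) > 3)[0]  # rows with more than three ink points
    cols = np.where(sig.sum(axis=0) > 3)[0]  # columns with more than three ink points
    height = int(rows.max() - rows.min() + 1)
    width = int(cols.max() - cols.min() + 1)
    return {
        "width": width,
        "height": height,
        "aspect_ratio": width / height,
        "h_projection": sig.sum(axis=1),     # dark points per row
        "v_projection": sig.sum(axis=0),     # dark points per column
    }

sig = (np.random.rand(128, 256) > 0.9).astype(np.uint8)  # stand-in binary signature
features = global_features(sig)
```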

7. DIMENSION REDUCTION

This section introduces dimension reduction and the difficulties of classification for high-dimensional multivariate data. Figure 3 shows the main idea of this study.

Figure 3. Representation of the data reduction methods

The basic concept is to reduce large amounts of information down to its significant parts. Data reduction is the procedure of decreasing the set of arbitrary inputs under consideration [32]. It can be split into attribute (feature) selection, discussed in detail in the next sub-sections, and feature extraction. Data reduction has benefits in that it enhances the performance of the machine learning model. The first part of dimension reduction is feature selection, which attempts to find a useful subset of the original features. In some situations, tasks such as classification can be performed more accurately in the reduced space than in the original space, as with the Sparse PCA technique [33]. Linear and nonlinear reduction methods which depend on the estimation of local data have been suggested in recent times. This section gives a logical comparison of these methods by identifying the weaknesses of current linear and nonlinear techniques.

7.1 Linear dimension reduction

Linear methods achieve dimension reduction by combining the information into a subspace of lower dimension. There are different methods to do so, such as Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) [33]. LDA is a popular data-analytic tool for studying the category relationship between data points, and it is supervised; a main disadvantage of LDA is that it fails to discover the local geometrical structure of the data manifold [34]. Dimension reduction is the task of reducing the amount of available data (the data dimension). The data processing required in dimension reduction, often linear for computational simplicity, is determined by optimizing an appropriate figure of merit, which quantifies the amount of information preserved after a certain reduction in the data dimension. The 'workhorse' of dimension reduction goes under the name of PCA [33]; PCA has been extremely popular in data dimension reduction since it entails only linear data processing.
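A minimal PCA sketch via the eigen-decomposition of the covariance matrix (NumPy assumed; the data shapes are illustrative):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors X (n_samples x n_features) onto the first k
    principal components, a linear dimension reduction."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # covariance matrix of the features
    vals, vecs = np.linalg.eigh(cov)         # eigen-decomposition (ascending order)
    order = np.argsort(vals)[::-1][:k]       # keep the top-k components
    return Xc @ vecs[:, order]

# e.g. compress 100 signature feature vectors of dimension 512 down to 32
Z = pca_reduce(np.random.rand(100, 512), k=32)
```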


7.2 Non-linear techniques for dimension reduction

This section discusses two non-linear methods for dimension reduction, namely Kernel PCA (KPCA) and Multi-Dimensional Scaling (MDS). These methods attempt to maintain the structure of the original data in the low-dimensional representation [36]. As shown in Figure 4, KPCA calculates the kernel matrix K of the data points xi; KPCA is a kernel-based method, and the mappings it performs depend on the selection of the kernel function. A main shortcoming of KPCA is that the size of the kernel matrix is proportional to the square of the number of cases in the database [37].
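A short KPCA sketch using scikit-learn's KernelPCA (the RBF kernel, gamma value, and component count are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Non-linear dimension reduction with an RBF kernel; note that the kernel
# matrix is n_samples x n_samples, which is the scaling drawback noted above.
X = np.random.rand(200, 64)            # hypothetical signature feature vectors
kpca = KernelPCA(n_components=16, kernel="rbf", gamma=0.1)
Z = kpca.fit_transform(X)
```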

Figure 4. Kernel principal components analysis

Honarkhah et al. [38] presented MDS, but a major disadvantage of MDS is that it provides a global measure of dis/similarity without much insight into subtleties [34]. Further weaknesses are the susceptibility to the curse of dimensionality and the difficulty of finding the small eigenvalues in an eigenproblem. PCA is susceptible to the relative scaling of the original attributes [39]. Feature extraction produces new features from the original features, while feature selection returns a subset of the original features [40]. The set of PC
