Pattern Recognition 44 (2011) 375–385
Off-line signature verification based on grey level information using texture features

J.F. Vargas, M.A. Ferrer, C.M. Travieso, J.B. Alonso

Instituto para el Desarrollo Tecnológico y la Innovación en Comunicaciones (IDeTIC), Universidad de Las Palmas de Gran Canaria, Tafira Campus, 35017 Las Palmas, Spain
Electronic Engineering Department (GEPAR), Universidad de Antioquia, Medellín, Colombia
Article history: Received 7 April 2010; Received in revised form 11 June 2010; Accepted 29 July 2010

Abstract
A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and the local binary pattern are analysed and used as features. The method begins with a proposed background removal step; the histogram is also processed to reduce the influence of the different writing ink pens used by signers. Genuine samples and random forgeries were used to train an SVM model, and random and skilled forgeries were used to test it. Results are comparable with the state of the art for approaches that use the same two databases, the MCYT-75 and GPDS-100 corpuses. Combining the proposed features with geometric features proposed by other authors also promises improvements in performance. © 2010 Elsevier Ltd. All rights reserved.
Keywords: Off-line handwritten signature verification Pattern recognition Grey level information Texture features Co-occurrence matrix Local binary pattern LS-SVM
1. Introduction The security requirements of today’s society have placed biometrics at the centre of an ongoing debate concerning its key role in a multitude of applications [1–3]. Biometrics measure individuals’ unique physical or behavioural characteristics with the aim of recognising or authenticating identity. Common physical biometrics include fingerprints, hand or palm geometry, retina, iris, or facial characteristics. Behavioural characteristics include signature, voice (which also has a physical component), keystroke pattern, and gait. Signature and voice technologies are examples of this class of biometrics and are the most developed [4]. The handwritten signature is recognised as one of the most widely accepted personal attributes for identity verification. This signature is a symbol of consent and authorisation, especially in the credit card and bank checks environment, and has been an attractive target for fraud for a long time. There is a growing demand for the processing of individual identification to be faster and more accurate, and the design of an automatic signature verification system is a real challenge. Plamondon and Srihari [5] noted that automatic signature verification systems occupy a very
Corresponding author. Tel.: +34 928 451269; fax: +34 928 451243.
E-mail addresses: [email protected] (J.F. Vargas), [email protected] (M.A. Ferrer), [email protected] (C.M. Travieso), [email protected] (J.B. Alonso).
doi:10.1016/j.patcog.2010.07.028
specific niche among other automatic identification systems: ‘‘On the one hand, they differ from systems based on the possession of something (key, card, etc.) or the knowledge of something (passwords, personal information, etc.), because they rely on a specific, well learned gesture. On the other hand, they also differ from systems based on the biometric properties of an individual (fingerprints, voice prints, retinal prints, etc.), because the signature is still the most socially and legally accepted means of personal identification.’’ A comparison of signature verification with other recognition technologies (fingerprint, face, voice, retina, and iris scanning) reveals that signature verification has several advantages as an identity verification mechanism. Firstly, signature analysis can only be applied when the person is/was conscious and willing to write in the usual manner, although it is possible that individuals may be forced to submit the handwriting sample. To give a counter example, a fingerprint may also be used when the person is in an unconscious (e.g. drugged) state. Forging a signature is deemed to be more difficult than forging a fingerprint, given the availability of sophisticated analyses [6]. Unfortunately, signature verification is a difficult discrimination problem since a handwritten signature is the result of a complex process depending on the physical and psychological conditions of the signer, as well as the conditions of the signing process [7]. The net result is that a signature is a strong variable entity and its verification, even for human experts, is not a trivial matter. The scientific challenges and the valuable applications of signature verification have
attracted many researchers from universities and the private sector to signature verification. Undoubtedly, automatic signature verification plays an important role in the set of biometric techniques for personal verification [8,9]. In the present study, we focus on features based on grey level information from images containing handwritten signatures, especially those providing information about ink distribution along traces delineating the signature. Textural analysis methodologies are included for this purpose since they provide rotation and luminance invariance. The paper is organised as follows: Section 2 presents the background to off-line signature verification. Section 3 provides an overview of statistical texture analysis. Section 4 describes the approach proposed. Section 5 presents details about the database. Section 6 is devoted to the classifiers. Section 7 presents the evaluation protocol and reports the experimental results. The paper ends with concluding remarks.
2. Background

There are two major methods of signature verification. One is the on-line method, which measures sequential data, such as handwriting speed and pen pressure, with a special device. The other is the off-line method, which uses an optical scanner to obtain handwriting data written on paper. There are two main approaches to off-line signature verification: the static approach and the pseudo-dynamic approach. The static one involves geometric measures of the signature, while the pseudo-dynamic one tries to estimate dynamic information from the static image [10]. On-line systems use special input devices such as tablets, while off-line approaches are much more difficult because the only available information is a static two-dimensional image obtained by scanning pre-written signatures on paper; the dynamic information of the pen-tip (stylus) movement, such as pen-tip coordinates, pressure, velocity, acceleration, and pen-up and pen-down events, can be captured by a tablet in real time but not by an image scanner. The off-line method therefore needs to apply complex image processing techniques to segment and analyse signature shape for feature extraction [11]. Hence, on-line signature verification is potentially more successful. Nevertheless, off-line systems have a significant advantage in that they do not require access to special processing devices when the signatures are produced. In fact, provided the accuracy of verification systems is addressed, the off-line method has many more practical application areas than the on-line one. Consequently, an increasing amount of research has studied feature-extraction methodologies for off-line signature recognition and verification [12]. It is also true that the track of the pen shows a great deal of variability: no two genuine signatures are ever exactly the same. Indeed, two identical signatures would constitute legal evidence of forgery by tracing.

The normal variability of signatures constitutes the greatest obstacle to achieving automatic verification. Signatures vary in their complexity, duration, and vulnerability to forgery. Signers vary in their coordination and consistency. Thus, the security of the system varies from user to user: a short, common name is no doubt easier to forge than a long, carefully written name, no matter what technique is employed. Therefore, a system must be capable of ''degrading'' gracefully when supplied with inconsistent signatures, and the security risks must be kept to acceptable levels [13]. Problems of signature verification are addressed by taking into account three different types of forgeries: random forgeries, produced without knowing either the name of the signer or the shape of his signature; simple forgeries, produced knowing the name of the signer but without having an example of his signature; and skilled forgeries, produced by people who, after studying an original instance of the signature, attempt to imitate it as closely as possible. Clearly, the problem of signature verification becomes more and more difficult when passing from random to simple and skilled forgeries, the latter being so difficult a task that even human beings make errors in several cases. Indeed, exercises in imitating a signature often produce forgeries so similar to the originals that discrimination is practically impossible; in many cases, the distinction is complicated even more by the large variability introduced by some signers when writing their own signatures [14]. For instance, studies on signature shape found that North American signatures are typically more stylistic, in contrast to the highly personalised and ''variable in shape'' European ones [15].

2.1. Off-line signature verification based on pseudo-dynamic features

Dynamic information cannot be derived directly from static signature images, but some features that partly represent dynamic information can be derived. These special characteristics are referred to as pseudo-dynamic information; the term ''pseudo-dynamic'' distinguishes real dynamic data, recorded during the writing process, from information that can be reconstructed from the static image [15]. There are different approaches to the reconstruction of dynamic information from static handwriting records. Techniques from the field of forensic document examination are mainly based on microscopic inspection of the writing trace and assumptions about the underlying writing process [16]. Another paper by the same author [17] describes studies on the influence of physical and bio-mechanical processes on the ink trace, aiming to provide a solid foundation for enhanced signature analysis procedures; simulated human handwriting movements, produced by a writing robot, are used to study the relationship between writing process characteristics and the ink deposited on paper. Approaches from the field of image processing and pattern recognition can be divided into: methods for estimating the temporal order of stroke production [18,19]; methods inspired by motor control theory, which recover temporal features on the basis of stroke geometries such as curvature [20]; and, finally, methods analysing stroke thickness and/or stroke intensity variations [21–25]. An analysis of mainly grey level distribution, in line with the methods of the last group, is reported in this paper.

A grey level image of a scanned handwritten signature shows that some pixels may represent shapes written with high pressure, which appear as darker zones. High pressure points (HPPs) can be defined as those signature pixels whose grey level values are greater than a suitable threshold. The study of high pressure features was proposed by Ammar et al. [21] to indicate regions where more physical effort was made by the signer. This idea of calculating a threshold to find the HPPs was adopted and developed by other researchers [26,14]. Lv et al. [27] set two thresholds to store only the foreground points and edge points. They analyse only the remaining points whose grey level value lies between the two thresholds and divide them into 12 segments; the percentage of points whose grey level value falls in the corresponding segment is one of the values of a feature vector that reflects the grey level distribution. Lv and co-workers also consider the stroke width distribution. In order to analyse not only HPPs but also low pressure points (LPPs), a complementary threshold was proposed by Mitra et al. [28]. In a previous work, we used a radial and angular partition (RAP) for a local analysis to determine the ratio, over each cell, between HPPs and all the points conforming the
binary version of the image [29]. Franke [30] evaluates ink-trace characteristics that are affected by the interaction of biomechanical writing and physical ink-deposition processes. The analysis focused on the ink intensity, which is captured along the entire writing trace of a signature. The adaptive segmentation of ink-intensity distributions takes the influences of different writing instruments into account and supports the cross-validation of different pen probes. In this way, texture analysis of ink trace appears as an interesting approach to characterise personal writing for enhanced handwritten signature verification procedures.
3. Statistical texture analysis

Statistical texture analysis requires the computation of texture features from the statistical distribution of observed combinations of intensities at specified positions relative to each other in an image. According to the number of intensity points (pixels) in each combination, the texture statistics are classified as first-order, second-order, or higher-order. Biometric systems based on signature verification, in conjunction with textural analysis, can reveal information about the ink-pixel distribution, which reflects personal characteristics of the signer, i.e. pen-holding, writing speed, and pressure. However, we do not think that ink distribution information alone is sufficient for signer identification. So, in the specific case of signature strokes, we have also taken into account for the textural analysis the pixels on the stroke contour, by which we mean those stroke pixels that lie on the signature–background border. These pixels contribute statistical information about the signature shape, so the distribution data may be considered a combination of textural and shape information.

3.1. Statistical features of first order

Statistical features of first order, as represented in a histogram, take into account the individual grey level value of each pixel in an image I(x, y), 1 ≤ x ≤ N, 1 ≤ y ≤ M, but the spatial arrangement is not considered, i.e. different spatial features can have the same grey level histogram. A classical way of parameterising the histogram is to measure its average and standard deviation. Obviously, the discriminative ability of first order statistics is really low for automatic signature verification, especially when user and forger use a similar writing instrument. In fact, most researchers normalise the histogram so as to reduce the noise for the subsequent processing of the signature.

3.2. Grey level co-occurrence matrices

The grey level co-occurrence matrix (GLCM) method is a way of extracting second order statistical texture features from an image [31]. This approach has been used in a number of applications, including ink type analysis [16], e.g. [32–34]. A GLCM of an image I(x, y) is a matrix P(i, j|Δx, Δy), 0 ≤ i ≤ G−1, 0 ≤ j ≤ G−1, where the number of rows and columns is equal to the number of grey levels G. The matrix element P(i, j|Δx, Δy) is the relative frequency with which two pixels with grey levels i and j occur, separated by a pixel distance (Δx, Δy). For simplicity, in the rest of the paper we will denote the GLCM matrix as P(i, j). For a statistically reliable estimation of the relative frequency we need a sufficiently large number of occurrences of each event. The reliability of P(i, j) depends on the grey level number G and the I(x, y) image size; in the case of images containing signatures, it depends on the number of pixels in the signature strokes rather than the image size. If the statistical reliability is not sufficient, we need to reduce G to guarantee a minimum number of pixel transitions per P(i, j) matrix component, despite losing texture description accuracy. The grey level number G can easily be reduced by quantifying the image I(x, y). The classical feature measures extracted from the GLCM matrix (see Haralick [32] and Conners and Harlow [31]) are the following:

Texture homogeneity H:

H = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} {P(i, j)}²    (1)

A homogeneous scene will contain only a few grey levels, giving a GLCM with only a few, but relatively high, values of P(i, j). Thus, the sum of squares will be high.

Texture contrast C:

C = Σ_{n=0}^{G−1} n² { Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} P(i, j) },  |i − j| = n    (2)

This measure of local intensity variation favours contributions from P(i, j) away from the diagonal, i.e. i ≠ j.

Texture entropy E:

E = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} P(i, j) log{P(i, j)}    (3)

Non-homogeneous scenes have low first order entropy, while a homogeneous scene reveals high entropy.

Texture correlation O:

O = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} [ i·j·P(i, j) − μi·μj ] / (σi·σj)    (4)

where μi and σi are the mean and standard deviation of the P(i, j) rows, and μj and σj the mean and standard deviation of the P(i, j) columns, respectively. Correlation is a measure of the grey level linear dependence between pixels at the specified positions relative to each other.

3.3. Local binary patterns

The local binary pattern (LBP) operator is defined as a grey level invariant texture measure, derived from a general definition of texture in a local neighbourhood whose centre is the pixel (x, y). Recent extensions of the LBP operator have shown it to be a really powerful measure of image texture, producing excellent results in many empirical studies. LBP has been applied in biometrics to the specific problem of face recognition [35,36]. The LBP operator can be seen as a unifying approach to the traditionally divergent statistical and structural models of texture analysis. Perhaps the most important property of the LBP operator in real-world applications is its invariance to monotonic grey level changes. Equally important is its computational simplicity, which makes it possible to analyse images in challenging real-time settings [37].

The local binary pattern operator describes the surroundings of the pixel (x, y) by generating a bit-code from the binary derivatives of a pixel as a complementary measure of local image contrast. The original LBP operator takes the eight neighbouring pixels, using the centre grey level value I(x, y) as a threshold. The operator generates a binary code 1 if the neighbour is greater than or equal to the central level; otherwise it generates a binary code 0. The eight neighbouring binary codes can be represented by an 8-bit number. The LBP operator outputs for all the pixels in the image can be accumulated to form a histogram, which represents a measure of the image texture. Fig. 1 shows an example of the LBP operator.
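As an illustration, the basic 8-neighbour LBP operator described above can be sketched in a few lines of Python. The clockwise neighbour ordering chosen here is our own assumption (any fixed ordering yields a valid LBP code), and the function name is ours, not the authors':

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code of the centre pixel of a 3x3 patch.

    Neighbours are visited in a fixed clockwise order starting at the
    right-hand pixel; this ordering is an illustrative assumption.
    """
    gc = patch[1, 1]
    # (row, col) offsets: right, down-right, down, down-left,
    # left, up-left, up, up-right
    offsets = [(1, 2), (2, 2), (2, 1), (2, 0),
               (1, 0), (0, 0), (0, 1), (0, 2)]
    code = 0
    for p, (r, c) in enumerate(offsets):
        if patch[r, c] >= gc:          # s(g_p - g_c): 1 if neighbour >= centre
            code += 2 ** p
    return code
```

Accumulating `lbp_code` over every interior pixel of an image and histogramming the resulting 8-bit values gives the texture measure described in the paragraph above.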
The above LBP operator is extended in [38] to a generalised grey level and rotation invariant operator. The generalised LBP operator is derived on the basis of a circularly symmetric neighbour set of P members on a circle of radius R. The parameter P controls the quantisation of the angular space and R determines the spatial resolution of the operator. The LBP code of the central pixel (x, y) with P neighbours and radius R is defined as

LBP_{P,R}(x, y) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p    (5)

where s(l) = 1 if l ≥ 0 and s(l) = 0 if l < 0 (the unit step function), g_c is the grey level value of the central pixel, g_c = I(x, y), and g_p is the grey level of the pth neighbour, defined as

g_p = I(x + R·sin(2πp/P), y − R·cos(2πp/P))    (6)

If the pth neighbour does not fall exactly in a pixel position, its grey level is estimated by interpolation. An example can be seen in Fig. 2. In a further step, [38] defines an LBP_{P,R}^{riu2} operator invariant to rotation as follows:

LBP_{P,R}^{riu2}(x, y) = Σ_{p=0}^{P−1} s(g_p − g_c)  if U(x, y) ≤ 2;  P + 1 otherwise    (7)

where

U(x, y) = Σ_{p=1}^{P} |s(g_p − g_c) − s(g_{p−1} − g_c)|,  with g_P = g_0    (8)
Analysing the above equations, U(x, y) can be calculated as follows:
(1) work out the function f(p) = s(g_p − g_c), 0 ≤ p ≤ P, considering g_P = g_0;
(2) obtain its derivative: f(p) − f(p − 1), 1 ≤ p ≤ P;
(3) calculate the absolute value: |f(p) − f(p − 1)|, 1 ≤ p ≤ P; and
(4) obtain U(x, y) as the integration or sum Σ_{p=1}^{P} |f(p) − f(p − 1)|.

If the grey levels of the pixel (x, y) neighbours are uniform or smooth, as in the case of Fig. 3, left, f(p) will be a sequence of ''0'' and ''1'' with at most two transitions. In this case U(x, y) will be zero or two, and the LBP_{P,R}^{riu2} code is worked out as the sum Σ_{p=0}^{P−1} f(p). Conversely, if the surrounding grey levels of pixel (x, y) vary quickly, as in the case of Fig. 3, right, f(p) will be a sequence containing several ''0''–''1'' or ''1''–''0'' transitions and U(x, y) will be greater than 2. So, in the noisy case, a constant value equal to P + 1 is assigned to LBP_{P,R}^{riu2}, making it more robust to noise than the previously defined LBP operators. The rotation invariance property is guaranteed because, when summing the f(p) sequence to obtain LBP_{P,R}^{riu2}, it is not weighted by 2^p. As f(p) is a sequence of 0s and 1s, 0 ≤ LBP_{P,R}^{riu2}(x, y) ≤ P + 1. As textural measure, we will use the P + 2 histogram bins of the LBP_{P,R}^{riu2}(x, y) codes. Of the three LBP codes presented in this section, LBP, LBP_{P,R}, and LBP_{P,R}^{riu2}, we will use LBP_{P,R}^{riu2} in this paper, because of its rotational invariance.

4. Textural analysis for signature verification

The analysis of the writing trace in signatures is an application area for textural analysis. The textural features of the grey level image can reveal personal characteristics of the signer (i.e. pressure and speed changes, pen-holding, etc.), complementing classical features proposed in the literature. In this section we describe a basic scheme for using textural analysis in automatic signature verification.
Fig. 1. Working out the LBP code of pixel (x, y). In this case I(x, y) = 3, and its LBP code is LBP(x, y) = 143.
Fig. 3. Calculating the LBP_{P,R}^{riu2} code for two cases, with P = 4 and R = 2. Left: g_c = 152, {g0, g1, g2, g3} = {154, 156, 155, 149}, {f(0), f(1), f(2), f(3), f(4)} = {1, 1, 1, 0, 1}, and U(x, y) = 0 + 0 + 1 + 1 = 2 ≤ 2; therefore LBP_{P,R}^{riu2}(x, y) = 1 + 1 + 1 + 0 = 3. Right: g_c = 154, {g0, g1, g2, g3} = {155, 152, 159, 148}, {f(0), f(1), f(2), f(3), f(4)} = {1, 0, 1, 0, 1}, U(x, y) = 1 + 1 + 1 + 1 = 4 > 2, and LBP_{P,R}^{riu2}(x, y) = P + 1 = 5. (a) Smooth and uniform grey level change and (b) noisy grey level surroundings. The numbers and the shade intensity represent the grey levels.
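The two worked examples in the Fig. 3 caption can be reproduced with a direct transcription of Eqs. (7) and (8). This is a minimal Python sketch with function names of our own choosing:

```python
def s(l):
    """Unit step function of Eq. (5)."""
    return 1 if l >= 0 else 0

def lbp_riu2(gc, g):
    """Rotation-invariant uniform LBP code from the centre grey level gc
    and the list g of P neighbour grey levels (Eqs. (7) and (8))."""
    P = len(g)
    f = [s(gp - gc) for gp in g]
    f.append(f[0])                      # g_P = g_0 closes the circle
    U = sum(abs(f[p] - f[p - 1]) for p in range(1, P + 1))
    return sum(f[:P]) if U <= 2 else P + 1
```

With the caption's numbers, `lbp_riu2(152, [154, 156, 155, 149])` returns 3 and `lbp_riu2(154, [155, 152, 159, 148])` returns 5, matching the left and right cases of Fig. 3.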
Fig. 2. The surroundings of the I(x, y) central pixel are displayed along with the pth neighbours, marked with black circles, for different P and R values. Left: P = 4, R = 1, and the LBP_{4,1}(x, y) code is obtained by comparing g_c = I(x, y) with g_{p=0} = I(x, y − 1), g_{p=1} = I(x + 1, y), g_{p=2} = I(x, y + 1), and g_{p=3} = I(x − 1, y). Centre: P = 4, R = 2, and the LBP_{4,2}(x, y) code is obtained by comparing g_c = I(x, y) with g_{p=0} = I(x, y − 2), g_{p=1} = I(x + 2, y), g_{p=2} = I(x, y + 2), and g_{p=3} = I(x − 2, y). Right: P = 8, R = 2, and the LBP_{8,2}(x, y) code is obtained by comparing g_c = I(x, y) with g_{p=0} = I(x, y − 2), g_{p=1} = I(x + √2, y − √2), g_{p=2} = I(x + 2, y), g_{p=3} = I(x + √2, y + √2), g_{p=4} = I(x, y + 2), g_{p=5} = I(x − √2, y + √2), g_{p=6} = I(x − 2, y), and g_{p=7} = I(x − √2, y − √2).
4.1. Background removal

The features used in our system characterise the grey level distribution in a signature image, but they also require a procedure for background elimination: the grey levels of the background carry no discriminating information, and the noise they add can negatively affect the characterisation. In this work, we have used a simple posterisation procedure to remove the background influence; obviously, any other efficient segmentation procedure would serve as well.

Posterisation occurs when the apparent bit depth of an image has been decreased so much that it has a visual impact. The term ''posterisation'' is used because the effect resembles the colour range of a mass-produced poster, where the print process uses a limited number of coloured inks. Let I(x, y) be a 256-level grey scale image and nL + 1 the number of grey levels considered for posterisation. The posterised image IP(x, y) is defined as follows:

IP(x, y) = round( round( I(x, y)·nL / 255 ) · 255/nL )    (9)

where round(·) rounds the elements to the nearest integers. The interior round performs the posterisation operation, and the exterior round guarantees that the resulting grey level of IP(x, y) is an integer. In the results presented in this paper, with the MCYT and GPDS corpuses, we have used a value of nL = 3, obtaining a 4-grey-level posterised image, the grey levels being 0, 85, 170, and 255. Perceptually, valid values are nL = 3 or 4. With values of nL = 1 or 2 the signature is half erased, which is not a valid segmentation. With nL = 3 the signature strokes are well preserved and the background appears nearly clean. With values of nL > 3, mainly in the MCYT Corpus, more and more salt and pepper noise appears in the background. In order to avoid further image processing to eliminate the salt and pepper noise, the value nL = 3 was selected. The images from both corpuses consist of dark strokes against a white background.

In the posterised image the background appears white (grey level equal to 255) and the signature strokes appear darker (grey levels equal to 0, 85, or 170). Therefore, to obtain the binarised signature Ibw(x, y) (black strokes and white background) we apply a simple thresholding operation, as follows:

Ibw(x, y) = 255 if IP(x, y) = 255;  0 otherwise    (10)

The black and white image Ibw(x, y) is used as a mask to segment the original signature, and the segmented signature is obtained as

IS(x, y) = 255 if Ibw(x, y) = 255;  I(x, y) otherwise    (11)
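Eqs. (9)–(11) translate directly into a few numpy lines. This is a sketch, not the authors' code: `np.rint` stands in for round(·), and the tie-breaking rule at exact halves is left as an implementation detail:

```python
import numpy as np

def segment_signature(I, nL=3):
    """Background removal by posterisation (Eqs. (9)-(11)).

    I is a 256-level grey scale image as a numpy array; nL + 1 is the
    number of posterisation levels.
    """
    I = I.astype(float)
    IP = np.rint(np.rint(I * nL / 255.0) * 255.0 / nL)   # Eq. (9)
    Ibw = np.where(IP == 255, 255, 0)                    # Eq. (10)
    IS = np.where(Ibw == 255, 255, I)                    # Eq. (11)
    return IP, Ibw, IS
```

With nL = 3, a pixel of grey level 212 posterises to 170 and so remains a stroke pixel, while grey level 213 posterises to 255 and becomes background, which is the behaviour discussed around Fig. 5.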
At this point, a complete segmentation between background and foreground is achieved. An example of the above described procedure can be seen in Fig. 4.

4.2. Histogram displacement

This section aims to reduce the influence of the different writing ink pens on the segmented signature. We achieve this by displacing the histogram of the signature pixels toward zero, keeping the background white with grey level equal to 255. By ensuring that the grey level value of the darkest signature pixel is always 0, the dynamic range will reflect only features of the writing style. This can be carried out by subtracting the minimum grey level value in the image from the signature pixels, as follows:

IG(x, y) = IS(x, y) if IS(x, y) = 255;  IS(x, y) − min{IS(x, y)} otherwise    (12)

where IG(x, y) is the segmented image with its histogram displaced toward zero. Fig. 5 illustrates the effect of this displacement.

4.3. Feature extraction

After the segmentation and the signature histogram displacement, the image is cropped to the signature size and resized to N = 512 and M = 512. The aim of these adjustments is to improve scale invariance. As interpolation method we use the nearest neighbour, in order to keep the ink texture as invariant as possible.

4.3.1. GLCM features

To calculate the GLCM features, we have to ensure the statistical significance of the estimation of the GLCM matrix P(i, j|Δx, Δy), 0 ≤ i, j ≤ G−1. If we follow the rule of 3 [39], which supposes an independent, identical distribution, a 1% estimation error with a 95% confidence limit requires at least 300 samples per component. As P(i, j) contains G² components, the number of pixel transitions needed for a reliable estimation of all the P(i, j) components is 300·G². The number of signature pixels for each signature in our databases has been worked out in its histogram, depicted in Fig. 6.
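The histogram displacement of Eq. (12) above can be sketched in numpy as follows (the function name is ours):

```python
import numpy as np

def displace_histogram(IS):
    """Shift the signature pixels' grey levels toward zero (Eq. (12)),
    leaving the white (255) background untouched."""
    IS = IS.astype(int)
    stroke = IS != 255               # signature (non-background) pixels
    if not stroke.any():
        return IS
    return np.where(stroke, IS - IS[stroke].min(), IS)
```

The minimum is taken over the stroke pixels only, so the darkest signature pixel always ends up at grey level 0 while the background stays at 255.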
To guarantee statistical significance at the 98% level for the signatures in the databases, we work out the 2nd percentile that corresponds to 23,155 pixels. Then, the number of grey levels should be pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 23,155 4300 G2 -G o 23,155=300 ¼ 8:78 ð13Þ We need to take into account that the number of grey levels G is an integer. So, in order to obtain a reliable estimation of the GLCM matrix, the signature images will be quantified to G ¼8 grey levels to calculate the P(i, j) matrix, despite losing texture resolution. Experiments with 16 and 32 grey levels have also
Fig. 4. Posterisation procedure: (a) original image I(x, y) with 256 grey levels, (b) posterised image IP(x, y) with nL ¼ 3: 4 grey levels, (c) binarised image Ibw ðx,yÞ, and (d) segmented image Is(x, y): original signature with the background converted to white (grey level equal to 255).
380
J.F. Vargas et al. / Pattern Recognition 44 (2011) 375–385
Fig. 5. Histogram preprocessing. Upper: histogram and signature detail of image IS(x, y), lower: histogram and signature detail of IG(x, y), which is darker than IS(x, y). Note that IS(x, y) histogram finishes abruptly at grey level 213 because of the posterisation process with nL ¼ 3: as roundð212nL =255Þ ¼ 2, pixels with grey level 212 remain within the signature stroke, and as roundð213nL =255Þ ¼ 3, pixels with grey level 213 go to the background.
been performed, and the resulting final equal error rate confirmed that it is preferable to have a reliable GLCM matrix estimation than to increase the texture resolution. The quantised image I_Q(x, y) is obtained from I_G(x, y) as follows:

I_Q(x, y) = round( fix( I_G(x, y) · G / 255 ) · 255 / G )    (14)

where fix rounds toward zero, and the outer round guarantees integer grey levels in the I_Q(x, y) image.

Once the signature image has been quantised to G = 8 grey levels, four GLCM matrices of size G × G = 8 × 8 = 64 are worked out: P1 = P(i, j | Δx = 1, Δy = 0), P2 = P(i, j | Δx = 1, Δy = 1), P3 = P(i, j | Δx = 0, Δy = 1), and P4 = P(i, j | Δx = −1, Δy = 1). These GLCM matrices correspond to joint probability matrices that relate the grey level of the central pixel (x, y) with the pixel on its right (x+1, y), right and above (x+1, y+1), above (x, y+1), and left and above (x−1, y+1). We do not need to work out more GLCM matrices because, for instance, the relation of pixel (x, y) with pixel (x−1, y−1) is already taken into account when the central pixel is at (x−1, y−1).

The textural measures obtained for each GLCM matrix are homogeneity, contrast, entropy, and correlation, all of which are defined in Section 3. This gives 16 textural measures (4 measures for each of 4 matrices), which are reduced to 8 following the suggestion of Haralick [32]. Suppose that H_i, C_i, E_i, and O_i are the homogeneity, contrast, entropy, and correlation textural measures, respectively, of P_i, 1 ≤ i ≤ 4. We define the 4-element vector M containing the average of each textural measure as

M = { mean_{1≤i≤4} H_i, mean_{1≤i≤4} C_i, mean_{1≤i≤4} E_i, mean_{1≤i≤4} O_i }    (15)

where the "mean" is

mean_{1≤i≤4} H_i = (1/4) Σ_{i=1}^{4} H_i    (16)

and the 4-element vector R, containing the range of each textural measure, is

R = { range_{1≤i≤4} H_i, range_{1≤i≤4} C_i, range_{1≤i≤4} E_i, range_{1≤i≤4} O_i }    (17)

where the "range" is the difference between the maximum and the minimum values, i.e.

range_{1≤i≤4} H_i = max_{1≤i≤4} H_i − min_{1≤i≤4} H_i    (18)

The eight-component feature vector is obtained by concatenating the M and R vectors:

GLCM Feature Vector = {M, R}    (19)

Fig. 6. Number of signature pixel histograms for both databases considered in this paper.
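The quantisation of Eq. (14) and the eight GLCM features of Eqs. (15)–(19) can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes the standard Haralick definitions of the four measures (the paper defines them in Section 3, not reproduced here), and it indexes the quantised levels 0..G−1 rather than remapping them to the 0–255 range, which is equivalent for GLCM indexing.

```python
# Sketch of the GLCM feature vector (Eqs. (14)-(19)); helper names are
# illustrative and the measure formulas are the standard Haralick ones.
import math

G = 8  # grey levels after quantisation

def quantise(img):
    """Index form of Eq. (14): reduce a 0-255 grey image to levels 0..G-1."""
    return [[min(int(p * G / 255), G - 1) for p in row] for row in img]

def glcm(q, dx, dy):
    """Joint probability matrix P(i, j | dx, dy) over valid pixel pairs."""
    h, w = len(q), len(q[0])
    P = [[0.0] * G for _ in range(G)]
    n = 0
    for y in range(h):
        for x in range(w):
            X, Y = x + dx, y + dy
            if 0 <= X < w and 0 <= Y < h:
                P[q[y][x]][q[Y][X]] += 1
                n += 1
    return [[c / n for c in row] for row in P]

def measures(P):
    """Homogeneity, contrast, entropy and correlation of one GLCM."""
    hom = sum(P[i][j] / (1 + abs(i - j)) for i in range(G) for j in range(G))
    con = sum((i - j) ** 2 * P[i][j] for i in range(G) for j in range(G))
    ent = -sum(p * math.log(p) for row in P for p in row if p > 0)
    mi = sum(i * P[i][j] for i in range(G) for j in range(G))
    mj = sum(j * P[i][j] for i in range(G) for j in range(G))
    si = math.sqrt(sum((i - mi) ** 2 * P[i][j] for i in range(G) for j in range(G)))
    sj = math.sqrt(sum((j - mj) ** 2 * P[i][j] for i in range(G) for j in range(G)))
    cor = (sum((i - mi) * (j - mj) * P[i][j] for i in range(G) for j in range(G))
           / (si * sj)) if si * sj else 0.0
    return hom, con, ent, cor

def glcm_features(img):
    """Eqs. (15)-(19): means and ranges of the 4 measures over 4 offsets."""
    q = quantise(img)
    ms = [measures(glcm(q, dx, dy)) for dx, dy in [(1, 0), (1, 1), (0, 1), (-1, 1)]]
    cols = list(zip(*ms))
    M = [sum(c) / 4 for c in cols]       # Eq. (16): mean of each measure
    R = [max(c) - min(c) for c in cols]  # Eq. (18): range of each measure
    return M + R                          # Eq. (19): {M, R}, length 8

# Synthetic textured patch for illustration.
img = [[(x * 31 + y * 57) % 256 for x in range(16)] for y in range(16)]
print(len(glcm_features(img)))  # 8
```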
4.3.2. LBP features

To extract the feature set of the signature image I_G(x, y) based on LBP, we have chosen the rotation invariant operator LBP_{P,R}^{riu2} defined in Section 3. We have studied two cases. In the first case we consider P = 8 and R = 1, which obtains the LBP_{8,1}^{riu2}(x, y) code, thresholding each pixel with its 8 neighbouring pixels. The proposed feature vector is the normalised histogram of LBP_{8,1}^{riu2}(x, y). As 0 ≤ LBP_{8,1}^{riu2}(x, y) ≤ P + 1 = 9, the histogram is calculated with 10 bins as follows:

his_{LBP_{8,1}}(l) = #{ (x, y) | LBP_{8,1}^{riu2}(x, y) = l },  1 ≤ x ≤ N = 512, 1 ≤ y ≤ M = 512, 0 ≤ l ≤ P + 1 = 9    (20)

where # means "number of times". The normalised histogram is obtained as

LBP_{8,1} Feature Vector(l) = his_{LBP_{8,1}}(l) / Σ_{l'=0}^{P+1} his_{LBP_{8,1}}(l'),  0 ≤ l ≤ P + 1 = 9    (21)

For the second case analysed, the feature vector is obtained from the rotation invariant LBP_{P,R}^{riu2} code with P = 16 and R = 2. In this case LBP_{16,2}^{riu2}(x, y) considers the second ring around the pixel (x, y). As 0 ≤ LBP_{16,2}^{riu2}(x, y) ≤ P + 1 = 17, the normalised histogram contains 18 bins, and the feature vector is

LBP_{16,2} Feature Vector(l) = his_{LBP_{16,2}}(l) / Σ_{l'=0}^{P+1} his_{LBP_{16,2}}(l'),  0 ≤ l ≤ P + 1 = 17    (22)

It should be noted that by including the pixels on the border of the signature in the GLCM and LBP_{P,R}^{riu2} matrices, both matrices include a statistical measure of the signature shape, i.e. how many pixels on the signature border are oriented north, north-west, etc. This results from the background having a grey level equal to 255 (224 in the case of GLCM, because of the quantisation with G = 8).
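The LBP_{8,1}^{riu2} code and the 10-bin histogram of Eqs. (20) and (21) can be sketched as below. This is a minimal illustration assuming a plain list-of-lists grey image; the helper names are invented, and the border pixels of the image itself are skipped for simplicity.

```python
# Sketch of the LBP_{8,1}^{riu2} histogram feature (Eqs. (20)-(21)).

def lbp8_riu2(img, x, y):
    """Rotation-invariant uniform LBP code for the 8-neighbour ring (P=8, R=1)."""
    c = img[y][x]
    # Neighbours in circular order around (x, y).
    ring = [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
            img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]
    s = [1 if g >= c else 0 for g in ring]
    # U = number of 0/1 transitions along the circular pattern.
    u = sum(s[i] != s[(i + 1) % 8] for i in range(8))
    return sum(s) if u <= 2 else 9       # "uniform" codes 0..8, otherwise P+1 = 9

def lbp_histogram(img):
    """Normalised 10-bin histogram of the LBP_{8,1}^{riu2} codes (Eq. (21))."""
    hist = [0] * 10
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):            # skip the 1-pixel image border
        for x in range(1, w - 1):
            hist[lbp8_riu2(img, x, y)] += 1
    total = sum(hist)
    return [n / total for n in hist]

# A flat patch: every neighbour equals the centre, so s = 1...1, u = 0
# (uniform), and the code is sum(s) = 8 for every interior pixel.
flat = [[200] * 5 for _ in range(5)]
print(lbp_histogram(flat))               # all mass falls in bin 8
```

The second case of the paper (P = 16, R = 2, 18 bins) follows the same pattern with a 16-sample ring of radius 2.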
5. Database

We have used two databases for testing the proposed grey level based features. Both have been scanned at 600 dpi, which guarantees a sufficient grey texture representation. The main difference between them is the pens used: in the MCYT database all signatures, genuine and forged, were produced with the same pen on the same surface, whereas in the GPDS database all users signed with their own pens on different surfaces. Similar results on both databases will therefore point to a measure of ink independence of the proposed features.

5.1. GPDS-100 Corpus

The GPDS-100 signature corpus contains 24 genuine signatures and 24 forgeries of 100 individuals [25], producing 100 × 24 = 2400 genuine signatures and the same number of forgeries. The genuine signatures were taken in just one session to avoid scheduling difficulties. The repetitions of each genuine signature and forgery specimen were collected using each participant's own pen on white A4 sheets of paper, featuring two different box sizes: the first box is 5 cm wide and 1.8 cm high, and the second box is 4.5 cm wide and 2.5 cm high. Half of the genuine and forged specimens were written in each size of box. The forgeries were collected on a form with 15 boxes. Each forger form shows 5 images of different genuine signatures chosen randomly, and the forger imitated each of the 5 signatures 3 times. Forgers were given unlimited time to learn the signatures and perform the
forgeries. The complete signing process was supervised by an operator. Once the signature forms were collected, each form was scanned with a Canon device at 600 dpi resolution using a 256-level grey scale. All the signature images were saved in PNG format.

5.2. MCYT Corpus

The off-line subcorpus of the MCYT signature database [10] was used. The whole corpus comprises fingerprint and on-line signature data for 330 contributors from 4 different Spanish sites. Skilled forgeries are also available for the signature data: forgers were given the signature images of the clients to be forged and, after training, were asked to imitate the shape. Signature data were always acquired with the same ink pen and paper templates over a pen tablet, so the signature images are also available on paper. The paper templates of 75 signers (and their associated skilled forgeries) have been digitised with a scanner at 600 dpi. The resulting off-line subcorpus has 2250 signature images, with 15 genuine signatures and 15 forgeries per user. This signature corpus is publicly available at http://atvs.ii.uam.es.
6. Classification

Once the feature matrix is estimated, we need to solve a two-class classification (genuine or forgery) problem. A brief description of the classification technique used in the verification stage follows.

6.1. Least squares support vector machines

To model each signature, a least squares support vector machine (LS-SVM) has been used. SVMs were introduced within the context of statistical learning theory and structural risk minimisation. Least squares support vector machines are reformulations of standard SVMs which lead to solving indefinite linear (KKT) systems. Robustness, sparseness, and weightings can be imposed on LS-SVMs where needed, and a Bayesian framework with three levels of inference has been developed [40] for this purpose. Only one linear system has to be solved in the optimisation process, which not only simplifies the process but also avoids the problem of local minima in SVMs. The LS-SVM model is defined in its primal weight space by

ŷ(x) = ω^T φ(x) + b    (23)

where φ(x) is a function that maps the input space into a higher-dimensional feature space, x is the M-dimensional input vector, and ω and b are the parameters of the model. Given N input–output learning pairs (x_i, y_i) ∈ R^M × R, 1 ≤ i ≤ N, least squares support vector machines seek the ω and b that minimise

min_{ω,b,e} J(ω, e) = (1/2) ω^T ω + γ (1/2) Σ_{i=1}^{N} e_i²    (24)

subject to

y_i = ω^T φ(x_i) + b + e_i,  1 ≤ i ≤ N    (25)

In our case we use as φ(x) mapping function a Gaussian RBF kernel. The meta parameters of the LS-SVM model are the width C of the Gaussian and the regularisation factor γ. The training method for the estimation of ω and b can be found in [40]. In this work, the meta parameters (γ, C) were established using a grid search. The LS-SVM trained for each signer uses the same (γ, C)
meta parameters; further details about model construction are given in the next section.
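The dual of the problem in Eqs. (24) and (25) reduces to a single linear (KKT) system, which is what makes LS-SVM training cheap. The following sketch, with invented toy data and illustrative names (not the paper's code), trains that system and evaluates the decision function of Eq. (23):

```python
# Sketch of LS-SVM training in the dual, assuming a Gaussian RBF kernel.
import math

def rbf(a, b, width):
    """Gaussian RBF kernel exp(-||a - b||^2 / width^2)."""
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-d2 / width ** 2)

def solve(A, v):
    """Solve A t = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    t = [0.0] * n
    for r in range(n - 1, -1, -1):
        t[r] = (M[r][n] - sum(M[r][k] * t[k] for k in range(r + 1, n))) / M[r][r]
    return t

def lssvm_train(X, y, gamma, width):
    """The KKT conditions of Eqs. (24)-(25) reduce to one linear system:
       [ 0   1^T         ] [b]   [0]
       [ 1   K + I/gamma ] [a] = [y]   ->  f(x) = sum_i a_i k(x_i, x) + b"""
    n = len(X)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        row = [1.0] + [rbf(X[i], X[j], width) for j in range(n)]
        row[i + 1] += 1.0 / gamma
        A.append(row)
    sol = solve(A, [0.0] + list(y))
    return sol[0], sol[1:]               # bias b, dual weights a_i

def lssvm_predict(X, alphas, b, width, x):
    return sum(a * rbf(xi, x, width) for a, xi in zip(alphas, X)) + b

# Toy two-class problem: targets +1 (genuine) and -1 (forgery).
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1)]
y = [1.0, 1.0, -1.0, -1.0]
b, alphas = lssvm_train(X, y, gamma=10.0, width=1.0)
print(lssvm_predict(X, alphas, b, 1.0, (0.1, 0.0)) > 0)  # True: accepted as genuine
```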
7. Evaluation protocol

7.1. Experiments

Each signer is modelled by an LS-SVM, which is trained with 5 and with 10 genuine samples so as to compare the performance of the model against the number of training samples. These samples were chosen randomly. Random forgeries (genuine samples from other signers) were used as negative samples, in a similar way to that outlined in [41]; in our case we take one genuine sample from each of the other users of the database (74 for the MCYT Corpus and 99 for the GPDS Corpus). Keeping in mind the limited number of training samples, leave-one-out cross-validation (LOOCV) was used to determine the parameters (γ, C) of the SVM classifier with RBF kernel. For testing, both random and skilled forgeries were taken into account. For random forgeries, we select one genuine sample from each of the other users of the database (different from the one used for training). For skilled forgeries, all available forgeries were used: 15 for the MCYT Corpus and 24 for the GPDS Corpus. The training and testing procedure was repeated 10 times with different training and testing subsets in order to obtain reliable results.

Two classical types of error were considered: Type I error or false rejection rate (FRR), which occurs when an authentic signature is rejected, and Type II error or false acceptance rate (FAR), which occurs when a forgery is accepted. Finally, the equal error rate (EER) was calculated, keeping in mind that the classes are unbalanced. To calculate FAR and FRR we need to define a threshold. As the LS-SVM has been trained with a target value of +1 for genuine signatures and −1 for forgeries, we have chosen an a priori constant threshold equal to 0 for all signers, i.e. if the LS-SVM returns a value greater than or equal to 0, the signature is accepted as genuine; if it returns a value less than 0, the signature is considered a forgery and consequently rejected.

Table 2
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2}. Tested with random forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.75      26.40     3.81      0.74      9.22
             GPDS-100    0.36      26.64     4.59      0.40      8.40
10 samples   MCYT        1.52      15.23     2.38      0.82      9.32
             GPDS-100    0.73      14.29     2.41      0.50      6.52
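The fixed-threshold decision rule and the two error rates described above can be sketched as follows; the score lists are invented purely for illustration:

```python
# Sketch of the a priori threshold-0 decision and the FAR/FRR error rates.

def decide(score, threshold=0.0):
    """Accept as genuine when the LS-SVM output is >= the a priori threshold."""
    return score >= threshold

def far_frr(genuine_scores, forgery_scores, threshold=0.0):
    """FAR: fraction of forgeries accepted; FRR: fraction of genuines rejected."""
    far = sum(decide(s, threshold) for s in forgery_scores) / len(forgery_scores)
    frr = sum(not decide(s, threshold) for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.8, 0.3, -0.1, 0.6]    # one genuine sample wrongly rejected
forgery = [-0.9, -0.4, 0.2, -0.7]  # one forgery wrongly accepted
far, frr = far_frr(genuine, forgery)
print(far, frr)                     # 0.25 0.25
```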
Table 3
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2}. Tested with skilled forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        5.00      24.56     12.82     3.79      9.70
             GPDS-100    6.17      22.49     13.38     3.90      8.29
10 samples   MCYT        9.84      13.20     10.68     3.70      8.65
             GPDS-100    10.05     11.36     10.53     3.76      5.77
Table 4
Results using GLCM based features. Tested with random forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        3.12      32.76     6.65      2.19      7.49
             GPDS-100    0.46      37.39     6.40      0.45      6.12
10 samples   MCYT        5.68      21.39     6.68      1.73      8.48
             GPDS-100    1.19      26.34     4.31      0.74      7.11
Table 5
Results using GLCM based features. Tested with skilled forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        6.49      30.93     16.27     4.16      8.78
             GPDS-100    2.91      35.07     17.12     2.34      7.29
10 samples   MCYT        9.72      21.47     12.65     3.45      8.27
             GPDS-100    4.92      24.61     12.18     2.54      7.24
7.2. Results

Experiments were carried out using different values of the LBP_{P,R}^{riu2} parameters R and P. First, values were set to R = 1 and P = 8; then to R = 2 and P = 16; finally, a feature-level combination of both pairs was used. Table 1 shows the results obtained using 5 genuine samples for training and evaluating with skilled forgeries. As can be seen, the best results were obtained using the combination LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2}. This makes sense, because the new feature vector of length 10 + 18 = 28 includes information on both the first and the second pixel rings around the central pixel. Tables 2 and 3 show more detailed results obtained with LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2}. Tables 4 and 5 present results for the GLCM characterisation.
Table 1
Results using LBP_{P,R}^{riu2}. Trained with 5 samples and tested with skilled forgeries.

LBP_{P,R}^{riu2} parameters      Data set    FAR (%)   FRR (%)   EER (%)
R = 1, P = 8                     MCYT        3.35      30.72     14.30
R = 2, P = 16                    MCYT        3.17      28.37     13.25
{R = 1, P = 8} + {R = 2, P = 16} MCYT        5.00      24.56     12.82
R = 1, P = 8                     GPDS-100    3.90      32.51     16.54
R = 2, P = 16                    GPDS-100    4.24      30.07     15.66
{R = 1, P = 8} + {R = 2, P = 16} GPDS-100    6.17      22.49     13.38
Table 6
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM. Tested with random forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.86      24.21     3.64      0.76      9.77
             GPDS-100    0.27      21.87     3.75      0.34      9.62
10 samples   MCYT        1.53      12.00     2.20      0.83      8.16
             GPDS-100    0.55      10.35     1.76      0.43      5.83
In order to study the system performance when using a combination of LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} and GLCM parameters, a feature-level fusion was carried out to obtain a feature vector of dimension 10 + 18 + 8 = 36. Tables 6 and 7 present the results obtained for random and skilled forgeries, respectively. It is easy to see that the EER decreases when combining the different grey level based features. So it seems that the LBP_{P,R}^{riu2} and GLCM texture measures are uncorrelated, which is logical because each texture measure is based on a different principle: LBP_{P,R}^{riu2} is based on thresholding and GLCM on joint statistics. As stated above, the quality of the texture based parameters is not solely due to the discriminative ability of the texture to identify writers. The texture parameters, as defined, also include shape information when including pixels in the stroke border. To
Table 7
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM. Tested with skilled forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        4.53      23.25     12.02     3.55      9.26
             GPDS-100    5.13      20.82     12.06     3.43      8.44
10 samples   MCYT        7.53      12.61     8.80      3.96      9.66
             GPDS-100    8.64      9.66      9.02      3.52      6.52

Table 11
Using contour-hinge parameters proposed in [45].

Algorithm                                                                      EER (%)
Reported in [45] with MCYT Corpus                                              10.18
Implemented here with a posteriori score normalization proposed in [45]
  with MCYT Corpus                                                             10.32
Implemented here without score normalization with MCYT Corpus                  14.81
Implemented here without score normalization using GPDS Corpus                 15.17
Table 8
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM with B/W signatures. Tested with random forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.57      26.44     3.65      0.58      9.42
             GPDS-100    0.34      23.44     4.06      0.34      8.96
10 samples   MCYT        1.25      13.79     2.04      0.70      9.32
             GPDS-100    0.68      11.29     1.99      0.46      6.21
Table 12
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + contour-hinge. Tested with random forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.01      26.47     3.16      0.02      7.59
             GPDS-100    0.01      22.99     3.71      0.01      6.17
10 samples   MCYT        0.04      8.45      0.57      0.03      6.28
             GPDS-100    0.02      7.76      0.98      0.03      4.17
Table 9
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM with B/W signatures. Tested with skilled forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        3.93      26.20     12.84     3.35      8.91
             GPDS-100    4.68      23.88     13.17     3.27      8.91
10 samples   MCYT        7.86      13.92     9.37      2.87      8.65
             GPDS-100    9.00      11.03     9.75      3.75      6.46
Table 10
Comparison of proposed approach with other published methods.

Approach                          EER (%)         Value scale
[42]                              25.10           Grey
[43]                              22.40/20.00^a   B/W
[44]                              15.00           B/W
[10]                              11.00/9.28^a    B/W
[45]                              10.18/6.44^a    B/W
Approach proposed (LBP + GLCM)    12.02/8.80^a    Grey

^a 5/10 genuine samples used for training.

Table 13
Results using GLCM + contour-hinge. Tested with random forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.02      27.75     3.32      0.03      7.58
             GPDS-100    0.00      23.29     3.75      0.01      5.44
10 samples   MCYT        0.05      9.12      0.62      0.06      6.81
             GPDS-100    0.02      8.42      1.06      0.04      4.02

Table 14
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM + contour-hinge. Tested with random forgeries.
verify this hypothesis, we converted the signatures to black and white and worked out the LBP and GLCM matrices. The results are given in Tables 8 and 9. They are only slightly worse than those in Tables 6 and 7, which confirms that the texture features contain shape information and that the grey level data provide some additional information about the writer.

A comparison of the performance of different signature verification systems is a difficult task, since each author constructs his or her own signature data sets. The lack of a standard international signature database continues to be a major problem for performance comparison. For the sake of completeness, Table 10 presents some results obtained by published studies that used the MCYT database. Although a direct comparison of the results is not possible, since the training and testing methodologies and the classification strategies used by each author differ, Table 10 enables one to view the results of the proposed methodology alongside results published by other authors.

The next step in analysing grey scale based features is to combine them with geometry based features. It is expected that the two types of features will be uncorrelated and that their combination will improve the automatic handwritten signature
Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.03      20.17     2.43      0.03      6.73
             GPDS-100    0.01      18.26     2.95      0.02      5.52
10 samples   MCYT        0.15      5.07      0.47      0.15      4.93
             GPDS-100    0.06      5.56      0.74      0.06      3.05
verification (AHSV) scheme. For geometry based features we have used the contour-hinge algorithm proposed in [46] and used in [45] for AHSV with the MCYT Corpus. Table 11 shows the results obtained by [45] using the contour-hinge algorithm, together with those of our own implementation of it. To compare the results, it must be taken into account that in our work we have not used score normalization, i.e. the threshold is 0 for all users. On the other hand, [45] uses a user-dependent a posteriori score normalization; that is to say, their EER indicates the level of performance with an ideal score alignment between users. The score normalization used by [45] is as follows: s' = s − s_l, where s is the raw similarity score computed by the signature matcher, s' is the normalized similarity score, and s_l is the user-dependent decision threshold at the EER obtained from a set of genuine and impostor scores for user l. So, for a fair comparison, we give our results both with the score normalization of [45] and without score normalization. As can be seen, the contour-hinge parameters work slightly better with the MCYT than with the GPDS Corpus.

Tables 12–17 present results that confirm how features based on grey level information can be combined with features based on binary images to improve overall system performance. These tables report the feature-level combination of LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM + contour-hinge features. Again, 5 and 10 genuine samples, respectively, were used in the training set as positive samples, and random forgeries (genuine samples from
Table 15
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + contour-hinge. Tested with skilled forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        2.21      26.43     11.90     1.66      8.27
             GPDS-100    4.71      24.36     13.40     3.16      7.05
10 samples   MCYT        6.54      8.69      7.08      2.28      6.38
             GPDS-100    13.67     8.08      11.61     3.86      4.24
Acknowledgments

This work has been funded by the Spanish Government MCINN research project TEC2009-14123-C04; F. Vargas is supported by the high-level scholarships programme, Programme AlBan No. E05D049748CO.
Table 16
Results using GLCM + contour-hinge. Tested with skilled forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        1.64      33.31     14.31     1.29      7.68
             GPDS-100    4.40      28.64     15.11     3.04      6.91
10 samples   MCYT        5.51      13.31     7.46      2.30      8.25
             GPDS-100    11.90     11.52     11.76     4.42      5.72
Table 17
Results using LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM + contour-hinge. Tested with skilled forgeries.

Training     Data set    FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        2.71      24.13     11.28     1.62      7.83
             GPDS-100    4.79      23.09     12.88     2.74      6.68
10 samples   MCYT        6.77      8.59      7.23      2.45      6.87
             GPDS-100    13.13     7.46      11.04     3.86      3.91
other signers in the database) as negative samples. We should note that in this case the results with the GPDS Corpus are worse than those for the MCYT Corpus because of the contour-hinge performance.
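The user-dependent a posteriori score normalisation of [45] discussed above (s' = s − s_l, with s_l the threshold at the user's EER) can be sketched as follows; the score values and helper names are invented for illustration:

```python
# Sketch of a posteriori user-dependent score normalisation: find the
# user's EER threshold s_l, then shift scores so the global threshold is 0.

def eer_threshold(genuine, impostor):
    """Pick s_l where FAR and FRR are closest, scanning candidate thresholds."""
    best_t, best_gap = None, float("inf")
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    return best_t

genuine = [0.9, 0.7, 0.6, 0.4]     # matcher scores for user l's genuine samples
impostor = [0.5, 0.3, 0.2, 0.1]    # impostor scores against user l
s_l = eer_threshold(genuine, impostor)
normalised = [s - s_l for s in genuine]   # accept when s - s_l >= 0
print(s_l)                                 # 0.5 for these toy scores
```

This is exactly why the comparison in Table 11 is made both with and without the normalisation: s_l is estimated a posteriori from the test scores themselves, so it represents an ideal per-user alignment.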
8. Conclusions

A new off-line signature verification methodology based on grey level information has been described. The performance of the system is presented with reference to two experimental signature databases containing samples from 75 and 100 individuals, including skilled forgeries. The experimental results for skilled forgeries (Tables 3 and 5) show that grey level information achieves reasonable system performance for the MCYT Corpus: EER = 12.82% when LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} features are used and EER = 16.27% with GLCM features. Overall system performance improves when a feature-level fusion of LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM features is implemented. These latter results compare well with the current state-of-the-art (Table 10). A combination of the proposed LBP_{8,1}^{riu2} + LBP_{16,2}^{riu2} + GLCM approach and the contour based approach proposed in [45] leads to further performance improvement, especially in the case of random forgeries. We suggest that score-level and decision-level fusion should be studied in the future.

Additionally, a simple and computationally cheap segmentation algorithm based on posterisation has been proposed. Although a procedure to reduce the effect of ink type was presented, more effort is needed in that direction to improve this stage of our system. Nevertheless, comparing the similar results obtained with the MCYT and GPDS databases using the grey level based features, the proposed features appear to display some invariance to pen type, because the MCYT Corpus was collected with the same pen and the GPDS Corpus with different pens.
References

[1] K. Bowyer, V. Govindaraju, N. Ratha, Introduction to the special issue on recent advances in biometric systems, IEEE Transactions on Systems, Man and Cybernetics—B 37 (5) (2007) 1091–1095.
[2] D. Zhang, J. Campbell, D. Maltoni, R. Bolle, Special issue on biometric systems, IEEE Transactions on Systems, Man and Cybernetics—C 35 (3) (2005) 273–275.
[3] S. Prabhakar, J. Kittler, D. Maltoni, L. O'Gorman, T. Tan, Introduction to the special issue on biometrics: progress and directions, PAMI 29 (4) (2007) 513–516.
[4] S. Liu, M. Silverman, A practical guide to biometric security technology, IEEE IT Professional 3 (1) (2001) 27–32.
[5] R. Plamondon, S. Srihari, On-line and off-line handwriting recognition: a comprehensive survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (1) (2000) 63–84.
[6] K. Franke, J.R. del Solar, M. Köppen, Soft-biometrics: soft computing for biometric-applications, Tech. Rep. IPK, 2003.
[7] S. Impedovo, G. Pirlo, Verification of handwritten signatures: an overview, in: ICIAP '07: Proceedings of the 14th International Conference on Image Analysis and Processing, IEEE Computer Society, Washington, DC, USA, 2007, pp. 191–196, doi:10.1109/ICIAP.2007.131.
[8] R. Plamondon, Progress in Automatic Signature Verification, World Scientific Publications, 1994.
[9] M. Fairhurst, New perspectives in automatic signature verification, Tech. Rep. 1, Information Security Technical Report, 1998.
[10] J. Fierrez-Aguilar, N. Alonso-Hermira, G. Moreno-Marquez, J. Ortega-Garcia, An off-line signature verification system based on fusion of local and global information, in: Workshop on Biometric Authentication, Springer LNCS-3087, 2004, pp. 298–306.
[11] Y. Kato, M. Yasuhara, Recovery of drawing order from single-stroke handwriting images, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (9) (2000).
[12] S. Lee, J. Pan, Offline tracking and representation of signatures, IEEE Transactions on Systems, Man and Cybernetics 22 (4) (1992) 755–771.
[13] N. Herbst, C. Liu, Automatic signature verification based on accelerometry, Tech. Rep., IBM Journal of Research and Development, 1977.
[14] C. Sansone, M. Vento, Signature verification: increasing performance by a multistage system, Pattern Analysis & Applications 3 (2000) 169–181.
[15] H. Cardot, M. Revenu, B. Victorri, M. Revillet, A static signature verification system based on a cooperative neural network architecture, International Journal on Pattern Recognition and Artificial Intelligence 8 (3) (1994) 679–692.
[16] K. Franke, O. Bünnemeyer, T. Sy, Ink texture analysis for writer identification, in: IWFHR '02: Proceedings of the Eighth International Workshop on Frontiers in Handwriting Recognition, IEEE Computer Society, Washington, DC, USA, 2002, p. 268.
[17] K. Franke, S. Rose, Ink-deposition model: the relation of writing and ink deposition processes, in: IWFHR '04: Proceedings of the Ninth International Workshop on Frontiers in Handwriting Recognition, IEEE Computer Society, Washington, DC, USA, 2004, pp. 173–178, doi:10.1109/IWFHR.2004.59.
[18] Y. Qiao, M. Yasuhara, Recovering dynamic information from static handwritten images, in: Frontiers on Handwritten Recognition 04, 2004, pp. 118–123.
[19] A. El-Baati, A.M. Alimi, M. Charfi, A. Ennaji, Recovery of temporal information from off-line Arabic handwritten, in: AICCSA '05: Proceedings of the ACS/IEEE 2005 International Conference on Computer Systems and Applications, IEEE Computer Society, Washington, DC, USA, 2005, pp. 127–vii.
[20] R. Plamondon, W. Guerfali, The 2/3 power law: when and why? Acta Psychologica 100 (1998) 85–96.
[21] M. Ammar, Y. Yoshida, T. Fukumura, A new effective approach for automatic off-line verification of signatures by using pressure features, in: Proceedings 8th International Conference on Pattern Recognition, 1986, pp. 566–569.
[22] D. Doermann, A. Rosenfeld, Recovery of temporal information from static images of handwriting, International Journal of Computer Vision 15 (1–2) (1995) 143–164.
[23] J. Guo, D. Doermann, A. Rosenfeld, Forgery detection by local correspondence, International Journal of Pattern Recognition and Artificial Intelligence 15 (4) (2001) 579–641.
[24] L. Oliveira, E. Justino, C. Freitas, R. Sabourin, The graphology applied to signature verification, in: 12th Conference of the International Graphonomics Society, 2005, pp. 286–290.
[25] M. Ferrer, J. Alonso, C. Travieso, Offline geometric parameters for automatic signature verification using fixed-point arithmetic, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (6) (2005) 993–997.
[26] K. Huang, H. Yan, Off-line signature verification based on geometric feature extraction and neural network classification, Pattern Recognition 30 (1) (1997) 9–17.
[27] H. Lv, W. Wang, C. Wang, Q. Zhuo, Off-line Chinese signature verification based on support vector machine, Pattern Recognition Letters 26 (2005) 2390–2399.
[28] A. Mitra, P. Kumar, C. Ardil, Automatic authentification of handwritten documents via low density pixel measurements, International Journal of Computational Intelligence 2 (4) (2005) 219–223.
[29] J. Vargas, M. Ferrer, C. Travieso, J. Alonso, Off-line signature verification based on high pressure polar distribution, in: ICFHR08, Montreal, 2008.
[30] K. Franke, Stroke-morphology analysis using super-imposed writing movements, in: IWCF, 2008, pp. 204–217.
[31] R.W. Conners, C.A. Harlow, A theoretical comparison of texture algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence 2 (3) (1980) 204–222.
[32] R.M. Haralick, Statistical and structural approaches to texture, Proceedings of the IEEE 67 (5) (1979) 786–804.
[33] D. He, L. Wang, J. Guibert, Texture feature extraction, Pattern Recognition Letters 6 (4) (1987) 269–273.
[34] M. Trivedi, C. Harlow, R. Conners, S. Goh, Object detection based on gray level cooccurrence, Computer Vision, Graphics and Image Processing 28 (3) (1984) 199–219.
[35] S. Marcel, Y. Rodriguez, G. Heusch, On the recent use of local binary patterns for face authentication, International Journal on Image and Video Processing, Special Issue on Facial Image Processing, IDIAP-RR 06-34, 2007.
[36] S. Nikam, S. Agarwal, Texture and wavelet-based spoof fingerprint detection for fingerprint biometric systems, in: ICETET '08: Proceedings of the 2008 First International Conference on Emerging Trends in Engineering and Technology, IEEE Computer Society, Washington, DC, USA, 2008, pp. 675–680, doi:10.1109/ICETET.2008.134.
[37] T. Mäenpää, The local binary pattern approach to texture analysis—extensions and applications, Ph.D. thesis, Oulu University, Acta Univ. Oulu C 187, 2003, http://herkules.oulu.fi/isbn9514270762/.
[38] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7) (2002) 971–987.
[39] A.J. Mansfield, J.L. Wayman, Best Practices in Testing and Reporting Performance of Biometric Devices, Version 2.01, National Physical Laboratory, NPL Report CMSC 14/02, August 2002.
[40] J.A.K. Suykens, T.V. Gestel, J.D. Brabanter, B.D. Moor, J. Vandewalle, Least Squares Support Vector Machines, World Scientific Publishing Co. Pte. Ltd., 2002.
[41] D. Bertolini, L. Oliveira, E. Justino, R. Sabourin, Reducing forgeries in writer-independent off-line signature verification through ensemble of classifiers, Pattern Recognition 43 (1) (2009) 387–396.
[42] I. Güler, M. Meghdadi, A different approach to off-line handwritten signature verification using the optimal dynamic time warping algorithm, Digital Signal Processing 18 (6) (2008) 940–950.
[43] F. Alonso-Fernandez, M.C. Fairhurst, J. Fierrez, J. Ortega-Garcia, Automatic measures for predicting performance in off-line signature, in: IEEE Proceedings of the International Conference on Image Processing, ICIP, vol. 1, 2007, pp. 369–372.
[44] J. Wen, B. Fang, Y. Tang, T. Zhang, Model-based signature verification with rotation invariant features, Pattern Recognition 42 (7) (2009) 1458–1466.
[45] A. Gilperez, F. Alonso-Fernandez, S. Pecharroman, J. Fierrez, J. Ortega-Garcia, Off-line signature verification using contour features, in: Proceedings of the International Conference on Frontiers in Handwriting Recognition, ICFHR, 2008.
[46] M. Bulacu, Statistical pattern recognition for automatic writer identification and verification, Ph.D. thesis, Artificial Intelligence Institute, University of Groningen, The Netherlands, March 2007, http://www.ai.rug.nl/bulacu/.
Jesus F. Vargas was born in Colombia in 1978. He received his B.Sc. degree in Electronic Engineering in 2001 and M.Sc. degree in Industrial Automation in 2003, both from Universidad Nacional de Colombia. Since 2004 he has been an Assistant Professor at Universidad de Antioquia, Colombia. He is currently a Ph.D. student at the Technological Centre for Innovation in Communications (CeTIC), Universidad de Las Palmas de Gran Canaria, Spain. His research deals with off-line signature verification.
Carlos M. Travieso-Gonzalez received his M.Sc. degree in Telecommunication Engineering in 1997 from the Polytechnic University of Catalonia (UPC), Spain, and his Ph.D. degree in 2002 from ULPGC, Spain. He has been an Associate Professor at ULPGC since 2001, teaching subjects on signal processing. His research lines are biometrics, classification systems, environmental intelligence, and data mining. He is a reviewer for international journals and conferences, and a member of the IASTED Technical Committee on Image Processing.
Jesus B. Alonso received his M.Sc. degree in Telecommunication Engineering in 2001 and his Ph.D. degree in 2006, both from the Department of Computers and Systems at Universidad de Las Palmas de Gran Canaria (ULPGC), Spain. He has been an Associate Professor at ULPGC since 2002. His interests include signal processing in biocomputing, nonlinear signal processing, recognition systems, and data mining.
Miguel A. Ferrer was born in Spain in 1965. He received his M.Sc. degree in Telecommunications in 1988 and his Ph.D. in 1994, both from the Universidad Politécnica de Madrid, Spain. He is an Associate Professor at Universidad de Las Palmas de Gran Canaria, where he has taught since 1990 and heads the Digital Signal Processing Group. His research interests lie in the fields of biometrics and audio quality evaluation. He is a member of the IEEE Carnahan Conference on Security Technology Advisory Committee.