Mathematical Methods and Systems in Science and Engineering

Intensity Based Distinctive Feature Extraction and Matching Using Scale Invariant Feature Transform for Indian Sign Language

SANDEEP B. PATIL
Department of ETC, Faculty of Engineering and Technology of Shri Shankaracharya Technical Campus, Bhilai, INDIA
[email protected]

G. R. SINHA
Department of ETC, Faculty of Engineering and Technology of Shri Shankaracharya Technical Campus, Bhilai, INDIA
[email protected]

ABSTRACT: Low awareness of the deaf and hard of hearing community in India widens the communication gap between that community and the hearing population. Sign language was developed so that deaf and hard of hearing people can convey messages by generating distinct sign patterns. The scale invariant feature transform (SIFT) has been used to perform reliable matching between different images of the same object. This paper implements the various phases of SIFT to extract distinctive features from Indian Sign Language (ISL) gestures, and performs intensity based feature extraction and matching using SIFT. The first part of the experimental results reports the time taken by each phase and the number of features extracted for the 26 ISL gestures. The second part reports the total number of key points matched as the intensity of the image is varied from low to high. A matching percentage of approximately 98% is achieved.

Keywords: Indian Sign Language, hand gesture, scale invariant features, feature matching.

1 Introduction

In recent years, there have been numerous research contributions to Indian Sign Language (ISL) biometrics, which helps identify persons based on their traits or characteristics. The area is challenging because no standard database is available for ISL biometrics. The significance of the problem is easily illustrated by natural gestures used together with verbal and nonverbal communication. Hand gestures in support of verbal communication are very useful for people who have no visual contact. Several approaches for recognizing hand gestures are available in the literature; a few of them require the subject to wear marked gloves or attach extra hardware to the body. Such approaches are less suitable for real world applications, whereas vision based approaches are non-intrusive and hence more practical.


Indian Sign Language (ISL) recognition is an attempt in this direction that deals with recognizing various static and symbolic characters generated by hand gestures. The biometric system uses ISL gestures as human traits and recognizes the corresponding characters, which is very useful for deaf people. Many professionals estimate that the deaf population in India is approximately 3 million and the hard of hearing population about 10 million. Around 100 million people are associated and involved with these 13 million people, caring for them and helping them communicate: family members, social workers, audiologists, professionals, teachers and others. Any solution developed to help deaf people be understood would therefore be a significant contribution to society.

2 Related Research


Numerous research works on ISL biometrics are being carried out, and several researchers have already contributed to this area. A survey of sign language biometrics, including ISL biometrics, is reported below.

Bhuyan et al. (2011) proposed a novel approach to hand pose recognition that analyzes the textures and key geometrical features of the hand. A skeletal hand model was constructed to analyze the abduction/adduction movements of the fingers, and texture analysis was subsequently performed to account for some inflexive finger movements. The feature extraction is complex because of the abduction/adduction angles of the fingers and the internal variations of particular gestures. Ghotkar et al. (2012) introduced a hand gesture recognition system for the alphabets of ISL. The system consists of four modules: real time hand tracking, hand segmentation, feature extraction and gesture recognition; a genetic algorithm was used for gesture recognition. The computational time is high, since every input must pass through the four modules. Nandy et al. (2010) suggested a novel approach to recognizing ISL gestures for humanoid robot interaction (HRI); JAVA based software was developed to handle the entire HRI process. The system is limited in real time applications, where differences in speech and hearing raise several issues. Rajam et al. (2011) developed a sign language biometric system for one of the south Indian languages. The system describes a set of 32 signs, each representing the binary 'UP' and 'DOWN' positions of the five fingers of the right palm, and was applied to a single user in both the training and testing phases. It is applicable to Tamil text only and is limited to the five fingers of a single hand against the same background. Yang Quan (2011) introduced a recognition method for sign language using a vision based multi-feature classifier, which can exclude skin-like objects and track the moving hand more precisely in a sign language video sequence. The system first builds a dynamic appearance model of the sign language and then uses SVM classification for recognition. Experiments over 30 groups of Chinese manual alphabet images showed that this appearance modeling method is simple, efficient and effective for characterizing hand gestures; with a linear kernel the best recognition rate, 99.7%, was achieved for the letter 'F'. The system has limitations in real time with dynamic hand gestures. Arulkarthick et al. (2012) presented a real-time system for sign language recognition using hand gestures. After detecting hand postures, gestures are recognized from features of the hand extracted using the Haar transform. The K-means clustering algorithm was used to reduce the number of extracted features, which reduced the computational complexity. The method was limited to a small database; for a large database the performance of the AdaBoost algorithm is not satisfactory. Alon et al. (2009) introduced a unified framework that simultaneously performs spatial segmentation, temporal segmentation and recognition, carrying both bottom-up and top-down information. A gesture can be recognized even when the hand location is highly ambiguous and the start and end of the gesture are unknown. The approach was evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, and retrieval of occurrences of signs of interest from a video database containing continuous, unsegmented American Sign Language (ASL) signing. The system is limited to five ASL signs, and the algorithm passes through three major components (feature extraction, spatio-temporal matching and inter-model competition) that require large processing time. Chou et al. (2012) developed a novel algorithm for hand gesture image detection and recognition. The detection process rotates a skewed gesture to the upright position and deletes the elbow and forearm parts from the captured pictures; the recognition process includes two phases, (a) model construction and (b) sign language identification. The algorithm gives 94% accuracy for hand gesture recognition, but accuracy is adversely affected if the hand gesture is skewed or the image portion from the wrist to the arm is incorrect. Jothilakshmi et al. (2012) used acoustic features to develop a two level language identification system for Indian languages: the system first identifies the family of the spoken language and then identifies the particular language within that family, using hidden Markov models (HMM), Gaussian mixture models (GMM) and artificial neural networks (ANN). The system could not achieve good accuracy. Kadam et al. (2012) studied the effective use of a glove for implementing an interactive sign language teaching program. A sign language glove is very useful as a communication aid for the deaf, and a translator was suggested for those who want to learn sign language. Flex sensors mounted on the fingers of the glove change resistance according to finger position, which is difficult for the deaf to interpret. Nguyen et al. (2012) described facial expression features that are tracked to effectively capture temporal visual cues on the signer's face during signing, using probabilistic principal component analysis (PPCA). A test was conducted to recognize six isolated facial expressions representing grammatical markers in American Sign Language (ASL). The recognition accuracy reported for ASL facial expressions was 91.76% in person dependent tests and 87.71% in person independent tests. The proposed system is not fully automatic, and some subjects tilt or rotate their heads while thrusting the head forward, contributing to the lower accuracy for these two classes. Paulraj et al. (2011) proposed a sign language recognition system to help the hearing impaired communicate more fluently with hearing people. The system employs skin color segmentation and neural network training: a segmentation process separates the right and left hand regions from the image frame, and a simple vertical interleaving method is used in the preprocessing stage to reduce the image size. The system requires about 3218 epochs on average to train the network model, which makes training long; moreover, confusion among the first, fourth, eighth and ninth signs greatly reduced the accuracy. Rousan et al. (2009) introduced the first automatic Arabic sign language (ArSL) recognition system based on hidden Markov models (HMMs). The system operates in offline, online, signer-dependent and signer-independent modes, and experiments demonstrated high recognition rates in all modes. It does not rely on data gloves or other input devices, allowing deaf signers to gesture freely and naturally, but it is limited to Arabic sign language only. Samir et al. (2012) developed a post-processing module based on natural language processing rules that is fast, easy and computationally simple to implement; the system was more accurate, especially for domain-specific sign language recognition.

There have been research contributions to sign language biometrics, but very limited work has been done on ISL biometrics. The ISL gestures used in the literature are not common, and no specific database is available for ISL. Methods developed for other languages do not perform well for ISL because they are biased toward languages written differently from ISL. Gloves used in many methods lead to poor identification of the fingers and their separation.

3 Database collections

The Indian sign language alphabet is shown in figure 1. Based on these signs, hand gesture databases were collected from 10 different persons; the database consists of 26 images for each class. Figure 2 shows the database of one of the classes. Each image was taken with a 14 megapixel digital camera and stored at a resolution of 284 x 215 pixels.

Fig. 1. Indian sign language

Fig. 2. 26 ISL gestures from class 1.

4 Processing and SIFT feature extraction

The scale invariant feature transform (SIFT) algorithm was first introduced by David Lowe in 1999 to perform reliable matching between different images of the same object. The algorithm is widely used for feature extraction because the features are stable over translation, rotation and scaling, and to some extent invariant to changes in illumination and camera viewpoint. Localization is achieved very well in both the spatial and frequency domains, which reduces the probability of disruption by occlusion, clutter or noise. The proposed methodology uses SIFT to extract the features of Indian Sign Language gestures; the flow chart of the proposed methodology is shown in figure 3.
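Before walking through the individual phases, the following Python sketch (illustrative only, not the authors' implementation) shows the end-to-end feature extraction step using OpenCV's built-in SIFT; the file name gesture_A.jpg is a hypothetical placeholder for one database image.

```python
# Minimal sketch: extract SIFT key points and descriptors from one ISL
# gesture image with OpenCV (cv2.SIFT_create requires OpenCV >= 4.4).
# "gesture_A.jpg" is a hypothetical placeholder file name.
import cv2

img = cv2.imread("gesture_A.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(f"{len(keypoints)} key points, descriptor shape {descriptors.shape}")
# Each descriptor row has 128 elements (4x4 histograms x 8 orientation bins).
```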


Fig. 3. Various phases of the SIFT algorithm with intensity variation.

4.1 Scale space extrema detection

This phase identifies points that are stable across different views of the same image under rotation, translation and scaling. Figure 4 shows the internal stages of the extrema detection phase. The algorithm computes the 'scale', the 'difference of Gaussian' and the 'extrema' over several 'octaves'. The difference of Gaussian image D(x, y, σ) is given by

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)    (1)

where L(x, y, kσ) is the convolution of the original image I(x, y) with the Gaussian blur G(x, y, kσ) at scale kσ, i.e.

L(x, y, σ) = G(x, y, σ) * I(x, y)    (2)

where * is the convolution operator, G(x, y, σ) is a variable-scale Gaussian and I(x, y) is the input image, so that

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)    (3)

Fig. 4. Flow chart for generating the DOG over several octaves.

Once the difference of Gaussian (DOG) images have been obtained, key points are identified as local minima/maxima of the DOG images across scales. This is done by comparing each pixel in a DOG image to its eight neighbors at the same scale and the nine corresponding neighbors in each of the two adjacent scales. If the pixel value is the maximum or minimum among all compared pixels, it is selected as a candidate key point. To compute a difference of Gaussian we only need two corresponding points, and to check whether a point is an extremum we only need the 26 difference of Gaussian points spread over the three scales around it. Figure 5 shows the scale and DOG images for the first octave; the DOGs for the second and third octaves are calculated in the same way.


Fig. 5. Scale and DOG for first octave.
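A minimal sketch of the scale space and DOG construction for one octave follows; the values of σ, k and the number of scales are illustrative assumptions, not the paper's reported settings.

```python
# Sketch of scale-space and difference-of-Gaussian (DOG) construction for a
# single octave, following Section 4.1. Parameter values are illustrative.
import cv2
import numpy as np

def dog_octave(img, sigma=1.6, k=np.sqrt(2), num_scales=5):
    img = img.astype(np.float32)          # keep signed DOG values
    # L(x, y, sigma) = G(x, y, sigma) * I(x, y): blur at successive scales.
    blurred = [cv2.GaussianBlur(img, (0, 0), sigma * k**i)
               for i in range(num_scales)]
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma), eq. (1).
    return [blurred[i + 1] - blurred[i] for i in range(num_scales - 1)]

# For the next octave, SIFT downsamples the image by 2 and repeats.
```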

4.2 Key point detection

The extrema from the first phase are further examined to determine whether a candidate lies on an edge or is a point of low contrast; such points are generally unstable under image variation and are therefore rejected. To reject low contrast points, each extremum is examined using a method that involves solving a 3x3 system of linear equations, and so it takes constant time. To detect extrema on edges, a 2x2 matrix is generated and a simple computation is performed on it to obtain the ratio of the principal curvatures; this quantity is compared with a threshold value to decide whether the extremum is rejected. A sketch of these tests is given below. The final key points detected for the first two images are shown in figure 6.
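The following sketch illustrates the extremum check and the edge test under Lowe's commonly used thresholds; these are illustrative values, not necessarily the settings used in this paper.

```python
# Sketch of the candidate tests described above, with illustrative thresholds.
import numpy as np

def is_extremum(dogs, s, y, x):
    # Candidate at scale index s (1 <= s <= len(dogs) - 2) is an extremum if
    # it is the min or max of its 3x3x3 neighbourhood: itself plus the 8
    # same-scale neighbours and 9 neighbours in each adjacent DOG image.
    cube = np.stack([d[y-1:y+2, x-1:x+2] for d in dogs[s-1:s+2]])
    centre = dogs[s][y, x]
    return centre == cube.max() or centre == cube.min()

def passes_edge_test(dog, y, x, r=10.0):
    # 2x2 Hessian of D at the candidate; reject if the ratio of principal
    # curvatures, measured by trace^2 / det, exceeds (r + 1)^2 / r.
    dxx = dog[y, x+1] - 2*dog[y, x] + dog[y, x-1]
    dyy = dog[y+1, x] - 2*dog[y, x] + dog[y-1, x]
    dxy = (dog[y+1, x+1] - dog[y+1, x-1]
           - dog[y-1, x+1] + dog[y-1, x-1]) / 4.0
    tr, det = dxx + dyy, dxx*dyy - dxy*dxy
    return det > 0 and tr*tr / det < (r + 1)**2 / r
```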

A (668 key points)        C (751 key points)

Fig. 6. Number of key points for A and C.

Once the key points have been detected, the original image undergoes intensity variation; the key points detected after the intensity variation are then matched against the original key points. Tables 2 and 3 show the number of key points detected after intensity variation and how many of them exactly match the key points of the original image. Figure 7 shows the key point detection after intensity variation: (a) reduced intensity, (b) increased intensity, (c) key points for (a), (d) key points for (b).

4.3 Orientation assignment

This phase assigns an orientation to each key point based on local image gradient directions. To achieve this, the Gaussian smoothed image L(x, y, σ) at the key point scale σ is used, so that all computations are performed in a scale invariant manner. For the sample image L(x, y) at scale σ, the gradient magnitude m(x, y) and orientation θ(x, y) are precomputed using pixel differences:

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]    (4)

θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]    (5)
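In code, equations (4) and (5) amount to the following short sketch; L is assumed to be the Gaussian-smoothed image as a float array, and arctan2 is used as the quadrant-safe form of tan⁻¹.

```python
# Sketch of equations (4) and (5): gradient magnitude and orientation from
# pixel differences on the Gaussian-smoothed image L at the key point scale.
import numpy as np

def gradient_mag_ori(L, y, x):
    dx = L[y, x+1] - L[y, x-1]
    dy = L[y+1, x] - L[y-1, x]
    m = np.sqrt(dx**2 + dy**2)      # eq. (4)
    theta = np.arctan2(dy, dx)      # eq. (5), quadrant-safe form of arctan
    return m, theta

# In Lowe's formulation these values around the key point are accumulated
# into a Gaussian-weighted orientation histogram, whose peak(s) give the
# key point orientation.
```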

Fig. 7. Orientation assignment: (a) magnitude, (b) phase.

Thus the magnitude and orientation at a pixel can be computed in constant time.

4.4 Key point descriptor

In this phase, the algorithm computes a descriptor vector for each identified key point such that the descriptor is highly distinctive and partially invariant to the remaining variations, such as illumination and 3D viewpoint. This step is performed on the image closest in scale to the key point's scale. First, a set of orientation histograms is created on 4x4 pixel neighborhoods with 8 bins each. These histograms are computed from the magnitude and orientation values of the samples in a 16x16 region around the key point, such that each histogram contains the samples of a 4x4 subregion of the original neighborhood region. The magnitudes are further weighted by a Gaussian function with σ equal to one half the width of the descriptor window. The descriptor then becomes the vector of all the values of these histograms: since there are 4x4 = 16 histograms, each with 8 bins, the vector has 128 elements.
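A simplified sketch of this descriptor construction follows; rotation to the key point orientation and trilinear interpolation between bins are omitted for brevity, and the inputs mag16/ori16 are assumed to be precomputed gradient arrays.

```python
# Simplified descriptor sketch: a 16x16 patch of gradient magnitudes and
# orientations around a key point is split into 4x4 cells with 8 orientation
# bins each, giving a 4*4*8 = 128-element vector (Section 4.4).
import numpy as np

def descriptor_128(mag16, ori16):
    # mag16, ori16: 16x16 arrays of gradient magnitude and orientation
    # (radians in [0, 2*pi)) already centred on the key point.
    # Gaussian weighting with sigma = half the descriptor window width (8).
    idx = np.arange(16) - 7.5
    g = np.exp(-(idx[:, None]**2 + idx[None, :]**2) / (2 * 8.0**2))
    w = mag16 * g
    desc = np.zeros((4, 4, 8), dtype=np.float32)
    bins = (ori16 / (2 * np.pi) * 8).astype(int) % 8
    for y in range(16):
        for x in range(16):
            desc[y // 4, x // 4, bins[y, x]] += w[y, x]
    vec = desc.ravel()                               # 128 elements
    return vec / (np.linalg.norm(vec) + 1e-7)        # illumination normalise
```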

5 Performance evaluation

This paper mainly focuses on the time required to run the various phases of the SIFT algorithm, and on the number of key points matched when the intensity of the image is varied from low to high. Table 1 shows the time required to construct the Gaussian scale space and the differential scale space, to find the key points and to compute the descriptors, together with the total number of key points extracted for the 26 ISL gestures. An adequate number of key points was extracted from each ISL gesture so that they can be used for matching. Tables 2 and 3 show the total number of key points matched for the low intensity and high intensity images; the percentage accuracy achieved using this method is approximately 98%.
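As a rough illustration of this experiment (an approximation, not the authors' exact procedure), the sketch below recomputes SIFT features on brightness-scaled copies of a gesture image and matches them against the original features; the gain values 0.7 and 1.3, the ratio threshold and the file name are assumptions.

```python
# Sketch of the intensity-variation experiment behind Tables 2 and 3:
# scale the image intensity, recompute SIFT features, and match them
# against the features of the original image.
import cv2

def match_ratio(img, alpha):
    varied = cv2.convertScaleAbs(img, alpha=alpha)   # intensity change
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img, None)
    kp2, des2 = sift.detectAndCompute(varied, None)
    # Brute-force matching with Lowe's ratio test (0.8 is an assumption).
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.8 * n.distance]
    # Percentage relative to the original image's key points, as in the
    # tables below.
    return 100.0 * len(good) / len(kp1)

img = cv2.imread("gesture_A.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
print(f"low intensity:  {match_ratio(img, 0.7):.2f}% matched")
print(f"high intensity: {match_ratio(img, 1.3):.2f}% matched")
```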

Table 1. Time calculation for various phases of SIFT (times in seconds).

ISLG   GSSC    DSSC    KP      Descriptor   TKPE
A      6.741   0.028   0.490   0.644        668
B      6.699   0.031   0.485   0.780        751
C      6.677   0.029   0.420   0.415        475
D      6.709   0.029   0.509   0.660        669
E      6.698   0.027   0.499   0.567        605
F      6.693   0.028   0.508   0.777        756
G      7.415   0.028   0.397   0.419        485
H      6.752   0.030   0.414   0.471        520
I      6.730   0.027   0.373   0.352        407
J      6.715   0.030   0.288   0.412        457
K      6.759   0.029   0.457   0.500        553
L      6.715   0.028   0.376   0.380        428
M      6.719   0.028   0.458   0.508        544
N      6.746   0.028   0.465   0.578        627
O      6.729   0.027   0.315   0.266        334
P      6.694   0.027   0.475   0.588        630
Q      6.764   0.027   0.488   0.661        677
R      6.725   0.029   0.404   0.458        513
S      6.691   0.029   0.420   0.444        506
T      6.738   0.028   0.462   0.527        566
U      6.712   0.027   0.339   0.290        364
V      6.740   0.029   0.382   0.371        434
W      6.727   0.029   0.360   0.388        440
X      6.700   0.028   0.428   0.575        602
Y      6.743   0.029   0.427   0.400        448
Z      6.763   0.028   0.411   0.544        586

ISLG = Indian Sign Language Gesture, GSSC = Gaussian Scale Space construction, DSSC = Differential Scale Space construction, KP = Key Points, TKPE = Total number of Key Points Extracted.

Figure 8 shows bar charts of the data given in Table 1; panels (a), (b) and (c) show the time taken by the various phases of the SIFT algorithm.

Fig. 8. Bar charts of the time taken by the various phases of the SIFT algorithm: panels (a), (b), (c).

Table 2. Percentage of matched key points for the low intensity (reduced intensity) image. The percentage is taken with respect to the number of key points of the original image (Table 1).

Gesture   Total KP   Matched KP   Matched (%)
A         668        652          97.6048
B         762        744          99.0679
C         482        466          98.1053
D         674        655          97.9073
E         606        595          98.3471
F         758        745          98.5450
G         493        476          98.1443
H         522        512          98.4615
I         413        397          97.5430
J         459        446          97.5930
K         550        541          97.8300
L         439        422          98.5981
M         541        534          98.1618
N         623        606          96.6507
O         341        323          96.7066
P         630        622          98.7302
Q         682        666          98.3752
R         516        497          96.8811
S         507        499          98.6166
T         570        553          97.7032
U         365        352          96.7033
V         435        426          98.1567
W         434        427          97.0455
X         600        589          97.8405
Y         443        434          96.8750
Z         582        561          95.7338

Fig. 9. Bar charts for the low intensity image: (a) number of key points extracted and matched, (b) percentage of matched key points.

Table 3. Percentage of matched key points for the high intensity (increased intensity) image. The percentage is taken with respect to the number of key points of the original image (Table 1).

Gesture   Total KP   Matched KP   Matched (%)
A         675        655          98.0539
B         765        745          99.2011
C         482        467          98.3158
D         676        656          98.0568
E         608        589          97.3554
F         766        747          98.8095
G         496        475          97.9381
H         527        516          99.2308
I         414        395          97.0516
J         466        446          97.5930
K         553        537          97.1067
L         435        419          97.8972
M         544        532          97.7941
N         628        608          96.9697
O         344        322          96.4072
P         640        622          98.7302
Q         691        667          98.5229
R         516        495          96.4912
S         508        498          98.4190
T         572        555          98.0565
U         366        351          96.4286
V         444        424          97.6959
W         450        432          98.1818
X         601        586          97.3422
Y         446        436          97.3214
Z         592        566          96.5870

Figure 10. Bar chart of the total number of key points extracted and matched, and the percentage of matching, for the high intensity image.

6 Conclusions

The SIFT key points developed in this paper are very useful because of their distinctive features. A large number of key points was extracted from the ISL gesture images, and the extracted features are invariant to translation, rotation and scaling, which provides reliable matching even for images affected by noise and blur. The results show the computational time required for the various phases of the SIFT algorithm and the number of key points generated for all 26 Indian Sign Language gestures. The paper also reports results under intensity variation and computes the matching performance based on the number of features extracted; approximately 98% accuracy is achieved for both the low and the high intensity images.


REFERENCES

[1] Arulkarthick V.J, Sangeetha D. and Umamaheswari S, 2012, Sign Language Recognition using K-Means Clustered Haar-Like Features and a Stochastic Context Free Grammar, European Journal of Scientific Research, 78(1):74-84.
[2] Alon J. and Athitsos V, 2009, A Unified Framework for Gesture Recognition and Spatiotemporal Gesture Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(9):1685-1699.
[3] Agarwal Ila, Johar S and Santhosh J, 2011, A Tutor for the Hearing Impaired (Developed Using Automatic Gesture Recognition), International Journal of Computer Science, Engineering and Applications (IJCSEA), 1(4):49-61.
[4] Akmeliawati R and Kuang Y.C, 2007, Real-Time Malaysian Sign Language Translation using Colour Segmentation and Neural Network, IMTC 2007 Instrumentation and Measurement Technology Conference, Warsaw, Poland, 1-3 May 2007, 1(2):1-6.
[5] Ahmed P, Mishra G.S and Sahoo A.K, 2012, A Proposed Framework for Indian Sign Language Recognition, International Journal of Computer Applications, 2(2):158-169.
[6] Agrawal A and Rautaray S.S, 2011, Interaction with Virtual Game through Hand Gesture Recognition, International Conference on Multimedia, Signal Processing and Communication Technologies, 1(7):244-247.
[7] Bhosekar A, Kadam K, Ganu R. and Joshi S.D, 2012, American Sign Language Interpreter, IEEE Fourth International Conference on Technology for Education, 6(12):157-159.
[8] Bhuyan M.K, Kar M.K and Debanga R.N, 2011, Hand Pose Identification from Monocular Image for Sign Language Recognition, IEEE International Conference on Signal and Image Processing Applications (ICSIPA 2011), 3(12):378-383.
[9] Bharti P, Kumar D and Singh S, 2012, Sign Language to Number by Neural Network, International Journal of Computer Applications, 40(10):38-45.
[10] Balakrishnan G and Rajam P.S, 2011, Real Time Indian Sign Language Recognition System to aid Deaf-dumb People, IEEE International Conference on Computing Communication and Networking Technologies (ICCCNT):737-742.
[11] Chiu Y.H, Cheng C.J, Su H.W and Wu C.H, 2007, Joint Optimization of Word Alignment and Epenthesis Generation for Chinese to Taiwanese Sign Synthesis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1):28-39.
[12] Chakraborty P, Mondal S, Nandy A and Prasad J.S, 2010, Recognizing & Interpreting Indian Sign Language Gesture for Human Robot Interaction, International Conference on Computer & Communication Technology (ICCCT'10), 52(11):712-717.
[13] Chen X and Zhou Y, 2010, Adaptive Sign Language Recognition with Exemplar Extraction and MAP/IVFS, IEEE Signal Processing Letters, 17(3):297-300.
[14] Chung H.W, Chiu Y.H and Chi-Shiang Guo, 2004, Text Generation from Taiwanese Sign Language Using a PST-Based Language Model for Augmentative Communication, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 12(4):441-454.
[15] Debevc M, Kosec P and Rotovnik M, 2009, Accessible Multimodal Web Pages with Sign Language Translations for Deaf and Hard of Hearing Users, 20th International Workshop on Database and Expert Systems Application, 3(9):279-283.
[16] Dasgupta T, Basu A and Mitra P, 2009, English to Indian Sign Language Machine Translation System: A Structure Transfer Framework, International Conference on Artificial Intelligence (IICAI):118-124.
[17] Domingo A, Akmeliawati R and Kuang Ye Chow, 2007, Pattern Matching for Automatic Sign Language Translation System using LabVIEW, International Conference on Intelligent and Advanced Systems 2007, 9(7):660-665.
[18] Dharaskar R.V and Futane P.R, 2012, Video Gestures Identification and Recognition Using Fourier Descriptor and General Fuzzy Minmax Neural Network for a Subset of Indian Sign Language, 12th International Conference on Hybrid Intelligent Systems (HIS):525-530.
[19] Dharaskar R.V and Futane P.R, 2011, HASTA MUDRA: An Interpretation of Indian Sign Hand Gestures, 3rd International Conference on Electronics Computer Technology, 1:337-380.
[20] Fernando P, Fernando S, Kumarage D, Madushanka D and Samarasinghe R, 2011, Real-time Sign Language Gesture Recognition Using Still-Image Comparison & Motion Recognition, 6th International Conference on Industrial and Information Systems (ICIIS 2011), Sri Lanka, 4(11):55-62.
[21] Sinha G.R and Patil S.B, 2013, Biometrics: Concept and Application, Wiley India Pvt. Ltd (a subsidiary of John Wiley), 22 March 2013.
[22] Ghotkar A.S, Hadap M, Khatal R and Khupase S, 2012, Hand Gesture Recognition for Indian Sign Language, International Conference on Computer Communication and Informatics (ICCCI-2012), Coimbatore, India, 2(4):9-12.
[23] Geetha M and Manjusha U.C, 2012, A Vision Based Recognition of Indian Sign Language Alphabets and Numerals Using B-Spline Approximation, International Journal on Computer Science and Engineering (IJCSE), 4(3):406-415.
[24] Gaolin F. and Wen G, 2002, A SRN/HMM System for Signer-independent Continuous Sign Language Recognition, Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'02), 5(2):345-350.
[25] Gaolin F, Wen G and Zhao D, 2007, Large-Vocabulary Continuous Sign Language Recognition Based on Transition-Movement Models, IEEE Transactions on Systems, Man, and Cybernetics, 37(1):1-9.
[26] Hee-Deok Y, Park A.Y and Seong W.L, 2007, Gesture Spotting and Recognition for Human-Robot Interaction, IEEE Transactions on Robotics, 23(2):256-270.
[27] Hee-Deok Y, Sclaroff S and Seong W.L, 2009, Sign Language Spotting with a Threshold Model Based on Conditional Random Fields, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(7):1264-1277.
[28] Habili N and Moini A, 2004, Segmentation of the Face and Hands in Sign Language Video Sequences Using Color and Motion Cues, IEEE Transactions on Circuits and Systems for Video Technology, 14(8):1086-1097.
[29] Huang Q, Kong D, Li J, Sun Y, Wang L and Zhang X, 2012, Synthesis of Chinese Sign Language Prosody Based on Head, International Conference on Computer Science and Service System, 1(8):1860-1864.
[30] Ittisam P and Toadithep N, 2010, 3D Animation Editor and Display Sign Language System, Case Study: Thai Sign Language, 2(3):633-637.
[31] Idicula M, Suryapriya A.K and Suman S, 2009, Design and Development of a Frame Based MT System for English-to-ISL, World Congress on Nature & Biologically Inspired Computing (NaBIC-2009):1382-1387.
[32] Jothilakshmi S, Palanivel S. and Ramalingam V, 2012, A Hierarchical Language Identification System for Indian Languages, Digital Signal Processing, 2(2):544-553.
[33] Juan P.W, Stern H and Yael E, 2005, Cluster Labeling and Parameter Estimation for the Automated Setup of a Hand-Gesture Recognition System, IEEE Transactions on Systems, Man, and Cybernetics, 35(6):932-944.
[34] Kishore P.V.V and Rajesh Kumar P, 2012, A Video Based Indian Sign Language Recognition System (INSLR) Using Wavelet Transform and Fuzzy Logic, International Journal of Engineering and Technology, 4(5):537-542.
[35] Kishore P.V.V and Rajesh Kumar P, 2012, Segment, Track, Extract, Recognize and Convert Sign Language Videos to Voice/Text, International Journal of Advanced Computer Science and Applications, 3(6):35-47.
[36] Krishnaveni M and Radha V, 2012, Classifier Fusion Based on Bayes Aggregation Method for Indian Sign Language Datasets, Procedia Engineering, 30(2012):1110-1118.
[37] Khambaty Y, Quintana R and Shadaram M, 2008, Cost Effective Portable System for Sign Language Gesture Recognition, IEEE International Conference on System of Systems Engineering, 2(8):67-72.
[38] Karami A, Sarkaleh A.K and Zanj B, 2011, Persian Sign Language Recognition Using Wavelet Transform and Neural Network, Expert Systems with Applications, 3(8):2661-2667.
[39] Katsamanis A, Maragos P and Theodorakis S, 2009, Product-HMMs for Automatic Sign Language Recognition, IEEE International Conference on Acoustics, Speech and Signal Processing, 5(9):1601-1604.
[40] Lowe D, 2004, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, 60(2):91-110.
[41] Lowe D, 1999, Object Recognition from Local Scale-Invariant Features, Proceedings of the International Conference on Computer Vision, Corfu, September 1999.
[42] Lekhashri B and Pratap A.A, 2011, Use of Motion-Print in Sign Language Recognition, Proceedings of the National Conference on Innovations in Emerging Technology 2011, Kongu Engineering College, Perundurai, Erode, Tamilnadu, India, 17-18 February:99-102.
[43] Maebatake M, Suzuki I, Shingo K and Yasuo H, 2008, Sign Language Recognition Based on Position and Movement Using Multi-Stream HMM, Second International Symposium on Universal Communication, 6(8):478-481.
[44] Miwa Takai, 2012, Measurement of Motion Quantity from Human Movement and Detection of the Sign Language Word, Proceedings of the 2012 International Conference on Advanced Mechatronic Systems, Tokyo, Japan, September 18-21, 2012, 1(2):298-303.
[45] Mehta A.N, Melody M and Starner T, 2010, Recognizing Sign Language from Brain Imaging, IEEE International Conference on Pattern Recognition, 3(10):3842-3845.
[46] Magdy A and Samir A, 2012, Error Detection and Correction Approach for Arabic Sign Language Recognition, 3(12):117-123.
[47] Majumder S and Rekha J, 2011, Indian Sign Language Recognition with Global-Local Hand Configuration, IEEE 13th International Conference on Communication Technology (ICCT):27-33.
[48] Nguyen T.D and Ranganath S, 2012, Facial Expressions in American Sign Language: Tracking and Recognition, Pattern Recognition:1877-1891.
[49] Paulraj M.P, Palaniappan R, Yaacob S and Zanar A, 2011, A Phoneme Based Sign Language Recognition System Using 2D Moment Invariant Interleaving Feature and Neural Network, IEEE Student Conference on Research and Development, 2(11):111-116.
[50] Padmavathi V.S and Shangeetha R.K, 2012, Computer Vision Based Approach for Indian Sign Language Character Recognition, IEEE International Conference on Machine Vision and Image Processing:181-184.
[51] Patil S.B and Sinha G.R, 2012, Real Time Handwritten Marathi Numerals Recognition Using Neural Network, International Journal of Information Technology and Computer Science, 4(12):76-81.
[52] Patil S.B, Sinha G.R and Thakur K, 2012, Isolated Handwritten Devnagri Character Recognition Using Fourier Descriptor and HMM, International Journal of Pure and Applied Sciences and Technology (IJPAST), 8(1):69-74.
[53] Patil S.B. and Sinha G.R, 2012, Off-line Mixed Devnagri Numeral Recognition Using Artificial Neural Network, Advances in Computational Research, Bioinfo Journal, 4(1):38-41.
[54] Patil S.B, Sinha G.R and Patil V.S, 2011, Isolated Handwritten Devnagri Numerals Recognition Using HMM, IEEE Second International Conference on Emerging Applications of Information Technology, Kolkata:185-189.
[55] Pentland A, Starner T and Weaver J, 1998, Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1371-1376.
[56] Quan Y, 2010, Chinese Sign Language Recognition Based on Video Sequence Appearance Modeling, 5th IEEE Conference on Industrial Electronics and Applications:1537-1542.
[57] Quan Y, 2010, Chinese Sign Language Recognition Based on Video Sequence Appearance Modeling, 5th IEEE Conference on Industrial Electronics and Applications:1537-1542.
[58] Ranganath S and Sylvie C.W, 2005, Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6):873-891.
[59] Sandjaja I.W, 2009, Sign Language Number Recognition, Fifth International Joint Conference on INC, IMS and IDC, 6(9):1503-1508.
[60] Shiquan D, Wang Hui and Xiaomei Z, 2011, Research on Chinese-American Sign Language Translation, IEEE International Conference on Computational Science and Engineering, 9(11):555-558.