International Journal of Research and Reviews in Information Sciences Vol. 1, No. 1, March 2011 Copyright © Science Academy Publisher, United Kingdom
An Overview of Automated Image Annotation Approaches

T. Sumathi, C. Lakshmi Devasena, and M. Hemalatha
Department of Software Systems, Karpagam University, Coimbatore, India.

Correspondence should be addressed to T. Sumathi ([email protected]) and M. Hemalatha ([email protected]).
Abstract – This paper presents a literature survey of statistical approaches for automatic annotation of digital images. Digital photography is now a common technology for capturing and archiving images, owing to the falling price of storage devices and digital cameras. We focus on techniques for automating the annotation of images as an intermediate step in the retrieval process. Several earlier studies of image annotation have produced valuable theoretical and technical knowledge and promising surveys, but not all of them cover the field up to the latest research. This paper aims to cover the approaches to automatic image annotation from the earliest work to the most recent findings. Using earlier literature reviews and evaluation results as guidelines, we outline automatic image annotation as a combination of image analysis and statistical learning approaches. A summary and analysis of selected approaches is used as the basis for a framework for designing an automatic image annotation model.
1. Introduction
Automated image annotation can be defined as the process of modeling the work of a human annotator when assigning words to images based on their visual properties. Up to now, most image annotation systems have been based on a combination of image analysis and statistical machine learning techniques. To improve retrieval accuracy, the research focus has shifted from designing sophisticated low-level feature extraction algorithms to reducing the semantic gap between visual features and the richness of human semantics. Traditionally, there are two main trends in image retrieval. The first is content-based image retrieval (CBIR), also known as query by image content (QBIC) or content-based visual information retrieval (CBVIR); content-based means that the search analyses the actual contents of the image using image analysis techniques.
Figure 1. A Typical CBIR System [1]
Given an input image, the goal of automatic image annotation is to assign a few relevant text keywords to the image that reflect its visual content. Utilizing image content to assign a richer, more relevant set of keywords would allow one to further exploit the fast indexing and retrieval architecture of Web image search engines for improved image search. This makes the problem of annotating images with relevant text keywords of enormous practical interest. Image annotation is a difficult task for two main reasons. First is the well-known pixel-to-predicate or semantic gap problem, which refers to the fact that it is hard to extract semantically meaningful entities using just low-level image features, e.g. color and texture; unambiguous, reliable recognition of thousands of objects or classes is currently an unsolved problem. The second difficulty arises from the lack of correspondence between keywords and image regions in the training data. For each image, one has access only to keywords assigned to the entire image; it is not known which regions of the image correspond to these keywords. This makes it difficult to learn classifiers directly by treating each keyword as a separate class. Recently, techniques have emerged to circumvent the correspondence problem under a discriminative multiple-instance learning paradigm [3] or a generative paradigm [4].
2. Automated Image Annotation
Automated image annotation, also known as image auto-annotation, consists of a number of techniques that aim to find the correlation between low-level visual features and high-level semantics. The main challenge in automated image annotation is to create a model able to assign visual terms to an image in order to describe it successfully. The starting point for most of these algorithms is a training set of images that have already been annotated by humans. This metadata is made up of simple keywords that describe the content of each image. Image analysis techniques are used to extract features from the images, such as color, texture and shape, in order to model the distribution of a term being present in an image. Features can be obtained from the whole image (global approach), from blobs, which are segmented parts of the image (segmented approach), or from tiles, which are rectangular partitions of the image. The next step is to extract the same feature information from an unseen image in order to compare it with all the previously created models. The result of this comparison yields a probability value for each keyword being present in the image. The block diagram of a typical image annotation framework is shown in Figure 2. A number of strategies can be adopted to produce the final output of these systems. One of them consists of an array of 1's and 0's with the same length as the number of terms in the vocabulary, which indicates the presence or absence of the objects in the image. This is defined as hard annotation, in contrast with soft annotation, which provides a probability score giving some confidence for each concept being present or absent in the image.
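The difference between hard and soft annotation described above can be sketched as follows. This is a minimal illustration, not taken from any of the surveyed systems; the vocabulary and probability scores are invented for the example.

```python
import numpy as np

def annotate(scores, vocabulary, threshold=0.5, top_k=None):
    """Turn per-keyword probability scores into annotations.

    scores: one P(word | image) value per vocabulary term.
    Hard annotation -> binary presence/absence vector;
    soft annotation -> the probability score kept for each term.
    """
    scores = np.asarray(scores, dtype=float)
    hard = (scores >= threshold).astype(int)      # 1/0 per vocabulary term
    soft = dict(zip(vocabulary, scores))          # confidence per term
    if top_k is not None:                         # common alternative: keep the k best words
        best = np.argsort(scores)[::-1][:top_k]
        hard = np.zeros_like(hard)
        hard[best] = 1
    return hard, soft

vocab = ["sky", "grass", "tiger", "water"]
hard, soft = annotate([0.9, 0.2, 0.7, 0.4], vocab, threshold=0.5)
print(hard)   # [1 0 1 0]
```

In practice the threshold (or the number of kept words) is a tuning choice; soft annotation defers that decision to retrieval time.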
There are three types of image annotation approaches: manual, automatic and semi-automatic [6]. Manual annotation requires users to enter descriptive keywords while browsing images. Automatic annotation detects and labels the semantic content of images with a set of keywords automatically. Semi-automatic annotation requires user interaction to provide an initial query and feedback for image annotation while browsing. The three techniques are compared in Table 1, and their advantages and disadvantages are shown in Table 2.

Table 1. Annotation Techniques Comparison
Initial human interaction: Manual — enter some descriptive keywords; Semi-automatic — provide an initial query at the beginning; Automatic — no interaction.
Machine task: Manual — provide storage (such as disk space or a database) for the annotation to be saved; Semi-automatic — parse the human's query and extract semantic information to perform annotation; Automatic — detect and label semantic keywords automatically using recognition technology.
Human effort: Manual — provide sufficient semantic information for retrieval purposes; Semi-automatic — perform some annotation and work with the machine's output; Automatic — verify and correct the machine's output for annotation accuracy.
Table 2. Advantages and Disadvantages of Annotation Techniques
Manual — Advantages: the most accurate. Disadvantages: time consuming, expensive, difficult, subjective, inconsistent.
Semi-automatic — Advantages: the quality of the annotation improves interactively after correction. Disadvantages: less time than manual annotation, but greater time than automatic annotation.
Automatic — Advantages: the most efficient, the least time consuming. Disadvantages: error-prone, less accurate.
Manual image annotation is considered expensive and time consuming [7], [8], while semi-automatic annotation is very efficient compared to manual annotation and more accurate than automatic annotation, based on the experimental evaluation in [6]. Automatic image annotation is the best in terms of efficiency but less accurate. Independently of the method used to define the annotation, automated image annotation systems generate a set of keywords that help to understand the scene represented in the image. Many experiments show that current image annotation techniques still perform poorly in the context of image retrieval.
Figure 2. A typical image annotation framework

3. Issues Relevant to Image Annotation
Images are annotated to simplify access to them: metadata added to the images allows more effective searches. If images are described by textual information, then text search techniques can be used to search for images [9]. However, there is a need to improve the automated generation of metadata for images, called automatic image annotation; this method makes image searches using CBIR more effective. Many researchers have proposed various techniques attempting to bridge the well-known semantic gap. Many of them recognise another problem, namely the dependency on the training dataset used to learn the models [10]. Image annotation surveys have been produced by many researchers in response to the growing need for annotating images. Hanbury [9] discussed the practical aspects of image annotation and divided image annotation
approaches into three: free-text annotation, keyword annotation and annotation based on ontologies. Tsai and Hung [11] reviewed 50 image annotation systems that use supervised machine learning to annotate images by mapping low-level visual features to high-level concepts or semantics. Datta et al. [12] surveyed almost 300 key theoretical and empirical contributions related to image retrieval and automatic image annotation and their subfields; they also discussed the challenges of adapting existing image retrieval techniques for systems development. Jiayu [13] classified image annotation approaches into statistical approaches, vector-space related approaches and classification approaches. The idea behind the statistical approaches is to estimate the probabilities of images given queries, which are then ranked according to their probabilities. The vector-space related approaches represent images as vectors containing occurrences of words within images; the vector space is then used to build visual terms, analogous to words, from image feature descriptors. In the classification approaches, attaching words to images is viewed as classifying images into a number of pre-defined groups, each characterised by a concept or word, using a classification algorithm. Multiple annotations can then be generated by allowing an image to belong to multiple classes.
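The classification view just described, where an image may belong to several word-classes at once, can be sketched with a one-vs-rest multi-label classifier. The feature vectors and vocabulary below are invented for illustration, and scikit-learn is only one possible toolkit; real systems would use richer color/texture features.

```python
# Classification approach: each vocabulary word is a class, and an image
# may belong to several classes at once (multi-label annotation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy 2-D "visual features" and their human annotations.
train_features = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
train_words = [["sky"], ["sky"], ["grass"], ["grass"]]

binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(train_words)          # words -> binary label matrix

# One binary classifier per vocabulary word.
model = OneVsRestClassifier(LogisticRegression()).fit(train_features, Y)

test_image = np.array([[0.85, 0.15]])             # sky-like features
predicted = binarizer.inverse_transform(model.predict(test_image))
print(predicted)
```

Because each word has its own classifier, an image whose features trigger several classifiers simply receives several keywords.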
4. Statistical Approaches in Automatic Image Annotation
Statistical models are popular approaches in image retrieval, including automatic image annotation. Basically, they annotate images by estimating the joint probability of an image and a set of words, or the probabilities of words given an image or a specific image region; the words are then ranked according to their probabilities. One drawback of these models is the computational cost of parameter estimation during the learning process. Mori et al. [14] used a co-occurrence model in which they looked at the co-occurrence of words with image regions created using a regular grid; in other words, they applied a co-occurrence model to words and the low-level features of tiled image regions. The annotation process begins by dividing images into rectangular tiles of the same size. For each tile, a feature descriptor combining color and texture is calculated. All the descriptors are then clustered into a number of groups, each represented by its centroid. Each tile inherits the whole set of labels from its original image. The probability of a label being related to a cluster is then estimated from the co-occurrence of the label and the image tiles within the cluster. The process of finding word occurrences in images is shown in Figure 3.
Figure 3. Co-occurrence Model [14]
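The tile-cluster-count pipeline of the co-occurrence model can be sketched as below. The 2-D descriptors stand in for the color/texture features computed per tile, and the labels are invented; k-means is used as the clustering step, one common choice rather than the specific clustering of [14].

```python
# Co-occurrence model sketch: tiles inherit all of their image's labels,
# tile descriptors are clustered, and P(word | cluster) comes from counts.
from collections import Counter, defaultdict
import numpy as np
from sklearn.cluster import KMeans

# (tile descriptor, labels inherited from the whole image)
tiles = [
    ([0.9, 0.1], ["sky", "sun"]),
    ([0.8, 0.2], ["sky"]),
    ([0.1, 0.9], ["grass", "cow"]),
    ([0.2, 0.8], ["grass"]),
]

X = np.array([d for d, _ in tiles])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

counts = defaultdict(Counter)
for cluster, (_, words) in zip(clusters, tiles):
    counts[cluster].update(words)                 # co-occurrence of words and clusters

def word_probs(cluster):
    """Relative frequency of each word among tiles assigned to a cluster."""
    total = sum(counts[cluster].values())
    return {w: c / total for w, c in counts[cluster].items()}

# The first two tiles fall in one cluster, so "sky" dominates it.
print(word_probs(clusters[0]))
```

A new image is annotated by assigning its tiles to the nearest centroids and ranking the words by the resulting cluster probabilities.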
Duygulu et al. [5] proposed to describe images using a vocabulary of blobs. First, regions are created using a segmentation algorithm such as normalized cuts [15]. For each region, features are computed, and blobs are generated by clustering these region features across images. Each image is then represented by a certain number of these blobs. Their machine translation model applies one of the classical statistical machine translation models: annotation is viewed as the task of translating from the vocabulary of blobs to the vocabulary of words.
Figure 4. Machine Translation Model [5]
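The translation idea can be sketched with IBM-Model-1 style EM over blob/word pairs: each image's blob tokens and keywords are treated as a parallel "sentence pair" and the translation table P(word | blob) is learned iteratively. The toy corpus below is illustrative only and not from the Corel data used in [5].

```python
# Machine translation model sketch: learn P(word | blob) with EM.
from collections import defaultdict

corpus = [  # (blob tokens, keywords) per image
    (["b1", "b2"], ["sky", "grass"]),
    (["b1", "b3"], ["sky", "tiger"]),
    (["b2", "b3"], ["grass", "tiger"]),
]

blobs = {b for bs, _ in corpus for b in bs}
words = {w for _, ws in corpus for w in ws}
t = {(w, b): 1.0 / len(words) for b in blobs for w in words}  # uniform start

for _ in range(20):                       # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for bs, ws in corpus:
        for w in ws:                      # E-step: fractional alignment counts
            norm = sum(t[(w, b)] for b in bs)
            for b in bs:
                count[(w, b)] += t[(w, b)] / norm
                total[b] += t[(w, b)] / norm
    for (w, b), c in count.items():       # M-step: re-normalise per blob
        t[(w, b)] = c / total[b]

best = max(words, key=lambda w: t[(w, "b1")])
print(best)   # "sky" is the only word consistently paired with b1
```

After convergence, an unseen image is annotated by looking up the most probable word for each of its blobs.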
Monay and Gatica-Perez [16] introduced latent variables to link image features with words as a way to capture co-occurrence information. This is based on latent semantic analysis (LSA), which comes from natural language processing and analyses the relationships between images and the terms that annotate them. Adding a sounder probabilistic model to LSA resulted in the development of probabilistic latent semantic analysis (PLSA) [17]. Blei and Jordan [18] viewed the problem of modeling annotated data as the problem of modeling data of different types where one type describes the other: for instance, images and their captions, papers and their bibliographies, or genes and their functions. To overcome the limitations of generative probabilistic models and discriminative classification methods, they proposed a framework combining both, culminating in Latent Dirichlet Allocation [19], a model that follows the image segmentation approach and finds the conditional distribution of the annotation given the primary type. Jeon et al. [20] improved on the results of Duygulu et al. [5] by introducing a generative language model referred to as the Cross-Media Relevance Model (CMRM), shown in Figure 5. The same process used by Duygulu et al. [5] was chosen to calculate the blob representation of images. They assumed that the task could be viewed as analogous to cross-lingual retrieval, performing both image annotation and ranked retrieval. This model also
used the keywords shared by similar images to annotate new images. Their experimental results showed that CMRM retrieved images better than [14] and [5].
In their experiments, the Dual Cross-Media Relevance Model (DCMRM) of Liu et al. [10] outperformed the two earlier relevance models [21], [22] for image retrieval; for image annotation, it also outperformed the previous models [5]-[22].
Figure 5. Cross-Media Relevance Model [20]
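The cross-media relevance idea can be sketched as pooling the keywords of training images whose blob representation overlaps that of the test image, weighted by the overlap. This is a simplification: the blob sets are invented, a Jaccard weight stands in for the CMRM probabilities, and smoothing is omitted.

```python
# CMRM-style sketch: keywords of blob-similar training images annotate a new image.
from collections import Counter

train = [  # (set of blob tokens, keywords) per training image
    ({"b1", "b2"}, ["sky", "grass"]),
    ({"b1", "b3"}, ["sky", "tiger"]),
    ({"b4"}, ["car"]),
]

def annotate(test_blobs, top_k=2):
    """Score each keyword by the blob overlap of the images that carry it."""
    scores = Counter()
    for blobs, words in train:
        overlap = len(test_blobs & blobs) / len(test_blobs | blobs)  # Jaccard weight
        for w in words:
            scores[w] += overlap
    return [w for w, _ in scores.most_common(top_k)]

print(annotate({"b1", "b2"}))   # ['sky', 'grass']
```

Ranked retrieval works the same way in reverse: images are scored by how strongly their blobs support the query words.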
Lavrenko et al. [21] argued that quantizing continuous image features into discrete blobs, as done by the machine translation model and CMRM, causes a loss of useful information in the image regions. By using continuous probability density functions to estimate the probability of observing a region given an image, they improved on the results obtained by Duygulu et al. [5] and Jeon et al. [20]. They also outperformed the results of [17]-[20] for ranked retrieval. Their model is shown in Figure 6.
Figure 6. Continuous Relevance Model [21]
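The continuous-feature idea can be sketched with a Gaussian kernel density over raw region features, avoiding the quantization into blobs. The feature vectors, bandwidth, and independence assumption across regions are all illustrative simplifications of the model in [21].

```python
# CRM-style sketch: kernel density over continuous region features, no blobs.
import numpy as np

def kernel(x, y, h=0.3):
    """Gaussian kernel contribution of training feature y at point x."""
    d = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(d, d) / (2 * h * h))

train = [  # (region feature vectors, keywords) per training image
    ([[0.9, 0.1], [0.8, 0.2]], ["sky"]),
    ([[0.1, 0.9]], ["grass"]),
]

def score_words(test_regions):
    scores = {}
    for regions, words in train:
        # P(test regions | image): average kernel response over that image's regions,
        # multiplied across test regions (assumed independent here).
        p = np.prod([np.mean([kernel(r, t) for t in regions]) for r in test_regions])
        for w in words:
            scores[w] = scores.get(w, 0.0) + p
    return scores

s = score_words([[0.85, 0.15]])
print(max(s, key=s.get))   # the sky-like region favours "sky"
```

Because no information is discarded by clustering, nearby features contribute smoothly to the density instead of being forced into a single blob.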
Feng et al. [22] modified the above model so that the probability of observing labels given an image is modeled as a multiple-Bernoulli distribution, as shown in Figure 7. In addition, they simply divided images into rectangular tiles instead of applying automatic segmentation algorithms. Their Multiple Bernoulli Relevance Model (MBRM) achieved a further improvement in performance; their experimental results showed that MBRM retrieved images better than [19] and [21].
Figure 7. Multiple Bernoulli Relevance Model [22]
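The multiple-Bernoulli word model treats each vocabulary word as an independent present/absent variable. The corpus-level sketch below, with beta-prior smoothing and invented annotations, illustrates that per-word estimate; the full MBRM conditions on the image's tile features, which is omitted here.

```python
# Multiple-Bernoulli sketch: each word is a smoothed presence/absence estimate.
vocab = ["sky", "grass", "tiger"]

train_annotations = [["sky", "grass"], ["sky"], ["sky", "tiger"]]

def bernoulli_prob(word, images=train_annotations, alpha=1.0, beta=1.0):
    """Beta-smoothed estimate of P(word present) over the training set."""
    present = sum(word in ann for ann in images)
    return (present + alpha) / (len(images) + alpha + beta)

probs = {w: bernoulli_prob(w) for w in vocab}
print(probs)   # sky: 0.8, grass: 0.4, tiger: 0.4
```

Modeling words independently lets the annotation length vary per image, unlike multinomial models that force the word probabilities to compete for a fixed budget.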
Liu et al. [10] estimated the joint probability as an expectation over the words in a pre-defined lexicon. Their model involves two kinds of critical relations in image annotation: the word-to-image relation and the word-to-word relation, as shown in Figure 8.
Figure 8. Dual Cross-Media Relevance Model [10]
Torralba and Oliva [23] focused on modeling a global scene rather than image regions. This scene-oriented approach can be viewed as a generalization of the previous ones in which there is only one region, coinciding with the whole image. Torralba and Oliva used global scene statistics to predict the presence or absence of objects in unseen images. Consequently, images can be described with basic keywords such as street, buildings or highways using a selection of relevant low-level global filters. Yavlinsky et al. [24] followed an approach using global features together with robust non-parametric density estimation and kernel smoothing. Their results are comparable with the inference network [11] and CRM [10]. Notably, they showed that the Corel dataset proposed by Duygulu could be annotated remarkably well using just global color information.

Liu et al. [21] use a combination of correlations from WordNet and statistical co-occurrence to expand existing image annotations and to prune irrelevant keywords for each annotated image. Jin et al. [15] propose a new framework for automated image annotation that estimates the probability of a language model being used to annotate a given image. They take word-to-word correlation into account through the Expectation Maximization (EM) algorithm to find the optimal language model for the image.

Several unsupervised learning approaches to the image auto-annotation problem have also been presented. The CMRM method uses a clustering algorithm to generate blob tokens [20]; these blob tokens serve as an intermediate dictionary for generating annotations. An approach based on a SOM neural network is used in the PicSOM system [25]. CRM [26] and MBRM [27] use feature values directly, together with kernel-based techniques as a distance measure.
This approach proves to be very effective, and these methods are currently state of the art for the image auto-annotation problem. DIMATEX [28] uses dichotomic clustering to speed up the process; FastAN [29] uses discrete measures instead of kernel calculations, which are computationally very expensive. Both DIMATEX and FastAN are fast methods and focus mainly on efficiency. There are also several other approaches: the GAP [30] method represents the auto-annotation problem as a graph and uses graph-based techniques. A supervised learning approach based on SVM is used in the Mixhier [32] method together with M-ary classifiers, which are also used in the BML and MCML methods. BML and MCML are supervised machine learning methods based on C4.5 classifiers; they provide better results than CRM on all tested datasets under the normalized score and accuracy quality measures, and give similar results for mean per-word precision and recall on a subset of the best words. SemSpace [33] uses domain ontologies and allows the semantic annotation of image parts; this application does not use automated processes at all, so annotating images is a time-consuming job in this system. Riya [34] uses a face detection algorithm. This reduces the annotation time for users but leads to new problems: the sole use of face detection produces incorrectly generated annotations (so-called "phantom faces"). Andreas and Gabor [35] propose the IMAGINATION application to minimize the human effort needed to create high-quality image annotations; the automatically created semantic annotations are of high quality, reducing the time needed for manual corrections. To achieve this, the method combines different automated processes, and background knowledge in a common domain ontology is also exploited to achieve better results. Yasuhide Mori, Hironobu Takahashi and Ryuichi Oka proposed a method called image-to-word transformation based on dividing and vector quantizing images with words. It is based on statistical learning from images to which words are attached. The key concept of the method is as follows.
First, each image is divided into many parts, and all the words attached to an image are inherited by each of its parts. Next, the parts of all images are vector quantized to form clusters, and finally the likelihood of each word in each cluster is estimated statistically. This method is useful when combined with a good human interface system to help users in mining data.
5. Evaluation of Automatic Image Annotation

5.1. Precision and recall
Precision and recall, the most popular metrics for comparing CBIR systems, are also widely used for evaluating the effectiveness of automatic image annotation approaches. Precision is defined as the ratio of the number of words correctly retrieved to the total number of words retrieved in each image search, while recall is the ratio of the number of words correctly retrieved to the number of ground-truth words. For a given semantic descriptor, assuming there are WH human-annotated images in the test set and the system annotates wauto images, of which Wc are correct, the per-word recall and precision are given by recall = Wc / WH and precision = Wc / wauto, respectively. Finally, the values of recall and precision are averaged over the set of words that appear in the test set to obtain the average per-word precision (P) and average per-word recall (R). We also consider the number of words with non-zero recall (NZR), i.e. words with Wc > 0, which indicates how many words the system has effectively learned. The efficiency of semantic retrieval is also evaluated by measuring precision and recall: given a query term and the top n image matches retrieved from the database, recall is the percentage of all relevant images contained in the retrieved set, and precision is the percentage of the n images that are relevant. Retrieval performance is evaluated with the mean average precision (MAP), which is the precision averaged, over all queries, at the ranks where recall changes. The evaluation metrics are summarized in Table 3.

Table 3. Evaluation Metrics
P — Average per-word precision
R — Average per-word recall
# words NZR — Number of words with non-zero recall
MAP — Mean average precision (ranked retrieval)
MAP NZR — Mean average precision for words with non-zero recall (ranked retrieval)
Image categorization accuracy — Percent of the images correctly categorized
Mean coverage — Percent of the ground-truth annotations that match the computer annotations

6. Comparisons of Related Work

This section compares related automatic image annotation research, as shown in Table 4. The comparison focuses on the image analysis and statistical learning techniques used for image annotation only; it does not consider system performance such as annotation accuracy, precision or recall.

Table 4. Automatic Image Annotation Approaches
Co-occurrence model — Segmentation: tiled image into rectangular grid — Relation image to word: 1 tile inherits all words — Dataset: PSU
Translation Model — Segmentation: Ncut; Learning: EM — Relation: 1 blob to 1 word — Dataset: Corel
CMRM — Segmentation: Ncut; Learning: EM — Relation: 1 set of blobs to 1 set of words — Dataset: Corel
CRM — Segmentation: Ncut; Learning: EM — Relation: 1 set of blobs to 1 set of words — Dataset: Corel 5K
MBRM — Segmentation: tiled image; Learning: EM — Relation: coherent region to semantic words — Dataset: Corel 5K
DCMRM — Learning: EM — Relation: word-to-image and word-to-word — Dataset: Corel 30K
(Ncut = Normalized cuts; EM = Expectation Maximization)

7. Conclusions
Bridging the semantic gap in image retrieval is not easy. Automatic image annotation is a solution to this well-known problem; however, the major difficulty is making computers understand image content in terms of semantics or high-level concepts. Bridging the gap between low-level content and high-level concepts can be attempted by applying image analysis and statistical learning approaches, or other techniques such as classification.
References
[1] S. Abd Manaf and M. J. Nordin, "Review on Statistical Approaches for Automatic Image Annotation," in 2009 International Conference on Electrical Engineering and Informatics, 5-7 August 2009.
[2] A. Makadia, V. Pavlovic and S. Kumar, "A New Baseline for Image Annotation," in European Conference on Computer Vision, 2008.
[3] C. Yang, M. Dong and J. Hua, "Region-based Image Annotation Using Asymmetrical Support Vector Machine-based Multiple-Instance Learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[4] G. Carneiro, A. B. Chan, P. J. Moreno and N. Vasconcelos, "Supervised Learning of Semantic Classes for Image Annotation and Retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007.
[5] P. Duygulu, K. Barnard, J. F. G. de Freitas and D. A. Forsyth, "Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary," in European Conference on Computer Vision, 2002, pp. 97-112.
[6] L. Wenyin, S. Dumais, Y. Sun, H. J. Zhang, M. Czerwinski and B. Field, "Semi-Automatic Image Annotation," in 8th IFIP TC13 Conference on Human-Computer Interaction, 2002, pp. 326-333.
[7] C. H. Wiener, N. Simou and V. Tzouvaras, "Image Annotation on the Semantic Web," 2006. [Online]. Available: http://www.w3.org/TR/2006/WD-swbp-image-annotation-20060322.
[8] K. Barnard and N. V. Shirahatti, "A Method for Comparing Content Based Image Retrieval Methods," in Proceedings of the SPIE, 2003, pp. 1-8.
[9] A. Hanbury, "A Survey of Methods for Image Annotation," Journal of Visual Languages and Computing, vol. 19, pp. 617-627, Oct. 2008.
[10] J. Liu, B. Wang, M. Li, Z. Li, W. Y. Ma, H. Lu and S. Ma, "Dual Cross-Media Relevance Model for Image Annotation," in Proceedings of the 15th ACM International Conference on Multimedia, 2007, pp. 605-614.
[11] C. F. Tsai and C. Hung, "Automatically Annotating Images with Keywords: A Review of Image Annotation Systems," Recent Patents on Computer Science, vol. 1, pp. 55-68, Jan. 2008.
[12] R. Datta, D. Joshi, J. Li and J. Z. Wang, "Image Retrieval: Ideas, Influences, and Trends of the New Age," ACM Computing Surveys, vol. 40, Apr. 2008.
[13] T. Jiayu, "Automatic Image Annotation and Object Detection," PhD thesis, University of Southampton, United Kingdom, May 2008.
[14] Y. Mori, H. Takahashi and R. Oka, "Image-to-word Transformation Based on Dividing and Vector Quantizing Images with Words," in MISRM'99 First International Workshop on Multimedia Intelligent Storage and Retrieval Management, 1999.
[15] J. Shi and J. Malik, "Normalized Cuts and Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 888-905, Aug. 2000.
[16] F. Monay and D. Gatica-Perez, "PLSA-based Image Auto-Annotation with Latent Space Models," in Proceedings of the 11th ACM International Conference on Multimedia, New York, USA, 2003, pp. 275-278.
[17] F. Monay and D. Gatica-Perez, "On Image Auto-Annotation: Constraining the Latent Space," in Proceedings of the 12th ACM International Conference on Multimedia, New York, USA, 2004, pp. 348-351.
[18] D. M. Blei and M. I. Jordan, "Modeling Annotated Data," in Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, New York, 2003, pp. 127-134.
[19] D. M. Blei, A. Y. Ng and M. I. Jordan, "Latent Dirichlet Allocation," Journal of Machine Learning Research, 2003, pp. 993-1022.
[20] J. Jeon, V. Lavrenko and R. Manmatha, "Automatic Image Annotation and Retrieval Using Cross-Media Relevance Models," in Proceedings of the 26th Annual International ACM SIGIR Conference, 2003.
[21] V. Lavrenko, R. Manmatha and J. Jeon, "A Model for Learning the Semantics of Pictures," in Proceedings of Advances in Neural Information Processing Systems, 2003.
[22] S. Feng, R. Manmatha and V. Lavrenko, "Multiple Bernoulli Relevance Models for Image and Video Annotation," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004, pp. 1002-1009.
[23] A. Torralba and A. Oliva, "Statistics of Natural Image Categories," Network: Computation in Neural Systems, vol. 14, no. 3, pp. 391-412, 2003.
[24] A. Yavlinsky, E. Schofield and S. Rüger, "Automated Image Annotation Using Global Features and Robust Nonparametric Density Estimation," in International Conference on Image and Video Retrieval (CIVR), 2005.
[25] V. Viitaniemi and J. Laaksonen, "Keyword Detection Approach to Automatic Image Annotation," in Proceedings of the 2nd European Workshop on the Integration of Knowledge, Semantic and Digital Media Technologies (EWIMT 2005), London, UK, November 2005.
[26] V. Lavrenko, R. Manmatha and J. Jeon, "A Model for Learning the Semantics of Pictures," in Proceedings of NIPS'03.
[27] S. Feng, R. Manmatha and V. Lavrenko, "Multiple Bernoulli Relevance Models for Image and Video Annotation," in CVPR 2004.
[28] H. Glotin and S. Tollari, "Fast Image Auto-Annotation with Visual Vector Approximation Clusters," in CBMI 2005.
[29] H. Kwasnicka and M. Paradowski, "Fast Image Auto-Annotation with Discretized Feature Distance Measures," Machine Graphics and Vision, 2006 (in print).
[30] J. Pan, H. Yang, C. Faloutsos and P. Duygulu, "GCap: Graph-based Automatic Image Captioning," in Proceedings of the 4th International Workshop on Multimedia Data and Document Engineering (MDDE 04), in conjunction with CVPR 04, Washington DC, July 2004.
[31] C. Cusano, G. Ciocca and R. Schettini, "Image Annotation Using SVM," in Proceedings of Internet Imaging V, Vol. SPIE 5304, 2004.
[32] G. Carneiro and N. Vasconcelos, "Formulating Semantic Image Annotation as a Supervised Learning Problem," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 2, pp. 163-168, June 2005.
[33] J. van Ossenbruggen, R. Troncy, G. Stamou and J. Z. Pan, "Image Annotation on the Semantic Web," W3C working draft, 2006.
[34] Riya: Riya visual search, http://www.riya.com/ (accessed 2007-12-09).
[35] A. Walter and G. Nagypal, "The Combination of Techniques for Automatic Semantic Image Annotation Generation in the IMAGINATION Application," 2008.
Dr. M. Hemalatha completed her MCA, M.Phil. and PhD in Computer Science and is currently working as Assistant Professor and Head, Department of Software Systems, Karpagam University. She has ten years of teaching experience, has published twenty-seven papers in international journals, and has presented seventy papers at national conferences and one at an international conference. Her research areas are data mining, software engineering, bioinformatics and neural networks. She is also a reviewer for several national and international journals.

T. Sumathi is presently pursuing a PhD at Karpagam University, Coimbatore, Tamil Nadu, India. She completed her M.Phil. (Computer Science) in 2006, her MCA degree in 1999 and her B.Sc. (Computer Science) in 1996. Her major research area is image processing, and her research topic is image annotation. At present, she is working as a Lecturer at Karpagam University, Coimbatore.

C. Lakshmi Devasena is presently pursuing a PhD at Karpagam University, Coimbatore, Tamil Nadu, India. She completed her M.Phil. (Computer Science) in 2008, her MCA degree in 1997 and her B.Sc. (Computer Science) in 1994. Her major research areas are image processing and data mining. At present, she is working as a Lecturer at Karpagam University, Coimbatore.