Global Science and Technology Journal Vol. 1. No. 1. July 2013 Issue. Pp.23-40
Feature-based Intelligent Image Retrieval Algorithms: A Comparison

Md. Baharul Islam* and Balaji Kaliyaperumal**

With the growth of the Internet and of photographic technology, the number of digital images has been increasing rapidly. In this scenario, maintaining image databases and retrieving the correct image from an online database is a challenging task. Content Based Image Retrieval (CBIR) addresses this task: it retrieves the most closely matched images automatically by extracting different features. A single-feature system cannot retrieve images accurately and efficiently: high-dimensional features reduce query efficiency, while low-dimensional features reduce query accuracy, so using multiple features for image retrieval may be a better solution. The proposed system is based on CBIR, in which colour feature extraction is accomplished by constructing a one-dimensional feature vector. Texture feature extraction uses the gray-level co-occurrence matrix (GLCM) or the colour co-occurrence matrix (CCM). The GLCM and CCM are each combined with a colour feature obtained by quantizing the HSV (Hue, Saturation, Value) colour space. The purpose of this paper is to provide a better and more efficient way of retrieving images by using integrated features. Performance is computed separately for the three feature extraction modes, and the multi-feature retrieval systems give better results.
Field of Research: Database, Feature Extraction, Image Retrieval, World Wide Web
1. Introduction

The number of images in a single image hosting website is estimated to be greater than two billion (Singhai and Shandilya 2010). This growth has led researchers to focus their attention on content-based image retrieval methods, which have various applications such as cataloguing, annotating and accessing images, image search engines, computer forensics and security. Retrieval methods based on text or keywords cannot capture multimedia information exactly, and users demand accurate and fast retrieval because of the increasing amount of multimedia information on the Internet. Research on multimedia information retrieval is therefore one of the most important fields in the multimedia world (Alemu et al. 2009). Colour-based image retrieval systems fall into three categories depending on the signature extraction approach: histogram, colour layout, and region-based search. A colour model represents colour in terms of intensity values; usually a colour space defines a one- to four-dimensional space, and two common three-dimensional colour spaces are RGB and HSV (Smeulders et al. 2000). Image retrieval involves a number of related tasks: collecting data, building the feature database, searching the database, ranking the results and presenting them to the user.
_______________
*Md. Baharul Islam, Department of Multimedia Technology and Creative Arts, Faculty of Science and Information Technology, Daffodil International University, Dhaka-1207, Bangladesh. Email: [email protected]
**Balaji Kaliyaperumal, School of Computer Engineering, Nanyang Technological University, 50 Nanyang Drive, Singapore 63755. E-mail: [email protected]
Histogram-based search works in two different colour spaces. It has many advantages: it is efficient, needs little memory and is robust to rotation and scaling variations. Texture-based methods can be classified into two categories, statistical features and structural features, where the spatial domain is transformed into the frequency domain. For CBIR, visual features such as shape, colour and texture are extracted to characterize images, and the features of the query image are compared with those of the images in the database during retrieval. Colour and texture are the most important visual features. Figure 1 shows the block diagram of a CBIR system. The visual content of the images in the image database and of the query image provided by the user is extracted to form feature vectors. Similarities between images are calculated from the feature vectors, and retrieval is performed by indexing those images that most closely match the query image. The retrieval system can also use feedback to modify the process and generate more meaningful results.

Figure 1: Content Based Image Retrieval System
High-dimensional features reduce query efficiency whereas low-dimensional features reduce query accuracy (Choras 2007), so using multiple features for image retrieval may be a better approach. Colour and texture are the most important visual features. We first worked with the colour and texture features separately and then with the integrated features, which give more satisfactory results than any single feature. The rest of this paper presents the literature review, feature extraction, methodology, experimental results and conclusions.
2. Literature Review

Early work on image retrieval can be traced back to the late 1970s. In 1979, a conference on database techniques for pictorial applications was held in Florence (Pass and Zabih 1996). Since then, the application of image database management
techniques has attracted the attention of many researchers. In the early 1990s, the volume of digital images produced in different fields such as education, medicine and industry grew dramatically thanks to modern technology (Nandagopalan et al. 2009), and text-based information retrieval faced increasing difficulties. The use of CBIR grew tremendously in applications such as biomedicine, the military, commerce, education, and web image classification and searching. Rapid and effective searching for desired images in large-scale image databases became an important and challenging research topic (Huang et al. 1997; Shen et al. 2000). CBIR technology overcame the defects of traditional text-based image retrieval technology: it uses content features of the image (colour, texture, shape etc.) that are analyzed and extracted automatically to achieve effective retrieval (Manjunath and Ma 1996; Rui et al. 1996). Several systems for CBIR have been proposed by the research community in recent years: QBIC (Query by Image Content) by IBM, Photobook by the MIT Media Lab, VisualSEEk by Columbia University, RetrievalWare by Excalibur Technologies Corporation, Netra by the University of California for the Alexandria Digital Library, and IRIS by the German Software Development Laboratory of IBM and the AI group of the University of Bremen (Choras 2007). For CBIR, visual features such as shape, colour and texture are extracted to characterize images, and each feature is represented using one or more feature descriptors. During retrieval, the features and descriptors of the query are compared with those of the images in the database in order to rank each indexed image according to its distance from the query. In biometric systems, images used as patterns (e.g. fingerprint, iris, hand) are also represented by feature vectors, and candidate patterns are retrieved from the database by comparing the distances between their feature vectors.
3. Feature Extraction

A feature is defined as a function of one or more measurements that specifies some quantifiable property of an object; it is computed so that it quantifies some significant characteristic of the object. We classify the features currently employed as follows.

General features: application-independent features such as colour, texture and shape. Based on the abstraction level, they can be further divided into pixel-level features, calculated at each pixel (e.g. colour, location); local features, calculated over the results of a subdivision of the image based on image segmentation or edge detection; and global features, calculated over the entire image or a regular sub-area of an image.

Domain-specific features: features that depend on the application, such as human faces, fingerprints, and conceptual features. These features are often a synthesis of low-level and high-level features. Low-level features can be extracted directly from the original images, whereas high-level feature extraction must be based on low-level features (Saber and Tekalp 1998).
3.1 Colour Feature Extraction

The colour feature is one of the most widely used features for image retrieval. Images characterized by colour features have several advantages, listed below:
Robustness: The colour histogram is invariant to rotation of the image about the view axis, and changes only in small steps when the image is rotated or scaled. The colour feature is also insensitive to changes in image and histogram resolution and to occlusion.
Effectiveness: There is a high percentage of relevance between the query image and the retrieved matching images.
Implementation simplicity: The construction of the colour histogram is a straightforward process including scanning the image, assigning colour values to the resolution of the histogram, and building the histogram using colour components as indices.
Computational simplicity: The histogram computation has O(XY) complexity for an image of size X × Y. The complexity of a single image match is linear, O(n), where n is the number of different colours, i.e. the resolution of the histogram.
Low storage requirement: The colour histogram is significantly smaller than the image itself.

Typically the colour of an image is represented through some colour model. Various colour models exist to describe colour information. A colour model is specified in terms of a 3D coordinate system and a subspace within that system in which each colour is represented by a single point. The most commonly used colour models are RGB (red, green, blue), HSV (hue, saturation, value) and YCbCr (luminance and chrominance); the colour content is thus characterized by the three channels of some colour model. One representation of the colour content of an image is the colour histogram, which statistically denotes the joint probability of the intensities of the three colour channels.
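As a small illustration of this histogram representation, the following Python sketch computes the joint (normalised) histogram of the three channels of an RGB image with numpy; the 8-bins-per-channel resolution and the randomly generated demo image are assumptions of the example, not values fixed by the paper.

```python
import numpy as np

def colour_histogram(rgb, bins=8):
    """Joint probability histogram of the three colour channels.

    rgb  : uint8 array of shape (height, width, 3)
    bins : bins per channel, giving bins**3 cells in total
    """
    pixels = rgb.reshape(-1, 3).astype(float)             # one row per pixel
    hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return hist / hist.sum()                               # joint probability of the channels

if __name__ == "__main__":
    demo = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    h = colour_histogram(demo)
    print(h.shape, h.sum())                                # (8, 8, 8) 1.0
```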
3.1.1 HSV Colour Feature

In the HSV colour scheme, hue distinguishes colours, saturation is the percentage of white light added to a pure colour, and value refers to the perceived light intensity (Kong 2009). Because of the large range of each component, direct computation makes rapid retrieval difficult (Pass and Zabih 1996; Nandagopalan et al. 2009). Based on an analysis of the colour model, we divide hue into eight parts, and saturation and value into three parts each. The quantization of hue (H), saturation (S) and value (V) is shown in equations (1) to (3).
$$
H=\begin{cases}
0, & h\in[316^{\circ},20^{\circ}]\\
1, & h\in[21^{\circ},40^{\circ}]\\
2, & h\in[41^{\circ},75^{\circ}]\\
3, & h\in[76^{\circ},155^{\circ}]\\
4, & h\in[156^{\circ},190^{\circ}]\\
5, & h\in[191^{\circ},270^{\circ}]\\
6, & h\in[271^{\circ},295^{\circ}]\\
7, & h\in[296^{\circ},315^{\circ}]
\end{cases}
\qquad(1)
$$

$$
S=\begin{cases}
0, & s\in[0,\,0.2]\\
1, & s\in(0.2,\,0.7]\\
2, & s\in(0.7,\,1]
\end{cases}
\qquad(2)
$$

$$
V=\begin{cases}
0, & v\in[0,\,0.2]\\
1, & v\in(0.2,\,0.7]\\
2, & v\in(0.7,\,1]
\end{cases}
\qquad(3)
$$
According to the quantization levels, the three-dimensional HSV feature vector, with different weights for the different components, is converted into a one-dimensional feature vector G as follows:

$$G = S_q V_q H + V_q S + V \qquad(4)$$

where $S_q$ is the number of quantized levels of S and $V_q$ is the number of quantized levels of V. We set $S_q = V_q = 3$, so that

$$G = 9H + 3S + V \qquad(5)$$
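As a worked example of the quantization, consider a pixel with h = 200°, s = 0.5 and v = 0.8 (values chosen for illustration): equation (1) gives H = 5, equation (2) gives S = 1 and equation (3) gives V = 2, so equation (5) yields G = 9·5 + 3·1 + 2 = 50, i.e. the pixel falls into bin 50 of the 72-bin histogram.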
The three-component HSV vector is thus converted into a one-dimensional vector that quantizes the whole colour space into 72 main colours, so the colour content can be handled as a 72-bin one-dimensional histogram. This quantization is effective not only in reducing the effect of light intensity but also in reducing computational time and complexity.

3.2 Texture Feature Extraction

Texture is a powerful regional descriptor that helps in the retrieval process. Texture on its own cannot find similar images, but it can be used to separate textured images from non-textured ones, and it can then be combined with another visual attribute, such as colour, to make retrieval more effective. Texture is one of the most important characteristics of an image used to classify and recognize objects and to find similarities between images in multimedia databases (Shen et al. 2000).
3.2.1 Gray Level Co-occurrence Matrix

The gray-level co-occurrence matrix (GLCM) is a well-known and widely used method for extracting texture features (Jurie and Triggs 2005). The co-occurrence matrix is defined by the joint probability density of two pixels at different positions. For a digital image of size $M \times N$ with gray levels $I(x, y)$, the gray-level co-occurrence matrix $P(i, j \mid d, \theta)$ for distance $d$ and the four principal directions is defined as

$$P(i,j \mid d, 0^{\circ}) = \#\{((x_1,y_1),(x_2,y_2)) \in (M \times N) \times (M \times N) : I(x_1,y_1)=i,\ I(x_2,y_2)=j,\ |x_1-x_2|=0,\ |y_1-y_2|=d\} \qquad(6)$$

$$P(i,j \mid d, 45^{\circ}) = \#\{((x_1,y_1),(x_2,y_2)) : I(x_1,y_1)=i,\ I(x_2,y_2)=j,\ (x_1-x_2=d,\ y_1-y_2=-d) \text{ or } (x_1-x_2=-d,\ y_1-y_2=d)\} \qquad(7)$$

$$P(i,j \mid d, 90^{\circ}) = \#\{((x_1,y_1),(x_2,y_2)) : I(x_1,y_1)=i,\ I(x_2,y_2)=j,\ |x_1-x_2|=d,\ |y_1-y_2|=0\} \qquad(8)$$

$$P(i,j \mid d, 135^{\circ}) = \#\{((x_1,y_1),(x_2,y_2)) : I(x_1,y_1)=i,\ I(x_2,y_2)=j,\ (x_1-x_2=d,\ y_1-y_2=d) \text{ or } (x_1-x_2=-d,\ y_1-y_2=-d)\} \qquad(9)$$
Here $\#\{\cdot\}$ denotes the number of occurrences of the pair of gray levels $i$ and $j$ at distance $d$ apart, and $\theta \in \{0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}\}$ is the angle between the pixel pair and the axis, so the gray-level co-occurrence matrix $P(i, j \mid d, \theta)$ depends on the distance $d$ and the angle $\theta$. Figure 2 shows how the first three values of a gray-level co-occurrence matrix are calculated. In the output GLCM, element (1,1) contains the value 1 because there is only one instance in the input image where two horizontally adjacent pixels have the values 1 and 1; element (1,2) contains the value 2 because there are two instances where two horizontally adjacent pixels have the values 1 and 2; and element (1,3) contains the value 0 because there is no instance of two horizontally adjacent pixels with the values 1 and 3. The co-occurrence matrix is built by scanning all remaining pixel pairs (i, j) in the same way and recording the sums in the corresponding elements of the GLCM.
Figure 2: Example of Gray Level Co-occurrence Matrix
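The counting procedure can be reproduced with a few lines of Python; the small input matrix below is an arbitrary example (not the image of Figure 2), and only horizontal (d = 1, θ = 0°) pairs are counted.

```python
import numpy as np

# Arbitrary example image with gray levels 1..3 (not the matrix of Figure 2).
img = np.array([[1, 1, 2, 3],
                [2, 2, 3, 1],
                [3, 1, 2, 2]])

levels = img.max()
glcm = np.zeros((levels, levels), dtype=int)
for row in img:
    for a, b in zip(row[:-1], row[1:]):   # horizontally adjacent pixel pairs
        glcm[a - 1, b - 1] += 1           # gray levels start at 1, indices at 0
print(glcm)                               # element (1,1) counts the (1,1) pairs, etc.
```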
The GLCM is a symmetric matrix whose size is determined by the number of image gray levels. Its elements are normalized into probabilities by equation (10):

$$p(i,j \mid d,\theta) = \frac{P(i,j \mid d,\theta)}{\sum_{i=1}^{256} \sum_{j=1}^{256} P(i,j \mid d,\theta)} \qquad(10)$$
In this method, four features are selected: energy (a measure of homogeneity), contrast (the moment of inertia, reflecting image clarity and the depth of texture grooves), entropy (a measure of the randomness of the image texture), and inverse difference (a measure of local changes in the image texture). They are computed from the normalized matrix as follows:
Energy: $$E = \sum_{x=1}^{256}\sum_{y=1}^{256} p(x,y)^2 \qquad(11)$$

Contrast: $$I = \sum_{x=1}^{256}\sum_{y=1}^{256} (x-y)^2\, p(x,y) \qquad(12)$$

Entropy: $$S = -\sum_{x=1}^{256}\sum_{y=1}^{256} p(x,y)\log p(x,y) \qquad(13)$$

Inverse difference: $$H = \sum_{x=1}^{256}\sum_{y=1}^{256} \frac{p(x,y)}{1+(x-y)^2} \qquad(14)$$
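The following Python sketch computes the normalised co-occurrence matrix of equation (10) for one direction and the four statistics of equations (11)-(14); the 8-bit gray image and the offset d = 1, θ = 0° are assumptions of the example, and other directions follow equations (6)-(9) by changing the offset.

```python
import numpy as np

def glcm(gray, levels=256):
    """Normalised co-occurrence matrix for horizontally adjacent pixels (d=1, theta=0).

    gray: integer (e.g. uint8) array of gray levels in the range 0..levels-1.
    """
    m = np.zeros((levels, levels), dtype=np.float64)
    # count the pairs (gray[x, y], gray[x, y+1]) over the whole image
    np.add.at(m, (gray[:, :-1].ravel(), gray[:, 1:].ravel()), 1)
    return m / m.sum()                                     # equation (10)

def glcm_features(p):
    """Energy, contrast, entropy and inverse difference, equations (11)-(14)."""
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)                                # equation (11)
    contrast = np.sum((i - j) ** 2 * p)                    # equation (12)
    nz = p > 0
    entropy = -np.sum(p[nz] * np.log(p[nz]))               # equation (13)
    inv_diff = np.sum(p / (1.0 + (i - j) ** 2))            # equation (14)
    return np.array([energy, contrast, entropy, inv_diff])
```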
In our proposed technique, the texture feature is extracted with the gray-level co-occurrence matrix and with the colour co-occurrence matrix, and the results of the two methods are used in the Euclidean distance function to find the closest matching images.

3.3 Image Database

In this paper, the experimental data set contains 1000 images from the Corel database (Corel 2011). The images are divided into 10 categories (Africans, Beaches, Monuments, Buses, Dinosaurs, Elephants, Flowers, Horses, Mountains and Food), and each category contains 100 images of size 256x384 or 384x256.
Table 1: Categories of COREL 1K Dataset

Image Index    Category No.    Category
0 - 99         1               Africans
100 - 199      2               Beaches
200 - 299      3               Monuments
300 - 399      4               Buses
400 - 499      5               Dinosaurs
500 - 599      6               Elephants
600 - 699      7               Flowers
700 - 799      8               Horses
800 - 899      9               Mountains
900 - 999      10              Food
4. Methodology

We worked with three schemes: one single-feature scheme and two multi-feature schemes.

Scheme 1 (feature vector based on HSV colour): First, the images of resolution 256x384 or 384x256 in the COREL image database are resized to 256x256. The resized images are then converted from RGB to HSV. The colours are then subdivided into eight parts of hue and three parts each of saturation and value, as shown in equations (1) to (3). The three-component HSV vector is converted into a one-dimensional vector and quantized into a 72-bin one-dimensional histogram, as illustrated in Figure 3 and in the sketch that follows it.

Figure 3: Derivation of the Feature Vector of HSV Colour
(The figure shows the pipeline: RGB image [256x256x3] → HSV conversion [256x256] → G = 9H + 3S + V [256x256] → reshape to [65536x1] → 72-bin one-dimensional histogram.)
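A minimal Python sketch of the Scheme 1 pipeline, assuming the hue channel is given in degrees [0, 360) and saturation and value in [0, 1] (for instance from matplotlib.colors.rgb_to_hsv with the hue rescaled); the bin boundaries follow equations (1)-(3) and the index follows equation (5).

```python
import numpy as np

def quantise_hsv(h_deg, s, v):
    """Per-pixel index G = 9*H + 3*S + V in the range 0..71 (equations (1)-(5))."""
    h_edges = [20, 40, 75, 155, 190, 270, 295, 315]   # hue bin boundaries, eq. (1)
    hq = np.digitize(h_deg, h_edges) % 8              # hues above 315 wrap into bin 0
    sq = np.digitize(s, [0.2, 0.7])                   # eq. (2)
    vq = np.digitize(v, [0.2, 0.7])                   # eq. (3)
    return 9 * hq + 3 * sq + vq                       # eq. (5)

def hsv_histogram(hsv):
    """72-bin one-dimensional colour histogram of an HSV image (H in degrees)."""
    g = quantise_hsv(hsv[..., 0], hsv[..., 1], hsv[..., 2])
    hist = np.bincount(g.ravel(), minlength=72).astype(float)
    return hist / hist.sum()
```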
Figure 4: Derivation of the Feature Vector of GLCM

(The figure shows the pipeline: the RGB image [256x256x3] is converted both to HSV, quantized to G = 9H + 3S + V and reshaped into the 72-bin histogram [1x72], and to gray scale, from which the GLCM texture vector GTEX = [E I S H] [1x4] is computed; the two are concatenated into a 76-dimensional feature vector [1x76].)
Scheme 2 (texture feature vector based on GLCM): Figure 4 illustrates the extraction of texture features using the GLCM. In this scheme the RGB images are converted to gray-scale images, and the GLCM method builds from each gray-scale image a symmetric matrix of probability values based on the distance and direction between pixels; the size of the matrix is determined by the number of gray levels. From the GLCM, the statistical features energy, contrast, entropy and inverse difference, equations (11) to (14), are computed to form a 4-dimensional texture feature, which is combined with the 72-bin colour histogram as sketched below.
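A sketch of the Scheme 2 descriptor, reusing the hsv_histogram, glcm and glcm_features helpers from the earlier sketches; the RGB-to-HSV and RGB-to-gray conversions shown here (matplotlib's rgb_to_hsv and a standard luminance weighting) are assumptions of the example rather than the paper's exact MATLAB calls.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def scheme2_vector(rgb):
    """76-dimensional feature vector: 72-bin HSV histogram + 4 GLCM statistics.

    rgb: float array in [0, 1] of shape (256, 256, 3).
    """
    hsv = rgb_to_hsv(rgb)
    hsv[..., 0] *= 360                                     # hue in degrees for quantise_hsv
    colour = hsv_histogram(hsv)                            # 72-dim colour feature (Scheme 1)
    gray = ((rgb @ np.array([0.299, 0.587, 0.114])) * 255).astype(np.uint8)
    texture = glcm_features(glcm(gray))                    # 4-dim texture feature
    texture = texture / np.linalg.norm(texture)            # normalised texture values
    return np.concatenate([colour, texture])               # [1x76] descriptor
```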
Figure 5: Derivation of the Feature Vector of Co-occurrence Matrix

(The figure shows the pipeline: the RGB image is converted to HSV and quantized into the 72-bin histogram G = 9H + 3S + V [1x72]; in parallel, co-occurrence matrices are computed on the R, G, H and I channels, giving GTEX = [E_R E_G E_H E_I, I_R I_G I_H I_I, S_R S_G S_H S_I, H_R H_G H_H H_I] [1x16]; the two are concatenated into an 88-dimensional feature vector [1x88].)
Scheme 3 (texture feature vector based on the colour co-occurrence matrix): The colour components R and G of the RGB colour space and I and H of the HSV colour space are each processed with a co-occurrence matrix in the 90° direction. The statistical features extracted from each co-occurrence matrix are energy, contrast, entropy and inverse difference, equations (11) to (14). In this way a 16-dimensional texture feature is obtained from the components R, G, H and I and their respective statistics E, I, S and H; a sketch of the full descriptor is given below. Figures 3-5 show the derivation of the feature vectors using HSV colour, GLCM and the co-occurrence matrix. The overall algorithms are given in Figure 6.
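A sketch of the Scheme 3 descriptor, again reusing the earlier helpers; the vertical offset is used for the 90° direction, and the grouping of the sixteen texture values by channel (rather than by statistic, as in Figure 5) is a presentational assumption of the example.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def ccm_90(channel, levels=256):
    """Normalised co-occurrence matrix of one channel for d = 1, theta = 90 degrees."""
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (channel[:-1, :].ravel(), channel[1:, :].ravel()), 1)
    return m / m.sum()

def scheme3_vector(rgb):
    """88-dimensional feature vector: 72-bin HSV histogram + 16 co-occurrence statistics."""
    hsv = rgb_to_hsv(rgb)
    hsv[..., 0] *= 360
    colour = hsv_histogram(hsv)                            # 72-dim colour feature
    # R and G from RGB, H and I (value) from HSV, scaled to 8-bit levels
    channels = [rgb[..., 0], rgb[..., 1], hsv[..., 0] / 360, hsv[..., 2]]
    texture = np.concatenate(
        [glcm_features(ccm_90((c * 255).astype(np.uint8))) for c in channels])
    texture = texture / np.linalg.norm(texture)            # 16-dim, normalised
    return np.concatenate([colour, texture])               # [1x88] descriptor
```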
Figure 6: Content Based Image Retrieval (CBIR) Based on HSV Colour, HSV & GLCM and HSV & CCM

(The figure is a flowchart of the three schemes: each COREL database image is resized to 256x256; the 72-bin HSV colour histogram Gmh (equations (1)-(5)), the normalized GLCM texture vector Gtex = [E I S H] (equations (10)-(14)) and the normalized co-occurrence texture vector GTEX = [E_R E_G E_H E_I ... H_R H_G H_H H_I] over the R, G, I and H channels are computed; the resulting feature vectors are compared using the Euclidean distance of equation (15) and the retrieved images are returned.)
a. Distance Calculation

The distance between two images is used to measure the similarity between the query image and the images in the database. The proposed method uses the Euclidean distance between the two feature vectors of the images:

$$d(P,Q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2} \qquad(15)$$

where $P = (p_1, p_2, \ldots, p_n)$ and $Q = (q_1, q_2, \ldots, q_n)$ are two points in an n-dimensional space.
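A minimal retrieval step with the Euclidean distance of equation (15); the array names and the choice of returning the ten closest images are illustrative.

```python
import numpy as np

def retrieve(query_vec, database_vecs, top_k=10):
    """Return the indices of the top_k database images closest to the query.

    query_vec     : 1-D feature vector of the query image
    database_vecs : array of shape (num_images, feature_dim)
    """
    d = np.sqrt(np.sum((database_vecs - query_vec) ** 2, axis=1))   # equation (15)
    return np.argsort(d)[:top_k]
```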
b. Method of Evaluation

The feature vectors of all images are calculated using HSV, GLCM and CCM, and the resulting feature vectors are stored in the database for comparison. In our proposed system, each retrieved image is checked against the category of the query image Q, and the accuracy is calculated with the precision measure shown in equation (16):

$$P_Q = \frac{N_{correct}}{N_{returned}} \times 100 \qquad(16)$$

where $N_{returned}$ is the number of images returned to the user after a query has been made and $N_{correct}$ is the number of returned images that belong to the same category as the query image Q.
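For the Corel data set of Table 1 the category of an image can be read directly from its index (100 images per category), so the precision of equation (16) for one query can be computed as in the short sketch below.

```python
def precision(returned_indices, query_index):
    """Precision P_Q of equation (16), assuming 100 images per Corel category."""
    query_category = query_index // 100
    correct = sum(1 for i in returned_indices if i // 100 == query_category)
    return 100.0 * correct / len(returned_indices)
```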
c. Graphical User Interface

We used MATLAB to develop the CBIR system. Figures 7-10 show screenshots of the CBIR system for HSV, HSV+GLCM and HSV+CCM, and the graphical user interface (GUI) used to control the system.

Figure 7: Graphical User Interface for HSV Colour Features
Figure 8: Graphical User Interface for HSV Colour Features and Gray Level Co-occurrence Matrix
Figure 9: Graphical User Interface for HSV Colour Features and Co-occurrence Matrix
Figure 10: GUI of the CBIR System for user control
The CBIR application provides the user with two options to query an image: the user can click on "Browse" and select a folder, which lists all the images it contains, or click on "Select Image". The system then performs the necessary processing and displays the ten best-matched images.

5. Experimental Results

The CBIR system was tested on an Intel Core 2 Duo P8700 processor running at 2.53 GHz with 3 GB of RAM.

Experiment 1: Feature Extraction Cost

Table 2 shows the time taken to extract features for all three schemes, and Table 3 shows the time taken to retrieve 10 images for each category in the database.

Table 2: Feature Extraction Time for All Three Schemes
                              Scheme 1 (sec)   Scheme 2 (sec)   Scheme 3 (sec)
COREL Database 1000 Images        109.64           171.75           211.26
Table 3: Time Taken to Retrieve 10 Images from the Database

Category      Scheme 1 (sec)   Scheme 2 (sec)   Scheme 3 (sec)
Africans           1.05             1.54             1.68
Beaches            1.08             1.52             1.61
Monuments          1.13             1.47             1.51
Buses              1.07             1.49             1.62
Dinosaurs          1.10             1.45             1.53
Elephants          1.06             1.49             1.63
Flowers            1.05             1.48             1.69
Horses             1.07             1.50             1.57
Mountains          1.01             1.47             1.53
Food               1.03             1.62             1.68
Experiment 2: Comparison with other authors' work

As shown in Table 4, the overall average precision of the global search technique for the top 10 images was 59.85%. The proposed system produced overall precisions of 84.32%, 82.92% and 82.7% for the three schemes, which is better than the global search technique.

Table 4: Global Search vs. HSV, GLCM and CCM

Category            Scheme 1   Scheme 2   Scheme 3   GIST Rudinac (2007)
Africans              82         85.3       86.33        71.5
Beaches               60         65         63           34
Monuments             67         74         76           40
Buses                 86.3       93         95.33        67.5
Dinosaurs             94         98         98.33       100
Elephants             62.33      69.33      71.3         54
Flowers               91.3       97         95.33        67
Horses                92.6       94         93.33        78.5
Mountains             68         69.3       65           30
Food                  84.6       84.33      84           56
Average Precision     84.32      82.92      82.7         59.85
For different numbers of top retrieved images, the average precision of the three schemes is shown in Tables 5 to 7.
Table 5: Average Precision for Different Numbers of Top Retrieved Images Using HSV

Scheme 1 (HSV Colour)
Categories           TOP 5    TOP 10   TOP 20   TOP 50   TOP 100
African              86.6     82       75.8     62.3     50.4
Beaches              66       60       51.5     40.1     32.1
Monuments            82       67       56.3     40.4     31.4
Buses                88       86.3     80.6     67.93    53.36
Dinosaurs            94       94       91.16    80.4     59.8
Elephants            76       62.33    50.5     35       26.1
Flowers              96       91.3     77.1     56.8     39.7
Horses               94       92.6     87.33    86.7     73.9
Mountain             76       68       58.66    46.6     39.1
Food                 84.6     83.6     77.1     64.66    50.9
Average Precision    84.32    78.71    70.6     58.8     42.67
Table 6: Average Precision for Different Numbers of Top Retrieved Images Using HSV Colour Space and Gray Level Co-occurrence Matrix

Scheme 2 (Gray Level Co-occurrence Matrix)
Categories           TOP 5    TOP 10   TOP 20   TOP 50   TOP 100
African              93.3     85.3     80       65       52.1
Beaches              72       65       55.16    42.12    34.2
Monuments            82.6     74       62.1     44.66    33.06
Buses                92.66    93       81.5     76.33    59.8
Dinosaurs            98       98       95.8     85.06    63.26
Elephants            76.6     69.33    53.16    37.33    27.13
Flowers              98       97       93.1     77.6     53.46
Horses               94.6     94       91.5     80.8     77
Mountain             79.3     69.3     71.6     44.6     35.4
Food                 86.66    84.33    74.5     64.33    51.26
Average Precision    96.57    82.92    85.42    61.78    48.6
Table 7: Average Precision for Different Numbers of Top Retrieved Images Using HSV Colour Space and Co-occurrence Matrix

Scheme 3 (Co-occurrence Matrix)
Categories           TOP 5    TOP 10   TOP 20   TOP 50   TOP 100
African              88       86.33    81.16    66.46    52.6
Beaches              72       63       53.83    43.73    36.5
Monuments            82       76       61.6     44.53    34.43
Buses                91.33    95.33    87       75.93    60.53
Dinosaurs            98       98.33    98.16    94.86    75.8
Elephants            84       71.3     56.33    95.6     46.3
Flowers              97.3     95.33    85.5     95.6     46.3
Horses               93.33    93.33    95.5     87.73    76.93
Mountain             72.66    65       56.66    45.2     39.6
Food                 91.33    84       82.66    63.55    48.06
Average Precision    86.9     82.7     66.84    65.7     49.87
It can be seen that Scheme 2 gives better results than the other schemes for the top retrieved images, although its computational cost is higher than that of Scheme 1.
6. Conclusion

Our method provides an approach to image retrieval based on the HSV colour space and texture characteristics. Through quantization of the HSV colour space, we combined the colour feature with the gray-level co-occurrence matrix and with the colour co-occurrence matrix separately, using a normalized Euclidean distance classifier. Our experiments indicate that image retrieval using both colour features and texture characteristics is superior to retrieval using a single colour feature, and that the integrated colour-and-texture representation has clear advantages. Scheme 2 showed better performance than the other schemes. In future work we expect to also use shape, together with colour, for image retrieval.
References

Alemu, Y., Koh, J., Ikram, M., and Kim, D. (2009), 'Image Retrieval in Multimedia Databases: A Survey', 5th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 681-689.

Choras, R.S. (2007), 'Image Feature Extraction Techniques and Their Applications for CBIR and Biometrics Systems', International Journal of Biology and Biomedical Engineering, 6-16.

Corel (2011), Corel online database, http://www-i6.informatik.rwth-aachen.de/dagmdb/index.php/Content-Based_Image_Retrieval

Huang, R.Y., and Mehrotra, S. (1997), 'Retrieval with relevance feedback in MARS', IEEE International Conference on Image Processing, New York, USA, 815-818.

Jurie, F., and Triggs, B. (2005), 'Creating efficient codebooks for visual recognition', 10th IEEE International Conference on Computer Vision, 1, 604-610.

Long, F., Zhang, H., and Feng, D.D. (2003), 'Fundamentals of Content Based Image Retrieval', in Multimedia Information Retrieval and Management: Technological Fundamentals and Applications, Springer, www.cse.iitd.ernet.in/~pkalra/siv864/Projects/ch01_Long_v40-proof.pdf

Kong, F. (2009), 'Image Retrieval using both colour and texture features', 8th International Conference on Machine Learning and Cybernetics, 12-15 July, Baoding, 2228-2232.

Manjunath, B.S., and Ma, W.Y. (1996), 'Texture features for browsing and retrieval of image data', IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8), 837-842.

Nandagopalan, S., Adiga, B.S., and Deepak, N. (2009), 'A Universal Model for Content-Based Image Retrieval', International Journal of Computer Science, 242-245.

Pass, G., and Zabih, R. (1996), 'Histogram refinement for content based image retrieval', IEEE Workshop on Applications of Computer Vision, 96-102.

Gonzalez, R.C. (2009), Digital Image Processing Using MATLAB, 2nd edition, Gatesmark Publishing.

Rudinac, S., Zajic, G., Uscumlic, M., Rudinac, M., and Reljin, B. (2007), 'Global Image Search vs. Regional Search in CBIR Systems', International Workshop on Image Analysis for Multimedia Interactive Services, 14-14.

Rui, Y., Alfred, C., and Huang, T.S. (1996), 'Modified descriptor for shape representation, a practical approach', 1st International Workshop on Image Database and Multimedia Search, 112-115.

Saber, E., and Tekalp, A.M. (1998), 'Integration of colour, edge and texture features for automatic region-based image annotation and retrieval', Journal of Electronic Imaging, 684-700.

Shen, H.T., Ooi, B.C., and Tan, K.L. (2000), 'Giving meanings to WWW images', Proceedings of ACM Multimedia, 39-48.

Singhai, N., and Shandilya, S.K. (2010), 'A Survey on: Content Based Image Retrieval Systems', International Journal of Computer Applications, 4(2), 22-26.

Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., and Jain, R. (2000), 'Content based image retrieval at the end of the early years', IEEE Transactions on Pattern Analysis and Machine Intelligence, 1349-1380.