Context Modeling in CBIR

Patrick BRÉZILLON*1, Daniel RACOCEANU

Abstract
The notion of context is key to problem solving in many domains, especially in Content-Based Image Retrieval (CBIR). However, the word "context" is often used like the words "concept" or "system", i.e. without a clear definition. This problem has been encountered in artificial intelligence and addressed with a context model and a software tool called contextual graphs. In this paper, we point out that there are several types of context in CBIR; thus, to use context efficiently, we first need to identify and model it correctly. We then show that it is possible to improve the different steps of CBIR processing. We illustrate this point on two steps, namely the management of the user's query and disease diagnosis from images, thanks to methods and tools coming from artificial intelligence.

Keywords: context modeling, contextual graphs

1. Introduction

The Onco-Media project (Ontology and Context related Medical image Distributed Intelligent Access) aims: (1) to develop a novel grid-distributed, contextual and semantic-based, intelligent information-access framework for medical images and associated medical reports, focusing on robust visual indexing and retrieval algorithms for medical images; robust fusion-indexing techniques for medical images and associated medical reports; a grid-distributed medical image retrieval application that links the medical concepts of images and text documents based on a medical ontology, using intelligent methods; and methods for context-sensitive navigation and querying; (2) to explore new applications for medical image diagnosis assistance, teaching and research access, using semantic, visual and context-sensitive medical information with grid computing facilities; and (3) to crystallize a network of research excellence in the field of distributed medical image access among Asian, French and French-speaking Swiss partners, leveraging their complementary scientific strengths and experience.

This paper discusses the aspects of the project related to the notion of context. Context is currently a buzzword used across a number of domains, and the word also appears in CBIR, although with a number of different acceptations, because the concept is used in an ad hoc rather than a rational way. An important problem is that one cannot speak of context in an abstract way, but only in relation to a focus [5]. In CBIR, the focus can take drastically different forms: the user's query, the organ concerned, the type of disease, the type of image, or the reasoning and interpretation that can be led from such data. Our claim is that context must be modeled carefully in the CBIR domain in order to be used efficiently.

In this paper, we first make a brief survey of the CBIR literature to point out how the word context is considered there, generally without effective use of context in the problem of image management. Then, we show how the notion of context is modeled in artificial intelligence. Finally, we show how context can be modeled in two situations in CBIR,

*1 Department of Computer Science, University Paris 6, 104 avenue du Président Kennedy, 75016 Paris, France. E-mail: [email protected]

namely the management of a user’s query (i.e. context modeling at the level of the domain knowledge) and the support of image interpretation in CBIR.

2. How context is considered in the CBIR area

The literature has contained references to context-based studies for a long time. Torralba [21] outlines the history of visual context modeling and points to such works. Studies by Biederman et al. [3] and Palmer [18] highlight the effect of contextual information on processing time in object recognition. However, authors generally suppose a shared understanding of the word context, in the same way as for the words "concept" and "system." Indeed, these words are not defined, and their meaning differs from one author to another, and also for the reader.

1) COBWEB (Context-based Image Retrieval on the Web) was a research and development project co-financed by the ESPRIT Programme of the EU (ESPRIT project 28773). COBWEB aimed to provide an innovative solution to the problem of cost-effectively filing and retrieving huge numbers of still images and of allowing remote access to image databases via the Internet. Building upon an object-oriented repository of images, COBWEB was presented as offering advanced features for image analysis, conceptual clustering, human-computer interfaces, image retrieval, and support for remote searches via the Internet. The goal was to create a system for automatically storing and retrieving images from large image databases distributed over the Internet [22]. It was a threefold project with: (1) development of effective algorithms for automatically storing the visual features of images, usable as a means for retrieving similar images; (2) creation of a simple, cross-platform user interface that helps the user perform queries and retrieve images in the most natural way; and (3) development of a robust communication subsystem that guarantees satisfactory retrieval times. The approach is relatively classical and heavily algorithm-based, with no real explicit reference to the role of context in the whole process. However, it is difficult to know the final status of the project, its Web site (http://cobweb.eunet.no/) having disappeared.

2) Content-based image retrieval (CBIR) is also known as content-based visual information retrieval (CBVIR). "Content-based" means that the search makes use of the contents of the images themselves, rather than relying on human-input metadata such as captions or keywords. Müller et al. [15] present a state of the art on CBIR and a systematic overview of the techniques used, visual features employed, images indexed and medical departments involved. The growing interest in CBIR arises from the limitations inherent in metadata-based systems. Textual information about images requires humans to describe every image in the database personally. This is impractical for very large databases, or for images that are generated automatically, e.g. from surveillance cameras. It is also possible to miss images whose descriptions use different synonyms. The ideal CBIR system from a user perspective would involve what is referred to as semantic retrieval, where the user makes a request like "find pictures of dogs" or even "find pictures of Abraham Lincoln". Current CBIR systems therefore generally make use of lower-level features like texture, color, and shape, although some systems

take advantage of very common higher-level features like faces (see facial recognition systems). Different implementations of CBIR address different types of user queries.
• With query by example, the user searches with a query image (supplied by the user or chosen from a random set), and the software finds images similar to it based on various low-level criteria. This is the most popular method, with the problem of having an appropriate starting image for querying.
• With query by sketch, the user draws a rough approximation of the image they are looking for, for example with blobs of color, and the software locates images whose layout matches the sketch.
• Other methods include specifying the proportions of colors desired (e.g. "80% red, 20% blue") and searching for images that contain an object given in a query image.
Wikipedia provides several applications and main references on this topic. In all these implementations, context is used implicitly. However, contextual elements like color must be considered in relation to the focus and to other contextual elements.

3) In medical image domains, the relative spatial distribution of pathological structures (a piece of contextual information) plays an important role in diagnosis [1]. This contextual information is represented in different ways: a two-dimensional representation in which the structures are represented by nodes with their attributes and their spatial relations by links with their attributes; or spatial histograms or correlograms. Correlograms have been applied to describe the distribution of color along with its spatial relations, or the relative distribution of the points of a curve in shape matching and retrieval. While content-based image retrieval is the most widely used method for searching large-scale medical image collections, this approach is not suitable for high-level applications, as human experts are accustomed to managing medical images based on their clinical features rather than primitive features [7]. Indeed, there is a need to associate bottom-up and top-down techniques, the results obtained by one technique being contextual information (and knowledge) for the other.

4) Attributed Relational Graphs (ARGs) are a formalism for contextual representation. Parts are represented as nodes and their spatial relationships as arcs. A pattern usually consists of several primitives among which various contextual relations are defined. An attributed relational graph is chosen to represent the samples of patterns; the pattern ARG models both the attributes of the nodes and the relations among them. The images are segmented, and each node of the ARG represents a segment of the image. The attribute of a node is the mean color (RGB) feature vector of the segment, and the adjacency relations among the segments are considered. The pattern ARG models are assumed to be contextual Gaussian-mixture models. Hong and Huang [10, 11] proposed an unsupervised method for extracting recurrent patterns from a single image; the algorithm uses the local context information of a pixel to predict the value of that pixel. An alternative is to extract features from images via low-level image preprocessing and use the extracted features to represent the images. Low-level image processing decomposes a pattern into several components (e.g. regions, edges). Nonetheless, the relations (e.g.
spatial constraints) between those components persist through the images and group those components together to form a pattern.
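As a rough illustration of the pattern-ARG construction just described, here is a minimal Python sketch, under the assumption that a segmentation step has already produced a label map; only the node attributes (mean RGB per segment) and the adjacency relations are built, while the contextual Gaussian-mixture modeling of [10, 11] is left out.

```python
import numpy as np

def build_arg(image, labels):
    """Build a minimal attributed relational graph (ARG) from a
    segmented image: one node per segment, attributed with the mean
    RGB vector; one edge per pair of 4-adjacent segments.

    image  -- H x W x 3 array of RGB values
    labels -- H x W array of integer segment labels (assumed given)
    """
    nodes = {}                       # label -> mean RGB feature vector
    for lab in np.unique(labels):
        nodes[lab] = image[labels == lab].mean(axis=0)

    edges = set()                    # unordered pairs of adjacent segments
    # Compare each pixel with its right and bottom neighbours.
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        diff = a != b
        pairs = zip(a[diff].ravel(), b[diff].ravel())
        edges.update(tuple(sorted(p)) for p in pairs)

    return nodes, edges
```

A matching step would then compare two such graphs, e.g. by an inexact graph matching tolerant to differences in node attributes.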

5) Context-based information retrieval has been considered by O'Sullivan et al. [17]. Two types of contextual information are important in Information Retrieval (IR): (1) at the querying step: for example, the interactions between users and systems, users' preferences and skills, or cultural and linguistic differences (the context in IR studies usually refers to this type of user information); and (2) during the process of data creation: for example, the background knowledge of the annotator, the work environment (e.g. amount of time and money), or the potential users (e.g. family members or the general public) may influence the characteristics of the resulting data collection.

There are, however, some problems. An extreme case might be the context of the same person at two different times. For example, suppose that the context information assigned to a photo as an annotation immediately after the photo is taken is "lovely silver car." Feelings and the clarity of memories may change over time, so when the same person tries to retrieve the image at a later date, the context at that time might be "boring black car." The presence of such embedded context is not obvious in IR. The problem is that the context of the user and the context of the image are confused. An annotation is a description of a given user's context: "lovely silver car" and "boring black car" are not directly attached to the image, but to the user. In ordinary textual documents, such contextual information is mixed with the thematic subject, since both the embedded context and the thematic subject are represented by words. In image retrieval, on the other hand, images are just signals, and the context of data creation is implicitly found in the images themselves. If images are annotated, their context can be accessed through the subjective words.

6) Among other approaches, there is the association of image and text. Westerveld [23] proposes to combine image features (content) and words from collateral text (context) into one semantic space by using Latent Semantic Indexing (LSI), a method that uses co-occurrence statistics to uncover hidden semantics. This method can be used for multi-modal and cross-modal information retrieval. Latent Semantic Indexing can outperform both content-based and context-based approaches, making it a promising approach for indexing visual and multi-modal data. Westerveld describes two different approaches to image retrieval, namely context-based and content-based image retrieval. In context-based image retrieval, the context of an image contains all the information that does not come from the visual properties of the image itself. For example, the place where you found an image, or the person who pointed you to it, can tell a lot about the information displayed in the image. Context here corresponds only to the textual information that comes with an image (the similarity between images is then based on the similarity between the associated texts, which in turn is often based on similarity in word use).

From this brief survey of the literature on the use of context in image processing, we have shown that authors (1) do not attach the same meaning to context, and (2) consider that context plays the same role in image processing and in image interpretation. This position is not correct, and it is important to analyze more carefully what context is.
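As a concrete reading of the LSI-based fusion of item 6 above, the sketch below concatenates visual feature vectors and term counts per document and projects them into one latent space with a truncated SVD. The matrix layout, the absence of term weighting, and the choice of k are our own simplifying assumptions, not a reconstruction of Westerveld's implementation [23].

```python
import numpy as np

def latent_space(image_feats, term_counts, k=10):
    """Project documents that carry both visual features (content) and
    collateral-text term counts (context) into one shared latent space
    via truncated SVD, in the spirit of Latent Semantic Indexing.

    image_feats -- n_docs x n_visual matrix (e.g. colour/texture bins)
    term_counts -- n_docs x n_terms matrix of word counts
    """
    X = np.hstack([image_feats, term_counts])      # one joint space
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]                        # latent document vectors

def most_similar(query_vec, doc_vecs):
    """Rank documents by cosine similarity to a latent query vector."""
    norm = np.linalg.norm
    sims = doc_vecs @ query_vec / (norm(doc_vecs, axis=1) * norm(query_vec))
    return np.argsort(-sims)
```

Because both modalities live in the same latent space, a text-only query can retrieve images and vice versa, which is the cross-modal retrieval mentioned above.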

3. Types of context to consider

1) Definitions
Brézillon and Pomerol [5] defined context as "what constrains something without intervening in it explicitly." The "something" was a problem-solving process for the authors; by extension, we now consider it as a focus for an actor.

Several elements justify this definition, the three main ones being that (1) context is relative to the focus, (2) as the focus evolves, its context evolves too, and (3) context is highly domain-dependent. As a consequence, one cannot speak of context in an abstract way. The focus divides context into external knowledge and contextual knowledge. The latter constitutes a kind of reservoir where contextual elements are more or less related to the focus at its current step, in a flat way, while the former has nothing to do with the focus at its current step. At this conceptual level, the focus acts as a discriminating factor on knowledge, in a way similar to social networks. The focus evolves because a new event occurs (e.g. an unpredicted event) or as a result of a decision made at the previous step. The notion of context concerns the relationships between knowledge pieces more than the pieces themselves.

Research on context in artificial intelligence is organized along two axes: reasoning models, represented in Contextual Graphs, and the instantiation of a part of the contextual knowledge, which is structured into a proceduralized context. The formalism of contextual graphs has been used in several real-world applications [5]. Two applications are presented along the first axis, addressing the diagnosis of a device and the collaborative building of the answer to a question. These applications make explicit the differences between the behavior prescribed by procedures (corresponding to the instructions) and actors' effective behaviors (users facing a problem with a device). This is directly in line with the prescribed and effective tasks identified by Leplat [14], and with procedures versus practices, as found in a number of applications such as road safety (a support for drivers' self-evaluation), medicine (a support for ischemia diagnosis) and software engineering (a support for assembling software pieces). Along the second axis, Brézillon and Brézillon [6] discuss the relationships between contextual knowledge and the proceduralized context in order to implement them in a computer system. To address the current status of the focus, the actor selects a subset of the contextual knowledge called the proceduralized context. In terms of contextual knowledge, the proceduralized context is an ordered series of instantiated contextual elements. The two key ideas here are the instantiation of contextual elements, which is also the link between the two axes, and the view of the proceduralized context as a buffer between the focus and the contextual knowledge.

2) Context in reasoning
Context-based reasoning has two parts: diagnosis and actions [5]. The diagnosis part analyzes the situation at hand and its context to extract the facts essential for the actions. The actions are undertaken in a foreseen order to accomplish the desired task. Sometimes, actions are undertaken even if the situation is not totally (or even at all) analyzed; for example, a driver engages a gear before any action or situation analysis. Other actions are carried out before the proceduralization of a part of the contextual knowledge. Thus, diagnosis and actions constitute a continuous interlocked process, not two distinct and successive phases of context-based reasoning. Moreover, actions introduce changes in the situation or in the knowledge about the situation, and imply a revision of the diagnosis, and thus of the decision-making process itself. As a consequence, there is a need for a context-based formalism offering a uniform representation of diagnosis and actions.

Contextual graphs provide a representation of this combination of diagnosis and actions. (A contextual graph represents a problem solving, or at least a step of it.) Diagnosis is represented by contextual elements. When a contextual node is encountered, an element of the situation is analyzed. The value of the contextual element, its instantiation, is taken into account as long as the situation is under analysis. Afterwards, this instantiation no longer matters, and the line of reasoning can merge again with the other lines of reasoning corresponding to the other instantiations of the contextual element. Thus, contextual graphs allow a wide range of diagnosis/action representations for a given problem solving. Contextual graphs are acyclic, due to the time-directed representation, which guarantees algorithm termination. Each contextual graph (and any sub-graph in it) has exactly one root and one end node, because the decision-making process starts in a state of affairs and ends in another state of affairs (not necessarily with a unique solution on all paths), and the branches express only different context-dependent ways to achieve this goal. This gives contextual graphs a general spindle structure. A path represents a practice developed by an actor, and there are as many paths as practices known by the system.

The elements of a contextual graph are: actions, contextual elements, sub-graphs, activities and parallel action groupings. An action is the building block of contextual graphs. A contextual element is a pair of nodes, a contextual node and a recombination node; a contextual node has one input and N outputs (branches) corresponding to the N instantiations of the contextual element. The recombination node is [N, 1] and represents the moment at which the instantiation of the contextual element no longer matters. Sub-graphs are themselves contextual graphs; they are mainly used to obtain different displays of the contextual graph by aggregation and expansion, as in Sowa's conceptual graphs [20]. An activity is a particular sub-graph that is identified by actors because it appears in the same way in different problem solvings. An activity is defined in terms of actor, situation, task and a set of actions; more precisely, an activity is a sequence of actions executed, in a given situation, to achieve a particular task that is to be accomplished by a given actor. A parallel action grouping expresses the fact (and reduces the complexity of the representation) that several sub-graphs must all be crossed before continuing, but that the order in which they are crossed does not matter, or that they could even be crossed in parallel. The parallel action grouping can be considered a kind of "complex context."
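To fix ideas, here is a minimal Python sketch of these node types and of the traversal of one path; the class names and the traversal policy (follow the branch matching the current instantiation) are illustrative choices of ours, not the Contextual Graphs software itself. Sub-graphs, activities and parallel action groupings are omitted.

```python
from dataclasses import dataclass, field
from typing import Optional, Union

@dataclass
class Action:
    name: str
    next: Optional["GraphNode"] = None        # single successor

@dataclass
class ContextualElement:
    question: str                             # e.g. "Is the white car stopping?"
    branches: dict = field(default_factory=dict)  # instantiation -> branch head
    # The recombination node is implicit here: every branch eventually
    # reaches the same successor, so the instantiation "no longer matters".

GraphNode = Union[Action, ContextualElement]

def follow(node, context):
    """Walk one path of a contextual graph: at each contextual node,
    take the branch matching the current instantiation in `context`,
    and collect the actions crossed. The resulting action sequence
    is one practice."""
    practice = []
    while node is not None:
        if isinstance(node, ContextualElement):
            node = node.branches[context[node.question]]
        else:
            practice.append(node.name)
            node = node.next
    return practice
```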

Figure 1: Contextual graph of the traffic situation. See ref [5] for details. Elements are defined in Table 1.

Figure 1 presents drivers' behaviors in the scenarios in the contextual-graphs formalism. Note that the contextual graph contains only the behaviors of the black-car driver. The description is more extended (e.g. there are two behaviors that lead to scenario "5") because we chose the black-car driver's viewpoint and not the viewpoint of an external observer. This will be the topic of another paper.

Contextual element   Definition
C1                   Is the white car stopping?
C2                   Is the white car going ahead?
C3                   Can I let the white car go ahead?
C4                   Can I overtake the white car on the left?

Action               Definition
A1                   Keep the same behavior
A2                   Interpret the behavior of the white car's driver
A3                   Note that the white car's driver stops at the road mark
A4                   Brake enough to let it go ahead
A5                   Change lane and overtake
A6                   Evaluate the situation
A7                   Try to brake strongly

Table 1: Contextual elements and actions while negotiating the crossroad of Figure 1
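Using the classes sketched above, a fragment of Figure 1 might be encoded as follows; the wiring between C1, C4 and the actions is guessed from Table 1 for illustration only, the actual graph being given in [5].

```python
# Hypothetical encoding of a fragment of Figure 1, wired from Table 1.
overtake = Action("A5: change lane and overtake")
brake    = Action("A4: brake enough to let it go ahead")

c4 = ContextualElement("Can I overtake the white car on the left?",
                       branches={"yes": overtake, "no": brake})

evaluate = Action("A6: evaluate the situation", next=c4)
keep     = Action("A1: keep the same behavior")

c1 = ContextualElement("Is the white car stopping?",
                       branches={"yes": keep, "no": evaluate})

# One instantiation of the context selects one practice (one path):
print(follow(c1, {"Is the white car stopping?": "no",
                  "Can I overtake the white car on the left?": "yes"}))
# -> ['A6: evaluate the situation', 'A5: change lane and overtake']
```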

3) Context in the dressing of image retrieval
In real-world applications, context appears as the "missing link" between the domain knowledge and the focus. As our work in CBIR is not yet mature enough, we present an application in road safety and show how this work will be transferred to the CBIR domain.

Brézillon and Brézillon [6] propose the representation of a simple intersection in terms of situation dressing. The domain knowledge contains elements like roads, lanes, traffic lights, country, city, lights, etc. To define a specific intersection, we must contextualize the domain knowledge ("Place" = "City", "Traffic lights" = no, etc.). Thus, the contextual element "Place" is instantiated to "City", which implies that some other domain elements become irrelevant (e.g. "Field of corn" is no longer an instantiation of "At the corner of the intersection") and others must be instantiated (e.g. "Type of building at the corner"). This kind of dressing of the intersection corresponds to a contextualization of the situation. This contextualization (and thus we go back to the first axis, on reasoning) leads to two types of inference rules. The first type concerns integrity constraints; for example, "Period of the day" = "Night" implies that the value "Sunny" is not relevant for the contextual element "Weather." The second type is composed of rules on what a driver must do in a given context; for example, "Period of the day" = "Night" implies that "Car lights" must be "Switched on." The latter rules constitute a kind of theoretical model of the behavior that drivers must adopt in the specific context (i.e. the dressing) of the situation, that is, for the given situation dressing, the current focus. Thus, a student can be given the same exercise (What do you do at the crossroad?) but will always have to reanalyze the situation in the light of randomly generated contexts.

A contextual element has an instance, which can be either a value or another contextual element. This makes the granularity of the description explicit, but supposes a mechanism for acquiring new concepts incrementally, because "we don't know what we need before having to use it". This leads to a mechanism of incremental

acquisition. Another consequence is that a contextual element is itself a list of elements, and this recursively. The situation above is described (very partially) as follows:

Physical elements
  Environment = "Countryside"
    Type of region = "Fields"
      Type of land
        Plant = "None"
        Pasture = "No animal"
  Environment = "City"
  ...

Lehmann et al. [13] follow a similar path in IRMA, a codification of images in medicine. Making context explicit additionally allows this classification to be used for other purposes, such as query building; however, we have no room here to develop the parallel. Brézillon and Brézillon [6] propose a formalism of representation based on the following model, which could be applied in medicine too:
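The original figure of the model is not reproduced here; the following Python sketch is our reading of it (note that it redefines ContextualElement for this dressing view, independently of the graph sketch above): an element's instance is either a value or, recursively, another contextual element.

```python
from dataclasses import dataclass, field
from typing import Union

@dataclass
class ContextualElement:
    """A contextual element whose instance is either a plain value or,
    recursively, another contextual element, which makes the
    granularity of the description explicit."""
    name: str
    instance: Union[str, "ContextualElement", None] = None
    children: list = field(default_factory=list)   # finer-grained CEs

# The partial dressing above, encoded with this model (the nesting is
# our reading of the flattened original; "City" would be an
# alternative instantiation of "Environment"):
environment = ContextualElement("Environment", instance="Countryside",
    children=[
        ContextualElement("Type of region", instance="Fields",
            children=[
                ContextualElement("Type of land", children=[
                    ContextualElement("Plant", instance="None"),
                    ContextualElement("Pasture", instance="No animal"),
                ]),
            ]),
    ])
```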

We have already seen that an instance may be a value or another contextual element (CE). The determination of a value is made from resources: a sensor (e.g. for the temperature), a database (e.g. the user's profile for the user's sex), a computation (e.g. the value of a dollar in euros), or a question asked of the user (e.g. "Do you like speed?"). The last point is that the instantiation of a contextual element may trigger or inhibit another contextual element. For example, choosing "Countryside" implies that "Type of building at the right corner" is not relevant, but that it is mandatory to instantiate "Type of field." In the same spirit, the instantiation "Corn" for the contextual element "Type of field" means that (1) corn is tall and hides the view of the black-car driver, (2) the visibility is strongly reduced, and (3) the driver must drive carefully. This implies that two types of rules must be considered. The triggering mechanisms are given by these two types of rules, namely integrity constraints and inference rules.

Integrity constraints are rules that describe relationships between contextual elements and their instantiations. For example, we can extend the previous example of the "Environment" with rules like:

IF "Item at the right corner" = "Field"
THEN look for "Season"
     IF "Season" = "Summer" THEN look for "Type of field"
     ELSE "Type of field" = "Reaped"
     IF "Type of field" = "Corn" THEN "Visibility of the right road" = "Null"
     ELSE "Visibility of the right road" = "Good"

(The last rule could be refined by taking into account, say, "Weather", for fog.) Another group of integrity rules concerns the relationships between contextual elements that rely on common-sense knowledge. For example:

IF "Moment of the day" = "Night" THEN "Weather" cannot be "Sunny"
IF "Weather" = "Rainy" THEN "Road state" must be "Wet"

Making such rules explicit in a decision support system allows the situation to be considered in a coherent context, identifying the important contextual elements that can explain a situation. Rules on the driver's behavior do not directly concern the instantiation of contextual elements, but the relationship between the instantiation of contextual elements and the driver's behavior. Examples of such inference rules are:

IF "Road state" = "Wet" or "Weather" = "Fog" or "Visibility" = "Weak"
THEN "Car speed" = "Reduced" and "Driver status" = "Vigilant"

IF "Type of day" = "Working day"
THEN IF "Time" = "Morning" and "Near intersection" = "School"
     THEN "Pay attention to children going to school"
          "Anticipate a possible hard braking"

IF "White car" = "Firemen's car" THEN "Give it priority anyway"

If we were to build a model of the theoretical behavior of drivers arriving at a simple intersection, we would just have to implement:

IF "Car on the road at right" THEN stop and give way

This is the rule in the Highway Code that corresponds to the situation alone. We show here that the contextualization of the situation (it is night, it is raining, etc.) leads to a richer model, not of the drivers, but of their behaviors. Context allows considering a task in its environment (a contextualized task). Another observation is that, while the model of theoretical behavior is unique for a situation, the model of the "prescribed behavior" of drivers is context-dependent, and thus there is a specific model of prescribed behavior for each context. This means that the "distance" between the prescribed task and the effective task can be defined more precisely.
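Such rules are simple enough to execute directly. Below is a minimal, assumed implementation in which a context is a dictionary of instantiations and each rule is a (condition, effect) pair fired to a fixed point; the "Unknown" resolution of the night/sunny conflict is a placeholder of ours, since the constraint only forbids a value.

```python
def apply_rules(context, rules):
    """Fire (condition, effect) rules against a context -- a dict of
    instantiations -- until a fixed point is reached. Effects either
    constrain the context (integrity) or prescribe behavior (inference)."""
    changed = True
    while changed:
        changed = False
        for condition, effect in rules:
            if condition(context):
                before = dict(context)
                effect(context)
                changed |= context != before
    return context

rules = [
    # Integrity constraint: at night the weather cannot be "Sunny"
    # ("Unknown" is our placeholder resolution).
    (lambda c: c.get("Moment of the day") == "Night"
               and c.get("Weather") == "Sunny",
     lambda c: c.update(Weather="Unknown")),
    # Integrity constraint: rain implies a wet road.
    (lambda c: c.get("Weather") == "Rainy",
     lambda c: c.update({"Road state": "Wet"})),
    # Inference rule on the driver's behavior.
    (lambda c: c.get("Road state") == "Wet" or c.get("Visibility") == "Weak",
     lambda c: c.update({"Car speed": "Reduced", "Driver status": "Vigilant"})),
]

print(apply_rules({"Moment of the day": "Night", "Weather": "Rainy"}, rules))
# The wet road inferred by the second rule then triggers the third one:
# the driver's speed is "Reduced" and their status "Vigilant".
```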

4. Conclusion

Endsley [8] established the well-known definition of Situation Awareness with its three levels: (a) perception of the elements, (b) comprehension of what those elements mean, and (c) use of that understanding to project future states. Our approach belongs to this realm. The notion of focus, which defines the contextual elements relevant in the current context, is a factor that improves the perception of Endsley's first level. Making explicit the distinction between the selected contextual elements and their instantiations may be associated with the second level (i.e. understanding the meaning of the elements). The identification of the rules of "good behavior" (and, by contrast, of "bad behavior") in the context of the situation allows the efficient decision making and prediction of the third level.

This paper gives a new view on the classical dichotomy of "prescribed task versus effective task." We have shown that the rules deduced from the instantiation of the contextual elements lead to a task model that concerns the contextualized situation, not the situation alone. This is important in terms of drivers' behaviors because, while the Highway Code addresses the situation (at a simple intersection, priority goes to the vehicle coming from your right), the task model that arises from the inference rules is able to adapt to the contextualized situation. For example, if the driver coming from your right stops his vehicle and indicates that you can go on, it is not necessary to stop and wait. Thus, drivers will learn a set of operational rules instead of a general rule. In other words, our approach supports drivers in developing an efficient model of practices instead of a task model (i.e. a theoretical model). It is more important to learn how to use a rule than to learn only the rule.

Acknowledgement
Funding was obtained from the ONCO-MEDIA project.

References
[1] Amores, J. and Radeva, P.: Retrieval of IVUS images using contextual information and elastic matching. International Journal of Intelligent Systems, Vol. 20, pp. 541-559, 2005.
[2] Amores, J., Sebe, N., Radeva, P., Gevers, T. and Smeulders, A.: Boosting contextual information in content-based image retrieval. Proceedings of ACM MIR'04, New York, USA, 2004.
[3] Biederman, I. et al.: Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology, Vol. 14, pp. 143-177, 1982.
[4] Bouaud, J., Séroussi, B. and Antoine, E.-Ch.: OncoDoc: modélisation et opérationalisation d'une expertise thérapeutique au niveau des connaissances. Plate-Forme IA, IC'99, pp. 61-69, 1999.
[5] Brézillon, P.: Context modeling: Task model and model of practices. Proceedings of the 6th International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT-07), Roskilde University, Denmark, 2007.
[6] Brézillon, J. and Brézillon, P.: Context modeling: Context as a dressing of a focus. Proceedings of the 6th International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT-07), Roskilde University, Denmark, 2007.
[7] Chen, L., Tang, H.L. and Wells, I.: Clinical content detection for medical image retrieval. Proceedings of the IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 2005.
[8] Endsley, M.R.: Toward a theory of situation awareness in complex systems. Human Factors, 1995.
[9] Gregory, L. and Kittler, J.: Using contextual information for image retrieval. 11th International Conference on Image Analysis and Processing (ICIAP'01), p. 230, 2001.
[10] Hong, P. and Huang, T.S.: Extract the recurring patterns from image. The 4th Asian Conference on Computer Vision, Jan 5-8, 2000, Taipei, Taiwan.
[11] Hong, P., Wang, R. and Huang, T.: Learning patterns from images by combining soft decisions and hard decisions. CVPR, 2000.
[12] Hudelot, C., Atif, J. and Bloch, I.: Fuzzy spatial relation ontology for image interpretation. IFA06, 2006.
[13] Lehmann, T.M., Schubert, H., Keysers, D., Kohnen, M. and Wein, B.B.: The IRMA code for unique classification of medical images. Proceedings SPIE 5033, pp. 109-117, 2003.
[14] Leplat, J.: Regards sur l'activité en situation de travail - Contribution à la psychologie ergonomique. Presses Universitaires de France, Paris, 1997.
[15] Müller, H., Michoux, N. et al.: A review of content-based image retrieval systems in medical applications - clinical benefits and future directions. International Journal of Medical Informatics, Vol. 73, pp. 1-23, 2004.
[16] Mylonas, P., Athanasiadis, T. and Avrithis, Y.: Image analysis using domain knowledge and visual context, 2006.
[17] O'Sullivan, D., McLoughlin, E., Bertolotto, M. and Wilson, D.: Context-oriented image retrieval. In: A. Dey et al. (Eds.): CONTEXT-05, LNAI 3554, pp. 339-352, 2005.
[18] Palmer, S.E.: The effects of contextual scenes on the identification of objects. Memory and Cognition, Vol. 3, pp. 519-526, 1975.
[19] Paslaru-Bontas, E.: Using context information to improve ontology reuse. Doctoral Workshop at the 17th Conference on Advanced Information Systems Engineering (CAiSE'05), 2005.
[20] Sowa, J.F.: Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks Cole Publishing Co., Pacific Grove, CA, 2000.
[21] Torralba, A.: Contextual influences on saliency. In: Neurobiology of Attention, 2005.
[22] Vlachos, M., Vardangalos, G. and Tatsiopoulos, C.: Effective ways for querying images by content over the internet. 10th Mediterranean Electrotechnical Conference (MeleCon-2000), Vol. 1, pp. 337-340, 2000.
[23] Westerveld, T.: Image retrieval: Content versus context. In: Content-Based Multimedia Information Access, RIAO 2000. http://citeseer.ist.psu.edu/westerveld00image.html