Ontology Based Object Categorisation for Robots

Rogan Mendoza and Mary-Anne Williams
Faculty of Engineering and The Innovation & Technology Research Laboratory
University of Technology, Sydney
PO Box 123, Broadway NSW 2007
[email protected] and
[email protected]
Abstract

Ontologies are a powerful means for expressing and sharing knowledge in a meaningful way, and are becoming accepted as a viable modelling approach. The purpose of this paper is to enhance the representations used by robots by incorporating ontologies and implementing reasoning services that can exploit the information inherent in ontology based representations. Our objective is to explore the use of ontological concepts for object categorisation in agents, and issues related to the grounding of ontology based representations. The research is driven by the need to make progress towards a generalised solution to the grounding problem, which would allow intelligent agents to achieve more adaptive behaviours. Object categorisation is important for communication between agents because it plays an important role in supporting problem solving and the achievement of goals. Ontologies allow concepts to be shared meaningfully between agents, and this can enable interoperability between multiple heterogeneous systems. To illustrate our ideas we focus on the robot soccer domain, because it is an application where agents, namely robots, must make decisions, communicate and collaborate in a complex and dynamic environment.

Keywords: Ontology, Categorisation, Symbol Grounding, RoboCup Soccer.
1 Introduction
The grounding problem (Ziemke 1997, Coradeschi and Saffiotti 2003, Williams et al 2005) is a prevalent topic in Artificial Intelligence, in particular in the areas of knowledge representation, psychology and robotics. One of the key aims of this paper is to use the power of ontologies to improve the ability of agents to ground their representations, for the purpose of enabling the sharing of knowledge through object categorisation. The RoboCup robot soccer domain is a highly dynamic environment where AIBO robots (an AIBO is an autonomous robot in the form of a dog; see www.aibo.com for details) fight for possession of
an orange ball in order to pass it to team mates or shoot for goal. The robot soccer field is crafted and defined using white boundary lines and colour coded beacons. The robots process a vast array of sensory data in order to build a world model, i.e. a representation of the current state of the field. A robot must be able to recognise the goals, the ball and other players, to reason about their positions, and to select an appropriate action, which may involve searching for the ball, kicking the ball, and eventually shooting for goal.

The question is: how are low level representations, including programming instructions, related to the robot's domain and objectives? How can sub-symbolic representations of an action or event, e.g. sensor and motor data, be translated into symbolic representations that can be communicated to other robots and humans so as to be understood and used in a meaningful way? The grounding problem makes these kinds of questions central.

In this paper we propose an approach that incorporates the use of ontologies and related reasoning mechanisms as a means of addressing the grounding problem. Ontologies can be used to conceptualise an agent's world. They can be considered a collection of concepts and relationships, structured in a way that allows an autonomous agent to share information as well as derive new, previously implicit, knowledge and properties about particular concepts. We show how concepts within ontologies can be used in a way that bridges the gap between low level software code and sensory information on one side and high level concepts on the other, supporting the creation of meaning and the subsequent understanding of real world objects. The key advantages of an ontology based representation are that it enables knowledge sharing and communication and, as a result, leads to improved interoperability between autonomous systems.

Related work by Vogt (2003) shows that categorisation is an essential element in approaching the grounding problem. It has been shown that forming categories to represent sub-symbolic information or objects is a step towards solving the symbol grounding problem and is essential for communication. Categorisation is an important capability for robots: it allows them to deal with the complex information captured by their senses, to interact with their environment, and to manage their internal sub-symbolic representations. The more grounded an agent's ontology, the more effectively it can achieve its goals; therefore a soccer-playing robot that can categorise objects using its ontology will be more successful if that ontology is grounded. A grounded ontology will also assist agents to communicate more effectively and transparently.
The Semantic Web is emerging and fuelling the next generation of web based applications. The Web Ontology Language (OWL, www.w3.org/TR/owl-features) has become the World Wide Web Consortium (www.w3.org) standard markup language for the development of ontologies and ontology based web applications; OWL facilitates greater machine interpretability of web content than XML, RDF and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. This paper describes OBOC, an Ontology Based Object Categorisation system that can be implemented on an AIBO and used to enhance its object recognition and communication capabilities. We identify OWL classes and demonstrate how restrictions on classes and properties can be employed to determine relationships between different concepts. We also explore concept acquisition, where concepts are created in an ontology by an AIBO, and discuss the limitations and assumptions involved in such an approach. OWL is the ontology language used for the application, which was developed using Protégé and designed to use the reasoning services provided by the Racer inference engine (Racer Systems, www.racer-systems.com).
2 The Grounding and Anchoring Problem
The grounding problem needs to be addressed in the development of all intelligent agents. It concerns managing the relationship between representations and the entities they represent in a meaningful fashion. This relationship is important because it affects how an intelligent system can behave, how it can interact with its environment, and what it is capable of achieving. Clearly, robots need to be aware of their own grounding capability in order to exhibit robust intelligent behaviour. Anchoring (Coradeschi and Saffiotti 2003) has been identified as a form of grounding in which the representations concern physical objects only. Typically, anchoring is achieved by maintaining a link between internal representations of physical objects and the objects themselves in the real world. Harnad (1990) poses the question: "How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads?" In other words, how can the meanings of meaningless symbols be grounded in something that is understandable and meaningful? This is the challenge underlying the grounding problem.
Prior to Harnad, Searle (1980) put forward important arguments relating to the same problem. He describes a thought experiment known as the Chinese Room, where a person in a room, armed with a rule book for manipulating Chinese characters, has to respond to people outside the room asking questions in sequences of Chinese characters. The person in the room does not know the Chinese language, but simply uses a rule book written in a language he understands to answer Chinese questions with Chinese answers. Although the person is able to successfully answer all the questions in Chinese, he has no understanding of the Chinese language or of the meaning of the Chinese words he generates as answers. Searle uses the example to argue that computer programs are syntactic and lack the semantics that the human mind possesses, and thus cannot understand things as we do. The human mind represents and manipulates symbols, but those symbols possess semantics, i.e. meanings (Mayo 2003). Since no amount of syntax will ever produce semantics, Searle concluded that a purely syntactic program will never be able to understand what it is doing because it lacks intentionality, i.e. the ability to link internal representations to external objects or states (Ziemke 1997).

Harnad's (1990) response to the grounding problem was through iconisation and categorisation. Iconic representations involve forming internal projections, picked up through the senses, that allow us to discriminate between different objects. Categorical representations, on the other hand, involve identifying objects based on invariant features that are distinguishable. Combined, they form a grounded symbol and allow for reliable classification of category membership. The ability to combine and recombine categories and icons into propositions that can be reasoned with is an important aspect of grounded symbols; for example, one could combine the grounded symbols 'horse' and 'stripes' to form the concept 'zebra'. However, as Vogt (2003) points out, the categorical representations described by Harnad (1990) have no direct link to the real world. We refer to the notion of the meaning or semiotic triangle (Locke 2004) to make this direct link. Vogt uses the term semiotic symbol (Ogden 1923) to denote a physical symbol that is defined by a sign, or semiotic triangle, as shown in Figure 1.
Figure 1 Meaning Triangle (vertices: Meaning; Form, e.g. 'soccer ball'; Referent)

A sign is the relationship that exists between three elements. The referent denotes the actual physical object or thing that the sign represents. The form is the word or representation that the sign takes, which could be a word or a number. The meaning is the attached description and context associated with the sign. As Vogt (2003) explains, a semiotic symbol is by definition grounded in
itself because the symbol has an intrinsic relation to the referent whether or not the referent is perceived. For example, when you think of a form such as 'soccer ball', a corresponding meaning is brought up which in turn links that concept to the referent, what we perceive as an actual soccer ball; this relationship is indirect and also instinctive. Ontologies can provide a bridge between low level sub-symbolic percepts and high level symbolic representations. A concept in a robot's ontology can be defined by a semiotic or meaning triangle and used to categorise objects, thus forming a grounded symbol. The use of ontologies can lead to a generalised solution to the grounding and anchoring problems because of the universality of the technologies relating to the World Wide Web, specifically the Semantic Web.
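One way to picture a semiotic symbol computationally is as a record linking the three corners of the triangle. The following minimal Python sketch is illustrative only; the field names are ours, not taken from any implementation in this paper.

from dataclasses import dataclass

@dataclass
class Sign:
    """A semiotic sign linking the three corners of the meaning triangle."""
    form: str         # the word or symbol, e.g. 'soccer ball'
    meaning: dict     # attached description and context for the form
    referent_id: int  # handle to the perceived object in the robot's world model

ball_sign = Sign(form='soccer ball',
                 meaning={'hasColour': 'Orange', 'hasShape': 'Round'},
                 referent_id=42)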
3 AIBOs and the RoboCup

The RoboCup is an international robot soccer initiative. It involves an annual soccer competition where international teams of robots compete against each other in different conditions and competitions. RoboCup provides an inspiring challenge for researchers in AI and robotics because the soccer domain appeals to many people at all levels. We have targeted RoboCup as the domain in which to explore our ideas on object categorisation through ontologies and grounding because it offers the right mix of a complex and dynamic environment, the need for sophisticated object recognition, and the need to reason about the properties of objects. The UTS Unleashed! Robot Soccer System (www.unleashed.it.uts.edu.au) that competed in the Four-Legged RoboCup Competition in 2003 and 2004 was selected as the development platform because it allowed us to focus entirely on the design and implementation of the ontology and concept creation without having to develop all the required infrastructure, such as robot locomotion and vision.

The RoboCup environment contains a soccer field which has been crafted to assist the AIBO's sensor capability. Each team is made up of four robots, one being a designated goal keeper whose job is to prevent the orange-coloured ball from entering a colour coded box that represents the goal. Four dual-coloured beacons stand near the corners of the field and assist the robots to localise on the field during a match. Images taken from an AIBO robot's single CMOS camera are converted to YUV format, and each pixel is then classified using a set of colour concepts which give the robots the ability to recognise and differentiate between objects, see Figure 2.

Figure 2 Raw Camera Image and Extracted Objects in the Raw Image

Since the robots are already grounded in colour and able to detect it, i.e. they can form icons of objects, the task of categorising these objects and linking them to the real world is simplified. Categorisation involves partitioning objects into cognitively useful groups referred to as categories. The resultant categories can then be used to build knowledge; concepts can become predicates which are used to describe the world and its behaviour in a knowledge base. Concepts help robots reduce the complexity of the information they must manage.
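To make the colour-grounding step concrete, per-pixel classification can be sketched as a lookup over calibrated YUV ranges. The ranges below are hypothetical placeholders; real systems build calibrated lookup tables from training images rather than hand-set thresholds.

COLOUR_RANGES = {
    # (Y, U, V) min/max pairs; values are illustrative, not calibrated.
    'Orange': ((100, 160), (60, 110), (160, 220)),
    'White':  ((180, 255), (110, 145), (110, 145)),
}

def classify_pixel(y: int, u: int, v: int) -> str:
    """Map a YUV pixel to a symbolic colour concept, or 'Unknown'."""
    for colour, ((y0, y1), (u0, u1), (v0, v1)) in COLOUR_RANGES.items():
        if y0 <= y <= y1 and u0 <= u <= u1 and v0 <= v <= v1:
            return colour
    return 'Unknown'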
4 Related Work
In recent years the number of web based applications and systems that make use of ontologies has grown substantially. Ontologies and the Semantic Web have been seen by some in the artificial intelligence community as machine readable knowledge that, when perfected, will remove the need for human intervention because machines will be able to reason for themselves (Veltman 2002). This vision forms part of the motivation for our research. We are aware of no specific related work that uses ontologies and the Semantic Web to allow a mobile robot to dynamically categorise real world objects. Perhaps the most relevant research is by Schlenoff (2002), who described the use of an ontology of obstacles to aid in path planning and obstacle avoidance. Other applications of ontologies have mainly been demonstrated in systems used for image and document classification (Breen et al 2002, Schober et al 2004, Song et al 2004), communication and object mapping in mobile robots (Vogt 2003, Limketkai et al 2005), and object learning (Modayil and Kuipers 2004). Breen et al (2002) proposed an image classification system that uses an ontology to discover relationships between objects in an image to provide semantic meaning to the image. They implemented a domain-dependent ontology comprised of nodes and links that represented different sports and the objects associated with each sport. These nodes were then used to determine whether a set of recognised objects belonged to a particular sport, which was then used to classify the image. Ontologies, with their rich, structured taxonomies, enable real world objects to be represented by concepts and their related properties and individuals. Our approach uses superclass and subclass relationships and restrictions between concepts to determine whether a recognised object can be classified or categorised as a specific concept within the ontology. A third-party reasoning service such as Racer or FaCT++ can classify the ontology and output the classes that represent the category a recognised object belongs to. This is further explained in the following section.
5 OWL Ontology Capabilities
Concepts within an OWL ontology can have a set of properties that link them to instances or data types in other classes. This permits a concept Ball with the property hasColour to have the value 'Red' or 'Orange'. Concepts are structured in a way that allows parent-child and equivalence relationships to be defined between them. A concept can have a more generalised superclass while at the same time having subclasses that represent more specific aspects of the concept. Concepts can also have equivalent classes, which are represented by necessary and sufficient conditions, i.e. a defined class. Such a restriction states that in order for one class to be classified as another, it must meet that class's necessary and sufficient conditions. This is illustrated in Figure 3, derived from Horridge (2004).

Figure 3 A Defined Class in OWL (a Named Class linked bi-directionally to its necessary and sufficient conditions: Condition 1, Condition 2 and Condition 3)

The bi-directional arrow in the diagram means that, given certain conditions, the reasoner is able to determine whether a given class has a relationship with a named class and vice versa. Using this approach, any class within the ontology that satisfies the necessary and sufficient conditions of a named class can be classified as a subclass or an equivalent of that Named Class. This is the idea behind the ontology based object categorisation approach. If a recognised object is translated into a class in the ontology and then classified using a reasoner such as Racer, the reasoner will determine that the recognised object belongs to a certain class if it meets the defined conditions. Since the Named Classes in the ontology are defined, this method of object categorisation is precise and accurate because the properties of classes are strictly stated.
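The classification idea can be illustrated without a full description logic engine. In the Python sketch below, each defined class is reduced to a set of necessary and sufficient property conditions, and a recognised object to a set of asserted properties; an object is categorised as every named class whose conditions it satisfies. This is a deliberate simplification of what a reasoner such as Racer computes, and the class names and conditions are illustrative only.

DEFINED_CLASSES = {
    # Defined classes as necessary-and-sufficient property conditions
    # (a toy stand-in for the inferences a DL reasoner performs).
    'Ball':   {('hasColour', 'Orange'), ('hasShape', 'Round'), ('isMovable', True)},
    'Beacon': {('hasShape', 'Cylinder'), ('isMovable', False)},
}

def categorise(observed: set) -> list:
    """Return every named class whose conditions the observed properties satisfy."""
    return [name for name, conditions in DEFINED_CLASSES.items()
            if conditions <= observed]

print(categorise({('hasColour', 'Orange'), ('hasShape', 'Round'), ('isMovable', True)}))
# -> ['Ball']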
6 Some Challenges for an Ontology based Approach

In the RoboCup domain, the ontology based approach allows for two methods of categorisation, feature based categorisation and context based categorisation, which are explained in the following subsections. The ability for robots to create concepts in their own ontologies and share them for other robots to learn is another capability enabled through ontologies. However, there are issues and constraints with these methods, which are discussed below.

6.1 Feature Based Categorisation
All real world objects have specific features or properties that differentiate them from other objects. As humans, we are able to detect differences such as the colour of an object, its size, its shape, and unique parts of the object. Ontologies have defined classes that can be related to the real world. For example, you can define a class called 'Ball' and also define its properties: it is a spherical object, rolls on the ground, and has the colour orange:

Ball ≡ (hasColour Orange) ∩ (hasShape Round) ∩ (isMovable = True)

Using the same approach by which a human uses recognised features to categorise an object, we explore whether a robot with a specific domain ontology could successfully classify objects that it can perceive and conceive. A robot that can recognise certain properties of an object, such as colour, size and shape, would be able to use these properties to categorise it using the ontology. A key issue is that conflicts may occur if two different concepts in the ontology share the same information. For example, a round, yellow soccer ball and a lemon share common features, and an object could be categorised as both if the concepts in the ontology were not unique.
In the real world, object types tend to have unique properties that humans can distinguish; however, this is a challenging area for most computer based vision systems, where objects are recognised through shapes and line drawings (Strat 1992). Because of such limitations in object recognition, techniques are needed that can determine the most relevant concept matching an object, such as heuristic based pruning techniques that select relevant concepts based on weights (Breen et al 2002). In the current approach we instead assume that all concepts defined in the ontology are unique; this prevents the reasoner from inferring multiple concepts for the one object when the robot possesses perfect information. When robots have incomplete information it is reasonable for them to entertain multiple classifications for the purposes of reasoning, decision making and problem solving. Clearly, even humans can only distinguish objects correctly and uniquely if they have all the necessary information at hand to do so.
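The uniqueness assumption can itself be checked mechanically: if two defined concepts carry identical condition sets, the ontology licenses multiple categorisations for a single object. A sketch reusing the toy representation above (concept names are, again, illustrative):

from itertools import combinations

def find_conflicts(defined: dict) -> list:
    """Return pairs of concepts whose conditions make them indistinguishable."""
    return [(a, b) for a, b in combinations(defined, 2)
            if defined[a] == defined[b]]

# A yellow soccer ball and a lemon collide if modelled with the same features:
toy = {
    'YellowBall': {('hasColour', 'Yellow'), ('hasShape', 'Round')},
    'Lemon':      {('hasColour', 'Yellow'), ('hasShape', 'Round')},
}
print(find_conflicts(toy))  # -> [('YellowBall', 'Lemon')]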
6.2 Context Based Categorisation
Another human capability is contextual object categorisation, where our categorisation of objects is based on the contextual surroundings we perceive. Say, for example, that in a photograph we see a round, orange object. Our initial perception of the object is usually based on the context in which the object lies and on experience; i.e. we base it on the relationships and connections that may exist between objects, given what we know. If the object were surrounded by other fruit in a basket, or attached to a tree, we would conclude that the object is most probably the fruit orange. The same object on a basketball court or next to a ring would be categorised as a basketball. In both cases the percept of an orange round thing is similar, yet the classification is very different.

Ontologies work in a similar way, where a class can have certain relationships with other classes or instances. An ontology is able to represent that an animal such as the koala has its habitat in eucalyptus forests, or that a game of soccer is comprised of players, a ball, goals and a field. For example, we can model the goal box in the following way:

GoalBox ≡ (isBehindOf GoalKeeper) ∪ (isNear OwnBeacon)

Using this principle, a robot would be able to use an ontology to determine the relationships between recognised objects and successfully categorise them based on surrounding objects. This is similar to how some systems use ontologies for image classification (Breen et al 2002, Schober et al 2004) and for relational object maps (Limketkai et al 2005).
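Context based categorisation can be sketched in the same toy style: each concept lists the contextual relations it expects, and a recognised object is assigned the concept whose expected context best overlaps the relations actually observed. The relations and concept names below are illustrative, not drawn from the actual OBOC ontology.

CONTEXT_MODEL = {
    # Contextual relations each concept expects: (relation, related concept).
    'Orange_Fruit': {('isNear', 'Basket'), ('isNear', 'Apple')},
    'Basketball':   {('isOn', 'Court'), ('isNear', 'Ring')},
}

def categorise_by_context(observed_relations: set) -> str:
    """Pick the concept whose expected context overlaps most with what is seen."""
    scores = {c: len(rels & observed_relations)
              for c, rels in CONTEXT_MODEL.items()}
    return max(scores, key=scores.get)

print(categorise_by_context({('isNear', 'Ring'), ('isOn', 'Court')}))
# -> 'Basketball'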
6.3 Concept Learning and Sharing
One of the most important aspects of communication is the shared meaning of vocabulary and language. However, in a dynamic environment where the same object may be perceived and categorised differently by a range of systems and from differing points of view, it would be almost coincidental for two unsupervised robots to come up with the same name for the same object. If two robots are trying to communicate about a single recognised object, how can they share knowledge and information with each other if the object has been categorised differently by the two robots? Both concepts stand for the same object but with respect to different groundings (Sowa 2000). Ontologies are a powerful mechanism for defining semantic relationships between concepts that go beyond the syntactic level. Although two concepts may have different syntactic names, the subsumed properties and asserted conditions of each may be the same. The rules within an OWL ontology allow these to be logically inferred, and thus allow a robot to determine that two concepts actually represent the same object; a highly natural and powerful capability. Using this approach, two concepts or classes in an ontology and their associated properties can be compared. If the two concepts share the same properties then it can be inferred that the two are equivalent concepts, or that one is subsumed by the other. For example, the following concepts share the same superclass:

Robot 1 initial ball concept: Ball ⊆ RoboCupBall
Robot 2 ball concept: Ball ⊆ Pink_Ball
Robot 1 updated ball concept: Ball ⊆ RoboCupBall, Ball ⊆ Pink_Ball
Thus if a robot recognises a round, pink object and categorises it with a new concept called 'Pink_Ball', any robot that receives a message describing all the ontological features of 'Pink_Ball' should be able to 'learn' that this object is a type of 'Ball' and add it to its own ontology. However, this approach rests on several assumptions. Firstly, the robot creating the concept needs to be grounded in the concepts 'Pink' and 'Round' in order to combine the two. Secondly, if the properties of a concept were modelled as sensory data and measurements instead of worded symbols, it would be difficult to assign 'names' to such concepts. There would need to be a human element in this process to supervise how concepts are named; alternatively, the online clustering method described by Modayil and Kuipers (2004) could be applied. Finally, robots interpreting new concepts would need to understand what each property means in order to translate it into something they can recognise. Inferring the properties of a concept in the ontology is relatively trivial, but the real challenge is for the robot to 'understand' what each property represents. For example, an insect-categorising robot using its ontology to infer that a spider has eight legs still has to know what legs actually are in order to recognise them.
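The comparison of concepts by their properties can be sketched directly: equal property sets suggest equivalent concepts, while a strict subset relation suggests subsumption. The sketch below assumes properties are already expressed in a shared vocabulary, which, as noted above, is itself a strong assumption.

def relate_concepts(props_a: set, props_b: set) -> str:
    """Infer the semantic relation between two concepts from their property sets."""
    if props_a == props_b:
        return 'equivalent'
    if props_a < props_b:
        return 'A subsumes B'  # B carries all of A's properties plus more
    if props_b < props_a:
        return 'B subsumes A'
    return 'unrelated'

robot1_ball = {('hasShape', 'Round'), ('isMovable', True)}
robot2_pink_ball = {('hasShape', 'Round'), ('isMovable', True), ('hasColour', 'Pink')}
print(relate_concepts(robot1_ball, robot2_pink_ball))  # -> 'A subsumes B'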
7 Ontology Implementation and Design
The aim of this research is to implement an Ontology Based Object Categorisation (OBOC) system embedded in an AIBO, enabling the use of ontologies for object categorisation in autonomous systems. The design and development of this system are described below.
7.1 RoboCup Ontology
Using Protégé as the IDE, a domain-specific ontology has been developed for the RoboCup domain. The basic RoboCup Soccer ontology comprises a Concrete Object class, which represents all physical objects in the game such as goals, players, the ball and beacons, and an Abstract Object class, which represents the intangible aspects of the game of interest, such as colour and shape. This is the ontology used by the AIBO robot for classifying recognised objects as specific, defined concepts.
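As an indication of the hierarchy (not the actual Protégé class definitions), the top-level split of the ontology might be mirrored as follows; the subclasses listed are illustrative.

# A skeletal mirror of the RoboCup Soccer ontology's top-level split.
class Thing: pass

class ConcreteObject(Thing): pass   # physical objects on the field
class Goal(ConcreteObject): pass
class Player(ConcreteObject): pass
class Ball(ConcreteObject): pass
class Beacon(ConcreteObject): pass

class AbstractObject(Thing): pass   # intangible aspects of the game
class Colour(AbstractObject): pass
class Shape(AbstractObject): pass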
7.2 System Design
Figure 4 provides the foundation for the design model. The OBOC system will be capable of meeting the following requirements:
1. Object categorisation based on properties or features recognised by the system.
2. Object categorisation based on context.
3. Concept creation and learning of concepts from other ontologies and robots.
Figure 4 System Boundary Model of the OBOC System

The entities of the proposed system are described below.

Object Categoriser: an internal part of the AIBO perception system, responsible for categorising objects based on their properties and on context and surrounding objects.

Ontology Management: manages existing classes and creates new classes in the ontology. In addition, it services queries from the Concept Learner and communicates with the Ontology Reasoning entity to categorise a class based on its properties and/or context.

Concept Learner: queries the ontology for categorised concepts to infer identifiable properties and features of the object that a concept represents. For example, if the Concept Learner is provided with the concept name 'RoboCupBall' then it will access the Ontology Management entity to query the properties of that specific concept.

Concept Merger: responsible for identifying semantic relationships between concepts. Given multiple ontologies and concept names, it determines which concepts refer to the same object based on common properties and restrictions. This can allow heterogeneous systems to communicate about the same object.

Ontology Reasoning: performs queries on the ontology using the reasoning services of a third-party component, Racer, located on a server. The queries determine whether a recognised object, or a set of categorised concepts, can be successfully categorised as a defined concept in the ontology.
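The division of responsibilities might be captured by interfaces along the following lines; this is a sketch of the boundary model only, and all method names are hypothetical.

class OntologyReasoning:
    """Wraps classification queries sent to an external reasoner such as Racer."""
    def classify(self, properties: set) -> list:
        ...  # query the reasoner server; return matching defined classes

class OntologyManagement:
    """Manages existing classes and creates new classes in the ontology."""
    def __init__(self, reasoner: OntologyReasoning):
        self.reasoner = reasoner
    def query_properties(self, concept: str) -> set:
        ...  # look up the asserted properties of a concept
    def add_concept(self, name: str, properties: set) -> None:
        ...  # create a new class in the ontology

class ConceptLearner:
    """Infers the identifiable features of an object from a named concept."""
    def __init__(self, ontology: OntologyManagement):
        self.ontology = ontology
    def learn(self, concept: str) -> set:
        return self.ontology.query_properties(concept)

class ConceptMerger:
    """Determines which concepts from different ontologies denote the same object."""
    def same_object(self, props_a: set, props_b: set) -> bool:
        return props_a == props_b or props_a < props_b or props_b < props_a

class ObjectCategoriser:
    """Categorises recognised objects by their features and context."""
    def __init__(self, reasoner: OntologyReasoning):
        self.reasoner = reasoner
    def categorise(self, properties: set) -> list:
        return self.reasoner.classify(properties)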
7.3 Ontology Based Categorisation Tool
A proof-of-concept tool has been developed that allows the different capabilities discussed in section 6, along with the design in section 7.2, to be implemented and evaluated. The Protégé API was used to access and manage the ontologies, while the Racer server was used as the reasoner to perform all the necessary reasoning services. The tool allows users to interact with the RoboCup Soccer ontology by entering properties via a graphical user interface, and the system outputs the correctly categorised objects. Although this tool has been developed as a stand-alone application, the methods and techniques developed will be critical when the system is realised within the AIBO robot.
8 Conclusions and Further Research
The approach described in this paper has shown that ontologies can provide the structure and mechanisms that enable an intelligent system such as an AIBO robot to ground its sensory and motor information in real world objects, and to categorise objects through automated reasoning. This has also been partly demonstrated by the development of the Ontology Based Categorisation tool. Although the level of 'understanding' an AIBO robot would gain is not comparable to human capabilities, the use of logic in intelligent systems for reasoning about real world objects is a step in the right direction.

There are important assumptions and constraints in this approach. Firstly, the system must be able to interpret concept properties and to translate the recognised properties of an object into the relevant properties of a class. For now, this entails 'hard coding' certain aspects to ensure the system can interact with the ontology; these have been explored in the Ontology Based Categorisation tool. Secondly, not every object in the real world is colour coded and unique to the naked eye. Although we as humans can say that if two object types appear identical then they are for all intents and purposes the same, a robot with limitations in vision and hardware cannot distinguish every property and feature of a recognised object. This leads to the problem where concepts in an ontology can share the same properties. Techniques must therefore be applied to select the most relevant concept when multiple concepts are inferred through reasoning; one approach is a heuristic based pruning method where concepts are selected based on specific weights.

The next step is to implement the design in the actual RoboCup environment. There are limitations with the hardware of the AIBO robot as well as with the techniques associated with using ontologies for object categorisation. Implementing the system will allow us to demonstrate and analyse how ontologies can improve the groundedness of a system, and to exhibit the benefits of reasoning capabilities in dynamic and complex environments like RoboCup soccer. Beyond pure object categorisation, future work will extend the ontology with a broader range of abstract concepts that a robot could exploit to model the intentions of other robots, assisting robot soccer players to anticipate opposition players' behaviour based on recognisable movements, e.g. to execute an intercept.
9 References
Breen, C., Khan, L. and Ponnusamy, A. 2002, 'Image Classification Using Neural Networks and Ontologies', Proceedings of the 13th International Workshop on Database and Expert Systems Applications, Sept 2-6, pp. 98-102.

Coradeschi, S. and Saffiotti, A. 2003, Special issue on perceptual anchoring: anchoring symbols to sensor data in single and multiple robot systems, Robotics and Autonomous Systems, vol. 43, no. 2-3.

Gärdenfors, P. and Williams, M-A. 2003, 'Building Rich and Grounded Robot World Models from Sensors and Knowledge Resources: A Conceptual Spaces Approach', Proceedings of the International Symposium on Autonomous Minirobots for Research and Edutainment.

Harnad, S. 1990, 'The Symbol Grounding Problem', Physica D: Nonlinear Phenomena, vol. 42, pp. 335-346.

Horridge, M. et al. 2004, A Practical Guide to Building OWL Ontologies Using the Protégé-OWL Plug-in and CO-ODE Tools, ver. 1.0, viewed 12 October 2004.

Limketkai, B., Liao, L. and Fox, D. 2005, 'Relational object maps for mobile robots', Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 1471-1476.

Locke, J. 2004, An Essay Concerning Human Understanding (1690), viewed 12 October 2005.

Mayo, M.J. 2003, 'Symbol Grounding and its Implications for Artificial Intelligence', in Oudshoorn, M.J. (ed.), Proceedings of the Twenty-Sixth Australasian Computer Science Conference, Conferences in Research and Practice in Information Technology, vol. 16, pp. 55-60.

Modayil, J. and Kuipers, B. 2004, 'Bootstrap learning for object discovery', IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 742-747.

Ogden, C.K. and Richards, I.A. 1923, The Meaning of Meaning, 8th edition, Harcourt, Brace, and World, New York.

Schlenoff, C. 2002, 'Linking Sensed Images to an Ontology of Obstacles to Aid in Autonomous Driving', Proceedings of the 18th National Conference on Artificial Intelligence: Workshop on Ontologies for the Semantic Web.

Schober, J.P., Hermes, T. and Herzog, O. 2004, 'Content-based Image Retrieval by Ontology-based Object Recognition', Proceedings of the KI-2004 Workshop on Applications of Description Logics (ADL-2004), Ulm, Germany, September 2004.

Searle, J.R. 1980, 'Minds, brains and programs', Behavioral and Brain Sciences, vol. 3, no. 3, pp. 417-457.

Song, M.H., Lim, S.Y., Park, S.B., Kang, D.J. and Lee, S.J. 2004, 'Ontology based Automatic Classification of Web Pages', International Journal of Lateral Computing, vol. 1, no. 1, viewed 12 October 2005.

Sowa, J.F. 2000, Ontology, Metadata, and Semiotics, viewed 2 August 2005.

Strat, T.M. 1992, Natural Object Recognition, Springer-Verlag, New York, USA.

Veltman, K. 2002, 'Challenges for the Semantic Web', viewed 22 July 2005.

Vogt, P. 2003, 'Anchoring of Semiotic Symbols', Robotics and Autonomous Systems, vol. 43, no. 2, pp. 109-120.

Williams, M-A., Gärdenfors, P., Karol, A., McCarthy, J. and Stanton, C. 2005, 'A Framework for Evaluating Groundedness of Representations in Systems: From Brains in Vats to Mobile Robots', IJCAI Workshop on Agents in Real-Time and Dynamic Environments.

Ziemke, T. 1997, 'Rethinking Grounding', Proceedings of New Trends in Cognitive Science: Does Representation Need Reality?, Austrian Society for Cognitive Science, Vienna.