Agent-Based Services for the Semantic Web

H. Stuckenschmidt, I. Timm, C. Schlieder
Intelligent Systems Group, Center for Computing Technologies, University of Bremen
{i.timm, heiner, cs}@tzi.de

1. MOTIVATION
The semantic web has become an important research area devoted to the development of techniques and methods that enable intelligent applications to work on information content available on the World Wide Web. In a recent article, Berners-Lee, Hendler and Lassila describe the benefits of the semantic web as follows: "The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The effectiveness of such software agents will increase exponentially as more machine-readable Web content and automated services (including other agents) become available. The Semantic Web promotes this synergy: even agents that were not expressly designed to work together can transfer data among themselves when the data come with semantics." [27] This quotation emphasizes the duality of semantically grounded information on the one hand and agents that provide intelligent services on the basis of this information on the other. Currently, most research is concerned with the semantic annotation of information. Languages like RDF [5], OIL [10] or DAML+OIL (http://www.daml.org/2001/03/reference.html) are results of this effort. However, much less work has been done on the development of intelligent agents that really make use of semantically annotated information. Many semantic web applications like Ontobroker [11] or our own tool BUSTER [1] are rather monolithic systems that access web pages much like a conventional search engine, rather than being implemented as agents that inhabit the web and cooperate with other agents. In this paper we discuss the semantic web from a services point of view. We assume that semantic descriptions of
information in terms of ontologies and meta-data models exist and try to identify what kinds of services are needed on the semantic web. We further analyze the role of multi-agent architectures and systems in providing these services. It turns out that many important services are already provided by existing multi-agent standards. Services that still need to be defined are mostly concerned with the use and management of ontologies as well as the provision of sophisticated reasoning mechanisms. The paper is structured as follows: In Section 2 we describe services related to the integration and use of heterogeneous and distributed information as defined by the I3 reference architecture (http://ise.gmu.edu/I3_Arch/) and outline important services. In Section 3 we review the FIPA standard for building multi-agent systems (http://www.fipa.org) and compare it with the requirements identified in the previous section. In Section 4 we argue for the importance of ontologies for the implementation of important services not directly addressed by the FIPA standard. In Section 5 we describe a first implementation of a service on the basis of the FIPA-OS multi-agent environment. We conclude with a discussion of the current limitations of and further needs for providing agent-based services for the semantic web.
2. THE I3 REFERENCE ARCHITECTURE
In order to identify useful services for the semantic web, we refer to results of the DARPA I3 (Intelligent Integration of Information) programme. The goals of this programme are very similar to the ones promoted by the semantic web community, as it was set up to "...enable the development of large-scale, intelligent applications by providing the technology to transform disperse collections of heterogeneous data sources into virtual knowledge-bases which integrate the semantic content of disparate sources and provide integrated information, at the right level of abstraction, to end-user applications." [24] The programme proposes a reference architecture that defines the different services that have to be provided in order to achieve these goals. We briefly review these services and discuss their role in the context of the semantic web.
The I3 reference architecture defines five interacting families of services. An overview of these services is given in Figure 1. On the lowest level, wrapping services encapsulate heterogeneous information sources and provide simple access methods. Semantic integration and transformation services integrate information from different sources, using wrapping services as well as services from the functional extension family. Coordination and management services, finally, handle the configuration of information sources and of the different tools that have to be used in order to access and integrate the information from these sources. In the following we describe the services from these families in more detail.
Figure 1: Families of Services defined by the I3 Reference Architecture

2.1 Coordination Services

Coordination services provide support for finding a configuration of information sources and tools suitable for a given task. This support can be automatic to a certain degree. The services provided create the configuration, invoke components and manage the exchange of results on a high level of abstraction. The main coordination services are

• configuration construction
• tool selection and invocation

In order to provide the desired functionality, the coordination services mainly use services from the management family.

2.2 Management Services

Management services implement functionality used by the coordination services in order to find, configure and execute tools and information sources. These services are grouped into the following main functionalities:

• Resource Discovery
• Configuration Primitives
• Interpretation and Execution

Resource discovery means that management services locate useful information on the basis of descriptions of its content. Configuration primitives include similar mechanisms for finding useful tools as well as mechanisms for matching functionality against requirements. Interpretation and execution, finally, includes the scheduling of different components as well as the interpretation of exchanged information.

2.3 Integration and Transformation Services

The services from the third family provide support for the integration of different information sources and system components on a semantic level. They cover the range of integration tasks from physical integration in terms of caching and indexing up to the integration of processes. The main topics of these services are the following:

• Physical Integration
• Schema and Information Integration
• Component Programming and Process Integration

Tasks covered by these services are the integration of heterogeneous information structures as well as the aggregation, abstraction and transformation of information contents. Another focus is the re-use of software components, realized by high-level approaches to component programming and integration.

2.4 Functional Extension Services

In order to carry out complex integration and management tasks, services from the corresponding service families use so-called functional extension services. These are services that do not provide additional functionality with respect to the use of information; rather, they improve and augment other services that would be less effective otherwise. Among the functional extension services are the following:

• Active Services
• Inference Mechanisms
• Object-Orientation and Persistence

The 'active services' extension enables a service to provide support for a specific task without being explicitly invoked. The service rather perceives that it is needed and supplies its functionality pro-actively. Inference mechanisms can be used to derive information which is not explicitly known to the system. This additional information can be used to improve the results or the processes of other services. Object-orientation and persistence, in turn, are well-known concepts from software engineering that come with various advantages with respect to development and maintenance.

2.5 Wrapping Services

Wrapping services are used to make tools and information sources compliant with a certain required standard of the system. This may include syntactic and structural standards, e.g. the relational database model, as well as behavioral standards, e.g. the ODBC standard interface. The main wrapping services are:

• Syntax and structure wrapping
• Behaviour wrapping

Wrapping services are necessary to allow the semantic integration services from the corresponding family to identify correspondences on a semantic level. Further, wrapping information sources and tools significantly increases re-usability, which is important for scalability in large systems.
3. AGENT-BASED SERVICES FOR THE SEMANTIC WEB
In order to allow large-scale information sharing on the World Wide Web, we have to implement the services defined by the I3 reference architecture. As already seen in the introduction, the idea of the semantic web is to use intelligent agents that cooperate with each other, thereby forming an intelligent infrastructure for information use. We argue that many of the services described above are already covered by recent developments in the multi-agent community. In order to substantiate this claim, we discuss a common standard for multi-agent systems.
The Foundation for Intelligent Physical Agents (FIPA) proposes standards for heterogeneous agent systems, multi-agent interaction, and agent management (http://www.fipa.org/). The objective is to provide specifications for services allowing the commercial use of agent technology. The standard consists of several service specifications and descriptions; the major ones with respect to the semantic web are recalled in the following sections. The services specified by FIPA and implemented in FIPA-compliant agent toolkits enable the rapid development of flexible applications and services for the semantic web.
3.1 Coordination Services
The coordination of agents is crucial for the provision of agent-based services. In the context of information agents, coordination is realized by the communication of agents. Therefore FIPA defines an agent communication language (ACL) consisting of three specifications [13]: speech acts, protocols, and content languages. The smallest unit of communication is the speech act, the so-called communicative act, which is sent from one agent to another. Communicative acts are classified in ACL as performatives according to their intended interpretation, e.g. inform, query, request. The semantics of communicative acts are formalized in the notation of the semantic language (SL) [21] for a basic set of twenty different performatives. Sequences of speech acts are introduced for the coordination of agents. The concrete sequence of agent communication (a conversation) is constrained by the structure provided by the interaction protocols. The choice of an appropriate interaction protocol is made by the initiating agent of the conversation. The FIPA committee has defined eleven interaction protocols, which are dedicated to information sharing, bargaining and negotiation, and the coordination of actions [18]. The coordination services described in the last section can be implemented using the FIPA specifications for interaction and communicative acts.
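For illustration, a single communicative act in the FIPA ACL string syntax could look as follows; the agent names, content expression, and ontology value are hypothetical, but the slot structure (performative, sender, receiver, content, language, ontology, protocol) follows the ACL message structure specification [13].

    (inform
      :sender   (agent-identifier :name buster@platform)       ; hypothetical sender
      :receiver (set (agent-identifier :name broker@platform)) ; hypothetical receiver
      :content  "((= (price item-17) 100))"
      :language fipa-sl
      :ontology product-catalogue
      :protocol fipa-query)

The :language and :ontology slots make explicit how the receiver is supposed to interpret the content, which is exactly the hook the semantic web services discussed below attach to.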
3.1.1 Configuration Construction

The I3 architecture distinguishes two types of configuration construction services. The general type, the dynamic configuration construction service, receives a request for a specific service. It analyzes the request with respect to the services available. As a next step, the requested service is decomposed such that the available services can be matched against the decomposed service requests. Additionally, the construction service has to configure the services in order to meet the request. The static service construction, as a specialization of the dynamic one, omits the request decomposition task; it merely matches services against the service request and configures the service provider.

Figure 2: FIPA Reference Architecture

There are interaction protocols specified which can coordinate the process of configuration construction. The request for a service follows the brokering interaction protocol [15]. A sub-protocol is initiated by the configuration construction agent for the decomposition of a requested service. This sub-protocol follows the contract-net protocol introduced by Smith [31] and formalized within the FIPA standard [17]. The protocol deals with task decomposition and the assignment of the decomposed tasks.
3.1.2 Tool Selection and Invocation
Tool selection and invocation follows a different policy than the configuration construction service discussed in the last paragraph. Here the service seeks an agent capable of performing the request and returns the agent's name and address to the requesting agent. The corresponding FIPA service can be found in the directory facilitator, which implements a yellow-page service for agents [12]. Agents register with the directory facilitator, providing their name, address and offered services. On the basis of this information it can reply to service requests from other agents. The interactions used here are standardized in FIPA and follow the query-inform interaction protocol [20]. In contrast to the configuration construction service, the directory facilitator does not connect the requesting agent to the service-providing agent.
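Schematically, an agent advertises its capabilities by registering a description of itself with the directory facilitator. The following fragment is modelled on the agent-description style of the FIPA agent management specifications; all concrete names and service values are hypothetical.

    (df-agent-description
      :name      (agent-identifier :name translator@platform)  ; hypothetical agent
      :services  (set (service-description
                        :name term-translation
                        :type ontology-service))
      :protocols (set fipa-query)
      :languages (set fipa-sl0))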
3.2 Management Services: Configuration, Interpretation and Execution
The management services defined in the I3 reference architecture provide an infrastructure in which all tools can be executed, searched, and configured. The FIPA specification addresses these issues within the multiagent context. The reference architecture is illustrated in Figure 2. The message transport service guarantees the communication between the agents. All other elements are implemented as agents using ACL for communication. The agent management system (AMS) has the task of managing the activity states of the agents as well as their address information, in correspondence with the I3 architecture. Agents can look up address information on other agents as long as they know their names. For service location, the directory facilitator (DF) manages the offered services and implements the yellow-page service of the I3 reference architecture. The configuration of the system is done by direct interaction of agents with the AMS or the DF. The agent communication channel (ACC) is used for connecting one multiagent system with another, i.e. it links the AMSs of the systems. The FIPA architecture does not provide a detailed specification of the service descriptions within the DF; thus, it is difficult to use the DF in an intelligent way.
3.3 Integration Services: Physical Integration and Component Programming
As presented in the last paragraph, the FIPA reference architecture integrates a message transport mechanism (the message transport service) enabling agents to communicate with each other using an agent communication language. Different physical agent platforms can be cross-linked using the ACC. These layers of the FIPA agent reference architecture are independent of the network protocol layer and of the communication layers provided by the programming language. For information integration and process integration, the contents of communicative acts can be specified in standard languages. Four major content languages are defined: the constraint choice language (CCL), the semantic language (SL) based on multi-modal logic, the knowledge interchange format (KIF), and the resource description framework (RDF) [16].
3.4 Functional Extension Services: Active Services
Active behavior is an optional extension of services introduced in the I3 reference architecture. Each service specified in the framework of the FIPA specification is implemented as an agent. A major design criterion for agents is that they act autonomously and pro-actively [39]. Thus, the services are explicitly capable of reacting to requests and of initiating conversations autonomously.
3.5 Wrapping Services: Encapsulation of Services
Wrapper agents serve as an interface between external information sources and the system. They are an important element in the FIPA context and are defined as part of the software integration specification [14]. Wrapper services are provided by agents and allow agents to request dynamic connections to software, invoke operations on software systems, be informed of the results of operations, query properties of a software system, set parameters of software systems, subscribe to events, manage the states of external services, and terminate services.
4. THE ROLE OF ONTOLOGIES
In the last section we argued that many of the services defined by the I3 reference architecture are provided by multiagent systems. This observation underlines the importance of agents for the semantic web. However, we think that these standards have to be augmented by mechanisms for the interpretation of data semantics. In the following we discuss the remaining services that are not directly provided by a multi-agent architecture. It turns out that these services are directly connected to the ontological infrastructure of the semantic web, because they use meta-data annotations that are based on explicit ontologies, and ontology-based reasoning, in order to provide intelligent services.
4.1 Wrapping Services: Syntactic Standards
In order to be easily accessible, resources and services have to be wrapped. The corresponding services guarantee that all resources and services can be accessed in a uniform way. The FIPA operating system provides this service for services that are implemented by agents. What FIPA does not provide is a wrapping service for information about the environment the agents live in. In the case of the semantic web, this environment consists of web pages and web sites that are semantically enriched. So a wrapping service has to ensure that the semantic descriptions of web pages can be accessed uniformly. The approach that has been taken by the World Wide Web Consortium in order to guarantee uniform access is standardization. The extensible markup language XML [3, 9] has been developed as a standardized representation for encoding-related metadata. The Resource Description Framework RDF [5] does the same for contents-related metadata. Ontological information can be represented, up to a certain degree, using RDF Schema [4]. Recently, a new standardization effort, the DARPA agent markup language, has been set up; it is intended to provide a more expressive language for describing ontologies. We conclude that wrapping services in the original sense are not needed as long as we only talk about web pages as sources of information. The only requirement is that all information-sharing services on the semantic web comply with the standards promoted by the World Wide Web Consortium. This does not mean that all services have to restrict themselves to these standards. Some work has been done on annotation languages that extend existing standards: Broekstra et al. [6] extend RDF Schema with more expressive language constructs; Staab and Maedche [33] use RDF Schema in order to encode more expressive language elements.
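For illustration, a minimal RDF fragment attaching contents-related metadata to a web page could look as follows; the property vocabulary (ex:describes, ex:topic) and the resource URIs are hypothetical, while the enclosing syntax is standard RDF [5].

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:ex="http://example.org/schema#">
      <rdf:Description rdf:about="http://example.org/hotels/page1">
        <ex:describes rdf:resource="http://example.org/hotels/grand-hotel"/>
        <ex:topic>Accommodation</ex:topic>
      </rdf:Description>
    </rdf:RDF>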
4.2 Functional Extension Services: Inference
One of the main reasons for describing the semantics of information is to enable computers to understand and reason about information. In terms of information-sharing services, this goal corresponds to the inference service from the functional extension family. Although many people agree that agents should be capable of performing reasoning, inference services have not been standardized yet. We claim that ontology-based reasoning services are an important foundation for semantic web applications. Such services have already been developed and described in the literature. We briefly review some basic inference mechanisms that have been implemented and applied in the context of information sharing and the semantic web.
4.2.1 Consistency Checking
A difficult problem that has to be solved in the face of multiple sources of information are inconsistencies at different levels:

• the actual content of a page may be inconsistent with the describing metadata,
• the metadata may be inconsistent with respect to the underlying ontology,
• the ontologies of two pages may be incompatible with each other.

In order to solve this problem, inference mechanisms are needed that can be used for consistency checking. Solutions exist, like the Spectacle system, which can be used to validate metadata models on the basis of syntactic criteria [34]. On the ontological level, reasoners like the FaCT system [26] can be used to check the consistency of an ontology. This includes the verification of a metadata model with respect to the ontology, provided the model can be represented as an instantiation of concepts and relations defined in the ontology. The reasoner can also assist in comparing two ontologies and in identifying semantic correspondences [35] and inconsistencies [22].
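As a minimal illustration (our example, not taken from the cited systems), suppose the ontology declares reviews and articles to be disjoint, while the metadata of a page $p$ asserts both. A description logic reasoner detects the inconsistency by testing the conjunction of the two concepts for unsatisfiability:

$$\textit{Review} \sqsubseteq \neg \textit{Article}, \qquad \textit{Review}(p) \wedge \textit{Article}(p) \;\Rightarrow\; \textit{Review} \sqcap \textit{Article} \sqsubseteq \bot,$$

so no model exists in which $p$ satisfies the metadata, and the annotation is rejected.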
4.2.2 Classification
Another important reasoning service is classification. It can be used to categorize web pages as well as the content of a web page. The possibility to classify resources by their content is essential in the context of the World Wide Web, because it can help to reduce information to an amount that can still be handled, by sorting out resources that do not match certain criteria. Again, a distinction has to be made between classification on a syntactic and on a semantic level. Classification based on syntactic criteria, as implemented in the already mentioned Spectacle system, has the advantage that it can use existing structures on web pages (i.e. XML tag structures) for the classification [36]. Semantic classification relies on explicit representations of the information content of a web page. If such a description exists, description logic reasoners supporting A-Box reasoning (we currently use the RACER system [25] for web-page classification) can be used to infer the class of a web page on the basis of a structured representation of its content. If no explicit representation exists, text-mining and natural language processing techniques have to be employed in order to derive a description from the actual information content.
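As a small assumed example of such A-Box reasoning: given a concept definition for hotel pages and metadata assertions about a page $p$, the reasoner derives the class of $p$ without it being stated explicitly:

$$\textit{HotelPage} \equiv \textit{Page} \sqcap \exists \textit{describes}.\textit{Hotel}, \qquad \textit{Page}(p),\ \textit{describes}(p,h),\ \textit{Hotel}(h) \;\models\; \textit{HotelPage}(p).$$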
4.2.3 Querying
Besides the categorization of web pages and content, the ability to query contents, similar to a conventional database, is a beneficial inference service that could be provided on the semantic web. The idea is to use semantic annotations in order to locate specific information items and to interpret them in the context of the query. Again, approaches for supporting queries on web contents exist at different semantic levels. Recently, a specification for the XML query language XQL [28] has been released by the World Wide Web Consortium; it uses a purely syntactic approach for querying the content of an XML document based on the structure of the document. More interesting for the semantic web are query languages that use the semantics of documents as specified in the ontologies they are based on. Christophides and others, for example, describe a language for querying metadata models specified in RDF and RDF Schema [7]. Their approach has been implemented in the Sesame system, a repository for RDF models that can be queried over the web (http://sesame.aidministrator.nl/). Decker and others describe another RDF inference engine that uses techniques from deductive databases and logic programming in order to answer complex queries about the information contained in RDF-annotated web pages [8]. The system has been used in the Ontobroker system [11] and for the implementation of semantic community web portals [32].
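Schematically, such engines evaluate logic-programming rules over the triples of an RDF model. The following hypothetical rule (the predicate and vocabulary names are ours) retrieves all resources that are typed as publications and authored by a given person:

$$\mathit{answer}(X) \leftarrow \mathit{triple}(X, \texttt{rdf:type}, \textit{Publication}) \wedge \mathit{triple}(X, \textit{author}, \text{"Smith"}).$$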
4.3 Integration and Transformation Services
In order to achieve semantic interoperability in a heterogeneous information system like the World Wide Web, the meaning of the information that is interchanged has to be understood across the systems. This is normally achieved using explicit ontologies of information structure and content. Information integration and transformation services will make use of the wrapping and inference services described above. We distinguish three kinds of integration services, namely schema integration, information integration and process integration.
4.3.1 Schema Integration

Schema integration means that two data structures are mapped onto each other; in the case of the semantic web, these are different XML schemas or DTDs. A lot of work on schema integration has been done in the database community, and some of these approaches have been extended to semi-structured information. Most of these approaches use ontologies either as a global model or as a semantic description of the schemas that have to be integrated [37]. In cases where a global ontology is used, the elements of each schema are related to terms from the ontology. If multiple ontologies are used, these ontologies have to be integrated based on their semantics. For this purpose, many existing approaches make use of inference services, i.e. consistency checking and classification.
4.3.2 Information Integration

Information integration is a second service related to the integration of heterogeneous information sources. The basic assumption is that a piece of information can only be completely understood in its context. The context in turn depends on implicit assumptions made by the information provider. A typical example is the use of special scales (e.g. the currency of a price) or classifications (e.g. standardized product classifications). The process of re-interpreting a piece of information in a different context is called context transformation [30]. It has been shown that ontologies ease the process of context transformation, because they explicate assumptions about the meaning of certain classifications [38]. On the semantic web, the context of a piece of information could be derived from special XML annotations and interpreted on the basis of ontologies that define the meaning of the tags in terms of concept descriptions.
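As a minimal sketch of context transformation, the following Python fragment re-interprets a price given in one currency context in another currency context. The conversion table stands in for the ontology-based interpretation of the context annotation; only the DEM/EUR rate is the official fixed one, the USD rate is an assumed value for illustration.

    # Minimal sketch of context transformation for a price value.
    # A real system would derive the context from XML annotations and
    # interpret it via an ontology; here it is reduced to a currency tag.
    RATES_TO_EUR = {
        "DEM": 0.51129,  # fixed official DEM-to-EUR conversion rate
        "USD": 1.117,    # assumed rate, for illustration only
        "EUR": 1.0,
    }

    def transform_price(value, source_ctx, target_ctx):
        """Re-interpret `value` from the source currency context
        in the target currency context."""
        in_eur = value * RATES_TO_EUR[source_ctx]
        return in_eur / RATES_TO_EUR[target_ctx]

    # A price of 100 DEM, re-interpreted in a USD context:
    print(round(transform_price(100, "DEM", "USD"), 2))  # -> 45.77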
4.3.3 Process Integration

Process integration is the most challenging of the specified integration services. It involves more than just providing interfaces between different components, which could be done using ontologies; it also has to reconcile the different processes implemented by the single components. For the special case of knowledge-based systems, this problem is addressed in the IBROW project [2], and first results have been achieved with the on-demand configuration of different reasoning components over the internet. The IBROW approach is based on ontologies that are supplemented with descriptions of the competence of single components. This approach can probably be generalized to information-intensive processes on the semantic web. Another main pillar of a more general service for process integration could be the PSL effort [29], which defines an ontology and a specification language for processes.
4.4 Management Services: Resource Discovery
While most of the management services that are concerned with the management of services are part of the FIPA standard or even implemented in the FIPA operating system, the management of information sources, i.e. of web sites and pages, has to be implemented explicitly. Here, the first and foremost problem is the location of information sources, referred to as resource discovery. The architecture mentions three different kinds of resource discovery services, namely white pages, yellow pages and smart yellow pages.
4.4.1 White Pages
White pages are simple indexes containing references to available services and information sources that can be used to check which resources are available. The FIPA operating system provides such a white-page service for agents, namely the agent management system. For information resources, such a white-page service is provided by uniform resource locators (URLs) and name servers that resolve the URLs and provide access to the corresponding pages.
4.4.2 Yellow Pages
Yellow pages are services that enable users and systems to access resources based on a description of the information or function they provide. This kind of reference to pages and services is more sophisticated than a pure white-page service, as it does not only contain a list of resources, but also organizes them in an intelligent way and allows one resource to provide different kinds of information or function. This kind of resource discovery service strongly benefits from the use of ontologies, because they explicate the categorization used to group and identify useful resources.
The directory facilitator of the FIPA operating system is such a yellow-page service for agents and their capabilities. In the case of information sources, yellow pages have become popular in terms of web portals like Yahoo!, whose directory of topics is often referred to as the most often used ontology on the web. Similar mechanisms are used on marketplaces in both the business-to-business and the business-to-consumer area, where products and services can be accessed via explicit classifications. Yellow-page services can be generated semi-automatically using the inference services described above. Especially classification methods are useful in this context, because they can be used to assign web pages to a certain category.
4.4.3 Smart Yellow Pages
Smart yellow pages take the idea of using semantics in order to access resources a step further. While yellow pages in most cases only use simple assignments of resources to classes, smart yellow pages use inference services in order to find a suitable resource based on the needs specified by a user. From a technical point of view, the difference to a yellow-page service is the use of inference services at run time. The benefit for the user is greater flexibility, because a query is not restricted to predefined classes. A good example of a smart yellow-page service is the OntoSeek system [23]. OntoSeek uses a linguistic ontology to describe products in a catalogue. Product descriptions are labeled graphs, where the labels are taken from the ontology. Queries are also formulated in terms of labeled graphs that are matched against the product descriptions. A linguistic thesaurus is used to extend the search with synonyms. We expect smart yellow pages to become one of the most important applications on the semantic web, because it has been shown that existing technology is mature enough to provide solutions based on RDF and RDFS or other ontology representations.
5. THE ONTOLOGY AGENT: AN EXAMPLE IMPLEMENTATION
In the last two sections we argued that services for the semantic web benefit from a combined use of multi-agent systems and ontologies. We argued for using the FIPA operating system because it already implements some important services, especially from the coordination and management families. We also emphasized the need for ontologies as a base technology for the implementation of resource discovery and semantic integration services. In order to give evidence for this claim, we began to implement ontology-based services using the FIPA operating system. As a first trial, we used the semantic translation module of the BUSTER system to implement a prototype of the ontology agent described in the FIPA ontology service specification [19].
5.1 The FIPA-OS Toolkit
The main objective of the FIPA standardization is to support agent and multiagent system interaction. Considering the heterogeneous developments of agent architectures, communication protocols and behavior modelling, FIPA aims at specifying the needed services as well as concepts for the implementation and integration of agents and multiagent systems. It specifies the agent platform concept (cf. Figure 2), which introduces four mandatory services: the message transport service, the directory facilitator, the agent management system, and standardized services for the agent communication language. Implementations of multiagent systems which follow these specifications should be able to communicate and interact with other FIPA-compliant agent platforms.

One implementation covering most of the basic functionality is the FIPA open source development (FIPA-OS) of Nortel Networks (http://fipa-os.sourceforge.net/). In contrast to many 'closed' implementations, which are under development mainly by FIPA members, FIPA-OS provides freely available and modifiable source code. The aim of FIPA-OS is to enable developers to adopt FIPA without implementing the specification themselves. It consists of four components: the agent platform, the agent shell, configuration utilities, and an 'empty' reference agent. The FIPA-OS agent shell is a layer which contains the components necessary for the development of agents and their interaction, as well as their interaction with the agent platform. It provides the elementary concepts for the agent communication language, the message transport services suggested by FIPA, an elementary component for task handling, and an automatic conversation manager. The agent platform implements the FIPA-conformant services agent management system, message transport service, and directory facilitator on top of the agent shell. It also contains an agent loader which is capable of loading and unloading agents dynamically during the runtime of the system. Configuration tools are provided for the easy and guided modification of agent platform parameters for the development as well as the deployment of agents. Within this layer, a cross-federation of agent platforms is supported; thus, agent platforms can easily be linked together. The starting point for developers is the initial 'empty' reference agent, which implements all concepts needed for registering with the agent platform, performing tasks and conversations, receiving and sending messages, and parsing message contents in different content languages (SL, RDF). A concrete agent inherits from this reference agent. For the application of agents in heterogeneous environments it is necessary to interact with other agents and agent platforms. On the syntactical level, FIPA provides and FIPA-OS implements sufficient support. In order to adapt agents to a specific domain and to solve real-world problems, however, it is not sufficient to enable communication between agents on a syntactical level only; semantic integration is essential. Different ontologies may be used for the semantic grounding of communication. For the case that agents do not use the same ontology, FIPA proposes a special ontology service specification on an abstract level [19]. The implementation of FIPA-OS covers the mandatory part of the FIPA specification to ensure technical compatibility of agent platforms. No other or optional services (e.g. the ontology service specification) are implemented.

5.2 The FIPA Ontology Agent

According to the FIPA specification, the ontology agent is a special agent providing ontology-related services to other agents in the system. Its major tasks are

• discovery of public ontologies
• maintenance of an ontology repository
• translation of expressions across ontologies
• answering queries about the relationships between terms and ontologies
• facilitating the identification of shared ontologies

As such, the ontology agent supports the communication between agents that do not share the same basic vocabulary as a result of using different ontologies (see Figure 3). The ontology agent can be used by these agents in order to find a shared ontology as a basis for communication, or to interpret messages by translating the vocabulary used.
Figure 3: The ontology agent as a facilitator for agent communication

In order to provide these services, the ontology agent uses a meta ontology defining the predicates and actions the ontology agent is able to evaluate or perform:

• ontol-relationship is a predicate that defines different degrees of translatability between ontologies. Agents may query the ontology agent about this relationship in order to get information about the possibility of a translation.
• atomic-sentence defines manipulations of an existing ontology in terms of assert and retract statements. Assert is an action that corresponds to the process of adding a statement to an existing ontology; retract corresponds to the process of deleting a statement from the ontology.
• translate can be used to translate an expression, e.g. a class name, from one ontology into another. Depending on the kind of relationship between the ontologies (see above), the result of the translation preserves the intended meaning of the initial expression.

In the following we focus on the translate action and present an approach as well as a prototype capable of performing a special kind of translation between class names from different ontologies.
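For illustration, a translation request to the ontology agent might be phrased as the following ACL message. This is a schematic sketch only: the message envelope follows the FIPA ACL string syntax, but the ontology name and the argument structure of the translate action are simplified assumptions rather than the literal content language of the ontology service specification [19].

    (request
      :sender   (agent-identifier :name agent1@platform)
      :receiver (set (agent-identifier :name ontology-agent@platform))
      :language fipa-sl0
      :ontology ontology-service-ontology   ; assumed name
      :content  "((action (agent-identifier :name ontology-agent@platform)
                    (translate Hotel :from ontology-a :to ontology-b)))")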
5.3 Term Translation by Re-Classification
We have developed an approach for translating terms from one ontology to another that is based on description logics [35]. The main inference mechanism used in description logics is subsumption checking. A concept is said to subsume another concept if membership in the latter implies membership in the former. Following the semantics defined above, the subsumption relation between two concepts is equivalent to a subset relation between the extensions of the concept definitions. Given two concept definitions $A$ and $B$, subsumption is tested by checking whether

$$A \sqsubseteq B \iff \mathcal{I}[(\textsf{and}\ A\ (\textsf{not}\ B))] = \emptyset.$$

Subsumption checking can be seen as a special classification method, because it returns the list of concepts $B_i$ that a member of a given concept $A$ belongs to. Using the notion of subsumption, a name $s$ from one terminological context (represented by a concept definition $S$) can be replaced by any other name $t$ whose definition $T$ subsumes $S$:

$$s \rightsquigarrow t \Leftarrow S \sqsubseteq T.$$

When translating names from one terminology into names of another terminology, we have to handle two different sets of concept definitions. Let $O$ be a terminological knowledge base containing two terminologies represented by sets of concept definitions $\mathcal{S} \subset O$ and $\mathcal{T} \subset O$. Each of the sub-models defines a subsumption relation, $\sqsubseteq_{\mathcal{S}}$ and $\sqsubseteq_{\mathcal{T}}$ respectively. Comparing these with the subsumption relation $\sqsubseteq_{O}$ defined by the overall model, we recognize that in general the union of the two subsumption relations is only a subset of the overall subsumption relation. Even worse, the interesting part of the relation, defining subsumptions across the two sub-models, cannot always be computed, because the sub-models will normally be disjoint. In order to overcome this problem, we require the existence of a shared sub-model $\mathcal{V} = (\mathcal{S} \cap \mathcal{T}) \subset O$ covering basic definitions from the domain of interest. Using this shared vocabulary we can compute subsumption relations between names from the two terminologies using available subsumption reasoners that support the language. As we are normally interested in the most specific translation, in order to minimize information loss, we get the following definition:

$$s \rightsquigarrow t \Leftarrow t \in \{T_i \in \mathcal{T} \mid S \sqsubseteq T_i \wedge \nexists C \in \mathcal{T}\,[C \sqsubset T_i \wedge S \sqsubseteq C]\}.$$
The accuracy of this kind of transformation strongly depends on the nature of the shared sub-model. If the shared model covers only a small part of the definitions of names from the two terminologies, we only get an incomplete and general result with a high degree of information loss. If, on the other hand, the names from the terminologies are defined using only shared concepts, we get a complete transformation.
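The translation rule is straightforward to operationalize once subsumption can be decided. The following Python sketch computes the most specific translations of a source concept, assuming a hypothetical subsumes oracle answered by a description logic reasoner (such as FaCT or RACER) over the combined model with its shared vocabulary; it illustrates the definition above and is not the BUSTER implementation itself.

    def translate(s, target_terms, subsumes):
        """Most specific translations of source concept `s` into the
        target terminology, following  s ~> t  <=  S subsumed-by T.

        s            -- concept name from the source terminology
        target_terms -- concept names of the target terminology
        subsumes     -- oracle: subsumes(a, b) is True iff b is
                        subsumed by a (assumed to be answered by a
                        DL reasoner over the combined model)
        """
        # Candidate translations: all target concepts subsuming s.
        candidates = [t for t in target_terms if subsumes(t, s)]
        # Keep the most specific candidates: discard any candidate
        # that (strictly) subsumes another candidate.
        return [t for t in candidates
                if not any(c != t and subsumes(t, c) for c in candidates)]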
5.4 A First Implementation
On the basis of the term translation by re-classification introduced in the last section, as well as the ontology service specification, we have developed and implemented the 'BUSTER Ontology Agent'. The system's architecture and its integration into the FIPA platform are shown in Figure 4.
Figure 4: Architecture of the BUSTER Ontology Agent
The agents interact using ACL through the message transport service. The ontology agent provides the term translation service. If, for example, Agent 1 wants to inform Agent 2 about $\phi_a$, which is formally defined in ontology $a$, and Agent 1 is aware that Agent 2 uses ontology $b$ for knowledge processing, it can request a translation of $\phi_a$ into $\phi_b$ as a concept of ontology $b$. The negotiation of this request is handled automatically along the lines of the FIPA query-inform interaction protocol [20]. The implementation of the ontology agent covers two major aspects: the first is handling the requests and the interaction with other agents; the second is actually re-classifying terms. For the first task, the FIPA-OS 'empty' reference agent has been slightly extended and instantiated according to the ontology service specification. This modification enables the ontology agent to negotiate with other agents and to handle their requests autonomously. This also includes the parsing and interpretation of messages following the semantic definition of the ontology service ontology in SL0 syntax [19]. The re-classification task is solved in the second component, which connects the ontology agent to a reasoner via a CORBA interface. In our prototypical implementation we are restricted to the translation of terms defined in description logics, as we have integrated the FaCT reasoner [26]. The FaCT reasoner is connected to the ontologies in question and handles the re-classification of the terms negotiated in the first component.
6. DISCUSSION
In this paper we discussed automatic services for the semantic web. We used the I3 reference architecture as a checklist in order to identify and discuss important services, without claiming that it has to be adopted completely. Nevertheless, we showed that many of the ideas and requirements included in the architecture can be provided by multi-agent systems as described in the FIPA specifications. Furthermore, much of the basic technology for providing coordination and management services has also been implemented and is available in the FIPA-OS system. This enables us to concentrate on higher-level functionality that needs intelligent methods. We argued that ontologies are a key technology with respect to such services, for example resource discovery and information integration. We conclude that both the semantic web and the multi-agent community benefit from an integration of multi-agent technology with ontology-based services for information discovery and integration. Our implementation of the ontology service illustrates that implementing a service on top of the FIPA standard is promising, because the implementation is relatively straightforward and can be re-used as part of larger systems in various ways. The BUSTER Ontology Agent is only a starting point for really performing an integration, because many problems still have to be solved. Especially the problem of translating between different ontologies is a difficult one that needs more investigation on the theoretical and experimental level before we can really speak of a reasonable implementation of the FIPA ontology specification. Further, we can imagine the realization of other agent-based services not mentioned in the FIPA specification. Especially the problem of resource discovery is very relevant in the context of the semantic web. Standardized solutions to this problem will therefore be of great value.
Acknowledgement

The prototype of the Ontology Agent has been developed with the help of Andreas Lattner, Thorsten Scholz, Gerhard Schuster and Ubbo Visser. We also thank Holger Wache for valuable information about the I3 Reference Architecture.

7. REFERENCES

[1] H. Stuckenschmidt, T. Voegele, U. Visser, and R. Meyer. Intelligent brokering of environmental information with the BUSTER system. In Information Age Economy: Proceedings of the 5th International Conference Wirtschaftsinformatik, 2001.
[2] R. Benjamins, B. Wielinga, J. Wielemaker, and D. Fensel. Brokering problem-solving knowledge on the internet. In D. Fensel et al., editors, Proceedings of the European Knowledge Acquisition Workshop, number 1621 in Lecture Notes in Artificial Intelligence. Springer-Verlag, 1999.
[3] T. Bray, J. Paoli, and C. M. Sperberg-McQueen. Extensible markup language (XML) 1.0. Technical Report REC-xml, W3C, 1998.
[4] D. Brickley, R. Guha, and A. Layman. Resource description framework (RDF) schema specification. Working draft, W3C, August 1998. http://www.w3c.org/TR/WD-rdf-schema.
[5] D. Brickley, R. Guha, and A. Layman. Resource description framework schema specification. Technical Report PR-rdf-schema, W3C, 1998.
[6] J. Broekstra, M. Klein, S. Decker, D. Fensel, F. van Harmelen, and I. Horrocks. Enabling knowledge representation on the web by extending RDF Schema. In Proceedings of the Tenth World Wide Web Conference WWW10, Hong Kong, May 2001.
[7] V. Christophides, D. Plexousakis, G. Karvounarakis, and S. Alexaki. Declarative languages for querying portal catalogs. In Proceedings of the DELOS Workshop: Information Seeking, Searching and Querying in Digital Libraries, pages 115-120, 2000.
[8] S. Decker, D. Brickley, J. Saarela, and J. Angele. A query and inference service for RDF. In Proceedings of QL'98 - The Query Languages Workshop, 1998.
[9] D. C. Fallside. XML Schema part 0: Primer. Working draft, W3C, February 2000. http://www.w3.org/TR/2000/WD-xmlschema-0-20000225/.
[10] D. Fensel, I. Horrocks, F. van Harmelen, S. Decker, M. Erdmann, and M. Klein. OIL in a nutshell. In R. Dieng, editor, Proceedings of the 12th European Workshop on Knowledge Acquisition, Modeling, and Management (EKAW'00), number 1937 in Lecture Notes in Artificial Intelligence, pages 1-16. Springer-Verlag, 2000.
[11] D. Fensel, S. Decker, M. Erdmann, and R. Studer. Ontobroker: The very high idea. In Proceedings of the 11th International FLAIRS Conference (FLAIRS-98), Sanibel Island, USA, 1998.
[12] FIPA. Abstract architecture specification. http://www.fipa.org/specs/fipa00001/, 2000.
[13] FIPA. ACL message structure specification. http://www.fipa.org/specs/fipa00061/, 2000.
[14] FIPA. Agent software integration specification. http://www.fipa.org/specs/fipa00079/, 2000.
[15] FIPA. Brokering interaction protocol specification. http://www.fipa.org/specs/fipa00033/, 2000.
[16] FIPA. Content language library specification. http://www.fipa.org/specs/fipa00007/, 2000.
[17] FIPA. Contract net interaction protocol specification. http://www.fipa.org/specs/fipa00029/, 2000.
[18] FIPA. Interaction protocol library specification. http://www.fipa.org/specs/fipa00025/, 2000.
[19] FIPA. Ontology service specification. http://www.fipa.org/specs/fipa00086/, 2000.
[20] FIPA. Query interaction protocol specification. http://www.fipa.org/specs/fipa00027/, 2000.
[21] FIPA. SL content language specification. http://www.fipa.org/specs/fipa00008/, 2000.
[22] E. Franconi and G. Ng. The i.com tool for intelligent conceptual modelling. In Proceedings of the 7th International Workshop on Knowledge Representation meets Databases, Berlin, Germany, August 2000.
[23] N. Guarino, C. Masolo, and G. Vetere. OntoSeek: Content-based access to the web. IEEE Intelligent Systems, 14(3), 1999.
[24] D. Gunning. Intelligent integration of information (I3). http://www.isx.com/pub/I3/i3_overview.html.
[25] V. Haarslev and R. Möller. RACER system description. In International Joint Conference on Automated Reasoning, 2001. To appear.
[26] I. Horrocks. The FaCT system. In H. de Swart, editor, Automated Reasoning with Analytic Tableaux and Related Methods: International Conference Tableaux'98, number 1397 in Lecture Notes in Artificial Intelligence, pages 307-312. Springer-Verlag, Berlin, May 1998.
[27] T. Berners-Lee, J. Hendler, and O. Lassila. The semantic web: A new form of web content that is meaningful to computers will unleash a revolution of new possibilities. Scientific American, 284(5), 2001.
[28] J. Robie. XML query language (XQL). Technical report, W3C, August 1999.
[29] C. Schlenoff, M. Gruninger, F. Tissot, J. Valois, J. Lubell, and J. Lee. The process specification language (PSL): Overview and version 1.0 specification. Technical Report NISTIR 6459, National Institute of Standards and Technology, Gaithersburg, MD, 2000.
[30] L. Serafini and C. Ghidini. A context based semantics for federated databases. In M. Cavalanti, P. Bonzon, and R. Nossum, editors, Formal Aspects of Context, volume 20 of Applied Logic Series. Kluwer Academic Publishers, 2000.
[31] R. G. Smith. The contract net protocol: High-level communication and control in a distributed problem solver. In A. H. Bond and L. Gasser, editors, Readings in Distributed Artificial Intelligence. Morgan Kaufmann, San Mateo, 1988.
[32] S. Staab, J. Angele, S. Decker, M. Erdmann, A. Hotho, A. Maedche, H.-P. Schnurr, R. Studer, and Y. Sure. Semantic community web portals. In Proceedings of the 9th International World Wide Web Conference, Amsterdam, 2000.
[33] S. Staab, M. Erdmann, and A. Maedche. Engineering ontologies using semantic patterns. In A. Gomez-Perez, M. Gruninger, H. Stuckenschmidt, and M. Uschold, editors, Proceedings of the IJCAI-01 Workshop on Ontologies and Information Sharing, 2001.
[34] H. Stuckenschmidt and F. van Harmelen. Knowledge-based validation, aggregation and visualization of meta-data: Analyzing a web-based information system. In N. Zhong and Y. Yao, editors, Proceedings of the First Asia-Pacific Conference on Web Intelligence (WI'2001), Lecture Notes in Artificial Intelligence. Springer-Verlag, 2001.
[35] H. Stuckenschmidt. Using OIL for intelligent information integration. In Proceedings of the Workshop on Applications of Ontologies and Problem-Solving Methods at the European Conference on Artificial Intelligence (ECAI 2000), 2000.
[36] F. van Harmelen and J. van der Meer. WebMaster: Knowledge-based verification of web-pages. In M. Ali and I. Imam, editors, Proceedings of the Twelfth International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA/AIE-99), Lecture Notes in Artificial Intelligence. Springer-Verlag, 1999.
[37] H. Wache, T. Vögele, U. Visser, H. Stuckenschmidt, G. Schuster, H. Neumann, and S. Hübner. Ontology-based information integration: A survey. In A. Gomez-Perez, M. Gruninger, H. Stuckenschmidt, and M. Uschold, editors, Proceedings of the IJCAI-01 Workshop on Ontologies and Information Sharing, 2001.
[38] H. Wache and H. Stuckenschmidt. Practical context transformation for information system interoperability. In P. Bouquet, editor, Proceedings of Context'01. Springer, 2001.
[39] G. Weiss, editor. Multiagent Systems - A Modern Approach to Distributed Artificial Intelligence. The MIT Press, Cambridge, Massachusetts, 1999.