Listing 2.1: XML document example
2.2. Ontologies and Ontology Languages
Since its creation, XML has served as the basis for defining a large variety of vocabularies and languages such as XHTML, WAP, and even RDF and OWL, as we will see in the next sections. In fact, XML is a meta-language which supports the definition of specific vocabularies for a vast variety of purposes, and it facilitates the creation of software tools to manipulate them. To support this flexibility and to grant the efficient, yet simple, manipulation of XML documents, the XML specification relies on very simple but strict rules, for example: all XML documents must have a root element, and attribute values must always be quoted. XML documents are composed of storage units called entities, which contain either parsed (characters or markup) or unparsed data. Markup encodes a description of the document's storage layout and logical structure so that it is self-describing. XML documents are well-formed if they comply with the XML specification, and valid if they also conform to a Document Type Definition (DTD). The purpose of a DTD is to define the legal building blocks and the structure of an XML document. DTDs were mainly intended to support the exchange of data between different groups of people or organisations by defining a common and shared representation of the data and its structure. XML Schemas (Bray et al., 2004) have been proposed as a successor to DTDs. XML Schemas (XSD) intend to cover some of the things DTDs lack, through a richer grammar supporting, for example, the definition of the allowed number of nested elements, and the inclusion of data types (e.g., decimal, dateTime, integer, etc.) (Biron & Malhotra, 2001). Even though XML together with a DTD or an XML Schema has proven to be extremely useful for data exchange, they lack proper semantics. In fact, any data interchange on the basis of XML requires an agreement between the persons or organisations exchanging the information about the semantics of the elements in the exchanged data. Such an agreement is basically a commitment to particular semantics of the fields defined in the DTD or in the XML Schema. In other words, XML only specifies a syntactical convention; the semantics are outside its scope.
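The notion of well-formedness can be illustrated with any standards-compliant XML parser; the following is a small sketch using Python's standard-library parser (the documents themselves are made-up examples, not taken from Listing 2.1):

```python
import xml.etree.ElementTree as ET

well_formed = "<person><firstname>Carlos</firstname></person>"
malformed = "<person><firstname>Carlos</person>"  # unclosed <firstname> element

def is_well_formed(doc: str) -> bool:
    """Return True if the document parses, i.e. obeys XML's strict syntactic rules."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(well_formed))   # True
print(is_well_formed(malformed))     # False
```

Note that this checks well-formedness only; validity would additionally require checking the document against a DTD or an XML Schema.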
2.2.2 RDF and RDF Schema

XML provides outstanding support for defining vocabularies; however, it fails to provide proper semantics for the data it holds. To overcome this limitation the Resource Description Framework (RDF) has been defined. RDF is a W3C recommendation that defines a general-purpose language for defining metadata (i.e., data about data) on the Web (Manola & Miller, 2004). RDF is thus a framework for exchanging information between applications without loss of meaning. RDF is particularly intended for describing Web resources (e.g., a Web page); it does not, however, require that resources be retrievable on the Web, and it is therefore suitable for representing any kind of metadata. RDF is based on the identification of resources by their Uniform Resource Identifier (URI), that is, a compact string of characters for identifying a resource (Berners-Lee et al., 1998). In RDF, a resource is basically a thing we want to describe. Resources have properties that may have values. A property is a particular characteristic of the resource being described, for example the author in the case of a Web page. A value can either be another resource or a literal, that is, text. As opposed to traditional programming languages, RDF
Chapter 2. Knowledge Representation for the Web
has no built-in datatypes; instead it identifies them by their URIs, thus avoiding datatype conversions. For basic and commonly used datatypes RDF adopts the XML Schema datatypes, which are, just like everything else, identified by their URI. Any expression in RDF is a collection of triples, each consisting of a subject, a predicate and an object. For example, in order to say that this thesis was written by something with a first name Carlos and a last name Pedrinaci, three statements would be required, as shown in Table 2.1.
Table 2.1: RDF Statements Example.

Subject       Predicate    Object
this_thesis   written_by   thing_1234
thing_1234    first_name   Carlos
thing_1234    last_name    Pedrinaci
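The three statements in Table 2.1 can be sketched directly as a set of (subject, predicate, object) tuples; the following toy snippet is merely an illustration of the triple model, not of any RDF library:

```python
# RDF statements as (subject, predicate, object) triples, mirroring Table 2.1.
triples = {
    ("this_thesis", "written_by", "thing_1234"),
    ("thing_1234", "first_name", "Carlos"),
    ("thing_1234", "last_name", "Pedrinaci"),
}

def objects(subject, predicate):
    """All objects related to `subject` via `predicate`."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

author = objects("this_thesis", "written_by").pop()   # "thing_1234"
print(objects(author, "first_name"))                  # {'Carlos'}
```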
The RDF recommendation comes with an abstract syntax and an XML syntax. RDF's abstract syntax is a directed graph (Klyne & Carroll, 2004) where elliptical nodes represent resources, arcs represent properties and rectangular nodes represent values. See Figure 2.1 for the graph representation of the previous example in Table 2.1. However, for machine processing, a way of representing RDF graphs as a sequence of symbols is required. There are currently several syntaxes available, such as N3 (Berners-Lee, 1998), or a subset of this notation called N-Triples (Grant et al., 2004). The RDF recommendation defines an XML-based syntax called RDF/XML (Beckett & McBride, 2004). See Listing 2.2 for an RDF/XML representation of our simple example in Table 2.1.
Figure 2.1: RDF Graph Example.
RDF supports the assertion of simple statements for describing things, typically Web resources. However, it has no means for describing what the subjects, objects or predicates mean, nor does it support defining any kind of relationship between them. That is the role of the RDF vocabulary description language, RDF Schema.
Listing 2.2: RDF/XML Example
RDF Schema

The RDF Schema (Brickley & Guha, 2002; McBride, 2004), also known as the RDF Vocabulary Description Language, supports defining domain-specific vocabularies. RDF Schema introduces some simple ontological concepts such as Classes, Subclasses, Properties or Subproperties. It defines a type system, similar to those of object-oriented programming languages, that allows defining a hierarchy of the kinds of things that can exist in a particular domain, and their properties. Because RDF Schema is also metadata, it can be encoded in RDF; see (Brickley & Guha, 2002, appendix A). In RDF Schema, Classes (rdfs:Class) are resources that represent a collection of resources, for example Books, Cars or Persons. A resource can be identified as a member of a Class by means of the property rdf:type. Class hierarchies can be constructed using the rdfs:subClassOf property. Similarly, properties (rdf:Property) are also resources and can be refined by means of rdfs:subPropertyOf. Moreover, RDF Schema supports restricting the values of a property to be members of a particular class using the rdfs:range property, and also allows restricting the subjects of a property (rdfs:domain). RDFS was deliberately defined with limited semantics. It was designed to be a simple, formal and extensible ground for defining shared vocabularies for the Web.
Therefore RDFS lacks many ontological constructs, such as equivalence, inverse, symmetric or transitive relationships, nor does it support defining cardinality constraints. Because in many cases RDFS fails to provide a sufficiently expressive ontological language, the W3C has also created a more expressive language, which we describe in the next section.
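The central inference RDFS does licence, namely that rdf:type membership propagates up the rdfs:subClassOf hierarchy, can be sketched in a few lines. The class names and the resource below are hypothetical illustrations, not drawn from any real vocabulary:

```python
# Toy RDFS-style type inference: an asserted rdf:type implies membership
# of every superclass reachable via rdfs:subClassOf.
subclass_of = {          # child class -> parent class
    "Novel": "Book",
    "Book": "Publication",
}
types = {"moby_dick": "Novel"}   # resource -> asserted rdf:type

def inferred_types(resource):
    """Return the asserted class plus all superclasses (rdfs:subClassOf closure)."""
    result = set()
    cls = types.get(resource)
    while cls is not None:
        result.add(cls)
        cls = subclass_of.get(cls)
    return result

print(inferred_types("moby_dick"))  # {'Novel', 'Book', 'Publication'}
```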
2.2.3 OWL

An important number of use cases for ontologies have shown the need for richer expressiveness than that provided by RDF and RDF Schema (Heflin, 2004). This has led to initiatives to define richer ontology languages that can support useful automated reasoning. The prominent result of these initiatives is the Web Ontology Language (OWL) (McGuinness & van Harmelen, 2004; Antoniou
& van Harmelen, 2004), a W3C recommendation. OWL is derived from the DAML+OIL Web Ontology Language (Connolly et al., 2001) and is defined as a vocabulary extension of RDF. It builds upon RDF and RDF Schema, in terms of both its (XML-based) syntax and its modelling primitives, which include those of RDF and the majority of the RDFS constructs. As a result, every OWL document is a valid RDF document. OWL was designed to support the use, modification and integration of ontologies over the Web, with a balance between expressivity and scalability, avoiding unnecessary complexities and maintaining compatibility with other standards (Heflin, 2004). Expressiveness comes at the price of efficiency and in some cases can even lead to intractability. Thus, to fulfil the previous requirements, OWL has three different sublanguages, with increasing expressivity: OWL Lite, OWL DL and the most expressive OWL Full.
It is important to note, however, that even though the three species are different languages, the following relationships between them grant upward compatibility:
• every legal OWL Lite ontology is a legal OWL DL ontology;
• every legal OWL DL ontology is a legal OWL Full ontology;
• every valid OWL Lite conclusion is a valid OWL DL conclusion; and
• every valid OWL DL conclusion is a valid OWL Full conclusion.
In the next sections we will introduce the different OWL languages in order of increasing expressivity. We will briefly mention the most relevant features of each of them and give some illustrations for better understanding. We do not intend to provide a fully-fledged review of the languages, their syntax and their constructs. The reader is referred to (McGuinness & van Harmelen, 2004; Dean et al., 2004; Patel-Schneider et al., 2004; Antoniou & van Harmelen, 2004) for more extensive information.
OWL Lite

OWL Lite aims at providing an easy-to-use language that caters for classification hierarchies with simple constraints. OWL Lite is the most restrictive flavour of OWL and, as such, it is the least expressive one, but it also supports the most efficient reasoning. Because of its simplicity, OWL Lite is expected to promote a wider adoption of semantic technologies in the World Wide Web. At the same time, it paves the way for further refinements using its more expressive successors, OWL DL or OWL Full, thanks to the previously mentioned compatibility rules. Despite the desire to simply extend RDF Schema, the trade-off between expressiveness and computability has led to some restrictions. RDF Schema has some very powerful primitives, such as rdfs:Class (the Class of all Classes) and rdf:Property (the Class of all Properties), that cross the borders of computability. Hence their use has been restricted. For defining classes OWL Lite provides owl:Class, which is a subclass of rdfs:Class. In addition, there are two special classes, owl:Thing and owl:Nothing. The former is the superclass of everything and the latter is the empty class (a subclass of every class). In OWL Lite, instances or individuals are declared by means of the primitive owl:Individual.
Regarding properties, OWL Lite provides two different and disjoint classes of properties, owl:ObjectProperty and owl:DatatypeProperty, which are both subclasses of rdf:Property. As in RDFS, OWL Lite supports defining hierarchies of classes and properties by means of rdfs:subClassOf and rdfs:subPropertyOf respectively. These basic constructs are complemented with others that support defining (in)equality relationships between properties, classes or individuals (e.g., owl:equivalentClass, owl:differentFrom). Moreover, OWL Lite supports defining some property characteristics, such as symmetry (owl:SymmetricProperty), transitivity (owl:TransitiveProperty) or even the inverse of a property, using the owl:inverseOf construct. An example of a symmetric property is works with: if Jon works with Carlos, Carlos works with Jon. Concerning transitivity, the ancestor relationship serves as a good illustration: because your grandfather is an ancestor of your father and your father is one of your ancestors, your grandfather is also one of your ancestors. Again, family relationships allow us to show an inverse relationship, which holds between the hasParent and hasChild relationships. Furthermore, OWL Lite allows restrictions to be placed on how properties can be used by instances. For instance, the construct owl:allValuesFrom supports restricting the range of a property locally to a specific class, as opposed to the RDFS range definition, which is global. Let us imagine a property hasSon. In the case of humans, its values can be restricted to be instances of Person, while the same property for dogs can be restricted to Dog instances. Another example is owl:maxCardinality, which restricts a property (again locally) by specifying the maximum number of individuals that can be related to a particular class. To illustrate, such a constraint could allow us to state that a Person can only have one Mother. It is worth noting that in OWL Lite, cardinality restrictions are limited to at least, at most or exactly 1 or 0, which is not the case for the other languages, as we will see in the next sections. Finally, a last feature that shows the goal of promoting a wider adoption of such technologies on the Web is the versioning capabilities. OWL Lite provides some constructs for defining ontology versions (owl:versionInfo), backwards compatibility (owl:backwardCompatibleWith) or even incompatibilities between ontologies (owl:incompatibleWith).
OWL DL

Description Logics (DL) originate from research carried out in the 1970s and 1980s on the KL-ONE knowledge representation system, driven by the interest in overcoming the limitations of semantic networks and frame languages (Russell & Norvig, 2003; Baader et al., 2004; Brachman & Levesque, 2004). Description Logics are knowledge representation languages that can be used to describe definitions and properties of categories. Their main characteristic is their formal logic-based semantics, which grants the tractability of inferences. Their main inference tasks are subsumption (i.e., checking whether a class is a subclass of another), classification (i.e., finding the class an instance belongs to) and consistency checking at design time. Since DLs provide the means for describing conceptualisations with well-defined semantics and powerful reasoning tools, they are good candidates for ontology definition. OWL Description Logics, OWL DL for short, is a more expressive superset of the OWL Lite language. OWL DL has some additional constructs and fewer restrictions than OWL Lite, but it still restricts the way OWL and RDF primitives can be used in order to maintain certain computational characteristics. In particular, OWL DL is bound to Description Logics constraints in order to maintain its tractability, as well as to benefit from previous extensive work performed in the field of knowledge representation (Baader et al., 2004). OWL DL supports defining enumerated classes (i.e., classes defined by an enumeration of the individuals that make up the class) thanks to the owl:oneOf construct. For example, we could define the class OWL Languages by enumerating the OWL Full, OWL DL and OWL Lite individuals. Moreover, in OWL DL one can use owl:hasValue to force individuals of a class to have a particular value for a property (e.g., every instance of French Person must have French as the Nationality value). Furthermore, OWL DL provides the means for defining boolean operations between classes. It is possible to define classes as the union of others using the owl:unionOf construct. For example, we can define Human as the union of the Man and Woman classes. In the same way, OWL DL supports defining a class as the complement (owl:complementOf) of another, say Young is the complement of Old. The last boolean operation provided is the intersection (owl:intersectionOf), which allows defining the class Spanish Researcher as the intersection of Spanish Person and Researcher. Also, as opposed to OWL Lite, in OWL DL cardinality restrictions are no longer restricted to the values 0 or 1. Instead, one can use any non-negative integer for these constraints, for example stating that a Football Team has at least eleven members. Despite the additional expressivity constructs, OWL DL is still bound to some constraints.
For instance, no cardinality restrictions can be placed on transitive properties. Moreover, a resource can only be either a class, a datatype, a datatype property, an object property, an individual or a value, and this has to be stated explicitly. There exist situations where these restrictions limit usability. For instance, a typical difficulty arises when different systems work at different levels of abstraction: what is a concept for the lowest-level system must be treated as an instance by the other. The same OWL DL ontology could not be used in both applications, since the same resource would be both a class and an instance. In order to overcome these final limitations, OWL Full has been defined.
OWL Full

In OWL Full, every construct can be used in any combination as long as the result is a valid RDF expression. OWL Full is therefore completely compatible with RDF, both syntactically and semantically: any legal RDF document is also a legal OWL Full document, and any valid conclusion in RDF/RDFS is also valid in OWL Full. The result is a very expressive but undecidable language. The OWL W3C recommendation thus consists of a set of three ontology representation languages with increasing expressivity. Both tool and application developers are given the possibility to use the one that best suits their needs. Choosing between OWL Lite and OWL DL merely depends on whether the user requires the more expressive constructs OWL DL provides. The choice between OWL DL and OWL Full is a trade-off between the expressive power of
all the RDF Schema constructs and the tractability granted by OWL DL. In this respect, we would like to point out, again, the importance of supporting complete and sound reasoning over ontologies, which is not possible for OWL Full. OWL languages are sufficiently expressive to be used in some Knowledge-Based Systems, namely those whose inferences can be reduced to Description Logics inferences (i.e., subsumption, classification and consistency checking). In many other cases, there remains the need for specifying the application behaviour. In traditional Software Engineering this is usually called the business logic, and is usually made up of a set of procedural sentences that are supposed to achieve the desired tasks (i.e., a set of programming constructs that specify what to do with some data).
In Knowledge-Based Systems, due to their complexity and the usual need to perform automated activation upon certain events depending on a large set of variables, a system's behaviour is usually achieved by means of a general inference engine and a set of inference rules. This declarative way of defining part of the behaviour of Knowledge-Based Systems has proven useful, and there are currently several initiatives intended to bring its power to the Semantic Web scenario. In the next section we review the most relevant approaches followed in this context.
2.3 Rules and Rule Languages

We previously characterised knowledge as being of two different kinds: the static knowledge (the Domain Knowledge) and the dynamic knowledge (the Inference and Task Knowledge). Ontologies are an appropriate means for defining static knowledge, for they are sufficiently expressive and pave the way for reusing Knowledge Bases and integrating pre-existing knowledge-based components. However, the ability to define behavioural aspects when developing Knowledge-Based Systems is an important, and in many cases essential, requirement. In Musen's words (Musen, 2004):

Many intelligent systems are designed primarily to answer queries about large bodies of knowledge. In these cases, ontologies provide most of the representational muscle needed to build complete systems.
To build systems that solve real-world tasks, however, we must not only specify our conceptualisations, but also clarify how problem-solving ideally will occur.

The dynamic behaviour of Knowledge-Based Systems is usually defined as a mixture of traditional software procedures and inference rules that declaratively define reasoning steps. Again, in AI many different languages have been created to appropriately support defining these inference rules. These languages are built upon the principles of logic, typically First-Order Logic or even Higher-Order Logic, and support logically inferring (i.e., deriving) new facts (i.e., assertions about the world) from pre-existing facts (Russell & Norvig, 2003). In the Semantic Web, rules have been identified as a required and enabling technology (Musen, 2004; Schwartz, 2003; Berners-Lee et al., 2001; Berners-Lee, 2002a; Staab et al., 2003). Hence, several efforts have been devoted to supporting their integration into the overall Semantic Web architecture (Berners-Lee
et al., 2001). The most prominent efforts, which we will review next, are RuleML and SWRL.
2.3.1 RuleML

The Rule Markup Language (RuleML) (Boley et al., 2001; RuleML Initiative, 2005) is a standardisation initiative that was started in 2000. Its goal is to provide an open, vendor-neutral XML/RDF-based rule language that allows exchanging rules between different systems and components across the Web. RuleML encompasses a hierarchy of rules, as depicted in Figure 2.2.
Figure 2.2: RuleML Rule Types (RuleML Initiative, 2005).
Reaction Rules: Reaction rules are basically all those rules that return no value. They state triggering conditions, pre-conditions and effects. Reaction rules are always applied in the forward direction; thus, they declaratively define the behaviour of a system in response to a particular condition or event. For example: if the stock is empty, stop selling.

Transformation Rules: Transformation rules are themselves further characterised into Derivation Rules, Facts and Queries.

Derivation Rules: Derivation rules are transformation rules that have a set of premises and whose action only asserts a new fact or conclusion. This type of rule can either be applied forwards, for deriving new facts, or backwards, for proving a conclusion from premises. RuleML does not prescribe any direction. An example of a derivation rule is: if a man has married a woman, his marital status should be changed to married.

Facts: Facts are basically derivation rules that have an empty set of premises. In other words, facts always hold. A fact could be: there are 25 books available.

Queries: Queries are understood as derivation rules that have an empty conclusion (i.e., false) or a conclusion that captures the derived variable bindings. An example of a query is: how many books are there in our stock?, whose result would be bound to a variable with the number of books in the stock.

Integrity Constraints: Integrity constraints are queries whose only action is to signal inconsistency when some conditions are met. This kind of rule is usually applied forward, upon updates. For example: if a man has a wife, then his marital status must be married.

Therefore, RuleML is not dedicated solely to inference rules, even though the first years of development have mainly been devoted to them.
The initiative intends to provide a common markup language that could (ideally) be used for exchanging any kind of rule in any kind of language, such as SQL (ISO, 2003), OCL (OMG, 2003) or Prolog (Cohen, 1988). Because of the extensive previous work on rule-based systems carried out in AI, and the great variety of possible syntaxes, RuleML has a concrete XML syntax and is divided into a hierarchy of sublanguages. Figure 2.3 shows the different sublanguages RuleML currently identifies.
Figure 2.3: RuleML Family of Sublanguages (RuleML Initiative, 2005).
All sublanguages correspond to well-known rule systems. RuleML introduces URIs (Berners-Lee et al., 1998) so that any of the resources URIs can locate (e.g., web pages, services, ontologies) can be integrated into the rules. The latest version of RuleML (Hirtle et al., 2004) provides an XML Schema specification for each of the sublanguages shown in Figure 2.3. Interestingly, since the syntax is based upon XML, XSL Transformations (XSLT) (Clark, 1999), a language for transforming XML documents, can be used to perform transformations into specific rule languages for their actual execution.
For instance, there currently exist XSLTs for transforming RuleML rules into some other languages, such as Jess (Friedman-Hill, 2003) and N3 (Berners-Lee, 1998). Current work is being devoted to more precise URI semantics, negation handling, a full First-Order Logic sublanguage, an Object-Oriented RuleML, and a Reactive RuleML for supporting reaction rules (RuleML Initiative, 2005). Even though RuleML can be said to be at a relatively early stage, the initiative has already achieved remarkable results, mainly represented by increasing support for the language by commercial and non-commercial tools, such as SweetRules or Mandarax (see Chapter 3), as well as by the Semantic Web Rule Language (SWRL), which we present next.
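The forward application of derivation rules described in the hierarchy above can be sketched as a minimal fixpoint loop; the rule and facts below are hypothetical, echoing the marital-status example in the text:

```python
# Toy forward-chaining engine: each derivation rule maps the current fact
# base to the set of conclusions its premises licence; we iterate to a fixpoint.
facts = {("jon", "married_to", "mary")}

def marital_status_rule(facts):
    """Derivation rule: if X is married_to Y, derive (X, 'status', 'married')."""
    return {(s, "status", "married") for (s, p, o) in facts if p == "married_to"}

def forward_chain(facts, rules):
    facts = set(facts)
    while True:
        derived = set().union(*(rule(facts) for rule in rules))
        if derived <= facts:        # fixpoint: no new conclusions
            return facts
        facts |= derived

result = forward_chain(facts, [marital_status_rule])
print(("jon", "status", "married") in result)  # True
```

The same rule could equally be run backwards, treating the conclusion as a goal to prove from the premises; as noted above, RuleML itself does not prescribe a direction.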
2.3.2 SWRL

In May 2004, the Semantic Web Rule Language (SWRL) was proposed to the W3C (Horrocks et al., 2004). SWRL is based on a combination of OWL DL and OWL Lite with the Datalog RuleML sublanguage of the Rule Markup Language. The language extends OWL to include Horn clauses³ (Brachman & Levesque, 2004), thus enabling the application of Horn-like rules over an OWL knowledge base. The proposal includes an extension to both the semantics and the abstract syntax of OWL (Patel-Schneider et al., 2004), along with two different syntaxes. The XML concrete syntax combines the OWL XML syntax with the RuleML XML syntax in order to simplify the integration of OWL and RuleML. Hence this syntax mainly facilitates mixing rules and OWL statements, and simplifies reusing previously developed RuleML tools for SWRL. The second syntax is an RDF-based syntax. The main rationale behind providing such a syntax is to support automated reasoning over the rules themselves, such as applying more specific versions of a rule to increase efficiency. However, the use of variables goes beyond RDF Semantics, and more work is required to provide full and correct RDF semantics for SWRL. Rules in SWRL take the form of an implication between an antecedent (body) and a consequent (head). The meaning is that whenever the conditions specified in the antecedent hold, the consequent must also hold. Antecedents and consequents can be composed of zero or more atoms.
Atoms can be of the form C(x), P(x, y), sameAs(x, y) or differentFrom(x, y), where C() is an OWL description, P() an OWL property, and x and y are either variables, OWL individuals or OWL data values. In SWRL an empty antecedent is treated as true and an empty consequent is considered false. Finally, multiple atoms are treated as a conjunction⁴. In addition to these basic constructs, SWRL also has a modular set of built-ins based on XPath (Clark & DeRose, 1999) and XQuery (Boag et al., 2005). Among these built-ins we can find several kinds of operators for comparisons (e.g., swrlb:equal or swrlb:lessThan), arithmetic (e.g., swrlb:divide or swrlb:mod), string manipulation (e.g., swrlb:substring or swrlb:contains), and so forth. This modularity not only brings extensibility to the language, but
³ Horn clauses contain at most one positive literal. The importance of Horn clauses resides in the fact that they support efficient resolution procedures.
⁴ Note that rules whose antecedents are composed of disjunctive atoms can trivially be represented by splitting the antecedent into different rules.
also simplifies the development of tools, which can incrementally support these built-ins. Because SWRL is quite a recent proposal, very little tool support is available, making it hardly usable yet. However, there is growing confidence in a wider adoption of the language on the Web. A good example is the OWL-S community (OWL Services Coalition, 2003), which already identifies SWRL as a means to support defining rules in semantically annotated Web Services.
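A SWRL-style rule built from the atom forms C(x) and P(x, y) can be evaluated over a fact base by matching the conjunction of atoms. The sketch below evaluates the classic "uncle" rule, Person(x) ∧ hasParent(x, y) ∧ hasBrother(y, z) → hasUncle(x, z); the class, property and individual names are illustrative, not drawn from any real ontology:

```python
# Evaluating one SWRL-like Horn rule: class atoms test set membership,
# property atoms test pairs; the consequent is asserted for each binding
# of the variables that satisfies every atom in the antecedent.
classes = {"Person": {"peter", "jim", "dan"}}
props = {
    "hasParent": {("peter", "jim")},
    "hasBrother": {("jim", "dan")},
}

def apply_uncle_rule():
    derived = set()
    for x in classes["Person"]:                  # atom: Person(x)
        for (x2, y) in props["hasParent"]:       # atom: hasParent(x, y)
            if x2 != x:
                continue
            for (y2, z) in props["hasBrother"]:  # atom: hasBrother(y, z)
                if y2 == y:
                    derived.add((x, z))          # consequent: hasUncle(x, z)
    return derived

print(apply_uncle_rule())  # {('peter', 'dan')}
```

This is precisely the kind of relationship, discussed in the summary below as the uncle example, that OWL alone cannot express but a rule layer can.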
2.4 Summary, Conclusions and Discussion

The development of intelligent systems in Artificial Intelligence is mainly characterised by a pragmatic conception of knowledge, which is taken to be the capability to act rationally (Newell, 1982). To achieve such rational behaviour, the Knowledge-Based Systems community has approached the development of these systems as a process involving the acquisition and representation of knowledge in some representation language that supports further automated reasoning. Derived from the characterisation of knowledge as the capacity to act rationally, and applying typical engineering principles such as decomposition or reuse, the main Knowledge Engineering approaches characterise the knowledge KBSs build upon as being of two kinds: static and dynamic. Ontologies, understood as formal, explicit and sharable specifications of a conceptualisation for a domain of interest, appear as the most prominent formalism for representing applications' static knowledge. Furthermore, a mixture of procedural and declarative specifications of rules provides the means for defining the dynamic knowledge of KBSs. In the Semantic Web, which is the leading effort concerning the application of Knowledge-Based Systems techniques over the Web, important work has been pursued towards defining suitable representation languages both for ontologies and for rules. The representation of ontologies is mainly dominated by Description Logics as the underlying logic. In this respect, the most prominent result is OWL Description Logics, which is currently a W3C recommendation and is likely to become the standard ontology representation language for the Web. There exist, however, various other representation languages that have been used for representing ontologies which we have not presented in this chapter. This is the case for Ontolingua (Gruber, 1993b), F-Logic (Angele & Lausen, 2004) or even the Unified Modeling Language (UML) (Cranefield & Purvis, 1999). These languages have not been included in our survey for the very reason that this thesis is concerned with the development of Knowledge-Based Services over the Web. Hence, since these languages are neither Web standards nor likely to become standards, using them for representing ontologies would inevitably and undesirably limit the sharability of our Knowledge Bases. Moreover, this would also limit our capability for reusing third-party Knowledge Bases or systems. Ontologies have purposely restricted expressivity in order to enable and promote their sharability. In order to overcome their inherent expressivity limitations, some rule languages have been defined. In particular, the general rule markup language, RuleML, is currently under development to support the definition of (ideally) any kind of rule in any kind of specific language. Based on one of the sublanguages of RuleML, SWRL has recently been proposed as the Semantic Web rule language, enabling the coupling of inference rules with OWL ontologies. Rules therefore provide a suitable means for declaratively defining the dynamic knowledge of Knowledge-Based Systems, and in some cases even support expressing additional static conceptualisations, such as the uncle relationship, which cannot be captured using OWL (Staab et al., 2003). Representing knowledge as a mixture of Knowledge Bases, expressed according to some ontology or ontologies, and a set of declarative or procedural rules brings important benefits to the construction of Knowledge-Based Systems; notably, it enables reuse and simplifies systems maintenance. However, this comes at the price of complicating systems development, which requires reconciling different representation formalisms, presumably handled by diverse systems, into a unique and integrated system. The next chapter is precisely devoted to analysing the current techniques and systems available for appropriately handling ontologies and rules.
Chapter 3 Knowledge-Based Reasoning
Section 3.3 of this Chapter is partly based on a previously published paper (Pedrinaci et al., 2003).
Knowledge-Based Reasoning is the automated manipulation of knowledge to achieve a rational behaviour. Therefore, Knowledge Representation and Knowledge-Based Reasoning are strongly intertwined. For instance, the logic the represented knowledge builds upon pre-establishes its expressivity and computational characteristics. Logics define the reasoning framework, that is, they establish what can be represented, how reasoning can be performed, and to what extent. However, in addition to logics, other aspects play a crucial role in engendering a rational behaviour from explicitly represented knowledge, such as the algorithms or engineering principles applied. This chapter is devoted to exploring these core aspects of Knowledge-Based Reasoning over the Web. First, we briefly present the main logics and techniques that have been used so far in Knowledge-Based Reasoning. Next, we introduce additional methods for controlling the reasoning process, such as the main reasoning procedures and the concept of Problem-Solving Methods. Further, we move into more pragmatic aspects of KBS development, presenting a survey of the most relevant tools for manipulating explicitly represented knowledge. Finally, we summarise and conclude the chapter by introducing some open issues in Knowledge-Based Reasoning over the Web.
3.1 Logic

The principal motivation behind reasoning in Knowledge-Based Systems is to support them in behaving according to what they know, as opposed to traditional programs that act according to what they have explicitly been told to do (Brachman & Levesque, 2004; Russell & Norvig, 2003; Davis, 2001). Knowledge-Based Systems benefit from the knowledge they hold to achieve their purposes by combining, modifying and generating new information. Such a process usually relies on the application of mathematical logic to correctly deduce or infer new knowledge out of known facts held in the Knowledge Base. For example, if we know that (a) Jim is the father of Peter and that (b) Dan is Jim's brother, we can deduce that (c) Dan is Peter's uncle. Mathematical Logic provides a
formal ground for defining such logical relationships, usually referred to as inference rules in AI. At the core of the reasoning process resides what we refer to as logical entailment or logical consequence. Returning to the previous example, the fact that Dan is Peter's uncle is the logical entailment of the other two facts (a) and (b). Automatically computing the entailments, referred to as deductive inference in AI, requires knowledge to be represented in a language having a formal syntax, semantics, and pragmatics (Brachman & Levesque, 2004), as illustrated in Chapter 2. Once represented using a formal language, knowledge is usually manipulated by general-purpose systems called inference engines, which deduce new knowledge at run time and hence support Knowledge-Based Systems in their rational behaviour. The most important properties of the inferencing process are its soundness and completeness. An inference algorithm is said to be sound if it only derives logically entailed sentences, and hence soundness is usually a highly desirable property¹. On the other hand, a reasoning process is said to be complete if it can derive all sentences that are entailed (Brachman & Levesque, 2004; Russell & Norvig, 2003). It is quite obvious that complete algorithms are desirable for Knowledge-Based Systems development. Unfortunately, it has been proven that complete algorithms are not always achievable (Brachman & Levesque, 2004). So far, in Artificial Intelligence, many logics with different expressiveness and computational characteristics have been defined and applied. A review of the main ones shows that there is an important tradeoff to be made between the expressiveness of a language and the tractability of the inferences, and this still remains an open issue (Brachman & Levesque, 2004): "Indeed, it can be argued that much of the research that is concerned with both knowledge representation and reasoning is concerned with finding interesting points in the tradeoff between tractability and expressiveness." Higher-Order Logic has the greatest expressive power among logics; however, this power makes automation difficult and incomplete. Therefore, Higher-Order Logic systems are usually applied for verifying human-written derivations rather than for automatically deducing new facts (Russell & Norvig, 2003; McAllester, 2001). The reader is referred to (Nipkow et al., 2002) for an example. First-Order Logic (FOL) is a more limited logic that has been widely applied in Artificial Intelligence. Even though FOL exhibits better computational characteristics than Higher-Order Logic, it still remains semidecidable (Russell & Norvig, 2003): "The question of entailment for first-order logic is semidecidable, that is, algorithms exist that say yes to every entailed sentence, but no algorithm exists that also says no to every nonentailed sentence." In addition, inference in FOL remains computationally expensive, making it unusable for large Knowledge Bases in practice. This fact is particularly relevant for an environment like the Web, where applications will likely integrate diverse sources of information spread across the Web, and therefore Knowledge Bases
¹ It is important to note, however, that soundness is not always desirable. A good example is default reasoning, where reasonable beliefs can later be invalidated by new assertions.
will presumably tend to be quite large. In other words, one of the main benefits of the Web derives from the extensive amount of information it holds, which can be greatly useful in many situations. However, we need methods and tools able to deal with presumably large and distributed Knowledge Bases in order to reason over the Web. To achieve better automation of the reasoning processes, many logics have been defined, and in fact this remains one of the central activities in the Knowledge Representation and Reasoning subfields of AI. A subset of FOL, namely Propositional Logic, is a very simple logic for which many efficient algorithms have been developed (Russell & Norvig, 2003); in fact, restricting it to Horn clauses, we can determine whether a Knowledge Base entails an atom in a linear number of steps (Brachman & Levesque, 2004). A somewhat different approach has also been undertaken by the so-called frame representations, which are driven by the fact that humans tend to organise their perception of the world into categories (e.g., carnivorous, herbivorous and omnivorous). This initiative has led to Frame Logics, a more procedural approach to knowledge representation than the previous ones we have presented. Another approach, inspired by the tendency humans have to think in terms of categories of objects, but without the procedural aspects of Frame Logics, is called Description Logics (DL). Description Logics, which are particularly relevant for the Semantic Web (see Chapter 2), are designed to simplify the description of categories of objects and their properties. They usually have appealing computational characteristics allowing automated inferencing over large Knowledge Bases. The downside of Description Logics is their limited inferencing capabilities (see Chapter 2). Finally, it is worth mentioning Horn Logic, a particularly interesting and widely applied subset of First-Order Logic. Horn Logic is a quite expressive subset for which calculating entailments becomes much more manageable, allowing for a much more efficient reasoning procedure than what is achievable in full First-Order Logic (Brachman & Levesque, 2004). Effective and efficient reasoning is not just concerned with logics, although their crucial role has been made clear. In addition, appropriate techniques, procedures and, in some cases, assumptions can make a great deal of difference to the efficiency or even the suitability of the whole reasoning mechanism. In the next section we will review some of the main state-of-the-art techniques in Knowledge-Based Reasoning.
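The tractability of Horn-clause entailment can be illustrated with a small sketch of our own (not code from any of the cited systems): a forward-chaining loop over ground definite clauses. For clarity this naive version is quadratic in the size of the clause set; the linear-time result cited from (Brachman & Levesque, 2004) is obtained by additionally keeping, for each clause, a counter of body atoms not yet derived.

```java
import java.util.*;

// A minimal sketch of forward-chaining entailment over ground definite
// (Horn) clauses: facts are clauses with empty bodies, rules derive their
// head once every body atom is known.
public class HornEntailment {

    // body1 AND ... AND bodyN -> head (a fact has an empty body).
    public record Clause(String head, List<String> body) {}

    // Returns true if the clause set entails the given atom.
    public static boolean entails(List<Clause> kb, String goal) {
        Set<String> known = new HashSet<>();
        boolean changed = true;
        while (changed) {                       // repeat until a fixpoint is reached
            changed = false;
            for (Clause c : kb) {
                if (!known.contains(c.head()) && known.containsAll(c.body())) {
                    known.add(c.head());        // all premises hold: derive the head
                    changed = true;
                }
            }
        }
        return known.contains(goal);
    }

    public static void main(String[] args) {
        List<Clause> kb = List.of(
            new Clause("father(jim,peter)", List.of()),
            new Clause("brother(dan,jim)", List.of()),
            // ground instance of: father(F,C) AND brother(U,F) -> uncle(U,C)
            new Clause("uncle(dan,peter)",
                       List.of("father(jim,peter)", "brother(dan,jim)")));
        System.out.println(entails(kb, "uncle(dan,peter)")); // prints "true"
    }
}
```

The example knowledge base encodes the Jim/Peter/Dan deduction used earlier in this section: the uncle fact is never asserted, only derived.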
3.2 Reasoning Control

Theorem-proving methods are domain-independent ways of reasoning which can be used as generic reasoning procedures. These methods can be applied over First-Order Logic facts independently of the domain we are reasoning about. However, just as in real life, it comes as no surprise that different approaches to reasoning may greatly differ in the performance of the reasoning activity. For instance, if we want to determine whether Peter has any brothers or sisters, it will obviously be faster to check whether Peter's parents have any other children rather than iterating over each and every person in the world, comparing his or her parents with Peter's. Still, in both cases, the assertions about the world would remain the same. What makes the difference here is the
reasoning procedure adopted. In Artificial Intelligence, the main knowledge-based reasoning procedures can be characterised as Backward-Chaining or Forward-Chaining (Brachman & Levesque, 2004; Russell & Norvig, 2003). Backward-Chaining, also referred to as goal-directed, starts with a set of goals as input and attempts to solve them by searching for goal premises over the asserted facts. The Backward-Chaining approach gave birth to logic programming, which is based on the idea that logic could be used as a programming language (e.g., Prolog (Kowalsky, 2001)). Unfortunately, Backward-Chaining can be quite inefficient and even non-deterministic, as the reasoning system can enter into endless loops (Brachman & Levesque, 2004; Kowalsky, 2001). Forward-Chaining, also called data-driven, starts with a set of asserted facts and applies all the deductive rules exhaustively until no further inferences can be made. This solution has proven to be more efficient than Backward-Chaining in many cases. Forward-Chaining is at the core of somewhat more procedural approaches like Production Systems. These have successfully and widely been applied in the development of so-called Expert Systems, which profit from domain-specific expertise to exhibit superior performance to generic reasoning methods (Davis, 2001; Russell & Norvig, 2003). In this type of system, the knowledge is very domain-specific and declared as a set of production rules which basically determine the system behaviour through several IF condition THEN action sentences. Unfortunately, this more procedural approach to knowledge representation usually obtains a performance boost at the price of losing formal semantics. Both Forward-Chaining and Backward-Chaining have interesting characteristics but, unfortunately, there is no perfect candidate for every problem. It seems that Forward-Chaining might be more appropriate for reacting to changes in the Knowledge Base, determining the implications of newly added facts, whereas Backward-Chaining is typically more useful in problem-solving activities where a goal is desired and the logical foundations for it have to be found. In order to improve their general performance, but also to better deal with special situations, some additional characteristics have been introduced in both reasoning procedures. For instance, in the case of Backward-Chaining, goal ordering can make a good deal of difference in terms of performance and is usually implemented (e.g., in Prolog). This allows the programmer to direct the exploration of the Knowledge Base to obtain a better reasoning behaviour. A somewhat similar feature is usually implemented in Forward-Chaining reasoners, where a salience (i.e., priority) can be established between the different rules in order to give priority to the execution of the most promising ones over the others. Finally, an important concern, which is particularly relevant to such an open, large and dynamic environment as the Web, regards dealing with incomplete knowledge. To understand its importance, it suffices to compare the situations where we want to prove something and (1) we have information about it (true or false), or (2) we do not have any information. In the first situation everything is clear, but what happens in the second one? How should the system react? There are basically two approaches to dealing with this situation. The first one assumes that if a fact is not asserted then we can take it as being false. This approach is called the Closed-World Assumption (Brachman & Levesque, 2004). The other approach is simply to say we don't know. It is quite
clear that under the Closed-World Assumption (CWA) we perform nonmonotonic reasoning, as new facts could invalidate previous beliefs. Therefore, under the CWA, some additional mechanisms (e.g., truth maintenance systems (Russell & Norvig, 2003)) need to be provided in order to maintain a consistent Knowledge Base.
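The goal-directed procedure can be sketched in a few lines (again our own illustration, not code from any cited system): to prove a goal, find a clause whose head matches it and recursively prove the clause's body. The visited set is the kind of loop guard needed to avoid the endless loops mentioned above (e.g., a clause p :- p).

```java
import java.util.*;

// A minimal backward-chaining sketch over ground definite (Horn) clauses.
public class BackwardChaining {

    // body1 AND ... AND bodyN -> head (a fact has an empty body).
    public record Clause(String head, List<String> body) {}

    public static boolean prove(List<Clause> kb, String goal, Set<String> visited) {
        if (!visited.add(goal)) return false;      // goal already being explored: cut the loop
        for (Clause c : kb) {
            if (c.head().equals(goal)) {
                boolean ok = true;
                for (String sub : c.body()) {      // goal order matters for performance
                    if (!prove(kb, sub, visited)) { ok = false; break; }
                }
                if (ok) { visited.remove(goal); return true; }
            }
        }
        visited.remove(goal);
        return false;
    }

    public static void main(String[] args) {
        List<Clause> kb = List.of(
            new Clause("parentsHaveAnotherChild(peter)", List.of()),
            new Clause("hasSibling(peter)",
                       List.of("parentsHaveAnotherChild(peter)")));
        System.out.println(prove(kb, "hasSibling(peter)", new HashSet<>())); // prints "true"
    }
}
```

Note how the search is driven by the goal: only clauses whose heads match are ever considered, which is exactly why the sibling question from the previous page is answered without enumerating every person in the world.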
3.2.1 Problem-Solving Methods

Despite the many mechanisms available for developing Knowledge-Based Systems, in the 1980s rule-based approaches showed their limitations in terms of scalability (Musen, 2000). These systems suffered from the rather naïve view that large and complex systems could be developed simply with a more or less modular set of rules and an inference engine (see Chapter 2). As a result, reusing and maintaining these Knowledge-Based Systems turned out to be particularly difficult and slowed down further developments. To cope with these difficulties, several researchers sought higher-level abstractions that could leverage Knowledge-Based Systems development into a more formal and applicable discipline. These efforts led to some formal methodologies for the construction of Knowledge-Based Systems, CommonKADS (Schreiber et al., 1999) being perhaps the most relevant (see Chapter 2). The core principles of these efforts to deal with systems complexity are the reuse of Knowledge Bases by means of ontologies, and the encapsulation and reuse of recurring reasoning patterns that are applied in different domains. Many researchers have advocated the development of Knowledge-Based Systems by reusing Problem-Solving Methods (PSMs), that is, software components that encode domain-independent sequences of inference steps for solving particular tasks (Chandrasekaran, 1986; Steels, 1990; Gennari et al., 1994; Studer et al., 1998). In Fensel & Benjamins' words (Fensel & Benjamins, 1998): "Problem-solving methods describe this control knowledge independently from the application domain thus enabling reuse of this knowledge for different domains and applications. It describes which reasoning steps and which types of knowledge are needed to perform a task." PSMs support reusing their problem-solving expertise by defining and sharing the concepts and relationships they manipulate in a task-specific but domain-independent way. For example, a Configuration PSM defines the concepts and relationships required for solving any kind of configuration problem, independently of the domain it is applied to, e.g., the construction of elevators. Ontologies provide a formal means for defining sharable knowledge representation schemas (see Chapter 2); therefore, ontologies and PSMs are strongly intertwined (Crubézy & Musen, 2004). PSMs typically define the concepts and relationships they handle in ontologies, so that other systems can use PSM terminologies and integrate PSM expertise. PSM integration typically relies on bridging the gap between different ontologies. This has mainly been done in two ways: (1) via some sort of mapping between ontologies (Gennari et al., 1994; Park et al., 1997), or (2) through the use of adapters to bridge the gap between task- and domain-specific knowledge (Fensel & Benjamins, 1998). With the advent of the Semantic Web, reusable PSMs have gained even more momentum for the development of Knowledge-Based Systems.
Perhaps the most relevant results have been achieved in the context of the IBROW project (Benjamins et al., 1998; Crubézy et al., 2003), which have been further developed in the area of Semantic Web Services (Domingue et al., 2004; Benjamins, 2003). In this context, good PSM techniques have been developed based on current Web Services technologies, as opposed to previous solutions that relied mainly on CORBA (Gennari et al., 1996, 1998), for example.
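The PSM idea of task-specific but domain-independent components can be caricatured in a few lines of Java. All names below are hypothetical, invented purely for illustration (real PSM frameworks such as those of the IBROW project are far richer): the interface plays the role of the method ontology, fixing the task vocabulary, while the mapping supplied at construction time plays the role of the domain adapter.

```java
import java.util.*;

// Hypothetical sketch: a Classification PSM as a generic, domain-independent
// component. The task vocabulary (observations, solution classes) is fixed by
// the interface; domain knowledge arrives only through the mapping.
interface ClassificationPSM<Observation, SolutionClass> {
    // Task: select the solution classes whose criteria match the observations.
    List<SolutionClass> classify(Set<Observation> observations);
}

// The domain bindings are supplied at construction time, not hard-wired
// into the method, so the same component serves any domain.
class MatchingClassifier<O, C> implements ClassificationPSM<O, C> {
    private final Map<C, Set<O>> criteria;   // domain-specific adapter/mapping

    MatchingClassifier(Map<C, Set<O>> criteria) { this.criteria = criteria; }

    public List<C> classify(Set<O> observations) {
        List<C> result = new ArrayList<>();
        for (var entry : criteria.entrySet())
            if (observations.containsAll(entry.getValue()))
                result.add(entry.getKey());  // every criterion is observed
        return result;
    }
}
```

The same `MatchingClassifier` could be instantiated with medical symptoms and diseases, or with elevator requirements and configurations: only the mapping changes, which is precisely the reuse the PSM literature argues for.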
3.3 Software Components

The development of Knowledge-Based Systems involves, as we have seen, an appropriate representation of their knowledge using some representation language(s). At runtime, the knowledge represented has to be appropriately interpreted in order to achieve the desired behaviour. A large number of components and engines have been developed in AI, and their characteristics differ to an important extent depending upon their intended functionality, the logics they support, or the reasoning techniques employed (e.g., backward-chaining, forward-chaining, etc.). In this section we enter into more pragmatic aspects of the development of Knowledge-Based Systems. We therefore review here the main software components available at the time of this writing, restricting our study to those that appear to be in a usable state and that are actively being developed and/or maintained. This analysis is particularly relevant for the next chapters, where we integrate some of these software libraries into particular applications. Our survey is restricted to Java-enabled tools, Java being the main programming language we will use in this thesis for its growing and very active community, especially in the context of the Semantic Web. In the context of this thesis, that is, in the context of developing Knowledge-Based Systems for the Web, we have characterised the different software components available into three kinds:
Ontology Toolkits: We understand by Ontology Toolkits mainly software components that support reading, modifying and generating ontologies and Knowledge Bases expressed according to some ontology. Hence, these software libraries, which are at the core of almost any KBS for the Web, support manipulating the main representation languages we previously identified in Chapter 2.

Ontology Storage and Querying Systems: These are software components dedicated to the manipulation of ontologies, but with a special emphasis on the persistence and manipulation of large Knowledge Bases in an efficient manner. It comes as no surprise that these systems build upon existing Ontology APIs or have led to the creation of their own.

Reasoning Systems: Last, we understand by Reasoning Systems any software that supports performing automated reasoning over a Knowledge Base. In other words, Reasoning Systems provide the capability to derive new knowledge or to (re)act intelligently based on the state of the Knowledge Base. This includes inference engines, production systems, theorem provers and so on.
We should note that the boundaries between the three categories are somewhat fuzzy, and some systems could fit well in more than one category. Therefore, this categorisation should not be taken as a strict and formal definition of the different systems available but, instead, as a particular organisation we employ for the sake of clarity, and in order to better understand the role these software components play in the final system. In the rest of this section we navigate through the different categories, describing their main functionalities, contrasting the different solutions available, and further detailing the ones we find to be the most relevant for our work.
3.3.1 Ontology Toolkits

In Chapter 2 we introduced the need for representing knowledge in a formal way to support its automated manipulation by computer programs. Thus, achieving an intelligent behaviour in KBSs is done through an effective manipulation of the knowledge encoded in any kind of rules (declarative or procedural) according to some ontology or ontologies. Therefore, KBSs need, at the very least, to be given the means to read, write, explore, and query their Knowledge Bases. We understand here as Ontology Toolkits the software components that offer this kind of functionality for the Web formats previously presented. Therefore, in this section we present the main software components available at the time of this writing for the manipulation of Knowledge Bases represented in RDF/RDFS or any of the OWL languages. Among the many characteristics these libraries have, we consider five of them as the most relevant in order to decide the appropriate toolkit(s) to use for the development of a specific application:
Languages: In Chapter 2 we presented the main ontology representation languages for the Web. Each of them has different syntaxes and their semantics greatly differ. It is therefore particularly relevant to know which languages and syntaxes the toolkits support, as this largely determines the interoperability of the systems built.

API Paradigm: In Software Engineering, an API (Application Programming Interface) is an abstract specification of the interface of a software component. It defines the (Object-Oriented) methods the component offers to applications and how these must be used. Because there does not yet seem to exist any consensus regarding what a general Ontology API could be, each and every tool implements its own API. Handling the Knowledge Base can be done in several ways, some more suitable than others for certain purposes. We mainly consider three general paradigms:

  Statement-centric: Data is manipulated at the statement level. For example, a new RDF statement is added with add(Subject, Predicate, Object) (see Chapter 2).

  Resource-centric: Data is manipulated at a higher abstraction level, where the developer works over resources such as concepts or properties. An example is adding a new property to a concept (e.g., resource.addProperty(P)).

  Ontology-centric: A higher abstraction level that supports working at an ontological level. The developer is given the possibility to work directly with ontology elements such as Concepts and Properties, and can also benefit from additional methods for performing typical operations over ontologies (e.g., obtaining all the subclasses of a given concept, accessing the instances of a particular concept, etc.). For example, resource.getSubclasses().

Reasoning Support: Implementing syntax parsers is not all there is to be done with ontologies. The languages come with specific semantics that establish the basis for performing automated reasoning. This characteristic establishes the reasoning supported by the toolkit, and therefore the reasoning activities application developers can benefit from. It is worth noting, however, that some tools are just syntax parsers and thus do not provide any reasoning support.

License: The license under which the software is distributed. This gives an idea about the potential evolution of the software and the possibility of adapting it for specific purposes, and it also explains the lack of information in some cases.

Latest Release: The date of the latest release. This characteristic has mainly been included in order to account for the latest development activities; it also identifies the version that has been analysed.

Based on these criteria, we have analysed the main toolkits available, namely Jena (Jena, 2004), the KAON API (Oberle et al., 2004b), the WonderWeb OWL API (Bechhofer et al., 2003), Rio (Rio, 2005) and the Redland RDF API (Redland, 2005).
We have deliberately omitted some others, such as Sergey Melnik's RDF API (Melnik, 2001), which appears to have been abandoned since January 2001, and JRDF (Newman et al., 2005), which is still at an early stage. Table 3.1 summarises the main characteristics of the toolkits that have been analysed. Among these toolkits, perhaps the most complete is Jena.
Jena, according to its developers, is "a Java framework for building Semantic Web applications". It features support for many RDF serialisation formats, an OWL API, an integrated rule-based inference engine, and several available plug-ins, such as Joseki for RDF data publishing over the Web (Joseki will be treated in more detail in the next section). Jena is increasingly widely used in the development of applications that make use of any of the Semantic Web standard formats; see for example Swoogle (Ding et al., 2004), ePerson (Banks et al., 2002), the well-known ontology editor Protégé (Noy et al., 2001), and our own Music Rights Clearing Organisation Xena described in Chapter 7. The KAON API (Oberle et al., 2004b) is also particularly relevant. KAON, the KArlsruhe ONtology management infrastructure, is a complete framework which includes several modules providing different functionality. The aim is to provide a general framework for ontology-based applications. It includes a module for editing ontologies, another for managing their evolution, a module for the efficient storage of knowledge bases, etc. The KAON API provides programmatic access to KAON ontologies independently of the storage mechanism applied, and it includes support for change notification, ontology evolution, etc. Finally, it is worth mentioning the WonderWeb OWL API (Bechhofer et al., 2003) for its influential role as the first implementation of an OWL API.
Tool              | Languages                           | API Paradigm                               | Reasoning Support       | License         | Latest Release                  | Other
Jena              | RDF/XML, N-Triples, N3              | Statement-, Resource- and Ontology-centric | RDFS, OWL-Lite, DIG 1.1 Interface | BSD-Like        | Version 2.1 (February 2004)     | Joseki can be used for RDF publishing
KAON API          | RDF/XML                             | Statement-, Resource- and Ontology-centric | RDFS                    | LGPL            | Version 1.2.9 (April 2005)      | Various modules available
WonderWeb OWL API | OWL (RDF/XML)                       | Resource-centric                           | OWL, DIG 1.0 Interface  | LGPL            | Version 1.4.2 (March 2005)      |
Rio               | RDF/XML, N-Triples, Turtle, N3      | Statement-centric                          |                         | LGPL            | Version 1.0.3 (April 2005)      | Part of Sesame
Redland           | RDF/XML, N-Triples, Turtle, N3, RSS | Statement- and Resource-centric            |                         | LGPL/GPL/Apache | Version 1.0.0.2 (February 2005) |

Table 3.1: Ontology Toolkits.
3.3.2 Ontology Storage and Querying Systems

Ontology-based systems make use of Knowledge Bases, encoded according to some ontology or ontologies, to achieve some rational behaviour. The knowledge they hold, that is, the knowledge that drives their execution, is often subject to changes which need to be tracked and stored for later executions. New facts may be asserted, others may change and some may just be deleted. Still, ontology-based systems must be aware of these changes and act accordingly; after all, the main idea underlying Knowledge-Based Systems is that of being driven by their Knowledge Base. At this point it might be worth remembering that the main role of ontologies is to support representing sharable vocabularies, in order to share and reuse Knowledge Bases, as well as to leverage systems communication from a purely syntactical activity to a semantic activity, paving the way for integrating or collaborating with pre-existing KBSs. It is incontestable that sharing ontologies has many potential applications; however, it also poses some technical requirements: Knowledge Bases become a potentially distributed resource that we still need to be able to manage effectively and efficiently. In order to support storing and publishing Knowledge Bases, Ontology Storage and Querying facilities have been developed. These are software applications that support the efficient storage and retrieval of Knowledge Bases. Ontology Storage and Querying systems, also called Ontology Servers, are usually built upon previous extensive work performed in the context of Databases, and provide a semantic level for manipulating the Knowledge Base. This semantic level usually supports accessing the Knowledge Base via predefined methods that allow, for example, retrieving the concepts, the instances, or the properties of some individuals; and, because these software systems are intended to support the manipulation of large Knowledge Bases, they also provide the means for efficient querying.
Ontology Storage and Querying systems are regarded as particularly important in the context of the Semantic Web, as well as, more broadly, in the context of Ontology-based systems development. Just as Databases did for traditional information-intensive applications, Ontology Servers are expected to have an important impact on the development of knowledge-intensive applications. This fact becomes more obvious when we take into account that the knowledge these applications will be based on can, and certainly will, be distributed over the network, with effective and automated communication being granted by the use of shared ontologies as a lingua franca. We have analysed a significant number of Ontology Storage and Querying systems that are available for the Java platform. Even though the survey was not intended to be exhaustive, we have analysed those that appear to be the most relevant: Kowari (Wood et al., 2005), Sesame (Broekstra et al., 2002), Joseki (Jena, 2004), KAON RDF Server (Oberle et al., 2004b), RDF Suite (Alexaki et al., 2001), YARS (Harth et al., 2005), Cerebra (Cerebra, 2005), Mandarax (Mandarax, 2005) and Snobase (Snobase, 2004). Again, we have deliberately omitted some systems, such as Inkling (Inkling, 2002), whose development has been discontinued, and Triple (Sintek et al., 2005), which we have opted to treat in the last category of infrastructural software components. Table 3.2 summarises their most relevant characteristics. Since these systems need to handle ontologies, they share a good set of characteristics with the previous category of
systems (i.e., ontology toolkits). However, there are some specific characteristics as well, which are particularly relevant for this kind of system:
Model Storage: An Ontology Server's main role is to support the persistence of Knowledge Bases. Therefore, the mechanism or mechanisms employed for the persistence of Knowledge Bases are particularly relevant.

Query Language: Accessing large Knowledge Bases requires appropriate means for effectively and efficiently retrieving the desired information. Thus, this kind of software component offers mechanisms for querying the Knowledge Bases. In this characteristic we account for the query languages supported.

The majority of the systems found are particularly geared towards the storage and querying of RDF metadata, since it is the most widespread ontology representation language for the Web and has been subject to intensive and extensive research in recent years. Additionally, some of the systems also support part of OWL in terms of their reasoning capabilities, since syntactically OWL documents are valid RDF documents. Most of the systems delegate data persistence to traditional Database Management Systems (DBMS), be they Relational (R-DBMS) or Object-Oriented (O-DBMS). Traditional DBMSs provide an efficient, robust, well-tested and widely applied solution to storing large amounts of data, and this therefore seems to be a reasonable decision. There exist, however, some solutions, such as Kowari, which have opted for a native storage mechanism, thus reducing, perhaps unnecessarily, the portability of the information, which gets bound to a specific backend. What makes the difference between the solutions we have analysed is their performance and querying capabilities, which are inherently strongly related. There are currently several query languages available for querying Ontology Servers. Among them, the most widely used are RDF Query Language (RQL) (Karvounarakis et al., 2002) and RDF Data Query Language (RDQL) (Seaborne, 2002), which was submitted for consideration to the W3C. Based on RQL, RDQL and N3, the Sesame RDF Query Language (SeRQL) (Broekstra & Kampman, 2003) has been defined and developed by the Sesame developers in an attempt to provide a fully-fledged solution based on past experience. Kowari's iTQL (Wood et al., 2005) is yet another query language with some interesting characteristics, such as the subquery construct, which supports implementing the existential quantifiers, universal quantifiers, and cardinality restrictions described in all three OWL languages.
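To give a flavour of these languages, a minimal RDQL query over hypothetical FOAF data (the vocabulary choice is ours, purely for illustration) selects all resources typed as Person using RDQL's triple-pattern syntax:

```
SELECT ?person
WHERE (?person,
       <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>,
       <http://xmlns.com/foaf/0.1/Person>)
```

RQL, SeRQL and iTQL express essentially the same pattern with differing syntax and differing additional constructs.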
The reader is referred to (Magkanaraki et al., 2002; Prud'hommeaux & Grosof, 2004; Haase et al., 2004) for a deeper analysis and comparison of many of the different query languages available. Concerning the performance of the existing systems, several surveys have been undertaken by researchers, see for example (Lee, 2004) or the older but still useful surveys (Magkanaraki et al., 2002; Barstow, 2001). However, giving a performance ranking would be unfair, as the performance of the different systems greatly differs depending on the application (e.g., which are the most common operations, which are the more critical ones, etc.), on the environment (e.g., DBMS used, available memory, etc.), on the query language used, etc. For instance, some application developers have opted to use Joseki (Mazzocchi et al., 2005; Banks et al., 2002), some others have decided to develop their own solution (Quan et al., 2003), and some others have delegated the Knowledge Base persistence to Sesame, like Flink (Mika, 2004), winner of the 1st prize at the Semantic Web Challenge in 2004, and our own application ODEA, see Chapter 6. Even though the efficient manipulation of Knowledge Bases is an important and in many cases necessary step in the development of Knowledge-Based Systems for the Web, it often needs to be complemented with additional reasoning mechanisms. In the next section we review the last category of software components, namely the reasoning systems, which are mainly focussed on providing advanced inferencing support.
Table 3.2: Java Ontology Storage and Querying Systems. (Columns: Tool; Languages; API Paradigm; Storage Model; Query Language; Reasoning Support; License; Latest Release; Other. Tools covered: Kowari, Sesame, Joseki (Jena), KAON RDF Server, RDF Suite, YARS, Cerebra, Mandarax, Snobase.)
3.3.3 Reasoning Systems

We previously defined what reasoning means in AI, and the role it plays at runtime in driving the execution of Knowledge-Based Systems. In Section 3.1 we reviewed the role of logics in automated reasoning and we introduced the main reasoning control mechanisms that have been applied in AI to date. We now turn our attention to the software components available that implement the reasoning concepts and techniques previously reviewed in order to perform advanced automated inferencing over the Knowledge Base. Reasoning Systems, as we call them, provide the capability to derive new knowledge or to (re)act intelligently based on the state of the Knowledge Base. We therefore include in this category inference engines, production systems, theorem provers and so on. Java developers can nowadays make use of a great variety of reasoning systems. Moreover, with the advent of the Semantic Web and the current emphasis on Business Intelligence, a lot of effort is being devoted to the development of new tools and the improvement of previously existing ones. We have reviewed those that, in our opinion, are more relevant in the context of this thesis. Thus, we have particularly focussed on the reasoning systems that support some of the Web languages (see Chapter 2), and we have enlarged the scope of our survey to include the best-regarded general reasoning systems.
In fact, limiting the study to Web languages would unnecessarily limit the reasoning capabilities of the applications, and given that there is no standard rule language for the Web, we would in any case need to make use of non-standard ones (see Chapter 2). Table 3.3 summarises the results of our survey. Some of the systems analysed were also present in the previous section, where we reviewed Ontology Storage and Querying systems.
In fact, as we previously said, the limits between the different categories are somewhat fuzzy, as many Knowledge-Based Systems require the power brought by all three kinds of software components; that is, they need to manipulate ontologies, support large Knowledge Bases efficiently and benefit from powerful inferencing mechanisms. Hence, many vendors and developers intend to provide solutions that fulfil all the applications' requirements. For instance, Cerebra represents a good example of quite a complete solution, where the manipulation of ontologies with complete OWL-DL reasoning support is coupled with storage and querying facilities.
Other examples are Mandarax, which couples the manipulation of large RDF/RDFS Knowledge Bases with derivation rules, or Snobase, which includes all sorts of reasoning mechanisms as well as some learning algorithms (e.g., Naive Bayes, Gaussian neighbourhood, etc.). RDF/RDFS and the OWL flavours are regarded as the standard ontology representation languages in the Semantic Web. Therefore, Web-oriented KBSs will presumably need to manipulate OWL or RDF/RDFS Knowledge Bases. As a consequence, the engines that support RDF/RDFS or any of the OWL languages are particularly relevant for the development of KBSs for the Web. Since OWL is rooted in Description Logics, which are generally given a semantics that makes them a subset of First-Order Logic, mainly three different approaches have been undertaken to develop OWL inference engines (Zou et al., 2004):
Specialised Description Logics Classifier: Description Logics reasoners are perhaps the most widely used tools for OWL reasoning. This is the case for RACER (Haarslev & Möller, 2003), FaCT/FaCT++ (Fact, 2005), and Pellet (Pellet, 2004). They all implement different types of Description Logics. RACER is a complete reasoner for OWL-DL, supporting reasoning both over concepts (Tbox) and over individuals (Abox). The FaCT system only supports Tbox reasoning, and finally Pellet provides reasoning that is sound and complete for OWL-DL without nominals and OWL-DL without inverse properties.
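To give a concrete flavour of what Tbox reasoning provides: a classifier computes subsumption relations between concepts. Reasoners such as RACER or FaCT decide subsumption for expressive logics using tableau algorithms; the self-contained sketch below (all concept names invented) merely propagates told subclass axioms transitively, which illustrates the service offered, not the decision procedure actually used.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy "classifier": computes reachability over told subClassOf axioms.
// Illustrates the Tbox subsumption service of DL reasoners, not their
// tableau-based decision procedures.
public class ToyClassifier {

    private final Map<String, Set<String>> superClasses = new HashMap<>();

    void addAxiom(String sub, String sup) {
        superClasses.computeIfAbsent(sub, k -> new HashSet<>()).add(sup);
    }

    // Does the Tbox entail that sub is subsumed by sup?
    boolean subsumes(String sup, String sub) {
        if (sub.equals(sup)) return true;
        Set<String> visited = new HashSet<>();
        java.util.Deque<String> todo = new java.util.ArrayDeque<>();
        todo.push(sub);
        while (!todo.isEmpty()) {
            String c = todo.pop();
            if (!visited.add(c)) continue;          // already explored
            for (String parent : superClasses.getOrDefault(c, Set.of())) {
                if (parent.equals(sup)) return true;
                todo.push(parent);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        ToyClassifier tbox = new ToyClassifier();
        tbox.addAxiom("InfectiousDisease", "Disease");
        tbox.addAxiom("Meningitis", "InfectiousDisease");
        System.out.println(tbox.subsumes("Disease", "Meningitis")); // true
    }
}
```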
Full First-Order Logic Theorem Prover: OWL statements can be translated into First-Order Logic, and it is therefore possible to use First-Order Theorem Provers to do OWL inferencing. This approach has been followed by Hoolet (Bechhofer, 2004), but is unlikely to scale well.
Subset of First-Order Logic Reasoner: The last approach consists of applying a subset of First-Order Logic and reasoning with a general-purpose inference engine. This is the case for Euler (Roo, 2005) or F-OWL (Zou et al., 2004), which are useful yet incomplete OWL-Full reasoners.

Apart from OWL reasoners, Triple (Sintek et al., 2005), which is mainly concerned with RDFS reasoning, supports defining user rules expressed in its rule language, also called Triple, which is derived from F-Logic. A somewhat similar approach is followed by SweetRules (SweetRules, 2004), which is particularly focussed on supporting the definition of rules over concepts defined in OWL ontologies. SweetRules supports an important number of languages, such as OWL, RuleML, SWRL, or Jess (Friedman-Hill, 2003), and is based on semantics-preserving translation between the different languages to support interoperation with language-specific inference engines. Furthermore, we must mention an important number of general reasoners that are currently available for Java developers. Jess (Friedman-Hill, 2003) is regarded as one of the most powerful inference engines available for Java. It is a rule engine and scripting language based on CLIPS (Riley, 2004) and completely written in Java.
It therefore enables accessing all the packages available for Java from the rule engine and, conversely, enables the seamless integration of rule-based technologies into Java programs. Among the key features of this tool, we must mention its outstanding capability to maintain a bidirectional communication between its workspace and the Java program's workspace. Jess is in fact the core of many reasoners that have been developed for Semantic Web formats (e.g., OWLJessKB, formerly known as DamlJessKB (Kopena & Regli, 2003)), and has been integrated in our Online Design of Events Application described in Chapter 6. Another engine worth mentioning is OntoBroker (Ontobroker, 2005). It is a commercial F-Logic reasoner that has been applied in several applications and projects, e.g., OntoWeb (Ontoweb, 2005) or OBELIX (Obelix, 2004). Last, but not least, we must also mention the main Prolog implementations for Java: XSB (The XSB Project, 2005) and JLog (JLog, 2005). In summary, a large and still growing number of reasoning systems are available nowadays for Java developers. They greatly differ in their logical foundation, their reasoning power, the algorithms they make use of, and the knowledge representation format(s) they support. In this respect, it is worth noting that there is little, if any, compatibility between different engines, making it difficult to switch from one to another, or even to support their more or less seamless integration.
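Despite these differences, most of the production-rule engines above implement some variant of the same data-driven recognise-act cycle: match rules against the working memory, fire those whose conditions hold, and repeat until nothing new can be derived. A deliberately minimal, self-contained sketch follows; fact and rule names are invented, and real engines such as Jess use far more efficient matching (e.g., the Rete algorithm) rather than this naive loop.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal forward-chaining loop: fire every rule whose antecedents are
// all in the fact base, add its consequent, repeat until a fixpoint.
// Facts are plain strings; all names are invented for illustration.
public class ForwardChainer {

    record Rule(Set<String> antecedents, String consequent) {}

    static Set<String> saturate(Set<String> facts, List<Rule> rules) {
        Set<String> known = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {                       // data-driven control loop
            changed = false;
            for (Rule r : rules) {
                if (known.containsAll(r.antecedents()) && known.add(r.consequent())) {
                    changed = true;             // new fact derived, iterate again
                }
            }
        }
        return known;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule(Set.of("fever", "stiff-neck"), "suspect-meningitis"),
            new Rule(Set.of("suspect-meningitis"), "order-lumbar-puncture"));
        Set<String> result = saturate(Set.of("fever", "stiff-neck"), rules);
        System.out.println(result.contains("order-lumbar-puncture")); // true
    }
}
```

Backward chaining runs the same rules in the opposite direction, working from a goal back to the facts that would establish it.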
Table 3.3: Java Reasoning Systems. (Columns: Tool; Languages Supported; Reasoning Support; License; Latest Release; Other. Tools covered: Cerebra, Mandarax, Snobase, SweetRules, JRules, Bossam, Hoolet, RACER, FaCT/FaCT++, Euler, JTP, Pellet, Triple, F-OWL, Ontobroker, Jess, XSB, Algernon, Drools, JLog.)
3.4 Summary, Conclusions and Discussion

Knowledge-Based Systems benefit from the knowledge they hold to achieve their purposes by combining, modifying, and generating new information. Such a process usually relies on the application of logics to correctly deduce or infer new knowledge out of known facts held in the Knowledge Base. So far, in Artificial Intelligence, many logics with different expressiveness and computational characteristics have been defined and applied. A review of the main ones shows that there is an important tradeoff to be made between the expressiveness of a language and the tractability of the inferences. Still, effective and efficient reasoning is not just concerned with logics. In addition, appropriate techniques, procedures and, in some cases, assumptions can make a great deal of difference to the efficiency or even the suitability of the whole reasoning mechanism.
We have introduced Forward-Chaining and Backward-Chaining as the main knowledge-based reasoning procedures. Additionally, Knowledge Engineering methodologies have advocated the development of Knowledge-Based Systems by reusing Problem-Solving Methods (PSMs), that is, software components that encode domain-independent sequences of inference steps for solving particular tasks. PSMs support reusing their problem-solving expertise by defining and sharing the concepts and relationships they manipulate, in a task-specific but domain-independent way. PSMs are essentially the means for leveraging reuse in the construction of KBSs to a higher level, where the expertise for solving particular tasks can also be reused.

Knowledge-Based Systems need to manipulate their knowledge effectively and efficiently; this knowledge is typically represented as a mixture of Knowledge Bases expressed according to some ontology or ontologies and a set of declarative or procedural rules. Several systems have been developed so far for the manipulation of ontologies, for the effective persistence and querying of Knowledge Bases, and for performing automated reasoning over Knowledge Bases. A plethora of systems, heterogeneous in terms of their internal features but also in terms of how their functionalities are exposed to third parties, is currently available. For instance, there is a great number of existing query languages and interaction protocols, which has inevitably slowed down a wider adoption and integration of Ontology Storage and Querying systems in current applications. In an attempt to provide a generic interface for all of them, some researchers are currently working on the standardisation of both query languages and access protocols.
In this respect, SPARQL is a work in progress by the W3C RDF Data Access Working Group to provide a standard means for manipulating RDF storage systems. It is composed of the SPARQL query language for RDF (Prud'hommeaux & Seaborne, 2004) and the SPARQL protocol (Clark, 2005). Even though it is not yet a standard, Joseki, Jena and a few other tools already implement SPARQL. Similarly, in an attempt to provide a partial solution to this problem, the Java Specification Request 94 (JSR-94) has been defined by some of the main rule engine vendors (Scott et al., 2003). JSR-94 is the specification of a lightweight programming interface for acquiring and using a rule engine in a standard way. The goal of the API is to become a standard that would give software independence from the rule engine. Although there are currently a few inference engines that already implement that API (see Table 3.3), Jess being
the reference implementation, it is still far from being a standard. Choosing an appropriate reasoning system is therefore subject to a deep requirements analysis, inevitably coupled with Software Engineering best practices, to achieve software quality and support the seamless integration of the often large set of components that make up the KBS. In our case studies, particularly in the Online Design of Events Application (see Chapter 6), we pay special attention to these issues. Different applications have different requirements and different constraints, hence there is no perfect candidate tool, or set of tools, to support the development of fully-fledged Knowledge-Based Systems for the Web. Still, KBSs need to integrate a more or less complex set of these infrastructural systems, as we call them, to achieve their purposes. Some researchers advocate integrated systems that support manipulating ontologies, rules and Knowledge Bases, such as OntoBroker (Ontoprise GmbH, 2005) or Triple (Sintek et al., 2005). These somewhat monolithic approaches originate from traditional expert system architectures (i.e., a Knowledge Base and an inference engine), and therefore share their limitations. For instance, these systems usually present a lack of versatility in terms of their reasoning capabilities. Not to mention the fact that this kind of system typically builds upon some internal representation language (e.g., Triple, F-Logic) other than those defined for the Web. Other researchers have proposed the KAON application server, a component-based infrastructure for the development of Semantic Web applications (Oberle et al., 2004a, 2005; Oberle, 2004). This solution overcomes the difficulties associated with the previous monolithic approach by supporting the seamless plugging of diverse software components in order to create Semantic Web applications. However, this approach is oriented towards covering the whole development cycle of Web applications that make use of Semantic Web technologies, and leaves the reasoning infrastructure unspecified. In other words, building Knowledge-Based Services on the basis of the application server proposed by (Oberle et al., 2005) still requires defining the whole reasoning infrastructure. Moreover, it imposes important decisions, such as the inevitable integration of a heavyweight component, an application server, and consequently the fact that the system shall, unreservedly, be given a Web-based user interface. This might seem irrelevant at first glance, but as we will see in the Online Design of Events case study (see Chapter 6), creating Web-based user interfaces for Knowledge-Based Systems is far from trivial. Not to mention the fact that, for reasoning over the Web, we do not necessarily need a Web-based user interface. A good example can be found in our second case study as well (see Chapter 7). In the next part of this thesis we describe our Opportunistic Reasoning Platform, a generic platform for developing Knowledge-Based Services over the Web. The platform reconciles best Software and Knowledge Engineering principles into a versatile reasoning infrastructure that supports seamlessly plugging in diverse reasoning components, which opportunistically contribute to reasoning activities in distributed environments like the Web.
Part II Opportunistic Platform
Semantic Web technologies appear as promising candidates for supporting new kinds of applications by applying Artificial Intelligence techniques to the information and processes distributed over the Web. An extensive review of these technologies shows that there are, however, a large number of knowledge representation languages, an overwhelming set of software components offering diverse functionality, and the need to apply a rapidly growing body of heterogeneous knowledge, ranging from well-known Software Engineering concepts to advanced Artificial Intelligence techniques. What stands out is the need to reconcile an important number of technical and theoretical aspects in order to support the construction of fully-fledged knowledge-based solutions for the Web. Some researchers have already undertaken the construction of development frameworks, composed of diverse software components, that aim to fulfil the very diverse requirements of the developers of Web-oriented Knowledge-Based Systems. Although these development frameworks are an important step forward, they are not all there is.
The early days of Knowledge-Based Systems development already accounted for the need to apply engineering principles, both in terms of software development and knowledge modelling. When it comes to the Web, with its distinguishing decentralised and autonomous traits, the integration of engineering practices becomes of even greater importance. In this thesis, we investigate how to implement fully-fledged Knowledge-Based Services over the Web, based on the firm belief that (Bass et al., 2003):

Having an environment or infrastructure that actively assists developers in creating and maintaining the architecture (as opposed to just code) is better.

Therefore, instead of providing just code, we propose a platform that assists developers in the construction of Knowledge-Based Services for the Web. The Opportunistic Reasoning Platform, as we call it, is composed of a widely applicable reasoning model, the Blackboard Model, coupled with an in-depth analysis of its features, and a general software architecture with a skeletal and reusable implementation of the reasoning model. Thus, our platform does not provide developers with just some reusable software components; instead, it offers them a reusable architecture, together with a reusable implementation of a widely applicable reasoning model, whose scope, capabilities and intricacies have been extensively analysed and documented. This part of the thesis is devoted to presenting and describing our Opportunistic Reasoning Platform. In a first step we present, describe and analyse the Blackboard Model of reasoning, and, in a second step, we present and describe the software platform.
Chapter 4

The Blackboard Model
In this chapter we present and characterise the Blackboard Model, which is the Opportunistic-Reasoning Model par excellence. We first introduce the metaphor that describes this reasoning model. We then analyse the implications of the blackboard approach with respect to supporting automated knowledge-based reasoning. Further, we attempt to establish the appropriateness of the Blackboard Model for performing different knowledge-intensive tasks, taking into account previous use cases in the blackboard literature as well as previous research undertaken in the Knowledge Engineering community. Finally, we analyse the suitability of this reasoning model for supporting reasoning processes over the Web.
4.1 Introduction

In the early 1970s the development of Knowledge-Based Systems was historically dominated by the classical Expert Systems structure, that is, a Knowledge Base, an inference engine, and the working memory (Erman et al., 1988b). MYCIN is often taken as the paradigmatic example of this classic approach, with a Knowledge Base composed of more than 400 rules interpreted at runtime by a backward-chaining inference engine in order to diagnose infectious diseases (Davis, 2001). Classical Knowledge-Based Systems were based on a rather naïve view that complex systems could be built upon a more or less extensive set of rules. The numerous systems developed under such an assumption proved that it had some important weaknesses (Erman et al., 1988b): (1) the control of the application of the knowledge is implicit in the structure of the knowledge base, e.g. in the ordering of the rules for a rule-based system; (2) the representation of the knowledge is dependent on the nature of the inference engine (a rule interpreter, for example, can only work with knowledge expressed as rules). Originating from these weaknesses, the development of complex Knowledge-Based Systems was a hard and tedious task, requiring the creation of complete systems mostly from scratch, with very little possibility for knowledge or software reuse (other than the inference engines themselves). Similarly, maintenance of these systems was often an even harder task because of the inherent interdependencies existing between the many rules. Last but certainly not least, the strict dependency on inference engines created the need to choose a particular one over the others at a very early stage in the development, with a difficult and expensive back-out, and it disallowed applying different reasoning methods over the KB. While hybrid inference engines provide a solution to this last limitation by supporting both forward- and backward-chaining reasoning at will, they failed to overcome the other limitations. During the 1970s and 1980s the Blackboard Model was proposed, developed, and applied in an important number of applications as a way to surmount the previous drawbacks of the classic Knowledge-Based Systems approach. The Blackboard Model is often presented using the analogy of a group of persons trying to put together a jigsaw puzzle (Engelmore et al., 1988c), see Figure 4.1:

Imagine a room with a large blackboard and around it a group of people each holding over-size jigsaw pieces.
We start with volunteers who put on the blackboard their most promising pieces. Each member of the group looks at his pieces and sees if any of them fit into the pieces already on the blackboard. Those with the appropriate pieces go up to the blackboard and update the evolving solution. The new updates cause other pieces to fall into place, and other people go to the blackboard to add their pieces. It does not matter whether one person holds more pieces than another.
The whole puzzle can be solved in complete silence; that is, there need be no direct communication among the group. Each person is self-activating, knowing when his pieces will contribute to the solution. No a priori established order exists for people to go up to the blackboard. The apparent cooperative behaviour is mediated by the state of the solution on the blackboard. If one watches the task being performed, the solution is built incrementally (one piece at a time) and opportunistically (as an opportunity for adding a piece arises), as opposed to starting, say, systematically from the left top corner and trying each piece.

The Blackboard Model has often been applied to knowledge-based problem-solving, and it is often also referred to as the Blackboard Problem-Solving Model.
In the blackboard literature, the group of people are referred to as experts to better account for the role they play (Engelmore & Morgan, 1988a). The fundamental philosophy of this problem-solving model establishes that experts, also known as Knowledge Sources (KSs) in the blackboard literature, do not communicate with each other directly; instead, all the interactions strictly happen through modifications on the blackboard. Experts on particular aspects of the problem contribute to the overall problem-solving activity in an incremental and opportunistic way. In any kind of reasoning, a central question is: what to do next? When it comes to automated reasoning, this involves determining which pieces of knowledge to apply and when to do it. This is what is usually referred to as the
control problem. The Blackboard Model is a highly structured opportunistic reasoning model which establishes that reasoning should respond opportunistically to changes on the blackboard, in other words, to the current state of the problem-solving process. As a conceptual model, it does not prescribe how the control problem should be addressed. In fact, it can be, and has actually been, implemented in many different ways.

Figure 4.1: The Blackboard Model.
The Blackboard Model often also includes a moderator or a controller that establishes a global order among expert (i.e., Knowledge Source) interactions. Such a restriction is just a way to limit concurrency, thereby simplifying the alignment of the blackboard model with typically, though not mandatorily, serialised implementations (Engelmore et al., 1988c). Still, the essence of the Blackboard Model remains: a group of experts contribute to the overall solution with no direct communication among them, the solution being built incrementally and opportunistically. In the next section we review the main characteristics of the blackboard model and the consequences these have for knowledge-based reasoning.
4.2 Characteristics

The Blackboard Model as described by the metaphor is a conceptual definition of a reasoning behaviour and does not prescribe any particular implementation detail. It is therefore important not to take the Blackboard Model as a computational specification (despite perhaps the eventual presence of a moderator), but instead as a conceptual guideline about how to perform problem-solving reasoning, or, to reuse Hayes-Roth et al.'s words, a general model of cognition (Hayes-Roth et al., 1988b). Even though the Blackboard Model is relatively easy to understand, it encapsulates an important number of characteristics which are not that obvious at first glance. In this section, we will cover the main ones, leaving aside more computational or technical aspects, which we will address in Chapter 5.

The Blackboard Model stems from a direct application of the divide and conquer principle. The rationale underlying such a principle is to decompose complex problems into more manageable and presumably simpler ones. Solutions to a problem can be obtained by bringing together solutions to sub-problems. This is a well-known, widely applied, and often successful approach to dealing with complex problems.
In Computer Science, for example, it has prominently been applied in recursive algorithms, and it seems to be a typical human reasoning technique for addressing complex problems. Interestingly, some researchers have mapped specific human problem-solving onto automated blackboard implementations. A good example is shown in (Hayes-Roth et al., 1988b), where the authors were particularly interested in obtaining a psychologically reasonable model of planning. Their results demonstrated the ability of the Blackboard Model in this respect. In general, we can say that this reasoning model, for its similarities with human behaviour, also represents a good tool for exploring unknown problems in a smooth and incremental way.
It effectively supports the system developers during the exploration of the problem and the methods to solve it, which can later on be (relatively) easily mapped into an automated blackboard-based implementation.

In addition to simplifying problem-solving, the divide and conquer approach leads to partitioning the expertise required for the reasoning activity. The divide and conquer approach underpinning the Blackboard Model thus relies on a partitioning of the problem-solving expertise that propagates over the whole application development cycle. For instance, the knowledge acquisition phase, driven by a complete separation of concerns, gets itself partitioned into smaller phases where subsets of the expertise are to be captured. As a result, acquiring knowledge is simplified, and it can even be distributed over different groups of experts. Similarly, the knowledge representation and system implementation phases are also affected by the independence among the Knowledge Sources: independent Knowledge Sources can apply different techniques, different reasoning systems and different representation languages for their internal and private expertise. This allows for the outstanding capability of the Blackboard Model to seamlessly integrate different reasoning mechanisms to overcome the limitations of a unique technique. Finally, the modularity brought by the independence of expertise supports the incremental development, extension and replacement of Knowledge Sources in a relatively simple way.

The blackboard metaphor establishes that all the interactions among the knowledge sources must take place indirectly via the shared blackboard. As a consequence, communication among experts does not rely on any agreed interaction protocol. Instead, it requires a common representation language for the information to be understood by the various Knowledge Sources. Typically, Knowledge Sources need not understand each and every thing placed on the blackboard. They are usually concerned with a small part of the information. Blackboard-based system developers must therefore find a tradeoff between the specificity of the knowledge representation, to support experts in their own activities, and the genericity required for a shared and general understanding of some common aspects of the overall reasoning activity. In fact, finding such a tradeoff is known to be one of the main aspects of blackboard application development (Corkill, 1991).
Still, the Blackboard Model does not impose any sort of representation mechanism, nor does it determine the information to be placed on the blackboard. The independence of expertise in such a collaborative reasoning model, coupled with the need for sharing information and the flexibility in terms of knowledge representation, has often led blackboard developers to hierarchically partition the expertise. Even though such a Knowledge Level division is not prescribed by the Blackboard Model, it has proven to be effective in many applications (see (Nii et al., 1988b; Erman et al., 1988b), or our own application described in Chapter 6). A hierarchical organisation of the expertise provides a smooth integration of different perspectives and multiple levels of abstraction over the same problem, and facilitates the sharing of information among the experts.
Specific details of a given sub-problem can be abstracted away and delegated to the respective expert(s) by means of conceptual hierarchies. Similarly, experts of a given sub-problem do not need to be aware of the state of the problem-solving at upper levels.
This characteristic of the Blackboard Model
is what Pfleger & Hayes-Roth refer to as smooth integration of top-down and bottom-up reasoning (Pfleger & Hayes-Roth, 1997) and has successfully been applied in a variety of domains such as signal understanding (Erman et al., 1988b), planning (Hayes-Roth et al., 1988b), and event designing (see Chapter 6). In the Blackboard Model, each expert is self-activating.
In fact, this
is a natural requirement if we want communication to take place only via the shared blackboard, with no direct communication among the Knowledge Sources.
Hence, this reasoning model relies on event-based activations of the Knowledge Sources, who, much like the people solving the jigsaw puzzle, are watching the blackboard in order to react to changes made by other KSs. The event-based activation enables applying this reasoning model to a number of special applications. For instance, it supports the automated reaction to urgent problems, which is particularly necessary in real-time control applications, and was one of the main reasons for the developers of MUSE (Reynolds, 1988) or BLOBS (Zanconato, 1988) to adopt the Blackboard Model. As promoted by the blackboard metaphor, the Blackboard Model is an incremental reasoning model where reasoning progresses step by step. Consequently, solutions are generated incrementally, guided by the different and collaborating sources of expertise.
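To make the event-based activation concrete, the following sketch shows Knowledge Sources that watch a shared blackboard and self-activate when a change they care about occurs. It is illustrative only; the class and method names are our own, not taken from any existing blackboard framework.

```python
class Blackboard:
    """Shared store; notifies every registered Knowledge Source on change."""
    def __init__(self):
        self.entries = {}
        self.watchers = []

    def post(self, key, value):
        self.entries[key] = value
        # Event-based activation: each watching KS decides whether to react.
        for ks in self.watchers:
            ks.notify(self, key)

class KnowledgeSource:
    """Self-activating expert: reacts only to changes matching its trigger."""
    def __init__(self, name, trigger, action):
        self.name, self.trigger, self.action = name, trigger, action

    def notify(self, board, key):
        if self.trigger(key):
            self.action(board)

# Hypothetical usage: a KS that refines raw input once it appears.
board = Blackboard()
refiner = KnowledgeSource(
    "refiner",
    trigger=lambda key: key == "raw",
    action=lambda b: b.entries.setdefault("refined", b.entries["raw"].upper()),
)
board.watchers.append(refiner)
board.post("raw", "signal")   # the KS fires without being called directly
```

Note that all coordination happens through `post`; the Knowledge Sources never call one another, mirroring the indirect communication the metaphor prescribes.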
This allows for a guided exploration of the possible
solutions, potentially reducing the amount of computation to be performed in order to find a solution (if any). Moreover, thanks to the application of several sources of knowledge focussing on different aspects of the problem, the Blackboard Model of reasoning represents a good heuristic that avoids some typical problems suffered by traditional algorithms whose behaviour is pre-defined, such as Hill-Climbing, to cite a well-known example. The behaviour exhibited by the Blackboard Problem-Solving Model is characterised by its ever-changing focus of attention, or, in Feigenbaum's words, the island-driven exploration of the solution space (Feigenbaum, 1988), which confers on this reasoning model the outstanding ability to deal with uncertainty and smoothly adapt to changing conditions. Finally, the distinctive property of the Blackboard Model, which stems from the previous characteristics, is what is usually referred to as opportunistic reasoning. In (Erman et al., 1988b) the authors define opportunistic reasoning as the ability of a system to exploit its best data and most promising methods. Opportunistic reasoning is a flexible knowledge application strategy, as opposed to fixed algorithms whose behaviour is pre-established and cannot therefore be adapted and reoriented to face particular situations, at least not easily. Hence, this expression accounts for the prominent ability of the Blackboard Model to seamlessly integrate a collection of collaborating knowledge sources into the overall reasoning activity, applying their expertise at the most opportune time. In Artificial Intelligence, the Blackboard Model is considered as the opportunistic reasoning model: the reason why it has been chosen in an important number of applications (Engelmore et al., 1988a). Perhaps the most prominent example of the opportunistic reasoning capabilities of the Blackboard Model is illustrated by the BB1 blackboard control architecture (Hayes-Roth & Hewett, 1988).
BB1 was a task-independent implementation of the blackboard control architecture whose goal was to provide a solution to the control problem (i.e., deciding which of the potential actions the system should perform next). Through this work, the authors explored the ability of the Blackboard Model to support meta-reasoning and conscious reasoning when applying the Blackboard Problem-Solving Model.
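The control problem can be pictured as a rating-and-selection loop over pending Knowledge Source activations. The sketch below is our own simplification, not BB1's actual architecture: a `rate` function stands in for the control knowledge that BB1 itself kept on a separate control blackboard.

```python
def control_loop(blackboard, agenda, rate):
    """Repeatedly execute the most promising pending activation.

    agenda: list of callables (pending KS activations); executing one may
            post new activations, returned as a list.
    rate:   control knowledge scoring each activation against the current
            blackboard state (higher = more promising).
    """
    while agenda:
        best = max(agenda, key=lambda act: rate(blackboard, act))
        agenda.remove(best)
        agenda.extend(best(blackboard) or [])

# Hypothetical demo: two activations, the control knowledge prefers 'b'.
bb = {"log": []}
act_a = lambda state: state["log"].append("a")
act_b = lambda state: state["log"].append("b")
control_loop(bb, [act_a, act_b],
             rate=lambda state, act: 2 if act is act_b else 1)
# bb["log"] is now ["b", "a"]: 'b' was judged more promising and ran first.
```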
Their resulting framework, and their conclusions regarding the general application of the Blackboard Model to this particular problem-solving task (i.e., solving the control problem), accounted for its outstanding capabilities to seamlessly, flexibly and dynamically coordinate the various collaborating experts towards achieving a solution. In this section we have presented and described the Blackboard Model, a generic reasoning model. We have paid special attention to the characteristics of this reasoning model, taking into account its metaphorical definition as well as use cases from the literature. There remain, however, important questions to be answered as to when this reasoning model is applicable and, more importantly, when it is appropriate.
4.3 Applicability of the Blackboard Model

The Blackboard Model arose from abstracting features from the Hearsay-II speech-understanding system (Erman et al., 1988b; Lesser & Erman, 1988). Since then, important and extensive work has been performed in the AI community, further developing particular aspects of the Blackboard Model and its application. The majority of the work performed has focussed on the development of blackboard systems or frameworks, the theoretical foundations of the reasoning model being considered as mature and stable. In this respect, there still remain, however, important questions to be answered concerning the appropriateness of the model.
When should we envisage building a blackboard
system? Under which conditions? What would it provide to the system? In this section, we aim to better establish the suitability of this reasoning model for reasoning over the Web.
To do so we perform a two-step analysis.
First, we attempt to establish the appropriateness of the Blackboard Model for performing typical knowledge-intensive tasks, taking into account previous use cases together with further information about how these tasks have usually been performed in the AI literature. The study is based on a well-known and established characterisation of knowledge-intensive tasks, and it is complemented with a rationalisation of the appropriateness accounting for the main characteristics of the Blackboard Model that the applications did or could benefit from. Finally, we characterise the Web and assess the applicability of the Blackboard Model in such an environment. So far, choosing the Blackboard Model for a Knowledge-Based System development has been an empirical practice based on previous success in similar applications, or based on the interest in applying opportunistic reasoning as provided by the Blackboard Model.
Some authors have already realised the
importance of better establishing the conditions under which the Blackboard Model is appropriate.
However, these are not much more than useful guidelines. For instance, Nii (1986) argues that the Blackboard approach is generally suitable for solving ill-structured problems (i.e., problems with poorly defined goals for which no predefined algorithm exists that leads to a solution) and complex problems (i.e., ones made up of a large number of parts that interact in a nonsimple way (Simon, 1981)). Nii (1986) also gives some properties that often characterise the problems that were successfully solved through the Blackboard Model. Taking into account the current state of the art in blackboard systems, we have extended this set of properties to include those we have noticed are recurrent.
Hence, in general, the occurrence of some combination of the following problem characteristics can serve as a good indication of the appropriateness of the blackboard approach¹:
• A large solution space.
• Noisy or unreliable problem data.
• A continuous data flow.
• The need to integrate diverse and heterogeneous data.
• The need to integrate different sources of knowledge.
• The need to apply several reasoning methods.
• The need to develop various lines of reasoning.
• The need for incremental reasoning.
• The need for an opportunistic control of the reasoning process.
¹ The existence of some of these characteristics does not, by itself, make the Blackboard Model an appropriate reasoning model. However, the more of these characteristics that appear in a problem, the more likely it is that the blackboard model would be appropriate.
• The need for an event-based activation of the reasoning.
• High complexity of the task.
• The need for a mixed initiative², that is, the need for a collaborative framework where computer and users can interchangeably take the initiative.
• Meta-reasoning or conscious reasoning.
• The need for psychologically reasonable implementations. This stands for the ability to map human problem-solving onto the automated implementation, as well as for the capacity to provide explanations of the reasoning steps performed.
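Read as a checklist, the characteristics above can be turned into a crude screening heuristic. The sketch below is purely illustrative (the trait names and the scoring are ours, not part of the cited literature); a high score only suggests, never proves, that the blackboard approach is appropriate.

```python
INDICATORS = {
    "large solution space", "noisy or unreliable data", "continuous data flow",
    "heterogeneous data", "multiple knowledge sources", "several reasoning methods",
    "various lines of reasoning", "incremental reasoning", "opportunistic control",
    "event-based activation", "high task complexity", "mixed initiative",
    "meta-reasoning", "psychologically reasonable implementation",
}

def blackboard_fit(problem_traits):
    """Fraction of the indicative characteristics the problem exhibits."""
    return len(set(problem_traits) & INDICATORS) / len(INDICATORS)

# e.g. a monitoring-style problem exhibiting three of the indicators:
score = blackboard_fit({"continuous data flow", "event-based activation",
                        "noisy or unreliable data"})
```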
Before going any further, it is worth noting some aspects. First of all, the reader should bear in mind that we here discuss the appropriateness of the Blackboard Model for performing some knowledge-intensive tasks for a particular domain, and not generically as in Problem-Solving Methods. In other words, the reader should not be misled into thinking that we here address how adequate the Blackboard Model would be for creating a Problem-Solving Method for, say, Planning. Instead we discuss whether the Blackboard Model would be appropriate for doing a particular task for a particular application, for example planning software projects. Further, while performance is inevitably an important characteristic to bear in mind, it is important not to take the following analysis as a comparison with respect to other possible approaches, but rather as a study focussing on the suitability of the Blackboard Model itself. There are, however, cases where much simpler and more efficient approaches are already available, which in a certain way rejects the idea of applying the Blackboard Model to solving them. Nonetheless, whether the blackboard approach is better suited than other reasoning models, which is indeed an interesting topic, is outside the scope of this thesis. Finally, the reader should note that the Blackboard Model is usable for any kind of Knowledge-Based System development since, in the extreme case, one can always reduce the blackboard system to a one-Knowledge-Source blackboard
system, therefore converting it into a traditional KBS. However, doing so would clearly be inappropriate, since it would unnecessarily complicate the system. Our analysis takes CommonKADS knowledge-intensive task types (Schreiber et al., 1999) as the basis for testing the suitability of the Blackboard Model to perform them, for it is a well-known and well-regarded classification that covers a broad range of knowledge-intensive tasks (Studer et al., 1998). The CommonKADS task type taxonomy distinguishes two families of tasks: analytic and synthetic tasks. Both types of tasks are further divided depending on the type of problem being tackled. In the next sections we cover the different task types and determine the appropriateness of the Blackboard Model for them. We further attempt to rationalise the success or failure of the blackboard approach, accounting for the characteristics that the applications developed have benefited from, or that new applications could benefit from.
² Reusing the words from (Sadeh et al., 1998).
4.3.1 Analytic Tasks

Analytic Tasks are knowledge-intensive tasks that, based on some data, produce some sort of characterisation of the input. Figure 4.2 presents the main tasks CommonKADS identifies as being Analytic. In the remainder of this section we will cover all the different tasks identified. We will determine the appropriateness of the Blackboard Model for solving them, drawing on previous use cases, whenever available, or on how these tasks are typically solved in general. Furthermore, we will attempt to rationalise the success or failure of the blackboard approach with respect to the set of characteristics we previously introduced in Section 4.3.
Figure 4.2: Analytic Tasks.
Classification

The typical example of an analytic task is Classification, which, given an object, determines the class it belongs to. To give an example, we might want to determine what kind of animal we are dealing with based on some information and according to a particular classification system. As we previously introduced, classification is particularly relevant for the Semantic Web, where it is thought to pave the way for a better organisation of the information distributed over the Web.
In fact, ontologies, which are considered the basic underpinning of the Semantic Web, represent a good means for defining classification systems. Moreover, as we presented in Chapter 3, Description Logics, as offered by OWL-DL, support the development of decidable and efficient classifiers. Classification can be approached either by a solution-driven method, which starts with the full set of possible solutions and tries to reduce the set on the basis of input information, or by a data-driven method, which is based on the generation of candidate solutions as new information arrives. Data-driven classification could be implemented using the blackboard's ability to explore the search space efficiently, incrementally and through diverse lines of reasoning. However, it is an expensive solution for a task where good and lower-cost approaches already exist (Nii, 1986).
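The two method families can be contrasted in a few lines. The sketch below is a toy example with made-up animal classes, not an efficient classifier: it prunes a solution set eliminatively, and shows how the same pruning becomes data-driven when observations arrive one at a time.

```python
CLASSES = {  # hypothetical classification system
    "penguin": {"flies": False, "feathers": True},
    "eagle":   {"flies": True,  "feathers": True},
    "bat":     {"flies": True,  "feathers": False},
}

def solution_driven(observations):
    """Start from the full solution set and prune inconsistent classes."""
    candidates = set(CLASSES)
    for attr, value in observations.items():
        candidates = {c for c in candidates if CLASSES[c][attr] == value}
    return candidates

def data_driven(stream):
    """Refine the candidate set incrementally as each observation arrives."""
    candidates = set(CLASSES)
    for attr, value in stream:
        candidates &= {c for c in CLASSES if CLASSES[c][attr] == value}
        yield set(candidates)

solution_driven({"feathers": True, "flies": True})        # {'eagle'}
list(data_driven([("flies", True), ("feathers", True)]))  # narrows step by step
```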
Diagnosis
Diagnosis is quite similar to Classification, and in fact it is often performed using classifiers (Schreiber et al., 1999).
This type of knowledge-intensive task differs from classification in the sense that the output is a malfunction of the system, a fault category. This is a typical task which is often performed in technical environments, such as electric networks, automobiles, or medical contexts. Since this task is quite similar to Classification, it is again possible to develop a blackboard-based diagnoser, making it also possible in that case to exploit the potential of the blackboard for accompanying results with explanations. However, as in the previous case, good and lower-cost approaches already exist, and the blackboard is perhaps an over-complicated approach to performing this kind of task.
For instance, in (Tablado et al., 2004) the authors present a system for the tele-assistance of elderly people, which monitors vital signs and diagnoses potential risks on the basis of the RACER Description Logics classifier (Haarslev & Möller, 2003). In the literature we can find some blackboard-based systems for diagnosis; however, in these applications the Blackboard Model was chosen as a means to integrate several diagnosers in order to increase the overall accuracy, and not because it is an appropriate framework for developing diagnosis systems.
Assessment
Assessment aims to support the adoption of decisions. This type of task is very common in financial or military domains. The typical example of an assessment task is determining whether a person should obtain the loan she or he is applying for.
In (Ballard, 1994) the author presents an application where the Blackboard Model is applied to military situation assessment. In this application the Blackboard Model provided the means for integrating a variety of uncertain information coming from a large collection of different types of sensors (e.g., radars, Electronic Support Measures sensors, etc.) and several lines of reasoning focussing on different aspects such as environmental information, plans, external entities, etc. Hence, effective assessment in this case was supported by the outstanding capabilities of the Blackboard Model to integrate several sources of (potentially) noisy, unreliable and diverse information into a complex reasoning activity, geared by diverse lines of reasoning with the potential to quickly react upon certain events. In more general assessment tasks, the Blackboard Model is an appropriate approach for its ability to coordinate several lines of reasoning over a variety of disparate information. Moreover, in this type of task the capacity to build psychologically reasonable solutions, as supported by the blackboard, is usually desirable, since this improves the support given to the user for adopting a decision.
Monitoring

Very similar to Assessment is Monitoring, the main difference being that the latter deals with continuous flows of information. Monitoring is a typical task in technical environment applications, such as engine monitoring, network monitoring, etc. Monitoring is concerned with detecting abnormal behaviours and is therefore usually closely integrated with Assessment tasks in applications; see the previous example (Ballard, 1994). For reasons similar to those of the previous task type, the Blackboard Model can be applied to Monitoring. Additionally, in this particular task type, applications can also benefit from the ability of this reasoning model to deal with a continuous data flow, and from its event-based activation, which paves the way for real-time monitoring. In fact, the Blackboard Model is particularly well suited to performing this kind of knowledge-intensive activity (Ballard, 1994; Lakin et al., 1988).
Prediction

The last analytic task, which usually includes some synthetic features, is called Prediction. This task seeks to determine what will happen in the future, usually in order to prevent it, or to be prepared for it.
A typical example is weather forecasting. Predicting, in non-trivial cases such as non-linear behaviours, often requires correlating large amounts of data coming from diverse sources of information, analysing that information by applying one or several behavioural models, generating a set of hypotheses and keeping the most plausible one: the prediction.
In (Ellis, 1996) the author describes a blackboard-based system for prediction-driven sound analysis, where the Blackboard Model provides an appropriate paradigm for applying model-driven top-down analysis of sound signals.
In general, prediction in complex scenarios often relies on a variety of diverse and more or less accurate models of the system or phenomenon. The Blackboard Model seems to be well suited to this kind of knowledge-intensive task for its prominent support for integrating several lines of reasoning geared by different reasoning techniques applied over a plethora of noisy or uncertain information, and for its characteristic island-driven exploration of the solution space. Additionally, predicting might also benefit from a psychologically reasonable problem-solving approach that helps in understanding the reasons that led to a given prediction.
Table 4.1: Blackboard Model appropriateness for Analytic Tasks.

Task Type       Appropriateness   Exploited Characteristics
Classification  Low               Large solution space; multiple lines of reasoning; incremental reasoning
Diagnosis       Low               Large solution space; multiple lines of reasoning; incremental reasoning; psychologically reasonable
Assessment      Moderate          Large solution space; noisy or unreliable data; variety of input; multiple lines of reasoning; multiple reasoning techniques; psychologically reasonable
Monitoring      High              Large solution space; noisy or unreliable data; integrate diverse information; multiple lines of reasoning; multiple reasoning techniques; continuous data flow; event-based
Prediction      Moderate          Large solution space; noisy or unreliable data; integrate diverse information; multiple lines of reasoning; multiple reasoning techniques; incremental reasoning
The overall results of the analysis performed so far are summarised in Table 4.1. In summary, the Blackboard Model appears, in general, quite appropriate for performing Analytic Tasks. Moreover, it stands out from this analysis that the more complicated the task is, the more appropriate the blackboard approach seems. Finally, it is worth noting that, among the many characteristics exhibited by the Blackboard Model, its outstanding ability to deal with various sources of information and to maintain several lines of reasoning which evolve opportunistically appear to be the most beneficial properties.
4.3.2 Synthetic Tasks

In contrast, Synthetic Tasks are constructive tasks which, based on some requirements, generate a new thing that satisfies them. Again, CommonKADS identifies a number of typical Synthetic Tasks, which are reflected in Figure 4.3. As in the previous section, we will determine the appropriateness of the Blackboard Model for performing the different Synthetic Tasks identified, drawing on previous use cases, whenever available, or on how these tasks are typically solved in general. Furthermore, we will attempt to rationalise the success or failure of the blackboard approach with respect to the set of characteristics we previously introduced in Section 4.3.
Figure 4.3: Synthetic Tasks.
Design
Design is perhaps the synthetic task par excellence (Smithers, 2002b), and has therefore been subject to extensive research in the AI community as well as outside it. Designing is one of the most remarkable intelligent behaviours exhibited by humans. It is a particularly interesting task for its high complexity and the usual lack of any established methodology to guide designers (i.e., a human or a computer-based system) through the whole process towards a suitable result. In this thesis we pay special attention to this knowledge-intensive task, for it is the central matter in one of our case studies (see Chapter 6). Designing is typically an explorative and evolutionary process (Smithers, 1992), through which a designer devises the very aspects involved in the design and how to satisfy the needs and requirements that motivated it. In Smithers' words (Smithers, 2002b):
Chapter 4. The Blackboard Model
65
What makes designing a particular kind of activity, distinct from problem solving and planning, and other human activities, is that designing must start with something that neither specifies what is required nor defines a problem to be solved, yet it must arrive at a design, a specification, for something that, when realised or implemented, will satisfy the motivating needs or desires: the realised design should remove the need or desire for something to be different.

For example, facing the need to design a car adapted to the city and aimed at middle-class families, a designer would need to explore the numerous possibilities until he or she creates a suitable design of what could be the main characteristics and parameters of what could be accepted as a middle-class family car, or a city-adapted car. The designer might first seek to minimise fuel consumption by imposing the fuel (e.g., diesel). Alternatively, the designer could start by designing the chassis and the whole bodywork in order to reduce the car size and maximise the interior space. Each and every decision will potentially have many implications for the rest of the design, e.g., the size of the engine, the chassis, the final price, etc. These implications might lead to some modifications of previous decisions, which will in turn provoke more changes, and so on until a satisfactory design is obtained. In general, we could say that designing is a very complicated incremental and exploratory process characterised by the strong intertwining of its very details. Designing often involves a large body of heterogeneous knowledge. In the previous example, we can easily devise the need for applying a large body of physics theory (e.g., thermodynamics, aerodynamics, etc.) as well as aesthetics, market information, legal regulations and many more.
Hence, designing is usually approached following the divide and conquer principle, so that, for example, designing a car turns out to be a matter of designing the engine, its chassis, its appearance, etc. Similarly, designing the engine involves a huge variety of factors (e.g., fuel, material, dimensions, layout, etc.) which will determine its final characteristics, such as the maximum velocity, the power, the fuel consumption and so on. The separation of concerns (which does not avoid the inherent interconnectedness of the different aspects of the design) is what allows designers to deal with more manageable subproblems in order to be able to progress. In fact, in many real-world tasks the complexity of the task requires such an extensive and heterogeneous body of knowledge that several designers are involved and together collaborate towards a final and satisfactory design. Smithers et al. (1994) presents a review of the work performed by the AI community in Design. Smithers et al. (1994) characterises the research efforts as mainly driven by three approaches: Automated Design, Design Support, and Cognitive Modelling. Interestingly, but not surprisingly, whatever the approach adopted, the Blackboard Model has often been applied to support the reasoning processes; see for example (Hayes-Roth et al., 1988c; Carter & MacCallum, 1991; Buck et al., 1991), or our own system described in Chapter 6.
In fact, accounting for previous use cases and our own experience, we can say that the Blackboard Model is particularly well suited to this kind of knowledge-intensive activity because it supports the seamless integration of a large body of heterogeneous knowledge, which can opportunistically and interchangeably be applied during the design activity. Moreover, it paves the way for smoothly applying different reasoning techniques in several contexts focussing on diverse aspects of the design. Additionally, the Blackboard Model is particularly well suited to, and even encourages, the hierarchical partitioning of the design process which is often applied by humans (see the previous example), and has usually been reflected in intelligent design applications (Balkany et al., 1991). Another important aspect to bear in mind is that the blackboard approach is very adequate for applying a non-intrusive approach, as usually happens in Design Support Systems, where the expert system performs the tedious calculations, like computing the implications of the different decisions, but leaves the initiative to the designer. Last but not least, the blackboard has proven to be a suitable platform for supporting the collaboration of the many experts involved in the designing activity (Carter & MacCallum, 1991). Since Design is a very ample task type, many other kinds of problems can be approached as a design task. However, there are obvious inefficiencies that would arise in cases where the full power of designing is not required. Perhaps the prototypical example is Configuration Design, where many task-specific approaches have been proposed in order to increase systems' performance. Configuration Design is often defined as (Schreiber & Wielinga, 1997):

... a form of Design where a set of pre-defined components are given and an assembly of selected components is sought that satisfies a set of requirements and obeys a set of constraints.

Typical examples of Configuration Design tasks can be configuring a computer, a car or an elevator (which is perhaps the most explored domain in the Configuration Design literature) (Schreiber & Wielinga, 1997). Another example, which is actually one of the tasks performed by our case study presented in Chapter 6, is configuring a set of tables in some appropriate layout for a suitable organisation of the attendants of a meeting.
Configuration Design differs from other types of design in that no new components can be designed, and requirements and constraints are complete (i.e., there is no need to form, to create (part of) the requirements). While Configuration Design seems to be a straightforward search process, the combinatorial explosion of this kind of problem makes it unrealistic to use pure brute force to solve it. Hence, an important effort has been undertaken in AI to provide Problem-Solving Methods (PSMs) that can restrict the search space by using knowledge. In this particular case we will cover the appropriateness of the Blackboard Model by covering the main kinds of Configuration Design PSMs that have been described in the literature, checking whether their rationale can be instantiated for a particular domain based on the Blackboard Model.
In (Schreiber & Wielinga, 1997) the authors present a classification of the main PSMs for Configuration Design, which is the basis for our analysis (see Figure 4.4). The taxonomy divides the different PSMs into uniform methods and knowledge-intensive methods. Uniform methods, such as Constraint Satisfaction problem-solving, are based on uniform reasoning procedures and often fail to make effective use of the domain knowledge at hand (Schreiber & Wielinga, 1997).
We here focus on the latter, since the Blackboard Model is above all a knowledge-intensive reasoning model. Among the knowledge-intensive methods we find:

1. Case-Based methods
2. Propose, Critique and Modify methods
3. Hierarchical methods

Case-Based methods are based on the assumption that knowledge about solutions is explicitly represented, and reduce Configuration Design to a Classification task (see the previous analysis) with some sort of modification of given configurations to better fit the specific requirements. Therefore, the Blackboard Model is not particularly well suited for implementing Case-Based methods. The use of DL classifiers seems more appropriate in this case. Perhaps the most common family of Configuration Design PSMs is Propose-Critique-Modify (PCM), extensively analysed in (Chandrasekaran, 1990). This family of PSMs starts with a configuration proposal, which is tested against the constraints and requirements and is subsequently modified in order to correct any conflict or constraint violation. This general approach to Configuration Design has been specialised by the Propose-and-backtrack and the Propose-and-revise methods, among others (Schreiber & Wielinga, 1997). In Propose-and-backtrack, when a configuration fails, previous decisions are reconsidered.
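Propose-and-backtrack can be sketched as a depth-first search whose proposal order encodes the domain knowledge. This is a generic sketch with a made-up component-compatibility example, not a method taken from the cited literature.

```python
def propose_and_backtrack(slots, options, consistent, partial=None):
    """Propose a component per slot; on failure, reconsider earlier choices."""
    partial = dict(partial or {})
    if len(partial) == len(slots):
        return partial                      # every slot configured
    slot = slots[len(partial)]
    for component in options[slot]:         # proposal order = domain knowledge
        trial = {**partial, slot: component}
        if consistent(trial):               # test against the constraints
            solution = propose_and_backtrack(slots, options, consistent, trial)
            if solution is not None:
                return solution
    return None                             # triggers backtracking above

# Hypothetical elevator-style example: the motor must match the cabin weight.
options = {"cabin": ["light", "heavy"], "motor": ["small", "big"]}
ok = lambda cfg: not (cfg.get("cabin") == "heavy" and cfg.get("motor") == "small")
config = propose_and_backtrack(["cabin", "motor"], options, ok)
# config == {"cabin": "light", "motor": "small"}
```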
It is known that the propose phase is particularly crucial and can indeed benefit from domain knowledge. In this respect, it would seem that the opportunism of the Blackboard Model could smoothly support the propose phase. However, doing so strongly relies on a good partitioning of the expertise, which might not always be achievable³ (Chandrasekaran, 1990).
Propose-and-revise methods do not undo design decisions; instead they try to fix the designs. Hence, these methods do not work directly over components but over a large set of parameters that need to be tuned. The propose step proposes design extensions, that is, a new parameter assignment.
Again, the order in which parameters are selected is particularly important and is guided by domain-specific knowledge.
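A propose-and-revise loop, by contrast, never undoes an assignment: it extends the design one parameter at a time and applies fix knowledge whenever a constraint is violated. The parameter names and the fix below are invented for illustration.

```python
def propose_and_revise(parameters, propose, violated, fixes, max_revisions=100):
    """Extend the design parameter by parameter; revise instead of backtracking."""
    design = {}
    for name in parameters:
        design[name] = propose(name, design)   # propose a design extension
        for _ in range(max_revisions):
            constraint = violated(design)
            if constraint is None:
                break
            fixes[constraint](design)          # apply fix knowledge, keep going
    return design

# Hypothetical power-budget example: revise the current until power <= 30.
defaults = {"voltage": 12, "current": 5}
design = propose_and_revise(
    ["voltage", "current"],
    propose=lambda name, d: defaults[name],
    violated=lambda d: "power" if d["voltage"] * d.get("current", 0) > 30 else None,
    fixes={"power": lambda d: d.update(current=d["current"] - 1)},
)
# design == {"voltage": 12, "current": 2}
```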
In this case, however, the method can hardly take advantage of the opportunism of the Blackboard Model, since the decisions to be adopted (i.e., which parameter to modify next) seem too fine-grained to support an appropriate partitioning of the expertise. Therefore, the Blackboard Model is not particularly well suited for implementing this Configuration Design PSM. The last family of PSMs is called Hierarchical methods. These methods employ a hierarchical decomposition and basically perform a graph search. The task goal is decomposed into a number of substructures which together form the final Configuration Design.
There are several variants of this approach, design plans being perhaps the most relevant for us. Design plans specify abstract design decisions that are used to guide the Configuration Design process (Chandrasekaran, 1990). The Blackboard Model could thus be applied, taking advantage of its inherent opportunism for smoothly navigating the hierarchical decomposition of knowledge to control the Configuration Design process. In summary, there currently exists an important number of Configuration Design methods that have been applied to different domains, and have shown different adequacy and performance.
In general, the Blackboard Model does not seem to be particularly well suited to this kind of reasoning, except for Hierarchical Configuration Design methods. However, because of the diverse behaviour exhibited by the different methods exposed here, there is no specific
³ Hierarchical methods for Configuration Design benefit from this domain characteristic.
4.3. Applicability of the Blackboard Model
Figure 4.4: Problem-Solving Methods for Configuration Design (taken from (Schreiber & Wielinga, 1997)).
one that is complete and appropriate for all configuration tasks. Hence, task-independent architectures have been proposed for the development of configuration tools. These tools are composed of a variety of methods whose adequacy is evaluated at any time based on the current problem state, the best-suited method being subsequently executed. The blackboard approach is regarded as one of the best candidates to support the smooth integration of the different methods (Chandrasekaran, 1990), for its outstanding support for seamlessly integrating diverse engines in order to overcome the limitations of a single one.
Assignment
Assignment is a relatively simple task whose input is two sets of objects, between which it has to produce a suitable mapping. Examples of such a task are assigning airplanes to gates, or employees to offices. Assignment can be (and usually is) viewed as a configuration task where a given set of fixed elements (e.g., airplanes) has to be fitted into a skeletal arrangement (e.g., gates) (Schreiber & Wielinga, 1997). Therefore Assignment is often taken as a simplified version of Configuration Design, where the set of components is fixed and a skeletal assembly is pre-established. Hence, the suitability of the Blackboard Model for the Assignment task can merely be reduced to its suitability for Configuration Design, bearing in mind that it is an even simpler task. In summary, the Blackboard Model is overly complicated for Assignment-type tasks, constraint-satisfaction problem solvers being in these cases a more adequate machinery.
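As a rough illustration of why a constraint-satisfaction solver suffices here, the following hypothetical Java sketch fits a fixed set of items (airplanes) into a skeletal arrangement of slots (gates) by plain backtracking; all names and the encoding of compatibility are our own assumptions, not taken from any cited system.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.Set;

// Toy constraint-satisfaction solver for the Assignment task: place a
// fixed set of items (e.g. airplanes) into a skeletal arrangement of
// slots (e.g. gates), subject to a per-item compatibility constraint.
public class AssignmentSolver {
    // allowed.get(item) lists the slots that item may occupy.
    public static int[] assign(int items, int slots, Map<Integer, Set<Integer>> allowed) {
        int[] assignment = new int[items];
        Arrays.fill(assignment, -1);
        return backtrack(0, items, slots, allowed, assignment, new boolean[slots])
                ? assignment : null;
    }

    private static boolean backtrack(int item, int items, int slots,
            Map<Integer, Set<Integer>> allowed, int[] assignment, boolean[] used) {
        if (item == items) return true;              // every item placed
        for (int slot = 0; slot < slots; slot++) {
            if (!used[slot] && allowed.get(item).contains(slot)) {
                used[slot] = true;                   // tentative placement
                assignment[item] = slot;
                if (backtrack(item + 1, items, slots, allowed, assignment, used))
                    return true;
                used[slot] = false;                  // undo and try next slot
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Two airplanes, two gates: plane 0 only fits gate 1, plane 1 fits both.
        Map<Integer, Set<Integer>> allowed = Map.of(0, Set.of(1), 1, Set.of(0, 1));
        System.out.println(Arrays.toString(assign(2, 2, allowed))); // [1, 0]
    }
}
```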
Planning
Planning is defined as the task by which a plan, that is, an ordered (or partially ordered) set of actions, is generated to reach a given goal, taking into account domain-specific characteristics. Examples of such a task are planning a trip, planning a project, etc. Planning is quite similar to designing, and the same kind of techniques can be applied for performing both
Chapter 4. The Blackboard Model
types of knowledge-intensive activities. Usually, planning requires taking into account a large amount of heterogeneous information, which needs to be reconciled towards generating a satisfying plan (if not the best one). For instance, trip planning might involve information about several means of transport (e.g., timetables, prices, departures, destinations, etc.), accommodation information (e.g., location, price, star rating, etc.), traveller preferences and so on. The Blackboard Model has been applied many times to support planning and has proven to be well suited to this reasoning activity. For example, in (Pearson, 1988) the author presents a blackboard-based solution to the planning of military missions, and a particularly interesting example is described in (Hayes-Roth et al., 1988b), where the authors explored the ability of the blackboard model to do errand planning. Both applications benefited from the capacity of the Blackboard Model to perform an island-driven exploration of the large solution space, informed by a variety of sources of knowledge and applying diverse reasoning techniques opportunistically. Additionally, as in the errand-planning application, it might be interesting to obtain explanations of the decisions adopted by the system during the planning process.
Scheduling
This task takes a sequence of activities (e.g., a plan) and allocates them to available resources during a particular time period, obeying some constraints and preferences. Returning to the previous example of the trip preparation, scheduling would consist of booking the different services and eventually travelling (which often leads to some adjustments over the plan and its schedule). The Blackboard Model has successfully been applied to support Scheduling. In particular, the OPIS system (Smith, 1995) is a Reactive Scheduling System where Scheduling is tightly integrated with the actual process execution, allowing for event-based re-organisation of the process upon unexpected situations (e.g., machine breakdown, etc.). Another example, based on the Blackboard Model, can be found in (Hildum, 1994), where the author describes an opportunistic scheduling system that provides the ability to adapt quickly to changes in the environment. In both cases the systems made particular use of the ability of the Blackboard Model to react to changes over a set of diverse sources of information, driven by an opportunistic control of diverse lines of reasoning focussing on particular aspects of the Scheduling task. In general the Blackboard Model seems to be appropriate for Scheduling, for its inherent ability to integrate different sources of information and expertise. In practice, the Blackboard Model becomes interesting whenever the Scheduling system needs to react and adapt the whole schedule to changes in the schedule execution environment.
Modelling
Finally, CommonKADS identifies Modelling as a synthetic knowledge-intensive task. Modelling is the task through which we build abstract models, that is, abstract descriptions of systems or phenomena, in order to support further inferencing such as behaviour prediction, fault diagnosis, etc. This type of task is rarely automated. However, it seems that for those domains where there is some knowledge about what can or should be modelled and how it can (or should) be modelled, together with some general knowledge about modelling, a blackboard-based Knowledge Modelling support system could be envisaged by viewing the modelling task as a kind of designing activity. This remains, however, something that would require further exploration.
The overall results of the analysis performed are summarised in Table 4.2. In summary, the Blackboard Model appears even more appropriate for performing Synthetic Tasks, for they are typically more complex tasks. Finally, it is worth noting that, in addition to benefiting mainly from the outstanding capacity of the Blackboard Model for dealing with various sources of information and maintaining several lines of reasoning which evolve opportunistically, Synthetic Tasks also gain from the fact that the Blackboard Model supports obtaining what we have called psychologically reasonable solutions.
Table 4.2: Blackboard Model appropriateness for Synthetic Tasks.

Task Type             Appropriateness   Exploited Characteristics
Design                High              Large solution space; Multiple lines of reasoning; Multiple reasoning techniques; Integrate diverse information; Opportunistic control; Mixed initiative; High complexity; Incremental reasoning; Psychologically reasonable
Configuration Design  Low-Moderate      Large solution space; Multiple lines of reasoning; Opportunistic control
Assignment            Low               Large solution space; Multiple lines of reasoning; Opportunistic control
Planning              Moderate-High     Large solution space; Integrate diverse information; Multiple lines of reasoning; Multiple reasoning techniques; Incremental reasoning; Psychologically reasonable
Scheduling            Moderate          Large solution space; Integrate diverse information; Multiple lines of reasoning; Opportunistic control; Event-based
Modelling             Moderate?         Large solution space; Integrate diverse information; Multiple lines of reasoning; Opportunistic control; Psychologically reasonable
One could easily find examples that perform a more or less complicated mixture of these knowledge-intensive tasks, the rather prototypical example being a system that does Planning and Scheduling (see (Schreiber et al., 1999) for a more extensive list). In these cases, establishing the suitability of the Blackboard Model is relatively simple. A general rule of thumb is that the more a domain problem exhibits the previously cited characteristics, the more
suitable this reasoning model will be. Hence, for problems composed of more than one knowledge-intensive task the Blackboard Model will usually be more suitable. Moreover, the blackboard approach is known to support the seamless integration of diverse reasoning techniques, which for these kinds of problems will certainly be required or, at the very least, recommendable. Therefore, for problems that involve more than one knowledge-intensive task, the Blackboard Model will potentially be well suited.
4.3.3 The Blackboard Model for Reasoning over the Web

In the previous section we have studied the appropriateness of the Blackboard Model for solving typical knowledge-intensive tasks. However, this does not yet establish the suitability of the Blackboard Model for reasoning over the Web. Reasoning over the Web cannot be considered as being one of the previously identified task types. Instead, it is concerned with some reasoning, whatever task(s) that involves, in a particular environment: the Web. Let us then first analyse what the Web actually is, what characterises it, and thus which aspects make it a peculiar environment. The Web is based upon the following set of design principles described in (Berners-Lee, 2002b):
• Simplicity
• Modular Design
• Tolerance
• Decentralisation
• Test of Independent Invention
• Principle of Least Power
The first two principles are basic Software Engineering good practices. Tolerance, in a nutshell, establishes that programs should be liberal with respect to the input they can handle. This principle intends to promote and enable a widespread usage of Web technologies. Tolerance is based on the assumption that humans (and therefore programs) make errors, and that being strict would be an impediment in an open environment like the Web. However, in practice tolerance has led to a heterogeneity that is proving more harmful than beneficial. A well-known example can be found in the heterogeneous interpretation of standard formats by web browsers. In an attempt to limit the damage, tolerance is often presented by the sentence "be liberal in what you require but conservative in what you do". Unfortunately, this is merely a warning, which is not often taken into account by Web application developers. The fourth principle, together with tolerance, is the essence of the Web. The Web is a decentralised and open environment where no centralised control exists, other than the standardisation efforts led by the W3C. The last two principles are also very characteristic of the Web. The test of independent invention checks the adaptability of the invention for becoming part of a bigger system but also for interacting with other technologies. It corresponds to the extensibility and adaptability Software Engineering good practices. Finally, the principle of least
power is in fact the application of the principle of simplicity to the languages for the Web: "the simpler, the better". These principles must be kept in mind to better understand what the Web is. The Web can be defined as a huge, heterogeneous, dynamic, distributed and somewhat autonomous source of information or resources. In this environment, resources, which are uniquely identified by their Universal Resource Identifier (URI) and represented using standard formats (e.g., HTML, XML, etc.), are stored in distributed and interconnected servers across the world. Servers offer their content and functionality to humans and computers by means of standard protocols, such as the HyperText Transfer Protocol (HTTP) or SOAP⁴, to name two of them; see (Berners-Lee, 2002b) for further details.
Despite the inherent technical challenges, the Web opens up a plethora of possibilities which have already been demonstrated in a variety of different scenarios. Reasoning over the Web is part of the great potential that can be brought to reality with some effort and discipline. To do so, we must however assume the very nature of the Web and adapt our techniques and technologies to it. Any kind of application conceived for the Web must be prepared for this particular environment. This involves being ready to deal with an extremely large and distributed environment, populated by uncertain, dynamic and to some extent autonomous information, served by machines which are prone to errors. The Blackboard Model is a general and versatile reasoning model, particularly well suited for supporting reasoning processes over the Web. First and foremost, its characteristic opportunism provides an outstanding support for reasoning in highly dynamic environments. It supports adapting the reasoning process to the very typical and diverse events of the Web, such as remote execution exceptions, a continuous data flow, connectivity problems, etc. In fact, the event-based activation of KSs' expertise paves the way for the seamless, effective and Knowledge-Based choreography of remote services distributed over the Web. Moreover, the Blackboard Model has been shown to be particularly well suited for dealing with noisy, unreliable, heterogeneous and massive data. This is of crucial importance for reasoning over the Web, where its sheer extent, the lack of central control and the principle of tolerance have led to a highly noisy, unreliable and ever-expanding source of information. In fact, the island-driven exploration of the solution space that characterises the Blackboard Model, together with its prominent capacity for integrating diverse reasoning techniques, provides an excellent support for reasoning over the Web.
4.4 Summary and Conclusions

In this chapter we have presented, described and deeply analysed the Blackboard Model of reasoning. It is a reasoning model which builds upon the widely applied divide-and-conquer principle. The Blackboard Model was first applied to the speech-understanding problem (Erman et al., 1988b), to integrate highly diverse kinds of knowledge and deal with large amounts of uncertainty and variability in input data. Abstracting the features of the Hearsay-II speech-understanding
⁴ Formerly SOAP was the acronym for Simple Object Access Protocol. Since version 1.2 it is just known as SOAP (Mitra, 2003).
system gave birth to this reasoning model, which has since then been subject to extensive research in the AI community. The blackboard approach as described by the metaphor seems simple, at least conceptually. However, in practice it turns out to be a relatively complex reasoning model. This complexity is reflected both during the Knowledge Engineering phase and eventually during the implementation phase, as we will see in the next chapters. In fact, as previously raised in (Engelmore & Morgan, 1988b), the Blackboard Model does not only promote but also requires an appropriate Knowledge Level partitioning of the expertise (in Chapters 6 and 7 we will see how critical this can be for applying the Blackboard Model): "How the problem is partitioned into subtasks makes a great deal of difference to the clarity of the approach, the speed with which solutions are found, the resources required, and even the ability to solve the problem at all." The Blackboard Model represents a guideline for tackling problems at the Knowledge Level: it provides a conceptual framework for organising the knowledge and a strategy (incremental and opportunistic) for applying it. As such it exhibits an important number of characteristics that, in the words of Feigenbaum (Feigenbaum, 1988), make it the most general and flexible knowledge system architecture. Such a claim was at that time mostly based on empirical practice, and corroborated by other researchers who performed further and more methodical analyses. In this chapter we have taken this a step further, and have studied the broader applicability of the Blackboard Model. To do so we have built upon a widely accepted body of research, namely CommonKADS. Originally, the Blackboard Model was mainly applied to analytic tasks and proved to be quite successful; however, it has also been particularly successful in synthetic tasks. Our analysis shows that the blackboard is widely applicable and generally well-suited for supporting knowledge-intensive tasks, being particularly useful in complex domains.
Moreover, we have also determined the different characteristics of the blackboard approach that can be advantageous for solving them. Therefore we believe our analysis can serve as the basis for extrapolating the appropriateness of the Blackboard Model to other tasks and domains. Finally, based on its main design principles, we have characterised the Web in order to assess the applicability of the Blackboard Model for reasoning over the Web. The Web appears as a particularly challenging environment whose distinctive features have been shown to be prominently supported by the Blackboard Model. Monitoring, Prediction, Assessment, Planning and Scheduling, which have very appealing applications for the Web, such as the choreography of Web Services, the monitoring of distributed systems or the reactive/pro-active re-organisation of distributed processes, find in the Blackboard Model an outstanding support. In the next chapter we present our Opportunistic Reasoning Platform, a platform for the development of Knowledge-Based Services over the Web, which builds upon the Blackboard Model and adapts it to the Web.
Chapter 5
An Opportunistic Reasoning Platform
Software architecture encompasses the structures of large software systems. The architectural view of a system is abstract, distilling away details of implementation, algorithm, and data representation and concentrating on the behavior and interaction of black box elements. L. Bass, P. Clements, and R. Kazman (2003).
In this chapter we present our Opportunistic Reasoning Platform, an infrastructure for the development of Knowledge-Based Services over the Web. First, we introduce the platform and its several components. Next, we describe in detail the core reasoning component of the platform, a Blackboard Framework. Finally, we depict the specificities that have been introduced in order to better support reasoning over the Web.
5.1 Introduction

In the previous chapter we described the Blackboard Model and established its general applicability as well as its suitability for reasoning in an environment like the Web. The Blackboard Model was shown to be a widely applicable reasoning model, well suited for carrying out an important number of knowledge-intensive tasks, making it a particularly good choice for supporting reasoning processes over the Web. In this thesis we therefore reconsider applying the blackboard approach for supporting the development of Knowledge-Based Services over the Web. We have developed a platform that actively assists developers in the construction of Web-oriented Knowledge-Based Services, see Figure 5.1. Our Opportunistic Reasoning Platform, as we call it, consists of a widely applied and extensively analysed reasoning model (the Blackboard Model), a skeletal generic
implementation of the model (the Blackboard Framework), and many architectural and technological decisions. It supports creating robust, scalable, maintainable and extensible Web applications with an outstanding support for implementing Knowledge-Based Services, where the different components of the application can be deployed on multiple physical systems connected through Web Services and sustained by ontologies. Figure 5.1 shows the main elements that compose our platform. The central component, shown on the bottom left-hand side of the figure, is the Blackboard Framework. The Blackboard Framework, which is a skeletal implementation of the Blackboard Model, is the core reasoning component of our platform. It presents some specificities with respect to traditional implementations in order to support reasoning over the Web, as we will see in the rest of this chapter. Moreover, the platform identifies a Knowledge Base Integration component that supports the integration of Knowledge Bases and Ontologies represented in Web standard formats (see Chapter 2). Further, we identify the use of Problem-Solving Methods (distributed or not) and inference engines, to complement the reasoning power of the systems built with our platform. Finally, the Opportunistic Reasoning Platform relies on Web Services as a means to support the seamless, yet effective, integration and interaction with remote systems. In the following sections we describe each of these components in more detail. First, we present the Blackboard Framework we have developed and the different subsystems that compose it. Next, we introduce the main decisions adopted in order to better support reasoning over the Web, and we finally describe how our platform supports coupling Knowledge Bases, expressed according to some ontologies, with additional inferencing rules.
Figure 5.1: Overview of the Opportunistic Reasoning Platform.
5.2 The Blackboard Framework

The Blackboard Model is a relatively complex reasoning model, and applying it is often a tedious and expensive task. It requires an in-depth understanding of the model, its characteristics, and its implications (see Chapter 4). Moreover, it requires implementing the machinery to support the opportunistic reasoning process and, finally, developing the system à la blackboard. Having realised its potential genericity and the costs of applying it, researchers have attempted to generalise it.
Among such generalisations, often referred to as Blackboard Frameworks or Blackboard Architectures¹, we can find AGE (Attempt to Generalise) (Nii & Aiello, 1988), Hearsay-III (Erman et al., 1988a), GBB (Corkill et al., 1988) or the BB* environment (Hayes-Roth et al., 1988a), to name a few. These generalisations, or, better said perhaps, semi-implementations of the Blackboard Model, were, in general, quite successful in simplifying the use of the Blackboard Model as new domains and applications appeared. Moreover, they also helped in the exploration of the Blackboard Model as a general-purpose reasoning model and not just as a useful model for, say, signal interpretation. Many of the reflections previously introduced do in fact find in these frameworks an important support. The Blackboard Model being a general reasoning model, it does not prescribe any implementation detail and actually leaves many decisions up to the system developer. These degrees of freedom range from architectural aspects to pure implementation details.
Reviewing the various Blackboard Frameworks and systems that have been developed, we can see a plethora of differences between them. First of all, we could evoke the differences in the implementation of the Blackboard, that is, the shared workspace. Knowledge Sources have also been developed in very diverse ways. The opportunistic control of the Knowledge Sources' execution has been subject to important differences between the various solutions proposed. These early Blackboard Framework implementations were strongly influenced by the areas they were applied to, and obviously biased by the existing technologies at the time. We have developed our own Blackboard Framework. To do so we have based our development on the previous analysis and understanding of the Blackboard Model, on the results and conclusions obtained by previous researchers that developed their own frameworks, and on the current state-of-the-art technologies for the new Web era, overcoming some of the difficulties previously found (Corkill, 1991). Implementing the Blackboard Model of reasoning requires mapping an essentially concurrent model into a serial computation. In order to simplify this process we introduce the concept of a moderator that controls contributions, thus driving the overall problem-solving process while, at the same time, avoiding race conditions. In fact, given that the Blackboard is a shared data-structure, it might happen that various Knowledge Sources try to manipulate it at the same time, producing unpredictable results. Driven by the moderator, that is, the Blackboard Controller, our framework works on the basis of cycles, see Figure 5.2. On each and every cycle, the different Knowledge Sources inform the Blackboard Controller about their potential contributions to the overall problem-solving. Then, the Blackboard Controller,
¹ We will interchangeably use both names in this thesis.
Figure 5.2: Overview of the Blackboard Framework.
decides which action should be performed next and activates the corresponding Knowledge Source. Once activated, the Knowledge Source performs the inferencing step it previously proposed, and modifies the Blackboard (i.e., the shared workspace) accordingly. As soon as the Knowledge Source has finished its contribution, another cycle starts, and so on until a termination condition is met. To support the process, there are a number of data-structures and mechanisms that need to be appropriately put into place. Figure 5.2 shows an overview of our Blackboard Framework. The framework is composed of a shared data-structure (the Blackboard), which is itself composed of several partitions called Partial Solutions, a repository holding global constraints, and an Agenda used to direct the reasoning activity. Finally, the reasoning process is carried out by Knowledge Sources, which are coordinated by the Blackboard Controller. Essentially, we aim at supporting effective and efficient reasoning over the Web. Originating from our own experience, but also motivated by the currently very active and growing community of Java developers in the context of the Semantic Web, as well as in AI in general, we have developed from scratch an Object-Oriented Blackboard Framework in Java. Its development has been driven by the need and interest to base our solution on current Web-related technologies in order to better support its use for reasoning over the Web, and directed by current Software Engineering and Knowledge Engineering best practices. For the sake of clarity we will first describe the Blackboard data-structure itself, next we will present the details regarding the moderator, that is, the Blackboard Controller, and we will finally focus on explaining the reasoning components, that is, the Knowledge Sources.
5.2.1 The Blackboard Data-Structure

The Blackboard is a shared data-structure where the overall state of the problem-solving process is maintained. It serves two purposes. First of all, the Blackboard is a global and shared information repository that holds the state of the problem-solving. Second, the Blackboard is constantly observed by all the experts taking part in the problem-solving, and serves as an indirect vehicle for communication among the collaborating experts. The Blackboard is often divided into a hierarchy of abstraction levels in order to gain efficiency and to maintain a clear separation of concerns. Knowledge Sources work on a particular level and generate new information for adjacent levels whenever appropriate. For example, in the Hearsay-II speech-understanding system, speech was interpreted at different levels of abstraction dealing with syllables, words, word sequences, phrases, etc. Information flew across the various levels, until a suitable, that is, coherent, hypothesis at any level was devised. The hierarchical partitioning of the shared Blackboard has usually been driven by the specific organisation of the domain knowledge.
A retrospective view of the various systems developed shows that such a partitioning of the Blackboard directly stems from the fact that appropriately applying the Blackboard Model requires using a divide-and-conquer approach. A good division of the problem-solving process into subtasks inevitably leads to also partitioning the knowledge these subtasks bring into action. Hence, a hierarchical division of the workspace is usually, though not the unique, possible and appropriate partitioning². Our framework is no exception in this respect: the Blackboard is prepared for partitioning the workspace and directly supports creating hierarchies, for it is a useful and typical approach. In our framework, we call the different partitions Partial Solutions, to better account for the role they play in the reasoning process. Each of the Partial Solutions holds new contributions to the problem-solving activity or, in the blackboard jargon, new Hypotheses. A Hypothesis keeps information regarding the action performed (e.g., adding a new fact, modifying a previous belief, etc.) by a Knowledge Source at a certain cycle based on some input data (i.e., the triggering conditions). In other words, a Hypothesis informs about which KS did what, when and why. Hence, Hypotheses keep track of all the activities performed during the problem-solving process and therefore play a very important role in tracing the system execution, for debugging purposes, and could even help in automating the explanation of the results obtained. In addition to Partial Solutions, we have also added the possibility to assert global facts, which are pieces of information that affect various Knowledge Sources.
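The "which KS did what, when and why" bookkeeping could, purely for illustration, look like the following Java class; the field and method names are our own assumptions, not the framework's actual API.

```java
import java.time.Instant;
import java.util.List;

// Hypothetical sketch of the Hypothesis bookkeeping described above:
// which Knowledge Source did what, when (cycle) and why (triggering
// data), supporting tracing, debugging and explanation generation.
public class Hypothesis {
    public final String knowledgeSource;  // who
    public final String action;           // what, e.g. "add fact", "modify belief"
    public final int cycle;               // when
    public final List<String> triggers;   // why: the triggering conditions
    public final Instant recordedAt;      // wall-clock time, for traces

    public Hypothesis(String knowledgeSource, String action,
                      int cycle, List<String> triggers) {
        this.knowledgeSource = knowledgeSource;
        this.action = action;
        this.cycle = cycle;
        this.triggers = List.copyOf(triggers);
        this.recordedAt = Instant.now();
    }

    // One trace line per Hypothesis: the raw material for explanations.
    @Override public String toString() {
        return "[" + cycle + "] " + knowledgeSource + " " + action
                + " because of " + triggers;
    }

    public static void main(String[] args) {
        Hypothesis h = new Hypothesis("PriceKS", "add fact", 3, List.of("timetable"));
        System.out.println(h); // [3] PriceKS add fact because of [timetable]
    }
}
```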
Global facts are usually constraints for the overall problem-solving activity, which are typically introduced by the user, directly (i.e., by asserting a particular requirement) or indirectly (i.e., derived from a particular action of the user). So far, we have described some of the main data-structures that compose the Blackboard, namely the Partial Solutions, the Hypotheses and the Constraints Repository. Still, the Blackboard is not just an information repository, it is also a communication medium between KSs. The blackboard metaphor clearly and
² In the music rights clearance case study (see Chapter 7), we will see an example where the workspace is partitioned but not hierarchically.
explicitly establishes that experts collaborate in solving the problem by means of working on the same workspace. Experts are observing the blackboard until they realise they can contribute in one way or another. A straight forward implementation of the metaphor, could be based on an
active monitoring of the Blackboard by all the Knowledge Sources.
That is,
Knowledge Sources would be constantly observing the Blackboard in order to catch any change that might be relevant to their own purposes. However, this kind of monitoring activity is known to be inecient since it usually uses CPU cycles unnecessarily.
For instance, many changes may be irrelevant to many Knowledge Sources, which nevertheless have to spend CPU cycles monitoring the Blackboard. Our Blackboard implements a more efficient solution based on the Observer design pattern (Stelting & Maassen, 2002; Gamma et al., 1994). In this design pattern, also known as Publisher-Subscriber, the object being observed keeps track of the different objects that wish to be notified upon changes. On any change, the observed object notifies the other objects, known as observers, about the changes that just took place, thus avoiding active monitoring. During the framework initialisation, the Knowledge Sources inform the Blackboard about their main concerns (i.e., they set themselves as Observers of part of the workspace) and need not monitor it actively. Instead, any change performed over the Blackboard is directly and automatically notified to the appropriate Knowledge Sources, which subsequently react accordingly.
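The registration-and-notification machinery just described can be sketched in Java as follows. The class and method names are our own, not the framework's, and the sketch assumes a simple string-keyed partitioning of the workspace:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch only: the Blackboard keeps, per workspace partition,
// the observers registered for it, and pushes each change to the interested
// parties only, so no Knowledge Source has to poll.
public class ObservableBlackboard {
    private final Map<String, List<Consumer<Object>>> observers = new HashMap<>();

    // A Knowledge Source declares its concerns once, at initialisation time.
    public void addObserver(String partition, Consumer<Object> observer) {
        observers.computeIfAbsent(partition, k -> new ArrayList<>()).add(observer);
    }

    // Any change performed on the Blackboard is notified directly and
    // automatically to the appropriate Knowledge Sources.
    public void assertFact(String partition, Object fact) {
        for (Consumer<Object> o : observers.getOrDefault(partition, List.of())) {
            o.accept(fact);
        }
    }
}
```

A Knowledge Source concerned only with, say, the "venues" partition is then never disturbed by changes elsewhere on the workspace.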
5.2.2 The Blackboard Control

An important task to be undertaken in any reasoning process concerns deciding which step to perform next. In AI, this is called the control problem. This problem, which is often of great importance in traditional Knowledge-Based Systems, becomes crucial in the context of an opportunistic reasoning model such as the blackboard approach. Even though the blackboard metaphor does not identify any control mechanism, the blackboard approach raises two important issues when it comes to implementing one (Carver & Lesser, 1992): the need to map an inherently concurrent model onto a serial implementation, which leads to obvious difficulties such as race conditions; and the usual need for knowledge-based direction of the problem-solving activity, in order to avoid unnecessary computations that could even lead to intractability due to a combinatorial explosion of the solution space. Blackboard system developers have therefore usually included some control mechanism, that is, some knowledge-based guidance of the problem-solving. Controlling the blackboard-based reasoning process can be achieved by the Knowledge Sources, by the Blackboard, by a separate component, or by a combination of the three.
In (Carver & Lesser, 1992), the authors present an extensive analysis of the main control solutions that have been proposed. These include, among others, event-based, hierarchical, goal-directed, distributed, and opportunistic control, as in the BB1 system (Hayes-Roth & Hewett, 1988). An important research effort has been devoted to blackboard control mechanisms, but there does not seem to be a single, generally preferred solution, for control is often a highly domain-dependent problem.

5.2. The Blackboard Framework

As we previously sketched, our Blackboard Framework works on the basis of cycles. At every cycle, each and every Knowledge Source proposes its best potential contribution to the overall problem-solving activity. These contribution proposals are stored in what we call Knowledge Source Activation Records (KSARs), a name borrowed from the BB1 framework for blackboard control (Hayes-Roth & Hewett, 1988). KSARs explicitly represent an important part of the problem-solving control information, and play two essential roles: they provide the basic information for the Blackboard Controller to decide the next problem-solving step (i.e., to solve the control problem), and they serve for debugging purposes. Once all the potential contributions have been proposed, the Blackboard Controller chooses a KSAR and activates the appropriate Knowledge Source. Then, the chosen Knowledge Source performs its inferencing step and modifies the blackboard accordingly. The modifications made by the Knowledge Source, new data introduced by sensors, by the execution of remote Web Services, or even by user interaction, might in turn provoke new contribution proposals, which will again be processed by the Controller in the next cycle. This process continues incrementally and opportunistically until a suitable solution to the problem is obtained. Since the control problem is often highly domain-dependent, our Blackboard Framework does not impose a control mechanism implementation. Instead, it supports implementing new techniques depending on the specific task to be achieved and does not establish any further restriction a priori.
To do so, we have defined the Blackboard Controller as an abstract class (Horstmann & Cornell, 2001a) that implements some of the basic machinery required, and obliges any Blackboard Application engineer to implement the methods required to control the reasoning process. Doing so allows us to bring the power of a generic solution (e.g., versatility, adaptability, etc.) while maintaining a coherent and working machinery. Two methods must be implemented when developing a specific application controller:

ChooseActivationRecord: This method should, based on the current state of the Blackboard, decide which KSAR is to be selected for the next KS action. In other words, this method should decide, at each cycle, which inferencing step is potentially the best contribution. In this respect the framework does not impose any restriction: such a decision can be adopted on the basis of all the information present on the blackboard (e.g., current state of the problem-solving, current and past KSARs, etc.) according to any heuristic. In fact, a system developer could implement this method using a blackboard system itself, as was first done in the BB1 blackboard.
WorkDone: It is usually necessary to decide when the current state of the problem-solving activity could or should be taken as the final solution. This is what is usually referred to as the termination problem. Sometimes this decision is rather trivial, but in other cases it may be more complicated; design is perhaps the most obvious example, where we could enter into a never-ending designing activity. Hence, any controller implementation must provide the means for determining whether the final solution has already been found. Again, the framework does not impose any restriction, and this can be implemented in a variety of ways.
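The two hooks just described can be sketched as follows. The method names follow the text above; the signatures, the Ksar stand-in and the surrounding machinery are our own assumptions, not the framework's actual code:

```java
import java.util.List;

// Sketch of the abstract Blackboard Controller: the framework supplies the
// cycle machinery, the application supplies the two domain-dependent decisions.
public abstract class BlackboardController {

    // Minimal stand-in for a Knowledge Source Activation Record (assumed here).
    public static final class Ksar {
        public final String knowledgeSourceId;
        public final double rating;
        public Ksar(String knowledgeSourceId, double rating) {
            this.knowledgeSourceId = knowledgeSourceId;
            this.rating = rating;
        }
    }

    // Decide which KSAR is to be selected for the next KS action.
    protected abstract Ksar chooseActivationRecord(List<Ksar> candidates);

    // Decide whether the current state can be taken as the final solution
    // (the termination problem).
    protected abstract boolean workDone();

    // One reasoning cycle: pick the next contribution, or null when finished.
    public final Ksar step(List<Ksar> candidates) {
        if (workDone() || candidates.isEmpty()) return null;
        return chooseActivationRecord(candidates);
    }
}
```

An application controller then only has to subclass this and supply, for instance, a rating-based selection, as an agenda-based Scheduler would.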
For completeness, our Blackboard Framework includes a relatively simple agenda-based control architecture. The precursor of this approach is the Hearsay-II speech-understanding system (Engelmore et al., 1988b), but it has been widely applied in a number of frameworks and systems. In such a control machinery, on every cycle all the possible contributions of the different Knowledge Sources are placed into the agenda, they are rated, and the best one is scheduled for execution. The rating is performed by what is usually referred to as the Scheduler, which is the Blackboard Controller in our framework. An agenda-based control architecture is inherently opportunistic, because KSs are activated opportunistically based on the state of the problem-solving process and bearing in mind all the potential contributions (Carver & Lesser, 1992).
5.2.3 Knowledge Sources

Traditional Blackboard Systems, and thus traditional Blackboard Frameworks, were developed for local and often autonomous execution. There was usually no interaction with external systems other than sensors; see for example Hearsay-II (Erman et al., 1988b) or HASP/SIAP (Nii et al., 1988b). Given the essentially distributed character of the Blackboard Model, several researchers envisaged the creation of distributed Blackboard Systems, e.g., the Distributed Vehicle Monitoring Testbed (Lesser & Corkill, 1988), or used a Blackboard Architecture to support the distribution of Knowledge Sources, e.g., CAGE and POLIGON (Nii et al., 1988a). Still, Knowledge Sources remained autonomous in that, at runtime, no interaction took place with anything other than the blackboard and some sensor. Our Blackboard Framework presents an important modification over traditional Blackboard Architectures. Knowledge Sources have been externalised; that is, we do not consider Knowledge Sources just as internal components. Instead we provide them with external interfaces, thus enabling the use of their expertise to improve the interaction with external agents (see Figure 5.1).

3 We understand by external agent any software or human that may interact with the system.

In order to reason over the Web, a Knowledge-Based System must support interaction with external agents, be it for obtaining or modifying information, for triggering the execution of some remote services, etc. Given that Knowledge Sources encapsulate the expertise of the system, it seems natural to apply this knowledge to improve and direct the interaction with remote systems (such as external Web Services), or even with users. For instance, the user interface can be better adapted to the user, and the communication with remote systems can be raised to a semantics-based interaction, as shown in our case studies (see Part III). Traditional Blackboard Systems have usually integrated users by means of a specific Knowledge Source, since doing so provided a homogeneous and simple way of integrating the user within the overall problem-solving activity. Good examples can be found in the Distributed Vehicle Monitoring Testbed, where this particular Knowledge Source is called FRONTEND (Lesser & Corkill, 1988), and in the Edinburgh Designer System (Smithers et al., 1992). However, such an interface to the user had to be generic, as it was managed by a specific Knowledge Source, and could not therefore be adapted to the specific subtasks that take place in the problem-solving process.

Figure 5.3: The Model-View-Controller design pattern.

In Knowledge-Based Systems there is a strong relationship between the underlying knowledge that drives the application execution and the user interface. In Blackboard Applications, knowledge is partitioned into different fields of expertise. Hence, in our framework, driven by the need to effectively integrate the user into the Event Designing process (see Chapter 6), every Knowledge Source that might require some interaction with the user to achieve its goals can offer its own specific, and thus adapted, user interface. Therefore, each Knowledge Source handles the data that is relevant for its specific field of expertise, and can apply its knowledge for filtering the information as well as for generating better adapted and more dynamic interfaces. In order to support a seamless yet effective integration of the user in our framework, Knowledge Sources are based on the Model-View-Controller (MVC) design pattern (Gamma et al., 1994; Stelting & Maassen, 2002), which is also the basis of the CommonKADS reference architecture (Schreiber et al., 1999). This pattern provides a clear separation between the different subsystems involved in the application, making it easier to modify or customise each part. The Model-View-Controller design pattern, depicted in Figure 5.3, encourages good encapsulation by breaking the elements' responsibilities into three: the Model, the View and the Controller. The Model (also known as the Application Model in CommonKADS) represents the element's state and provides the means for consulting and modifying it. In a 3-layer architecture, the Model represents the business model. The View subunit is responsible for presenting the data to the user.
It constitutes the outbound user interface; thus, in a 3-layer architecture it forms the presentation logic. The Controller acts as the inbound interface, transforming user actions into invocations of Model functions. This subsystem defines the business logic.
In our Blackboard Framework, the Controller and the View handle the user interface while the Application Model does the reasoning. This approach leads to having a set of small chunks of user interface (one per Knowledge Source). Hence we suggest applying the Composite View design pattern (Alur et al., 2003) in order to provide a homogeneous user interface (see our case studies in Chapters 6 and 7). This design pattern supports the generation of user interfaces by joining together several sections. Only KS-specific parts are to be generated by each KS's View subunit; the rest is shared and included dynamically at runtime. Moreover, since each Model, View and Controller set is bundled together (as part of each KS), there is no ambiguity in determining which Knowledge Source should handle a user input.
Figure 5.4: Knowledge Sources Internal Structure.

Remaining faithful to the blackboard metaphor, our framework provides a skeletal implementation of Knowledge Sources and does not establish any further restrictions, other than these purely architectural and somewhat methodological guidelines. Hence, the (KS) Controller, the (KS) View and the (KS) Model can be developed as desired and/or required. For instance, the View can be implemented using any of the Java Graphical User Interface (GUI) libraries (e.g., Swing (Horstmann & Cornell, 2001b)) or even HTML for a Web interface. The reader is referred to Chapter 6 for a case study using a Web interface, and to Chapter 7 for a traditional Swing interface. Moreover, the communication between the View and the Model may be based on either of the two versions of the Model-View-Controller pattern (Stelting & Maassen, 2002). That is, an application developer could use a Pull version, where the View actively retrieves the information from the Model, or a Push version, where the Model triggers a refresh of the View. The Pull version is the approach to be used in Web-based interfaces, since servers do not maintain connections with clients (see Chapter 6 for an example). On the other hand, for traditional interfaces, such as Swing interfaces, the Push version achieves more dynamic interfaces (see Chapter 7).
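The difference between the two versions can be sketched as follows; all names are illustrative, not framework code. In the Pull style the View reads the Model's state on demand; in the Push style the Model provokes the refresh of the View whenever its state changes:

```java
// Illustrative sketch of the two MVC update styles discussed in the text.
public class MvcStyles {

    public static class Model {
        private String state = "";
        private Runnable viewRefresher = () -> {}; // Push: callback registered by the View

        public void setViewRefresher(Runnable refresher) { viewRefresher = refresher; }

        // Pull: the View actively retrieves the information from the Model.
        public String getState() { return state; }

        // Push: the Model provokes refreshing the View on every change.
        public void setState(String newState) {
            state = newState;
            viewRefresher.run();
        }
    }
}
```

A Web View would simply call getState() when rendering a page (Pull), whereas a Swing View would register a refresher and repaint itself on every notification (Push).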
Similarly, the (KS) Controller and, more importantly, the (KS) Model are open to different approaches. The Blackboard approach allows the integration of different reasoning mechanisms, and this is in fact one of its main advantages. Supporting the use of several reasoning mechanisms stands, in our framework, for the capability to implement the Knowledge Source's Model subsystem as desired. What knowledge it is based on, how the knowledge is integrated, how the Knowledge Source does the reasoning, what inferencing mechanism it uses, or what other (remote) systems it makes use of, is completely domain-dependent and thus remains an engineering decision to be adopted for each Knowledge Source of each application. The reader is referred to Figure 5.4 for a general overview of the Knowledge Sources' internal structure. Keeping our implementation generic is supported in our Blackboard Framework by means of an abstract class called Knowledge_Source, which implements a few methods of the blackboard machinery and specifies some further functionality that must be implemented. The developer of an application based on our framework must therefore implement the following methods for each Knowledge Source, to ensure that the blackboard mechanisms work appropriately:
constraintAdded and constraintRemoved: These two methods ensure that Knowledge Sources are appropriately informed about any addition or deletion of Constraints. They are automatically triggered by the blackboard machinery.
hypothesisAdded and hypothesisRemoved: These two methods ensure that Knowledge Sources are appropriately informed about any addition or deletion of Hypotheses. They are automatically triggered by the blackboard machinery.
initKnowledgeSource: This method is executed whenever the Knowledge Source is created. It allows the creation at runtime of new Knowledge Sources, based on the current reasoning requirements, thus reducing the computational overhead whenever they are not required.
chooseNextStep: The blackboard machinery works on the basis of cycles. In each cycle, the Blackboard Controller retrieves all the potential contributions and decides which one should be performed next. This method is thus executed in every cycle to give each Knowledge Source the opportunity to propose its next contribution (if any).
performStep: Once the Blackboard Controller has decided the next reasoning step to be undertaken, it has to trigger the execution of the Knowledge Source. This method ensures the Knowledge Source is activated and performs the appropriate action.

So far, we have presented our Blackboard Framework, paying special attention to the data structures involved and the control mechanisms of the overall reasoning platform. Moreover, we have described the internals of the Knowledge Sources, which eventually bring the Blackboard Model to its full potential. However, we have voluntarily left aside an important part of the specificities of our Opportunistic Reasoning Platform with respect to reasoning over the Web. In the next section, we introduce the mechanisms and architectural decisions that have been adopted in order to better support reasoning over the Web.
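The Knowledge_Source abstract class described above might be sketched as follows. The method names follow the text; the Object-typed signatures are our assumption:

```java
// Sketch of the Knowledge_Source abstract class: the blackboard machinery
// calls these hooks, and each concrete Knowledge Source implements them.
public abstract class Knowledge_Source {

    // Notifications triggered automatically by the blackboard machinery.
    public abstract void constraintAdded(Object constraint);
    public abstract void constraintRemoved(Object constraint);
    public abstract void hypothesisAdded(Object hypothesis);
    public abstract void hypothesisRemoved(Object hypothesis);

    // Executed whenever the Knowledge Source is created.
    public abstract void initKnowledgeSource();

    // Called every cycle: propose the KS's next contribution, if any.
    public abstract Object chooseNextStep();

    // Called when the Controller selects this KS's proposal for execution.
    public abstract void performStep(Object ksar);
}
```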
5.3 A Web-Oriented Approach

There currently exist a number of knowledge representation formats for the Web (see Chapter 2), a variety of protocols for transporting information, and ways of remotely executing IT services over the Web. We take these formats and protocols as an integral part of our platform, as a means to better support reasoning over the Web. In particular, we identify ontologies and Web Services as key technologies of our reasoning platform. The majority of the formats we previously presented in Section 2.2 were created to support the definition of ontologies for the Web. Hence, at the very least, any Web-oriented Knowledge-Based System should support the use of such formats. That is, KBSs should support manipulating ontologies and Knowledge Bases expressed in these Web formats, and should support communicating with other agents over the Web by means of such shared conceptualisations. A typical difficulty developers of Blackboard Systems have to face concerns the representation of knowledge (Pfleger & Hayes-Roth, 1997). Blackboard Systems rely on the effective collaboration of (almost) independent experts that cooperate by sharing information on the Blackboard. Hence, Blackboard Application developers have to find a tradeoff between the expressivity and the sharability of the knowledge encapsulated in the different Knowledge Sources. In other words, Knowledge Sources must be able to understand the results stored by other Knowledge Sources, and still it has to be possible to represent specialised knowledge for each of the fields of expertise. This tradeoff has traditionally been achieved by means of a hierarchical organisation of the expertise. Since ontologies are generally taxonomic representations of knowledge (i.e., hierarchies of concepts), they are also a natural means for representing knowledge in Blackboard Applications. Hence, ontologies pave the way for Knowledge Sources to understand each other, while they support refining this knowledge to better fit a certain field of expertise. In fact, given that ontologies are intended to support an effective, semantics-based communication between independent and heterogeneous agents over the Web, it seems natural to also adopt them to fulfil precisely such a role in a more controlled environment like a Blackboard Application. Bringing Blackboard Systems to the Internet poses new requirements. For instance, communication might now cross the boundaries of the organisation where the system is deployed. This implies, among other things, the need to support a seamless but effective communication with external systems.
Interaction between different software components builds upon the use of shared communication protocols, information syntax and semantics. Web Services have been proposed as a means of supporting loosely coupled interoperation between different systems, independently of their platform or language, over the Web (Endrei et al., 2004):

Web services provides a distributed computing approach for integrating extremely heterogeneous applications over the Internet. The Web service specifications are completely independent of programming language, operating system, and hardware to promote loose coupling between the service consumer and provider.

A somewhat more explicit definition is given by the World Wide Web Consortium (W3C), identifying the technologies (protocols and languages) involved (Booth et al., 2004):

A Web Service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL). Other systems interact with the Web Service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialisation in conjunction with other Web-related standards.

In a nutshell, Web Services are stateless programs offered via the Web by means of standard formats and protocols, such as XML, SOAP, or the Web Services Description Language (WSDL) (Christensen et al., 2001). Web Services are regarded as one of the main technologies to streamline Enterprise Application Integration (EAI), as a central part of the Service-Oriented Architecture (Pallos, 2001; Endrei et al., 2004), or Service-Oriented Approach as some authors, with good reason, prefer to call it (Vinoski, 2005). Web Services and the so-called Service-Oriented Approaches have proven appropriate for the seamless integration of distributed systems (Vinoski, 2003) and are currently the subject of important commercial and academic effort.
For instance, an overwhelming number of Web Service choreography languages have been proposed, or are currently under development, in order to automate the composition and execution of Web Services (van der Aalst, 2003). Similarly, the Semantic Web Services initiative couples Web Services with semantic annotations of them (McIlraith et al., 2001) in order to support the (semi)automated discovery, composition and execution of services.

4 While these different initiatives are indeed interesting subjects for further research, they fall outside of the scope of this thesis and will not therefore be treated in more detail.

Conceptually, the ideas surrounding Web Services are not new. The Common Object Request Broker Architecture (CORBA) (Vinoski, 1997), one of the first specifications adopted by the Object Management Group (OMG) (OMG, 1997), was precisely aimed at supporting distributed execution in heterogeneous environments. However, even though CORBA has proven useful, it lacks the W3C's support, which in the Web environment is one of the key factors for the success of a given technology. Hence, as part of our framework we identify ontologies and (Semantic) Web Services as the vehicle for supporting an effective, semantics-based collaboration with remote systems. The reader could understand here that we are advocating a particular technology in order to drive and support the cooperation with remote systems. We want to point out, however, that the particular technology being used is not the key aspect. In fact, looking at the design principles of the Web presented earlier, we can notice that the evolution of technologies is regarded as a central aspect of the Web itself (see for example the test of independent invention in Chapter 4). Instead, the importance lies in the underlying architectural decisions adopted during the design and development of our Opportunistic Reasoning Platform, which identify and support the use of distributed computation technologies (provided they are Web standards!) to achieve its goal: reasoning over the Web. For instance, we believe we could provide a solution based on CORBA with some, but not many, changes, which would affect the particular implementation but
certainly not the core architecture itself. However, CORBA not being supported by the W3C makes it a doubtful option. The modular and distributable nature of our Blackboard Architecture makes it suitable for Web environments, as it supports the integration of remote systems without affecting the overall architecture. We identify the execution of remote Web Services as part of the functionality that can be encapsulated in the Knowledge Sources' Application Model subsystem. Hence, in addition to retrieving, modifying and reasoning over the knowledge they hold, Knowledge Sources might trigger the execution of remote functionality offered via Web Services (or another Web standard). Again, the externalisation of Knowledge Sources is particularly appealing, since it paves the way for integrating external systems, applying the Knowledge Sources' expertise to raise the collaboration with remote systems to a semantic level by means of shared conceptualisations (i.e., ontologies). Chapters 6 and 7 provide illustrative examples that account for the benefits of this approach.
5.4 Knowledge Integration

Chapter 2 and Chapter 3 were devoted to presenting and analysing the various knowledge representation languages for the Web, as well as the reasoning techniques and the current systems that implement them. Throughout these chapters we presented ontologies as the main formalism for representing Knowledge Bases, and we accounted for their purposely limited expressivity, chosen so that they remain sharable and still retain desirable computational properties such as decidability. The tradeoff between these factors, that is, expressivity versus desirable computational properties, led to Description Logics as quite expressive logics that retain computationally appealing characteristics. Description Logics, as supported by OWL-DL for example, provide an appropriate formalism for defining conceptual taxonomies, which can be augmented with some kinds of restrictions (e.g., cardinality restrictions on properties) and relationships (e.g., transitive properties). The downside of Description Logics comes from the fact that the only inferencing mechanisms supported are subsumption, consistency checking and classification. To cope with this lack of expressivity, we already introduced in the previous sections the possibility of making use of rule languages and/or procedural constructs in order to define the dynamic knowledge of the system. We accounted for the overwhelming diversity of tools available, in terms of their logical foundations but also in terms of their intended functionality (e.g., ontology toolkits, rule engines, integrated solutions, etc.). As a consequence, Semantic Web applications, and in general Knowledge-Based Systems for the Web, can roughly be divided into two kinds. There exist applications and tools that remain bound to Description Logics but do not commit to any particular ontology; these have limited inferencing capabilities but, conversely, remain generally applicable to different domains. They are typically query-answering applications or general classifiers. Conversely, there exist applications that require further inferencing capabilities and couple ontologies with additional mechanisms (e.g., rules or procedures). These applications gain inferencing power at the price of genericity. Usually, these applications commit to one or various particular ontologies in order to
support more advanced and typically faster reasoning procedures. Two major approaches to ontology manipulation can be extracted from the literature and the so-called Semantic Web applications developed so far. The first relies on using any of the libraries currently available, such as the Jena or KAON APIs (see Section 3.3). These libraries support browsing ontologies generically through statement-, resource- or ontology-oriented APIs (see Section 3.3.1). Furthermore, the libraries usually provide the capability to query the content semantically, as well as to serialise and deserialise the information to and from Semantic Web formats like RDF or OWL. The second relies on mapping the concepts defined in the ontology into semantically equivalent objects of an Object-Oriented programming language. This is the approach employed in (Cranefield, 2001; Goldman, 2003; Eberhart, 2003) and proposed by other projects such as SWeDE (SWeDE, 2004), Kazuki (Kazuki, 2004), or FRODO (FRODO, 2000). While an in-depth comparison of both approaches is out of the scope of this document, the first approach seems better suited for dynamic environments where ontologies are often modified or where genericity with respect to ontologies is a key factor. On the other hand, relatively static environments can obtain more benefit from the second approach, which simplifies programming tasks, allows applying additional mechanisms (e.g., rules or procedures) and usually exhibits better performance. Our Blackboard Framework is particularly oriented towards supporting the construction of advanced and possibly distributed Knowledge-Based Reasoning Systems for the Web, whose inferencing capabilities cannot be reduced to those provided by Description Logics. Still, we have designed the platform to be generic and adapted to the Web, that is, aligned with its underlying principles as previously introduced. Thus, in order to facilitate the integration of ontologies into applications built on top of our platform, we advocate mapping ontologies into their Object-Oriented representation in Java. At design time, the ontologies created are automatically analysed, and the hierarchy of concepts with their properties is recreated as Java objects. Some of the currently available utilities can be used for that purpose. For instance, in developing our case studies we have tested the rdf2java tool developed as part of the FRODO project (FRODO, 2000), a utility created by Stephen Cranefield (Cranefield & Purvis, 1999; Cranefield, 2001), and Kazuki. It is important to note that the importance lies in the conceptual approach (i.e., mapping ontologies into their Object-Oriented representation) and not in the particular tool applied to achieve it. In fact, in reviewing the existing tools we have found that the majority provide more or less the same set of features: they allow developers to automatically generate the Java code, and provide a set of utilities that serialise and deserialise the information between Java and Web formats (typically RDF and, in some cases, OWL as well). As a result, developers are given the means to effectively access information expressed according to some ontologies by using its Object-Oriented representation, while being abstracted away from the burdening details of the particular serialisation format (e.g., RDF/XML). Moreover, this directly supports manipulating these Java objects inside the Blackboard Framework, which is itself completely based on Java objects. Therefore Knowledge Sources can seamlessly manipulate the concepts, properties and individuals defined in ontologies within their internal reasoning machinery.
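For illustration, a hypothetical ontology concept Venue with a capacity property, and a subconcept ConcertHall, could be recreated along the following lines. The names are invented for this example, and real generators such as rdf2java produce richer code:

```java
// Hypothetical Java rendering of an ontology concept: the concept becomes a
// class, and its properties become fields with accessors and modifiers.
public class Venue {
    private String name;
    private int capacity;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getCapacity() { return capacity; }
    public void setCapacity(int capacity) { this.capacity = capacity; }
}

// The concept hierarchy is recreated through Java inheritance.
class ConcertHall extends Venue {
    private boolean hasStage;

    public boolean isHasStage() { return hasStage; }
    public void setHasStage(boolean hasStage) { this.hasStage = hasStage; }
}
```

Knowledge Sources can then manipulate venues as ordinary Java objects, with no RDF/XML handling in sight.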
Directly reasoning over Java objects is a particularly appealing feature, since it allows us to maintain a coherent view over the data being handled, which typically needs to flow through a variety of heterogeneous software components (as we saw in Chapter 3), without having to carry along the inherent complexities of ontology serialisation formats. Unfortunately, reasoning over objects is not supported by all reasoning systems, and some of them require additional mechanisms to bridge the gap between their internal workspace and Java objects. Perhaps the prototypical example is Jess (Friedman-Hill, 2003). Since we are particularly concerned with enabling the application of different sorts of inferencing mechanisms, be they rules or procedures, the generated Java code shall be modified by application developers to overcome this limitation insofar as possible. The generated Java classes are adapted to conform to the JavaBeans naming conventions (Sun Microsystems Inc., 1997).
The rationale behind JavaBeans is to support the use of Java reflection mechanisms (Horstmann & Cornell, 2001b) in order to program generic utilities independently of the actual Java class being handled. These naming conventions ensure that property accessors and modifiers take the form getPropertyName and setPropertyName, or isPropertyName and setPropertyName for boolean properties. Based on the JavaBeans conventions, a significant number of tools are already available to developers. Among the various capabilities offered, the one we are mainly concerned with is Events (Sun Microsystems Inc., 1997): Events provide a convenient mechanism for allowing components to be plugged together in an application builder, by allowing some components to act as sources for event notifications that can then be caught and processed by either scripting environments or by other components. Conceptually, events are a mechanism for propagating state change notifications between a source object and one or more target listener objects. This feature is what allows software components (e.g., inference engines) to map external Java objects into their workspace so that any modification to an object performed inside the component is reflected in the original instance residing in the Java Virtual Machine and, conversely, any change performed in the Java Virtual Machine is directly reflected in the component's workspace. Therefore, as part of our platform, we include a simple abstract class which implements the methods addPropertyChangeListener and removePropertyChangeListener required by the JavaBeans events mechanism, so that every JavaBean representing an ontology concept can extend the class and directly benefit from this feature. Thanks to this simple yet powerful mechanism, applications can seamlessly apply different kinds of inferencing rules over the concepts defined in the ontology, and the changes are automatically reflected on the Blackboard for further reasoning.
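A minimal sketch of such an abstract base class, built on the standard java.beans.PropertyChangeSupport helper (the Room bean and its capacity property are our own illustration, not the platform's actual code), might look as follows:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Sketch of the abstract helper described above: it implements the JavaBeans
// listener registration methods once, so generated beans can simply extend it.
abstract class ObservableBean {
    private final PropertyChangeSupport support = new PropertyChangeSupport(this);

    public void addPropertyChangeListener(PropertyChangeListener l) {
        support.addPropertyChangeListener(l);
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        support.removePropertyChangeListener(l);
    }

    protected void firePropertyChange(String name, Object oldValue, Object newValue) {
        support.firePropertyChange(name, oldValue, newValue);
    }
}

// Hypothetical generated bean for an ontology concept: every setter notifies
// registered listeners (e.g. an inference engine's workspace) of the change.
class Room extends ObservableBean {
    private int capacity;

    public int getCapacity() { return capacity; }

    public void setCapacity(int capacity) {
        int old = this.capacity;
        this.capacity = capacity;
        firePropertyChange("capacity", old, capacity);
    }
}
```

A component that registers a PropertyChangeListener on such a bean is notified of every state change, which is what keeps an engine's internal workspace and the Blackboard's Java objects in sync.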
⁵ Current code generation libraries are already moving in this direction but are not yet there.
5.5 Summary
In this chapter we have described our Opportunistic Reasoning Platform, an infrastructure for the development of Knowledge-Based Services over the Web. Our Opportunistic Reasoning Platform consists of a widely applied and extensively analysed reasoning model (the Blackboard Model), a skeletal generic implementation of that model (the Blackboard Framework), and a number of architectural and technological decisions. It supports implementing Knowledge-Based Services where the different components of the application can be deployed on multiple physical systems connected through Web Services and sustained by ontologies. The Blackboard Framework, which is the kernel of our platform, is an Object-Oriented implementation of the Blackboard Model which presents the novelty of externalising Knowledge Sources as a means to better adapt the infrastructure to the Web. Doing so supports applying Knowledge Sources' expertise to streamline the communication with external agents, be they humans or external systems. The Opportunistic Reasoning Platform identifies ontologies represented in Web standard formats as a means to better support reasoning over the Web. Ontologies are used to represent sharable conceptualisations and thus pave the way for reusing third-party Knowledge Bases, for sharing internal Knowledge Bases, for integrating Problem-Solving Methods, etc. Moreover, to support the seamless and effective interaction with remote systems, the platform identifies Web Services technologies as part of the functionality Knowledge Sources can benefit from in order to reason over the Web. Finally, the Opportunistic Reasoning Platform advocates mapping conceptual schemas represented in ontologies into their Java Object-Oriented representation.
This representation, thanks to JavaBeans technologies and the Blackboard Framework machinery, supports the seamless interoperation of diverse software components, independently of their internal representation details, and abstracts the developer away from the burdensome details of Web representation formats.
Part III Case Studies
The key to smarter forms of collaborative eBusiness lies in realising a higher level of machine-understandable semantics in eBusiness data and processing systems on the Web. As Tim Berners-Lee explains in (Frauenfelder, 2001): "Imagine registering for a conference online. The conference Web site lists the event time, date and location, along with information about the nearest airport and a hotel that offers attendees a discount. With today's Web, you have to first check to make sure your schedule is clear, and if it is you have to cut and paste the time and the date into your calendar program. Then you need to make flight and hotel arrangements, either by calling reservations desks, or by going to their Web sites. There's no way you can just say, 'I want to go to that event,' because the semantics of which bit is the date and which bit is the time has been lost." Nowadays, the Semantic Web is largely considered a prime enabler of future forms of collaborative eBusiness. Still, the Semantic Web and the technologies it builds upon are not all that is needed to realise new forms of eBusiness. An appropriate software infrastructure needs to be established in order to support reasoning over interorganisational business processes. In this part of the thesis, we investigate the applicability of our Opportunistic Reasoning Platform to support reasoning over the Web, through the development of two case studies which find their motivation in the need to implement smart collaborative eBusiness solutions over the Web. The first case study is a Web-based design support system that assists event designers in the process of organising events such as conferences or meetings. In this application the emphasis lies on supporting event organisation by reasoning over an inter-organisational process over the Web, informed by a Knowledge Level theory of design.
The second case study is the implementation of a music rights clearing organisation that can automatically clear the rights of music broadcast by Internet radio stations by contacting the appropriate rights societies involved. In this application we pay special attention to the development of a system reconciling strategic business decisions and technical implementation aspects.
Chapter 6 The Online Design of Events Application
The work presented in this chapter was performed during the European Project Ontology-Based ELectronic Integration of CompleX Products and Value Chains (OBELIX IST-2001-33144). The chapter is largely based on the project deliverables (Cendoya et al., 2002, 2003; Aguado et al., 2004). Part of the work has also been published in (Aguado et al., 2003; Maier et al., 2003) and has been accepted for publication in (Pedrinaci et al., 2005b). The application was exhibited at the IST Event 2004.
In this chapter we describe the Online Design of Events Application, a Web-based Design Support System that aims to support event organisers in the process of organising meetings, conferences, and the like. The chapter is organised as follows. First we present the rationale underlying the system, based on a theoretical understanding of designing. Next, we describe how the system has been built on top of our Opportunistic Reasoning Platform. Further, we introduce some technical aspects and implementation issues that had to be faced during the system's development. Finally, we illustrate the design support system at work.
6.1 Introduction
Since it was opened in 1997, San Sebastian's Technology Park (PTSS) has hosted numerous events, ranging from international conferences and scientific sessions to business meetings. The events hosted have included innovative applications in, for example, telemedicine and e-learning, dissemination activities related to current technologies, and presentations and galas for prestigious organisations. In the past two years, 457 events were hosted with the participation of 31,750 people. PTSS was actively involved in the organisation of these events; thus, a significant amount of time and resources was required for their appropriate organisation. In this chapter we will use as an illustrative example the Kick-Off meeting of the European project OBELIX, where PTSS was one of the partners.
The OBELIX Kick-Off meeting was held on March 25th and 26th, 2002, in the Technology Park of San Sebastian. There were about fifteen participants from every partner taking part in this project. Figure 6.1 depicts the steps that were followed during its organisation. First of all, a meeting agenda was prepared, defining the dates and activities when the meeting was expected to take place. An initial timetable was also sketched and resource requirements were determined: the need for equipment for projecting slides and printing during the meeting, and a meeting room with tables for up to 20 people organised in a U-layout. Once these basic requirements were established, a request form was filled in to reserve the meeting room and, if necessary, to rent the required equipment from external providers.
Based on these requirements, PTSS staff checked the availability of resources (room, tables, chairs, projectors, computers and printers) and the meeting room was reserved. The next step involved setting up a coffee and lunch service. To this end the Arbelaitz restaurant was contacted, given that it is the closest restaurant to the meeting room. After some interaction (involving phone calls and a few faxes), the service was contracted. Given that some attendants came from other cities and countries, some close and good-quality hotels were contacted by phone and fax. This resulted in the booking of eight double rooms (for single use) in the Maria Cristina Hotel. Finally, a dinner was organised in the Zelaya Cider House, and a taxi service was booked in order to take the attendees to and from the airport. From the previous description, even though the event was quite small, we can see how complicated preparing an event can be. The majority of the tasks involved have to be repeated over and over again for every event to be organised. Moreover, in many cases, due to resource allocation, schedule changes or other external difficulties, part or even the whole event design has to be reworked. For instance, it may happen, depending on the dates and the number of participants, that it is impossible to obtain all the equipment required. Such a situation could lead to a schedule change, thus invalidating previous resource allocations, service bookings, etc. Organising events involves many tasks that can, and certainly should, be automated. This is the case, for example, for checking the availability of internal resources. Appropriately performing resource allocation could lead to saving time and money. For instance, table usage can be minimised, the number of objects to be moved from one meeting room to another can be optimised, the number of events hosted can be maximised, etc. Moreover, in many situations it is also possible to automate the interactions with external providers. Automated collaboration constitutes an important step towards offering cheaper and better services. In fact, as explained in (Akkermans et al., 2004): Companies don't usually offer just a single service to a customer but a more or less interrelated collection of them.
This enables broader and better coverage of customers' needs while achieving scale and scope efficiencies in service cost by sharing and reusing service elements. In this context, we have developed the Online Design of Events Application (ODEA), an application that aims to support PTSS employees in the task of organising events.
In this application, event organisation is understood as
Figure 6.1: OBELIX Kick-Off meeting organisation process.
a process involving not only PTSS but also other providers, such as audiovisual equipment providers, restaurants and hotels, in a complex value chain that is needed when hosting events. Therefore, collaboration between different entities plays an important role in ODEA, offering new opportunities for smart collaborative eBusiness. This chapter presents the results of the development of the Online Design of Events Application. We first present the conceptual understanding of event organisation which underlies the application we have implemented.
We next describe how our conceptual approach to event organisation has been supported by building our application on top of our Opportunistic Reasoning Platform. Finally, we illustrate the system at work, and conclude with some technical aspects of the development of ODEA.
6.2 ODEA's rationale: a Theory-Based System
The description of the OBELIX Kick-Off meeting preparation clearly shows that the final organisation of an event is not directly specified by the initial requirements. For instance, the need for organising a successful Kick-Off meeting did not directly specify anything about how people should be organised, nor did it make explicit the need for contracting a coffee service, for example. However, knowledge about previous meetings informed us about these implicit requirements for a meeting to be successful. In other words, the need for organising an event does not immediately specify how it could or should be prepared. Hence, event organisation can be treated as a kind of designing, designing being understood as the action performed when there is a need or desire for some part or aspect of our world to be different, and we cannot immediately specify how it should or could be changed (Smithers, 2002b). The Artificial Intelligence community has been active in design research for a long time. Much of this effort has resulted in tools that attempt to automate some part of an engineering design task. These tools have demonstrated the feasibility of automating particular tasks, but have not necessarily led to an increased understanding of the design process itself. Thus, tools have mainly been developed based on empirical results with no theoretical grounding in the design process itself (Smithers, 1996). Theories attempt to explain some phenomena based on general principles independent of the thing to be explained. They therefore serve as a framework for analysing, understanding and explaining some particular phenomenon. From a more practical perspective, theories can, in many cases, be used to construct models which represent an appropriate framework for creating, for engineering, new artifacts. For instance, acoustic theory can be used to construct a model of sound propagation and use it for designing and constructing a concert hall.
These theoretically derived models present the important advantage of being well delimited and therefore safer than empirical models, which can be, and actually are, used as well for constructing things. A good example can be found in the aeronautics industry, where empirical methods have been used, but it would now be inconceivable to construct new aircraft without the support of aerodynamic theory. Smithers (1996) argues for the need for Knowledge Level theories of the design process in order to properly connect the construction of design systems with the large body of work in Knowledge Engineering. The KL DVE1 (Smithers, 1998, 2002a) is a Knowledge Level theory of designing which intends to provide theoretical support for designing that could also offer an appropriate framework for the Knowledge Engineering and construction of Knowledge-Based design systems. For the development of our Online Design of Events Application we have adopted a different approach (see Figure 6.2), compared to more traditional practices in Artificial Intelligence in Design, and have engineered the application using the KL DVE1 as its theoretical framework. Having established the framework, we have analysed the particular kind of designing involved (i.e., event organisation). Based on the empirical understanding of our specific domain we have created a model for organising events, namely the Events Design Process Model. Finally, on the basis of the theoretically supported Events Design Process Model
we have specified the system and implemented it on top of our Opportunistic Reasoning Platform. In the remainder of this section we will tackle the different steps that were undertaken in order to create the Events Design Process Model, postponing the implementation details to subsequent sections of this chapter.
Figure 6.2: Approach for developing ODEA.
6.2.1 The KL DVE1
Two main approaches to the design process can be extracted from design research: design as problem solving (Simon, 1973, 1981) and design as exploration (Smithers & Troxell, 1990; Smithers, 1992, 1998, 2000, 2002b). In (Simon, 1973, 1981) Simon characterises design as a kind of problem solving: "The design process... can be composed from a combination of a General Problem Solver (GPS), which at any given moment finds itself working on some well-structured subproblem, with a retrieval system, which continually modifies the problem space (that the GPS is working on) by evoking from long-term memory new constraints, new subgoals, and new generators for design alternatives. We can also view this retrieval system as a recognition system that attends to features in the current problem space and in external memory (e.g., models and drawings), and, recognising features as familiar, evokes relevant information from memory which adds to the problem space (or substitutes for the other information currently in the problem space)." Smithers has proposed Designing as Exploration as an alternative. Design as Exploration (DasE) differs from Design as Problem Solving (DasPS) in two important respects. First, DasE does not presume that designing starts with a problem at all, ill-structured or ill-defined. Instead, designing starts with some needs and desires, which might easily suggest the resolution of some problem, but are not in themselves problem specifications. Second, as opposed to DasPS,
DasE does not presume that ill-structured problems are solved by decomposition into well-defined subproblems. Instead, it proposes that a final well-formed problem definition is constructed incrementally, as designing proceeds, via synthesising and solving problems which can be shown to satisfy some or all of the initial needs and desires. Thus, it proposes that synthesising problems, whose solution(s) can satisfy the motivating needs or desires, is a central and necessary aspect of designing. Thus designing must finish with realisable specifications, designs, without starting with anything that can be properly understood as a problem for which the final design is a solution. This apparent paradox, arriving at a kind of solution without starting with a problem, is the distinguishing characteristic feature of designing. Designing resolves this paradox by actively constructing the problem or problems whose solution or solutions can form a design or parts of a design. Designing is thus puzzle making and puzzle solving (Smithers, 1992). Usually the problem forming (puzzle making) and solution finding (puzzle solving) are tightly integrated, incremental activities which are driven by reflection on what is happening. Furthermore, the problem forming aspects are often tacit and never made explicit in the work of the designers. At the centre of designing is thus a combination of problem forming and solution finding activities. But this cannot be all there is. Problem solutions need well-defined problems, and well-defined problems need to embody explicit conditions on the possible solutions: well-defined problems must define a space of possible solutions. The conditions used to define the solution space are derived from identified criteria or requirements. They are operationalisations of criteria that are identified as requiring to be met in order for a design to satisfy the motivating needs or desires. The devising and forming of these requirements is thus also an integral, and necessary, part of designing. The process that leads from the customer's needs and desires to the requirements identification and to problem specifications is not a trivial one. This process is a knowledge-intensive activity which evolves incrementally as the designing proceeds. In general, at the beginning of the designing activity it is not possible to determine all the criteria that will be required for obtaining a suitable design: one that fulfils the motivating needs and desires. Instead, designing in general identifies an initial set of requirements which may be incomplete, inconsistent, imprecise, ambiguous or even impossible. Designing as Exploration proposes that through the designing activity, problems constructed on the basis of some more or less complete set of requirements will be solved and the results evaluated. It is precisely through this process that the incompleteness, inconsistency, imprecision, ambiguity or even impossibility of the requirements is identified and perhaps resolved. Thus, one of the necessary aspects of designing involves developing the requirements in order to obtain a complete, consistent, precise, unambiguous set of criteria that can be used to create a design that satisfies the motivating needs and desires. In summary then, reusing the words from (Smithers, 2002b):
Designing as Exploration characterises designing as the exploration of the problems that can be devised whose solutions can be shown to satisfy the needs or desires that motivate the designing.
Newell (1982) introduced the Knowledge Level and established the dominant view in the Knowledge Engineering community that characterises knowledge as the capacity to act rationally. Given that all designing is a kind of intelligent, that is rational, behaviour, the Knowledge Level thus offers a suitable and appropriate level of abstraction at which we can seek to develop a general theory of designing: a theory about what kinds of knowledge are necessary and sufficient for designing, what roles these different kinds of knowledge play in the process, and what the relationships are between them in designing. It offers both a practical level at which to build a general theory of designing, and a way of building useful theories, which can support the Knowledge Engineering of Design Support Systems (Smithers, 1996), and the specification of knowledge management infrastructures, for example (Smithers, 1998). The KL DVE1 (Smithers, 1996, 1998, 2002b,a) is precisely an attempt to develop a Knowledge Level theory of Designing as Exploration. It attempts to define the necessary and sufficient kinds of knowledge involved in all and any kind of designing, the roles they play and the relationships between them. According to the KL DVE1, knowledge is both used and constructed during designing. The knowledge used is partitioned into three kinds: the general context knowledge (K.gc), the exploration knowledge (K.ex), and the design knowledge (K.dk). The K.gc represents the contextual knowledge that can influence the design. In fact, every designing takes place in a wider context where cultural, social or even political conditions influence design. Thus, the K.gc includes domain knowledge about the particular design being performed, as well as information regarding the customer's needs and desires and his or her particular context, in order to generate better and more adapted designs.
Finally, given that designing is not usually a one-off activity, the styles, the methods developed, or even the fashions around any designing practice are also embedded in the general context knowledge. Given that the KL DVE1 is a theory of Designing as Exploration, designing is taken to be a process that combines and integrates requirements forming, problem defining, solution finding, and solution evaluation. The knowledge used by this exploration process is called K.ex, and each of the core aspects depends upon particular kinds of knowledge that are embedded in K.ex. The exploration knowledge thus includes knowledge for requirements formation, recognition and development. Moreover, it embeds knowledge for defining, modifying and reviewing well-formed problems which have to be solved and evaluated with respect to user needs or desires. Finally, since the designing process may result in the identification of some need to change or modify the current set of design requirements, or to define a new problem, the exploration process typically needs to be coordinated, so that it remains coherent and locally effective. The exploration knowledge thus includes knowledge to devise and maintain what are called local plans. Usually documenting the design is a formal requirement. Maintaining an explicit record of the design rationale requires knowledge on how to do this. Moreover, preparing effective presentations of the performed designs is usually not a straightforward task. The knowledge required to perform these tasks is embedded in the K.dk. Last, during designing, knowledge about the current state of the design is generated. This includes, among others, the current set of requirements and
their status, the problems defined so far and the solution evaluations produced. The KL DVE1 identifies these kinds of constructed knowledge and defines them as components of designing. Given that the KL DVE1 identifies the necessary and sufficient types of knowledge involved in any kind of designing, and bearing in mind that it is based on a general characterisation of Designing as Exploration, it does not only provide a general understanding of designing but also supports the necessary knowledge engineering for constructing Knowledge-Based design systems. As we will see next, this is precisely the role the KL DVE1 has played in the development of ODEA.
6.2.2 Event Organisation as Routine Configuration Design
We previously introduced the important contribution of theories to the construction, to the creation, of new things, and therefore the interest in applying them. However, making use of theories is not a straightforward process. It requires deriving a model for the particular task being undertaken, and doing so must rely on a deep analysis of the domain of interest. It is therefore important to analyse what is involved in organising an event, as well as the keys to its success. It is however important to note that the analysis presented here does not pretend to be an exhaustive study. Instead we use the scenario previously described in order to illustrate the conclusions drawn from previous research, see (Cendoya et al., 2002). The OBELIX Kick-Off meeting shows some key factors one has to bear in mind when organising an event. Meetings involve discussions and it is therefore required to place the people in a suitable way. This implies situating the people close to each other with the possibility to see everybody. Hence, setting up the tables forming an O (a circle or a square) seems to be the best option. However, in this case, given that the participants were going to project slides, tables could not be organised that way. Tables had to be organised forming a U so that projection could take place on the free side. Another important factor for the success of a meeting is comfort. The participants should be comfortable enough so that they can stay for hours in the meeting room. This is the main reason why a coffee service was contracted, besides the fact that coffee breaks have proven to be very fruitful in previous meetings. Lunch was planned to take place in a restaurant close to the meeting room, thus improving participants' comfort.
Finally, bearing in mind comfort, but also taking into account that several participants were coming from other countries, a taxi service to and from the airport was contracted and several rooms in a good hotel were booked. In the Artificial Intelligence in Design community, designing is often classified into three distinct classes depending on the fixed (or changing) nature of the requirements and problem statements (Bernaras, 1994):
Routine Design: This is the kind of design that has often been done in the same domain, using the same requirements and knowledge. In this type of designing the needs and requirements are used to directly formulate a complete, precise, unambiguous and possible requirements description. The requirements description is then used to formulate a problem specification whose solution can be shown to satisfy the motivating needs and desires. This relative simplicity makes Routine Design look rather similar to problem solving.
Innovative Design: Innovative design is more difficult than routine design. In Innovative Design, several problem statements are generated, solved and assessed. Throughout the generation of these problem statements, the set of design variables is revised and/or changed.
Original Design: Original Design, as opposed to the previous kinds, involves the incremental improvement and refinement of the initial requirements until a consistent, complete, precise, unambiguous and possible set is devised and serves to specify a problem statement whose solution satisfies the motivating needs and desires.
The analysis of the events organised so far in PTSS shows that the different aspects we had to bear in mind when organising the OBELIX Kick-Off meeting are common to almost any event. In fact, organising events in PTSS has revealed itself to be a task that includes a common and well-known set of requirements and information to be provided.
According to the previous categorisation of the different kinds of design, we can take event organisation (in PTSS) to be a kind of Routine Design. However, preparing an event is not just a matter of defining these requirements taking into account the customer's or client's needs or desires. It requires performing a good number of tasks, such as contacting external providers, allocating resources or organising people. These tasks are precisely those it would be interesting to automate in ODEA. The result of organising (designing) an event consists of a set of components which, together, deliver the desired and required service. In this particular design, but also in general, event designing does not involve the generation of new elements other than the design itself. Hence, according to (Schreiber & Wielinga, 1997), which establishes that the main characteristic of Configuration Design is that no new component is generated, we can see that the main problem-solving task in event organisation is that of Configuration Design. In summary then, event organisation in the context of PTSS can be considered as a kind of Routine Configuration Design involving a number of external organisations that might be required to take part in the overall preparation.
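As an illustrative sketch of what Routine Configuration Design amounts to in this domain (the class, its rules and the room names are our own toy example, not ODEA's actual code), the fixed requirements map directly onto selections among existing components:

```java
import java.util.List;

// Toy routine configuration step inspired by the meeting example above:
// requirements are known in advance, and no new component is ever generated.
class EventConfigurator {
    enum Layout { O_LAYOUT, U_LAYOUT }

    // Routine design: a known requirement maps directly onto a complete
    // problem specification; here the only design variable is the layout.
    static Layout chooseLayout(boolean needsProjection) {
        // Discussions favour a closed O layout; projection needs a free side.
        return needsProjection ? Layout.U_LAYOUT : Layout.O_LAYOUT;
    }

    // Configuration design: we only select among existing rooms whose
    // capacity satisfies the requirement, never create new ones.
    static String selectRoom(List<String> rooms, List<Integer> capacities, int attendees) {
        for (int i = 0; i < rooms.size(); i++) {
            if (capacities.get(i) >= attendees) {
                return rooms.get(i);
            }
        }
        return null; // the requirement cannot be met with existing resources
    }
}
```

A null result corresponds to the situation described earlier in which requirements cannot be satisfied with the available resources, forcing the design (e.g., the schedule) to be reworked.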
6.2.3 A Design Support System
The KL DVE1 attempts to define the necessary and sufficient kinds of knowledge involved in all and any kind of designing, the roles they play and the relationships between them. However, the development of an application cannot rely solely on a theoretical understanding of designing. There remain important decisions to be taken during the development.
For instance, one of the main decisions
regards the level of automation the design system will exhibit, i.e., will the system be an automated design system or will it be a design support system? Such a decision is particularly important from an engineering point of view, since it will determine to an important extent the knowledge to be represented in the system. In fact, as as a Knowledge Level theory of Design, the
KL DVE1
does not
6.2. ODEA's rationale: a Theory-Based System
106
impose restrictions nor does it establish any rule regarding the development of a system. In Section 6.2.1 we have briey described the dierent kinds of knowledge the
KL DVE1
identies as taking part in any kind of designing.
An important
part of the knowledge identied belongs to the designer and is made explicit in the course of designing. For instance, requirements are identied, developed and rened as the design process evolves through an iterative process where the designer plays a central role. However it is also quite straight forward to note that in some cases, because of the specic domain, but more often because of the knowledge the client holds, requirements formation might be already partially performed by the client. This kind of situation is quite common in designing tasks that have being performed several times with the same set of requirements: Routine Design as is events organisation. In other words, dierent users might have dierent expectations for a system performing the very same activity. Moreover, cultural, social or even political context, implicitly drive the designer towards a particular solution instead of another.
Hence, it is particularly important to provide a smooth integration between the system and the user. In order to effectively support PTSS staff in a non-intrusive manner, ODEA does not perform automatic design; instead, it supports an event designer in the process of designing. ODEA is therefore a Design Support System, not an automated design system that attempts to achieve a complete design automatically. In an approach similar to that of the Edinburgh Designer System (Logan et al., 1991), the support provided by our design support system is twofold: tactical and strategic. The tactical support is concerned with solving the tedious and complex problems that might arise during the design activity (e.g., configuration problems, services booking, etc.). The strategic support is mainly concerned with the organisation of the design process: it keeps track of the current state of the design, presents alternative solutions, supports refining them, guides the designer during the requirements operationalisation, etc. In summary, the main role of ODEA is to assist the user in the puzzle-making and puzzle-solving process of designing an event. Its main concern is thus to effectively support event designers in a smooth exploration of the problem-making and problem-solving possibilities by guiding the designing process, assisting the user in the specification of configuration problems, solving these problems, contacting external providers, and maintaining a coherent state of the overall design.
Even though the KL DVE1 is not a cognitive theory of designing, its development has inevitably been informed by how humans usually undertake different sorts of designing tasks. As a consequence, developing a design support system based on the theoretical and practical understanding of designing provided by the KL DVE1 appears quite natural and reasonable. This is not to say, however, that the KL DVE1 can determine all the aspects of a system in this respect. Developing a design support system also requires a good understanding of how to support users through software applications. In fact, one has to bear in mind that keeping the user actively and effectively involved during the design is as important as automating tasks; after all, design support systems aim to support designers in the task of designing. An excessively ambitious approach could lead to a user-unfriendly system that performs the majority of the design, leaving too few aspects to the user's will. Conversely, providing too little support to the user is obviously not desirable either.
Once the scope of the system has been established, the developer can determine which of the kinds of knowledge the KL DVE1 identifies should be modelled, which would be beneficial to include, and which one should not be tempted to acquire and represent in the design support system. In our Online Design of Events Application, this has been determined empirically, given the lack of any well-established theory or methodology for supporting users through software systems. Among the different types of knowledge identified by the KL DVE1, we have determined that ODEA needs to embed the following:
• Knowledge about events organisation (K.gc).
• Knowledge about the design practice centred on the PTSS environment (K.gc).
• Knowledge of requirements formation (K.ex).
• Knowledge for specifying configuration problems (K.ex).
• Knowledge for solving configuration problems (K.ex).
• Knowledge for maintaining the design rationale (K.dk).
Additionally, the system needs to manipulate the knowledge generated during the design process, that is: the current requirements, the problems defined so far, the solutions to these problems, and the current state of the design solution.
6.2.4 The Events Design Process Model

Derived from the KL DVE1, based on the characterisation of events organisation as a kind of Routine Configuration Design, and given that ODEA is conceived as a design support system, we have created ODEA's Events Design Process Model, shown in Figure 6.3. In ODEA, events organisation is therefore understood as an iterative, incremental and opportunistic process that explores the possible event designs in order to satisfy the user's needs and desires, through the generation, resolution and evaluation of configuration problems. Starting with the needs and desires of a client, the Events Design Process Model supports the identification of a set of requirements that, when satisfied, result in an event design acceptable to the client. It is worth noting in this regard that, since events organisation is understood as Routine Design, requirements formation will typically be (almost) straightforward. The system can therefore present a complete and stable user interface to support this task. On the basis of the requirements obtained, the Events Design Process Model prescribes the development of well-formed problem specifications, which are essentially attempts to operationalise some or all of the requirements previously identified.

Figure 6.3: ODEA's Design Process Model.

Given that event design is understood as Configuration Design, these problem specifications are configuration problems, which are then solved. The solutions obtained are evaluated with respect to the requirements. On account of the decisions we previously introduced in order to better support the user in the designing task, the majority of the evaluation phase is left to the designer, who can then integrate into the process his or her preferences, social styles, cultural influences, etc. The outcome of this evaluation then leads either to the identification of further or modified requirements, to a different operationalisation, or to a final design. As a Knowledge Level theory, the KL DVE1 defines the necessary and sufficient kinds of knowledge needed and generated in any kind of designing, together with the roles each kind of knowledge plays and the relationships between them. Figure 6.3 also depicts the different types of knowledge that shall be embedded in ODEA, as previously decided. It is however important to note that the KL DVE1, as a Knowledge Level theory, does not make any commitment about where this knowledge should reside nor how it should be represented. In fact, as we will see in the next sections, the knowledge identified is split across different modules, as promoted and required by our Opportunistic Reasoning Platform.
6.3 Design and Implementation of ODEA

The Online Design of Events Application is a Web-based design support system that implements the Events Design Process Model we have just described, in order to support an event designer in the iterative, incremental and opportunistic exploration of the possible event designs. This process should, however, smoothly enable a mixed initiative between the user and the system, with the system performing the tedious tasks on behalf of the designer while still allowing him or her to guide the overall design. Developing the system based on the Events Design Process Model requires supporting an opportunistic and knowledge-intensive process. Hence, even though an opportunistic reasoning architecture might not be strictly necessary, it is certainly desirable. Moreover, it is worth bearing in mind previous successes in using the Blackboard Architecture for developing design support systems, notably the Edinburgh Designer System (Logan et al., 1991; Buck et al., 1991). Further, as we introduced in Chapter 4, the Blackboard Model of reasoning is particularly well suited to this knowledge-intensive task and is perhaps the most widely applied reasoning model in the history of AI in design. Last, but certainly not least, events organisation is understood in ODEA as a process involving external agents spread across the Web, which need to be seamlessly integrated into the overall designing process.
We have therefore built ODEA on top of our Opportunistic Reasoning Platform. Figure 6.4 depicts how the design support system is supported by our reasoning platform. The figure shows the main components of the system that fulfil the various roles identified by the Opportunistic Reasoning Platform in Chapter 5: the different Knowledge Sources that have been developed, the mechanism applied to integrate the Knowledge Base, the inference engines the Knowledge Sources use, and the external agents that have been incorporated into ODEA by means of Web Services and ontologies. In the remainder of this section we describe these components in detail, paying particular attention first to the Knowledge Sources developed and how they contribute to supporting the event designer. Finally, we tackle technical implementation details and the issues that had to be faced.
Figure 6.4: Architecture of ODEA based on our Opportunistic Reasoning Platform.
6.3.1 Knowledge Sources

Organising events requires knowledge about what an event is, as well as information about the different kinds of events. Moreover, a good event design depends upon an appropriate organisation of the participants. Besides, we have previously observed that an event may involve contracting services such as accommodation, projection or catering. In Chapter 5 we introduced the use of ontologies for representing sharable conceptualisations of some domain of interest as one of the main decisions adopted in our Opportunistic Reasoning Platform. The Online Design of Events Application makes use of three ontologies (Aguado et al., 2003):

Events Types Ontology The Events Types Ontology defines the different types of events, such as Meeting, Conference or Exhibition, together with some general information such as the dates, the attendants or the lecturer (see Figure 6.5). This ontology has been defined in generic terms in order to favour its reusability, yet specifically enough to be useful for ODEA in supporting PTSS event designing. It is therefore intended to be reusable in other systems for describing events, without carrying any burden associated with their organisation.
Events Equipment Ontology The Events Equipment Ontology defines the resources required for organising an event (see Figure 6.6). It is therefore somewhat more oriented towards ODEA, even though its conceptualisation could to some extent be reused in other applications. This ontology characterises equipment as either furniture, such as tables and chairs, or technical equipment, which comprises computers, video projectors, screens, etc. The different kinds of equipment are further detailed in terms of their brand, model, price, size, etc. The equipment currently available in PTSS has been expressed in terms of this ontology and stored in the Knowledge Base so that it can be used at runtime by the design support system.
Figure 6.5: Event Types Ontology.

Hotel Accommodation Ontology The Hotel Accommodation Ontology defines the concepts regarding the accommodation and complementary services usually offered by hotels (see Figure 6.7). It defines rooms, which are further characterised as single, double or suite, as well as the concept of a reservation and additional services such as room facilities (e.g., TV), the provision of meals (e.g., half board), etc. This ontology is intended to be used as a conceptualisation shared with hotel systems, and therefore supports ODEA in the task of checking and booking rooms automatically.

Ontologies pave the way for a better integration of the different experts composing the system by establishing a common language between them. Thus, ontologies provide a convenient solution to a typical difficulty developers of Blackboard Systems have to face, concerning the representation of sharable yet expressive and extensible conceptual schemas (Pfleger & Hayes-Roth, 1997). However, in some cases there still remains the need to provide additional means by which the blackboard mechanism can work appropriately and efficiently. In fact, bearing in mind the very definition of the Blackboard Model, it is quite clear that systems where Knowledge Sources frequently modify the shared workspace (i.e., the blackboard) may suffer from a performance decrease caused by the need to propagate frequent changes to all the Knowledge Sources. Moreover, in the majority of cases these changes will only be relevant to a subset of the Knowledge Sources, which evidences an unnecessary waste of resources. In order to avoid wasting resources, our Opportunistic Reasoning Platform introduces a partitioning of the blackboard data structure.
Moreover, it supports the use of a hierarchical organisation of Knowledge Sources, which has been successfully used in a number of Blackboard Systems to smoothly support the information flow among the experts (see Chapter 5).
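The partitioning idea can be illustrated with a minimal Java sketch. This is not the platform's actual code; the class, method and partition names are hypothetical. The point is that a change posted to a partition is propagated only to the Knowledge Sources subscribed to that partition, rather than to every KS in the system.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionedBlackboard {
    /** A Knowledge Source only needs to react to changes it subscribed to. */
    interface KnowledgeSource { void notifyChange(String partition, String entry); }

    private final Map<String, List<String>> partitions = new HashMap<>();
    private final Map<String, List<KnowledgeSource>> subscribers = new HashMap<>();

    void subscribe(String partition, KnowledgeSource ks) {
        subscribers.computeIfAbsent(partition, k -> new ArrayList<>()).add(ks);
    }

    void post(String partition, String entry) {
        partitions.computeIfAbsent(partition, k -> new ArrayList<>()).add(entry);
        // Only the KSs subscribed to this partition are notified.
        for (KnowledgeSource ks : subscribers.getOrDefault(partition, List.of()))
            ks.notifyChange(partition, entry);
    }

    public static void main(String[] args) {
        PartitionedBlackboard bb = new PartitionedBlackboard();
        List<String> seen = new ArrayList<>();
        // e.g., a KS one level above the Layout KS watches the layout partition.
        bb.subscribe("layout", (p, e) -> seen.add(p + ": " + e));
        bb.post("layout", "O-Layout, 1 table of type 1");
        bb.post("accommodation", "5 double rooms");   // no subscriber: not propagated
        System.out.println(seen);                     // only the layout change arrives
    }
}
```

Under this scheme the cost of a change is proportional to the number of interested Knowledge Sources, not to the total number of KSs.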
In ODEA, we have profited from this capability and have established a hierarchy among the Knowledge Sources that provides different levels of abstraction over the event design process (see Figure 6.8). The hierarchy establishes that the Knowledge Sources of a particular level can access the data being handled by the Knowledge Sources situated directly below. For instance, the Events KS is aware of the work being done by the Projection, Accommodation and Layout KSs. This mechanism supports a smooth and efficient flow of information between the levels of abstraction, in a way similar to the techniques applied when constructing compilers. By means of five Knowledge Sources, the Online Design of Events Application covers the general aspects of events designing and the organisation of people, supports establishing a projection service, and allows hotel rooms to be booked for the participants. In the next sections we present these Knowledge Sources together with the role they play in the overall event design support system. First we describe the Events Knowledge Source, which holds general knowledge about events. Next, we present the one that helps in the task of preparing a projection service. Afterwards, we describe the Knowledge Source involved in the process of booking hotel rooms for the participants of a meeting. Finally, we present the Knowledge Sources in charge of people organisation.

Figure 6.6: Events Equipment Ontology.
Events Knowledge Source

The Events Knowledge Source encapsulates general expertise about events and their organisation. In particular, the different kinds of events have been modelled together with their specific characteristics. These characteristics are derived from the study previously performed and presented in (Cendoya et al., 2003), which is based on historical data about the events held in PTSS over the last three years. It is therefore important to note that, even though some data has been generalised in order to favour reusability, the majority of the information is influenced by the PTSS context (see K.gc in Section 6.2.1). For example, a meeting is characterised as an event for supporting discussion. Participants are therefore organised appropriately, usually in an O-Layout, or a U-Layout in the cases where some sort of projection takes place, in order to facilitate discussion. Obviously, no meeting can take place with fewer than two participants; likewise, meetings with more than fifty participants are unlikely to be successful. The usual number of participants has been determined to vary between five and forty. In addition to this general information, resource usage has also been analysed and classified into possible and recommended, based on the type of event. For instance, it is usual in this kind of event for participants to use computers and flipcharts, which in turn impose further requirements on electricity sockets, internet connectivity, etc.

Figure 6.7: Hotel Accommodation Ontology.

Besides encapsulating general knowledge about events designing, the Events KS plays an integrating role.
Even though the designing expertise is partitioned into several Knowledge Sources, relationships between them will always remain and need to be addressed. For instance, once the people have been organised we must check whether the organisation is compatible with the projection service, since having a projection service while the participants are organised in an O-Layout would prevent some of them from seeing the projections. In ODEA, the Events KS is the top-level KS (see Figure 6.8) and holds the knowledge concerning the integration of the solutions generated by the Layout, Projection and Accommodation KSs. It is worth noting that this kind of integration is effectively and efficiently supported by the shared conceptualisations (the ontologies) and by the capability for creating hierarchies offered by our Opportunistic Reasoning Platform.
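The O-Layout/projection incompatibility just mentioned is the kind of integration rule the Events KS applies. A minimal Java sketch of such a rule (illustrative only; the names and the rule set are not taken from ODEA's actual code):

```java
public class EventsIntegrationSketch {
    /** The four layouts considered in ODEA. */
    enum Layout { O, U, CLASSROOM, THEATRE }

    /** Returns true iff the chosen layout is compatible with the
     *  presence (or absence) of a projection service. */
    static boolean compatible(Layout layout, boolean projection) {
        if (!projection) return true;   // no projection: any layout is fine
        // An O-Layout blocks the line of sight of some participants.
        return layout != Layout.O;
    }

    public static void main(String[] args) {
        System.out.println(compatible(Layout.O, true));   // false
        System.out.println(compatible(Layout.U, true));   // true
    }
}
```

In the real system such checks operate over the shared ontological descriptions on the blackboard rather than over enum values.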
Figure 6.8: ODEA's Knowledge Sources.

Projection Knowledge Source

Thanks to the Projection KS, the current implementation of ODEA supports the event designer in determining whether a particular projection service can be set up and how it could be furnished. This KS is aware of the internal details associated with offering projection solutions during an event, and has access to the available hardware together with its characteristics. The Projection KS offers the event designer the possibility to browse, query, retrieve and reserve projection hardware for the event being designed. To that end, the Projection KS provides a user interface based upon the concepts and attributes defined in the Events Equipment Ontology presented previously. Through this user interface, the event designer can retrieve the available hardware based on any combination of criteria over the equipment attributes, in ontological terms which are presumably close to the designer's understanding. At runtime, based on the selected criteria, the Projection KS generates the appropriate query, obtains the suitable hardware from the Knowledge Base, and presents it to the designer, who can then choose his or her preferred option. This Knowledge Source is perhaps the one that best shows the benefits we obtain from supporting user interaction through every Knowledge Source that might require it, as dictated by our Opportunistic Reasoning Platform. In more traditional implementations of Blackboard Architectures, the whole user interface of ODEA would have been delegated to a particular Knowledge Source, making it more complicated, or even impossible because of the necessary partitioning of the expertise, to integrate all the knowledge required (e.g., events, projection, layout, etc.) in a single, complete, powerful and still reasonable user interface. Establishing Knowledge Sources as the entry point for the event designer has supported the creation of a semantics-based interface which benefits from the encapsulated expertise in order to better adapt to the user. Moreover, because the interaction with the event designer is already performed in terms of the Events Equipment Ontology, the input can directly take part in further reasoning processes.
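The criteria-to-query step can be sketched as follows. This Java fragment is a simplification under stated assumptions: equipment items are shown as attribute maps and the Knowledge Base as an in-memory list, whereas ODEA queries an ontology-backed Knowledge Base; all attribute names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class EquipmentQuerySketch {
    /** Return every item of the knowledge base whose attributes match
     *  all of the designer's selected criteria (a conjunctive query). */
    static List<Map<String, Object>> retrieve(List<Map<String, Object>> kb,
                                              Map<String, Object> criteria) {
        List<Map<String, Object>> hits = new ArrayList<>();
        for (Map<String, Object> item : kb)
            if (criteria.entrySet().stream()
                    .allMatch(c -> c.getValue().equals(item.get(c.getKey()))))
                hits.add(item);
        return hits;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> kb = List.of(
            Map.of("type", "videoprojector", "brand", "Acme", "lumens", 2000),
            Map.of("type", "screen", "brand", "Acme"));
        // Any combination of criteria over the equipment attributes can be used.
        System.out.println(retrieve(kb, Map.of("type", "videoprojector")).size());
    }
}
```

The designer never sees the query itself: the interface is expressed in the ontology's vocabulary, and the KS translates the selections into the retrieval above.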
This particular Knowledge Source therefore illustrates one of the main aims of the Semantic Web: better enabling computers and people to work in cooperation (Berners-Lee et al., 2001).
Accommodation Knowledge Source

Organising events often implies reserving hotel rooms for some of the participants. We have therefore chosen hotel reservation as the example for testing the suitability of our platform as a means to support reasoning over the Web in collaboration with external agents. Moreover, this Knowledge Source illustrates how making effective use of Semantic Web and Web Services technologies can offer new collaborative eBusiness opportunities when they are coupled with the appropriate reasoning software infrastructure. In the Online Design of Events Application we have integrated such a capability via the Accommodation KS. This KS encapsulates the insights associated with hotel room reservations. It is aware of the different kinds of rooms (single room, double room) and the complementary services usually offered by hotels. This conceptualisation of the hotel reservation domain (the Hotel Accommodation Ontology) is shared with a remote hotel reservation system developed by Jessica Aguado in the OBELIX project. During an event design, Web Services technologies (i.e., WSDL and SOAP) support effective communication with the remote hotel service, and the shared conceptualisation (i.e., the ontology) raises the interaction to a semantic level, thus supporting the integration of the hotel service into the reasoning processes and paving the way for the further integration of other hotel systems. At runtime, the designer can automatically contact a hotel and check the availability of rooms.
Based on this information, a high-level configuration problem is generated in order to determine whether the hotel has enough rooms to accommodate the participants. Such a step is currently almost unnecessary, since the calculation is far from complex. However, the future inclusion of additional hotels would certainly bring the need to solve configuration problems in order to find the cheapest option(s), a solution that minimises the number of hotels used, or one that maximises comfort; hence the interest in supporting complex reasoning processes on the Web, as our Opportunistic Reasoning Platform does. Finally, ODEA allows the desired hotel rooms to be booked automatically in the fictitious hotel.
Layout Knowledge Sources

We have previously noted the importance of an appropriate organisation of the people during a meeting. In general, people organisation is a crucial aspect of any kind of event. ODEA therefore supports an event designer in the task of appropriately setting up the people, that is, setting up tables in a suitable way so that people can be arranged as desired. We currently consider four different layouts: O, U, Class-room and Theatre. The first layout organises the tables so that the people form a rectangle; it is suitable for relatively small events and is not compatible with a projection service. The second is very similar but leaves one of the borders free, so that it can be used for projection. The Class-room layout sets the people in different rows, just as in traditional classrooms; this is often the preferred layout for Conferences and Lectures. Finally, the Theatre layout is very similar but better suited to a large number of participants, given that the rows are staggered. While in some cases people organisation is a trivial task, in many situations this is far from being the case. Let us present a simple yet representative scenario to better show the complexity of the task. PTSS has three different kinds of tables (see Figure 6.9). The first kind is a square table where four people can be placed. The second is a rectangular one with six places (two along each side and one at each end). The last kind is a square table which seats two people per side.
Figure 6.9: PTSS's different types of tables.

Let us imagine a very simple problem which requires us to organise 4 people in an O-Layout, having 5 tables of each kind available. There are several solutions, the majority involving sets of tables; however, for the sake of clarity, we will describe here only the solutions that minimise the number of tables used (one in this case). It is clear that a possible solution would be to use a table of the first kind, see Figure 6.10(a). Moreover, if we choose the second type of table, four solutions are possible¹, as shown in Figure 6.10(b). Finally, using the third kind of table allows us to organise the people in 16 different arrangements, all of them fulfilling the requirements, see Figure 6.10(c). For such a simple scenario, there are 21 possible solutions that minimise the number of tables used. If we increase the number of people involved, the number of possible combinations soon becomes very large. The previous scenario clearly shows the need for an effective integration of configuration problem-solving (Schreiber & Wielinga, 1997) in ODEA to support people organisation, but also the requirement to appropriately handle and direct its execution. For instance, it is very unlikely that any of the solutions involving a table of type 3 will be convenient if the attendants are not using computers or any other resource requiring more space. This is known a
priori, and therefore the calculation could be avoided. In the current example, preventing the calculation does not lead to a great optimisation; however, the more people and different tables involved, the more we benefit from restricting the solution space.

¹ In fact, there exist more solutions if we consider the possibility of placing people in the middle of the long sides. We will not take these solutions into account, since the tool we used for solving these problems does not support such a feature (doing so would require support for merging ports).

Figure 6.10: Possible solutions per type of table (black circles represent people): (a) tables of type 1; (b) tables of type 2; (c) tables of type 3.

In order to support organising people, we have integrated in ODEA a generic configuration problem-solving tool developed by Labein in the course of the OBELIX project (Altuna et al., 2003, 2004). The configuration tool is a Problem-Solving Method implementation (Studer et al., 1998) that encapsulates the expertise for solving configuration problems independently from the application domain. This expertise includes the sequencing of the different inference steps to be performed, as well as the different kinds of knowledge required during the problem-solving process. The configuration tool is informed by the Configuration Ontologies, which establish the concepts and relationships required for specifying and solving configuration problems. There are three Configuration Ontologies:
Components Ontology The Components Ontology represents the different concepts and relationships necessary to define the static knowledge of a specific domain. This knowledge is expressed in terms of components having a set of ports and described by additional properties. Components can be connected to other components via associations, or via connections between compatible ports.

Constraints Ontology This ontology supports defining the constraints that must be applied over the components.

Problem Specification Ontology This ontology allows applications to define concrete problem specifications, with some optimality criteria if necessary.

The configuration tool used in ODEA differentiates two levels of detail for configuration problems. High-level configuration problems are only concerned with determining whether several elements can be put together; these are known as association or aggregation problems.
Detail-level configuration problems also deal with how these elements are put together, i.e., how they are connected; these are known as connection problems. In order to use the configuration tool over a particular domain, a developer has essentially three possible approaches. The first one involves adapting the configuration tool to the particular domain of interest. The second one requires adapting the domain description to fit the configuration tool. Finally, the third one is based on adding intermediate elements to bridge the differences between the configuration tool and the domain of interest. Both the first and the second approaches introduce reusability restrictions: adapting the configuration tool to every domain of interest would remove all its genericity, and thus its raison d'être, while modifying the domain model to fit the configuration tool would certainly prevent reusing the domain representation in other tasks. In ODEA we have used the third approach, which has been successfully applied in other research projects, see (Fensel & Benjamins, 1998; Park et al., 1997; Crubézy & Musen, 2004) to name a few. In a way very similar to what is described in (Gennari et al., 1994, 1998), ODEA integrates the configuration tool by mapping the Events Equipment Ontology to the Configuration Ontologies.
Thus, we have described the tables according to the Components Ontology, that is, we have defined their ports with their respective connectivities, and we have included additional parameters such as the tables' dimensions. This definition has been stored in the so-called Events Components Ontology. Moreover, we have defined constraints according to the Constraints Ontology, in order to organise the tables in the different layouts or to limit the number of persons per table. In this case the definitions are represented in the Events Constraints Ontology.
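The component-and-port view promoted by the Components Ontology can be sketched in a few lines of Java. This is an illustrative simplification, not the ontology's actual schema: class names, the `capacity` attribute, and the compatibility rule (same port kind, free capacity) are assumptions made for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

public class ComponentsSketch {
    /** A port with a kind and a connectivity (how many links it accepts). */
    static class Port {
        final String kind;
        final int capacity;
        int used = 0;
        Port(String kind, int capacity) { this.kind = kind; this.capacity = capacity; }
        boolean free() { return used < capacity; }
    }

    /** A component exposes a set of ports and additional properties. */
    static class Component {
        final String name;
        final List<Port> ports = new ArrayList<>();
        Component(String name) { this.name = name; }
    }

    /** Connect two ports if both are free and of a compatible kind. */
    static boolean connect(Port a, Port b) {
        if (!a.free() || !b.free() || !a.kind.equals(b.kind)) return false;
        a.used++;
        b.used++;
        return true;
    }

    public static void main(String[] args) {
        Component t1 = new Component("table-1"), t2 = new Component("table-2");
        Port side1 = new Port("table-side", 1), side2 = new Port("table-side", 1);
        t1.ports.add(side1);
        t2.ports.add(side2);
        System.out.println(connect(side1, side2));  // true: compatible and free
        System.out.println(connect(side1, side2));  // false: both ports already used
    }
}
```

Constraints over such components (the Events Constraints Ontology's role) would then prune which of the syntactically possible connections are admissible for a given layout.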
Figure 6.11: Conguration Tool Integration.
Together with these definitions, an intermediate module, which can be seen as an adapter in the terms of (Fensel & Benjamins, 1998), provides the means for interacting with the configuration tool through simple Java methods. Problem specifications are generated via the Problem Specification API, which allows application developers to generate problem specifications at runtime. In the same way, applications can browse the solutions by using the Problem Solution API (see Figure 6.11). Thus, at runtime the Layout Knowledge Sources may generate a problem specification to be solved by the configuration tool. Once the tool has returned the solutions (if any), the Knowledge Source can interpret the results and update the blackboard accordingly.

Configuration problems are characterised by a combinatorial explosion of the solution space. This has led AI researchers to develop systems that apply knowledge-based reasoning to solve these problems; the configuration tool described herein is precisely one such attempt. There remains, however, knowledge that can help restrict the solution space but which pertains to the designer. This knowledge mainly concerns preferences, for which there is often no explicit rationale and which are in general hard to represent formally. Adequate support for event designers must therefore smoothly support the elicitation of this knowledge at runtime. To this end, configuration problem-solving in ODEA is performed in two different phases which, once finished, are subject to further review by the event designer, who can then smoothly apply his or her own preference criteria (see the evaluation phase of the KL DVE1).
In order to smoothly integrate the event designer, as well as to reduce the computational resources required for solving the configuration problems, we have decomposed the table organisation process into two different steps. The first step is in charge of determining whether or not it is possible to find a solution with the available components. This is achieved through the resolution of a high-level configuration problem whose outcome is the set of possible associations. The second step consists of solving the detail-level configuration problem for the solutions retrieved in the first step; it thus determines how the required tables must be connected. The Online Design of Events Application achieves this decomposition by means of two Knowledge Sources: the Layout KS (High Level) and the Layout KS (Detail Level) (see Figure 6.11). The former is in charge of generating and interpreting high-level configuration problems for organising the tables. The latter takes care of refining particular high-level solutions by constructing and interpreting detail-level configuration problems. At runtime, the system first determines the possible associations of tables that could fulfil the user requirements. The solutions are then interpreted and presented to the event designer. As determined by the Events Design Process Model (see Section 6.2.4), the evaluation of these solutions can lead to further detailing or to changing the problem specification. Whenever further specification is desired, the Layout KS (Detail Level) generates a detail-level configuration problem, which is again solved and presented to the user. The event designer is then given the possibility to select the solution that he or she prefers, and to establish it as the table organisation for the event being designed. In this section we have presented a high-level view of the Online Design of Events Application.
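The two-step decomposition can be illustrated with a small Java sketch of the first (association) phase, using the table scenario described earlier. This is not the Labein tool's algorithm: the exhaustive enumeration, the restriction to two table types, and the capacity figures are simplifying assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class TwoPhaseConfigSketch {
    /** Phase 1 (association problem): which combinations of tables can
     *  seat everyone? Returns table counts {a, b} that seat `people`,
     *  using at most avail[i] tables of type i with capacity cap[i]. */
    static List<int[]> associations(int people, int[] cap, int[] avail) {
        List<int[]> ok = new ArrayList<>();
        for (int a = 0; a <= avail[0]; a++)
            for (int b = 0; b <= avail[1]; b++)
                if (a * cap[0] + b * cap[1] >= people && a + b > 0)
                    ok.add(new int[]{a, b});
        return ok;
    }

    public static void main(String[] args) {
        // 4 people; type-1 tables seat 4, type-2 tables seat 6; 5 of each.
        List<int[]> sols = associations(4, new int[]{4, 6}, new int[]{5, 5});
        // The smallest associations use a single table, as in the scenario.
        System.out.println(sols.stream().anyMatch(s -> s[0] + s[1] == 1));  // true
        // Phase 2 (not shown) would refine each retained association into
        // concrete port-to-port connections between the chosen tables.
    }
}
```

Only the associations the designer retains after reviewing phase 1 are subjected to the far more expensive detail-level (connection) phase, which is exactly the saving the decomposition aims for.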
We have given a brief overview of the overall architecture of the system, which builds upon our Opportunistic Reasoning Platform, and we have presented the main reasoning components that take part in the design support system. In the next section we move on to more technical aspects of the implementation of ODEA, paying special attention to the mechanisms that have supported the knowledge-based reasoning processes, and to the integration with external agents over the Web.
6.3.2 Implementation Aspects

So far, we have described ODEA from a high-level point of view, leaving aside the technical details of its implementation. In this section we enter into more technical matters by describing, first of all, the main implementation issues derived from the need to provide a Web-based user interface. Secondly, we explain how we have integrated the Knowledge Base and the additional inferencing facilities required for supporting the event designer. Finally, we briefly present how the external systems have been integrated in ODEA in order to support the collaboration with external providers.
A Web-based System

The Online Design of Events Application is a Web-based events design support system that allows event organisers to use their favourite web browser for designing events. To this end, ODEA has been built on top of our Opportunistic Reasoning Platform and has been deployed on a J2EE Application Server (Johnson, 2003), see Figure 6.12. In other words, the reasoning machinery has been encapsulated in a web module that provides an HTML (Raggett et al., 1999) user interface.
Figure 6.12: ODEA deployed on a J2EE application server.

In order to obtain a homogeneous user interface, we have applied the Composite View and Front Controller design patterns (Alur et al., 2003), see Figure 6.12. The Composite View bundles all the Knowledge Sources' user interfaces into a single one. At runtime, the different components of the overall interface are generated by the specific Knowledge Sources and are bundled together. The Front Controller design pattern provides the initial point of contact for handling users' requests. This Front Controller centralises the actions common to all the views, such as session management or access control, and provides a common entry point for every user interaction, which is later on delegated to the appropriate Knowledge Source. In order to unify the user interface concerning the tables organisation, we applied the same design patterns to the Layout KSs. Perhaps the most complex process regarding the encapsulation of the design support system in a web module has been the generation of HTML views of the data, and the handling of the user interaction.
In fact, it is often overlooked that HTML is not well suited for building rich user interfaces. Adding dynamic elements requires using scripting languages such as JavaScript (Raggett et al., 1999), and the development process is tedious and error-prone. Moreover, the communication between Web servers and browsers, which is based on the Client-Server model mainly for scalability reasons (Fielding et al., 1999), makes it difficult to obtain highly dynamic user interfaces able to react upon changes on the Web server. In the Online Design of Events Application we have limited the impact of these issues by integrating JavaScript code in the HTML user interface. The JavaScript elements provide the means for showing graphics-based calendars and dynamic help information, for validating data types, and even for encoding specific parts of the expertise of the Knowledge Sources. The most complete example in this respect is the Events KS. It features functions to ensure that the start date precedes the end date of the event; functions that inform the designer about the different types of events as modelled in the Event Types Ontology; functions that ensure the required information has been provided in the appropriate data types; and a function to ensure the selected event type and the number of participants are consistent with our characterisation. Therefore, despite being based on an HTML user interface, the system provides a dynamic user interface and minimises the Web server overhead through simple scripts executed on the client side. It is worth noting that the inclusion of such rules (i.e., part of the expertise of the Knowledge Sources) in JavaScript embedded into the HTML user interface is enabled by the fact that our Opportunistic Reasoning Platform externalises the Knowledge Sources, which are in charge of their own user interface and can thus apply their expertise to improve it (see Chapter 5 and Figure 6.12).
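The client-side checks of the Events KS described above might be sketched as follows. This is an illustrative reconstruction, not ODEA's actual source: the function names, form field names and participant thresholds are invented for the example.

```javascript
// Illustrative sketch of the client-side validation functions described
// above. All names and thresholds are hypothetical.

// Ensure the start date precedes the end date of the event.
function datesAreCoherent(startDate, endDate) {
  return new Date(startDate) < new Date(endDate);
}

// Ensure a required field holds a positive integer (e.g., participants).
function isPositiveInteger(value) {
  return /^[1-9][0-9]*$/.test(String(value).trim());
}

// Ensure the selected event type is consistent with the number of
// participants (the upper bounds here are made up for illustration).
const MAX_PARTICIPANTS = { meeting: 50, workshop: 100, conference: 10000 };
function eventTypeFitsAttendance(eventType, participants) {
  const max = MAX_PARTICIPANTS[eventType];
  return max !== undefined && participants <= max;
}

// Validate the whole agenda form before submission.
function validateAgendaForm(form) {
  return datesAreCoherent(form.startDate, form.endDate)
    && isPositiveInteger(form.participants)
    && eventTypeFitsAttendance(form.eventType, Number(form.participants));
}
```

Running such checks in the browser, as the text notes, keeps trivial mistakes from ever reaching the Web server.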
Knowledge Integration in ODEA

Our Opportunistic Reasoning Platform identifies the need to manipulate ontologies represented in Web standard formats like those presented in Chapter 2, and advocates their use as a means for defining conceptual schemas to be shared among the Knowledge Sources and the other agents involved. ODEA is no exception in this respect and uses ontologies represented in RDF/RDFS, as decided by the consortium of the OBELIX project. In order to manipulate the ontologies, as well as to support the persistence and efficient manipulation of the Knowledge Base, ODEA integrates a Sesame server (Broekstra et al., 2002), which is deployed on top of a MySQL Database (see Chapter 3 for an overview of Sesame's features). Sesame provides ODEA with outstanding support for efficiently storing, querying and reasoning over RDF and RDF Schema. At runtime, the Knowledge Sources generate RQL queries (Karvounarakis et al., 2002) for retrieving the required information. For instance, the Events KS generates the appropriate query for retrieving the different event types, and the Projection KS takes the user criteria and constructs the corresponding RQL query for retrieving from the Knowledge Base the available projectors that meet those criteria. The instances stored in the Knowledge Base are retrieved at runtime by the Knowledge Sources and are automatically mapped into Java objects. To this end, at design time, a set of Java classes was automatically generated based on the ontologies defined.
An intermediate module, developed on top of the Sesame API, offers a set of commonly used methods for dealing with the Knowledge Base (e.g., obtaining all the instances of a particular concept, obtaining the subclasses of a concept, etc.). Additionally, it includes the mechanisms for transforming the instances retrieved into the appropriate Java instances by means of Java Reflection (Horstmann & Cornell, 2001b) and the BeanUtils component developed by the Apache Foundation (The Apache Software Foundation, 2002). As a result, each of the Knowledge Sources that take part in ODEA can manipulate the Knowledge Base efficiently without further complications or any burden regarding the serialisation format. In Chapter 2 we mentioned the suitability of ontologies for defining sharable and static conceptualisations of a domain of interest. We did, however, raise the need for applying further mechanisms in order to define how to solve real-world problems which cannot simply be reduced to query answering or classification. Mapping the instances retrieved from the Knowledge Base into Java instances simplifies their manipulation to an important extent, and additionally paves the way for integrating them into further reasoning processes.
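The spirit of this mapping step can be pictured with the following sketch, written in JavaScript for brevity; the actual module relies on Java Reflection and Apache BeanUtils, and the `Projector` class and property names below are invented for illustration, not taken from ODEA's ontologies.

```javascript
// Sketch of how Knowledge Base query results might be mapped onto typed
// objects. ODEA generates its Java classes from the ontologies at design
// time and populates them via Reflection / BeanUtils; this JavaScript
// analogue only illustrates the idea.

class Projector {
  constructor() {
    this.model = null;
    this.brightness = null; // lumens
    this.price = null;      // euros
  }
}

// Generic populate, in the spirit of BeanUtils.populate: copy each
// retrieved (property, value) pair onto the matching field of the bean,
// silently ignoring properties the class does not declare.
function populate(bean, properties) {
  for (const [name, value] of Object.entries(properties)) {
    if (Object.prototype.hasOwnProperty.call(bean, name)) {
      bean[name] = value;
    }
  }
  return bean;
}

// A row as it might come back from a query over the Knowledge Base.
const row = { model: "VT 46", brightness: 1600, price: 2500, rdfType: "ns:Projector" };
const projector = populate(new Projector(), row);
```

The point of the design is that Knowledge Sources never touch the RDF serialisation directly: they receive typed instances they can pass straight into further reasoning.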
In ODEA, each Knowledge Source integrates its own inference engine, Jess (see Chapter 3), and applies its specific inferencing rules to achieve its goals (i.e., to support the user in some part of the event design). The main motivations for choosing this particular rule engine over others have been its maturity, our own previous experience with CLIPS, whose language is largely compatible with Jess's scripting language, and its capability to directly map its workspace to external Java instances. Thus, any change made by a rule execution is directly reflected on the Java instance held on the Blackboard and, inversely, any modification made directly on the Blackboard is automatically reflected in the rule engine's workspace. The rules are applied at runtime over the Java instances obtained from the Knowledge Base and therefore complement the expressivity of ontologies while maintaining their shareability. Even though we have used just the Jess inference engine, mainly for practical reasons (i.e., getting used to various engines requires much time), this does not rule out the possibility of integrating diverse engines if required. In fact, the capability of integrating different reasoning systems is one of the most appealing features of Blackboard Systems (see Chapter 5). Our Opportunistic Reasoning Platform also exhibits this feature, as we have tested through the integration of the configuration tool.
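The interplay between rules and blackboard objects can be caricatured as follows. This is a minimal forward-chaining loop in JavaScript, not the actual mechanism: ODEA delegates this to Jess, whose working memory is mapped onto the Java instances held on the Blackboard. The rule shown encodes the O-Layout constraint discussed later in the illustration; the object and field names are invented.

```javascript
// Minimal sketch of rule application over shared blackboard objects.
// A rule is just a condition/action pair, and rules fire until no rule
// is applicable any more (quiescence).

const rules = [
  {
    // If a projection has been arranged, forbid the O-Layout, since it
    // could prevent some participants from viewing the projection.
    name: "no-o-layout-with-projection",
    when: (bb) => bb.projectorSelected && !bb.forbiddenLayouts.includes("O-Layout"),
    then: (bb) => bb.forbiddenLayouts.push("O-Layout"),
  },
];

// Fire applicable rules until quiescence. Changes are made directly on
// the blackboard object, mirroring how changes in Jess's workspace are
// reflected on the Java instances held on the Blackboard, and vice versa.
function runRules(blackboard) {
  let fired = true;
  while (fired) {
    fired = false;
    for (const rule of rules) {
      if (rule.when(blackboard)) {
        rule.then(blackboard);
        fired = true;
      }
    }
  }
  return blackboard;
}

const blackboard = { projectorSelected: true, forbiddenLayouts: [] };
runRules(blackboard);
```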
Integration of External Systems

We previously introduced the need for solving configuration problems in ODEA. The expertise required to do so is provided by a configuration Problem Solving Method implementation developed by Labein in the course of the OBELIX project. The tool comes in two flavours: the first one as a typical software component offering a Java Application Programming Interface (API), and the second one encapsulated as a Web Service. Both solutions have been successfully tested in our design support system. We present here the Web Service version, as it is more relevant for testing the suitability of our reasoning platform for supporting reasoning over the Web. Currently, only two Knowledge Sources make use of the configuration tool, namely the Layout KSs. However, it would be possible to apply the PSM to other tasks required for organising events. For instance, if we were to determine the connections between projectors, computers and complementary hardware, we could apply the same approach followed for the tables organisation: the Projection KS could be split into two different Knowledge Sources (i.e., a High Level and a Detail Level one), and the generation of the corresponding Components and Constraints Ontologies for the projection domain would suffice. Similarly, the Accommodation KS could require solving configuration problems for appropriately accommodating the event participants in more complex scenarios involving several hotels. The Accommodation KS fulfils its role by contacting a (simulated) Hotel Web Service specifically developed for ODEA by Jessica Aguado. At runtime, the design support system contacts the service in order to check the availability of rooms, and eventually supports booking the desired facilities. The inclusion of the Web Service is intended to demonstrate the actual capabilities of the underlying Opportunistic Reasoning Platform to seamlessly integrate external systems over the Web. Therefore, it was not our aim to provide a fully-fledged solution that would contact a large variety of hotel Web Services or other service providers, even though this is, as we now know, perfectly possible. The seamless integration of the (Web Services-based) configuration tool and the Hotel Booking Service is ensured by Web Services technologies. To this end, we have used the Apache Axis component (The Apache Software Foundation, 2003), one of the best free SOAP implementations for Java. Thanks to this software component, the generation of WSDL descriptions of the services, as well as the creation of clients that use these services, was relatively straightforward. In fact, Apache Axis, in addition to providing the core software for executing remote Web Services, provides outstanding tools for the automated generation of WSDL descriptions and of client skeletons for using the services. In order to better portray the mechanisms implemented in ODEA, we finally present an example of the system at work.
We briefly depict a particular event design process, introducing the actions undertaken by the Knowledge Sources, the mechanisms activated, and how ODEA interacts with external agents.
6.4 Illustration

Returning to the case study of the OBELIX Kick-Off meeting, let us imagine Ainhoa organising the meeting using the Online Design of Events Application.
Ainhoa directs her preferred browser to the URL of ODEA. At this point the event design support system is initialised and generates a new empty blackboard where only the main Knowledge Source, the Events Knowledge Source, is active and ready to work. On receiving Ainhoa's request to start an event design, the Events Knowledge Source prepares a first form where the designer can set an initial agenda. In order to generate this form, the Knowledge Source is informed by the Event Types Ontology, which is stored in the Sesame Server (see Figure 6.13).
Figure 6.13: Deployment View of the Online Design of Events Application.

Ainhoa fills in this first form by taking advantage of the information provided by the system and submits the agenda, see Figure 6.14. As we already mentioned, the meeting took place on March 25th and 26th, 2002, in the Technology Park of San Sebastian, and there were about fifteen participants from the partners taking part in the project. Subsequently, the Events KS validates the form, informed by the Event Types Ontology and by additional rules that ensure the dates are coherent (e.g., the start date is before the end date) and that the kind of event is appropriate bearing in mind the number of attendants. Once the form is validated, the Online Design of Events Application generates four new Knowledge Sources, the Projection Knowledge Source, the Layout Knowledge Sources and the Accommodation Knowledge Source, which are responsible for supporting other aspects of the organisation of an event. At this point ODEA's blackboard therefore has four Knowledge Sources activated and ready to cooperate in order to facilitate the design of the event for Ainhoa. Aware of the need for projecting slides during the meeting, Ainhoa decides to first organise a projection facility. She therefore directs her browser to the Projection tab shown in ODEA's user interface. The system shows an interface that supports retrieving the hardware available on the basis of some criteria with respect to projectors' attributes, as informed by the Events Equipment Ontology.
Ainhoa requests high quality equipment. The Projection Knowledge Source builds the appropriate query, retrieves the available hardware from the Knowledge Base, and presents the list of available projectors together with their main characteristics (see Figure 6.15).

Figure 6.14: Event agenda form.

Having browsed the different projectors, Ainhoa chooses the VT 46 projector, as it fulfils the technical requirements and is not excessively expensive. Immediately after, the blackboard mechanisms are automatically activated and a constraint is generated in order to avoid organising people in an O-Layout. In fact, ODEA knows that seating the participants in an O-Layout could prevent some people from viewing the projection, and this is obviously not desirable. The next step Ainhoa decides to undertake is the organisation of the seats. Hence, she activates the people organisation view and obtains a simple form where the possible layouts are shown. Ainhoa notices that one of the layouts has been disabled, but there is an explanation below that keeps her informed about the system's internals. She chooses a U-Layout and submits the form. The request is processed by the Layout KS (High Level), which recognises the need for generating a configuration problem specification. Thus, it retrieves from the Knowledge Base the different types and quantities of tables available. Using the information available so far, that is, the number of participants, the selected layout and the different tables available, this Knowledge Source generates a high level configuration problem specification. The problem specification is next transformed from the Events Equipment Ontology terms into those of the Configuration Ontologies, and the transformed specification is sent to the remote configuration tool service. After some calculations, the configuration tool sends back the results, which are again transformed from one domain's terms into the other's (see Figure 6.11).

Figure 6.15: Projection form.

The Layout KS (High Level) browses the solutions obtained by the configuration tool and presents them to Ainhoa so that she can determine the one that best suits the event (see Figure 6.16). Ainhoa browses the different options and asks the system to detail a particular solution that looks suitable. Automatically, the Layout KS (Detail Level) prepares a detail level configuration problem specification and sends it to the configuration tool. The detailed results are finally retrieved and shown to Ainhoa who has, eventually, established an appropriate organisation of the tables for the meeting. Finally, given that many of the participants require a hotel room, Ainhoa activates the Accommodation view. There, thanks to the Accommodation KS, she is given the possibility to automatically check room availability in a hotel. She notices, however, that the system has already considered the possibility that hotel rooms would be required for the participants, and has contacted the Hotel Web Service in advance. The list of rooms available for the event dates appears on her screen (see Figure 6.17). Pleased to see that all the participants can be accommodated in this hotel, she sets the appropriate number and kind of rooms and requests to book them. Informed by the Hotel Accommodation Ontology, the Accommodation KS generates several booking requests. The remote Hotel Web Service, which uses the same ontology, handles the requests, checks its internal Database and finally confirms that the rooms are booked. Glad to see that a large part of the event has quickly been organised, Ainhoa closes her browser. In the previous scenario, we have chosen a particular design process for the sake of clarity. However, we could have started by booking the rooms or organising the seats, the event organisation could have required agenda modifications due to unavailable resources, etc. It is worth noting that such behaviour is smoothly and effectively supported by our Opportunistic Reasoning Platform.
Figure 6.16: Layout form.
6.5 Summary

In this chapter we have presented the Online Design of Events Application, a Web-based Design Support System that aims to support event organisers in the process of organising meetings, conferences, and the like. In this application, event organisation is understood as a process involving not only PTSS, but also other providers, such as audiovisual equipment providers, restaurants and hotels, in a complex value chain that is needed when hosting events. Therefore, collaboration between different entities plays an important role in ODEA, offering new opportunities for smart collaborative e-business. ODEA is based on the characterisation of events organisation as routine configuration design, and it is supported by a theoretical understanding of designing derived from the KLDE1. ODEA represents, to our knowledge, one of only a few applications of the KLDE1 to the development of a design system, see (Mandow & Pérez de la Cruz, 2000) for another example. Creating the design support system informed by the KLDE1 has benefited from a theoretical understanding that has led to a more effective and coherent system. This could be extended to other fields in Computer Science. In fact, as in other areas of Engineering, empirical knowledge is often sufficient for development, but a theoretical understanding helps to produce more effective, efficient, and acceptable results. Moreover, whenever any explanation or justification of the result is required, the need for a theoretical background is impossible to circumvent. The Online Design of Events Application builds upon our Opportunistic Reasoning Platform in order to effectively support event designers in preparing events, automating an important part of the tasks that need to be performed.
Figure 6.17: Accommodation form.
ODEA illustrates how our reasoning platform can support the development of Knowledge Based Services over the Web, through an infrastructural architecture that allows the many software components required to be seamlessly plugged in and coordinated in an opportunistic reasoning process over the Web.
Chapter 7

Xena: Online Music Rights Clearing Organisation
The work presented in this chapter was performed in collaboration with Ziv Baida, Jaap Gordijn and Hans Akkermans from the Free University of Amsterdam. Their main contribution consisted of the business analysis, based on a deliverable of the OBELIX project (Gordijn et al., 2004). We are, however, indebted to them as well for insightful discussions on more general aspects of this work. The majority of this chapter was published in (Pedrinaci et al., 2005a).
We here present Xena, a Web Services based music rights clearing organisation that can automatically clear the rights of music broadcast by Internet radio stations. The case study is motivated by the forthcoming liberalisation of the market for rights societies in the European Union, which will make the current infrastructures for clearing and repartitioning the rights for broadcasting music collapse. This work is driven by the need to bridge the existing gap between business analyses and their IT-based implementation in order to deliver profitable eServices over the Web. The rest of this chapter is organised as follows. In Section 7.2 we present the case study domain, followed by an analysis from a business perspective. This analysis is the business grounding of the system implementation, presented in Section 7.3. Finally, in Section 7.4 we present our conclusions and discuss important issues for future research on what we refer to as business-value driven Web Services.
7.1 Introduction

The rise of the Internet and related technologies has undeniably revolutionised our lives. Information now flows over the Web and reaches every corner of the world in ways that were inconceivable before the advent of the Internet. Communication has reached a global and multi-modal scope, and consequently the way of doing business has reached a new dimension. Nowadays, every company, no matter its size, economic power, or business sector, can have its own little showcase on the Web, ready to be visited from all over the world. The use of the Internet as a basis for doing business has already proven very fruitful, see for instance Google or Amazon, but the Web has also turned out to be a difficult market place, the so-called Dot-Com crash being proof enough. Relatively new initiatives such as Web Services and the Semantic Web have joined the scene and appear as promising candidates to lead a new revolution in the business community. The former allows the execution of remotely offered IT services over the Web, and the latter promises to bring communication to a semantic level, paving the way for a brand new family of intelligent applications for the Web. Researchers from various fields are currently working on integrating the two in order to make these new promises a reality. However, so far major efforts in the (Semantic) Web Services communities have paid little attention, if any, to the business aspects that shape any commercial activity, independently of the means it builds upon (Akkermans et al., 2004). Hence, in addition to the relatively immature stage of (Semantic) Web Services technologies, this limited view has inevitably introduced additional barriers to their wider adoption in real scenarios. During the OBELIX project, which represents the framework under which we have performed the great majority of the work presented here, we have attempted to shed some light on the business aspects surrounding electronic services (eServices).
Starting from a business analysis carried out during the OBELIX project (Gordijn et al., 2004), we present in this section Xena, an Online Music Rights Clearing Organisation, and its development process, through which we have attempted to bridge the gap between the business world and the IT world. Reasoning about business value, the economic feasibility of businesses and other business logics has traditionally been performed within business science (Holbrook, 1999; Normann & Ramirez, 1994; Porter, 1985; Tapscott et al., 2000). Internet technologies, on the other hand, have been studied and developed within the field of computer science, decoupled from a good understanding of the business logics and drivers that these technologies support and enable. The rise of Internet and Web Services technologies presents businesses with the opportunity to integrate business activities into a value constellation (Porter, 1985; Normann & Ramirez, 1994). Businesses join forces and use automated processes to offer the best solution for customer needs: a bundle of goods and services that together offer a higher added value than single services or goods. The use of Web Services, like that of other technologies, should be justified by their support of strategic business goals. Hence, the transactions they execute must adhere to business logic, as dictated by the business domain, including issues such as competition, legislation and an understanding of the market. Consequently, a first step in Web Services based business implementations should be understanding the business environment. A (possibly inter-organisational) bundle of services (business activities) has to be defined that provides a solution for customer needs. The next step is to define a business process model to carry out this service bundle. Subsequently, based on the process model and driven by the business logics, it is possible to implement and select Web Services that distribute the computation of the earlier described business activities (eServices). Thus, decisions made from the business perspective propagate to the system implementation. In this chapter we cross the borders of research disciplines and present an exploratory case study, where we investigate and implement eService offerings,
starting with a business analysis, and ending with a Web Services based implementation of a scenario sketched by the business analysis. Important business decisions, made during the business analysis, are reflected in the system implementation. Our application adheres to these business decisions and coordinates the execution of distributed Web Services for the actual offering of eServices over the Internet. Our Web Services based system differs from other work by the (Semantic) Web Services community in its business grounding, which originates from a business analysis that can be traced back in the system implementation.
7.2 A Business Perspective on Music Rights Clearance

7.2.1 Case Study: Music Rights Clearance and Repartitioning

Conventional and Internet radio stations broadcast music to attract audience, and sell this audience to their advertisers. Commercial music use is bound by several rights reserved by right holders (e.g., artists and producers). Specifically, a radio station has to pay right holders for the right to broadcast music to the public. The process of charging radio stations for music use and of distributing the money among artists, producers and other right holders is supported by organisations called rights societies. These may be government-appointed organisations (as is the case in the EU) or commercial companies (as is the case in the US and is intended to be in the EU in the future). They collect fees from radio stations (an activity referred to as rights clearance), and distribute the fees among right holders (an activity referred to as repartitioning). With respect to the right to communicate music to the public, rights societies provide Internet radio stations with the service of clearing this right, and right holders with the service of benefiting from this clearance: repartitioning. Due to the liberalisation of the market for rights societies in the EU, the way of doing business in this industry may change dramatically. New rights societies may appear, and rights societies will start competing for customers: both radio stations (customers for the clearance service) and right holders (customers for the repartitioning service). Currently, EU Member States' laws determine which rights societies clear certain rights and repartition fees, for each state. A radio station has no flexibility in choosing a rights society to work with. Market liberalisation will bring an end to this situation, causing a collapse of the power structures within the industry. Our case study aims at analysing new ways of doing business in a liberalised market, concentrating on clearing rights for Internet radio stations, where the whole process can be supported by eServices. We present the case study by gradually going from the business analysis (Section 7.2) to the system implementation supported by our Opportunistic Reasoning Platform (Section 7.3). The analysis process (Figure 7.1) includes the following steps: (1) analysis of network value constellations of enterprises; (2) specification of elementary commercial services and opportunities to bundle them; (3) description of inter- and intra-organisational processes for service delivery; and (4) implementation of an inter-organisational information system, based on ontologies and Web Services, and supporting the business process/service description and business model.
Figure 7.1: Analysis process: from a business perspective to an IT implementation.
7.2.2 Business Analysis Business analyses are studies that result in an understanding of how actors can protably do business.
The way of doing business can be conceptualised in
so-called business models that show (1) the actors involved, (2) the activities performed by these actors, and (3) the objects of economic value that these actors exchange in their business activities.
The reader is referred to (Baida
et al., 2004) for an ontology-based method to perform a business analysis, and to (Gordijn et al., 2004) for a detailed business analysis of our case study domain. A typical Internet radio station plays a large number of music tracks every month. Market liberalisation means that new rights societies may emerge, that radio stations are no longer obliged to clear rights with a specic rights society, and that rights societies will compete on representing artists. This could be a nightmare for radio stations. For example, nowadays in Holland, all the rights are cleared with the same rights societies (SENA, BUMA and STEMRA), as determined by Dutch law (Gordijn et al., 2004). But in the liberalised market an Internet radio station would have to nd out which (not necessarily Dutch) rights societies represent the artists of every track, and clear rights with all these rights societies. Thus, radio stations may have to do business with a large number of rights societies, rather than just three. Two scenarios have been proposed to solve this problem by introducing new actors: 1. A clearing organisation is introduced; it takes over the rights societies' role to interact with Internet radio stations. It oers a rights clearance service to Internet radio stations, and in fact acts as a proxy, and forwards individual clearing requests to the appropriate rights society. Consequently, rights societies no longer need IT to perform the inter-organisational clearing process themselves. This scenario is depicted in Figure 7.2. 2. Instead of a clearing organisation we introduce a clearing coordinator. Internet radio stations continue interacting with a rights society of their choice.
This rights society cannot clear all the rights itself.
Instead, it
uses the services of a clearing coordinator to find out through which other rights societies it can clear rights for a customer (this information is not publicly available). This scenario is depicted in Figure 7.3. The two scenarios, for which business models are depicted in Figure 7.2 and Figure 7.3, encapsulate important strategic business decisions:
Chapter 7. Music Rights Clearing Organisation
Figure 7.2: Clearing organisation business model. The clearing organisation interacts with radio stations and forwards individual clearance requests to all the rights societies involved.
1. Which actors are involved?
Both scenarios involve rights societies and right holders, as in the current situation. However, each scenario introduces a new actor: a clearing organisation in the first, and a clearing coordinator in the second.

2. Which actor interacts with customers? In the first scenario, rights societies give up some power: they no longer interact with radio stations, making it hard to build a strong relationship with radio stations and to create customer loyalty.
In the second scenario rights societies maintain their traditional position, keeping direct interaction with radio stations.

3. Who determines fees?
The party that interacts with radio stations can
eventually determine the fees that radio stations have to pay. As long as another entity stands between rights societies and radio stations (the clearing organisation scenario), rights societies do not determine the final clearance fee, making it hard for them to compete for radio stations as customers. In the clearing coordinator scenario rights societies continue determining the fees. In the current case study we chose to implement the clearing organisation scenario.
This implies important business decisions:
(1) introducing a clearing
organisation actor, (2) a clearing organisation, rather than rights societies, interacts directly with radio stations, and (3) rights societies no longer determine the final fee.
We created a business model (Gordijn et al., 2004) with which
financial feasibility can be analysed for all actors involved.
7.2. Business Perspective on Music Rights Clearance

Figure 7.3: Clearing coordinator business model. Radio stations interact with a rights society which is supported by a clearing coordinator in order to clear the music rights.

7.2.3 Service Description

The clearing organisation scenario includes two services: clearance and repartitioning. These services can be offered by multiple rights societies; each may require somewhat different fees. Consequently, rights societies need to describe their services in a way that attracts customers, from a business perspective: describe what the eService provides to the customer, and what the customer gives in return. Hence, business activities identified in the business model (see Figure 7.2) are now described as services, using a service ontology (Akkermans
et al., 2004). Based on such a description, a customer can choose the eServices (being business activities) that he or she wishes to consume. A discussion on using the service ontology to describe and bundle services based on business logics is presented in (Akkermans et al., 2004).
7.2.4 Business Process

Managing the rights of broadcasted music is a process involving the rights clearance and the repartitioning of the commissions among the right holders. Each activity operationalises a value exchange, expressed by higher-level ontologies; it does not include purely technical activities, such as uploading a play report to a database. Rights clearance consists of the activities of identifying the tracks being reported and the rights societies that can clear rights for these tracks, calculating fees (by rights societies), collecting fees (by the clearing organisation), and distributing the fees among rights societies. See Figure 7.4 for a detailed activity diagram of the music rights clearance activity. Repartitioning (distributing collected fees among right holders) involves identifying right holders, calculating their commission and distributing fees (either to right holders or to other rights societies that represent these right holders). Repartitioning shall be performed by rights societies and is therefore not part of Xena's responsibilities: once the rights have been cleared, rights societies (not the clearing organisation) shall repartition these rights among the right holders (see Figure 7.2). We therefore do not provide a detailed activity
Figure 7.4: Xena Activity Diagram.
diagram for it, since it will not be implemented by our application. In the next section we go another step further and describe how, based on the business analysis performed, we have developed a clearing organisation application that provides the music rights clearance eService to Internet radio stations by coordinating the interaction with distributed rights societies over the Web.
7.3 Xena: Web Services Based Music Rights Clearance

For a system to satisfy the business model proposed, it has to be the realisation of the business offering. This involves applying the business logics captured
in the business model during the system design and development and, most importantly, maintaining them at runtime. The clearing organisation business model relies on establishing a new actor (i.e.
the clearing organisation) that
merely acts as a proxy between Internet radio stations and rights societies. Therefore, in order to deliver the rights clearance eService over the Web, the implementation of an automated clearing organisation needs to coordinate and seamlessly integrate distributed and heterogeneous systems, as dictated by the business analysis. Moreover, the system needs to be scalable; after all, there are thousands of radio stations on the Web. These highly demanding requirements pose the need for supporting runtime reasoning, informed by the business knowledge, during an inter-organisational process where highly heterogeneous systems need to be coordinated across the Web. In order to fulfil these requirements we have developed Xena, a clearing organisation which is supported by our Opportunistic Reasoning Platform and driven by the business analysis performed. In the remainder of this section we describe how, supported by our Opportunistic Reasoning Platform, we have developed a clearing organisation application that reconciles business and technical aspects.
First we present a novel
approach to developing Web Services-based eBusiness solutions, motivated by the need to comply with the business logics.
Next, we tackle the technical implementation details and issues that had to be faced during the development of Xena. Finally, we review our approach in the light of the system obtained.
7.3.1 A Real-World Services-Oriented Approach

Web Services have been proposed as a standard means for supporting interoperation between different systems across the Web, independently of their platform or language (Endrei et al., 2004; Booth et al., 2004).
The power of
Web Services lies in the fact that they can be seamlessly composed to achieve more complex operations.
Several initiatives coming from the Industry and
from academic environments aim at achieving a wider-scale use of Web Services to support business activities. In this respect, the Semantic Web Services initiative couples Web Services with semantic annotations in order to establish the computational ground for a higher automation of the discovery, composition and execution processes (McIlraith et al., 2001; Payne & Lassila, 2004). A somewhat more pragmatic approach is carried out in industry through the definition of so-called orchestration languages that support the definition
Chapter 7. Music Rights Clearing Organisation
139
of business processes through the composition of Web Services (van der Aalst, 2003). Unfortunately, none of these approaches provides a fully-fledged solution for delivering eServices over the Web. On the one hand, most semantic approaches focus on computational details and ignore commercial aspects of services and, on the other hand, industrial efforts lack properly defined semantics, thus making them unsuitable for automated reasoning tasks. From a Software Engineering perspective, the development of Web Services-based applications is mainly dominated by the Service-Oriented Architecture (SOA) (Pallos, 2001; Vinoski, 2003; Endrei et al., 2004). In spite of the technical advantages surrounding the SOA, such as the loose coupling of distributed systems, a structured approach or analysis and design method is required to craft
"SOAs of quality", to reuse the words of (Zimmermann et al., 2004). Moreover, in this approach the word service is, again, understood from a purely Computer Science perspective, and therefore ignores commercial, and hence essential, aspects. In Section 7.2 we presented the real-world services involved in the clearing organisation scenario and how they are composed into the final eService offering. These services are understood from a business perspective; still, we need to fill the gap between the eService definitions and their Web Services-based execution, in order to automate the provisioning of the music rights clearance eService over the Web. To overcome the limitations of traditional Software Engineering approaches, such as the SOA, we have adopted a conceptually different approach. Supported by our Opportunistic Reasoning Platform and driven by the business understanding of the scenario, we have applied what we have termed a Real-World
Services-Oriented Approach.
This approach is centred around the business
analysis performed, as opposed to the (IT) Service-Oriented Approach, where commercial aspects do not play such a central role.
Our approach honours structure-preserving design (Schreiber et al., 1999); that is, it preserves the conceptual analysis in the final artifact, in order to fully and transparently comply with the business rationale. Doing so paves the way for benefiting from the very characteristics of our Opportunistic Reasoning Platform, and provides a relatively smooth transition from the business analysis to its IT implementation. Given the importance of appropriately partitioning the expertise when undertaking any blackboard application development (see Chapter 4), it is particularly important to characterise the problem-solving activity to be performed and the knowledge involved.
Hence, driven by the
business models, the main task to be performed by the problem-solving architecture is to support runtime reasoning over the different services involved in the final service offering as well as over the delivery process itself.
In other
words, the main body of knowledge involved in the problem-solving task (i.e. the eService delivery process) corresponds to the business, services and delivery process models. In terms of our reasoning platform, the different models help us to identify the Knowledge Sources to be developed, their associated expertise and the triggering conditions under which they can opportunistically contribute to the overall delivery process. To do so, the Real-World Service-Oriented Approach establishes that each KS encapsulates the expertise for delivering or consuming
a particular real-world service over the Web, that is, an eService. This expertise therefore includes domain knowledge about the service itself, and in some cases references to different IT service providers and how to interact with them. The real-world services that need to be delivered or consumed are actually determined by identifying the services where the system is directly involved. In Xena, our main concern is to faithfully and automatically deliver the music rights clearance eService as it was first modelled and specified during the business analysis phase. Thus, based on the (business) activities where the clearing organisation is directly involved, we identify three KSs:
Main KS: This KS provides the music rights clearance eService to Internet radio stations. It is in charge of interacting with Internet radio stations and determining the rights societies to deal with, based on the different tracks being reported. The music rights clearance eService involves consuming two additional eServices, namely the clearance eService offered by rights societies, and the money transfer eService offered by banks. Thus, the Main KS is helped by two additional Knowledge Sources that are charged with ensuring these two eServices are used appropriately.
Clearing KS: This KS encapsulates the expertise for contacting rights societies and clearing song rights.
Accounting KS: The Accounting KS is related to money transfers (i.e., collecting and distributing fees). It interacts with banks as necessary, based on the current state of the eService delivery.

The Knowledge Sources need to be implemented and we need to provide the appropriate means for their coordination in order to deliver the music rights clearance eService.
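As an illustration only, the decomposition above can be sketched in Java. The interface and class names below are hypothetical (they are not Xena's actual sources), and the boolean flags merely stand in for the real blackboard state:

```java
// Hypothetical sketch: each Knowledge Source pairs a triggering condition
// with the expertise for delivering or consuming one eService.
interface KnowledgeSource {
    boolean canContribute(Blackboard bb); // triggering condition
    void contribute(Blackboard bb);       // encapsulated expertise
}

class Blackboard {
    boolean playReportReceived; // a radio station has reported played tracks
    boolean feesCalculated;     // rights societies have returned their fees
    boolean feesCollected;      // money has been collected and distributed
}

class MainKS implements KnowledgeSource {
    public boolean canContribute(Blackboard bb) { return bb.playReportReceived; }
    public void contribute(Blackboard bb) { /* determine the rights societies per track */ }
}

class ClearingKS implements KnowledgeSource {
    public boolean canContribute(Blackboard bb) { return bb.playReportReceived && !bb.feesCalculated; }
    public void contribute(Blackboard bb) { bb.feesCalculated = true; /* contact rights societies */ }
}

class AccountingKS implements KnowledgeSource {
    public boolean canContribute(Blackboard bb) { return bb.feesCalculated && !bb.feesCollected; }
    public void contribute(Blackboard bb) { bb.feesCollected = true; /* interact with banks */ }
}
```

The point of the sketch is only that each KS declares when it can contribute, so that contributions can be triggered opportunistically rather than by a fixed call sequence.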
The business process model establishes how the different business activities (that is, the eServices) need to be coordinated. Therefore, it roughly specifies how the different KSs need to be coordinated and it establishes the activities these have to perform by identifying the actors they need to interact with and the expected results. Since the KSs encapsulate the expertise for offering or consuming the eServices that take part in the overall delivery process, they play three essential roles:
Opportunistic problem-solving: Above all, KSs are the means by which a Blackboard System solves problems. KSs encapsulate the expertise of the system and opportunistically contribute to the overall solution, that is, to the eService delivery in this case.
Bridge the gap between real-world services and IT services: We have previously seen that there exist important differences between services in the real world and services as understood in Computer Science. However, because we are implementing the automated delivery of an eService over the Web, we need to bridge the gap between real-world services and IT services. In this respect, KSs play an essential role since they are the means by which we bring eServices into their IT executable form. In fact, from a business perspective each KS can be seen as the realisation of some business activity, independently of its internal details.
Conversely, from a technical perspective, KSs ensure the appropriate automated execution of some eService by means of Web Services (when it comes to dealing with external business actors) or other technologies.
Integration of external systems:
The delivery of the music rights clearance
eService involves the interaction with several business actors, namely the Internet radio stations, the rights societies and the banks.
As a consequence of bringing real-world eServices to their IT executable form, KSs become the interfaces between the opportunistic reasoning platform and external systems. Integrating external services into the overall system is enabled by the distinguishing characteristic of the core of our platform (the Blackboard Framework), which externalises KSs.

Our approach therefore distinguishes two different levels at which the delivery process is defined. On the first level, we determine how the different eServices must be composed for the final delivery, as specified by the business process model. This corresponds, in fact, to the opportunistic coordination of the different KSs involved. On the second level, we define workflows for mapping the elementary eServices that take part in the delivery into their IT implementation, which will, in some cases, rely on Web Services executions. The eServices composition process is supported by our platform at both levels, thanks to its outstanding support for opportunistically reasoning over distributed processes over the Web. So far we have focussed on analysing the motivating scenario from a business perspective, and on presenting our particular approach to implementing eServices, driven by the business understanding. Next, we move on to the technical aspects associated with implementing the clearing organisation on top of our Opportunistic Reasoning Platform.
7.3.2 Implementation Aspects

We have built Xena on top of our Opportunistic Reasoning Platform. Figure 7.5 illustrates how the clearing organisation has been supported by our reasoning platform. The figure depicts the main components of the system that fulfil the various roles identified by the Opportunistic Reasoning Platform in Chapter 5. We can therefore see the three Knowledge Sources that have been developed as determined by the Real-World Service-Oriented Approach, the tools used for integrating the Knowledge Base, the inference engines the Knowledge Sources have used, the external agents that take part in the music rights clearance eService delivery, and the tool used for their integration.
Figure 7.5: Architecture of Xena based on our Opportunistic Reasoning Platform.
The KSs are informed by the music rights management ontology shown in Figure 7.6. The music rights management ontology defines the commercial actors involved (e.g., rights societies, radio stations, etc.), the rights, the usage fees, the play reports, etc. In summary, the ontology defines the concepts and properties the Knowledge Sources need to manipulate in order to achieve their goals and, at the same time, it establishes a lingua franca for an effective collaboration with external agents, notably radio stations, which have to provide play reports.
Figure 7.6: Music Rights Management Ontology.

The ontology has been modelled in OWL-Lite, for it is a more expressive language than RDF/RDFS and is already quite well supported by various tools (see Chapter 3). In addition, the use of OWL-Lite has also allowed us to test the genericity of our platform with respect to the ontology representation language(s) and tool(s) being used. Integrating the Knowledge Base into Xena has been achieved as determined by the Opportunistic Reasoning Platform: the ontology has been reproduced in a hierarchy of Java classes adhering to JavaBeans conventions. The hierarchy has been automatically generated thanks to the Kazuki toolkit (The Kazuki Team, 2004), and, at runtime, the KSs
manipulate the Knowledge Base supported by the Jena framework (HP Labs, 2004) in order to effectively handle the play reports provided by Internet radio stations. In order to complement the reasoning capabilities supported by OWL-Lite, each Knowledge Source integrates its own Jess inference engine (Friedman-Hill, 2003). Jess rules define the activities to be performed under certain conditions, such as the reception of a play report, or the successful transfer of money to some rights society. In some cases, the action results in rather simple tasks such as calculating the commission, but in other cases, conditions trigger the execution of remote services, such as the clearing or the bank transfer services. Simulated Web Services have been created both for banks and rights societies, using the Apache Axis component (The Apache Software Foundation, 2003). Still, the necessity to map real-world services onto IT services leads to the need for specifying and executing workflows that map business services into a composition of IT services.
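Schematically, each such Jess rule pairs a condition over the current state with an action to fire. The following plain-Java sketch (not Jess syntax, and not Xena's actual code) approximates this condition-action pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Plain-Java approximation of condition-action rules: when a condition
// over the current state holds, the associated action fires (e.g.
// calculating a commission, or invoking a remote clearing service).
class RuleEngine<S> {
    private final List<Predicate<S>> conditions = new ArrayList<>();
    private final List<Consumer<S>> actions = new ArrayList<>();

    void addRule(Predicate<S> when, Consumer<S> then) {
        conditions.add(when);
        actions.add(then);
    }

    // Fire every rule whose condition currently holds.
    void run(S state) {
        for (int i = 0; i < conditions.size(); i++) {
            if (conditions.get(i).test(state)) actions.get(i).accept(state);
        }
    }
}
```

A production rule engine such as Jess adds far more (working memory, pattern matching, conflict resolution), but the basic shape of a rule is the same: a guard plus an action over shared state.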
In fact, it is very likely that a simple service from a business perspective comes down, from a software perspective, to a more or less complex set of methods offered via Web Services, which need to be appropriately composed. In order to be able to execute and compose these IT services, we have described them using OWL-S (OWL Services Coalition, 2003), their groundings being linked to the Web Services offered by the simulated implementations of the different actors identified in the business analysis. For example, the clearing service offered by rights societies has been modelled in OWL-S as a sequence of the CalculateFee and ClearSong Web Service methods offered by the rights societies' (simulated) IT systems: the ClearSong method clears the rights for broadcasting some song; however, it requires a previous money transfer from the clearing organisation to the rights society, which is why we first need to find out the corresponding fee, by means of the CalculateFee method. At runtime, Knowledge Sources trigger the execution of the OWL-S services making use of an OWL-S execution engine (Mindswap, 2004a) and the results of the executions are placed on the blackboard for further processing (see Figure 7.5). Even though performing a deep analysis of OWL-S is out of the scope of this thesis, it is worth noting some aspects. We have to acknowledge that, because of the usual mismatch between real-world services and IT services, delivering eServices will often rely on specifying and executing workflows that map business services into a composition of IT services. This fact will presumably be even more frequent in real scenarios where legacy systems have to take part in the overall service delivery. In Xena such a role has been partially covered by OWL-S. However, because this technology is still in its early stages, we have faced several difficulties when applying OWL-S. For example, the execution engine we have used has limited functionality and does not support some conditional constructs (e.g., If-Then-Else or Repeat-Until).
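The clearing sequence just described can be sketched as follows. The RightsSocietyService interface is hypothetical; only the CalculateFee and ClearSong method names, and their ordering, come from the OWL-S model described above:

```java
// Sketch of the clearing sequence: CalculateFee must precede ClearSong,
// because clearing requires the corresponding fee to have been transferred
// first. In Xena this sequencing is expressed in OWL-S and executed by an
// OWL-S engine; the interface below is an illustrative stand-in.
interface RightsSocietyService {
    double calculateFee(String trackId);            // CalculateFee method
    boolean clearSong(String trackId, double paid); // ClearSong method
}

class ClearingSequence {
    static boolean clear(RightsSocietyService rs, String trackId) {
        double fee = rs.calculateFee(trackId); // step 1: find out the fee
        // step 2 (elided): transfer 'fee' to the rights society's bank account
        return rs.clearSong(trackId, fee);     // step 3: clear the rights
    }
}
```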
Moreover, the engine does not seem to support the latest OWL-S version (1.1), which is the one used by the editing tools. Thus, OWL-S files had to be generated by hand, which is tedious and prone to errors. Despite these difficulties, OWL-S has contributed to integrating remote and heterogeneous IT systems by bringing the (web) services they offer to a higher semantic level, where they can be interpreted, executed and incorporated into reasoning processes. We expect that future developments will support further adoption of OWL-S into applications development. Still, we believe it is important not to take OWL-S, nor Semantic Web Services in general, as a complete solution in both business terms and technical terms.
The adoption of any
technology can only be justified if it helps to achieve the strategic business goals, and this can only be assessed with a deep knowledge of the business scenario, such as the one we have presented for the music industry.
Besides,
on the technical side, we must recognise that applications will still rely on procedural knowledge for achieving their goals.
Hence an architectural support for the seamless integration of distributed and heterogeneous systems, based on the effective integration of (Semantic) Web Services together with procedural and declarative knowledge about the domain, as supported by our Opportunistic Reasoning Platform, remains an important requirement. With respect to Knowledge-Based Reasoning, at first glance the system may not appear to require complex machinery apart from the manipulation of ontologies in order to streamline the communication with remote systems. In fact, in its current stage, little reasoning power is brought into action. The reason for this is mainly practical. We have deliberately simplified the scenario: we have not integrated the many business analysis functionalities the business artifacts obtained allow, nor have we deployed the system in a more realistic setting, for obvious practical reasons. It remains clear, however, that runtime analysis of any business activity, no matter its complexity, requires a substantial amount of knowledge, and applying this knowledge poses important requirements in terms of reasoning support: to increase the profitability of many business activities, business logics often play a crucial role.
For instance, we could imagine a more realistic scenario where banks require a percentage fee for money transfers, with a fixed minimum fee and perhaps a maximum one as well.
Delaying clearance
payments in order to group those destined for the same rights society could then reduce costs considerably. To complicate things further, we could foresee legal restrictions regarding the period for clearing the rights, financial benefits for working with particular banks, etc.
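As a purely illustrative calculation (the fee structure below is invented for the example), grouping payments pays off quickly:

```java
// Illustrative only: an invented fee structure of pct per transfer with a
// fixed minimum charge, used to compare separate vs. grouped payments.
class TransferFees {
    static double fee(double amount, double pct, double min) {
        return Math.max(amount * pct, min);
    }

    // Total fees when each payment is sent as a separate transfer.
    static double separate(double[] payments, double pct, double min) {
        double total = 0;
        for (double p : payments) total += fee(p, pct, min);
        return total;
    }

    // Fee when all payments to one rights society are grouped into one transfer.
    static double grouped(double[] payments, double pct, double min) {
        double sum = 0;
        for (double p : payments) sum += p;
        return fee(sum, pct, min);
    }
}
```

For five payments of 10.00 with a 2% fee and a 1.00 minimum, sending them separately costs 5 x max(0.20, 1.00) = 5.00 in fees, while a single grouped transfer costs max(1.00, 1.00) = 1.00.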
7.3.3 Reviewing Our Approach

The Real-World Service-Oriented Approach grew out of the need to implement a system that honours the business understanding, as opposed to current practices in the (Semantic) Web Services communities, which are almost exclusively concerned with the (indeed necessary but still insufficient) technical details. Throughout this exploratory case study we have tried to shed some light on this issue by adapting IT infrastructures and practices to business practices. As previously introduced, the development of Xena has been driven by the business understanding; conversely, having a business understanding of the scenario allows us to examine the system developed from a business perspective, to check whether the IT implementation faithfully composes the eServices that were identified.
Coming back to the main business decisions that were adopted, it is then possible to check whether the implemented system adheres to the business logics dictated by the business model. In Section 7.2.2 we identified three strategic business decisions, which we now review in the light of our application:

1. Introduce a new actor, the clearing organisation: this new actor is represented by our system, Xena.
The rest of the actors, that is, Internet
radio stations, rights societies and right holders, still remain involved in the business process (see Figure 7.5).

2. The actor in charge of dealing with Internet radio stations is the clearing organisation: in our IT implementation, Xena is in charge of dealing with Internet radio stations and is therefore directly offering them the music rights clearance eService (see Figure 7.5).

3. Final fees are determined by the clearing organisation:
Xena contacts
rights societies for calculating the fees and eventually adds its own commission to the final fee. Thus, the final fee is indeed determined by the clearing organisation. Applying our Real-World Services-Oriented Approach has brought several important characteristics to the application. First and foremost, Xena implements the main strategic business decisions adopted during the business and services modelling phases, which is, after all, our main concern.
Second, our
approach allows us, as we have just seen, to validate the system obtained, which is important in general, but particularly relevant, if not crucial, when it comes to business matters. Finally, the system being a faithful implementation of the business model, we can benefit from the tools that currently exist for working over the artifacts obtained during the business analysis: the business model and the service descriptions. Using these tools one can, for instance, study the financial feasibility of the business model, analyse the value exchanges at runtime, or even include the rights clearance eService as part of service bundles using the tools developed during the OBELIX project (Akkermans et al., 2004).
7.4 Conclusions

The European music industry will have to face a new business environment with the forthcoming liberalisation of the market for music rights clearance and repartitioning. Using knowledge-based technologies for business modelling (Gordijn & Akkermans, 2003) and service modelling (Akkermans et al., 2004), we have analysed this business scenario and obtained a profitable solution. A resulting business model relies on establishing a clearing organisation that merely acts as a proxy between Internet radio stations and rights societies. Driven by this business model, we have developed an online music rights clearing organisation. The application is supported by our Opportunistic Reasoning Platform and relies on Semantic Web and Web Services technologies in order to actually deliver eServices over the Internet. The system has been developed driven by a novel method we have called the Real-World Service-Oriented Approach: an interdisciplinary approach where business and technical aspects are combined in order to deliver profitable eServices over the Web. The strength of our Real-World Service-Oriented Approach lies in the fact that it provides a solution to an important difficulty that affects current eCommerce solutions: there remains a big gap between the business world and the IT world.
Our
approach, supported and enabled by our Opportunistic Reasoning Platform, provides a structural solution to this problem while, at the same time, it paves
the way for applying advanced Knowledge-Based Reasoning techniques over electronic business activities. Thus, solution developers can focus on the aspects that really matter, that is, those specific to the solution being implemented, and leave aside an important part of the inherent complexities of delivering services and reasoning over the Web, which are transparently handled by our platform. Last but not least, it is worth noting that our approach plays an important organisational role since it seamlessly supports business practitioners and IT developers collaborating over the same project without greatly interfering with each other.
Part IV Evaluation and Conclusions
So far, we have introduced the context in which we have conducted our research, reviewing the main existing technologies and tools. We have described our Opportunistic Reasoning Platform both from a conceptual perspective, focussing on the Blackboard Model of reasoning, and from a technical point of view, presenting the platform we have developed. We have also presented two case studies developed on top of our reasoning platform. The case studies are, on their own, important contributions, but also play a more important role in this thesis.
Throughout both case studies we
have assessed the support provided by our Opportunistic Reasoning Platform for reasoning over the Web. We have tested its genericity with respect to state-of-the-art tools and languages, and we have checked the suitability for applying and benefiting from (Semantic) Web Services technologies within our platform. Our case studies have served to examine the platform from an engineering perspective, and have even helped us to better establish the processes for creating KBSs based on the Blackboard Model of reasoning. In this final part of the thesis we evaluate our Opportunistic Reasoning Platform in the light of the applications developed. The evaluation takes the applications' requirements as its starting point, and assesses the support provided by our reasoning platform in order to fulfil them. Further, the evaluations are synthesised in a general evaluation of the platform which contrasts it with well-established engineering practices.
We finally conclude by reviewing the key
contributions of this thesis and introducing lines for further research.
Chapter 8 Evaluation and Discussion
In this chapter we evaluate our Opportunistic Reasoning Platform. We review the specificities of the core of our platform (our Blackboard Framework) in order to assess whether these have retained the essence of the Blackboard Model. Next, we evaluate the platform with respect to the case studies.
Finally, we
generalise these evaluations in the light of the results obtained and contrast them with current best practices in Software and Knowledge Engineering.
8.1 Is Our Implementation a Blackboard Framework?

The work presented so far builds upon previous research performed in the context of Blackboard Frameworks. Still, our implementation presents some specificities. Hence there remains a crucial question unanswered: is our implementation a Blackboard Framework? In other words, have the changes performed maintained the essence of Blackboard Frameworks, or have we implemented a new thing?
This question is particularly important in order to be able to lean on previous research performed in the context of Blackboard Systems and Frameworks. Let us then first deal with that question as part of our evaluation process. In Part II we introduced the Blackboard Model of reasoning. We also presented the concept of a Blackboard Framework as a skeletal implementation of this reasoning model, intended to minimise the typical development overhead in terms of the software infrastructure required for developing Blackboard Systems (i.e., applications that apply the Blackboard Model of reasoning). Hence, for something to be a Blackboard Framework it only needs to be a skeletal implementation of the Blackboard Model. Let us then recall the essence of this reasoning model. The Blackboard Model is presented by using the metaphor of a group of experts collaborating on a jigsaw puzzle (see Chapter 4).
The metaphor establishes that experts contribute to the overall problem-solving incrementally, as they see new pieces fall into place, without directly communicating with each other. If one watches the solution, it evolves opportunistically (as opportunities for contributing appear), as opposed to following a fixed algorithm. The essentially concurrent nature of this model of reasoning has often led computer scientists to introduce the concept of a moderator, known as a controller in the blackboard literature. The controller is charged with serialising
the contributions of the different experts, as a means to avoid race conditions. It is, however, important to note that the moderator does not actively contribute to the overall problem-solving activity by modifying the blackboard, and thus its inclusion preserves the essence of the Blackboard Model. Instead, the controller ensures contributions take place incrementally, and it has the power to decide which of the possible contributions shall be performed next. Our Blackboard Framework is no exception in this respect, and makes use of a controller (the Blackboard Controller abstract class) as a means to serialise Knowledge Sources' contributions, avoiding race conditions.
The Blackboard Controller is given the power to decide which action shall be performed next, but it does not directly contribute to the overall problem-solving by modifying the blackboard state. Only Knowledge Sources are allowed to modify the blackboard, and they do so without any direct communication between them. In order to support an effective and efficient collaboration via the blackboard, our Blackboard Framework implements the Observer design pattern, which ensures that every Knowledge Source is informed at any time about the changes performed by others.
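A minimal sketch of this arrangement, assuming hypothetical class and method names rather than the framework's actual ones, could look as follows:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the names below are hypothetical, not those
// of the actual Blackboard Framework classes.
interface KnowledgeSource {
    // Observer callback: invoked whenever the blackboard changes.
    void blackboardChanged(String entry);
    boolean canContribute();
    String contribute();
}

class Blackboard {
    private final List<String> entries = new ArrayList<>();
    private final List<KnowledgeSource> observers = new ArrayList<>();

    void register(KnowledgeSource ks) { observers.add(ks); }

    // Adds an entry and broadcasts the change to every Knowledge Source
    // (the Observer pattern described above).
    void post(String entry) {
        entries.add(entry);
        for (KnowledgeSource ks : observers) {
            ks.blackboardChanged(entry);
        }
    }

    List<String> entries() { return entries; }
}

class BlackboardController {
    private final Blackboard blackboard;
    private final List<KnowledgeSource> sources;

    BlackboardController(Blackboard blackboard, List<KnowledgeSource> sources) {
        this.blackboard = blackboard;
        this.sources = sources;
    }

    // Serialises contributions: at most one Knowledge Source writes to the
    // blackboard per cycle, and the controller itself never writes domain
    // content, preserving the essence of the Blackboard Model.
    void runCycle() {
        for (KnowledgeSource ks : sources) {
            if (ks.canContribute()) {
                blackboard.post(ks.contribute());
                return;
            }
        }
    }
}
```

Here the controller applies one simple serialisation policy (a single contribution per cycle); the actual framework may schedule contributions differently.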
Therefore, our Blackboard Framework, in spite of its specificities, remains faithful to the Blackboard Model. It is worth noting, however, that, since a Blackboard Framework is above all a generic implementation, it leaves room for adapting and using it in particular applications. The developer of a system could establish a direct communication between Knowledge Sources, thus transgressing the main rule of the Blackboard Model of reasoning.
We doubt, however, that there exists any means by which we could avoid such a possibility architecturally and still retain the genericity, and hence the interest, of the framework. Instead, we have tried to avoid committing such a mistake by providing a deep analysis of the Opportunistic Reasoning Platform, including its applicability, some guidelines for the development of systems based on it, and some architectural and technological decisions that help to overcome traditional limitations of Blackboard Frameworks.
8.2 Evaluation of the Opportunistic Reasoning Platform
The Opportunistic Reasoning Platform described in this thesis is composed of a well-known and deeply analysed opportunistic reasoning model (the Blackboard Model), an infrastructure implementing the model (the Blackboard Framework), and a set of engineering decisions that adapt this infrastructure to the Web. In this section we evaluate the platform we have developed. In order to do so, as a first step we briefly review the main requirements of both applications, and we then evaluate the platform with respect to the characteristics exhibited by ODEA and Xena thanks to being built upon our platform. The evaluation puts the emphasis on the main decisions we adopted during the development of our Opportunistic Reasoning Platform and the role they played in both applications.
8.2.1 Application Requirements
In this thesis we have described two different applications supported by our Opportunistic Reasoning Platform. Evaluating the support provided by our platform must therefore evaluate the systems obtained with respect to their original requirements. Hence, as a first step, we here review the main requirements of our two applications, namely ODEA and Xena.
ODEA
The Online Design of Events Application is a Web-based Design Support System aimed at facilitating the design of events in the context of the San Sebastian Technology Park. In this respect, events organisation is considered as an activity which involves a significant number of actors. For instance, organising a particular event might require the designer to contact several companies such as restaurants, hotels, taxi companies, audio-visual equipment providers, etc. In Chapter 6 we analysed events organisation by means of an exploratory example in which we described the preparations for a particular meeting. The example illustrated that, even in relatively small cases, organising an event is far from being a trivial activity. With additional analysis we characterised events organisation as a kind of Routine Configuration Design, where designing is understood, based on the KL DVE1, as an incremental puzzle-making and puzzle-solving process during which we attempt to devise a solution or solutions that satisfy the motivating needs and desires.
This exploratory process embraces what we consider to be the main particularity of designing: it does not start with a problem statement, yet it needs to achieve a solution that satisfies some needs and desires. On the basis of such an understanding of designing, and in order to effectively support the event organiser in any event organisation, that is, in any event design, we decided to base ODEA on the Event Design Process Model, which establishes a significant number of requirements. As a consequence, ODEA must address a considerable set of complexities which must be supported by its underlying infrastructure. For instance, the process of designing is understood as an iterative, incremental and inherently opportunistic process through which problem specifications are generated, solved and evaluated. Moreover, the KL DVE1 identifies several types of knowledge as being involved in designing. Additionally, designing being an exploratory process, the application needs to support following, contrasting and merging several lines of reasoning. Further, ODEA is above all a Design Support System and should therefore allow a smooth alternation between user decisions and automated activities in order to effectively integrate the designer into the overall design process. Moreover,
the need to collaborate with different agents (e.g., external providers and the designer itself) in an automated manner across the Web poses additional difficulties. For instance, the system must support interacting with remote and presumably heterogeneous systems over the Web as an integral part of the design process. Similarly, the system must be adaptable, extensible and robust, as a means to be ready for interacting with such a wild environment as the Web. In other words, one should not forget that ODEA is intended to be deployed on the Web in order to benefit from the many commercial and functional opportunities this enables, but this must inevitably be backed by an appropriate infrastructure.
Xena
Music rights clearance in the European Union is currently performed by rights societies, which are government-appointed organisations. In the future, the market for rights societies will be liberalised, which will revolutionise the industry. New rights societies will appear and will start competing for customers. Thus, in the future, there will be a need to clear the rights of the music broadcast by the many Internet radio stations by contacting many rights societies. This forthcoming scenario has been analysed and two different business models have been proposed.
Xena is an online music rights clearing organisation whose aim is to faithfully implement one of these business models. The emphasis in this application lies therefore on the need to comply with the defined business model, and to do so in an automated manner over the Web.
Therefore, Xena needs to fulfil important requirements. First and foremost, it has to be based on the business logic captured in the business model and maintain it at runtime. This requires driving its development by the business model, and it indirectly implies the need for supporting and validating the implemented solution with respect to the business decisions adopted. In other words, once the system is implemented, it should be possible to trace back the business decisions adopted. To achieve its goal the system must be able to coordinate the clearance process by contacting the required business actors involved, namely rights societies and banks.
This involves, for instance, the distribution of the execution over various distributed and heterogeneous systems, as well as the need to scale up in order to deal with a potentially high number of actors. Finally, as for the previous case study, the system must be adaptable, extensible and robust in order to be ready for interacting with the Web. In this case, however, adaptability is taken further, since Xena should be adaptable to technical changes (e.g., new technologies, new performance requirements) as well as to changes in the business model (e.g., new actors, new business rules, etc.). In summary, an automated music rights clearing organisation for the Internet should support runtime reasoning informed by the business knowledge during an inter-organisational process where highly heterogeneous systems need to be coordinated relying on a non-guaranteed and uncontrolled communications medium.
8.2.2 Evaluation of the Blackboard Framework
In Part I of this thesis, we presented and described the many knowledge representation languages and tools that are currently available for constructing Knowledge-Based Systems. The analysis we performed was centred on Web-oriented technologies, in line with the purpose of this thesis. The review we performed highlighted the (purposely) limited expressive power of the ontology representation languages for the Web and the heterogeneous set of software tools available. One of the main conclusions we drew from such a review was the need to use a significant number of tools and technologies in order to develop Knowledge-Based Systems that solve real-world tasks (which cannot be reduced to classification) over the Web. The applications we have developed in this thesis aim precisely at solving
real-world knowledge-intensive problems over the Web.
The requirements of each of the systems, which we previously reviewed, cannot be fulfilled by just applying any of the state-of-the-art reasoning engines and knowledge representation languages. The applications require combining a good set of Software and Knowledge Engineering techniques in order to successfully solve their tasks. They need to support complex reasoning activities, they have to interact with heterogeneous and distributed systems, and they must do so in a reasonably maintainable manner. Both systems have been developed on top of our Opportunistic Reasoning Platform, which provided a set of modules, utilities and architectural decisions for reasoning over the Web.
Thus, ODEA and Xena have benefited from a generic and widely analysed reasoning model. In fact, the body of knowledge surrounding this reasoning model has brought an important set of guidelines and know-how to their implementation, such as the applicability of the reasoning model, its characteristics, or even how problems should be addressed (i.e., by partitioning the expertise required to perform the task). Moreover, its genericity (indirectly) contributed to the final result of the second case study, Xena, for which the experience gained during the first one helped to overcome many difficulties and reduced the time required for its development. Moreover, the capacity for integrating diverse reasoning mechanisms and tools that is attributed to Blackboard Frameworks has been particularly beneficial as well.
For instance, the Online Design of Events Application integrates Sesame, which ensures the efficient persistence of the Knowledge Base and provides RDFS inferencing capabilities; Jess, for complementing its reasoning power; and the Configuration Tool, for solving configuration problems. Similarly, the music rights clearing organisation, Xena, makes use of Jena in order to manipulate an OWL ontology about the music rights domain, and Jess for complementing its reasoning power. Therefore, throughout our case studies we have also tested to an important extent the genericity of our platform with respect to the software components and ontology representation languages for reasoning over the Web. Another important benefit of applying our Opportunistic Reasoning Platform regards the clarity of the approach or, as we prefer to call it, the transparency of the solution. As we previously introduced in Part II, the core of our platform (the Blackboard Architecture) derives almost directly from a particular problem-solving model (the Blackboard Model). It is therefore quite natural, once one properly understands the reasoning model, to map the conceptual design of the problem-solving into its final implementation.
First we need to analyse the problem to be solved and partition the expertise involved. Such a partitioning determines the Knowledge Sources that need to be involved and what their expertise must be. The final step is to implement these Knowledge Sources. The benefits of the so-called transparency of our reasoning platform were remarkable in both applications, since the way problem-solving should occur maps directly onto the underlying Knowledge Sources, whose field of expertise and task to be performed are known in advance.
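The final implementation step can be sketched as follows; the base class, the CateringKS example, and the string-based blackboard state are all hypothetical illustrations, not the framework's actual classes:

```java
import java.util.Set;

// Hypothetical sketch: each partition of the expertise becomes one
// Knowledge Source whose activation condition and contribution are known
// in advance. Names are illustrative, not the framework's actual classes.
abstract class AbstractKnowledgeSource {
    abstract String fieldOfExpertise();

    // Precondition: can this expert contribute given the current state?
    abstract boolean isApplicable(Set<String> blackboardState);

    // The expert's actual contribution once the precondition holds.
    abstract String contribute(Set<String> blackboardState);
}

// One partition of the events-organisation expertise: catering.
class CateringKS extends AbstractKnowledgeSource {
    @Override
    String fieldOfExpertise() { return "catering"; }

    @Override
    boolean isApplicable(Set<String> state) {
        // Fires only once the number of participants is on the blackboard.
        return state.contains("participants-known");
    }

    @Override
    String contribute(Set<String> state) {
        return "catering-proposal";
    }
}
```

The point of the sketch is that, once the expertise is partitioned, writing each Knowledge Source reduces to filling in a known precondition and a known contribution.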
In both case studies, the clarity of the approach was at the root of many beneficial consequences for the development of the applications, in terms of their maintenance, extensibility, or even for testing their functionality. In the first case study, one should notice that building ODEA based on
the Events Design Process Model described in Chapter 6 involves mapping a Knowledge Level understanding of events organisation into its final implementation. In this respect, our Opportunistic Reasoning Platform definitely played a very important role, since it simplified the transition from the conceptual design to its final implementation, allowing us to focus on the specific aspects of the case study and leave aside infrastructural concerns. In the second case study, the transparency of the platform enabled to a great extent the applicability of our Real-World Service-Oriented Approach for developing the music rights clearing organisation. How problem-solving occurs, the different Knowledge Sources available, and how these contribute and collaborate actually represents the business view of the system.
Focusing on these aspects, one can analyse and understand the system's behaviour from a pure business perspective (and hence validate the approach).
In other words, this view provides a high-level perspective on the system which is appropriate for business analysis, since it is closer to real-world concepts (e.g., a KS represents a real-world service). Alternatively, if we focus on a specific KS, we actually move into a more detailed view of how the system delivers a particular real-world service by interacting with some remote Web Services. From a Software Engineering perspective, during the development of Xena (our second case study chronologically) we noticed an important number of benefits thanks to being based on our Opportunistic Reasoning Platform. First and foremost, Xena clearly benefited from the reusable skeletal implementation of the Blackboard Architecture.
Development time was relatively short and allowed us to focus on the specificities of the case study, leaving aside infrastructural development, which was already covered. Moreover, having an already tested and deeply analysed architectural implementation also reduced the effort required for the analysis and testing phases. The modularity of the Opportunistic Reasoning Platform has also proven its benefits during the development of the system. Software Engineering best practices suggest modularity as one of the most important aspects to bear in mind during any software development, for it simplifies software maintenance and extensibility and is the basis for reusability.
When it comes to knowledge-intensive applications, modularity is taken further, to also include knowledge as a crucial aspect for reducing acquisition, maintenance, extensibility, and evaluation overheads. The Opportunistic Reasoning Platform has brought modularity to the development of the case studies at two different levels. At the architectural level, it has brought modularity through a clear separation of functionalities: the Blackboard Framework to support the reasoning process; Web Services for the loosely coupled integration of remote systems; ontologies and libraries to manipulate sharable conceptualisations; and additional inferencing systems for complementing the reasoning power.
Internally, and somewhat implicitly, the Blackboard Model leads to making use of divide and conquer for its own implementation, thus leading to Knowledge Level modularity. In this respect it is worth noting some additional consequences that partitioning the expertise had in both applications. As a particular consequence for ODEA, partitioning the expertise for organising events helped to make combinatorial problems more tractable. People organisation represents a good example of how dividing sub-tasks helps to reduce task complexity and optimise the usage of computational resources. Last, in Xena, partitioning the expertise ensured the
scalability of the system. In fact, instead of creating a Knowledge Source per business actor involved, we have one per real-world service. The consequence is that if the number of rights societies or banks grows, the number of Knowledge Sources involved will remain unchanged; hence, Xena is scalable with respect to the business actors. An additional feature exhibited by our Opportunistic Reasoning Platform which has been beneficial for ODEA is the fact that it is event-based. As we previously introduced in the analysis of the Blackboard Model of reasoning (see Chapter 4), one of the main characteristics of this reasoning model is the self-activation of the Knowledge Sources. Such an event-based activation of Knowledge Sources contributed to reducing resource consumption in the Online Design of Events Application, whose reasoning systems do not need to be permanently running when the possible designs cannot be explored any further without additional information provided by the user. In order to complete the evaluation of our Opportunistic Reasoning Platform, we next review and evaluate the specificities of the core of our platform with respect to traditional Blackboard Frameworks. We therefore focus, first of all, on the impact and consequences of externalising the Knowledge Sources, then on the use of ontologies for knowledge representation and, finally, on the use of Web Services as a means to support the collaboration with external providers.
Knowledge Sources Externally Visible
One of the specificities of our particular Blackboard Framework is the fact that Knowledge Sources are not regarded as internal and hidden functional modules. Instead, Knowledge Sources are regarded as the integration point between the system and external agents. Doing so has enabled using Knowledge Sources' expertise in order to improve the interaction with external agents, be they other systems or even the user. First of all, we would like to account for the important improvement of the user interaction in our Design Support System, ODEA, thanks to applying Knowledge Sources' expertise. In the Online Design of Events Application the user interface is adapted at runtime depending on the task being performed, informed by the expertise of the Knowledge Sources. For instance, the user is directly informed by the ontology, which provides additional information about the concepts and properties presented, and constitutes a valuable assistance mechanism. Moreover, the information to be provided by the user is known by the Knowledge Sources, which build the user interface accordingly, in terms the user can understand and that can still take part in further reasoning processes. Finally, ontologies have helped to better support the user in browsing and querying the information conceptually, according to the ontologies, and not just by using keywords as in traditional search systems. As a consequence, information has been brought to a higher level, more intelligible, and hence more useful, to the user. In summary, applying Knowledge Sources' expertise to constructing the user interface allowed us to smoothly bridge the gap between the event designer and ODEA, leading to a better integration of the user into the overall design and hence to better support for the designer.
Setting the Knowledge Sources as the mediators between the system and the user also contributed to supporting the efficient use of computational resources and to streamlining human-computer interaction by pre-filtering incorrect and incomplete
data. For instance, in ODEA the user interface includes some input-checking mechanisms, such as ensuring that the ending date of an event does not precede the starting date, or comparing the number of participants with the typical amount for the kind of event being selected. At runtime, once the user has entered some information, it is validated with respect to the expected input. Again, as a result, applying Knowledge Sources' expertise to the generation of the user interface has led to an improvement in the usability of the system. In this respect, it is worth noting that achieving this through a global user interface (e.g., as in traditional Blackboard Systems), where diverse fields of expertise have to be integrated, is hardly achievable. Moreover, doing so would inevitably affect the transparency of the solution, making it harder to maintain. The integration of external systems through the Knowledge Sources also benefited from applying their expertise. In particular, one can clearly notice the benefits obtained through the analysis of the integration of the Configuration Tool in ODEA. The Configuration Tool is a configuration design Problem Solving Method implementation that encapsulates the expertise for solving configuration problems independently of the application domain. To do so it manipulates generic concepts, such as components or ports, as defined in its ontologies. Table configuration could be solved by mapping domain-specific terms, defined according to the events ontology, into those understood by the Configuration Tool.
Having established such a mapping, Knowledge Sources could generate at runtime configuration problem specifications for their domain of expertise, which could then be solved. This manner of integrating domain-independent problem-solving systems (i.e., PSM implementations in a more recent jargon) contrasts with the practices in previous blackboard-based systems, such as the Edinburgh Designer System (Smithers et al., 1990), where domain-independent problem-solving systems were considered as specific Knowledge Sources. Doing so led to some difficulties originating from the need to apply generic expertise to specific domains. In fact, this mainly originated from confusing system components with Knowledge Sources, and led to an incorrect partitioning of the expertise. As an illustration, coming back to ODEA, it is important to understand that the system was required to support organising people by means of solving table layout configuration problems, but did not require a general expertise for solving configuration problems. How, in terms of software components, we provide the expertise for doing so is another matter. And in fact, on account of our experience in ODEA, the use of PSMs is an appealing manner of achieving it. Therefore, thanks to the externalisation of the Knowledge Sources, and by using ontologies as the lingua franca between them and the Configuration Tool, Knowledge Sources can generate and solve configuration problems at runtime.
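The kind of mapping described above, from domain-specific terms to the generic component/port vocabulary of a configuration solver, can be sketched as follows. All term names here are hypothetical illustrations; the actual events and configuration ontologies differ:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the ontology mapping step: domain terms from an
// events ontology are translated into the generic vocabulary understood by
// a configuration tool. The term names are illustrative only.
class ConfigurationMapper {
    // Static correspondence between domain concepts and generic ones.
    private static final Map<String, String> TERM_MAP = new LinkedHashMap<>();
    static {
        TERM_MAP.put("Table", "Component");
        TERM_MAP.put("Seat", "Port");
        TERM_MAP.put("Guest", "Connection");
    }

    // Translate a domain-level term so a Knowledge Source can hand a
    // problem specification to the generic configuration solver.
    static String toGenericTerm(String domainTerm) {
        String generic = TERM_MAP.get(domainTerm);
        if (generic == null) {
            throw new IllegalArgumentException("No mapping for: " + domainTerm);
        }
        return generic;
    }
}
```

With such a table in place, a Knowledge Source can assemble a configuration problem at runtime entirely in the solver's own vocabulary, which is the essence of the integration described above.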
For instance, ODEA supports contacting hotels, and Xena interacts with rights societies and banks in order to clear the music rights. It is worth noting that such an integration of remote systems is enabled by lifting the communication from syntactical and protocol matters to a semantics-based interaction, thanks to the use of ontologies as the lingua franca between both systems.
Ontologies for Knowledge Representation
Another important design decision introduced in our Opportunistic Reasoning Platform is the use of ontologies represented in Web standard formats as a means to define sharable conceptualisations of the domain(s) of interest. Ontologies have provided an explicit and declarative representation of the concepts which are relevant for both applications. The generation of these ontologies has required an exhaustive analysis of the specific domains, notably in the case of the events organisation domain (Cendoya et al., 2003). In this respect, ontologies have therefore indirectly contributed to increasing the knowledge by forcing some implicit, and sometimes previously unshared, understanding of the domains to be made explicit. Creating the ontologies also contributed to adopting an appropriate conceptual abstraction for developing a Blackboard-based solution. In fact, as we previously introduced in Section 4, how the problem is addressed determines to a great extent the success of a Blackboard approach. Ontologies have indeed contributed to tackling the problem at the Knowledge Level, and have enabled, as well as promoted, the necessary conceptual division of the expertise required for supporting the designer, due to their essentially hierarchical nature. Furthermore, the capability to support the integration of different systems which is conferred to ontologies in the Semantic Web and Knowledge Representation communities finds a good illustration in our case studies.
In fact, ontologies have enabled a semantics-based interaction between the different components of the applications, both internal and external. Internally, ontologies have defined common vocabularies for the events organisation and music rights management domains. Sharing these conceptualisations among the Knowledge Sources has supported the effective integration and collaboration among the various Knowledge Sources in both systems.
In this respect, ontologies appear as an appropriate means for representing part of the knowledge that is required in applications, supporting the necessary communication between the different experts (i.e., the KSs) involved, and also allowing for the domain extensions and detailing that they require to perform their specific tasks. In fact, because the Blackboard Model forbids direct communication between the different Knowledge Sources involved, it has to rely on shared conceptualisations. This situation is very similar to the one envisioned and proposed by the Semantic Web, in which different software systems need to collaborate by sharing the conceptualisations they are committed to. It seems natural that the technical solution proposed for the Semantic Web, which presents additional challenges, should work in the Blackboard Model. Externally, ontologies have enabled the integration of disparate systems by providing a shared and explicit understanding of the different domains. ODEA in particular presents two examples of how to integrate software applications by means of shared ontologies. The first example enables the integration of the generic configuration problem-solving tool through the use of procedural mappings in order to bridge the differences between the configuration problem-solving terminology and that of the specific domain of interest. The second example supports the integration of a remote hotel booking service. In this second case, the hotel reuses ODEA's representation of the hotel booking domain.
In the Xena application the music rights management ontology has established the conceptual
schema to be used by radio stations for defining their play reports. The extent to which ontologies enable such a semantics-based interaction is largely determined by their representation language. Hence our Opportunistic Reasoning Platform advocates the use of Web standards in order to support wider interaction with remote systems over the Web. However, we do not advocate any specific language in particular. In fact, we understand that every application has its very particular requirements in terms of the inferencing needed, the efficiency desired, or even the maturity of the tools it would have to build upon (see Part I of this thesis). Our case studies make use of two different languages so as to test their applicability and the genericity of our platform with respect to the ontology representation language(s) employed. For instance, ODEA uses RDF/RDFS, as decided by the OBELIX consortium in the course of the project, and Xena uses OWL Lite. In both cases the use of ontologies was successful and supported the definition of reusable conceptual schemas that paved the way for a semantics-based interaction, internally among the Knowledge Sources and externally with remote systems. Ontologies have been integrated in our applications by mapping the taxonomy of concepts they define into automatically generated Java classes which are subsequently adapted to Java Beans conventions (see Chapter 5). This approach has simplified to an important extent the way we deal with the ontologies, relieving the developer of the burden of directly managing RDF/RDFS or OWL statements by transforming them into simple Java objects. Additionally, ODEA shows that this mechanism for integrating ontologies remains compatible with the use of tools that internally work solely on the basis of Semantic Web ontology representation languages. This is the case for Sesame, which provided efficient persistence and querying capabilities over an RDF/RDFS-encoded Knowledge Base.
This fact can be further generalised to other kinds of tools, which can seamlessly be integrated into any application built on top of our Opportunistic Reasoning Platform, provided they are compatible with any of these ontology representation languages. As we previously discussed in Part I of this thesis, ontologies, for their very purpose of being sharable conceptualisations, need to rely on logics with a reduced expressivity, as a means to support the very same inferencing on different systems independently of the engine being used. Indeed, the ontologies applied in our applications, which are represented in RDF/RDFS in one case and OWL Lite in the other, are no exception. To benefit from more powerful inferencing than what is supported by the ontologies, our Opportunistic Reasoning Platform supports the integration of additional rule engines. For instance, in ODEA each Knowledge Source makes use of its own Jess rule engine. The main motivations for choosing that particular rule engine over others have been its maturity, our own previous experience with CLIPS (whose scripting language is almost compatible with Jess'), and its capability to directly map its workspace to external Java instances. Any change made by a rule execution is directly reflected on the Java instance held on the Blackboard and, inversely, any modification made directly on the Blackboard is automatically reflected in the rule engine's workspace. As a consequence, the inclusion of richer inferencing into the application was relatively simple and avoided important development overheads.
In fact, coupled with the mapping of concepts to Java classes, this capability comes into its full potential, allowing us to directly and seamlessly apply inferencing rules over concepts and properties defined in some ontology, thus overcoming ontologies' expressivity limitations by means of a simple yet powerful structural solution.
Web Services for Collaboration

In Chapter 5 we introduced, as part of our Opportunistic Reasoning Platform, the identification and support of distributed computation technologies in order to complement the functionality of Knowledge Sources. Doing so supports seamless interaction with remote systems over the Web. Our decision did not explicitly establish any preferred technology; instead we advocated the use of Web standards as a means to amplify the interoperability of our platform. There currently exists, however, a de facto standard for loosely coupled interoperation over the Web between different systems, independently of their platform or language: the so-called Web Services.

Web Services have therefore served as a means to integrate remotely offered functionality in our case studies. In the Online Design of Events Application, Web Services have supported the interaction with a hotel booking service during the event organisation process and the integration of configuration problem-solving expertise as offered by the generic configuration tool. In Xena, Web Services have been the key enabling technology for supporting its activity, which inevitably requires interacting with remote rights societies and banks. On account of our experience in both case studies, the use of Web Services has been particularly successful.
Their use, coupled with the infrastructural support of our Opportunistic Reasoning Platform (its inherently distributable reasoning kernel, i.e., the Blackboard Framework, the externalisation of Knowledge Sources and the use of ontologies as the lingua franca), has leveraged our reasoning platform into a truly Web-based solution that embraces the main principles of the Web and profits from its inherent strengths, paving the way for brand-new applications of knowledge-based technologies over the Internet, such as the ones we have implemented in our case studies. It is important to note, however, that their use has not been completely straightforward. Web Services were conceived as a technical solution for seamless interoperation with remote systems over the Web. Roughly speaking, they are mainly based on the Web Service Description Language for describing the services offered and on SOAP as the communication protocol, and they fail to provide a semantic layer.
Therefore, one needs to perform the appropriate transformations from the application domain concepts, defined according to some ontology, into the datatypes accepted by the Web Services. This is not a particularly complex task, but it has proven tedious and undesirably dirty, since it requires a large number of datatype conversions. Moreover, since Web Services, as they are currently defined, do not provide formal semantics, tasks such as the discovery of services or their composition cannot be automated. In an attempt to overcome these difficulties, recent efforts have been devoted to the definition of orchestration languages as well as to the so-called Semantic Web Services. In Xena we have explored one of these semantic approaches to Web Services, namely OWL-S. By means of this OWL-based Web Service ontology we have added a semantic layer on top of the services offered by rights societies and banks. Despite the technical difficulties we faced owing to the early stage of development of OWL-S tools, from our experience we can foresee the greater impact Semantic Web Services (not just OWL-S) could have in the future on our Opportunistic Reasoning Platform. They would simplify the use of Web Services for integrating systems, abstracting the developer away from burdensome datatype casts, and supporting appealing new functionality such as the automated discovery or composition of remote services. Conversely, the (Semantic) Web Services approach can benefit to an important extent from our Opportunistic Reasoning Platform.
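The datatype conversions mentioned above can be illustrated with a small, entirely hypothetical adapter: a domain concept defined against the ontology has to be flattened into the plain datatypes a WSDL-described operation accepts. Every name in this sketch is assumed for the example:

```java
import java.time.LocalDate;

// Illustrative sketch of the tedious conversion layer discussed in the text:
// a rich domain object must be flattened into the primitive parameters
// (xsd:string, xsd:int, ...) that a hypothetical service operation expects.
class BookingRequest {
    final LocalDate checkIn;
    final int nights;

    BookingRequest(LocalDate checkIn, int nights) {
        this.checkIn = checkIn;
        this.nights = nights;
    }
}

class BookingServiceAdapter {
    // Returns the parameters in the order the (hypothetical) operation expects.
    static String[] toServiceParameters(BookingRequest r) {
        return new String[] { r.checkIn.toString(), Integer.toString(r.nights) };
    }
}
```

Multiplied over every operation and every concept involved, this is the "undesirably dirty" boilerplate that a semantic service description would let tooling generate automatically.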
In fact, on the technical side we must recognise that applications will still rely on dynamic knowledge (procedural or declarative) for achieving their goals. Hence, an architectural support for the seamless integration of distributed and heterogeneous systems, based on the effective integration of (Semantic) Web Services together with procedural and declarative knowledge about the domain, as supported by our Opportunistic Reasoning Platform, remains an important requirement.
8.3 Reflections on our Opportunistic Reasoning Platform

In the following sections we generalise the evaluation results just described and contrast them first with current Software Engineering best practices and then with Knowledge Engineering ones.
8.3.1 A Software Engineering Perspective

Our Opportunistic Reasoning Platform is above all a software infrastructure to support the development of Knowledge-Based Systems for the Web. It is based on a software architecture, that is, "the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them" (Bass et al., 2003), that draws directly from the Blackboard Model. However, in order to adapt our platform to the Web, we have based it on a set of principles required to appropriately cope with the very characteristics of the Web. In Chapter 4 we introduced the main principles of the Web, namely simplicity, modularity, tolerance, decentralisation, independent invention and the principle of least power. These principles impose some important requirements on Web-based solutions. For instance, tolerance requires systems to be adaptable. Moreover, decentralisation implies that applications should be scalable, extensible and interoperable. Further, the test of independent invention requires genericity to some extent. And finally, modularity and simplicity are in general good Software Engineering practices. We have therefore based our Opportunistic Reasoning Platform on the following principles, which we review below: modularity, extensibility, adaptability, scalability, interoperability and genericity.
Modularity is one of the basic characteristics that determine software quality to a great extent. A good decomposition of a system into a set of modules leads to a more maintainable solution by potentially restricting the scope of tests, changes and errors. For instance, different modules can be tested independently, detected errors can more easily be corrected, and new functionality is simpler to introduce. Moreover, since good modularity requires a clear separation of concerns, it is implicitly also a means to achieve clearer and cleaner solutions. Finally, modularity is a basic requirement for distributing work among a set of software developers, and leads to faster development processes. In our framework, modularity comes first of all from the reasoning model applied, which requires a clear separation of concerns. Several Knowledge Sources encapsulate the expertise of the system, and there need not be direct communication between them. Therefore, the reasoning model provides a Knowledge Level modularity which allows the addition, deletion and modification of Knowledge Sources independently. Secondly, the architecture is based on a clear identification of the different components involved (e.g., Blackboard, Controller, Knowledge Sources, etc.) and comes with a skeletal Object-Oriented implementation. Finally, the use of Web Services as a means to support collaboration with remote systems allows their loosely coupled, and hence modular, integration.
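The Knowledge Level modularity described here can be sketched structurally as follows. The interfaces are invented for illustration and are not the platform's actual API; the point is only that Knowledge Sources interact exclusively through the shared Blackboard, which is what makes them independently addable, removable and replaceable:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal structural sketch: Knowledge Sources never call each other
// directly; each one only reads from and contributes to the Blackboard.
interface KnowledgeSource {
    boolean isApplicable(Blackboard bb);
    void contribute(Blackboard bb);
}

class Blackboard {
    final List<String> entries = new ArrayList<>();
    void post(String entry) { entries.add(entry); }
}

class Controller {
    // One control cycle: every applicable Knowledge Source gets to contribute.
    static void runCycle(Blackboard bb, List<KnowledgeSource> sources) {
        for (KnowledgeSource ks : sources) {
            if (ks.isApplicable(bb)) {
                ks.contribute(bb);
            }
        }
    }
}
```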
Extensibility is an important attribute software should exhibit. Computer Science is a highly dynamic field where new technologies appear, new scenarios need to be addressed and user requirements constantly evolve. Hence, Software Engineering advocates extensibility as a desirable and often even necessary attribute for any software. When it comes to the Web, extensibility becomes crucial, as the Web is mainly characterised by its rapid evolution. Our Opportunistic Reasoning Platform is extensible because it is highly modular from the software point of view, but also because the reasoning model it stems from paves the way for seamlessly adding new reasoning capabilities.
Adaptability is quite similar to extensibility, the main difference being that adaptability is concerned with modifying previously available components, while extensibility involves adding new functionality. As in the previous case, adaptability is of crucial importance for the Web, and it is supported by the modularity of our platform and the versatility of the underlying reasoning model (see Chapter 4).
Scalability stands for the ability to cope with an increased workload. Scalability can be particularly important in Web-based applications. For instance, a commercial system that contacts a company's providers may well need to be scalable in order to deal with new providers if the need arises. Our framework's scalability stems from two facts. First of all, the modular nature of the Blackboard Model makes it a particularly good solution for coping with new requirements, as it supports an easy integration of new Knowledge Sources. Additionally, the use of Web Services for triggering remote executions is a way to reduce the local workload, paving the way for a completely distributed execution. Hence, our Blackboard Framework represents an appropriate infrastructure for scaling up to new problem-solving scenarios of increased complexity and/or size.
Interoperability is perhaps the most distinctive attribute exhibited by our Blackboard Architecture with respect to traditional Blackboard implementations. We understand by interoperability the capability of the architecture to integrate and collaborate with other systems. We therefore include here the possibility to interact with remote systems offering some functionality via Web Services, and the capacity to retrieve and integrate external information from the Web as part of the reasoning activity. Moreover, interoperability in our framework also comes from its capacity to integrate existing applications (e.g., the hotel booking system, rights societies, etc.) as components of more sophisticated applications. Interoperability is achieved thanks to the use of ontologies, which play a central role in supporting the collaboration between diverse and heterogeneous systems, and also thanks to the use of (Semantic) Web Services to support the seamless and effective interoperation with other systems, be they external or legacy.
Genericity is of crucial importance. A platform for the development of diverse Knowledge-Based Systems for reasoning over the Web should, above anything else, be applicable to a variety of scenarios. We previously accounted for the genericity of the reasoning model in this respect and, by definition, a Blackboard Architecture is a generic implementation of the Blackboard Model. Therefore, our platform is applicable to a large variety of knowledge-intensive tasks. Additionally, the platform is generic with respect to the ontology representation language (provided it is a Web standard), and it allows plugging in heterogeneous software components. In this respect our framework can be seen as the core architecture of a software product line for knowledge-based reasoning over the Web, where a software product line is understood as (Bass et al., 2003): "... a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way. Chief among these core assets is the architecture that was designed to handle the needs of the entire family." Our Blackboard Architecture therefore indirectly profits from the benefits attributed to product line architectures. We believe that reusing such a widely analysed reasoning model, and design concepts whose quality attributes are known, can potentially lead to faster development processes and better quality systems (Bass et al., 2003).
At a more abstract level, our Blackboard Framework aligns well with current Software Engineering practices. The history of software development is characterised by a continuous search for higher-level abstractions that can support, improve and simplify designing, implementing and maintaining software. The first step was the subroutine. Next, the Object-Oriented programming paradigm appeared, and reusable design patterns were created as a means to deal with recurrent problems. Currently, Component-Based Software Engineering (CBSE) is regarded as the most important paradigm shift in Software Engineering since the invention of the subroutine (Clements, 2001). CBSE turns software development into a composition of reusable software components, making the development process faster, presumably simpler, and of better quality. Typically, components are plugged into a software infrastructure that handles their composition and coordination (Clements, 2001). Our Opportunistic Reasoning Platform is, in fact, a software infrastructure where components, be they knowledge-based (such as Knowledge Sources) or not (e.g., a company's Web Service), are plugged in to achieve the desired functionality. In this respect, Web Services, and the so-called Service-Oriented Approach, leverage the integration of (remote) software components from a purely technical activity to a more functionality-oriented task, which often simplifies the engineering of complex systems. Since our aim is to support the development of Knowledge-Based Systems for reasoning over the Web, that is, inherently complicated systems, this fact is particularly relevant. Additionally, our Opportunistic Reasoning Platform can be seen as a quite natural follow-up of Software Engineering's tendency to seek higher-level abstractions. In fact, our platform is, in itself, a reusable software infrastructure for the seamless composition and coordination of components. Moreover, since we identify ontologies as an integral part of our platform, we can benefit from their semantics to turn component composition into a semantics-driven and semantics-supported activity, as opposed to current practices, which are typically dominated by Interface Definition Languages where the semantics are implicit and cannot therefore take part in automated reasoning processes. Finally, it is worth remembering that our Opportunistic Reasoning Platform, in addition to being a generic software infrastructure, derives from a well-studied reasoning model which provides an even higher-level abstraction for the development of KBSs. This, as we further discuss in the next section, is particularly important for aligning Software Engineering guidelines with Knowledge Engineering best practices.
8.3.2 A Knowledge Engineering Perspective

Similarly to Software Engineering, the history of Knowledge Engineering is characterised by a continuous search for higher-level abstractions that can support, improve and simplify designing, implementing and maintaining Knowledge-Based Systems. In this section we review the history of the Knowledge Engineering discipline, and show how our Opportunistic Reasoning Platform aligns with current best practices. Initially, the development of Knowledge-Based Systems was seen as a transfer process, based on the assumption that the knowledge required was already available and just needed to be captured and implemented (Studer et al., 1998). This transfer process showed its limitations when facing the development of large-scale and maintainable Knowledge Bases, MYCIN being a good illustration (Studer et al., 1998). Moreover, it was recognised that the assumption underlying the transfer process was wrong, since experts usually apply tacit knowledge. These deficiencies led to a conceptual shift in the process of building Knowledge-Based Systems, which is now widely seen as a modelling activity (Schreiber et al., 1999). Hence, developing a Knowledge-Based System is regarded as a process which attempts to model problem-solving capabilities similar to those exhibited by experts. Motivated by the model-based approach, several modelling frameworks and methodologies have been developed, notably CommonKADS (which we have used in this thesis), MIKE, or the somewhat more pragmatic approach PROTÉGÉ-II; see (Studer et al., 1998) for an overview. In general, these different methods can roughly be characterised by their emphasis on the formal modelling of the expertise, and by the decomposition of the inferencing activities into several more manageable subtasks. They therefore play an important role in that they support breaking down and structuring the Knowledge Engineering process to produce better quality systems. As a result, the whole development process is partitioned and hence potentially simplified. For instance, the knowledge acquisition, knowledge modelling and development phases can be divided and distributed among a set of experts.
This separation of concerns brings modularity to KBSs and thus supports an easier reuse, extension and maintenance of the systems. Blackboard Systems were developed to bring knowledge modularity to the development of Knowledge-Based Systems, which was, at that time, dominated by monolithic approaches à la MYCIN. Structurally, the Blackboard approach is similar to production systems in that both have a working memory and some knowledge which is applied at runtime depending on the problem-solving state. In fact, production systems have often been used as the software support for developing Blackboard Systems. However, there is a substantial difference between the two approaches in terms of knowledge granularity, and this is perhaps the most direct and obvious contribution of the Blackboard Model from a Knowledge Engineering perspective. In Blackboard Systems, the expertise for a subtask of the overall problem-solving activity is encapsulated in a Knowledge Source, which is typically composed of a set of rules. Thus, Blackboard Systems provide a larger-grained knowledge modularity than rules, and therefore simplify the acquisition, modelling and maintenance of the knowledge. There is therefore a clear and close relationship between the inherent modularity and decomposition required by the Blackboard Model of reasoning and the one promoted by widely applied Knowledge Engineering frameworks and methodologies. The inherent separation of the expertise for performing some knowledge-intensive task underlying the Blackboard Model has direct implications with respect to the whole KBS development cycle.
As we previously introduced, the Blackboard approach is based on the divide and conquer principle. This principle is in line with the practices promoted and supported by Knowledge Engineering frameworks and methodologies, which put the emphasis on addressing problems at the Knowledge Level and advocate decomposition as the main methodological principle. The development of Blackboard Systems can and should first be addressed at the Knowledge Level, as promoted by the various methodologies, and subsequently be mapped into a final implementation. This last step, which is typically more an art than a science, finds in our Opportunistic Reasoning Platform an effective support. In fact, since the core reasoning infrastructure is closely based on the reasoning model, it strongly simplifies the process of constructing a system that is compliant with the models (Schreiber et al., 1999): "As a general rule, realizing a system will be simple and transparent if the gap between application and architecture specification is small, meaning that the knowledge and communication constructs map easily onto computational primitives in the architecture." Our platform is suitable for applying what is referred to as the structure-preserving design principle in CommonKADS (see Chapter 7 for an example), which ensures the design process is transparent and meets high quality criteria, such as reusability, maintainability or adaptability. In general, we believe our Opportunistic Reasoning Platform is well suited for applying any of the previous methodologies, since it assumes and relies on decomposing the problem solving both conceptually (e.g., the Blackboard Model, ontologies, etc.) and structurally (e.g., the Blackboard Framework, Web Services, PSMs, etc.). An important effort in the Knowledge Engineering community has been devoted to increasing and improving reusability. Reuse in Knowledge Engineering can roughly be divided into two main trends.
The first trend is concerned with reusing Knowledge Bases among different systems and is dominated by the use of ontologies as a means for representing sharable conceptualisations. Work in this area has mainly led to the use of ontologies for integrating heterogeneous sources of information, e.g., (Gruber, 1993b; Mena et al., 2000), or to reusable ontology libraries, e.g., (Laresgoiti et al., 1996). The second trend builds upon the use of domain-independent representations of knowledge to achieve the reuse of generic, though task-specific, problem-solving strategies, such as hierarchical classification or assessment (Chandrasekaran, 1986). The main idea underlying this approach was that making explicit the knowledge involved in any of these generic problem-solving strategies could support the reuse of Components of Expertise, to borrow the term from Steels (1990). Currently these generic problem-solving components are widely referred to as Problem-Solving Methods (PSMs) (Benjamins & Fensel, 1998; Studer et al., 1998) and are often defined as implementation- and domain-independent descriptions of some reasoning process performed by a KBS. Nowadays, PSMs are regarded as valuable components for the construction of Knowledge-Based Systems, as manifested by the major Knowledge Engineering frameworks (Benjamins & Fensel, 1998). The most comprehensive general library of PSMs is the CommonKADS library of task types, whose classification was used in Chapter 4. From the development point of view, perhaps the most extensive research was undertaken as part of the PROTÉGÉ-II project (Gennari et al., 1994, 1996, 1998), where the authors developed an architecture and a set of reusable components to build KBSs.
Recently, with the advent of Web Services and the Semantic Web, previous work on Problem-Solving Methods has gained even more momentum, and is being applied to the development of Web-oriented Knowledge-Based Systems (Benjamins et al., 1998; Benjamins, 2003; Crubézy et al., 2003; Domingue et al., 2004). To some extent, PSMs could be regarded as the adaptation of the Component-Based Development approach to the area of KBSs.
They present interesting characteristics for the development of KBSs in general, and particularly for our platform. PSM implementations can be regarded as reusable, domain-independent reasoning software components. At the same time, they provide the appropriate Knowledge Level description of the reasoning activity, which is precisely what an adequate application of the Blackboard Model requires. In fact, the very nature of our platform, essentially distributed and component-based, supports the extensive reuse of Knowledge Bases and Knowledge-Based Systems (be they PSMs or simply remote systems). Since the Blackboard Model is known to be well suited for the integration of diverse reasoning mechanisms, it paves the way for an extensive reuse of previously implemented, and perhaps distributed, PSMs, similar to the approach promoted by the IBrow project (Benjamins et al., 1998).
Chapter 9

Conclusions and Future Work
In this chapter we present some conclusions, summarising the key aspects of our research and reviewing the research questions introduced in the first chapter of this thesis. Finally, we introduce lines for further research.
9.1 Key Results

This thesis starts by introducing the Semantic Web, a foreseen extension of the current Web in which information is shared not solely between humans, but also between machines, in order to support automated processing and reasoning over the information spread across the Web. The Semantic Web is regarded as the prime enabler for benefiting from AI technologies on the Web, that is, for developing Knowledge-Based Systems for the Web, supporting the automation of certain tasks, improving the quality of others, and creating new forms of collaborative eBusiness over the Web. Part I of this thesis is devoted to exploring the current state-of-the-art technologies for the Semantic Web. The survey is constructed around two important hypotheses that together form the theoretical ground of so-called Knowledge-Based Systems. The first hypothesis, the Knowledge Level Hypothesis, establishes the dominant view in Knowledge Engineering of what knowledge is. According to this hypothesis, knowledge has to be understood as the capacity to act rationally, to achieve a goal. The second hypothesis, the Knowledge Representation Hypothesis, establishes that a Knowledge-Based System's behaviour originates from propositional sentences that we can understand as representations of what the system knows. Derived from the Knowledge Representation Hypothesis, achieving rational behaviour with Knowledge-Based Systems is widely regarded as a task involving the explicit representation of the system's knowledge and automated reasoning over this knowledge.
The first part of this thesis therefore surveys the most prominent Knowledge Representation formalisms and Knowledge Reasoning tools and techniques for the Semantic Web. In this thesis, as is the case for most prominent Knowledge Engineering methodologies, we characterise knowledge as being of two kinds: static and dynamic. The static knowledge is concerned with representing conceptualisations of some domain of interest, whereas the dynamic knowledge specifies how problem solving should occur. Even though there exist representation languages that can represent both kinds of knowledge, the static and the dynamic knowledge are often represented separately from each other for practical reasons. Ontologies, understood in this thesis as a formal, explicit and sharable specification of a conceptualisation for a domain of interest, appear as the most prominent formalism for representing the static knowledge, for they support sharing and reusing Knowledge Bases, and enable an effective communication between software systems.
For these very reasons, ontologies are regarded as the cornerstone of the Semantic Web and are the subject of review in Chapter 2. In particular, we present XML, which is the general encoding mechanism used for representing data and knowledge in the Semantic Web. We introduce RDF, which supports representing relationships between resources on the Web, and RDFS, which introduces some simple ontological concepts such as Classes, Subclasses, Properties and Subproperties. Further, we present the Web Ontology Language (OWL) and its three sublanguages OWL Lite, OWL DL and OWL Full. These languages provide additional constructs for defining more complex conceptualisations. The most prominent is OWL DL, which is currently a W3C recommendation and is likely to become the standard ontology representation language for the Web. Ontologies have a purposely restricted expressivity in order to retain appealing characteristics, to better support sharing Knowledge Bases and reasoning efficiently.
In order to complement this restricted expressivity, and thus to support defining how problem solving should occur, rule languages have been introduced. In Chapter 2 we present the most prominent languages proposed so far for the Semantic Web, namely RuleML and SWRL. Chapter 3 is devoted to Knowledge Reasoning on the Web. We present the main logics that have been proposed for automating reasoning tasks, such as First-Order Logic and Description Logics. We introduce the most relevant computational characteristics of Knowledge Reasoning, such as the soundness and completeness of the inferences, or the monotonicity of the reasoning process. The review highlights the important tradeoff between the expressiveness of a logic and the tractability of its inferences.
We then present the most prominent techniques for controlling the reasoning process, such as Backward-Chaining, Forward-Chaining or even Problem-Solving Methods.
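As a minimal illustration of one of these control regimes, the following toy forward chainer (a deliberately simplified sketch, not any particular engine's API) applies rules until no new facts can be derived:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Toy forward chaining: each rule maps the set of known facts to the facts
// it can derive from them; the loop repeats until a fixpoint is reached,
// i.e. until no rule produces anything new.
class ForwardChainer {
    static Set<String> saturate(Set<String> facts,
                                List<Function<Set<String>, Set<String>>> rules) {
        Set<String> known = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Function<Set<String>, Set<String>> rule : rules) {
                // addAll returns true only when a genuinely new fact appears
                changed |= known.addAll(rule.apply(known));
            }
        }
        return known;
    }
}
```

Backward chaining inverts the direction: it starts from a goal and searches for rules whose conclusions match it, recursing on their premises.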
Finally, the second part of Chapter 3 presents an extensive overview of current tools for manipulating ontologies, for efficiently storing and querying Knowledge Bases, and of general-purpose reasoning systems such as rule engines. The review serves to present the software available for developing Knowledge-Based Systems for the Web, but it also shows the need to effectively integrate a set of heterogeneous tools in order to support advanced reasoning over the Web.

Part II is devoted to describing our Opportunistic Reasoning Platform, an infrastructure for the development of Knowledge-Based Services over the Web. Our Opportunistic Reasoning Platform consists of a widely applied and extensively analysed reasoning model (the Blackboard Model), a skeletal generic implementation of the model (the Blackboard Framework), and a number of architectural and technical decisions. It supports implementing Knowledge-Based Services where the different components of the application can be deployed on multiple physical systems connected through Web Services and sustained by ontologies. Chapter 4 is dedicated to presenting and exploring the Blackboard Model of reasoning. The analysis illustrates its most relevant characteristics, and establishes its appropriateness for performing a wide range of knowledge-intensive tasks. Finally, we assess its suitability for reasoning over the Web, accounting for its opportunism, its capacity to integrate diverse sources of knowledge, its ability to deal with uncertainty, etc. In Chapter 5 we present our infrastructure for the development of Knowledge-Based Services for the Web. We introduce the technical details of the core reasoning component of our platform, which is a novel implementation of a Blackboard Framework where Knowledge Sources have been externalised to better support interacting with external agents.
We show the internal structure of Knowledge Sources, which is based on the Model-View-Controller design pattern, in order to establish KSs as mediators between the system and the users. The use of Web Services technologies is also included as a means to support the seamless integration of external systems in the overall problem-solving architecture. Finally, we introduce the mapping of ontology concepts into a hierarchy of Java objects conforming to JavaBeans conventions, in order to support the seamless but effective integration of Knowledge Bases with additional reasoning components such as rule engines.

In the third part of this thesis we present two case studies developed on top of our Opportunistic Reasoning Platform.
The Online Design of Events
Application is a Web-based Design Support System that aims to support event organisers in the process of organising meetings, conferences, and the like (see Chapter 6). In this application, event organisation is understood as a process involving, not only PTSS, but also other providers, such as audiovisual equipment providers, restaurants and hotels, in a complex value chain that is needed when hosting events. Therefore, collaboration between dierent entities plays an important role in ODEA oering new opportunities for smart collaborative eBusiness. The Events Design Process Model that underlies ODEA is derived from the
KLDE v1 and represents one of the very first applications of this theory to the development of a system. The result is a consistent model of the events design process which can be explained based on a general theory of design. With respect to the theory, ODEA proves the validity and utility of the KLDE v1 for explaining this particular designing task, and it illustrates the importance and the interest of creating and using Knowledge Level theories in Artificial Intelligence.

In Chapter 7 we describe Xena, a Web Services based music rights clearing organisation. The system is developed on top of our Opportunistic Reasoning Platform, and supports the automated clearance of rights for the music broadcast by Internet radio stations, by automatically contacting banks and rights societies over the Web. The main contribution of this chapter is what we have termed the Real-World Service-Oriented Approach: a novel approach to the development of eServices solutions, in which we attempt to bridge the gap between the business world and the IT world by means of our Opportunistic Reasoning Platform. Our particular approach is guided by an in-depth business analysis that represents the real, non-technical requirements the final implemented system complies with.

The last part is devoted to evaluating the results obtained and concluding the thesis. In the introductory chapter we raised the central research question of this thesis: Is the blackboard approach appropriate for the development of fully-fledged Knowledge-Based Services for the Web?
This general question was further detailed in:

1. Is the Blackboard Model appropriate, in theory, for the development of Knowledge-Based Services for the Web?

2. What needs to be done to adapt it to the Web?

3. How does it align with current Software and Knowledge Engineering best practices?

Throughout the evaluation chapter we have addressed these questions. First of all, we have confirmed that the modifications performed over traditional Blackboard Frameworks have retained the essence of the reasoning model. Thus, research carried out over Blackboard Frameworks also holds for our particular implementation. Next, we have evaluated the platform with respect to the case studies. Both case studies illustrate that the infrastructural support supplied by our Opportunistic Reasoning Platform has been particularly successful. They show that our reasoning platform embraces the main principles of the Web and profits from its inherent strengths, paving the way for brand-new applications of Knowledge-Based technologies over the Internet, such as the ones we have implemented in our case studies. The case studies do not establish truth in the manner of tautologies; still, they provide good evidence that the blackboard approach is not only applicable for developing Knowledge-Based Services, but particularly appealing.
Applying the blackboard approach to the Web did, however, require performing a few modifications over traditional implementations, as detailed in Chapter 5. For instance, Knowledge Sources had to be externalised, ontologies had to be embraced and appropriately integrated into the overall reasoning infrastructure, and Web Services had to be applied in order to support the seamless interoperation with remote systems over the Web. Finally, the evaluation results have been generalised and contrasted with current Software Engineering and Knowledge Engineering best practices. This evaluation accounts for the particularly appealing characteristics exhibited by the blackboard approach in general, but also by our particular reasoning platform. Our Opportunistic Reasoning Platform has been shown to honour many of the very best practices in Knowledge Engineering, such as the structure-preserving design principle, the use of Problem-Solving Methods, or the strict application of the Knowledge Level Hypothesis. Similarly, from a Software Engineering perspective, our reasoning platform complies with the main principles, such as modularity, extensibility, or adaptability. Additionally, our Opportunistic Reasoning Platform presents the important benefit of aligning well with the latest trends in Software Engineering, such as the component-based and service-oriented approaches to developing systems.
9.2 Future Work

The research presented in this thesis is characterised by its multidisciplinarity. We have dealt with Knowledge Engineering, Knowledge Representation, Knowledge Reasoning, Software Engineering, eBusiness, Design Support Systems, etc. Not surprisingly, our research opens up a plethora of possibilities for further research stemming from each of these very fields. The rest of this section is devoted to introducing some. We first present the lines for further research that
directly derive from our work on the reasoning platform, and then move on to further research motivated by the case studies.
9.2.1 The Opportunistic Reasoning Platform

The development of our Opportunistic Reasoning Platform has been driven to some extent by the case studies, since these have set out the direct requirements the platform should fulfil. It would therefore be necessary to test the platform in other applications, with different requirements which could possibly raise the need for additional functionality, or for changes in the currently available functionality. A larger set of case studies would indeed strengthen our research and presumably lead to a more robust, powerful, and versatile platform. Still, the case studies we have carried out using our platform have already served to identify some aspects that would require further research and development:
The inclusion of a generic truth maintenance mechanism: The current implementation of our platform does not include any truth maintenance system (Russell & Norvig, 2003) per se. Instead, it delegates such functionality to the specific applications developed on top of it. As a consequence, applications need to include their own mechanisms to ensure the consistency of the workspace. Since such a feature is likely to be needed in almost every system built on top of our platform, there would be a clear benefit in including a general-purpose truth maintenance mechanism, ready to be used by any application. Including a general mechanism could support nonmonotonic reasoning to better profit from the opportunism of the platform, and could also take better advantage of computational resources. Indirectly, this raises the controversial question as to whether applications that reason over the Web could apply the Closed-World Assumption (Brachman & Levesque, 2004), which, at first glance, might seem to be against the Semantic Web principles. Further research would be required in this respect.
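By way of illustration, a justification-based mechanism of the kind we have in mind could look like the following sketch (our own simplification with hypothetical names, not part of the platform): every derived fact records the facts that support it, and retracting a fact cascades to all facts left without support.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

// Minimal justification-based truth maintenance sketch. Premises carry an
// empty justification; derived facts record the facts they depend on.
// Retracting a fact cascades to every fact whose justification is no
// longer fully supported, keeping the workspace consistent.
public class SimpleTms {
    private final Map<String, Set<String>> justifications = new HashMap<>();

    public void addPremise(String fact) {
        justifications.put(fact, new HashSet<>());
    }

    public void addDerived(String fact, Set<String> support) {
        justifications.put(fact, new HashSet<>(support));
    }

    public void retract(String fact) {
        justifications.remove(fact);
        boolean changed = true;
        while (changed) {   // cascade until no unsupported fact remains
            changed = false;
            Iterator<Map.Entry<String, Set<String>>> it =
                    justifications.entrySet().iterator();
            while (it.hasNext()) {
                for (String s : it.next().getValue()) {
                    if (!justifications.containsKey(s)) {
                        it.remove();   // a supporting fact is gone: retract too
                        changed = true;
                        break;
                    }
                }
            }
        }
    }

    public boolean holds(String fact) {
        return justifications.containsKey(fact);
    }
}
```

A real mechanism would additionally handle multiple alternative justifications per fact and interact with the blackboard control cycle; the sketch only shows the dependency-propagation core.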
The integration of a general-purpose mapping module: The Online Design of Events Application (see Chapter 6) has served us to illustrate how third-party Knowledge-Based components (i.e., the Configuration Tool) can be integrated into applications. The integration relied on the mapping between ontologies as a means to support effective communication. The current implementation performs such a mapping procedurally, in a rather ad hoc manner. Since mapping is likely to be a necessary feature in other scenarios, integrating a general mapping module would simplify to an important extent the integration of third-party ontology-based software. It would lead to a more reliable software development process and, as was demonstrated in the Protégé-II project (Gennari et al., 1994, 1996, 1998), would better support reusing software and Knowledge Bases. With the inclusion of such a module, ontology mappings could be declaratively specified using a graphical tool, and interpreted at runtime by the generic mapping module.
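Such a declarative treatment of mappings can be sketched as follows (an illustration of ours with hypothetical names, deliberately reduced to term-to-term correspondences): the mapping is plain data, loadable from a file produced by a graphical tool and interpreted at runtime, instead of being hard-wired into conversion code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a declarative ontology-mapping table interpreted
// at runtime: terms of a source ontology are translated to the target
// ontology by lookup, rather than by hand-written procedural code.
public class OntologyMapper {
    private final Map<String, String> termMap = new HashMap<>();

    // One mapping entry, e.g. loaded from a file a graphical tool produced.
    public void addMapping(String sourceTerm, String targetTerm) {
        termMap.put(sourceTerm, targetTerm);
    }

    // Translate a term; unknown terms pass through unchanged.
    public String translate(String sourceTerm) {
        return termMap.getOrDefault(sourceTerm, sourceTerm);
    }
}
```

A full module would also cover structural mappings (property renamings, value transformations, one-to-many correspondences); the table lookup above only conveys the declarative principle.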
Provide homogeneous access to software components: So far, an extensive number of toolkits and systems have been developed, each of which
defines its own API, its own querying language (when applicable), and its own rule language (when applicable). The direct consequence is that applications can hardly migrate from one tool to another, since doing so requires adapting an important part of the program. Homogeneous access to ontologies, Knowledge Bases, and general-purpose reasoning systems could reduce to an important extent the overhead of this migration. Moreover, it would enable implementing a generic and reusable Knowledge Base Integration module for our platform (see Chapter 5), which has, up to now, been implemented specifically for the backend employed (i.e., Sesame in the case of ODEA, and Jena in the case of Xena). In this respect it would be interesting to pursue the lines of research carried out in the context of SPARQL (Clark, 2005; Prud'hommeaux & Seaborne, 2004) and the JSR-94 API (Scott et al., 2003).
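The kind of homogeneous access we have in mind can be sketched with a backend-neutral interface (our own illustration with hypothetical names; real adapters for Sesame or Jena would implement it): applications program against the interface, so migrating between stores does not touch application code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical backend-neutral Knowledge Base interface. Concrete
// adapters for stores such as Sesame or Jena would implement it behind
// the scenes; the application only sees this contract.
interface KnowledgeBase {
    void assertFact(String subject, String predicate, String object);
    // null acts as a wildcard in any position
    List<String[]> query(String subject, String predicate, String object);
}

// A trivial in-memory implementation, standing in for a real triple store.
class InMemoryKnowledgeBase implements KnowledgeBase {
    private final List<String[]> triples = new ArrayList<>();

    public void assertFact(String s, String p, String o) {
        triples.add(new String[] { s, p, o });
    }

    public List<String[]> query(String s, String p, String o) {
        List<String[]> matches = new ArrayList<>();
        for (String[] t : triples) {
            if ((s == null || s.equals(t[0]))
                    && (p == null || p.equals(t[1]))
                    && (o == null || o.equals(t[2]))) {
                matches.add(t);
            }
        }
        return matches;
    }
}
```

Query and rule languages would, in the same spirit, be hidden behind standard facades such as SPARQL and JSR-94, rather than behind each vendor's own dialect.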
Further research on the use of (Semantic) Web Services: Our two case studies have shown that these technologies help to leverage the applications developed on top of our platform into truly Web-oriented Knowledge-Based Services. They are of great importance for effectively reasoning over the Web, and it would therefore be particularly important to perform more extensive research on their use in our reasoning platform. Conversely, our platform is particularly appropriate for testing these technologies, which, in some cases, are still relatively immature. Thus, testing the latest results obtained in the context of Web Services choreography languages, and in the context of Semantic Web Services, notably OWL-S (OWL Services Coalition, 2003) and the Web Services Modelling Framework (Fensel & Bussler, 2002), would presumably be very beneficial both for the platform and for (Semantic) Web Services research in general.
9.2.2 The Case Studies

Our case studies have helped us to identify new lines for further research, concerning the KLDE v1 and the Real-World Service-Oriented Approach.

The KLDE v1 has been particularly useful for the development of the Design Support System. There remain, however, aspects for further research. For instance, as we previously introduced, creating design support systems requires a good understanding of designing, as well as of supporting users through computers. There is a need for a theoretical understanding of supporting users through software systems. Such a theory of computer-based support would need to establish quality criteria regarding the creation of support systems. Based on both theories (i.e., the theory of computer-based support and the KLDE v1), one could create a methodology for obtaining design support systems that are good from both perspectives. The methodology should help the developer in the task of establishing the scope of the design support system. In other words, the methodology should help in determining which processes should be automated and to what extent. In fact, as important as automating tasks is keeping the user actively and effectively involved during the design. After all, design support systems aim to support designers in the task of designing. An excessively ambitious approach could lead to a user-unfriendly system which intends to perform most of the design, leaving too little to the user's will. Conversely, offering the user only loose support is obviously not desirable either.
Once the scope of the system is established, the developer would be able to determine which of the kinds of knowledge the KLDE v1 identifies should be modelled, which it would be beneficial to include, and which we should not be tempted to acquire and represent in the design support system.

Our Real-World Service-Oriented Approach has proved to be an appealing approach to the development of profitable collaborative eBusinesses. However, there still remain important issues that it would be necessary to tackle in the future. For instance, further case studies would be required in order to better establish the rules and guidelines that lead from the business analysis to the business process to the actual implementation. Moreover, it would be interesting to better profit from the business knowledge and the flexibility of our Opportunistic Reasoning Platform in order to support business practitioners in the task of declaratively specifying eBusiness delivery processes. Doing so would better support business practitioners in the adoption of executive policies, which would immediately and automatically be applied in the eBusiness delivery processes. Finally, given the existing modelling tools and the preliminary research conducted in the context of Xena, it would seem possible to streamline the creation of new eBusiness solutions in a semi-automated process. Such a process would, starting from the business analyses, and based on the Real-World Service-Oriented Approach, generate a skeletal system compliant with the business rules, which could support advanced reasoning over the Web as supported by our platform.
Bibliography
Aduna BV. 2005 (April). Rio RDF API. http://www.openrdf.org/. Last visited: April 2005.

Aguado, J., Bernaras, A., Smithers, T., Pedrinaci, C., & Cendoya, M. 2003. Using Ontology Research in Semantic Web Applications: a Case Study. In: Proceedings of the 10th Conference of the Spanish Association for Artificial Intelligence, CAEPIA 2003, and 5th Conference on Technology Transfer, TTIA 2003.

Aguado, J., Pedrinaci, C., Bernaras, A., Smithers, T., Calderón, H., & Tellería, J. 2004 (September). Online Design of Events Application Trial, Version 2. Available from: http://obelix.e3value.com. OBELIX IST-2001-33144 DELIVERABLE 7.8.

Akkermans, H., Baida, Z., Gordijn, J., Peña, N., Altuna, A., & Laresgoiti, I. 2004. Value Webs: Ontology-Based Bundling of Real-World Services. IEEE Intelligent Systems, 19(4), 57-66.

Alexaki, S., Christophides, V., Karvounarakis, G., Plexousakis, D., & Tolle, K. 2001 (May). The RDFSuite: Managing Voluminous RDF Description Bases. Pages 1-13 of: 2nd International Workshop on the Semantic Web (SemWeb'01), in conjunction with Tenth International World Wide Web Conference (WWW10).

Altuna, A., Bilbao, S., Cabrerizo, A., García, E., Laresgoiti, I., Lázaro, J. M., Peña, N., Sastre, D., & Urrutia, J. A. 2003 (February). Multi-product Configuration Tool (Version 2). OBELIX IST-2001-33144 DELIVERABLE 6.4.

Altuna, A., Cabrerizo, A., Laresgoiti, I., Peña, N., & Sastre, D. 2004 (September). Co-operative and Distributed Configuration. In: Net.ObjectDays 2004.

Alur, D., Crupi, J., & Malks, D. 2003. Core J2EE Patterns: Best Practices and Design Strategies. Sun Microsystems Press.

Angele, J., & Lausen, G. 2004. Ontologies in F-logic. Chap. 2, pages 29-50 of: Staab, S., & Studer, R. (eds), Handbook on Ontologies. Springer-Verlag.

Angele, J., Fensel, D., Landes, D., & Studer, R. 1998. Developing Knowledge-Based Systems with MIKE. Automated Software Engineering: An International Journal, 5(4), 389-418.
Antoniou, G., & van Harmelen, F. 2004. Web Ontology Language: OWL. Chap. 4, pages 67-92 of: Staab, S., & Studer, R. (eds), Handbook on Ontologies. Springer-Verlag.

Baader, F., Horrocks, I., & Sattler, U. 2004. Description Logics. Chap. 1, pages 3-28 of: Staab, S., & Studer, R. (eds), Handbook on Ontologies. Springer-Verlag.

Baida, Z., Gordijn, J., Morch, A. Z., Sæle, H., & Akkermans, H. 2004. Ontology-Based Analysis of e-Service Bundles for Networked Enterprises. In: Proceedings of The 17th Bled eCommerce Conference (Bled 2004).

Balkany, A., Birmingham, W. P., & Tommelein, I. D. 1991. A knowledge-level analysis of several design tools. Pages 921-940 of: Gero, J. S. (ed), Artificial Intelligence in Design.

Ballard, D. R. 1994. A Knowledge-Based Decision Aid for Enhanced Situational Awareness. In: 13th Digital Avionics Systems Conference. Available at: http://www.reticular.com/Library/Sa.pdf.

Banks, D., Cayzer, S., Dickinson, I., & Reynolds, D. 2002 (November). The ePerson Snippet Manager: a Semantic Web Application. Technical Report HPL-2002-328 20021122. Hewlett-Packard.

Barstow, A. 2001 (April). Survey of RDF/Triple Data Stores (W3C). http://www.w3.org/2001/05/rdf-ds/DataStore. Last Visited: April 2005.

Bass, L., Clements, P., & Kazman, R. 2003. Software Architecture in Practice. Second edn. Addison-Wesley.

BBN Technologies. 2004. SWeDE: The Semantic Web Development Environment. http://owl-eclipse.projects.semwebcentral.org/index.html. Last Visited: June 2005.

Bechhofer, S. 2004 (April). Hoolet. http://owl.man.ac.uk/hoolet/. Last visited: April 2005.

Bechhofer, S., Volz, R., & Lord, P. 2003 (October). Cooking the Semantic Web with the OWL API. Pages 659-675 of: First International Semantic Web Conference 2003 (ISWC 2003).

Beckett, D., & McBride, B. 2004 (February). RDF/XML Syntax Specification (Revised). http://www.w3.org/TR/rdf-syntax-grammar/. Last Visited: March 2005.

Benjamins, R. 2003. Web Services Solve Problems, and Problem-Solving Methods Provide Services. IEEE Intelligent Systems, 18(1), 72-76.

Benjamins, R., & Fensel, D. 1998. Editorial: Problem-Solving Methods. International Journal of Human Computer Studies, 49(4), 305-313.

Benjamins, R., Decker, S., Fensel, D., Motta, E., Plaza, E., Schreiber, G., Studer, R., & Wielinga, B. 1998. IBROW3 - An Intelligent Brokering Service for Knowledge-Component Reuse on the World Wide Web. Pages 25-30 of: Workshop on Applications of Ontologies and Problem-Solving Methods, held in conjunction with ECAI'98.
Bernaras, A. 1994. Models of Design in the CommonKADS Framework. Pages 499-516 of: Gero, J. S. (ed), Artificial Intelligence in Design.

Berners-Lee, T. 1998. Notation 3. Tech. rept. World Wide Web Consortium (W3C). Last Visited: March 2005.

Berners-Lee, T. 2002a (September). The Semantic Web as a language of logic. http://www.w3.org/DesignIssues/Logic.html. Last Visited: April 2005.

Berners-Lee, T., Fielding, R., & Masinter, L. 1998 (August). Uniform Resource Identifiers (URI): Generic Syntax. http://www.ietf.org/rfc/rfc2396.txt.

Berners-Lee, T., Hendler, J., & Lassila, O. 2001. The Semantic Web. Scientific American, May, 34-43.

Berners-Lee, T. 2002b (February). Web Architecture from 50,000 feet. http://www.w3.org/DesignIssues/Architecture. Last visited: May 2005.

Biron, P. V., & Malhotra, A. 2001 (May). XML Schema Part 2: Datatypes (W3C Recommendation). http://www.w3.org/TR/xmlschema-2/. Last visited: March 2005.

Boag, S., Chamberlin, D., Fernández, M. F., Florescu, D., Robie, J., & Siméon, J. 2005 (April). XQuery 1.0: An XML Query Language. W3C Working Draft 04 April 2005. http://www.w3.org/TR/xquery/. Last Visited: April 2005.

Boley, H., Tabet, S., & Wagner, G. 2001. Design Rationale of RuleML: A Markup Language for Semantic Web Rules. In: International Semantic Web Working Symposium (SWWS).

Booth, D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C., & Orchard, D. 2004 (February). Web Services Architecture. http://www.w3.org/TR/ws-arch/. Last visited: March 2005.

Brachman, R., & Levesque, H. 2004. Knowledge Representation and Reasoning. Morgan Kaufmann.

Bray, T., Paoli, J., Sperberg-McQueen, C. M., Maler, E., Yergeau, F., & Cowan, J. 2004 (February). Extensible Markup Language (XML) 1.1 (W3C Recommendation). http://www.w3.org/TR/2004/REC-xml11-20040204/. Last visited: March 2005.

Brickley, D., & Guha, R. V. 2002. RDF Vocabulary Description Language 1.0: RDF Schema. http://www.w3.org/TR/rdf-schema.

Broekstra, J., & Kampman, A. 2003 (November). SeRQL: A Second Generation RDF Query Language. Position Paper, SWAD-Europe Workshop on Semantic Web Storage and Retrieval. http://www.w3.org/2001/sw/Europe/events/20031113-storage/positions/aduna.pdf.

Broekstra, J., Kampman, A., & van Harmelen, F. 2002. Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema. International Semantic Web Conference (ISWC 2002).
Buck, P., Clarke, B., Lloyd, G., Poulter, K., Smithers, T., Tang, M. X., Tomes, N., Floyd, C., & Hodgkin, E. 1991. The Castlemaine Project: development of an AI-based design support system. Pages 583-601 of: Gero, J. S. (ed), Artificial Intelligence in Design.

Carter, I. M., & MacCallum, K. J. 1991. A software architecture for design co-ordination. Pages 859-881 of: Gero, J. S. (ed), Artificial Intelligence in Design.

Carver, N., & Lesser, V. R. 1992 (October). The Evolution of Blackboard Control Architectures. Tech. rept. UM-CS-1992-071.

Cendoya, M., Bernaras, A., Smithers, T., Aguado, J., Pedrinaci, C., Laresgoiti, I., García, E., Gómez, A., Peña, N., Morch, A. Z., Sæle, H., Langdal, B. I., Gordijn, J., Akkermans, H., Omelayenko, B., Schulten, E., Hazelaar, B., Sweet, P., Schnurr, H. P., Oppermann, H., & Trost, H. 2002 (August). Business needs: Applications and Tools Requirements. Available from: http://obelix.e3value.com. OBELIX IST-2001-33144 Deliverable 3.

Cendoya, M., Bernaras, A., Smithers, T., Pedrinaci, C., & Aguado, J. 2003 (November). Online Design of Events Application Trial, Version 1. Available from: http://obelix.e3value.com. OBELIX IST-2001-33144 DELIVERABLE 7.7.

Cerebra Inc. 2005. Cerebra. http://cerebra.com/. Last visited: April 2005.

Chandrasekaran, B. 1986. Generic tasks in knowledge based reasoning: high-level building blocks for expert system design. IEEE Expert, Fall 1986, 23-30.

Chandrasekaran, B. 1990. Design problem solving: a task analysis. AI Magazine, 11(4), 59-71.

Christensen, E., Curbera, F., Meredith, G., & Weerawarana, S. 2001 (March). Web Services Description Language (WSDL) 1.1. http://www.w3.org/TR/wsdl. W3C Note.

Clark, J. 1999 (November). XSL Transformations (XSLT) Version 1.0 (W3C Recommendation). Available at: http://www.w3.org/TR/xslt. Last visited: April 2005.

Clark, J., & DeRose, S. 1999 (November). XML Path Language (XPath) Version 1.0. W3C Recommendation 16 November 1999. http://www.w3.org/TR/xpath. Last Visited: April 2005.

Clark, K. G. 2005 (January). SPARQL Protocol for RDF. W3C Working Draft. http://www.w3.org/TR/rdf-sparql-protocol/. Last Visited: April 2005.

Clements, P. C. 2001. From Subroutines to Subsystems: Component-Based Software Development. Chap. 11, pages 189-198 of: Heineman, G. T., & Council, W. T. (eds), Component-Based Software Engineering: putting the pieces together. Addison-Wesley.

Cohen, J. 1988. A view of the origins and development of Prolog. Communications of the ACM, 31(1), 26-36.
Connolly, D., van Harmelen, F., Horrocks, I., McGuinness, D. L., Patel-Schneider, P. F., & Stein, L. A. 2001 (December). DAML+OIL (March 2001) Reference Description. http://www.w3.org/TR/daml+oil-reference. Last Visited: March 2005.

Consortium, OBELIX. 2004 (December). OBELIX: Ontology Based ELectronic Integration of CompleX Products and Value Chains (IST-2001-33144). http://www.cs.vu.nl/ obelix/. Last Visited: April 2005.

Consortium, OntoWeb. 2005 (March). OntoWeb: Ontology-based information exchange for knowledge management and electronic commerce (IST-2000-29243). http://ontoweb.org/. Last Visited: April 2005.

Corkill, D. D. 1991. Blackboard systems. Journal of AI Expert, 9(6), 40-47.

Corkill, D. D., Gallagher, K. Q., & Murray, K. E. 1988. GBB: A Generic Blackboard Development System. Chap. 26, pages 503-517 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Cranefield, S. 2001. Networked Knowledge Representation and Exchange using UML and RDF. Journal of Digital Information.

Cranefield, S., & Purvis, M. K. 1999. UML as an Ontology Modelling Language. In: Workshop on Intelligent Information Integration, 16th International Joint Conference on Artificial Intelligence (IJCAI-99).

Crubézy, M., & Musen, M. A. 2004. Ontologies in Support of Problem Solving. Chap. 16, pages 321-341 of: Staab, S., & Studer, R. (eds), Handbook on Ontologies. Springer-Verlag.

Crubézy, M., Lu, W., Motta, E., & Musen, M. A. 2003. Configuring Online Problem-Solving Resources with the Internet Reasoning Service. IEEE Intelligent Systems, 18(2), 34-42.

Davis, R. 2001. Knowledge-Based Systems. Pages 430-432 of: Wilson, R. A., & Keil, F. C. (eds), The MIT Encyclopedia Of The Cognitive Sciences. MIT Press.

Dean, M., Schreiber, G., Bechhofer, S., van Harmelen, F., Hendler, J., Horrocks, I., McGuinness, D., Patel-Schneider, P. F., & Stein, L. A. 2004 (February). OWL Web Ontology Language Reference. http://www.w3.org/TR/owl-ref/. Last Visited: March 2005.

Ding, L., Finin, T., Joshi, A., Pan, R., Cost, R. S., Peng, Y., Reddivari, P., Doshi, V. C., & Sachs, J. 2004. Swoogle: A Search and Metadata Engine for the Semantic Web. In: Proceedings of the Thirteenth ACM Conference on Information and Knowledge Management. ACM Press.

Domingue, J., Cabral, L., Hakimpour, F., Sell, D., & Motta, E. 2004 (September). IRS-III: A Platform and Infrastructure for Creating WSMO-based Semantic Web Services. In: Bussler, C., Fensel, D., Lausen, H., & Oren, E. (eds), Proceedings of the WIW 2004 Workshop on WSMO Implementations. CEUR Workshop Proceedings, vol. 113. ISSN 1613-0073.
Eberhart, A. 2003. Ontology-Based Infrastructure for Intelligent Applications. Ph.D. thesis, University of Saarbrücken.

Ellis, D. P. W. 1996. Prediction-driven computational auditory scene analysis. Ph.D. thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.

Endrei, M., Ang, J., Arsanjani, A., Chua, S., Comte, P., Krogdahl, P., Luo, M., & Newling, T. 2004. Patterns: Service-Oriented Architecture and Web Services. IBM Redbooks. Available at http://www.redbooks.ibm.com/redbooks/pdfs/sg246303.pdf.

Engelmore, R. S., & Morgan, A. J. 1988a. Blackboard Systems. The Insight Series in Artificial Intelligence. Addison-Wesley. ISBN: 0-201-17431-6.

Engelmore, R. S., & Morgan, A. J. 1988b. Conclusion. Chap. 30, pages 561-574 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Engelmore, R. S., Morgan, A. J., & Nii, H. P. 1988a. Early Applications (1975-1980). Chap. 5, pages 125-134 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Engelmore, R. S., Morgan, A. J., & Nii, H. P. 1988b. Hearsay-II. Chap. 2, pages 25-29 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Engelmore, R. S., Morgan, A. J., & Nii, H. P. 1988c. Introduction. Chap. 1, pages 1-22 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Erman, L. D., London, P. E., & Fickas, S. F. 1988a. The Design and an Example Use of Hearsay-III. Chap. 13, pages 281-295 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Erman, L. D., Hayes-Roth, F., Lesser, V. R., & Reddy, D. R. 1988b. The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty. Chap. 3, pages 31-86 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Feigenbaum, E. A. 1988. Foreword. In: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. The Insight Series in Artificial Intelligence. Addison-Wesley. ISBN: 0-201-17431-6.

Fensel, D., & Benjamins, V. R. 1998. An Architecture for Reusing Problem-Solving Components. Pages 63-67 of: European Conference on Artificial Intelligence.

Fensel, D., & Bussler, C. 2002. The Web Service Modeling Framework WSMF. Electronic Commerce Research and Applications, 1(2), 113-137.

Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., & Berners-Lee, T. 1999 (June). Hypertext Transfer Protocol - HTTP/1.1. http://www.ietf.org/rfc/rfc2616.txt. Last visited: September 2005.
Franconi, E. 2004. Using Ontologies. IEEE Intelligent Systems, 19(1), 76-77.

Frauenfelder, M. 2001 (November). A Smarter Web. Technology Review.

Friedman-Hill, E. 2003. Jess in Action. Manning Publications Co.

Gamma, E., Helm, R., Johnson, R., & Vlissides, J. 1994. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley. ISBN: 0201633612.

Garlan, D., Allen, R., & Ockerbloom, J. 1995. Architectural Mismatch: Why Reuse Is So Hard. IEEE Software, 12(6), 17-26.

Genesereth, M. R., & Nilsson, N. J. 1987. Logical foundations of artificial intelligence. Morgan Kaufmann Publishers Inc.

Gennari, J. H., Tu, S. W., Rothenfluh, T. E., & Musen, M. A. 1994. Mapping domains to methods in support of reuse. International Journal of Human-Computer Studies, 41(3), 399-424.

Gennari, J. H., Stein, A. R., & Musen, M. A. 1996. Reuse for Knowledge-Based Systems and CORBA Components. In: Gaines, B. R., & Musen, M. A. (eds), 10th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop.

Gennari, J. H., Cheng, H., Altman, R. B., & Musen, M. A. 1998. Reuse, CORBA, and Knowledge-Based Systems. International Journal of Human-Computer Studies, 49(4), 523-546.

Goldman, N. M. 2003. Ontology-Oriented Programming: Static Typing for the Inconsistent Programmer. Pages 850-865 of: Fensel, D., Sycara, K. P., & Mylopoulos, J. (eds), International Semantic Web Conference. Lecture Notes in Computer Science, vol. 2870. Springer.

Gordijn, J., & Akkermans, H. 2003. Value based requirements engineering: Exploring innovative e-commerce ideas. Requirements Engineering Journal, 8(2), 114-134. http://link.springer.de/link/service/journals/00766/contents/03/00169/.

Gordijn, J., Sweet, P., Omelayenko, B., & Hazelaar, B. 2004. D7.4 Digital Music Value Chain Application. Tech. rept. Amsterdam, NL. Available from http://obelix.e3value.com.

Grant, J., Beckett, D., & McBride, B. 2004 (February). RDF Test Cases. http://www.w3.org/TR/rdf-testcases/. Last Visited: March 2005.

Grosof, B., Dean, M., Ganjugunte, S., Tabet, S., & Neogy, C. 2004 (November). SweetRules. http://sweetrules.projects.semwebcentral.org/. Last visited: April 2005.

Gruber, T. R. 1993a. Towards Principles for the Design of Ontologies Used for Knowledge Sharing. In: Guarino, N., & Poli, R. (eds), Formal Ontology in Conceptual Analysis and Knowledge Representation. Deventer, The Netherlands: Kluwer Academic Publishers.
Gruber, T. R. 1993b. A translation approach to portable ontology specications.
Knowledge Acquisition,
5(2), 199220.
Haarslev, V., & Möller, R. 2003 (October). Racer: A Core Inference Engine for the Semantic Web. In: Second International Semantic Web Conference, ISWC 2003.

Haase, P., Broekstra, J., Eberhart, A., & Volz, R. 2004 (November). A Comparison of RDF Query Languages. In: Proceedings of the Third International Semantic Web Conference.

Harth, A., Decker, S., & Gassert, H. 2005 (January). YARS: Yet Another RDF Store. http://sw.deri.org/2004/06/yars/yars.html. Last visited: April 2005.

Hayes-Roth, B., & Hewett, M. 1988. BB1: An Implementation of the Blackboard Control Architecture. Chap. 14, pages 297–313 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Hayes-Roth, B., Johnson, M. V., Garvey, A., & Hewett, M. 1988a. Building Systems in the BB* Environment. Chap. 29, pages 543–560 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Hayes-Roth, B., Hayes-Roth, F., Rosenschein, S., & Cammarata, S. 1988b. Modeling Planning as an Incremental, Opportunistic Process. Chap. 10, pages 231–244 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Hayes-Roth, B., Buchanan, B., Lichtarge, O., Hewett, M., Altman, R., Brinkley, J., Cornelius, C., Duncan, B., & Jardetzky, O. 1988c. PROTEAN: Deriving Protein Structure from Constraints. Chap. 20, pages 417–431 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Heflin, J. 2004 (February). OWL Web Ontology Language Use Cases and Requirements. http://www.w3.org/TR/webont-req/. Last Visited: March 2005.

Hildum, D. W. 1994 (September). Flexibility in a Knowledge-Based System for Solving Dynamic Resource-Constrained Scheduling Problems. Ph.D. thesis, Department of Computer Science, University of Massachusetts.

Hirtle, D., Boley, H., Damasio, C., Grosof, B., Kifer, M., Sintek, M., Tabet, S., & Wagner, G. 2004 (August). Schema Specification of RuleML 0.87. http://www.ruleml.org/0.87/. Last Visited: April 2005.

Holbrook, M. B. 1999. Consumer Value: A Framework for Analysis and Research. New York, NY: Routledge.

Horrocks, I., Patel-Schneider, P. F., Boley, H., Tabet, S., Grosof, B., & Dean, M. 2004 (May). SWRL: A Semantic Web Rule Language Combining OWL and RuleML. http://www.w3.org/Submission/2004/SUBM-SWRL-20040521/. Last Visited: April 2005.

Horstmann, C. S., & Cornell, G. 2001a. Core Java 2. Vol. 1 - Fundamentals. Sun Microsystems Press.
Horstmann, C. S., & Cornell, G. 2001b. Core Java 2. Vol. 2 - Advanced Features. Sun Microsystems Press.

HP Labs. 2004 (February). Jena: A Semantic Web Framework for Java. http://jena.sourceforge.net/. Last visited: April 2005.

IBM Corporation. 2004 (September). Snobase. http://www.alphaworks.ibm.com/tech/snobase. Last visited: April 2005.

Institute for Learning and Research Technology (ILRT) at the University of Bristol. 2005 (February). Redland RDF Application Framework. http://librdf.org/. Last visited: April 2005.

ISO: International Organization for Standardization. 2003 (December). Database Languages SQL, ISO/IEC 9075*:2003. Available at http://www.iso.org.

Johnson, R. 2003. Expert One-on-One J2EE Design and Development. Wiley Publishing.

Karvounarakis, G., Alexaki, S., Christophides, V., Plexousakis, D., & Scholl, M. 2002. RQL: A Declarative Query Language for RDF. In: The 11th Intl. World Wide Web Conference (WWW2002).

Klyne, G., & Carroll, J. J. 2004 (February). Resource Description Framework (RDF) Concepts and Abstract Syntax. http://www.w3.org/TR/rdf-concepts/. Last Visited: March 2005.

Kopena, J., & Regli, W. C. 2003. DAMLJessKB: A Tool for Reasoning with the Semantic Web. IEEE Intelligent Systems, 18(3), 74–77.
Kowalski, R. 2001. Logic Programming. Pages 430–432 of: Wilson, R. A., & Keil, F. C. (eds), The MIT Encyclopedia of the Cognitive Sciences. MIT Press.

Lakin, W. L., Miles, J. A. H., & Byrne, C. D. 1988. Intelligent Data Fusion for Naval Command and Control. Chap. 22, pages 443–458 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Laresgoiti, I., Anjewierden, A., Bernaras, A., Corera, J., Schreiber, G., & Wielinga, B. J. 1996. Ontologies as Vehicles for Reuse: a mini-experiment. In: 10th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, vol. 1.

Lee, R. 2004 (July). Scalability Report on Triple Store Applications. Tech. rept. MIT. http://simile.mit.edu/reports/stores/.

Lesser, V. R., & Corkill, D. D. 1988. The Distributed Vehicle Monitoring Testbed: A Tool for Investigating Distributed Problem Solving Networks. Chap. 18, pages 353–386 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Lesser, V. R., & Erman, L. D. 1988. A Retrospective View of the Hearsay-II Architecture. Chap. 4, pages 87–121 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.
Logan, B., Millington, K., & Smithers, T. 1991. Being economical with the truth: assumption-based context management in the Edinburgh Designer System. Pages 423–446 of: Gero, J. S. (ed), Artificial Intelligence in Design.

Magkanaraki, A., Karvounarakis, G., Anh, T. T., Christophides, V., & Plexousakis, D. 2002. Ontology storage and querying. Technical Report 308. ICS-FORTH.

Maier, A., Aguado, J., Bernaras, A., Laresgoiti, I., Pedrinaci, C., Peña, N., & Smithers, T. 2003. Integration with Ontologies. 2nd Conference on Knowledge Management (WM2003).

Mandow, L., & Pérez de la Cruz, J. L. 2000. The role of multicriteria problem solving in design. Pages 23–42 of: Gero, J. S. (ed), Artificial Intelligence in Design '00. Kluwer Academic Publishers.

Manola, F., & Miller, E. 2004 (February). RDF Primer. http://www.w3.org/TR/rdf-primer/. Last Visited: March 2005.

Mazzocchi, S., Garland, S., & Lee, R. 2005 (January). SIMILE: Practical Metadata for the Semantic Web. http://www.xml.com/pub/a/2005/01/26/simile.html.

McAllester, D. 2001. Logical Reasoning Systems. Pages 430–432 of: Wilson, R. A., & Keil, F. C. (eds), The MIT Encyclopedia of the Cognitive Sciences. MIT Press.

McBride, B. 2004. The Resource Description Framework (RDF) and its Vocabulary Description Language RDFS. Chap. 3, pages 51–66 of: Staab, S., & Studer, R. (eds), Handbook on Ontologies. Springer-Verlag.

McGuinness, D. L., & van Harmelen, F. 2004 (February). OWL Web Ontology Language Overview. http://www.w3.org/TR/owl-features/. Last Visited: March 2005.

McIlraith, S., Son, T., & Zeng, H. 2001. Semantic Web services. IEEE Intelligent Systems, 16(2), 46–53.

Melnik, S. 2001 (January). RDF API. http://www-db.stanford.edu/~melnik/rdf/api.html. Last visited: April 2005.

Mena, E., Illarramendi, A., Kashyap, V., & Sheth, A. 2000. OBSERVER: An Approach for Query Processing in Global Information Systems based on Interoperation across Pre-existing Ontologies. International Journal on Distributed and Parallel Databases (DAPD), 8(2), 223–272.

Mika, P. 2004 (September). Flink: The Who is Who of the Semantic Web. http://flink.semanticweb.org. Last visited: April 2005.

Mindswap. 2004a (February). OWL-S API (Version 1.0.0). http://www.mindswap.org/2004/owl-s/api/. Last visited: April 2005.

Mindswap. 2004b (December). Pellet. http://www.mindswap.org/2003/pellet/. Last visited: April 2005.
Mitra, N. 2003 (June). SOAP Version 1.2 Part 0: Primer. W3C Recommendation. http://www.w3.org/TR/soap12-part0/. Last Visited: June 2005.

Musen, M. A. 2000. Ontology-Oriented Design and Programming. Tech. rept. SMI-2000-0833. Stanford Medical Informatics. Available at: http://www-smi.stanford.edu/pubs/SMI_Reports/SMI-2000-0833.pdf.

Musen, M. A. 2004. Ontologies: Necessary–Indeed Essential–but Not Sufficient. IEEE Intelligent Systems, 19(1), 77–79.

Newell, A. 1982. The Knowledge Level. Artificial Intelligence, 18(1), 87–127.

Newman, A., Gearon, P., & Adams, T. 2005 (April). JRDF (Java RDF). http://jrdf.sourceforge.net/. Last visited: April 2005.

Nii, H. P. 1986. Blackboard Application Systems and a Knowledge Engineering Perspective. AI Magazine, 7(3), 82–106.

Nii, H. P., & Aiello, N. 1988. AGE (Attempt to GEneralize): A Knowledge-Based Program for Building Knowledge-Based Programs. Chap. 12, pages 251–280 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Nii, H. P., Aiello, N., & Rice, J. 1988a. Frameworks for Concurrent Problem Solving: A Report on CAGE and POLIGON. Chap. 25, pages 475–501 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Nii, H. P., Feigenbaum, E. A., Anton, J. J., & Rockmore, A. J. 1988b. Signal-to-Symbol Transformation: HASP/SIAP Case Study. Chap. 6, pages 135–157 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Nipkow, T., Paulson, L. C., & Wenzel, M. 2002. Isabelle/HOL – A Proof Assistant for Higher-Order Logic. Springer-Verlag.

Normann, R., & Ramirez, R. 1994. Designing Interactive Strategy: From Value Chain to Value Constellation. Chichester, UK: John Wiley & Sons.

Noy, N. F., Sintek, M., Decker, S., Crubézy, M., Fergerson, R. W., & Musen, M. A. 2001. Creating Semantic Web Contents with Protégé-2000. IEEE Intelligent Systems, 16(2), 60–71.

Oberle, D. 2004. Semantic management of middleware. Pages 299–303 of: 1st International Doctoral Symposium on Middleware. New York, NY, USA: ACM Press.

Oberle, D., Eberhart, A., Staab, S., & Volz, R. 2004a. Developing and Managing Software Components in an Ontology-based Application Server. In: Middleware 2004, ACM/IFIP/USENIX 5th International Middleware Conference, Toronto, Ontario, Canada. LNCS. Springer.

Oberle, D., Volz, R., Motik, B., & Staab, S. 2004b. An extensible ontology software environment. Chap. III, pages 311–333 of: Staab, S., & Studer, R. (eds), Handbook on Ontologies. International Handbooks on Information Systems. Springer-Verlag.
Oberle, D., Staab, S., Studer, R., & Volz, R. 2005. Supporting Application Development in the Semantic Web. ACM Transactions on Internet Technology (TOIT), 5(2), 359–389.

OMG: Object Management Group. 1997. The Object Management Group. http://www.omg.org. Last Visited: June 2005.

OMG: Object Management Group. 2003 (October). OCL 2.0 Specification. Available at http://www.omg.org/docs/ptc/03-10-14.pdf.

Ontoprise GmbH. 2005 (April). OntoBroker. http://www.ontoprise.de/. Last visited: April 2005.

OWL Services Coalition. 2003. OWL-S: Semantic Markup for Web Services. http://www.daml.org/services/owl-s/1.0/.

Pallos, M. 2001. Service-Oriented Architecture: A Primer. Business Integration Journal, December, 32–35.

Park, J. Y., Gennari, J. H., & Musen, M. A. 1997. Mappings for Reuse in Knowledge-based Systems. Technical Report SMI-97-0697. Stanford Medical Informatics.

Patel-Schneider, P. F., Hayes, P., & Horrocks, I. 2004 (February). OWL Web Ontology Language Semantics and Abstract Syntax. http://www.w3.org/TR/owl-semantics/. Last Visited: March 2005.

Payne, T., & Lassila, O. 2004. Semantic Web Services. IEEE Intelligent Systems, 19(4), 14–15.

Pearson, G. 1988. Mission Planning within the Framework of the Blackboard Model. Chap. 21, pages 433–442 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Pedrinaci, C., Bernaras, A., Smithers, T., Aguado, J., & Cendoya, M. 2003. A Framework for Ontology Reuse and Persistence Integrating UML and Sesame. Pages 37–46 of: Conejo, R., Urretavizcaya, M., & Pérez de-la-Cruz, J. (eds), Current Topics in Artificial Intelligence: 10th Conference of the Spanish Association for Artificial Intelligence, CAEPIA 2003, and 5th Conference on Technology Transfer, TTIA 2003. Lecture Notes in Computer Science, vol. 3040. Springer-Verlag.

Pedrinaci, C., Baida, Z., Akkermans, H., Bernaras, A., Gordijn, J., & Smithers, T. 2005a. Music Rights Clearance Business Analysis and Delivery. Pages 198–207 of: Bauknecht, K., Pröll, B., & Werthner, H. (eds), 6th International Conference on Electronic Commerce and Web Technologies. Lecture Notes in Computer Science, vol. 3590. Springer-Verlag.

Pedrinaci, C., Smithers, T., & Bernaras, A. 2005b (December). Technical Issues in the Development of Knowledge-Based Services for the Semantic Web. In: SWAP: Semantic Web Applications and Perspectives, 2nd Italian Semantic Web Workshop. (Accepted for publication).

Pfleger, K., & Hayes-Roth, B. 1997 (July). An Introduction to Blackboard Systems Style Organization. Tech. rept. Stanford University.
Porter, M. E. 1985. Competitive Advantage: Creating and Sustaining Superior Performance. New York, NY: The Free Press.

Prud'hommeaux, E., & Grosof, B. 2004 (April). RDF Query Survey. http://www.w3.org/2001/11/13-RDF-Query-Rules/. Last Visited: April 2005.

Prud'hommeaux, E., & Seaborne, A. 2004 (October). SPARQL Query Language for RDF. W3C Working Draft. http://www.w3.org/TR/rdf-sparql-query/. Last Visited: April 2005.

Puerta, A., Egar, J., Tu, S., & Musen, M. A. 1992. A multiple-method knowledge-acquisition shell for the automatic generation of knowledge-acquisition tools. Knowledge Acquisition, 4(2), 171–196.

Quan, D., Huynh, D., & Karger, D. R. 2003. Haystack: A Platform for Authoring End User Semantic Web Applications. Pages 738–753 of: The Semantic Web - ISWC 2003, vol. 2870/2003. Springer-Verlag.

Raggett, D., Le Hors, A., & Jacobs, I. 1999 (December). HTML 4.01 Specification. W3C Recommendation. http://www.w3.org/TR/html401. Last visited: September 2005.

Reynolds, D. 1988. MUSE: A Toolkit for Embedded, Real-time AI. Chap. 27, pages 519–532 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Riley, G. 2004 (May). CLIPS: A Tool for Building Expert Systems. http://www.ghg.net/clips/CLIPS.html. Last visited: July 2004.

Roo, J. De. 2005 (April). Euler Proof Mechanism. http://eulersharp.sourceforge.net/. Last visited: April 2005.

RuleML Initiative. 2005 (January). The Rule Markup Initiative Web Site. http://www.ruleml.org. Last Visited: April 2005.

Russell, S. J., & Norvig, P. 2003. Artificial Intelligence: A Modern Approach. 2nd international edn. Prentice Hall Series in Artificial Intelligence. Upper Saddle River, NJ: Prentice Hall.

Sadeh, N., Hildum, D., Laliberty, T., McA'Nulty, J., Kjenstad, D., & Tseng, A. 1998. A blackboard architecture for integrating process planning and production scheduling.

Schreiber, A. Th., & Wielinga, B. J. 1997. Configuration Design Problem Solving. IEEE Expert, 12(2), 49–56.

Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W., & Wielinga, B. 1999. Knowledge Engineering and Management: The CommonKADS Methodology. MIT Press.

Schwartz, D. G. 2003. From Open IS Semantics to the Semantic Web: The Road Ahead. IEEE Intelligent Systems, 18(3), 52–58.
Scott, A., Toussaint, A., Selman, D., Iyengar, S., Majoor, J., Hornick, M., McCabe, F., Friedman-Hill, E., Kerth, R., Ke, C., & McMullin, G. 2003 (September). Java Rule Engine API JSR-94. http://www.jcp.org/en/jsr/detail?id=94. Last Visited: April 2005.

Seaborne, A. 2002 (April). Jena Tutorial: A Programmer's Introduction to RDQL. http://www.hpl.hp.com/semweb/doc/tutorial/RDQL/index.html. Last Visited: April 2005.

Simon, H. A. 1973. The structure of ill-structured problems. Artificial Intelligence, 4, 181–201.

Simon, H. A. 1981. The sciences of the artificial. First edn. MIT Press.

Sintek, M., Decker, S., & Harth, A. 2005 (September). TRIPLE. http://triple.semanticweb.org/. Last visited: April 2005.

Smith, B. C. 1985. Prologue to Reflection and Semantics in a Procedural Language. Chap. 3, pages 31–40 of: Brachman, R. J., & Levesque, H. J. (eds), Readings in Knowledge Representation. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

Smith, S. 1995. Reactive Scheduling Systems. In: Brown, D. E., & Scherer, W. T. (eds), Intelligent Scheduling Systems. Kluwer Press.

Smithers, T. 1992 (June). Design as exploration: puzzle-making and puzzle-solving. In: Workshop notes for AID '92 Workshop on Exploration-based models of design and Search-based models of design. CMU.

Smithers, T. 1996. On knowledge level theories of design process. Pages 561–579 of: Gero, J. S., & Sudweeks, F. (eds), Artificial Intelligence in Design '96. Kluwer Academic Publishers.

Smithers, T. 1998. Towards a knowledge level theory of design process. Pages 3–21 of: Gero, J. S., & Sudweeks, F. (eds), Artificial Intelligence in Design '98. Kluwer Academic Publishers.

Smithers, T. 2000. Designing a font to test a theory. Pages 3–22 of: Gero, J. S. (ed), Artificial Intelligence in Design '00. Kluwer Academic Publishers.

Smithers, T. 2002a (May). On Knowledge Level Theories and the Knowledge Management of Designing. In: International Design Conference - Design 2002.

Smithers, T. 2002b. Synthesis in Designing. In: Gero, J. S. (ed), Artificial Intelligence in Design.

Smithers, T., & Troxell, W. 1990. Design is intelligent behaviour, but what's the formalism? AI EDAM, 4(2), 89–98.

Smithers, T., Conkie, A., Doheny, J., Logan, B., Millington, K., & Tang, M. X. 1990. Design as an Intelligent Behaviour: an AI in Design Research Programme. Journal of Artificial Intelligence in Engineering, 5, 78–109.
Smithers, T., Tang, M., Tomes, N., Buck, P., & Clarke, B. 1992. Development of a knowledge-based design support system. International Journal of Knowledge Based Systems, 5(5), 1–10.

Smithers, T., Corne, D., & Ross, P. 1994. On computing exploration and solving design problems. Pages 293–313 of: Gero, J. S., & Tyugu, E. (eds), International Workshop on Formal Design Methods for CAD. IFIP Transactions, vol. B-18. Elsevier.

Staab, S., Horrocks, I., Angele, J., Decker, S., Kifer, M., Grosof, B., & Wagner, G. 2003. Where are the rules? IEEE Intelligent Systems, 18(5), 76–83.

Steels, L. 1990. Components of Expertise. AI Magazine, 11(2), 28–49.

Stelting, S., & Maassen, O. 2002. Applied Java Patterns. Java Series. Sun Microsystems Press.

Studer, R., Benjamins, R., & Fensel, D. 1998. Knowledge Engineering: Principles and Methods. Data & Knowledge Engineering, 25(1-2), 161–197.

Sun Microsystems Inc. 1997 (August). JavaBeans (TM) API Specification (Version 1.01-A).

Tablado, A., Bagüés, M. I., Illarramendi, A., Bermúdez, J., & Goñi, A. 2004. Aingeru: an Innovating System for Tele Assistance of Elderly People. Pages 27–36 of: Proceedings of the 1st International Workshop on Tele-Care and Collaborative Virtual Communities in Elderly Care, TELECARE 2004. INSTICC PRESS. ISBN: 972-8865-10-4.

Tapscott, D., Ticoll, D., & Lowy, A. 2000. Digital Capital: Harnessing the Power of Business Webs. Boston: Harvard Business School Press.

The Apache Software Foundation. 2002 (August). Jakarta Project: BeanUtils Version 1.4. http://jakarta.apache.org/commons/beanutils/.

The Apache Software Foundation. 2003 (June). Apache Axis Version 1.1. http://ws.apache.org/axis/.

The FRODO Team. 2000 (January). The FRODO project: A Framework for Distributed Organizational Memories. http://www.dfki.uni-kl.de/frodo/. Last Visited: June 2005.

The Inkling Project. 2002 (July). Inkling: RDF query using SquishQL. http://swordfish.rdfweb.org/rdfquery/. Last visited: April 2005.

The JLog Project. 2005 (January). JLog - Prolog in Java. http://jlogic.sourceforge.net/. Last visited: April 2005.

The Kazuki Team. 2004 (April). The Kazuki project. http://projects.semwebcentral.org/projects/kazuki/. Last Visited: June 2005.

The Mandarax Project. 2005 (March). Mandarax. http://mandarax.sourceforge.net/. Last visited: April 2005.
The Victoria University of Manchester. 2005 (March). FaCT/FaCT++: Fast Classification of Terminologies. http://owl.man.ac.uk/factplusplus/. Last visited: April 2005.

The XSB Project. 2005 (March). XSB. http://xsb.sourceforge.net/. Last visited: April 2005.

van der Aalst, W. 2003. Don't Go with the Flow: Web Services Composition Standards Exposed. IEEE Intelligent Systems, 18(1), 72–76.

Vinoski, S. 1997. CORBA: integrating diverse applications within distributed heterogeneous environments. IEEE Communications Magazine, 14(2), 46–55.

Vinoski, S. 2003. Integration with Web Services. IEEE Internet Computing, 7(6), 75–77.

Vinoski, S. 2005. What SOA Isn't: Debunking the Myths of Service-Orientation. Business Integration Journal, April, 7–9.

Weizenbaum, J. 1966. ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.

Wood, D., Gearon, P., & Adams, T. 2005 (May). Kowari: A Platform for Semantic Web Storage and Analysis. In: XTech Conference.

Zanconato, R. 1988. BLOBS - An Object-Oriented Blackboard System Framework for Reasoning in Time. Chap. 16, pages 335–345 of: Engelmore, R. S., & Morgan, A. J. (eds), Blackboard Systems. Addison-Wesley.

Zimmermann, O., Krogdahl, P., & Gee, C. 2004 (June). Elements of Service-Oriented Analysis and Design. http://www-106.ibm.com/developerworks/library/ws-soad1/. Last visited: March 2005.

Zou, Y., Finin, T., & Chen, H. 2004. F-OWL: An Inference Engine for the Semantic Web. In: Hinchey, M., Rash, J. L., Truszkowski, W. F., & Rouff, C. A. (eds), Third International Workshop on Formal Approaches to Agent-Based Systems (FAABS). Lecture Notes in Computer Science, vol. 3228. Greenbelt, MD, USA: Springer-Verlag.