A scalable ontology reasoner via incremental materialization

Fazle Rabbi and Wendy MacCaull
StFX Centre for Logic and Information, St. Francis Xavier University, Antigonish, NS B2G 2A5
{rfazle, wmaccaul}@stfx.ca

Abstract. Ontology based knowledge management systems have great potential: their applicability ranges from artificial intelligence (e.g., knowledge representation and natural language processing) to information integration and retrieval systems, requirements analysis and, most recently, semantic web applications and workflow management systems. However, the high complexity of reasoning over ontologies with large TBoxes and/or ABoxes is often a barrier to their applicability in real-world settings, especially those which are time sensitive. Materialization is a promising solution for scalable reasoning over ontologies with large ABoxes, as it derives the implicit knowledge of an ontology and makes it available in a relational database. Although materialization can reduce the query answering time of an ontology, it has limitations in applications which require frequent updates to the knowledge base. To overcome this problem, we developed a tool for incremental materialization which identifies the fragment of the ontology that needs to be updated due to an ABox or TBox change, thereby reducing the complexity of the exhaustive forward chaining otherwise required.
1. Introduction

An ontology is a knowledge representation technique which can be used to efficiently conceptualize a community's knowledge of a domain. There has been a great deal of research in the knowledge representation area, and various ontology languages with different levels of expressiveness have been proposed. Because of the recent growth of semantic web ontologies, especially in medicine and science, there is a growing demand for efficient mechanisms to handle queries that require complex inferences. Many modern software applications, such as time sensitive workflow management systems, also require the ability to retrieve not just facts but also inferred knowledge.

CBMS 2013
Rokan Uddin Faruqui
Department of Computing and Software, McMaster University, Hamilton, ON L8S 4K1
[email protected]
The OWL family of languages [2] is widely used to define ontologies; it has XML-based syntax and its most useful profiles correspond to Description Logics (DLs) [4]. The effective use of ontologies requires not only a well-designed and well-defined ontology language, but also adequate support for reasoning. There are many reasoning tools (e.g., FaCT++ [1], RACER [7], Pellet [15], etc.), but most existing tools involve in-memory reasoning and therefore cannot deal with an ontology with a large number of asserted facts (i.e., a large ABox). In this research, we propose algorithms for a scalable ontology reasoner for the OWL 2 RL profile recently standardized by the W3C [12]. (An explanation of the acronyms for the OWL languages is given in [12].) OWL 2 RL is a fairly expressive profile amenable to rule based implementation. A few attempts have been made recently (such as [11], [6]) to store an ontology in a database through a process called materialization, which stores both explicit and inferred facts. In [6] the authors presented a scalable reasoner for OWL 2 RL using a materialization technique; although the query answering was faster, the materialization process was slow, which was a bottleneck for ontologies that require frequent updates. KAON2 is a DL reasoner based on an incremental materialization approach [11], [17]. In KAON2 the ontology is translated into a logic program and then materialized into a deductive database for querying and storing the information. The system can handle SHIQ knowledge bases, which is equivalent to OWL Lite without nominals. This approach is similar to the approach presented here, except that we develop a reasoner for OWL 2 RL, which is more expressive than OWL Lite, and materialize the information to a relational database rather than to a deductive database, thus overcoming the limitation of [6]. We built an ontology driven workflow management system (see [14]), but it was not scalable as we were using an in-memory reasoner.
Here we develop an incremental reasoner to overcome this problem, allowing us to demonstrate that it is now feasible to build large scale "intelligent" workflow systems. Indeed, we have changed the idea
978-1-4799-1053-3/13/$31.00 © 2013 IEEE
Figure 1. Incremental materialization of ABox facts using DataLog rules
of traditional workflow management systems by incorporating the ability to reason. Currently we are running a pilot of an ontology-driven workflow system, incorporating this incremental reasoner, in two community-based healthcare programs. This paper is organized as follows: in Section 2 we describe the incremental materialization process, and in Section 3 we present some related work and future directions.
2. Incremental Materialization

Incremental materialization of ABox: Our approach to incrementally materializing the facts of an ontology is shown in Fig. 1. We use the term 'asserted facts' for the facts that enter the incremental materialization process as input, and 'inferred facts' for the inferred knowledge that the reasoning process produces as output. In our approach we use an extended version of DataLog rules to represent an ontology. For a given ontology, the DataLog rules are extracted by reading the TBox information. We materialize all facts of an ontology ABox into a relational database. For any manipulation (i.e., inserting, updating or deleting a fact) of the ABox of an ontology, the incremental materialization process is applied to modify the fragment of the ontology which would be affected when forward chaining is applied starting with the fact in the body of a rule. This usually means a much smaller fragment of the ontology is loaded from the relational database and used for reasoning, after which the facts are synchronized back into the database. In this section we present the algorithms for incremental materialization and describe the steps.

DataLog rules are written in Horn clause format. A Horn clause is a rule of the form h ← b0, b1, ..., bk where the head h and each body atom bi, 0 ≤ i ≤ k, is either a unary or a binary predicate. Recall that a unary predicate represents an OWL 2 RL class and a binary predicate represents an OWL 2 RL object or data property. In our approach, an ontology file is read and its structure (i.e., TBox) is represented by DataLog rules. Facts (i.e., ClassAssertions and PropertyAssertions) are triggers which may enable DataLog rules.
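As an illustration only (this is not the tool's implementation, and the predicate and individual names are invented), a Horn-clause rule base of this shape and the forward chaining it enables can be sketched in a few lines:

```python
# Sketch: Horn-clause DataLog rules over ground facts.
# A fact is a tuple: ("Person", "Max") is Person(Max);
# ("hasChild", "Bryan", "Max") is hasChild(Bryan, Max).
# A rule is (head, [body atoms]); single uppercase letters are variables.

RULES = [
    # Parent(X) <- hasChild(X, Y), Person(Y)
    (("Parent", "X"), [("hasChild", "X", "Y"), ("Person", "Y")]),
]

def match(atom, fact, env):
    """Unify a body atom against a ground fact, extending the binding env."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    env = dict(env)
    for a, f in zip(atom[1:], fact[1:]):
        if a.isupper():                    # variable
            if env.get(a, f) != f:
                return None                # conflicting binding
            env[a] = f
        elif a != f:                       # constant mismatch
            return None
    return env

def forward_chain(facts):
    """Exhaustively apply RULES until no new fact can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            envs = [{}]
            for atom in body:              # join the body atoms left to right
                envs = [e2 for e in envs for f in facts
                        if (e2 := match(atom, f, e)) is not None]
            for env in envs:
                new = (head[0],) + tuple(env[a] if a.isupper() else a
                                         for a in head[1:])
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

facts = forward_chain({("hasChild", "Bryan", "Max"), ("Person", "Max")})
assert ("Parent", "Bryan") in facts
```

The tool described here performs this kind of chaining against facts stored in a relational database rather than an in-memory set.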
In this paper we also use Horn clause rules to represent the OWL 2 RL restrictions ObjectComplementOf, MaxQualifiedCardinality and the irreflexive, asymmetric and disjointProperty restrictions, which impose constraints on the object and data properties of an ontology; any violation of these restrictions results in an inconsistent ABox. To handle the restrictions we propose an extension of DataLog rules by using rules with 'Exception' in the head. If any restriction is violated (i.e., one such rule is executed), an 'Exception' occurs, aborting the insert/delete/update operation on the ontology. Thus the ABox is always consistent. Table 1 shows OWL 2 RL axioms and their DataLog rule representations. In this table sameAs is a symmetric binary predicate with the semantics: if sameAs(X1, X2) = true then X1^I = X2^I (where ·^I is the mapping to the domain, i.e., the interpretation).

Algorithm 1 outlines the insert procedure for modifying the database as a result of adding a fact to the ontology ABox. Using the fact θ that we wish to add, DataLog rules are instantiated with relevant facts loaded from the database. Algorithm 2 loads only relevant facts from the database by exploring a rule α and matching its body predicates. If α is instantiated, the consequence (i.e., the inferred fact) is also added. Note that different instantiations of α are possible (see Algorithm 2 Line 6, Algorithm 1 Line 10). If the consequence is an 'Exception', inserting the fact θ would make the ontology inconsistent, so an exception is raised and the insert operation is aborted with an error message.

We now discuss the steps (Algorithm 3) for removing a fact from an ontology together with the consequences of its removal. We follow an over-estimated approach which removes some items that may need to be restored later; a forward chaining approach is used to correct the results of the over-estimation.
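The insert-with-Exception flow just described can be sketched as follows. This is a hedged illustration, not the paper's code: the helper names (`insert_fact`, `exception_fired`) and the irreflexivity rule on `hasChild` are invented for the example.

```python
# Sketch: an insert is committed only if no Exception-headed rule fires
# on the forward-chained result; otherwise the operation is aborted and
# the ABox (here a set; in the tool, a relational database) is untouched.

class InconsistentABox(Exception):
    pass

def exception_fired(facts):
    """Extended rule with 'Exception' head: Exception <- hasChild(X, X)."""
    return any(f[0] == "hasChild" and f[1] == f[2] for f in facts)

def insert_fact(abox, fact, derive=lambda s: s):
    """Add `fact` plus its forward-chained consequences, or abort."""
    candidate = derive(abox | {fact})      # instantiate rules, add consequences
    if exception_fired(candidate):         # a restriction was violated
        raise InconsistentABox(fact)       # abort: stored ABox stays consistent
    return candidate

abox = {("hasChild", "Bryan", "Max")}
abox = insert_fact(abox, ("hasChild", "Bryan", "Debi"))   # accepted
try:
    insert_fact(abox, ("hasChild", "Max", "Max"))         # irreflexivity violated
except InconsistentABox:
    pass                                                   # insert aborted
```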
If more than one rule can be instantiated to give the same consequence, then removing the consequence by matching only one rule is sometimes known as an over-estimated delete. For example, if 'Bryan' has two children 'Max' and 'Debi', then 'Bryan' is an individual of the Parent class (∃hasChild.Person ⊑ Parent) because of the instantiation of the following rules:

r1: Parent(Bryan) ← hasChild(Bryan, Max), Person(Max)
r2: Parent(Bryan) ← hasChild(Bryan, Debi), Person(Debi)
If we want to remove the fact Person(Max) from the ontology, then it affects r1 and we remove the fact Parent(Bryan), even though there is another instance of a rule, r2, which establishes the fact Parent(Bryan). A subsequent forward chaining step is therefore required which retrieves the fact Parent(Bryan). Let us now trace through Algorithm 3 in detail. Suppose we wish to delete θ from an ABox. In Algorithm 3 we define 2 sets, D and R (where D stands for deleted facts
Table 1. OWL 2 axioms and facts, Description Logic rules and extended DataLog rules

Axiom                            DL rule              Extended DataLog rule(s)
ClassAssertion                   a : C                C(a)
PropertyAssertion                ⟨a, b⟩ : P           P(a, b)
SubClassOf                       C ⊑ D                D(X) ← C(X)
SubObjectPropertyOf              P ⊑ Q                Q(X1, X2) ← P(X1, X2)
ObjectPropertyChain              P ∘ Q ⊑ R            R(X1, X2) ← P(X1, X3) ∧ Q(X3, X2)
EquivalentClasses                C ≡ D                D(X) ← C(X),  C(X) ← D(X)
EquivalentProperties             P ≡ Q                P(X1, X2) ← Q(X1, X2),  Q(X1, X2) ← P(X1, X2)
ObjectInverseOf                  P ≡ Q⁻               P(X1, X2) ← Q(X2, X1),  Q(X2, X1) ← P(X1, X2)
TransitiveProperty               P⁺ ⊑ P               P(X1, X3) ← P(X1, X2) ∧ P(X2, X3)
SymmetricObjectProperty          P ≡ P⁻               P(X1, X2) ← P(X2, X1)
PropertyDomain                   ⊤ ⊑ ∀P⁻.C            C(X2) ← P(X2, X1)
PropertyRange                    ⊤ ⊑ ∀P.C             C(X2) ← P(X1, X2)
ObjectSomeValuesFrom             ∃P.C ⊑ D             D(X1) ← P(X1, X2) ∧ C(X2)
DataSomeValuesFrom,
  ObjectHasValue, DataHasValue   ∃P.{a} ⊑ D           D(X1) ← P(X1, a)
ObjectHasValue, DataHasValue     D ⊑ ∃P.{a}           P(X, a) ← D(X)
AllValuesFrom                    C ⊑ ∀P.D             D(X2) ← C(X1) ∧ P(X1, X2)
FunctionalProperty               ⊤ ⊑ ≤1 P             sameAs(X2, X3) ← P(X1, X2) ∧ P(X1, X3)
InverseFunctionalProperty        ⊤ ⊑ ≤1 P⁻            sameAs(X1, X2) ← P⁻(X1, X3) ∧ P⁻(X2, X3)
IntersectionOf                   C ⊑ D1 ⊓ D2          D1(X1) ← C(X1),  D2(X1) ← C(X1)
IntersectionOf                   C1 ⊓ C2 ⊑ D          D(X) ← C1(X) ∧ C2(X)
UnionOf                          C1 ⊔ C2 ⊑ D          D(X1) ← C1(X1),  D(X1) ← C2(X1)
MaxQualifiedCardinality 0        D ⊑ ≤0 P.C           Exception ← D(X1) ∧ P(X1, X2) ∧ C(X2)
MaxQualifiedCardinality 1        D ⊑ ≤1 P.C           sameAs(X2, X3) ← D(X1) ∧ P(X1, X2) ∧ P(X1, X3) ∧ C(X2) ∧ C(X3)
ObjectComplementOf               D ⊑ ¬C               Exception ← C(X) ∧ D(X),  Exception ← C(X1) ∧ sameAs(X1, X2) ∧ D(X2)
Irreflexive                      ∃P.self ⊑ ⊥          Exception ← P(X, X)
Asymmetric                       P ≡ ¬P⁻              Exception ← P(X1, X2) ∧ P(X2, X1)
DisjointProperties               P ⊑ ¬Q               Exception ← P(X1, X2) ∧ Q(X1, X2)

and R stands for re-evaluated facts). The set D stores all the facts that need to be deleted from the ontology as a result of deleting θ, and the set R consists of individuals which are used to determine the rules that need to be re-evaluated after removing the facts from the database (see Algorithm 3 Lines 19-20). If θ is a ClassAssertion of the form C(a), all property assertions ap instantiated with the individual a in the domain and all property assertions pa with a in the range are loaded from the database. We initialize the set D by putting θ and ap ∪ pa into it, and initialize the set R with the individuals occurring in the facts in D. On the other hand, if θ is a PropertyAssertion, we initialize D by putting θ into it, and initialize R by putting the domain and range of θ into it. Recursively we identify all rules that are instantiable using facts in D and store the consequences of those rules in D. We then delete the elements of D from the database. The last step is to instantiate rules with elements of R and, using the TBox, perform forward chaining to restore the facts which were removed by over-estimation. We do not provide a separate algorithm for updating an ABox because an update can simply be accomplished by a delete operation followed by an insert operation.

Incremental materialization for revised TBox: The incremental insert and delete operations provided in the
previous section also allow us to incrementally update the ABox as a consequence of modifying the TBox. Algorithm 4 accomplishes this update. The algorithm takes the DataLog rules (Ψold), the revised DataLog rules (Ψrev) and the database (Γ) where the ontology has been materialized. For each deleted DataLog rule α, the algorithm finds the facts which were instantiated (as a consequence of firing the rule α) into the ontology ABox; the instantiable function takes a DataLog rule (α) and a database (Γ) as input, attempts to instantiate all of α's body predicates from the materialized facts in Γ, and returns a set of rule instantiations of α. For each instantiated rule αi, we delete the consequence of αi by invoking Algorithm 3 with αi, Ψold and Γ. We invoke Algorithm 3 with Ψold because in the old set of DataLog rules there might have been rules which were instantiated by the consequence of αi; we recursively delete all the consequences of the forward chaining of the consequence of αi. The next step of Algorithm 4 is to insert the newly added rules and perform forward chaining by invoking Algorithm 1 to get new facts that may be inferred as consequences of firing these new rules. In Algorithm 4, we first delete inferred facts which are consequences of Ψold \ Ψrev; this updates the materialized ABox for the ontology with TBox Ψold ∩ Ψrev. Next we insert consequences of Ψrev by forward chaining starting with Ψrev \ Ψold on the materialized
Algorithm 1: insert: Incremental insert operation into the ABox
Data: θ - a class assertion or property assertion
      Ψ - DataLog rules of the given ontology's TBox
      Γ - a database containing all ABox facts
Result: updated database
1   S ⇐ {θ}
2   count := 0
3   while count != size(S) do
4       count := size(S)
5       for ∀σ ∈ S do
6           for ∀α ∈ Ψ do
7               if σ is matched with a body predicate of α then
8                   Snew := exploreRule(α, σ, Γ)
9                   S ⇐ Snew
                    // instantiable returns all possible instances of α
                    // with body predicates from σ and S
10                  for ∀αi ∈ instantiable(α, σ, S) do
11                      if the consequence of αi is an Exception then
12                          Raise("Exception about", αi)
    // Update all facts in Γ

Algorithm 2: exploreRule: Explores the body predicates of a rule
Data: α - a DataLog rule
      σ - a body predicate of α
      Γ - a database containing ABox facts
Result: S - a set of facts
1   for ∀ body predicate γ in α do
2       if γ ∉ C^I ∪ P^I ∪ {σ} then
3           result := Query(Γ, α, σ, γ)    // find instantiations of γ from the ABox
4           if result != ∅ then
5               S ⇐ result
6   for ∀αi ∈ instantiable(α, σ, S) do
7       if the consequence τ of αi is not an Exception then
8           if τ ∉ C^I ∪ P^I then
9               S ⇐ {τ}
10  return S

Algorithm 3: delete: Incremental delete operation from the ABox
Data: θ - a class assertion or property assertion
      Ψ - DataLog rules of the given ontology's TBox
      Γ - a database containing ABox facts
Result: updated database
1   if θ ∈ C^I then
2       ap := QueryPropByDomain(Γ, θ)    // find all properties p(a, b) where θ = C(a)
3       pa := QueryPropByRange(Γ, θ)     // find all properties p(b, a) where θ = C(a)
4       D ⇐ {θ} ∪ ap ∪ pa
5       R ⇐ {range(ap) ∪ domain(pa)}
6   else
7       D ⇐ {θ}
8       R ⇐ domain(θ) ∪ range(θ)
9   count := 0
10  while count != size(D) do
11      count := size(D)
12      for ∀α ∈ Ψ do
13          if a fact from D is matched with a body predicate of rule α then
14              θc ⇐ consequence of α
15              if θc ∉ D then
16                  D ⇐ {θc}
17                  if θc ∈ P^I then
18                      R ⇐ {domain(θc) ∪ range(θc)}
19  for ∀ facts τ in D do
20      deleteΓ(τ)    // delete facts from Γ
21  for ∀∂ ∈ R do
22      ρ := QueryPropByDomainOrRange(Γ, ∂)
23      for ∀σ ∈ ρ do
24          for ∀α ∈ Ψ, if σ is matched with a body predicate of α do
25              S := exploreRule(α, σ, Γ)    // update all facts in Γ
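Algorithm 3's overall strategy (over-delete, then re-derive) can be sketched on the Parent/hasChild example from earlier. This is an illustrative reduction, not the tool's code: `derive` is specialized to the single Parent rule, and the over-estimation step simply drops every derived Parent fact.

```python
# Sketch of over-estimated delete with forward-chaining correction.
# Facts: ("Person", x) and ("Parent", x) are unary; ("hasChild", x, y) binary.

def derive(facts):
    """Forward chaining specialized to Parent(X) <- hasChild(X, Y), Person(Y)."""
    out = set(facts)
    for f in facts:
        if f[0] == "hasChild" and ("Person", f[2]) in facts:
            out.add(("Parent", f[1]))
    return out

def over_delete(abox, fact):
    """Delete `fact`, over-delete its possible consequences, then re-chain."""
    abox = set(abox) - {fact}
    # over-estimate: every Parent fact may have depended on `fact`
    survivors = {f for f in abox if f[0] != "Parent"}
    return derive(survivors)     # re-derivation restores what still holds

explicit = {("hasChild", "Bryan", "Max"), ("Person", "Max"),
            ("hasChild", "Bryan", "Debi"), ("Person", "Debi")}
abox = derive(explicit)
assert ("Parent", "Bryan") in abox

abox = over_delete(abox, ("Person", "Max"))
# Parent(Bryan) was over-deleted, but re-chaining restores it via r2 (Debi)
assert ("Parent", "Bryan") in abox
```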
ABox facts; this ensures that the materialized ABox facts correspond to the ontology with TBox Ψrev. We obtain the class hierarchy of the TBox by using a standard Pellet reasoner. The Sub-Class and Sub-Property closure is extracted from an ontology TBox by traversing super-classes; the closure is then materialized into the database. Once the TBox is materialized, a new fact can be incrementally materialized using Algorithm 1.
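The Sub-Class closure extraction mentioned above amounts to a transitive traversal of the direct SubClassOf edges. A minimal sketch (the class names and the `SUPER` edge map are invented for illustration):

```python
# Sketch: compute the transitive Sub-Class closure by traversing
# direct superclasses, producing (subclass, superclass) rows suitable
# for storage in a relational table.

SUPER = {                        # direct SubClassOf edges from a TBox
    "MildPain": ["Pain"],
    "Pain": ["Symptom"],
    "Symptom": ["Finding"],
}

def closure(cls):
    """All direct and inherited superclasses of cls."""
    seen, stack = set(), list(SUPER.get(cls, []))
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(SUPER.get(s, []))
    return seen

rows = [(c, s) for c in SUPER for s in closure(c)]   # rows to materialize
assert closure("MildPain") == {"Pain", "Symptom", "Finding"}
```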
PainIntensity ⊓ (∃hasPainLevel.PainLevelZero ⊔ ∃hasPainLevel.PainLevelOne) ⊑ BackgroundDiscomport
PainIntensity ⊓ (∃hasPainLevel.PainLevelTwo ⊔ ∃hasPainLevel.PainLevelThree) ⊑ MildPain
Patient ⊑ ∀hasFormalCaregiver.FormalCaregiver
Medication ⊓ (((∃hasRoute.{IV} ⊔ ∃hasRoute.{SC}) ⊓ ∃hasDose.{Dose0.25} ⊓ ∃hasFrequency.{Q1h} ⊓ ∃hasMeasures.{mg} ⊓ ∃isMadeOf.{Hydromorphine}) ⊔ ...) ⊑ StrongOpioid
WeakOpioid ⊑ Medication ⊓ ¬StrongOpioid
Patient ⊓ (∃hasMedication.StrongOpioid) ⊑ StrongOpioidRegimen
isFeeling ∘ hasSpecialPainProblem ∘ hasSpecialist ⊑ hasRefRecommendation
hasAssessment ∘ hasFullHistory ⊑ hasFullHistory
hasAssessment ∘ hasAllCausesRecorded ⊑ hasAllCausesRecorded
Patient ⊓ ∃hasAllCausesRecorded.{TrueValue} ⊓ ∃hasFullHistory.{TrueValue} ⊑ CompletedInitialAssessment

Figure 2. A Fragment of the Pain Management Ontology TBox
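Following the Table 1 patterns, the Figure 2 axiom Patient ⊓ ∃hasMedication.StrongOpioid ⊑ StrongOpioidRegimen translates to the DataLog rule StrongOpioidRegimen(X1) ← Patient(X1) ∧ hasMedication(X1, X2) ∧ StrongOpioid(X2). A tiny sketch of applying that one rule (the sample individuals are invented):

```python
# Sketch: one Figure 2 axiom applied as its DataLog translation.
#   StrongOpioidRegimen(X1) <- Patient(X1), hasMedication(X1, X2), StrongOpioid(X2)

patients = {"p1", "p2"}
strong_opioids = {"hydromorphine_iv"}
has_medication = {("p1", "hydromorphine_iv"), ("p2", "acetaminophen")}

regimen = {x for (x, m) in has_medication
           if x in patients and m in strong_opioids}
assert regimen == {"p1"}          # only p1 is on a strong opioid regimen
```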
Algorithm 4: updateTBox: Incremental update operation for a revised TBox
Data: Ψold - DataLog rules of the old ontology's TBox
      Ψrev - DataLog rules of the revised ontology's TBox
      Γ - a database containing ABox facts
Result: updated database
1   Ψdeleted ⇐ Ψold \ Ψrev
2   Ψinserted ⇐ Ψrev \ Ψold
3   for α ∈ Ψdeleted do
4       for ∀αi ∈ instantiable(α, Γ) do
5           delete(consequence(αi), Ψold, Γ)
6   updateΓ(Ψrev)
7   for α ∈ Ψinserted do
8       for ∀αi ∈ instantiable(α, Γ) do
9           insert(consequence(αi), Ψrev, Γ)
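The control flow of Algorithm 4, stripped to its essence, is a pair of set differences over the rule sets: retracted rules drive deletes, new rules drive inserts. A rough sketch with stand-in helpers (the logging lists take the place of the real delete/insert calls against Γ):

```python
# Sketch: a TBox revision processed incrementally. Only the rules in
# the symmetric difference of the two rule sets are (re)processed.

def update_tbox(old_rules, rev_rules, deleted_log, inserted_log):
    """Apply a TBox revision incrementally (helpers are stand-ins)."""
    for rule in old_rules - rev_rules:   # Ψold \ Ψrev: retracted rules
        deleted_log.append(rule)         # would call delete(consequence, Ψold, Γ)
    for rule in rev_rules - old_rules:   # Ψrev \ Ψold: newly added rules
        inserted_log.append(rule)        # would call insert(consequence, Ψrev, Γ)

old = {"Parent<-hasChild,Person", "Adult<-Parent"}
rev = {"Parent<-hasChild,Person", "Guardian<-hasChild"}
deleted, inserted = [], []
update_tbox(old, rev, deleted, inserted)
assert deleted == ["Adult<-Parent"]
assert inserted == ["Guardian<-hasChild"]
```

Rules common to both versions are untouched, which is what makes the update incremental.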
Experiment: We performed some experiments on an OWL 2 RL pain management ontology¹ which was constructed from the guidelines for the management of cancer related pain in adults [5], extending an earlier version of a pain management ontology from [6]. We evaluated our incremental methods using this pain ontology because there are no widely accepted benchmarks for OWL 2 RL reasoning [13]. A fragment of the pain management ontology is depicted in Fig. 2. The pain management ontology covers all OWL 2 RL axioms (see Table 1). The ontology includes object properties such as hasPainIntensity, hasPainLevel and hasFormalCaregiver, data properties including hasESASScore and hasPPSScore, functional object properties including isResponding and isMeansOf, inverse object properties such as isFeeling and isFeltBy, transitive object properties including hasLocation, isLocationOf and isAggregateOf, and the object property chain hasRefRecommendation. We also use propositional connectives to create complex class expressions (see Fig. 2).

¹ Available at: http://webpal.net/1.6/stfx/web/_files/file.php?id='filePQqvxBnXLb'&filename=file_painip.owl

We measured the average time required to insert a single fact (a class assertion or property assertion) by the incremental materialization process for various numbers of facts in the database. In order to insert facts into the pain management ontology ABox, we invoked Algorithm 1 with different asserted facts (i.e., class assertions and property assertions) from a data generator program. The data generator program generates random facts for Physician and Nurse individuals; it generates individuals for Patient, Pain, Medication, etc., and also generates binary relations among those individuals. It performs data property assertions by generating random values for different data properties. The results are shown in Fig. 3. Here, ∆10K indicates the average insertion time of a single fact over a database where ten thousand facts have already been materialized; ∆1M indicates the average insertion time of a single fact over a database where 1 million facts have already been materialized. Since the insertion time depends on the number of inferred facts triggered by an asserted fact, in order to calculate the average insertion time (∆i) of the ith asserted fact, we used the insertion times of the last 500 asserted facts. In the following formula, δj is the insertion time of the jth asserted fact and its inferred knowledge; unlike ∆i, it is not an average insertion time.

∆i = ( Σ_{j=i−500}^{i} δj ) / 500

These experiments were performed on a desktop computer with a 2.6 GHz CPU and 2 GB of memory, using a MySQL database. We show that inserting a new fact into the pain management ontology with 1 million records takes around 300 milliseconds. The performance of the delete and update operations also depends on the fact we wish to delete or update. A fact with fewer inferred facts is easier to delete or update than a fact with a large number of inferred facts. For example, a Physician individual that is not related to any Patient individual is easier to delete than a Physician individual who
is associated with many Patient individuals by the hasFormalCaregiver binary relation. An experiment (not included here) shows that it takes 155 milliseconds to delete a fact with 50 inferred facts from an ontology with 1 million materialized facts.
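The sliding-window average ∆i used in the experiments is straightforward to compute; a small sketch with synthetic timings (the real δj values come from the measured inserts):

```python
# Sketch: ∆_i as a moving average of the last `window` per-fact
# insertion times δ_j (timings below are synthetic, in seconds).

def delta(timings, i, window=500):
    """Average of the `window` most recent insertion times up to index i."""
    lo = max(0, i - window)
    return sum(timings[lo:i]) / (i - lo)

timings = [0.3] * 1000            # pretend every insert took 0.3 s
assert abs(delta(timings, 1000) - 0.3) < 1e-9
```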
Figure 3. Performance Testing
3. Related and Future Work

Materialization techniques are used in many scalable reasoners, including [3], [18] and [9]. However, these techniques are not incremental, and so when changes occur in the TBox or ABox, the time to recompute the entire set of facts is a barrier when the ontology is large or complex and the application is time sensitive. McGlothlin et al. [10] proposed a materialization method for RDF datasets using a bit vector-based approach to materialize an ontology. While this approach allows for faster query answering, it does not support incremental materialization and is therefore relatively slow under updates. Urbani et al. [16] used MapReduce algorithms for parallel computing to build a powerful reasoner, WebPIE, but it lacks query capabilities. Moreover, WebPIE is not a complete OWL 2 RL reasoner, as it was developed for the ter Horst fragment, a subset of OWL 2 RL. Kolovski et al. [8] presented various optimization techniques for an OWL 2 RL reasoner. They used high performance computing and were able to efficiently materialize billions of triples from an LUBM ontology. However, they did not give a method for deleting facts from a materialized ontology. In future work we will incorporate high performance computing techniques to deal with billions of triples from ontologies built with more expressive fragments.
References
[1] FaCT++, the new generation of FaCT OWL-DL reasoner.
[2] Web Ontology Language (OWL): http://www.w3.org/2004/owl/.
[3] L. Al-Jadir, C. Parent, and S. Spaccapietra. Reasoning with large ontologies stored in relational databases: The OntoMinD approach. Data & Knowledge Engineering, 69(11):1158-1180, November 2010.
[4] F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2003.
[5] L. Broadfield, S. Banerjee, H. Jewers, A. J. Pollett, and J. Simpson. Guidelines for the Management of Cancer-Related Pain in Adults. Supportive Care Cancer Site Team, Cancer Care Nova Scotia, 2005.
[6] R. U. Faruqui and W. MacCaull. OwlOntDB: A scalable reasoning system for OWL 2 RL ontologies. In FHIES'12, volume 7789 of LNCS, pages 105-120. Springer-Verlag, 2013.
[7] V. Haarslev and R. Möller. RACER system description. In IJCAR'01, pages 701-706, London, UK, 2001. Springer-Verlag.
[8] V. Kolovski, Z. Wu, and G. Eadon. Optimizing enterprise-scale OWL 2 RL reasoning in a relational database system. In International Semantic Web Conference (1), volume 6496 of LNCS, pages 436-452. Springer, 2010.
[9] J. Lu, L. Ma, L. Zhang, J. Brunner, C. Wang, Y. Pan, and Y. Yu. SOR: a practical system for ontology storage, reasoning and search. In VLDB'07, pages 1402-1405. VLDB Endowment, 2007.
[10] J. P. McGlothlin and L. R. Khan. Materializing and persisting inferred and uncertain knowledge in RDF datasets. In AAAI'10, pages 11-15. AAAI Press, July 2010.
[11] B. Motik. KAON2 - scalable reasoning over ontologies with large data sets. ERCIM News, (72), 2008.
[12] B. Motik, B. Grau, I. Horrocks, Z. Wu, A. Fokoue, and C. Lutz. OWL 2 Web Ontology Language: Profiles. W3C Recommendation, available at http://www.w3.org/TR/owl2-profiles/, October 2009.
[13] B. Motik and U. Sattler. A comparison of reasoning techniques for querying large Description Logic ABoxes. In LPAR'06, volume 4246 of LNCS, pages 227-241. Springer, 2006.
[14] F. Rabbi and W. MacCaull. T: A domain specific language for rapid workflow development. In MODELS'12, volume 7590 of LNCS, pages 36-52. Springer, 2012.
[15] E. Sirin, B. Parsia, B. Grau, A. Kalyanpur, and Y. Katz. Pellet: A practical OWL-DL reasoner. Web Semantics: Science, Services and Agents on the World Wide Web, 5(2):51-53, June 2007.
[16] J. Urbani, S. Kotoulas, J. Maassen, F. van Harmelen, and H. E. Bal. OWL reasoning with WebPIE: calculating the closure of 100 billion triples. In ESWC (1), pages 213-227, 2010.
[17] R. Volz, S. Staab, and B. Motik. Incrementally maintaining materializations of ontologies stored in logic databases. Journal on Data Semantics II, 3360:1-34, 2005.
[18] J. Zhou, L. Ma, Q. Liu, L. Zhang, Y. Yu, and Y. Pan. Minerva: A scalable OWL ontology storage and inference system. In ASWC'06, volume 4185 of LNCS, pages 429-443. Springer, 2006.