Using Automated Reasoning System for Data Model Correctness Analysis

Lj. Kazi, Z. Kazi, B. Radulovic and D. Letic
University of Novi Sad, Technical Faculty "Mihajlo Pupin", Zrenjanin, Serbia
[email protected]; [email protected]; [email protected]; [email protected]

Abstract—Data modeling is one of the most important activities in the process of information systems development, and it is based on domain knowledge in a specific area. This paper presents a system for analyzing data model correctness from both the syntax and the semantic aspect. The system is based on the use of an ontology for domain knowledge representation, the transformation of data models and the ontology to predicate logic form, and their integration with data model quality rules. All components of the system are integrated through the XML form of their results. Together they create the input for an automated reasoning system, which provides answers regarding data model correctness.

I. INTRODUCTION

Data model correctness is one aspect of data quality (DQ) as a general concept. The relevance of data quality in both organizational and decisional processes is recognized by several international institutions and organizations, which has resulted in a significant number of contributions to the research by the database and information system communities. Many DQ tools have been developed for research and commercial purposes, and many data quality software tools are advertised and used in various data-driven applications to improve the quality of data models and business processes.

Data modeling has different aspects of quality regarding various data model types, as well as issues regarding the process of data model creation, evaluation and correction. Conceptual modeling is considered the most difficult and error-prone activity, especially for novice designers. Therefore, many efforts have been made to create rules and heuristics for error detection and consultative support to the modeling process. Still, CASE tools do not support error detection in the semantic domain.

In [1] we proposed a model of an integrated tools system that enables data model evaluation (from both the syntax and the semantic aspect) by using an ontology (for specification of the problem domain knowledge) and an automated reasoning system (which uses the formalized data model and ontology to automate the process). The ontology tool and the data design CASE tool are integrated by taking their XML-formatted results, merging them with reasoning rules and transforming them into input (clauses) for the automated reasoning tool. A generally applicable model of the process, independent of any particular semantics, is enabled by principles established in the reasoning rules (axioms).

In this paper we present how the automated reasoning system obtains all the needed clauses and processes answers to queries regarding data model evaluation. The application of the proposed system is demonstrated with a sample data model and basic, generally applicable axioms for data model correctness.

II. RELATED WORK

A. Data Quality Methodology

Data quality is a complex, multidisciplinary area of research [2], grounded in other research areas such as [3]: statistics, knowledge representation and reasoning, data mining, management information systems, and data integration. The main research issues in data quality relate to [4]: dimensions that measure the level of quality, data model quality, models of data quality measurement, measurement/improvement techniques (i.e. algorithms, heuristics, knowledge-based procedures and learning processes), measurement/improvement tools, frameworks and methodologies.

Some of the most important data quality dimensions used for metrics include: accuracy; completeness; time-related dimensions (currency, timeliness, volatility); consistency; accessibility; quality of information sources; and schema quality dimensions (readability, normalization). A special concern is data provenance, i.e. the "description of the origins of a piece of data and the process by which it arrived in the database" [5].

A data quality activity is any process we perform, using different techniques, directly on data to improve their quality, i.e. to measure and improve data quality dimensions. These activities and techniques include: data acquisition, standardization (normalization), object identification, data integration, source trustworthiness, quality composition, error localization (using a set of semantic rules), error correction and cost optimization, as well as schema matching, schema cleaning and profiling.

Data quality methodologies may be classified according to several criteria [2]: data-driven (data sources) vs. process-driven (data production process); measurement vs. improvement; general purpose (wide spectrum of activities and dimensions) vs. special purpose (specific activity or dimension); intra-organizational vs. inter-organizational.

B. Evaluating Creation of Data Models

The quality of a data model reflects the quality of data in databases and business intelligence applications. Regarding data model quality, several approaches deal with this problem during the processes of data model creation, evaluation and correction. Data models that are included in quality evaluations include structured (ER, Relational, Object-Oriented) and semi-structured (XML) data models, described at the conceptual, logical or physical level [2]. In the field of evaluation issues in the process of data model creation, some experimental research has been

conducted regarding the comparison of ER and object-oriented modeling [6] [7], which showed the significance of ER modeling and its advantages over the object-oriented approach. After creating ER data models, CASE tools enable automatic transformation to relational data models, following well-defined rules. However, while transforming to the relational model, they are not able to capture errors that lead to normalization problems [8], so it is necessary to take care of certain types of ER modeling errors in order to avoid later normal form errors in the relational data model.

Research [9] presents conceptual modeling errors as human errors at three performance levels: skill-based, rule-based and knowledge-based. A special software prototype, CODA, was implemented for the purpose of consulting support to novice designers during conceptual database design, and it has been shown (in an experimental survey with students) that using this software enhances the quality of data models. This software system includes heuristics and rules for recognizing typical errors in ER modeling during the process of data model creation, as well as support for further assistance toward solving these issues.

Two main conceptual modeling training approaches have been compared in research [10]: the rule-based and the pattern-based approach. It has been experimentally shown that more complex tasks lower designer performance. It has also been shown that the rule-based approach is not significantly superior to the pattern-based approach in general, but for novice designers the rule-based approach gave significantly better performance at two of three complexity levels.

C. Automating Data Models Evaluation

Research in the field of automating data model evaluation has resulted in the development of software prototypes that evaluate certain types of data models. A software prototype for conceptual (ER) data model evaluation [11] uses a domain ontology both in creating a conceptual ER data model and in evaluating an externally created ER data model. In the experimental research of [12], testing of relational database schemas has been done with the ADIA (Alternative Data Instance Analyses) system. This is a fault-based testing approach whose goal is to reveal constraint faults for schema elements and incorrect or absent restriction definitions; the data model is represented by a metamodel using UML notation. In study [13], the problem of object-oriented database (OODB) design is analysed, focusing in particular on the correctness of OODB schemas with IS-A hierarchies. The software prototype uses object-oriented language elements for defining the physical structure of the database by using TQL.

III. EMPIRICAL SURVEY ON DATA MODEL ERRORS

An empirical survey [1] shows typical student errors made in the process of conceptual data model design, during learning and assessment of information systems development. It has been shown that many generations of students make similar errors. The main causes of student errors are inadequate abstraction, generalization, specialization, analysis and synthesis activities, etc.

Student errors can be classified as syntax and semantic errors, i.e. errors regarding the elements of the conceptual data model.

Semantic categories (types) of student errors in a data model include:
• using derived attributes,
• an incomplete set of attributes in an entity,
• no entity derived from a relation that has important attributes,
• confusing the general and the specific,
• entities derived from attribute domains,
• composite and complex attributes,
• multi-valued entities,
• too few important relationships, where it is needed to show different roles of entities related to other entities,
• an attribute named in plural,
• entities of complex structure (made of other entities, entities named as documents),
• an attribute name that is not clear or not easy to understand,
• an attribute attached to the wrong entity (inadequate relation between entity and attribute),
• putting attributes of similar names into one entity where they do not belong together,
• an inadequate name of an entity,
• a missing important relationship,
• an inadequate relationship,
• an inadequate name of an attribute,
• a name of an attribute used as a data value,
• inadequate identifier attributes.

Syntax categories (types) of student errors in a data model include:
• a missing identifier attribute,
• an attribute without a domain/data type,
• redundancy of attributes - repeating the same attribute in many entities, with the same meaning and a different or equal name,
• a missing name of a relationship.

IV. THEORETICAL CONCEPTS OF PROPOSED DATA MODEL EVALUATION

"The main difference between ontology and a database schema is that database schema is usually limited to describing a small subset of a mini-world from reality in order to store and manage data. An ontology attempts to describe part of reality or a domain of interest as completely as possible." [13]

A. Data Model Presentation and Formalization

Data models are specific, theoretically based specifications that are used for the creation of real databases of information systems [14]. A data model is a formal abstraction through which the real world is mapped into the database [15]. A data model enables the representation of a real-world system through a set of data entities and their connections. Data models can be represented in various ways:
• Diagram (schema) – a graphical representation, using a specific set of symbols with methodology-based meanings,
• Data dictionary representation – where the elements of the data model are listed and textually described in a non-structured or semi-structured way,
• Formal language representation, such as predicate logic calculus.

According to [12], the formal presentation of a data model can be represented as a tuple S = (E, A, R, P), where:
• E is a finite set of entities,
• A is a finite set of attributes,
• R is a finite set of constraints concerning domain, definition, relationship and semantics associated to the elements and attributes,
• P is a finite set of association rules among elements, attributes and constraints.

B. Ontology Presentation and Formalization

"One commonly used definition of ontology is a specification of a conceptualization (Gruber, 1995). In this definition, a conceptualization is the set of concepts that are used to represent the part of reality or knowledge that is of interest to a community of users. Specification refers to the language and vocabulary terms that are used to specify the conceptualization." [13]

According to [16], ontology has various meanings, depending on the science that creates the definition. Ontology originated in the fields of philosophy and metaphysics. In surveys conducted in the field of artificial intelligence, the notion of ontology is linked to the reusability and sharing of knowledge in a specific domain. An ontology includes both a conceptualization and a specification, and these are formulated by the ontology presentation [13] as follows:
• ontology as a thesaurus (dictionary, glossary) – describes the relationships between words (vocabulary) that represent various concepts,
• ontology as a taxonomy – describes how concepts of a particular area of knowledge are related, using structures similar to those used in the specialization and generalization of EER models.

An ontology [17], [18] is used to capture knowledge about some domain of interest. It describes the concepts in the domain and also the relationships that hold between those concepts. According to [16], the basic characteristic of an ontology is a hierarchy of concepts/objects, which is established by using different semantic links.

Ontology elements and components can be formulated and presented in several forms [19]:
• Graphically, as diagrams,
• By using the XML notation of ontology languages, such as the OWL language. Numerous ontology modeling languages have been developed during the last ten years. The World Wide Web Consortium has accepted the following ontology languages: RDF (Resource Definition Framework), RDFS (RDF Schema), OWL (Ontology Language), XML (Extensible Markup Language) and XMLS (XML Schema). Ontology languages must be compatible with other industrial and web standards, to enable integration and use by tools of various purposes. Thus, ontology languages are based on the use of XML.
• By using other formal languages, such as first order predicate calculus.

The formal definition, according to [20], determines the ontology O as a tuple of five elements (C, I, R, F, A), where:
• C is a finite set of concepts, abstractions that describe the objects of the real world,
• I is a finite set of instances of concepts, i.e. real-world objects,
• R is a finite set of relations between the elements of the sets above,
• F is a finite set of functions defined over the real-world objects,
• A is a finite set of axioms formalized using first order predicate calculus, needed to determine the meaning of object classes, relations between objects and functions defined over the objects of the real world.
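To make the five-tuple definition concrete, the sketch below encodes each element of O = (C, I, R, F, A) as Prolog clauses for a small hypothetical domain. The domain and all names in it are our own illustration, not taken from [20].

% A minimal sketch (hypothetical domain, for illustration only) of the
% five elements of O = (C, I, R, F, A) encoded as Prolog clauses.

% C - concepts, listed as facts
concept(person).
concept(city).

% I - instances of concepts
person(ana).
city(zrenjanin).

% R - relations between real-world objects
lives_in(ana, zrenjanin).

% F - a function over objects, encoded as a relation that maps
% each person to the city he or she lives in
home_city(P, C) :- person(P), lives_in(P, C), city(C).

% A - an axiom: whatever lives in some city is a person
person(X) :- lives_in(X, _).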

C. OWL Mapping to Predicate Logic Calculus

An OWL ontology [17] consists of Individuals, Properties and Classes.
1. Individuals represent objects in the domain in which we are interested. Individuals may belong to more than one class.
2. Properties are binary relations on individuals. Properties can have inverses. There are two main types of properties: object properties and data type properties. Object properties represent relationships between two individuals.
3. OWL classes are interpreted as sets that contain individuals. They are described using formal (mathematical) descriptions that state precisely the requirements for membership of the class. Classes may be organized into a superclass-subclass hierarchy, which is also known as a taxonomy. Subclasses specialize their superclasses.

The results of research [21] defined a mapping between the ontology components of the OWL/RDF languages and the elements of predicate logic calculus. Ontology primitives have an interpretation in predicate calculus and can be transformed into clause form suitable for implementation in logic programming environments, such as Prolog or DATALOG. The predefined class Thing, to which every individual belongs, can be represented by the following rule:

Thing(X).    (1)

For each "C subClassOf D" relationship an implication is generated, making explicit that each instance of C is also an instance of D. The following rule pattern expresses the "rdfs:subClassOf" relationship for two classes C and D:

D(X) :- C(X).    (2)

For equality of classes, for each "C owl:sameClassAs D" the following rule pattern is instantiated. The two rules realize equality by establishing a circular subclass definition:

D(X) :- C(X).
C(X) :- D(X).    (3)

For equality of properties, for each "M owl:samePropertyAs N" the following rule pattern is instantiated:

N(X, Y) :- M(X, Y).
M(X, Y) :- N(X, Y).    (4)
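As an illustration, the sketch below instantiates patterns (1)-(4) for a few hypothetical ontology statements; the class and property names are our own examples, not from [21]. Predicate names are lowercased, as standard Prolog syntax requires. Note that the circular rules produced by patterns (3) and (4) can loop under Prolog's top-down evaluation, which is one reason such rule sets are often evaluated bottom-up, as in DATALOG-style systems.

% Hypothetical instantiations of mapping patterns (1)-(4).

% (1) the predefined class Thing holds for every individual
thing(_).

% (2) "student rdfs:subClassOf person"
person(X) :- student(X).

% (3) "pupil owl:sameClassAs student" - circular subclass rules
student(X) :- pupil(X).
pupil(X) :- student(X).

% (4) "teaches owl:samePropertyAs lectures"
lectures(X, Y) :- teaches(X, Y).
teaches(X, Y) :- lectures(X, Y).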

V. PROPOSED MODEL

Data models are usually created in the process of information systems development by using CASE tools that integrate business process modeling results into data modeling, preparing them as input material for the process. An ontology that more completely [13] describes the knowledge of the specific problem domain is compared with the designed data model, so that the data model can be evaluated from the semantic aspect.

In the proposed process [1] of data model design evaluation (Fig. 1) various software tools are used. We propose a system of integrated software tools that would automate the whole process and therefore enhance the quality of data model evaluation, as well as of data model design. The basis of the integration relies on using XML as the form of the results that some of the software tools produce.

Figure 1. Proposed process of data model evaluation [1]: the domain ontology (OWL, RDF, created in the Protege ontology tool from the business process model, data flow model and data dictionary) and the data model (EER, OOM, XML, created in the Power Designer CASE tool) are both mapped to predicate calculus, merged with the reasoning rules, and passed to the PROLOG automated reasoning system, which evaluates the data model and checks its compatibility with the ontology.

This system is to be used in a process with the following steps (a sketch of the merged PROLOG input follows this list):
• Creating axioms that describe general reasoning rules for a specific type of data model (EER, RM, OOM etc.). Axioms are reasoning rules that are applied to any semantics presented by an ontology and any data model that is to be evaluated. Axioms are used as the general theoretical foundation for checking the syntax and semantic parts of data model quality.
• Creating the ontology, by using knowledge from the business process and data flow models as well as the data dictionary.
• Establishing a correlation and mapping between the ontology and a formal logic language, i.e. transforming the ontology to a form that is appropriate as automated reasoning system input. Here this is first order predicate logic calculus.
• Formalization of data models, by using first order predicate logic calculus as a formal language.
• Creating a system of axioms that will be merged with the formally presented ontology and the formally presented data models within the transformation tool, to be processed by the automated reasoning system. These axioms are generally applicable to any data model, regardless of the specific semantics involved. They are a set of rules to be applied over the data model elements so that the main conclusion can be made about data model quality. The axioms are used as reasoning rules that define the elements of correctness of a data model.
• Transformation of the first order predicate calculus form of the ontology and the data model to a form that is suitable for processing in the selected automated reasoning system, i.e. to clauses in PROLOG.
• Merging the clauses regarding the ontology, the data model and the reasoning rules into an input file for PROLOG.
• Entering the input file into the automated reasoning system, setting queries and getting results.
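The sketch below shows how such a merged PROLOG input file could be organized. The section grouping is our own assumption of what the transformation tool would emit; the concrete clauses anticipate the example in Section VI.

% A minimal sketch of a merged PROLOG input file (assumed layout).

% --- clauses obtained by mapping the OWL ontology ---
person(X) :- student(X).
person(X) :- teacher(X).
student(ana).

% --- clauses obtained by formalizing the data model ---
ent(student).
atr(studentID).
res(identAtr).
p(student, studentID).
p(studentID, identAtr).

% --- generally applicable axioms (reasoning rules) ---
hasidattribute(X) :-
    ent(X), atr(Y), res(identAtr), p(Y, identAtr), p(X, Y).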

The software tools that form the parts of the proposed system [1] are the following empirically tested tools:
• PROTÉGÉ – an ontology tool, which saves the modeling result in an output file with the OWL extension, whose structure is readable as any XML but is specifically formatted using the OWL language.
• Sybase Power Designer – a CASE tool for data modeling, which saves the modeling result in an output file with the CDM extension for a Conceptual Data Model (EER model), the PDM extension for a Physical Data Model (relational data model) and the OOM extension for a Class Diagram of a UML Object-Oriented Model. Each of these files is in fact an XML file that contains the parts of the data model and its descriptions.
• PROLOG – an automated reasoning tool. The input to PROLOG is a file with the PRO extension, which is in fact a specially formatted textual file.

VI. EXAMPLE

A. Ontology

An example of an ontology called "Student ontology" is graphically presented in Fig. 2, with ontology classes and a few individuals (Fig. 3) that are classified into the classes Student, Teacher and Course. Students take exams for the courses they attend, teachers teach students and evaluate students' exams.

Figure 2. Ontology classes

Figure 3. Ontology individuals with classes and properties

The ontology is created in Protégé with XML/OWL output form. By applying the mapping rule for the "C subClassOf D" relationship [21] (pattern (2) above), the ontology classes can be transformed into two PROLOG clauses:

Person(X) :- Student(X).
Person(X) :- Teacher(X).

B. Data Model

Fig. 4 shows an example of a simple EER data model diagram. The semantics of this example is: "A student passes the exam in a selected course. The exam is evaluated by a teacher."

Figure 4. Example of EER data model diagram

The EER data model in this example can be formally presented [11] as a tuple (E, A, R, P):

E = {student, teacher, course}

A = {studentID, firstName, lastName, year, examDate, score, courseID, teacherID, name, rank}

R = {exam, evaluate, identAtr, string, integer, date, cardlow0, cardlow1, cardup1, cardupm}

P = {p1(student, studentID), p2(student, firstName), p3(student, lastName), p4(student, year), p5(exam, examDate), p6(course, courseID), p7(course, name), p8(evaluate, score), p9(teacher, teacherID), p10(teacher, firstName), p11(teacher, lastName), p12(teacher, rank), p13(studentID, integer), p14(firstName, string), p15(lastName, string), p16(year, integer), p17(examDate, date), p18(score, integer), p19(courseID, integer), p20(teacherID, integer), p21(name, string), p22(rank, string), p23(studentID, identAtr), p24(courseID, identAtr), p25(teacherID, identAtr), p26(student, exam), p27(exam, course), p28(exam, evaluate), p29(evaluate, teacher), p30(cardlow0, exam), p31(cardupm, exam), p32(exam, cardlow0), p33(exam, cardupm), p34(cardlow1, evaluate), p35(cardup1, evaluate), p36(evaluate, cardlow0), p37(evaluate, cardupm)}

The Prolog clauses generated from the previously formalized data model for this example are:

ent(teacher). ent(student). ent(course).
atr(studentID). atr(firstName). atr(lastName). atr(year). atr(examDate). atr(score). atr(courseID). atr(teacherID). atr(name). atr(rank).
res(exam). res(evaluate). res(identAtr). res(string). res(integer). res(date). res(cardlow0). res(cardlow1). res(cardup1). res(cardupm). res(relationship).
p(student, studentID). p(student, firstName). p(student, lastName). p(student, year). p(exam, examDate). p(course, courseID). p(course, name). p(evaluate, score). p(teacher, teacherID). p(teacher, firstName). p(teacher, lastName). p(teacher, rank).
p(studentID, integer). p(firstName, string). p(lastName, string). p(year, integer). p(examDate, date). p(score, integer). p(courseID, integer). p(teacherID, integer). p(name, string). p(rank, string).
p(studentID, identAtr). p(courseID, identAtr). p(teacherID, identAtr).
p(student, exam). p(course, exam). p(exam, evaluate). p(teacher, evaluate).
p(cardlow0, exam). p(cardupm, exam). p(exam, cardlow0). p(exam, cardupm). p(cardlow1, evaluate). p(cardup1, evaluate). p(evaluate, cardlow0). p(evaluate, cardupm).
p(exam, relationship). p(evaluate, relationship).

C. Axioms – Reasoning Rules

This example shows some reasoning rules that can be applied in the process of conceptual (EER) data model evaluation. The rules are presented in the form of text, predicate logic and PROLOG clauses.

1. RULE – Each entity must have an identifying attribute.

Predicate logic form:

(∀x)(∀y)((ent(x) ∧ atr(y) ∧ res(identAtr) ∧ p(y, identAtr) ∧ p(x, y)) ⇒ hasidattribute(x))    (5)

Prolog clause:

hasidattribute(X) :- ent(X), atr(Y), res(identAtr), p(Y, identAtr), p(X, Y).

2. RULE – Each entity must be in a relationship with at least one other entity.

Predicate logic form:

(∀x)(∀y)(∀z)((ent(x) ∧ p(x, z) ∧ p(z, relationship) ∧ res(relationship) ∧ p(y, z) ∧ ent(y) ∧ (y ≠ x)) ⇒ hasrelationship(x))    (6)

Prolog clause:

hasrelationship(X) :- ent(X), p(X, Z), p(Z, relationship), res(relationship), p(Y, Z), ent(Y), not X = Y.

3. RULE – An OWL class exists among the entities of the data model.

Prolog clause:

owlclassisentity(X, Y, Z) :- functor(X, Y, Z), ent(Y), Z > 0.

The third rule establishes the connection between the ontology classes and the entities of the data model being tested. Each class from the ontology can be searched for among the entities of the data model. If the class is found, then that class is "covered" by that entity and the data model can be considered correct at that point.

D. Prolog Queries and Answers

After the integration of the clauses from the ontology, the data model and the axioms (reasoning rules) into an input for PROLOG, it is possible to set queries and retrieve answers that are processed according to the input clauses. Three queries (goals) illustrate the use of PROLOG for getting answers regarding the evaluation of the data model formalized in this example. The queries use the previously formalized reasoning rules together with the clauses from the data model and the ontology:

Goal (testing 1st rule): ?- hasidattribute(Ent).
Answer: Ent = teacher, Ent = student, Ent = course.

Goal (testing 2nd rule): ?- hasrelationship(Ent).
Answer: Ent = student, Ent = course.
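For readers who want to reproduce the example, the following is a minimal self-contained condensation of the clauses and rules above, adapted by us to modern standard Prolog (e.g. SWI-Prolog): the older "not X = Y" is written X \= Y, and only a fragment of the data model facts is repeated. This is our own sketch, not the authors' original input file.

% A runnable condensation of the example (SWI-Prolog syntax).

% data model fragment
ent(student).  ent(course).
atr(studentID).  atr(courseID).
res(identAtr).  res(relationship).
p(student, studentID).  p(studentID, identAtr).
p(course, courseID).  p(courseID, identAtr).
p(student, exam).  p(course, exam).  p(exam, relationship).

% rule 1: the entity has an identifying attribute
hasidattribute(X) :-
    ent(X), atr(Y), res(identAtr), p(Y, identAtr), p(X, Y).

% rule 2: the entity is related to at least one other entity
% (X \= Y is the ISO form of the older "not X = Y")
hasrelationship(X) :-
    ent(X), p(X, Z), p(Z, relationship), res(relationship),
    p(Y, Z), ent(Y), X \= Y.

% rule 3: an OWL class term matches an entity of the data model
owlclassisentity(X, Y, Z) :- functor(X, Y, Z), ent(Y), Z > 0.

% example session:
% ?- hasidattribute(E).                      % E = student ; E = course.
% ?- hasrelationship(E).                     % E = student ; E = course.
% ?- owlclassisentity(student(ana), N, A).   % N = student, A = 1.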

Goal (testing 3rd rule): ?- owlclassisentity(student(ana), Entity, NoOfArg).
The answer is displayed in Fig. 5.

Figure 5. Results of testing 3rd rule in PROLOG

Fig. 5 shows the use of PROLOG for testing the completeness of the data model by comparing an ontology class with the entities of the data model. In this case, the class student has been searched for and found among the entities of the data model.


VII. CONCLUSION

This paper presents a theoretical survey and experimental research results in the field of data model evaluation. Data model quality is one aspect of data quality as a general concept. The quality of data models is related to the data model creation process and to the errors that occur in conceptual model design. Results of an empirical survey of student exam errors in the field of database design are presented, and a list of error types is given, categorized into syntax and semantic errors. Related research regarding errors in conceptual data model design has also been presented.

Since CASE tools do not support the semantic aspect of data model evaluation, we proposed a model of a software system that integrates an ontology tool, a CASE tool and an automated reasoning tool for the complete (syntax and semantic) evaluation of a data model. The ontology tool is used for creating the ontology that describes the semantic aspect of the business domain. The CASE tool is used for data model creation (integrated with previous business modeling within the CASE environment). For each type of data model, axioms are created as generally applicable reasoning rules for any semantics or data model to be evaluated. As empirically tested and used tools, we propose PROTÉGÉ as the ontology tool, Sybase Power Designer as the CASE tool and PROLOG as the automated reasoning tool. The integration of the ontology, the data model and the axioms is possible because the ontology and the data model are given in XML form. They are transformed and integrated into a unique PROLOG input, in clause form.

To present experimental results of using the proposed system, we presented an example of an ontology in graphical and clause form and an example of an EER data model in graphical and clause form. We also presented examples of reasoning rules for conceptual data models in textual, predicate logic and clause form. All these components were merged into the PROLOG input, and each of the proposed rules was tested experimentally. It was shown that Prolog produced answers to the given queries regarding data model evaluation in the aspects specified by the reasoning rules.

REFERENCES

[1] Lj. Kazi, Z. Kazi, B. Radulovic, I. Berkovic, "Integration of Software Tools as Support to Data Model Evaluation", 13th International Conference Dependability and Quality Management ICDQM, Belgrade, 2010.
[2] C. Batini, M. Scannapieco, "Data Quality", Springer, 2006.
[3] R. Y. Wang, M. Ziad, Y. W. Lee, "Data Quality", Kluwer Academic Publishers, 2001.
[4] T. C. Redman, "Data Quality: The Field Guide", Digital Press, 2001.
[5] P. Buneman, S. Khanna, W. C. Tan, "Why and Where: A Characterization of Data Provenance", International Conference on Database Theory, London, UK, 2001.
[6] P. Shoval, "Experimental Comparisons of Entity-Relationship and Object-Oriented Data Models", Journal AJIS, Vol. 4, No. 2, 1997.
[7] A. De Lucia, C. Gravino, R. Oliveto, G. Tortora, "An Experimental Comparison of ER and UML Class Diagrams for Data Modelling", Empirical Software Engineering, Springer, 2009.
[8] D. B. Bock, "Entity-Relationship Modelling and Normalization Errors", Journal of Database Management, 1997.
[9] D. Batra, S. R. Antony, "Consulting Support During Conceptual Database Design in the Presence of Redundancy in Requirements Specifications: An Empirical Study", International Journal of Human-Computer Studies, Elsevier, 2001.
[10] D. Batra, N. A. Wishart, "Comparing a Rule-Based Approach with a Pattern-Based Approach at Different Levels of Complexity of Conceptual Modelling Tasks", International Journal of Human-Computer Studies, Elsevier, 2004.
[11] V. Sugumaran, V. C. Storey, "The Role of Domain Ontologies in Database Design: An Ontology Management and Conceptual Design Environment", ACM Transactions on Database Systems, Vol. 31, No. 3, September 2006.
[12] M. C. Emer, S. R. Vergilio, M. Jino, "Testing Relational Database Schemas with Alternative Instance Analysis", 20th International Conference on Software Engineering & Knowledge Engineering (SEKE 2008), San Francisco, USA, 2008.
[13] A. Formica, M. Missikoff, "Correctness of ISA Hierarchies in Object-Oriented Database Schemas", Advances in Database Technology, Springer, Berlin/Heidelberg, 2006.
[14] R. Elmasri, S. B. Navathe, "Fundamentals of Database Systems", Addison Wesley, 2007.
[15] J. Ullman, H. Garcia-Molina, J. Widom, "Database Systems: The Complete Book", Prentice Hall, New Jersey, 2002.
[16] P. Mogin, I. Lukovic, "Principles of Databases" [Principi baza podataka], University of Novi Sad, Faculty of Engineering, Stylos, Novi Sad, 1996.
[17] V. Devedzic, "Intelligent Systems Technology" [Tehnologije inteligentnih sistema], University of Belgrade, Faculty of Organizational Sciences, Belgrade, 2004.
[18] OWL Web Ontology Language Guide, http://www.w3.org/TR/owl-guide/
[19] M. Horridge, "A Practical Guide to Building OWL Ontologies Using Protégé 4 and CO-ODE Tools", Edition 1.2, The University of Manchester, 2009.
[20] N. Lammari, E. Métais, "Building and Maintaining Ontologies: A Set of Algorithms", Data & Knowledge Engineering, Vol. 48, pp. 155-176, Elsevier, 2004.
[21] S. Bergamaschi, F. Guerra, M. Vincini, "Critical Analysis of the Emerging Ontology Languages and Standards", Project: Web Intelligent Search based on Domain Ontologies, http://www.dbgroup.unimo.it/wisdom/deliverables/fase_1/d1r1.pdf, Italy, 2004.
[22] R. Volz, S. Decker, D. Oberle, "Bubo - Implementing OWL in Rule-Based Systems", DARPA Agent Markup Language Program, http://www.daml.org/listarchive/joint-committee/att-1254/01bubo.pdf
