Proceedings of the New Trends in Information Technology (NTIT-2017). The University of Jordan, Amman, Jordan. 25-27 April 2017.

A Proposed Model for Extracting Information from Arabic-Based Controlled Text Domains

Mohammad Fasha, Nadim Obeid and Bassam Hammo
Department of Computer Science, King Abdulla II School for Information Technology, The University of Jordan, Amman
E-mail: [email protected], [email protected], [email protected]

Abstract

This paper proposes a model for extracting information from Arabic-based controlled text domains. We define controlled text domains as text domains that are not restricted in terms of their linguistic features or their knowledge types, yet are not totally undetermined in these respects. A two-phase Information Extraction (IE) scheme assisted by Description Logic (DL) is proposed. The first phase focuses on atomic concepts and relations using an interaction process between Arabic-compliant Part-of-Speech (POS) tagging and a supporting Ontology Web Language (OWL) ontology, while higher-order relations as well as implicit knowledge will be extracted using the inferencing capabilities of Description Logic (DL). To implement this model, a multi-step work methodology is proposed and the work areas within each step are identified. The proposed model could be considered a first step towards enabling and developing text-understanding systems.

Keywords: Information Extraction, ANLP, OWL Ontologies, Description Logic, Regular Expressions, POS Tagging.

1 INTRODUCTION

Text understanding is a fundamental unsolved problem in artificial intelligence and computational linguistics [1]. This problem is commonly associated with common-sense reasoning, which is a hard problem that requires skills from different realms, i.e. physical, social, conversational, procedural, sensory, motor, temporal, etc. [2]. Although the two previously referenced citations are relatively old, their premises still hold true to this date.

In this paper, we aim to explore some of the Arabic linguistic challenges under the new concept of controlled domains. We define controlled domains as text domains that are relatively open and free, yet not totally undetermined in terms of their syntactic and, to some extent, their semantic content. Several domains can be classified under such a definition, including narrative-driven text, i.e. news articles, children's stories for younger ages (4-6 years old) and others.

Information extraction (IE) from text is considered an important step towards enabling text understanding. Information extraction itself can be defined as the process of extracting specific factual types of information from text [3]; in other words, it is the process of transforming selected pieces of knowledge from unstructured text into structured knowledge.

Most (IE) implementations focus on selected features under restricted domains where text variations are tightly constrained [4]. Approaching open text domains is considered a daunting task even for the most devoted enthusiasts [5], [6]. Information extraction from Arabic text poses additional challenges. These challenges are caused by numerous distinguishing features of the Arabic language, including its rich morphology, its highly inflectional nature, the omission of short vowels, free word order and many others. These features create numerous ambiguities when computationally approaching Arabic text, i.e. lexical, structural, semantic and anaphoric ambiguities. This could be considered a contributing factor to the scarcity of Arabic-based related work, whether in restricted domains or in more relaxed ones.

The paper is structured as follows. In Section 2, a brief discussion of related previous work is presented. Section 3 presents some highlights of the new concept of controlled text domains. In Section 4, a general overview of the proposed model is provided. In Section 5, more details are presented about the process steps that are proposed to implement and assess the model. Finally, the conclusion and the suggested future work are discussed in Section 6.

2 RELATED WORK

In comparison with other languages, we observe a scarcity of efforts related to Arabic-based information extraction [7]. This could be partly imputed to the complexity of the Arabic language and to the lack of supporting resources such as adequate annotated corpora and reliable related tools and algorithms. Nevertheless, efforts are occasionally presented in Arabic-based (IE) or other correlated fields. A considerable amount of Arabic-based (IE) work can be classified under rule-based schemes. In this respect, an interesting effort was presented by [8], where a model for the automatic detection of explicit causal relations from Arabic texts was presented. The model was based on a rule-based pattern recognizer that can detect around 700 linguistic patterns, which were devised based on an analysis process over an untagged corpus. That effort was taken a step further by [9], who presented a model based on Rhetorical Structure Theory (RST) to answer a set of non-factual types of questions (i.e. why and how questions, as opposed to factual what, who and where questions). Another recent effort was introduced by [10], where a model for annotating normative provisions in Arabic legal texts was presented. In that work, annotation can be considered a synonym of (IE), as the outcome of the model was the correct semantic labelling of given text tokens. The presented model used a rule-based scheme to identify explicit linguistic markers that are commonly used by legislators to indicate specific provision categories in Arabic normative texts. Similarly, [11] presented an automated tool that can semantically annotate domain-specific Arabic web documents assisted by ontologies. The objective was to incorporate semantic notions into the web-searching process to enhance search and retrieval accuracy. Two more recent approaches were presented by [12] and [13]. The former presented a rule-based approach to extract Named Entities (NEs) from semi-structured Arabic text (i.e. Wikipedia), but no sufficient justification was provided for the validity of using Wikipedia titles as gold standards for deriving named entities. The latter employed a two-step rule-based algorithm to extract (NEs), where the first step analyzed Arabic tokens based on a pre-defined set of grammar rules and the second used regular expressions to search for the initially identified entities using (GATE) Gazetteers. Reference [14] presented a model for extracting information related to future events from newspaper articles. The theme of that effort was to employ Arabic future verbal proclitics to indicate and identify future events in a "pre-normalized" set of news articles. Reference [15] presented an interesting model for the automatic extraction of ontology relationships from Arabic text, including the cause-effect, is-a, part-whole, has-a, kind-of and other relations. A modified version of the (Hearst, 1992) algorithm was employed to implement that model. Reference [16] proposed a model for extracting discourse relations from Arabic text based on the assumption that most adjacent Arabic sentences are connected through explicit connectives. The focus of that study was to exploit the explicit discourse connectives in text (i.e. keywords) using a supervised learning scheme. Reference [17] presented a model for extracting relations from the Arabic version of Wikipedia using semantic field theory, where "Info Boxes" and lists of categories were analyzed to identify relations in Wikipedia's articles. An early approach was presented by [18], who proposed a model empowered by ontologies to assist in extracting legal terms from related documents, where specific terms were extracted and placed into a set of predefined slots.

Similarly, the model proposed in this paper follows the same rule-based notions. In the next sections, more highlights are provided about the proposed model, including its key features and the proposed work areas.

3 CONTROLLED TEXT DOMAINS

By definition, information extraction implementations are bounded by the text domain being investigated, i.e. legal, medical, etc., and by the specific knowledge types that are targeted by the extraction process, i.e. cause-effect relations, future events, named entities, etc. On the other hand, a main objective of our proposed model is to investigate the potential of approaching (IE) over a more relaxed version of text domains; hence, we introduce the new concept of "Controlled Text Domains". In this respect, we define controlled text domains as arbitrary text domains that exhibit certain syntactic and semantic features that can be represented and extracted by a certain information extraction model. Having stated that, the model we are proposing approaches the (IE) problem from a broadened perspective in which the knowledge-supporting model becomes the key factor of the (IE) process, rather than the targeted set of information being extracted. Accordingly, the question we shall be exploring is not how a certain relation would be extracted under a certain text domain, but rather to what extent a certain knowledge-supporting model, along with shallow extraction arrangements, can support the information extraction process.

4 THE PROPOSED MODEL

The model we are proposing is based on the employment of Description Logic (DL) in a two-phase process to assess the potential of enhancing the information extraction experience over Arabic-based controlled text domains. The first phase of the (IE) process targets atomic types of knowledge that are explicitly exhibited by text, including unary concepts and binary relations, while the second phase involves the extraction of a selected set of implicit knowledge types as well as higher-order relations using the inferencing capabilities of (DL). Apparently, such a model is multifaceted, involving language along with its different challenges, knowledge representation and processing, and, more importantly, the relation and interaction between language and knowledge during the information extraction process.

In principle, the proposed model can be characterized by three main work packages, which are demonstrated in Figure 1 below. The first work package is related to the preparation of the selected corpus; work in this area is mostly related to language, its syntax, morphology and grammar. The second main work area involves language as well as knowledge engineering; the main concern of this stage is to establish an adequate knowledge representation model as well as a suitable information extraction scheme that can extract concepts and relations from text according to the identified knowledge-supporting model. The third main engagement is related to knowledge engineering, more precisely, establishing the general axioms that represent composite concepts and relations using Description Logic (DL), which can enable enriching and extending the initially extracted information.

Figure 1 - General overview of the proposed model

A main objective of the proposed course of work is to investigate the key elements that are involved in transforming Arabic text into a computationally enabled model, which can later be employed to solve different types of problems, i.e. formal representation of knowledge, question answering, theme extraction, etc. Therefore, the multi-step methodology presented in Figure 2 below will be followed in order to construct and validate the proposed model. This proposed work methodology resembles a pipeline of sequential and progressive steps, where each step builds on the input provided by a prior one and transforms it into an artifact that is suitable for the next step of processing.

Figure 2 - The process steps of the proposed model

In the next section, more details are provided about the suggested process steps, including some of the expected challenges as well as the proposed alternatives in that regard.

5 THE PROPOSED WORK METHODOLOGY

5.1 Selecting a Suitable Corpus

The first task in the proposed work methodology is to select a suitable corpus that can serve the objectives of this study. A key feature of the sought corpus would be that it conforms to the presented definition of controlled domains. Therefore, the required corpus has to be relatively open and more permitting than the conventionally selected domains, i.e. medical and legal reports, domain-specific announcements, commercial publications and others, where information extraction is closer to keyword matching. An initial survey reveals that this engagement will be challenging enough, as there is a scarcity of previous Arabic-based efforts on the same subject. Nevertheless, there are a number of prospective domains that can be investigated, as they exhibit the sought features of open text domains; these include news articles and children's stories for younger ages. Consequently, our work shall commence with surveying the World Wide Web as well as Arabic children's publications for a suitable corpus that can be used to assess the proposed model.

5.2 Analyzing the Selected Corpus

Considering the rule-based scheme adopted in this work, an initial engagement would be to analyze the selected corpus looking for generalization potential. This includes the types of syntactic structures, atomic concepts and relations, as well as higher-order relations and composite concepts. Findings about the dominant syntactic structures will be used to define a suitable annotation scheme. Findings about the dominant atomic concepts and relations will later be reflected in the knowledge representation model and the (IE) module, while findings about the composite concepts and relations will be employed to establish a set of general axioms that can be used to enrich the information extraction experience. An expected main challenge of the analysis process is related to the ability to identify the key features in text that can be used to extract accurate information, yet be general enough to allow sharing the model with other similar domains.

5.3 Corpus Annotation

An annotated corpus is a fundamental resource for many statistical and rule-based (IE) implementations. Commonly, annotation is applied using Part-of-Speech (POS) tagging, partial or full parse trees and, more often, some notions of semantics.

Based on many previous findings, e.g. [19], a conclusion was reached that a morphologically aware (POS) tagging scheme is important for improving information extraction accuracy. Therefore, a survey over the available and accessible resources will be performed in order to define a suitable (POS) annotation scheme for annotating the selected corpus, knowing that this scheme should include annotation guidelines, a custom tag set and, preferably, an enabling tool. In this respect, our work will commence with surveying related literature as well as examining some of the common (NLP) toolkits, (POS) taggers, parsers and morphology analyzers, including [20]–[25] and others.
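As an illustration of the kind of tooling under survey, the following minimal sketch tags an Arabic sentence using the Stanford POS tagger [21]. This is only a sketch under stated assumptions: the model file name and path are assumptions that depend on the tagger distribution in use.

import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class ArabicPosDemo {
    public static void main(String[] args) {
        // Load a pre-trained Arabic tagging model; the file name is an
        // assumption that depends on the downloaded tagger distribution.
        MaxentTagger tagger = new MaxentTagger("models/arabic.tagger");

        // A short Arabic sentence: "the boy went to the school".
        String sentence = "ذهب الولد إلى المدرسة";

        // tagString() returns the tokens annotated with POS tags in
        // token/TAG form, which suits the regex patterns discussed later.
        System.out.println(tagger.tagString(sentence));
    }
}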

5.4 The Supporting Ontology

Common practices in formal knowledge representation adhere to the notions of conceptualization or abstraction in an area of interest [26]. In this respect, ontologies emerge as a popular and proven model for knowledge representation and processing and as an explicit representation of a conceptualization. Acknowledging that there is no single correct methodology for designing ontologies and that the design motivations might vary according to the domain of interest and the objectives of the ontology's usage [27], the ontology design guidelines presented by [28] will be adopted to construct the required knowledge representation model. These guidelines involve the identification of the purpose and the scope of the ontology, the identification of the key concepts and relations related to the domain of interest, the identification of the actual terms, coding the ontology and, finally, placing the design under evaluation. Protégé [27] will be used to assist in designing the required ontology and to produce it in (OWL) format suitable for further processing.
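To make the intended representation concrete, the following minimal sketch codes a small fragment of such an ontology using the (OWL API) [34]. The ontology IRI and the names (Vehicle, Car, Move, canDo) are illustrative assumptions rather than the final design.

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class OntologySketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        // The ontology IRI is a placeholder assumption.
        String base = "http://example.org/controlled-domain#";
        OWLOntology ontology = manager.createOntology(IRI.create(base));

        OWLClass vehicle = factory.getOWLClass(IRI.create(base + "Vehicle"));
        OWLClass car = factory.getOWLClass(IRI.create(base + "Car"));
        OWLClass move = factory.getOWLClass(IRI.create(base + "Move"));
        OWLObjectProperty canDo = factory.getOWLObjectProperty(IRI.create(base + "canDo"));

        // Car is a sub-concept of Vehicle, and every Vehicle can perform a Move.
        manager.addAxiom(ontology, factory.getOWLSubClassOfAxiom(car, vehicle));
        manager.addAxiom(ontology, factory.getOWLSubClassOfAxiom(
                vehicle, factory.getOWLObjectSomeValuesFrom(canDo, move)));
    }
}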

5.5 Information Extraction Scheme

Natural language is the human vehicle for communicating and sharing all forms of knowledge. How natural language and knowledge are transformed by human faculties is still vague (depicted as gray areas in Figure 3 below), while some scholars have argued that language might be an innate skill embodied within humans since birth [29].

Figure 3 - The transformation between natural language and knowledge by humans

Nevertheless, in our proposed model, it is suggested to employ a combination of syntactic and semantic elements to fill that gray area. The syntactic part will be responsible for extracting concepts and relations using pattern-matching techniques, while the semantic part will be employed to validate the accuracy of the extracted information. This is important because language variation is vast and speech parts might overlap between different concepts, i.e. the same syntactic pattern can exhibit different senses. Semantic intervention would be essential for enforcing notions of selectional restrictions [30] during the information extraction process. Figure 4 below presents a general highlight of the information extraction module.

Figure 4 - General highlights about the (IE) model

For the syntactic part, regular expressions (regex) will be employed to establish the matching patterns. Regular expressions are powerful in string-based search and match operations, and they are capable of identifying simple as well as more complex patterns [31]. In the same respect, it is suggested to refer to Arabic grammar rules, e.g. [32], [33], while defining the regex patterns in order to comply with the theoretical background of Arabic grammar and to produce more plausible outcomes. For the semantic part, Description Logic will be employed to assist in resolving ambiguities and enhancing information extraction accuracy. The (OWL API) library [34] will be employed to enable the interfacing between the Java-based (IE) module and the (OWL)-based ontology, and the (HermiT 1.3.8.413) reasoner will be used to exploit (DL) inferencing capabilities directly from the (IE) module. Figure 5 presents a high-level diagram demonstrating the proposed (IE) algorithm. In principle, the algorithm will perform a sequential iteration over sentences and examine each sentence for occurrences of any of the pre-defined patterns. Once a relation is identified, the supporting ontology will be interrogated to determine the validity of the extracted relation. Initially, the (CanDo) and (CanBeObjectOf) types of relations will be experimented with to assess the role of (DL) in enabling selectional restrictions over the extracted types of knowledge.

Figure 5 - The proposed information extraction algorithm
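The following sketch illustrates the syntactic step of that algorithm under stated assumptions: the input is a POS-tagged sentence in token/TAG form, the verb-subject pattern is a simplified illustration, and isValidRelation() is a hypothetical placeholder for the ontology interrogation described above.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PatternExtractorSketch {
    // A simplified pattern for a tagged verb followed by a definite noun,
    // e.g. "ذهب/VBD الولد/DTNN": group(1) is the verb, group(2) the noun.
    private static final Pattern VERB_SUBJECT =
            Pattern.compile("(\\S+)/VBD\\s+(\\S+)/DTNN");

    public static void main(String[] args) {
        String tagged = "ذهب/VBD الولد/DTNN إلى/IN المدرسة/DTNN";
        Matcher m = VERB_SUBJECT.matcher(tagged);
        while (m.find()) {
            String verb = m.group(1);
            String subject = m.group(2);
            // In the proposed model, this check would interrogate the OWL
            // ontology through a DL reasoner; it is stubbed out here.
            if (isValidRelation(subject, verb)) {
                System.out.println("CanDo(" + verb + ", " + subject + ")");
            }
        }
    }

    // Hypothetical placeholder for the semantic validation step.
    private static boolean isValidRelation(String subject, String verb) {
        return true;
    }
}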

In this respect, it should be emphasized that representing the knowledge-supporting model in (OWL) and processing it using (DL) is expected to provide numerous advantages during this phase of information extraction. These advantages are motivated by the (DL) inferencing capabilities that can extend knowledge. As an example, subsumption and equivalence relations between classes would benefit from knowledge related to parent or equivalent concepts. For instance, if a "Vehicle" can move, a "Car" is a subtype of or equivalent to the concept of a vehicle, and a car is the concept that was initially identified in text, then it can be concluded that the car can also move (CanDo(Move, Car)). The same holds true for object relations, where transitive, symmetric, asymmetric and other relation characteristics can extend the knowledge of an instantiated relation.
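In standard (DL) notation, and reusing the illustrative names from the example, this entailment can be written as:

\[
Car \sqsubseteq Vehicle, \qquad
Vehicle \sqsubseteq \exists canDo.Move
\;\;\models\;\;
Car \sqsubseteq \exists canDo.Move
\]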

5.6 Populating the Ontology with the Extracted Knowledge

The (IE) process that was presented in the previous section will generate an atomic set of concepts and relations that are compliant with the supporting (OWL) ontology. These concepts and relations have to be populated into the ontology to enable further reasoning and higher-order processing. In this respect, the (OWL API) library will be utilized to achieve the automatic population of the ontology. During the ontology population process, the annotation features of (OWL) will also be used to establish the mapping between concepts in the ontology and text tokens. Annotations will be integrated with WordNet to provide synset functionalities that can extend the coverage of a given concept to a broadened set of text tokens.
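A minimal population sketch is shown below, continuing the variables (manager, factory, ontology, base, car, canDo) from the Section 5.4 sketch; the individual names and the Arabic label are illustrative assumptions, not project data.

// Create individuals for an entity and an action extracted from text.
OWLNamedIndividual myCar = factory.getOWLNamedIndividual(IRI.create(base + "car_01"));
OWLNamedIndividual moveEvent = factory.getOWLNamedIndividual(IRI.create(base + "move_01"));

// Assert that the extracted entity is a Car and stands in a canDo relation.
manager.addAxiom(ontology, factory.getOWLClassAssertionAxiom(car, myCar));
manager.addAxiom(ontology, factory.getOWLObjectPropertyAssertionAxiom(canDo, myCar, moveEvent));

// Use an rdfs:label annotation to map the ontology individual to its text token.
OWLAnnotation label = factory.getOWLAnnotation(
        factory.getRDFSLabel(),
        factory.getOWLLiteral("سيارة", "ar")); // the Arabic token for "car"
manager.addAxiom(ontology, factory.getOWLAnnotationAssertionAxiom(myCar.getIRI(), label));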

5.7 Extending Knowledge using Description Logic

Description Logics (DLs) are a family of knowledge representation formalisms with well-formed formal properties [35]. (DLs) provide numerous competent features; most importantly, they are adequate for representing domain-specific knowledge, they have solid formal logic-based semantics and, unlike first-order logic, a (DL) query has to terminate with either a positive or a negative response. Furthermore, (DLs) provide competent reasoning capabilities, whereby new implicit knowledge is extracted from existing explicit knowledge. Another important feature that was found appealing is that knowledge representation and processing by logic is more intuitive to humans and compatible with our perceptual faculties, in contrast to statistically based schemes that are closer to black boxes. Lastly, the general theme of the proposed model is to employ a syntactic-semantic enabled process during the atomic concept extraction phase, while (DL) can be employed later to establish higher-order (n-ary) relations that are composites of atomic ones. (DL) can adequately serve this purpose, as its basic building blocks are atomic concepts, roles and individuals, while higher-order relations are defined as composites of these atomic ones. In this respect, Protégé's (DL Query) and Semantic Web Rule Language (SWRL) plugins, along with built-in reasoners, will be used to examine knowledge enrichment potential over the selected controlled domain. While doing so, we shall comply with the generalization and abstraction themes presented in Figure 6 below, where atomic concepts aggregate to create more general and composite ones and where factual knowledge and principles are concluded.
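As an illustration of the kind of rule that could be attempted at this stage, a SWRL-style composition of two atomic relations into a higher-order one might look as follows; the canActOn predicate is a hypothetical name introduced for this sketch only:

\[
canDo(?x, ?a) \wedge canBeObjectOf(?y, ?a) \rightarrow canActOn(?x, ?y)
\]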


Figure 6 - Information and knowledge abstraction pyramid

6 CONCLUSION AND FUTURE WORK

This paper presented a proposed model for information extraction from controlled text domains. Controlled domains were defined as relatively open text domains, in contrast to restricted ones. News articles and children's stories for younger ages were identified as prospective controlled text domains that are plausible for further investigation. The focus of the proposed model is on the Arabic language, as it encompasses numerous challenging features that limit efforts and achievements in this regard. Exploring new approaches for Arabic-based (IE) might assist in overcoming some of the challenges as well as opening doors to new insights. The model proposes a two-phase information extraction process. In the first phase, atomic concepts and relations will be extracted, while in the second, higher-order relations and composite concepts are targeted. The first phase of information extraction will be implemented using an Arabic-compliant (POS) tagging scheme along with a regex-based, syntactic-semantic enabled (IE) process. The second phase will be implemented using Description Logic along with its competent and well-formalized inferencing capabilities. Moreover, a suggested pipeline-like work methodology was presented and the key work areas within that methodology were identified. The expected future work includes commencing with the model implementation according to the proposed work methodology. This initially entails selecting, analyzing and annotating a suitable corpus and establishing its adequate knowledge-supporting model. Based on the findings of that work, (Phase-I) and (Phase-II) of the information extraction process can be experimented with, and a comprehensive assessment of the proposed model can be presented.

REFERENCES

[1] E. T. Mueller, "Story understanding through multi-representation model construction," Text Meaning: Proc. HLT-NAACL 2003 Workshop, pp. 46–53, 2003.

[2] M. Minsky, Society of Mind, vol. 48, no. 3, 1991.

[3] T. Poibeau, H. Saggion, J. Piskorski, and R. Yangarber, "Multi-source, Multilingual Information Extraction and Summarization," Theory Appl. Nat. Lang. Process., pp. 23–50, 2013.

[4] N. Asghar, "Automatic Extraction of Causal Relations from Natural Language Texts: A Comprehensive Survey," arXiv preprint, 2016.

[5] S. Alqrainy, S. Jordan, and M. S. Alkoffash, "Context-Free Grammar Analysis for Arabic Sentences," Int. J. Comput. Appl., vol. 53, no. 3, pp. 7–11, 2012.

[6] E. Riloff, "Information extraction as a stepping stone toward story understanding," Understanding Language Understanding: Computational Models of Reading, pp. 435–460, 1999.

[7] A. A. Elmadany, S. M. Abdou, and M. Gheith, "A Survey of Arabic Dialogues for Spontaneous Dialogues and Instant Message," vol. 4, no. 2, pp. 75–94, 2015.

[8] J. Sadek and F. Meziane, "Extracting Arabic Causal Relations Using Linguistic Patterns," vol. 15, no. 3, 2016.

[9] J. Sadek and F. Meziane, "A Discourse-Based Approach for Arabic Question Answering," ACM Trans. Asian Low-Resour. Lang. Inf. Process., vol. 16, no. 2, pp. 11:1–11:18, 2016.

[10] I. Berrazega, "A Semantic Annotation Model for Arabic Legal Texts," 2016.

[11] S. Al-Bukhitan, T. Helmy, and M. Al-Mulhem, "Semantic annotation tool for annotating Arabic web documents," Procedia Comput. Sci., vol. 32, pp. 429–436, 2014.

[12] E. Amer and H. M. Khalil, "Hierarchical N-gram Algorithm for extracting Arabic Entities," pp. 56–60, 2016.

[13] H. Elsayed and T. Elghazaly, "A named entities recognition system for modern standard Arabic using rule-based approach," Proc. 1st Int. Conf. Arab. Comput. Linguist. (ACLing 2015), pp. 51–54, 2016.

[14] M. Alruily and M. Alghamdi, "Extracting information of future events from Arabic newspapers: An overview," Proc. 2015 IEEE 9th Int. Conf. Semant. Comput. (IEEE ICSC 2015), pp. 444–447, 2015.

[15] M. G. H. Al Zamil and Q. Al-Radaideh, "Automatic extraction of ontological relations from Arabic text," J. King Saud Univ. - Comput. Inf. Sci., vol. 26, no. 4, pp. 462–472, 2014.

[16] A. Alsaif and K. Markert, "Modelling discourse relations for Arabic," Proc. Conf. Empir. …, pp. 736–747, 2011.

[17] N. I. Al-Rajebah, H. S. Al-Khalifa, and A. S. Al-Salman, "Exploiting Arabic Wikipedia for automatic ontology generation: A proposed approach," 2011 Int. Conf. Semant. Technol. Inf. Retrieval (STAIR 2011), pp. 70–76, 2011.

[18] S. Zaidi, M. Laskri, and K. Bechkoum, "A cross-language information retrieval based on an Arabic ontology in the legal domain," pp. 86–91, 2005.

[19] M. T. Diab, "Improved Arabic Base Phrase Chunking with a new enriched POS tag set," Comput. Linguist., pp. 89–96, 2007.

[20] A. Boudlal, A. Lakhouaja, A. Mazroui, A. Meziane, M. Ould Abdallahi Ould Bebah, and M. Shoul, "Alkhalil Morpho SYS1: A Morphosyntactic Analysis System for Arabic Texts," Int. Arab Conf. Inf. Technol., pp. 1–6, 2010.

[21] C. D. Manning, J. Bauer, J. Finkel, S. J. Bethard, M. Surdeanu, and D. McClosky, "The Stanford CoreNLP Natural Language Processing Toolkit," Proc. 52nd Annu. Meet. Assoc. Comput. Linguist.: Syst. Demonstr., pp. 55–60, 2014.

[22] S. Bird and E. Loper, "NLTK: The Natural Language Toolkit," Proc. ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics, pp. 63–70, 2002.

[23] N. Habash, O. Rambow, and R. Roth, "MADA+TOKAN: A Toolkit for Arabic Tokenization, Diacritization, Morphological Disambiguation, POS Tagging, Stemming and Lemmatization," Proc. Second Int. Conf. Arab. Lang. Resour. Tools, pp. 102–109, 2009.

[24] A. Pasha, M. Al-Badrashiny, M. Diab, A. El Kholy, R. Eskander, N. Habash, M. Pooleery, O. Rambow, and R. M. Roth, "MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic," Proc. 9th Lang. Resour. Eval. Conf., pp. 1094–1101, 2014.

[25] Y. Souteh and K. Bouzoubaa, "SAFAR platform and its morphological layer," 2011.

[26] T. R. Gruber, "A translation approach to portable ontology specifications," Knowl. Acquis., vol. 5, no. 2, pp. 199–220, 1993.

[27] N. F. Noy and D. L. McGuinness, "Ontology Development 101: A Guide to Creating Your First Ontology," Stanford Knowl. Syst. Lab., 2001.

[28] M. Jarrar, "Towards a methodology for building Arabic ontologies," Proc. Expert Meeting on Arabic Ontologies and Semantic Networks, 2011.

[29] N. Chomsky, Language and Problems of Knowledge: The Managua Lectures. MIT Press, 1988.

[30] N. Chomsky, Aspects of the Theory of Syntax. MIT Press, 1965.

[31] D. F. Barrero, D. Camacho, and M. D. R-Moreno, "Automatic Web Data Extraction Based on Genetic Algorithms and Regular Expressions," Data Mining and Multi-agent Integration, pp. 143–154, 2009.

[32] J. M. Price, All The Arabic You Never Learned The First Time Around, 1997.

[33] J. Wightwick and M. Gaafar, Easy Arabic Grammar. McGraw-Hill, 2005.

[34] M. Horridge and S. Bechhofer, "The OWL API: A Java API for OWL ontologies," Semant. Web, vol. 2, no. 1, pp. 11–21, 2011.

[35] F. Baader, "Basic Description Logics," Descr. Log. Handb., pp. 43–95, 2003.