On Approximation in the Integration of Connectionist and Logic-Based Systems∗
Anthony Karel Seda and Máire Lane
Department of Mathematics and Boole Centre for Research in Informatics, University College Cork, Cork, Ireland
[email protected];
[email protected]
Abstract
We discuss the computation by neural networks of semantic operators TP determined by propositional logic programs P. We revisit and clarify the foundations of the relevant notions employed in approximating both TP and its fixed points when P is a first-order program.
Keywords: Logic programs, semantic operators, neural networks, metrics, approximation.
1 Introduction
An important and interesting aspect of integrating logic-based systems and connectionist systems or artificial neural networks (ANN), see [2, 5, 6, 7], is the computation by neural networks of the semantic operators TP determined by logic programs P. One part of this problem concerns finding algorithms to produce neural networks which compute TP exactly, when P is propositional, and this question has been settled in [6] for the case of the immediate consequence operator TP. When P is not propositional, approximation methods are needed. This question has been studied in [6, 7] for TP and more recently in [5] for certain generalizations of TP. However, the definitions formulated so far depend on embedding all the requisite notions in the real line and hence are not self-contained.

The contributions of this paper are twofold. First, for the case of propositional P, we provide algorithms to find neural networks for the computation of TP relative to very general logics T used in defining TP. To date, all results obtained use classical two-valued logic, and hence our results are wide generalizations of these. Second, we revisit the foundations of the methods of approximation used so far, and present general definitions which are self-contained in that they do not involve the real line. Furthermore, we greatly extend the class of programs for which approximation methods are known to work, from acyclic programs to the class of all definite programs. Since the latter class is computationally adequate, this too is a broad generalization of known results. Full details of all the results relating to approximation can be found in [9].

Throughout, we use the notation of [5, 8, 9, 10] in relation to logics T and logic programs P; in particular, we work with the general semantic operator TP due to Fitting [1], see also
∗ Proceedings of Information’04, International Information Institute, Tokyo, November, 2004, pp. 297–300.
[10], and note that it includes the important immediate consequence operator TP when T is classical two-valued logic. As far as artificial neural networks are concerned, our notation and terminology are standard, see [2, 4, 5], and we closely follow [5].

Acknowledgement The authors thank the Boole Centre for Research in Informatics at University College Cork for substantial financial support in the preparation of this paper.
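To fix ideas before turning to the general setting, the classical immediate consequence operator TP for a definite propositional program can be sketched directly. The following is a minimal illustrative sketch only, not the network construction of [6]: a program is represented as a list of clauses (head, body), an interpretation as a set of atoms true in it, and TP maps an interpretation to the set of heads whose bodies it satisfies; iterating from the empty interpretation reaches the least fixed point. The names `t_p` and `least_fixed_point` are ours, introduced purely for illustration.

```python
# Illustrative sketch (not the construction of [6]) of the classical
# immediate consequence operator T_P for a definite propositional program.
# A clause A <- B1, ..., Bk is encoded as ("A", ["B1", ..., "Bk"]);
# a fact A <- is encoded as ("A", []).

def t_p(program, interpretation):
    """One application of T_P: collect heads of clauses whose bodies
    are wholly contained in the given interpretation."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def least_fixed_point(program):
    """Iterate T_P upward from the empty interpretation until it
    stabilizes; for a finite definite program this terminates at lfp(T_P)."""
    current = set()
    while True:
        nxt = t_p(program, current)
        if nxt == current:
            return current
        current = nxt

# Example program P = {p <- ; q <- p ; r <- q, s}:
program = [("p", []), ("q", ["p"]), ("r", ["q", "s"])]
print(least_fixed_point(program))  # {'p', 'q'} (set order may vary)
```

Here r is never derived, since s occurs in no clause head; the iteration stabilizes after two steps. The constructions discussed below replace this two-valued evaluation of clause bodies by evaluation in a finitely determined logic T.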
2 Constructing Neural Networks To Compute TP
Let T = {t1, t2, . . . , tn} be a logic with n truth values, see [8, 10]. We assume that conjunction ∧ is finitely determined in the same way as disjunction ∨ is defined to be in [10]. We will need the relations ≤∧ and ≤∨ defined on T as follows. For s, t ∈ T, we write s ≤∧ t if and only if s ∧ t = t, and s ≤∨ t if and only if s ∨ t = t. We denote negation by ¬ and assume ¬(¬t) = t for all t ∈ T. For the rest of this section, we assume that T is finitely determined in the following sense.

2.1 Definition A logic T is said to be finitely determined if ∧ and ∨ are both finitely determined.

Following the development of [10], conjunctions and disjunctions of truth values which evaluate to t ∈ T have associated sets E^∧_t and E^∨_t of truth values, called the excluded sets for t, and also sequences (conj^j_t)_j and (disj^j_t)_j of sets of truth values, referred to as the required values for t. The elements of A^∧_t = (E^∧_t)^c and of A^∨_t = (E^∨_t)^c are called the allowable values, and for all j, disj^j_t ⊆ A^∨_t and conj^j_t ⊆ A^∧_t, so that each required truth value is also an allowable truth value (but not conversely). We can find the sets E^∨_t and E^∧_t explicitly from our orderings. For any truth value t ∈ T, E^∨_t = {s ∈ T | t