Computer-Assisted Reasoning with Natural Language: Implementing a Mathematical Vernacular

Paul Callaghan and Zhaohui Luo
Department of Computer Science, University of Durham, South Road, Durham DH1 3LE, U.K.
{P.C.Callaghan, [email protected]}

* This work is supported partly by the University of Durham Mathematical Vernacular project funded by the Leverhulme Trust (see http://www.dur.ac.uk/~dcs0zl/mv.html) and partly by the project on Subtyping, Inheritance, and Reuse, funded by UK EPSRC (GR/K79130, see http://www.dur.ac.uk/~dcs0zl/sir.html).

Abstract

This paper presents the Durham Mathematical Vernacular (MV) project. The project aims to develop the technology for interactive tools based on proof checking with type theory, which allow mathematicians to develop mathematics in their usual vernacular. Its sub-goals are to develop type theory technology to support MV and to develop the corresponding NL technology. Mathematical language has many significant differences from everyday language, hence a different approach is required to automatically process such language. We discuss some important differences and how they affect implementation. A key requirement is for correctness. Another key feature is that the user defines his own terminology. We then discuss work in progress, namely the issue of semantic well-formedness in mathematical descriptions, and a prototype being developed to experiment with aspects of the project.

1 Introduction: Defining a Mathematical Vernacular

The long-term aim of this project is to develop theory and techniques with which the complementary strengths of NLP (Natural Language Processing) and CAFR (Computer-Assisted Formal Reasoning) can be combined to support computer-assisted reasoning. Our chosen area is mathematical reasoning, with the goal of implementing proof systems that allow mathematicians to reason in a "mathematical vernacular" (MV). Thus, mathematicians may work in a style and a language which is closer to traditional practice, rather than in a formal setting. In the remainder, we outline our view of a "Mathematical Vernacular" (MV), discuss our type-theoretic approach to it, and briefly explain how type theory is used to check mathematics.

1.1 Mathematical Vernacular

The term "Mathematical Vernacular" has been used before with varying meanings, eg in [16] and [6]. We define it to mean a language which is suitable for ordinary mathematical practice, and which can be implemented with current technology and under the guidance of a formal semantics. The basis for implementation will be constructive type theory and its associated technology (see below). We emphasise formal semantics because correctness is important in mathematics: users must have confidence in the workings of the system. This MV is not necessarily the same as the language mathematicians commonly use, which we shall call "Informal Mathematical Language" (IML). Thus, we do not intend to model completely what mathematicians do, for several reasons. Firstly, IML is an informal solution to the problem of communicating mathematics, and we can't be sure that it is the most effective way. Secondly, different groups of people have their own ideas about IML, of what is acceptable and what isn't, and so on; it would be hard to study such variation in opinion. Lastly, IML does not take account of the new technology for proof checking.

This is an important point: we are still learning how to use the formal side of the technology, so an examination of how natural language may be productively used with it is certainly worthwhile. Obviously, our MV should be as close as possible to IML, but we shall use good ideas from formal mathematics and its technology where appropriate.

But what methodology should be used? It is hard to design MV "top down" – eg by stating requirements and working towards them – because mathematical language is complex and there are many decision points. We choose a "bottom up" approach: we start with the basic points of mathematical language, without which no useful mathematics may be done, and understand and formalise those. We will then think about adding in natural language conveniences like anaphora and wider syntax. We consider some of this 'core' in section 3.

Related work on mathematical language includes de Bruijn's MV [6], the Mizar project [16], and Ranta's type-theoretic semantics of mathematical language [18, 19]. Constructive type theory with dependent types is also a useful framework for natural language, eg in semantics (see Ranta [18]). Also, the use of coercive subtyping may add flexibility to formal accounts of NL [12] and its implementation (section 3.1.1).

1.2 A Type-theoretic Approach

Type theory provides a flexible language for expressing and checking a lot of non-trivial mathematics (eg parts of Galois theory in [2]). The process of checking proofs (via type checking) is decidable for the type theories we will use, so it may be mechanised (implemented) straightforwardly. However, type theory and the technology based on it are hard to use: humans require a lot of training to use them effectively, and experts are not fully satisfied with the current implementations and interfaces. Thus there are limits to the associated technology, which would also limit what could be done in MV. We regard development of MV as a constraint satisfaction problem: MV sits between mathematics as realised in IML and mathematics as realised in type theory and its implementations. Hence one aim of our project is to extend this technology, by contributing to the facilities needed to support an implementation of MV, such as meta-variables and multiple contexts [4].

1.3 Checking Mathematics with Type Theory

We briefly explain how type theory provides a way to check mathematical proofs, introducing some ideas which are useful in later sections. A good source of further information is Bailey's thesis [2], which contains much discussion on how to formalise maths with type theory and how to make the formalised version more comprehensible, accompanied by the formalisation of a substantial example (parts of Galois Theory).

A type theory is a language which allows representation of types and objects, with a judgement form stating when an object belongs to a certain type, and inference rules for constructing judgements. The notion of dependent types, and the addition of types to represent logic and data structures, allows complex reasoning. Propositions are represented by types, and a proof of a proposition by a term which inhabits the type representing the proposition. Theorems are typically expressed as propositions, and proved by a term of suitable type; this term can then be used as a function in proofs of other propositions. We can also have algebraic structures, eg groups, which can be represented as a (dependent) tuple of a set, a binary operator over that set, an identity element, and an inverse function over the set, plus a proof that the data items satisfy the axioms. Other representations are possible.

For example, the property of symmetry for a binary relation R on a set A can be represented as:

    symmetry == {x,y:A}{rxy:R x y} -> (R y x)

This can be viewed as the type of a function which, when given two elements x and y in A and a proof rxy that R x y holds, returns a proof of R y x. Thus, to show that R is symmetric, we need to supply a function of this type. We can then use it to yield proofs of R b a whenever we have a proof of R a b, for any a, b. Correspondingly, to check that a claimed proof p is an acceptable proof of theorem T, we need to check that p has a type compatible with T. As noted above, the algorithm for type checking is decidable, and straightforward to implement.
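To make the propositions-as-types reading concrete, here is a minimal Haskell sketch (this is not the project's type theory or its LF implementation; Equal, Refl and symEqual are illustrative names assumed for this example). A proof that a relation is symmetric is simply a total function of the stated type, and checking the proof amounts to type checking the definition.

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}

-- Symmetry of a binary relation r: any evidence of r x y can be turned
-- into evidence of r y x (propositions as types, proofs as terms).
type Symmetry r = forall x y. r x y -> r y x

-- An example relation: propositional equality between types.
data Equal a b where
  Refl :: Equal a a

-- A proof that Equal is symmetric is just a function of the right type;
-- the compiler accepting this definition is the proof check.
symEqual :: Symmetry Equal
symEqual Refl = Refl
```

Under this reading, using symEqual to turn a proof of R a b into a proof of R b a is ordinary function application.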

The current "tool support" for type theory is in the form of proof assistants, such as Lego [13], Coq [5], or ALF [14], which help with several aspects of proof production: managing the details of representing a piece of maths (definitions of concepts); storing proved theorems and assumptions (which comprise a 'context', together with the definitions); guiding the process of proof construction; and checking whether a given term proves the statement it is claimed to.

Construction of proofs is by 'refinement'. For example, a proof of A ∧ B is achieved by finding proofs of A and of B separately, at which point the proof assistant will combine them appropriately to prove the main goal. There is currently little automation in these proof assistants: most of the work is done by the user, although there are some simple mechanisms to prove basic propositions, eg in simple logic, or to help with repetitive operations in induction proofs.
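As a toy illustration of refinement (again a Haskell sketch with assumed names, not the machinery of any of the proof assistants mentioned): if a proof of A ∧ B is a pair of proofs, then the goal splits into two subgoals, and the combination step performed by the assistant corresponds to applying the pairing constructor.

```haskell
-- A proof of a conjunction is a pair of proofs of the two conjuncts.
data And a b = And a b

-- Refinement: to prove the main goal And a b, it suffices to prove the
-- subgoals a and b separately and combine them with the constructor.
andIntro :: a -> b -> And a b
andIntro = And
```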

1.3.1 Coercive Subtyping

Coercive subtyping is a new theory of subtyping and inheritance in type theory [10, 11]. It makes type theory easier to use by providing safe abbreviational mechanisms. It has several uses in the MV work, eg in analysing implicit conversions (see section 3.1.1). Coercion mechanisms have been implemented in Lego [1] and Coq [20]; Bailey uses his implementation in a formal development of Galois theory [2]. We shall implement subtyping at the level of Luo's LF (section 3.2). Some meta-theoretic results of coercive subtyping can be found in [9].

A is a subtype of B, notation A ≤ B, if either A = B (computational equality) or A is a proper subtype of B (notation A < B) such that there is a unique implicit coercion from A to B. Any function from A to B can be specified as a coercion, as long as the coherence property holds for the overall system (in particular, there cannot be two different coercions from any A to any B). This idea generalises both the traditional notion of inclusion-based subtyping (eg, between types of natural numbers and integers) and that of inheritance-based subtyping (eg, between record types). Informally, coercive subtyping operates as follows. If A is a proper subtype of B with coercion c, a : A, and C[ ] is a context where an object of type B is required, then a can be used in that context: C[a] stands for (more precisely, is computationally equal to) C[c(a)].

Coercive subtyping is a powerful mechanism, as it allows transitions between arbitrary types provided a valid coercion has been given, and so can be used in some very complex ways. Basic examples include extraction of the 'key' components of a structure, eg the set from a group (represented as a tuple), or translating a positive even number to a natural number. A language example: we can use it to represent sense ambiguity by providing coercions from specific meanings to a common meaning; the specific meanings are then subtypes of the common meaning, and can be used wherever the common meaning could be used (section 5.3 in [12]). More examples are given in section 3.1.1.
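The following Haskell fragment is only a rough sketch of the coercion idea (the class and type names are assumptions made for this illustration; the real mechanism lives inside the type theory, not in a host language). A Group record coerces to its carrier Set, so a property defined on sets can be applied to a group as if it were a set, mirroring the C[a] = C[c(a)] reading above.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Carrier sets and groups over them (illustrative representations).
newtype Set a = Set [a]

data Group a = Group
  { carrier  :: Set a        -- the 'key' component of the structure
  , op       :: a -> a -> a
  , identity :: a
  , inverse  :: a -> a
  }

-- A declared coercion from groups to sets; coherence demands it be unique.
class Coerce s t where
  coerce :: s -> t

instance Coerce (Group a) (Set a) where
  coerce = carrier

-- A property defined on sets ...
finiteSet :: Set a -> Bool
finiteSet (Set _) = True    -- placeholder only; a faithful notion needs more

-- ... applied to a group: C[a] is read as C[c(a)], here finiteSet (coerce g).
finiteGroup :: Group a -> Bool
finiteGroup g = finiteSet (coerce g)
```

The last definition mirrors the informal practice: the property is stated of the group directly, and the conversion to its underlying set is inserted silently. Section 3.1.1 returns to the "finite group" example at the level of the formal system.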

2 Mathematical Language and Issues of Processing

Mathematical language has not been studied in detail in linguistics or in NLP. We give a brief discussion of key issues, oriented towards the differences they make to conventional NL techniques and with a view to how they may be implemented in a prototype (section 3.2), making suggestions for some of these issues. Note that this is different from our long-term goals. Questions we are considering include: what techniques are necessary to process MV, and how do we gauge the correctness of such a system? We discuss the difference between the language of definitions and the language of proofs, then consider the main technical differences, and finally discuss a key difference: the user introducing his own terminology.

2.1 Definitions and Proofs

Mathematical language comes in two variants which it is useful to distinguish, since the distinction will guide implementation of the relevant modes of a system. Firstly, there is the language for making definitions of structures and their properties. The intention of the mathematician must be captured exactly, so correctness is important here. Mathematicians also tend to be very precise in definitions, and there are idioms which we can rely on to process them, so analysis is expected to be straightforward. Secondly, there is the language used in proofs. Here, the current theorem and the stage of the proof provide a rich context in which to interpret sentences. Reasoning can bridge some of the gaps, so the user need not be so precise: at minimum, he could give just the basic information, which then requires only simple inferences to complete the proof. Thus, the language used in proofs is more variable, and more informal, than that used in definitions.

2.2 Particular Issues in Mathematical Language

Overall, mathematical language is ultimately simpler than everyday language. For example, only third person singular and plural conjugations are required, and the present tense is used throughout. The following are issues which are particularly important in mathematical language.

- Idioms: Certain set phrases exist with definite meanings, for example those used in statements of quantification ("for all X such that Y, P holds"). They are particularly common in definitions, where precision and clarity are more important. In a sense, such phrases have acquired a meaning independent of the constituent words, and are most easily analysed as phrases rather than word by word. We can analyse them by providing syntax rules for such large constructions which override word-based analysis.

- Implicit Conversions: Objects are treated as if they belong to many classes simultaneously, and conversion between them is not remarked upon. Eg something introduced as a symmetric relation can be referred to as a plain relation, and when other properties are assumed or established, the relation can be described as a transitive relation, a symmetric and transitive relation, etc. The work in section 3.1 can help in this respect.

- Two Contexts for Interpretation: We distinguish assumptions and conclusions. The resulting analyses in the premiss and conclusion differ for sentences like "if R is a symmetric relation, then R is a symmetric relation". Thus in analysis we must know which context is current. The division is not clear cut: consider "if symmetric relation R is also reflexive ..." – then the condition of symmetry is a presupposition and acts like a conclusion, namely it must be provable.

- Levels of Abstraction: Sometimes it is useful to take a concept as primitive in order to explore the theory of such concepts in isolation, eg if we want to study Abelian groups. We may not want the analysis to be reduced to concepts beneath this level, eg to monoids. The user may also want to define predicates on concepts and restrict their use to those concepts to help his thinking, even if logically those predicates are well-defined for more primitive concepts. Further study is needed on these aspects.

- Mix with Symbolic Expressions: Expressions using mathematical symbols must be understood to some degree, in order to understand the surrounding sentence. These expressions may themselves be informal, compared to the detail required in a completely formal language. For example, in "f(x, y) is finite", we must know what f(x, y) produces, so we can understand the whole sentence. Since names appearing in these expressions can be introduced in the text, as discussed next, analysis must be interleaved to a degree. Eg, to understand "Assume elements x, y such that R x y", we must establish what x and y are, in order to check the expression; then the information that R x y is a proposition needs to be returned to NL analysis.

- Issues of Naming: One important function of the text in mathematical language is to name things, especially when assigning names to objects that cannot be selected in conventional symbolic expressions, ie where the selection operation has no symbolic counterpart. For example, selecting (and giving a name to) the identity of a group. Names can also be fixed names (eg "the identity i of (group) g"), or variables for complex expressions (eg "for all integers n, there is a number m which is equal to n + 1"). Note that names do not always occur (textually) after the object being named, as in "a set of integers S". The alternative "a set S of integers" is also acceptable, though it seems less natural. The assignment of names will be included in the grammar rules, as there appear to be stable rules for the introduction of names.

- Scope and Reuse of Names: It is common for mathematicians to reuse variable names like x, A, etc. in different contexts. An implementation of multiple contexts will provide some support for this.

- Correctness: This is an issue we are emphasising in the project. Applications must not make mistakes. Users must not be allowed to believe they have proved a false theorem, and likewise must not be prevented from proving a true one. One obvious component is providing a formal semantics, but correctness in other parts of the system is an open question.

2.3 New Terminology

Much mathematics consists of defining new concepts and investigating the consequences of the definitions. Concepts are usually referenced by NL phrases which are chosen to convey some of the meaning of the concept. Such phrases are not always used as fixed entities, eg one may understand "finite Abelian group" having only seen definitions of "finite groups" and "Abelian groups". This introduction of new phrases and their link to underlying meaning is a key characteristic of IML.

The important question for implementation is what kind of techniques are required to support this, even at the prototype level. For example, what kind of representation is required during processing in order to select the correct meaning? Conventionally, single words are assigned meaning, and through parsing and semantics the meanings are combined compositionally to select the overall meaning. Do we attempt to assign meaning to individual words? This is acceptable in most cases, but problematic for examples like "left monoid", because compositionality breaks down: a left monoid is a generalisation of a monoid, not a specialisation, hence 'left' does not signal a restriction. There are several other complications to consider:

- Phrases might not be used as atomic units, thus the information could be distributed in a sentence.

- Information from several concepts could be mixed.

- Parts of a phrase could be left implicit, eg in the set-theoretic "relative complement (with respect to set B)", where the containing set (B) is left implicit in context.

- The user's choice of terminology may be ambiguous or hard to process correctly. The user could attempt to use deliberately confusing terminology, so we must prevent this.

- Some kinds of overloading are important, eg use of different notions of 'finite': thus we must not be too restrictive.

We suggest some kind of 'normalisation' of the syntactic phrase together with a pattern matching step to identify the best match. We may need to rely on principles like "greedy matching", where the longest match takes priority. Note that type checking may be useful in such disambiguation.
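As a rough illustration of the greedy-matching idea only (a toy Haskell sketch; the lexicon of normalised phrases and the word-list representation are assumptions made for this example, not the project's design): at each position we take the longest known phrase that matches, and fall back to single words otherwise.

```haskell
import Data.List (isPrefixOf, sortBy)
import Data.Ord (Down (..), comparing)

-- A hypothetical lexicon of already-normalised concept phrases.
lexicon :: [[String]]
lexicon = [ ["finite", "abelian", "group"]
          , ["abelian", "group"]
          , ["finite", "set"]
          , ["group"]
          , ["set"]
          ]

-- Greedy segmentation: prefer the longest lexicon phrase at each position,
-- otherwise pass a single word through unanalysed.
segment :: [String] -> [[String]]
segment [] = []
segment ws =
  case [p | p <- sortBy (comparing (Down . length)) lexicon, p `isPrefixOf` ws] of
    (p : _) -> p : segment (drop (length p) ws)
    []      -> [head ws] : segment (tail ws)
```

Type checking of the resulting semantic construction can then reject candidate matches that are structurally ill-formed, as discussed in section 3.1.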

3 Current Work

We explain current work, namely a system for checking the well-formedness of semantic constructions of mathematical descriptions, and a prototype being developed to help experiment with some basic ideas.

3.1 Conceptual Well-Formedness

An important part of mathematical language is how to describe which properties an object satisfies. More specifically, we shall consider which 'class' of objects a particular object inhabits, as this is more fundamental. Our aim in studying this is to know which constructions are valid and which are not, and then how to process the valid ones; thus it helps to check the correctness of input. For example, we consider phrases like "cyclic Abelian group" and all its legal variants. There are potentially many ways of expressing the information, even when (syntactically) we have just two adjectives. Within a sentence, we have simple variants like "Abelian group which is cyclic", or "g is a cyclic group and g is Abelian". Across sentences, more variations are possible, eg "Let g be an Abelian group. ... Assume g is cyclic." Note that phrases like "cyclic group which is Abelian" do not sound quite natural, maybe because commutativity of the operator is a more fundamental property than whether the group can be cyclically generated; we do not pursue this issue further. We can also have the information embedded in a longer clause, eg "cyclic Abelian group of prime order", or "Abelian group of prime order which is also cyclic", etc.

We have studied the underlying semantic conditions for such constructions, avoiding the details of syntax for the moment. A formal system is given in [12], using a language based on type theory to express the rules.

The basic idea is to separate the structural component of a class from its logical constraints. The structural components can be checked directly by type checking – thus eliminating errors like "Abelian set", since Abelian is defined on groups – but the logical parts may require some automatic reasoning or assistance from the user to verify. For example, given the definition of a group g (eg by listing its members and specifying an appropriate operator such that the required axioms are satisfied; we shall assume this, without explaining the technicalities, to keep the example brief), if the user says "Obviously g is cyclic and Abelian", then structurally these are fine because both properties are defined on groups, but proofs of the logical conditions are required.

In the formal system, a 'category' is the formal version of what has informally been called a class above. The system provides "category constructors" for adding logical properties to a category ('delta'), and for adding structure to a category ('sigma') – thus creating new categories from existing ones. An example of delta is creating the notion of "finite set" from the property 'finite' and the structure 'set'; an example of sigma is creating a 'groupoid' by adding to a set an associative operator defined over that set. This operation is a dependent extension of the set, namely the type of the operator depends on the identity of the set. Note that the properties of such constructions must be added with the delta constructor. Another category constructor introduced is the analogue of functions; we can use this constructor to analyse expressions like the set-theoretic "relative complement of ... (with respect to a superset B)". We can derive several consequences of this system, such as the equivalence of classes with permuted adjectives (eg "cyclic Abelian group" and "Abelian cyclic group"), because the structural components are compatible and the logical constraints are equivalent (since ∧ is commutative). See the paper [12] for more details.
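The following toy Haskell sketch is only meant to convey the shape of the delta/sigma idea (all names are illustrative assumptions; the real system in [12] is a type-theoretic calculus, not a data structure). A category pairs structural components with logical constraints, delta adds a constraint, and sigma extends the structure.

```haskell
-- A category: structural components plus logical constraints still to be verified.
data Category = Category
  { structure   :: [String]   -- eg "carrier set", "binary operator"
  , constraints :: [String]   -- eg "finite", "associative"
  } deriving Show

-- delta: add a logical property to an existing category.
delta :: String -> Category -> Category
delta p (Category s cs) = Category s (p : cs)

-- sigma: extend the structure of an existing category.
sigma :: String -> Category -> Category
sigma comp (Category s cs) = Category (s ++ [comp]) cs

set, finiteSet, groupoid :: Category
set       = Category ["carrier set"] []
finiteSet = delta "finite" set                              -- "finite set"
groupoid  = delta "associative" (sigma "binary operator" set)
```

In this toy model, permuting adjectives merely permutes the constraint list, which matches the equivalence of permuted classes noted above; the real system also checks that each constraint is defined on the structure at hand.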

3.1.1 The Role of Subtyping

The system described so far is not very flexible. The flexibility will come from being able to convert easily between categories. Such implicit conversions are very much used in mathematics; they are so natural that most people do not realise they are making such conversions. However, they are important in a formal setting. The basis of this flexibility is coercive subtyping, which provides safe abbreviational mechanisms. A notion of sub-category was added to the formal system as a lifted version of coercive subtyping.

As an example, consider "finite group". The property of finiteness is defined on sets², and a group is not a set. However, it may be understood as a set if we take the set component of the group. Thus we have a function which can coerce a group to a set. Typically, the user will explicitly declare this coercion between groups and sets, if he requires all groups to be subtypes of the corresponding sets. In such a context, "finite group" is therefore well typed, and can be understood as meaning "group whose set is finite". Note that this operation is done at the type theory level, and that no work is required at the NL level after formation of the semantic construction; hence such details do not need to be considered at the NL level.

Some coercions may be pre-defined. We shall be cautious about adding them, however, as they may cause effects which the user does not expect. Useful pre-defined coercions include ones which propagate coercions over data structures, eg we can convert a tuple of groups to a tuple of sets if the coercion between group and set is present. The most useful pre-defined coercion is the one that 'discards' properties when required (ie, it selects the first argument in a delta construction); thus a "finite set" is automatically a set, and so on. Note this is not the case for sigma constructions.

The mechanism outlined does not prevent the user giving a specific meaning to "finite group", eg if he was using that surface term to denote a specific concept as a matter of convenience, such as an abbreviation for cyclic groups of finite order. This is an example of a user choosing terminology to suit his thinking, rather than being strictly bound by convention, linguistic or otherwise. In this case, the more specific meaning of 'finite' will be chosen: the one that was defined directly for groups.

² Other notions of 'finite' are possible, eg a finite sequence. We observe that underlying the forms of 'finite' is a vague idea of being able to count the number of elements of some structure, and hence a meaning of 'finite' could be based on this. There is also the fact that 'finite' has meaning outside mathematics (as have many words), so intuitive understanding could play a part.

3.1.2 Using the Formal System

We shall use the rules to check the well-formedness of semantic constructions as they are built during analysis, thus getting instant confirmation on whether they are acceptable or not – and leading to more informative error messages.

It is a goal of our prototype (section 3.2) to make full use of the underlying type checker during NL analysis. We have mentioned that use of subtyping at the type theory level can simplify NL analysis. Other areas that may be useful include the use of meta-variables to support resolution of anaphoric references. As noted above, structural constraints can be checked immediately, but logical constraints may require automatic reasoning or user intervention. Note that in some cases, proofs of the logical conditions may already be present in the context, and hence are immediately solved. Otherwise, the logical conditions will be represented as 'meta-variables' or holes in the formal development, to be filled in later. We may distinguish kinds of such meta-variables in order to prioritise how they are satisfied; for example, certain kinds of presupposition require immediate attention before allowing the proof to proceed, since typically they represent assertions used in the main reasoning (eg "The symmetric relation R also satisfies transitivity").
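A small Haskell sketch of the intended outcome of such a check (the types and names are assumptions for illustration, not the prototype's actual interface): structural failures are reported at once, while logical conditions become obligations, each either discharged from the context or left as a hole (meta-variable) with a priority.

```haskell
-- Result of checking one semantic construction.
data Obligation
  = FromContext String    -- a required proof already available in the context
  | Hole Priority String  -- a meta-variable to be filled in later

data Priority
  = Immediate   -- eg presuppositions used in the main reasoning
  | Deferred    -- may be left until later in the development

data CheckResult
  = StructuralError String   -- rejected by type checking, eg "Abelian set"
  | WellFormed [Obligation]  -- accepted, with outstanding proof obligations

-- Example: "Obviously g is cyclic and Abelian" for a known group g.
exampleCheck :: CheckResult
exampleCheck = WellFormed [Hole Deferred "cyclic(g)", Hole Deferred "Abelian(g)"]
```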

3.2 A Prototype for Experimentation

We are currently implementing a prototype to experiment with various ideas [4]. Its basis is an implementation in Haskell [7] of Luo's LF, a simple type theory in which more complex type theories may be specified. A novel part of this work is that the type checker will be used throughout the process of NL analysis, as we have mentioned above. Another possibility is to use multiple contexts in conjunction with disambiguation. We have identified several aspects of the mechanisation of type theories which need improvement to support MV, and plan to experiment with those. The areas include:

- Meta-variables: 'holes' in proofs, which allow a main proof to proceed by assuming the proof can be found, but which must be filled in before the whole proof is accepted. In MV, we will use them to represent the details omitted in a user's statements, such as the formal details usually omitted in mathematical language.

- Multiple Contexts: current technology works in a single context, thus a user cannot develop proofs in parallel, or freely explore a mathematical development. Multiple contexts would allow this, and have advantages like more flexible development and use of libraries.

- Automation: meta-variables provide a hook for automatic reasoning. We may require techniques oriented towards the kinds of reasoning required to complete MV proofs, as opposed to helping expert type theorists complete formal proofs.

- Subtyping: we shall implement subtyping at the LF level, thus providing more flexibility and some theoretical advantages [10].

For the prototype, we will initially use fairly standard NL techniques, eg a Tomita-style parser [22, 8] to handle ambiguous grammars. Since use of the type checker will simplify analysis, we hope that the NL analysis will not need much sophistication. Note that we do not intend to use LOLITA³ [21, 3] in this project, for several reasons: we do not expect to need the complexity of LOLITA; the demands of mathematical language may be hard to implement in LOLITA; and the workings of a smaller, customised system will be easier to verify.

The functionality aim of the prototype is to handle a simple mathematical development of binary relations and three theorems connecting definitions of properties of binary relations. We will add more sophistication in future work, eg to manipulate algebraic structures and to perform induction proofs.

³ LOLITA (Large-scale, Object-based, Linguistic Interactor, Translator and Analyser) is the large project in Durham which aims to build a general-purpose platform with which specific NL applications can be built, eg information extraction [17], or object-oriented requirements analysis [15].

References

[1] A. Bailey. Lego with coercion synthesis. ftp://ftp.cs.man.ac.uk/pub/baileya/Coercions/LEGO-with-coercions.dvi, 1996.

[2] A. Bailey. The Machine-checked Literate Formalisation of Algebra in Type Theory. PhD thesis, University of Manchester, 1998. (submitted).

[3] Paul Callaghan. An Evaluation of LOLITA and Related Natural Language Processing Systems. PhD thesis, Department of Computer Science, University of Durham, 1998.

[4] Paul Callaghan and Zhaohui Luo. Mathematical Vernacular in Type Theory-based Proof Assistants. In Proceedings: User Interfaces in Theorem Proving (UITP), July 1998. (submitted).

[5] Coq. The Coq Proof Assistant Reference Manual (version 6.1). INRIA-Rocquencourt and CNRS-ENS Lyon, 1996.

[6] N. G. de Bruijn. The mathematical vernacular, a language for mathematics with typed sets. In Nederpelt, Geuvers, and de Vrijer, editors, Selected Papers on Automath. 1994.

[7] Haskell. Haskell report 1.4. http://www.haskell.org, 1998.

[8] Mark Hopkins. Demonstration of the Tomita parsing algorithm. ftp://iecc.com/pub/file/tomita.tar.gz, 1993.

[9] A. Jones, Z. Luo, and S. Soloviev. Some proof-theoretic and algorithmic aspects of coercive subtyping. Proc. of the Annual Conf. on Types and Proofs (TYPES'96), 1997. To appear.

[10] Z. Luo. Coercive subtyping in type theory. Proc. of CSL'96, the 1996 Annual Conference of the European Association for Computer Science Logic, Utrecht. LNCS 1258, 1997.

[11] Z. Luo. Coercive subtyping. Journal of Logic and Computation, 1998. To appear.

[12] Z. Luo and P. Callaghan. Mathematical vernacular and conceptual well-formedness in mathematical language. In Proceedings: Logical Aspects of Computational Linguistics, 1997. (forthcoming in LNAI series).

[13] Z. Luo and R. Pollack. LEGO Proof Development System: User's Manual. LFCS Report ECS-LFCS-92-211, Department of Computer Science, University of Edinburgh, 1992.

[14] L. Magnusson and B. Nordström. The ALF proof editor and its proof engine. In Types for Proofs and Programs, LNCS, 1994.

[15] L. Mich. NL-OOPS: From Natural Language to Object Oriented Requirements using the Natural Language Processing System LOLITA. J. Natural Language Engineering, 2(2):161–187, 1996.

[16] Mizar. Mizar WWW Page. http://mizar.uw.bialystok.pl/.

[17] R. Morgan, R. Garigliano, P. Callaghan, S. Poria, M. Smith, A. Urbanowicz, R. Collingham, M. Costantino, C. Cooper, and the LOLITA Group. Description of the LOLITA System as Used in MUC-6. In Proceedings: The Sixth Message Understanding Conference, pages 71–87. Morgan Kaufmann, Nov 1995.

[18] A. Ranta. Type-theoretical Grammar. Oxford University Press, 1994.

[19] Aarne Ranta. Context-relative syntactic categories and the formalization of mathematical text. In S. Berardi and M. Coppo, editors, Types for Proofs and Programs. Springer-Verlag, Heidelberg, 1996.

[20] A. Saibi. Typing algorithm in type theory with inheritance. Proc. of POPL'97, 1997.

[21] The LOLITA Group. The LOLITA Project. Springer Verlag. (forthcoming in 3 vols: "The System Core", "Applications", "Philosophy and Methodology").

[22] M. Tomita. Efficient Parsing of Natural Language: A Fast Algorithm for Practical Systems. Kluwer Academic Publishers, Boston, MA, 1986.
