Drafting and validating regulations: the inevitable use of intelligent tools

Joost Breuker, Emil Petkov, and Radboud Winkels
University of Amsterdam, Department of Computer Science and Law (LRI)
P.O. Box 1030, NL 1000 BA Amsterdam, the Netherlands
breuker, petkov, winkels @lri.jur.uva.nl
Abstract. In this paper we first describe the nature of laws and regulations, which are non-normal, fragmented pieces of text that can only be understood by using some (implicit) model of the world to be regulated. Then we describe the process of drafting regulations, in particular the need to verify and validate their intended effects, i.e. deontic statements. We present an ontology, FOLaw [13], and a prototype system, TRACS (Traffic Regulation Automation and Comparison System), which was created to test new traffic regulations [2]. Even a few test runs revealed major deficiencies in this regulation. An extended version of TRACS also enables the generation of paraphrases of regulations and, to some extent, even the generation of regulations from scratch. The implications of the use of these kinds of tools are discussed: not only for checking consistency, but also for aligning (“harmonizing”) regulations of different legal systems (nations).
1 What’s in a regulation?

“Can you develop a computer program that can check if the new traffic regulations are consistent and complete?” This apparently innocent question, posed by SWOV,¹ a government agency concerned with traffic safety, triggered a decade of research at our institute, LRI. The first and smallest step consisted of a close reading of the draft text of the regulation. It does not take much to notice that a legal text is not a normal text. It consists of individual statements (articles), many of which have a deontic nature. However, we understand these statements in various ways, as can be illustrated by the following two articles from this traffic regulation (draft RVV-90):

Art 3 Vehicles should keep as much as possible to the right
Art 6 Two bicyclists may ride side by side

We understand that Art 6 is an exception to Art 3, but we can only draw this conclusion after we have made some spatial model from which we can see that the left bicyclist of the two is not keeping fully to the right: the right-hand bicyclist prevents this. If this
¹ Stichting Wetenschappelijk Onderzoek Verkeersveiligheid; Foundation for Research on Traffic Safety.
were the point, normal texts would explain this, but regulation texts (“laws”) assume that the reader already has a full understanding, i.e. a generic model, of the domain concerned, in this case traffic. Moreover, in principle these statements only mark what is undesirable behaviour among all possible behaviours [1]: that is the reason many of these statements have a deontic meaning (‘should’, ‘may’).

In view of the request to check consistency and completeness, drafting regulations can be compared to programming [7]. In software engineering a program is regarded as correct when it has no errors and is effective, i.e. when it has been verified and validated. The request of SWOV is a verification question. We will discuss the validation issues later, but it is already apparent that verification is problematic, as laws are riddled with exceptions, and exceptions are logical inconsistencies. Moreover, the notion of completeness becomes very problematic, for two reasons. First, completeness is relative to the model of the world (domain) to be regulated, and this model is implicit and largely based on common sense (see below). Second, it is hard to design a method to assess whether the regulation fully covers the intentions of the legislator, as these intentions are often expressed at a global level of desires from which the legal drafter has to infer undesirable situations, while also taking into account all kinds of implicit, undesirable side effects. Finally, as we will also show, undesirability of situations cannot be expressed directly, but only via the use of the famous three deontic operators – O(bliged), F(orbidden) and P(ermitted) – for very pragmatic reasons. As we will show in the next section, these pragmatic reasons are also the cause of the many exceptions one finds in regulations. The completeness with respect to the coverage of the intended effects by a regulation is rather a question of validation than of verification.
2 FOLaw: a functional ontology for reasoning with the law

According to Valente ([12], see also [13]), reasoning in law can be summarized by a number of dependent functions, where each function refers to specific categories of knowledge with specific properties. Here we can only present the global picture of this core ontology (see Fig. 1), and we will focus on world knowledge and normative knowledge. The basic assumption of this core ontology is that there is a legal system that controls social behaviour in a reactive manner. Of course, the role of law is not only to ‘correct’ illegal behaviour, but also to instruct/prevent undesirable behaviour. These are crude, simplified assumptions; for details see [13]. A reactive cycle starts with a case, i.e. a real-world situation, which is interpreted in order to generate an abstract description of the case in the terms that the legal sources use. This abstract case description is called a legal situation, and the knowledge used to produce this step is the world knowledge, which forms the legal abstract model (LAM). Then, the legal situation is analyzed against the normative knowledge to verify whether it violates any norm, thus producing what is called a classified situation (a situation classified as either ‘allowed’ or ‘disallowed’). In another path, the situation is analyzed using again world knowledge (here particularly its causal component) in order to find out which agents in the world (if any) have caused the situation. This information is then used as input to the responsibility knowledge, which determines which agents (if
[Figure omitted: boxes for legal-system knowledge (normative, responsibility, reactive, creative, meta-legal), world knowledge (causal, definitional), common-sense knowledge, and the social behaviour of agents/organizations in society, connected by dependency arrows.]

Fig. 1. FOLaw functional core-ontology distinguishing types of knowledge and dependencies
any) are to be held responsible for the situation. The results obtained in these two paths (the classified situation and the responsible agents) are then used as inputs for a function that defines a possible legal reaction using reactive knowledge. Further, outside this cycle, the law may also create an abstract entity (part of the legal system) using creative knowledge; this entity is also added to the legal abstract model. Finally, meta-legal knowledge refers to all these entities.

Another way to see the interdependencies shown in Fig. 1 is that they provide the connections between the (sub)functions from a reasoning point of view. That is, legal reasoning can be made modular, with each function corresponding to a module and the dependencies between the modules being provided by the dependencies between the functions. Such dependencies must of course be detailed. The main path in Fig. 1 can also be seen as the global structure of legal arguments: starting from the ‘facts of the case’ and going up to sentencing. Each category corresponds to a type of argument that has the inputs of the corresponding function as antecedents, its outputs as conclusions, and the knowledge belonging to that category as warrants. For normative knowledge, for instance, the conclusion is whether a situation is allowed or disallowed, and the warrants are normative knowledge. Moreover, the conclusions in a legal argument are concatenated as shown by the dependencies in the figure; for instance, an argument involving world knowledge (say, concluding that a certain person is considered a ‘minor’ according to a certain definition) may be used as subsidiary for an argument involving normative knowledge (say, concluding that a situation in which this person was driving a car is disallowed according to a certain norm). Legal reasoning can thus be seen as the production and analysis of arguments involving one or more of these categories. Ours is not the only core ontology for law (see e.g. [16] and [15]). Also, the work by McCarty on a “language for legal discourse” can be viewed as a core ontology for legal domains [9]. Although the ontologies are structurally very different, there is an important overlap of categories. The fact that competing ontologies have been proposed has led to a reflective debate in the AI & Law community (e.g. [17]).
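The functional chain just described – interpret, classify, attribute causes, react – can be sketched as a modular pipeline. The following is a minimal sketch with names and data structures of our own; it is an illustration of FOLaw's dependency structure, not code from the ON-LINE or TRACS systems.

```python
# Toy rendering of FOLaw's reactive cycle: each knowledge category is a
# module, and Fig. 1's dependencies become data flow between modules.
# All names and the example facts are illustrative assumptions.

def interpret(case, abstraction):
    # world knowledge: map concrete facts onto abstract legal terms
    return frozenset(abstraction.get(f, f) for f in case)

def classify(situation, norms):
    # normative knowledge: a norm is (conditions, deontic label)
    violations = [c for c, label in norms
                  if label == "forbidden" and c <= situation]
    return "disallowed" if violations else "allowed"

def responsible(case, causes):
    # causal world knowledge: which agents brought the case about
    return {agent for agent, acts in causes.items() if acts & case}

def react(classification, agents, reactions):
    # reactive knowledge: pick a legal reaction for responsible agents
    if classification == "disallowed":
        return {a: reactions["sanction"] for a in agents}
    return {}

# toy run: a minor driving a car
case = {("drives", "ann"), ("age", "ann", 16)}
abstraction = {("age", "ann", 16): ("minor", "ann")}
situation = interpret(case, abstraction)
norms = [(frozenset({("drives", "ann"), ("minor", "ann")}), "forbidden")]
verdict = classify(situation, norms)
agents = responsible(case, {"ann": {("drives", "ann")}})
print(verdict, react(verdict, agents, {"sanction": "fine"}))
# -> disallowed {'ann': 'fine'}
```

The modularity claimed in the text shows up directly: each function can be replaced or tested in isolation, and each output feeds the next argument in the chain.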
3 Testing regulations by situation generation: TRACS

In regulations, one will hardly find statements about responsibility: responsibility (e.g. guilt, liability) is generally established using (common sense) causal reasoning [6]. In general, a regulation contains definitions of terms (world knowledge) and deontic statements (normative knowledge). This is also the case for the RVV-90, the Dutch traffic regulation. In order to test this regulation, we constructed a KBS, TRACS (see Fig. 2).
[Figure omitted: a World Model KB (e.g. traffic) and a Legal Source KB (regulation) both feed a Regulation Applier (production system), which receives a situation description and reports the applicable legal norms to the user in an (NL) dialogue.]

Fig. 2. Architecture of TRACS for testing regulations [den Haan, 1996]
There are two knowledge bases (KBs), corresponding to FOLaw's distinction between world and normative knowledge: one that contains a model of the domain of law (World Model), and one that represents the regulations (Legal Source). The World Model consists of the objects, agents and actions that can be used to generate or interpret situations in the world, i.e. cases. The legal world of traffic is very abstract and simplified compared to the physical/social one. Spatial reasoning is reduced to positions on parts of roads; time is of no importance, because the law does not look at the past, but only at a situation – e.g. being parked – or at a single action – e.g. crossing. The ontology consists of types of “traffic participants” (agents/vehicles), roads and parts of roads, and actions, which are naturally represented by a terminological classifier (LOOM). There are rules to compose or parse roads and crossings. Because of symmetry and many other constraints, the number of possible generic situations for this world is limited, but still combinatorial. This is of importance because we wanted to test the new traffic code (RVV-90) by generating all possible situations. The user in Fig. 2 is replaced by a “situation generator” module, which produces all possible situations, legal or illegal. There is no user to feed the system – it runs in batch mode – and only the outputs are checked.

The legal reasoning starts when a situation description is fed into the Regulation Applier. This component is a simple production system that matches the situation description to the representation of the regulation in the Legal Source. A legal norm is represented as a generic situation description with functions that represent the deontic operators and yield “allowed/disallowed” as output when applied to a situation description. This formalism, described in [13], enables us to avoid the use of deontic logics, which are known to have no tractable realizations and need exotic extensions to handle ‘paradoxes’. A legal norm matches a situation description if all of its conditions can be unified – possibly via the sort-of hierarchy – with the states of the situation description.
This means that the legal norm is applicable. For instance:

SITUATION DESCRIPTION: (a′, b′, c′, d′), whereby c′ is-a-sort-of f, matches
LEGAL NORM: (a, b) prohibited(f)

e.g.

SITUATION DESCRIPTION: (place(my-volvo, crossing1), consists-of(crossing1, road1, road2), is-parked(my-volvo), lights(my-volvo, on)), whereby instance-of(my-volvo, car), sort-of(car, vehicle), and sort-of(parking, stopping)

LEGAL NORM: (place(car, crossing), consists-of(crossing-x, road-x, road-y)) prohibited(is-stopped(vehicle))
(art. 15: “at a crossing of roads it is forbidden to stop a vehicle”)
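The matching step can be sketched in code. This is our own rendering in Python, not TRACS code (TRACS used LOOM and a production system); the small taxonomy is taken from the example above.

```python
# Sketch of norm matching: a condition like place(car, crossing) matches a
# fact like place(my-volvo, crossing1) if every term of the fact is an
# instance or sort of the corresponding term in the condition.
# Names are taken from the paper's example; the code itself is illustrative.

SORTS = {
    "my-volvo": "car", "car": "vehicle",       # instance-of / sort-of chain
    "crossing1": "crossing",
    "is-parked": "is-stopped",                 # parking is a sort of stopping
}

def is_sort_of(term, sort):
    # walk up the taxonomy until we hit the sort or run out
    while term is not None:
        if term == sort:
            return True
        term = SORTS.get(term)
    return False

def fact_matches(fact, cond):
    return (len(fact) == len(cond)
            and all(is_sort_of(f, c) for f, c in zip(fact, cond)))

def norm_applies(situation, conditions):
    # every condition must unify with at least one fact of the situation
    return all(any(fact_matches(f, c) for f in situation) for c in conditions)

situation = [("place", "my-volvo", "crossing1"),
             ("is-parked", "my-volvo")]
# art. 15: at a crossing of roads it is forbidden to stop a vehicle
conditions = [("place", "vehicle", "crossing"),
              ("is-stopped", "vehicle")]
print(norm_applies(situation, conditions))  # -> True
```

Note how the prohibition on `is-stopped(vehicle)` catches the parked Volvo only via two sort-of steps (my-volvo → car → vehicle, is-parked → is-stopped), which is exactly the unification-through-sorts behaviour described above.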
More than one norm may be applicable. If the applicable norms have overlapping parts and conflicting outputs, the norms are inconsistent. However, these inconsistencies may be due to ‘exceptions’. If they are exceptions, they should be resolvable by the known meta-rules of the law, the most famous being “lex specialis derogat legi generali”, i.e. the more specific norm wins, which is similar to conflict resolution mechanisms in production systems. As TRACS was only a prototype, the first trials were made when an exhaustive situation generator was only specified on paper. However, these trials were already so illuminating that no further testing was required. Almost all cases that were specified by hand (about 40) led to unexpected inconsistencies that were not due to exceptions, or to outcomes that were obviously against the intentions of the legislator.
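One hedged reading of lex specialis as a conflict-resolution rule: among applicable norms with conflicting outputs, a norm is overridden when a strictly more specific norm (one whose conditions subsume it) disagrees. The code and the Art 3/Art 6 encoding below are our own illustration, not TRACS internals.

```python
# Illustrative "lex specialis derogat legi generali": among applicable
# norms, drop any norm that a strictly more specific norm contradicts.
# The condition sets and verdict labels are assumptions for the example.

def lex_specialis(applicable):
    # applicable: list of (conditions: frozenset, verdict: str)
    winners = []
    for conds, verdict in applicable:
        overridden = any(conds < other_conds and verdict != other_verdict
                         for other_conds, other_verdict in applicable)
        if not overridden:
            winners.append((conds, verdict))
    return winners

general = (frozenset({"vehicle", "on-road"}), "keep-right")          # Art 3
special = (frozenset({"vehicle", "on-road", "two-bicyclists"}),      # Art 6
           "may-ride-side-by-side")
print(lex_specialis([general, special]))  # only the Art 6 norm survives
```

This mirrors specificity-based conflict resolution in production systems: the rule with the larger matching condition set fires in preference to its generalization.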
The results of TRACS were surprising in the sense that a number of really nasty contradictions – not of the permission/exception type – were brought to light in testing the RVV-90, the Dutch traffic code [3]. For instance, every situation containing a tram on the tramway was identified as a violation of an article that excepted a list of types of vehicles from taking the driving lanes. The tram is not in that list, so the tram is obliged to take such a lane. As the example shows, inconsistencies may easily arise in indirect ways. We also found that often there are no easy, straightforward repairs. For instance, adding the tram to this exception list would only mean that the tram should use the sidewalk. It turned out that most repairs required either ‘structural’ re-drafting or ad-hoc articles. For instance, adding an article stating that trams should run on the tramway would be such a repair, but the legal drafters thought this was too stupid for words.
4 Generating Regulations

Legislative drafting can be semi-automatically supported in at least three different, complementary ways. The first is by providing information services that relate general legal information to the specific tasks of the legal drafters. An example is the LEDA system, which follows the officially recommended drafting procedures, directives and styles [18]. LEDA offers access to relevant legislation in hypertext format, and provides the initial normative structure formats. A second way is by providing tools that check the consequences of a regulation. ExpertiSZe [11] and TRACS [2] are examples. In this paper we also present a third kind of tool and approach, which automates the construction of paraphrases of regulations on the basis of normative goals ((un)desired social behaviour) and a model of the domain (world knowledge) to be regulated. The same normative goals can be expressed in different ways for different purposes (e.g. the legal subjects concerned), like paraphrases of text that express the same underlying conceptualization [21]. In this paper we present procedures for generating paraphrases of norms.

4.1 An outline and rationale for generating normative rules

The basic assumptions and steps in the process of generating regulations can be summarized as follows. As explained in Section 1, a legal domain refers to some social subsystem. A description of how such a world actually works, i.e. what behaviours can be exhibited by this world, we have called the legal abstract model (LAM, see Fig. 1). Behaviour at a particular moment is called a situation, consisting of some configuration (relations) of objects which are in a particular state. Situations can be instantaneous (actual) or generic (abstract). Norms are generic situations marked as being obliged, forbidden or permitted.²
² In fact, these deontic operators are pragmatic, i.e. efficient expressions to translate the goals of the legislator, which are in principle in terms of legal/illegal. We present here a rather narrow
The LAM should cover all possible situations in the domain. As these cannot be enumerated, a LAM is a generic abstraction, similar to behavioural models in model-based reasoning. In the current practice of legal drafting, no explicit modeling occurs. It is implicit in the understanding of the problems and the debates – political and technical – that are preliminary to and contingent upon the drafting activities proper. Making such a model explicit in a process of knowledge acquisition, as automation requires, may have the important side effect of clarifying more precisely and coherently what is at stake in a law. It may result in well-defined terms, which may be an explicit part of a regulation as well. The disadvantage is also obvious: it takes a lot of effort – up to one or more person-years – which may be relatively large for small legal drafting projects (e.g. local repairs).

A second step is going from an articulate and parsimonious world model to a version that consists of a list of all possible situations: this we call situation generation (see below, Sec. 4.2). The partitioning of a world model into two disjoint sets of illegal (undesired) and not-illegal (not undesired) situations is called the Qualification Model (QM) [2]. Because in (political) practice normative goals (rather: intentions) are formulated in more global and vague terms than legislation requires, the process of turning these into the QM specification involves assessment.

The final step involves the construction of one or more regulation models (RM). An RM is not the same as the text of a regulation. It consists of a structure of norms, which can be expressed in various (ways in) natural languages. A norm has a simple structure, as explained in the previous section, but the inter-norm structure may be rather complex, because it contains exceptions, which may be exceptions to exceptions, etc. This structure is called the exception structure. Figure 3 depicts the steps and their dependencies in this process.
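The chain world model → situations → QM → RM can be illustrated with toy data structures. Everything below (the two-attribute world, the normative goal, the names) is an assumption made up for illustration.

```python
# Toy sketch of the modeling chain of Sec. 4: a tiny world model is
# expanded into all generic situations (a stand-in for a LAM), which an
# assessment function splits into the two disjoint sets of the QM.
# The world and the normative goal are invented for the example.

from itertools import product

agents = ["car", "bicycle"]
places = ["road", "sidewalk"]
situations = list(product(agents, places))   # all generic situations

def assess(situation):
    # assumed normative goal: cars must not be on the sidewalk
    agent, place = situation
    return "illegal" if (agent == "car" and place == "sidewalk") else "legal"

qm = {"illegal": [s for s in situations if assess(s) == "illegal"],
      "not-illegal": [s for s in situations if assess(s) == "legal"]}
print(qm["illegal"])  # -> [('car', 'sidewalk')]
```

The regulation model (RM) would then be generated from the "illegal" partition, as discussed in Sec. 4.3.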
In the following sections (4.2 and 4.3) we discuss situation generation and norm generation.
[Figure omitted: modeling produces a world model and normative goals; situation generation turns the world model into situations; assessment of the situations against the normative goals yields the QM; norm generation turns the QM into regulation models RM-1, RM-2, RM-3.]

Fig. 3. Generating regulation models from world knowledge and normative goals
view of norms. Norms may contain rights and duties [8, 10], but we will not discuss these complications here (see also [12]).
4.2 Generating situations

A qualification model (QM) is a further qualification of the set of possible situations that models a specific domain. It is possible to generate such a set automatically, from the definitions of terms and by combining the possible states of objects. First, we need an ontology that provides the definitions of terms [14, 13]. In legal domains, human agents play a pivotal role, so we will distinguish two types of entities: (human) agents and objects. Further, agents and objects can be related in dynamic and in static ways. The dynamic relations are actions or processes, while the term relation will denote all other types of relations. Whereas attributes are typically one-place relations, actions and processes have in general more arguments. For each type of action or process defined in the ontology we can generate, in principle in a combinatorial way, all types of arguments (objects, agents) that may fill these roles. Common sense ontologies may be used to obtain the semantics of these actions (e.g. WordNet, CYC). For instance, in a simplified combinatorial view an action like give has two agents (actor and recipient) and one object (for the object role). Now, if we distinguish in the domain three types of agents and four types of objects, 3 × 3 × 4 = 36 possible “giving” situations are generated.³

To illustrate what is involved, we may assume that situations consist of agents (a), actions (c), objects (o) and relations (r). Furthermore, there are not only elementary situations that consist of one action, but also complex situations in which more actions or relations are involved (n per situation):

– Number of agents: a
– Number of objects: o
– Number of actions: c
– Number of relations: r
– Average arity of action and relation predicates: p
– Average number of actions and relations per situation: n

Each predicate may be an action or a relation (c + r). Each argument is either an agent (a) or an object (o). Each predicate has arity p, so there are (a + o)^p possible combinations of the arguments per predicate. This yields the total number of possible actions and relations, which is the product of (c + r) and (a + o)^p:

N_pred = (c + r) × (a + o)^p

Suppose that a situation consists of an average of n action or relation predicates; then the number of possible situations is:

N_sit = (N_pred)^n = ((c + r) × (a + o)^p)^n

³ This is not really correct. An action or process changes a situation into another situation. Therefore, one may assume that the give-action should be decomposed into an antecedent situation1 in which an agent possesses an object, and a consequent situation2 in which another agent possesses the object. Both situations are causally and intentionally related by the agent who is the actor of the change of possession (giving). This conceptualization is ontologically more correct, but does not really change the notion of combinatorics involved in situation generation. In this paper we treat actions as parts of situations, not as ‘bridges’ between situations. The latter view complicates, but does not really change, the processes of situation and rule generation. It is part of current research.

⁴ Filling the values of the traffic world into the equation above yields an astronomically large number.
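The worst-case count can be computed directly from the formula. The parameter values below are toy values chosen for illustration, not the traffic-world figures used in the paper.

```python
# Worst-case situation count: with c actions, r relations, a agent types,
# o object types, average arity p and n predicates per situation,
#   N_pred = (c + r) * (a + o) ** p     and     N_sit = N_pred ** n.
# The example values are assumptions, not the paper's traffic-world data.

def n_situations(a, o, c, r, p, n):
    n_pred = (c + r) * (a + o) ** p   # possible ground predicates
    return n_pred ** n                # situations of n predicates each

print(n_situations(a=3, o=4, c=2, r=2, p=2, n=3))  # -> 7529536
```

Even these tiny toy values produce millions of candidate situations, which is why the pruning sources discussed next matter.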
Indeed a classical combinatorial explosion. However, this is a worst-case analysis, because we have only blindly applied the terms and types from the ontology and used an almost unconstrained version of the action frame (only the types of role fillers are given). There are various sources for limiting the number of possible (and meaningful) situations (see also [1]):

– Redundancy and tautology. There is a lot of redundancy when one does not carefully choose basic terms in representing knowledge. Many relations may have tautological family members. For instance, in the traffic domain there is a lot of symmetry and inversion.
– Constraints. When the world knowledge contains descriptions of the physical limitations in the world, the physically and logically impossible situations are pruned. For instance, an object cannot be at different locations at the same time. Many of these constraints are of a very general nature, like the roles in actions or physical principles, but there are often many domain-specific ones as well. For instance, in the traffic domain, all actions are viewed as taking place in ‘only’ two-dimensional space.
– Abstraction or grain-size level. As in all modeling, and particularly in AI, abstraction is what keeps the world manageable, even if it sometimes leads to incorrect identifications at lower levels of grain size.
– (Legal) relevance. In modeling worlds for legal (normative) control, worlds are only considered from the point of view of enforceable law. For instance, a traffic regulation is aimed at enhancing the safety of the participants.

Once the situations have been generated, they have to be distributed over two sets to form the qualification model (QM). We mention three possible approaches:

– Qualify each situation by hand according to the initial normative goals. An important source for qualification this way may be case law.
– Formulate a limited number of situations at a high abstraction level, and qualify these by hand according to the initial normative goals or intentions.
– Because legal drafting hardly ever starts completely fresh, one may qualify each situation using the older version of the regulation, via automated legal assessment. As in most political and technical debates the differences between the old and the new are emphasized, it is not difficult to use this method followed by the first one.

For the relatively small domains we are using as our testbeds, qualification by hand is not yet a problem. Neither do we think that in practice one will ever have to rely only on this by-hand method, but rather on the last one.

4.3 From qualification to regulation

The input for norm generation is the QM, which consists of the subset of undesired situations and the subset of not-undesired situations. P(ermitted) is the normative
default chosen, i.e. in law this is in general the case, reflecting the principle that what is not forbidden is permitted. Let s be a situation, N the set of generated norms, and n a norm. The overall algorithm for the generation of norms is then, schematically:

while QM is not empty:
    s := select(QM)
    n := translate(s)
    add(n, N)
    adjust(n, N)

Here we do not further elaborate the select and add procedures, as they are simple set operations. Below, we discuss the translate and adjust procedures. The translation procedure is required because norms in regulations are in general not expressed as situations, as in the QM, but as rules, which means that a particular action in a norm is selected as a forbidden, permitted or obliged consequent, while the other situation descriptors are viewed as conditions, or ‘grounds for application’. For instance, a situation like (older-than(person, 18), obliged(vote(person))) is expressed as “if one is older than 18 years, one is obliged to vote”. In a formal sense, these expressions are equivalent, in particular because the “if” does not denote a real (material) implication, but rather a conjunction. This translation is required for communicative (pragmatic) reasons rather than for semantic reasons. Besides producing these expressions, the translation procedure also contains the algorithms to select the application grounds. The more abstract the application grounds, the simpler the rules become, but also the more exceptions the regulation will contain. The adjust algorithm takes care of the ‘sign’ of the deontic operator to create exceptions in an efficient manner, and feeds back constraints to the select procedure. The algorithms are described in more detail in [2, 4, 5]. Thus far, these have been implemented as extensions to TRACS and tried out experimentally in small, artificial domains.
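The select/translate/add/adjust loop of Sec. 4.3 can be sketched in Python. The procedure bodies below are our own toy stand-ins, not the algorithms of [2, 4, 5]: translate naively picks one descriptor as the forbidden consequent, and adjust is approximated by dropping situations the new norm already covers.

```python
# Hedged sketch of norm generation from the undesired partition of a QM.
# Situations are sets of descriptors; the translate/adjust bodies are
# illustrative simplifications, not the published algorithms.

def translate(situation):
    # pick one descriptor as the deontic consequent; the rest become
    # the grounds for application ("if ... then forbidden ...")
    *grounds, action = sorted(situation)
    return {"if": grounds, "then": ("forbidden", action)}

def generate_norms(undesired):
    norms = []
    pending = set(undesired)
    while pending:                      # while QM not empty
        s = pending.pop()               # select an uncovered situation
        n = translate(s)                # express it as a rule
        norms.append(n)                 # add it to the regulation model
        # crude "adjust": drop situations the new norm already covers
        pending = {t for t in pending if not set(n["if"]) <= set(t)}
    return norms

undesired = [frozenset({"at-crossing", "is-stopped"}),
             frozenset({"at-crossing", "is-parked"})]
for norm in generate_norms(undesired):
    print(norm["if"], "->", norm["then"])
```

The example also shows the trade-off mentioned above: because the ground "at-crossing" is abstract, a single rule covers both undesired situations, but a real regulation would then need exceptions for the cases where stopping at a crossing is acceptable.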
5 Conclusions

We have described how TRACS supports the drafting of regulations in two ways: one by testing regulations (TRACS proper), the other by generating regulations (TRACS extended). TRACS is based upon the FOLaw ontology, in which ‘world’ and ‘normative’ knowledge are distinguished. Pivotal for all legal reasoning is the world knowledge, which models a particular legal domain independent of its normative assignments. This world model, or LAM (Legal Abstract Model), allows the variety of functions of TRACS to support legal drafting. In fact, the LAM for a particular legal domain may be reused beyond the particular legal system for which it has been constructed. With the globalization of human activities, harmonization and coordination of legislation becomes more and more urgent. In a very explicit manner this is the case in and around the European Community. The traffic regulations provide a good example, as the world of traffic is more or less the same all over the world, but the regulations differ enormously, and it is hard to assess by hand/eye/brain in which respects they are different. As we have seen in Section 3, it is already difficult for human legal drafters to trace the implications
of an individual regulation; comparing and adjusting two or more regulations for the same domain becomes a factor more difficult. In fact, TRACS-test can easily be put into a comparative mode once a LAM has been constructed, as its systematic situation generator may be the input to more than one regulation knowledge base, and TRACS records all differences in outcomes [5]. For instance, in this way we have compared (parts of) the old with the new Dutch traffic regulation. As we have also experienced, repairing and adjusting regulations is not a self-evident task either, as it may introduce new design errors. Therefore, we view TRACS-extended as a tool for adjustment, repair and communication in legal drafting rather than for full-blown legal drafting from scratch. TRACS has thus far been only an experimental testbed. Its basic philosophy and architecture are currently applied in an Esprit project, CLIME (http://www.bmtech.co.uk/clime/index.html), aimed at information serving from huge regulation bases, based upon a LAM and assessment algorithms similar to those in TRACS [19], [20]. However, we have identified at least two problem areas that may limit our approach.

– Situation generation is a combinatorial process. Although there are many sources to reduce complexity, many of which are classical in AI, they may not lift completely all barriers to scaled-up applications. This is typical for model-based approaches in AI. It should be noted that the same combinatorial problem is also one of the major reasons that human legal drafters – or for that matter, society – are not capable of tracing the implications of any non-trivial regulation. In fact, drafts of legislation are assessed in practice by evaluating a set of typical cases and having interest groups give comments. Even a ‘case-based’ input rather than full situation generation may improve quality, as the test runs of TRACS have shown.
– Our account of the translate process has been somewhat over-optimistic.
If we want to translate generic situation descriptions and we have to distribute these over application grounds and conclusions, we probably have to assume too much common sense knowledge about the world. For instance, TRACS may also decide that, in the example about voting, the agent should become over 18 years old when voting. In other words, it should be able to distinguish between actions over which an agent has control and those processes and actions over which he/she has not. Again, the semantics of the actions may help, but the basic message is that we hit here the same bottom as many KBS applications: the need to fall back on common sense reasoning about the world. Indeed, legal reasoning proceeds in close association with common sense reasoning.
References

[1] Breuker, J., and den Haan, N. Separating regulation from world knowledge: where is the logic. In Proceedings of the 4th International Conference on AI and Law (New York, 1991), M. Sergot, Ed., ACM, pp. 41–51.
[2] den Haan, N. Automated Legal Reasoning. PhD thesis, University of Amsterdam, 1996.
[3] den Haan, N., and Breuker, J. TRACS: Een juridisch KBS voor het RVV90. In Juridische toogdag 1992 (Amsterdam, 1992), VU, pp. 21–32.
[4] den Haan, N., and Breuker, J. Constructing normative rules. In Proceedings of JURIX-96 (1996), R. van Kralingen, Ed., pp. 41–56.
[5] den Haan, N., Breuker, J., and Winkels, R. Automated legislative drafting. In Proceedings of the 8th Dutch Conference on Artificial Intelligence, NAIC (1996), J. Meyer and L. van der Gaag, Eds., pp. 167–178.
[6] Hart, H., and Honoré, T. Causation in the Law, second ed. Oxford University Press, New York, 1985.
[7] Kowalski, R., and Sergot, M. Computer representation of the law. In Proceedings of the ninth IJCAI (1986), pp. 1269–1270.
[8] Malt, G. Methods for the solution of conflicts between rules in a system of positive law. In Coherence and Conflict in Law (Deventer/Boston, 1992), P. Brouwer, T. Hol, A. Soeteman, W. van der Velde, and A. de Wild, Eds., Kluwer, pp. 201–226.
[9] McCarty, L. A language for legal discourse I: basic structures. In Proceedings of the 2nd International Conference on AI and Law (Vancouver, 1989), ACM.
[10] Ross, A. Tu-tu. Scandinavian Studies in Law (1957), 139–153.
[11] Svensson, J., Kordelaar, P., Wassink, J., and van ’t Eind, G. ExpertiSZe, a tool for determining the effects of social security information. In Proceedings of Legal Knowledge Based Systems: Information Technology and Law, JURIX-1992 (Lelystad, NL, December 1992), C. Grütters, J. Breuker, H. van den Herik, A. Schmidt, and C. de Vey Mestdagh, Eds., Koninklijke Vermande, pp. 51–61.
[12] Valente, A. Legal Knowledge Engineering: A Modelling Approach. IOS Press, Amsterdam, The Netherlands, 1995.
[13] Valente, A., Breuker, J., and Brouwer, P. Legal modelling and automated reasoning with ON-LINE. International Journal of Human Computer Studies 51 (1999), 1079–1126.
[14] van Heijst, G., Schreiber, A. T., and Wielinga, B. Using explicit ontologies for KBS development. International Journal of Human-Computer Studies 46, 2/3 (1997), 183–292.
[15] van Kralingen, R. W. Frame-based Conceptual Models of Statute Law. PhD thesis, University of Leiden, The Hague, The Netherlands, 1995.
[16] Visser, P. Knowledge Specification for Multiple Legal Tasks; A Case Study of the Interaction Problem in the Legal Domain. PhD thesis, University of Leiden, The Hague, The Netherlands, June 1995.
[17] Visser, P., and Winkels, R., Eds. First International Workshop on Legal Ontologies (Melbourne, Australia, 1997), University of Melbourne.
[18] Voermans, W., and Verharen, E. LEDA: a semi-intelligent legislative drafting-support system. In Intelligent Tools for Drafting and Computer-Supported Comparison of Law, Sixth International Conference on Legal Knowledge-Based Systems, JURIX-1993 (1993), Koninklijke Vermande, pp. 81–94.
[19] Winkels, R., Bosscher, D., Boer, A., and Breuker, J. Generating exception structures for legal information serving. In Proceedings of the Seventh International Conference on Artificial Intelligence and Law (ICAIL-99), T. F. Gordon, Ed., ACM, New York, 1999, pp. 182–195. Also in Proceedings of the 11th Belgium-Netherlands Conference on Artificial Intelligence (Maastricht, November 3–4, 1999).
[20] Winkels, R., Breuker, J., Boer, A., and Bosscher, D. Intelligent information serving for the legal practitioner. In Proceedings of the IASTED Conference on Law and Technology, LawTech-99 (1999), pp. 103–114.
[21] Winkels, R., and den Haan, N. Automated legislative drafting: Generating paraphrases of legislation. In Proceedings of the Fifth International Conference on AI and Law (College Park, Maryland, 1995), ACM, pp. 112–118.