Potential Pitfalls for Patterns

Tim Menzies
Department of Software Development, Monash University, Caulfield East, Melbourne, VIC.
[email protected]

January 1, 1996

Abstract

An exciting new idea in OO design is the design pattern: an abstract, implementation-independent description of class designs that appear in many applications. Using patterns, a designer can bootstrap themselves into better designs by reusing proven old designs. A similar idea can be found in the expert systems literature dating from the mid-1980s. The goal of KL, or knowledge-level modelling (e.g. KADS), is to identify abstract patterns of inference that appear in many expert systems. Such abstract patterns of inference and design, it is argued, are productivity tools for the creation of software applications. Recently, however, an alternative view has emerged. While such abstract patterns are good for training purposes or for explaining old designs, it is possible that they are not a productivity tool for creating new applications.

KEYWORDS: OO design patterns, expert systems, knowledge-level modelling.
WORD COUNT: 3702 words.

1 Introduction

An exciting new idea in OO design is the design pattern: an abstract, implementation-independent description of class designs that appear in many applications [9]. Using patterns, it is claimed, a designer can bootstrap themselves into better designs by reusing proven old designs. Despite the current level of enthusiasm for OO design patterns, we find it necessary to sound a note of caution. A similar idea, called KL or knowledge-level modelling, can be found in the expert systems literature dating from the early 1980s [3, 14]. This paper tries to bridge the gap between design architectures proposed for OO and design architectures proposed for expert systems. In all, we will say more about expert system design architectures than OO design patterns. However, after over a decade of experience with KL, we can make some clear statements about its pitfalls. We will argue that we can extrapolate the lessons of KL modelling to OO design patterns. That is, the problems already seen by KL will have to be faced in the future by OO design patterns.

This paper is structured as follows. Due to the hybrid nature of this paper (OO and expert systems), we introduce OO design patterns to our expert systems audience in Section 2. Next, we introduce KL to our object-oriented audience in Section 3. The similarities and differences between OO design patterns and KL are discussed in Section 4. Section 5 discusses the pitfalls known for KL. Our conclusion, Section 6, lists the warning signs that would suggest that OO design patterns will suffer from the same drawbacks as KL.

2 OO Design Patterns

Consider the class hierarchy browser of Figure 1. When a class name is selected in the upper-left list box, the methods of that class are displayed in the upper-right list box. If one of these methods is selected, then the source code for that method is displayed in the bottom text pane. Now compare this class hierarchy browser with the disk browser shown in Figure 2. When a directory name is selected in the upper-left list box, the files in that directory are displayed in the upper-right list box. If one of these files is selected, then the contents of that file are displayed in the bottom text pane.
Figure 1: A class hierarchy browser (screenshot omitted; it shows the classes Number, Complex, Fraction and Integer, the methods of the selected class, and the Smalltalk source of a selected Fraction method).

Figure 2: A disk browser (screenshot omitted; it shows the directories c:\, bin, docs and wp, the files words.dvi, words.ps and words.tex, and the LaTeX source of the selected file).

Clearly, there is some similarity in the two browsers. Containers (classes or directories) are shown top-left. The things in the containers that are not themselves containers (methods and files) are shown top-right. The contents of these non-container things are shown in the bottom pane. If we rename containers "composites" and the non-containers "leaves", then we can design one composite browser class that handles both class hierarchies and directory trees (see Figure 3). That is, our disk browser and class hierarchy browser are both presentations of nested composites.

Figure 3: A composite browser (screenshot omitted; it shows the composites hierarchy top-left, the leaves at the selected composite top-right, and an editor for the selected leaf below).

Figure 4 shows the inner structure of the composite browser. Composites contain either other composites or leaves. Leaves compile their contents into the lower text pane. Once this structure is in place, all that is required to convert a disk browser to a class hierarchy browser is to:

- Change the title of the window from "Disk Browser" to "Class Hierarchy Browser".
- Implement the different compiles methods in the classes File and Method.

Figure 4: Object model for the composite browser (diagram omitted; it relates Component, Composite, Leaf, Directory, File, Method and CompositeBrowser through the isa, aggregation and association links defined in its key).

A composite is one of the 23 OO design patterns listed by GOF¹. The above example suggests the power of such OO design patterns. Seemingly different problems can be resolved to a single design. OO design patterns could become a repository for experience which can benefit new designers. OO design patterns could also serve to unify the terminology of OO design, allowing experience from one application to migrate into another area. For more on the possible benefits of OO design patterns, see [9].

¹ GOF = the "gang of four", i.e. the four authors of the OO design patterns text [9].
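To make the composite structure concrete, here is a minimal sketch in Python (not from the original paper; all class and method names are illustrative choices) of how one browser can present both directory trees and class hierarchies once both are expressed as composites and leaves.

```python
# Sketch of the Composite pattern behind the browser example.
# All names here are illustrative; the paper's Smalltalk classes may differ.

class Component:
    """Anything the browser can show: a composite or a leaf."""
    def __init__(self, name):
        self.name = name

class Composite(Component):
    """A container shown top-left (a directory or a class)."""
    def __init__(self, name, children=()):
        super().__init__(name)
        self.children = list(children)

class Leaf(Component):
    """A non-container shown top-right (a file or a method)."""
    def compiled_contents(self):
        raise NotImplementedError

class TextFile(Leaf):
    def __init__(self, name, text):
        super().__init__(name)
        self.text = text
    def compiled_contents(self):
        return self.text            # a file's "compilation" is just its text

class Method(Leaf):
    def __init__(self, name, source):
        super().__init__(name)
        self.source = source
    def compiled_contents(self):
        return self.source          # a method shows its source code

class CompositeBrowser:
    """One browser class serves both disk and class-hierarchy browsing."""
    def __init__(self, title):
        self.title = title

    def browse(self, composite, leaf_name):
        print(self.title)
        print("composites:", [c.name for c in composite.children
                              if isinstance(c, Composite)])
        leaves = [c for c in composite.children if isinstance(c, Leaf)]
        print("leaves:", [l.name for l in leaves])
        selected = next(l for l in leaves if l.name == leaf_name)
        print(selected.compiled_contents())

# Usage: only the window title and the Leaf subclasses change.
docs = Composite("docs", [TextFile("words.tex", r"\title{Problems with Patterns}")])
CompositeBrowser("Disk Browser").browse(docs, "words.tex")
```

A class hierarchy browser would reuse CompositeBrowser unchanged, supplying classes as the composites and Method objects as the leaves.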
3 KL Modelling
The goal of KL modelling is to identify abstract patterns of inference that appear in many expert systems; e.g. diagnosis, classification, monitoring, etc. Such abstract patterns of inference, it is argued, are productivity tools for the creation of expert systems. Examples of KL are Clancey's model construction operators [7], Steels' components of expertise [24], Chandrasekaran's task analysis [4], the PROTEGE-II group [22], SPARK/BURN/FIREFIGHTER (SBF) [13] and KADS [27]. In terms of mature KL methodologies, KADS is the dominant KL approach. Wielinga et al. note that, as of 1992, KADS has been used in some 40 to 50 knowledge-based systems (KBS) projects, 17 of which are described in published papers [27].

Our general claim is that OO patterns and KL (e.g. KADS) are similar enough for lessons from KL to be relevant to OO design patterns. To make that claim, we now present a sample of the KADS research and Clancey's KL work. We begin with the quote Clancey used to open the classic KL paper Heuristic Classification [6]. We believe that this quote reflects something of the same intent as the GOF work:

    To understand something as a specific instance of a more general case - which is what understanding a more fundamental principle of structure means - is to have learnt not only a specific thing but also a model for understanding other things like it that one may encounter. (J.S. Bruner)

In Heuristic Classification, Clancey reverse-engineered 10 expert systems written in a variety of tools: 5 used the EMYCIN rule-based approach, 3 used the Teknowledge Inc. toolkit, 1 used a frame-based system and 1 was coded directly in LISP. He found that all these systems shared the same abstract pattern of inference, which he called heuristic classification (shown in Figure 5). Inspired by Clancey's insight, subsequent work found other abstract patterns of inference. Tansley & Hayball [25] list over two dozen patterns of inference² of the form of Figure 6, using the KADS notation (rectangles are data structures and ovals are functions).

² Including systematic diagnosis (localisation and causal tracing), mixed-mode diagnosis, verification, correlation, assessment, monitoring, simple classification, heuristic classification, systematic refinement, prediction, prediction of behaviour and values, design (hierarchical and incremental), configuration (simple and incremental), planning, and scheduling.

Figure 5: Heuristic classification (diagram omitted; data are abstracted into data abstractions, heuristically matched to solution abstractions, and then refined into solutions).

Given a complaint, the KADS abstract pattern for diagnosis is that a system model is decomposed into hypothetical candidate faulty components. A norm value is collected from the system model. An observation for that candidate is requested from the observables (stored internally as a finding). The candidate hypothesis is declared to be the diagnosis based on the difference between the norm value and the finding.

Figure 6: KADS: diagnosis (diagram omitted; a dataflow over complaint, system model, hypothesis, observable, norm, finding and difference, connected by the inferences select, decompose, specify and compare).
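To make the KADS notation concrete, the sketch below (not from the paper; the primitives and the example system model are our own illustrative assumptions) renders the diagnosis pattern of Figure 6 as a composition of small, reusable inference steps.

```python
# Sketch of the KADS diagnosis pattern of Figure 6 as composed inference steps.
# The primitives and the toy system model are illustrative assumptions,
# not the KADS library itself.

def decompose(system_model, complaint):
    """Decompose the model into hypothetical candidate faulty components.
    (The complaint is carried for fidelity to Figure 6 but unused here.)"""
    return system_model["components"]

def specify(system_model, candidate):
    """Specify the norm (expected value) for a candidate component."""
    return system_model["norms"][candidate]

def select(observables, candidate):
    """Select the finding (observed value) for that candidate."""
    return observables[candidate]

def compare(norm, finding):
    """Compare norm and finding, returning the difference."""
    return abs(norm - finding)

def diagnose(system_model, observables, complaint, tolerance=0.1):
    """Declare as the diagnosis the candidate whose finding deviates from its norm."""
    for candidate in decompose(system_model, complaint):
        norm = specify(system_model, candidate)
        finding = select(observables, candidate)
        if compare(norm, finding) > tolerance:
            return candidate
    return None

# Toy example: a pump whose outlet pressure is well below its norm.
model = {"components": ["valve", "pump"],
         "norms": {"valve": 1.0, "pump": 3.0}}
observed = {"valve": 1.0, "pump": 1.2}
print(diagnose(model, observed, complaint="low flow"))   # -> "pump"
```

The same specify, select and compare steps reappear in the monitoring pattern of Figure 7; Section 5.4 (Figure 9) returns to that overlap.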
As another example, Figure 7 shows the KADS abstract inference pattern for monitoring. A parameter is selected from a system model. The model's expected normal value (the norm) is generated from the model, and an observation is collected from the observables (stored as a finding). The current state of the monitored system is reported as a discrepancy class after comparing the finding with the expected normal value.
Figure 7: KADS: monitoring (diagram omitted; a dataflow over system model, parameter, observable, norm, finding, historical data, difference and discrepancy class, connected by the inferences select, specify, compare and classify).

Note the shaded portions of Figures 6 & 7. We will return to these shaded portions below (see Section 5.4, Figure 9). The way these abstract patterns of inference are supposed to be used is very similar to the GOF proposal:

- Using abstract patterns of inference, knowledge engineering becomes a structured search for an appropriate inference pattern. Tools such as the SHELLEY workbench [27] allow a knowledge engineer to search requirements documents for a match between the stated requirements and the library of known abstract inference patterns. Once such a pattern is found or developed, systems development becomes a process of filling in the details required to implement that abstract inference pattern.

- All known abstract inference patterns are really combinations of a small number (say, 20) of reusable inference subroutines. Some of these reusable inference subroutines are shown in Figures 6 & 7 (e.g. select, specify, compare). New abstract patterns can be quickly built out of these lower-level inference primitives.

- Libraries of abstract inference patterns become a productivity tool for building a wide variety of expert systems. Marques et al. report significantly reduced development times for expert systems using a library of 13 reusable inference subroutines (eliminate, schedule, present, monitor, transform-01, transform-02, compare-01, compare-02, translate-01, translate-02, classify, select, dialog-mgr) in the SBF environment. In the nine studied applications, development times were 1 to 17 days using SBF, compared to 63 to 250 days without it [13].

- Knowledge engineers can now separate their analysis work from their design and coding work. Analysis is the process of generating abstract problem solving strategies. Design and coding is the process of implementing these strategies.

- KL becomes a succinct vocabulary for describing, summarising, and comparing expert systems. Practitioners find this retrospective second glance at their systems useful for developing more generalised architectures for future work [10]. Expert systems theoreticians have used KL to assess and clarify the essential features and differences of applications [23]. Lastly, knowledge engineering novices can use a KL analysis of classic expert systems to quickly review successful techniques.

The above description is only a partial description of KL in general and KADS in particular. For an on-line version of the KADS documentation, see http://www.swi.psy.uva.nl/projects/CommonKADS/Reports.html. See [25] for a detailed text on KADS. See the Related Work section of [27] for a discussion of the differences in the various KL approaches. For a tutorial introduction to KADS, see [10, 14].
4 Comparing KL & OO Design Patterns

4.1 Differences
While both KL and OO design patterns make use of classification hierarchies, they use them in radically different ways. A standard OO class hierarchy is not suitable for (e.g.) specialisation and generalisation in heuristic classification. A heuristic classification hierarchy must be semantically clean; i.e. arcane programming constructs such as over-rides of inherited properties must be forbidden. If an OO system implemented heuristic classification, then a special network of instances would have to be created to manage a clean generalisation semantics.

KL may use OO techniques to implement functions like classify, select, etc. However, this would be a low-level implementation decision that would not affect the high-level KL modelling. Many of the KL constructs (e.g. rules) focus on the processes and connections between concepts. We suspect that if the GOF turned their attention from programmer-level constructs to constructs for business rules then they would arrive at something like KL.

KADS in particular, and KL in general, is based on a functional-decomposition style of analysis³. Lest OO practitioners discount KL for that reason, we note that the SBF results are the most significant documented evidence of productivity gains in any software approach (be it knowledge-based, object-oriented or otherwise) [17, 18].

³ While the KADS community uses terms such as "knowledge sources", we read diagrams such as Figures 6 & 7 to be high-level dataflow diagrams.

Ideally, a KL analysis is implemented in a declarative language. Once an abstract inference pattern is implemented in such a language, it can be "driven" in different directions. For example, theoretically, a heuristic classification system can be driven "forwards" to produce diagnoses from data or "backwards" to suggest tests that could confirm a particular diagnosis. Typically, KL is implemented in a rule-based approach (though the KADS community argues that their abstract patterns can be implemented in any high-level language).

GOF is careful to describe the context of each OO design pattern and when it would be applicable. KL offers no such contextual information since there exists rough agreement in the knowledge acquisition community on the basic KL tasks (e.g. classification, diagnosis, monitoring).

GOF OO design patterns are closer to implementation than most KL techniques. KADS, for example, stresses that it is an implementation-independent methodology. It deliberately makes no statement about the inner structure of the inference subroutines such as select. However, given a high-level knowledge representation language, these subroutines are simple to operationalise. The PROTEGE-II group, for example, uses a forward-chaining rule-based shell called CLIPS [20].
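As a toy illustration of that "driven in different directions" claim (our own sketch, not from KADS or the paper; the rules and findings are invented), one declarative rule base can serve both directions:

```python
# Driving one declarative rule base "forwards" (data -> diagnoses) and
# "backwards" (candidate diagnosis -> tests that would confirm it).
# The rules and findings below are invented for this sketch.

RULES = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
]

def forwards(findings):
    """Report every disorder whose premises are all present in the findings."""
    return [disorder for premises, disorder in RULES if premises <= findings]

def backwards(disorder, findings):
    """Report the tests (missing premises) that could confirm the disorder."""
    return [p for premises, d in RULES if d == disorder
            for p in premises - findings]

print(forwards({"fever", "cough"}))        # -> ['flu']
print(backwards("measles", {"fever"}))     # -> ['rash']
```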
4.2 Similarities
While there are differences between KL and OO design patterns (see Section 4.1), both are examples of the same software abstraction process. Experienced software engineers can describe different applications using a common, abstract, language. The details of these abstract design description languages may differ: OO design patterns use class inheritance and association links; KL uses a variant on dataflow diagrams to describe their abstract patterns of inference. Nevertheless, regardless of these differences, both KL and OO design languages seek some characterisation of a design that is implementation independent and re-usable. Both systems seek productivity tools for new applications. Both approaches are powerful tools for succinctly describing old designs and suggesting methods for unifying different systems.
5 Limits to KL

Clancey's Heuristic Classification paper [6] offered a unified retrospective view on numerous, seemingly different, expert systems. Similar (but smaller) studies (e.g. [1, 10]) suggest that KL can retrospectively clarify historical expert systems design issues. However, several important issues remain outstanding: (i) are the KL abstractions re-usable? (ii) are KL abstractions a productivity tool? (iii) do KL abstractions assist in the design of new systems? (iv) does the extra level of abstraction used in KL overly-complicate the design process? We address these questions below.
5.1 Are KL Abstractions Reusable?

It is not clear that the problem solving methods found by KL are truly reusable. Between the various camps of KL researchers, there is little agreement on the details of the problem solving methods. Contrast the lists of inference subroutines from KADS [27] and SBF [13] (termed "knowledge sources" and "mechanisms" respectively). While there is some overlap, the lists are different. Also, the number and nature of the problem solving methods is not fixed. Often when a domain is analysed using KL, a new method is induced [10]. Further, different interpretations exist of the same method. For example, the problem solving method proposed by Bredeweg [2] for prediction via qualitative reasoning is different to the qualitative prediction method proposed by Tansley & Hayball [25].

5.2 Are KL Abstractions a Productivity Tool for New Applications?

The productivity gains seen in the SBF experiment have yet to be repeated in other applications. The crucial test for SBF is how well this system ports to domains outside of the areas it was initially developed for. We find it significant that the SBF group made no entry to the Sisyphus-1 or Sisyphus-2 KA projects. Sisyphus is an attempt by the international KA community to define reproducible KA experiments [10]. Sisyphus-2 was a case study in application development by the international knowledge acquisition community. The particular problem studied in the Sisyphus-2 experiment was a medium-sized task (elevator configuration). A prior solution to this problem was known in the literature from 1986 (SALT [12, 11]). This prior solution was used to generate a precise specification [28] which was the starting point for the Sisyphus-2 groups. The problem was attempted by eight KL groups. Most other KL groups offered solutions to the 1991, 1992, and 1994 Sisyphus rounds. If SBF was a general productivity tool, then the SBF team should have been able to quickly build Sisyphus solutions.

5.3 Do KL Abstractions Assist in the Design of New Systems?

Corbridge et al. report a study in which subjects had to extract knowledge from an expert dialogue using a variety of abstract pattern tools [8]. In that study, subjects were supplied with transcripts of a doctor interviewing a patient. From the transcripts, it was possible to extract 20 respiratory disorders and a total of 304 "knowledge fragments" (e.g. identification of routine tests, non-routine tests, relevant parameters, or complaints). Subjects were also supplied with one of three abstract patterns representing models of the diagnostic domain. Each model began with the line "To help you with the task of editing the transcript, here is a model describing a way of classifying knowledge". Model one was an "epistemological model" that divided knowledge into various control levels of the diagnosis process. Model one was the "straw man"; it was such a vague description of how to do analysis that it should have proved useless. Model two was a more sophisticated version of Figure 6. Model three was "no model"; i.e. no guidance was given to subjects as to how to structure their model.

The results are shown in Figure 8. The statistical analysis performed by Corbridge et al. found a significant difference between the performance of group 3 compared to groups 1 and 2. Further, no significant difference could be found between the poor-abstract-model group (model 1) and the group that was using a very mature abstract model (model 2). These are very counter-intuitive results. Using a hastily-built abstraction was just as useful as using a mature abstraction. And using no abstractions worked best of all! Far from challenging this result, the KL community is now exploring empirical methods for exploring its approach in the Sisyphus-3 project.

Figure 8: Analysis via different models

    Model               % disorders identified   % knowledge fragments identified
    1 Epistemological   50                       28
    2 KADS              55                       34
    3 No model          75                       41
5.4 Does KL Modelling Overly-Complicate Modelling?

Certain KL authors note certain similarities between the different abstract inference patterns proposed by KADS:

- Scheduling, planning and configuration are actually the same problem, divided on two dimensions ("goal states known or not" and "temporal factors considered or not" [25, Figure 12.3]).

- There exists a common sub-graph between Figures 6 & 7 (see Figure 9).

Figure 9: Overlap between KADS diagnosis and monitoring (diagram omitted; the shared sub-graph runs specify and select from a hypothesis or parameter and an observable to a norm and a finding, which are compared to yield a difference).

Having noted that such similarities exist, the KL researchers do not take the next step and simplify their distinctions (e.g. by combining diagnosis and monitoring). In other work [15, 16], we have explored unifying knowledge-based processing by inferencing over and-or graphs. Such graphs could be computed from an OO design if we ignore all encapsulation boundaries and just map the dependencies between instance variables. In terms of abstraction, we would characterise such a modelling technique as a very low-level tool. Nevertheless, we have found that a single inference procedure (abduction) can be applied over such a representation to implement many of the KL abstract inference patterns (e.g. diagnosis, case-based reasoning, explanation, prediction, classification, planning, monitoring, qualitative reasoning, verification, multiple-expert knowledge acquisition, single-user decision support systems, multiple-user decision support systems, natural-language processing, design, visual pattern recognition, analogical reasoning, financial reasoning, and machine learning).

Our general point here is that, in the case of KL, abstractions have confused rather than clarified the modelling process. In this regard, we find the analysis by Motta & Zdrahal of the Sisyphus-2 applications to be particularly interesting. Motta & Zdrahal discuss the various Sisyphus-2 KL implementations using their special knowledge of constraint satisfaction algorithms [29]. We find their lower-level view more insightful about the construction process than the less-detailed, high-level KL approach. This low-level view of a problem can find errors that experienced KL practitioners cannot. For example, Motta & Zdrahal argue that one declarative translation of the procedures in the Sisyphus-2 specification blurred the distinction between hard constraints (which must not be violated) and soft constraints (which can be optionally violated) [19].
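To give a flavour of what such a very low-level representation might look like, here is a toy sketch (our illustration only; it is not the abductive framework of [15, 16], and the variable names are invented) in which an OO design, with encapsulation ignored, reduces to a dependency graph over instance variables, and one traversal procedure serves both a diagnosis-style and a prediction-style query.

```python
# Rough illustration: an OO design with encapsulation ignored becomes a
# dependency graph over instance variables. One graph-search procedure
# then serves several query "patterns". Example data is invented.

from collections import defaultdict

# variable -> variables it directly depends on
DEPENDS_ON = {
    "browser.title": [],
    "pane.contents": ["leaf.source"],
    "leaf.source":   ["file.text"],
    "file.text":     [],
}

def upstream(var, graph=DEPENDS_ON):
    """Diagnosis-flavoured query: which variables could explain a fault in var?"""
    causes, todo = set(), [var]
    while todo:
        for parent in graph.get(todo.pop(), []):
            if parent not in causes:
                causes.add(parent)
                todo.append(parent)
    return causes

def downstream(var, graph=DEPENDS_ON):
    """Prediction-flavoured query: which variables are affected if var changes?"""
    reverse = defaultdict(list)
    for child, parents in graph.items():
        for parent in parents:
            reverse[parent].append(child)
    return upstream(var, reverse)

print(upstream("pane.contents"))   # -> {'leaf.source', 'file.text'}
print(downstream("file.text"))     # -> {'leaf.source', 'pane.contents'}
```

The point is the level of the representation, not the code: at this level, several of the KL "patterns" collapse into variations on the same graph search.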
6 Conclusion: Potential Pitfalls for Patterns
A surprising conclusion from research into explanation generation is that the knowledge required to build a system is different to the knowledge required to explain a system [5, 26]. We make a similar observation here. When we read the KL and OO design patterns literature, we get a sense of great insight. The intuition is that the abstractions offered by OO design patterns and KL are true and useful for the construction of new systems. However, we have argued here that this intuition may not be correct. Systems that are good for explanation may not be useful for other tasks. Just because KL and OO design patterns apparently clarify old designs, this does not mean that they are productivity tools for new designs.

We have argued that there exists a nontrivial similarity between the OO design patterns research and the KL research. This is a falsifiable claim. OO design patterns share the same problems as KL if:
- Between OO design patterns, common processing elements are identified, suggesting that seemingly different patterns have a significant overlap.

- The extra layer of abstraction used in OO design patterns confuses rather than clarifies design issues.

- When controlled experiments measuring development productivity are performed, no difference is detected between applications that use OO design patterns and those that do not.

- In the future, we find that low-level details (e.g. constraint-satisfaction algorithms or and-or graph processing) become the focus of the design process.

- OO design patterns prove not to be reusable.

- Different developers working on the same problem develop different patterns.

- For the above two reasons, libraries of design patterns from different sources contain significant deviations.
As to the last point, we note that at least one library of design patterns [21] contains significant deviations from the GOF library.

References

[1] H. Akkermans, F. van Harmelen, G. Schreiber, and B. Wielinga. A Formalisation of Knowledge-Level Models for Knowledge Acquisition. In J. Balder and H. Akkermans, editors, Formal Methods for Knowledge Modeling in the CommonKADS Methodology: A Compilation (KADS-EE/T1.2/TR/ECN/014/1.0), pages 53-90. Netherlands Energy Research Foundation, 1992.

[2] B. Bredeweg. Expertise in Qualitative Prediction of Behaviour. PhD thesis, University of Amsterdam, 1992.

[3] B. Chandrasekaran. Towards a Taxonomy of Problem Solving Types. AI Magazine, pages 9-17, Winter/Spring 1983.

[4] B. Chandrasekaran. Design problem solving: A task analysis. AI Magazine, pages 59-71, Winter 1990.

[5] W. Clancey. The epistemology of rule-based systems: a framework for explanation. Artificial Intelligence, 27:289-350, 1983.

[6] W. Clancey. Heuristic Classification. Artificial Intelligence, 27:289-350, 1985.

[7] W.J. Clancey. Model Construction Operators. Artificial Intelligence, 53:1-115, 1992.

[8] C. Corbridge, N.P. Major, and N.R. Shadbolt. Models Exposed: An Empirical Study. In Proceedings of the 9th AAAI-Sponsored Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, 1995.

[9] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

[10] M. Linster and M. Musen. Use of KADS to Create a Conceptual Model of the ONCOCIN Task. Knowledge Acquisition, 4(1):55-88, 1992.

[11] S. Marcus and J. McDermott. SALT: A Knowledge Acquisition Language for Propose-and-Revise Systems. Artificial Intelligence, 39(1):1-37, 1989.

[12] S. Marcus, J. Stout, and J. McDermott. VT: An Expert Elevator Designer That Uses Knowledge-Based Backtracking. AI Magazine, pages 41-58, Winter 1987.

[13] D. Marques, G. Dallemagne, G. Kliner, J. McDermott, and D. Tung. Easy Programming: Empowering People to Build Their Own Applications. IEEE Expert, pages 16-29, June 1992.

[14] T.J. Menzies. Limits to Knowledge Level-B Modeling (and KADS). In Proceedings of AI '95, Australia. World-Scientific, 1995.

[15] T.J. Menzies. An Overview of Abduction as a General Framework for Knowledge-Based Systems. Technical Report TF95-5, Department of Software Development, Monash University, 1995.

[16] T.J. Menzies. Principles for Generalised Testing of Knowledge Bases. PhD thesis, University of New South Wales, 1995.

[17] T.J. Menzies, J. Edwards, and K. Ng. The Mysterious Case of the Missing Re-usable Class Libraries. In Tools Pacific 1992, pages 421-428. Prentice Hall, 1992.

[18] T.J. Menzies and P. Haynes. The Methodologies of Methodologies; or, Evaluating Current Methodologies: Why and How. In Tools Pacific '94, pages 83-92. Prentice-Hall, 1994.

[19] E. Motta and Z. Zdrahal. The trouble with what: Issues in method-independent task specifications. In Proceedings of the 9th AAAI-Sponsored Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada, 1995.

[20] NASA. CLIPS Reference Manual. Software Technology Branch, Lyndon B. Johnson Space Center, 1991.

[21] W. Pree. Design Patterns for Object-Oriented Software Development. Addison-Wesley, 1995.

[22] T.E. Rothenfluh, J.H. Gennari, H. Erikson, A.R. Puetra, W. Tu, and M.A. Musen. Reusable ontologies, knowledge-acquisition tools and performance systems: PROTEGE-II solutions to Sisyphus-2. In B.R. Gaines and M. Musen, editors, Proceedings of the 8th AAAI-Sponsored Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, pages 43.1-43.30, 1994.

[23] A.T. Schreiber, B.J. Wielinga, and J.M. Akkermans. Using KADS to Analyse Problem Solving Methods. In J. Balder and H. Akkermans, editors, Formal Methods for Knowledge Modeling in the CommonKADS Methodology: A Compilation (KADS-EE/T1.2/TR/ECN/014/1.0), pages 53-90. Netherlands Energy Research Foundation, 1992.

[24] L. Steels. Components of Expertise. AI Magazine, 11(2):29-49, 1990.

[25] D.S.W. Tansley and C.C. Hayball. Knowledge-Based Systems Analysis and Design. Prentice-Hall, 1993.

[26] M.R. Wick and W.B. Thompson. Reconstructive expert system explanation. Artificial Intelligence, 54:33-70, 1992.

[27] B.J. Wielinga, A.T. Schreiber, and J.A. Breuker. KADS: a Modeling Approach to Knowledge Engineering. Knowledge Acquisition, 4(1):1-162, 1992.

[28] G.R. Yost. Configuring Elevator Systems, 1992. Digital Equipment Co., Marlboro, Massachusetts.

[29] Z. Zdrahal and E. Motta. An In-Depth Analysis of Propose & Revise Problem Solving Methods. In R. Mizoguchi, H. Motoda, J. Boose, B. Gaines, and P. Compton, editors, Proceedings of the Third Japanese Knowledge Acquisition for Knowledge-Based Systems Workshop: JKAW '94, 1994.

Note that many of the Menzies references can be obtained from http://www.sd.monash.edu.au/timm/pub/docs/papersonly.html.