Solution Verification in Software Design: A CBR Approach

Paulo Gomes, Francisco C. Pereira, Paulo Carreiro, Paulo Paiva, Nuno Seco, José Luís Ferreira, and Carlos Bento

CISUC - Centro de Informática e Sistemas da Universidade de Coimbra, Departamento de Engenharia Informática, Polo II, Universidade de Coimbra, 3030 Coimbra
[email protected], http://rebuilder.dei.uc.pt
Abstract. Software design is becoming a demanding task, not only because the complexity of software systems is increasing, but also due to the pressure that development teams suffer from clients. CASE tools capable of performing more work and of behaving more intelligently are needed, so that they can provide more help to the designer. In this paper we describe a CASE tool capable of assisting the designer in a more intelligent way, be it by suggesting new solutions or by learning user preferences. We detail how the solutions are generated and focus on the verification process, which enables the new designs to have fewer errors. This verification process takes a CBR approach, which has the advantage of being personalized. We describe experimental results that show the effect of the verification process on the generated solutions.
1 Introduction
The complexity of software design [2] is increasing at a pace that companies and development teams have difficulty following. Developing more powerful CASE (Computer Aided Software Engineering) tools can be a way of increasing the productivity of software development teams. These tools can help the designer more effectively reuse designs and code from previous systems. This task requires knowledge representation formalisms and reasoning mechanisms capable of providing the software designer with useful cognitive functionalities such as: retrieving similar designs, suggesting new designs, learning, verifying diagrams, and more. One of the main functionalities that the new generation of CASE tools must have is the capability of generating new alternative designs. This is very useful to the designer, enabling her/him to explore the design space, and possibly saving time and resources. Most of the time, the reuse of software designs implies adapting old designs to the new situation. This adaptation process, when performed by a computational system, will generate some inconsistent or incoherent objects in the new design. A verification process is needed to detect and revise these errors. In this paper we present a CASE tool capable of generating new software designs. We focus on the verification process used to correct
mistakes made in the generation process. We also describe how this verification process learns new knowledge that codifies designer preferences and will be reused later. Case-Based Reasoning (CBR) [1] is a reasoning mechanism that uses previous experience to solve new problems. This experience takes the form of cases stored in a case library for reuse. We think that CBR is suitable for software design verification for two main reasons: there is no domain model for software design, and users have different preferences. One of the reasons CBR was developed was to succeed in domains for which there is no model, and in which rule-based or model-based approaches consequently fail [8]. In software design there is no domain model, which makes CBR a more feasible approach. The second aspect is that users tend to model systems in different ways, so the verification process has to be user-dependent. In other words, it must comply with the user's preferences and not with a general view of the world. Regarding this aspect, CBR is also suitable, providing a way to personalize the verification knowledge for each designer. There are several approaches to case adaptation in CBR design systems. For instance, CADSYN [9] uses constraint satisfaction algorithms for adaptation. Composer [11] goes further, and combines design composition with constraint resolution. Other approaches to design adaptation involve the application of rules (FABEL [12]) or model-based reasoning (KRITIK [3]). Among these, few revise the generated solutions, and those that do generally do not learn from the revision process. IM-RECIDE [4] is a CBR design system that uses failure knowledge, in the form of failure cases, to fix errors in new solutions. It learns failure cases that enable the system to identify design errors and to correct them. In this paper we describe our approach to solution verification based on CBR.
This approach is implemented in a CASE tool called REBUILDER, which has two main goals: to create a corporate memory of software designs, and to provide the software designer with a design environment capable of promoting software design reuse in a more intelligent way. The verification process gives the designer a cognitive tool capable of adapting itself to the designer's preferences while performing verification tasks that were previously done by the designer. The next section describes the architecture of REBUILDER, its modules, and how they work together. Section 3 then explains how new diagrams are created, describing two generation processes: analogy and case-based design composition. Section 4 focuses on the verification process, detailing the various parts of this module; it also presents a simple example to illustrate the process. Section 5 describes experimental work that gives insight into how this mechanism works. Finally, section 6 concludes the paper.
2 REBUILDER
This section describes the architecture of REBUILDER, which comprises four different modules: Knowledge Base (KB), UML Editor, KB Manager and CBR engine (see Figure 1).
[Figure: REBUILDER's architecture, showing the UML Editor used by the Software Designer, the CBR Engine (Retrieval, Analogy, Design Composition, Verification and Evaluation, and Learning submodules), the Knowledge Base (Case Library, Case Indexes, Data Type Taxonomy, WordNet), and the Knowledge Base Manager used by the Knowledge Base Administrator.]

Fig. 1. REBUILDER's Architecture.
The UML editor is the front-end of REBUILDER and the environment where the software designer develops designs. Apart from the usual editor commands to manipulate UML objects, the editor integrates new commands capable of reusing design knowledge. These commands are directly related to the CBR engine capabilities and are divided into two main categories:

– Knowledge Base actions: such as connect to KB and disconnect from KB.
– Cognitive actions: such as retrieve design, adapt design using analogy or design composition, verify design, evaluate design, and actions related to object classification using WordNet.

The KB Manager module is used by the administrator to manage the KB, keeping it consistent and updated. This module comprises all the functionalities of the UML editor, and adds case-base management functions to REBUILDER. These are used by the KB administrator to update and modify the KB. The KB comprises four different parts: the case library, which stores the cases of previous software designs; the index memory, used for efficient case retrieval; the data type taxonomy, which is an ontology of the data types used by the system; and WordNet [10], a lexical resource based on a differential theory in which concept meanings are represented by symbols that enable a theorist to distinguish among them. Symbols are words, and concept meanings are called synsets. A synset is a concept represented by one or more words. Besides a list of words and the corresponding synsets, WordNet also comprises several types of semantic relations. REBUILDER uses four different types of relations: is-a, part-of, substance-of, and member-of. Synsets are used in REBUILDER for the categorization of software objects. Each object has a context synset which represents the
object meaning. The object's context synset can then be used for computing object similarity (using the WordNet semantic relations), or as a case index, allowing rapid access to objects with the same classification. WordNet is used to compute the semantic distance between two context synsets. This distance is used in REBUILDER to assess the type similarity between objects, and to select the correct synset when the object name has more than one synset. This process is called word sense disambiguation [7] and is a crucial task in REBUILDER. The CBR engine is the module responsible for all the cognitive functionalities of REBUILDER. It comprises five submodules, each one implementing a different cognitive process that helps the software designer:

Retrieval. The retrieval submodule selects from the case base a set of cases ranked by similarity with the user's query. It enables the software designer to browse through the most similar designs in the case library, exploring different design alternatives and reusing pieces of cases. This module works like an intelligent search assistant, which first retrieves the relevant cases from the case library and then ranks them by similarity with the query. Only then are the cases presented to the designer. For more details see [5].

Analogy. The analogy submodule generates new solutions using analogical reasoning, which involves selecting candidate cases from the case library, mapping them to the query diagram, and finally transferring knowledge from the source case to the problem diagram, yielding a new diagram. This mechanism generates solutions using only one case, which constrains the type of solutions it can generate. For more details see [6].

Design Composition. The design composition submodule also generates new solutions from cases in the case library. The main difference to analogy-generated solutions is that it can use more than one case to generate a solution. This mechanism can select pieces of cases and then compose them into a new diagram, yielding a solution to the user's query.

Verification and Evaluation. This submodule comprises two functionalities: while verification checks the design's coherence and correctness, the evaluation mechanism assesses the diagram's properties. Verification is mainly used in combination with analogy or design composition, to look for errors in the generated diagram and to correct them. The evaluation mechanism is at the designer's disposal for listing the design properties, trying to identify shortcomings in a diagram.

Learning. The learning submodule implements several case-base maintenance strategies that can be used by the KB administrator to manage the case library contents. This submodule presents several assessment measures of the case library performance, which provide important advice to the administrator regarding the addition or deletion of cases.
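As a rough illustration of how a synset-based semantic distance can work, the sketch below measures the shortest path between two concepts in a toy is-a hierarchy; the concept names and links are invented for the example and are not real WordNet data:

```python
from collections import deque

# Toy is-a hierarchy (child -> parent). These links are illustrative only.
IS_A = {
    "teacher": "educator",
    "educator": "professional",
    "professional": "person",
    "student": "enrollee",
    "enrollee": "person",
}

def semantic_distance(s1, s2):
    """Length of the shortest path between two concepts in the is-a graph."""
    # Build an undirected adjacency list from the is-a links.
    adj = {}
    for child, parent in IS_A.items():
        adj.setdefault(child, set()).add(parent)
        adj.setdefault(parent, set()).add(child)
    # Breadth-first search from s1 until s2 is reached.
    queue, seen = deque([(s1, 0)]), {s1}
    while queue:
        node, dist = queue.popleft()
        if node == s2:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no path: unrelated concepts

# teacher-educator-professional-person-enrollee-student: 5 edges
print(semantic_distance("teacher", "student"))
```

A smaller distance means the two concepts sit in the same (or a nearby) branch of the hierarchy, which is what makes this measure usable as a type-similarity signal between objects.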
3 Solution Generation
In REBUILDER solutions are generated using two modules: Analogy and Design Composition. While Analogy creates new diagrams based on one case, Design Composition generates new diagrams based on several cases. The input data used in solution generation is a UML class diagram, in the form of a package. This is the user's query, which is usually a small class diagram in an early stage of development (see Figure 2). The following subsections describe in greater detail how these modules generate designs.
Fig. 2. An example of a class diagram in the early stages of development.
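As a rough sketch, a query like the one in Figure 2 could be represented with structures along these lines; the type and field names are our assumptions for illustration, not REBUILDER's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UMLClass:
    name: str
    synset: Optional[int] = None      # WordNet context synset, once assigned
    attributes: List[str] = field(default_factory=list)
    methods: List[str] = field(default_factory=list)

@dataclass
class UMLRelation:
    kind: str          # "Association", "Generalization", "Realization", "Dependency"
    source: str        # source class name
    dest: str          # destination class name
    multiplicity: str = "1-1"

@dataclass
class UMLPackage:
    name: str
    classes: List[UMLClass] = field(default_factory=list)
    relations: List[UMLRelation] = field(default_factory=list)

# A small early-stage query, in the spirit of Figure 2.
query = UMLPackage(
    name="School",
    classes=[UMLClass("Teacher"), UMLClass("Student"), UMLClass("Lecture")],
    relations=[UMLRelation("Association", "Teacher", "Lecture"),
               UMLRelation("Association", "Student", "Lecture")],
)
```

A package of classes plus typed, multiplicity-annotated relations is enough to drive both generation modules: analogy maps such a query onto one retrieved case, while design composition assembles coverage of it from several cases.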
3.1 Analogy
Analogical reasoning is used in REBUILDER to suggest class diagrams to the designer, based on a query diagram. The analogy process has three steps: identify candidate diagrams for analogy; map the candidate diagrams; and create new diagrams by knowledge transfer between the candidate diagram and the query. Cases are selected from the case library to be used as source diagrams. Most of the analogies found in software design are functional analogies, that is, the analogy mapping is performed using the functional similarity between objects. Using the retrieval algorithm in the first phase of analogy enables this kind of analogy, since objects that are functionally similar tend to be categorized in the same branch of (or close to each other in) the WordNet is-a trees. Thus, the analogy module benefits from a retrieval filtering based on functional similarity (for more details see [6]). The second step of analogy is the mapping of each candidate to the query diagram, yielding an object correspondence list for each candidate. This phase
relies on two alternative algorithms: one based on relation mapping, and the other on object mapping; both return a list of mappings between objects. The relation-based algorithm uses the UML relations to establish the object mappings. It starts the mapping by selecting a query relation based on a UML heuristic (the independence measure), which selects the relation that connects the two most important diagram objects. The independence measure is a heuristic that assigns to each diagram object an independence value, based on UML knowledge, reflecting the object's independence in relation to all the other diagram objects. The algorithm then tries to find a matching relation in the candidate diagram. After it finds a match, it proceeds to the neighbor relations, spreading the mapping along the diagram relations. This algorithm maps objects in pairs corresponding to the relations' objects. The object-based algorithm starts the mapping by selecting the most independent query object, based on the UML independence heuristic. After finding the corresponding candidate object, it tries to map the neighbor objects of the query object, taking the object's relations as constraints in the mapping. Both algorithms satisfy the structural constraints defined by the UML diagram relations. Most of the resulting mappings do not map all the problem objects, so the mappings are ranked by the number of objects mapped. The last step is the generation of new diagrams using the established mappings. The analogy module creates a new diagram, which is a copy of the query diagram. Then, using the mappings between the query objects and the candidate objects, the algorithm transfers knowledge from the candidate diagram to the new diagram. This transfer has two steps: first an internal object transfer, and then an external object transfer. In the internal object transfer, each mapped query object gets all the attributes and methods from the candidate object that were not in the query object.
This way, the query object is completed with the internal knowledge of the candidate object. The second step transfers neighbor objects and relations from the mapped candidate objects to the corresponding query objects in the new diagram. This transfers new objects and relations to the new diagram, expanding it.

3.2 Case-Based Design Composition
The Design Composition module generates new diagrams by decomposition and/or composition of case diagrams. The goal of the composition module is to generate new diagrams that have the query objects, thus providing an evolved version of the query diagram. Generation of a new UML design using case-based composition involves two main steps: retrieving cases from the case library to be used as knowledge sources, and using the retrieved cases (or parts of them) to build new UML diagrams. In the first phase, the selection of the cases to be used is performed using the retrieval algorithm described in section 2. The adaptation of the retrieved cases to the target problem is based on two different strategies: best case composition, and best complementary cases composition. In the best case composition, the adaptation module starts from the most similar case to the problem, mapping the case objects to the problem objects. The
case mapped objects are copied to a new case. If this new case successfully maps all the problem objects, then the adaptation process ends. Otherwise it selects the retrieved case that best complements the new case (in relation to the problem), and uses it to get the missing parts. This process continues while there are unmapped objects in the problem definition. Note that, if there are objects in the used case that are not in the problem, they can be transferred to the new case, generating new objects. The best complementary cases composition starts by matching each retrieved case to the problem, yielding a mapping between the case objects and the problem objects. This is used to determine the degree of problem coverage of each case, after which several sets of cases are constructed. These sets are based on the combined coverage of the problem, with the goal of finding sets of cases that globally map all the problem objects. The best matching set is then used to generate a new case.
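The search for complementary cases can be approximated by a greedy coverage loop, sketched below; the real mappings must also respect the UML structural constraints, which this illustration ignores, and the case names and coverage sets are invented:

```python
# Greedy sketch of complementary-case composition: repeatedly pick the
# retrieved case whose mapping covers the most still-unmapped query objects.
# Mappings are reduced here to plain sets of query-object names.

def compose_by_coverage(query_objects, case_mappings):
    """case_mappings: {case_id: set of query objects that case can map}.
    Returns (chosen case ids in order, query objects left uncovered)."""
    uncovered = set(query_objects)
    chosen = []
    while uncovered:
        # Pick the case that adds the most coverage of the remaining objects.
        best = max(case_mappings,
                   key=lambda c: len(case_mappings[c] & uncovered),
                   default=None)
        if best is None or not (case_mappings[best] & uncovered):
            break  # no case can cover the remaining objects
        chosen.append(best)
        uncovered -= case_mappings[best]
    return chosen, uncovered

# Hypothetical retrieved cases and the query objects each one can map.
cases = {
    "school-db": {"Teacher", "Student"},
    "timetabling": {"Lecture", "Timetable"},
    "enrolment": {"Student"},
}
chosen, missing = compose_by_coverage({"Teacher", "Student", "Lecture"}, cases)
```

Here `compose_by_coverage` returns the chosen cases in order, plus any query objects that no case could map; the composed pieces are what the verification process later has to check for inconsistencies.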
4 Solution Verification
The main idea of verification in REBUILDER is to use the available knowledge sources to verify class diagrams. These sources are: design cases, WordNet, verification cases, and the designer. Not only diagrams generated by REBUILDER can be verified; designer-created ones can also be checked. The designer only has to select the diagram to be checked and the verification command to perform it. The designer can also select an option in the settings menu to automatically verify every solution generated by REBUILDER. There are five different types of verification: package, object name, relation, attribute, and method. Verifying an object consists in determining its validity (true or false), that is, whether the object is correct or whether it is invalid or incoherent with the diagram. If correct, the object stays in the diagram; otherwise it can be deleted from the diagram. One important implementation aspect is that verification cases are stored locally in the designer's client. Thus, each designer has her/his own library of verification cases, which makes the system personalized. This is very important since each software designer has her/his own way of modelling systems, making verification a personalized task.

4.1 Package Verification
Package verification checks a class diagram starting with the diagram relations, then checking each object's attributes and methods, and finally checking the sub-packages by recursively calling the package verification functionality.

4.2 Name Verification
Name checking applies only to packages, classes and interfaces, and is performed when REBUILDER needs to assign a synset to an object. REBUILDER's reasoning mechanisms depend on the correct assignment of synsets to objects, which
makes this verification very important. REBUILDER makes a morphological and compositional analysis of the object's name, trying to match it to WordNet words. Once it finds a match, it can easily get the list of possible synsets for the given name. Then one of two things happens: the user selects the correct synset, or REBUILDER selects it using a word sense disambiguation method (it is up to the user to decide which).

4.3 Relation Verification
Relation checking is based on WordNet, on the design cases, and on relation verification cases, which are cases describing success and failure situations in checking a relation's validity. These cases are described by: Relation Type {Association, Generalization, Realization, Dependency}, Multiplicity {1-1, 1-N, N-N}, Source Object Name, Source Object Synset, Destination Object Name, Destination Object Synset, and Outcome {Success, Failure}. Two relation verification cases c1 and c2 match if:

RelationType(c1) = RelationType(c2) ∧ Multiplicity(c1) = Multiplicity(c2) ∧
SourceObject(c1) ≡ SourceObject(c2) ∧ DestObject(c1) ≡ DestObject(c2)   (1)
Where RelationType(c), Multiplicity(c), SourceObject(c) and DestObject(c) are, respectively, the relation type, multiplicity, source object, and destination object of verification case c. Two objects o1 and o2 are said to be equivalent if:

o1 ≡ o2 ⇔ (Synset(o1) ≠ ∅ ∧ Synset(o2) ≠ ∅ ∧ Synset(o1) = Synset(o2)) ∨
((Synset(o1) = ∅ ∨ Synset(o2) = ∅) ∧ Name(o1) = Name(o2))   (2)
Where Synset(o) gives the synset of object o and Name(o) yields the name of object o. All the knowledge sources are used for validating the relation being inspected. The order of search is: relation verification cases, WordNet, design cases, and the designer. The verification algorithm for relations is detailed in Figure 3. A WordNet equivalent relation is a relation between two synsets in which one of them is the source object synset and the other is the destination object synset. A design case comprises an equivalent relation if it has two objects connected by a relation similar to the one being investigated, with equivalent source and destination objects. Retrieval of verification cases is based on two steps: first on the relation type, and then on the source object name. If more than one equivalent case exists, the outcome supported by more cases is chosen as the correct one; in case of a draw, the system retrieves the newest case.

4.4 Attribute Verification
Attribute checking is based on WordNet, on the design cases, and on attribute verification cases, which are cases describing success and failure situations in checking an attribute's validity. These cases have the following description: Object Name, Object Synset, Attribute Name, and Outcome {Success, Failure}. Two
Search for an equivalent verification case in the library.
    If found and outcome is Success Then consider the relation valid and exit.
    If found and outcome is Failure Then consider the relation invalid, delete it from the diagram, and exit.
    If not found Then continue.
Search for an equivalent relation in WordNet.
    If found Then consider the relation valid, add a new successful verification case, and exit.
    Else continue.
Search for an equivalent relation in the design cases.
    If found Then consider the relation valid, add a new successful verification case, and exit.
    Else continue.
Ask the user about the relation's validity.
    If the user considers the relation valid Then consider it valid, add a new successful verification case, and exit.
    Else consider the relation invalid, add a new failure verification case, and exit.

Fig. 3. The relation verification algorithm.
attribute verification cases c1 and c2 match if: the objects' names and synsets and the attributes' names are the same, or one of the synsets is null and the objects' names and the attributes' names are the same. As in relation verification, all the knowledge sources are used for validating the attribute being inspected. The order of search and the algorithm used are the same as in relation verification (adapted to the attribute situation). A WordNet equivalent attribute is represented by a substance-of, member-of or part-of relation between the synset of the object being inspected and every possible synset of the attribute's name. A design case comprises a similar attribute if there is an object with the same synset and name comprising an attribute with the same name as the attribute being inspected. Retrieval of verification cases is based on two steps: first on the object name and then on the object synset.

4.5 Method Verification
Method verification is similar to attribute verification, with the exception that WordNet is not used as a knowledge source; it is replaced by a heuristic: if the method name contains a word that is an attribute name or a neighbor class name, then the method is considered valid. Method verification cases describe success and failure situations in checking a method's validity. These cases have the following description: Object Name, Object Synset, Method Name, and Outcome {Success, Failure}. Two method verification cases c1 and c2 match if: the objects' names and synsets and the methods' names are the same, or one of the synsets is null and the objects' names and the methods' names are the same. As in relation verification, all the knowledge sources are used for validating the method being inspected. The order of search and the algorithm used are the same as in relation verification (adapted to the method situation, and with WordNet replaced by the heuristic referred to above). A design case comprises a similar method if there is an object with the same synset and name comprising a method with the same name as the method being inspected. Retrieval of method verification cases is performed in the same way as for attribute verification cases.

4.6 Example
This subsection illustrates the verification process described above with an example. Suppose that the designer uses the diagram presented in Figure 2 as a problem for the design composition module, resulting in a new diagram. Part of this diagram is presented in Figure 4, and it shows some inconsistencies: for instance, the generalization between Teacher and Timetable, the method addStudent in Timetable, and the attribute studentID in class Lecture.
Fig. 4. Part of a class diagram resulting from a design composition operation.
The verification process starts by checking the package containing the new diagram. For illustration purposes we describe the process only for the diagram elements considered invalid, i.e., the ones mentioned above. When the verification process reaches the generalization between Timetable and Teacher, the system first searches the verification cases for an equivalent case, which does not exist. Then it searches WordNet for an is-a relation between the Timetable synset and the Teacher synset, which does not exist either. It then searches the design cases, which do not have any similar relation, and finally asks the designer to validate the relation; s/he answers that it is invalid. The system then adds a new relation verification case [Generalization, 1-1, Teacher, 108756476¹, Timetable, 105441050, Failure] and deletes the relation. The next time the system finds an identical relation it will consider it invalid and delete it from the design. Now suppose it is checking the studentID attribute of class Lecture: the attribute verification process searches the case library of attribute verification cases and finds an equivalent case [Lecture, 106035418, studentID, Failure]. The system considers the attribute invalid and deletes it from the diagram. Finally the system checks the method addStudent from class Timetable: the method checking heuristic does not apply, and the design cases have no similar method, so the system asks the designer, who considers the method invalid. The method verification algorithm deletes the method and adds a new method verification case [Timetable, 105441050, addStudent, Failure]. We only showed failure situations, but success situations are more frequent than failures.

¹ Synsets are identified by nine-digit numbers.
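The walkthrough above follows the search order of the relation verification algorithm (Figure 3). A minimal sketch, with the knowledge sources stubbed as callables and verification cases simplified to tuples (both our assumptions; deletion of invalid relations from the diagram is omitted):

```python
# Sketch of the relation-verification cascade: verification cases first,
# then WordNet, then the design cases, and the designer as a last resort.

def verify_relation(relation, verification_cases, in_wordnet, in_design_cases, ask_user):
    """Return True if the relation is considered valid, learning a new
    verification case whenever an outside source had to be consulted."""
    # 1. An equivalent verification case decides immediately.
    for case, outcome in verification_cases:
        if case == relation:
            return outcome
    # 2. WordNet, then the design cases: either one validates the relation.
    if in_wordnet(relation) or in_design_cases(relation):
        verification_cases.append((relation, True))  # learn a success case
        return True
    # 3. Ask the designer, and learn from the answer.
    valid = ask_user(relation)
    verification_cases.append((relation, valid))
    return valid

library = []
rel = ("Generalization", "1-1", "Teacher", "Timetable")
questions = {"count": 0}

def designer_says_invalid(relation):
    questions["count"] += 1
    return False

# First check: no source knows the relation, so the designer is asked.
first = verify_relation(rel, library, lambda r: False, lambda r: False,
                        designer_says_invalid)
# Second check: the learned failure case answers without asking again.
second = verify_relation(rel, library, lambda r: False, lambda r: False,
                         designer_says_invalid)
```

After the first call the failure is stored as a verification case, so the second call decides without consulting the designer, which is exactly the learning effect the example illustrates.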
5 Experiments
This section describes the experimental work performed to evaluate the verification mechanism. We tested verification with analogy and with design composition. Within design composition we used both strategies: best case composition and best complementary cases composition (also called best set composition).

5.1 Setup
We used a knowledge base with 60 cases describing software designs. These cases come from four different domains: banking information systems, health information systems, educational institution information systems, and store information systems (grocery stores, video stores, and others). Each design comprises a package with 5 to 20 objects (the total number of objects in the knowledge base is 586). Each object has up to 20 attributes and up to 20 methods. These designs are defined at a conceptual level, so each design is at an early stage of development, having only the fundamental objects. Twenty-five problems were defined, each one having one package with several objects (between 3 and 5), related to each other by UML associations or generalizations. These problems are distributed across the four case domains as follows: banking information systems (6), health information systems (7), educational institution information systems (3), and store information systems (9). For each combination of solution generation method (Analogy, Design Composition - Best Case, and Design Composition - Best Set) and type of object considered (relation, attribute, or method) we generated a run with the same 25 test problems and gathered the data, which was then analyzed by a software engineer.

5.2 Verification in Analogy
Figure 5 presents the cumulative number of objects considered wrong by the user. The X-axis represents the 25 problems used, in this case the presentation order of these problems is considered a progression, since the system learns new
[Figure: cumulative number of wrong objects (0 to 7) over the 25 test problems, for relations and methods, with and without verification.]

Fig. 5. Effects of the verification process in the Analogy generated solutions.
verification cases from each problem it solves. We consider only relations, attributes and methods. For the Analogy solutions, no attribute objects were considered wrong by the designers. Notice that solutions generated by Analogy contain few wrong objects compared to the ones created by Design Composition. This happens because Design Composition combines parts of different cases, thus generating some inconsistencies. The verification mechanism did not reduce the number of wrong methods, but it slightly reduced the number of wrong relations.

5.3 Verification in Design Composition
For verification in Design Composition, Figures 6 and 7 show the experimental results. These results show a major improvement in solution quality, in particular for methods and relations. The system can effectively reduce the number of wrong objects in generated solutions, presenting better solutions to the user. Table 1 presents the total number of verification cases generated and stored by the verification module, and also the number of questions asked to the user. Notice the difference between the number of verification cases and the number of questions asked to the user: most of the verification cases come from the design case library (from the design cases) and from WordNet. Since the system learns these new verification cases, the number of questions asked to the user tends to decline, especially if the designer keeps working in the same domain.
[Figure: cumulative number of wrong objects (0 to 140) over the 25 test problems, for relations, methods and attributes, with and without verification.]

Fig. 6. Effects of the verification process in the Design Composition (Best Case) generated solutions.
[Figure: cumulative number of wrong objects (0 to 140) over the 25 test problems, for relations, methods and attributes, with and without verification.]

Fig. 7. Effects of the verification process in the Design Composition (Best Set) generated solutions.
Table 1. The total number of verification cases and questions asked to the user.

                                         Analogy   Best Case   Best Set
No. of Verification Cases (Relations)        244         451        239
No. of Verification Cases (Methods)          511        1210        781
No. of Verification Cases (Attributes)       298         646        430
No. of Questions (Relations)                  68         244        145
No. of Questions (Methods)                     0         106        104
No. of Questions (Attributes)                  3         183        185

6 Conclusions and Future Work
This paper presents an approach to verification of design solutions based on CBR. The main contribution of this work is the use of cases for verification combined with WordNet lexical resource. The experimental work indicates that this approach improves the performance of the adaptation mechanism of REBUILDER, making it more robust. Our approach has the advantage of combining system learning and evolution, with design personalization. As a tool in a CASE environment, the verification module enables an intelligent and personalized assistant, checking possible inconsistencies and errors in diagrams. This tool is especially useful when generated designs are big, escaping from the capabilities of human designers. Another advantage of using CBR, is that the system does not need a domain model to perform verification, at the beginning it might act as a ’dumb’ assistant but it has the willing and the ability to learn from it’s mistakes. There are some limitations to our approach, especially when discussing methodological issues. Our approach is intended to be used in an individual level since it is implemented in a CASE tool. Our main argument for this choice, is that each designer has a different way to model the problem being addressed. Starting from this assumption we decided that the case base of verification cases would be local to each client (or designer). When dealing with team development our assumption can have some limitations. Though we defend our point of view by saying that, from our experience, most of the times team development at design level is performed by one or two members, which then discuss their designs with the rest of the team, but only one or two project members are responsible for using the CASE tool for designing the system model. Nevertheless, there are at least two ways of addressing this problem. 
First, centralizing all the verification cases in one case base accessible to every team member, which would imply the maintenance of such a case base; or imposing project-wide agreements, which are always difficult to reach. Both solutions have open problems of their own. Future work will go in the direction of checking the validity of classes and interfaces, though this is a more difficult goal. Future work will also involve improving word sense disambiguation, which is also crucial to the classification of software objects.
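The model-free, learning character of the verification module described above can be illustrated with a minimal CBR loop. This is purely an illustrative sketch, not REBUILDER's actual implementation: all names are hypothetical, and a simple feature-overlap measure stands in for the system's WordNet-based similarity between software objects.

```python
from dataclasses import dataclass

@dataclass
class VerificationCase:
    """A stored inconsistency: the design context in which it arose
    and the correction the designer chose (names are hypothetical)."""
    context: frozenset   # features describing the suspect design object
    correction: str      # revision the designer applied

class VerificationCaseBase:
    """Minimal CBR cycle: retrieve the most similar past error,
    reuse its correction, and retain the designer's decision."""

    def __init__(self):
        self.cases = []

    def retrieve(self, context):
        # Jaccard overlap as a stand-in for a WordNet-based
        # similarity between design objects.
        def sim(case):
            union = len(case.context | context)
            return len(case.context & context) / union if union else 0.0
        return max(self.cases, key=sim, default=None)

    def retain(self, context, correction):
        # Learning step: the case base starts empty ('dumb' assistant)
        # and improves as the designer's corrections accumulate locally.
        self.cases.append(VerificationCase(frozenset(context), correction))

cb = VerificationCaseBase()
cb.retain({"relation", "association", "Order", "Client"},
          "reverse association direction")
best = cb.retrieve(frozenset({"relation", "association", "Invoice", "Client"}))
print(best.correction)  # -> reverse association direction
```

Keeping one such case base per designer mirrors the local, personalized setup argued for above; the centralized alternative would simply share a single `VerificationCaseBase` instance across the team.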
Acknowledgments. This work was supported by POSI - Programa Operacional Sociedade de Informação of Fundação Portuguesa para a Ciência e Tecnologia and European Union FEDER, under contract POSI / 33399 / SRI / 2000, and by program PRAXIS XXI. We would like to thank the reviewers for their helpful comments on this paper.
References

1. Agnar Aamodt and Enric Plaza, Case-based reasoning: Foundational issues, methodological variations, and system approaches, AI Communications 7 (1994), no. 1, 39-59.
2. Barry Boehm, A spiral model of software development and enhancement, IEEE Press, 1988.
3. Ashok Goel, Sambasiva Bhatta, and Eleni Stroulia, Kritik: An early case-based design system, Issues and Applications of Case-Based Reasoning in Design (Mahwah, NJ) (Mary Lou Maher and Pearl Pu, eds.), Lawrence Erlbaum Associates, 1997, pp. 87-132.
4. Paulo Gomes, Carlos Bento, and Pedro Gago, Learning to verify design solutions from failure knowledge, Artificial Intelligence for Engineering Design, Analysis and Manufacturing 12 (1998), no. 2, 107-115.
5. Paulo Gomes, Francisco C. Pereira, Paulo Paiva, Nuno Seco, Paulo Carreiro, José L. Ferreira, and Carlos Bento, Case retrieval of software designs using WordNet, European Conference on Artificial Intelligence (ECAI'02) (Lyon, France) (F. van Harmelen, ed.), IOS Press, Amsterdam, 2002.
6. Paulo Gomes, Francisco C. Pereira, Paulo Paiva, Nuno Seco, Paulo Carreiro, José L. Ferreira, and Carlos Bento, Combining case-based reasoning and analogical reasoning in software design, Proceedings of the 13th Irish Conference on Artificial Intelligence and Cognitive Science (AICS'02) (Limerick, Ireland), Springer-Verlag, September 2002.
7. Nancy Ide and Jean Veronis, Introduction to the special issue on word sense disambiguation: The state of the art, Computational Linguistics 24 (1998), no. 1, 1-40.
8. Janet Kolodner, Case-Based Reasoning, Morgan Kaufmann, 1993.
9. Mary Lou Maher, CASECAD and CADSYN, Issues and Applications of Case-Based Reasoning in Design (Mahwah, NJ) (Mary Lou Maher and Pearl Pu, eds.), Lawrence Erlbaum Associates, 1997, pp. 161-185.
10. George Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller, Introduction to WordNet: An on-line lexical database, International Journal of Lexicography 3 (1990), no. 4, 235-244.
11. Pearl Pu and Lisa Purvis, Formalizing the adaptation process for case-based design, Issues and Applications of Case-Based Reasoning in Design (Mahwah, NJ) (Mary Lou Maher and Pearl Pu, eds.), Lawrence Erlbaum Associates, 1997, pp. 221-260.
12. Angi Voss, Case design specialists in FABEL, Issues and Applications of Case-Based Reasoning in Design (Mahwah, NJ) (Mary Lou Maher and Pearl Pu, eds.), Lawrence Erlbaum Associates, 1997, pp. 301-336.