Animated Requirements Walkthroughs Based on Business Scenarios
Georg Kösters, Bernd-Uwe Pagel, Thomas de Ridder, and Mario Winter
Department of Computer Science, University of Hagen
{georg.koesters | bernd-uwe.pagel | thomas.deridder | mario.winter}@fernuni-hagen.de

Abstract. Detecting requirements errors late in a project is extremely costly, because the necessary changes mostly affect every development stage. It is therefore advisable to concentrate effort on gathering and validating requirements. This particularly encompasses rigorous requirements testing before document delivery. In this paper we focus on the most critical portion of the transition from a user-oriented requirements description to an object-oriented model, namely the mapping of use cases onto classes and operations of an object-oriented model, including its validation. We present a way to link use cases with the underlying class model, propose business scenarios as an appropriate means to test the behaviour of the integrated model, and sketch a corresponding CASE tool which supports animated walkthroughs of use cases down to the operations, keeps track of detected errors and anomalies, and computes suitable coverage metrics.
1. Introduction
Achieving user acceptance has become an essential prerequisite for the success of any software development project. The best way to satisfy users is to give them 'what they need', i.e. to meet the users' expectations of the system to the greatest possible extent. This of course assumes a precise, consistent, and complete specification of the system's externally observable behaviour, the so-called requirements. Mainly for managerial reasons, many large projects still use a traditional staged rather than an iterative development approach, also in combination with modern object-oriented technology. Staged process models assume the users are intimately involved only in the requirements engineering and acceptance testing phases, i.e. at the beginning and at the very end of the project. Since the discovery and correction of wrong, missing, or ambiguous requirements is almost impossible without the help of the users (the analyst may check the consistency of the specification, but only the user can validate that his or her expectations of the system are actually met), potential errors in the requirements documents are often not detected before acceptance testing. Correcting requirements errors at that time is extremely costly, because the necessary changes usually affect every development stage, so that projects run a great risk of failing at the end. It is therefore advisable to concentrate effort on gathering and validating requirements. Besides the use of an appropriate requirements engineering method and close cooperation between analyst and user during the whole definition process, this particularly encompasses rigorous requirements testing before document delivery. As Gerrard claims, "testing of requirements is potentially the most valuable testing we can do" [Ger94].
Testing methods applicable to requirements documents are formal reviews, walkthroughs, and inspections, if we leave aside such niche approaches as, for instance, systematic prototyping or the use of an executable specification language. Since the original review technique was introduced in the early seventies [Wei71], several combinations, variations, and refinements of it have been proposed. Among them are Fagan inspections [Fag86], active design reviews [PW85], the N-fold inspection method [SMT92], and phased inspections [KM93], which claim either to improve the detected fault rate, to ensure more rigour, to make better use of the limited (human) resources, or to permit computer support. When applied to requirements documents, however, all of them face the significant difficulty of a missing source document: the work product cannot be tested against a concise specification, but has to meet a rather vague picture of the required system as articulated by the users. In order to compensate for this problem, requirements documents should be informal, intuitive, and understandable by non-computer specialists. Otherwise it is left to chance whether missing, ambiguous, or wrong parts are detected.
Requirements documents, however, must not only assure that "we are building the right product" [Boe79], but must also be precise enough to guarantee that "we are building the product right" [Boe79]. These two divergent objectives of requirements documents hold the immanent danger of under- or over-specification. Projects perpetually show that users prefer high-level descriptions of the external behaviour of the system envisaged, say use cases of the relevant business scenarios [Jac92], complemented by intelligible descriptions of the problem domain's structure and data. These kinds of documents are elaborated in close cooperation with the users and are subject to continuous reviews with intensive user participation. However, they are neither precise enough nor do they allow a seamless transition into object-oriented design. Consequently, use cases, repository, and data model are often rendered into a well-balanced object-oriented requirements model by establishing classes, mapping use cases onto operations, and using inheritance to eliminate redundancy. Obviously, this kind of static model is much better suited as a design skeleton than a collection of business scenario descriptions. On the other hand, only skilled persons are capable of deriving such an integrated model and checking it against its user-oriented counterparts or reviewing it for internal consistency as well as general quality criteria. Users are not able or willing to participate on a level where global behaviour is encapsulated in class and operation descriptions and spread all over the model. In practice, we thus observe the tendency that fully-fledged object-oriented models are widely withheld from the users. In this presentation we focus on the most critical portion of the transition from a user-oriented requirements description to an object-oriented model, namely the mapping of use cases (or similar dynamic models) onto static classes and operations, including its validation.
Once this mapping is performed, knowledge about how and why each use case is reflected by which operations in the class model can usually be found nowhere but in the heads of the responsible analysts, if at all. As long as the transition is not comprehensibly documented, it is extremely difficult to keep things consistent or to check whether the class model really 'works', i.e. whether the specified operations actually implement the collected use cases. In addition, common review methods are not directly applicable, since they assume a work product in written form. Our approach to this topic consists of four parts, each of which is discussed in a section below. Firstly, we present a way to link use cases with the underlying class model. Secondly, we propose business scenarios as an appropriate means to test the behaviour of the integrated model. In section 4 we provide some modelling and reviewing methodology. Finally, we briefly describe the validation component of a corresponding CASE tool which supports animated walkthroughs of use cases down to the operations, keeps track of detected errors and anomalies, and computes suitable coverage metrics.

2. Coupling Use Cases and Class Models
Introduced by Jacobson in 1992 [Jac92], use cases have undoubtedly become one of the favourite requirements modelling concepts in practice. They provide an attractive means to structure requirements documents along the intended usage of the system. The lack of any theory for use cases in the original publication, however, caused some confusion about what to gather and how to structure it, so that "many people are using whatever they think of as use cases to good effect" [Coc95]. Cockburn reports over 18 different meanings of 'use case' he has personally encountered and classifies them along the four dimensions purpose, contents, plurality, and structure. He also introduces a theory of use cases based on goals and goal failures which provides a starting point for our work. In this theory use cases are assumed to build requirements (purpose), to be written in consistent prose (content), to collect multiple scenarios each (plurality), and to be structured in a semi-formal way (structure), which comes close to our understanding of use cases [Coc95]. Following Cockburn [Coc95] and Jacobson [Jac92], we regard a use case as "a collection of possible sequences of transactions between the system and external actors, characterized by the goal the primary actor has toward the system's declared responsibilities". We propose a slightly different definition of use case steps which embodies actions outside the system in order to capture the complete workflow. A use case step is either (i) an interaction with the system, (ii) a responsibility to be carried out by some human or an external system (the primary or a secondary actor), or (iii) a referenced
use case, each of which is equipped with its own (sub-)goal. With regard to interactions we again concur with Cockburn's definition: "Interactions start at the triggering event and end when the goal is delivered or abandoned [by the system]". A scenario is "a sequence of [use case steps] happening under certain conditions, to achieve the primary actor's goal, and having a particular result [(success, failure)] with respect to that goal" [Coc95]. Every use case and use case step is documented in a semiformal way that is understandable for the user or domain expert, respectively. The description must contain the conditions under which the action occurs, the (sub-)goal to be delivered, and the real outcome. In order to control scenario explosion, each use case is reflected by a rooted directed use case graph. Each scenario of the use case is reflected by a path through the use case graph beginning at the root, called the head node, and ending as soon as the primary goal is delivered or its delivery has become impossible. The nodes on this path relate to the use case steps. Hence, step sequences that are common to different scenarios belong to the same subpath. In this way, redundancy is eliminated and the use cases become better structured and less complex. A use case thus subsumes all scenarios which might (not must) occur; in other words, a scenario is similar to a control flow in the use case graph. The modelling and reviewing approach proposed in this paper is constantly illustrated by the model of a system supporting the familiar conference submission and review process. Example 1 introduces this problem domain and presents a portion of the corresponding use case model.

Example 1. The role of a program chair of a technical conference in academia comprises several tasks, for instance setting deadlines, preparing the call for papers, collecting submissions, and organizing the review process.
Suppose we aim at supporting these activities by a comfortable information system called the Conference Chair's Information System (CCIS), the usage of which is determined by a use case model. Figure 1 gives the diagram and the textual description of the main course of the use case "New Submission", reflecting the submission collecting task (notation adapted from [Jac92] and [Coc95]).
[Diagram: actors Program Chair (primary) and Secretary (secondary) connected to the use case "New Submission: Handle a new submission and notify author".]

Name: New Submission
Primary Actor: Program Chair
Secondary Actors: Secretary
Goal: A submission is received and filed in, with a notification sent to the author
Main Course: After receiving the submission, the chair checks the conference's timetable for the deadline. The timely submission is then filed in for the author and a notification is prepared by the secretary.

Figure 1. Use case "New Submission": diagram and main course
Taking a deeper look at the single steps of the use case "New Submission" and considering alternative courses, for instance the refusal of a belated submission, leads us to the use case graph depicted in figure 2. The arrow starting at the primary actor icon points to the root node of the use case graph, here the node corresponding to the initial use case step "Receive Submission". Since this step is carried out by the actor himself (a human), the node symbol is left unfilled. By contrast, node symbols corresponding to interactions like "Take Submission" are shaded, while those corresponding to referenced use cases like "Register Person" are printed double-framed. Whenever a secondary actor
is involved in a use case step, an arrow connects the respective actor icon and node symbol (e.g. actor "Secretary" and step "Send Confirmation"). Arrows pointing from one node symbol to another indicate that the corresponding use case steps may follow each other. Arrows pointing from a node symbol to a circle denote that there exists a scenario ending in the corresponding use case step. If the circle encloses what we call the goal bullet, every such scenario delivers the (main) goal of the use case; otherwise, it is goal abandoning.
[Figure 2 shows the use case graph with the following steps and (sub-)goals:
– Receive Submission (Program Chair): Chair gets a submission
– Check Schedule: Chair knows if it is a timely submission
– Take Submission: Submission is entered into the system
– Register Person (referenced use case): Register author
– Send Confirmation (Secretary): Receipt is sent to the author
– Refuse Submission: Because of its lateness the chair decides to refuse the submission
– Send Refusal (Secretary): Refusal note is sent to the author]

Figure 2. Use Case Graph "New Submission"
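To make the scenario-as-path idea concrete, the following Python sketch (our own illustration, not part of the paper's tool; the step names follow the "New Submission" example) encodes a use case graph as a successor map and enumerates its scenarios as paths from the head node to a goal-delivering or goal-abandoning end:

```python
# Hypothetical encoding of a use case graph: steps are nodes, arrows are
# allowed successions, and every scenario is a path from the head node to a
# goal-delivering or goal-abandoning end.

GOAL, ABANDON = "goal delivered", "goal abandoned"

# successors of each step; an entry GOAL/ABANDON marks a possible scenario end
USE_CASE = {
    "Receive Submission": ["Check Schedule"],
    "Check Schedule": ["Take Submission", "Refuse Submission"],
    "Take Submission": ["Register Person", "Send Confirmation"],
    "Register Person": ["Take Submission"],
    "Send Confirmation": [GOAL],
    "Refuse Submission": ["Send Refusal"],
    "Send Refusal": [ABANDON],
}

def scenarios(graph, head, max_len=8):
    """Enumerate scenarios: root paths ending in a goal delivery or
    abandonment; max_len bounds cycles (e.g. a repeated Take Submission)."""
    def walk(path):
        for nxt in graph[path[-1]]:
            if nxt in (GOAL, ABANDON):
                yield (path, nxt)
            elif len(path) < max_len:
                yield from walk(path + [nxt])
    yield from walk([head])

for steps, outcome in scenarios(USE_CASE, "Receive Submission", max_len=6):
    print(" -> ".join(steps), "|", outcome)
```

With a path bound of 6 this yields the main course, the detour through "Register Person", and the goal-abandoning refusal scenario, mirroring the observation that a use case subsumes all scenarios which might occur.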
Note that the proposed graphical representation of a use case graph is chosen for explanation purposes only and is not to be misunderstood as a well-defined graphical notation. In general, however, it follows the UML representation style [BJR97].

Now we bring up the corresponding class model. We identify interactions as a first natural connection point between the class model and our use case model, since
– processing each interaction means invoking an operation specified in the class model, and
– in order to be successful, this operation requires certain object constellations (object models) derived from the class model which must match the conditions and outcome of the interaction.
Accordingly, we augment the use case model as follows in order to show if and how both models fit together:
1. Each interaction is mapped onto an operation in the class model. The operation and the class in which it is defined are called the interaction's root operation and root class, respectively.
2. With the aid of the repository, collections of classes are determined for every interaction which correspond to the textual descriptions of the step's condition and outcome. The collections are called prescope and postscope by analogy to pre- and postconditions. More precisely, the prescope contains all classes of which instances may be required to fulfil the interaction's goal, while the
postscope contains all classes of which instances may change their state due to the interaction. We abstain from attaching complete object models and concise object states here in order to keep things feasible. Example 2 illustrates this linking procedure.

Example 2. We continue elaborating the use case "New Submission" in the context of the CCIS's domain class model in figure 3. The class model's notation concurs with version 1.0 of the Unified Modelling Language (UML) [BJR97].
[Figure 3 shows the CCIS domain class model in UML notation: classes Conference (attributes such as Name, BeginAt, EndAt, SubmissionsDue; operations such as prepareCFP, start), ProgCommittee (operation registerSubmission; chair, member, and referee associations to Person), Person (LastName, FirstName, Title, Affiliation), Submission (Code, Title, Keywords, Abstract, ReceivedAt, Accepted; operations accept, reject) with subclasses Paper and Tutorial (Duration, MaxParticipants), and Review (Score, Comment; operations startSingleReview, collectScores, remindReferee), connected by associations such as participants, authorSubmission, submissions, and reviews.]

Figure 3. Domain Class Model
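The linking of interactions to root classes, root operations, and scopes can be captured in a simple data structure. The following Python sketch is our own illustration under the paper's terminology; the class and operation names are taken from the CCIS example, and naked_interactions is a hypothetical helper that flags interactions whose root operation is not (yet) specified in the class model:

```python
# Hypothetical sketch of the augmented use case model: each interaction is
# linked to a root class/operation and annotated with a prescope (classes
# whose instances may be required) and a postscope (classes whose instances
# may change state).
from dataclasses import dataclass, field

@dataclass
class Interaction:
    name: str
    root_class: str
    root_operation: str
    prescope: set = field(default_factory=set)
    postscope: set = field(default_factory=set)

# operations actually specified in the domain class model (excerpt)
CLASS_MODEL = {
    "Conference": {"SubmissionsDue", "prepareCFP", "start"},
    "ProgCommittee": {"registerSubmission"},
    "Submission": {"accept", "reject"},
}

INTERACTIONS = [
    Interaction("Check Schedule", "Conference", "SubmissionsDue",
                prescope={"Conference"}),
    Interaction("Take Submission", "ProgCommittee", "registerSubmission",
                prescope={"ProgCommittee", "Person", "Submission"},
                postscope={"Submission"}),
    Interaction("Send Confirmation", "Submission", "accept",
                prescope={"Submission", "Person"}),
]

def naked_interactions(interactions, class_model):
    """Interactions whose root operation is missing from the class model."""
    return [i.name for i in interactions
            if i.root_operation not in class_model.get(i.root_class, set())]

print(naked_interactions(INTERACTIONS, CLASS_MODEL))  # [] if fully linked
```

Such a check is the kind of consistency validation a tool can automate once the mapping is documented rather than kept in the analysts' heads.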
Figure 4 recalls the use case graph "New Submission" (cf. example 1), now augmented with pre- and postscopes. Each class in a prescope or postscope of an interaction is represented by a compressed class symbol glued to the top or bottom, respectively, of the interaction symbol. The root operations and classes of all interactions are given in the table on the right of the drawing. If we look at the prescope of the interaction "Send Confirmation", we recognize that instances of the classes Submission and Person are required to achieve the interaction's goal, and that the intended root operation to be invoked is accept of class Submission. In other words, confirming a submission by sending a note to the author is done by calling the accept operation of an existing Submission object. Moreover, the empty postscope of the interaction "Send Confirmation" tells us that the found object constellation is preserved in any case, i.e. none of the existing objects changes its state.

3. Tracking Test Scenarios
Major goals of use case graphs and class models are to avoid redundancy and to respect the encapsulation principle. This leads to a rather static and in some sense normalized view of the intended system. Even when these partial models are connected to each other as described in this paper, the
[Footnote 1: Our decision to use UML stems from the fact that it unifies the three most popular object-oriented modelling approaches, namely Booch's method [Boo94], Rumbaugh's OMT [RBP+91], and Jacobson's Objectory [Jac92], and will presumably become an OMG standard in the near future. Though UML claims to be a general modelling language for all development stages, it leaves the modelling process as such open, and its notation is often far too detailed for analysis purposes.]
[Figure 4 repeats the use case graph "New Submission", each interaction annotated with its pre- and postscopes, e.g. prescope {Submission, Person} and an empty postscope for "Send Confirmation". The root operations are tabulated as follows:

Interaction        Root Class     Root Operation
Check Schedule     Conference     SubmissionsDue
Take Submission    ProgCommittee  registerSubmission
Send Confirmation  Submission     accept]

Figure 4. Linking Use Case "New Submission" and Class Model
resulting combined domain model is difficult to understand and validate in its entirety. A promising way to validate the dynamic aspects of the system is therefore to go through relevant business scenarios: having a specific state of the system in mind, a certain use case is triggered and the scenario running in the model is compared to the desired one. Hence, scenarios are not just a valuable vehicle throughout the requirements gathering and modelling process (think, for instance, of the main course of a use case, obviously a routine scenario around which the use case is built), but can also be regarded as test cases of the (integrated) domain model. In this role a scenario evolves from an auxiliary feature of the domain model into an independent and separately documented testing component called a test scenario. A test scenario consists of a sequence of scenario steps, each of which refers to a step in a use case graph such that the corresponding sequence of referenced steps describes a legal scenario in this graph. In order to better describe the current state of the system after 'executing' the scenario, say the object constellation reached so far, each scenario step referring to an interaction provides, together with its explanatory text, the following information:
– The interaction's pre- and postscope are taken over, and every class contained in the scopes is supplemented by a cardinality constraint which states the expected number of class instances more precisely. Constraints are denoted like multiplicities for UML's associations; for instance, 0..1 means zero or one instance of a class, 1..* means one or many instances, and * means zero, one, or many instances.
– An episode tree, episode for short, is attached to the scenario step which further elaborates the interaction's root operation.
Tracing the episode in preorder gives the sequence of operations called (as specified in the class model) when the interaction's root operation is triggered in the context described in the scenario step. It is up to the analyst or tester whether the modelling of an episode ends in elementary attribute accesses or stops in operations which are well understood, so that no further refinement is needed. Note that activating the same root operation with individual prescopes or parameter settings may result in totally different episodes. Example 3 illustrates the notion of test scenarios.

Example 3. Figure 5 represents a goal-delivering test scenario for the use case "New Submission" (cf. example 1). Consider the two scenario steps referring to the interaction "Take Submission". When it is first invoked by the program chair, it is assumed that the author of the submission at hand is not yet registered (the cardinality of Person is * and no object of class Person corresponding to the author exists in the system). In this case, the root operation registerSubmission is unable to enter the submission into the system, as the attached episode shows. Therefore, the use case "Register Person" is inserted first in order to create the needed Person instance. The subsequent scenario step reflects the second attempt to execute the interaction "Take Submission", now being successful and resulting in a completely different episode compared to the futile try. In the final scenario step an appropriate notification is sent to the author, and the test scenario ends with the main goal of the use case delivered.
[Figure 5 shows the test scenario as a path through the use case graph, each step annotated with cardinality-constrained scopes (e.g. Person *, then Person 1..* after "Register Person") and with episodes such as ProgCommittee>>registerSubmission calling Submission>>new, ProgCommittee>>submissions-attach, and Person>>author-attach.]

Figure 5. Test Scenario for Use Case "New Submission"
The proposed combination of use cases and class models allows test scenarios to be traced down to the level of standard operations. This makes it possible to check for consistency and also documents all situations in which an operation may be called and how it is intended to behave. However, this model extension is for analysts and professional testers only and is withheld from the users.
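Assuming a simple nested-tuple encoding (our own illustration; the operation names follow figure 5), the preorder tracing of an episode into an operation call sequence can be sketched as:

```python
# Hypothetical episode tree: the root is the interaction's root operation,
# children are the operations it invokes, and a preorder traversal yields
# the call sequence. Each node is (operation, [sub-episodes]).
EPISODE = ("ProgCommittee>>registerSubmission", [
    ("Submission>>new", []),
    ("ProgCommittee>>submissions-attach", []),
    ("Person>>author-attach", []),
])

def preorder(node):
    """Yield the operations of an episode in preorder (call order)."""
    op, children = node
    yield op
    for child in children:
        yield from preorder(child)

print(list(preorder(EPISODE)))
```

The same traversal is what an animator can use to step down an episode and highlight the corresponding message flow in the class model.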
[Footnote 2: The term episode has simultaneously been introduced by Regnell et al. [RAB96] for the external event flow of a system. While their approach regards the system as a black box, ours describes the behaviour of a domain class model in a white-box fashion.]
We close this section with a brief comparison of the major concepts of the domain model and the test model in table 1.
Abstraction Level  Domain Model    Test Model
High               Use Case        Test Scenario
Average            Use Case Step   Scenario Step
Low                Root Operation  Episode

Table 1. Comparison of Domain and Test Model
4. Modelling and Reviewing Methodology
Obviously, modelling and testing activities must be well suited to each other in order to be successful. Even though the main focus of this conference is on testing, we therefore have to include a few words about how combined models are constructed while explaining how they should be tested. In general, considering the divergent objectives of requirements documents and the numerous consistency rules, it is vital to apply a phased review process [KM93] which combines different review techniques and assembles teams in a problem-oriented manner. We recommend the following modelling and review process, consisting of four stages of falling abstraction to be included in the development process. Let us assume that the terms and definitions in the repository have already passed a review and that the use cases have been established and seem to reflect the problem domain more or less sufficiently, but have not been systematically tested yet. Maybe there exists a first, rather 'Entity-Relationship-like' domain class model skeleton, but scopes and operations have not been modelled at all. At this time we propose to conduct a series of walkthroughs of the use cases. A situation which holds the immanent danger of interface problems (since business processes are concerned and, besides the analysts, several user groups may be involved as primary and secondary actors) is best covered by walkthroughs. Having a concrete test scenario in mind, the use case graph is traced to see whether it behaves as expected and produces the right outcome. Wrong, missing, or ambiguous steps are logged (along with the preceding path and some comments), corrected, and reinspected where necessary. Clearly, the test scenario itself is recorded as well. Due to the absence of a complete class model, however, scopes and episodes cannot be considered yet. With regard to the question whether a use case graph is sufficiently tested or not, we can learn from structural testing.
Coverage criteria such as node, branch, and boundary-interior coverage of control flow graphs can easily be applied to use case graphs. Here, again, at least branch coverage should be achieved. As soon as the use cases have become stable, the analyst should concentrate on the structural parts of the domain class model, i.e. add missing classes, complete the inheritance hierarchies and part-of structures, fill in most of the class attributes, and assign prime operations. After that we recommend starting to derive the pre- and postscopes of the use case steps from the informal descriptions of the interactions. The analyst then retests and refines the existing test scenarios, supplementing overlooked ones where necessary. Whenever a 'naked' interaction is reached (one whose root operation is still unspecified), he checks whether the responsibility of the interaction at hand matches the specification of any existing operation. If such an operation is found, it is qualified as the root operation of the interaction. Otherwise, one of the classes in the prescope has to be complemented by a suitable operation. In all cases, the thread of execution of the root operation is determined next and written down as an episode assigned to the corresponding scenario step. In the process, further missing operations may be identified which gradually complete the class specifications. For the sake of more flexibility, the analyst may mark novel operations as 'under construction', so that he feels free
to continue working on the root operation level before going into greater detail. These modelling activities are re-iterated until the class model has become stable. Testing the complete domain model comprises three kinds of reviews:
1. Since many operations of the classes are born during episode modelling, i.e. while looking at the system in a purely functional way, it is reasonable to cast a deeper glance at the class model as a whole. Of essential interest are the overall balancing, i.e. the distribution of operations and attributes over the classes, and whether the class model is truly object-oriented. In order to achieve sufficient model coverage, all elements of the class model should be touched at least once during the review.
2. A sequence of inspections is kicked off now to check for completeness and the various consistency rules. Among them are, for example: there are no reference cycles in the use case model; each interaction has to be refined and each operation to be referenced by an episode (unused operations are suspicious); the specification of each operation must match its usages in the episodes; the prescope of each use case or scenario step must be a subset of the intersection of the arriving postscopes; and for each episode the signature of the root operation must satisfy the corresponding prescope. The inspections should cover all episodes of the test scenarios and all operations of the domain class model.
3. If consistency is guaranteed, the user-oriented test scenarios should be traced a second time in a final walkthrough, this time concentrating on the interactions and in particular descending into the episodes.
The review teams are individually composed with respect to the experience and knowledge needed, a notable advantage of a series of smaller, specialized reviews over one general review. Since the inspections do not require a deep understanding of the model, they can be carried out by novices.
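As an example of how such consistency rules lend themselves to tool support, the prescope rule from the inspections above can be sketched in Python as follows (a minimal illustration with invented step names and scopes, not the tool's actual implementation):

```python
# Check the rule: the prescope of each step must be a subset of the
# intersection of the postscopes arriving at it from its predecessor steps.

def check_prescopes(steps, edges):
    """steps: name -> (prescope, postscope); edges: (from, to) pairs.
    Returns the names of steps violating the prescope rule."""
    violations = []
    for name, (prescope, _) in steps.items():
        preds = [src for src, dst in edges if dst == name]
        if not preds:
            continue  # head nodes have no arriving postscopes to check
        arriving = set.intersection(*[steps[p][1] for p in preds])
        if not prescope <= arriving:
            violations.append(name)
    return violations

STEPS = {
    "Take Submission": (set(), {"Submission", "Person"}),
    "Send Confirmation": ({"Submission", "Person"}, set()),
    "Send Refusal": ({"Review"}, set()),  # deliberately inconsistent
}
EDGES = [("Take Submission", "Send Confirmation"),
         ("Take Submission", "Send Refusal")]

print(check_prescopes(STEPS, EDGES))  # ["Send Refusal"]
```

Automating checks of this kind is what later allows most inspections to be delegated to validators.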
The final interaction walkthroughs presume a team of senior analysts, and even the participation of an experienced designer appears advisable. As a rule, we advise against inviting users to them due to their abstraction level and design orientation. Note that we by no means propose a strictly sequential development and validation process for everyday practice. If possible, the analysts should respect the general order of the activities as mentioned above. However, they may give priority to certain subdomains, proceed incrementally, or even pursue an evolutionary way. Varying risk profiles of use cases or psychological aspects like the danger of review-tiredness (often observed when the review-per-time ratio exceeds a certain rate) may also result in well-founded process deviations.

5. Tool Support and Walkthrough Animation
The applicability and success of any software or requirements engineering method heavily depends on sufficient tool support. Modelling a complex framework of models such as the one discussed here and keeping it consistent is simply impossible without a powerful tool. Hence, we have enhanced our GeoOOA tool environment [KP96] with components for the editing and validation of use cases and episodes. Major parts have already been implemented in VisualWorks 2.5, the Smalltalk programming environment of ParcPlace. This section provides a brief overview of this tool extension, with the main focus on model validation. The input of each model primitive instance, e.g. use cases, use case graphs, test scenarios, and episodes, is carried out by a graphical editor responsible for the graphical representation and a specification editor maintaining the semi-formal description. Advanced browsing and cross-referencing facilities catch all information that relates to a selected instance and is currently available. For instance, when selecting a class in a prescope, all operations of this class are displayed in a list with respect to inheritance structures (inherited operations are marked and ordered by generalization classes). Reviews are supported in various ways depending on the review technique and the attendees:
[Footnote 3: GeoOOA is a specific RE-method for geographical applications [KPS96] which subsumes Coad/Yourdon's OOA [CY91].]
– The audiences of user-oriented walkthroughs (walkthroughs on the use case level) may include 20 or more people. Considering that most of them have little or no computer experience, it is reasonable to distribute written work products as usual. Hence, in this review phase the tool just helps the recorder to mark visited nodes, keep track of the test scenarios, and attach comments about defects where appropriate.
– Consistency rules are controlled by corresponding validators which also point out suspicious parts of the model. Most of the inspections therefore become obsolete or are at least heavily supported by the tool.
Figure 6. Tool support: modelling and validating use cases and episodes
– The work product of the final walkthroughs is extremely complex and difficult to prepare in the conventional way. Since the attendees are all used to CASE-tools, it appears promising to check the digital model directly. To this end, the tool offers a powerful animator which tracks use case graphs, test scenarios, and episodes. The animator can replay test scenarios captured during the user-attended reviews and provides various advanced display functionalities. For instance, whenever an interaction is selected, the classes and relationships in its prescope or postscope, respectively, are highlighted in a class model view, and while stepping down an episode the corresponding message flow in the class model is animated. If desired, the animator also highlights all episodes that use a selected operation.
Each review component logs the complete review process. For example, each step in the use case graphs and each operation in the episodes is initially marked as 'unvisited'. As the review goes on, the
state may gradually change to 'visited' (partly validated, no defects detected yet), 'defective' (one or more defects uncovered), or 'passed' (validation successfully completed). The reviews thus become reproducible and more manageable through the tool support. On demand, the tool computes coverage metrics based on the captured information, for instance about unvisited and defective instances, so that the manager or moderator gets immediate feedback on the review progress. A snapshot of the tool's scenario construction and recording component is given in figure 6. Here, the analyst is constructing test scenarios for the use case "New Submission" of the CCIS.
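The state bookkeeping and coverage computation described above can be sketched roughly as follows. This is a minimal, hypothetical Python sketch: the four state names are taken from the text, while the `ReviewLog` class, its methods, and the operation names of the "New Submission" use case are illustrative assumptions of ours, not part of the tool.

```python
from enum import Enum

class State(Enum):
    # Review states as named in the text.
    UNVISITED = "unvisited"
    VISITED = "visited"      # partly validated, no defects detected yet
    DEFECTIVE = "defective"  # one or more defects uncovered
    PASSED = "passed"        # validation successfully completed

class ReviewLog:
    """Hypothetical log of the review state of model instances
    (use case graph steps, episode operations)."""

    def __init__(self, instances):
        self.state = {i: State.UNVISITED for i in instances}
        self.defects = {}

    def visit(self, instance):
        # Only an unvisited instance is upgraded; a defect or pass
        # verdict is never silently overwritten by a mere visit.
        if self.state[instance] is State.UNVISITED:
            self.state[instance] = State.VISITED

    def report_defect(self, instance, comment):
        self.state[instance] = State.DEFECTIVE
        self.defects.setdefault(instance, []).append(comment)

    def mark_passed(self, instance):
        self.state[instance] = State.PASSED

    def coverage(self):
        """Fraction of instances that are no longer unvisited."""
        total = len(self.state)
        unvisited = sum(1 for s in self.state.values()
                        if s is State.UNVISITED)
        return (total - unvisited) / total if total else 1.0

# Invented operation names for the "New Submission" use case:
log = ReviewLog(["check_title", "store_submission", "notify_referees"])
log.visit("check_title")
log.report_defect("store_submission", "precondition missing")
print(f"{log.coverage():.2f}")  # 2 of 3 instances touched -> 0.67
```

Note that a visit never downgrades a 'defective' or 'passed' instance; only 'unvisited' instances change state, which matches the gradual state progression described above.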
6. Conclusion
In this paper, we have presented a method which bridges the gap between user-oriented dynamic models such as use cases and design-oriented static class models. We have then suggested how the model framework can be tested in a series of specific, mostly scenario-based reviews. The method is supported by a convenient tool environment, including validators, which provides functionalities for capture-replay, scenario animation, and coverage metrics computation. Up to now, we have validated our method by re-modelling parts of existing applications, to good effect. The reason for this 're-engineering' instead of 'constructive' development is simply that the tool environment is still only available as a prototype, which itself is continuously being extended and refined during this phase. Since the tool now provides almost full functionality, we are going to apply method and tool to regular development projects soon.
References
[BJR97] G. Booch, I. Jacobson, and J. Rumbaugh. Unified Modeling Language (UML). Rational Software Corporation, Santa Clara, CA, (http://www.rational.com), version 1.0 edition, 1997.
[Boe79] B.W. Boehm. R & D trends and defense needs. In P. Wegner, editor, Research Directions in Software Technology, pages 44–86. MIT Press, Cambridge, MA, 1979.
[Boo94] G. Booch. Object-Oriented Analysis and Design with Applications. Benjamin Cummings, Redwood City, CA, 1994.
[Coc95] A. Cockburn. Structuring use cases with goals. Technical Report HaT.TR.95.01, Humans and Technology (to appear in JOOP, July/August 1997), 1995.
[CY91] P. Coad and E. Yourdon. Object-Oriented Analysis. Prentice-Hall, Englewood Cliffs, New Jersey, 2nd edition, 1991.
[Fag86] M.E. Fagan. Advances in software inspections. IEEE Transactions on Software Engineering, 12(7):744–751, 1986.
[Ger94] P. Gerrard. Testing requirements. In EuroSTAR'94, Brussels, Belgium, October 1994.
[Jac92] I. Jacobson. Object-Oriented Software Engineering. Addison-Wesley, 1992.
[KM93] J.C. Knight and E.A. Myers. An improved inspection technique. Communications of the ACM, 36(11):51–61, November 1993.
[KP96] G. Kösters and B.-U. Pagel. The GeoOOA-tool and its interface to open software development environments for GIS. In 4th ACM Workshop on Advances in Geographic Information Systems, pages 165–173, Rockville, MD, 1996.
[KPS96] G. Kösters, B.-U. Pagel, and H.-W. Six. GeoOOA: Object-oriented analysis for geographic information systems. In 2nd IEEE International Conference on Requirements Engineering, pages 245–253, Colorado Springs, April 1996.
[PW85] D.L. Parnas and D.M. Weiss. Active design reviews: Principles and practices. In Proceedings of ICSE'85, pages 132–136, London, England, August 1985. IEEE Computer Society, Los Alamitos.
[RAB96] B. Regnell, M. Andersson, and J. Bergstrand. A hierarchical use case model with graphical representation. In Proc. IEEE International Workshop on Engineering of Computer-Based Systems. IEEE, March 1996.
[RBP+91] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, and W. Lorensen. Object-Oriented Modeling and Design. Prentice Hall, 1991.
[SMT92] G.M. Schneider, J. Martin, and W.T. Tsai. An experimental study of fault detection in user requirements documents. ACM Transactions on Software Engineering and Methodology, 1(2):188–204, April 1992.
[Wei71] G.M. Weinberg. The Psychology of Computer Programming. Van Nostrand Reinhold, New York, 1971.