Requirements-Based Dynamic Metrics in Object-Oriented Systems

Jane Cleland-Huang, Carl K. Chang, Hosung Kim, and Arun Balakrishnan
Department of EECS, University of Illinois at Chicago
{huang, ckchang}@eecs.uic.edu

Abstract

Because early design decisions can have a major long-term impact on the performance of a system, early evaluation of the high-level architecture can be an important risk mitigation technique. This paper proposes a technique for predicting the volume of data that will flow across a network in a distributed system. The prediction is based upon anticipated execution of scenarios and can be applied at an extremely early stage of the design. It is driven by requirements specifications and captures dynamic metrics by defining typical usage patterns in terms of scenarios. Scenarios are then mapped to architectural components, and dataflow across inter-partition links is estimated. The feasibility of the approach is demonstrated through an experiment in which predicted metrics are compared to runtime measurements.

1. Introduction

This paper proposes a method of assessing the predicted run-time dataflow between processors in a distributed object-oriented (OO) system. Early assessment of architectural characteristics such as dataflow is an important risk mitigation technique, and can contribute significantly to the task of building a distributed system with a greater degree of confidence in its ability to perform as required [1]. Whereas traditional metrics are captured by examining only the static structure of a system, the proposed metrics consider the interaction of scenarios with static class structures. To the best of our knowledge only a very limited number of researchers are investigating dynamic metrics, and their metrics are all based upon measurements taken from executable designs. As Yacoub et al. point out, the extra effort required to define a system in executable form is justifiable for many complex real-time applications [2]. However, in many cases management is not supportive of this additional effort, even though many systems ultimately fail due to unexpected runtime performance problems. According to


Shan et al. [3], one of the leading causes of redesign effort in distributed systems occurs when systems are poorly partitioned and unsupportable levels of network communication result. This is the type of problem that dynamic metrics are well suited to address. The method proposed in this paper is not dependent upon the use of an executable design and can be applied at an early stage within any software engineering project. The requirements-based dynamic metric (RBDM) approach is based upon the fact that scenarios represent the anticipated runtime behavior of the system, and at a very early stage in the design provide the means for quantitatively predicting that behavior. Scenarios are mapped to architectural components, inter-scenario relationships are specified through scenario interaction graphs, and typical usage patterns are defined as a context in which execution frequencies of scenarios can be determined. The metric described in this paper estimates network communication loads; however, the approach could easily be extended to predict processor workloads or to measure runtime cohesion or coupling.

Section 2 of this paper briefly introduces the concept of dynamic metrics and defines the proposed dataflow metric. Section 3 describes the elicitation framework and underlying models used in this approach. Section 4 reports on the results from preliminary validation tests, and Section 5 concludes with a summary and evaluation of the approach.

2. Dynamic Metrics

Researchers have proposed metrics to measure a wide variety of attributes related to coupling and cohesion within object-oriented systems. Hitz and Montazeri [4] categorized these metrics according to class-level coupling (CLC) and object-level coupling (OLC). CLC metrics are obtained by examining class diagrams and programming code, and measure the extent to which one class is likely to impact another class. They are useful in determining the difficulty of maintaining an application during its lifetime. In contrast, OLC metrics are obtained by examining models such as sequence diagrams and


object diagrams. They measure the extent to which objects interact, and are useful in planning and managing activities such as testing and debugging. Neither category of metrics has the ability to reflect true runtime behavior of a system [5]. In true object-level coupling, the level of interaction is defined by the number of static links between two objects, the type of those links, and the frequency with which each of those links is used [4]. As an example, Huang et al. [6] defined Inter-processor Communication Volume (ICV) as the level of communication that occurs between interacting classes distributed onto different processors P and P′, using the following formula:

$$ICV(P, P') = \sum_{i=1}^{n} MessageSize_i \times Frequency_i$$

where
  MessageSize_i = number of blocks needed to send a message via a specific static link
  Frequency_i = estimated number of messages per unit of time
  n = total number of static connections between partitions P and P'
  i = current message link
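To make the formula concrete, the following is a minimal sketch of this link-based ICV calculation. The StaticLink structure and the sample values are illustrative assumptions, not part of the original model.

```python
from dataclasses import dataclass

@dataclass
class StaticLink:
    message_size: int  # blocks needed to send one message via this static link
    frequency: int     # estimated messages per unit of time

def icv_links(links: list[StaticLink]) -> int:
    """ICV(P, P') = sum of MessageSize_i * Frequency_i over the n static links."""
    return sum(link.message_size * link.frequency for link in links)

# Example: two static connections between partitions P and P'
links_p_pprime = [StaticLink(message_size=4, frequency=100),
                  StaticLink(message_size=1, frequency=2500)]
print(icv_links(links_p_pprime))  # -> 2900 blocks per unit of time
```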

This ICV dynamic metric requires message size and message frequency to be calculated for each connection; however, the current methods of obtaining this information involve runtime measurements or the use of executable designs. Both of these methods can only be applied at a late stage in the design activity and are therefore normally used to validate decisions already made. As an example, Yacoub et al. [2] used executable designs to run simulations and to extract the probability of various messages being sent between objects. From these probabilities they constructed metrics such as EOC_x(o_i, o_j), which represents the percentage of messages sent from o_i to o_j with respect to the total number of messages sent during execution of scenario x. These metrics are useful but report levels of interaction only relative to the overall interaction of the scenario or the system. In contrast, RBDM predicts actual runtime measurements within a realistic context, and as it can be applied at a much earlier stage than other currently used methods, the metrics can be applied throughout the design process to continually validate and guide design decisions as they are being made.

2.1. Early Capture of Runtime Metrics

A use-case map (UCM) provides a method of visually representing the interaction between scenarios and architectural components [7]. Scenarios are represented as "wriggly" lines, and components as boxes. Although UCMs have the ability to indicate where class


interactions occur, they reveal nothing about the frequency with which these interactions will occur or about how much data is involved in each interaction. Ideally, this information could be obtained by constructing a framework that traces back from scenarios to the requirements they fulfill, and by eliciting a rich set of quantitative performance requirements. An example of requirements-based traceability is given in Figure 1, in which requirement 102 states that a telephone sales order system must support 100 phone sales per minute. The impact of such a requirement might be propagated to other scenarios in the UCM. In this example, supporting mechanisms such as transmission of packing documents should subsequently support the processing of 100 documents per minute. In the UCM of Figure 1, an interaction point between scenarios S1 and S2 occurs at component C5, and this is the type of interaction point at which quantitative requirements related to one functional requirement are passed on to another functional requirement. The RBDM model therefore defines relationships between scenarios, and states explicitly which performance requirements are propagated from one scenario to another.

[Figure 1. Use case map showing scenario interactions.]

2.2. Defining the Proposed Dynamic Metrics

Previously, ICV was defined as the volume of data sent between two processors at runtime. By redefining ICV in terms of scenario interactions with classes and partitions, the definition can match the early capture method that is proposed. It should be pointed out, however, that this formula represents an approximation of the true ICV value, and is accurate only to the extent to which the selected scenarios are truly representative of runtime behavior. The following definitions are used:

DEFINITION 1 (Interclass Message) Let C be the set of all classes in the system. For each class c ∈ C and c′ ∈ C, let Message(c,c′) be the set of


message links from class c to class c′, and let message(c,c′) ∈ Message(c,c′) be a specific message link from class c to class c′.

DEFINITION 2 (Scenario) Let S be the set of all scenarios in the system. For each scenario s ∈ S, let Sequence(s) represent the ordered sequence of events and their related messages passing through message(c,c′) that together define scenario s.

DEFINITION 3 (Partitions) Let P be the set of all non-disjoint partitions into sets of classes in the system, and p ∈ P be a specific instance of a partition. There can exist an allocation relationship such that for each class c, Partition(c) ⊆ P is the set of partitions to which class c has been allocated.

DEFINITION 4 (Frequency) At runtime, scenarios are executed at certain frequencies. Let Frequency(s) ∈ Integer be the frequency with which scenario s is executed at runtime within a specified time period.

DEFINITION 5 (DataSize) A message carries data from one class to another. Let DataSize(message(c,c′)) ∈ Integer represent the average message size for message(c,c′).

The redefined ICV definition is given as:

$$ICV(p, p') = \sum_{s \in S} \; \sum_{\substack{message(c,c') \,\in\, Sequence(s) \\ p \,\in\, Partition(c),\; p' \,\in\, Partition(c')}} DataSize(message(c,c')) \times Frequency(s)$$

As the definition indicates, ICV between any two partitions (p and p′) is calculated by considering each scenario that crosses between those partitions. For each message link within the scenario between the partitions (p and p′), its DataSize (i.e., average message size) is multiplied by the execution frequency of the scenario, and the results are summed to obtain ICV. The remainder of this paper focuses upon the method we propose for predicting the data size of messages and scenario execution frequencies, and upon the role played by requirements in the elicitation of these values.
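As a concrete illustration, the following sketch evaluates the redefined ICV over scenario sequences. The data structures and sample numbers are assumptions made for illustration; they are not part of the paper's tooling.

```python
from dataclasses import dataclass

@dataclass
class Message:
    src: str        # sending class c
    dst: str        # receiving class c'
    data_size: int  # DataSize(message(c, c')), average bytes per message

@dataclass
class Scenario:
    sequence: list[Message]  # Sequence(s)
    frequency: int           # Frequency(s) within the specified time period

def icv_scenarios(p: str, p_prime: str,
                  scenarios: list[Scenario],
                  partition_of: dict[str, set[str]]) -> int:
    """Sum DataSize * Frequency(s) over every message in every scenario whose
    sender is allocated to p and whose receiver is allocated to p'."""
    total = 0
    for s in scenarios:
        for m in s.sequence:
            if p in partition_of[m.src] and p_prime in partition_of[m.dst]:
                total += m.data_size * s.frequency
    return total

# Example: one chat scenario crossing from a client partition to a server partition
partition_of = {"ChatClient": {"Client"}, "ChatServer": {"AppServer"}}
chat = Scenario(sequence=[Message("ChatClient", "ChatServer", 256)], frequency=30)
print(icv_scenarios("Client", "AppServer", [chat], partition_of))  # -> 7680 bytes
```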

3. RBDM Model

During initial investigations for this model, four groups of graduate-level software-engineering students were asked to reverse-engineer a complete set of both functional and non-functional requirements specifications for M-Net, a real-time web-based conferencing system developed at the University of Illinois at Chicago [8]. It was hoped that it would be possible to infer scenario invocation rates from the non-functional requirements. Although the resulting functional requirements were fairly complete, in every group the non-functional requirements were too sparse to be useful for defining frequencies. It became apparent that the identification and definition of performance requirements and scenario invocation frequencies should become a distinct task in the elicitation process, and that a framework should be constructed that would clearly identify the necessary variables.

3.1. Identification of Scenarios

As the requirements of a system are often elaborated upon with literally hundreds of scenarios, considering the performance of each one of these would be overwhelming and most probably undesirable. During requirements elicitation, it is commonly accepted that, amongst other attributes, the requirements must be complete, meaning that they should describe everything that the software must do. However, during scenario identification for the RBDM model there is a subtle shift away from "completeness" and toward "most representative" scenarios. The developer must decide which scenarios are needed to represent the typical and most frequent behavior of the system. Following the well-known 80/20 rule, which suggests that 20% of scenarios may account for 80% of the work, the RBDM model requires that these critical 20% of scenarios be identified. The approach that we explored for scenario identification involved interviewing domain experts and power users of existing and similar systems, and asking them to identify the most frequently used scenarios. Early qualitative performance models, such as the performance requirements framework proposed by Nixon [9], can also be referenced. Such models identify critical and dominant paths prior to any type of design activity, and so support the task of scenario identification. In addition to frequently executed scenarios, the critical set should also include scenarios that carry heavy data-loads, regardless of the frequency with which they are executed. Rather than relying on individual perceptions, a structured collaborative process tends to be more productive [10]. A prototype scenario management tool, known as SABRE-TM (SABRE Traceability Manager), was developed to explore the relationships that exist between scenarios and to facilitate the definition of scenario events and their mapping to architectural components. It


was found that within a given system a fairly small set of initiating scenarios tends to represent the initial interactions of a user with the system, and that these scenarios, in conjunction with user input, trigger the execution of other secondary scenarios. The term "scenario set" is used to describe an initiating scenario and all directly and indirectly related secondary scenarios. In the RBDM model, frequency assignments are made at two distinct levels: first, within a scenario set, to specify the frequency and probability at which scenario path sections are invoked; and second, at a higher level, to define the frequency with which each scenario set is invoked.

3.2. Usage Patterns

To specify the high-level frequency of scenario sets, typical usage patterns are defined which represent the way a system will actually be used at runtime. The process of identifying usage patterns is similar to the collaborative process used to identify scenarios. Drawing from the example of M-Net [8], a typical usage pattern might involve opening the meeting, logging on and chatting, using the slideshow and audio facilities, and finally closing the meeting. These activities are illustrated in Figure 2. The usage pattern is constructed by inserting a number of scenarios into a timeline. A scenario can be a simple scenario that contains only line items and does not trigger any other scenario, or a scenario set, constructed from multiple inter-related scenarios [11]. As Figure 2 shows, the usage pattern specifies the relative starting time of each of its scenarios. Usage patterns are hierarchical in nature, so that high-level patterns can be constructed from low-level patterns. For example, the usage pattern "Online Lecture" shown in Figure 2 could be placed as an entity into a higher-level usage pattern. This hierarchical structuring is important because it means that various combinations of

lower-level usage patterns can be combined to predict the system dataflow under a wide variety of situations. It is often helpful to construct usage patterns that relate to specific performance requirements. For example, if a performance requirement stated that M-Net needed to support 50 concurrent meetings with up to 20 members per meeting, a usage pattern could be designed to include 50 meetings made up of a variety of lower-level meeting types.
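The hierarchical composition described above maps naturally onto a composite structure. The sketch below is an assumed illustration of that idea; the class names, scenario names, and numbers are invented for the example and are not taken from SABRE-TM.

```python
from dataclasses import dataclass, field

@dataclass
class LeafScenario:
    name: str
    start_minute: int      # relative start time within the parent pattern
    total_bytes: int       # predicted dataflow for one execution
    duration_minutes: int

@dataclass
class UsagePattern:
    name: str
    scenarios: list[LeafScenario] = field(default_factory=list)
    sub_patterns: list["UsagePattern"] = field(default_factory=list)

    def total_bytes(self) -> int:
        # A pattern's dataflow is the sum over its own scenarios plus,
        # recursively, over any lower-level patterns placed inside it.
        return (sum(s.total_bytes for s in self.scenarios)
                + sum(p.total_bytes() for p in self.sub_patterns))

lecture = UsagePattern("Online Lecture",
                       scenarios=[LeafScenario("MeetingLogon", 0, 5_000, 2),
                                  LeafScenario("PublicChat", 2, 40_000, 10),
                                  LeafScenario("SlideShow", 5, 600_000, 20)])
# The whole lecture can itself be nested inside a higher-level pattern.
day = UsagePattern("Lecture Day", sub_patterns=[lecture, lecture])
print(day.total_bytes())  # -> 1,290,000 bytes
```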

3.3. Defining and Mapping Scenarios

Before a scenario can be used in a usage pattern it must be defined as a sequence of events, each one of which must be mapped to architectural components. Although strictly speaking a scenario is a linear sequence of events, Buhr [7] defined a set of visual notations that represent the various inter-relationships that can occur between scenarios and scenario path sections. UCMs are a powerful tool for representing scenario behavior because they allow scenarios to fork, join, run concurrently, and synchronize their actions. The RBDM approach supports a subset of this notation by supporting forks and their subsequent joins. SABRE-TM provides a tool for defining scenarios and for mapping them to architectural components. An example of the "slideshow" scenario used in the "Online Lecture" usage pattern is shown in Figure 3. Scenario events can be represented either by textual descriptions or by references to other scenarios. To support the fork and join notation described by [7], loops and case structures are used to represent repetition of blocks of lines and to organize alternate options. The vertical bars displayed in lines 4-11 and 8-11 represent loops and case structures respectively. A variable is defined for each loop to specify the estimated number of times the loop will be executed, and also for each switch statement to specify the probability at which that statement will be executed. A default value for the average duration of the entire scenario is also specified.


[Figure 3. Scenario definition and mapping.]

In this example, the slide show is defined as 20 minutes long, with loop variables named mNoSlides and mNoActions, and switch variables named mProbPoint and mProbText. Each line is mapped to the architectural components, either at the class level, the component level, or the partition level.

3.4. Calculating ICV Values

Metrics such as ICV require knowledge of the frequency with which each link is used, as well as an estimate of the average amount of data related to each message link. The scenario definition and mapping are used to generate a "Scenario Sequence Diagram", shown in Figure 4. This diagram maps events to partitions and depicts messages passed from one event to another. Smith proposed a similar approach to estimate resource requirements in performance-related scenarios, generating execution graphs from use-case scenarios. In these graphs, nodes represented scenario events and were used to support prediction of required runtime resources, which in turn were used to predict the actual performance of the scenario in terms of factors such as processing time or number of database accesses [12][13]. In our approach, the scenario sequence is generated in much the same way as the execution graph, but it is the arcs of the graph that are used to predict data flow. As the majority of messages in a system consist of simple message invocations, such messages are transmitted using the minimum buffer size for each link [3]. For this reason, all messages are assigned a default

message size based upon the known buffer size of the link. The software engineer needs to identify only those messages that carry medium or large sized loads. Using Figure 4 as an example, the first link shown between "Client" and "Application Server" in step 4 is a request to download a slide. As this is a simple message, the default value is used. In contrast, the message used to download the slide in step 5 carries the slide data itself. The link is assigned a variable name "mSlideSize" to indicate that the data-load is dependent upon this variable. The dataflow between each pair of partitions is calculated by constructing an equation expressed in terms of simple messages (M) and other variables defined in the scenario. For example, using the variables defined in Figures 3 and 4, and including messages from the nested scenarios that handle floor control and manage the pointing device, the total dataflow between AppServer and Client in the slide show is expressed as:

2M + (mNoSlides * (M + mSlideSize + mNoActions * ((mProbPoint * 4M) + (mProbText * (mTextSize * mLines)))))

To determine the total data-load, estimates are obtained for each of these variables from a wide variety of sources. When a scenario is developed to test a specific quantitative requirement, it may be possible to obtain a value directly from that requirement. Other values, such as the size of a serialized shape, might be obtained from small prototype programs. In this example mSlideSize was determined by a quick investigation into the average size of gif files exported from a PowerPoint presentation.
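The expression above can be evaluated directly once estimates exist for each variable. The sketch below does this with invented sample values (buffer size, slide count, probabilities, text sizes); only the structure of the formula comes from the scenario definition.

```python
# Evaluating the AppServer-to-Client slide show dataflow expression above.
# All numeric estimates here are invented sample values.
M = 512                 # default message size: assumed minimum buffer size (bytes)
mSlideSize = 40_000     # assumed average size of an exported gif slide (bytes)
mNoSlides = 7           # assumed slides in a 20-minute show (about one per 3 min)
mNoActions = 5          # assumed pointer/text actions per slide
mProbPoint = 0.8        # assumed probability an action uses the pointing device
mProbText = 0.2         # assumed probability an action adds text
mTextSize = 80          # assumed bytes per line of text
mLines = 2              # assumed lines per text action

total = 2 * M + mNoSlides * (
    M + mSlideSize
    + mNoActions * (mProbPoint * 4 * M + mProbText * (mTextSize * mLines))
)
print(f"{total:,.0f} bytes")  # -> 343,072 bytes for one slide show
```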


[Figure 4. Partial scenario sequence diagram for the M-Net slide show, with lifelines for Client, App Server, Central Server, and Other Client.]

Often assumptions are made about how a typical user will interact with the system. In this example it was estimated that a typical user might switch slides at most every 3 minutes. This information was obtained by observing the use of slides in lectures and presentations. Obviously the accuracy of the proposed metrics is dependent upon these estimations, and so the most reliable source should be selected in each case. Because the same variables often appear in multiple scenarios, it is beneficial to store variables and their estimated values as assumptions in a central repository, so that as more reliable data becomes available the assumptions can be easily updated. Data-loads are defined for each scenario set used in a usage pattern. The data-loads are then collated into a spreadsheet format that stores the total unidirectional data-load sent from one partition to another during the execution of the scenario set. As the model calculates total ICV over the duration of the scenario, peaks in bandwidth requirements become averaged down. A more precise reporting mechanism can therefore be obtained by creating and integrating shorter scenarios.

3.5. Calculating ICV Values

ICV values are calculated from the bottom up, starting with scenarios that contain only textual line items, and working up through more composite scenarios, to low-level and then high-level usage patterns. ICV values can be reported either as a static model showing the quantity of data passed between partitions over the entire life of the usage pattern, or as a dynamic graph showing the

changing bandwidth requirements over time. For example, in the dynamic model the bandwidth (BW) requirements between the Application Server and Central Server at minute 15, for the usage pattern "Online Lecture" shown in Figure 2, would be calculated as:

BW(OnlineLecture, 15) = BW(MeetingLogon, 9) + BW(PublicChat, 7)

Bandwidth for a scenario such as Public Chat is calculated by dividing the total number of bytes transferred during the scenario by the duration of the scenario. If one of the functions on the right-hand side of the bandwidth equation relates to either a lower-level usage pattern or a scenario set, the bandwidth is calculated recursively by examining the internal dataflow within that usage pattern or scenario set. By calculating predicted bandwidths for each time interval, a graph such as the one shown in Figure 7 can be constructed.
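A recursive bandwidth evaluation along these lines could look like the sketch below; the node structure and sample values are assumptions made for illustration, not the paper's actual tool. The offsets are chosen so the example reproduces the equation above.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    start_minute: int = 0
    duration_minutes: int = 0
    total_bytes: int = 0            # leaf scenarios carry a predicted data-load
    children: list["Node"] = field(default_factory=list)

    def bw(self, minute: float) -> float:
        """Bandwidth at a given minute, relative to this node's own start."""
        if not self.children:
            # Leaf scenario: total bytes averaged over its duration,
            # and zero outside the interval in which it runs.
            if self.duration_minutes and 0 <= minute < self.duration_minutes:
                return self.total_bytes / self.duration_minutes
            return 0.0
        # Composite pattern or scenario set: recurse into each child,
        # shifting time into the child's local frame.
        return sum(c.bw(minute - c.start_minute) for c in self.children)

lecture = Node("OnlineLecture", children=[
    Node("MeetingLogon", start_minute=6, duration_minutes=12, total_bytes=60_000),
    Node("PublicChat", start_minute=8, duration_minutes=10, total_bytes=120_000),
])
# BW(OnlineLecture, 15) = BW(MeetingLogon, 9) + BW(PublicChat, 7)
print(lecture.bw(15))  # -> 5000.0 + 12000.0 = 17000.0 bytes/minute
```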

3.6. Traceability Model

As proposed by Ramesh [14], the traceability model uses strongly typed links, which support automated querying both during development and in the face of changing requirements during the life of the system. In the traceability model, requirements are elaborated upon using scenarios, which are defined as events and mapped to components. The components are allocated to partitions, and the data-loads of inter-partition links are estimated. Typical system usage patterns are then defined in terms of scenarios, which define the frequencies with which scenarios are executed.
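A minimal sketch of strongly typed traceability links follows, assuming a simple relational representation; the link types are inferred from the chain described above (requirement to scenario to component to partition), not taken from Ramesh's model directly, and all names are hypothetical.

```python
from enum import Enum

class LinkType(Enum):
    ELABORATED_BY = "requirement elaborated by scenario"
    MAPPED_TO = "scenario event mapped to component"
    ALLOCATED_TO = "component allocated to partition"
    USED_IN = "scenario used in usage pattern"

# Each link is (source, link type, target); typed links allow queries such as
# "which usage patterns are affected if requirement R102 changes?"
links = [
    ("R102", LinkType.ELABORATED_BY, "PhoneSale"),
    ("PhoneSale", LinkType.MAPPED_TO, "OrderEntry"),
    ("OrderEntry", LinkType.ALLOCATED_TO, "AppServer"),
    ("PhoneSale", LinkType.USED_IN, "BusyMorning"),
]

def trace(source: str, link_type: LinkType) -> list[str]:
    return [dst for (src, lt, dst) in links if src == source and lt == link_type]

scenarios = trace("R102", LinkType.ELABORATED_BY)
patterns = [p for s in scenarios for p in trace(s, LinkType.USED_IN)]
print(patterns)  # -> ['BusyMorning']
```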


[Figure 5. Whiteboard-based meeting usage pattern used to evaluate the method.]

Objectives of the RBDM model include the ability for changes in requirements, and reallocation of classes to different nodes, to be reflected in the model. If, for example, a performance requirement originally stated that the sales order system should support 100 phone calls per hour, and it were determined that the duration of a call is 6 minutes, then a likely usage pattern would consist of 10 or more sales clerks servicing a variety of sales calls over a period of time. If the business intended to expand and wished to support an additional 100 calls per hour, then it would be necessary to know which usage patterns should be revised and how those revisions affected the related RBDMs. This requirement is fulfilled through a simple traceability link between any performance-based requirements and their related usage patterns. Reallocation of classes and/or components to different partitions means that certain links that were previously inter-partition become internal links, and links that may have previously been internal now become inter-partition links. A tool such as SABRE-TM can identify new inter-partition links and bring them to the attention of the software engineer with the intention of eliciting and defining their data-loads.
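The following sketch shows one way a tool could flag links whose inter-partition status changes after reallocation, in the spirit of the SABRE-TM behavior described above; the data shapes are assumptions, and each class is assumed to be allocated to a single partition for simplicity.

```python
def inter_partition(links, allocation):
    """Return the set of message links whose endpoints sit in different partitions."""
    return {(c1, c2) for (c1, c2) in links if allocation[c1] != allocation[c2]}

links = [("OrderEntry", "Billing"), ("Billing", "Ledger")]
before = {"OrderEntry": "P1", "Billing": "P1", "Ledger": "P2"}
after = {"OrderEntry": "P1", "Billing": "P2", "Ledger": "P2"}

new_links = inter_partition(links, after) - inter_partition(links, before)
# New inter-partition links need their data-loads elicited and defined.
print(new_links)  # -> {('OrderEntry', 'Billing')}
```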

4. Preliminary Validation

Prior to a full empirical study, a smaller-scale experiment was run to determine whether the ICV metrics obtained through the RBDM model would match actual runtime measurements. The following hypothesis was tested: "The predicted patterns of inter-partition communication costs obtained from the RBDM match the patterns obtained from runtime measurements." The experiment involved the following scenarios:
1. Log on and log off.
2. Public chat between meeting members.
3. Open slide show.
4. Display a slide and use the pointing device.
5. Close slide show.
The scenarios were defined and used to generate scenario sequence diagrams. ICV values were then calculated for each pair of partitions involved in the scenarios. The usage pattern depicted in Figure 5 was then developed using these scenarios. Metrics were predicted for each individual scenario, and then integrated into a composite metric based upon the occurrence of each scenario in the defined usage pattern.

Three M-Net meetings that followed the basic format of Figure 5 were conducted, and actual dataflow between partitions was measured. Figure 6 compares the predicted dataflow and actual runtime measurements using the static model. Figure 7 reports results for the dynamic model; however, only one of the runtime results is plotted in order to improve readability of the graph. In fact, all three tests resulted in similar dataflow patterns. The results from this experiment clearly demonstrated that it is possible to make meaningful predictions about runtime performance using the RBDM approach. The dataflow between each partition pair was predicted quite accurately. Averaging the results of all predictions gave an accuracy rate of 96% of the total dataflow, which was higher than we had anticipated. In the worst case of under-prediction, the metric predicted only 72% of the dataflow, and in the worst case of over-prediction, it predicted 115% of the dataflow. Although we used the same usage pattern as a framework for calculating metrics and for defining the M-Net meetings, we did not specify details such as how many slides should be shown, or how the chat should be conducted.

[Figure 6. Comparison of metrics and runtime measurements in the static model.]

[Figure 7. Comparison of metrics and runtime measurements for ICV between AppServer and Client in the dynamic model (log-scale bandwidth versus time in minutes).]

5. Conclusions & Future Work

A full-scale empirical study is planned to further validate the method. In this study, more extensive usage patterns will be used that represent more realistic system behavior, and dynamic metrics obtained from these usage patterns will be compared to comparable run-time measurements. In order to test how well the model scales up to larger systems, more complex usage patterns representing a larger mass of activity will be developed, and metrics from these patterns will be compared to measurements obtained through simulations.

The major contribution of this paper is to show how dynamic metrics can be captured by mapping scenarios onto architectural components, and by developing typical usage patterns to provide a context for determining the execution frequencies of selected scenarios. These metrics are obtainable early in the system life-cycle, and do not require extensive analytical skills to obtain. The method provides the means of validating architectural designs early and frequently during development, which directly addresses well-recognized problems such as correctly partitioning a system for distribution.

Despite the fact that our metric results matched actual measurements fairly closely, we believe that it is still extremely difficult to consistently predict typical runtime dataflow. The difficulty lies in the fact that users' interactions with the system may vary dramatically, making it hard to predict "typical" user behavior. However, dataflow-related performance requirements deal with the limits of the system, specifying requirements such as the number of concurrent meetings or users that the system must be capable of supporting. To test the system at this level it is only necessary to predict more extreme uses of the system, which is actually easier to do than trying to predict how an average user would interact with the system. Our future work will therefore focus upon using these metrics to predict how well a proposed architecture will perform under stressful conditions. We will focus upon further validation of the method through a broader and more formal investigation, and will also investigate other performance-based dynamic metrics.



References

[1] S.J. Carriere, R. Kazman, and S.G. Woods, "Assessing and Maintaining Architectural Quality", Proc. 3rd European Conference on Software Maintenance and Reengineering, Amsterdam, Netherlands, Mar. 1999.
[2] S.M. Yacoub, H. Ammar, and T. Robinson, "Dynamic Metrics for Object Oriented Designs", Proc. 6th IEEE Symposium on Software Metrics, Boca Raton, Florida, Nov. 1999.
[3] Y-P. Shan and R. Earle, Enterprise Computing with Objects, Addison Wesley, 1999.
[4] M. Hitz and B. Montazeri, "Measuring Coupling and Cohesion in Object-Oriented Systems", International Symposium on Applied Corporate Computing, Monterrey, Mexico, Oct. 1995.
[5] L.C. Briand, J.W. Daly, and J.K. Wust, "A Unified Framework for Coupling Measurement in Object-Oriented Systems", IEEE Transactions on Software Engineering, Vol. 25, No. 1, Jan./Feb. 1999, pp. 47-56.
[6] J. Huang and C.K. Chang, "Supporting the Partitioning of Distributed Systems with Function-Class Decomposition", Proc. 24th Annual International Computer Software and Applications Conference, Taipei, Taiwan, Oct. 2000, pp. 351-356.
[7] R.J.A. Buhr, "Use Case Maps as Architectural Entities for Complex Systems", IEEE Transactions on Software Engineering, Vol. 24, No. 12, Dec. 1998, pp. 1131-1151.
[8] M-Net. International Center for Software Engineering, University of Illinois at Chicago. http://www.icse.eecs.uic.edu.
[9] B. Nixon, "Management of Performance Requirements for Information Systems", IEEE Transactions on Software Engineering, Vol. 26, No. 12, Dec. 2000, pp. 1122-1146.
[10] A. Hickey, D. Dean, and J. Nunamaker, "Setting a Foundation for Collaborative Scenario Elicitation", Proc. of the 32nd Hawaii International Conference on System Sciences, 1999.
[11] K. Breitman and J. Sampaio do Prado, "Scenario Evolution: a Closer View on Relationships", Proc. of the 4th International Conference on Requirements Engineering, Schaumburg, IL, June 2000, pp. 95-105.
[12] C.U. Smith, "Designing High-Performance Distributed Applications Using Software Performance Engineering: A Tutorial", Proc. Computer Measurement Group, San Diego, Dec. 1996.
[13] C.U. Smith and L.G. Williams, "Software Performance Engineering with Object-Oriented Systems: A Use Case Approach", submitted for publication. Available online at: http://www.perfeng.com/cspubs.htm.
[14] B. Ramesh and M. Jarke, "Toward Reference Models for Requirements Traceability", IEEE Transactions on Software Engineering, Vol. 27, No. 1, Jan. 2001, pp. 58-92.

Acknowledgment

This research was partially funded by NSF grant CCR-0098346.
