HISTORICAL MEASURES BASED SOFTWARE ESTIMATION APPROACH

A. H. Yousef 1

ABSTRACT

Software estimation is a difficult and important activity for the success of software projects. There are many complex models that estimate software length, size, schedule, and cost. Although these models are important for understanding the estimation process, practitioners do not use them in industry for many reasons, and other methods and tools are used in practice. This paper presents the most successful models in the literature together with their advantages and disadvantages, then presents practical methods used in industry, and then proposes a new approach that mixes both approaches, integrated with an industrial software development life cycle automation package, to get the best out of the estimation.

Keywords: Software effort estimation; Software metrics

ARABIC ABSTRACT (TRANSLATED)

Estimating software development effort is one of the difficult and important activities for the success of software projects. There are a number of complex mathematical models that estimate the length, size, schedule, and cost of these projects. Despite the importance of these models for understanding the estimation process, specialists do not use them in practice in industry, for many reasons, and use other tools and methods instead. This paper presents the most successful models, with their advantages and disadvantages, then presents the practical methods used in industry, and then proposes a new approach that combines the two methods and integrates with a widely used software package that automates the software development life cycle. This approach improves the results of the estimation process considerably.

I. INTRODUCTION

The ability to deliver software on time, within budget, and with the expected functionality is critical to all software customers. The Standish Group (www.standishgroup.com) CHAOS reports indicated that this is not the case in typical projects; they reported that the average cost overrun of software projects was as high as 189%. Some researchers debate the reported cost overrun values [1], but it is well documented that the software industry suffers from frequent cost overruns.

1 Ain Shams University, 1 Sarayat Street, Cairo, Egypt, [email protected]


The estimation process in software development is the basis for project bidding, budgeting, and planning. When budgets and plans are overestimated, business opportunities can be lost, while underestimation may be followed by significant losses. One of the most important contributing factors is found to be the imprecise estimation terminology in use [2]. A lack of clarity and precision in the use of estimation terms reduces the interpretability of estimation accuracy results, makes the communication of estimates difficult, and lowers the learning possibilities. Portfolio management can be used to alleviate the problems of external and internal risks that cause delays, budget overruns, and poor quality [3]. The complexity increases with global software projects, which involve asynchronous collaboration among geographically distributed teams in several time zones [4]. In this paper, a review of typical software effort estimation techniques from both academic research and industry is presented, comparing the techniques found in software engineering textbooks [5, 10] and software estimation research papers. The paper then presents a technique that leads to better evaluation of estimation accuracy and ensures that the estimated and the actual effort are comparable. The paper starts with this introduction, followed by a survey of estimation models in Section II. Section III presents practical industrial approaches for software estimation and discusses the criticism of academic research from the industrial point of view. Section IV describes the proposed software estimation approach and its implementation. The paper ends with conclusions and future work.

II. SOFTWARE ESTIMATION MODELS

The software engineering literature has used software estimation models extensively, including different versions of COCOMO and Functional Point Analysis. The following subsections give a brief discussion of the two techniques.

A. COCOMO (Constructive Cost Model)

COCOMO (Constructive Cost Model) was first introduced by Dr. Barry Boehm in 1981. As the discipline of software engineering has matured, COCOMO has continued to evolve.


COCOMO has been a vehicle for introducing and illustrating software engineering methods and techniques, and it has been used in both education and training [12]. Many extensions to COCOMO were introduced, including dynamic multistage models [13]. Data were collected and analyzed from 63 software projects. The analysis showed that projects can be classified into three distinct groups relating effort (measured in man-months) to product size (measured in delivered source instructions). The three groupings are named the Organic, Semidetached, and Embedded modes. The model can be summarized by equations (1) and (2):

MM = Ai * (KDSI)^Bi    (1)

TDEV = Ci * (MM)^Di    (2)

where Ai, Bi, Ci, and Di are constants that differ for each of the three modes (the values of the constants for each mode are shown in Table I); KDSI is thousands of delivered source instructions; MM is effort in man-months; and TDEV is development time in months. These equations for the three modes are named "Basic COCOMO".

TABLE I: BASIC COCOMO MODEL CONSTANTS [5]
Project Category | Ai | Bi | Description
Simple | 2.4 | 1.05 | Well-understood applications developed by small teams
Moderate | 3.0 | 1.12 | More complex projects where team members may have limited experience of related systems
Embedded | 3.6 | 1.20 | Complex projects where the software is part of a strongly coupled complex of hardware
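As a hedged illustration, the following Python sketch applies equations (1) and (2) with the effort constants Ai and Bi from Table I. The schedule constants Ci and Di are not listed in Table I, so the values used below are taken from the published COCOMO 81 model and should be treated as assumptions rather than figures from this paper.

# Basic COCOMO sketch, equations (1) and (2).
# Ai, Bi come from Table I; Ci, Di are assumed from the COCOMO 81
# literature because the paper's table lists only the effort constants.
MODES = {
    # mode: (Ai, Bi, Ci, Di)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kdsi, mode="organic"):
    """Return (effort in man-months, schedule in months) for a size in KDSI."""
    ai, bi, ci, di = MODES[mode]
    mm = ai * kdsi ** bi      # equation (1)
    tdev = ci * mm ** di      # equation (2)
    return mm, tdev

# Hypothetical example: a 32 KDSI semidetached project.
mm, tdev = basic_cocomo(32, "semidetached")
print(f"Effort: {mm:.1f} man-months, schedule: {tdev:.1f} months")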

Dr. Boehm then presented Intermediate COCOMO, which has 15 cost drivers and their effort multiplier values to account for additional influences on project effort. COCOMO II was then released in 1997 and enhanced in 2000. It replaced the three modes of Basic COCOMO with a single exponential equation for effort estimation and one for schedule estimation [5], five scale factors for adjusting the exponents of the equations, three sizing options, redefined and additional cost drivers, a non-linear reuse model, two levels of cost-driver granularity, and phases and milestones for three types of development processes. Other versions of COCOMO include COINCOMO (Constructive Incremental COCOMO), which is COCOMO II tailored for incremental development; DBA COCOMO, which is COCOMO II tailored for database applications; and COQUALMO (Constructive Quality Model), which estimates the number of residual defects in a software product and provides insight into the payoff of quality investment [12]. Other extensions of COCOMO include iDAVE (Information Dependability Attribute Value Estimation), which estimates and tracks the return on investment of software dependability, and COPLIMO (Constructive Product Line Investment Model), which estimates software product line cost and analyzes return on investment. As rapid application development methodologies evolved, CORADMO and COPSEMO (Constructive Rapid Application Development Model and Constructive Phased Schedule and Effort Model) were constructed [12].

B. Functional Point Analysis (FPA)

FPA was introduced by Alan Albrecht of IBM in 1979 [9]. It is a method for breaking systems into smaller components so that they can be better understood and analyzed, and it provides a structured technique for problem solving. Function points are an ordinal unit of measure for software: they measure software by quantifying the functionality provided to the user, based primarily on the logical design. FPA is technology independent and has been an ISO standard since 2003. FPA is a five-step counting process:

1. Determine the type of count.
2. Identify the scope and boundary of the count.
3. Determine the unadjusted FP count.
4. Determine the value adjustment factor.
5. Calculate the adjusted FP count.

The objects to be counted are:

1. Data Functions:
   a. Internal logical files
   b. External interface files
2. Transactional Functions:
   a. External inputs
   b. External outputs
   c. External inquiries

The counts are converted to function points using fuzzy lookup tables that depend on data element types (DET) and record element types (RET). The value adjustment factor (VAF) is then calculated according to 14 general system characteristics: data communication, distributed data processing, performance, heavily used configuration, transaction rate, online data entry, end-user efficiency, online update, complex processing, reusability, installation ease, operational ease, multiple sites, and facilitation of change. Each general system characteristic is weighted on a scale from 0 (low) to 5 (high), and the sum is called the TDI (Total Degree of Influence). The value adjustment factor is determined by the following equation:

VAF = (TDI * 0.01) + 0.65    (3)
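As a rough illustration of equation (3), the following Python sketch computes the VAF from the 14 general system characteristic ratings and applies it to an unadjusted function point count. The sample ratings and the unadjusted count are hypothetical values, not data from the paper.

def value_adjustment_factor(gsc_ratings):
    """Compute the VAF from the 14 general system characteristic ratings (0-5 each)."""
    assert len(gsc_ratings) == 14, "FPA defines exactly 14 general system characteristics"
    tdi = sum(gsc_ratings)          # Total Degree of Influence, 0..70
    return tdi * 0.01 + 0.65        # equation (3): VAF ranges from 0.65 to 1.35

def adjusted_function_points(unadjusted_fp, gsc_ratings):
    """Adjusted FP count = unadjusted count * VAF (step 5 of the counting process)."""
    return unadjusted_fp * value_adjustment_factor(gsc_ratings)

# Hypothetical example: 210 unadjusted FPs and moderate influence ratings.
ratings = [3, 2, 4, 1, 3, 5, 3, 2, 4, 1, 2, 3, 0, 2]
print(adjusted_function_points(210, ratings))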

Although FPA has proven itself in different organizations, function points are not a very good measure for sizing maintenance efforts (fixing problems) or for understanding performance issues. Much of the effort associated with fixing problems (production fixes) is spent trying to resolve and understand the problem (detective work). Another inherent problem with measuring maintenance work is that much of maintenance programming is done by one or two individuals, so individual skill sets become a major factor when measuring this type of work; the productivity of individual maintenance programmers can vary by as much as 1,000 percent.

III. PRACTICAL SOFTWARE ESTIMATION TECHNIQUES

In practice, software development team members do not use COCOMO models, and they use FPA only rarely. This is due to the many effort multipliers and scaling factors found in COCOMO and the weights of the general system characteristics found in FPA: practitioners believe that the flexibility of controlling these inputs makes it easy for error to creep into the estimates. The most widely used estimation techniques found in industry are presented briefly in the next subsections.

A. Individual Expert Judgment

Historically, people have estimated software measures by individual expert judgment, which has proved to be both inaccurate and uncertain.

B. Group Reviews and Wideband Delphi Estimation

The group review technique was found to be a very biased estimation technique; the estimates within a group were found to differ by several orders of magnitude. The accuracy of the technique is improved if the experts are moderated by a coordinator until their estimates converge; this is called Wideband Delphi estimation.
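To make the moderated-convergence idea concrete, here is a small Python sketch; the convergence rule and the sample estimates are hypothetical illustrations, not a procedure taken from the paper.

def delphi_round(estimates, feedback_weight=0.5):
    """One moderated round: each expert moves partway toward the group median."""
    ordered = sorted(estimates)
    median = ordered[len(ordered) // 2]
    return [e + feedback_weight * (median - e) for e in estimates]

def wideband_delphi(initial_estimates, tolerance=0.10, max_rounds=10):
    """Repeat estimation rounds until the relative spread drops below the tolerance."""
    estimates = list(initial_estimates)
    for _ in range(max_rounds):
        mean = sum(estimates) / len(estimates)
        spread = (max(estimates) - min(estimates)) / mean
        if spread <= tolerance:
            break
        estimates = delphi_round(estimates)
    return sum(estimates) / len(estimates)   # converged group estimate

# Hypothetical initial estimates (person-days) from five experts.
print(wideband_delphi([12, 30, 18, 45, 22]))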

C. Decomposition

The second evolution of individual expert judgment is decomposition: the software project is decomposed into tasks using a work breakdown structure (WBS), each developer estimates the best case, worst case, and most likely case for his or her work, and the overall estimate is the sum of these task estimates. This benefits from what is called the "law of large numbers" in statistics: instead of relying on the pessimistic or optimistic vision of a single expert, the overall estimate has less error because some positive and negative errors cancel. Decomposition is the practice of separating an estimate into multiple pieces, estimating each piece individually, and then recombining the individual estimates into an aggregate estimate. This estimation approach is also known as "bottom up", "micro estimation", "module build up", and "by engineering procedure" [7]. The errors on the high side and the errors on the low side cancel each other out to some degree. A generic software-project work breakdown structure should be used to avoid omitting common activities.

D. Estimation by Analogy

This technique starts by collecting data about previous projects and comparing the size of the new project to old, similar projects. Checking for consistent assumptions between old and new projects is the most important success factor of this technique.

E. Fuzzy Logic

In this technique, the estimators classify the features (or tasks) into fuzzy categories. A lookup table then maps these categories into lines of code, as shown in Table II.

TABLE II: FUZZY LOGIC SIZING LOOKUP TABLE [7]
Feature Category | Average Lines of Code per Feature | Estimated Effort (Staff Days)
Very Small | 150 | 10
Small | 300 | 25
Medium | 600 | 60
Large | 1200 | 140
Very Large | 2400 | 290

The values in the average lines of code per feature column and the estimated effort column are calculated from the completed work found in the development organization's historical records of previous projects. The differences in size between adjacent categories should be at least a factor of two. The fuzzy logic approach works well when the number of features is about 20 or more.
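The following Python sketch illustrates how a fuzzy sizing table such as Table II can be applied: each feature is assigned a size category and the per-category averages are summed. The table values below are the sample numbers from Table II; as noted above, an organization should replace them with values derived from its own historical records. The example feature list is hypothetical.

# Fuzzy sizing lookup (values taken from Table II [7]).
FUZZY_TABLE = {
    # category: (average lines of code per feature, estimated effort in staff-days)
    "very small": (150, 10),
    "small":      (300, 25),
    "medium":     (600, 60),
    "large":      (1200, 140),
    "very large": (2400, 290),
}

def fuzzy_estimate(feature_categories):
    """Sum size and effort over a list of feature categories."""
    total_loc = sum(FUZZY_TABLE[c][0] for c in feature_categories)
    total_effort = sum(FUZZY_TABLE[c][1] for c in feature_categories)
    return total_loc, total_effort

# Hypothetical project with 23 features (the approach works best with about 20 or more).
features = ["small"] * 8 + ["medium"] * 10 + ["large"] * 4 + ["very large"] * 1
loc, effort = fuzzy_estimate(features)
print(f"Estimated size: {loc} LOC, estimated effort: {effort} staff-days")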

IV. PROPOSED APPROACH

Within this research, a study was done on twenty-two projects from three local and international companies in Egypt. Two of the companies were CMMI certified at different levels (2 and 3), and the third was not certified at all. For each project, the available data on the worst case, best case, most likely estimate, and actual effort were collected for the entire project and/or its work breakdown structure. Analysis of the collected data showed that most of the estimates were done by single experts (60%); Delphi and group reviews were used in 25% of cases, and functional point analysis was used for 15% of the projects. All the projects in the sample under study used only a single estimation technique, and none of the projects used historical data from previous projects. For simplicity, two simple measures (software metrics) are collected for the different projects under study: the Magnitude of Relative Error (MRE) and Confidence%. They are defined in equations (4) and (5):

MRE = |Actual - Estimated| / Actual    (4)

where Actual is the actual effort after completing the project and Estimated is the value that was estimated at the project start.

Confidence% = m / n    (5)

where m is the number of tasks whose estimates were within the best case/worst case range and n is the total number of tasks. A mature estimation process leads to low values of MRE (near zero) and high values of Confidence% (near 100%). The analysis of the collected data showed that group reviews and Delphi were the best estimation techniques, followed by FPA; single-expert estimation produced the worst results.
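A minimal sketch of the two measures defined in equations (4) and (5), computed over a list of completed tasks; the task records and their best/worst ranges are hypothetical illustrations.

def mre(actual, estimated):
    """Magnitude of Relative Error, equation (4)."""
    return abs(actual - estimated) / actual

def confidence(tasks):
    """Confidence% per equation (5): share of tasks whose actual effort
    fell inside the [best case, worst case] range."""
    in_range = sum(1 for t in tasks if t["best"] <= t["actual"] <= t["worst"])
    return in_range / len(tasks)

# Hypothetical historical tasks (efforts in days).
tasks = [
    {"best": 3, "worst": 8, "estimated": 5, "actual": 6},
    {"best": 2, "worst": 4, "estimated": 3, "actual": 5},
    {"best": 5, "worst": 12, "estimated": 8, "actual": 9},
]
print("mean MRE:", sum(mre(t["actual"], t["estimated"]) for t in tasks) / len(tasks))
print("Confidence%:", confidence(tasks) * 100)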

The analysis of these results leads to the following design goals for the solution:

A. Simplicity
B. Integration with the current source control system
C. Integration with project management software
D. Use of historical data from the industry, the organization, and the team
E. Use of different estimation techniques

Design goals A, B, C, and D are accomplished by integration with Microsoft Team Foundation Server and Visual Studio Team Suite. Team Foundation Server (TFS) is an extensible team collaboration server that enables all members of the extended development and IT team to manage and track the progress and health of projects. It includes a source control system, work item tracking, build automation, testing, and team controls with centralized reporting functionality. These capabilities simplified the design and provided historical data from the organization and team, and TFS's extensibility allowed different estimation techniques to be added to the IDE (integrated development environment) as add-ins. Design goal E follows the finding that estimation accuracy improves when results from multiple estimators or estimation techniques are combined. Figure 1 shows the block diagram of the proposed system; the shaded objects are the additional components added to Team Foundation Server and Visual Studio Team Suite.

[Figure 1: Block diagram of the proposed system — Team Foundation Server, history databases, estimation programs, reports, Team Foundation clients (Excel, Project, VS Team Suite), new project data, and estimates.]

The Team Foundation Server contains two built-in process templates: Microsoft Solutions Framework (MSF) for Capability Maturity Model Integration (CMMI) Process Improvement and MSF for Agile Software Development.

Each of these process templates has a different set of default work items, work item queries, reports, and guidance. A new process template, named "MSF for Agile Software Estimation Development", is designed as an extension of MSF for Agile Software Development to better suit the business needs of the organizations under study. Several fields are added to each work item type in this process template; these fields are shown in Table III.

TABLE III: FIELDS ADDED TO THE TEAM FOUNDATION SERVER PROCESS TEMPLATE
New Field | Used For
Best Case (Days) | Decomposition / expert judgment algorithms
Most Likely Case (Days) | Expert judgment algorithms
Worst Case (Days) | Decomposition / expert judgment algorithms
Actual Working Hours | Creating the historical data
Work Item Size | Fuzzy estimation algorithm
Lines of Code | Creating the historical data; estimation by analogy algorithm
Effort (Person-Months) | Creating the historical data; estimation by analogy algorithm
Number of Sub-Elements | Creating the historical data; estimation by analogy algorithm

In preparing the estimate, several different techniques should be used; if the estimates diverge, inadequate estimating information is available. The proposed system uses the algorithms listed in Table IV.

TABLE IV: THE ESTIMATION ALGORITHMS USED
Estimation Algorithm | Features
Individual Expert Judgment | Standard implementation
Estimation by Analogy | Supports use of historical data from previous project(s)
Fuzzy Estimation | Supports use of industry-standard, organization-standard, and team-standard values
Decomposition | Supports calculation of the standard deviation formula according to Confidence%, which can be calculated from old projects
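Since the system is meant to combine the output of several estimators (design goal E), here is a small illustration of one possible combination rule — a simple average with a divergence check. The rule and the sample figures are assumptions for illustration, not the paper's implementation.

def combine_estimates(estimates_by_technique, divergence_threshold=0.5):
    """Average estimates from several techniques and flag large divergence."""
    values = list(estimates_by_technique.values())
    combined = sum(values) / len(values)
    spread = (max(values) - min(values)) / combined
    if spread > divergence_threshold:
        print("Warning: estimates diverge; more estimating information is needed.")
    return combined

# Hypothetical per-technique estimates for one project, in person-months.
estimates = {
    "expert judgment": 14.0,
    "analogy": 11.5,
    "fuzzy": 16.0,
    "decomposition": 12.5,
}
print(combine_estimates(estimates))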


The following subsections highlight enhancements made in the implemented software to fuzzy estimation and decomposition estimation.

A. Implemented Fuzzy Logic Estimator

The fuzzy algorithm uses lookup tables such as the one found in Table II. The accuracy of the algorithm is low if the values in the table are industry-standard values, because the variance in productivity between teams in different organizations is very large; using organization-standard, team-standard, and personal-standard productivity measures increases the accuracy. The implemented fuzzy estimator can use industry-standard values for organizations that do not have historical work item data. For organizations that have historical data, it can extract measures such as lines of code from the source control subsystem and the work item tracking subsystem to calculate organization and team standards. The software connects to the work item databases of previous Team Foundation Server projects in order to collect this information and adapt the fuzzy algorithm.
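A sketch of the calibration idea: historical work item sizes (here a plain in-memory list rather than data pulled from a Team Foundation Server database) are grouped by their recorded fuzzy category, and the per-category averages replace the industry-standard values of Table II. The field names are illustrative assumptions, not the actual TFS schema.

from collections import defaultdict

def calibrate_fuzzy_table(historical_work_items):
    """Recompute the average lines of code per fuzzy category from historical data."""
    by_category = defaultdict(list)
    for item in historical_work_items:
        by_category[item["category"]].append(item["lines_of_code"])
    return {cat: sum(locs) / len(locs) for cat, locs in by_category.items()}

# Hypothetical historical work items from previous projects.
history = [
    {"category": "small", "lines_of_code": 260},
    {"category": "small", "lines_of_code": 340},
    {"category": "medium", "lines_of_code": 720},
    {"category": "large", "lines_of_code": 1500},
]
print(calibrate_fuzzy_table(history))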

B. Standard and Enhanced Decomposition Estimator

The input of the traditional decomposition estimator is the work items of the new project. For each work item, the best case, worst case, and most likely case estimates are sent to the estimator. The block diagram of the standard (traditional) decomposition estimator is shown in Fig. 2.

[Figure 2: Block diagram of the traditional decomposition estimator — input: new project work items (best case, worst case, most likely case); output: new project work item estimates and an overall project estimate (best case, worst case, most likely case).]

For each work item, the standard deviation is calculated using the following formula:

Standard Deviation = (Worst Case - Best Case) / 6    (6)

The reason for choosing the value 6 is a common approximation in statistics: it is assumed that 1/6 of the range between a minimum and a maximum equals one standard deviation. For the common bell-shaped normal probability distribution, only 0.135% of outcomes fall below the minimum (at the mean minus 3 standard deviations), and the maximum covers 99.865% of all possible values (at the mean plus 3 standard deviations). In practice, developers usually do not provide valid values for the best case and worst case [7], and sometimes the actual work item effort falls outside the range identified by the team member. Therefore, the Confidence% measure of the estimates should be calculated from previous projects and fed to the algorithm to enhance accuracy, as shown in Fig. 3.

[Figure 3: Block diagram of the enhanced decomposition estimator — the Team Foundation Server historical database of previous projects' work items is used to calculate the previous projects' Confidence% and, from it, the standard deviation divisor; inputs: new project work items (best case, worst case, most likely case) and Confidence%; output: new project work item estimates and an overall project estimate (best case, worst case, most likely case).]

As shown in Fig. 3, the portfolio of the organization's similar projects is used by the implemented estimator to calculate the previous projects' Confidence%.


The implemented system uses a more elaborate standard deviation formula that depends on the value of Confidence% calculated from previous projects:

Standard Deviation = (Worst Case - Best Case) / Divisor    (7)

The divisor is calculated from the Confidence% using the following lookup table, where interpolation can be used for points that are not listed.

TABLE V: LOOKUP TABLE BETWEEN CONFIDENCE% AND DIVISOR
Confidence% | Divisor
10% | 0.25
20% | 0.51
30% | 0.77
40% | 1
50% | 1.4
60% | 1.7
70% | 2.1
80% | 2.6
90% | 3.3
99.7% | 6
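The following Python sketch pulls the pieces together: the divisor is interpolated from Table V for a given Confidence%, each work item's standard deviation follows equation (7), and the per-item standard deviations are combined by a root-sum-of-squares, which is a common way to aggregate independent task variances (the paper does not spell out its aggregation rule, so that last step is an assumption). The work item data are hypothetical.

import math

# Table V: Confidence% -> divisor.
DIVISOR_TABLE = [(0.10, 0.25), (0.20, 0.51), (0.30, 0.77), (0.40, 1.0),
                 (0.50, 1.4), (0.60, 1.7), (0.70, 2.1), (0.80, 2.6),
                 (0.90, 3.3), (0.997, 6.0)]

def divisor_for(confidence):
    """Linearly interpolate the divisor for a Confidence% not listed in Table V."""
    for (c0, d0), (c1, d1) in zip(DIVISOR_TABLE, DIVISOR_TABLE[1:]):
        if c0 <= confidence <= c1:
            return d0 + (d1 - d0) * (confidence - c0) / (c1 - c0)
    raise ValueError("confidence outside the range of Table V")

def enhanced_decomposition(work_items, confidence=0.50):
    """Aggregate per-item (best, most likely, worst) estimates using equation (7)."""
    divisor = divisor_for(confidence)
    most_likely = sum(ml for _, ml, _ in work_items)
    # Equation (7) per work item; independent items combined root-sum-of-squares
    # (an assumption -- the paper does not state its aggregation rule).
    sigma = math.sqrt(sum(((worst - best) / divisor) ** 2
                          for best, _, worst in work_items))
    return most_likely - sigma, most_likely, most_likely + sigma  # (low, expected, high)

# Hypothetical work items: (best, most likely, worst) in days.
items = [(2, 4, 9), (5, 8, 15), (1, 2, 5), (3, 6, 14)]
print(enhanced_decomposition(items, confidence=0.70))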

The new Confidence% input to the enhanced decomposition estimator is specified by the user; the default value is 50%, meaning that the calculated estimates are expected to hold in 50% of cases. Increasing the Confidence% value increases the project estimate and increases the statistical probability that actual outcomes will fall within the estimates calculated by the algorithm. The output of the algorithm includes the project's best case, worst case, and most likely case estimates.

V. RESULTS AND CONCLUSION

Applying the proposed system to several projects in different organizations leads to lower Mean Magnitude of Relative Error (MMRE) and improved Confidence% values, as shown in Table VI.

TABLE VI: RESULTS OF USING HISTORICAL MEASURES ESTIMATION VERSUS OTHER APPROACHES
Approach | MMRE | Confidence%
Historical Measures Estimation | 14 | 64
Other Approaches | 27 | 78

Using different algorithms for estimating the software projects enhanced the estimation process discussions and group reviews. Relying on historical data stored in the Team Foundation Server work item database leads to accurate calculation of the organization and development team standard values for fuzzy work item categories, which makes the fuzzy logic estimation algorithm practical to use in the early phases of the project (the scope definition phase). Decomposition and estimation by analogy can be used in later stages of the project, after the work breakdown structure of the project is known. A comparison of software cost estimation models and some practical approaches has been presented, and a new approach that integrates the Microsoft Team Foundation Server work item tracking module with four estimation algorithms has been proposed. Results of applying the new approach showed more accurate estimates than using a single estimation algorithm without previous historical data of the organization or development team. The approach can be extended to allow for more algorithms in the future, including different versions of COCOMO and Functional Point Analysis; the constants of the algorithms could then be calculated from the projects found in the history database of a specific organization or development team.

ACKNOWLEDGMENT

I would like to acknowledge TOP IT Company, which sponsored this work. I appreciate the efforts of all the companies and individuals who contributed, reviewed, and made their data available for this research.

VI. REFERENCES

[1] Magne Jørgensen and Kjetil Moløkken-Østvold, "How large are software cost overruns? A review of the 1994 CHAOS report", Information and Software Technology, Volume 48, Issue 4, April 2006, pp. 297-301.
[2] Stein Grimstad, Magne Jørgensen and Kjetil Moløkken-Østvold, "Software effort estimation terminology: The tower of Babel", Information and Software Technology, Volume 48, Issue 4, April 2006, pp. 302-310.
[3] Wiboon Jiamthubthugsin and Daricha Sutivong, "Portfolio management of software development projects using COCOMO II", Proceedings of the 28th International Conference on Software Engineering, 2006, pp. 889-892.
[4] Patrick Keil, Daniel J. Paulish and Raghvinder S. Sangwan, "Cost estimation for global software development", Proceedings of the 2006 International Workshop on Economics Driven Software Engineering Research, 2006, pp. 7-10.
[5] Hany Ammar and Ali Mili, "Software Engineering: Technical, Organizational and Economic Aspects, an Arabic Textbook", Phillips Publishing, 2006.
[6] Tim Menzies, Dan Port, Zhihao Chen and Jairus Hihn, "Simple software cost analysis: safe or unsafe?", Proceedings of the 2005 Workshop on Predictor Models in Software Engineering, 2005, pp. 1-6.
[7] Steve McConnell, "Software Estimation: Demystifying the Black Art", Microsoft Press, 2006.
[8] David F. Rico, "ROI of Software Process Improvement: Metrics for Project Managers and Software Engineers", J. Ross Publishing, February 2004.
[9] Linda M. Laird and M. Carol Brennan, "Software Measurement and Estimation: A Practical Approach", Wiley-IEEE Computer Society Press, July 2006.
[10] Ian Sommerville, "Software Engineering", 8th Edition, Addison Wesley, 2006.
[11] Ferens, D.V., "The conundrum of software estimation models", Aerospace and Electronics Conference, NAECON 1998, pp. 320-328.
[12] Fairley, R.E., "The Influence of COCOMO on Software Engineering Education and Training", 19th Conference on Software Engineering Education and Training, 2006, pp. 193-200.
[13] Richard M. Schooff and Yacov Y. Haimes, "Dynamic Multistage Software Estimation", IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 29, no. 2, February 1999, pp. 272-284.

