ISBSG Workshop – Roma, Feb 12th 1998
Software reuse as a potential factor of database contamination for benchmarking in Function Points

Roberto Meli (CFPS) - D.P.O. Srl
Abstract

The benchmarking activities to which this work refers are based on technical and administrative data gathered from software projects using appropriate systems of measurement, first among which at present is Function Point Analysis. Benchmarking databases contain actual data regarding not only the size of a software project, but its productive factors as well. These databases may have a number of important uses, above all for management and contractual purposes such as, for example, assessing productive processes, creating productivity models, or evaluating software. It is therefore essential to be certain that the data contained therein - and therefore the decisions derived therefrom - are not contaminated by errors of unacceptable orders of magnitude. Although Function Point Analysis is one of the best possible choices for sizing software applications, Function Points - as they are currently defined - do not yet constitute a purely functional metric; when used without a critical eye in benchmarking activities, they may lead to erroneous conclusions when applied to specific projects. In addition to recommending possible paths towards making Function Point metrics more purely functional, this work illustrates how software reuse may be a factor that potentially contaminates the validity of comparisons within a benchmarking database. Lastly, some simple techniques are introduced to solve these problems.
1. Introduction

Software production is an industrial process that appears somewhat resistant to classification in the context of scientific, or at least traditional, engineering practice. This is surely due to the very essence of the product, which is in the first place intangible, and therefore subject to a certain, unavoidable degree of subjectivity. This feature is also reflected in the difficulty of designing well-founded metrics that make it possible to reliably measure software in its various aspects. However, it is essential to use these metrics if we are to manage the productive process instead of being managed by it. Function Points were introduced to circumvent some important problems posed by previously used metrics, and by Lines of Code in particular. However, not all of these problems have been solved. In our opinion, Function Points must evolve to respond to the changes that have occurred in recent years in the technological and productive landscape, and to resolve, truth be told, some inconsistencies that have been there from the very beginning.
2. Benchmarking

Benchmarking (at least the type being discussed here) may be thought of as the activity of comparing, from a quantitative standpoint, a certain productive phenomenon found within an organization with a set of similar phenomena, or with statistically average phenomena from the external environment. Why do we use benchmarking? Mainly, organizations benchmark to:
• evaluate the status of their own productive processes at a given moment (assessment);
• identify their competitive position on the market;
• establish points of arrival and departure for process improvement;
• and in general, obtain information useful for forecasting, monitoring, and managing the situation being compared.

Software benchmarking may be effectively performed using an archiving system - automated, if possible - that makes it possible to gather, validate, and analyze data using a number of instruments of qualitative/analogical as well as quantitative/statistical investigation. Once available, these data may help to achieve objectives that are on a subordinate level, but at times no less important than the primary ones, such as:
• creating mathematical productivity, cost, and duration models;
• predicting effort, duration, and costs for a new specific software project;
• evaluating a supplier's software offer.

A benchmarking database, therefore, is a collection of technical and management data regarding resources, means, and productive processes for software. It includes anticipated and actual data on projects concluded by a number of organizations, gathered in accordance with precise, proven models for data security and cleansing. The data recorded normally range from nationality to type of organization, from the type of project (development, enhancement, porting, etc.) to its domain of application, from the technical size of the software released (in FPs or KLOC) to its quality, from the effort in the various working phases to their duration, and so on. Once the data are entered, the database can be queried by filtering cases similar to the one to which a comparison is to be made and analyzing the results individually, or by obtaining the average data with the respective statistical dispersion and correlation indicators. It is often possible to apply such evolved investigation techniques as multi-factorial analysis or correlation techniques. As in all empirical investigations, however, the data do not speak on their own: we need hypotheses of relationship and connection between the recorded variables to permit us to accept or reject the proposed models. This is generally the work of statistical experts, because the traps laid by nonchalant use of analysis instruments are innumerable. Despite this, benchmarking databases are highly useful, and gathering and studying these data is well worth the effort.
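As an illustration of this kind of query, here is a minimal sketch (in Python) of filtering a benchmarking database for similar cases and computing average productivity with a dispersion indicator. The records, field names, and values are hypothetical, invented only for illustration:

```python
from statistics import mean, stdev

# Hypothetical project records; field names are illustrative only.
projects = [
    {"type": "development", "domain": "banking", "fp": 480, "effort_pm": 52},
    {"type": "development", "domain": "banking", "fp": 510, "effort_pm": 44},
    {"type": "enhancement", "domain": "insurance", "fp": 120, "effort_pm": 15},
    {"type": "development", "domain": "banking", "fp": 530, "effort_pm": 61},
]

# Filter cases similar to the project being compared.
similar = [p for p in projects
           if p["type"] == "development" and p["domain"] == "banking"]

# Average productivity (FP per person-month) with a dispersion indicator.
productivities = [p["fp"] / p["effort_pm"] for p in similar]
print(f"mean = {mean(productivities):.1f} FP/pm, "
      f"st.dev = {stdev(productivities):.1f}")   # mean ~9.8, st.dev ~1.5
```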
As already pointed out, the delicate nature of the decision-making processes supported by these data makes it indispensable to be certain that we are not including in the database a set of observations contaminated by underlying errors, or by a lack of homogeneity, of the same order of magnitude as the numerical relationships being sought. Before getting to the heart of the analysis, let us ask: what relationships are there, in software projects, between technical size, effort, working duration, staff, and production cost?

Effort and size

As many authors have found, a relationship of functional dependence surely exists between an application's development effort and its size. However, this relationship involves innumerable productive variables that sometimes rise from their supporting roles to take centre stage. Some of these variables are:
• working methods
• CASE tools
• programming languages
• technological platforms
• team experience
• criticality and reliability of the final product
• complexity and innovation of the problem to be solved, and of the software
• expected quality
• economic/organizational/competitive context
• nationality
It is commonly accepted that size is the so-called "primary driver" of the functional relationship. This means that variations in size are those that determine, to a greater degree, the variations in the related effort. This is why we generally take the auxiliary productivity variables into account through a Multiplicative Adjustment Factor applied after the effort has been calculated from the size. The mathematical models linking size to effort are of the greatest variety, ranging from linear equations - straight lines - to exponential ones, via polynomials. The choice of one or the other is a matter of methodological conviction, intuition, experience, and at times (unfortunately) pure aesthetic taste. Independently gathered empirical data will confirm or belie the productive models proposed. This shows how important it is to have a serious benchmarking database.

Working duration, effort, and staff

These three variables, on the other hand, are linked together more closely and directly than the previous pair since, by their very working definition, effort is given by the product of persons and time. If there is a one person-month job to be performed and we have only one person available, it will last (anyone willing to bet?) one working month. We have known for a long time, however, that the interchangeability of persons and months does not hold at all points of the hyperbola (the mathematical curve representing the trade-off between two factors), but only in the central area, away from the extremes. A popular saying expresses this quite well: while one woman normally produces a child in nine months, nine women will never produce a child in one month! Although there are reasons to believe that too tight a constraint on staff (its quality or quantity), or on the acceptable duration of a project, influences development effort just as other conditions do, it may be stated, as an initial approximation, that effort is the primary variable used to determine the duration and staff needed for the project.

Production cost and effort

When we speak of production cost, we are referring to the cost of labour, equipment, organizational factors, and raw materials needed for the project. Generally, the part that is most difficult to estimate is the work to be performed. It is therefore necessary to have a model that links the cost of labour to the tasks performed, the process adopted, the unit costs of the resources, and above all the effort to be made. This model, however, poses no particular conceptual problems to develop, and is administrative rather than technical in nature. We may see that once the first step - producing a functional model linking size to production effort - has been made, the rest is smooth sailing. And it is on this first step, in fact, that most corporate researchers generally focus their attention.

An error too often committed in this field by firms producing and purchasing software is that of treating the relationship between size and effort as a constant, expressed by a single value of average productivity. It is hard to tell who has done more harm to businesses: the people who go around armed with a fateful number - a single number - such as Function Points per person-month or, worse yet, Money per Function Point, with which to compare everything they come across, or the consultants who armed these people with values whose origin is often inscrutable and unfathomable. What has been discussed to this point clearly shows how the link between size and effort - and therefore the production cost - is mediated by a set of productive factors that may quite significantly alter the productivity relationship. This is why we must first replace the simple number of FPs per person-month with an equation that takes into account the fact that productivity does not remain constant at all as the project's size varies. It is often pointed out that small projects tend to be more productive than average ones, which in turn are more productive than large-scale ones. In any event, it is hard to believe that productivity remains constant as the size of the software to be developed varies. This means that instead of the fateful number, we should at least use a table that associates a different number with each size range into which we have divided the scale of measurement. However, a not insignificant problem with this solution is that the table is in fact a discontinuous function of its variables: projects differing from one another by only a few FPs may yield effort predictions differing by dozens of person-months. It is therefore clear that the only proper practice is to use a continuous, regular function, such as a mathematical equation linking the two aforementioned primary variables. In addition to this, however, we should also have a set of adjustment factors to take into account all the productive conditions that impact development capacity, such as those cited above.
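To make the contrast concrete, here is a minimal sketch of the two approaches: a step table of productivity by size range, and a continuous power-law equation with a multiplicative adjustment factor. All coefficients are hypothetical, chosen only to show the discontinuity problem, not calibrated on any benchmarking data:

```python
def effort_from_table(size_fp: float) -> float:
    """Step-table model: a different average productivity (FP per
    person-month) for each size range. Discontinuous at the boundaries."""
    ranges = [(100, 14.0), (300, 11.0), (600, 9.0), (float("inf"), 7.0)]
    for upper_bound, fp_per_pm in ranges:
        if size_fp <= upper_bound:
            return size_fp / fp_per_pm

def effort_continuous(size_fp: float, adjustment: float = 1.0) -> float:
    """Continuous model: effort = a * size^b, with a multiplicative
    adjustment factor for the auxiliary productivity variables."""
    a, b = 0.05, 1.15  # hypothetical calibration constants
    return a * size_fp ** b * adjustment

# Two projects differing by a single FP straddle a range boundary:
print(effort_from_table(300), effort_from_table(301))  # ~27.3 vs ~33.4 pm
print(effort_continuous(300), effort_continuous(301))  # ~35.3 vs ~35.4 pm
```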
A benchmarking database should consider as many of these factors as possible, in order to enable an appropriate multi-factorial analysis to be performed. Unfortunately, the main, most referenced, and almost the only public study of this kind dates back to the 1980s: Barry Boehm's COCOMO model. We now come to the most delicate part, which is the link between production cost and market price. To explore this relationship, we need to introduce some basic elements of economics, which will allow us to discover a rather bizarre situation in Function Points.
3. Software as a market good

For some time now, software has taken on the appearance of a market good, and has been the subject of important economic transactions on a par with other, more traditional products. The age of software developed entirely in house has certainly passed, and all organizations are making more or less extensive use of the software suppliers market. This means that the mechanism forming the price of a piece of software is influenced by the empirical laws of supply and demand. What interests us here is the relationship between Function Points and software price. Before examining this, we should review some elements of economics. [1]

Market Value, Labour Value, and Use Value

The value of a good is often defined as its ability to be exchanged for another good. In this definition, the value of a good therefore represents the good's property of acquiring other goods or services through exchange. In monetary terms, it may be asserted that the value of a unit of the good in question is its price. In economics, there are two theories of value: the Labour Value theory and the utility theory. Marx's Labour Value theory draws its origins from classical value theory. According to the classical economists (Smith, Ricardo), the value of a good depends on its production cost, and on the labour cost in particular. Therefore, a good that required three days of labour to be produced should be worth more than a good that required only two. For Marx, the value of a good is always, and only, represented by the quantity of socially necessary labour incorporated therein, either in the form of direct labour by the work force in producing it, or in the form of indirect labour incorporated into the capital goods used to produce it. According to this theory, value is an objective property of the good. Utility theorists, on the other hand, approach the problem starting from the demand for goods. For them, value is no longer an objective property of the good itself, but depends on its utility (that is, its ability to satisfy one or more of the consumer's needs), and therefore on the subjective relationship between the good and the consumer. It follows that the same good may have a different value for different consumers.
The price formation mechanism

In a market economy, price is determined by the interplay of supply and demand. In the elementary theory of demand for a good, the quantity demanded is determined by a set of factors, which include:
• the market price of this good
• the consumer's income
• the price of other goods
• the consumer's tastes or preferences
• the consumer's aims

On the other hand, in the elementary theory of supply of a good, the quantity supplied is determined by a set of factors, such as:
• the aims of producers
• the market price of this good
• the price of other goods
• the price of the production factors
• technical progress

Elementary price theory tells us that there is a single price that can make the quantity of the good supplied equal to the quantity demanded, and that therefore market value, or price, does not depend exclusively on either the production cost or the Use Value alone. This has important consequences for the software market as well: the monetary price of a supply is linked exclusively neither to the Labour Value (cost of production) nor to the Use Value (benefits to the user), but only to achieving an equilibrium between supply and demand which, as we have seen, are influenced by a very broad set of variables. The software field has two different possible scenarios: that of the off-the-shelf package, and that of development to order. Only to the first case can the aforementioned economic laws be applied appropriately, in line with the features of a competitive market. In the second scenario, however, the listed factors will continue to influence individual purchasing decisions, albeit imperfectly. For an individual supply of software to be developed to order, instead of referring to the quantities of a good demanded and supplied, we may speak of willingness to purchase or sell that particular good under given conditions. Given that in the software market, both off the shelf and to order, the price of the good is linked to a myriad of factors of equal importance, there is no primary driver to which the price can be linked in a simple manner. Therefore, we cannot expect a strong correlation between the price and size variables. If such a correlation were to occur in a given set of gathered data, it would almost certainly be hiding an average situation consisting of latent waste or forced losses. Therefore, it does not appear to be a good idea to adopt a fixed conversion factor to express Function Points in terms of market price in money, in order to evaluate a specific project. However, it is a good idea to use this relationship as a macro-indicator of the organization's general competitive position since, for larger numbers, errors tend to cancel each other out. What consequences do these matters have for Function Point analysis and benchmarking?
4. Function Points between Labour Value and Use Value

It is often stated that Function Points measure the functional size of a software application from the point of view of the experienced user, and at the same time that they are linked to the usefulness that the software has for this user, or its Use Value. This is like saying that the kilogram we use to measure bread is also a measure of the degree to which that quantity of bread can satisfy the consumer's hunger. Does this property appear likely? Does the size grow along with the Use Value? And given that, according to the utility theory, each good has not one single value, but as many values as there are consumers (or, in this case, users) of the software, should we therefore have as many measurements in Function Points as there are different points of view? To answer these questions, let us try to understand who the user is, and what his or her Use Value is.

The IFPUG 4.0 standards [2] do not give a clear, unequivocal definition of the User. In a number of points, however, they suggest a definition broadened to include the direct user, the indirect user, the hierarchical manager, the technical/operative user, and so on. The term Experienced User can thus be taken as an abstraction: a virtual figure who in reality is a composition of a number of different physical figures entitled to express requirements on the software project, guided in this by an expert in analysis techniques. This offers a solution to the multiple user problem: for each application, there is one and only one experienced user, who is the composite of all the subjects indicated above. It is no accident that in the recently developed discipline of requirements engineering, point-of-view based approaches are winning the day.

What then is the Use Value of the software for a subject such as the one described above: physically non-existent but all too demanding? An initial consideration regards the fact that each software project has functions that are more important than others. A function that makes it possible to control air traffic will surely be more important than one that determines to which printer to direct the log file of the day's calls! If we make the simplifying supposition that all functions are democratically equal, and that the functional size is linked more to the number of functions than to their intrinsic mission, the second element of subjectivity falls as well. This is exactly the kind of approximation that occurs when, for example, Data Element Types (DETs) and File Types Referenced (FTRs) are counted to determine the complexity of an External Input. If this approximation is acceptable, we may say that Function Points are linked to the quantity of things that can be done with a given piece of software, and that this corresponds with its Use Value.
Are the two hypotheses we have made acceptable? In our view, the answer is yes. We have no reason to think that these assumptions introduce more problems than they solve, given that we are in the field of intangible goods, where subjectivity is to some extent ineluctable. The problem with Function Points actually comes from another direction. As Function Points are presently defined, they appear more linked to the Labour Value than to usefulness for the User. If this is the case, we have an intrinsically inconsistent metric, which purports to measure the software's Use Value while actually being linked to its production costs, and therefore to the Labour Value. However, we have seen how, in a market economy, Use Value is not linked to production costs except by chance. To return to our metaphor, it is as if we wished to measure bread not by the kilogram (a measurement roughly proportional to the ability of the good to satisfy the consumer's hunger) but by the hours of work needed to produce that piece of bread! With the variety offered by modern technology, a certain quantity of bread may be produced in half a day, or even in one hour, while maintaining the same ability to satisfy hunger. A measurement of this kind would not be at all linked to the software's aptitude to satisfy the consumer's need, but only to the productive conditions. What a strange fate for a metric conceived for the market: to be perfectly consistent with the Marxist approach!

There are two main elements leading to this thesis:

1. the IFPUG counting practices include the Value Adjustment Factor (VAF; VAFA; VAFB), which does not actually add a single elementary function to those required by the experienced user, but which, interestingly enough, is more a way of taking the development difficulties into account, increasing or decreasing the pure functional value by up to 35% based on the greater or lesser production cost;

2. the relative weights of EI, EO, EQ, ILF, and EIF seem to have been assigned based more on the development difficulty related to the technologies in use in the years in which Function Points came into being, than on the perception of usefulness for the experienced user. For example, why should an External Input weigh more than an External Output? Perhaps because, with the technologies of the time, it was easier to design and produce a data acquisition mask than a printed report. Today, it would likely be the other way around.

Therefore, we need to lead Function Points to measure size or, in light of what has been stated, the Use Value of the software, abandoning any kind of mingling with production effort and costs. This should not frighten all those who look to software metrics as tools to manage forecasts, projects, and contracts, and for whom the link between size and effort is of fundamental importance. In truth, they too have everything to gain from purging the non-functional aspects from Function Point metrics since, should this come to pass, they may have stronger, more consistent models at their disposal, even for calculating economic variables.
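For concreteness, the adjustment mechanism criticized in point 1 can be sketched as follows. The formula is the standard IFPUG 4.0 value adjustment; the ratings passed in are invented for illustration:

```python
def adjusted_fp(unadjusted_fp, gsc_ratings):
    """IFPUG 4.0 value adjustment: VAF = 0.65 + 0.01 * TDI, where TDI is
    the sum of the 14 General System Characteristics, each rated 0..5."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    tdi = sum(gsc_ratings)
    vaf = 0.65 + 0.01 * tdi
    return unadjusted_fp * vaf

# The same 100 "purely functional" unadjusted FPs can end up anywhere
# between 65 and 135 adjusted FPs: a +/-35% swing driven entirely by
# non-functional, production-oriented characteristics.
print(adjusted_fp(100, [0] * 14))  # 65.0
print(adjusted_fp(100, [5] * 14))  # 135.0
```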
With regard to the composition of the weights assigned to the function types, the work by Wittig, Morris, Finnie and Rudolph [3] appears extremely promising for the development of Function Points, in that it introduces a formal methodological approach to defining User perceptions of the relative weights given to the factors underlying Function Point Analysis. This research effort goes precisely in the direction of completely separating the EI, EO, EQ, ILF, and EIF weights from development difficulty, and linking them to the Use Value discussed earlier. As a first result, we may conclude that the weights produced by the research cited are quite different from those included in the standard tables (on this point our opinion differs from the authors'). The second thing we must do is to abandon the use of the VAF (Value Adjustment Factor), at least as an element modifying the measurement of the application's size, perhaps recovering it either as a qualifier (the higher the VAF, the better the application) or as an element modifying the productivity value, which will influence production costs but not the size of the application. [4]
5. Software reuse as a potentially contaminating factor for benchmarking

We have already seen how a benchmarking database is a useful tool for collecting historical information with which to develop forecasts for the future and evaluations for the present regarding software, particularly its productive processes. In other words, it may be seen as providing sound support to the creation of productivity models, and as a means to validate what the market has to offer. However, the data gathered in the database must absolutely be as error-free as possible (whether these errors are systematic or random in nature), or in any event free of errors of considerable size. Beyond the management precautions that can be taken to ensure that the data gathering process is safe, reliable, confidential, and true to the real situation, there are potential hazards originating from the very nature of the metrics underlying the gathering process. In particular, we shall see how the practice of software reuse, in all its possible forms, may be a source of considerable contamination to the validity of comparisons made using a database that fails to take it into account in any way. As has already been written, productivity is a multi-factorial function linked to a very broad number of variables that can be attributed to the following main classes, that is, to the features of:
• the problem to be solved
• the product in existence or to be developed
• the productive process
• the technologies used or to be used
• the working environment for development
• the work group
• the context

A simple functional relationship between effort and size in Function Points can only be used if the latter is a primary driver, which is to say, if its influence is at least one order of magnitude greater than that of the other factors, which will determine the model's "noise", or range of probable error. This assumption probably holds true for all factors except reuse, since reuse may have an impact comparable to that of size, and may therefore be considered a factor contaminating the primary driver.
One example should illustrate what has been discussed to this point. Let us suppose that we have a project to develop an application that has already been counted at 500 FPs in accordance with IFPUG 4.0 standard practices. Let us also suppose that it is possible to recover from a previous project a set of software modules, already developed and tested, for a total of 100 FPs, and that 150 more FPs, ready for insertion into the software designed ad hoc, are obtained on the software components market. In this case, approximately one half of the functionality provided for the application to be developed will be obtained with minimum, or even no, effort. Lastly, let us suppose that our benchmarking database is fed with data supplied exclusively by organizations that develop software entirely in house, with no reuse of any kind. If the average productivity for similar database projects were 10 FPs per person-month and we applied a simple formula that, for example, divides the 500 FPs by 10 FPs/pm, we would find that we need 50 person-months to develop the new project when, in all probability, considering the savings introduced by reuse, perhaps half this figure would be sufficient. Waste is ensured by the well-known property by which a software project behaves like a gas, tending to occupy all the space it is given: it will spread itself out to absorb the 50 person-months without even a suspicion that half of this could have been avoided. This sample scenario is not hypothetical in the least; it will become more and more the norm in the years to come, when the components market has developed like the packages market, and we will be able to obtain information systems by assembling what we may call prefabricated parts, already prepared because they have been developed in house or externally - it matters little which. A benchmarking database that fails to differentiate projects based on the percentage of reuse employed in each case risks contamination by data that cannot be compared with one another. Nor is it practical to trust that average productivity is the result of a balanced composition of diversified levels of reuse, and that this smooths out the problem's sharp edges. Indeed, it is of no use to average over the various levels of reuse as we do with other factors, because on an individual project this parameter may assume too great a proportion. On the other hand, the value of productivity averaged regardless of reuse is useful as a macro-economic index of comparison between different organizational situations because, from the competitive standpoint, knowing how to exploit reuse is a factor of competitive advantage.
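The arithmetic of the example can be stated in a few lines; the only assumption added here is the deliberately simplistic one that reused FPs cost no effort at all:

```python
total_fp = 500            # size counted per IFPUG 4.0
reused_fp = 100 + 150     # recovered modules + components bought on the market
productivity_fp_pm = 10   # average from a no-reuse benchmarking database

# Naive estimate: apply the database productivity to the full functional size.
naive_effort = total_fp / productivity_fp_pm

# Reuse-aware estimate: charge effort only for what must actually be developed.
reuse_aware_effort = (total_fp - reused_fp) / productivity_fp_pm

print(naive_effort, reuse_aware_effort)  # 50.0 25.0 person-months
```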
6. How to properly consider software reuse in models

The Counting Practices Committee at Gruppo Utenti Function Point Italia (GUFPI) has recently dealt with this question [5], stressing the arguments described hereunder. Page 2-2 of the CPM 4.0 manual states: “Function points measure software by quantifying its functionality provided to the user based primarily on logical design.”
In effect, the functionality already in existence and incorporated - through the external acquisition of generalized packages or the in-house reuse of modules developed on other occasions - is still functionality requested and obtained by the user, and should therefore be counted as if developed from scratch for the purpose of obtaining the size of the application in FPs. This means that reuse is not a size-impacting factor, at least from the external standpoint of Use Value. However, it surely impacts the work to be performed, and therefore the consequent production cost, perhaps rendering inapplicable the functional link between size and effort that we worked so hard to construct using the benchmarking database. How can we resolve this conflict between the FPs provided to the user (all of them) and those developed by the project (only some), which are the useful ones for forecasting purposes?

One possible approach to this problem is the following: for each project, define two different measurements in Function Points. One is connected to the external user view of the software, and corresponds with Function Points as they are currently defined; the other is connected to the administrative and productive needs of the software manufacturer, who wishes to know which functionalities must be developed more or less from scratch, in order to forecast and assign only those resources necessary and sufficient for developing the application. This new measurement may be called Developed Function Points (DFP). Developed Function Points take into account only those functions that need to be developed entirely or in part, but not those that are effortlessly inherited. In this way, effort forecasting based on a historical productivity measured in DFPs per person-month will not be contaminated by the reuse phenomenon. In practical terms, to determine the Developed Function Points starting from Function Points, we need only assign to each element (EI, EO, EQ, ILF, EIF), classified and evaluated in accordance with the standard contribution tables, a multiplying factor that assumes values from 0 to 1 based on the estimated savings introduced by reusing that particular element. Then, by adding up in the usual manner the contributions modified by the reuse coefficients, we obtain the overall DFP count, which will be less than or equal to the FP count. A benchmarking database should contain both measurements, in order to produce both internal and external productivity models.
An appropriately modified example from the IFPUG 4.0 Counting Practices Manual should illustrate the proposed method:

| Transactional Function Types | FTRs | DETs | Complexity | UFP | Reuse | DUFP |
|---|---|---|---|---|---|---|
| External Inputs | | | | | | |
| Assignment report definition | 1 | 5 | Low | 3 | Low/0.8 | 2.4 |
| Add job information (screen input) | 1 | 7 | Low | 3 | None/1 | 3 |
| Add job information (batch input) | 2 | 6 | Low | 3 | High/0.4 | 1.2 |
| Correct suspended jobs | 1 | 7 | Low | 3 | Very H/0.2 | 0.6 |
| Employee job assignment | 3 | 7 | High | 6 | All/0 | 0 |
| External Outputs | | | | | | |
| Jobs with employees report | 4 | 5 | Average | 5 | Low/0.8 | 4 |
| New dependent transactions to Benefits | 1 | 5 | Low | 4 | Very H/0.2 | 0.8 |
| Notification message | 3 | 4 | Low | 4 | Low/0.8 | 3.2 |
| Employees by Assignment Duration Report | 3 | 7 | Average | 5 | Low/0.8 | 4 |
| External Inquiries (FTRs and DETs as In/Out) | | | | | | |
| List of retrieved data | 2/1 | 2/3 | Low | 3 | High/0.4 | 1.2 |
| Drop-down list box | 1/1 | 2/1 | Low | 3 | High/0.4 | 1.2 |
| Field level help | 1/1 | 2/4 | Low | 3 | None/1 | 3 |
This example yields 45 Unadjusted FPs but only 24.6 Developed Unadjusted FPs. To standardize the reuse types, the corresponding numerical percentages, and the multiplicative adjustment values, research has been started, involving professionals from a number of different organizational situations, aimed at reducing as much as possible the level of subjectivity in attributing weights and coefficients. The results will be presented in a technical report when the experimentation phase is completed.
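The DFP calculation itself is trivial to mechanize. The following sketch recomputes the totals from the table above (the row names are those of the reconstructed table; each reuse coefficient is the numeric half of the corresponding Reuse cell):

```python
# Each element's standard UFP contribution is multiplied by its reuse
# coefficient (1 = no reuse ... 0 = fully reused).
elements = [
    # (name, standard UFP contribution, reuse coefficient)
    ("Assignment report definition",            3, 0.8),
    ("Add job information (screen input)",      3, 1.0),
    ("Add job information (batch input)",       3, 0.4),
    ("Correct suspended jobs",                  3, 0.2),
    ("Employee job assignment",                 6, 0.0),
    ("Jobs with employees report",              5, 0.8),
    ("New dependent transactions to Benefits",  4, 0.2),
    ("Notification message",                    4, 0.8),
    ("Employees by Assignment Duration Report", 5, 0.8),
    ("List of retrieved data",                  3, 0.4),
    ("Drop-down list box",                      3, 0.4),
    ("Field level help",                        3, 1.0),
]

ufp = sum(w for _, w, _ in elements)
dufp = sum(w * r for _, w, r in elements)
print(ufp, round(dufp, 1))  # 45 24.6
```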
7. Conclusions

This work is intended to provide impetus towards developing the IFPUG Function Point standards so as to rid this valuable metric of the impurities of its non-functional component, and at the same time towards organizing benchmarking databases so that proper consideration is given to reuse as a factor that potentially contaminates, or improves, the data and the decisions derived from them.
8. References

[1] Richard G. Lipsey, Introduzione all’economia, Etas Libri, 1981.
[2] Function Point Counting Practices Manual, Release 4.0, International Function Point Users Group, Blendonview Office Park, 5008-28 Pine Creek Drive, Westerville, OH 43081-4899, USA, 1994.
[3] Wittig, Morris, Finnie and Rudolph, Formal Methodology to Establish Function Point Coefficients, IFPUG Fall Conference, Scottsdale, Arizona, USA, September 15-19, 1997.
[4] Roberto Meli, Early and Extended Function Point: a new method for Function Points estimation, IFPUG Fall Conference, Scottsdale, Arizona, USA, September 15-19, 1997.
[5] Linee Guida Italiane per il conteggio dei Function Point, Counting Practices Committee, Gruppo Utenti Function Point Italia, http://www.gufpi.com/cpc, 1998.
9. The author

Roberto Meli received a cum laude degree in information science from the Università degli Studi di Bari in 1984. That same year, he started consulting and training for some of Italy’s leading organizations. An expert in project management and software metrics, he has written papers for international conferences and for journals in the field of information technology. He has attended and conducted high-level training courses abroad, and is a Certified Function Point Specialist recognized by IFPUG (International Function Point Users Group). Roberto Meli coordinates the Counting Practices Committee at GUFPI (Gruppo Utenti Function Point Italia), and has been Director of D.P.O. Srl since 1990.

e-mail: [email protected]
http://web.tin.it/dpo ; http://www.dpo.it