Estimating Packaged Software Implementations

The first part of a framework

Frank Vogelezang, Ordina – Proposal Management Center, [email protected]
René Nijland, Capgemini, [email protected]
Eric van der Vliet, Logica, [email protected]
John Hommes, Sogeti Netherlands, [email protected]
Hans Smit, Atos, [email protected]
Karel van Straaten, SNS Reaal, [email protected]
Dirk Vandendaele, Gartner Benelux, [email protected]
Peter Bellen, QSM Europe – Estimation, [email protected]

Abstract—There is no generic framework for estimating the cost of implementation and maintenance of packaged software. Each vendor and implementation partner uses its own proprietary techniques for estimation. This makes it hard to compare estimates from different sources or to build up benchmark data for public reference. A working group of the NESMA (Netherlands Software Measurement Association) has started to build a framework for estimating the effort and cost of implementation and maintenance of packaged software. For this total scope a generic cost estimation model has been conceived. The model consists of cost-drivers that determine the size of the software (size drivers) and cost-drivers that influence the amount of work to be done (productivity drivers). In this paper we present the first step of making the generic framework applicable for estimating the realization stage of a package implementation. It is the ambition of the working group to iteratively expand the scope of this framework to the total scope of implementation and maintenance.

Keywords—estimating, package applications, realisation, implementation, cost-drivers

I. INTRODUCTION

In software intensive organizations there is a trend towards less custom made software and an increasing use of packaged software. [1] There is no generic framework for estimating the effort and cost of implementation and maintenance of packaged software. [2] Each vendor and implementation partner uses its own proprietary techniques for estimation. This makes it hard to compare estimates from different sources or to build up benchmark data for public reference. Because of the many questions and discussions within the NESMA community about estimating the implementation and maintenance of packaged software, a working group has started to build an estimating framework. In this paper we present the generic framework, as well as the first part of that framework that has been worked out in detail for the realization stage.

II. PROBLEM STATEMENT

In custom software development all required functionality is realized in an iterative or linear process of design and build. Packaged software contains standard industry best practice functionality to suit the needs of a certain type of business function. In general packaged software contains more standard functionality than a particular organization requires. In the design stage it must be determined which elements of the standard functionality of the packaged software are needed. In the build stage the packaged software is configured to deliver the required functionality. Functionality that cannot be realized with standard functionality must be added as custom software. This means that implementing required functionality with packaged software is an entirely different process than building custom software. To estimate the associated cost a new estimation framework is required.

III. APPROACH

The working group first made an inventory of currently known factors that influence the cost of the implementation and maintenance of packaged software. [3][4][5][6][7][8] After analyzing these factors we created a generic framework and isolated the stages for which we expect the factors to play a different role. All stages can be estimated with one generic cost estimating model, but the details of the model vary per stage.

For packaged software we have identified the following stages for implementation:

• Blueprint
• Realization
• Deployment

For maintenance we have identified the following stages:

• Run
• Event-driven maintenance
• Planned maintenance

The generic cost estimating model can be used for each of the six stages. For each stage the details differ, to make the generic model applicable in practice. In this paper we present the details for the realization stage, because that is the stage most of the working group members are familiar with.

IV. COST-DRIVERS

The objective of the working group is to create a model for estimating the cost of packaged software. In our view the result of the generic model must be expressed in monetary units, because that is the level at which organizations decide whether they are better served with custom software or with an implementation based on packaged software. During the analysis we have treated all known influencing factors as cost-drivers and determined how they influence the cost. For each of these cost-drivers we have established the best estimation technique, based on currently known best practices. The cost-drivers can be divided into three groups:

• Cost-drivers that determine the size of a basic element within a stage. For this type of cost-driver we use the term size driver. Each size driver can have its own size measure, and each size driver has a size delivery rate to transform the size measure into effort. We call this effort the base effort.

• Cost-drivers that influence the effort for a stage. For this type of cost-driver we use the term productivity driver.

• Cost-drivers that are independent of any type of sizing. For this type of cost-driver we use the term size independent cost.

V. GENERIC COST ESTIMATION MODEL

The cost-drivers are combined as shown in figure 1 to form the generic cost estimation model.

Figure 1. Generic Cost Estimation Model

Within each stage different types of size drivers are determined that can be counted and weighed to give a size measure. For each type of size driver the size measure can be combined with its associated size delivery rate (either fixed or given by a formula) to yield the effort associated with that type of size driver. The sum of the effort for all size drivers provides the combined effort. Within each stage productivity drivers are determined that influence the combined effort, depending on the types of size drivers that are relevant within that stage. The productivity drivers transform the combined effort into a total effort. The total effort is used to calculate the size dependent cost for the stage. Within each stage a number of cost-drivers can be applied that are independent of any size driver or productivity driver; the result is the size independent cost for the stage. The sum of the size independent cost and the size dependent cost is the total cost for the stage.
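Put as a formula, the model computes the cost of a stage as the sum of the size-driver sizes times their size delivery rates, multiplied by the productivity drivers, plus the size independent cost. A minimal sketch of this computation follows; all driver names, sizes, rates and monetary figures are invented for illustration and are not NESMA reference values.

```python
# Sketch of the generic cost estimation model in figure 1.
# All numbers below are illustrative assumptions.

# Size drivers: a counted size per type plus a size delivery rate
# (hours per size unit) that turns each size into a base effort.
size_drivers = {
    "configuration":        {"size": 120, "sdr": 2.0},
    "custom_functionality": {"size": 300, "sdr": 4.5},
    "external_interfaces":  {"size": 8,   "sdr": 40.0},
}

# The sum of the base efforts gives the combined effort (hours).
combined_effort = sum(d["size"] * d["sdr"] for d in size_drivers.values())

# Productivity drivers are multipliers: 1 is neutral, below 1 reduces
# the effort, above 1 increases it.
productivity_drivers = {"concurrent_users": 1.10, "package_maturity": 0.95}
total_effort = combined_effort
for multiplier in productivity_drivers.values():
    total_effort *= multiplier

# Total cost is the size dependent cost (effort times rate) plus the
# size independent cost.
hourly_rate = 100.0             # monetary units per hour (assumed)
size_independent_cost = 25_000  # e.g. licenses, hardware (assumed)
size_dependent_cost = total_effort * hourly_rate
total_cost = size_dependent_cost + size_independent_cost
```

The same skeleton applies to each of the six stages; only the driver types, their size measures and the applicable productivity drivers change.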

VI. SIZE DRIVERS FOR THE REALIZATION STAGE

Our intention was to conceive a generic cost estimation model for packaged software that consists of modules with standard functionality and can be configured to offer the required functionality. Well-known examples of this type of packaged software are SAP, Oracle eBS, Microsoft Navision, Siebel, JD Edwards, Infor and Oracle Fusion Apps. To make the generic model applicable in practice we have worked out the realization stage in detail. From the inventory of cost influencing factors we have selected the items that qualify as size drivers for the realization stage of packaged software:

1. Configuration
2. Custom built functionality
3. Implemented modules
4. External interfaces
5. Internal interfaces
6. Data

Figure 2. Size driver types for the Realization stage

Figure 2 shows how we see the identified size driver types within the scope of the realization stage. The guideline that we have created includes appendices with a mapping of these size drivers to the architectural concepts of SAP, Oracle and Navision. For this article we will only use the general representation shown in the figure. A packaged software implementation can consist of different modules, and each module can consist of submodules. Modules consist of the following elements:

• Standard functionality that contains standard industry best practice functionality to suit the needs of a certain type of business function.

• Configuration of the standard functionality to suit the specific requirements of an organization.

• Custom built functionality to suit requirements that are organization specific and cannot be met by configuring standard module functionality.

• Interfaces to exchange information with other applications (external interfaces) or with other modules of the same packaged software (internal interfaces). Internal interfaces can connect one (sub)module to another (sub)module, which means that there can be interdependencies.

Next to the modules the implementation of packaged software often contains data conversions to obtain data from existing systems. Depending on the type and size of the modules it can be necessary to determine the cost drivers of individual modules. In case of large implementations the productivity drivers can be different for different modules.

For all of these size driver types, with the exception of the implemented modules, a number of sizing methods are available. In the guideline we have included appendices on how to use these sizing methods in the realization stage of the implementation of packaged software. For this article we will only briefly introduce them.

A. CEMLI

With Release 12 of Oracle eBS, Oracle has replaced the use of RICE components with CEMLI components. CEMLI stands for Configuration/Customization, Extension, Modification, Localization, and Integration. [9] CEMLI spans the entire realization scope, with the exception of implemented modules.

• Configuration: Configuring the standard application; changing setups and profile values are examples of configuration.

• Customization: Altering standard objects or creating custom objects to meet the client's business needs. Customizations are either Extensions or Modifications.

• Extension: Creating custom code or changing existing objects to behave differently from the standard functionality.

• Modification: Enhancing or changing the standard code to meet the organization's requirements; this modifies the standard behavior of functionality.

• Localization: Defining different legislative support based on country/region/language requirements.

• Integration: Either Data Integration (like APIs or Open Interface Tables) or Application Integration (like EAI or BPEL).
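As an illustration, a size inventory for the realization stage of one module could be recorded per size driver type before applying any sizing method; the counts below are hypothetical.

```python
# Hypothetical size inventory for one module, grouped by the six size
# driver types identified for the realization stage; counts are invented.
module_inventory = {
    "configuration": 45,               # configured setups and profile values
    "custom_built_functionality": 12,  # e.g. extensions and modifications
    "implemented_modules": 3,          # (sub)modules taken into use
    "external_interfaces": 5,          # package to other applications
    "internal_interfaces": 7,          # (sub)module to (sub)module
    "data": 4,                         # conversions from existing systems
}

# Each type can have its own size measure, so counts are reported per
# type rather than summed into a single number.
for driver_type, count in module_inventory.items():
    print(f"{driver_type}: {count}")
```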

B. Configuration Points

Configuration Points is a Gartner proprietary technique to measure the configuration activities in existing packaged software to realize new functionality. [10] Configuration Points can be used in conjunction with Function Points. The method is based on counting three types of objects relevant in the configuration: data, lists and rules. Configuration Points can be used for configuration, interfaces and data based on standard functionality.

C. COSMIC

COSMIC is an ISO standardized method that measures the functional size of software. [11] The COSMIC functional size measurement method is designed to be applicable to software from the following domains:

• Business application software, typically needed in support of business administration, such as banking, insurance, accounting, personnel, purchasing, distribution or manufacturing. Such software is often characterized as 'data rich', as it is dominated largely by the need to manage large amounts of data about events in the real world.

• Real-time software, the task of which is to keep up with or control events happening in the real world. Examples are software for telephone exchanges and message switching, software embedded in devices to control machines such as domestic appliances, elevators, car engines and aircraft, software for process control and automatic data acquisition, and software within the operating system of computers.

• Hybrids of the above, as in real-time reservation systems for airlines or hotels.

COSMIC is applicable to the entire realization scope, with the exception of implemented modules.

D. Function Point Analysis

Like COSMIC, Function Point Analysis is an ISO standardized functional size measurement method. [12][13] Function Point Analysis introduces a unit, the function point, to help measure the size of an application that is to be developed or maintained. The word "application" within the framework of Function Point Analysis means "an automated information system". The function point expresses the quantity of information processing that an application provides to a user. This unit of measurement is independent of how the system is realized in a technological sense. To use Function Point Analysis for packaged software an assumption becomes necessary for the concept of user. This may also be a packaged application specialist who uses the packaged application for parameterization, data entry and preloading table files to deliver the appropriate functionality to the end user. [14] Function Point Analysis is applicable to measure custom built functionality, external interfaces and data.

E. RICEF

RICEF is a common term used to describe five areas of technical development in SAP. [15] RICEF stands for Reports, Interfaces, Conversions, Enhancements and Forms. RICEF spans the entire realization scope, with the exception of implemented modules.

• Reports: All development that deals with the programming of reports. It includes all types of reports, from simple reports where output is produced using single statements to simple ALV reports, ALV Grid, ALV Grid with advanced functionality, etc.

• Interfaces: All interface development. Interfaces are ALE/IDoc developments, involving not just ABAP programming for IDocs but also IDoc customization.

• Conversions: Conversion refers to BDC programming. Data upload from a legacy system in flat file format to the SAP system is done via conversion objects. This involves uploading data through BDC, LSMW, BAPI, etc.

• Enhancements: User Exits, Customer Exits and BAdIs, where code has to be written to enhance standard package software functionality.

• Forms: Includes SAP Smart Forms and SAPscript. Technical development that deals with fetching the necessary data from the SAP system and displaying it as forms for printout is classified under Forms.

VII. SIZE DELIVERY RATE

The Size Delivery Rate is the time it takes to produce one unit of the size measure for a particular size driver. If for example the RICEF measure is used and the size has been determined in Implementation Units, then the Size Delivery Rate will be expressed in hours per Implementation Unit. [16] The Generic Cost Estimation Model in figure 1 seems to suggest that the Size Delivery Rate may be unique for each type of size driver. This is not necessarily true: most sizing methods are designed in such a way that the same Size Delivery Rate can be used for all types of size drivers.

VIII. PRODUCTIVITY DRIVERS FOR THE REALIZATION STAGE

From the inventory of cost influencing factors we selected the items that qualify as productivity drivers for the realization stage of packaged software:

A. Concurrent users
B. Maturity of the packaged software technology
C. Supplier of the packaged software
D. Required system reliability
E. Reusable components
F. Execution time constraints
G. People
H. Process

Each of these productivity drivers impacts the size dependent base effort for implementing the identified size drivers in its own way. If a productivity driver has no impact, its value is 1. When the required effort is reduced by the productivity driver, the resulting value is between 0 and 1. When the required effort increases as a result of the productivity driver, the value is greater than 1. In the next sections each productivity driver is introduced shortly.

A. Concurrent users

The number of (end) users, applications and equipment interacting with the solution. This impacts the test rigor and process control for the realization stage, and performance considerations for standard module functionality, configuration and custom built functionality. The value for this productivity driver will be higher if the number of concurrent users is higher than the number of concurrent users the standard functionality is designed for. The value for this productivity driver will only rarely be below 1.

B. Maturity of the packaged software technology

The architecture (SOA, modular, dedicated), technology (language, hardware, database) and status of the chosen implementation solution (legacy, proven, state-of-the-art, under construction) of the packaged software. This impacts the adaptability, flexibility and scalability of the functionality that is realized. When the packaged software offers business line specific preconfigurations next to vanilla modules, the value for this productivity driver will be below 1. If the version is more than one version behind the current version of the packaged software, or if the architecture or technology choice differs from the advised choice, the value for this productivity driver can increase significantly.

C. Supplier of the packaged software

Openness related to the kernel of the packaged software, adaptability of the packaged software and general support from the supplier. This impacts the adaptability of the functionality that must be realized and the overall development effort. The value for this productivity driver will only rarely be below 1, because the general aspects of this productivity driver are part of the Size Delivery Rate for this packaged software. For a closed packaged application the value of this productivity driver is highly impacted by the knowledge the IT service provider has access to (either through the experience of its own personnel or through access to supplier support information).

D. Required system reliability

As in custom software development, the potential risk and damage to stakeholders when the requirements are not met influences the value of this productivity driver. This impacts reliability considerations for standard module functionality, configuration and custom built functionality. Further it impacts validation, verification, quality control and test rigor. When no special reliability requirements apply and the implementation stays very close to the standard functionality, the value of this productivity driver can be below 1. When reliability requirements are set that are not common for the business function the module is designed for, the value of this productivity driver will increase significantly.

E. Reusable components

Although packaged software by nature consists of reusable components, there is an extra reusability aspect that can influence the effort for realization. This productivity driver is concerned with configurations or custom built functionality that impact more than one function or even more than one module. As in regular software development, the level of reusability may influence effort positively (limited development effort by using a single component for multiple functions) or negatively (similar configuration or custom routines implemented in multiple functions). This productivity driver is interdependent with the maturity of the package technology and the required system reliability. The impact of this productivity driver in the realization stage is usually marginal; its effect on the maintenance effort can be very significant.

F. Execution time constraints

When the standard functionality of the packaged software does not meet the performance constraints, custom enhancements or workarounds can have a significant impact on effort. This is due to the interaction with the architecture and technology of the packaged software.

G. People

Like in every project, people related characteristics such as education and experience level impact the value of this productivity driver.

H. Process

Like in every project, process related characteristics such as the presence of a repeatable approach impact the value of this productivity driver.

When enough data is available the individual values of the productivity drivers can be established. It is highly questionable whether such detailed data will be made publicly available. This was one of the reasons the ISBSG initiative to create a repository of data for packaged software implementation and customization projects did not succeed. Productivity drivers A, B, D and F can be derived purely from the packaged application in relation to the requirements. The other four contain information about how well a certain IT service provider performs. This is competition sensitive information that most companies want to keep proprietary.
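Numerically, the multiplier scheme works out as in the sketch below; the eight driver values are invented for illustration only (a value of 1 is neutral, below 1 reduces effort, above 1 increases it).

```python
# The eight productivity drivers (A-H) applied as multipliers to a
# combined effort; every value below is an invented example.
combined_effort = 2000.0  # hours, from the size drivers (assumed)

productivity_drivers = {
    "A_concurrent_users": 1.05,            # above the designed load
    "B_package_maturity": 0.90,            # business line preconfiguration
    "C_package_supplier": 1.00,            # neutral: covered by the delivery rate
    "D_required_reliability": 1.10,        # stricter than the business function norm
    "E_reusable_components": 1.00,         # usually marginal in realization
    "F_execution_time_constraints": 1.15,  # workarounds needed
    "G_people": 0.95,                      # experienced team
    "H_process": 1.00,                     # repeatable approach in place
}

total_effort = combined_effort
for multiplier in productivity_drivers.values():
    total_effort *= multiplier
```

With these example values the drivers jointly increase the effort by roughly 14 percent, even though two of them are below 1.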

IX. FIRST RESULTS

The first result of the NESMA working group is a user guide for estimating the realization stage of a package implementation. This user guide will be publicly available from the NESMA website www.nesma.nl.

X. FUTURE WORK

This framework for estimating the realization of packaged software implementations must be tested in practice, and sufficient data must be collected to prove its value. The working group will submit the framework to the ISBSG, because this is the best way to ensure that the collected data will not be used as competitive intelligence. We believe that our framework can make the current ISBSG questionnaires on acquisition and implementation of packaged software more valuable for benchmarking purposes. [17][18] Another option is that the framework will be tested by the members of the SAP maintenance cost research initiative that is carried out in the Netherlands by the Software Improvement Group with five major industrial SAP users. [19] The working group will now work on the next part of the general framework, the blueprint stage.

A. Appendix A – Currently identified cost-drivers

This is the total list of currently identified, uncategorized cost-drivers. This list is the basis for the analysis to determine the type of each cost-driver (either size driver or productivity driver) and its relevance for the stage we are working on.

• # named users
• # calls (FAM / TAM / Hosting / incident management)
• # concurrent users
• # production releases / patches
• # trained users
• # key users
• application management nature (1st/2nd line support / SLA / service window)
• experience of the users
• required system reliability
• sizing of tailor made software / customizations
• complexity of system modules
• sizing of configuration
• extent of documentation required
• # (sub)modules implemented
• size of database used (# tables)
• age of application
• percentage of reusable components
• volatility (# changes/period)
• execution time constraint (performance of software)
• version / update (relative to latest)
• volatility of development platform
• use of call management tools
• domain knowledge
• size of organisation (# departments/countries)
• personnel continuity
• # business processes
• language and tool experience
• maturity of technology
• use of software tools
• maturity of demand organization
• development schedule compression
• alignment of implemented functionality vs business processes
• extent of multisite working and quality of inter-site communications
• quality of implementation team
• harmonisation of processes
• quality of the delivered product
• other non-functional requirements
• quality of total team
• 3P: People, Product, Process
• quality of project management / customer
• training
• # external interfaces (package to outside world)
• user support
• # internal interfaces (package to package)
• type of ERP package
• size of data conversions
• supplier of ERP package
• hardware
• frequency of updates / recency of package software versions
• licenses
• helpdesk

References

[1] Bill Swanton, "IT Score for Application Organizations: Maturity Assessment Trends Through January 2012", Gartner publication G00233321, April 2012, www.gartner.com
[2] Erik Stensrud, "Alternative approaches to effort prediction of ERP projects", Information and Software Technology, Volume 43 (2001), www.journals.elsevier.com
[3] Fred Heemstra, "Implementatie van ERP systemen, een kostbaar avontuur", TIEM, May 2006 (in Dutch)
[4] Fred Heemstra and Rob Kusters, "Wat bepaalt de kosten van ERP-implementatie?", Maandblad voor Accountancy en Bedrijfseconomie, July/August 2005 (in Dutch), www.mab-online.nl
[5] Fred Heemstra, Rob Kusters and Arjan Jonker, "Kosten SAP beheer met 4 knoppen te besturen", Automatisering Gids, June 2007 (in Dutch), www.kwdrm.nl
[6] Frank Vogelezang, Bert Veenstra, Cor van Dongen, Johan de Vries, Joyce Span and Hessel Bijlsma, "Estimating the functional size of applications built with the Oracle EBS package", Lecture Notes in Computer Science LNCS 5891, Proceedings of the international conferences IWSM 2009 and Mensura 2009, November 2009, iwsm-mensura-2009.nesma.nl
[7] Guy Janssens, Rob Kusters and Fred Heemstra, "Sizing ERP implementation projects: an activity-based approach", International Journal of Enterprise Information Systems, Volume 4, Number 3, 2008
[8] Barry Boehm, Chris Abts, Winsor Brown, Sunita Chulani, Bradford Clark, Ellis Horowitz, Ray Madachy, Donald Reifer and Bert Steece, "Software Cost Estimation with COCOMO II", Prentice-Hall, 2000, sunset.usc.edu/csse
[9] Oracle Consulting, "Best Practices for Upgrading Oracle E-Business Suite", July 2011, www.oracle.com
[10] Gartner Inc., "FFPA-I Methodology User Manual", November 2009, www.gartner.com
[11] COSMIC, "The COSMIC Functional Size Measurement Method version 3.0.1 Measurement Manual (The COSMIC Implementation Guide for ISO/IEC 19761:2003)", May 2009, www.cosmicon.com
[12] IFPUG, "Function Point Counting Practices Manual", Release 4.3.1, January 2010, www.ifpug.org
[13] NESMA, "Definitions and counting guidelines for the application of Function Point Analysis, NESMA functional size measurement method conform ISO/IEC 24570", Version 2.1, February 2002, www.nesma.nl. The most current 2.2 version (in Dutch) has an identical content.
[14] Loredana Frallicciardi, "Enterprise Resource Planning Function Point Analysis: A Possible Approach and a Practical Experience", Chapter 21 in "The IFPUG Guide to IT and Software Measurement", February 2012, www.ifpug.org
[15] Walt, "Guide to SAP ABAP for beginners", ricef.blogspot.com
[16] Larry Putnam Jr., "Best in Class Software Measurement & Estimation Processes for SAP", presentation, 2008, www.qsm.com
[17] ISBSG, "Data Collection Questionnaire Business Systems Software Package Acquisition", version 1.1, 2007, www.isbsg.org
[18] ISBSG, "Data Collection Questionnaire Business Systems Software Package Implementation", version 1.1, 2007, www.isbsg.org
[19] Software Improvement Group, "SAP beheerkosten onderzoek", www.sig.eu
