ARAB ACADEMY FOR SCIENCE, TECHNOLOGY AND MARITIME TRANSPORT (AASTMT) College of Computing and Information Technology Department of Information Systems

AN EFFORT ESTIMATION APPROACH FOR SERVICE-ORIENTED ARCHITECTURE (SOA) PROJECTS Submitted By: Esraa A. Farrag A thesis submitted to AASTMT in partial fulfilment of the requirements for the award of the degree of

MASTER OF SCIENCE IN Information Systems

Supervisors Prof. Dr. Ramadan Moawad

Prof. Dr. Ibrahim Imam

Vice-Dean of Faculty of Computers and Information Technology

Prof. of Computer Science

Future University in Egypt

Arab Academy For Science, Technology and Maritime Transport

2016

DECLARATION

I certify that all the material in this thesis that is not my own work has been identified, and that no material is included for which a degree has previously been conferred on me. The contents of this thesis reflect my own personal views, and are not necessarily endorsed by the University. (Signature) (Date)


ACKNOWLEDGMENTS I cannot be more grateful to God for giving me the strength, courage, persistence, and enthusiasm to finish the long path of my master's degree. Special thanks to my advisor, Prof. Dr. Ramadan Moawad, for his patience, encouragement, motivation, and continuous guidance. I have been so lucky to have a supervisor who cared so much about my work and who responded to my questions and queries so promptly. I would like to thank my advisor, Prof. Dr. Ibrahim F. Imam, who allowed me the space and freedom I needed to work, and for his continued support and guidance. My deepest gratitude goes to my fellow volunteers in Resala Charity Organization, the most positive, inspiring, and motivating people I have ever met on Earth. They taught me how to give and expect nothing in return. They taught me that life is not about happiness; it is about meaning and purpose. I cannot be more proud to be one of them, and I pray every single day to be among them for the rest of my life. A special word of thanks also goes to my family, who have been encouraging, supportive, and shown belief in me and my work. I know I always have my family to count on when times are rough.

ABSTRACT In the last few decades, SOA (Service-Oriented Architecture) has become the new trend in the IT industry, and many organizations tend to migrate to SOA in order to cope with rapidly changing business needs. Effort estimation of SOA projects has become a real challenge to project managers due to the limited literature addressing this issue. Traditional effort estimation techniques do not fit SOA projects entirely, as SOA has unique characteristics that were not addressed by traditional cost estimation approaches: loose coupling, reusability, composability, and discoverability. On the other hand, the cost estimation approaches that have been proposed for SOA projects are still immature, as they need further development, and most of them are hard to apply in industry because they are guidelines rather than practical cost estimation approaches. This thesis proposes an effort estimation approach for SOA projects that has been applied to a variety of services. It considers SOA characteristics and the various cost factors for different services, and provides an effort estimation technique specific to each service type: available, migrated, new, or composed. The approach also gives an effort ratio for each project phase, which eases resource allocation throughout the lifetime of the project. The approach has been applied to real-life projects in the IT industry: the SOA project is divided into its component services, each service is estimated solely based on its type using a type-specific estimation methodology, and the services' efforts are then aggregated to calculate the project's overall effort. The estimated effort relative error in the case studies ranged from 3.66% to 19.14%, a significant improvement over the industry's existing estimation relative error of about 30%.
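The aggregation step and the relative-error metric quoted above can be sketched as follows. This is a minimal illustration only: the service names and effort figures are hypothetical, and the actual per-type estimation methods are developed in the thesis itself.

```python
# Hypothetical sketch: each service is estimated on its own according to
# its type, the estimates are summed into a project total, and the result
# is compared against the actual effort via the relative error.

def aggregate_effort(service_efforts):
    """Sum the per-service effort estimates into a project total."""
    return sum(service_efforts.values())

def relative_error(estimated, actual):
    """Magnitude of the relative error, as a percentage."""
    return abs(estimated - actual) / actual * 100

# Hypothetical per-service estimates (person-hours).
estimates = {
    "migrated_service": 120.0,
    "new_service": 200.0,
    "composed_service": 40.0,
}

estimated_total = aggregate_effort(estimates)   # 360.0
actual_total = 340.0                            # hypothetical actual effort

print(round(relative_error(estimated_total, actual_total), 2))  # 5.88
```

A project-level relative error below the industry's typical 30% overrun is what the case studies in Chapter 4 evaluate.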

TABLE OF CONTENTS Chapter

Page

ACKNOWLEDGMENTS ................................................................................................. iii ABSTRACT ....................................................................................................................... iv TABLE OF CONTENTS .................................................................................................... v LIST OF TABLES ............................................................................................................ vii LIST OF FIGURES ........................................................................................................... ix List of Publications ........................................................................................................... xii List of Abbreviations ....................................................................................................... xiii Chapter:1 Introduction ...................................................................................................... 15 1.1. Research Background and Problem Statement .................................................. 15 1.2. Research Questions ............................................................................................ 16 1.3. Research Objectives and Contributions ............................................................. 16 1.4. Thesis Outline .................................................................................................... 17 Chapter 2: Background and Literature Review ................................................................ 20 2.1 Definitions: ......................................................................................................... 20 2.1.1. SOA (Service Oriented Architecture): ........................................................ 20 2.1.2. A service: .................................................................................................... 
21 2.1.3. Cost versus effort ........................................................................ 21 2.2. Benefits of SOA ................................................................................. 21 2.2.1. Technical Benefits ...................................................................... 23 2.2.2. Business Benefits ........................................................................ 24 2.3. SOA Characteristics ........................................................................... 25 2.3.1. Loose coupling: ........................................................................... 25 2.3.2. Abstraction (separation of concerns, or information hiding): ..... 25 2.3.3. Autonomy (or Encapsulation): .................................................... 25 2.3.4. Reusability .................................................................................. 26 2.3.5. Composability ............................................................................. 26 2.3.6. Discoverability ............................................................................ 26 2.4. SOA Project Phases ............................................................................ 27 2.4.1. Requirements .............................................................................. 27 2.4.2. Design ......................................................................................... 27 2.4.3. Development ............................................................................... 27 2.4.4. Testing......................................................................................... 28 2.4.5. Implementation ........................................................................... 28 2.5.
Classification of Services ................................................................................... 30 2.5.1. Service Usage Perspective: ......................................................... 30 2.5.2. Service Type Perspective: ........................................................... 31 2.5.3. Construction Classification ......................................................... 32 2.6. Cost Estimation Approaches .............................................................. 37 2.6.1. Traditional Cost Estimation Approaches .................................... 37 2.6.2. SOA specific cost estimation approaches ................................... 41 Chapter 3: Methodology ................................................................................... 52 3.1. Available service ................................................................................ 55 3.2. Migrated service ................................................................................. 55

3.2.1. Phased effort distribution ............................................................ 55 3.3. New service ........................................................................................ 72 3.3.1. Adjusted function point............................................................... 73 3.3.2. Phased effort distribution ............................................................ 95 3.4. Composed service ............................................................................ 101 Chapter 4: Experiment and Results................................................................. 103 4.1. Project Alpha: ................................................................................... 104 4.1.1. Customer Name AutoComplete Service: .................................. 105 4.1.2. Change Password Service: ........................................................ 113 4.1.3. Integration with Customer Service ........................................... 116 4.1.4. Client "X" Integration Service: ................................................. 118 4.1.5. Calculate Totals Service: .......................................................... 124 4.2. Project Beta: ..................................................................................... 129 4.2.1. Invoice Service: ........................................................................ 130 4.3. Accumulation of the results.............................................................. 135 4.3.1. Accumulation of migrated service results: ............................... 135 4.3.2. Accumulation of new service results ........................................
139 Chapter 5: Conclusion..................................................................................... 146 5.1. Limitation of the study ..................................................................... 148 5.2. Future work ...................................................................................... 148 References ....................................................................................................... 149 Arabic Abstract (الملخص) ................................................................................. 155

LIST OF TABLES Table

Page

Table 1: SOA migration strategies advantages and disadvantages .................................... 36 Table 2: Testing perspectives; each stakeholder's needs and responsibilities are shown in black, advantages in green, issues and problems in red [33] ............................................ 63 Table 3: Comparison of different testing levels with different testing perspectives .......... 65 Table 4: Weight of the testing levels and testing perspectives matrix.............................. 66 Table 5: Planning & Requirements Phase relative weights grouped by migration strategy ........................................................................................................................................... 67 Table 6: Design Phase relative weights grouped by migration strategy ........................... 67 Table 7: Development Phase relative weights grouped by migration strategy ................ 68 Table 8: Testing Phase relative weights grouped by migration strategy .......................... 68 Table 9: Transition Phase relative weights grouped by migration strategy ...................... 69 Table 10: Cost factors weights aggregated by migration strategy .................................... 69 Table 11: Migrated Service Factors Weight Distribution among Phases .......................... 71 Table 12: The relative cost of phases for each migration strategy ................................... 72 Table 13: Functional Complexity Matrix of Data functions ............................................. 78 Table 14: ILF functional complexity translation into unadjusted function point table .... 78 Table 15: EIF functional complexity translation into unadjusted function point table .... 78 Table 16: Example of the total ILF and EIF count .......................................................... 79 Table 17: EI functional complexity matrix ......................................................
81 Table 18:EO functional complexity matrix ...................................................................... 81 Table 19: EQ functional complexity matrix ..................................................................... 81 Table 20: translate the EI and EO to unadjusted function points ..................................... 82 Table 21 translate the EQ to unadjusted function points .................................................. 82 Table 22: Example of the total unadjusted Function Point Count .................................... 82 Table 23: Traditional function point considered factors ................................................... 90 Table 24: Ignored function point cost factors ................................................................... 93 Table 25 : Example of Total Degree of Influence ............................................................ 94 Table 26: Phase Distribution of Software Development Effort Based on Estimation Approach ........................................................................................................................... 96 Table 27: overall phase distribution profile ...................................................................... 97 Table 28 : Size categories & their equivalent function point size .................................... 97 Table 29 : Estimated Effort Distribution .......................................................................... 99 Table 30: Comparison of phased effort distribution of the different migration strategies and the new service ......................................................................................................... 100 Table 31: Case studies description .................................................................................. 104 Table 32: Auto Complete unadjusted function point count ............................................ 105 Table 33:AutoComplete Cost factors and their weights ................................................. 
106 Table 34 : Relative error of the Autocomplete New service using adjusted function point ......................................................................................................................................... 106 Table 35 : Autocomplete service New service phased estimation results ...................... 108 Table 36: Adjusted Function Point relative error compared to the phased effort distribution relative error ................................................................................................ 110

Table 37: Estimated and Actual Effort for Customer Name AutoComplete migrated service ............................................................................................................................. 112 Table 38: Change Password migrated service results .................................................... 114 Table 39: Integration with customer service migrated effort estimation ....................... 116 Table 40: Client "X" integration unadjusted function point count ................................. 119 Table 41: Client "X" integration General System Characteristics ................................... 119 Table 42: Adjusted Function point estimated effort versus the actual effort for Client "X" integration new Service................................................................................................... 120 Table 43: Client "X" integration new service: Phased effort estimation results ............. 121 Table 44: Client "X" Integration new service adjusted function point relative error compared to the phased effort distribution relative error................................................ 123 Table 45: Calculate totals service unadjusted function point count................................. 124 Table 46: Calculate totals service general system characteristics ................................. 125 Table 47: Totals service Adjusted Function point estimates and the relative error ........ 125 Table 48: Totals service phased effort ratio results ........................................................ 126 Table 49: Calculate Totals new service estimated effort using adjusted function point compared to the phased effort distribution relative error................................................ 129 Table 50: Invoice service unadjusted function point count ........................................... 130 Table 51: Invoice service general system characteristics ..............................................
131 Table 52: Invoice new Service estimated effort using adjusted function point compared to the actual effort ........................................................................................................... 132 Table 53: Invoice new service phased effort estimation results ...................................... 132 Table 54: Function point relative error compared to the phased effort distribution relative error ................................................................................................................................ 134 Table 55: Estimated effort compared to the actual effort in all the migrated services of the case studies ................................................................................................................ 135 Table 56: Estimated effort ratio compared to the actual effort in the migrated services of the case studies ................................................................................................................ 137 Table 57: Effort estimation relative error in the different phases for migrated service in the case studies ................................................................................................................ 138 Table 58: Phased effort distribution accumulation, results for all the new services of the case study ........................................................................................................................ 139 Table 59: Effort estimation relative error in the different phases for new service in the case studies...................................................................................................................... 140 Table 60: Comparison between the adjusted function point and the phased effort distribution for the new service of the case studies ........................................................ 141 Table 61: Accumulation of the results of the case study ................................................ 143

LIST OF FIGURES Figure

Page

Figure 1:Q: What are the IT/technology problems your company hopes to address using SOA?‎[13] .......................................................................................................................... 22 Figure 2: SOA technical and business drivers in different industries‎[13] ........................ 22 Figure 3 : Cutover Strategies ‎[35]..................................................................................... 29 Figure 4: Types of services from construction perspective .............................................. 33 Figure 5 : Different migration strategies ........................................................................... 34 Figure 6 : Migration strategies technical value versus business value ............................. 35 Figure 7: Data movement Types in Cosmic ‎[53] .............................................................. 40 Figure 8: SMART input and output .................................................................................. 42 Figure 9: SMART process activities ................................................................................. 43 Figure 10:Principle of Divide-and-Conquer‎[24] .............................................................. 49 Figure 11:Procedure of SOA Project Development Cost Estimation based on Divide-andConquer ‎[24] ..................................................................................................................... 49 Figure 12: The high level methodology ............................................................................ 53 Figure 13: The detailed proposed approach ...................................................................... 54 Figure 14: Block Diagram shows the steps of phased effort for migrated services approach ............................................................................................................................ 
56 Figure 15: Effort ratio distributed in each phase for different service migration strategies ........................................................................................................................................... 72 Figure 16: SOA Adjusted Function Point Estimation Process ......................................... 73 Figure 17: difference between Scope and Boundary ........................................................ 74 Figure 18: The boundary of an HR project ....................................................................... 75 Figure 19: Count the service functions count ................................................................... 75 Figure 20: The view of a software application from Function Point perspective. ............ 76 Figure 21 : Data Functions types ...................................................................................... 76 Figure 22 : the difference between DET and RET ........................................................... 78 Figure 23 : Transactional Function Types ........................................................................ 80 Figure 24 : Synchronous service Vs Asynchronous ......................................................... 86 Figure 25: Semantic web service vs syntax web service .................................................. 87 Figure 26 Web services Orchestration vs Choreography ................................................. 87 Figure 27 Comparison among different software size using Function Point scales ......... 98 Figure 28 : Comparison among different software sizes scale in LOC ............................ 98 Figure 29: Estimated phased effort distribution for new service .................................... 100 Figure 30: Comparison of phased effort distribution of the different migration strategies and the new service ......................................................................................................... 
101 Figure 31: Estimated Effort using Function point versus the actual effort for Customer Name AutoComplete Service ......................................................................................... 107 Figure 32: Estimated effort versus actual effort for Customer Name AutoComplete new service ............................................................................................................................. 108 Figure 33: Effort estimation relative error among the project phases for Customer Name AutoComplete new service ............................................................................................ 109 Figure 34: Comparison between estimated effort ratio and actual effort ratio among the project phases for Customer Name AutoComplete new service .................................... 110

Figure 35: Comparison between adjusted function point and phased effort distribution relative error for Customer Name AutoComplete new service ..................................... 111 Figure 36: Estimated effort and actual effort for Customer Name AutoComplete migrated service ............................................................................................................................. 112 Figure 37: Relative error among the project phases for Customer Name AutoComplete migrated service .............................................................................................................. 112 Figure 38: Estimated Effort ratio of customer name autocomplete service compared to its actual effort ratio ............................................................................................................. 113 Figure 39: Estimated effort of Change Password migrated service compared to the actual effort ................................................................................................................................ 114 Figure 40: Relative error in the different phases of change password migrated service ......................................................................................................................................... 115 Figure 41: Estimated effort ratio of Change Password migrated service compared to the actual effort ratio distributed among project phases ....................................................... 115 Figure 42: Estimated effort compared to actual effort of Integration with Customer migrated service .............................................................................................................. 117 Figure 43: Estimated effort ratio compared to actual effort ratio distributed among the project phases for Integration with Customer migrated service .................................... 
117 Figure 44: Relative error in the different project phases of Integration with Customer migrated service .............................................................................................................. 118 Figure 45: Client “X” Integration new service estimated effort using adjusted Function point compared to the actual effort ................................................................................. 120 Figure 46: Client “X” Integration new service estimated phased effort distribution compared to the actual effort .......................................................................................... 121 Figure 47: Client “X” Integration new service estimated effort distribution compared to the actual effort ratio ....................................................................................................... 122 Figure 48:Client “X” Integration new service estimated phased effort distribution relative error ................................................................................................................................. 123 Figure 49: Comparison between the relative error of adjusted function point and the phase effort distribution for the Client “X” Integration service ..................................... 124 Figure 50: Calculate totals new service adjusted function point estimated effort compared to the actual effort ........................................................................................................... 126 Figure 51: Calculate Totals new service estimated effort compared to the actual effort in the different phases ......................................................................................................... 127 Figure 52: Calculate Totals new service estimated effort ratio compared to the actual effort ratio in the different phases ................................................................................... 
128 Figure 53: Calculate Totals new service estimated effort ratio relative error in the different phases ............................................................................................................... 128 Figure 54: Calculate Totals service the adjusted function point relative error compared to the phased effort distribution .......................................................................................... 129 Figure 55: Invoice service Estimated effort using adjusted function point compared to the actual effort ............................................................................................................... 132 Figure 56: Invoice new service estimated effort compared to the actual effort in the different phases ............................................................................................................... 133 Figure 57 : Invoice new service estimated effort ratio compared to the actual effort ratio in the different phases ..................................................................................................... 133

Figure 58: Invoice new service relative error in the different phases .............................. 134 Figure 59: Adjusted Function Point relative error compared to the phased effort distribution relative error for Invoice new service .......................................................... 135 Figure 60: Estimated effort compared to the actual effort in the migrated services of the case studies...................................................................................................................... 136 Figure 61: Relative error in the migrated services of the case studies............................ 136 Figure 62: Estimated effort ratio compared to the actual effort in the migrated services of the case studies ................................................................................................................ 137 Figure 63: Effort estimation relative error in the different phases for all the migrated service in the case studies ............................................................................................... 138 Figure 64: Phased effort distribution accumulation, results for the new services of the case study ........................................................................................................................ 139 Figure 65: Effort estimation relative error in the different phases for new service in the case studies...................................................................................................................... 140 Figure 66: Comparison between the estimated effort using adjusted function point and phased effort distribution to the actual effort .................................................................. 142 Figure 67: Comparison between the adjusted function point and phased effort distribution relative error .................................................................................................................... 142

List of Publications

[1] "Phased Effort Estimation of Legacy Systems Migration to Service Oriented Architecture," International Journal of Computer and Information Technology, Volume 03, Issue 03, May 2014.
[2] "An Approach for Effort Estimation of Service Oriented Architecture (SOA) Projects," Journal of Software, Volume 11, Number 1, January 2016.

List of Abbreviations

Cfsu: Functional Size Unit
COCOMO: The Constructive Cost Model
COTS: Commercial Off-The-Shelf
CSBSG: China Software Benchmarking Standard Group
D&C: Divide And Conquer
DET: Data Element Type
EI: External Inputs
EIF: External Interface Files
EO: External Outputs
EQ: External Inquiries
FP: Function Point
FTR: File Type Referenced
FUR: Functional User Requirements
GSC: General System Characteristics
ILF: Internal Logical Files
LOC: Lines Of Code
RET: Record Element Type
ROI: Return On Investment
RUP: Rational Unified Process
SLA: Service Level Agreement
SMART: Service-Oriented Migration And Reuse Technique
SMIG: SMART Interview Guide
SOA: Service Oriented Architecture
TDI: Total Degree Of Influence
VAF: Value Adjustment Factor
WCF: Windows Communication Foundation

CHAPTER 1 INTRODUCTION

Chapter 1: Introduction

Recently, SOA [1] has become the new trend in the IT industry, which explains why many organizations tend to migrate to SOA. These organizations are often motivated by the business and technical benefits of SOA. The major business benefits of SOA are higher productivity and better quality in less time [2]. However, the core benefits of SOA are mainly technical [3], including separation of concerns, enhanced product quality, and better coping with changing business requirements.

1.1. Research Background and Problem Statement

Software effort estimation represents a major challenge to project managers, as the average cost overrun is around 30% [4]. Moreover, most software projects can be considered partial failures, since they fail to meet some or all of their cost, schedule, quality, or requirements objectives [5]. For instance, in the United States, only about one-sixth of all projects were completed on time and within budget, nearly one-third were canceled outright, and over half were considered "challenged". Of the challenged or canceled projects, the average project was 189 percent over budget, 222 percent behind schedule, and contained only 61 percent of the originally specified features [5]. This illustrates how crucial accurate effort estimation is for limiting unpredicted risks and launch slips [6]. It is worth noting that estimates not only forecast the future but also frequently affect it. Estimates that are too low can lead to lower quality, rework in later phases, and a higher risk of project failure. On the other hand, estimates that are too high can reduce productivity in accordance with Parkinson's law, which states that work expands to fill the time available for its completion [1]. All these estimation challenges apply to traditional software projects, but for SOA projects they are even more critical. Despite the benefits of SOA, organizations tend to consider migration to SOA a risky task, not only because SOA is a relatively new technology, which in itself represents a major project risk [6][7], but also because project managers have little guidance on how to estimate the effort of SOA projects [8]. The effort of SOA projects cannot be accurately estimated using traditional software effort estimation techniques, as these approaches do not entirely fit SOA projects due to the unique characteristics of SOA [9]. These unique characteristics include loose coupling, reusability, composability, and discoverability. SOA characteristics have a major impact on cost in ways that traditional cost estimation approaches cannot address. Many SOA cost estimation approaches have been proposed to close this gap; however, these approaches are not mature enough to be applied in real-life projects, and they remain guidelines rather than actual estimation approaches.

1.2. Research Questions

Q1: How do SOA characteristics affect the estimation process of SOA projects?

Q2: How are the efforts of SOA projects estimated in the proposed approach?

Q3: Why is phased effort estimation important in SOA projects?

1.3. Research Objectives and Contributions

Following the research questions above, the research objectives are organized as answers to each question, and the contributions of this work are specified accordingly.

Q1: How do SOA characteristics affect the estimation process of SOA projects?

SOA projects have unique characteristics that differentiate them from traditional software projects: loose coupling, abstraction, autonomy, reusability, composability, and discoverability. These characteristics have often been ignored when estimating cost using traditional software cost estimation approaches, as SOA does not fully fit into those approaches, and this has led to inaccurate effort estimates. The main contribution in this part of the work is addressing the different SOA characteristics, each of which affects the effort in a different manner.

Q2: How are the efforts of SOA projects estimated in the proposed approach?

Services are classified by construction into available, migrated, new, and composed services. Existing cost estimation approaches did not consider this diversity and estimated all types of services using the same methodology. The main contribution in this part of the work is that the SOA project is broken down into its constituent services, and each service is classified as available, migrated, new, or composed. Each type has its own cost factors and method of estimating effort. For example, an available service is a service that already exists, so the main costs are integration and testing, as the development effort is zero. A migrated service is a service that will be reused after modifications; its cost varies depending on the migration strategy used: wrapping, re-engineering, or replacement. A new service is a service to be developed from scratch; it can be estimated using either the adjusted function point approach or phased effort distribution. The adjusted function point approach drops traditional software cost factors that do not apply and considers the unique characteristics of SOA instead. The phased effort distribution can be used to estimate each phase's effort early in the project by knowing only the requirements phase effort. A composed service can be estimated by breaking it down into its constituent services and estimating each component service; the overall effort is then the summation of the constituent services' effort plus the integration effort. Considering each service type and its different cost factors leads to more accurate estimation.
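The per-type breakdown described above can be sketched in code. The following is a minimal illustration of the idea only, not the thesis's calibrated model: the record fields, the strategy factors, and all the numbers involved are hypothetical placeholders.

```python
# Sketch (with hypothetical factors) of estimating an SOA project by
# classifying each service by its construction type.

def estimate_service(service):
    """Return the estimated effort (person-days) for one service."""
    kind = service["type"]
    if kind == "available":
        # Development effort is zero; only integration and testing remain.
        return service["integration"] + service["testing"]
    if kind == "migrated":
        # Effort depends on the chosen migration strategy (placeholder factors).
        factor = {"wrapping": 0.3, "re-engineering": 0.7, "replacement": 1.0}
        return service["legacy_size"] * factor[service["strategy"]]
    if kind == "new":
        # Supplied upstream, e.g. by an adjusted-function-point estimate.
        return service["estimated_effort"]
    if kind == "composed":
        # Sum the constituent services, then add the integration effort.
        return (sum(estimate_service(c) for c in service["components"])
                + service["integration"])
    raise ValueError(f"unknown service type: {kind}")

def estimate_project(services):
    """Overall project effort is the sum over its constituent services."""
    return sum(estimate_service(s) for s in services)
```

The point of the sketch is structural: each construction type routes to its own cost factors, and the project total is a simple summation over the classified services.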

Q3: Why is phased effort estimation important in SOA projects?

Phased effort estimation helps project managers allocate resources more easily across the different phases. The main contribution in this part of the work is the phased effort ratio, with which the effort of the different project phases can be estimated from the effort of the requirements phase. This can be done either at the end of the requirements phase, using the actual requirements phase effort, or at the start of the project, using expert judgment to estimate the requirements phase effort.
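As an illustration of the phased-effort-ratio idea, the sketch below projects the remaining phases from a known (actual or expert-judged) requirements phase effort. The ratio values are invented placeholders, not the ratios derived in this thesis.

```python
# Illustrative phase ratios relative to the requirements phase (= 1.0).
# These numbers are made-up placeholders for demonstration only.
PHASE_RATIOS = {
    "requirements": 1.0,
    "design": 1.8,
    "development": 3.5,
    "testing": 2.0,
    "implementation": 0.7,
}

def phased_estimate(requirements_effort, ratios=PHASE_RATIOS):
    """Project per-phase effort (same unit as the input) from phase ratios."""
    return {phase: requirements_effort * r for phase, r in ratios.items()}
```

With, say, 10 person-days measured for requirements, every later phase (and hence the project total) follows directly from the historical ratios, which is what makes early resource allocation possible.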

1.4. Thesis Outline

This thesis is organized as follows. Chapter 2 presents the background of our research. SOA has both technical and business benefits, which motivates organizations to go through SOA migration. SOA also has unique characteristics, which explain why traditional cost estimation approaches do not entirely fit SOA projects. An SOA project, like any other software project, is composed of several phases, which are detailed in that chapter, together with classifications of services from different perspectives and various cost estimation approaches. Our approach is based on classifying services by construction into available, migrated, new, and composed services. Each service type has its own effort estimation approach, as each type has its unique cost factors and conditions. The detailed approach is discussed in Chapter 3.

The approach is applied to projects in industry, and the results are detailed in Chapter 4. The approach has been applied to two projects in an organization that has worked mainly on e-government projects in Egypt for more than ten years. Chapter 5 concludes the thesis, highlighting the main objectives of the research and the future work.


CHAPTER 2 BACKGROUND AND LITERATURE REVIEW


Chapter 2: Background and Literature Review

This chapter presents the existing literature related to our research. SOA has become the trending technology of the last decade, and many organizations tend to migrate to SOA, often motivated by one or more of its benefits. These benefits are both technical and business benefits, as presented in the introduction. SOA projects differ from traditional software projects in their unique characteristics, which make effort estimation of SOA projects using traditional software effort estimation approaches a challenging task. Since traditional cost estimation approaches do not consider the SOA characteristics properly, dedicated SOA effort estimation approaches have been proposed. Both traditional and SOA effort estimation approaches are presented later in this chapter. For accurate estimation, services have to be properly classified; services can be classified from different perspectives, which are also presented in this chapter. When organizations migrate to SOA, they can choose from different migration strategies: wrapping, replacement, re-engineering, or a mix and match of the others (migration). Migration strategies are also discussed in this chapter. SOA projects, like other software projects, have project phases: requirements, design, development, testing, and implementation. Each phase has different activities, and in order to estimate the effort of an SOA project, each activity in each phase has to be well estimated. The SOA project phases and their activities are discussed later in this chapter.

2.1. Definitions

In this subsection, definitions are presented to remove any ambiguity about the concepts used.

2.1.1. SOA (Service Oriented Architecture)

SOA has been defined by both The OASIS group‎[10] and the Open Group ‎[11]. OASIS defines SOA as: A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations.


The Open Group's definition [11] is: Service-Oriented Architecture (SOA) is an architectural style that supports service-orientation. Service-orientation is a way of thinking in terms of services, service-based development, and the outcomes of services.

2.1.2. A service

A service is a logical representation of a repeatable business activity that has a specified function (e.g., check customer credit, provide weather data, consolidate drilling reports) [12]. It is self-contained, may be composed of other services, and is seen by service consumers as a "black box".

2.1.3. Cost versus effort

Although the terms "cost" and "effort" are often used as synonyms in software project management, in the software engineering domain cost is defined in a monetary sense: it refers to the partial or total monetary cost of providing certain products or services [13], while effort refers to the staff time spent on the activities that provide those products or services. Consequently, project cost includes, but is not limited to, project effort. In this thesis, effort estimation of SOA projects is our primary focus, so the term "effort" will be used more often.
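The distinction can be made concrete with a toy calculation; the rate and expense figures below are purely hypothetical.

```python
# Toy illustration of the cost/effort distinction: effort is staff time,
# while cost adds a monetary rate plus non-effort expenses
# (licences, infrastructure, per-use fees). All figures are hypothetical.

def project_cost(effort_person_hours, hourly_rate, other_expenses=0.0):
    """Project cost includes, but is not limited to, the cost of effort."""
    return effort_person_hours * hourly_rate + other_expenses
```

For example, 100 person-hours of effort at a $50 rate gives a $5,000 effort cost, but adding a $1,200 licence raises the project cost to $6,200 while the effort is unchanged.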

2.2. Benefits of SOA

SOA tackles many of the problems that face project managers in the industry nowadays; these problems are shown in Figure 1 [13]. Most project managers prefer SOA for its flexibility and integration. Integration in SOA covers not only existing applications but also data integration and service integration.


Figure 1: "What are the IT/technology problems your company hopes to address using SOA?" [13]. Reported responses: more flexible architecture (71%), integration to existing applications (67%), data integration (62%), service integration (59%), composite application development (53%), enterprise portal initiatives (49%), business management process implementations (46%), service management and governance (39%), other technology initiatives (1%).

SOA can be implemented in a wide range of business domains; however, migration to SOA is driven by different motives in different domains. As shown in Figure 2, most industries migrate to SOA in order to gain a competitive advantage and to enhance internal operations and process efficiency [13]. The technical drivers are mainly optimal use of IT resources and short- and long-term cost savings. In the next subsection, both the technical and business benefits of SOA are highlighted.

Figure 2: SOA technical and business drivers in different industries‎[13]


2.2.1. Technical Benefits

As can be seen in Figure 2, the benefits of SOA are mainly technical rather than business benefits [15]. This subsection discusses the technical benefits: code mobility, enhanced security, better software quality, support for multiple client types, better maintainability, development parallelism, higher availability, and improved integration.

2.2.1.1. Code Mobility

SOA is location independent [15], so a service can be published anywhere [16]. In most cases the service consumer will not even notice that the service has been moved to an external provider, mainly because service consumers do not care where the service is located.

2.2.1.2. Enhanced Security

SOA promotes security [15], as each service represents an additional network interface. In traditional software approaches, security was normally handled at the front end, and most companies did not implement database security because maintaining multiple security lists is too complicated. Services, on the other hand, are used by multiple applications, so they carry their own security mechanisms.

2.2.1.3. Better Software Quality

SOA takes software quality to the next level [15], as services can easily be tested by developers writing unit tests. These test suites can be run to validate a service independently of any application that uses it. More and better testing usually means fewer defects and higher software quality.

2.2.1.4. Support for Multiple Client Types

SOA is technology independent, so it can be integrated with any client technology [13]. No matter what technology the client uses, SOA can easily integrate with it.

2.2.1.5. Better Maintainability

Because services are loosely coupled, they can be changed and modified easily [15], which lowers the maintenance cost.


2.2.1.6. Development Parallelism

As services are loosely coupled, multiple developers can work on many services independently. This reduces development time and enhances development parallelism [15].

2.2.1.7. Higher Availability

As services are location independent, multiple instances of a single service can be published on multiple servers. In case of network or server failure, requests can be redirected to another server without the client's knowledge, which means higher service availability.

2.2.1.8. Improved Integration

One of the core benefits of SOA is enhanced integration, as mentioned earlier. This enhancement is achieved because SOA offers standardized service descriptions and message structures [16][17].

2.2.2. Business Benefits

Indeed, the technical benefits are the main motive for organizations to migrate to SOA; however, business benefits are more easily justified to upper management. These business benefits include better return on investment, leveraging legacy investments, and improved agility.

2.2.2.1. Better Return on Investment

SOA increases the ROI (Return on Investment) [15][16] by reducing integration expenses, increasing asset reuse, and increasing business agility.

2.2.2.2. Leveraging the Legacy Investment

Legacy systems can be integrated with SOA with minimal effort by wrapping them [16]. The migration guidelines are detailed in [18], and the phased effort of service wrapping of legacy systems is detailed in [9].


2.2.2.3. More Agility

SOA enables organizations to cope rapidly with changing business requirements, which of course enhances agility [17].

2.3. SOA Characteristics

The previous subsection gave a quick glance at the benefits of SOA; this subsection presents an overview of its unique characteristics. These characteristics affect the cost of SOA projects, which is why traditional cost estimation approaches do not fully fit SOA. The characteristics [18] are as follows:

2.3.1. Loose coupling

Loose coupling is the degree to which software parts are interdependent. In a typical SOA project, each service is independent [19] and dependencies between services are minimized. The more loosely coupled the services, the easier they are to change. Loose coupling leads to enhanced agility [2], as the services can adapt easily to rapidly changing business needs. Moreover, the loosely coupled nature of SOA affects effort estimation, as most developers tend to ignore the cost of integrating services [20]. However, there has been an attempt to estimate the cost of service integration [20] in which integration is treated as a cost factor; this attempt is discussed in the cost estimation approaches subsection.

2.3.2. Abstraction (separation of concerns, or information hiding)

Services involve separation of concerns: as services are loosely coupled and independent from each other [18], each service hides its logic from the "outside world". This leads to enhanced agility and more flexibility.

2.3.3. Autonomy (or Encapsulation)

Service encapsulation is the process of wrapping code and data together into a single service; in other words, services have control over the logic they encapsulate. This characteristic is related to loose coupling: because each service encapsulates its own logic, the services can remain loosely coupled.


2.3.4. Reusability

The same service can be used over and over for other purposes, which prevents service redundancy in the system [21]. Reuse will occur only if the services are clearly documented and identified [22], designed and deployed in a manner that enables them to be invoked by independent service consumers [20], and the logic is divided into services with the intention of reuse [19]. Hence, the cost of designing a service for reuse is higher than that of a non-reusable service, as a reusable service involves more documentation and design. Reusability is one of the main advantages of SOA, as it decreases development, operational, management, and maintenance costs, which in turn decreases time to market. Analogy-based traditional effort estimation approaches fail to consider the reusability of services [19], as they mostly compare the similarity of features. Two projects may have similar features, yet the first had to build its services from scratch while the second simply reuses some already existing services and thus takes less effort.

2.3.5. Composability

Composability [23] is combining multiple services into one more powerful service, so composition can be viewed as another form of reusability [21]. A service can be composed of other services, which are coordinated and assembled. Composed services can be estimated using a divide-and-conquer approach [24].
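A divide-and-conquer estimate for a composed service can be sketched as a recursion over the composition tree. The tuple-based tree representation and the per-level assembly term below are assumptions made for illustration, not the method of [24].

```python
# Divide-and-conquer sketch for a composed service: the effort of a
# composition is the effort of its parts plus the effort of coordinating
# and assembling them at that level.

def dc_effort(node):
    """node is either a number (known effort of an atomic service)
    or a pair (children, assembly_effort) for a composition."""
    if isinstance(node, (int, float)):
        return node
    children, assembly = node
    return sum(dc_effort(child) for child in children) + assembly
```

Because the recursion mirrors the composition itself, nested compositions (services composed of composed services) are handled with no extra machinery.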

2.3.6. Discoverability

Services can be discovered and used via a service registry [25] or any other discovery mechanism, which encourages service reusability. Traditional software effort estimation approaches did not consider discovery mechanisms and their effect on effort; however, in [23] syntactic and semantic discovery mechanisms are considered in the effort, as will be discussed later.


2.4. SOA Project Phases

An SOA project is similar to any other software project: it is composed of phases, and each phase has activities and cost factors that affect the project cost. The project phases are requirements, design, development, testing, and implementation. Each phase is discussed in this subsection.

2.4.1. Requirements

This is the earliest phase in the project life cycle, in which the major functions of the service are defined and the business requirements are translated into service requirements [26]. Functional and non-functional requirements, features, and constraints are specified in this phase [27]. The requirements phase may also include a financial analysis of the benefits and costs [28].

2.4.2. Design

Once the requirements are clear, the target service can be described and documented in a way that skilled developers can easily implement. One of the important decisions of this phase is choosing the technical platform on which SOA will be implemented [26]; the service components also have to be well documented.

2.4.3. Development

The main activity in the development phase is the actual coding of the service [26]: the developers write the code that satisfies both the requirements and the documented design [26]. A common mistake made by many project managers is to treat the development cost as the actual cost of the project, ignoring the effort of all the other phases. Such a mistake leads to inaccurate project estimates, and inaccurate cost estimation leads to unpredicted risks, launch slips, mission failure, and major cost growth [6][29]. The development phase can be estimated on its own using either a top-down or a bottom-up approach [28]:


Top-down: In top-down estimation, the project is reviewed as a whole and the effort is estimated based on similar projects [30][31]. The major advantages of this method are its efficiency and the fact that it can easily be combined with more formal analogy-based estimation strategies. This method is typically used in the early stages of a project, when requirements are still vague and a detailed breakdown of the development work is not available.

Bottom-up: In bottom-up estimation [31], the project is divided into components or activities, and each component or activity is estimated individually. The total effort is then calculated as the sum of all the identified component and activity effort values. This method is usually used when re-estimating the remaining activities of a project [30]. Its advantage is that each activity is estimated and is available for future project plans. The risks are that activities can easily be forgotten and that the risk budget covering unexpected tasks may not be sufficiently large. One reason technicians prefer bottom-up estimation may be familiarity with the method, since they are often required to estimate each activity at later project stages.
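The two methods can be contrasted in a small sketch. The size-proportional scaling rule for the analogy and the 10% contingency for forgotten tasks are illustrative assumptions, not prescriptions from the works cited above.

```python
# Hedged sketches of top-down (analogy) vs. bottom-up estimation.

def top_down_estimate(similar_effort, similar_size, new_size):
    """Analogy-based: scale a comparable past project's effort
    by the relative size of the new project (assumed linear rule)."""
    return similar_effort * (new_size / similar_size)

def bottom_up_estimate(activity_efforts, contingency=0.10):
    """Sum per-activity estimates and add a risk budget to cover
    forgotten or unexpected tasks (placeholder 10%)."""
    return sum(activity_efforts) * (1.0 + contingency)
```

Top-down needs only a comparable project and a size ratio, so it works from vague early specifications; bottom-up needs the work breakdown, which is why it suits re-estimation once activities are known.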

2.4.4. Testing

After the services have been developed, it is time to validate and verify them [32] through extensive testing [26]. Unfortunately, classical testing techniques do not fully fit SOA projects; however, services can be tested using component or subsystem testing methods [33], which require more time and effort than traditional testing methods.

2.4.5. Implementation

The final stage of the system development life cycle is implementation, or the cutover stage [34], which involves transferring the new system from a development/testing/staging environment into production. The main goals of the cutover stage are to meet the system requirements and to complete the project within the limits of time and cost. There are four basic cutover strategies: pilot system, immediate cutover, phased cutover, and parallel cutover, as shown in Figure 3 [35].


Figure 3: Cutover Strategies [35]

Each cutover strategy has its own costs and risks, as follows:

2.4.5.1. Pilot system

A pilot system is a cutover strategy in which the new service is published in only a limited part of the organization to measure its impact and user reaction. Once the pilot has been accepted, the system is installed in the remaining parts of the organization using an immediate, phased, or parallel cutover, depending on the project and organizational conditions. A pilot system introduces the new system to the users and helps in choosing the most suitable cutover strategy.

2.4.5.2. Immediate cutover

The old service is shut down and immediately replaced by the new service, with no transition period. This strategy is characterized by limited cost but high risk, as there is no way of rolling back to the old service if the new service fails unexpectedly.

2.4.5.3. Parallel cutover

The new service runs simultaneously with the old one until the stability and accuracy of the new service can be ensured. This strategy is characterized by limited risk and greater user trust; however, it involves high costs and complicated procedures. The duration of the parallel cutover period varies with the complexity and criticality of the new system and the consequences of its failure: the more critical the system is to the organization, the longer the parallel period tends to be.

2.4.5.4. Phased cutover

The new system is deployed in phases, which can be either subsystems of the system or organizational units. With subsystems, each subsystem is cut over separately. With organizational units, the system is installed in only one part of the organization at a time; the parts can be organizational levels, functional areas, geographic sites, and so on, an example being the Air Force cutting over to a new aircraft maintenance system. The order of functions in a phased cutover by subsystem is usually determined either by the needs of the users or by the logical progression of functions.

2.5. Classification of Services

When services are better classified, they can be better estimated [19]; that is why, in this thesis, classification of services is presented before cost estimation approaches. Many classification approaches have been proposed in the literature, each viewing services from a different perspective: the service usage perspective considers how services are used, the service type perspective considers their type, and the construction classification views services based on how they are constructed. These classifications are discussed in detail in this subsection.

2.5.1. Service Usage Perspective

Services can be viewed from a usage perspective [36] and classified into task services, entity services, and decision services as follows:

2.5.1.1. Task service

A task service implements an exact business function, activity, or task, such as calculating the price of an insurance quote or validating the format of an address. These services come in different sizes, from large to small.


2.5.1.2. Entity service

An entity service handles access to business entities such as customers, policies, claims, etc. Entity services are usually medium to large sized; they should be independent of any particular business process and are intended to be used in multiple different business processes.

2.5.1.3. Decision service

Decision services execute business rules to provide business decisions; an example is the underwriting of an insurance policy. Decision services generally provide yes/no answers to complex questions, or they support frequently changing externalized rules, such as tax regulations. They are usually composed of other small to medium sized services.

2.5.2. Service Type Perspective

Services can also be classified based on their type, as in the AUS-SMAT framework [22]. Each service type has its own activities and cost functions, as detailed in the next subsections. The service types are service mining, service development, application development, service integration, SOA infrastructure, and SOA governance.

2.5.2.1. Service Mining

When an organization migrates to SOA, it should mine existing components or legacy systems that can be migrated, wrapped, or used as-is as services.

2.5.2.2. Service development

If the required service does not exist and cannot be reused or migrated from elsewhere, it has to be developed from scratch.

2.5.2.3. Service integration

The integration service acts as the glue that connects different services together.

2.5.2.4. Application Development

When an organization needs to build an application in SOA, the application is composed of services that are either reused as-is, migrated or modified, or provided by a 3rd party. These services must be integrated into a complete application that meets the exact business requirements of the project, so application development can include service mining, service development, and service integration.

2.5.2.5. Development/Acquisition of an SOA Infrastructure

To execute and manage SOA projects, organizations need an SOA infrastructure, including security, management, virtualization, etc. There are many vendors, such as Microsoft, Oracle, SAP, and IBM, and each product has a set of technologies that cover most of an organization's needs. One of the main challenges in this area is choosing the proper infrastructure: which pieces are needed, and which will be built by the organization.

2.5.2.6. SOA Governance

SOA governance is an essential activity for managing the many levels of SOA decisions correctly. The organization's goals and strategies have to be well defined, and the processes of reuse, release, and security have to be clear. Further work is still needed to cover the SOA governance area [37].

2.5.3. Construction Classification

The construction perspective views services as building blocks [23]. A service is either available and ready to be used (available service), usable after modifications (migrated service), nonexistent and in need of being built from scratch (new service), or composed of other services (composed service). The different types of services are shown in Figure 4.


Figure 4: Types of services from construction perspective (available, migrated, new, and composed services)

2.5.3.1. Available service

An available service already exists and can be used as-is. Available services may be homegrown or 3rd-party services [1]; if a service is 3rd party, per-use fees have to be considered when costing the SOA project.

2.5.3.2. Migrated service

A migrated service cannot be reused as-is and needs to be modified in order to meet the required functionality. It is generated through wrapping, replacing, or modifying existing services [18]. Each migration strategy has its own cost factors and circumstances, as discussed in our previous work [9]; the strategies themselves are discussed in the next subsection.

SOA Migration Strategies

Most organizations tend to migrate to SOA to gain one or more of its technical and/or business benefits, as detailed earlier. There are various migration strategies to SOA, each with its pros and cons, as detailed in [37]. Factors such as business value, business priority and the technical qualities of the legacy applications influence the selection of the proper strategy [39], which is decided at the final step of the SMART guidelines [40]. These strategies are wrapping, re-engineering, replacement and migration, as shown in Figure 5, and they are discussed in detail as follows:


Figure 5: Different migration strategies (wrapping, replacement, re-engineering, and migration, which is a mix and match of the other strategies)

a) Wrapping

Wrapping [37] is a black-box migration strategy in which an interface is built to wrap the existing legacy system. This strategy is used when the legacy code is too expensive to rewrite, relatively small, of high quality, of high business value, and/or when a fast solution is needed. This makes legacy wrapping one of the most attractive features of SOA, as many organizations cannot take the risk of re-developing a new solution from scratch [18]. However, this strategy does not solve the existing problems of the legacy system [41]. Generally speaking, wrapping is not the optimal strategy, but it allows a traditional system to easily gain some of the benefits of service-oriented architecture in a limited time.

b) Replacement

Replacement [37] is removing the old application and replacing it with a new system. The new system could be either an off-the-shelf product or built from scratch [22]. This strategy is usually used when the business rules are well understood, the old application is obsolete or its maintenance involves high costs, and when the costs of the other strategies cannot be justified [37]. One of the main benefits of replacement is that the new system is built to satisfy the organization's exact needs. Replacement is considered an expensive, risky and time-consuming strategy, though. In order to decrease the risk and the development costs, COTS could be used [37]. However, COTS should be used carefully, as future modifications could be difficult and expensive; consequently, COTS is not a good option if the business changes rapidly. On the other hand, replacement is less costly in maintenance and gives high performance [41].

c) Re-engineering

Re-engineering [37] is the adjustment of the application into a new form so that new functionality can easily be added to the legacy system.

Re-engineering is used in cases such as the following:
- The legacy system needs to be exposed as a service, as it has embedded reusable and reliable functionality with valuable logic;
- Some components are more maintainable than the whole system, or could be replaced without affecting the whole system.

d) Migration

In migration [37], the legacy code is separated from the user interface, the user interface is modified to be more compatible with SOA, and the core code is wrapped [41]. This strategy combines wrapping, re-engineering and replacement, which is the main reason it is excluded from our research.

Figure 6: Migration strategies' technical value versus business value

Figure 6 compares the different migration strategies in terms of technical value versus cost and business value [39]. The figure shows that wrapping has limited cost and business value and low technical value, while migration, on the other side, has high cost and business value and high technical value. Replacement has high cost and business value and low technical value, and redevelopment has high technical value with low cost. Table 1 shows that each migration strategy has its own advantages and disadvantages [37], which explains why relying on a single implementation strategy is not preferred; hence, multiple strategies could be used. These advantages and disadvantages of the migration strategies are factors affecting the migration costs, as will be detailed in the next section.


Table 1: SOA migration strategies' advantages and disadvantages

For the purpose of this research, only wrapping, replacement and re-engineering will be considered.

2.5.3.3. New service

When services cannot be reused from any source, including legacy systems, external services or vendor services, a new service has to be developed from scratch to satisfy the exact needs [22]. The main difference between a new service and the replacement type of migrated service is that a replacement service is built from scratch to replace an existing service, while a new service does not previously exist and is built from scratch. The new service has to be designed and implemented in a way that enables reusability [28]. This implies higher effort in design and development to enhance flexibility and decrease future development costs, as will be illustrated in the next sections. The effort estimation of the new service has been detailed in our previous work [42].

2.5.3.4. Composed service

A composed service is composed of one or more of the above types; smaller services are combined to create one powerful service. The composed service is a complex type of service, and this complexity could be addressed using the Divide and Conquer approach detailed in [24], in which the composed service is broken down into its component services, each service is estimated separately, and the integration costs of gluing these services together into the composed service are added. A qualitative approach has also been proposed in [23], in which the different cost factors of a composed service are considered.


2.6. Cost Estimation Approaches

In the previous subsections, the services have been classified so that they can be easily estimated. In this subsection, the different effort estimation approaches are discussed. The cost estimation of services is not a straightforward task due to SOA characteristics. There are many cost estimation approaches, each with its pros and cons. In order to organize these cost estimation approaches, we classify them into two main categories: traditional cost estimation approaches and SOA-specific cost estimation approaches.

2.6.1. Traditional Cost Estimation Approaches

The traditional cost estimation approaches were originally proposed to estimate the effort of traditional software. Unfortunately, these approaches do not address SOA characteristics properly, so some of them have been altered to enable them to estimate SOA project effort. These traditional cost estimation approaches are:

2.6.1.1. Expert Judgment

This approach is based mainly on the expert's intuition and expertise, considering the cost of similar recent projects [43][6]. It is the most common approach used in industry, as it is highly adaptive to different environments and various project circumstances [29]. This approach depends mainly on the expert's memory [6]; consequently, the circumstances, factors and details of past projects can be forgotten [23] unless these historical data are clearly documented [29], which is usually not the case. Although expert judgment gives acceptable estimate accuracy, it can be easily misled [4]. The main misleading happens when the estimators, before or during the estimation work, are made aware of the budget, client expectations, time available, or other values that can act as so-called estimation anchors. Without noticing it, they will tend to produce effort estimates that are too close to the anchors. Knowing that the client expects a low price or a low number of work-hours, for example, is likely to contribute to an underestimation of effort. Expert judgment can also produce inaccurate estimates for software maintenance, as the most experienced engineers tend to over-estimate the amount of work required for small tasks and under-estimate the amount of work for large tasks [43]. Considering SOA, this approach does not address most SOA characteristics, as it ignores the reusability, discoverability and composability of services, and it does not support the separation of concerns inherent in SOA.

2.6.1.2. COCOMO II

The Constructive Cost Model, known as COCOMO [44], is one of the earliest and best documented cost estimation approaches. It estimates the cost of software based on the number of lines of code (LOC). However, LOC-based estimation approaches are usually criticized because the exact size is known only when the project is completed [45]. Moreover, the model cannot be applied directly to modern software approaches, including SOA, due to the rise of auto-generated code. Traditional COCOMO II has been adapted to fit SOA projects, as in [46], where Tansey and Stroulia attempted to estimate the cost of SOA by applying both COCOMO II and real option theory to SOA projects. COCOMO II was applied to service development and service migration, while real option theory was applied to service composition. A real option [47] is an alternative or choice that becomes available with a business investment opportunity. Erdogmus shows in [48] how strategic flexibility in software projects can be valued in a practical and methodical manner using the concept of real options; the flexibility and reusability of a composed service can be modeled with real option theory. The COCOMO II parameters were also calibrated for SOA. The Tansey and Stroulia approach brought guidelines to SOA project effort estimation; however, it was not applied to real industry projects.
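For orientation, the basic COCOMO II effort equation can be sketched as follows. This is a minimal illustration of the standard Post-Architecture form (effort grows with size raised to a scale-factor-dependent exponent, multiplied by the effort multipliers); the scale factor and multiplier values in the example are assumed nominal ratings, not the SOA calibration of [46].

```python
# Sketch of the COCOMO II Post-Architecture effort equation:
#   PM = A * Size^E * product(EM_i),  with  E = B + 0.01 * sum(SF_j)
# A = 2.94 and B = 0.91 are the published COCOMO II.2000 constants; the
# scale factors (SF) and effort multipliers (EM) below are illustrative.
from math import prod

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    """Estimated effort in person-months for a project of `ksloc` KSLOC."""
    e = b + 0.01 * sum(scale_factors)
    return a * (ksloc ** e) * prod(effort_multipliers)

# Example: a 10-KSLOC service with assumed nominal ratings.
effort = cocomo2_effort(10, scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                        effort_multipliers=[1.0] * 17)  # roughly 37 person-months
```

The diseconomy of scale (exponent above 1) is what the SOA adaptation in [46] recalibrates per service type.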

2.6.1.3. Function point

Function point [49] is an approach which estimates the cost of software based on its functional requirements, by counting the software's functions. These functions are the number of inputs, the number of user outputs, the number of inquiries, the number of files and the number of interfaces [49]. Many adjustments to the traditional function point have been proposed so that SOA projects can be estimated with it. The traditional function point measures the complexity of software based on 14 cost factors called General System Characteristics [50]. These 14 cost factors are: data communications, distributed data processing, performance, heavily used configuration, transaction rate, online data entry, end-user efficiency, online update, complex processing, reusability, installation ease, operational ease, multiple sites and facilitation of change. The main adjustments have been made by adding and/or deleting one or more cost factors in order to estimate SOA projects, as in [51]. In the following, a quick glance is taken at the function point variants proposed for estimating SOA projects.

Cosmic Function Point for SOA: The Cosmic approach [19][52][53] was proposed in order to overcome the limitations of the traditional function point when applied to SOA, such as the inability to handle non-monolithic applications and the difficulty of defining service boundaries. The Cosmic approach involves applying a set of models, principles, rules and processes to the Functional User Requirements (FUR) of a given piece of software (a service). The result is a number that represents the functional size of the service. The Functional User Requirements are broken down into their elementary components, called "Functional Processes". A Functional Process is an elementary component of a set of FUR comprising a unique, cohesive and independently executable set of data movements. There are four data movement types, Entry (E), Exit (X), Read (R), and Write (W), as shown in Figure 7.
• Entry (E) moves a data group from a user across the boundary into the functional process that requires it.
• Exit (X) moves a data group from a functional process across the boundary to the user that requires it.
• Read (R) moves a data group from persistent storage within reach of the functional process that requires it.
• Write (W) moves a data group from inside a functional process to persistent storage. The persistent storage could be a file or a database.
Changes in the data movements can also be measured using this method. The unit of measure in COSMIC is the COSMIC Functional Size Unit (Cfsu), where 1 Cfsu = 1 data movement:
• Each added data movement receives 1 Cfsu.
• Each changed data movement receives 1 Cfsu.
• Each deleted data movement receives 1 Cfsu.
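The COSMIC counting rule above reduces to a simple tally of data movements. A minimal sketch, with an illustrative functional process (the example service and its movements are invented):

```python
# Sketch of COSMIC sizing: the functional size of a functional process is the
# number of its data movements (Entry, Exit, Read, Write), at 1 Cfsu each.

COSMIC_MOVEMENTS = {"Entry", "Exit", "Read", "Write"}

def cosmic_size_cfsu(data_movements):
    """Each added, changed or deleted data movement contributes 1 Cfsu."""
    unknown = set(data_movements) - COSMIC_MOVEMENTS
    assert not unknown, f"not COSMIC data movement types: {unknown}"
    return len(data_movements)

# Example: a lookup service receives a query (Entry), reads persistent
# storage (Read), and returns the result to the user (Exit) -> 3 Cfsu.
size = cosmic_size_cfsu(["Entry", "Read", "Exit"])
```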


Figure 7: Data movement types in Cosmic [53]

In [53], an extension of the Cosmic Function Point is proposed, based on empirical findings from projects in different technologies such as embedded systems, agile development, SOA implementations, and cloud computing. This extended version of the Cosmic Function Point accounts for quality, technology and the organization's processes.

The Cosmic approach solves the service boundary definition problem of SOA. However, some criteria for building the model (e.g., the identification of "data groups") are ambiguous, and the modelling notation is neither standard nor sufficient to support requirements analysis [54].

Adjusted Function Point for SOA: In the adjusted function point for SOA [51], the SOA project is broken down into its component services. Each service is estimated using an adjusted function point approach, in which the traditional Function Point cost factors are adjusted to empirically support SOA. These adjustments include eliminating unused Function Point cost drivers and adding an SOA-specific cost driver (Service Integration). Although this approach considers the integration nature of services, it does not address the rest of the SOA characteristics. Function point approaches suffer from many problems, but they are widely considered among the best estimation approaches currently available [54].


2.6.2. SOA-Specific Cost Estimation Approaches

As the traditional cost estimation approaches fail to estimate the cost of SOA projects, SOA-specific cost estimation approaches have been proposed to address SOA characteristics.

2.6.2.1. Linthicum Formula

This formula is one of the earliest approaches to SOA cost estimation [8]. The cost of SOA is calculated using equation (1):

Cost of SOA = Cost of Data Complexity + Cost of Service Complexity + Cost of Process Complexity + Enabling Technology Solution    (1)

Based on the Linthicum formula, the SOA cost is affected by these factors:
• Number of data elements
• Complexity of data storage technology
• System complexity
• Service complexity
• Process complexity
• New services needed
• Enabling technology
• Applicable standards
• Potential risks
The complexity of the data storage technology is expressed as a factor between 0 and 1: the value for conventional approaches is 0.3, for object-oriented storage 0.6 and for ISAM 0.8. The labor unit is the amount of money it takes to understand and refine one data element. At $100 per labor unit, with 3,000 data elements and ISAM storage, Cost of Data Complexity = (3,000 × 0.8) × $100 = $240,000. Although this formula takes into consideration many factors ignored by other approaches, it is not a real metric [23][33] and has to be used in conjunction with other effort estimation approaches [8]. It is also not detailed enough to be used in industry [33].
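The data-complexity term of equation (1) can be sketched directly. The storage complexity factors follow the values given above, and the element count and labor-unit price mirror the worked example; no other term of the formula is modeled here.

```python
# Sketch of the data-complexity term of the Linthicum formula [8]:
#   Cost of Data Complexity = (elements * storage complexity) * labor unit cost
# Storage complexity factors as given in the text.

STORAGE_COMPLEXITY = {"normal": 0.3, "object-oriented": 0.6, "isam": 0.8}

def data_complexity_cost(n_elements, storage_tech, labor_unit_cost):
    return n_elements * STORAGE_COMPLEXITY[storage_tech] * labor_unit_cost

# Worked example: 3,000 data elements, ISAM storage, $100 per labor unit,
# giving a data complexity cost of $240,000.
cost = data_complexity_cost(3_000, "isam", 100)
```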


2.6.2.2. SMART

SMART (Service-oriented Migration and Reuse Technique) [40] is a method to help organizations determine the effort of migrating to SOA by reusing existing legacy components, as the cost of exposing a legacy system as services could be higher than actually replacing it with a new SOA-based system [21]. The SMART approach is based on gathering a wide range of information about the legacy components, the target SOA system, and potential services, in order to produce a service migration strategy as its primary product. SMART also produces other outputs that are useful to any organization, whether or not it decides on migration, as shown in Figure 8 [37].

Figure 8: SMART input and output

SMART Process Activities

All SMART activities are shown in Figure 9. The migration context has to be established before determining whether the migration is feasible. Only if the migration is feasible are the next steps continued: candidate services have to be well defined, existing systems have to be documented, and the target SOA system and environment have to be described in detail.


Figure 9: SMART process activities

The different SMART process activities are detailed as follows:

a) Establish Migration Context: The migration circumstances have to be captured and clearly documented. The SMIG (SMART Interview Guide) meeting [37] is held in order to capture such vital information, which includes the business and technical goals and expectations, budget limitations, and identification of the different stakeholders and their responsibilities. Understanding both the existing and target systems at a high level is crucial in this step. A set of candidate services also has to be identified.

b) Checkpoint for Migration Feasibility: At this point a decision has to be made: either the migration is initially feasible; the migration has potential but requires additional information to make an informed decision; or the migration is not feasible, in which case there is no need to continue.

c) Define Candidate Services: In this activity, candidate services have to be identified. Candidate services have common characteristics; they execute common functions in the legacy system.

d) Describe Existing Legacy System: Detailed information about the legacy system has to be identified, including the name, function, size, programming language, operating platform, complexity and age of the legacy components. The existence of updated, detailed documentation of the legacy system and the quality of such documentation are also vital information, and legacy dependencies have to be carefully identified and documented.

e) Describe Target SOA Environment: In this step, the target SOA environment has to be described in detail. How services would interact with the SOA environment, the QoS expectations and the execution environment for new services are the main concerns.

f) Analyze the Gap: The gap between the existing legacy system and the target SOA system has to be cautiously analyzed.

g) Develop Migration Strategy: Based on the gap between the existing legacy system and the target SOA system, the migration strategy is chosen and developed. The migration strategy has to address the migration issues and risks, and it determines which of the migration strategies to follow: wrapping, re-engineering, replacement or migration, as discussed earlier.
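The activity sequence and feasibility checkpoint can be summarized in a small control-flow sketch. The activity names mirror the text, but the function signatures and the example context are hypothetical, not part of SMART itself.

```python
# Hypothetical sketch of the SMART activity flow: establish the migration
# context first, stop at the feasibility checkpoint if migration is not
# feasible, and otherwise run the remaining activities in order.

def run_smart(establish_context, is_feasible, activities):
    context = establish_context()        # a) establish migration context
    if not is_feasible(context):         # b) checkpoint for migration feasibility
        return None                      # no migration strategy is produced
    for activity in activities:          # c)-g) define services, describe
        context = activity(context)      # systems, analyze gap, develop strategy
    return context                       # final output: the migration strategy

# Example with stub activities that just annotate a context dictionary.
result = run_smart(lambda: {"goals": "expose legacy billing as services"},
                   lambda ctx: True,
                   [lambda ctx: {**ctx, "strategy": "wrapping"}])
```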

2.6.2.3. AUS_SMAT Framework

This cost estimation approach is based on classifying services by type, as mentioned earlier, into service mining, service development, application development, service integration, SOA infrastructure and SOA governance. Each type has its own activities, characteristics, considerations, templates and methods that are involved in the cost [22]. The overall cost of the SOA project is the accumulation of the costs of the constituent services according to each service type; the cost is estimated from both a technical and a social perspective. Each service type has its activities, as follows:

a) Service Mining: When an organization migrates to SOA, it should determine which existing components or legacy systems could be migrated or wrapped and used as services. Reusing existing services is less costly than legacy migration to SOA. There are many activities undertaken in service mining [22]:
- Understanding and documenting existing systems, and identifying existing components that satisfy service needs.
- Determining what should be done to these components, which might include wrapping components, modifying existing services, or building new interfaces.


- Determining the cost and effort of mining, modifying and reusing components; additional infrastructure information is also needed, such as software, licensing, hardware, etc.
Service mining supports the reusability nature of SOA.

b) Development of Services: Services are developed from scratch when they cannot be reused from existing services. These services could be developed using either top-down or bottom-up approaches [31], and several development processes may be used, varying from the traditional spiral to more agile approaches [22]. This type of service has many activities involved:
- Determination of service requirements: the requirements should be documented; senior management often ignores or underestimates the requirement gathering phase.
- Determination of the architecture and design of each service: the architecture and design should be documented.
- Implementation of the service: coding of the services. Considerable cost lies in implementation, because this is the main activity.
Other factors also need to be taken into consideration, including the acquisition of development and testing tools (hardware, software, licenses, etc.) and the training and learning curve of developers and architects in specific SOA technologies.

c) Application Development from Services: When an organization needs to build an application from services, it uses either home-grown services or services provided by third parties [22]. These services need to be integrated into a complete application that meets the business requirements. The main activities involved in application development are:
- Gathering and documenting the requirements of the application.
- Developing and capturing the architecture of the application.
- Implementation of the application: as many parts of the application already exist as services, what remains is to integrate these services. The non-availability of services also has to be taken into consideration; this activity carries the main cost involved.
- Testing the application: the services consumed by the application have to be heavily tested.
There are also cost factors that need to be taken into account, involving the acquisition of development and testing tools (hardware, software, licenses, etc.) and the training and learning curve of developers and architects in specific SOA technologies.


d) Integration of Services: The organization determines which components of a system will integrate with which service. There is a set of activities involved in this type of project [22]:
- Determining which services will integrate with which systems: details about the services and systems have to be captured and documented.
- Determining the changes that need to be made to existing systems to enable integration, and documenting these changes.
- Developing SLAs (Service Level Agreements) for the services based on the service requirements; negotiating with the service providers may also be involved.
Most developers tend to ignore the cost of service integration, which results in inaccurate cost estimation [20].

e) Development/Acquisition of an SOA Infrastructure: To execute and manage SOA, organizations need an SOA infrastructure, including security, management, virtualization, etc. There are many vendors, such as Microsoft, Oracle, SAP and IBM, and each product has a set of technologies that covers most of an organization's needs [22]. One of the main challenges in this area is choosing the suitable infrastructure: which pieces are needed, and which will be built by the organization. The activities involved in SOA infrastructure are:
- Determining the requirements of the SOA infrastructure.
- Evaluating different products from different vendors and choosing a suitable combination.
- Acquiring the different pieces of the infrastructure, which may involve acquiring hardware, software, licenses, etc.
- Customizing the infrastructure within the organization, which may involve vendor consultancy.
The main infrastructure cost lies in its customization, including licenses, software and hardware, as well as consultancy and product evaluation. If the organization's needs are quite specialized and no vendor product covers them, the organization may build its own infrastructure, and the cost of developing the infrastructure will be similar to application development.


f) SOA Governance: SOA governance refers to the processes used to oversee and control the adoption and implementation of SOA in accordance with recognized practices, principles and government regulations. SOA governance is an essential activity to manage the many levels of SOA decisions correctly. Many activities need to be undertaken to establish proper SOA governance [22]:
- Developing the strategy and goals for the SOA initiative;
- Determining where funding will come from, who has ownership and who gives the necessary approvals;
- Determining the necessary structures, processes and governance mechanisms that need to be in place within the organization;
- Determining, for each governance process, the roles, responsibilities and procedures for managing SOA activities;
- Developing policies and enforcement mechanisms for policies related to the use of standards, security, the release of new versions of applications and services, and the reuse of services;
- Developing a set of metrics to show the progress of the project and the overall SOA initiative, and determining which business outcomes are to be achieved and how they are measured and by what metrics;
- Determining the incentives, penalties and rewards for appropriate "SOA behavior".
All of these activities involve cost, and an organization may not get them right the first time; SOA governance may be modified and even evolve as the organization gains experience with its SOA initiative.

g) SOA Architecture Analysis: The analysis of the architecture of an SOA-based system may involve many quality characteristics of that architecture, such as performance, scalability, security, adaptability, etc. The analysis can be done in several ways, from load testing of the system to building performance and scalability models of the system and running simulations on them [22].
The main activities involved in analyzing the performance and scalability of the architecture are:
- Determining and capturing the performance and scalability requirements for the application;


- Understanding the architecture of the system and identifying the services, workflows and the physical deployment architecture (servers, networks, etc.);
- Obtaining unloaded performance data for the various services and workflows, and understanding the demand that will be placed on the system;
- Building a model of the system based on the services, workflows and servers, and parameterizing the model with the performance data;
- Running simulations on the model based on the demand, to determine the response time, throughput and capacity of the various pieces of the architecture and other metrics that may be of use;
- Determining whether the system meets the requirements and, if not, where in the architecture things need to change (software changes, additional CPUs on servers, additional servers, etc.);
- Using the analysis data to negotiate and establish service level agreements for the system.
The organization may use either home-grown services or external services provided by a third party; in the case of external services, it may be difficult to access the performance data, so an SLA needs to be negotiated with the service provider in order to guarantee the QoS of the system. Additional costs may involve preparing the environment for performance analysis, monitoring, obtaining the performance data and contracting an organization to carry out the performance and scalability analysis. In the AUS-SMAT framework, the project is divided into services according to their type, and each type has its own associated activities, templates, cost factors and cost functions. This implies that the approach supports the loose coupling and diversity nature of SOA. However, the framework is still being developed [33], and it needs to be applied to projects from different organizations in order to be refined and matured [22].

2.6.2.4. Divide and Conquer (D&C)

The D&C approach [23] was inspired by the divide-and-conquer algorithm, which solves complicated problems by breaking them into smaller problems and solving each one independently; the overall solution is the combination of all the smaller solutions, as shown in Figure 10.


Figure 10: Principle of Divide-and-Conquer [24]

Figure 11 shows the Divide and Conquer approach applied to an SOA project in detail. The SOA project is broken down into its component basic services: available, migrated, developed from scratch, or composed of other services. The effort of an available service is denoted E1, the effort of a migrated service E2, the effort of a service developed from scratch E3, and the integration effort E4. The efforts of all the services are then accumulated and added to the integration effort, which yields the overall cost of the SOA project.

Figure 11: Procedure of SOA project development cost estimation based on Divide-and-Conquer [24]


This approach takes advantage of the loose coupling of services and estimates each service independently, but it does not show how the cost of an individual service is estimated. The approach enhances parallelism [24], as many services can be developed and tested simultaneously, which supports the loose coupling nature of SOA; it also supports reusability.
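The aggregation step of the D&C approach is a plain summation. A minimal sketch (the effort figures in the example are illustrative):

```python
# Sketch of Divide-and-Conquer aggregation [24]: the overall SOA project
# effort is the sum of the independently estimated service efforts
# (E1 available, E2 migrated, E3 new) plus the integration effort (E4).

def soa_project_effort(service_efforts, integration_effort):
    return sum(service_efforts) + integration_effort

# Example: three services estimated independently, plus integration work.
total = soa_project_effort([12.0, 30.0, 18.0], integration_effort=8.0)  # -> 68.0
```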

In this chapter, the work related to our research has been presented and the different effort estimation approaches have been detailed. Building on these approaches, our proposed approach is presented in the next chapter. The proposed approach estimates the different types of services: the migrated service effort is estimated based on the migration strategy used, the new service is estimated using the adjusted function point, and the composed service is estimated using the D&C approach. In the next chapter, the methodology is presented in detail.


CHAPTER 3 METHODOLOGY


Chapter 3: Methodology After taking a quick glance at the literature related to our research, it is time to present our proposed approach in this chapter. The SOA project is decomposed into its component services; each service is classified as available, migrated, new or composed service. Available service is a service which already exists. Migrated service is a service that could be migrated. New service is a service which should be developed from scratch. Composed service is a service which could be separated into one or more component services. Each type of these services has its own estimation procedure as shown in Figure 12. Figure 13, on the other hand, shows the proposed approach in details for each type of service. In case of available service, the cost is often testing and integration costs. The migrated service cost varies depending on the migration strategy used, whether wrapping, re-engineering or replacement. The cost of each of these migration strategies has its phased effort ratio, which will be discussed later in this chapter .The new service total cost could be estimated using either adjusted function point or phased effort estimation. The adjusted function point is derived from the traditional function point with few alterations. The phased effort estimation approach distributes the total effort of the service on the different project phases with ratios. Using either adjusted function point or phased effort estimation is based whether the requirement phase effort is available .Using requirement phase effort the other phase’s effort could be estimated using the phased effort ratio approach .While the phased effort estimation approach estimates the effort of each phase; the adjusted function point approach estimates the total effort of the new service. 
For a composed service, the effort can be estimated by decomposing the service into its constituent services, estimating the cost of each, and then aggregating these efforts to obtain the total effort of the composed service. The detailed methodology is shown in Figure 13. The cost estimation procedure for each of these service types is discussed in detail in this chapter.
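The classify-then-estimate flow above can be sketched as a small recursive routine. This is only an illustrative sketch: the dictionary keys and the effort figures in the usage example are hypothetical, and the migrated/new estimates stand in for the procedures detailed in the rest of this chapter.

```python
# Illustrative sketch of the high-level estimation flow: each service type
# has its own procedure, and a composed service aggregates its components.
# All field names and numbers here are hypothetical.

def estimate_effort(service):
    kind = service["type"]
    if kind == "available":
        # Reuse as-is: only integration and testing effort remain.
        return service["integration_effort"] + service["testing_effort"]
    if kind == "migrated":
        # Estimated via the phased effort distribution approach (Section 3.2).
        return service["migrated_effort"]
    if kind == "new":
        # Estimated via adjusted function points or phased effort (Section 3.3).
        return service["new_effort"]
    if kind == "composed":
        # Decompose, estimate each constituent, then aggregate.
        return sum(estimate_effort(child) for child in service["components"])
    raise ValueError(f"unknown service type: {kind}")

# Hypothetical project: one reusable service plus one new service.
project = {"type": "composed", "components": [
    {"type": "available", "integration_effort": 2, "testing_effort": 3},
    {"type": "new", "new_effort": 40},
]}
# estimate_effort(project) -> 45 person-days in this made-up example
```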


Figure 12: The high level methodology


Figure 13: The detailed proposed approach


3.1. Available service

An available service is a service that already exists and is ready to be reused. Its development effort consists only of the integration and testing efforts [42].

3.2. Migrated service

A migrated service is a service that has to be modified to satisfy the exact needs. The modification can be done by wrapping, re-engineering or replacement, as detailed in the previous chapter. The migrated service effort can be estimated using the phased effort distribution approach, in which each project phase has an effort ratio and the cost varies with the migration strategy. This approach is detailed as follows:

3.2.1. Phased effort distribution

Reasonable resource allocation across software phases is a vital factor for the success of software projects [55], and effort distribution over phases is the key to more efficient resource allocation. The approach proposed here aims to estimate the effort of migrated services for the different project phases. The effort of a migrated service varies depending on the migration strategy used, mainly because each migration strategy has its own cost factors. In this approach, the cost factors of migrated services are extracted from the existing literature and distributed among the SOA project phases. Each cost factor is given a weight for each migration strategy. The weights of the different factors are grouped by project phase, and all the weights of all factors are aggregated per migration strategy. The relative effort ratio for each phase is then calculated by dividing the total weight of the phase factors by the total weight of the migration strategy. The whole process is summarized in Figure 14.


1- Identify the migrated service cost factors
2- Distribute the cost factors into phases
3- Assign a weight value to each cost factor based on the migration strategy used
4- Group the weights of the cost factors by phase
5- Calculate the relative total cost of each migration strategy
6- Calculate the relative effort ratio for each phase

Figure 14: Block diagram of the steps of the phased effort approach for migrated services

The output of this approach is a set of phased effort ratios; these ratios can be used as guidelines by project managers for easier resource allocation across the different project phases, depending on the migration strategy carried out. This approach can also be used to estimate the effort of the different project phases when the effort of only one phase is known, which helps estimate the effort of the whole project early and accurately based on the effort of the requirement phase. The requirement-phase effort can be obtained at the end of the requirement phase, giving the actual requirement effort. In the absence of the requirement-phase effort, this approach can be combined with expert judgment to estimate it. The steps are detailed as follows:

3.2.1.1. Identify the SOA cost factors:

An exhaustive search for all cost factors related to migrated services in the existing literature has been carried out. A comparative study among different migration strategies is discussed in [37]. The factors there were viewed in the context of comparing migration strategies; for the purpose of our research, we considered only the cost-related factors, which are need for original requirements, need for source code, flexibility and stable environment. Another study [56] examined different migration paths, comparing wrapping, re-hosting, componentization, re-engineering and COTS. In our research we have ignored both re-hosting and componentization: re-hosting is publishing the service on another host, which is out of our scope, while componentization's cost has been covered in detail in [23]. From [56], we extracted the cost-related factors: business agility, integration with partners, modifications requiring considerable testing effort, and business risk. Factors not directly related to cost have been removed, namely the move from batch processing to online processing and a near-real-time enterprise, and hard-coded business rules. It is worth noting that some factors were mentioned in both [23] and [56], which implies a deep impact on the cost. These factors are migration duration, level of tool support, performance post migration, and integration costs, which involve the cost of system integration with business partners and the difficulty of integrating with a new breed of technologies. Furthermore, some factors have been discussed in both [23] and [37]: maintainability after migration, modifications requiring considerable testing effort, and experienced resources. Another study [39] evaluated migration strategies by technical value and by cost and business value; for the purpose of our research, only the business value will be considered. Since the testing effort cannot simply be ignored, testing factors also have to be considered. They are addressed in [57], which views testing along two dimensions: testing level and testing perspective.

The extracted cost factors: The following are the cost factors and their meanings, to remove any ambiguity.

A) Business value: The business value can be determined from the application's ability to generate business returns, both in terms of financial benefits and/or improved customer satisfaction. The business value is the main driver for organizations to go through the risky migration process to SOA.

B) System maintenance (existing problems in legacy systems): The effort spent in solving existing problems of the legacy system after the migration.


C) Need for original requirements: The degree to which up-to-date, documented original requirements are needed.

D) Obsolete legacy system technology: The degree to which the legacy system technology can still be easily used.

E) Experienced resources needed: Whether experienced staff is needed in the migration process.

F) Need for source code: The degree to which the up-to-date source code of the legacy system is needed.

G) Time required for migration: Whether there is a time limitation on the migration.

H) Flexibility: How easy it is to modify and extend the application to cope with changing business requirements.

I) Stable environment: The degree to which the new system is required to be stable. This is related to the different cutover strategies discussed in Chapter 2 and illustrated in Figure 3.

J) Business agility: The rapid response to changing business requirements.

K) Integration with partners cost: The effort required for the system to integrate with partners.

L) Business risk: The degree of the application's criticality in achieving business objectives.

M) Maintainability post migration: The degree to which maintenance of the new system is easy after the migration.

N) Code size: The amount of code required to be written in the modified service.

O) Tools support: The degree to which the process is automated, and whether a tool is proposed or implemented.

P) Solving existing problems in legacy systems: Whether the new system solves the problems that already exist in the legacy system.


All these cost factors, in addition to the testing factors that will be detailed later, affect the different project phases.

3.2.1.2. Distribute cost factors into phases

In the previous step, the cost factors were identified. In this step, these factors are distributed among the different project phases. One factor may be involved in more than one phase, so each factor is assigned to its most relevant phase.

i. Requirements Phase:

This is the first phase in the project life cycle, in which the main functions of the service are defined. The cost factors affecting this phase are business agility, integration with partners cost, business value and business risk.

ii. Design Phase:

The design phase is the phase in which the target service is described in detail. The cost factors affecting this phase are need for original requirements, obsolete legacy system technology, experienced resources needed and need for source code.

iii. Development Phase:

The actual coding of the service occurs in this phase. The cost factors involved are flexibility, code size, tools support and time required for migration. These factors will be detailed in the next steps.

iv. Testing Phase:

Validation and verification of the service happen in this phase. Testing methods and tools are proposed to evaluate software systems; however, classical testing techniques do not properly fit systems that are made of services [33]. Consequently, service functional testing can be done using methods common in component or subsystem testing [57]. The detailed testing factors are given in the next step.

v. Transition Phase:

This phase involves the transfer of the new service from the development-testing-staging environment into production. The cost factors affecting this phase are stable environment, maintainability post migration and system maintenance.
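The distribution above can be captured in a simple mapping from phase to cost factors, which the weighting steps that follow iterate over. A sketch (factor names written in lowercase; the testing factors are the four testing levels detailed later):

```python
# Cost factors grouped by the project phase they affect (Section 3.2.1.2).
FACTORS_BY_PHASE = {
    "planning_requirements": ["business agility", "integration with partners cost",
                              "business value", "business risk"],
    "design": ["need for original requirements", "obsolete legacy system technology",
               "experienced resources needed", "need for source code"],
    "development": ["flexibility", "code size", "tools support",
                    "time required for migration"],
    "testing": ["functional testing", "non-functional testing",
                "integration testing", "regression testing"],
    "transition": ["stable environment", "maintainability post migration",
                   "system maintenance"],
}
```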


3.2.1.3. Assign a weight value to each cost factor based on the migration strategy used:

After the cost factors have been extracted and distributed among the phases, each factor is weighted with a relative weight corresponding to its effect on cost under each migration strategy (wrapping, replacement or re-engineering). The cost factors are weighted on a scale from 1 to 3, where a lower weight indicates less effort: a cost factor weighted 1 implies low effort and low effect on the cost, and vice versa. The weighted cost factors are distributed over the phases as follows:

i. Requirements Phase:

The cost factors affecting this phase, as mentioned earlier, are:

1. Business agility: Business agility is the rapid response to changing business requirements. Wrapping can easily cope with rapid business changes as it is a fast technique, so it takes weight 1. Re-engineering can satisfy the business changes, but at high cost, so it takes weight 3. Replacement can meet the changing business at moderate cost, so it takes weight 2.

2. Integration with partners cost: This is the cost of integration with partners. It is high in wrapping, which involves dealing with legacy systems, so wrapping takes weight 3. Re-engineering has the lowest integration cost, as it yields a high business value which reduces integration costs, so it is given weight 1. Replacement has a moderate integration cost, so it is given weight 2.

3. Business value: Based on Figure 6, wrapping is given weight 1 as it has low cost and low business value. Replacement gives high business value at high cost (and low technical value), so it is weighted 3. Re-engineering has lower cost and high technical value, so it is weighted 2.

4. Business risk: Wrapping involves little risk while replacement involves high risk, so wrapping takes weight 1, replacement takes weight 3, and re-engineering takes weight 2.

ii. Design Phase:

The cost factors involved in the design phase are:


5. Need for original requirements: In wrapping there is no need for the original requirements, as wrapping only encapsulates the existing legacy system, so it is weighted 1 (the original requirements might only be needed to make sure that the encapsulation will not affect the main functionality of the legacy system). Re-engineering involves adding new SOA functionality to the legacy system, so the detailed original requirements have to be up to date to assure that the original functionality will not be affected; it therefore takes weight 3. In replacement, the original requirements do not have to be detailed or up to date, so replacement takes weight 1 (see Table 6).

6. Obsolete legacy system technology: Wrapping involves direct interaction with the legacy code, so if the legacy technology is obsolete the cost is high; it takes weight 3. Replacement involves the least interaction with the legacy system, so it takes weight 1. Re-engineering carries out a smooth migration from legacy to SOA, so it takes weight 2.

7. Experienced resources needed: Replacement involves high risks, so experienced resources are strongly needed to overcome them; it takes weight 3. Wrapping involves low risk, so experienced resources are less needed; it takes weight 1. Re-engineering involves risks in between wrapping and replacement, so it takes weight 2.

8. Need for source code: Re-engineering requires up-to-date source code to be available, which involves high cost, so it takes weight 3. Replacement does not require the source code to be available, so it takes weight 1. Wrapping also needs the source code as the core on which the interface is built, so it takes weight 3 (see Table 6).

iii. Development Phase:

The cost factors affecting the development phase are:

9. Flexibility: Replacement provides the highest level of flexibility, but implementing that flexibility takes considerable time, so it is given weight 3. Wrapping is an inflexible approach involving little flexibility-related effort, so it takes weight 1. Re-engineering is in between, so it is given weight 2 (see Table 7).

10. Code size: Wrapping a legacy system involves writing little code, so it is given weight 1. Replacement involves building the new service from scratch, so its code size is weighted 3. Re-engineering involves adding new functionality to the existing system, so it is given weight 2.

11. Tools support: Wrapping involves dealing directly with legacy systems whose technology may be obsolete, so tools may no longer be available; wrapping takes weight 3. Re-engineering and replacement involve technologies with modern tool support, so they are given weight 1.

12. Time required for migration: Wrapping is a fast strategy, as shown in Figure 6, so it has the lowest time-related cost and is given weight 1. Replacement is time-consuming, as shown in Figure 6, and is given weight 3. Re-engineering is in between and is given weight 2.

iv. Testing Phase:

Testing can be viewed along two dimensions: testing level and testing perspective, as shown in Table 2 [57].

Testing perspectives: Testing can be viewed from several perspectives: developer, provider, integrator, third party and user.

Service developer: The service developer tests the service to detect the maximum possible number of failures, aiming to release a highly reliable service [33]. The testing costs are limited; however, non-functional testing is not realistic for developers, as it does not account for the network configuration or the provider and consumer infrastructure [57].

Service provider: The service provider tests the service to ensure it meets the requirements in the SLA with the consumer [33]. The testing costs are limited; however, white-box testing cannot be done, and non-functional testing does not reflect the configuration at the consumer [57].

Service integrator: The service integrator needs to make sure that any service bound into a composition fits its functional and non-functional assumptions [33]. Runtime binding is difficult to test, as the integrator has no control over services that may change without prior notice; testing is based on service invocation, resulting in costs for the integrator and wasted resources for the provider [57].


Third-party certifier: The service integrator can use a third-party certifier to assess a service's fault-proneness [33], as this reduces the number of resources and stakeholders involved in the testing [57]. However, the third-party certifier does not test the service under the actual composition, integration or network configuration, which raises serious confidence issues [57].

Service user: The user has no awareness of service testing; the user's only concern is that the service works when needed [57].

Table 2: Testing perspectives; each stakeholder's needs and responsibilities are shown in black, advantages in green, and issues and problems in red [33]


Testing levels: Testing has several levels: functional, non-functional, integration and regression, as follows:

A) Service functional testing: Service functional testing can be done using methods common in component or subsystem testing. The WSDL can be used by integrators and providers to generate test cases based on functional, black-box strategies [57].

B) Non-functional testing: The objective of non-functional testing is to make sure that the QoS meets the SLA, which represents an agreement between the service provider and the service consumer. External factors such as heavy network or server load can affect service performance, so stress testing against the SLA has to be done [57]. Consequently, this testing can be complex and expensive.

C) Integration testing: The main concern of integration testing is to make sure that any problems caused by the integration of the services are eliminated [58]. Classical integration testing usually fails when the service experiences dynamic binding: due to the polymorphism of SOA, testing all possible endpoints is costly, and endpoints may be unknown at testing time. Despite the automatic discovery-and-composition mechanisms available, the integrator must adequately test the service or composition before using it, and the test time must be minimized because it affects runtime performance.

D) Regression testing: Re-testing a piece of software after a round of changes, to make sure that the changes did not adversely affect the delivered service [57]. Although this requires the system integrators to know the service release strategy, integrators usually have no control over the integrated service. Any service integrated into a composition requires regression testing whenever the service is updated.


Table 3: Comparison of the testing levels across the remaining testing perspectives (developer, provider and integrator)

Functional testing:
- Developer: white-box testing available; specification available for test cases; limited cost (advantages). Non-representative inputs (disadvantage).
- Provider: limited cost (advantage). Black-box testing only; non-representative inputs (disadvantages).
- Integrator: black-box testing only; high cost (disadvantages).

Non-functional testing:
- Developer: non-realistic testing environment (disadvantage).
- Provider: possibly non-realistic testing environment; cost may depend on network configuration (disadvantages).
- Integrator: QoS testing (advantage). Difficult to check if the SLA is met; high cost (disadvantages).

Integration testing:
- Integrator: must consider all possible bindings; must regression-test a composition after reconfiguration or rebinding; service call coupling increases because of dynamic binding (disadvantages).

Regression testing:
- Developer: limited cost (advantage). Unaware of who uses the service (disadvantage).
- Provider: limited cost (advantage). Aware that the service has changed but unaware who changed it (disadvantage).
- Integrator: might be unaware that the service has changed; high cost (disadvantages).

As this research discusses service migration, the third-party and user perspectives are out of our scope; the resulting comparison is shown in Table 3. In Table 4, a limited cost takes weight 1, a high cost takes weight 3, and a non-realistic setting takes weight 0. To simplify this research, the testing perspective has been ignored, as it does not directly affect the cost; only the testing level has been considered, and the subsequent matrices therefore include testing levels only. The testing levels are: functional testing, non-functional testing, integration testing and regression testing.


Table 4: Weights of the testing level / testing perspective matrix

Testing Level            Developer  Provider  Integrator
Functional testing           1          1          3
Non-functional testing       0          0          3
Integration testing          1          1          3
Regression testing           1          1          3

v. Transition Phase:

This phase involves the transition of the service from the development environment to the production environment. The cost factors affecting this phase are:

13. Stable environment: Wrapping is the least risky approach (weight 1). Replacement carries the highest risk (weight 3). Re-engineering is a compromise between the two (weight 2).

14. Maintainability post migration: As [56] suggests, wrapping has a high cost (weight 3), re-engineering a limited cost (weight 1) and replacement an intermediate cost (weight 2).

15. System maintenance (existing problems in legacy systems): As mentioned earlier, wrapping does not solve the existing problems in the legacy system, as it only builds an interface to the existing system in order to make it accessible. Wrapping therefore takes the maximum effort in solving existing problems compared to the other approaches and is given weight 3. Replacement, on the other hand, involves eliminating the legacy system, so no legacy problems remain and maintenance takes weight 1. Re-engineering involves adding new SOA functionality to the existing legacy system, which also involves solving existing problems and finding long-term solutions so that they are not repeated; in the long term, the cost of solving existing legacy problems is therefore lower than in wrapping, so it takes weight 2.

3.2.1.4. Group the weights of the cost factors by phase

After assigning a weight to each cost factor under each migration strategy, the relative weights are grouped and summed by phase. The planning and requirements phase cost factors are aggregated in Table 5. As shown, the replacement strategy has the highest qualitative cost in the planning and requirements phase compared to the other migration strategies; this is mainly because replacement involves replacing the legacy system, so the requirements have to be documented clearly. The design phase cost factors are grouped by migration strategy in Table 6. The re-engineering strategy has the maximum qualitative effort in the design phase compared to the other strategies: re-engineering involves adjusting the old system so that new functionality can be added easily, which requires extra effort to deeply understand both the old and the target system.

Table 5: Planning & Requirements Phase relative weights grouped by migration strategy

Planning & Requirements Phase            Wrapping  Reengineering  Replacement
Business agility                             1           3             2
Integration with partners' cost              3           1             2
Business value                               1           2             3
Business risk                                1           2             3
Planning & Requirements total weight         6           8            10

Table 6: Design Phase relative weights grouped by migration strategy

Design Phase                             Wrapping  Reengineering  Replacement
Need for original requirements               1           3             1
Obsolete legacy system technology            3           2             1
Experienced resources needed                 1           2             3
Need for source code                         3           3             1
Design total weight                          8          10             6


Table 7 shows the development phase cost factor weights grouped and summed by migration strategy. The development phase of the replacement strategy takes the maximum effort compared to the other strategies, which is logical, as replacement involves rewriting the service code from scratch.

Table 7: Development Phase relative weights grouped by migration strategy

Development Phase                        Wrapping  Reengineering  Replacement
Flexibility                                  1           2             3
Code size                                    1           2             3
Tools support                                3           1             1
Time required for migration                  1           2             3
Development total weight                     6           7            10

The testing phase is vital in all strategies; however, in the replacement strategy testing has the highest effort, as Table 8 shows. Since replacement introduces a new service that replaces the old service or legacy system, the new service has to be exhaustively tested to ensure its reliability.

Table 8: Testing Phase relative weights grouped by migration strategy

Testing Phase                            Wrapping  Reengineering  Replacement
Functional testing                           1           2             3
Non-functional testing                       3           1             1
Integration testing                          3           2             3
Regression testing                           1           2             3
Testing total weight                         8           7            10

The relative weights of the transition cost factors are shown in Table 9. The totals are close to each other, which implies almost the same transition effort across the three strategies.


Table 9: Transition Phase relative weights grouped by migration strategy

Transition Phase                                           Wrapping  Reengineering  Replacement
Stable environment                                             1           2             3
Maintainability post migration                                 3           1             2
Solving existing problems in legacy systems (maintenance)      3           2             1
Transition total weight                                        7           5             6

3.2.1.5. Calculate the relative total cost of each migration strategy

In the previous step the relative weights of the cost factors were aggregated by phase; in this step they are aggregated by migration strategy, as in Table 10. For each strategy, the total relative cost is the sum of the planning and requirements, design, development, testing and transition weights: 35 for wrapping, 37 for re-engineering and 42 for replacement. This implies a relatively lower cost for wrapping and the highest cost for replacement. Of course, these are relative ratios, not absolute values, and they consider only the cost factors extracted from the literature; other cost factors may exist that were not considered in our research. Each project also has its own circumstances affecting the cost, which differ from one project to another.

Table 10: Cost factor weights aggregated by migration strategy

Phases                                                       Wrapping  Reengineering  Replacement
Planning & Requirements
  Business agility                                               1           3             2
  Integration with partners' cost                                3           1             2
  Business value                                                 1           2             3
  Business risk                                                  1           2             3
  Planning & Requirements total weight                           6           8            10
Design
  Need for original requirements                                 1           3             1
  Obsolete legacy system technology                              3           2             1
  Experienced resources needed                                   1           2             3
  Need for source code                                           3           3             1
  Design total weight                                            8          10             6
Development
  Flexibility                                                    1           2             3
  Code size                                                      1           2             3
  Tools support                                                  3           1             1
  Time required for migration                                    1           2             3
  Development total weight                                       6           7            10
Testing
  Functional testing                                             1           2             3
  Non-functional testing                                         3           1             1
  Integration testing                                            3           2             3
  Regression testing                                             1           2             3
  Testing total weight                                           8           7            10
Transition
  Stable environment                                             1           2             3
  Maintainability post migration                                 3           1             2
  Solving existing problems in legacy systems (maintenance)      3           2             1
  Transition total weight                                        7           5             6
Total strategy weight                                           35          37            42

3.2.1.6. Calculate the relative effort ratio for each phase

After the relative total cost of each strategy is obtained from the previous step, the relative effort ratio for each phase is calculated by dividing the phase weight by the total relative weight of the strategy. For example, in the requirements phase, wrapping has a total weight of 6 and a total strategy cost of 35, so the requirements percentage is 6/35 × 100 = 17.14 %. The same steps are applied to the other migration strategies, and the result is shown in Table 11. The resulting relative costs of the phases for each migration strategy are summarized in Table 12, which will be our guide when estimating migrated services in the next chapter.
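The calculations of steps 5 and 6 can be sketched directly from the phase subtotals of Tables 5 through 9. A minimal sketch:

```python
# Phase-weight subtotals per migration strategy (from Tables 5-9) and the
# relative effort ratio of each phase (Sections 3.2.1.5-3.2.1.6).
PHASE_WEIGHTS = {
    "planning_requirements": {"wrapping": 6, "reengineering": 8, "replacement": 10},
    "design":                {"wrapping": 8, "reengineering": 10, "replacement": 6},
    "development":           {"wrapping": 6, "reengineering": 7, "replacement": 10},
    "testing":               {"wrapping": 8, "reengineering": 7, "replacement": 10},
    "transition":            {"wrapping": 7, "reengineering": 5, "replacement": 6},
}

def total_weight(strategy):
    # Step 5: relative total cost of the strategy (35, 37 and 42 here).
    return sum(w[strategy] for w in PHASE_WEIGHTS.values())

def phase_ratio(phase, strategy):
    # Step 6: phase weight divided by the strategy total.
    return PHASE_WEIGHTS[phase][strategy] / total_weight(strategy)

# e.g. requirements ratio for wrapping: 6 / 35 ≈ 17.14 %
```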


Table 11: Migrated service cost factor weight distribution among phases

Project phase cost factors                                   Wrapping  Reengineering  Replacement
Planning & requirements
  Business agility                                               1           3             2
  Integration with partners' cost                                3           1             2
  Business value                                                 1           2             3
  Business risk                                                  1           2             3
  Planning & requirements total weight                           6           8            10
  Planning & requirements (%)                                17.14 %     21.62 %      23.81 %
Design
  Need for original requirements                                 1           3             1
  Obsolete legacy system technology                              3           2             1
  Experienced resources needed                                   1           2             3
  Need for source code                                           3           3             1
  Design total weight                                            8          10             6
  Design (%)                                                 22.86 %     27.03 %      14.29 %
Development
  Flexibility                                                    1           2             3
  Code size                                                      1           2             3
  Tools support                                                  3           1             1
  Time required for migration                                    1           2             3
  Development total weight                                       6           7            10
  Development (%)                                            17.14 %     18.92 %      23.81 %
Testing
  Functional testing                                             1           2             3
  Non-functional testing                                         3           1             1
  Integration testing                                            3           2             3
  Regression testing                                             1           2             3
  Testing total weight                                           8           7            10
  Testing (%)                                                22.86 %     18.92 %      23.81 %
Transition
  Stable environment                                             1           2             3
  Maintainability post migration                                 3           1             2
  Solving existing problems in legacy systems (maintenance)      3           2             1
  Transition total weight                                        7           5             6
  Transition (%)                                             20.00 %     13.51 %      14.29 %
Relative total cost of strategy                                 35          37            42

Table 12: The relative cost of phases for each migration strategy

Phase effort %                Wrapping  Reengineering  Replacement
Planning & requirements (%)   17.14 %     21.62 %       23.81 %
Design (%)                    22.86 %     27.03 %       14.29 %
Development (%)               17.14 %     18.92 %       23.81 %
Testing (%)                   22.86 %     18.92 %       23.81 %
Transition (%)                20.00 %     13.51 %       14.29 %

Figure 15: Effort ratio distributed in each phase for different service migration strategies

Figure 15 shows the effort ratio distribution among the different phases for the various migration strategies.
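As a sketch of how these ratios can be used in practice: once the actual requirement-phase effort is known, the total effort and the per-phase efforts follow by simple proportion from Table 12. The 12 person-day figure below is purely hypothetical.

```python
# Sketch: scaling from a known requirement-phase effort to the remaining
# phases using the ratios of Table 12 (wrapping strategy shown).
WRAPPING_RATIOS = {
    "planning_requirements": 0.1714,
    "design": 0.2286,
    "development": 0.1714,
    "testing": 0.2286,
    "transition": 0.2000,
}

def effort_from_requirements(req_effort, ratios):
    # Total effort = requirement effort / requirement ratio;
    # each phase then receives its share of that total.
    total = req_effort / ratios["planning_requirements"]
    return {phase: total * r for phase, r in ratios.items()}

# A wrapped service whose requirement phase took 12 person-days (hypothetical)
# would total about 12 / 0.1714, i.e. roughly 70 person-days overall.
phase_efforts = effort_from_requirements(12, WRAPPING_RATIOS)
```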

3.3. New service

A new service is a service that cannot be reused from elsewhere and has to be developed from scratch to satisfy the exact needs. The effort estimation of a new service is a comparatively straightforward task, which explains why traditional cost estimation approaches can be used for new service estimation after some adjustments. In this research, an adjusted function point approach is proposed to fit new service estimation; it estimates the total effort of the service. A second proposed solution is phased effort distribution, which provides a way to determine the total cost of the service from the cost of the early phases and also helps project managers in resource allocation based on the effort of each phase. The two approaches are detailed as follows:

3.3.1. Adjusted function point

As mentioned earlier, the adjusted function point approach has been proposed to estimate effort for SOA projects. The traditional function point approach could be used to estimate both new and migrated services; however, the adjusted approach here focuses on new services. In our adjusted function point approach, SOA factors that were ignored by the traditional FP (function point) approach, by the adjusted FP approach, or by both, are considered. This is how our methodology deviates from both traditional FP [49] and the adjusted approach in [51]. Figure 16 shows the abstract methodology of new service effort estimation using adjusted function points, which is discussed in detail in this section.

1- Identify the service scope and service boundary
2- Count the service functions and determine their complexity
3- Determine the general system characteristics (cost factors)
4- Calculate the total degree of influence (TDI)
5- Calculate the value adjustment factor
6- Calculate the adjusted function point count
7- Convert the adjusted function point count to effort

Figure 16: SOA Adjusted Function Point Estimation Process

3.3.1.1. Identify the service scope and service boundary

In this step, identification of the service scope and service boundary is essential, as shown in Figure 17. The service scope represents the functionality provided by the service to be developed, while the service boundary represents the border between the service and the outer system. The service boundary acts as a 'membrane' through which data processed by transactions (EIs, EOs and EQs) pass into and out of the service. It also encloses the logical data maintained by the application (ILFs) and assists in identifying the logical data referenced by, but not maintained within, the application (EIFs). The service boundary definition is based on the user's view, not the technical view.

Figure 17: Difference between scope and boundary

Figure 18 shows an example of the boundary of an HR project, in which the human resources system interacts with the outer systems (currency and fixed assets) and has an interface that interacts with the user.


Figure 18: The boundary of an HR project

3.3.1.2. Count the service functions

After determining the service scope and boundary in the previous step, the service functions have to be counted. The functions are also counted from the user's perspective, not the technical perspective. The service functions are classified into data functions and transactional functions, as shown in Figure 19. Data functions are subdivided into internal logical files (ILF) and external interface files (EIF), while transactional functions are classified into external inputs (EI), external outputs (EO) and external inquiries (EQ). The different service functions are illustrated in Figure 20 [59].

Figure 19: Classification of the service functions


Figure 20: The view of a software application from the Function Point perspective

Data functions: Data functions are concerned with the data manipulations provided by the service.

Data function types: Data functions are either internal logical files (ILF) or external interface files (EIF). Here the term "file" does not carry its casual meaning; it refers to a logically related group of data. The data function types are shown in detail in Figure 21.

Figure 21: Data function types

Internal logical files (ILF): An ILF is a user-identifiable group of logically related data maintained within the boundary of the service. The primary purpose of an ILF is to hold data maintained through one or more processes of the service being counted.


Examples of ILFs [59] include: 1. Tables in a relational database. 2. Flat files. 3. Application control information, such as user preferences stored by the application. 4. LDAP data stores.

External interface files (EIF): An EIF is a user-identifiable group of logically related data referenced by the service, but maintained within the boundary of another service. In other words, an EIF is data that the service needs and uses, but does not maintain. The primary purpose of an EIF is to hold data referenced through one or more processes within the boundary of the service counted. This means an EIF counted for one service must be an ILF in another service. The primary difference between an internal logical file and an external interface file is that an EIF is not maintained by the service being counted, while an ILF is.

Data function complexity determination: To determine data function complexity, the Record Element Types (RET) and Data Element Types (DET) have to be identified and counted as follows:

DET: A Data Element Type is a unique, user-recognizable, non-repeated field. DETs represent the fields of the service's database tables that are recognized by the end user.

RET: A Record Element Type is a user-recognizable subgroup of data elements within an ILF or EIF, either optional or mandatory. Each RET represents one optional or mandatory subgroup of the ILF or EIF; if no subgroups exist, count the ILF or EIF as one RET. Figure 22 shows the difference between DET and RET.


Figure 22: The difference between DET and RET

Rate the functional complexity based on RET and DET counts: After counting the RETs and DETs, determine the functional complexity of the service from Table 13, which gives the complexity of data functions based on the total RET count (rows) and DET count (columns).

Table 13: Functional complexity matrix of data functions
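As a sketch, the complexity lookup of Table 13 can be coded directly. Since the table body is not reproduced here, the thresholds below are the standard IFPUG data-function ranges (RET rows: 1, 2-5, 6+; DET columns: 1-19, 20-50, 51+), assumed rather than taken from this thesis, though they agree with the worked example in Table 22.

```python
def rate_data_function(rets, dets):
    """Rate an ILF or EIF as 'Low', 'Average' or 'High' from its RET and
    DET counts (thresholds assumed from the standard IFPUG matrix)."""
    row = 0 if rets == 1 else (1 if rets <= 5 else 2)
    col = 0 if dets <= 19 else (1 if dets <= 50 else 2)
    matrix = [["Low", "Low", "Average"],
              ["Low", "Average", "High"],
              ["Average", "High", "High"]]
    return matrix[row][col]
```

For instance, the Workorders ILF of Table 22 (3 RETs, 23 DETs) rates Average under these thresholds, matching the table.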

Determine the ILF and EIF unadjusted function point counts: The data function complexity rating (low, average, high) is converted into an unadjusted function point count using Table 14 for ILFs and Table 15 for EIFs.

Table 14: ILF functional complexity translation into unadjusted function points

Table 15: EIF functional complexity translation into unadjusted function points


Calculate the total ILF and EIF contribution to the unadjusted function point count: For every ILF, the functional complexity rating is converted into unadjusted function points from Table 14, and the ILF function type total is calculated by summing the function point counts of all ILFs, as in Table 16. The EIF function type total is calculated likewise by summing the function point counts of all EIFs.

Table 16: Example of the total ILF and EIF count
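The translation and totaling steps can be sketched as a lookup plus a sum. The weights below are the standard IFPUG values (an assumption, since Tables 14 and 15 are not reproduced here); they agree with the worked example in Table 22, where a Low ILF counts 7, an Average ILF 10, and a Low EIF 5.

```python
# Assumed standard IFPUG unadjusted-FP weights for data functions.
DATA_FUNCTION_WEIGHTS = {
    "ILF": {"Low": 7, "Average": 10, "High": 15},
    "EIF": {"Low": 5, "Average": 7, "High": 10},
}

def data_function_ufp(functions):
    """Total unadjusted-FP contribution of (type, complexity) pairs,
    e.g. [('ILF', 'Low'), ('EIF', 'Low')]."""
    return sum(DATA_FUNCTION_WEIGHTS[t][c] for t, c in functions)
```

With these weights, the seven data functions of Table 22 (five Low ILFs, one Average ILF and one Low EIF) contribute 5×7 + 10 + 5 = 50 function points.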

Transactional functions: Transactional functions represent the functionality provided to the user by processing the data of the service.

Transactional function types: Transactional functions are classified into external inputs, external outputs, or external inquiries, as shown in Figure 23.


Figure 23: Transactional function types

External inputs: An external input (EI) is an elementary process that processes data coming from outside the service's boundary. The primary intent of an EI is to maintain one or more ILFs and/or to alter the behavior of the system.

External outputs: An external output (EO) is an elementary process that sends data outside the service's boundary. The primary purpose of an EO is to present information to a user through processing logic other than, or in addition to, the retrieval of data.

External inquiries: An external inquiry (EQ) is an elementary process that sends data outside the service's boundary. The primary purpose of an EQ is to present information to a user purely through the retrieval of data. The significant difference between an EO and an EQ is that in an EQ the processing logic contains no mathematical formulas or calculations, and no derived data are present. Examples of EQs are reports created by the service being counted where the report does not include any derived data [59].

Transactional function complexity determination: After determining the transactional function types, their complexity has to be measured. Complexity is measured using DETs (Data Element Types) and FTRs (File Types Referenced).

DET counting: Count the DETs of the transaction in the same way as for the data functions in the previous step.


FTR counting: FTR refers to File Type Referenced, which is an internal logical file read or maintained by a transactional function, or an external interface file read by a transactional function. The FTR count depends on the processes of the service.

Rate the functional complexity based on DET and FTR counts: After counting both DETs and FTRs, the functional complexity of the transactional functions is determined. EI functional complexity is determined from the number of FTRs and DETs using Table 17, EO complexity using Table 18, and EQ complexity using Table 19.

Table 17: EI functional complexity matrix

Table 18: EO functional complexity matrix

Table 19: EQ functional complexity matrix
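The three lookups of Tables 17-19 can be sketched in one function. As before, the thresholds are the standard IFPUG transactional ranges (an assumption, since the table bodies are not reproduced here), which are consistent with the worked example in Table 22.

```python
def rate_transaction(kind, ftrs, dets):
    """Rate an EI/EO/EQ as 'Low', 'Average' or 'High' from its FTR and
    DET counts (thresholds assumed from the standard IFPUG matrices)."""
    if kind == "EI":   # FTR rows: 0-1, 2, 3+; DET columns: 1-4, 5-15, 16+
        row = 0 if ftrs <= 1 else (1 if ftrs == 2 else 2)
        col = 0 if dets <= 4 else (1 if dets <= 15 else 2)
    else:              # EO/EQ rows: 0-1, 2-3, 4+; DET columns: 1-5, 6-19, 20+
        row = 0 if ftrs <= 1 else (1 if ftrs <= 3 else 2)
        col = 0 if dets <= 5 else (1 if dets <= 19 else 2)
    matrix = [["Low", "Low", "Average"],
              ["Low", "Average", "High"],
              ["Average", "High", "High"]]
    return matrix[row][col]
```

Under these thresholds, the "Add payment" EI of Table 22 (2 FTRs, 15 DETs) rates Average and the "Produce invoices" EO (4 FTRs, 32 DETs) rates High, matching the table.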

Determine the EI, EO and EQ unadjusted function point counts: After determining the transactional function complexity ratings (low, average, high), the EI, EO and EQ unadjusted function point counts are determined. Table 20 converts the EI and EO functional complexity ratings into unadjusted function points, whereas EQ complexity is converted using Table 21. Table 22 shows an example of an unadjusted function point count and its total.

Table 20: Translation of EI and EO complexity into unadjusted function points

Table 21: Translation of EQ complexity into unadjusted function points
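Analogously to the data functions, the transactional translation can be sketched as a lookup plus a sum. The weights are the standard IFPUG values (assumed here, since Tables 20 and 21 are not reproduced), consistent with the worked example: EI Low = 3, EI Average = 4, EI High = 6, EO High = 7, EQ Low = 3.

```python
# Assumed standard IFPUG unadjusted-FP weights for transactional functions.
TRANSACTION_WEIGHTS = {
    "EI": {"Low": 3, "Average": 4, "High": 6},
    "EO": {"Low": 4, "Average": 5, "High": 7},
    "EQ": {"Low": 3, "Average": 4, "High": 6},
}

def transaction_ufp(functions):
    """Total unadjusted-FP contribution of (type, complexity) pairs."""
    return sum(TRANSACTION_WEIGHTS[t][c] for t, c in functions)
```

With these weights, the thirteen transactional functions of Table 22 contribute 66 function points, which together with the 50 points of the data functions gives the table's total unadjusted count of 116.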

Table 22: Example of the total unadjusted Function Point Count

| Module    | Function Description               | Type | RET/FTR | DET | Complexity Rating | Function Points |
|-----------|------------------------------------|------|---------|-----|-------------------|-----------------|
| Customers | Customers                          | ILF  | 1       | 12  | Low               | 7               |
| Employees | Employees                          | ILF  | 1       | 7   | Low               | 7               |
| Orders    | Corporate information              | ILF  | 1       | 12  | Low               | 7               |
| Orders    | Parts                              | ILF  | 1       | 4   | Low               | 7               |
| Accounts  | Payments                           | ILF  | 2       | 10  | Low               | 7               |
| Orders    | Workorders                         | ILF  | 3       | 23  | Average           | 10              |
| Customers | Credit rating                      | EIF  | 1       | 6   | Low               | 5               |
| Employees | Add and edit employees             | EI   | 1       | 9   | Low               | 3               |
| Customers | Add and edit customers             | EI   | 2       | 16  | High              | 6               |
| Orders    | Add and edit corporate information | EI   | 1       | 14  | Low               | 3               |
| Orders    | Add and edit parts                 | EI   | 1       | 6   | Low               | 3               |
| Accounts  | Add payment                        | EI   | 2       | 15  | Average           | 4               |
| Accounts  | Adjust account                     | EI   | 2       | 14  | Average           | 4               |
| Orders    | Add workorder                      | EI   | 4       | 27  | High              | 6               |
| Orders    | Update workorder                   | EI   | 4       | 25  | High              | 6               |
| Accounts  | Produce invoices                   | EO   | 4       | 32  | High              | 7               |
| Orders    | Workorder productivity report      | EO   | 4       | 47  | High              | 7               |
| Accounts  | Sales report                       | EO   | 5       | 39  | High              | 7               |
| Orders    | Parts inventory report             | EO   | 3       | 27  | High              | 7               |
| Customers | Customer profile query             | EQ   | 1       | 14  | Low               | 3               |

Total unadjusted function point count: 116

3.3.1.3. General System Characteristics (Cost Factors)

The general system characteristics represent the factors affecting the cost in the function point estimation approach. Each cost factor is assigned a specific weight, called its "degree of influence", ranging from zero to five: a weight of zero indicates that the factor has no influence on the cost, while at the other end of the spectrum a weight of five implies a strong influence. The cost factors in the proposed approach are classified into three categories:
A. Traditional function point considered factors
B. Traditional function point ignored factors
C. Adjusted function point considered factors
Each category and its constituent factors are discussed in detail below.


A. Traditional function point considered factors

This section covers the cost factors considered both in traditional function point analysis and in our approach to estimating the effort of SOA projects. The description of each factor has been modified to fit SOA characteristics, and for the purpose of this research the cost factors are weighted based on the complexity of implementation in SOA. The traditional function point factors considered in our research are data communications, distributed data processing, performance, heavily used configuration, service complexity, SOA maturity, reusability and flexibility.

1. Data communications
In traditional FP, data communications represents the degree to which the application communicates directly with the processor. SOA, in contrast, supports a number of communication protocols such as UDDI, XML, and SOAP, and there are two common communication mechanisms for services: REST and SOAP.
a) REST: Within the REST environment, the web is considered a universal storage medium for publishing globally accessible information [59].
b) SOAP: SOAP treats the web as the universal transport mechanism for message exchange [59].

Degree of influence: REST is less complex than SOAP [23], so SOAP is given weight 2 and REST weight 1. More complicated protocols take higher weights.

2. Distributed data processing
In traditional FP, distributed data processing describes the degree to which the application transfers data among its physical components. In SOA, the services are distributed while a service registry controls them [51].
Degree of influence: A normal service registry mechanism takes weight 1, while a complicated service registry mechanism takes weight 2.


3. Performance
In traditional FP, performance is determined using the response time and throughput of the application. In SOA, performance is measured in terms of response time, throughput, availability, accessibility, successability and interoperability [61]. High-performance services require extra development and processing [19]. When developing new services, performance is a major concern [21], so this factor has to be considered in SOA [59]. In the traditional function point approach, performance and transaction rate are deeply related, as both concern how fast the application can perform and the effect of that requirement on design, development and implementation. For the purpose of this research, transaction rate is therefore ignored; we are concerned only with the performance of the services, as discussed later.
Degree of influence: High-performance services require extra effort [42]. Low-performance services take weight 1; high-performance services take weight 5.

4. Heavily used configuration
In traditional FP, heavily used configuration describes the degree to which computer resource restrictions influenced the development of the application. In SOA, this factor is mainly concerned with the hardware infrastructure and whether it can handle the service [51].
Degree of influence: A low-complexity infrastructure takes weight 1; an extremely complex infrastructure takes weight 5.

5. Service complexity
Service complexity describes the degree to which the service is complex, which affects its development [51]. Indeed, a more complex service requires more development effort [59]. Service complexity can be measured with many metrics; those included in this research are messaging models, service discovery, service patterns and security.


a) Messaging models (synchronous vs. asynchronous): The messaging model of a service describes the request-response handling between client and server; it is either synchronous or asynchronous.
Synchronous services: A synchronous service means that every time a client accesses the web service, the client receives an immediate response; it is a request-response operation [21].
Asynchronous services: In an asynchronous service, the client invokes the web service and does not wait for a response, so the interaction is a one-way operation. The client sends a request in the form of an XML message; the web service receives and processes the message, and sends the results when it completes its processing [59]. Figure 24 shows the difference between synchronous and asynchronous services. Asynchronous services are more complex to build than synchronous services, and therefore involve more effort.

Figure 24: Synchronous vs. asynchronous services

b) Service discovery: From the service discovery perspective, services are classified as syntactic and semantic web services.
Syntax: Syntax relates to the formal or structural relations between signs and the production of new ones [23].
Semantics: Semantics deals with the relations between sign combinations and their inherent meaning [23]. Figure 25 represents the main differences between semantic and syntactic services [23].

Figure 25: Semantic web service vs. syntactic web service

In service development, semantics is more complex than syntax; thus a semantic web service takes more effort than a syntactic one [23].

c) Service pattern (orchestration vs. choreography): Services in SOA follow one of two patterns, orchestration or choreography [62][23]. An organization can use either pattern alone or combine the two.

Figure 26: Web service orchestration vs. choreography


Figure 26 shows the main differences between choreography and orchestration: orchestration is based on central control of services, while choreography is based on coordination between services. In detail:
Orchestration: Orchestration is based on centralized control of the services [62], where the business logic and rules are explicitly specified. Orchestration concentrates on the interaction of a master web service with the other services.
Choreography: Choreography is based on coordination among services [62], where web services act as peers. Each web service acts based on its own rules and logic. An analogy illustrates the difference between the two approaches: in a ballet performance the music is orchestrated (the maestro exercises master control over the orchestra), while the dancing is choreographed (each dancer knows its individual role and executes it, and the response to the other dancers is coordinated by the music). Choreography is the more complex approach, as it involves coordination among many interacting parts, and the rules created to determine the behavior of each individual are complex.

d) Security: This factor is considered when there is application-specific security processing, which may include internally developed security processing or the use of purchased security packages [49]. Implementing security adds effort to the development of the service.
Degree of influence: Service complexity is weighted based on the metrics above, with a weight ranging from 0 to 5. A service exhibiting 0 to 1 of the complexity factors is considered a low-complexity service and takes weight 1; 2 to 3 of the factors make it a medium-complexity service with weight 3; 4 to 5 of the factors make it a high-complexity service with weight 5.
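The rule above maps the number of complexity factors a service exhibits (asynchronous messaging, semantic discovery, choreography, security processing, and so on) to a weight; a minimal sketch:

```python
def service_complexity_weight(num_complex_factors):
    """Degree of influence of service complexity, per the stated rule:
    0-1 factors -> low (1), 2-3 -> medium (3), 4-5 -> high (5)."""
    if num_complex_factors <= 1:
        return 1   # low-complexity service
    if num_complex_factors <= 3:
        return 3   # medium-complexity service
    return 5       # high-complexity service
```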

6. Installation ease (SOA maturity)
In traditional FP, installation ease describes the degree to which conversion from previous environments influenced the development of the application, i.e. how difficult conversion and installation are. It used to be important for traditional software, but is insignificant in SOA, as mentioned in [51]. In our approach, installation ease instead represents the SOA experience of the organization: the SOA maturity score is high if this project is the organization's first SOA installation, and decreases as the organization gains experience.
Degree of influence: The first SOA project takes weight 5; the second or third project takes weight 3; the fourth and later projects take weight 1.

7. Reusability
Reusability is the main benefit of SOA, reducing development, operational, management and maintenance costs. Reusability is one of the traditional function point cost factors [49] but was neglected in the adjusted function point approach [51]. In this proposed approach we consider service reusability, as it is one of the main characteristics of SOA.
Degree of influence: Weight 0 represents a non-reusable service, weight 2 a service that will be reused within the same project, and weight 5 a service that will be reused across multiple projects.

8. Service flexibility (facilitate change)
In traditional FP, facilitate change describes the degree to which the application has been developed for easy modification of processing logic or data structure. In SOA, service flexibility is one of the main benefits of loose coupling, as mentioned earlier. Service flexibility is one of the main cost factors in traditional function point analysis [49] but was ignored in [51]; our approach considers flexibility, as it is one of the major benefits of SOA.
Degree of influence: A non-flexible service takes weight 0, a low-flexibility service weight 1, a medium-flexibility service weight 3, and a high-flexibility service weight 5. All the considered factors are summarized in Table 23.


Table 23: Traditional function point considered factors

| # | Factor | Traditional function point | SOA function point | Degree of influence |
|---|--------|----------------------------|--------------------|---------------------|
| 1 | Data communications | The degree to which the application communicates directly with the processor. | SOA supports a number of communication protocols such as UDDI, XML, and SOAP. | REST is less complex than SOAP, so SOAP is given weight 2 and REST weight 1; more complicated protocols take higher weights. |
| 2 | Distributed data processing | The degree to which the application transfers data among its physical components. | The services are distributed while a service registry controls them. | Normal service registry mechanism: weight 1; complicated mechanism: weight 2. |
| 3 | Performance | Determined using the response time and throughput of the application. | Measured in terms of response time, throughput, availability, accessibility, successability and interoperability. | Low-performance services: weight 1; high-performance services: weight 5. |
| 4 | Heavily used configuration | The degree to which computer resource restrictions influenced the development of the application. | The hardware infrastructure complexity. | Low-complexity infrastructure: weight 1; extremely complex infrastructure: weight 5. |
| 5 | Service complexity | The degree to which the application or service is complex, which affects its development. | Complexity metrics are the messaging model (synchronous or asynchronous), service discovery (semantic or syntactic), service pattern (orchestration vs. choreography) and security. | 1: low complexity; 3: medium complexity; 5: high complexity. |
| 6 | Installation ease (SOA maturity) | The degree to which conversion from previous environments influenced the development of the application. | SOA maturity, mainly related to the SOA experience of the organization. | 5: first SOA project; 3: second or third; 1: fourth and later. |
| 7 | Reusability | The degree to which the application or service has been developed for reusability. | — | 0: not reusable; 2: reused within the project; 5: reused across multiple projects. |
| 8 | Service flexibility (facilitate change) | The degree to which the application or service has been developed for easy modification of processing logic or data structure. | — | 0: not flexible; 3: medium flexibility; 5: high flexibility. |

B. Traditional function point ignored factors

In this subsection we discuss the traditional function point factors ignored in SOA. They were useful in traditional software; however, they are either meaningless or unavailable in SOA. These factors are transaction rate, on-line data entry, end-user efficiency, on-line update, operational ease and multiple sites.

1. Transaction rate
In traditional FP, transaction rate describes the degree of the application's performance at peak time. As mentioned in the previous section, this factor is highly related to performance; since performance is already considered, transaction rate is ignored.

2. Online data entry
Online data entry describes the degree to which data is entered or retrieved through interactive transactions. This factor is considered in traditional function point analysis, but is meaningless in SOA [51].

3. End-user efficiency
End-user efficiency describes the degree of ease of use for the user of the application: the on-line functions provided emphasize a design for user efficiency (human factors/user friendliness). Such a design includes:
• Navigational aids (e.g., function keys, jumps, dynamically generated menus, hyperlinks)
• Menus
• On-line help and documents
• Automated cursor movement
• Scrolling
• Remote printing (via on-line transmissions)
• Pre-assigned function keys (e.g., clear screen, request help, clone screen)
• Batch jobs submitted from on-line transactions
• Drop-down list boxes
• Heavy use of reverse video, highlighting, colors, underlining, and other indicators
• Hard-copy documentation of on-line transactions (e.g., screen print)
• Mouse interface
• Pop-up windows
• Templates and/or defaults
• Bilingual support (supports two languages: count as four items)
• Multilingual support (supports more than two languages: count as six items)
This factor is concerned with the GUI and is not applicable in SOA.

4. Online update
On-line update represents the degree to which the internal logical files are updated on-line. It also indicates whether the service uses programmed recovery such as SQL rollback and commit, and whether the service is required to recover data, reboot, or perform other self-contained functions in the event of a system failure. This factor is specific to traditional function point analysis and is ignored in SOA [51].

5. Operational ease
In traditional FP, this factor describes the start-up, backup and recovery procedures of the application [49]. It is not applicable in SOA [51].

6. Multiple sites
Multiple sites describes the degree to which the application has been developed for different hardware and software environments [49]. This factor is ignored in SOA [51], as SOA is hardware- and software-independent. All the ignored cost factors always receive weight 0.


Table 24: Ignored function point cost factors

| # | Factor | Description | Reason for exclusion |
|---|--------|-------------|----------------------|
| 1 | Transaction rate | The degree of the application's performance at peak time, which influences the application's development. | Highly related to performance; as performance is already considered, transaction rate is ignored. |
| 2 | Online data entry | The percentage of data entered or retrieved through interactive transactions. | Not applicable in SOA. |
| 3 | End-user efficiency | The degree of ease of use for the user of the application. | Not applicable in SOA. |
| 4 | Online update | The degree to which internal logical files are updated on-line. | Not applicable in SOA. |
| 5 | Operational ease | Describes the start-up, backup and recovery procedures of the application. | Out of SOA scope. |
| 6 | Multiple sites | The degree to which the application has been developed for different hardware and software environments. | SOA is hardware- and software-independent. |

C. Adjusted function point considered factors

This section introduces the cost factor added by [51] that is also considered in our approach: service integration has been added as a cost factor in SOA projects.

1. Service integration
The integration effort of services was estimated in [51], so service integration is considered a cost factor. It indicates whether the service needs other services to be integrated with it in order to perform its functions.
Degree of influence: A service that will not be integrated with any other service takes weight 0. When one or two services are integrated, the service takes weight 1; with 3 to 5 integrated services, weight 3; and with 6 or more integrated services, weight 5.
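The integration rule above, as a minimal sketch:

```python
def service_integration_weight(num_integrated_services):
    """Degree of influence of service integration, per the stated rule:
    0 services -> 0, 1-2 -> 1, 3-5 -> 3, 6 or more -> 5."""
    if num_integrated_services == 0:
        return 0
    if num_integrated_services <= 2:
        return 1
    if num_integrated_services <= 5:
        return 3
    return 5
```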

3.3.1.4. Calculate the total degree of influence (TDI)

After a weight is assigned to each cost factor, the weights are summed. The result is the total degree of influence (TDI), as in equation 2:

TDI = ∑ GSC    (2)

Table 25 shows an example of general system characteristics, their degrees of influence, and the resulting TDI.

Table 25: Example of Total Degree of Influence

| #  | General System Characteristic    | Considered (Y/N) | Degree of Influence |
|----|----------------------------------|------------------|---------------------|
| 1  | Data communications              | Yes              | 2                   |
| 2  | Distributed data processing      | No               | 0                   |
| 3  | Performance                      | Yes              | 2                   |
| 4  | Heavily used configuration       | No               | 0                   |
| 5  | Transaction rate                 | No               | 0                   |
| 6  | On-line data entry               | No               | 0                   |
| 7  | End-user efficiency              | No               | 0                   |
| 8  | On-line update                   | No               | 0                   |
| 9  | Service complexity               | Yes              | 1                   |
| 10 | Reusability                      | Yes              | 2                   |
| 11 | Installation ease (SOA maturity) | Yes              | 3                   |
| 12 | Operational ease                 | No               | 0                   |
| 13 | Multiple sites                   | No               | 0                   |
| 14 | Flexibility                      | Yes              | 2                   |
| 15 | Service integration              | Yes              | 4                   |
|    | Total Degree of Influence (TDI)  |                  | 16                  |

3.3.1.5. Calculate the value adjustment factor

The value adjustment factor (VAF) represents the general functionality provided to the user of the service. The VAF is calculated from equation 3:

VAF = (TDI × 0.01) + 0.65    (3)

where TDI is the total degree of influence determined in the previous step.

3.3.1.6. Calculate the adjusted function point count

The adjusted function point count is calculated from equation 4:

Adjusted FP Count = Unadjusted FP Count × VAF    (4)

where the unadjusted FP count is the sum of the function points, as in Table 22.

3.3.1.7. Adjusted function point count to effort conversion

From the previous steps, the adjusted function point count is calculated. Using a productivity factor, the adjusted function point count can be converted into effort in hours. The productivity factor varies with the programming language used, the nature of the project, the experience of the staff, etc. [63]. If the organization has its own historical project counts, these yield a proper productivity factor; if not, historical data of the client can be used. If no historical data exists at all, "market productivity" factors are the only remaining option; they differ by programming language, e.g. 12-14 hours per function point for Java projects and 8-10 hours per function point for .NET projects [63]. In this research we use 8 hours per function point, as the programming language used is .NET and no historical data is available in the organization or at the client. The total effort of the service is then calculated from equation 5:

Total Estimated Effort in Man-Hours = Adjusted Function Point Count × 8    (5) [63]
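Equations 3-5 chain together as a short calculation; a sketch (the default of 8 hours per function point is the .NET market figure chosen above):

```python
def estimate_effort(unadjusted_fp, tdi, hours_per_fp=8):
    """Equations 3-5 in sequence: VAF from the TDI, adjusted FP count,
    then total effort in man-hours."""
    vaf = tdi * 0.01 + 0.65            # equation 3
    adjusted_fp = unadjusted_fp * vaf  # equation 4
    return adjusted_fp * hours_per_fp  # equation 5
```

With the TDI of 16 from Table 25, VAF = 0.81; an unadjusted count of 100 FP would then yield 81 adjusted FP, i.e. 648 man-hours at 8 hours per function point.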

3.3.2. Phased effort distribution

Phased effort distribution leads to better project management through improved resource allocation, as mentioned earlier. However, the effort distribution among the software development phases varies dramatically depending on the estimation approach used. There are no fixed ratios of phase effort: Table 26 shows that different effort estimation approaches distribute effort differently across the development phases, with the development-phase effort ratio ranging from 65% in RUP (COCOMO II) to 29% in waterfall (COCOMO II). This makes project managers uncertain about the allocation of resources across the different project phases. On the other hand, an empirical analysis of phased effort data on industry projects, obtained from the China Software Benchmarking Standard Group (CSBSG), was presented in [64]. The purpose of the analysis was to examine the effort distribution profile using data on 75 projects from a number of Chinese software organizations, to gain additional understanding of the variations, and possible causes of variation, in waterfall-based phase effort distribution. The software size of the projects was measured in SLOC and the effort in person-hours.

Table 26: Phase Distribution of Software Development Effort Based on Estimation Approach

| Estimation Approach | Phase ratios |
|---------------------|--------------|
| RUP [65] | Inception (6%), Elaboration (18.81%), Construction (60.56%), Transition (14.61%) |
| COCOMO II [66] | Rational: Inception (10%), Elaboration (30%), Construction (50%), Transition (10%) |
| | Waterfall: Planning & requirements (7%), Product design (17%), Detailed design (23%-27%), Code and unit test (29%-37%), Integration and test (19%-31%), Transition (12%) |
| | RUP: Inception (5%), Elaboration (20%), Construction (65%), Transition (10%) |

Table 27 shows the overall phase distribution of the effort: the maximum standard deviation occurs in the code phase and the minimum in the transition phase. This implies that code effort has the greatest variation across projects, while transition effort is approximately the same in all of them [65]. It is worth noting that the projects with minimum design effort are the same projects with maximum development effort, and vice versa, since insufficient effort in the design phase results in frequent rework during the code phase.


Table 27: Overall phase distribution profile

As mentioned earlier, the effort distribution varies depending on the estimation technique used. In this section we compare the effort distributions under LOC and FP sizing. Figure 27 shows the effort distribution using function points for various software sizes. In small projects, effort is concentrated in the planning and design phases, while in XXXL projects effort falls mainly in the coding and transition phases. In medium-sized projects, coding takes most of the effort. The scale of the different project sizes is given in Table 28.

Table 28: Size categories and their equivalent function point size


Figure 27: Comparison among different software sizes using function point scales

Figure 28 shows the effort distribution when estimating using the LOC approach. The effort distribution of small projects under LOC is not very different from that under FP. Under LOC estimation, effort in all project sizes is concentrated in the coding phase.

Figure 28: Comparison among different software size scales in LOC


Both Figure 27 and Figure 28 show that as software size grows from small to medium, more effort should be allocated to the code and test phases; as size grows from medium to large, more effort should go to the planning and requirements and code phases; and when the software size is extra-large, more attention should be paid to the design, code, and test phases. Our proposed phased effort distribution approach provides a way to obtain the total cost of a service from the cost of the early phases. It also helps project managers allocate resources based on the effort of each phase. The detailed steps of the approach are as follows:

3.3.2.1. Identify the estimated effort distribution

The effort distribution among phases is shown in Table 29. These ratios are extracted from [64]; since they are derived from real projects, they are more reliable. The ratios are plotted in Figure 29, which shows that the main effort is in the development phase while the least effort is in the integration phase.

Table 29: Estimated Effort Distribution

Phase          Estimated effort ratio
Requirements   16%
Design         15%
Development    40%
Testing        22%
Integration    7%
Total effort   100%



Figure 29: Estimated phased effort distribution for new service

3.3.2.2. Get the actual effort of the requirement phase

With the actual effort of the requirement phase in hand, the efforts of the subsequent phases can be obtained.

3.3.2.3. Calculate the estimated effort of the other phases

From Table 29 and equation 6, the effort of any phase can be calculated:

Estimated Effort of Phase = (Phase Estimated Effort Percentage * Requirement Phase Actual Effort) / Requirement Phase Estimated Effort Percentage   (6)
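Equation 6 can be sketched as a small helper function; the default 16% is the requirement-phase ratio from Table 29:

```python
def phase_effort(phase_ratio, req_actual, req_ratio=16.0):
    """Equation 6: scale a phase's estimated effort ratio by the measured
    requirement-phase actual effort (req_ratio is 16% in Table 29)."""
    return phase_ratio * req_actual / req_ratio

# With 12 man-hours actually spent on requirements:
design = phase_effort(15, 12)        # 11.25 man-hours
development = phase_effort(40, 12)   # 30.0 man-hours
```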

Table 30 and Figure 30 compare the phased effort of the migrated service and the new service. The main effort of the new service lies in development, while in the migrated service the effort differs significantly depending on the migration strategy used.

Table 30: Comparison of phased effort distribution of the different migration strategies and the new service

Phase effort %                Wrapping   Reengineering   Replacement   New service
Planning & requirements (%)   17.14%     21.62%          23.81%        16%
Design (%)                    22.86%     27.03%          14.29%        15%
Development (%)               17.14%     18.92%          23.81%        40%
Testing (%)                   22.86%     18.92%          23.81%        22%
Transition (%)                20.00%     13.51%          14.29%        7%


Figure 30: Comparison of phased effort distribution of the different migration strategies and the new service

3.4. Composed Service

A composed service can be estimated as in Figure 13. The composed service is broken down into its constituent services, and the cost of each service is estimated separately based on its type: available, migrated, or new. These efforts are then aggregated and added to the cost of integrating the services into the composite service. The proposed approach is applied in detail in the next chapter.
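The aggregation step above can be sketched as follows; the per-service efforts and the integration effort in the example are illustrative placeholders, not data from the case studies:

```python
def composed_service_effort(constituent_efforts, integration_effort):
    """Total effort of a composed service: the sum of its constituent
    services' efforts (each estimated by its own type-specific approach)
    plus the effort of integrating them into the composite."""
    return sum(constituent_efforts) + integration_effort

# Hypothetical example: three constituent services plus integration work
total = composed_service_effort([85.0, 37.0, 120.0], integration_effort=20.0)  # 262.0
```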


Chapter 4: Experiment and Results

Having detailed the methodology in the previous chapter, this chapter applies the proposed cost estimation approach to the different types of services. The data of the case studies were collected from the software industry, from an Egyptian organization that has worked mainly on E-government projects for 15 years. The actual efforts of the services are stored in the organization's internal content management system as actual hours for scattered individual tasks. These actual hours have been aggregated and grouped by phase to obtain the actual effort of each phase.

The selected projects had to contain at least one service; all purely traditional software projects were excluded. Unfortunately, no project is composed entirely of services, as the cost of such a project would be too high to justify to upper management. Instead, we included services used within traditional software projects. These services are either composed, migrated, or new. The available service type is excluded from our research, since its development effort consists only of integration and testing, and the integration effort is already considered in each service type as a separate phase.

Only projects completed at the time of this research were included, so the actual effort covers everything from the start to the end of the project. Only projects with documented technical details were included; such details cover project circumstances, development environment, cost factors, project size, team size, project duration, and technology used. Unfortunately, the historical project base is incomplete due to undocumented data, and projects with lost or incomplete data were excluded from the experiment.

Since the total effort of each phase is required in the study, the total effort is calculated by summing the efforts of the constituent activities. Effort is measured in man-hours.

Table 31 shows the details of the case studies, with each project's domain, name, and constituent services. There are two main projects, named Alpha and Beta. The Alpha project has five services included in the research, while the Beta project has only one service that fulfills the selection criteria.


In order to compare the estimated and actual efforts, the relative error is calculated using equation 7:

%Error = Abs(Estimated Effort - Actual Effort) / Actual Effort * 100   (7)
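Equation 7 in code, with the AutoComplete figures from Table 34 as a check:

```python
def relative_error_pct(estimated, actual):
    """Equation 7: absolute relative error of an estimate, in percent."""
    return abs(estimated - actual) / actual * 100

# AutoComplete new service (Table 34): 85.12 estimated vs 91 actual
err = relative_error_pct(85.12, 91)   # ~6.46%
```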

Table 31: Case studies description

Project Alpha:
- Customer: E-Government; domain: financial
- Total estimated project duration: 9 months; total actual project duration: 11 months
- Project size: M; project technology: Asp.Net, WCF, SilverLight, Sql Server 2005
- Services (team size in persons): Customer Name AutoComplete Service (3), Change Password Service (6), Integration With Customer Service (7), Client "X" Integration Service (9), Calculate Totals Service (5)

Project Beta:
- Customer: E-Government; domain: telecommunications
- Total estimated project duration: 5 months; total actual project duration: 6 months
- Project size: M; project technology: Asp.Net, Sql Server 2005
- Services (team size in persons): Invoice Service (7)

In the next sections the estimation approaches discussed in the previous chapter will be applied.

4.1. Project Alpha

Project Alpha is implemented in Egyptian post offices to allow major companies (customers) to accept deposits made at post offices. Those deposits are integrated into the customer's internal system. The back end in the post offices is a website using Asp.Net with a SQL Server database. The project was built as traditional software; however, the most reusable parts of the website were either developed as services from scratch or migrated into services.

The services were made using either WCF or traditional web services. The application of the proposed approach to the project's services is detailed next in this section.

4.1.1. Customer Name AutoComplete Service

This service takes the first letters of a customer's name and returns a list of customers whose names start with those letters. The service was developed from scratch and later modified to enhance its performance, so both the effort of building it from scratch and the migration effort are included in the study. For the new service, both adjusted function point and phased effort estimation are used; for the migrated service, phased effort estimation is used.

4.1.1.1. New Service Estimation

a) Adjusted Function Point Effort Estimation

The adjusted function point approach estimates the effort based on the functions of the service. The functions of the service and their unadjusted function points are shown in Table 32; the total unadjusted function point count is 14. The cost factors are weighted based on the requirements documentation, with ignored factors weighted 0. These factors are shown in Table 33, which gives TDI = 11.

Table 32: AutoComplete unadjusted function point count

Module              Function Description                                   Type   RET/FTR   DET   Complexity Rating   Function Points
AutoComplete Name   select list of customers based on the first letters    ILF    1         2     Low                 7
AutoComplete Name   takes the first letters of the customer's name         EI     1         2     Low                 3
AutoComplete Name   return list of names                                   EO     1         2     Low                 4
Total Unadjusted Function Point Count: 14


Table 33: AutoComplete cost factors and their weights

No.  General System Characteristic      Consideration (Y/N)  Degree of Influence  Notes
1    Data communications                Yes                  2                    SOAP
2    Distributed data processing        No                   0
3    Performance                        Yes                  2                    moderate performance
4    Heavily used configuration         No                   0
5    Transaction rate                   No                   0
6    On-line data entry                 No                   0
7    End-user efficiency                No                   0
8    On-line update                     No                   0
9    Service complexity                 Yes                  1                    synchronous, syntax, orchestration: low complexity
10   Reusability                        Yes                  0                    not reusable
11   Installation ease (SOA maturity)   Yes                  3                    one of the early SOA projects
12   Operational ease                   No                   0
13   Multiple sites                     No                   0
14   Flexibility                        Yes                  2                    moderate flexibility
15   Service integration                Yes                  1                    integrated with only one website
Total Degree of Influence (TDI): 11

From equation 3, VAF = (11 * 0.01) + 0.65 = 0.76. From equation 4, the adjusted FP count = 14 * 0.76 = 10.64. From equation 5, the estimated total effort is 85.12 man-hours, whereas the actual effort is 91 man-hours. The relative error, calculated from equation 7, is 6.46%, as shown in Table 34.

Table 34: Relative error of the AutoComplete new service using adjusted function points

Estimated Total Effort (Man-Hour)   Actual Effort (Man-Hour)   Relative Error
85.12                               91                         6.46%
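The whole adjusted function point chain for this service can be replayed in a few lines; the constants 0.01 and 0.65 come from the standard VAF formula (equation 3) and the 8 h/FP factor from equation 5:

```python
ufp = 14   # unadjusted function points (Table 32)
tdi = 11   # total degree of influence (Table 33)

vaf = tdi * 0.01 + 0.65            # equation 3 -> 0.76
adjusted_fp = ufp * vaf            # equation 4 -> 10.64
effort = adjusted_fp * 8           # equation 5 -> 85.12 man-hours

actual = 91
rel_err = abs(effort - actual) / actual * 100   # ~6.46%, equation 7
```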


Figure 31 compares the effort estimated using adjusted function points with the actual effort. The actual effort is higher than the estimated effort, with a relative error of 6.46%: adjusted function points underestimated the effort of this service.


Figure 31: Estimated Effort using Function point versus the actual effort for Customer Name AutoComplete Service

b) Phased Effort Estimation

As mentioned earlier, a new service can be estimated by either adjusted function points or phased effort estimation. In this section, phased effort estimation is applied to the Customer Name AutoComplete service. Using the phased effort ratios in Table 29 and equation 6, the estimated effort of each phase is calculated. For example, the design phase estimated effort is (12 * 15) / 16 = 11.25 man-hours. Note that the requirement phase estimated effort equals its actual effort, since the requirement phase effort is not estimated. The results are shown in Table 35.


Table 35: AutoComplete new service phased estimation results (effort in Man-Hours)

Phase          Estimated effort ratio (%)   Estimated effort   Actual effort   Actual effort ratio (%)   Relative error (%)
Requirements   16                           12                 12              13.19                     0.00
Design         15                           11.25              19              20.88                     40.79
Development    40                           30                 34              37.36                     11.76
Testing        22                           16.5               16              17.58                     3.13
Integration    7                            5.25               10              10.99                     47.50
Total          100                          75                 91              100                       17.58
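The estimated-effort column of Table 35 is reproduced by applying equation 6 to every ratio in Table 29 with the 12 man-hours actually spent on requirements:

```python
ratios = {"Requirements": 16, "Design": 15, "Development": 40,
          "Testing": 22, "Integration": 7}   # Table 29 percentages
req_actual = 12                              # measured requirement-phase man-hours

estimates = {phase: pct * req_actual / ratios["Requirements"]
             for phase, pct in ratios.items()}
total = sum(estimates.values())              # 75.0 man-hours, as in Table 35
```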


Figure 32: Estimated effort versus actual effort for Customer Name AutoComplete new service

Figure 32 compares the estimated and actual effort in man-hours. The difference between actual and estimated effort is small in the testing phase but large in both the design and integration phases.



Figure 33: Effort estimation relative error among the project phases for Customer Name AutoComplete new service

Figure 33 shows the relative error across the phases. There is underestimation in the design, development, and integration phases, and a slight overestimation in the testing phase. Figure 34 compares the estimated and actual effort ratios across the project phases. The actual effort ratio is lower than the estimated ratio in the requirements, development, and testing phases, and higher in the design and integration phases.



Figure 34: Comparison between estimated effort ratio and actual effort ratio among the project phases for Customer Name AutoComplete new service

Table 36: Adjusted function point relative error compared to the phased effort distribution relative error

Adjusted Function Point Relative Error (%)   Phased Effort Distribution Relative Error (%)
6.46                                         17.58

Table 36 compares the adjusted function point relative error with the phased effort distribution relative error. Figure 35 shows that the adjusted function point estimate of the total effort is more accurate than the phased effort distribution estimate.



Figure 35: Comparison between adjusted function point and phased effort distribution relative error for Customer Name AutoComplete new service

4.1.1.2. Migrated Service

As mentioned earlier, this service was built from scratch and later modified. In the migration step, several enhancements were undertaken: performance was improved by returning only the top 10 customers rather than all of them, and the service is now invoked after a minimum of 3 letters rather than just one. The migration strategy used was re-engineering. The estimated effort ratios in Table 12 and equation 6 were used to calculate the estimated effort; for example, the development phase estimated effort = (8 * 18.92) / 21.62 = 7 man-hours. A comparison between actual and estimated effort and the relative error is shown in Figure 36. The actual effort is concentrated mainly in the design, testing, and development phases. The largest difference between actual and estimated effort is in the implementation phase, and the smallest is in the development phase.
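The same equation applied with the re-engineering ratios reproduces the estimated column of Table 37, given the 8 man-hours measured for requirements:

```python
# Re-engineering migration phase ratios (percent), as used for this service
reeng = {"Requirements": 21.62, "Design": 27.03, "Development": 18.92,
         "Testing": 18.92, "Implementation": 13.51}
req_actual = 8   # measured requirement-phase man-hours

# Equation 6 per phase, rounded to two decimals
est = {p: round(r * req_actual / reeng["Requirements"], 2) for p, r in reeng.items()}
# Design comes out as 10.0 and Development as 7.0 man-hours, matching Table 37
```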


Table 37: Estimated and actual effort for Customer Name AutoComplete migrated service (effort in Man-Hours)

Phase            Estimated effort ratio (%)   Estimated effort   Actual effort   Actual effort ratio (%)   Relative error (%)
Requirements     21.62                        8                  8               17.78                     0.00
Design           27.03                        10                 12              26.67                     16.67
Development      18.92                        7                  8.5             18.89                     17.65
Testing          18.92                        7                  9               20.00                     22.22
Implementation   13.51                        5                  7.5             16.67                     33.33
Total            100                          37                 45              100                       17.78


Figure 36: Estimated effort and actual effort for Customer Name AutoComplete migrated service

Figure 37: Relative error among the project phases for Customer Name AutoComplete migrated service


Figure 37 plots the relative error across the phases. The highest relative error is in the implementation phase; the lowest is in the design and development phases, indicating more accurate estimation there.


Figure 38: Estimated Effort ratio of customer name autocomplete service compared to its actual effort ratio

Figure 38 compares the estimated and actual effort ratios. The differences are generally small, except in the requirements and implementation phases, where they are larger.

4.1.2. Change Password Service

This service enables users to change their password according to a specific password policy. A password policy existed, but it was changed to be more restrictive in order to increase the security level. The old service was created from scratch as a web service. In the migration step, the password policy was modified to be more secure and the performance was also enhanced. The user could already change the password, but a "forgot password" feature was added as new functionality. Unfortunately, the data of the new service is not available; only the data of the migrated service is. The migration strategy used is re-engineering.


4.1.2.1. Migrated Service Effort Estimation

The migration estimated effort, actual effort, and relative error are shown in Table 38.

Table 38: Change Password migrated service results

Phase                          Estimated effort ratio (%)   Estimated effort (Man-Hour)   Actual effort (Man-Hour)   Relative error (%)   Actual effort ratio (%)
Requirements                   21.62                        19                            19                         0.00                 20.00
Design                         27.03                        23.75                         25                         4.98                 26.32
Development                    18.92                        16.63                         17                         2.19                 17.89
Testing                        18.92                        16.63                         19                         12.49                20.00
Implementation & Integration   13.51                        11.87                         15                         20.85                15.79
Total                          100                          87.88                         95                         7.49                 100


Figure 39: Estimated effort of Change Password migrated service compared to the actual effort

Figure 39 compares the estimated and actual effort. There is a slight underestimation in the development phase, while the difference is more significant in the integration, testing, and design phases.



Figure 40: Relative error in the different phases of change password migrated service

As Figure 40 shows, the highest relative error is in the implementation phase, while the development phase has the most accurate estimation with the lowest relative error.

Figure 41: Estimated effort ratio of Change Password migrated service compared to the actual effort ratio distributed among project phases

Figure 41 shows that there is only a slight difference between the estimated and actual effort ratios.


4.1.3. Integration with Customer Service

The main functionality of this service is integration with the customer: it sends customers their data of interest in the required format. The service data are inserted into a temporary database at the customer, who invokes this web service to get the data. The service was built using WCF technology and selects data from a SQL Server 2005 database. This service was also developed from scratch, but the data of the new service is incomplete; only portions of its activities are available. The modifications included adding new columns and changing the data format; performance enhancement was also a concern, since the service is invoked on a regular basis. The migration strategy is re-engineering. Because the migrated service data are complete, only the migrated service is included in our experiment.

4.1.3.1. Migrated Service Effort Estimation

Applying the same steps as above gives the results in Table 39, which shows a total effort relative error of 13.76%. All the results in Table 39 are plotted in the subsequent charts. In Figure 42 the actual effort is compared to the estimated effort: the actual effort is slightly higher in all phases, with the largest differences in the design and development phases.

Table 39: Integration with Customer migrated service effort estimation

Phase                          Estimated effort ratio (%)   Estimated effort (Man-Hour)   Actual effort (Man-Hour)   Relative error (%)   Actual effort ratio (%)
Requirements                   21.62                        22                            22                         0.00                 18.64
Design                         27.03                        27.51                         32                         14.05                27.12
Development                    18.92                        19.25                         25                         22.99                21.19
Testing                        18.92                        19.25                         23                         16.29                19.49
Implementation & Integration   13.51                        13.75                         16                         14.08                13.56
Total                          100                          101.76                        118                        13.76                100



Figure 42: Estimated effort compared to actual effort of Integration with Customer migrated service


Figure 43: Estimated effort ratio compared to actual effort ratio distributed among the project phases for Integration with Customer migrated service

Figure 43 compares the estimated and actual effort ratios. The main difference is in the requirements phase, while the estimated and actual ratios are close in the implementation and integration phase.



Figure 44: Relative error in the different project phases of Integration with Customer migrated service

Figure 44 shows the effort estimation relative error in the different phases. The highest estimation error is in the development phase; the lowest is in the design and integration phases.

4.1.4. Client “X” Integration Service

This service was built to satisfy a specific customer's exact needs; for the purposes of this research, the customer's identity is hidden and the customer is named "X". The main functionality of this service is to transfer data from the temporary database at the client to the customer's internal system (an SAP system).

4.1.4.1. New Service Estimation

a) Adjusted Function Point Effort Estimation

This service was built from scratch to satisfy a specific customer's needs. The constituent modules with their function complexities and unadjusted function points are shown in Table 40; the cost factors and their weights are shown in Table 41. The adjusted function point results are shown in Table 42: the relative error is 14.59%.


Table 40: Client "X" integration unadjusted function point count

Module       Function Description   Type   RET/FTR   DET   Complexity Rating   Function Points
Payment      Payment                ILF    2         6     Low                 7
Payment      SetPaymentAck          EI     2         6     Average             4
Payment      SetPaymentSAPID        EI     2         6     Average             4
Payment      SetPaymentData         EI     6         6     High                6
Depositors   Depositors             ILF    1         7     Low                 7
Depositors   SetDepositorData       EI     7         7     High                6
Depositors   GetDepositorByID       EI     1         7     Low                 3
Total Function Point Count: 37

Table 41: Client "X" integration general system characteristics

No.  General System Characteristic      Consideration (Y/N)  Degree of Influence  Notes
1    Data communications                Yes                  1                    SOAP
2    Distributed data processing        No                   0
3    Performance                        Yes                  1                    low performance is satisfying
4    Heavily used configuration         No                   0
5    Transaction rate                   No                   0
6    On-line data entry                 No                   0
7    End-user efficiency                No                   0
8    On-line update                     No                   0
9    Service complexity                 Yes                  1                    synchronous, syntax, orchestration: low complexity
10   Reusability                        Yes                  1                    used only in one application
11   Installation ease (SOA maturity)   Yes                  1                    the 5th SOA project for the organization
12   Operational ease                   No                   0
13   Multiple sites                     No                   0
14   Flexibility                        Yes                  2                    moderate flexibility
15   Service integration                Yes                  1                    integrated with only one website
Total Degree of Influence (TDI): 8

Table 42: Adjusted function point estimated effort versus the actual effort for Client "X" integration new service

Estimated Total Effort (Man-Hour)   Actual Effort (Man-Hour)   Relative Error
216.08                              253                        14.59%


Figure 45: Client “X” Integration new service estimated effort using adjusted Function point compared to the actual effort

Figure 45 shows the difference between the effort estimated using adjusted function points and the actual effort. The actual effort is higher than the estimated effort, with a relative error of 14.59%.

b) Phased Effort Estimation

The phased effort estimation results are shown in Table 43. The per-phase relative error ranges from 5.92% to 34.38%; however, the relative error of the total service effort is only 3.66%.


Table 43: Client "X" integration new service: phased effort estimation results (effort in Man-Hours)

Phase                          Estimated effort ratio (%)   Estimated effort   Actual effort   Actual effort ratio (%)   Relative error (%)
Requirements & Analysis        16                           39                 39              15.42                     0.00
Design                         15                           36.56              42              16.60                     12.95
Development                    40                           97.5               89              35.18                     9.55
Testing                        22                           53.62              57              22.53                     5.92
Implementation & Integration   7                            17.06              26              10.28                     34.38
Total                          100                          243.75             253             100.00                    3.66
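The small total error despite large per-phase errors can be verified directly; the numbers below are those of Table 43:

```python
ratios = {"Requirements & Analysis": 16, "Design": 15, "Development": 40,
          "Testing": 22, "Implementation & Integration": 7}   # Table 29
req_actual = 39                                               # measured man-hours

# Equation 6 summed over all phases (the ratios total 100%)
estimated_total = sum(r * req_actual / 16 for r in ratios.values())   # 243.75
actual_total = 253
total_rel_err = abs(estimated_total - actual_total) / actual_total * 100   # ~3.66%
```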


Figure 46: Client “X” Integration new service estimated phased effort distribution compared to the actual effort

Figure 46 shows the difference between estimated and actual effort in the different phases. The main difference is in the integration phase, while the design and testing phases differ only slightly.



Figure 47: Client “X” Integration new service estimated effort distribution compared to the actual effort ratio

Figure 47 compares the estimated and actual effort ratios across the phases. Overestimation exists in the requirements and development phases, while underestimation exists in the remaining phases. The gap between estimated and actual is largest in the development and integration phases.



Figure 48:Client “X” Integration new service estimated phased effort distribution relative error

Figure 48 plots the relative error across the phases. The highest relative error is in the integration phase (34.38%), while the lowest is in the testing phase (5.92%). Table 44 compares the relative errors of the adjusted function point and phased effort distribution approaches; as Figure 49 shows, the phased effort distribution approach is more accurate than adjusted function points for this service.

Table 44: Client "X" Integration new service adjusted function point relative error compared to the phased effort distribution relative error

Adjusted Function Point Relative Error   Phased Effort Distribution Relative Error
14.59%                                   3.66%



Figure 49: Comparison between the relative error of adjusted function point and the phase effort distribution for the Client “X” Integration service

4.1.5. Calculate Totals Service

This service was built from scratch. Its main function is to perform calculations (sum, average, count) over fields of the transactions table in the Alpha project; the totals are displayed in web pages using Silverlight. The technologies used are Silverlight, WCF services, and SQL Server 2005.

4.1.5.1. New Service Estimation

a) Adjusted Function Point Effort Estimation

The service modules and their function point counts are detailed in Table 45, and the cost factors and their weights in Table 46. The total cost factor weight (TDI) is 18.

Table 45: Calculate Totals service unadjusted function point count

Module    Function Description        Type   RET/FTR   DET   Complexity Rating   Function Points
Totals    Totals                      ILF    1         5     Low                 7
Average   Average                     ILF    1         4     Low                 7
Totals    GetDayTotals(Date _Date)    EI     1         5     Low                 3
Average   GetDayAverage(Date _Date)   EI     1         4     Low                 3
Totals    Totals                      EO     4         5     Average             5
Average   Average                     EO     3         4     Low                 4
Total Function Point Count: 29


Table 46: Calculate Totals service general system characteristics

No.  General System Characteristic      Consideration (Y/N)  Degree of Influence  Notes
1    Data communications                Yes                  2                    SOAP
2    Distributed data processing        No                   0                    ignored by SOA
3    Performance                        Yes                  4                    high performance
4    Heavily used configuration         No                   0                    ignored by SOA
5    Transaction rate                   No                   0                    ignored by SOA
6    On-line data entry                 No                   0                    ignored by SOA
7    End-user efficiency                No                   0                    ignored by SOA
8    On-line update                     No                   0                    ignored by SOA
9    Service complexity                 Yes                  1                    asynchronous, orchestration, syntax: low complexity
10   Reusability                        Yes                  2                    moderate reusability
11   Installation ease (SOA maturity)   Yes                  3                    third SOA project
12   Operational ease                   No                   0                    ignored by SOA
13   Multiple sites                     No                   0                    ignored by SOA
14   Flexibility                        Yes                  3                    medium flexibility
15   Service integration                Yes                  3                    integrated with Silverlight (complicated technology)
Total Degree of Influence (TDI): 18

Table 47 shows the estimated effort (192.56 man-hours), the actual effort (224 man-hours), and the relative error (14.04%).

Table 47: Calculate Totals service adjusted function point estimates and the relative error

Estimated Total Effort (Man-Hour)   Actual Effort (Man-Hour)   Relative Error
192.56                              224                        14.04%
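The Table 47 figures follow from the same adjusted function point chain as before, now with 29 unadjusted FP and TDI = 18:

```python
ufp, tdi = 29, 18                 # Tables 45 and 46
vaf = tdi * 0.01 + 0.65           # equation 3 -> 0.83
effort = ufp * vaf * 8            # equations 4 and 5 -> 192.56 man-hours
rel_err = abs(effort - 224) / 224 * 100   # ~14.04%, equation 7
```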


Figure 50 shows that the adjusted function point approach underestimated the effort of the Calculate Totals service.

Figure 50: Calculate totals new service adjusted function point estimated effort compared to the actual effort

b) Phased Effort Estimation

Table 48 shows the Calculate Totals service phased effort, with a total relative error of 19.08%. Figure 51 compares the estimated and actual effort in man-hours. There is underestimation in all phases of the project, with the largest differences in the design and integration phases.

Table 48: Totals service phased effort ratio results

Estimated Phase

Estimated

Actual

Effort ratio Effort

Effort Relative

(Man-Hour)

Error(%)

Actual effort

(%)

(Man-Hour)

16

29.00

34

0.00

15.18

Design

15

27.19

42

35.27

18.75

Development

40

72.50

81

10.49

36.16

Testing

22

39.88

45

11.39

20.09

Integration

7

12.69

22

42.33

9.82

Total

100

181.25

224

19.08

100

Requirements & Analysis
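The phased figures above follow from scaling the ratio column by the requirements-phase effort: the total is the requirements estimate divided by its ratio, and each phase is its ratio times that total. A minimal sketch using the new-service ratios from Table 48 and, as the table does, a requirements estimate of 29 man-hours:

```python
# New-service phase ratios from Table 48 (fraction of total effort)
RATIOS = {
    "Requirements & Analysis": 0.16,
    "Design": 0.15,
    "Development": 0.40,
    "Testing": 0.22,
    "Integration": 0.07,
}

def phased_estimate(requirements_effort: float) -> dict:
    """Scale the distribution so the requirements phase matches the known effort."""
    total = requirements_effort / RATIOS["Requirements & Analysis"]
    return {phase: round(total * ratio, 2) for phase, ratio in RATIOS.items()}

estimate = phased_estimate(29.0)      # Calculate Totals service
print(estimate["Design"])             # 27.19, as in Table 48
print(round(29.0 / 0.16, 2))          # 181.25 total man-hours
```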


Figure 51: Calculate Totals new service estimated effort compared to the actual effort in the different phases

Figure 52 shows the estimated and the actual effort ratios among the phases. The estimated ratio is higher than the actual ratio in the requirements, development and testing phases, while the actual ratio is higher than the estimated ratio in the design and integration phases.


Figure 52: Calculate Totals new service estimated effort ratio compared to the actual effort ratio in the different phases

Figure 53: Calculate Totals new service estimated effort ratio relative error in the different phases

Figure 53 shows the estimation relative error in the different phases. The highest relative error is in the integration and design phases, while the lowest relative error is in the development and testing phases.


Table 49 shows the comparison in accuracy between the adjusted function point and the phased effort distribution.

Table 49: Calculate Totals new service estimated effort using adjusted function point compared to the phased effort distribution relative error

Adjusted Function Point Relative Error (%) | Phased Effort Distribution Relative Error (%)
14.04 | 19.08

As shown in Figure 54, the adjusted function point is more accurate than the phased effort distribution.

Figure 54: Calculate Totals service adjusted function point relative error compared to the phased effort distribution

4.2. Project Beta: This project was built to allow citizens to pay their bills at post offices. The system is integrated with the billers' internal systems for inquiring about and paying bills. The back end in the post offices is a website built with ASP.NET and a SQL Server 2005 database. The project was built as traditional software; however, the most reusable parts of the website were either developed from scratch as services or migrated into web services, implemented using either WCF or standard web services. Unfortunately, there is a lack of documentation for this project and incomplete data about the effort spent on it. Only one service has complete data, and it is included in our study.

4.2.1. Invoice Service:

Service Description: This service has been built from scratch. Its main functionality is to inquire about the bill status (paid or not paid) and to pay the invoice.

4.2.1.1. New Service Estimation:

a) Adjusted Function Point Effort Estimation: The unadjusted function point count is shown in Table 50, with a total of 41 unadjusted function points.

Table 50: Invoice service unadjusted function point count

Module | Function Description | Type | RET/FTR | DET | Complexity Rating | Function Points
Inquiry | Return status (Paid / Not Paid) | EQ | 1 | 2 | Low | 3
PayInvoice | Change the status of this invoice to paid by telephone number | EI | 1 | 5 | Low | 3
GetInvoice | Get the invoice details by the telephone number | EI | 1 | 3 | Low | 3
PayInvoice | Change the status of this invoice | EO | 1 | 5 | Low | 4
GetInvoice | Get the invoice details | EQ | 5 | 3 | Average | 4
TelephoneNumberHeader | Table contains the telephone numbers | ILF | 3 | 3 | Low | 7
TelephoneNumberDetails | Table contains the telephone number details | ILF | 3 | 4 | Low | 7
Invoice | Table contains the invoice details | ILF | 1 | 3 | Low | 7
Inquiry | Inquire about the selected telephone number | EI | 1 | 2 | Low | 3
Total Function Point Count: 41
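The unadjusted count is the sum of a weight for each function, looked up by its type and complexity rating. A quick check of Table 50's total; the weights shown are the standard IFPUG values for the cells this table uses:

```python
# Standard IFPUG weights for the (type, complexity) cells used in Table 50
WEIGHTS = {
    ("EI", "Low"): 3, ("EO", "Low"): 4,
    ("EQ", "Low"): 3, ("EQ", "Average"): 4,
    ("ILF", "Low"): 7,
}

# (type, complexity) for each of the nine functions in Table 50, top to bottom
functions = [
    ("EQ", "Low"), ("EI", "Low"), ("EI", "Low"), ("EO", "Low"),
    ("EQ", "Average"), ("ILF", "Low"), ("ILF", "Low"), ("ILF", "Low"),
    ("EI", "Low"),
]

total_ufp = sum(WEIGHTS[f] for f in functions)
print(total_ufp)  # 41
```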


The cost factors and their weights are detailed in Table 51; the total degree of influence = 16.

Table 51: Invoice service general system characteristics

#  | General System Characteristic    | Consideration (Y/N) | Degree of Influence | Notes
1  | Data communications              | Yes | 2 | SOAP
2  | Distributed data processing      | No  | 0 |
3  | Performance                      | Yes | 2 | Moderate performance
4  | Heavily used configuration       | No  | 0 |
5  | Transaction rate                 | No  | 0 |
6  | On-line data entry               | No  | 0 |
7  | End-user efficiency              | No  | 0 |
8  | On-line update                   | No  | 0 |
9  | Service complexity               | Yes | 1 | Synchronous, syntax, orchestration
10 | Reusability                      | Yes | 2 | Moderate reusability
11 | Installation ease (SOA maturity) | Yes | 3 | One of the early SOA projects
12 | Operational ease                 | No  | 0 |
13 | Multiple sites                   | No  | 0 |
14 | Flexibility                      | Yes | 2 | Moderate flexibility
15 | Service integration              | Yes | 4 | Integration is a major concern
Total Degree of Influence (TDI) = 16
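Putting the pieces together for the Invoice service: 41 unadjusted function points (Table 50) and TDI = 16 (Table 51) give VAF = 0.81 and an adjusted count of 33.21 FP. The 265.68 man-hour figure below is consistent with a productivity rate of 8 man-hours per adjusted function point; that rate is inferred here from the reported numbers, so treat it as an assumption of this sketch rather than the thesis's stated constant:

```python
UFP = 41     # unadjusted function points (Table 50)
TDI = 16     # total degree of influence (Table 51)
RATE = 8.0   # man-hours per adjusted FP; inferred from the reported figures

vaf = 0.65 + 0.01 * TDI       # IFPUG value adjustment factor = 0.81
afp = UFP * vaf               # adjusted function points = 33.21
effort = afp * RATE           # estimated effort in man-hours
print(round(effort, 2))       # 265.68
```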

The estimated total effort is 265.68 man-hours and the actual effort is 223 man-hours, so the relative error = 19.14%, as shown in Table 52.


Table 52: Invoice new service estimated effort using adjusted function point compared to the actual effort

Estimated Total Effort (Man-Hour) | Actual Effort (Man-Hour) | Relative Error
265.68 | 223 | 19.14%

Figure 55: Invoice service estimated effort using adjusted function point compared to the actual effort

Figure 55 shows that the adjusted function point has overestimated the effort.

b) Phased Effort Estimation: The actual and estimated phased efforts are shown in Table 53. The total service relative error is 7.51%.

Table 53: Invoice new service phased effort estimation results

Phase | Estimated Effort ratio (%) | Estimated Effort (Man-Hour) | Actual Effort (Man-Hour) | Relative Error (%) | Actual Effort ratio (%)
Requirements & Analysis | 16 | 33.00 | 33 | 0.00 | 14.80
Design | 15 | 30.94 | 41 | 24.54 | 18.39
Development | 40 | 82.50 | 87 | 5.17 | 39.01
Testing | 22 | 45.38 | 39 | 16.35 | 17.49
Integration | 7 | 14.44 | 23 | 37.23 | 10.31
Total | 100 | 206.25 | 223 | 7.51 | 100

Figure 56: Invoice new service estimated effort compared to the actual effort in the different phases

Figure 56 shows a comparison between the estimated and actual effort in man-hours. The actual effort is higher than the estimated effort in the design, development and integration phases, while in the testing phase the estimated effort is higher than the actual effort.

Figure 57: Invoice new service estimated effort ratio compared to the actual effort ratio in the different phases


Figure 57 compares the estimated effort ratio to the actual effort ratio. The estimated effort ratio is slightly higher than the actual ratio in the requirements and development phases, and significantly higher in the testing phase; the actual ratio is higher than the estimated ratio in the design and integration phases. Figure 58 shows the relative error in the different phases: the most significant relative errors are in the integration and design phases, and the lowest relative error is in the development phase. Table 54 shows a comparison between the adjusted function point and phased effort distribution relative errors. As Figure 59 shows, the phased effort distribution is more accurate than the adjusted function point.

Figure 58: Invoice new service relative error in the different phases

Table 54: Adjusted function point relative error compared to the phased effort distribution relative error

Adjusted function point relative error | Phased effort distribution relative error
19.14% | 7.51%


Figure 59: Adjusted function point relative error compared to the phased effort distribution relative error for the Invoice new service

4.3. Accumulation of the results: After all the case studies have been applied and documented, this section accumulates their results.

4.3.1 Accumulation of migrated service results: The overall comparison between the estimated effort and the actual effort of the migrated services in the case studies is shown in Table 55.

Table 55: Estimated effort compared to the actual effort in all the migrated services of the case studies

Service name | Estimated effort (Man-Hour) | Actual Effort (Man-Hour) | Relative Error (%)
Customer Name AutoComplete | 37 | 45 | 17.78
Change Password | 87.88 | 95 | 7.49
Integration With Customer | 101.76 | 118 | 13.76


Figure 60: Estimated effort compared to the actual effort in the migrated services of the case studies

Figure 60 compares the estimated and the actual efforts for the migrated services in the case studies. The actual effort is higher than the estimated effort in all migrated services, which indicates underestimation. There is a slight difference between the estimated and the actual efforts in both the Customer Name AutoComplete service and the Change Password service, while for the Integration With Customer service the estimation accuracy is lower. Figure 61 shows the relative error for the migrated services; the relative error ranges from 7.49% to 17.78%.

Figure 61: Relative error in the migrated services of the case studies

All the migrated service results are shown in Table 56, which lists the estimated effort ratios in the different phases and the corresponding actual effort ratios for the case study services. All these effort ratios are plotted in Figure 62.

Table 56: Estimated effort ratio compared to the actual effort ratio in the migrated services of the case studies

Figure 62: Estimated effort ratio compared to the actual effort ratio in the migrated services of the case studies

Figure 62 shows that in the requirements phase there is a gap between the estimated effort ratio and the actual effort ratios of all services, which indicates overestimation of the requirements phase ratio. In the design phase the estimated and actual effort ratios are very close, which indicates accurate estimation. In the development phase there are some accuracy fluctuations: two services have an estimated effort ratio higher than the actual ratio, while one service has an actual effort ratio equal to the estimated ratio. In the testing phase there is only a slight effort ratio underestimation, which implies reasonable estimation accuracy. In the integration phase the actual effort ratio for two of the services is close to the estimated effort ratio, which indicates estimation accuracy; one service's actual integration effort ratio is significantly higher than the estimated ratio.

Table 57: Effort estimation relative error in the different phases for the migrated services in the case studies

Phase | Integration service relative error (%) | AutoComplete customer name service relative error (%) | Change password relative error (%)
Requirements & Analysis | 0.00 | 0.00 | 0.00
Design | 14.05 | 16.67 | 15.72
Development | 22.99 | 17.65 | 30.28
Testing | 16.29 | 22.22 | 26.68
Integration | 14.08 | 33.33 | 57.76

Table 57 shows the estimation relative errors in the different phases; these data are plotted in Figure 63. The most significant relative error is in the Change Password service integration phase. The design phase relative error is limited for all services of the case studies.

Figure 63: Effort estimation relative error in the different phases for all the migrated services in the case studies

4.3.2 Accumulation of new service results: Table 58 shows the overall results for the new service effort distribution ratios among the different phases.

Table 58: Phased effort distribution accumulation results for all the new services of the case studies

Figure 64: Phased effort distribution accumulation results for the new services of the case studies

Figure 64 shows the estimated effort ratios compared to the actual effort ratios for the new services. There is a slight overestimation of the effort ratio in the requirements phase and an underestimation in the design phase. There is a slight overestimation in the development phase. In the testing phase there is a slight underestimation for one service only, while the effort ratio of the remaining services is overestimated. The integration effort ratio is underestimated in all the services.


Table 59: Effort estimation relative error in the different phases for the new services in the case studies

Phase | Client "X" Integration Service relative error (%) | Invoice Service relative error (%) | Calculate Totals service relative error (%) | Customer Name Autocomplete service relative error (%)
Requirements & Analysis | 0.00 | 0.00 | 0.00 | 0.00
Design | 12.95 | 24.54 | 35.27 | 40.79
Development | 9.55 | 5.17 | 10.49 | 11.76
Testing | 5.92 | 16.35 | 11.39 | 3.13
Integration | 34.38 | 37.23 | 42.33 | 47.50

Table 59 shows the effort estimation relative error in the different phases for the new services in the case studies, and Figure 65 plots the relative error in the effort ratios across the phases. The Customer Name Autocomplete service has the highest relative error in the integration and development phases, and the lowest relative error in the testing phase effort ratio. The estimation relative error for the Calculate Totals service is high in both the design and integration phases.

Figure 65: Effort estimation relative error in the different phases for the new services in the case studies

Table 60 shows a comparison between the adjusted function point and the phased effort distribution for the new services of the case studies.


Table 60: Comparison between the adjusted function point and the phased effort distribution for the new services of the case studies

Service name | Estimation Approach Used | Estimated effort (Man-Hour) | Actual Effort (Man-Hour) | Relative Error (%)
Customer Name AutoComplete | Adjusted Function Point | 85.12 | 91 | 6.46%
Customer Name AutoComplete | Phased Effort Estimation | 75 | 91 | 17.58%
Client "X" Integration | Adjusted Function Point | 216.08 | 253 | 14.59%
Client "X" Integration | Phased Effort Estimation | 243.75 | 253 | 3.66%
Calculate Totals | Adjusted Function Point | 192.56 | 224 | 14.04%
Calculate Totals | Phased Effort Estimation | 181.25 | 224 | 19.08%
Invoice Service | Adjusted Function Point | 265.68 | 223 | 19.14%
Invoice Service | Phased Effort Estimation | 206.25 | 223 | 7.51%

Figure 66 shows that for the Customer Name Autocomplete service, both the adjusted function point and the phased effort distribution estimates are very close to the actual effort. For the Client "X" Integration service, the phased effort distribution estimate is more accurate than the adjusted function point. For the Calculate Totals service, the adjusted function point gives a more accurate estimate than the phased effort distribution. For the Invoice service, the phased effort distribution is more accurate than the adjusted function point; the adjusted function point overestimated the effort, while the phased effort distribution underestimated it.


Figure 66: Comparison between the estimated effort using adjusted function point and phased effort distribution and the actual effort

Figure 67: Comparison between the adjusted function point and phased effort distribution relative error

As shown in Figure 67, for the Customer Name Autocomplete service the adjusted function point is more accurate than the phased effort distribution. For the Client "X" Integration service, the phased effort distribution is significantly more accurate than the adjusted function point. For the Calculate Totals service, both the adjusted function point and the phased effort distribution have a slightly high relative error. For the Invoice service, the phased effort distribution is more accurate than the adjusted function point.

Table 61: Accumulation of the results of the case studies

Project Name | Service name | Type of Service | Estimation Approach Used | Estimated effort (Man-Hour) | Actual Effort (Man-Hour) | Relative Error
Project Alpha | Customer Name AutoComplete | New | Adjusted Function Point | 85.12 | 91 | 6.46%
Project Alpha | Customer Name AutoComplete | New | Phased Effort Estimation | 75 | 91 | 17.58%
Project Alpha | Customer Name AutoComplete | Migrated | Phased Effort Estimation | 37 | 45 | 17.78%
Project Alpha | Change Password | Migrated | Phased Effort Estimation | 87.88 | 95 | 7.49%
Project Alpha | Integration With Customer | Migrated | Phased Effort Estimation | 101.76 | 118 | 13.76%
Project Alpha | Client "X" Integration | New | Adjusted Function Point | 216.08 | 253 | 14.59%
Project Alpha | Client "X" Integration | New | Phased Effort Estimation | 243.75 | 253 | 3.66%
Project Alpha | Calculate Totals | New | Adjusted Function Point | 192.56 | 224 | 14.04%
Project Alpha | Calculate Totals | New | Phased Effort Estimation | 181.25 | 224 | 19.08%
Project Beta | Invoice Service | New | Adjusted Function Point | 265.68 | 223 | 19.14%
Project Beta | Invoice Service | New | Phased Effort Estimation | 206.25 | 223 | 7.51%
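The minimum and maximum figures quoted below can be reproduced directly from Table 61; a small sketch groups the (estimated, actual) pairs by approach and reports each group's relative-error range:

```python
# (estimated, actual) man-hour pairs from Table 61, grouped by approach
results = {
    "Migrated (phased)": [(37, 45), (87.88, 95), (101.76, 118)],
    "New (adjusted FP)": [(85.12, 91), (216.08, 253), (192.56, 224), (265.68, 223)],
    "New (phased)":      [(75, 91), (243.75, 253), (181.25, 224), (206.25, 223)],
}

for approach, pairs in results.items():
    errors = [abs(actual - est) / actual * 100 for est, actual in pairs]
    print(f"{approach}: {min(errors):.2f}% to {max(errors):.2f}%")
```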

All the case study results are accumulated and summarized in Table 61, which shows that the migrated service estimation minimum relative error is 7.49% and the maximum relative error is 17.78%. For the new services, the adjusted function point gives a minimum relative error of 6.46% and a maximum of 19.14%, while the phased effort estimation for the new services gives a minimum relative error of 3.66% and a maximum of 19.08%.

These results are considered a significant improvement over the estimation relative error reported in industry, which is around 30% as mentioned in [4].

CHAPTER 5 CONCLUSION

Chapter 5: Conclusion

Effort estimation of software projects in general represents a major challenge for project managers, as inaccurate cost estimation (mainly underestimation) leads to unpredicted risks, schedule slips, mission failure and/or cost overrun. The cost overrun in software projects is around 30%. Effort estimation of SOA projects in particular has not been addressed properly in the existing literature. Traditional effort estimation techniques do not fit SOA projects entirely, as SOA has unique characteristics that were not addressed by the traditional cost estimation approaches, including loose coupling, reusability, composability and discoverability. On the other hand, the cost estimation approaches proposed specifically for SOA projects are still immature, and most of them are impractical or need more development; they cannot be used in real-life projects, as they are more guidelines than practical cost estimation approaches.

This thesis proposes an SOA effort estimation approach that is mainly based on classifying each service into its basic type and estimating the effort using the cost factors related to that service type. The SOA project is broken down into its component services, and each service is classified as available, migrated, new or composed. An available service already exists and can be re-used as is; its development effort consists only of integration and testing effort. A migrated service can be re-used after modifications; its effort estimation is based on the migration strategy: wrapping, re-engineering or replacement. The cost factors for migrated services have been extracted from the existing literature and weighted for each migration strategy, then grouped and summed by phase, which results in a phased effort distribution ratio.
Phased effort distribution estimates the effort of the migrated service early and accurately based on the effort of the requirements phase. It also enables decision makers to choose the adequate migration strategy based on the relative total cost of each strategy. The new service, which is a service to be developed from scratch, has been estimated using two approaches: adjusted function point and phased effort ratio estimation. The adjusted function point is essentially the same as the traditional function point; the main modification is in the cost factors. In the adjusted function point approach, the cost factors are classified into considered traditional function point cost factors, ignored traditional factors, and considered adjusted cost factors. The considered traditional cost factors are factors from the traditional function point whose descriptions have been modified to fit SOA characteristics; each has been weighted based on the complexity of its implementation in SOA. The ignored cost factors are traditional function point cost factors that are either unavailable or inapplicable in SOA; their weights equal zero. The considered adjusted cost factors are factors added by other approaches and adopted by our proposed approach.

The other estimation approach for the new service is phased effort estimation, in which the effort ratio of each project phase is extracted from empirical studies in the existing literature. This phased effort distribution can be used to estimate the effort of the different project phases early in the project by knowing only the effort of the requirements phase. This approach gives phased effort ratios for new and migrated services; this is the first time such ratios have been applied to SOA projects, as the existing phase ratios had been applied to traditional software projects only. The composed service is estimated by breaking the service down into its component services, estimating the effort of each component based on its basic type, and accumulating these efforts together with the effort of integrating the components to create the composed service.

This proposed approach has been applied to industry projects from an Egyptian organization working mainly on E-government projects. The results for the migrated services show that the phased effort relative error ranged from 7.49% to 17.78%; when each phase is estimated separately, the relative error ranges from 2.19% to 33.33%. For the new services, the relative error of the adjusted function point total effort estimate ranged from 6.46% to 19.14%, while the phased effort ratio approach's relative error ranged from 3.66% to 19.08%.
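The composed-service rule summarized above (sum the component estimates, each obtained according to its basic type, then add the effort of integrating the components) can be sketched as follows; the figures are hypothetical and chosen only to illustrate the accumulation:

```python
def composed_service_effort(component_efforts, integration_effort):
    """Composed service = sum of the component-service estimates
    plus the effort of integrating those components."""
    return sum(component_efforts) + integration_effort

# Hypothetical figures: two components estimated by their basic types,
# plus the effort of wiring them together
print(composed_service_effort([120.0, 85.5], 30.0))  # 235.5
```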
For the new services, comparing the phased effort ratios with the adjusted function point shows that the phased effort results are overall more accurate than the adjusted function point. However, the adjusted function point estimates the effort in the early stages of the project, while phased effort estimation estimates the remaining phases from the known effort of the requirements phase, or at least from an expert opinion estimate of it. The overall relative error ranges from 3.66% to 19.14%, which is considered a significant enhancement over the 30% relative error reported in the existing literature.


5.1. Limitations of the study

There are some factors that may affect the validity of the results:

• The sample projects were from the same organization and the same business domain.
• The projects are implemented using the same technology (.NET, SQL Server).
• The available data are from two projects only.
• The only migration strategy with available data is re-engineering. There were replacement case studies, but their data is either incomplete or unavailable. Wrapping-strategy data is also available, but it was implemented as traditional software rather than SOA.
• Lack of project documentation and history prevented further analysis of the case studies.

5.2. Future work

In future work we aim to apply this approach to several domains and various development environments, using data collected worldwide, in order to determine the phased effort distribution for each domain. For the new service approach we used the function point method; in the future we aim to apply other traditional effort estimation approaches such as COCOMO II and COSMIC. Enhancement of the methodology might be needed in order to decrease the relative error.


References [1] Abrams, C., & Schulte, R. W. (2008). Service-oriented architecture overview and guide to SOA research. Gartner Research. [2] Bajwa, I. S., Kazmi, R., Mumtaz, S., Choudhary, M. A., & Naweed, M. S. (2008). SOA and BPM Partnership: A paradigm for Dynamic and Flexible Process and IT Management. World Academy of Science, Engineering and Technology, 45(4), pp.16-22. [3] Krafzig, D., Banke, K., & Slama, D. (2005). Enterprise SOA: service-oriented architecture best practices. Prentice Hall Professional. [4] Jorgensen, M. (2014). What We Do and Don't Know about Software Development Effort Estimation. IEEE software, (2), 37-40. [5] May, L. J. (1998). Major causes of software project failures. CrossTalk: The Journal of Defense Software Engineering, 11(6), pp.9-12. [6] Fairley, R. E. (2011). Managing and leading software projects. John Wiley & Sons. [7] Keil, M., Cule, P. E., Lyytinen, K., & Schmidt, R. C. (1998). A framework for identifying software project risks. Communications of the ACM, 41(11),pp. 76-83. [8] Linthicum, D. (2007, April 17). How Much Will Your SOA Cost? Here are Some Guidelines.

Retrieved

October

26,

2015,

from

http://soa.

sys-con.

com/node/318452. [9] Farrag, E. A. & Moawad, R.(2014). Phased Effort Estimation of Legacy Systems Migration to Service Oriented Architecture. International Journal of Computer and Information Technology, 3 (3). [10]

OASIS SOA Reference Model TC. Retrieved October 26, 2015, from

https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=soa-rm [11]

Open

Group

.

Retrieved

October

26,

2015,

from

http://www.opengroup.org/soa/source-book/soa/soa.htm [12]

Anguelov, Z. V. (2010). Architecture framework in support of effort

estimation

of

legacy

systems

modernization

towards

a

SOA

environment (Doctoral dissertation, TU Delft, Delft University of Technology). [13]

Trendowicz, A., & Jeffery, R. (2014). Software Project Effort Estimation.

Foundations and Best Practice Guidelines for Success, Constructive Cost Model– COCOMO pags, 277-293.

149

[14]

Adams, D., & McNamara, R. (2006). Service-Oriented Architecture and

Best Practices. TIBCO Software, Inc. [15]

Barnes, M., Sholler, D., & Malinverno, P. (2005). Benefits and Challenges

of SOA in Business Terms. Gartner Research [16]

Erl, T., Carter, S., & Ogrinz, M. (2009). SOA: Patterns, Mashups,

Governance, Service Modeling, Executing, and More. InformIT.com, IBM Press. [17]

Kumar, N. (2013, March 15). Enterprise Benefits on Service Oriented

Architecture - SOA - DZone Integration. Retrieved October 26, 2015, from https://dzone.com/articles/enterprise-benefits-service [18]

Erl, T. (2006). Service-oriented architecture: concepts, technology, and

design. Pearson Education India. [19]

Santillo, L. (2007). Seizing and sizing SOA applications with COSMIC

function points. SMEF 2007. [20]

Mahmood, K., Ilahi, M. M., Ahmad, S., & Ahmad, B. (2011, December).

Integration Efforts Estimation in Service Oriented Architecture (SOA) Applications. In Information and Knowledge Management (Vol. 1, No. 2, pp. 23-27). [21]

Steghuis, C. (2006). Service granularity in SOA-projects: a trade-off

analysis. [22]

O'Brien, L. (2009, April). A framework for scope, cost and effort estimation

for service oriented architecture (SOA) projects. In Software Engineering Conference, 2009. ASWEC'09. Australian (pp. 101-110). IEEE. [23]

Li, Z., & O'Brien, L. (2011, March). A qualitative approach to effort

judgment for web service composition based SOA implementations. In Advanced Information Networking and Applications (AINA), 2011 IEEE International Conference on (pp. 586-593). IEEE. [24]

Li, Z., & Keung, J. (2010, June). Software cost estimation framework for

service-oriented architecture systems using divide-and-conquer approach. In Service Oriented System Engineering (SOSE), 2010 Fifth IEEE International Symposium on (pp. 47-54). IEEE. [25]

Mukhopadhyay, D., & Chougule, A. (2012). A survey on web service

discovery approaches. In Advances in Computer Science, Engineering & Applications (pp. 1001-1012). Springer Berlin Heidelberg. [26]

Vegter, W. (2009, June). Critical success factors for a SOA implementation.

In 11th Twente Student Conference on IT, Enschede, June 29th.

[27] Lee, S. P., Chan, L. P., & Lee, E. W. (2006, August). Web services implementation methodology for SOA application. In 2006 IEEE International Conference on Industrial Informatics (pp. 335-340). IEEE.

[28] Papazoglou, M. P., & Van Den Heuvel, W. J. (2006). Service-oriented design and development methodology. International Journal of Web Engineering and Technology, 2(4).

[29] Rush, C., & Roy, R. (2001). Expert judgement in cost estimating: Modelling the reasoning process. Concurrent Engineering, 9(4).

[30] Jørgensen, M. (2004). Top-down and bottom-up expert estimation of software development effort. Information and Software Technology, 46(1).

[31] Moløkken, K., & Jørgensen, M. (2005). Expert estimation of web-development projects: are software professionals in technical roles more optimistic than those in non-technical roles? Empirical Software Engineering, 10(1).

[32] Everett, G. D., & McLeod Jr, R. (2007). Software testing: testing across the entire software development life cycle. John Wiley & Sons.

[33] Seth, A., Agrawal, H., & Singla, A. R. (2014). Techniques for evaluating service oriented systems: A comparative study. Journal of Industrial and Intelligent Information, 2(2).

[34] McLeod, R., & Jordan, E. (2001). Software Engineering: A Project Management Approach. John Wiley & Sons, Inc.

[35] Remenyi, D. (2004). Systems Development: A Project Management Approach, by Raymond McLeod Jr. and Eleanor Jordan, published by Wiley and Sons.

[36] Rosen, M. (2007). SOA Service Usage Types. Retrieved October 26, 2015, from http://www.bptrends.com/publicationfiles/12-07%20SOA%20Service%20Usage%20Types-Rosen-final.pdf

[37] O'Brien, L., Brebner, P., & Gray, J. (2008, May). Business transformation to SOA: aspects of the migration and performance and QoS issues. In Proceedings of the 2nd International Workshop on Systems Development in SOA Environments (pp. 35-40). ACM.

[38] Almonaies, A. A., Cordy, J. R., & Dean, T. R. (2010, March). Legacy system evolution towards service-oriented architecture. In International Workshop on SOA Migration and Evolution (pp. 53-62).

[39] Khadka, R., Saeidi, A., Jansen, S., & Hage, J. (2013, September). A structured legacy to SOA migration process and its evaluation in practice. In 2013 IEEE 7th International Symposium on the Maintenance and Evolution of Service-Oriented and Cloud-Based Systems (MESOCA) (pp. 2-11). IEEE.

[40] Lewis, G., Morris, E., & Smith, D. (2005, September). Service-oriented migration and reuse technique (SMART). In 13th IEEE International Workshop on Software Technology and Engineering Practice (pp. 222-229). IEEE.

[41] Stehle, E., Piles, B., Max-Sohmer, J., & Lynch, K. (2008). Migration of legacy software to service oriented architecture. Department of Computer Science, Drexel University, Philadelphia, PA, 19104, pp. 2-5.

[42] Farrag, E., Moawad, R., & F. Imam, I. (2016). An approach for effort estimation of Service Oriented Architecture (SOA) projects. JSW, 11(1). http://dx.doi.org/10.17706/jsw

[43] Lum, K., Bramble, M., Hihn, J., Hackney, J., Khorrami, M., & Monson, E. (2003). Handbook for software cost estimation. Jet Propulsion Laboratory, Pasadena, CA, USA.

[44] Merlo-Schett, N. (2002). COCOMO (Constructive Cost Model). In Seminar on Software Cost Estimation WS (Vol. 2003).

[45] Sunkle, S., & Kulkarni, V. (2012). Cost estimation for model-driven engineering (pp. 659-675). Springer Berlin Heidelberg.

[46] Tansey, B., & Stroulia, E. (2007, May). Valuating software service development: integrating COCOMO II and real options theory. In Proceedings of the First International Workshop on the Economics of Software and Computation (p. 8). IEEE Computer Society.

[47] Schulmerich, M. (2010). Real options valuation: the importance of interest rate modelling in theory and practice. Springer Science & Business Media.

[48] Erdogmus, H. (1999, May). Valuation of complex options in software development. In First Workshop on Economics-Driven Software Engineering Research, EDSER (Vol. 1).

[49] IFPUG (2000). Function Point Counting Practices Manual, Release 4.2. International Function Point Users Group. http://www.ifpug.org/

[50] Grupe, F. H., & Clevenger, D. F. (1991). Using function point analysis as a software-development tool. Journal of Systems Management, 42(12).

[51] Mahmood, K., Ilahi, M. M., Ahmad, B., & Ahmad, S. (2012). Empirical analysis of function points in Service Oriented Architecture (SOA) applications. Industrial Engineering Letters, 2(1).

[52] Gencel, C. (2008). How to use COSMIC functional size in effort estimation models? In Software Process and Product Measurement (pp. 196-207). Springer Berlin Heidelberg.

[53] Dumke, R., Neumann, R., Schmietendorf, A., & Wille, C. (2014, October). Empirical-based extension of the COSMIC FP method. In 2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA) (pp. 5-10). IEEE.

[54] Del Bianco, V., Gentile, C., & Lavazza, L. (2008). An evaluation of function point counting based on measurement-oriented models. Evaluation and Assessment in Software Engineering (EASE), 26-27.

[55] Kultur, Y., Kocaguneli, E., & Bener, A. B. (2009, September). Domain specific phase by phase effort estimation in software projects. In 2009 24th International Symposium on Computer and Information Sciences (ISCIS) (pp. 498-503). IEEE.

[56] Erradi, A., Anand, S., & Kulkarni, N. (2006, September). Evaluation of strategies for integrating legacy applications as services in a service oriented architecture. In 2006 IEEE International Conference on Services Computing (SCC'06) (pp. 257-260). IEEE.

[57] Canfora, G., & Di Penta, M. (2006, March/April). Testing services and service-centric systems: challenges and opportunities. IT Professional, 8(2), pp. 10-17.

[58] Integration testing. (2010, August). Retrieved October 26, 2015, from http://en.wikipedia.org/wiki/Integration_testing

[59] Alexander, A. J. (2004). How to determine your application size using function points. Borland Conference 2004.

[60] Li, Z., O'Brien, L., & Zhang, H. (2013). Circumstantial-evidence-based judgment for software effort estimation. arXiv preprint arXiv:1302.2193.

[61] Chang, H., & Lee, K. (2010). A quality-driven web service composition methodology for ubiquitous services. J. Inf. Sci. Eng., 26(6), pp. 1957-1971.

[62] Rosen, M. (2008). Orchestration or Choreography? Retrieved October 26, 2015, from http://www.bptrends.com/publicationfiles/04-08-COL-BPMandSOAOrchestrationorChoreography-%200804-Rosen%20v01%20_MR_final.doc.pdf

[63] Binstock, A., & Hill, P. The comparative productivity of programming languages. Retrieved November 24, 2015, from http://www.drdobbs.com/jvm/thecomparative-productivity-of-programm/240005881

[64] Yang, Y., He, M., Li, M., Wang, Q., & Boehm, B. (2008, October). Phase distribution of software development effort. In Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (pp. 61-69). ACM.

[65] Boehm, B., Brown, A., & Fakharzadeh, C. (1999). MBASE/RUP phase and activity distributions.

[66] Tan, T. (2012). Domain-based effort distribution model for software cost estimation (Doctoral dissertation, University of Southern California).

Abstract

Over the last few decades, Service-Oriented Architecture (SOA) has become a common trend in the IT industry. Many companies are adopting SOA in order to cope with rapidly changing projects. Estimating the effort of SOA projects has become a real challenge for project managers, owing to the scarcity of published research addressing this issue. Traditional effort estimation techniques do not fully suit SOA projects, as they do not address SOA's unique characteristics: loose coupling of services, service reuse, service composition, and service discovery. On the other hand, the cost estimation approaches that have been proposed for SOA projects are still immature, and most are difficult to apply in industry because they are guidelines rather than practical methods for estimating actual cost. This thesis proposes an effort estimation approach for SOA projects that has been applied to a varied set of projects. It takes into account the characteristics of SOA and the different cost factors of the different service types. The proposed approach provides a specific effort estimation technique for each service type: existing services, migrated services, and new or composite services. It also gives an effort percentage for each project phase, to ease resource allocation throughout the project lifetime. The approach has been applied to projects in the IT industry. An SOA project is decomposed into its constituent services, and the effort of each service is estimated according to its type; each service type has its own effort calculation method. The service efforts are then aggregated to compute the overall project effort. The relative error in the case studies ranges between 3.66% and 19.14%, a considerable improvement over the 30% relative error of current project effort estimation in industry.


ARAB ACADEMY FOR SCIENCE, TECHNOLOGY AND MARITIME TRANSPORT
College of Computing and Information Technology
Department of Information Systems

A Proposed Approach for Effort Estimation of Service-Oriented Architecture (SOA) Projects

Submitted by
Esraa Ahmed Farrag Abdel-Qader

A thesis submitted to the Arab Academy for Science, Technology and Maritime Transport in partial fulfilment of the requirements for the award of the degree of

MASTER OF SCIENCE
in
Information Systems

Supervisors

Prof. Dr. Ramadan Moawad
Vice-Dean of the Faculty of Computers and Information Technology
Future University in Egypt

Prof. Dr. Ibrahim Imam
Professor of Computer Science
Arab Academy for Science, Technology and Maritime Transport

2016
