System Dynamics in Software Project Management: towards the development of a formal integrated framework
Alexandre Rodrigues and Terry Williams
Research Paper No. 1996/5
Alexandre Rodrigues was awarded his PhD by the Management Science Department, Strathclyde Business School, and Terry Williams is a Professor at the Management Science Department, Strathclyde Business School, Glasgow, Scotland.
Abstract
Successful software development is becoming increasingly important as software-based systems are at the core of a company's new products. However, recent surveys show that most projects fail to meet their targets, highlighting the inadequacy of traditional project management techniques to cope with the unique characteristics of this field. Despite the major breakthroughs in the discipline of software engineering, a comparable improvement in management methodologies has not occurred, and it is now recognised that the major opportunities for better results are to be found in this area. Poor strategic management and related human factors have been cited as major causes of failure in several industries. Traditional project management techniques have proven inadequate to incorporate these higher-level and softer issues explicitly. System Dynamics emerged as a methodology for modelling the behaviour of complex socio-economic systems. There have been a number of applications to project management, and in particular to the field of software development. This new approach provides the opportunity for an alternative view in which the major project influences are considered and quantified explicitly. Grounded in a holistic perspective, it avoids consideration of the detail required by the traditional tools and ensures that the key aspects of general project behaviour are the main priority. However, if the approach is to play a core role in the future of software project management, it needs to be embedded within the traditional decision-making framework.
The authors developed a conceptual integrated model, the PMIM, which is now being tested and improved within a large on-going software project. Such a framework should specify the roles of system dynamics models, how they are to be used within the traditional management process, how they exchange information with the traditional models, and a general method to support model development. This paper identifies the distinctive contribution of System Dynamics to software management, proposes a conceptual model for an integrated management framework, and discusses its underlying principles.
Details of our research papers can be found at www.mansci.strath.ac.uk/papers.html. Management Science, University of Strathclyde, Graham Hills Building, 40 George Street, Glasgow, Scotland. Email: [email protected] Tel: +44 (0)141 548 3613 Fax: +44 (0)141 552 6686
INTRODUCTION
The increasing rate of change and the complexity of new technologies and markets have motivated organisations to adopt "management by projects" as a general approach to development[1]. As software-based systems are at the core of most companies' new products, successful software development has become a critical issue. Recent surveys show that software costs are large and rapidly increasing world-wide, with an average 12% annual growth in the U.S.[2] While studies indicate that the demand for new software systems has been increasing beyond our development abilities, the growth in the software development activity has been marked by major overruns[3]. This "software crisis" emerged with greater impact in the defence industry, but the problems also affect the private sector of commercial software development. A recent MIT-PA survey shows that more than half of development projects fail to meet their targets[4]: over-expenditures range from an average of 40% in commercial developments to an average of 210% in the defence industry, while schedule overruns range from 90% to 360% respectively. Despite the major breakthroughs in the technical aspects of software engineering, much less attention has been given during the last decades to improving management methodologies[3]. Frederick Brooks[5] suggests that "...there is no single development, in either technology or management technique, that by itself promises even one order-of-magnitude improvement in productivity [...] we see no silver bullet". He identifies the most promising strategic "attacks" on the technical difficulties of software development: (1) "buy versus build", (2) requirements refinement and rapid prototyping, (3) incremental development, and (4) "great designers". However, adopting the right technical strategies is not sufficient: effective planning and control are essential functions to guide a project towards its objectives.
The importance of effective management has been recognised by Boehm[6]: "Poor management can increase software costs more rapidly than any other factor", and Thomsett[7]: "We ran into problems because we did not know how to manage what we had, not because we lack the techniques themselves." The general awareness that the major breakthroughs are now to be achieved in the management arena[3] highlights the need to improve the techniques.
TRADITIONAL SOFTWARE PROJECT MANAGEMENT
Software management comprises the functions responsible for keeping the project within its targets of cost, quality and duration; project management is therefore the specific activity of achieving project control[8]. This includes interfacing with the Client and sub-contractors, and managing the interactions between planning, monitoring, technical development, quality assurance, and configuration management. In this paper we focus on three main functions of project control: (1) estimating, (2) planning, and (3) monitoring of progress. The discipline of project management emerged in the early 20th century from the construction industry. Over the years a large collection of procedures has been developed to support the manager with the practical problems of implementation. Most of these techniques have been transferred to other industries, and in particular to the management of large software projects: network models based on the WBS
came into widespread use to support planning and monitoring[9], while empirical estimating models are preferred among software managers[10]. A major concern has been the rigorous definition of the software development process based on the principles of the Classic Life-Cycle Model. Rook and Wingrove[11] proposed a formal method called the Process Definition Diagram (PDD), based on an object-oriented approach, to describe in detail the processes within the life-cycle.
Problems and limitations of the traditional approach
Traditional techniques are aimed at providing support to operational issues, where detailed decisions about the implementation process have to be taken. Despite their valuable contribution, however, most software projects still fail to meet their targets. Studies in several industries suggest that the main causes lie in strategic areas which have not been the concern of traditional techniques, such as the political/social environment, legal agreements, and human factors. There has been a relative lack of emphasis on strategy, and the need for a model for the strategic management of projects is now recognised[1]. Such a model requires a more holistic view of the project system, integrating management with technical development. In the software field, the lack of such global understanding has been cited as a major obstacle to improving management procedures[3]. The inadequacy of the current management approach in coping with the softer human aspects of software projects has also been recognised as a major problem: "Most problems have to do with the way people in the project behave"[12]. Four main reasons can be identified for the inappropriateness of traditional models: (1) while the concern is to capture the project in great detail, the resulting complexity does not enable quick and reliable strategic analysis; (2) they do not incorporate explicitly the influence of human factors; (3) they do not consider explicitly the rework phenomenon; and (4) they fail to capture the dynamic interactions between technical development and management policies. The specific characteristics of software projects exacerbate these problems: (i) the main resource is people, (ii) the software product remains intangible throughout most of the development life-cycle, and (iii) a rigorous definition of the product requirements is usually very difficult until the later stages.
Staff productivity and work quality are continuously affected by several factors such as learning, schedule pressure, training and communications overheads. Variations in work quality among programmers can reach a magnitude of 10:1 difference in error rates[2], while the difference in designers' productivity can reach a 2:1 order of magnitude[5]. On the other hand, management performance in taking decisions can be badly impaired by work pressure: "an overloaded manager is usually a bad manager"[12]. The intangible nature of the software product, in particular its quality, encourages poor QA performance. High error detection rates are often seen as a sign of poor technical development, and because software quality is not tangible in the early stages staff can easily skip QA efforts. On the other hand, a large number of defects would probably require schedule extensions, which is also perceived as poor management[3]. This poses an obstacle to organisations improving the quality of their designs, a key requisite for productivity improvement[5]. Finally, unstable system requirements exacerbate the introduction of changes. The side-effects of these changes, in particular work being done out of its natural sequence, often create dramatic problems of rework in the later stages[13].
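The scale of the communications overhead mentioned above is easy to illustrate. The following Python sketch applies the commonly cited assumption (after Brooks) that overhead grows with the number of pairwise communication channels in a team; the coefficients are invented for the example and are not drawn from the paper.

```python
# Illustrative only: net team output when each pairwise communication
# channel consumes a fixed fraction of every member's capacity.
# The overhead coefficient is an assumed value, not an empirical one.

def effective_output(team_size, nominal_productivity=1.0,
                     overhead_per_channel=0.01):
    """Net team output per week, discounting communication overhead."""
    channels = team_size * (team_size - 1) / 2   # pairwise channels
    per_person = nominal_productivity * max(0.0, 1 - overhead_per_channel * channels)
    return team_size * per_person

for n in (2, 5, 10, 20):
    print(n, round(effective_output(n), 2))
```

Even with a modest 1% loss per channel, output peaks and then collapses as the team grows, which is one reason staff-related factors dominate software project behaviour.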
The need for a new approach
Considerable efforts have been directed towards the improvement of the existing traditional models. Inadequacies of network-based tools have been recognised and Petri net-based extensions have been proposed for software projects[14],[15],[16]. Improvement of estimating models, like the COCOMO, has also been the focus of attention[17]. However, such developments have been directed towards increasing complexity. A strategic view has not been achieved, while the influence of human factors remains ignored or is handled by simplistic assumptions. Furthermore, the models often demand detailed information to which managers cannot attach practical meaning. As managers find little help in the traditional techniques to cope with the higher-level complex problems, they apply crude rules of thumb and intuitive judgement based on past experience (i.e. their mental models). Under constant pressure, they search for the usual problems and revert to problem-solving strategies that they believe have worked in the past[12]. Management faults remain uncovered and organisations fail to learn effectively from one project to the next. This problem requires a more formal systemic analysis at which traditional tools and techniques are not aimed.
APPLICATION OF SYSTEM DYNAMICS
System Dynamics was first introduced by Forrester[18], and emerged as a new methodology for modelling the behaviour of complex socio-economic systems, based on the principles of feedback control theory[19]. The approach is based on a holistic perspective of managerial problems and focuses on the human aspects of a system's behaviour. There have been a number of applications to project management, particularly in the field of software development[20]. For the purposes of this paper, three of these studies deserve special attention. The first application of System Dynamics to software project management was proposed by Abdel-Hamid and Madnick[21], leading to the development of a generic model for the software development process[3]. The testing and practical application of this model was based on the post mortem analysis of a real software project at NASA's Goddard Space Flight Center. Similar work was developed at NASA's Jet Propulsion Laboratory[22], and a System Dynamics model for the software development process was later proposed[23]. Further improvements led to a generic simulation model of the software life-cycle embedded within an expert system (SLICS/HESS)[24]. The testing and validation of this model was based on a real project (a NASA space shuttle software development project). Further research has been reported by the authors on other models (SEPS)[25]. Cooper[4] reports major practical applications of System Dynamics models to software development projects. His work focuses on assessing the impacts of the factors "work quality" and "time to discover rework", based on the generic concept of the rework cycle[4]. The models were set to recreate the past behaviour of completed
projects, and quantitative measures for these two factors were extracted. His findings suggest that gains in project performance can only be achieved by directing efforts to increase work quality and to detect errors earlier. Although the major practical applications reported in the literature refer to cases of post mortem analysis, the author claims that the models (e.g. the PMMS[26]) have been used successfully to support the management of large on-going programs.
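The influence of these two factors can be made concrete with a toy simulation of the rework cycle. The Python sketch below is purely illustrative: its parameters (work rate, quality fraction, rework discovery delay) are invented, not drawn from the PMMS or any real project. It shows how undiscovered rework makes perceived progress run ahead of real progress.

```python
# Minimal rework-cycle sketch (all parameter values are assumptions).
# A fraction (1 - quality) of each week's output is flawed; flawed
# work is counted as done until it is discovered, `discovery_delay`
# weeks later, and returned to the to-do pool.

def simulate(total_work=100.0, rate=5.0, quality=0.8,
             discovery_delay=10, weeks=60):
    todo, done_ok, undiscovered = total_work, 0.0, []
    history = []
    for week in range(weeks):
        # rework generated `discovery_delay` weeks ago is discovered now
        if len(undiscovered) >= discovery_delay:
            todo += undiscovered.pop(0)
        work = min(rate, todo)
        todo -= work
        done_ok += quality * work                 # real progress
        undiscovered.append((1 - quality) * work) # latent rework
        perceived = done_ok + sum(undiscovered)   # flawed work looks done
        history.append((week, perceived, done_ok))
    return history
```

With these assumed values, perceived progress reaches 100% while real progress is still around 90%, a simple mechanical account of why better work quality and earlier error detection dominate project performance.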
Current models and the need for improvement
The work of Abdel-Hamid and Madnick[3] provides an excellent survey of the human interactions taking place within a software project, in particular the factors affecting staff productivity (e.g. learning, communications overheads, and use of "slack-time"). The authors provide reasonable evidence of empirical validation for the quantification of the relationships within the model. Another important feature is that the model considers explicitly a continuous flow of errors escaping throughout the life-cycle, until the later testing stages. This is an important concept since in software development errors are reworked in the phase where they are detected. The model avoids the "iteration of phases" and is therefore consistent with the basic principles of the life-cycle model: (1) precise phase-ends and (2) continuing activities[11]. However, the model does not consider any breakdown of the project work, assuming the highest possible level of aggregation. As a result, it cannot provide a detailed analysis of the intermediate schedule milestones. It also prevents the model from considering a planned staff profile for the project; hence, in replicating the "Rayleigh curve", there is no explicit consideration of the "natural" changes in work intensity. Finally, the model assumes stable requirements for the project, something extremely unlikely in most medium-large software projects. The models proposed by Lin and Levary[24],[25] consider an explicit breakdown of the project work into the classic life-cycle stages, providing a more detailed analysis of the schedules, budgets and staff allocation to the project. The SLICS model focuses on the problem of requirements changes being introduced throughout the life-cycle. An interesting feature of this work is the use of an expert system to support model application.
The SEPS model provides an important perspective of a software project described as a dual life-cycle process of engineering (i.e. product development) and management (i.e. decision-making). Also important is the proposed procedure to support empirical validation of the model based on several tests. However, the breakdown of the project work is still restricted to the classical life-cycle phases, whereas in large projects several software components with different characteristics are developed by different teams and enter the integration phase at different moments in time. Although the authors suggest the use of SLICS to support on-going projects, evidence of major practical applications would be desirable. The work developed by Cooper[4],[27], in particular the PMMS, introduced the important concepts of the rework cycle and monitoring ramps. This considers explicitly that rework is generated in the project and remains undiscovered until the later stages. The consequent gap between the perceived and the real progress explains the occurrence of the "90% syndrome." The PMMS system provides a more flexible way of capturing the project work structure. A model is developed based on generic "building blocks" to capture the major project activities. This includes design,
construction, procurement, testing, staffing categories, and program management. The procedures to apply the model in practice are based on calibration for a "problem-free" scenario, followed by "what-if" analysis in which disturbances are introduced. However, the models still assume a high-level view aiming to support the strategic management of large design and construction programs, which often include several projects being implemented in parallel. The models are not specialised to capture the many specific and unique characteristics of software development. It is, therefore, unlikely that they are suitable to provide support at the lower tactical level. In summary, the above developments represent important contributions to the application of system dynamics to software project management. They introduce valuable concepts and ideas that should be considered in the future. In particular, the work developed by Cooper, at Pugh-Roberts Associates, provides evidence of the practical credibility of the approach. However, most of the reported cases refer to post mortem analysis: the model is used to reproduce the behaviour of completed projects and helps to investigate the causes of deviations. In other cases the models are used within fictitious scenarios as training tools to support policy analysis. A major step forward would be to apply System Dynamics models to support the management of on-going software projects (Cooper claims such an application of the PMMS to major programs). However, such an application requires the approach to be embedded within the traditional project management framework. Despite the limitations, project managers still need operational tools to support planning and monitoring at the tactical level. Integrating system dynamics models with the traditional tools also raises the question of whether quantitative links can be established between both types of models.
In the next section we propose a conceptual model to integrate system dynamics with the traditional approach.
A CONCEPTUAL INTEGRATED FRAMEWORK
The need to integrate system dynamics within the traditional project management approach has been discussed elsewhere[20]. The conclusions drawn from this study suggest that System Dynamics has the potential to provide a distinctive and complementary contribution. Both approaches address common managerial needs (e.g. estimating overall project duration, cost, and staff profile), but their perspective of a project is different. Network-based models are based on a top-down process of decomposing the project into its constituent elements in a structured way. They consider the management problems at the operational level and focus on the detailed logic of the project work structure and resource requirements. On the other hand, SD models are based on a bottom-up approach that aggregates the many details of the project into a whole system dominated by internal interactions. They consider the management problems at the strategic level and their main priority is to capture the more general aspects of the project behaviour that result from the internal feedback processes. Ideally, an integrated framework should consider the benefits of both approaches and establish information links between the two levels of analysis.
A conceptual integrated model (PMIM)
Preliminary work on the development of a project management integrated model (PMIM) has been reported by the authors[28]. As a practical framework, such a model should specify in some detail: (1) the specific roles of SD models, (2) how the SD models are used within the management process, in particular how they exchange qualitative and quantitative information with the traditional models, and (3) a general description of the required characteristics of the SD models, in particular their structure, and procedures for validation and calibration. Our current research focuses on the improvement and testing of the PMIM (figure 1) within a large software project at BAeSEMA Ltd.
[Figure 1: at the strategic level, the SDSM supports high-level planning and monitoring of complex strategic issues (estimating, risk analysis, diagnosis); at the operational level, the SDOM models link the planning function (estimating, risk analysis) and the monitoring function (diagnosis of past phases) with the network plan and the engineering process (requirements, design, coding, testing, integration, system testing), contrasting the steady behaviour of the plan with the unsteady behaviour of perceived progress.]
Figure 1 - The Project Management Integrated Model (PMIM) - a conceptual framework
The PMIM considers the use of SD models at both the strategic and operational management levels, providing support to the planning and monitoring functions. In planning, the role of the models focuses on estimating future results and performing risk analysis, while in monitoring they are aimed at diagnosing the project's past behaviour. At the strategic level we consider the use of a high-level System Dynamics strategic model (SDSM), which covers the whole project life-cycle and captures the major software development milestones. At the operational level, a more complex model (SDOM) captures the individual life-cycle phases in more detail. The structure
of both models is based on an appropriate breakdown of the project into major sub-tasks, consistent with the traditional WBS. The use of the SDSM focuses on providing a quick and preliminary assessment of major strategic decisions and risks before a detailed plan is produced. This is particularly important at the early stages, when a detailed plan for the whole life-cycle is not available. The SDOM focuses on the project sub-tasks in more detail, providing quantitative data to support work scheduling and resource allocation. As the model requires the availability of detailed information, in practice it might not cover the full project life-cycle until the middle development stages have been reached.
Use of the SD models within the PMIM
The use of the SD models within the PMIM to support planning and monitoring is based on a process of continuous calibration as the software project moves through the development life-cycle. Figure 2 provides an overview of how the SDOM model is used to support planning and monitoring (a similar process applies to the SDSM). In planning, the model is used as a "test laboratory" to assess the performance of the current plan and identify risks. Once a detailed plan has been produced as a logical network, the next step is to extract the dynamic characteristics of the planned behaviour. A network plan portrays, implicitly, the image of a project with changing characteristics but evolving, successfully, towards constant targets of cost and schedule. Calibrating the SD model to reproduce this steady behaviour requires the explicit definition of several metrics which would otherwise remain "hidden" in the plans. In fact, we do not impose the planned results directly on the model; instead, the quantitative relationships of the "mechanics" within the model must be able to produce such behaviour.
This uncovering of metrics is an important exercise as it helps to identify unrealistic assumptions and suggests readjustments to the plan. After this stage, the model is used to assess the plan's sensitivity to risks through the analysis of "what-if" scenarios. Disturbances are introduced in the model, providing an explicit description of possible unsteady behaviours. Planning alternatives can be tested, and further readjustments are carried out to reduce the plan's sensitivity. After this second stage, the planning decisions selected with the model can be translated into the network plan. In monitoring, the model is used as a "test laboratory" to diagnose the past behaviour and help identify the causes of possible deviations. Once progress monitoring data has been collected using traditional procedures, the next step is to derive from it the dynamic characteristics of the project's past behaviour. As in planning, calibrating the model to reproduce the past behaviour uncovers result metrics. As an example, a low error discovery rate might suggest either exceptionally good development quality or poor performance of QA activities. Where the numbers are not realistic, the conclusion might be that the progress data is not reliable and hence needs to be reviewed. Another important output from this calibration is the estimation of the number of undiscovered defects that escaped into future development stages. This provides awareness of the amount of rework required in the later stages, avoiding "over-optimism" in terms of progress. Once the progress data has been revised, the model can be used to investigate the causes of any deviations and to test whether alternative planning and control policies could have provided better results.
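The calibrate-then-test cycle just described can be caricatured in a few lines of code. The Python sketch below is a deliberately minimal stand-in for an SD model, not part of the PMIM itself: a single productivity parameter is calibrated so that the simulated completion reproduces the planned "steady" behaviour, and a "what-if" disturbance (a scope increase mid-project) is then introduced. All numbers are assumptions for illustration.

```python
# Toy stand-in for an SD model: calibrate a parameter to reproduce
# the planned steady behaviour, then introduce a disturbance.

def weeks_to_complete(scope, staff, productivity,
                      scope_change=0.0, change_week=None):
    done, week = 0.0, 0
    while done < scope:
        if week == change_week:
            scope += scope_change        # disturbance: added requirements
        done += staff * productivity
        week += 1
        if week > 1000:                  # safety stop
            break
    return week

# 1. Calibration: the productivity implied by the plan
#    (assumed plan: 1000 work units, 5 staff, 40 weeks).
planned_weeks, scope, staff = 40, 1000.0, 5
productivity = scope / (staff * planned_weeks)   # 5.0 units/person-week

# 2. "What-if" analysis: a 15% scope increase at week 20.
slipped = weeks_to_complete(scope, staff, productivity,
                            scope_change=150.0, change_week=20)
```

Here the calibration step makes explicit a metric (implied productivity) that the network plan only carries implicitly, and the what-if run quantifies the schedule slip the disturbance would cause.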
[Figure 2: traditional methods produce the network plan and collect progress data; the SDOM (System Dynamics Operational Model) is calibrated for past stages (uncovering metrics, diagnosing past stages from perceived results) and for future stages (estimating possible results, i.e. unsteady behaviour, and sensitivity to risks against the expected steady behaviour of the plan), returning reported progress, identified causes for deviations, policies for process improvement, uncovered metrics, and estimates for intangibles.]
Figure 2 - The use of the SDOM model within the PMIM
Each time the model is re-calibrated to reproduce and diagnose new "segments" of past behaviour, it also provides new estimates for the future behaviour under the current plan. As an example, the model might suggest that, due to a high number of errors escaping, the schedule of the next stage is likely to over-run. This is the starting point of a new control cycle in the planning function. Again, the SD model is re-calibrated, the network plan is readjusted, and the whole process is repeated for each control cycle. In summary, within the planning function the models are used to estimate the project outcome and help to identify better planning alternatives, ensuring that the plan is robust and based on realistic assumptions. In monitoring, the models are used to uncover several metrics about the project status; as a diagnosis tool they help identify possible causes for observed deviations, and support policy improvement. The PMIM considers that both models (SDOM and SDSM) are used in this way, but at different levels of detail. However, to implement this process it is essential that effective procedures are in place so that the planned behaviour can be quickly extracted from a network plan, and the past behaviour quickly derived from progress data. As the project progresses and the models incorporate more information, the results produced become more accurate.
Definition of the model structure
The structure of an SD project model should capture the basic characteristics of the software development process. Several techniques can be used to provide a rigorous definition of this process (e.g. the PDD[8]), which is also the basis for the development of traditional models, like the WBS and logical networks. The method we propose to define the structure of the SD models is based on four basic principles: (1) a dual life-cycle view of management and engineering, (2) breakdown of the project into major sub-tasks, (3) a dual life-cycle of work and defects within the engineering process, and (4) a single high-level project management and human resource management function. The method proposed here is intended to support the development of new SD models, specialised for a specific project within a specific organisation. The dual life-cycle principle emphasises the interaction between engineering and management, and implies that the SD model should capture both the characteristics of technical development (e.g. life-cycle phases, product components, rework, work dependencies, reviewing techniques) and the characteristics of the management process. This includes the procedures used to monitor progress and the managerial policies employed in re-planning the work schedules and the allocation of resources. The principle of breaking down the project work structure into a set of sub-tasks assumes that each of these tasks holds enough complexity that planning and control might be problematic. Each task will be simulated by an individual SD sub-model with a structure specialised for the type of work being performed. For the engineering process we propose two main categories of tasks, to which correspond two types of sub-models: (1) development (SD-DTModel), and (2) testing (SD-TTModel).
A development task refers to the process of developing a particular sub-product, and includes the activities of developing, reviewing and reworking (design, coding, and integrating fall in this category). A testing task refers to the process of running a set of pre-defined tests to check the functionality of a sub-product, diagnose faults, and
rework the defects found (unit testing, group testing and system testing fall in this category). The breakdown of the engineering process is based on three main sources of information: the life-cycle definition, the product structure, and the structure of the technical development team. The lowest level of decomposition would consider a single task for each life-cycle phase of each software sub-component, implemented by an individual team. For most software projects such a level of detail, very close to the network plan, is not appropriate. However, using this decomposition as a starting point, we can aggregate these elementary sub-tasks into more complex tasks. We define two types of aggregation:
• horizontal aggregation - ignoring the intermediate schedules of a set of sequential tasks and considering them as a single task with a single schedule;
• vertical aggregation - ignoring the individual schedules of a set of parallel but inter-related tasks and considering them as a single task with a single schedule.
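The two aggregation rules can be expressed as simple operations over task records. The helpers below are hypothetical (the task names and durations are invented), intended only to make the horizontal/vertical distinction concrete.

```python
# Hypothetical helpers illustrating the two aggregation rules.
# Task names and durations are invented for the example.

def aggregate_horizontal(tasks):
    """Sequential tasks become one task: durations add, and the
    intermediate schedule milestones disappear."""
    name = "+".join(t["name"] for t in tasks)
    return {"name": name, "duration": sum(t["duration"] for t in tasks)}

def aggregate_vertical(tasks):
    """Parallel, inter-related tasks become one task with a single
    shared schedule (here, the longest individual duration)."""
    name = "|".join(t["name"] for t in tasks)
    return {"name": name, "duration": max(t["duration"] for t in tasks)}

prelim = {"name": "prelim-design", "duration": 8}
detail = {"name": "detail-design", "duration": 10}
comp_a = {"name": "design-A", "duration": 8}
comp_b = {"name": "design-B", "duration": 6}

# two sequential design stages tracked as one sub-task:
design_task = aggregate_horizontal([prelim, detail])   # duration 18
# two tightly coupled components designed together:
coupled_design = aggregate_vertical([comp_a, comp_b])  # duration 8
```
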
The definition of the appropriate level of decomposition/aggregation of the engineering process into tasks is based on four main criteria: (1) the type of work being performed, (2) the major schedule milestones, (3) the level of detail required for management purposes, and (4) the minimum level of complexity required. The work being developed within a task should be of a similar nature and strongly inter-related. As an example, the design and testing of a software component should not be captured by the same task, while the design of two tightly coupled software components may be aggregated into a single design task. The major schedule milestones of the development process should impose limits on the level of horizontal aggregation so that they are not overlooked in the model. In general, most managers would like a decision-support model to consider explicitly the individual schedules of the major life-cycle phases. Finally, each task must comprise a minimum level of complexity so that the use of an SD model is appropriate. System Dynamics models focus on the interactions among a system's components. Decomposing the software development process into highly detailed elements tends to eliminate the effects of these interactions, and the individual behaviour of such elements becomes characterised by discrete uncertain events, for which the higher-level continuous perspective of an SD model is not appropriate. Having decomposed the engineering process into a set of individual tasks, these can be linked according to precedence relationships. This defines a network of tasks to which a network of SD models will correspond (the SD-TNet). Here, the implementation of sequential tasks can overlap, depending on the specific characteristics of the software development process and on management decisions. As an example, it might be decided that the coding phase can start after 50% of the design phase has been completed.
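The overlapped precedence idea behind the SD-TNet can be sketched as a small scheduling computation. The function and figures below are illustrative assumptions, not part of the proposed models: each link carries the completion fraction of the predecessor that releases the successor.

```python
# Toy illustration of overlapped precedence: a successor may start
# once its predecessor reaches a given completion fraction.
# Durations and fractions are assumed values.

def schedule(tasks, links):
    """tasks: {name: duration}; links: (pred, succ, fraction), meaning
    succ may start once pred is `fraction` complete.
    Returns {name: (start, finish)}."""
    start = {t: 0.0 for t in tasks}
    # simple fixed-point pass; sufficient for a small acyclic network
    for _ in range(len(tasks)):
        for pred, succ, frac in links:
            start[succ] = max(start[succ], start[pred] + frac * tasks[pred])
    return {t: (start[t], start[t] + tasks[t]) for t in tasks}

tasks = {"design": 20, "coding": 16, "testing": 12}
links = [("design", "coding", 0.5),    # coding starts at 50% of design
         ("coding", "testing", 1.0)]   # testing waits for coding to finish

plan = schedule(tasks, links)
```

With a 0.5 overlap fraction, coding starts at week 10 rather than week 20, which is exactly the kind of management decision the SD-TNet is meant to represent explicitly.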
Each SD sub-model comprises an internal management process responsible for monitoring progress within the task and taking corrective actions when necessary. This includes scheduling the man-power available among the activities within the task (e.g. development, review and rework), and adjusting the schedule within a contingency range.

The third principle for the model structure is to consider explicitly that the engineering process comprises the continuous flow of two inter-related entities: work and defects.
As the work is developed, reviewed/tested and reworked throughout the life-cycle, defects are also generated, detected, and reworked. However, some defects are not detected by the review and testing activities and hence escape, being incorporated into the sub-product passed on to the next task, as shown in figure 3 (the broader arrows indicate the life-cycle phases where error introduction and removal usually occur with more intensity).
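A minimal stock-and-flow sketch makes the dual flow concrete. The rates below (development rate, defect injection per task, review detection efficiency) are illustrative assumptions, not calibrated project values.

```python
# Minimal sketch of the work/defects view in figure 3: as work is
# developed, defects are injected; review detects a fraction and the
# rest escape into the sub-product passed downstream.
# All rates are illustrative assumptions.

dt = 0.25                 # time step (weeks)
work_to_do = 100.0        # units of work (e.g. modules) to develop
work_done = 0.0
detected = 0.0            # defects caught by review, to be reworked
undetected = 0.0          # latent defects in completed work

dev_rate = 5.0            # units developed per week (assumed)
defects_per_unit = 0.8    # defect injection rate (assumed)
detect_eff = 0.6          # review detection efficiency (assumed)

while work_to_do > 1e-9:
    done = min(dev_rate * dt, work_to_do)
    work_to_do -= done
    work_done += done
    injected = defects_per_unit * done
    detected += detect_eff * injected
    undetected += (1 - detect_eff) * injected

escaped = undetected      # defects passed on to the next task
print(round(escaped, 1))  # 100 * 0.8 * (1 - 0.6) = 32.0
```

Even this toy version exhibits the paper's central point: the defects that escape a phase are a direct function of its detection efficiency, so the cost of weak review surfaces downstream rather than in the phase's own metrics.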
[Figure 3 diagram: work flows through Design Review, Code Review / UT, Integration Test, and System Test, with errors removed at each phase and errors escaping to the next.]
Figure 3 - Work and defects flows: errors escaping throughout the life-cycle

Finally, each of the models in the SD-TNet continuously reports progress to a high-level management sub-component (SD-GMan), which overviews the whole project. This information includes work progress, estimated completion date, estimated cost at completion, and the man-power needed to meet the schedule. Based on this information, reported from the currently on-going tasks, the SD-GMan mimics the high-level decision-making in the project. This includes: (1) adjusting the completion schedules of the individual tasks, (2) re-scheduling staff among the tasks, (3) increasing the degree of overlapping between sequential tasks, and (4) hiring more staff into the project. The process of hiring/firing staff is modelled by an additional human resource management sub-component (SD-HRM), which considers recruitment and training issues and staff turnover. The complete global model for a software project (SD-GModel) is now defined by linking all sub-models. The conceptual structure of the global model developed for our current case-study is shown in figure 4 (the "KDCOM" is an intensive software project, which aims to develop a Command and Fire Control System, part of a Combat System to be installed and integrated into a Destroyer of the Republic of Korea Navy). In terms of vertical aggregation, the development of the nine software sub-components was aggregated into three major functional areas (Radar, SSGT, and Core). In terms of horizontal aggregation, the nine different stages of the specified life-cycle were aggregated into five main development phases (design, coding, group testing, integration, and system testing). A practical application of this model is discussed in Rodrigues and Williams [29].
[Figure 4 diagram: the SD-GModel for the KDCOM project (SWB1), with the SD-TNet comprising DESIGN, CODING, and TESTING tasks for the Radar, SSGT, and Core areas, feeding into INTEGRATION and SYST TEST, linked to the High Level Management and HRM sub-components.]
Figure 4 - A practical example of the SD-TNet: breakdown of the project work into tasks
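The SD-GMan's decision-making can be caricatured as a simple policy that tries the four corrective actions listed above in order. The fragment below is a hypothetical sketch: the function name, the 10% contingency threshold, and the report fields are assumptions, not part of the SD-GMan specification.

```python
# Hypothetical sketch of SD-GMan decision rules: given a progress
# report from an on-going task, pick a corrective action in the order
# listed in the text. Thresholds and field names are assumptions.

def gman_decide(report, contingency=0.1, spare_staff=0):
    """report: dict with 'target_weeks' and 'estimated_weeks'."""
    slip = (report["estimated_weeks"] - report["target_weeks"]) / report["target_weeks"]
    if slip <= 0:
        return "no action"
    if slip <= contingency:
        return "adjust schedule"    # (1) absorb slip within contingency
    if spare_staff > 0:
        return "re-schedule staff"  # (2) move staff from other tasks
    if report.get("can_overlap", False):
        return "increase overlap"   # (3) start successor tasks earlier
    return "hire staff"             # (4) last resort, handled by SD-HRM

print(gman_decide({"target_weeks": 20, "estimated_weeks": 21}))
print(gman_decide({"target_weeks": 20, "estimated_weeks": 26}, spare_staff=2))
```

In a real SD model these decisions would be continuous feedback relationships rather than a one-shot rule, but the ordering captures the escalation the text describes: schedule contingency first, staff reallocation and overlap next, hiring last.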
Information links between SD models and traditional models

The use of SD models within the PMIM assumes a close relationship with the traditional models in two respects: (1) structure, and (2) results produced. The structure of the models should reflect the project characteristics, and their application in supporting planning and control is based on the exchange of information with the traditional tools. In terms of results produced, the important relationship is between the estimates produced by the SD model and a network-based model (e.g. PERT). As described in the previous sub-section, the structure of the SD model is based on the breakdown/aggregation of the project work. A similar decomposition of the project work also applies to the traditional models, and in particular to the WBS, which captures the whole project. Therefore, a formal link can be established: to each engineering task of the SD-TNet (i.e. an SD-Task) should correspond a specific set of work packages in the WBS. This relationship becomes stronger if the majority of these packages are unique to a certain SD-Task. In such a case, the translation of data between the SD model and the network model is easier to implement, and potentially more accurate. The relationship between the WBS and the SD-TNet should be defined by a matrix identifying which WBS packages correspond to each SD-Task; where a package is not unique, the proportion of effort should also be identified. As an example, a certain WBS package covering general design support to the coding phase of several software components may specify that, say, 20% of the effort booked to the package refers to a certain component. The technical development is implemented and managed by project sub-teams as specified in the Organisation Breakdown Structure (OBS). These teams are usually specialised in a certain software development activity (e.g. designing, coding, testing), which is performed continuously throughout the life-cycle.
Therefore, each team usually works on several life-cycle phases. Although the work performed within each task of the SD-TNet might be dominated by a certain team, it will inevitably incorporate the contribution of other teams. The relationship between the OBS and the SD-TNet should also be specified in a matrix identifying which teams work on each SD-Task. This is important to support the translation of staff levels between the SD model and the logical network. The software development process adopted in the project is usually specified in terms of a life-cycle document, often using formal techniques (e.g. PDD). This model specifies in detail the life-cycle phases/stages and the processes to be followed within each phase. The structure of the SD model should relate to this information through a clear identification of the life-cycle phase or stage to which each SD-Task belongs. This should also identify the processes being incorporated in each SD-Task.

The use of SD models within the PMIM assumes the exchange of qualitative and quantitative information with the traditional models. These models include estimating models (e.g. COCOMO), and network-based planning and monitoring models (e.g. PERT, PERT/Cost and earned value). As the project progresses, the network plan is continuously updated to incorporate both the past results and the plans for the future. The past behaviour of a project includes: (1) information about the technical development process, specifying the actual effort expenditures, schedules, and staff levels; and (2) information about the
management process, specifying re-planning decisions of schedule adjustments and the hiring of more staff, and the progress estimates produced by managers. The past behaviour recorded in a network does not capture this second component, because of the static perspective of the model. When the SD model is calibrated to reproduce the past behaviour, it must reproduce with acceptable accuracy the final results of the engineering process, as recorded in the network, but it must also reproduce the decision-making pattern that occurred within the management process (i.e. perceptions of progress and implemented re-planning actions). Calibration for this second component of the project behaviour is more difficult, since re-planning decisions are often not recorded in the project information system. Re-planning often implies changing the structure of the network, typically by increasing concurrency (as reported by Williams et al [30]). Therefore, the unsteady past behaviour of a project may well be characterised by a continuous evolution of the network structure, reflecting those decisions. Where accurate data about the management component of the project behaviour is not available, managers' judgement is essential to validate the calibration. The future behaviour of the project portrayed by a network assumes "planned success", in which either deviations do not occur or management corrective actions are sufficiently effective to avoid over-runs. When the SD model is calibrated to reproduce this steady behaviour, it must reproduce the expected results from the engineering process (i.e. schedules, effort expenditure, staff levels). Again, there is usually no information available about the likely decision-making pattern within the management process. When a project is perceived as complex, the tendency is to adopt a reactive attitude in which deviations and management reactions are not anticipated.
Since the future behaviour is assumed steady, the SD model must reproduce a behaviour with few deviations, in which management actions quickly solve the problems.

In summary, the global structure of the SD project model, represented by the SD-TNet, should be integrated with the WBS, the OBS, and the specification of the development process, through the use of appropriate matrices. An SD-Task should: (1) incorporate a specific set of work packages in the WBS, (2) capture the work performed by several development teams identified in the OBS, and (3) relate to a specific life-cycle phase, incorporating some of its internal processes. The relationship between the SD project model and the network model is characterised by both models producing the same quantitative results for the observed past behaviour and for the planned future behaviour. This quantitative correspondence must be assessed against a well-defined criterion of acceptable accuracy. Figure 5 provides an overview of these relationships. Preliminary estimates produced by traditional models, such as COCOMO, may also be tested in the SD model and, if successful, translated into the network plan. Monitoring tools provide the necessary information to derive the project's past behaviour. The quantitative links identified here are at the core of a rigorous integration of SD models within the traditional approach, as described by the PMIM.
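The two quantitative links just summarised, matrix attribution of WBS effort to SD-Tasks and the accuracy criterion for SD/network correspondence, can be sketched as follows. All package names, effort figures, and the 5% tolerance are illustrative assumptions, not values from the KDCOM project.

```python
# Sketch of the quantitative links: a WBS-to-SD-Task matrix attributes
# booked effort to SD-Tasks (with proportions for shared packages), and
# SD model results are checked against the network model within an
# accuracy criterion. Names, figures, and tolerance are assumptions.

wbs_matrix = {
    "SD-DESIGN-Radar": {"WP-101": 1.0, "WP-150": 0.2},  # 20% of shared WP-150
    "SD-CODING-Radar": {"WP-102": 1.0, "WP-150": 0.8},
}
booked_effort = {"WP-101": 120.0, "WP-102": 300.0, "WP-150": 50.0}  # person-days

def task_effort(task):
    """Effort attributable to an SD-Task from the booked WBS effort."""
    return sum(frac * booked_effort[wp] for wp, frac in wbs_matrix[task].items())

def within_criterion(sd_results, network_results, tolerance=0.05):
    """True if every SD result is within `tolerance` of the network value."""
    return all(
        abs(sd_results[k] - network_results[k]) <= tolerance * abs(network_results[k])
        for k in network_results
    )

print(task_effort("SD-DESIGN-Radar"))  # 1.0 * 120 + 0.2 * 50 = 130.0
network = {"schedule_weeks": 52, "effort_pdays": 2400}
sd_model = {"schedule_weeks": 53.5, "effort_pdays": 2460}
print(within_criterion(sd_model, network))  # within 5% on both measures
```

The same matrix, read column-wise, supports the reverse translation, from SD-Task results back into WBS packages, which is what makes the exchange of data between the two models well defined in both directions.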
[Figure 5 diagram: the structure of the SD Project Model is derived from the WBS (product breakdown), the OBS (team structure), and the development process specification (PDD life-cycle phases); monitoring tools supply past metrics (schedules, budgets, staff levels), estimating tools supply preliminary estimates, and past and planned behaviour (schedules, effort expenditure, staff levels) are exchanged with the network-based model.]
Figure 5 - Information links: relationship between a SD project model and the traditional models
CONCLUSIONS

Traditional project management and analysis tools have proven inadequate, since they do not provide a strategic overview, and because they fail to capture both the influence of human factors and many important project interactions and feedbacks. System Dynamics has been found to offer important benefits to the analysis of software project management. However, much of this work is post-mortem; to provide support to on-going projects, System Dynamics models need to be embedded within the framework of traditional project management. This paper has described a conceptual Project Management Integrated Model (PMIM), which has been improved and tested over a long period of intensive work within a large software project at BAeSEMA Ltd. The PMIM uses SD models at both the strategic and operational management levels. In planning, the model is used to assess the current plan and identify risks. In monitoring, the model is used to diagnose the past behaviour and help identify causes of deviations. Specific generic model structures for the software development process have been described; these are based on the dual life-cycle view of management and engineering, breaking the project into major sub-tasks, considering the engineering process as a flow of work and defects, and a single high-level management function. The structure of the SD model, based on a breakdown of the project work, allows a formal matrix-relationship to be established between the WBS and the SD model; a similar relationship can be defined between the OBS and the SD model; information is also exchanged between the SD model and traditional estimating and network-based models. Each time the model is re-calibrated to reproduce and diagnose segments of the past, it provides a new estimate of future behaviour. This calibration reproduces not only the final engineering result, but also the management decision-making pattern (i.e. perceptions of progress and subsequent re-planning).
The authors consider that this integrated tool provides powerful support to the management of major software-engineering projects while they are on-going. This has been demonstrated by the improvement and use of the tool during an actual project.

Acknowledgement -- This work has been funded by JNICT--Comissão Permanente INVOTAN/NATO and Programa PRAXIS XXI, Portugal; and supported by BAeSEMA Ltd., United Kingdom.
REFERENCES
1. J. R. Turner (1993) The Handbook of Project-Based Management. McGraw-Hill, London.
2. B. Boehm and P. Papaccio (1988) Understanding and Controlling Software Costs. IEEE Transactions on Software Engineering, 14, 10, 1462-1477.
3. T. K. Abdel-Hamid and S. Madnick (1991) Software Project Dynamics: An Integrated Approach. New Jersey, Prentice-Hall.
4. K. G. Cooper and T. Mullen (1993) Swords and Plowshares: The Rework Cycles of Defence and Commercial Software Development Projects. American Programmer, 6, 5, 41-51.
5. F. P. Brooks (1987) No Silver Bullet: Essence and Accidents of Software Engineering. Computer, 20, 4, 10-19.
6. B. Boehm (1981) Software Engineering Economics. New Jersey, Prentice-Hall.
7. R. Thomsett (1980) People Project Management. New York, Yourdon Press, Inc.
8. P. Rook and A. Wingrove (1990) Software Project Control and Management. In Software Reliability Handbook, Elsevier, 155-209.
9. T. Moores and J. Edwards (1992) Could large UK corporations and computing companies use software estimating tools? A survey. European Journal of Information Systems, 1, 5, 311-319.
10. J. Edwards and T. Moores (1994) A conflict between the use of estimating and planning tools in the management of information systems. European Journal of Information Systems, 3, 2, 139-147.
11. P. Rook (1990) Software Development Process Models. In Software Reliability Handbook, Elsevier, 413-440.
12. G. Weinberg (1982) Overstructured Management of Software Engineering. Proceedings of the Sixth International Conference on Software Engineering, September 13-16, Tokyo, Japan, 2-8.
13. K. G. Cooper (1994) The $2,000 hour: how managers influence project performance through the rework cycle. Project Management Journal, 25, 1, 11-24.
14. C. Liu and E. Horowitz (1989) A formal model for software project management. IEEE Transactions on Software Engineering, 15, 1280-1293.
15. G. Lee and T. Murata (1994) A b-distributed stochastic Petri Net model for software project time/cost management. Journal of Systems and Software, 26, 149-165.
16. K. Lee, I. Lu and H. Lin (1994) PM-Net: a software project management representation model. Information and Software Technology, 36, 5, 295-308.
17. R. Gulezian (1991) Reformulating COCOMO. Journal of Systems and Software, 16, 235-242.
18. J. Forrester (1961) Industrial Dynamics. Cambridge, Mass., The M.I.T. Press.
19. E. Roberts (1964) The Dynamics of Research and Development. New York, Harper & Row.
20. A. Rodrigues and J. Bowers (1996) System Dynamics in Project Management: a comparative analysis with traditional methods. System Dynamics Review, 12, 2 (forthcoming).
21. T. Abdel-Hamid and S. Madnick (1982) A model of software project management dynamics. The 6th International Computer Software and Applications Conference (COMPSAC), November 1982.
22. R. Tausworthe, M. McKenzie and C. Lin (1983) Structural considerations for a software life-cycle dynamic simulation model. Presented at the AIAA Computers in Aerospace IV Conference, October 1983.
23. M. McKenzie et al. (1984) A dynamic system simulation model of the software development process. In Proceedings of the 1984 Summer Computer Simulation Conference (SCSC), July 1984, 889-904.
24. C. Lin and R. Levary (1989) Computer-Aided Software Development Process Design. IEEE Transactions on Software Engineering, 15, 9, 1025-1037.
25. C. Lin (1993) Walking on Battlefields: Tools for Strategic Software Management. American Programmer, 6, 5, 33-40.
26. Pugh-Roberts Associates - PA Consulting Group (1993) PMMS - Program Management Modeling System. PA Consulting, Cambridge, MA.
27. K. G. Cooper (1980) Naval Ship Production: A Claim Settled and a Framework Built. INTERFACES, 10, 6, 30-36.
28. A. Rodrigues and T. Williams (1995) The Application of System Dynamics in Project Management: An Integrated Model with the Traditional Procedures. Working Paper 95/2, Theory Method and Practice Series, Department of Management Science, University of Strathclyde.
29. A. Rodrigues and T. Williams (1996) System Dynamics in Project Management: Assessing the impacts of client behaviour on project performance. To be presented at the International System Dynamics Conference, MIT, Boston, 22nd-26th July 1996.
30. T. Williams, C. Eden, F. Ackermann and A. Tait (1995) Vicious Circles of Parallelism. International Journal of Project Management, 13, 3, 151-155.