Published in Proceedings of the INCOSE 2000 Annual Symposium, Minneapolis, USA, 16-20 July 2000, pp 723-730.

What the Lessons Learned from Large, Complex, Technical Projects Tell Us about the Art of Systems Engineering

Stephen C Cook
Systems Engineering and Evaluation Centre
University of South Australia
The Levels Campus, Mawson Lakes SA 5095, Australia

Abstract. This paper examines the literature to identify the lessons that have been learned from a selection of the thousands of projects that have been conducted over the last 15 years or so. The first purpose of this review is to establish a value-for-money argument for the application of systems engineering. The review is also intended to provide guidance on the application and development of systems engineering methodologies. To provide a context, the paper opens with a definition of systems engineering, a description of key concepts and a brief discussion on supplementary methodologies. This is followed by a discussion of the lessons learned by NASA, the UK Ministry of Defence, and the UK and USA software development industries. The paper closes with a discussion that concludes that systems engineering is a cost-effective way of ameliorating schedule, performance and user-acceptance risk for large, complex, technical development projects. It also concludes that the lessons learned indicate the need for more than good processes; there is a growing need to understand and manage the socio-political issues associated with creating and successfully fielding a complex, technical system.

INTRODUCTION

Those who have been accustomed to the application of systems engineering principles and methods to large, complex, technical projects find it hard to envisage tackling a substantial project without a well-established systems engineering framework and set of processes. The value of systems engineering to such practitioners is axiomatic. There are many, however, who view the effort required to produce even a minimal number of process products (plans, documents, models, etc.) as being excessive, especially in smaller projects.
This raises the question of the cost-effectiveness of systems engineering. A direct approach to examining this issue, such as comparing the outcomes of projects that did use systems engineering against those that did not, was not attempted because it was felt that to assign projects to one camp or the other would be overly simplistic. Even if the continuum of process sophistication were recognised, a complex judgement would be needed to assign a process maturity index to each project. Moreover, it is unlikely that it would be possible to gain access to the data needed; such information is notoriously hard to obtain for obvious reasons.

Given the issues raised above and the limited timescale for the study, the approach taken has been to review the lessons learned from large, complex, technical projects to determine the extent to which the application of systems engineering would be of benefit. In order to place this review on a clearly defined baseline, the paper opens with a definition of systems engineering and a list of the key characteristics of the contemporary view of this field of endeavour. The review follows, along with some comments on the initiatives being undertaken by systems development authorities around the world. The paper concludes with a summary of salient issues and an overall conclusion on the value of systems engineering.

WHAT DO WE MEAN BY SYSTEMS ENGINEERING?

The background of systems engineering. Systems engineering gained recognition as a discipline following the Second World War. The increasing cost and technical complexity of development and acquisition programs in the 1950s and 1960s stimulated the need for a methodology to handle the technical and managerial complexity of large projects. Some of this recognition was no doubt due to large program failures that could have been avoided, or at least mitigated, through the use of systems engineering (M’Pherson, 1980; DSMC, 1983). In the 1990s, systems engineering received increased attention as ways were sought to reverse the trend of increasing project failure, particularly in large software-based systems.

Systems engineering had its origins in the defence and aerospace sectors, particularly in the United States of America and the United Kingdom. A scan of the recent proceedings of the Annual Symposia of the International Council On Systems Engineering (INCOSE) shows an increasing trend to apply systems engineering to a wider range of industries. The cancellation of US military standards a few years ago increased the rate of evolution of systems engineering, and the latest standards are now strongly driven by industry concepts of best practice.

The key characteristics of systems engineering. Systems engineering is an interdisciplinary approach and means to enable the realization of successful systems (Robertson, 1998). Above all, systems engineering is a systems methodology, which means that it:

• Encourages holistic thinking (considering the entity as a whole, its internal arrangement, and its properties).
• Recognises the key characteristics of a system:
  - a hierarchy of subsystems;
  - the emergence of behaviour at particular levels of the hierarchy that cannot be decomposed to lower layers;
  - the interfaces, in particular, communication and control between and across layers in the hierarchy.

Systems engineering can also be considered as a (human activity) system in its own right: one that creates, modifies and supports large, complex, technical systems. Systems engineering not only looks across a wide number of disciplines, it should also take a life-cycle-balanced approach to system solution formulation. Systems engineering should not be considered merely a set of procedures to be followed.
Rather, it is a methodology that should be thought of as a way of thinking, encompassing a set of competencies and a set of processes that can be invoked as needed, each of which can be achieved through a range of methods. An important aspect of systems engineering is the selection and tailoring of the processes to suit each project.

There has been tremendous progress in systems engineering development over the last decade, driven largely by the increasing complexity of projects and the increasing incorporation of software into system solutions. Many new standards have appeared, as have many capability maturity models (Gabb, 1999; Sheard and Lake, 1998) that aim to determine the capability of an organisation to undertake the processes needed to successfully conduct systems engineering. The latest standard, ANSI/EIA-632, which was released last year, lists 13 processes needed to engineer a system. These indicate that systems engineering has expanded its breadth beyond that of a purely technical discipline. Henceforth, when talking about contemporary systems engineering we mean the range of processes needed to engineer a system.

Systems engineering can be considered a metadiscipline that coordinates and interacts with other related disciplines such as project management, development engineering, integrated logistic support, test and evaluation, configuration management and software engineering. Systems engineering does not exist in isolation from the culture and processes of an organisation. The encapsulation of the project environment by the enterprise environment is particularly salient, as it is the latter that ANSI/EIA-632 addresses. Without a rich enterprise environment, project environments can be expected to be sparse and highly variable.

Complementary methodologies. It is useful to state that systems engineering is only one of a number of systems methodologies that are used to tackle systems problems. While we believe it is the most comprehensive and the one best suited to the type of large, complex, technical projects encountered in the defence, aerospace and software industries, it does have known limitations. Systems engineering fits into the class of hard systems methodologies: ones where the methods of science, engineering and mathematical analysis underpin the problem-solving approach. It is known to work well when the following conditions are met (Cook et al., 1998):

• Systems objectives can be defined at the very beginning of the project.
• All parties can envisage the expected system solution.
• The process can be summarised as moving the system in question from its initial condition S0 to an end state S1.
• The environment (technology, organisational, social policy) is stable.
• Objectives are shared among stakeholders.

When some or all of these conditions are not met, supplementary approaches are indicated. Some of these are extrapolations of traditional practice while others are based on the alternative methods of reasoning found in the soft sciences. These can be useful for the following reasons:

• Their primary interest is in changing organisational culture and gaining commitment from participants to a particular course of action.
• They recognise the importance of values, beliefs and philosophies.

• They use an interpretive approach to tackle systems problems.

Soft systems methodologies are gaining popularity for systems developments, particularly for those information systems that impact on the way an organisation conducts its business and consequently its internal organisation and power structure. Soft systems methodologies do not replace systems engineering methods but can augment them to better achieve certain goals such as improved requirements elicitation, stakeholder support, user acceptance, and management effectiveness. Interested readers are referred to Checkland and Holwell (1998) for a comprehensive coverage of the applicability of these methods and the difference between the hard and soft traditions. The purpose of introducing soft systems approaches is to provide background for the succeeding analysis of systems engineering lessons learned. We will provide opinion on which issues can be successfully tackled by contemporary systems engineering and which will need augmentation from soft systems methodologies.

LITERATURE REVIEW

The practice of systems engineering has evolved over the years to reflect contemporary factors and organisational priorities. The literature review concentrates on the last 15 years: a period typified by increasing project complexity, a greater emphasis on project through-life cost, and more enlightened approaches to risk management.

Lessons learned by NASA. NASA (1989) lists a large number of areas where mistakes have been repeatedly made in projects. Listed below are some of the more salient ones for methodology research:

a. Lack of clear definition of requirements early in the system design phase. This included:
   • Starting designs before the requirements were known.
   • Vague specifications.
   • Design from the bottom up rather than the top down.
   • Incomplete documentation of requirements.
   • Lack of early and thorough requirements analyses prior to the start of a design.

b. Poorly defined technical interfaces between subsystems and other subsystems/spacecraft. This included:
   • Failure to understand interface requirements.
   • Too narrow a focus within the discipline/subsystem level; omission of a systems viewpoint or systems engineering awareness by the discipline engineer.
   • Incomplete interface definition.
   • Failure to update interface documentation.

c. Inadequate test planning during the system design phase, including:
   • Not seeking out expert advice (from within NASA) early to develop verification plans.
   • Lack of early test planning and test procedure development.
   • Inadequate consideration early in the systems design phase of how large amounts of data will be reduced, analyzed and reported.
   • Inadequate consideration of testability to assure that the design can be tested to demonstrate specification compliance.

d. Failure to think the design through to completion of integration. This included:
   • Inadequate consideration of accessibility/maintainability early in the design.
   • Inadequate consideration of handling and transportation requirements.

e. Insufficient attention paid to the impact of mission operations on hardware/software design requirements.

Each of these can be ameliorated by simply following traditional systems engineering principles and practice as espoused from the early 1980s onward (DSMC, 1983).

Figure 1. Scatter plot of cost overrun versus expenditure in the systems design phase (NASA, 1995).

An important lesson from NASA is that if the expenditure on the system design phases (up to and including the preliminary design phase) is less than 5% of the estimated cost of the project, vast cost (and schedule) overruns can be expected; see Figure 1, reproduced from NASA (1995). This percentage appears elsewhere in the systems engineering literature (M’Pherson, 1980; DSMC, 1983). In fact, one of the tenets of the Downey procedures, employed by the UK Ministry of Defence since the late 1960s, is that “up to 15% of the estimated total development cost may need to be spent on project definition in order to reduce risk”

(DERA, 1996). The same source notes that this level of investment has rarely been seen in practice and cites this as one reason for disappointing project outcomes. A point to be made here is that the money must be spent completing the tasks that are expected of these phases (ANSI/EIA-632, 1999; Blanchard and Fabrycky, 1998), not simply creating nugatory process products.

Lessons learned by the UK Ministry of Defence in military system development. The following information was extracted from MOD (1998). The UK National Audit Office 1997 Review showed that the top 25 major equipment programs had an average slippage of 35-40 months and an annual cost growth of 7-9%. (This excluded the Eurofighter and Trident projects; if these were included, the cost growth was -2%.) The following causes of cost escalation were identified:

• Estimating inaccuracies (£130m).
• Changes to equipment specifications (£180m).
• Defence industry inflation running higher than general inflation (£340m).
• Changes in program numbers or required timetable of introduction (£400m).

Schedule slippage was attributed to the following:

• Technical difficulties (33%).
• Budgetary constraints (25%).
• The need to redefine the project (19%).
• Difficulties associated with international collaboration (15%).

In all fairness, the baselines against which the projects were judged were set early in the procurement cycle, often at the feasibility stage, when the technological solution was still uncertain. Projects missed, on average, 40% of program milestones, and 10% of projects failed to meet key technical requirements. The Learning From Experience initiative identified the following shortcomings:

• Ineffective co-operation between stakeholders.
• Over-optimism in predicting time and cost.
• Insufficient appreciation of technical and commercial risk.
• Ineffective incentivisation of suppliers.
• Insufficient investment at the Project Initiation stage.

As in the US and Australia, UK defence procurement has seen significant change over the last 15 years. Further improvements are planned as part of a major change program known as Smart Procurement, which builds upon the application of Downey principles (a waterfall systems development model with 15% expenditure on systems definition (DERA, 1996)) to engender speedier, more coherent and interactive processes. The key to the proposals is to adopt a through-life systems approach to procurement, allied to improved commercial practices and measures related to personnel, training and information systems. The following points highlight the initiatives of Smart Procurement and illustrate its breadth (MOD, 1998; MOD, 1999; Cranfield, 2000). The MOD intends to:

a. Adopt a through-life systems approach with:
   • A formal Project Initialisation Phase to replace the Pre-Feasibility Phase of the Downey cycle: the intention is to identify and engage stakeholders and establish an applied research program.
   • Co-operative stakeholder involvement: elimination of organisational and non-constructive interfaces between stakeholders.
   • Concurrency and enlightened approval strategies.
   • Improved requirements management, termed Smart Requirements, that sees the evolution of the User Requirements Document in parallel with the System Requirements Document.
   • Improved estimating and predicting.
   • Incremental acquisition.

b. Employ improved commercial practices that:
   • Incentivise performance.
   • Embrace partnering.
   • Bear down on Defence inflation.
   • Use past performance as an evaluation criterion.
   • Incorporate teamwork in pricing, and alternative price control methods in place of free-market economics.
   • Reduce the procurement overhead.

c. Institute remedies for unsatisfactory performance.

d. Pay attention to personnel and training (the MoD already sponsors an MSc in Defence Systems Engineering at University College London).

e. Utilise uniform information management systems.

f. Apply benchmarking and continuous business improvement.

The lessons learned indicated that improvements could be made to the implementation of many of the MOD's systems engineering processes. They also highlighted many non-process-related factors that are being addressed through Smart Procurement: the MOD is dealing with both the hard and the soft issues.
It is important to note that the MOD is tuning its systems engineering processes to reflect contemporary systems and management thinking, not abandoning them.

Lessons learned from large UK civil software-based system developments. Jackson (1997) reports on a UK study of information technology investments across 14,000 organisations which found that:

• 80-90% of systems did not meet their goals.
• About 80% were delivered late and over budget.
• Around 40% of developments failed or were abandoned.
• Under 40% addressed training and skills requirements.
• Less than 25% fully integrated business and technology objectives.
• Only 10-20% met their success criteria.

While it is hardly comforting to know that the civil sector also has great difficulty realising successful software projects, this information can be used sagely to budget for software developments. Jackson, who is a management professor in the UK, takes a soft systems approach to information systems development, along with other notable protagonists (Checkland and Holwell, 1998; Hitchins, 1993). These authors are of the opinion that traditional systems engineering approaches are not wholly appropriate to information systems because problems of this type are messy, ie poorly defined, subject to conflicting viewpoints, and needing to be tackled in a changing environment. These changes include implementation technology, user needs, and the cultural perception of what constitutes an acceptable computer application. Given the miserable software success rate reported, this view should not be ignored.

Lessons learned from large USA civil software-based system developments. The Standish Group (1995) undertook a similar study in the USA. They received responses from 365 organisations covering some 8,380 applications in banking, securities, manufacturing, retail, wholesale, health care, insurance, services and local, state and federal organisations. They divided project outcomes into three categories as shown in Table 1.

Table 1. Standish project outcome categories and the percentage of surveyed projects in each (The Standish Group, 1995):

• Type 1, project success (16.2% of projects): project completed on time, on budget, with all features and functions as initially specified.
• Type 2, project challenged (52.7% of projects): project completed but over budget, over time, with fewer features and functions than initially specified.
• Type 3, project impaired (31.1% of projects): the project was cancelled at some point during the development cycle.

There is a remarkable similarity to the profile observed in the UK. Usefully, the Standish report offers some analysis of the reasons for projects falling into each category and generalises them into a Success Criteria Metric that can be used to estimate probable project success. They then apply this metric to four case studies to show its strong correlation with known project outcomes. The Standish success-potential metric appears below in Table 2. The points shown against each criterion were derived directly from survey responses and relate to the percentage of projects where that criterion determined the fate of the project.

Table 2. The Standish project success-potential metric (The Standish Group, 1995):

 1. User involvement (19 points)
 2. Executive management support (16 points)
 3. Clear statement of requirements (15 points)
 4. Proper planning (11 points)
 5. Realistic expectations (10 points)
 6. Smaller project milestones (9 points)
 7. Competent staff (8 points)
 8. Ownership (6 points)
 9. Clear vision & objectives (3 points)
10. Hard-working, focussed staff (3 points)
    TOTAL: 100 points
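To make the metric concrete, the weights in Table 2 can be applied as a simple yes/no checklist: a project scores the sum of the points for the criteria it satisfies. The sketch below is illustrative only; the weights are taken from the Standish report, but the scoring function and the example project are our assumptions, not a procedure defined in the report.

```python
# The ten Standish (1995) success criteria and their survey-derived
# weights (points sum to 100). Applying them as a yes/no checklist
# score is an illustrative assumption, not a method from the report.
STANDISH_WEIGHTS = {
    "User involvement": 19,
    "Executive management support": 16,
    "Clear statement of requirements": 15,
    "Proper planning": 11,
    "Realistic expectations": 10,
    "Smaller project milestones": 9,
    "Competent staff": 8,
    "Ownership": 6,
    "Clear vision & objectives": 3,
    "Hard-working, focussed staff": 3,
}

def success_potential(criteria_met):
    """Sum the weights of the criteria a project satisfies (0-100)."""
    return sum(STANDISH_WEIGHTS[c] for c in criteria_met)

# A hypothetical project that satisfies three of the ten criteria.
score = success_potential([
    "User involvement",
    "Clear statement of requirements",
    "Competent staff",
])
print(score)  # 19 + 15 + 8 = 42 points out of a possible 100
```

A higher score indicates a greater likelihood of landing in the Type 1 (project success) category. It is worth noting that the three most heavily weighted criteria concern stakeholders and requirements rather than purely technical competence.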

Criteria 3, 4, 7 and 9 have always been covered by traditional (pre-1990) systems engineering. Criterion 6 is a practice issue that can be incorporated into project planning. Criteria 1 and 8 certainly feature in contemporary systems engineering thinking (ANSI/EIA-632, 1999) and, together with criterion 2, form the core concerns of the soft systems approach (Checkland and Holwell, 1998). Thus it is fair to say that by adopting systems engineering principles and by applying some soft systems

practices, one’s project has a good chance of becoming one of the elite 16%. In addition to the survey reported above, The Standish Group conducted interviews with various IT managers to shed further light on why projects are beset with problems. While most comments echoed the findings of the survey, the problem of competing priorities emerged as an important issue, in particular when this stemmed from an unresourced reorganisation. We conclude that the application of systems engineering could improve a high proportion of software-based project outcomes. The application of soft systems approaches would also be appropriate in managing project expectations and the management issues reported above.

Lessons learned from large USA Federal software development. Gabig (1993) reports in the second paragraph of his paper a similar situation for US Federal Government software contracts to that reported by the civilian sector. The paper highlights the following points:

• When using the waterfall model, a failure in any phase propagates through to the end.
• Be vigilant in trapping faults in the review processes.
• Follow the established procedures and, in particular, get the requirements correct.
• Use well-established software suppliers who have experience in projects of similar size, complexity and type.
• Fixed-price contracts have a detrimental effect on the ability of the parties to work as a team and hence on the value of the final product.

The first three points are addressed by systems and software engineering practice (Sommerville, 1996). The fourth point is established quality management practice that is all too often forgotten or overridden by other factors.
While Gabig’s paper contains little quantitative information on the money that can be saved by the appropriate application of systems engineering principles and practice, the last paragraph sums up the difference in much the same terms as the other references: “Perhaps the foremost lesson learned by the Federal government has been that without a disciplined and methodical approach to software development, a large project can easily degenerate into an amorphous and overwhelming task. The vast majority of successful large software development projects within the Federal government have been guided by the waterfall model which provides a sequential approach to the software development process.”

Lessons learned from aircraft development case studies. Moody et al. (1997) define two key metrics for projects: design difficulty and resources employed. As might be expected, they are able to show that as design difficulty increases, so do the resources required. When successful projects are plotted on a graph with these two parameters as the axes, the projects tend to cluster along a line of positive slope. In Chapter 32, Moody et al. show that this relationship is confirmed for a case study involving six successful aircraft developments. They also extend their method to plot project performance (based on technical, cost and schedule performance) against systems engineering fundamentals (based on the adequacy of a number of well-established systems engineering activities). Figure 2, reproduced from that book, shows two lines, one for civil aircraft and the other for military. With only six points no strong conclusion can be drawn, but both lines indicate that an improved score in systems engineering fundamentals yields improved project performance. The separation between the two lines led Moody et al. to speculate as to the cause. While they suggest that it might be caused by inherent differences between the way the two sectors do business, it was noted that the military developments had marginal cost and schedule performance. They speculate that this suggests that the development process for government aircraft is more prone to cost and schedule difficulties. This assertion would need to be considered in the light of the greater degree of scrutiny government projects receive.

Figure 2. Project performance score versus systems engineering fundamentals, with separate lines for commercial and military aircraft (from Moody et al., 1997).

DISCUSSION

The literature reviewed held consistent messages for both hardware- and software-dominant systems. The key points are shown below, grouped under three headings.

Guidance for systems engineering practitioners.

1. It is imperative to understand what is needed and to gain user and other stakeholder engagement before embarking on a project, and to retain it throughout the life of the project.

2. The user focus must be captured in a clear, traceable and testable set of requirements, with an adequate process to manage inevitable evolution and change.

3. A whole-of-life perspective is important both to create and implement appropriate systems engineering methods and to identify and measure the benefits of the approach.

4. Systems engineering processes and the application of standards must be tailored to the task in hand.

5. Select partners or subcontractors with a sound enterprise environment honed on projects of a similar scale and complexity. This may sound obvious, but failure to heed this advice has been the cause of many project failures.

6. Attention to interface definition and management is vital for project success.

Guidance for systems engineering planning.

1. Adherence to systems engineering principles and processes across an organisation will save money. Many troubled projects cite reasons for their difficulties that should have been avoidable through the application of systems engineering. Thus systems and software engineering are a good investment, and the organisation should develop an environment whereby their use becomes part of the work culture.

2. Insufficient investment in the early design phases (5 to 15%) is likely to lead to project cost overruns of between 50% and 100% for both hardware and software projects. It is better to plan to expend the time and resources during the early phases of a project than to find out the painful, expensive way that the conceptual design was inadequate.

3. The planning for systems technical evaluation (test and evaluation), whether through modelling, simulation, analysis, or testing, needs to be undertaken in the conceptual design phase. Test and Evaluation Master Plans (TEMPs) provide important guidance to all concerned on what the users consider are the important criteria for the system and how the system should be evaluated against them.

4. It is important to undertake planning against realistic expectations based on sound experience.

Guidance for process development.

1. The probability of project success (lowness of risk) is strongly related to the quality of the processes employed to engineer a system. Development of an enterprise environment is a worthwhile investment.

2. Development of processes that encapsulate soft systems approaches will encourage user involvement and stakeholder buy-in.

3. Improved commercial practices that appreciate the need for teamwork and flexibility need to be evolved from the present fixed-price arrangements, which are proving to be counterproductive.
CONCLUSION

A common theme that emerges from the review material is that the application of systems engineering principles and practice will deliver better systems faster and more cost-effectively than ad-hoc, non-systemic alternatives. There was good agreement on the areas that require improvement. These can be addressed by a combination of contemporary systems engineering methods and soft systems methodologies. The framework provided by ANSI/EIA-632 offers a rich and appropriate enterprise environment in which to develop large, complex, technical systems: one that allows practitioners the scope to select from a range of alternatives those processes and methodologies which best suit their industry sector and the scale of the project in hand. Such a framework permits the integration of the hard and soft systems traditions.

ACKNOWLEDGMENT

An earlier version of this paper appears as part of the final report written for the Defence Acquisition Organisation as a deliverable for CAPO: C003/99. The financial support for this research and permission to publish are gratefully acknowledged.

REFERENCES

ANSI/EIA-632, Processes for Engineering a System, Electronic Industries Alliance, January 1999.

Blanchard B.S. and Fabrycky W.J., Systems Engineering and Analysis, 3rd Ed., Prentice Hall, 1998.

Cook S.C., Sydenham P.H., Harris D.D. and Harris M.B., Lecture Notes for Principles of Systems Engineering, University of South Australia, December 1998.

Checkland P. and Holwell S., Information, Systems and Information Systems, Wiley, Chichester, ISBN 0-471-95820-4, 1998.

Cranfield, Systems Engineering for Defence: Smart Procurement the Systems Perspective, Proc. of the 3rd Annual Systems Engineering for Defence Conference, DERA/Cranfield University, Shrivenham, UK, 15-16 February 2000.

DERA, The MoD’s Downey Procedures, 1996, DRA/LS(CS)/SYS_ENG/TG0/CMR/96/1.

DSMC, Systems Engineering Management Guide, Defense Systems Management College, Fort Belvoir, Virginia, USA, 1983.

EIA/IS-731.1, Systems Engineering Capability Model (SECM), Electronic Industries Alliance, January 1999. [www.geia.org/eoc/G47/eiag47.htm]

Foresight, Building Integrated Systems, The report of the Defence and Aerospace Foresight Panel Technology Working Group, IEE, 1997, ISBN 0 85296 925 2.

Jackson M.C., “Critical Systems Thinking and Information Systems Development”, Proc. of the 8th Australasian Conference on Information Systems, 1997.

Gabb A.P., “System Engineering & Software Engineering Standards and Models”, [Online, accessed 10 March 1998], URL: http://www1.tpgi.com.au/users/agabb/

Gabig J.S., Lessons Learned from Large Federal Software Development Contracts, [Online, accessed 25 Aug 1998], URL: http://www.venable.com/business/gabig.htm

Hargreaves G., A Comparison of the Australian and UK Defence Procurement Systems for Major Projects, MSc Dissertation, Defence Systems Engineering Group, University College London, September 1998.

Martin, James N., Overview of the EIA 632 Standard “Processes for Engineering a System” (Tutorial G), 30 September 1998. [Online, accessed 11 March 1999], URL: www.geia.org/eoc/G47/eia632.htm

M’Pherson P.K., “Systems Engineering: an approach to whole system design”, The Radio and Electronic Engineer, Vol. 50, No. 11/12, pp 545-558, 1980.

MOD, Strategic Defence Review Study 2F3/7 Smart Procurement, PPB/P(98)2, 19 January 1998, UK Ministry of Defence.

MOD, The Acquisition Handbook - A Guide to Smart Procurement, 2nd edition, August 1999. [Online, accessed 23 February 2000], URL: http://www.mod.uk/policy/spi/handbook/front.htm

Moody J.A., Chapman W.L., Van Voorhees F.D. and Bahill A.T., Metrics and Case Studies for Evaluating Engineering Designs, Prentice Hall PTR, Upper Saddle River, NJ, 1997.

NASA, NASA Systems Engineering Handbook, SP-6105, June 1995.

NASA, Space Engineering Lessons Learned, Engineering Directorate, Nov 1989, [Online, accessed 25 Aug 1998], URL: http://home.erols.com/ee/sysengll.htm

Robertson T.C. (Ed.), Systems Engineering Handbook, INCOSE, Jan 1998.

Sheard S.A. and Lake J.G., “Systems Engineering Standards and Models Compared”, Proc. of the INCOSE Annual Symposium, 1998.

Sommerville I., Software Engineering, Fifth Edition, 1996, ISBN 0-201-42765-6.

The Standish Group, Chaos, The Standish Group, 1995, [Online, accessed 24 March 1999], URL: http://www.standishgroup.com/chaos.html

ABOUT THE AUTHOR

After graduating in Electronics Engineering from the South Australian Institute of Technology in 1977, Prof Cook commenced work as a telecommunications equipment design engineer in the UK, where he also completed an MSc in Computer Science at the University of Kent. On his return to Australia, he worked in the defence electronics industry until 1988, when he joined the Defence Science and Technology Organisation (DSTO) as an Engineer Class 5 responsible for radio communication research. In addition to his management role, he became an active contributor to the fields of radio system performance monitoring, high-frequency radio systems design, multi-mechanism adaptive radio technology, modelling and simulation of military information networks, and engineering design automation. Prof Cook completed a PhD in 1990 in Measurement Science and Systems Engineering at City University, London.
In 1994 he was promoted to Senior Principal Research Scientist responsible for the management and scientific leadership of the Military Information Networks Branch of DSTO. Prof Cook joined the University of South Australia as the Foundation Professor of Systems Engineering in 1997. Prof Cook has contributed to three books and has published over fifty refereed journal and conference papers.