Integration of technical development within complex project environments

Department of Computer Science at the University of York
Author: Nick Brook
Project Supervisor: Katrina Attwood

Abstract

The integration of design and safety functions within complex technical system development and project methodologies remains undeveloped. Deficits in management are primarily manifested through large numbers of major unanticipated changes, which variously result in cost and schedule overruns and the compromised fulfilment of important project requirements. These undesirable consequences are greatly magnified by complexity. This project attempts to address this deficit. The high-level objective is to develop a framework that facilitates better planning, monitoring and control of technical activities. This will be achieved through the identification, development and integration of suitable existing concepts and techniques into a complexity management framework, applicable to all complex projects and particularly to safety-critical engineering projects. The project builds primarily upon the research previously undertaken by this author into project failings, complexity, and existing tools and techniques. The framework will be evaluated through a questionnaire survey of several of its component parts, a review against the recommendations from the literature, and case study application of its tools and techniques. The goal is that all or parts of this framework can be put to practical application within industry.
Statement of Ethics

This dissertation, including literature research, questionnaire survey and conclusions, has carefully considered and adhered to the three principles of ethics specified by the University of York: to do no harm, to ensure informed consent from human participants, and to uphold sound principles of data confidentiality.

Do No Harm
No physical system or entity has been developed which could cause harm of any kind. The work undertaken pertains to literature research, the gathering of non-sensitive and non-confidential data via a questionnaire survey, and the development of research conclusions.

Informed Consent
Information which is freely available in the public domain has been appropriately referenced within this dissertation. Some information was acquired by direct requests to, and discussions with, the authors; at all times the authors were made aware of the purpose of the request and the nature of the critical evaluation. Questionnaire respondents participated freely and were made aware of the purpose of the survey beforehand.

Confidentiality of Data
No sensitive or confidential data has been acquired, used or released during the course of this dissertation.

The word count is 39,489, as counted by MS Word. This includes the whole body of the report but excludes the appendices, which are included in the project submission for completeness and interest.
Nick Brook
MSc Safety-Critical Systems Engineering
9th January 2017
Table of Contents

Table of Contents .............................................. iii
Acknowledgements ................................................ iv
1. Introduction .................................................. 1
   1.1 Overview ................................................... 1
   1.2 Objectives ................................................. 1
   1.3 An understanding of complexity ............................. 3
   1.4 The importance of considering the organisation ............. 4
   1.5 Existing management frameworks ............................. 5
   1.6 Devising the Complexity Management Framework ............... 8
2. Identifying and quantifying project complexity ................ 8
   2.1 The complexity assessment matrix ........................... 8
       2.1.1 Complexity themes .................................... 9
       2.1.2 Complexity Criteria ................................. 10
       2.1.3 Complexity Matrix ................................... 11
   2.2 How, where and when to assess ............................. 14
   2.3 Outputs from complexity assessment ........................ 15
   2.4 Complexity profile and the interaction of complexity criteria ... 15
3. Critical Project Success factors and their application in system development ... 17
   3.1 Selection of success factors from literature .............. 17
   3.2 Verification of success factors through questionnaire ..... 18
       3.2.2 Full dataset ........................................ 19
       3.2.3 By age .............................................. 22
       3.2.4 By role ............................................. 22
       3.2.5 By industry ......................................... 23
       3.2.6 Conclusion .......................................... 24
4. System development planning techniques ....................... 24
   4.1 Overview .................................................. 24
   4.2 Design Structure Matrix ................................... 25
       4.2.1 Introduction ........................................ 25
       4.2.2 Design Structure Matrix principles .................. 26
       4.2.3 Creating and applying the organisational architecture DSM ... 27
       4.2.4 Creating and applying process architecture DSM ...... 27
       4.2.5 Application of the process DSM within a complexity management framework ... 30
       4.2.6 Process-organisational MDMs and their application ... 33
5. System Performance Measures .................................. 34
   5.1 Introduction .............................................. 34
   5.2 Desirable properties of Performance Measures .............. 35
   5.3 Selection of System Performance Measures from literature ... 38
       5.3.1 Requirements ........................................ 38
           5.3.1.1 Satisfaction of Stakeholder and System Requirements ... 38
           5.3.1.2 Requirements attributes ....................... 38
       5.3.2 Development health .................................. 39
       5.3.3 Process maturity .................................... 40
       5.3.4 System maturity ..................................... 41
       5.3.5 Organisation, process and schedule complexity ....... 41
   5.4 The verification of performance measures through questionnaire ... 42
       5.4.1 Performance measures by age, role and industry ...... 43
       5.4.2 Conclusion .......................................... 43
   5.5 The collective use of performance measures ................ 43
6. Managing system development risks ............................ 44
   6.1 Introduction .............................................. 44
   6.2 Important concepts ........................................ 45
7. Complexity orientated development framework .................. 46
   7.1 Integrating the sub-processes together .................... 46
   7.2 Analysis of framework against criteria within existing literature ... 49
8. Case study application of framework .......................... 50
   8.1 Purpose ................................................... 50
   8.2 Case study 1 – Boeing Dreamliner .......................... 50
       8.2.1 Background .......................................... 50
       8.2.2 Work Breakdown Structure ............................ 51
       8.2.3 Complexity assessment ............................... 53
       8.2.4 Critical Success Factors ............................ 56
       8.2.5 Planning technique .................................. 58
       8.2.6 Performance measurement ............................. 58
       8.2.7 Risk management ..................................... 59
       8.2.8 Comparing framework findings with project outcomes ... 59
9. Results and Evaluation ....................................... 60
10. Conclusion .................................................. 64
References ...................................................... 66

Appendix A: Glossary of terms and acronyms
Appendix B: Existing management frameworks
Appendix C: Critical Success Factors
Appendix D: Response summary
Appendix E: Complexity questionnaire
Appendix F: Questionnaire results ranked by influence
Appendix G: Dataset sample sizes
Appendix H: Questionnaire results filtered by age
Appendix I: Questionnaire results filtered by role
Appendix J: Questionnaire results filtered by industry
Appendix K: Design Structure Matrix
Appendix L: System Performance Measures
Appendix M: Risk register template
Appendix N: Risk attributes as metadata
Appendix O: Criticality Assessments for Boeing Dreamliner case study
Appendix P: Timeline to Boeing Dreamliner
Appendix Q: Complexity management case study using OL3 and AREVA's EPR
Appendix R: Full questionnaire results per respondent
Acknowledgements

I would like to thank Lana and Anna for coping with my extended period of further education and putting up with my many grumpy moods. I also have the utmost respect for the University of York and the teaching and administrative staff who have made my studies so enjoyable and rewarding. Finally, my Mum and Dad have been extremely supportive and I could not have done this without them.

Continuous effort - not strength or intelligence - is the key to unlocking our potential. - Winston Churchill.
1. Introduction

1.1 Overview
The characteristics of complex systems include a difficulty in determining outcomes from the inputs [1] and the presence of a 'degree of disorder' [2]. The relationships and dependencies between processes, organisations and other external systems fit this description of a complex system well. What is more, the general trend is for development complexity to increase over time, while simultaneously catering for advances in technology and downward pressures on development timescales and cost. The result is a greater prevalence of the wicked problem, that is, a technical system development that is 'highly resistant to resolution' [3]. This can be thought of as Technical Development Complexity.

This dissertation builds on the research undertaken within the previous literature survey [4]. That work described the high failure rate amongst projects, summarised the main reasons for this and identified areas for further investigation. It also gave a variety of definitions of complexity and compelling reasons why complexity should be the focus of attention when planning the development of a system. The dissertation develops the literature survey's conclusions, describing a framework for the treatment of complex system development within a wider project environment.

As the network of individual activities and their interactions grows, it is reasonable to expect that the mechanisms used to achieve a successful outcome will be greater in magnitude and in the resources needed to undertake them. Complexity therefore inevitably influences the effort required to plan, monitor and control development activities. These mechanisms and conditions can be specified within development processes, but it appears reasonable that they also need to be tailored to suit the particular development characteristics, especially those relating to its complexity.
Such conditions are known as Critical Success Factors (CSFs) and describe 'essential areas of activity that must be performed well if you are to achieve the mission, objectives or project' [5]. However, it should be noted that, beyond describing their form and importance, the literature provides no satisfactory method or process for determining suitable CSFs.
1.2 Objectives
It will be proposed that complexity comprises a number of aspects and that, for the purpose of this dissertation, these can be organised into themes and criteria. These aspects will not be homogeneous across all the development activities, nor across the development lifecycle. This variation across a project can be seen as a complexity profile that will evolve throughout the development lifecycle. Outwardly similar developments, even within the same organisation, may not exhibit the same complexity characteristics. This can often be attributed to the presence (or lack) of constraints outside the development team's control and to the result of influential management decisions.

Consideration of complexity will have a number of goals:

- Identify the most influential Critical Success Factors (CSFs) to put in place the environmental conditions required for a successful outcome;
- Identify methods to place the responsibility for interventions where it is best exercised, through early identification; the establishment of environmental factors is often beyond the direct influence of the development team or its managers [6][7];
- Recognise how complexity evolves throughout the development and how some aspects increase as others diminish; the methods used to plan, monitor and control development activities will need to evolve accordingly throughout the development lifecycle;
- Describe how interventions can influence complexity and displace its effects elsewhere;
- Recognise and anticipate development risks identified through the consideration of complexity, both early in the development lifecycle and during detailed planning activities;
- Develop a framework for assigning interventions to defined risks more closely, at the appropriate level within the Work Breakdown Structure (WBS); the identification of interventions will be initiated by the recognition of CSFs and by risk planning;

Page 1 of 70
- Propose methods to guide where the most effort should be applied in planning, monitoring and controlling activities, through the identification of areas of maximum complexity. This is where the greatest number of residual risks is likely to lie, and it matters most where available resource is constrained and effort must therefore be prioritised, a factor of varying significance in all but the most exceptional of technical developments;
- Ensure the framework is scalable with regard both to the size of the development and to the level within the WBS to which it can be applied (closely related to the preceding item);
- Provide a framework to supplement planning techniques so as to better capture aspects of complexity, such as coupling behaviours between activities. This will also allow analysis to identify individual activities, or clusters of planned activities, that have the potential to influence the overall development outcome disproportionately and as such deserve enhanced management effort;
- Guide the selection and implementation of development status measures. These should be determined on the basis of the impact of particular areas of technical development and the key activities within those areas, and should be selected primarily for their use in controlling the development and initiating avoiding action, not merely for the purpose of reporting status.
An underlying theme of the framework will be that the activities it directly influences (planning and monitoring), and indirectly influences (controlling), are chosen to balance risk and benefit against the effort involved. It should complement current techniques, and absolutely not conflict with them; the aim is to allow their use with a minimum of additional resource overhead. It is recognised that the framework will not be implemented in isolation from existing project management and systems engineering tools and techniques.

A secondary objective is to allow project and process complexity to be understood through practical usage of the techniques, achieved by feeding learning back into the process to improve it and to develop better responses for future system developments.

There are often latent issues within system development that only manifest themselves through delays, cost overruns or quality issues. This is problematic for several reasons. The intervention required is generally greater the later it is left. As the development tempo intensifies, the effort required to change the process and organisation will inevitably draw on the very same resources as those developing the system itself, and the disruption and additional uncertainty of making these changes will exacerbate the impact of the original latent issue. Indeed, the scope of such a change can be seen as a project in its own right [8] and is as undesirable as it is unnecessary.

It follows that there is great benefit in balancing the effort spent on risk management against that spent on issue management. This balance is often forgotten, especially where there is insufficient substantiating data to support the mitigation of risks or to support change. The focus instead falls on the constant management of issues arising from poor risk planning, commonly known in project management as 'firefighting' [9], leading to no long-term sustained improvement in performance.
A goal of the framework is to provide indicators strong enough to prompt this early management action. Many of the principles in the management of system-related risks are equally applicable to project risks. This can be illustrated by an annotated Bow Tie diagram [10][11], as shown in Figure 1. The role of complexity management is both to identify the threats on the left-hand side that are brought about by complexity and to determine effective methods for controlling them. Selecting CSFs should be followed by an understanding of residual risks and of ways of providing an early warning of their realisation. A semi-formal method for the identification of CSFs within a framework will encourage their wider use and raises the possibility of their use at development and project review points, such as design reviews and stage gates. An objective should be the early identification and best possible assignment of responsibility for the mitigation of development risks, not only across the breadth of the project but to a sufficiently senior level of management [12].
Heuristic techniques will be chosen and developed as a means of guiding decision making and providing inputs into traditional project management and systems engineering processes. These will anticipate the areas within system development that require the most effort and where effort will bring about the greatest impact. This will provide multiple Plan-Do-Check-Act (PDCA) cycles, an iterative four-step management method used to control and continually improve processes and product development [13]. Also known as the Deming Cycle, it is equally applicable within the development process. In this application it will be used to influence the development environment, processes and planning for complex system development. The techniques described will use a mix of decomposition and reductionist methods, along with a heuristic approach, to optimise the resource usage required both by the framework itself and by any additional development effort that results from its application.
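The PDCA cycle described above can be sketched as a simple control loop over development metrics. The sketch below is illustrative only: the metric names, the 10% tolerance band and the data shapes are hypothetical examples of the author's own devising, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class PdcaCycle:
    """One Plan-Do-Check-Act iteration over a set of development metrics."""
    targets: dict                                # Plan: metric -> planned value
    actuals: dict = field(default_factory=dict)

    def do(self, observed: dict) -> None:
        """Do: record observed metric values from the development activities."""
        self.actuals.update(observed)

    def check(self) -> dict:
        """Check: compare actuals against targets; return the deviations."""
        return {m: self.actuals.get(m, 0) - t for m, t in self.targets.items()}

    def act(self, tolerance: float = 0.1) -> list:
        """Act: flag metrics whose deviation exceeds the tolerance band,
        prompting a corrective intervention."""
        return [m for m, d in self.check().items()
                if abs(d) > tolerance * abs(self.targets[m])]

# Hypothetical metrics: 120 requirements planned to be agreed, 10 open defects.
cycle = PdcaCycle(targets={"requirements_agreed": 120, "defects_open": 10})
cycle.do({"requirements_agreed": 95, "defects_open": 25})
deviating = cycle.act()   # both metrics fall outside the 10% band
```

In a real application the 'Act' step would trigger the interventions discussed later, rather than simply listing the deviating metrics; the point of the sketch is the closed feedback loop.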
Figure 1. Annotated Bow Tie diagram [11].
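The Bow Tie structure can also be captured as a simple data model, with threats and preventive barriers on the left-hand side of the top event and recovery barriers and consequences on the right. This is a minimal sketch under my own naming assumptions; the class, its fields and the example strings are hypothetical and are not drawn from the dissertation's risk register.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BowTie:
    """Illustrative Bow Tie record for one project risk."""
    top_event: str
    threats: List[str] = field(default_factory=list)             # left-hand side
    preventive_barriers: List[str] = field(default_factory=list)
    recovery_barriers: List[str] = field(default_factory=list)
    consequences: List[str] = field(default_factory=list)        # right-hand side

    def uncontrolled_threats(self) -> List[str]:
        """Threats with no preventive barrier yet assigned (simplistic:
        one barrier per threat, matched by position)."""
        return self.threats[len(self.preventive_barriers):]

risk = BowTie(
    top_event="Major unanticipated design change",
    threats=["ambiguous requirements", "high activity coupling"],
    preventive_barriers=["early requirements review"],
    consequences=["schedule overrun", "cost overrun"])
gaps = risk.uncontrolled_threats()   # one threat still lacks a barrier
```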
Heuristic techniques are well suited to the problems associated with planning and executing a complex system development, and they work well alongside the iterative development techniques generally advocated for complex systems. There are many variables in play within complex technical development; while optimal solutions are achievable for a particular aspect, optimisation across all of these often competing aspects is very difficult within current methodologies. An important concept is that of feeding learning back into the ongoing development, or into subsequent developments, to improve future applications of the framework. The determination of CSFs or risks should include the intended or optimal outcome, which can be compared against actual outcomes during the development lifecycle so that the efficacy of the CSFs and their implementation can be assessed. This allows improvements to be made in a timely fashion for the benefit of the development. It also avoids the unsatisfactory analysis of lessons learned, often at the very end of the project when personnel drift away and the motivation to hold a review wanes [14]. A glossary of terms and acronyms used throughout this dissertation can be found in Appendix A.
1.3 An understanding of complexity
Complexity must be understood before it can be modelled. It has previously been described and decomposed in a large number of ways, both within the domain of engineering and in many other overlapping fields of research; examples include the Strategic Highway Research Program [15] and the Helmsman Institute [16]. For the purpose of this framework it is important for complexity to be represented as simply as possible, while covering all the necessary aspects as effectively as possible. This may appear counterintuitive, but it is vital if the principles of the framework are to be applied in practice. This section will summarise research on the nature of complexity and will develop it for use in a complexity management framework. A more detailed explanation of the properties of complexity can be found within the literature survey [4].
Definitions of complexity highlight its highly structured nature [17][18], which encompasses many interactions [19] and a large array of possible configurations [20][21]. Other characteristics include the absence of any simple relationship between the behaviour of the overall structure and that of its individual parts, and the 'response' of system elements to changes in other interrelated system elements [20][21][22]. Together these aspects make the overall structure difficult to understand [23]. It is noted that uncertainty is often omitted from definitions of complexity even though it is intertwined with several of the aforementioned characteristics [24], such as configuration variation. This project proposes that ambiguity be considered [25] as a distinct and separate characteristic, alongside uncertainty. Ambiguity is a common property of system development, caused by the absence or unavailability of reliable information and resulting in the formation of assumptions. It is a dominant factor in the early stages before requirements have been agreed, and also during development phases before the coupling of large numbers of activities is fully understood. Assumptions cannot be completely validated until later in the development, and only after the ambiguity has been addressed. An understanding of how the individual characteristics of complexity interact with each other will be pivotal to the determination of CSFs. Further dimensions of complexity will be elaborated upon in Section 2.
1.4 The importance of considering the organisation
The main two components of a system development, apart from the actual system being developed, are the development process and the development organisation. These can be considered to be within the direct control of the management team responsible for system development. Additionally, there will be a number of external interfaces and constraints that will be outside the direct control of the system development management team. These may be from within the actual project itself, such as overall project budget or resource availability, or entirely outside of the project. Examples of the latter may be stakeholders, regulators and existing adjacent operating systems.

Figure 2. The British Computer Society's complexity model [26].
Figure 3. Type of relationship between engineering and project management teams (adapted from SEBoK [27]).
An understanding of these boundaries will be important in assigning responsibility for interventions. Figure 2 shows the British Computer Society complexity model with tiered levels of external interface. This is a simplified representation and does not show how these tiers interact, but it is nevertheless useful in categorising project influences. Specifically, the model includes the 'macro-environment', which encompasses the interfaces furthest from the control of the development team, such as market conditions, legislation and regulatory frameworks. The 'micro-environment' relates to the customer and stakeholders and to how requirements are captured and traded off [26]. The 'organisational environment' and the 'project environment' relate to the system development and the resource and process constraints imposed on it. The organisational relationship
between system engineering and the project environment is also an important consideration, and something which this model omits. Figure 3 shows three distinct project organisational types from the viewpoint of the technical development, adapted from the Systems Engineering Body of Knowledge (SEBoK) [27]. These organisational types strongly influence what is and what is not within the direct control of the system engineering team. The definition of the inputs and outputs between the project and system development organisations will allow a better understanding of where the responsibility for putting in place and implementing CSFs and risk mitigations resides. These interfaces should be defined within such documents as the Systems Engineering Management Plan [28] and the Project Management Plan [29].

In the first diagram, organisational behaviour is typified by transactional relationships. Engineering may be contracted out or undertaken by a different organisation, possibly located elsewhere. There may be more than one engineering organisation, with complexity increasing according to the number of interfaces. The project management organisation is responsible for managing the outputs only, and there is a strong emphasis on functional organisational structures [30].

In the second diagram, the overlap in responsibilities and accountabilities can also result in a form of complexity. There is less emphasis on transactional relationships, and the project management team has a greater influence over the management of engineering activities. The degree of overlap will affect this relationship.

The third diagram could represent a matrix-type structure. The project management team has overall control and the engineering management team has the least scope to determine how the development will be managed. The type of relationship will thus affect both the scope and the method of managing complexity.
In the first example, systems engineering is entirely self-sufficient in terms of contractual complexity; in the last example this is far less likely. The treatment of complexity should therefore be amended accordingly.
1.5 Existing management frameworks
There has been a large volume of literature on the subject of managing projects and technical development activities based on a blend of theory and practical experience. The framework presented in this project will take the existing management frameworks and models, many discussed in the literature survey [4], and will attempt to reconcile inconsistencies in approach and content. This section will both consolidate the discussion in the literature survey [4] and introduce concepts that have been found subsequently. The predominant systems engineering model is the ‘V-model’ as shown in Figure 4. It does not specifically address complexity but has some concepts that will be applied within the developed framework. Specifically, its definition of the development lifecycle and requirements-centric view is useful. The satisfaction of requirements is a convenient way of looking at complexity and the monitoring and control of requirements should form an integral part of the way that a complex system is managed. The V-model represents a typical idealised development lifecycle, so the phase descriptions within this model will be used to profile complexity. Of the system engineering and project management methodologies that were previously discussed in the literature survey [4] the Strategic Highway Research Program’s 5DPM was of particular interest. This methodology is described in some detail in both their ‘Guide to Project Management Strategies for Complex Projects’ [31] and also in ‘Managing Mega-Project Complexity in Five Dimensions’ [31]. Rather than considering the usual three project dimensions, i.e. the iron triangle1 [32], it 1
Figure 4. The V-model [26].
¹ The ‘Iron Triangle’ describes the three related constraints of scope, budget, and schedule [32].
introduces the concepts of ‘financing’ and ‘context’, as shown in Figure 5. 5DPM also considers CSFs from the perspective of complexity. Furthermore, this is done within several feedback cycles. Together these represent considerable synergy with many of the concepts that the author previously felt were worthy of consideration. Finance becomes of utmost consideration as both the overall magnitude of the development and the risk increase. This is exemplified by the difficulties experienced in securing investment for the UK’s nuclear new-build programme [32]. For the purposes of technical development, it is an imposed constraint by virtue of being outside the direct control of the management and will be treated as such in the model. The fifth constraint of the 5DPM model is that of context, describing the project environment, including potential constraints and interfaces. The process can be decomposed as follows:
1. Review project factors in each of the five areas;
2. Identify and prioritise complexity factors;
3. Develop the 5DPM complexity map;
4. Define CSFs, with sub-process steps such as assemble the project team, select project arrangements, and prepare an early cost model and finance plan;
5. Develop a project action plan to address resource issues;
6. Re-evaluate the complexity map on commencement of the project.

Figure 5. Five dimensional project management model [15].

5DPM describes reviews at defined intervals with an iteration of the above steps. This would most naturally align with a Stage Gate type governance structure [29], but it could be envisaged that interim reviews would be undertaken on a periodic basis if stage gates were deemed too far apart for effective control. Overall the model does not try to model complexity itself but rather attempts to model its component parts (within the five dimensions) and manage them to better manage a complex project environment.
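The iterative 5DPM review cycle described above can be sketched in code. This is an illustrative sketch only, not part of 5DPM itself: the dimension names follow the model, but the numeric scores, the stage-gate names and the assumption that a CSF reduces the top-ranked dimension by one point are all invented for the example.

```python
# Illustrative sketch of the 5DPM review cycle (scores and the CSF
# effect model are hypothetical, not drawn from the 5DPM guides).

DIMENSIONS = ["cost", "schedule", "technical", "financing", "context"]

def build_complexity_map(scores):
    """Return dimensions ranked most-complex first (the '5DPM map')."""
    return sorted(scores, key=scores.get, reverse=True)

def review_cycle(scores, define_csfs, stage_gates):
    """Run one complexity review per stage gate, re-evaluating the map."""
    plans = []
    for gate in stage_gates:
        comp_map = build_complexity_map(scores)
        csfs = define_csfs(comp_map)   # e.g. team, arrangements, finance plan
        plans.append((gate, comp_map, csfs))
        # Re-evaluate: assume the CSFs reduce the top-ranked dimension.
        scores[comp_map[0]] = max(1, scores[comp_map[0]] - 1)
    return plans

# Hypothetical 1-5 scores for each dimension:
scores = {"cost": 3, "schedule": 4, "technical": 5, "financing": 2, "context": 3}
plans = review_cycle(scores, lambda m: [f"CSF for {m[0]}"], ["gate-1", "gate-2"])
```

At the first gate the ‘technical’ dimension tops the map and attracts a CSF; after re-evaluation, ‘schedule’ dominates at the second gate, mirroring the periodic re-prioritisation the process prescribes.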
By focussing on these five dimensions the resulting assessments may provide a superficial view of project management orientated aspects only. Despite this there is considerable merit in the high-level approach and synergy with the principles of the PDCA Cycle [4]. The Helmsman Institute complexity assessment [16] is advocated by the International Centre for Complex Project Management [34]. This again has five areas of general consideration: Context Complexity; People Complexity; Ambiguity; Technical Challenge; and Project Management Complexity. Within these categories there are specific factors. Those solely relating to technical development are: Integration Complexity; System Development Complexity; and Impact on Infrastructure. Each of the categories is in turn considered against a large number of factors relating to such concepts as stakeholder numbers and alignment, uncertainty and abstraction. Importantly, this framework introduces the concept of ambiguity, as do Pich et al [25] and McGowan et al [35]. In each, ambiguity is seen as a distinctly separate, though closely coupled, property to that of uncertainty. The consideration of complexity is of interest within the wider domain of business in general. The concept of VUCA (volatility, uncertainty, complexity and ambiguity) [36], as shown in Figure 6, considers many of the previously discussed themes. Volatility can be viewed as the result of one or the product of several complexity characteristics. Traits attributable to volatility include a general lack of understanding of the likely impacts of an event
along with the impacts being ‘unexpected’ and ‘unstable’. Here volatility is described as a factor in its own right and complexity is described alongside the others rather than being composed of them. Nevertheless, the presentation of an unpredictable business environment in terms of a limited number of characteristics is useful. Much of the literature references complexity without sufficiently breaking it down into its component parts. Classical definitions of the components of complexity were described in detail within the literature survey and are as follows [4][37]:
1. ‘Kolmogorov-Chaitin complexity’, also known as Program-size Complexity [38];
2. ‘Nonlinearity’;
3. ‘Emergent Complexity’.

Figure 6. The VUCA framework [36].
Essentially the first component is the number of interfaces, relating somewhat to the development magnitude but also to its nature. A large technical development may contain duplicate tasks and low interdependency, thus limiting its Program-size Complexity, while a comparably smaller technical development may conversely have many interdependencies and highly intricate activities. The latter two components collectively can be closely aligned with the previously described volatility within the VUCA framework: a relatively small change results in far-reaching or disproportionately large impacts. Expanding the definition of emergence for the benefit of its use in an assessment, we may say that it has a number of properties [39]:
1. Radical novelty – new and unanticipated properties emerge from each interaction;
2. Coherence – though unexpected, the new properties are consistent with rules and behaviours;
3. Wholeness – emergence creates a new system that is greater than the sum of its parts;
4. Dynamic – changes from emergence will continue to evolve as long as there is emergence;
5. Downward causation – the system as a whole dictates the behaviour of its parts as well as the system being influenced by the interaction of its parts.
Emergence may be caused by two common dynamics, either separately or in conjunction:
1. No one is in charge – thinking in terms of a technical development organisation, this could be characterised by a network-centric organisation with a decentralised structure;
2. Simple rules engender complex behaviour – using the example of an organisation, this may be a large hierarchical organisation with many functions.
This section has described the decomposition and treatment of complexity from varying perspectives. The development of the 5DPM process supports the view that simple consideration of the ‘iron triangle’ is insufficient. It also reinforces the importance of complexity and the validity of an iterative approach. However, its treatment of complexity is overly simplistic.
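The Kolmogorov-Chaitin (program-size) complexity of a description is uncomputable in general, but it is commonly approximated in practice by the length of a compressed encoding. The sketch below, with invented task descriptions, illustrates the point made above: a large development of duplicated, low-interdependency tasks can have a lower program-size complexity than a smaller but intricately interdependent one.

```python
# Sketch: approximating program-size complexity by compressed length.
# The task descriptions are invented purely for illustration.
import zlib

def approx_program_size(description: str) -> int:
    """Compressed length as a practical upper bound on
    Kolmogorov-Chaitin (program-size) complexity."""
    return len(zlib.compress(description.encode("utf-8"), 9))

# A large development made of duplicated, low-interdependency tasks:
duplicated = "task A -> test A; " * 50
# A smaller development with many distinct interdependencies:
intricate = "; ".join(f"task {i} -> task {(i * 7) % 20}" for i in range(20))

# The duplicated description is far longer in raw terms, yet its
# repetition compresses well, so its approximate complexity is lower.
assert len(duplicated) > len(intricate)
assert approx_program_size(duplicated) < approx_program_size(intricate)
```

The design choice here mirrors the text: magnitude alone does not determine program-size complexity; the minimum information needed to describe the development does.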
The complexity framework developed in this project will adapt the broad methodology of the 5DPM process, while proposing a more precise definition for development complexity using a combination of prevailing theory and practical application. Specifically, definitions of complexity outlined either in the literature survey or in this section will be re-categorised to fit within the model, maintaining consistency.
Complexity will be decomposed by applying it against the relevant themes within the technical development. It must also be considered that complexity is in the eye of the beholder. The observer will have experience and a perspective that will influence subjective analysis based on their ‘bounded rationality’ [40]. The assessment criteria will ultimately need to be qualified as much as possible to guide interpretation. A more detailed literature review of this area of research can be found in Appendix B.
1.6 Devising the Complexity Management Framework
This dissertation will define complexity in terms of technical development and combine a method of assessing complexity with a number of techniques to allow the planning, monitoring and controlling of complex technical developments. To summarise, this dissertation will²:
a. Identify a method of quantifying Technical Development Complexity;
b. Provide a semi-formal method for the identification of the pre-requisite pre-planning environmental factors, or so-called critical success factors (CSFs). The outcomes from this method will be influenced by an individual project’s characteristics, primarily in terms of its complexity;
c. Propose methods for planning technical development and modelling interdependencies using the Design Structure Matrix (DSM) method;
d. Provide Performance Measures that relate to the technical development and, where possible, to the assigned CSFs. It is preferable that these are totally objective, but it is accepted that there will be considerable subjectivity involved, which will need to be constrained by suitable guidance to enhance consistency. These performance measures will be both leading and lagging and will provide coverage of the development process biased towards the parts that are particularly significant for reasons either of criticality or of risk of deviation in terms of criteria such as cost, schedule, scope and quality. Coverage of the remaining parts of the development process will be at a lower degree of decomposition commensurate with criticality and risk;
e. Propose methods of identifying risks and how these influence the CSFs and the Performance Measures;
f. Demonstrate how all of these (a. to e.) can be used together in a Complexity Management Framework.
Together, items a. and b. can be thought of as pre-planning activities and together form a strategy for putting in place an environment conducive to the project. Items a. to e. are developed in more detail throughout this project, culminating in Section 7, which includes a framework map integrating all these items together.
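Item c. above proposes the Design Structure Matrix as the interdependency-modelling technique. A minimal sketch of the idea follows; the four tasks and their dependencies are hypothetical, chosen only to show how a DSM exposes feedback (rework) loops when marks appear above the diagonal.

```python
# Minimal DSM sketch with hypothetical tasks: dsm[i][j] == 1 means task i
# depends on an output of task j. In the standard DSM convention used here,
# marks above the diagonal represent feedback, i.e. rework risk if the
# tasks are executed in the listed order.

tasks = ["requirements", "architecture", "detail design", "verification"]
dsm = [
    [0, 0, 0, 1],  # requirements may be revised after verification (feedback)
    [1, 0, 0, 0],  # architecture depends on requirements
    [1, 1, 0, 0],  # detail design depends on requirements and architecture
    [0, 0, 1, 0],  # verification depends on detail design
]

def feedback_marks(dsm):
    """Return (i, j) pairs above the diagonal: dependencies on later tasks."""
    n = len(dsm)
    return [(i, j) for i in range(n) for j in range(n) if j > i and dsm[i][j]]

marks = feedback_marks(dsm)
# One feedback loop: requirements (0) depends on verification (3), so a
# verification finding can force requirements rework.
```

Sequencing and partitioning algorithms reorder the rows and columns to minimise such above-diagonal marks; this sketch only detects them.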
2. Identifying and quantifying project complexity

2.1. The complexity assessment matrix
The complexity concepts described above will be encompassed within a Complexity Assessment Matrix. The matrix will form the most important part of the Complexity Assessment, which in turn forms a part of the overall framework. The assessment process, and its outputs, are as follows:
1. Undertake assessment of the system development at a high level within the WBS;
2. Identify ‘hotspots’, both within the WBS and within the individual themes, and direct additional assessments where they are required. It should be noted that focussed assessments will not be possible in very early development stages due to the low level of definition and decomposition of the WBS;
3. Identify development risks and propose CSFs to counter significant levels of complexity. Record additional information such as:
a. Rationale;
b. Residual risks that remain once CSFs have been put in place;
c. Proposed outcomes that are to be brought about by the CSFs.
² The objectives described in the summary of the literature survey [4] will be adopted with a single change: the development of Object Based Modelling (OBM) will be omitted. DSM will be used alone as a flexible technique that can be used to model the development process. This of course does not preclude the use of techniques such as OBM in parallel with DSM. Further information on the use of OBM can be found in the paper by Warboys and Keane [41].
4. Reassess the project complexity profile with the CSFs in mind to ensure that no new significant unmitigated hotspots have been created. Revise the outputs of step 3 as appropriate;
5. Identify risks for inclusion within the risk register.
Any consideration of complexity needs to be accompanied by an adequate explanation of categorisations. The process will be applicable across the entire development lifecycle and should be scalable. It is to be expected that there will be overlap between definitions due to a combination of minor shortfalls in the model and observer perceptions. It is for this reason that the reasoning behind decisions should be recorded, to afford the best chance of consistency in future applications of the process. As discussed, complexity will be viewed from the perspective of satisfying requirements, whether technical or project management orientated, and will form a part of the framework. There will be interactions between complexity types and it is highly likely that the nature of the complexity will evolve over the development lifecycle. We shall call this the Complexity Profile. The complexity assessment will have the potential to become more detailed as the requirements are defined. It can be reasoned that the pre-requirements phase, capturing the ‘business mission’ [28], can only be assessed at a very high level, while detailed system requirements can be decomposed to much greater detail. It is important that the treatment of complexity can be targeted where it will represent the greatest value, as ambiguity reduces between the derivation of stakeholder requirements and the final validation of the system [28]. This can be achieved by considering assessment against either the development’s WBS or Product Breakdown Structure (PBS), depending upon which is used.
Lastly there should be a comparison between actual and intended outcomes and feedback of learning to improve decision making in both the current and future development processes. It should be borne in mind that the objective of the assessment of complexity is to inform future decision making in order to enhance the likelihood of success. Section 3 details the use of CSFs as an established technique to do this by reducing either the probability or impact of complexity related events. Both complexity assessment and the assigning of CSFs can be viewed as pre-planning activities and should be reviewed periodically for applicability and appropriateness.
2.1.1. Complexity themes

Using a combination of the methodologies described within the literature survey and Section 1.4, a typical technical development can be considered to comprise a number of themes. These are captured under the three broad headings of internal interfaces, external interfaces and system development. The themes are designed to provide breadth of coverage over all areas pertinent to system development. Some themes, such as stakeholders and regulatory interfaces, could have been combined, but it was felt that the identification of independent themes would prompt more detailed analysis in areas that are particularly influential. This also led to the apportioning of a third of the themes to the system development itself, an area underrepresented in other methodologies such as those from 5DPM and the Helmsman Institute [32]. This ensures that the assessment is more balanced overall, and specifically in areas such as satisfying requirements and completing scope as a part of technical development. An important consideration is the project environment within which the development is undertaken. There will be constraints imposed upon the technical development that will, to varying degrees, be outside the control of the development management. The organisation type, as discussed in Section 1.4, will be a significant factor, but others include the traditional business-level requirements of cost, schedule and scope [32]. Organisational culture and business-level governance, such as Stage Gate type processes [29], will also be influential in shaping the technical development. The investment constraints and funding profiles, as elevated to one of the 5DPM dimensions [31], must also be considered. The remaining themes used in the framework generally follow the commonly used themes of people, processes and technology [42]. These are used in business architecture and can be decomposed further to better suit the purposes of the complexity assessment.
People can be decomposed into the internal organisation, i.e. the development team, and those external to it. External elements include contracting organisations, regulatory interfaces and all other remaining stakeholder groups that contribute to the development. The properties of the development process are important in terms of detail and number of interfaces. Lastly, the technological aspect can be thought of as the nature of the technology that the system will comprise, as well as the integration of the technology (internal interfaces) and external interfaces, i.e. how the system integrates with other systems. The nine Complexity Themes used in the framework are listed below. These include high-level considerations that may impact the analysis.
1. Internal factors
a. (Project) environmental constraints – external factors (inputs and outputs only), schedule (such as pace and phased handover requirements), and the imposed funding profile;
b. Development process – how prescriptive the process is and how many interfaces it has;
c. Internal organisation – ranging from few disciplines within the same organisation to many, and the organisation type;
2. External factors
a. Contractual management – ranging from a few simple relationships to many relationships with differing contractual terms;
b. Stakeholders – relating to system definition and validation;
c. Regulatory interfaces – how many there are and how influential they are on the development;
3. System development
a. External (system) interfaces – the degree to which the system impacts or depends upon external systems (from none to part of a system of systems);
b. Technology – system requirements definition and verification relating to technology type (low to very high tech and low to highly novel);
c. Internal (system) interfaces – the level of internal integration required between subsystems, ranging from few simple relationships to many complicated relationships.
2.1.2. Complexity Criteria

Complexity Criteria can be assessed against each of these themes. The twin criteria of uncertainty and ambiguity will dominate the early phases of any project. Though they are often classically excluded from the definition of system complexity, they have such a profound effect on complexity management that they will be elevated in importance. Rather than being seen as sub-sets of complexity, they will sit alongside the other criteria. Uncertainty and ambiguity will naturally diminish under normal conditions as requirements are defined, verified and validated. Conversely, emergence and nonlinearity will tend to increase over time as requirements are progressively defined, implemented, verified and validated. They will reach their zenith as the system approaches handover, when even small changes will have a significant impact. The final criterion is that of Program-size Complexity. This is essentially the quantity of information that represents the system development. Techniques to manage this include the expenditure of resource and effort (personnel and/or computer tooling) and the use of modelling techniques. The system development Complexity Criteria are:
1. Uncertainty – the likelihood of unexpected events occurring, e.g. stakeholder requirements change significantly during early system development or assumptions are found to be incorrect;
2. Ambiguity – incompleteness of knowledge about functional variables [25], i.e. a lack of information upon which to make decisions. An example may be that stakeholder requirements are undefined in a particular area of system development, leading to assumptions being made or to delays in the development schedule while the stakeholder requirements are defined;
3. Emergence – how change to the system configuration leads to unexpected behaviours and interactions and to re-evaluation of derived system requirements. High Emergence is defined as unexpected
behaviours and interactions in many other components or sub-systems;
4. Non-linearity – the effect of a single small change to system requirements on other system requirements. A non-linear relationship is one where a change results in a disproportionate change in affected components and sub-systems. High Non-linearity is where a small change results in a large impact elsewhere;
5. Program-size Complexity – relates to the minimum amount of information required to describe a process, system or organisational requirements. It manifests itself in shortfalls in modelling and a generally low level of understanding of the system or its development, even when uncertainty and ambiguity are low. High Program-size Complexity requires a high level of effort to build and maintain a model. Examples where this may be manifested are requirements management, configuration management and scheduling.
In summary, uncertainty and ambiguity dominate early in the development lifecycle, when their major effect is a large number of low-impact changes. Later in the development lifecycle there are fewer, higher-impact changes, generally brought about by emergent and non-linear behaviours. Program-size Complexity impacts the ability to model, understand and control these changes, with higher levels of this criterion making schedules and requirements progressively more difficult to manage.
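The lifecycle evolution just summarised, uncertainty and ambiguity falling while emergence and non-linearity rise, is what the Complexity Profile captures. The sketch below illustrates it with invented phase names and scores; only the qualitative shape, not the numbers, reflects the text.

```python
# Illustrative Complexity Profile (all numbers invented): criteria scored
# 1-5 across hypothetical lifecycle phases, showing uncertainty/ambiguity
# falling and emergence/non-linearity rising as requirements are defined,
# implemented and verified.

phases = ["concept", "requirements", "design", "integration", "handover"]

profile = {
    "uncertainty":   [5, 4, 3, 2, 1],
    "ambiguity":     [5, 4, 2, 1, 1],
    "emergence":     [1, 2, 3, 4, 5],
    "non-linearity": [1, 1, 3, 4, 5],
}

def dominant_criteria(profile, phase_index):
    """Return the criteria at their maximum score in the given phase."""
    top = max(scores[phase_index] for scores in profile.values())
    return [c for c, scores in profile.items() if scores[phase_index] == top]

# Early on, uncertainty and ambiguity dominate; near handover the
# emergent and non-linear criteria do.
```

Tracking which criteria dominate each phase is one way such a profile could steer where monitoring effort is concentrated.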
2.1.3. Complexity Matrix

The Complexity Themes and Complexity Criteria can be presented in matrix form. A scoring system offering five choices, from very low to very high, has been chosen. Areas to be considered (‘considerations’) for each of the themes have been chosen from the literature survey [4] and Section 1.5, which drew on sources such as the Helmsman Institute [16] and the NTCP Framework [43]. Considerations provide guidance during application of the assessment and aid in the assessment of appropriate areas of the WBS against each theme. Risk has been omitted as a consideration from all of the themes, as it is best understood only after the complexity assessment has been undertaken. In this way the assessment differs from its peers, which tend to consider risk as an input into an assessment rather than an activity that is informed by an assessment’s results. Considerations can be seen against the themes within the Complexity Matrix, as shown in Figure 7. The most obvious considerations for Project Environmental Constraints are those of imposed cost, schedule and scope. The funding profile may constrain the development schedule, as well as overall cost, and introduce additional assumptions due to sub-optimal activity sequencing. Components of schedule include pace, where the development schedule may need to be accelerated to meet external milestones, and phased handover requirements. There may be additional activities and schedule implications due to project governance. Other constraints include those relating to procurement and contract usage, which may impose constraints on who contracts are let to and how contracted activities are managed. There may be stipulations with regard to the technology that is used, for example technologies developed in-house. Lastly, organisational culture and politics can have a dramatic effect in areas such as communication and decision making.
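The matrix and its five-point scale can be sketched as a simple data structure. The theme and criterion names below follow Sections 2.1.1 and 2.1.2, but the scores, and the choice of ‘high’ as the hotspot threshold, are invented for illustration.

```python
# Sketch of the Complexity Matrix: theme -> criterion -> score on the
# five-point scale. Scores here are hypothetical examples.

SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

matrix = {
    "regulatory interfaces": {"uncertainty": "high", "ambiguity": "very high",
                              "emergence": "low", "non-linearity": "low",
                              "program-size": "medium"},
    "technology":            {"uncertainty": "medium", "ambiguity": "medium",
                              "emergence": "high", "non-linearity": "very high",
                              "program-size": "high"},
}

def hotspots(matrix, threshold="high"):
    """Return (theme, criterion) cells scored at or above the threshold."""
    t = SCALE[threshold]
    return [(theme, crit) for theme, row in matrix.items()
            for crit, score in row.items() if SCALE[score] >= t]
```

Flagged cells would then direct the additional, more focussed assessments described in Section 2.1, rather than being an end in themselves.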
Tailoring of the development process to achieve the correct balance of governance and rigour against agility and flexibility is an important consideration, as is the facilitation of integration between the technical disciplines. The use of technology in areas such as requirements management, configuration control and safety case management will influence not only the development process but also the organisation. The organisation will strongly influence lines of communication. It is assumed that the fundamental type of organisation structure is decided early in a project lifecycle. It is generally a difficult undertaking to change it in its entirety, as it is often dictated by the parent organisation undertaking the project. However, it is expected that it may be influenced by the findings of the Complexity Assessment. Traditional hierarchical structures allow ease of governance, while decentralised network-type structures offer better flexibility [30]. Other aspects include the number of individual disciplines and their particular experience with regard to the development type, the technology and integration challenges.
Figure 7. Complexity Matrix.
The internal organisation will interact with contracted development organisations and will be required to manage the transfer of information (development inputs and outputs) and the interfaces between areas of scope to ensure there are no omissions or inconsistencies. The development organisation will be shaped by its place in the wider organisation and particularly by how it interfaces with the project organisation. This will dictate whether the responsibilities for areas such as resource allocation, contract procurement, cost estimation and scheduling are within the remit of the development organisation, and the level of autonomy it holds. Other factors include the geographical location of disciplines and other interfacing organisations. Contractual management will of course depend on how much of the development is actually contracted out versus activities that remain within the internal organisation. The reasons for contracting out parts of the technical development will vary, and include low internal resource or capability or the need for particularly specialist or technology-specific support. The nature of the relationship between the development organisation and contract organisations will be dependent on the composition of the project organisation. Responsibilities for actual contract management, including the processing of contract variations and scope changes, may sit outside the development organisation, which may manage technical interfaces only. Conversely, this may all be within the remit of the development organisation. Again, the division between scope undertaken by internal resource and that contracted out should be considered against the outputs of the Complexity Assessment.
The identification of significant areas of concern may initiate a reconsideration of the contracting strategy, or else prompt the establishment of measures to better manage it, such as rigorous pre-qualification of suppliers. This is an example of a CSF, a concept that was introduced in Section 1.1 and is subject to more discussion in Section 3. Stakeholders can be thought of as the direct source of all high-level requirements. They inevitably have a strong influence on requirements as these are derived in greater detail throughout the development lifecycle. Factors that will affect stakeholder management include the number and alignment of stakeholders. Non-alignment of stakeholders’ needs leads to trade-offs of requirements to gain the appropriate approvals. Large numbers of conflicting requirements will require careful management and introduce the risk of conflict and delays. Other factors include the location of, and availability of access to, stakeholders. Incomplete stakeholder identification may precipitate later changes to requirements and scope with potentially significant impact. Long development schedules can increase the likelihood of changes to important stakeholders, which can lead to changing requirements also. Regulatory bodies are important stakeholders which can have a tremendous impact on system requirements. Technical developments may have weak interfaces, with only one or two regulators, or strong interfaces, with many regulators. Changes in legislation and regulations may occur at any time due to events outside the control of the organisation, and regulatory regimes may vary greatly between geographical areas. The development of a system for use in several countries can introduce much complexity surrounding the management of potentially conflicting regulatory requirements. External interfaces are systems that interface with the system of interest.
There may be few interactions, of a purely transactional nature, with other systems, or there may be many interfaces. A system being developed as a part of a system of systems [28] will invoke the greatest management effort. Interfaces may not be properly understood or may evolve over time beyond the control of the development organisation. The Technology theme simply captures how ‘novel’ and ‘high-tech’ [43] the technology that has been chosen, as dictated by the requirements, is. Novelty and high technology levels, as described by Frank et al [43], generally require highly iterative development methodologies and high levels of expertise. Short development schedules using such technology require greater effort and specialised systems engineering techniques, and incur a disproportionate level of risk. System integration represents the ‘synthesis of a set of system elements into a realised system that satisfies system requirements, architecture and design’ [28]. There are likely to be many interfaces that need to be managed through integration, verification and validation activities. There are techniques to
manage this complexity, including early integration planning. Examples of other factors include the integration of system elements from other contracted organisations and the careful management of stakeholders to minimise ‘scope creep’ [44]. Examples of Complexity Matrices, completed for the case studies, can be found in Appendices O and Q.
2.2. How, where and when to assess
Projects invariably utilise a WBS [45] to organise activities prior to cost estimation and scheduling activities. The assessment considers each Complexity Theme against the five Complexity Criteria at each node of the WBS on a particular level. Scores are assigned (very low, low, medium, high and very high) at each intersection of the matrix. As the WBS increases in detail, so does the effort required to undertake the assessment. It may be justifiable to limit the assessment to relevant portions of the WBS or to choose a high level within the WBS to restrict the nodes to be considered. The complexity assessments should be undertaken against the portion of the WBS that describes the technical development aspects. The WBS, including that describing the technical development, will be subject to refinement over time. As such, any early assessment will be at a high level of the WBS and necessarily at a correspondingly low level of overall detail. There will be business-level requirements at the outset and other factors, such as generic organisational processes and the existing regulatory landscape. Additionally, early planning may make assumptions with regard to the organisation and contracting mechanisms. This pre-requirements early assessment will identify areas of potential concern to enable early planning through the identification of high-level CSFs.
Figure 8. Sample WBS for an unmanned aerial vehicle (UAV) [46].
As the project and the development progresses the WBS will be defined level by level with early development phases likely to receive the most detail. The example of a WBS shown in Figure 8 has technical development activities below several of the high-level nodes. In this example requirements definition would receive more attention than test and verification activities due to the dependencies between the two and the inherent difficulty in defining later activities because of this. Complexity assessments would be undertaken as a minimum at the end of development phases and would serve to inform the planning of subsequent phases and as an important input into Stage Gate type governance reviews. Long project phases may demand interim complexity assessments to catch any significant changes during the phase. Assessments would develop the earlier higher-level reviews to progressively lower levels of the WBS to identify hotspots of complexity within the development. This technique affords the opportunity to
disregard areas of low complexity and allow expenditure of effort where it is needed. For the purposes of this example WBS, ‘control software’ and ‘verification software’ have been identified as having high levels of complexity. This could prompt the assessment of the next level of the WBS below to determine the particular node of concern. This can be repeated as required and, in the most extreme application, until the lowest level of the WBS has been reached.
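The drill-down just described can be sketched as a simple recursion over the WBS tree: assess a node, stop if its complexity is low, and otherwise recurse into its children. The small WBS and the per-node scores below are hypothetical, loosely echoing the UAV example.

```python
# Sketch of the WBS drill-down. The tree, scores and the threshold of 4
# ('high' on the five-point scale) are hypothetical examples.

def drill_down(node, assess, threshold=4, path=()):
    """Return paths to hotspot WBS nodes, recursing only where needed."""
    score = assess(node["name"])          # 1 (very low) .. 5 (very high)
    here = path + (node["name"],)
    if score < threshold:
        return []                         # low complexity: disregard subtree
    found = [here]
    for child in node.get("children", []):
        found += drill_down(child, assess, threshold, here)
    return found

wbs = {"name": "air vehicle", "children": [
    {"name": "airframe"},
    {"name": "avionics", "children": [
        {"name": "control software"},
        {"name": "navigation"}]}]}

scores = {"air vehicle": 4, "airframe": 2, "avionics": 4,
          "control software": 5, "navigation": 2}

hot = drill_down(wbs, scores.get)
# 'airframe' and 'navigation' subtrees are never expanded, so assessment
# effort concentrates on the 'control software' branch.
```

Note the early stop: pruning low-complexity subtrees is what keeps the effort of the assessment proportionate as the WBS gains detail.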
2.3. Outputs from complexity assessment
Identification of complexity should be used to inform development planning, enabling the establishment of a conducive project environment. An established method is that of CSFs: environmental influences that will enhance the likelihood of success. They may reduce either the probability of the consequences of complexity or their impact. Both the rationale for, and the proposed outcome of, putting a CSF in place should be recorded. This exercise should also prompt the identification of development risks, some of which will be residual risks resulting from the CSFs put in place. Putting CSFs in place will itself influence the complexity profile. An example of this, developed later within the case study in Section 8, is the development of Boeing's Dreamliner. Radical changes in Boeing's procurement strategy were implemented at a very early stage of the project to improve schedule and costs. This changed the profile of complexity within the project's supply chain, introducing 'coordination risks', with well-publicised results [47]. Using the Complexity Themes, it may be that complexity within the internal organisation is merely transferred into contractual management. It is therefore important that the complexity assessment is repeated in areas where changes are proposed. Indeed, in the example of AREVA's design and construction of the Olkiluoto 3 nuclear plant, the issues were the reverse of the Dreamliner's: AREVA chose to embark on a first-of-a-kind project alone, without the necessary experience as 'architect and engineer', 'without experienced partners' and without having all the necessary competences [48]. Repeating the complexity assessment will identify any further areas of complexity that have resulted from the implementation of CSFs and will allow the impact of changes to strategy to be understood. Additional CSFs can then be put in place, or existing CSFs amended, as appropriate. The designation of CSFs is discussed further in Section 3.
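The transfer of complexity between themes, as in the Dreamliner example, can be made concrete with a toy before/after comparison. The theme names match the framework; the numeric scores and the effect of the procurement change are invented assumptions for illustration.

```python
# Illustrative only: repeating the assessment after a strategy change can show
# complexity being transferred between themes rather than removed.
before = {"internal organisation": 5, "contractual management": 2}

def apply_outsourcing(profile):
    """Model a procurement change: organisational complexity falls,
    but supply-chain coordination complexity rises (invented magnitudes)."""
    after = dict(profile)
    after["internal organisation"] -= 2
    after["contractual management"] += 2
    return after

after = apply_outsourcing(before)
changed = sorted(t for t in before if after[t] != before[t])
print(changed)  # both themes changed, so both areas should be reassessed
```

The point of the sketch is only that the delta between assessments identifies where CSFs must be revisited, not that complexity can be traded with simple arithmetic.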
2.4. Complexity profile and the interaction of complexity criteria
Complexity is influenced by many factors and is not a static property. It will evolve over the development lifecycle and its nature may also be transformed by imposed changes. It can be theorised that the Complexity Criteria will interact in specified ways and will follow individual general profiles. Each criterion applies equally to the modelling of the development process, through planning and scheduling, as it does to the definition of system elements and their interactions.

Uncertainty relates to the likelihood of change of development requirements and will be greatest at the beginning, when definition of requirements and scope is at its lowest. Naturally, as definition increases and the number of assumptions is reduced, uncertainty is also reduced. Uncertainty can relate to changes prompted from sources such as stakeholders, regulators, or rework due to iteration or errors. Uncertainty should reduce as the system is verified, and should diminish to its lowest level as the system approaches final validation. Changes, especially those later in the development, will reintroduce uncertainty.

Ambiguity is closely related to uncertainty and relates to the amount of information that is unknown. In the absence of verified information, assumptions must be made to enable aspects of the development to progress; without assumptions, other activities will be delayed. Ambiguity will reduce as requirements are defined but, as with uncertainty, it will be increased by the introduction of changes. Early definition activities will reduce both uncertainty and ambiguity together. Two methods of proceeding despite relatively low levels of definition, and the uncertainty and ambiguity that accompany it, are the inclusion of 'float' [49] (sometimes known as slack) within development schedules and of 'contingency' [50] within the cost plan; a further option is the inclusion of flexibility within the design to accommodate a range of design definition outcomes [51].

Emergence, as defined in Section 1.5, will increase throughout the development lifecycle as ambiguity is reduced. This property is invoked by the interaction of requirements, or by the introduction of change, in ways that cannot be anticipated. It specifically relates to the impact and the likelihood of emergent behaviours arising. The impact of emergence can be managed through the decoupling of development activities or system elements; examples are removing activities with likely emergent behaviours from the critical path of a development schedule, or controlling the interfaces between sub-systems through the choice of system architecture.

Non-linearity also increases as ambiguity is reduced. The impact of non-linearity can be managed by identifying such interfaces and introducing extra capacity in the affected system elements to manage potential future change. This should be undertaken on the basis of risk, and the extra capacity can either be built into the changed requirement, to prevent the change being propagated, or into the elements affected by the non-linear behaviours, to absorb the change with minimal impact. It is highly desirable to reduce uncertainty, and generally to control the incidence of change, as non-linearity and emergence increase.

Considering this from the point of view of requirements and scope, it can be reasoned that Program-size Complexity of the development will begin and end low. Requirements will start as those necessary for the business or operational need and will be relatively few and at a low level of detail. Requirement Program-size Complexity will rise as requirements are defined, increasing in both number and detail. Conversely, this property will reduce as requirements are verified, until the business and operational level requirements are finally validated. Scope is similarly represented via the WBS and activity scheduling. Again, these begin at a low level of detail until the development is fully defined and planned. As activities (scope) are completed, the Program-size Complexity again reduces as the number of activities to complete, and their interactions, reduces. This is broadly in line with organisational effort [56], as can be seen in Figure 9 (Level of organisational effort over time [52]). Changes will increase Program-size Complexity, an example being previously verified requirements needing to be reanalysed and adapted to accommodate a late change. Program-size Complexity is managed through the application of organisational or computing resource and/or modelling techniques. The magnitude and impact of Program-size Complexity over the design definition phases is shown in Figure 10 (The changing nature of the project process [25]). Pich et al propose that not only does the complexity of information increase over time, but that the relative impact of information content on outcomes diminishes over time. This assertion chimes with INCOSE, who propose that approximately 70% of lifecycle costs are committed by the end of concept design, rising to around 95% as development proceeds [28].

Figure 11 suggests how the profile of a typical technical development may be represented over its lifecycle, with an indicative measure of the magnitude of each Complexity Criterion plotted against time and the development phases also shown (concept of operations; high-level requirements; detailed requirements; high-level design; detailed design; implementation; integration and testing; sub-system verification; system verification; operations and maintenance). The first graph shows no major changes, while the second shows a major change brought about during sub-system verification, manifested by an increase in Program-size Complexity, uncertainty and ambiguity. Development phasing has been taken from INCOSE as per Figure 4 on page 5.

Figure 11. Complexity Profiles over the development lifecycle with no major changes (top) and with an incidence of major change during sub-system verification (bottom).
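The shape of the profiles sketched in Figure 11 can be mimicked with a toy model: uncertainty falls as definition proceeds; Program-size Complexity rises to a mid-lifecycle peak and then falls; and a major change late in the lifecycle pushes both back up. The phase names, curve shapes and magnitudes below are illustrative assumptions, not measured data from the dissertation.

```python
# Toy model of indicative complexity profiles over a simplified lifecycle.
PHASES = ["concept", "requirements", "design", "implementation",
          "verification", "validation"]

def profiles(change_at=None):
    uncertainty, program_size = [], []
    n = len(PHASES)
    for i, phase in enumerate(PHASES):
        u = (n - 1 - i) / (n - 1)          # falls from 1.0 to 0.0
        p = 1 - abs(2 * i / (n - 1) - 1)   # rises to a mid-lifecycle peak
        if change_at is not None and phase == change_at:
            u = min(1.0, u + 0.5)          # a change re-introduces uncertainty
            p = min(1.0, p + 0.3)          # and adds rework activities
        uncertainty.append(round(u, 2))
        program_size.append(round(p, 2))
    return uncertainty, program_size

u, p = profiles(change_at="verification")
print(u[4] > u[3])  # True: the late change reverses the downward trend
```

Even this crude model reproduces the qualitative point of the figure: a late change is disproportionately disruptive because it arrives when the downward trends in uncertainty and activity count would otherwise be well established.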
Such late changes present issues, as there are now additional activities to manage and potentially a large number of previously verified system elements need either to be completely re-evaluated or re-verified against the new requirements. Uncertainty and ambiguity are re-introduced into system elements that had previously been defined and built or constructed. The impact of non-linearity and emergence is high due to the timing of the change and the number and nature of system dependencies.
3. Critical Project Success Factors and their application in system development

Critical Success Factors were first proposed by D. Ronald Daniel in the 1960s, after which they were further developed by John F. Rockart of the Sloan School of Management [5]. Their application was primarily intended within the area of business strategy, but they have also been adopted in project management as success factors [45][53]. The intention is that the selection and implementation of CSFs should put in place an environment more conducive to a successful outcome.
3.1. Selection of success factors from literature
Two mistakes that should be avoided are confusing a CSF with a requirement and assigning the CSF at too high a level; the latter makes the CSF too general for useful application. Requirements are distinctly different from CSFs and, though their satisfaction is at least as important, their management is handled elsewhere. Examples include particularly important stakeholder requirements being associated with 'Measures of Effectiveness' (MoE) and system requirements with 'Key Performance Parameters' (KPP) of the system of interest [54]. These are discussed more fully in a later section as performance measurement metrics. A CSF that is described at too high a level across all activities can be viewed as a pre-requisite for any technical development, project or business endeavour. Examples of high-level CSFs are provided by the APM Body of Knowledge [45]:

- Defining clear goals and objectives;
- Maintaining a focus on business value;
- Implementing a proper governance structure;
- Ensuring senior management commitment;
- Providing timely and clear communication.

While these are obviously worthy objectives, they apply to every project that the author has worked on. Furthermore, they give no indication of how they might be judged as being satisfied. It is therefore important
that the CSFs do not merely represent best practice but apply to the demands of the particular technical development, as indicated by the complexity assessment. As such, the CSFs selected for inclusion in the framework will need to be supplemented with enough detail for implementation and, if possible, measurement. The author has selected 141 CSFs for inclusion in the framework from a variety of sources, including APM [7], Pinto and Slevin [52], Dvir et al [55], Chow and Cao [56], Ragatz et al [57], Koutsikouri and Dainty [58] and Fortune and White [59]. These are in turn divided amongst the Complexity Themes; some CSFs are repeated in more than one theme. Ultimately, each will need to be developed with the appropriate detail if chosen in practice and adopted for use. For example, 'support from senior management' should include whose support should be in place, when, and what this support will achieve. It should be targeted against particular activities and should recognise the influence that development phasing may have on the nature of management support required. This who, what, how and why approach can be applied to most items on the list. The full list of CSFs for the framework has been grouped into the applicable Complexity Themes and can be found in Appendix C. There are a number of important concepts that should be borne in mind when applying CSFs. Merely stating development best practice will dilute the effectiveness of the identification of CSFs; instead, only particular areas of the process should be subject to analysis. As such, general process improvement should be addressed elsewhere. In the spirit of the Deming cycle, and the resulting continual improvement, both the proposed and the actual effect of each CSF should be recorded.
This will enable the effectiveness of individual CSFs to be evaluated and allow them to be catalogued within a toolbox of CSFs for future use. To facilitate the analysis of the effectiveness of CSFs, they should have performance measures associated with them where possible.
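The who/what/how/why detail described above, together with the proposed-versus-actual effect demanded by the Deming cycle, amounts to a small record per CSF. A minimal sketch follows; the field names and the example values are assumptions for illustration, not the framework's prescribed schema.

```python
# Sketch of recording a CSF with enough detail for implementation and
# measurement. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CSF:
    name: str
    theme: str               # the Complexity Theme it addresses
    who: str                 # whose action or support is required
    what: str                # what the CSF should achieve, and when
    rationale: str           # why it was selected (from the assessment)
    measure: str             # performance measure, where one can be defined
    proposed_effect: str     # recorded now ...
    actual_effect: str = ""  # ... and compared after the fact (Deming cycle)

csf = CSF(
    name="Support from senior management",
    theme="Organisational",
    who="Engineering director",                              # invented example
    what="Chair design reviews for safety-critical packages",
    rationale="High emergence scores on control software",
    measure="Proportion of reviews chaired, per quarter",
    proposed_effect="Faster resolution of cross-team design decisions",
)
print(csf.name)
```

Keeping `proposed_effect` and `actual_effect` side by side is what later allows the toolbox of CSFs to be ranked by demonstrated, rather than assumed, effectiveness.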
3.2. Verification of success factors through questionnaire
Potential CSFs identified in the literature were evaluated for their potential utility in the framework via a questionnaire, using the criterion of perceived impact on the likelihood of project success. The questionnaire was posted online using the SurveyMonkey application [60], and a total of 122 responses were achieved through requests via email and LinkedIn. An overview showing the source of replies can be seen in Appendix D. Respondents were a varied mixture of engineering and project management professionals. They were asked to rank the CSFs listed in Appendix C according to their potential influence on a project, using five categories from very low to very high. Any CSFs not considered relevant or without an impact were removed from the list as non-applicable. The questionnaire also gave respondents the opportunity to identify other CSFs that the literature survey and subsequent analysis had missed. The questionnaire and explanatory text are contained within Appendix E, and the unprocessed results, per respondent, in Appendix R. The CSFs were then processed and categorised in Appendix F according to their scores:

- 4.0 and above: shaded in green (high to very high influence);
- 3.5 and above: shaded in yellow (upper scoring of medium to high influence);
- 3.0 and above: shaded in orange (lower scoring of medium to high influence);
- Below 3.0: no shading and red type (low to medium influence).

These categories could then be filtered, sorted and compared across age, role description and industry. Further processing was then undertaken to aggregate related age and role groups. Data was disregarded where this was not possible for particular role and industry groups. This ensured that data groups were statistically significant, allowing the data from the selected groups to be meaningfully compared to the full dataset. Information relating to the size of the aggregated datasets can be found in Appendix G.
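The binning described above is a simple threshold scheme over each CSF's mean rating. A minimal sketch follows; the thresholds are taken from the text, while the example ratings are invented.

```python
# Sketch of the scoring used above: each CSF's ratings (1-5, with
# non-applicable responses omitted) are averaged and binned into the
# four shading categories.
def categorise(ratings):
    """ratings: list of 1-5 scores from respondents."""
    mean = sum(ratings) / len(ratings)
    if mean >= 4.0:
        return mean, "green (high to very high influence)"
    if mean >= 3.5:
        return mean, "yellow (upper medium to high influence)"
    if mean >= 3.0:
        return mean, "orange (lower medium to high influence)"
    return mean, "red (low to medium influence)"

mean, band = categorise([5, 5, 4, 4, 5, 3])  # invented ratings for one CSF
print(round(mean, 2), band)
```

Note that, because respondents could skip items, the denominator differs per CSF, which is why the results quoted later report "n of 115", "n of 118" and so on.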
3.2.1. Overview
The findings were analysed across the full dataset, and these results were then compared to the various categories sorted by age, role description and industry. In this way, trends within the sub-groups could be identified. It may be surmised that CSFs and performance measures will differ between industries, and as such that complexity will be managed differently across domains. Also, age and role description may skew complexity management due to the biases inherent in them: for example, project management professionals may place greater importance on established project management techniques such as Earned Value Management, while engineering personnel may place greater emphasis on the management of requirements. Identification of such biases will aid in the composition of the teams determining how complexity management is to be undertaken; known biases can then be predicted and defended against. Comparisons were made between the filtered data and the full dataset, first for the high to very high (shaded green) CSFs and then for the top three CSFs in each category. 83% of the respondents were from the age categories of '35 to 44' (25%), '45 to 54' (27%) and '55 to 64' (31%). The results were consolidated from seven into five age categories; consolidation of results outside of the popular categories formed 'up to 35' and 'over 65' categories. This ensured sample sizes were statistically significant. 'Up to 35' was the smallest data group, with just below 7% of responses. Role descriptions were similarly consolidated to ensure that samples for each category were sufficiently sized. The data from 12 respondents was disregarded as not fitting any one of the three aggregated high-level role descriptions that were chosen, reducing the dataset to 110. The groups derived from the full dataset were 'senior personnel' (41%), 'project personnel' (40%) and 'engineering personnel' (19%).
Industry categories with the largest sample sizes were considered for analysis; any groups below 10 in number were disregarded, resulting in a total of 83 respondents. Those chosen based on sample size were 'construction' (20%), 'decommissioning' (16%), 'defence' (16%), 'energy generation' (18%), 'oil and gas' (12%) and 'professional services' (18%). There is obviously some overlap between these industries and the other groups, particularly for 'construction' and 'professional services', which are generic terms that may relate to the undertaking of particular activities in any industry. The chosen data groups were generally similar in size, with 'oil and gas' the smallest at 10 responses. Further work, through a larger sample size, could increase the number of industries that can be analysed. This would also improve general confidence in the findings across age, role description and industry. Furthermore, biases could be more completely determined across individual role descriptions rather than the consolidated groups that were chosen.
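The aggregation rule described above can be sketched directly: groups with fewer than a minimum number of respondents are disregarded before any comparison is made. The 10-respondent floor is taken from the text; the per-industry counts below are inferred from the quoted percentages of the 83 retained respondents and are therefore approximate.

```python
# Sketch of the sample-size filter applied before comparing sub-datasets.
def usable_groups(counts, minimum=10):
    """Keep only groups large enough to compare meaningfully."""
    return {g: n for g, n in counts.items() if n >= minimum}

# Counts inferred from the percentages in the text; 'rail' and 'marine'
# are invented examples of groups small enough to be disregarded.
industry_counts = {"construction": 17, "decommissioning": 13, "defence": 13,
                   "energy generation": 15, "oil and gas": 10,
                   "professional services": 15, "rail": 6, "marine": 4}
kept = usable_groups(industry_counts)
print(len(kept), sum(kept.values()))  # 6 groups, 83 respondents retained
```

The same filter, with different thresholds, underlies the consolidation of the age and role-description groups described earlier.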
3.2.2. Full dataset

The summarised results, as discussed here, are contained within Appendix F. This section discusses the top three CSFs for each Complexity Theme and the trends that were identified by the author from this data. The analysis was across the data from all 122 respondents.

Imposed project constraints

- Clear realistic project objectives, with a mean score of 4.51 and 68 of 118 (58%) rating it very high;
- Adequate budget, with a mean score of 4.39 and 62 of 120 (52%) rating it very high;
- Composition of project team in terms of experience and capability, with a mean score of 4.38 and 57 of 120 (48%) rating it very high.

In general, planning and general business and governance processes appeared much less important than a good organisation, certain project management processes (change and risk management) and the overall viability of the development. 'Strong business case/sound basis for project' was notable for having the second-highest number of very high ratings but the equal tenth-highest number of low ratings amongst the 4.0-and-above CSFs, leaving it a relatively low seventh most influential overall. This demonstrates how the
perceived importance of a business case varies amongst respondents. Project management respondents generally assigned this very high importance, in line with the teachings of project management methodologies, while other groups gave it a much lower score. Planning-related CSFs were ranked as a high 'medium to high' influence or a low 'high to very high' influence, which was unexpected considering the relative importance of planning within project and engineering management. A project cancellation process was assigned a 'low to medium' influence and so should be discounted from the list of CSFs; evidently the cancellation of an ill-conceived or no-longer-required project was considered either not relevant or of low overall influence. None of the comments provided CSFs not already included here or within another Complexity Theme.

Technical development processes

- Critical activities are identified, with a mean score of 4.47 and 63 of 115 (55%) rating it very high;
- Clear realistic development objectives, with a mean score of 4.32 and 53 of 116 (46%) rating it very high;
- A well understood and mature design review process is in place, with a mean score of 4.27 and 56 of 118 (47%) rating it very high.

Unsurprisingly, the general trends of the first Complexity Theme were evident in this one. Identification of areas of risk and criticality ranked higher than general planning. Common Systems Engineering techniques such as 'test early, test often', 'modelling and prototyping' and 'standardisation and/or modularisation' were assigned a relatively low importance overall. Examples of additional comments include the use of 'state of the art' development processes and 'clearly defined and mature functional requirements'.
Organisational

- Good leadership, with a mean score of 4.62 and 72 of 115 (63%) rating it very high;
- Transparent definition of responsibilities, with a mean score of 4.26 and 51 of 115 (44%) rating it very high;
- Degree of collaboration, with a mean score of 4.26 and 47 of 114 (41%) rating it very high.

Leadership was by some way the most important influence. Co-location of teams was not thought to be important, which may reflect the global nature of modern projects and the availability of better technologies to allow collaboration. Surprisingly, 'competence in technology and technology management' and 'domain-specific know-how' were not considered 'high to very high' influences.

Contractual management

- Clearly understood contractual interfaces, with a mean score of 4.45 and 64 of 114 (56%) rating it very high;
- Good performance by suppliers/contractors/consultants, with a mean score of 4.43 and 58 of 114 (51%) rating it very high;
- Effective monitoring/control, with a mean score of 4.38 and 54 of 114 (47%) rating it very high.

The ranking of influences showed no unexpected trends; the highest ranked were in essence the principles of selecting a competent supplier, ensuring the contract is clearly understood and managing it well.

Stakeholder management

- Client/user acceptance, with a mean score of 4.58 and 74 of 113 (65%) rating it very high;
- Decisions are agreed and documented, with a mean score of 4.48 and 64 of 113 (57%) rating it very high;
- User/client involvement, with a mean score of 4.44 and 64 of 113 (57%) rating it very high.
Stakeholder management reinforced the need for early and effective communication. The representation of the full system lifecycle was not thought to be particularly important, and the involvement of operators ranked relatively low within the 'high to very high' category. The conclusion here is that the eventual decommissioning of a system is often of low concern during its design and implementation. A respondent's comment that may be worthy of further investigation is 'no informal communications under any circumstances and record all meetings'. The use of formal versus informal communication is an area that is rarely discussed. Informal communication implies frequent and relaxed use of telephone calls and email, which suggests trust and collaboration; the inverse is contractually driven, with an administrative burden and an impact on factors such as frequency and content.

External interface management

- Clear communication is established, with a mean score of 4.41 and 56 of 112 (50%) rating it very high;
- Clearly identified and understood external interfaces, with a mean score of 4.33 and 55 of 113 (49%) rating it very high;
- Defined process for managing external interfaces, with a mean score of 3.96 and 29 of 115 (26%) rating it very high.

This theme mirrored the need for structured and effective communication as discussed in the previous theme. Only two of the six CSFs were rated above 4.0.

Regulatory interface management

- Clearly identified and understood interfaces, with a mean score of 4.45 and 59 of 111 (53%) rating it very high;
- Clear lines of communication with regulators, with a mean score of 4.36 and 52 of 110 (47%) rating it very high;
- Good relationship with regulators, with a mean score of 4.28 and 48 of 111 (43%) rating it very high.
All categories within this theme were assigned relatively high scores, more so than other external interfaces. This is clearly down to the criticality of the regulator's role in many projects.

Technology development management

- Well-defined standards up front, with a mean score of 4.04 and 31 of 111 (28%) rating it very high;
- Proven/familiar technology, with a mean score of 4.01 and 38 of 111 (34%) rating it very high;
- Pursuing a design that is as simple as possible, with a mean score of 3.98 and 38 of 111 (34%) rating it very high.

The conclusion from the highest scoring CSFs within this theme was the avoidance of unnecessary complexity while using established and thoroughly documented design standards. This is often not possible, and many projects have elements of research and development within them. These CSFs were generally rated a lot lower than in the other themes, with approximately half the number of very highs assigned.

System integration management

- Well-defined standards up front, with a mean score of 4.17 and 37 of 112 (33%) rating it very high;
- Pursuing a design that is as simple as possible, with a mean score of 4.01 and 36 of 113 (32%) rating it very high;
- Proven/familiar technology, with a mean score of 3.98 and 30 of 111 (27%) rating it very high.
The scoring was similar to that within technology development management. Only two of the eight CSFs were rated above 4.0, and again the number of very highs assigned was around half that of other themes.
3.2.3. By age

Results filtered by age can be seen in Appendix H. Sample sizes for any one age group were at best a third of the size of the whole dataset, so instead of analysing absolute ratings, the top-rated CSFs for each theme were compared against the full dataset for consistency.

Imposed project constraints - The 'up to 35' age group generally scored lower, with far fewer 'high to very high' category CSFs; this could be an anomaly arising from the relatively low sample size. '35 to 44' was representative of the overall rankings. The '55 to 64' and 'above 64' age groups advocated 'strong project sponsor/champion' and 'competent and qualified project manager' respectively. Both these age groups rated 'strong business case/sound basis for project' highly, which perhaps reflects their position and seniority within their respective organisations, introducing a related bias.

Technical development processes - The 'up to 35' age group assigned more importance to planning in areas of criticality and the use of past experience, while the 'above 64' age group gave more importance to planning and responsiveness. Other age groups were broadly representative of the trends within the full dataset.

Organisational - Findings across all age groups were closely in agreement with the full dataset, especially near the top-ranked CSFs.

Contractual management - Findings across all age groups were closely in agreement with the full dataset, especially near the top-ranked CSFs, though the position of 'rigorous pre-qualification process' varied greatly across age ranges.

Stakeholder management - Findings across all age groups were closely in agreement with the full dataset, though the '35 to 44' age group assigned much greater importance to the managing of expectations and the representation of operators.
External interface management - Findings across all age groups were closely in agreement with the full dataset.

Regulatory interface management - Findings across all age groups were closely in agreement with the full dataset.

Technology development management - Findings across all age groups were closely in agreement with the full dataset, though the 'up to 35' age group assigned high importance to 'continuous improvement process for products'.

System integration management - This theme was very similar to technology development management and was again closely in agreement with the full dataset, with the 'up to 35' age group assigning high importance to 'continuous improvement process for products'.
3.2.4. By role

Results filtered by role can be seen in Appendix I. As with the by-age analysis, these sub-datasets were a fraction of the full dataset, at less than half its size, and were compared against the full dataset results.

Imposed project constraints - The findings were closely in agreement with the full dataset across the role descriptions, with the exception of an understandable bias towards a single CSF in each role. Senior personnel advocated a 'strong business case/sound basis for project'. Project personnel ranked most highly the use of a 'competent and qualified project manager'. 'Support from senior management' was the third most important CSF for engineering personnel, which perhaps indicates an importance learned from previous experience.

Technical development processes - Findings were closely in agreement with the full dataset across senior personnel and project personnel, but perhaps understandably engineering
personnel had a very different perspective. The following CSFs were not present in the other role descriptions; the first two were second and third most important, while the others descended in relative influence within the 'high to very high' category:

- Enhanced planning is applied against areas of criticality and uncertainty;
- Fast transfer of information;
- Test early, test often philosophy is used during development;
- Past experience of management methodologies and tools is available;
- Correct choice of management methodologies and tools;
- Trouble-shooting mechanisms in place.

Organisational - Findings across all role descriptions were closely in agreement with the full dataset, with the exception of the inclusion of 'composition of development team in terms of experience and capability' within engineering personnel.

Contractual management, Stakeholder management, External interface management and Regulatory interface management - Findings across all these themes and across all role descriptions were closely in agreement with the full dataset.

Technology development management - Findings across all role descriptions were closely in agreement with the full dataset, though engineering personnel rated the CSFs generally lower, with only 'test early, test often philosophy' achieving the highest ranking.

System integration management - Findings across all role descriptions were closely in agreement with the full dataset, though engineering personnel rated 'modelling and prototyping' in the top three, higher than the other role descriptions.
3.2.5. By industry

Results filtered by industry can be seen in Appendix J. Sub-datasets were again compared against the full dataset.

Imposed project constraints - The findings were closely in agreement with the full dataset across the industries, with a few notable exceptions within oil and gas. Perhaps as a reflection of an environment reliant on oil prices and subject to commercial pressures, 'effective change management' and 'political stability' ranked relatively highly.

Technical development processes - Construction differed significantly, with the following CSFs figuring in the rankings as 'high to very high', responsiveness being the third most important. This reflects the dynamic nature of a construction environment and the requirement to act quickly during the construction phase to minimise or avoid rework:

- Responsive and flexible process to meet client needs;
- Strong, appropriately detailed and realistic development plan kept up to date;
- Appropriate development planning technique has been chosen;
- Correct choice of management methodologies and tools.

Organisational - Energy generation elevated 're-use knowledge and experience from previous projects', which is a strong component of the nuclear industry following events such as Three Mile Island and Chernobyl [66][67]. Oil and gas elevated the 'co-location of teams', perhaps due to the international nature of many of its projects. Professional services included 'domain-specific know-how' and 'appropriate techniques to aid identification of organisational dependencies and interfaces'. Otherwise there was commonality between the industries in the most influential CSFs.

Contractual management - With some minor differences, findings across industries were closely in agreement with the full dataset.

Stakeholder management - With some minor differences, findings across industries were closely in agreement with the full dataset.
External interface management and Regulatory interface management - Findings across all industries were closely in agreement with the full dataset. Technology development management - Energy generation attributed some Systems Engineering type CSFs, which reflects the complex nature of many projects in this domain. These included ‘modelling and prototyping’ and the use of ‘system element maturity’. ‘Continuous improvement process for products’ was also ranked third, well above any other industry. System integration management - Findings across all industries were closely in agreement with the full dataset, though energy generation again rated ‘system element maturity’ highly.
3.2.6.
Conclusion
Analysing the datasets by age, role description and industry does not yield any particular trends, nor does it show any of the sub-datasets deviating significantly from the findings over the full dataset. The following are very high-level observations: The age groups ‘55 to 64’ and ‘above 64’ account for two thirds of all deviations from the full dataset; Particular biases were evident in the Complexity Theme ‘imposed project constraints’ within the ‘role description’ sub-datasets; The role description ‘engineering personnel’ accounts for half of all deviations from the full dataset; The industries of ‘energy generation’ and ‘oil and gas’ account for two thirds of all deviations from the full dataset; The Complexity Themes of ‘Imposed project constraints’ and ‘Technical development processes’ tended to deviate more across all sub-datasets than any of the other themes. From this it can be seen that particular biases should be guarded against when determining CSFs. A workshop approach should balance age groups and role descriptions to ensure that pertinent CSFs are not overlooked. Furthermore, particular industries may place more emphasis on some CSFs than others. The formation of a ranked list of CSFs will need to be tailored for its application. It is advised that adoption of the CSFs begin either from this list or, for a particular industry, from the sub-dataset filtered by the closest applicable industry type. The ranking of influence of CSFs could be further refined through subsequent iterations of the Complexity Management process, with stakeholders consulted before and after application.
4. System development planning techniques
4.1. Overview
An important aspect of the framework is that it will support and enhance planning activities. It has been recognised that planning-related objectives cannot be achieved in isolation from established planning tools and techniques. Complexity-influenced planning will need to be undertaken in parallel with current practice; indeed, maximum benefits can only be realised if it can provide readily used, value-adding inputs into these established planning activities. This overview discusses the shortfalls and benefits of current practice, the opportunities in adopting a suitable supplementary technique and the requirements that the technique must fulfil. A suitable technique called the Design Structure Matrix, also commonly known as DSM, will then be discussed, along with ways it can be integrated into current practice. Gantt charts [63] are the dominant method of project and system development planning. Their inherent advantage over other techniques is their relative simplicity, which allows them to be an effective communication tool. The use of Gantt charts also underpins project monitoring and reporting techniques such as Earned Value Management (EVM) [45][53]. This popularity has precipitated the development of advanced tooling, with packages such as Primavera providing a way of creating the plan and integrating it with other processes. Despite a number of shortfalls, they are a good way of managing the overall duration of a project.
There are, however, limitations in the technique’s use that are increasingly exposed as complexity increases. Although relationships between activities are represented, they are restricted to simple dependencies [64]. These dependencies describe how an activity influences the start (and occasionally the finish) of other activities, including the use of lag. The nature of these dependencies cannot be further explored using this planning technique alone. Thus the relationship between coupled activities is much simplified and likely to be misunderstood without further analysis. As complexity and the number of dependencies increase, the effort required in the production and management of the Gantt chart increases significantly. This leads to more and more assumptions and simplifications of the nature of dependencies, which can often result in the actual start and duration of activities progressively deviating from the plan as it proceeds. This is obviously a state of affairs to minimise, if not to avoid. It is unlikely that the dominance of Gantt charts in industry as the primary method of planning will change anytime soon. Moreover, they have served as an incredibly useful way of coordinating and representing activities across diverse disciplines and functions. With this in mind, other techniques should supplement or feed into, rather than replace, Gantt charts, and should provide a way of enhancing the planning where required. Planning all of the development activities across a project is a considerable undertaking which only grows as complexity increases. It may be surmised that there is a positive relationship between the effort that is required for effective planning and the number and potential impact of dependencies. Other factors that deserve consideration within planning include varying uncertainty and the number of ambiguity-related assumptions.
There is also an inverse relationship between the effort expended and the risk of deviation from the plan. The balance between effort and risk is an important consideration in any project where there are finite resources available. Such detailed planning has a high overhead, and in a complex project unfocussed planning can diffuse efforts, leading to essential dependencies being overlooked until it is too late. The direction of maximum effort to areas of risk should improve the likelihood of an overall successful outcome. Further to the earlier discussions within the survey of literature [4] and the objectives within Section 1.2, it would be sensible to use any enhanced planning techniques in areas identified as being either particularly complex, or moderately complex but with a high potential impact in the event of deviation from the plan. Using the principles of the Deming cycle, it is also beneficial to understand the impact of the use of additional planning techniques to allow their use to be developed and improved for later planning of development activities or future technical development. Lastly, there may be opportunities to reuse particular activity patterns or structures to reduce the future planning effort that needs to be expended. To summarise, it is proposed that detailed development activity planning: Prioritises enhanced planning in areas identified through complexity assessment; Determines where, and which, planning techniques are to be used as influenced by the choice of CSFs; Uses enhanced planning to supplement rather than replace traditional project planning techniques; Keeps a record of where system development planning has influenced the overall planning; Records and uses learning to assess the effectiveness of techniques and improve subsequent planning; Considers opportunities to develop the framework’s findings for re-use in subsequent projects.
4.2. Design Structure Matrix
4.2.1. Introduction
Design Structure Matrix (DSM) has been chosen to supplement traditional planning methods due to its high functionality and flexibility, allowing it to be used across several domains of planning. DSM is based on a simple graphical representation of the relationships between activities in a square N x N matrix [65], and has its origins in Graph Theory. It is also variously known as the N² diagram or coupling matrix [28]. It was developed by several independent parties, but most notably by Eppinger and Browning, who have refined the
technique, independently and in collaboration, through a number of papers and publications since the 1990s [66]. The nature of DSM notation conventions varies and should be designed to suit the particular application. For this reason, detailed descriptions will be omitted. For the purposes of this project, the DSM as developed by Eppinger and Browning will form the basis of future discussions. The IR/FAD convention (as opposed to the IC/FBD convention) will be adopted. Both variants are broadly similar, but the use of rows and columns is transposed [67]. Full and detailed descriptions of the technique can be found within their recent publication, ‘Design Structure Matrix Methods and Applications’ [67]. They are also associated with DSMweb.org, which aims to ‘promote and foster’ further development of DSM [68]. DSM is flexible enough to cater for any type of system and can be further adapted for a particular process and organisation. Furthermore, it is ‘highly compact, easily scalable’ [67] and can be applied where needed at the level of detail required. The low level of abstraction, the intuitive and graphical representation and the relative density of information that a DSM contains make it ideal for conveying important information, either as a standalone model or as a precursor for further analysis. It can provide system-level views of areas of heavy interaction [67], but for the purposes of this project it is also ideally suited for focussed views on particular aspects of the development.
4.2.2.
Design Structure Matrix principles
DSM is a highly flexible technique that can be applied to provide a view of an aspect of a technical development either independently or in combination with another aspect, and can be summarised as follows: Product architecture, i.e. the interactions of the system of interest itself; Organisation architecture, i.e. how the development organisation interacts; Process architecture, which describes the development process ranging from high-level to detailed planning; Multi-Domain Matrix, also known as MDM, which combines several of the above to show interactions between the elements of the various matrices. Process can be described as having ‘temporal flow’ architecture due to the inherent time-based interactions that occur, with product and organisation possessing ‘static architecture’ [67]. An MDM will be either temporal flow or static, depending on its component DSMs. We will primarily be interested in DSMs associated with process and organisational architecture, and in process/organisational MDMs, as tools to aid the planning of activities. This does not however preclude the use of product DSMs, or the inclusion of product architecture in an MDM, if the complexity assessment highlights an area of concern. The basic principles of the DSM are very simple and are as follows [67]: • The matrix elements being considered are represented along the diagonal of the matrix from upper left to bottom right; • Element names are shown on rows and columns, with the ordering of elements kept consistent on both; • Under the IR/FAD convention, an element’s inputs are read along its row and its outputs down its column, with feedback marks appearing above the diagonal; • Interactions between elements are shown in the matrix cells, with the diagonal cells left blank. A simple DSM, which merely acknowledges the existence of a relationship between inputs and outputs, is called a binary DSM.
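As an illustrative sketch, a binary DSM under the IR/FAD convention can be held as a simple matrix, with an activity’s inputs read along its row and feedback marks identified above the diagonal. The activity names and dependencies below are hypothetical, not taken from any case study:

```python
# Minimal binary DSM sketch (IR/FAD convention: inputs in rows,
# feedback marks above the diagonal). Illustrative data only.
activities = ["Requirements", "Design", "Safety analysis", "Test"]

# dsm[i][j] == 1 means activity i receives an input from activity j.
dsm = [
    [0, 0, 0, 0],   # Requirements: no inputs
    [1, 0, 1, 0],   # Design: inputs from Requirements and Safety analysis
    [1, 1, 0, 0],   # Safety analysis: inputs from Requirements and Design
    [0, 1, 0, 0],   # Test: input from Design
]

def inputs_of(i):
    """Names of the activities feeding activity i (read along row i)."""
    return [activities[j] for j in range(len(dsm)) if dsm[i][j]]

def feedback_marks():
    """Marks above the diagonal: inputs taken from activities planned later."""
    n = len(dsm)
    return [(activities[i], activities[j])
            for i in range(n) for j in range(i + 1, n) if dsm[i][j]]
```

Here the single mark above the diagonal shows that Design depends on an output of the later Safety analysis activity; since Safety analysis also depends on Design, the two are coupled.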
The DSM can however be further developed into a numerical DSM, including attributes such as importance, number of interactions and the impact or strength of interactions. Additional attributes can be linked to cells if required and stored elsewhere in a database. Whereas binary DSMs are qualitative, numerical DSMs can be designed to be highly quantitative. The process for creating and managing a DSM is relatively straightforward and follows a five-step process. Before this is begun, all conventions will need to be agreed, including the supplementary information to be
contained within numerical DSM models. Suggested additional data to be collected includes interaction strength, either as a single integer or as a combination of probability and impact. Later changes to notation, or to the scope of information to be collected and modelled, will be difficult to reconcile with previously completed DSMs, potentially leading to confusion and duplicated effort. The inherent limitation of the DSM is that it describes activities in terms of ‘edges’ rather than ‘nodes’ [67]. For example, a binary process architecture DSM cannot describe the duration of an activity. This can be overcome through the development of a numerical DSM that includes information such as duration or resource. The following sections will outline the essence of the DSM technique. A survey of literature relating to DSM, describing the technique in more detail, can be found in Appendix K.
4.2.3.
Creating and applying the organisational architecture DSM
The steps in the process for creating the organisational DSM are as follows: Decompose; Identify; Analyse; Display; Improve. Once the DSM is clustered it can be used to inform management decisions with regard to the composition of teams, geographical location and the types and scope of communication. The integration of development activities can be extremely challenging; the clustered DSM can be used to shape the formation of teams, or the clustering of teams, based on concentrations of interactions. The application of collaborative tools, such as databases, will be more appropriate for large groups, while meetings and informal methods are more effective for smaller groups. Lastly, the co-location of particular organisational elements may be beneficial.
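The ‘Improve’ step needs some objective for comparing candidate team clusterings. A minimal sketch of one such objective (the scoring rule is this author’s illustration, not a published DSM algorithm) is to count interactions captured inside clusters against those crossing cluster boundaries:

```python
def cluster_cost(dsm, clusters):
    """Score a candidate clustering of an organisational DSM: returns
    (inside, outside), the counts of interactions contained within
    clusters versus those crossing cluster boundaries. Illustrative only."""
    member = {e: c for c, group in enumerate(clusters) for e in group}
    inside = outside = 0
    n = len(dsm)
    for i in range(n):
        for j in range(n):
            if i != j and dsm[i][j]:
                if member[i] == member[j]:
                    inside += 1
                else:
                    outside += 1
    return inside, outside

# Hypothetical organisational DSM: four elements, two candidate teams.
org_dsm = [[0, 1, 1, 0],
           [1, 0, 0, 0],
           [0, 0, 0, 1],
           [0, 0, 0, 0]]
```

A clustering with more interactions inside and fewer outside suggests teams that can rely on meetings and informal communication, with collaborative tooling reserved for the cross-boundary interactions.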
4.2.4.
Creating and applying process architecture DSM
Process DSMs are created using the same five steps as are used for the organisational DSM. The level of decomposition will be dictated by the previous complexity assessment and assignment of CSFs. It will also be influenced by the availability of information during the particular phase of development. It is advisable that the elements are kept at a consistent level within the respective breakdown structures. If a model of activities already exists, this should be adopted as the planning baseline. Key activities, pivotal to delivery of the entire process, should be identified. These are chosen based on dependencies, coupling and risk, and disproportionate effort should be placed on ensuring that they are completed as per the plan, over less consequential activities. They will be tracked throughout the project, with evidence of slippage or increases in risks above the accepted residual risk prompting additional interventions. Analysis of DSMs should enable a better understanding of the interactions between elements and allow the plan to be amended accordingly. There are several relationship types within DSMs: Sequential – where activities follow on from each other in a finish-to-start type dependency relationship. Some overlap may be possible between activities, which would be seen as negative lag on a Gantt chart; Parallel – these activities may rely on the same resource but there is no actual dependency. Resource constraints would normally be considered later, using an organisational architecture DSM or a resource-loaded Gantt chart schedule; Coupled – with iterations between the outputs and inputs of two or more activities. This is the most difficult relationship to represent in traditional Gantt charts; Conditional – execution of a later activity depends on a decision made in an earlier activity. This may or may not be sequential and is confined to the transfer of information.
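Assuming a binary DSM of the form described earlier (an activity’s inputs read along its row), the first three relationship types can be distinguished mechanically. This sketch is illustrative and leaves ‘conditional’ out of scope, since that depends on decision content rather than matrix structure:

```python
def relationship(dsm, i, j):
    """Classify the relationship between activities i and j in a binary DSM.
    coupled: each feeds the other (iterative); sequential: exactly one
    feeds the other; parallel: no direct dependency between the pair."""
    forward, backward = dsm[j][i], dsm[i][j]  # i feeds j; j feeds i
    if forward and backward:
        return "coupled"
    if forward or backward:
        return "sequential"
    return "parallel"

# Hypothetical three-activity DSM: 0 feeds 1, while 1 and 2 are coupled.
dsm3 = [[0, 0, 0],
        [1, 0, 1],
        [0, 1, 0]]
```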
The traditional interaction considered within planning is that of a finish-to-start dependency, where one activity needs to be completed before another can commence; this is known as sequential within a DSM. It can be further refined by the inclusion of positive or negative lag, to delay or advance the commencement of the following activity by a specified duration (Figure 12. Process architecture DSM [68]). Other variations exist, including start-to-start (which suggests parallel activities) and finish-to-finish (which suggests a coupling type relationship). It is these relationships that define the effectiveness of a process or plan, and it follows that the greatest impact is achieved by better managing the interfaces between activities. It is also worth considering representing iterations as separate activities. Of the relationship types, it is coupling that is most often the most troublesome to manage and that most commonly causes delays. These relationships most often rely on iterative outputs from one activity into one or more other activities. It is difficult to predict the number of iterations that may ultimately be required. Other variables include the duration to review and accept an iterative input into another coupled activity, and difficulties in planning the resource necessary for such interactions. The number of activities involved in coupling, how closely coupled the activities are and the number of anticipated iterative cycles all potentially impact the overall duration. There are several types of coupling. Some of these can be eliminated, or reduced in terms of their impact, through effective planning and/or management of the process. This is an example of a process behaviour that should be fed back through the process for CSF consideration. Coupling behaviours may be planned or, due to errors, unidentified interaction types or emergence, unplanned. Coupling is generally as follows: 1.
Inherent coupling – planned coupling behaviour between activities that are structurally interdependent; 2. Poor activity sequencing – information is created too late, resulting in the delay of other activities. Though a planned coupling, this type of behaviour can be minimised, though not entirely eliminated, through effective analysis; 3. Incomplete activities – where activities are unduly delayed, with a similar impact to poor activity sequencing; 4. Poor communication – information or outputs are not passed on completely or in a timely fashion, again delaying subsequent activities; 5. Input changes – caused by changes to assumptions; 6. Mistakes – defective inputs created and discovered at a later date. Evidently coupling types 2 to 6 are to be avoided, whether through planning or subsequent controlling of the plan. Delayed inputs/outputs can result in the formation of assumptions, and indeed this is sometimes adjudged a desirable response to a coupling behaviour. Adopting such an action simply exchanges coupling behaviours 2, 3 or 4 for the potential for behaviour 5. Inherent coupling implies necessary iterations of outputs/inputs between two or more activities. Though necessary, and most often even desirable, convergence to a solution should be encouraged as quickly as is practicable. A reduction in the number of feedback loops is also highly desirable; indeed, this has the potential for use as a measure of development status and a measure of complexity in itself. One goal of the analysis of a process architecture DSM is to optimise the sequencing of the activities so that as many interactions as possible are below the diagonal. Doing so ensures that activities only begin once all inputs are available, reducing the number of assumptions that need to be made (avoiding the potential
for input changes type coupling) or activities being delayed unduly (avoiding poor activity sequencing type coupling). In the extreme example, an output from an activity in the upper right-hand corner indicates that assumptions will need to be made for an early activity to progress. If the assumption is subsequently proved incorrect, there may be a propagation of changes throughout all the activities, with substantial impact on resource usage and schedule completion. There will be conflicts between multiple activities that constrain the sequencing. The result will be a reordering of the rows and columns. Another aim of analysis is the removal of coupled relationships through processes called ‘clustering’ and ‘tearing’ [67]. Instead of removing unnecessary assumptions, this process introduces assumptions to make the overall process or plan more efficient. Activities are rearranged into a ‘block’ so that they are grouped around the diagonal. Assumptions are then applied that remove the ‘inherent coupling’ behaviour. There is however a risk of the emergence of an ‘input change’ coupling behaviour at a later date, which may reintroduce the coupling. The decision to engage in tearing of activities within the DSM will therefore be based on the risk of later input changes against the benefit of improving the efficiency of the process by the removal of inherent coupling. The discussion of where the process or plan may break down should be undertaken during the collation of information on activity interactions, particularly for coupling behaviours. Failure Mode and Effects Analysis (FMEA) may be of use to analyse potential points of failure leading to unplanned process iterations [67]. If significant, these should be fed back into the complexity assessment and CSF processes and recorded on the risk register.
Sequencing
Sequencing optimises the ordering of activities.
This can be done before the first draft of the DSM, in which case the effect can be dramatic. There is usually a natural or intuitive ordering of activities, so sequencing performed on an established DSM is not likely to initiate wholesale change; it is a useful process nonetheless. The process follows a number of steps. The first step, and the easiest, is the identification of activities with either no inputs or no outputs. Activities with no inputs can be undertaken first, and activities with no outputs undertaken last. The sequence of all remaining activities will be influenced by the number, strength and type of their interactions with other activities, depending on whether binary or numerical DSMs are being used. There are a number of heuristics that can be used to undertake sequencing. The first is minimising the number of assumptions required in the model by reducing the number of interactions above the diagonal. This reduces the number of activities completed later in the sequence that feed back to earlier activities. The second is reducing the distance of the remaining interactions above the diagonal. As described previously, long feedbacks can cause the propagation of change throughout the development process. These activities are then identified as ‘coupled blocks’. The identification of these coupled blocks can be used to identify early where additional management effort would be well spent. Examples include the co-location of those involved, the use of collaboration tools and the application of status and progress measures. Heuristics are of primary interest in this project, both through constraints of space and in keeping with the project’s overall philosophy. Algorithms and commercial applications are available, with potential benefits in terms of both expended effort and accuracy over large quantities of data.
Clustering and tearing
The identification of blocks of coupled activities can be used for further analysis of the DSM.
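The sequencing steps described earlier (activities with no inputs first, activities with no outputs last, leaving coupled blocks for further treatment) can be sketched as a simplified partitioning routine. The activity names and dependencies are hypothetical, and this is a much reduced version of full DSM partitioning:

```python
def partition(activities, inputs):
    """Simplified DSM partitioning sketch. `inputs[a]` is the set of
    activities feeding activity a. Activities whose inputs are all
    scheduled move to the front; activities feeding nothing still
    unscheduled move to the back; whatever remains forms coupled
    blocks needing clustering and tearing."""
    remaining = set(activities)
    front, back = [], []
    changed = True
    while changed:
        changed = False
        for a in sorted(remaining):
            if not (inputs[a] & remaining):   # all inputs already scheduled
                front.append(a)
                remaining.discard(a)
                changed = True
        consumed = set().union(*(inputs[b] for b in remaining)) if remaining else set()
        for a in sorted(remaining):
            if a not in consumed:             # feeds no remaining activity
                back.insert(0, a)
                remaining.discard(a)
                changed = True
    return front, remaining, back

# Hypothetical activity graph: B and C are mutually coupled.
deps = {"A": set(), "B": {"A", "C"}, "C": {"B"}, "D": {"B"}, "E": {"D"}}
```

Running this on `deps` schedules A first and D, E last, and correctly isolates B and C as a coupled block for clustering and tearing.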
Decomposition of coupled blocks of activities may yield a more manageable set of sub-activities. In contrast, aggregation can be used to simplify the DSM, but this will obscure individual feedbacks and is not generally recommended. The addition of activities earlier in the sequence can be used to reduce the number of assumptions. The decision to add activities will usually be made on a cost-benefit basis and should be used to reduce uncertainty and the risk of change. Tearing reduces the interaction between a block of activities by introducing assumptions in place of iterations. Following the clustering of activities into blocks, the process is as follows:
Suggest an interaction to be removed, called a ‘tear’. The link or links with the longest feedback loops provide the greatest benefit and should be considered first; Analyse the tear and, with the agreement of the stakeholders within the process, accept or reject it, considering the degree of confidence in the assumption that replaces the interaction. This is a risk-versus-benefit based decision. If the first tear is discounted, move on to the next best tear and repeat the analysis; The tear removes an interaction, which reduces the block of activities in a particular cluster. The DSM can now be sequenced; Feedback marks in the DSM are replaced with torn marks, to provide a prompt to check activity outputs affected by the tear against the new assumptions; Disproved assumptions will result in input change type coupling behaviour and rework of early activities. If this occurs, the tearing was unsuccessful and the intended efficiencies were not realised. Single assumptions can apply across multiple tears.
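The first step can be sketched as ranking tear candidates by feedback-loop length. Matrix position distance is used here as a simple proxy for loop length; a real analysis would also weigh interaction strength and confidence in the replacing assumption, which this illustration omits:

```python
def tear_candidates(dsm, activities):
    """List feedback marks (above-diagonal inputs, IR/FAD convention) as
    candidate tears, longest feedback span first. Acceptance of any tear
    remains a stakeholder risk-versus-benefit decision."""
    n = len(activities)
    marks = [(j - i, activities[i], activities[j])
             for i in range(n) for j in range(i + 1, n) if dsm[i][j]]
    return sorted(marks, reverse=True)

# Hypothetical DSM with two feedback marks: A depends on the much later D,
# and B depends on the adjacent C.
feedback_dsm = [[0, 0, 0, 1],
                [0, 0, 1, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]]
acts = ["A", "B", "C", "D"]
```

The long A-from-D feedback surfaces first, matching the heuristic that tearing the longest loops yields the greatest benefit.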
Commercial DSM tooling software is available that incorporates many of the more advanced algorithms and concepts that have been developed for sequencing, clustering and tearing. The most popular and commonly used of these are referenced on DSMweb.org [68]. None of these packages can directly interface with Oracle’s Primavera [69], the scheduling application most commonly employed within complex projects, which suggests that the general maturity of the applications has some way to go. Several of the applications do profess outputs compatible with Microsoft Project, a scheduling application which (in the author’s experience) tends to be used to plan smaller projects.
4.2.5.
Application of the process DSM within a complexity management framework
The ultimate purpose of DSM analysis, in the general case, is to optimise the matrix, which will influence the creation of a time-bound schedule. Analysis as a part of a complexity framework is no different. There is however an additional rationale for analysis, consistent with the principle of focusing on areas of concern against which management effort can be directed. Analysis can not only optimise the activity sequence and interventions, but can also identify clusters of activities, and single activities, that either have the potential to unduly influence the overall schedule or are of particular concern. This then affords the opportunity to address current issues, mitigate risks or put in place measures to provide early issue identification. It is important to note that the DSM is less useful in showing attributes such as start dates, durations or the lagging or leading nature of dependencies. This is one of the most powerful reasons to retain Gantt charts alongside the use of DSMs. Thus the DSM can be used as an input into the Gantt chart and provide more information on interaction-related risks. DSMs can also be used to drive techniques such as PERT; in this instance, supplementary information relating to likelihood and impact would be contained within the DSM. Development of a process/organisation architecture MDM can be used to identify the expertise and resourcing requirements for particular activities and to better manage the interface between organisational and activity-driven requirements. It can be used for ‘resource loading’ [70] of the Gantt chart and schedule, to facilitate resource smoothing or levelling of the schedule and the implementation of EVM methodologies. It is proposed that there should be a tiered application of DSM following complexity assessment and assignment of CSFs. The tiers increase in detail, and also in effort, according to the risks and benefits of application within the particular area of the technical development.
A suggested ranking, descending in terms of effort, is as follows: Not applied, and traditional planning techniques are used; A high-level application within the WBS between elements of the system development; Application of a binary process DSM at a lower level of the WBS; Application of a binary MDM between process and organisation; Application of a numerical process DSM; Application of a numerical MDM (process and organisation).
Areas of concern are analysed at a lower level in the WBS, with integration of the process and organisation via the use of an MDM. The use of numerical matrices has the potential to provide extremely rich information for use within other planning processes. However, this detail inevitably comes at a cost, so it should be reserved for areas of the development that are likely to require the greatest degree of planning and control. It is likely that analysis will highlight additional areas of concern that should be fed back into the complexity assessment and CSF processes. It is also advised that the use of DSMs at a high level within the WBS should be undertaken early in the development lifecycle. The identification of interactions at an early stage may influence the composition of the WBS and the ordering of WBS elements within the schedule. Since WBS elements are invariably placed in the Gantt chart in accordance with a WBS numbering scheme, it is desirable to place as many of the WBS elements in the correct sequence as is practicable. This will then aid the logical flow of activities from the top left to the bottom right of a Gantt chart. It is useful to develop a numerical process architecture DSM convention which incorporates the principles described within the complexity assessment. This will validate the initial assessment findings and allow for the identification of any additional CSFs that arise from changes after further analysis. For simplicity, the measurement of uncertainty, ambiguity, emergence and non-linearity will be captured numerically using several conventions. Some will be absolute integers, while others will be subjective. The additional information should be determined alongside the inputs, outputs and interactions described above.
Some or all of these values should be held separately from the DSM matrix itself, within a spreadsheet or database, as including them directly would clutter the layout. Uncertainty is directly related to the probability of change for a DSM activity. A value relating to the likelihood of a change in an activity’s outputs or duration will be adopted. A percentage is often used, and this can be translated into a value between 0.1 and 0.9, with 0.1 representing 10%, i.e. a very low likelihood, and 0.9 representing 90%, which indicates that the outputs are likely to change. No activity will be assigned 1, i.e. be planned with a 100% chance of change. Change may be due to input changes or mistakes. Ambiguity impacts the development in terms of the number of assumptions that are required, and of delayed start dates if assumptions are not to be used. The number of assumptions can be counted, with a high number of assumptions relating to higher levels of uncertainty. Emergence is where there are unexpected interactions [44], i.e. more activities are affected by a single change than is originally envisaged. High emergence is where many activities are so affected. It is therefore sensible to assign to an activity a value equivalent to the number of activities that rely on its outputs. Conversely, a value can be assigned to each activity that represents the number of inputs it depends upon. The number of outputs from a single activity identifies which activities, if subject to change, are particularly influential on the wider DSM. The number of inputs shows how sensitive an activity is to changes elsewhere. Non-linearity is seen as a disproportionate impact resulting from a change. This can be represented by a ranking from 1 to 10, with 1 representing a slight impact on dependent activities and 10 a substantial impact. Of course, the impact of a change to an activity may not be homogeneous across its outputs to other activities.
For the sake of simplicity, an average measure of non-linearity will be proposed for this model. Program-size complexity is directly related to the number of activities to be managed. It is not considered useful to measure this attribute at the level of individual activities. Measurement of program-size complexity would be more useful at a higher level, for example to measure the size of a work package, a development phase or the project in its entirety. In these instances, suggested measures would relate to the number of activities, or interactions between activities, within the chosen area of interest. Figure 13 shows an example with fictitious data for two sub-systems and examples of data that could be assigned against the various properties. Completing a numerical DSM in this way will allow further analysis. Case studies [67] demonstrate that the assignment of probability and impact (called uncertainty and non-linearity in this text) to an activity can be used to perform PERT analysis on the schedule. There are, however, several other ways that these and the
other remaining attributes can be used for analysis. In general, particular thresholds can be filtered and sorted to identify trends or to group activities together for analysis, and shading and colouring can be employed for ease of reference and communication of activities of particular concern. In each case there is the potential for tooling to automatically make the necessary calculations based on the inputs into the DSM itself. Additional management effort and status and progress measures can be introduced to specifically address areas of concern.

Activity | Uncertainty (likelihood) | Ambiguity (no. of assumptions) | Emergence | Non-linearity (impact)
System requirements – sub-system X | 4 | 5 | 12 outputs, 5 inputs | 7
System safety assessment – sub-system X | 3 | 1 | 5 outputs, 5 inputs | 9

Figure 13. Assigning additional attributes to process architecture DSM activities.
The pairing of poor attributes relating to uncertainty/ambiguity and emergence/non-linearity is certainly not desirable. Section 2.4 discusses the profile of complexity across the development lifecycle. It is proposed there that uncertainty and ambiguity should naturally reduce over the development lifecycle while emergence and non-linearity generally increase. High values of three or four of these properties against a group of activities, especially later in the development lifecycle, show an area of risk. Even a single activity with a similarly poor complexity profile, if on the schedule's critical path, deserves closer attention. High uncertainty and ambiguity later in a development lifecycle may be the result of process or schedule optimisation activities, such as clustering and tearing, where assumptions are introduced to reduce the schedule duration and sequential activities are instead undertaken in parallel. This in turn may be the result of either an originally over-optimistic technical development completion milestone or of delays demanding a new schedule. Another technique is to analyse activity feedback cycles to determine those that are particularly uncertain. Feedback cycles with a high likelihood of change that coincide with the critical path will be of particular concern, as any delay due to rework will directly impact on the completion of technical development. To calculate the likelihood of change across a three-activity cycle the following calculation can be undertaken:

Likelihood of change across a feedback cycle = 1 - (1-a)(1-b)(1-c)

where a, b and c are the uncertainties for the three activities in the particular cycle. This essentially calculates the likelihood of any of the activities changing within the feedback cycle. More activities and higher likelihoods of change will produce a higher measure.
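The calculation above can be sketched in a few lines. This is a minimal illustration, not part of the framework itself; the function name and the example uncertainty values are invented for demonstration:

```python
def cycle_change_likelihood(uncertainties):
    """Likelihood that at least one activity in a feedback cycle changes.

    `uncertainties` holds the per-activity likelihood-of-change values
    (0.1 to 0.9) assigned during formation of the numerical DSM.
    """
    p_no_change = 1.0
    for u in uncertainties:
        p_no_change *= (1.0 - u)
    return 1.0 - p_no_change

# Three-activity cycle with uncertainties a=0.3, b=0.2, c=0.5:
# 1 - (0.7 * 0.8 * 0.5) = 0.72
print(round(cycle_change_likelihood([0.3, 0.2, 0.5]), 2))  # 0.72
```

The generalisation to cycles of more than three activities follows directly from the product form of the formula.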
One method of addressing this is through the introduction of additional activities to reduce the number of assumptions and thus the potential for input changes. The introduction of other management controls, described in additional CSFs, should also be considered. Finally, two or more sequential or coupled activities with high emergence attributes have the potential for substantial propagation of change. It may be difficult to reduce the number of impacted activities (for example, tearing and clustering to reduce dependencies may instead simply result in input changes) but further reducing uncertainty, even when already relatively low, is sometimes possible. Taking the analysis further, other activity measures can be derived from an activity's inputs and outputs. Activities particularly vulnerable to change propagation can be determined by calculating the product of uncertainty and non-linearity for each input to an activity and taking their sum. Large calculated values will highlight an activity that is at risk of change. For example, activity Y has four inputs. The inputs have products of uncertainty and non-linearity of 1x8, 4x4, 3x6 and 7x3. The sum of these is 63 (8+16+18+21). This can be mapped across the DSM using colours or shading to denote activities of low or high vulnerability. Similarly,
activities with a high potential to influence change can be determined by calculating the product of uncertainty and non-linearity for each output and multiplying this by the number of outputs. In addition to allowing effort to be directed against particular activities, these measures will readily allow change impact analysis to be undertaken and will influence decision making. A relevant example of the potential benefit is that of a seemingly small, non-essential change which has the potential for a substantial impact across the schedule. In this instance it could reasonably be argued that the change should be disallowed. In summary, the following measures can be calculated:

Primary measures:
o Uncertainty – subjective measure of likelihood of change, determined in conjunction with stakeholders during formation of the DSM;
o Ambiguity – determined through the number of assumptions required for an activity;
o Emergence – determined through the number of inputs and outputs for each activity;
o Non-linearity – determined as the general impact of change of an activity's outputs on other activities.

Secondary measures:
o Likelihood of change across a feedback cycle – particularly useful if activities coincide with the schedule critical path;
o Vulnerability to change propagation – calculated from the uncertainty and non-linearity of an activity's inputs;
o Change propagation influence – calculated from the uncertainty and non-linearity of the activity multiplied by the number of outputs it has.

The calculation of mean values for the primary measures can give an indication of the complexity profile for a portion of the overall technical development based on the boundaries of the DSM. Complexity profiles were discussed in section 2.4. This could be across the entire phase or across the activities currently in progress, and could have one of two purposes.
The first would be to provide norms for technical development that could be used for later developments. Complexity profiles deviating markedly from the norm would require further investigation. The second would be the identification of trends. This is particularly useful in identifying when uncertainty and ambiguity remain consistently elevated or are increasing. Either would be a cause for concern. Coupled with naturally increasing emergence and non-linearity, high uncertainty and/or ambiguity pose a high risk of significant change propagation. The identification of important activities, activity cycles or clusters of activities should direct the implementation of status and progress measures. These measures will be discussed in section 5.
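The two change-propagation measures described earlier can be sketched as small helpers. The vulnerability figures reuse the worked example for activity Y from the text; the values passed to the influence function are invented for illustration:

```python
def vulnerability(inputs):
    """Vulnerability to change propagation: the sum, over an activity's
    inputs, of each input's uncertainty x non-linearity product."""
    return sum(u * n for u, n in inputs)

def influence(uncertainty, non_linearity, n_outputs):
    """Change propagation influence: the activity's own uncertainty x
    non-linearity product, multiplied by its number of outputs."""
    return uncertainty * non_linearity * n_outputs

# Activity Y from the text: inputs of 1x8, 4x4, 3x6 and 7x3 -> 8+16+18+21
print(vulnerability([(1, 8), (4, 4), (3, 6), (7, 3)]))  # 63
```

Either result can then be thresholded, and the DSM shaded accordingly, to flag activities that warrant closer monitoring.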
4.2.6.
Process-organisational MDMs and their application
The creation of an MDM requires both DSMs to be available for the particular area of the development and for it to be of sufficient maturity. Much of the effort required is expended creating the donor DSMs, and the principle of the MDM is similar to that of the DSM. Although there are a number of types of MDM that can be created, it is a process-organisation MDM that is particularly applicable to the planning and control of complex technical development processes. An MDM can be of either the binary or the numerical type but, unlike the DSM, the MDM does not have a diagonal. It will show the interaction between particular organisation elements and the overall process, which is useful in determining the impact of a deficit in resource or capability across activities. It will also show the organisational requirements of a particular activity.

Figure 14. Process-organisation MDM [68].
Analysis of the MDM may show a cluster of organisational elements that are particularly influential during a particular phase. It may be beneficial to temporarily co-locate a number of teams or to make additional efforts to integrate their interactions. The demands of the schedule can be inferred from the MDM, allowing resource and capability requirements to be planned in advance. The MDM can be populated with the estimated level of effort that is required in terms of man-hours or cost. Missing organisational requirements may be identified by the absence of interactions in MDM rows. The findings of this analysis can be used in cost and resource estimates and to resource-load the Gantt chart to allow the application of EVM.
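As a sketch of how a numerical process-organisation MDM could support this analysis, assume a simple dictionary representation in which rows are activities, columns are organisational elements, and cells hold estimated effort. The activities, teams and hours below are invented examples:

```python
# Hypothetical process-organisation MDM: rows are activities, columns are
# organisational elements, and cells hold estimated effort (man-hours).
mdm = {
    "System requirements": {"Systems team": 120, "Safety team": 40},
    "System safety assessment": {"Safety team": 160},
    "Integration testing": {},  # no organisational interactions recorded
}

# Demand per organisational element, for resource planning / EVM loading.
demand = {}
for activity, row in mdm.items():
    for org, hours in row.items():
        demand[org] = demand.get(org, 0) + hours

# Activities with empty rows flag missing organisational requirements.
uncovered = [a for a, row in mdm.items() if not row]

print(demand)     # {'Systems team': 120, 'Safety team': 200}
print(uncovered)  # ['Integration testing']
```

Summing columns in this way gives the per-team demand needed for resource-loading the Gantt chart, while empty rows surface activities with no identified organisational owner.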
5. System Performance Measures

5.1.
Introduction
The assigning of Performance Measures, and the gathering and processing of project data to calculate them, is the 'check' activity within Deming's 'Plan-Do-Check-Act' methodology. It is a vital component between the planning and controlling of a project that should inform if and when action should be taken. Effective Performance Measures should additionally provide information on what type of intervention may be effective. They may indicate a deviation from the plan but also the emergence of undesirable properties within the development process. Pertinent examples of such undesirable properties are poor outcomes after the implementation of a CSF, or a large backlog of configuration change requests. The control and management of on-going development activities is dependent on relevant and accurate information on both the status and the progress of the activities being undertaken. Technical development processes, as compared to normal business processes, are typified by a number of unique properties which make effective process monitoring important [72]:

- 'Dynamic, creative and chaotic';
- Contain many feedback mechanisms;
- The process in its entirety is largely 'virtual' and 'not always precise';
- The likelihood of change is high due to feedback mechanisms, imperfectly defined requirements and customer-led changes.

Without the appropriate choice and effective implementation of Performance Measures, issues may go undetected until they are manifestly apparent, by which time they may be unrecoverable. Additionally, it is useful to know where management effort can be directed most efficiently. Examples of existing frameworks used for the formation and collection of measures include House of Quality (known as Quality Function Deployment), Goal-Question-Metric and Balanced Scorecard [73]. The use of metrics within concept and development activities is not widespread and can lead to a number of 'coping' mechanisms being adopted by personnel in response to them.
Simon and Simon list a number of issues that have been identified during an empirical study [74] and as such should be avoided. Similarly, Deming describes particular 'traps' to be avoided [75]. Conversely, Kline et al [76] describe a method for designing performance measures that is applicable across any type of technical development. Performance measures should be designed and implemented to avoid aspects that reduce effectiveness and to embrace those that improve their chance of having a positive impact. The measures that are chosen should also suit the particular characteristics of the development. The method recommended by this dissertation is to identify areas of complexity and criticality through the complexity assessment and to measure the implementation of CSFs. Two ways of framing the performance measures are in terms of areas of the WBS and in terms of the complexity criteria listed and described in Section 2. Both can be used to tailor the performance measures so as to balance the effort of collection and analysis of data against the benefits of closer and more accurate monitoring and control. This approach is, however, constrained: extensive research, undertaken in the previous literature survey [4] and through additional research within this dissertation, revealed the limited number of techniques currently available. This was reiterated
by the findings of the questionnaire survey, which revealed no additional performance measures from amongst the 122 respondents. Additional research, undertaken since the literature review, can be found within Appendix L.
5.2.
Desirable properties of Performance Measures
Performance Measures, often called metrics, can be placed in several categories. Mathematical system performance measures are calculated from fundamental data. These metrics are either derived or combined, and an excellent example is that of the metrics relating to the established method of EVM. In this methodology, primary data relating to the actual costs and physical progress is compared with cost and schedule planning to derive measures such as cost and schedule variance and cost and schedule performance indices. Practical system metrics are derived from the application of 'empirically established factual logic'. Heuristic system metrics are similar to practical system metrics but the scope of the metrics is restricted to particular issues [73]. Depending on the level of decomposition, the metrics described in this dissertation are most likely to be either practical system or heuristic system metrics. EVM is used extensively within projects and technical developments, and the questionnaire survey will use it as a baseline against which the influence of other performance measures is compared. The properties of the measures are of importance. Foremost, the measures should of course have a clear and obvious purpose, and it is also beneficial that the measure is as 'homomorphic' as possible with the source data [73]. The nature of the measures should provide a view of the development's objectives ranging from the short term to the long term [73]. Measures should be process-orientated as well as schedule- and cost-orientated indicators [73]. In accordance with the concept of the balanced scorecard, there should be breadth to the perspective of the measures that are adopted [77]. Measures may be either lagging or leading, with staple measures inevitably relating to cost and schedule. An example of a common system of measurement is that used within EVM, which reconciles planned cost and schedule against actual cost and progress. In this instance it is also a lagging measure.
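The standard EVM derivations mentioned above can be sketched as follows. The function and the example figures are illustrative; the formulae themselves (cost variance, schedule variance, CPI, SPI) are the conventional EVM definitions:

```python
def evm_measures(pv, ev, ac):
    """Standard EVM derived measures from planned value (PV),
    earned value (EV) and actual cost (AC)."""
    return {
        "cost_variance": ev - ac,      # negative -> over budget
        "schedule_variance": ev - pv,  # negative -> behind schedule
        "cpi": ev / ac,                # cost performance index, <1 is adverse
        "spi": ev / pv,                # schedule performance index, <1 is adverse
    }

# Illustrative month-end position: planned £100k, earned £80k, spent £90k.
m = evm_measures(pv=100_000, ev=80_000, ac=90_000)
print(m["cost_variance"], m["schedule_variance"])  # -10000 -20000
```

Because all four measures are computed from costs already incurred and work already earned, they only flag a problem after it has occurred, which is what makes EVM a lagging measure.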
That is, an issue is flagged only once activities fall behind schedule or estimated costs are exceeded. If the issue is determined early enough, action can be taken to converge with the original plan. However, the severity of the issue is greater the later it is detected, and both low sensitivity of data and the use of monthly reporting cycles may delay even this late detection by a month or two. In this instance it is much better to predict the occurrence of cost overruns and schedule delays through the use of leading measures. These measures will target areas of the technical development that will later influence the actual activity durations and costs. Another property that should be considered is frequency of application: it is important to balance the effort of application against the benefit that may be derived from the frequency of measurement. A measure may require a lot of time and effort to determine, for example the findings of an audit. It may measure properties of the development that do not change frequently, such as the schedule for the current phase. In both instances it is appropriate to determine the measure only once per development phase, or perhaps only three or four times per year. Exceeding this frequency of measurement may entail the use of inaccurate, incomplete or otherwise estimated data, or may not provide sufficient additional benefit to make it worthwhile. Conversely, the measure may follow a dynamic development property. The status of development artefacts may be automatically recorded via the database or tooling that is being utilised. This allows the measures to be taken more frequently or even viewed in real time. The cadence of measure reporting may also be dictated by time-bound criteria. The frequency of EVM reporting is constrained by the monthly frequency with which actual costs are most often calculated, which in turn relates to the submission of invoices by contractors and consultants. Measures can be quantitative or qualitative.
Often it is difficult to provide quantitative data and only a commentary on the on-going activities is possible. Even when quantitative data is possible, it is worth combining it with qualitative data to provide valuable context to the information. Examples of leading measures include the level of development resource and the quantity of development change requests that have been submitted during a period. They should highlight early concerns either through an absolute
quantity or through a trend over successive reporting periods. Using the first example from above, a deviation below the planned level of development resource may not yet be reflected in a delay in the schedule. Early identification would allow management action to be taken to elevate resourcing to a suitable level, thus avoiding a later schedule deviation. An increasing trend in the number of development change requests may indicate inadequacies within the existing requirements or stakeholder management issues. Furthermore, increases in resourcing requirements and funding may result from an increase in the amount of change. Again, early identification of changes can be used to prevent or minimise an adverse impact on the development activities. It has been proposed that the selection of measures could be based on process analysis. There are several barriers to this approach [73]:

- There is a high overhead to process modelling, which increases where processes are already in place;
- Analysis and implementation of measures needs to be undertaken early for maximum effect, but process uncertainty is high and change is likely. This can lead to wasted effort and/or incomplete analysis;
- Changes will continue to propagate through the project as contracts are let and strategies evolve;
- The measures chosen by such a method tend to be too abstract.

Though some Performance Measures are used in almost every project (EVM, for example), it is logical that others are chosen on the basis of particular development characteristics. Within this framework the selection should be influenced by the complexity assessment and the subsequent sub-processes, following some simple concepts. Specifically, the goals of the measures will be determined with a focus on areas of particular complexity and risk, including the assessment of areas described in CSFs.
The Goal-Question-Metric method is an exceedingly straightforward way of determining measures. The 'goal' of the measure consists of four components (purpose, issue, object and viewpoint) and would be directly related to CSFs and risks. The 'question' typically asks for the current status and the current trend in this status. The 'metric' is perhaps the most difficult aspect to determine, as meaningful measures at a low level of abstraction can be difficult to find. Metrics will be very strongly influenced by the range of measures that are currently available within industry or else are developed during the project itself. An illustration of the Goal-Question-Metric approach is shown in Figure 15 [78].

Figure 15. The Goal-Question-Metric method [78].

The level of decomposition should be considered, and the WBS can again be used as the means for this. Low levels of decomposition provide an overview of development health but will not provide a real indication as to the source of any issues, as a node within the WBS will be influenced by all of those below it. A high level of decomposition may suffer from low sensitivity of the data, due to lower sample sizes or the chosen method of measurement, and will certainly require a good deal more effort to implement. An appropriate level within the WBS should be chosen for the highest level of decomposition, and it is perfectly feasible to use metrics following the same principles simultaneously at different levels of decomposition, both for management reporting and to facilitate a targeted response to issues arising. Methods of data collection, and thus sensitivity and effort, should be commensurate with the overall goal of the Performance Measure. Performance Measures can be selected to focus on particular areas of concern within the development. This may be through a higher level of decomposition for a particular phase or cluster of activities, or else through the choice of a unique measure.
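As an illustration of the Goal-Question-Metric decomposition described above, a single entry might be structured as follows. The goal wording, questions and metrics are invented examples, not measures drawn from the framework:

```python
# Hypothetical Goal-Question-Metric entry tied to a CSF, following the
# four goal components (purpose, issue, object, viewpoint) described above.
gqm = {
    "goal": {
        "purpose": "Reduce",
        "issue": "late requirements change",
        "object": "sub-system X requirements",
        "viewpoint": "project manager",
    },
    "questions": [
        "What is the current number of open change requests?",
        "What is the trend over the last three reporting periods?",
    ],
    "metrics": [
        "open change requests per reporting period",
        "three-period moving average of change requests",
    ],
}

print(len(gqm["questions"]))  # 2
```

Holding entries in a structure such as this makes it straightforward to trace each collected metric back to the question it answers and the goal (and thus CSF or risk) it serves.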
Methods that can direct where measures are to be chosen include the complexity assessment, the assigning of success factors and subsequent planning. In particular, the DSM can highlight areas of intense coupling and complexity. Such prioritisation of activities may be due to their properties of uncertainty, emergence or non-linearity and their proximity to the schedule's critical path. The use of focused measures must be in conjunction with other, less detailed measures to ensure coverage of the entire schedule of activities. It is also beneficial to overlap metrics, where possible, to provide a degree of confirmation of the findings [73]. Other factors to consider include:

- Metrics should be complete, correct, consistent and clear [73];
- As they will be communicated and reported, metrics should be 'simplistic' [73] and be chosen at an appropriate level of abstraction so they are readily understandable;
- The collection of meta-data during, and as a part of, the individual activities, as opposed to periodic and dedicated data collection, will reduce the management burden required. An example would be progress measurement during the production of development artefacts, at the end of each individual iteration. This will introduce a level of 'automation' [79] into the process;
- Consideration should be given to the collection of metrics by project assurance functions rather than the development team [58] to ensure impartiality.

The reliability of some Performance Measures is necessarily subjective due to the scarcity of formal methods of collecting data in many areas. Subjectivity can be curtailed through the adoption of rules and guidance for the collection and analysis. A simple example is the progress measurement of development artefacts, as shown in Figure 16, which also incorporates the principle of creating meta-data during the activities themselves.
In this case the artefact owner would track progress according to the relevant guidelines, which could be placed on a shared database for ready collation and analysis.

Artefact completion milestone | Percentage complete
Completion of first draft of artefact | 50
Completion of first review | 60
Inclusion of first comments | 80
Completion of second review | 85
Inclusion of second review comments and submission for approval | 90
Artefact approved | 100

Figure 16. Simple method of measuring progress in a development artefact.
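The milestone scheme of Figure 16 can be encoded so that progress is derived consistently from whichever milestones an artefact owner has recorded. The shortened milestone names are illustrative paraphrases of the figure's rows:

```python
# Milestone-to-percentage mapping from Figure 16 (milestone names shortened).
MILESTONE_PROGRESS = {
    "first draft complete": 50,
    "first review complete": 60,
    "first comments included": 80,
    "second review complete": 85,
    "submitted for approval": 90,
    "approved": 100,
}

def artefact_progress(milestones_reached):
    """Percentage complete = the highest milestone reached (0 if none)."""
    return max((MILESTONE_PROGRESS[m] for m in milestones_reached), default=0)

print(artefact_progress(["first draft complete", "first review complete"]))  # 60
```

Deriving the percentage from recorded milestones, rather than asking owners to estimate a figure directly, is what curtails the subjectivity discussed above.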
Many Performance Measures are highly abstract. There are a number of these that relate to DSMs and these should be considered for use. They largely relate to complexity, however, so while they may provide validation of the other analysis that has been undertaken, this will relate to areas of risk within the technical development. Examples of the areas of the development process that should be measured include [80]:

- Schedule and cost planning – will measure risks, bottlenecks and the critical path as well as the overall cost. It should also consider process iterations and associated uncertainty;
- Resource allocation – will measure resource and capability levels as compared with the plan, and issues such as resource smoothing/levelling, removal of redundancy and accessibility of resource;
- Quality – will measure the consistent flow of information, completion of documentation in line with process, the meeting of requirements and the distribution of risk amongst processes;
- Flexibility – will measure the status of buffers to absorb delays, defences against individual errors and general process resilience;
- Organisational decomposition – will measure whether the organisation of workgroups and teams is adequate and whether efficient communication is in place;
- Interfaces – will measure which entities need synchronising, the speed of communication across interfaces and the relevant communication paths in place;
- Transparency – will measure whether the organisational units are aware of their impact on outcomes and of the mental model of the process organisation;
- Decision making – will measure which decision points have a high impact on outcomes.
Artefacts can also be measured using a rating based on interdependency and risk:

- Number of dependencies;
- Origin of dependencies – within organisational area, interdepartmental, inter-contract or external;
- Status of dependencies – i.e. the meta-data derived for them;
- Percentage of the document that requires dependencies;
- Identified risks.

Focusing effort on particularly critical areas of requirements management is achievable by using the Complexity Assessment and identifying nodes on the WBS. This can identify when Performance Measurement should commence as well as the depth and breadth of measurement that is appropriate.
5.3.
Selection of System Performance Measures from literature
In this section, we discuss the measures which have been selected, from both the previous literature survey [4] and further research undertaken as a part of this dissertation, as being significant for inclusion in the Complexity Management Framework. Each measure will be categorised against the criteria previously described in Section 5.2. They have been chosen for the credibility of their previous application and for the breadth of methodology and aspects of development for which they are used. These will be verified by a questionnaire survey in terms of their comparative influence. EVM will not be described in detail but will be used as a benchmark in the questionnaire survey against which the influence of other Performance Measures can be compared.
5.3.1.
Requirements
The management of requirements is a fundamental part of technical development and the two central techniques relate to the measurement of outcomes (requirements satisfaction) and process (linked to a variety of attributes and metadata).
5.3.1.1.
Satisfaction of Stakeholder and System Requirements
A fundamental tenet of technical development is the elicitation and satisfaction of requirements. In turn, these requirements can be broadly sub-divided into stakeholder requirements and system requirements. Requirements, especially system requirements, are prone to successive iterations and coupling behaviours, as represented in the DSM model. The measurement of requirements is described within a number of systems engineering manuals, latterly INCOSE's Systems Engineering Handbook. These observe both stakeholder and system requirements from the perspective of their success in fulfilling the overarching business requirements [28][54], and as such are quality-orientated measures. The technique identifies a limited number of particular Stakeholder and System Requirements whose realisation (satisfaction) is critical or important to overall success. Selection of these can be influenced by the Criticality Assessment, though they are often closely linked to high-level business and operational-level requirements that are determined at the very beginning. With respect to complexity criteria, the analysis of requirements allows monitoring and control of the system development, particularly the technology and internal interfaces. Other complexity criteria will have a direct impact on requirements, excellent examples of such criteria being the development process, internal organisation, organisation and stakeholders. Measurement is facilitated through design-review-type activities and provides lagging measures. This period of lag is of course dependent on the frequency of design review.
5.3.1.2.
Requirements attributes
Another method of measuring System Requirements is through the collation of associated information to be stored in a matrix, spreadsheet or database. This is sometimes known as the Requirements Verification
Traceability Matrix (RVTM) [81]. The metadata can be compared against the plan, or trends can be monitored, to identify areas of concern. The complexity of the RVTM should be proportional to that of the technical development and may include fields that are specifically chosen to assist in the management of foreseen project and system development risks identified early using techniques including the complexity assessment. Suggested fields to be collated include [81]:

- System Requirement unique identifier and name;
- Requirement description;
- Requirements specification (document reference);
- Overall requirement status (such as detailed in design, implementation or integration);
- Trace to overarching Stakeholder Requirement (unique identifier and document reference);
- Verification and validation procedures (document references);
- Verification strategy status (such as undefined, strategy only, procedure completed);
- Verification status (such as not started, failed, completed with reservations or completed, with document references as appropriate);
- Validation purpose (such as for acceptance, certification, readiness for use or qualification);
- Validation status (such as not started, failed, completed with reservations or completed, with document references as appropriate).

Other fields that could be employed in either the same artefact or complementary artefacts include:

- Stability (susceptibility to change);
- Design compliance;
- Interface compliance;
- Process compliance;
- Risk and risk status;
- Safety case and licensing compliance;
- Procurement and contractual compliance;
- Importance or criticality of requirements within the project schedule.

These measures may give an indication of requirements status against the planned development and are leading in nature, as they indicate potential issues ahead of requirements satisfaction. They also allow for the identification of areas of concern.
More generally, requirements can be given a wide variety of metadata that can be used to interpret the overall status of the requirements, or of a particular sub-system or area of development. Not all of these attributes should be chosen, due to the overhead required to maintain complete and accurate data sets. Attributes should be chosen early in the development lifecycle to ensure that data does not need to be retrofitted at a later date [81]. Both the RVTM and requirements attributes are primarily concerned with quality and with identifying issues relating to progress. Many of these attributes, together or independently, can be used to infer the efficacy of resource allocation, organisational decomposition, interfaces and decision making. For instance, a backlog of requirements with a particular status may indicate one or more bottlenecks in the process. This may be due to issues with resourcing, organisational efficiency or decision making. It can be used to identify risk at a high level of decomposition within the system of interest, which can be rolled up to show particular sub-systems which deserve additional attention [81]. If trends are identified early enough, for example a backlog of under-review requirements within a particular development area, then action can be taken to reduce or eliminate delays to the schedule.
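A minimal sketch of how such metadata could be filtered to surface a backlog follows. The field names use a small subset of the RVTM fields above, and the requirement identifiers and values are invented:

```python
from dataclasses import dataclass

@dataclass
class RvtmRow:
    """A minimal sketch of an RVTM entry, using a few of the fields above."""
    req_id: str
    name: str
    status: str               # e.g. "in design", "under review"
    verification_status: str  # e.g. "not started", "completed"
    subsystem: str

rvtm = [
    RvtmRow("SR-001", "Braking distance", "in design", "not started", "brakes"),
    RvtmRow("SR-002", "Brake response time", "under review", "not started", "brakes"),
    RvtmRow("SR-003", "Door interlock", "in integration", "completed", "doors"),
]

# A backlog of under-review requirements in one sub-system may indicate a
# bottleneck in resourcing, organisational efficiency or decision making.
backlog = [r.req_id for r in rvtm
           if r.status == "under review" and r.subsystem == "brakes"]
print(backlog)  # ['SR-002']
```

The same filtering, grouped by sub-system and tracked over successive reporting periods, would expose the trends discussed above.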
5.3.2. Development health
Development health is likely to contain a high degree of subjectivity. Facets such as adherence to the planned cost and schedule can be measured through EVM or similar techniques. Other, less tangible aspects will require a framework to provide the context and guidance within which they can be assessed. Using a defined method, Kline et al [76] formed a weighted matrix of design performance characteristics that can be used to measure development health. Each high-level measure is formed of five individual factors, with each factor assigned an individual text description as a basis for scoring. The high-level measures are as follows:
- Problem Definition;
- Prior Knowledge;
- Divergent Thinking;
- Professional Analysis;
- Decision Making;
- Create & Follow Plan;
- Iterate & Assess;
- Validate Solutions;
- Communication;
- Teamwork.

Because it observes attributes of the development process and team, the technique is a leading indicator and should be able to identify risks to the development before they are fully realised. It should be undertaken reasonably infrequently, say at the beginning of a phase or, if the phase is particularly long in duration, mid-phase at three- or four-month intervals. It is proposed that the Kline et al methodology is applied, partly or in full, to form new measures or to tailor existing measures for use in technical development. One candidate for this approach, mentioned in the Literature Survey [4], is Torbet et al's six criteria of 'Design Performance Measurement' [82]:
- Client needs (stakeholder requirements);
- Integrating design into objectives (system requirements);
- Internal design processes (suitability and effectiveness of internal design);
- External design processes (suitability and effectiveness of external design);
- Profitability and efficiency (of the design);
- Learning and innovation.

There is of course considerable overlap between these and requirements-orientated measures, and with the criteria within the complexity assessment. Another way of identifying the subjects for measurement is their identification during complexity assessment and the designation of CSFs. Alignment of specific performance measures with areas of concern broadly follows the principles described for planning via the use of DSM: that is, focus on areas where there is particular risk of deviation from the plan, while monitoring at a lower level of decomposition across the entirety of the development. The headings within Kline et al's 'Creating and Using a Performance Measure for the Engineering Design Process' relate particularly to the complexity criteria of the development process and internal organisation.
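A minimal sketch of a Kline-style weighted scoring matrix follows. The weights, factor scores and the simple weighted-average aggregation are invented for illustration and are not taken from Kline et al [76].

```python
# Illustrative Kline-style matrix: three of the ten high-level measures,
# each with five factor scores (1-5) and a weight. All values are invented.
measures = {
    "Problem Definition": {"weight": 0.15, "factor_scores": [4, 3, 5, 4, 4]},
    "Decision Making":    {"weight": 0.10, "factor_scores": [2, 3, 2, 3, 2]},
    "Communication":      {"weight": 0.10, "factor_scores": [5, 4, 4, 5, 4]},
}

def health_score(matrix):
    """Weighted average of per-measure mean factor scores,
    normalised over the weights actually used."""
    total_weight = sum(m["weight"] for m in matrix.values())
    score = sum(
        m["weight"] * (sum(m["factor_scores"]) / len(m["factor_scores"]))
        for m in matrix.values()
    )
    return score / total_weight

def weakest(matrix):
    """The measure with the lowest total factor score -- a candidate risk."""
    return min(matrix, key=lambda k: sum(matrix[k]["factor_scores"]))

print(round(health_score(measures), 2))  # overall health on the 1-5 scale
print(weakest(measures))                 # lowest-scoring measure
```

The `weakest` lookup mirrors the leading-indicator intent: a low-scoring measure flags a development risk before it is realised in cost or schedule.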
5.3.3. Process maturity
A framework for process maturity is the 'Project Definition Rating Index' or PDRI [83][84]. Maturity can apply both to the development, in terms of where it is within the process, and to the technology that is to be used within the system of interest. There will necessarily be some overlap and influence between the two. Though not directly related to development health or schedule, some inference can be made as to their status. For instance, in an extreme case, technology of low maturity which is approaching validation suggests either problems within the development process, or badly planned or incompletely executed activities leading up to that particular point in time.
Of course there is still much potential for subjectivity within the scoring of the PDRI framework, but this can be further refined by providing individual guidance for each criterion, as is documented in Kline et al's development health methodology. The result of the scoring can then be compared with the 'maturity value rating' and 'qualitative criteria' [84]. A maturity rating that does not correspond with the particular phase of the development currently in progress indicates a concern. This can be further analysed, by looking at the individual criteria, to determine the areas where the issues reside. Though similar in methodology to Kline et al's method, it is a lagging indicator as it essentially observes the status of the development at a particular point in time. Again, a review at the beginning of a development phase or every three or four months would be appropriate. Process maturity measures relate directly to the complexity criteria of the development process, but also indirectly to the three complexity criteria.

Figure 17. Maturity value rating criteria [85].
5.3.4. System maturity
A popular method of assessing the system within the United States, and especially within the military and space industry domains, is the Technology Readiness Assessment (TRA), also known as the Technology Readiness Level (TRL). Specifically, TRL has been developed for use by both the US Department of Defense and NASA [86][87][88]. TRL assesses the maturity of the system as a means of analysing development risk and is used in a similar way to the PDRI methodology: if the TRL is below that required for the development phase, there are serious risks to the realisation of the technology, which will ultimately be reflected in an inability to verify or validate it. TRLs generally have ratings between TRL 1 and TRL 9, with subjectivity of analysis constrained through the use of textual guidance. TRL 1 is the least mature and TRL 9 denotes technology proven within its intended operational environment. In both of these applications of TRL, a team of subject matter experts assists in providing the actual rating, which forms the basis for stop/go type decisions by a board of appropriate managers at predetermined points in the development, such as development stage boundaries. TRLs relate directly to the complexity criterion of technology. A development of TRL, and an attempt to address TRL's inherent limitations, is the System Readiness Assessment or SRA. SRA provides a 'whole system perspective' [89]. It assesses all system components and their integration, along with external dependencies, and is designed to be undertaken more frequently than assessments using TRLs alone [89]. TRLs are assigned to the system of interest as before, along with an Integration Readiness Level or IRL. Criteria are provided to guide the assessment, as per the traditional TRL method. From these, three System Readiness Level metrics are derived through calculation.
The Component SRL looks at individual components and how they are integrated, to identify elements of the system that are lagging behind the system as a whole. The Composite SRL is concerned with the overall integration of the system, and is converted into an integer between 1 and 9, called the SRL [88]. Assessments and calculations are performed across the system architecture and the ratings are similar to those of TRLs. The IRL relates to the complexity criterion of internal interfaces, while the SRL relates to the three system development complexity criteria as a whole.
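As a rough illustration of how Component and Composite SRLs might be calculated, the sketch below follows the commonly cited matrix formulation (normalised IRL values multiplied by normalised TRLs and averaged). The exact normalisation, the example TRL/IRL values and the convention of taking a component's self-integration as 9 are assumptions of this sketch, not a definitive statement of the SRA method.

```python
# TRLs and IRLs are on a 1-9 scale; irl[i][j] is the integration readiness
# between components i and j, with irl[i][i] taken as 9 by convention here.
trl = [7, 5, 8]          # TRLs of components A, B, C (illustrative values)
irl = [
    [9, 4, 6],
    [4, 9, 3],
    [6, 3, 9],
]

def component_srls(trl, irl):
    """Per-component SRL: normalised IRL x normalised TRL, averaged."""
    n = len(trl)
    srls = []
    for i in range(n):
        total = sum((irl[i][j] / 9) * (trl[j] / 9) for j in range(n))
        srls.append(total / n)          # value in [0, 1]
    return srls

def composite_srl(trl, irl):
    """Overall SRL as the mean of the component SRLs, in [0, 1]."""
    srls = component_srls(trl, irl)
    return sum(srls) / len(srls)

print([round(s, 2) for s in component_srls(trl, irl)])
print(round(composite_srl(trl, irl), 2))
```

In this example component B lags the others, reflecting both its lower TRL and its weaker integrations, which is exactly the lagging-element signal the Component SRL is intended to give.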
5.3.5. Organisation, process and schedule complexity
DSMs were the subject of discussion in Section 4.2 and are useful both in defining areas of the organisation, process or schedule that require particular attention and in providing useful measures as a basis for management action. As they relate to the plan they are reasonably static in nature and as such lend themselves to plan validation activities at the beginning of a development phase or at a new iteration of the schedule. The use of DSM-related performance measures was discussed previously in Section 4.2.5, which suggested calculating median and peak quantities for measuring plan complexity. Observed DSM primary measures can either be trended through successive iterations or compared against acceptable absolute values determined through previous developments. These can consider either the phased activities as a whole or the current schedule critical path. The secondary measures of likelihood of change across a feedback cycle and vulnerability to change propagation could be used to inform rescheduling or other interventions undertaken to reduce development risks to acceptable levels. The basic properties of the DSM itself can be used to provide an insight into the schedule and its complexity. The relative proportions of sequential, parallel, coupled and conditional activities within a development phase can be used to provide an indication of its status. A high incidence of coupled activities especially gives a clear indication of complexity. Again, this could be applied against the critical path activities only, to provide information on these priority activities. Finally, Kreimeyer and Lindemann [73] list four metrics using the language of DSM optimisation activities:
- Sequencing – the number of 'ideally sequenced' activities within the DSM, i.e. sequential activities;
- Tearing – the number of activities that have been subject to tearing because they were a barrier to sequencing;
- Banding – the number of activities that are independent of each other, i.e. parallel activities;
- Clustering – the number of 'mutually related', i.e. coupled, activities.

As a way of identifying areas of concern within the plan this is a leading indicator and should prompt replanning, such as further optimisation of the DSM. Process and organisational-architecture DSM-related performance measures are concerned with controlling the development process and internal organisation complexity criteria.
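The proportions of sequential, parallel and coupled activities can be derived directly from a binary DSM, as sketched below. The example matrix is invented, and conditional activities are omitted from this minimal classification.

```python
# dsm[i][j] == 1 means activity i needs information from activity j.
dsm = [
    [0, 0, 0, 0],
    [1, 0, 1, 0],   # B depends on A and C ...
    [0, 1, 0, 0],   # ... while C depends on B: B and C are coupled
    [0, 1, 0, 0],
]

def classify_pairs(dsm):
    """Count sequential, parallel and coupled activity pairs in a binary DSM."""
    n = len(dsm)
    counts = {"sequential": 0, "parallel": 0, "coupled": 0}
    for i in range(n):
        for j in range(i + 1, n):
            if dsm[i][j] and dsm[j][i]:
                counts["coupled"] += 1      # mutual dependency: feedback loop
            elif dsm[i][j] or dsm[j][i]:
                counts["sequential"] += 1   # one-way dependency
            else:
                counts["parallel"] += 1     # independent activities
    return counts

print(classify_pairs(dsm))
```

A rising share of coupled pairs between schedule iterations would be the clearest numerical signal of growing plan complexity noted in the text.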
5.4. The verification of performance measures through questionnaire
The performance measures chosen from the literature reviewed during the project were verified with the practitioner group as part of the questionnaire described in Chapter 3 above. The objective was to rank their influence and to compare them with the most widely used performance measure, from the EVM methodology. The questionnaire can be seen in Appendix E, with performance measures relating to questions 13 and 14. Results, ranked by influence, are shown in Appendix F. Influences were generally lower than those attributed to CSFs, with results across the entire 122 respondents ranging from 3.04 (just above medium influence) for the use of PDRI to 3.77 (with 4.0 indicating high influence) for measurement of the requirements process. This may suggest a general dissatisfaction with the identification and practical use of performance measures, in agreement with the author's own experience. No other measures were suggested for use, providing some assurance that the literature survey was comprehensive in its review of the subject. The measurement of the realisation of stakeholder and system requirements was also rated highly, just below the top performance measure at 3.76. Measurement of plan complexity, through determination of the proportions of sequential, parallel, coupled and conditional activities on the schedule, was third with 3.69. This can be achieved through deeper analysis of planning through traditional Gantt charts or, as advocated here, using DSM methodologies. EVM did surprisingly poorly, at eighth of the twelve measures considered with a rating of 3.56. It did receive the third highest number of 'very high' ratings, but suffered many lower scores which offset this. This suggests the high value of integrating requirements management into a project, and that EVM is not sufficient alone. This is not reflected in practice, where Earned Value is often the only performance measure used and requirements receive scant attention within project management methodologies [4]. Complexity measurement using heuristics against the WBS elements also rated relatively poorly, though only just below EVM at 3.48. This supports the author's opinion that absolute values of complexity are of limited value alone but support the subsequent sub-processes; their value is qualitative rather than quantitative.
There were possible shortfalls in the survey, and barriers to performance measure use, that influenced the individual scoring of some of the performance measures. Providing only a brief description, without additional explanatory text, may have created a bias against several of the measures. The PDRI project maturity and project health measures scored very poorly, and this, along with the apparent effort of application (200 individual criteria for PDRI, and 10 high-level and 50 low-level organisational and process characteristics for process health), may have depressed the final scores. Further research would be needed to confirm or disprove this. Two comments from question 14 provided further validation of the author's approach and initial observations: "Keep the complexity measures as simple as possible!" "Many forms of performance measures out there, I think you have captured most, personally I don't rate them and although their aim or managing system interfaces and relative maturity thereof is correct - they generally don't add much value. There would definitely be mileage in getting back to basics, systems definition, boundary limit definition and design review."
5.4.1. Performance measures by age, role and industry
Findings were consistent across age, role and industry, especially for the top three rated performance measures across each dataset. Considering the top three performance measures, notable exceptions were as follows. Earned Value Management was considered the third most influential performance measure by the over-65s; this elevated the importance of EVM above any other sub-dataset. The measurement of percentage completion of project artefacts and the measurement of IRLs were considered the most important and second most important performance measures respectively by 'engineering personnel'; these reflect the concerns and challenges that are particularly faced by engineering disciplines over other project personnel. 'Defence' valued TRL, IRL and SRL over all other performance measures, which was not particularly surprising when it is considered that these measures originated from the defence sector.
5.4.2. Conclusion
The findings of the questionnaire support the assertion that further work is required to develop performance measures that better represent the challenges faced by today's projects. Requirements in particular should receive greater attention, which in turn will encourage their better management. Plan complexity is the other measure highly worthy of development. The measures of requirements and plan complexity, used in conjunction with EVM, should provide an excellent way of monitoring, and thus controlling, many project types. This should be supplemented with TRL and IRL methodologies when the novelty of the technology dictates, though this of course needs to be considered against the resource and cost overheads involved in such an undertaking.
5.5. The collective use of performance measures
The use of the Balanced Scorecard [78] is much publicised within industry and has been further refined into 'The DMI Design Value Scorecard' [90]. These concepts provide little information as to what and how measurements are taken, but the principle of presenting together a broad selection of relevant measures has merit. Using the broad principles identified through the concept of the Balanced Scorecard, it is proposed that a Development Scorecard could be developed. This would present a variety of metrics together to best represent status across the breadth and depth of the development. It would present a high-level view of the development for director-level management, along with a text-based summary to provide context and indicate underlying areas of concern that may otherwise be hidden. Below this, the measures would be presented at a number of levels of decomposition. Strategic measures would be primarily of interest to the management leading the development, to provide data on particular areas of concern within the process or organisation. Tactical measures would provide feedback for team leaders and individuals to make interventions at the level of the work being undertaken. The performance measures should be chosen to address concerns raised during the complexity assessment with regard to areas of high complexity or criticality. For example, the following measures could be combined in a development scorecard:
- Plan complexity, consisting of:
  o Average uncertainty, ambiguity, emergence and non-linearity as derived from DSM (leading);
  o Proportion of sequential, parallel, coupled and conditional activities within the DSM (leading);
- Conformity with the plan, consisting of one or more of the following:
  o Earned Value Management: Cost Performance Index (lagging) and Schedule Performance Index (lagging);
  o Overall progress using heuristics, with guidelines to aid analysis of individual development artefacts;
  o Actual resource and capability compared to plan;
- Process, consisting of:
  o Measures such as numbers and processing times for engineering change control requests and quality non-conformances;
  o Process health using the Kline et al method (leading);
  o Process maturity using PDRI (lagging);
- Requirements, consisting of:
  o Realisation – relating to MOE, MOS, MOP and TPM (lagging);
  o Execution – relating to selected requirements metadata (leading);
- System maturity, consisting of:
  o Technology Readiness Level (lagging);
  o Integration Readiness Level (lagging);
  o System Readiness Level (lagging).

Bold type indicates measures assessed as having the greatest influence in the questionnaire survey. The high-level measures are in italics and would be rated from very low to very high; strategic measures are contained below each of these. This would provide coverage of most of the complexity criteria.
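By way of illustration, the scorecard's EVM strand could be computed as below. The CPI and SPI formulas (earned value divided by actual cost and by planned value respectively) are standard EVM; the five-point rating bands and the monetary figures are invented for this sketch.

```python
# Standard EVM indices: CPI = EV / AC (cost efficiency), SPI = EV / PV
# (schedule efficiency). Values above 1.0 are favourable.
def cpi(ev, ac):
    return ev / ac

def spi(ev, pv):
    return ev / pv

def rate(index):
    """Map an EVM index to the scorecard's five-point scale.
    The band thresholds are illustrative, not standard."""
    bands = [(1.05, "very high"), (1.0, "high"), (0.95, "medium"), (0.9, "low")]
    for threshold, label in bands:
        if index >= threshold:
            return label
    return "very low"

ev, ac, pv = 420_000, 470_000, 450_000   # hypothetical figures
print(f"CPI {cpi(ev, ac):.2f} -> {rate(cpi(ev, ac))}")   # behind on cost
print(f"SPI {spi(ev, pv):.2f} -> {rate(spi(ev, pv))}")   # behind on schedule
```

Both indices are lagging, as noted above; in the scorecard they would sit alongside the leading DSM-derived measures rather than replace them.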
6. Managing system development risks

6.1. Introduction
'Management of risk should be systematic and not based on chance. It is about the proactive identification, assessment and control of risk that might affect the delivery of the project's objectives' [53]. The process described so far identifies ways of analysing the technical development for areas of concern, against which effort can best be directed. An important part of this is identifying development risks, which is traditionally achieved through a risk management process using techniques such as workshops and risk reviews. This process can supplement that approach, principally in the identification of risks, as it prompts analysis of various aspects of the development at varying levels of decomposition. The initial complexity assessment is used to indicate the areas of particular complexity and the CSFs necessary to increase the likelihood of success. Residual risks will remain should the CSFs be imperfectly implemented or only partially address the raised concern; these risks can then be placed upon the risk register. This initial complexity assessment is likely to generate high-level risks only, due to the low degree of decomposition possible and the limited detail available on the development process. Successive iterations of the complexity assessment, however, will allow risks to be targeted, and existing risks will be elaborated upon, confirmed or dismissed as not relevant. The planning process will identify still more risks, focussed upon particular types of interactions or clusters of activities. Again, these should be compared to risks identified earlier as a means of improving the quality of the information on the risk register. A recognised and respected approach to managing risk is contained within the PRINCE2 project management methodology. PRINCE2 bases much of this approach on Management of Risk: Guidance for Practitioners, as published by the UK Office of Government Commerce. That guidance follows a number of important principles, which are satisfied by risk management through the complexity management process as follows [53]:
- Understand the project's context – complexity assessment looks across the breadth of the technical development;
- Involve stakeholders – achieved at complexity assessment and particularly during DSM planning activities;
- Establish clear project objectives – as highlighted during the initial complexity assessment;
- Develop the project risk management approach – partially satisfied through the complexity management process in conjunction with the general risk management process;
- Report on risk regularly – dependent on the frequency of complexity assessment and planning iterations (possibly partially satisfied);
- Define clear roles and responsibilities – as highlighted during the initial complexity assessment (possibly partially satisfied);
- Establish a support structure and a supportive culture for risk management – through an independent risk management strategy;
- Monitor for early warning indicators – through complexity management process monitoring activities;
- Establish a review cycle and look for continual improvement – partially satisfied by complexity assessment and planning iterations.

Incorporation of the contribution of complexity management within the risk management strategy will provide additional support to the process and prevent duplicated effort. Complexity management will aid the early identification of risks.
Conventional risk management techniques are well described in the literature and in risk and project management methodologies. Rather than define a particular approach to be used alongside complexity management, only a few relevant points will be discussed here. A template for the risk register, the artefact ultimately produced by any risk management process, is contained within Appendix M.
6.2. Important concepts
The accommodation of risk management alongside complexity management should not present too much additional effort. Two components of the process are particularly important. The classification of risks can be undertaken using a pre-determined risk breakdown structure or RBS [53]. The RBS is a hierarchical structure following the broad principles of other breakdown structures such as the WBS. The RBS contains successive levels of decomposition, with as many levels as is useful for the purposes of risk management. Examples of high-level risk categories within an RBS are 'schedule', 'resource', 'technical' and 'licensing' [91]. Below these, other risk categories decompose risks still further, sub-divided as required; other options include categorising by function or discipline. The goal of such an exercise is to ensure the consistency of risk category assignment throughout the project and to allow the trending of risks, showing where risk particularly resides within the development. The artefact ultimately generated by the risk management process is the risk register; an example of the attributes that can be assigned to each risk is shown in Appendix N. The risk matrix, against which risk likelihood, impact and priority are determined, is highly dependent on the magnitude and type of development as well as the organisation's risk tolerance. As such, it should be defined for each organisation or development process.
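A simple lookup can make such a matrix executable. The 5x5 grid and its priority labels below are purely illustrative since, as noted, the real matrix must reflect the organisation's risk tolerance and the nature of the development.

```python
# Illustrative 5x5 risk matrix: likelihood and impact both scored 1-5.
PRIORITY = [
    # impact:  1          2          3          4           5
    ["low",    "low",    "low",    "medium",   "medium"],    # likelihood 1
    ["low",    "low",    "medium", "medium",   "high"],      # likelihood 2
    ["low",    "medium", "medium", "high",     "high"],      # likelihood 3
    ["medium", "medium", "high",   "high",     "critical"],  # likelihood 4
    ["medium", "high",   "high",   "critical", "critical"],  # likelihood 5
]

def risk_priority(likelihood, impact):
    """Look up the priority for 1-5 likelihood and impact scores."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    return PRIORITY[likelihood - 1][impact - 1]

print(risk_priority(4, 5))
print(risk_priority(2, 2))
```

Each register entry would then carry its likelihood and impact scores, with the priority derived consistently rather than judged case by case.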
7. Complexity orientated development framework

7.1. Integrating the sub-processes together
All the themes discussed in the previous sections can be gathered together into a single process. The process map, shown in Figure 18, represents interactions with the important process activities commonly undertaken in projects and technical development. It also shows the relationships between complexity assessment, determination of CSFs, detailed planning, definition of performance measures and risk. The complexity management framework fulfils and expands upon the high-level objectives described on page 8 in Section 1.6 in the following ways:
1. A top-down assessment of complexity and measures to best manage it:
   a. Derive a strategy, including the proposed frequency and depth of detail during the system development lifecycle;
   b. Undertake the complexity assessment at the available or appropriate level of decomposition;
   c. Record areas where complexity assessment is considered unnecessary;
   d. For complexity 'hotspots', derive CSFs;
   e. Ensure that the relationships between complexity, the derived CSFs and anticipated outcomes are articulated for later review;
   f. With the CSFs in place, review complexity to ensure that hotspots have not simply been displaced to another theme or aspect:
      i. Make appropriate changes to the assessment and CSFs as necessary;
      ii. Repeat if absolutely necessary;
      iii. Determine the frequency of review;
2. A bottom-up assessment of complexity using information derived from planning exercises:
   a. Derive the strategy, rules and conventions for planning through DSM;
   b. Apply the dependency planning technique to areas of the development as appropriate;
   c. Develop the DSM and MDM to the required level of detail;
   d. Assess the complexity of the DSM and compare with the complexity assessment findings;
3. Application of dependency planning techniques against areas of particular complexity:
   a. Determine the level of decomposition and type of DSM to be employed;
   b. Identify, analyse and display the DSM, optimising it as required;
   c. Feed the outputs of the DSM into the creation of the Gantt chart schedule;
   d. Determine the frequency of review;
4. Application of performance measures according to the type and depth of complexity:
   a. Determine the 'goal' of the performance measures and the 'questions' to be asked and answered by them, depending on complexity;
   b. Define the metadata required to enable performance measures to be calculated in areas such as requirements management and development artefact production;
   c. Determine the mechanisms to collect metadata, including IT requirements;
   d. Determine the frequency of collection of performance measures, as dictated by the nature of the measure and the requirements of the development;
5. Contribution to risk management:
   a. Determine how risks identified by complexity management can be integrated into the risk management strategy;
   b. Derive an RBS to allow consistent categorisation of risks;
   c. Identify risks through the determination of CSFs and planning activities;
6. Repeat activities at predetermined intervals.

The individual activities upon the process map are colour coded to reflect whether they are currently undertaken in technical development and unaffected by complexity management (grey) and those
proposed within this framework (green). An additional colour (amber) shows activities which currently exist but would be modified under this framework. The first activities are the definition of the strategy and conventions for the various sub-processes within the framework, including any criteria required to allow a Stage Gate to be passed. It is envisaged that the initial complexity assessment will be prompted by an early Stage Gate type decision point in the development lifecycle, or by the beginning of a development phase. A pre-requisite for the complexity assessment is the existence of a high-level WBS, with higher levels of decomposition allowing more detailed analysis of complexity. Subsequent complexity assessments will be to a higher level of decomposition within the WBS. Each WBS element will be assessed against the following complexity themes, within three categories:
1. Internal factors:
   a. (Project) environmental constraints;
   b. Development process;
   c. Internal organisation;
2. External factors:
   a. Contractual management;
   b. Stakeholders;
   c. Regulatory interfaces;
3. System development:
   a. External (system) interfaces;
   b. Technology;
   c. Internal (system) interfaces.

Each theme will be considered against each of the five Complexity Criteria, and each of these criteria will in turn be rated from very low to very high. This process is repeated for each of the WBS elements to the level of decomposition required. Any complexity theme that cannot be rated will be assigned 'not applicable'. A template for the assessment is contained within Appendix B. The complexity assessment will be subject to additional iterations following the definition of CSFs until the WBS is fully assessed. This will include any changes prompted by the identification of a CSF that may inadvertently and unknowingly increase complexity in any part of the development. The complexity assessment will also be subject to iteration at the beginning of each development phase or, for long development phases, at predetermined intervals of, say, six months. The identification of appropriate CSFs is the next activity, immediately after the complexity assessment is undertaken. The first consideration is the areas of the development that are likely to require more detailed planning and closer monitoring of progress and status. CSFs are then assigned according to areas of vulnerability due to particular complexity within the WBS. The list of CSFs is shown in Appendix C. This should be amended and updated over time according to lessons learned and the demands of the development type or organisation. This will be aided by reflection on the efficacy of the CSFs at the end of the development phase, or before the next complexity assessment, as shown on the process map. Activities prompted by the generation of CSFs will include identifying areas of the development that would benefit from the application of DSM, and both the goals of monitoring and the questions that should be asked and answered by the performance measures. Consideration of CSFs will also facilitate the identification of risks. Development risks should be consistently categorised, and an RBS will need to be produced to ensure this is done.
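Hotspot identification from ratings of this kind might be sketched as follows. The WBS identifiers, themes and threshold are invented for illustration; only the very low to very high scale and the 'not applicable' value come from the text.

```python
# Map the textual ratings to an ordinal scale; 'not applicable' has no entry.
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

# Hypothetical (WBS element, theme) ratings from a complexity assessment.
assessment = {
    ("WBS-1.2", "Technology"):            "very high",
    ("WBS-1.2", "Internal organisation"): "medium",
    ("WBS-2.1", "Stakeholders"):          "high",
    ("WBS-2.3", "Development process"):   "not applicable",
}

def hotspots(ratings, threshold=4):
    """Return (WBS element, theme) pairs rated 'high' or above,
    skipping anything assigned 'not applicable'."""
    return sorted(
        key for key, rating in ratings.items()
        if rating in SCALE and SCALE[rating] >= threshold
    )

print(hotspots(assessment))
```

Each flagged pair would then prompt the derivation of a CSF, with the threshold itself a matter for the strategy defined at the start of the framework.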
Figure 18. Complexity Management Framework.
7.2. Analysis of framework against criteria within existing literature
In this section the complexity management framework is compared with the recommendations and criteria for successful projects discussed within the literature survey, drawn from a wide variety of sources. Any obvious deficits are discussed, and additional measures proposed that may benefit the future development of the complexity management framework. Other recommendations should be satisfied by activities outside the scope of this framework. The 2012 'thisiswhatgoodlookslike' Gartner survey [92] made five high-level observations relating to high-level project strategies. The first concerned the size, complexity and duration of individual projects. Determination of the size and duration, and to a lesser extent the complexity, of the overall project is very often outside the control of the technical development, and indeed of the project itself. Complexity can, however, be controlled once it is recognised, and at the very least prevented from growing unduly. The survey states that a project must 'stay on top of costs' and that there should be measures for the early identification of cost overruns. This can be achieved through the adoption of leading performance measures that can identify delays to the development which inevitably lead to additional costs through acceleration of the schedule. The potential for rework can also be highlighted, which again can have significant associated costs. A realistic schedule can best be achieved through better planning. The targeted use of DSMs identifies coupling behaviours and dependencies, allowing both optimisation of the schedule and improved representation of the schedule when used as an input to the production of Gantt charts. Using the measures previously described, this at the very least provides an early warning of discrepancies between early high-level schedule commitments and the likely actual schedule.
The fourth of the Gartner survey’s recommendations related to stakeholder and system requirements and the funding that their full realisation dictates. Requirements management is not within the scope of this dissertation. However, the assignment of requirements-related performance measures and enhanced planning, alongside project management of the budget, should assist in this. Additionally, effective and rigorous requirements management would very likely be a development CSF which could then be monitored over time. The last recommendation refers to frequent ‘project status and review meetings’ and to ensuring alignment of requirements with the business case. Better use of performance measures, as discussed within the dissertation, should facilitate this. Ultimately the identification of a failing project, unlikely to satisfy the business case, may lead to its cancellation.

The Royal Academy of Engineering’s 2004 ‘The Challenges of Complex IT Projects’ [93] also had five high-level findings. ‘Lack of constraints’, relating to the definition of reasonable requirements, is outside the scope of this framework, but can again be partially mitigated through effective planning and performance monitoring. Complexity assessment should highlight where complexity is highest, which may indicate unrealistic expectations. This applies especially to the aspects of environmental constraints and technology, particularly where technological demands may be unduly constrained. Similarly, ‘Visualisation’ and ‘Flexibility’ will again be addressed by effective requirements management alongside the framework’s emphasis on performance monitoring. The impact of changes can also be readily assessed in portions of the development planned using DSM. High-impact change requests can then be identified and approvals made accordingly.
Lastly, both ‘complexity’ and ‘uncertainty’ can be assessed early using the top-down complexity assessment and later, as DSM planning is undertaken, using the bottom-up assessment. CSFs and detailed planning can both reduce and manage these properties, along with the inevitable changes to development requirements and scope. Denker [94] cites the components of ‘Colossal Complexity’, ‘Invisibility’, ‘Over-Optimism’, ‘Extreme Uncertainties from the Kickoff’ and ‘Rework’. Mirroring the factors mentioned previously, these can be managed through the recognition of development complexity via complexity assessment, the determination of ways to manage it through identification of CSFs, and through better planning. The paper specifically mentions the imbalance between complexity and imposed timescales. DSM should more completely identify the coupling and emergence that are features of complex schedules. Other themes are the lack of timely feedback
mechanisms and the identification of cause and effect; both are addressed through the targeted application of performance measures. Lastly, the reuse of development patterns can be partially addressed through the complexity assessment, the assignment of CSFs and the review of their efficacy for future use. While ‘control mechanisms’ and ‘feedback capabilities’ are directly addressed through the use of performance measures, the remainder of Jiang’s thirteen factors [95] should be addressed through general management arrangements. Deficits in Gantt chart schedules, as previously discussed, can be addressed through targeted planning, specifically through the use of DSMs and appropriate performance measures. Within these, critical activities that lie outside the critical path can be identified and coupling behaviours better represented. The use of appropriate leading and lagging performance measures can measure the quality of processes and their outputs. The high degree of ‘information requirements’ associated with planning is a function of Program-size Complexity; though it may be assessed, it can only be managed, through appropriate resources and technology, and not reduced. Limited modelling of the ‘behavioural aspects of management’ may be represented using a numerical organisational-architecture DSM or a process-organisation MDM. This does not preclude the use of other modelling techniques, such as OBM, when deemed appropriate.
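The leading-measure idea behind early cost and schedule warnings can be illustrated with a short sketch. The figures and the 15% threshold are invented for the example; any real measure would be tailored to the development in question.

```python
# Sketch of a leading schedule measure: compare cumulative planned vs
# actual deliverables per reporting period and flag a widening gap
# before the milestone itself slips. All figures are illustrative.
planned = [4, 8, 12, 16, 20]   # cumulative deliverables planned
actual  = [4, 7, 10, 12, 14]   # cumulative deliverables achieved

def schedule_warning(planned, actual, threshold=0.15):
    """Return (period, shortfall-fraction) pairs exceeding the threshold."""
    flags = []
    for period, (p, a) in enumerate(zip(planned, actual), start=1):
        shortfall = (p - a) / p
        if shortfall > threshold:
            flags.append((period, round(shortfall, 2)))
    return flags

print(schedule_warning(planned, actual))
```

Here the shortfall first breaches the threshold in period three, well before the final milestone, giving the early warning of acceleration costs and potential rework that the Gartner recommendations call for.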
8. Case study application of framework

8.1. Purpose
In this section the framework is evaluated by using it to assess an existing complex project: the development of Boeing’s 787 Dreamliner, drawing on publicly available information. The case study will illustrate the use of the framework and identify areas of potential concern that require process improvement or additional guidance. The WBS will necessarily be at a high, strategic level due to constraints of information availability and the effort required to undertake analysis of detailed WBS structures. It is therefore assumed that the assessment is being undertaken during the concept design phase and used to analyse development up to the point of manufacture. The initial complexity analysis will be followed by suggested CSFs and recommendations covering areas of planning, performance measurement and risk management. The assessment will be undertaken from the viewpoint of Boeing’s Dreamliner design team; the project was chosen for its well-publicised development and operational issues [96][97][98], the sheer quantity and variety of literature, its interesting characteristics and its first-of-a-kind complexity. The findings will be compared with the actual project outcomes and assessed for efficacy, demonstrating the process, albeit at a low level of detail. An additional case study, the construction of a nuclear power plant, is contained in Appendix Q. This project is distinctly different from the Boeing Dreamliner and identifies other aspects of the framework requiring development, while also demonstrating how the framework can be implemented.
8.2. Case study 1 – Boeing Dreamliner

8.2.1. Background
In the early 1990s Boeing required a new commercial aircraft to replace the aging 767. Two options were initially investigated: a sub-sonic cruiser with a speed of Mach 0.98 and fuel efficiency similar to the 767, and a design similar to the 747 with increased capacity, intended to compete directly with the Airbus A380. Neither was particularly well received by the commercial airline markets, and when fuel costs escalated after 9/11 the requirement for fuel efficiency became paramount. This led to the concept of the 787 Dreamliner in 2003 [96]. The requirements for the Dreamliner were as follows: lower fuel consumption; lower maintenance costs; longer range; improved passenger comfort. These necessitated the use of technology not previously used extensively within commercial airliners, including advanced composites, eventually amounting to more than 50% of the aircraft by weight [97], and the replacement of pneumatic and hydraulic systems with electric architectures [98]. Additionally, the fuselage was to be
manufactured in ‘barrel sections’ and integrated later, rather than being assembled by Boeing. These design decisions were taken early in the development cycle for reasons of overall weight and reduced system power requirements. Boeing’s initial estimate for the development was $6 billion over a four-year period, representing a significant investment. Manufacturing costs for the Dreamliner were also a significant factor if it was to be competitive and allow Boeing to make a return on its investment [99]. This prompted the adoption of a contracting strategy designed to make efficiencies in the development and manufacture of the aircraft while spreading the commercial risks [100]. It also allowed market expansion by allowing manufacturers in the countries that would buy the Dreamliner to participate in the project through ‘offset deals’ [100]. Contractual arrangements allowed Boeing to delay payment to partners until aircraft were delivered [97], including risk-sharing partnerships with payments depending on the final outcome. Manufacturers were only paid when the plane was certificated and delivered; Boeing thus avoided bearing many of the up-front non-recurring research and development costs, encouraged efficiency in the supply chain and could delay investment until later in the project lifecycle. In addition to higher returns on successful deployment of the aircraft, the manufacturers were also allowed to retain intellectual property rights over their contribution [100]. ‘Partner councils’ were established to aid the creation of an ‘integrated, modularized supply-chain’ and mirror the practices used in the military and business aircraft sectors [97]. The use of globally based suppliers ensured that the necessary expertise could be sourced and also allowed the development of many systems in parallel.
8.2.2. Work Breakdown Structure
To meet the considerable technological challenges and achieve cost efficiencies it was decided that design, development and manufacturing, including research and development, of most of the aircraft would be outsourced across the globe to Tier 1 suppliers, with final assembly undertaken by Boeing in the US. Ultimately this resulted in 70% of parts being manufactured by 50 companies in 28 countries [98]. The Tier 1 suppliers would in turn contract out manufacture and services to Tier 2 suppliers as appropriate. A WBS using systems engineering and project activities has been produced, decomposing the aircraft into systems and structures using the not-dissimilar Hyperion project [101] as a basis. Hyperion was chosen because it is within the aviation domain (an unmanned aerial vehicle in this case) and was developed internationally. This WBS has been adapted to include general commercial aircraft systems [102] and the Dreamliner’s more detailed ‘Global WBS’ [103]. This gives an emphasis to the technology and contracting components of the Dreamliner project, though common design phases have been included, among them a research and development phase that was undertaken by both Boeing and its Tier 1 partners. It is assumed that research and development would be undertaken after the concept had been agreed, allowing the development of a business case and the commencement of marketing and sales, but before scheme design. Based on the Dreamliner’s history, the concept would have been available in late 2003/early 2004, before the first order with All Nippon Airways was agreed. An alternative would have been to decompose the project with a higher emphasis on functions and processes using a combination of systems engineering and project management methodology. It should be noted that the contents of the WBS given below are not exhaustive, though they are complete enough to allow early analysis.
As can be seen, the level of detail increases through the project’s phases as the aircraft’s systems are defined.

Design
- Concept;
  o Aerodynamics;
  o Mass properties;
  o Composites;
  o Structures;
  o Fuselage;
  o Wings;
  o Doors;
  o Stabilisers;
  o Landing gear;
  o Propulsion;
  o Controls;
  o Electrical systems;
  o Communications;
  o Navigation;
- Research and Development;
  o Structures;
    - Centre wing box; Wing box; Wing to body fairing; Wing to body fairing components; Main landing gear well; Nacelles; Fin;
  o Fuselage;
    - Forward fuselage; Mid fuselage; Rear fuselage;
  o Wings;
    - Wing structures; Leading edges; Engine pylons; Fixed trailing edges; Moveable trailing edges; Fin leading edge; Rudder;
  o Doors;
  o Stabilisers;
    - Horizontal stabilisers;
  o Landing gear;
    - Landing gear structures;
  o Propulsion;
    - Engines; Gearboxes; Fuel;
  o Controls;
    - Flight deck; Flight controls; Speed reference system (SRS); Autopilot; Electronics; Engine controls;
  o Electrical systems;
  o Communication;
  o Navigation;
- Scheme (similar to R&D and not expanded);
- Detailed (similar to R&D and not expanded).

Manufacture (not expanded)
Integration (not expanded)
Certification (not expanded)
Test and commission (not expanded)

Management
- Project management;
- Financial management;
- Quality assurance;
- Legal;
- Shipping.
8.2.3. Complexity assessment
The timing of assessments should be aligned with project phasing and with project and development governance processes, including stage-gates and major design reviews. For the purposes of this case study a high-level assessment will be undertaken at the beginning and end of the design phase to highlight how the aspects of complexity evolve and to illustrate some potential issues. An assessment was completed against the high-level WBS at the concept, research and development and detailed design phases. These can be seen in Appendix O.

Concept design

During this phase it is assumed that the contracting strategy has been agreed in tandem with the development of the overall concept.

Environmental constraints - Uncertainty and Ambiguity have been assigned as very high, as contracts are still to be let to Tier 1 partners under the pain/gain sharing arrangements and the effect on funding is uncertain. Emergence, Non-linearity and Program-size are low at this early point in the project.

Development process - Both Uncertainty and Ambiguity have been assigned as very high, recognising the new way of working on this project and thus the immaturity of processes, including the unexpected interactions with the Tier 1 suppliers. Emergence, Non-linearity and Program-size are low at this early point in the project.

Organisation - Boeing’s organisation is likely to be smaller than on other comparable projects due to the Dreamliner’s outsourcing arrangements, but as this is a novel development there will be a high level of Uncertainty and Ambiguity as to how integration will be achieved. Emergence, Non-linearity and Program-size are low at this early point in the project.

Contractual management - There is considerable scope for error due to factors such as the number of suppliers, the use of Tier 2 and 3 suppliers outside the direct control of Boeing, location and time zones, and cultural and political factors. Uncertainty and Ambiguity have been assigned very high due to the enhanced potential for change or conflict.
Emergence, Non-linearity and Program-size are low at this early point in the project.

Stakeholders - In this instance the stakeholders are largely the prospective customers. As such, change relates to Uncertainty and Ambiguity in user requirements and these have been given a medium value as there is scope for changes. Emergence, Non-linearity and Program-size are low at this early point in the project.

Regulatory interfaces - Due to the technology levels the regulators’ responses may be highly uncertain and ambiguous, especially at the beginning of the development. In many areas the regulators may lack the essential competencies to effectively review and agree key principles within the design. Emergence, Non-linearity and Program-size are low at this early point in the project.

External interfaces - This aspect relates to entities such as communication, maintenance and the individual airports at which the Dreamliner would land or take off. The use of relatively novel technology creates some uncertainty, but changes should be relatively easily accommodated and the interfaces are relatively simple and well understood.

Technology - Both Uncertainty and Ambiguity with regard to technology are very high during the concept phase. Emergence, Non-linearity and Program-size are low at this early point in the project.

System integration - Changes to system integration could arise from the technology characteristics and will be very high before the outsourcing arrangements are fully known. Emergence, Non-linearity and Program-size are low at this early point in the project.

Research and Development

At the beginning of the research and development phase it is assumed that all the design and manufacture packages have been let to the global Tier 1 partners.

Environmental constraints - Boeing’s contractual arrangements with its suppliers were designed to share the pain/gain of the final outcome and shield Boeing from the potential variation in supplier costs.
As such, Uncertainty and Ambiguity have been assigned a medium rather than the high value which might have been
expected for such an early point in the project. As contracts have been let for the R&D phase, the effects of Emergence and Non-linearity resulting from imposed changes have been assigned medium rather than low. Program-size Complexity is low in terms of environmental constraints.

Development process - Both Uncertainty and Ambiguity have been assigned a high value, recognising the new way of working on this project and thus the immaturity of processes, including the unexpected interactions with the Tier 1 suppliers. Communication with the Tier 1 suppliers will be especially critical to ensure that the Dreamliner’s component parts and systems meet requirements and can be assembled. Emergence and Non-linearity have been assigned a medium value as there is potential that necessary changes to processes will be significant enough to compromise schedule and cost planning estimates. Program-size Complexity has been assigned high to reflect the process interactions between the various Tier 1 suppliers.

Organisation - Boeing’s organisation will be smaller than on other comparable projects due to the Dreamliner’s outsourcing arrangements, with an emphasis on integration and assembly over design and manufacture. Uncertainty and Ambiguity have been assigned a medium value to reflect the potential for changes that may result from changes in the development process described above. Emergence, Non-linearity and Program-size Complexity have been assigned a low status due to the relatively small size of the design team.

Contractual management - There is considerable scope for error due to factors such as the number of suppliers, the use of Tier 2 and 3 suppliers outside the direct control of Boeing, location and time zones, and cultural and political factors.
Uncertainty and Ambiguity have been assigned a very high value due to the enhanced potential for change or conflict, while Emergence and Non-linearity have been given a medium value to reflect the early stage in the development. Program-size Complexity has been assigned a high value due to the number of organisations and the scope of their management.

Stakeholders - In this instance the stakeholders are largely the customers, as suppliers, project team and regulators are included within the other aspects. As such, change relates to Uncertainty and Ambiguity of user requirements and these have been given a low value, while Emergence and Non-linearity have been assigned a medium value. Changes to the enhanced requirements for fuel efficiency, range, maintenance cost and comfort are not thought likely but would have a significant impact if they came about. Program-size Complexity is seen to be low.

Regulatory interfaces - Due to the technology levels the regulators’ responses may be highly uncertain and ambiguous, especially at the beginning of the development. In many areas the regulators may lack the essential competencies to effectively review and agree key principles within the design. This will be exacerbated by the outsourcing arrangements. Emergence and Non-linearity have been assigned medium values, while Program-size Complexity has been given a low value to reflect the relative simplicity of the regulatory interfaces.

External interfaces - External interfaces relate to entities such as communication, maintenance and the individual airports at which the Dreamliner would land or take off. The use of relatively novel technology creates some uncertainty, but changes should be relatively easily accommodated and the interfaces are relatively simple and well understood.

Technology - Both Uncertainty and Ambiguity with regard to technology are naturally high at the beginning of an R&D phase. However, much of this technology has been previously used within other domains and applications.
Emergence and Non-linearity have been assigned a medium value as there is potential that changes will be significant enough to invalidate the concept design. Program-size Complexity has been given a low value at this point in time.

System integration - Changes to system integration could arise from both the technology characteristics and the outsourcing arrangements that have been put in place. As such, both Uncertainty and Ambiguity have been assigned a high status. Changes could have a significant effect on the concept design even at this early stage. Lastly, the interfaces required for integration are onerous due to the outsourcing arrangements.
Scheme design

At the beginning of the scheme design phase it is assumed that R&D is complete and the contracting arrangements are well embedded.

Environmental constraints - These stay broadly similar to those identified in the R&D phase, though with an increase in Program-size Complexity.

Development process - Both Uncertainty and Ambiguity have been assigned a medium value, recognising the embedding and increasing maturity of the project processes. Changes required to the development process will have a greater impact in terms of Emergence and Non-linearity, so these are given a high value. Likewise, the process will become more complex to manage in terms of the number of components.

Organisation - This aspect will stay relatively stable, with changes to the organisation having a greater impact as the Boeing design activities increase in scale.

Contractual management - All factors are considered high due to the potential for change, the impact on other contractual arrangements and the number of contracts to be managed.

Stakeholders - The impact of stakeholders stays relatively stable, though changes in customer requirements or the markets could have an increased impact.

Regulatory interfaces - Confidence in the technology and contractual approach should increase, though changes brought about by regulatory changes and imposed constraints will increase in impact.

External interfaces - The impact of external stakeholders stays relatively stable, though changes could have an increased impact.

Technology - Both Uncertainty and Ambiguity with regard to technology will reduce while their impact increases. Technology will take more management as its scope is defined in more detail.

System integration - In a similar fashion to contractual management, integration will increase in all areas.
Detailed design

At the beginning of the detailed design phase it is assumed that scheme design is complete and most issues regarding management of the design phase are becoming apparent.

Environmental constraints - These stay broadly similar to the R&D phase, though with an increase in Program-size Complexity.

Development process - Both Uncertainty and Ambiguity have been assigned a medium value, recognising the embedding and increasing maturity of the project processes. Changes required to the development process will have a greater impact in terms of Emergence and Non-linearity, so these are given a high value. Likewise, the process will become more complex to manage in terms of the number of components.

Organisation - This aspect will stay relatively stable, with changes to the organisation having a greater impact as the Boeing design activities increase in scale.

Contractual management - All factors are considered high due to the potential for change, the impact on other contractual arrangements and the number of contracts to be managed.

Stakeholders - The impact of stakeholders stays relatively stable, though changes in customer requirements or the markets could have an increased impact.

Regulatory interfaces - Confidence in the technology and contractual approach should increase, though changes brought about by regulatory changes and imposed constraints will increase in impact.

External interfaces - The impact of external stakeholders stays relatively stable, though changes could have an increased impact.

Technology - Both Uncertainty and Ambiguity with regard to technology will reduce while their impact increases. Technology will take more management as its scope is defined in more detail.
System Integration - In a similar fashion to contractual management, integration will increase in all areas.
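The qualitative ratings above can be tabulated so that phase-to-phase trends are easy to compare. The sketch below is an illustration only: the ordinal scale and the two sample aspects are assumptions made for the example, not values taken from the Appendix O assessment.

```python
# Hedged sketch: encode the qualitative complexity ratings as ordinal
# scores so an aspect's trend across phases can be summed and compared.
SCALE = {"low": 1, "medium": 2, "high": 3, "very high": 4}

# Illustrative fragment of an assessment (two aspects, two attributes).
assessment = {
    "Contractual management": {
        "Concept": {"Uncertainty": "very high", "Emergence": "low"},
        "R&D":     {"Uncertainty": "very high", "Emergence": "medium"},
    },
    "Technology": {
        "Concept": {"Uncertainty": "very high", "Emergence": "low"},
        "R&D":     {"Uncertainty": "high",      "Emergence": "medium"},
    },
}

def aspect_score(aspect, phase):
    """Sum the ordinal scores for one aspect in one phase."""
    return sum(SCALE[v] for v in assessment[aspect][phase].values())

for aspect in assessment:
    trend = [aspect_score(aspect, ph) for ph in ("Concept", "R&D")]
    print(aspect, trend)
```

Summed scores like these should only ever support, never replace, the narrative justification for each rating, since the ordinal scale hides the difference between, say, rising Emergence and falling Uncertainty.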
8.2.4. Critical Success Factors
The CSFs will evolve throughout the development, though thought should be given to putting them in place in advance of when they are particularly required, and to the phasing involved. CSFs will be ranked and generally taken from the ‘high to very high’ rated CSFs from the full survey dataset (Appendix F). CSFs within the top three identified for each Complexity Theme will be shown in bold, with the chosen CSFs listed in order of priority. The full dataset was chosen in preference to the aviation industry dataset due to the latter’s low sample size. Where the CSFs are overwhelming in number it is expected that they can be further ranked in importance, with those assessed as having the greatest impact chosen in preference to the others. Furthermore, it would be expected that performance measures are chosen to demonstrate the implementation and impact of the CSFs. Examples of how these CSFs may be further tailored towards the Dreamliner project, or otherwise supported, are provided in italics, with the top three ranked CSFs identified in bold.

Concept design

At this point in the development lifecycle the emphasis is on managing uncertainty. At this stage ambiguity will also be high due to the technology, and there may be funding constraints prior to a commitment to long-term investment. Particular Environmental constraint related CSFs are as follows:

I. Clear realistic project objectives – planning undertaken with a defined minimum level of participation from the supply chain; plan complexity measured and managed through use of DSMs;
II. Composition of project team in terms of experience and capability – minimum contractual prequalification criteria for Tier 2 suppliers; Process-Organisation MDMs implemented.

The development process will need to be tailored to suit the particular challenges likely to be faced and also to manage the uncertainty and ambiguity that will be present.
There may be the opportunity to use methods from other domains or industries, such as the design of business aircraft with respect to the use of composite materials [104], or the manufacture of electric cars with respect to lithium-ion batteries. Development CSFs may be:

I. Critical activities are identified – Process DSM applied in areas of high complexity;
II. Performance measures tailored to monitor areas of criticality and uncertainty – DSM-related Performance Measures are implemented where required.

The organisation would also need to cope with uncertainty and ambiguity relating to the new technology and could be assigned the following CSFs:

i. Transparent definition of responsibilities – Process-Organisation MDMs implemented;
ii. Composition of development team in terms of experience and capability – as above.

Contractual management would be in its infancy at this point and would relate to the production of contractual documentation and the selection of potential Tier 1 partners. In terms of uncertainty and ambiguity the following CSFs are important:

i. Clearly understood contractual interfaces – Process-Organisation MDMs implemented;
ii. Project breakdown into logical packages – complexity considered before allocation.

However, this is one of the most critical aspects of the development, and decisions made at this point and written into the contracts will have far-reaching consequences. For this reason, CSFs need to be considered for the entire project lifecycle at this point in time. Neither the management of stakeholders nor external interfaces is among the project’s most challenging aspects, and no CSFs should be emphasised. Regulatory interfaces are subject to some uncertainty due to the new technology and its treatment by the various certification authorities across the globe. Early and thorough engagement would be vitally important:

i. Clearly identified and understood interfaces – MDM developed to model interfaces;
ii. Clear lines of communication with regulators.
Technology is one of the most important aspects of the development; while choosing mature technologies is not an option, simplification of the design should still be sought where possible to reduce uncertainty of outcome:

i. Pursuing as simple a design as possible – Requirements measurement is adopted;
ii. Test early, test often philosophy is used during development – TRL measurement is adopted.

System integration is the third important aspect, along with contractual management and technology, and should similarly be treated to reduce uncertainty and ambiguity. Some of the other CSFs should also be considered, though these will not realise benefits until far into the development lifecycle. Those relating to concept design are:

I. Test early, test often philosophy is used during development – IRL measurement is adopted;
II. System element maturity is monitored – IRL measurement is adopted.

These two elements were taken from the ‘energy generation’ high-influence CSFs, as the full dataset did not provide any relevant CSFs for the issues encountered. This is possibly a weakness in this Complexity Theme’s CSF set that could be improved through an increased sample size and increased representation from engineering personnel.

Detailed design

As the development progresses the emphasis changes from Uncertainty and Ambiguity to the impact of change through Emergence and Non-linearity. While the reduction of the former is still important, it is now the effect of Emergence and Non-linearity that will be felt most keenly. Alongside this, Program-size Complexity will also increase as the workload and the number of parallel and dependent activities increase. As such the CSFs will become more about the plan and managing change to control impact. Important too is managing the outputs of the design process and the methods of working used to manage the process as a whole.
Considering Environmental Constraints first:

i. Strong project sponsor/champion – use of a development-health type Performance Measure;
ii. Effective change management (project) – measurement of change control metadata is adopted;
iii. Proactive risk management process – risk management undertaken as a part of complexity management.

For the Development aspects some, if not all, of the following would be applicable to ensure changes are recognised early and readily assessed for impact. Some obviously overlap in their scope:

i. A well understood and mature design review process is in place;
ii. Technical risk management process – risk management undertaken as a part of complexity management;
iii. Effective technical change management processes – measurement of technical change control metadata is adopted;
iv. Development and project plans are properly integrated;
v. Effective monitoring/control of requirements and development deliverables.

The organisation would need to consider the multi-organisational working methods and global span of the project to ensure timely and optimised decision making:

i. Degree of collaboration – use of a development-health type Performance Measure;
ii. Coherent, self-organizing teamwork – use of a development-health type Performance Measure.

Similarly, contractual management should consider how the process and outputs can best be managed, and as discussed above it is important that these are considered at the time of contract production:

i. Clearly understood contractual interfaces – Organisational DSMs are adopted;
ii. Good performance by suppliers/contractors/consultants;
iii. Effective monitoring/control – Development Scorecard approach is adopted.

Stakeholders have been considered as having a high impact on the development at this stage of the design.
Changes in requirements can have a serious effect on the project lifecycle. Specific CSFs surround the effective and timely communication and management of stakeholders.
i. Client/user acceptance – Requirements measurement is adopted;
ii. Decisions are agreed and documented;
iii. Early identification and management of conflicting interests – Requirements measurement is adopted;
iv. Active management of client/user integration – MDMs adopted for critical systems.
External interfaces, while important, are not viewed as being as critical as other areas. Regulatory interfaces, technology and integration have the potential to have an increased impact, though a lower likelihood of change, as the development proceeds and the technology and design mature. As such the CSFs will remain the same, as there is little that can be done to reduce the impact of emergent change other than to identify it early.
8.2.5.
Planning technique
The sheer size, scale and criticality of the project would lend itself to the use of a combination of Design Structure Matrices. These could be targeted at particular areas of development and interfaces based on risks arising from the constraints of schedule and relative criticality, both technically and in relation to the overall project. From the Criticality Assessment the three areas that deserve particular attention are management of Tier 1 suppliers, technology and integration. All have particular challenges arising from the project and development strategies:
An Organisation architecture DSM to consider how Boeing and the Tier 1 partners interact, taking into account all interdependencies and the constraints arising from their diverse geographical locations;
A Process architecture DSM to focus on particular groups of activities from the WBS. This might particularly look at integration or assembly type activities and dependencies between Tier 1 suppliers;
An MDM to look at particular parts of the development, used to plan the interaction between organisations and show the impact on the wider schedule.
Full integration of traditional scheduling and DSMs would allow a seamless blend of the two techniques to be used, with areas of contractual interfaces, technology-centric development and integration activities being prime candidates for binary and numerical DSMs. Further Complexity Assessment of the WBS could be undertaken at the next lower level or levels to identify areas of particular complexity. From the project's background information, candidates for enhanced planning include the following:
Electrical systems, including the lithium-ion batteries [105];
Composite material development, and especially construction of the barrel sections of the fuselage;
Design integration activities;
Aircraft assembly, including Tier 1 partner dependencies;
Certification-related activities, such as quality assurance, across the entire development.
Enhanced planning could be chosen in other areas, depending on the WBS and on the most recent Complexity Assessment findings. The scale and length of the development may justify the development of specific tool support.
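As a minimal illustration of how a binary process DSM could feed schedule sequencing, the sketch below partitions a small matrix into stages of activities that can proceed in parallel and flags any coupled (mutually dependent) block for iterative planning. The activity names and dependencies are invented for demonstration and are not taken from the case study.

```python
# Illustrative binary DSM partitioning. dsm[i][j] = 1 means activity i
# depends on an output of activity j. Activities whose inputs are all
# satisfied are scheduled as a stage; any unschedulable remainder is a
# coupled (cyclic) block that would need iterative planning.
def partition(dsm, names):
    remaining = set(range(len(names)))
    stages = []
    while remaining:
        ready = [i for i in remaining
                 if all(dsm[i][j] == 0 or j not in remaining
                        for j in range(len(names)))]
        if not ready:                      # no activity is free: a cycle
            stages.append(("coupled", sorted(names[i] for i in remaining)))
            break
        stages.append(("stage", sorted(names[i] for i in ready)))
        remaining -= set(ready)
    return stages

names = ["Wing design", "Fuselage barrels", "Systems integration", "Assembly"]
dsm = [[0, 0, 0, 0],   # wing design has no inputs
       [0, 0, 0, 0],   # fuselage barrels: parallel with wing design
       [1, 1, 0, 1],   # integration needs wing, fuselage and assembly feedback
       [0, 0, 1, 0]]   # assembly needs integration output -> coupled pair

for kind, group in partition(dsm, names):
    print(kind, group)
```

The coupled block is exactly the kind of area the text suggests targeting with enhanced planning effort, since its activities cannot be linearly sequenced in a Gantt chart without iteration assumptions.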
8.2.6.
Performance measurement
To meet the challenges of contractual management, technology and integration the following measures could be combined into a Development Scorecard across the entire project:
Conformity with the plan, consisting of:
o Earned Value Management Cost Performance Index (lagging);
o Schedule Performance Index (lagging);
System maturity, consisting of:
o Technology Readiness Level (lagging);
o Integration Readiness Level (lagging);
Process Health, to measure areas such as leadership and collaboration.
The outsourced nature of the project makes collecting information challenging, and quality assurance of Tier 1 supplied information would be required. Other measures could be applied, with discretion, in areas of particular concern; these include:
Plan complexity, consisting of:
o Proportion of sequential, parallel, coupled and conditional activities within the DSM (leading);
Requirements, consisting of:
o Realisation – relating to MOE, MOS, MOP and TPM (lagging);
o Execution – relating to selected requirements metadata (leading).
Such data would need to be trended over time rather than compared with data across the entire project, and would be used where there are particular technical risks or evidence of poor project performance. This would provide a mix of leading and lagging measures for the purposes of monitoring, control and reporting.
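The lagging Earned Value elements of the proposed Development Scorecard reduce to two standard ratios; the sketch below is a minimal illustration using invented figures, not actual project data.

```python
# Hedged sketch of the lagging Earned Value measures in the proposed
# Development Scorecard. EV, AC and PV figures are invented for illustration.
def cpi(earned_value, actual_cost):
    """Cost Performance Index: EV / AC (>1 means under cost)."""
    return earned_value / actual_cost

def spi(earned_value, planned_value):
    """Schedule Performance Index: EV / PV (>1 means ahead of schedule)."""
    return earned_value / planned_value

ev, ac, pv = 120.0, 150.0, 100.0   # illustrative values (e.g. $m of work)
print(f"CPI = {cpi(ev, ac):.2f}")  # 0.80 -> over cost
print(f"SPI = {spi(ev, pv):.2f}")  # 1.20 -> ahead of schedule
```

Trending these ratios over time, as the text recommends, matters more than any single reading, since both are lagging indicators.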
8.2.7.
Risk management
The identification of development risks will not be undertaken in detail due to lack of data. It is not difficult to appreciate that there will be significant risks in the areas of contractual management, technology and integration. Detailed descriptions, risk responses and related information would naturally fall out of more detailed analysis and planning activities. The creation of a hierarchical Risk Breakdown Structure would be an important early activity in the formation of the risk management strategy. One suggestion for the high-level nodes (level 1) of the RBS could be 'schedule', 'resource', 'technical' and 'certification'. Level 2 nodes would include contracting and Boeing (below resource) and technology and integration (below technical). Certification could be divided amongst particular systems within the aircraft. With the high reliance on Tier 1 Partners it might be appropriate also to include this as a level 1 RBS node, or else place it under resource risks alongside 'Boeing organisation'. Similarly, the importance of technology and integration could be elevated to that of a level 1 node, or else included under one or more of the other nodes.
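The suggested RBS could be captured as a simple hierarchy. The sketch below is one illustrative encoding of the level 1 and level 2 nodes described above; the 'per-system nodes' placeholder stands in for the certification split, which would depend on the aircraft systems chosen.

```python
# Sketch of the suggested Risk Breakdown Structure as a nested mapping.
# Node names follow the level 1 / level 2 suggestions in the text;
# the structure itself is an illustrative assumption.
rbs = {
    "schedule": {},
    "resource": {"contracting": {}, "Boeing organisation": {}},
    "technical": {"technology": {}, "integration": {}},
    "certification": {"per-system nodes": {}},
}

def leaves(tree, path=()):
    """Yield the full path to every node, for categorising risk register entries."""
    for name, children in tree.items():
        node = path + (name,)
        yield node
        yield from leaves(children, node)

for p in leaves(rbs):
    print(" / ".join(p))
```

A risk register entry would then reference one of these paths, making the elevation of (say) Tier 1 Partners to level 1 a one-line structural change.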
8.2.8.
Comparing framework findings with project outcomes
Boeing's Dreamliner has received a large amount of publicity due to the prevalence of problems pre- and post-launch. A detailed timeline of events can be found in Appendix P. Delays almost doubled the 'launch-to-delivery' time from 49 to 89 months [97] due to a variety of problems involving Tier 1 partners and the technology. Most notable were long-standing issues with the supply of fasteners and a number of incidents involving the battery technology causing safety concerns [106]. Recognised risks included the management of the supply chain across different countries, the extensive use of outsourcing, and assembly issues made worse by the extensive use of composite technology [97]. Costs spiralled to approximately $40 billion, twice the original estimate, and included costs due to defects, additional R&D, supplier support and buyout, customer contract penalties and cancelled orders [100]. This included bailing out Tier 1 partners such as Alenia and Vought, who designed and manufactured the horizontal stabiliser and both the central and rear fuselage. There has been much speculation, research and analysis as to the root causes of the issues described. Many relate to supply chain competence and Boeing's own communication and procurement. Risk sharing neglected to hold individual suppliers to account for their actions and so encouraged them to benefit from savings on direct costs while the overall schedule was delayed; at the same time Boeing placed a higher emphasis on costs over schedule. As such, optimisation of individual aspects of development did not equate to overall optimisation, rather quite the opposite [100]. Many of the additional costs thus related to Boeing requiring more effort than anticipated managing the supply chain. Post-launch problems were also significant in increasing costs, but also in damaging Boeing's reputation and future Dreamliner sales. Significant safety events included a number of electrical fires relating to the batteries required for the aircraft's electrical system. This has been attributed to quality control rather than a strictly technology-based issue [107] and as such reflects badly upon both Boeing's control of its partners and the certification process [108], including the delegation of the provision of test data to the project's suppliers [109]. Other operational safety issues included engine shutdowns, fuel leaks, loss of transponder, electrical faults, hydraulic failures, a cracked windscreen, cracks in wings discovered at the factory and faults in a variety of on-board management systems [110]. All in all, there had been close to 100 reported incidents as of August 2016 [111]. The Complexity Assessment described particular areas of concern surrounding Contractual Management, Technology and Integration. It is fair to say that most of the issues encountered during the development of the Boeing Dreamliner fell into one of these categories. An additional case study describing the development and construction of AREVA's Evolutionary Power Reactor in Olkiluoto, Finland (OL3) has been undertaken within Appendix Q. OL3, though very different to the Boeing Dreamliner, exhibits characteristics that are equally complex. While it has similarities in the depth of technical complexity, OL3 differs in Areva's choice of procurement and contracting strategy. Whereas Boeing predominantly contracted out design to Tier 1 partners, Areva undertook virtually all design in-house, creating its own unique issues. OL3 is again well documented and has a reputation well deserving of a case study. Both Boeing's Dreamliner and Areva's OL3 were briefly discussed within Section 1.5 of the Literature Survey [4].
9. Results and Evaluation
The dissertation has addressed the following objectives in an effort to develop a complexity management framework:
Proposed a complexity assessment that decomposes a technical development by:
o Discussing the rationale for complexity management alongside current project management methodologies and providing background to the complexity assessment proposal;
o Decomposing the development by the WBS;
o Creating themes for each WBS element – (project) environmental constraints, development process, internal organisation, contractual management, stakeholders, regulatory interfaces, external (system) interfaces, technology and internal (system) interfaces;
o Creating complexity criteria for each theme – uncertainty, ambiguity, emergence, non-linearity and programme-size complexity;
o Rating each criterion between very low and very high;
Described how complexity may be profiled per criterion over the development lifecycle, how these criteria interact and how change may affect this;
Developed the concept of CSFs for application within the Complexity Framework:
o To indicate where extra planning effort should be applied;
o To direct where monitoring should be applied at a higher level of decomposition;
o To align with areas of high complexity deserving interventions and management effort;
o Testing and ranking the selection of CSFs through a questionnaire of project and engineering professionals;
Proposed the use of the Design Structure Matrix to supplement, and be used as an input into, the production of Gantt chart schedules:
o Describing the technique as it is already used for both process- and organisational-architecture DSMs and the Multi-Domain Matrix;
o Proposing the rationale for the use of binary DSMs, numerical DSMs and MDMs;
o Aligning DSM with the complexity criteria and using it to derive planning indicators and performance measures;
Proposed the effective use of different types of performance measures to aid monitoring of development activities:
o Discussing the varying types and purposes of performance measures to provide a way of accurately representing status, allowing control and mitigating actions to be implemented;
o Listing a number that have been previously proposed as suitable for use in this application;
o Testing the proposed Performance Measures by questionnaire survey to indicate those that are considered most influential and to provide confidence in the measures selected;
o Proposing a development scorecard consisting of a number of the performance measures to provide a balanced way of measuring status across the breadth and depth of a technical development;
Described how the entire process may inform risk identification as a part of the overall development risk management;
Gathered all these elements into the complete complexity management framework;
Tested the complexity management framework against two case studies.
The dissertation has satisfied the objectives that were assigned within the introduction with varying degrees of success. The individual components of the Complexity Management Framework will be discussed for their strengths and weaknesses. Finally, areas of further research will be identified where appropriate.
Complexity assessment
The rationale for the assessment was derived from a combination of literature and the author's personal experience. The use of the WBS as a focus provides a convenient way of decomposing the activities, while the complexity themes allow the assessor to look at the WBS elements from a variety of important viewpoints. The complexity criteria decompose the actual complexity further and introduce uncertainty and ambiguity as discrete elements, which are very often barriers to good decision making. The criteria broadly follow the principles of the business assessment methodology VUCA [37], using classical definitions of complexity [37] and with similarities to 5DPM [31]. The regime for scoring individual criteria is as precise as such a heuristic can be, though of course it is still open to a high degree of subjectivity. The assessment provides little in isolation but does prompt qualitative analysis of the development and the identification of areas of concern for the assignment of CSFs. Together these show potential areas of complexity-related risk and provide a description of the issues within the technical development. Difficulties arise in assigning criteria ratings that are relative to the entire development lifecycle. For example, giving a 'very high' rating early in the development allows no scope for increasing the rating later on. This can be at least partially mitigated through the development of assessment guidance. Another area requiring guidance is that pertaining to the System Development Themes, between which there can be seen to be some overlap.
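A minimal sketch of how the criteria ratings described above might be recorded and queried follows; the WBS element names, themes and scores are hypothetical, chosen only to show the mechanics of the very-low to very-high scale.

```python
# Illustrative recording of Complexity Assessment ratings per WBS element
# and Complexity Theme. Element names, themes and scores are assumptions
# for demonstration, not taken from a real assessment.
SCALE = ["very low", "low", "medium", "high", "very high"]

ratings = {
    ("1.2 Fuselage", "contractual management"): "very high",
    ("1.2 Fuselage", "technology"): "high",
    ("1.3 Avionics", "stakeholders"): "medium",
}

def flagged(ratings, threshold="high"):
    """Return WBS element/theme pairs rated at or above the threshold."""
    limit = SCALE.index(threshold)
    return sorted(k for k, v in ratings.items() if SCALE.index(v) >= limit)

print(flagged(ratings))
```

The flagged pairs would then be the candidates for CSF assignment and enhanced planning, which is the qualitative analysis the assessment is intended to prompt.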
Practical application would be the preferred method of refining the analysis and improving consistency of application. Focussed further research could be used to aid lessons-learned type exercises. The nature of the Complexity Criteria was discussed, and how these evolve during a project. Uncertainty/Ambiguity closely equates to likelihood of change, while Emergence/Non-linearity relates more closely to impact. In this way it can be seen that Uncertainty/Ambiguity is most influential in the early phases of development, while Emergence/Non-linearity becomes increasingly predominant as the development matures. Complexity Profiles could be developed to provide baselines against which similar technical developments could be assessed. The use of case studies highlighted a number of potential shortfalls in the Complexity Assessment. The WBS is an important consideration during the Complexity Assessment, as an inconsistent or poorly conceived structure will prove problematic in its application. Issues arise where the WBS uses an inconsistent methodology and omits development phases at a high level, as can be seen in the OL3 case study contained in Appendix Q. This makes Complexity Assessment more of a challenge at each point that it is done. In the example of OL3 the WBS will have been created early in the project lifecycle, and presumably by a variety of business entities in parallel. In its favour, it would have stayed largely unchanged throughout, as it would have been largely uninfluenced by contracting decisions, relying instead on established business functions for its structure. In contrast, the Boeing Dreamliner WBS would have been dependent on the 'Global WBS' that was created as a consequence of how the aircraft was contracted out for development and manufacture. Later interventions by Boeing to support the supply chain and provide additional supervision and management would also have affected the WBS. Ideally, early input into the WBS during the initial complexity assessment would alleviate these issues. The case studies allowed an assessment of the technique, but with the inevitable bias relating to knowledge of actual real-life outcomes and the limitations of high-level application. However, they were able to demonstrate its use against a high-level WBS. Development of the assessment against a detailed WBS would benefit from practical application to identify further areas for improvement.
Critical Success Factors
This is a technique already established in industry [5][7][45], though without any defined methods for identification and implementation. This dissertation was able to link it into a process designed to better guide its application and also to relate the realisation of CSFs to Performance Measures. The literature survey was effective in providing a comprehensive list of CSFs. The validity of these CSFs was confirmed by the questionnaire survey, with only a single CSF of the 141 being identified as having low influence. The ranking of CSFs allows them to be prioritised both when chosen for use and when applied in practice. Further development of CSFs will again rely on project data, and particularly the review of past development performance. The questionnaire has a reasonable sample size that provides a good starting point for their application.
Another area of development that became obvious while undertaking the case studies is the need to categorise the CSFs against the Complexity Criteria to inform their selection. This would move the selection of suitable CSFs from being highly subjective to a semi-formal method. Each CSF would be considered against its potential to reduce, or allow better management of, Ambiguity, Uncertainty, Non-linearity, Emergence and Program-size Complexity. Initially this may take the form of simple categorisations of positive, neutral or negative influence. There would then be the opportunity to develop this further using lessons learned from the use of CSFs. Foremost, the application of CSFs would benefit from a commentary being given for each, proposing how it could best be implemented and under which circumstances. Ultimately the CSFs could be given a more sophisticated rating against the Complexity Criteria. Candidates for this include a scale from high to low (with 'none' and 'negative' to denote where the CSF has no benefit) or a numerical scale with positive and negative integers. Identification of CSFs should not be based purely on analysis of the current stage of development, as omission of CSFs for particular aspects can have far-reaching consequences. Examples include Contractual Management, where cognisance of CSFs at the time of contract documentation writing can influence strategy and content. Another example is the consideration of verification and validation strategy early in the design, where the management of uncertainty and ambiguity early on may overshadow managing the same later. The application of these generic CSFs to make them effective within particular projects was not developed beyond some examples within the case studies. CSFs could also be applied in isolation from the other sub-processes described within this dissertation.
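The simple positive/neutral/negative categorisation proposed above might be sketched as follows; the CSF names and scores are illustrative assumptions only, not results from the survey.

```python
# Sketch of the proposed semi-formal CSF categorisation: each CSF scored
# for its influence on each Complexity Criterion (+1 positive, 0 neutral,
# -1 negative). Unlisted criteria default to neutral.
CRITERIA = ["uncertainty", "ambiguity", "emergence",
            "non-linearity", "program-size"]

csf_scores = {
    "Effective change management": {"emergence": 1, "non-linearity": 1},
    "Test early, test often": {"uncertainty": 1, "ambiguity": 1},
}

def csfs_for(criterion, scores):
    """CSFs with a positive influence on the given Complexity Criterion."""
    return sorted(c for c, s in scores.items() if s.get(criterion, 0) > 0)

print(csfs_for("uncertainty", csf_scores))
```

Selection then becomes a lookup against the criteria flagged by the Complexity Assessment rather than a purely subjective choice, and the integer scores leave room for the richer numerical scale suggested above.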
As such there is much value in the findings of the questionnaire survey and their application even if complexity management is not adopted as a whole. There is, however, a significant benefit in the selection of CSFs being informed by a process such as Complexity Assessment.
Planning using Design Structure Matrix
While there is substantial literature on DSM and related techniques, its use has yet to become commonplace. Exceptions include a few specialised application examples, such as Boeing's unmanned combat aerial vehicle (UCAV), Intel's microprocessor development and BMW's hybrid vehicle architecture concepts [67]. Evidently its use does not yet approach the level that will enable tool support and training to be made widely available.
DSMs can provide substantial value in planning development activities with complex relationships, and they should be used to supplement, rather than replace, currently used techniques. It is supposed that a barrier to their use is cost as an additional overhead, requiring additional resource and specialised skills to implement. Therefore, the benefits of DSM need to be demonstrated for industry to justify the investment to develop and implement it. The section on DSM clarified the similarities between the planning relationship descriptions of DSM and traditional techniques. It showed how the creation of DSMs could be used as an input into Gantt chart development. Finally, it proposed a number of Performance Measures relating to plan complexity. The need for such Performance Measures was confirmed by the results of the questionnaire survey, which indicated that this was the third most influential, and therefore desirable, Performance Measure. While the activity relationships sequential and parallel are commonly used within planning, the DSM process activity types of coupled and conditional are not. There is an opportunity to better define the interfaces between Gantt charts and both DSMs and MDMs. This would potentially lead to better tool support, including an interface with the most popular planning software such as Oracle's Primavera [69]. Such tool support for the transposition of activity relationship information from numerical DSMs into a Gantt chart format would improve the popularity of DSM as a planning tool. Tool development should not be confined purely to Process DSMs. Of equal benefit is the development of techniques and tooling to allow Organisational DSMs, and ultimately Process-Organisational MDMs, to support resource-loaded Gantt charts.
This would benefit the application of Earned Value Management by improving the quality of the plan against which activities are to be monitored and controlled. As well as providing information and data to support the implementation of EVM, these techniques would support the implementation of plan complexity Performance Measures. As the third-ranked most influential Performance Measure, behind the two Requirements-related measures, it should be considered an important method of monitoring and controlling activities. Indeed, its value is that it is very much a leading measure of project performance. The benefit of using DSM and appropriate tooling is that the collection of these plan complexity performance measures can be largely automated. This allows these measures to influence the planning process itself in reducing complexity, and they should influence the ultimate forming of the Gantt chart. It is also a leading measure that can be used to inform management of areas of particular concern against which resources and expertise should be expended. In common with CSFs, DSM can be applied outside complexity management and without the preceding Complexity Assessment and CSF identification. These sub-processes do, however, facilitate a targeted and semi-formal method of applying DSM to areas of particular complexity. Again, there is additional value in the application of DSM within this framework in that areas of low concern can be disregarded and planned using more traditional techniques.
Performance measurement
From the questionnaire it is obvious that there is a strong need for techniques to supplement EVM. EVM was used as a benchmark and rated relatively poorly, even amongst project personnel. This suggests there is considerable room for improvement in this area. Requirements management figured most strongly overall, reinforcing the author's view that the monitoring and control of requirements deserves greater emphasis within project management methodologies.
The measurement of plan complexity, which could be satisfied using a number of DSM-related measures, also figured strongly as third most influential. Requirements management relates closely to scope management, since all scope should be derived from development and project requirements. This demonstrates that there should be a much stronger emphasis on the area of scope within the traditional project management model of the 'Iron Triangle' [32]. The measurement of schedule and cost, other than through EVM, received barely any attention within the questionnaire comments. In contrast, scope, both directly through plan complexity and indirectly through requirements, was mentioned in the top three measures.
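The leading plan complexity measure referred to above, the proportion of sequential, parallel and coupled activity pairs in a binary DSM, could be computed automatically as in the sketch below. Conditional activities are omitted for brevity and the matrix is invented for illustration.

```python
# Sketch of the proposed leading 'plan complexity' Performance Measure:
# classify each activity pair in a binary DSM and report proportions.
# dsm[i][j] = 1 means activity i depends on an output of activity j.
def pair_classes(dsm):
    n = len(dsm)
    counts = {"sequential": 0, "parallel": 0, "coupled": 0}
    for i in range(n):
        for j in range(i + 1, n):
            if dsm[i][j] and dsm[j][i]:
                counts["coupled"] += 1        # mutual dependency: iteration
            elif dsm[i][j] or dsm[j][i]:
                counts["sequential"] += 1     # one-way dependency
            else:
                counts["parallel"] += 1       # no dependency
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

dsm = [[0, 1, 0],
       [1, 0, 0],
       [1, 0, 0]]
print(pair_classes(dsm))
```

A rising proportion of coupled pairs over successive plan revisions would be the early warning signal the text argues a leading measure should provide.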
The use of TRL and IRL was regarded as being more influential in some industries than others and, unsurprisingly, was considered more important by engineering personnel. Incorporation of these methodologies would be of benefit only in developments where novelty of technology or integration was identified as being of concern. The measurement of both sub-processes and project documentation, design and drawings should be considered for any project. Relating to the control and monitoring of scope, rather than the requirements, these are important lagging Performance Measures. Other measures scored poorly for reasons previously described. The adoption of a Development Scorecard type approach should eliminate the biases that are inherent in any single Performance Measure, though the scope of performance measurement should be commensurate with the risks and the actual benefits. These Performance Measures and the Development Scorecard can be readily applied in isolation from the other supporting sub-processes. What Complexity Assessment and the application of DSM do allow is the identification of where particular effort can be applied against the WBS. In this way particular areas of requirements and scope can be measured.
Risk management
This is an area of management that is relatively well developed, though its application can often be less than effective. The application of a Risk Breakdown Structure should aid consistency, and the application of the preceding activities within this framework will aid the identification of risks.
10. Conclusion
There is a general need to develop guidance and formality of application across all of the sub-processes. This can be achieved through a blend of further research and real-world application. These areas for development will be discussed within this section. It is felt that the application of Complexity Assessment against the WBS would benefit from further practical application. Further work should be undertaken to provide additional guidance on the scoring of criteria and to develop typical complexity profiles across the development lifecycle. These are outside the scope of this dissertation due to their reliance on actual project data and the sample sizes required. Application across successive developments may reveal general trends, which would be beneficial in allowing the assessment of a development against norms. It may also support the selection of responses to complexity and inform high-level risk management exercises. The analysis of high-level trends, grouping Uncertainty/Ambiguity and Emergence/Non-linearity together, would allow the formation of standard complexity profiles. These could then be used to guide initial project strategy and the formation of the WBS, as well as to compare the results of the complexity assessment against norms. This could then be used to assess where there may be particular risks, i.e. areas where complexity is excessive or higher than expected. Only application of the Complexity Framework against different types of WBS, and to varying degrees of detail, can fully demonstrate its value and allow it to be improved upon. An iterative approach to developing the framework, and its constituent parts, will allow it to be applied more effectively over subsequent projects. Development of guidance would be important to guide its use and to drive consistency of application.
Further research through larger survey sample sizes would improve the applicability of CSFs and Performance Measures within individual domains, though this of course does not preclude their immediate use based on the conclusions from the survey data within this dissertation. Improvements could be made through increases in sample sizes across a range of industries, ensuring representation across age groups and role descriptions. In this way differences between industries could be better seen and the ranking of relevant CSFs better tailored to the requirements of a particular industry. Furthermore, while the use of CSFs is documented, a process to aid their actual identification, management and the review of their impact is not. Analysis of CSF efficacy would be highly useful, and this sub-process provides a basis for such an activity that is not discussed elsewhere in the literature. It is also felt that more work is required in the Complexity Themes of 'technology development management' and 'system integration management'. An increased sample size, including more engineering personnel, would improve the findings in these areas. There is a deficit in the number of CSFs relating to the Criticality Themes under the External Factors and System Development groupings. Neither the literature research nor the questionnaire survey identified any additional CSFs, so again it is felt that only practical application of the framework would yield progress in this area. Guidance should be developed for the application of CSFs against particular Complexity Criteria. This could take the form of a simple categorisation of positive, neutral or negative influence to begin with. Additional instruction could be developed for the use of CSFs in the form of a commentary for each one. Guidance should also be developed for applying CSFs in specific technical developments. This should include appropriate wording and content, but should also relate to the use of Performance Measures. Further work is required to develop suitable Performance Measures that will support the implementation of CSFs, with maximum effort applied to the most influential CSFs. Many of these CSFs are intangible and subjective, though nonetheless important, making this task difficult. For some CSFs objective measurement will be impossible; examples include 'good leadership' and 'degree of collaboration'. However, the identification of which CSFs can and cannot be measured would be useful in itself.
Methods to assess those that cannot be measured could be undertaken through appropriate review at Stage Gates or similar development decision points. Examples of those that are readily measured include CSFs such as 'critical activities are identified' and 'clearly understood contractual interfaces'. In these examples the use of DSM methodologies would be of assistance. The application of DSMs has been developed over several decades, though commercially available tooling is weak. Development of tooling to allow DSMs to be used as an input into both the development of Gantt charts and Performance Measures would be useful. Other areas of research include the development of numerical DSM methodologies, developed and improved over several development lifecycles, and their use in combination with other modelling techniques. OBM [41] has been referenced earlier in the dissertation, but there are many other modelling techniques that may be more appropriate to integrate with DSM and traditional planning techniques. The practical application of complexity-related performance measures against DSM would allow the proposed technique to be developed further. There is considerable benefit in such a leading measure and, considering its ranking in the questionnaire survey, its use may well be an attractive proposition within technical development. It is suggested that trialling of this, in conjunction with the application of DSM, is the next step in its development. The development of Performance Measures to support the use of CSFs deserves further research. The measurement of CSF effectiveness would encourage their use and be useful as an input into lessons-learned exercises. This can be undertaken alongside the general development of Performance Measures. Another aim is the automation of the collection of data for use in Performance Measures using existing and new computer tooling. This would reduce the manual overhead and encourage their greater use.
The survey findings suggested significant benefits in the further development of requirements- and scope-based Performance Measures. Database and tooling support would allow some degree of automation. Further research should be undertaken into the development of additional measures, and into those measures that did not receive particularly good rankings in the survey; their application may still be appropriate in some technical developments. Overall, despite the benefits of combining them, there are opportunities to apply several of the individual sub-processes in isolation to develop them further. This may be more viable for their development, as the overhead
in terms of resources will be considerable, especially if they are applied in tandem with conventional and proven project monitoring and control techniques. Project management has failed to learn the lessons of successive high-profile project failures. Many of these failings were discussed within the literature survey [4], and so it may be surmised the discipline has been resistant to change for some considerable time. Without sufficient data, preferably corroborated, the benefits of Complexity Management via this framework will be intangible at best. This, combined with project management’s inherent conservatism, may be the greatest barrier to the acceptance of the tools and techniques described within this dissertation. An iterative and considered approach to the management of complexity will be required to gain its support. What is clear is that current project management and systems engineering methodologies will need to improve their efficacy in the face of increasing levels of complexity. The question that has to be asked before commencing any complex project is: what sets this undertaking apart from the high percentage of unsuccessful projects? The recognition of complexity, and the tailoring of the technical development process to suit a project’s unique complexity traits, is one way this question may be answered.
References
[1] M Lee (2003). HRD in a Complex World, (Ed), London: Routledge.
[2] C E Shannon and W Weaver (1998). The Mathematical Theory of Communication, University of Illinois Press, Champaign.
[3] Australian Public Service Commission (2012). Tackling wicked problems: A public policy perspective, Last updated: 31 May 2012, Available at: http://www.apsc.gov.au/publications-and-media/archive/publications-archive/tackling-wicked-problems, Accessed on: 25th October, 2016.
[4] N Brook (2015). Integration of technical development within complex project environments Literature Survey, MSc, Safety Critical Systems Engineering, University of York.
[5] Mindtools (2016). CSFs; Identifying the Things That Really Matter for Success, Available at: https://www.mindtools.com/pages/article/newLDR_80.htm, Accessed on: 28th October, 2016.
[6] L S Pheng and Q T Chuan (2006). Environmental factors and work performance of project managers in the construction industry, International Journal of Project Management, Volume 24, Issue 1, January 2006, Pages 24–37, Available at: http://www.sciencedirect.com/science/article/pii/S0263786305000633, Accessed on: 25th October, 2016.
[7] BMG Research (2014). Factors in project success; Research Report, Prepared for: The Association for Project Management (APM), November 2014, Available at: https://www.apm.org.uk/sites/default/files/APM%20Success%20report_NOV%2014.pdf, Accessed on: 25th October, 2016.
[8] OGC (2012). PRINCE2; Directing Successful Projects, 5th Ed, TSO, Norwich.
[9] aceproject (2014). Project management and firefighting, Available at: http://www.aceproject.com/blog/2009/04/29/projectmanagement-and-firefighting/, Accessed on: 25th October, 2016.
[10] M Ramsden (2013). Ten rules for smart bowtie analysis, 31 October 2013, ERM, Available at: http://www.erm.com/en/newsevents/platform/ten-rules-for-smart-bowtie-analysis/, Accessed on: 8th November 2016.
[11] N Brook (2015). Presentation: Failure curves and bowtie analysis (adapted from ERM (no date). Introduction to Bow-tie Diagrams, Available at: http://events.r20.constantcontact.com/register/event?llr=6quxcycab&oeidk=a07e5zzlwto9ff6b679), Northumbria Water.
[12] B Jutte (no date). 10 Golden Rules of Project Risk Management, Project Smart, Available at: http://www.projectsmart.co.uk/10golden-rules-of-project-risk-management.php, Accessed on: 25th October, 2016.
[13] K Jackson (no date). Plan-Do-Check-Act (PDCA), Mindtools, Available at: http://www.mindtools.com/pages/article/newPPM_89.htm, Accessed on: 25th October, 2016.
[14] N Settle-Murphy (no date). 10 Top Tips for Leading Great Lessons Learned Reviews in a Virtual World, Guided Insights, Available at: http://www.guidedinsights.com/10-top-tips-for-leading-great-lessons-learned-reviews-in-a-virtual-world/, Accessed on: 25th October, 2016.
[15] J Shane et al (2015). Guide to Project Management Strategies for Complex Projects, SHRP 2 Report S2-R10-RW-2, Institute for Transportation, Iowa State University, Available at: http://www.trb.org/Main/Blurbs/167482.aspx, Accessed on: 26th October, 2016.
[16] Helmsman Institute (2012). Why Complexity Matters, Available at: http://www.apmginternational.com/nmsruntime/saveasdialog.aspx?lID=5479&sID=6815, Accessed on: 26th October, 2016.
[17] N Goldenfeld and L P Kadanoff (1999). Simple Lessons from Complexity, Science, 02 Apr 1999, Vol. 284, Issue 5411, pp. 87-89, Available at: http://science.sciencemag.org/content/284/5411/87, Accessed on: 25th October, 2016.
[18] D W Oliver et al (1997). Engineering Complex Systems with Models and Objects, McGraw-Hill Inc., US, Available at: https://oldsite.incose.org/ProductsPubs/DOC/EngComplexSys.pdf, Accessed on: 26th October, 2016.
[19] D Rind (1999). Viewpoint: Complexity and climate, Science, 284, 105-107, Available at: http://pubs.giss.nasa.gov/docs/1999/1999_Rind_ri04100j.pdf, Accessed on: 25th October, 2016.
[20] S A Kauffman (1993). The Origins of Order: Self-organization and Selection in Evolution, Oxford University Press.
[21] J H Holland (1996). Hidden Order: How Adaptation Builds Complexity, Addison-Wesley, Reading, Massachusetts.
[22] Y Bar-Yam (1997). Dynamics of Complex Systems, Addison-Wesley, Reading, Massachusetts, Available at: https://fernandonogueiracosta.files.wordpress.com/2015/08/yaneer-bar-yam-dynamics-of-complex-systems.pdf, Accessed on: 25th October, 2016.
[23] G Weng et al. (1999). Complexity in biological signaling systems, Science, 1999 Apr 2; 284(5411): 92-6, Available at: https://www.ncbi.nlm.nih.gov/pubmed/10102825, Accessed on: 26th October, 2016.
[24] M Brugnach (2008). Complexity and Uncertainty: Rethinking The Modelling Activity, Published in Environmental Modelling, Software and Decision Support: State of the Art and New Perspectives, Amsterdam et al.: Elsevier, 2008, Available at: http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1071&context=usepapapers, Accessed on: 26th October, 2016.
[25] M T Pich (2000). On Uncertainty, Ambiguity, and Complexity in Project Management, Published Online: August 1, 2002, Management Science, Available at: http://dx.doi.org/10.1287/mnsc.48.8.1008.163, Accessed on: 26th October, 2016.
[26] The British Computer Society (2006). Case Study of Successful Complex IT Projects, Lancaster University, August 2006, Available at: http://www.bcs.org/upload/pdf/casestudy2.pdf, Accessed on: 26th October, 2016.
[27] SEBoK (2016). Guide to the Systems Engineering Body of Knowledge (SEBoK), Available at: http://sebokwiki.org/wiki/Guide_to_the_Systems_Engineering_Body_of_Knowledge_(SEBoK), Accessed on: 26th October, 2016.
[28] INCOSE (2015). Systems Engineering Handbook; A Guide for System Life Cycle Process and Activities, 4th Ed, INCOSE‐TP‐2003‐002‐04, January 2015, John Wiley & Sons, Hoboken, New Jersey.
[29] Project Management Institute (2004). A guide to the project management body of knowledge (PMBOK guide), Newtown Square, Pa: Project Management Institute.
[30] D A Buchanan and A Huczynski (1997). Organisational Behaviour, 3rd Edition, Hemel Hempstead, Prentice Hall.
[31] D D Gransberg and H D Jeong (2015). Managing Mega-Project Complexity in Five Dimensions, The 6th International Conference on Construction Engineering and Project Management (ICCEPM 2015), Busan, Korea, October 2015, Available at: https://www.researchgate.net/publication/284533244_Managing_Mega-Project_Complexity_in_Five_Dimensions, Accessed on: 26th October, 2016.
[32] Mindtools (2016). The Iron Triangle of Project Management: Balancing Your Budget, Scope, and Schedule, Available at: https://www.mindtools.com/pages/article/newPPM_54.htm, Accessed on: 26th October, 2016.
[33] PCubed (2016). Nuclear Industry Sector Challenges, Available at: http://www.pcubed.com/bulletins/nuclear-industry-sectorchallenges, Accessed on: 26th October, 2016.
[34] ICCPM (no date). About ICCPM, Available at: https://iccpm.com/content/about-iccpm, Accessed on: 26th October, 2016.
[35] A R McGowan et al (2013). A Socio-Technical Perspective on Interdisciplinary Interactions during the Development of Complex Engineered Systems, 2013 Conference on Systems Engineering Research, Volume 16, 2013, Pages 1142–1151, Available at: http://www.sciencedirect.com/science/article/pii/S187705091300121X, Accessed on: 26th October, 2016.
[36] N Bennett and G J Lemoine (2014). What VUCA Really Means for You, Harvard Business Review, January–February 2014 Issue, Available at: https://hbr.org/2014/01/what-vuca-really-means-for-you, Accessed on: 27th October, 2016.
[37] G Satell (2013). Management Has to Change In An Increasingly Complex World, June 9th 2013, Available at: http://www.businessinsider.com/how-to-manage-complexity-2013-6?IR=T, Accessed on: 27th October, 2016.
[38] L Fortnow (no date). Kolmogorov Complexity, Available at: http://people.cs.uchicago.edu/~fortnow/papers/kaikoura.pdf, Accessed on: 27th October, 2016.
[39] P Holman (2010). Engaging Emergence: Turning Upheaval into Opportunity, Chapter 1: What Is Emergence? Berrett-Koehler Publishers, San Francisco, Available at: http://peggyholman.com/papers/engaging-emergence/engaging-emergence-table-ofcontents/part-i-the-nature-of-emergence/chapter-1-what-is-emergence/, Accessed on: 27th October, 2016.
[40] J Reason (1990). Human Error, Cambridge University Press, Cambridge.
[41] J S Warboys and J S Keane (1993). OBM: A Specification Method for Modelling Organisational Process, Published in the Proceedings of the Workshop on Constraint Processing at CSAM’93, St. Petersburg, 1993.
[42] ITIL (2016). Back to basics (People, Process and Technology), Available at: http://www.itilnews.com/index.php?pagename=ITIL__Back_to_basics_People_Process_and_Technology, Accessed on: 27th October, 2016.
[43] M Frank et al (2011). The Relationship among Systems Engineers’ Capacity for Engineering Systems Thinking, Project Types, and Project Success, Project Management Journal, Volume 42, Issue 5, pages 31–41, September 2011, Available at: http://onlinelibrary.wiley.com/doi/10.1002/pmj.20252/abstract, Accessed on: 26th October, 2016.
[44] Techopedia (2016). Scope Creep; Definition - What does Scope Creep mean? Available at: https://www.techopedia.com/definition/24779/scope-creep, Accessed on: 26th October, 2016.
[45] Association for Project Management (2012). APM body of knowledge, 6th ed., High Wycombe.
[46] A Bassiouny (2008). UAV Stability Augmentation System, Published on Nov 16, 2008, Available at: http://www.slideshare.net/ahmad1957/uav-stability-augmentation-system-usas-presentation, Accessed on: 27th October, 2016.
[47] S Denning (2013). What Went Wrong at Boeing? Jan 21, 2013, Forbes, Available at: http://www.forbes.com/sites/stevedenning/2013/01/21/what-went-wrong-at-boeing/#30d2a3465aad, Accessed on: 27th October, 2016.
[48] J Laaksonen (2010). Lessons Learned from Olkiluoto 3 Plant, 9th October, 2010, Power Engineering, Available at: http://www.power-eng.com/articles/npi/print/volume-3/issue-3/nucleus/lessons-learned-from-olkiluoto-3-plant.html, Accessed on: 27th October, 2016.
[49] L Jun-yan (2012). Schedule Uncertainty Control: A Literature Review, 2012 International Conference on Medical Physics and Biomedical Engineering (ICMPBE2012), Available at: http://www.sciencedirect.com/science/article/pii/S1875389212016069, Accessed on: 27th October, 2016.
[50] L P Leach (1999). Critical Chain Project Management Improves Project Performance, Project Management Journal, vol. 30, no. 2, pp. 39-51, June 1999, Available at: https://www.pmi.org/learning/library/critical-chain-pm-improves-performance-5305, Accessed on: 27th October, 2016.
[51] R de Neufville and S Scholtes (2011). Flexibility in Engineering Design, The MIT Press, Cambridge, Massachusetts, London, England.
[52] D Cleland and W R King (1997). Project Management Handbook, Chapter 20: Critical Success Factors in Effective Project Implementation by J K Pinto and D P Slevin, 2nd Edition, Wiley.
[53] OGC (2012). PRINCE2; Directing Successful Projects, 5th Ed, TSO, Norwich.
[54] G J Roedler and C Jones (2005). Technical Measurement, A Collaborative Project of PSM, INCOSE, and Industry, 27th December 2005, INCOSE-TP-2003-020-01, Available at: http://www.incose.org/docs/default-source/ProductsPublications/technical-measurement-guide---dec2005.pdf?sfvrsn=4, Accessed on: 28th October, 2016.
[55] D Dvir et al. (1998). In search of project classification: a non-universal approach to project success factors, Research Policy, Volume 27, Issue 9, December 1998, Pages 915–935, Available at: http://www.sciencedirect.com/science/article/pii/S0048733398000857, Accessed on: 12th November 2016.
[56] T Chow and D-B Cai (2007). A survey study of critical success factors in agile software projects, Journal of Systems and Software, Volume 81, Issue 6, June 2008, Pages 961–971, Available at: http://www.sciencedirect.com/science/article/pii/S0164121207002208, Accessed on: 12th November 2016.
[57] G L Ragatz et al. (1997). Success Factors for Integrating Suppliers into New Product Development, Product Innovation Management, Volume 14, Issue 3, May 1997, Pages 190–202, Available at: http://onlinelibrary.wiley.com/doi/10.1111/1540-5885.1430190/abstract, Accessed on: 12th November 2016.
[58] Koutsikouri et al (2008). Critical success factors in collaborative multi-disciplinary design projects, Journal of Engineering, Design and Technology, Available at: http://www.emeraldinsight.com/doi/abs/10.1108/17260530810918243, Accessed on: 12th November 2016.
[59] J Fortune and D White (2006). Framing of project critical success factors by a systems model, International Journal of Project Management, Volume 24, Issue 1, January 2006, Pages 53–65, Available at: http://www.sciencedirect.com/science/article/pii/S0263786305000876, Accessed on: 12th November 2016.
[60] SurveyMonkey (2016). Home, Available at: https://www.surveymonkey.co.uk/, Accessed on: 27th October, 2016.
[61] B Mehlenbacher (2002). Communication Disasters, ENG 421: Computer Documentation Design, Available at: http://www4.ncsu.edu/~brad_m/teaching/eng%20331/Lessons/communication.html, Accessed on: 28th October, 2016.
[62] M V Malko (no date). The Chernobyl Reactor: Design Features and Reasons for Accident, Joint Institute of Power and Nuclear Research, National Academy of Sciences of Belarus, Available at: http://www.rri.kyoto-u.ac.jp/NSRG/reports/kr79/kr79pdf/Malko1.pdf, Accessed on: 28th October, 2016.
[63] D Haughey (2014). A Brief History of Project Management, Project Smart, Available at: https://www.projectsmart.co.uk/brief-historyof-project-management.php, Accessed on: 28th October, 2016.
[64] C Borysowich (2008). Pros & Cons of Gantt Charts, Toolbox.com, Feb 2, 2008, Available at: http://it.toolbox.com/blogs/enterprisesolutions/pros-cons-of-gantt-charts-22233, Accessed on: 6th November 2016.
[65] T Browning (2012). The Design Structure Matrix: A Tool for Managing Complexity, Scientific American, 15th September 2012, Available at: http://blogs.scientificamerican.com/guest-blog/the-design-structure-matrix-a-tool-for-managing-complexity/, Accessed on: 28th October, 2016.
[66] S D Eppinger et al (1992). Organising the Tasks in Complex Design Projects: Development of Tools to Represent Design Procedures, NSF Design and Manufacturing Systems Conference, Atlanta, Georgia, January 1992, Available at: http://web.mit.edu/people/eppinger/pdf/Gebala_NSF1992.pdf, Accessed on: 28th October, 2016.
[67] S D Eppinger and T R Browning (2014). Design Structure Matrix Methods and Applications, The MIT Press, Cambridge, Massachusetts, London, England.
[68] DSMweb.org (2009). The Design Structure Matrix (DSM), Available at: http://www.dsmweb.org/, Accessed on: 28th October, 2016.
[69] Oracle (2016). Primavera Enterprise Project Portfolio Management, Available at: https://www.oracle.com/uk/applications/primavera/index.html, Accessed on: 31st October 2016.
[70] Planning Planet (2009). Resource Loading, Posted Thu, 2009-02-05, Available at: http://www.planningplanet.com/wiki/422401/resource-loading, Accessed on: 28th October, 2016.
[71] GanttChartExample.com (2016). Gantt Chart Example, Available at: http://www.ganttchartexample.com/, Accessed on: 28th October, 2016.
[72] S Vajna et al (2010). Designing the Solution Space for the Autogenetic Design Theory (ADT), International Design Conference – Design 2010, Dubrovnik, Croatia, May 17-20, 2010, Available at: https://www.designsociety.org/downloadpublication/29490/designing_the_solution_space_for_the_autogenetic_design_theory_adt, Accessed on: 29th October, 2016.
[73] M Kreimeyer and U Lindemann (2011). Complexity Metrics in Engineering Design; Managing the Structure of Design Processes, 1st Ed, Springer-Verlag Berlin Heidelberg.
[74] D Simon and F Simon (2005). Das Wundersame Verhalten von Entwicklern beim Einsatz von Quellcode-Metriken, Proceedings in DASMA Software Metrik Kongress: Metrikon 2005, Shaker Verlag, Aachen, 2005, pages 263-272, Available at: http://www.softwarekompetenz.de/servlet/is/28003/?print=true, Accessed on: 29th October, 2016.
[75] M Bremer and B McKibben (2011). Escape the Improvement Trap: Five Ingredients Missing in Most Improvement Recipes, Boca Raton, London, New York.
[76] A Kline et al (2003). Creating and Using a Performance Measure for the Engineering Design Process, American Society for Engineering Education, Available at: http://www.webpages.uidaho.edu/ele/scholars/Results/Publications/asee/ASEE_2003_Creating_Performance_Measure.doc, Accessed on: 29th October, 2016.
[77] Balanced Scorecard Institute (2016). Balanced Scorecard Basics, Available at: http://balancedscorecard.org/Resources/About-theBalanced-Scorecard, Accessed on: 1st November, 2016.
[78] D Huether (2010). How Do You Know Your Metrics Are Worth It, Make Things Better, Available at: http://www.derekhuether.com/2010/02/11/how-do-you-know-your-metrics-are-worth-it/, Accessed on: 2nd December 2016.
[79] R Navon (2005). Automated Project Performance Control (APPC) of construction resources, Automation in Construction, 14(4): 467-476, August 2005, Available at: https://www.researchgate.net/publication/223091830_Automated_project_performance_control_of_construction_projects, Accessed on: 29th October, 2016.
[80] A Yassine et al (2003). Information hiding in product development: the design churn effect, Research in Engineering Design 14 (2003) 145–161, DOI 10.1007/s00163-003-0036-2, Available at: http://necsi.edu/affiliates/braha/RED03_Info.pdf, Accessed on: 28th October, 2016.
[81] N Brook (2016). Guidelines for Requirements Engineering and Management, Tractebel Engineering.
[82] R Torbett et al (2001). Design Performance Measurement in the Construction Sector: A Pilot Study, Science and Technology Policy Research, Sussex University, Available at: http://www.sussex.ac.uk/spru/documents/sewp66, Accessed on: 29th October, 2016.
[83] US Department of Energy (2010). DOE G 413.3-12, U.S. Department of Energy Project Definition Rating Index Guide, Available at: https://www.directives.doe.gov/directives-documents/400-series/0413.3-EGuide-12, Accessed on: 29th October, 2016.
[84] NASA (2010). PDRI; Project Definition Rating Index Use on NASA Facilities, April 2000, Available at: http://www.hq.nasa.gov/office/codej/codejx/Assets/Docs/ProjectDefinitionRatingIndex.pdf, Accessed on: 29th October, 2016.
[85] Construction Industry Institute (2015). Project Definition Rating Index (PDRI), Available at: https://www.constructioninstitute.org/scriptcontent/more/rr113_11_more.cfm, Accessed on: 29th October, 2016.
[86] Department of Defense (2010). Technology Readiness Levels in the Department of Defense (DoD), Defense Acquisition Guidebook, Available at: https://www.army.mil/e2/c/downloads/404585.pdf, Accessed on: 31st October, 2016.
[87] NASA (2007). Systems Engineering Handbook, NASA/SP-2007-6105 Rev1, December 2007, Available at: http://foiaelibrary.gsfc.nasa.gov/_assets/doclibBidder/tech_docs/5.%20NASA%20SP6105%20Rev%201%20(Sys%20Eng%20Handbook).pdf, Accessed on: 31st October, 2016.
[88] NASA (2012). Technology Readiness Level, Oct. 28, 2012, Available at: http://www.nasa.gov/directorates/heo/scan/engineering/technology/txt_accordion1.html, Accessed on: 31st October, 2016.
[89] M F Austin and D M York (2015). System Readiness Assessment (SRA) An Illustrative Example, 2015 Conference on Systems Engineering Research, Volume 44, 2015, Pages 486–496, Available at: http://www.sciencedirect.com/science/article/pii/S1877050915002677, Accessed on: 31st October, 2016.
[90] M Westcott et al (2013). The DMI Design Value Scorecard: A New Design Measurement and Management Model, Design Management Review, Volume 24, Issue 4, pages 10–16, Winter 2013, Available at: http://onlinelibrary.wiley.com/wol1/doi/10.1111/drev.10257/full, Accessed on: 7th November 2016.
[91] N Brook (2016). Risk Management – Management Procedure, ref 3887; PALLAS Project, Tractebel Engineering.
[92] L Mieritz (2012). Gartner Survey Shows Why Projects Fail, thisiswhatgoodlookslike, Gartner, 10th June 2012, Available at: http://thisiswhatgoodlookslike.com/2012/06/10/gartner-survey-shows-why-projects-fail/, Accessed on: 29th October, 2016.
[93] Royal Academy of Engineering (2004). The Challenges of Complex IT Projects, Available at: http://bcs.org/upload/pdf/complexity.pdf, Accessed on: 29th October, 2016.
[94] A Denker (2007). The Challenge of Large-Scale IT Projects, World Academy of Science, Engineering and Technology, International Journal of Social, Education, Economics and Management Engineering, Vol:1, No:9, 2007, Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.193.3858&rep=rep1&type=pdf, Accessed on: 29th October, 2016.
[95] R Frese (2003). Project success and failure: What is success, what is failure, and how can you improve your odds for success? 16th December 2003, Available at: http://www.umsl.edu/~sauterv/analysis/6840_f03_papers/frese/, Accessed on: 29th October, 2016.
[96] Modern Airliners (no date). Boeing 787 History, the creation of the Dreamliner, Available at: http://modernairliners.com/boeing-787-dreamliner/boeing-787-dreamliner-history, Accessed on: 29th October, 2016.
[97] M Mecham (2011). 787: The Century’s First Jet to Fly, Sep 26, 2011, Aviation Week & Space Technology, Available at: http://aviationweek.com/awin/787-century-s-first-jet-fly, Accessed on: 29th October, 2016.
[98] A Casey (2012). Boeing 7E7 case study, 21 November 2012, Available at: https://prezi.com/64hzwunhvlap/boeing-7e7-case-study/, Accessed on: 29th October, 2016.
[99] P Ausick (2014). Why a Boeing 787-9 Dreamliner Costs $250 Million, June 17, 2014, 24/7 Wall St, Available at: http://247wallst.com/aerospace-defense/2014/06/17/why-a-boeing-787-9-dreamliner-costs-250-million/, Accessed on: 29th October, 2016.
[100] Y Zhao (2013). Why 787 Slips Were Inevitable? Available at: http://zhao.rutgers.edu/787-paper-12-02-2013.pdf, Accessed on: 29th October, 2016.
[101] J Koster et al (2012). Hyperion UAV: An International Collaboration, Conference: AIAA-ASM, Available at: https://www.researchgate.net/publication/263045116_Hyperion_UAV_An_International_Collaboration, Accessed on: 31st October 2016.
[102] I Moir and A Seabridge (2008). Aircraft Systems: Mechanical, electrical, and avionics subsystems integration, 3rd Ed, John Wiley & Sons, Ltd.
[103] J Koster et al (2011). Workforce Development for Global Aircraft Design, Conference Paper, Jan 2011, Available at: https://www.researchgate.net/figure/267593377_fig1_Figure-1-Boeing-787-Global-Work-Breakdown-Structure-1, Accessed on: 31st October 2016.
[104] M Thurber (2009). An All-composites Learjet, Business Jet Traveler, Wednesday, April 1, 2009, Available at: http://www.bjtonline.com/business-jet-news/an-all-composites-learjet, Accessed on: 31st October 2016.
[105] U Irfan (2014). How Lithium Ion Batteries Grounded the Dreamliner: Official report on Boeing 787 fires tells a cautionary tale about advanced batteries, ClimateWire, December 18, 2014, Scientific American, Available at: https://www.scientificamerican.com/article/how-lithium-ion-batteries-grounded-the-dreamliner/, Accessed on: 31st October 2016.
[106] Telegraph Travel (2013). Boeing 787 Dreamliner: a timeline of problems, 28 July 2013, Available at: http://www.telegraph.co.uk/travel/comment/Boeing-787-Dreamliner-a-timeline-of-problems/, Accessed on: 29th October, 2016.
[107] C Negroni (2015). FAA Is Doing Nothing About Continued Boeing Dreamliner Battery Failures, 29 October 2015, Gizmodo, Available at: http://www.gizmodo.com.au/2015/10/faa-is-doing-nothing-about-continued-boeing-dreamliner-battery-failures/, Accessed on: 29th October, 2016.
[108] P Marks (2013). Grounded: Where the Boeing Dreamliner went wrong, 6 February 2013, New Scientist, Available at: https://www.newscientist.com/article/mg21729036-700-grounded-where-the-boeing-dreamliner-went-wrong/, Accessed on: 29th October, 2016.
[109] W Kaufman (2013). Dreamliner Woes Expose FAA's Potential Weak Spots, January 23, 2013, NPR Business, Available at: http://www.npr.org/2013/01/23/170096977/dreamliner-woes-expose-faas-potential-weak-spots, Accessed on: 29th October, 2016.
[110] D Rushe (2013). Why Boeing's 787 Dreamliner was a nightmare waiting to happen, 18 January 2013, The Guardian, Available at: https://www.theguardian.com/business/2013/jan/18/boeing-787-dreamliner-grounded, Accessed on: 8th November 2016.
[111] AeroInside (no date). Airline Incidents for aircraft type Boeing 787-8 Dreamliner, Available at: https://www.aeroinside.com/incidents/type/b788/boeing-787-8-dreamliner, Accessed on: 31st October 2016.
Appendix A Glossary of terms and acronyms
5DPM
Representing ‘Five Dimensions Project Management’. This is a project management methodology for complex projects as advocated by the US Institute for Transportation [30].
Ambiguity
The degree to which knowledge about functional variables is complete [25].
Artefact
A system or project deliverable described without defining the form in which it is produced, e.g. a requirements artefact may be a document, spreadsheet or database page.
Bow Tie Diagram
A visual model capable of showing an overview of risk measures [10].
Complexity Aspects
A general description of the components of complexity, including the Complexity Themes and Complexity Criteria within the Complexity Matrix.
Complexity Assessment
The analysis of complexity on a project using Complexity Themes and Complexity Criteria against the WBS.
Complexity Criteria
A means of decomposing complexity using the descriptions of ‘Uncertainty’, ‘Ambiguity’, ‘Emergence’, ‘Non-linearity’ and ‘Program-size complexity’.
Complexity Management Framework
The process for managing complexity from Complexity Assessment through to Risk Management.
Complexity Matrix
A matrix of Complexity Themes and Complexity Criteria to facilitate the measurement of Technical Development Complexity.
Complexity Profile
The way that complexity differs and changes across the Complexity Themes, WBS and over the Development Lifecycle.
Complexity Themes
A means of decomposing a development into nine aspects against which Complexity Criteria can be considered. These include (Project) environmental constraints, Development process, Internal organisation, Contractual management, Stakeholders, Regulatory interfaces, External (system) interfaces, Technology, Internal (system) interfaces.
Controlling (of project activities)
Control comprises tracking performance against agreed plans and taking the corrective action required to meet defined objectives [45].
Coupling (of project activities)
Project activities that are interdependent on one another for their completion [67].
Critical Success Factors
Essential areas of activity that must be performed well if you are to achieve the mission, objectives or goals for your business or project [53].
CSF
See ‘Critical Success Factors’.
Demming Cycle
‘A process that will ensure you plan, test and incorporate feedback before you commit to implementation’ [16].
Design Structure Matrix
A graphical model to perform both the analysis and the management of complex systems [72].
Development Scorecard
A combination of a number of Performance Measures across the depth and breadth of technical development to represent the overall status.
DSM
See ‘Design Structure Matrix’.
Development lifecycle
The process for planning, creating, testing, and deploying a system. Also the period of time during which the system is developed [28].
Earned Value Management
A commonly used approach for the planning, management and control of projects [45].
Emergence
The extent to which changes to system configuration lead to unexpected behaviours and interactions, and to the re-evaluation of derived system requirements.
EVM
See ‘Earned Value Management’.
Field
A description at the head of a spreadsheet column or document table column, or of an attribute within a database.
Functional organisational structure
An organisational structure which is divided based on functional areas, such as engineering, commercial and human resources [31].
Gantt chart
Graphical model of activities using the length of bars to represent duration [71].
INCOSE
International Council on Systems Engineering [28].
Key Performance Parameter
A critical performance parameter which, if not satisfied, can cause a concept design or system to be selected for re-evaluation or cancellation [59].
KPP
See ‘Key Performance Parameter’.
Lessons learned
Documented experiences that can be used to improve the future management of projects [45].
Matrix organisation structure
An organisational structure in which employees report to both a functional manager and a manager in the project area [45].
MDM
See ‘Multi-Domain Matrix’.
Measure of Effectiveness
Relating to the Validation of the most important Stakeholder Requirements; measures of the successful realisation of mission or operational objectives [54].
Measure of Performance
Relating to the physical or functional attributes during operation that are measured or estimated under specified testing and / or operational environmental conditions [54].
Measures of Suitability
Relating to the Stakeholder Requirements of safety, capacity and the various ‘ilities’ of reliability, availability, maintainability etc. Used in conjunction with MOEs [54].
MOE
See ‘Measure of Effectiveness’.
MOP
See ‘Measure of Performance’.
Monitoring (of project activities)
The general activity of gathering and analysing data to allow the status and progress of a project to be assessed [45].
MOS
See ‘Measures of Suitability’.
Multi-Domain Matrix
MDM allows the analysis of a system across multiple domains, i.e. those of processes, organisation and products [67].
Non-linearity
The extent to which a single small change to one system requirement impacts other system requirements. Linear behaviour is a proportionate, one-to-one relationship, whereas non-linear behaviour is a disproportionate, one-to-many relationship.
Object Based Modelling
A process modelling technique [11].
OBM
See ‘Object Based Modelling’.
OBS
See ‘Organisational Breakdown Structure’.
Organisation
A group of people and facilities with an arrangement of responsibilities, authorities and relationships [ISO 9000:2000] [28].
Organisational Breakdown Structure
Hierarchical graphical model of an organisation
PBS
See ‘Product Breakdown Structure’.
PDCA
Plan-Do-Check-Act – also see Deming Cycle.
Performance Measures
Measures of the status of project activities and products using defined techniques and methods, including aspects such as their progress, cost and quality.
Planning (of project activities)
Planning determines what is to be delivered, how much it will cost, when it will be delivered, how it will be delivered and who will carry it out [45].
Process
A set of interrelated or interacting activities which transforms inputs into outputs [ISO 9000:2000] [28].
Product
Any hardware, software or firmware contained within a system [53], i.e. any project deliverables or physically delivered scope.
Product Breakdown Structure
Hierarchical graphical representation of all the project’s ‘products’. Also known as the PBS [53].
Program-size complexity
Relates to the minimum amount of information required to describe a process, system or organisational requirements.
Project
An undertaking with ‘a defined beginning and end’, ‘a specific, preordained goal or set of goals’, ‘a series of complex or interrelated activities’ and ‘a limited budget’ [52].
Requirement
A statement that identifies a system, product or process’ characteristic or constraint, which is unambiguous, can be verified, and is deemed necessary for stakeholder acceptability [28].
Residual risks
A risk that remains after all efforts to identify, eliminate and reduce the likelihood of its occurrence have been made.
(Project) risks
A risk is an event, or a set of related events that are possible and would impact on the objectives of the project. The impact can be either positive (an “opportunity”) or negative (a “threat”) [53].
(Project) risk management
The process of identifying and assessing risks, and of planning and implementing risk responses, communicating as appropriate [53].
RVTM
Requirements Verification Traceability Matrix. A means of ensuring each requirement is sufficiently defined, verified and validated.
Scope
The totality of the outputs, outcomes and benefits and the work required to produce them [45].
Solution class
This is a category of solution that will be considered during concept design and feasibility studies. Solution classes may be an approach, e.g. an engineering versus a managerial solution, or centred on the technology, e.g. dry versus wet storage for spent fuel [28].
Stage
A period within the life cycle of a system that relates to the state of the system description or the system itself [28].
Stage Gate
A decision point within the development lifecycle that aligns with the beginning or end of a stage of development.
System
‘An integrated set of elements, subsystems, or assemblies that accomplish a defined objective. These elements include products (hardware, software, firmware), processes, people, information, techniques, facilities, services, and other support elements’ [28].
System Element
Any part of a system without being specific as to its status as a subsystem or component.
Systems Engineering
‘An interdisciplinary approach and means to enable the realisation of successful systems’ [28].
Technical Development Complexity
Measurement of complexity across Complexity Themes and Complexity Criteria, applied across the development WBS.
Technical Performance Measure
Measured attribute relating to how well a system element satisfies a System Requirement [54].
Technology Readiness Level
A measure of maturity of a particular system element allowing the relative risks to be considered or as a method of acceptance at a system development review [85].
TPM
See ‘Technical Performance Measures’.
TRL
See ‘Technology Readiness Level’.
Uncertainty
Likelihood of unexpected events occurring, e.g. stakeholder requirements changing significantly during early system development, or assumptions being found to be incorrect.
Validation
The provision of objective evidence that a system, when in use, fulfils its business or mission objectives and Stakeholder Requirements, achieving its intended use in the intended operational environment [28].
Verification
The provision of objective evidence that a system or system element fulfils its specified requirements and characteristics [28].
WBS
See ‘Work Breakdown Structure’.
Wicked problem
System development that is ‘highly resistant to resolution’ [3].
Work Breakdown Structure
Hierarchical graphical representation of all the project’s tasks. Also known as the WBS [28].
Appendix B Existing management frameworks
There has been a good deal of literature on the subject of managing projects and technical development activities, much of it based on a blend of theory and practical experience. A new framework will take the existing management frameworks and models and attempt to reconcile inconsistencies in approach and content.

The predominant systems engineering model is the ‘V-model’, as shown in Figure 1. Though it does not specifically address complexity, it has a useful requirements-centric view of technical development and will be used during the dissertation. The satisfaction of requirements is a convenient way of looking at complexity, and the monitoring and control of requirements should form an integral part of the way that a complex system is managed. The V-model represents a typical development lifecycle, so the phasing descriptions within this model will be used to profile complexity.

Figure 1. The V-model [28].

Of the systems engineering and project management methodologies previously discussed in the literature survey [4], the Strategic Highway Research Program’s 5DPM was of particular interest. The methodology is described in some detail in both the ‘Guide to Project Management Strategies for Complex Projects’ [15] and ‘Managing Mega-Project Complexity in Five Dimensions’ [31]. Rather than considering the usual three project dimensions, i.e. the iron triangle [32], it introduces the concepts of ‘financing’ and ‘context’. 5DPM also considers CSFs from the perspective of complexity, and does so within several feedback cycles. Together these represent considerable synergy with many of the concepts that the author previously felt were worthy of consideration. Finance becomes of utmost consideration as both the overall magnitude of the development and its risk increase. This is exemplified by the difficulties experienced in securing investment for the UK’s nuclear new-build programme [33].
For the purposes of technical development, financing is an imposed constraint, by virtue of being outside the direct control of the management, and will be treated as such in the model. The fifth dimension of the 5DPM model is that of context, which describes the project environment, including potential constraints and interfaces. The process can be decomposed into the following steps:
· Review project factors in each of the five areas;
· Identify and prioritise complexity factors;
· Develop the 5DPM complexity map;
· Define CSFs, with sub-process steps such as assembling the project team, selecting project arrangements and preparing an early cost model and finance plan;
· Develop a project action plan to address resource issues;
· Re-evaluate the complexity map on commencement of the project.

Figure 2. Five dimensional management model [15].

5DPM describes reviews at defined intervals with a repeat of the above steps. This would most naturally align with a Stage Gate type governance structure [28], but it could be envisaged that interim reviews would be undertaken on a periodic basis if stage gates were deemed too far apart for effective control. Overall the model does not try to model complexity itself, but rather models the component parts (within the five dimensions) and manages them to better manage a complex project environment. By focussing on these five dimensions the resulting assessments may provide a superficial view of project-management-orientated aspects only. Despite this there is considerable merit in the high-level approach.

The Helmsman Institute complexity assessment [16] is advocated by the International Centre for Complex Project Management [34]. This again has five areas of general consideration, consisting of:
· Context Complexity, comprising 3 individual criteria (number of stakeholders, stakeholder alignment and stakeholder power);
· People Complexity, comprising 4 individual criteria (multi-disciplinary, cross-discipline familiarity, breadth of changes and paradigm shift);
· Ambiguity, comprising 6 individual criteria (approach uncertainty, assumption uncertainty, breadth of assumptions, level of abstraction, risk and cost estimation);
· Technical Challenge, comprising 3 individual criteria (integration complexity, system development complexity and impact on infrastructure);
· Project Management Complexity, comprising 13 individual criteria (level of accountability, project team experience, project journey, schedule complexity, size, structure, contracting mechanism, rollout, variation, flexibility, resources, timeframes and financial).
Within these categories there are specific factors.
Those relating to technical only are:
· Integration Complexity – the number of technical systems requiring integration and the nature of the interfaces between them;
· System Development Complexity – the level of overall technical system maturity and the amount of development effort required;
· Impact on Infrastructure – the extent to which the project impacts on the infrastructure and operating model of the organisation.
Each category has its own factors which span the entire project scope. Many could be equally applicable in other categories, and it is important to appreciate those that have an influence on technical development:
· Stakeholder Numbers, describing the number of stakeholder groups involved in the governance of the project. It would be expected that the impact of this would be greatest during Stakeholder Requirements, but issues often emerge during later validation activities, leading to scope creep [44];
· Stakeholder Alignment, describing the degree of alignment between these stakeholders, and affecting development similarly to stakeholder numbers;
· Multi-disciplinary, describing the number of core disciplines involved in project delivery that need to work together at the same time, influencing the integration activities and the number and nature of interfaces. The definition does not specifically reference contracts and sub-contracts and may imperfectly describe this aspect. An example is the same engineering discipline working independently on interfacing sub-systems through different organisations;
· Approach Uncertainty, describing the extent to which the project can rely on previous organisational experience, presumably relating to how novel the technology is;
· Assumption Uncertainty, describing the level of uncertainty associated with key project assumptions, presumably reducing as the requirements are defined;
· Breadth of Assumptions, describing the number of key project assumptions rather than how uncertain the individual assumptions are;
· Level of Abstraction, describing the level of conceptual abstraction and complexity inherent in the project. This definition encompasses the impact, fully or partially, of many of the other terms. Abstraction can relate to how constrained the solution is by both previous experience and the high-level requirements, and will be reduced as the development matures; i.e. a requirement to build an office block in the present is subject to far less abstraction than a requirement for putting a man on the moon in the early 1960s;
· Risk, describing the level of risk associated with the project and the entities at risk, again with much overlap with many of the other terms;
· Project Team Experience, describing the level of experience of the project team in delivering this type of project, with similarity to approach uncertainty;
· Project Journey, describing the number of different project streams and phases, with some overlap with multi-disciplinary;
· Contracting Mechanism, describing the ‘dominant contracting arrangements’ on the project, with some overlaps with project journey and multi-disciplinary;
· Rollout, describing the type of rollout strategy needed to deliver the project into operation. This will invariably constrain the chosen verification and validation strategy;
· Variation, describing the number of solution variants that need to be delivered, mostly prevalent in manufacturing domains such as the automotive industry. This generally adds complexity to configuration management and constrains the optimisation of technical development [112];
· Flexibility, describing the extent to which there is flexibility around time, cost and quality;
· Resources, describing the range of resources required and the availability of key resources. Ideally this should be an output of the assessment, not an input, as resource and expertise requirements should be determined as part of a planning activity rather than being imposed as a constraint at the outset.
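Criteria catalogues of this kind are typically reduced to per-category scores so that the dominant complexity drivers stand out. A minimal sketch in Python (the category names follow the Helmsman structure, but the criterion scores, the 1–10 scale and the equal weighting are all hypothetical, not taken from the instrument):

```python
# Reduce per-criterion scores to category scores and rank the dominant
# complexity drivers. All scores are invented for illustration; only
# the category names and criterion counts follow the Helmsman model.

ASSESSMENT = {
    "context":            [6, 4, 5],           # 3 criteria
    "people":             [7, 5, 6, 4],        # 4 criteria
    "ambiguity":          [8, 6, 7, 5, 6, 7],  # 6 criteria
    "technical":          [9, 8, 6],           # 3 criteria
    "project_management": [5] * 13,            # 13 criteria
}

def category_scores(assessment):
    """Equal-weight average of the criterion scores in each category."""
    return {c: round(sum(v) / len(v), 1) for c, v in assessment.items()}

def top_drivers(scores, n=2):
    """The n highest-scoring categories, i.e. where attention should go."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

scores = category_scores(ASSESSMENT)
print(scores)
print(top_drivers(scores))  # ['technical', 'ambiguity']
```

In practice each criterion would carry its own weight and scoring guidance; the point of the sketch is only that such assessments collapse many subjective judgements into a small ranking that then steers resourcing.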
The categories and factors are all relevant and demand consideration within any complexity framework. The assessment introduces the concept of ambiguity in connection with complexity, which will be considered later. However, it has many inconsistencies and omissions due to its structure, and the grouping of the five high-level areas of consideration is not logical. It potentially overlooks many technical considerations by positioning relevant criteria in other categories and considering just three high-level factors in the Technical Challenge category.

Other papers that discuss the importance of considering ambiguity include Pich et al [25] and McGowana et al [35]. Both highlight the role of ambiguity as a distinctly separate property, though closely coupled, to that of uncertainty.

Frank et al’s NTCP framework [43] attempts to profile a development type through consideration of four ‘dimensions’, to allow the development process to be tailored as required. These are:
· Novelty – considering innovative versus established technologies;
· Technology – rating how high-tech the chosen technology is;
· Pace – considering development schedule constraints;
· Complexity – from the perspective of system size.
The framework is unsatisfactory in that it neglects many of the components of complexity and does not reflect the interactions of the other three dimensions on complexity. It does, however, focus on technical development and the strong influence that the technology must have on the chosen development process.

Saunders et al [113] describe uncertainty within safety-critical industries as having the components of ‘content’, ‘context’, ‘capability’ and ‘culture’. These each contain several high-level themes, including project complexity itself. The consideration of uncertainty alongside complexity, regardless of the relationship between the two, is an important concept that will be included.
Saunders et al developed an ‘Uncertainty Kaleidoscope’ [114] which develops the concept further into six elements:
· Environmental, composed of external constraints such as stakeholders and market conditions;
· Capability, relating to resource and expertise;
· Individual, relating to concepts such as bounded rationality and decision making;
· Complexity, which relates to the project itself, including the nature of the supply chain, processes and technology;
· Information, which can be translated into ambiguity;
· Temporal, which is essentially schedule-related constraints.
There are again inconsistencies and overlaps between the elements, which may make application troublesome, especially without robust guidance.

The consideration of complexity is of interest within the wider domain of business in general. The concept of VUCA (volatility, uncertainty, complexity and ambiguity) [36] considers many of the previously discussed themes (Figure 3. The VUCA framework [36]). Volatility can be viewed as the result of one, or the product of several, characteristics of complexity. Here it is described as a factor in its own right, and complexity is described alongside the others rather than being composed of them. Nevertheless, the presentation of an unpredictable business environment in terms of a limited number of characteristics is of interest.

Maurer [115] considers complexity from the overlapping perspectives of ‘market’, ‘product’, ‘organisation’ and ‘process’. This differentiates between the external source of market complexity and the other, internal, sources of complexity. The identification of such external complexity sources from the relevant perspective is important. Simplistic as the model is, it is more consistent than, say, the Helmsman Institute’s model, which mixes the types of complexity in an unsatisfactory manner.

The Association for Project Management’s (APM) complexity assessment questionnaire [116] has 48 questions which are arranged in no particular order and are not grouped into categories. Well over half of the questions are not directly related to complexity itself but rather to how the complexity may be managed; these can be considered success constraints, risks or responses. The assessment contains a ‘scenario template’ with spreadsheet headings including ‘system behaviour’, comprising ‘connectiveness’, ‘dependency’ and ‘system dynamics’, and ‘ambiguity’, comprising ‘emergence’ and ‘uncertainty’.
Other headings relate more closely to responses to complexity rather than complexity itself, which again shows an inconsistency in the approach. There is no guidance available on the definitions or on the use of the assessment results.

Much of the literature references complexity without sufficiently breaking it down into its component parts. A classical definition of the components of complexity was described in the literature survey and is as follows [37]:
· ‘Kolmogorov-Chaitin complexity’, also known as Program-Size Complexity [38], is the quantity of information that is required to represent an entity. A difficulty with complex projects is that of modelling the project sufficiently to allow it to be understood;
· ‘Nonlinearity’ creates issues in that relatively small changes can have a huge impact on outcomes, and is sometimes known as the butterfly effect. Thus a minor change to an artefact within a project can cause rework of other artefacts, either as small changes to many artefacts or large changes to just a few;
· ‘Emergent Complexity’ is that of unexpected interactions, and will exacerbate the above factors. Changes may not be obvious, and may be difficult to appreciate fully or even to detect.
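The first of these components can be made concrete with a compression proxy: Kolmogorov-Chaitin complexity is uncomputable in general, but compressed length is a standard practical stand-in for the minimum description size. A minimal sketch (the task descriptions are invented; `zlib` from the Python standard library provides the compression):

```python
import random
import zlib

def description_size(text):
    """Compressed length as a crude proxy for program-size complexity."""
    return len(zlib.compress(text.encode()))

# A large development made of duplicated, low-interdependency tasks...
repetitive = "install pump; test pump; " * 40  # 1000 characters

# ...versus a smaller but intricate, non-repeating description.
random.seed(0)
intricate = "".join(random.choice("abcdefgh ;") for _ in range(600))

# The bigger-but-repetitive description needs far less information
# to represent, i.e. it has lower program-size complexity.
assert description_size(repetitive) < description_size(intricate)
print(description_size(repetitive), description_size(intricate))
```

This mirrors the point made below: sheer size and minimum description size are different things, and it is the latter that drives this component of complexity.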
Essentially the first component is the number of interfaces, relating somewhat to the development magnitude but also to its nature. A large technical development may contain duplicate tasks and low interdependency, limiting its program-size complexity, while a comparably smaller technical development may conversely have many interdependencies and highly intricate activities. The latter two components collectively can be closely aligned with the previously described volatility within the VUCA framework: a relatively small change results in a far-reaching or disproportionately large impact.

Expanding the definition of emergence for the benefit of its use in an assessment, we may say that it has a number of properties [39]:
· Radical novelty – new and unanticipated properties emerge from each interaction;
· Coherence – though unexpected, the new properties are consistent with rules and behaviours;
· Wholeness – emergence creates a new system that is greater than the sum of its parts;
· Dynamic – changes from emergence will continue to evolve as long as there is emergence;
· Downward causation – the system as a whole dictates the behaviour of its parts, as well as the system being influenced by the interaction of its parts.
Emergence may be caused by two common dynamics, either separately or in conjunction:
· No one is in charge – thinking in terms of a technical development organisation, this could be characterised by a network-centric organisation with a decentralised structure;
· Simple rules engender complex behaviour – using the example of an organisation, this may be a large hierarchical organisation with many functions.

This section has described a number of different ways of decomposing and treating complexity from varying perspectives. The complexity framework will adapt the broad methodology of the 5DPM process while considering determined types of complexity against the relevant themes within the technical development.
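The nonlinear, far-reaching impact described above can also be illustrated with a small dependency structure: a single change propagates transitively, so its rework footprint depends on where it lands rather than on its size. A minimal sketch (the artefact names and dependencies are invented for illustration):

```python
# Transitive change propagation across project artefacts.
# DEPENDENTS[x] lists the artefacts that need rework if x changes.
DEPENDENTS = {
    "req_A":    ["design_1", "design_2"],
    "design_1": ["code_1", "test_1"],
    "design_2": ["code_2"],
    "code_1":   ["test_1"],
    "code_2":   ["test_2"],
    "test_1":   [],
    "test_2":   [],
}

def rework_set(changed):
    """Every artefact reachable (transitively) from the changed one."""
    seen, stack = set(), [changed]
    while stack:
        for dep in DEPENDENTS.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# One small requirements change touches most of the system...
print(sorted(rework_set("req_A")))   # six downstream artefacts
# ...while a similar-sized change lower down touches almost nothing.
print(sorted(rework_set("code_2")))  # ['test_2']
```

The same dependency-matrix view underlies DSM/MDM techniques, which is one reason they appear attractive for monitoring this aspect of complexity.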
Much of the criteria contained within the existing literature will be re-categorised to fit within the model, to maintain consistency. It must also be considered that complexity is in the eye of the beholder: the observer will have experience and a perspective that will influence subjective analysis, based on their ‘bounded rationality’ [40]. The assessment criteria will therefore need to be qualified as much as possible to guide interpretation.
References
[112] C Krueger (2015). A new paradigm for product line diversity, variant management, and complexity management in manufacturing, Industrial Embedded Systems, March 31st, 2015, Available at: http://industrial.embedded-computing.com/guest-blogs/a-new-paradigmfor-product-line-diversity-variant-management-and-complexity-management-in-manufacturing/, Accessed on: 26th October, 2016.
[113] F C Saunders et al (2013). Understanding Project Uncertainty in Safety-critical Industries, Conference paper 2013, Available at: http://www.researchgate.net/publication/272784093_Understanding_Project_Uncertainty_in_Safety-critical_Industries, Accessed on: 26th October, 2016.
[114] F C Saunders et al (2015). Conceptualising uncertainty in safety-critical projects: A practitioner perspective, International Journal of Project Management, Volume 33, Issue 2, February 2015, Pages 467–478, Available at: http://www.sciencedirect.com/science/article/pii/S0263786314001392, Accessed on: 26th October, 2016.
[115] M S Maurer (2007). Structural Awareness in Complex Product Design, PhD Dissertation, University of Munich, Available at: http://mediatum.ub.tum.de/doc/622288/file.pdf, Accessed on: 27th October, 2016.
[116] APM (2014). Complexity Assessment 02-2014, Available at: http://www.slideserve.co.uk/complexity-assessment-02-2014, Accessed on: 27th October, 2016.
Appendix C Critical Success Factors
1. Project environmental constraints
   a. Business case
      i. Strong business case/sound basis for project
      ii. Clear realistic project objectives
      iii. Adequate budget
      iv. Specification of corporate and business strategy
   b. Senior management
      i. Support from senior management
      ii. Strong project sponsor/champion
      iii. Political stability
   c. Project team
      i. Competent and qualified project manager
      ii. Composition of project team in terms of experience and capability
      iii. Sufficient/well allocated resources
      iv. Ensured motivation of project employees
      v. Qualification and training measures for employees
      vi. Development of a project management culture
   d. Planning
      i. Realistic project goals in place
      ii. Strong, appropriately detailed and realistic project plan kept up to date
      iii. Holistic cost calculation undertaken
      iv. Reliable cost assessment and resource allocation
      v. Project broken down into manageable activities (‘chunk size bites’)
      vi. Planned close down/review/acceptance of possible failure
   e. Corporate and project process
      i. A well understood and mature project governance process
      ii. Defined corporate governance
      iii. Effective change management (project)
      iv. Internal business processes in place
      v. Optimisation of business processes
      vi. Compliance with business processes
      vii. Definition of a continuous improvement process
      viii. Integrated quality management
      ix. Proactive risk management process
      x. Tool support of processes
      xi. Definition of a continuous improvement process
      xii. Project cancellation process in place
2. Development process
   a. Process
      i. Definition of reference processes
      ii. Good systems for communication/feedback
      iii. Process feedback mechanisms in place
      iv. Process control mechanisms in place
      v. Responsive and flexible process to meet client needs
      vi. Effective technical change management processes
      vii. Technical risk management process
      viii. Past experience of management methodologies and tools is available
      ix. A well understood and mature design review process is in place
      x. Integrated quality management is in place
      xi. Defined continuous improvement process is in place
      xii. Optimisation of business processes has been undertaken
      xiii. There is compliance with business processes
      xiv. Continuity of processes in the value chain
      xv. Integration of trades in the value-add process
      xvi. Fast transfer of information
      xvii. Re-use of engineering activities where applicable
      xviii. Trouble-shooting mechanisms in place
      xix. Test early, test often philosophy is used during development (NB)
      xx. Modelling and prototyping of system elements is used (NB)
   b. Plan
      i. Critical activities are identified (NB)
      ii. Enhanced planning is applied against areas of criticality and uncertainty (NB)
      iii. Clear realistic development objectives
      iv. Appropriate development planning technique has been chosen
      v. Strong, appropriately detailed and realistic development plan kept up to date
      vi. Development and project plans are properly integrated
      vii. Performance measures tailored to monitor areas of criticality and uncertainty
      viii. Effective monitoring/control of requirements and development deliverables
      ix. Development broken down into manageable activities (‘chunk size bites’)
   c. Development specific
      i. Well-defined design standards up front
      ii. Correct choice of management methodologies and tools
      iii. Application and database support is available as required
      iv. Standardisation and/or modularisation is employed within development
   d. Tool support
      i. Consolidated IT landscape
      ii. Accessibility of tools and methods through the process
3. Organisation
   i. Good leadership
   ii. Decentralised decision making (NB) – increased resilience (Network Theory) / decreased control
   iii. Organisational adaptation/culture/structure
   iv. Precise and transparent definition of tasks, competences and responsibilities
   v. Transparent definition of responsibilities
   vi. Appreciation of different viewpoints
   vii. Competent and qualified development manager
   viii. Composition of development team in terms of experience and capability
   ix. Ensured motivation of development team
   x. Qualification and training measures for development team
   xi. Development of a project management culture
   xii. Degree of collocation of teams
   xiii. Coherent, self-organising teamwork
   xiv. Colocation of teams
   xv. Domain-specific know-how
   xvi. Competence in technology and technology management
   xvii. Re-use knowledge and experience from previous projects (lessons learned)
   xviii. Appropriate techniques to aid identification of organisational dependencies and interfaces
   xix. Degree of collaboration
4. Contractual management
   i. Effective monitoring/control
   ii. Effective acceptance/approval processes
   iii. Good performance by suppliers/contractors/consultants
   iv. Rigorous pre-qualification process (NB)
   v. Clearly understood contractual interfaces (NB)
   vi. Colocation of teams
   vii. Consequent claim and contract management
   viii. Integrated quality management
   ix. Definition of a continuous improvement process
   x. Optimisation of business processes
   xi. Compliance with business processes
   xii. Alignment of processes across organisations
   xiii. Project breakdown into logical packages
   xiv. Project breakdown into manageable activities (‘chunk size bites’)
   xv. Continuity of processes in the value chain
   xvi. Consolidated IT landscape
   xvii. Definition of reference processes
   xviii. Integration of trades in the value-add process
5. Stakeholders
   i. User/client involvement
   ii. Client/user acceptance
   iii. Decisions are agreed and documented (NB)
   iv. Active management of client/user integration
   v. Expectations are adequately managed (NB)
   vi. Continuity of stakeholders throughout (NB)
   vii. Clear identification and integration of project stakeholders
   viii. Free access to, and sufficient resource allocated on, the stakeholder side (NB)
   ix. Internal stakeholders receive appropriate training/briefing (NB)
   x. Ensure operators are adequately represented
   xi. Ensure the entire system lifecycle is represented
   xii. Clear purpose is communicated to stakeholders
   xiii. Clear and consistent communication (formal and informal)
   xiv. Early identification and management of conflicting interests
   xv. Stakeholder-specific information and communication policy
   xvi. Consideration of project environment using a technique such as PESTLE (Political, Economic, Socio-cultural, Technological, Legal and Environmental) [117]
   xvii. Harmonised objectives amongst engineers and management
6. External interfaces
   i. Clearly identified and understood interfaces (NB)
   ii. Few external interfaces (NB)
   iii. Low likelihood of change amongst interfaces (NB)
7. Regulatory interfaces
   i. Clearly identified and understood interfaces (NB)
   ii. Good relationship with regulators (NB)
   iii. Clear lines of communication with regulators (NB)
   iv. Sufficient time and resources allocated to manage regulators (NB)
   v. Regulatory environment is stable (NB)
8. Technology
   i. Proven/familiar technology
   ii. Technology level is as low as practical to meet requirements (NB)
   iii. Well-defined standards up front
   iv. Pursuing as simple a design as possible
   v. Continuous improvement process for products
   vi. Test early, test often philosophy is used during development (NB)
   vii. Modelling and prototyping of system elements is used (NB)
   viii. System element maturity is monitored
9. System integration
   i. Early verification and validation planning
   ii. Early verification testing (NB)
   iii. Integration is planned adequately in terms of time and resources (NB)
   iv. System element integration maturity is monitored
References
[117] PESTLE Analysis (2016). What is PESTLE Analysis? A Tool for Business Analysis, Available at: http://pestleanalysis.com/, Accessed on: 17th December 2016.
Appendix D Response summary
Appendix E Complexity questionnaire
[Questionnaire pages: the question text and rating-scale response charts could not be recovered from the extraction; only the free-text responses below survive.]

Showing 3 responses
· Minimise number of interfaces
· note: one has to take for granted that you deal with a VUCA environment nowadays, so nothing is stable, and anyway you have no control over the regulatory environment, while you can and must act proactively on the 4 other CSF.
· an understanding of regulatory caprices (high)
[Questionnaire page: ratings not recoverable from the extraction. Surviving free-text response:]
note: do not understand the wording of second CSF
[Questionnaire page: ratings not recoverable from the extraction.]

Showing 3 responses
· Integration of Learning across organisation - Very High.
· Interface specification agreed between owners of interfacing products. Very High.
· note: do not understand the wording of second CSF
[Questionnaire pages: question text and rating-scale charts not recoverable from the extraction; only the free-text responses below survive.]

Showing 15 responses
"Maturity," is a falsely applied word here: either the work is done or it isn't. Major Capex Projects - To determine according to the size of the organisation. Should be done at early stage gate and well before build stage. Not sure Useful for complex projects Only for very large projects large size and complex Large, complex projects. Again, may need some tailoring of the criteria, and there needs to be some sense of proportionality - is 200 too many? Depends on size and complexity of the project Never used this - no experience 200 criteria ? Come on !!!! Based on best practices ??? Best practices do not apply on the left side of the Cynefin model. Wrong paradigm and beliefs applied to complexity. Willgive you a lot of data and the illusion of control but not a successful project. Complex and long projects over £10M Again, PDRI reports upon 'lack of progress' - useful, but equal emphasis must be given to recovery processes Dependant on size and complexity of project - 200 criteria would seem very high for any project I don't understand this question
[Questionnaire page: question text not recoverable from the extraction; only the free-text responses below survive.]

Showing 16 responses
· The schedule and the budget are the only items to reference when determining success or failure. Simply because imagined justifications for anything may be dreamed up using digital means, crap is still crap. Tie all communications to schedule and budget
· simply: either the project is on or it's off schedule and/or budget
· Design maturity i.e. model KPIs. Sorry ran out of time to complete survey. I find the questions polarized to identifying known specific approaches
· Resource utilization - effective at monitoring / communicating use of resources.
· High level multi-discipline meetings to ensure as early as possible that all aspects of the project are optimised and understood.
· Keep the complexity measures as simple as possible! Cost, quality, safety and time are the 4 fundamentals of project performance. All measures should relate to one of those.
· Number of versions of a document before final release, indicating authorship issues, resulting in training and restructuring of documents
· Detailed uncoded daily allocation sheets, bonus systems, job and knock, Artimas
· You try to measure too many things. You use an approach that fits with a «complicated» project (right side of the Cynefin model), not one that will work with «complexity». Using the wrong paradigm and beliefs. Managing complexity is not an engineering/technical job, it is a holistic socio-economical job. You want a framework to manage complexity (VUCA), get this one: http://mplaza.ca/product/adaptive-strategy-framework/. Best regards. Claude Emond
· Many forms of performance measures out there, I think you have captured most. Personally I don't rate them and although their aim of managing system interfaces and the relative maturity thereof is correct - they generally don't add much value. There would definitely be mileage in getting back to basics: systems definition, boundary limit definition and design review (how could we make them more robust? I.e. considering system interfaces at those reviews etc.)
· Early establishment of a mature & realistic WBS at tender stage from where the project accurately references its cost/schedule performance. Feedback process working to correct further tender stage WBS formation to ensure bidding is transparent and accurate.
· buffer and budget contingency management
· Health and safety assurance. Adequately ensuring zero harm
· when projects go wrong, the project plans are not helpful.
· I started this survey but didn't have time to complete it - figured it was best to submit the first part than nothing at all...!
Appendix G Dataset sample sizes
Appendix H Questionnaire results filtered by age
Appendix I Questionnaire results filtered by role
Appendix J Questionnaire results filtered by industry
Appendix K Design Structure Matrix
Design Structure Matrix principles

Introduction

DSM is a highly flexible technique that can be applied to provide a view of an aspect of a technical development, either independently or in combination with another aspect. DSM can be used in four main ways:
· Product architecture, i.e. the interactions of the system of interest itself;
· Organisation architecture, i.e. how the development organisation interacts;
· Process architecture, which describes the development process, ranging from high-level to detailed planning;
· Multi-domain, also known as MDM, which combines several of the above to show interactions between the elements of the various matrices.

Process can be described as having 'temporal flow' architecture, due to the inherent time-based interactions that occur, with product and organisation possessing 'static architecture' [67]. An MDM will be either temporal flow or static, depending on its component DSMs. We will primarily be interested in DSMs associated with process and organisational architecture, and in process/organisational MDMs, as a tool to aid the planning of activities. This does not, however, preclude the use of product DSMs, or the inclusion of product architecture in an MDM, if the complexity assessment highlights an area of concern. The basic principles of the DSM are very simple:
· The matrix elements being considered are represented along the diagonal of the matrix, from upper left to bottom right;
· Element names are shown on the rows and columns, with the ordering of elements kept consistent between the two;
· Inputs are marked along the rows (to the left and right of the diagonal) and outputs along the columns (above and below the diagonal), as per the IR/FAD convention;
· Interactions between elements are shown in the off-diagonal matrix cells, with the diagonal cells left blank.

A simple DSM, which merely acknowledges the existence of a relationship between inputs and outputs, is called a binary DSM.
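The conventions above can be sketched in a few lines of code. The following is an illustrative sketch only, not part of the framework: it builds a small binary DSM for four hypothetical elements, A to D, with a mark in row i, column j meaning that element i takes an input from element j, as per the IR/FAD convention described above.

```python
elements = ["A", "B", "C", "D"]

# Hypothetical input relationships: (consumer, supplier) pairs,
# i.e. the first element takes an input from the second.
inputs = {("B", "A"), ("C", "A"), ("C", "B"), ("D", "C"), ("B", "D")}

def binary_dsm(elements, inputs):
    """Build the DSM as a nested dict: dsm[row][col] is True when the
    row element takes an input from the column element."""
    return {r: {c: (r, c) in inputs for c in elements if c != r}
            for r in elements}

dsm = binary_dsm(elements, inputs)

# Display the matrix; diagonal cells are left blank ('.').
print("   " + "  ".join(elements))
for r in elements:
    row = [("." if r == c else ("X" if dsm[r][c] else " ")) for c in elements]
    print(f"{r}  " + "  ".join(row))
```

Marks above the diagonal (here, B taking an input from D) are the feedbacks that the sequencing analysis later tries to minimise.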
The DSM can be further developed into a numerical DSM, including attributes such as importance, number of interactions, and the impact or strength of an interaction. Additional attributes can be linked to cells if required and stored elsewhere in a database. Whereas binary DSMs are qualitative, numerical DSMs can be designed to be highly quantitative. The process for creating and managing a DSM is relatively straightforward and follows five steps. Before this is begun, all conventions will need to be agreed, including the supplementary information to be contained within numerical DSM models. Suggested additional data to be collected includes interaction strength, either as a single integer or as a combination of probability and impact. Later changes to the convention, or to the scope of information to be collected and modelled, will be difficult to reconcile with previously completed DSMs, with the potential to cause confusion and duplicated effort. The inherent limitation of the DSM is that it describes activities in terms of 'edges' rather than 'nodes' [67]. For example, a binary process architecture DSM cannot describe the duration of an activity. This can be overcome through the development of a numerical DSM that includes information such as duration or resource.
Organisational architecture DSM

Creating the organisational architecture DSM

Decompose - The boundaries and level of decomposition should be determined first. The level of decomposition can range from departments down to individuals, with the trade-off being the level of effort required against the scope and usability of the model. Often an existing organisational breakdown structure (OBS) can be used as a basis for subsequent work. The organisational elements are grouped into organisational units and laid out in the matrix rows and columns. Often the designation of organisational units closely follows the actual sub-systems or system elements being developed.

Identify - The necessary information can be gained through interviews with team leaders and verified with the particular team. Often there is symmetry in communication, i.e. both organisational elements communicate with each other. There are, however, often overlooked or underrated interactions in one direction that require further investigation. The difference between one- and two-sided interactions can be denoted on the DSM for additional richness.

Analyse - The organisational elements are reordered and the matrix clustered so that the organisational elements with the most, or dominant, interactions are grouped together in a block. The block will be seen as having organisational elements close to the diagonal and can be identified with a square on the DSM. Blocks may overlap. Elements with interactions across most or all other elements can be grouped together as an integration team or a similar designation.

Display - The interactions between the organisational elements are marked on the matrix. Examples of numerical DSM conventions include strength or frequency of communication, and these can be represented in a number of ways, including shading or colours, symbols, or numerically.

Improve - Organisational architecture DSMs are static but will evolve over time, particularly from one development phase to another. The model will need to be revised or revalidated periodically to ensure its accuracy.
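The Analyse step above can be sketched as follows. This is a hedged illustration rather than a recommended algorithm: the team names, interaction frequencies and clustering threshold are all invented, and a simple greedy grouping stands in for more sophisticated clustering methods.

```python
# Symmetric communication frequency between hypothetical teams.
interactions = {
    ("Structures", "Loads"): 9,
    ("Structures", "Materials"): 7,
    ("Loads", "Materials"): 6,
    ("Software", "Hardware"): 8,
    ("Systems", "Structures"): 4,
    ("Systems", "Loads"): 4,
    ("Systems", "Materials"): 3,
    ("Systems", "Software"): 5,
    ("Systems", "Hardware"): 4,
}

teams = sorted({t for pair in interactions for t in pair})

def degree(team):
    """Number of other teams this team communicates with."""
    return sum(1 for pair in interactions if team in pair)

# A team talking to (almost) every other team is a candidate for the
# integration team described in the text.
integrators = [t for t in teams if degree(t) >= len(teams) - 1]

# Greedy clustering of the rest: two teams share a cluster when their
# interaction strength is at or above an assumed threshold of 6.
clusters = []
for team in teams:
    if team in integrators:
        continue
    placed = False
    for cluster in clusters:
        if any(interactions.get((team, m), interactions.get((m, team), 0)) >= 6
               for m in cluster):
            cluster.append(team)
            placed = True
            break
    if not placed:
        clusters.append([team])
```

With these invented numbers, the structural teams cluster together, the software/hardware pair forms a second block, and the systems team emerges as the integration-team candidate.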
Applying the organisational architecture DSM

Once the DSM is clustered it can be used to inform management decisions regarding the composition of teams, geographical location, and the types and scope of communication. The integration of development activities can be extremely challenging. The clustered DSM can be used to shape the formation of teams, or the clustering of teams, based on concentrations of interactions. The application of collaborative tools, such as databases, will be more appropriate for large groups, while meetings and informal methods are more effective for smaller groups. Lastly, the co-location of particular organisational elements may be beneficial.
Process architecture DSM

Creating the process architecture DSM

Decompose - The first decision is that of the boundary of the DSM. External dependencies can be included as appropriate. Inputs from external processes are generally captured as columns on the right-hand side of the matrix, while outputs are shown as rows below the DSM. The level of decomposition will be dictated by the previous complexity assessment and assignment of CSFs. It will also be influenced by the availability of information during the particular phase of development. It is advisable that the elements are kept at a consistent level within the respective breakdown structures. If a model of activities already exists, this should be adopted as the planning baseline. The decomposition into individual elements is also required for traditional scheduling; this should be in the form of a WBS for the process architecture. A feature described within DSM is the relationship between process and activities. The DSM literature describes this as relative, i.e. an activity is at a higher level of detail than a process [67]. In scheduling, however, an activity is at the highest level of detail within a plan [53][118], and this convention should be adopted for process architecture in this application. Within process architecture, activities can then be grouped into phases, stages or sub-processes within the WBS hierarchy. It must be recognised that an element within the DSM may merely be the transfer of information.
Key activities, pivotal to the delivery of the entire process, should be identified. These are chosen based on dependencies, coupling and risk. Disproportionate effort should be placed on ensuring these are completed as per the plan, over less consequential activities. They will be tracked throughout, with evidence of slippage, or of increases in risk above the accepted residual risk, prompting additional interventions.

Identify - Once the individual elements are agreed, the relationships need to be identified and the model first built and then verified for completeness. This is usually achieved through a series of interviews and workshops, in a way not dissimilar to that in which stakeholder requirements are elicited. Each output from a process or organisational element will also be an input into another. Often inputs are readily identified, while outputs are not recognised as such by the particular organisational entity that supplies them. A complete picture is best achieved by using multiple views of inputs and outputs to ensure all are fully captured. The relationships and activity sequencing will fall broadly into one of four categories: sequential, parallel, coupled or conditional. These relationships equate to the dependency types found within a Gantt chart. The model should be validated by the stakeholders before analysis is undertaken.

Analyse - Analysis is performed, once all the elements and their interactions have been identified, to understand relationships and patterns and how these affect system behaviour. For process architecture this consists of clustering and sequencing of the elements. Analysis concentrates on the optimisation of the DSM to reduce the incidence of coupled behaviours.

Display - The DSM matrix will be created in draft and repeatedly revised during the analysis step until a satisfactory model has been created.

Improve - The model will be developed over time as the number of assumptions is reduced and more information becomes available.
Analysing the process architecture DSM

Analysis of DSMs should enable a better understanding of the interactions between elements and allow the plan to be amended accordingly. There are several relationship types within DSMs:
· Sequential - activities follow on from each other in a finish-to-start type dependency relationship. Some overlap between activities may be possible, which would be seen as negative lag on a Gantt chart;
· Parallel - activities may rely on the same resource but there is no actual dependency. Resource constraints would normally be considered later, using an organisational architecture DSM or a resource-loaded Gantt chart schedule;
· Coupled - with iterations between the outputs and inputs of two or more activities. This is the most difficult to represent in traditional Gantt charts;
· Conditional - execution of a later activity depends on a decision made in an earlier activity. This may or may not be sequential and is confined to the transfer of information.

The traditional interaction considered within planning is the finish-to-start dependency, where one activity needs to be completed before another can commence, known as sequential within a DSM. This can be further refined by the inclusion of positive or negative lag, to delay or advance the commencement of the following activity by a specified duration. Other variations include start-to-start, which suggests parallel activities, and finish-to-finish, which suggests a coupling-type relationship. It is these relationships that define the effectiveness of a process or plan, and it follows that the greatest impact is achieved by better managing the interfaces between activities. It will also be worth considering representing iterations as separate activities. Of the relationship types, it is coupling that is very often the most troublesome to manage and that causes delays. There are several types of coupling.
Some of these can be eliminated, or reduced in terms of their impact, through effective planning and/or management of the process. This is an example of a process behaviour that should be fed back through the process for CSF consideration. Coupling behaviours may be planned or, due to errors, unidentified interaction types or emergence, unplanned. The types of coupling are generally as follows:
1. Inherent coupling - planned coupling behaviour with activities that are structurally interdependent;
2. Poor activity sequencing - information is created too late, resulting in the delay of other activities. Though a planned coupling, this type of behaviour can be minimised, though not entirely eliminated, through effective analysis;
3. Incomplete activities - activities are unduly delayed, with a similar impact to poor activity sequencing;
4. Poor communication - information or outputs are not passed on completely or in a timely fashion, again delaying subsequent activities;
5. Input changes - caused by changes to assumptions;
6. Mistakes - defective inputs created and discovered at a later date.

Evidently coupling types 2 to 6 are to be avoided, whether through planning or subsequent controlling of the plan. Delayed inputs/outputs can result in the formation of assumptions, and indeed this is sometimes adjudged a desirable response to a coupling behaviour. Adopting such an action simply exchanges coupling behaviours 2, 3 or 4 for the potential for behaviour 5. Inherent coupling suggests necessary iterations of outputs/inputs between two or more activities. Though necessary, and most often even desirable, convergence to a solution should be encouraged as quickly as is practicable. Additionally, a reduction in the number of feedback loops is highly desirable; indeed, this has the potential for use as a measure of development status and as a measure of complexity in itself. One goal of the analysis of a process architecture DSM is to optimise the sequencing of the activities so that as many interactions as possible are below the diagonal.
Doing so ensures that activities only begin once all inputs are available, and reduces the number of assumptions that need to be made (avoiding the potential for input-changes-type coupling) or activities being delayed unduly (avoiding poor-activity-sequencing-type coupling). In the extreme example, an output from an activity in the upper right-hand corner indicates that assumptions will need to be made for an early activity to progress. If an assumption is subsequently proved incorrect, there may be a propagation of changes throughout all the activities, with substantial impact on resource usage and schedule completion. There will be conflicts between multiple activities that constrain the sequencing; the result will be a reordering of the rows and columns. Another aim of analysis is the removal of coupled relationships through processes called 'clustering' and 'tearing'. Instead of removing unnecessary assumptions, this process introduces assumptions to make the overall process or plan more efficient. Activities are rearranged into a 'block' so that they are grouped around the diagonal. Assumptions are then applied that remove the 'inherent coupling' behaviour. There is, however, a risk of the emergence of an 'input change' coupling behaviour at a later date, which may reintroduce the coupling. The decision to engage in tearing of activities within the DSM will therefore be based on the risk of later input changes against the benefit of improving the efficiency of the process by the removal of inherent coupling. A discussion of where the process or plan may break down should be undertaken during the collation of information on activity interactions, particularly for coupling behaviours. Failure Mode and Effects Analysis (FMEA) may be of use to analyse potential points of failure leading to unplanned process iterations. If significant, these should be fed back into the complexity assessment and CSF processes and recorded on the risk register.
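A minimal sketch of the goal described above, ordering activities so that as many inputs as possible fall below the diagonal, is given below. The activity names, dependencies and greedy heuristic are assumptions made for illustration; real DSM tools use more sophisticated partitioning algorithms.

```python
# activity -> set of activities it needs input from (hypothetical values);
# Design takes a feedback input from Test, i.e. they are coupled.
inputs = {
    "Concept":      set(),
    "Requirements": {"Concept"},
    "Design":       {"Requirements", "Test"},
    "Build":        {"Design"},
    "Test":         {"Build"},
}

def sequence(inputs):
    """Greedy ordering: schedule any activity whose inputs are all done;
    otherwise break the coupled block at the activity with the fewest
    outstanding inputs (each outstanding input must be covered by an
    assumption, i.e. it remains a feedback mark above the diagonal)."""
    remaining, order = dict(inputs), []
    while remaining:
        ready = [a for a, deps in remaining.items() if deps <= set(order)]
        if ready:
            nxt = ready[0]
        else:
            nxt = min(remaining, key=lambda a: len(remaining[a] - set(order)))
        order.append(nxt)
        del remaining[nxt]
    return order

order = sequence(inputs)

# Count the interactions left above the diagonal (feedback marks).
feedbacks = sum(1 for a, deps in inputs.items()
                for d in deps if order.index(d) > order.index(a))
```

In this invented example only the single inherent Design/Test feedback remains above the diagonal; every other input comes from an earlier activity.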
Sequencing

Sequencing optimises the ordering of activities. This can be undertaken before the first draft of the DSM, in which case the effect will be dramatic. There is usually a natural or intuitive ordering of activities, so sequencing performed on an established DSM is not likely to initiate wholesale change; it is a useful process nonetheless. The process follows a number of steps. The first step, and the easiest, is the identification of activities with either no inputs or no outputs. Activities with no inputs can be undertaken first, and activities with no outputs undertaken last. The sequence of all remaining activities will be influenced by the number, strength and type of interactions with other activities, depending on whether binary or numerical DSMs are being used. There are a number of heuristics that can be used. The first is minimising the number of assumptions required in the model by reducing the number of interactions above the diagonal. This reduces the number of activities completed later in the sequence that feed back to earlier activities. The second is reducing the distance of the remaining interactions above the diagonal: as described previously, long feedbacks can cause the propagation of change throughout the development process. These activities are then identified as 'coupled blocks'. The identification of these coupled blocks can be used to identify early where additional management effort would be well spent; examples include the co-location of those involved, the use of collaboration tools and the application of status and progress measures. Heuristics are of primary interest in this project, though algorithms and commercial applications are available. These algorithms will be briefly explained. Algorithmic methods of sequencing are poorly explained in the literature. There are three main methods that are commonly employed.
These include Steward’s path searching algorithm [119] which can be undertaken using a number of methods. The first method is known variously as block diagonalization and block triangulation. In it activity cycles are identified and grouped together as clusters which can be identified and enclosed in square blocks. The ‘powers of adjacent matrix [120] uses a linear algebra technique that creates additional binary matrices considering indirect connections and as many steps removed from the original matrix as required. For large matrices this is ‘computationally intensive’. Finally, Trajan’s depth-first search algorithm [67] is considered the most efficient way of grouping coupled activities and follows outputs to determine if they return back to another activity. Clustering and tearing The identification of blocks of coupled activities can be used for further analysis of the DSM. Decomposition of coupled blocks of activities may yield a more manageable set of sub-activities. In contract aggregation can be used to simplify the DSM but this will obscure individual feedbacks and is not generally recommended. The addition of activities earlier in the sequence can be used to reduce the number of assumptions. The decision to add activities will usually be made on a cost-benefit basis and should be used to reduce uncertainty and risk of change. Tearing reduces the interaction between a block of activities by introducing assumptions in place of iterations. Following the clustering of activities into blocks the process is a follows: · Suggest interaction to be removed, called a ‘tear’. The link or links with the longest feedback loops back providing the greatest benefit and should be considered first; · Analyse the tear and, with the agreement of the stakeholder within the process, accept or reject the tear considering the degree of confidence on the assumption that replaces the interaction. This is a risk versus benefit based decision. 
If the first tear is discounted, move on to the next best tear and repeat the analysis;
· The tear removes an interaction, which reduces the block of activities in a particular cluster. The DSM can now be sequenced;
· Feedback marks in the DSM are replaced with torn marks, to provide a prompt to check activity outputs affected by the tear against the new assumptions;
· Disproved assumptions will result in input-change-type coupling behaviour and rework of early activities. If this occurs, the tearing exercise was unsuccessful and the intended efficiencies were not realised;
· Single assumptions can apply across multiple tears.
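The coupled-block identification that underpins clustering and tearing can be sketched using Tarjan's depth-first search algorithm, mentioned above as the most efficient way of grouping coupled activities. The activities and dependencies below are hypothetical, and simply deleting a feedback edge stands in for replacing the interaction with an assumption.

```python
def tarjan_scc(graph):
    """Return the strongly connected components of a dict adjacency graph
    (Tarjan's algorithm); components of size > 1 are coupled blocks."""
    index, lowlink, stack, on_stack = {}, {}, [], set()
    sccs, counter = [], [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:           # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Edge u -> v means the output of u is an input to v (invented activities);
# the Test -> Design edge is the feedback that creates a coupled block.
edges = {"Concept": ["Design"], "Design": ["Build"],
         "Build": ["Test"], "Test": ["Design"]}

blocks = [c for c in tarjan_scc(edges) if len(c) > 1]

# 'Tearing': replace the Test -> Design feedback with an assumption,
# here modelled by simply removing the edge, and re-analyse.
edges["Test"] = []
torn = [c for c in tarjan_scc(edges) if len(c) > 1]
```

Before the tear, Design, Build and Test form one coupled block; after it, no coupled blocks remain and the activities can be sequenced without feedback marks.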
Further research has developed several advanced techniques. These include discrete-event Monte Carlo simulation [121][122], which considers 'multiple process flow' factors, including rework probability, to provide forecasts of cost and duration. Eigenstructure analysis [123] specifically considers parallel iteration of coupled activities, contributing to the concepts of design convergence and churn. Signal flow graphs and reward Markov chains [124] allow the best sequence of activities to be determined based on probability. Commercial DSM tooling software is available that incorporates many of the more advanced algorithms and concepts developed for sequencing, clustering and tearing. Examples include [125]:
· ACCLARO DFSS by Axiomatic Design Solutions, Inc., which includes clustering and tearing, FMEA and exports to Microsoft Project;
· ADePT Design software suite by Adept Management Ltd, which uses a WBS;
· Lattix by Lattix, Inc., which supports MDM analysis, can undertake change impact analysis and interfaces with a variety of databases, software and models, including UML and SysML;
· NDepend by NDepend, which uses a simple three-shading numerical DSM convention;
· Plexus by Plexus Planning Ltd, which uses several optimisation techniques, including critical path analysis;
· ProjectDSM by Project DSM Pty Ltd, which interfaces with commonly used Microsoft applications, including Project.

None of these packages can directly interface with the most commonly used scheduling application employed within complex projects, namely Oracle's Primavera [69]. This suggests that the general maturity of these commercially available applications has some way to go. Several of the applications do profess outputs compatible with Microsoft Project, a scheduling application which tends to be used to plan smaller projects.
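The Monte Carlo rework idea in [121][122] can be caricatured in a few lines. Every number here is an invented assumption (the durations, rework probability, 50% partial-rework factor and iteration cap); the point is only the shape of the technique: sample many runs, let a coupled pair of activities iterate probabilistically, and inspect the resulting duration distribution.

```python
import random

# Hypothetical activity durations in days.
DUR = {"Design": 20, "Build": 15, "Test": 10}
P_REWORK = 0.3     # assumed probability that Test sends Design back for rework
MAX_LOOPS = 5      # assumed cap on iterations (convergence is expected)

def one_run(rng):
    """Sample one project duration, with partial rework on each iteration."""
    total = DUR["Design"] + DUR["Build"] + DUR["Test"]
    loops = 0
    while rng.random() < P_REWORK and loops < MAX_LOOPS:
        # Each rework loop repeats half of Design/Build plus a full re-test.
        total += 0.5 * (DUR["Design"] + DUR["Build"]) + DUR["Test"]
        loops += 1
    return total

rng = random.Random(1)        # fixed seed so the sketch is repeatable
samples = [one_run(rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
```

The spread of `samples`, rather than the single deterministic 45-day figure, is what such a simulation contributes to schedule risk forecasting.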
References
[118] Mindtools (2016). Gantt Charts: Planning and Scheduling Team Projects, Available at: https://www.mindtools.com/pages/article/newPPM_03.htm, Accessed on: 28th October, 2016.
[119] A Karniel and Y Reich (2011). Managing the Dynamics of New Product Development Processes: A New Product Lifecycle Management Paradigm, Springer Publishing Company, Incorporated, Available at: http://dl.acm.org/citation.cfm?id=2073757, Accessed on: 28th October, 2016.
[120] E W Weisstein (2016). Graph Power, MathWorld - A Wolfram Web Resource, Available at: http://mathworld.wolfram.com/GraphPower.html, Accessed on: 28th October, 2016.
[121] T R Browning and S D Eppinger (2002). Modeling impacts of process architecture on cost and schedule risk in product development, IEEE Transactions on Engineering Management, Volume 49, November 2002, Pages 428-442, Available at: http://web.mit.edu/~eppinger/www/pdf/Browning_DSM_Sim_IEEE.pdf, Accessed on: 28th October, 2016.
[122] S-H Cho and S D Eppinger (2005). A Simulation-Based Process Model for Managing Complex Design Projects, IEEE Transactions on Engineering Management, Volume 52, August 2005, Pages 316-328, Available at: http://web.mit.edu/~eppinger/www/pdf/Cho_IEEE_2005.pdf, Accessed on: 28th October, 2016.
[123] A Yassine et al (2003). Information hiding in product development: the design churn effect, Research in Engineering Design 14 (2003), Pages 145-161, DOI 10.1007/s00163-003-0036-2, Available at: http://necsi.edu/affiliates/braha/RED03_Info.pdf, Accessed on: 28th October, 2016.
[124] S D Eppinger and R P Smith (1997). Identifying controlling features of engineering design iteration, Management Science 43 (3), Pages 276-293, Available at: http://web.mit.edu/eppinger/www/pdf/Smith_MS_Mar1997.pdf, Accessed on: 28th October, 2016.
[125] DSMweb.org (2016). Commercial Tools, Available at: http://www.dsmweb.org/en/dsm-tools/commercial-tools.html, Accessed on: 28th October, 2016.
Appendix L System Performance Measures
System Performance Measures

Introduction

The control and management of ongoing development activities depends on relevant and accurate information on both the status and the progress of the activities being undertaken. Technical development processes, as compared to normal business processes, are typified by a number of unique properties which make effective process monitoring important [72]:
· they are 'dynamic, creative and chaotic';
· they contain many feedback mechanisms;
· the process in its entirety is largely 'virtual' and 'not always precise';
· the likelihood of change is high, due to feedback mechanisms, imperfectly defined requirements and customer-led changes.

Without such monitoring, issues may go undetected until they are manifestly apparent, by which point they may be unrecoverable. Additionally, it is useful to know where management effort can be directed most efficiently. Examples of existing frameworks used for the formation and collection of measures include House of Quality (known as Quality Function Deployment), Goal-Question-Metric and the Balanced Scorecard [83]. The use of metrics within concept and development activities is not widespread and can lead to a number of 'coping' mechanisms being adopted by personnel in response to them. Simon and Simon identify a number of issues from an empirical study [84]:
· Optimism strategy - metrics are viewed as 'implicit criticism' and a 'constraint' on professionalism;
· Delegation strategy - metric findings are attributed to external factors rather than an individual's work, and thus no responsibility is taken for them;
· Automatism strategy - issues arising from metrics are attributed to tool support;
· Particularity strategy - the relevance of the metric to the issues encountered is denied;
· Tortoise-and-hare strategy - metrics are ignored because improvement of the issue occurred before the metric was implemented.
These mechanisms can be addressed through the timely implementation of metrics, which complexity assessment should facilitate. Early implementation addresses the first mechanism, since sufficient engagement with the development personnel can be undertaken; it also reinforces the perception that the application of metrics is not a ‘bolt-on’ to the development process but an integral part of it. This, coupled with a carefully devised and comprehensive range of metrics, should improve their acceptance. In addition, Deming identified three ‘traps’ [85]. Firstly, metrics that make no distinction between ‘common’ and ‘special’ situations can be influenced by external factors, which takes away the relevance of the metric. Secondly, an issue is rarely represented by a single metric, and a single metric can mislead the organisation. Lastly, improvement in a single metric rarely indicates an improvement in the underlying issue. A more holistic view is required to counter these shortfalls; this can be achieved through a range of overlapping measures that provide a degree of confirmation of each metric’s findings. Kline et al [86] describe a method for designing performance measures that is applicable across any type of technical development and that should address many of the issues thus described. It consists of:
· ‘Forming a team with diverse training and perspectives’;
· ‘Recruiting an unbiased facilitator versed in the methodology and familiar with the process area’;
· ‘Synthesizing a descriptive definition that accurately and completely describes the skill set being measured’;
· ‘Analysing behaviours of an expert who displays outstanding performance in all dimensions of the skill set’;
· ‘Selecting the top ten factors which account for variability in performance associated with the skill set’;
· ‘Proposing positively-phrased descriptors of the skill set at five performance levels ranging from “novice” to “expert”’;
· ‘Articulating and agreeing on five attributes associated with each performance level for the top ten factors’;
· ‘Testing the classification and measurement scheme by reflecting on performance of individuals at each performance level in several different contexts’.
This method produces performance measures that, while still subjective, are bounded within defined criteria. The involvement of the development team, whose performance will be directly or indirectly measured, should address many of the issues described by both Deming, and Simon and Simon. The measures chosen for a particular development should be dictated by its particular characteristics. The method recommended by this dissertation is to identify areas of particular complexity and criticality through the complexity assessment. Two ways of framing the performance measures are in terms of areas of the WBS and of the complexity criteria listed and described in Section 2 of the main body of text. Both can be used to tailor the performance measures, balancing the effort of collecting and analysing data against the benefits of closer and more accurate monitoring and control.
Desirable properties of metrics
Metrics can be placed in several categories. Mathematical system metrics are calculated from fundamental data; these metrics are either derived or combined, and an excellent example is the set of metrics relating to EVM. This methodology compares primary data, relating to actual costs and physical progress, with cost and schedule planning to derive measures such as cost and schedule variance and cost and schedule performance indices. Practical system metrics are derived from the application of ‘empirically established factual logic’. Heuristic system metrics are similar to practical system metrics, but their scope is restricted to particular issues [83]. Depending on the level of decomposition, the metrics described in this dissertation are most likely to be either practical system or heuristic system metrics. The properties of the measures are important. Foremost, a measure should of course have a clear and obvious purpose, and it is also beneficial that it is as ‘homomorphic’ as possible with the source data [83]. The nature of the measures should provide a view of the development’s objectives ranging from the short term to the long term [73]. Measures should be process-orientated as well as schedule- and cost-orientated [73]. In accordance with the concept of the Balanced Scorecard, there should be breadth to the perspective of the measures adopted [77]. Measures may be either lagging or leading, with staple measures inevitably relating to cost and schedule. A common system of measurement is EVM, which reconciles planned cost and schedule against actual cost and progress. It is a lagging measure: an issue is flagged only once activities fall behind schedule or estimated costs are exceeded. If the issue is detected early enough, action can be taken to converge with the original plan.
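The derivation of these EVM indicators from primary data can be sketched briefly. The following Python fragment is illustrative only; the function name and dictionary keys are this sketch's assumptions rather than being drawn from any particular EVM tool or standard:

```python
def evm_indicators(pv: float, ev: float, ac: float) -> dict:
    """Derive the standard EVM measures from primary data.

    pv: Planned Value  (budgeted cost of work scheduled)
    ev: Earned Value   (budgeted cost of work performed)
    ac: Actual Cost    (actual cost of work performed)
    """
    return {
        "cost_variance": ev - ac,      # negative => over budget
        "schedule_variance": ev - pv,  # negative => behind schedule
        "cpi": ev / ac,                # Cost Performance Index (<1 is adverse)
        "spi": ev / pv,                # Schedule Performance Index (<1 is adverse)
    }
```

For example, a work package with PV of 100, EV of 80 and AC of 90 yields a cost variance of -10 and an SPI of 0.8, flagging both cost and schedule concerns, but only after the deviation has already occurred, illustrating the lagging character of the measure.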
However, the severity of an issue is greater the later it is detected, and both low sensitivity of data and the use of monthly reporting cycles may delay detection by a further month or two. It is therefore much better to predict the occurrence of cost overruns and schedule delays through the use of leading measures. These measures target areas of the technical development that will later influence the actual activity durations and costs. Another property to consider is frequency of application. A measure may require a lot of time and effort to determine, for example the findings of an audit, or it may measure properties of the development that do not change frequently, such as the schedule for the current phase. In both cases it is appropriate to determine the measure only once per development phase, or perhaps three or four times per year. Conversely, a measure may follow a dynamic development property: the status of development artefacts may be automatically recorded via the database or tooling being utilised, allowing the measure to be taken more frequently. The cadence of measure reporting may also be dictated by time-bound criteria; the frequency of EVM reporting is constrained by the monthly cycle on which actual costs are most often calculated, which in turn relates to the submission of invoices by contractors and consultants. Measures can be quantitative or qualitative. Often it is difficult to provide quantitative data and only a commentary on the on-going activities is possible; even where quantitative data is available, it is worth combining it with qualitative commentary to provide context. Examples of leading measures include development resource levels and the quantity of development change requests submitted during a period. They should highlight early concerns either through an absolute quantity or through a trend over successive reporting periods. Taking the first example, a deviation below the planned level of development resource may not yet be reflected in a delay to the schedule; early identification would allow management action to elevate resourcing to a suitable level, thus avoiding a later schedule deviation. An increasing trend in the number of development change requests may indicate inadequacies within the existing requirements, or stakeholder management issues. Furthermore, increases in resourcing requirements and funding may result from an increase in the amount of change. Again, early identification of changes can be used to prevent or minimise an adverse impact on the development activities. It has been proposed that the selection of measures could be based on process analysis.
There are several barriers to this approach [73]:
· there is a high overhead to process modelling, which increases where process models are not already in place;
· analysis and implementation of measures needs to be undertaken early for maximum effect, but at that point process uncertainty is high and change is likely, which can lead to wasted effort and/or incomplete analysis;
· changes will continue to propagate through system development as contracts are let and strategies evolve;
· the measures chosen by such a method tend to be too abstract.
The selection of Performance Measures will instead be influenced by the complexity assessment and the subsequent sub-processes, following some simple concepts. Specifically, the goals of the measures will be determined with a focus on areas of particular complexity and risk. The Goal-Question-Metric method is a straightforward way of determining measures. The ‘goal’ of a measure consists of four components: purpose, issue, object and viewpoint. The ‘question’ typically asks for the current status and the current trend in that status. The ‘metric’ is perhaps the most difficult aspect to determine, as meaningful measures at a low level of abstraction can be difficult to find. An example taken from complexity metrics [73] is shown in Figure 1.

Figure 1. Example of Goal-Question-Metric [73].
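The Goal-Question-Metric structure described above can be captured as a simple data structure. The following sketch is illustrative; the example goal, questions and metrics are hypothetical and not taken from [73]:

```python
from dataclasses import dataclass


@dataclass
class GQMGoal:
    """The four components of a GQM 'goal'."""
    purpose: str
    issue: str
    object: str
    viewpoint: str


@dataclass
class GQMEntry:
    goal: GQMGoal
    questions: list  # typically the current status and its trend
    metrics: list    # low-abstraction measures answering the questions


# Hypothetical example entry for a complexity-focused measure.
example = GQMEntry(
    goal=GQMGoal(purpose="Monitor", issue="growth in coupling",
                 object="interface definitions",
                 viewpoint="development manager"),
    questions=["What is the current number of open interface issues?",
               "What is the trend over the last three reporting periods?"],
    metrics=["open interface issue count",
             "period-on-period change in count"],
)
```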
The level of decomposition should be considered. Low levels of decomposition provide an overview of development health but will not give a real indication of the source of any issues. A high level of decomposition may suffer from low sensitivity of data and will certainly require a good deal more effort to implement. An appropriate level within the WBS should be chosen for the highest level of decomposition, and it is perfectly feasible to use metrics following the same principles simultaneously at different levels of decomposition, both for management reporting and to facilitate a targeted response to issues arising. Metrics can be selected that focus on particular areas of concern within the development, either through a higher level of decomposition for a particular phase or cluster of activities, or through the choice of a unique measure. The areas where such measures should be focused can be identified during the complexity assessment and the assigning of success factors, and during subsequent planning; in particular, DSM can highlight areas of intense coupling and complexity. Such prioritisation of activities may be due to their properties of uncertainty, emergence or non-linearity, or their proximity to the schedule’s critical path. Focused measures must be used in conjunction with other, less detailed measures to ensure coverage of the entire schedule of activities. It is also beneficial to overlap metrics, where possible, to provide a degree of confirmation of the findings [73].
Other factors to consider include:
· metrics should be complete, correct, consistent and clear [73];
· as they will be communicated and reported, metrics should be ‘simplistic’ [73] and chosen at a level of abstraction that is readily understandable;
· the collection of meta-data during, and as a part of, the individual activities, as opposed to periodic and dedicated data collection, will reduce the management burden required. An example would be progress measurement during the production of development artefacts, at the end of each individual iteration. This introduces a level of ‘automation’ [79] into the process;
· consideration should be given to the collection of metrics by project assurance rather than the development team [53], to ensure impartiality.

Artefact completion milestone – Percentage complete
Completion of first draft of artefact – 50
Completion of first review – 60
Inclusion of first comments – 80
Completion of second review – 85
Inclusion of second review comments and submission for approval – 90
Artefact approved – 100

Figure 2. Simple method of measuring progress in a development artefact.
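The Figure 2 scheme can be sketched as a small lookup and roll-up; the milestone names and equal weighting below are illustrative assumptions, not part of the source scheme:

```python
# Each artefact owner records the last milestone reached; progress is
# then rolled up from the shared store for collation and analysis.

MILESTONE_PERCENT = {
    "first_draft": 50,
    "first_review": 60,
    "first_comments_included": 80,
    "second_review": 85,
    "submitted_for_approval": 90,
    "approved": 100,
}


def rollup_progress(artefacts: dict) -> float:
    """Mean percentage complete across artefacts (equal weighting)."""
    if not artefacts:
        return 0.0
    return sum(MILESTONE_PERCENT[m] for m in artefacts.values()) / len(artefacts)
```

In practice the artefacts would be weighted, for example by estimated effort, rather than treated equally.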
The reliability of some metrics is necessarily subjective, due to the scarcity of formal methods of collecting data in many areas. Subjectivity can be curtailed through the adoption of rules and guidance for collection and analysis. A simple example is the progress measurement of development artefacts shown in Figure 2, which also incorporates the principle of creating meta-data during the activities themselves. In this case the artefact owner would track progress according to the relevant guidelines, which could be recorded on a shared database for ready collation and analysis. Many of these metrics are highly abstract. There are a number of metrics that relate to DSMs and these should be considered for use; however, they largely relate to complexity, so while they may provide a validation of the other analysis that has been undertaken, this will relate to areas of risk within the technical development. Examples of the areas of the development process that should be measured include [79]:
· Schedule and cost planning – measures risks, bottlenecks and the critical path as well as overall cost; it should also consider process iterations and associated uncertainty;
· Resource allocation – measures resource and capability levels as compared with the plan, and issues such as resource smoothing/levelling, removal of redundancy and accessibility of resource;
· Quality – measures the consistent flow of information, completion of documentation in line with process, meeting of requirements and the distribution of risk amongst processes;
· Flexibility – measures the status of buffers to absorb delays, defences against individual errors and general process resilience;
· Organisational decomposition – measures whether the organisation of workgroups and teams is adequate and whether efficient communication is in place;
· Interfaces – measures which entities need synchronising, the speed of communication across interfaces and the relevant communication paths in place;
· Transparency – measures whether the organisational units are aware of their impact on outcomes and of the mental model of the process organisation;
· Decision making – measures which decision points have a high impact on outcomes.
Artefacts can also be measured using a rating based on interdependency and risk:
· number of dependencies;
· origin of dependencies – within organisational area, interdepartmental, inter-contract or external;
· status of dependencies – i.e. the meta-data derived for them;
· percentage of the document that requires dependencies;
· identified risks.
Selection of system Performance Measures from literature
Appropriate measures will be selected from literature and categorised against the criteria described above. They will be chosen for their applicability and to cover technical development in terms of time and breadth. Commonly used project management techniques, such as EVM, will be excluded: it is recommended that they be used in conjunction with the technique described here, but they are very well described in literature and further discussion would be of little value.
Requirements
A fundamental tenet of technical development is the elicitation and satisfaction of requirements. These requirements can be broadly sub-divided into stakeholder requirements and system requirements. Requirements, especially system requirements, are prone to successive iterations and coupling behaviours, as represented in the DSM model. Figure 3 uses the activities from a typical systems engineering process, categorised according to INCOSE’s Vee Model from Figure 4, and shows areas of coupling, indicated by circular process flows, that are of particular interest. The US Department of Defense and NASA have been extremely influential in developing the discipline of systems engineering since its inception in the mid-20th century. The measurement of requirements is described within a number of systems engineering manuals and latterly INCOSE’s Systems Engineering Handbook. These observe both stakeholder and system requirements from the perspective of their success in fulfilling the overarching business requirements, and as such are quality-orientated measures. With respect to the complexity criteria, the analysis of requirements allows monitoring and control of the system development, particularly the technology and internal interfaces. Other complexity criteria will have a direct impact on requirements, excellent examples being development process, internal organisation, organisation and stakeholders.
Stakeholder Requirements
Measures of Effectiveness, or MOEs [54], are client-orientated and should be independent of the chosen solution. They represent the most important and high-level evaluation and acceptance criteria; failure to satisfy an MOE is cause to re-evaluate a design’s suitability [28]. They provide a standard against which the projected operational characteristics during design, and the actual operational characteristics during Validation, are judged, and thus a test of whether the chosen system meets the client’s intended needs. Using the example of a power generating unit, this may be the maximum output currently achievable with the design as it was considered at a point in time. If this designed output is 1,050 MWe, below the MOE of 1,150 MWe, the validity of the business case over the lifetime of generation is threatened. Clearly this indicates a serious issue that should be addressed and provides a basis for management action [81]. Measures of Suitability, or MOSs, relate to the properties of safety, capacity and the various ‘ilities’ [28] such as reliability, availability and maintainability. The importance of these measures will vary, with properties such as safety and environmental performance likely to be non-negotiable. Failure to meet other Stakeholder Requirements, while undesirable, may in some cases be managed within the requirements process. Using the example of the power plant again, a generating availability of 310 days per year, as achievable through operational practices dictated by the design, may exceed the MOS of 300 days per year. In this instance the measure is above that required, providing a level of confidence that this area of the design will achieve the relevant Stakeholder Requirement [81]. These metrics are most suitable for measurement at the boundaries of development phases.
They will give an indication of the health of the stakeholder requirements process though they do not necessarily point to where the underlying issues lie. Furthermore, at the point in time that they are taken the issue may have been in place for some time and therefore not easy to rectify.
Figure 3. System engineering and areas of coupling [81].
System Requirements
Measures of Performance, or MOPs, are supplier-orientated and are applied during Verification to determine whether the actual system meets the stated System Requirements [28]. Meeting the System Requirements does not necessarily mean that the system will meet the Stakeholder Requirements, in the same way that meeting the Verification criteria does not necessarily mean the system has met the Validation criteria. Clearly any requirements process aims to ensure that there is no gap either between the stakeholders’ expectations and the derived Stakeholder Requirements, or between those Stakeholder Requirements and the derived System Requirements [81]. An example of an MOP could be the radiological dose rate to personnel at the exterior of the waste facility. The System Requirement was for a maximum dose rate to personnel at the exterior of the facility of 0.3 µSv/hr. It is reasonable to expect that the roof would need maintenance and that shielding would be required to satisfy the requirement for maintenance workers. If the current design could not achieve the MOP for the chosen solution class, it would require attention [81]. Technical Performance Measures, or TPMs, relate to MOPs and measure technical attributes of system elements to determine how well a system element will satisfy a System Requirement at any particular point of the design. They are at a higher level of detail than the MOPs and specifically measure design progress, compliance with System Requirements and technical risks [54]. A failure to meet performance requirements, within a tolerance band, represents a risk to schedule, cost or scope. A subset of the TPMs are Key Performance Parameters (KPPs) [54]. In a similar way to MOEs, these are essential to the system’s successful operation; failure to meet a KPP may be cause for the project to be re-evaluated or terminated.
A relevant TPM that would qualify as a KPP relates to the waste facility example: the physical quantity of waste that the facility can contain. If the quantity that the design could achieve did not meet the amount calculated to be generated over the lifetime of the plant it serves, the design should be subject to detailed scrutiny. In the same way that MOSs and MOEs can be used as KPIs to measure progress towards Validation goals, MOPs and TPMs can similarly be used to forecast the outcome of Verification activities in important areas [81]. As with MOEs and MOSs, only a limited number of these measures are chosen, in important areas of the design. They can be effective in the management of sub-contracted design, where the development organisation needs to monitor particular aspects of the health of the sub-contracted technical development. These measures, like those for Stakeholder Requirements, are likely to be assessed infrequently and give an indication of development health only. They are particularly useful for assessing the interface between the main organisation and sub-systems developed by others.
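The tolerance-band behaviour of a TPM (or indeed an MOE, MOS or MOP) can be sketched as follows. The status labels, signature and thresholds are this sketch's assumptions:

```python
def tpm_status(measured: float, required: float, tolerance: float,
               higher_is_better: bool = True) -> str:
    """Classify a performance measure value against its requirement.

    Within the tolerance band the measure is 'at risk'; outside it on
    the wrong side it is 'breached' and warrants management action.
    """
    margin = (measured - required) if higher_is_better else (required - measured)
    if margin >= 0:
        return "met"
    if margin >= -tolerance:
        return "at risk"
    return "breached"
```

Using the earlier examples: a designed output of 1,050 MWe against an MOE of 1,150 MWe is breached for any tolerance below 100 MWe, while a projected dose rate of 0.35 µSv/hr against the 0.3 µSv/hr requirement would be at risk under a hypothetical 0.1 µSv/hr band.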
Requirements Verification Traceability Matrix (RVTM)
Another method of measuring System Requirements is through the collation of associated information stored in a matrix, spreadsheet or database. The metadata can be compared against the plan, or trends monitored, to identify areas of concern. The complexity of the RVTM should be proportional to that of the technical development, and it may include fields specifically chosen to assist in the management of foreseen project and system development risks identified early using techniques including the complexity assessment. Suggested fields to be collated include [81]:
· System Requirement unique identifier and name;
· Requirement description;
· Requirements specification (document reference);
· Overall requirement status (such as detailed in design, implementation or integration);
· Trace to overarching Stakeholder Requirement (unique identifier and document reference);
· Verification strategy status (such as undefined, strategy only, procedure completed);
· Verification procedure (document reference);
· Verification status (such as not started, failed, completed with reservations or completed, with document references as appropriate);
· Validation purpose (such as for acceptance, certification, readiness for use or Qualification);
· Validation procedure (document reference);
· Validation status (such as not started, failed, completed with reservations or completed, with document references as appropriate).
Other fields could be employed, in either the same artefact or complementary artefacts, including:
· Stability;
· Importance;
· Design compliance;
· Interface compliance;
· Process compliance;
· Risk and risk status;
· Safety case and licensing compliance;
· Procurement and contractual compliance;
· Requirements criticality within the project schedule.
These measures may give an indication of requirements status against the planned development and are lagging in nature. They do, however, allow for the identification of areas of concern.
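A minimal RVTM query of this kind can be sketched as follows. The record layout and field names are hypothetical abbreviations of the suggested fields; in practice the matrix would live in a requirements database or spreadsheet:

```python
# Hypothetical minimal RVTM held as a list of records.
rvtm = [
    {"id": "SYS-001", "verification_status": "completed",
     "validation_status": "not started", "risk": "low"},
    {"id": "SYS-002", "verification_status": "failed",
     "validation_status": "not started", "risk": "high"},
    {"id": "SYS-003", "verification_status": "not started",
     "validation_status": "not started", "risk": "high"},
]


def areas_of_concern(matrix) -> list:
    """Requirements whose Verification has failed, or which carry
    high risk while still unverified."""
    return [r["id"] for r in matrix
            if r["verification_status"] == "failed"
            or (r["risk"] == "high"
                and r["verification_status"] == "not started")]
```

Queries of this sort support the trend monitoring described above: run per reporting period, the size of the returned list itself becomes a measure.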
Requirements’ attributes More generally requirements can be given a wide variety of metadata that can be used to interpret the overall status of the requirements or a particular sub-system or area of development. Not all of these attributes should be chosen due to the overhead required to maintenance complete and accurate data sets. Attributes should be chosen early in the development lifecycle to ensure that data does not require to be retrofitted at a later date [81]. Both the RVTM and requirements attributes are primarily concerned with quality and identifying issues relating to progress. Many of attributes, together or in independence, can be used to infer the efficacy of resource allocation, organisational decomposition, interfaces and Decision making. For instance, a backlog of requirements of a particular status may indicate one or more bottlenecks in the process. This may be due to issues with resourcing, organisational efficiency or decision making. It can be used to identify risk at a high level of decomposition within the system of interest which can be rolled up to show particular sub-systems which deserve additional attention [81]. If trends are identified early enough, for example a backlog of under review requirements within a particular development area, then action can be taken to reduce or eliminate delays to the schedule. A comprehensive list of attributes is shown below [81]. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13.
Unique identifier; Unique name; Originator / author; Date of requirements entry; Change board identity (if more than one); Change status; Version number; Approval date; Date of last change; Due date for requirement; Stability; Status of requirement (under review, approved etc.); Trace to parent; 156
Nick Brook MSc Safety-Critical Systems Engineering 9th January 2017 14. Trace to source; 15. Trace to interface definition; 16. Trace to peer; 17. Trace to Verification method; 18. Trace to Verification requirements; 19. Trace to Verification results; 20. Trace to Verification status; 21. Trace to Validation (as per Verification); 22. Trace to primary discipline; 23. Trace to secondary discipline; 24. Trace to licensing structure; 25. Trace to procurement (high-value / long-lead items); 26. Trace between functional and non-functional requirements (and vice versus); 27. Trace to Interface Definition; 28. Priority; 29. Criticality (possibly derived from parent and see also MOEs and KPPs); 30. Risk; 31. Key Driving Requirement (KDR) – have large impact on cost or schedule; 32. Owner (accountable for completion); 33. Actionee (responsible for completion); 34. Stakeholder of interest; 35. Rationale; 36. Applicability (if used in another product or on another site); 37. Type; 38. Status of implementation; 39. System of Interest - primary Verification method; 40. Verification approach; 41. Conditions of use; 42. States and modes; 43. Trace to MOE or MOS (Stakeholder Requirements only); 44. Trace to MOP, TPM or KPP (System Requirements only); 45. Additional comments.
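The backlog inference described above, a build-up of requirements at one status within a development area, can be sketched as a simple count; the record fields and threshold are illustrative assumptions:

```python
from collections import Counter


def review_backlog(requirements, threshold: int = 5) -> list:
    """Sub-systems with more than `threshold` requirements stuck at
    'under review' - a possible bottleneck indicator."""
    counts = Counter(r["subsystem"] for r in requirements
                     if r["status"] == "under review")
    return sorted(s for s, n in counts.items() if n > threshold)
```

As with the RVTM queries, the threshold would be tuned to the size of the development, and the trend in the count over successive periods is more informative than any single snapshot.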
Development health
Development health is likely to contain a high degree of subjectivity. Facets such as adherence to the planned cost and schedule can be measured through EVM or similar techniques; other, less tangible aspects require a framework to provide context and guidance within which they can be assessed. Using a defined method, Kline et al [76] formed a weighted matrix of design performance characteristics that can be used to measure development health. Each high-level measure is formed of five individual factors, each assigned an individual and distinct text description as the basis for scoring. The high-level measures are as follows:
· Problem Definition;
· Prior Knowledge;
· Divergent Thinking;
· Professional Analysis;
· Decision Making;
· Create & Follow Plan;
· Iterate & Assess;
· Validate Solutions;
· Communication;
· Teamwork.
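The arithmetic of such a weighted matrix can be sketched briefly. The function below is an illustrative reading of the scheme, not Kline et al's published calculation; the example scores and weights are invented:

```python
def health_score(factor_scores: dict, weights: dict) -> float:
    """Weighted overall development-health score.

    Each high-level measure is taken as the mean of its factor scores
    (e.g. five factors each scored 1-5), combined using the measure
    weights and normalised back to the original scoring range.
    """
    total_weight = sum(weights.values())
    weighted = sum((sum(scores) / len(scores)) * weights[measure]
                   for measure, scores in factor_scores.items())
    return weighted / total_weight
```

A development scoring a uniform 4 on a heavily weighted measure and a uniform 2 on a lightly weighted one would land between the two, nearer the former.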
Figure 4. Development Health matrices [76].
The full matrix, containing all descriptors, can be seen in Figure 4. By observing attributes of the development process and team, the technique is a leading indicator and should be able to identify risks to the development before they are fully realised. It should be undertaken reasonably infrequently, say at the beginning of a phase or, if the phase is particularly long in duration, at three- or four-month intervals. It is proposed that the Kline et al methodology is applied, partly or in full, to form new measures or to tailor existing measures for use in technical development. One candidate for this approach, mentioned in the Literature Survey, is Torbet et al’s six criteria of ‘Design Performance Measurement’ [82]:
· Client needs (stakeholder requirements);
· Integrating design into objectives (system requirements);
· Internal design processes (suitability and effectiveness of internal design);
· External design processes (suitability and effectiveness of external design);
· Profitability and efficiency (of the design);
· Learning and innovation.
There is of course considerable overlap between these and requirements-orientated measures such as MOEs and MOPs, and also with the criteria within the complexity assessment. Another way of identifying subjects for measurement is their identification during complexity assessment and the designation of CSFs. Alignment of specific performance measures with areas of concern broadly follows the principles described for planning via the use of DSM: that is, focus on areas of particular risk of deviation from the plan while monitoring (as with planning) at a lower level of decomposition across the entirety of the development. The headings within Kline et al’s ‘Creating and Using a Performance Measure for the Engineering Design Process’ relate particularly to the complexity criteria of development process and internal organisation.

Process maturity
Maturity can apply both to the development, in terms of where it is within the process, and to the technology that is to be used within the system of interest. There will necessarily be some overlap and influence between the two.

Figure 5. NASA PDRI partial hierarchy [85].

Though not directly related to development health or schedule, some inference can be drawn as to their status. For instance, in an extreme case, technology of low maturity which is approaching Validation suggests either problems within the development process or badly planned or incompletely executed activities leading to that particular point in time. Amongst the organisations that use process maturity, the U.S. Department of Energy use a version of the ‘Project Definition Rating Index’, or PDRI [83]. Though applying to construction-type projects in this instance, the process is not dissimilar to that of system development in many respects. The PDRI framework provides several useful functions relating to the process of defining and communicating project scope; we are, however, interested in its use as a standard framework for determining development status. Primarily this relates to scope definition and the identification of areas of concern. It also purports to allow ‘monitoring of progress’ during initial planning to identify areas of high risk, and can provide a benchmark, once these are defined, against which technical development can be assessed. The main hierarchy used should be adapted to suit the organisation and type of development. The PDRI is used by a number of other agencies within the United States, including NASA and the Construction Industry Institute [84][85]. The framework used by NASA for the construction of their facilities is shown in Figure 6; this is further decomposed into individual criteria, against which scoring is assigned as shown in Figure 5. Of course there is still some subjectivity within the scoring, but this can be further refined by providing individual guidance for each criterion, as is documented in Kline et al’s development health methodology. The result of the scoring can then be compared with the ‘maturity value rating’ and ‘qualitative criteria’ [84].
A maturity rating that does not correspond with the development phase currently in progress indicates a concern. This can be further analysed to determine where the issues reside by looking at the individual criteria. Though similar in methodology to Kline et al's method, it is a lagging indicator, as it essentially observes the status of the development at a particular point in time. Again, a review at the beginning of a development phase, or every three or four months, would be appropriate. Process maturity measures relate directly to the complexity criteria of development process, but also indirectly to the three complexity criteria within system development.
Figure 6.
Maturity value rating criteria – PDRI sections, categories and elements [85].
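The scoring arithmetic described above can be sketched briefly. This is an illustrative sketch only; the element names, weights and levels below are invented for the example and are not the official CII or NASA PDRI tables. Each element is assessed at a definition level (1 = fully defined, 5 = incomplete), each level carries a weighted score, and the total is the sum over elements; a higher total, or any element still at a high level, flags an area of concern for the current phase.

```python
# Hedged sketch of PDRI-style scoring. Elements and weights are
# illustrative placeholders, not the official CII/NASA tables.

# Each element maps a definition level (1 = fully defined .. 5 = incomplete)
# to a weighted score contribution.
ELEMENT_WEIGHTS = {
    "site characteristics": {1: 2, 2: 7, 3: 12, 4: 18, 5: 25},
    "design parameters":    {1: 3, 2: 9, 3: 16, 4: 24, 5: 33},
    "procurement strategy": {1: 1, 2: 4, 3: 8,  4: 13, 5: 17},
}

def pdri_score(assessment):
    """Sum the weights for the assessed definition level of each element."""
    return sum(ELEMENT_WEIGHTS[elem][level] for elem, level in assessment.items())

def flag_concerns(assessment, threshold_level=3):
    """Leading-indicator use: list elements still poorly defined."""
    return [e for e, lvl in assessment.items() if lvl >= threshold_level]

assessment = {"site characteristics": 2,
              "design parameters": 4,
              "procurement strategy": 1}
print(pdri_score(assessment))      # 7 + 24 + 1 = 32
print(flag_concerns(assessment))   # ['design parameters']
```

In practice the total would then be compared against the 'maturity value rating' bandings appropriate to the current development phase.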
System maturity
One of the most popular methods of assessing the system is the Technology Readiness Assessment (TRA), also known as the Technology Readiness Level (TRL). Not surprisingly, and in common with other systems engineering tools, TRL has been developed for use by organisations such as the US Department of Defense and NASA [86][87][88]. TRL assesses the maturity of the system as a means of analysing development risk, and is used in a similar way to the PDRI methodology: if the TRL is below that required for the development phase, there are serious risks to the realisation of the technology, which will ultimately be reflected in an inability to verify or validate the technology. TRLs generally have ratings between TRL 1 and TRL 9, with subjectivity of analysis constrained through the use of text guidance. TRL 1 is the least mature and TRL 9 denotes technology proven within its intended operational environment. Figure 7 shows the scale used by NASA. The US Department of Defense use similar
Figure 7.
NASA’s Technology Readiness Levels [99].
criteria to define their rating, which are shown in Appendix O. In both of these applications of TRL, a team of subject matter experts assists in providing the actual rating, which forms the basis for stop/go-type decisions by a board of appropriate managers at predetermined points in the development, such as development stage boundaries. TRLs relate directly to the complexity criteria of technology.
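The stop/go use of TRLs at stage boundaries can be sketched as a simple comparison of the assessed level against the level required at each gate. The gate names and required levels below are assumptions for illustration only, not DoD or NASA policy.

```python
# Hedged sketch of a TRL stage-gate check. The gates and required
# levels are illustrative assumptions, not DoD or NASA policy.
REQUIRED_TRL = {
    "concept review": 3,       # e.g. proof of concept demonstrated
    "preliminary design": 5,   # e.g. validation in a relevant environment
    "critical design": 6,
    "production decision": 8,
}

def trl_gate_check(assessed_trl, gate):
    """Return (passes, shortfall) to support a stop/go decision at a gate."""
    required = REQUIRED_TRL[gate]
    return assessed_trl >= required, max(0, required - assessed_trl)

ok, gap = trl_gate_check(assessed_trl=4, gate="critical design")
print(ok, gap)  # False 2
```

A shortfall of two or more levels at a gate would, as discussed above, indicate serious risk to eventual verification and validation of the technology.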
A development of TRL, and an attempt to address TRL's inherent limitations, is the System Readiness Assessment, or SRA. SRA provides a 'whole system perspective' [89]. It assesses all system components and the integration between them, along with external dependencies, and is designed to be undertaken more frequently than assessments using TRLs alone [89]. TRLs are assigned to the system of interest as before, along with an Integration Readiness Level, or IRL. Criteria are provided to guide the assessment, as per the traditional TRL method. From these, three System Readiness Level (SRL) metrics are derived through calculation. The Component SRL looks at individual components and how they are integrated, to identify elements of the system that are lagging behind the system as a whole. The Composite SRL is concerned with the overall integration of the system, and this is then converted into an integer between 1 and 9, called the SRL [88]. Assessments and calculations are performed across the system architecture, and the ratings can be seen in Figure 9. The IRL relates to the complexity criteria of internal interfaces, while the SRL relates to the three system development complexity criteria as a whole.
Figure 8.
The SRA process [89].
Figure 9.
SRL criteria and ratings [89].
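The component and composite SRL calculations can be sketched as follows, loosely following the matrix formulation published for SRA (after Sauser et al.). The TRL and IRL values are invented for the example, and the final conversion of the composite value back onto the 1-9 scale is omitted; a real assessment would use the documented normalisation and ratings.

```python
# Sketch of a component/composite SRL calculation, loosely following the
# matrix formulation published for SRA; values are illustrative only.

def srl(trl, irl):
    """trl: list of component TRLs (1-9); irl: symmetric matrix of IRLs
    (1-9), with irl[i][i] = 9 by convention and 0 meaning no integration.
    Returns (component SRLs, composite SRL), both normalised to [0, 1]."""
    n = len(trl)
    t = [x / 9.0 for x in trl]                      # normalise TRLs
    comp = []
    for i in range(n):
        links = [j for j in range(n) if irl[i][j] > 0]
        # average of normalised IRL x TRL over the component's integrations
        s = sum((irl[i][j] / 9.0) * t[j] for j in links) / len(links)
        comp.append(s)
    return comp, sum(comp) / n

trls = [7, 5, 9]
irls = [[9, 4, 0],
        [4, 9, 6],
        [0, 6, 9]]
component, composite = srl(trls, irls)
# components lagging the system as a whole stand out as low values
```

Used this way, a low Component SRL flags an element lagging the rest of the system, as described above, while the composite value tracks overall integration maturity between full assessments.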
Organisation, process and schedule complexity
DSMs were the subject of earlier discussion and are useful both in defining areas of the organisation, process or schedule that require particular attention and in providing useful measures as a basis for management action. As they relate to the plan, they are reasonably static in nature and as such lend themselves to plan-validation-type activities at the beginning of a development phase or at a new iteration of the schedule. The use of DSM-related performance measures is discussed in Section 4.2.5 within the main body of the text. Suggested uses of these measures include taking median and peak quantities. Observed DSM primary measures can either be trended through successive iterations or compared against acceptable absolute values determined through previous developments. These can consider either the phased activities as a whole or the current schedule critical path. The secondary measures of likelihood of change across a feedback cycle and vulnerability to change propagation could be used to inform rescheduling or other interventions undertaken to reduce development risks to acceptable levels. The basic properties of the DSM itself can be used to provide an insight into the schedule and its complexity. The relative proportions of sequential, parallel, coupled and conditional activities within a development phase can be used to provide an indication of its status. A high incidence of coupled activities, especially, gives
a clear indication of complexity. Again, this could be applied against the critical path activities only, to provide information on these priority activities. Finally, Kreimeyer and Lindemann [73] list four metrics using the language of DSM optimisation activities:
· Sequencing – the number of 'ideally sequenced' activities within the DSM, i.e. sequential activities;
· Tearing – the number of activities that have been subject to tearing because they were a barrier to sequencing;
· Banding – the number of activities that are independent of each other, i.e. parallel activities;
· Clustering – the number of 'mutually related', i.e. coupled, activities.
As a way of identifying areas of concern within the plan this is a leading indicator and should prompt replanning, such as further optimisation of the DSM. Process and organisational-architecture DSM-related performance measures are concerned with controlling the development process and internal organisation complexity criteria.
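The proportions of sequential, parallel and coupled activities discussed above can be computed directly from a binary DSM. A minimal sketch, assuming the convention that dsm[i][j] = 1 means activity i depends on an output of activity j:

```python
# Sketch: classify activity pairs in a binary DSM as sequential,
# parallel or coupled. Assumed convention: dsm[i][j] == 1 means
# activity i depends on an output of activity j.

def classify_pairs(dsm):
    n = len(dsm)
    counts = {"sequential": 0, "parallel": 0, "coupled": 0}
    for i in range(n):
        for j in range(i + 1, n):
            fwd, back = dsm[i][j], dsm[j][i]
            if fwd and back:
                counts["coupled"] += 1      # mutual dependency: iteration risk
            elif fwd or back:
                counts["sequential"] += 1   # one-way dependency
            else:
                counts["parallel"] += 1     # independent activities
    return counts

dsm = [[0, 0, 1],
       [1, 0, 0],
       [1, 0, 0]]
print(classify_pairs(dsm))
```

Trending the coupled count through successive schedule iterations, or restricting the count to critical path activities, follows directly from this classification.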
Appendix M Risk register template
Appendix N Risk attributes as metadata
· Unique ID;
· Description of risk (cause, risk event, effect);
· Whether it is a threat or an opportunity;
· Who raised the risk;
· Date the risk was raised;
· Risk category according to the risk breakdown structure (RBS);
· Link to WBS element;
· Whether impact is to cost, schedule or scope;
· Impact magnitude (very low to very high);
· Impact number (according to impact magnitude banding);
· Likelihood (very low to very high);
· Likelihood number (according to probability banding);
· Priority value (according to product of impact number and likelihood number);
· Priority band (according to risk matrix);
· Proximity (date relating to WBS);
· Risk response category;
· Risk response actions;
· Action date;
· Risk status;
· Priority change in period;
· Risk owner;
· Risk actionee;
· Commentary and reference to other documents.
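If the register is held electronically, the derived attributes above (priority value and priority band) can be computed rather than entered by hand. A minimal sketch with a reduced set of fields; the band boundaries below are illustrative assumptions, not values from any particular risk matrix:

```python
# Sketch of a risk register entry with derived priority fields.
# Band boundaries are illustrative assumptions.
from dataclasses import dataclass

PRIORITY_BANDS = [(1, 4, "low"), (5, 11, "medium"), (12, 25, "high")]

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    impact_number: int      # 1 (very low) .. 5 (very high)
    likelihood_number: int  # 1 (very low) .. 5 (very high)

    @property
    def priority_value(self):
        # product of impact and likelihood numbers, as in the list above
        return self.impact_number * self.likelihood_number

    @property
    def priority_band(self):
        for low, high, band in PRIORITY_BANDS:
            if low <= self.priority_value <= high:
                return band
        raise ValueError("priority value out of range")

r = RiskEntry("R-001", "Regulator rejects I&C architecture", 4, 3)
print(r.priority_value, r.priority_band)  # 12 high
```

Computing the derived fields keeps the register internally consistent and makes the 'priority change in period' attribute trivial to track between reviews.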
Appendix O Criticality Assessments for Boeing Dreamliner case study
Complexity Assessment – Concept Design
Complexity Assessment – Research and Development
Complexity Assessment – Scheme Design
Complexity Assessment – Detailed Design
Appendix P Timeline of the Boeing Dreamliner
Boeing started work on what would become the Dreamliner in the late 1990s. The first planes were delivered to All Nippon Airways in 2011, years late and billions over budget [110].

April 26, 2004 – All Nippon Airways (ANA) becomes the launch customer for the Dreamliner, with the first of 50 planes to be delivered in 2008. By the end of 2004, total orders had reached 237. First flight scheduled for autumn 2007 [96].
July 8, 2007 – First 787 unveiled at Boeing’s Everett assembly factory, with 677 orders already received. Initial plan is for the aircraft to enter commercial service in May 2008 [106].

Pre-launch problems [96]
September 2007: Boeing delays plans to deliver the first Dreamliner to Japan’s All Nippon Airways, citing technical issues.
March 2008: Goldman Sachs breaks the news that the aircraft will experience further delays.
December 2009: The Dreamliner finally completes its first test flight.
November 2010: Another test flight goes badly when a fire breaks out on board a plane, causing further delays.
September 2011: Boeing has received regulatory approval to sell Dreamliners by now, but the long delays have led to a huge build-up of unsold Dreamliners sitting on the tarmac. Boeing has over $16 billion in inventory—finished planes, partly-finished planes and plane parts—tied up on its balance sheet, which one fund manager compares to “dinner in the anaconda.”
October 2011: The Dreamliner takes off on its first commercial flight. But China Eastern Airlines cancels an order for two dozen Dreamliners, which it ordered in 2005. Boeing is now tackling an order backlog totalling 800 planes, caused by its production delays.
July 2012: Boeing and US officials investigate why a Dreamliner jet has been spilling debris.
August 2012: Australian airline Qantas pulls an order for 35 Dreamliners, citing the weak market for international air travel.
Sept. 5, 2007: A shortage of fasteners and incomplete software cause a three-month delay to first flight.
Oct. 10, 2007: More software issues cause a further three-month delay, and a six-month delay to first deliveries because of international and domestic supply changes.
Jan. 16, 2008: Another three-month delay announced to first flight.
April 9, 2008: Boeing announces a fourth delay. First flight is rescheduled until late 2008 and initial deliveries are put on hold until September 2009.
Nov. 4, 2008: Boeing workers go on strike and continued fastener problems mean first flight is rescheduled for mid-2009. Various airlines claim they will sue Boeing for compensation.
June 15, 2009: In front of the aviation world at the Paris Air Show, Boeing claims the first flight will take place within two weeks. A little over a week later, Boeing cancels the first flight and reschedules for late 2009.
Dec. 15, 2009: Two years late, the aircraft finally makes its maiden flight, after making high-speed taxi tests three days earlier.
June 2010: Fleet-wide problems with horizontal stabilizers mean all aircraft in the test fleet are inspected and repaired.
Aug. 2, 2010: The Trent 1000 engine, one of two used by the airplane, suffers a blowout at a Rolls-Royce facility. First delivery to Japan's All Nippon Airways, a unit of ANA Holdings Inc (TYO:9202), is delayed until February 2011.
Nov. 9, 2010: During a test flight above Texas, a 787 experiences an electrical fire and is forced to make an emergency landing. All test flights are suspended until Dec. 23.
January 2011: First delivery rescheduled until September 2011 due to electrical and software problems resulting from the in-flight fire.
Aug. 26, 2011: Boeing receives approval from the U.S. Federal Aviation Administration and the European Aviation Safety Agency, enabling deliveries to commence.
Sept. 25, 2011: Three years behind schedule, ANA receives the first Dreamliner.
Oct. 26, 2011: First commercial flight takes place between Tokyo-Narita and Hong Kong. Some seats fetch as much as $34,000 because of high demand from aviation enthusiasts.

Post-launch problems
Feb. 6, 2012: Boeing finds a manufacturing problem in the fuselage section of some Dreamliners.
July 23, 2012: ANA has five aircraft repaired after discovering a problem inside the Rolls-Royce engine.
July 28, 2012: A Dreamliner suffers an engine failure on the ground at the Boeing plant in Charleston. An investigation is announced by U.S. authorities.
Sept. 5, 2012: A hydraulic problem inside an ANA 787 causes the pilot to abort take-off. White smoke is seen billowing from the aircraft.
Oct. 4, 2012: An engine problem on board an Air Bridge Cargo 747 in Shanghai prompts General Electric (NYSE:GE) to recommend the inspection of GEnx engines, which are used on some 747 and 787 aircraft.
Dec. 5, 2012: A report of fuel leaks prompts the FAA to order the inspection of all 787s.
Jan. 7, 2013: A fire starts on an empty Japan Airlines (TYO:9201) 787 at Boston Logan International, prompting the grounding of all 787s worldwide.
Jan. 8, 2013: An ANA 787 is grounded after a crack in the windshield is found. Also, a JAL flight is forced to cancel after engineers discover a fuel leak.
Jan. 9, 2013: United Continental Holdings Inc. (NYSE:UAL) discovers faulty wiring near a battery on six of its aircraft. The National Transportation Safety Board launches an investigation.
Jan. 11, 2013: Another Japan Airlines aircraft is found to have a fuel leak.
Jan. 13, 2013: Japan’s Transport Ministry launches an investigation after a third leak is discovered on board a JAL aircraft.
Jan. 16, 2013: An ANA flight from Tokyo to Ube, Japan, makes an emergency landing after a burning smell is detected in the cabin and a warning light comes on. ANA and JAL ground all their 787s, and aviation authorities worldwide order the grounding of all Dreamliners pending checks. Boeing halts all deliveries.
April 5, 2013: Redesigned batteries undergo final tests. Flights resume on April 26.
June 2, 2013: A pressure sensor detects overheating on a 787.
June 23, 2013: United Airlines makes an emergency landing after a problem is discovered with the braking system.
July 12, 2013: An empty Ethiopian Airlines 787 catches fire at London's Heathrow airport, which shuts down the entire airport temporarily. The fire was caused by a faulty battery.
July 18, 2013: A maintenance message on board a JAL flight alerts the crew to a fuel pump error.
July 22, 2013: An electrical panel problem grounds a Qatar Airways 787.
July 24, 2013: An investigation is launched after an oven overheats aboard an Air India flight.
July 26, 2013: Two ANA-operated Dreamliners are found to have faulty battery wiring, the same problem that caused the fire at Heathrow.
July 27, 2013: United Airlines discovers a problem with an emergency beacon.
Aug. 27, 2013: A problem with slats (extensions of the leading edge of the wing deployed, like the trailing-edge flaps, during take-off and landing for added lift) forces a JAL 787 to turn back to Tokyo.
Sept. 19, 2013: A United Airlines 787 develops similar flap problems and is forced to declare an emergency and land in Anchorage, Alaska.
Sept. 28, 2013: Technical problems with a transponder prompt a LOT Polish Airlines flight to make an emergency landing in Iceland.
Oct. 9, 2013: Electrical problems cause failed lavatories and the failure of in-flight anti-ice systems on a JAL aircraft, which returns to San Diego.
Nov. 16, 2013: A British Airways flight experiences hydraulic failure.
Jan. 14, 2014: Full Japan Airlines Dreamliner fleet grounded after more battery problems.
Jan. 19, 2014: An Air India flight loses all transponders.
Jan. 19, 2014: A China Southern 787 receives multiple system messages, including flaps, nose gear landing, nose gear position, doors and brakes.
Feb. 5, 2014: All management computers fail aboard an Air India flight.
March 5, 2014: Cracks discovered on wings of 787s in production.
Feb. 2014: Delays in fuselage production due to wiring issues.
2015: Boeing discovers a bug in the 787 Dreamliner's software that could lead to a sudden loss of all power in the aircraft at 35,000 feet [106].
Jan. 2016: A Japan Airlines 787 from Vancouver to Japan experiences an engine failure mid-flight due to ice build-up [106].
April 24, 2016: Airworthiness Directive – the FAA tells Boeing to urgently resolve engine icing issues by repairing or replacing the engines [106].

A detailed and up-to-date timeline of events for the 787-8 Dreamliner can be found on the AeroInside internet site [111]. A further 25 events have been reported since the January 2016 Japan Airlines engine failure, six of them since the first draft of this project in early August 2016. https://www.aeroinside.com/incidents/type/b788/boeing-787-8-dreamliner
Appendix Q Complexity management case study using OL3 and AREVA’s EPR
Case study 2 – AREVA’s OL3 EPR

Background
In 2002 the Finnish Government decided in favour of the construction of its fifth nuclear plant, to reduce dependence on electricity imported from outside the country [126][127]. After a competitive tendering process, and in recognition of European experience in operating Pressurised Water Reactors (PWRs), a consortium of AREVA and Siemens was nominated as the winning bidder with their next-generation Evolutionary Power Reactor (EPR). A site on the island of Olkiluoto, adjacent to the operating plants OL1 and OL2, had previously been chosen, and it was handed over to AREVA in 2005. The development and construction was to be a turn-key contract, with a planned electrical output of 1,600 MW [126]. The EPR is variously known as the European Pressurised Reactor and the Evolutionary Power Reactor [128] and is a development of the Framatome N4 and Siemens Power Generation Division KONVOI reactors [129]. The EPR is designed for enhanced safety while remaining economically competitive with the other reactor designs currently in development and construction. One of the important considerations is that of fuel: the design can use 5% enriched uranium oxide fuel, which itself can comprise 50% mixed uranium-plutonium oxide fuel [129]. This allows Mixed Oxide Fuel (MOX) to be used, with the advantage that plutonium that has either been recovered from used reactor fuel or is weapons-grade can be utilised for energy and disposed of [130]. Construction began in mid-2005, with the first unit originally due to have been operational in 2009 at a total cost of €3.7 billion [129].
Work Breakdown Structure
The WBS was created using a blend of hardware- and function-orientated nodes [131], though traditional business functions dominate. The project was divided between the two partners in the consortium, AREVA and Siemens, with AREVA responsible for the nuclear components of fuel and the so-called nuclear island, and Siemens controlling the electrical generation aspects, called the turbine island. The fuel component and nuclear island WBSs are closely based on the actual project WBS [132], while the turbine island is decomposed into its standard systems [133]. The WBS is shown at varying levels of detail depending on the part concerned, but generally at the higher levels, due to the constraints of space and for relevance. The nuclear island will form the basis for most of the subsequent analysis, due to its higher level of design development and inherent complexity. The concept design phase was completed prior to the commencement of the project, as the EPR was developed for global markets in advance of the OL3 project. The terminology across the WBS is inconsistent, as is the inclusion of design phasing: examples are the inclusion of ‘overall concept’, ‘basic design’ and ‘detailed design’ in some engineering functions, while others omit phasing entirely. It is surmised that the WBS was conceived piecemeal by the various functions before being collated into a single entity.

Fuel
· Fuel management
· Design
· Fabrication

Nuclear Island
· General
  o Project management
  o Technical input and generic (EPR) data
    § General definitions
    § General safety
    § Manuals
    § Hazard management
    § Chemistry
    § Radiation protection
    § Materials technology
    § QC inspections specifications
  o Overall functional engineering
    § Overall concept
    § Core and thermo-hydraulic design
    § Functional requirements
    § Safety analysis
    § (Radiological) containment
    § Severe accidents (analysis)
    § Design of plant operating procedure
  o Mechanical systems engineering
    § Design concepts
    § Systems engineering process
    § Main heat transfer system
    § Reactor auxiliary systems
    § Reactor ancillary systems
    § Other nuclear island/reactor plant systems
    § Monitoring systems
  o Mechanical equipment engineering
    § Generic inputs/data
    § Reactor equipment
    § Main heat transfer and transport equipment
    § Reactor auxiliary systems
    § Reactor ancillary systems
    § Nuclear fuel handling and storage systems
    § Other reactor plant and miscellaneous equipment
    § Monitoring system components
    § Mechanical equipment – generic engineering
  o Civil engineering
    § Civil management
    § Basic design engineering
    § Detailed design engineering
    § Contract management
    § Site management
    § Field design and engineering
    § Structural civil works
    § Finishing works
    § Mechanical equipment
  o Plant layout, civil works engineering
    § Generic inputs and data
    § Plant layout engineering – main buildings
    § Plant layout engineering – ancillary buildings
    § Pipe layout and design – main buildings
    § Pipe layout and design – ancillary buildings
    § HVAC layout engineering
    § As-built engineering
    § Civil work
    § Civil components
  o Electrical systems and component engineering
    § Electrical engineering
    § Earthing and lightning protection
    § Switchgear component design and realisation
    § Emergency power component design and realisation
    § Transformer component design and realisation
    § Drives component design and realisation
    § Cables, junction boxes, seals, cable trays, penetrations component design and realisation
    § Building technology
  o Instrumentation and control engineering
    § General concepts
    § Overall I&C architecture
    § Main control room and reactor safety systems design
    § Operational I&C
    § Reactor protection systems
    § Post-accident management systems
    § Reactor control surveillance and limitation system
    § Process instrumentation
    § Core related instrumentation
    § Training simulator
    § Monitoring systems
· Nuclear island procurement (not expanded, but mirroring the content of the other functions)
· Nuclear island erection
  o NI erection management
  o NI erection methods
  o Mechanical systems engineering
  o Mechanical equipment erection
  o Erection – civils
  o Erection – piping, insulation, HVAC, civils
  o Erection – electrical engineering
  o Erection – instrumentation and controls
  o NI site management
· Nuclear island commissioning
  o NI commissioning management
  o Technical input and generic data
  o Overall commissioning
  o Mechanical systems commissioning
  o Mechanical equipment commissioning
  o Building and building equipment commissioning
  o Plant layout and civil works commissioning
  o Electrical systems and components commissioning
  o Instrumentation and controls commissioning

Turbine Island
· Steam Turbine
· Generator
· Condenser
· Condensate-Feedwater System
· Moisture Separator Reheater (MSR)
· Cooling System
· Instrumentation and Control System (I&C)
Complexity assessment
The timing of assessments should be aligned with project phasing and with project and development governance processes, including stage-gates and major design reviews. For the purposes of the assessments it is assumed that there are scheme and detailed design phases. For this case study a high-level assessment will be undertaken at the beginning and end of the design phase, to highlight how the aspects of complexity evolve and to illustrate some potential issues.
Critical Success Factors
The CSFs will evolve throughout the development. A fully completed WBS early in the development cycle allows complexity to be considered throughout the lifecycle, notwithstanding the issues associated with differentiating the phasing from the high-level WBS nodes.
Concept design
During this phase it is assumed that the contracting strategy is being agreed in tandem with the development of the overall concept.
Environmental constraints: Uncertainty and Ambiguity have been assigned as high due to funding uncertainties and low design maturity, which would lead to low confidence in outturn costs. Emergence, Non-linearity and Program-size are low at this early point in the project.
Development process: Both Uncertainty and Ambiguity have been assigned as medium, recognising that, though this is a very large and new project, much of the work will be undertaken using existing in-house processes. There will be interactions with Siemens as joint venture partner. Emergence, Non-linearity and Program-size are low at this early point in the project.
Organisation: Though AREVA will need to expand their workforce to accommodate the project, much of the organisation structure is established, as are the office locations. The organisational complexity relating to Uncertainty and Ambiguity at this point is only medium. Emergence and Non-linearity are low due to the low level of detail and the resulting interactions between functions and engineering disciplines. Program-size complexity is also low, as the organisation is still relatively small.
Contractual management: Due to there being few contractual relationships, Uncertainty and Ambiguity are considered low, while Emergence, Non-linearity and Program-size are very low at this early point in the project.
Stakeholders: The stakeholders at this early stage are the client and internal stakeholders only, so this is low. Changes in high-level requirements would have a relatively large effect for such an early point in the project, so have been adjusted to medium. Program-size complexity is very low due to there being few relationships.
Regulatory interfaces: The immaturity of some of the technology and the first-of-a-kind nature of the project mean the regulators’ responses are uncertain and ambiguous, especially at the beginning of the development. As such, Uncertainty, Ambiguity, Emergence and Non-linearity have been assigned medium. In many areas the regulators may lack essential competencies to effectively review and agree key principles
within the design, and this is especially true as this is the first nuclear build in Finland for several decades. Program-size complexity is very low at this early point in the project.
External Interfaces: Both Uncertainty and Ambiguity have been assigned as medium, and Emergence and Non-linearity low. Program-size complexity is very low at this early point in the project. Changes should be relatively easily anticipated and accommodated within an immature design.
Technology: Both Uncertainty and Ambiguity with regard to technology are high during the concept phase. Emergence, Non-linearity and Program-size are medium at this early point in the project.
System Integration: Changes to system integration could arise from the technology characteristics and will be very high before the technology matures. Emergence, Non-linearity and Program-size are medium at this early point in the project.
Detailed design
At the beginning of the Detailed Design phase it is assumed that scheme design is complete and that most issues regarding management of the design phase are becoming apparent.
Environmental constraints – There is still a degree of Uncertainty and Ambiguity, as the Final Investment Decision is generally dependent on a mature Detailed Design. Emergence, Non-linearity and Program-size are increasing, with each at medium.
Development process – Both Uncertainty and Ambiguity have been assigned low, recognising the embedding and increasing maturity of the project processes. Changes required to the development process will have a greater impact in terms of Emergence and Non-linearity, so these are given medium. Likewise, the process will be at the height of Program-size complexity nearing the end of Detailed Design.
Organisation – This aspect will still be evolving, as the nature of technical activities changes through design. The organisation will be at the height of Program-size complexity nearing the end of Detailed Design.
Contractual management – A number of contracts will start to be developed for specialised design and also for system implementation, making Uncertainty, Ambiguity, Emergence and Non-linearity medium. Contractual management activities will also become more numerous, making this high, with very high at the end of Detailed Design/beginning of implementation.
Stakeholders – Changes brought about by stakeholders will be stable, though changes in customer requirements at this stage could have a substantial effect.
Regulatory interfaces – Confidence in the technology and contractual approach should be increased, though changes brought about by regulatory changes and imposed constraints can have a substantial impact.
External Interfaces – The impact of external stakeholders stays relatively stable, though changes could have an increased impact.
Technology – Both Uncertainty and Ambiguity with regard to technology will be much reduced, at low, while impacts through Emergence and Non-linearity will be high. Technology will take more management as its scope is defined in more detail.
System Integration – Both Uncertainty and Ambiguity with regard to system integration will be much reduced, at low, while impacts through Emergence and Non-linearity will be high. Integration will take more management as its scope is defined in more detail.
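Qualitative ratings such as those above are easier to compare across phases if they are recorded on an ordinal scale. A minimal sketch; the theme and criterion names are abbreviated for the example and the 'high' threshold is an illustrative choice:

```python
# Sketch: record qualitative complexity ratings on an ordinal scale so
# that themes needing management attention can be flagged. The scale
# ordering matches the text; the threshold choice is illustrative.
SCALE = ["very low", "low", "medium", "high", "very high"]

detailed_design = {
    "Technology":       {"Uncertainty": "low", "Emergence": "high"},
    "Contractual mgmt": {"Uncertainty": "medium", "Program-size": "high"},
}

def flag(assessment, threshold="high"):
    """Return (theme, criterion) pairs rated at or above the threshold."""
    cut = SCALE.index(threshold)
    return [(theme, crit)
            for theme, crits in assessment.items()
            for crit, rating in crits.items()
            if SCALE.index(rating) >= cut]

print(flag(detailed_design))
```

Repeating the assessment at each phase boundary and diffing the flagged pairs gives a simple view of how the complexity profile is evolving.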
Critical Success Factors
The CSFs will evolve throughout the development, though thought should be given to putting them in place in advance of when they are particularly required, and to the phasing required. CSFs will be ranked and generally taken from the ‘high to very high’ rated CSFs from the full survey dataset (Appendix F). CSFs within the top three
Nick Brook MSc Safety-Critical Systems Engineering 9th January 2017 identified from each Complexity Theme will be shown in bold with the chosen CSFs in order of priority. The full dataset was chosen in preference to that from the aviation industry dataset due to its low sample size. Where the number of CSFs are overwhelming in number it is expected that these can be further ranked in importance. CSFs assessed as having the greatest impact will be chosen in preference to the others. Furthermore, it would also be expected that performance measures should be chosen to demonstrate the implementation and impact of CSFs. Examples of how these CSFs may be further tailored towards the Dreamliner project or otherwise supported are provided in italics with top three ranked CSFs identified in bold. Concept design At this point in the development lifecycle the emphasis is on managing uncertainty. At this stage ambiguity will also be high to the technology and there may be constraints of funding prior to a commitment to longterm investment. Particular Environmental constraint related CSFs are as follows: i. Adequate budget – project is rigorously planned and estimated to ensure budget is realistic. This is important to accommodate the high Ambiguity and Uncertainty and could use norms from other projects as well as the use of several estimating techniques in combination; ii. Composition of project team in terms of experience and capability – the reliance on in-house resource requires a concerted effort to secure suitably qualified and competent project team and a recruitment and training programme will need to be in place well in advance or the requirements for resource and expertise. A matrix of resource and expertise requirements against each function or discipline would allow progress to be assessed. Also there is the potential for the principles of MDM to be adapted to map resource and expertise against the process, assigning risk where there are shortfalls; iii. 
Competent and qualified project manager – as the project is an in-house endeavour it will need to be managed by suitably skilled project management personnel. A project of this size will need more than one project manager, and recruitment and training will be needed;
iv. Proactive risk management process – important for a first-of-a-kind design, and especially as AREVA are inexperienced in this type of undertaking. Again, this should be done to identify the areas of Ambiguity and Uncertainty within the WBS that have the greatest impact.
Development CSFs may be:
i. Clear, realistic development objectives – this supports the CSFs in Environmental Constraints and aims to reduce Ambiguity and address Uncertainty;
ii. A well understood and mature design review process is in place – it is important these are established early in the development lifecycle to ensure Ambiguity and Uncertainty are identified during, and at the beginning of, each phase of the project. This may delay advancement to the next stage if the maturity of the design is insufficient, i.e. has higher than acceptable Ambiguity or Uncertainty. A technology or process maturity Performance Measure could be used to form the basis for decision making.
The organisation would need to cope with the Uncertainty and Ambiguity relating to the immature technology, but also with the first-of-a-kind nature of the project and the reliance on in-house resources. It could be assigned the following CSFs:
i. Composition of development team in terms of experience and capability – as described in Environmental constraints;
ii. Re-use knowledge and experience from previous projects (lessons learned) – this may reduce Ambiguity through the reuse of previously successful processes or even elements of design. It should identify where issues have been encountered previously and indicate how they may be avoided.
Contractual management would be in its infancy and also minimal, so no CSFs have been assigned for Concept Design. Similarly, the management of stakeholders, regulatory interfaces and external interfaces will be disregarded in terms of CSFs at this stage of the development lifecycle.
Technology is one of the most important aspects of the development and, while choosing mature technologies is not an option, the simplification of the design should still be sought where possible to reduce uncertainty of outcome and to understand it wherever possible.
i. Pursuing as simple a design as possible – Requirements measurement is adopted, which should identify areas of outstanding Ambiguity. Supporting metadata could then be used to highlight Uncertainty and also significant risk;
ii. Test early, test often philosophy is used during development – TRL measurement is adopted; this would also reduce Uncertainty through demonstrating the design meets requirements;
iii. System element maturity is monitored – as a means of monitoring progress through the use of TRL measurement and showing where low maturity resides (Ambiguity and Uncertainty).
System integration is the other most important aspect, alongside technology:
i. Test early, test often philosophy is used during development – IRL measurement is adopted;
ii. System element maturity is monitored – IRL measurement is adopted.
Detailed Design
As the development progresses the emphasis changes from Uncertainty and Ambiguity to the impact of change through Emergence and Non-linearity. While reduction of the former is still important, it is now the effect of Emergence and Non-linearity that will be felt most keenly. Alongside this, Program-size complexity will also increase as the workload and the number of parallel and dependent activities increase. As such, the CSFs will become more about the plan and about managing change to control its impact.
Important too is managing the outputs of the design process, and methods of working to manage the process as a whole. Considering Environmental Constraints first, the following CSFs could be added to those from Concept Design:
i. Support from senior management – use of a development health type Performance Measure, which should help reduce impacts by facilitating timely intervention when it is needed and by ensuring adequate resource, technology and expertise are secured to combat Program-size complexity;
ii. Effective change management (project) – measurement of change control metadata is adopted to identify where areas of Uncertainty reside, but also to allow the direction of effort to reduce impacts (Non-linearity and Emergence);
iii. Proactive risk management process – risk management undertaken as a part of complexity management;
iv. Strong, appropriately detailed and realistic project plan kept up to date – use DSM on areas of risk and criticality to better manage Non-linearity and Emergence.
For the Development aspects some, if not all, of the following would be applicable to ensure changes are recognised early and readily assessed for impact; some obviously overlap in their scope. This aspect was of relatively low concern, though due to the inexperience of the team processes should be put in place to support a PDCA type methodology.
i. A well understood and mature design review process is in place – as described within Concept Design;
ii. Enhanced planning is applied against areas of criticality and uncertainty – use DSM on areas of risk and criticality to better manage Non-linearity and Emergence;
iii. Effective monitoring/control of requirements and development deliverables – to manage change and thus Non-linearity and Emergence;
The organisation would need to consider the technically challenging nature of the project:
i. Good leadership;
ii. Transparent definition of responsibilities – reduces Ambiguity and the time to resolution of issues leading to Non-linearity and Emergence;
iii. Composition of development team in terms of experience and capability – as described previously.
Contractual management will begin during Detailed Design and become more complex, relating to specialised design and the implementation phases of the project. It is important that the following are considered at the time of contract production:
i. Clearly understood contractual interfaces – Organisational DSMs are adopted;
ii. Good performance by suppliers/contractors/consultants;
iii. Effective monitoring/control – Development Scorecard approach is adopted.
Stakeholders have been considered as having a high impact on the development at this stage of the design. Changes in requirements can have a serious effect on the project lifecycle. Specific CSFs surround the effective and timely communication and management of stakeholders:
i. Client/user acceptance – Requirements measurement is adopted to reduce Ambiguity and Uncertainty;
ii. Decisions are agreed and documented – to reduce Ambiguity and Uncertainty;
iii. Early identification and management of conflicting interests – Requirements measurement is adopted to reduce Ambiguity and Uncertainty;
iv. Active management of client/user integration – MDMs adopted for critical systems.
Regulatory interfaces are also important, as small changes at this stage could have a dramatic impact:
i. Clear lines of communication with regulators – to reduce Ambiguity and Uncertainty and to identify areas of Non-linearity and Emergence early;
ii. Good relationship with regulators – aids effective communication;
iii. Sufficient time and resources allocated to manage regulators – aids effective communication.
Similarly, external interfaces should be closely managed:
i. Clear communication is established – to reduce Ambiguity and Uncertainty and to identify areas of Non-linearity and Emergence early;
ii. Clearly identified and understood external interfaces – to reduce Ambiguity and Uncertainty and to identify areas of Non-linearity and Emergence early;
iii. Defined process for managing external interfaces – aids the above.
The management of technology and system integration will be vital to overall success. Technology related CSFs would be as follows:
i. Test early, test often philosophy is used during development;
ii. Modelling and prototyping of system elements is used – to reduce Ambiguity and Uncertainty and to manage the effects of Non-linearity and Emergence early;
iii. System element maturity is monitored – as described previously.
It is recommended that system integration CSFs be similar to those for technology:
i. Test early, test often philosophy is used during development – as described previously;
ii. Modelling and prototyping of system elements is used – as described previously;
iii. System element maturity is monitored – as described previously.
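The 'system element maturity is monitored' CSFs for technology and system integration amount to a simple readiness-level check against a phase exit criterion. A minimal sketch, assuming a 1–9 TRL/IRL scale; the element names and levels are hypothetical:

```python
def maturity_shortfalls(levels, required):
    """Return the system elements whose readiness level (TRL or IRL) falls
    below the level required to exit the current phase -- i.e. where residual
    Ambiguity and Uncertainty still reside."""
    return {name: lvl for name, lvl in levels.items() if lvl < required}

# Hypothetical TRL readings for three system elements at a design review gate.
trl = {"reactor instrumentation": 4, "turbine island": 7, "control software": 5}
low_maturity = maturity_shortfalls(trl, required=6)
```

The shortfall set gives the design review a concrete basis for delaying advancement to the next stage, as suggested for the design review CSF above.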
Planning technique
The sheer size, scale and criticality of the project would lend itself to the use of a combination of Design Structure Matrices. These could be targeted against particular areas of development and interfaces, based on risks borne of the constraints of schedule and relative criticality, both technically and in relation to the overall project. From the Criticality Assessment, the two areas that deserve particular attention are technology and integration. A process architecture DSM could be used to focus on particular groups of activities from the WBS. This might particularly look at integration or assembly type activities and at dependencies between functions and disciplines. Full integration of traditional scheduling and DSMs would allow a seamless blend of the two techniques to be used, with areas of technology-centric development and integration activities being prime candidates for binary and numerical DSMs. Further Complexity Assessment of the WBS could be undertaken at the next lower level or levels to identify areas of particular complexity. From the project's background information, candidates for enhanced planning include the following:
· Instrumentation and control engineering;
· Design integration activities;
· Certification related activities, such as quality assurance, across the entire development.
Enhanced planning could be chosen in other areas, depending on the WBS and on the most recent Complexity Assessment findings. The scale and length of the development may justify specific tool support being developed.
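As a rough illustration of how a binary DSM supports this kind of planning, the sketch below classifies each pair of activities as sequential, parallel or coupled from their dependency entries; the activity names and dependencies are invented for the example. The resulting proportions are the kind of figure used as a 'plan complexity' measure:

```python
def classify_pair(dsm, i, j):
    """Classify the relationship between activities i and j in a binary DSM.
    dsm[r][c] == 1 means activity r depends on an output of activity c."""
    forward, backward = dsm[i][j], dsm[j][i]
    if forward and backward:
        return "coupled"      # interdependent: must be planned and iterated together
    if forward or backward:
        return "sequential"   # one activity feeds the other
    return "parallel"         # independent: may proceed concurrently

def pair_proportions(dsm):
    """Proportion of each pair type across the matrix."""
    n = len(dsm)
    pairs = n * (n - 1) // 2
    counts = {"sequential": 0, "parallel": 0, "coupled": 0}
    for i in range(n):
        for j in range(i + 1, n):
            counts[classify_pair(dsm, i, j)] += 1
    return {kind: c / pairs for kind, c in counts.items()}

# Invented example: A and B are coupled, C depends on A, D is independent.
dsm = [
    [0, 1, 0, 0],  # A depends on B
    [1, 0, 0, 0],  # B depends on A
    [1, 0, 0, 0],  # C depends on A
    [0, 0, 0, 0],  # D
]
```

Coupled pairs are the ones that drive Non-linearity and Emergence, so a rising coupled proportion is an early warning that the plan is becoming harder to control.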
Performance measurement
To meet the challenges of the technology and integration, the following measures could be combined into a Development Scorecard across the entire project:
· Conformity with the plan, consisting of:
o Earned Value Management:
§ Cost Performance Index (lagging);
§ Schedule Performance Index (lagging);
· System maturity, consisting of:
o Technology Readiness Level (lagging);
o Integration Readiness Level (lagging);
· Plan complexity, consisting of:
o Proportion of sequential, parallel, coupled and conditional activities within the DSM (leading);
· Requirements, consisting of:
o Realisation – relating to MOE, MOS, MOP and TPM (lagging);
o Execution – relating to selected requirements metadata (leading).
Such data would need to be trended over time rather than compared with data from across the entire project, and would be used where there are particular technical risks or evidence of poor project performance. This would provide a mix of leading and lagging measures for the purposes of monitoring, control and reporting.
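The two Earned Value Management indices in the scorecard are standard ratios of earned value to actual cost and to planned value. A minimal sketch; the month-end figures are invented for illustration:

```python
def cpi(earned_value, actual_cost):
    """Cost Performance Index = EV / AC; values below 1.0 signal cost overrun."""
    return earned_value / actual_cost

def spi(earned_value, planned_value):
    """Schedule Performance Index = EV / PV; values below 1.0 signal schedule slip."""
    return earned_value / planned_value

# Invented month-end figures (in cost units of budgeted work):
# EV = budgeted cost of work performed, AC = actual cost, PV = planned value.
ev, ac, pv = 800, 1000, 950
month_cpi = cpi(ev, ac)  # below 1.0: spending more than the work earned
month_spi = spi(ev, pv)  # below 1.0: behind the planned schedule
```

Both are lagging measures, which is why the scorecard pairs them with the leading DSM-derived and requirements-metadata measures; trending them over successive periods, as the text notes, matters more than any single reading.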
Risk management
The identification of development risks will not be undertaken in detail due to lack of data. It is not difficult to appreciate that there will be significant risks in the areas of contractual management, technology and integration. Detailed descriptions, risk responses and related information would naturally fall out of more detailed analysis and planning activities. The creation of a hierarchical Risk Breakdown Structure would be an important early activity in the formation of the risk management strategy. One suggestion for the high-level nodes (level 1) of the RBS could be 'schedule', 'resource', 'technical' and 'certification'. Level 2 nodes would include AREVA and contracting (below resource) and technology and integration (below technical). Certification could be divided amongst particular systems within the plant design. The importance of technology and integration could see them elevated to level 1 nodes, or else included under one or more of the other nodes.
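The suggested RBS can be recorded as a simple hierarchy. The sketch below uses the level 1 and level 2 nodes named above; the nested-dictionary representation and the traversal helper are illustrative assumptions:

```python
# Risk Breakdown Structure: level 1 nodes from the text, with the suggested
# level 2 nodes below 'resource' and 'technical'. Empty dicts are leaf nodes.
rbs = {
    "schedule": {},
    "resource": {"AREVA": {}, "contracting": {}},
    "technical": {"technology": {}, "integration": {}},
    "certification": {},
}

def leaves(node, path=()):
    """Yield the full path to each leaf node -- the points in the hierarchy
    where individual risks would be registered."""
    if not node:
        yield path
    for name, child in node.items():
        yield from leaves(child, path + (name,))
```

Elevating technology and integration to level 1, as the text suggests as an alternative, would simply mean moving those two sub-trees up to the top level of the dictionary.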
Comparing framework findings with project outcomes [134]
Construction began in 2005, with both the completion of commissioning and the beginning of operations forecast for 2009. First reports of delays came in 2006, due to quality control problems resulting from the poor control of sub-contractors inexperienced in nuclear projects. Blame was also apportioned to delays in the approval of documentation by the regulator, STUK, and the client, TVO. The delay of 12 months reported in May 2006 was increased to 18 months in December of the same year. In June 2007 the regulator reported a number of safety related 'design and manufacturing deficiencies' [131], and a further 12 months of delay were reported in August 2007, this time due to reinforcement of the reactor building and issues presenting documentation to the regulator. By September 2007 the delay had extended to two years, and it was further extended to three years in October 2008. Costs had increased by €1.5 billion, approximately 25% of the original budget. By May 2009 the delay stood at three and a half years, with the project 50% over the original budget; the cost of the plant was forecast at €5.3 billion in August 2009. By June 2010 the first date of operation was estimated to be the end of 2012, which had slipped to 2013 before the end of the year. By December 2011 the date of operation was set as August 2014, and by July 2012 this was amended to 2015. As of September 2014 the estimated cost was €8 billion, with a first date of operation of 2018. As of 2016 OL3 construction is still in progress and the expected operational date has not been revised. Between 51% and 75% of AREVA was given to the nuclear company EDF in an attempt to provide sufficient financing and expertise to complete OL3; both organisations are predominantly owned by the French government [135]. The project exhibited signs of being poorly planned, amply demonstrated by the frequent and rapid revisions of the completion date and outturn cost.
Both the planning inadequacies and the many quality issues are likely due to the lack of sufficiently experienced and suitably competent personnel to oversee such a large and complex development.
References
[126] TVO (no date). OL3. Available at: http://www.tvo.fi/OL3_3. Accessed on: 14th November 2016.
[127] Carbon Brief (2015). New nuclear: Finland's cautionary tale for the UK, 20 October 2015. Available at: https://www.carbonbrief.org/new-nuclear-finlands-cautionary-tale-for-the-uk. Accessed on: 14th November 2016.
[128] Beyond Nuclear (2009). European Pressurized Reactor (EPR). Available at: http://www.beyondnuclear.org/epr-reactor/. Accessed on: 14th November 2016.
[129] P Langley (2011). Paul Langley's Nuclear History Blog, April 24, 2011. Available at: https://nuclearhistory.wordpress.com/. Accessed on: 14th November 2016.
[130] WNA (2016). Mixed Oxide (MOX) Fuel, World Nuclear Association (updated August 2016). Available at: http://www.worldnuclear.org/information-library/nuclear-fuel-cycle/fuel-recycling/mixed-oxide-fuel-mox.aspx. Accessed on: 14th November 2016.
[131] IAEA (2012). Project Management in Nuclear Power Plant Construction: Guidelines and Experience, IAEA Nuclear Energy Series, Technical Reports. Available at: http://www-pub.iaea.org/MTCD/Publications/PDF/Pub1537_web.pdf. Accessed on: 14th November 2016.
[132] AREVA (no date). Olkiluoto 3 – Finland. Available at: http://www.areva.com/EN/operations-2389/olkiluoto-3-finland.html. Accessed on: 14th November 2016.
[133] Nuclear Power (no date). Conventional (Turbine) Island. Available at: http://www.nuclear-power.net/nuclear-powerplant/conventional-turbine-island/. Accessed on: 14th November 2014.
[134] University of Cape Town (no date). EPR (nuclear reactor), UCT Libraries. Available at: http://www.lib.uct.ac.za/sites/default/files/image_tool/images/281/we_frahn_library/Courses/Theory_and_Design_of_Nuclear_Reactors/Course_Material/EPR.pdf. Accessed on: 14th November 2016.
[135] F Gatte (2016). Where did it all go wrong for French nuclear giant AREVA?, 15-02-16, Energy and Carbon. Available at: http://energyandcarbon.com/where-did-it-all-go-wrong-for-french-nuclear-giant-areva/. Accessed on: 14th November 2016.
Complexity Assessment - Concept Design
Complexity Assessment - Detailed Design
Appendix R Full questionnaire results per respondent
/+* +, * 4+ +
3 .
+***
3 .
>*** '
$** . ** +
$** . ** +
5
7*+ *0 4+** *
3 .
0. .
+***
3 .
=. ** +**
$*4+ . * * *0
15 * * 05+***
+
3 .
; 2 * +
5
*+0 +*** +
5
) '5**
3 .
*0 +,+*
+ *++*' *' * *
-. .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
3 .
/ * *
3 .
>*.+
-. .
**
3 .
*.+5. *
3 .
* *
-. .
/ * 5. *
-. .
) ** *
3 .
7 0 **
3 .
10 '*
3 .
4+ *2
-. .
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
3 .
$0 ' .
3 .
; *.*0+++
'
3 .
=. 0*5*+ 2 *
$ * #
+ - $
,
*+*2+*
3 .
? * *+
3 .
*. ** *+*
3 .
$*
*+*+***
3 .
/++** *.*
3 .
/ *+0 +** +*
/ **
3 . 3 .
=** +.*+.**
0+
3 .
**
++
** ***
3 .
).?*+
+
3 .
)* *
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
-. .
3 *,0*
*
*
3 .
=. 0*5*+ 2 *
3 .
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
-. .
? * *+
3 .
$*
*+*+***
3 .
/ *+0 +** +*
3 .
D * +*
-. .
=** +.*+.**
0+
5. .
* *
3 .
++
** ***
3 .
+**
4 *
3 .
)* *
**. ?*+ *5.4 *
('(
("'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 * / # * + +4. . ++ *2 ++ 0* .*. / #
D'1 * .2*
(0
!"! )$*@ # ) !"%#(#!"1 $ ) !"%## 1 " #!!# #( %&
(& &E"& !
5 =.**++++,* 00
+*+ *
A* - . 2* / #
7 0 0 +,* 5.**
+
,*
1++ .** * * + + +, * 5
*
D'1
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
D'1
* +,*
$,
; 4$7;*
!! 0** .+,+**
D'1
* 0**?+****.** +**
.
2** 2 ? *&
D'1
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
D'1
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
D'1
* .
.+, *
=. 7 **0=7
D'1
* **
. +, *
; 7 **0;7
D'1
* **
.+, *
)*7 **0)7
D'1
) $
, ,$$ ,
6
(E'(
&'!%
*** '
-. .
$** . ** +
3 .
$** . ** +
3 .
7*+ *0 4+** *
-. .
0. .
+***
-. .
=. ** +**
-. .
$*4+ . * * *0
15 * * 05+***
+
-. .
; 2 * +
*+0 +*** +
*** '
-. .
$** . ** +
-. .
$** . ** +
-. .
7*+ *0 4+** *
3 .
0. .
+***
-. .
=. ** +**
-. .
$*4+ . * * *0
3 .
15 * * 05+***
+
-. .
; 2 * +
-. .
*+0 +*** +
) '5**
-. .
*0 +,+*
3 .
+ *++*' *' * *
3 .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
-. .
/ * *
-. .
>*.+
3 .
**
-. .
*.+5. *
3 .
* *
-. .
/ * 5. *
-. .
) ** *
-. .
7 0 **
3 .
10 '*
-. .
4+ *2
-. .
/ *.*. .
/; +,*.*
-. .
@** * *
*.*
-. .
$0 ' .
; *.*0+++
'
-. .
=. 0*5*+ 2 *
5
$ * #
+ - $
,
*+*2+*
3 .
? * *+
3 .
*. ** *+*
-. .
$*
*+*+***
5
/++** *.*
-. .
/ *+0 +** +*
/ **
-. . 3 .
=** +.*+.**
0+
3 .
**
++
** ***
3 .
).?*+
+
3 .
)* *
-. .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
3 .
$ * #
+ - $
$0 ' .
3 *,0*
*
*
-. .
=. 0*5*+ 2 *
5
2 $ * #
+ - $
3 $ / *4 *
-. .
/ ***.
-. .
? * *+
3 .
$*
*+*+***
5
/ *+0 +** +*
3 . 3 .
D * +*
3 .
=** +.*+.**
0+
5. .
* *
3 .
++
** ***
3 .
+**
4 *
-. .
)* *
-. .
**. ?*+ *5.4 *
3 .
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 * / #
'(
3 . < ?*. .?++,** + .$.0* * 5..*+* .+,
**+++ +++*&&
*&
"'(
* + +4. . ++ *2 ++ 0* .*. / #
3 . 1 ?*+,*5. .**+ &.50 ++ *
.5.*?
* . . 5. 0
A* - . 2*
1++ .** * * + + +, * 5
* / #
$,.. 5*
.** *!. .? 0 !5?0 9 +** .** / #
* +,*
$,
; 4$7;*
!! 0** .+,+** / #
< .5 + + * *
&D 5. **
(
!"! # *1 *! !"%#"#($ $ *1 *! !"(# # $ " #!#!# %&
%"& %& ! &!%
&'!%
$ ** )0*
3 . 1 * *. .?+ +,&=..** +,' .*. .*H 3 . 1 *
. . *** * ++ ?* !! N+ * *9 +4 .+,
* 0**?+****.** +**
.
2** 2 ? *&
3 .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, & / #
3 .
* 2 *+***.* 2 ***++00 0 .**.*5 *.*& / #
=.**0* *+5..* 2 .
) .** * &*** '
3 .
/*+,,0*
-. .
$** . ** +
3 .
12
-. .
$** . ** +
3 .
)+ + * ***
-. .
7*+ *0 4+** *
3 .
)++ *
-. .
0. .
+***
3 .
)
+,*+ *'.+
-. .
=. ** +**
3 .
$*
5 -. .
$*4+ . * * *0
3 .
/+ 2 +, /+* +, * 4+ +
3 .
15 * * 05+***
+
3 .
) '5**
-. .
; 2 * +
3 .
*0 +,+*
3 .
6
** +*
3 .
0+ +,
3 .
*+0 +*** +
*** '
3 .
$** . ** +
3 .
$** . ** +
7*+ *0 4+** *
-. .
0. .
+***
=. ** +**
$*4+ . * * *0
3 .
15 * * 05+***
+
3 .
; 2 * +
D'1
*+0 +*** +
) '5**
*0 +,+*
3 .
*** '
3 .
$** . ** +
$** . ** +
3 .
7*+ *0 4+** *
3 .
0. .
+***
=. ** +**
3 .
$*4+ . * * *0
-. .
15 * * 05+***
+
-. .
; 2 * +
3 .
*+0 +*** +
) '5**
3 .
*0 +,+*
+ *++*' *' * *
-. .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
/ * *
3 .
>*.+
-. .
**
3 .
*.+5. *
3 .
* *
3 .
/ * 5. *
3 .
) ** *
7 0 **
-. .
10 '*
4+ *2
/ *.*. .
5
/; +,*.*
@** * *
*.*
5
$0 ' .
3 .
; *.*0+++
'
=. 0*5*+ 2 *
$ * #
+ - $
,
*+*2+*
? * *+
3 .
*. ** *+*
3 .
$*
*+*+***
/++** *.*
/ *+0 +** +*
3 .
/ **
3 . -. .
=** +.*+.**
0+
-. .
**
++
** ***
3 .
).?*+
+
)* *
3 .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
3 .
3 *,0*
*
*
=. 0*5*+ 2 *
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
3 .
? * *+
-. .
$*
*+*+***
3 .
/ *+0 +** +*
-. .
D * +*
-. .
=** +.*+.**
0+
5. .
* *
3 .
++
** ***
3 .
+**
4 *
)* *
**. ?*+ *5.4 *
3 .
E'(
E"'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
* + +4. . ++ *2 ++ 0* .*.
3 .
A* - . 2*
1++ .** * * + + +, * 5
*
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
5
* +,*
$,
; 4$7;*
!! 0** .+,+**
5
* 0**?+****.** +**
.
2** 2 ? *&
3 .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
5
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
* .
.+, *
=. 7 **0=7
* **
. +, *
; 7 **0;7
-. .
) $
, ,$$ ,
6
EE'(
(0
!"! # 1 *!% !"!##!$ $ 1 *!% !"# #!$ " #!!#%# %&
!(&"& (& E
&'!%
"
**
E%'(
) $ * #
+ , - $
$
>*** '
$** . ** +
3 .
$** . ** +
-. .
7*+ *0 4+** *
/*+,,0*
3 .
12
)+ + * ***
5
)++ *
3 .
)
+,*+ *'.+
3 .
$*
5
/+ 2 +,
3 .
/+* +, * 4+ +
3 . ; 2 * +
) '5**
*+0 +*** +
5
*0 +,+*
3 .
+ *++*' *' * *
3 .
/ $ * #
+ , - $
>*.+
3 .
**
5
*.+5. *
5
* *
-. .
/ * 5. *
-. .
) ** *
-. .
7 0 **
5
10 '*
-. .
4+ *2
/ *.*. .
3 .
/; +,*.*
-. .
@** * *
*.*
$0 ' .
3 .
; *.*0+++
'
3 .
=. 0*5*+ 2 *
$ * #
+ - $
,
*+*2+*
3 .
? * *+
*. ** *+*
$*
*+*+***
5
/++** *.*
/ *+0 +** +*
5
/ **
3 . 3 .
=** +.*+.**
0+
**
++
** ***
3 .
).?*+
+
3 .
)* *
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
5
3 *,0*
*
*
3 .
=. 0*5*+ 2 *
2 $ * #
+ - $
3 $ / *4 *
-. .
/ ***.
-. .
? * *+
3 .
$*
*+*+***
5
/ *+0 +** +*
D * +*
=** +.*+.**
0+
5. .
* *
++
** ***
+**
4 *
)* *
**. ?*+ *5.4 *
%'(
%'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
* + +4. . ++ *2 ++ 0* .*.
A* - . 2*
5
1++ .** * * + + +, * 5
*
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
5
* +,*
$,
; 4$7;*
!! 0** .+,+**
* 0**?+****.** +**
.
2** 2 ? *&
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
-. .
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
* .
.+, *
=. 7 **0=7
5
* **
. +, *
; 7 **0;7
5
* **
.+, *
)*7 **0)7
5
) $
, ,$$ ,
6
%'(
(1
!"! # *1 *! !"##%1 $ $ *1 *! !"##"1 " #!!# # % " # %&
%%& ! & & %&
&'!%
"E
/ *
)
%"'(
) $ * #
+ , - $
$ )
* ***'* ** +,
-. .
/*+,,0*
-. .
12
-. .
)+ + * ***
)++ *
-. .
)
+,*+ *'.+
3 .
$*
3 .
/+ 2 +,
-. .
/+* +, * 4+ +
-. .
+***
-. .
>*** '
3 .
$** . ** +
3 .
$** . ** +
-. .
7*+ *0 4+** *
3 .
0. .
+***
-. .
=. ** +**
3 .
$*4+ . * * *0
-. .
15 * * 05+***
+
-. .
; 2 * +
-. .
*+0 +*** +
5
) '5**
-. .
*0 +,+*
3 .
+ *++*' *' * *
-. .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
3 .
/ * *
-. .
>*.+
-. .
**
5
*.+5. *
3 .
* *
-. .
/ * 5. *
-. .
) ** *
3 .
7 0 **
-. .
10 '*
-. .
4+ *2
-. .
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
3 .
$0 ' .
-. .
; *.*0+++
'
3 .
=. 0*5*+ 2 *
3 .
$ * #
+ - $
,
*+*2+*
? * *+
3 .
*. ** *+*
3 .
$*
*+*+***
/++** *.*
-. .
/ *+0 +** +*
5
/ **
-. . -. .
=** +.*+.**
0+
3 .
**
++
** ***
).?*+
+
3 .
)* *
3 .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
-. .
3 *,0*
*
*
-. .
=. 0*5*+ 2 *
2 $ * #
+ - $
3 $ / *4 *
-. .
/ ***.
-. .
? * *+
-. .
$*
*+*+***
/ *+0 +** +*
3 .
D * +*
=** +.*+.**
0+
5. .
* *
3 .
++
** ***
3 .
+**
4 *
3 .
)* *
3 .
**. ?*+ *5.4 *
5 *,$ * $
,
) $
, ,$$ ,
6
('(
( '(
(2
!"! # *1 *! !"#!#"$ # $ *1 *! !"# E#$ " #!!#"#% %&
!%&E& %&" %&
&'!%
) $ * #
+ , - $
$ )
* ***'* ** +,
-. .
/*+,,0*
-. .
12
-. .
)+ + * ***
)++ *
3 .
)
+,*+ *'.+
3 .
"E
$*
+ *++*' *' * *
-. .
* *
-. .
7 *+?2 +**
-. .
10 '*
3 .
/ * *
-. .
4+ *2
/ *
/ *.*. .
3 .
/ *2 ?
-. .
/; +,*.*
@** * *
*.*
; *.*0+++
'
; 2
-. .
*+0 +**
*.+5. *
3 .
/ * 5. *
-. .
) ** *
-. .
7 0 **
3 .
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 * / # * + +4. . ++ *2 ++ 0* .*. / #
3 . *9
3 . + * /+4 3*+0< F
$**./)**@*5.
& * *
A* - . 2* / #
)+* ++ **
*
$ * #
+ - $
,
1++ .** * * + + +, * 5
* / #
$0 ' .
3 .
=. 0*5*+ 2 *
3 .
? * *+
3 .
$*
*+*+***
3 .
/ *+0 +** +*
=** +.*+.**
0+
3 .
++
** ***
)* *
$ * #
+ - $
$0 ' .
3 .
=. 0*5*+ 2 *
3 .
? * *+
3 .
$*
*+*+***
3 .
/ *+0 +** +*
=** +.*+.**
0+
3 .
++
** ***
)* *
$,.. 5*
.** *!. .? 0 !5?0 9 +** .** / # * +,*
$,
; 4$7;*
!! 0** .+,+** / # * 0**?+****.** +**
.
2** 2 ? *& / # * .* *. ** 2 ** 0 *** )* $ =. $ *
.+, & / # * 2 *+***.* 2 ***++00 0 .**.*5 *.*& / # * .
.+, *
=. 7 **0=7 / # * **
. +, *
; 7 **0;7 / # * **
.+, *
)*7 **0)7 / #
(('(
*9 +4
D** +0 *9 +4 3 . D+* .
* .
* *
=.*+.+, + *.* 3 .
1 3 . + ** * +0 * 3 . + ** * +0 * 3 . + ** * +0 *
!!'(
) $
, ,$$ ,
6
*.* * ***, 1*
()4
!"! # =.*1 * !"#E#%$ $ =.*1 * !"#!%#!E$ " #!!# !#( %&
%"&%%&"& !
&'!%
!'(
"
= *+
$
! '(
) $ * #
+ , - $
$ )
* ***'* ** +,
3 .
/*+,,0*
3 .
12
-. .
)+ + * ***
)++ *
3 .
)
+,*+ *'.+
$*
3 .
/+ 2 +,
3 .
/+* +, * 4+ +
-. .
+***
3 .
>*** '
-. .
$** . ** +
-. .
$** . ** +
-. .
7*+ *0 4+** *
3 .
0. .
+***
-. .
=. ** +**
-. .
$*4+ . * * *0
3 .
15 * * 05+***
+
-. .
; 2 * +
3 .
*+0 +*** +
3 .
) '5**
3 .
*0 +,+*
3 .
+ *++*' *' * *
3 .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
/ * *
>*.+
3 .
**
*.+5. *
3 .
* *
3 .
/ * 5. *
3 .
) ** *
3 .
7 0 **
3 .
10 '*
3 .
4+ *2
/ *.*. .
/; +,*.*
@** * *
*.*
3 .
$0 ' .
; *.*0+++
'
=. 0*5*+ 2 *
$ * #
+ - $
,
*+*2+*
? * *+
3 .
*. ** *+*
3 .
$*
*+*+***
3 .
/++** *.*
3 .
/ *+0 +** +*
/ **
3 . 3 .
=** +.*+.**
0+
**
++
** ***
3 .
).?*+
+
)* *
3 .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
3 .
$ * #
+ - $
$0 ' .
3 .
3 *,0*
*
*
3 .
=. 0*5*+ 2 *
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
3 .
? * *+
3 .
$*
*+*+***
3 .
/ *+0 +** +*
D * +*
3 .
=** +.*+.**
0+
5. .
* *
++
** ***
3 .
+**
4 *
3 .
)* *
3 .
**. ?*+ *5.4 *
3 .
!E'(
!%'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
3 .
* + +4. . ++ *2 ++ 0* .*.
3 .
A* - . 2*
1++ .** * * + + +, * 5
*
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
3 .
* +,*
$,
; 4$7;*
!! 0** .+,+**
* 0**?+****.** +**
.
2** 2 ? *&
3 .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
* .
.+, *
=. 7 **0=7
-. .
* **
. +, *
; 7 **0;7
-. .
* **
.+, *
)*7 **0)7
-. .
) $
, ,$$ ,
6
!('(
()
!"! # @1 * !" #!E#1 $ @1 * !" #"# " #!!# %# %&
"& (&&"
1
&'!%
"
$ ** )0*
)
!'(
) $ * #
+ , - $
$ )
* ***'* ** +,
-. .
/*+,,0*
-. .
12
-. .
)+ + * ***
)++ *
-. .
)
+,*+ *'.+
-. .
$*
/+ 2 +,
3 .
/+* +, * 4+ +
-. .
) '5**
-. .
*0 +,+* 6
** +*
>*** '
-. .
$** . ** +
-. .
$** . ** +
-. .
7*+ *0 4+** *
3 .
0. .
+***
-. .
=. ** +**
-. .
$*4+ . * * *0
3 .
15 * * 05+***
+
-. .
; 2 * +
-. .
*+0 +*** +
5
+ *++*' *' * *
-. .
/ $ * #
+ , - $
>*.+
-. .
0 $ * #
+ - $
7 *+?2 +**
-. .
/ * *
-. .
**
-. .
*.+5. *
3 .
/ * 5. *
) ** *
7 0 **
5
'(
"'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
5
* + +4. . ++ *2 ++ 0* .*.
5
A* - . 2*
1++ .** * * + + +, * 5
*
3 .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
-5
* +,*
$,
; 4$7;*
!! 0** .+,+**
-5
* 0**?+****.** +**
.
2** 2 ? *&
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
5
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
* .
.+, *
=. 7 **0=7
-5
* **
. +, *
; 7 **0;7
-5
* **
.+, *
)*7 **0)7
-5
) $
, ,$$ ,
6
E'(
()
!"! # @1 * !""##1 $ @1 * !""## 1 " #!!#%#( %&
!(& ! &"%&%%
&'!%
A*
$
%'(
) $ * #
+ , - $
$ )
* ***'* ** +,
-. .
/*+,,0*
-. .
12
-. .
)+ + * ***
-. .
)++ *
-. .
)
+,*+ *'.+
-. .
$*
5
/+ 2 +,
-. .
/+* +, * 4+ +
-. .
+***
>*** '
$** . ** +
$** . ** +
7*+ *0 4+** *
3 .
0. .
+***
3 .
=. ** +**
-. .
$*4+ . * * *0
3 .
15 * * 05+***
+
-. .
; 2 * +
3 .
*+0 +*** +
) '5**
-. .
*0 +,+*
+ *++*' *' * *
-. .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
3 .
/ * *
3 .
>*.+
-. .
**
*.+5. *
3 .
* *
-. .
/ * 5. *
3 .
) ** *
3 .
7 0 **
3 .
10 '*
-. .
4+ *2
-. .
/ *.*. .
3 .
/; +,*.*
-. .
@** * *
*.*
3 .
$0 ' .
3 .
; *.*0+++
'
=. 0*5*+ 2 *
$ * #
+ - $
,
*+*2+*
3 .
? * *+
*. ** *+*
-. .
$*
*+*+***
3 .
/++** *.*
-. .
/ *+0 +** +*
/ **
-. . -. .
=** +.*+.**
0+
-. .
**
++
** ***
3 .
).?*+
+
3 .
)* *
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
3 .
3 *,0*
*
*
-. .
=. 0*5*+ 2 *
2 $ * #
+ - $
3 $ / *4 *
-. .
/ ***.
-. .
? * *+
$*
*+*+***
3 .
/ *+0 +** +*
-. .
D * +*
3 .
=** +.*+.**
0+
5. .
* *
3 .
++
** ***
3 .
+**
4 *
3 .
)* *
**. ?*+ *5.4 *
'(
'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
5
* + +4. . ++ *2 ++ 0* .*.
5
A* - . 2*
5
1++ .** * * + + +, * 5
*
5
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
5
* +,*
$,
; 4$7;*
!! 0** .+,+**
5
* 0**?+****.** +**
.
2** 2 ? *&
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
* .
.+, *
=. 7 **0=7
5
* **
. +, *
; 7 **0;7
5
* **
.+, *
)*7 **0)7
5
) $
, ,$$ ,
6
()
!"! # 1 * !"#(# 1 # $ 1 * !"# # 1 " #!!# #! %&
!&!& &" %&
&'!%
*** '
3 .
$** . ** +
3 .
$** . ** +
3 .
7*+ *0 4+** *
5
0. .
+***
-. .
=. ** +**
3 .
$*4+ . * * *0
3 .
15 * * 05+***
+
3 .
; 2 * +
-. .
*+0 +*** +
5
) '5**
3 .
*0 +,+*
+ *++*' *' * *
-. .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
3 .
/ * *
-. .
>*.+
-. .
**
5
*.+5. *
3 .
* *
3 .
/ * 5. *
3 .
) ** *
3 .
7 0 **
-. .
10 '*
3 .
4+ *2
3 .
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
5
$0 ' .
-. .
; *.*0+++
'
-5
=. 0*5*+ 2 *
-. .
$ * #
+ - $
,
*+*2+*
5
? * *+
3 .
*. ** *+*
5
$*
*+*+***
-. .
/++** *.*
3 .
/ *+0 +** +*
-5
/ **
5
=** +.*+.**
0+
-5
**
++
** ***
-5
).?*+
+
5
)* *
-5
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
5
$ * #
+ - $
$0 ' .
-. .
3 *,0*
*
*
-. .
=. 0*5*+ 2 *
-. .
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
3 .
? * *+
3 .
$*
*+*+***
-. .
/ *+0 +** +*
-5 -5
D * +*
3 .
=** +.*+.**
0+
5. .
* *
3 .
++
** ***
-5
+**
4 *
3 .
)* *
-5
**. ?*+ *5.4 *
5
'(
'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
5
* + +4. . ++ *2 ++ 0* .*.
A* - . 2*
1++ .** * * + + +, * 5
*
3 .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
D'1
* +,*
$,
; 4$7;*
!! 0** .+,+**
D'1
* 0**?+****.** +**
.
2** 2 ? *&
D'1
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
D'1
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
D'1
* .
.+, *
=. 7 **0=7
D'1
* **
. +, *
; 7 **0;7
D'1
* **
.+, *
)*7 **0)7
D'1
) $
, ,$$ ,
6
'(
())
!"! # =*1 * !" #!"#1 $ =*1 * !" #(# " #!!# #% %&
&%!& &
1
&'!%
1
F* **
'(
) $ * #
+ , - $
$ )
* ***'* ** +,
3 .
/*+,,0*
3 .
12
-. .
)+ + * ***
3 .
)++ *
-. .
)
+,*+ *'.+
-. .
$*
/+ 2 +,
-. .
/+* +, * 4+ +
3 .
+***
>*** '
3 .
$** . ** +
3 .
$** . ** +
7*+ *0 4+** *
3 .
0. .
+***
3 .
=. ** +**
-. .
$*4+ . * * *0
15 * * 05+***
+
3 .
; 2 * +
-. .
*+0 +*** +
) '5**
3 .
*0 +,+*
3 .
*** '
3 .
$** . ** +
$** . ** +
3 .
7*+ *0 4+** *
-. .
0. .
+***
3 .
=. ** +**
5
$*4+ . * * *0
5
15 * * 05+***
+
5
; 2 * +
5
*+0 +*** +
5
) '5**
-. .
*0 +,+*
3 .
+ *++*' *' * *
3 .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
3 .
/ * *
>*.+
-. .
**
3 .
*.+5. *
-. .
* *
3 .
/ * 5. *
3 .
) ** *
7 0 **
3 .
10 '*
3 .
4+ *2
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
-. .
$0 ' .
; *.*0+++
'
3 .
=. 0*5*+ 2 *
5
$ * #
+ - $
,
*+*2+*
-. .
? * *+
*. ** *+*
3 .
$*
*+*+***
/++** *.*
-. .
/ *+0 +** +*
3 .
/ **
3 .
=** +.*+.**
0+
**
++
** ***
-. .
).?*+
+
3 .
)* *
-. .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
3 .
$ * #
+ - $
$0 ' .
3 .
3 *,0*
*
*
3 .
=. 0*5*+ 2 *
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
3 .
? * *+
3 .
$*
*+*+***
/ *+0 +** +*
-. . 3 .
D * +*
-. .
=** +.*+.**
0+
5. .
* *
-. .
++
** ***
3 .
+**
4 *
)* *
3 .
**. ?*+ *5.4 *
E'(
%'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
5
* + +4. . ++ *2 ++ 0* .*.
A* - . 2*
1++ .** * * + + +, * 5
*
3 .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
* +,*
$,
; 4$7;*
!! 0** .+,+**
D'1
* 0**?+****.** +**
.
2** 2 ? *&
5
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
* .
.+, *
=. 7 **0=7
D'1
* **
. +, *
; 7 **0;7
* **
.+, *
)*7 **0)7
) $
, ,$$ ,
6
('(
()/
!"! # =*1 * !""#!# 1 $ =*1 * !""# !#1 " #!!#!# %&
% & & &( %&
&'!%
/ *
)
!'(
) $ * #
+ , - $
$ )
* ***'* ** +,
3 .
/*+,,0*
-. .
12
3 .
)+ + * ***
)++ *
3 .
)
+,*+ *'.+
$*
/+ 2 +,
-. .
/+* +, * 4+ +
+***
>*** '
$** . ** +
$** . ** +
7*+ *0 4+** *
-. .
0. .
+***
=. ** +**
$*4+ . * * *0
3 .
15 * * 05+***
+
; 2 * +
5
*+0 +*** +
) '5**
*0 +,+*
3 .
+ *++*' *' * *
3 .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
/ * *
>*.+
-. .
**
3 .
*.+5. *
* *
/ * 5. *
) ** *
3 .
7 0 **
3 .
10 '*
3 .
4+ *2
3 .
/ *.*. .
/; +,*.*
3 .
@** * *
*.*
$0 ' .
5
; *.*0+++
'
5
=. 0*5*+ 2 *
3 .
$ * #
+ - $
,
*+*2+*
-. .
? * *+
3 .
*. ** *+*
$*
*+*+***
/++** *.*
/ *+0 +** +*
3 .
/ **
3 .
=** +.*+.**
0+
**
++
** ***
).?*+
+
)* *
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
5
$ * #
+ - $
$0 ' .
3 *,0*
*
*
=. 0*5*+ 2 *
2 $ * #
+ - $
3 $ / *4 *
/ ***.
? * *+
$*
*+*+***
/ *+0 +** +*
D * +*
5
=** +.*+.**
0+
5. .
* *
5
++
** ***
+**
4 *
)* *
3 .
**. ?*+ *5.4 *
'(
"'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
5
* + +4. . ++ *2 ++ 0* .*.
3 .
A* - . 2*
1++ .** * * + + +, * 5
*
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
* +,*
$,
; 4$7;*
!! 0** .+,+**
5
* 0**?+****.** +**
.
2** 2 ? *&
3 .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
* .
.+, *
=. 7 **0=7
* **
. +, *
; 7 **0;7
* **
.+, *
)*7 **0)7
) $
, ,$$ ,
6
E'(
()0
!"! #=*1 * !"%# #$ # $ $ =*1 * !"%##$ " #!!#%# %&
( &!&"&(%
&'!%
"
$
%'(
) $ * #
+ , - $
$
. $ * #
+ - $
,
12
-. .
+***
3 .
)
+,*+ *'.+
-. .
>*** '
-. .
$*
$** . ** +
-. .
/+ 2 +,
3 .
7*+ *0 4+** *
3 .
/+* +, * 4+ +
0. .
+***
-. .
) '5**
3 .
*0 +,+*
6
** +*
0+ +,
3 .
7*+, * +
-. .
)
+++ *+,+
++
-. .
3**
=. ** +**
3 .
$*4+ . * * *0
3 .
15 * * 05+***
+
-. .
; 2 * +
3 .
*+0 +*** +
*.+5. *
-. .
* *
-. .
/ * 5. *
-. .
) ** *
-. .
7 0 **
-. .
10 '*
-. .
4+ *2
-. .
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
3 .
$ * #
+ - $
,
; *.*0+++
'
$ * #
+ - $
*+*2+*
3 .
*. ** *+*
/++** *.*
-. .
/ **
-. .
**
-. .
).?*+
+
5
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC 3 *,0*
*
*
=. 0*5*+ 2 *
? * *+
3 .
$*
*+*+***
-. .
/ *+0 +** +*
-. .
=** +.*+.**
0+
3 .
++
** ***
3 .
)* *
-. .
-. .
A* - . 2*
3 .
1++ .** * * + + +, * 5
*
3 .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
* +,*
$,
; 4$7;*
!! 0** .+,+**
3 .
* 0**?+****.** +**
.
2** 2 ? *&
-. .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
-. .
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
5 *,$ * $
,
2 $ * #
+ - $
3 $ / *4 *
-. .
/ ***.
-. .
D * +*
-. .
5. .
* *
-. .
+**
4 *
-. .
**. ?*+ *5.4 *
-. .
"'(
"'(
) $
, ,$$ ,
6
()1
!"! #=*1 * !"(#!#E$ # $ $ =*1 * !"!#!%#%$ " #!!# E# %&
"&"& &(
&'!%
"
+ *++*' *' * *
-. .
) * '* *+5.
0+
-. .
/ *;= *+
-. .
7 *+?2 +**
-. .
1** * .*. ..+**
-. .
/ * *
-. .
/ $ * #
+ , - $
>*.+
-. .
**
3 .
+ *++*' *' * *
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
5
/ * *
>*.+
3 .
**
3 .
*.+5. *
5
* *
/ * 5. *
5
) ** *
5
7 0 **
5
10 '*
4+ *2
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
5
$0 ' .
; *.*0+++
'
=. 0*5*+ 2 *
$ * #
+ - $
,
*+*2+*
5
? * *+
*. ** *+*
5
$*
*+*+***
/++** *.*
5
/ *+0 +** +*
5
/ **
5 5
=** +.*+.**
0+
5
**
++
** ***
5
).?*+
+
)* *
5
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
5
3 *,0*
*
*
=. 0*5*+ 2 *
5
2 $ * #
+ - $
3 $ / *4 *
5
/ ***.
5
? * *+
$*
*+*+***
5
/ *+0 +** +*
5 5
D * +*
5
=** +.*+.**
0+
5. .
* *
5
++
** ***
5
+**
4 *
5
)* *
5
**. ?*+ *5.4 *
5
E('(
%!'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
5
* + +4. . ++ *2 ++ 0* .*.
5
A* - . 2*
-5
1++ .** * * + + +, * 5
*
5
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
* +,*
$,
; 4$7;*
!! 0** .+,+**
5
* 0**?+****.** +**
.
2** 2 ? *&
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
* .
.+, *
=. 7 **0=7
* **
. +, *
; 7 **0;7
* **
.+, *
)*7 **0)7
5
) $
, ,$$ ,
6
%'(
(.4
!"! # *1 * !"##E1 # $ *1 * !" ##1 " #!!#%# %&
&&!"& %
&'!%
"
**
$
% '(
) $ * #
+ , - $
$ )
* ***'* ** +,
-. .
/*+,,0*
3 .
12
3 .
)+ + * ***
)++ *
)
+,*+ *'.+
3 .
$*
5
/+ 2 +,
-. .
/+* +, * 4+ +
3 .
+***
5
>*** '
5
$** . ** +
5
$** . ** +
7*+ *0 4+** *
3 .
0. .
+***
=. ** +**
3 .
$*4+ . * * *0
-. .
15 * * 05+***
+
-. .
; 2 * +
3 .
*+0 +*** +
) '5**
*0 +,+*
+ *++*' *' * *
3 .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
3 .
/ * *
-. .
>*.+
-. .
**
3 .
*.+5. *
-. .
* *
-. .
/ * 5. *
-. .
) ** *
-. .
7 0 **
-. .
10 '*
3 .
4+ *2
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
$0 ' .
-. .
; *.*0+++
'
5
=. 0*5*+ 2 *
-. .
$ * #
+ - $
,
*+*2+*
5
? * *+
*. ** *+*
$*
*+*+***
3 .
/++** *.*
3 .
/ *+0 +** +*
3 .
/ **
3 . 3 .
=** +.*+.**
0+
3 .
**
++
** ***
3 .
).?*+
+
)* *
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
-. .
3 *,0*
*
*
5
=. 0*5*+ 2 *
3 .
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
3 .
? * *+
$*
*+*+***
/ *+0 +** +*
3 .
D * +*
=** +.*+.**
0+
5. .
* *
++
** ***
3 .
+**
4 *
)* *
3 .
**. ?*+ *5.4 *
3 .
%E'(
%%'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
* + +4. . ++ *2 ++ 0* .*.
5
A* - . 2*
5
1++ .** * * + + +, * 5
*
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
3 .
* +,*
$,
; 4$7;*
!! 0** .+,+**
5
* 0**?+****.** +**
.
2** 2 ? *&
5
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
3 .
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
-. .
* .
.+, *
=. 7 **0=7
-. .
* **
. +, *
; 7 **0;7
3 .
* **
.+, *
)*7 **0)7
3 .
) $
, ,$$ ,
6
%('(
(.
!"! # *1 * !"#%#!%1 $ $ *1 * !"#!"#!E1 " #!!# E#% %&
(&"&(& %
&'!%
*** '
-. .
$** . ** +
-. .
$** . ** +
3 .
7*+ *0 4+** *
0. .
+***
3 .
=. ** +**
-. .
$*4+ . * * *0
3 .
15 * * 05+***
+
; 2 * +
5
*+0 +*** +
3 .
) '5**
-. .
*0 +,+*
3 .
+ *++*' *' * *
3 .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
/ * *
-. .
>*.+
-. .
**
3 .
*.+5. *
* *
3 .
/ * 5. *
) ** *
3 .
7 0 **
3 .
10 '*
-. .
4+ *2
-. .
/ *.*. .
3 .
/; +,*.*
-. .
@** * *
*.*
3 .
$0 ' .
; *.*0+++
'
3 .
=. 0*5*+ 2 *
$ * #
+ - $
,
*+*2+*
3 .
? * *+
3 .
*. ** *+*
$*
*+*+***
3 .
/++** *.*
-. .
/ *+0 +** +*
3 .
/ **
-. . 3 .
=** +.*+.**
0+
-. .
**
++
** ***
3 .
).?*+
+
)* *
3 .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
3 .
$ * #
+ - $
$0 ' .
3 .
3 *,0*
*
*
=. 0*5*+ 2 *
3 .
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
3 .
? * *+
3 .
$*
*+*+***
-. .
/ *+0 +** +*
3 . -. .
D * +*
=** +.*+.**
0+
5. .
* *
++
** ***
3 .
+**
4 *
3 .
)* *
3 .
**. ?*+ *5.4 *
!'(
!'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
* + +4. . ++ *2 ++ 0* .*.
3 .
A* - . 2*
1++ .** * * + + +, * 5
*
3 .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
-. .
* +,*
$,
; 4$7;*
!! 0** .+,+**
* 0**?+****.** +**
.
2** 2 ? *&
3 .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
3 .
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
* .
.+, *
=. 7 **0=7
* **
. +, *
; 7 **0;7
* **
.+, *
)*7 **0)7
) $
, ,$$ ,
6
(.
!"! # *1 * !"E##1 $ *1 * !"%##!"1 " #!!#%# %&
%&(&&("
&'!%
"
!'(
$ ** )0* + *++*' *' * *
-. .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
-. .
/ * *
-. .
>*.+
-. .
**
*.+5. *
-. .
* *
-. .
/ * 5. *
-. .
) ** *
-. .
7 0 **
-. .
10 '*
-. .
4+ *2
-. .
/ *.*. .
-. .
/; +,*.*
-. .
@** * *
*.*
-. .
$0 ' .
-. .
; *.*0+++
'
-. .
=. 0*5*+ 2 *
-. .
$ * #
+ - $
,
*+*2+*
-. .
? * *+
-. .
*. ** *+*
-. .
$*
*+*+***
-. .
/++** *.*
-. .
/ *+0 +** +*
-. .
/ **
-. . -. .
=** +.*+.**
0+
-. .
**
++
** ***
-. .
).?*+
+
-. .
)* *
-. .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
-. .
$ * #
+ - $
$0 ' .
-. .
3 *,0*
*
*
-. .
=. 0*5*+ 2 *
-. .
2 $ * #
+ - $
3 $ / *4 *
-. .
/ ***.
-. .
? * *+
-. .
$*
*+*+***
-. .
/ *+0 +** +*
-. . -. .
D * +*
-. .
=** +.*+.**
0+
5. .
* *
-. .
++
** ***
+**
4 *
-. .
)* *
-. .
**. ?*+ *5.4 *
-. .
('(
!'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
-. .
* + +4. . ++ *2 ++ 0* .*.
-. .
A* - . 2*
-. .
1++ .** * * + + +, * 5
*
-. .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
-. .
* +,*
$,
; 4$7;*
!! 0** .+,+**
-. .
* 0**?+****.** +**
.
2** 2 ? *&
-. .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
-. .
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
-. .
* .
.+, *
=. 7 **0=7
-. .
* **
. +, *
; 7 **0;7
-. .
* **
.+, *
)*7 **0)7
-. .
) $
, ,$$ ,
6
'(
(..
!"! # *1 * !"%##%1 # $ *1 * !"%# # 1 " #!!# E#" %&
%!&"&(&!(
&'!%
= *+
*
'(
) $ * #
+ , - $
$ )
* ***'* ** +,
3 .
/*+,,0*
3 .
12
3 .
)+ + * ***
)++ *
3 .
)
+,*+ *'.+
-. .
$*
D'1
/+ 2 +,
3 .
/+* +, * 4+ +
-. .
+***
3 .
>*** '
3 .
$** . ** +
$** . ** +
7*+ *0 4+** *
3 .
0. .
+***
3 .
=. ** +**
3 .
$*4+ . * * *0
-. .
15 * * 05+***
+
-. .
; 2 * +
*+0 +*** +
3 .
) '5**
-. .
*0 +,+*
-. .
+ *++*' *' * *
-. .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
-. .
/ * *
-. .
>*.+
-. .
**
-. .
*.+5. *
* *
-. .
/ * 5. *
) ** *
7 0 **
10 '*
-. .
4+ *2
-. .
/ *.*. .
-. .
/; +,*.*
3 .
@** * *
*.*
3 .
$0 ' .
3 .
; *.*0+++
'
5
=. 0*5*+ 2 *
5
$ * #
+ - $
,
*+*2+*
5
? * *+
3 .
*. ** *+*
5
$*
*+*+***
/++** *.*
/ *+0 +** +*
/ **
-. .
=** +.*+.**
0+
3 .
**
++
** ***
3 .
).?*+
+
)* *
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
$ * #
+ - $
$0 ' .
3 .
3 *,0*
*
*
3 .
=. 0*5*+ 2 *
5
2 $ * #
+ - $
3 $ / *4 *
/ ***.
? * *+
$*
*+*+***
/ *+0 +** +*
3 .
D * +*
=** +.*+.**
0+
5. .
* *
++
** ***
3 .
+**
4 *
)* *
**. ?*+ *5.4 *
E'(
%'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
* + +4. . ++ *2 ++ 0* .*.
A* - . 2*
1++ .** * * + + +, * 5
*
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
5
* +,*
$,
; 4$7;*
!! 0** .+,+**
5
* 0**?+****.** +**
.
2** 2 ? *&
5
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
5
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
5
* .
.+, *
=. 7 **0=7
5
* **
. +, *
; 7 **0;7
5
* **
.+, *
)*7 **0)7
5
) $
, ,$$ ,
6
('(
(./
!"! # *1 * !"%#E# 1 $ *1 * !"(#!# 1 " #!!##!( %&
%"&%%&"(&!!
&'!%
/ *
/ *
!'(
) $ * #
+ , - $
$ )
* ***'* ** +,
-. .
/*+,,0*
-. .
12
-. .
)+ + * ***
-. .
)++ *
-. .
)
+,*+ *'.+
-. .
$*
-. .
/+ 2 +,
-. .
/+* +, * 4+ +
-. .
) '5**
-. .
*0 +,+* 6
** +*
>*** '
-. .
$** . ** +
-. .
$** . ** +
-. .
7*+ *0 4+** *
-. .
0. .
+***
-. .
=. ** +**
-. .
$*4+ . * * *0
-. .
15 * * 05+***
+
-. .
; 2 * +
-. .
*+0 +*** +
-. .
+ *++*' *' * *
-. .
/ $ * #
+ , - $
>*.+
-. .
**
-. .
*.+5. *
-. .
* *
-. .
/ * 5. *
-. .
) ** *
-. .
7 0 **
-. .
10 '*
-. .
4+ *2
-. .
/ *.*. .
-. .
/; +,*.*
-. .
@** * *
*.*
-. .
$0 ' .
-. .
; *.*0+++
'
-. .
=. 0*5*+ 2 *
-. .
$ * #
+ - $
,
*+*2+*
-. .
? * *+
-. .
*. ** *+*
-. .
$*
*+*+***
-. .
/++** *.*
-. .
/ *+0 +** +*
-. .
/ **
-. . -. .
=** +.*+.**
0+
-. .
**
++
** ***
-. .
).?*+
+
-. .
)* *
-. .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
-. .
$ * #
+ - $
$0 ' .
-. .
3 *,0*
*
*
-. .
=. 0*5*+ 2 *
-. .
2 $ * #
+ - $
3 $ / *4 *
-. .
/ ***.
-. .
? * *+
-. .
$*
*+*+***
-. .
/ *+0 +** +*
-. . -. .
D * +*
-. .
=** +.*+.**
0+
5. .
* *
-. .
++
** ***
-. .
+**
4 *
-. .
)* *
-. .
**. ?*+ *5.4 *
-. .
'(
"'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
-. .
* + +4. . ++ *2 ++ 0* .*.
-. .
A* - . 2*
-. .
1++ .** * * + + +, * 5
*
-. .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
-. .
* +,*
$,
; 4$7;*
!! 0** .+,+**
-. .
* 0**?+****.** +**
.
2** 2 ? *&
-. .
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
-. .
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
-. .
* .
.+, *
=. 7 **0=7
-. .
* **
. +, *
; 7 **0;7
-. .
* **
.+, *
)*7 **0)7
-. .
) $
, ,$$ ,
6
E'(
(.0
!"! # *1 * !"!## 1 # $ *1 * !"#(#%1 " #!# %#E %&
E&&E& " %&
&'!%
)
%'(
) $ * #
+ , - $
$ )
* ***'* ** +,
3 .
/*+,,0*
-. .
12
3 .
)+ + * ***
)++ *
3 .
)
+,*+ *'.+
3 .
$*
5
/+ 2 +,
3 .
/+* +, * 4+ +
3 .
+***
-5
>*** '
$** . ** +
5
$** . ** +
7*+ *0 4+** *
3 .
0. .
+***
5
=. ** +**
5
$*4+ . * * *0
15 * * 05+***
+
5
; 2 * +
*+0 +*** +
5
) '5**
3 .
*0 +,+*
+ *++*' *' * *
D'1
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
D'1
/ * *
D'1
>*.+
3 .
**
5
*** '
3 .
$** . ** +
$** . ** +
3 .
7*+ *0 4+** *
-. .
0. .
+***
3 .
=. ** +**
3 .
$*4+ . * * *0
3 .
15 * * 05+***
+
3 .
; 2 * +
3 .
*+0 +*** +
) '5**
3 .
*0 +,+*
3 .
+ *++*' *' * *
3 .
/ $ * #
+ , - $
0 $ * #
+ - $
7 *+?2 +**
3 .
/ * *
3 .
>*.+
3 .
**
3 .
*.+5. *
-. .
* *
-. .
/ * 5. *
-. .
) ** *
3 .
7 0 **
-. .
10 '*
3 .
4+ *2
3 .
/ *.*. .
3 .
/; +,*.*
3 .
@** * *
*.*
$0 ' .
3 .
; *.*0+++
'
3 .
=. 0*5*+ 2 *
3 .
$ * #
+ - $
,
*+*2+*
3 .
? * *+
3 .
*. ** *+*
3 .
$*
*+*+***
3 .
/++** *.*
3 .
/ *+0 +** +*
3 .
/ **
3 . 3 .
=** +.*+.**
0+
3 .
**
++
** ***
3 .
).?*+
+
3 .
)* *
3 .
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
3 .
$ * #
+ - $
$0 ' .
3 .
3 *,0*
*
*
-. .
=. 0*5*+ 2 *
3 .
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
3 .
? * *+
3 .
$*
*+*+***
3 .
/ *+0 +** +*
3 . 3 .
D * +*
3 .
=** +.*+.**
0+
5. .
* *
3 .
++
** ***
3 .
+**
4 *
3 .
)* *
3 .
**. ?*+ *5.4 *
3 .
'(
'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
* + +4. . ++ *2 ++ 0* .*.
3 .
A* - . 2*
1++ .** * * + + +, * 5
*
3 .
$,.. 5*
.** *!. .? 0 !5?0 9 +** .**
* +,*
$,
; 4$7;*
!! 0** .+,+**
* 0**?+****.** +**
.
2** 2 ? *&
* .* *. ** 2 ** 0 *** )* $ =. $ *
.+, &
* 2 *+***.* 2 ***++00 0 .**.*5 *.*&
3 .
* .
.+, *
=. 7 **0=7
3 .
* **
. +, *
; 7 **0;7
* **
.+, *
)*7 **0)7
) $
, ,$$ ,
6
(.2
!"! # *1 * !"%#!#$ # $ *1 * !"%#"#%$ " #!!##! " # %&
%"&%& %&
&'!%
**
+ *++*' *' * *
-. .
) * '* *+5.
0+
5
/ *;= *+
5
7 *+?2 +**
D'1
1** * .*. ..+**
3 .
/ * *
-. .
/ $ * #
+ , - $
>*.+
5
**
3 .
*.+5. *
* *
-. .
/ * 5. *
) ** *
5
7 0 **
3 .
10 '*
-. .
4+ *2
-. .
/ *.*. .
5
/; +,*.*
3 .
@** * *
*.*
5
$0 ' .
D'1
; *.*0+++
'
5
=. 0*5*+ 2 *
D'1
$ * #
+ - $
,
*+*2+*
3 .
? * *+
D'1
*. ** *+*
3 .
$*
*+*+***
D'1
/++** *.*
3 .
/ *+0 +** +*
D'1
/ **
-. . -. .
=** +.*+.**
0+
D'1
**
++
** ***
D'1
).?*+
+
3 .
)* *
D'1
/ * +, 0 *
. 2 *.*$ )= $ )? =. 0 BC
5
$ * #
+ - $
$0 ' .
D'1
3 *,0*
*
*
3 .
=. 0*5*+ 2 *
D'1
2 $ * #
+ - $
3 $ / *4 *
3 .
/ ***.
-. .
? * *+
D'1
$*
*+*+***
D'1
/ *+0 +** +*
D'1 D'1 D'1
D * +*
5
=** +.*+.**
0+
5. .
* *
5
++
** ***
+**
4 *
**. ?*+ *5.4 *
5
E'(
E"'(
5 *,$ * $
, /+4 5*
.** *+* +4 *+* . 0+ * F5 *
D'1
* + +4. . ++ *2 ++ 0* .*.
D'1
A* - . 2*
D'1
1++ .** * * + + +, * 5
*
D'1
$,.. 5*
Appendix: Questionnaire survey — individual respondent answer sheets.