Lessons Learned From Industrial Validation of COSYSMO

Ricardo Valerdi, MIT, 77 Massachusetts Ave., Cambridge, MA 02139, [email protected]

John E. Rieff, Raytheon, P.O. Box 660246, Dallas, TX 75266, [email protected]

Garry J. Roedler, Lockheed Martin, P.O. Box 8048, Philadelphia, PA 19101, [email protected]

Marilee J. Wheaton, The Aerospace Corporation, 2350 E. El Segundo Blvd., El Segundo, CA 90245, [email protected]

Gan Wang, BAE Systems, 11487 Sunset Hills Rd., Reston, VA 20190, [email protected]

Copyright © 2007 by Ricardo Valerdi, John Rieff, Garry Roedler, Marilee Wheaton, and Gan Wang. Published and used by INCOSE with permission.

Abstract. The development of COSYSMO has been an ongoing collaboration between industry, government, and academia since 2001. INCOSE provided expertise as well as a forum for collaboration between stakeholders that led to the eventual development of the model. In 2004, we provided eleven lessons learned from experiences collecting systems engineering data from six companies in collaboration with the INCOSE Measurement Working Group and Practical Software and Systems Measurement (PSM). Those lessons focused on the development of COSYSMO, which was motivated by a similar model from the software domain, COCOMO II, but was the first of its kind for systems engineering. Now that the development phase of the model is complete, we take a retrospective view of lessons learned during the ongoing validation phase and present new lessons that should help cost model developers, academic researchers, and practitioners develop and validate similar approaches. These lessons include the need for more specific counting rules, an approach for accounting for reuse in systems engineering, and strategies for model adoption in organizations.

Background

The Need for COSYSMO. The emergence of CMMI as the de facto process capability standard highlights the importance of integrating the systems engineering function with other engineering disciplines. Organizations striving to be at Level 2 or above seek to use parametric models to estimate the effort needed to perform their tasks. COSYSMO provides this capability by characterizing the attributes that affect systems engineering effort through size drivers and cost drivers. These parameters give organizations a way to measure the size of the system – via requirements, interfaces, algorithms, and operational scenarios – and to assess the context under which systems engineering is being performed, in order to estimate the amount of systems engineering effort needed to deliver a system.
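For readers unfamiliar with parametric models of this kind, the minimal sketch below shows the general COCOMO-style form that COSYSMO follows: effort as a calibrated constant times weighted size raised to an exponent, scaled by a product of effort multipliers. The constant, exponent, and multiplier values here are illustrative placeholders, not the model's published industry calibration.

```python
# A minimal sketch of the general COSYSMO estimating form, assuming the
# COCOMO-style shape PM = A * Size^E * product(EM_i). The constants below
# are placeholders, not the published calibration.
from math import prod

def cosysmo_effort(size: float,
                   effort_multipliers: dict[str, float],
                   A: float = 1.0,    # calibration constant (placeholder)
                   E: float = 1.0     # size exponent (placeholder)
                   ) -> float:
    """Return estimated systems engineering effort in person-months."""
    return A * size ** E * prod(effort_multipliers.values())

# Hypothetical inputs: a weighted size of 120 equivalent requirements and
# two of the fourteen cost drivers rated off-nominal.
estimate = cosysmo_effort(
    size=120.0,
    effort_multipliers={
        "Requirements Understanding": 0.87,  # high understanding -> savings
        "Technology Risk": 1.20,             # high risk -> penalty
    },
)
print(f"Estimated SE effort: {estimate:.1f} person-months")
```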

Evolution of COSYSMO. The Corporate Affiliates of the USC Center for Systems and Software Engineering (CSSE) were the main driving force behind the development of what is now COSYSMO. The effort began in 2001 with a workshop at USC that focused on industry's need to do for systems engineering what the Constructive Cost Model (COCOMO) had done for software engineering. The COCOMO development methodology served as a useful guide for identifying the necessary steps on which academia, industry, and government could collaborate; the eight-step modeling methodology is described in detail in the book Software Cost Estimation with COCOMO II (Boehm, et al 2000). In this arrangement, industry and government provided program expertise and systems engineering data while academia served as the neutral broker by analyzing sanitized data and providing a calibration that was influenced by participating companies. The development of the model itself also involved over a dozen workshops held throughout the country to arrive at a consensus across industry on the most relevant systems engineering size and cost drivers (Valerdi, Miller, Thomas 2004).

Summary of Lessons Learned from Development Phase. The following is a collection of lessons learned that were identified by the working group during the data collection activity for the COSYSMO model (Valerdi, et al 2004). Some lessons are generalizable to the model development process and others are more specific to particular features of COSYSMO.

Lesson #D1: A standardized WBS and dictionary provides the foundation for decisions on what is within the scope of the model, both for data collection and for estimating.

Lesson #D2: Careful examination of potential projects is necessary to ensure completeness, consistency, and accuracy across all required data collection items for the project.

Lesson #D3: The collection of the size driver parameters requires access to project technical documentation as well as project systems engineering staff who can help interpret the content.

Lesson #D4: The rating of effort multiplier parameters for a completed project requires an assessment from the total project perspective.

Lesson #D5: Agree on a standardized set of life cycle phases for the model despite the different processes used by Affiliate companies.

Lesson #D6: The data collection form must be easy to understand and flexible enough to accommodate organizations with different levels of detail so that they can contribute data and use the model.

Lesson #D7: Spending more time on improving the driver definitions has ensured consistent interpretation and improved the model's validity.

Lesson #D8: If no data can be collected for a particular driver then that driver cannot be used, because its influence on systems engineering effort cannot be validated.

Lesson #D9: Historical data can help determine which drivers should be kept in the model and which should be discarded.

Lesson #D10: Establishing non-disclosure agreements early in the process enables data sharing and collaboration to take place easily.

Lesson #D11: The success of the model hinges on the support of the end-user community.

The lessons gathered during the development of COSYSMO were helpful in maturing the model to an acceptable level so that organizations would begin to use it as a sanity check on their proposals. The process of applying COSYSMO to existing programs introduced additional challenges, which are described throughout the remainder of this paper. To appreciate these challenges, we first set the context for how four particular organizations adopted the model.

Industry Validation of COSYSMO

This section summarizes the use of COSYSMO at The Aerospace Corporation, BAE Systems, Lockheed Martin, and Raytheon in terms of corporate training, use of the model for sanity checks, the development of proprietary models such as SECOST, the establishment of measurement programs as a result of COSYSMO, use of COSYSMO in CMMI assessments, and how COSYSMO has changed as a result of industry use.

The Aerospace Corporation, a Federally Funded Research and Development Center (FFRDC), provides support to national security, civil, and commercial customers for space systems. In its role as a trusted partner, Aerospace utilizes cost models such as COSYSMO for cross checks and independent program assessments of program executability. The Young panel report on National Security Space (NSS) Acquisition (Young 2003) identified as one of its findings that the space acquisition system is strongly biased to produce unrealistically low cost estimates in areas such as systems engineering. The panel further identified that "clear tradeoffs among cost, schedule, risk, and requirements are not well supported by rigorous system engineering, budget, and management processes." The Aerospace Corporation's Smarter Buyer course states that systems engineering and integration activities are the first areas to be cut when an industry cost proposal needs to meet a particular cost target. To address this, COSYSMO has been employed to improve the process of estimating systems engineering effort on space programs.

BAE Systems Electronics & Integrated Solutions (E&IS) Operating Group (OG), headquartered in Nashua, New Hampshire, is a major defense electronics business with sites across the U.S., U.K., and Israel. Several of its lines of business have been involved in COSYSMO development since its inception. Early in 2006, E&IS OG launched an internal effort to establish a standardized systems engineering estimation process across its Lines of Business, which focused on piloting COSYSMO and calibrating the model to its products and platforms. The initial goals were to provide cross checks and independent cost evaluations for bids and proposals and for program execution, and to evaluate the feasibility of adopting the model as the baseline methodology for a standard estimation process. As part of this effort, a systems engineering estimation workbook based on COSYSMO has been developed as the standardized estimating tool. Around 50 historical program data points have been collected to create platform-specific calibrations. An effort is underway to pilot the workbook in programs and bid and proposal (B&P) activities across the OG to evaluate the applicability and limitations of the model, to refine the calibration data points, and to analyze data correlation. BAE Systems is also actively working with MIT/LAI and other Affiliate companies to develop the reuse extension to COSYSMO and improved cost estimating relationships (CERs) for the model related to its cost drivers. In parallel to these development efforts, numerous seminars and

workshops have been conducted at multiple sites to teach the model and to discuss its application. A train-the-trainer class is currently being developed for deploying the model and the workbook tool. E&IS OG has been the focal point for the greater BAE Systems family worldwide, as there has been increasing interest from other operating groups in the U.S. and business units in the U.K.

Lockheed Martin recognizes the value of developing better models for estimating systems engineering costs on programs. As a result, it has supported the development of COSYSMO from the inception of the project, contributing to its development, data collection, and validation. Since the initial release of COSYSMO, Lockheed Martin has been conducting validation and piloting exercises to better understand its potential usage and limitations. Through these exercises, several necessary model improvements have been identified and developed by Lockheed Martin. These improvements include accounting for risk in the estimates (providing ranges of estimates), addressing reuse of systems engineering elements, and providing the ability to create effort distributions based on program profiles. These improvements have been incorporated into the COSYSMO model. The validation results at Lockheed Martin have been promising and, together with the improvements, provide the basis for further adoption in the future. COSYSMO, along with other cost estimation models and methods, will continue to be applied and refined within Lockheed Martin to allow continual improvement in this area.

The Raytheon Company has been a strong supporter of the COSYSMO model from its inception. As a member of the USC Industrial Affiliates program, Raytheon, along with several other industrial members, strongly recommended to the USC CSSE staff that they investigate the development of a cost estimation model focused on systems engineering. Thus, COSYSMO was born. After the publication of the COSYSMO dissertation, Raytheon embarked on the development of an in-house tool built upon the COSYSMO model. This SE cost estimation tool is currently in prototype deployment for use as a second-opinion estimation capability. Current results indicate a high degree of promise for the use of this COSYSMO-based tool for direct bid purposes. In addition to the pilot deployments, Raytheon is also collecting historical data from previously completed programs in order to form a calibration database for its internal cost estimation tool.

Lessons Learned

In the same way COSYSMO has influenced the way organizations estimate systems engineering effort, the experience of applying the model to actual projects has provided numerous lessons learned and exposed additional areas for continued research. This section outlines fifteen lessons and the motivation behind each.

Skills Needed to Use COSYSMO. One of the first things we learned when we used COSYSMO in industry was the amount of information that the model assumed was already known by the user. In most cases, there was a significant learning curve on two fronts: the understanding of parametric cost modeling and the understanding of systems engineering concepts included in COSYSMO.

Lesson #V1: Provide a list of assumptions/prerequisites for model use as well as the appropriate training/resources for COSYSMO understanding.

Model Usability. It is also helpful to better understand how the usability of COSYSMO changes as the system evolves through its life cycle. A major component of this is the detail and quality of information available at different stages of the program. In the conceptualize phase, it is common to have information that is too limited and vague to serve as useful input to COSYSMO. As the system matures, its technical scope is defined and its characteristics are shaped by the relevant stakeholders. This dynamic property of the model should be recognized and managed accordingly.

Lesson #V2: Understanding usability (Miller 2006) will lead to more reliable inputs to the model, especially at the early phases of the life cycle where little project information is available.

Model Adoption. We discovered that there was a strong need for guidance on the steps needed to adopt the COSYSMO model in organizations. As a result, we developed a 10-step process (Miller & Valerdi 2006) that could guide organizations through the training, data collection, validation, tailoring, and calibration of the model.

Lesson #V3: Providing organizations with a sequential process driven by implementation experience will facilitate the adoption of COSYSMO.

Accounting for Reuse. Nearly all systems today have some degree of legacy considerations. Industry is rapidly moving away from building systems from scratch; instead, system upgrade and development spirals dominate today's programs. In this environment, reuse has become an increasingly important subject in systems engineering. During COSYSMO implementations, organizations noticed large errors in their estimates compared to actuals. After further exploration, it was discovered that these programs had a considerable amount of reuse in their systems. As a result, we developed a way to quantify the effort benefit associated with reusing components from other systems (Valerdi, Gaffney, et al 2006).

Lesson #V4: Providing a way to account for reuse in systems engineering (Roedler 2006) is essential for improving the accuracy of the model.

Risk in Cost Estimates. The output of COSYSMO is currently a single point estimate, which makes it difficult to assess the probability of meeting that estimate. Other studies have shown that risk is an important part of the cost estimation process (Garvey 2000; Anderson & Covert 2005; Keefer 1994), so we developed a way to incorporate risk in the output of the model as a function of uncertainty in the inputs of the model (Valerdi & Gaffney 2007).

Lesson #V5: Modeling the probability of the estimate provided by COSYSMO will help assess the risk associated with that estimate as part of the overall risk management strategy for the project.

Counting Rules. One of the great difficulties in developing an estimate with COSYSMO is the consistent counting of requirements and, in particular, defining the boundaries between "system-level requirements" and other requirements at lower levels such as those for software and hardware. Moreover, this process was observed to be inconsistent across organizations. It was determined that more guidance was needed to ensure consistent interpretations of where the systems engineering requirements should be counted (Valerdi & Eiche 2005; Valerdi 2006). This lesson ties directly to lesson #D7 above (spending more time on improving the driver definitions has ensured consistent interpretation and improved the model's validity).

Lesson #V6: Detailed counting rules can ensure that size drivers, specifically requirements, are counted consistently across the diverse set of systems engineering projects, hence improving the model's application across organizations.

Rating Complexity. For each of the four size drivers - requirements, interfaces, scenarios, and algorithms - user inputs must be separately identified in complexity categories of easy, nominal, and difficult. This is a highly subjective process since what is easy for one person may be difficult for another based on domain experience, training, and other factors. To address this, it was necessary to develop clear guidance for making these selections. The guidance, however, had to retain some flexibility so that organizations could incorporate their own measures of complexity for their programs. The most important aspect is that the characterization of these inputs be consistent with the data set in the industry-calibrated model.

Lesson #V7: Guidance on rating complexity as easy, nominal, or difficult is necessary to ensure consistent use across organizations.

Rating Drivers With Multiple Viewpoints. The COSYSMO model includes 14 project cost drivers, or effort multipliers; eight of them focus on the application being developed and six on the team developing the application. For each of the cost drivers, users need to identify a rating that, in most cases, ranges from Very Low to Very High. This is relatively simple for drivers that can be operationalized from a single viewpoint, such as Requirements Understanding. But for some drivers, such as Technology Risk, the rating scale has a more complicated structure because it captures multiple viewpoints. These include technology maturity, readiness, and obsolescence, which together provide a measure of technology-related risk on the overall program. When these viewpoints had rating values that were orthogonal to each other, users were required to weigh them against one another to obtain an overall rating for that driver. This process is prone to subjectivity based on the possible assessments of each viewpoint.

Lesson #V8: Clarification on how rating levels are averaged between multiple viewpoints is needed to reconcile possible conflicts in driver interpretations.

Driver Rating Scales and Polarity. Representing a spectrum of systems engineering cost drivers on uniform scales ranging from Very Low to Very High proved to be difficult. As a result, custom rating scales were developed for drivers that had unique properties. For example, the Migration Complexity driver was designed to capture the systems engineering complexities created by legacy system considerations. It was simple to identify effort penalties associated with increasing demands placed by legacy systems on the systems engineering effort, but the absence of a legacy system did not yield a systems engineering effort savings. As a result, a rating scale that reflected this asymmetry was developed. It ranged from Nominal, representing no effort penalty, to Extra High, representing the most severe effort penalty possible when the legacy system created extraneous effort on the part of the systems engineering staff (Valerdi 2005). The polarity of rating scales was also adjusted to fit the meaning of the phenomenon being measured. In the case of Process Capability, higher ratings translate to higher effort savings because of the organizational efficiencies introduced by CMMI. In contrast, Level of Service Requirements has a rating scale whose higher ratings yield higher effort penalties due to the additional impact of system "ilities" such as survivability.

Lesson #V9: Matching the rating scales and polarity of drivers made their impact on systems engineering effort easier to understand.
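As an illustration of Lessons #V5 and #V7 taken together, the sketch below enters size-driver counts by complexity category, weights them into a single size measure, and then propagates triangular uncertainty on one input through a Monte Carlo loop to produce an effort distribution rather than a point estimate. All weights, distributions, and constants here are invented for illustration; they are not the published COSYSMO calibration values.

```python
import random

# Placeholder complexity weights per size driver (easy, nominal, difficult);
# the real COSYSMO weights come from the industry calibration.
WEIGHTS = {
    "requirements": (0.5, 1.0, 5.0),
    "interfaces": (1.0, 3.0, 6.0),
}

def weighted_size(counts):
    """counts: {driver: (n_easy, n_nominal, n_difficult)} -> weighted size."""
    return sum(
        n * w
        for driver, ns in counts.items()
        for n, w in zip(ns, WEIGHTS[driver])
    )

def effort(size, A=1.0, E=1.0, em_product=1.0):
    """Same illustrative model form as the earlier sketch."""
    return A * size ** E * em_product

# Monte Carlo over an uncertain nominal-requirement count (Lesson #V5);
# random.triangular takes (low, high, mode).
samples = sorted(
    effort(weighted_size({
        "requirements": (20, random.triangular(80, 140, 100), 10),
        "interfaces": (5, 12, 3),
    }))
    for _ in range(10_000)
)
print(f"Median effort: {samples[5000]:.1f} person-months")
print(f"80th percentile: {samples[8000]:.1f} person-months")
```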

Overlap of Requirements, Operational Scenarios, Algorithms, and Interfaces. One must be sensitive to the fact that early in the program the system may be defined in terms of operational scenarios rather than just requirements, interfaces, and algorithms. As the system evolves and its scope is formalized, requirements begin to appear and may overlap with operational scenarios, interfaces, and algorithms. The COSYSMO model is built around a set of size drivers that determine the size of the "bread-box" upon which SE effort is applied. The number of system requirements forms the foundational element of these size drivers. Where do these size drivers originate? For requirements, that is somewhat easy. System requirements typically originate from the system specification that defines the functionality the system must perform or meet. The statement of requirements from this specification is then typically managed in a requirements database. However, the Statement of Work, Concept of Operations, Interface Specification, and Interface Control Documents also provide types of requirements that influence the overall system. Some of these are in the form of constraints, but others are also expressed as "shall" statements. Some of these constraints and statements then appear in the requirements database. But how should we count them? For requirements that define the functionality of the system, the answer is straightforward: they are counted as system requirements. But what about interfaces? Oftentimes interfaces are also defined by a set of "shall" statements that appear in the requirements database. Do we count the number of "shall" statements and include this number with the system requirements, or do we count the number of physical and logical interfaces from our architecture? From experience, we have determined that the best approach is to count requirements or interfaces, but not both.

Lesson #V10: Detailed examples need to be provided to prevent "double dipping" across multiple size drivers (Valerdi 2006).

Effect of Schedule on Effort. We have observed that systems engineering schedule does not follow the cube root law seen in software. The law states that software development time in calendar months is roughly three times the cube root of the estimated effort in person-months (Cook 2004); for example, a 216 person-month software effort would imply a schedule of roughly 3 × (216)^(1/3) = 18 calendar months. Software cost models embed the cube root law and schedule compression relationships in their implementations (Yang et al 2005). Since systems engineering is a supporting role, it is not on the critical path throughout the entire life cycle. Therefore, it is difficult to develop an empirically validated approach for systems engineering schedule that mirrors the cube root law.

Lesson #V11: Systems engineering schedule is driven by project-level milestones and therefore does not behave like the cube root law in software.

Life Cycle Coverage. Similar to lessons #D5 (agree on a standardized set of life cycle phases), #D8 (if no data can be collected for a particular driver then that driver cannot be used, because its influence on systems engineering effort cannot be validated), and #D9 (historical data can help determine which drivers should be kept in the model and which should be discarded) above, the model scope eventually had to be defined by the amount of data that we were able to collect. The end result was a model that was calibrated to the first four life cycle phases, as shown in Figure 1. Since most contracts end at the transition to operation phase, the later phases (i.e., operate/maintain/enhance and replace/dismantle) were not covered. The COSYSMO assumption that all executed programs span the entire life cycle had to be changed because many contracts that involve systems engineering effort cover only a subset of life cycle phases.

Lesson #V12: Focus the scope of COSYSMO only on life cycle phases that can be calibrated with historical data.

Figure 1. Systems Engineering Effort Profile (Valerdi & Wheaton 2005).

Systems Engineering Effort Profile. To be useful in planning and budgeting for systems engineering effort, we need to understand the effort profile over the life cycle phases. While the total effort estimated by COSYSMO may be useful as a sanity check on a bottom-up estimate, in order to allocate these resources across the project life cycle phases we need to collect data on how resources are utilized on actual projects. We observed that about 23% of the effort was spent in the conceptualize phase, 36% in the development phase, 27% in the operational test & evaluation phase, and 14% in the transition to operation phase. A more detailed distribution of systems engineering activities throughout these phases is shown in Figure 2.

Lesson #V13: The capability to model systems engineering effort distribution by phase is necessary since many projects estimate portions of the life cycle rather than the entire life cycle.
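The sketch below shows how Lesson #V13 might be applied in practice: a total COSYSMO estimate is spread across life cycle phases using the observed averages reported above, renormalizing when a contract covers only part of the life cycle. A real deployment would substitute an organization's own effort profile for these averages.

```python
# Observed average SE effort profile from the text above (sums to 100%).
PHASE_PROFILE = {
    "Conceptualize": 0.23,
    "Develop": 0.36,
    "Operational Test & Evaluation": 0.27,
    "Transition to Operation": 0.14,
}

def allocate_by_phase(total_pm: float, phases=None) -> dict[str, float]:
    """Spread total person-months over the phases a contract actually covers,
    renormalizing the profile when only part of the life cycle is in scope."""
    profile = {p: PHASE_PROFILE[p] for p in (phases or PHASE_PROFILE)}
    scale = sum(profile.values())
    return {p: total_pm * w / scale for p, w in profile.items()}

# A contract covering only the first two phases of a 100 person-month estimate:
print(allocate_by_phase(100.0, ["Conceptualize", "Develop"]))
# -> {'Conceptualize': 38.98..., 'Develop': 61.01...}
```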

Figure 2. Systems Engineering Effort Profile (Valerdi & Wheaton 2005).

Local Calibration. Calibration of COSYSMO had, in the past, been a tedious task that could be performed by only one person (the model developer). With multiple organizations striving to collect data to calibrate the model, it was necessary to streamline the calibration process and make it available to everyone. Fortunately, commercial tools such as SystemStar (developed by SoftStar Systems, www.softstarsystems.com) are now available for this purpose.

Lesson #V14: Provide ways for individual organizations to self-calibrate COSYSMO.

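To make Lesson #V14 concrete, the sketch below shows what self-calibration involves under the illustrative model form used earlier, PM = A * Size^E * EM. Taking logarithms gives the linear relation log(PM/EM) = log(A) + E*log(Size), so A and E can be fit to historical records by ordinary least squares. This mirrors the COCOMO-style calibration approach in general terms; the data are hypothetical and this is not a description of any specific tool's method.

```python
import numpy as np

# Hypothetical historical records: (weighted size, product of EMs, actual PM).
history = [
    (80.0, 0.9, 65.0),
    (150.0, 1.2, 160.0),
    (300.0, 1.0, 280.0),
    (500.0, 1.4, 640.0),
]

x = np.log([size for size, _, _ in history])
y = np.log([pm / em for _, em, pm in history])

E, logA = np.polyfit(x, y, 1)   # slope = E, intercept = log(A)
A = np.exp(logA)
print(f"Local calibration: A = {A:.3f}, E = {E:.3f}")
```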

Prototypes. Unlike software, where the basic size driver (i.e., lines of code) is consistent and well understood, the understanding of system metrics differs greatly across system and platform types. A "systems engineering requirement" can vary across people based on their perspective and across product platforms based on the application domain. A requirement for a Line Replaceable Unit (LRU) or box-type product is significantly different from one for a complex system such as an airplane or aircraft carrier. However, COSYSMO must apply to both system types. When creating local calibrations, it is advisable to define prototypical system or platform types so as to ensure better convergence of the calibrations; a sketch of this idea follows below.

Lesson #V15: Defining prototypical system types will help communicate the application of the model.

Aside from the validation of COSYSMO, these lessons should help researchers and practitioners validate future systems engineering measurement efforts such as the Leading Indicators currently being developed (Roedler & Rhodes 2005).
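As a sketch of Lesson #V15, the snippet below assumes calibration records are tagged with a prototypical platform type and derives a separate calibration constant per type (with the size exponent fixed at 1.0 for simplicity), which tends to converge better than one pooled fit across dissimilar platforms. The data and type names are hypothetical.

```python
from collections import defaultdict
from statistics import median

# Hypothetical records: (platform type, weighted size, actual person-months).
records = [
    ("LRU/box", 40.0, 18.0), ("LRU/box", 55.0, 26.0),
    ("aircraft", 400.0, 520.0), ("aircraft", 650.0, 910.0),
]

by_type = defaultdict(list)
for ptype, size, pm in records:
    by_type[ptype].append(pm / size)   # implied A when PM = A * Size

for ptype, ratios in by_type.items():
    print(f"{ptype}: median local A = {median(ratios):.2f}")
```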

Application Challenges of COSYSMO

Early experience with calibrating and piloting COSYSMO has shown that the model provides a good foundation and bridges a critical gap in systems engineering estimation, which has historically lagged behind other engineering fields such as software. Experience also shows that there are two near-term challenges the model faces when applied to real programs. Improvements in these areas are imperative as the model is adopted for widespread operational use.

Requirements Change. It is inevitable that requirements change during the life cycle of a project. Whether the changes are introduced by customers during the definition phase of the program or by contractors during detailed design, they affect the amount of systems engineering effort needed on programs. Some programs today do not have traditional requirements; instead, only operational needs are suggested, and it is up to the engineering team to define and refine the requirements during the course of analysis and design activities. COSYSMO today partially addresses the problem of requirements volatility by providing an effort multiplier to account for the level of requirements understanding. The rating scale ranges from a Very Low understanding to a Very High understanding, with an effort penalty and savings, respectively. This only addresses requirements change due to the team's understanding of the requirements, assuming the underlying requirements are stable. A more complete solution is to address the inherent changes in requirements due to external factors, in terms of the percentage of requirements that will change during the course of development, with a mapping of such changes to the engineering life cycle. The rationale is that the later a change occurs in the life cycle, the more systems engineering effort will be needed to implement it. Such a relationship follows the S-curve of commitment of system-specific knowledge and cost (Blanchard & Fabrycky 1998, page 37); a sketch of this idea appears below.

Standard WBS and Systems Engineering Tasks. COSYSMO estimates the total systems engineering effort, which can be decomposed into the five high-level systems engineering activities defined by ANSI/EIA 632. However, in real estimating applications, estimates by engineering task are highly desirable, if not necessary, to develop a complete engineering bid. This requires a level of detail beyond what is currently available. The five activities defined in the existing implementation are a prerequisite, but the model needs to be calibrated at a lower level of the systems engineering WBS to provide additional tailoring capabilities to match specific program needs.
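The sketch below illustrates the requirements-change extension proposed above: a percentage of requirements is assumed to change in each phase, and each change is penalized more heavily the later it occurs, echoing the cited knowledge/cost commitment S-curve. The growth factors and per-requirement effort here are invented for illustration, not calibrated values.

```python
# Hypothetical cost-growth factor for implementing one changed requirement,
# relative to defining it correctly up front; later phases cost more.
CHANGE_GROWTH = {
    "Conceptualize": 1.0,
    "Develop": 2.0,
    "Operational Test & Evaluation": 4.0,
    "Transition to Operation": 8.0,
}

def volatility_penalty(n_requirements: int,
                       pct_change_by_phase: dict[str, float],
                       pm_per_requirement: float) -> float:
    """Extra person-months caused by requirements changing after baseline."""
    return sum(
        n_requirements * pct * pm_per_requirement * CHANGE_GROWTH[phase]
        for phase, pct in pct_change_by_phase.items()
    )

# 200 requirements, 5% changing during Develop and 2% during OT&E,
# at an assumed 0.5 PM per requirement:
print(volatility_penalty(200,
                         {"Develop": 0.05,
                          "Operational Test & Evaluation": 0.02},
                         0.5))
# -> 200*0.05*0.5*2 + 200*0.02*0.5*4 = 10 + 8 = 18 extra person-months
```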

The Future of COSYSMO

COSYSMO continues to evolve as a result of the industrial momentum that has been fueled by INCOSE and organizations interested in systems engineering cost estimation. Despite its success thus far, a spectrum of new directions lies ahead as more organizations apply the model to their projects and perform local calibrations. Deeper involvement with government entities is a necessary next step as COSYSMO becomes a trusted tool in the aerospace and defense industries. This evolution has benefited from the help of commercial tool vendors such as PRICE Systems, Galorath, and SoftStar Systems, who are planning to offer commercial implementations of COSYSMO as part of their existing product suites. As COSYSMO evolves, so will the needs of the systems engineering community. The evolution plan for the model will continue to be responsive to the standards and trends that drive its stakeholders while capturing additional lessons learned along the way. As shown in this paper, these lessons have greatly shaped the scope and definition of the model.

References

Anderson, T. P., Covert, R. P., (Eds.), "Space Systems Cost Analysis Handbook." Space Systems Cost Analysis Group, November 2005.

Blanchard, B. S., Fabrycky, W. J., Systems Engineering and Analysis, Prentice Hall, 1998.

Boehm, B., Abts, C., Brown, A. W., Chulani, S., Clark, B., Horowitz, E., Madachy, R., Reifer, D. J., Steece, B., Software Cost Estimation With COCOMO II, Prentice Hall, 2000.

Boehm, B., Rieff, J., Thomas, G., Valerdi, R., "COSYSMO Tutorial." 13th Annual INCOSE Symposium, Crystal City, VA, July 2003.

Cook, D. A., Leishman, T. R., "Lessons Learned from Software Engineering Consulting." Journal of Defense Software Engineering, February 2004.

Garvey, P. R., Probability Methods for Cost Uncertainty Analysis, Marcel Dekker, New York, NY, 2000.

Keefer, D. L., "Certainty Equivalents for Three-Point Discrete-Distribution Approximations." Management Science, Vol. 40, No. 6, June 1994.

Miller, C., Measuring Usability in COSYSMO, PhD Dissertation Proposal, George Washington University, Spring 2006.

Miller, C., Valerdi, R., "COSYSMO Adoption Process." 21st Annual Forum on COCOMO and Software Cost Modeling, Herndon, VA, November 2006.

Roedler, G., Rhodes, D., (Eds.), Systems Engineering Leading Indicators Guide, MIT Lean Aerospace Initiative, December 2005.

Roedler, G., "Adapting COSYSMO to Accommodate Reuse." LMCO Measurement Workshop, Valley Forge, PA, September 2006.

Valerdi, R., Boehm, B., Reifer, D., "COSYSMO: A Constructive Systems Engineering Cost Model Coming of Age." Proceedings of the 13th Annual International INCOSE Symposium, Crystal City, VA, July 2003.

Valerdi, R., Rieff, J., Roedler, G., Wheaton, M., "Lessons Learned From Collecting Systems Engineering Data." 2nd Annual Conference on Systems Engineering Research, Los Angeles, CA, April 2004.

Valerdi, R., Miller, C., Thomas, G., "Systems Engineering Cost Estimation by Consensus." 17th International Conference on Systems Engineering, Las Vegas, NV, September 2004.

Valerdi, R., Eiche, B., "On Counting Requirements." 3rd Conference on Systems Engineering Research, Hoboken, NJ, March 2005.

Valerdi, R., The Systems Engineering Cost Model (COSYSMO), PhD Dissertation, University of Southern California, May 2005.

Valerdi, R., Wheaton, M., "ANSI/EIA 632 As a Standard WBS for COSYSMO." AIAA 1st Infotech@Aerospace Conference, Arlington, VA, September 2005.

Valerdi, R., "academicCOSYSMO Users Manual." MIT Lean Aerospace Initiative, September 2006.

Valerdi, R., Gaffney, J., Roedler, G., Rieff, J., "Extensions of COSYSMO to Represent Reuse." 21st Annual Forum on COCOMO and Software Cost Modeling, Herndon, VA, November 2006.

Valerdi, R., Gaffney, J., "Reducing Risk and Uncertainty in COSYSMO Size and Cost Drivers: Some Techniques for Enhancing Accuracy." 5th Conference on Systems Engineering Research, Hoboken, NJ, 2007.

Yang, Y., Chen, Z., Valerdi, R., Boehm, B., "Effect of Schedule Compression on Project Effort." 27th Conference of the International Society of Parametric Analysts, Denver, CO, June 2005.

Young, P. (Ed.), Report of the Defense Science Board/Air Force Scientific Advisory Board Joint Task Force on Acquisition of National Security Space Programs, May 2003.

BIOGRAPHIES

Ricardo Valerdi is a Research Associate at the Lean Aerospace Initiative at MIT and a Visiting Associate at the Center for Systems & Software Engineering at USC. He earned his BS in Electrical Engineering from the University of San Diego and his MS and PhD in Industrial and Systems Engineering from USC. He is a Member of the Technical Staff at The Aerospace Corporation in the Economic & Market Analysis Center. Previously, he worked as a systems engineer at Motorola and at General Instrument Corporation. He is on the Board of Directors of INCOSE.

John E. Rieff is a Senior Manager at Raytheon's Space & Airborne Systems in Dallas, TX. Previously he worked for Raytheon's Intelligence and Information Systems business area in Garland, TX. He is one of the co-authors of the Raytheon Enterprise Architecture Process (REAP). John has been employed by E-Systems (now Raytheon) from 1986 to the present. He was previously employed by Texas Instruments and Rockwell International. John received his Bachelor of Science degrees from Iowa State University and graduate and post-graduate degrees from Iowa State University, the University of Texas, and the University of Iowa.

Garry J. Roedler is a Senior Manager of Systems Engineering (SE) at the Lockheed Martin Engineering Process Improvement Center. He is responsible for the development and selection of SE processes, implementation assets, training, and tools for the corporation toward an integrated set of SE enablers to aid program performance. This role also provides leadership to the corporate SE Subcouncil. In prior roles at Lockheed Martin, he has been responsible for the achievement and sustainment of Level 5 CMM/CMMI objectives, including a world first in achieving Level 5 ratings in the SE-CMM. Garry has over 25 years of experience in engineering, measurement, and teaching, and holds degrees in mathematics education and mechanical engineering from Temple University. Other work includes leadership roles in many technical and standards organizations, including: US Head of Delegation and Task Group lead for ISO/IEC JTC1/SC7 Working Group 7 (systems and software process standards); the Practical Software and Systems Measurement (PSM) Steering Group; the International Council On Systems Engineering (INCOSE) Corporate Advisory Board, Technical Board, and Committees; co-founder of the INCOSE Delaware Valley Chapter; and the IEEE Standards Association.

Marilee J. Wheaton is currently the General Manager of the Systems Engineering Division at The Aerospace Corporation. She has a B.A. in Mathematics from California Lutheran University and an M.S. in Systems Engineering from the University of Southern California (USC). She is an Associate Fellow of the AIAA and a member of the AIAA Technical Committee on Economics. She is also a Fellow and Life Member of the Society of Women Engineers (SWE) and a Past President of SWE's Los Angeles Section. She serves as a member of the INCOSE Corporate Advisory Board and is an instructor in the Systems Architecting and Engineering (SAE) Program at the USC Viterbi School of Engineering.

Gan Wang is a Senior Principal Engineer at BAE Systems. He has been engaged in the development of cost estimating and decision support methodologies for enterprise and capability-based engineering. Prior to joining BAE Systems, Dr. Wang spent many years developing real-time geospatial data visualization applications and man-in-the-loop flight simulation and aircrew training systems. He has over 20 years of experience in software development and software-intensive systems engineering and integration.