Requirements Completeness: A Deterministic Approach

Ronald S. Carson, Ph.D.
Information and Communications Systems
Boeing Information, Space, and Defense Systems
P.O. Box 3999 M/S 3C-JM
Seattle, WA 98124-2499

Abstract. A process for determining requirements completeness is developed. The method comprises three steps: (1) defining the problem to be solved by identifying and quantifying all system interfaces associated with the system development, operational, and maintenance concepts, (2) producing the requirements by analyzing the system interfaces to determine requirements under all conditions, and (3) verifying requirements completeness using the method of complementary antecedents (Carson 1995). The process allows one to demonstrate that requirements are complete for the associated mission (problem) statement(s).

INTRODUCTION

Much has been written regarding the development of "good" requirements (e.g., Kar and Bailey 1996). "Complete" requirements have been somewhat more difficult to define beyond simply ensuring that all higher-level requirements are allocated (EIA-632, 3.4.3.1). (Mar 1994) identifies five characteristics for establishing requirements completeness: "(1) all categories of requirements (functional, performance, design constraints, interfaces) ... are addressed, (2) .... all responsibilities allocated from higher-level specifications are recognized, (3) .... all scenarios and states are recognized and not [sic] To-Be-Determined (TBD'S) exist..., (4) all assumptions are documented...., and (5) .... the requirements conform to appropriate [sic], and [have] unique or ambiguous terminology defined." Among these the most difficult to ensure are (1), that "all categories .... are addressed", and (3), that "all scenarios and states are recognized." How do we ensure that such is the case?

The importance of ensuring "requirements completeness" cannot be over-emphasized. The scope of activity required to "complete" the requirements affects the staffing and budget for the system requirements development process and for verification (test and analysis) of the completed system against the requirements. Excess requirements unnecessarily drive up costs and may prevent achieving an optimum system solution. Insufficient requirements can cause substantial rework, because requirements are too often "derived" during integration and test, when the cost of implementation is higher.

This paper develops a process for determining requirements completeness, beginning from the mission statement for which the requirements are to be developed. It is shown that through this process it can be proved that the requirements are complete with respect to the associated mission statement. One element of "requirements completeness" is a test to determine that each requirement, individually, is complete (i.e., states a requirement in terms of a function to be performed ("what"), the measure of success (performance requirement or "how well"), the conditions under which the function is to be performed, and any constraints which affect the possible solution, such as an interface constraint). We limit our discussion to the collection of requirements so derived, and refer the reader to other works regarding "good requirements" statements (e.g., Kar and Bailey 1996).

The requirements completeness problem is broken into three steps. The first step involves defining the problem to be solved (for which the requirements are to be developed). The second step involves producing the requirements according to a process which will deterministically ensure "complete" requirements. Finally, the requirements are verified for completeness. In summary, we "systems engineer" the requirements development process to ensure a solution for "requirements completeness".

STEP ONE: DEFINE THE PROBLEM

In assessing requirements completeness one must first establish the problem: complete with respect to what measure? In other words, for what problem is the requirements set complete? (Mar 1997) emphasizes the need to develop a "shared vision" of the system
regarding the "type of design problem which we are trying to understand or fix." If our goal is to produce a system that satisfies a mission, we must first define and validate the mission. EIA-632 defines requirements completeness in two stages, beginning from the mission or "stakeholder requirements" (EIA-632, 3.4.3.1): "Stakeholder requirements are complete if they capture essentials of what the stakeholders require....System technical requirements are complete if they reflect all the considerations contained in the validated stakeholder requirements." Whenever a collection of requirements (specifications) is derived from one level to another, satisfaction of the "mission statement" is primarily a function of compliance with a higher-level specification. "Completeness" is then a function of ensuring that all allocated functions and requirements are captured at the next level. In contrast, "stakeholder requirements are validated by agreement between the developer and the identified stakeholders." (EIA-632, 3.4.3.1). How then, do we capture the mission statement and its associated operations and maintenance concepts? Equivalently, how do we identify and validate the stakeholder requirements? Figure 1 shows a simple process for defining the problem. Define missions, phases
Identify interfaces
captured as part of the "problem statement". This allows identifying the interfaces for interaction between the system and the rest of the universe1. EIA-632 (5.3.1) identifies the tasks required to complete the stakeholder requirements. Such methods as quality function deployment (QFD), prototyping (which is especially powerful for user interfaces), surveys, etc. are available to derive stakeholder requirements, whether the stakeholder is the customer (acquirer), user, maintainer, disposer, or just a "neighbor" (Boehm, et al. 1997; Bahill and Dean 1997). Each of these stakeholders interacts with the proposed system in specific ways as can be defined in the development, operations, maintenance, and disposal concepts (the “scenarios” for each of these phases). The key elements of such definitions are (1) identification of the specific interfaces associated with the interaction (box 2 in figure 1), and (2) the quantification of the interface properties (e.g., for the operations and maintenance) for all phases of the system life cycle (box 3 in figure 1). Figure 2 displays a generalized context diagram. Stakeholder/ system 1
Quantify interfaces
System
Stakeholder/ system 2 Interface description (for each stakeholder or system)
Figure 1. Step One: Define the problem. Several methods are available, all of which are represented by the notion of "involving the stakeholders", or, "deriving stakeholder requirements". The importance of the "stakeholder" is emphasized in the ballot draft of EIA-632 as being the source(s) of information required to develop, produce, operate, maintain, and eventually dispose of the system. Among these various phases, the "mission" statement (what the system is supposed to do as its essential nature) generally applies only to the "operations concept". However, the remaining phases (or states, e.g., development, production, maintenance, retirement) have their own "missions" and associated stakeholders, which may differ from those of the "operations" state. How the system is expected to interact with its environments and users, and any additional regulatory or other design constraints which define what the system must "be" in addition to what the system must do ("constraints" per EIA-632, 3.4.3.2) must all be
Stakeholder/ system 3 Stakeholder/ system 4
Figure 2. System context diagram indicating 4 stakeholders or associate systems. A different context diagram may be required for each lifecycle phase because of different stakeholders or interfaces.
1
(Cutler 1997) notes that stakeholders include all those who are or think they are affected by the system being conceived, which is somewhat broader than the EIA-632 definition (which limits stakeholders to only those "affected by" the system, EIA-632, 3.4.2). Only those truly affected by or who can affect the system will remain on the context diagram.
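As a concrete illustration of what step one produces, the sketch below captures a context diagram as data: each life-cycle phase lists the interfaces through which stakeholders or associate systems interact with the system, together with the quantified interface parameters. This is only an illustrative representation under assumed names (the Interface and PhaseContext types and the example entries are hypothetical); the paper does not prescribe any particular notation or tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Interface:
    """One quantified interface between the system and an external entity."""
    name: str                    # e.g., "command/control radio link" (hypothetical)
    stakeholder: str             # external entity on the far side of the interface
    parameters: Dict[str, str]   # parameter name -> quantified range or value

@dataclass
class PhaseContext:
    """Context diagram for one life-cycle phase (cf. figure 2)."""
    phase: str
    interfaces: List[Interface] = field(default_factory=list)

# Hypothetical entries for an "operations" phase.
operations = PhaseContext(
    phase="operations",
    interfaces=[
        Interface("command/control radio link", "external command and control",
                  {"band": "UHF", "maximum latency": "2 s"}),
        Interface("hull/water interface", "ocean environment",
                  {"speed": "0-30 kt", "sea state": "0-6"}),
    ],
)

# Step-one exit criterion: every stakeholder interface is identified and quantified.
for context in (operations,):
    for interface in context.interfaces:
        assert interface.parameters, f"interface '{interface.name}' is not quantified"
```

Represented this way, the exit test for step one reduces to checking that every interface in every life-cycle phase carries a quantified description.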
For example, a ship has operational interfaces with external command and control, must defend against enemy fire (e.g., incoming missiles), and may also have a mission to launch attacking missiles, all the while operating on the ocean. Key interfaces in this example (among others) are (1) radio signals for command, control, intelligence, detection (e.g., radar), and other operations, (2) ship/water interfaces for such parameters as speed, maneuverability, and damage levels which affect flotation, and (3) physical interfaces for incoming and outgoing missiles (and there are clearly many others associated with different elements of the specific mission(s) of the ship). Each of these can be quantified in terms of the operational and maintenance requirements. Applicable interfaces during a developmental phase will include test facilities and operations, manufacturing facilities and operations, and the people performing these functions. Additional regulatory entities may also be involved.

Example. The following example of a functional performance requirement may serve to highlight the notion of reducing all functional performance and mission requirements to interface requirements. Suppose we identify an item whose primary mission is to provide power (for simplification we will ignore analysis of "conditions" in this example). Its "functional" requirement is therefore "provide power". Do we know enough to begin design? No, we still need a quantification of the performance associated with the function. Suppose we then write the requirement, "provide power of 35 Watts". Have we defined the requirement sufficiently to begin design? No, because "power of 35 Watts" may take many forms, from conducted DC at different voltages to infrared to radio-frequency to "light". Suppose we write, "provide power of 35 Watts conducted DC at 6V." Have we defined the requirement sufficiently to begin design? The answer here is, "maybe". We have clearly defined the performance requirement with its functional constraint, and we have defined it with sufficient precision to differentiate "conducted DC at 6V" from "radiated infrared at 100 µm (± 1 µm) into 290 K space". However, if there already exists a description of the physical constraint (such as an interface control drawing which specifies that this interface requires specific pins in a particular connector type), then the requirement is incomplete until such specification is incorporated either directly or by reference (e.g., "in accordance with… " the particular interface control document or part specification). If the design is begun prior to identifying the constraint, some rework may be required.

This example demonstrates that functional, mission, and performance requirements cannot be complete until and unless they include a statement about the interface point at which the performance requirement will be measured. And it is often at the point of defining the interface that the true functional and performance requirement (i.e., the "mission") is elicited (note the difference in the example above between a "power supply" and a "heater"). The focus on functional requirements should restrain our use of solution-space language ("power supply", "heater") so that we do not bias the design process and we avoid the appearance of specifying a solution as opposed to stating a requirement.

Summary. Once the key interfaces are identified and quantified, the requirements associated with them can be defined and further analyzed to develop completeness. The test for completeness of the problem-statement step is that all stakeholder interfaces are identified and quantified for all applicable development, assembly, operations, maintenance, and disposal phases and related operating modes. This takes the form of (1) a series of system context diagrams (one for each unique life-cycle phase, such as figure 2) which identify the interfaces and the associated external entities, plus (2) textual or diagrammatic descriptions of the intended behaviors (i.e., missions or "concept of operations") of the system during each life-cycle phase. The "problem" to be solved is then finding a solution (a detailed system definition in terms of requirements) which satisfies both the context diagrams and the defined behaviors. Put another way, a "statement of requirements" is a statement of the problem to be solved, however it may have been derived from consideration of "mission" and stakeholder requirements.

Can the problem statement so derived be "proved" to be correct and complete? Or, stated another way, can it be proved that the list of interfaces and the associated operating concepts in each phase are complete and correct? Or, stated still another way, are the stakeholders bound by the derived problem statement such that they can never "change their mind"? The answer is, "No". The process of identifying and quantifying certain interfaces as relevant to the problem statement necessarily excludes certain interfaces as being not relevant. This is no different
from the process of excluding certain kinds of physical phenomena in the development of a hypothesis for scientific evaluation (e.g., ignoring the intensity of moonlight on the acceleration or braking operations of an automobile). A decision is made by the scientist or requirements analyst that certain effects or types of interfaces have no material bearing on the problem to be solved. Excluding these effects always carries a risk that important items may be omitted through failure to understand what can affect the system being analyzed. And stakeholders always have the option of changing their minds!

The only way, therefore, to "prove" completeness of the problem statement is to include all possible interfaces and operations in the initial problem definition, since there is no a priori basis for initially excluding any conceivable interface until it is demonstrated to have no effect. In other words, one must "prove" the exclusion of every conceivable interface in order to "prove" that the problem statement is "complete". Clearly, this is non-value-added, although rigorously "complete" (and it fails to address the problem of changing stakeholder views). The initial analysis process, using the methods described above, must limit the scope of the problem to those items that are deemed "relevant"; hence the focus on "stakeholder" identification and interaction during this problem-definition stage of the requirements development process. Thus, the problem definition stage is the most critical of the three steps, because there are no independent means to verify the completeness or correctness of the problem statement besides stakeholder validation (and the laws of physics). Validation is required because, as (Rechtin and Maier 1997) point out, "Don't assume that the original statement of the problem is necessarily the best, or even the right one" (emphasis added).

STEP TWO: DEVELOP COMPLETE REQUIREMENTS

(Kar and Bailey 1996) recognized the "difficult problem of identifying requirements which are necessary but are missing from the set" of defined requirements. Several suggestions were made which enhance the probability of achieving completeness, but they do not verifiably ensure that completeness is achieved. Other authors have made similar observations (Grady 1993). (Carson 1996) proposed a method for ensuring requirements completeness based on defining the required system behavior under all possible conditions of the system or subsystem interfaces. The key element in the process is identifying and quantifying the interface conditions (step one above), and then grouping them into one or more behaviors for which a particular response is defined in the requirements specification. Although the focus of that analysis was the "anomalous" conditions which tend to cause unexpected behavior during integration and test (because of unanticipated or undocumented conditions), the model clearly applies to the general problem of requirements completeness. He asserted that the process defines sufficient conditions to ensure completeness of requirements. Let us examine the process in more detail (figure 3).

Figure 3. Step Two: Develop complete requirements. (The figure shows three activities: define behavior for all interface conditions in all phases; ensure all interface conditions are analyzed; validate conditions and behaviors with stakeholders.)

In a "typical" requirements analysis, the analyst determines the required behavior for specific input conditions based either on the flow-down of higher-level requirements or on validated stakeholder requirements (reference EIA-632, 5.3.1). What is often missed are the "failure path" (as opposed to the "go path") requirements: what to do when the anticipated interface condition is not realized, or is realized at the wrong time (the so-called "what if" conditions).

Defining responses to the interface conditions. The process model of figure 3 identifies the necessity of defining responses for all possible conditions of the interfaces. Any time a condition can be posed for which the response is not defined, the requirement set is incomplete (because the required response is indeterminate). If the system or detailed design provides a response to the unanticipated condition, a "surprise" may appear during integration and test. Such surprises evidence incomplete requirements. The process model next defines a method for identifying the complete set of interface conditions. The key (step three below) is that the union of the identified set of conditions must constitute the universal set of all possible conditions (and in the case of software systems, it is only necessary that a condition be mathematically possible, not necessarily physically possible). Groupings of conditions with
identical required consequences can be combined into a single requirement, albeit one which may have a potentially complicated antecedent. (Carson 1996) proposed the concept of "functional failure" to include any anomaly at the interfaces, including conditions of non-compliance with an interface control drawing (ICD) or out-of-range environments, all of which are treated as "anomalies" which must be analyzed. The language of functional failure is used to elicit the conditions for which requirements are often missing, the "what if" conditions. Using the concept of complementary antecedents (Carson 1995), at least one condition can be defined in terms of a "failure to experience" a specific condition or set of conditions. The associated response to this new set of conditions can be defined in the requirements as a prescribed response (functional and performance requirement).

Sufficiency of interface analysis. It may be argued that limiting the examination of conditions to the interface of the system or subsystem may omit important conditions. Equivalently, this argument asserts that there may exist conditions of the subsystem that must be analyzed in the system requirements analysis (SRA) but that do not manifest themselves at the boundary. (This is in contrast to such activities as actual failure-mode analysis, which must examine internal failures as well as interface failures. Some of the former may indeed have no manifestation at the subsystem boundary, i.e., the "next-level" effect is null.) If we recall the scope of our SRA and problem statement from step one, it is apparent that the SRA is actually performed at the boundary of the system or subsystem itself, by specifying functions and performance requirements for specific conditions at the interface of the subsystem, generally without regard to any internal design. Thus, examining behavior only at the subsystem boundary is entirely consistent with the SRA. Limiting the scope of the analysis to the subsystem boundary is equivalent to saying that there exist no conditions within the scope of the SRA which fail to be manifested at the system or subsystem boundary. Therefore, analysis of the interface conditions over all parameter space is sufficient to establish completeness of the requirements in the system requirements analysis. This means that "mission analysis", "performance analysis", "scenario analysis", and other forms of requirements analysis are reducible to interface analysis, based on the scope of the SRA. Any requirement not reducible to an interface requirement should be considered suspect and may evidence imposed design (which is appropriate in the "design and construction" section of a specification).

Even such seemingly non-interface requirements as "reliability" can, upon closer examination, be stated in terms of interface conditions. The true essence of "reliability" is that the inputs and outputs fail to perform as required, on average, no more often than some specific number (mean time between failures), or with no more than a certain probability of failure per unit time (e.g., failure rate per million hours). This defines a set of conditions under which the applicable requirements are not performed. And it again highlights the absolute necessity of defining the required behavior(s) even in the presence of failure.

Constraint failures (a failure to comply with a non-tradable requirement per EIA-632, e.g., a failure to comply with a regulation) may or may not have allowable conditions associated with them, even though they must always be complied with. If no such conditions are possible, the interaction of the system with the stakeholder associated with the constraint is captured with the single condition statement, "always". Thus, no unique behavior need be identified for "failure" of this type of interface, because "failure" is not possible. However, a constraint may indeed be violated during certain phases or operating states (a response is not "allowable" but is physically "possible", such as an emissions limit). The required system response to such a condition may well be "shut down". For such a stakeholder constraint, the requirements statements are incomplete until this condition is captured. In other cases, such as materials requirements (e.g., "The system shall be constructed entirely of wood products" [2]), the point of interface is either the stakeholder decision to impose the materials requirement, or a technical requirement derived during the requirements development phase. In either case, the system must, by its very nature, continuously satisfy the requirement; the only way for the "condition" to change is for the system to no longer be that which it was developed and verified to be, short of destruction. Each of the constraints must be examined for possible conditions of "failure to comply", and an appropriate response defined.
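To make the complementary-antecedent idea concrete, the sketch below pairs a nominal requirement with its "or else" companion, loosely reusing the 6V power example from step one. The condition boundaries (an assumed valid range of 5.5 to 6.5 V) and the anomaly response are hypothetical values chosen purely for illustration; they are not taken from the paper.

```python
# Illustrative sketch (hypothetical values): a nominal requirement and its
# companion requirement covering the complementary ("what if") condition.

def nominal_condition(input_voltage: float) -> bool:
    """Antecedent A: the input is within an assumed valid range of 5.5-6.5 V."""
    return 5.5 <= input_voltage <= 6.5

def required_behavior(input_voltage: float) -> str:
    if nominal_condition(input_voltage):
        return "provide 35 W conducted DC at 6 V"               # consequence F
    # Companion requirement: the complement ~A also has a prescribed response,
    # so no interface condition is left with indeterminate behavior.
    return "inhibit output and report out-of-range input"        # consequence F'

assert required_behavior(6.0) == "provide 35 W conducted DC at 6 V"
assert required_behavior(40.0) == "inhibit output and report out-of-range input"
```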
[2] For example, the "Trestle" electromagnetic pulse tester at the former Air Force Weapons Laboratory (now the Phillips Laboratory), Kirtland AFB, New Mexico, contained no metal fasteners to perturb the electromagnetic fields.
Analysis of interface conditions. For each interface the set of all conditions must be defined. This set will necessarily include (a) those conditions defined by the source requirements or derived as part of the requirements development process of allocating source requirements, and (b) those conditions which arise from "functional failure" of the interface. The union of the two sets must constitute the universal set of possible conditions at each interface (Carson 1995). The defined behaviors (requirements) associated with the functional failures constitute "companion functions" to the requirements derived from the source requirements. The analysis of functional failure (including environments, physical interfaces, and constraints) covers each of these possible "functional failure modes", and should therefore be much less sensitive to design implementation, whether the failure modes arise from hardware, software, requirements, or procedural faults. To determine the specific anomalies for a given interface, one must examine all the parameters associated with the interface, including such things as pressure, timing, voltage, and impedance (depending on the interface type). Each parameter must have a description of a valid range or ranges. If a parameter P1 is examined over its five separate ranges depicted in figure 4, then treatment of each of the ranges (either separately or in groups) ensures that there do not exist any conditions for P1 that have not been examined. In keeping with (Carson 1995), this requires that at least one requirement constitute an "or else" condition, such as "|V| > Vmax", that extends the parameter range beyond the maximum physically meaningful value. This ensures that the analysis of P1 with respect to conditions is complete. For the case of concurrent multiple parameters (P1 and P2 in figure 4, a more typical case, such as voltage and time), the analyst must initially assume that the parameters can vary independently.
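The coverage argument can be made mechanical. The sketch below is only an illustration under assumed interval semantics and hypothetical range values (it is not taken from the paper): each parameter's mathematically possible range is partitioned into labeled sub-ranges, including the unbounded "or else" tails, the partition is checked for gaps, and every combination of P1 and P2 ranges is checked to have some defined behavior.

```python
import math

# Hypothetical partitions of two interface parameters into contiguous ranges:
# (lower bound, upper bound, validity label). The unbounded tails are the
# "or else" ranges that extend each parameter over its whole mathematical range.
P1_RANGES = [(-math.inf, 0.0, "invalid"), (0.0, 5.5, "invalid"),
             (5.5, 6.5, "valid"), (6.5, 50.0, "invalid"), (50.0, math.inf, "invalid")]
P2_RANGES = [(-math.inf, 0.0, "invalid"), (0.0, 10.0, "valid"), (10.0, math.inf, "invalid")]

def covers_all_values(ranges):
    """True if the ranges are contiguous and span (-inf, +inf), leaving no gap."""
    return (ranges[0][0] == -math.inf and ranges[-1][1] == math.inf and
            all(ranges[i][1] == ranges[i + 1][0] for i in range(len(ranges) - 1)))

assert covers_all_values(P1_RANGES) and covers_all_values(P2_RANGES)

# Every combination of P1 and P2 ranges must have some defined behavior
# (cf. figure 4): only valid x valid is the nominal case; every other
# combination needs a prescribed "functional failure" response.
def defined_behavior(label1, label2):
    if (label1, label2) == ("valid", "valid"):
        return "nominal response"
    return "prescribed anomaly response"

for _, _, label1 in P1_RANGES:
    for _, _, label2 in P2_RANGES:
        assert defined_behavior(label1, label2) is not None
```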
Figure 4. Two-dimensional plot of valid parameter space for two parameters. Only the intersections of validity of P1 and P2 constitute valid conditions (after Carson 1996). (The figure shows parameter P1 divided along the horizontal axis, and parameter P2 along the vertical axis, into alternating valid and invalid ranges.)

For this case, the regions of validity for the two parameters are only where the valid ranges overlap. All other combinations of parameter ranges are invalid because one or both parameters are invalid. The measure of completeness is that the parameter space represented in figure 4 is fully covered; that is, every unique combination of parameters has a defined (but not necessarily unique) behavior in the system requirements analysis. Following the set theory model of (Carson 1995), the analysis of antecedents is complete so long as the union of the parameter conditions is the universal set.

"Fuzzy" antecedents. It may be the case that the line separating conditions is not sharply drawn as in figure 4, but has "fuzzy" boundaries, representing tolerances of interface conditions (a common hardware condition also applicable to signal processing). The required behavior may still be definable using a combination of several methods. First, some consideration of history in the condition may lead to differentiation in the required behavior, such as a change based on upward-going vs. downward-going thresholds. Second, either of two or more possible outcomes may be considered acceptable as determined by the subsystem being analyzed. The resulting behavior could be predicted by a statistical analysis of the conditions, the requirements, and the design (Sanchez 1996). At a minimum, all the possible outcomes are defined.

Stakeholder validation of system response. For each unique set of input conditions a behavior (requirement) must be prescribed. Within this set of input conditions will also reside the nominal input conditions associated with satisfying the mission of the system and their associated performance requirements (the conditions traceable from upper-level requirements). In addition, one or more responses to other conditions must be defined. The definition of the required behaviors (functions and performance requirements) remains within the normal scope of the SRA. As part of validating the system technical requirements there is a feedback loop to the stakeholder (EIA-632, 5.3.1). Similarly, during the validation of detailed technical requirements there is a feedback loop to the system technical requirements. These feedback loops allow the validation of behaviors derived as a consequence of considering all possible conditions of the interfaces. Note that these new
conditions will generally not be traceable directly either from stakeholder requirements or from system technical requirements, since the conditions themselves are derived during the requirements development activity. These will include responses to single and perhaps multiple failures, out-of-sequence inputs, out-of-range environments, etc. Step two is then complete when all interface conditions over all parameter space have been analyzed and captured in the system requirements analysis, and the derived responses have been validated by the stakeholder(s). Such responses to various conditions constitute "scenarios" that can be examined by the various stakeholders. It may also be that this process step identifies interfaces or behaviors that are missing from the problem-definition activity (step one). In this case, there must be an iteration back through step one before step two can be completed.

Scope of the analysis task. The requirement to analyze every interface under all possible conditions can cause the requirements analysis task to become very large, even though it ensures "completeness." For cost reasons it may be important to deliberately limit the scope of certain analyses, with the attendant risk of incomplete system requirements definition. This simply says that "complete requirements" may be an expensive goal, but it in no way invalidates the methodology. As in all other engineering problems, the skill of the analyst has a direct bearing on the cost and effectiveness of applying a particular process. Thus, the question becomes one of optimizing the process for a specific application to maximize effectiveness with a "tolerable" degree of risk. The "process" itself is a standard which can be tailored for specific applications; this is a normal management function for the application of any process. Some degree of prioritization is therefore indicated in order to maximize the "return" on the analysis investment (which is entirely consistent with the problem of overall justification of systems engineering per se!). Again, the skill and experience of those defining the prioritization and selecting areas for exclusion will directly relate to the level of increased risk experienced during the development process.

STEP THREE: VERIFY REQUIREMENTS COMPLETENESS

(Carson 1995) proposed a test for requirements completeness based on defining the required system
behavior under all conditions, using the notion of complementary sets for the requirements antecedents (the conditions under which the requirement is to be performed). Although the focus of that analysis was the "anomalous" conditions which tend to cause unexpected behavior during integration and test, the model clearly applies to the general problem of requirements completeness. We use this model as part of the final verification of requirements completeness. In addition, EIA-632 (3.4.3.1) requires that all stakeholder and system technical requirements are covered by the next-lower-level requirements set. These two elements are captured in figure 5.

Figure 5. Step Three: Verify requirements completeness. (The figure shows two activities: verify the completeness of the requirements antecedents; verify the responses per the stakeholder requirements.)

Completeness of requirements antecedents. Figure 4 captures the essence of ensuring the completeness of requirements antecedents: the union of all parameter conditions at the system or subsystem interface is the universal set of all possible conditions. That is, if each parameter associated with an interface is examined over its entire mathematical range and all such parameters are examined, then completeness of the requirements antecedents is assured. This occurs because the set of conditions (the union of all parameter ranges per figure 4) constitutes a complete set (i.e., there are no conditions that cannot be subsumed under one of the identified categories). Mathematically, this can be expressed as follows (Carson 1995).

Each function within a functional analysis can be modeled as a logical construction with an antecedent and a consequence:

    If A then F, or, A → F.

Here, A defines the antecedent ("under what conditions"), while F defines the function and its associated performance requirements (and any associated constraints). The interpretation is strictly made as follows: when A is true, F is also true; or, F follows A. Whenever a given antecedent is not realized (i.e., A is not true), at least one other consequence must be defined, such as

    If ~A then F′, or, ~A → F′.

For the general case of n antecedents whose union is the universal set,

    Aᵢ → Fᵢ, where A₁ ∪ A₂ ∪ … ∪ Aₙ = U, the universal set.

One or more Fᵢ may be identical (i.e., have identical consequences). With this discussion we can now state the generalized test for the completeness of the system requirements analysis: no requirements set with unique antecedents Aᵢ (i = 1...n) is complete unless the union of all antecedents satisfies A₁ ∪ A₂ ∪ … ∪ Aₙ = U, the universal set of all possible conditions (antecedents).
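A minimal sketch of this test is given below (hypothetical data; antecedents are modeled as finite sets of enumerated interface conditions rather than continuous ranges). It simply forms the union of all antecedents, compares it with the universal set of conditions, and reports any conditions for which no behavior has been defined.

```python
# Minimal illustrative sketch of the completeness test; all condition names
# and requirements are hypothetical.
UNIVERSAL_SET = {"nominal input", "input out of range", "input missing",
                 "input out of sequence", "out-of-range environment"}

# Each requirement: (antecedent conditions Ai, consequence Fi).
requirements = [
    ({"nominal input"}, "perform mission function at specified performance"),
    ({"input out of range", "input missing"}, "inhibit output and report anomaly"),
]

covered = set().union(*(antecedent for antecedent, _ in requirements))
uncovered = UNIVERSAL_SET - covered

# The requirements set is complete only if A1 ∪ A2 ∪ ... ∪ An = U.
if uncovered:
    print("Incomplete requirements set; no behavior defined for:", sorted(uncovered))
else:
    print("Antecedents cover the universal set of conditions.")
```

Run on the hypothetical data above, the check reports the out-of-sequence and out-of-range-environment conditions as uncovered, which is exactly the kind of missing "what if" requirement the test is intended to expose.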
This rule applies to the total requirements set. It can also be used to test smaller groups of requirements for "associated" requirements whose antecedents constitute the complementary conditions (~A).

Responses per stakeholder requirements. Once the set of antecedents has been verified to be complete, one can verify that the associated responses are consistent with the validated stakeholder and technical requirements from the prior step. Any variances require an iteration with the prior step, and possibly with the first step. When these two tasks are completed and no variances are found, the set of requirements can be said, and demonstrated, to be complete with respect to the original problem statement.

CONCLUSIONS

A three-step process has been developed to ensure requirements completeness. The critical step is the first, the problem definition step. So long as the problem statement remains stable, it is demonstrated that the subsequent steps ensure a complete and correct set of requirements with respect to the problem statement. Whenever the problem statement changes (e.g., changes to stakeholder requirements, which is more often the case), the requirements analysis process must be exercised to evaluate the consequences to the system requirements. Following the three-step process will confidently and verifiably demonstrate compliance with the new problem statement.

ACKNOWLEDGMENTS/DISCLAIMER

The author wishes to acknowledge the INCOSE Requirements Working Group and, especially, Pradip Kar, for stimulating this paper and for their insightful comments. The comments of the INCOSE reviewers are also appreciated. The opinions expressed herein are solely those of the author and not necessarily related to those of INCOSE, the INCOSE Requirements Working Group, or of The Boeing Company or any of its subsidiaries or operating divisions.

REFERENCES

EIA-632, "Processes for Engineering a System," Ballot Draft Version 0.9, July 5, 1997.

Boehm, Barry, Ivy Hooks, Stephanie White, and Regina Gonzales, "PANEL: The Requirements Elicitation Process: The Genesis of Systems", INCOSE 1997 International Symposium Panel. (http://www.incose.org/workgrps/rwg/97panel/97panel.html)

Bahill, A. Terry, and Frank F. Dean, "The Requirements Discovery Process", Proceedings of INCOSE, 1997.

Carson, Ronald S., "A Set Theory Model for Anomaly Handling in System Requirements Analysis", Proceedings of NCOSE, 1995.

Carson, Ronald S., "Designing for Failure: Anomaly Identification and Treatment in System Requirements Analysis", Proceedings of INCOSE, 1996.

Cutler, William H., "Systems Process for Public Policy Application", Proceedings of INCOSE, 1997.

Grady, Jeffrey O., System Requirements Analysis, McGraw-Hill, New York, 1993, page 411.

Kar, Pradip, and Michelle Bailey, "Characteristics of Good Requirements", Proceedings of INCOSE, Volume II, 1996.

Mar, Brian W., "Requirements for Development of Software Requirements", Proceedings of NCOSE, 1994.

Mar, Brian W., "Back to Basics Again: A Scientific Definition of Systems Engineering", Proceedings of INCOSE, 1997.

Rechtin, Eberhardt, and Mark W. Maier, The Art of Systems Architecting, CRC Press, 1997, page 26.

Sanchez, James, "Application of Statistical Processes on a Set Theory Model for Anomaly Handling in System Requirements Analysis", Proceedings of INCOSE, 1996.
BIOGRAPHY

Dr. Carson is a Systems Engineer in the Electronic Products organization of Boeing Information, Space & Defense Systems. He has been responsible for the system design and performance analysis of built-in test for the Boeing 777 Cabin Management System, and for the Boeing Phased Array Communication Antenna System for live, satellite-broadcast television. He received the "Best Paper" Award at the 1995 Symposium of the International Council on Systems Engineering for his work on "A Set Theory Model for Anomaly Handling in System Requirements". He is currently responsible for phased-array system design for various applications, including the Teledesic satellite transmit antennas.