Baselining a Domain-Specific Software Development Process R. L. Feldmann, J. Münch, S. Queins, S. Vorwieger, G. Zimmermann SFB 501 TR-02/99
Baselining a Domain-Specific Software Development Process
Raimund L. Feldmann, Jürgen Münch, Stefan Queins, Stefan Vorwieger, Gerhard Zimmermann {feldmann, muench, queins, vorwiege, zimmerma}@informatik.uni-kl.de
Technical Report 02/1999
Sonderforschungsbereich 501 Fachbereich Informatik / Department of Computer Science Universität Kaiserslautern / University of Kaiserslautern Postfach 3049 67653 Kaiserslautern Germany
Abstract
This report documents the results of a case study investigating a new requirements analysis process of the Sonderforschungsbereich 501 (SFB 501) at the University of Kaiserslautern. An "intelligent" light control system for one floor of a university building served as the case. The case study was performed by a team between May and October 1998. Its main objectives were to
• test the efficiency of a building-automation-specific analysis method in a group setting, based on a new hierarchical model architecture, on SDL as description technique, and on prototyping,
• test a configuration management system for the introduced analysis method in a group development environment,
• use manual data collection procedures,
• streamline and characterize the requirements analysis process (e.g., in terms of effort, defects, and calendar time),
• create an executable system requirements model as input for testing system design processes in future developments,
• create a basis for reusable artifacts,
• create a baseline for further case studies.
Results documented in this report include a short description of the analysis method, the experimental goals, the project plan for the case study, the empirical results with respect to the above objectives, and possible future experiments.
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . 1
1.1 The Case Study . . . . . . . . . . . . . . . . . . 1
1.2 Experiments . . . . . . . . . . . . . . . . . . . . 2
1.3 The Experimental Environment . . . . . . . . . . . 3
1.4 Experience Base and Characterization . . . . . . . 4
1.5 Existing Experience . . . . . . . . . . . . . . . . 5
1.6 The Team Process . . . . . . . . . . . . . . . . . 6
2 Experimental Goals . . . . . . . . . . . . . . . . . 7
3 Project Plan . . . . . . . . . . . . . . . . . . . . 11
3.1 Formal Process Description . . . . . . . . . . . . 13
3.2 The Requirements Analysis Process . . . . . . . . . 14
3.2.1 The Products and Steps of the Requirements Analysis Phase . . . 15
3.2.2 Guidelines and Templates . . . . . . . . . . . . 17
3.3 Product Management . . . . . . . . . . . . . . . . 20
3.3.1 Terms and Definitions . . . . . . . . . . . . . . 20
3.3.2 Configuration Item Identification . . . . . . . . 21
3.3.3 Baseline and Configuration Definition . . . . . . 22
3.3.4 Tool Invocation . . . . . . . . . . . . . . . . . 24
3.3.5 The SCM System . . . . . . . . . . . . . . . . . 25
3.3.6 Training . . . . . . . . . . . . . . . . . . . . 25
3.4 Data Collection Procedures . . . . . . . . . . . . 26
4 Project Execution . . . . . . . . . . . . . . . . . . 29
4.1 Trace . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 Raw Data . . . . . . . . . . . . . . . . . . . . . 29
4.3 Qualitative Experience . . . . . . . . . . . . . . 30
4.4 System Documentation . . . . . . . . . . . . . . . 31
5 Empirical Results . . . . . . . . . . . . . . . . . . 37
5.1 Analysis Concerning Calendar Time (Goal 1) . . . . 37
5.1.1 Results . . . . . . . . . . . . . . . . . . . . . 37
5.1.2 Analysis and Interpretation . . . . . . . . . . . 38
5.1.3 Consequences . . . . . . . . . . . . . . . . . . 38
5.2 Analysis Concerning Effort (Goal 2) . . . . . . . . 38
5.2.1 Results . . . . . . . . . . . . . . . . . . . . . 38
5.2.2 Analysis and Interpretation . . . . . . . . . . . 39
5.2.3 Consequences . . . . . . . . . . . . . . . . . . 40
5.3 Analysis Concerning Defect Detection (Goal 3) . . . 40
5.3.1 Results . . . . . . . . . . . . . . . . . . . . . 40
5.3.2 Analysis and Interpretation . . . . . . . . . . . 41
5.3.3 Consequences . . . . . . . . . . . . . . . . . . 41
5.4 Analysis Concerning Defect Types . . . . . . . . . 42
5.4.1 Results . . . . . . . . . . . . . . . . . . . . . 42
5.4.2 Analysis and Interpretation . . . . . . . . . . . 42
5.4.3 Consequences . . . . . . . . . . . . . . . . . . 43
5.5 Further Quantitative Analyses . . . . . . . . . . . 43
5.6 Essential Qualitative Experience . . . . . . . . . 45
6 Experience Base Update . . . . . . . . . . . . . . . 47
6.1 Changes of Existing Experience . . . . . . . . . . 47
6.2 The Team 3 Documentation in the Experiment-Specific Section . . . 47
6.3 Newly Gained Experience in the Organization-Wide Section . . . 48
7 Outlook . . . . . . . . . . . . . . . . . . . . . . . 51
7.1 Requirements Analysis Method . . . . . . . . . . . 51
7.2 Development Platform . . . . . . . . . . . . . . . 51
7.3 Possible Future Experiments . . . . . . . . . . . . 52
7.4 Reusable Artifacts . . . . . . . . . . . . . . . . 53
8 Acknowledgment . . . . . . . . . . . . . . . . . . . 55
9 References . . . . . . . . . . . . . . . . . . . . . 57
Appendix A Development Products . . . . . . . . . . . . 61
A.1 Problemdescription . . . . . . . . . . . . . . . . 61
A.2 Problem-Addendum . . . . . . . . . . . . . . . . . 64
A.3 Buildingdescription . . . . . . . . . . . . . . . . 64
A.4 Dictionary of Terms . . . . . . . . . . . . . . . . 70
A.5 ObjectStructure . . . . . . . . . . . . . . . . . . 72
Appendix B GQM Plans . . . . . . . . . . . . . . . . . 73
B.1 Characterization of Calendar Time . . . . . . . . . 73
B.2 Characterization of Effort . . . . . . . . . . . . 74
B.3 Characterization of Defects . . . . . . . . . . . . 77
Appendix C MVP-L Project Plan . . . . . . . . . . . . . 83
Appendix D Questionnaire: ‘Defects’ . . . . . . . . . . 89
Appendix E Qualitative Experience . . . . . . . . . . . 93
E.1 Requirements Analysis Method (Goal 4) . . . . . . . 93
E.2 Development Platform (Goal 5) . . . . . . . . . . . 96
E.3 Instrumentation of the Experiment (Goal 6) . . . . 98
1 Introduction

1.1 The Case Study
This report describes a case study that was conducted over a period of six months in the Sonderforschungsbereich 501: "Development of Large Systems with Generic Methods" (SFB 501). The SFB 501 is structured into projects numbered A1, A2, ..., D2; projects are referenced by these numbers throughout this report. One goal common to all projects is the improvement of the development process for large systems. The goal of this case study was to test and improve a method, proposed by D1, for the analysis phase in the development of reactive systems. Process and experimental support was provided by A1 and B1. A team of developers and domain experts was necessary to handle a sufficiently complex case. Therefore, B4, B5, C1, D1, and D2 provided the necessary manpower to form a team of eight persons with D1 in charge, in addition to the support mentioned above. The team members are named in Chapter 8. This report is a summary and also an acknowledgement of the excellent work done by this group.

The SFB 501 has chosen building automation as an application domain: it is sufficiently complex and versatile, and it is currently developing into an interesting market with the goals of improving comfort and maintainability and of saving energy in buildings. Building automation systems are typically highly distributed, embedded, reactive digital hardware/software systems that vary from building to building and over time, with changing building usage and user requirements. They have moderate real-time, reliability, and safety requirements. Building automation systems are very sensitive to development, implementation, and maintenance costs. They are, therefore, ideally suited for testing the quality and efficiency of software development methods. From the above we can extract some requirements common to the development process of most embedded systems:

1. The developed system has to be reliable, with fail-safe features.
2. The hardware requirements have to be minimal, especially in the case of high-volume markets.
3. The development time has to be short and predictable to guarantee short time-to-market.
4. The development cost has to be low, especially in the case of low-volume markets.
5. Time and cost for developing system variants have to be low because of different product environments and changing requirements.
6. Usage and maintenance of the developed systems have to be easy.

Currently, system development methods cannot guarantee any of these requirements. Pure hardware design has made large progress towards these goals in the VLSI design process, but hardware/software codesign is still very limited in its application. Therefore, most embedded systems are still developed in unstructured, ad hoc ways without strict construction methods. The results are badly documented products with high maintenance and change costs, not to mention quality problems.
The SFB 501 tries to overcome these problems and thus increase the acceptance of structured, rigorous development methods. Since we cannot expect to solve all problems at once, we concentrate on specific domains and proceed in small steps of the process. One of these steps is the subject of this case study.

In practice, the analysis phase is the least formal phase of the system development process. In this phase the system requirements specification is derived from an informal, incomplete, and often inconsistent customer requirements description. Typically, the customer is not a domain expert, and his view of the requirements is purely user oriented. During the analysis phase, as we define it, domain knowledge and system design expertise have to be integrated to refine the external customer view into an internal system view. Very often this process also results in a refined and corrected customer view, because some of his requirements may have been impossible to fulfil, in contradiction with others, detrimental to the quality of the system, or simply incomplete. The method that was developed by D1 and tested in several smaller case studies [Zim98] takes this into account and results in an executable system requirements specification.

One advantage of an executable model of the system is its formality: formal specifications are easier to verify than informal ones, especially during further refinements in the design and implementation phases of the process. Another advantage is that it can be used for prototyping. This is especially important for refining and validating the customer requirements, which form part of the problem description. Prototyping is also very useful for the verification of the system requirements specification itself, where it is used in addition to inspection. Other specifics of our method are summarized in Chapter 3; a more detailed description can be found in [Zim98].

One of the main goals of the method is efficiency. This is achieved by reuse in its widest meaning, by strict user guidance through the steps of the analysis phase, and by a new model architecture. This was made possible by making the method domain specific, in contrast to the existing universal methods. As already mentioned, the domain is building automation, although there are similar domains to which the same method could be applied. "Domain specific" may be in contrast to "general", but it is not in contrast to "generic". Reuse of artifacts or knowledge is only efficient if the right amount of genericity is applied.

The analysis phase is often skipped during embedded system design because of its large time consumption. Most development projects are under time pressure, and therefore analysis is often mixed with design and implementation. The result was already pointed out. Hence, the time and cost of the analysis described above must be reduced to the point where the gain for the complete process can be demonstrated, while the quality of the product must not be sacrificed. This is our goal, and this case study was defined to measure these features. Since we do not yet have data to compare with, this case study also serves as a baseline for further case studies. Thus, we cannot currently show that this method is better than others, but we can use the results to predict time and cost for other projects that use the proposed method. From the list of requirements above, we attack items 2 and 3.

Although this report is mainly concerned with the experimental method and the execution of the case study, the reported results being relevant only in comparison with further experiments, it can be said that the case study was successful beyond these data. The method itself proved easy to learn and apply for a group of software developers, the process support worked without much overhead, and the resulting specification can be used for further process steps, or parts of it can be packaged for reuse. The group gained valuable experience for further case studies. The method showed room for improvement, especially from the viewpoint of the underlying process philosophy. In some cases the granularity of tasks was too large and caused inefficiencies due to waiting times. This has already resulted in a rethinking of the design theory behind the process, which will be tested and reported as the result of another case study.
1.2 Experiments
Techniques, methods, and tools for software development must be tested experimentally in order to gain experience regarding their strengths and weaknesses under different project conditions. Such experiences are an indispensable prerequisite for the effective reuse of techniques, methods, and tools, as well as for their systematic development and improvement. Conducting software engineering experiments is an accepted means of gaining such experiences. This includes the creation of explicit experiment plans, the execution of experiments on the basis of such plans, and the capturing of measurement data during experiment execution. Additionally, captured experience can be stored in an experience base, which serves for the storage and administration of models, documentation techniques, products, and processes, together with empirically gained experience, for the purpose of reuse.

In this report, the conduct of a software engineering experiment is described. The remainder of this chapter characterizes the context (i.e., the research and development environment) and surveys the existing experience that was available via the SFB 501 experience base. Chapter 2 introduces the experimental goals that result in the data collection procedures depicted in Chapter 3. Furthermore, that chapter provides an informal and a formal description of the development process as well as the planning of the product management. Results of the experiment execution are documented in Chapter 4: the system documentation, the project trace describing the evolution of the experiment over time, and experiences in the form of raw measurement data and qualitative statements. This experience is analyzed in Chapter 5. Chapter 6 explains how the resulting experience was integrated into the sections of the experience database. Chapter 7 discusses improvement potentials and, resulting from them, possible future experiments.
1.3 The Experimental Environment
The SFB 501 provides a framework for conducting software engineering experiments in the domain of reactive systems. The SFB 501 uses a central platform, the SFB 501 SE laboratory, as a basis for the experience factory, to conduct software engineering experiments systematically and to provide for systematic reuse of a wide spectrum of software artifacts, processes, techniques, and tools. The experience factory stores analyzed and packaged experience throughout the lifetime of projects. The central aim of the SFB 501 is the application of generic methods¹ gained from any kind of experience from previous development projects. In order to be able to derive generic information about software development projects, each SFB 501 project is conducted either as a case study or as a controlled software engineering experiment. Both are called experiments in the remainder of this report. Experiences gathered in those experiments are analyzed and stored in relation to their specific context, and are only valid in this context. Context-specific analysis information, for example, is provided as a list of reasons why certain development techniques failed or were not appropriate in the given context.

Some controlled experiments and case studies have already been conducted within the SFB 501. So-called teams were founded to establish a personnel basis for larger case studies. These teams consist of members of a number of subprojects of the SFB 501 to strengthen interdisciplinary teamwork. Smaller case studies are executed in the subprojects to establish results, which are then tested by the teams. The objectives of the first team, called Team 1, were to find out which functional and non-functional properties are important in the application domain of building automation systems, and which formal methods are particularly suited to describe these properties. Knowledge about the application domain was acquired and made available to the other subprojects of the SFB 501. As a result, a reference model of the physical properties of the problem domain and a simulator facilitating prototype development were completed. Furthermore, requirements specification documents for a control system were prepared using three different formal to semi-formal specification techniques: SDL (Specification and Description Language), Statemate, and NRL/SCR (Naval Research Lab/Software Cost Reduction). The three documents were compared with regard to structure, traceability of decisions, ease of change, readability, etc. [BDK97].

1. "Generic methods" are used in the SFB 501 to describe any kind of description or generation technique and appropriate tools that support systematic reuse of existing software artifacts, processes, techniques, and tools.
Team 1 was followed by Team 2, whose main task was the development of an architecture for building automation systems. In order to validate the architecture, a prototype was developed in a variant restricted to a single room [DKK97]. Team 3 was founded at the beginning of 1998, i.e., at the beginning of the second funding period of the SFB 501. Its aim was described above. There are plans to pursue further variants of that baseline experiment to demonstrate the continuous enhancement of the development of large software systems by the SFB 501. BaX (Baseline eXperiment) was the first experiment of Team 3. It serves for the fundamental acquisition of effort and error data as well as for the collection of experience with a state-of-the-art development process. The following report describes the case study BaX. An important precondition for our experiment is the prototyping environment: embedded systems do not function without their "bed". One environment is a building simulator for prototyping, developed by C1, that is based on a specification that resulted from Team 1. Another is an office equipped with sensors and actuators as specified by the above building model, with an interface to the prototyping environment.
1.4 Experience Base and Characterization
One central component of the SFB 501 reuse activities is the web-based implementation of the Experience Base of the SFB 501 [FeV98]. The Experience Base (SFB-EB) acts as a repository for all kinds of experience. Fig. 1 shows its current logical structure. On the top level, the SFB-EB is divided into two sections, called the experiment-specific section and the organization-wide section. The organization-wide section stores experience relevant to several projects (e.g., process models and process descriptions), while all information concerning single experiments (i.e., case studies and controlled experiments) is documented in the experiment-specific section [Fec98].
[Fig. 1: The current structure of the SFB-EB. The organization-wide section contains the areas technologies, process modeling, background knowledge, qualitative experience, measurement, and component repositories; the experiment-specific section contains case studies and controlled experiments.]

Within the SFB-EB, all kinds of software engineering experience, especially models, instances, and qualitative experience, as well as complete project documentations, are regarded as experience elements [FMV98]. Experience elements are stored in the SFB-EB according to their context. Hence, each context has to be precisely described. The context of each experience element is described with the help of a so-called context vector. A context vector consists of a number of attribute-value pairs (e.g., <number_of_developers, 7>). With the help of these attribute-value pairs it is possible to describe and characterize an experience element and its context, and therefore to search for it in the SFB-EB.
To be able to search the SFB-EB for reusable experience elements for the BaX case study, it is necessary to characterize BaX with the help of a context vector as well. Tab. 1 lists the attributes of a context vector, their possible values (column 2), as well as the values chosen for the characterization of BaX (column 3). Some attributes of the context vector explicitly allow the usage of the value ‘none’. This value is used when no other predefined value is suitable for a certain experience element. For example, it is not the aim of BaX to implement and code the complete system. Thus, no implementation language, such as C++ or Java, is used. Therefore, the attribute implementation_technique_or_language of the BaX context vector is set to ‘none’.
1: context vector attribute | 2: possible value(s) | 3: BaX characterization
number_of_developers | positive integer value | 7
component_type | user interface / application software / communication system | application software
project_type | creation from scratch / maintenance: perfective / maintenance: adaptive / maintenance: corrective | creation from scratch
experience_of_developers | research assistants / students with SE / students without SE | research assistants
application_domain | building automation: temperature / building automation: safety / building automation: light / building automation: ventilation / reactive systems (others) / internet communication | building automation: light
requirements_technique | informal / SDL / OO / SCR / Statemate / Mills / temporal logic / UML / MSC / none | informal / SDL
development_guidelines | implementation: OO (AGSE) / implementation: OO (AGSE)+ / implementation: C++ (CoDEx) / implementation: C++ (CoDEx)+ / implementation: C++ (AGSE) / implementation: C++ (Waste) / process-specific / model architecture (AGZ) | model architecture (AGZ)
implementation_technique_or_language | C / C++ / Java / none | none
design_technique | informal / SDL / OO / none | none
inspection_technique | ad-hoc / PBR / functional testing / structural testing / code reading / checklist-oriented / scenario-oriented / none | ad-hoc / checklist-oriented
process_and_life_cycle_model | V-model: waterfall / V-model: iterative enhancement / SFB 501 reference model / SDL-based RA / SDL pattern based design process / OMT / OMT+ / PSP / none | SDL-based RA / SFB 501 reference model
validation_technique | black-box testing / white-box testing / none | black-box testing
organizational_context | University of Kaiserslautern | University of Kaiserslautern

Tab. 1: The context vector of BaX
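To make the characterization concrete, Tab. 1 can be read as a simple attribute-value mapping. The following sketch is a hypothetical illustration (not tooling used by the SFB 501): the attribute names and values are taken from the table, while the representation and the validation logic are our own assumptions.

```python
# Hypothetical sketch: a context vector as an attribute-value mapping.
# Only a subset of the attributes from Tab. 1 is shown; free-form
# attributes (number_of_developers) are checked separately.
ADMISSIBLE_VALUES = {
    "component_type": {"user interface", "application software",
                       "communication system"},
    "project_type": {"creation from scratch", "maintenance: perfective",
                     "maintenance: adaptive", "maintenance: corrective"},
    "implementation_technique_or_language": {"C", "C++", "Java", "none"},
}

bax_vector = {
    "number_of_developers": 7,
    "component_type": "application software",
    "project_type": "creation from scratch",
    # BaX does not implement the system, hence the explicit value 'none'.
    "implementation_technique_or_language": "none",
}

def is_valid(vector):
    """Check a context vector against the predefined sets of values."""
    if vector.get("number_of_developers", 0) < 1:
        return False
    return all(vector.get(attr) in allowed
               for attr, allowed in ADMISSIBLE_VALUES.items())
```

Representing the vector this way makes the role of the explicit value 'none' visible: it is a legitimate, searchable characterization, distinct from a missing attribute, which `is_valid` rejects.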
1.5 Existing Experience
With the help of the BaX context vector (as defined in Tab. 1), the SFB-EB was searched for existing experience elements suitable for reuse in the new SFB team project. Because BaX is the first case study of its type, only a few experience elements were found in the SFB-EB. Among those found was the ‘SFB 501 Reference Process Model’, stored in the process modeling area². It was used as the starting point for modeling the BaX process, which actually refines the first phase of the reference model. Another process description, the Bræk & Haugen model [BrH93] for developing real-time systems, was also retrieved from the SFB-EB³ when searching for suitable models, because this model’s attributes also match the SDL and real-time attributes of the BaX context vector. But since the SFB 501 Reference Process Model was closer to the intended process for BaX, reuse of the Bræk & Haugen model was rejected. Furthermore, the following experience elements were found and chosen for reuse in BaX:
• Error checklists retrieved from the case study ‘CoDEx’, stored in the experiment-specific section of the SFB-EB⁴, were used as a basis for the checklist used in the BaX verify process.
2. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/MODELLE/CONTEXT_VECTOR/sfb_referenzmodell-english.html
3. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/MODELLE/CONTEXT_VECTOR/real_time-english.html
4. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/SPEZIFISCH/FALLSTUDIEN/codex_contents.html#ReviewChecklisten
• On the process support side, a GQM plan for measuring calendar time was found in the experiment-specific section. This GQM plan, together with FrameMaker templates for GQM plans and abstraction sheets stored in the measurement area of the SFB-EB⁵, served as input for creating and defining the BaX measurement program (see Chapter 2 and Section 3.4).
Besides the experience stored in the SFB-EB, external knowledge and experience were used in this case study, in particular:
• The modeling method, process, and model architecture developed and tested in D1, presented in Sections 3.1 and 3.2.
• The domain dictionary containing domain knowledge from earlier case studies.
• A building model, describing the 4th floor of building 32.
• Checklists for the verification steps, extracted from the experience obtained in earlier executions of the process used.
• Templates and guidelines for specifying the various products (see Section 3.2.2).
• A prototyping environment based on SDL (Specification and Description Language, [BrH93] and [Z.100]) and Telelogic’s tool SDT.
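The retrieval decision described in this section — several stored models match some attributes of the BaX vector, and the closest one is chosen — can be sketched as a simple attribute-overlap count. This is a hypothetical illustration only: the SFB-EB is a web-based repository whose actual search mechanism is not shown here, and the candidate vectors below are simplified excerpts.

```python
def overlap(query, candidate):
    """Number of attributes on which a stored element matches the query."""
    return sum(1 for attr, value in query.items()
               if candidate.get(attr) == value)

# Simplified excerpts of context vectors (illustrative only).
bax = {
    "requirements_technique": "SDL",
    "process_and_life_cycle_model": "SFB 501 reference model",
    "application_domain": "building automation: light",
}
candidates = {
    "SFB 501 Reference Process Model": {
        "requirements_technique": "SDL",
        "process_and_life_cycle_model": "SFB 501 reference model",
    },
    "Braek & Haugen model": {
        "requirements_technique": "SDL",
        "application_domain": "reactive systems (others)",
    },
}

# The reference model matches on two attributes, the Braek & Haugen
# model on only one, so the former is ranked closer and chosen for reuse.
best = max(candidates, key=lambda name: overlap(bax, candidates[name]))
```

Under this scoring, `best` is the reference process model, mirroring the selection reported above; a real repository search would of course weigh attributes and handle multi-valued entries more carefully.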
1.6 The Team Process
Team processes are different from single-person processes and have to be planned and managed differently. Assignment of roles, work hours, tasks, the location, and the computing environment are critical for the comparison of experimental data and experiences. Especially in a research environment, strictly managed team processes are unusual, and the scientists, mostly Ph.D. students, have no experience in this domain. Therefore, a process has to be found that is feasible, minimizes interference with the normal duties of the team members, and is realistic for system design in the domain of embedded systems.

The complexity of distributed embedded systems does not typically result from a large number of lines of code; it results mainly from the complexity of the problem domain, the nonfunctional requirements, and the complex, typically event-driven, asynchronous interaction between components and between the system and the environment. Therefore, small interdisciplinary groups of highly trained specialists are typical, also because of cost and time limitations.

Team 3 consisted of five software modeling experts, two domain experts, and one manager who was familiar with both domains. One of the modeling experts also acted as project leader. The team met at regular times, twice a week for four hours, in one room equipped with X-terminals with access to Sun servers running the software tools. This arrangement was crucial for the success, because all issues concerning more than one person could be resolved without delay. Such issues were mostly interface conflicts and domain questions. Because the process and experiment support was tried for the first time and partly developed during the case study, three software engineering experts were present most of the time to give support. This made it possible to avoid distortion of the experimental data due to delays caused by supporting software.
Only at the end of the process did we deviate from the above rules, to release most of Team 3. Testing and prototyping were done by one person of D1 with the help of student assistants. This resulted in a large time lag because of the unavailability of the students. This time lag shows up in the resulting calendar time; the data of this part of the process have to be interpreted accordingly.
5. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/MEASUREMENT/gqm_templates.html
2 Experimental Goals
Many description techniques and methods are candidates for performing a requirements analysis. The selection of a specific one requires answers to questions like "which requirements analysis method is best suited for the domain ‘building automation’?". This question may be reformulated as "is method ‘A’ better suited to a specific context than method ‘B’?". To get valuable answers for our context, it is essential to define terms like "best" and "better" quantitatively, so that experiments can be defined and performed. An example of a more objective definition of "better" may be "costs 10% less (measured in developer hours)".

The experiment described in this report is a case study. Only one requirements analysis method is used and, consequently, a comparison between two or more methods is not possible within the scope of the experiment. The purpose of the experiment is mainly to test the method in a group process and to establish a baseline. This includes a) baselining the process with respect to quantitatively measured key factors (such as effort) and b) gaining qualitative experience related to certain aspects (e.g., the applied requirements analysis method, description technique, and tool support). The results can be used for a weakness analysis of the method to identify improvement potentials. Beyond that, the results can be used for comparison with the results of further experiments examining an improved method or alternative methods.

For the description of the quantifiable goals we use the Goal/Question/Metric paradigm (GQM), which supports the definition of goals and their refinement into metrics as well as the interpretation of the resulting data [BW84][BR88]. The idea of the GQM paradigm is that, with explicitly stated goals, all data collection and interpretation activities are based on a clearly documented rationale.
According to [BDR97], goal-oriented measurement is the definition of a measurement program based on explicit and precisely defined goals that state how measurement will be used. Advantages of goal-oriented measurement are:
• It helps ensure adequacy, consistency, and completeness of the measurement plan and therefore of data collection.
• It helps manage the complexity of the measurement program.
• It helps stimulate a structured discussion and promote consensus about measurement goals.
Every measurement goal can be expressed using a template with five facets: "object of study", "purpose", "quality focus of study", "point of view", and "context". This template is used in the following to describe the quantifiable goals of BaX. The first goal of BaX is the characterization of calendar time (see Fig. 2). For this goal, no variation factors (i.e., independent variables) are measured explicitly and, therefore, no hypotheses1 are stated. A baseline concerning calendar time can be used, for example, to obtain a project trace, to perform analyses concerning parallel work, or to identify extraordinarily long-lasting processes.
1. A hypothesis describes the expected impact of the variation factors on the quality focus of the measurement goal.
Analyze the Team 3 processes (object of study)
for the purpose of characterization (purpose)
with respect to calendar time (quality focus of study)
from the point of view of the project planner / project manager (point of view)
in the context of SFB 501 - Team 3 (context).
Fig. 2: Goal 1 (characterization of calendar time)
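As a small illustration (not part of the original measurement tooling), the five-facet goal template can be rendered mechanically; the helper function below is our own sketch:

```python
# Sketch: rendering a GQM measurement goal from its five facets
# (object of study, purpose, quality focus, point of view, context).
def gqm_goal(obj, purpose, focus, viewpoint, context):
    return (f"Analyze {obj} for the purpose of {purpose} "
            f"with respect to {focus} from the point of view of "
            f"{viewpoint} in the context of {context}.")

goal1 = gqm_goal("the Team 3 processes", "characterization",
                 "calendar time",
                 "the project planner / project manager",
                 "SFB 501 - Team 3")
```

Instantiating the remaining facets with "effort" or "defects" and "the developer" yields goals 2 and 3 below.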
The other two key factors measured in BaX are effort and defects. The respective goals are described in Fig. 3 and Fig. 4.
Analyze the Team 3 processes
for the purpose of characterization
with respect to effort
from the point of view of the developer
in the context of SFB 501 - Team 3.
Fig. 3: Goal 2 (characterization of effort)
Analyze the Team 3 processes
for the purpose of characterization
with respect to defects
from the point of view of the developer
in the context of SFB 501 - Team 3.
Fig. 4: Goal 3 (characterization of defects)
The information needed to plan measurement and perform data analysis is included in a GQM plan. Such a plan describes precisely why the measures are defined. It consists of a goal, a set of questions, and measures. Additionally, models and hypotheses may be part of a GQM plan. As an example, an excerpt of the GQM plan for goal 3 is described here. One of the questions (and the metrics that could be used to answer this question) is illustrated in Fig. 5. Note that in this question the source product of a defect is understood as the product on the highest level of abstraction in which the defect appears. For instance, this might be the first product in a sequence of products describing the system functionality on different refinement levels.
Q1: How many defects were detected in each process of the Team 3 case study (distinguished by source products)?
Metrics:
- For each defect: Identifier of the process in which the defect was detected
- For each defect: Identifier of the source product
Fig. 5: Example question with metrics from the GQM plan “defects”
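The structure of such a plan — one goal refined into questions, each answered by metrics — can be sketched as plain data. The field names below are ours, not a prescribed SFB-EB format:

```python
# Sketch: a GQM plan as a goal refined into questions and metrics.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class GQMPlan:
    goal: str
    questions: list = field(default_factory=list)

# Excerpt of the plan for goal 3 (defects), following Fig. 5.
defect_plan = GQMPlan(
    goal="Characterize the Team 3 processes with respect to defects",
    questions=[
        Question(
            text=("How many defects were detected in each process of the "
                  "Team 3 case study (distinguished by source products)?"),
            metrics=[
                "For each defect: identifier of the detecting process",
                "For each defect: identifier of the source product",
            ],
        )
    ],
)
```

Collected defect records would then be grouped by exactly these two identifiers to answer Q1.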
We assume a relationship between the complexity of modeled objects and the defects that will be detected in these objects. Therefore, the following hypothesis is stated.
H1: The number of detected defects per object increases linearly with the complexity of the modeled objects.
The complete GQM plans for goal 1, goal 2, and goal 3 can be found in Appendix B.
Supplementing the quantitative measurement, the collection of qualitative experience is an important aim of the experiment. Qualitative experience is subjective, but it can be seen as a source of improvement proposals and explanations as well as a means of obtaining essential information that is not gathered quantitatively. Additionally, qualitative experience can be used to interpret the results of quantitative analyses. The following goals describe the aspects that are of interest in BaX.
Goal 4. Gain qualitative experience concerning the requirements analysis method and the development process.
Goal 5. Gain qualitative experience concerning the development platform.
Goal 6. Gain qualitative experience concerning the instrumentation of the experiment.
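Hypothesis H1 above could later be checked against the collected data with a simple least-squares fit. The numbers below are invented placeholders, not measured results from the case study:

```python
# Sketch: checking H1 (defects per object grow linearly with object
# complexity) via an ordinary least-squares slope. The data points are
# hypothetical placeholders for illustration only.

def ols_slope(xs, ys):
    """Slope of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

complexity = [1, 2, 3, 4, 5]   # hypothetical complexity measure per object
defects    = [2, 4, 6, 8, 10]  # hypothetical defect counts per object

slope = ols_slope(complexity, defects)  # a clearly positive slope would
                                        # be consistent with H1
```

A real evaluation would additionally inspect the residuals (or a correlation coefficient) to judge whether the relationship is indeed linear rather than merely increasing.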
3 Project Plan
This Chapter presents a reference model that serves as a framework for development processes in the context of the SFB 501, gives a description of the process model embedded herein for the experiment considered in this report, and finally sketches details of the applied method and product management aspects.

We defined a reference model that describes a framework for all experiments conducted in the context of the SFB 501. The motivation for the usage of this so-called SFB 501 reference model is twofold. On the one hand, it serves as an anchor for the integration of various software development methods. On the other hand, the reference model supports the comparability of the measured data between different experiments (e.g., effort data can be attributed to defined processes of the reference model independently of the examined development methods).

The top level of the reference model is shown in Fig. 6. It comprises a complete development cycle (producing a complete system called Used_System) and a prototyping process for rapid development of solid requirements (producing a Prototype). An informal problem description is used as a first high-level description of the problem to be solved (enclosed in the consumed product Problem). The product Domain_Knowledge is introduced to contribute to the following two aims: First, the reference model is intended to be domain-oriented, i.e., it should be specialized to the specific characteristics of the domain "reactive systems", in particular, "building automation". Second, the reuse of software artifacts (such as products) should be supported.

[Figure: the process SFB_501_Reference_Model consumes the products Problem and Domain_Knowledge and produces Used_System and Prototype. Legend: product, process, consume, produce, refinement.]
Fig. 6: SFB 501 reference model (top level)
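The consume/produce relations of the reference model's top level can be captured in a tiny sketch; the dictionary representation below is our own, not MVP-L:

```python
# Sketch: the top level of the SFB 501 reference model as
# consume/produce relations between one process and its products.
processes = {
    "SFB_501_Reference_Model": {
        "consumes": ["Problem", "Domain_Knowledge"],
        "produces": ["Used_System", "Prototype"],
    }
}

def products_of(proc: str) -> set:
    """All products a process touches, consumed or produced."""
    p = processes[proc]
    return set(p["consumes"]) | set(p["produces"])
```

Refining the model (Fig. 7) amounts to adding further entries of the same shape and relating their products.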
The refinement of the SFB 501 reference model is shown in Fig. 7. The reference model comprises different types of processes. There are development processes (Requirements_Analysis, Create_System_Design, Create_Prototype), constructive processes (Integrate_System, Create_Usable_System, Installation), and analytical processes, such as the validation processes Integration_Test, Acceptance_Test, Prototype_Test, and Test_in_Action. Verification processes can be defined on a refined level of the model. It should be recognized that the processes for the development of the control system (Develop_Control_System_Software), the operating system (Develop_Operating_System), the control hardware (Develop_Control_Hardware), and the communication system (Develop_Communication_System) are complete development life cycles themselves. Since there are dependencies between these life cycles, the product Coordination_Products was introduced.

The shadowed products and processes in Fig. 7 are those chosen for the first experiment of Team 3. This has mainly the following two reasons. On the one hand, a method should be applied that had already been tried out to a certain level, so that an evolution of the method itself during the experiment could be excluded. An appropriate method that seemed to fulfill this requirement is an SDL-based requirements analysis method. One of the important features of this method is early prototyping; the reasons have been described in the introduction. This requires executable specifications. On the other hand, the experimental goal "characterize defects" (see Chapter 2) also requires the development of an executable system. The reason is that we did not want to operate only on documents (i.e., the creation of the product System_Requirements and intermediate products). Of course, it is possible to create a defect baseline this way, but it is impossible to gather data about late defects (i.e., defects detected late in the development process). Because of the importance of late defects, which usually cause enormous rework effort, we decided to include the generation of a prototype in the experiment. The prototype is only a provisional substitute for the complete system. It is intended to baseline the analysis method in the context of a complete development cycle (including design, etc.) in further experiments.
[Figure: refined SFB 501 reference model. Products: Problem, Domain_Knowledge (with Application_Knowledge, Design_Knowledge, Control_System_Knowledge, Operating_System_Knowledge, Control_Hardware_Knowledge, Communication_System_Knowledge), System_Requirements, System_Design, Control_System_Software, Operating_System, Control_Hardware, Communication_System, Coordination_Products, Executable_System, Usable_System, Prototype, Used_System. Processes: Requirements_Analysis, Create_System_Design, Develop_Control_System_Software, Develop_Operating_System, Develop_Control_Hardware, Develop_Communication_System, Integrate_System, Integration_Test, Create_Useable_System, Acceptance_Test, Installation, Test_in_Action, Create_Prototype, Prototype_Test. Legend: product, process, consume, produce, modify, aggregation, refinement.]
Fig. 7: SFB 501 reference model (refined level)
In the following, the processes performed in the experiment are described in detail. In the interest of precision, completeness, consistency, unambiguity, and understandability, both a formal and an informal process description are given.
3.1 Formal Process Description
In this section we use formal process models to represent the activities performed in the experiment. Formal process models represent knowledge about organizational processes (such as quality assurance) and technical software processes (such as coding) explicitly. Furthermore, they make this knowledge persistent. Explicit process models are a prerequisite for experiments in which processes need to be documented, implemented, analyzed, and changed. One school of research, known as software process technology, has developed approaches for modeling, analyzing, simulating, packaging, and performing software development processes. Formal process models offer the following advantages (see [BHMV97]):
1) They facilitate the creation and modification of consistent software process descriptions.
2) They are an appropriate means for storing software development knowledge.
3) They enable sophisticated analyses.
4) They are the basis for process-sensitive software engineering environments.
In order to benefit from these advantages, we formalized the relevant processes of the experiment. The results of this formalization are shown in Fig. 6, Fig. 7, and Fig. 8 (using a graphical style). Appendix C contains excerpts of this formalization in a process modeling language.

Fig. 8 shows the processes performed during the first Team 3 experiment. The model mainly consists of an SDL-based requirements analysis method and an additional prototype generation and test loop.1 The requirements analysis process is divided into three main processes. The first two (Object_Structure_Design and Task_Assignment) lead to a structured but still informal description of the requirements. The third process (Requirements_Modeling) formalizes this description into an SDL representation. Based on this formal representation, a Prototype can be generated.
The development processes are supplemented by two verification processes (Verify_Requirements_Description and Verify_System_Requirements) and a validation process (Prototype_Test). As proposed by the requirements analysis method, in order to enhance reuse, the products are refined in the following way: the product Problem is refined into the subproducts Problem_Description, Building_Description, and Addendum2; the product Application_Knowledge is refined into the subproducts Dictionary and Development_Guidelines; finally, the product System_Requirements is refined into several subproducts.

The difference between explicit process models and a project plan should be clarified: models represent classes of real-world objects (e.g., the process model Create_Design is an abstraction of a design activity). Process models can be regarded as activity types. A project plan results from instantiating and relating models to build a representation of the experiment with respect to its goals and characteristics. In short, a project plan describes the activities to be performed in the project and can provide additional information, such as default values for maximum effort (for a detailed definition see [RV95]). The project plan (described in the process modeling language MVP-L [BLRV95]) for the experiment described in this report can be found in Appendix C. A more comprehensive understanding of the term project plan also includes information about product management; this is described later in this Chapter.
1. In subsequent iterations of the experiment this prototype generation can be used for so-called ’back-to-back’ testing, where the same test cases are submitted to both the used system and the prototype. Differences in the test results suggest problems which should be investigated in more detail.
2. The document Addendum results from the decision that we regarded the Problem_Description and the Building_Description as non-modifiable input. Changes to these subproducts that were necessary in the context of the experiment are documented in the Addendum.
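The distinction between process models (types) and a project plan (their instantiation) can be illustrated without MVP-L syntax. The class layout and the example effort value below are our own sketch, not the Appendix C plan:

```python
# Sketch: process models as types, a project plan as related instances.
class ProcessModel:
    """An activity type, possibly with default values such as a
    maximum-effort budget."""
    def __init__(self, name, max_effort_h=None):
        self.name = name
        self.max_effort_h = max_effort_h

    def instantiate(self, instance_id):
        return ProcessInstance(instance_id, self)

class ProcessInstance:
    """One concrete activity in a project plan."""
    def __init__(self, instance_id, model):
        self.instance_id = instance_id
        self.model = model

# One model may be instantiated several times in a plan, e.g. two
# rounds of the same verification process (names are illustrative).
verify = ProcessModel("Verify_Requirements_Description", max_effort_h=8)
project_plan = [verify.instantiate("verify_round_1"),
                verify.instantiate("verify_round_2")]
```

Both instances share the same model object, so measured data from either round can be attributed to the same activity type, which is exactly what makes data comparable across experiments.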
[Figure: SDL-based requirements analysis (refinement of Requirements_Analysis). Processes: Informal_Object_Design, Object_Structure_Design, Task_Assignment, Verify_Requirements_Description, Requirements_Modeling, Verify_System_Requirements, Create_Test_Cases, Create_Prototype, Prototype_Test. Products: Problem, Domain_Knowledge, Application_Knowledge, Object_Structure, Requirements_Description, System_Requirements, Test_Cases, Prototype. Legend: product, process, consume, produce, modify, aggregation, refinement.]
Fig. 8: SDL-based requirements analysis
3.2 The Requirements Analysis Process
In this section, we give a short informal description of the goals of our requirements analysis process (RA process) and of the individual process steps, illustrated in Fig. 8. The overall goal is to produce executable requirements of a system that fulfill the needs of the customer, as listed in the problem description. These system requirements are described by an executable model. The model is executable if it can be transformed into an executable program that shows the required external behavior of the system. Therefore, the executable model has to contain a description of the internal behavior of the system, from which the external behavior results.

One experience from previous SFB 501 projects (Team 1, Team 2) is that we cannot define the behavior of a system without knowledge of its structure. Therefore, the main feature of the analysis process is the use of a reference architecture and detailed development guidelines for the creation of a problem-specific system architecture. According to the guidelines, the whole system is partitioned into objects and composed in a strong aggregation hierarchy, which we call the organizational hierarchy. The special features of this hierarchy are as follows:
- Strong aggregation: Every object can only be a component of one other object. Thus, all objects in our system are placed in a tree and not, as is conventional, in a graph.
- The type of communication: Exchange of signals or messages is only allowed between objects that are directly related in the hierarchy.
- The way in which functionality is assigned to the objects: Contrary to conventional aggregation hierarchies, the functionality is assigned not only to the leaves of the tree. All objects in the hierarchy realize parts of the needs in the problem description.
- The type of structure diagrams: Instead of pure object type diagrams, as used in languages like OMT [RBP91], or pure instance diagrams (used in [Boo91]), we use a mix of both: type-instance diagrams. Every type consists of a number of objects (called components), which are instances of other types, and a controller to realize the local functionality. This simplifies the modeling of complex systems. Fig. 9 shows the metamodel of our organizational hierarchy (using UML notation).
[Figure: each Type (1..1) owns exactly one Controller (1..1) and 0..n Components.]
Fig. 9: Metamodel of the type-instance hierarchy
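The metamodel and the strong-aggregation rule can be sketched directly; the class names follow Fig. 9, while the enforcement logic and example objects are our own illustration:

```python
# Sketch of the type-instance metamodel: every Type owns exactly one
# Controller and 0..n Components; each Component is an instance of some
# Type. Strong aggregation: a component belongs to at most one parent,
# so instances form a tree, not a graph.
class Controller:
    def __init__(self, name):
        self.name = name

class Type:
    def __init__(self, name):
        self.name = name
        self.controller = Controller(name + "Ctrl")  # exactly one (1..1)
        self.components = []                         # 0..n instances

class Component:
    def __init__(self, name, of_type):
        self.name = name
        self.of_type = of_type
        self.parent = None

def add_component(parent: Type, comp: Component):
    """Attach a component, enforcing strong aggregation."""
    if comp.parent is not None:
        raise ValueError("strong aggregation: component already owned")
    comp.parent = parent
    parent.components.append(comp)

# Hypothetical building-automation objects for illustration.
room = Type("Room")
floor = Type("Floor")
r1 = Component("room1", room)
add_component(floor, r1)
```

A second `add_component` call on `r1` would raise, which is precisely the tree-not-graph property stated above.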
Instances of this model architecture that are created in the analysis phase need not be identical to the final system architecture used in the implementation. They are rather a first solution, which has to be modified and optimized in the design process (e.g., due to timing, fault tolerance, communication, or hardware constraints). Having an executable model, we can generate a prototype and use this prototype to check the system’s behavior against the given problem description in order to validate the latter or verify the model.

3.2.1 The Products and Steps of the Requirements Analysis Phase
As shown in Fig. 8, the starting point of the SDL-based RA process is the informal problem document supplied by the customer of the system. It consists of a building description and the problem description, in which the customer typically just documents the expected external behavior of the system. This document mainly takes the form of a list of needs (the functional requirements) plus the non-functional requirements, such as real-time and fault tolerance requirements.

Since these needs are expressed in terms specific to the building domain, a dictionary of terms is created and extended during the whole development process. These terms and definitions occur in many control projects in the same domain. Therefore, a general domain dictionary is a good example of reuse and is part of the more general experience database. Any relevant information about domain objects can be stored in this dictionary, such as the physical laws that determine the physical part of the internal behavior. This procedure guarantees that terms and abbreviations are used in a unique way throughout the SDL-based RA process. A good dictionary saves a great deal of time and discussion.

Object Structure Design
The internal behavior is specified by software and domain experts during the analysis phase. Typically, in building automation, these are automation, HVAC (Heating, Ventilation, Air-Conditioning), lighting, and facility management experts. As a result of earlier case studies, it became obvious that this process can be best structured and documented if the organizational hierarchy for the specific problem is defined
first. This is documented in the object structure document. Therefore, the first activity is to evaluate the object structure, following the ideas described in Section 3.2.2. The output of this step is a type-instance diagram, which brings all the components of our system together. Changes to this hierarchy may still be made during the next step, the task assignment.

Task Assignment
Many of the control problems are multivariable problems with incomplete knowledge about the controlled system. Therefore, classical control theory often fails, and solutions have to be sought by the control and software experts together. The result of this task assignment is an informal document, which we call the requirements description.

We have experimented with use cases [JCJ93] and Message Sequence Charts [Z.120]. Neither seemed appropriate to the building control problem at hand. Therefore, we resorted to the description of tasks. A task is one part of a need; one or more tasks (and the interaction between them) realize one need. Most tasks require decisions or actions based on data. Accordingly, a task has to be realized by the object at the right level of competence, where most of the required data are produced by its direct subcomponents. This is supported by the model hierarchy and keeps signal flows local. If no appropriate object in the hierarchy can be found, the object hierarchy has to be refined.

This splitting of the needs into tasks often results in conflicting demands that have to be resolved. For each task we describe a strategy as the solution of the control task at hand. These strategies constitute the internal behavior and are later modeled in the requirements specification. Some of the fault tolerance requirements are resolved here by special strategies. Real-time requirements are refined, and additional ones are introduced as part of the solutions. Strategies may require additional tasks; the task list is updated accordingly.
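The placement rule above — a task belongs to the object whose direct subcomponents produce most of the data the task needs — can be sketched as a small heuristic. The object hierarchy and data items are invented examples, not the case-study model:

```python
# Sketch: choosing the owning object for a task. Rule: a task belongs
# to the object whose direct subcomponents cover most of the required
# data. Hierarchy and data items are hypothetical.
hierarchy = {                      # object -> direct subcomponents
    "Floor": ["Room1", "Room2"],
    "Room1": ["Sensor1", "Lamp1"],
    "Room2": ["Sensor2", "Lamp2"],
}
produces = {                       # object -> data it produces
    "Sensor1": {"lux1"}, "Sensor2": {"lux2"},
    "Lamp1": set(), "Lamp2": set(),
    "Room1": {"occupancy1"}, "Room2": {"occupancy2"},
}

def place_task(required: set) -> str:
    """Return the object whose direct subcomponents cover most of the
    required data."""
    def coverage(obj):
        subs = hierarchy.get(obj, [])
        covered = set()
        for s in subs:
            covered |= produces.get(s, set())
        return len(required & covered)
    return max(hierarchy, key=coverage)

# A task needing both rooms' occupancy data belongs to the Floor.
owner = place_task({"occupancy1", "occupancy2"})
```

This also shows why signal flows stay local: the chosen object reads its data directly from its own subcomponents instead of reaching across the tree.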
As the last decision in this step, the object hierarchy can be changed, for example by merging two objects if one of them has no tasks to fulfill.

Requirements Modeling
The requirements description has to be formalized to get an executable model. Out of many possible techniques (e.g., UML, Statemate), we have decided to use SDL, because control systems are behaviorally oriented and the CASE tool SDT seems to give the best support. The major activities of the requirements modeling step are, first, to map the object hierarchy onto an SDL block hierarchy and, second, to translate the strategies into executable processes. The mapping can be done by using model patterns, as described later. Every object from the object hierarchy is translated into an SDL block type, which can be instantiated several times. The control processes are defined by state transition diagrams.

SDL offers no formalism for real-time requirements. Instead, operational delays can be specified in the state transition graphs to model the time behavior. In particular, periodic activations, for example feedback control loops, are specified using delays. Additional delays can be used to simulate latencies and execution times. Some values of these delays are known from the devices used, for example sensor response times; others have to be estimated.

Verification
After each process step, it is indispensable to insert a verification step. Mostly, these steps are done by inspecting the documents. Both the design decisions and the consistency with earlier documents are checked. The output is a list of defects, which is the input of a rework step. This iteration is repeated until no more defects are found.
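The use of delays for periodic activation, described under Requirements Modeling above, can be mimicked in a small discrete-event sketch. The tick-based timing and class names are our own illustration, not SDT's execution semantics:

```python
# Sketch: a periodically activated control process, analogous to an SDL
# model in which an expiring delay (timer) re-triggers a feedback
# control loop. Time advances in abstract ticks.
class PeriodicController:
    def __init__(self, period):
        self.period = period
        self.next_fire = period   # the armed delay
        self.activations = 0

    def advance(self, now):
        """Deliver timer expiries whose delay has elapsed by `now`."""
        while now >= self.next_fire:
            self.activations += 1           # one control-loop cycle runs
            self.next_fire += self.period   # re-arm the delay

ctrl = PeriodicController(period=5)
for t in range(0, 21):
    ctrl.advance(t)
```

With a period of 5 ticks, the loop fires at t = 5, 10, 15, 20; latencies and execution times could be simulated the same way, by inserting additional delays before the re-arming step.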
Create Test Cases
Test cases offer the possibility of checking the control system against the problem description. From the problem description, we derive a number of test cases for the system test, which has to consider all needs and, furthermore, some scenarios invented by the developers that extend these needs.

Prototype Generation
Prototype generation consists of two main tasks: the translation of the executable model into an executable program and the realization of the connection to the controlled environment. We assume that we already have a realization of the controlled environment, i.e., the existing building with all needed installations or a simulation of it. More about defining and creating these building simulators can be found in [SRZ97].

Prototype Test
This is the last task in our analysis phase. Each test case is handled individually, using it as an input to the prototype. The system’s behavior is logged, so we can derive the failures of the system by determining inconsistencies between the prototype (and thus the SDL model) and the problem description. If there are no inconsistencies, the SDL model is correct from the point of view of the developers, and the customer can use this prototype to check the behavior of the system from his point of view.

3.2.2 Guidelines and Templates
This section gives an overview of some modeling guidelines and templates for the different analysis steps. Since we want to reuse experience gained in earlier case studies, it is mandatory to support the activities with guidelines and templates. The guidelines contain suggestions for making the design decisions, and the templates save time in writing these decisions down. Furthermore, this support improves understanding between the developers and reduces the number of errors.

Object Structure Design
Buildings are typically structured geometrically and functionally into objects by the building architect. It is easy to understand that one of these structures is a good basis for the structure of a possible control system, at least for the requirements specification model. The reason is obvious: the control system should be embedded in the building. Therefore, we choose one of the two structures as the starting point for the object hierarchy. In this experiment, we use the geometrical structure because we regard only one physical effect. This structure has to be extended by objects that will contain special functionality, for example a controller or an object to determine the occupancy of a room. Analyzing the given needs is a very helpful aid for finding these objects. Furthermore, the experience of Team 2, expressed in a reference architecture, helps us in finding the objects and in placing them in the object hierarchy. As a rule of thumb, a type should not be composed of too many components, because the resulting part of the SDL model would become too complex. In our domain, up to eight or ten components are sufficient for most problems.

Task Assignment
The task assignment is a very creative design step, and it produces an informal description. Therefore, we cannot give very restrictive guidelines or templates for this step.
The developer has to examine every need in the problem description, divide it into smaller pieces (called tasks), and assign them to the appropriate objects. One rule is: define the tasks inside one object as independently of each other as possible. This simplifies the transfer into an SDL model because, in the case of independence, the transformation can be done for each task separately. For every task, a short strategy should be given as guidance for the realization. The description form of the strategies is not prescribed; we can use natural language, tables, or small state machines, if necessary.
[Figure: SDL structure template for a generic block type Blk. It contains a controller block BlkCtrl, the blocks Panel and Monitor for the user interface, and eight component instances i1..i8 of type SP. The components are connected to the controller via gates oi/oo and signal lists (BlkInp), (BlkOutp), (ai1)..(ai8), (bi1)..(bi8), and (xi1)..(xi8); small auxiliary blocks next to the instances distinguish different instances of the same type.]
Fig. 10: SDL structure template
Furthermore, an initial interface of the types has to be defined. This means that the resulting signals with their parameters have to be defined using the given rules for the names. Requirements Modeling The requirements modeling step can be divided into two parts: First, we have to transfer the structure of the system into an SDL structure and second, we have to define the behavior, that means defining the state-transition diagrams, of each resulting blocktype. Creating the SDL-structure is supported by a number of SDL Templates, which implement a generic blocktype with their parts (Fig. 10). Each blocktype is composed of a number of instances. A controlblock has it’s own functionality given by the tasks, and two blocks, called Panel and Monitor, should implement a user interface for the communication with a user and the possibility to trace the values of some interesting variables, for example, energy consumption or occupancy times. The small blocks on the right hand of the instances help us distinguish different instances of the same type in the controller block. These blocks are needed, because SDT does not support instance specific values to distinguish signals. Using these blocks, we can modify these values outside the instance. Modeling of behavior means transfering the tasks of an object with its strategies into an SDL state machine. The initialization phase is generically modeled in an SDL Template (Fig. 11), which has to be extended by the states and transitions, realizing the tasks. Process BlkCtrl
[Figure omitted: process BlkCtrl with the declarations DCL id Pid; DCL name Charstring; DCL readyCount INTEGER := 0; DCL noOfInst INTEGER := 8; and the generic Init/Idle states with the ident/ready signal exchange]
Fig. 11: SDT process template
Verification
As mentioned before, the verification is done by inspecting the documents. Checklists guide the inspectors through reading the documents by giving hints where most defects occur. These checklists are specific to each kind of verification step.

Create Test Cases
We have to regard each need from the problem description separately and define one or more test cases relating to it. There are three issues to be described for each test case:
- start conditions: the state of the system (control and controlled system) before starting the sequence, defined below.
- observe: the test objectives (signals, settings of variables, system states, etc.) that have to be examined.
- sequence: an enumeration of activities (in the control or controlled system), processed in the given order. For each step in this sequence, the expected reaction can be stated.

Create Prototype
Creation of a prototype primarily means the translation of the SDL model into an executable program. In SDT, this is done automatically by a C-code generator. Only a part of the entire interface of the system to the controlled environment, that is, the real existing building or a simulation of it, is modeled, and therefore only a part of the communication code is generated. We support the creation by using a generic component, called ProtoCtrl, which, together with some short C functions, implements the interface using the UNIX socket mechanism. This component and one instance of the uppermost blocktype, modeled in the object structure, are the only instances at the highest hierarchy level in the prototype (Fig. 12). The functionality of the ProtoCtrl includes a translation of names used in the prototype system to the real names used in the controlled environment, and the interpretation, respectively the generation, of commands (for activating actuators or reading sensor values …). More information about the interface can be found in [MQS96]. Because the interfaces are identical, we can replace the existing building with a building simulator. This will help us to process the different test cases, explained in the next section.

[Figure omitted: the prototype (ProtoCtrl and the control system built from generic components) connected via a socket to the controlled environment, i.e., the existing building or a building simulator]
Fig. 12: The prototype and the controlled environment

Prototype Test
Using the generated prototype, we can examine the functionality of our modeled system. The task in this step is to process each of the given test cases one by one: the start conditions have to be set (for example, a specific outdoor illuminance) and the sequence must be processed. After this, the observed values can be compared with the expected values. To observe the values, we have different possibilities:
- output of the control system: special outputs, defined in the model, can be used as debugging information.
- logging the communication: the communication between prototype and control system is recorded in a file.
- output of the controlled environment: the behavior of the controlled environment can be logged or observed on-line.
Furthermore, we have the possibility of selecting a special kind of prototype. In the SDT environment, we have the following possibilities:
- Simulation: only a simulation of the system, with a user interface to trace and influence the system states, the values of variables, and so on. Here, time is only a logical value and has no correlation to real time. All work invoked at a specific time is done in zero time, and the simulator switches, also in zero time, to the next time defined in the SDL model.
- Real-time Simulation: same as Simulation, except for the treatment of time. One second in the control system corresponds to one second of the computer that executes the control system.
- Application: a realization of the control system, where the processes cannot be stopped or influenced by the tester.
3.3
Product Management
The following section mainly deals with two topics: the way versions and consistent sets of versions —called configurations— are handled, and the way product development is supported by tools. Both topics have to be clearly defined and planned in order to guarantee sound product handling in the project execution phase. Terms and definitions (Section 3.3.1) as well as product management planning (configuration management planning and tool bindings; remainder of Section 3.3) are described in this chapter. 3.3.1
Terms and Definitions
Software Configuration Management (hereafter called SCM) definitions can be found in various publications (e.g., [IEEE 828], [IEEE 1024], [MIL1456A], [Ber92], [Buc96], [Dar90]). However, the definitions vary, and adequate "personal" definitions must be found for each project. Hence, the term SCM itself has many meanings, and the understanding of SCM influences any further definition of terms. For the first baselining project of the SFB 501, we (Team 3) prefer the following definition: SCM is a discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements [IEEE 610]. Further requirements on SCM are adherence to the defined development processes [Dar90] and minimizing the technical overhead for the developers when dealing with versioning and configuring activities. The following definitions are needed to understand this definition of SCM and are valid throughout the case study BaX.
Revision: A Configuration Item (CI) A' is called a revision of CI A if A' is a rework of A. Revisions are used to illustrate evolution in time.
Variant: A CI A' is called a variant of CI A if both A and A' share the same characteristics (like, in the case of a program module, the functional specification) but differ in certain aspects like data structure, platform, or user groups. Variants of CIs are used to illustrate parallel development. In SCM, variants are realized as different branches.
Version: A version is either a revision or a variant.
Release: A release is a version of a CI or of the complete system that is made public, either for other developers in the subsequent development phase or for the customer. Internal releases (developer) are distinguished from external releases (customer).
Configuration Item: A Configuration Item (CI) is an element of a configuration which is, in a certain sense, a stand-alone, test-alone, and use-alone element (e.g., a development product like a source file). A configuration itself can be a configuration item. In the context of the BaX case study, configuration items are either (atomic or compound) development products or configurations.
Configuration: A configuration is a set of versions, where each version comes from a different CI, and the versions are selected according to a certain criterion that fulfills a predefined consistency.
Baseline: A baseline is an agreed-on point in time, after which any change must be communicated to all parties involved. In terms of products, a baseline is a CI or a collection of CIs formally reviewed, agreed on, or designated at a given point in time during a project's life cycle [Ber92]. 3.3.2
Configuration Item Identification
Configuration item identification should be independent of the kind of configuration item (i.e., a certain development product or a configuration); all configuration items should be identified in a similar way. A pair (name, version) is sufficient to guarantee an unambiguous association with a configuration item if it is based on a clear naming and versioning schema. Name may be a filename, a logical name defined in the product model or in the configuration naming schema, or a search tuple for a database with an unambiguous search result. Version is a number or name which describes an unambiguous revision and variant of the relevant configuration item. 3.3.2.1
Naming Schema
Due to the process-oriented configuration management in BaX, names for development products (which are represented by their filenames) are again given in pairs (processId, productId). A function supports the association between this pair and its filename:
product-name ← getName (processId, productId)
Hence, product names are specified by the productId in the scope of a development process (defined on the granularity of the process model) given by the processId. The processId is a string directly derived from the process model (e.g., an abbreviation), and the productId is an unambiguous integer within the process scope3. Both are determined manually prior to the start of the case study and are provided to the developers. With this name representation, the number of products is dramatically reduced, and the same products can be treated differently depending on the process scope. Because product names are represented by Unix filenames including path information, they are unambiguous due to the Unix filesystem, i.e., one filename is associated with exactly one development product. Configuration names are given prior to the start of the case study —as far as they are known— and are managed by the configuration tool through a control list. This guarantees unambiguous configuration name management even if configurations are added during the execution of the case study. 3.
This cryptic naming was intended to be used by a process machine and will be transformed into a form better readable for humans in future case studies.
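A minimal sketch of the getName association is given below. The process abbreviation "ReqMod" and the filenames are invented for illustration; in BaX, the real (processId, productId) bindings were fixed manually before the case study started:

```python
# Hypothetical binding table; in BaX the real bindings were determined
# manually prior to the start of the case study.
PRODUCT_NAMES: dict[tuple[str, int], str] = {
    ("ReqMod", 1): "/sfb501/bax/reqmod/1",
    ("ReqMod", 2): "/sfb501/bax/reqmod/2",
}

def get_name(process_id: str, product_id: int) -> str:
    """product-name <- getName(processId, productId): resolve the pair to
    the (unambiguous) Unix filename of the development product."""
    return PRODUCT_NAMES[(process_id, product_id)]
```

Because the lookup is keyed by the pair, the same productId can resolve to different files in different process scopes, which is exactly the reduction in product names described above.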
3.3.2.2
Versioning Schema
Versioning is done on the basis of RCS [Tic85] for atomic products. Hence, version names for atomic products (i.e., development products that are exactly one item in the real world) are unambiguously numbered according to RCS using RCS mechanisms: every version4 of an atomic product is defined by a pair of integers (releaseNumber, revisionNumber), represented as a dotted pair: version = releaseNumber.revisionNumber, where the releaseNumber is meant to be a major release, increased before shipment of the executable system and in case the system has undergone major changes. RevisionNumbers illustrate smaller changes (e.g., within one day) of a single product. Compound products (i.e., products representing all files within a directory) and configurations are managed through lists (that are dynamic in the case of a compound product). List items (i.e., elements of lists) are versioned independently according to RCS. Since the list itself is an atomic product, it is again under version control. Additional functionality is used to create version and configuration numbers different from those generated by RCS. Compound products are again versioned as in RCS (consisting of two parts: releaseNumber.revisionNumber), whereas configurations are just numbered by their revisionNumber (i.e., an ordered sequence of integer numbers). A function getProduct is provided to get direct access to a configuration item. Development products (atomic or compound) are addressed as follows:
product ← getProduct (processId, productNumber, releaseNumber.revisionNumber)
To access the current version, the version number can be omitted:
product ← getProduct (processId, productNumber)
Configurations are not process-specific, which allows for an even simpler interface:
configuration ← getConf (configurationName, revisionNumber)
The functions getProduct and getConf each prepare the corresponding object for editing (i.e., checking out the given version with an exclusive lock on the product or configuration).
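The releaseNumber.revisionNumber scheme for atomic products can be sketched as follows. The class, the method names, and the initial version 1.0 are assumptions made for illustration; the actual numbering in BaX is produced by RCS:

```python
class AtomicProduct:
    """Sketch of RCS-style version numbers: releaseNumber.revisionNumber."""

    def __init__(self) -> None:
        # assumed starting point; RCS itself decides the real initial number
        self.release, self.revision = 1, 0

    def commit(self) -> str:
        """Smaller change (e.g., within one day): bump the revisionNumber."""
        self.revision += 1
        return self.version

    def ship(self) -> str:
        """Major release before shipment: bump the releaseNumber."""
        self.release += 1
        self.revision = 0
        return self.version

    @property
    def version(self) -> str:
        return f"{self.release}.{self.revision}"

p = AtomicProduct()
p.commit()   # 1.1
p.commit()   # 1.2
p.ship()     # 2.0
```

Configurations, in contrast, would carry only the single revisionNumber component, as described above.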
3.3.3
Baseline and Configuration Definition
In the case study BaX, process templates are used to define configuration sets and baseline triggers. Additionally, triggers for versioning and configuration construction are defined on the basis of process triggers. 3.3.3.1
Templates and Triggers
Process templates are patterns within the process model. Instances of process templates are parts of process models; hence, process templates have the semantics of a process meta model. Let's take a look at an example to get a more concrete impression of this abstract description: Fig. 13 shows an example of the verification pattern. Verification basically appears in every development phase; in BaX it is done just two times, because BaX only covers the requirements phase within a whole development cycle. Generally, the verification activity is related to a creation/modification activity (between triggers ➊ and ➋) and has both the input (called X) and the output product (called Y) as its input. Furthermore, the verifiers need checklists or similar supporting products to get their work done systematically. The verification activity results in a list of detected errors, which is normally followed by a rework activity to eliminate the previously found errors. From the point of view of product management, process templates can be used to identify versions and configurations that should be managed in a similar way.

4. Variants —through branching— are not needed and hence are not supported.

[Figure omitted: the verification template with triggers ➊–➏ around the activities create/modify Y, verify Y↔X, and rework on Y; a subconfiguration contains the input document X for create/modify Y, the support documents (e.g., checklists), and Y; the verification produces defect lists]
Fig. 13: Example process template: "verification"

The verification cycle appears in every development phase for nearly every development product. Verification expands the standard create/modify template by a verification/rework process and several products. Elements in Fig. 13 drawn in dashed lines are not explicitly modeled in the process model (see Fig. 8) to keep complexity low. The verification template illustrates six triggers:
➊: When starting a creation or modification activity, preparations for versioning (e.g., exclusive locking or checking out the version that should be worked on) on product Y have to be done. This trigger is activated by the developer through the tool invocation interface by calling an edit tool for the respective product and process.
➋: When ending a creation or modification activity, preparations for versioning (e.g., unlocking or increasing the version number) on product Y have to be done. This trigger is activated by the developer through the tool invocation interface by closing the respective edit tool.
➌: On starting the verification, three products are needed as input:
• product Y, which was created or modified by the 'create/modify' activity,
• the input product X for Y. Y can be regarded as a description of X on a lower abstraction level. Y is verified against X, which means that the consistency between these two products is checked,
• supporting documents (e.g., inspection checklists or review guidelines).
To guarantee consistency between these products regarding their development state, a new version of the subconfiguration is created (see Fig. 13) with those three products as elements. This subconfiguration is provided to the reviewers.
➍: On ending the verification, new defect lists are created or supplemented. At this point, the complete configuration is created, consisting of all products taking part in the verification template, i.e., the subconfiguration and all defect lists.
➎: If necessary, when starting the rework activity (which is similar to trigger ➊), preparations for versioning on product Y (i.e., locking, …) have to be done.
➏: On ending the rework activity, product Y's version is increased and made public for the subsequent development phase. This version will be part of a system configuration.
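Triggers ➌ and ➍ can be illustrated with a small sketch: on starting the verification, the current versions of Y, X, and the checklists are frozen into the subconfiguration; on ending it, the complete configuration adds the defect lists. All version numbers below are invented for illustration:

```python
def snapshot(products: dict[str, str]) -> dict[str, str]:
    """Freeze the current version of each product into a configuration:
    a set of versions selected by a consistency criterion (here: the
    versions present when verification starts)."""
    return dict(products)

# current versions of the products involved (illustrative values)
current = {"Y": "1.3", "X": "2.1", "checklist": "1.0"}

# trigger 3: on starting the verification, build the subconfiguration
# that is handed to the reviewers
subconfiguration = snapshot(current)

# trigger 4: on ending the verification, the complete configuration
# consists of the subconfiguration plus the defect lists produced
configuration = {**subconfiguration, "defectlist": "1.0"}
```

Snapshotting the versions rather than the files themselves is what guarantees that reviewers, defect lists, and rework all refer to the same development state.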
For BaX, two more process templates have been the basis for trigger definition: a template for validation, which is similar to verification except that it contains additional products, and a template for the system, used to define system release configurations. These triggers are used to perform the necessary configuration management activities for versioning of products and configurations. If an underlying process machine (such as ProcessWeaver [Bou93]) is used that is able to interpret the process model, these triggers can be (manually) bound to instances of the process templates (i.e., to parts of the process model) and activated automatically. 3.3.3.2
Instantiation: Versioning, Configurations and Baselines
In the case study BaX, process templates as described in Section 3.3.3.1 have to be identified and instantiated. Fig. 14 shows three configurations created by triggers of type ➍, denoted as configurations verification 1, verification 2, and validation. Configuration verification 1 shows deviations from the previously defined process template and triggers: it has been decided that the input products Problem and Domain Knowledge are not under version control and, hence, are not part of a configuration. Baselines are defined (and always created) when an instance of a process template is completed. The resulting configurations are part of the system's baseline. Versioning on a lower level that is not represented on the coarse-grained level of the process model or the project plan is bound to an explicit tool call for modification (see Section 3.3.4). Explicit versioning (e.g., creation of personal versions for internal testing) is always possible, totally independent of automatic versioning. 3.3.4
Tool Invocation
In BaX, only two types of documents are developed: FrameMaker and SDT files. In order to prevent accidental modification in the case of intended view-only access, the twin tool to FrameMaker, called ireader, should be used for FrameMaker files.
[Figure omitted: the three configurations verification 1, verification 2, and validation, spanning the requirements analysis processes (Informal_Object_Design with Object_Structure_Design and Task_Assignment, Verify_Requirements_Description, Requirements_Modeling, Verify_System_Requirements, Create_Test_Cases, Create_Prototype, and Prototype_Test) and their products (Problem, Domain_Knowledge, Application_Knowledge, Object_Structure, Requirements_Description, System_Requirements, Test_Cases, Prototype)]
Fig. 14: Configurations defined by instantiated process templates
Process-oriented product management allows for individual product tool binding focused on the scope of the process. Each process determines the kind of access —view or modify— to a product as well as its type —FrameMaker or SDT. As described in Section 3.3.2, products are identified unambiguously by the function getProduct:
product ← getProduct (processId, productNumber, releaseNumber.revisionNumber),
where releaseNumber.revisionNumber can be omitted if the current version is needed. A function launchPrg:
tool-call(product) ← launchPrg (processId, productNumber, releaseNumber.revisionNumber),
determines the corresponding product —atomic or compound—, locks it exclusively if it is meant to be modified, and calls the corresponding tool. After work is finished and the tool is closed, the product is added to the version pool by increasing the version number, and the lock is released. 3.3.5
The SCM System
As shown in Fig. 15, the provided SCM tool consists of the public domain versioning tool RCS, which manages the versioning of atomic products. Compound product management, configuration management, tool invocation, and the association of products and processes, as well as the trigger mechanism, are provided by a set of csh scripts. They use basic RCS functions and provide a mapping between compound and atomic product management as well as between configuration and atomic product management. Auxiliary files are used to manage the additional information needed for compound products and configurations. A tool binding list describes the association of activities on products with a corresponding tool call and allows for external tool binding management without changing a line of code. Through this tool invocation interface, the developer never calls a development tool directly, which would allow products to be manipulated without SCM control; this could lead to inconsistencies within the product hierarchy, even to data loss. This first version of a supporting SCM system for SFB 501 case studies is implemented as a command line interface that provides a set of commands with at most two to three parameters. This user interface is easy to apply and may be extended by a GUI to increase usability in the future. The current implementation with csh scripts is heavily platform dependent and is only portable within Unix systems. 3.3.6
Training
Due to the level of automation, no special training is needed to use this SCM tool. The developers are provided with a description of the tool interface's functionality, whose use is almost as simple as a direct tool call.
[Figure omitted: the process-oriented SCM system: process description, trigger rules, a tool binding list, and auxiliary files drive the scripts for compound product versioning, configuration management, and tool invocation; RCS performs atomic product versioning; development tools (e.g., SDT) operate on the development products]
Fig. 15: Configuration management system
More complex operations —like the creation of a special configuration or the addition of a new product— are supported with the help of the product manager.
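The lock/call/check-in cycle of launchPrg described in Section 3.3.4 can be sketched as follows. This is a simplification: the real system binds tools via csh scripts and RCS, and all names, the lambda stand-in for a tool call, and the initial version are our own illustration:

```python
class VersionedProduct:
    """Illustrative stand-in for an atomic development product."""
    def __init__(self, version: str = "1.0") -> None:
        self.version = version
        self.locked = False

def launch_prg(product: VersionedProduct, tool, modify: bool = True) -> None:
    """Sketch of launchPrg: lock the product, run the bound tool, then
    release the lock and add the result to the version pool by bumping
    the revision number."""
    if modify:
        assert not product.locked, "product is already checked out exclusively"
        product.locked = True                 # exclusive lock (checkout)
    tool(product)                             # blocks until the tool is closed
    if modify:
        release, revision = product.version.split(".")
        product.version = f"{release}.{int(revision) + 1}"
        product.locked = False                # check in / release the lock

doc = VersionedProduct("1.4")
launch_prg(doc, tool=lambda p: None)          # stand-in for, e.g., a FrameMaker call
# doc.version is now "1.5" and the lock is released
```

The point of routing every tool call through this single function is the one made above: the developer never touches a product outside SCM control.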
3.4
Data Collection Procedures
Before the actual execution of the project could start, the data collection procedures for the collection of the measurement data had to be defined. As described in Chapter 2, the quantifiable goals have already been formalized in accordance with the GQM method. Therefore, all metrics that need to be collected for the accompanying measurement program of BaX are listed in the three GQM plans for defects, effort, and calendar time (Appendix B). To define the data collection procedures, a measurement plan is set up. A measurement plan is basically a table that contains all information concerning the measures to be collected, the point-in-time (according to the process model) when they will be collected, and the procedures that are used to collect the measurement data. It therefore integrates the different information from the GQM plans and the process model. First, all metrics and their descriptions are extracted from the GQM plans and written into the table of the measurement plan, one row for each metric. Then, it is defined for each metric —with the help of the information in the process model— which resource (tool or person) can offer the needed data and at which point-in-time the data can be collected. Typical points-in-time are the entry or exit points of a (sub-)process, or the occurrence of a special event or exception. In the next step, the table is resorted according to the different point-in-time entries in the measurement plan. The number of different points-in-time defines the needed data collection procedures. Tab. 2 shows an excerpt of the measurement plan for BaX. The depicted rows are the entries in the measurement plan for the metrics of question Q1 (see Fig. 5) from the GQM plan defects.

| metric ID    | metric description                                     | collected by | point-in-time    | questionnaire name | question(s) on questionnaire |
| Defect_Q1_M1 | Identifier of process in which the defect was detected | developer    | defect detection | Defects            | 4                            |
| Defect_Q1_M2 | Identifier of source product                           | developer    | defect detection | Defects            | 6, 7                         |

Tab. 2: Excerpt from the BaX measurement plan
In BaX, 17 different points-in-time were identified. It turned out that we could only collect measurement data directly from the persons who perform the processes; no measurement data could be collected automatically by tools. So it was decided to use (paper) questionnaires to collect the measurement data. To reduce the number of questionnaires, it was further decided to use the same questionnaire for measures that have to be collected at the entry and exit point of a certain process. This reduced the final number of questionnaires (i.e., the data collection procedures chosen for BaX) to nine: one questionnaire for defects and one for each of the eight processes Object_Structure_Design, Task_Assignment, Verify_Requirements_Description, Requirements_Modeling, Verify_System_Requirements, Create_Test_Cases, Create_Prototype, and Prototype_Test to collect the data for calendar time and effort. As an example, Fig. 16 shows the part of the questionnaire defects that is used to collect the two metrics Defect_Q1_M1 and Defect_Q1_M2 from the measurement plan excerpt in Tab. 2. The complete questionnaire can be found in Appendix D.
4. Process identifier in which the defect was discovered: ______________________ 5. Description of the defect:_____________________________________________ _________________________________________________________________ (How was the error observed?) C. Error analysis 6. Name / identifier of the source product (i. e., product on the highest level of abstraction) affected by the error: ______________________________________ 7. Version number of error-prone product: _________________________________ Fig. 16: Excerpt from the BaX questionnaire ‘defects’
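The derivation of the data collection procedures, i.e., grouping the measurement plan rows by point-in-time and merging the entry and exit points of one process into a single questionnaire, can be sketched as follows. The effort metric IDs and the point-in-time labels are invented for illustration; only Defect_Q1_M1 and Defect_Q1_M2 appear in Tab. 2:

```python
from collections import defaultdict

# (metric ID, point-in-time) rows as in the measurement plan;
# all entries except the two defect metrics are hypothetical
rows = [
    ("Defect_Q1_M1", "defect detection"),
    ("Defect_Q1_M2", "defect detection"),
    ("Effort_RM_M1", "entry Requirements_Modeling"),
    ("Effort_RM_M2", "exit Requirements_Modeling"),
]

def questionnaire(point_in_time: str) -> str:
    """Map a point-in-time to its questionnaire; the entry and exit
    points of one process share a single questionnaire."""
    for prefix in ("entry ", "exit "):
        if point_in_time.startswith(prefix):
            return point_in_time[len(prefix):]
    return "Defects"

plans: dict[str, list[str]] = defaultdict(list)
for metric, pit in rows:
    plans[questionnaire(pit)].append(metric)
# four rows collapse into two questionnaires here; in BaX the same
# merging reduced 17 points-in-time to nine questionnaires
```
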
4 Project Execution

4.1 Trace
This chapter describes the flow of activities of Team 3 in relation to calendar time, as recorded in Fig. 18. The project started with the object structure design. Because a well-structured building model existed as input, less time was spent on this step than if it had been done from scratch. During task assignment, first the needs of the problem description were assigned to the objects. This was the starting point for defining the tasks of each object which fulfilled the assigned needs. Some changes in the object structure resulted from task assignment in order to reduce the complexity of some of the objects. But these changes were very local; no global changes in the model had to be made. At the end of the informal object design, a verification step followed. The errors in the documents were corrected in a new iteration of the task assignment step and verified again. After this, requirements modeling could be started. During this step, some iterations to refine the requirements description document had to be done. Most of these changes related to the definition of signals, less to real errors which had not been detected in the earlier verification steps. This step took the longest time, nearly one and a half months. In parallel, the definition of the test cases was done. A lot of time was spent here, because a suitable description technique had to be developed. The prototype creation could be started early in the process, because the model-specific parts are fully generated and the interface to the environment was defined outside this process. This is the only necessary input for the creation of the model-independent parts, depicted in Fig. 12. Finally, the prototype test was done. The errors found in this process step were corrected immediately, and it was checked whether their correction had an influence on a test case which had already been tested. If so, the test was repeated.
4.2
Raw Data
Each questionnaire was validated by A1 personnel before it was taken into account for the evaluation. For example, it was checked that each question was answered and that no unrealistic values had been inserted. Altogether, 155 questionnaires were collected between May 26 and October 14, 1998, while BaX was executed. Tab. 3 shows the concrete numbers of the different types of questionnaires that were collected. As can be seen, a total of 119 defects were reported, most of them found in the verification steps of the process (i.e., Verify_Requirements_Description and Verify_System_Requirements). Tab. 4 breaks the defects down by the products in which they occurred. All defects caused a total rework effort of 1,129 minutes. Compared to the total effort of 22,074 minutes needed for executing the complete BaX case study, this amounts to about 5% rework time. Tab. 5 shows how much effort was spent on each process. Graphical representations of the collected data concerning calendar time, effort, and defect detection can be found in Chapter 5.
| questionnaire                   | # handed in |
| Object Structure Design         | 1           |
| Task Assignment                 | 6           |
| Verify Requirements Description | 12          |
| Requirements Modeling           | 5           |
| Verify System Requirements      | 6           |
| Create Test Cases               | 2           |
| Create Prototype                | 1           |
| Prototype Test                  | 3           |
| defects                         | 119         |

Tab. 3: Number of collected questionnaires by type
| product                  | # defects |
| Problem                  | 3         |
| Object Structure         | 2         |
| Requirements Description | 83        |
| System Requirements      | 31        |
| Test Cases               | 0         |
| Prototype                | 0         |

Tab. 4: Number of defects by products
| process                         | total effort (incl. rework) | rework effort |
| Informal Object Design          | 4,198                       | 641           |
| Verify Requirements Description | 931                         | 157           |
| Requirements Modeling           | 10,060                      | 271           |
| Verify System Requirements      | 860                         | 60            |
| Create Test Cases               | 3,590                       | 0             |
| Create Prototype                | 555                         | 0             |
| Prototype Test                  | 1,880                       | 0             |

Tab. 5: Effort distribution (in minutes)
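The totals in Tab. 5 and the roughly 5% rework share quoted in Section 4.2 can be checked with a few lines:

```python
# total effort (incl. rework) and rework effort per process, in minutes (Tab. 5)
effort = {
    "Informal Object Design":          (4198, 641),
    "Verify Requirements Description": (931, 157),
    "Requirements Modeling":           (10060, 271),
    "Verify System Requirements":      (860, 60),
    "Create Test Cases":               (3590, 0),
    "Create Prototype":                (555, 0),
    "Prototype Test":                  (1880, 0),
}

total = sum(t for t, _ in effort.values())    # 22,074 minutes in total
rework = sum(r for _, r in effort.values())   # 1,129 minutes of rework
share = 100 * rework / total                  # about 5.1%, reported as 5%
```
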
4.3
Qualitative Experience
Qualitative experience is documented in natural language. Documenting experience qualitatively is especially appropriate for those experiences that are valuable for future projects and that are not in the focus of GQM goals, i.e., that are not gained from measurement activities and quantitative analyses. Documenting lessons learned is an example of describing qualitative experiences in a structured way as input for subsequent analyses and processing. The following aspects are described:
1. Situation (in which situation did a problem emerge, i.e., what are the project characteristics and the project state?),
2. Symptom (what problem appeared?),
3. Diagnosis (what are possible causes of the problem?),
4. Reaction (what has been done during the project to solve the problem?),
5. Result (to which extent could the problem be solved and what were the consequences?), and
6. Recommendation (what has been learned and what can be recommended for future projects?).
The documentation of the reaction, the result, and the recommendation is optional. In BaX, qualitative experiences were gathered in a meeting of the developers and the members of the supporting team after project termination. Additionally, the support team added experience concerning their support procedures and techniques. A list containing essential qualitative experience can be found in Appendix E.
4.4
System Documentation
Here we present the problem and give an overview of the documents that were created during the requirements analysis process. They are specified according to the development steps introduced in Chapter 3.2.

Problem
The task for Team 3 was the development of an intelligent lighting control system for the 4th floor of Building 32, part of the Computer Science Department at the University of Kaiserslautern. In contrast to typical commercial light control systems, individual user comfort and control had to be combined with energy saving as sometimes conflicting objectives. In a university environment with many different and changing users with irregular working habits, this is no straightforward control task. An additional objective was extendability, for example for climate and access control. Fail-safe features were another objective. With these objectives, the task was complex enough to fulfill the definition of a large system, but on the other hand it was in the range of possibilities for a small team. Therefore, it was a good test case and also a good example for advanced light control systems. The control philosophy or strategy was founded on three aspects. First, control should be based on the occupancy of spaces with possible user or facility manager overrides. Occupancy is difficult to detect reliably; therefore, any strategy had to be fault tolerant. On the other hand, occupancy can result in the best energy savings if applied intelligently. Second, indoor light levels should be controlled by outside illuminance instead of indoor sensors. This decision was based on the experience of lighting designers. It requires some celestial calculations and a set of sensors. Third, the system should be tolerant in the case of software/hardware failures, e.g., of controllers, communication channels, or sensors. Manual overrides had to function even in the case of large failures. The problem was specified in two documents, the building description and the problem description.
The document building description contains the architecture of the floor and the installed sensors and actuators, including their physical specification such as response time or range of values. The whole floor is divided into three sections, according to the given firewalls. There are 22 rooms of different types and three hallway sections to be controlled. The types and numbers of rooms can be found in Fig. 17, which also shows the locations where the outdoor light sensors are installed. Besides the outdoor light sensors, there are four more types of sensors (switches, status lines, push-buttons, and motion detectors) and two types of actuators (dimmers and impulse relays) installed. The assignment of these sensors and actuators to the different kinds of rooms can be found in Appendix A.3. The system's behavior from the customer's point of view is defined in the problem description. 39 needs are grouped into user needs (U1..U19), facility manager needs (FM1..FM11), and non-functional
[Figure: floor plan of the 4th floor with three hallway sections (H1-H3), the rooms O435, O433, O431, O429, O425, O423, O421, O419, O417, O415, O414, O416, O424, O412, M427, CL426, CL422, CL418, CL411, CL410, P420, P413, and the positions of the outdoor light sensors ols1-ols6. Legend: Oxxx: Office; Mxxx: Meeting room; CLxxx: Computer lab; Pxxx: Peripheral room; Hx: Hallway; olsx: Outdoor light sensor.]
Fig. 17: Architecture of the 4th floor
needs (NF1..NF9). The user needs describe the system behavior related to the user, that is, primarily the behavior of the different rooms. An example of a user need is given below:
• U1: If a person occupies a room, the light has to be sufficient to move safely, unless something else is desired by a chosen light scene.
The facility manager needs deal with more building-specific needs, like controlling the light centrally or observing energy consumption. An example of such a need is:
• FM6: The facility manager can turn off any light in a room or hallway that is not occupied.
Finally, the non-functional needs deal with safety and legal aspects as well as time requirements and fault tolerance. One need is listed below:
• NF3: If an outdoor light sensor does not work correctly and a hallway is occupied, the lights in this hallway have to be on.
The whole problem description can be found in Appendix A.1, a domain dictionary delivered by the customer in Appendix A.4.

Object Structure

The first approach to the system architecture is described in the document object structure. This structure contains 20 different types of objects, which can be divided into three major groups:
• Sensors: Outdoor Light Sensor, Switch, Motion Detector, Sun Detector (composed of several outdoor light sensors)
• Actuators: Tasklight, Light, HWLight
• Control system specific types: HWDoor, HWLight, HWOcc, Hallway, Door, Desk, RoomOcc, Office, CompLab, PRoom, MRoom, Section, Floor
The whole object structure (defining the relations between the types and instances) is shown as an object diagram in Appendix A.5.

Requirements Description

The document requirements description contains the different tasks and their strategies for each object. We defined about 70 tasks in natural language; to define the strategies, we also used tables. The document is structured by object type, i.e., for each type (excluding the simple sensors and actuators) the tasks and strategies are defined. Here is an example of a task defined for the composed object sun detector:
• Task: determine the illuminance with respect to the given room
• Strategy:
  • determine the correct outdoor light sensor(s) and return its value(s), according to Tab. 6
  • if the sensor value(s) of column "light sensor" are not accessible, use the light sensors given in column "aux. light sensor"; if these are not accessible either, return an error

  room number          light sensor         aux. light sensor
  435, 433, 431, 429   ols1                 ols5
  427                  (3*ols1+5*ols5)/8    ols1 or ols5
  425, 423, 411, 413   ols5                 (ols1+ols3)/2
  421                  (5*ols3+1*ols5)/6    ols3 or ols5
  419, 417, 415        ols3                 ols5
  426, 424             ols2                 ols6
  422                  (ols2+3*ols6)/4      ols2 or ols6
  420                  (ols4+3*ols6)/4      ols4 or ols6
  418, 416, 414        ols4                 ols6
  412                  (3*ols4+3*ols6)/6    ols4 or ols6
  410                  ols6                 (ols2+ols4)/2
Tab. 6: Assignment room number ↔ relevant light sensors
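The sensor-fallback strategy of Tab. 6 can be sketched as follows; the data structures and function names are assumptions for illustration, not part of the requirements description (only two table rows are shown):

```python
# Sketch of the sun-detector strategy from Tab. 6: each room maps to a
# primary and an auxiliary illuminance formula over the outdoor light
# sensors ols1..ols6; a sensor that fails reads None.

ASSIGNMENT = {  # room: (primary formula, auxiliary formula)
    429: (lambda s: s["ols1"],
          lambda s: s["ols5"]),
    427: (lambda s: (3 * s["ols1"] + 5 * s["ols5"]) / 8,
          lambda s: s["ols1"] if s["ols1"] is not None else s["ols5"]),
    # ... remaining rooms analogous to Tab. 6
}

def illuminance(room, sensors):
    """Return the outdoor illuminance relevant for a room, falling back to
    the auxiliary sensors if a primary sensor is not accessible."""
    primary, auxiliary = ASSIGNMENT[room]
    try:
        value = primary(sensors)
        if value is not None:
            return value
    except TypeError:          # a None reading inside an arithmetic formula
        pass
    value = auxiliary(sensors)
    if value is None:
        raise ValueError("no accessible light sensor for room %d" % room)
    return value
```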
Requirements Specification

Using SDT as a tool for creating SDL models, we defined block types for each object in the object structure, using the template shown in Fig. 10. The templates were refined to the correct number of subinstances in the object structure as shown in Appendix A.5. That means the different room types (according to the number of doors) and the type section (according to the number of rooms) were refined into nine different types. The resulting 24 different types are instantiated to 388 instances. Besides a structural description, including the subobjects and the signal lists for interaction, these block types implement the tasks as defined above. The whole system was divided into several parts, each modeled as an SDL package, which acts as a small library. The packages are as follows:
- Datatypes: includes all type definitions and the definition of globally used signals
- Utils: includes some useful procedures, for example to realize the updating of physical sensor values
- Sensors: the sensors used in the system
- Actuators: the actuators used in the system
- CP1 - CP4: the control system hierarchy. Because of its complexity and the need for several developers to work on it in parallel, the hierarchy was divided into four parts. These parts relate to subtrees of the object structure and therefore to each other.
- System: the interface of the system to the environment (the protoctrl) and the instance floor4 of the highest block type in the hierarchy.

Test Cases

The test cases were developed in parallel to the requirements analysis process, using only the problem description and the object structure document as inputs. They consist of 24 different tests concerning the different needs.
Here is an example regarding the need to provide a given level of light:
• Testcase 5:
  Start conditions: select Room (Office or CompLab or PRoom or MRoom); selected Room is empty, i.e., RoomOcc = false
  Observe: ambient light in Room (iAmb) (sim), ceiling lights (sim), task light (sim), RoomOcc (sys)
  Sequence:
  1. set minimum ambient light value (iAmbMin)
  2. set outside illuminance = 2 lux /* no light */
  3. select a new light scene /* controller should calculate the resulting inside illuminance (in_ill) from the outside illuminance */
  4. let a user enter the empty Room; expected result: RoomOcc should change to true; if [(in_ill > iAmbMin + 10%) and (light scene does not require more light)] -> artificial lights (ceiling lights) = off (else on)
  5. increment outside illuminance in steps of 1000 lux until outside illuminance > 9000 lux, return to 3.
  6. in turn, return to 3 until all light scenes have been tested
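The decision checked in step 4 and the stimulus loop of steps 2, 5, and 6 can be sketched as follows; all identifiers are illustrative assumptions, not taken from the test case documents:

```python
# Sketch of Testcase 5: the expected lamp state of step 4 and the stimulus
# sequence of steps 2/5/6. All names are invented for this illustration.

def expected_ceiling_lights(in_ill, i_amb_min, scene_needs_more_light):
    """Step 4: ceiling lights off iff the inside illuminance exceeds the
    minimum by more than 10% and the light scene needs no more light."""
    if in_ill > i_amb_min * 1.10 and not scene_needs_more_light:
        return "off"
    return "on"

def stimulus_sequence(scenes):
    """Steps 2/5/6: for every light scene, drive the outside illuminance
    from 2 lux upward in steps of 1000 lux until it exceeds 9000 lux."""
    for scene in scenes:
        lux = 2
        while lux <= 9000:
            yield scene, lux
            lux += 1000
        yield scene, lux        # first value above 9000 lux
```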
5 Empirical Results
This chapter surveys essential results concerning quantitative analyses (goals 1-3) and qualitative experience (goals 4-6). The description of the quantitative analyses comprises a graphical representation of the baseline, analysis results, their interpretation in the context of the Team 3 case study, and consequences for the next experiment and the contents of the SFB-EB. In addition, the results of deeper quantitative analyses are surveyed. Finally, some essential qualitative experience is sketched. A more detailed description of the qualitative experience can be found in Appendix E.
5.1 Analysis Concerning Calendar Time (Goal 1)

5.1.1 Results
For all processes of Team 3: When (start time and finish time) was the process enacted?
[Figure: Gantt chart of process enactment from May to October. Processes: 1.1: Object_Structure_Design; 1.2: Task_Assignment; 2: Verify_Req._Desc.; 3: Req._Modeling; 4: Verify_Syst._Req.; 5: Create_Test_Cases; 6: Create_Prototype; 7: Prototype_Test.]
Fig. 18: Calendar time baseline
5.1.2 Analysis and Interpretation
Analysis:
• longest process: Requirements_Modeling (98/6/4 - 98/7/16, with rework lasting till 98/10/9)

Interpretation:
• most complex task of all, refinement and formalization
• low developer experience
• synchronizing work spaces was time-intensive
• unequal partitioning of development tasks led to waiting times
• dictionary incomplete
• unnecessary defects (caused by inconsistencies)

Tab. 7: Calendar time (analysis and interpretation)
5.1.3 Consequences
Next experiment:
• partitioning of tasks (respectively components) into equal-sized, fine-granular work packages
• improved synchronization of work assignments
• more training
→ further modifications of the development process (see subsequent analyses)

Experience base:
• revised calendar time model
• revised / enhanced dictionary
• revised / enhanced development guidelines
→ storage of (packaged) reusable artifacts

Tab. 8: Calendar time (consequences)
5.2 Analysis Concerning Effort (Goal 2)

5.2.1 Results
What is the effort distribution in the Team 3 case study broken down by processes (distinguished by causing processes)?
[Figure: bar chart of effort in minutes per process (IOD (1), VRD (2), RM (3), VSR (4), CTC (5), CP (6), PT (7)), broken down by causing process; the process Requirements_Modeling accounts for by far the largest share (9789 minutes).]
Fig. 19: Effort baseline
5.2.2 Analysis and Interpretation
Analysis:
• particularly high: in the process Requirements_Modeling (163 h)
• reasonable rework effort: in the process Informal_Object_Design (17 h)

Interpretation:
• the process Requirements_Modeling is very complex
• additional effort in the process Requirements_Modeling for the adaptation of task granularity
• additional effort for the necessary refinement of the product Object_Structure
• many costly removable defects in the product Requirements_Description
• defect classification insufficient with respect to additions

Tab. 9: Effort (analysis and interpretation)
5.2.3 Consequences
Next experiment:
• introduction of a new process Refine_Object_Structure (as a subprocess of Requirements_Modeling)
• improvement of the process Verify_Requirements_Description (enhanced check lists)
• more precise definition of the exit criteria for the process Informal_Object_Design

Experience base:
• revised development guidelines (e.g., guidelines describing the abstraction level for tasks)
• revised process model for the SDL-based requirements analysis technique
• revised check lists

Tab. 10: Effort (consequences)
5.3 Analysis Concerning Defect Detection (Goal 3)

5.3.1 Results
How many defects were detected in each process of the Team 3 case study (distinguished by source products)?
[Figure: bar chart of the number of defects detected per detection process (IOD + VRD (1+2), RM + VSR (3+4), CTC (5), CP (6), PT (7)), broken down by source product (Problem, OSD + Req._Description, System_Requirements, Test_Cases, Prototype); most defects were detected in RM + VSR.]
Fig. 20: Defect detection baseline
5.3.2 Analysis and Interpretation
Analysis:
• particularly high: in the processes Requirements_Modeling and Verify_System_Requirements (69%)
• particularly inefficient: in the process Verify_Requirements_Description a lot of defects (64) were overlooked

Interpretation:
• notation of the product Requirements_Description too informal / abstract
• developers not familiar with the enactment of the processes
• process Verify_Requirements_Description not efficient
• some additions and refinements were interpreted as defects, because the templates for error analysis did not allow differentiation

Tab. 11: Defect detection (analysis and interpretation)
5.3.3 Consequences
Next experiment:
• change notation / document structure of the product Requirements_Description (more formality, more traceability)
• support error/refinement tracing by the system
• more training
• revised development guidelines
• distinguish errors from refinements

Experience base:
• improve automatic support of error/refinement recording, change forms for analysis
• change verification process
• change defect classification
• adaptation of defect models due to the new defect classification

Tab. 12: Defect detection (consequences)
5.4 Analysis Concerning Defect Types

5.4.1 Results
For the product Requirements_Description: What is the distribution of detected defects broken down by defect class?
[Figure: bar chart of the number of detected defects (>= 3) per defect class; classes: signal missing, argument type has changed, new signal name used, task missing, dispensable signal, signal defect, strategy incorrect, incorrect task description, argument is dropped.]
Fig. 21: Defect types baseline
5.4.2 Analysis and Interpretation
Analysis:
• particularly frequent: defects related directly or indirectly to signals

Interpretation:
• incomplete interface descriptions
• inconsistent representation of signals in the product Requirements_Description
• missing naming conventions
• ignorance of / missing development guidelines

Tab. 13: Defect types (analysis and interpretation)
5.4.3 Consequences
Next experiment:
• change notation / document structure of the product Requirements_Description
• put more effort into interface descriptions
• update dictionary frequently with improved system support
• base defect model on artifact model

Experience base:
• revised dictionary (extension with standard identifiers)
• revised development guidelines
• revised defect models
• revised check lists for verification processes

Tab. 14: Defect types (consequences)
5.5 Further Quantitative Analyses
• Focus: effort per object
  Results: the modeling of some objects was very costly
  Interpretation: not yet understood; probably too little experience with control systems
  Consequence(s): packaging of reusable objects

• Focus: correlation between object complexity and effort
  Results: no correlation identifiable if all objects are analyzed (correlation factor: 0.34 for the product Requirements_Description; 0.42 for the product System_Requirements); correlation exists if only those objects are analyzed that are not modifications of others (correlation factor: 0.79 for the product Requirements_Description; 0.78 for the product System_Requirements); correlation concerning rework effort exists
  Interpretation: complexity measure adequate? some objects are additions / modifications of other objects (internal reuse)
  Consequence(s): usage of other complexity measures; stronger verification of complex objects in order to reduce rework effort

• Focus: defects per defect class
  Results: signal defects dominate
  Interpretation: inconsistent representation of signals
  Consequence(s): revised development guidelines, revised notation / document structure

• Focus: defects per object
  Results: some specific objects (with large complexity) were very defect-prone
  Interpretation: to be expected; verification not tailored to object complexity
  Consequence(s): intensify verification of defect-prone objects, enable early testing of complex objects by introducing stubs

• Focus: correlation between object complexity and number of defects
  Results: linear correlation exists
  Interpretation: complexity influences defect-proneness
  Consequence(s): stronger verification of complex objects

• Focus: effort for defect correction
  Results: signal defects cause high correction effort (cumulated and average)
  Interpretation: correction is difficult because many interfaces exist
  Consequence(s): avoid signal defects by using an improved representation for signals; provide change support

• Focus: defect slippage (broken down by defect classes)
  Results: many signal defects were detected very late
  Interpretation: many signal defects were overlooked in the verification processes
  Consequence(s): improved verification processes

Tab. 15: Further analyses concerning effort and defects
5.6 Essential Qualitative Experience
• Description: Component testing missing. The complexity of the whole system necessitates systematic testing of system parts.
  Consequence: introduction of component testing as part of the process Requirements_Modeling.

• Description: No systematic propagation of changes and decisions / agreements with the customer. The developers were not systematically informed about design decisions and agreements with the customer.
  Consequences: explicit documentation of decisions and dependencies; use of (tool-supported) notification mechanisms may help; integration of the customer early in the development process (at defined milestones).

• Description: Assignment of detected defects to defect classes was difficult. Incompleteness and additions were not defined precisely enough; a systematic defect model was missing.
  Consequences: modification of the defect classification; improvement of defect registration support.

Tab. 16: Essential qualitative experience
A complete list of the qualitative experiences that were assembled during a brainstorming session is included in Appendix E. This list and the discussion during its preparation proved to be as helpful as the quantitative measurements. This is due to the fact that the example was too small to provide enough cases for statistical evaluations and that no comparable data existed. Most of the quantitative data showed normal behavior, which is a sign of the "normality" of the process. Some special values, together with the personal experience of the team members, led to most of the improvements planned for the next case study. In future experiments, the collection of qualitative experience should become an important part of the experimentation procedure.
6 Experience Base Update
This chapter describes how the final results of the Team 3 experiment (i.e., the BaX case study) are reflected in the SFB-EB (experience base). To this end, all experience elements that have been modified (Section 6.1) or added (Sections 6.2 and 6.3) to the SFB-EB are listed. In addition, the entry of the whole Team 3 project in the case studies area of the experiment-specific section of the SFB-EB (see Fig. 1) is sketched.
6.1 Changes of Existing Experience
Due to the fact that BaX was the first case study of its type, only a few experience elements could be reused from the SFB-EB (see Section 1.5 on page 5). It is therefore not surprising that the experience gained in BaX led to no direct changes of these experience elements. The only changes caused by BaX appeared in the structure that describes these experience elements in the SFB-EB. To be more precise, 'uses/used_in' relations were added between the BaX documentation in the experiment-specific section and the (re-)used experience elements in the organization-wide section of the SFB-EB. Since it can be assumed that an experience element that has often been (successfully) reused in different experiments is quite valid, the number of used_in relations for a certain experience element gives a first hint at its validity. Hence, the experience elements that were reused in BaX can be trusted more in the future.
6.2 The Team 3 Documentation in the Experiment-Specific Section
As for all SFB 501 experiments conducted so far, we recorded for BaX every result that might be relevant for future reuse and that may be regarded during the analysis and packaging phase in an EDB section called the experiment-specific section (see Section 1.4 on page 4). This section is structured as follows:

Technical Preparation

The technical preparation part contains data gathered prior to the official planning of the experiment. Generally, it is a small part with just a few entries, like calls for personnel, first discussion minutes, etc. Despite its small size, it is useful when new experiments are to be planned. The remainder is structured according to the results of the first five steps of the quality improvement paradigm (QIP) [BR88].

Characterization (QIP 1)

A characterization of an experiment (most useful in a formal or at least semiformal format) is essential if reuse plays an important role in software planning, management, and development. Reusable elements resulting from an experiment are primarily valid in the context in which the experiment took place. The context itself is described by this characterization part, which provides placeholders for the context vector described in Section 1.4, a description of the team members with all kinds of addresses to ease communication, as well as a description of the technologies applied in this experiment.

Goals (QIP 2)

The main intention of this part is to keep track of the project and experimental goals. Project goals cover a description of the kind of product to be developed as well as a list of quality requirements the project execution and its results should fulfill. Project goals focus on the current project, whereas experimental goals are driven by a long-term improvement goal in a sequence of experiments, to which the current experiment should contribute. BaX provides a list of GQM plans to get a baseline for effort, errors, and calendar time.

Plan (QIP 3)

This part covers the complete project plan describing, among others, the process, product management, and resources. Furthermore, it provides the developers with a list of useful documents to support and facilitate their tasks. Examples are: guidelines, checklists for the different reviews, training manuals, more detailed help (technology packages) for the applied planning, management, and development technologies, as well as a list of external literature references describing these technologies.

Execution (QIP 4)

The execution part holds the complete system documentation covering all phases of the development process (e.g., analysis, design, coding, test). In the case of BaX this part holds the problem description written in natural language, an SDL description of the requirements specification, and test cases as well as test results. This part also covers measurement data gathered to support experimental measurement as well as quality control during experiment execution. The latter can be found in the subpart 'project trace'.
Analysis (QIP 5)

This part contains the results of the analysis phase of the experiment BaX. The analysis of the experiment's data was done with regard to possible future reuse. Objects of analysis are development products, management products, and quality models based on measurement data and lessons learned collected during and after experiment execution. Herein, concepts of new quality models are prepared for future reuse. The experiment-specific section holds the complete documentation about an experiment. Parts of it are intended to migrate to the organization-wide section in the packaging phase, mainly elements from the analysis part. In order to allow for new (differing) decisions at later times, all information about the experiment needs to be collected and stored persistently. The on-line documentation of the BaX-specific section can be found as part of the SFB-EB [1].
6.3 Newly Gained Experience in the Organization-Wide Section
In addition to the documentation of BaX in the experiment-specific section of the SFB-EB, a few new experience elements have been stored in the organization-wide section, too. However, the total number of new experience elements in this section of the SFB-EB is relatively small. This is due to the fact that most of the gained knowledge is first-hand experience that has not been made before and is therefore kept in the experiment-specific section until it is validated by further experiments and can then be packed into the organization-wide section. In the following we list and describe the new experience elements and give their URLs inside the SFB-EB where they can be found.
• A refined SFB 501 Reference Process Model. Because the 'SFB 501 Reference Process Model' that was (re-)used in BaX was not finely granular enough, it was necessary to define a more detailed one when planning the case study. As a result, a new, more detailed process model for the first phases of the SFB 501 reference process was defined and modeled with MVP-L. This model is now stored in the organization-wide section for future reuse in experiments that deal with the first phases of the SFB 501 Reference Process. One can download and/or view this document in the SFB-EB at any time [2].
• Lessons learned regarding planning and conducting experiments. As described in Section 5.6, qualitative experience regarding the planning of an experiment was gained in BaX. It was documented in the form of seven lessons learned and stored in the SFB-EB [3]. A more detailed description of each of the lessons learned regarding planning experiments can be found in Appendix E.
• Lessons learned regarding managing experiments. The lessons learned from BaX regarding the management of an experiment can be found inside the SFB-EB [4]. A more detailed description of each of the three lessons learned regarding the management of experiments can be found in Appendix E.
• Lessons learned regarding techniques. Altogether, eight lessons learned regarding technologies used in BaX were gained and documented. Almost all of them are concerned with configuration management. They are listed inside the SFB-EB [5].
• Lessons learned regarding the initial problem statement. Defects in the initial problem description were documented during project execution in the subproduct "Problem-Addendum" (see Appendix A.2).
This concludes the list of newly stored experience in the organization-wide section of the SFB-EB.

[1] http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/SPEZIFISCH/FALLSTUDIEN/baX_contents.html
To prepare and support future experiments, it is foreseen that, after a more detailed analysis of the SDL templates used and of the SDL description gained in BaX, some SDL templates will be defined and stored in the components area of the SFB-EB. But this is not a direct outcome of BaX and hence is not part of this report.
[2] http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/MODELLE/CONTEXT_VECTOR/req_analysis_zimmermannenglish.html
[3] http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/planning.html
[4] http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/management.html
[5] http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/techniques.html
7 Outlook
This chapter describes improvement proposals for future projects, based on the empirical experience of the BaX experiment. First, possible improvements of the applied requirements analysis technique and the underlying development process are described. Second, improvement proposals for the development platform are stated. Third, consequences from the point of view of the experimenter are outlined. Finally, a list of reusable artifacts is given, which can be used in subsequent replications of this experiment or in other (similar) experiments in the domains "embedded systems" and "building automation".
7.1 Requirements Analysis Method
The consequences of the experiment BaX that result from the quantitative analyses and the qualitative experience can be used as hints for improving the SDL-based requirements analysis method and the underlying development process. The following modifications of products and processes can be regarded as essential sources for improvements:
• Revision and support of the dictionary,
• Revision of the development guidelines,
• Precise definition of exit criteria for the processes,
• Modification of the notation and document structure of the product Requirements_Description,
• Adaptation of the abstraction level of the object descriptions in the product Task_Assignment,
• Separate modeling and testing of objects,
• Introduction of the process Refine_Object_Structure,
• Integration of early substructure tests,
• Modification of the verification processes (revised checklists, tailoring to complex objects),
• Integration of object reuse,
• A new object-centered instead of process-step-centered development approach.
7.2 Development Platform
Due to the qualitative experience listed in Appendix E.2 and the resulting possible reactions, some extensions of and changes to the platform for tool invocation and software configuration management seem sensible. The major change concerns the technical realization: the system started as a pure prototype with poor performance and strong platform dependency. The textual interface is not state of the art and should be replaced with a graphical user interface (GUI). This is best supported by Java, since Java is largely platform independent and its runtime performance should be satisfactory. The Java GUI covers the results discussed in Appendix E.2.3.
Further changes and extensions for the next release should be driven by the needs of the next case study, since they are too numerous to consider in this study. They include:
• Compound products and configurations are managed by external lists. The lack of manipulation functions forced the product manager to edit these lists manually. This is very error-prone, and the product manager must always identify the correct list from a multitude of lists. If the next case study turns out to be highly dynamic concerning the redefinition of compound products and configurations, functions must be provided that allow easy manipulation of these lists (see Appendix E.2.1 and Appendix E.2.5).
• The status report function (showing version numbers, change logs, etc.) had view-only functionality. If "backstepping" to older versions due to undesired changes happens frequently in the next case study, a functionality must be provided that enables the developer to "work with" the status report information directly. That means, for example, that any developer should be able to select an older version of a configuration with just a mouse click (see Appendix E.2.2).
• Most versioning and configuration functions and tool calls are hidden or activated automatically. Some managers feel they have no control over what the system is managing. The SCM and tool binding system just provides process-oriented product management, which is completely controlled by the development process due to predefined behavior. If process-independent management of products is needed in the next case study, process-oriented product management must be supplemented with explicit functions for versioning, configuration management, and tool binding, independent of the current development step (see Appendix E.2.4).
• Currently, compound products are definable via directories: all files within a directory are elements of a compound product. This is a very static definition. More flexible definitions allow lists, with every list element definable via a list of file masks (e.g., '*.c *.h'). A list of exceptions organized in the same way would increase flexibility even more (see Appendix E.2.6 and Appendix E.2.7).
• The start and end of a development activity are usually designated as points in time when measurement data have to be gathered. Examples are calendar time (when has the activity started and when has it ended?) or effort (how long has the developer effectively worked on this piece of product?). At these points in time, the developer is in contact with the platform when activating or closing a development tool. Popping up a window that looks like the paper questionnaires would remind the developer to fill out the questionnaire. Furthermore, some information, like version number, product identification, and time, can be filled in automatically by the platform.
• Error and change protocols should be automated as far as possible to reduce redundant information entry by developers and to support automatic analysis of data. Data summaries should be provided to the project manager automatically at all times.
• Meaningful change notification should be supported by the system. Overhead caused by change propagation should be minimized.
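The proposed file-mask definition of compound products with an exception list could look as sketched below; the function name, file names, and masks are illustrative assumptions:

```python
# Sketch of a more flexible compound-product definition: a product is a
# list of include masks plus a list of exception masks.
from fnmatch import fnmatch

def select(files, include_masks, exclude_masks=()):
    """Return the files matching any include mask and no exclude mask."""
    return [f for f in files
            if any(fnmatch(f, m) for m in include_masks)
            and not any(fnmatch(f, m) for m in exclude_masks)]

files = ["main.c", "main.h", "util.c", "util_test.c", "README"]
product = select(files, include_masks=["*.c", "*.h"],
                 exclude_masks=["*_test.c"])
# product == ["main.c", "main.h", "util.c"]
```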
7.3 Possible Future Experiments
Replications and variants of the experiment should be performed in similar contexts with a minimum of variations in the context vector. In the case of a variant of the experiment, only those variables of the context vector should be changed that are affected by the selected improvements (based on the listing above). Nevertheless, slight changes of the goals, the input, the process, or the context might be applied. Consequently, the supporting models, measurement plans, etc. will have to be adjusted. From the viewpoint of the experimenter, the following artifacts might be adapted for a replication or a variant of the experiment:
• The calendar time model should be adjusted based on the actual project trace of this experiment.
• The effort model should be adapted based on the measured effort data.
Baselining a Domain-Specific Software Development Process
• The defect models should be adapted based on the measured defect data and new defect classifications.
• The (modified) process models should be instantiated in a new project plan.
• The measurement plan should be derived from a (modified) GQM plan and the new project plan.
• The data collection forms should be adjusted based on the new measurement plan and the new quality models.
• The development platform should be adapted to the new execution environment. The adaptation can be done based on the new context and the experiences from BaX.
It should be remarked that the impact of exceptional events that are not representative of future projects should be removed.
7.4 Reusable Artifacts
The following artifacts are packaged for reuse in future experiments:
• Templates for the products Object_Structure (new HTML templates), Requirements_Description (new HTML templates), and System_Requirements (revised SDL templates)
• Objects and design (depending on the problem)
• Development guidelines (revised and extended)
• Checklists for verification processes (revised)
• Dictionary (revised and extended)
• Quality models for calendar time, effort, and defects (modified)
• Experiences concerning the measurement-based experimental process, e.g., GQM plans and data collection sheets (modified)
• Reusable process fragments, e.g., the SFB Reference Model (new)
• Executable prototype (can be used as an executable oracle for testing application software in future developments)
• Initial problem statement (revised version)
8 Acknowledgment
We would like to thank all members and supporters of SFB 501 Team 3 who supported the planning and execution of this first baseline project BaX, and also the project leaders of the SFB who assigned the members to the team. Besides the authors of this report, the team consisted of (in alphabetical order): Martin Becker, Christoph Kozieja, Martin Kronenburg, Jörg Mentges, Rolf Merz, Christian Peper, and Jörg Schäfer. Thomas Deiss provided the building simulator for prototyping. The following table shows the areas of responsibility of the authors:

Author | Chapters
R. Feldmann | 1.4, 1.5, 3.4, 4.2, 6.1, 6.3, App. D
J. Münch | 2, 3.1, 4.3, 5, 7.1, 7.3, 7.4, App. B, App. C, App. E.1, App. E.3
S. Queins | 1.5, 3.2, 4.1, 4.4, App. A
S. Vorwieger | Abstract, 1.2, 1.3, 3.3, 6.2, 7.2, 8, App. A, App. E.2
G. Zimmermann | 1.1, 1.6, 4.4

Table 17: Areas of responsibility

We also thank the Deutsche Forschungsgemeinschaft and the state of Rheinland-Pfalz for providing the funding for the SFB 501.
Appendix A Development Products

A.1 Problemdescription

A.1.1 Organizational Aspects
Project: Building Automation System
Subproject: Floor32/4-Light-Control
Documenttype: Problemdescription
Documentname: team3-problemdescription
Responsible: Team 3/Group Problemdescription
User: Team 3
Description: Problem description for a system for controlling lighting to guarantee comfort and energy saving in all rooms of floor 4 of building 32.
A.1.2 Introduction
This document contains the needs for a new light control system for the fourth floor of building 32 of the University of Kaiserslautern. The main motivation for the development of a new light control system is the set of disadvantages of the currently existing system: since all lights are controlled manually, electrical energy is wasted by lighting rooms that are not occupied and by the limited possibilities to adjust light sources to need and daylight. The architecture of the fourth floor of building 32 and the installation of the hallways and the rooms of this floor are described in the document team3_inst_arch.v[n].fm. An explanation of terms can be found in the document dictionary.
A.1.3 Needs
In this section the needs for the new light control system are presented. Section A.1.3.1 lists the functional needs, and Section A.1.3.2 the non-functional needs.

A.1.3.1 Functional Needs
The functional needs are split into two groups, user needs and facility manager needs, depending on the person who has expressed them.

A.1.3.1.1 User Needs
The user needs are numbered by U.
At first, general user needs are listed that are required for each kind of room:
U1 If a person occupies a room, the light has to be sufficient to move safely, if nothing else is desired by a chosen light scene.
U2 As long as the room is occupied, the currently chosen light scene has to be maintained.
U3 If the room is reoccupied within T1 minutes after the last person has left the room, the last chosen light scene has to be reestablished.
U4 If the room is reoccupied after more than T1 minutes since the last person has left the room, the standard light scene has to be established.
U5 The wall switches for the window and the wall ceiling light groups in a room should show the following behavior: (i) if the corresponding ceiling light is completely on, then the light will be switched off; (ii) otherwise the ceiling light will be switched on completely.
U6 The light scenes can be determined by using the control panel.
U7 For each room the current ambient light level can be set by the user using the control panel.
U8 For each room a default light scene can be set (not by using the control panel).
U9 For each room a default ambient light level can be set (not by using the control panel).
U10 The value T1 can be set for each room separately (not by using the control panel).
U11 If the outdoor light sensor or the motion detector of a room does not work correctly, the user has to be informed.
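The timing behavior demanded by U2-U4 can be sketched as follows (an illustrative sketch only, not the project's SDL model; the class, method, and scene names are hypothetical):

```python
# Sketch of needs U2-U4: keep the chosen light scene while the room is
# occupied, reestablish it on reoccupation within T1 minutes, and fall
# back to the standard light scene otherwise.
STANDARD_SCENE = "standard"  # hypothetical scene name

class RoomLightControl:
    def __init__(self, t1_minutes):
        self.t1 = t1_minutes
        self.current_scene = STANDARD_SCENE
        self.left_at = None  # time the last person left; None while occupied

    def choose_scene(self, scene):          # U6: user selects a scene
        self.current_scene = scene

    def person_leaves(self, now):
        self.left_at = now

    def person_enters(self, now):
        if self.left_at is not None and now - self.left_at > self.t1:
            self.current_scene = STANDARD_SCENE   # U4: standard scene
        # else: keep the last chosen scene (U3)
        self.left_at = None                       # room occupied again (U2)

ctl = RoomLightControl(t1_minutes=15)
ctl.choose_scene("presentation")
ctl.person_leaves(now=100)
ctl.person_enters(now=110)     # reoccupied within T1 -> scene kept (U3)
print(ctl.current_scene)       # presentation
ctl.person_leaves(now=120)
ctl.person_enters(now=140)     # more than T1 minutes -> standard scene (U4)
print(ctl.current_scene)       # standard
```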
The user needs concerning the offices are:
U12 The ceiling lights and the task light should be maintained by the control system depending on different light scenes.
U13 The control panel should be installed movably, like a telephone, in the offices.
U14 The control panel should contain at least: (i) a switch to set the task light (on/off); (ii) a switch to set the ceiling lights (on/off/ambient); (iii) a possibility to set the current ambient light level.
The user needs for the remaining rooms are:
U15 In all other rooms the control panel should be installed near a door to the hallway.
U16 The control panel should contain at least: (i) a switch to set the ceiling lights (on/off/ambient); (ii) a possibility to set the current ambient light level.
The user needs for the hallway sections are:
U17 When a hallway section is occupied by a person, the light in this hallway section has to be sufficient to move safely.
U18 Before a person enters a hallway section from another one, the light in the entered section has to be turned on if necessary.
U19 The wall switches for lights in the hallway section have to show the following behavior: (i) if the light is on, then the light will be switched off; (ii) otherwise the light will be switched on.

A.1.3.1.2 Facility Manager Needs
The facility manager needs are numbered by FM.
FM1 Use daylight to achieve the desired light whenever possible.
FM2 Lights in a hallway section have to be switched off when the section has been unoccupied for T2 minutes.
FM3 If a room is unoccupied for more than T3 minutes, all lights must be switched off.
FM4 The value T2 can be set for each hallway section separately.
FM5 The value T3 can be set for each room separately.
FM6 The facility manager can turn off any light in a room or hallway section that is not occupied.
FM7 If a malfunction occurs, the facility manager has to be informed.
FM8 If a malfunction occurs, the control system supports the facility manager in finding the reason.
FM9 The system provides reports on current and past energy consumption.
FM10 All malfunctions and unusual conditions are stored and reported on request.
FM11 Malfunctions that the system cannot detect can be entered manually.

A.1.3.2 Non-Functional Needs
The non-functional needs are split into several groups depending on the aspect they are dealing with. They are numbered by NF.

A.1.3.2.1 Fault Tolerance
In any case of failure the system shall provide a stepwise degradation of functionality down to manual operability.

Needs in the case of a malfunction of the outdoor light sensor:
NF1 If the outdoor light sensor does not work correctly, the control system should behave for rooms as if the outdoor light sensor constantly submitted the last correct measurement of the outdoor light.
NF2 If the outdoor light sensor does not work correctly, the standard light scene for all rooms is that all ceiling lights are on.
NF3 If the outdoor light sensor does not work correctly and a hallway section is occupied, the lights in this hallway section have to be on.

Needs in the case of a malfunction of the motion detector:
NF4 If the motion detector of a room or a hallway section does not work correctly, the control system should behave as if the room or the hallway section were occupied.

Needs in the case of a worst-case failure of the control system:
NF5 If the lights in a hallway section are neither controllable automatically nor manually, the lights have to be on.

A.1.3.2.2 Safety and Legal Aspects
NF6 All hardware connections have to be made according to DIN standards.
NF7 No hazardous conditions for persons, inventory, or the building are allowed.

A.1.3.2.3 User Interface
NF8 The control panel should be easy and intuitive to use.
NF9 The system warns about unreasonable inputs.
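The fallback behavior demanded by NF1 and NF4 can be sketched as follows (illustrative helper functions only; the function names are hypothetical and not part of the requirements model):

```python
# Sketch of NF1 and NF4: a failed outdoor light sensor is replaced by its
# last correct measurement; a failed motion detector reports "occupied".
def effective_outdoor_light(sensor_ok, current_lux, last_correct_lux):
    # NF1: behave as if the sensor constantly submits the last correct value
    return current_lux if sensor_ok else last_correct_lux

def effective_occupancy(detector_ok, detected):
    # NF4: behave as if the room or hallway section is occupied
    return detected if detector_ok else True

print(effective_outdoor_light(False, 0, 4200))  # 4200
print(effective_occupancy(False, False))        # True
```

Both rules fail towards the safe side: too much light rather than darkness in an occupied room or hallway section.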
A.2 Problem-Addendum

Error ID | Short description of the issue | Process name | Date | Person
Prob_1 | Hallway is not a room → U20: the shut-off of the light in the hallway must be delayed. | task assignment | 27.5.98 | MK
Prob_2 | NF1 applies to the hallway, too. | task assignment | 27.5.98 | MK
Prob_3 | NF5 cannot be implemented, because it cannot be assured that the lights in a hallway section are neither controllable automatically nor manually. | Verify Requirements Description | 4.6.98 | MK
Prob_4 | NF1 and NF2 are contradictory. Also, NF1 conflicts with U17. Therefore, NF2 will be preferred. | Create Test Cases | 23.6.98 | SM
Prob_5 | FM6 must be changed in order to enable the FM to switch the light on/off. Its priority can be given up. | requirements modeling | | SQ

A.3 Buildingdescription
A.3.1 Organizational Aspects
Project: Building Automation System
Subproject: Floor32/4-Light-Control
Documenttype: architecture and installation description
Documentname: team3_inst_arch.v1.fm
Responsible: SFB501-D1
User: Team 3
Description: System for controlling lighting to guarantee comfort and energy saving in all rooms of floor 4 of building 32.
In the following document, keywords are marked at their first occurrence and listed in the additional dictionary. Keywords have to be used in the same way in all other documents. Paragraphs are numbered for easier reference during visual inspection. Words written in emphasis are names of physical sensors/actuators.
A.3.2 Building Architecture
In this document, the architecture and the installation of the given sensors and actuators of building 32, 4th floor, are described.

A.3.2.1 Building Structure
The fourth floor of building 32 consists of three sections and shares two staircases SCE and SCW with other floors of the building, as shown in Appendix-Fig. 1. Sections are divided into offices (O), computer labs (CL), hardware labs (HL), peripheral rooms (P), meeting rooms (M), and hallways (H). All rooms in a section are accessible via connected hallways. There are three hallways and 22 rooms to control. Appendix-Fig. 1 also shows the six outdoor light sensors (ols1 - ols6) and the major compass directions. The sensors cover the six directions of the different walls. The numbers in the rooms express the kind of room and a unique number.
[Appendix-Fig. 1 shows the floor plan of the fourth floor of building 32: the three sections with their offices (O), computer labs (CL), meeting room M427, peripheral rooms P413 and P420, the hallways H1-H3, the staircases SCE (east) and SCW (west), the outdoor light sensors ols1-ols6, and the compass directions.]
Appendix-Fig. 1: Floorplan
A.3.3 Current Installation
Currently, the ceiling lights in all rooms can only be turned on or off in groups.
65
Appendix A Development Products
In all rooms, each ceiling light group is controlled by one or more pushbuttons that toggle the light when switched to the other position. Task lights are controlled manually by one pushbutton. In the hallways, several pushbuttons can toggle the ceiling light group on and off. All pushbuttons are connected in parallel.
A.3.4 Planned Installation

A.3.4.1 Offices
An office (shown in Appendix-Fig. 2) has one door (d1) to the hallway and can have doors to the adjacent rooms (d2, d3). Only those doors that open into the room are part of a room. Therefore, d3 is not an object of the shown room, but the name can be used as a reference. Each door is equipped with a door closed contact, named dccn, where n is the number of the door in the room. Each office is equipped with
1. one motion detector (imd1), so that the room is fully covered. Actually, several motion detectors can be connected in parallel to achieve the coverage.
2. two ceiling light groups (window and wall), which can be dimmed individually with the dimmer actuators lle1 (window) and lle2 (wall),
3. a panel to control the light groups directly or select light scenes,
4. a desk at a movable position with a task light on it. The task light can be manually turned on and off (pb3).
5. two pushbuttons (pb1 (window) and pb2 (wall)) for the control of the ceiling lights, and
6. three status lines (sll1…sll3) that show the status of the three light sources.
[Appendix-Fig. 2 shows an office with its doors d1 (to the hallway) and d2, d3 (to the adjacent offices), the desk with its light group, the control panel, the motion detector imd1, and the ceiling lights.]
Appendix-Fig. 2: Office
A.3.4.2 Hallway
Each hallway is bordered by two doors, leading to the adjacent hallways. Each door is assigned to only one hallway. Therefore, in the given floor with 3 hallways and 4 doors, there exists one hallway with two doors and two hallways with only one door. The assignment of the doors and their associated names is shown in Appendix-Fig. 3. Each door is equipped with a door closed contact, named dccn, where n is derived from the name of the door.
[Appendix-Fig. 3 shows the three hallways and the assignment of the doors d1-d4.]
Appendix-Fig. 3: Three hallways
Each hallway is equipped with
1. two motion detectors (imd1 and imd2), placed above the doors at each end of the hallway to determine a person near a door,
2. one motion detector to cover the whole section (imd3); several can be connected in parallel for coverage,
3. one ceiling light group that can be turned on and off,
4. several wall pushbuttons (pb) to toggle the light, an impulse relay, which controls the ceiling light group, and a normal relay in parallel to the pushbuttons, and
5. one status line (sll1) that determines if the light is on or off.

A.3.4.3 Staircase
Staircases connect several floors. At the floor level, a staircase is equipped with
1. one motion detector (imd1) above the door to the adjacent hallway to detect motion near the door.

A.3.4.4 Computer Labs
A computer lab has one door (d1) to the hallway and can have doors to the adjacent rooms (d2, d3). The light installation is the same as in the offices. The sensors at the doors are named as before for the offices. Each computer lab is equipped with
1. one motion detector (imd1), so that the room is fully covered. Actually, several motion detectors can be connected in parallel to achieve the coverage.
2. two ceiling light groups (window and wall), which can be dimmed individually with the dimmer actuators lle1 (window) and lle2 (wall),
3. a panel to control the light groups directly or select light scenes,
4. two pushbuttons (pb1 (window) and pb2 (wall)) for the control of the ceiling lights, and
5. two status lines (sll1, sll2) that show the status of the light sources.

A.3.4.5 Hardware Labs
Same as the computer labs, but with more than one door to the hallway.

A.3.4.6 Meeting Room
Same as a computer lab.

A.3.4.7 Peripheral Rooms
The peripheral rooms will not be controlled by a computer system and are therefore not described further.
A.3.5 Sensors
This section describes the real physical sensors, including converters if necessary. Analog sensors typically have an exponential time response. Reaction time is the time from a change of the sensed property to the time when the sensor has reached 90% of the change, excluding conversion time. Conversion time is the time to convert the analog value to a digital one that can be accessed by the control system. NC means "normally closed". Closed is coded as 1, open as 0.
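For a first-order (exponential) sensor model, the 90% reaction time quoted here relates to the sensor's time constant as follows (a standard result added for illustration, not taken from the report):

```latex
x(t) = x_\infty \left(1 - e^{-t/\tau}\right), \qquad
x(t_{90}) = 0.9\,x_\infty
\;\Longrightarrow\;
t_{90} = \tau \ln 10 \approx 2.3\,\tau
```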
Name | Abbrev. | Type | Resolution | Range | Reaction / Conversion Time | Description
door contact | dcc | NC-contact | | 0, 1 | 10 ms | Placed above the door; 1 if the door is fully closed.
wall switch | lsw | switch | | 0, 1 | 10 ms | 2 stable positions.
pushbutton | pb | momentary pushbutton | | 0, 1 | 10 ms | 1 as long as pushed.
motion detector | imd | passive infrared motion detector | | 0, 1 | 1 s | 1 means a person is moving, even very slowly, in the range of the detector. The transition to 0 can be delayed.
outdoor light sensor | ols | analog light sensor | 1 lux | 1 - 10000 lux | 10 ms / 1 s | Mounted perpendicular to the facade; measures the illuminance of the facade for the calculation of the light flow through a window.

Appendix-Tab. 1: Sensors
A.3.6 Actuators
Actuators have a linear time response. Reaction time is therefore defined as the time to change from 0 to 100% (or from 100 to 0%, if different).

Name | Abbrev. | Type | Range | Reaction Time | Description
status line | sll | status line | 0, 1 | 10 ms | Senses whether the light has the voltage turned on or off.
control system is active | cia | status line | 0, 1 | 10 ms | As long as the CS sends a 1 within every 60 s, the CS is still alive.
dimmable light | dll | dimmer | 0 - 100% | 10 ms | Controls the light between 0 (off) and 100% (on).
relay | | relay | 0, 1 | 10 ms |

Appendix-Tab. 2: Actuators
The structure of the dimmable lights is shown in Appendix-Fig. 4. The inputs of the dimmable light are the pulse line to toggle the light, a dim value to set the current dim value, and the signal control system is active to show the status of the control system. If this signal is not sent every 60 s, the light switches to fail-safe mode and the dim value changes to 100%. The output is a status line that shows the current state (on or off) of the light.
[Appendix-Fig. 4 shows the dimmable light with its inputs (phase, pushbutton pulse, dim value, and the control system is active signal from the control system) and its status line output.]
Appendix-Fig. 4: Structure of the dimmable lights
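The fail-safe behavior of the dimmable light described above can be sketched as follows (an illustrative sketch; the class name, the use of seconds as time unit, and the resetting of fail-safe mode on a fresh alive signal are assumptions):

```python
# Sketch of the dimmable light's watchdog: if the "control system is
# active" signal is not received within 60 s, the light enters fail-safe
# mode and sets its dim value to 100%.
ALIVE_TIMEOUT = 60  # seconds, from the actuator description

class DimmableLight:
    def __init__(self):
        self.dim_value = 0
        self.last_alive = 0
        self.fail_safe = False

    def control_system_active(self, now):   # alive signal from the CS
        self.last_alive = now
        self.fail_safe = False              # assumption: alive signal clears fail-safe

    def set_dim_value(self, value):
        if not self.fail_safe:              # CS commands ignored in fail-safe mode
            self.dim_value = value

    def tick(self, now):                    # periodic check inside the light
        if now - self.last_alive > ALIVE_TIMEOUT:
            self.fail_safe = True
            self.dim_value = 100            # fail-safe: full brightness

light = DimmableLight()
light.control_system_active(now=0)
light.set_dim_value(40)
light.tick(now=30)
print(light.dim_value)   # 40: CS still alive
light.tick(now=90)
print(light.dim_value)   # 100: fail-safe after more than 60 s without alive signal
```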
A.4 Dictionary of Terms

Keyword | Abbrev. | Description | German Translation
actual user | | current user of a room | aktueller Benutzer
actuator | | device that can be controlled by the control system | Aktuator
ambient light | | | Umgebungslicht
blind | | used to shade a window from the outside | Jalousie
ceiling light | | luminaire under or in the ceiling |
comfort time | | time period in which full room climate comfort is expected | Benutzungszeit
computer lab | | room with many terminals and workstations, open to all group members and temporarily to students of a class | Terminalraum
contact | | electrical or magnetic gadget to determine the state of a door, window, etc. | Kontakt
control panel | | small panel, typically at the wall, with a keyboard, LEDs for important states, and a simple display for textual messages |
control system | | hard- and software system that controls indoor climate, lighting, safety, and security | Kontrollsystem
dimmer-actuator | | controls the output of a luminaire |
door | | | Tür
environment | | surrounding of a section of the building, indoor and outdoor | Umgebung
facility manager | FM | person responsible for running a building on a daily basis | Hausmeister
hallway | | part of a building between several rooms to connect each of them | Flur
hardware lab | | room in which experiments with hardware are performed | Hardware-Praktikumsraum
illuminance | | amount of light incident on a surface, measured in lux | Beleuchtungsstärke
impulse relay | | changes state (on, off) with each signal | Stromstossrelais
installation | | equipment belonging to the building that can be operated, e.g., radiators, window openers, light fixtures | Installation
light scene | | predefined settings of the light levels and an ordered list of luminaires which should be used to reach them. One light scene consists of 4 parameters: 1. name of the light scene, 2. illuminance of the ambient light of the room, 3. illuminance for the desk, 4. an ordered list of the installed luminaires. The order of the list is the order in which the control system has to use the luminaires to reach the given light values. | Lichtszene
light sensor | | measures the illuminance in a half sphere perpendicular to its flat bottom | Lichtsensor
luminaire | | light | Leuchte
member | | persons in the research group | Gruppenmitglieder
motion detector | imd | detects motion of a person or animal in its range; state is on during positive detection | Bewegungsmelder

Appendix-Tab. 3: Dictionary of keywords of the application domain
Keyword | Abbrev. | Description | German Translation
office | | room for one or two group members with terminals and/or workstations for the inhabitants | Büro
off-time | | time in which no usage is expected |
peripheral room | | room for computer peripherals, copy machines; general group access | Peripherieraum
public room | | accessible to all group members and the public at all times |
pushbutton | | is on as long as pushed manually | Taster
responsible person | | person responsible for the individual settings of one room |
room | | | Raum
sensor | | device that can sense the state of the building, users, or environment | Sensor
stand-by time | | time in which an inhabitant is expected in a room |
status line | | a wire that has the status of a device as its value |
switch | | can be turned on or off manually | Schalter
system | | | System
task light | | luminaire on the desk | Arbeitsleuchte
temperature sensor | | | Temperatursensor
user | | | Benutzer
weather | | outdoor temperature, wind, radiation, humidity | Wetter
window | | | Fenster
workshop | | room with tools and special machines for electronic and metal work; access for technicians | Werkstatt

Appendix-Tab. 3: Dictionary of keywords of the application domain (continued)
A.5 ObjectStructure

[Appendix-Fig. 5 shows the object structure of BaX as a consists-of/is-instance-of hierarchy: the Floor consists of the Staircases (sce, scw), the Sections (sec1-sec3), the OutDoorLight sensors (ols), and a SunDet (sud1). Each Section consists of a Hallway (with HWLight, HWOcc, HWDoor, MotDets imd1-imd3, and Switch) and rooms (Office, CompLab, MRoom, PRoom) with their Light, RoomOcc, Door, and MotDet objects; offices additionally have a Desk with TaskLight (tli) and Switch. The legend distinguishes types defining a leaf of the hierarchy, types defined elsewhere in the hierarchy, n instances named i1..in, and the relations is-instance-of and consists-of.]
Appendix-Fig. 5: Object Structure of BaX
Appendix B GQM Plans

B.1 Characterization of Calendar Time
Project: SFB 501, Team 3, 1st Case Study ("BaX")
Documenttype: GQM Plan
Documentname: GQM Plan "Characterization of Calendar Time"
Author(s): Dipl.-Inform. Jürgen Münch; Advice: Dipl.-Inform. Christiane Differding
User: Team 3
Description: This document defines the measurement goal, the quality characteristics of interest, the factors influencing the quality characteristics of interest, and hypothetical causal relations between the influencing factors and the quality characteristics of interest. This is done according to the Goal/Question/Metric (GQM) paradigm.
Analyze the: Team 3 processes
for the purpose of: characterization
with respect to: calendar time
from the viewpoint of: the project planner / project manager
in the context of: SFB 501 - Team 3

Quality Focus
Q1: For all processes of Team 3: When (start time and finish time) was the process enacted?
list {, ...}

Variation Factors / Explanatory Variables
none
Dependencies
none

B.2 Characterization of Effort
Project: SFB 501, Team 3, 1st Case Study ("BaX")
Documenttype: GQM Plan
Documentname: GQM Plan "Characterization of Effort"
Author(s): Dipl.-Inform. Jürgen Münch; Advice: Dipl.-Inform. Christiane Differding
User: Team 3
Description: This document defines the measurement goal, the quality characteristics of interest, the factors influencing the quality characteristics of interest, and hypothetical causal relations between the influencing factors and the quality characteristics of interest. This is done according to the Goal/Question/Metric (GQM) paradigm.
Analyze the: Team 3 processes
for the purpose of: characterization
with respect to: effort
from the viewpoint of: the developer
in the context of: SFB 501 - Team 3

Quality Focus
Q1: What is the effort distribution in the Team 3 case study broken down by processes (distinguished by causing processes)?
list {, ...}
Comment: The causing process initiates the enactment of a process.
In case of initial creation: identifier of the causing process = identifier of the process.
In case of rework: identifier of the causing process ≠ identifier of the process. In case of rework, the causing process is that process in which the defect has been detected.
Q2: What is the distribution of effort in the product Requirements_Description broken down by objects (distinguished between effort for initial creation and effort for rework)?
list {, ...}
Q3:
What is the distribution of effort in the product System_Requirements broken down by SDL objects (distinguished between effort for initial creation and effort for rework)? list {, ...}
Variation Factors / Explanatory Variables

E1: What is the complexity of the modeled objects in the product Requirements_Description?
Model:
object_complexityRD := Σ (i = 1..n) task_complexity(i)
list {, ...}
Comment: task_complexity: subjective measure [0..4] with 0 = low and 4 = high; n: number of tasks of the object.

E2: What is the complexity of the SDL objects in the product System_Requirements?
Model:
object_complexitySR := 3 * Σ (i = 1..n) task_complexity(i) + number of forwarded signals + number of aggregated instances
list {, ...}
Comment: task_complexity: subjective measure [0..4] with 0 = low and 4 = high; n: number of tasks of the object. If an object already exists in the product Requirements_Description, replace Σ (i = 1..n) task_complexity(i) by object_complexityRD.
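The two complexity models can be written down directly as code. The following sketch implements E1 and E2 as defined above; the example scores are invented:

```python
def object_complexity_rd(task_complexities):
    """E1: complexity of a modeled object in Requirements_Description.

    task_complexities: one subjective score in [0..4] per task of the object.
    """
    if not all(0 <= c <= 4 for c in task_complexities):
        raise ValueError("task_complexity must be in [0..4]")
    return sum(task_complexities)

def object_complexity_sr(task_complexities, forwarded_signals, aggregated_instances):
    """E2: complexity of an SDL object in System_Requirements."""
    return (3 * object_complexity_rd(task_complexities)
            + forwarded_signals
            + aggregated_instances)

# Invented example: an object with three tasks scored 2, 3, and 1.
rd = object_complexity_rd([2, 3, 1])  # 6
sr = object_complexity_sr([2, 3, 1], forwarded_signals=4, aggregated_instances=2)  # 24
```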
Dependencies

D1: What influence has the complexity of the modeled objects in the product Requirements_Description on the effort/object?
Hypothesis H1: The effort/object increases linearly with the complexity of the modeled objects.
list {, ...}
Comment: The effort/object cumulates the effort for initial creation and rework. Effort for verification is not included.

D2: What influence has the complexity of the modeled objects in the product Requirements_Description on the effort for rework/object?
Hypothesis H2: The effort for rework/object increases linearly with the complexity of the modeled objects.
list {, ...}

D3: What influence has the complexity of the SDL objects in the product System_Requirements on the effort/object?
Hypothesis H3: The effort/object increases linearly with the complexity of the SDL objects.
list {, ...}
Comment: The effort/object cumulates the effort for initial creation and rework. Effort for verification is not included.

D4: What influence has the complexity of the SDL objects in the product System_Requirements on the effort for rework/object?
Hypothesis H4: The effort for rework/object increases linearly with the complexity of the SDL objects.
list {, ...}
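Hypotheses H1-H4 all postulate a linear relation between complexity and effort. One simple way to screen such a hypothesis against collected data is the sample correlation coefficient; a sketch with invented data points:

```python
def pearson_r(xs, ys):
    """Sample (Pearson) correlation coefficient; values near +1 support
    a hypothesis of a linearly increasing relation."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: object complexity vs. effort per object in minutes.
complexity = [2, 5, 6, 9, 12]
effort_minutes = [30, 65, 80, 120, 150]
r = pearson_r(complexity, effort_minutes)  # close to 1 for near-linear data
```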
B.3 Characterization of Defects

Project: SFB 501, Team 3, 1st Case Study ("BaX")
Document type: GQM Plan
Document name: GQM Plan "Characterization of Defects"
Author(s): Dipl.-Inform. Jürgen Münch (Advice: Dipl.-Inform. Christiane Differding)
User: Team 3
Description: This document defines the measurement goal, the quality characteristics of interest, the factors influencing the quality characteristics of interest, and hypothetical causal relations between the influencing factors and the quality characteristics of interest. This is done according to the Goal/Question/Metric (GQM) paradigm.

Analyze the: Team 3 products
for the purpose of: characterization
with respect to: defects
from the viewpoint of: the developer
in the context of: SFB 501 - Team 3
Quality Focus

Q1: How many defects were detected in each process of the Team 3 case study (distinguished by source products)?
list {, ...}
Q2:
How many defects were detected in each product of the Team 3 case study (distinguished by detection process)? list {, ...}
Q3: For each product: What is the distribution of detected defects broken down by defect class?
list {, ...}
list {, ...}
Comment: The defect class varies depending on the source product (see the appendix of this GQM plan). Defect class1 originates from the original defect classification; defect class2 originates from a revised defect classification.
Q4:
What is the distribution of detected defects in the product Requirements_Description broken down by modeled objects? list {, ...}
Q5:
What is the distribution of detected defects in the product System_Requirements broken down by SDL objects? list {, ...}
Q6: For each product: What is the average effort for defect correction broken down by defect class?
list {, ...}
list {, ...}
Comment: The effort for the correction of a defect is the effort for correcting the defect in the source product and in subsequent products (if affected). The correction effort does not include effort for additional verifications. The defect class varies depending on the source product (see the appendix of this GQM plan). Defect class1 originates from the original defect classification; defect class2 originates from a revised defect classification.
Q7: For each product: What is the cumulated effort for defect correction broken down by defect class?
list {, ...}
list {, ...}
Comment: The cumulated effort for the correction of a defect is the effort for correcting all defects in the source product and in subsequent products (if affected). The correction effort does not include effort for additional verifications. The defect class varies depending on the source product (see the appendix of this GQM plan). Defect class1 originates from the original defect classification; defect class2 originates from a revised defect classification.
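Questions Q1-Q7 are simple groupings over the collected defect records. A sketch of how such breakdowns could be computed, with invented defect records for illustration:

```python
from collections import Counter

# Invented defect records, as collected via the defect questionnaire:
# source product, defect class, and correction effort in minutes.
defects = [
    {"product": "Requirements_Description", "class": "task missing", "effort": 25},
    {"product": "Requirements_Description", "class": "signal defect", "effort": 40},
    {"product": "Requirements_Description", "class": "task missing", "effort": 15},
    {"product": "System_Requirements",
     "class": "incorrect control flow of an object", "effort": 60},
]

def defects_by_class(defects, product):
    """Q3: distribution of detected defects of one product by defect class."""
    return Counter(d["class"] for d in defects if d["product"] == product)

def avg_correction_effort(defects, product, defect_class):
    """Q6: average correction effort for one defect class of one product."""
    efforts = [d["effort"] for d in defects
               if d["product"] == product and d["class"] == defect_class]
    return sum(efforts) / len(efforts)
```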
Variation Factors / Explanatory Variables

E1: What is the complexity of the modeled objects in the product Requirements_Description?
Model:
object_complexityRD := Σ (i = 1..n) task_complexity(i)
list {, ...}
Comment: task_complexity: subjective measure [0..4] with 0 = low and 4 = high; n: number of tasks of the object.

E2: What is the complexity of the SDL objects in the product System_Requirements?
Model:
object_complexitySR := 3 * Σ (i = 1..n) task_complexity(i) + number of forwarded signals + number of aggregated instances
list {, ...}
Comment: task_complexity: subjective measure [0..4] with 0 = low and 4 = high; n: number of tasks of the object. If an object already exists in the product Requirements_Description, replace Σ (i = 1..n) task_complexity(i) by object_complexityRD.
Dependencies

D1: What influence has the complexity of the modeled objects in the product Requirements_Description on the number of detected defects/object?
Hypothesis H1: The number of detected defects/object increases linearly with the complexity of the modeled objects.
list {, ...}

D2: What influence has the complexity of the SDL objects in the product System_Requirements on the number of detected defects/object?
Hypothesis H2: The number of detected defects/object increases linearly with the complexity of the SDL objects.
list {, ...}
Appendix of this GQM Plan

A: Defect Classification for the Product Problem
For the product Problem, no defect classification is used.

B: Defect Classification for the Product Object_Structure

Original Defect Classification (a):
• incorrect relation (e. g., arrow forgotten)
• (appropriate) object(s) missing
• miscellaneous

Revised Defect Classification (b):
• incorrect relation (e. g., arrow forgotten)
• (appropriate) objects missing
• incorrect aggregation
• miscellaneous

a. The original defect classification for this product was determined at the beginning of the project.
b. The revised defect classification arose during the project or after project termination. The defect classification was revised where the original classification for the product proved inappropriate (e. g., when a high proportion of defects fell into the class "miscellaneous").
C: Defect Classification for the Product Requirements_Description

Original Defect Classification:
• need missing
• task missing
• incorrect modeling (contradictory tasks)
• incorrect task description
• objects from "object structure" not considered
• strategy missing
• miscellaneous

Revised Defect Classification:
• need missing
• task missing
• incorrect modeling (contradictory tasks)
• incorrect task description
• objects from "object structure" not considered
• strategy missing
• wrong receiver object from signal
• signal defect
• inconsistent signal description
• new size not specified
• syntax
• new actuator introduced
• new signal name used
• signal missing
• dispensable signal
• argument type has changed
• new argument introduced
• argument is dropped
• strategy incorrect
• miscellaneous
D: Defect Classification for the Product System_Requirements

Original Defect Classification:
• task missing
• incorrect (data-)definition of a task
• incorrect computation formula of a task
• incorrect control flow of an object
• incorrect communication between objects
• incorrect structure building
• (modeling-)guidelines not kept
• miscellaneous

Revised Defect Classification:
• task missing
• incorrect (data-)definition of a task
• incorrect computation formula of a task
• incorrect control flow of an object
• incorrect communication between objects
• incorrect structure building
• (modeling-)guidelines not kept
• dispensable signals
• task incomplete
• incorrect block of instances specified
• signal not used
• signal missing
• miscellaneous
E: Defect Classification for the Product Test_Cases

Original Defect Classification:
• incorrect computation of expected value
• input outside physical definition range
• test case missing (i. e., not all tasks are addressed at least once)
• miscellaneous
Appendix C
MVP-L Project Plan
This appendix contains excerpts of the formalized models and the project plan. The process modeling language MVP-L [BLRV95] was used to formalize the software engineering activities, products, and qualities of the first experiment of Team 3. Resources (such as developers or tools) are not formalized. The complete formalization comprises 18 process models, 30 product models, 10 process attribute models, and 36 product attribute models. The attached excerpt consists of the MVP-L project plan and interrelated models for the creation and verification of a document (i.e., the process models Task_Assignment and Verify_Requirements_Description, the product model Requirements_Description, and the quality model Defects_Type_task_missing).

MVP-L is a process modeling formalism designed for building models that are understandable to humans, not only to machines. MVP-L's main characteristics are: a representation specific to the software process domain, representation of different kinds of elementary models, use of formal process model interface parameters, and instrumentation of software engineering processes for data collection. Its basic concepts are products, processes, resources (i.e., humans and tools), and attributes, which are related to the former three concepts. Products and processes are arranged in tree hierarchies (i.e., a process is refined into sub-processes, which in turn may be refined; the same holds for products). Attribute values correspond to measurement data that is collected throughout a project. The attributes, along with the products, processes, and resources, can be used to guide developers or to inform management about the process state. In addition to guidance, the use of attributes allows improvement processes to observe products, processes, or resources in order to evaluate new technology or to identify problems.

In MVP-L, the relationships between the different concepts are modeled explicitly. Processes and products are related by product flow clauses, and resources are assigned to processes as being responsible for performing them. MVP-L is rule-based: control flow among processes is expressed using pre- and postconditions (called entry and exit criteria in MVP-L). Please note that the formal MVP-L representation is not intended to be read by people other than process engineers. Translators exist, and more are being developed, which provide more readable representations (e.g., graphical facilities). MVP-L can also be translated into natural language using template sentences. MVP-L's constructs were found sufficient to capture real processes for measurement purposes in several exercises; thus, MVP-L can be considered a sound definition of what should be described by process models.

A number of tools have been developed around MVP-L. Together they are called the MVP-Environment, or MVP-E for short [BHMV97]. It comprises, in particular, the tool GEM, which has been used to model the project plan and processes of this experiment. GEM (Graphical Editor for MVP-L) provides a user interface to MVP-L models in graphical notation, so that a process engineer can easily define process models or arrange a graphical layout of existing models for review by the people involved in the development process. Furthermore, it provides functions to check and analyze process models. The attached screenshot shows an excerpt of the Team 3 process model in the graphical GEM representation.

Project: SFB 501, Team 3, 1st Case Study ("BaX")
Document type: MVP-L Project Plan
Document name: MVP-L Project Plan "BaX"
Author(s): Dipl.-Inform. Jürgen Münch
User: Team 3
Description: This document contains excerpts of the project plan and a set of process models, product models, and attribute models.
project_plan BaX is
  imports
    process_model Create_Prototype, Create_System_Design, Develop_Control_System_Software,
      Develop_Operating_System, Develop_Control_Hardware, Develop_Communication_System,
      Integrate_System, Installation, Prototype_Test, Create_Useable_System,
      Requirements_Analysis;
    product_model Application_Knowledge, Problem, Design_Knowledge, System_Design,
      Control_System_Knowledge, Operating_System_Knowledge, Control_Hardware_Knowledge,
      Communication_System_Knowledge, Control_System_Software, Operating_System,
      Control_Hardware, Communication_System, Executable_System, Used_System, Prototype,
      TestCases, Method_Specific_Product, Coordination_Products, Useable_System,
      System_Requirements, Test_Results;
  objects
    create_prototype: Create_Prototype;
    create_system_design: Create_System_Design;
    application_knowledge: Application_Knowledge;
    problem: Problem;
    design_knowledge: Design_Knowledge;
    system_design: System_Design;
    develop_control_system_software: Develop_Control_System_Software;
    develop_operating_system: Develop_Operating_System;
    develop_control_hardware: Develop_Control_Hardware;
    develop_communication_system: Develop_Communication_System;
    control_system_knowledge: Control_System_Knowledge;
    operating_system_knowledge: Operating_System_Knowledge;
    control_hardware_knowledge: Control_Hardware_Knowledge;
    communication_system_knowledge: Communication_System_Knowledge;
    control_system_software: Control_System_Software;
    operating_system: Operating_System;
    control_hardware: Control_Hardware;
    communication_system: Communication_System;
    integrate_system: Integrate_System;
    executable_system: Executable_System;
    installation: Installation;
    used_system: Used_System;
    prototype: Prototype;
    prototype_test: Prototype_Test;
    testcases: TestCases;
    method_specific_product: Method_Specific_Product;
    coordination_products: Coordination_Products;
    create_useable_system: Create_Useable_System;
    useable_system: Useable_System;
    system_requirements: System_Requirements;
    requirements_analysis: Requirements_Analysis;
    test_results: Test_Results;
  object_relations
    requirements_analysis(application_knowledge => application_knowledge, problem => problem,
      method_specific_product => method_specific_product, testcases => testcases,
      system_requirements => system_requirements, test_results => test_results);
    prototype_test(testcases => testcases, prototype => prototype,
      test_results => test_results);
    create_system_design(system_requirements => system_requirements,
      design_knowledge => design_knowledge, system_design => system_design);
    develop_control_system_software(system_design => system_design,
      control_system_knowledge => control_system_knowledge,
      coordination_products => coordination_products,
      control_system_software => control_system_software);
    develop_operating_system(system_design => system_design,
      operating_system_knowledge => operating_system_knowledge,
      coordination_products => coordination_products,
      operating_system => operating_system);
    develop_control_hardware(system_design => system_design,
      control_hardware_knowledge => control_hardware_knowledge,
      coordination_products => coordination_products,
      control_hardware => control_hardware);
    develop_communication_system(system_design => system_design,
      communication_system_knowledge => communication_system_knowledge,
      coordination_products => coordination_products,
      communication_system => communication_system);
    create_prototype(system_requirements => system_requirements, prototype => prototype);
    integrate_system(control_system_software => control_system_software,
      operating_system => operating_system, control_hardware => control_hardware,
      communication_system => communication_system,
      executable_system => executable_system);
    installation(used_system => used_system, useable_system => useable_system);
    create_useable_system(executable_system => executable_system,
      useable_system => useable_system);
end project_plan BaX
------------------------------------------------------------------------
process_model Task_Assignment() is
  process_interface
    imports
      product_model Object_Structure, Problem, Defect_List_Requirements_Description,
        Model_Architecture, Application_Knowledge, Test_Results, Requirements_Description;
      process_attribute_model Effort, Rework_Effort, Start_Date, End_Date,
        Detected_Defects_Problem, Detected_Defects_ObjectStructure;
    exports
      effort: Effort;
      rework_effort: Rework_Effort;
      start_date: Start_Date;
      end_date: End_Date;
      detected_defects_problem: Detected_Defects_Problem;
      detected_defects_objectstructure: Detected_Defects_ObjectStructure;
    product_flow
      consume
        object_structure: Object_Structure;
        problem: Problem;
        defect_list_requirements_description: Defect_List_Requirements_Description;
        model_architecture: Model_Architecture;
        application_knowledge: Application_Knowledge;
        test_results: Test_Results;
      produce
      consume_produce
        requirements_description: Requirements_Description;
    context
    entry_exit_criteria
      local_entry_criteria
        object_structure.status='complete' and requirements_description.status='incomplete';
      global_entry_criteria
      local_invariant
        object_structure.status='complete';
      global_invariant
      local_exit_criteria
        requirements_description.status='complete' or object_structure.status='incomplete';
      global_exit_criteria
  end process_interface
  process_body
    implementation
  end process_body
  process_resources
    personnel_assignment
    tool_assignment
  end process_resources
end process_model Task_Assignment
------------------------------------------------------------------------
process_model Verify_Requirements_Description() is
  process_interface
    imports
      product_model Problem, Requirements_Description, Checklist_Requirements_Description,
        Model_Architecture, Defect_List_Requirements_Description;
      process_attribute_model Effort, Rework_Effort, Start_Date, End_Date,
        Detected_Defects_Problem, Detected_Defects_ReqDescrip;
    exports
      effort: Effort;
      rework_effort: Rework_Effort;
      start_date: Start_Date;
      end_date: End_Date;
      detected_defects_problem: Detected_Defects_Problem;
      detected_defects_reqdescrip: Detected_Defects_ReqDescrip;
    product_flow
      consume
        problem: Problem;
        requirements_description: Requirements_Description;
        checklist_requirements_description: Checklist_Requirements_Description;
        model_architecture: Model_Architecture;
      produce
        defect_list_requirements_description: Defect_List_Requirements_Description;
      consume_produce
    context
    entry_exit_criteria
      local_entry_criteria
        requirements_description.status='complete';
      global_entry_criteria
      local_invariant
      global_invariant
      local_exit_criteria
        requirements_description.status='verified' or requirements_description.status='incomplete';
      global_exit_criteria
  end process_interface
  process_body
    implementation
  end process_body
  process_resources
    personnel_assignment
    tool_assignment
  end process_resources
end process_model Verify_Requirements_Description
------------------------------------------------------------------------
product_model Requirements_Description() is
  product_interface
    imports
      product_attribute_model Product_Status, Defects_Type_need_missing,
        Defects_Type_task_missing, Defects_Type_incorrect_modeling,
        Defects_Type_incorrect_task_description,
        Defects_Type_ojects_from_obj_str_not_considered, Defects_Type_strategy_missing,
        Defects_Type_wrong_receiver_obj_from_signal, Defects_Type_signal_defect,
        Defects_Type_inconsistent_signal_description, Defects_Type_new_size_not_specified,
        Defects_Type_syntax, Defects_Type_new_actuator_introduced,
        Defects_Type_new_signal_name_used, Defects_Type_signal_missing,
        Defects_Type_dispensable_signal, Defects_Type_argumenttype_has_changed,
        Defects_Type_new_argument_introduced, Defects_Type_argument_is_dropped_out,
        Defects_Type_strategy_incorrect, Defects_Type_miscellaneous;
    exports
      status: Product_Status;
      defects_type_need_missing: Defects_Type_need_missing;
      defects_type_task_missing: Defects_Type_task_missing;
      defects_type_incorrect_modeling: Defects_Type_incorrect_modeling;
      defects_type_incorrect_task_description: Defects_Type_incorrect_task_description;
      defects_type_ojects_from_obj_str_not_considered: Defects_Type_ojects_from_obj_str_not_considered;
      defects_type_strategy_missing: Defects_Type_strategy_missing;
      defects_type_wrong_receiver_obj_from_signal: Defects_Type_wrong_receiver_obj_from_signal;
      defects_type_signal_defect: Defects_Type_signal_defect;
      defects_type_inconsistent_signal_description: Defects_Type_inconsistent_signal_description;
      defects_type_new_size_not_specified: Defects_Type_new_size_not_specified;
      defects_type_syntax: Defects_Type_syntax;
      defects_type_new_actuator_introduced: Defects_Type_new_actuator_introduced;
      defects_type_new_signal_name_used: Defects_Type_new_signal_name_used;
      defects_type_signal_missing: Defects_Type_signal_missing;
      defects_type_dispensable_signal: Defects_Type_dispensable_signal;
      defects_type_argumenttype_has_changed: Defects_Type_argumenttype_has_changed;
      defects_type_new_argument_introduced: Defects_Type_new_argument_introduced;
      defects_type_argument_is_dropped_out: Defects_Type_argument_is_dropped_out;
      defects_type_strategy_incorrect: Defects_Type_strategy_incorrect;
      defects_type_miscellaneous: Defects_Type_miscellaneous;
  end product_interface
  product_body
    implementation
  end product_body
end product_model Requirements_Description
------------------------------------------------------------------------
product_attribute_model Defects_Type_task_missing() is
  attribute_type
    integer;
  attribute_manipulation
end product_attribute_model Defects_Type_task_missing
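The rule-based control flow encoded in the entry_exit_criteria sections above can be illustrated with a small sketch that evaluates the Task_Assignment criteria against product status attributes. This is a simplification for illustration only, not the behavior of the actual MVP-E tools:

```python
# Product status attributes as plain strings, mirroring MVP-L attribute
# references such as object_structure.status.
products = {
    "object_structure": "complete",
    "requirements_description": "incomplete",
}

def task_assignment_may_start(p):
    """local_entry_criteria of process_model Task_Assignment."""
    return (p["object_structure"] == "complete"
            and p["requirements_description"] == "incomplete")

def task_assignment_may_finish(p):
    """local_exit_criteria of process_model Task_Assignment."""
    return (p["requirements_description"] == "complete"
            or p["object_structure"] == "incomplete")

# The process may start now, but may not finish yet.
assert task_assignment_may_start(products)
assert not task_assignment_may_finish(products)

# Once the requirements description is complete, the exit criterion holds.
products["requirements_description"] = "complete"
assert task_assignment_may_finish(products)
```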
Appendix-Fig. 6: Excerpt of the project plan (modeled with the MVP modeling tool GEM)
Appendix D
Questionnaire: ‘Defects’
As an example, the BaX questionnaire for defects is depicted on the following pages. Note that the questionnaire shown here is not the original one used during BaX, since the original was formulated in German. For demonstration purposes, it has been translated into English for this report.
Team3
questionnaire defects
A. Administrative information 1. Defect-ID: ________________ (The Defect-ID will be filled out later, by members of the SFB subproject A1.) 2. Discoverer:_____________________________________________________________ 3. Date: ___.___.___
B. Defect discovery 4. Process identifier in which the defect was discovered: ___________________________ 5. Description of the defect:__________________________________________________ ______________________________________________________________________ (How was the error observed?)
C. Error analysis 6. Name / identifier of the source product (i. e., product on the highest level of abstraction) affected by the error: _____________________________________________________ 7. Version number of error-prone product: ______________________________________
8. Object(s) with error(s): ____________________________________________________
   ______________________________________________________________________
   (Please answer this question ONLY if the name of the earliest product affected by the error is ‘Requirements Description’ or ‘Requirements Specification’)

9. Error classification: (If the error is part of one of the products mentioned below, please classify the error according to the suggested descriptions)

Product ‘Object Structure’
o incorrect relation (e. g., arrow forgotten)
o (appropriate) object(s) missing
o miscellaneous: ____________________

Product ‘Test Cases’
o incorrect computation of expected value
o input outside physical definition range
o test case missing (i. e., not all tasks are addressed at least once)
o miscellaneous: ____________________

Product ‘Requirements Specification’
o task missing
o incorrect (data-)definition of a task
o incorrect computation formula of a task
o incorrect control flow of an object
o incorrect communication between objects
o incorrect structure building
o (modeling-)guidelines not kept
o miscellaneous: ____________________

Product ‘Requirements Description’
o need missing
o task missing
o incorrect modeling (contradictory tasks)
o incorrect task description
o objects from „object structure” not considered
o strategy missing
o miscellaneous: ____________________
D. Error correction 10. Please put down the effort needed to correct the defect in the tables below (one table for each affected product and object): (List the product described in part C and all other products affected by the defect. In addition, name the corrected object(s) if the product ‘Requirements Description’ and/or ‘Requirements Specification’ is affected! Please put only one object per table! Put down only the effort needed for error analysis and error correction. The effort for error detection and for the verification of the corrected product(s) is collected in the ‘questionnaire effort’.)
Team3                                                     questionnaire defects

product name: _____________    old version number: ________________
object name (optional): ____________________
correction effort (minutes): _____    # of persons: ________________
start date: ___.___.___  time: ___.___    end date: ___.___.___  time: ___.___

product name: _____________    old version number: ________________
object name (optional): ____________________
correction effort (minutes): _____    # of persons: ________________
start date: ___.___.___  time: ___.___    end date: ___.___.___  time: ___.___

product name: _____________    old version number: ________________
object name (optional): ____________________
correction effort (minutes): _____    # of persons: ________________
start date: ___.___.___  time: ___.___    end date: ___.___.___  time: ___.___

product name: _____________    old version number: ________________
object name (optional): ____________________
correction effort (minutes): _____    # of persons: ________________
start date: ___.___.___  time: ___.___    end date: ___.___.___  time: ___.___
Please use additional questionnaires if more products / objects are affected
Appendix E
Qualitative Experience

E.1 Requirements Analysis Method (Goal 4)
E.1.1 Inconsistent Description of Signals¹
Context:
Product Requirements_Description
Symptom:
The description of signals contains a lot of inconsistencies.
Diagnosis:
Often, signals were defined multiple times in a document. This implied redundancies which led to inconsistencies.
Reaction:
The inconsistencies were removed after the termination of the process Verify_Requirements_Description by doing rework in the process Task_Assignment.
Result:
Complete removal of the inconsistencies detected.
Recommendation:
The document Application_Knowledge (especially the dictionary) should contain a standard identifier for signals, which should be used during development.
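The recommended dictionary of standard signal identifiers also lends itself to a simple automated consistency check. A sketch; the signal names are invented, not taken from the BaX documents:

```python
# Invented standard identifiers from the Application_Knowledge dictionary.
dictionary = {"light_scene_chosen", "malfunction_detected", "room_occupied"}

# Signal identifiers as actually used across a requirements document.
used_signals = ["light_scene_chosen", "Malfunction_Detected", "room_occupied"]

# Any identifier not in the dictionary points to a potential inconsistency
# (here: a case mismatch that would cause a redundant signal definition).
unknown = [s for s in used_signals if s not in dictionary]
```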
E.1.2 Different Levels of Granularity in the Description of Tasks²
Context:
Product Requirements_Description, process Requirements_Modeling
Symptom:
Different tasks in the document Requirements_Description are described at differing levels of abstraction. However, the process Requirements_Modeling requires a description of the tasks at the same granularity level.
Diagnosis:
No guidelines existed describing the level of detail for task descriptions.
Reaction:
After the termination of the process Verify_Requirements_Description the task descriptions were adapted in the process Task_Assignment. Mainly refinements of task descriptions were performed.
Result:
Complete unification of the abstraction level.
Recommendation:
On the one hand, guidelines should be formulated that describe a uniform level of abstraction for task descriptions. On the other hand, developers could be guided by sample task descriptions.
1. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/PLANNING/baX_p01.html 2. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/PLANNING/baX_p02.html
E.1.3 Missing Process Refine_Object_Structure³
Context:
Product Object_Structure, process Requirements_Modeling
Symptom:
During the process Requirements_Modeling, a refined description of the product Object_Structure was required, which contains knowledge gained during the process Task_Assignment.
Diagnosis:
No process has been planned for the refinement of the product Object_Structure.
Reaction:
A respective refinement of the product Object_Structure was performed before starting the process Requirements_Modeling.
Result:
The process Requirements_Modeling could be performed with the refined description.
Recommendation:
Insert a respective process in the project plan.
E.1.4 Missing Component Tests⁴
Context:
Process Requirements_Modeling
Symptom:
During the process Requirements_Modeling, the whole system (e. g., the executable parts of the product System_Requirements) was checked (concerning specific aspects such as consistency). It was very difficult to perform these checks.
Diagnosis:
It was only possible to check the system as a whole. The complexity of the system was too high for such checks.
Reaction:
Some checks of the whole system were performed with respect to selected foci.
Result:
The checks were performed inefficiently because of the enormous complexity of the whole system.
Recommendation:
Systematically perform component tests during the process Requirements_Modeling.
E.1.5 Missing Name Conventions⁵
Context:
Process Task_Assignment, product Requirements_Description, process Requirements_Modeling, product System_Requirements
Symptom:
Occurrence of synonyms and homonyms in the documents; different formats for identifiers were used. This led to inefficiencies in the creation of documents.
Diagnosis:
No guidelines existed describing which identifiers should be used.
Reaction:
Ambiguities concerning identifiers were resolved during rework (partly after preceding discussions among the developers).
Result:
Non-ambiguous identifiers in the documents.
Recommendation:
Explicitly defined naming conventions and standard identifiers should be integrated into the product Application_Knowledge (especially the Dictionary) and used throughout the development.
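Such a convention can also be made checkable. The following is a minimal sketch (the identifier format, the synonym list, and the example identifiers are hypothetical, not taken from the Dictionary) of a script that flags identifiers deviating from an agreed format or known to be synonyms of a standard name:

```python
import re

# Hypothetical convention: underscore-separated words, each word
# starting with an upper-case letter (e.g. Object_Structure).
CONVENTION = re.compile(r"^[A-Z][a-z0-9]*(_[A-Z][a-z0-9]*)*$")

# Hypothetical mapping of banned synonyms to standard identifiers.
SYNONYMS = {"Object_Model": "Object_Structure"}

def check_identifier(name):
    """Return a list of complaints for a single identifier."""
    problems = []
    if not CONVENTION.match(name):
        problems.append(f"'{name}' violates the identifier format")
    if name in SYNONYMS:
        problems.append(f"'{name}' is a synonym; use '{SYNONYMS[name]}'")
    return problems

# Usage: scan identifiers extracted from a document.
for ident in ["Object_Structure", "object_structure", "Object_Model"]:
    for complaint in check_identifier(ident):
        print(complaint)
```

Run regularly over the documents, such a check could have detected most of the ambiguities described above during creation rather than during rework.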
3. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/PLANNING/baX_p03.html
4. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/PLANNING/baX_p04.html
5. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/PLANNING/baX_p05.html
E.1.6 Non-systematic Propagation of Decisions6 Context:
Process Task_Assignment, process Requirements_Modeling.
Symptom:
Decisions were not propagated systematically to the involved developers.
Diagnosis:
1) Only a few decisions were documented explicitly. 2) Most of the processes were enacted as group activities. No communication channels were defined explicitly.
Reaction:
Discussions and informal communications were used to propagate decisions.
Result:
The informal communication helped to overcome the deficiencies. An advantage was that all developers were located in one room during the main activities.
Recommendation:
Document decisions and dependencies explicitly. This is especially important for the maintenance of the developed system. The use of (tool-supported) notification mechanisms may help.
E.1.7 Time-intensive Partitioning of Work Spaces7 Context:
Process Task_Assignment, process Requirements_Modeling.
Symptom:
Distributing development tasks (e. g., creating parts of a document or object descriptions) among the developers and keeping documents consistent during parallel work on them was time-intensive.
Diagnosis:
The tools used (FrameMaker and SDT) are not, or only rudimentarily, suited for collaborative team work.
Reaction:
Manual partitioning of development tasks / definition of object sets as work spaces.
Result:
Partitioning of tasks, assignment of work spaces and subsequent integration were time-intensive.
Recommendation:
Tool-support that is suited for collaborative work may help.
E.1.8 Unequal Assignment of Development Tasks to Developers8 Context:
Process Task_Assignment, process Requirements_Modeling.
Symptom:
The developers had to perform tasks of greatly differing complexity. Consequently, some developers had idle time at the end of the processes that could not be used for other tasks.
Diagnosis:
Unequal distribution of work packages. Some work packages were too large.
Reaction:
The problem was detected too late so that no reaction was possible during the project.
Result:
-
Recommendation:
Partition the tasks (and, respectively, the components) into distinct, fine-grained work packages.
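A fine-grained partitioning also allows the load to be balanced mechanically. The following sketch (package names and effort estimates are invented) applies a simple longest-processing-time heuristic to distribute work packages among developers:

```python
import heapq

def assign_packages(packages, developers):
    """Greedily assign work packages (name, estimated effort) to
    developers so that the maximum total effort per developer stays
    small (longest-processing-time heuristic)."""
    # Min-heap of (total effort so far, developer name, assigned packages).
    load = [(0, dev, []) for dev in developers]
    heapq.heapify(load)
    # Place the largest packages first, always on the least-loaded developer.
    for name, effort in sorted(packages, key=lambda p: -p[1]):
        total, dev, assigned = heapq.heappop(load)
        assigned.append(name)
        heapq.heappush(load, (total + effort, dev, assigned))
    return {dev: (total, assigned) for total, dev, assigned in load}

# Usage with hypothetical estimates: total efforts come out as 11 and 9
# instead of, e.g., 8 and 12 with a naive round-robin assignment.
plan = assign_packages([("A", 8), ("B", 5), ("C", 4), ("D", 3)],
                       ["dev1", "dev2"])
```

The heuristic only works if effort estimates per package exist, which again argues for small, well-understood work packages.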
6. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/MANAGEMENT/baX_m01.html
7. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/MANAGEMENT/baX_m02.html
8. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/MANAGEMENT/baX_m03.html
E.2 Development Platform (Goal 5)
E.2.1 SCM Tool: Static Configuration Definition9 Context:
Functional aspect: Configuration Management
Symptom:
Additional products (especially product refinements) cannot be inserted by a developer or the project manager; the same applies to deletion. Both operations had to be handled by the product manager.
Diagnosis:
The management of products is controlled by an external list. This list is very specific and was only understood by the product manager.
Recommendation:
A function to manipulate these lists will be provided in future releases.
E.2.2 SCM Tool: View only Status Reports10 Context:
Functional aspect: Versioning
Symptom:
Status reports only show information on products and configurations. However, this information cannot be used directly (e.g., to revert to an older version); instead, it must be entered manually to activate the desired function.
Diagnosis:
Because the UI is text-based, this functionality cannot be provided without changing the UI itself.
Recommendation:
Future releases will have a graphical UI which will provide this functionality.
E.2.3 SCM Tool: Time-Consuming Handling of Composite Products11 Context:
Functional aspect: UI
Symptom:
Checking in and out can take several minutes when a composite product consists of a few hundred elements.
Diagnosis:
The interface is a set of C-shell scripts, which is not state of the art and heavily platform-dependent.
Recommendation:
Future releases will be implemented in Java and will provide a graphical UI. This should increase performance due to an optimized RCS-call mechanism and guarantee platform independence.
E.2.4 SCM Tool: Explicit Versioning and Configuration Management12 Context:
Functional aspect: versioning and configuration management, tool binding
Symptom:
Most versioning and configuration functions and tool calls are hidden or activated automatically. Some managers feel they have no control over what the system is managing.
Diagnosis:
The SCM and tool-binding system only provides process-oriented product management, which is completely controlled by the development process through predefined behavior.
9. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t01.html
10. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t02.html
11. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t03.html
Recommendation:
Process-oriented product management must be supplemented with explicit functions for versioning, configuration management, and tool binding. These functions are purely product-controlled.
E.2.5 SCM Tool: Hierarchical Configuration Definition13 Context:
Functional aspect: configuration management
Symptom:
Hierarchical configurations are only supported at a predefined level controlled by external files. Dynamic definition of new hierarchies or redefinition of existing configurations is not possible.
Diagnosis:
The management of configurations is controlled by external lists. These lists are very specific and were only understood by the product manager. (see E.2.1)
Recommendation:
A function to manipulate these lists will be provided in future releases.
E.2.6 SCM Tool: List Definition for Composite Products14 Context:
Functional aspect: versioning
Symptom:
The management of atomic products and directories for composite products is supported. A dynamic list for the definition of composite products is not supported.
Diagnosis:
This functionality is missing.
Recommendation:
This functionality will be provided in future releases if needed in the case studies.
E.2.7 SCM Tool: Mask and Filter Definition for Composite Products15 Context:
Functional aspect: versioning
Symptom:
Composite products can be defined through a default mask (*). However, exception lists (filters) or more specific mask definitions (such as *.h) are not provided.
Diagnosis:
This functionality is missing.
Recommendation:
This functionality will be provided in future releases if needed in the case studies.
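The missing functionality can be sketched as follows (file names are illustrative): the elements of a composite product are first selected by one or more masks and then reduced by an exception list of filters:

```python
from fnmatch import fnmatch

def select_elements(filenames, masks=("*",), filters=()):
    """Select the elements of a composite product: keep files matching
    any mask (e.g. '*.h'), then drop files matching any filter
    (exception list). The default mask '*' reproduces the current
    behavior of the tool."""
    kept = [f for f in filenames if any(fnmatch(f, m) for m in masks)]
    return [f for f in kept if not any(fnmatch(f, x) for x in filters)]

# Usage with illustrative file names:
files = ["main.c", "main.h", "util.h", "notes.txt"]
headers = select_elements(files, masks=("*.h",))        # only headers
no_notes = select_elements(files, filters=("*.txt",))   # everything but .txt
```

Because the default mask reproduces today's behavior, such an extension would be backward-compatible with existing composite product definitions.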
12. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t04.html
13. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t05.html
14. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t06.html
15. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t07.html
E.3 Instrumentation of the Experiment (Goal 6)
E.3.1 Difficult Assignment of Detected Defects to Defect Classes16 Context:
Product Requirements_Description, process Verify_Requirements_Description
Symptom:
The assignment of detected defects to the predefined defect classes was often not possible in an unambiguous way. In the product Requirements_Description in particular, it was difficult to distinguish between corrections resulting from incomplete descriptions (which were counted as defects) and refinements resulting from descriptions that were too abstract.
Diagnosis:
The defect classification was too coarse. Many defects were classified as “miscellaneous”. This may be attributed to the lack of knowledge about typical defects.
Reaction:
During project enactment, the defect classification was extended based on a first analysis of the defects classified as “miscellaneous”. The assignment of defects to defect classes often resulted from discussions among the developers.
Result:
The use of the revised defect classification resulted in an easier assignment of defects to defect classes.
Recommendation:
Use the revised defect classification in future projects and define defect classes more precisely.
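The revised classification could also be monitored mechanically. The following sketch (the class names and the threshold are hypothetical, not the classes actually used in the case study) tallies defects per class and flags when the share of “miscellaneous” defects again suggests that the classification is too coarse:

```python
from collections import Counter

# Hypothetical defect classes; the actual classes used in the
# case study are not reproduced here.
DEFECT_CLASSES = {"omission", "ambiguity", "inconsistency",
                  "incorrect_fact", "miscellaneous"}

def classify_report(defects, threshold=0.2):
    """Tally defects per class and flag when the share of
    'miscellaneous' defects exceeds the given threshold, i.e. when
    the classification should be refined again."""
    counts = Counter()
    for defect_class in defects:
        if defect_class not in DEFECT_CLASSES:
            raise ValueError(f"unknown defect class: {defect_class}")
        counts[defect_class] += 1
    share = counts["miscellaneous"] / max(len(defects), 1)
    return counts, share > threshold

# Usage: two of four defects are 'miscellaneous', so the flag is raised.
counts, too_coarse = classify_report(
    ["omission", "miscellaneous", "miscellaneous", "ambiguity"])
```

Such a check would turn the ad-hoc refinement performed during this project into a repeatable step of the measurement plan.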
E.3.2 SCM Tool: Automation of Data Collection Procedures17 Context:
Functional aspect: tool activities
Symptom:
Measurement data were captured by questionnaires only.
Diagnosis:
Some data, such as calendar time, can be captured automatically (every tool activity is logged). Other data, such as the version number of a product or configuration, can be provided by the tool.
Recommendation:
With a graphical user interface, questionnaires with pre-filled fields can be provided online.
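The automatic capture of calendar time can be sketched as follows (file name and log format are illustrative): each tool activity is wrapped so that its start time and duration are logged without developer intervention:

```python
import csv
import functools
import time

def logged(activity, logfile="activity_log.csv"):
    """Decorator that records the start time and duration of a tool
    activity in a CSV log, so calendar time no longer has to be
    entered on questionnaires. (File name and fields are illustrative.)"""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                with open(logfile, "a", newline="") as f:
                    csv.writer(f).writerow(
                        [activity, start, time.time() - start])
        return inner
    return wrap

@logged("check_in")
def check_in(product):
    pass  # placeholder for the actual tool call
```

Combined with the version numbers the tool already knows, such a log would pre-fill a substantial part of the questionnaires used in this case study.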
E.3.3 Gaining Qualitative Experience after Project Termination18 Context:
All processes of the case study.
Symptom:
It was difficult for the developers to express qualitative experience.
Diagnosis:
The qualitative experience was only gathered after project termination.
Reaction:
-
Result:
The essential qualitative experience could be captured (based on the developers’ opinion).
Recommendation:
Qualitative experience should be gathered regularly during project performance (e. g., in accordance with certain milestones) and after project termination.
16. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/PLANNING/baX_p06.html
17. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/TECHNIQUES/baX_t08.html
18. http://sep1.informatik.uni-kl.de:18000/EDB/INHALTE/UEBERGREIFEND/Q_ERFAHRUNGEN/LL/PLANNING/baX_p07.html