Classification of software defects in parallel programs
Henryk Krawczyk, Bogdan Wiszniewski, Faculty of Electronics, Technical University of Gdańsk, 80-952 Gdańsk, Poland
Peter Mork, Department of Control Engineering, University of Miskolc, Miskolc, Hungary
Abstract. In this survey we investigate frameworks for the systematic detection of errors in parallel programs. For sequential programs there are two basic classifications of errors. One is related to total quality assurance across the software development life cycle. The other concentrates on logical properties of software defects. We systematically review the existing terminology in order to characterize the relationships between the two when applied to parallel software. We illustrate our ideas with realistic programming examples. Our conclusions provide a platform for systematic parallel software testing, which is essential for specifying testing and debugging tools. Keywords. Program testing, error classification, bug taxonomy, parallel programs, path analysis.
1. Introduction By systematic testing we mean a formal procedure, where inputs must be prepared, outputs predicted, tests documented, code executed, and results observed, as put by Beizer in [2]. We aim at path analysis, since it can provide total code coverage and reasonable coverage of program "execution histories". Based on this concept, we define classes of errors in parallel programs. Systematic testing of paths should be able to detect these errors. Throughout the rest of this paper we will briefly introduce standard terminology related to software testing and quality assurance, review existing taxonomies of bugs in application programs, provide a formal model of a program path in order to capture properties of both sequential and parallel programs, and finally use this model to define classes of errors. An example of a PVM program is analyzed in detail to illustrate the main ideas.
2. Standard terminology Software, including parallel programs, has become the key element in the dependable operation of computer-based systems. Its development has created a new discipline called software engineering, which comprises a set of methods, tools and procedures for obtaining efficient and reliable software systems. The classic software life cycle begins with conceptual definition, progresses through requirement analysis, design, implementation and testing, goes through installation and acceptance, exploitation and maintenance, and ends with retirement [14]. All of these life cycle phases contain specific problems and are described by different characteristics [24]. Moreover, a constructive approach to software development assumes various checking activities that should be properly planned and applied. Three of them are the most important: verification, validation, and measurement (see Table 1).
Software life cycle            | Techniques for improving dependability
-------------------------------+---------------------------------------------------------
Concept                        | Careful study of user's needs, measurement
Requirements                   | Rapid prototyping, simulation, reviews, measurement
Design                         | Rapid prototyping, simulation, inspections, measurement
Implementation                 | Proof of correctness, inspections, measurement
Tests                          | Verification, validation, measurement
Installation and acceptance    | Monitoring, measurement
Exploitation and maintenance   | Measurement, evaluation
Retirement                     | Functional analysis of software quality

Tab.1: Dependability and the software development process
Verification is the process of determining whether or not the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. Validation is the process of evaluating software at the end of the software development process, to ensure compliance with the software requirements. Along with verification and validation, activities involving measurement (monitoring) are frequently used in software development phases [5]. Monitoring contributes to the analysis of software
functionality and to performance evaluation. However, measurement techniques apply not only to software quality assessment, but also to fault detection. For example, measured processing times higher than a required value may be a sufficient reason to reject the implemented code and to generate a request to redesign it. Therefore good measurements should be integrated with well organized processes of verification and validation [22]. Detection of faults throughout the life cycle through reviews, inspection, prototyping, simulation, or proof of correctness of the software product will improve dependability. Error detection, which involves testing, is quicker and more efficient during earlier phases, and slower and less efficient during later phases [27]. Installation tests and integration tests are needed to confirm the absence of certain kinds of faults. However, more tests do not necessarily mean better software dependability. During operation and regular maintenance of a software product, faults do not necessarily have to be fixed immediately. Careful choice of a testing procedure and a systematic approach to test generation play a very important role in software engineering practice. Based on the standard terminology proposed in [16] we use the following definitions to describe strategies of testing for software anomalies. A discrepancy between a deliverable product of a software development phase and its documentation, the products of an earlier phase, or the user requirements, is called a software problem [6]. An error is a software problem found during the review of the phase in which it was introduced, while a defect is a problem found later than the review of the phase in which it was introduced. Both errors and defects are considered faults. The inability of a functional software unit to perform its required functions is called a failure. Failure severity can be categorized in different ways, for example as a software unit being inoperable, its major function being inoperable, or just a cosmetic failure [27]. A failure is caused by a defect encountered during code execution. Such a code execution is called a testing process. By using test reports generated during that execution, a certain number of defects causing failures can be identified. The term bug is frequently used to address all kinds of damage caused by faults. This term is not precise, but it reflects very well the uncertainty surrounding the origins of faults: bugs are really
creeping into programs when programmers are wrong but confident. Localizing errors in a program involves a process known as debugging. Debugging requires repeated executions of a piece of product code in order to observe patterns in program behavior and to enable on-line inspection of selected variables. Special tools for debugging concurrent programs are described in [22]. Testing and debugging are different, but related activities. Testing reveals the existence of bugs, while debugging localizes and removes them. It is important for tests to be developed correctly with regard to an adopted standard, to the current development phase, and to the right timing of processes. Testing can then contribute well to the quality assessment process [10]. Stevens in [29] discusses the ESA software standard that effectively supports software development methods. Verifying a system in ESA is initially a top-down, and then a bottom-up activity, as shown in Figure 1. The golden rules of verification in ESA are: do it early, do it at as low a level as possible, and use the cleverest people. Validation and verification is always a compromise, balancing cost and time against dependability and quality. It also requires widespread training, consulting and staff reinforcement, as well as suitable management. Inspection techniques are also very popular. They involve tracing and analyzing a program's source code. Faults are removed more efficiently by inspection than by testing; it is better (cheaper) to avoid faults early, instead of hoping to find them later [9].
Fig.1: Testing in the software lifecycle (user requirements are verified by acceptance tests, software requirements by system tests, architectural design by integration tests, and detailed design by unit tests; tests are written on the way down to the source code and run on the way back up)
3. Taxonomies for bugs Testing, debugging and measurement activities cannot be represented in the form of observable and controllable systems unless explicit input-output or cause-effect relationships are precisely established. On the other hand, techniques that identify these relationships can help designers by providing feedback on the progress of their software development efforts. The fundamental issue is to determine a suitable cause-defect model. There are two terms usually involved in determining such a model: incidents and triggers [3]. An incident is any undesirable or unexpected event observed during a test, trial, or any other operation of a system. The reason for an incident is a fault, which causes real problems for users. A trigger is a combination of circumstances which may include some failures or malicious actions. It activates the source of a fault, and the immediate effect is a local error in some system component. When the error propagates across the component's interface, that component fails from the system point of view. A sequence of such interactions (failure, fault, error, failure, etc.) is called error propagation [18]. The systematic approach requires classification of attributes for such basic entities as failures, errors, and faults [23]. They are shown in Table 2.
Attribute  | Definition
-----------+---------------------------------------------
Location   | Where is the entity?
Timing     | When did the event occur?
Mode       | What was observed?
Effect     | What consequences have resulted?
Mechanism  | How did a given entity arise?
Cause      | Why did the event occur?
Severity   | How much was the user affected?
Cost       | How much cost was incurred by the vendor?
Count      | How many entities were observed?

Tab.2: Attributes of entities
A general classification of faults has been proposed by Beizer [2]; the categories of faults described are broad, in order to reflect that bugs usually have several symptoms. They are shown in Table 3. Very interesting propositions have been given lately by Hewlett-Packard [9] and Motorola [6].
Error category   | Symptoms
-----------------+------------------------------------------------------------------------------
Function errors  | Wrong or missing system function
System errors    | Interface errors, resource management problems, control and sequence errors
Process errors   | Incorrect data processing by individual components
Data errors      | Specification, design, layout and initialization of data structures
Code errors      | Ambiguous syntax of a programming language
Documentation    | Incomplete description, mistakes in text
Other            | Unclear causes

Tab.3: Broad categories of errors
The former classifies faults for the entire software design process, while the latter concentrates on software metrics used to improve and control the design process. These two categorizations of faults conform to the categorization given in Table 3, except for two classes [18]: microcode defects and so-called unclear faults. The former can be classified as either a data or a code error in Table 3, while the latter as an other error with unclear causes. It should be noted that the categories characterized above tend to overlap. A study of faults that includes network environments is presented in [21]. In accordance with empirically based expectations, operating behaviors of networks can be classified as either normal or anomalous. Network-based systems produce new software problems, such as race conditions, mutual exclusion of processes, or access conflicts [7]. The testing process for distributed and real-time systems is described in detail in [28].
4. Systematic testing Testing provides a natural mechanism for exploring systems' operational behavior. The leading motto for any kind of software testing is: "this program is incorrect and we shall show it". Therefore testers "assume" programs to be correct and then attempt to find counter-examples to show that they are not. This explains why testing cannot be considered a method of showing the correctness of programs. If after a series of experiments with a number of such "counter-examples" testers fail to demonstrate incorrect behaviors of tested programs, they can either feel more confident about these programs or less confident about the way they have tested them. Therefore it is very important what is
meant by the testing process. Beizer [2] indicates the need for distinguishing three models: a model of the environment, a model of a program, and a model of the errors expected. In this paper we assume the environment to be a set of distinct and logically separated communicating asynchronous sequential processes [20]. We model programs by their control flow graphs [11] describing potentially infinite sets of distinguishable paths. Each path represents a finite sequence of operations constituting a computation in input variables. Finally, we use the path model of computer programs to classify errors [13].
4.1 Parallel systems At the earliest stage of system development, i.e. requirement analysis, parallel systems are usually viewed as sets of event-driven activities. The main focus is on understanding and defining an application problem; an axiomatic system representation is developed in order to prove various logical properties of a system and its environment. Problems thus defined are resolved during system design, and finally implemented as a working product. During system design, events are often associated with individual programs called processes. Processes define the basic interconnected components of a system; typically they are viewed as independent units that communicate through a set of well defined access points. The interface between parallel system components involves synchronization and exchange of data. Rules for communicating processes are established by protocols; protocols control the ordering of selected events. Implementation is aimed at expressing the specified system units (processes) and their interface in terms of a selected programming language and the language's underlying concurrent programming model [1]. Finally, the implemented code is executed for specially selected test cases. One question that arises when considering the role of testing is whether formal methods alone can guarantee the final quality of parallel software products. It is clear that a protocol specification that is proved to be correct does not guarantee that its physical implementation is free of errors. For example, if a programmer has incorrectly used some interprocess communication primitives provided by the implementation language, then a correct protocol specification is implemented by a wrong program. Another reason may be that, although proved to be correct, a given protocol is wrong; this
is when a system designer does not fully understand (or has insufficient information about) the problem to be solved by the system being designed. Finally, complex communication patterns may be hard to verify formally for very practical reasons; for example, how would one prove the correctness of an operating system like UNIX™? A general conclusion then is that although formal approaches contribute to the quality of a product, they do not eliminate the need for physical execution of the product code. The key problem is which paths should be tested. The number of paths in any realistic program can be unacceptably high. There are two reasons for that: the presence of iteration loops and the combinatorial explosion of possible interactions between parallel processes. The former poses a problem for effective selection of test cases for sequential programs and has been investigated before [2]. By modeling concurrent systems with Finite State Machines [26] or directed graphs [30], it is possible to apply path coverage techniques developed for sequential programs to parallel programs. Arbitrary ordering of parallel program statements leads to an excessive volume of paths being considered for tests. However, modular programming paradigms supported by modern programming languages make it possible to ignore most of the internal statements of parallel processes when considering interprocess communication; properly designed units (processes) can communicate only through a well defined interface, and these units are in fact sequential programs. The strongest form of this separation is when processes are physically isolated by running on different processors.
4.2 A program path Consider program P computing function C in input variables forming a set X={x_i | i=1,...,k}. The set X^k of all k-vectors contains all possible combinations of values that the input variables can ever assume. The program domain D ⊂ X^k is the set of all k-vectors accepted by P, i.e., D=dom(C). Path analysis of any sequential program P is based on its control flow graph G, defined as a quadruple [11] G=(Q,A,s,e), where Q is a finite set of nodes, A is a finite set of arcs, and s, e are distinguished start and end nodes. Each arc (a,b)∈A, where a,b∈Q, is labeled by a symbol g_ab representing either an assignment expression or a predicate. A path p from node a to node b is a sequence of nodes p=(q_0 q_1 ... q_j), a=q_0, b=q_j, such that (q_i, q_{i+1})∈A for each i=0,...,j-1. If a=s and b=e, then path p is called
a program path. The path computation C(p) is a function in X^k computed by the sequence of the respective assignment expressions along p. The path condition is the conjunction of all the respective predicates along p; this conjunction describes the path domain D(p), which is a subset of D. With this interpretation one can represent program P as a (possibly infinite) set P={p_i | i=1,2,...} of program paths such that C={C(p_1),C(p_2),...,C(p_n),...} and D=D(p_1)∪D(p_2)∪...∪D(p_n)∪... . Consider the example program graph in Figure 2. We denote predicates and assignment expressions using the syntax of the C programming language. The graph in Figure 2 has the start node at 1 and the end node at 5. Arcs (1,2), (1,5), (2,3), and (2,4) are labeled by symbols representing predicates, while arcs (4,1) and (3,1) are labeled by symbols representing assignment expressions. The example program has domain D=N^2 and computes the greatest common divisor of its input variables, i.e., C=GCD(x,y). In order to illustrate path domains and path computations consider for example path p1=(1 5). Path p1 computes C(p1)=x for input pairs satisfying path condition x==y, therefore path domain D(p1)={<1,1>,<2,2>,...}. Another path, for example p2=(1 2 3 1 2 4 1 5), computes C(p2)=x/2 for input pairs satisfying path condition y==3x/2; therefore path domain D(p2)={<2,3>,<4,6>,...}.
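To make the path model concrete, the quadruple G=(Q,A,s,e) with labeled arcs can be written down directly as a data structure. The following C sketch is illustrative only (the type and field names are ours, not taken from the paper); its initializer encodes the graph of Figure 2:

    enum label_kind { ASSIGNMENT, PREDICATE };

    struct arc {                  /* one element of A: arc (a,b) with label g_ab  */
        int a, b;                 /* source and target nodes, members of Q        */
        enum label_kind kind;     /* the label is an assignment or a predicate    */
        const char *label;        /* textual form of g_ab, in C syntax            */
    };

    struct cfg {                  /* the quadruple G = (Q, A, s, e)               */
        int nodes;                /* |Q|; nodes are numbered 1..nodes             */
        int s, e;                 /* distinguished start and end nodes            */
        const struct arc *arcs;   /* the set A                                    */
        int n_arcs;
    };

    /* The graph of Figure 2 (the GCD program) in this representation. */
    static const struct arc gcd_arcs[] = {
        { 1, 2, PREDICATE,  "x != y"    },
        { 1, 5, PREDICATE,  "x == y"    },
        { 2, 3, PREDICATE,  "x < y"     },
        { 2, 4, PREDICATE,  "x >= y"    },
        { 3, 1, ASSIGNMENT, "y = y - x" },
        { 4, 1, ASSIGNMENT, "x = x - y" },
    };
    static const struct cfg gcd_graph = { 5, 1, 5, gcd_arcs, 6 };

A program path is then any sequence of arcs of gcd_graph leading from node s to node e.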
Fig.2: Control flow graph of the example GCD program (start node s=1, end node e=5; arc (1,5) is labeled x==y, arc (1,2) x!=y, arc (2,3) x<y, arc (2,4) x>=y, arc (3,1) y=y−x, and arc (4,1) x=x−y)
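Written out as ordinary C code (a direct transcription of the graph above; the node numbers appear as comments), the example program is:

    /* GCD program of Figure 2; comments give the corresponding graph nodes.     */
    int gcd(int x, int y)
    {
        while (x != y) {          /* node 1: x!=y continues, x==y exits to node 5 */
            if (x >= y)           /* node 2: choose between nodes 4 and 3         */
                x = x - y;        /* node 4, then back to node 1                  */
            else
                y = y - x;        /* node 3, then back to node 1                  */
        }
        return x;                 /* node 5: x == y == GCD of the inputs          */
    }

For the input pair <2,3> this routine visits nodes 1 2 3 1 2 4 1 5, i.e. it follows program path p2 discussed above, and returns 1.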
    ...
    /*12: ...  */   if ((x>1)&&(y>1))       /* GCD of results is final        */
    /*13: 14   */   {pack(&x,&y);           /* pair of arguments into message */
    /*14: 15   */    pvm_send(S[2],1);      /* send message to slave S2       */
    /*15: 16   */    pvm_recv(-1,2);        /* receive message                */
    /*16: 21   */    z=unpack();            /* final result to z              */
    /*17: 18   */   } else
    /*18: 19   */   {pvm_kill(S[2]);        /* kill the unnecessary process   */
    /*19: 20   */    z=1;
    /*20: 21   */   }
    /*21: 22   */   print_out(&z);          /* the result                     */
    /*22: 23   */   pvm_exit();
    /*23:      */   }

Fig.7: The example parallel program (only the final part of the master program, from the decision at node 12 onward, has been preserved here)
The example program in Figure 7 constitutes a system of four parallel processes: one master and three slaves. We specify the control flow graph of the master program implicitly, as comments forming the graph adjacency matrix, where each line gives the respective graph node number and the list of its successors. The master process reads three values x, y, and z, for which the greatest common divisor is computed. Each slave computes the greatest common divisor of two numbers. For brevity we assume here that each slave starts its operation by performing a blocking receive of two parameters (x, y), then computes GCD(x,y) as shown in Figure 2, sends the result back to the master program and terminates. We assume the executable file "gcd" to contain the relevant code of function GCD(x,y). The computation of the final result is performed by the master in two steps. In the first step two slaves, S0 and S1, are started to compute GCD(x,y) and GCD(y,z) in parallel. The third slave S2 waits for the results produced by S0 and S1. As soon as the results are delivered to the master, slave S2 can start its computation of the final result, which is the greatest common divisor of the two results delivered by slaves S0 and S1. Note that the synchronization of actions of the communicating processes here is quite restrictive. Sending a message to the slaves from the master (nodes 5 and 7) is followed by a blocking receive operation (nodes 8 and 10). The master can continue only when both results have been delivered; it then sends a message to the third slave (node 14) and blocks again (node 15). This "serialized" communication pattern, combined with the physical distribution of the internal data objects of all cooperating processes, makes the master program behave like a sequential program. Interprocess communication at nodes 8, 10 and 15 is in fact an assignment statement: a hypothetical procedure unpack() shown here assigns variables of the master the respective values returned by each slave. Testing of this example program for domain, computation and subcase errors is like testing any sequential program.
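The slave program itself is not reproduced in Figure 7. A minimal sketch consistent with the description above (blocking receive of the two arguments, the GCD computation of Figure 2, a send back to the master) might look as follows; the use of pvm_pkint/pvm_upkint and the message tags 1 and 2, chosen to match the master's calls at nodes 14 and 15, are our assumptions rather than details given in the paper:

    #include "pvm3.h"

    int gcd(int x, int y);                 /* the routine of Figure 2            */

    int main()
    {
        int x, y, result;
        int master = pvm_parent();         /* task id of the spawning master     */

        pvm_recv(master, 1);               /* blocking receive of the arguments  */
        pvm_upkint(&x, 1, 1);              /* (assumed packing: two integers     */
        pvm_upkint(&y, 1, 1);              /*  sent with message tag 1)          */

        result = gcd(x, y);                /* GCD(x,y) as in Figure 2            */

        pvm_initsend(PvmDataDefault);      /* pack and return the result         */
        pvm_pkint(&result, 1, 1);
        pvm_send(master, 2);               /* send it back with message tag 2    */

        pvm_exit();
        return 0;
    }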
For example, a domain error may occur at node 12 if the corresponding predicate ((x>1)&&(y>1)) has defects, such as wrong relational operators being used or wrong variables being referred to. If it occurs, the subsequent execution of the master program may follow a wrong path. On the other hand, if function GCD(x,y) computed by the slave processes is incorrect, then the assignments at nodes 9 and/or 11 may cause a computation error. Finally, if the entire if-else clause (nodes 12 and 17) is missing, we may observe a subcase error. If the example parallel program has wrong interprocess synchronization we will still be observing the basic three classes of path errors, but their detection may require more effort in preparing and executing tests (or luck, otherwise). Let us assume that the example program in Figure 7 has a defect: at node 10 there is a call of the non-blocking receive function pvm_nrecv, instead of its blocking counterpart pvm_recv. Such a minor typing error may escape detection during source code inspection, for example, and will never be caught by the C compiler. This fault, however, may entirely change the example program computations, being a trigger for many unexpected situations. Below we consider some of them. If slave S1 is fast enough to compute its result within the period of time needed by the master to reach node 10 (receiving from S1) after leaving node 7 (sending to S1), the bug will not show up.
The problem of timing here is quite complex, since slave S0 can influence the execution of the master by blocking it at node 8, thus giving slave S1 time to deliver its result before the master reaches node 10. Linking incorrect master program results printed at node 21 directly to the timing problem at node 10 is not that obvious. For example, if the tester selects test point T1, the bug will not be revealed anyway, since the unchanged but still correct value of y will be sent by the master to slave S2 for further (correct) processing. Clearly, we have an observability problem here. The tester must be able to distinguish two situations: one where slave S1 (computing y) arrives on time to communicate with the master at node 10, and another where it fails to do that. Auxiliary variables considered in the previous section will have to be provided to monitor the two different execution histories. On the other hand, if the tester is not aware of this problem and selects a test point different from T1, e.g. T2, the bug caused by the non-blocking operation at node 10 will be detected as a simple computation error. Note that in such a case the unchanged value of y will be wrong at node 11, and will obviously lead to a wrong final result computed by the master program. Moreover, this particular computation error observed for test point T2 does not imply a domain error at node 12. A domain error at node 12 can be observed for yet another test point, e.g. T3, when the wrong value of y=2, not updated on time by slave S1 to the correct value y=1, will force the master to follow a wrong path and compute a wrong result. In this case, the related predicate at node 12 is correct, but evaluates to the wrong value because of the wrong value of y computed before. This is an example of error propagation, when some computation error along a path implies the wrong evaluation of a correct predicate. Finally, a locking error will occur in this example program when some process has not communicated at some node with the right process. For example, if the master sends a message at node 14 by calling pvm_send(S[1],1), i.e., sending again to slave S1 instead of S2, it may lock at node 15, as S1 has already communicated with the master before (at node 7). Simple observation of slaves S1 and S2 to check how they communicate with the master is not sufficient to detect this bug, since in PVM the sending operation is non-blocking. Therefore, both slaves S1 and S2 may be
terminated by the time the tester realizes that the master waits for something that has not happened.
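For reference, the difference between the intended and the defective receive at node 10 can be spelled out in code. The sketch below is ours: the tid argument and the message tag follow the conventions of Figure 7, unpack() is the hypothetical helper mentioned above, and the defective variant is written with an explicit guard to make the described behavior (y updated only when S1's message has arrived in time) visible:

    #include "pvm3.h"

    int unpack(void);                      /* hypothetical helper, as in Fig.7   */

    /* Intended node 10: blocking receive, so y is always updated at node 11.    */
    void node10_intended(int s1, int *y)
    {
        pvm_recv(s1, 2);                   /* blocks until S1's message arrives  */
        *y = unpack();
    }

    /* Defective node 10: pvm_nrecv returns immediately, with 0 when no message  */
    /* from S1 has arrived yet; in that case y keeps its old value at node 11.   */
    void node10_defective(int s1, int *y)
    {
        if (pvm_nrecv(s1, 2) > 0)
            *y = unpack();                 /* updated only if S1 was fast enough */
    }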
6. Conclusions The general conclusion of this survey is that testing and debugging should be considered broadly, from the perspective of the entire software life cycle. This is because various software quality assurance practices at any stage of the final product development can contribute to many of the specific problems that testing and debugging have to deal with. Effective testing requires precision in describing possible program bugs. Unfortunately there is no uniform description for various environments, programs and errors. Testing of parallel programs is more complicated than testing of sequential programs owing to the enormous number of possible program interactions. However, the lower the level of consideration, the smaller the difference between testing sequential and parallel programs. We have emphasized this by distinguishing classic path errors from the observability and locking errors, since the latter two constitute in our view some natural restrictions on parallel software testing and debugging. Moreover, observability problems may be considered both in program data space and in time. Program assignment statements, decision statements and communication statements are the three basic classes of objects to be considered by any systematic testing method. Different relations between these objects may be provided in order to define a family of test completion criteria. Of particular interest to us are various ordering relations between these objects, since we want to capture execution histories of parallel programs. The bug taxonomies and error classes reviewed in this survey either tend to overlap, or exclude, or complement one another. Taxonomies concentrate rather on managerial aspects of software quality, where the costs of finding or repairing defects are very important. Error classification deals more with the functional aspects and the desired level of confidence in the code. A good understanding of these aspects of final product quality is the key factor in specifying requirements for testing and debugging tools. We will continue on this problem in the next report.
References

1. Andrews, G.R., Schneider, F.B.: Concepts and notations for concurrent programming. Computing Surveys 15, 3-43 (1983)
2. Beizer, B.: Software system testing. Van Nostrand Reinhold Company, 1990
3. Chillarege, R., et al.: Orthogonal defect classification - a concept for in-process measurements. IEEE Trans. on Software Eng. SE-18, 943-956 (1992)
4. Clarke, L.A., Hassel, J., Richardson, D.J.: A close look at domain testing. IEEE Trans. Software Eng. SE-8, 380-390 (1982)
5. Cunha, J.C., et al.: Monitoring and debugging distributed memory systems. Proc. Conf. Microcomputer Systems and Applications, Budapest, Hungary, 1994 (to appear)
6. Daskalantonakis, M.K.: A practical view of software measurement and implementation experiences within Motorola. IEEE Trans. on Software Eng. SE-18, 998-1010 (1992)
7. Dauphin, P.: Combining functional and performance debugging of parallel and distributed systems based on model-driven monitoring. Proc. Euromicro Workshop on Parallel and Distributed Processing, Malaga, Spain, 1994, pp. 463-470
8. DeMillo, R.A., et al.: An extended overview of the Mothra software testing environment. Proc. 2nd Workshop on Software Testing, Verification and Analysis, Banff, Canada, 1988, pp. 142-151
9. Grady, R.B.: Practical results from measuring software quality. Comm. ACM 13, 62-68 (1993)
10. Graham, D.R.: Testing and quality assurance - the future. Information and Software Technology 34, 694-697 (1992)
11. Hecht, M.S.: Flow analysis of computer programs. North-Holland, 1977
12. Howden, W.E.: Methodology for the generation of program test data. IEEE Trans. Comp. C-24, 554-559 (1975)
13. Howden, W.E.: Reliability of the path analysis testing strategy. IEEE Trans. Software Eng. SE-2, 208-215 (1976)
14. IEEE Guide for the Use of IEEE Standard Dictionary of Measures to Produce Reliable Software. IEEE Std 982.2-1988
15. Khanna, S.: Logic programming for software verification and testing. The Computer J. 34, 350-357 (1991)
16. Laprie, J.C. (ed.): Dependability: basic concepts and terminology. Springer-Verlag, Wien, New York, 1992
17. Laski, J., Korel, B.: A data flow oriented program testing strategy. IEEE Trans. Software Eng. SE-9, 347-354 (1983)
18. Lee, I., Iyer, R.K.: Faults, symptoms, and software fault tolerance in the Tandem GUARDIAN90 operating system. Proc. Int. Symp. FTCS-23, Toulouse, France, 1993, pp. 20-29
19. Leffler, S.J., et al.: An advanced socket-based interprocess communication tutorial. Network Programming, SunOS 4.0.3 manual
20. Lynch, N.A., Fischer, M.J.: On describing the behavior and implementation of distributed systems. Theoretical Computer Science 13, 17-43 (1981)
21. Maxion, R.A., Feather, F.E.: A case study of Ethernet anomalies in a distributed computing environment. IEEE Trans. on Reliability 39, 433-443 (1990)
22. McDowell, C.E., Helmbold, D.P.: Debugging concurrent programs. ACM Comp. Surv. 21, 593-622 (1989)
23. Mellor, P.: Failures, faults and changes in dependability measurement. Information and Software Technology 34, 640-654 (1992)
24. Pressman, R.S.: Software engineering - a practitioner's approach. McGraw-Hill, 1992
25. Rapps, S., Weyuker, E.J.: Selecting software test data using data flow information. IEEE Trans. Software Eng. SE-11, 367-375 (1985)
26. Sarikaya, B., Bochmann, G., Cerny, E.: A test design methodology for protocol testing. IEEE Trans. Software Eng. SE-13, 518-531 (1987)
27. Selby, R.W.: Empirically based analysis of failures in software systems. IEEE Trans. on Reliability 39, 444-454 (1990)
28. Schütz, W.: Fundamental issues in testing distributed real-time systems. Tech. Rep., Centre National de la Recherche Scientifique, Toulouse, France, 1993
29. Stevens, R.: Creating software the right way. BYTE, August 1991, pp. 31-38
30. Taylor, R., Kelly, C.: Structural testing of concurrent programs. Proc. Workshop on Software Testing, Banff, Canada, 1986, pp. 164-168
31. White, L.J., Cohen, E.I.: A domain strategy for computer program testing. IEEE Trans. Software Eng. SE-6, 247-257 (1980)
32. White, L.J., Wiszniewski, B.: Path testing of computer programs with loops using a tool for simple loop patterns. Software - Practice and Experience 21, 1075-1102 (1991)
33. Wiszniewski, B.: Path testing methodology for parallel computer programs: a developer's perspective. Tech. Rep. 90-14-01, Dept. of Comp. Science, Bradley University, Peoria, USA, 1990