Case-based Software Reliability Assessment by Fault Injection Unified Procedures

Kharchenko Vyacheslav, National Aerospace University "KhAI", Chkalov str. 17, Kharkiv, Ukraine, +38 0577074503

Gordeyev Alexander, Ukrainian Academy of Banking, Petropavlovskaya str. 57, Sumy, Ukraine, +38 0951300557

Andrashov Anton, National Aerospace University "KhAI", Chkalov str. 17, Kharkiv, Ukraine, +38 0577074503

Konorev Boris, Certification Centre ASU, Ak. Proskuri str. 1, Kharkiv, Ukraine, +38 0577603598

Sklyar Vladimir, State Scientific-Technical Centre on Nuclear and Radiation Safety, Chernyshevsky str. 31, Kharkiv, Ukraine, +38 0573792402

Boyarchuk Artem, National Aerospace University "KhAI", Chkalov str. 17, Kharkiv, Ukraine, +38 0577074503


ABSTRACT
Software fault injection methods and tools are presented and analyzed. Unified procedures of software fault injection for the assessment of software testing and verification quality at different development stages are proposed. A technique for evaluating the quality of software testing and verification is described. The results of using the developed tools «InExp» and «SoftAsVer» are presented.

Categories and Subject Descriptors D.2.2 [Design Tools and Techniques]: Computer-aided software engineering (CASE).

General Terms Reliability

Keywords Software quality assessment, fault injection, unified procedures, tools.

1. INTRODUCTION
Software reliability is an important attribute of the dependability of computer-based control systems for critical and business applications. The analysis of accidents in the aerospace domain shows that in the 1990s roughly one launch out of a hundred resulted in a crash [1], and roughly one crash out of a hundred was caused by software faults. One can recall the collision of the American spacecraft DART and MUBLCOM on April 15, 2005 [2]: the cause of this accident was a software fault in the navigation system of one of the spacecraft, and the total damage was estimated at $110 million.

Software development is a complicated and responsible process consisting of several stages; their sequence and content are determined by normative documents [3, 4]. Software quality directly depends on the quality of its development. The tendency towards growing complexity of software projects increases the risk of decreasing quality and reliability. Software quality is assessed at each development stage by verification. Verification is the confirmation (performed by expert examination) and presentation of impartial evidence that the specified requirements have been fully implemented [5]. The quality of verification defines the quality of the software product as a whole. There is a number of methods for assessing the verification process; some of these methods and tools are based on software fault injection [6]. Fault injection consists of: a) insertion of faults into the software at the i-th development stage; b) testing of the software to detect the faults; c) analysis (comparison) of the faults injected and the faults detected during testing [8].

The objectives of this article are the following: analysis of methods and tools for software fault injection, and development of a fault injection technique and tools for assessing testing quality and software reliability by means of unified procedures. Software reliability assessment is based on the assessment of test case completeness.

2. ANALYSIS OF METHODS AND TOOLS
The analysis of methods and tools for software fault injection has been conducted to determine whether they can be used to inject faults into software at various stages of its development and testing, on various platforms, for different fault types and with different fault injection techniques. The results of the analysis of the known methods and tools [9-30] are shown in Table 1, where the following attributes are used.

1. Attributes of the development process:
1.1. Fault types: incorrect application of IF (1.1.1); incorrect arithmetic operation (1.1.2).
1.2. Fault injection techniques: injection of faults into the source code (1.2.1).
1.3. Development stages: development of architectural design (1.3.1); coding (1.3.2).
1.4. Programming languages: Java (1.4.1); VHDL (1.4.2); SQL (1.4.3).

2. Attributes of run-time:
2.1. Fault types: CPU faults (register set/reset, setting of an incorrect value) (2.1.1); memory faults (bit set/reset, bit-flips) (2.1.2); I/O faults (disk controller error injection) (2.1.3); network communication faults (2.1.4).
2.2. Fault injection techniques: trap mechanism (2.2.1); fault injection triggered by a hardware or software timer interrupt (2.2.2); exception mechanism (2.2.3); fault injection by corrupting network packets (2.2.4); fault injection by changing a device driver byte-code (2.2.5).

An illustrative sketch of the source-code injection technique (1.2.1) is given below.
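The following is a minimal, hypothetical sketch (not the InExp tool described later) of injection technique 1.2.1: a fault of type 1.1.1 (incorrect application of IF) is seeded by mutating a comparison operator in a Java source line. The class and method names are illustrative only.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative sketch of source-code fault injection (technique 1.2.1). */
public class IfFaultInjector {

    // Matches a ">=" comparison inside an if-condition, e.g. "if (x >= limit)".
    private static final Pattern IF_GE = Pattern.compile("(if\\s*\\([^)]*?)>=([^)]*\\))");

    /**
     * Injects a fault of type 1.1.1 ("incorrect application of IF") by
     * replacing the first ">=" inside an if-condition with ">".
     * Returns the original line unchanged if no such condition is found.
     */
    public static String injectIncorrectIf(String sourceLine) {
        Matcher m = IF_GE.matcher(sourceLine);
        return m.find() ? m.replaceFirst("$1>$2") : sourceLine;
    }

    public static void main(String[] args) {
        String original = "if (balance >= threshold) { approve(); }";
        String mutated  = injectIncorrectIf(original);
        // A report would pair the initial code line with the faulty one,
        // as the InExp report described in Section 5 does.
        System.out.println("initial: " + original);
        System.out.println("faulty : " + mutated);
    }
}
```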

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SEESE’08, May 13, 2008, Leipzig, Germany Copyright 2008 ACM 978-1-60558-076-0/08/05...$5.00.

Table 1: The analysis of software fault injection methods and tools

(Table 1 lists, for each tool: №, name, reference, operating system, release date, the development-process attributes supported (fault types 1.1.x, stages 1.3.x, techniques 1.2.x, programming languages 1.4.x) and the run-time attributes supported (fault types 2.1.x, techniques 2.2.x), using the attribute codes defined above. The 21 tools analyzed are GOOFI, DOCTOR, FERRARI, MEFISTO, DEPEND, FINE, DEFINE, Xception, EFA, CSFI, Loki, FTAPE, WS-FIT, ORCHESTRA, Mendosus, FIAT, FIESTA, EXFI, NFTAPE, SockPFI and UMLinux [9-30]; the platforms covered include Windows 2000, Solaris, SunOS, HARTS, Mach, ChorusOS, AIX, PARIX, Unix, Linux and generic 32-bit processors, and the release dates range from 1988 to 2002.)

The main disadvantages of the existing methods and tools are the following:
- the existing tools are ad-hoc and were developed mostly for internal use (quality assessment); they do not support some programming languages (C#, PHP);
- most of the existing tools are platform-dependent;
- the existing tools do not provide completeness of injection with respect to all fault types and all stages of development and testing;
- most of the methods do not use quantitative measures to assess software quality.

3. UNIFIED PROCEDURES
Fault injection is used to solve the tasks of software testing and verification quality assessment, namely the calibration of testing and verification tools, quality rating, and the assessment of test cases. The solution of these problems is based on the concept of a faults profile (FP) (fig. 1). A faults profile is a complex concept consisting of two parts: a taxonomy, which defines the fault types ("What to inject?"), and fault weights, which define the quantity of faults to inject for each fault type ("How many faults to inject?"). This concept is crucial for the unified procedure of software fault injection.

Figure 1: Software faults profile

The unification of the procedure is provided by its typical structure, which is independent of the software lifecycle stage. It includes the fault forecasting procedure, injection, testing, detection and analysis of the results (fig. 2). To describe the unified procedure, the following concepts have to be introduced.

Figure 2: IDEF0 model of the fault injection procedure

Forecast Faults Profile (FFP) is the faults profile which corresponds to the faults profile taxonomy (FPT). It reflects the absolute or relative forecast of the software faults quantity by types, obtained with fault prediction models. The FFP is necessary to predict the potential faults quantity.

Injection Faults Profile (IFP) is a part of the forecast faults profile. It differs from the FFP by a smaller faults quantity per type and contains the particular faults selected for injection. FP (IFP) determines the nomenclature and quantity of faults to be injected.

Discovered Faults Profile (DFP) is the faults profile generated from the results of testing and verification. FP (DFP) is necessary to reflect all the faults which were detected as a result of testing and verification.

Own Faults Profile (OFP) is the faults profile generated from the testing and verification results which includes the faults that do not belong to FP (IFP). It is necessary to identify the faults which were not injected but were introduced by the developers.

Sown Faults Profile (SFP) is a subset of the IFP which is generated as a result of testing and verification. FP (SFP) is intended for determining the faults which were sown and detected as a result of testing and verification. Fault seeding is the process of eliminating the faults which were artificially injected and not detected during the testing.

Not Sown Faults Profile (NSFP) is a subset of FP (IFP); it includes the faults which were not detected as a result of testing and verification.

General Faults Profile (GFP) is the faults profile composed of FP (IFP) and FP (FFP). It is necessary for analyzing the results of testing and verification.

The forecasting procedure is based on models for forecasting the potential faults quantity [31-36]. The analysis of such models allowed correlating them with the software development stages and forecasting the potential faults quantity in the context of the following stages (Table 2): before development (0), system requirements gathering (1), specification (2), architecting (3), design development (4), coding (5), module testing (6), integration testing (7), system testing (8). The distribution of the general faults quantity by types is based on statistical data; in this way FP (FFP) is generated. It should be noted that the fault types, the total forecast faults quantity and its distribution by types depend on the software development stage at which the fault injection takes place.

The injection procedure performs the direct implantation of faults into the software at the i-th development stage; only the faults of the injection profile are injected. The major task of the testing and verification procedure is the detection of all faults, irrespective of whether they are injected faults or the software's own faults. The sown, own and not-sown software faults profiles are formed according to the testing results. The fault sowing is carried out after the final software testing. When the fault sowing is over, all the generated faults profiles have to be analyzed in the decision-making procedure. In the decision-making procedure it is necessary to draw conclusions about the quality of testing and verification and to define actions directed at increasing this quality. A sketch of the profile concepts introduced above is given below.
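The following is a minimal, hypothetical sketch (assumed names, not the authors' implementation) of a faults profile represented as a mapping from fault types to fault quantities, and of deriving the sown, not-sown and own profiles from the injection profile (IFP) and the discovered profile (DFP), assuming a simple per-type counting semantics.

```java
import java.util.EnumMap;
import java.util.Map;

/** Illustrative sketch of the faults-profile concepts (FP, IFP, DFP, SFP, NSFP, OFP). */
public class FaultsProfile {

    /** Fault taxonomy: "What to inject?" (types used at the coding stage). */
    public enum FaultType { INCORRECT_IF, INCORRECT_ARITHMETIC, INCORRECT_LOGIC, INITIALIZATION }

    /** Fault weights: "How many faults to inject?" per type. */
    private final Map<FaultType, Integer> quantities = new EnumMap<>(FaultType.class);

    public void put(FaultType type, int quantity) { quantities.put(type, quantity); }

    public int get(FaultType type) { return quantities.getOrDefault(type, 0); }

    /** SFP: injected faults that were also detected (per-type minimum of IFP and DFP). */
    public static FaultsProfile sown(FaultsProfile ifp, FaultsProfile dfp) {
        FaultsProfile sfp = new FaultsProfile();
        for (FaultType t : FaultType.values()) sfp.put(t, Math.min(ifp.get(t), dfp.get(t)));
        return sfp;
    }

    /** NSFP: injected faults that were not detected. */
    public static FaultsProfile notSown(FaultsProfile ifp, FaultsProfile dfp) {
        FaultsProfile nsfp = new FaultsProfile();
        for (FaultType t : FaultType.values()) nsfp.put(t, Math.max(0, ifp.get(t) - dfp.get(t)));
        return nsfp;
    }

    /** OFP: detected faults that were not injected (the software's own faults). */
    public static FaultsProfile own(FaultsProfile ifp, FaultsProfile dfp) {
        FaultsProfile ofp = new FaultsProfile();
        for (FaultType t : FaultType.values()) ofp.put(t, Math.max(0, dfp.get(t) - ifp.get(t)));
        return ofp;
    }
}
```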

4. QUALITY AND RELIABILITY MEASURES
The suggested method is oriented towards testing and verification quality assessment; it includes quality measurement of the test suites and evaluation of the calibration quality of the testing and verification tool. It is based on the unified fault injection procedure and on the discrepancy of the faults profiles.

To provide better accuracy of the testing and verification quality assessment, the following indices are introduced.

1. General testing and verification quality index (GTVQI). It is necessary to assess the quality of the test suites and of the testing and verification tool. The suggested index is calculated by the formula

GTVQI = Σ Ki * PTVQIi (summation over i = 1, ..., q),

where PTVQIi is the partial testing and verification quality index of the i-th development stage and Ki is the weighting factor for the PTVQI of the i-th development stage, defined by experts, with Σ Ki = 1. A sketch of this calculation is given below.

2. Partial testing and verification quality index (PTVQI) (for the i-th development stage). It is necessary to assess the quality of the test suites and of the testing and verification tool for each stage of software development. It is calculated by the formula

PTVQI = K1 * FQI + K2 * ITVPQiDS,

where ITVPQiDS is the index of the testing and verification process quality at the i-th development stage and FQI is the forecast quality index at the i-th development stage.
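A minimal sketch of the GTVQI calculation as a weighted sum of per-stage PTVQI values, assuming expert-defined weights that sum to one; the numeric values are illustrative, not taken from the paper.

```java
/** Illustrative calculation of GTVQI = sum over stages of Ki * PTVQIi (with sum of Ki = 1). */
public class QualityIndices {

    public static double gtvqi(double[] weights, double[] ptvqi) {
        if (weights.length != ptvqi.length) throw new IllegalArgumentException("length mismatch");
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) sum += weights[i] * ptvqi[i];
        return sum;
    }

    public static void main(String[] args) {
        // Hypothetical example: three development stages with expert weights summing to 1.
        double[] k     = {0.2, 0.5, 0.3};
        double[] ptvqi = {0.80, 0.75, 0.60};
        // 0.2*0.80 + 0.5*0.75 + 0.3*0.60 = 0.715
        System.out.printf("GTVQI = %.3f%n", gtvqi(k, ptvqi));
    }
}
```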


3. Forecast quality index (FQI) (for the i-th development stage). It is calculated by the formula FQI = K1 * FFQI + K2 * FTFQI, where FFQI is the faults forecast quality index and FTFQI is the fault types forecast quality index (by taxonomy).

4. Testing and verification process quality index (TVPQI). It is calculated by the formula TVPQI = K1 * FTVPQI + K2 * FTTVPQI, where FTVPQI is the faults testing and verification process quality index and FTTVPQI is the fault types testing and verification process quality index.

Table 2: The analysis of models for predicting the potential faults quantity in software

(Table 2 maps each model to the software lifecycle stages at which it is applicable: before development (0), system requirements gathering (1), specification (2), architecting (3), design development (4), coding (5), module testing (6), integration testing (7), system testing (8). The models analyzed are: 1) phase-oriented model [31]; 2) Rome Laboratory model RL-TR-92-52 [32]; 3) Rome Laboratory model RL-TR-92-15 [32]; 4) software faults statistical data model [31]; 5) Musa run-time model [33]; 6) Halstead model [34]; 7) IBM model [33]; 8) McCabe model [35]; 9) Akiyama model [31]; 10) Lipow model [33]; 11) Gaffney model [36].)

The variants of profile discrepancy are shown in Table 3, where FTM(DF) is the set of fault types of the discovered faults, FM(DF) is the set of discovered faults, FTM(GFP) is the set of fault types of the general faults profile, and FM(GFP) is the set of faults of the general faults profile.

Table 3: The variants of profile discrepancy

№ | Fault types (FTM(DF) vs FTM(GFP)) | Fault quantities (|FM(DF)| vs |FM(GFP)|) | Testing quality based on fault types | Testing quality based on fault quantity
1 | FTM(DF) = FTM(GFP) | |FM(DF)| = |FM(GFP)| | + | +
2 | FTM(DF) = FTM(GFP) | |FM(DF)| > |FM(GFP)| | + | +
3 | FTM(DF) = FTM(GFP) | |FM(DF)| < |FM(GFP)| | + | –
4 | FTM(DF) ⊂ FTM(GFP) | |FM(DF)| = |FM(GFP)| | – | +
5 | FTM(DF) ⊂ FTM(GFP) | |FM(DF)| < |FM(GFP)| | – | –
6 | FTM(DF) ⊃ FTM(GFP) | |FM(DF)| = |FM(GFP)| | + | +
7 | FTM(DF) ⊃ FTM(GFP) | |FM(DF)| > |FM(GFP)| | + | +
8 | FTM(DF) ∩ FTM(GFP) ≠ ∅ | |FM(DF)| = |FM(GFP)| | + | +
9 | FTM(DF) = ∅ | |FM(DF)| = 0 | – | –

A sketch of this classification in code is given after the table.
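The following is a minimal, hypothetical sketch of the Table 3 comparison: the discovered fault types are compared with those of the general faults profile by set relations (=, ⊂, ⊃, partial overlap, empty), and the fault counts are compared directly. The method names and the boolean encoding of "+"/"–" are illustrative assumptions.

```java
import java.util.Set;

/** Illustrative classification of the DFP-vs-GFP discrepancy following Table 3. */
public class ProfileDiscrepancy {

    /** Type-based verdict: positive when FTM(DF) equals or covers FTM(GFP). */
    public static boolean typeBasedQuality(Set<String> ftmDf, Set<String> ftmGfp) {
        if (ftmDf.isEmpty()) return false;                       // row 9: nothing discovered
        if (ftmDf.equals(ftmGfp)) return true;                   // rows 1-3
        if (ftmGfp.containsAll(ftmDf)) return false;             // rows 4-5: FTM(DF) is a subset of FTM(GFP)
        if (ftmDf.containsAll(ftmGfp)) return true;              // rows 6-7: FTM(DF) is a superset of FTM(GFP)
        return !java.util.Collections.disjoint(ftmDf, ftmGfp);   // row 8: partial overlap
    }

    /** Quantity-based verdict: negative when fewer faults were discovered than expected. */
    public static boolean quantityBasedQuality(int discovered, int expected) {
        return discovered > 0 && discovered >= expected;         // rows 3, 5 and 9 are negative
    }

    public static void main(String[] args) {
        Set<String> ftmDf  = Set.of("INCORRECT_IF", "INCORRECT_ARITHMETIC");
        Set<String> ftmGfp = Set.of("INCORRECT_IF", "INCORRECT_ARITHMETIC", "INCORRECT_LOGIC");
        System.out.println("type-based quality    : " + typeBasedQuality(ftmDf, ftmGfp));  // false (rows 4-5)
        System.out.println("quantity-based quality: " + quantityBasedQuality(41, 70));     // false
    }
}
```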

5. TOOLS
Software fault injection tools were developed on the basis of the testing and verification quality assessment method. They can insert faults into software at the coding stage and provide the calculation and visualization of the software testing and verification quality assessment indices.

5.1 Tool «InExp»
The tool is used to inject faults into software at the coding stage, i.e. into the listing of a program written in Java. The interface of the developed tool is shown in fig. 3 (a) and consists of the following elements:
- selection of the files into which software faults are injected (1);
- selection of the programming language used for the software development (2);
- fault injection according to the fault type (FT): "Incorrect application of IF", "Incorrect arithmetic operation", "Incorrect logic operation", "Initialization faults" (3);
- setup of the faults quantity according to the fault type (4);
- fault injection start button (5);
- fault sowing start button (6);
- information about the fault injection process (7).

Figure 3 a: «InExp» interface

Fig. 3 (b) shows the fault injection report, which contains the following components:
- report forming button (1);
- save report button (2);
- file path (3);
- initial code line (4);
- code line with the fault (5).

Figure 3 b: «InExp» report

5.2 Tool «SoftAsVer»
This tool is intended for the calculation and visualization of the software quality assessment indices (fig. 4 a). The tool supports the calculation of the following indices: GTVQI, PTVQI, FQI, TVPQI, FFQI and FTFQI. Automatic forming of a radial-metrics diagram (RMD) [37] on the basis of the calculated indices is provided, both for each software lifecycle stage and for the whole project.

The interface of the tool includes:
- the list of the software lifecycle stages (1);
- the list of the calculated indices (2);
- fields for the input of the initial data used to calculate the corresponding indices (3);
- the RMD forming and indices calculation button (4);
- the list of the calculated indices for the current development stage (5);
- the quality level range selection (6-9);
- controls for tuning the quality level ranges (10);
- the RMD for TVPQI and FQI (11);
- the RMD for the PTVQI of the current development stage (12).

Figure 4 a: «SoftAsVer» interface

It is significant that the tool supports tuning of the weighting factors, which is necessary for controlling the priority of the indices. Besides, the «SoftAsVer» tool provides PTVQI calculation and visualization for all software lifecycle stages (fig. 4 b).

Figure 4 b: «SoftAsVer» PTVQI calculation

6. EXPERIMENT
The developed technique and tools were used to assess the testing and verification quality in the context of the "QBH" software development at the coding stage. The software is intended for searching for a music track in a database by humming it into a microphone. The software structure includes a front-end part (C++), a back-end part (Perl) and a database (MySQL).

The experiment took place in the following environment: OS Windows XP, J2EE, CPU Pentium 4 (2.6 GHz), RAM 1 GB. One expert, two testers and two developers participated in the experiment. Since the injection was oriented at the coding stage, the decision was made to use the software faults taxonomy involving the fault types "Incorrect application of IF", "Incorrect arithmetic operation", "Incorrect logic operation" and "Initialization faults" [38, 39]. Akiyama's model [31] was used as the model for forecasting the potential faults quantity; the reason for using it is the availability of the software source code. The forecast faults quantity was calculated as 70 (D = 4.86 + 0.018 * 3641 = 70); a worked sketch of this calculation is given below. According to the FPT, the faults were distributed in the following way: "Incorrect application of IF" - 10, "Incorrect arithmetic operation" - 25, "Incorrect logic operation" - 20, "Initialization faults" - 15. The DFP was generated as the result of testing; it differs from the IFP. After the comparison of the DFP and the IFP the following conclusions were made:
- the fault types in both profiles are identical;
- 65 faults were sown during the testing (70 were injected);
- not only injected faults were detected during the testing, but also the software's own faults: in particular, 41 injected and 24 own faults were detected (fig. 5).
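A worked sketch of the forecast used above, assuming the model form D = 4.86 + 0.018 * L with L being the source-code size measure reported in the paper (3641); the class and method names are illustrative.

```java
/** Worked sketch of the forecast used in the experiment: D = 4.86 + 0.018 * L. */
public class AkiyamaForecast {

    /** Forecast number of potential faults for a source-code size measure L. */
    public static long forecastFaults(double sizeMeasure) {
        double d = 4.86 + 0.018 * sizeMeasure;
        return Math.round(d);
    }

    public static void main(String[] args) {
        // Size measure reported in the paper for the "QBH" project.
        double l = 3641;
        // 4.86 + 0.018 * 3641 = 70.4, rounded to the 70 faults used for injection planning.
        System.out.println("Forecast faults quantity: " + forecastFaults(l));
    }
}
```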

The conclusions about the quality of the "QBH" project testing and verification were made on the basis of the calculated index value (TVPQI = 0.75) and fig. 5: the testing and verification quality of the software at the current coding stage is "normal".

Figure 5: «QBH» testing results

7. CONCLUSIONS
The software fault injection methods and techniques were analyzed with respect to the known tools. A method for assessing testing quality and test case completeness is proposed. It is based on the unified procedures of software fault injection, a software fault injection tool for Java applications, and indices for calculating testing and verification quality. The tools are a part of the instrumental system for supporting the assessment and independent verification of critical software reliability [40]. This system was developed and installed at the Certification Centre ASU (Ukrainian State Committee of Nuclear Regulation). Extension of the software fault injection taxonomy and of the tools' applicability to C++, PHP and Perl software is planned as future work.

8. REFERENCES
[1] V. S. Kharchenko, V. Sklyar, O. M. Tarasyuk, "Risks Analysis of the Space Rockets Crashes: Evolution of Tendencies and Causes", Radio-Electronic and Computer Systems, Ukraine, No. 3, 2003, pp. 135-149. (In Ukrainian)
[2] "NASA Releases Accident Report Summary", http://lenta.ru/news/2006/05/16/dart/, 2006.
[3] International Standard ISO/IEC 25000, Software Engineering – Software Product Quality Requirements and Evaluation (SQuaRE) – Guide to SQuaRE, ISO/IEC, 2005.
[4] S. A. Vilkomir, J. P. Bowen, "Establishing Formal Regulatory Requirements for Safety-Critical Software Certification", Proceedings of AQuIS 2002: 5th International Conference on Achieving Quality In Software and SPICE 2002: 2nd International Conference on Software Process Improvement and Capability Determination, Venice, Palazzo Papafava, 13-15 March 2002, pp. 7-18.
[5] ISO/IEC 9126-2.3, Software Engineering – Product Quality, Part 2: External Metrics, JTC1, PDTR, 2000, 119 p.
[6] Mei-Chen Hsueh, T. K. Tsai, R. K. Iyer, "Fault Injection Techniques and Tools", Computer, April 1997, pp. 75-82.
[7] D. N. Card, "Managing Software Quality with Defects", 26th Annual International Computer Software and Applications Conference (COMPSAC 2002), 2002, pp. 472-475.
[8] J. Vinter, P. Folkesson, J. Karlsson, "GOOFI: Generic Object-Oriented Fault Injection Tool", Department of Computer Engineering, Chalmers University of Technology, Report No. S-412 96, Goteborg, Sweden, 1996.
[9] S. Han, K. G. Shin, H. A. Rosenberg, "DOCTOR: An Integrated Software Fault Injection Environment for Distributed Real-Time Systems", IEEE International Computer Performance and Dependability Symposium (IPDS'95), March 1995, pp. 204-213.
[10] G. A. Kanawati, N. A. Kanawati, J. A. Abraham, "FERRARI: A Tool for the Validation of System Dependability Properties", Proc. of the 22nd Int'l Symposium on Fault-Tolerant Computing (FTCS-22), July 1992, pp. 336-344.
[11] E. Jenn, J. Arlat, M. Rimen, J. Ohlsson, "Fault Injection into VHDL Models: The MEFISTO Tool", Proc. of the 24th Int'l Symposium on Fault-Tolerant Computing (FTCS-24), June 1994, pp. 66-75.
[12] K. K. Goswami, R. K. Iyer, L. Young, "DEPEND: A Simulation-Based Environment for System Level Dependability Analysis", IEEE Transactions on Computers, 46(1), 1997, pp. 1-74.
[13] W.-L. Kao, R. K. Iyer, D. Tang, "FINE: A Fault Injection and Monitoring Environment for Tracing the UNIX System Behavior under Faults", IEEE Transactions on Software Engineering, 19(11), November 1993, pp. 1105-1118.
[14] W. Kao, R. K. Iyer, "DEFINE: A Distributed Fault Injection and Monitoring Environment", IEEE Workshop on Fault-Tolerant Parallel and Distributed Systems, June 1994, pp. 45-54.
[15] J. Carreira, H. Madeira, J. G. Silva, "Xception: Software Fault Injection and Monitoring in Processor Functional Units", Proc. of the 5th IFIP Int'l Working Conference on Dependable Computing for Critical Applications (DCCA-5), September 1995, pp. 135-149.
[16] J. Carreira, H. Madeira, J. G. Silva, "Xception: A Technique for the Evaluation of Dependability in Modern Computers", IEEE Transactions on Software Engineering, 24(2), 1998, pp. 45-52.
[17] K. Echtle, M. Leu, "The EFA Fault Injector for Fault-Tolerant Distributed System Testing", IEEE Workshop on Fault-Tolerant Parallel and Distributed Systems, July 1992, pp. 28-35.
[18] J. Carreira, H. Madeira, J. G. Silva, "Assessing the Effects of Communication Faults on Parallel Applications", Proceedings of the International Computer Performance and Dependability Symposium (IPDS'95), Erlangen, Germany, April 1995, pp. 214-223.
[19] M. Cukier, R. Chandra, D. Henke, J. Pistole, "Fault Injection Based on a Partial View of the Global State of a Distributed System", Proc. of the 18th Symposium on Reliable Distributed Systems (SRDS'99), October 1999, pp. 168-177.
[20] T. K. Tsai, R. K. Iyer, "An Approach to Benchmarking of Fault-Tolerant Commercial Systems", Proc. of the 26th Int'l Symposium on Fault-Tolerant Computing (FTCS-26), June 1996, pp. 314-323.
[21] N. Looker, J. Xu, "Assessing the Dependability of OGSA Middleware by Fault Injection", Proceedings of the Symposium on Reliable Distributed Systems, 2003, pp. 293-302.
[22] S. Dawson, F. Jahanian, T. Mitton, "ORCHESTRA: A Fault Injection Environment for Distributed Systems", Proc. of the 26th Int'l Symposium on Fault-Tolerant Computing (FTCS-26), Sendai, Japan, June 1996, pp. 404-414.
[23] Z. Segall, D. Vrsalovic, D. Siewiorek, D. Yaskin, "FIAT: Fault Injection Based Automated Testing Environment", Proc. of the 18th Int'l Symposium on Fault-Tolerant Computing (FTCS-18), June 1988, pp. 102-107.
[24] M. Blaschka, "FIESTA: A Framework for Schema Evolution in Multidimensional Information Systems", Proc. of the 6th CAiSE Doctoral Consortium, Heidelberg, Germany, June 1999, pp. 95-103.
[25] D. T. Stott, "Automated Fault-Injection Based Dependability Analysis of Distributed Computer Systems", Report for Preliminary Exam, Science Laboratory, University of Illinois, February 2000.
[26] A. Benso, P. Prinetto, M. Rebaudengo, "EXFI: A Low-Cost Fault Injection System for Embedded Microprocessor-Based Boards", ACM Transactions on Design Automation of Electronic Systems (TODAES), 3(4), 1998, pp. 626-634.
[27] S. Dawson, F. Jahanian, T. Mitton, "A Software Fault Injection Tool on Real-Time", 16th IEEE Real-Time Systems Symposium (RTSS'95), 1995, p. 130.
[28] X. Li, R. P. Martin, K. Nagaraja, "Mendosus: A SAN-Based Fault-Injection Test-Bed for Construction of Highly Available Network Services", Proceedings of the 1st Workshop on Novel Uses of System Area Networks (SAN-1), February 2002, pp. 121-127.
[29] H.-J. Hoxer, K. Buchacker, V. Sieh, "UMLinux - A Tool for Testing a Linux System's Fault Tolerance", LinuxTag 2002, Karlsruhe, Germany, June 2002, pp. 63-97.
[30] M. Telles, Y. Hsieh, The Science of Debugging, Moscow: KUDITS-OBRAZ, 2003. (In Russian)
[31] M. R. Lyu (Ed.), Handbook of Software Reliability Engineering, IEEE Computer Society Press, 1995.
[32] Y. Huang, C. Kintala, N. Kolettis, N. Fulton, "Software Rejuvenation: Analysis, Module and Applications", Proceedings of the 1995 International Symposium on Fault-Tolerant Computing, June 1995, pp. 381-390.
[33] R. Polonnikov, A. Nikandrov, Methods of Software Dependability Assessment, St. Petersburg: Polytechnika, 1992. (In Russian)
[34] M. H. Halstead, The Beginning of the Science of Programs, Moscow: Finances and Statistics, 1981. (In Russian)
[35] T. J. McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, 2(4), 1976, pp. 308-320.
[36] J. R. Gaffney, "Estimating the Number of Faults in Code", IEEE Transactions on Software Engineering, 10(4), 1984, pp. 357-362.
[37] V. S. Kharchenko, O. M. Tarasyuk, "The Use of Radial-Metrics Diagrams to Assess Software Characteristics", Extensible Integrated Informational and Computer-Based Systems, Kharkiv: National Aerospace University, Vol. 18, 2003, pp. 123-133. (In Ukrainian)
[38] G. G. Preckshot, J. A. Scott, "A Proposed Acceptance Process for Commercial Off-the-Shelf (COTS) Software in Reactor Applications", Lawrence Livermore National Laboratory, 1995, 123 p.
[39] K. El Emam, I. Wieczorek, "The Repeatability of Code Defect Classifications", Fraunhofer Institute for Experimental Software Engineering, ISERN Technical Report ISERN-98-09, 1998, 21 p.
[40] V. Kharchenko, V. Sklyar, B. Konorev, G. Chertkov, Yu. Alexeev, S. Zasuha, L. Semenov, Space Systems Software Quality Assessment and Ensuring, Kharkiv: National Space Agency of Ukraine, 2007, 243 p.
