Implementation of New Approaches of Software Testing Methodology for Finding Errors and Its Analysis Techniques

Er. Manoj Kumar Sharma

5. Chapter wise details of Proposed Research Work

5.1 Introduction

Software error analysis includes the techniques used to locate, analyze, and estimate errors and data relating to errors. It covers the use of error detection techniques, analysis of single errors, data collection, metrics, statistical process control techniques, error prediction models, and reliability models.

Error detection techniques are techniques of software development, software quality assurance (SQA), and software verification, validation and testing used to locate anomalies in software products. Once an anomaly is detected, analysis is performed to determine whether the anomaly is an actual error and, if so, to identify precisely the nature and cause of the error so that it can be properly resolved. Often, emphasis is placed only on resolving the single error. However, the single error could be representative of other similar errors which originated from the same incorrect assumptions, or it could indicate the presence of serious problems in the development process. Correcting only the single error and not addressing the underlying problems may cause further complications later in the lifecycle.

Thorough error analysis includes the collection of error data, which enables the use of metrics and statistical process control (SPC) techniques. Metrics are used to assess a product or process directly, while SPC techniques are used to locate major problems in the development process and product by observing trends. Error data can be collected over an entire project and stored in an organizational database, for use with the current project or future projects. As an example, SPC techniques may reveal that a large number of errors are related to design; after further investigation, it is discovered that many designers are making similar errors. It may then be concluded that the design methodology is inappropriate for the particular application, or that designers have not been adequately trained. Proper adjustments can then be made to the development process, which benefit not only the current project but future projects as well.
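To make the SPC idea concrete, the sketch below applies one common SPC technique, a c-chart, to per-module defect counts. Python, the module names, and the counts are illustrative choices of mine, not something this proposal prescribes.

```python
# Minimal sketch of a c-chart, one common SPC technique, applied to
# defect counts per inspected module. All data here is hypothetical.
import math

defects_per_module = {
    "parser": 4, "scheduler": 7, "ui": 3, "network": 21,
    "storage": 5, "auth": 6, "logging": 2, "api": 8,
}

counts = list(defects_per_module.values())
c_bar = sum(counts) / len(counts)             # average defect count (center line)
ucl = c_bar + 3 * math.sqrt(c_bar)            # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # lower control limit, floored at 0

print(f"center={c_bar:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for module, c in defects_per_module.items():
    if c > ucl:
        # An out-of-control point is the trend signal described above: a part
        # of the process (methodology, training) worth investigating.
        print(f"{module}: {c} defects -> investigate (out of control)")
```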

The collection of error data also supports the use of reliability models to estimate the probability that a system will operate without failures in a specified environment for a given amount of time. A vendor may use software reliability estimation techniques to make changes in the testing process, and a customer may use these techniques in deciding whether to accept a product.

The error data collected by a vendor may also be useful to auditors. Auditors could request that vendors submit error data, with the understanding that confidentiality will be maintained and that recriminations will not be made. Data collected from vendors could be used by the auditors to establish a database, providing a baseline for comparison when performing evaluations of high integrity software. Data from past projects would give auditors guidance on what to look for, by identifying common types of errors or other features related to errors. For example, it could be determined whether the error rates of the project under evaluation are within acceptable bounds compared with those of past projects.

Ideally, software development processes should be so advanced that no errors enter a software system during development. Current practices can only help to reduce the number of errors, not prevent all of them. Even if the best practices were available, it would be risky to assume that no errors enter a system, especially a system requiring high integrity. The use of error analysis allows for early error detection and correction. When an error made early in the lifecycle goes undetected, problems and costs can accrue rapidly: an incorrectly stated requirement may lead to incorrect assumptions in the design, which in turn cause subsequent errors in the code. It may be difficult to catch all errors during testing, since exhaustive testing, that is, testing of the software under all circumstances with all possible input sets, is not possible [MYERS]. Therefore, even a critical error may remain undetected and be delivered along with the final product. This undetected error may subsequently cause a system failure, which results in costs not only to fix the error, but also for the system failure itself (e.g., plant shutdown, loss of life).

Sometimes the cost of fixing an error may drive a decision not to fix it. This is particularly true if the error is found late in the lifecycle. For example, when an error has caused a failure during system test and the location of the error is found to be in the requirements or design, correcting that error can be expensive; sometimes the error is allowed to remain and the fix is deferred until the next version of the software. Persons responsible for these decisions may justify them simply on the basis of cost, or on an analysis which shows that the error, even when exposed, will not cause a critical failure. Decision makers must have confidence in the analyses used to identify the impact of the error, especially for software used in high integrity systems.
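Returning to the reliability models mentioned at the start of this passage, the following is a minimal sketch of estimating a failure rate from test data. The constant failure-rate (exponential) assumption and the failure data are mine; real reliability growth models are considerably more elaborate.

```python
# Sketch: exponential reliability model R(t) = exp(-lambda * t), with the
# failure rate estimated from observed inter-failure times during test.
# The failure data below is hypothetical.
import math

interfailure_hours = [120.0, 95.0, 210.0, 160.0, 300.0]  # hypothetical test data

lam = len(interfailure_hours) / sum(interfailure_hours)  # MLE of the failure rate

def reliability(t_hours: float) -> float:
    """Probability of failure-free operation for t_hours."""
    return math.exp(-lam * t_hours)

print(f"estimated failure rate = {lam:.5f} failures/hour")
print(f"P(no failure in 24 h)  = {reliability(24):.3f}")
```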

A strategy for avoiding the high costs of fixing errors late in the lifecycle is to prevent the situation from occurring altogether, by detecting and correcting errors as early as possible. Studies have shown that it is much more expensive to correct software requirements deficiencies late in the development effort than it is to have correct requirements from the beginning [STSC]. In fact, the cost to correct a defect found late in the lifecycle may be more than one hundred times the cost to detect and correct the problem when the defect was born [DEMMY]. In addition to the lower cost of fixing individual errors, another cost benefit of performing error analysis early in development is that the error propagation rate will be lower, resulting in fewer errors to correct in later phases. Thus, while error analysis at all phases is important, there is no better time, in terms of cost benefit, to conduct error analysis than during the software requirements phase.

Software testing is a set of activities conducted with the intent of finding errors in software. It also verifies and validates that the program works correctly, and it analyzes the software to find bugs. Software testing is not used just for finding and fixing bugs; it also ensures that the system works according to its specifications. Software testing is a series of processes designed to make sure that computer code does what it was designed to do; it is a destructive process of trying to find errors. The main purpose of testing can be quality assurance, reliability estimation, validation, or verification. Some widely stated principles of software testing are: [6][7][8]

• The better the software works, the more efficiently it can be tested.
• The better the software can be controlled, the more the testing can be automated and optimized.
• The fewer the changes, the fewer the disruptions to testing.
• A successful test is one that uncovers an undiscovered error (a small sketch illustrating this principle follows the classification list below).
• Testing is a process to identify the correctness and completeness of the software.
• The general objective of software testing is to affirm the quality of a software system by systematically exercising the software in carefully controlled circumstances.

Classified by purpose, software testing can be divided into: [4]

1. Correctness Testing
2. Performance Testing
3. Reliability Testing
4. Security Testing
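As a minimal illustration of the principle that a successful test is one that uncovers an undiscovered error, the sketch below tests a deliberately buggy median() function. The function and tests are hypothetical, and the even-length test is expected to fail: that failure is exactly what makes it a successful test in Myers' sense.

```python
# Illustration: a "successful" test is one that exposes a latent bug.
import unittest

def median(values):
    s = sorted(values)
    return s[len(s) // 2]  # BUG: for even-length input this should be the
                           # average of the two middle elements

class MedianTest(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)      # passes; does not expose the bug

    def test_even_length_uncovers_error(self):
        # Fails against the buggy implementation, uncovering the error.
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

if __name__ == "__main__":
    unittest.main()
```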

5.2 Research objectives and approach

1. To study reliability testing, which discovers the failures of a system so that they can be removed before the system is deployed.
2. To study load testing, which checks whether the system can perform well for a specified number of users.
3. To carry out a comparative study of software testing techniques.
4. To apply the principle that a successful test is one that exposes a previously concealed error.
5. To use testing as a method of recognizing the accuracy and completeness of the software.


5.3 Review of Work

Software testing is a process, or a series of processes, designed to verify that computer code does what it was designed to do. According to the ANSI/IEEE 1059 standard [1, 2], testing can be defined as "a process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item." Another, more pointed, definition is this: testing is the process of executing a program with the intent of finding errors [3]. The concept of testing is as old as coding and has changed over time. Gelperin and Hetzel [4] proposed a testing process model based on associated publishing events:

The Debugging Process Model (before 1956): During this period, the concepts of program checkout, debugging, and testing were not clearly distinguished; testing and debugging were used interchangeably. At that time, Alan Turing wrote two articles addressing related questions and defined an operational test for intelligent behavior.

The Demonstration Process Model (1957-78): During this period, debugging and testing became clearly distinguishable, including efforts to detect, locate, identify, and correct faults. Charles Baker emphasized program checkout with two goals: make sure the program runs, and make sure it solves the problem.

The Destruction Process Model (1979-82): Myers wrote the book The Art of Software Testing, which discussed software analysis, review, and testing techniques. Software testing was described for the first time as "the process of executing a program with the intent of finding errors".

The Evaluation Process Model (1983-87): The Institute for Computer Sciences and Technology of the National Bureau of Standards published a guideline in 1983, specifically targeted at federal information processing systems (FIPS), for the validation, verification, and testing of computer software. It described a methodology that combines analysis, review, and test activities to provide product evaluation during the software lifecycle, and it asserted that a carefully chosen set of VV&T techniques can help to ensure the development and maintenance of quality software.

The Prevention Process Model (since 1988): Beizer wrote the book Software Testing Techniques, which contains one of the most complete catalogs of testing techniques, and observed that "the act of designing tests is one of the most effective bug preventers known."

Manpreet Kaur and Rupinder Singh (2014) [1] observed that software testing is important for reducing errors, maintenance effort, and overall software cost. One of the major problems in software testing is how to obtain a suitable set of test cases for a software system; many techniques are now available for generating test cases, and the chosen set should ensure maximum effectiveness with the least possible number of test cases. The main goal of their paper is to analyze and compare testing techniques in order to identify the best one for finding errors in software.

Gelperin and Hetzel [4] presented the evolution of software test engineering, traced by examining changes in the testing process model and in the level of professionalism over the years. Two phase models (the demonstration and destruction models) and two lifecycle models (the evaluation and prevention models) are used to describe the growth of software testing. They also explain the prevention-oriented testing methodology in terms of these models.

Richardson and O'Malley [5] proposed one of the earliest approaches focusing on utilizing specifications in selecting test cases. They proposed approaches to specification-based testing by extending a wide range of implementation-based testing techniques to formal specification languages, and they evaluated these approaches for the Anna and Larch specification languages.

Rapps and Weyuker [6] presented a family of program test data selection criteria derived from data flow analysis. The authors argued that path selection criteria that examine only the control flow of a program are inadequate. A definition/use graph is introduced and compared with the program graph of the same program, and the interrelationships between the data flow criteria are discussed.

Harrold and Rothermel [7] presented a new approach to class testing that supports data flow testing of the data flow interactions in a class. They also describe class testing and the application of data flow testing to classes.

Claessen and Hughes [8] developed a lightweight, easy to use tool named QuickCheck, in which a combination of two old techniques (specifications as oracles and random testing) works extremely well for Haskell programs. They present a number of case studies in which the tool was used successfully, and they also point out some pitfalls to avoid.
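QuickCheck targets Haskell, but the idea of random testing against a specification oracle carries over to other languages. The sketch below uses Python's hypothesis library, an assumption of mine rather than something the reviewed work mentions, to state a round-trip property for a small run-length encoder.

```python
# QuickCheck-style property-based testing via Python's `hypothesis` library.
from hypothesis import given, strategies as st

def run_length_encode(s: str):
    """Encode s as a list of (character, run length) pairs."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def run_length_decode(pairs) -> str:
    return "".join(ch * n for ch, n in pairs)

@given(st.text())  # hypothesis generates random strings and shrinks failures
def test_decode_inverts_encode(s):
    # The property acts as the specification oracle: decode must invert encode.
    assert run_length_decode(run_length_encode(s)) == s

if __name__ == "__main__":
    test_decode_inverts_encode()  # runs the property over generated inputs
    print("round-trip property held")
```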

Vilkomir et al. [9] presented a new concept, the tolerance of a testing criterion, which describes the ability of every test set satisfying the criterion to give a similar level of effectiveness. Their evaluation shows a low level of tolerance for criteria such as DC, CC, D/CC, and FPC and, in contrast, a high level of tolerance for MC/DC and RC/DC.

Ntafos [10] presented comparisons of random testing, partition testing, and proportional partition testing. The author showed that guaranteeing that partition testing has at least as high a probability of detecting a failure comes at the expense of decreasing its relative advantage over random testing.

Madeyski et al. [11] presented the concept of using a set of second order mutants, applying several different algorithms to large open source software. They show that second order mutation techniques can significantly improve the efficiency of mutation testing, at some cost in testing strength.

Jia and Harman [12] presented a detailed survey and analysis of trends and results in mutation testing. Their findings provide evidence to support the claim that the field of mutation testing is now reaching a mature state.

Graves et al. [13] studied regression test selection techniques, which aim to reduce the expense of regression testing. They performed an experiment to examine the relative costs and benefits of several regression test selection techniques; the experiment considered five techniques for reusing test cases, focusing on their relative abilities.

Juristo et al. [14] analyzed the maturity level of the knowledge about testing techniques by examining existing empirical studies of testing techniques. They classified the testing techniques and chose parameters on which to compare them.

Whittaker [15] presented a four-phase approach to determining how bugs escape from testing, offering testers a group of related problems that they can solve during each phase.

Duran and Ntafos [16] compared random testing to partition testing, using hypothetical input domains, partitions, and distributions of errors, and compared the probabilities that each technique would detect an error. Their conclusion was that, although random testing did not always compare favorably to partition testing, it is a cost-effective alternative.

Hamlet and Taylor [17] presented more extensive simulations and reached more precise conclusions about the relationship between partition probability, failure rate, and effectiveness.
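The flavor of these random-versus-partition comparisons can be reproduced with a toy Monte Carlo simulation. The domain, partitions, and failure region below are hypothetical, not taken from the cited studies.

```python
# Toy Monte Carlo comparison of random vs. partition testing, in the spirit
# of Duran and Ntafos' simulations. All data here is hypothetical.
import random

DOMAIN = range(10_000)
FAILING = set(range(4_000, 4_040))                          # 40 failure-causing inputs
PARTITIONS = [range(i, i + 1_000) for i in range(0, 10_000, 1_000)]

def detects_random(n_tests: int) -> bool:
    # n_tests inputs drawn uniformly from the whole domain.
    return any(random.choice(DOMAIN) in FAILING for _ in range(n_tests))

def detects_partition() -> bool:
    # One random test per partition (10 tests total).
    return any(random.choice(p) in FAILING for p in PARTITIONS)

TRIALS = 20_000
p_rand = sum(detects_random(10) for _ in range(TRIALS)) / TRIALS
p_part = sum(detects_partition() for _ in range(TRIALS)) / TRIALS
print(f"P(detect | 10 random tests)  = {p_rand:.3f}")
print(f"P(detect | 1 test/partition) = {p_part:.3f}")
```

With the failures concentrated in a single partition, the two strategies come out close, echoing the conclusion that random testing is a cost-effective alternative; changing how failures are spread across partitions changes the outcome, which is exactly the sensitivity the later simulation studies explored.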

Li et al. [18] presented a comparison of four unit-level software testing criteria: mutation testing, prime path coverage (PP), edge-pair coverage (EP), and all-uses testing (AU). The criteria were compared on two bases: first, the number of seeded faults found and, second, the number of tests needed to satisfy each criterion. They concluded that mutation was the most effective criterion.
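A minimal sketch of the mutation criterion used in such comparisons follows. The mutants here are written by hand; a real mutation tool would generate them automatically from the source.

```python
# Minimal sketch of mutation analysis: run a test suite against hand-made
# mutants; a mutant is "killed" if some test observes a wrong result.
def original(a, b):
    return a + b

def mutant_minus(a, b):
    return a - b          # arithmetic-operator mutant: + replaced by -

def mutant_abs(a, b):
    return abs(a + b)     # behaves like the original on every test below

TESTS = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]  # hypothetical test suite

def killed(mutant) -> bool:
    return any(mutant(*args) != expected for args, expected in TESTS)

mutants = [mutant_minus, mutant_abs]
for m in mutants:
    print(f"{m.__name__}: {'killed' if killed(m) else 'survived'}")
score = sum(killed(m) for m in mutants) / len(mutants)
print(f"mutation score = {score:.2f}")  # surviving mutants reveal weak tests
```

Here mutant_abs survives because no test expects a negative sum, which is precisely the signal mutation testing gives: the suite needs a stronger test case.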


5.4 Current work and preliminary results

Security Testing: Security testing makes sure that only authorized personnel can access the program, and that they can access only the functions available to their security level. Security testing of any developed system (or system under development) is about finding the major loopholes and weaknesses of the system which could allow an unauthorized user to cause major harm to it. [1][2] Security testing is very helpful to the tester for finding and fixing problems. It ensures that the system will run for a long time without major problems, and that the systems used by an organization are secured against unauthorized attack. In this way, security testing is beneficial to the organization in all aspects. [1][2]

Security testing covers five major concepts (a minimal test sketch for the authorization concept follows the list):

• Confidentiality: ensuring that information is not disclosed to any party other than the intended recipient.

• Integrity: allowing the receiver to determine that the information received is correct.

• Authentication: maintaining the authentication mechanisms of the system; WEP, WPA, and WPA2 are several forms of authentication used in wireless networks.

• Availability: information is kept available to authorized personnel whenever they need it, and information services are ready for use whenever expected.

• Authorization: ensuring that only an authorized user can access the information or a particular service; access control is an example of authorization.
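A minimal sketch of a test for the authorization concept; the role hierarchy and the access rule below are hypothetical.

```python
# Sketch: testing authorization. Verify that users below the required
# security level are denied, and that unknown roles fail closed.
import unittest

LEVELS = {"guest": 0, "staff": 1, "admin": 2}  # hypothetical role hierarchy

def can_access(role: str, required_level: int) -> bool:
    # Unknown roles map to -1 so that access is denied by default.
    return LEVELS.get(role, -1) >= required_level

class AuthorizationTest(unittest.TestCase):
    def test_admin_allowed(self):
        self.assertTrue(can_access("admin", 2))

    def test_guest_denied_admin_function(self):
        self.assertFalse(can_access("guest", 2))

    def test_unknown_role_denied(self):
        # Fail closed: unrecognized roles must never be granted access.
        self.assertFalse(can_access("intruder", 0))

if __name__ == "__main__":
    unittest.main()
```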

5.5 Work plan and implications


Performance Testing is an independent discipline that involves all the phases of the mainstream testing lifecycle: strategy, plan, design, execution, analysis, and reporting. This testing is conducted to evaluate the compliance of a system or component with specified performance requirements. [2] Evaluating the performance of a software system includes resource usage, throughput, and stimulus-response time. Through performance testing we can measure the performance characteristics of an application. One of the most important objectives of performance testing is to maintain low latency for a website, high throughput, and low utilization. [5]
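A sketch of the kind of measurement involved: response times and throughput for a stand-in operation. The workload and request count are hypothetical; a real performance test would drive the actual system under load.

```python
# Sketch: measuring latency and throughput over a fixed number of requests.
# `handle_request` is a stand-in for the real transaction under test.
import statistics
import time

def handle_request():
    sum(i * i for i in range(10_000))  # hypothetical workload

N = 200
latencies = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    handle_request()
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"throughput  : {N / elapsed:.1f} req/s")
print(f"mean latency: {statistics.mean(latencies) * 1000:.2f} ms")
print(f"p95 latency : {sorted(latencies)[int(0.95 * N)] * 1000:.2f} ms")
```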

Fig. 1. Two types of performance testing: load testing and stress testing.

Some of the main goals of performance testing are: [5]

• Measuring the response time of end-to-end transactions.
• Measuring the network delay between client and server.
• Monitoring system resources under various loads.

Some of the common mistakes made during performance testing are: [5]

• Ignoring errors in input.
• Overly complex analysis.
• Erroneous analysis.
• An inappropriate level of detail.
• Ignoring significant factors.
• Incorrect performance metrics.
• Overlooking important parameters.
• An unsystematic approach.

There are seven phases in the performance testing process: [5]

• Phase 1 – Requirement Study
• Phase 2 – Test Plan
• Phase 3 – Test Design
• Phase 4 – Scripting
• Phase 5 – Test Execution
• Phase 6 – Test Analysis
• Phase 7 – Preparation of Report

6. Need of the Proposed Research Work

Software testing is an important technique for improving and measuring the quality of a software system, but it is not really possible to find all the errors in a program. So the fundamental question arises: which testing strategy should we adopt? Below I describe some of the most prevalent and commonly used strategies of software testing, classified by purpose: [5]

1. Correctness Testing: used to test the right behavior of the system; it is further divided into black box, white box, and grey box testing techniques (grey box combines the features of black box and white box testing). A short sketch contrasting the black box and white box styles follows this list.

2. Performance Testing: an independent discipline that involves all the phases of the mainstream testing lifecycle, i.e., strategy, plan, design, execution, analysis, and reporting. Performance testing is further divided into load testing and stress testing.

3. Security Auditing and Scanning: security auditing includes direct inspection of the operating system and of the system developed on it. In security scanning, the auditor scans the operating system and then tries to find weaknesses in the operating system and network.

4. Vulnerability Scanning: performed by vulnerability scanning software, which scans the program for all known vulnerabilities.

5. Risk Assessment: a method in which the auditors analyze the risk involved with a system and the probability of loss arising from that risk; it is analyzed through interviews, discussions, etc.

6. Posture Assessment and Security Testing: helps the organization know where it stands with respect to security, by combining the features of security scanning, risk assessment, and ethical hacking.

7. Penetration Testing: an effective way to find potential loopholes in a system; it is done by a tester who forcibly enters the application under test, using a combination of loopholes that the application has unknowingly left open.

8. Ethical Hacking: involves a large number of penetration tests on the system under test, with the aim of stopping the forced entry of any external elements into the system under security testing.
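The sketch below contrasts the black box and white box styles of correctness testing on one hypothetical function: the black box tests are derived from the specification alone, while the white box test is chosen by reading the implementation and forcing a particular branch.

```python
# Contrasting correctness-testing styles on a hypothetical function.
def shipping_fee(weight_kg: float) -> float:
    """Spec: free up to 1 kg, 5.0 up to 10 kg, 9.0 above 10 kg."""
    if weight_kg <= 1.0:
        return 0.0
    elif weight_kg <= 10.0:
        return 5.0
    return 9.0

# Black box: one test per specified behavior, boundaries included,
# written without looking at the implementation.
assert shipping_fee(0.5) == 0.0
assert shipping_fee(1.0) == 0.0     # boundary from the specification
assert shipping_fee(10.0) == 5.0    # boundary from the specification
assert shipping_fee(25.0) == 9.0

# White box: derived from the code, forcing the final `return 9.0`
# branch with a value just past the internal decision point.
assert shipping_fee(10.0001) == 9.0
print("all correctness tests passed")
```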

7. Bibliography

[1]. Manpreet Kaur and Rupinder Singh, "A Review of Software Testing Techniques", International Journal of Electronic and Electrical Engineering, ISSN 0974-2174, Volume 7, Number 5 (2014), pp. 463-474, International Research Publication House, http://www.irphouse.com

[2]. A. P. Mathur, "Foundations of Software Testing", Pearson/Addison-Wesley, 2008.

[3]. IEEE Standard 829-1998, "IEEE Standard for Software Test Documentation", pp. 1-52, IEEE Computer Society, 1998.

[4]. D. Gelperin and B. Hetzel, "The Growth of Software Testing", Communications of the ACM, Volume 31, Issue 6, June 1988, pp. 687-695.

[5]. D. Richardson, O. O'Malley and C. Tittle, "Approaches to Specification-Based Testing", ACM SIGSOFT Software Engineering Notes, Volume 14, Issue 9, 1989, pp. 86-96.

[6]. S. Rapps and E. J. Weyuker, "Selecting Software Test Data Using Data Flow Information", IEEE Transactions on Software Engineering, April 1985, pp. 367-375.

[7]. M. J. Harrold and G. Rothermel, "Performing Data Flow Testing on Classes", ACM SIGSOFT Software Engineering Notes, Volume 19, Issue 5, 1994.

[8]. K. Claessen and J. Hughes, "QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs", ACM SIGPLAN Notices, Volume 46, Issue 4, 2011, pp. 53-64.

[9]. S. A. Vilkomir, K. Kapoor and J. P. Bowen, "Tolerance of Control-Flow Testing Criteria", Proceedings of the 27th Annual International Computer Software and Applications Conference (COMPSAC 2003), IEEE, 2003.

[10]. S. C. Ntafos, "On Comparisons of Random, Partition, and Proportional Partition Testing", IEEE Transactions on Software Engineering, Volume 27, Issue 10, 2001, pp. 949-960.

[11]. L. Madeyski et al., "Overcoming the Equivalent Mutant Problem: A Systematic Literature Review and a Comparative Experiment of Second Order Mutation", 2013.

[12]. Y. Jia and M. Harman, "An Analysis and Survey of the Development of Mutation Testing", IEEE Transactions on Software Engineering, Volume 37, Issue 5, 2011, pp. 649-678.

[13]. T. L. Graves et al., "An Empirical Study of Regression Test Selection Techniques", Proceedings of the 20th International Conference on Software Engineering, IEEE Computer Society, 1998.

[14]. N. Juristo, A. M. Moreno and S. Vegas, "Reviewing 25 Years of Testing Technique Experiments", Empirical Software Engineering, Volume 9, Issues 1-2, 2004, pp. 7-44.

[15]. J. A. Whittaker, "What is Software Testing? And Why Is It So Hard?", IEEE Software, January 2000, pp. 70-79.

[16]. J. W. Duran and S. C. Ntafos, "An Evaluation of Random Testing", IEEE Transactions on Software Engineering, Volume 10, Issue 4, 1984, pp. 438-444.

[17]. D. Hamlet and R. Taylor, "Partition Testing Does Not Inspire Confidence", IEEE Transactions on Software Engineering, Volume 16, Issue 12, 1990, pp. 1402-1411.

[18]. N. Li, U. Praphamontripong and J. Offutt, "An Experimental Comparison of Four Unit Test Criteria: Mutation, Edge-Pair, All-Uses and Prime Path Coverage", Proceedings of the International Conference on Software Testing, Verification and Validation Workshops (ICSTW'09), IEEE, 2009.

[19]. R. Hamlet, "Random Testing", Encyclopedia of Software Engineering, 1994.

[20]. L. Luo, "Software Testing Techniques: Technology Maturation and Research Strategy", Class Report, 2001.

[21]. I. Jovanovic, "Software Testing Methods and Techniques", 2008.

[22]. I. Bluemke and K. Kulesza, "A Comparison of Dataflow and Mutation Testing of Java Methods", Dependable Computer Systems, Springer Berlin Heidelberg, 2011, pp. 17-30.

[23]. S. Yoo and M. Harman, "Regression Testing Minimization, Selection and Prioritization: A Survey", Software Testing, Verification and Reliability, Volume 22, Issue 2, 2012, pp. 67-120.

[24]. S. Biswas et al., "Regression Test Selection Techniques: A Survey", Informatica, Volume 35, Issue 3, 2011.