2012 IEEE/ACIS 11th International Conference on Computer and Information Science

Swarm Intelligence-Based Test Data Generation for Structural Testing

Chengying Mao 1,2, Xinxin Yu 1, Jifu Chen 1

1. School of Software and Communication Engineering, Jiangxi University of Finance and Economics, 330013 Nanchang, China
2. The State Key Laboratory of Software Engineering, Wuhan University, 430072 Wuhan, China

Corresponding Email: [email protected]

Abstract — Automated generation of test data has always been a challenging problem in the area of software testing. Recently, meta-heuristic search (MHS) techniques have proven to be a powerful tool for tackling this difficulty. In this paper, we introduce an up-to-date search technique, particle swarm optimization (PSO), to address it. After the basic idea of PSO is presented, the overall framework of PSO-based test data generation is discussed. Here, the inputs of the program under test are encoded into particles. During the search process, the PSO algorithm is used to generate test inputs with the highest possible coverage rate. Once a set of test inputs is produced, the test driver seeds them into the program, runs it, and collects coverage information simultaneously. Then, the value of the fitness function for branch coverage can be calculated from this information, which directs the optimization in the next iteration. In order to validate our method, five real-world programs are used for experimental analysis. The results show that the PSO-based method outperforms other algorithms such as GA both in the coverage achieved by the test data and in convergence speed.

Keywords — Test data generation; PSO; branch coverage; fitness function; convergence speed

I. INTRODUCTION

With the rapid development of information technology, software has become pervasive in our lives. At the same time, its quality has attracted much attention from software customers and developers. In the past, quite a few incidents have shown that software failures may lead to significant economic loss or threats to life, such as the explosion of the Ariane-V rocket [1] and the BP Deepwater Horizon disaster [2]. Therefore, how to ensure and improve the quality of software systems is a challenging problem in the current information society.

Nearly three decades of experience have proved that software testing is an effective approach to ensuring better software quality by revealing bugs. In general, existing software testing methods can be classified into two kinds: functional (black-box) testing and structural (white-box) testing [3]. Due to its higher fault-exposure capability, structural testing has been used as an important fault-revealing method during software development. However, generating a test data set that satisfies a specific coverage criterion is not an easy task for structural testing. In recent years, search techniques have been employed to find a favorable set of test cases, an approach known as search-based software testing, and this way of generating test data has received widespread study. The general idea behind search-based test data generation is to choose a set of test cases from a search space, with the test adequacy criterion usually expressed as a fitness function [4]. The basic procedure is to generate a test suite that meets a test adequacy criterion. When a specific coverage criterion is selected, the search should attempt to produce a test suite that covers all construct elements mentioned in the criterion. Meanwhile, testing is a time-consuming activity in the whole software development process. Thus, the size of the test suite should be as small as possible in order to reduce testing time.

Among the existing search methods, meta-heuristic search (MHS) techniques such as simulated annealing (SA) [5] and genetic algorithms (GA) [6,7] are the most popular and have been widely adopted for generating test cases. Although they can produce test data with reasonable fault-revealing ability [8,9], they fail to produce test data quickly due to their slow convergence speed. Recently, as a swarm intelligence technique, particle swarm optimization (PSO) [10,11] has become a hot research topic due to its convenient implementation and faster convergence speed. In this paper, we study how to apply particle swarm optimization to the test data generation problem in software structural testing. In addition, some real-world programs are used for comparative analysis, and the experimental results show that the test data generated by the swarm intelligence technique perform better than those produced by traditional search techniques such as GA.

The paper is structured as follows. In the next section, we briefly introduce the basic procedure of search-based software testing, along with related work on this problem. Then, the swarm intelligence technique is introduced in Section 3. In Section 4, the framework and some technical details of PSO-based test data generation are discussed. The experimental and contrastive analysis is presented in Section 5. Finally, concluding remarks are given in Section 6.

II. SEARCH-BASED SOFTWARE TESTING

A. Search-Based Test Data Generation

Test input data generation is the most difficult task in software testing activities. In essence, it is the process of sampling, from the input space, the input data that can reveal potential faults in the program. In recent years, incorporating search techniques into test data generation has been recognized as a rational solution in the field of software testing, an approach also known as search-based software testing (SBST).

Figure 1. The Basic Framework for Search-Based Test Data Generation.

As shown in Figure 1, the basic process of search-based test data generation can be described as follows. At the initial stage, the search algorithm generates a basic test suite using a random strategy. Then test engineers or tools seed test inputs from the test suite into the program under test (PUT) and run it. During this process, the instrumented code monitors the PUT and produces information about execution results and traces. Based on this trace information, the metric for a specific coverage criterion can be calculated automatically. Subsequently, the metrics are used to update the fitness value for the pre-set coverage criterion. Finally, the test data generation algorithm adjusts the search direction of the next iteration according to the fitness information. The whole search process does not terminate until the generated test suite satisfies the pre-set coverage criterion.

There are three key issues in the above framework: trace collection, fitness function construction, and search algorithm selection. In general, execution traces are collected by code instrumentation, which can be implemented through compile-time analysis. While constructing the fitness function, testers should determine which kind of coverage will be adopted; experience tells us that branch coverage is a cost-effective criterion. Considering its easy implementation and high convergence speed, we adopt PSO to produce test inputs.
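To make the control flow of this framework concrete, the following C++ sketch outlines one possible realization of the feedback loop, with random search standing in for the meta-heuristic. The names (runInstrumented, the branch ids, the stopping budget) are illustrative assumptions, not the implementation used in this work.

```cpp
// Illustrative sketch of the Figure 1 feedback loop (random search used as a
// placeholder for the meta-heuristic); all names are assumptions.
#include <cstdlib>
#include <iostream>
#include <set>
#include <vector>

// Stand-in for the instrumented program under test: it receives one input
// vector and reports which branch ids were traversed during execution.
std::set<int> runInstrumented(const std::vector<int>& input) {
    std::set<int> covered;
    covered.insert(input[0] > 0 ? 1 : 2);          // branch pair 1/2
    covered.insert(input[0] == input[1] ? 3 : 4);  // branch pair 3/4
    return covered;
}

int main() {
    const int totalBranches = 4;
    const int maxIterations = 1000;
    std::set<int> coveredSoFar;
    std::vector<std::vector<int>> testSuite;

    for (int it = 0; it < maxIterations &&
                     (int)coveredSoFar.size() < totalBranches; ++it) {
        // 1. The search algorithm proposes a candidate test input.
        std::vector<int> candidate = { std::rand() % 200 - 100,
                                       std::rand() % 200 - 100 };
        // 2. The test driver executes the instrumented PUT and collects traces.
        std::set<int> trace = runInstrumented(candidate);
        // 3. The coverage metric is updated and fed back to the search.
        size_t before = coveredSoFar.size();
        coveredSoFar.insert(trace.begin(), trace.end());
        if (coveredSoFar.size() > before)
            testSuite.push_back(candidate);  // keep inputs that add coverage
    }
    std::cout << "covered " << coveredSoFar.size() << "/" << totalBranches
              << " branches with " << testSuite.size() << " test cases\n";
    return 0;
}
```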

B. Existing Methods

Test data generation is the key problem of automated software testing, and it has attracted extensive attention from researchers in the past decades. Here, we mainly focus on generation methods based on meta-heuristic search algorithms.

In the 1990s, the genetic algorithm was adapted to generate test data. Jones [6] and Pargas [7] investigated the use of GA for automated generation of test data for branch coverage. Experiments on some small programs showed that GA usually outperformed random search significantly. In recent years, Harman and McMinn et al. [8,9] performed empirical studies on GA-based test data generation for large-scale programs, and validated its effectiveness against other meta-heuristic search algorithms. Another well-known search algorithm is simulated annealing [12], which solves complex optimization problems based on the idea of neighborhood search. Tracey et al. proposed a framework to generate test data based on the simulated annealing algorithm [13]. Their method can incorporate a number of testing criteria, for both functional and non-functional properties, but their experimental analysis is not sufficient. In addition, Cohen et al. adopted simulated annealing to generate test data for combinatorial testing [14], but their method is mainly intended for functional testing.

Although the above two are classical search algorithms, their convergence speed is not very impressive. PSO, proposed by Kennedy and Eberhart [10,11], can overcome this weak point. Windisch et al. first applied a variant of PSO (CL-PSO) to generate structural test data [15], but some experiments have shown that the convergence speed of CL-PSO can be worse than that of the basic PSO. Moreover, their experiments mainly focused on artificial test objects. Recently, Chen et al. used PSO to generate test suites for pair-wise testing [16], which, however, belongs to the combinatorial testing problem. In this paper, we address how to utilize the basic PSO algorithm to generate test data for branch coverage testing, and we also use some real-world programs to perform a comparative analysis.

III. SWARM INTELLIGENCE TECHNOLOGY

Swarm intelligence, represented by PSO, is a relatively new technique. It emulates the swarm behavior of insects, herding animals, flocking birds, and schooling fish, which search for food in a collaborative manner. PSO was first introduced in 1995 by Kennedy and Eberhart [10]. Although PSO shares some similarities with genetic search techniques, it does not use evolution operators such as crossover and mutation. Each member of the swarm, called a particle, adjusts its search pattern by learning from its own experience and the experiences of other members. During the iteration process, each particle maintains its current position, its present velocity, and its personal best position. The iterative operation leads to a stochastic manipulation of velocities and flying directions according to the best experiences of the swarm, in order to search for the global optimum in the solution space. In general, the personal best position of particle i is denoted by pbest_i, while the global best position of the entire population is called gbest. Suppose the population size is s in the D-dimensional search space; a particle represents a potential solution. The velocity V_i^d and position X_i^d of the d-th dimension of the i-th particle are updated by the following two formulas.

V_i^d(t) = V_i^d(t-1) + c_1 \cdot r_{1i}^d \cdot (pbest_i^d - X_i^d(t-1)) + c_2 \cdot r_{2i}^d \cdot (gbest^d - X_i^d(t-1))    (1)

X_i^d(t) = X_i^d(t-1) + V_i^d(t)    (2)

where X_i = (X_i^1, X_i^2, ..., X_i^D) is the position of the i-th particle, and V_i = (V_i^1, V_i^2, ..., V_i^D) represents the velocity of particle i. pbest_i^d is the personal best position found by particle i in dimension d, and gbest^d is the global best position in dimension d. The parameters c_1 and c_2 are the acceleration constants reflecting the weighting of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions, respectively. r_1 and r_2 are two random numbers in the range [0, 1]. Moreover, a particle's velocity in each dimension is clamped to a maximum magnitude V_max.

The concept of inertia weight was introduced in reference [11] to balance the global and local search abilities. The inertia weight w controls the impact of the previous history on the new velocity. Thus, the velocity update formula is modified as below.

V_i^d(t) = w \cdot V_i^d(t-1) + c_1 \cdot r_{1i}^d \cdot (pbest_i^d - X_i^d(t-1)) + c_2 \cdot r_{2i}^d \cdot (gbest^d - X_i^d(t-1))    (3)

Generally speaking, a large inertia weight is more appropriate for global search, and a small inertia weight facilitates local search.
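For concreteness, the velocity and position updates of equations (1)-(3) can be written as a short routine such as the C++ fragment below. The data layout and the name updateParticle are assumptions made for illustration; the fragment is meant to be driven by an outer optimization loop like the one sketched after the TDGen_PSO listing in the next section.

```cpp
// Sketch of the PSO update rules with inertia weight and velocity clamping;
// structure and names are illustrative, not the paper's exact code.
#include <cstdlib>
#include <vector>

struct Particle {
    std::vector<double> x;      // current position X_i
    std::vector<double> v;      // current velocity V_i
    std::vector<double> pbest;  // personal best position
};

static double rand01() { return std::rand() / (double)RAND_MAX; }

// One iteration of equations (3) and (2) for a single particle.
void updateParticle(Particle& p, const std::vector<double>& gbest,
                    double w, double c1, double c2, double vmax) {
    for (size_t d = 0; d < p.x.size(); ++d) {
        double r1 = rand01(), r2 = rand01();
        // velocity update with inertia weight (equation (3))
        p.v[d] = w * p.v[d]
               + c1 * r1 * (p.pbest[d] - p.x[d])
               + c2 * r2 * (gbest[d]   - p.x[d]);
        // clamp the velocity to the maximum magnitude Vmax
        if (p.v[d] >  vmax) p.v[d] =  vmax;
        if (p.v[d] < -vmax) p.v[d] = -vmax;
        // position update (equation (2))
        p.x[d] += p.v[d];
    }
}
```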

IV. PSO-BASED TEST DATA GENERATION

A. The Overall Algorithm

The framework of search-based test data generation is essentially a cooperative process between a meta-heuristic search algorithm and dynamic program execution. Once the search algorithm produces a test suite during the search process, the test driver seeds it into the program under test (PUT) and executes the program to gather coverage information. Based on this information, the fitness value for the corresponding coverage criterion can be calculated. Then, the fitness value is used to adjust the search direction so as to find a test suite that achieves the highest possible coverage ratio. Therefore, the key problem is how to realize the interaction between the basic search algorithm and coverage information extraction.

In detail, the overall algorithm of PSO-based test data generation (denoted as TDGen_PSO) can be described as below. At the initialization stage, we encode the argument list arg = (a_1, a_2, ..., a_m) into an m-dimension position vector. For a given structural coverage criterion C, such as branch coverage, the fitness function f() for PSO is designed. Meanwhile, the initial values of f(pbest_i) and f(gbest) are set to 0. In order to calculate the fitness value of each particle (i.e., test case), we instrument the program under test (P) to gather coverage information about its construct elements. In addition, random values are used to initialize the velocity vector V_i^d and position vector X_i^d.

In the algorithm body, the procedure between lines 08 and 14 determines the current position X_i^d of particle i in each dimension d. In line 15, each particle vector X_i in the whole population is decoded into a test case. Then, the fitness f(X_i) of each test case is evaluated (line 16). Based on the fitness value of each particle (i.e., test case), the personal best position pbest_i and the global best position gbest can be updated (lines 17 to 22). The whole particle evolution process is controlled by the termination condition in line 05. For the testing problem, the termination condition covers the following two cases: (1) all construct elements have been covered, or (2) the maximum evolution generation max_gen is reached.

Algorithm. TDGen_PSO
Input: (1) the program under test P, where arg = (a_1, a_2, ..., a_m) is the argument list of P; (2) the structural coverage criterion C; (3) the PSO parameters, i.e., n, w, c_1, c_2 and V_max; (4) the maximum evolution generation max_gen.
Output: test data set TS satisfying the criterion C.
Stage 1: initialization stage
01 encode argument list arg into an m-dimension position vector;
02 design the fitness function f according to the coverage criterion C;
03 instrument program P for gathering structural coverage information;
04 initialize the velocity vector V_i^d and position vector X_i^d;
Stage 2: test suite optimization stage
05 while (the termination condition is not reached)
07   for each particle i in the population of size n
08     for each dimension d (1 ≤ d ≤ m) of particle i
09       calculate the current velocity V_i^d of particle i in dimension d;
10       if V_i^d exceeds the boundary V_max then
11         adjust it within the boundary;
12       endif
13       calculate the current position X_i^d;
14     endfor
15     decode vector X_i into a test case tc_i ∈ TS;
16     execute the program with test case tc_i and collect the coverage information to calculate the fitness f(X_i);
17     if f(X_i) > f(pbest_i) then
18       pbest_i = X_i;
19     endif
20     if f(X_i) > f(gbest) then
21       gbest = X_i;
22     endif
23   endfor
24 endwhile
25 return TS = {tc_i};
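The listing above is pseudocode; a compact executable approximation of its two stages for a tiny two-argument subject might look as follows. The subject program, the fitness, and the parameter values are illustrative assumptions rather than the paper's experimental setup; only the loop structure mirrors TDGen_PSO.

```cpp
// Executable approximation of TDGen_PSO for a toy 2-argument subject; the
// fitness and parameters are assumptions, not the authors' configuration.
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

const int    N = 30, MAX_GEN = 100, DIM = 2;
const double W = 0.8, C1 = 2.05, C2 = 2.05, VMAX = 24.0;

static double rnd(double lo, double hi) {
    return lo + (hi - lo) * std::rand() / RAND_MAX;
}

// Toy fitness: branch distance toward one "hard" branch (a == b) of a
// triangleType-like subject; it peaks at 100 when the branch is reached.
double fitness(const std::vector<double>& x) {
    double dist = std::fabs(x[0] - x[1]);
    return 1.0 / (0.01 + dist);
}

int main() {
    std::vector<std::vector<double>> x(N), v(N), pbest(N);
    std::vector<double> pbestFit(N, 0.0), gbest(DIM, 0.0);
    double gbestFit = 0.0;

    // Stage 1: random initialization of positions and velocities.
    for (int i = 0; i < N; ++i) {
        for (int d = 0; d < DIM; ++d) {
            x[i].push_back(rnd(-100.0, 100.0));
            v[i].push_back(rnd(-VMAX, VMAX));
        }
        pbest[i] = x[i];
    }

    // Stage 2: optimize until the target branch is covered or max_gen reached.
    for (int gen = 0; gen < MAX_GEN && gbestFit < 100.0; ++gen) {
        for (int i = 0; i < N; ++i) {
            for (int d = 0; d < DIM; ++d) {          // lines 08-14 of TDGen_PSO
                v[i][d] = W * v[i][d]
                        + C1 * rnd(0, 1) * (pbest[i][d] - x[i][d])
                        + C2 * rnd(0, 1) * (gbest[d] - x[i][d]);
                if (v[i][d] >  VMAX) v[i][d] =  VMAX;
                if (v[i][d] < -VMAX) v[i][d] = -VMAX;
                x[i][d] += v[i][d];
            }
            double f = fitness(x[i]);                // lines 15-16: run and score
            if (f > pbestFit[i]) { pbestFit[i] = f; pbest[i] = x[i]; }  // 17-19
            if (f > gbestFit)    { gbestFit = f;    gbest = x[i]; }     // 20-22
        }
    }
    std::cout << "best fitness reached: " << gbestFit << "\n";
    return 0;
}
```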

B. Instrumentation and Fitness Function

In order to ensure that the search algorithm finds a solution satisfying a specific coverage criterion, coverage information plays an important role in adjusting the search direction. In structural testing, construct elements such as statements, branches, paths, and definition-use pairs can all be treated as coverage objects. Here, we mainly consider the widely used branch coverage as the search objective; that is, the final test inputs should traverse all branches in the program code. Regarding implementation details, the branch coverage information is collected via a count array: for each branch, a corresponding array element records whether it has been traversed or not.

More importantly, the fitness function plays a key role in population optimization, because the search algorithm knows nothing about the problem except the fitness information during the search process. In general, fitness reflects how good the solution represented by a particle is in relation to the global optimum, and its value is calculated for each particle in each population by a fitness function. In most cases, fitness is measured by decoding the position vector of the program's arguments and feeding it to an objective function. Whichever kind of program element (statement, branch, or path) is selected as the coverage object, the branches in the program code have an important impact on the search process. In our method, we adopt the concept of branch distance to construct the final fitness function, following Korel's and Tracey's work [5,8,17]. The so-called branch distance is the deviation of a given branch predicate when the variables are assigned the input values. In Table 1, the branch functions for different predicates are listed in the second column. The value k (k > 0) refers to a constant which is always added if the term is not true.

TABLE 1. THE BRANCH FUNCTIONS FOR SEVERAL KINDS OF BRANCH PREDICATES

Branch Predicate | Branch (Distance) Function f(bch_i)
boolean          | if true then 0 else k
¬a               | negation is propagated over a
a = b            | if abs(a - b) = 0 then 0 else abs(a - b) + k
a ≠ b            | if abs(a - b) ≠ 0 then 0 else k
a < b            | if a - b < 0 then 0 else (a - b) + k
a ≤ b            | if a - b ≤ 0 then 0 else (a - b) + k
a > b            | if b - a < 0 then 0 else (b - a) + k
a ≥ b            | if b - a ≤ 0 then 0 else (b - a) + k
a and b          | f(a) + f(b)
a or b           | min(f(a), f(b))

Since we consider branch coverage in search-based test data generation, a branch function is constructed for each branch in the program according to the guidelines of Table 1. Suppose a program has s branches; then the fitness function of the whole program is defined by comprehensively considering the fitness of each branch, as expressed below.

fitness = 1 / (0.01 + \sum_{i=1}^{s} w_i \cdot f(bch_i))    (4)

where f(bch_i) is the branch distance function for the i-th branch in the program, and w_i is the corresponding weight of this branch. Obviously, \sum_{i=1}^{s} w_i = 1.

We also conducted an experiment on weight allocation for branches. The results showed that assigning the same weight to all branches does not necessarily bring better search results. A more rational strategy is to assign weights according to how difficult each branch is to reach: in general, the harder a branch is to reach, the higher its weight. Taking the classical triangle classification problem as an example, we assign the weight 0.8 to the equilateral-triangle branch, 0.17 to the isosceles case, and 0.01 to the scalene, non-triangle, and illegal-input cases, respectively.
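As a sketch of how Table 1 and equation (4) can be realized in code, the following C++ fragment computes a few relational branch distances and combines them into a weighted fitness value. The function names, the chosen predicates, and the example weight vector are illustrative assumptions rather than the authors' exact implementation.

```cpp
// Illustrative branch-distance and fitness computation following Table 1 and
// equation (4); names and example weights are assumptions.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

const double K = 0.1;  // punishment constant k added when a predicate is false

// Branch distances for several predicate kinds from Table 1.
double distEq (double a, double b) { return std::fabs(a - b) == 0 ? 0 : std::fabs(a - b) + K; }
double distNeq(double a, double b) { return std::fabs(a - b) != 0 ? 0 : K; }
double distLt (double a, double b) { return a - b < 0 ? 0 : (a - b) + K; }
double distAnd(double fa, double fb) { return fa + fb; }
double distOr (double fa, double fb) { return std::min(fa, fb); }

// Equation (4): fitness = 1 / (0.01 + sum_i w_i * f(bch_i)).
double fitness(const std::vector<double>& branchDist,
               const std::vector<double>& weight) {
    double sum = 0.0;
    for (size_t i = 0; i < branchDist.size(); ++i)
        sum += weight[i] * branchDist[i];
    return 1.0 / (0.01 + sum);
}

int main() {
    // Toy triangle-style input (a, b, c) and three example branch distances.
    double a = 3, b = 4, c = 4;
    std::vector<double> d = {
        distAnd(distEq(a, b), distEq(b, c)),  // equilateral: a == b && b == c
        distOr(distEq(a, b), distEq(b, c)),   // isosceles:   a == b || b == c
        distLt(a + b, c)                      // example predicate: a + b < c
    };
    std::vector<double> w = { 0.8, 0.17, 0.03 };  // example weights, sum to 1
    std::cout << "fitness = " << fitness(d, w) << "\n";
    return 0;
}
```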

V. EXPERIMENTAL ANALYSIS

A. Experiment Setup

In order to validate the effectiveness of PSO-based test data generation, we performed an experimental analysis on some real-world programs, which are well-known benchmark programs widely adopted in other researchers' work [18,19]. The details of these programs are shown in Table 2. The first test program, triangleType, receives three integer numbers as input and decides what kind of triangle they represent: equilateral, isosceles, scalene, or not a triangle; the version used in our experiment is our own implementation written according to this specification (an illustrative sketch follows below). The second program, gcd, computes the greatest common divisor of its two integer arguments. The calday program computes the day of the week for a date expressed by three integer arguments. These two programs are adopted from references [18] and [19], and are available at http://tracer.lcc.uma.es/problems/testing/index.html. The fourth program, isValidDate, checks whether a date given as year, month, and day is valid or not; this program is also implemented by us. The final test program, cal, calculates the number of days between two dates and is adapted from the original version in [20].
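Since the source of the authors' triangleType implementation is not given here, the sketch below is only a plausible reconstruction from the textual specification above; its branch structure and naming are assumptions, not the actual benchmark code.

```cpp
// Hypothetical triangleType written purely from the specification above;
// it is not the authors' actual benchmark source.
#include <iostream>

// 0: not a triangle, 1: scalene, 2: isosceles, 3: equilateral
int triangleType(int a, int b, int c) {
    if (a <= 0 || b <= 0 || c <= 0)              // illegal input
        return 0;
    if (a + b <= c || a + c <= b || b + c <= a)  // violates triangle inequality
        return 0;
    if (a == b && b == c)                        // all sides equal
        return 3;
    if (a == b || b == c || a == c)              // exactly two sides equal
        return 2;
    return 1;                                    // all sides different
}

int main() {
    std::cout << triangleType(3, 4, 5) << " "    // 1: scalene
              << triangleType(5, 5, 3) << " "    // 2: isosceles
              << triangleType(4, 4, 4) << "\n";  // 3: equilateral
    return 0;
}
```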

TABLE 2. THE BENCHMARK PROGRAMS USED FOR EXPERIMENTAL ANALYSIS

Program      | Branch Number | LOC | Description
triangleType | 5             | 31  | type classification for a triangle
gcd          | 5             | 38  | greatest common divisor
calday       | 11            | 72  | calculate the day of the week
isValidDate  | 16            | 59  | check whether a date is valid or not
cal          | 18            | 53  | compute the days between two dates

The experiments were performed under 32-bit MS Windows 7 on a 2.4 GHz Pentium 4 machine with 2 GB of memory. The algorithms are implemented in C++ and run on the MS Visual Studio .NET 2010 platform.

In this section, we mainly compare PSO and GA in terms of their ability to generate test data. The parameters of the experiment and of the two algorithms are set as shown in Table 3.

TABLE 3. THE PARAMETER SETTING FOR EXPERIMENT AND ALGORITHMS

Item          | Parameter                          | Value
Common Issues | population size                    | 30
              | punishment factor k (cf. Table 1)  | 0.1
              | number of max generations          | 100
              | number of runs                     | 100
GA            | selection strategy                 | gambling roulette
              | crossover probability              | 0.90
              | mutation probability               | 0.05
PSO           | inertia weight w                   | varies per program, within the range [0.2, 1]
              | acceleration constants c_1 and c_2 | c_1 = c_2 = 2.05
              | maximum velocity V_max             | set according to the input space of the specific program, e.g. V_max = 24 for triangleType

B. Results Analysis

First, we conducted a comparative analysis between the GA-based test data generation algorithm (denoted as TDGen_GA) and our TDGen_PSO; the results are shown in Table 4. We chose the following four metrics for comparison: (1) average coverage, i.e., the average branch coverage achieved by the generated test inputs over 100 runs; (2) success rate, i.e., the probability that all branches are covered by the generated test data; (3) average (convergence) generations, i.e., the average number of evolutionary generations needed to achieve all-branch coverage; and (4) average time, i.e., the average execution time (in milliseconds) needed to achieve all-branch coverage.

For the first two metrics (avg cov. and succ. rate), TDGen_GA attains 100% only for the gcd program, while the avg cov. and succ. rate of TDGen_PSO are higher than those of TDGen_GA for the other four programs. This result shows that TDGen_PSO can ensure, with larger probability than TDGen_GA, that the generated test data cover all branches. Regarding the average (convergence) generations, TDGen_PSO shows a marked improvement: especially for programs triangleType and calday, the avg gens of TDGen_PSO is about one third of that of TDGen_GA. More importantly, the biggest advantage of TDGen_PSO lies in the execution time: the time for generating test data is shortened by nearly 100 times. From these results we find that TDGen_PSO clearly improves on the GA-based method in both effectiveness and efficiency.

Meanwhile, we also examined the convergence processes of the two algorithms on program triangleType. As shown in Figure 2, TDGen_PSO reaches full branch coverage much more rapidly than TDGen_GA. Within the first four generations, the difference between them is not significant. From generation 5 to 20, the coverage growth rate of TDGen_PSO improves greatly over that of TDGen_GA. Thereafter, the coverage of both algorithms becomes steady, but TDGen_PSO outperforms TDGen_GA by about five percentage points.

Figure 2. The Branch Coverage Accumulated with Generations.

TABLE 4. EXPERIMENTAL RESULTS OF TWO ALGORITHMS FOR TEST DATA GENERATION

Program      | TDGen_GA: avg cov. / succ. rate / avg gens / avg time (ms) | TDGen_PSO: avg cov. / succ. rate / avg gens / avg time (ms)
triangleType | 0.95   / 0.76 / 15.54 / 14.55 | 0.9994 / 0.9980 / 5.3550  / 0.1850
gcd          | 1.00   / 1.00 / 0.84  / 0.95  | 1.0000 / 1.0000 / 0.5350  / 0.0360
calday       | 0.9631 / 0.66 / 35.11 / 35.32 | 1.0000 / 1.0000 / 10.3690 / 0.3500
isValidDate  | 0.9987 / 0.98 / 19.94 / 17.22 | 1.0000 / 1.0000 / 12.0150 / 0.8020
cal          | 0.9928 / 0.99 / 15.23 / 11.78 | 1.0000 / 1.0000 / 8.3270  / 0.4970

In order to compare with the work of Windisch et al., we also implemented comprehensive learning particle swarm optimization (CL-PSO) to generate test data for the five subject programs; the results are shown in Table 5. In terms of the coverage achieved by the generated test data, CL-PSO is as good as TDGen_PSO for programs gcd and isValidDate, but is somewhat weaker for the other three programs. The obvious discrepancy lies in the avg gens and avg time metrics: for all benchmark programs, the convergence speed of the CL-PSO-based method is much slower than that of our TDGen_PSO algorithm. On the other hand, although the avg gens of CL-PSO is larger than that of TDGen_GA in most situations, its avg time is still much shorter, about one tenth of that of the TDGen_GA method.

TABLE 5. THE EXPERIMENTAL RESULTS FOR CL-PSO-BASED TEST DATA GENERATION

Program      | avg cov. | succ. rate | avg gens | avg time (ms)
triangleType | 0.9988   | 0.9960     | 33.4892  | 1.0790
gcd          | 1.0000   | 1.0000     | 2.6600   | 0.1360
calday       | 0.9735   | 0.6850     | 27.3263  | 1.2630
isValidDate  | 1.0000   | 1.0000     | 33.4100  | 1.5300
cal          | 0.9685   | 0.8570     | 51.9832  | 2.6170

Therefore, we can conclude that the basic PSO algorithm is more suitable than CL-PSO for generating test data satisfying the branch coverage criterion.

VI. CONCLUSIONS

How to automatically generate a test suite satisfying a specific coverage criterion has been a challenge for the past thirty years. Although many attempts have been made, it is still an open problem. In recent years, a promising solution, the search-based method, has attracted much attention. In this paper, we introduce a recent heuristic search technique, particle swarm optimization, to address this problem. For the test data generation problem, a fitness function constructed from program coverage elements is treated as the optimization objective; here, branch coverage is adopted as the coverage criterion. Throughout the process, the PSO algorithm is used to produce program input data with fitness values as high as possible. After the coverage information is collected by seeding the input data into the program, the search direction is adjusted in the next iteration.

In order to compare our approach (TDGen_PSO) with existing work, five real-world programs were chosen for experimental analysis. The test data generated via TDGen_PSO attain a higher coverage rate than the GA-based algorithm for nearly all programs, and also perform better than CL-PSO in most cases. Regarding convergence generations and execution time, TDGen_PSO greatly improves on both other algorithms: it is about 100 times faster than GA, and about 10 times faster than CL-PSO.

Of course, some issues deserve in-depth investigation in future work. For instance, a sensitivity analysis should be performed to find the best parameter combination. In addition, more reasonable forms of the fitness function are also a valuable research topic.

ACKNOWLEDGMENT

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 60803046 and 61063013, the Natural Science Foundation of Jiangxi Province under Grant No. 2010GZS0044, the Science Foundation of Jiangxi Educational Committee under Grant No. GJJ10433, the Open Foundation of State Key Laboratory of Software Engineering under Grant No. SKLSE2010-08-23, and the Program for Outstanding Young Academic Talent in Jiangxi University of Finance and Economics.

REFERENCES

[1] T. Cargill, "Exception Handling: A False Sense of Security", C++ Report, 1994, vol. 6, no. 9, also available at http://www.awprofessional.com/content/images/020163371x/supplements/Exception_Handling_Article.html.
[2] D. Shafer and P. A. Laplante, "The BP Oil Spill: Could Software be a Culprit?", IT Professional, 2010, vol. 12, no. 5, pp. 6-9.
[3] A. P. Mathur, Foundations of Software Testing, Addison-Wesley Professional, 2008.
[4] M. Harman, "Software Engineering Meets Evolutionary Computation", IEEE Computer, 2011, vol. 44, no. 10, pp. 31-39.
[5] N. Tracey, J. Clark, K. Mander, and J. McDermid, "An Automated Framework for Structural Test-Data Generation", Proc. of the 13th International Conference on Automated Software Engineering (ASE'98), Hawaii, 1998, pp. 285-288.
[6] B. F. Jones, H.-H. Sthamer, and D. E. Eyres, "Automatic Structural Testing Using Genetic Algorithms", Software Engineering Journal, 1996, vol. 11, no. 5, pp. 299-306.
[7] R. P. Pargas, M. J. Harrold, and R. Peck, "Test-Data Generation Using Genetic Algorithms", Software Testing, Verification and Reliability, 1999, vol. 9, no. 4, pp. 263-282.
[8] P. McMinn, "Search-Based Software Test Data Generation: A Survey", Software Testing, Verification and Reliability, 2004, vol. 14, pp. 105-156.
[9] M. Harman and P. McMinn, "A Theoretical and Empirical Study of Search-Based Testing: Local, Global, and Hybrid Search", IEEE Transactions on Software Engineering, 2010, vol. 36, no. 2, pp. 226-247.
[10] J. Kennedy and R. C. Eberhart, "Particle Swarm Optimization", Proc. of the IEEE International Conference on Neural Networks (ICNN'95), Perth, Australia, 1995, pp. 1942-1948.
[11] Y. Shi and R. C. Eberhart, "A Modified Particle Swarm Optimizer", Proc. of the 1998 IEEE International Conference on Evolutionary Computation (ICEC'98), 1998, pp. 69-73.
[12] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by Simulated Annealing", Science, 1983, vol. 220, no. 4598, pp. 671-680.
[13] N. Tracey, J. Clark, and K. Mander, "Automated Program Flaw Finding using Simulated Annealing", ACM SIGSOFT Software Engineering Notes, 1998, vol. 23, no. 2, pp. 73-81.
[14] M. B. Cohen, C. J. Colbourn, and A. C. H. Ling, "Augmenting Simulated Annealing to Build Interaction Test Suites", Proc. of the 14th International Symposium on Software Reliability Engineering (ISSRE'03), 2003, pp. 394-405.
[15] A. Windisch, S. Wappler, and J. Wegener, "Applying Particle Swarm Optimization to Software Testing", Proc. of the 9th Annual Conference on Genetic and Evolutionary Computation (GECCO'07), 2007, pp. 1121-1128.
[16] X. Chen, Q. Gu, J. Qi, and D. Chen, "Applying Particle Swarm Optimization to Pairwise Testing", Proc. of the 34th Annual IEEE Computer Software and Applications Conference (COMPSAC'10), 2010, pp. 107-116.
[17] B. Korel, "Automated Software Test Data Generation", IEEE Transactions on Software Engineering, 1990, vol. 16, no. 8, pp. 870-879.
[18] A. Bouchachia, "An Immune Genetic Algorithm for Software Test Data Generation", Proc. of the 7th International Conference on Hybrid Intelligent Systems (HIS'07), 2007, pp. 84-89.
[19] E. Alba and F. Chicano, "Observations in Using Parallel and Sequential Evolutionary Algorithms for Automatic Software Testing", Computers and Operations Research, 2008, vol. 35, pp. 3161-3183.
[20] P. Ammann and J. Offutt, Introduction to Software Testing, Cambridge University Press, 2008, pp. 149-150.
