Published in IET Software. Received on 28th October 2010. Revised on 7th January 2011. doi: 10.1049/iet-sen.2010.0139

ISSN 1751-8806

Investigation on performance testing and evaluation of PReWebD: a .NET technique for implementing web application

M. Kalita¹  T. Bezboruah¹,²

¹ Department of Electronics & Communication Technology, Gauhati University, Guwahati 781014, Assam, India
² Multidisciplinary Laboratory, The Abdus Salam International Centre for Theoretical Physics, Strada Costiera, 11, 34151 Trieste, Italy
E-mail: [email protected]; [email protected]

Abstract: A prototype research web application based on the Visual Studio platform has been developed, with .NET as the framework, Internet Information Server (IIS) (Version 5.1) as the web server and Microsoft Structured Query Language (SQL) Server (Version 2005) as the database server, to study the performance and evaluation of the .NET technique used for developing web applications. The authors call it PReWebD. As performance is one of the most important attributes of a web application, testing of PReWebD has been carried out using Mercury LoadRunner (Version 8.0) to evaluate the performance, for validation, and to study other attributes such as scalability and reliability. The performance depends on parameters such as hits/s, response time, throughput and errors/s. These parameters of PReWebD are tested at different stress levels. Statistical testing and analysis are done to assess the stability, reliability and quality of the application. Here the authors report in detail the architecture, the testing procedure, the results of the performance testing and the results of the statistical analysis of the recorded data of PReWebD.

1 Introduction

The explosive growth of web applications and web services has changed the way information is exchanged in government, corporate, educational and research institutions. Productivity and operational efficiency have increased manifold with the development of sophisticated yet simple web applications. With such development, the responsibility on the developer's side is considerable. Although applications should provide easy-to-use features, they must at the same time be able to handle a large number of concurrent users. As most business is now conducted through the web, it is crucial that web applications be tested. In the software development life cycle, testing is one of the most important activities and arguably the least understood. The factors that provide information about the presence of bugs in a software system may be the following: (i) the user executed untested code, (ii) the order of the statements changed during testing, (iii) the user applied illegal inputs and (iv) the user's operating environment may be faulty. It is the software tester who tests and makes the application worth visiting. To perform and execute testing, the tester must be familiar with the system, the inputs and the way they are combined, and the operating environment of the system [1].

In general, testing is performed in the following four phases: (a) modelling the system's environment, (b) selecting test scenarios, (c) running and evaluating test scenarios and (d) analysing test results. With the massive growth of the web user population, it is highly important to check and measure the reliability and stability of an application. One good candidate for effective web quality assurance is statistical testing. This technique requires the collection of a large number of datasets describing various parameters of the application [2]. In view of the above, we have designed, developed, implemented and tested a web application in the Instrumentation & Informatics Research Laboratory, Department of Electronics and Communication Technology, Gauhati University, Assam, India. The application has been tested for 10, 20, 30, 40, 50, 75, 100 and 125 virtual users, and the results of the performance testing and the statistical analysis are presented in detail here.

2 Software testing

Software testing is a process to evaluate the efficiency of a system. In software development, testing is used at key checkpoints in the overall process to determine whether objectives are being met. When the design of a web application is completed, coding follows. The code is then tested at the unit or module level by the programmer. At early or later stages, the service may also be tested for usability. At the system level, the developer or an independent reviewer may subject the service to one or more performance tests [3].

A web application's Quality of Service (QoS) is measured in terms of response time, throughput and availability. One of the best ways to measure an application's QoS is to conduct load testing. Performance testing is performed, from one perspective, to determine how fast some aspects of a system perform under a particular workload. It can serve to validate and verify scalability, reliability and usage of resources. It can also demonstrate whether the system meets performance criteria, compare two systems to find out which performs better, or diagnose the parts of the software that contribute most to poor performance [4]. After verifying the correctness of the code, load and stress testing is performed to measure the performance and scalability of the application under heavy load. After analysing the results obtained during this phase, it is possible to determine bottlenecks, memory leakage or performance problems related to the database layer.

Load testing is done to gain an overall insight into the system. It models the behaviour of users in the real world. The load generator mimics browser behaviour, and each emulated browser is called a virtual user. A load test is valid only if the virtual users' behaviour has characteristics similar to those of actual users [5]. In load testing, the system is subjected to a reasonable load, in terms of the number of virtual users, to find out the performance of the system, mainly in terms of response time. In this case the load is varied from zero to the maximum that the system can handle in a decent manner.

Stress testing is performed to determine the stability of a given system. It involves testing beyond normal operational capacity. Stress testing is performed to uncover memory leakage, bandwidth limits, transactional problems, resource locking, hardware limitations and synchronisation problems that occur when an application is loaded beyond the limits determined by the performance statistics [6-8].

Different testing tools are used to simulate the scenario and to evaluate the performance parameters of the application before it is actually deployed. They create stress on the system by simulating a large number of virtual users. They then collect the system performance parameters in the form of various graphs, which can later be used to analyse the behaviour of the application. These tools give a comprehensive analysis of application performance and support all major enterprise platforms.

The testing of PReWebD is done to determine the responsiveness, throughput, reliability and scalability of the combination of the .NET technology with Internet Information Server (IIS) under a given workload. The operations performed are insertion, updating, deletion and search. The testing is object based in the sense that the whole application has been divided into different objects [9].

3 Multi-tier architecture of PReWebD

The architecture of PReWebD is shown in Fig. 1. It is a three-tier architecture, which has the advantages of separation, independence and re-usability [10]. The layers of this architecture are the presentation layer (PL), the business layer (BL) and the data layer (DL).

Fig. 1 Architecture of PReWebD

The PL contains the user interface and presentation code. Since the .NET platform is used for the design, the PL includes Active Server Pages Extended (ASPX) pages, user controls, server controls etc. The server controls include Label, TextBox, Button, DropDownList, FileUpload, GridView, FormView etc. The validation controls include RequiredFieldValidator, RangeValidator and RegularExpressionValidator.

The BL is the most essential layer in the architecture. This layer is further divided into two sub-layers: (a) the business logic layer (BLL) and (b) the data access layer (DAL). The BLL holds the required business logic functions, validations and calculations related to the data. The DAL is responsible for accessing data and forwarding it to the BLL. The BL acts as a mediator between the PL and the DL. In ASP.NET, the SqlClient provider acts as the DAL.

The DL manages the physical storage and retrieval of data. It receives data from the BL and sends it to the database, and vice versa. Database queries have been written to access the data from the database and to perform operations such as insert, update and delete. In our present work, SQL Server Express 2005 acts as the DL.
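To make the layering concrete, the following minimal C# sketch shows how a DAL built on the SqlClient provider, and a BLL that validates input before delegating to it, might look in an ASP.NET 2.0 application. This is an illustrative assumption, not PReWebD's actual source (which the paper does not list); the class, table and column names are hypothetical.

```csharp
using System.Data;
using System.Data.SqlClient;

// Data access layer: the only layer that talks to SQL Server.
public class ProfileDal
{
    private readonly string _connStr;
    public ProfileDal(string connStr) { _connStr = connStr; }

    public DataTable Search(string name)
    {
        using (SqlConnection conn = new SqlConnection(_connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT * FROM Staff WHERE Name LIKE @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", "%" + name + "%");
            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            DataTable result = new DataTable();
            adapter.Fill(result);   // opens and closes the connection itself
            return result;
        }
    }
}

// Business logic layer: validates input, then delegates to the DAL.
public class ProfileBll
{
    private readonly ProfileDal _dal;
    public ProfileBll(ProfileDal dal) { _dal = dal; }

    public DataTable Search(string name)
    {
        if (string.IsNullOrEmpty(name))
            throw new System.ArgumentException("Search term must not be empty.");
        return _dal.Search(name.Trim());
    }
}
```

The PL would bind the returned DataTable to a GridView or FormView control, so that no layer other than the DAL ever issues SQL.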

4 Design aspect of PReWebD

A web application's design is essentially its look and feel [11]. We have taken into account all the web elements, for example audience information, purpose and objective statement, domain information and web specification, and combined them to produce an arrangement for implementing PReWebD. The application has been developed considering the profile of the Department of Electronics and Communication Technology (ECT), Gauhati University, as the sample data. Create, Read, Update and Delete operations are performed to generate the response: data are created, retrieved, updated and deleted according to the user requirement.

There are two types of accounts available for accessing PReWebD. The 'USER ACCOUNT' is available to registered users and has limited access to PReWebD. The other is the 'ADMINISTRATIVE ACCOUNT', available to the administrator, which has full access to PReWebD.

The basic working principle of PReWebD is illustrated in the flowchart shown in Fig. 2. When users open the web application, the main menu opens. Users can then click any link provided in the main menu. They can search the site or perform transactions. For searching information, any user can open the site and perform the necessary searching. For other operations, for example insert, modify or delete, users have to go through the registration process, which is supervised by the administrator. A registered user is authenticated to get access to pages performing database transactions (a sketch of such a gate is given below). Users can log out at the end of their session. The session is closed automatically if a user does not perform any transaction for a specific time period set in the web application.
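A minimal sketch of the authentication gate described above, in the style of an ASP.NET 2.0 code-behind page, is given below. The session key "UserName" and the page name "Login.aspx" are illustrative assumptions, not names taken from PReWebD itself.

```csharp
using System;
using System.Web.UI;

public class TransactionPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Only registered, logged-in users may reach transactional pages.
        // An idle session expires automatically after Session.Timeout minutes,
        // which realises the automatic close-out described in the text.
        if (Session["UserName"] == null)
        {
            Response.Redirect("Login.aspx");
        }
    }
}
```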


Fig. 2 Flowchart for basic working principle of PReWebD

The different modules of the system design of PReWebD are shown in Fig. 3. Instead of testing the application as a whole in one attempt, the application is divided into different modules. Modularity in design is an approach that subdivides a system into smaller parts. The modules incorporated in PReWebD are insert, modify, delete, search, login, logout and register. These modules are in turn composed of different objects, such as server controls and validation controls. Since the performance of PReWebD depends on these modules, the behaviour of each module is studied separately.

5 Technical specification and testing of PReWebD

The technical specifications of the hardware and the software for the development and testing environment of PReWebD are given below.

5.1 Hardware and software configuration

Fig. 3 Structure of the PReWebD

5.1.1 Hardware platform:
PC: Intel® Pentium® Dual CPU E2200
Processor speed: 2.20 GHz
RAM: 1 GB
Disk space: 150 GB


5.1.2 Software platform:
Web server: IIS 5.1
Database server: Microsoft SQL Server 2005
Operating system: Windows XP Professional Service Pack 2
Development platform: Visual Studio 2005
Browser: Internet Explorer 6.1
Network bandwidth: 128 kbps

5.2 Testing procedure

Mercury LoadRunner (Version 8.0) is used for testing PReWebD. It is an automated performance and load testing tool for studying system behaviour and performance while generating actual load [12]. Using limited hardware resources, LoadRunner puts hundreds or thousands of emulated concurrent users through the rigours of real-life user loads. It has an excellent monitoring and analysis interface, where reports are presented with easily understandable charts [13, 14]. LoadRunner 8.1 added support for ADO.NET.

During the experiments, the stress level is gradually varied so that it can saturate the server. This helps to find the capability of the server. Each HTTP request causes a SQL INSERT command to insert two text fields into the database table. After invoking the application, users log into PReWebD using a unique username and password. A successful login authenticates users to perform the transaction. Real-life values are inserted into the text fields. The values can then be saved into the corresponding database table by pressing the 'SAVE' button (a sketch of such a handler is given after the steps below). A user think time of approximately 12 s is incorporated in performing the transactions.

The run times for the experiments are chosen subject to two constraints: (i) the test duration must be long enough to assess accurately the server's ability to support the target request rate and (ii) the duration of each test should be as short as possible, in order to run many requests and accurately identify the peak performance of the server. Considering these aspects, an average steady-state period of 30 min is fixed for all the experiments. The flowchart for the testing procedure is shown in Fig. 4. The different steps involved in performing the performance test are:

1. The preparation of a test plan: The first step is to prepare a test plan. We designed various test cases to view the performance test from the user's perspective. The test cases include the stress levels for each performance indicator, the number of users and the architecture, which includes the client PC, browsers and network bandwidth.
2. The creation of a test environment: The second step is the creation of a test environment. The test environment consists of the hardware and software configuration of the client PC, the operating system chosen, the number of client PCs participating in the test etc.
3. The setting of the test period: The third step is to set a test period for which the stress test will be performed. There are three phases: (a) the ramp-up phase, which initialises the system until it reaches a steady state, (b) the steady-state phase, in which the measurements are taken, and (c) the ramp-down phase, which allows PReWebD to cool down.
4. The setting of performance parameters: The performance parameters are important metrics for measuring the performance of the application. Not all parameters are relevant to us. The parameters that are important to our present work, for example response time, throughput, hits/s and errors/s, are set during this step.

Fig. 4 Flowchart for testing procedure

5. The execution of the test: After setting the performance parameters, the test is executed by deploying the application on Mercury LoadRunner.
6. The analysis of test results: The test responses, which are in the form of graphs, are then analysed to evaluate the performance of PReWebD.
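The transaction exercised by each virtual user is the insertion of two text fields into a database table when 'SAVE' is pressed. The following is a hypothetical sketch of such a handler; the table name, column names, control names and connection string are assumptions, since the paper only states that each HTTP request triggers one SQL INSERT.

```csharp
using System;
using System.Data.SqlClient;
using System.Web.UI.WebControls;

public class InsertPage : System.Web.UI.Page
{
    // The two input controls declared in the corresponding .aspx markup.
    protected TextBox TextBox1, TextBox2;

    protected void SaveButton_Click(object sender, EventArgs e)
    {
        string connStr = "Data Source=.\\SQLEXPRESS;Initial Catalog=PReWebD;"
                       + "Integrated Security=True";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Records (Field1, Field2) VALUES (@f1, @f2)", conn))
        {
            // Parameterised values taken from the two text fields on the page.
            cmd.Parameters.AddWithValue("@f1", TextBox1.Text);
            cmd.Parameters.AddWithValue("@f2", TextBox2.Text);
            conn.Open();
            cmd.ExecuteNonQuery();   // one INSERT per HTTP request, as in the test
        }
    }
}
```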

5.3 Performance parameters

The performance has been analysed from the user's as well as the developer's point of view. Users are mainly concerned with no refusal of connection and a fast response at the client end. At the server end, on the other hand, the developer is concerned with high connection throughput and high availability. The factors on which web server performance depends are: (i) the hardware and software platform, (ii) the operating system, (iii) the server software, (iv) the network bandwidth and (v) the load on the server. There are three models that can be used to analyse the performance of an application, namely: (i) simulation, (ii) the analytical model and (iii) measurement of the existing system. In the analytical model, a set of algorithms is used to analyse the system. In the simulation model, a computer program is used to generate the parameters needed to perform the test. We have measured the performance of the existing system, once it was ready.

5.4 Testing parameters

There are three main parameters that are varied during the testing procedure. They are: (a) the workload intensity, measured in terms of the number of virtual users, that is, the stress level, (b) the workload mix, which defines what users do in each session and (c) the user behaviour parameter, which is the think time.

5.5 Test responses

The parameters of the load and stress tests that we have monitored mainly include: (i) the response time in seconds, (ii) the throughput in bytes per second, (iii) the hits per second, (iv) the number of virtual users successfully allowed to perform transactions, (v) the transaction summary, which includes the number of completed and abandoned sessions, and (vi) the error report.

Fig. 5 Hits/s against number of users for 75 virtual users

5.6 Experiments and their results

The performance testing is performed on PReWebD for 10, 20, 30, 40, 50, 75, 100 and 125 virtual users. All performance parameters are measured at 128 kbps bandwidth. We observed the various parameters provided by LoadRunner. On observing the response parameters, it is seen that up to 40 users the application runs smoothly. All the performance tests are conducted with a ramp-up schedule in which one virtual user starts every 30 s. The steady-state measurement period is set at 30 min. The users are then phased out together after the completion of the steady-state period [15]. Delays for user think time are included to emulate the behaviour of real users. The results for insert and delete operations are presented in Table 1. Some sample plots of the tested parameters are shown in Figs. 5-7.

Fig. 5 shows the plot of hits/s against the number of users for 75 virtual users. In this case 69 users are allowed to perform the transaction; the rest failed. It is observed that hits/s increases with the number of virtual users, becomes maximum at around 63 virtual users and then decreases gradually. The recorded average is 12.241 hits/s, with a maximum of 19.539.

Fig. 6 shows the plot of throughput against the number of users for 75 virtual users. It is observed that the throughput increases with the number of virtual users, becomes maximum at around 63 virtual users and then decreases gradually. The recorded average throughput is 328 940.716 bytes/s, with a maximum of 607 717.333 bytes/s.

Fig. 6 Throughput against number of users for 75 virtual users

Fig. 7 Response time against number of users for 75 virtual users

Table 1 Results for insert and delete operations

Scenario: insert operation

No. of users   Average response time, s   Average throughput, bytes/s   Average hits/s   Connection refusal, %
30             52.3                       125 146                       3.327            0
75             45.86                      328 940                       12.241           8
100            49.859                     308 799                       8.963            35
125            76.646                     52 673                        1.515            84

Fig. 7 shows the plot of response time against the number of users for 75 virtual users. It is observed that the response time initially increases with the number of virtual users, then reaches a steady state and finally increases to a maximum level with increasing virtual users. The recorded average response time is 45.86 s, with a maximum of 57.801 s.

6 Statistical analysis of the application

The statistical analysis for ten users, run for 5 min in the steady state, is presented here. The number of samples taken is 30.

6.1 Statistical analysis for PReWebD

The observed range for the response time lies between 47.244 and 60.425 s. The range for the throughput lies between 58 633 and 68 755 bytes/s. The hits/s lies between a minimum of 1.395 and a maximum of 2.06. The recorded 30 samples are divided into six or seven classes, depending on their range. The class widths and frequencies for response time, hits/s and throughput are given in Tables 2-4, respectively.

Table 2 Class width and frequency for response time

Response time (s)    Observed frequency
47–49.6              3
>49.6–52.2           9
>52.2–54.8           6
>54.8–57.4           5
>57.4–60             5
>60–62.8             2

Table 3 Class width and frequency for hits/s

Hits/s       Observed frequency
<1.5         2
>1.5–1.6     2
>1.6–1.7     6
>1.7–1.8     13
>1.8–1.9     5
>1.9–2.0     1
>2.0         1

Table 4 Class width and frequency for throughput

Throughput (bytes/s)    Observed frequency
<60 000                 1
>60 000–61 500          5
>61 500–63 000          7
>63 000–64 500          11
>64 500–66 000          4
>66 000–67 500          1
>67 500–69 000          1

6.2 Distribution for response time, hits/s and throughput

Our objective is to determine the distribution of the response time, hits/s and throughput. One way is to plot histograms of the observed parameters, as shown in Figs. 8-10, respectively. According to the histograms, the applicable distribution is the normal distribution. However, there is a major drawback with histograms: depending on the bin size used, it is possible to draw very different conclusions. A better technique is to plot the observed quantiles against the recorded data in a quantile plot [16]. If the distribution of the observed data is normal, the plot is close to linear. The resultant plots are shown in Figs. 11-13. Based on the observed data, the response time, throughput and hits/s do appear to be normally distributed.

Fig. 8 Histogram for response time

Fig. 9 Histogram for hits/s

Fig. 10 Histogram for throughput
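The class frequencies in Tables 2-4 can be reproduced with a simple fixed-width binning pass over the recorded samples. The following sketch illustrates the procedure; the sample values are illustrative assumptions, not the authors' recorded data (the paper reports only the resulting frequencies).

```csharp
using System;

class ClassFrequency
{
    static void Main()
    {
        // Illustrative subset of response-time samples (the real test used 30).
        double[] responseTimes = { 47.2, 48.9, 49.1, 50.3,
                                   51.8, 55.0, 57.9, 60.4 };
        double lower = 47.0, width = 2.6;          // class width as in Table 2
        int classes = 6;
        int[] freq = new int[classes];

        foreach (double t in responseTimes)
        {
            int k = (int)((t - lower) / width);    // index of the class
            if (k < 0) k = 0;
            if (k >= classes) k = classes - 1;     // clamp values at the edges
            freq[k]++;
        }
        for (int k = 0; k < classes; k++)
            Console.WriteLine(
                $"{lower + k * width:F1}-{lower + (k + 1) * width:F1}: {freq[k]}");
    }
}
```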

The test of normality can also be verified graphically, using the normal probability plot. If the data samples are taken from a normal distribution, the plot will appear to be linear. The normal probability plots of the response time, throughput and hits/s are shown in Figs. 14-16. The data follow a straight line, which indicates that the distribution is a normal one.


Fig. 11 Quantile plot for response time

Fig. 12 Quantile plot for throughput

Fig. 15 Normal probability plot for hits/s

Fig. 16 Normal probability plot for throughput

Fig. 13 Quantile plot for hits/s

6.3 Confidence interval of response time, hits/s and throughput

The 95% confidence intervals for the mean values of response time, hits/s and throughput are estimated. The population mean $\mu$ can be expressed as [17, 18]

$\mu = \bar{x} \pm \dfrac{t_c S}{\sqrt{N}}$    (1)

where $\bar{x}$ is the mean value, $t_c(0.05, 29)$ the critical value, $S$ the standard deviation, $N$ the sample size and $t_c S/\sqrt{N}$ the margin of error. Using the values of the parameters obtained during load testing, we evaluate the critical value, mean and margin of error given in Table 5. The population mean $\mu$ is calculated from (1). From Table 5, we can conclude the following with 95% confidence: the mean response time lies between 53.9337 ± 1.455, that is, between 52.478 and 55.389 s; the mean hits/s lies between 1.686 and 1.785; and the mean throughput lies between 62 289.773 and 63 898.226 bytes/s.
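A minimal sketch of the computation in (1), assuming the response-time values reported in Table 5, is given below.

```csharp
using System;

class ConfidenceInterval
{
    static void Main()
    {
        int n = 30;              // sample size N
        double tc = 2.045;       // critical value t(0.05, 29)
        double mean = 53.9337;   // sample mean response time, s (Table 5)
        double s = 3.89829;      // sample standard deviation S (Table 5)

        double margin = tc * s / Math.Sqrt(n);   // t_c * S / sqrt(N)
        Console.WriteLine($"margin of error = {margin:F3} s");
        Console.WriteLine($"95% CI: [{mean - margin:F3}, {mean + margin:F3}] s");
        // Prints a margin of about 1.455 s and the interval [52.478, 55.389] s,
        // matching the values reported in the text.
    }
}
```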

Fig. 14 Normal probability plot for response time

6.4 Factors influencing the response time

In order to verify whether there is a relationship between the response time, hits/s and throughput, we assume that such a relation, if it exists, is a linear one. The response time is taken as the criterion variable; hits/s and throughput are taken to be the predictor variables. The scatter plots of the response time against hits/s and throughput are given in Figs. 17 and 18.

Table 5 Critical value, mean and margin of error

$N$ = 30, $t_{0.05, 29}$ = 2.045

Parameters             Mean ($\bar{x}$)    $S$        $t_c S/\sqrt{N}$
response time, s       53.9337             3.89829    1.455481966
hits/s                 1.736               0.132      0.0493
throughput, bytes/s    63 094              2154       804.2265084

The two scatter plots, with their respective regression lines, show a linear relationship. The greater the value of hits/s, the greater the response time, although the effect is very small. Similarly, the greater the throughput, the greater the response time.

To examine the combined effect of throughput and hits/s on the response time, we consider a multiple linear regression test. The regression test is carried out at the 95% confidence level. We assume the null hypothesis (H0): the response time does not depend on hits/s and throughput. The alternative hypothesis (H1) is: the response time is a function of hits/s and throughput. The regression analysis is carried out in Microsoft Excel. The analysis of variance shows the F ratio to be 5.087, which is significant at the 0.05 level. This provides evidence of the existence of a linear relationship between the response time, hits/s and throughput. As such, the null hypothesis may be rejected at the 95% confidence level. The analysis also suggests that our model accounts for 27.37% of the variance in response time. Thus we may infer that hits/s and throughput have some influence on the response time.
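The reported F ratio follows directly from the reported coefficient of determination: with $k = 2$ predictors and $N = 30$ observations, $F = (R^2/k)/((1 - R^2)/(N - k - 1))$. The short sketch below checks this relation with the paper's values; the critical value $F(0.05; 2, 27) \approx 3.35$ is taken from standard F tables.

```csharp
using System;

class RegressionFTest
{
    static void Main()
    {
        double r2 = 0.2737;   // variance in response time explained (reported)
        int n = 30, k = 2;    // samples; predictors (hits/s and throughput)
        double f = (r2 / k) / ((1 - r2) / (n - k - 1));
        Console.WriteLine($"F = {f:F3}");   // prints F = 5.087
        // F exceeds the critical value F(0.05; 2, 27) of about 3.35,
        // so the null hypothesis is rejected at the 0.05 level.
    }
}
```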

Fig. 17 Hits/s against response time

Fig. 18 Throughput against response time

7 Overall performance results and discussion

The objective of our present investigation is to evaluate the overall performance of the .NET technique used in Visual Studio with IIS as the web server, and to predict the influence of hits/s and throughput on the response time. The analysis of the recorded data shows that up to 40 virtual users the application gives an ideal response, without any refusal of connectivity, with an average response time of 44.28 s, which is acceptable. As we increase the number of virtual users, the errors per page of the application as well as the connectivity errors increase. For 50 virtual users the average response time is 44.762 s and 14% of connections are refused. Similarly, for 75 virtual users the average response time is 45.86 s and 8% of connections are refused. For 100 virtual users the average response time is 49.859 s, with 35% connection refusal. Finally, for 125 virtual users the average response time is 76.646 s, with 68% connection refusal.

Table 6 Comparison of responses with Java technique

Scenario: insert operation

No. of users    Recorded parameters     Average (PReWebD)    Average (PReWebN)
30              response time, s        52.3                 115.672
                throughput, bytes/s     125 146              98 806
75              response time, s        45.86                124.876
                throughput, bytes/s     328 940              129 440
100             response time, s        49.859               139.412
                throughput, bytes/s     308 799              122 645
125             response time, s        76.646               129.12
                throughput, bytes/s     52 673               103 503

The refusal of connections at higher numbers of virtual users may be due to the garbage-collected heap and improper release of memory. It may also be due to decreasing response at the server end as the number of virtual users increases. The sudden rises and falls in the various plots of response time, throughput and hits/s may be due to server resources, including memory, not being released, or being released late, between consecutive requests. This conclusion, however, requires thorough and close monitoring and analysis of the server-side resources. This situation occurs more often with higher numbers of virtual users, when the server is stressed because of the increased load.

The histograms, quantile plots and normal probability plots of the application show linearity and normality, which provides evidence for the scalability and reliability of the application with a large number of virtual users. However, some of the histograms are right- or left-skewed. Also, the normal probability plots are not always perfectly straight lines but depart from linearity at the ends. This is evidence of longer tails than the normal distribution.

From the statistical analysis, it is observed that hits/s and throughput have individual influence as well as a combined effect on the response time. Individually, hits/s and throughput contribute approximately 18 and 26.02% to the response time, and together they contribute around 27.37%.

8 Future work

As part of our future work we propose to carry out detailed investigations on implementations of web applications with different implementation techniques.

9 Conclusion

From the above statistics we can conclude that, with an increase in virtual users, the errors as well as the connection refusals increase. The system becomes almost inoperable near 125 users. The system seems to be stable up to 75 users, with 8% error, which is tolerable. With the increase in stress level, the collisions between requests increase; hence hits/s and throughput decrease while the response time of the server increases. Although PReWebD complies with industry standards, it is essential to test and analyse the system thoroughly to gain an in-depth idea of the factors hampering the application.

The statistical analysis shows that the distributions of response time, hits/s and throughput are normal. This provides evidence for the scalability and reliability of the web application with a large number of virtual users. It is also found that hits/s and throughput have some influence on the response time of PReWebD, although it is small. The multiple regression test on PReWebD shows that hits/s and throughput have a 27% combined effect on the response time. As such, we can conclude that the application implemented with the .NET technique is reliable and stable for an intermediate number of users with the above-mentioned technical specification.

Table 6 gives a comparison of response times and throughputs between PReWebD and PReWebN, a web application developed with the Java technique [19]. From the table it can be concluded that PReWebD, developed with the .NET technique, provides faster response times and higher throughput in comparison to PReWebN, developed with the Java technique.


10 Acknowledgments

One of the authors, Dr. Tulshi Bezboruah, is thankful to the Director, ICTP, Trieste, Italy, for providing excellent computing facilities under the Junior Associate Scheme of ICTP during his visit to the centre. This work is supported partially by the University Grants Commission (UGC), Govt. of India. The authors are thankful to the Head, Department of Electronics & Communication Technology (ECT), and the Dean, Faculty of Technology, Gauhati University, for infrastructural support towards the work. The authors are also thankful to Prof. (Mrs.) K. Boruah, Professor and Head, Department of Physics, and Prof. H. K. Boruah, Professor, Department of Statistics and formerly Dean, Faculty of Science, Gauhati University, for their valuable suggestions during the statistical analysis of the data. The authors are thankful to the anonymous reviewers of IET Software for providing various constructive comments and suggestions.

11 References

1 Whittaker, J.A.: 'What is software testing? And why is it so hard?', IEEE Softw., 2000, 17, (1), pp. 70–79
2 Kallepalli, C., Tian, J.: 'Measuring and modeling usage and reliability for statistical Web testing', IEEE Trans. Softw. Eng., 2001, 27, (11), pp. 1023–1036
3 http://searchwindevelopment.techtarget.com/sDefinition/0,sid8_gci534970,00.html
4 http://en.wikipedia.org/wiki/Software_performance_testing
5 Menascé, D.A.: 'Load testing of web sites', IEEE Internet Comput., 2002, 6, (4), pp. 70–74
6 http://www.manageengine.com/products/qengine/stress-testing.html
7 http://en.wikipedia.org/wiki/Stress_testing
8 http://www.faqs.org/faqs/software-eng/testing-faq/section-15.html
9 Subraya, B.M., Subrahmanya, S.V.: 'Object driven performance testing of web applications'. Proc. First Asia-Pacific Conf. on Quality Software (APAQS'00), Hong Kong, China, 2000, pp. 17–26
10 Kalita, M., Bezboruah, T.: 'Performance analysis of web application using Microsoft .NET technology', Int. J. Electron. Eng. Res., 2010, 2, (2), pp. 243–251
11 Kalita, M., Bezboruah, T.: 'On HTML and XML based web design and implementation techniques', Far East J. Electron. Commun., 2007, 1, (1), pp. 65–79
12 http://pcquest.ciol.com/content/software/2004/104093002.asp
13 www.ciao.co.uk
14 http://learnloadrunner.com/introduction/advantages-of-loadrunner/
15 Cecchet, E., Chanda, A., Elnikety, S., Marguerite, J., Zwaenepoel, W.: 'Performance comparison of middleware architectures for generating dynamic web content' (Springer-Verlag, 2003), pp. 242–261
16 Bogárdi-Mészöly, Á., Szitás, Z., Levendovszky, T., Charaf, H.: 'Investigating factors influencing the response time in ASP.NET web applications', Lect. Notes Comput. Sci., 2005, 3746, pp. 223–233
17 Spiegel, M.R.: 'Theory and problems of probability and statistics', Schaum's Outline Series (McGraw-Hill Book Company, 2000, SI edn.)
18 Pal, S.K.: 'Statistics for geoscientists: techniques and applications' (Concept Publishing Company, New Delhi, 1998)
19 Kalita, M.: 'Investigations on implementation of web applications with different implementation techniques'. PhD Thesis, Gauhati University, 2010
