
A Hybrid Test Case Model for Medium Scale Web Based Applications

Muhammad Bilal1, Nadeem Sarwar2, Muhammad Sajjad Saeed3

Department of Computer Sciences & IT, The Islamia University of Bahawalpur; University of Gujrat, Sialkot Campus, Sialkot, Pakistan2
[email protected], [email protected], [email protected]

Abstract—Nowadays, web based applications have become intricate, popular and crucial in every organization, which demands high quality and high reliability of web based applications. The quality of a web based application affects its security and its functionality (including functional and non-functional requirements). Web applications are complex due to their heterogeneous and distributed nature. With this increasing complexity, the development of web applications requires rigorous testing techniques that are cost effective and efficient. Modelling and testing are essential for developing quality web based applications. In this paper we propose an approach for testing medium scale web based applications. In our approach, the weights of flows (derived from a UML activity diagram) are calculated using a weight based graph [1], and an ideal distance is then calculated from the weights of flows. Next we find the trade-off between the effort level required for testing a medium scale web based application and the exhaustiveness of testing, using the ideal distance represented by a graph. The proposed approach is demonstrated by means of case studies.

Keywords—Medium scale web application, activity diagram, weight of flow, ideal distance

I. INTRODUCTION

This section presents the field of research and its major issues, highlights the addressed research problem, and describes the research motivations and the major research objectives. All software is developed to meet and satisfy functional needs. A functional need may be technical, process related or business related. The main purpose of testing software is to ensure that it works functionally; through functional testing, the expected behaviour of an application is tested. Applications may be desktop applications or web based applications. Our focus is on testing web based applications, meaning any program or software that runs in a web browser using HTTP. Web based applications may also be client based, with processing done on a server through the internet. The key reason for the popularity of web based applications is that they can be maintained without disturbing or installing software on thousands of client computers. Web based applications divide into three categories: 1) small scale, 2) medium scale and 3) large scale web based applications; we focus on testing medium scale web based applications. The wide distribution of the internet has created substantial growth in the demand for web based applications, with strict requirements of full functionality, dependability, reliability, usability, inter-operability and security. Due to short development times, the testing of web based applications is often neglected by developers, as it is considered too time consuming [2]. Many techniques have been developed by researchers for testing web based applications. In this study we propose an approach for testing medium scale web based applications in which we calculate the weights of flows derived from a UML activity diagram [1] and then calculate the ideal distance. Next we find the trade-off between the effort level required for testing medium scale web based applications and the exhaustiveness of testing, represented by a graph. The proposed approach is demonstrated by means of a case study.

A. Test Case Description

The extensive distribution of the internet has produced substantial growth in the demand for web based applications with serious requirements of consistency, friendliness, usability, dependability and security. A test case is a set of conditions or variables under which a tester determines whether a system satisfies its requirements or works properly. The test case generation process also helps to find problems in the requirements or design phase of an application, so test cases should be effective. For test case generation, a test case description is needed to meet further needs; a well defined, reusable test case not only presents the novel content of the test case but also identifies its reusable substance. A test case may be defined as a combination of the following items:

Test case = [Unique test case ID, test case description, preconditions, test steps, test input/data, expected result, actual result, post conditions], abbreviated as [TCID, TCD, PC, TS, TI/D, ER, AR, PC]. The details of the test case items are as follows:

TCID (Test Case ID): used to identify and distinguish the test case.
TCD (Test Case Description): the description of the test case purpose.
PC (Pre Conditions): a description of the prior state required for the test case; the precondition is a strict prerequisite for further testing.
TS (Test Steps): the steps taken for the test case, i.e., all the steps needed to fulfil the requirement.
TI/D (Test Input/Data): the valid or invalid data entered for the test case.
ER (Expected Result): the result expected after test case execution, which may be the actual output, the expected output or the final context.
AR (Actual Result): the result actually obtained after test case execution.
PC (Post Conditions): what is finally displayed [3].

Following this description, we exemplify a test case for logging in to a system:
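The test case record above can be expressed as a simple data structure. The following is an illustrative sketch in Python (the field names are our own shorthand for the listed items, and the two items the source both abbreviates as PC are given distinct names here to avoid the clash):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A test case record following the item list above."""
    tc_id: str                      # TCID: unique test case identifier
    description: str                # TCD: purpose of the test case
    preconditions: str              # PC: state required before the test
    test_steps: list = field(default_factory=list)  # TS: steps to execute
    test_input: str = ""            # TI/D: valid or invalid input data
    expected_result: str = ""       # ER: result expected after execution
    actual_result: str = ""         # AR: result actually observed
    postconditions: str = ""        # PC: state displayed after the test

# The login test case shown below, expressed as such a record:
login = TestCase(
    tc_id="H001",
    description="Successful user login",
    preconditions="A valid user account is available",
    test_steps=["Enter user name in the login panel",
                "Enter password",
                "Click the 'login' button"],
    test_input="A valid user name and a valid password",
    expected_result="The user is logged in successfully",
    postconditions="The user's personal information page is displayed",
)
```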

978-1-5090-2000-3/16/$31.00 ©2016 IEEE 632

Test Case ID: H001
Test Case Description: Successful user login
Pre-Condition: A valid user account to log in is available
Test Steps:
• In the login panel, enter the user name
• Enter the password
• Click the “login” button

Test Input/Data: A valid user name, a valid password
Expected Result: The user is logged in successfully
Post Condition: The user's personal information page is displayed.

II. RELATED WORK

The authors of [4] discuss a novel approach for prioritizing test scenarios derived from UML activity diagrams using path complexity. In this approach the activity diagram is first converted into a control flow graph (CFG), which is a directed graph: each activity, decision, fork, join and merge in the activity diagram is represented as a node in the CFG. Circles represent the initial, final and decision nodes, while ovals represent the fork, join and merge nodes. The basis paths are then generated from the CFG using depth first search traversal; in a basis path each loop is executed at most once, which avoids path explosion. The test scenarios are prioritized with the help of path length, node coverage, condition coverage, logical conditions and the information flow (IF) metric. A path with the highest path complexity has the highest probability of fault occurrence and is given the highest priority for testing [4].

Another technique prioritizes test cases based on Hamming distance. In this approach, test cases are ordered so as to gain maximum fault coverage as early as possible. The faults exposed by each test case are represented by a binary string; the Hamming distance of two equal length binary strings is the number of positions at which their symbols differ. For example, the distance between 1011101 and 1001001 is 2. The steps of the algorithm are:

Set T' to empty.
Find S, the binary string Bi with the largest number of 1's, and set T' = T' ∪ {ti}.
T = T − {ti}; B = B − {S}; FC = S.
While (no. of 1's in FC) < (total no. of faults), repeat the selection, choosing at each step the remaining string at maximum Hamming distance from FC and merging it into FC.

Table 1: Weights of flows

Flow ID | Flow                                           | Weight of Flow
F1      | A->B->C->D->E->G->H->I->J                      | 7
F2      | A->B->C->D->B->C->D->E->G->H->I->J             | 9.5
F3      | A->B->C->D->E->F->C->D->E->G->H->I->J          | 10
F4      | A->B->C->D->B->C->D->E->F->C->D->E->G->H->I->J | 12.5
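The Hamming-distance prioritization described in Section II can be sketched in Python. Note that the selection rule inside the loop (picking the remaining fault string at maximum Hamming distance from FC, then merging it into FC) is our reading of the technique, since the loop body is abridged in the source:

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length binary strings differ."""
    return sum(x != y for x, y in zip(a, b))

def prioritize(fault_strings: dict) -> list:
    """Order test cases for early fault coverage.

    fault_strings maps test case id -> binary string, where a '1' at
    position k means the test case exposes fault k.
    """
    remaining = dict(fault_strings)
    # Steps 1-3: start with the test case exposing the most faults.
    first = max(remaining, key=lambda t: remaining[t].count("1"))
    order = [first]
    fc = remaining.pop(first)            # FC: faults covered so far
    total_faults = len(fc)
    # Step 4: while some fault remains uncovered, pick the test case
    # farthest (in Hamming distance) from FC and merge it into FC.
    while fc.count("1") < total_faults and remaining:
        nxt = max(remaining, key=lambda t: hamming(remaining[t], fc))
        s = remaining.pop(nxt)
        fc = "".join("1" if x == "1" or y == "1" else "0"
                     for x, y in zip(fc, s))
        order.append(nxt)
    return order

# The example from the text: 1011101 and 1001001 differ in 2 positions.
print(hamming("1011101", "1001001"))  # 2
```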

In the table below we calculate the ideal distance for exhaustive testing using the weights of flows and the sum of the weights of flows, while the effort level is approximated by taking the 3rd power of each weight of flow. The sum of the weights of flows is denoted by W, and the ideal distance IDj is calculated as:

IDj = W − (w1 + w2 + … + wj)

W = 7 + 9.5 + 10 + 12.5 = 39

Table 2: Ideal distance and 3rd power of weights of flows

Flow ID | Flow                                           | Weight of Flow | 3rd Power of Weight of Flow | Ideal Distance (IDj)
F1      | A->B->C->D->E->G->H->I->J                      | 7              | 343                         | 39−7 = 32
F2      | A->B->C->D->B->C->D->E->G->H->I->J             | 9.5            | 857.375                     | 39−7−9.5 = 22.5
F3      | A->B->C->D->E->F->C->D->E->G->H->I->J          | 10             | 1000                        | 39−7−9.5−10 = 12.5
F4      | A->B->C->D->B->C->D->E->F->C->D->E->G->H->I->J | 12.5           | 1953.125                    | 39−7−9.5−10−12.5 = 0

The trade-off between effort level and ideal distance is represented by the graph.
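The numbers in Table 2 can be reproduced directly from the four flow weights; a minimal sketch of the computation follows (W is the sum of the weights, effort is the cube of each weight, and IDj subtracts the cumulative weights from W):

```python
# Flow weights from Table 1 of the case study.
weights = {"F1": 7.0, "F2": 9.5, "F3": 10.0, "F4": 12.5}

W = sum(weights.values())      # total weight of all flows: 39

table = []
running = 0.0
for flow, w in weights.items():
    running += w
    effort = w ** 3            # effort level ~ 3rd power of the flow's weight
    ideal = W - running        # ID_j = W - (w_1 + ... + w_j)
    table.append((flow, w, effort, ideal))

for row in table:
    print(row)
# ('F1', 7.0, 343.0, 32.0)
# ('F2', 9.5, 857.375, 22.5)
# ('F3', 10.0, 1000.0, 12.5)
# ('F4', 12.5, 1953.125, 0.0)
```

The last flow reaches an ideal distance of 0, i.e., exhaustive coverage, at the highest effort level.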


Graph 1. Tradeoff between effort level and ideal distance

V. CONCLUSION

This work explores the trade-off between effort level and testing completeness; all the points on the curve in Graph 1 represent valid choices. Conclusively, this work offers testing level choices along with their worth in the context of the costs they incur for software testers. Such choices enable software managers to decide on the extent of software error detection and correction within their given budgets.
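The budget-driven choice described above can be illustrated with the case-study numbers. This is our own illustrative sketch, not a procedure from the paper: given a fixed effort budget, a manager picks, among the points of Graph 1 that the budget affords, the testing level with the smallest remaining ideal distance.

```python
# Points from Graph 1: (effort level, ideal distance) per testing choice.
curve = [(343.0, 32.0), (857.375, 22.5), (1000.0, 12.5), (1953.125, 0.0)]

def best_choice(budget: float):
    """Pick the affordable testing level with the smallest ideal distance.

    Returns (effort, ideal_distance), or None if no level fits the budget.
    """
    affordable = [(e, d) for e, d in curve if e <= budget]
    return min(affordable, key=lambda p: p[1]) if affordable else None

print(best_choice(1200.0))   # (1000.0, 12.5): F3's level fits, F4's does not
print(best_choice(100.0))    # None: even the cheapest level exceeds the budget
```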

Results are represented by the graph of the tradeoff between effort level and ideal distance.

References

[1] Gantait, A. (2011, February). Test case generation and prioritization from UML models. In Emerging Applications of Information Technology (EAIT), 2011 Second International Conference on (pp. 345-350). IEEE.
[2] Di Lucca, G. A., & Fasolino, A. R. (2006). Testing Web-based applications: The state of the art and future trends. Information and Software Technology, 48(12), 1172-1186.
[3] Liu, Z., Gu, N., & Yang, G. (2005, September). An automated test case generation approach: using match technique. In Computer and Information Technology, 2005. CIT 2005. The Fifth International Conference on (pp. 922-926). IEEE.
[4] Kaur, P., Bansal, P., & Sibal, R. (2012, September). Prioritization of test scenarios derived from UML activity diagram using path complexity. In Proceedings of the CUBE International Information Technology Conference (pp. 355-359). ACM.
[5] Maheswari, R. U., & JeyaMala, D. (2013, December). A novel approach for test case prioritization. In Computational Intelligence and Computing Research (ICCIC), 2013 IEEE International Conference on (pp. 1-5). IEEE.
[6] Gupta, S., Raperia, H., Kapur, E., Singh, H., & Kumar, A. (2012). A novel approach for test case prioritization. International Journal of Computer Science, Engineering and Applications, 2(3), 53.
[7] Dobuneh, M. R. N., Jawawi, D. N., & Malakooti, M. V. (2013). An effectiveness test case prioritization technique for web application testing. International Journal of Digital Information and Wireless Communications (IJDIWC), 3(4), 117-125.
[8] Reddy, P. D. K., & Rao, A. A. HTCPM: A hybrid test case prioritization model for web and GUI applications.
[9] Hajiabadi, H., & Kahani, M. (2011, September). An automated model based approach to test web application using ontology. In Open Systems (ICOS), 2011 IEEE Conference on (pp. 348-353). IEEE.
[10] Sapna, P. G., & Mohanty, H. (2009, July). Prioritization of scenarios based on UML activity diagrams. In Computational Intelligence, Communication Systems and Networks, 2009. CICSYN'09. First International Conference on (pp. 271-276). IEEE.
[11] Kung, D. C., Liu, C. H., & Hsia, P. (2000). An object-oriented web test model for testing web applications. In Quality Software, 2000. Proceedings. First Asia-Pacific Conference on (pp. 111-120). IEEE.
[12] Qian, Z., Miao, H., & Zeng, H. (2007, December). A practical web testing model for web application testing. In Signal-Image Technologies and Internet-Based System, 2007. SITIS'07. Third International IEEE Conference on (pp. 434-441). IEEE.
[13] Shikimi, R., Ogata, S., & Matsuura, S. (2012, July). Test case generation by simulating requirements analysis model. In COMPSAC (pp. 356-357).
[14] Garg, D., Datta, A., & French, T. (2012). New test case prioritization strategies for regression testing of web applications. International Journal of System Assurance Engineering and Management, 3(4), 300-309.
[15] Sarwar, N., Bajwa, I. S., & Sajjad, R. (2016). Automated generation of EXPRESS-G models using NLP. Sindh University Research Journal (Science Series), 48(1), 05-12.
[16] Kumar, H., & Chauhan, N. (2015, March). A coupling effect based test case prioritization technique. In Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on (pp. 1341-1345). IEEE.
