
A Formal Approach to Verify Software Scalability Requirements using Set Theory and Hoare Triple

Alim Ul Gias∗, Mirza Rehenuma Tabassum∗, Amit Seal Ami∗, Asif Imran∗, Mohammad Ibrahim∗, Rayhanur Rahman∗ and Kazi Sakib†
Institute of Information Technology, University of Dhaka, Ramna, Dhaka–1000
∗{bit0103, bit0129, bit0122, bit0119, bit0117, bit0101}@iit.du.ac.bd †[email protected]

Abstract—Scalability is the ability of a system to handle variations in its execution environment while continuing to function and meet user needs. To ensure scalability, it is important to verify that programmers are writing code that can scale. However, verification of scalability at the code level has its own limitations, as it has received little attention from researchers. This paper proposes a formal approach to verify scalability at the code level using set theory and Hoare triples. The method denotes the variables and functions involved in scalability through set notation. A Hoare triple is used to assess whether a code segment meets its performance goal under varying workload, given that a certain code quality measure, such as caching or data compression, is applied. The methodology is presented as an algorithm that strictly prohibits skipping any specific scalability requirement and requires re-applying a quality measure until that requirement is satisfied. The approach was applied in developing a real-life online ticketing system, and the results show that the system provides stable response times over a wide range of user requests. This indicates that the proposed approach is capable of ensuring scalability by verifying it from the system's code. Index Terms—Software Scalability, Verification, Formal Method, Set Theory, Hoare Triple

I. INTRODUCTION
Scalability is an expected requirement of a software system, characterizing its ability to fulfill quality goals such as performance and throughput when characteristics of the execution environment vary over an expected range [1]. It is crucial for systems such as e-commerce and social networking sites, web services and online ticketing systems, which deal with a growing number of users. These systems tend to experience unpredictable and widely varying volumes of requests, especially during events such as breaking news, national occasions and Denial of Service (DoS) attacks [2]. Developing a scalable web application involves careful planning of the code so that the application works properly under demanding workloads [3]. It is therefore essential to verify scalability at the code level to ensure that the system can scale elastically when needed. Moreover, for some organizations, inconsistent system performance due to a lack of scalability can be fatal from a business perspective. During the development of such systems it is thus important to enforce strictness so that none of the scalability requirements remain unimplemented.

Although several works address scalability frameworks and evaluation methods [4], [5], [6], [7], [8], it is hard to find significant work that specifically focuses on verifying scalability at the code level. State-of-the-art works focus on architectural and design aspects [4], prediction of the resources needed to make a system scalable [5], and scalability elicitation and analysis in the requirements engineering phase [7], [6], [8]. It is therefore arguable whether existing works can verify that code can scale and impose strictness regarding the implementation of scalability. This paper proposes a formal approach to verify software scalability at the code level. The approach uses set theory [9] to define multiple sets of variables, and mapping functions between them, involving system scalability. These sets comprise the system's load-intensive states, code quality measures such as caching or data compression, performance expectations over a specific range of requests, state–quality compatibility pairs and code segments. The mapping functions map system states to code segments, and compatibility pairs to performance fulfillment under varying user requests. A performance requirement is fulfilled if its respective Hoare triple [10], comprising a code quality measure, a code segment and a performance expectation, holds. The whole procedure is represented by an algorithm that strictly prohibits leaving a specific requirement unimplemented and requires re-applying a quality measure until the expected performance goal is achieved. The approach was applied to develop the prototype of an online ticketing system. Before the implementation, requirement analysis was performed to identify load-intensive states, code quality measures and expected performance goals with respect to specific numbers of user requests. The prototype was tested using ab1, an HTTP server benchmarking tool provided by Apache.
The tool was used to generate a large amount of user traffic, varying the number of requests at a time from 100 to 1000 for a total period of 10 seconds while keeping the concurrency level at 100. Results show that the system responds within the expected response times of 15 ms and 30 ms, with minor fluctuations due to multiple web server processes and caching limitations. 1 httpd.apache.org/docs/2.2/programs/ab.html

These results indicate that the proposed methodology is capable of ensuring scalability from the code. The rest of the paper is organized as follows: different frameworks for scalability analysis and elicitation are discussed in Section II. The proposed approach is defined in Section III. Section IV presents the case study of an online ticketing system to which the proposed approach is applied. Lastly, the paper concludes with a discussion of observations drawn from the results of the case study, along with future research directions.

II. LITERATURE REVIEW

Scalability is a major concern in software development, yet not much work has been done on it. Earlier, scalability as a requirement was overlooked while developing software systems due to the lack of techniques for eliciting and testing scalability requirements [7]. Most of the existing works focus on building frameworks for scalability. Some of the significant works are discussed briefly in this section.

Scalability was analyzed for Admission Control Server based Resource Allocation (ACS-RA) in [4] and compared with the Resource Reservation Protocol (RSVP) [11], [12] approach. The work analyzed a framework for reserving resources according to the RSVP model in a network cloud. The authors analyzed scalability in the worst-case scenario, namely a single centralized Admission Control Server for a given differentiated-services cloud [13], [14], and showed that ACS-RA is better than the RSVP approach in terms of scalability. However, an investigation comparing the user-plane performance was missing from their work. Assuming a single centralized ACS as the worst case successfully supported their claim that the ACS-RA architecture scales better than RSVP, but other scenarios should have been added.

Lloyd G. Williams et al. presented the Quantitative Scalability Evaluation Method (QSEM) to directly quantify the scalability of a software system [5]. Their method identifies the specific quantity of resources needed to ensure scalability, and they report results from a case study following the seven steps of QSEM. It uses straightforward measurements of maximum throughput at different numbers of processors or nodes of the targeted system. The steps of QSEM are identifying critical use cases, selecting representative scalability scenarios, determining scalability requirements, planning measurement experiments, performing the measurements and presenting the results. These steps show how to evaluate scalability from the very first phase, starting with identifying critical use cases. The evaluation method can measure scalability requirements quantitatively, which helps to establish the actual requirement and outcome. The approach is good for verifying scalability from the use-case to the application level, but a code verification step could make it more attractive.

As scalability is not clearly defined and understood, a framework for accurately characterizing and analyzing scalability requirements is presented in [6]. Resolving intuitions,

ambiguities and inconsistencies around scalability is the primary goal of that framework. The authors claim that their framework precisely detects the dependency relationships of scalability and describes the important steps of scalability analysis. The framework treats scalability as a multi-criteria optimization problem and uses principles of microeconomics to support the analysis.

It is claimed in [7] that scalability has not been given the systematic treatment it deserves, and a framework for the characterization and analysis of software system scalability is proposed to address the issue. The application of the popular goal-oriented requirements engineering (GORE) method [15], [16] is described with a case study of a real-world software system. The study found GORE appropriate but also identified its limitations, which are shown in their results: the authors claim that GORE lacks techniques for elaborating goal models with respect to scalability goals. Though the advantages of using GORE are discussed quite elaborately, the drawbacks are not treated thoroughly.

Leticia Duboc et al. worked on elaborating specific scalability requirements by identifying goal obstacles to satisfying those requirements [8]. Their method systematically elaborates and analyzes scalability requirements. The results were presented through a case study in which the method was applied to a complex, large-scale financial fraud detection system. The method was formulated from the KAOS [17], [18] goal-oriented requirements engineering method, which the authors used for the elaboration of goal models, the management of conflicts between goals and the selection of alternative system designs. The key issues identified in their work can help focus developers' view of scalability requirements.
The review of state-of-the-art works shows that, though multiple issues have been considered in proposing methods for scalability prediction, elicitation and analysis, none of the works offers an approach to verify scalability at the code level. However, in order to have consistent system performance over varying workloads it is essential to write scalable code, and a formal approach should be provided for its verification. This paper introduces one such approach, which verifies scalability using set theory and Hoare triples and strictly requires all scalability requirements to be fulfilled.

III. FORMALIZING SCALABILITY VERIFICATION

We define the set of states that will receive excessive load as S = {s1, s2, ..., sn}. Let C = {c1, c2, ..., cn} denote the set of code segments that together form the whole system. We also define a mapping function ψ : S → C that relates a specific load-intensive state to a code segment. The set of code quality measures that should be applied to allow the code to scale is defined as Q = {q1, q2, ..., qm}. Obviously, not every quality measure is applicable to every state. Thus we define a set of compatibility pairs, where each pair comprises a specific state and a quality measure. This is denoted as Ω = {ω1, ω2, ..., ωp} where ωt = (si, qj), i ≤ n and j ≤ m. We define the set of expected performance outputs under varying workloads as

K = {k1, k2, ..., kp}, where kt is the output of the expectation function ξ with respect to ωt, that is, kt = ξ(ωt). In order to fulfill a specific scalability requirement, each compatibility pair has to satisfy the function φ : Ω → {0, 1}. The output of the function is defined as follows:

    φ(si, qj) = 1 if the Hoare triple holds, 0 otherwise
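The function φ can be read operationally: apply the quality measure to the mapped code segment, execute it under the target workload, and compare the measured time against the expectation. A minimal Python sketch of this check (the handler, the identity measure and the 15 ms bound are illustrative assumptions, not part of the paper's formalism):

```python
import time

def phi(code_segment, quality_measure, expected_ms):
    """Return 1 if the Hoare triple {q} c {k} holds, 0 otherwise.

    The triple holds when executing the code segment, with the
    quality measure applied, finishes within the expected time.
    """
    prepared = quality_measure(code_segment)   # apply q to c
    start = time.perf_counter()
    prepared()                                 # execute c
    elapsed_ms = (time.perf_counter() - start) * 1000
    return 1 if elapsed_ms <= expected_ms else 0

# Illustrative state: a trivial "view fixture" handler and an identity
# quality measure (a real measure would add caching or compression).
fixture_handler = lambda: sum(range(1000))
identity_measure = lambda c: c

result = phi(fixture_handler, identity_measure, expected_ms=15.0)
```

In practice the measured time would come from a load test (such as the ab runs in Section IV) rather than a single in-process timing.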

A Hoare triple of the form {Q} C {K} holds if, after executing ci ∈ C (derived from si ∈ ωt using ψ), the expected performance output kt = ξ(ωt) is satisfied, given that qj ∈ ωt is applied. Algorithm 1 illustrates the steps that need to be performed to verify system scalability using the proposed formal approach. The algorithm requires the scalability requirement specification as input and ensures that all requirements are satisfied. Initially, the load-intensive states S, code quality measures Q, compatibility pairs Ω and expected outcomes K are defined. During the implementation phase, C and the mapping function ψ are defined. After finishing the implementation, it has to be ensured that all Hoare triples hold. If a Hoare triple does not hold, the code chunk must be redefined with respect to the code quality measures until the triple holds.

Algorithm 1 Scalability Verification
Require: Scalability requirement specification
Ensure: All the scalability requirements are satisfied
1: Begin
2: Identify load-intensive states S
3: Define code quality measures Q and compatibility pairs Ω
4: Define expected outcomes K
5: As the system is being implemented, define C and the mapping function ψ
6: while |Ω| ≠ 0 do
7:   Use ψ to get ci from si
8:   if {qj} ci {kt} holds then
9:     Ω ← Ω \ {ωt}
10:  else
11:    Redefine ci with respect to qj
12:  end if
13: end while
14: State that all the scalability requirements are satisfied
15: End
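The loop at the heart of Algorithm 1 can be sketched directly in code. In the following Python fragment the states, measures, triple check and redefinition are placeholder stubs (all names are illustrative assumptions); like the algorithm, the loop does not terminate until every pair's triple holds:

```python
def verify_scalability(omega, psi, xi, holds, redefine):
    """Algorithm 1: loop until every compatibility pair (s, q)
    satisfies its Hoare triple {q} c {k}."""
    omega = set(omega)
    while omega:                         # while |Omega| != 0
        s, q = next(iter(omega))
        c = psi[s]                       # use psi to get c_i from s_i
        k = xi[(s, q)]                   # expected outcome k_t
        if holds(q, c, k):               # does {q} c {k} hold?
            omega.discard((s, q))        # Omega <- Omega \ {omega_t}
        else:
            psi[s] = redefine(c, q)      # redefine c_i w.r.t. q_j
    return True                          # all requirements satisfied

# Toy instantiation: one state, one measure; the "redefined" segment
# satisfies the check, so the loop terminates on the second pass.
psi = {"view_fixture": "slow_segment"}
xi = {("view_fixture", "cache"): 15}            # expected response, ms
holds = lambda q, c, k: c == "cached_segment"
redefine = lambda c, q: "cached_segment"
ok = verify_scalability({("view_fixture", "cache")}, psi, xi, holds, redefine)
```

Note that, exactly as in the algorithm, nothing bounds the number of redefinition rounds: a requirement that can never be met would loop forever, which is the intended strictness.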

IV. CASE STUDY
A case study was conducted on an online ticketing system to illustrate how the proposed method can be used to verify system scalability. This section discusses the details of that case study, including the system scenario, the application of the proposed approach while developing the prototype of the ticketing system, and the results obtained after load testing the prototype to measure system consistency.

A. System Scenario
The user initially has a view of the tournament fixture for the current season. The fixture shows the match number, teams, venues, and the date and time of the matches. An option is provided beside the matches yet to occur, enabling the user

to buy a ticket. After choosing a match, the user is given the option to choose the gallery, block number, row and seat number, after which the price of the ticket (according to those options) is shown. If the user selects the ticket to buy, s/he is considered a buyer and asked to provide the necessary information: name, address and mobile phone number. An SMS containing a secret code is sent to the submitted mobile number. The buyer is then given the option to submit that code, and if the code is correct the submitted information is stored in the database. A unique serial number is generated to keep track of the buyer, who is then taken to complete the payment. After receiving confirmation that the money has been paid, the system provides a receipt in PDF format bearing the generated serial number. The buyer can use that receipt to collect his/her ticket from the nearest ticket collection center.

B. Exercising the proposed approach
After analyzing the whole process of this online ticketing system using the approach of [5], some of the states were identified as critical and must be scalable. The system scenario shows that the various user requests, and the options provided by the system, can be programmed in a way that ensures the expected performance. Data compression and caching are the two techniques implemented here for scalability; only two techniques were used to keep this case simple. The code sections of the critical states, where caching and compression are applied, were fetched. Scalability is verified by the algorithm shown in the earlier section, using Hoare triples and set theory. Code sections are mapped to the selected states and the techniques are implemented upon them. Different states have different quality measures: data compression and caching are the two quality measures, and the related states are coupled with them to form the compatibility pairs.

Expected outcomes for the compatibility pairs were derived from user statistics and surveys, as discussed in [5]. The critical states were taken first and the related quality measures were mapped. Viewing the fixture, gallery selection, the ticket buying option provided by the system, adding user information and ticket purchasing information, updating ticket availability and generating the receipt are the critical states of the online ticketing system; the states that affect the system's scalability are referred to as critical states. Two of the critical states are shown here as representative states.

The fixture viewing state can be cached to ensure scalability in a varying execution environment. The assumptions were based on statistics that 10,000 users may request to view the fixture per second over a 56 Kbps connection. The test was done on a local area network, which indicates that the requests should be served within 15 ms. With the state sf and the quality measure cache qc, the compatibility pair is ωm, and the expected outcome km is a response within 15 ms; {qc} cf {km} holds if the measured outcome is less than 15 ms.

Generating the receipt is another critical state, where a receipt of the purchase is provided to the user. Compression is applied

in this state to compress the receipt before passing it over the network. If a 311 KB receipt file becomes 259 KB after compression, then a request should take around 30 ms, as stated in the requirements specification document. The compatibility pair ωn is made from the state sr and the measure qp; {qp} cr {kn} holds if the expected outcome kn is less than 30 ms. Table I summarizes the formal verification of the two mentioned states.

TABLE I: Formal Scalability Verification of View Fixture and Generate Receipt Modules

Name of the state | Quality Measure (Q) | Code Chunk (S → C) | Expected Outcome (K) | Hoare Triple
View Fixture (sf) | Cache (qc) | (cf) | Response time within 15 ms up to 10,000 (km) | {qc} cf {km}
Generate Receipt (sr) | Compression (qp) | (cr) | Response time within 30 ms up to 10,000 (kn) | {qp} cr {kn}

C. Results
The prototype of the system was developed using PHP and a MySQL database on a 32-bit Linux Mint 15 machine with an Intel Core-i3 processor and 4 GB of RAM. It was hosted on an Apache server and load tested using the benchmarking tool ab provided by Apache. Two modules, View Fixture and Generate Receipt, were implemented, each in two versions. For the View Fixture module, one version applied caching following the proposed approach and the other involved no caching. Similarly, one version of the Generate Receipt module included data compression and the other did not. User requests were generated through ab by varying the number of requests at a time from 100 to 1000 for a total period of 10 seconds, keeping the concurrency level at 100. The response times in each of these four cases are reported in Tables II and III, and graphed in Figures 1 and 2 respectively. From the graphs it is observed that the response time with caching is much lower than without caching. However, although the same is supposed to happen with compression, the opposite phenomenon is observed. This is discussed in detail in Section V.
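The compression quality measure can be illustrated with a short sketch. The receipt sizes cited above (311 KB before, 259 KB after) come from the paper; the use of zlib and the synthetic payload are assumptions for illustration only:

```python
import zlib

# Synthetic stand-in for a ~311 KB PDF receipt; real receipts contain
# repetitive structure, which is exactly what compression exploits.
receipt = b"Ticket serial: 00042 | Gallery: North | Seat: A-17\n" * 6000

compressed = zlib.compress(receipt, level=6)

# The server sends `compressed`; the client recovers the original bytes.
restored = zlib.decompress(compressed)
assert restored == receipt and len(compressed) < len(receipt)
```

On a low-latency LAN, the CPU time spent in zlib.compress on every request can outweigh the bytes saved on the wire, which matches the observation in Section V that the compressed version responded more slowly.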

V. DISCUSSION

An important observation can be made from Figure 1a. Even though the time required to serve requests remains fairly consistent, there are several significant spikes. This phenomenon occurs because, when a request is first submitted, it is processed and stored in the cache for future use; as a result, the time required to handle subsequent requests in the same range is comparatively smaller. This can be seen in Figure 1a in the 200–400 request range, where the initial time required was 13.5 to 14.5 ms, whereas further requests required only around 13 ms. The other spikes, particularly in the 600–800 and 800–1000 request ranges, are not tied to initial requests. They occurred because the system hosting the web server was simultaneously serving multiple types of requests and processing other kinds of data, like any real application server. As a result, random spikes appeared throughout the observation; however, the overall time required per request remained consistent.

Figure 1b depicts performance without caching. Here, the time required to serve requests was high right from the start. Peaks are observed at specific numbers of requests, such as 400, 600 and 800: due to the lack of a cache, the time required to process requests was always high. The occasional decreases are observed because requests were not made continuously but discretely; the measurements started from 200 requests and increased by 200 requests per round up to 1000 requests. It is also observed that the time required to handle requests grew with the number of requests handled, making this approach inappropriate for scaling.

Figures 2a and 2b show the time required per request for compressed and uncompressed data. Contrary to the usual expectation, the performance with compressed data is worse than with uncompressed data. The reason is that the experiment was conducted on a local area network, where latency between client and server is low; as a result, the time required to compress the data for each request became significant. Several random spikes are also observed, which occurred because our environment, like a real system, was serving multiple types of requests simultaneously rather than dedicating the processor to providing web services.

VI. CONCLUSION AND FUTURE WORK

This paper proposes a formal approach that utilizes set theory and Hoare triples to ensure the proper exercise of each scalability requirement. The importance of verifying scalability at the code level was highlighted, as it can keep applications consistent with respect to performance goals despite rapid increases in user requests. The proposed method applies a strictness whereby none of the scalability requirements can be skipped, and allows re-applying code quality measures, such as data compression and caching, until certain performance goals are fulfilled. For experimental purposes, an online ticketing system was first developed using a traditional approach, without following the proposed method. Requests varying within a range of 100–1000, with a constant concurrency of 100, were sent to the application for a period of 10 seconds. Another application was developed that followed the proposed approach

TABLE II: Performance Comparison of Two Versions of the View Fixture Module

Number of Req. at a Time | Caching Applied: Req. per Second | Caching Applied: Time per Req. (ms) | No Caching Applied: Req. per Second | No Caching Applied: Time per Req. (ms)
100 | 8282.82 | 12.073 | 176.98 | 565.027
200 | 7777.96 | 12.857 | 176.76 | 564.783
300 | 7768.76 | 12.872 | 177.29 | 564.051
400 | 6797.78 | 14.711 | 176.75 | 565.764
500 | 7976.41 | 12.537 | 177.39 | 563.743
600 | 7699.57 | 12.988 | 177.09 | 564.676
700 | 7158.67 | 13.969 | 176.64 | 566.113
800 | 7806.78 | 12.809 | 177.47 | 563.468
900 | 7481.52 | 13.366 | 177.19 | 564.37
1000 | 7351.29 | 13.603 | 176.22 | 567.468
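The two columns ab reports are related: with concurrency c, the mean time per request is roughly c divided by the throughput. A quick sanity check against the first row of Table II (concurrency 100, as in the experiments):

```python
def time_per_request_ms(requests_per_second: float, concurrency: int = 100) -> float:
    """Mean 'Time per request' in ms as ab derives it from throughput:
    concurrency / throughput, converted to milliseconds."""
    return concurrency / requests_per_second * 1000

# First row of Table II, caching applied: 8282.82 req/s -> ~12.073 ms
derived = time_per_request_ms(8282.82)
```

The no-caching rows match only approximately (176.98 req/s gives ~565.04 ms versus the reported 565.027), since ab rounds throughput and timing independently.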

TABLE III: Performance Comparison of Two Versions of the Generate Receipt Module

Number of Req. at a Time | Compression Applied: Req. per Second | Compression Applied: Time per Req. (ms) | No Compression Applied: Req. per Second | No Compression Applied: Time per Req. (ms)
100 | 3614.46 | 27.667 | 5392.12 | 18.546
200 | 3955.13 | 25.284 | 5485.54 | 18.23
300 | 3704.21 | 26.996 | 5368.53 | 18.627
400 | 3654.21 | 26.312 | 5469.02 | 18.285
500 | 3841.4 | 26.032 | 5533.1 | 18.073
600 | 3957.48 | 25.269 | 5493.3 | 18.204
700 | 4095.89 | 24.415 | 5531.74 | 18.078
800 | 3890.11 | 25.706 | 5542.59 | 18.042
900 | 3917.67 | 25.525 | 5501.72 | 18.176
1000 | 3679.44 | 27.178 | 5430.53 | 18.414

[Figure: line plots of Time per Request (ms) and Requests per Second versus Number of Requests at a Time.]

(a) Caching Applied  (b) No Caching Applied

Fig. 1: Performance visualization in terms of caching being applied and not applied
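The caching behavior behind Fig. 1a can be sketched with Python's functools.lru_cache (the fixture-building function and its contents are illustrative assumptions, not the paper's PHP implementation): only the first request pays the full cost, mirroring the initial spike in each request range.

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def view_fixture():
    """Simulate building the fixture page (querying matches, venues,
    dates); with the cache, this body runs only on the first request."""
    return [{"match": i, "venue": f"Stadium {i % 3}"} for i in range(100)]

first = view_fixture()    # computed: cache miss
second = view_fixture()   # served from the cache: cache hit
info = view_fixture.cache_info()
```

A web application would typically cache the rendered page in a shared store rather than in-process, but the effect on per-request time is the same.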

[Figure: line plots of Time per Request (ms) and Requests per Second versus Number of Requests at a Time.]

(a) Compression Applied  (b) No Compression Applied

Fig. 2: Performance visualization in terms of compression being applied and not applied

and a similar number of requests were sent. The response time was recorded in each case, and the results show that the latter system significantly outperforms the first one. These results illustrate that the proposed approach can be used to ensure the scalability of a system from its code. The proposed method treats the fulfillment of each scalability requirement as a binary function, and the functions for all requirements must return true for a successful exercise of the approach. A further research challenge could be applying a specific weight to each scalability requirement and treating the approach as a fuzzy system. Work has to be done on how to assign those weights and on obtaining the threshold value against which the success of the approach would be measured.

ACKNOWLEDGEMENT
This research has been supported by the University Grants Commission, Bangladesh under the Dhaka University Teachers Research Grant No-Regi/Admn-3/2012-2013/13190. We want to express our gratitude to Mr. Shah Mostafa Khaled, Assistant Professor, Institute of Information Technology, University of Dhaka, for helping us model the proposed method.

REFERENCES
[1] D. S. Rosenblum, "Software system scalability: concepts and techniques," in Proceedings of the 2nd India Software Engineering Conference. ACM, 2009, pp. 1–2.
[2] C. Olston, A. Manjhi, C. Garrod, A. Ailamaki, B. M. Maggs, and T. C. Mowry, "A scalability service for dynamic web applications," in Proceedings of the 2nd Biennial Conference on Innovative Data Systems Research, 2005, pp. 56–69.
[3] A. Otto, "Write scalable code," Code Project [Online], 2011.
[4] A. Detti, M. Listanti, S. Salsano, and L. Veltri, "Supporting RSVP in a differentiated service domain: an architectural framework and a scalability analysis," in International Conference on Communications, vol. 1. IEEE, 1999, pp. 204–210.
[5] L. G. Williams and C. U. Smith, "QSEM: Quantitative scalability evaluation method," in Proc. CMG, 2005.
[6] L. Duboc, D. Rosenblum, and T. Wicks, "A framework for characterization and analysis of software system scalability," in Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering. ACM, 2007, pp. 375–384.
[7] L. Duboc, E. Letier, D. S. Rosenblum, and T. Wicks, "A case study in eliciting scalability requirements," in Proceedings of the 16th International Requirements Engineering Conference. IEEE, 2008, pp. 247–252.
[8] L. Duboc, E. Letier, and D. S. Rosenblum, "Systematic elaboration of scalability requirements through goal-obstacle analysis," IEEE Transactions on Software Engineering, vol. 39, no. 1, pp. 119–140, 2013.
[9] B. Potter, D. Till, and J. Sinclair, An Introduction to Formal Specification and Z. Prentice Hall PTR, 1996.
[10] C. Stirling, "A generalization of Owicki-Gries's Hoare logic for a concurrent while language," Theoretical Computer Science, vol. 58, no. 1, pp. 347–359, 1988.
[11] L. Zhang, S. Deering, D. Estrin, S. Shenker, and D. Zappala, "RSVP: A new resource reservation protocol," IEEE Network, vol. 7, no. 5, pp. 8–18, 1993.
[12] G. R. Mallofre, "Resource Reservation Protocol (RSVP)," in Seminar on Transport of Multimedia Streams in Wireless Internet, Department of Computer Science, University of Helsinki, Finland, 2003.
[13] E. Nyberg, S. Aalto, and R. Susitaival, "A simulation study on the relation of DiffServ packet level mechanisms and flow level QoS requirements," in International Seminar on Telecommunication Networks and Teletraffic Theory, 2002.
[14] R. Balmer, F. Baumgarter, T. Braun, and M. Gunter, "A concept for RSVP over DiffServ," in Proceedings of the Ninth International Conference on Computer Communications and Networks. IEEE, 2000, pp. 412–417.
[15] A. van Lamsweerde, "Goal-oriented requirements engineering: a roundtrip from research to practice," in Proceedings of the 12th International Requirements Engineering Conference. IEEE, 2004, pp. 4–7.
[16] A. Lapouchnian, "Goal-oriented requirements engineering: An overview of the current research," Technical Report http://www.cs.toronto.edu/~alexei/pub/Lapouchnian-Depth.pdf, University of Toronto, 2005.
[17] R. Darimont, E. Delor, P. Massonet, and A. van Lamsweerde, "GRAIL/KAOS: an environment for goal-driven requirements engineering," in Proceedings of the 19th International Conference on Software Engineering. ACM, 1997, pp. 612–613.
[18] W. Heaven and A. Finkelstein, "UML profile to support requirements engineering with KAOS," IEE Proceedings-Software, vol. 151, no. 1, pp. 10–27, 2004.