A Framework for Modeling Automatic Offloading of Mobile Applications Using Genetic Programming

G. Folino and F.S. Pisani
Institute of High Performance Computing and Networking (ICAR-CNR), Italy
{folino,fspisani}@icar.cnr.it
Abstract. The limited battery life of modern mobile devices is one of the key problems limiting their usage. Offloading computation onto cloud computing platforms can considerably extend battery duration. However, it is hard not only to evaluate the cases in which offloading guarantees real advantages, on the basis of the application's requirements in terms of data transfer, computing power needed, etc., but also to evaluate whether user requirements (i.e. the cost of using the cloud, a required level of QoS, etc.) are satisfied. To this aim, this work presents a framework for generating models that take automatic decisions on the offloading of mobile applications using a genetic programming (GP) approach. The GP system is designed around a taxonomy of the properties relevant to the offloading process, concerning the user, the network, the data and the application. Finally, the fitness function adopted makes it possible to give different weights to the four categories considered during the process of building the model.
1 Introduction
The introduction of larger screens and the wide usage and availability of CPU-consuming and network-based mobile applications further reduce the battery life of mobile devices. Therefore, due to these problems and to the proliferation of mobile devices (i.e. tablets and smartphones), interest in trying to improve the limited life of their batteries has greatly increased. A possible solution to alleviate this problem is to offload part of the application, or the whole computation, to remote servers, as explained in [6], where software-based techniques for reducing program power consumption are analyzed, considering both static and dynamic information in order to move the computation to remote servers. In the last few years, the emergence of Cloud Computing technology and the consequent large availability of cloud servers [2] encouraged research into the usage of offloading techniques on cloud computing platforms. A number of papers were published trying to cope with the main issues of the offloading process, each mainly oriented toward a particular aspect, i.e. wifi issues [8], network behavior [12] and network bandwidth [15], the tradeoff between privacy and quality [9] and the effect of the context [1].
However, to the best of our knowledge, the works concerning this topic neither report a complete model taking into account both the hardware/software issues and the user's requirements, nor an automatic and adaptive model for taking the decision of performing the offloading. Indeed, the offloading technique could potentially improve both performance and energy consumption; however, it is an NP-hard problem to establish whether it is convenient to perform the migration, especially considering all the correlated problems such as network disconnections and variability, privacy and security of the data, variations of the load on the server, etc.

In this work, we try to overcome these limitations by designing a framework for modeling automatic offloading of mobile applications using a genetic programming approach. Logically, we divided the software architecture of the framework into two parts: a module for simulating the offloading process and an inference engine for building an automatic decision model on the same offloading process. Both of them are based on a taxonomy that defines four main categories concerning the offloading process: user, network, data and application. The simulator evaluates the performance of the offloading process of mobile applications on the basis of the user requirements, the conditions of the network, the hardware/software features of the mobile device and the characteristics of the application. The inference engine is used to generate decision trees able to model the decisions on the offloading process on the basis of the parameters contained in the categories defined by the taxonomy. It consists of a genetic programming environment, in which the functions are naturally defined by the taxonomy, and the fitness function is parametrically defined in a way that makes it possible to give a different weight to the cost, the time, the energy and the quality of service, depending on what matters most.

The rest of the paper is structured as follows. Section 2 presents the overall software architecture. In Section 3, the GP system and the taxonomy used to build the decision model for the offloading task are described. In Section 4, we survey existing works. Finally, Section 5 concludes the work.
2 The Framework for Simulating and Generating Decision Models
The main idea behind our system is using Genetic Programming to evolve models, in the form of decision trees, which will decide whether it is convenient to perform the offloading of a mobile application on the basis of the parameters and properties typical of the application, of the user and of the environment. The overall software architecture of the system is described in Figure 1. At the top of the architecture, the modules containing the data, which will be used by the other components of the system, are illustrated. The different modules constitute a set of data for each of the four categories considered. We refer the reader to Section 3.1 for a detailed description of these categories. It is out of the scope of this paper to describe the techniques used to estimate these components; we would just like to mention that, in the case of the parameters describing
Fig. 1. The overall software architecture of the system
the mobile devices, they can be found in the literature for some models and can be estimated using appropriate applications, such as PowerTutor, available in the Google Play store for Android-based mobile devices. The process of estimating the energy consumption for various types of applications and for multiple classes of devices is analogous to that presented in [11]. Afterwards, the sampler module will generate the training and the validation dataset, simply by randomly combining the data estimated by the above-mentioned models. These two datasets will be used respectively to generate and validate the decision models.

Analyzing the rest of the software architecture, we find the two main modules, used respectively for the simulation and for the inference of the mobile offloading model. The inference part of the designed system consists of a Genetic Programming module, which evolves a population of models able to decide the possible offloading of a mobile application. The chosen GP system is CAGE (CellulAr GEnetic programming), a scalable cellular implementation of GP [4]. One of the advantages of the chosen GP-based module is that it can run on parallel/distributed architectures, permitting time-saving in the most expensive phase of the training process, described in the following. Indeed, each single model of the GP population represents a decision tree able to decide a strategy for the offloading and must be evaluated using the simulation module.
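As a rough illustration of the sampling step, the sketch below (Python; the parameter names and value ranges are invented for the example and are not taken from the paper) draws random combinations of discretized values from the four categories to build dataset tuples; the actual framework derives these values from the estimation models mentioned above.

```python
import random

# Hypothetical discretized value sets for a few parameters per category; the
# real framework estimates them from device models and profiling tools such as
# PowerTutor, and may use a different (and larger) parameter set.
PARAMETER_SPACE = {
    "application": {"avg_exec_time": ["low", "moderate", "high"],
                    "memory": ["low", "moderate", "high"],
                    "comm_ratio": ["low", "moderate", "high"]},
    "user": {"mobility": ["low", "moderate", "high"],
             "urgency": ["low", "moderate", "high"],
             "privacy": ["low", "moderate", "high"],
             "cost": ["low", "moderate", "high"]},
    "network": {"bandwidth": ["low", "moderate", "high"],
                "latency": ["low", "moderate", "high"],
                "type": ["wifi", "3g", "2g"]},
    "device": {"battery_level": ["low", "moderate", "high"],
               "cpu_load": ["low", "moderate", "high"]},
}

def sample_tuple():
    """Randomly combine one value per parameter into a single dataset tuple."""
    return {category: {name: random.choice(values) for name, values in params.items()}
            for category, params in PARAMETER_SPACE.items()}

# The sampler produces separate training and validation datasets.
training_set = [sample_tuple() for _ in range(1000)]
validation_set = [sample_tuple() for _ in range(200)]
```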
The simulation module consists of the GreenCloud simulator (simulating the cloud part of the offloading process) together with a mobile simulator built ad hoc, modeling the mobile device's behavior. In practice, each model generated by the GP module is passed to the simulation module, which performs the fitness evaluation directly on the basis of the results obtained by simulating the model on the training dataset. At the end of the process, the best model (or the best models) will constitute the rules adopted by the offloading engine, which will decide whether an application must be offloaded, considering the defined conditions (user requirements, bandwidth, characteristics of the mobile device and so on). All these models must be validated using the simulation engine with the validation dataset; if the result of this evaluation exceeds a predefined threshold, the model is added to a GP model repository for future use.
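A minimal sketch of this interaction is given below; the `gp_engine`, `simulator`, and `repository` interfaces are hypothetical placeholders, not actual CAGE or GreenCloud APIs. Each candidate decision tree is scored by simulating it on the training set, and only a best model that passes validation is stored in the repository.

```python
VALIDATION_THRESHOLD = 0.8  # assumed value; the paper only mentions a predefined threshold

def evolve_offloading_model(gp_engine, simulator, training_set, validation_set,
                            repository, generations=50):
    """Evolve decision trees whose fitness is computed by the simulation module."""
    for _ in range(generations):
        for model in gp_engine.population():
            # The simulation module runs the candidate model on every training
            # tuple and returns the aggregate fitness (energy, time, cost, QoS).
            model.fitness = simulator.evaluate(model, training_set)
        gp_engine.next_generation()  # selection, crossover and mutation

    best = gp_engine.best_model()
    # Validate the best model on unseen tuples before adding it to the repository.
    if simulator.evaluate(best, validation_set) >= VALIDATION_THRESHOLD:
        repository.add(best)
    return best
```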
3 Designing an Efficient GP System for Automatic Mobile Offloading
In this section, the GP system and its design are described. The GP system chosen for generating the rules used to perform the offloading is the parallel Genetic Programming tool CAGE [4]. The motivation behind this choice is that this system can run on parallel/distributed architectures, permitting time-saving in the most expensive phase of the training process; in addition, GP supplies adaptivity and the possibility of working with little knowledge of the domain, which is really useful for this particular aim. As usual, in order to use GP for a particular domain, it is sufficient to choose an appropriate terminal and function set and to define a fitness function. We chose a typical approach to GP for generating decision trees, choosing as terminals simply the two answers, yes or no, to the question "Is the process offloadable?". In the following subsection, the taxonomy of the main parameters/properties of the offloading process is described. This taxonomy will be used to define the functions of the GP module, while the terminals are constituted by the final decision on the offloading process. In Subsection 3.2, the way in which the fitness is evaluated is described.
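To make this representation concrete, the following sketch (our own illustration, not code from CAGE) encodes an individual as a decision tree whose internal nodes test a discretized taxonomy parameter and whose leaves carry the yes/no answer to the question above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A decision-tree node: internal nodes test a parameter, leaves hold the answer."""
    parameter: Optional[str] = None      # e.g. "bandwidth" (a taxonomy function)
    value: Optional[str] = None          # e.g. "high"
    yes_branch: Optional["Node"] = None  # followed when the test holds
    no_branch: Optional["Node"] = None   # followed otherwise
    offload: Optional[bool] = None       # terminal answer: offload or not

def decide(node: Node, case: dict) -> bool:
    """Walk the tree for one parameter tuple and return the offloading decision."""
    if node.parameter is None:           # leaf (terminal)
        return node.offload
    branch = node.yes_branch if case.get(node.parameter) == node.value else node.no_branch
    return decide(branch, case)

# A tiny hand-built individual: offload only when the bandwidth is high.
tree = Node(parameter="bandwidth", value="high",
            yes_branch=Node(offload=True), no_branch=Node(offload=False))
print(decide(tree, {"bandwidth": "high"}))  # -> True
```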
3.1 A Taxonomy of the Main Properties for the Offloading Process
This section is devoted to the definition of a taxonomy of the parameters and properties which the GP module will use to build the model that decides the offloading strategy. Our taxonomy divides the parameters into four different categories: Application (i.e. the parameters associated with the application itself), User (the parameters desired and imposed by the user's needs), Network (i.e. parameters concerning the type and the state of the network), and Device (i.e. the parameters depending only on the hardware/software features of the devices). In practice, the decision model built by the GP module of our architecture will decide whether or not to offload on the basis of the different parameters associated with
these categories. It is worth noticing that many parameters could be more detailed and complex and others could be added; however, our taxonomy does not pretend to be exhaustive, as we tried to simplify the model and did not consider particular aspects which should not significantly influence the offloading process. In the following, we give a short description of the parameters chosen for each defined category. Where not differently specified, the values of the parameters have been discretized into ranges, using discrete values such as low, moderate, high, etc.

Application. The parameters associated with this category consider features which are typical of the application to be executed on the mobile device, trying to characterize all the aspects useful to take a decision on the offloading process.

Average Execution Time. The average execution time of the application, measured for different typical dimensions of the input.

Memory. The size of the code and of the data of the application.

Local Computation/Communication Ratio. A value expressing whether the application devotes most of its execution time to local computation or to using the network (0 indicates an application performing only local computation, 1 an application continuously interacting with the network).

Probability of Interruption. A mobile application could be interrupted for different reasons: system crash, user interruption, unavailability of a necessary resource (network, GPS, etc.). This parameter represents the probability that the application is interrupted before the end of the process.

User Requirements. This class of parameters considers the needs and the behavior of the user of the mobile device, modeling the different categories of users with their different styles.

Mobility. Every user has different behaviors and habits in terms of mobility. Some users spend most of their time in the same place (typically at home and at work), while others move frequently between different places; this should be considered in the process of offloading because of the continuous changes in the network used, the need for a longer battery duration and so on. This parameter models the probability of mobility of the user.

Urgency. This parameter models the urgency or the priority with which the user wants to obtain the partial/final results of the application. A user who is impatient to get results could prefer a greater consumption of the battery to a longer waiting time.

Privacy Sensitivity. People are understandably sensitive about how an application captures their data and how the data are used. However, different users can
exhibit different degrees of sensitivity, ranging from the paranoid to the overly confident user. In our context, choosing privacy-preserving solutions can degrade the performance of the offloading process, mainly because of the difficulty of moving the data or the additional cost of adding protection.

Cost. The cost a user can/wants to pay is a fundamental parameter for the offloading decision. In fact, the cost of the cloud platform is usually correlated with the time and the resources used.

Network. The network plays a fundamental role in the process of offloading, as it determines the speed of the process and limits the possibility of exchanging data in real time. In fact, an application needing to exchange a stream of data cannot be offloaded if the network does not permit a fast transfer of the data.

QoS of the Network. We consider as Quality of Service of the network a parameter estimating the reliability and stability of the network.

Bandwidth. The bandwidth of the network is an important parameter to establish whether the offloading of large applications/data can be executed quickly.

Latency. The latency of the network is relevant when small quantities of data must be exchanged during the offloading process.

Type of Network. This parameter models the type of network available, i.e. WiFi, 3G, 2G, etc. The type of network is crucial for saving energy; for instance, a device transmitting data over WiFi consumes less battery than over 3G.

Mobile Device. The last class of parameters we consider is pertinent to the mobile device. Analyzing the state and the characteristics of the mobile device is essential in order to drive the process of offloading. In the following we list the most important parameters to be considered.

Battery Level. The battery charge level is of fundamental importance, as, when it is low, the process of offloading can be encouraged.

CPU Load and Memory Availability. The load of the CPU and the available memory are two important parameters, which can influence the choice of offloading rather than execution on local resources.

Connectivity. This parameter represents the strength of the network signal detected by the mobile device (obviously it depends both on the device and on the quality of the network).

Example of Rules. After the description of the different parameters of the four categories defined above, we give two examples of rules which could be extracted from our system
concerning two typical applications, i.e. a password manager and a chess game, shown in Table 1. Note that, although our framework generates decision trees, we choose to show them in terms of if ... then rules, for the sake of clarity.

Table 1. Example of rules generated for a chess game and for a password manager application

Password Manager: if memory is high and privacy is high and bandwidth is low then execute locally
Chess Game: if privacy is low and bandwidth is high and cost is moderate then perform offloading
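Purely as an illustration (the flat dictionary of discretized values below is ours, not the framework's actual tuple format), the two rules of Table 1 could be encoded and applied as follows.

```python
def password_manager_rule(case: dict) -> str:
    # if memory is high and privacy is high and bandwidth is low then execute locally
    if case["memory"] == "high" and case["privacy"] == "high" and case["bandwidth"] == "low":
        return "execute locally"
    return "no decision"

def chess_game_rule(case: dict) -> str:
    # if privacy is low and bandwidth is high and cost is moderate then perform offloading
    if case["privacy"] == "low" and case["bandwidth"] == "high" and case["cost"] == "moderate":
        return "perform offloading"
    return "no decision"

case = {"memory": "high", "privacy": "high", "bandwidth": "low", "cost": "moderate"}
print(password_manager_rule(case))  # -> execute locally
```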
3.2 Fitness Evaluation
This subsection defines the fitness function used in the GP inference engine. We modeled it with a simple equation, described in the following. First of all, we define three normalized functions, representing respectively the energy, the time and the cost saved during the process of offloading (actually the latter is a negative value, as it is a cost, not a saving): $S_{energy}$, $S_{time}$ and $S_{cost}$.

$S_{energy} = \frac{E_{local} - E_{offload}}{\max(E_{offload}, E_{local})}$, i.e. the ratio between the energy saved by executing the process on remote servers and the energy necessary to perform the offloading. The energy is computed in accordance with the analysis defined in [7].

$S_{time} = \frac{T_{local} - T_{offload}}{\max(T_{offload}, T_{local})}$, i.e. the ratio between the time saved by executing the process on remote servers and the time necessary to perform the offloading.

Differently, the cost function is computed as $S_{cost} = -\frac{C_{offload}}{C_{sup}}$, i.e. the ratio between the cost due to the remote execution and a parameter $C_{sup}$ defining a cost threshold (if the cost exceeds $C_{sup}$, $S_{cost}$ is set to $-1$).

Finally, the fitness is computed as the weighted sum of the three functions described above, using three positive parameters ($a$, $b$, $c$) modeling the importance we want to give respectively to the energy saving, the time saving and the cost saving. Considering an element $T_i$ (representing an application running on a determined device, with a required QoS, etc.) of the training set $T$ composed of $n$ tuples, the fitness of this element is computed as $f(T_i) = a \cdot S_{energy} + b \cdot S_{time} + c \cdot S_{cost}$, and consequently the total fitness is given by $f_{tot} = \sum_{i=1}^{n} f(T_i) - d \cdot QoS$, where the term $QoS$ represents the ratio between the number of cases (tuples) of the dataset which do not respect the QoS constraints and the total number of tuples. In practice, a penalty is added to the fitness function for each case in which the QoS is not guaranteed. A fourth parameter $d$ is used to give a particular weight to the QoS.
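A direct transcription of these formulas into code could look like the sketch below, assuming the simulation module supplies, for each tuple, the local and offloaded energy, time and cost estimates together with a QoS flag (the field names are ours, not the framework's).

```python
def saving_terms(e_local, e_offload, t_local, t_offload, c_offload, c_sup):
    """Normalized energy, time and cost terms for a single tuple."""
    s_energy = (e_local - e_offload) / max(e_offload, e_local)
    s_time = (t_local - t_offload) / max(t_offload, t_local)
    s_cost = max(-c_offload / c_sup, -1.0)  # S_cost is capped at -1 when the cost exceeds C_sup
    return s_energy, s_time, s_cost

def total_fitness(tuples, a, b, c, d, c_sup):
    """f_tot = sum_i (a*S_energy + b*S_time + c*S_cost) - d*QoS."""
    total, qos_violations = 0.0, 0
    for t in tuples:
        s_e, s_t, s_c = saving_terms(t["e_local"], t["e_offload"],
                                     t["t_local"], t["t_offload"],
                                     t["c_offload"], c_sup)
        total += a * s_e + b * s_t + c * s_c
        if not t["qos_satisfied"]:
            qos_violations += 1
    # QoS is the fraction of tuples that violate the QoS constraints.
    return total - d * (qos_violations / len(tuples))
```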
4 Related Works
Analyzing the works in the literature concerning the offloading of mobile applications, the problem of finding an automatic methodology to perform offloading has not been explored much. A paper introducing general issues of the offloading process was written by Kumar and Lu [7]. The authors analyze the main problems deriving from offloading mobile applications onto the Cloud, such as privacy, costs and energy consumption, and show which applications can benefit from this approach. They introduce a simple model for deciding whether it is convenient to perform the offloading, and they try to apply the technique only to computationally expensive functions while computing other tasks locally.

Other papers are devoted to the utility of performing offloading based on some criteria, i.e. energy consumption, costs, network usage, etc. For instance, in [10] the decision of performing the offloading is based on the difference between the energy used if the task were executed locally on the mobile device or remotely on the cloud servers. The power consumption of the local execution is estimated by counting the CPU cycles, while the remote execution is calculated considering only the network usage (data transfer). Our model is more sophisticated, as it also considers the hardware components used during computation and the issues concerning the transfer of the data (i.e. CPU, WiFi, 3G, display, system, etc.).

In [6] a two-step method is used. First, a database of application power usage is built through standard profiling techniques. Then, the authors exploit the property, stated in the paper, that for a limited class of applications (i.e. applications in which the cost depends only on the scale of the input) the algorithmic complexity combined with the profiling can be used to predict the energy cost of the execution of the application itself. Unfortunately, real-world applications can hardly be modeled considering only their input.

Many papers are devoted to techniques and strategies to ease the process of offloading by analyzing the code of the application or optimizing some energy-consuming processes, i.e. the acquisition of the GPS signal. For instance, Saarinen et al. [14] analyze the application source code and identify methods presenting hardware and/or software constraints which do not permit the offloading. In addition, they also consider traffic patterns and power saving modes. However, the work does not consider network conditions and user requirements. Note that these approaches are orthogonal to our work and can be adopted in order to optimize some phases of the offloading process.

Spectra [3] is a remote execution system that monitors application resource usage and the availability of resources on the device and dynamically chooses how and where to execute application tasks. The framework provides APIs to developers to build applications compatible with the defined architecture. Xray [13] is an automatic system which profiles an application and decides what to offload and when offloading a computation is useful. The Xray profiling stage observes application and system events (GUI, sensors, GPS, memory, CPU) and identifies remotable methods. If a method does not use local resources then it is remotable. Differently
from these two systems, our framework is not method-based but considers the entire application, and the decision to perform the offloading is based not only on the application characteristics (size of data, privacy concerns) but also on the system status (battery, 3G or WiFi connection) and on some constraints requested by the user.

Gu et al. [5] extend an offloading system with an offloading inference engine (OLIE). OLIE solves two problems: first, it decides when to trigger the offloading action; second, it selects a partitioning policy that decides which objects should be offloaded and which pulled back during an offloading action. The main focus of OLIE is to overcome the memory limitations of a mobile device. To make a decision, OLIE considers the available memory and the network conditions (i.e. bandwidth, delay). To achieve a more powerful triggering system, OLIE uses a Fuzzy Control model with rules specified by the system and by the application developers. Similarly to our approach, this work is based on an evolutionary algorithm to automate the decision of offloading a task in order to save energy. However, it only considers the hardware/software characteristics of the application and of the device, and not the user's requirements and the QoS.
5 Conclusions and Future Work
This work presents an automatic approach for generating models to take decisions on the process of offloading mobile applications on the basis of the user requirements, the conditions of the network, the hardware/software features of the mobile device and the characteristics of the application. The system constitutes a general framework for testing offloading algorithms and includes a mobile simulator and an inference engine. The latter is a genetic programming module, in which the functions are the parameters of an ad-hoc designed taxonomy, which includes everything that can influence the process of offloading and must be considered during the decision process. The fitness function is parametrically defined in a way that makes it possible to give a different weight to the cost, the time, the energy and the quality of service, depending on what matters most. In future work, the framework will be tested with real datasets in order to verify whether the models work in real environments. Finally, the taxonomy will be refined on the basis of the results obtained from the experiments.

Acknowledgments. This research work has been partially funded by the MIUR project FRAME, PON01-02477.
References

1. Abowd, G.D., Dey, A.K.: Towards a Better Understanding of Context and Context-Awareness. In: Gellersen, H.-W. (ed.) HUC 1999. LNCS, vol. 1707, pp. 304–307. Springer, Heidelberg (1999)
2. Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J., Brandic, I.: Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Gener. Comput. Syst. 25(6), 599–616 (2009)
3. Flinn, J., Park, S., Satyanarayanan, M.: Balancing performance, energy, and quality in pervasive computing. In: ICDCS, pp. 217–226 (2002)
4. Folino, G., Pizzuti, C., Spezzano, G.: A scalable cellular implementation of parallel genetic programming. IEEE Transactions on Evolutionary Computation 7(1), 37–53 (2003)
5. Gu, X., Nahrstedt, K., Messer, A., Greenberg, I., Milojicic, D.S.: Adaptive offloading inference for delivering applications in pervasive computing environments. In: Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), Fort Worth, Texas, USA, March 23-26, pp. 107–114 (2003)
6. Gurun, S., Krintz, C.: Addressing the energy crisis in mobile computing with developing power aware software. UCSB Technical Report, UCSB Computer Science Department (2003)
7. Kumar, K., Lu, Y.-H.: Cloud computing for mobile users: Can offloading computation save energy? IEEE Computer 43(4), 51–56 (2010)
8. Lee, K., Rhee, I., Lee, J., Chong, S., Yi, Y.: Mobile data offloading: how much can wifi deliver? In: CoNEXT 2010, Philadelphia, PA, USA, November 30 - December 3, p. 26. ACM (2010)
9. Liu, J., Kumar, K., Lu, Y.-H.: Tradeoff between energy savings and privacy protection in computation offloading. In: Proceedings of the 2010 International Symposium on Low Power Electronics and Design, Austin, Texas, USA, August 18-20, pp. 213–218. ACM (2010)
10. Miettinen, A.P., Nurminen, J.K.: Energy efficiency of mobile clients in cloud computing. In: Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, HotCloud 2010. USENIX Association, Berkeley (2010)
11. Namboodiri, V., Ghose, T.: To cloud or not to cloud: A mobile device perspective on energy consumption of applications. In: WOWMOM, pp. 1–9 (2012)
12. Ortiz, A., Ortega, J., Díaz, A.F., Prieto, A.: Modeling network behaviour by full-system simulation. JSW 2(2), 11–18 (2007)
13. Pathak, A., Hu, Y.C., Zhang, M., Bahl, P., Wang, Y.-M.: Enabling automatic offloading of resource-intensive smartphone applications. Technical report, Purdue University (2011)
14. Saarinen, A., Siekkinen, M., Xiao, Y., Nurminen, J.K., Kemppainen, M., Hui, P.: Offloadable apps using SmartDiet: Towards an analysis toolkit for mobile application developers. CoRR, abs/1111.3806 (2011)
15. Wolski, R., Gurun, S., Krintz, C., Nurmi, D.: Using bandwidth data to make computation offloading decisions. In: 22nd IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2008), Miami, Florida, USA, April 14-18, pp. 1–8. IEEE (2008)