Received March 9, 2018, accepted April 17, 2018, date of publication April 24, 2018, date of current version May 16, 2018. Digital Object Identifier 10.1109/ACCESS.2018.2829840

The W5 Framework for Computation Offloading in the Internet of Things

HANAN H. ELAZHARY 1,2, (Member, IEEE), AND SAHAR F. SABBEH 3,4, (Member, IEEE)

1 Computer Science Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Computers and Systems Department, Electronics Research Institute, Cairo 12622, Egypt
3 Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
4 Information Systems Department, Faculty of Computing and Information Sciences, Banha University, Banha 13511, Egypt

Corresponding author: Hanan H. Elazhary ([email protected])

ABSTRACT The Internet of Things (IoT) aims at providing things in the world with Internet connectivity. The goal is to facilitate communication among such IoT devices in addition to accessing and/or controlling them as needed. IoT devices are typically smart devices with constrained resources. Accordingly, computation offloading from such clients to more powerful servers is anticipated. Mobile cloud computing was first proposed for computation offloading from mobile devices, especially smartphones, to the cloud. Nevertheless, this scenario proved to be unsuitable for latency-sensitive applications such as multimedia applications due to the relatively long distance between the mobile clients and the cloud data centers. Accordingly, cloudlets and mobile clouds, which are ideally located at the edge of the Internet within one hop from mobile clients, have been proposed as alternative solutions. Recently, other forms of edge computing such as micro and nano data centers, mini and micro clouds, fog computing, and mobile edge computing have also been proposed for both mobile and IoT devices. In this paper, we formulate offloading in the IoT as a decision problem concerned with what, where, who, when/why, and how to offload. Accordingly, we present the W5 framework, which takes those factors into consideration and so can be used as a reference model to guide researchers in this area. As an example, we developed a client-based agent for three computation offloading scenarios among constrained smartphones in the IoT. Through detailed discussion and an extensive set of experiments, we attempt to answer the above wh-questions.

INDEX TERMS Computation offloading, edge computing, Internet of Things, mobile cloud computing.

I. INTRODUCTION

The Internet of Things (IoT) was first proposed by Kevin Ashton, who envisioned Radio Frequency Identification Systems (RFIDs) and sensors provided with Internet connectivity to facilitate automated extraction of data from them [1]. IoT devices are expected to be smart devices with constrained resources. Accordingly, computation offloading from such clients to more powerful servers is anticipated. Computation offloading has been extensively studied from constrained mobile devices to the cloud within the context of mobile cloud computing [2]. Nevertheless, offloading to the cloud proved to be unsuitable for latency-sensitive applications such as multimedia applications due to the relatively long distance between the mobile clients and the cloud data centers. Hence, cloudlets and mobile clouds, which are ideally located at the edge of the Internet within one hop from the mobile clients, have been proposed. A cloudlet [3] has been described as ‘a datacenter in a box.’ It is essentially a cluster of multicore


processors with massive gigabit internal connectivity. One of its prominent characteristics is that it should be self-managed since it is expected to be owned by local businesses. It has been proposed for hostile environments such as disaster locations and battlefields. Mobile clouds [4], on the other hand, are formed partially or fully from a set of mobile devices with unused resources that can be exploited for this purpose. Recently, the above and other forms of edge computing have been proposed for both mobile and IoT devices. They include micro and nano data centers. A micro data center [5] is formed of one rack of servers or less, while a nano data center [6] is a much smaller infrastructure such as a home gateway. An essential characteristic of either micro or nano data centers is that they should be formally managed by service providers. Mini and micro clouds, on the other hand, are self-managed similar to cloudlets. A mini cloud [7] is a cluster of computers within a single Local Area Network (LAN), while a micro cloud [8] is formed of a set of small portable




computers and can be used indoors and outdoors. Fog computing has been defined as ‘Internet mobility, ad hoc networking, middleware and the more pervasive availability of computing and storage resources, close to the edge of the network’ [9]. In other words, it is an alternative term for edge computing, and so the two terms are frequently used interchangeably in the literature [10]. It has been realized using various technologies such as micro clouds and nano data centers [8], [10]. Finally, Mobile Edge Computing/Cloud (MEC) [11] refers to edge computing in which servers are directly connected to mobile base stations.

In this paper, we introduce the W5 computation offloading framework for all constrained devices (including mobile devices) in the IoT. In this framework, we formulate the offloading problem as a decision problem concerned with five different wh-questions. The first question is about what to offload. In this respect, offloading techniques can be classified into application migration, record/replay, distributed processing, and Remote Procedure Call (RPC) techniques. In application migration, the whole application migrates from the client to the server. In case of record/replay, on the other hand, recorded input events at the client are replayed at the server on an obsolete copy of the application to bring it to the state of the client application at the time of offloading. In case of distributed processing, as the name implies, the application runs in a distributed fashion on the client and the server. Finally, in RPC techniques, a copy or merely the offloadable portion of the application resides on the server and remote procedures are called as needed. The second question is concerned with where to offload; in other words, whether offloading should take place to the cloud or to any of the above edge computing platforms. The third question deals with who is responsible for offloading; in other words, whether the client is responsible for the process or whether a broker should do it on its behalf. The fourth question deals with two correlated factors, which are when and why offloading should take place. This is important because offloading is not always advantageous [2]. While it is intended for saving the client's resources, the incurred overhead may itself affect those resources. Finally, the fifth question is about how offloading can take place. This is a question related to the design of the offloading technique. Various possible offloading scenarios are discussed within the context of the framework. According to the above discussion, the W5 framework can be used as a reference model to guide research studies in computation offloading in the IoT. As an example, we developed a client-based agent for three what scenarios of computation offloading among constrained smartphones in the IoT. Through an extensive set of experiments, we attempt to answer the other wh-questions of the framework.

The rest of the paper is organized as follows: Section II presents the W5 framework, explaining various possible offloading scenarios. Section III discusses various computation offloading techniques in the literature, which implicitly address the wh-questions of the framework. An overview

of the developed prototype is presented in Section IV. The details of the experiments and the experimental results are provided in Sections V and VI, respectively. Finally, the conclusion and future work are presented in Section VII.

II. THE PROPOSED FRAMEWORK

FIGURE 1. The W5 framework for computation offloading in the IoT considering the who and where questions; the term ‘edge’ refers to all forms of unconstrained edge technologies.

This section describes the proposed W5 framework for computation offloading in the IoT. As shown in Figure 1, offloading from a constrained mobile device can take place directly to the cloud, but the same is not true in case of a typical constrained IoT device. Actually, to the best of our knowledge, the cloud was never proposed for direct offloading from IoT devices. On the other hand, both can offload to the various unconstrained edge servers (edge technologies excluding mobile clouds and IoT clouds). Those servers are ideally located within one hop from the client devices and are typically assumed to be a middle tier of a three-tier architecture including the client and the cloud. Thus, offloading can take place to an edge server, from where computation can be offloaded to the cloud as needed, for example in case the computation is resource intensive and latency-insensitive. Nevertheless, in case of IoT devices a broker such as a gateway may be needed in order to manage offloading to the edge servers on behalf of the devices. As shown in the figure, offloading can take place as needed among constrained devices, whether mobile devices or IoT devices in the broad sense. This is based on the assumption that they are compatible and in close proximity to each other to be able to communicate directly. Otherwise, a broker may be needed to manage offloading. The broker may also be optionally added to free the client from the offloading process setup and management. In case of IoT devices, the broker may be a gateway acting as an interface between the IoT devices and the Internet. It is worth emphasizing that in this context, offloading should always take place from a weak device to a strong device. This is not trivial since a stronger mobile can become weaker when it is running out of power or other computing resources [12].

Those various possible scenarios entail that any developed offloading technique in the IoT should specify where offloading takes place and who is responsible for the process (the client or a broker). One of the eight scenarios in the figure should be specified. Considering also the what question, we enumerate at least forty different scenarios as shown in Figure 2. Those scenarios can be exploited as a reference model to guide future research. The answers to the how and when/why questions are left to the developers depending on the nature of their applications. The when/why question is in fact an excellent potential research point. As shown in Figure 2, according to the where question, offloading can be classified roughly into offloading to constrained devices and offloading to other unconstrained edge servers. According to the who question, each of the above classes may be further classified into two subclasses: computation offloading may be done by an agent that resides on the client, or an independent external broker may be responsible for offloading on behalf of the constrained clients. This is also a rough classification since, for example, the client agent may be integrated with the offloadable application (part of the application) or may be an independent background service that serves many applications. Offloading techniques can be further classified according to the what question into four subclasses, namely application migration, record/replay, distributed processing, and RPC. Accordingly, under each of the above four where/who contexts, ten possible scenarios are enumerated. As shown in the figure, they sum up to a total of forty different possible offloading scenarios.

In case of application migration, an inherent assumption is that neither the application nor the data reside on the server and so both should migrate.

FIGURE 2. Possible offloading scenarios within the context of the W5 framework considering the where, who and what questions.

In Scenario 1, a migrated batch application executes remotely using the migrated data and the result returns to the client. In Scenario 2, on the other hand, the migrated application is an interactive application. Hence, the user should be able to log in remotely to the server and access the migrated application to process the migrated data using a software tool such as TeamViewer [13], unless the server is intended to replace the client.

In case of record/replay, there are three possible scenarios. In Scenario 3, both the application and data reside on the server. In practice, the application might be a standard application located by default on the server together with standard data, or their existence may be confirmed using a handshaking protocol. In Scenario 4, on the other hand, we assume that only the application resides on the server. Hence, when offloading is anticipated, data has to be sent to the server in advance. In Scenario 5, neither the application nor the data reside on the server. Accordingly, both have to be sent there in advance. This scenario is similar to Scenario 2 except that offloading is delayed. In all cases, all user interactions with the application (such as button click, menu item selection, text change, and focus change) at the client are recorded. Data at the server may be periodically synchronized by replaying those events. In this case, all recorded events prior to a given synchronization are discarded and new events are recorded. Upon offloading, recorded events at the client are sent to the server to be replayed, bringing the application on the server to the state of that on the client. As in Scenario 2, after offloading, the user should be able to log in remotely and access the application to process the data using a software tool such as TeamViewer unless the server is intended to replace the client.

In case of distributed processing, there are three possible scenarios. In Scenario 6, the core of the application and data are sent to the server and the client becomes a thin client through which the user interacts with the application on the server. Scenario 7 is similar to Scenario 6 except that offloading is delayed. Hence, all user interactions are recorded to be replayed at the server periodically and/or upon offloading. In Scenario 8, on the other hand, classical distributed processing techniques may be employed.

Finally, in case of RPC, there are two possible scenarios. In Scenario 9, one obvious assumption is that the application resides on the server. All that needs to be done is to send data to the server to be processed and the result returns to the client. Again, in practice, the application might be a standard application located by default on the server or its existence may be confirmed using a handshaking protocol. In fact, only offloadable parts of the application need to reside on the server. Finally, in Scenario 10, the application is sent to the server in advance, when offloading is anticipated, to be used afterwards for RPC as needed.

It is worth noting that virtual machine (VM) migration is not suitable for constrained devices due to the relatively large size of a VM. Accordingly, this offloading scenario is not considered in the proposed framework and the corresponding classification.
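To make the above classification concrete, the following minimal sketch (an illustration of ours, not part of the framework itself; all class and constant names are hypothetical) encodes an offloading scenario as a combination of the where, who, and what dimensions together with its scenario number from Figure 2.

// Illustrative sketch only: one way to encode the W5 classification dimensions.
// The enum constants and class names are our own and not prescribed by the framework.
public final class OffloadingScenario {

    public enum Where { CONSTRAINED_DEVICE, UNCONSTRAINED_EDGE_SERVER }

    public enum Who { CLIENT_AGENT, EXTERNAL_BROKER }

    public enum What { APPLICATION_MIGRATION, RECORD_REPLAY, DISTRIBUTED_PROCESSING, RPC }

    private final Where where;
    private final Who who;
    private final What what;
    private final int scenarioId; // 1..10 within each where/who context, as in Figure 2

    public OffloadingScenario(Where where, Who who, What what, int scenarioId) {
        this.where = where;
        this.who = who;
        this.what = what;
        this.scenarioId = scenarioId;
    }

    @Override
    public String toString() {
        return String.format("Scenario %d: what=%s, where=%s, who=%s",
                scenarioId, what, where, who);
    }

    public static void main(String[] args) {
        // Scenario 9 of our prototype: RPC to another constrained device, managed by the client itself.
        System.out.println(new OffloadingScenario(
                Where.CONSTRAINED_DEVICE, Who.CLIENT_AGENT, What.RPC, 9));
    }
}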


TABLE 1. Addressing the different wh-questions in example computation offloading techniques.

III. W5 QUESTIONS

In this section, we discuss example research studies in the literature classified according to the what question of the W5 computation offloading framework, with emphasis on offloading from constrained mobile and IoT clients whenever relevant. This is followed by a discussion of how some of them address the other wh-questions. Additional research studies are also considered in this discussion as needed. A comparison among example offloading techniques consistent with the W5 framework is provided in Table 1.

A. WHAT QUESTIONS

As previously noted, computation offloading techniques can be broadly classified according to the what question into application migration techniques, record/replay techniques, distributed processing techniques, and RPC techniques. In the following subsections, we present example research studies on computation offloading based on each of those techniques. It is worth emphasizing that we refer to the offloading source as the client and the offloading destination as the server.

1) APPLICATION MIGRATION TECHNIQUES

Application migration is generally achieved through migration of VMs and/or applications from the client to the server. For example, in the Internet Suspend/Resume (ISR) system [14], VMs migrate from mobile laptops to a remote server or from one server to another based on distributed storage. A VM acts as a parcel encapsulating both a user-customized operating system and an application. Such a parcel can be suspended, migrated, and resumed as needed. Live VM migration has also been proposed over a WAN in such environments [15]. In live VM migration, the VM is not suspended and resumed. Alternatively, changes in the client VM are continually conveyed to the other VM on the server. It was shown that in a WAN, live VM migration faces numerous challenges such as long latency and variable or limited bandwidth. It is worth emphasizing that VM migration is not suitable for constrained mobile and IoT devices, because the size of a typical VM is too large for them. Hence, explicit application migration is the only viable option based on the current technology.

2) RECORD/REPLAY TECHNIQUES

As previously noted, the idea of record/replay techniques is to have copies of an application on both the client and the server. All execution events of an application on the client are recorded such that they can be replayed on the server in order to obtain similar copies of the application on both platforms. Deterministic replay is concerned with ensuring an exact copy of an application after replay by addressing all nondeterministic factors [16]. Opportunistic replay, on the other hand, is a relaxed variant that is concerned with merely recording the user's interactions at the client and replaying them at the server [17]. It is based on the fact that replay does not need to be exact in order to be useful. Applying deterministic replay to constrained mobile and IoT devices is a challenging task due to numerous inherent factors such as the differences in hardware and operating system between the client device and the server [18]. Accordingly, Hung et al. [19] attempted to apply opportunistic replay to such devices. Specifically, they considered offloading from constrained mobile devices to the cloud based on this approach. In the proposed framework, inserted pseudo checkpoints mark locations at which input events from the user should be recorded by the client. If the application is suspended, offloaded, and resumed at the server, it can restart from the most recent pseudo checkpoint, replaying recorded events until it reaches the state at which it was migrated. This approach has been proposed for offloading mobile applications to the cloud on a VM that is as close to the mobile environment as possible. It is worth noting that, due to the limitations of deterministic replay discussed above, we only consider opportunistic replay in our proposed framework.
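As an illustration of the pseudo-checkpoint idea only (a hypothetical sketch of ours, not the implementation of Hung et al. [19]), the client could keep a log of user input events that is cleared at each pseudo checkpoint; upon offloading, the events recorded since the most recent checkpoint are what the server replays.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of opportunistic replay with pseudo checkpoints.
// Class and method names are illustrative and not taken from [19].
public class OpportunisticReplayLog {

    private final Deque<String> eventsSinceCheckpoint = new ArrayDeque<>();

    // Called at an inserted pseudo checkpoint: earlier events are no longer needed.
    public void pseudoCheckpoint() {
        eventsSinceCheckpoint.clear();
    }

    // Called for every recorded user input event (e.g., "click:saveButton").
    public void record(String event) {
        eventsSinceCheckpoint.addLast(event);
    }

    // On offloading, the events recorded since the last checkpoint are sent to the
    // server, which replays them on its copy of the application.
    public List<String> eventsToReplay() {
        return new ArrayList<>(eventsSinceCheckpoint);
    }
}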

3) DISTRIBUTED PROCESSING TECHNIQUES

As previously noted, applications can be partitioned between a client and a server in a distributed fashion such that only a portion of an application on the client migrates to a remote server and the result of any remote execution returns to the client to be integrated with the result of the local execution. For example, in CloneCloud [20], an Android mobile application can be partitioned such that a thread can migrate to a clone on the cloud to be executed and then the migrated thread is reintegrated to the mobile device. In the Java Virtual Machine (JVM), a new stack is created for each thread. Whenever a thread invokes a method, a new frame is created and pushed onto the corresponding stack. The method uses its frame for storing temporary data including parameters and local variables, and instructions operate on the frame data [21]. In the eXtensible Cloud (eXCloud) middleware [22], the top stack frame can migrate to the server and the corresponding code and heap data can also migrate as needed. The execution results then return to the mobile in order to be integrated with the results of any local executions. Frames can also migrate in parallel to multiple VMs for more rapid execution.

4) REMOTE PROCEDURE CALL TECHNIQUES

As previously noted, offloading can also take place in a RPC fashion. In this case, a copy of the application resides on the server and offloadable methods are invoked remotely as needed. For example, MobiByte [23] assumes a mobile application is composed of a set of independent components and that each component is invoked by an input resulting in a corresponding output. Some components have to run locally such as those that access the sensors and implement the user interface. All the other components reside on the mobile and the cloud. Three offloading objectives (performance improvement, energy efficiency and execution under limited resources) are provided to the programmer to select from. A context-aware mobile application changes its behavior according to context [24]. In other words, some tasks are executed when their corresponding contexts are satisfied. Context is typically extracted from the sensors and the hardware features of the mobile. An example of a sensor is the proximity sensor that measures how close an object is to the screen of the mobile, while an example of a hardware feature is the camera [25]. CAMO [26] was developed to deal with offloading tasks of context-aware Android mobile applications to the cloud. Those tasks are implemented on the mobile and the cloud, though the implementations may be dissimilar, as long as the functionality is adequate. CAMO was also intended to facilitate the context-aware application programmers’ job [27]. It allows them to profile tasks in isolation locally and on the cloud. Accordingly, the programmer can specify custom task offloading plans including criteria and/or objectives for offloading in specific contexts. Offloading criteria allow rapid offloading in case the mobile environment is stable. A set of abstract classes and methods are provided for programmers’ support.

B. WHERE QUESTIONS

The above example research studies, except ISR [14], consider the cloud as their offloading server. Some research studies, similar to ISR, consider edge technologies for computation offloading. For example, COMET (Code Offload by Migrating Execution Transparently) [28] is designed to allow multithreaded applications to be executed concurrently on an Android mobile and a UNIX server based on distributed shared memory. It exploits the DalvikVM for this purpose. The DalvikVM is an interpreter based on the JVM that can run on both Android and UNIX. Satyanarayanan et al. [3] proposed using cloudlets to migrate the core of an application, leaving the user interface to the mobile (laptop) such that the mobile acts as a thin client. Since a suitable VM might not be available on the cloudlet, they proposed dynamic VM synthesis, in which base VMs running Windows or Linux operating systems, for example, reside on the cloudlet and add-ons are overlaid to synthesize the required custom launch VMs. The authors developed a tool called Kimberley for synthesizing VMs using VirtualBox, which is a hosted VM manager (VMM) for Linux. The RERAN tool on the Android platform records both graphical user interface (GUI) events and sensor events [29]. In RERAN, record/replay takes place between two compatible smartphones and is intended for repeatability, reproducing bugs, and time warping. It is worth noting that relatively few research studies have considered edge technologies for computation offloading. This is merely because those technologies are still developing. Nevertheless, we expect this to change in the near future due to the limitations of cloud computing for computation offloading and the increasing interest in edge technologies. The W5 framework proposed in this paper is intended to guide such research studies since, as previously noted, it can be used as a reference model by providing questions that should ideally be answered with each new technique if all offloading aspects are to be considered.

C. WHO QUESTIONS

The most common approach to answering the who question is to rely on the client for computation offloading, normally in cooperation with the server. Nevertheless, some research studies in the literature used an alternative approach in which a broker is employed for this purpose. For example, Hasan et al. [30] proposed the Aura computing model, in which a cloud of IoT smart devices is used for computation offloading from mobile devices. An assumption is that such IoT smart devices are located at fixed locations in relatively close proximity to the mobile. A controller acts as a broker between the mobile clients and the IoT devices willing to participate. A mobile agent advertises a job and its specifications. In response, the interested IoT devices advertise their specifications. The controller matches both parties and accepts the job provided the price is attractive. It is also responsible for job partitioning, distribution, monitoring, and computation result integration.

D. WHEN/WHY QUESTIONS

Numerous computation offloading techniques deal merely with the offloading process, ignoring the trigger of the process. On the other hand, this has been the concern of many other research studies. One approach that gained considerable attention is graph partitioning. For example,


in CloneCloud [20], a static analyzer represents the application using a graph of objects and the control flow among them and identifies legal partitions under a set of constraints (such as restricting partitioning to method entry and exit). A dynamic profiler, on the other hand, profiles the application on both the mobile and the cloud using a set of inputs to compose a cost model of the application under different partitions. The optimization solver then selects one of the legal partitions that optimizes an objective function in terms of both execution time and energy based on the profiling results. Due to the complexity of graph partitioning algorithms, other approaches have been proposed for application partitioning. For example, in ThinkAir [31], the programmer annotates offloadable methods of an Android application. One or more complete VMs of the mobile run on the cloud. The ThinkAir code generator processes the source code and generates the necessary offloadable method wrappers. At runtime, the offloading decision is based on environmental conditions such as the network conditions the first time the method is encountered. Subsequently, the decision is based on history in terms of execution time and energy consumption under different scenarios and current environmental conditions. Dynamic allocation of resources at runtime is possible, and parallelization on multiple VMs is enabled via a parallel processing module on the cloud. Different policies are available for the user to specify, such as minimizing execution time or energy consumption. As discussed above, MobiByte [23] offers three offloading objectives (performance improvement, energy efficiency, and execution under limited resources) to the programmer to select from. CAMO [26], on the other hand, allows more informed decisions by introducing the concept of task offloading plans based on offloading objectives and criteria. It facilitates profiling a task on both the mobile and the cloud. Based on the profiling results, the programmer develops a decision tree, from which a set of offloading plans is developed specifying offloading criteria and objectives of a given task in different contexts. Offloading criteria allow fast offloading in case the mobile environment is stable.
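As a simple illustration of the kind of trigger such techniques evaluate (a generic rule of thumb, not a criterion prescribed by any of the surveyed systems), offloading a task is expected to save time roughly when the local execution time exceeds the remote execution time plus the data transfer time. The sketch below assumes the quantities involved are supplied by a profiler; all names and example values are hypothetical.

// Illustrative time-based offloading criterion; the parameters are assumed to come
// from a profiler and the rule is not taken from any of the surveyed techniques.
public final class OffloadingDecision {

    /**
     * @param instructions     estimated instruction count of the task
     * @param localSpeed       local processing speed (instructions/s)
     * @param remoteSpeed      remote processing speed (instructions/s)
     * @param dataBytes        data to transfer (bytes)
     * @param bandwidthBytesPs available bandwidth (bytes/s)
     * @return true if remote execution is expected to be faster than local execution
     */
    public static boolean shouldOffload(double instructions, double localSpeed,
                                        double remoteSpeed, double dataBytes,
                                        double bandwidthBytesPs) {
        double localTime = instructions / localSpeed;
        double remoteTime = instructions / remoteSpeed + dataBytes / bandwidthBytesPs;
        return localTime > remoteTime;
    }

    public static void main(String[] args) {
        // Example: a heavy task on a slow client with a fast server and a decent link.
        System.out.println(shouldOffload(1e9, 1e8, 1e9, 2e6, 1e6)); // true: 10 s locally vs. 3 s remotely
    }
}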

E. HOW QUESTIONS

Computation offloading research studies differ considerably regarding the provided level of detail. For example, Hung et al. [19] provided a detailed explanation of their proposed framework, but merely an overview of the implementation details. COMET [28], on the other hand, was mainly concerned with the implementation details.

IV. OVERVIEW OF THE DEVELOPED PROTOTYPE

As previously noted, we developed a client-based agent for handling various computation offloading requests from one constrained mobile to another. To achieve its goal, it cooperates with another agent on the server as needed. In the current implementation, the agent is developed as part of the offloadable application. Within the context of the proposed


W5 framework, we considered three different offloading scenarios, namely application migration, record/replay, and RPC. Specifically, we considered Scenarios 2, 3 and 9, as indicated in Figure 2 using solid arrows. A resource-intensive image processing application was developed to apply basic image processing tasks such as image rotation, resizing, and filtering (grey, gamma, bright and blue). Despite being a simple application, it requires high CPU and memory usage proportional to the image size and resolution and is thus suitable for the experiments. The application was intended as a learning tool for students using a set of public university-owned mobiles and a set of images assembled for this purpose. Accordingly, security was not an issue. Otherwise, whenever an image is sent to a remote mobile, copyright should be protected, possibly using encryption [32] or lightweight watermarking techniques [33].

In the application migration scenario, the application and image do not exist on the server. Hence, they should migrate upon offloading. Since the server is a public mobile, it is reasonable to assume that the student is able to log in to it remotely using, for example, TeamViewer. In the record/replay scenario, on the other hand, we assume a replica of the application and the image exist on the server. Again, this is a reasonable assumption since the server is public and the data is a set of preassembled images provided with the application. All user interactions on the client are recorded, and upon offloading, the recorded log is sent to the server to be replayed, bringing the image on the server to the state of the corresponding image on the client at the time of offloading. The student can then log in using TeamViewer to resume working as in the case of application migration. Finally, in case of the RPC scenario, similar to the record/replay scenario, we assume that a replica of the application exists on the server. Upon offloading, the agent on the client sends the image and the command needed for execution to the agent on the server. In effect, the agent on the client calls a remote procedure on the server to execute the command on the sent image. The result of the execution returns to the client, where the agent passes it to the application.

The answers to the wh-questions in the developed prototype are as follows:
–The what question: application migration, record/replay, and RPC scenarios
–The where question: another constrained device; in our experiments we use two smartphones for realizing both the client and the server
–The who question: the client (an agent on the client in cooperation with another on the server); no broker manages offloading between the two parties
–The when/why question: we experiment with the above scenarios to explore the answer to this question
–The how question: we address this question by a thorough explanation of our experiments and their technical details to guide other interested researchers
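For illustration, the following is a minimal sketch of the kind of per-pixel task the image processing application applies (the actual filter implementations in the prototype may differ); the CPU and memory usage of such a filter grow with the image size, which is what makes the application suitable for the experiments.

import android.graphics.Bitmap;
import android.graphics.Color;

// Illustrative greyscale filter similar in spirit to the filters used in the
// prototype application; not the prototype's exact code.
public final class GreyFilter {

    public static Bitmap apply(Bitmap source) {
        Bitmap result = source.copy(Bitmap.Config.ARGB_8888, true);
        for (int y = 0; y < result.getHeight(); y++) {
            for (int x = 0; x < result.getWidth(); x++) {
                int pixel = result.getPixel(x, y);
                // Simple average of the RGB channels.
                int grey = (Color.red(pixel) + Color.green(pixel) + Color.blue(pixel)) / 3;
                result.setPixel(x, y, Color.argb(Color.alpha(pixel), grey, grey, grey));
            }
        }
        return result;
    }
}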



FIGURE 3. CPU, memory, and network usages at startup.

V. DETAILS OF THE EXPERIMENTS

In this section, we provide implementation details needed by interested researchers for smooth replication of our experiments. We chose to develop our prototype for the popular Android platform. Since Android is Java-based, we used the Java language to develop both the client side and the server side. Android Studio 3.0 was exploited for this purpose. We used it to create Android Virtual Devices (AVDs). The selected development environment in Android Studio was Android 8.0 (API level 26). The image processing application was developed using Android Studio and Java to apply basic image processing tasks such as image rotation, resizing, and filtering (grey, gamma, bright and blue) via a user interface (UI). To avoid the problem of unresponsive user interfaces, the developed application was thread-based.

As previously noted, the agent is developed as part of the application. It was created as a background service for performing network activities. This is because network activities cannot be performed on the main UI thread in Android 3.0 (API level 11) and higher, to avoid the NetworkOnMainThreadException. Since the goal of the experiments is to explore the when/why question of the proposed framework, offloading was not automated. Instead, it was initiated manually through the UI in order to facilitate experimentation with the prototype as needed. The server agent is also thread-based in order to serve multiple concurrent clients, though this was not required in our experiments. We used socket programming for communications between both parties. The server agent is developed such that it is always up and running, listening for requests from the client agent to execute the corresponding tasks. The server, thus, manages connections from the client, receives data, executes the requested tasks, and sends back the results to the client whenever applicable.
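The following is a simplified sketch of the server-agent listening loop described above, using plain Java sockets and one thread per client. In the actual prototype this logic is wrapped in an Android background service; the port number and the request-handling placeholder are assumptions.

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Simplified server-agent loop: accept client connections, read a request object,
// execute the requested task, and send the result back when applicable.
public class ServerAgentSketch {

    public static void main(String[] args) throws Exception {
        try (ServerSocket serverSocket = new ServerSocket(5050)) { // port is an assumption
            while (true) {
                Socket client = serverSocket.accept();
                // One thread per client so several clients can be served concurrently.
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
            Object request = in.readObject();  // e.g., image bytes plus a task name
            Object result = execute(request);  // run the requested task
            out.writeObject(result);           // return the result to the client
            out.flush();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static Object execute(Object request) {
        // Placeholder: in the prototype the requested image processing task is applied here.
        return request;
    }
}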

In order to test our prototype, we were unable to use emulators within Android Studio, because they do not support network activities by default. Thus, we used two mobile phones for this purpose. To realize the client, we used a Lenovo Vibe K5 smartphone running Android 6 with 3 GB RAM and an octa-core 1.8 GHz processor. On the other hand, to realize the server, we used a Samsung Galaxy Note 4 smartphone also running Android 6 with 3 GB RAM, but with a 2.7 GHz quad-core processor. It is worth noting that though we selected Android 8.0 for our project, the implemented agents can run on lower versions.

A. THE APPLICATION MIGRATION SCENARIO

In this scenario, the client agent notifies the application to save its state (the image path and the image bitmap file). The application package (APK) and the image are located by the agent, which calls startService(Intent) to send the image and the APK file to the server as a single serializable object. The server agent uses a background thread to listen to the corresponding socket and start receiving the sent application and image whenever offloading is to take place. The received object is stored and its path is passed using an intent to the main UI thread, where listeners access the intent to check the datatype of the APK, set it to application/vnd.android.package-archive, and start the activity of the intent for the APK to be installed. It is worth noting that the server agent uses two special permissions for its operation, namely, REQUEST_INSTALL_PACKAGES and INSTALL_PACKAGES.
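The installation step on the server side can be condensed as in the following sketch; the FileProvider authority and the file location are assumptions, error handling is omitted, and on recent Android versions the REQUEST_INSTALL_PACKAGES permission mentioned above must also be granted.

import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import androidx.core.content.FileProvider;
import java.io.File;

// Condensed, illustrative sketch of installing a received APK on the server device.
// The provider authority and received file path are assumptions.
public final class ApkInstaller {

    public static void install(Context context, File receivedApk) {
        Uri apkUri = FileProvider.getUriForFile(
                context, context.getPackageName() + ".fileprovider", receivedApk);
        Intent intent = new Intent(Intent.ACTION_VIEW);
        intent.setDataAndType(apkUri, "application/vnd.android.package-archive");
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK
                | Intent.FLAG_GRANT_READ_URI_PERMISSION);
        context.startActivity(intent); // the system installer handles the APK
    }
}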

B. THE RECORD/REPLAY SCENARIO

In this scenario, replicas of the application and the image reside on the server. The client agent logs the user's interactions with the application (such as button click, menu item selection,


FIGURE 4. CPU usage during local execution and during RPC send and receive phases.

FIGURE 5. Memory usage during local execution and during RPC send and receive phases.

text change, and focus change) during the current session. Whenever offloading is to take place, the recorded events are sent to the server for replay. Accessibility services were used at the client side to enable logging user interactions. An accessibility service is a background service that receives callbacks from the system on any accessibility event. This service records user events via the onAccessibilityEvent() callback, finds the source node of the event using AccessibilityNodeInfo.FOCUS_INPUT, finds the ID of the node using nodeInfo.getViewIdResourceName(), finds the type of the event using event.getEventType(), and then inspects it with getClassName(). Those pieces of data are then timestamped and logged. The logged data is sent to the server as an array of strings. The server agent replays the events to provide a non-disruptive session for the user. To enable recording, the client uses the BIND_ACCESSIBILITY_SERVICE permission.
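A condensed sketch of such a recording service is shown below; the log storage and the exact entry format are assumptions, while the listed API calls (onAccessibilityEvent(), AccessibilityNodeInfo.FOCUS_INPUT, getViewIdResourceName(), getEventType(), and getClassName()) are those mentioned above.

import android.accessibilityservice.AccessibilityService;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;
import java.util.ArrayList;
import java.util.List;

// Illustrative client-side recording service; log handling is a simplification.
public class EventRecorderService extends AccessibilityService {

    private final List<String> log = new ArrayList<>();

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        AccessibilityNodeInfo source = findFocus(AccessibilityNodeInfo.FOCUS_INPUT);
        String viewId = (source != null) ? source.getViewIdResourceName() : "unknown";
        // Timestamp plus event type, class name, and view ID, as described above.
        String entry = System.currentTimeMillis() + ";" + event.getEventType() + ";"
                + event.getClassName() + ";" + viewId;
        log.add(entry); // sent to the server as an array of strings upon offloading
    }

    @Override
    public void onInterrupt() {
        // Required override; nothing to do in this sketch.
    }
}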

C. THE REMOTE PROCEDURE CALL SCENARIO

In this scenario, the client agent sends the image and the requested task to the server agent. In effect, it calls the remote procedure on the server to execute the task on the image. At the server side, the image is processed and is then sent back by the server agent to the client agent, from where it is passed to the main UI thread of the application.

The process starts by the client agent sending the image and the requested task as a serializable object to the server agent. The server agent receives the image and passes it to the main UI thread using an intent to execute the requested task on the image. The resulting image is saved and its path is sent from the UI thread to the server agent using an intent. The image is then returned to the client agent as a serializable object. The client agent receives the processed image and passes it to the main UI thread via result receivers using intents; the results are passed to the client application.
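The client side of this exchange can be sketched as follows (a simplified illustration: the request class, host, and port are assumptions, and the prototype additionally passes the result to the UI thread via intents and result receivers as described above).

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

// Illustrative client side of the RPC scenario: send the image and task, receive the result.
public class RpcClientSketch {

    static class ImageTaskRequest implements Serializable {
        final byte[] imageBytes;
        final String task; // e.g., "grey", "gamma", "rotate" (illustrative task names)

        ImageTaskRequest(byte[] imageBytes, String task) {
            this.imageBytes = imageBytes;
            this.task = task;
        }
    }

    public static byte[] offload(String serverHost, int serverPort,
                                 byte[] imageBytes, String task) throws Exception {
        try (Socket socket = new Socket(serverHost, serverPort);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
            out.writeObject(new ImageTaskRequest(imageBytes, task)); // send image and task
            out.flush();
            return (byte[]) in.readObject(); // processed image returned by the server agent
        }
    }
}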

VI. EXPERIMENTAL RESULTS

To obtain the experimental results, we used Android Profiler to monitor the maximum CPU, memory, and network usages on the client while performing different tasks locally and remotely. Measures were first recorded at application startup. As shown in Figure 3, when the application launches, the maximum recorded CPU usage is about 5%, while that of the memory is about 48 MB. No network usage is recorded. The next step was to monitor the performance when applying filters on images with different sizes locally and remotely using the RPC scenario. Figures 4, 5 and 6 show the CPU, memory, and network usages for a set of images with


FIGURE 6. Network usage during RPC send and receive phases.

FIGURE 7. CPU, memory and network usages during record/replay.

increasing sizes when processed locally and remotely. As shown in Figure 4, the CPU usage is almost constant during sending each image and the corresponding task. On the other hand, there is no clear relationship between the image size and the CPU usage, either during receiving or during local execution. Nevertheless, the CPU usage during local execution is consistently higher than that during remote execution, implying savings in terms of CPU usage. In other words, having concerns regarding the CPU usage in case of RPC can be an answer to the when/why question.

As shown in Figure 5, memory usage during local execution increases with the image size. Additionally, during sending and receiving it is almost the same, since the image size is constant before and after processing. Nevertheless, as in the case of CPU usage, there is no clear relationship between memory usage and the image size. But, again, there are savings in comparison to local execution. Hence, concerns

regarding memory usage in case of RPC can be another answer to the when/why question of the framework. However, the savings in the CPU and memory usages come at the expense of the network usage, as shown in Figure 6. As expected, the network usages during sending and receiving depend on the image size. It is interesting to notice that the network usage during receiving is slightly higher. This is probably because of the differences in specifications between the mobile client and the mobile server, as discussed above.

We repeated the experiment for the case of record/replay. It is worth emphasizing that in this case, the application and image reside on the server. Hence, only the recorded log, which is independent of the image size, is sent to the server to be replayed on a replica of the image to update it so that the student can resume work. The CPU, memory, and network usages are shown in Figure 7. It is clear that negligible CPU and network usages are encountered during sending


the recorded log for replay. This does not provide a direct answer to the when/why question. Nevertheless, it indicates that whenever offloading is needed, for example in case of low battery or high CPU utilization, replay can be executed with minimal effect on the client.

FIGURE 8. CPU usage during application migration.

FIGURE 9. Memory usage during application migration.

FIGURE 10. Network usage during application migration.

We then experimented with the application migration scenario. Figures 8 through 10 show the CPU, memory, and network usages for the same set of images, respectively, but during migration. It is clear that they all increase with the image size, though the increase is not perfectly proportional. For example, the increase in the CPU usage from an image of size 23 KB to another of size 3.65 MB is only 4.97%. The same is true in the case of the memory and network usages. It is worth noting that the CPU, memory, and network usages for migrating the application alone are 15%, 71.26 MB, and 2.37 MB/s, respectively, and hence the y-axis in each of the three figures is adjusted accordingly. By looking at Figures 4 and 8, we notice that the CPU usage in case of application migration is lower than that in case of local execution. On the other hand, by inspecting Figures 5 and 9, we notice that the memory usage is only slightly higher in case of application migration. It is also slightly lower in case of an image of size 3.65 MB. Accordingly, concerns regarding the CPU usage only can be an answer to the when/why question in case of application migration, of course at the expense of the network usage.

VII. CONCLUSION AND FUTURE WORK

This paper proposed the W5 framework for computation offloading in the IoT. Within the context of the framework, we provided a reference model to guide the development of offloading techniques and related research studies in the future. According to the model, five wh-questions should ideally be answered with each new offloading technique or research study. The framework takes into consideration various offloading scenarios from constrained mobile and IoT devices to help in answering the where, what, and who questions. A survey of the literature presented and discussed example research studies that attempt to answer the various wh-questions. A comparison among research studies compatible with the W5 framework was also presented. As an example, we developed a client-based agent for three computation offloading scenarios among constrained smartphones in the IoT. Specifically, we considered RPC, application migration, and record/replay techniques. In other words, we considered various what techniques given that the answer to the who question is the client itself, and the answer to the where question is offloading to other constrained devices; both the client and the server are constrained. To answer the when/why question, we conducted a set of experiments. Details of the experiments were provided for answering the how question and in order to facilitate smooth replication of our experiments by interested researchers.

Regarding future research, we intend to extend the developed client-based agent to address other scenarios based on the W5 framework. For example, the client-based agent will be developed as a separate background service to serve multiple applications. Offloading can also take place to cloudlets or other variants of edge computing.


Another intended research direction considers offloading with the help of an independent third-party broker, which might be more suitable for constrained IoT devices. It is worth noting that the most challenging of the five wh-questions of the W5 framework is the when/why question, which needs considerable attention from researchers and would be a serious subject of our future research.

REFERENCES
[1] K. Ashton, ‘‘That ‘Internet of Things’ thing,’’ RFID J., Jun. 2009. [Online]. Available: http://www.rfidjournal.com/articles/view?4986
[2] A. U. R. Khan, M. Othman, F. Xia, and A. N. Khan, ‘‘Context-aware mobile cloud computing and its challenges,’’ IEEE Cloud Comput., vol. 2, no. 3, pp. 42–49, May/Jun. 2015.
[3] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, ‘‘The case for VM-based cloudlets in mobile computing,’’ Pervasive Comput., vol. 8, no. 4, pp. 14–23, 2009.
[4] S. A. Noor, R. Hasan, and M. M. Haque, ‘‘CellCloud: A novel cost effective formation of mobile cloud based on bidding incentives,’’ in Proc. IEEE 7th Int. Conf. Cloud Comput., Anchorage, AK, USA, Jun./Jul. 2014, pp. 200–207.
[5] Anixter. (2017). Micro Data Center Solutions. [Online]. Available: https://www.anixter.com/content/dam/anixter/resources/brochures/anixter-micro-data-center-brochure-en.pdf
[6] A. Maiti, A. A. Kist, and A. Maxwell, ‘‘Latency-adaptive positioning of nano data centers for peer-to-peer communication based on clustering,’’ in Proc. IEEE Int. Conf. Commun. Workshop, San Diego, CA, USA, Jun. 2015, pp. 1921–1927.
[7] B. Mejias and P. V. Roy, ‘‘From mini-clouds to cloud computing,’’ in Proc. 4th IEEE Int. Conf. Self-Adapt., Self-Organizing Syst. Workshop, Budapest, Hungary, Sep. 2010, pp. 234–238.
[8] Y. Elkhatib, B. Porter, H. B. Ribeiro, M. F. Zhani, J. Qadir, and E. Rivière, ‘‘On using micro-clouds to deliver the fog,’’ IEEE Internet Comput., vol. 21, no. 2, pp. 8–15, Mar./Apr. 2017.
[9] F. Bonomi, ‘‘Connected vehicles, the Internet of Things, and fog computing,’’ in Proc. 8th ACM Int. Workshop Veh. InterNetw. (VANET), Las Vegas, NV, USA, 2011. [Online]. Available: https://www.sigmobile.org/mobicom/2011/vanet2011/program.html
[10] K. Kaur, T. Dhand, N. Kumar, and S. Zeadally, ‘‘Container-as-a-service at the edge: Trade-off between energy efficiency and service availability at fog nano data centers,’’ IEEE Wireless Commun., vol. 24, no. 3, pp. 48–56, Jun. 2017.
[11] S. Wang, R. Urgaonkar, M. Zafer, T. He, K. Chan, and K. K. Leung, ‘‘Dynamic service migration in mobile edge-clouds,’’ in Proc. IFIP Netw. Conf., Toulouse, France, May 2015, pp. 1–9.
[12] P. Patil, A. Hakiri, and A. Gokhale, ‘‘Cyber foraging and offloading framework for Internet of Things,’’ in Proc. IEEE 40th Annu. Comput. Softw. Appl. Conf., Atlanta, GA, USA, Jun. 2016, pp. 359–368.
[13] TeamViewer. (2017). Connections From Mobile to Mobile Devices. [Online]. Available: https://community.teamviewer.com/t5/KnowledgeBase/Connections-From-Mobile-to-Mobile-Devices/ta-p/280
[14] M. Satyanarayanan et al., ‘‘Pervasive personal computing in an Internet suspend/resume system,’’ IEEE Internet Comput., vol. 11, no. 2, pp. 16–25, Mar./Apr. 2007.
[15] W. Zhang, K. T. Lam, and C. L. Wang, ‘‘Adaptive live VM migration over a WAN: Modeling and implementation,’’ in Proc. IEEE 7th Int. Conf. Cloud Comput., Anchorage, AK, USA, Jun./Jul. 2014, pp. 368–375.
[16] Y. Chen, S. Zhang, Q. Guo, L. Li, R. Wu, and T. Chen, ‘‘Deterministic replay: A survey,’’ ACM Comput. Surv., vol. 48, no. 2, Nov. 2015, Art. no. 17.
[17] A. Surie, H. A. Lagar-Cavilla, E. de Lara, and M. Satyanarayanan, ‘‘Low-bandwidth VM migration via opportunistic replay,’’ in Proc. 9th Workshop Mobile Comput. Syst. Appl., Napa Valley, CA, USA, 2008, pp. 74–79.
[18] J. Flinn and Z. M. Mao, ‘‘Can deterministic replay be an enabling tool for mobile computing?’’ in Proc. 12th Workshop Mobile Comput. Syst. Appl., Phoenix, AZ, USA, 2011, pp. 84–89.
[19] S.-H. Hung, C.-S. Shih, J.-P. Shieh, C.-P. Lee, and Y.-H. Huang, ‘‘Executing mobile applications on the cloud: Framework and issues,’’ Comput. Math. Appl., vol. 63, no. 2, pp. 573–587, 2012.
[20] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, ‘‘CloneCloud: Elastic execution between mobile device and cloud,’’ in Proc. 6th Conf. Comput. Syst., Salzburg, Austria, 2011, pp. 301–314.
[21] B. Venners, Inside the Java Virtual Machine, 2nd ed. New York, NY, USA: McGraw-Hill, 2000.
[22] R. K. K. Ma, K. T. Lam, and C.-L. Wang, ‘‘eXCloud: Transparent runtime support for scaling mobile applications in cloud,’’ in Proc. Int. Conf. Cloud Service Comput., Hong Kong, Dec. 2011, pp. 103–110.
[23] A. U. R. Khan, M. Othman, A. N. Khan, S. A. Abid, and S. A. Madani, ‘‘MobiByte: An application development model for mobile cloud computing,’’ J. Grid Comput., vol. 13, pp. 605–628, Dec. 2015.
[24] H. Elazhary, ‘‘A cloud-based framework for context-aware intelligent mobile user interfaces in healthcare applications,’’ J. Med. Imag. Health Informat., vol. 5, no. 8, pp. 1680–1687, 2015.
[25] H. Elazhary, A. Althubyani, L. Ahmed, B. Alharbi, N. Alzahrani, and R. Almutairi, ‘‘Context management for supporting context-aware Android applications development,’’ Int. J. Interact. Mobile Technol., vol. 11, no. 4, pp. 186–201, 2017.
[26] H. Elazhary, S. Aloraini, and R. Aljuraid, ‘‘Context-aware mobile application task offloading to the cloud,’’ Int. J. Adv. Comput. Sci. Appl., vol. 8, no. 5, pp. 381–390, 2017.
[27] H. Elazhary, ‘‘Facile programming,’’ Int. Arab J. Inf. Technol., vol. 9, no. 3, pp. 256–261, 2012.
[28] M. Gordon, D. A. Jamshidi, S. Mahlke, Z. M. Mao, and X. Chen, ‘‘COMET: Code offload by migrating execution transparently,’’ in Proc. 10th USENIX Symp. Oper. Syst. Design Implement., Hollywood, CA, USA, 2012, pp. 93–106.
[29] L. Gomez, I. Neamtiu, T. Azim, and T. Millstein, ‘‘RERAN: Timing- and touch-sensitive record and replay for Android,’’ in Proc. Int. Conf. Softw. Eng., San Francisco, CA, USA, May 2013, pp. 72–81.
[30] R. Hasan, M. M. Hossain, and R. Khan, ‘‘Aura: An IoT based cloud infrastructure for localized mobile computation outsourcing,’’ in Proc. IEEE Int. Conf. Mobile Cloud Comput., Services, Eng., San Francisco, CA, USA, Mar./Apr. 2015, pp. 183–188.
[31] S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, ‘‘ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading,’’ in Proc. 31st IEEE Annu. Int. Conf. Comput. Commun., Orlando, FL, USA, Mar. 2012, pp. 945–953.
[32] N. F. E. Abady, H. M. Abdalkader, M. I. Moussa, and S. F. Sabbeh, ‘‘Image encryption based on new one-dimensional chaotic map,’’ in Proc. Int. Conf. Eng. Technol., Apr. 2014, pp. 1–6.
[33] H. Elazhary, ‘‘A fast, blind, transparent, and robust image watermarking algorithm with extended Torus Automorphism permutation,’’ Int. J. Comput. Appl., vol. 32, no. 4, pp. 34–41, 2011.

HANAN H. ELAZHARY (M’17) received the B.Sc. (Hons.) degree in electronics and communications engineering and the M.Sc. degree in electronics and communications engineering with a major in computer engineering from the Faculty of Engineering, Cairo University, Cairo, Egypt, in 1992 and 1996, respectively, and the Ph.D. degree in computer science and engineering from the University of Connecticut, Connecticut, USA, in 2005. She also worked part time with Eastern Connecticut State University, USA, for one semester before graduation. After graduating, she was a part time Assistant Professor in reputable private universities, Cairo, Egypt. She has been with the Computers and Systems Department, Electronics Research Institute, Cairo, Egypt, since 1993. She was also a Teaching Assistant with the University of Connecticut, USA, for five years. She was also a full time Assistant Professor for two years and also an Associate Professor for one year with Akhbar Elyom Academy, 6 October City, Egypt. She served as the Head of the Computer Science Department for three years. She is currently an Associate Researcher Professor with the Electronics Research Institute and also an Associate Professor with the Computer Science Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia. She is also supervising seven M.Sc. students and one Ph.D. student.


SAHAR F. SABBEH (M’17) received the B.Sc., M.Sc., and Ph.D. degrees from the Faculty of Computers and Information Technology, Mansoura University, Egypt, in 2003, 2008, and 2011, respectively, all in information systems. She was a Teaching Assistant with the Alzara High Institution for Management Information Systems from 2004 to 2009 and also with the Misr Higher Institution of Engineering and Technology, Mansoura, Egypt, from 2009 to 2011. She has


been with the Faculty of Computers and Information Technology, Banha University, Egypt, since 2011, as an Assistant Professor. She was also a part time Assistant Professor in several reputable private universities, Cairo, Egypt. She is currently an Assistant Professor with the Faculty of Computers and Information Technology, Banha University, Egypt, and also an Assistant Professor with the Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia. She is also supervising two M.Sc. students and one Ph.D. student.

