Realistic Context Generation For Mobile App Testing and Performance Evaluation

Manoj R. Rege, Vlado Handziski, and Adam Wolisz
Technische Universität Berlin
Email: [email protected], [email protected], [email protected]
Abstract—Mobile app testing and evaluation requires exposing the app to a wide array of real world context conditions, viz. location, sensor values, network conditions, etc. Such comprehensive context conditions are difficult to create in a development environment on a real device; simulating them in a mobile emulator is therefore a promising alternative. We present ContextMonkey, a framework for context generation in a mobile emulator. ContextMonkey can generate realistic context by leveraging traces in a correlated and interdependent way from heterogeneous sources: remote trace databases, models, and trace files. It eases the burden on developers to collect correlated traces, convert them to a common format, and feed them to the emulator in an orchestrated manner. We present examples demonstrating the simplicity and potential of ContextMonkey in app testing scenarios.

I. INTRODUCTION

The growing popularity of mobile devices like smart-phones, tablets, and wearables has created a highly competitive mobile app ecosystem where the quality of the user experience is crucial for the success of an app. Recent studies [1], [2] have found that 44% of users delete their apps immediately if they do not perform as per their expectations. Bad experiences are attributed to heterogeneous factors, ranging from the usability of the user interface to poor app performance over networks and across different mobile devices. Thus, pre-release app testing and evaluation is critical.

App performance in terms of crashes, response times, energy consumption, network bandwidth consumption, etc. is significantly impacted by the context created by the complex interaction of multiple factors related to the execution environment. These include mobile device capabilities, the quality of the network connectivity to the backend, the load on the backend servers, the type and frequency of sensor values, and the user's location, among others. Furthermore, these factors are often interrelated and dependent on each other. The app needs to be exposed to a wide array of realistic combinations of all these parameters in the test phase so as to reduce performance issues post release. Recreating such rich context in the physical reality introduces high overheads in time and cost, and challenges in controllability and repeatability. Thus, realistic trace-driven simulation of such context conditions can be a viable alternative.

However, creating a trace-driven context simulation is quite laborious. One would have to build a library of a wide variety of realistic traces and play them in an orchestrated manner. This is not straightforward, since the traces for the individual context modalities might have to be collected in a correlated manner and from heterogeneous sources. They can be recorded using mobile devices, leveraged from trace databases, generated using mathematical models, etc. All the traces would then have to be converted to a common format. Further, these simulated conditions would need to be made available to the app under test. There are two possible approaches to do this: using either a real device or a mobile emulator. Linking such a simulation to a real device is operating system (OS) platform dependent, and not transparent. It would require developers to include the simulation as a library in the app source code, and modify it to use the simulation APIs instead of the standard APIs provided by the OS. On the other hand, mobile emulators imitate the internal hardware of mobile devices in a software stack, and have the functionality that enables the software stack to communicate with the app without requiring any changes to the app source code. They also expose APIs to configure the stack appropriately. These APIs can be reused to simplify the linking of a context simulation by feeding traces in an orchestrated manner. Despite being widely used, mobile emulators lack comprehensive capability for context simulation, which is increasingly important for sensor-rich apps. Thus, linking context simulations to emulators can help to bridge the gap between emulators and real devices, increasing their level of realism.

In this paper, we present ContextMonkey, a framework that simulates realistic contextual conditions for app testing and evaluation in a mobile emulator. ContextMonkey enables context simulation by leveraging traces from external trace databases, models, libraries, and trace files in a correlated and interdependent manner. It automates the fetching of traces from these heterogeneous sources, converts them into an emulator specific format, and then orchestrates their feeding to the emulator. Furthermore, it provides a set of APIs with which developers can define suitable contextual simulations as per the requirements of the app under test. The ContextMonkey framework is emulator independent, and can be used for simulating context in a multitude of mobile emulators running on the local workstation as well as those running in cloud based services.

II. CONTEXT MODELING FOR APP TESTING & EVALUATION

Many definitions of Context have been proposed in the literature (see [3]).
Fig. 1: A Directed Acyclic Graph for the "A Walk at Brandenburg Gate" context (modalities: Location, Network, Compass, Camera, Accelerometer).
Dey [4] defines context as "any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves". Context for mobile apps can be understood as a set of interrelated factors that can potentially impact the end behavior of the app. We refer to these factors as modalities in the remainder of this paper. These modalities include: 1) mobile device capabilities - CPU, memory, camera, storage; 2) sensors - temperature, pressure, humidity, light intensity, sound/noise level, and the change in their values; 3) user state, location, and environment - stationary, mobile, indoors/outdoors, etc.; and 4) network type and conditions - 3G, LTE, WiFi - latency, bandwidth, and packet loss. These modalities are low-level, often interdependent, and have values that may vary over time. The context logic in mobile apps consumes these low-level modalities, aggregates them, and uses them for context estimation, adapting the app behavior over time accordingly. Thus, firstly, ContextMonkey needs a context model that is based on low-level modalities that vary over time, as this enables the context logic in the apps to function seamlessly. Secondly, the model needs to provide support for defining inter-dependencies between the low-level modalities as observed in the real world. And finally, the model should be easy for developers to learn and use, and at the same time be able to express a wide array of high-level context definitions. This is because, in app testing and evaluation, the appropriate high-level context definition to be used, the relevant modalities associated with it, and the dependencies among the modalities are largely specific to the app under test, and best known to the developers.

Context modeling has been well studied in the literature (see [5], [6], [7]). There are several approaches to model context. The key-value model is the simplest way to represent context information as key-value pairs. However, it cannot model the inter-dependencies and relationships between modalities. Markup based models (ContextML [8], Composite Capabilities/Preference Profiles (CC/PP) [9], etc.) represent context data as tags that enable efficient retrieval and validation. Markup based models are highly app dependent and grow in complexity as the information at multiple levels increases; hence, they are not preferable. Logical models use rules, expressions, and facts to represent context. They have a high level of expressibility, but are strongly coupled to the application. Ontology based models [10] are the most popular choice due to their semantic reasoning, application independence, and expressiveness. However, representing context with ontology based models can be complex, and they are generally resource intensive. Graph based models [11], [12] use relationships to represent context. Such a model offers a middle ground in terms of variation in the representation of low-level relationships, richness of expression, and ease of use due to its intuitive nature. Therefore, for ContextMonkey we have decided to use graph based models.

In the following we give an example illustrating their use, by modeling a simple context scenario, A Walk at the Brandenburg Gate, using low-level modalities. Such a context might be defined by the facts that: 1) the mobile device should have the location, with latitude and longitude values, of the Brandenburg Gate; 2) the device should experience network conditions similar to those at the Brandenburg Gate for a given network carrier; 3) the camera on the mobile device should capture an image of the Brandenburg Gate when pointing towards it; and 4) the accelerometer on the mobile device should have values as in real walking. The context is represented as a Directed Acyclic Graph (DAG) G = <V, E> (shown in Figure 1), where V is the vertex set representing all the modalities in the context, viz. location, camera, compass, network, and accelerometer. E is the edge set, where each edge e_kj represents the interdependence between two modalities and implies that the modality V_k is dependent on V_j. In our example, the camera modality is dependent on both the location and the compass, and the network modality is only dependent on the location. The acyclic nature of the graph prevents circular dependencies between the modalities. The dependencies in the context graph are transitive by default. The context graph can have disconnected components that represent modalities independent from the others, for example the accelerometer modality. The temporal variation in a modality is specified using the rate of change parameter; each modality has its own rate of change.
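As a minimal illustration, the context graph of Figure 1 can be expressed with the Python NetworkX library (which the prototype described in Section V also uses). The rates below are illustrative assumptions, not values from the paper, and the edge direction here means "the target depends on the source".

```python
import networkx as nx

# Sketch of the Fig. 1 context graph for "A Walk at the Brandenburg Gate".
G = nx.DiGraph()
for modality, rate_hz in [("location", 1.0), ("compass", 10.0), ("network", 0.01),
                          ("camera", 0.01), ("accelerometer", 30.0)]:
    G.add_node(modality, rate_of_change=rate_hz)

G.add_edge("location", "network")   # network conditions depend on the location
G.add_edge("location", "camera")    # camera image depends on the location...
G.add_edge("compass", "camera")     # ...and on the heading

assert nx.is_directed_acyclic_graph(G)   # no circular dependencies allowed
print(list(nx.topological_sort(G)))      # one valid order for fetching dependent traces
```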
III. CONTEXTMONKEY

A. Design Goals
The ContextMonkey framework design is based on achieving three important goals, discussed in the following.
1) Emulator and Data Source Independence: The framework should be decoupled from both the emulator and the data sources used for obtaining the traces for simulation. This ensures seamless integration of the framework with various emulators (those offering a suitable interface to feed the traces) available for smart-phones, tablets, and wearable devices, and with heterogeneous data sources for traces (those offering an interface to obtain traces) such as crowd-sourced databases, models, and trace files.
2) Extensibility: The framework should promote easy integration of new high-level context scenarios in the future. This implies that it should be possible to add new low-level modalities to the framework. These low-level modalities can be new physical and virtual sensors (functions over physical sensor values) added to the emulators due to growing mobile device capabilities, or any modality that exists on current mobile devices but is not yet supported by their emulators.
3) Fidelity & Seamlessness: The framework should simulate context for the app in an emulator-agnostic manner, just as on a real device. It should capture context at a fine-grained level in a high-fidelity manner, feeding traces of each individual modality to the emulator at the rates observed on a real mobile device. It should also adapt the trace feeding rate of a modality as per the rates configured in the app source code. Furthermore, it should provide the context simulation to the app seamlessly, without requiring any changes in the app source code such as integrating external libraries or changing APIs.
B. Challenges
One of the key challenges in context simulation with regard to the above design considerations is the potentially interdependent nature of the modalities. Interdependence requires careful orchestration in fetching traces and feeding them to the emulator. This is not easy when the data source types of the modalities differ. Each data source may have its own method and APIs to generate traces, and the time for fetching traces from each data source type may differ widely. Further, each data source may have its own format used to represent the trace. On the other end, the emulator has its own APIs for feeding traces, and the trace format understood by the emulator might differ from that of the data source. Therefore, the framework needs to understand and execute the respective API calls for the heterogeneous data sources and emulators, and also handle data format conversions. These tasks have to be executed respecting the defined rate of change of the individual modalities so as to maintain high fidelity. A conceptual sketch of such per-modality orchestration follows.
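The following is a conceptual sketch only (not ContextMonkey code) of why each modality needs its own fetch-convert-feed loop: a slow remote source that takes roughly a second per query must not stall a fast, file-backed sensor. All functions here are self-contained stand-ins.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_accel():            # stand-in for reading one sample from a local trace file
    return {"x": random.random(), "y": 0.0, "z": 9.81}

def fetch_network():          # stand-in for a slow remote trace-database query
    time.sleep(0.8)
    return {"latency_ms": 120, "bw_kbps": 2000}

def feed(modality, value):    # stand-in for the emulator API call
    print(modality, value)

def run_modality(name, period_s, fetch, duration_s=5):
    """Fetch and feed one modality at its own rate, independently of the others."""
    end = time.time() + duration_s
    while time.time() < end:
        started = time.time()
        feed(name, fetch())
        time.sleep(max(0.0, period_s - (time.time() - started)))

with ThreadPoolExecutor() as pool:
    pool.submit(run_modality, "accelerometer", 1 / 30, fetch_accel)  # 30 Hz
    pool.submit(run_modality, "network", 1.0, fetch_network)         # 1 Hz
```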
C. Architecture
The ContextMonkey architecture was first proposed in the poster [13]. The architecture is component based and decoupled into three layers (shown in Figure 2).
1) Trace Layer: The Trace Layer is primarily responsible for interacting with the heterogeneous trace data sources used in a simulation, and for orchestrating the fetching of traces from them. It supports three data source types:
• Trace database services in the cloud, such as Google StreetView [14], OpenSignal [15], Foursquare location [16], etc., that support fetching traces using API queries.
• Models, such as BonnMotion [17], SUMO [18], etc., that provide highly specialized tools and libraries to generate traces.
• Trace files that contain individual modality trace datasets captured using real mobile devices (such as those available at CRAWDAD [19]).
The Trace Layer includes three components:
Data Source Manager - The Data Source Manager coordinates all the Trace Layer functionality associated with obtaining traces from the data sources and processing them for the simulation. It constructs trace fetching requests. In the case of trace database services, this includes API URLs with appropriate query parameters, the authorization and authentication necessary for using the APIs, the HTTP request method, etc.
Fig. 2: The ContextMonkey Architecture. The dotted elements are external to the architecture.

In the case of models, the request comprises the library path details and the appropriate commands and options needed to generate traces; for trace files, it comprises the file paths, headers, and their respective formats. The Data Source Manager also handles the asynchronous Trace API methods that are explained further on. Using the data source definitions, the manager maps incoming trace fetching requests to the appropriate data source handler. The manager also handles the processing of traces fetched from a data source. The trace value units of a data source for a modality could differ and need to be converted; for instance, in the case of the camera modality, images might have a different resolution or file compression format and need to be processed. The manager filters unwanted attributes from the fetched traces. It also handles trace fetching failures that can occur for heterogeneous reasons: unavailability of traces, exceeded limits on API calls, service unavailability, network failures, etc. In case of a failure, the manager supports trace generation based on default values, interpolation, and extrapolation functions. Finally, it sends processed traces to the Trace Format Manager and forwards the format-converted traces received from the Trace Format Manager to the Context Layer.
Trace Format Manager - A data source might have its own trace representation format, which could be different from the trace format understood by the emulator. The Trace Layer is emulator-agnostic; therefore, the Trace Format Manager facilitates conversion from the data source specific trace formats to a common internal representation format of ContextMonkey.
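As an illustration of what such a constructed trace-fetching request can look like, the sketch below assembles an image-trace query. The endpoint and parameter names follow the public Street View Static API; the key and the helper name are placeholders, not ContextMonkey internals.

```python
from urllib.parse import urlencode

def build_streetview_request(lat, lng, heading, fov=90, size="640x640", api_key="YOUR_KEY"):
    """Illustrative sketch of a Data Source Manager style query URL for an image trace."""
    params = {"location": f"{lat},{lng}", "heading": heading,
              "fov": fov, "size": size, "key": api_key}
    return "https://maps.googleapis.com/maps/api/streetview?" + urlencode(params)

print(build_streetview_request(52.5163, 13.3777, heading=45))
```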
ContextMonkey External API (exposed to the app developers):
createContextSimulation - create an instance of the ContextMonkey simulation environment
createContextGraph - create an instance of the Context Graph to be used in the simulation
addModality - include a new modality or modalities in the Context Graph
addDependency - add a modality dependency or dependencies in the Context Graph
addDataSource - associate a data source with the modality
addEmulator - include a compatible mobile emulator in the simulation
runTime - specify the context simulation run time period in seconds
start - start the context simulation

Fig. 3: ContextMonkey API Description
Data Source Handler - The Data Source Handler contains the connection logic for fetching traces from the data sources. It multiplexes the execution of trace fetching requests across the various data sources. It executes the queries for fetching trace values from databases, the commands for models, and the read operations on trace files, and forwards the results to the Data Source Manager. It maintains a local cache of fetched traces, and only fetches a trace in case of a cache miss.
2) Context Layer: The Context Layer lies at the heart of ContextMonkey, managing the control and the execution of the simulation. It acts as a bridge between fetching traces from the data sources and feeding them to the emulator. It is both data-source and mobile-emulator agnostic.
Context Data Manager - The Context Data Manager handles all the functionality of the Context Layer. It maintains the context model definition, which includes the context graph, the physical and the virtual modalities, and their dependencies. It computes virtual modality values using these definitions. The manager maintains an individual real-clock timer for each modality, which periodically fires according to its rate of change parameter and calls the asynchronous Trace API to request traces. In order to obtain a modality trace that is dependent on another, the manager keeps track of the current values of its predecessor modalities in the context graph. Furthermore, it registers and implements the callback API function to be notified of successful trace requests. Upon receiving the requested trace, the registered callback function forwards the trace to the Feed Layer.
Context Data Queue - ContextMonkey also supports defining virtual modalities. The virtual modality computation requires trace values of the dependent physical modalities. As the physical modalities can have varying time delays in fetching their traces, the queues store a copy of each fetched trace value before forwarding it to the Feed Layer. Each modality has its own separate queue. Separate queues ensure sequential control logic for individual context modality traces. Computing a virtual sensor function requires a synchronous dequeue from the dependent modality queues.
3) Feed Layer: The Feed Layer provides the glue necessary to bind ContextMonkey to a compatible mobile emulator. Compatibility implies that a mobile emulator should support a suitable API for feeding traces, and also make them available to the app. The main components of the Feed Layer are:
Emulator Manager - The Emulator Manager is the core component that coordinates the activities of the Feed Layer. It maintains the information necessary for integrating a mobile emulator with ContextMonkey. This includes emulator-specific information about the modalities supported by the emulator, the emulator API description of a modality for setting and getting values, their respective formats, and the communication protocol and configuration parameters to be used in calling the emulator API. Furthermore, it handles the asynchronous Feed API exposed to the Context Layer to set and get the individual modality values from the emulator. It forwards received traces to the Feed Format Manager for trace format conversion. Finally, it maps the modality trace set and get requests to the appropriate handler for feeding the trace.
Feed Format Manager - The Feed Format Manager deals with the trace format conversions between ContextMonkey's internal representation format and the emulator data format.
Emulator Handler - The Emulator Handler is responsible for establishing a connection to the emulator using the emulator-specified protocol. It executes requests to set and get the modality values in the emulator by making the appropriate API calls.
4) API Description: The ContextMonkey API description is given in Figure 3. The external APIs are those exposed to the app developers to enable them to specify a context scenario for simulation in the form of graphs, and to configure the simulation-related parameters. This includes defining modalities and their dependencies, the rate of change, registering data sources and their APIs, the emulator and its APIs, the trace formats, and the simulation run period. The internal Trace API and the Feed API are used for the internal communication between the three layers for fetching the traces and feeding them to the emulator.
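To make the Context Data Queue mechanism described above concrete, the sketch below derives a hypothetical virtual modality (walking speed) from two buffered location fixes. The queues, coordinates, and the speed computation are illustrative stand-ins, not the actual ContextDataQueue class.

```python
import math
from collections import deque

# Queues as they might look after two location traces have been fetched.
location_queue = deque([(52.51630, 13.37770), (52.51655, 13.37770)])
time_queue = deque([0.0, 20.0])   # matching timestamps in seconds

def dequeue_speed():
    """Hypothetical virtual modality: speed from two buffered location fixes (m/s)."""
    (lat1, lon1), (lat2, lon2) = location_queue.popleft(), location_queue[0]
    t1, t2 = time_queue.popleft(), time_queue[0]
    metres_per_degree = 111_320.0   # crude equirectangular approximation
    dx = (lon2 - lon1) * metres_per_degree * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * metres_per_degree
    return math.hypot(dx, dy) / max(t2 - t1, 1e-6)

print(f"speed ~ {dequeue_speed():.2f} m/s")
```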
IV. DESIGN ALTERNATIVES

During the ContextMonkey design we considered several alternatives; some were not obvious at the time, but proved quite important in refining the design goals. We discuss them in the following.

A. System Model Alternatives
There are two possible models to implement the ContextMonkey architecture. In a local model, the framework runs on the developer's local workstation, which has an app development environment set up alongside a mobile emulator. For instance, the Android SDKs [20] provide toolchains, platform system images, external libraries, and APIs along with emulators for app development and testing. Alternatively, developers may test their apps remotely on a mobile emulator running on a Virtual Machine (VM) in the cloud. CrowdMeter [21] and Caiipa [22] are some platforms that offer such cloud based testing services. The cloud system model, due to its scalability, offers an additional degree of flexibility by loosely coupling the ContextMonkey layers among multiple VMs. We limit the implementation of the architectural layers to a single dedicated machine. However, we consider both the local workstation and the cloud VM for the evaluation.

B. Layer Organization
One of the simplest ways to realize the ContextMonkey architecture is to place ContextMonkey as a mobile app inside the guest mobile OS in the emulator. Besides losing the emulator and OS platform independence and the seamlessness, this introduces overhead from having to run the app in an emulated environment where code translation from the guest to the host system is necessary. The other four alternatives (illustrated in Figure 4) stem from the possibilities to structure the layers between the emulator and the underlying host platform. The first alternative places the three layers within the mobile emulator. In the second and the third alternative, the layers are distributed between the emulator and the host platform. However, these three alternatives fail to achieve complete emulator independence. The fourth alternative places all the layers on the host platform, external to the emulator, and reflects our design goals.

Fig. 4: Design Alternatives for the organization of the three layers of abstraction.

C. Data Trace Representation
Feeding a trace to an emulator requires format conversion from a data source format to an intermediate common format, and then to an emulator understood format. XML, JSON, and Protobuf [23] are some of the data formats that can be used for representing traces. Both XML and JSON are human-readable, and offer encoding and decoding without knowledge of the schema in advance. However, compared to XML, JSON is compact and less verbose. A binary format like Protobuf needs a predefined schema that has to be compiled before it can be used for encoding and decoding. Being a binary format, Protobuf can seamlessly support encoding and decoding of binary trace modalities like images. Furthermore, it has a smaller footprint compared to XML and JSON, and also offers faster data encoding and decoding. The trace feeding rate is an important metric in order to achieve high fidelity for context simulation. Therefore, we use Protobuf for representing the traces internally within ContextMonkey.

V. IMPLEMENTATION

The ContextMonkey prototype has been implemented in Python. Each architectural layer is organized as a separate Python module, and the components are implemented as separate classes within the module. The individual components of the implementation are shown in Figure 5. At the heart of the ContextMonkey implementation lies the Twisted [24] library that provides support for event driven network programming. ContextMonkey is available as an open-source project at https://github.com/manojrege/contextmonkey. The prototype consists of approximately 8K lines of Python code. In the following, we discuss the implementation in detail.
The contextmodel module includes class definitions for the modality and the Context Graph (DAG) used in expressing context within the framework. The ContextModality abstract class defines an individual context modality and its generic attributes like rate of change, modality dependencies, data-source, and trace-processing library. PhysicalContextModality and VirtualContextModality are two subclasses that extend the ContextModality class. The PhysicalContextModality class represents sensors, location sensors, camera, and network condition modalities within the context. The VirtualContextModality class represents virtual sensors and has a computation attribute specifying the method used to calculate the sensor value. The data sources associated with a context modality are defined in the DataSource abstract class, which includes attributes such as datasourcetype, datasourceformat, and queryparameterlist. The Database, the Model, and the TraceFile classes extend the DataSource class and represent the three types of sources supported by ContextMonkey. The ContextGraph class represents the DAG for context specification. The DAG is implemented using the Python NetworkX library. The node attributes in the DAG are of type PhysicalContextModality and VirtualContextModality, and the edge attributes define the dependencies among them. It also provides methods for getting and adding context modalities in the graph, and performs cyclic dependency checks on the context model.
The ContextSimulation class exposes the ContextMonkey APIs to the developers. It maintains the simulation definition, the environment, and the associated global variables. The ContextEngine class includes the event based Twisted reactor loop that coordinates and schedules the events in each layer. It is also responsible for starting and stopping the simulation. The developers need to import the contextmodel module and the ContextSimulation class to create and run a context simulation.
The ContextManager class is responsible for the generation of physical modality trace requests based on the modality rate of change. It includes a list of timers for each modality based on the TimerService utility from the Twisted library. When a modality timer fires, a trace request is sent by calling the getValue method exposed by the Trace Layer.
The DataSourceManager class handles all the Trace Layer functionality. Its executeFetch method dynamically instantiates and loads a DatasourceHandler class instance for trace fetching. This handler instance then dynamically maps the request to one of the DatabaseRequestHandler, FileRequestHandler, or ModelRequestHandler class instances, based on its modality type.
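The following is a condensed sketch of this rate-driven request generation using Twisted's TimerService. Apart from TimerService itself, the function names and signatures are simplified stand-ins, not the actual ContextMonkey classes.

```python
from twisted.application.internet import TimerService
from twisted.internet import defer, reactor

def execute_fetch(modality):
    """Stand-in for a DataSourceManager-style fetch: returns a Deferred with a fake trace."""
    return defer.succeed({"modality": modality, "value": 42})

def request_trace(modality):
    d = execute_fetch(modality)                               # asynchronous Trace API call
    d.addCallback(lambda trace: print("feed to emulator:", trace))

# One timer per modality, with a period derived from its rate of change.
rates_hz = {"accelerometer": 30.0, "location": 1.0}
timers = [TimerService(1.0 / rate, request_trace, name) for name, rate in rates_hz.items()]
for t in timers:
    t.startService()

reactor.callLater(2, reactor.stop)   # run the sketch for two seconds
reactor.run()
```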
Fig. 5: The component diagram of the ContextMonkey implementation.

The DatabaseRequestHandler class is extendable and provides a set of methods for fetching traces from remote databases. Presently, it supports fetching using the HTTP- and HTTPS-based GET method. It uses the Twisted WebClient, which enables sending specific request methods, setting appropriate headers, and also supports error handling based on response codes. Similarly, the FileRequestHandler and the ModelRequestHandler classes support trace fetching from files and models respectively. Some models like BonnMotion provide commands to generate traces and store them in a file; the ModelRequestHandler then delegates the trace fetching to the FileRequestHandler. The trace fetching requests are handled asynchronously by the underlying Twisted reactor loop. The TraceProcessing class provides a generic interface with methods to filter and process the trace for each context modality. As these are data-source dependent, the developer can use a separate class to implement these methods and enable customized trace processing. Presently, it supports processing of camera and network modality traces from the StreetView and OpenSignal databases respectively. The support for trace format conversion from a data source format to Protobuf is implemented in the TraceFormatManager class. The current version supports conversion between the comma-separated value (CSV), JSON, and byte formats, and Protobuf. The ContextDataQueue class provides support for trace buffering, which is utilized for the computation of virtual sensor values. The traces from the buffer are dequeued as per the feed rate.
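A minimal sketch of an asynchronous HTTP GET in the style of the DatabaseRequestHandler, using Twisted's Agent web client, is shown below. The URL is a placeholder, not an actual ContextMonkey data-source definition.

```python
from twisted.internet import reactor
from twisted.web.client import Agent, readBody

agent = Agent(reactor)

def fetch_trace(url):
    d = agent.request(b"GET", url.encode("ascii"))
    d.addCallback(readBody)                                    # returns the raw response body
    d.addErrback(lambda failure: print("fetch failed:", failure.value))
    return d

d = fetch_trace("http://example.org/traces/network?lat=52.51&lng=13.37")
d.addCallback(lambda body: print("got", len(body) if body else 0, "bytes"))
d.addBoth(lambda _: reactor.stop())
reactor.run()
```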
The EmulatorManager class exposes a setModalityValue method for setting modality values in the emulator. The FeedFormatManager class implements the format conversion from Protobuf to the emulator understood format. The EmulatorHandler provides a generic Connection interface that contains methods to establish a connection with the emulator and to set the trace values of an individual modality in the emulator using the emulator provided API. We have implemented connection interfaces for the stock Android [25] (AndroidConnection) and Genymotion [26] (GenymotionConnection) emulators.
The control flow in each layer is implemented using the Deferred design pattern provided by the Twisted framework. It registers a sequential callback chain on the deferred objects in each stage, consisting of trace request generation, data-source handler mapping, trace request fetching, trace processing, format conversion, and trace feeding.
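The callback chain can be sketched as follows; the stage functions here are placeholders that only mimic the shape of the real pipeline.

```python
from twisted.internet import defer

def generate_request(modality):   return {"modality": modality}
def map_to_handler(request):      request["handler"] = "file"; return request
def fetch(request):               request["raw"] = b"0.1,9.8,0.2"; return request
def process(request):             request["values"] = request["raw"].split(b","); return request
def convert(request):             request["encoded"] = b"protobuf-bytes"; return request
def feed(request):                print("fed", request["modality"]); return request

d = defer.Deferred()
for stage in (generate_request, map_to_handler, fetch, process, convert, feed):
    d.addCallback(stage)          # register the sequential callback chain
d.callback("accelerometer")       # kick off the chain (synchronously, for illustration)
```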
VI. EVALUATION

A. Emulator Independence
In order to evaluate the emulator independence, we present examples demonstrating the ease of integration of ContextMonkey with the stock Android [25] and Genymotion [26] emulators.
1) Stock Android: The stock Android emulator comes bundled with the Android SDK Platform [20], and uses the Quick EMUlator (QEMU) [27] virtualization suite that provides emulation support for several different CPU architectures. The emulator runs the Android guest operating system with the Goldfish kernel in QEMU. The emulator supports a sensor simulation stack with a Telnet based command interface. The command interface enables setting trace values in the Android Hardware Abstraction Layer (HAL), which are then made available to the app space in the emulator. We successfully integrated the stock Android emulator with our ContextMonkey framework. The integration module consists of 338 lines of code. The module contains the AndroidConnection class that implements the interface exposed by the Emulator Handler for connection establishment and API call logic. It establishes a Telnet session over a persistent TCP socket connection. Each modality is implemented in a separate class that extends the GeneralSensorProtocol class and provides a method for setting trace values. We implemented support for 7 sensor modalities with the stock Android emulator.
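For illustration, the simplified sketch below feeds a location fix and an accelerometer value through the stock emulator's Telnet console, in the spirit of the AndroidConnection class; the auth-token handshake required by newer emulator versions is omitted, and the exact command syntax should be checked against the emulator console documentation.

```python
import telnetlib

tn = telnetlib.Telnet("localhost", 5554)               # default console port of the first emulator
tn.read_until(b"OK")                                   # console banner
tn.write(b"geo fix 13.3777 52.5163\n")                 # longitude latitude
tn.write(b"sensor set acceleration 0.1:9.8:0.2\n")     # x:y:z in m/s^2
tn.read_until(b"OK")
tn.close()
```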
TABLE I: Details of emulator integration with ContextMonkey

                    Stock Android   Genymotion
Number of Files     10              8
Lines of Code       338             399
Number of sensors   7               2
2) Genymotion: Genymotion provides a custom GenyShell environment to communicate with the emulator. Similar to the stock Android emulator, the connection and API call logic is implemented through the Emulator Handler provided interface in GenymotionConnection. The integration source code for the Genymotion emulator consists of 399 lines of code. The connection spawns a Telnet server process and the GenyShell as a separate daemon process. The inter-process communication between the Telnet server and the GenyShell process is carried out using the ProcessProtocol provided by Twisted. The API calls to set the trace modality values are sent as commands to the Telnet server, which forwards them to GenyShell to set the values in the emulator.
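A rough sketch of driving the Genymotion shell from Twisted, in the spirit of the process-based integration described above, is given below. The genyshell binary path and the exact gps commands accepted on stdin are assumptions, not verified Genymotion behavior.

```python
from twisted.internet import protocol, reactor

class GenyShellProtocol(protocol.ProcessProtocol):
    def __init__(self, commands):
        self.commands = commands
    def connectionMade(self):
        for cmd in self.commands:
            self.transport.write((cmd + "\n").encode())
        self.transport.closeStdin()
    def outReceived(self, data):
        print(data.decode(), end="")
    def processEnded(self, reason):
        reactor.stop()

cmds = ["gps setlatitude 52.5163", "gps setlongitude 13.3777"]   # assumed GenyShell commands
reactor.spawnProcess(GenyShellProtocol(cmds), "genyshell", ["genyshell"], env=None)
reactor.run()
```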
B. Data Source Independence
To demonstrate data source independence, we integrated modalities from popular external trace databases such as StreetView [14], OpenSignal [15], and FourSquare [16] with ContextMonkey (shown in Table II). We also implemented support for location and mobility modality trace generation using popular models like BonnMotion [17] and libraries like SUMO [18]. Furthermore, modality traces from datasets available on CRAWDAD [19], such as taxi mobility traces [28] and accelerometer traces collected from Android smart-phones while driving in different vehicles [29], were also added.

TABLE II: Description of different data sources integrated with ContextMonkey

Modality       Source                    Type
Location       FourSquare                Database
Mobility       SUMO                      Model
               BonnMotion                Model
               CabSpotting               File
Camera         Google StreetView         Database
Network        OpenSignal                Database
Accelerometer  Mobile Device (driving)   File

C. Extensibility: MobiCamSim - A Location Context Driven Mobile Camera
We demonstrate ContextMonkey's extensibility using the MobiCamSim example. MobiCamSim is a trace-driven, location context based camera simulation that can be used for testing camera-sensing based mobile apps. The present camera modality in the stock Android emulator serves either checker-board or webcam images. We introduce a new camera modality in the stock Android emulator and implement support for feeding image traces to the camera through an API. We then use ContextMonkey to define a new context simulation that includes a location- and compass-dependent camera modality with a predefined field-of-view parameter. We use the StreetView database to feed location based image traces to MobiCamSim (shown in Figure 6).

Fig. 6: MobiCamSim: A location based mobile camera simulation with StreetView in the stock Android emulator.

D. Fidelity
We measure fidelity by comparing the modality vector reporting rate in an app running on a real mobile device to that in the ContextMonkey-instrumented stock Android emulator. The selected modalities include sensors, location, and camera. We use only the stock Android emulator for this evaluation, as the Genymotion emulator has limited simulated sensor stack API support. For our experiments, we use a Moto G phone running Android Lollipop 5.1.1. The stock Android emulator has equivalent hardware with respect to the ARM CPU, memory, sensors, and OS configuration. We report results from 10 trial runs. We measure the ContextMonkey simulation fidelity on a local workstation (2.3 GHz Intel Core i5 processor with 4 GB RAM and Ethernet connectivity), and in our CrowdMeter [30] cloud testbed (t2.medium virtual machine with 2 virtual CPU cores and 4.0 GB RAM from the Amazon Web Services (AWS) Elastic Compute Cloud (EC2) [31]).
Sensors - We use accelerometer and compass modality traces captured on a Moto G phone and stored in a file to measure the sensor simulation fidelity. For the sake of brevity, we only present results for the accelerometer modality. Our analysis is equally applicable to other sensors such as temperature, rotation, gravity, gyroscope, light, etc. present on typical Android devices. In the app, we configure the four delay settings provided in the Android API for reporting sensor vector value events: FASTEST, GAME, UI, and NORMAL, and measure their respective average reporting rates over a 2 min interval. The FASTEST, GAME, UI, and NORMAL settings report events at 100 Hz, 50 Hz, 20 Hz, and 5 Hz respectively.
Fig. 7: ContextMonkey fidelity measurements in the app space. A comparison of (a) accelerometer reporting rate, (b) location reporting rate, and (c) camera image capture latency measured on a Moto G phone, and on a stock Android emulator instrumented with a ContextMonkey simulation running on the local workstation and a VM in the cloud testbed.

We set these values as feeding rates in the ContextMonkey based accelerometer simulation, and then measure the reporting rate in the app within the emulator. We observed that the maximum achievable average reporting rate in the app in the case of simulation is 1.20 Hz (local workstation) and 1.23 Hz (cloud VM). On further investigation, we found that although ContextMonkey is able to feed values to the simulated sensor stack within the emulator at the configured rate, the stack only reports events to the app every 800 ms (1.25 Hz). We modified the emulator source to change the reporting period as per the FASTEST, GAME, UI, and NORMAL values measured on the Moto G phone. Figure 7(a) shows the bar plots comparing the average event reporting rates (over a 2 min interval) for the accelerometer on a Moto G phone, and in a simulation running on the local workstation and a VM in the cloud testbed. After this fine-tuning, the average reporting rate in simulation improved significantly to 47.7 Hz (local workstation) and 47.9 Hz (cloud VM), but fell short of the FASTEST value (100 Hz). Our analysis suggests that the bottleneck lies in the simulated sensor stack, which has to report events to the HAL and then to the app space, and the time required for this is larger than 10 ms. Also, the binary opcode translation for the ARM CPU did not play a big role, as only a marginal improvement was observed in the average reporting rate for the emulator with an x86 CPU: 48.5 Hz (local workstation) and 48.9 Hz (cloud VM). We plan to address this in our future work. However, ContextMonkey was consistently feeding traces to the emulator at the desired rate for the sensor simulation.
Location - Location is an important modality for context. We use trace files generated from a BonnMotion model to measure the location simulation fidelity. Android based devices provide six different types of location data services to the app: cached GPS, cached network (Cell-id, WiFi), real-time GPS, real-time network, and passive (piggybacked location updates received based on other apps' location requests). The Android API also provides minTime and minDistance settings that serve as lower bounds for delivering location updates to the app. We configure the app to use a GPS provider and obtain location updates as fast as possible (minTime=0 and minDistance=0). We do not consider other factors such as the time to get a location fix, the number of satellites, accuracy, etc. On a Moto G phone, the GPS provider location update events are reported at the rate of 0.05 Hz. We use this value to configure the location simulation.
TABLE III: Description of context simulation parameters

Modality       Frequency (Hz)   Data-Source   Type
Accelerometer  30               Traces        File
Location       1                BonnMotion    File
Camera         0.01             StreetView    Database
Network        0.01             OpenSignal    Database
From Figure 7(b), one can see that, unlike in the sensor simulation, in the location simulation (on both the local workstation and a VM in the cloud testbed) it is possible to achieve a reporting rate equivalent to that of a Moto G phone.
Camera - We use MobiCamSim to measure the camera simulation fidelity using capture latency as a metric. The image capture latency is defined as the amount of time taken to capture a picture using a camera app. It also includes the time required to store the picture of a particular resolution on the device file system, the internal memory, or the external storage. We use a prototyped Android camera app for the measurements. The resolution of the picture is set to 640 x 640 (the maximum resolution provided by StreetView). Figure 7(c) illustrates the bar plots for the camera capture latency measured on a Moto G phone and in MobiCamSim on a local workstation and a cloud VM. The image capture latency of the simulation on the local workstation and a cloud VM is 1.14 and 1.04 times that of the real Moto G camera, respectively. Overall, we are encouraged by the fidelity results. In the future, we plan to address the sensor stack bottleneck through a new stack design for the stock Android emulator.
E. Microbenchmark - Maximum Achievable Feeding Rate
We instrumented the ContextMonkey code to measure the time spent in each individual component of the simulation, and thus the maximum achievable feeding rate. The maximum achievable feeding rate is measured as the inverse of the total time interval from the generation of a trace request to the feeding of the trace to the emulator via an API call. We simulated the A Walk at the Brandenburg Gate context example using the accelerometer, the location, and the location dependent camera and network modalities (details given in Table III). Table IV shows the component-wise latency breakdown for each modality in the context simulation run on the local workstation and a cloud VM. The maximum achievable feeding rate of a modality trace is largely dependent on its data-source type and the trace format.
TABLE IV: ContextMonkey component latency breakdown on the local workstation and a cloud VM. The average, maximum, and minimum statistics are calculated across 100 trial runs (all values in ms).

Component / Modality - Data-source        Local Workstation (Avg / Max / Min)   Cloud VM (Avg / Max / Min)

DataSource Handler
  Accelerometer - Trace File (JSON)       0.09 / 0.12 / 0.03                    0.03 / 0.05 / 0.02
  Location - Model (Text)                 0.024 / 0.028 / 0.022                 0.017 / 0.03 / 0.014
  Camera - StreetView (Binary)            97.88 / 164.88 / 67.57                19.62 / 146.47 / 10.04
  Network - OpenSignal (JSON)             867.37 / 1255.18 / 778.54             791.23 / 946.70 / 739.44

DataSource Manager - Processing
  StreetView                              42.12 / 53.11 / 38.11                 18.42 / 29.64 / 13.49
  OpenSignal                              0.09 / 0.11 / 0.08                    0.07 / 0.1 / 0.5

Trace Format Manager - Encoding
  Accelerometer - Trace File              0.00 / 0.00 / 0.00                    0.00 / 0.00 / 0.00
  Location - Model                        0.14 / 2.8 / 0.11                     0.08 / 1.0 / 0.03
  Camera - StreetView                     0.00 / 0.00 / 0.00                    0.00 / 0.00 / 0.00
  Network - OpenSignal                    0.15 / 0.19 / 0.14                    0.14 / 0.16 / 0.11

Feed Format Manager - Decoding
  Accelerometer - Trace File              0.00 / 0.00 / 0.00                    0.00 / 0.00 / 0.00
  Location - Model                        0.01 / 0.04 / 0.01                    0.01 / 0.03 / 0.00
  Camera - StreetView                     0.09 / 0.14 / 0.09                    0.07 / 0.1 / 0.06
  Network - OpenSignal                    0.02 / 0.04 / 0.02                    0.02 / 0.03 / 0.01

Emulator Handler (API call)
  Accelerometer - Trace File              0.42 / 1.20 / 0.33                    0.37 / 1.18 / 0.31
  Location - Model                        0.40 / 1.28 / 0.35                    0.39 / 1.10 / 0.37
  Camera - StreetView                     8.20 / 9.23 / 7.95                    3.63 / 4.85 / 2.67
  Network - OpenSignal                    0.05 / 0.06 / 0.05                    0.05 / 0.05 / 0.03

Total Time
  Accelerometer - Trace File              0.49 / 2.01 / 0.49                    0.49 / 1.35 / 0.49
  Location - Model                        0.50 / 2.05 / 0.50                    0.49 / 1.45 / 0.48
  Camera - StreetView                     152.13 / 221.13 / 118.23              45.64 / 179.34 / 30.12
  Network - OpenSignal                    867.90 / 1255.12 / 778.90             791.99 / 949.23 / 739.78
We discuss the smallest maximum achievable feeding rate observed for the different data sources (across 100 trial runs). The smallest maximum achievable feeding rate for the accelerometer modality using trace files is 497 Hz (local workstation) and 740 Hz (cloud VM). These results support our earlier analysis that the bottleneck lies in the simulated sensor stack of the emulator. In the case of the location modality using a model, the smallest maximum achievable feeding rate observed is 489 Hz (local workstation) and 687 Hz (cloud VM). These results clearly demonstrate that ContextMonkey can easily achieve the feeding rates required for high-frequency sensors when using trace files and models. For the camera and the network modality, which use a trace database, the maximum achievable feeding rate varies from 4.5 Hz to 8.4 Hz (StreetView) and 0.7 Hz to 1.2 Hz (OpenSignal) on a local workstation. On a cloud VM, the maximum feeding rate increases due to the lower end-to-end network path latency and fewer hops: 5.5 Hz to 33.2 Hz (StreetView) and 1.05 Hz to 1.35 Hz (OpenSignal). In the case of a database, the feeding rate is limited by the end-to-end network path latency and the backend response time (which together account for more than 98% of the total).
F. ContextMonkey Case Studies
We have carried out two case studies using ContextMonkey. Our goal here is to demonstrate the simplicity and potential of ContextMonkey in mobile app testing and evaluation, and not the novelty of the apps themselves.
1) GeoCrowdSens: GeoCrowdSens is a crowd-sensing based Android app that uses geofences to trigger the collection of data related to surrounding events using mobile devices [32]. The app requests the user to respond to a simple text query, take pictures, video snaps, etc. A geofence defines a geographical area around a point of interest, specified by fine-grained and coarse-grained radius parameters. The coarse-grained radius triggers an injection and in-app deployment of the geofence over the network, while the fine-grained radius triggers the collection of data when the user enters the area of interest. Reliable triggering of a geofence depends on the coarse-grained radius, the user speed, the network conditions, and the location service provider.
We use ContextMonkey to simulate context for GeoCrowdSens in the stock Android emulator deployed on a VM in the CrowdMeter testbed. The test set-up includes a hermetic server [33] running locally on the same VM in the testbed. The 3G network is simulated using the built-in network simulation support provided by the emulator (leaky bucket with traffic shaping). For the purpose of simulation, we collected location (GPS location provider) and 3G network traces for different contexts - walking, running, driving, and commuting (train) - using a Moto G phone. The 3G network traces included the end-to-end latency, the bandwidth, and the packet loss between the phone and the GeoCrowdSens backend service. We used the Ping utility and the Iperf [34] tool for the network trace collection. The location and the network traces were then included in a file and used in the ContextMonkey simulation. The simulation consists of 16 lines of code (shown in Appendix A). For the simulation, the rate of change of location is set to 20 s.
Fig. 8: A comparison of geofence hit percentage in (hermetic and non-hermetic) simulation with the real field for different mobility scenarios (walking, biking, driving, train; coarse-grained radii of 20, 50, 100, and 300 m).
Fig. 9: Fine-grained CamTour response time analytics on a 3G network using ContextMonkey. A comparison of (a) response time estimates for a network carrier using ContextMonkey on a local workstation and a cloud VM, (b) response time estimates for different network carriers in Berlin, and (c) response time estimates at different geographical locations.

We present the results from 30 trial runs for the impact of the user speed and the coarse-grained radius on the detection (hit) percentage of geofences. The hit percentage is the fraction of geofences that were successfully triggered for both the enter and the exit events. The bar plots in Figure 8 compare the hit percentage in the field test and in the hermetic simulation for the 3G network. As expected, the results show an increasing trend in the hit percentage with the increase in the coarse-grained radius for the different mobility scenarios. For walking (5 km/h), a coarse-grained radius as small as 20 m can achieve 96.7% hits in hermetic simulation. For an average biking speed (20 km/h), 100% hits are achievable for an increased radius of 100 m. In the case of driving (40-80 km/h) and commuting by train (>80 km/h), 100% hits are obtained for radii greater than 300 m, while radii smaller than 100 m fail to trigger even a single geofence. Next, we compare the hermetic simulation results to small-scale field measurements. The worst-case percentage error between the field-test and hermetic-simulation estimated hit percentages for walking, biking, driving, and commuting (train) is 0.03%, 8.3%, 9.8%, and 3.7% respectively. These results demonstrate that it is possible to estimate an appropriate threshold coarse-grained radius value based on user speeds and the network using a ContextMonkey simulation. We also carried out non-hermetic test simulation experiments, using location traces from a BonnMotion model and network statistics reported in the literature [35], [36]. In these experiments, the worst-case percentage error for biking, driving, and commuting (train) increased to 61%, 66%, and 33% respectively, while it remained similar in the case of walking (0.03%). These test results highlight the impact that the test set-up and the selected trace data source within a ContextMonkey simulation can have on the estimated app performance parameters.
2) CamTour: CamTour is a camera based Android app, inspired by Google Glass, that provides information about landmarks in cities. A user snaps a picture of a landmark; CamTour then uses the location coupled with the picture content to search for and deliver landmark related information to the user. Response time is an important performance metric for CamTour. Developers usually measure response times within a limited scope using real devices. While such measurements are quite valuable, obtaining fine-grained response time analytics based on real world landmark imagery is desirable for apps such as CamTour. We use ContextMonkey to simulate context for CamTour in the stock Android emulator. We define a context based on a location dependent, trace driven camera fed from StreetView image traces, and 3G network conditions (latency, bandwidth, and packet loss) from the OpenSignal network traces. The emulator is deployed on a VM in the CrowdMeter testbed. It is not possible to deploy a hermetic setup in the CamTour case study due to an external API dependency [37] for picture content search. Therefore, we use a non-hermetic setup, with the CamTour backend running on the production Google App Engine [38] server. The source code for the ContextMonkey simulation consists of 18 lines of code (given in Appendix B). We measure the CamTour response time estimates under a given load for 5 different landmarks in 3 different cities, and for 4 different network carriers. The results from 20 trial runs are shown in Figure 9. We have labeled the network carriers C1-C4 to mask their identity and to focus on the significance of the results. In Figure 9a, we compare the response times obtained in the simulation with real measurements obtained from a 3G network carrier in Berlin. The mean response time percentage error between
the simulation and the real measurements is 31.8% on a local workstation and 7.8% in the CrowdMeter cloud testbed. Next, we discuss the response time analytics obtained from the simulations in the CrowdMeter testbed. The app response times across different network carriers in Berlin vary significantly (see Figure 9b). The average 3G latencies on carrier C1 are lower by 73.5% compared to carrier C3. As a result, the average response times observed on C1 are 39.4% lower than on C3. Figure 9c illustrates the CamTour estimated response times on the C1 network carrier across three different cities around the world. It provides insight into the impact of the CamTour users' geographical location on the response times.

VII. RELATED WORK

Our work builds on top of previous research efforts in context modeling, and complements the existing literature on context enabled testing. Context modeling is well studied in the area of Pervasive and Ubiquitous Computing. SimuContext [39] is a context simulation framework for testing context aware applications that provides separation between the data sources and the context-aware logic. ContextMonkey, however, is aimed at simulating mobile app specific contexts within mobile emulators. There is a wide array of tools and platforms focusing on different aspects of testing like fault detection, crashes and bugs, UI, functionality, etc. For example, AppInsight [40] is an instrumentation framework that focuses on identifying faults within mobile apps. VanarSena [41] utilizes instrumentation to find bugs in mobile apps. Caiipa [22] is a scalable cloud based testing service to discover performance bugs and crashes, and hosts a predefined set of contextual trace libraries for black box testing. ContextMonkey nicely complements such platforms and services. It can be utilized by app developers, as well as by large-scale cloud based testing services, for context enabled testing. OpenIntents [42] and the Samsung Sensor Simulator [43] provide simplified individual sensor simulation capabilities on mobile emulation platforms. The authors in [44] come closest to our goal. They propose an Android based context simulator that uses app source instrumentation with special libraries. In contrast, ContextMonkey does not require app code instrumentation, and it offers a wider range of possibilities as it provides context simulation support across emulators using heterogeneous trace data sources.

VIII. CONCLUSIONS

We presented the ContextMonkey framework for context generation in mobile emulators. The key focus of ContextMonkey is to promote the use of realistic context in the testing and evaluation of mobile apps. Accordingly, the ContextMonkey framework provides cross-emulator support for context generation using trace-driven simulation. It lessens the burden on the developers by automating interdependent trace generation from heterogeneous sources and feeding the traces to the emulator. Our evaluation of the ContextMonkey framework is based on comparing the performance of two different mobile apps in the real field (on a micro scale) with the simulations generated by the framework in the emulator. In the future, we plan
VII. RELATED WORK

Our work builds on previous research efforts in context modeling and complements the existing literature on context-enabled testing. Context modeling is well studied in the area of pervasive and ubiquitous computing. SimuContext [39] is a context simulation framework for testing context-aware applications that separates the data sources from the context-aware logic. ContextMonkey, in contrast, is aimed at simulating mobile-app-specific contexts within mobile emulators. There is a wide array of tools and platforms focusing on different aspects of testing, such as fault detection, crashes and bugs, UI, and functionality. For example, AppInsight [40] is an instrumentation framework that focuses on identifying faults within mobile apps. VanarSena [41] uses instrumentation to find bugs in mobile apps. Caiipa [22] is a scalable cloud-based testing service that discovers performance bugs and crashes, and it hosts a predefined set of contextual trace libraries for black-box testing. ContextMonkey complements such platforms and services: it can be used by app developers as well as by large-scale cloud-based testing services for context-enabled testing. OpenIntents [42] and the Samsung Sensor Simulator [43] provide simplified simulation of individual sensors on mobile emulation platforms. The authors of [44] come closest to our goal: they propose an Android-based context simulator that instruments the app source code with special libraries. In contrast, ContextMonkey does not require app code instrumentation, and it offers a wider range of possibilities by providing context simulation support across emulators using heterogeneous trace data sources.

VIII. CONCLUSIONS

We presented the ContextMonkey framework for context generation in mobile emulators. The key focus of ContextMonkey is to promote the use of realistic context in the testing and evaluation of mobile apps. Accordingly, the ContextMonkey framework provides cross-emulator support for context generation using trace-driven simulation. It lessens the burden on developers by automating the generation of interdependent traces from heterogeneous sources and their feeding to the emulator. Our evaluation of the ContextMonkey framework compares the performance of two different mobile apps measured in the field (on a micro scale) with the simulations generated by the framework in the emulator. In the future, we plan to extend the ContextMonkey framework to support scalable context simulation that facilitates parallelized testing of mobile apps.

ACKNOWLEDGMENTS

This work has been partially supported by an Amazon AWS Research Grant. Special thanks to our shepherd, Antinisca Di Marco, and the anonymous reviewers for their suggestions on improving the technical content and presentation of the paper.

REFERENCES

[1] "Why your mobile apps are failing?" http://www.perfectomobile.com/why-mobile-apps-fail, accessed: 25/9/2016.
[2] "Apigee survey: Users reveal top frustrations that lead to bad mobile app reviews," http://apigee.com/about/press-release/apigee-survey-users-reveal-top-frustrations-lead-bad-mobile-app-reviews.
[3] C. Bettini, O. Brdiczka, K. Henricksen, J. Indulska, D. Nicklas, A. Ranganathan, and D. Riboni, "A survey of context modelling and reasoning techniques," Pervasive Mob. Comput., vol. 6, no. 2, pp. 161-180, Apr. 2010.
[4] G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles, "Towards a better understanding of context and context-awareness," in Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, ser. HUC '99. London, UK: Springer-Verlag, 1999, pp. 304-307.
[5] G. Chen and D. Kotz, "A survey of context-aware mobile computing research," Hanover, NH, USA, Tech. Rep., 2000.
[6] T. Strang and C. Linnhoff-Popien, "A context modeling survey," in Workshop on Advanced Context Modelling, Reasoning and Management, UbiComp 2004, Nottingham, England, 2004.
[7] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, "Context aware computing for the internet of things: A survey," IEEE Communications Surveys Tutorials, vol. 16, no. 1, pp. 414-454, First 2014.
[8] M. Knappmeyer, S. L. Kiani, C. Frà, B. Moltchanov, and N. Baker, "Contextml: A light-weight context representation and context management schema," in Proceedings of the 5th IEEE International Conference on Wireless Pervasive Computing, ser. ISWPC '10, 2010, pp. 367-372.
[9] "Composite Capabilities/Preference Profiles," https://www.w3.org/Mobile/CCPP/, accessed: 11/01/2017.
[10] D. Ejigu, M. Scuturici, and L. Brunie, "An ontology-based approach to context modeling and reasoning in pervasive computing," in Pervasive Computing and Communications Workshops, March 2007, pp. 14-19.
[11] K. Henricksen, J. Indulska, and A. Rakotonirainy, "Generating context management infrastructure from high-level context models," in 4th International Conference on Mobile Data Management (MDM), Industrial Track, 2003, pp. 1-6.
[12] G. Chen and D. Kotz, "Context aggregation and dissemination in ubiquitous computing systems," in 4th IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2002), Callicoon, NY, USA, June 2002, p. 105.
[13] M. R. Rege, V. Handziski, and A. Wolisz, "Poster: A context simulation harness for realistic mobile app testing," in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, ser. MobiSys '15. ACM, 2015, pp. 489-489.
[14] "Google Street View," https://www.google.com/maps/views/streetview.
[15] "OpenSignal Maps," http://opensignal.com.
[16] "Foursquare," https://developer.foursquare.com/.
[17] N. Aschenbruck, R. Ernst, E. Gerhards-Padilla, and M. Schwamborn, "Bonnmotion: A mobility scenario generation and analysis tool," in Proceedings of the 3rd International ICST Conference on Simulation Tools and Techniques, ser. SIMUTools '10. ICST, Brussels, Belgium, 2010, pp. 51:1-51:10.
[18] M. Behrisch, L. Bieker, J. Erdmann, and D. Krajzewicz, "Sumo - simulation of urban mobility: An overview," in SIMUL 2011, The Third International Conference on Advances in System Simulation, 2011, pp. 63-68.
[19] D. Kotz, T. Henderson, I. Abyzov, and J. Yeo, "CRAWDAD dataset dartmouth/campus (v. 2009-09-09)," downloaded from http://crawdad.org/dartmouth/campus/20090909, Sep. 2009.
[20] "Android Studio," https://developer.android.com/sdk.
[21] M. R. Rege, V. Handziski, and A. Wolisz, "Crowdmeter: An emulation platform for performance evaluation of crowd-sensing applications," in Proceedings of the 2013 ACM UBICOMP Adjunct Publication. ACM, 2013, pp. 1111-1122.
[22] C.-J. M. Liang, N. D. Lane, N. Brouwers, L. Zhang, B. F. Karlsson, H. Liu, Y. Liu, J. Tang, X. Shan, R. Chandra, and F. Zhao, "Caiipa: Automated large-scale mobile app testing through contextual fuzzing," in Proceedings of the 20th Annual International Conference on Mobile Computing and Networking. ACM, 2014, pp. 519-530.
[23] "Protocol Buffers," https://developers.google.com/protocol-buffers/.
[24] "Twisted," https://twistedmatrix.com/.
[25] "Android Emulator," http://developer.android.com/tools/help/emulator.html, accessed: 10/03/2013.
[26] "Genymotion: A faster Android emulator," https://www.genymotion.com.
[27] F. Bellard, "Qemu, a fast and portable dynamic translator," in Proceedings of the Annual Conference on USENIX Annual Technical Conference, ser. ATEC '05. Berkeley, CA, USA: USENIX Association, 2005, pp. 41-41.
[28] M. Piorkowski, N. Sarafijanovic-Djukic, and M. Grossglauser, "CRAWDAD dataset epfl/mobility (v. 2009-02-24)," downloaded from http://crawdad.org/epfl/mobility/20090224, Feb. 2009.
[29] M. Jain, A. P. Singh, S. Bali, and S. Kaul, "CRAWDAD dataset jiit/accelerometer (v. 2012-11-03)," downloaded from http://crawdad.org/jiit/accelerometer/20121103, Nov. 2012.
[30] M. R. Rege, V. Handziski, and A. Wolisz, "Demonstration abstract: Crowdmeter: Predicting performance of crowd-sensing applications using emulations," in Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, ser. IPSN '14. Piscataway, NJ, USA: IEEE Press, 2014, pp. 311-312.
[31] "Amazon Web Services," http://aws.amazon.com/, accessed: 10/03/2013.
[32] S. Gaonkar, J. Li, R. R. Choudhury, L. P. Cox, and A. Schmidt, "Microblog: sharing and querying content through mobile phones and social participation," in MobiSys, 2008, pp. 174-186.
[33] "Hermetic servers," https://testing.googleblog.com/2012/10/hermetic-servers.html.
[34] "Iperf," http://iperf.sourceforge.net/, accessed: 10/03/2013.
[35] Q. Xiao, K. Xu, D. Wang, L. Li, and Y. Zhong, "TCP performance over mobile networks in high-speed mobility scenarios," in 22nd IEEE International Conference on Network Protocols, ICNP 2014, Raleigh, NC, USA, October 21-24, 2014, pp. 281-286.
[36] J. Huang, Q. Xu, B. Tiwana, Z. M. Mao, M. Zhang, and P. Bahl, "Anatomizing application performance differences on smartphones," in Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, ser. MobiSys '10. ACM, 2010, pp. 165-178.
[37] "Google cloud vision api," https://cloud.google.com/vision/.
[38] "Google Cloud Platform - App Engine," https://cloud.google.com/appengine/, accessed: 10/1/2017.
[39] T. Broens and A. van Halteren, "Simucontext: Simply simulate context," in International Conference on Autonomic and Autonomous Systems, ICAS '06, July 2006, pp. 45-45.
[40] L. Ravindranath, J. Padhye, S. Agarwal, R. Mahajan, I. Obermiller, and S. Shayandeh, "Appinsight: mobile app performance monitoring in the wild," in Proceedings of the 10th USENIX Conference on Operating Systems Design and Implementation, ser. OSDI '12, 2012, pp. 107-120.
[41] L. Ravindranath, S. Nath, J. Padhye, and H. Balakrishnan, "Automatic and scalable fault detection for mobile applications," in Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services, ser. MobiSys '14. ACM, 2014, pp. 190-203.
[42] "Openintents," https://code.google.com/p/openintents/.
[43] "Samsung Sensor Simulator," http://developer.samsung.com/android/tools.
[44] V. Vieira, K. Holl, and M. Hassel, "A context simulator as testing support for mobile apps," in Proceedings of the 30th Annual ACM Symposium on Applied Computing, ser. SAC '15. ACM, 2015, pp. 535-541.
APPENDIX
CONTEXTMONKEY - CONTEXT SIMULATION CODE

A. Context Simulation: GeoCrowdSens

import contextmonkey.ContextMonkeyEngine as engine
import contextmonkey.ContextSimulation as sim
from contextmonkey.contextmodel.DataSource import Database
from contextmonkey.contextmodel.DataSource import TraceFile
from contextmonkey.Contextmodel import PhysicalContextModality
from contextmonkey.Emulator import Emulator

# Modalities and data sources
dsnet = TraceFile(datasourceid=1,
                  datasourcetype='file',
                  datasourceformat='text',
                  path='../tracefiles/network_driving.txt',
                  header='true')
nwmod = PhysicalContextModality(name='network',
                                samplingrate=1,
                                feedrate=1,
                                datasource=dsnet)

dsloc = TraceFile(datasourceid=1,
                  datasourcetype='file',
                  datasourceformat='text',
                  path='../tracefiles/location.txt',
                  header='true')
locmod = PhysicalContextModality(name='location',
                                 samplingrate=0.05,
                                 feedrate=0.05,
                                 datasource=dsloc)

# Create context model: location and network are correlated modalities
sim.addModality([locmod, nwmod])
sim.createContextGraph(locmod.name, nwmod.name,
                       e0=(locmod.name, nwmod.name))

# Add emulator
emulator = Emulator('Android', {'addr': '127.0.0.1:5554'})
sim.addEmulator(emulator)

# Start simulation (run for 600 seconds)
sim.runTime(600)
engine.start()

Listing 1: GeoCrowdSens
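Listing 1 targets the stock Android emulator. Since the emulator is referenced only through the Emulator handle, retargeting the same simulation at another emulator platform, e.g., Genymotion [26], should in principle only require a different handle. The fragment below is a hypothetical sketch rather than documented usage: the 'Genymotion' identifier string and the device address are assumed values that do not appear in the listings.

from contextmonkey.Emulator import Emulator

# Hypothetical variation of the last lines of Listing 1 (identifier and address
# are assumptions, not documented values): point the simulation at a Genymotion
# virtual device instead of the stock Android emulator.
emulator = Emulator('Genymotion', {'addr': '192.168.56.101:5555'})
sim.addEmulator(emulator)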
B. Context Simulation: CamTour
import contextmonkey.ContextMonkeyEngine as engine
import contextmonkey.ContextSimulation as sim
from contextmonkey.contextmodel.DataSource import Database
from contextmonkey.contextmodel.DataSource import TraceFile
from contextmonkey.Contextmodel import PhysicalContextModality
from contextmonkey.Emulator import Emulator

# Modalities and data sources
# Network statistics fetched from the OpenSignal API
dsnet = Database(queryparameterlist=['lat', 'lng', 'distance', 'network_type', 'apikey'],
                 keymapping=dict(lat='latitude', lng='longitude'),
                 datasourceformat="json",
                 url='http://api.opensignal.com/v2/networkstats.json',
                 queryparameters=dict(lat="37.7907", lng="122.4058", distance="20",
                                      network_type="3", apikey="@@@@@@@@@"),
                 fetchtype="HTTP")
netmod = PhysicalContextModality(name='network',
                                 samplingrate=0.01,
                                 feedrate=0.01,
                                 datasource=dsnet,
                                 traceprocessing="OpenSignalNetworkModify",
                                 tracevector=['name', 'rssi', 'averageRssiDb',
                                              'downloadSpeed', 'uploadSpeed', 'pingTime'])

# GPS trace read from a local trace file (tracefilepath is a placeholder variable)
tfloc = TraceFile(datasourcetype='file',
                  datasourceformat='text',
                  path=tracefilepath,
                  header='true',
                  length=100)
locmod = PhysicalContextModality(name='gps',
                                 samplingrate=0.01,
                                 feedrate=0.01,
                                 datasource=tfloc,
                                 tracevector=['accuracy', 'altitude', 'bearing',
                                              'latitude', 'longitude', 'status'],
                                 traceprocessing="LocationModify")

# Camera images fetched from the Google Street View API
dbcam = Database(queryparameterlist=['size', 'location', 'heading', 'pitch', 'key'],
                 keymapping=dict(size="600x400", location="latitude,longitude",
                                 heading="151.78", pitch="0.76"),
                 datasourcetype="database",
                 datasourceformat="binary",
                 url='https://maps.googleapis.com/maps/api/streetview',
                 queryparameters=dict(size="640x480", location="46.414382,10.013988",
                                      heading="151.78", pitch="0.76"),
                 extension='jpg',
                 fetchtype="HTTPS")
cammod = PhysicalContextModality(name='camera',
                                 samplingrate=0.01,
                                 feedrate=0.01,
                                 datasource=dbcam,
                                 tracevector=['image', 'original', 'encoded', 'camera_type'],
                                 traceprocessing="StreetViewModify")

# Create context model: network and camera modalities depend on the GPS location
sim.addModality([cammod, locmod, netmod])
sim.createContextDependencyGraph(locmod.name, netmod.name, cammod.name,
                                 e0=(locmod.name, netmod.name),
                                 e1=(locmod.name, cammod.name))

# Add emulator
emulator = Emulator('Android', {'addr': '127.0.0.1:5554'})
sim.addEmulator(emulator)

# Start simulation (run for 600 seconds)
sim.runTime(600)
engine.start()
Listing 2: CamTour