SMARTCOMP 2014
A Crowdsourcing Approach to Promote Safe Walking for Visually Impaired People
Chi-Yi Lin, Shih-Wen Huang, and Hui-Huang Hsu*
Dept. of Computer Science and Information Engineering, Tamkang University, Taipei, Taiwan, R.O.C.
[email protected]*
Abstract—Visually impaired people have difficulty walking freely because of the obstacles and stairways along their walking paths, which can lead to accidental falls. Much research has been devoted to promoting safe walking for visually impaired people using smartphones and computer vision. In this research we propose an alternative approach to the same goal: we take advantage of the power of crowdsourcing combined with machine learning. Specifically, using smartphones carried by a large number of visually normal people, we collect tri-axial accelerometer data along with the corresponding GPS coordinates over large geographic areas. Machine learning techniques are then used to analyze the data, turning them into a special topographic map in which the regions of outdoor stairways are marked. With the map installed in the smartphones carried by visually impaired people, the Android App we developed can monitor their current outdoor locations and play an acoustic alert when they get close to a stairway.

Keywords—Accelerometer; Crowdsourcing; Machine learning; Smartphone; Visually impaired

I. INTRODUCTION
Vision loss is a common public health problem around the globe. According to the fact sheet published by the World Health Organization (WHO) in October 2013, an estimated 285 million people are visually impaired worldwide, among whom 39 million are blind [1]. Blind and visually impaired people face two major challenges in independent living: difficulty in accessing printed material, and the stressors associated with safe and efficient navigation [2]. To prevent accidental falls, visually impaired people are trained to use white canes or guide dogs to assist walking. With advances in information and communication technologies, high-speed mobile Internet access and a wide variety of smartphones are becoming ever more popular. Combined with cloud computing, many innovative applications are delivered as smartphone Apps to make people's lives more convenient. In the literature there has been a great deal of research on safe living for visually impaired people assisted by smartphones and computer vision. For instance, many studies use built-in or external cameras to detect the status of pedestrian signals [3], the region of a zebra crossing [4], or the obstacles along the walking path [5][6][7]. Alternatively, the use of tri-axial accelerometer sensors to detect
the walking behaviors of visually impaired people is explored in [8]. By analyzing the collected acceleration patterns, behaviors such as walking on a flat road, descending steps, or waiting at stoplights can be distinguished. Many others use tri-axial accelerometers along with other sensors such as gyroscopes, magnetometers, or even Wi-Fi fingerprinting in smartphones and/or inertial measurement units (IMUs) to assist indoor and outdoor navigation [9][10][11]. Among the studies in the literature we found [12] particularly interesting. The authors use machine learning techniques to detect the freezing of gait (FoG) behavior of patients with Parkinson's disease, using smartphones as an unobtrusive and inexpensive wearable assistant. This motivated us to design a similar system that also uses machine learning techniques, but detects the regions of outdoor stairways instead. The rationale is that if the regions of outdoor stairways can be learned, visually impaired people carrying smartphones can be alerted when they get close to those regions. We believe this will help prevent accidental falls, thereby promoting their walking safety. Specifically, we use the built-in accelerometer and GPS receiver of smartphones to generate a special topographic map for the visually impaired. In this topographic map, the approximate regions of outdoor stairways are marked by analyzing the tri-axial accelerometer data collected by visually normal people. With the map installed in the smartphones carried by visually impaired people, the Android App we developed can monitor their current locations and play an acoustic alert when necessary. What distinguishes our work from [12] is the way the accelerometer data are collected. In our work the accelerometer data are collected with the aid of visually normal people who play the role of contributors, while in [12] the data are collected from the patients themselves, who are also the system users (i.e., the beneficiaries). However, the task of collecting accelerometer data covering very large geographical areas is non-trivial. To do so, we take a crowdsourcing approach [13]. That is, we envision that many visually normal people will be willing to help build the special topographic map for the visually impaired. They can participate in the data collection simply by running an Android App in the background that keeps recording the tri-axial accelerometer data and the
corresponding GPS coordinates. The data are then uploaded to a cloud computing platform for analysis by machine learning techniques. If a window of accelerometer data points matches the pattern that represents either walking upstairs or downstairs, the GPS coordinates within the window are marked as stairways. The overall concept of the proposed idea is illustrated in Fig. 1.
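To make this marking step concrete, the following is a hypothetical server-side sketch. The Sample and GaitClassifier types, the window size, and all names are our own illustrative assumptions, not the system's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: walk a stream of uploaded samples in fixed-size
// windows; when a window is classified as Up or Down, mark every GPS
// coordinate inside it as a stairway candidate.
public class StairwayMarker {

    // One uploaded sample: tri-axial acceleration plus a GPS fix.
    static class Sample {
        double ax, ay, az;      // accelerometer reading (m/s^2)
        double lat, lon;        // GPS coordinates
    }

    // Placeholder for the trained classifier described in Section III.
    interface GaitClassifier {
        // Returns "Walk", "Up", or "Down" for a window of samples.
        String classify(List<Sample> window);
    }

    static final int WINDOW_SIZE = 10;   // 1 s of data at 10 Hz (assumed)

    // Returns the GPS coordinates to be marked as stairway regions.
    static List<double[]> markStairways(List<Sample> samples, GaitClassifier clf) {
        List<double[]> marked = new ArrayList<>();
        for (int i = 0; i + WINDOW_SIZE <= samples.size(); i += WINDOW_SIZE) {
            List<Sample> window = samples.subList(i, i + WINDOW_SIZE);
            String state = clf.classify(window);
            if (state.equals("Up") || state.equals("Down")) {
                for (Sample s : window) {
                    marked.add(new double[] { s.lat, s.lon });
                }
            }
        }
        return marked;
    }
}
```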
Figure 1. The proposed crowdsourcing approach to build a topographic map for the visually impaired people. (image from http://www.tku.edu.tw)
The rest of this paper is organized as follows. In Section II we briefly introduce related work on promoting walking safety for visually impaired people using smartphones. In Section III we use four machine learning algorithms to classify the collected accelerometer data as walking on a flat road or going upstairs/downstairs. In Section IV we describe the preliminary implementation of the proposed concept. Finally, Section V concludes this paper and outlines future work.

II. RELATED WORK
In this section, we review related work on promoting walking safety for visually impaired people based on computer vision and on the sensors in smartphones.

A. Object detection by computer vision
It has been a common choice for researchers to use the external or built-in camera of a smartphone to capture the surrounding images and then apply computer vision techniques to identify the objects in them. In [3], the authors proposed a mobile-cloud collaborative approach for context-aware outdoor navigation, which leverages the computational power of the Amazon EC2 cloud for real-time image processing. They developed an Android App that determines the current location of the blind user; if he/she is detected at an urban intersection and may cross it, the user is prompted to take a picture, which is then sent to the cloud. The picture is processed by a Pedestrian Signal Detection algorithm, and if the status of the pedestrian signal is positive, the Android App uses speech feedback to tell the user to cross the intersection. The authors of [4] focused on recognizing a single object: the zebra crossing. They use a Hough-based segment detector to search for long line segments, which are then grouped according to criteria such as parallelism, distance constraints, and the existence of alternating black and white color blocks. The algorithm was evaluated on the iPhone 3GS platform and yielded good precision. A guidance system for visually impaired people was proposed in [5]. The system is based on the observation that the parallel edges of a straight path, captured as an image, converge to a vanishing point. When a person walks straight along the path, the vanishing point stays horizontally centered in the image; if the person's walking direction deviates from the path, the vanishing point moves to the left or right. Accordingly, the authors developed an Android App that detects the horizontal movement of the vanishing point, so the blind person can be notified to turn left or right. In [6] a virtual walking stick Android App was developed. With this App, obstacles, including objects or people in front of the camera, can be identified. The distance between an object and the user is inferred from the focal distance of the object, obtained via the Android getFocusDistance method. As the blind person approaches an object, the vibration frequency of the smartphone increases to alert the person. In [7] the authors assume that the smartphone carried by a user is held at a fixed tilt angle so that the floor in front of the user is always visible in the image. Using color-histogram-based and edge-based recognition, anything that looks different from a clear floor region (i.e., obstacles attached to the floor) can be found.

B. Motion recognition and indoor positioning by sensors in smartphones
With low-cost sensors widely available in most smartphones, there has also been much research on using sensor data to recognize human motion or to achieve indoor positioning. In [8] the authors conducted a preliminary experiment with a blind person carrying a smartphone in her coat pocket to collect tri-axial accelerometer data. They observed that the vibration patterns of three different behaviors (walking on a flat road, descending steps, and waiting at stoplights) are clearly different, suggesting that motion recognition of blind people using tri-axial accelerometer data is quite practicable. The primary contribution of [9] is the fusion of Pedestrian Dead Reckoning (PDR) data and Wi-Fi fingerprinting data to achieve indoor positioning for visually impaired people, where the PDR algorithm combines accelerometer, gyroscope, magnetometer, and barometer data. The fusion is a two-step process: a Kalman filter detects the floor on which the person is moving, and a particle filter then estimates the trajectory. The experimental results show that position accuracies of a few meters can be achieved. The indoor navigation system proposed in [10] uses an IMU worn on the subject's hip and a smartphone for data processing. The authors developed a waypoint navigation algorithm that can limit the error
accumulation, unlike traditional PDR algorithms. [12] is the first work that uses machine learning algorithms to detect FoG episodes online: the FoG-detection classifier is first trained offline, and the Android App then uses the trained classifier to detect FoG events.

III. RECOGNITION OF WALKING IN STAIRWAYS USING MACHINE LEARNING
As stated previously, our goal is to build a special topographic map for visually impaired people with the help of visually normal people. That is, by using machine learning to recognize a visually normal person's state of walking in outdoor stairways, we can mark the corresponding GPS coordinates on the map as stairways. The first step is therefore to collect tri-axial accelerometer data and evaluate the effectiveness of various machine learning algorithms in classifying the data.

A. Data collection and preprocessing
We developed an Android App to collect the tri-axial accelerometer data and the corresponding GPS coordinates during walking. The App allows users to manually label the data as "walking on a flat road", "going upstairs", or "going downstairs" by pressing the corresponding buttons. The flow chart of the App is shown in Fig. 2, in which we assume that the person's initial state is always "walking on a flat road" (label = Walk). When entering a region of stairways, the person long-presses either the Up or Down button to label the state as "going upstairs" (label = Up) or "going downstairs" (label = Down). During the entire process the smartphone was held in the hand for ease of pressing the buttons. A sketch of the collector's core loop is given after Fig. 2.
Figure 2. Flow chart of the Android App for collecting data.
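The paper does not include the collector's source code, but its core loop follows directly from the description above. The sketch below pairs each roughly 10 Hz accelerometer reading with the latest GPS fix and the currently selected label; the class and method names are illustrative assumptions, not the authors' actual code.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;

// Illustrative core of the data-collection App: sample the accelerometer
// at ~10 Hz, pair each reading with the most recent GPS fix, and tag it
// with the label currently selected by the Walk/Up/Down buttons.
public class CollectorCore implements SensorEventListener, LocationListener {
    private volatile String label = "Walk";   // updated by the UI buttons
    private volatile Location lastFix;        // most recent GPS fix

    public void start(SensorManager sm, LocationManager lm) {
        Sensor acc = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sm.registerListener(this, acc, 100_000);   // sampling period in µs, ~10 Hz
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, this);
    }

    @Override public void onSensorChanged(SensorEvent e) {
        if (lastFix == null) return;               // wait for a first GPS fix
        record(System.currentTimeMillis(), e.values[0], e.values[1], e.values[2],
               lastFix.getLatitude(), lastFix.getLongitude(), lastFix.getAltitude(),
               label);
    }

    @Override public void onLocationChanged(Location loc) { lastFix = loc; }
    @Override public void onAccuracyChanged(Sensor s, int accuracy) { }
    @Override public void onStatusChanged(String p, int s, android.os.Bundle b) { }
    @Override public void onProviderEnabled(String p) { }
    @Override public void onProviderDisabled(String p) { }

    // Append one row: timestamp, x, y, z, lat, lon, alt, label.
    private void record(long ts, float x, float y, float z,
                        double lat, double lon, double alt, String lbl) {
        // file I/O omitted for brevity
    }
}
```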
We set the sampling rate to 10 Hz; each sample includes the timestamp, the acceleration along the x, y, and z axes, latitude, longitude, and altitude. We then select features for classifier training with supervised machine learning techniques. The features include:
1. X/Y/Z: the raw tri-axial accelerometer data.
2. State: Walk, Up, or Down (nominal data).
3. XYZ: the square root of the sum of squares of x, y, and z.
4. XYZ-G: XYZ minus 9.8 m/s².
5. ΔX/ΔY/ΔZ: the difference between two consecutive raw tri-axial accelerometer samples.
6. ΔXYZ: the difference between two consecutive XYZ values.
7. ΔXΔYΔZ: the square root of the sum of squares of ΔX, ΔY, and ΔZ.
8. W5(X/Y/Z): the average X/Y/Z over a window of 5 consecutive samples.
9. W10(X/Y/Z): the average X/Y/Z over a window of 10 consecutive samples.
In addition, we pruned the data points to balance the quantity of the different states for classifier training. The resulting three datasets are named WUD, WU, and WD; the number of data points for each state is shown in Table I. We designed the three datasets to investigate the recognition accuracies of different combinations of states. A sketch of the feature computation follows Table I.

TABLE I. THREE DATASETS AND THE DISTRIBUTION OF STATES

Dataset | Walk           | Up             | Down
WUD     | 5,384 (49.94%) | 2,793 (25.91%) | 2,602 (24.13%)
WU      | 2,663 (48.80%) | 2,793 (51.20%) | 0 (0%)
WD      | 2,623 (50.17%) | 0 (0%)         | 2,605 (49.83%)
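The feature definitions above are straightforward to compute. The following is an illustrative sketch of features 3 through 9, assuming the samples arrive as plain arrays; the trailing-window alignment is our reading, since the paper does not specify it.

```java
// Illustrative computation of features 3-9 from the raw samples.
public class Features {
    // Feature 3/4: overall magnitude and gravity-removed magnitude.
    static double xyz(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z);
    }
    static double xyzMinusG(double x, double y, double z) {
        return xyz(x, y, z) - 9.8;               // subtract 1 g (9.8 m/s^2)
    }

    // Feature 5/6: first differences of consecutive samples or magnitudes.
    static double delta(double curr, double prev) { return curr - prev; }

    // Feature 7: magnitude of the per-axis differences.
    static double deltaMagnitude(double dx, double dy, double dz) {
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Feature 8/9: mean of one axis over a trailing window of n samples
    // (n = 5 or 10), evaluated at index i of the series.
    static double windowMean(double[] axis, int i, int n) {
        double sum = 0;
        int from = Math.max(0, i - n + 1);
        for (int j = from; j <= i; j++) sum += axis[j];
        return sum / (i - from + 1);
    }
}
```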
B. Evaluation of four machine learning algorithms on the datasets
We selected four popular machine learning algorithms (k-means clustering, the C4.5 decision tree, neural networks, and support vector machines (SVM)) to evaluate their effectiveness in distinguishing walking in stairways from walking on a flat road. The algorithms were executed in the Weka data mining software [15] for offline analysis, and we used ten-fold cross-validation to estimate the accuracy of the trained classifiers; a minimal sketch of this setup follows this paragraph. Tables II to IV show the confusion matrices of the four algorithms on the WUD, WU, and WD datasets, respectively. Table II shows that the accuracies of recognizing the Up state are very low for all the algorithms on the WUD dataset. The same can be observed in Table III: even with only Walk and Up data points, the accuracies of recognizing the Up state remain unsatisfactory. However, Table IV shows that with only Walk and Down data points, the recognition accuracies of all the algorithms improve. We infer that the reason is that while a person is going downstairs, the sudden acceleration change when the foreleg steps on the floor is normally greater than when going upstairs or walking on a flat road. In contrast, the behaviors of going upstairs and walking on a flat road are rather similar, so it is not easy for a classifier to distinguish between the two states.
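The evaluation scripts are not given in the paper, but the standard Weka Java API reproduces the setup directly. The sketch below runs ten-fold cross-validation with J48, Weka's C4.5 implementation, on a placeholder dataset file; neural networks and SVM correspond to Weka's MultilayerPerceptron and SMO classes, while k-means, being a clusterer, would instead go through SimpleKMeans.

```java
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class EvalSketch {
    public static void main(String[] args) throws Exception {
        // Load one of the datasets; "wud.arff" is a placeholder file name.
        Instances data = DataSource.read("wud.arff");
        data.setClassIndex(data.numAttributes() - 1);   // State is the class

        Classifier clf = new J48();                     // Weka's C4.5 tree
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(clf, data, 10, new Random(1));  // ten-fold CV

        System.out.println(eval.toSummaryString());
        System.out.println(eval.toMatrixString());      // confusion matrix
    }
}
```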
TABLE II. CONFUSION MATRICES FOR WUD DATASET
(rows: actual state; Walk/Up/Down columns: recognized state)

Algorithm       | Actual | Walk  | Up    | Down  | Accuracy (%)
k-means         | Walk   | 2,484 | 2,529 | 371   | 46.13
                | Up     | 984   | 1,196 | 613   | 42.82
                | Down   | 580   | 1,092 | 930   | 35.74
C4.5 tree       | Walk   | 3,909 | 811   | 664   | 72.60
                | Up     | 1,153 | 973   | 667   | 34.83
                | Down   | 699   | 573   | 1,330 | 51.11
Neural networks | Walk   | 4,625 | 300   | 459   | 85.90
                | Up     | 1,441 | 730   | 622   | 26.13
                | Down   | 629   | 341   | 1,632 | 62.72
SVM             | Walk   | 4,827 | 181   | 376   | 89.65
                | Up     | 1,613 | 644   | 536   | 23.05
                | Down   | 744   | 193   | 1,665 | 63.98
TABLE III. CONFUSION MATRICES FOR WU DATASET
(rows: actual state; Walk/Up columns: recognized state)

Algorithm       | Actual | Walk  | Up    | Accuracy (%)
k-means         | Walk   | 1,405 | 1,258 | 52.76
                | Up     | 1,259 | 1,534 | 54.92
C4.5 tree       | Walk   | 1,916 | 747   | 71.94
                | Up     | 1,001 | 1,792 | 64.16
Neural networks | Walk   | 2,091 | 572   | 78.52
                | Up     | 1,002 | 1,791 | 64.12
SVM             | Walk   | 2,066 | 597   | 77.58
                | Up     | 913   | 1,880 | 67.31
TABLE IV. CONFUSION MATRICES FOR WD DATASET
(rows: actual state; Walk/Down columns: recognized state)

Algorithm       | Actual | Walk  | Down  | Accuracy (%)
k-means         | Walk   | 1,189 | 1,434 | 45.32
                | Down   | 891   | 1,714 | 65.79
C4.5 tree       | Walk   | 1,980 | 643   | 75.48
                | Down   | 513   | 2,092 | 80.30
Neural networks | Walk   | 2,133 | 490   | 81.31
                | Down   | 522   | 2,083 | 79.96
SVM             | Walk   | 2,153 | 470   | 82.08
                | Down   | 482   | 2,123 | 81.49
We further summarize the experimental results in Table V, which shows the average recognition accuracy for each algorithm and each dataset. The last row gives the average recognition accuracy for each dataset over all four algorithms. As described in the previous paragraph, the WD dataset has the highest recognition accuracy because of the distinct features of the Walk and Down states. The last column gives the average recognition accuracy for each algorithm over all three datasets. The recognition accuracies of the C4.5 decision tree, neural networks, and SVM significantly outperform that of k-means. Unfortunately, while SVM has the highest recognition accuracy, it is still far from satisfactory: even the best accuracy among all the combinations is merely about 82%.
TABLE V. AVERAGE RECOGNITION ACCURACY

Algorithm                  | WUD    | WU     | WD     | Average accuracy w.r.t. specific algorithm
k-means                    | 42.77% | 53.87% | 55.53% | 50.72%
C4.5 tree                  | 57.63% | 67.96% | 77.88% | 67.82%
Neural networks            | 64.82% | 71.15% | 80.64% | 72.20%
SVM                        | 66.20% | 73.32% | 81.79% | 73.77%
Average accuracy w.r.t.
specific dataset           | 57.86% | 66.58% | 73.96% |
Nevertheless, we found that the problem of low recognition accuracy can be mitigated in two simple ways. First, since a state is recognized every 0.1 s, in practice a Down state cannot appear within a long run of continuous Walk states along a person's normal walking path. Therefore, if such a recognition result does occur, a lone Down state within a long run of Walk states can be corrected to a Walk state in the post-processing phase. We call this one-dimensional (1D) correction because it is performed along a single 1D line (a walking path); a sketch follows this paragraph. Second, with the crowdsourcing approach, a number of walking paths collected from different people may interweave over a particular geographical area. Suppose this area contains only one outdoor stairway. If the data points corresponding to this stairway are marked on the map of the area, the markers should cluster at the stairway. Hence, in the post-processing phase, sporadic markers scattered far from the cluster can be removed from the map; doing so also corrects misrecognized Down or Up states to Walk states. We call this two-dimensional (2D) correction because it is performed on the 2D plane (the area covered by the various walking paths). With both the 1D and 2D corrections in the post-processing phase, the noise caused by misrecognized states can be filtered out.
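Below is a minimal sketch of the 1D correction, assuming the per-sample states arrive as an array of labels. The run-length threshold is our own assumption, since the paper does not quantify what counts as a "long period" of Walk states; the 2D correction would be implemented analogously by clustering the markers on the plane and discarding outliers.

```java
// Minimal 1D correction: an isolated Up/Down label surrounded by long
// runs of Walk labels is treated as noise and rewritten as Walk.
public class OneDCorrection {
    static final int MIN_RUN = 20;   // assumed: ~2 s of Walk at 10 Hz on each side

    static String[] correct(String[] states) {
        String[] out = states.clone();
        for (int i = 1; i < states.length - 1; i++) {
            if (!states[i].equals("Walk")
                    && walkRunBefore(states, i) >= MIN_RUN
                    && walkRunAfter(states, i) >= MIN_RUN) {
                out[i] = "Walk";     // lone stair label inside a long Walk run
            }
        }
        return out;
    }

    static int walkRunBefore(String[] s, int i) {
        int n = 0;
        for (int j = i - 1; j >= 0 && s[j].equals("Walk"); j--) n++;
        return n;
    }

    static int walkRunAfter(String[] s, int i) {
        int n = 0;
        for (int j = i + 1; j < s.length && s[j].equals("Walk"); j++) n++;
        return n;
    }
}
```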
IV. PROTOTYPE IMPLEMENTATION
After evaluating the different machine learning algorithms, we began our prototype implementation. For simplicity, the current implementation is quite preliminary: the recognition of stairway regions is done by manual marking or thresholding-based detection. We will be developing a more advanced version shortly. The prototype Android App we developed, SafeVi, has two operating modes: visually normal mode and visually impaired mode. The flow chart of the SafeVi App is shown in Fig. 3, and the details are described in the following subsections.

A. Visually normal mode
As the name implies, the visually normal mode is used by visually normal people to collect data for the system. After entering this mode, a map UI is shown. To simplify the system design, we currently allow users to mark the region of stairways manually by touching the map or by shaking the smartphone.
Figure 3. The flow chart of the SafeVi Android App.

Figure 4. Confirmation of marker placement by (a) touching the map, and (b) shaking the smartphone.
1) Touch: This operation is straightforward and easy to use. As long as a user is familiar with a specific area, he/she can zoom in on the map to show the area, and then touch the point on the map where a known stairway is located. Once the user touches the map, a message box appears to confirm the marker placement, as shown in Fig. 4(a). At the same time, the GPS coordinates corresponding to this "point of touch" are acquired via the Google Maps APIs, and the returned coordinates are saved into a database on a remote server.
2) Shake: This functionality is used when a user is physically located in the area of a stairway. Specifically, we assume that the user holds the smartphone in the hand while walking. When walking through the region of a stairway, the user shakes the smartphone intentionally, and stops shaking upon leaving the region (i.e., back to walking on a flat road). In SafeVi we designed a shake detector based on thresholding: when the detected linear acceleration exceeds a predefined threshold for a period of time, a shaking behavior is recognized. Once the shaking behavior is recognized, the current GPS coordinates are captured and sent to the database. The screenshot in Fig. 4(b) shows the message box that appears after a shaking behavior is detected, in which the GPS coordinates of the "point of shaking" are also shown. A sketch of such a detector is given below.
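The exact threshold and duration used in SafeVi are not reported, so the following is a plausible reconstruction of a thresholding-based shake detector using Android's linear-acceleration sensor; the constants and the class name are assumptions.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// A shake is reported when the linear-acceleration magnitude stays above
// a threshold for a minimum duration, matching the description above.
public class ShakeDetector implements SensorEventListener {
    static final float THRESHOLD = 8.0f;      // m/s^2, assumed; tune empirically
    static final long MIN_DURATION_MS = 500;  // assumed minimum shaking time

    private long aboveSince = -1;             // when magnitude first exceeded threshold
    private final Runnable onShake;           // callback: capture GPS and upload

    public ShakeDetector(Runnable onShake) { this.onShake = onShake; }

    public void start(SensorManager sm) {
        Sensor lin = sm.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
        sm.registerListener(this, lin, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override public void onSensorChanged(SensorEvent e) {
        float x = e.values[0], y = e.values[1], z = e.values[2];
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        long now = System.currentTimeMillis();
        if (magnitude > THRESHOLD) {
            if (aboveSince < 0) aboveSince = now;
            else if (now - aboveSince >= MIN_DURATION_MS) {
                aboveSince = -1;
                onShake.run();                // shaking recognized
            }
        } else {
            aboveSince = -1;                  // reset when shaking stops
        }
    }

    @Override public void onAccuracyChanged(Sensor s, int a) { }
}
```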
B. Visually impaired mode
After the visually impaired mode is entered, the SafeVi App downloads the markers from the remote database and shows them on the Google Maps UI. An example screenshot is shown in Fig. 5(a), in which the locations marked by red pin icons are the stairway regions. SafeVi keeps monitoring the visually impaired person's current location via the Android LocationManager services. When the person walks toward one of the stairway regions and the remaining distance to that region is less than 30 meters, an acoustic alert is played (along with a vibrating alert) to warn the person of the nearby stairway. For experimental purposes, a message box as shown in Fig. 5(b) also appears along with the acoustic and vibrating alerts. A minimal sketch of this proximity check follows Fig. 5.

Figure 5. (a) Google Maps UI showing the stairway regions marked by red pin icons. (b) Message box that appears when a stairway region is nearby.
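The alerting code is not listed in the paper; below is a minimal sketch of the 30-meter proximity check using the standard Android Location.distanceTo call, with the alert playback stubbed out.

```java
import java.util.List;
import android.location.Location;
import android.location.LocationListener;

// On each location update, compare the current position against the
// downloaded stairway markers and alert when one is within 30 m.
public class ProximityAlert implements LocationListener {
    static final float ALERT_RADIUS_M = 30f;
    private final List<Location> stairwayMarkers;   // loaded from the remote DB

    public ProximityAlert(List<Location> markers) { this.stairwayMarkers = markers; }

    @Override public void onLocationChanged(Location current) {
        for (Location marker : stairwayMarkers) {
            if (current.distanceTo(marker) < ALERT_RADIUS_M) {
                playAcousticAlert();   // plus vibration, as described above
                break;
            }
        }
    }

    private void playAcousticAlert() { /* MediaPlayer/Vibrator calls omitted */ }

    @Override public void onStatusChanged(String p, int s, android.os.Bundle b) { }
    @Override public void onProviderEnabled(String p) { }
    @Override public void onProviderDisabled(String p) { }
}
```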
C. Discussions
Clearly, a smartphone App for visually impaired people must be designed with special care, because such users have difficulty operating an App through a graphical user interface. In the current version of SafeVi, entering the visually impaired mode requires the user to touch a specific button on the App's main page. As stated previously, the current implementation is merely a proof of concept. To address this concern, we will redesign the App and add assistive interfaces for visually impaired people, such as the gestures or tactile buttons implemented in [6], to activate specific features. Another important concern is the power consumption of running the SafeVi App. In the visually impaired mode, the App must perform locationing all the time, which consumes considerable battery power. In fact, this is a common problem for most location-based services (LBS) Apps. A possible solution is to disable SafeVi while the user is motionless. This can also be achieved easily by thresholding the accelerometer data: during motionless periods, the variance of the accelerometer values becomes relatively small. We plan to add this feature in the next version; a sketch of this check is given below.
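The motionless check described above is simple enough to sketch; the window length and variance threshold below are our own assumptions, not values from the paper.

```java
// Proposed power-saving check: compute the variance of the acceleration
// magnitude over a trailing window and treat the user as motionless when
// it falls below a small threshold, so locationing can be suspended.
public class MotionGate {
    static final double VAR_THRESHOLD = 0.05;   // (m/s^2)^2, assumed
    static final int WINDOW = 50;               // 5 s of samples at 10 Hz

    // Returns true when the latest window of magnitudes looks motionless.
    static boolean isMotionless(double[] magnitudes) {
        if (magnitudes.length < WINDOW) return false;
        double mean = 0;
        for (int i = magnitudes.length - WINDOW; i < magnitudes.length; i++)
            mean += magnitudes[i];
        mean /= WINDOW;
        double var = 0;
        for (int i = magnitudes.length - WINDOW; i < magnitudes.length; i++)
            var += (magnitudes[i] - mean) * (magnitudes[i] - mean);
        var /= WINDOW;
        return var < VAR_THRESHOLD;             // suspend locationing while true
    }
}
```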
V. CONCLUSIONS AND FUTURE WORK

In this research, we proposed a crowdsourcing approach to promote safe walking for visually impaired people using smartphones. The main concept is that a large number of visually normal people can volunteer and contribute to building a special topographic map simply by running an Android App in the background that keeps recording the tri-axial accelerometer data and the corresponding GPS coordinates. The accelerometer data can then be sent to a cloud computing system and analyzed by machine learning algorithms. For the data points recognized as the state of walking in stairways, the associated GPS coordinates are stored in a database used by the App to alert visually impaired people when they are close to stairways. To verify the practicability of our idea, we chose four popular machine learning algorithms and analyzed three types of datasets. Although the recognition accuracies are far from satisfactory, the misrecognized states can be filtered out with the simple 1D and 2D correction methods during the post-processing phase. The current implementation is a proof of concept with limited features. Nevertheless, with more careful and attentive design, we believe visually impaired people will be more willing to use this system to promote their walking safety. In the future, we will investigate the applicability of barometers for detecting the state of walking in stairways, on both the Android and iOS platforms. We will also incorporate the concept proposed in [14] to preprocess the data collected from visually normal people, discarding low-quality data in the first place. Doing so not only decreases the burden on the cloud servers that analyze the data, but also reduces the network traffic consumed by data uploads.
REFERENCES

[1] WHO Fact Sheet N°282, "Visual impairment and blindness," http://www.who.int/mediacentre/factsheets/fs282/en/, last updated Oct. 2013.
[2] N. A. Giudice and G. E. Legge, "Blind Navigation and the Role of Technology," in The Engineering Handbook of Smart Technology for Aging, Disability, and Independence, John Wiley & Sons, Hoboken, NJ, USA, 2008. doi: 10.1002/9780470379424.ch25
[3] B. Bhargava, P. Angin, and L. Duan, "A Mobile-Cloud Pedestrian Crossing Guide for the Blind," in Proc. International Conference on Advances in Computing & Communication (ICACC), Apr. 2011.
[4] D. Ahmetovic, "Smartphone-Assisted Mobility in Urban Environments for Visually Impaired Users through Computer Vision and Sensor Fusion," in Proc. 2013 IEEE 14th International Conference on Mobile Data Management (MDM), vol. 2, pp. 15-18, Jun. 2013.
[5] M. Asad and W. Ikram, "Smartphone based guidance system for visually impaired person," in Proc. 2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 442-447, Oct. 2012.
[6] T. A. Ueda and L. V. de Araújo, "Virtual Walking Stick: Mobile Application to Assist Visually Impaired People to Walking Safely," in Proc. 8th International Conference on Universal Access in Human-Computer Interaction. Aging and Assistive Environments (UAHCI), LNCS vol. 8515, pp. 803-813, Jun. 2014.
[7] E. Peng, P. Peursum, L. Li, and S. Venkatesh, "A Smartphone-Based Obstacle Sensor for the Visually Impaired," in Proc. 7th International Conference on Ubiquitous Intelligence and Computing (UIC), LNCS vol. 6406, pp. 590-604, Oct. 2010.
[8] Y. Fukushima, H. Uematsu, R. Mitsuhashi, H. Suzuki, and I. E. Yairi, "Sensing human movement of mobility and visually impaired people," in Proc. 13th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 279-280, Oct. 2011.
[9] T. Moder, P. Hafner, and M. Wieser, "Indoor Positioning for Visually Impaired People Based on Smartphones," in Proc. 14th International Conference on Computers Helping People with Special Needs (ICCHP), LNCS vol. 8547, pp. 441-444, Jul. 2014.
[10] T. H. Riehle, S. M. Anderson, P. A. Lichter, W. E. Whalen, and N. A. Giudice, "Indoor Inertial Waypoint Navigation for the Blind," in Proc. 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5187-5190, Jul. 2013.
[11] X. Zhu, Q. Li, and G. Chen, "APT: Accurate Outdoor Pedestrian Tracking with Smartphones," in Proc. 2013 IEEE INFOCOM, pp. 2508-2516, Apr. 2013.
[12] S. Mazilu, M. Hardegger, Z. Zhu, D. Roggen, G. Tröster, M. Plotnik, and J. M. Hausdorff, "Online Detection of Freezing of Gait with Smartphones and Machine Learning Techniques," in Proc. 2012 6th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth), pp. 123-130, May 2012.
[13] G. Chatzimilioudis, A. Konstantinidis, C. Laoudias, and D. Zeinalipour-Yazti, "Crowdsourcing with Smartphones," IEEE Internet Computing, vol. 16, no. 5, pp. 36-44, Sept.-Oct. 2012.
[14] Y. Wang, W. Hu, Y. Wu, and G. Cao, "SmartPhoto: A Resource-Aware Crowdsourcing Approach for Image Sensing with Smartphones," in Proc. 15th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), Aug. 2014.
[15] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, "The WEKA Data Mining Software: An Update," ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10-18, Jun. 2009.