
Towards Natural Interfaces to Interact with Physical Systems Using Smart Mobile Devices

Jared Alan Frank and Vikram Kapila
Department of Mechanical and Aerospace Engineering
NYU Polytechnic School of Engineering
Brooklyn, NY 11201, USA
[email protected], [email protected]

Abstract—As machines in most industries continue to grow in complexity, user interfaces play an increasingly important role in allowing operators to naturally interact with such machines. In this paper, we present a versatile approach that can enable operators to interact with a variety of machines using smart mobile devices. A low-cost, embedded system-based setup and an easy-to-implement communication method to wirelessly interface smart mobile devices with physical systems are presented. An application development and evaluation approach is adapted from human-computer interaction research and employed to investigate the functionality and usability of mobile-mediated interactions with physical systems. The approach is applied to the design of a mobile application to interact with a DC servomotor test bed using an iPhone.

Keywords—interaction, iOS, iPhone, mobile, smart device, smartphone, teleoperation, user interface

I. INTRODUCTION

Many machines in industry are automated to perform advanced tasks with speed, accuracy, and precision, e.g., CNC machines that must orient and operate tools in their workspaces. When such machines are directly controlled by an operator, or teleoperated, an elaborate interface device with a myriad of controls, such as buttons and toggles, is required, which calls for several hours of user training. Thus, there is a demand for user interfaces with higher usability for interacting with complex machines. Before the emergence of smartphones and tablets, much work was done in the early 2000s [1, 2] to develop natural, multimodal interfaces using familiar devices like personal digital assistants (PDAs) for controlling robots. However, in recent years, PDAs have been overshadowed by smart mobile devices, namely the smartphone and the tablet computer. Moreover, PDAs lacked many of the technologies that are embedded in today's mobile devices to facilitate playing video games, checking the weather, navigating, and social networking. Since operators of today's industrial machines are usually already familiar with smart mobile devices and often have their own personal devices with them in the field, harnessing the potential of these devices can improve the ease of interactions with physical systems. Furthermore, as appliances in the home and office become more complex, the design of portable and intuitive interfaces on smartphones and tablet devices can allow non-expert, elderly, and disabled users to effortlessly interact with these machines [3].


We now live in a society driven by ubiquitous handheld gaming, music, communication, and computing technologies. Market research forecasts indicate that by 2017, 87% of the worldwide connected device market will be dominated by smartphones and tablet computers [4]. Mobile applications leverage state-of-the-art features and embedded sensors of devices, such as high-resolution cameras, Bluetooth and Wi-Fi connectivity, touchscreens, 3-axis accelerometers, gyroscopes, compasses, and GPS, for enhanced functionality and user experience [5]. Moreover, most mobile platforms now allow users to perform varied tasks by simply speaking commands into the microphones of their devices [6].

For some machines, the design of a natural interface (i.e., the choice of natural mappings and comprehensible feedback between the interface and the machine) can be a straightforward process. Very often, however, the choice of such mappings and feedback is not obvious to developers. Research in the design and development of mobile applications to remotely interact with physical systems is still in its infancy. Although several general-purpose, mobile-based client applications have been created [7-9], these applications have not been designed with any particular physical system in mind and have not been used to conduct usability studies or to investigate natural interactions with complex machines. On the other hand, recent works [10-12] have developed techniques to effectively interact with specific systems, such as robots and home appliances, using specific modalities available on smart mobile devices. However, prior studies do not offer a general approach to interface smart mobile devices with physical systems, or to design and evaluate a mobile application to interact with a general physical system.

Physical systems such as industrial machines and home appliances do not normally have the ability to communicate bidirectionally with a smart mobile device to enable effective user interaction. In Section II, we present a microcontroller-based approach that equips a physical system with hardware to: (1) acquire and broadcast sensor data from the system to a mobile interface, and (2) provide control commands to the system as they are wirelessly received from the mobile interface.

To ensure that the emerging class of mobile interfaces for interacting with physical systems has high usability (i.e., easy to learn, easy to use, easy to remember, efficient, error-resistant, and satisfying to users [13]), new principles must be established for their design and evaluation. While some techniques can be adapted from research in human-computer interaction [14], others can be drawn from research on teleoperation of complex machinery, such as the work done with robots [15]. To this end, we propose a mobile interface design and evaluation approach in Section III that incorporates a parallel design methodology [16, 17] directly into the architecture of the application. Thus, several interface design alternatives may be explored simultaneously on the same application, with the goal of synthesizing the most desirable elements from each alternative into a merged design that can be developed further using an iterative design methodology. Section IV shows an implementation of our approach to monitor and control a DC servomotor using an iPhone. Section V discusses some preliminary results of user testing, and Section VI draws some concluding remarks.

II. HARDWARE AND COMMUNICATION

Before developers can design a user interface on a smart mobile device to interact with a physical system, they need to be aware of how the different components in the system will be integrated. This section presents an embedded system-based setup and a message formatting method that can be used to quickly and easily integrate a smart mobile device with a physical system for prototype testing and user research.

A. Hardware Setup

If data acquisition and control of a physical system is already implemented using commercial hardware, no additional hardware setup may need to be built to interface the smart mobile device. However, microcontrollers offer a low-cost data acquisition solution for a variety of applications [18]. In fact, recent papers have developed data acquisition and control solutions that additionally interface low-cost microcontrollers with GUIs running on PCs [19]. To interface physical systems with GUIs running on smart mobile devices, we have adapted this microcontroller-based approach. The main hardware components include a microcontroller for data acquisition and control implementation, a Wi-Fi module to facilitate wireless communication, and additional analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC) electronics to interface the microcontroller with the physical system. Fig. 1 shows the proposed hardware setup.

Through a serial communication link, the microcontroller configures parameters of the Wi-Fi module that control information about the network (such as the network name, security mode, IP addresses, port numbers, DHCP settings, etc.), allowing the Wi-Fi module to exchange data with the smart mobile device. To interact with the physical system over the Internet, the Wi-Fi module can be configured to join a wireless network that is connected to the Internet. Alternative embedded systems, such as single-board computers, and network modules, such as Ethernet modules, can be used depending on the requirements imposed by the application.
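As an illustration of this configuration step, the following Arduino-style sketch sends a series of setup commands to a serial-attached Wi-Fi module. It is a minimal sketch, not code from the paper: the command strings are placeholders, since the exact syntax depends on the particular module and firmware (the WiFly module used in Section IV, for instance, has its own ASCII command set).

// Illustrative sketch (not from the paper): configure a serial-attached Wi-Fi
// module at startup so that a smart mobile device can later connect to it.
// The command strings are placeholders; consult the module's documentation.
#include <Arduino.h>

void sendConfigCommand(Stream &port, const char *cmd) {
  port.println(cmd);   // send one configuration command to the module
  delay(250);          // give the module time to process and reply
}

void setup() {
  Serial.begin(9600);                                   // serial link to the Wi-Fi module
  sendConfigCommand(Serial, "set ssid TestbedNetwork"); // network name (placeholder)
  sendConfigCommand(Serial, "set security wpa2");       // security mode (placeholder)
  sendConfigCommand(Serial, "set ip dhcp on");          // obtain an IP address via DHCP (placeholder)
  sendConfigCommand(Serial, "set port 2000");           // TCP server port (placeholder)
  sendConfigCommand(Serial, "save");                    // persist the settings (placeholder)
}

void loop() {
  // After configuration, the main loop exchanges PVT-formatted messages with
  // the mobile device (see Section II-B and Section IV).
}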

Fig. 1. The hardware setup to interface smart devices with physical systems.

B. Communication Method

Since interaction can be considered a two-way exchange or communication of information [20], time delays and corrupt data in the information presented to the user from the physical system can deteriorate the usability of the system by hindering the user's performance of tasks and reducing the user's satisfaction with the system. Similarly, the stability of a controlled physical system with little autonomy can be put in jeopardy if incoming signals experience significant delay or corruption. Thus, the choice of communication method and protocol are two critical considerations when developing the interface.

Several efforts have used Bluetooth communication between a smart mobile device and the physical system. However, Bluetooth is inferior to Wi-Fi technology in range, reliability, and transmission speed [21]. These limitations prevent Bluetooth-based approaches from being effective for highly responsive and interactive applications. Thus, Wi-Fi is chosen as the communication method with the standard TCP/IP protocol due to its universality, large range, fast and reliable data transmission, large bandwidth, potential to stream video, and potential worldwide accessibility via the Internet. A TCP/IP server is set up on the Wi-Fi module and a TCP/IP client is created on the smart mobile device. Packets containing formatted messages are sent between the smart mobile device and the microcontroller via the Wi-Fi module.

A message is read as a sequence of bytes that are stored into an array, and so it is formatted such that the devices on each end of the communication know when a new message is being received, when the complete message has finished being sent, what the meaning of each received message is, and whether or not the received message is corrupt. In particular, we employ a prefix-value-terminator (PVT) format that makes it easy for the devices to determine this information by using characters as prefixes and terminators, which resemble the headers and trailers that contain the control information for each TCP packet [22]. For the devices to know when a new message is being received and what the received message represents, each message is given a nonnumeric prefix character as the first byte of the array. This prefix corresponds to a parameter or some action associated with the application that the program on the microcontroller and the mobile application agree upon. When the prefix of a message is read by either device, that device knows that a new message has been received and how to interpret the data it contains. For the devices on either end of the wireless communication to know when a complete message has been received, each message is appended with a nonnumeric terminator character. When this one-byte terminator is read, each program knows to concatenate and store the characters between the prefix and terminator bytes into an array that contains the value of the received data. The program is then directed to listen for the next incoming byte, expecting it to be the prefix of a new message. To avoid delimiter collision, any character may be used for the terminator as long as it is not expected to be a part of the prefix or value of the message. Using the PVT protocol increases the reliability of the communication between devices, since each device rejects any messages that do not follow the established format.
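As a concrete, hypothetical example of this format, a position command of 45 degrees could be transmitted as the four bytes "P45;", where 'P' is a prefix agreed upon for position commands and ';' is the terminator. A minimal Arduino-style parser for messages of this assumed form is sketched below; the prefix and terminator characters are illustrative choices, not taken from the paper.

// Minimal sketch (assumed message format): parse PVT messages of the form
// <prefix><value><terminator>, e.g., "P45;" for a 45-degree position command.
#include <Arduino.h>

const char TERMINATOR = ';';      // assumed terminator character

char prefix = 0;                  // prefix of the message currently being received
char valueBuf[16];                // characters between the prefix and the terminator
uint8_t valueLen = 0;

// Called for every byte received from the Wi-Fi module. Returns true when a
// complete, well-formed message has been parsed into outPrefix and outValue.
bool parseByte(char c, char &outPrefix, long &outValue) {
  if (prefix == 0) {
    // Expect a nonnumeric prefix; silently drop anything else as corrupt.
    if (!isDigit(c) && c != TERMINATOR && c != '-') prefix = c;
    return false;
  }
  if (c == TERMINATOR) {
    valueBuf[valueLen] = '\0';
    outPrefix = prefix;
    outValue = atol(valueBuf);    // convert the stored value characters
    prefix = 0;                   // reset and listen for the next prefix
    valueLen = 0;
    return true;
  }
  if (valueLen < sizeof(valueBuf) - 1) valueBuf[valueLen++] = c;
  return false;
}

With such a scheme, stray bytes that do not fit the expected prefix-value-terminator pattern are simply discarded, consistent with the rejection behavior described above.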



III. INTERFACE DESIGN AND EVALUATION

Smart mobile devices are more computationally powerful, pervasive, and interactive than any previously released personal device. Thus, they have great potential to provide natural user interfaces that can transform the way in which the public thinks of interaction with machines [23]. To accomplish this, human-computer interaction research has shown that multimodal interfaces can be developed to produce highly transparent user experiences by capturing people's naturally coordinated communication patterns [24]. Moreover, by integrating and overlapping multiple modalities in a concurrent manner, the application becomes capable of capturing information that is both redundant (contributing to the stability and robustness of the interactions) and synergistic (contributing to the expressive power of the user and the effectiveness of the interactions). In other words, not only can the weaknesses of one modality be offset by the strengths of another, but combinations of modalities can be used to form interaction metaphors, which have a profound effect on the intuitiveness of the interface.

Before the issue of usability can be addressed, the functionality of the system must be assured. Once a task analysis has been conducted to establish a complete understanding of the users' goals, the tasks that users need to perform on the physical system to achieve those goals, and the feedback needed by the users to achieve those goals [25], developers need to conduct a functional analysis to determine which modalities available on smart mobile devices can be used to perform the tasks. To remotely control a machine to execute a challenging task, some traditional modalities, such as buttons, may be difficult to use. However, for a smart mobile device with an embedded touchscreen and accelerometer, intuitive mappings may be devised to accomplish these tasks, e.g., by drawing a path on the screen or by tilting the device [26], respectively; a simple tilt-based mapping is sketched below. While some tasks may become easy to perform by devising a clever mapping with a particular modality, other tasks may remain difficult to accomplish using that same modality. Some tasks may feel more natural when speaking a command into the device's microphone or making a body gesture to be captured by the device's camera. Visual feedback combined with audio or haptic feedback can be used to enhance the user's awareness of the system's behavior. Users with physical disabilities or situational impairments (such as experiencing screen glare, wearing gloves that cannot stimulate the touchscreen, or having both hands occupied while driving a car) may require a variety of input and output modalities so that they can select those that resonate most with them.
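As a simple illustration of such a mapping (a hypothetical sketch, not code from the paper), the device tilt estimated from two accelerometer components can be converted directly into a commanded angle and clamped to the actuator's range:

// Illustrative tilt-to-command mapping (variable names and the clamping range
// are assumptions; the actual mapping depends on the task and the actuator).
#include <cmath>

double tiltToCommandDeg(double ax, double az) {
  // Estimate the tilt about one axis from two accelerometer components.
  double tiltDeg = std::atan2(ax, az) * 180.0 / 3.14159265358979;
  // Clamp to an assumed +/-90 degree actuator range.
  if (tiltDeg > 90.0)  tiltDeg = 90.0;
  if (tiltDeg < -90.0) tiltDeg = -90.0;
  return tiltDeg;
}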

To achieve expected task performance and usability, all frameworks, APIs, and libraries available to mobile application developers should be utilized with responsiveness and interactivity in mind. Since the resulting applications will involve substantial modality-specific processing and continuous communication with a physical system running in the background, the design of such applications will require many of these processes to run concurrently. Using input and output modalities concurrently will also yield an interface with which the physical system can be simultaneously commanded and monitored; elements like buttons, sliders, and touch-sensitive areas of the interface can collect input from users while animation and video provide users with feedback about the state of the physical system. The interface-, processing-, and communication-related tasks that the application must perform can be grouped into three layers that we refer to as the human interface, processing, and communication layers, respectively. Fig. 2 shows these layers along with examples of the roles they serve in the mobile application.

The application must be able to respond to user input and update its display regardless of whether processing-intensive tasks are being performed in the background or data is being sent and received over the wireless network. By executing the communication functions in the parts of the application's code that detect user input, an event-driven architecture can be created that sends messages to the microcontroller when a button is pressed, when a touch gesture such as a swipe, double tap, or pinch is recognized, or when sensor data (e.g., from the accelerometer) indicates that a certain condition has been met (a minimal sketch of this pattern is given below). To support natural human communication patterns, in future research, we will investigate how the more desirable modalities can be synthesized to form a truly multimodal interface (i.e., an interface consisting of an additional layer that incorporates methods of coordinating and fusing the information received by two or more modalities simultaneously).
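The sketch below illustrates this event-driven pattern. It is written in C++ for consistency with the microcontroller-side examples (the application in Section IV is implemented in Objective-C), and the callback name, message prefix, and socket call are illustrative assumptions.

// Illustrative event-driven pattern (hypothetical names; the actual app is
// written in Objective-C using Apple's frameworks).
#include <cstdio>

// Communication layer: format one PVT message and hand it to the TCP socket.
void sendPositionCommand(double commandedAngleDeg) {
  char msg[16];
  // prefix 'P', integer value, terminator ';' (assumed format, see Sec. II-B)
  std::snprintf(msg, sizeof(msg), "P%ld;", static_cast<long>(commandedAngleDeg));
  // sendBytes(msg);  // platform-specific TCP write, omitted here
}

// Human interface layer: invoked by the UI toolkit whenever the slider moves.
void onSliderChanged(double sliderValueDeg) {
  // Processing layer: optionally filter or limit the value, then send it
  // immediately so the display update loop is never blocked by the network.
  sendPositionCommand(sliderValueDeg);
}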

Once a list of candidate modalities has been created, developers can explore a broad range of mappings and metaphors that incorporate these modalities and can design and test these alternative solutions simultaneously with users [27]. Thus, before settling on the final modalities, mappings, and metaphors to be used in the construction of a multimodal interface, which can then be refined using an iterative design approach, developers should begin by evaluating the task performance and usability associated with each modality independently (i.e., with no multimodal integration being performed and, in some cases, with the modalities evaluated in separate views).



Fig. 2. Software architecture for mobile application.

IV. CASE STUDY: IPHONE AND DC SERVOMOTOR

To examine the potential of a smart mobile device to provide natural interactions with a DC servomotor test bed, we used the hardware approach and communication method described in Section II and the application architecture described in Section III to interface, develop, and test an iOS application on an iPhone. By pressing buttons and dragging sliders in the interface, typing into a text input box, tilting the device, performing single-touch swiping and multi-touch rotation gestures, playing with a graphical animation on the touchscreen, and speaking commands into the device's microphone, the user can command the position of the DC servomotor. In this paper, only visual approaches (i.e., an animation of the motor and a dynamic plot) are used to present the state of the DC servomotor to the user; however, alternative technologies available on the iPhone can be utilized to support users with vision impairments. Fig. 3 shows the laboratory hardware setup.

An Arduino UNO microcontroller and WiFly shield are used to process and transfer data between the iPhone and the servomotor. Arduino microcontrollers have the ability to import several software libraries and connect to hardware shields that provide varied functionality and capabilities, such as motor control and Wi-Fi connectivity [28]. In particular, the WiFly shield is an expansion board based on the RN-131G WiFly GSX module by Roving Networks that equips Arduino microcontrollers with the capability to host, join, and communicate over 802.11b/g wireless networks. Through its SPI interface, the WiFly shield is first configured by the microcontroller to have the proper SSID, IP information, and other parameters. Next, the program running on the microcontroller relays the messages between the WiFly shield and the iPhone. In between communication tasks, the microcontroller collects the sensor data from the servomotor and uses a proportional-plus-derivative (PD) algorithm to compute the control signals for driving the servomotor. The sensor data is formatted using the PVT protocol and sent serially to the WiFly shield to be transferred to the iPhone over a TCP/IP network. A flowchart for the Arduino program is given in Fig. 4.
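The following condensed sketch outlines this program structure. The helper function bodies, gain values, message prefixes, and timing are placeholder assumptions; the actual program follows the flowchart in Fig. 4.

// Condensed, illustrative sketch of the microcontroller program (helper
// function bodies, gains, prefixes, and timing are placeholder assumptions).
#include <Arduino.h>

double Kp = 1.0, Kd = 0.1;     // PD gains (placeholder values; adjustable from the app)
double thetaRef = 0.0;         // commanded position received over Wi-Fi (deg)

// Placeholder hardware helpers (LTC1296 ADC, MAX537 DAC, WiFly shield over SPI).
double readPositionDeg()              { return 0.0; }
double readVelocityDegPerSec()        { return 0.0; }
void   writeControlVoltage(double u)  { (void)u; }
bool   receivePVT(char &prefix, long &value) { (void)prefix; (void)value; return false; }
void   sendPVT(char prefix, long value)      { (void)prefix; (void)value; }

void setup() { /* configure SPI, WiFly shield, and ADC/DAC here */ }

void loop() {
  // 1) Handle any message from the iPhone (e.g., a new reference or gain update).
  char prefix; long value;
  if (receivePVT(prefix, value)) {
    if (prefix == 'P') thetaRef = value;   // 'P' assumed as the position prefix
  }

  // 2) Sample the sensors and apply the PD control law.
  double theta = readPositionDeg();        // potentiometer via ADC
  double omega = readVelocityDegPerSec();  // tachometer via ADC
  double u = Kp * (thetaRef - theta) - Kd * omega;
  writeControlVoltage(u);                  // motor drive via DAC and op-amp stage

  // 3) Report the sensor data back to the iPhone at roughly 10 Hz (see Sec. IV).
  static unsigned long lastReport = 0;
  if (millis() - lastReport >= 100) {
    sendPVT('A', (long)theta);             // 'A' assumed as the angle-feedback prefix
    lastReport = millis();
  }
}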

Fig. 3. A DC servomotor apparatus under remote control from an iPhone.

The test bed consists of an armature-controlled DC servomotor, a gearbox, a continuous rotation potentiometer, a tachometer, and a power amplifier. The potentiometer and tachometer are analog sensors that output voltages directly proportional to the position and speed of the servomotor arm, respectively. The ADC and DAC setup of [29] is used to interface the Arduino microcontroller to the test bed. Specifically, the DC servomotor test bed sends and receives analog signals from the Arduino microcontroller using an LTC1296 12-bit ADC and a MAX537 12-bit DAC, respectively. The LTC1296 ADC is used to convert signals from the potentiometer and tachometer sensors to a 12-bit data representation and the MAX537 DAC is used to convert a 12-bit data representation of the control voltage to a continuous voltage signal for driving the motor. A simple operational amplifier circuit is used to transform the 0-5 VDC output from the MAX537 DAC to the ±5 VDC signal required by the DC servomotor.

To control the position of the DC servomotor's output shaft, a PD feedback control algorithm is implemented on the Arduino microcontroller. This linear controller is designed to compute the control voltage applied to the motor using the potentiometer and tachometer measurements such that, for a step command, the closed-loop response of the motor angular position exhibits a peak overshoot of less than 5% and a 2% settling time of approximately one second (the corresponding second-order design relations are summarized at the end of this section).

Programming for iOS is done in C, C++, and the Objective-C language [30] using Apple's Xcode suite as the development environment. The mobile application begins with a menu screen which displays a list of control and interaction modes that a user may choose from (see Fig. 5(a)). Selecting the desired mode and pressing the "Start Control" button causes the appropriate view to load into the application window. In each mode, an animation of the motor is displayed in the view and updated in real-time using the sensor data received from the Arduino microcontroller. Real-time plots of the sensor data are also provided at the bottom of each view. These plots are generated by parsing the PVT messages containing sensor data as they are received from the microcontroller and storing the sensor data. Note that the sensor data is sampled at a frequency of 10 Hz.

In the Button Control view (see Fig. 5(b)), two buttons are available for commanding the motor counter-clockwise and clockwise. Moreover, a text input box is provided for the user to manually input a desired angular position using a numerical keypad on the screen. In the Slider Control view (see Fig. 5(c)), a slider is available to command the motor position between −90° and 90°. In the Tilt Control view (see Fig. 5(d)), the iPhone accelerometer is monitored and the tilt of the device is used to command the position of the motor. In the Gesture Control view (see Fig. 5(e)), finger swiping, rotation, and double-tap touch gestures are detected and used to command the motor. Specifically, a swiping gesture causes the motor to move counterclockwise for swipes in the right-hand direction and clockwise for swipes in the left-hand direction, while a finger rotation gesture causes the motor to rotate in the direction of the rotation gesture. Finally, double-tapping the screen commands the motor to stop moving. In the Animation Control view (see Fig. 5(f)), an animation of the motor is displayed in the view wherein a user can touch the motor arm in the animation and drag it to a desired position. The arm position in the animation is used to command the actual motor to the corresponding position. In the Speech Control view (see Fig. 5(g)), the user presses a button to initialize and calibrate the voice recognition capability of the interface. Then the user speaks the value of the desired angle in degrees to command the motor. Only integer values are currently supported by the interface. In addition to these views, a view containing sliders and text input boxes is available in which a remote user can adjust the proportional and the derivative control gains of the PD controller. This option can be accessed by using the Adjust Controller button at the bottom of each of the aforementioned views (see Fig. 5(h)). The PD gains shown in Fig. 5(h) are used to produce the responses shown in Fig. 5(b)-(g).
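For clarity (this summary is not from the original text), the PD control law and the standard second-order approximations that relate the above specifications to the closed-loop damping ratio and natural frequency are:

\[ u(t) = K_p\, e(t) + K_d\, \dot{e}(t), \qquad e(t) = \theta_{\mathrm{ref}} - \theta(t), \]
\[ M_p = e^{-\pi\zeta/\sqrt{1-\zeta^{2}}} \le 0.05 \;\Rightarrow\; \zeta \gtrsim 0.69, \qquad t_{s,2\%} \approx \frac{4}{\zeta\,\omega_n} \approx 1\ \mathrm{s} \;\Rightarrow\; \omega_n \approx 5.8\ \mathrm{rad/s}. \]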

V. USER STUDIES

A study has been conducted with 32 undergraduate mechanical engineering students to examine the usability of each of the modalities for interacting with the DC servomotor test bed. First, using the DC servomotor, the students performed a lab experiment that involved designing a feedback controller and interacting with the test bed while working on desktop computers in a MATLAB/Simulink environment. Next, the students were provided the mobile application and asked to use each of the modes to command the orientation of the servomotor to any desired reference. Finally, the students were administered a questionnaire, which asked them to self-report various aspects of their experiences with the mobile interface. Figs. 6(a)-(f) show tallies of how each mode of control was rated for its ease of use, where a rating of 1 represents an unacceptable rating and a rating of 5 represents a superior rating. Fig. 6(g) shows how the participants rated the overall ease of use of the mobile application compared to using the conventional computer-based interface.

From the results of the user study in Fig. 6, it is seen that no modality is rated distinctly higher than the others by the students. We believe that this is because the DC servomotor test bed requires the students simply to command the motor arm to point in desired orientations, which is not a very challenging task. We envision that as the complexity of the physical system or the difficulty of the intended task is increased, one or more modalities may yield better task performance or higher-rated user experiences than the others. In recent research, this notion has been investigated by using the same DC servomotor test bed to construct a ball and beam system and tasking participants with using some of the same modalities discussed in this study to balance the ball as close to the center of the beam as possible using a similar iPhone application [26]. Depending on the application, users may prefer to use the traditional modalities and metaphors that they are accustomed to using for a particular task. For example, in one study, the approach presented in this paper has been used to find that users prefer simple buttons and joysticks on the touchscreen of an Android tablet to interact with a mobile manipulator robot [31].

VI. CONCLUSIONS

In this paper, we presented an easy-to-implement approach to interface, design, and test mobile applications for smart mobile devices to naturally interact with physical systems, such as complex industrial machines. The approach included a low-cost, microcontroller-based data acquisition and control setup and a communication method that enable the system to be accessible from a mobile interface. A mobile application architecture is proposed, which incorporates the parallel design approach, to develop interfaces that allow researchers to evaluate interaction modalities and metaphors separately with users. A system, which incorporates the aforementioned hardware setup, communication method, and architecture for interacting with a DC servomotor, is demonstrated. The results of a preliminary user experiment are reported to confirm that the system yields positive user experiences. The framework provided in this paper can be extended to other smart mobile devices, such as those operating on the Android OS, and to a variety of physical hardware, such as robots.

ACKNOWLEDGMENT

Work supported in part by the NSF RET Site grants EEC-0807286 and EEC-1132482, an NSF GK-12 Fellows grant DGE: 0741714, and the NY Space Grant 48240-7887. The authors thank David Lopez and Ryan Caeti for their invaluable help.

REFERENCES

[1] T.W. Fong et al., "Novel interfaces for remote driving: Gesture, haptic, and PDA," Intelligent Systems and Smart Manufacturing, pp. 300-311, 2001.
[2] D. Perzanowski et al., "Building a multimodal human-robot interface," IEEE Intelligent Systems, Vol. 16, No. 1, pp. 16-21, 2001.
[3] S.G. Sakamoto, L.C. de Miranda, and H. Hornung, "Home control via mobile devices: State of the art and HCI challenges under the perspective of diversity," Universal Access in Human-Computer Interaction. Aging and Assistive Environments, pp. 501-512, 2014.
[4] L. Columbus, "IDC: 87% of connected devices sales by 2017 will be tablets and smartphones," Forbes, September 12, 2013. Online: http://www.forbes.com/sites/louiscolumbus/2013/09/12/idc-87-of-connected-devices-by-2017-will-be-tablets-and-smartphones/.
[5] R. Want, "iPhone: Smarter than the average phone," IEEE Pervasive Computing, Vol. 9, No. 3, pp. 6-9, 2010.
[6] J. Aron, "How innovative is Apple's new voice assistant, Siri?" New Scientist, Vol. 212, No. 2836, p. 24, 2011.
[7] Ardumote, Website of Ardumote iPhone App, Accessed March 2014. Online: http://samratamin.com/Ardumote_Tutorial.html.
[8] Ciao, Website of Ciao iPhone App, Accessed October 2013. Online: http://ciaoapp.com/.
[9] OSC, Introduction to OSC, Accessed March 2014. Online: http://opensoundcontrol.org/introduction-osc.
[10] Y.-H. Jeon and H. Ahn, "A multimodal ubiquitous interface system using smart phone for human-robot interaction," Int. Conf. Ubiquitous Robots and Ambient Intelligence (URAI), pp. 764-767, 2011.
[11] T. Westermann, "I'm home: Smartphone-enabled gestural interaction with multi-modal smart-home systems," Informatiktage, pp. 137-140, 2010.
[12] R. Komatsu et al., "Multi-modal communication interface for elderly people in informationally structured space," Intelligent Robotics and Applications, pp. 220-228, 2011.
[13] J. Nielsen, Usability Engineering, Morgan Kaufmann, San Francisco, 1994.
[14] J. Nielsen, "Enhancing the explanatory power of usability heuristics," Proc. SIGCHI Conf. Human Factors in Computing Systems, 1994.
[15] M.A. Goodrich and D.R. Olsen, "Seven principles of efficient human robot interaction," IEEE Int. Conf. Systems, Man and Cybernetics, Vol. 4, pp. 3942-3948, 2003.
[16] J. Nielsen and J.M. Faber, "Improving system usability through parallel design," Computer, Vol. 29, No. 2, pp. 29-35, 1996.
[17] J. Nielsen and H. Desurvire, "Comparative design review: An exercise in parallel design," Proc. INTERACT'93 and CHI'93 Conf. Human Factors in Computing Systems, pp. 414-417, 1993.
[18] S.-H. Lee, A. Panda, V. Kapila, and H. Wong, "Development of a Matlab data acquisition and control toolbox for PIC microcontrollers," ASEE Computers in Education Journal, Vol. I, pp. 38-51, 2010.
[19] S.-H. Lee, Y.-F. Li, and V. Kapila, "Development of a Matlab-based graphical user interface environment for PIC microcontroller projects," ASEE Computers in Education Journal, Vol. XV, pp. 41-56, 2005.
[20] N.O. Bernsen and L. Dybkjaer, Multimodal Usability, Springer-Verlag, 2009.
[21] J.A. Martin, "Mobile computing tips: Bluetooth vs. Wi-Fi FAQ," PCWorld, September 2002. Online: http://www.pcworld.com/article/103848/mobile_computing_tips_bluetooth_vs_wifi_faq.html.
[22] T. Igoe, Making Things Talk, O'Reilly, Sebastopol, CA, 2007.
[23] L. Iftode et al., "Smart phone: An embedded system for universal interactions," Proc. IEEE Int. Workshop Future Trends of Distributed Computing Systems, pp. 88-94, 2004.
[24] S. Oviatt and P. Cohen, "Perceptual user interfaces: Multimodal interfaces that process what comes naturally," Communications of the ACM, Vol. 43, No. 3, pp. 45-53, 2000.
[25] D. Diaper and N. Stanton, eds., The Handbook of Task Analysis for Human-Computer Interaction, CRC Press, 2003.
[26] J.A. Frank and V. Kapila, "Performing difficult teleoperation tasks using dominant metaphors of interaction," Proc. ASME Conf. Engineering Systems Design and Analysis, Copenhagen, Denmark, June 25-27, 2014.
[27] A. Holzinger, "Usability engineering methods for software developers," Communications of the ACM, Vol. 48, No. 1, pp. 71-74, 2005.
[28] B. Evans, Beginning Arduino Programming, Apress, New York, NY, 2011.
[29] I. Ahmed, H. Wong, and V. Kapila, "Internet-based remote control using a microcontroller and an embedded ethernet board," Proc. Amer. Contr. Conf., pp. 1329-1334, Boston, MA, June 2004.
[30] D. Mark and J. LaMarche, Beginning iPhone Development: Exploring the iPhone SDK, Apress, Berkeley, CA, 2009.
[31] D. Lopez, J.A. Frank, and V. Kapila, "Comparing interface elements on a tablet for intuitive teleoperation of a mobile manipulator," Proc. ASME Conf. Engineering Systems Design and Analysis, Copenhagen, Denmark, June 25-27, 2014.

Fig. 4. Flowchart of the algorithm used by the Arduino program that interfaces an iOS mobile application with physical hardware.




Fig. 5. The mobile application: Various views to monitor, command, and control the DC servomotor.

Fig. 6. Relative histograms showing the ratings given to (a) button control; (b) slider control; (c) tilt control; (d) gesture control; (e) animation control; (f) speech control; and (g) overall ease of use of the application.


