Design and Development of Multimodal Applications: A Vision on Key Issues and Methods

Samuel Silva, Nuno Almeida, Carlos Pereira, Ana Isabel Martins, Ana Filipa Rosa, Miguel Oliveira e Silva, António Teixeira
DETI/IEETA – University of Aveiro

In this article we discuss several aspects we deem important to the design, development and evaluation of multimodal systems. These derive from our experience gathered through continued work on multimodal applications in the context of several European projects, such as S4S (http://www.smartphones4seniors.org/), AAL4ALL (http://www.aal4all.org) and Paelife (www.paelife.eu). The contribution of this article is a vision of the full multimodal application design and development cycle, to which we have contributed at different levels, along with our perspective on the key issues to address. Underlying our proposals are concerns regarding how traditional methods need to be adapted to serve the more complex scenario of multimodality, and what needs to be proposed to tackle new challenges and provide users with the best possible experience of usable and useful applications. In our discussion, we consider three aspects: 1) the system architecture, to support multimodality; 2) the design and development methodologies, to account for the proper gathering and fulfillment of requirements, adapted to the target users; and 3) the evaluation, at the different stages of development, considering the increasing complexity of the application.

Multimodal Architecture

The architecture adopted for a multimodal application is important, as it directly influences the effort required for development and the versatility of the application in accommodating new or improved interaction modalities and features. Aligned with the W3C multimodal architecture [Dahl2013], we adopt an architecture [Almeida2013] based on decoupled modalities (Fig. 1): modalities have no hard-coded connection to the application core and communicate with it through messages. This allows modalities to be added, removed or modified without changes to the application.

Figure 1 - The W3C Multimodal Architecture


This decoupling is also important because modalities (e.g., speech [Almeida2014a]) can be reused in any application built on the same architecture. Furthermore, this kind of architecture opens new routes for innovative features. One such feature, which we have been exploring, is multi-device applications [Almeida2014b]: the same application runs on multiple devices, presenting alternative views (e.g., an overall view on a tablet and detailed contents on a TV), with interaction potentially happening on any of the devices.
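To make the message-based decoupling more concrete, the sketch below is a minimal illustration in Python, assuming MMI-style life-cycle events serialized as JSON and an in-process dispatcher; the names (InteractionManager, SpeechModality) are hypothetical and do not correspond to our actual components, which exchange such messages between processes and devices.

# Minimal sketch of a decoupled modality component, assuming messages are
# exchanged as W3C MMI-style life-cycle events serialized to JSON.
# Names and the in-memory dispatch are illustrative; a real deployment would
# transport these messages between processes or devices.
import json
import uuid


class InteractionManager:
    """Application core: routes events, knows nothing about concrete modalities."""

    def __init__(self):
        self.modalities = {}

    def register(self, modality):
        self.modalities[modality.name] = modality

    def receive(self, message: str):
        event = json.loads(message)
        # The core reacts to generic events only (e.g., a recognized command),
        # so modalities can be added or replaced without touching this code.
        if event["event"] == "ExtensionNotification":
            print(f"core handling '{event['data']}' from {event['source']}")


class SpeechModality:
    """Modality component: translates user input into generic event messages."""

    def __init__(self, manager: InteractionManager):
        self.name = "speech"
        self.manager = manager
        manager.register(self)

    def on_recognized(self, utterance: str):
        # Only the message format couples the modality to the application core.
        event = {
            "event": "ExtensionNotification",
            "requestId": str(uuid.uuid4()),
            "source": self.name,
            "data": utterance,
        }
        self.manager.receive(json.dumps(event))


if __name__ == "__main__":
    core = InteractionManager()
    speech = SpeechModality(core)
    speech.on_recognized("open the photo gallery")

Because the core only interprets generic events, replacing the speech modality with, say, a touch or gesture modality would require no changes on the application side.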

Multimodal Application Development Methodology

Designing the user experience is a challenging task, particularly when the target audience is strongly heterogeneous [Silva2014], e.g., the elderly. Inheriting from user-centered design (UCD), the proposed development method is aligned with the methodology described in [Martins2012]. After the requirements are gathered (phase 1), a prototype is proposed (phase 2) and evaluated (phase 3) in order to refine the requirements. This iterative methodology continues with additional prototypes and evaluations, converging towards an increasingly refined application (Fig. 2).

Figure 2 - The adopted development methodology is an iterative process in which multiple development cycles increasingly refine requirements and prototypes, building on the outcomes of successive evaluation stages.

Note that, to support rapid prototyping, the adopted architecture plays a key role in reducing the development effort. Each prototype works as a mediator of the dialogue between developers and end users, used to gather feedback, refine requirements and elicit new ones.

Evaluation, from Conceptual Validation to Pilot Tests

Assessing the user experience is an integral part of development. As the application progresses through its development stages, the evaluation methodology should evolve, each time the application is tested, to tackle increasingly complex aspects. On the other hand, evolving the evaluation protocols should not hinder the analysis and comparison of how feedback evolves across the different development iterations. This is an important reason why a common ground should exist among all evaluations regarding how feedback is elicited and performance is measured. We adopt a methodology based on the International Classification of Functioning, Disability and Health (ICF). The ICF was developed by the World Health Organization and defines functionality as the result of a complex interaction between health conditions and contextual factors, including environmental factors.

This evaluation methodology comprises three phases:

• Conceptual validation, to determine whether the idea for a product or service is sustainable in terms of interface and functions;
• Prototype tests, to collect, in a controlled environment, information regarding usability and user satisfaction;
• Pilot tests, to evaluate, in addition to usability and satisfaction, the impact on users' lives, often conducted in users' homes and integrated into their daily lives.

In complex multimodal applications, working in dynamic environments, modalities and tasks cannot be treated as isolated phenomena. They are part of a dynamic flow of information between the application and the user, which is subject to a multitude of internal and external interferences: modalities used simultaneously might cause sensory overload; overlapping tasks might cause disorientation; or other people present in the room might hinder user performance. Therefore, to evaluate these complex systems, particularly during pilot tests, traditional evaluation methods may fall short. To tackle this issue, we have been working on an evaluation platform (DynEaaS – Dynamic Evaluation as a Service) capable of assessing user performance in dynamic environments by allowing evaluation teams to create and conduct context-aware evaluations. The platform allows evaluators to specify evaluation plans that are triggered at precise times or only when certain conditions are met, thus gathering better contextualized data. By relying on ontologies, the platform is highly flexible and can be used in different domains without changes to its core.
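As a rough illustration of this idea (hypothetical names, not the actual DynEaaS API), the sketch below shows an evaluation plan bound to a context condition: a short questionnaire is triggered only when the stated condition holds, so the collected feedback stays tied to the situation that produced it.

# Illustrative sketch of a context-aware evaluation plan (hypothetical API):
# a questionnaire fires only when a stated context condition is satisfied.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class EvaluationPlan:
    name: str
    condition: Callable[[Dict[str, object]], bool]  # predicate over the current context
    questions: List[str] = field(default_factory=list)

    def maybe_run(self, context: Dict[str, object]) -> None:
        # Trigger the plan only when the context matches, so answers are contextualized.
        if self.condition(context):
            print(f"[{self.name}] triggered with context {context}")
            for q in self.questions:
                print(" -", q)


if __name__ == "__main__":
    # Hypothetical plan: ask about the speech modality right after it was used at home.
    plan = EvaluationPlan(
        name="speech-at-home",
        condition=lambda ctx: ctx.get("location") == "home" and ctx.get("last_modality") == "speech",
        questions=["Was the spoken command recognized on the first try?"],
    )
    plan.maybe_run({"location": "home", "last_modality": "speech"})   # fires
    plan.maybe_run({"location": "lab", "last_modality": "touch"})     # does not fire

In the platform itself, the context model relies on ontologies, as noted above, rather than on hard-coded predicates like the lambda used here, which is what allows reuse across domains without core changes.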

References

[Almeida2013] N. Almeida and A. Teixeira, "Enhanced interaction for the Elderly supported by the W3C Multimodal Architecture," in Proc. 5ª Conferência Nacional sobre Interacção, 2013.
[Almeida2014a] N. Almeida, S. Silva, and A. Teixeira, "Design and Development of Speech Interaction: A Methodology," in Proc. HCII 2014, LNCS vol. 8511, pp. 370-381, June 2014.
[Almeida2014b] N. Almeida, S. Silva, and A. Teixeira, "Multimodal Multi-Device Application Supported by an SCXML State Chart Machine," in Proc. EICS Workshop on Engineering Interactive Systems with SCXML, 2014.
[Dahl2013] D. A. Dahl, "The W3C multimodal architecture and interfaces standard," J. Multimodal User Interfaces, vol. 7, no. 3, pp. 171-182, Apr. 2013.
[Martins2012] A. I. Martins, A. Queirós, M. Cerqueira, N. Rocha, and A. J. S. Teixeira, "The International Classification of Functioning, Disability and Health as a conceptual model for the evaluation of environmental factors," Procedia Computer Science, vol. 14, pp. 293-300, October 2012.
[Silva2014] S. Silva, D. Braga, and A. Teixeira, "AgeCI: HCI and Age Diversity," in Proc. HCII 2014, LNCS vol. 8515, pp. 179-190, June 2014.
