
Any Problems? A wearable sensor-based platform for representational learning-analytics

Gerald Pirkl, Peter Hevesi, Paul Lukowicz

German Research Center for Artificial Intelligence, University of Kaiserslautern

[email protected]

Pascal Klein, Carina Heisel, Sebastian Gröber, Jochen Kuhn

University of Kaiserslautern, Physics Education Research

Bernhard Sick

University of Kassel, Intelligent Embedded Systems

[email protected]

[pklein,heisel,groeber,kuhn]@physik.uni-kl.de

ABSTRACT

We describe in this work a sensor-based learning platform which supports both the teacher and the learner during exercises. We use a combination of eye tracker, sensor pen, and exercise texts to capture the progress of learners. The eye tracker retrieves information about the gaze, for example reading or scanning for key words; the sensor pen captures trends like the number of words written or the pressure applied to the paper. By combining this information, the platform can indicate a learner's problems to the teacher. Besides presenting this information to the teacher, we are working on advancing the platform into an adaptive system which could give individual feedback to the learners themselves according to their cognitive and affective requirements.

Keywords
sensor-supported education, sensor pen, eye tracker

Categories and Subject Descriptors
H.1.2 [User/Machine Systems]; H.5.2 [User Interfaces]; I.2.6 [Learning]; I.2.4 [Knowledge Representation Formalisms and Methods]; I.2.8 [Problem Solving, Control Methods, and Search]; I.2.10 [Vision and Scene Understanding]; K.3.2 [Computer and Information Science Education]

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
Ubicomp/ISWC '16 Adjunct, September 12-16, 2016, Heidelberg, Germany. © 2016 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-4462-3/16/09. DOI: http://dx.doi.org/10.1145/2968219.2971383

1. INTRODUCTION

The creation of mental models of the learning content requires an active part in information processing [19]. The presentation format of the learning material is therefore essential; it can be structured into text/picture or classified according to dynamics and interactivity [7]. Competent handling of these representations is considered significant for learning and problem solving in physics [4], [1]. To foster representational competence and the creation of mental models, it is important to analyze and study how learners use and produce representations and interact with them effectively while learning physics content. Our project therefore focuses on the sensor-based analysis of learners' effective handling of representations, which leads to the successful creation of mental models. In this context, students work on traditional problems from typical introductory, calculus-based physics courses for freshmen. Their progress is analyzed by our on-body sensor system.

2. SYSTEM OVERVIEW

Different sensor-based support systems have been presented and evaluated in the literature. In many cases they include acceleration, microphone, and camera systems. The authors of [25] summarize and evaluate different approaches to support the learner.

Figure 1: Information about the mental and physical state of the subject is drawn from the eye tracker (gaze analysis), the sensor pen (handwriting analysis), and a pulse watch (heart rate analysis). The exercise text is annotated with semantic meta information.

Centered around the subject, our system (Figure 1) consists of instructional material and on-body sensors capturing the current state and activity of the person during the instruction/exercise phase:

Instructional Material. The document presented to the user includes, besides the usual text, meta information describing the text structure and the components of the text, such as "text", "figure", or "equation". The document consists of both a summary of the physical topic and a set of exercises, which have to be worked out by the students.
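To make the annotation concrete, the meta information could be encoded as in the following sketch; the schema and field names are illustrative assumptions on our part, since the text does not specify a concrete format.

    # Hypothetical meta-information record for one exercise document.
    # The region types follow the categories named above ("text",
    # "figure", "equation"); all other fields are illustrative.
    exercise_meta = {
        "document_id": "gravitation_01",  # assumed identifier
        "regions": [
            {"type": "text",     "page": 1, "bbox": [50, 60, 540, 220]},
            {"type": "equation", "page": 1, "bbox": [80, 240, 300, 280]},
            {"type": "figure",   "page": 1, "bbox": [320, 240, 540, 420]},
            {"type": "text",     "page": 2, "bbox": [50, 60, 540, 180]},
        ],
    }

Such a structure would let gaze positions from the eye tracker be mapped onto the type of representation a learner is currently attending to.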

Sensor Pen. Hand motor skills can be captured by means of standard graphics tablets or pens equipped with sensors that measure pen orientation, acceleration, grip pressure, or the pressure applied to the surface of a tablet or the paper. The writing dynamics can be seen as a biometric feature that allows human individuals to be distinguished from one another. For that reason, handwritten passwords or signatures are used in [13], [8], [29], [3]. For authentication tasks, the challenge is to identify parts of a signature that are highly invariant and at the same time discriminative across a number of individuals. Moreover, this problem has to be solved with a minimal number of password or signature samples.

The writing dynamics are also an important biometric feature analyzed for medical, psychological, pedagogical, or related purposes. Examples are the detection of diseases and disorders or of changes in their progress (e.g., stroke, Alzheimer's disease, depression, obsessive-compulsive disorders, or schizophrenia) as described in [6], [10], [27], [21], [20], measuring side-effects of drugs (e.g., for typical and atypical neuroleptics as in [5]), or deciding upon the handedness of a child at an early age (for example [11] or [9]). Here, the challenges are, depending on the task, either to identify discriminating features (e.g., for left- or right-handedness, independent of the individual) or to find features that reflect certain properties of the fine motor skills that change as a disease advances (e.g., for Alzheimer's disease). Earlier work we presented in [12] may be seen as related to the work presented here: by means of Hollerbach's script generator model we demonstrated that influences such as low temperature, physical strain, or writing with the non-preferred hand (in most cases the left hand) have a noticeable effect on handwriting dynamics. Evidence of such influences was shown even for very simple kinds of hand movement.

We therefore use a sensor pen to capture the progress of the test subjects while they answer the exercises: the Stabilo Digi-Pen captures its own orientation, acceleration, gyroscope, and compass information in addition to the pressure applied to the paper while the person is writing (sampling rate 200 Hz). Features (acceleration variance, variance and magnitude of the pressure applied to the paper; the raw signal is shown in Figure 2) are extracted from the handwriting and are used to identify basic processes in the creation of mental models.

Figure 2: Example of the pen's pressure signal (in digits) over time in seconds while writing a formula (left) and a text segment/word (right).
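As a rough sketch of this feature extraction (the 200 Hz sampling rate and the feature list come from the text; the window length and function interface are our assumptions):

    import numpy as np

    FS = 200  # Hz, sampling rate of the Stabilo Digi-Pen

    def pen_features(accel, pressure, win_s=1.0):
        """Windowed features from the raw pen signals.

        accel:    (N, 3) accelerometer samples
        pressure: (N,)   tip pressure in raw digits
        Returns one row (acceleration variance, pressure variance,
        pressure magnitude) per non-overlapping window.
        """
        win = int(win_s * FS)
        rows = []
        for start in range(0, len(pressure) - win + 1, win):
            a = np.asarray(accel[start:start + win], dtype=float)
            p = np.asarray(pressure[start:start + win], dtype=float)
            rows.append([a.var(axis=0).sum(),  # acceleration variance
                         p.var(),              # pressure variance
                         np.abs(p).mean()])    # pressure magnitude
        return np.array(rows)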

Stress Level Estimation. Non-invasive sensors have been proposed, for example in [18], where piezo and other sensors capture the stress level of a driver during car rides. Jarvis et al. describe in [14] the estimation of workload using on-body sensors including blood pressure and EEG; such setups introduce invasive sensor attachments and influence the person while performing their tasks. We therefore attach a SmartWatch capturing the pulse as a rough indication of stress and mental pressure.
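A minimal sketch of how the pulse could be turned into such a rough indication (the baseline logic below is our illustration, not the platform's actual algorithm):

    import numpy as np

    def stress_indicator(heart_rate_bpm, baseline_samples=60):
        """Relative heart-rate elevation over a resting baseline.

        heart_rate_bpm: 1-D array of pulse samples; the first
        `baseline_samples` values are assumed to stem from a calm phase.
        Positive values indicate an elevated pulse.
        """
        hr = np.asarray(heart_rate_bpm, dtype=float)
        baseline = hr[:baseline_samples].mean()
        return (hr - baseline) / baseline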

Eye Tracker. An eye tracker (a combination of pupil camera and world camera as described in [15]) retrieves the (rough) gaze position of the person. This information, as discussed for example in [23], can be used to differentiate between reading a text and scanning for the buzz words necessary to solve a task. As indicated in [28], eye gaze during handwriting usually differs from gaze during reading (fixations are longer), which can be used to analyze the subject's activities. We additionally attached an IMU sensor to the head-mounted camera system to capture head movements in the three orientations, recognizing search phases (looking back and forth between the exercise text and the factual-knowledge answers) or reading phases with longer low-movement segments of the head.

Figure 3: Pupil Labs eye tracker and attached IMU. Gaze and scene recording in combination with head movement.
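Reading and scanning phases are typically separated via fixations; a minimal dispersion-based (I-DT) detector over normalized gaze samples could look as follows (thresholds, sampling rate, and interface are illustrative assumptions):

    import numpy as np

    def detect_fixations(gaze, fs=30, max_disp=0.02, min_dur=0.1):
        """Dispersion-threshold (I-DT) fixation detection.

        gaze: (N, 2) normalized gaze positions sampled at `fs` Hz.
        A window counts as a fixation while its x/y dispersion stays
        below `max_disp` and it lasts at least `min_dur` seconds.
        Returns (start_index, end_index) pairs.
        """
        gaze = np.asarray(gaze, dtype=float)

        def disp(w):  # summed x/y extent of a window
            return float((w.max(axis=0) - w.min(axis=0)).sum())

        min_len = int(min_dur * fs)
        fixations, start = [], 0
        while start + min_len <= len(gaze):
            end = start + min_len
            if disp(gaze[start:end]) <= max_disp:
                # grow the fixation while dispersion stays small
                while end < len(gaze) and disp(gaze[start:end + 1]) <= max_disp:
                    end += 1
                fixations.append((start, end))
                start = end
            else:
                start += 1
        return fixations

Following [28], longer mean fixation durations would then point towards writing phases, shorter ones towards reading.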


Figure 4: Left: Gaze information derived from an eye tracker. Right: Sensor pen IMU information while writing.

3. EXPERIMENTS

Six subjects (2 female, 4 male) took part in the experiment: two students and four researchers. Based on their self-assessment of how familiar they were with the topic before the experiment, they were categorized as experts or novices (three persons each). The subjects had to solve exercises related to the gravitational field, covering factual knowledge and knowledge transfer. The factual knowledge is tested using a cloze; the more challenging exercises require handwritten answers, mathematical calculations, and the creation of diagrams and graphs. The sensor analysis addresses signals of text, formulas, diagrams, and graphs. Initially we segment the sensor signal of the pen using the pressure applied by the tip of the pen. Combining the length of the segments with acceleration and gyroscope features allows us to distinguish between mathematical formulas and descriptive text (100 percent recognition rate after manual annotation of the signals). The time between words additionally carries information about cognitive processes and uncertainties about the learned topics, which are visible in the variances of the written text and mathematical formulas. The effect can be seen in Figure 5: the vertical axis shows the average time a person spent between uses of the pen (writing a word or formula, or drawing), while the horizontal axis indicates the overall number of words (including drawings and mathematical formulas) written by the person during the exercise. Experts typically need much less time for thinking or interpreting the exercise tasks than subjects without prior knowledge of the topic.

Figure 5: Overall number of words vs. time difference between words for each person. Experts need significantly less time to think when they answer the questions, which indicates they have fewer insecurities recalling the necessary information.
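The pressure-based segmentation and the inter-word timing can be sketched as follows (the pressure threshold and merge gap are illustrative; the text specifies only the general procedure):

    import numpy as np

    FS = 200  # Hz, pen sampling rate

    def segment_words(pressure, threshold=50.0, min_gap_s=0.3):
        """Split the pen signal into written segments via tip pressure.

        A segment is a maximal run with pressure above `threshold`;
        runs separated by less than `min_gap_s` (pen lifts inside one
        word) are merged. Returns (segments, gaps): segments as
        (start, end) sample indices, gaps as inter-segment times in s.
        """
        on = np.asarray(pressure) > threshold
        segments, start = [], None
        for i, flag in enumerate(on):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                segments.append([start, i])
                start = None
        if start is not None:
            segments.append([start, len(on)])
        merged = segments[:1]
        for s, e in segments[1:]:
            if (s - merged[-1][1]) / FS < min_gap_s:
                merged[-1][1] = e  # short pen lift: same word, merge
            else:
                merged.append([s, e])
        gaps = [(b[0] - a[1]) / FS for a, b in zip(merged, merged[1:])]
        return merged, gaps

The mean of the returned gaps and the number of segments per person correspond to the two axes plotted in Figure 5; segment length combined with the pen's motion features then feeds the formula-vs-text classification.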

4. CONCLUSION AND FUTURE WORK

We could detect and differentiate basic representations (written text vs. formulas), but further work is needed to detect and analyze more complex representations such as diagrams and graphs. Furthermore, we are advancing the instructional material using recognition techniques (for example, QR codes) and optimizing the content based on the analyzed limits in identifying diagrams and graphs. Besides presenting the data to the teacher, we work on advancing the platform into an adaptive system, which could give individual feedback to the learners themselves according to their individual cognitive and affective requirements. With this we combine the importance of representations as one of the key elements of effective teaching and learning of science (amongst others, the work of Barnett et al. [2], Kozma [16], or Lajoie [17]) with open questions of the dynamic research area of learning analytics ([26], [22], [24]) and make a first step towards an adaptive teaching and learning system which provides needs-based individual feedback based on implicit sensor information.

5. REFERENCES

[1] Shaaron Ainsworth. 2006. DeFT: A conceptual framework for considering learning with multiple representations. Learning and Instruction 16, 3 (2006), 183–198.
[2] Michael Barnett, Lisa Yamagata-Lynch, Tom Keating, Sasha A. Barab, and Kenneth E. Hay. 2005. Using virtual reality computer models to support student understanding of astronomical concepts. The Journal of Computers in Mathematics and Science Teaching 24, 4 (2005), 333.
[3] M. Diaz, A. Fischer, R. Plamondon, and M. A. Ferrer. 2015. Towards an automatic on-line signature verifier using only one reference per signer. (2015).
[4] Jens Dolin. 2007. Science education standards and science assessment in Denmark. Waxmann, 71–82.
[5] M. Dose, C. Gruber, A. Grunz, C. Hook, J. Kempf, G. Scharfenberg, and B. Sick. 2007. Towards an automated analysis of neuroleptics' impact on human hand motor skills. (2007).
[6] P. Fourneret, F. de Vignemont, N. Franck, A. Slachevsky, B. Dubois, and M. Jeannerod. 2002. Perception of self-generated action in schizophrenia. Cognitive Neuropsychiatry 7, 2 (2002), 139–156.
[7] R. Girwidz, Th. Rubitzko, S. Schaal, and F. X. Bogner. 2006. Theoretical concepts for using multimedia in science education. Journal of Science Education 17 (2006), 77–93.
[8] C. Gruber, T. Gruber, S. Krinninger, and B. Sick. 2010. Online signature verification with support vector machines based on LCSS kernel functions. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 40, 4 (2010), 1088–1100.
[9] T. Gruber, B. Meixner, J. Prosser, and B. Sick. 2012. Handedness tests for preschool children: A novel approach based on graphics tablets and support vector machines. Applied Soft Computing 12, 4 (2012), 1390–1398.
[10] R. M. Guest, S. Chindaro, M. C. Fairhurst, and J. M. Potter. 2003. Automatic classification of hand drawn geometric shapes using constructional sequence analysis. (2003).


[11] V. Henkel, R. Mergl, G. Juckel, D. Rujescu, P. Mavrogiorgou, I. Giegling, H.-J. Möller, and U. Hegerl. 2001. Assessment of handedness using a digitizing tablet: A new method. Neuropsychologia 39 (2001), 1158–1166.
[12] J. Hofer, C. Gruber, and B. Sick. 2006. Biometric analysis of handwriting dynamics using a script generator model. (2006).
[13] D. Impedovo and G. Pirlo. 2008. Automatic signature verification: The state of the art. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 38, 5 (2008), 609–635.
[14] Jan Jarvis, Felix Putze, Dominic Heger, and Tanja Schultz. 2011. Multimodal person independent recognition of workload related biosignal patterns. In Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI '11). ACM, New York, NY, USA, 205–208. DOI: http://dx.doi.org/10.1145/2070481.2070516
[15] Moritz Kassner, William Patera, and Andreas Bulling. 2014. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. (April 2014). http://arxiv.org/abs/1405.0006
[16] Robert Kozma and Joel Russell. 2005. Multimedia learning of chemistry. The Cambridge Handbook of Multimedia Learning (2005), 409–428.
[17] Susanne P. Lajoie, Claudia Guerrera, Steven D. Munsie, and Nancy C. Lavigne. 2001. Constructing knowledge in the context of BioWorld. Instructional Science 29, 2 (2001), 155–186.
[18] Y. Lin, H. Leng, G. Yang, and H. Cai. 2007. An intelligent noninvasive sensor for driver pulse wave measurement. IEEE Sensors Journal 7, 5 (May 2007), 790–799. DOI: http://dx.doi.org/10.1109/JSEN.2007.894923
[19] Richard E. Mayer (Ed.). 2014. The Cambridge Handbook of Multimedia Learning (2nd ed.). Cambridge University Press. http://dx.doi.org/10.1017/CBO9781139547369
[20] R. Mergl, G. Juckel, J. Rihl, V. Henkel, M. Karner, P. Tigges, A. Schröter, and U. Hegerl. 2004a. Kinematical analysis of handwriting movements in depressed patients. Acta Psychiatrica Scandinavica 109 (2004), 383–391.
[21] R. Mergl, P. Mavrogiorgou, G. Juckel, M. Zaudig, and U. Hegerl. 2004b. Effects of sertraline on kinematic aspects of hand movements in patients with obsessive-compulsive disorder. Psychopharmacology 171 (2004), 179–185.
[22] Daniele Di Mitri, Maren Scheffel, Hendrik Drachsler, Dirk Börner, and Stefaan Ternier. 2016. Learning Pulse: Using wearable biosensors and learning analytics to investigate and predict learning success in self-regulated learning. In Proceedings of the First International Workshop on Learning Analytics Across Physical and Digital Spaces (CrossLAK 2016), co-located with the 6th International Conference on Learning Analytics & Knowledge (LAK 2016), Edinburgh, Scotland, UK, April 25–29, 2016. 34–39. http://ceur-ws.org/Vol-1601/CrossLAK16Paper7.pdf

[23] Ayano Okoso, Takumi Toyama, Kai Kunze, Joachim Folz, Marcus Liwicki, and Koichi Kise. 2015. Towards extraction of subjective reading incomprehension: Analysis of eye gaze features. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 1325–1330.
[24] Zacharoula K. Papamitsiou and Anastasios A. Economides. 2014. Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Educational Technology & Society 17, 4 (2014), 49–64.
[25] Jan Schneider, Dirk Börner, Peter van Rosmalen, and Marcus Specht. 2015a. Augmenting the senses: A review on sensor-based learning support. Sensors 15, 2 (2015), 4097. DOI: http://dx.doi.org/10.3390/s150204097
[26] Jan Schneider, Dirk Börner, Peter Van Rosmalen, and Marcus Specht. 2015b. Augmenting the senses: A review on sensor-based learning support. Sensors 15, 2 (2015), 4097–4133.
[27] A. Schröter, R. Mergl, K. Bürger, H. Hampel, H.-J. Möller, and U. Hegerl. 2003. Kinematic analysis of handwriting movements in patients with Alzheimer's disease, mild cognitive impairment, depression and healthy subjects. Dementia and Geriatric Cognitive Disorders 15 (2003), 132–142.
[28] Jodi C. Sita and Katelyn A. Taylor. 2015. Eye movements during the handwriting of words: Individually and within sentences. Human Movement Science 43 (2015), 229–238.
[29] F. J. Zareen and S. Jabin. 2013. A comparative study of the recent trends in biometric signature verification. (2013).
