IDC 2011
SHORT PAPERS
20th-23rd June, Ann Arbor, USA
Evaluating the Usability of an Interactive Map Activity for Climate Change Education

Vanessa L. Peters, Nancy Butler Songer
School of Education, University of Michigan
Ann Arbor, MI, 48109 USA
+1 734 604 6096
[email protected], [email protected]

ABSTRACT
We report the results of usability testing on a professional modeling tool designed to support students’ learning about climate change impacts. Using a questionnaire and test responses as data sources, we evaluated the efficacy of an interactive map activity that was completed by 84 middle school students. An analysis of the data showed that students had difficulties manipulating simple data overlays when answering questions about tree distributions with the model. The findings from this study have implications for researchers who wish to design interactive map-based activities for science education, and for technology developers seeking to improve the usability of complex learning environments for middle school students.
Categories and Subject Descriptors
K.3.1 [Computers and Education]: Computer Uses in Education – computer-assisted instruction (CAI), collaborative learning.
General Terms
Performance, Design, Human Factors.
Keywords
Climate change education, predictive species distribution modeling, usability, inquiry learning.
1. INTRODUCTION
The modeling tool that was evaluated in this study was part of a large-scale curriculum unit that teaches students about the ecological impacts of climate change on biodiversity and ecosystems. With a focus on local datasets, the modeling tool leverages massive caches of online geospatial occurrence data to create maps that students use for constructing explanations and making predictions about species distributions based on changing climatic conditions.
Our goal was to obtain user feedback to inform decisions about data presentation, map layout, and interface design while still in the early phases of development. Specifically, we wanted to learn if students had difficulties manipulating overlay data in an activity based on a Google map application. Since this type of data is central to our curriculum, it was important to evaluate the usability of early activity prototypes.

2. USABILITY IN EDUCATION
Most school-age children today have a fair amount of web proficiency. Despite this, few research studies have specifically investigated how children interact with educational web sites [12]. More often than not, efforts to design learning environments have relied on developers' assumptions of how children engage with web-based material [7]. In the learning sciences, usability testing is essential for creating technology-enhanced learning environments that support specific learning goals. Without early usability testing, researchers risk designing curriculum materials that require students to adapt and adjust to the new technology environment, rather than designing the environment in a way that is sensitive and responsive to students' needs [10]. By evaluating usability at the formative stage of development, researchers can capture valuable user input that can inform or change subsequent decisions about functionality and interface design [9].

A number of researchers have investigated the role of usability in developing educational technologies. Crowther, Keller, and Waddoups [1] report a case study that outlines the relationship between usability testing and educational evaluation, explaining how the former is integral to improving the quality and effectiveness of computer-supported learning and instruction. They point out that ill-designed instructional environments are unlikely to have much pedagogical value, and that early usability tests can avoid time-consuming and costly replacements or modifications during technology development. Unfortunately, usability testing is most often conducted on fully developed products, which lessens the potential for usability findings to improve technology design [8, 13]. Educational researchers, however, have much to gain from obtaining students' feedback early in the design process, before committing to a particular curricular design or technology framework.

3. EVALUATION APPROACHES
There are several ways in which usability studies can be classified; however, most tests fall into one of two main categories: formative and summative [3]. While summative evaluations are conducted on finished or nearly finished products, formative evaluation occurs concurrently with design and development work. Although there are many different approaches to testing usability, most efforts focus on aspects of an interface in the context of a given task, such as user success rate (whether the user can perform the task), the time required to complete the task, the error rate of task completion, and the level of user satisfaction [5]. Generally speaking, usability studies measure some quality associated with the interface, such as learnability, memorability, and efficiency of use [4, 11]. Different evaluation methods are often employed to measure these qualities and to capture the different aspects of the users' experience with the technology [2]. The evaluation methods used in the current study are described in the following section.
3.1 Evaluation Methods
The formative usability testing in this study was conducted in two phases. In the first phase, project team members examined the activity site to identify problems or potential problems with the interface. The examiners were asked to evaluate the layout according to a number of established criteria: ease of navigation, availability of user controls (e.g., including a "Back" button), and consistency in presentation (e.g., ensuring that links or other features with the same or similar functions were in the same location on each web page). In the second phase, a sample drawn from our target population (i.e., middle school students) completed two web-based activities without any interference from the researchers. This paper reports the results of the second phase of usability testing.

Different usability metrics are available for evaluating interface design [4]. Each of these metrics can be used to measure the user-friendliness and utility of an interface, and to inform researchers of areas that are in need of revision or refinement. The following metrics were used to obtain user information in this study:
- User profiling: students were asked to complete an online survey prior to and after completing the activity;
- Successful task completion: students were asked to complete a series of tasks; a task was counted as successfully completed when the student answered the question(s) or moved on to another activity;
- Error counts: the number of students who interpreted the activity correctly and performed the desired actions.
We used the above metrics to assess students’ performance and interactions during a data overlay map activity that was developed specifically for this study, with the goal of using the findings to inform the technology design for our curricular units.
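To make these metrics concrete, the following is a minimal TypeScript sketch of how task logs could be tallied; the record shape, field names, and helper functions are our own assumptions for illustration, not the instruments used in the study.

// Hypothetical task-log record for one student's attempt at one task.
interface TaskAttempt {
  studentId: string;
  taskId: string;
  completed: boolean;      // answered the question(s) or moved on to another activity
  desiredActions: boolean; // interpreted the activity and performed the desired actions
}

// Successful task completion: the share of attempts that were completed.
function successRate(attempts: TaskAttempt[]): number {
  if (attempts.length === 0) return 0;
  return attempts.filter((a) => a.completed).length / attempts.length;
}

// Error counts, per the definition above: the number of distinct students
// whose attempts show they interpreted the activity and acted as intended.
function desiredActionCount(attempts: TaskAttempt[]): number {
  return new Set(
    attempts.filter((a) => a.desiredActions).map((a) => a.studentId),
  ).size;
}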
4. STUDY DESIGN
We worked with a software developer to design a Google map activity that was a simplification of the predictive distribution-modeling tool that is central to the wider curriculum project (Figure 1). The activity required students to click on a series of map overlays in order to answer questions about tree distributions.
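As a rough illustration of this interaction, the sketch below shows how such click-to-toggle tree overlays might be wired up with the Google Maps JavaScript API; the overlay image URLs, bounds, and checkbox IDs are hypothetical placeholders, not the activity's actual assets.

// Hypothetical sketch: toggleable tree-distribution overlays on a Google map.
// URLs, bounds, and element IDs are placeholders, not the study's assets.
const map = new google.maps.Map(document.getElementById("map") as HTMLElement, {
  center: { lat: 39.8, lng: -98.6 }, // roughly the continental United States
  zoom: 4,
});

// One image overlay per tree type (mock data, as in the study activity).
const usBounds = { north: 49.0, south: 25.0, east: -66.9, west: -124.8 };
const treeLayers: Record<string, google.maps.GroundOverlay> = {
  deciduous: new google.maps.GroundOverlay("https://example.org/deciduous.png", usBounds),
  coniferous: new google.maps.GroundOverlay("https://example.org/coniferous.png", usBounds),
  pine: new google.maps.GroundOverlay("https://example.org/pine.png", usBounds),
};

// Checking or unchecking a layer's checkbox shows or hides the corresponding
// overlay, mirroring the click-to-toggle interaction students used.
for (const [name, overlay] of Object.entries(treeLayers)) {
  const checkbox = document.getElementById(`layer-${name}`) as HTMLInputElement;
  checkbox.addEventListener("change", () => {
    overlay.setMap(checkbox.checked ? map : null); // null detaches the overlay
  });
}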
Figure 1. Map interface for usability study

Designing a usable interface was critical so students could focus their attention on interpreting and understanding the map data, rather than on manipulating the map interface. In addition, since the curriculum will include map data that is significantly more complex, it was critical to learn about the difficulties students faced when using digital maps to learn about species distribution. Figure 2 shows an example of the type of authentic vector-based map data that will be included in the final curricular activities.

Figure 2. Interface with authentic map data

4.1 Participants
Participants were 84 middle school students from the United States and Canada. The mean age of the students was 11.57 (SD = 13.77). Prior to the study, an online questionnaire was administered to determine a baseline for students' computer usage, and to establish homogeneity among participants (see Figures 3 and 4).

Figure 3. "How many days a week do you use a computer or mobile device?" (bar chart of frequency by computer usage per week: every day, 5-6, 3-4, 1-2, or 0 times)

Figure 4. "How do you spend most of your time on the Internet?" (bar chart of frequency by Internet usage: finding information, gaming, email, social networking)
198
IDC 2011
SHORT PAPERS
4.2 Data Sources
Data for the study were drawn from students' responses to two online questionnaires and their answers to questions about tree distributions. Informal interviews were conducted with a small number of students (n = 4) to gain insight into their experiences with and perspectives on the activity.
5. RESULTS AND DISCUSSION
The map activity included four questions about the types of trees one could find in the United States. To answer these questions, students had to make use of data overlays to identify the distribution of tree types. Since authentic species distribution data was not yet available at the time of the study, we created mock overlays for the purpose of usability testing. Thus, the data we used was not authentic map data but simple overlays that were deliberately designed to require students to access different areas of the map. For example, to answer the question “Are there coniferous trees in Oregon?” students would first have to pan to the left to locate the state of Oregon (all states were clearly labeled), and then resize the map with the zoom function to distinguish between state boundaries. Students would then have to click one or more of the tree overlays (deciduous, coniferous and pine forests) to determine which of those tree types could be found in Oregon.
Students' responses to the map questions indicated that many students had difficulties manipulating even the simplest of map interfaces. In the activity, three of the four questions were simple Yes/No questions that had only one correct response; however, answering these questions accurately depended on students' ability to arrange the data so that the overlays and state boundaries were visible. Table 1 shows the breakdown of student responses to the Yes/No map questions.

Table 1. Responses to tree distribution questions

Question                                      Correct Response   Incorrect Response
1. Are there deciduous trees in Texas?              43%                57%
2. Are there pine forests in Arizona?               52%                48%
3. Can you find coniferous trees in Oregon?         64%                36%

The fourth question in the map activity was similar to the others in that it asked students what type of trees could be found in a particular state. However, unlike the other questions, this was not a Yes/No question with radio buttons but an open-ended question where students were required to type in their response. To answer the question correctly, students needed to list the three different tree types, or type the words "all three". Of the 84 participants, 42% answered the question correctly and 34% listed either coniferous or pine forests (no student entered both). A small number of students replied to the question with "none" (7%) or "does not apply" (5%). Interestingly, 12% of students answered "spruce birch", which was not included as a data layer option in the activity.
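The scoring rule for this open-ended item reduces to a short predicate; a minimal sketch is given below, assuming simple substring matching (the function and constant names are illustrative, not part of the study's instruments).

// Hypothetical scoring rule for the open-ended question: a response is
// correct if it says "all three" or names all three tree types.
const TREE_TYPES = ["deciduous", "coniferous", "pine"];

function isCorrect(response: string): boolean {
  const text = response.toLowerCase();
  if (text.includes("all three")) return true;
  return TREE_TYPES.every((type) => text.includes(type));
}

// e.g., isCorrect("Deciduous, coniferous and pine forests") === true
// e.g., isCorrect("spruce birch") === false ("spruce birch" was not a layer)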
Responses to the post-study questionnaire further indicated that students had difficulties with map functionality and data manipulation. For example, in their questionnaire responses, only 26% of students "agreed" or "strongly agreed" that it was easy to find the information they needed to complete the map activity (Figure 5).

Figure 5. "It was easy to find the information I needed for the map activity." (bar chart of frequency by response option, from strongly agree to strongly disagree)

One unexpected finding pertained to how students used the overlays while interacting with the map data. In the questionnaire, 41% of students indicated having just two overlays visible at one time, yet many of them still answered the Yes/No questions incorrectly. In other words, even though students had the correct data layers displayed, they failed to interpret that data correctly. This suggests that students' difficulties in answering the questions were at least partially due to their lack of knowledge and familiarity with interactive map components.
6. CONCLUSION
The findings from this study have implications not only for our own research project, but also for educational research more generally. With respect to usability, the students in our study struggled with technology features that were fundamental to the activity. When answering the map questions, students were unable to make the connection between the physical map overlays and the data they represented – a critical distinction for using maps to answer scientific questions. In future development work, carefully designed scaffolds are required not only to improve the usability of the technology interface, but also to guide students to engage with the curricular content in ways that lead to improved learning.
7. REFERENCES
[1] Crowther, M. S., Keller, C. C., and Waddoups, G. L. 2004. Improving the quality and effectiveness of computer-mediated instruction through usability evaluations. British Journal of Educational Technology, 35(3), 289-303.
[2] Genov, A., Keavney, M., and Zazelenchuk, T. 2009. Usability testing with real data. Journal of Usability Studies, 4(2), 85-92.
[3] Hix, D., and Hartson, H. R. 1993. Developing User Interfaces: Ensuring Usability Through Product and Process. New York, NY: John Wiley.
[4] Nielsen, J. 1993. Usability Engineering. San Francisco, CA: Morgan Kaufmann.
[5] Nielsen, J. 2001. Usability metrics. Jakob Nielsen's Alertbox. Retrieved October 16, 2010, from http://www.useit.com/alertbox/20010121.html
[6] Nielsen, J. 2003. Usability 101: An introduction to usability. Jakob Nielsen's Alertbox. Retrieved November 2, 2010, from http://www.useit.com/alertbox/20030825.html
[7] Nielsen, J. 2010. Children's websites: Usability issues in designing for kids. Jakob Nielsen's Alertbox. Retrieved November 2, 2010, from http://www.useit.com/alertbox/children.html
[8] Johnson, R. R., Salvo, M. J., and Zoetewey, M. W. 2007. User-centered technology in participatory culture: Two decades "beyond a narrow conception of usability testing". IEEE Transactions on Professional Communication, 50(4), 320-332.
[9] Poore-Pariseau, C. 2010. Online learning: Designing for all users. Journal of Usability Studies, 5(4), 147-156.
[10] Seo, Y. J., and Woo, H. 2010. The identification, implementation, and evaluation of critical user interface design features of computer-assisted instruction programs in mathematics for students with learning disabilities. Computers and Education, 55, 363-377.
[11] Shneiderman, B. 1998. Designing the User Interface: Strategies for Effective Human-Computer Interaction (3rd ed.). Menlo Park, CA: Addison Wesley.
[12] Shapiro, A. M. 2008. Hypermedia design as learner scaffolding. Educational Technology Research and Development, 56, 29-44.
[13] Sullivan, P. 1989. Beyond a narrow conception of usability testing. IEEE Transactions on Professional Communication, 32(4), 256-264.