This is the accepted manuscript. The final publication is available at link.springer.com: Demmans Epp, C., McEwen, R., Campigotto, R., & Moffatt, K. (2015). Information practices and user interfaces: Student use of an iOS application in special education. Education and Information Technologies, 1–24. http://doi.org/10.1007/s10639-015-9392-6

Information practices and user interfaces: Student use of an iOS application in special education

Carrie Demmans Epp, Rhonda McEwen, Rachelle Campigotto & Karyn Moffatt

Abstract

A framework connecting concepts from user interface design with those from information studies is applied in a study that integrated a location-aware mobile application into two special education classes at different schools; this application had two support modes (one general and one location specific). The five-month study revealed several information practices that emerged from student attempts to overcome barriers within the application and the curriculum. Students engaged in atypical and unintended practices when using the application. These practices appear to be consequences of the user interface and of the information processing challenges faced by students. Abandoning activities was a strategic choice and an unanticipated information practice associated with the application's integration into lessons. From an information processing perspective, it is likely that students reinterpreted the location mode as housing application content rather than being location specific, and the information practice of taking photos emerged as an expressive use of the device when an instrumental task was absent. Based on these and other emergent practices, we recommend functionality that should be considered when developing or integrating these types of applications into special education settings, and we seek to expand the traditional definition of information practice by including human-computer interaction principles.

Introduction

The mainstream accessibility of touch-input mobile devices has created a substantial market for mobile applications. In 2013, there were over two million applications available for iOS or Android devices (Ingraham 2013; AppBrain 2013). Moreover, the use of mobile applications in school settings is gaining popularity among teachers and administrators (Ally 2009), who are balancing keeping the curricula current with parent expectations and student interest in new media (Du, Sansing, and Yu 2004; Windschitl and Sahl 2002). This is especially true in special education, where some of this interest may be due to the general popularity of iOS devices and a desire to use mainstream devices rather than specialized devices that have limited functionality and are costlier (Goggin & Newell, 2003). It may also be attributable to the general population's use of such devices and the desire of special education students to fit in with those around them (Kim-Rupnow and Burgstahler 2004; Ludlow 2001). Interest in using technology in special education has also been growing, especially as a means to compensate for communication deficits (Ludlow 2001; Tentori and Hayes 2010; Turnbull 1995). As these studies have shown, focusing on the experiences of special education students can highlight device-user interaction issues that are hard to identify when observing typically developing students: typically developing students may accommodate challenging human-computer interfaces, while those with intellectual, physical, or cognitive impairments exhibit alternative responses that flag interaction obstacles (Campigotto, McEwen, and Demmans Epp 2013). While studies of the use of touch-input mobile devices in special education classrooms are only just emerging, the focus has largely been on how adaptive interfaces - that is, technology interfaces that can offer alternative input/output or otherwise learn from users with atypical learning abilities - can improve learning outcomes for students in special education classes. The United Nations Educational, Scientific, and Cultural Organization (UNESCO) published a highly cited report in 2000 titled "Information and Communication Technology in Special Education" (Carey, Evreinov, Hammarstrom & Raskind, 2000). This report represented the first comprehensive survey of education technology applications used in special education. It highlighted the role that graphical user interfaces can play in improving the communication and accessibility of text or writing for users with learning disabilities. Following this report, several research articles and collections have described the development of applications for special education (Hirotomi, 2007; Fernández-López, Rodriguez-Fórtiz, Rodriguez-Almendros & Martinez-Segura, 2012; Starcic, Cotic & Zajc, 2013; Mateu, Lasala & Alamán, 2014; Miesenberger, Fels, Archambault, Penaz & Zagler, 2014), mainly from computer science, educational research, human-computer interaction, or learning science perspectives. In almost all of the existing literature, there is an objective to create applications that will improve accessibility and aid the learning outcomes of students in special education. We are interested in expanding the research trajectory of touch-input mobile device use in special education from an accessibility lens to include a more critical examination of the experiences of students as these technologies are deployed; that is, we are interested in the user experiences and practices that emerge following the introduction of these technologies.
In particular, we bring a few novel aspects to the discussion: a) an information practice and user interface conceptual framework; b) a focus on location-aware applications, since they employ spatial information, such as maps, and can provide orientation data that assists users across contexts; and c) a detailed analysis of the challenges that arise, with recommendations for future development and deployment of these technologies in special education. This paper presents findings from a study of students who used the MyVoice application on iOS devices while attending special education classes in schools in Toronto, Canada. While we use this particular application for our analysis, we believe that the findings and recommendations from the study are applicable to many other applications currently being used in classrooms worldwide. In the study, we asked two research questions: 1) how did students use the location-aware support application? and 2) in what ways did specific aspects of the user interface influence student information practices? For question 1, we were interested in student choices, indicated preferences, and emergent or unanticipated application uses. For question 2, we explored the relationship between the application's user interface and its consequences for user actions; specific actions that could be attributed to the application design were identified, and the interface's role in subsequent actions was considered.

Conceptual Framework

Information practice is a concept in information studies that suggests user actions are not driven solely by internal cognitive processes (Meyers, Fisher, and Marcoux 2009; Wilson 2000), but that user actions should be analysed within a broader context that includes factors external to the user's immediate perception (Caidi and Allard 2005; McEwen and Scheaffer 2013). Therefore, when someone is observed using information, we look at the consequences for user action that derive from other persons or objects in the immediate environment, even if the user cannot articulate the effect of external entities on his or her actions. Savolainen (2009) applied this lens to user actions when positing that observed actions can be understood as processes that users undertake when interacting with information.

Human-computer interaction (HCI) explores the attempted communication between two powerful information processors, human and computer, over a narrow-bandwidth, highly constrained interface (Tufte 1989). HCI thus considers the user interface a major contributor to system success, and a fundamental goal of HCI is to increase the useful bandwidth across that interface (Jacob 1994). However, this does not always occur, and user access to information via the interface is often diminished. Users often cope with interface barriers by consciously or unconsciously employing strategies to counteract information loss. Through the process of appropriation, users adopt and adapt technologies to fit within their existing practices and transform those practices to fit the technology (Dourish 2003). This allows atypical user actions to be viewed as information practices composed of strategies invoked to overcome user-interface obstacles while managing information. The linguistic form of textese that emerged when users exchanged information through the character-limited interface of SMS messages on mobile phones is one example (Mose 2013). This suggests a conceptual framework that relates information (i.e., the content resulting from communication between the user and system), user interfaces (i.e., media), and user information practices (i.e., strategies). We apply this framework to observations of special education students using a touch-input mobile application. In so doing, we seek to expand the traditional definition of information practice by including HCI principles.

Methodology

This five-month study was conducted in 2011 using iOS devices in two Toronto-area public schools with students in grades 7 through 12. Data were gathered from demographic information profiles, interviews, and application usage logs, constituting a mixed-methods approach that is described in more detail below. Working with the support of the schools' principals and ethics boards, we engaged with the teachers of one classroom in each school. This allowed for the study of a total of 23 students aged 12 to 21. Both classrooms were identified as Special Education classes by the Ontario Ministry of Education, and students were identified as having intellectual and/or cognitive exceptionalities that require additional support and differentiation within the curriculum to support their success. Both classrooms fall under the jurisdiction of the Toronto District School Board, which runs several types of special education programs. For our study, we investigated classrooms running Intensive Support Programs (ISPs), where there is a lead teacher, educational assistant(s), and middle-school students in a 1:8 teacher/assistant-to-student ratio. This low ratio is typical of ISPs in the board and has had positive outcomes for classroom management (TDSB, 2013). iOS devices were introduced to the students in both schools as classroom tools for the first time during this study.

Demographic Information Profiles

Teachers completed anonymized demographic profiles for participating students. These included each student's ethnicity, sex, and official diagnosis. Information about prior student experience with other support tools and iOS devices was collected, and teachers conducted a brief assessment of each student's social and communication skills. This profile of the student's communication needs and abilities included information on student speech and language difficulties as well as any behavioral and attention-based challenges that students may have had. The information corresponds to that found in students' individualized education plans (IEPs), which describe the accommodations and services needed by each student. An additional checklist, informed by the characteristics of students with learning difficulties, was completed by teachers and covered students' social abilities, verbal skills, language development, and reading comprehension.

Interviews

Individual, semi-structured interviews were conducted with the teachers at both schools. Each teacher was interviewed twice: a third of the way into the study period and at the end of the study. The interview script contained 15 questions. These included questions toward the development of a baseline of the class (e.g., questions on the level of social interaction typically observed in the class; the types of augmentative communication devices and strategies employed by the teacher and assistants; and their general expectations of using a mobile device and application with the class), followed by several open-ended questions to collect data on the teacher's observations after the introduction of the iOS application (e.g., questions on observed differences in social interaction; student engagement and/or motivation with the application's vocabulary-based activities; and questions on planned and unplanned activities involving the iOS application, and any consequences for lesson planning). Teacher 2 independently chose to conduct a group interview with his students. He asked what they enjoyed and what they had difficulties with when using MyVoice; all student responses were anonymously recorded on a single device and transcribed.

Application Usage Logging

Every interaction that users had with the application was logged. This logging captured each user action within the system and included navigational actions (e.g., switching modes, viewing a category, or viewing words) as well as those intended to support cognition or communication (e.g., selecting a word to be spoken).
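The style of logging described above, where each semantic user action is recorded as an event, can be sketched roughly as follows. The action codes, field names, and helper function here are illustrative assumptions, not MyVoice's actual schema:

```python
# Sketch of semantic-level usage logging: each meaningful user action
# (not each raw tap or keystroke) is appended to a per-session log.
import time

# Hypothetical action vocabulary; the real system's codes are not published.
ACTIONS = {"OPEN_APP", "SWITCH_MODE", "VIEW_CATEGORY", "VIEW_WORD", "SPEAK_WORD"}

def log_action(log, user_id, action, detail=None):
    """Append one semantic event to the usage log."""
    assert action in ACTIONS, f"unknown action code: {action}"
    log.append({"user": user_id, "action": action,
                "detail": detail, "ts": time.time()})

log = []
log_action(log, "S1_D", "OPEN_APP")
log_action(log, "S1_D", "VIEW_CATEGORY", "Mean, Median, Mode")
log_action(log, "S1_D", "SPEAK_WORD", "median")
```

Logging at this level of abstraction means analysis can count communicative acts directly, without having to reconstruct them from low-level click streams.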

The MyVoice Application

The MyVoice application has been iteratively evaluated and refined (Demmans Epp et al. 2011). Early development was user-centered and involved the continued use of the application by an adult with aphasia, a language disorder that can disrupt reading, writing, and speaking (Aphasia Institute 2003). This target user employed the application to support his communication as he went about his daily routines, and he provided the development team with regular feedback about the application's design and functionality. The developers then adjusted the application and gave their tester the updated software. Feedback was also solicited from those who worked with populations who face communicative challenges and, where appropriate, their suggestions were incorporated. Beyond this user testing, discount evaluation methods were used: specialists in human-computer interaction stepped through the application and identified potential usability challenges by applying the Gestalt principles (Mullet and Sano 1995) and Nielsen's heuristics (1994). Even though tools like MyVoice are commonly used by individual special education students, the MyVoice interface had not been evaluated for its ease of use with this population prior to this study. Moreover, MyVoice had not been designed for use in a classroom setting. We, therefore, set out to explore its repurposing to support a special education population, since these students could benefit from its use in classroom settings and teachers are interested in trying new methods that might support student needs.

User Interface

The MyVoice application is a dual-interface (web and mobile) tool that was designed to support communication. The web interface is intended to allow for the creation, editing, and organization of support materials, whereas the mobile (iOS) interface is intended to enable the delivery of those materials. The use of both interfaces enables users to take advantage of the strengths of different form factors: the larger screens and input capabilities of laptops and desktops can ease content creation, while the portability of a small mobile device is helpful for content access. Moreover, this separation of functionality enables caregivers and helpers to assist with the setup and administration of the application even when the user is not present. The web-based interface allows students or teachers to enter and organize support materials (i.e., words or phrases) into collections of vocabulary items. The entered data is then synchronized with the application that is installed on the student's device. This ensures that the same collection of vocabulary items is available via both interfaces. The ability to create collections of support materials and distribute them from a remote location allows teachers to provide students with new vocabulary items even when they do not have physical access to student devices. The iOS device interface allows users to interact with previously entered vocabulary by navigating through a hierarchical or location-based organization. Once the user has found a vocabulary entry, it can be selected by touching the device's screen, and the vocabulary entry will be verbalized using text-to-speech. The mobile application runs on iOS devices that enable user mobility and flexible support. However, the physical dimensions of iPhones (11.5 x 6.2 x 1.2 cm) and iPod Touches (11.1 x 5.9 x 0.7 cm) and their delicate nature can present challenges for some users. Both interfaces provide the ability to associate an image with a vocabulary entry. However, the iOS device interface only allows the user to add an image to a previously existing word or phrase. The user does this by selecting the desired vocabulary item and then photographing something within his or her environment.
In contrast, the web-based interface allows users to create and organize vocabulary; users can add new words for any images to which they have access by browsing through image files on their computer and uploading those images so that they can be associated with a specified vocabulary entry. Users can also identify locations and associate words with those locations. This location-aware functionality exploits the Global Positioning System (GPS) information provided by the devices that we used, as well as by many other modern mobile devices.
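One plausible way such location-aware retrieval can work is to tag each saved location with a GPS coordinate and surface its vocabulary when the device is within some radius. The following sketch is a hypothetical illustration (the coordinates, radius, and data layout are assumptions, not MyVoice's implementation):

```python
# Hypothetical location-aware vocabulary lookup: each saved location has a
# GPS coordinate and a flat word list; the nearest location within a radius
# determines which words are shown.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def words_for_position(locations, lat, lon, radius_m=200):
    """Return the word list of the nearest saved location within radius_m."""
    best = None
    for loc in locations:
        d = haversine_m(lat, lon, loc["lat"], loc["lon"])
        if d <= radius_m and (best is None or d < best[0]):
            best = (d, loc)
    return best[1]["words"] if best else []

# Example saved location (coordinates are illustrative).
library = {"name": "Library", "lat": 43.6532, "lon": -79.3832,
           "words": ["fiction", "biography", "librarian"]}
```

With this structure, a device near the saved "Library" coordinate would retrieve that location's flat word list, while a device elsewhere would retrieve nothing.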

Information Organization

MyVoice provides communication and cognitive support via two modes: word view and place view (see Figure 1). Both modes provide access to words and phrases. However, the organization of words differs between modes. Where word view allows for the deep nesting and organization of words and phrases, place view enforces a flat organizational structure.

Figure 1 Word view for one of the lessons (left) with identifying information obscured, the expanded view of the "Mean, Median, Mode" category (centre), and an example of place view (right); this example is not from our data but was chosen to show the interface.

Word view supports navigation through hierarchically organized sets of vocabulary: users can tap on categories to navigate deeper within the hierarchy or tap on individual items to have them verbalized. Place view supports navigation through a location-based organization of vocabulary. It is intended to provide fast access to the vocabulary that is relevant to a particular location and is the key functionality that distinguishes MyVoice from other commercially available communication support tools. Once a location has been selected, all of the words that have been associated with that location are visible; there is no hierarchy. The verbalization of words and phrases is performed using the same actions as those required in word view; the user taps on the item that he or she wishes to have verbalized.
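The structural contrast between the two modes can be made concrete with a small sketch. The category names and entries below are illustrative (loosely based on the lesson in Figure 1), not the application's actual data:

```python
# Word view permits arbitrarily deep nesting of categories; place view keeps
# one flat word list per saved location. Names and entries are illustrative.

word_view = {
    "Mean, Median, Mode": {          # a category containing leaf entries
        "mean": "the average of a set of numbers",
        "median": "the middle value of an ordered set",
        "mode": "the most frequent value",
    },
}

place_view = {
    "Library": ["fiction", "biography", "librarian"],  # flat list, no hierarchy
}

def flatten(tree):
    """Collect every leaf vocabulary entry reachable in a word-view hierarchy."""
    items = {}
    for key, val in tree.items():
        if isinstance(val, dict):
            items.update(flatten(val))  # descend into a sub-category
        else:
            items[key] = val            # leaf entry
    return items
```

The trade-off this models: word view can organize large vocabularies but requires navigation to reach a leaf, while place view trades organization for immediate, single-tap access.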

Research Sites and Participants

Two research sites were selected for this study: one class from each of two public schools in the Toronto area. While this choice was purposive, it was also convenient: both schools were recommended by the principal of a school that had previously participated in research conducted by the lead researcher. That principal facilitated the recruitment of both sites by introducing the researchers to the principals of those schools. Moreover, these schools represent two types of special-education environments common within the provincial school boards in Ontario, Canada: all of the students at School 1 had been designated as special needs, whereas School 2 had some classrooms that were dedicated to supporting students with special needs with the aim of re-integrating those students into the main academic stream. We use the terms School 1 and School 2 throughout the paper to facilitate reference to these two sites (see Figure 2).

Figure 2 Participant diagnoses heat map (darker colors indicate increased incidence)

A two-period, non-credit course for girls aged 17 to 21 was selected from School 1. This class had 1 teacher, 2 educational assistants (EAs), and 15 students. Ten of these students were included in the study; they had difficulty understanding and interpreting commonly used language. The curriculum for this course, entitled "The World of Work", has the goal of exposing students to different aspects of work through job shadowing, co-operative placements (co-op), and guest speakers. Each student had an IEP that identified her as having abilities that deviated from typical development. This included diagnoses of Down's syndrome, various learning disabilities (LD), Autism Spectrum Disorder (ASD), and Mild Intellectual Disability (MID). Some stayed in class all semester while others participated in co-op placements.

A combined seventh and eighth grade class that had male and female students was selected from School 2. This was an intensive support class that targets students who have learning disabilities and who demonstrate a significant discrepancy between average or better intellectual ability and lower academic achievement. The class had 13 students, 1 teacher, and 1 EA; the EA was present during the classes for various subjects, including geography, writing, reading, and math. The teacher described his students as needing additional support communicating, specifically with respect to their ability to produce language (i.e., write, speak, or articulate their message). School 2 was recruited to compare experiences, gain the perspective of a new teacher, and evaluate the application's integration into a class by enabling a new feature that allowed the teacher to create and share collections of vocabulary items.

Study Implementation

The devices and application were introduced to both research sites using a similar approach. After receiving research ethics board approval for the study, the research team met with school principals and teachers to describe the study at a high level. Following this initial consultation process, consent was obtained from three parties: principals, teachers, and students. Students opted in to the study by completing and returning an informed consent form that had also been approved by their parents. Students were not excluded from learning opportunities if they did not consent; only their data were excluded. Voluntary consent was also requested from any EA who was present during the study. All invited students consented to participate; however, one student, S1_H, was unable to fully participate because of fine-motor problems that made using the device difficult. This student remained with the class during exercises and was provided with teacher-designed alternative learning materials, rather than the iOS device and application. Both classrooms were provided with iPhone and iPod Touch devices on which the MyVoice application had been installed. The schools supplied a computer in the participating classrooms so that participants could access the web interface. School 1 was provided with four iPhones and six iPod Touch devices. School 2 received six iPhones and seven iPod Touch devices. A random subset of students at both schools was assigned the iPhones, which had vibration output (haptic feedback) enabled. All other students received an iPod Touch, which did not support this feature. The random distribution of the application on haptic-feedback-enabled devices was meant to help answer research questions relating to how the use of vibration affects patterns of application use.
We were specifically interested in exploring whether the availability of haptic feedback helped increase the amount of information that these students received and understood. Even though we randomly assigned participants to a haptic feedback condition, we allowed students who had the haptic-feedback feature enabled to turn it on or off via a configuration tool, since this type of information can reinforce behaviors, which has the potential to be disruptive. We wanted to allow the teachers to change a student's feedback type if the teacher felt that it may have been harming learning. We, therefore, tracked the status of this feedback feature. Each student had an individual account within the application and was identified by his/her username. Usernames were pre-created and anonymous to protect student privacy. At the request of the participating teachers, measures were taken to minimize distractions for students and to restrict access to internet browsers, games, music, and other applications unnecessary for the study. The intent was to align the use of the devices with the existing curriculum in so far as possible; teachers were given sample lesson plans and training on both the device and application but were afforded the freedom to use the device as frequently as they felt was appropriate and in any manner that met their needs. Furthermore, teachers were encouraged to develop curriculum-based lesson plans that integrated application use. Additional researcher support was given to School 1 since, at that time, adding vocabulary to each student's account was a tedious task beyond what could be added to a teacher's daily workload. The ability to batch upload vocabulary to all devices at once was added before School 2's participation began. This reduced teacher workload to a reasonable level. Following the completion of each school's participation, the devices were collected and all identifying information was deleted. Students were given a copy of any photographs that they had taken before our records of those images were deleted. Student and teacher behaviors were identified within the data with a focus on anything that contributed to or hindered the integration of the mobile application or device into a special needs classroom. Given the application's focus on supporting communication, particular attention was paid to information, social, and communication practices as well as teacher and student motivation and behaviors when using the technology.

Classroom Integration of MyVoice

To better situate the interpretation of the results, we first detail how each teacher integrated MyVoice into his or her lessons by describing a lesson. School 1 typically used the devices to document the fieldtrip activities of students. In one lesson, students visited a local park and took pictures of each other, man-made structures, flora, and fauna. This type of activity was not intended to support particular learning goals, but the teacher found that the students enjoyed taking photos, which helped keep them engaged. In another lesson, students went to the library, where they used MyVoice to categorize different information about the librarian, such as his name and phone number. Teacher 1 also used the hierarchical organization of vocabulary to support student understanding of the different types of literature available: fiction, children's literature, and biographies were among the selected categories. Students were expected to take pictures of the books best suited to each category by navigating into that category, finding one of the listed books in the library, and photographing it. The teacher at School 2 more actively integrated MyVoice into his courses, where lessons were confined to the school. Aside from allowing students to explore the device, take pictures, and practice using words along with the device, the teacher planned a math lesson where MyVoice was meant to support student learning about measures of central tendency. He created categories for 'Mean', 'Median', and 'Mode' and added pictures to convey the meaning of the associated concept, since he felt this would help students remember the new terms and trigger word meanings when students did not have access to the device. Once students chose a category, they were provided with the definition. The application would read the definition aloud to aid students whose reading comprehension was weaker.
Students were also allowed to use MyVoice to help them complete later classroom activities and worksheets.
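For readers less familiar with the three measures the lesson targeted, they can be computed directly with Python's standard library (the score list below is fabricated for illustration):

```python
# The three measures of central tendency from Teacher 2's math lesson,
# computed over an illustrative list of test scores.
import statistics

scores = [70, 85, 85, 90, 100]

mean = statistics.mean(scores)      # average: sum of values / count
median = statistics.median(scores)  # middle value of the ordered list
mode = statistics.mode(scores)      # most frequently occurring value
```

For these scores the mean is 86, while both the median and mode are 85, since 85 is the middle value and also appears twice.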

Results and Discussion

Data coding

The application usage log data were a tremendous source in this study. While the user demographic profiles and interviews provided rich context and allowed before-and-after comparisons to take place, the application usage log data were a detailed account of each participant's activities and were independent of the human bias that often confounds mobile media research reliant on self-report data (Boase & Ling, 2013). The log data were collected through the MyVoice application. The application logged each action that could be performed. Each action was assigned a code, which was logged whenever the user performed that action. This was not done using the more traditional approach to user activity logging that tracks every single click, including the sequence in which keys were pressed. Rather, logging was done at the semantic level whenever a user performed an action or a step in a more complex action. For example, a user must follow a series of steps when creating a location. This involves indicating that they are interested in creating a location, searching for that location on a map, identifying the correct location on the map, giving that location a meaningful name, and saving the location. The application, therefore, logged each of these interactions. For simpler interactions, such as having a word read aloud, only one event would be logged. These logs were then transferred to the server, where they were checked for inconsistencies before being analyzed. Interviews were transcribed, and codes were manually developed from the identification of patterns that corresponded to our research questions. For example, teachers' accounts of students' preferred activities on the devices led us to code for photo taking and saving, which resulted in the emergence of the information practice theme 'image capture', relevant to the second research question. Again from the interviews, we developed codes describing the manner in which students navigated the application; this coding resulted in the identification of the information practice themes 'mode switching' and 'searching', relevant to the second research question. The demographic information profiles for the participants were transferred to SPSS and were available for descriptive statistics used in the reporting of the results.
The results are organized by research question, with the analysis of the application’s use being subdivided into general application use, the logging of learning activities, and the influence that haptic feedback had on student actions. After presenting our analysis of the data with respect to application use (i.e., research question 1), we explore the implications that the user interface had for student information practices (i.e., research question 2).

Research Question 1 – General Application Use The following are results to the first research question: how did students use the location-aware communication application? Most results are presented in aggregate form. However, there are places where highlighting individual differences is important. In these cases, a code that starts with the school (i.e., S1 or S2) and ends with a letter that identifies the student is used (e.g., the student who had device D at School 1 would be identified as S1_D). The actions of opening the application and using it to verbalize vocabulary were highly correlated (see Table 1) as were opening the application and performing different forms of navigation within the application. These relationships indicate that the application was, at least, partly used, in a manner that is consistent with other communication support tools where users will employ the application to find words from various categories and combine them in an attempt to communicate their intended message. Students at School 2 also appreciated the use of the text to speech to verbalize learning materials “because it like sticks in your head, so [Teacher 2], my teacher doesn’t have to say it and re-say it to everybody in the class which takes energy sometimes.” However, interviews with students and the teacher at School 2 indicate that the rate of speech of the text to speech engine was too fast for some. Students reported that they would repeat verbalizing a vocabulary entry because they

did not understand it the first or second time that they heard it. Looking at the types of activities that students performed, a considerable portion of their actions was dedicated to information seeking practices, such as navigating in and out of categories, looking at the detailed view of a vocabulary entry, and switching between the word and location views (see Table 2). These types of activities were generally expected. However, the information seeking practices that related to the use of both views (i.e., the flat vocabulary organization in place view and the hierarchically organized word view) were unexpected, especially in School 2 where no field trips were planned and students were not expected to use the location view. Further discussion of the relationships that are described in Table 1 will be provided as we discuss student usage of different application features. In particular, we will highlight how these relationships indicate students' ability to manage the different methods (i.e., place and word view) that the application provides for organizing support materials. We will also detail what these relationships tell us about application usage patterns.

Table 1 Correlated User Actions within the Application. Commonly correlated activities included opening the application and different content navigation activities.

Opened Application / Verbalized Vocabulary in Word View (r = 0.849**)
  Typical usage scenario: The participant started the application. It opened to the category of words where the user was last, and the user tapped on a word in order to hear that word.

Opened Application / Verbalized Vocabulary in Location View (r = 0.699**)
  Typical usage scenario: The participant started the application. It opened and showed the words that are associated with the last location that the user had open. The user tapped on a word in order to hear that word.

Opened Application / Navigated Into a Category (r = 0.889**)
  Typical usage scenario: The participant started the application. It opened to the category of words where the user was last. The user sees a sub-category that might contain the word(s) that s/he wants and taps on that category to see which words it contains.

Opened Application / Navigated Out of a Category (r = 0.887**)
  Typical usage scenario: The participant started the application. It opened to the category of words where the user was last. The user sees that they are in a category that does not contain the desired word(s) and taps on a parent category to see which words it contains.

Opened Application / Changed Views (r = 0.893**)
  Typical usage scenario: The participant started the application. It opened to the view (location or word) where the user was last. The user does not see what s/he wants and thinks that s/he knows where the words are in the other view. The user taps the button to switch views.

Opened Application / Viewed Vocabulary Details (r = 0.922**)
  Typical usage scenario: The participant started the application. It opened to the category of words where the user was last. The user sees the word that s/he wants but cannot see all of the information. So, s/he presses on the word to see more of the details; this includes displaying a larger version of the image that is associated with the word.

Opened Application / Searched for Location (r = 0.898**)
  Typical usage scenario: The participant started the application. It opened to the location search screen and a location search was automatically initiated.

Searched for Location / Navigated Into a Category (r = 0.729**)
  Typical usage scenario: The user searched for one of the locations that they had entered and selected it.

Searched for Location / Navigated Out of a Category (r = 0.733**)
  Typical usage scenario: The user did not see the word(s) that s/he wanted, navigated up to the parent category, and still did not see the word(s) or category of words that s/he wanted. The user then changed to the Location view and searched for a location with which s/he may have associated the desired vocabulary.

Searched for Location / Created a Location (r = 0.732**)
  Typical usage scenario: The user initiated a location search and did not find the location that s/he wanted. The user then created a new location that is associated with the user's GPS coordinates at the time the search was performed.

** Significant at the p < .01 level (2-tailed)

The correlation between opening the application and searching for a location (r = 0.899, see Table 1) alongside the minimal number of locations created (see Table 5) indicates that students allowed the application to turn itself off when in place view. They then turned the application back on, resulting in the location search being reinitiated automatically. Students also tried turning the application on and off when they had entered the location view or other screens that they may have struggled to use effectively (see Table 3 and Table 4). This could be interpreted as an attempt to reset the device, much like turning a computer off does, except the application stored the current state so that the user could return to it.
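Correlations of this kind can be computed from per-student counts of each logged action. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length count series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-student counts of two logged actions.
opened_app = [12, 30, 7, 45, 22, 16]
searched_location = [3, 9, 2, 14, 6, 5]
print(round(pearson_r(opened_app, searched_location), 3))
```

For counts that rise and fall together across students, as the actions in Table 1 do, the coefficient approaches 1.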

Table 2 Student actions within the application and their focus. Action foci: Information Seeking, Communication, Content Creation / Modification, Mode Management.

Student Action                          School 1 Mean (s.d.)   Total   School 2 Mean (s.d.)   Total   Overall Total
Viewed Vocabulary Details               131.00 (58.06)         1310    23.63 (17.52)          378     1688
Navigated Into a Category               325.60 (166.84)        3256    66.00 (34.57)          1056    4312
Navigated Out of a Category             282.40 (140.20)        2824    51.94 (28.53)          831     3655
Changed Views                           185.70 (92.73)         1857    17.63 (11.62)          141     1998
Searched for Location                   25.10 (14.15)          251     12.13 (7.82)           194     445
Initiated Location Creation             17.70 (17.08)          177     13.50 (9.41)           216     393
Created a Location                      8.83 (9.56)            53      4.81 (3.02)            77      130
Took a Picture                          51.50 (29.00)          515     11.75 (8.29)           188     703
Verbalized Vocabulary (Word View)       485.50 (250.73)        4855    282.00 (177.84)        4512    9367
Verbalized Vocabulary (Location View)   70.50 (56.22)          705     76.29 (82.34)          534     1239
Opened the Application                  222.90 (119.56)        2229    73.06 (44.57)          1169    3398
Closed the Application                  141.10 (87.48)         1411    59.25 (35.58)          948     2359

Table 3 Log file excerpt of student usage of place view. Note that within a period of less than 9 minutes (from 13:53:13 to 14:02:02) the student cycles the application on and off 8 times without any other action; this may indicate difficulty using the device effectively.

Participant   Student Action          Time                  View       Vocabulary Entry
S2_G          Switched Mode           2011-06-13 13:53:02   Location
S2_G          Find Places             2011-06-13 13:53:04   Location
S2_G          Add Place               2011-06-13 13:53:08   Location
S2_G          Session Ended           2011-06-13 13:53:13   Location
S2_G          Session Started         2011-06-13 13:54:28   Location
S2_G          Session Ended           2011-06-13 13:54:37   Location
S2_G          Session Started         2011-06-13 13:54:39   Location
...           6 session starts and 6 session ends ...
S2_G          Session Ended           2011-06-13 14:02:02   Location
S2_G          Session Started         2011-06-15 15:04:31   Location
S2_G          Verbalized Vocabulary   2011-06-15 15:04:58   Location   The mode is the number that occurs the most in a collection of numbers.
S2_G          Verbalized Vocabulary   2011-06-15 15:05:04   Location   The median is the middle number in a given sequence of numbers. When there is an even number of numbers in the collection, take the middle two numbers and average them to discover the median number.
S2_G          Verbalized Vocabulary   2011-06-15 15:05:08   Location   Mean is usually known as the word average. You find an average by taking the total for a set of numbers divided by the number of numbers in the collection.
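Rapid on/off cycling of the kind shown in Table 3 can be flagged automatically from the session timestamps. A minimal sketch; the log, the minimum number of cycles, and the time window are all illustrative:

```python
from datetime import datetime, timedelta

def cycling_episodes(events, min_cycles=4, window=timedelta(minutes=9)):
    """Flag runs of rapid session restarts, a possible signal that the
    student was cycling the application on and off."""
    starts = [datetime.fromisoformat(t) for action, t in events
              if action == "Session Started"]
    episodes, i = [], 0
    while i < len(starts):
        # Extend the run while restarts stay within the time window.
        j = i
        while j + 1 < len(starts) and starts[j + 1] - starts[i] <= window:
            j += 1
        if j - i + 1 >= min_cycles:
            episodes.append((starts[i], starts[j], j - i + 1))
        i = j + 1
    return episodes

# Hypothetical log: seven restarts within six minutes, then one isolated start.
log = [("Session Started", f"2011-06-13 13:{53 + k}:30") for k in range(7)]
log += [("Session Started", "2011-06-15 15:04:31"),
        ("Verbalized Vocabulary", "2011-06-15 15:04:58")]
print(cycling_episodes(log))  # one episode covering the seven rapid restarts
```

Flagging such episodes during analysis separates genuine usage sessions from resets, the distinction drawn in the discussion of Table 3.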

The correlation (see Table 1) between searching for a location and content navigation actions may also indicate that students attempted to use the place view to search through vocabulary or that they failed to find a desired vocabulary item and decided to change to the place view to try to find it. In this case, the application would initiate a location search if the user had not previously specified his or her location. However, the number of failed or abandoned attempts at creating a location (263) and the correlation between searching for a location and creating one (0.732) may indicate that the user interface in the place view presented students with some challenges. It may also indicate that students liked to look at the map but did not necessarily want to save their location or view the learning materials that were associated with a location, which is partially supported by a School 2 student comment: "It is good to find the GPS location and for me it was good showing how I can use the device in general educational way". Furthermore, it appears that participants stayed within a location (see Table 3) once they had selected one. Subsequent mode changes were between the limited categories that had been associated with a location and the vocabulary hierarchy present in the word view.

Table 4 Log file excerpt of view navigation behaviors that were typical of the group. Note the repeated cycling between modes and starting and ending of the session.

Participant   Student Action              Time                  View     Vocabulary Entry
S2_L          Switched Mode               2011-06-17 14:38:15   Places
S2_L          Switched Mode               2011-06-17 14:38:17   Words
S2_L          Switched Mode               2011-06-17 14:38:18   Places
S2_L          Session Ended               2011-06-17 14:38:34
S2_L          Session Started             2011-06-29 15:03:13
S2_L          Switched Mode               2011-06-29 15:03:20
S2_L          Navigated Into a Category   2011-06-29 15:03:22   Words    Show and Not/Tell

As Table 5 shows, the number of failed attempts at creating locations by students at both sites indicates that students were willing to experiment with this functionality. However, the average number of failed attempts (4.9) at School 1 and the fact that only 1 student (S1_N) at this school was able to successfully create a location illustrate the difficulty that this feature posed, either conceptually or as a result of its design. In contrast, only 1 student (S2_G) at School 2 was unable to create locations. The difference in student ability to create locations based on school membership is significant at the .05 level as measured by an independent t-test. This indicates that the design of the location creation process and user interface is not generally problematic, but the current design may inhibit the use of this feature by some users, based on the cognitive challenges they face.

Table 5 Location-creation descriptive statistics

Research Site   Failed Location Creation M (SD)   Locations as Categories M (SD)   Locations Created M (SD)
School 1        4.90 (7.81)                       0.00 (0.00)                      0.10 (0.32)
School 2        1.81 (1.38)                       0.63 (0.96)                      1.69 (0.79)

Five School 2 students (S2_K, S2_P, S2_Q, S2_R, and S2_S) repurposed the locations that they had created as categories by flattening the hierarchy that was present in portions of the word view and assigning it to a location. For example, students might take all of the words from each subcategory (i.e., mean, median, and mode) and associate them with one location called school. In many cases, students flattened multiple hierarchies from the word view and assigned them to the same location.
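The flattening practice described above can be expressed as a simple traversal of the word view's hierarchy. A sketch using the mean/median/mode example; the data structure and names are illustrative, not the application's internal representation:

```python
def flatten_into_location(hierarchy, location_name):
    """Collect every word from a nested category hierarchy into a single
    flat list attached to one location, mirroring how some students
    repurposed locations as categories."""
    words = []

    def walk(node):
        for key, value in node.items():
            if isinstance(value, dict):
                walk(value)          # descend into a sub-category
            else:
                words.extend(value)  # a leaf: a list of vocabulary words

    walk(hierarchy)
    return {"location": location_name, "vocabulary": sorted(words)}

# Hypothetical hierarchy echoing the paper's math example.
math_words = {
    "Math": {
        "Mean, Median, Mode": {
            "mean": ["average", "arithmetic mean"],
            "median": ["middle number"],
            "mode": ["most frequent number"],
        }
    }
}
print(flatten_into_location(math_words, "school"))
```

The result is a single flat list under one location named "school", which is exactly the structure the students produced by hand.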

Research Question 1 – Application Use: Feedback Configuration

Approximately half of the students in each school were provided with a version of the application that incorporated haptic feedback. An analysis of the students' actions showed no difference in application usage between students who received haptic feedback and those who did not. Teachers at both schools reported that they did not change the status of the haptic feedback on student devices. However, the logs show that some students at School 2 changed the status of either their own feedback or that of another student (see Table 6). The haptic feedback settings were modified at least once for each student at School 2, and most of those whose devices did not have vibration capabilities only attempted to change this setting once. However, 2 students (S2_N and S2_O) reset the application to its original configuration. No such behavior was observed at School 1.

Table 6 Student changes to haptic feedback status for School 2. Rows marked (iPod) denote students with an iPod device, which did not support vibration feedback.

Participant     Vibration Status:  Start   End    No. Times Status Changed
S2_I                               On      Off    5
S2_J                               On      On     4
S2_K                               On      On     2
S2_L                               On      Off    3
S2_M                               On      Off    3
S2_H                               On      Off    3
S2_G (iPod)                        Off     On*    1
S2_N (iPod)                        Off     Off    2
S2_O (iPod)                        Off     Off    2
S2_P (iPod)                        On*     Off    1
S2_Q (iPod)                        Off     On*    1
S2_R (iPod)                        Off     On*    1
S2_S (iPod)                        Off     On*    1

*Although technically set to On, the device could not vibrate and changing the setting did not affect the application's functionality
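A summary like Table 6 can be derived mechanically from the logged toggle events. A minimal sketch; the students, event format, and counts here are hypothetical:

```python
def feedback_summary(initial_status, toggle_events):
    """Derive each student's start status, end status, and number of
    effective status changes from logged haptic-feedback toggle events."""
    summary = {}
    for student, start in initial_status.items():
        status, changes = start, 0
        for who, new_status in toggle_events:
            # Count only toggles that actually change the current status.
            if who == student and new_status != status:
                status = new_status
                changes += 1
        summary[student] = {"start": start, "end": status, "changes": changes}
    return summary

# Hypothetical configuration and toggle log.
initial = {"S2_I": "On", "S2_N": "Off"}
events = [("S2_I", "Off"), ("S2_I", "On"), ("S2_I", "Off"),
          ("S2_N", "On"), ("S2_N", "Off")]
summary = feedback_summary(initial, events)
print(summary)
```

The second hypothetical student ends where they started after two changes, the "reset to original configuration" pattern noted for S2_N and S2_O.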

Research Question 1 – Application Use: Media Creation

Participants used two approaches to taking photos during their course activities: the structured creation of learning materials and the unplanned recording of learning activities. The first tended to occur when students were engaged in prescriptive, teacher-led activities for which teachers had prepared vocabulary and students were required to photograph examples of that vocabulary. The second occurred during more exploratory classroom activities. In many cases, the unplanned recording of learning activities occurred alongside activities where students were expected to take pictures of vocabulary. While students took many photos, closer inspection of the logs reveals that students at both schools retook photos multiple times and that they took photos for fewer than 10 vocabulary entries (see Table 7). Contrast this with the over 100 distinct photos that were taken outside of the application by School 1 students and the approximately 20 photos that were taken outside the application by School 2 students. These student-initiated activities indicate increased engagement, and the dominance of this activity at School 1 fits with its less structured curriculum, where the teacher adapted activities to emergent student interests. It also indicates that the application did not fully support unstructured learning activities. The inability to directly add new learning materials via the mobile interface may explain this behavioral difference between the schools.

Table 7 Student photo taking practices within the application

            Photos Taken per Vocabulary Entry   Vocabulary Entries with Photos    Total Photos Taken
            Max.  Min.  Average (s.d.)          Max.  Min.  Average (s.d.)        Max.  Min.  Average (s.d.)
School 1    20    1     2.18 (2.52)             7     1     4.82 (1.94)           84    7     51.50 (29.00)
School 2    9     1     2.35 (2.29)             5     1     2.75 (1.24)           26    1     11.75 (8.29)

Research Question 2 – Interface Consequences for User Information Practices

In this section we discuss the second research question: in what ways did specific aspects of the user interface influence the information practices of the student users? Viewed from the perspective of our conceptual framework, the findings that can be derived from the above results are grouped into five sections: i) location creation and use, ii) mode switching, iii) searching, iv) image capture, and v) haptic feedback preferences.

Location Creation and Use

In a typical user scenario for the 'create a new location' activity, the user must complete several steps. The steps include initiating the creation of a new location by selecting the command to do so, entering text for the name of the location, planning for the types of images and text that would be useful in the new location, selecting or taking photos to be associated with it, and saving the new location. The correlation between

initiating the creation of a new location and the termination of the session (r = 0.667) indicates that students did not complete the intermediate steps, suggesting that they either turned off their devices or stopped interacting with the application for the period of time that is necessary to trigger the device's sleep mode. This is an atypical interaction, since all of the steps involved in location creation should take less than a minute to complete. This indicates that the location creation process was difficult for this population to understand or perform to completion while attending to both classroom activities and the application. Students may have abandoned the location creation activity because they experienced difficulty completing the steps. This could be the result of an interaction design where the number or sequence of steps was too complex for these users and undermined their goal pursuit. Since task complexity moderates the effect of goals on performance (Locke and Latham 2002), it is probable that the user interface had a moderating effect on students' ability or willingness to complete the activity. From an information processing perspective, we can also apply cognitive load theory (Sweller, Ayres, and Kalyuga 2011) and conjecture that the complexity of the actions and decisions that were required to complete this activity increased students' extraneous load such that they employed the information practice of abandoning the activity to reduce their cognitive load. In this case, abandoning the activity is a strategic choice for some students and is an emergent information practice associated with this activity.
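The abandonment pattern, where location creation is initiated but the session ends before a location is saved, can be detected directly from semantic logs. A minimal sketch; the action names mirror those in this paper's tables, but the log itself is hypothetical:

```python
def abandoned_creations(events):
    """Count sessions in which location creation was initiated but the
    session ended before a location was actually created."""
    abandoned, pending = 0, False
    for action in events:
        if action == "Initiated Location Creation":
            pending = True
        elif action == "Created a Location":
            pending = False            # the multi-step activity completed
        elif action == "Session Ended":
            if pending:
                abandoned += 1         # session ended mid-creation
            pending = False

    return abandoned

# Hypothetical log: one abandoned creation, one completed creation.
log = ["Session Started", "Initiated Location Creation", "Session Ended",
       "Session Started", "Initiated Location Creation", "Created a Location",
       "Session Ended"]
print(abandoned_creations(log))  # 1
```

Counting abandonments per student would make the strategic abandonment described above directly measurable rather than inferred from correlations alone.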

Mode Switching

As previously described, the application has two modes through which the user can access information: word view and place view. While it is possible to switch between modes when using the application, a classroom user would most likely employ only one mode to accomplish the activities of a specific interaction. Students were expected to stay in place view if they were in a location with which they had associated vocabulary. It was also expected that they would remain in word view during classroom activities. However, some School 1 students chose to use place view to help them navigate to their field trip destination and then switched to word view to access the speech functionality that would enable them to perform general communication tasks and their prescribed learning activities. When in place view, students' mode switching activities also exceeded interactions that were dedicated to creating locations or verbalizing support materials, suggesting that the students in this study were experiencing difficulty in managing or understanding the different modes. There are visual cues, both textual and graphical, to facilitate user understanding of which mode they are in. However, the data suggest that these interface design elements were insufficient to remove ambiguity for students. As a result, switching modes did not serve the intended transitional role of allowing users to move between modes; rather, switching modes could be viewed as an activity in and of itself. Students may have employed mode switching as a strategy for navigating the content, an unanticipated search practice. From an information processing perspective, it is likely that students reinterpreted locations as categories housing content. We, therefore, surmise that students were building mental models that considered the modes as navigational categories instead of different functional views of the support materials.

Searching

Within information practice, searching is a core and analytically instructive activity that is observed in a variety of everyday environments and especially within digital spaces. In this application, users search by moving through hierarchies of nested categories with varying degrees of classification abstraction. For example, the category labeled 'Math' may have a lower-level sub-category

labeled 'Mean, Median, Mode' with a subcategory labeled 'The mean is usually known as', and items such as 'average' or 'arithmetic mean', with a photo accompanying the text label at each level. In a typical search scenario, users would start at a broader or more abstract level and then navigate, or drill down, into narrower related sub-categories. After arriving at the required or expected photo and completing the activity, users could then navigate, or drill, back up to broader levels and continue with further activities within the application. Students were expected to remain within a subset of categories for an extended period of time given the nature of classroom activities. Before we discuss the challenges that students faced when navigating through the vocabulary hierarchies, it is worth noting that even Teacher 2 took a little while to become comfortable with the hierarchical organization of the vocabulary that came preloaded on the application: "It took me a while to figure that out, but after I was like 'oh I like this' because it's like you go general and you go kind of more specific". Students performed asymmetrical searches: they drilled down into the hierarchy from broader categories to more discrete items but tended not to navigate back up, sequentially, through the hierarchy levels. After arriving at a lower level, students typically ended the application session or moved up several categories at a time. The lack of symmetry between navigating into and out of categories indicates that students used the home button to return to the top of the hierarchy or skipped levels in the hierarchy when navigating out of a sub-category. In the arithmetic mean example, a student might navigate directly up to the 'Math' category rather than first navigating up into the 'Mean, Median, Mode' category.
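This asymmetry can be quantified from the category depth recorded after each navigation event. A small sketch with hypothetical depth traces (depth 0 is the top of the hierarchy):

```python
def navigation_asymmetry(depths):
    """Given the category depth after each navigation event, count
    single-level moves in each direction and multi-level upward jumps
    (level skipping)."""
    down = up_one = up_skip = 0
    for prev, cur in zip(depths, depths[1:]):
        if cur == prev + 1:
            down += 1
        elif cur == prev - 1:
            up_one += 1
        elif cur < prev - 1:
            up_skip += 1   # jumped several levels, e.g. straight to the top
    return {"down": down, "up_one": up_one, "up_skip": up_skip}

# Hypothetical session: drill down three levels one at a time, then jump
# straight back to the top instead of climbing out level by level.
print(navigation_asymmetry([0, 1, 2, 3, 0]))
```

A symmetrical searcher would show balanced `down` and `up_one` counts; the students described above would show many `down` moves against few `up_one` moves and frequent `up_skip` jumps.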
It is also possible that students were experiencing difficulty recalling the path to return to the top of the hierarchy even though there were text-based indicators. This may indicate working memory issues: working memory is an information processing and cognitive psychology concept that considers the short-term memory capacity of all persons to be limited (Baddeley and Hitch 1974). One of the components of working memory, the visuo-spatial sketchpad, is assumed to be responsible for manipulating visual information. According to Baddeley, the visuo-spatial sketchpad plays a key role in helping people track the spatial relationships between objects as they move through an environment (McLeod 2008), and the visuo-spatial sketchpad may be taxed by MyVoice since it relies on visual and spatial interactions. In the case of students navigating hierarchies, working memory is at the heart of the information processing tasks, and the lack of easily interpretable visual information (i.e., the image that represented the category) may have put pressure on the visuo-spatial sketchpad and prevented student recognition of the categories and hierarchy structure. The students' choice to return to the home screen rather than navigate back up through the previous levels is an information practice strategy that was employed to relieve this pressure. Ending the session is another information practice that serves to reduce demands on working memory. However, further investigation into the relationship between user navigation practices and working memory would better illuminate the information practices of users and allow system designers to accommodate the variable working memory capacities of users.

Image Capture

Taking photos was a popular student activity and an integral part of interacting with the application, but atypical uses were also observed. When students used the application while performing unstructured tasks, they took and saved many photos but did not associate them with learning

materials. They used the device camera to take photos of classmates and themselves, and they captured images of the locations that they were in at the time. When teachers provided an application-specific task that involved taking photos, fewer photos were taken and saved. This may indicate that students interpreted image capture differently in the different usage contexts. Additionally, students did not have the ability to associate photos and words on the device itself; this function was performed online via a desktop computer to which students had limited access. This may have reduced the spontaneity of photo taking activities. Students would see something they liked and take a photo, but they then had to remember it later, when they had computer access, and type in a word to go along with the photo. This process may have involved too many steps to keep them interested, or it may simply have been too complicated. It may also suggest that the user interface of the application discouraged or restricted general image capture and channeled students into using the camera function within the application in a goal-directed manner. Moreover, students took and saved more photos outside the application when visiting a variety of locations. This may also suggest that students had prior experience with image capture using mobile devices and that they already had mental models to facilitate this practice. In addition, it appears that taking photos was an information practice that emerged as an expressive use of the device when an instrumental task was absent.

Haptic Feedback Preferences

To determine the extent to which haptic feedback (which vibrates the device when objects are selected on the screen) influenced student practices, we gave 43.5 percent (n = 10) of students devices with haptic feedback enabled and 56.5 percent devices without it. Students were not told whether or not they had a device with haptic feedback enabled. Two-thirds of the School 2 students who began the study with haptic feedback enabled (n = 4, see Table 6) figured out how to turn it off, so that 21 of the 23 students had haptic feedback turned off at the end of the study. It appears that students preferred to interact with the device and application without haptic feedback. Applying the framework, we can conjecture that students may have found the haptic feedback distracting when other sensory information (visual, audio, and tactile) was already being provided. Haptic feedback may have taxed student information processing by increasing the extraneous load on the cognitive system to a point where students had to make a decision to remove this sensory information. By turning haptic feedback off to reduce cognitive load, students regained control over the user interface and relieved pressure on their information processing. This does not suggest that haptic feedback will overburden information processing in all application contexts, but it indicates that haptic feedback has the potential to provide as much or more sensory information to the user as the visual and auditory modes do.

Opportunities Revealed through System Usage

An examination of the results related to student information practices offers insights into student experiences and engagement with the application and provides a basis for a discussion of the extent to which these findings may be related to our conceptual framework connecting information practices, user interfaces, and information processing. Even though many students struggled with the location creation process, creating and using locations was still of benefit to some. Students liked being able to see the map because "you can find where you are". Several students created locations and assigned vocabulary to those locations. In addition to students liking the location view and being able to repurpose it, students did not comment about managing the modes or

changing between them during the teacher-led interview. This lends weight to their not seeing a difference between the views. Moreover, the repurposing of the location view to support vocabulary navigation shows how students were able to personalize the application in a way that met their information seeking needs and reduced the cognitive load inherent to the word view's hierarchical organization. However, these types of features require more scaffolding if they are to be used effectively by all of the students in a class. The challenges that students faced when navigating through vocabulary may be partly due to the lack of training they were given in the vocabulary hierarchy's organization. Teacher 2 thought that allowing the students to enter and organize the vocabulary themselves may have helped with this problem, much in the way that the student repurposing of locations helped their information seeking practices. It may be that increased agency supported student information practices rather than the reduction of multiple levels of hierarchical information into a single level. While both views at times acted as information gatekeepers for some students, their use by others shows that the interface design can enable, hinder, or challenge users depending on their cognitive abilities. Based on this, the design of interfaces for neuroatypical users should allow for high levels of customization based on user abilities and preferences. The log files also revealed behaviors that were inconsistent with those demonstrated by users of other support tools. In some cases, the observed behaviors may have appeared because the hardware on which the application was running is capable of supporting behaviors that other support tools do not (e.g., taking pictures).
Student behaviors indicate that the application did not support desired functions, such as the noticing and recording of learning activities, which can benefit students and is supported by other mobile tools (Kukulska-Hulme and Bull 2009). We, therefore, recommend that designers of educational systems take full advantage of the platform’s ability to log learning activities through any combination of media including audio, visual (i.e., video or pictures), and textual methods. This can also be used to further support student cognition and recall. We would further recommend that learners be allowed to organize information in a structure that meets their information seeking needs. This may mean that users can organize support tools using a flat, graph, or hierarchical tree-like structure. It may even require that learners can access the information via different organizational structures based on their current preferences and the other demands that are being placed on their information systems. Application feedback, the features that are available to students, and the extent to which students can record learning activities via the application should also be configurable since this would allow both the student and the teacher to ensure that the features which are available to a particular student are appropriate to his or her abilities and the activities being performed.
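The recommendation that learners be able to access the same information through different organizational structures can be sketched as a single vocabulary store exposed through two views. All names and the data layout here are illustrative, not the application's actual design:

```python
class VocabularyStore:
    """One vocabulary store exposed through two organizational structures:
    a hierarchical word view and a flat, location-based place view."""

    def __init__(self):
        self.entries = []  # each entry: word, category path, locations

    def add(self, word, path, locations=()):
        self.entries.append({"word": word, "path": tuple(path),
                             "locations": set(locations)})

    def word_view(self, path):
        """Words filed at or below a category path in the hierarchy."""
        path = tuple(path)
        return sorted(e["word"] for e in self.entries
                      if e["path"][:len(path)] == path)

    def place_view(self, location):
        """The flat list of words associated with a location."""
        return sorted(e["word"] for e in self.entries
                      if location in e["locations"])

store = VocabularyStore()
store.add("average", ["Math", "Mean, Median, Mode"], ["school"])
store.add("median", ["Math", "Mean, Median, Mode"], ["school"])
store.add("pond", ["Science", "Field Trip"], ["park"])
print(store.word_view(["Math"]))   # ['average', 'median']
print(store.place_view("school"))  # ['average', 'median']
```

Because both views are projections of the same entries, a student can reach a word through whichever structure suits their information seeking practices, without the teacher maintaining duplicate content.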

Conclusion and Recommendations

The integration of a mobile support tool into the existing curricula of two special education contexts revealed student information practices. An action research approach was employed in which logs of user actions, student interviews, and teacher interviews were used to track and explain application usage. This showed the potential limitations of integrating mobile support tools into different types of special education programs. Students at both schools demonstrated agency by developing information practice strategies despite the information processing and user interface obstacles they faced. These strategies included the repurposing of locations as categories and mode switching to support information seeking.

These strategies also included the logging of unplanned learning activities via other device functions, since the application did not permit the impromptu recording of learning materials. These practices were identified and explained by applying a new conceptual framework that considers information use, information processing, and user interfaces in tandem. Following from these practices and other observed behaviors, several improvements can be made in the design and integration of these types of tools. Among them are: the ability to easily find learning resources, record learning materials and activities, and control different feedback features. It is also important to facilitate the creation of new material within the support tool. The information seeking challenges that students faced and the practices that emerged as a result of these barriers indicate that students should be given multiple paths to finding the same information. This need is demonstrated by students' flattening vocabulary hierarchies and saving them as a category that was associated with a location. By providing students with different paths, system designers enable students to exploit the information seeking practices that best suit them. The ability for users to control different aspects of the tool is essential to its continued use and classroom integration, as shown by student-initiated changes to haptic feedback settings as well as reports of students verbalizing the same word multiple times because the rate of speech was too fast for some to understand (Campigotto, McEwen, and Demmans Epp 2013). Students faced many challenges and employed various information practices to overcome the barriers that they faced while interacting with the tool.
Their ability to develop and employ strategies to overcome barriers that were due to the user interface design, instructional design, and information organization shows that these types of tools can be repurposed for supporting students in special education settings. The combined use of the study of user interactions and information practices through the deployment of a support tool in special education settings revealed how resourceful members of this population can be in overcoming the barriers that the integration of technology can present.

References

Ally, Mohamed (2009). Mobile Learning: Transforming the Delivery of Education and Training. Edmonton: AU Press.

Aphasia Institute (2003). "What Is Aphasia? | The Aphasia Institute." Aphasia Institute. http://www.aphasia.ca/aboutaphasia.html.

AppBrain (2013). "Number of Available Android Applications - AppBrain." AppBrain. http://www.appbrain.com/stats/number-of-android-apps.

Avison, David E., Baskerville, R., and Myers, M. (2001). "Controlling Action Research Projects." Information Technology & People 14 (1): 28–45.

Avison, David E., Lau, Francis, Myers, Michael D., and Nielsen, Peter A. (1999). "Action Research." Communications of the ACM 42 (1): 94–97. doi:10.1145/291469.291479.

Baddeley, A. D., and Hitch, G. (1974). "Working Memory." In The Psychology of Learning and Motivation: Advances in Research and Theory, edited by G. H. Bower, 8:47–89. New York: Academic Press.

Boase, J., and Ling, R. (2013). "Measuring Mobile Phone Use: Self-Report versus Log Data." Journal of Computer-Mediated Communication 18 (4): 508–519.

Caidi, Nadia, and Allard, Danielle (2005). "Social Inclusion of Newcomers to Canada: An Information Problem?" Library & Information Science Research 27 (3): 302–24. doi:10.1016/j.lisr.2005.04.003.

Campigotto, Rachelle, McEwen, Rhonda, and Demmans Epp, Carrie (2013). "Especially Social: Exploring the Use of an iOS Application in Special Needs Classrooms." Computers & Education 60 (1): 74–86. doi:10.1016/j.compedu.2012.08.002.

Carey, K., Evreinov, G., Hammarstrom, K., and Raskind, M. (2000). Information and Communication Technology in Special Education: Analytical Survey. UNESCO. http://www.iite.unesco.org/pics/publications/en/files/3214585.doc, last viewed March 2015.

Demmans Epp, Carrie, Campigotto, Rachelle, Levy, Alexander, and Baecker, Ron (2011). "MarcoPolo: Context-Sensitive Mobile Communication Support." In FICCDAT: RESNA/ICTA, 4 pgs. Toronto, Canada. http://web.resna.org/conference/proceedings/2011/RESNA_ICTA/demmans%20epp-69532.pdf.

Dourish, Paul (2003). "The Appropriation of Interactive Technologies: Some Lessons from Placeless Documents." Computer Supported Cooperative Work (CSCW) 12 (4): 465–90. doi:10.1023/A:1026149119426.

Du, Jianxia, Sansing, William, and Yu, Chien (2004). The Impact of Technology Use on Low-Income and Minority Students' Academic Achievements: Educational Longitudinal Study of 2002.

Dubé, Line, and Paré, Guy (2003). "Rigor in Information Systems Positivist Case Research: Current Practices, Trends, and Recommendations." MIS Quarterly 27 (4): 597–636. http://dl.acm.org/citation.cfm?id=2017204.2017209.

Fernández-López, Á., Rodríguez-Fórtiz, M. J., Rodríguez-Almendros, M. L., and Martínez-Segura, M. J. (2013). "Mobile Learning Technology Based on iOS Devices to Support Students with Special Education Needs." Computers & Education 61: 77–90.

Goggin, G., and Newell, C. (2003). Digital Disability: The Social Construction of Disability in New Media. Rowman & Littlefield.

Hirotomi, T. (2007). "Multifaceted User Interface to Support People with Special Needs." In Proceedings of the Second IASTED International Conference on Human Computer Interaction, 87–92. Anaheim, CA: ACTA Press.

Ingraham, Nathan (2013). "Apple Announces 1 Million Apps in the App Store, More than 1 Billion Songs Played on iTunes Radio." The Verge. http://www.theverge.com/2013/10/22/4866302/apple-announces-1-million-apps-in-the-app-store.

Jacob, Robert J. K. (1994). New Human-Computer Interaction Techniques. Human-Machine Communication for Educational Systems Design.

Kim-Rupnow, Weol Soon, and Burgstahler, Sheryl (2004). "Perceptions of Students with Disabilities Regarding the Value of Technology-Based Support Activities on Postsecondary Education and Employment." Journal of Special Education Technology 19 (2): 43–56. http://www.editlib.org/p/99229/.

Kukulska-Hulme, Agnes, and Bull, Susan (2009). "Theory-Based Support for Mobile Language Learning: Noticing and Recording." International Journal of Interactive Mobile Technologies (iJIM) 3 (2). doi:10.3991/ijim.v3i2.740.

Lau, F. (1997). "A Review on the Use of Action Research in Information Systems Studies." In Information Systems and Qualitative Research, edited by Allen S. Lee, Jonathan Liebenau, and Janice I. DeGross, 31–68. IFIP: The International Federation for Information Processing. Springer US. http://link.springer.com/chapter/10.1007/978-0-387-35309-8_4.

Locke, Edwin A., and Latham, Gary P. (2002). "Building a Practically Useful Theory of Goal Setting and Task Motivation: A 35-Year Odyssey." American Psychologist 57 (9): 705–17. doi:10.1037/0003-066X.57.9.705.

Ludlow, Barbara L. (2001). "Technology and Teacher Education in Special Education: Disaster or Deliverance?" Teacher Education and Special Education 24 (2): 143–63.

Mateu, J., Lasala, M. J., and Alamán, X. (2014). "VirtualTouch: A Tool for Developing Mixed Reality Educational Applications and an Example of Use for Inclusive Education." International Journal of Human-Computer Interaction 30 (10): 815–828.

McEwen, Rhonda N., and Scheaffer, Kathleen (2013). "Virtual Mourning and Memory Construction on Facebook: Here Are the Terms of Use." Bulletin of Science, Technology & Society, December, 64–75. doi:10.1177/0270467613516753.

McLeod, S. A. (2008). "Working Memory - Simply Psychology." http://www.simplypsychology.org/working%20memory.html.

Meyers, Eric M., Fisher, Karen E., and Marcoux, Elizabeth (2009). "Making Sense of an Information World: The Everyday-Life Information Behavior of Preteens." Library Quarterly 79 (3): 301–41.

Miesenberger, K., Fels, D., Archambault, D., Penaz, P., and Zagler, W. (Eds.) (2014). Computers Helping People with Special Needs: 14th International Conference Proceedings, ICCHP, Paris, France.

Mose, Norah (2013). "SMS Linguistic Creativity in Small Screen Technology." Research on Humanities and Social Sciences 3 (22): 114–21. http://www.iiste.org/Journals/index.php/RHSS/article/view/9564.

Mullet, K., and Sano, D. (1995). Designing Visual Interfaces: Communication Oriented Techniques. Englewood Cliffs, NJ: Prentice Hall.

Nielsen, J. (1994). "Heuristic Evaluation." In Usability Inspection Methods, edited by J. Nielsen and R. L. Mack, 25–62. New York, NY, USA: John Wiley & Sons.

Savolainen, Reijo (2009). "Information Use and Information Processing: Comparison of Conceptualizations." Journal of Documentation 65 (2): 187–207. doi:10.1108/00220410910937570.

Starcic, A. I., Cotic, M., and Zajc, M. (2013). "Design-Based Research on the Use of a Tangible User Interface for Geometry Teaching in an Inclusive Classroom." British Journal of Educational Technology 44 (5): 729–744.

Sweller, John, Ayres, Paul L., and Kalyuga, Slava (2011). Cognitive Load Theory. New York: Springer.

TDSB (2013). Special Education Report: Toronto District School Board. Special Education and Sections Programs, Toronto, ON. http://www.tdsb.on.ca/Portals/0/Elementary/docs/SpecED/SpecED_EducationReport.pdf, last viewed March 2015.

Tentori, M., and Hayes, G. (2010). "Designing for Interaction Immediacy to Enhance Social Skills of Children with Autism." In ACM International Conference on Ubiquitous Computing (Ubicomp '10), 51–60. Copenhagen, Denmark. doi:10.1145/1864349.1864359.

Tufte, Edward R. (1989). Visual Design of the User Interface: Information Resolution, Interaction of Design Elements, Color for the User Interface, Typography and Icons, Design Quality. Armonk, New York: IBM.

Turnbull, Ann P. (1995). Exceptional Lives: Special Education in Today's Schools. Upper Saddle River, NJ: Merrill/Prentice Hall.

Wilson, Thomas D. (2000). "Human Information Behavior." Informing Science 3 (2): 49–56.

Windschitl, Mark, and Sahl, Kurt (2002). "Tracing Teachers' Use of Technology in a Laptop Computer School: The Interplay of Teacher Beliefs, Social Dynamics, and Institutional Culture." American Educational Research Journal 39 (1): 165–205. doi:10.3102/00028312039001165.

i. We used the original version – features have been and continue to be added.

ii. See www.edu.gov.on.ca/eng/general/elemsec/speced/iep/iep.html for more information.

iii. Co-operative (co-op) placements are experiential learning opportunities in the form of credit courses that allow secondary school students in the Toronto District School Board to 'use what is learned in the classroom and apply it in the workplace. Co-op is an opportunity to "try out" a career and can help with making decisions about your future'. The objective is for students to 'develop work habits, attitudes and job skills necessary for a successful transition to post-secondary education or the workplace'. See http://www.tdsb.on.ca/HighSchool/YourSchoolDay/Curriculum/ExperientialLearning.aspx