Int. J. Teaching and Case Studies, Vol. 1, Nos. 1/2, 2007
Articulating case-based learning outcomes and assessment

John M. Carroll*
College of Information Sciences and Technology, Center for Human-Computer Interaction, The Pennsylvania State University, University Park, PA 16802, USA
E-mail: [email protected]
*Corresponding author

Marcela Borge
Graduate School of Education, University of California at Berkeley, 4533 Tolman #1670, Berkeley, CA 94720-1670, USA
E-mail: [email protected]

Abstract: Contemporary classroom practices have evolved, and are continuing to evolve, toward an emphasis on authentic learning activities. To a considerable degree, lectures and prescribed classroom exercises are being replaced with more open-ended, problem-based activities that are frequently carried out by groups of students (i.e., collaborative learning). We are investigating the use and assessment of such learning activities in a usability engineering course for 3rd and 4th year Information Science undergraduate students at the Pennsylvania State University. This paper advances our project by attempting to better articulate the specific learning objectives of case-based activities employed in this course, and appropriate assessment goals and methods for these learning objectives.

Keywords: authentic assessment; authentic learning; case-based learning; case studies; human-computer interaction; usability engineering; teaching models; curriculum design; collaborative learning; problem solving; learning assessment; cognitive strategies.

Reference to this paper should be made as follows: Carroll, J.M. and Borge, M. (2007) 'Articulating case-based learning outcomes and assessment', Int. J. Teaching and Case Studies, Vol. 1, Nos. 1/2, pp.33–49.

Biographical notes: John M. Carroll is Edward Frymoyer Chair Professor of Information Sciences and Technology at the Pennsylvania State University. His research interests include methods and theory in human-computer interaction, particularly as applied to networking tools for collaborative learning and problem solving, and the design of interactive information systems. He serves on several editorial boards for journals, handbooks, and series and is Editor-in-Chief of the ACM Transactions on Computer-Human Interaction. He received the Rigo Award and the SIGCHI Lifetime Achievement Award from ACM, the Silver Core Award from IFIP, and the Goldsmith Award from IEEE. He is a Fellow of the ACM, IEEE, and HFES.
Marcela Borge is a PhD candidate in Cognition and Development, Education in Math, Science and Technology, at the University of California at Berkeley. She is also working as a Research Affiliate at the Pennsylvania State University. Her research interests include developing students' collaborative learning and problem solving skills, behaviour regulation, and educational software development. She is a member of the ThinkerTools Research Group and as such has helped to develop instructional materials and educational software.
1 Introduction
Recent innovations in science and engineering curricula and educational infrastructures have focused on 'authentic' learning activity: collaborative group projects that model the work of technical professionals (e.g., Dietrich and Urban, 1996; Hayes et al., 2003; Lamancusa et al., 2001). This focus on authentic learning activities is based on the hypothesis that learning outcomes will be enhanced if the activities students engage in, and the materials they use, more directly reflect the social and technical contexts of actual scientific and engineering practice. Moreover, learning with others naturally allows students to engage knowledge and skill in the context of using it to describe, explain, listen, negotiate, and interpret. Realistic activities and materials are more intrinsically motivating because they constantly remind learners of the possibilities for meaningful application of knowledge and skills in the world beyond the classroom (Dewey, 1966).

Case studies, or cases, are descriptions of a specific activity, event, or problem, drawn from the real world of professional practice. They provide models of practice to students and other novice practitioners. Cases incorporate vivid background information and personal perspectives to elicit empathy and active participation. They seek to engage the student in the drama of a real situation. Cases include contingencies, complexities, and often dilemmas to evoke integrative analysis and critical thinking. Case studies can support authentic learning experiences by presenting episodes of real (or realistic) professional practice.

Cases are widely used in professional education – in business, medicine, law, and engineering (Williams, 1992). For example, the Harvard Business School case collection includes over 7,500 case studies of business decision-making. Perhaps coinciding with contemporary recognition that all disciplines incorporate practice (and not merely knowledge), or perhaps just reflecting contemporary pedagogical concern with active learning and critical thinking, cases have become pervasive over the past decade. For example, the NSF-supported National Center for Case Study Teaching in Science includes many case studies in medicine and engineering, but also in environmental science, anthropology, botany, social and cognitive psychology, and experimental design (Herreid and Schiller, 2005).

The traditional case study is a brief but provocative story, comprising just a half-page of text, which sketches a problematic situation. The reader is left to elaborate not only the possible resolutions, but the missing details of the premise. Cases can also be presented as extensive document collections, for example, presenting multiple points of view as to what actually happened, and requiring detailed study and analysis to characterise the problematic situation and its possible resolutions.
Interestingly, both of these extremes can yield ‘good’ cases, that is, both kinds of cases can engage their users in realistic problem solving, decision-making, and learning (Schön, 1991; Shulman, 1992). It is especially beneficial to contrast multiple cases, as this will help students to pull out bits of information they may have missed otherwise (Barron et al., 1998; Bransford et al., 1989). This is why we are so interested in developing a library of cases designed for usability engineering courses.
2 Usability case studies
During the past six years, with support from the US National Science Foundation, we developed a small online collection of usability engineering case studies (see Carroll and Rosson, 2005a, 2005b; Rosson et al., 2004a, 2004b). Our Usability Case Studies (UCS) collection is available free via the World-Wide Web at http://ucs.ist.psu.edu. Our case studies are – by design – articulated at a fine level of technical detail. For example, the hypertext of the garden.com case study includes 82 documents, sorted by 26 hierarchical category nodes, and with considerable inter-linking (Figure 1). This contrasts with many other senses of 'case', in which the term refers to a single brief narrative document.

Usability engineering comprises the concepts, processes, and practices engaged in the development process to ensure that software systems and applications serve their intended users effectively. To a considerable extent, it mirrors and complements software engineering. Our approach to teaching usability engineering takes user interaction scenarios as the primary context for analysing requirements, describing specifications, envisioning designs, developing rationales, creating various sorts of prototypes, and evaluating systems and applications (Carroll, 2000; Rosson and Carroll, 2002). Some of the core concepts in our usability-engineering course are scenarios, design trade-offs (couched as claims analysis), user interface metaphors, mental models, collaborative coordination and awareness, ubiquitous computing, graphical design, direct manipulation, Fitts' law, information architectures, and information visualisation. (One of these concepts, Fitts' law, is illustrated in a short code sketch following Figure 1.) Students learn techniques associated with the various phases in the usability-engineering process, for example, requirements interviewing and ethnographic observation, task analysis and user modelling, paper-based and scenario-machine prototyping, evolutionary development, and usability evaluation methods like experimental analysis of user performance, collection and interpretation of thinking-aloud protocols and field study data, and survey construction and analysis. Students also need to develop a repertoire of useful strategies and thinking processes in order to make the kinds of evaluations and decisions inherent in usability engineering that are highlighted by the core concepts and techniques. These processes are included as much as possible in our case studies.

Our case studies describe real system development projects; for example, our garden.com case study describes the development of a Web portal for gardening supplies. It is an authentic story of one of the 'dotcoms' from the late 1990s, told from a usability engineering perspective. Stories of real system development processes are neither brief nor strictly linear. The stories can be organised by an overall timeline, but the logical dependencies among events, activities, and problems that occur in the cases are quite interconnected, and related subactivities often occur in parallel. System development case studies can also be quite voluminous. Thus, a system development project typically involves a requirements phase in which stakeholders representing those
who will design, implement, buy, sell, maintain, train and use the system are interviewed, and previous and related systems are analysed with respect to stakeholder perceptions and/or performance. The requirements analysis activities alone can generate a huge amount of content, yet their outcome is merely a problem statement that guides subsequent system development activities.

Figure 1  Usability Case Study (UCS) browser: the map metaphor document in the lower right is drawn from the actual materials of the product development process. It is accessed through the container (above and to the left), providing mostly non-coded metadata. The container is accessed through a list of envisionment documents from the information design phase of the development process
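As promised above, here is a concrete illustration of one of the course's quantitative core concepts. Fitts' law predicts the time to point at a target from its distance and size; the sketch below uses the Shannon formulation common in HCI. The coefficients a and b are hypothetical placeholders of our own, not measured values; in practice they are device-specific constants fitted from user performance data.

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.2, b: float = 0.1) -> float:
    """Predict pointing time (seconds) via Fitts' law, Shannon formulation:
    MT = a + b * log2(distance / width + 1).
    a and b are device-specific constants fitted from data; the defaults
    here are hypothetical placeholders, not measured values.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# e.g., a 16-pixel-wide button 320 pixels away:
print(round(fitts_movement_time(320, 16), 3))  # ~0.64 s with these constants
```

This is the kind of calculation behind the course's pointing-device comparisons: the device with the smaller fitted b transmits more 'bits per second' of pointing throughput.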
Rather than condensing our case materials into a linear narrative, we developed a simple information design for a collection of documents. We developed a case schema corresponding to fairly standard phases in the development process for interactive systems (Rosson and Carroll, 2002):

•	requirements analysis
•	activity design
•	information design
•	interaction design
•	documentation
•	usability testing.
Each of the phases is further decomposed into a set of focal activities: for example, requirements analysis and usability testing are unpacked into

•	planning
•	methods and materials
•	information gathering
•	synthesis

while the four design phases are unpacked into

•	exploration
•	envisionment
•	rationale.
This results in 20 terminal categories of development activity. The terminal categories are populated with design scenarios and mock-ups, as well as a variety of other design documents. The documents include early statements of the design concept; results from requirements surveys, focus groups, and other market research (including, where possible, instrumental documents such as advertisements used to recruit participants into focus groups); notes from design brainstorming sessions and other design discussions; analyses of metaphors and technology options; user interaction design scenarios; user interface sketches, mock-ups and prototypes; use cases and point-of-view scenarios to analyse software designs; design issues and tradeoffs; usability specifications; descriptions and results from usability studies, including test materials; and design revisions and justifications for changes. A description of the content for the garden.com case study is in Table 1; the schema itself is summarised in the code sketch below.

In our usability engineering course, we use the cases in four ways. First, we make specific homework assignments using the cases. For example, one assignment asked students to discuss how prototyping was used in two of the case studies, to contrast the two prototyping strategies, and to hypothesise why different prototyping strategies might have been used in the two projects. Second, we use case studies as background for in-class activities. For example, one activity asked groups of students to assess the impact of a changed requirement for the garden.com case study (e.g., a business-to-business strategy instead of direct retail). Third, we use the case studies to exemplify principles, practices, concepts and techniques that are described in lectures and other presentations. For example, one of the case studies is a personal digital assistant application, which provides a nice example for introducing PDAs. Fourth, the students use the case studies as models for their own semester projects. Each student group develops a small system project through the course of the semester, and, among other things, documents it as a case study of system development. In all of these uses, students access and study the cases individually, reading and browsing the hypertexts.
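To make the information design concrete, the case schema described above can be summarised as a small data structure: six phases, each decomposed into focal activities, yielding the 20 terminal categories that hold the case documents (Table 1 below shows how they are populated for garden.com). This is a minimal sketch; the type and variable names are our own illustration, not the UCS implementation.

```python
from dataclasses import dataclass, field

# Phases and focal activities from the case schema (Rosson and Carroll, 2002).
ANALYSIS_ACTIVITIES = ["planning", "methods and materials",
                       "information gathering", "synthesis"]
DESIGN_ACTIVITIES = ["exploration", "envisionment", "rationale"]

CASE_SCHEMA = {
    "requirements analysis": ANALYSIS_ACTIVITIES,
    "activity design": DESIGN_ACTIVITIES,
    "information design": DESIGN_ACTIVITIES,
    "interaction design": DESIGN_ACTIVITIES,
    "documentation": DESIGN_ACTIVITIES,
    "usability testing": ANALYSIS_ACTIVITIES,
}

@dataclass
class CaseDocument:
    """One document in a case study, e.g., a scenario, sketch, or survey result."""
    title: str
    phase: str      # one of the six phases above
    activity: str   # one of that phase's focal activities

@dataclass
class CaseStudy:
    name: str
    documents: list[CaseDocument] = field(default_factory=list)

# The schema yields 2 * 4 + 4 * 3 = 20 terminal categories:
print(sum(len(acts) for acts in CASE_SCHEMA.values()))  # 20

# e.g., one garden.com document from Table 1:
garden = CaseStudy("garden.com",
                   [CaseDocument("sketch of map metaphor",
                                 "information design", "envisionment")])
```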
Table 1  Content for the garden.com case study

Requirements analysis
  Planning: initial brainstorming session; market research; root concept
  Methods and materials: script for interviewing nursery workers; script for interviewing customers; focus group agenda; focus group recruiting ad; screening survey questions
  Information gathering: photo of highway entrance sign; photo of nursery exterior; photo of nursery interior; photo of nursery main retail display; photo of outside covered retail display; photo of outside open retail display; photo of superstore outdoor display; photo of indoor superstore display; photo of gardening customer; garden sketch; plant hardiness map; catalog index; catalog order form; interview excerpt – nursery workers; interview excerpt – customers
  Synthesis: problem claims; problem scenarios

Activity design
  Exploration: summary of proposed activities; conceptual metaphors; system technology
  Envisionment: activity scenarios; shopping model; sequence analysis; point-of-view analysis
  Rationale: activity claims

Information design
  Exploration: department information requirements; product page information requirements; summary of proposed information offerings; information metaphors; information technology options
  Envisionment: sketch of map metaphor; sketch of catalog metaphor; sketch of homepage; sketch of search results; sketch of product page; information scenarios
  Rationale: information claims

Interaction design
  Exploration: interaction metaphors; interaction technology
  Envisionment: online shopping script; functional study; design revision #1; design revision #2
  Rationale: before/after main page; before/after department page; before/after product list page; before/after product information page; before/after wheelbarrow page; usability study #2 report

Usability testing
  Planning: operational definition; usability specifications
  Methods and materials: user background survey; gardening quiz; attitudes toward the Internet survey; session script; usability testing scenarios; post-scenario ease of use assessment; usability report card; strategy questions; debriefing questions; equipment list
  Data collection: notes on sessions; notes on participants; post-subtask comments; post-session comments; usability report card scores
  Interpretation: gardening quiz scores; attitudes toward the Internet; time on task; lostness ratings; number of mouse clicks; post-subtask ratings
  Synthesis: design recommendations
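Among the interpretation measures in Table 1 are 'lostness ratings'. The case study supplies the scores themselves; for readers unfamiliar with the measure, a commonly used formulation (often attributed to Smith's work on hypertext usability) compares the pages a user visited against the minimum needed for the task. The sketch below assumes that standard formulation, not whatever instrument the garden.com team actually used.

```python
import math

def lostness(total_pages_visited: int, unique_pages_visited: int,
             minimum_pages_required: int) -> float:
    """Navigation lostness: L = sqrt((N/S - 1)^2 + (R/N - 1)^2), where
    S = total pages visited (counting revisits),
    N = unique pages visited,
    R = minimum pages required to complete the task.
    0 indicates a perfectly direct path; higher values suggest wandering.
    """
    s, n, r = total_pages_visited, unique_pages_visited, minimum_pages_required
    return math.sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# A direct path: the user visited exactly the 4 required pages, once each.
print(round(lostness(4, 4, 4), 2))   # 0.0
# A wandering path: 12 visits over 8 unique pages for a 4-page task.
print(round(lostness(12, 8, 4), 2))  # ~0.6
```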
In general, we believe that the usability engineering cases and case-based learning activities have enriched the course. During the 2004 and 2005 offerings of the course, we gathered various anecdotal and global information about how the students used the cases, how they felt about using them, and about their perceived self-efficacy with respect to
25 key usability engineering capacities (Carroll and Rosson, 2005a, 2005b). However, the linkage of instructional events and outcomes to the specific case-based learning activities is quite indirect in our prior work. Our current research interest is to explore and develop a more focused linkage between case-based activities and learning events and outcomes.
3 Selecting an instructional approach through learning goals
Boekaerts (2002) compares classroom environments to rain forests: complex and dynamic systems where students, competing for resources, have to adapt to different environments in order to flourish. Self-regulated learners are extremely successful at adapting to changing demands in the environment by monitoring and regulating learning behaviours when necessary (Butler and Winne, 1995). Students make judgements with regard to courses, such as how to prioritise work, what type of effort is necessary, and how much time to spend on different aspects of the course. These judgements are in large part made through an evaluation process in which students' own learning goals interact with the perceived goals for the class (Boekaerts, 1996; Pintrich, 1995).

However, students may misperceive goals during instruction and as a result incorrectly evaluate their work quality and class performance (Butler and Winne, 1995). Certain goals may simply be easier for students to grasp than others. Instructors can help minimise these types of problems by actively verbalising the learning goals for a course and providing students with useful feedback. Assessments are a valuable way for instructors to gauge student development in order to tailor feedback to best suit each student. However, in order to use assessments as training tools, they need to be connected to verbalised learning goals and take place throughout the course.

In designing the usability engineering course, we carefully devised a list of what we felt to be the main learning goals for the class. These learning goals, to a great degree, shaped the organisation and content focus of the course and helped select the assessment procedures and teaching methods we would utilise during instruction. In order to ensure that students are aware of all of our learning goals, the instructor will talk to students about them and explain how instructional methods and assessments will facilitate their achievement. There are six main learning goals for this course, some of which we touched on earlier:

•	retain knowledge of core concepts and techniques and apply this knowledge to accomplish real world tasks
•	build a knowledge base through collaborative discussions and refine necessary skills in collaborative groups
•	utilise cases in the case library effectively and use case-based activities to demonstrate developing understanding of course content
•	think outside the box when accomplishing tasks
•	use outside resources to accomplish tasks
•	understand and apply the underlying cognitive strategies needed to accomplish these different tasks.
There is a strong student expectation that the first of these goals is primary, based on socialisation in the traditional classroom-learning paradigm. Therefore, it is critical to explicitly discuss and support achievement of all six learning goals. Moreover, if students are not held accountable for demonstrating mastery of all six goals, they may direct their time and attention only to those goals that will be tested. The assessments we describe below address different combinations of the six goals. For example, the final course project may assess goals 1–2 and 4–6, but not the third goal. As long as each goal is explicitly assessed by some means, students are reminded to pay attention to whether or not they are achieving every goal. Achieving these goals will not be an easy task for students, because they have to focus on learning outcomes (i.e., an understanding of the core concepts of usability engineering) as well as learning processes (i.e., ways to go about accomplishing the tasks within usability engineering).

Usability engineering is a domain of professional practices. It is important for students to develop two very different kinds of knowledge in such a domain: declarative and procedural. This distinction is fundamental, but not always easy to operationalise in particular instances: declarative knowledge is knowledge about concepts, whereas procedural knowledge is knowledge about how to carry out actions, enacting core concepts. For example, in order to know how (procedural) to carry out a complex task, it is necessary to know that (declarative) certain goals need to be met and that (declarative) different strategies and processes exist that are related to the procedure.

Declarative knowledge consists of an understanding and awareness of all of the content in the course. Concepts like user interface metaphor, direct manipulation, contextual help, and so forth are examples of some of this knowledge. Besides these more obvious examples, other forms of declarative knowledge include knowledge of cognitive strategies, goals of collaboration, and the organisation of the case libraries. Procedural knowledge has more to do with the mechanics of manipulating the knowledge base and carrying out different functions. Some examples of procedural knowledge as it pertains to this course are: how one can determine which of two pointing devices is better, how to carry out a requirements interview, and how to determine whether a design change improved a user interface or not. This distinction is important because developing procedural knowledge requires not only an understanding of the tasks and prescriptive strategies at hand, but actual practice in identifying events, carrying out the necessary actions, and reflecting on performance (Hogan, 1999; VanLehn, 1989; White et al., 1999).

A further distinction exists between the core course content (be it procedural or declarative) and the learning and analysis strategies used by teachers and students in the course. Examples of analysis strategies include: using the case library, understanding how all the parts of a usability engineering process link up, and analysing tradeoffs (e.g., orienting to the tension between potential upsides or strengths of a user interface design and potential downsides or weaknesses of a design).
Students often do not recognise that even the best design solutions typically (perhaps always) have undesirable characteristics; naively, they think design is about finding the right solution. In order to address these misunderstandings, instruction in procedurally oriented disciplines such as usability engineering must provide students with information about the underlying processes inherent to procedures, opportunities to carry them out, and feedback on student performance. Bjork described a vivid example of this distinction:
“One chance to actually put on, fasten, and inflate an inflatable life vest, for example, would be of more value – in terms of the likelihood that one would actually perform that procedure in an emergency – than the multitude of times any frequent flier has sat on an airplane and been shown the process by a steward or stewardess.” (Bjork, 1994, p.188)
Case libraries and case-based learning activities can provide students with access to higher-order cognitive learning opportunities and therefore contribute to a better understanding of the course content. However, as previously discussed, cases and activities are not enough, in and of themselves, to develop the kind of thinking processes necessary for students to engage in the problem solving events associated with usability engineering. In our experience, case studies and case-based activities need to be further supported by teaching methods that focus less on traditional lectures and more on interactive problem solving. Interactive problem solving refers to class sessions during which the instructor poses a problem and the entire class works together to conceptualise it in different ways, identify strategies, and suggest and carry out different solutions. Throughout interactive problem solving, the instructor's main objective is not to give students answers, but rather to get students to verbalise reasoning and figure out the answers collaboratively with their peers. However, when the class falters, the instructor can provide suggestions to help students get back on track. In this way, students can develop learning and analysis strategies and processes; after all, the best way to learn about problem solving is to engage in it.

Therefore, for a course such as the one we are proposing, effective instruction must follow a process that allows students to learn about new strategies and practice new skills simultaneously. This can be accomplished through the use of a modified version of the cognitive-apprenticeship model of instruction. A cognitive-apprenticeship model is particularly useful when the activity at hand is complex enough that merely reading about it, or even seeing it enacted, would be insufficient for a learner to fully understand the processes and thinking involved in the activity. In such cases, learners need real-time meta-commentary on the processes being enacted. Through this meta-commentary students can 'see' expert processes, such as strategy selection, in action. Subsequent steps in the cognitive apprenticeship model present students with the opportunity to emulate observed behaviours in a classroom and ensure that students take an active role in their learning processes while having access to instructor modelling, feedback, and support. Innovative teaching methods such as these prioritise not only the learning content, but also the thinking processes behind associated problems and how experts in respective fields might go about solving them.

Teaching techniques within a cognitive-apprenticeship model of instruction generally follow a similar process: demonstration of the task, discussion of problems related to the task and strategies associated with them, coaching students to resolve problems in order to accomplish the task, and finally, fading away in order to allow students to fully demonstrate that they can accomplish the task without assistance (Brown et al., 1989; Collins et al., 1991; Schoenfeld, 1987). During the discussion and coaching aspects of this model, students' understanding is often questioned, and students are expected to actively participate with the instructor in order to resolve problems and reflect on performance.
Formative assessments of this nature have been shown to improve students’ abilities to develop complex problem solving skills (Brown et al., 1989; Collins et al., 1991; Kuhn et al., 2000; Rogoff et al., 1996; Schoenfeld, 1987; Schraw and Moshman, 1995) as well as improve student
motivation (Schunk, 1995). Furthermore, teaching methods that focus on the importance and utility of learning processes at the strategic level have been shown not only to improve student performance on complex tasks, but also to improve students' ability to transfer these skills to novel situations (Bransford et al., 2001; Butler and Winne, 1995; White and Frederiksen, 2005). Therefore, this type of teaching model seems extremely compatible with our learning goals for students and the case-based activity structure.

The things people do in the workplace, and the ways they do them, are rich sources of constraint and inspiration for designing systems to support real work activity. A common usability engineering technique is to view the metaphors that people employ in talking about their work flows and data structures as reflecting naïve mental models of the activity domain. These metaphors are then used as guidance for developing activity designs (e.g., Rosson and Carroll, 2002). For example, a user might describe personal schemes for organising documents into piles on a desk, or for polling colleagues to arrange team meetings.

One of the assignments we have used in our case-based approach to teaching usability engineering is to ask students to review a case and critique the metaphors that the design team identified at this stage. This is a far more useful way to talk about the role of metaphors in requirements analysis than merely to describe and illustrate their use in a lecture (Carroll and Rosson, 2005a, 2005b). However, we still find that students have trouble making the 'leap' from reading and critiquing an instance of real usability engineering practice to being able to enact the practice themselves. Some students, even after critically analysing the use of metaphor analysis in a case study, just invent metaphors that have little relationship to anything the users told them. Other students seem more able to identify metaphors in what the users tell them, but then are unable to go beyond those metaphors in their own activity designs, taking verbatim what the users told them instead of reacting to it creatively and integratively as designers.

Of course, what we are asking for is not a small thing. It is difficult enough perhaps for a student of usability engineering to learn to listen carefully to users and to understand what they are describing, let alone to learn to creatively integrate what the users literally say into a new and more comprehensive design metaphor that the users will easily recognise, appropriate, and extend. Nevertheless, this high level of performance is key to the professional practice of usability engineering, and therefore it is exactly what we are trying to help the students learn.

This is precisely the kind of topic in which we think cognitive apprenticeship can make a big contribution to the usability engineering course: rather than only having students read and critique case study examples of metaphor analysis, we will then have the instructor actively model metaphor analysis for the class. What we have in mind is not a lecture, but a workshop of real-time design problem solving. This could be achieved by the instructor solving a problem that the students worked on, that is, enacting the process behind the 'teacher's solution' instead of handing out only the solution state.
It could also be achieved by inviting a team of advanced undergraduates or graduate students involved in a design project (or, of course, actual designers) to carry out a requirements analysis session for the class, or even by showing a video recording of a requirements analysis session and stopping the video frequently to deconstruct the designers' problem solving.

Our teaching model is a modified version of a cognitive apprenticeship model because students will continue to receive feedback and guidance, if needed, when creating their final projects. The standard cognitive apprenticeship model employs a stricter approach to 'fading' teacher support than we expect or intend to follow.
4 The course structure
The primary structure of the course, as previously mentioned, centres around an apprenticeship model of instruction. In addition, we wanted the majority of learning to take place in a simulated real-world environment. Our approach will be to teach usability engineering in collaborative and team environments, where brainstorming, idea building, and problem solving will be an integral part of the learning experience. In order to explain the proposed course in more detail, we will discuss three important aspects: teaching methods and techniques used for the course, general cognitive skills emphasised, and learner assessments used to evaluate student performance. This organisation will provide an opportunity to further illustrate these components with concrete examples.
4.1 Teaching methods and techniques

Instructors can use various techniques to create a learning environment that adheres to an apprenticeship model of instruction. The methods and techniques we describe here are by no means novel to instructional practice, as they have been used in other areas such as mathematics, science and law. However, they are not commonly practiced in usability engineering, especially in larger undergraduate courses. Furthermore, we do not mean to imply that lectures are not an important aspect of learning or that they should not be used as a means of instruction. We are suggesting that lectures should be supplemented with other teaching methods that allow for the kinds of collaborative exchanges that research has shown to be important for learning. We have drawn from various examples in educational research (Brown and Campione, 1994; Brown et al., 1989; Collins et al., 1991; Schoenfeld, 1987; White and Frederiksen, 2005; Williams, 1992) and tailored these examples to fit the learning goals for the usability-engineering course.

During classes, demonstrations will be used, along with help clinics and interactive discussions, to highlight different ways to conceptualise a task and to search for and carry out solutions. An example of a demonstration would be asking a group of students to research a particular usability engineering practice (for example, metaphor analysis in requirements interviewing and activity design, as discussed above, or analysing and synthesising an application's data and functions into an information architecture). The students would prepare a demonstration to present in class – a sort of peer-based cognitive apprenticeship – for critique by the rest of the class. In general, demonstrations and interactive discussions are proven ways of presenting students with expert thought processes, strategies, and problem conceptualisation through collaborative problem solving and reflection (Brown and Palincsar, 1985; Schoenfeld, 1987). Students can be introduced to new concepts through a combination of lectures and demonstrations. Interactive discussions and help clinics (see below) can then be used as an avenue for students to practice using new concepts, develop their understanding, and practice new skills. Instructors can further use these sessions as opportunities to support student learning by coaching students through the process, much as a sports coach would an athlete.

Help clinics are sessions where student groups can present the class with whatever problems they encounter. The instructor begins by demonstrating how to conceptualise different problems and provide suggestions for resolving them through the use of video
(discussed in the demonstrations section). In order to ensure that every group participates in these sessions, each group will be required to present their ‘number one problem’ to the class. This can include design problems or problematic collaborative interactions. The group will in essence become the subject of whole-class discussion and analysis by role playing different problem scenarios for the class. The entire class can then participate by conceptualising problems in different ways, practicing strategy selection, and implementing solutions by having the presenting group act out different corrective strategies. Throughout this process, the instructor will ask questions in order to ensure that students focus on important bits of information, monitor whether strategies are working, and change their approach when necessary.
4.2 General cognitive skills

Earlier in this paper we presented the core concepts and techniques we feel are important aspects of usability engineering. In this section, we will discuss the more implicit forms of knowledge that students will need in order to fulfil the learning goals for this class. While the instructor will actively try to make these more explicit to students, they remain, for many students, harder to perceive, use, and understand than general content goals (Barron, 2003; Hogan, 1999; Kuhn et al., 2000; White et al., 2004). These implicit forms of knowledge include thinking processes, cognitive strategies and group-processing strategies, as well as collaborative interaction. Some processes and strategies will be particular to usability engineering, while others are central to effective learning, problem solving, and collaborating in general. In order to address these differences, demonstrations will be tailored to focus on those cognitive skills and processes specific to usability engineering as well as on collaborative interaction.

Examples of processes specific to usability engineering include how to gather and evaluate information in order to decide which tradeoffs are necessary for a design, and how different circumstances and perspectives can shape this decision-making process. The usability case studies showcase processes specific to usability engineering to some extent, but think-aloud protocols (verbalisation of thoughts during problem solving) will also be used to spotlight important strategies and expert conceptualisation. The instructor will verbalise his thoughts during problem solving demonstrations and ask students to try to do the same during interactive discussions.

There are two demonstrations that can serve as examples of this approach; both occur early in the course. The first demonstration consists of the instructor using the case library to accomplish a case-based activity. Throughout the demonstration, the instructor will verbalise the entire thought process: conceptualising and rephrasing the activity, pointing out the kinds of information needed to complete it, discussing the organisation and function of the case library, and showing how to find different kinds of information. The main purpose of the demonstration will be to show how an expert would think about the activity and effectively utilise the case libraries. In this way students can learn about and emulate expert behaviours.

The second demonstration will focus on group dynamics and collaborative interactions. In this demonstration, the instructor will show the class a video of a collaborative design meeting. This can either be an actual meeting from one of the instructor's research teams, or a staged meeting in which the actors illustrate common problems that occur within these sessions. Before showing the video the instructor will discuss the goals of collaboration, common pitfalls, and useful corrective
strategies. The video will then serve as a way to get the class to critically analyse how well the group is meeting these collaborative goals, identify problems, and suggest ways of correcting them. Rather than watching the video from start to finish and then discussing it postmortem, the students should call out when they think it should be paused and discuss problems and resolutions as they occur. Alternatively, the instructor may pause the video at crucial times and ask students to identify problems and how they can be resolved, then play the video to see how the group actually handled it. This video demonstration can be a vehicle to present students with the idea that group interaction is a skill that can be improved by

•	increasing your awareness of collaborative dynamics, goals, and strategies
•	using this awareness as a means of monitoring and improving collaborative activity.
Students will be expected to emulate the processes illustrated in this demonstration during student help clinics. During all of the demonstrations discussed in this section, the instructor will work collaboratively with the entire class to resolve questions and suggest helpful strategies, both cognitive and social.

Cognitive strategies include strategies to help learn or remember material, accomplish different design tasks, and monitor and control thinking processes. Group-processing strategies are similar to cognitive strategies, but focus on the social goals of collaboration rather than on usability engineering. For example, there could be strategies to accomplish group tasks, strategies to help monitor and control interactions, and strategies to resolve common group problems. As the course evolves, students will be asked to identify these strategies during interactive discussions and use the strategies to resolve problems during help clinics. The instructor should listen to student suggestions and try them out even if they are not appropriate to the situation. The class can then reflect on how well different strategies work and suggest others when necessary. These types of interactions help to build environments where students feel empowered and personally invested in the class (Collins et al., 1991).
4.3 Learner assessments

One of the challenges we faced in designing this course was to come up with assessments that would capture more than just how much of the course content students remembered. We wanted assessments that would also prioritise the processes students were expected to use: problem solving as well as important collaborative skills, i.e., effective communication and negotiation of ideas. Furthermore, we wanted assessments to be meaningful and useful to students, and explicitly tied to the learning goals for the class. The assessments we decided on are a combination of formative and summative assessments. We begin by explaining the final project and then discuss the ongoing formative assessments and how they will help students to complete the final project.

Students' final project will be the creation of a new case study, in which students will present each step in the design process, detailing the problems they encountered, analyses of their findings, and the tradeoffs they felt were necessary. The final project is meant as a way to reinforce the use of the processes within the case libraries in environments similar to those in which the actual cases took place. Therefore, students must complete the final project
as part of assigned design teams. Students will be expected to participate in regular design meetings and complete a series of tasks with their team members. Students will be assigned a design task and will create a case study by emulating the steps taken by different design teams in the case libraries (e.g., requirements analysis, activity design, information design, and interaction design). Exceptional student cases will be added to a new 'student section' that future students can then utilise. Having students create purposeful artifacts of this nature can be a powerful motivating factor for learning (McCombs, 1996; Pintrich and Schunk, 1996; Schwartz et al., 1999).

Each step in the design process will have to be completed by predetermined dates, and students will need to turn in detailed, transcript-like notes of the meetings during which sections were completed. These detailed notes will include such information as who was present during the meeting, the names of speakers and the issues they raised, as well as other general discussions that took place during the meeting. These notes can be used as both formative and summative assessments. As a formative assessment, the notes can help an instructor evaluate students' developing understanding and use this information to provide students with feedback to improve their final project. Notes can also be used as a way of determining grades for group participation (summative assessment). Lastly, during sessions coinciding with the due dates for a section, the instructor can randomly call on teams to present their completed sections and discuss their rationale.

Besides calling on random teams, the instructor can also preselect a set of students to call on during lectures and interactive class discussions. Much in the same way that law and medical students are randomly selected to provide explanations and examples during lectures, so can usability engineering students be expected to contribute to class discussions. The potential to be called upon to 'perform' for an audience, as an individual or as part of a group, will likely motivate students to come to class prepared and to complete assignments on time. Moreover, these types of formative assessments provide opportunities for students to verbalise and revise their thinking, and for instructors to better understand how best to support students' development (Bransford et al., 2001).
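The transcript-like notes described above lend themselves to a simple structured format, which would also make the formative review easier to manage. The following is a minimal sketch of one way to capture the fields the course asks for; the record layout and names are our own illustration, not a format the course prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Utterance:
    """One contribution in a design meeting: who spoke and the issue raised."""
    speaker: str
    issue: str

@dataclass
class MeetingNotes:
    """Transcript-like notes for one team meeting, as described above."""
    team: str
    meeting_date: date
    design_phase: str                 # e.g., "requirements analysis"
    attendees: list[str] = field(default_factory=list)
    utterances: list[Utterance] = field(default_factory=list)
    general_discussion: str = ""

# e.g., a hypothetical entry an instructor might review before a due date:
notes = MeetingNotes(
    team="Team 3", meeting_date=date(2007, 3, 12),
    design_phase="activity design",
    attendees=["Ana", "Ben", "Chris"],
    utterances=[Utterance("Ana", "catalog metaphor may not scale down")],
    general_discussion="Agreed to prototype two activity scenarios.",
)
```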
5 Conclusion
Our objective in this (ongoing) work is to enhance the authenticity of a course in usability engineering. Authentic approaches to teaching and learning are always a good idea. For usability engineering, which after all is a highly procedural domain, and indeed a professional practice, we feel there is both the opportunity and perhaps the need to pursue authentic instructional approaches more aggressively. Many of the students who study usability engineering, including a few who do not expect it, go on to become usability engineers.

Cases, of course, are a paradigmatic example of authentic content; but there are many ways to use cases, including some relatively traditional lecture and discussion-question approaches, that are not strongly authentic, and that are particularly inauthentic for a highly practical and procedural domain like usability engineering. We further recognise that there is a difference between being a successful student and being an effective practitioner in the field. If we want to develop the latter, we need to teach students more than just theory; we must demystify and support practice.

We have created tools (the case library and case-based activities) to help students develop a better understanding of the practice of usability engineering. However, a tool's
usefulness is only as great as the instructional situation allows. We are trying to create a learning environment that not only requires the use of our instructional tools and provides students with examples of how to use them, but also gives students feedback on how they are using the tools and what they should be learning from them. Our goal is to continue to develop a case-based approach to teaching and learning usability engineering, but to embed that case-based pedagogy in even more authentic learning activities. We have been asking here how we can better exploit the opportunity to create a more authentic learning experience in usability engineering – and how to better assess the outcomes of what we do.

Research in cognitive psychology and education has contributed to robust theories of how people learn and how this information can be used to tailor teaching and training practice to optimise learning potential. As knowledge of learning increases, more and more researchers are pushing educators to reevaluate their pedagogy and revise practice to include more process-oriented, learner-centred forms of instruction. To a great extent that is what we have done here. However, we also need to investigate the effect of these changes on student learning and motivation across different fields and age groups. This can be done through longitudinal studies, self-reports, and pre- and post-instructional interviews. Educators teaching similar courses but utilising different instructional techniques could also collaborate with each other to build a better understanding of the most effective teaching methods for their specific needs. In short, it is about time that we bridged the gap between research and practice to develop more effective forms of instruction.
References

Barron, B. (2003) 'When smart groups fail', The Journal of the Learning Sciences, Vol. 12, No. 3, pp.307–359.

Barron, B., Schwartz, D., Vye, N.J., Moore, A., Petrosino, A. and Zech, L. (1998) 'Doing with understanding: lessons from research on problem- and project-based learning', The Journal of the Learning Sciences, Vol. 7, Nos. 3–4, pp.271–311.

Bjork, R.A. (1994) 'Memory and metamemory considerations in the training of human beings', in Metcalf, J. and Shimamura, A.P. (Eds.): Metacognition: Knowing About Knowing, MIT Press, Cambridge, MA.

Boekaerts, M. (2002) 'Bringing about change in the classroom: strengths and weaknesses of the self-regulated learning approach – EARLI Presidential Address 2001', Learning and Instruction, Vol. 12, pp.589–604.

Boekaerts, M. (1996) 'Self-regulated learning at the junction of cognition and motivation', European Psychologist, Vol. 1, No. 2, pp.100–112.

Bransford, J.D., Brown, A.L. and Cocking, R.R. (2001) How People Learn, National Academy Press, Washington, DC.

Bransford, J.D., Franks, J.J., Vye, N.J. and Sherwood, R.D. (1989) 'New approaches to instruction: because wisdom can not be told', in Vosniadou, S. and Ortony, A. (Eds.): Similarity and Analogical Reasoning, Cambridge University Press, New York, pp.470–497.

Brown, A.L. and Campione, J.C. (1994) 'Guided discovery in a community of learners', in McGilly, K. (Ed.): Classroom Lessons: Integrating Cognitive Theory and Classroom Practice, MIT Press/Bradford, Cambridge, MA, pp.229–270.

Brown, A.L. and Palincsar, A.M. (1985) Reciprocal Teaching of Comprehension Strategies: A Natural History of One Program for Enhancing Learning, University of Illinois at Urbana-Champaign and Bolt Beranek and Newman Inc., Champaign, IL and Cambridge, MA.
Brown, J.S., Collins, A. and Duguid, P. (1989) 'Situated cognition and the culture of learning', Educational Researcher, Vol. 18, No. 1, pp.32–42.

Butler, D.L. and Winne, P.H. (1995) 'Feedback and self-regulated learning: a theoretical synthesis', Review of Educational Research, Vol. 65, No. 3, pp.245–281.

Carroll, J.M. (2000) Making Use: Scenario-Based Design of Human-Computer Interactions, MIT Press, Cambridge, MA.

Carroll, J.M. and Rosson, M.B. (2005a) 'A case library for teaching usability engineering: design rationale, development, and classroom experience', ACM Journal of Educational Resources in Computing, Vol. 5, No. 1, Article 3, pp.1–22.

Carroll, J.M. and Rosson, M.B. (2005b) 'Toward even more authentic case-based learning', Educational Technology, Vol. 45, No. 6, pp.5–11.

Collins, A., Brown, J.S. and Newman, S.E. (1991) 'Cognitive apprenticeship: making things visible', American Educator: The Professional Journal of the American Federation of Teachers, Vol. 15, No. 3, pp.6–11, 38–46.

Dewey, J. (1966) Democracy and Education, Macmillan/Free Press (first published 1916), New York.

Dietrich, S.W. and Urban, S.D. (1996) 'Database theory in practice: learning from cooperative group projects', ACM SIGCSE Bulletin, Proceedings of the 27th SIGCSE Technical Symposium on Computer Science Education, Vol. 28, No. 1, pp.112–116.

Hayes, J.H., Lethbridge, T.C. and Port, D. (2003) 'Evaluating individual contribution toward group software engineering projects', Proceedings of the 25th International Conference on Software Engineering, IEEE, pp.622–627.

Herreid, C.F. and Schiller, N.A. (2005) The National Center for Case Study Teaching in Science, State University of New York at Buffalo, http://ublib.buffalo.edu/libraries/projects/cases/ubcase.htm

Hogan, K. (1999) 'Thinking aloud together: a test of an intervention to foster students' collaborative scientific reasoning', Journal of Research in Science Teaching, Vol. 36, No. 10, pp.1085–1109.

Kuhn, D., Black, J., Keselman, A. and Kaplan, D. (2000) 'The development of cognitive skills to support inquiry learning', Cognition and Instruction, Vol. 18, No. 4, pp.495–523.

Lamancusa, J.J., Jorgensen, J.E., Zayas-Castro, J.L. and de Ramirez, L.M. (2001) 'The learning factory – integrating design, manufacturing and business realities into engineering curricula', International Conference on Engineering Education, August 6–10, Oslo, Norway, pp.1–6.

McCombs, B.L. (1996) 'Alternative perspectives for motivation', in Baker, L., Afflerback, P. and Reinking, D. (Eds.): Developing Engaged Readers in School and Home Communities, Erlbaum, Mahwah, NJ, pp.67–87.

Pintrich, P.R. (1995) 'Understanding self-regulated learning', New Directions for Teaching and Learning, No. 63, pp.3–12.

Pintrich, P.R. and Schunk, D.H. (1996) Motivation in Education: Theory, Research and Application, Merrill Prentice-Hall, Columbus, OH.

Rogoff, B., Matusov, E. and White, C. (1996) 'Models of teaching and learning: participation in a community of learners', in Olson, D. and Torrance, N. (Eds.): The Handbook of Education and Human Development, Blackwell Publishers, Malden, MA, pp.388–414.

Rosson, M.B. and Carroll, J.M. (2002) Usability Engineering: Scenario-Based Development of Human-Computer Interaction, Morgan Kaufmann, San Francisco.

Rosson, M.B., Carroll, J.M. and Rodi, C. (2004a) 'Teaching computer scientists to make use', in Alexander, I.F. and Maiden, N. (Eds.): Scenarios, Stories, Use Cases: Through the Systems Development Life-Cycle, John Wiley & Sons, New York, pp.445–463.
Rosson, M.B., Carroll, J.M. and Rodi, C. (2004b) 'Case studies for teaching usability engineering', Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education (Norfolk, VA, March 3–7), ACM Press, New York, pp.36–40.

Schoenfeld, A.H. (1987) 'What's all the fuss about metacognition?', in Schoenfeld, A.H. (Ed.): Cognitive Science and Mathematics Education, Lawrence Erlbaum Associates, Hillsdale, NJ, pp.189–215.

Schön, D.A. (1991) The Reflective Turn: Case Studies in and on Educational Practice, Teachers College Press, New York.

Schraw, G. and Moshman, D. (1995) 'Metacognitive theories', Educational Psychology Review, Vol. 7, pp.351–371.

Schunk, D.H. (1995) 'Self-efficacy and education and instruction', in Maddux, J.E. (Ed.): Self-Efficacy, Adaptation, and Adjustment: Theory, Research, and Application, Plenum Press, New York, pp.281–303.

Schwartz, D., Lin, X., Brophy, S. and Bransford, J.D. (1999) 'Toward the development of flexibly adaptive instructional designs', in Reigeluth, C.M. (Ed.): Instructional Design Theories and Models, Vol. II, Erlbaum, Hillsdale, NJ, pp.183–213.

Shulman, J.H. (Ed.) (1992) Case Methods in Teacher Education, Teachers College Press, New York.

VanLehn, K. (1989) 'Problem solving and cognitive skill acquisition', in Posner, M.I. (Ed.): The Foundations of Cognitive Science, MIT Press, Cambridge, MA, pp.527–579.

White, B. and Frederiksen, J.A. (2005) 'A theoretical framework and approach for fostering metacognitive development', Educational Psychologist, Vol. 40, No. 4, pp.211–223.

White, B., Frederiksen, T. and Borge, M. (2004) 'How can role playing and reflective journals foster young learners' meta-socio-cognitive development?', paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA.

White, B., Shimoda, T. and Frederiksen, J. (1999) 'Enabling students to construct theories of collaborative inquiry and reflective learning: computer support for metacognitive development', International Journal of Artificial Intelligence in Education, Vol. 10, No. 2, pp.151–182.

Williams, S.M. (1992) 'Putting case-based instruction into context', The Journal of the Learning Sciences, Vol. 2, No. 4, pp.367–427.