
CALIBRATED PEER REVIEW™ AND ASSESSING LEARNING OUTCOMES

Patricia A. Carlson 1 and Frederick C. Berry 2

Abstract – The need for more focused and less labor-intensive assessment practices has brought new challenges, both for institutions and for individual educators. We elaborate on Calibrated Peer Review™ (CPR™) – an end-to-end, computer-mediated learning environment that seamlessly integrates writing, as a vehicle for critical thinking, into a technical or content course. Developed as a tool to help incorporate writing into the teaching of science, CPR™ moves well beyond the scope of most web-delivered educational software. We draw from experiences using CPR™ in two courses offered at Rose-Hulman Institute of Technology: RH131 (Rhetoric and Composition) and ECE 360 (Principles of Engineering Design). We focus on four questions. First, what is CPR™? Second, how does CPR™ improve student learning? Third, how can CPR™ measure learning outcomes appropriate for an ABET-style assessment? And fourth, does the system serve as a de facto electronic portfolio in engineering education?

Index Terms – assessment, writing in the disciplines, peer review, web-delivered instructional software, cognitive growth

WHAT IS CPR™?

Developed by the Division of Molecular Sciences at UCLA, Calibrated Peer Review™ is a "learning environment" that creates an asynchronous, discipline-independent platform for creating, implementing, and evaluating writing assignments without significantly increasing the instructor's workload. Furthermore, the extensive data collected by the environment can be used to measure learning outcomes. In fact, the versatility of the software makes it well suited as a fine-grained tool for addressing ABET accreditation criteria. CPR™ is a component of a large-scale, National Science Foundation-supported project led by a team of educators at UCLA to develop a completely network-delivered Molecular Science Curriculum. The fully integrated CPR™ contains an assignment authoring tool for custom crafting of writing tasks and a library of edited assignments contributed by instructors from varied disciplines and institutions. Currently hosted at UCLA, the system draws from the model of manuscript submission and peer review in the conduct of scientific inquiry [1].

Four structured workspaces perform in tandem to create a series of activities that reflect modern pedagogical strategies for using writing in the learning process. Separate instructor and student interfaces provide reports on performance for individual assignments (see Figure 1).

FIGURE 1 A DYNAMIC, MULTI-STAGED LEARNING ENVIRONMENT









Task: Students are presented with a challenging writing task, with guiding questions to act as scaffolding for the demanding cognitive activities. Students compose using a word processor but upload the finished text as an HTML file. Graphics, equations, and scientific notation are supported.

Calibration: Students read through three "benchmark" samples and assign each a score based on a series of evaluation questions (a rubric). Students are then given a "reliability index" from 1 to 6, based on their demonstrated competency in these exercises. This segment mitigates the common objection to peer review in the undergraduate classroom: that the experience reduces to the blind leading the blind.

Peer Review: After becoming a "trained reader" – and being assigned a credibility weighting – students read and provide written feedback on three anonymous peer essays, using the same rubric (set of performance standards) as in the calibrations. Students also assign each essay a holistic score from 1 to 10.

Self-Assessment: In a final activity, students evaluate their own essay. As with calibration and peer review, students use the same rubric for the task.

1 Patricia A. Carlson, Rose-Hulman Institute of Technology, Humanities and Social Sciences, 5500 Wabash Avenue, Terre Haute, IN 47803 [email protected]
2 Frederick C. Berry, Rose-Hulman Institute of Technology, Head, Electrical and Computer Engineering, 5500 Wabash Avenue, Terre Haute, IN 47803 [email protected]


Having "trained" on benchmark samples and then applied their newly gained expertise in evaluating peer texts, students assess their own submission. Students are encouraged at this time to make comments that capture the evolving insights they have gained in the previous two segments. They are also invited to reflect on whether they have gained a deeper level of understanding of the assignment and its outcomes.

Our thesis is twofold. First, we argue that the multi-staged writing workspaces in a typical CPR™ session encourage students to develop higher-order reasoning processes such as discerning patterns of meaning, practicing processes of inquiry, and drawing inferences from observation. Such goals are easy to enumerate but more difficult to substantiate using the assessment methods currently applied to writing outcomes [2]. Thus, the second part of our thesis is that CPR™'s built-in data collection provides a range of in-situ observations from which learning outcomes can be measured through standard and specialized data-reduction methods. These learning outcomes can be represented as cognitive growth for individual students or aggregates of students over time, or they can be given as synoptic overviews resulting from statistical methods such as ANOVAs and MANOVAs.

Both the talk and the paper are based on extensive experience with the web-delivered learning environment. Our pragmatic emphasis is on addressing aspects of ABET's Engineering Criteria 2000. Section 3 ("Program Outcomes and Assessment") of EC 2000 lists eleven competencies to be measured. We focus on item "g," the "ability to communicate effectively" [subsequently referred to as EC3(g)].

Framing our comments at the outset, we briefly consider two central issues that will inform all aspects of our argument. First, most engineering programs applaud EC3(g)'s emphasis on writing but seem a bit vague about how to interpret and implement the criterion and how to measure outcomes. It must be admitted that "ability to communicate effectively" covers a wide range of possibilities. Driskill [3], in examining how EC3(g) is handled in available ABET accreditation plans, notes that "communicate effectively" may too easily be reduced to a cliché. She finds little evidence in the literature that ABET assessment plans incorporate either modern discourse theory or contemporary workplace expectations in their objectives for writing in engineering education. Methods of measuring anything beyond the pro forma aspects of communication (especially writing) are still emerging. Thus, it is far too tempting to revert to counting artifacts (e.g., number of reports written) or to evaluating surface features (e.g., grammar or adherence to specific genre features) rather than to develop a rich definition of "communication" and to measure "effectiveness" by a set of descriptors that reflect "authentic" conditions of engineering practice.

Our second premise – essentially that writing within the educational process should be treated as crystallized thought – falls naturally out of the call for a more sophisticated definition of EC3(g). Based on the ideas of noted learning theorists, the "writing as a way of learning" approach to pedagogy holds that placing ideas into language mediates higher-order intellectual activities essential to mature thinking [4]. Indeed, practitioners who have pursued the notion that writing is a heuristic for cognition report that their students are more actively engaged in learning, and they also find improvements in critical meta-cognitive abilities (thinking about one's own thinking). An annotated bibliography – prepared by Hansen, Scribner, and Asplund – indicates the richness of pedagogy associated with the "writing as thinking" approach to teaching in a range of content courses [5].

The remainder of our discussion considers how Calibrated Peer Review™ – as an example of modern information technology (IT) applied to education –
1. enables EC3(g) to be defined and enacted beyond the level of a cliché, and
2. enables outcome assessment at the behavioral level to measure gains that, in turn, can be fed back into curricular improvements.

CPR™ SUPPORTS STUDENT LEARNING

Developed as a digital tool to help improve science instruction, CPR™ has had fairly extensive exposure over the past five years [6]. Especially prevalent in the literature, instructors of chemistry describe CPR™'s usefulness for fielding assignments that stimulate critical thinking and conceptual reasoning in students. In essence, CPR™ has the characteristics of what an educational technologist would call the "cognitive apprenticeship model." This term is an extension of the theories of Vygotsky [7] and is used to explain the mentoring and developmental dynamics that occur between master and apprentice, and also between peers during collaboration. CPR™'s four structured workspaces function as mentoring stages that not only facilitate the accomplishment of a writing task but also help the learner to internalize profound strategies for later performance of the same or similar tasks, without the presence of the technology.

Because our intent is to explain CPR™ as an end-to-end IT application for instruction, we will start by defining central concepts. With the burgeoning interest in educational reform, terms such as "critical thinking," "cognitive growth," and "reflective judgment" have been used so extensively that their meanings have become a bit amorphous. Thus, we use two education paradigms – Bloom's Taxonomy and the Perry Scheme – whose wide-scale acceptance and longevity serve this discussion as a baseline from which to describe CPR™ as an interactive learning tool.


To review Bloom's schema (1956) briefly: the model posits six categories of behaviors; together these categories constitute levels of progressively more complex cognitive behaviors. Table I – adapted from [8] – represents the categories.

TABLE I
BRIEF DEFINITION OF BLOOM'S TAXONOMY OF COGNITIVE OBJECTIVES (LOWEST TO HIGHEST)

Knowledge: Remembering previously learned information
Comprehension: Grasping the meaning of information
Application: Applying knowledge to actual situations
Analysis: Breaking down objects or ideas into simpler parts and seeing how the parts relate and are organized
Synthesis: Rearranging component ideas into a new whole
Evaluation: Making judgments based on internal evidence or external criteria

A second explanatory model recurrent in today's engineering education literature – the schema codified by William Perry – focuses on a student's attainment of mature decision making and ethical sensibilities [9]. Essentially an epistemological model, the Perry Scheme traces the development of attitudes toward learning and toward the origin of knowledge through nine stages in which the student progressively moves from depending on external, "teacher-centered" authority to a more self-assured ability (a) to reconcile multiple perspectives, (b) to tolerate ambiguity, and (c) to reflect on the process of knowing (meta-cognition). For convenience, these nine positions are frequently grouped into four layered categories, as in Table II, adapted from [10].

TABLE II
ABBREVIATED PERRY MODEL OF STUDENT GROWTH (LOWEST TO HIGHEST)

Duality: Students concentrate on facts and have a low tolerance for ambiguity. Belief systems are characterized by right vs. wrong and a reliance on the teacher as authority.
Multiplicity (or Complex Duality): Students see that a single, unified framework for learning and knowledge is not always available and that multiple versions of the "truth" may exist. However, the locus of authority is still externally defined.
Contextual Relativism: Students believe that "the correct answer" is dependent upon a number of circumstances and that the student herself is a legitimate component in the construction of knowledge.
Committed within Relativism: Students demonstrate consolidated attitudes toward positioning the self within the context of learning. Unlike the previous position, the student now asserts an "identity" within the construction of knowledge.

Well-coordinated CPR™ sessions comprise learning activities that accommodate both the cognitive processing emphasized by Bloom's Taxonomy and the affective growth outlined in Perry's Scheme. We emphasize that a CPR™ session contains two very distinct types of instructional activities: (1) construction of a written product to fit a fully specified rhetorical situation and (2) participation in a collaborative, evaluative exercise that culminates in self-reflection. Figure 2 guides the analysis of how this computer-mediated environment instantiates both dimensions of learning: cognitive development (Bloom) and reflective judgment / maturation (Perry).

FIGURE 2 CONCEPTUAL VIEW OF THE LEVELS AND LEARNING PARADIGMS INSTANTIATED BY CPR™

Within the "writing workspace" – Levels 0-2 on the diagram – an instructor has five integrated sub-components with which to build an authentic and challenging task (a schematic sketch of this structure follows the list).
• Writing Prompt – a concise statement of the assignment, giving a full definition of the rhetorical situation, including audience, purpose, and indication of genre / form.
• Assignment Goals – gives an overview of the assignment; sets the context; links the process of inquiry to larger learning objectives of the course.
• Source Materials – links to URLs or to files providing rich data sets or supplemental information from which students construct their new knowledge artifact. (A URL link could be to web-delivered courseware used in conjunction with the course. Additionally, legally available journal articles and / or databases or simulations make the writing workspace extremely varied and flexible.)
• Student Instructions – operational or logistical guidance.
• Guiding Questions – a set of "heuristics" that foster activities involving Bloom's analysis, synthesis, and evaluation.
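To make the assignment structure concrete, the minimal Python sketch below models the five sub-components as a simple data record. It is purely illustrative: the class name, field names, and example values are our own inventions and do not reflect CPR™'s actual internal data model or authoring interface.

```python
# Illustrative sketch only: names and fields are hypothetical and do not
# reflect CPR(TM)'s actual internal representation or authoring API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class WritingTask:
    """The five sub-components an instructor assembles in the writing workspace."""
    writing_prompt: str             # concise statement of the rhetorical situation
    assignment_goals: str           # context; links to larger course learning objectives
    source_materials: List[str] = field(default_factory=list)   # URLs or file references
    student_instructions: str = ""  # operational / logistical guidance
    guiding_questions: List[str] = field(default_factory=list)  # heuristics spanning Bloom's levels


# A hypothetical assignment definition:
task = WritingTask(
    writing_prompt="Write a 500-word memo recommending a battery chemistry "
                   "for a solar-powered sensor, addressed to a project manager.",
    assignment_goals="Connects energy-storage concepts to engineering trade-off analysis.",
    source_materials=["https://example.edu/battery-data"],
    student_instructions="Submit the finished text as an HTML file.",
    guiding_questions=[
        "What criteria did you use to compare the alternatives? (analysis)",
        "How do the data support your recommendation? (evaluation)",
    ],
)
```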

The written product need not be lengthy; however, the mental manipulations required for the text's production should engage the student in (a) mature inquiry as a precursor to knowledge, (b) reflective judgment as the foundation for decision making, and (c) meta-cognition as the initiator of self-confidence. Admittedly, authoring such assignments requires talent and time. However, CPR™'s implicit process model aids the instructor to see how the parts fit together and to identify opportunities for student engagement in genuine and contextualized learning experiences.

Levels 3-5 (calibration and peer review) offer two guided-inductive workspaces. The activities in each help students move beyond creating a knowledge artifact and toward an understanding of why some written results are better than others. Having created her own artifact in Levels 0-2, the student now carefully examines six different responses to the same assignment. (The first three have been selected or authored by the instructor to serve as benchmarks; the second three are randomly selected from the contributions of fellow students.) The learning expectations laid out in the prompt and the performance standards from Levels 0 and 1 are now consolidated and carried forward to a problem-space where the student learns by evaluating the work of others.

As indicated by Bloom, justifying an evaluation takes students to a very demanding level of intellectual activity. Several studies of peer review activities in engineering classrooms indicate (a) improvements in writing skills, (b) increased acceptance of responsibility for one's own learning, and (c) more mature behaviors indicative of becoming a member of an active learning / working community [11], [12]. But good evaluative capabilities (as indicated by solid critique commentary) – like good thinking (as represented by sound writing) – are slow-blooming skills that require much intentional nurturing. Merely providing the opportunity for students to review one another's materials – even with the added focus of a rubric – will not automatically foster growth. As instructors, we all know that moving students from novice to expert behavior is not a linear progression, and teaching is not as simple as providing instruction for each component competency and then asking students to put all the parts together.

All activities in Levels 3-5 are "interactive," although each is interactive in a different fashion. The "calibration" and "peer review" iterations work through a set of instructor-authored questions that focus student attention on specific performance aspects of the artifact being judged. (Ideally, these questions reflect the gamut of Bloom's Taxonomy, from content manipulation to concept formation.) Calibration (Levels 3 and 4) is basically a tutorial: students answer questions, are given feedback, and receive a performance rating at the end of the session. Having gained a level of proficiency in calibration, the student moves on to answer the same set of questions about three artifacts submitted by classmates. A feedback box enables peers to comment on their evaluations or make suggestions for improvement. This segment also gives feedback: at the end of the session, students are given a report that indicates how their ratings of classmates' texts were similar to or different from the ratings of the same texts by other members of the class.

Level 6, self-assessment, broadens a student's perspective through self-reflection. Having looked at three benchmarks and critiqued three peer-submitted texts, the student returns to her own artifact with new awareness. Self-review prods meta-cognition, a capacity necessary for performance at the top levels of either the Bloom or the Perry model of student intellectual maturation. Kathleen Yancey's [13] cogent remarks best describe how reflection reinforces learning:

    When we reflect, we call upon the cognitive, the affective, the intuitive, putting these into play with each other to help us understand how something completed looks later, how it compares with what has come before, how it meets stated or implicit criteria, our own, those of others.

Because of the relative newness of the system, few substantive empirical studies of how CPR™ enhances learning have been published. Yet, one can argue from a theoretical perspective that the system should achieve outcomes as dramatic as other guided-inductive learning environments, where IT partners with the student and gently guides the fledgling learner through complex, multidimensional intellectual activities and teaches strategies for problem-solving that endure, even in the absence of computer mediation [14].

CPR™ AND LEARNING OUTCOMES

In our opening statements, we claimed that the multi-staged workspaces in a typical CPR™ session encourage students to develop mature habits of mind. We next elaborated on this notion of improved learning by using the theoretical frameworks of Bloom's Taxonomy and the Perry Scheme. We now consider how CPR™'s built-in data collection may be used for programmatic or course-specific assessment.

Each of the four structured workspaces returns data that indicate various aspects of student performance. Descriptive analyses and interpretive inferences can be drawn from these embedded indicators. For example, as shown in Figure 3, the calibration sequence tracks four specific indicators of student performance. The results of the calibration exercise are reported in five numbers, as indicated in Figure 3. (Note that "Score" is not a separate measure; rather, it is a composite result derived by CPR™'s algorithms and used as an element in determining the student's overall "grade" for performance in all four workspaces.) Both the "%Style" and the "%Content" columns indicate the percentage of questions about the calibration texts for which the student's answer matched the "answer" indicated by the instructor.


Assignment: The guided-inductive instructions serve as performance requirements that anticipate the standards set forth in the rubric.
Rubric: Consists of a set of custom-designed questions that guide students in their evaluation of the 3 sample responses.

Student   %Style   %Content   Avg Dev   Score   RCI
SMITH     55.55    77.78      1.33      1.67    3
JONES     88.89    100.00     2.00      10.00   5

FIGURE 3: INDICATORS TRACKED IN THE CALIBRATION WORKSPACE

Each benchmark text is also given a holistic rating from 1 to 10 by the assignment author. The "Average Deviation" (Avg Dev) number indicates how far away from that rating standard the student was across all three of the calibration texts. The "Score" is the number of points (a portion of the potential total of 100) the student received for the calibration section of the session. (The instructor sets the point distribution among text, calibration, peer review, and self-assessment.) The "RCI" (Reviewer Competency Index) – an integer from 1 to 6 – indicates how well the student "trained" during the calibration. This number is used later in the session – especially in the peer review workspace – to determine a "weighting" for this student's rating of the three peer texts she reviews. (The algorithms embedded in CPR™ are beyond the scope of this paper. Suffice it to say that the calculations are very robust and that the instructor can set the level of tolerance for many of the indicators.)

In a well-crafted assignment, the calibration questions contained in the rubric are indicative of the learning objectives implicit in the assignment, which – in turn – have been indicated in the guiding questions of the writing prompt. Thus, in performing the calibration, the student demonstrates the degree to which she can enact the knowledge learned in the assignment. Aggregating the scores for the entire class (or multiple sections of classes) (a) provides insight into the learning outcomes for groups and sub-groups, (b) helps the instructor adjust the assignment for later use, and (c) aids in modifying upcoming activities. At a minimum, the data serve as a snapshot of how well the students understood the bundled concepts presented as "Style" and as "Content" questions.

A simple case example demonstrates how the data may be used as an assessment indicator.

A strong negative correlation between either "%Style" or "%Content" (or their average) and "Avg Dev" indicates that the activity has taught the student well: the higher the percentage of questions the student answers correctly in "Style" or in "Content," the better she should be able to apply the rubric and the lower her deviation from the instructor's holistic ratings should be. In other words, if learning is taking place, as one number increases the other should decrease.

For measures of interactions, consider that both the peer review workspace and the self-assessment workspace also return data. More complex statistical manipulations may provide deeper interpretation of the learning outcomes. For example, multivariate analysis of variance (MANOVA) potentially gives insight into patterns that would not be readily apparent if only the traditional paraphernalia of the exercise (texts, peer-review sheets, and self-assessment reflective statements) were collected and evaluated – after the fact – using more standard means of assessment. Additionally, as noted in the literature, post hoc assessment accomplished by drawing inferences from the final product (the written text) is both labor-intensive and open to the criticism of subjectivity [15].
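As a concrete illustration of this simple case, the sketch below computes that correlation from calibration results exported to a spreadsheet. It is a minimal sketch under stated assumptions: the file name and column labels are hypothetical stand-ins mirroring Figure 3, not an export format guaranteed by CPR™.

```python
# Minimal sketch, assuming calibration results have been exported to a CSV
# file; the file name and column labels are hypothetical, mirroring Figure 3.
import pandas as pd

df = pd.read_csv("cpr_calibration_export.csv")  # columns: Student, Style, Content, AvgDev, Score, RCI

# Average the two rubric-accuracy percentages for each student.
df["Accuracy"] = df[["Style", "Content"]].mean(axis=1)

# Pearson correlation between rubric accuracy and average deviation from the
# instructor's holistic ratings of the three calibration texts. A strongly
# negative value is consistent with the claim that students who answer the
# rubric questions well also rate the benchmarks close to the standard.
r = df["Accuracy"].corr(df["AvgDev"])
print(f"Correlation between rubric accuracy and average deviation: {r:.2f}")
```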

IS CPR™ AN ELECTRONIC PORTFOLIO?

The functionality described above would suggest that CPR™ is – de facto – an electronic portfolio. Certainly, the basic characteristics of (a) student account structures, (b) storage of artifacts, (c) course and academic-career management tools, and (d) evaluative information would permit a portfolio-like implementation. The system administrator creates an account for an individual student, and the student's name is added to each course using CPR™; files tagged as coming from specified courses are maintained in the student's record. Over the student's learning career, a collection of CPR™ results builds up – potentially from freshman to senior courses. Results exported to a spreadsheet become a "database" that may be addressed in a number of different ways, providing multiple "views" on an individual or a corpus of students, assignments, or clusters of assignments. Extracting a meaningful meta-analysis from a student's multi-year records is best accomplished when programmatic educational objectives, outcomes, and outcome indicators have been fully articulated (including a feedback loop), down to the course level, as shown in Figure 4.

However, despite the utility of CPR™ as a versatile learning tool, the system has some failings as an electronic portfolio. At its origin, a portfolio was "used by students and practitioners of the visual arts to document professional achievement" [16]. Thus, the collection represented a showcase of exemplars meant to capture the range and depth of an individual's accomplishments. Creating a portfolio has a significant motivational dimension: the student can see the artifacts of completed courses as evidence of professional behaviors, as a vehicle for establishing a more mature identity, and as a consolidation of what might otherwise remain (from the student's perspective) a disparate collection of learning episodes.


In many electronic-portfolio scenarios, the student (frequently in consultation with a faculty mentor) makes a selection of artifacts and provides reflective statements on how each item maps onto stated programmatic goals or objectives. The clear advantages are pride of ownership, participatory career management, and the inculcation of a professional identity.

Institutional: Program Goals and Program Outcomes
Strategic: Outcome Indicators and Performance Targets
Tactical: Outcome Elements and Outcome Attributes
CPR™ Experiences

FIGURE 4 EXAMPLE OF PROGRAMMATIC ARTICULATION

It is fair to say that portfolio assessments (electronic or otherwise) as now used in engineering education are deductive assessment tools. The student is given a set of educational goals and is asked to use her judgment in selecting and explaining why certain artifacts validate learning across one or more categories. CPR™ – on the other hand – acts as an inductive assessment tool. The system captures specific observations during learning episodes, and the evaluation accrues from feature analyses of intellectual artifacts and learning activities. Patterns emerge in these data, from which general conclusions about performance can be advanced. This distinction between deductive and inductive should not create the impression that CPR™ treats the learning process as atomistic or excludes the learner from formulating a consolidated view of instructional episodes.

To conclude, we turn to studies of IT in the workplace. Shoshana Zuboff's groundbreaking book, In the Age of the Smart Machine [17], put the dark side of computer-mediated human activity into an idiom difficult for modern-day executives – or educators – to ignore. She warns against simple automation because it strips away the meaning of work for people. Instead, she compellingly argues that IT's ability to "informate" – that is, to collect and interpret data on its own usage – can provide humans with a means (a) to self-manage behavior, (b) to perceive and value their contribution to the total enterprise, and (c) to make informed judgments about their own performance. In our view, it is from this powerful ability to "informate" that CPR™ draws its significance both as a learning tool and as an assessment tool.

ACKNOWLEDGMENT

Patricia Carlson's participation in this research project was funded by the National Science Foundation, Grant No. CCLI-9980867. We express our appreciation for this support.

REFERENCES

[1] Chapman, O. L. and M. A. Fiore, "Calibrated Peer Review™," Journal of Interactive Instruction Development, Winter 2000, pp. 11-15.
[2] Swarts, J. and L. Odell, "Rethinking the Evaluation of Writing in Engineering Courses," Proceedings, ASEE/IEEE Frontiers in Education Conference, 2001, pp. TCA-25 – 30.
[3] Driskill, L., "Linking Industry Best Practices and EC3(g) Assessment in Engineering Communication," Proceedings, American Society for Engineering Education Conference, 2000. Available: http://www.ruf.rice.edu/~engicomm/public/Driskill.html
[4] Emig, J., "Writing as a Mode of Learning," College Composition and Communication, Vol. 28, 1977, pp. 122-128.
[5] Hansen, K., R. T. Scribner, and E. Asplund, "Annotated Bibliography: Using Writing to Enhance Learning," Brigham Young University. Available: http://saugus.byu.edu/writing/bibliography.htm
[6] Robinson, R., "An Application to Increase Student Reading & Writing Skills," The American Biology Teacher, Vol. 63, No. 7, September 2001, pp. 474-480.
[7] Vygotsky, L., Thought and Language, Eugenia Hanfmann and Gertrude Vakar (Trans.), Cambridge, MA: MIT Press, 1962.
[8] Bloom, B., M. Englehart, E. Furst, W. Hill, and D. Krathwohl, Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain, New York: Longmans-Green, 1956.
[9] Perry, W., "Cognitive and Ethical Growth: The Making of Meaning," in A. W. Chickering et al. (Eds.), The Modern American College, San Francisco, CA: Jossey-Bass, 1981.
[10] Irish, R., "Engineering Thinking: Using Benjamin Bloom and William Perry to Design Assignments," Language and Learning Across the Disciplines, Vol. 3, No. 2, pp. 83-102.
[11] Eschenbach, E. A., "Improving Technical Writing via Web-Based Peer Review of Final Reports," Proceedings, ASEE/IEEE Frontiers in Education Conference, 2001, pp. F3A1-5.
[12] Sharp, J. E., B. M. Olds, R. L. Miller, and M. A. Dyrud, "Four Effective Writing Strategies for Engineering Classes," Journal of Engineering Education, Vol. 88, No. 1, January 1999, pp. 53-57.
[13] Yancey, K. B., "Looking Back as We Look Forward: Historicizing Writing Assessment," College Composition and Communication, Vol. 50, No. 3, pp. 483-503.
[14] Carlson, P. and M. Slaven, "Hypertext Tools: The Next Frontier: Tools that Teach," Information Systems Management, Vol. 9, No. 2, Spring 1992, pp. 53-61.
[15] Allen, J., "The Role(s) of Assessment in Technical Communication: A Review of the Literature," Technical Communication Quarterly, Vol. 2, 1993, pp. 365-388.
[16] Heinricher, A. C., J. E. Miller, L. Schachterle, N. K. Kildahl, and V. Bluemel, "Undergraduate Learning Portfolios for Institutional Assessment," Journal of Engineering Education, Vol. 91, No. 2, April 2002, pp. 249-253.
[17] Zuboff, S., In the Age of the Smart Machine: The Future of Work and Power, New York: Basic Books, 1988.

