A Developmental Approach to Assessing Design Skills and Knowledge

Wendy C. Newstetter and Sabir Khan
EduTech Institute
College of Computing and School of Architecture
Georgia Institute of Technology
Atlanta, GA

Abstract - This paper reports on efforts to develop alternative assessment instruments for design learning. We discuss three tools we have developed to assess the development of design skills and knowledge in a freshman-level Introduction to Design course: portfolio assessment, concept maps, and a writing technique called "freewriting". Preliminary results of our experiments and future plans for research are presented.

Introduction

Engineering students rarely become effective designers from a single design experience. ABET concurs, stating: "Design cannot be taught in one course; it is an experience that must grow with the student's development." [1] To that end, Georgia Tech's EduTech Institute is spearheading an interdisciplinary research and curriculum development effort aimed at understanding how best to apprentice students, through a series of courses, to successful design practices. Key to this endeavor has been the development of a predisciplinary Introduction to Design course, which serves as a laboratory for innovative classroom and curricular strategies. Team-taught by a mechanical engineer, a computer scientist, an architect, and a cognitive scientist, the course is a first step in producing undergraduates who have sound design skills and knowledge. Over the term, students engage in individual and group design projects that introduce them to design as a complex, decision-based activity. We are not alone in developing innovative design-focused activities for freshman engineering students [2-4]; a number of schools recognize the need to have students grapple with real-world design projects soon after high school. What perhaps sets our initiative apart is our focus on discovering and developing appropriate tools for the assessment of design learning. This paper reports on three alternative forms of assessment, developed for the Introduction to Design course, that specifically target student learning of design skills and knowledge.

Assessment in design education

The role of assessment in any formal educational setting is to determine what a student can do at various points in a learning experience. (In this article, we distinguish assessment, which addresses individual student learning outcomes, from evaluation, which targets the implementation and outcomes of an educational program.) Traditionally, assessment is discussed in terms of formative and summative assessment. Formative assessment, occurring throughout the learning process, determines incremental outcomes; summative assessment, at the end, determines more holistic and integrative outcomes. Assessment data collected at these different stages are subsequently used for multiple purposes: (a) instructional management and monitoring, (b) program evaluation and accountability, and (c) selection and placement of students [5]. In effective assessment, the specific targets of assessment activity are derived from a conceptual model of the knowledge structures found in an expert. What is learned from assessment activity is then used to build a model of the student's evolving conceptual knowledge structure as it compares with the target structure.

From recent protocol studies (studies in which expert designers talk aloud as they solve a design problem; transcripts of the protocol are analyzed and coded to determine the strategies used by successful designers), we know that expert designers have learned to contend with ill-structured problems. Such problems are open-ended and unconstrained, and they require external sources of problem-related information. Moreover, when a solution is found, that solution is probably not universally acceptable because it is only one of many possibilities. Ill-structured problems, Reitman [6] suggests, require designers to continuously identify open constraints and close them. Yet not all design constraints can be identified, much less closed, prior to the problem-solving activity. Goel and Pirolli [7] further identify a dozen invariant features in the structure of design problem spaces that set them apart from nondesign problems. These include:

• problem structuring
• decomposition into modules
• distinct problem-solving phases
• reverse transformation of an unsolved problem into one with a solution
• incremental artifact development
• a limited-commitment control strategy
• making and propagating commitments
• stopping and evaluation rules
• memory retrieval
• construction and manipulation of models
• abstraction hierarchies
• use of artificial symbol systems

Because design problem goals and states are underspecified, structuring the problem space is a foundational activity. It is the first move toward setting constraints, drawing on personal knowledge and experience and determining how the artifact might be used. Simon [8] stated that what the solver needs to achieve this is the ability to specify the information that is germane to the solution. This information is generally stored externally, which means it must be organized and structured in a manner appropriate to the particular problem to be solved. To accomplish this, the solver must understand the conceptual knowledge of the components of the problem and be able to use those components in organizing the solution. Design activity tends to occur in distinct phases: initial design, refinement, and detail design. The freedom to innovate or change decreases as the designer moves through these phases. Various forms of representation, from text to graphics to algorithms and formulas, help the designer grapple with the design problem space. These representations help form rich mental models of the problem solution, models built on hierarchies of abstraction that communicate possible situations through symbol systems of various kinds.

It is obvious from these multiple characterizations of design spaces that the knowledge and skills students need to navigate them are not reducible to a set of testable facts or prescribed routines. Rather, design activity is dynamic and interactively responsive to the evolving problem structuring, constraint setting, and problem understanding. Design expertise consists of process knowledge, domain knowledge, and application and product evaluation procedures, and these three forms of knowledge are applied concurrently as the designer engages with a problem. It is therefore a daunting task to assess student learning outcomes in design-focused classrooms. Not only are many of these activities invisible to an observer, they are also not reducible to, nor easily captured through, multiple choice questions or even essays. Design expertise arises from procedural know-how as well as conceptual knowledge. Additionally, the acquisition of design expertise, we believe, follows a developmental pattern: students can be expected to develop expertise in stages that build on each other. In one sense, design learning is cumulative development. As students experience more design opportunities, their skills in and understanding of design can be expected to change and grow. Various capacities emerge from engaging with design problems that allow the student to participate more fully in design experiences of greater complexity.

Given this developmental perspective, assessment tools should be capable of capturing the learning trajectory that students, both as individuals and as a group, go through. On the individual level, we can observe the stages or advances a student is making; on the group level, we can begin to observe more generic developmental trends in design learning that can inform instructional strategies. The challenge is therefore twofold: to develop assessment instruments that 1) reveal what design expertise students have acquired and 2) capture the developmental thrust of this acquisition process. In our predisciplinary design course for freshman engineers and computer scientists, we have developed and are currently testing assessment instruments that target general design understanding. We are experimenting with three types: 1) portfolios, 2) freewriting, and 3) concept mapping. At present we have the most to say about portfolios, as we have experimented with and systematically refined their use as an assessment tool over three quarters. However, we discuss all three, offer the rationale for their development, and describe how we are implementing them in the context of this class.

Portfolios

In our predisciplinary Introduction to Design course for freshman engineers and computer scientists, we have been experimenting with portfolio assessment over three terms. Although portfolio development and assessment have been commonplace in art schools for years, they have only recently received much attention in other domains. Since 1988, for example, the Vermont Department of Education has been developing an innovative statewide performance assessment program. Although the program has several elements, it is best known nationally for its use of student portfolios in mathematics and writing in grades 4 and 8. The program, implemented statewide since the 1991-92 school year, was the nation's first effort to make portfolio assessment the cornerstone of an ongoing statewide assessment and has accordingly received widespread attention across the nation.

The appeal of portfolios arises from their easy integration into the life of the classroom. Unlike many forms of assessment, the contents of a portfolio are products of classroom instruction, not add-ons. Instead of a one-off portrait of a student on a particular day, they provide a fairer, more equitable representation of student work over time. Well-designed portfolios contain student work on key instructional tasks, meaning they represent student accomplishment on important curricular goals. They encourage students and teachers to reflect on progress, and they provide a guide for altering instructional strategies. As products of significant instructional activity, portfolios presumably reflect contextualized learning and complex thinking skills, not low-level cognitive activity. A typical portfolio contains two things: 1) student work, both drafts and finished products, over a period of time, and 2) three or four of the student's best polished pieces. The motivation for this arrangement is that it affords both 1) a formative record of a student's development over time and 2) a summative record of the student's best accomplishments and the criteria they may have developed in arriving at their choices.

Since we understand general design skills as developmental and design learning as contextualized in the projects undertaken by the students, portfolios seemed the obvious choice for our assessment purposes. Good portfolio assessment, however, is design in its own right. It requires continuous refinement and reflection on several key questions:

1) What is the assessment purpose?
2) What tasks should be included in the portfolio collection?
3) What standards and criteria should be applied?
4) How will consistency in scoring and judgment be assured?

Our progress in answering these questions has been evolutionary. The contents of the portfolios, the objects for student reflection, and the means of evaluation have changed each term as we have developed a better understanding of what we want to accomplish, both in the course and in our assessment of student learning outcomes. In a very real sense, we have been designers of the assessment process, iteratively defining and refining constraints and optimizing the expected outcome.

We have closely linked classroom activities with the assessment activities. As an example, the first project students undertake is to reverse engineer a computer disk. Each student develops a series of specifications for the dissected artifact over the course of three weeks. As the students participate in classroom discussions and activities, the specs evolve into richer and richer descriptions of a disk at various levels of abstraction. In their first portfolio assignment (we give a portfolio assignment for each design cycle), we ask students to include all iterations of the design specs, a final spec with a detailed explanation of the artifact, and an essay on how they moved from one specification to the next: a kind of diary of their journey from simple observation of structure and parts to high-level integration of function, behavior, and structure.

The evaluation criteria used to rate the portfolios have evolved over three quarters, from one rater's internalized judgment of an effective design sequence to a set of six criteria against which the students are rated on a five-point scale from 0 to 2. The six criteria are: 1) design of the portfolio as an artifact of communication and persuasion, 2) evidence of problem structuring, 3) evidence of problem decomposition, 4) evidence of incremental development of the design solution, 5) evidence of constraint setting, and 6) evidence of different levels of abstraction and various types of modeling/representation of the problem space. These criteria are derived from descriptions of expert design activity and instantiate the types of behaviors we want the students, as designers, to engage in.

Currently we are investigating rater reliability across these six criteria. The two-person instructional team both evaluate the portfolios using the rating scheme and then compare scores. Categories where discrepancies have occurred, such as constraint setting and modeling, are being reviewed so as to articulate the attributes that constitute each rating on the scale; a simple agreement check of the kind sketched below can flag which categories need such review. At the time of this writing, significant consistency is being achieved across the scales. The next step is to expand the number of raters so as to continue to validate the scoring scheme.
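The following sketch illustrates such an agreement check. It is our illustrative reconstruction, not the instrument itself: the score data, the criterion labels, and the 80% review threshold are hypothetical assumptions.

```python
# Illustrative sketch (hypothetical data): flag rating criteria where two
# raters' portfolio scores on the 0-2 scale frequently disagree.

CRITERIA = [
    "communication/persuasion",
    "problem structuring",
    "problem decomposition",
    "incremental development",
    "constraint setting",
    "abstraction/modeling",
]

# scores[rater][portfolio] is a list of six ratings, one per criterion
scores = {
    "rater_a": {"portfolio_1": [2, 1, 2, 1, 0, 1], "portfolio_2": [1, 2, 1, 2, 1, 0]},
    "rater_b": {"portfolio_1": [2, 1, 2, 1, 1, 2], "portfolio_2": [1, 2, 1, 2, 0, 1]},
}

def agreement_by_criterion(scores: dict) -> dict:
    """Exact-agreement rate between the two raters for each criterion."""
    portfolios = list(scores["rater_a"])
    return {
        criterion: sum(
            scores["rater_a"][p][i] == scores["rater_b"][p][i] for p in portfolios
        ) / len(portfolios)
        for i, criterion in enumerate(CRITERIA)
    }

for criterion, rate in agreement_by_criterion(scores).items():
    flag = "  <- articulate scale attributes" if rate < 0.8 else ""
    print(f"{criterion}: {rate:.0%}{flag}")
```

In the hypothetical data above, constraint setting and modeling are the two criteria flagged for review, mirroring the discrepancies reported in the text.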

Concept Maps

A concept map is a physical, structural representation consisting of nodes and labeled lines. The nodes represent important terms and concepts, and the lines denote relationships between the concepts. Concept maps purport to represent aspects of a student's propositional knowledge in a subject domain. They help "tap into a learner's cognitive structure and externalize, for both the learner and the teacher, what the learner already knows" [9]. The cognitive theory that underlies concept mapping suggests that domain experts develop hierarchical memory structures representing the components of, and relationships between, central and subordinate concepts [10]. Association theory [11] likewise suggests that cognitive structures entail sets of concepts and their relationships, which form "semantic networks". Networks are composed of concepts linked directionally by labeled arrows to produce propositions. In contrast to Ausubel's maps, however, these networks do not necessarily exhibit hierarchical structure.

The applicability of concept mapping as an assessment tool is obvious: it is a way to elicit student knowledge structures in a given domain, in our case design [12]. At present, on the first day of class, we ask students to list all the words they associate with the concept design. They are then asked to map those words into a structure that shows links and relationships. This initial map gives us a baseline picture of how students understand the activity of design. At the conclusion of the course, we repeat this procedure and compare the baseline maps to the end-of-term maps, both individually and across the class. Our intention here is to take a broad rather than an incremental look at the evolving conceptual structure that students have of the design process; in this sense we are using the maps summatively. It would also be possible to collect versions of the map throughout the course for formative assessment, which we may do in future courses.

At this time we are developing a scoring system with which the concept maps can be evaluated accurately and consistently. Using the scoring methods developed by Novak and Gowin [13], we are looking at the maps from the perspective of 1) propositions, 2) hierarchy, 3) cross-links, and 4) examples. As space does not permit a more in-depth explanation of these parameters, please consult the above-referenced authors for a detailed description of their scoring technique.
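To make these four parameters concrete, the sketch below stores a map as labeled propositions and scores it with the weights commonly attributed to Novak and Gowin's scheme (1 point per valid proposition, 5 per level of hierarchy, 10 per cross-link, 1 per example). The example maps are invented, and the sketch assumes a human rater has already judged which elements are valid.

```python
# Minimal sketch of Novak-and-Gowin-style concept map scoring.
# Weights and example maps are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ConceptMap:
    propositions: list    # (concept, link label, concept) triples judged valid
    hierarchy_levels: int # depth of valid hierarchy below the root concept
    cross_links: int      # valid links connecting separate branches of the map
    examples: list        # concrete instances attached to concepts

def score(cmap: ConceptMap) -> int:
    """Total map score: 1/proposition + 5/level + 10/cross-link + 1/example."""
    return (len(cmap.propositions)
            + 5 * cmap.hierarchy_levels
            + 10 * cmap.cross_links
            + len(cmap.examples))

baseline = ConceptMap(
    propositions=[("design", "involves", "drawing"),
                  ("design", "produces", "products")],
    hierarchy_levels=1, cross_links=0, examples=["a car"],
)
end_of_term = ConceptMap(
    propositions=[("design", "requires", "problem structuring"),
                  ("problem structuring", "sets", "constraints"),
                  ("design", "proceeds in", "phases"),
                  ("phases", "include", "refinement")],
    hierarchy_levels=3, cross_links=1, examples=["disk drive specs"],
)
print(score(baseline), "->", score(end_of_term))  # 8 -> 30
```

Comparing a student's baseline score with the end-of-term score gives one coarse, summative indicator of the growth in conceptual structure we hope to observe.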

Freewriting

Freewriting [14], also known as "looping", is a technique developed in composition studies for helping writers, both novices and experts. Writers of all kinds can experience a block that stymies their ability to generate ideas or move forward in a writing task, and freewriting was developed to help writers overcome that problem. When orchestrated in a classroom, students are instructed to write non-stop, with no gaps, for five to ten minutes. Even if they have nothing to say, they are to continue writing the words "I have nothing to say" or whatever comes to mind. The exercise in a sense captures the stream of consciousness on paper. At the end of the allotted time, students are instructed to summarize in one sentence what they have written so far. This sentence becomes the starting point of the next freewriting segment. This cycle of non-stop writing and summarization can continue for as long or as short as the writer or instructor deems necessary to move forward with the writing project.

We are using this technique to capture the evolving design strategies used by our students. We give the students a design problem which they work on over the course of two or more weeks in class. They do freewriting on the problem for a few iterations, and then we ask students to share what they have done. We collect these and look for evidence of the strategies they are deploying as they move forward with the design. We want to see what they do on first engagement with a problem, in the middle, and near the end; we therefore intend to have them freewrite at different times in a design cycle so that we can look for evidence of general design skills and knowledge. Our future plans include developing coding schemes that indicate which skills students are using or avoiding as they tackle design problems (a rough sketch of such a scheme follows). This technique, we believe, can yield data similar to that collected in protocol studies, but with much less difficulty.
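A coding scheme of the kind we envision might begin, very roughly, as a scan for surface markers of design strategies in each freewriting segment. In the sketch below, the strategy categories echo the expert-design features discussed earlier, but the marker phrases and the sample segment are invented for illustration; a usable scheme would rely on trained human coders.

```python
# Rough, hypothetical sketch of a strategy-coding pass over a freewrite.
# Marker phrases are invented; they only illustrate the idea of tagging
# segments for evidence of design strategies.

import re

MARKERS = {
    "constraint setting": [r"\bmust\b", r"\bcannot\b", r"\blimit", r"\brequire"],
    "decomposition": [r"\bpart(s)?\b", r"\bcomponent", r"\bbreak (it )?down"],
    "evaluation": [r"\btrade-?off", r"\bcompare", r"\bbetter\b", r"\bworse\b"],
}

def code_freewrite(text: str) -> dict:
    """Count marker hits per strategy category in one freewriting segment."""
    lowered = text.lower()
    return {
        category: sum(len(re.findall(pattern, lowered)) for pattern in patterns)
        for category, patterns in MARKERS.items()
    }

segment = ("The stand must hold the monitor but cannot block the vents. "
           "Maybe break it down into the base and the arm and compare materials.")
print(code_freewrite(segment))
# {'constraint setting': 2, 'decomposition': 1, 'evaluation': 1}
```

Applied to segments collected at the start, middle, and end of a design cycle, such counts could show when a given strategy first appears in a student's repertoire.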

Preliminary results

What we can report at the time of this writing is that we are getting a rich picture of the students' developing design understanding. The portfolios offer a window on a student's evolving understanding of the levels of abstraction of an artifact. We could see from the portfolio entries on the dissected computer disks that student knowledge of the design that went into the disks grew more complex through the iterations. The looping has revealed how students initially enter and negotiate an ill-structured problem space. It has also helped us understand student confusion over the level of abstraction at which they are tackling a problem. We found that many thought they were working at a functional level when they were really at the structural level, and most had difficulty tackling the design problems at the behavioral level. In fact, the most prevalent strategy was to leap from function to structure. We anticipate, at this writing, that the end-of-term concept maps will be much more expansive and much richer than those we collected at the start of term. This conjecture comes from the more sophisticated level of discourse occurring in the classroom as students critique their own and others' design processes and products. Perhaps most gratifying of all, in a recent set of reflection essays in which the students made connections between the class readings and design activities, three students commented that what the class was teaching them was not design per se but how to design their own design process. Such unprompted observations help us to evaluate the success of our teaching and curricular interventions, for they serve as evidence of the internal cognitive activities that we hope will occur.

References

1. Criteria for Accrediting Programs in Engineering in the United States, 1995-96 cycle. ABET.
2. Freeman, J. J. & Rositano, S. A freshman design & engineering tools course. Proceedings of the FIE '95 Conference, 1995.
3. Dally, J. W. & Zhang, G. M. A freshman engineering design course. Journal of Engineering Education, 82(2); April 1993: 83-91.
4. Dym, C. L. Teaching design to freshmen: Style and content. Journal of Engineering Education, 83(4); October 1994: 303-310.
5. Resnick, L. B. & Resnick, D. P. Assessing the thinking curriculum: New tools for educational reform. In B. R. Gifford & M. C. O'Connor (Eds.), New approaches to testing: Rethinking aptitude, achievement and assessment (pp. 37-76). New York: National Committee on Testing and Public Policy, 1990.
6. Reitman, W. Cognition and Thought. New York: Wiley, 1965.
7. Goel, V. & Pirolli, P. The structure of design spaces. Cognitive Science, 16; 1992: 395-429.
8. Simon, H. A. The structure of ill-structured problems. Artificial Intelligence, 4; 1973: 181-201.
9. Novak, J. D. & Gowin, D. B. Learning how to learn. New York: Cambridge University Press, 1984.
10. Ausubel, D. P. Educational psychology: A cognitive view. New York: Holt, Rinehart and Winston, 1968.
11. Deese, J. The structure of associations in language and thought. Baltimore: Johns Hopkins Press, 1965.
12. Jonassen, D. H., Beissner, K., & Yacci, M. Structural knowledge: Techniques for representing, conveying and acquiring structural knowledge. Hillsdale, NJ: Lawrence Erlbaum, 1993.
13. See 9.
14. Elbow, P. Writing with power: Techniques for mastering the writing process. Oxford: Oxford University Press, 1981.
