
Distributed Systems Design supported by Reflective Writing and CATs

Maria Feldgen and Osvaldo Clua
University of Buenos Aires, [email protected], [email protected]

Abstract - Reflective writing, in the form of three reflective writing tasks, has been used in our Distributed Systems Design course to develop and assess students' thinking and learning. Case studies with real-life scenarios are developed in a Problem-based Learning approach in which students work in teams to identify learning needs and iteratively develop a viable solution. The same case study was applied at two sister institutions with similar curricula and different policies. At the control-group institution, it was mandatory to follow a teacher-centered approach. At the experimental-group institution, students followed the CAT set "Assessing Skill in Problem Solving" to solve the case, writing their arguments and the evidence collected for their positions in argumentative essays. Essays were debated in class to reach consensus, and reflections were documented for reciprocal peer review. The outcomes of the essays and peer reviews were used as formative measures of student learning and as a vehicle for providing critical feedback to the students. In this paper, we describe the experience and present and analyze our data, comparing the students' outcomes from the two institutions over three terms. We also describe the classroom research we conducted and students' attitudes and difficulties.

Index Terms - Classroom assessment techniques, Distributed systems design, Formative assessment, Reflective writing.

INTRODUCTION

In a Distributed Systems Design course for advanced Informatics Engineering students, we have combined a Problem-based Learning (PBL) approach with a commitment to developing independent learners with professional communication skills. Our goal was to create an effective, supportive learning environment among students. Commitment to student-centered principles led us to assess students using formative assessment strategies as well as traditional summative assessment. In this paper, we report on our formative assessment tools based on CATs (Classroom Assessment Techniques), used as reflective writing activities for in-class debates, in our efforts to facilitate effective design experiences within a Community of Practice (CoP) [1]. We describe the experience and present and analyze our data, comparing the students' outcomes from two institutions over three terms. We also describe the classroom research we conducted and students' attitudes and difficulties.

SOFTWARE DESIGN COURSES

Traditionally, software design courses use a PBL model [2]. Students engage in complex, challenging design problems and work collaboratively toward their resolution in successive stages of development, as recommended by software engineering practice. Teams are organized as project teams; most of the work is homework following a work breakdown, with some in-class discussions.

The design of distributed systems calls for a different thinking paradigm. In general, the process of going from analysis to design in software development has always involved mapping or transforming conceptual models. Models used during analysis aim to describe real-world systems from a problem-domain perspective; the concepts found in an analysis model should relate directly to concepts in the real-world system. Models used for software design, on the other hand, involve an additional level of abstraction and serve a different purpose. These models describe software concepts such as objects, structures, and processes that relate only indirectly to concepts from the problem domain. The purpose of a design model is to provide a blueprint for implementation and a framework for the subsequent evolution of the system. Ideally, the conceptual distance between an analysis model and the corresponding design should be kept to a minimum to improve understandability, traceability, and maintainability.

For distributed systems, however, several fundamental problems tend to increase the conceptual distance and complicate the design process [3]. A distributed system is a complex system that must manifest a correct emergent behavior from a collection of loosely coupled components over a network, which is itself a complex system. The correctness of this emergent behavior, or even what the emergent behavior is, may not be obvious from the point of view of any single component in the system [4]. The behavior of a simple system is often easy to understand as the sum of the behaviors of its component parts; good engineering practice is to design components with well-defined and reliable behaviors for precisely this reason. As systems become more complex, this reductionist way of understanding them fails, resulting in behaviors that cannot feasibly be predicted from an understanding of the individual parts, or that were not expected by the system designer who assembled the parts, or both. Designing a system correctly means creating models that help the designers understand and evaluate both the system requirements and behavior. These modeling techniques are analysis (analytic models), simulation, and prototyping [5].
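The failure of reductionism can be made concrete with retries: each client of a shared service behaves sensibly on its own (send, wait, resend on timeout), yet together the clients can congest the server in a way no component's specification predicts. The following is our own minimal simulation sketch, not code from the course; every parameter is illustrative.

```python
import random

# Illustrative parameters; not taken from the course materials.
CAPACITY = 10   # requests the server can complete per tick
TIMEOUT = 3     # ticks a client waits before resending
CLIENTS = 60
TICKS = 40

random.seed(1)

# Each client is modeled only by the tick at which it will (re)send a request.
next_send = [random.randrange(TIMEOUT) for _ in range(CLIENTS)]
pending = []  # requests queued at the server, as client ids

for tick in range(TICKS):
    # Clients whose timers expired send (or impatiently resend) a request.
    for c in range(CLIENTS):
        if next_send[c] == tick:
            pending.append(c)
            next_send[c] = tick + TIMEOUT  # schedule a retry unless served first

    # The server drains at most CAPACITY requests per tick; the rest wait
    # and are duplicated by retries, which amplifies the offered load.
    served, pending = pending[:CAPACITY], pending[CAPACITY:]
    for c in served:
        next_send[c] = tick + random.randrange(8, 15)  # think time, then a new request

    print(f"tick {tick:2d}: queue={len(pending):3d} served={len(served):2d}")
```

With these made-up numbers the queue never drains: timeouts fire before requests are served, duplicate requests inflate the load, and the congestion is a property of the assembly, not of any single client or of the server alone.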



I. Community of Practice (CoP)


We were aware of the importance of creating an effective, supportive learning community around a complex system design problem. In this community, students have the opportunity to debate their designs. Odin states that "in an effective learning community, the instructional tasks are contextualized in authentic situations, and students are given opportunities to construct knowledge as they test their ideas on others and evaluate other perspectives" [6]. Communities of practice are specialized learning communities defined by the knowledge, not the task. That is one of the differences between a CoP and a project team: the goal of a project team is to accomplish a specific task, not to learn, share, and create knowledge. The domain of the CoP is the shared understanding of purpose and value that allows members to decide what is worth sharing, how to present their ideas, and which activities to pursue, and it includes complex and long-standing issues that require sustained learning [1][7].

II. The project

Both groups designed the system for the case using the Rational Unified Process (RUP) [10]. RUP provides a disciplined approach to assigning tasks and responsibilities within a development organization and is an iterative and incremental development process divided into phases: Analysis, Design, Code, Test, and Integration. For each phase, students documented the solution using UML (Unified Modeling Language) diagrams. The implemented solutions were tested in the lab. The project had two iterations. The first iteration was the development of the application's prototype and its API (Application Programming Interface) over TCP connections. The second iteration was the development of a middleware layer for the first iteration's prototype and API. Students were asked to complete three writing tasks based on the first three phases of the engineering design process; the fourth phase was the implementation task. The same work was done for the second iteration. A sketch of what a first-iteration prototype might look like follows below.

Students were novice designers. Rowland recommended the design of a tool, incorporating aspects of instructional design expertise, as a potential strategy for helping novices solve ill-structured problems in more expert-like ways [11]. These aspects were addressed at the beginning of the project by handing out guidelines, questionnaires, and assessment rubrics. The purposes of rubric-based assessment are to improve the reliability of scoring written assignments and oral presentations, to convey goals and performance expectations to students in an unambiguous way, to convey "point values" and relate them to performance goals, and to engage students in critical evaluation of their own performance [12]. Rubrics are also a guide to how to do the work.

In the first phase of each iteration, students must identify the external entities with which the system will interact, define the nature of these interactions at a high level, and understand the project requirements, key features, and main constraints. In the next phase, students must analyze the problem domain and establish a sound architectural foundation. Students were required to write reports with their conclusions and alternatives, using UML diagrams and supporting arguments, as individual homework and for in-class discussion with the project team. The project team (the class) had to reach consensus on a single shared architecture in a subsystems approach. Each student's contribution fits with the others to complete the task, fostering positive interdependence and building a cooperative environment (a projects office). Design decisions are made as teamwork. For each subsystem, teams were formed for the subsequent phases and the second iteration. We had to apply different PBL approaches because of the different institutional policies.
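The paper does not reproduce the students' code. As a rough, hypothetical sketch of what a first-iteration "prototype and API over TCP connections" could look like, here is a minimal request/response server using Python's standard library; the newline-delimited protocol, the STATUS command, and the address are our assumptions, not the course's actual API.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9090  # illustrative address, not from the paper

def handle(conn: socket.socket) -> None:
    """Serve newline-delimited 'STATUS <station>' requests on one connection."""
    with conn, conn.makefile("r") as reader:
        for line in reader:
            cmd, _, arg = line.strip().partition(" ")
            if cmd == "STATUS":
                # A real subsystem would query the work-group station here; we stub it.
                conn.sendall(f"OK {arg} idle\n".encode())
            else:
                conn.sendall(b"ERR unknown command\n")

def serve() -> None:
    # One thread per connection keeps the prototype simple for lab testing.
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A client would connect and send a line such as "STATUS press-1"; the second iteration's middleware would hide exactly this kind of socket plumbing behind a procedure-call style interface.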

CONTEXT OF OUR STUDY

Depending on their beliefs about learning, students possess qualitatively different motivational frameworks. These frameworks affect both students' responses to external feedback and their commitment to the self-regulation of learning. Students must be given the means and opportunities to work with evidence of their difficulties through formative assessment [8][9]. Formative assessment informs teachers and learners about the appropriate roles and influence of self- and peer assessment and feedback. It is complemented by group-work assessment and by the evaluation of tests and exams.

The same case study was applied in Distributed Systems courses at two sister institutions with similar curricula. The goal was to assess the reflective writing activities with CATs in a CoP approach, using formative assessment, from 2008 to 2010. The control group belongs to an anonymous sister institution and the experimental group to the School of Engineering at the University of Buenos Aires.

I. The case study

The case studies were problems from the industrial automation field. A case-study statement had a blend of concrete information, including facts, data, and observable phenomena such as videos, animations, prototypes, charts, and pictures, together with abstract concepts such as principles and theories. It had a decentralized nature, to introduce the concepts of decentralized control. For example: a car assembly plant where people, computers, robots, and other automated equipment work together to create the vehicles. Production requires a great deal of coordination and involves a complex mix of functions and operations. Related functions are located in close proximity and grouped into autonomous work groups. The work groups can be modeled as subsystems. The subsystems are themselves problems with no obvious decentralization.


II. Control Group

The institution did not assign teaching assistants to the Distributed Systems course, and the course was delivered only in the first semester of every year. The institution's policy establishes that classes are teacher-centered and rely primarily on lecture. Assessment and grading policies were seen as reinforcing a traditional reliance on summative assessment based on three types of measures (homework, mid-term, and final exams). The institution had no experience with inductive learning approaches or formative assessment. It is mandatory to follow the traditional teaching approach, even in the lab, and scheduled out-of-class time is not allowed.

The PBL approach here was to solve the case using project teams, as in a traditional software engineering subject. The design and its traditional software engineering documentation were homework and were graded, because we had to adhere to the school's grading norms (summative assessment) and place a grade on a grade sheet for each activity we did with the students, indicating what level of content knowledge a student had achieved in the subject. Time was scheduled in class for teamwork discussion, assistance, and tracking of the project phases, and we gave some bonus points for participation. Most of the class time was devoted to introducing students to distributed systems concepts, with specific experiments and in-class programming exercises highlighting emergent behavior and performance issues.

III. Experimental Group

At the FIUBA, the course was delivered every semester and we had two teaching assistants. Both formative and summative assessment were used. The CAT set "Assessing Skill in Problem Solving", as mandatory homework, supported the reflective writing activities [13]. Students had to follow the CAT elements and guidelines, documenting their decisions together with their conclusions (UML diagrams), and write their arguments and the evidence collected for their positions in argumentative essays (reports). Explicit strategies for developing arguments were introduced, using the basic tools for writing argumentative essays, at the beginning of the course, and out-of-class time was scheduled to help students with these reflective writing activities [14].

First, we used the Problem Recognition Tasks (PRT) CAT to assess how well students can recognize various problem types [13]. Students must describe the problem using UML use cases, recognizing and identifying the particular problem types and their characteristics. Then, we used the What's the Principle? (WP) CAT to assess students' ability to associate specific problems with the general principles used to solve them, i.e., how they use models and abstraction in our subject [13]. Students must decide what principles to apply in order to solve the problem, along with the alternatives.

Finally, we used the Documented Problem Solutions (DPS) CAT [13] to assess how well students solve problems and how well they understand and can describe their problem-solving methods and strategies. CATs use rubrics, and these outcome rubrics were used as formative assessment tools for feedback. Students earned points, not grades. Students had to bring their reports to in-class debates to reach consensus, and the outcome of each debate had to be a clear statement or conclusion about the discussed issues. The goal of the debates was to encourage critical thinking, personal expression, and tolerance of others' opinions. The debates increased students' awareness of how well they were communicating their learning, provided opportunities to gain insights from each other's experiences, and deepened understanding of the engineering design process. Students' reflections had to be written up for documentation and reciprocal peer review. Much more important than the classroom time was the work that students did on the project and the debates.

New levels of abstraction and key concepts about complexity were introduced only after the need to learn them had been established in the context of observed phenomena or unexpected system behaviors. Formative assessment provided the information needed to adjust teaching and learning while they were happening: when students needed more experimentation and when it was time to introduce new concepts. In this sense, formative assessment informed both teachers and students about student understanding at a point when timely adjustments could be made [9]. Students analyzed how and why system behavior might not be predicted from its components. They also analyzed how nonlinearity and nested complexities determined the observed system behavior [15-17]. Students carried out their own experimentation and simulations with our assistance.
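The paper does not say which analytic models the students experimented with. As one hedged illustration of nonlinearity, the mean response time of an M/M/1 queue, W = 1/(mu - lambda), grows explosively rather than proportionally as the offered load approaches a server's capacity; a few lines suffice to show it (the service rate is made up):

```python
# Mean response time of an M/M/1 queue: W = 1 / (mu - lambda), valid for lam < mu.
# Illustrative service rate; the course's actual models are not given in the paper.
MU = 100.0  # requests per second the server can handle

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = utilization * MU                 # offered load in requests per second
    w_ms = 1000.0 / (MU - lam)             # mean response time in milliseconds
    print(f"utilization {utilization:4.2f} -> mean response time {w_ms:7.1f} ms")
```

Going from 50% to 99% utilization multiplies the mean response time by fifty; no inspection of the client or the server in isolation reveals this behavior of the assembly.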

FINDINGS

We assumed that the planned exposure to the CATs experience in a CoP approach should make some sort of difference between the students from the two groups. Was it worthwhile? We spent a lot of out-of-class time in the office and labs with the students and had fun with their enthusiasm. Student populations were small at both institutions (Table I). We compare the outcomes of first semesters only, when the control-group course was delivered.

TABLE I
STUDENT POPULATIONS

Students' project performance was assessed in the two groups. The control group had four checkpoints, which were graded and for which feedback was given.


An iteration had two checkpoints: the first for the design activities and the second for the implementation and testing of the subsystem or system. Design and implementation call for different skills and background knowledge. The experimental group was assessed after each activity, earning points, with the rubrics and comments as feedback. The points count is equivalent to the checkpoint grade, in order to compare the two groups. In the following tables, all marks and points are normalized to a 10-point scale as used in Argentina, with 10 the best. In the Student's t-test, p is the probability of observing differences at least as large as those measured if the null hypothesis holds, i.e., if both groups belong to the same population. If p < 0.05, the null hypothesis is rejected and there is a difference between the control and the experimental group.
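As a sketch of how such a comparison can be computed, Student's two-sample t-test is available in SciPy. The score lists below are placeholders on the 10-point scale, not the study's data, which appear in Tables II-IV.

```python
from scipy import stats

# Placeholder scores; NOT the data from Tables II-IV.
control = [4.0, 5.5, 6.0, 5.0, 4.5, 6.5, 5.0]
experimental = [6.5, 7.0, 8.0, 7.5, 6.0, 8.5, 7.0]

# Student's t-test assumes equal variances (equal_var=True), as in the paper.
t_stat, p_value = stats.ttest_ind(control, experimental, equal_var=True)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the groups differ.")
else:
    print("No evidence the groups differ.")
```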

TABLE II
2008 RESULTS

TABLE III
2009 RESULTS

TABLE IV
2010 RESULTS

At the beginning of a semester, we conducted test score analyses that provided further insight into student achievement and background, using the same Background Knowledge Probe (BKP) CATs for both populations [13]. Both groups solved the BKP problems as novice designers, which was to be expected: students' previous design experience was applying specific concepts and models to solve specific problems with guidance. The exception was the 2010 experimental group, in which all the students had high scores in the BKPs. The BKP analysis found no significant differences at the beginning of the course, but the two groups evolved in different ways, as shown in the tables. The differences were also observed in the classroom and underscored the fact that achievement in design depends on other important factors, such as student attitudes and level of effort. We identified some of these factors.

I. Control group

We could not replace the students' belief that classroom activities are done for grades rather than for acquiring knowledge, and that grades are the corresponding result. Students misunderstood our grading system using rubrics and feedback comments, the guidelines, and our role as facilitators. The emphasis placed on grades encouraged cheating (not collaboration), restricted study to material likely to be included in the tests (lectures), and pushed students to conform to our (the instructors') views and opinions (our examples) [18]. Most of the students did not show up, or came late, to the in-class activities that were not graded, such as discussion or project follow-up sessions. Bonus points were not considered worthwhile given the effort the activities demanded. Reports were handed in incomplete at the late deadline. A few students handed in their reports by the due date and received feedback to improve their work for a better mark. Subsystems evolved as if they belonged to different systems. The feedback given through the rubrics and comments was taken into account only if grades were low; these students came to the discussion sessions asking for help, or left the course. As research states, a test at the end of a unit or teaching module, or of a system design as in our case, is pointless; it is too late to work with the results for formative purposes [8]. This fact is observed in the tables: the average design performance of the control group got worse in the second iteration, and many students showed novice designer attitudes again.

II. Experimental group


Reports and in-class activities, such as debates and project follow-up sessions, had a strict plan to follow, as in a real-world project. Argumentation was hard for our novice designers to master at the beginning. In the guidelines we gave them questionnaires to answer, prompting them to think beyond the stated information, as experts do. Both groups used the same guidelines, but in the experimental group the argued answers to these questionnaires had to be debated. In-class debates and reflective writing addressed different learning styles and showed students different ways of presenting the same information. The CATs guided our novices to seek clarification and understanding of the problem and to elaborate alternative solutions, instead of shifting rapidly from problem analysis to solution generation and failing to elaborate alternatives. We assisted them in the preparation for the debates and during the debates as facilitators and consultants. After the first debates, students felt comfortable with the CoP approach. Roles and responsibilities in the project teams changed depending on the achievement of the activities.


Our statement in the guidelines, "Individual homework must be strictly individual work. Teamwork and programming assignments will be done in groups of 2 students. Collaboration with other groups on design and programming assignments is encouraged at the level of ideas and is the goal of the debates. Feel free to ask each other questions, brainstorm on models and algorithms, or work together on a blackboard. You should not, however, copy the actual code for programming assignments, or copy the wording for written homework. This will be considered cheating and will be punished severely!", was taken literally. The blackboard was not enough. The first group (2008) created a project blog on the Internet to post everything they considered useful, to continue the debates and collaboration over the Internet, and to stay in touch with us. The blog administration rotated among the students every week. Teams in the following semesters used the same approach. All the information, reports, ideas, and program code was shared and used responsibly. Nobody cheated. Lectures changed into interactive classes where problems, models, concepts, and solutions were discussed. Experiments were conducted to verify in-class and out-of-class behavior. Students' enthusiasm and commitment are reflected in the table results.

CONCLUDING REMARKS

Having a small number of students in each semester, we had the opportunity to apply a student-centered approach exhaustively, to observe our students' learning process closely, and to give them personalized feedback or discuss with them during the brainstorming sessions at both institutions. However, students in the control group perceived the project team approach as a set of interdependent tasks focused on the project deliverables that we graded. In this context, the in-class activities for learning, sharing, and creating knowledge through discussions and interactions to solve the case were not perceived as an important or necessary step in a project team approach; thus, the aim of a project team got lost. In addition, revision of, and feedback on, ungraded reports handed in by the due date were not taken into account. Advanced students feared low grades. The grades, instead of working in the students' best interests, merely discouraged them, and some left the course. Other opportunities to improve were dismissed.

Students in the experimental groups grew intellectually and began to focus on the three skills essential to critical thinking: clarification and understanding, rational evaluation, and articulation. The community of practice was well represented by the course blog and the collaboration across teams. Debates, CATs, and reflective writing built the community of learning. All these students passed the tests and final exam with high grades.

ACKNOWLEDGMENT

This study was supported by grants I007 and I008 from the University of Buenos Aires, Argentina.

REFERENCES

[1] Wenger, E., McDermott, R., Snyder, W. M., Cultivating Communities of Practice: A Guide to Managing Knowledge. Harvard Business Press, 2002.
[2] Delaney, J. D., Mitchell, G. G., Delaney, S., "Software Engineering meets Problem-based Learning", The Engineers Journal, Vol. 57, No. 6, 2003.
[3] Clyde, S. W., "Design Issues for Modeling Behavior in Distributed Systems", ER'97 Workshop on Behavioral Modeling, 1997, http://osm7.cs.byu.edu/ER97/workshop4/sc.html
[4] Mogul, J. C., "Emergent (Mis)behavior vs. Complex Software Systems", EuroSys 2006, ACM, 2006, pp. 293-304.
[5] Kirshbaum, D., "Introduction to Complex Systems", 1998, http://www.calresco.org/intro.htm
[6] Odin, J. K., Teaching and Learning Activities in the Online Classroom: A Constructivist Perspective. New Jersey: Prentice Hall, 2002.
[7] Kayler, M., Weller, K., "Pedagogy, Self-Assessment, and Online Discussion Groups", Educational Technology & Society, Vol. 10, No. 1, 2007, pp. 136-147.
[8] Dweck, C. S., Self-theories: Their Role in Motivation, Personality, and Development. Psychology Press, 2000.
[9] Irons, A., Enhancing Learning through Formative Assessment and Feedback (Key Guides for Effective Teaching in Higher Education). Routledge, 2007.
[10] Kruchten, P., The Rational Unified Process: An Introduction, 3rd Edition. Addison-Wesley Object Technology Series, 2008.
[11] Rowland, G., "What do instructional designers actually do? An initial investigation of expert practice", Performance Improvement Quarterly, Vol. 5, No. 2, 1992, pp. 65-86.
[12] Stevens, D. D., Levi, A. J., Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback and Promote Student Learning. Stylus Publishing, 2004.
[13] Angelo, T., Cross, K. P., Classroom Assessment Techniques: A Handbook for College Teachers, 2nd Edition. Jossey-Bass, 1993.
[14] Johnston, I., Essays and Arguments: A Handbook on Writing Argumentative and Interpretative Essays. Malaspina University College, Nanaimo, BC. http://www.mala.bc.ca/~johnstoi/arguments/argument1.htm
[15] Feldgen, M., Clua, O., "The use of CATs and case-based teaching for dealing with different levels of abstractions", 39th Frontiers in Education Conference, IEEE, 2009.
[16] Engineering Council for Undergraduate Education (E-CUE), "From Useful Abstractions to Useful Designs: Thoughts on the Foundations of the Engineering Method", Part I and Part II, MIT School of Engineering.
[17] Feldgen, M., Clua, O., "Teaching Abstraction in Distributed Systems with CATs", Proceedings of the IEEE Frontiers in Education Conference (FIE 2008), Saratoga Springs, NY, 2008.
[18] Stallings, W. M., Leslie, E. K., "Student Attitudes Toward Grades and Grading Practices", RR 276, Illinois Univ., Urbana.


AUTHOR INFORMATION

Maria Feldgen, Associate Professor, University of Buenos Aires, School of Engineering, [email protected].

Osvaldo Clua, Associate Professor, University of Buenos Aires, School of Engineering, [email protected].
