Incorporating Concepts from Quality-Based Assessment: Experiences in Computer Science Education

Suzanne W. Dietrich
Dept of Mathematical Sciences and Applied Computing
Arizona State University
Phoenix, AZ 85069
[email protected]

Susan Haag
Ira A. Fulton School of Engineering, Assessment & Evaluation
Arizona State University
Tempe, AZ 85287
[email protected]

Leah S. Folkestad
Aspen Research
Phoenix, AZ 85016
[email protected]
ABSTRACT
One of the main concepts of quality-based assessment is to provide students with scoring rubrics that define expectations on assignments so that students are actively involved in self-assessment. Another feature of quality-based assessment is to guide the assessment of student work that has a reduced number of outcomes, such as papers and projects. This paper reports on experiences incorporating some concepts from quality-based assessment in undergraduate and graduate courses offered in a computer science department. Scoring rubrics are introduced in the assessment process for self, peer, and instructor evaluation. The rubrics also provide a useful mechanism to assess papers and projects with a limited number of outcomes. The use of the rubrics as an assessment form provides a direct correspondence between the expectations and the assessed score. The quality-based assessment mechanisms described can be facilitated by course management tools in online-enhanced courses. The results of a student survey indicate that the rubrics provide clear feedback and that the self and peer assessment aided in student learning.

Categories and Subject Descriptors
K.3.2 [Computers and Education]: Computer and Information Science Education – computer science education, self-assessment. H.2.m [Database Management]: Miscellaneous – computer science education, assessment, scoring rubrics.

General Terms
Human Factors

Keywords
Assessment, scoring rubrics.

1. INTRODUCTION
As educators, we are faced with the challenge of fairly assessing student assignments. When creating assignments, we pay close attention to the details, incorporating our expectations into the specification of the assignment. Once students submit the assignments, we determine partial credit for the expected mistakes and document our decisions for the unexpected ones so that we can assess the assignments consistently. Some students will disagree with the assessment and indicate that the expectations of the assignment were unclear. Quality-based assessment [5] may be a solution. One of the main concepts of quality-based assessment is to provide students with scoring rubrics to define expectations on assignments so that students are actively involved in self-assessment.

Scoring rubrics [7] are rating scales for assessing student work, providing a description of the scale on which an evaluation criterion for student work is defined. Checklists are a simplified form of a rubric, where the evaluation result is whether a specific criterion has been met or not. The benefits of scoring rubrics include providing an evaluation metric and formative feedback to students. Quality-based assessment extends the benefit of scoring rubrics by providing the rubric to the student in advance of the assessment process, giving the student the ability to self-assess and improve. Quality-based assessment is more than just rubrics, since one of the goals of this approach is a reduced number of possible assessment outcomes rather than a numerical score. Computer science educators can use quality-based assessment to assess papers or demonstrated projects that have a limited number of outcomes.

After a brief overview of quality-based assessment, this paper describes how to incorporate this assessment process in undergraduate and graduate courses offered in a computer science department. Since these courses are online-enhanced, the discussion emphasizes how quality-based assessment can be incorporated into classes using a course management tool. Specifically, Section 3 describes the inclusion of quality-based assessment in an undergraduate course, using checklists to convey expected features and scoring rubrics to assess student work. The use of the rubric as an assessment form provides a direct correspondence between the expectations and the assessed score, giving quality feedback to the students. Section 4 discusses how quality-based assessment forms the basis of peer review for papers and group projects. Section 5 reports on the results of a student survey on the use of checklists and rubrics for various assignments in a database course for self, peer, and instructor evaluation. Section 6 concludes with a discussion of the experience.
2. QUALITY-BASED ASSESSMENT
The quality-based assessment process, developed in the late 1990s [5], incorporates three distinct features. First, the process requires faculty to define what is expected before work is assigned at the beginning of the semester. Second, students engage in self-assessment, which is facilitated by providing the expectations to the student when the work is assigned. Third, the process assesses rather than grades student work, which results in a reduced number of possible assessment outcomes.

The quality-based assessment process characterizes the features of the assignment to assess the submitted product. With each assignment, the students receive a list of its features, categorized as expected, revealed, and exciting features. Expected features are required and are assessed by checking Yes or No to indicate whether the expectations specified on the checklist have been met. Revealed features are requirements that are known and are assessed by the following ratings: Wow, OK, or Weak. The revealed features represent the criteria for the scoring rubric. Exciting features represent unexpected features of student work that illustrate outcomes beyond the explicit assignment objectives. A comment box is allocated for the recognition of exciting features. The assessment outcomes are: Exceeds Expectations, Meets Expectations, Needs Improvement, No Credible Effort, and Not Submitted. The assessment rules allow students to revise and resubmit work with a Needs Improvement outcome for re-assessment. However, re-assessed work can never be assessed as Exceeds Expectations. The quality-based assessment process in [5] also describes a mapping of the three-level assessment to traditional letter grades.

The goal of the quality-based assessment process is universal: to have students submit quality work. There are, however, several concerns with implementing the entire process as described in [5] for engineering design courses. One concern is the burden on the instructor to re-assess student work that needs improvement. Students should be given guidance on what constitutes a quality submission and ample time to achieve that goal; the resulting assessment of the student's work is then the consequence of the student's effort. Another issue with the process described in [5] is the requirement that all assessment in the course use the same reduced assessment outcomes. Some instructors prefer the traditional numerical scale for grading certain assignments and exams. This paper describes experiences with the inclusion of concepts from the quality-based assessment process in courses at both the undergraduate and graduate level. Rubrics are introduced in the assessment process for self, peer, and instructor evaluation. Rubrics are also used to facilitate the assessment of papers and projects that have a limited number of assessment outcomes.
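As a concrete illustration of the process rules described in this section, the outcome set and the resubmission cap can be modeled in a few lines of Java. This sketch is not part of the process in [5]; the type and method names are hypothetical:

    // Illustrative model of the quality-based assessment outcomes and the
    // rule that re-assessed work can never earn Exceeds Expectations.
    public enum AssessmentOutcome {
        EXCEEDS_EXPECTATIONS, MEETS_EXPECTATIONS, NEEDS_IMPROVEMENT,
        NO_CREDIBLE_EFFORT, NOT_SUBMITTED;

        // Apply the re-assessment rule to a revised submission: only work
        // assessed as Needs Improvement may be revised and resubmitted.
        public AssessmentOutcome reassess(AssessmentOutcome revised) {
            if (this != NEEDS_IMPROVEMENT) {
                throw new IllegalStateException(
                    "only Needs Improvement work may be resubmitted");
            }
            // A re-assessed submission is capped at Meets Expectations.
            return revised == EXCEEDS_EXPECTATIONS ? MEETS_EXPECTATIONS : revised;
        }
    }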
3. RUBRICS FOR ASSIGNMENTS
A checklist of expected features was introduced to clearly identify the expectations on the assignment. Figure 1 shows the list of expected features for a programming language assignment. Although the details in the table are specified in the assignment description, the table provides a succinct checklist for students. Note that the header of the table in Figure 1 includes the term required features in conjunction with the term expected features [5] to reflect that the features are not only expected but also required. A NO assessment results in a loss of points.

YES | NO | EXPECTED/REQUIRED FEATURES
    |    | The assignment was submitted electronically by the due date and time?
    |    | The name appears as a comment line at the beginning of the .java file?
    |    | The program includes comments to document its purpose?
    |    | The program uses descriptive variable names?
    |    | The program successfully compiles? (MUST HAVE ELECTRONIC COPY)
    |    | The javadoc utility successfully generates the appropriate documentation without warnings?
    |    | Hardcopy of program submitted for assessment?
    |    | Hardcopy of execution submitted for assessment?

Figure 1. Programming Assignment Checklist
The revealed features for a programming language assignment are based on the correctness of the logic. Figure 2 shows part of the scoring rubric for a first programming assignment in CS1 that uses a random number generator to generate the bounds of a rectangle, followed by the computation of the rectangle's area and perimeter using the appropriate methods. The scoring rubric indicates the partial credit decisions in the second column for the logic of the corresponding step of the program described in the third column. The first column is where the correctness of the student's answer is assessed. Due to lack of space, Figure 2 does not show additional columns that relate an exemplary solution to the scoring rubric and provide space for comments.

Pts | Max | Program Logic
    |  10 | Import required packages
    |  15 | Construct a Random number generator
    |  20 | Use random numbers to generate bounds
    |  15 | Construct a Rectangle with bounds
    |  10 | Display bounds of Rectangle
    |  10 | Compute area using getWidth & getHeight
    |   5 | Display area
    |  10 | Compute perimeter using getWidth & getHeight
    |   5 | Display perimeter
    | 100 | Subtotal: Logic

Figure 2. Sample Scoring Rubric
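For concreteness, the following is a minimal sketch of one possible exemplary solution matching the rubric steps in Figure 2. It assumes java.util.Random and java.awt.Rectangle, and an assumed bound range of 1 to 100; the actual class and range used in the course are not shown in the paper:

    import java.awt.Rectangle;
    import java.util.Random;

    public class RectangleAssignment {
        public static void main(String[] args) {
            // Construct a Random number generator.
            Random generator = new Random();

            // Use random numbers to generate the bounds (assumed range 1-100).
            int width = generator.nextInt(100) + 1;
            int height = generator.nextInt(100) + 1;

            // Construct a Rectangle with the generated bounds.
            Rectangle rectangle = new Rectangle(width, height);

            // Display the bounds of the Rectangle.
            System.out.println("Width: " + rectangle.getWidth()
                    + "  Height: " + rectangle.getHeight());

            // Compute and display the area using getWidth and getHeight.
            double area = rectangle.getWidth() * rectangle.getHeight();
            System.out.println("Area: " + area);

            // Compute and display the perimeter using getWidth and getHeight.
            double perimeter = 2 * (rectangle.getWidth() + rectangle.getHeight());
            System.out.println("Perimeter: " + perimeter);
        }
    }

The comments mirror the rubric rows in Figure 2, so each partial credit decision maps to an identifiable step in the solution.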
The quality-based assessment approach indicates the expected features of the assignment and the partial credit decisions for the assessment. Some partial credit decisions may have to be made during the assessment process because it is hard to anticipate all creative solutions by students. The use of a rubric also facilitates the assessment process when working with a teaching assistant or grader, since the partial credit decisions are clearly documented for both the student and the person assessing the student’s work.
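Because the rubric documents the point allocations explicitly, the assessed score can be computed directly from the rubric rows. The following hypothetical Java sketch, whose row values echo Figure 2 and whose names are illustrative rather than from the paper, shows this direct correspondence between expectations and the assessed score:

    import java.util.List;

    // One row of the scoring rubric: awarded points, maximum points,
    // and the program-logic step being assessed (hypothetical names).
    record RubricRow(int awarded, int max, String description) {}

    public class RubricSubtotal {
        public static void main(String[] args) {
            List<RubricRow> logic = List.of(
                new RubricRow(10, 10, "Import required packages"),
                new RubricRow(10, 15, "Construct a Random number generator"),
                new RubricRow(20, 20, "Use random numbers to generate bounds"));

            // The subtotal is the sum of the awarded points, so the assessed
            // score corresponds directly to the documented rubric rows.
            int subtotal = logic.stream().mapToInt(RubricRow::awarded).sum();
            int maximum = logic.stream().mapToInt(RubricRow::max).sum();
            System.out.println("Logic subtotal: " + subtotal + " / " + maximum);
        }
    }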
4. PEER REVIEW
Concepts from quality-based assessment can be applied to the self, peer, and instructor assessment of papers and group projects. This section discusses how quality-based assessment can be used to coordinate peer review, which provides the basis of a reflective self-assessment.

4.1 Papers
In a graduate-level course, students were given a semester-long assignment to investigate research topics related to the course. The students work in cooperative groups [4] of two or three members so that students can help each other explore the research area. The assignment consists of three deliverables: 1) a written paper, documenting the results of the exploration; 2) a hands-on project, illustrating depth of understanding; and 3) a Web page, providing an accessible overview of the topic. Students are required to submit a proposal for the topic and project associated with the research assignment during the fourth week of the semester. Once the topic is approved, the students are given the quality-based assessment form, which is initially based on the assessment form in [5]. The expected/required features listed those features that are required when writing a paper, such as appropriate structure and a complete bibliography. Since the revealed features are known features that are rated, the assessment form used in this experience changed the terminology from revealed features to rated features with ratings of +, √, and ∆. The + and ∆ ratings are familiar terminology from the cooperative learning literature [4]. The list of rated features includes items for the paper's presentation, grammar, and content. The reader is encouraged to develop a scoring rubric based on her experience with peer reviewing for conferences and journals. Since a scoring rubric is a working document, the rubric can be revised after reflecting on the assessment process so that the rubric is ready for its next use [6]. Comment boxes should also be incorporated for more descriptive feedback. Figure 3 provides a mechanism to determine the reduced assessment outcomes of +, √, and ∆. For example, a paper receives a + when YES is checked for all expected features and the rated features do not include a ∆ and have at least a few +'s. These guidelines are included on the assessment form. The mapping to a numerical score will be addressed later in this section.

Students are required to submit their paper for anonymous peer review. Each paper is electronically submitted using a digital drop box and assigned a number. Each student is given a paper to review and assigned a reviewer designation, e.g., reviewer A or B. Each paper is assigned at least two reviewers from different groups. The assessment form is given to the students as a Portable Document Format (PDF) file. The rubric is initially designed in Word and converted to PDF. Then, the form tool in Adobe Acrobat is used to add radio buttons for the (required and rated) features and text fields for comments. Within one week, each student is required to complete this quality-based assessment form in the full version of Adobe Acrobat, which is available on campus, and to submit the review using the digital drop box. Use of the digital drop box allows the instructor to identify the reviewer. The instructor then forwards the review anonymously to the authors. Students are given a week to revise their paper based on the anonymous peer review before their final submission.

The peer review is anonymous so that students can honestly evaluate a paper. The peer review is used only to provide feedback to the authors so that they can improve the paper. The instructor does not use the completed peer evaluation to assign a grade to the paper. However, the instructor uses the same quality-based assessment form to assess the revised student work.

The quality-based assessment process has a reduced number of outcomes: Exceeds Expectations, Meets Expectations, Needs Improvement, and No Credible Effort. One of the challenges of scoring rubrics is how to convert the assessment outcome to a more conventional grading system [6]. This conversion process must be based on the instructor's experience and assessment philosophy. Since a numerical grading system is more conventional, Figure 3 indicates how the assessment outcome can be converted to an overall numerical score. The mapping of the scoring rubrics to the assessment outcome is closely based on the work of McNeill et al. [5]. This assessment result was then converted to a numerical scale, assigning 95 to +, 85 to √, and 75 to ∆. This mapping is consistent with the assignment of a letter grade for the course, which uses a strict numerical scale (90 ≤ A; 80 ≤ B < 90; 70 ≤ C < 80).

Result | Assessment Result [5] | Description                                                                              | Numerical Scale
+      | Exceeds Expectations  | All YES's for the Expected Features plus at least a few +'s on Rated Features and no ∆  | 95
√      | Meets Expectations    | All YES's for the Expected Features plus at most a few ∆'s on Rated Features            | 85
∆      | Needs Improvement     | If there are any NO's for the Expected Features or there are too many ∆'s               | 75

Figure 3. Reduced Assessment Outcomes
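As a rough illustration, the guidelines in Figure 3 can be expressed as a small decision rule in Java. The sketch below is not from the paper; in particular, the paper does not quantify "a few," so the FEW threshold is an assumption:

    public class OutcomeMapper {
        // Assumed threshold for "a few"; the paper does not quantify it.
        private static final int FEW = 2;

        // Map the checklist result and rated-feature counts to the
        // numerical scale of Figure 3: + -> 95, check -> 85, delta -> 75.
        public static int numericalScore(boolean allExpectedYes,
                                         int plusCount, int deltaCount) {
            if (!allExpectedYes || deltaCount > FEW) {
                return 75;  // Needs Improvement: any NO, or too many deltas
            }
            if (deltaCount == 0 && plusCount >= FEW) {
                return 95;  // Exceeds Expectations: a few +'s and no delta
            }
            return 85;      // Meets Expectations: at most a few deltas
        }
    }

In practice, the boundary cases are determined by the instructor's judgment rather than a fixed threshold.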
The quality of an individual's peer review forms another component of the grade on the paper. A review is rated on a scale of +5 to -5. A poor review, consisting of only checked boxes and no comments, lowers a student's score. A quality review with thoughtful, detailed comments raises a student's score.

From an instructor's viewpoint, the quality-based assessment of the research paper has several benefits. The incorporation of the peer review process strongly encourages students to make a credible effort on the paper for the peer review deadline. Students are then actively involved in the assessment process, providing input to their peers. Students are also required to submit a self-assessment of their own paper. Another benefit is the ability of the instructor to use the same form for assessment, providing detailed feedback electronically to the student. This benefit increases in importance when an instructor is unable to complete the assessment before the last in-class meeting of the semester. For entirely online classes, the use of an electronic assessment form for feedback is essential.

There are also two learning objectives of the quality-based assessment process for the research assignment that are not necessarily obvious. One objective is the exposure of students to research topics associated with the course. The peer review exposes the student to an additional research topic beyond their chosen one. Another objective is the experience the student gains in the peer review process. The goal is to have the student use the peer review experience as the basis of a reflective assessment. One obvious goal of the peer review is for each student to get an honest and detailed assessment of the paper for use in the revision process. However, whether a student provides the appropriate input or takes advantage of the detailed information is a decision made by each student. Again, the objective is to provide the experience and basis for the reflective assessment in the revision process for those students who are conscientious.

4.2 Group Projects
Peer review can also be used in group projects where the team coordinates the peer review process. In an undergraduate database class, the group project consisted of a semester-long assignment to design, develop, and test a database implementation [2]. The group project was broken down into three phases: (1) gathering requirements and conceptual design, (2) relational database design, and (3) implementation and verification. Each phase consisted of detailed deliverables with checklists provided to the students to guide them through the necessary requirements of the project. Students worked in groups of 4-5, and the project was structured using cooperative learning concepts promoting positive interdependence and individual accountability [4]. The positive interdependence in the third phase of the project is the unified look and feel of the database implementation. Each student was individually accountable for the implementation of several components of the database application. Two weeks before the final project was due, students were given the assessment form. Students were asked to provide a self-assessment of their own implementation components, and a peer assessment of the implementation components of a fellow team member. The peer assessment was not anonymous. The team was responsible for assigning and gathering the assessment.
The rubric for the implementation of the forms, reports, and queries in the database project reminds the students of the items to be assessed, some of which are summarized here at a high level:
• using form controls, such as combo boxes and check boxes, where appropriate, and displaying meaningful, sorted information to the user;
• validating fields and parameter values; and
• showing that the test data illustrates the correctness of the various features of the form, report, and query.
An evaluation of the peer assessment on the database group projects is provided in the next section.

5. STUDENT PERSPECTIVE
To gain a student's perspective on the quality-based assessment technique, a survey was conducted at the end of an offering of an undergraduate database course that incorporated the use of checklists and scoring rubrics for the homework assignments, and self and peer assessment for the database group project discussed in the previous section. The homework in the database course consisted of a sequence of assignments on various query languages over a given database enterprise. Examples of these assignments are given as case studies in the text [1]. Students use the WinRDBI educational tool [8], which executes the formal and industry-standard query languages, to validate their queries before submitting the assignment for assessment. The logic of each query is checked by either the instructor or a teaching assistant to verify its correctness using a scoring rubric. A checklist of expected features is provided with the homework assignment.

Students taking the undergraduate database course were administered a simple Web survey using a course management tool to provide the instructor with feedback. Students were asked to voluntarily answer the following three questions on a Likert scale of Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree:
Q1. The grading form for individual WinRDBI assignments provided clear feedback on the assignment.
Q2. Providing an assessment form to enable self-assessment on the project aided my learning.
Q3. Using peer evaluations within groups on the team projects improved the quality of my deliverable.

In a class of 48 students, there were 37 respondents (77%), which is a strong response rate. Table 1 shows the percentage of responses for each question. Students overwhelmingly agreed that the grading form for the individual WinRDBI assignments provided clear feedback; there were no answers of Strongly Disagree. Students also agreed that the self and peer assessment forms aided their learning and the quality of their deliverables. Based on the percentages, the students felt that the self-assessment aided their learning more than the peer assessment.

Table 1. Survey Responses in Percentages

              Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
Q1. WinRDBI           0         |     5    |    16   |   41  |       38
Q2. Self              3         |    11    |    19   |   49  |       19
Q3. Peer              5         |    14    |    30   |   38  |       14

The following open-ended question was also included on the survey: Please provide additional comments regarding the use of quality-based assessment in the CSE 412 Database Management class. Your comments are encouraged and appreciated. Representative responses include:
• "The grading scheme helped me see where I lost points, and reassured me that this was consistent across the entire class."
• "The grading criteria sheet was very useful both on the assignments and when gathering all of the materials at the end of each phase."
• "Checklists for homework and assessment questions are very helpful."
• "Evaluating our own and other's work was very useful. We were able to see what our other group members were doing and offer our own input and ideas."
• "It was nice to know ahead of time what exactly the grader was looking for, so that I could make sure my submitted material met these requirements."
• "The grading form is a good idea."
• "The checklists for completion of the different phases of the project were valuable, as were the checklists on the returned homework."
In addition to the student perspective, an analysis of the scores on the final exam questions related to the query language homework assignments was performed. The final exam included answering a query in each query language: relational algebra, domain relational calculus, tuple relational calculus, and SQL. The mean student score for each of the query language questions on the final was higher for the course offering that included the quality-based assessment than for an earlier course offering that did not use the assessment technique.
6. DISCUSSION
The incorporation of quality-based assessment is a worthwhile endeavor. The results of a student survey indicate that the students agree that the scoring rubric provided clear feedback and that the self and peer assessment aided their learning and the quality of their work. The use of quality-based assessment significantly reduces grading disputes. In the past, most grading disputes involved student disagreement with partial credit decisions. This type of grade dispute is no longer negotiable because students are consistently graded on the same criteria, and the grading criteria and point allocations are clearly stated on the assessment form. The quality-based assessment concept has been shared with colleagues who have also adopted the use of the assessment form for providing consistent feedback to students. Teaching assistants indicate that the scoring rubric facilitates the grading process. Although there is some initial set-up required for establishing the form, the investment is rewarded during the assessment process.

The quality-based assessment process can be implemented in traditional and online environments. The experiences discussed in this paper were in online-enhanced courses, where classes met face-to-face twice a week and a course management tool formed the basis of an online component. The online medium facilitates the quality-based assessment concepts presented by providing the electronic means to distribute feedback effectively. The use of online forms and the digital drop box makes it easier to keep the process confidential. Setting clear expectations becomes even more important when teaching entirely online. Scoring rubrics used to guide learning and to conduct peer reviews are two examples of clarifying expectations that lend themselves to an online environment. Assessment in entirely online classes requires innovative techniques since the feedback to students is limited to electronic means. In fact, the quality-based assessment process was discovered through a discussion on assessment as part of a faculty development workshop on online learning techniques [3]. The advantages of a quality feedback mechanism are not limited to online classes. The inclusion of concepts from quality-based assessment in face-to-face and online-enhanced classes has proven successful.

7. ACKNOWLEDGEMENTS
Thanks to Veronica Burrows at Arizona State University, who shared her quality-based assessment experience with us.

8. REFERENCES
[1] Dietrich, S. W., Understanding Relational Database Query Languages, Prentice Hall, 2001.
[2] Dietrich, S. W. and Urban, S. D., "A Cooperative Learning Approach to Database Group Projects: Integrating Theory and Practice", IEEE Transactions on Education, November 1998, CDROM 06, pp. 346.
[3] Folkestad, L. S. and Haag, S., Building Success Online: A Faculty Toolkit, Aspen Research, 2001.
[4] Johnson, D. W., Johnson, R. T., and Smith, K. A., Active Learning: Cooperation in the College Classroom, Interaction Book Company, 1991.
[5] McNeill, B., Bellamy, L., and Burrows, V., "A Quality Based Assessment Process for Student Work Products", Journal of Engineering Education, October 1999, pp. 485-500.
[6] Mertler, C., "Designing Scoring Rubrics for Your Classroom", Practical Assessment: Research & Evaluation, 7(25), 2001.
[7] Moskal, B., "Scoring Rubrics: What, When and How?", Practical Assessment: Research & Evaluation, 7(3), 2000.
[8] WinRDBI Educational Tool, Arizona State University, http://www.eas.asu.edu/~winrdbi