A Tutor on Scope for the Programming Languages Course

Eric Fernandes, Amruth N. Kumar
Ramapo College of New Jersey
505 Ramapo Valley Road
Mahwah, NJ 07430-1680
[email protected]


ABSTRACT
In order to facilitate problem-based learning in our Programming Languages course, we developed a tutor on static and dynamic scope. Static scope includes the scope of variables, the referencing environment of procedures, and the scope of procedure names in a language that permits nesting of procedure definitions (e.g., Pascal, Ada). Dynamic scope includes the scope of variables and the referencing environment of procedures. In this paper, we will describe the design of our tutor and present the results of evaluating it for two semesters in our Programming Languages course.


Categories and Subject Descriptors
K.3.1 [Computer Uses in Education]: Computer-Assisted Instruction. D.3.3 [Programming Languages]: Language Constructs and Features – procedures, functions and subroutines.

General Terms
Design, Experimentation.

Keywords
Tutor, Scope, Evaluation

1. INTRODUCTION
Solving problems helps improve long-term retention [5]. It helps students put into practice the theory they learn in class. In order to facilitate problem-based learning, we have been developing tutors for various topics in Computer Science. These tutors are meant as a supplement to classroom instruction, a tool to clarify, test, and round out a student's knowledge of a topic. Our experience has shown that problem-solving with such tutors helps improve student learning regardless of the complexity of the concept, whether basic or advanced. We have been developing tutors for topics in Computer Science that are based on analyzing program code, such as pointers [12], parameter passing [13], and scope. In this paper, we will present our tutor on scope, designed for students in the Programming Languages course.
In Section 2, we describe the features of the tutor on scope. In Section 3, we discuss our evaluation of the tutor in two semesters of our Programming Languages course and the results we obtained. Finally, we present literature related to our work and discuss future directions for our work.

2. FEATURES OF THE TUTOR
In the Programming Languages course, static and dynamic scope are studied in the context of a language that permits nesting of procedure definitions (such as Pascal or Ada), because:
• The rules for determining the scope of a variable are a generalization of the rules in most other statically scoped languages.
• The rules for determining the scope of procedure names are significantly more complicated than in any other language. Most of the other languages (such as Java, C++, C#) and their concepts (such as namespaces and nested classes) are simplified instances of these languages.
Therefore, our tutor on scope considers a language with nested procedure definitions, and within this context, addresses five topics:
1. Static scope of variables, i.e., all the procedures to which the scope of a variable extends;
2. Static referencing environment of procedures, particularly all the non-local variables that can be referenced in a procedure;
3. Dynamic scope of variables;
4. Dynamic referencing environment of procedures, particularly the non-local referencing environment;
5. Static scope of procedure names.
Even though most students in the Programming Languages course are already familiar with static scope, they consider it in a language with nested procedure definitions for the first time in this course. Dynamic scope is new to most students, and static scope of procedure names is hard for most students who are familiar only with Java, C++, or C#.

The objective of our tutor is to help students better understand and learn these topics by solving problems. For each of these five topics, the tutor generates problems, grades the user’s answer and provides feedback. In the rest of this section, we will describe the types of problems generated by the tutor, the feedback provided by the tutor, and the tutor’s Graphical User Interface.
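To make these topics concrete, the following is a minimal sketch of our own (in Python; it is not the tutor's code, and all names are hypothetical) that models a program with nested procedure definitions as a static nesting tree, the kind of convenient domain model mentioned in Section 4:

```python
from dataclasses import dataclass, field

@dataclass
class Proc:
    """One procedure in a program outline: its name, the variables it
    declares, and the procedures declared (nested) directly inside it."""
    name: str
    variables: list[str] = field(default_factory=list)
    children: list["Proc"] = field(default_factory=list)
    parent: "Proc | None" = None

    def nest(self, child: "Proc") -> "Proc":
        # Record the static (lexical) nesting relationship.
        child.parent = self
        self.children.append(child)
        return child

# Outline of a tiny Pascal-like program:
#   program Main;  var x;
#     procedure A;  var y;
#       procedure B;  var x;   { re-declares x, shadowing Main's x }
#     procedure C;
main = Proc("Main", ["x"])
a = main.nest(Proc("A", ["y"]))
b = a.nest(Proc("B", ["x"]))
c = main.nest(Proc("C"))
```

The examples in the rest of this section build on this sketch.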


2.1 Problem Generation
The tutor randomly generates the outline of a program with nested procedure definitions, and asks multiple questions based on each program. A program may contain up to twenty procedure definitions and 2-5 levels of nesting.
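The paper does not give the tutor's generation algorithm; as an illustration only, here is one plausible way to grow a random outline under the stated limits, reusing the hypothetical Proc class above (the depth drawn here is an upper bound, not a guaranteed depth):

```python
import random

def generate_outline(max_procs: int = 20, max_depth: int = 5) -> Proc:
    """Randomly grow a static nesting tree of procedure definitions."""
    depth = random.randint(2, max_depth)      # upper bound on nesting depth
    count = random.randint(depth, max_procs)  # total number of procedures
    root = Proc("P1", [random.choice("xyz")])
    frontier = [(root, 1)]                    # (node, its nesting depth)
    for i in range(2, count + 1):
        parent, d = random.choice(frontier)
        child = parent.nest(Proc(f"P{i}", [random.choice("xyz")]))
        if d + 1 < depth:                     # only extend below the cap
            frontier.append((child, d + 1))
    return root
```

Some of the questions the tutor asks on each topic are listed below: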


Static Scope: The user is asked to identify all the procedures that lie within the scope of a variable declared in a procedure [10].


Static Referencing Environment: The user is asked to identify all the variables, along with the procedures where they are declared, that can be referenced in a given procedure [11].

Dynamic Scope: The user is asked to identify all the procedures that lie within the dynamic scope of a variable, given a sequence of procedure calls in the program.


Dynamic Referencing Environment: The user is asked to identify all the variables, along with the procedures where they are declared, that can be referenced in the last procedure in a given sequence of procedure calls in the program.


Procedure Callability: The user is asked to identify all the procedures in a program with nested procedure definitions that a given procedure can call.
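Given the nesting tree and, for the dynamic questions, a call sequence, the answer to each type of question can be computed mechanically. The sketch below is our illustration of that logic (building on the hypothetical Proc class above), not the tutor's actual implementation:

```python
def static_scope(var: str, decl: Proc) -> list[Proc]:
    """Static scope of `var` declared in `decl`: decl plus every
    descendant not cut off by a re-declaration of `var` on the way down."""
    result = [decl]
    def walk(p: Proc) -> None:
        for child in p.children:
            if var not in child.variables:   # not shadowed here
                result.append(child)
                walk(child)
    walk(decl)
    return result

def static_ref_env(p: Proc) -> dict[str, Proc]:
    """Non-local variables visible in `p` under static scope, mapped to
    the declaring procedure; nearer ancestors shadow farther ones."""
    env: dict[str, Proc] = {}
    q = p.parent
    while q is not None:                     # walk ancestors, nearest first
        for v in q.variables:
            env.setdefault(v, q)             # keep the nearest declaration
        q = q.parent
    return env

def dynamic_ref_env(call_seq: list[Proc]) -> dict[str, Proc]:
    """Variables visible in the last procedure of `call_seq` under dynamic
    scope: the most recent declaration of each name wins."""
    env: dict[str, Proc] = {}
    for p in call_seq:
        for v in p.variables:
            env[v] = p                       # later calls shadow earlier ones
    return env

def callable_procs(caller: Proc) -> list[Proc]:
    """Procedures `caller` may call under static scoping of procedure names
    (assuming declaration-order visibility without forward declarations):
    its children; itself and its earlier siblings; and each ancestor
    together with that ancestor's earlier siblings."""
    visible = list(caller.children)
    p = caller
    while p.parent is not None:
        sibs = p.parent.children
        visible.extend(sibs[: sibs.index(p) + 1])   # p + earlier siblings
        p = p.parent
    return visible
```

For the outline sketched earlier, static_scope("x", main) yields Main, A, and C but not B (B re-declares x), which is exactly the distinction the feedback described in Section 2.2 explains.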

2.2 Feedback
After the user solves a problem, the tutor lists the parts of the user's answer that are correct, the parts that are incorrect, and the parts of the correct answer that the user missed. It then prints an explanation for every incorrect and missed answer. Following are some sample explanations provided by the tutor:

Static Scope: For each missed procedure, the tutor points out that the variable in question was not re-declared in the procedure or any of its intervening ancestor procedures. For each incorrect procedure, it points out either that the procedure is not a descendant of the procedure that declared the variable, or that it or one of its ancestors re-declared the variable.

Static Referencing Environment: For each missed variable, the tutor points out that the variable was declared in an ancestor and was not re-declared in any intervening ancestral procedure. For each incorrect variable, it points out either that the procedure that declares the variable is not an ancestor, or that the variable is re-declared in an intervening ancestral procedure, shadowing the incorrectly selected variable.

Dynamic Scope: For each missed procedure, the tutor points out that the variable was not re-declared in any procedure called between the procedure where the variable was declared and the missed procedure (inclusive). For each incorrect procedure, assuming that the incorrect procedure was called after the procedure in which the variable was declared, it points out the first intervening procedure that re-declared the variable.

Dynamic Referencing Environment: For each missed variable, the tutor points out that the variable was declared in a procedure in the call sequence and was not re-declared thereafter. For each incorrect variable, it notes either that the variable is not declared in any procedure in the call sequence, or that it has been re-declared in a more recent procedure in the call sequence.

Procedure Callability: For each missed procedure, the tutor points out why the procedure can be called: because it is a child, an earlier sibling, an ancestor, or an earlier sibling of an ancestor. For each incorrect procedure, it points out why it cannot be called: because it is none of the above.

The tutor keeps track of and reports the user's progress: how many problems the user has attempted, and how many the user has answered correctly, incorrectly, and partially.

2.3 Graphical User Interface
The tutor uses the same graphical user interface for all five modules: a left panel for the program and its visualization, and a right panel for the problem, the answering options, and the feedback provided to the user. This uniformity of interface helps reduce the tutor's learning curve. The user is led through an intuitive clockwise flow of action (as shown by the superimposed arrows in Figure 1): from the program to the problem statement (A), answering options (B), grading button (C), feedback (D), and the button to generate another problem (E).

In order to help the user analyze the structure of the program, the tutor provides several formatting options: the program may be printed using a different color for each level of nesting, and/or with a bounding box around each procedure definition. In addition, the tutor provides two graphical visualization tools for the program:
• The user may view the static tree of the program, displayed with procedure and variable names listed at each node.
• The user may view the procedure call sequence as a graphic chain, with procedure and variable names listed at each node.
Figure 1: The flow of action in the tutor (a problem on Procedure Callability in Pascal is in progress).

3. EVALUATION OF THE TUTOR
Hypothesis: We wanted to evaluate whether students could learn from the explanation provided by our tutor. In the following discussion, we use the term "minimal feedback" to mean that the tutor indicates whether an answer is correct or wrong without explaining why; "detailed feedback" includes an explanation of why an answer is wrong. This explanation is one of the features that set our tutor apart from textbooks: textbooks typically provide minimal feedback through answers to selected exercise problems, but do not explain why the student's answer is incorrect. So, we wanted to evaluate the effectiveness of the explanations provided by our tutor.


Protocol: We evaluated the tutor in Fall 2001 and Fall 2002, using a within-subjects design. The protocol was as follows:
1. The students took a written pretest and a post-test (8 minutes each) on static scope and referencing environment, without any practice in between. We used this step to check for any practice effect and/or Hawthorne effect [6].
2. The students repeated the process for dynamic scope and referencing environment, but practiced with the tutor for 12 minutes between the tests. The tutor provided minimal feedback.
3. The students repeated the process for procedure callability, and practiced for 12 minutes between the tests using the tutor configured to provide detailed feedback.
Going into the testing, students were most familiar with static scope and least familiar with procedure callability. The types of problems on the pretest, practice, and post-test were all the same, and the levels of difficulty of the pretest and post-test were similar. The students did not have access to the tutor before the practice session. They did not have access to their textbook, notes, or any other reference material between the pretest and post-test. Therefore, any change of score from pretest to post-test was attributable solely to the use of the tutor for practice.






Analysis: In our analysis, we considered the score per attempted problem, in order to normalize for the number of problems each student attempted. In the following tables, we list the average score per attempted problem on the pretest and the post-test, followed by the percentage change in the average score and the corresponding two-tailed t-test p-value, for Fall 2001 and Fall 2002.
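The paper does not name the statistical tool used. As a hedged illustration only, the two-tailed p-value for a within-subjects (paired) design could be computed as follows; the score vectors are made-up stand-ins, not the study's per-student data:

```python
from scipy import stats

# Hypothetical per-student scores per attempted problem (NOT the study's data):
pretest = [2.1, 1.5, 2.8, 1.0, 2.3, 2.0, 1.9]
posttest = [3.0, 2.5, 3.4, 2.2, 3.1, 2.9, 3.4]

# Within-subjects design -> paired (dependent) t-test, two-tailed by default.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
pct_change = (sum(posttest) - sum(pretest)) / sum(pretest) * 100
print(f"t = {t_stat:.3f}, p = {p_value:.5f}, change = {pct_change:+.2f}%")
```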


Table 1: Results from Fall 2001 Evaluation (N = 7)

                                    Pre-Test        Post-Test       % Score   t-test
                                    Avg (Std-Dev)   Avg (Std-Dev)   Change    p-value
No Practice (Static Scope)          2.85 (1.57)     3.27 (0.99)     +14.83%   0.17067
Minimal Feedback (Dynamic Scope)    2.35 (1.61)     2.24 (0.86)     -4.76%    0.80252
Detailed Feedback (Procedure Call)  1.95 (1.04)     2.93 (0.73)     +50.46%   0.03071



Table 2: Results from Fall 2002 Evaluation (N = 7)

                                    Pre-Test        Post-Test       % Score   t-test
                                    Avg (Std-Dev)   Avg (Std-Dev)   Change    p-value
No Practice (Static Scope)          3.11 (0.65)     3.71 (0.51)     +19.36%   0.08093
Minimal Feedback (Dynamic Scope)    3.10 (0.71)     2.69 (0.59)     -13.26%   0.18565
Detailed Feedback (Procedure Call)  2.17 (0.82)     3.45 (0.26)     +58.76%   0.01102


From the two semesters, it is clear that:
• A 15-19% improvement in student scores can be attributed purely to increased familiarity with the testing process;
• Providing minimal feedback, as is typically done in textbooks, is not conducive to learning; as a matter of fact, it may hurt student learning;
• A tutor that provides detailed feedback can help improve student scores by 50-58%, and this improvement is statistically significant (p-value < 0.05), unlike in the other two cases.
Although our sample sizes were small, since we used a within-subjects design, and our results were statistically significant and consistent across two semesters, we believe that the results are a good indication of the effectiveness of the explanations generated by our tutor at helping students learn. We plan to continue to evaluate the tutor in future semesters, as well as at other institutions.


Students filled out feedback forms after each step in the protocol. On a Likert scale of 1 (Strongly Agree) through 5 (Strongly Disagree), the class responses to selected items on the feedback form from Fall 2001 and Fall 2002 combined were:
• Feedback was sufficient: 2.92 (minimal feedback) versus 1.69 (detailed feedback)
• Tutor helped me better understand what I knew: 2.5 (minimal feedback) versus 1.29 (detailed feedback)
• Tutor helped me learn new material: 3.27 (minimal feedback) versus 1.93 (detailed feedback)
Clearly, students thought that the tutor with detailed feedback helped them learn better.

On the feedback for dynamic scope (minimal feedback), a student wrote the following:
• "I got confused so the feedback would have been very welcome. :)"
On the feedback for procedure call (detailed feedback), students wrote the following, indicating that they preferred detailed feedback:
• "I immediately caught on to the idea here. Excellent tutor."
• "I like the feedback because it helps me see what I did wrong. Lots of examples help me understand things easier and this helped show me what I was doing wrong."

4. RELATED WORK AND FUTURE WORK
Our results are similar to those found in Physics, where the use of problem generation systems has been shown to increase student performance by 10% [7]. In Computer Science, many systems have been developed to solve problems entered by the student (e.g., [8]) or to administer problems created by the instructor (e.g., [1]), but only a few systems have been developed that generate and solve problems for the user, such as PILOT [3], SAIL [4], and Gateway Labs [2]. PILOT is a problem generation tool for graph algorithms, SAIL is a LaTeX-based scripting tool for problem generation, and Gateway Labs generates problems on the mathematical foundations of Computer Science. Our work differs from earlier work in that it deals with problems based on program code, and provides detailed feedback to help users learn from their mistakes.

From our experience developing this and other tutors, we have identified two types of problem-solving tutors for Computer Science topics:
1. Tutors that need an interpreter to solve the problem: these tutors generate problems whose answers are based on the execution (semantics) of the program;
2. Tutors that do not need an interpreter: those that generate problems whose answers are based on the structure (e.g., static semantics) of the program. (We have not considered tutors for syntax issues.)
Scope belongs to the second category, whereas pointers [12], expression evaluation [9], and parameter passing in programming languages [13], some of the other topics for which we have developed tutors, belong to the first category. Both categories of tutors are driven by a domain model. The domain model used by interpreter-based tutors is incremental, complementary, and closely tied to language semantics. For the second category of tutors (e.g., scope), we can use any convenient domain model (e.g., a tree representation), and it does not have to be related to the language in question.

We plan to extend our tutor to include implementation issues and techniques: static chaining, display, shallow access, and deep access. With this addition, the tutor would deal with all the aspects of scope covered in a typical Programming Languages course, and would account for fully 25% of the test grade in our course.

Our tutor may be used for solving problems either in class or after class. Since the tutor provides feedback that is designed to reinforce learning, it may be used for asynchronous learning in distance education courses. Since the tutor generates problems randomly, and hence generates a different problem for each student, it can be used for assignments and tests without concern about cheating. Finally, instructors who teach "blended" courses, i.e., courses with a mix of face-to-face and online instruction, can use the tutor to introduce active learning into their Programming Languages course. The tutor is currently posted at http://phobos.ramapo.edu/~amruth/problets/docs. It requires Java 2 with Swing.


5. ACKNOWLEDGMENTS
Partial support for this work was provided by the National Science Foundation's Course, Curriculum and Laboratory Improvement Program under grant DUE-0088864.


6. REFERENCES


[1] Arnow, D. and Barshay, O. WebToTeach: An Interactive Focused Programming Exercise System. Proceedings of FIE 1999, San Juan, Puerto Rico (November 1999), Session 12a9.
[2] Baldwin, D. Three years experience with Gateway Labs. Proceedings of ITiCSE 96, Barcelona, Spain (June 1996), 6-7.
[3] Bridgeman, S., Goodrich, M.T., Kobourov, S.G., and Tamassia, R. PILOT: An Interactive Tool for Learning and Grading. Proceedings of the 31st SIGCSE Technical Symposium, Austin, TX (March 2000), 139-143.
[4] Bridgeman, S., Goodrich, M.T., Kobourov, S.G., and Tamassia, R. SAIL: A System for Generating, Archiving, and Retrieving Specialized Assignments Using LaTeX. Proceedings of the 31st SIGCSE Technical Symposium, Austin, TX (March 2000), 300-304.
[5] Farnsworth, C.C. Using computer simulations in problem-based learning. Proceedings of the Thirty-Fifth ADCIS Conference, Omni Press, Nashville, TN (1994), 137-140.


[6] Franke, R.H. and Kaul, J.D. The Hawthorne experiments: First statistical interpretation. American Sociological Review 43 (1978), 623-643.
[7] Kashy, E., Thoennessen, M., Tsai, Y., Davis, N.E., and Wolfe, S.L. Using Networked Tools to Enhance Student Success Rates in Large Classes. Proceedings of FIE 97, Pittsburgh, PA (November 1997).
[8] Rodger, S. and Gramond, E. JFLAP: An Aid to Study Theorems in Automata Theory. Proceedings of ITiCSE 98, Dublin, Ireland (August 1998), 302.
[9] Krishna, A. and Kumar, A. A Problem Generator to Learn Expression Evaluation in CS I and Its Effectiveness. The Journal of Computing in Small Colleges 16, 4 (May 2001), 34-43.
[10] Kumar, A. Dynamically Generating Problems on Static Scope. Proceedings of the 5th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE 2000), Helsinki, Finland (July 2000), 9-12.
[11] Kumar, A., Schottenfeld, O., and Obringer, S.R. Problem Based Learning of 'Static Referencing Environment in Pascal'. Proceedings of the Sixteenth Annual Eastern Small College Computing Conference (ESCCC 2000), University of Scranton, PA (October 2000), 97-102.
[12] Kumar, A. Learning the Interaction between Pointers and Scope in C++. Proceedings of the Sixth Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE 2001), Canterbury, UK (June 2001), 45-48.
[13] Shah, H. and Kumar, A. A Tutoring System for Parameter Passing in Programming Languages. Proceedings of the Seventh Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE 2002), Aarhus, Denmark (June 2002), 170-174.

