Computer Supported Collaborative Learning for helping novice students acquire self-regulated problem-solving skills in computer programming

S. R. Brito1,2, A. S. Silva1,2, O. L. Tavares3, E. L. Favero4, and C. R. L. Francês2

1 Cyberspace Institute, Federal Rural University of Amazônia, Belém, Pará, Brazil
2 Laboratory of High Performance Networks Planning and Postgraduate Program in Electrical Engineering, Federal University of Pará, Belém, Pará, Brazil
3 Informatics Department, Federal University of Espírito Santo, Vitória, Espírito Santo, Brazil
4 Postgraduate Program in Computer Science, Federal University of Pará, Belém, Pará, Brazil

Abstract - This paper presents the integration of a program viewer and simulator into a learning environment to support the learning of programming in initial classes of Engineering, Computing and Information Systems courses. The proposed integration is based on a redesign of the architecture to combine the resources available in the learning environment with an automatic evaluator of programs and with a new resource that promotes collaborative feedback through peer review. The architecture resulting from this study is based on principles of collaborative feedback as a way of developing self-assessment skills in programming disciplines.

Keywords: learning computer programming, collaborative feedback, self-assessment skills, self-regulated learning, automatic evaluation of programs.

1 Introduction

The development of the skills and abilities needed to construct computer programs requires a significant effort from students and teachers of computing courses. In the initial activities of disciplines and projects that involve computer programming, we expect at least that students will be able to construct solutions to simple everyday problems. Despite all efforts, difficulties are still visible in subsequent disciplines that require computer programming skills. In order to build these skills, teachers devote effort to teaching the syntactic and semantic aspects of programming languages. Often, the didactic approach involves the presentation of sample programs with structures similar to those applied in programming exercises. According to Menezes et al. [1], this strategy is based on the fact that students need to create, in their memories, structural patterns that can be used to solve different problems. Some of the obstacles faced in teaching programming are described by Sajaniemi and Kuittinen [2] and are often associated with difficulties in understanding the language syntax and abstract concepts such as loops, pointers, arrays and other formal constructions applied for the first time. In most cases, when a student perceives that he/she is not progressing in the discipline, he/she feels discouraged and his/her performance becomes increasingly poor.

By articulating the assumptions of self-regulated learning, we find three main stages [3][4]: planning, implementation and self-reflection. These phases are also present in computer programming laboratories, promoting and contributing to the intellectual autonomy of students and preventing the teacher from imposing his/her knowledge and way of working on the student. When the student stops receiving ready-made, finished concepts, he/she learns to find answers and search for solutions with freedom and responsibility, i.e., he/she works to develop his/her autonomy, regulating his/her goals, strategies, motivation, cognition and behavior.

In the process of self-regulated learning, several authors agree that feedback is a key aspect [5][6][7][8][9]. For our research, the central focus is the importance of monitoring and feedback to encourage self-regulation in a computer-supported learning environment for novice programming learners. Besides seeking to improve the quality of feedback messages, we propose to explore learning scenarios that involve self-assessment skills, emphasizing situations where the student provides feedback on the solutions of his/her peers (teachers, aides, students and other participants in the teaching-learning process). As a result, we present an architecture and a system focused on learning initial programming skills, and we show that students who develop self-assessment skills, through the exercise of providing feedback to their peers, achieve significant progress in learning, even when the teacher's feedback is scarce, as often happens in classes with a large number of students.

2 Challenges of learning computer programming

In the following subsections, we describe some problems that represent relevant challenges in the context of this research.

2.1 The importance of reflection in programming labs

When based on experimentation and construction, learning encourages the development of cognitive skills and attitudes of a high intellectual level. Therefore, the construction of computer programs cannot be seen only as the application of knowledge to build and run a computing solution without extensive discussion of its results. As in disciplines such as physics or mathematics, the construction of a solution through an algorithm or a program written in a programming language is seen not only as the production of a set of results, but as a moment of work, reflection, analysis, inquiry, interpretation, exchange, decisions and conclusions, whether provisional or not. More than an activity that arouses curiosity, the process of building a solution should be considered a pedagogical practice in which "doing" is important, but reflection is essential. For Stamouli and Huggard [10], it is important to provide opportunities for reviewing decisions because, although understanding a program's logical correctness is significant, the student's analysis of the solution directly affects the construction of strategies for solving problems to be faced later.

2.2 Teacher overload in mediating learning

Besides evaluating program correctness, the teacher can ask the student about his/her solution, so that he/she is led to describe the program or the steps taken in its construction, making the student reflect on it. The teacher can also propose new activities derived from the solved problem or encourage peer review. This assessment is important because it can stimulate students to reflect on their solutions and on the process adopted to solve the problems. Requiring students to keep a description of their activities is one way to encourage reflection.

On the other hand, mediation is difficult when classes are large. When syntheses or review explanations of their solutions are requested, there is a work overload for the teacher in assessing them and, therefore, a delay in the teacher's feedback on these descriptions. This happens mainly when the process of evaluation and feedback is centered on the teacher. In student-centered learning, the feedback process can be designed to include peer review, or it can be facilitated by computer technology that provides elements contributing significantly to the internalization of feedback for self-regulation. Moreover, the teacher's feedback should not lose importance in the self-regulation process, which is justified by at least two arguments: self-assessment integrated with teacher feedback has been shown to increase the identification and correction of mistakes by students [11], and teacher interventions can increase motivation and cognition, resulting in the development of self-regulation skills [12].

2.3 Feedback limited to the transmission of information

According to Nicol [8], greater student commitment and responsibility in the teaching-learning process imply changes in the way assessment and feedback are conducted. In a student-centered learning approach, feedback cannot be conceived as a transmission process in which teachers provide information in order to guide the correction and improvement of academic work. The conception of feedback as a process of "transmission of information", often centered on the teacher, has been contested by several researchers [13][14][15][8][9]. The main arguments are: (1) there are strong indications that feedback messages are complex and difficult to decipher, so students need opportunities (e.g. through discussion) to understand and use the feedback to adjust their performance [15][16]; (2) it ignores the relationship of feedback with the student's motivation [8], since feedback from peers, teachers and mentors influences how students see themselves and how they learn and build their own strategies [17]; (3) it concentrates the effort of producing feedback on the teacher, which increases the teacher's workload in proportion to the number of students and classes, without always ensuring its effectiveness; and (4) it does not lead the student to reinterpret the causes of failures in his/her solutions, thus not contributing to the student's perception of his/her own effectiveness [14].

3 Basic principles for the proposed architecture

In the following subsections, we describe the conceptual framework used to design the proposed architecture.

3.1 The role of feedback in the process of self-regulated learning

Researchers argue that students with self-regulated learning skills are more persistent, inventive, confident and enterprising [18][19], and are less dependent on external support from the teacher [18]. According to Nicol [8], the development of self-regulation skills can be facilitated by learning environments.

Our concern is to provide conditions for feedback to help in the development of autonomy. For this, the feedback should be interpreted, (re)constructed and internalized so that it can have a significant influence on student learning [16].

Thus, we consider feedback as a way to provide the student with the mechanisms necessary to clearly understand [15]: (1) what the course objectives are, i.e. what is considered good performance, a learning goal to be internalized and achieved; (2) how current performance relates to the desired performance (for this, students should be able to compare the achieved performance with the desired one); and (3) how to act to reduce or eliminate the gap between the current and the desired performance. To reduce this gap, Sadler [15] suggests that students should possess "some of the teacher's evaluating skills". For some researchers [13][14], this observation shows that, besides improving the quality of feedback messages, we must invest effort in building self-assessment skills. These arguments explain the fact that students with self-assessment skills achieve greater progress.

3.2 The importance of observation and collaboration

In order to make learning more active, Shang et al. [20] suggest expanding the learning experiences, for example by creating small groups of students, suggesting questions or putting students in decision-making situations. As an example of collaborative experiences, Lazakidou and Retalis [21] propose the use of computer-supported collaborative learning strategies to help students acquire self-regulated problem-solving skills in mathematics. After solving a programming problem, students may be asked to describe the path chosen and the decisions taken, in order to discuss them with their peers. For example, in the methodology proposed by Menezes et al. [1], adapted from Polya [22], students write their own understanding of the problem. The text produced can help to promote discussions in small groups. These discussions are precious even for those not involved in the dialogue. This opportunity, when the experience is related to the dialogue, offers students a new perspective on their beliefs and values, helping them build new possible meanings for their experiences. We consider four learning strategies presented by Shang et al. [20]: reflection, dialogue, observation and doing. The combination of these learning modes adds value to the process, making it more attractive and meaningful for the students; hence the importance of replacing most of the traditional lessons with learning scenarios involving experimentation, discovery, creation of solutions, discussion and systematization of knowledge.

3.3 Process of problem solving

To solve a problem, the student must have a clear idea of the characteristics of the problem, its attributes and rules, i.e., it is necessary that the student understands the process of problem solving as a transformable system. Just as in the teaching of science and mathematics, several models of problem solving have been proposed to support the teaching and learning of programming. These models have in common the definition of steps or phases in the process of solving problems. For example, starting from an adaptation of the ideas of Polya [22], Menezes et al. [1] suggest the following steps in the specification of what they call Computer Supported Programming Learning: (1) understanding the problem, (2) planning, (3) development, (4) evaluation of the process and its results, and (5) socialization of the results. According to Lazakidou and Retalis [21], the common point of the models that support the teaching of problem solving is the initial phase: when the student is oriented to identify the objectives of the problem, he/she is also oriented to adjust his/her actions in view of the goals identified. That is, the student builds an internal model of the problem, and this model should be confronted with the identified goals. The step of understanding the problem involves the definition of the input data and their properties, the expected results, and the relationships between input data and expected results.

After building the internal model of the problem and knowing some rules associated with it, the student plans the set of instructions that must be applied to solve the problem (even if only mentally). With the planning of the solution, the student builds some instructions, such as reading and writing data. This is the process of coding and testing solutions in a programming language. The assessment of instructions, using a computer environment or mentally, should lead to the selection of the instructions most appropriate to the objectives of the problem. These instructions, or blocks of statements, are registered in memory as positive patterns and will be applied to solve the current or other problems. To recover these patterns from human memory, Lazakidou and Retalis [21] suggest that mediation during the problem-solving process can include meta-cognitive support strategies, which can be verbal or not. Therefore, assessment is a process of reflection at every step of program construction, in order to consolidate the strengths and weaknesses of each step. Thus, during review or peer review, students can also go back over the instructions or commands, confronting results (even partial ones) with the objectives and actions. The diversity in the understanding of the rules from one student to another also contributes to promoting interaction in a cooperative environment. This process can give rise to new actions and changes in the initial instructions of the student's solution. As a result of self-regulating operative action, through a selective process, actions compatible with the goals are registered. Viewers and simulators of algorithms and programs can contribute to the student's perception of the instructions or blocks of statements that are most suitable for the solution of the problem. Thus, while building instructions, the student develops an orderly and sequential reasoning especially suitable for learning algorithms. The socialization of the results is considered an important experience because it is an opportunity for peer review, where students give and receive feedback. Therefore, it is essential to provide an explanation of the developed solution: the objectivity and clarity of the ideas are tested and experienced.
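As a hypothetical illustration (ours, not taken from the methodology in [1]), the five steps above can be marked directly in a small program written in the Java subset targeted by the environment; the problem, the class name and the approval threshold are illustrative assumptions:

```java
// Problem (assumed for illustration): compute the average of a set of grades
// and decide whether the student is approved.
public class AverageGrade {
    // Step 1 - Understanding: the input is an array of grades in [0, 10];
    // the expected result is their arithmetic mean.
    // Step 2 - Planning: accumulate the grades in a loop, then divide by the count.
    static double average(double[] grades) {
        double sum = 0.0;                       // Step 3 - Development: coding the plan
        for (int i = 0; i < grades.length; i++) {
            sum = sum + grades[i];              // repetition structure over the array
        }
        return sum / grades.length;
    }

    public static void main(String[] args) {
        double[] grades = {8.0, 6.0, 10.0};
        double mean = average(grades);
        // Step 4 - Evaluation: confront the result with the goal
        // (the mean of 8, 6 and 10 is 8, so it should print "approved")
        if (mean >= 7.0) {                      // selection structure
            System.out.println("average " + mean + ": approved");
        } else {
            System.out.println("average " + mean + ": not approved");
        }
        // Step 5 - Socialization: the solution and its output are then
        // shared with peers for review.
    }
}
```

Confronting the printed result with the stated goal (step 4) and explaining the solution to peers (step 5) are exactly the moments where the peer-review mechanism described above comes into play.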

3.4 Construction of self-efficacy beliefs

Self-efficacy beliefs refer to a person's evaluation or perception of his/her own capabilities [23]. Claims about the relevance of self-efficacy beliefs for motivation and performance are common in studies on self-regulation of learning [24][25]. In this direction, we emphasize: (1) the perception of self-efficacy, i.e., observing other students who reached good results suggests to the student that he/she can handle similar challenges, motivating him/her to solve the problems [23]; (2) an appropriate degree of task difficulty, where the intent is to provide a sufficient variety of activities, preventing the student from facing challenges so difficult that they harm his/her motivation [23]; (3) the perception of progress, which can give the student the belief that he/she can solve new problems; for this, progress must be pointed out through effective feedback; (4) the avoidance of social comparison [23], i.e., one must avoid supplying the same tasks and demanding the same pace from everyone, forming groups of students according to their capacity, and excessively stimulating competition in the class; the goal is to promote mechanisms that contribute to the student's perception of his/her own performance; and (5) informing the teacher of situations where his/her intervention is needed, i.e., situations where the programming environment does not provide sufficient evidence to support the student's progress; verbal persuasion [23] must then be present in the teacher's feedback. Previous research [12] indicates that such intervention increases motivation and cognition, toward the development of self-regulation skills. We recognize that self-efficacy beliefs alone are not responsible for the success of students, but we propose adopting these principles so that students use more appropriate strategies, investing more effort in solving the problems despite obstacles and failures. Effort and perseverance combined with the right strategies lead, in general, to better performance [4][23][24][25][26].

4 Proposed environment

This research started with the development of a program simulation and visualization tool called JavaTool [27], intended to use program animation to support the learning of algorithms and programming in the initial disciplines of Engineering and Computer Science. The integration of Moodle [28] with JavaTool was justified by the integration of resources and the implementation of new features that allowed JavaTool to be invoked from Moodle [28][29]: programming examples, tasks and quizzes. We incorporated the method proposed by Moreira and Favero [30] for the automated assessment of programs. The platform resulting from this work was used in four classes of algorithms and programming in Computer Science and Information Systems courses. Based on the theoretical principles of self-regulated learning, we found that most of the effort to provide feedback was centered on the teacher and on the learning environment, i.e., feedback had so far been conceived as a process of transmitting information to guide the improvement and correction of student solutions. From then on, requirements were identified for the construction of a new architecture for the environment proposed by Mota et al. [27].

4.1 E-learning environment architecture

The architecture was built to accommodate tutoring (through examples and the simulation of these examples), simulation (of the students' solutions), student tracking, and support for collaboration. The integration of Moodle [28] with JavaTool allows access to resources and activities. The tutoring subsystem uses a resource type called programming example, and the other subsystems (Simulation, Automatic Evaluation and Collaborative Feedback) use new activity types incorporated into Moodle (Fig. 2). These subsystems are described below. For teachers, the architecture provides a subsystem to support planning and the monitoring of students. In addition to facilitating the teacher's planning, the information given by the teacher during planning serves as input to the subsystem through which the student perceives the learning objectives. The simulator allows the student to visualize, graphically and textually, through the explanation of steps, the execution of each example available in his/her learning space, as well as of his/her own solutions. From the actions of the student and his/her solutions, the Monitoring and Evaluation Subsystem provides automatic feedback with additional information that may contribute to the self-regulation of learning.

Fig. 2. Architecture of the programming learning environment with support for cooperative evaluation and feedback.

Finally, the Collaborative Feedback subsystem supports interactions between the participants, so that feedback is not restricted to the teacher or to the learning environment. Thus, students exercise giving and receiving feedback.

4.2 Tutoring subsystem

Along a programming course, it is important that students have access to basic learning materials, content and examples. Specifically for programming disciplines, we implemented a new resource type in Moodle [28], called "programming example". Using this resource, the teacher registers example programs or saves a solution implemented by a student into the base of examples. Examples are made available as resources and can be used according to the teacher's criteria or explored freely by students. All examples can be visualized with code animation. Two new activity types were also developed: the programming task and the programming quiz. The programming task is a problem whose solution is implemented in the simulation environment. The quiz contains several questions, created or selected by the teacher, and the solutions to these questions are also implemented in the simulation environment. The student can consult the questions and the history of attempts with their assessments (Fig. 5), which may have been generated by the automatic evaluator or by the teacher.

Fig. 5. Initial view of the programming quiz (screenshot of the student's screen).

The teacher can relate programming tasks and programming quizzes to example programs, so that students can use the same structures of the examples in their solutions. For example, if during a simulation the student perceives a syntax error in a repetition structure of his/her solution, he/she can search the base of examples for an example that uses the same structure he/she is trying to use.

4.3 Simulation and visualization of programs

A study based on the functioning of the cerebral hemispheres supports the hypothesis that the visualization of algorithms can help understanding [31]. According to this study, the left hemisphere processes verbal and logical information while the right hemisphere processes visual and spatial information. Therefore, given some source code, reading it and viewing a graphical representation of it can stimulate both hemispheres, increasing the chance of better code understanding.

In that direction, visualization systems for programs and algorithms appeared, whose benefits justify the spread of research in that area [32]. On the other hand, one of the most important studies on the effectiveness of those systems [33] concluded that the way students use the visualizations is more important than the animations and images themselves. The most recent evaluation of those systems, carried out by Urquiza-Fuentes and Velázquez-Iturbide [34], uses the taxonomy proposed by Price et al. [35]: Algorithm Visualization (AV) systems, which animate or statically present an algorithm; Program Visualization (PV) systems, which animate or statically present a program or its data structures; Script-Based Systems (SBS), where the users select the code passages they want to visualize; Interface Systems (IS), which do not generate visualizations but only interact with the user (in some cases invoking an SBS); and Compiler-Based Systems, which insert the visualization actions automatically, simplifying their use but limiting the user's actions.

In the class of program viewers, simulation in JavaTool [29] occurs through the animation of source code and can be executed from examples, tasks or programming quizzes. Figure 6 shows a simulator screenshot (solving a programming question). Region "A" contains the simulator toolbar, which holds the animation menu and the history, help, information and save buttons. The editor is located in region "B", where the student sees the keywords of the language (a subset of Java) highlighted. Region "C" contains the simulator's control panel, with options to control the execution of the animation, save the program, and get help and information. Region "D" contains the output of the program run. Finally, regions "E" and "F" (disabled during the simulation) contain the spaces responsible for the graphical and textual representation of the executed instructions.

The simulator allows textual and graphical animation, so that the student can understand the logic with an overview of the program. Among the Java resources available in the programming environment are primitive types, arrays, selection and repetition control structures, some methods of the Math class, and the creation and simplified invocation of methods.

Fig. 6. Preview of JavaTool in Moodle (screenshot of the student's screen).
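To give a concrete feel for that language subset, the short program below (our own illustration, not an example from the paper's base) uses only the listed resources: primitive types, arrays, selection and repetition structures, a Math class method, and a simplified method invocation:

```java
// Illustrative problem: find the largest distance from zero among the values.
public class LargestDistance {
    // Simplified method creation and invocation, as supported by the subset.
    static double largest(double[] values) {
        double max = 0.0;
        for (int i = 0; i < values.length; i++) {  // repetition structure
            double d = Math.abs(values[i]);        // Math class method
            if (d > max) {                         // selection structure
                max = d;
            }
        }
        return max;
    }

    public static void main(String[] args) {
        double[] values = {-3.5, 2.0, 3.0};
        System.out.println(largest(values));
    }
}
```

A program of this size is exactly the kind of solution the simulator can animate step by step in regions "E" and "F" of the interface.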

4.4 Monitoring and Assessment Subsystem

The evaluation subsystem allows automatic or manual assessment, with the posting of grades and teacher feedback. The model of automatic evaluation [30] combines an assessment of the code's complexity, obtained through the statistical technique of multiple linear regression over complexity indicators, with an input/output code tester. The simulator is responsible for triggering the automatic evaluation.

The indicators used by the code evaluator are software engineering metrics. The technique consists of extracting from the student's solution a value for each metric described by Moreira and Favero [30]. The assessment subsystem submits these values to the multiple linear regression model, a linear equation previously obtained by training on the test base. In this case, the metrics are the inputs of the formula that directly calculates the student's grade. Finally, the system checks whether the solution returns the expected results for previously registered inputs. For the correct operation of the evaluator, the teacher must register a solution considered the "model answer" for the problem. Thus, if there is no compilation error, the system evaluates the student's solution and obtains a grade for it, which can be modified depending on the outcome of the input/output tester. The final assessment for each question, with the history of attempts, is displayed in the student's interface. The subsystem uses the highest score obtained on each question to calculate the final score of the programming quiz.

Automatically, when a student's solution achieves the highest score, it becomes part of the test base. In addition, the teacher can select further student solutions to be incorporated into the test base. The test base contains the solutions used by the evaluation subsystem and by the tutoring subsystem, which is responsible for presenting examples.

Fig. 7. Automatic assessment of a student's solution (student's interface).

4.5 Collaborative assessment subsystem

The programming environment integrated into Moodle [28] allowed the use of the interaction technologies that already exist on the platform. However, in order to exercise and develop self-assessment skills, we implemented a new feature, called "collaborative evaluation", where students provide feedback on the solutions of their peers based on their own knowledge. With the new functionality (Fig. 8), students can view the feedback provided by their peers and contribute to the solutions of other students. The solutions assessed by peers are selected randomly from a database of solutions produced by the students. Thus, the student immediately receives information about his/her assessment and can judge whether the feedback given is relevant.

Fig. 8. Collaborative activity: peer feedback.
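The two-stage automatic evaluation described in Section 4.4 can be sketched as follows. This is a hedged illustration, not the actual formula of Moreira and Favero [30]: the class name, the 0-10 grading scale, the example weights and the proportional input/output penalty are all our assumptions.

```java
// Sketch of a metric-based grader: a linear regression over software metrics
// produces a base grade, which an input/output tester may then lower.
public class MetricGrader {
    private final double[] weights;  // regression coefficients fit on the test base
    private final double intercept;

    MetricGrader(double[] weights, double intercept) {
        this.weights = weights;
        this.intercept = intercept;
    }

    // Stage 1: linear equation over the metric values extracted from the solution.
    double regressionGrade(double[] metrics) {
        double grade = intercept;
        for (int i = 0; i < metrics.length; i++) {
            grade += weights[i] * metrics[i];
        }
        // clamp to an assumed 0..10 grading scale
        return Math.max(0.0, Math.min(10.0, grade));
    }

    // Stage 2: the input/output tester modifies the grade when the solution
    // does not reproduce the model answer's outputs (proportional rule assumed).
    double finalGrade(double[] metrics, int testsPassed, int testsTotal) {
        return regressionGrade(metrics) * testsPassed / testsTotal;
    }
}
```

In this sketch, a solution whose metrics yield a regression grade of 7.0 but which passes only half of the registered input/output tests would receive 3.5; the real combination rule used by the evaluator may differ.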

4.6 Planning and monitoring subsystem

The planning and monitoring subsystem derives from the integration of the new functionality implemented in Moodle and JavaTool, from the teacher's perspective. In addition to planning contents and activities, the teacher uses this subsystem's resources to track the students' use of resources and activities. Especially for automated assessment, the teacher can view the outcomes instantly (Fig. 9), which allows him/her to adopt different strategies in the face of the learning difficulties shown in the monitoring map, such as creating pairs or groups of students during a laboratory class.

Fig. 9. Visualization of solutions (history of student attempts).

5 Conclusions

Two important aspects of assessment were considered so far: usability and educational effectiveness. According to Kulyk et al. [36], usability evaluation should be part of the design and implementation process; therefore, it should not be performed only when the learning environment is ready for use by students. Thus, we considered three assessment phases: (1) the first prototypes were evaluated with heuristic inspections, and the outcomes of these evaluations were used to improve the system; (2) the improved version of the system was evaluated through consultation techniques, controlled experiments and observational studies, and these results helped to correct bugs and improve the system; and, finally, (3) observational studies were used to evaluate the use of the environment in real laboratory learning scenarios.

From the standpoint of usability, we considered: informal assessments, where students' opinions were captured after system use [29]; heuristic evaluations performed by specialist teachers using the interactive resources, visualizations and simulations [27][29][30]; observational studies, where developers and evaluators observed the use of the system and reported important aspects of its use for the specification of new requirements [29]; and consultations through questionnaires answered by students. The diversification of usability evaluation strategies was important because observational studies and questionnaires alone are carried out in environments only partly controlled by the evaluators, and therefore the information captured is more restricted. In controlled experiments, the environment and the students' tasks are controlled so that the answers provide information on efficiency, ease of use and other usability indicators. However, records of the students' actions are maintained even in uncontrolled experiments, allowing the comparison of performance and usability through the system logs.

As for educational effectiveness, Hundhausen et al. [33] show that the effort devoted by students to tasks related to visualization is more important than the visual content displayed by viewers and simulators. Naps et al. [37] developed a taxonomy of levels of interaction with these technologies. In this taxonomy, the authors suggest a hierarchical structure in which a higher level of student involvement leads to greater educational benefits; every level, from the visualization level up, includes all previous levels. After the integration with Moodle, a new evaluation was performed using the taxonomy of Naps et al. [37]. In this evaluation, and in comparison with other environments, the new environment was rated at level 4, including the categories Viewing, Responding, Changing and Constructing. Level 5 can be reached through the adoption of a methodology in which the student presents his/her solution to the class, posting it and commenting in forums or other interaction mechanisms available. The environment was used in four algorithms and programming courses, with classes of 15, 29, 42 and 47 students. Questionnaires were used at the beginning of each course to capture the students' initial knowledge and the development of logic and organizational skills. Structured questionnaires were also applied at the end of each course, with the aim of assessing the motivation for using the learning environment and the interface technologies employed.

6  References

[1] C. S. Menezes, O. L. Tavares, R. A. Nevado and D. Cury. "Computer Supported Co-operative Systems to support the problem solving - a case study of learning computer programming". In: Proceedings of the 38th Annual Frontiers in Education (FIE) Conference, New York, v. 1, 2008.
[2] J. Sajaniemi and M. Kuittinen. "Program Animation Based on the Roles of Variables". In: Proceedings of the 2003 ACM Symposium on Software Visualization, 2003.
[3] B. J. Zimmerman. "Academic studying and the development of personal skill: a self-regulatory perspective". Educational Psychologist, v. 33, n. 2/3, p. 73-86, 1998.
[4] D. H. Schunk. "Self-efficacy and cognitive skill learning". In: C. Ames and R. Ames (eds.) Research on Motivation in Education: Goals and Cognitions. New York: Academic Press, v. 3, p. 13-44, 1989.
[5] D. L. Butler and P. H. Winne. "Feedback and self-regulated learning: a theoretical synthesis". Review of Educational Research, 65(3), 245-281, 1995.
[6] P. Black and D. Wiliam. "Assessment and classroom learning". Assessment in Education, 5(1), 7-74, 1998.
[7] G. Gibbs and C. Simpson. "Conditions under which assessment supports students' learning". Learning and Teaching in Higher Education, 1, 3-31, 2004.
[8] D. Nicol. "Principles of good assessment and feedback: Theory and practice". REAP International Online Conference on Assessment Design for Learner Responsibility, 29th-31st May, 2007.
[9] D. Nicol. "Assessment for learner self-regulation: Enhancing achievement in the first year using learning technologies". Assessment and Evaluation in Higher Education, 34(3), 335-352, 2009.
[10] I. Stamouli and M. Huggard. "Object Oriented Programming and Program Correctness: The Students' Perspective". ICER'06, September 9-10, Canterbury, United Kingdom. ACM, 2006.
[11] M. Taras. "The use of tutor feedback and student self-assessment in summative assessment: towards transparency for students and for tutors". Assessment and Evaluation in Higher Education, 26(6), 605-614, 2001.
[12] B. K. Hofer, S. L. Yu and P. R. Pintrich. "Teaching College Students to Be Self-Regulated Learners". In: D. H. Schunk and B. J. Zimmerman (eds.) Self-regulated learning: from teaching to self-reflective practice, p. 57-85. New York: Guilford, 1998.
[13] M. Yorke. "Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice". Higher Education, 45(4), 477-501, 2003.
[14] D. Boud. "Sustainable assessment: rethinking assessment for the learning society". Studies in Continuing Education, 22(2), 151-167, 2000.
[15] D. R. Sadler. "Formative assessment: revisiting the territory". Assessment in Education, 5(1), 77-84, 1998.
[16] R. Ivanic, R. Clark and R. Rimmershaw. "What am I supposed to make of this? The messages conveyed to students by tutors' written comments". In: M. R. Lea and B. Stierer (eds.) Student Writing in Higher Education: New Contexts. Buckingham: SRHE/Open University Press, 2000.
[17] C. Dweck. "Self-theories: Their Role in Motivation, Personality and Development". Philadelphia: Psychology Press, 1999.
[18] P. R. Pintrich and D. H. Schunk. "Motivation in education: Theory, research and applications". Englewood Cliffs, NJ: Prentice Hall Merrill, 1996.
[19] B. J. Zimmerman and D. H. Schunk. "Self-regulating intellectual processes and outcomes: a social cognitive perspective". In: D. Y. Dai and R. J. Sternberg (eds.) Motivation, emotion and cognition. New Jersey: Lawrence Erlbaum Associates, 2004.
[20] Y. Shang, H. Shi and S. Chen. "An intelligent distributed environment for active learning". ACM Journal of Educational Resources in Computing (JERIC), v. 1, n. 2, Article 4, 17 pages, Summer 2001. New York: ACM Press.
[21] G. Lazakidou and S. Retalis. "Using computer supported collaborative learning strategies for helping students acquire self-regulated problem solving skills in Mathematics". Computers & Education, 54(1), 3-13, 2010.
[22] G. Polya. "How to solve it". Princeton, NJ: Princeton University Press, 1973.
[23] A. Bandura. "Perceived Self-efficacy in Cognitive Development and Functioning". Educational Psychologist, v. 28, n. 2, p. 117-148, 1993.
[24] F. Pajares. "Self-Efficacy Beliefs in Academic Settings". Review of Educational Research, v. 66, n. 4, p. 543-578, 1996.
[25] B. J. Zimmerman. "Self-efficacy: an essential motive to learn". Contemporary Educational Psychology, v. 25, n. 1, p. 82-91, 2000.
[26] D. J. Stipek. "Motivation to Learn: from theory to practice". 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1998.
[27] M. P. Mota, L. W. K. Pereira and E. L. Favero. "JavaTool: Uma Ferramenta Para Ensino de Programação" [JavaTool: a tool for teaching programming]. In: Workshop sobre Educação em Computação, Anais do XVIII Congresso da Sociedade Brasileira de Computação, Belém, Pará. Porto Alegre: Sociedade Brasileira de Computação, 2008.
[28] Moodle. http://moodle.org
[29] M. P. Mota, S. R. Brito, M. P. Moreira and E. L. Favero. "Ambiente Integrado à Plataforma Moodle para Apoio ao Desenvolvimento das Habilidades Iniciais de Programação" [An environment integrated with the Moodle platform to support the development of initial programming skills]. In: XX Simpósio Brasileiro de Informática na Educação (SBIE 2009), Florianópolis, 2009.
[30] M. P. Moreira and E. L. Favero. "Um Ambiente Para Ensino de Programação com Feedback Automático de Exercícios" [An environment for teaching programming with automatic feedback on exercises]. In: Workshop sobre Educação em Computação, Anais do XVIII Congresso da Sociedade Brasileira de Computação. Belém, Pará: SBC, 2008.
[31] S. P. Springer and G. Deutsch. "Left Brain, Right Brain". New York: W. H. Freeman and Company, 1985.
[32] J. Stasko, A. Badre and C. Lewis. "Do Algorithm Animations Assist Learning? An Empirical Study and Analysis". In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'93), p. 61-66, 1993.
[33] C. D. Hundhausen, S. A. Douglas and J. T. Stasko. "A Meta-Study of Algorithm Visualization Effectiveness". Journal of Visual Languages & Computing, 2002.
[34] J. Urquiza-Fuentes and J. Velázquez-Iturbide. "A Survey of Successful Evaluations of Program Visualization and Algorithm Animation Systems". ACM Transactions on Computing Education, 9(2), 1-21, June 2009.
[35] B. Price, R. Baecker and I. Small. "An Introduction to Software Visualization". In: J. Stasko, J. Domingue, M. Brown and B. Price (eds.) Software Visualization, p. 3-27. Cambridge, MA: MIT Press, 1998.
[36] O. Kulyk, R. Kosara, J. Urquiza-Fuentes and I. Wassnick. "Human-centered visualization environments". Lecture Notes in Computer Science, vol. 4417, p. 13-75. Springer-Verlag, 2007.
[37] T. L. Naps, G. Rößling, V. Almstrum, W. Dann, R. Fleischer, C. Hundhausen, A. Korhonen, L. Malmi, M. McNally, S. Rodger and J. A. Velázquez-Iturbide. "Exploring the role of visualization and engagement in computer science education". SIGCSE Bulletin, 35(2), 131-152, 2003.