Submitted June 12, 2003.

Assessing mathematics automatically using computer algebra and the internet

C J Sangwin
LTSN Maths, Stats and OR Network, School of Mathematics and Statistics, University of Birmingham, Birmingham, B15 2TT, UK
Telephone 0121 414 6197, Fax 0121 414 3389
Email: [email protected]

http://www.mat.bham.ac.uk/C.J.Sangwin/

Abstract: This paper reports some recent developments in mathematical computer aided assessment which employs computer algebra to evaluate students’ work using the internet. Technical and educational issues raised by this use of computer algebra are addressed. Working examples from core calculus and algebra which have been used with first year university students are described.

1 Introduction

This paper reports some recent developments in mathematical computer aided assessment which employs computer algebra to evaluate students' work using the internet. These developments are of interest to all who teach mathematics and other technical or scientific subjects. This includes class-based school teaching, university lecture courses and distance learning.

This is a paper of two parts. The first is an explanation of the technology itself and is necessarily of a somewhat technical nature. We must justify the claims made for computer algebra marking. The most expedient way to do this is to give some working illustrations by drawing on experience gained while implementing one particular system, known as AIM, at the University of Birmingham, United Kingdom. This allows statements concerning "tailored feedback", "partial credit" and so on to be more fully explained and justified. These examples are of actual problems set in core calculus and algebra to first year university students.

The second part attempts to give a justification of how and why the technology could be used. With reference to educational theory, this section makes practical suggestions on how to assign and evaluate responses to a particular style of educationally valuable task that would not traditionally be possible for large student groups because of the onerous marking load it would place on staff. In addition, one instance of a task of this style reveals an interesting misconception, and automatically generated feedback to address this is implemented.

By combining both aspects we attempt to connect assessment practice with educational theory in a single paper. We also believe that some technical detail is important to motivate others to consider using computer algebra in this way. If the perception exists that implementing such a system is overly difficult, only the very committed enthusiast will explore the ideas further.

Note that the AIM system described below acts both as a working system in its own right and as evidence of the feasibility of the more general ideas described. However, the technology detailed here is not speculative: it works. Other approaches are possible; for example, the Australian system Calmæth uses Mathematica¹. Other commercial products which use computer algebra are currently being developed. The author firmly believes that in the near future all computer aided assessment systems will link computer algebra and assessment to perform the sort of automatic and intelligent marking of mathematics described in this paper [10].

2 The AIM computer aided assessment system

In this section we describe the AIM computer aided assessment (CAA) system and use this to illustrate a number of general features of computer algebra marking. The AIM package combines the Maple² computer algebra system (CAS) with internet delivery to provide a sophisticated mechanism with which to grade students' work. The original AIM system was developed by Theodore Kolokolnikov and Norbert Van den Bergh at Ghent [5], with the core of the latest system by Neil Strickland at the University of Sheffield [15]. The system has been used at the University of Birmingham for three academic years, supported with the help of an institutional Learning Development Unit grant [4].

AIM has been used most to assess work from a core calculus and algebra course comprising 40% of the credit for the first year. Work is set weekly to groups of up to 190 students, and this includes opportunities for independent self study, formative assessment, summative work and formal invigilated examinations taking place in computer laboratories. Work has also been set in a number of other courses within the School of Mathematics and Statistics, on courses for non-specialists, and for schools, both for students and as part of continuing professional development provision for teachers. The server has generally been reliable, and student feedback has been positive. Much of the material described below may be viewed at the URL http://www.mat.bham.ac.uk/aim/

2.1 Using AIM

From the student's point of view the system is internet based, and they may use any mainstream browser. Students do not need Maple, other plug-in components or special software, making it highly accessible. Students with special needs may use their normal browser adaptations, such as rendering the fonts in a larger size. Material is arranged into a number of 'subjects', each of which contains a number of 'quizzes'. These in turn comprise a number of individual 'questions'. Students enter their user name and password and choose a subject for which they are registered. They may then choose a quiz and enter their answers using Maple syntax. In most cases this is relatively simple and intuitive. For example, cos(nπx)/(1 + x^2) is simply entered as cos(n*Pi*x)/(1+x^2). Syntax errors are not penalized. A student may 'validate' their answer prior to it being 'marked'. Validation performs a syntax check, echoes the student's answer and allows the student to confirm that the expression displayed is indeed the answer they want to enter.

¹ Mathematica is a registered trademark of Wolfram Research Inc.
² Maple is a registered trademark of Waterloo Maple Software.


Students who answer incorrectly are encouraged to try the question again immediately. If deemed appropriate, the system can be set to subtract some of the marks, with a default of 10%, for each incorrect attempt. This mechanism operates in addition to partial credit awarded by the question and is designed to reward persistence and diligence. Students respond well to this opportunity to correct mistakes. Screen shots are presented in Figures 1 and 2. Students may return to a partially completed quiz and may review and change answers up to the 'due date'. After this they may view worked solutions if these have been supplied by the teacher. Students very much appreciate the flexibility offered by the internet, and the logged data of actual student use shows a wide variety of access times and locations.

The system requires a web-server to host the Maple software and server components. Once set up, all administrative tasks are performed via the internet, including authoring questions, creating quizzes, obtaining grade sheets and statistical analysis of the variety of answers. It is possible, for example, to obtain grades or make alterations during a lab session from a remote terminal. There are a number of options, depending on whether a particular quiz is to be used as formative or summative assessment. These include guest access, due dates, quiz weight, levels of feedback, availability of solutions, etc. As we shall see, some knowledge of both Maple and LaTeX is necessary, although questions on this particular system are easy to author, and may be shared. If Maple is already in use, then staff with the necessary Maple expertise to author AIM questions will already be present. Much may be gleaned by examining examples of existing question source code. Links to external internet resources, such as notes or applets, are also possible of course. The system itself is open source, although one needs appropriate licences for the underlying Maple software. AIM is becoming widely used in higher education, and there is a JISCMAIL list [email protected] for users and developers. The remainder of this section concentrates on question authoring and other general aspects of computer algebra marking, rather than the student or administrative interfaces.

2.2 Computer algebra marking

To illustrate how the capabilities of the CAS may be harnessed we examine question authoring using AIM. Questions are written as plain text files containing lines of code, and each line begins with an AIM flag. The following calculus question is typical.

Example question 1. Differentiate (x − 1)^3 with respect to x.

This may be authored with the following simple code. The question stem follows the t> flag, is written in LaTeX, and is subsequently displayed to the student on the screen. The answer, a Maple expression, follows the a> flag.

t> Differentiate $(x-1)^3$ with respect to $x$.
a> 3*(x-1)^2
end>

Genuine comparison is made between the student's and the teacher's answer. This comparison is algebraic, using the sophistication of the computer algebra, which is very robust indeed. Note, the system

correctly equates 3(x − 1)^2 and 3x^2 − 6x + 3, marking either correct. Some current commercial testing products compare answers by substituting a number of random values, which is clearly more limited. Other popular systems offer only multiple choice, numeric input, or other objective tests which often limit or distort questions. Furthermore, since the CAS is able to perform calculus, there is no need for the staff member to actually answer the question. It would be perfectly acceptable for the staff member to enter the answer as a> diff((x-1)^3,x) and exploit the CAS's ability to differentiate the question, although the student is usually prevented from doing so. This feature allows one to write truly random questions.

In fact genuinely random questions are relatively easy. Perhaps we wish the student to repeatedly practise differentiation of polynomials with a degree and coefficients taken within a specified parameter range. The corresponding code for such a question exploits the CAS to generate a random polynomial:

local> p
h> p := x^4 + randpoly(x, degree=3, coeffs=rand(-3..3));
t> Differentiate $ @p@ $ with respect to $x$.
a> diff(p,x)
end>

Here the text following the h> flag is hidden Maple code, and the polynomial generated by this is differentiated by the marking scheme and compared with the student's answer. The question designer may choose truly random parameters and reverse engineer the problem automatically and symbolically. Indeed, investigating the question space, that is to say the range of permissible change in any question situation, is a far more rewarding task for the teacher than large quantities of repetitive marking.

There are clearly many ways of generating such random questions. Consider setting the ubiquitous question of solving ax^2 + bx + c = 0. There are a number of question subspaces, containing quadratics with real roots, distinct roots, rational and integer roots. Certain degeneracies exist, such as zero or repeated roots, which one may wish to avoid. Naturally in this case one may begin with a(x − α)(x − β) and ask the CAS to expand this to generate the question stem. One might choose a, α and β to be distinct non-zero integers to preserve certain features of the question. Care clearly needs to be taken to ensure questions of similar difficulty are posed during summative assessment. Care also needs to be exercised that a question is indeed possible. We confess to the bitter experience of setting mathematically impossible problems, and to choosing parameters which subsequently result in a zero denominator. One benefit of randomization is a reduction in the frequency of impersonation of one student by another, copying and other forms of cheating, although we really have little idea of the true extent of this problem outside invigilated labs.
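As a concrete illustration of the quadratic example above, one possible authoring sketch is given below. The parameter ranges, the device of forcing β = α + (positive integer) to keep the roots distinct, and the "largest root" phrasing are illustrative choices rather than details taken from the live course material, and the declaration of several local names on one local> line is assumed to be permitted.

local> a, al, be, q
h> a := rand(1..3)(); al := rand(1..5)(); be := al + rand(1..4)();
h> q := expand(a*(x - al)*(x - be));
t> Find the largest root of the equation $ @q@ = 0 $.
a> be
end>

Since a ≥ 1 and β > α ≥ 1, the quadratic always has two distinct positive integer roots, avoiding the degenerate cases mentioned above.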

Figure 1: Automatic feedback provided by the AIM system

On the other hand, formative sessions can be lively collaborative discussions between groups of students as to an appropriate method, rather than simply an exchange of answers. When students have identical paper based assignments such group work is often discouraged as plagiarism. We consider the opportunities for group work this system offers to be one of the distinct benefits.

Furthermore, we could check for equivalence with an incorrect answer if this is derived from a common mistake, and provide appropriate feedback. Here experience in writing distracters for traditional multiple choice questions is invaluable in predicting such common errors. In addition to checking for algebraic equivalence between a student's and teacher's answer, the CAS allows the expression entered by the student to be manipulated and tested symbolically. One may test for mathematical properties of an answer, a possibility which is examined in more detail in Section 3 below. This may be used to provide tailored feedback to the student based on properties of their answer. As a specific example, consider

Example question 2. Evaluate the integral ∫ x(3x^2 + 8)^3 dx using the substitution u = 3x^2 + 8.

Assume further that a student incorrectly evaluated the integral as (3x^3 + 8)^4 + c. One might, in an attempt to encourage checking activity spontaneously, want to point out that the derivative of (3x^3 + 8)^4 + c with respect to x is equal to 36x^2(3x^3 + 8)^3, not x(3x^2 + 8)^3. This sort of feedback is desirable but onerous to provide when hand marking very large quantities of work. The use of computer algebra allows feedback, including this derivative, to be automatically generated by symbolically differentiating the student's incorrect answer. See, for example, Figure 1.
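To make this concrete, the following is a sketch of how Example question 2 might be authored so that the marking itself differentiates whatever the student enters, using the custom grading (s>) flag described in Section 3.1 below. The question wording, the answer prompt, and the decision to accept any antiderivative are illustrative choices and not taken from the live course material.

t> Evaluate the integral $\int x(3x^2+8)^3 \, dx$ using the substitution $u = 3x^2+8$.
ap> $\int x(3x^2+8)^3 \, dx = $
s> [(ans) -> `aim/Test`(diff(ans, x), x*(3*x^2+8)^3), (3*x^2+8)^4/24 + c]
end>

Because the grading procedure compares the derivative of the student's answer with the integrand, any correct antiderivative is accepted, whatever constant of integration (if any) is used, and the tailored feedback of Figure 1 could quote diff(ans, x) back to the student when the comparison fails.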

Figure 2: A worked solution

We have seen how questions may be randomly generated, with these random parameters substituted into the question and then used to generate an answer. They may also be used to generate a worked solution, by substituting into an answer template, for example. In the case of Example question 2, a worked solution is shown in Figure 2. Note, this worked solution is generated from a template by substituting values for u, the integrand, du/dx and so on. Clearly care is needed to ensure the template is valid for the chosen parameter ranges, and not all question sets lend themselves to such templates. Obviously there is scope to be much more creative than the simple Example question 1. Equally clearly, the more sophisticated one tries to be, the more complex the question source code becomes. Fortunately question source code files are plain text, making them easy to view, copy, modify and share. Once a selection of questions has been authored, or obtained from other users, modification is relatively simple. A range of questions have been contributed by users and are readily available with the system itself. Details of how to obtain more technical background information are given in [15].
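For reference, the substitution steps which such a worked solution template instantiates for Example question 2 run, in outline, as follows; this is a reconstruction of the kind of content shown in Figure 2 rather than a verbatim copy of it.

\[ u = 3x^2 + 8, \qquad \frac{du}{dx} = 6x, \]
\[ \int x(3x^2+8)^3\,dx = \frac{1}{6}\int u^3\,du = \frac{u^4}{24} + c = \frac{(3x^2+8)^4}{24} + c. \]

In the template, the parameter-dependent pieces, here u, du/dx, the rewritten integrand and the final answer, are the values substituted in; the surrounding explanatory text is fixed.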

2.3 Limitations

Not all tasks can be automatically marked by this system. Only those mathematical criteria which may be checked with the computer algebra system can, of course, be implemented. Nevertheless a large proportion of core topics, such as questions in algebra, calculus, linear algebra and differential equations, are included, making the tool very versatile. However, it is difficult to envisage how proof writing or other reasoning skills would be assessed in this way.

This tool is powerful and sophisticated and opens new possibilities, but it will clearly not automatically mark all question types. The current AIM system has relatively poor facilities for assessing individual steps in questions. However, the use of computer algebra makes automatic follow-on marking perfectly possible in many situations by manipulating a student's answer symbolically. For example, assume a student entered an incorrect answer for part (i) of

Example question 3. (i) Find a quadratic with roots at x = 1 and x = 3. (ii) What is the gradient at x = 2?

As follow-on marking, one could differentiate their answer and substitute in x = 2 to check whether they had calculated the gradient of their function at x = 2 correctly. Facilities for implementing such multi-step questions are being developed; a sketch of the underlying check is given at the end of this section.

The AIM system trusts the teacher, and unfortunately it is perfectly possible to set impossible questions, or to set up a perpetual loop, by poor choices of parameters or marking procedure. Furthermore, mis-spelling local variable names, and so effectively using a variable that has never been assigned a value, causes problems and has been known to crash the server in extreme cases: care with authoring questions needs to be exercised. Of course, mistakes in setting conventional written examinations have been known to cause havoc too! Restricting the Maple available to the teacher might help to solve these problems but would greatly diminish the system capabilities. Some protection against the most obvious forms of student abuse exists within the system. Students are limited in the range of Maple commands that will be evaluated, and the teacher is warned if a student attempts to invoke an operating system command through Maple. Students are also prevented from evaluating expressions that return ridiculously large integers, which prevents an obvious form of denial of service attack.

Furthermore, since AIM is specifically designed to mark all algebraically equivalent answers as correct, it is harder to write questions testing very simple algebra. Recall that the whole point of using computer algebra is to check algebraic equivalence. Thus, the system does not usually distinguish between answers which are equivalent, such as x^3 x^2 and x^5, or 1/2 and 3/6. It is possible to do so, although not using the default answer flags. For example, one can distinguish between x^2 + 2x + 1 and (x + 1)^2, but only using a custom procedure, not the a> flag. Furthermore, when a student enters an answer to be validated, some basic simplification takes place: 12 cos(−x)/9 will normally be echoed back to a student as (4/3) cos(x). These issues often force the staff member to think carefully about precisely what forms are acceptable as an answer, or stimulate discussion in the laboratory or workshop environment.
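Since the multi-step facilities were still under development, the follow-on check for Example question 3 cannot be shown here as AIM question source. The following plain Maple sketch simply illustrates the underlying symbolic test; the names are illustrative only.

# ans1: the (possibly incorrect) quadratic the student gave in part (i)
# ans2: the gradient the student reports in part (ii)
# Award part (ii) if ans2 is the gradient of *their* part (i) answer at x = 2.
followon_ok := (ans1, ans2) -> evalb(simplify(ans2 - subs(x = 2, diff(ans1, x))) = 0);

# A student who wrongly answered (x-1)*(x-4) in part (i), but then differentiated
# their own quadratic correctly, is still credited for part (ii):
followon_ok((x - 1)*(x - 4), -1);   # returns true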

3 Mathematical exemplification

New technology will only be of benefit if used in a sensitive and constructive way. CAA provides some clear benefits to the students, which include flexibility of access time and location, and the opportunity for repeated self testing and practice. Instant and possibly tailored feedback may be given, and answers are marked objectively against set criteria. The staff member has the possibility of a reduced marking load, and tests may be recycled and improved after an initial use.

The time saved may be used for richer feedback on tasks that cannot be computer marked. For the remainder of this paper we focus on styles of mathematical questions and their implementation, to concentrate on the new features computer algebra marking provides.

Research, such as [3], shows that a powerful tool available to the mathematical educator is the design of assessment tasks. However, [1] suggests that "changing assessment alone will not be sufficient to orient students to deep approaches to learning", and further concludes that changes are needed to students' "total experience of learning". This includes factors such as students' conception of the nature and purpose of the subject, their motivations for studying mathematics, their perception of staff attitudes, volume of work, and so on. Simply adding computer aided assessment is unlikely to be beneficial. The position and purpose of tasks in the whole learning cycle needs to be considered.

The study reported in [9] examined the variety and relative frequencies of tasks in undergraduate mathematics material. Tasks, which may be described using the taxonomies of [9] or [14], include factual recall, use of routine procedures, writing of proof, criticism of incorrect arguments, and so on. This study involved an analysis of some 489 questions set to undergraduate students at a United Kingdom university. The conclusion was that the majority of current work may be successfully completed by routine procedures or minor adaptation of results learned verbatim, without the use of higher skills, as defined in [12]. Many students appear to be most comfortable tackling problems for which they have previously seen a worked solution. This procedural behaviour may be encouraged by particular schemes of work, and unless students are occasionally moved away from this comfort zone it is unlikely that they will develop critical thinking and other higher order skills. Such schemes of work may even encourage strategic learning.

One type of task that requires the use of higher level skills asks students to generate an example, or multiple examples, of an object satisfying certain properties. For concreteness, the following questions are exemplars of this style.

Example questions 4
1. Give examples of differentiable functions, each with a turning point at x = 1.
2. Find a cubic polynomial p(x) with the following properties: (i) p(0) = 0, (ii) p(1) = 1, (iii) p is a bijection of the real line to itself.
3. Find a singular 5 × 5 matrix with no repeated entries.

Educational research, such as [2, 18], has already shown that one very effective way to learn a concept is to generate examples of objects. In particular [2, p 293] concluded that "the generation of and reflection upon examples provided powerful stimuli for eliciting learning events". The work of [17] went further and suggested that learning itself "is seen as growth and adaptation of personal, situated, example spaces; teaching involves providing situations in which this can take place". However, students show a reluctance to generate their own examples [8]. Such tasks were found in the analysis of [9], but were rare. Perhaps one reason for this is the onerous marking load they would place on staff faced with a rich variety of correct answers, each of which requires close attention. It may be that asking students to generate their own examples shifts their attention from a local focus on the steps of a particular procedure to a more global awareness of the criteria which a correct example satisfies.
This appears very similar to the "delicate shift of attention" which [7] suggests is central to the process of mathematical abstraction.

By encouraging the activity of example generation we may help in this process and begin to change students' conception of mathematics from the fragmented body of knowledge found by [1], and others, to a more coherent whole. It is firmly believed that questions of this style are possible and desirable at all levels of mathematical education, and many, if not all, questions and topics have the potential to be assessed in this way. This may be achieved by altering an existing question to provide a little freedom.

Example question 5. How may you write a half? Give as many examples as possible.

Answers to this question include "a half", 1/2, 0.5, 50% and 2/4 (and many other fractions, including non-integer and non-positive numbers). Continuing: 2^(-1), 4^(-1/2), 1/√4, log(√2)/log(2), and so on. Furthermore, one may use 0.1 in base 2. Now, of course, you ask what is a half in base 3? Also, 0.4999... (opening a can of worms), or even 0.0111... in base 2. What about base −2, or other negative number bases [6]? Plenty of scope here for real and serious discussion of the simplest questions of number representation, and we haven't even begun to consider historical perspectives on this question.

Absent from much of the educational literature are practical methods for marking complex examples generated as solutions to coursework problems. Marking some questions of this style might involve significant calculations on the part of the staff member. When teaching large groups of students, such as those at university, practical constraints impinge even when educational theory would advocate certain activities. When a routine algebraic procedure exists to mark such a question, the work of [11, 12] suggests that this may be performed by a computer algebra assessment system such as AIM. Therefore, AIM and equivalent systems provide new opportunities to pose questions which educational theory has demonstrated to be greatly beneficial, but which would be impractical to set if marked by hand. Questions of this style have been incorporated into a core calculus and algebra course at the University of Birmingham. Space only allows one example activity to be considered in detail below.
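As an aside, the first of Example questions 4 can be marked in exactly this spirit. The sketch below uses the custom grading (s>) flag described in Section 3.1 below; note that it only tests the stationary-point condition f′(1) = 0, so a complete implementation would also need to check that the derivative actually changes sign at x = 1, and the supplied model answer (x − 1)^2 is simply one of infinitely many correct responses.

t> Give an example of a differentiable function with a turning point at $x=1$.
ap> $f(x):=$
s> [(ans) -> `aim/Test`(subs(x=1, diff(ans, x)), 0), (x-1)^2]
end>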

3.1 An example activity: odd and even functions

In the section below we consider in detail the following question concerning odd functions.

Example question 6. Give an example of an odd function.

The task appears quite trivial; however, in examining the students' answers we reveal a common misconception which might not otherwise have surfaced. Feedback to address this has been implemented to provide subsequent student groups with the benefit of this insight. First some practical details of the implementation. Mathematically, to check a given function is odd we calculate whether f(x) = −f(−x) for all x ∈ R. This can be validated algebraically in a straightforward way using Maple. One particular AIM code to implement this question is

t> Give an example of an odd function.
ap> $f(x):=$
s> [(ans) -> `aim/Test`(ans+subs(x=-x,ans),0), x^5]
end>

The ap> flag changes the answer prompt, to discourage students from entering 'f(x) =' or 'y =' etc. as part of their answer. Notice the use of the s> flag to write a custom grading procedure and supply an answer. Here the in-built Maple procedure `aim/Test`(a,b) checks the equality of a and b using a variety of algebraic tests. One can symbolically manipulate the student's answer, and equivalently test whether f(x) + f(−x) = 0, or f(x) = −f(−x). The teacher has supplied the function f(x) = x^5 as a correct answer, via the second argument to the s> flag.

A group of undergraduate students were asked for three different odd functions, using the AIM system to mark and collate the results. The results generated by the students have been recorded in Table 1. The table entry 'x^3, (15, 5, 1)' means that 15 people gave x^3 as their first answer, 5 as their second and 1 as their third. To a mathematician, a very natural activity when confronted with a raw list of students' answers showing this variety is to spot patterns, and to attempt to arrange this data. One entirely subjective arrangement has been chosen and used to compile Table 1. Asking students to undertake this would expose them to a wide variety of examples. Furthermore, asking questions about them might prompt debate, conjecture, generalization and hence deeper understanding.

Function             (1st, 2nd, 3rd)   Category
x                    (3, 3, -)         Odd powers
x^3                  (15, 5, 1)
x^5                  (-, 8, 3)
x^7                  (-, -, 1)
x^3 + x              (1, -, -)         Combinations of odd powers
x^3 - x              (1, -, -)
x^5 + x              (-, -, 1)
x^5 + x^3            (-, -, 1)
x^7 + x^3            (-, 1, -)
3x^5                 (-, 1, -)         Multiples of odd powers
5x^7                 (-, -, 1)
5x^3 + x             (-, 1, -)         Sums and multiples
5x^3 - 3x            (1, -, -)
7x^5 - 5x            (-, 1, -)
9x^7 - 7x            (-, -, 1)
x^3 + 2x             (1, -, -)
x^5 + 3x^3 + 2x      (-, 1, -)
6x^5 - x^3 + x       (-, -, 1)
x^5 - 4x^3 + x       (1, -, -)
4x^7 - 8x^5 - x^3    (-, 1, -)
sin(x)               (3, 1, 1)         Trig
1/x                  (-, 2, 1)         Discontinuous at x = 0

Table 1: Student generated odd functions

On examining this data we spot a number of interesting features. Firstly, the majority of the coefficients which are not equal to 1 are odd, e.g. 3x^5 and 5x^7. Clearly this is not necessary.

But do the students realize this? It appears that students are playing safe and making everything odd. The research of [16] used the notions of concept definition and concept image to draw a distinction between the formal definition and the cognitive structure in the individual's mind. Students' concept image appears to include only a subset of the functions captured by the agreed concept definition. Feedback to address this misconception has been implemented in the AIM system. For example, if a student provides an odd function with odd coefficients all greater than one, the system responds in the manner shown in Figure 3. Note, the system modifies the student's answer and gives some positive and hopefully thought provoking feedback.

Looking again at the table, significantly more people use x^3 than x, which is over complicated. Furthermore, f(x) = 0 is odd, but is absent, corroborating previous research such as [13] which demonstrated that such singular (sometimes referred to as trivial or degenerate) examples are often ignored by students. When asked for a function that was both odd and even (there is only the constant function f(x) = 0) only 35% of the students were eventually able to answer correctly, 30% failed to answer the question at all, and the remainder gave an incorrect answer. Examining these incorrect answers revealed that 24% of the students simply added an odd function to an even function, perhaps to "give a function that is odd and even". Examples of this include x + x^2, x^2 + x^3 and x^5 − x^6.

Unless an instructor knows such misconceptions exist it is impossible to intervene to correct them. The freedom of expression inherent in this style of question allows such misconceptions to surface more naturally than with more traditional routine questions. The requirements of the system syntax do restrict which expressions one is able to enter when automatically marking such questions. For example, functions defined piecewise are difficult to enter. However, a tutorial might be an ideal setting in which to discuss such examples. The activity of collating these examples and finding patterns could also be used to generate discussion. Some suggestions of possible student activities connected with odd functions are given below.

• Prove each of the functions the students have given is odd.
• Sketch the graph of each of the given functions.
• Can we add odd functions together?
• How can we combine odd and even functions to maintain oddness? For example, all constant functions are even, and multiplying an odd function by an even function gives an odd function, so multiplying by a constant keeps an odd function odd.

In such a tutorial setting, students will have already answered the question and thus they may approach the discussion with confidence that they have something, namely their examples, to contribute. Indeed experience with students reveals they contribute not only their distinct examples, but additionally their different methods. Clearly there are analogous activities for even functions; we have not recorded the corresponding data here. Using such tasks in a tutorial session after a computer test provides a valuable opportunity to provoke mathematical enquiry and to discuss these concepts further. Furthermore, using students' examples in subsequent tutorials helps to embed the use of CAA into their overall learning experience.

Figure 3: Positive feedback to address misconceptions
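The feedback shown in Figure 3 depends on detecting this pattern in whatever the student enters. A minimal Maple sketch of such a property test is given below; it illustrates the idea only, is not the code used in AIM, and deliberately handles only polynomial answers with integer coefficients.

# True when p is a polynomial in x whose integer coefficients are all odd and all
# greater than 1 in absolute value -- the 'playing safe' pattern noted in Table 1.
odd_large_coeffs := proc(p) local cs;
  if not type(p, polynom(integer, x)) then return false end if;
  cs := [coeffs(expand(p), x)];
  evalb(remove(c -> type(c, odd) and abs(c) > 1, cs) = []);
end proc;

odd_large_coeffs(3*x^5 + 5*x^3);   # true: a candidate for Figure 3 style feedback
odd_large_coeffs(x^3 + 2*x);       # false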

4 Conclusion

This paper contains two quite different themes. The first is a practical examination of computer aided assessment which takes advantage of computer algebra. The second is motivated by educational research and examines one particular style of mathematical question: that of generating examples of objects which enjoy certain properties. By combining both aspects in a single paper we have shown how practical schemes of work may be developed which are motivated by educational research. Conversely, we show how the technology may be used as a tool for uncovering students’ misconceptions about simple topics, such as that of an odd function. Computer algebra evaluation of students’ work over the internet provides the pragmatic educator with significant new potential. This might simply be to perform marking of existing problem sets. Beyond this, one may provide instant tailored feedback based on properties of a student’s answer. Partial credit is possible within the existing system, and it is clear how better follow-on marking support could be developed. In addition, the style of question which encourages students to create examples is no longer impractical for large groups of students. No doubt there are many alternative approaches to using computer algebra marking. Further development both of the underlying technology, and availability of a variety of question sets, would clearly be of great benefit. Further research should examine the impact on learning of computer algebra based CAA.


References

[1] K. Crawford, S. Gordon, J. Nicholas, and M. Prosser. Qualitatively different experiences of learning mathematics at university. Learning and Instruction, 8(5):455–468, 1998.

[2] R. P. Dahlberg and R. P. Housman. Facilitating learning events through example generation. Educational Studies in Mathematics, 33:283–299, 1997.

[3] G. Gibbs. Using assessment strategically to change the way students learn. In Assessment Matters in Higher Education: choosing and using diverse approaches, pages 41–53. Society for Research into Higher Education & Open University Press, 1999.

[4] D. F. M. Hermans. Embedding intelligent web based assessment in a mathematical learning environment. In Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC 2002), Internet Accessible Mathematical Computation 2002 workshop, Lille, July 2002.

[5] S. Klai, T. Kolokolnikov, and N. Van den Bergh. Using Maple and the web to grade mathematics tests. In Proceedings of the International Workshop on Advanced Learning Technologies, Palmerston North, New Zealand, 4–6 December 2000. http://allserv.rug.ac.be/~nvdbergh/aim/docs/ (viewed August 2002).

[6] D. E. Knuth. The Art of Computer Programming, volume 2: Seminumerical Algorithms. Addison-Wesley, 1969. p 171 and exercises on pp 176, 177 and 179.

[7] J. Mason. Mathematical abstraction as the result of a delicate shift of attention. For the Learning of Mathematics, 9(2):2–8, 1989.

[8] R. C. Moore. Making the transition to formal proof. Educational Studies in Mathematics, 27:249–266, 1994.

[9] A. Pointon and C. J. Sangwin. An analysis of undergraduate core material in the light of hand held computer algebra systems. International Journal for Mathematical Education in Science and Technology, accepted.

[10] C. J. Sangwin. Assessing higher skills with computer algebra marking. JISC Technology and Standards Watch, TSW 02–04, September 2002. http://www.jisc.ac.uk/techwatch/ (viewed September 2002).

[11] C. J. Sangwin. Encouraging higher level mathematical learning using computer aided assessment. In Proceedings of the International Congress of Mathematicians Satellite Conference on Mathematics Education, Lhasa, Tibet, August 12–17, 2002.

[12] C. J. Sangwin. New opportunities for encouraging higher level mathematical learning by creative use of emerging computer aided assessment. International Journal for Mathematical Education in Science and Technology, accepted.

[13] A. Selden and J. Selden. Research perspectives on concepts of functions. In G. Harel and E. Dubinsky, editors, The Concept of Function, volume 25 of Mathematical Association of America Notes, pages 1–16. Mathematical Association of America, 1992.


[14] G. Smith, L. Wood, M. Coupland, and B. Stephenson. Constructing mathematical examinations to assess a range of knowledge and skills. International Journal of Mathematics Education in Science and Technology, 27(1):65–77, 1996.

[15] N. Strickland. Alice interactive mathematics. MSOR Connections, 2(1):27–30, 2002. http://ltsn.mathstore.ac.uk/newsletter/feb2002/pdf/aim.pdf (viewed December 2002).

[16] D. O. Tall and S. Vinner. Concept image and concept definition in mathematics, with special reference to limits and continuity. Educational Studies in Mathematics, 12:151–169, 1981.

[17] A. Watson and J. Mason. Extending example space as a learning/teaching strategy in mathematics. In Proceedings of the Annual Conference of the International Group for the Psychology of Mathematics Education (PME 26, Norwich, United Kingdom), volume 4, pages 378–385, 2002.

[18] A. Watson and J. Mason. Student-generated examples in the learning of mathematics. Canadian Journal for Science, Mathematics and Technology Education, 2(2):237–249, 2002.

