Improving Students’ Learning of Statistics: The Impact of Web-based Learning Support on Student Outcomes
A thesis submitted in fulfilment of the requirements for the award of the degree
DOCTOR OF PHILOSOPHY
from the
UNIVERSITY OF WOLLONGONG
By
Norhayati Baharun
Diploma in Statistics, MARA Institute of Technology (ITM)
Bachelor of Applied Science (Hons.) (Applied Statistics and Operations Research), University of Science Malaysia (USM)
Master of Science (Statistics), University of Science Malaysia (USM)
School of Mathematics and Applied Statistics
2012
Certification
I, Norhayati Baharun, declare that this thesis, submitted in fulfilment of the requirements for the award of Doctor of Philosophy in the School of Mathematics and Applied Statistics, University of Wollongong, is my own original work unless otherwise referenced or acknowledged. The document has not been submitted for qualifications at any other academic institution.
Norhayati Baharun
Dedication
To my dear husband, Ariff, and my two girls, Namirah and Fatihah, who are the inspiration for my life and my study. Their love, understanding and support have always kept me strong through the ups and downs of my journey. Thank you Ariff, Namirah, and Fatihah, from the bottom of my heart, for being there for me, in happiness and sadness.
Acknowledgement
First and foremost, I thank God, who has brought my epic journey to its end.
The time has come for me to express my sincere gratitude to the many people who have contributed to and supported my studies over the past three and a half years in Australia. I express my deep sense of gratitude to my dear supervisor, who is also my idol, Associate Professor Anne Porter. She is an inspired educator who has fired me with her enthusiasm for teaching statistics and has allowed me to work with her as a valued colleague. Thank you, “Dr. Anne”, for being a very supportive supervisor; at times you were like my “Mum” when I missed her so badly, being apart in a different country. I am so grateful for the opportunity to work with “Dr. Anne”. Her insightful comments and continued guidance on my work have provided an invaluable experience for me.
I thank Dr. Carole Birrell, Professor Jacqui Ramagge, and Dr. Mark Nelson for reviewing and providing suggestions to improve my thesis. Their timely and useful comments have helped me greatly. This has enabled me to write up this thesis professionally and with confidence.
My beloved parents, Baharun (my Dad) and Normah (my Mum), deserve special thanks for their continued moral support and encouragement, though they are thousands of miles away from Australia. They patiently listened when I needed to talk about my life and studies through long hours of calls to Malaysia. The motivation they gave has kept me going through the difficulties of the past three and a half years.
I thank my husband and my children, who have been understanding, helpful, and cheerful throughout the period of my study. When I was busy with my work, my husband played a big part in taking over my responsibilities as a mother and a housewife. A million thanks to my family, for believing in me and for their patience with my frequent absences, both emotional and physical.
Last but not least, I would also like to thank Sue, Carolyn, Kerrie, and Joe, who have provided the administrative support that has made this thesis a reality at the end of my journey. I thank my friends, Rebecca, Ali, Bothaina, and Maman (among others), who have suffered the same struggle to produce a thesis.
Abstract
This thesis investigated the provision of learning supports within an e-learning environment in terms of its impact on student learning outcomes in mathematics and statistics. A review of the literature on relevant learning theories was used by the researcher to describe and explain the educational practices within an online learning environment and her involvement through three case studies. The two primary case studies involved two introductory level statistics subjects. These subjects were taught and co-ordinated by the same lecturer with similar assessment practices and high quality resources. A heavy emphasis was placed on the completion of tasks, the structure of which drew on a variety of theoretical perspectives (behaviourism, cognitivism, constructivism, social constructivism, connectivism, and experiential learning). The third case study involved adapting and trialling learning designs in two mathematics subjects. A mixed methodology was used to gather evaluation data in the case studies. This included survey responses, e-learning forum analysis, student tracking statistics, and document analysis, in order to triangulate the outcomes at each of the four stages of Alexander and Hedberg's (1994) model for evaluating innovation (Design, Development, Implementation, and Institutionalisation).

The first case study addressed students' concerns about the study of statistics. The initial iteration involved 86 postgraduate students enrolled in GHMD983 August/Spring 2008 at the University of Wollongong. The analysis of responses to prompts given to students by the lecturer in the forum revealed that 58% of the students were anxious about studying statistics due to their lack of mathematical skills and unpleasant past experiences when taking similar subjects. A suite of video resources on major topics was developed to support student learning and was made available via the subject's e-learning site. At the end of the session, survey responses revealed that 98% of the students favoured the use of video resources, reporting that these reduced their anxieties and helped them learn and understand statistics better.

As the first implementation in the first case study demonstrated that video resources could potentially assist students' learning of statistics and ease their anxieties, the second case study was initiated with 89 undergraduate students enrolled in the first year undergraduate subject, STAT131, March/Autumn 2009. The suite of video resources was extended to include the major topic areas of the first year introductory statistics subject. These resources were also provided as a learning support within the subject's e-learning site. At the end of the session, survey responses revealed that only 39% of the students regarded the video resources as useful for their learning. This finding contrasted with the results from the postgraduates in the first case study on the worth of videos for providing learning support. However, unlike the postgraduates, the undergraduates commented that they were not aware of the existence of the video resources in the e-learning site. In contrast to the postgraduate subject, the videos were placed in a by-type resource folder rather than weekly folders.

The second iteration in the first case study, in August/Spring 2009, addressed an issue related to learning designs: how could the video resources be embedded more effectively within the e-learning site so as to support students' learning of statistics? The researcher reviewed the literature to identify an appropriate learning design framework and representation to more effectively embed the support resources within an online learning environment. On the basis of Oliver and Herrington's (2001) framework, a Learning Design Visual Sequence (LDVS) representation (developed from the Learning Designs Project, available at http://www.learningdesigns.uow.edu.au/) was adapted for this study. This led to the implementation of learning design maps within weekly folders embedded in the e-learning system. As a result, learning design maps (in DOC files) were implemented in the subject, with the maps placed in the subject's e-learning homepage within weekly folders, and the students were also provided with by-type resource folders. Outcomes from the surveys conducted at the end of the session revealed that the video resources were still considered to be of great importance for student learning by 96% of the students, but there were still some technical issues to resolve.

In the third implementation, in SHS940 August/Spring 2010, resources were provided in the e-learning site via two designs: learning design maps (in PDF files) within the weekly folders, and by-type resource folders. Data from the e-learning student tracking in week 12 suggested that students preferred to use both designs rather than just the maps within weekly folders. Sixty-one per cent of the students had used both the by-type resource folders and the learning design maps within weekly folders to access resources in the e-learning system, but none of them used only the learning design maps within weekly folders. In August/Spring 2011, both the weekly folders without the maps (but containing all the components that the maps suggested, i.e. resources, tasks, and supports) and the by-type resource folders were provided to students in the e-learning site. There were also changes in the assessment system, with the inclusion of a draft and redraft approach for the first two assignments, in addition to a test and retest approach for the remaining three assessments. The final outcomes demonstrated that the proportion of students attaining top grades (High Distinction and Distinction) in 2008 (67%) was significantly higher (Z = 3.63, p < 0.001) than in 2009 (37%). The overall top grades rate between 2010 and 2011 (83%) was also significantly higher (Z = 4.72, p < 0.001) than the overall top grades rate between 2008 and 2009 (54%). On average, the proportion of students who failed in 2010 and 2011 was approximately 2%, compared to 4% in both 2008 and 2009. From this implementation, it appeared that the provision of both designs, (i) weekly folders associated with resources, tasks, and supports, and (ii) by-type resource folders, along with the draft and redraft of assignments, sufficiently supported students' learning of the subject and led to the achievement of high grades with few failures. More importantly, students reported far less anxiety than when the study was initiated.

The second implementation in the second case study, in March/Autumn 2010, also addressed the issue of subject design, specifically how the resources could be displayed and delivered effectively within the e-learning system so that they would be beneficial for all students. In this study, there was a focus on reducing high failure rates. Design improvements included the use of maps in HTML files within weekly folders in addition to by-type resource folders. The CAOS test (Comprehensive Assessment of Outcomes in Statistics, from https://app.gen.umn.edu/artist/) was administered in the subject at the beginning (pre-test) and the end of the session (post-test). This test was used as an external measure to assess the students' basic statistical literacy and reasoning in STAT131. While students welcomed the video resources and new designs, failure rates remained high. As a consequence, in March/Autumn 2011 a Headstart program was introduced in STAT131. This program allowed students to access the first module of work (including the first assessment) in the e-learning site four weeks prior to the start of the formal session. The use of videos was integral to the design of the Headstart program. In regard to the impact of the Headstart program, a one-way ANOVA and Scheffe post-hoc tests revealed a statistically significant difference (F(2, 192) = 5.301, p = 0.006) in mean final marks between the students who engaged with the program (72 marks) and those who did not (63 marks). An analysis of differences between two proportions showed that the proportion of students who perceived the video resources as useful for learning increased significantly (Z = 1.91, p = 0.028) from 39% in 2009 to almost 60% in 2011. An analysis of student outcomes showed that students with access to the Headstart program, the assessment involving draft and redraft of assignments, the improved learning design maps within weekly folders, and the by-type resource folders performed better in their assessment tasks than students in previous years without access to these innovations. A test for differences in proportions also showed that the failure rate fell significantly (Z = 1.99, p = 0.023) in 2011 (13%) compared to the overall failure rate between 2005 and 2010 (19%).

The third case study examined the effectiveness of the subject design and the applicability of learning design maps in a different context, involving prospective primary and early childhood teachers throughout three consecutive teaching sessions. These students were enrolled in a pair of mathematics subjects, MATH131 and MATH132, at the main campus of the University of Wollongong and at four remote campuses. Both subjects used high quality resources, as confirmed by an evaluation of change and a measure of the four dimensions of high quality learning (student engagement, acknowledgement of the learning context, challenging students, and providing practice), and used a continuous assessment system including online practice quizzes, considered to be one of the most important resources for learning throughout the three teaching sessions. Several assessments were regarded by students as extremely useful, including assignments (87%), quizzes (90%), and the mid-session exam (82%). The failure rate remained consistently under 10% from March/Autumn 2009. The subject design evaluated encompassed the delivery of online resources via fortnightly learning design maps and by-type resource folders, as well as the assessment system. An analysis of data from the e-learning student tracking statistics showed that students varied in their preferences for accessing resources, with 30% of the students in MATH132 using the fortnightly learning design maps and 57% using the by-type resource folders to access resources provided in the e-learning site. In MATH131, 21% of the students used the fortnightly learning design maps, and 51% used the by-type resource folders. This suggested that students preferred the by-type resource folders to the fortnightly learning design maps for accessing resources in the e-learning system. However, a test for differences in two proportions showed that the proportion of accesses to resources in the e-learning system via the fortnightly learning design maps in MATH131 (47%) was significantly higher (Z = 9.43, p < 0.0001) than in MATH132 (27%).

The final stage of institutionalisation demonstrated that university staff have adopted the learning design map approach for teaching and learning in other disciplines (SCIE911, Fundamentals of Science Communication). Two further mathematics subjects with high failure rates are preparing to implement Headstart programs. The thesis concluded with a view to future technological learning supports in statistics education and made suggestions for improvement. The exploration of these three case studies has reinforced the identified need for an embedded learning support system built upon high quality learning resources, a redemptive assessment system, and good video and print support resources. Several approaches to assessment, including test and retest, draft and redraft of assignments, minor and major assessments, and practice quizzes, appeared useful for supporting students' learning and for creating an online learning support system wherein students at risk are identified for further learning support. Learning designs need to vary with students' learning context, so that students can optimally locate resources. Students have strong preferences in how they wish to access resources. The recommendation from this study is to include both by-type resource folders and weekly (topic) folders aligning the primary high quality resources, tasks, and supports. A learning support system may not be all that is required; the constant gathering of evidence to direct change is also necessary. In this study, evidence pointed to the adoption of an innovative Headstart program which allowed more students to successfully complete their studies.
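The statistical comparisons of grade, failure, and usage rates reported above are tests for the difference between two independent proportions. A minimal sketch of the usual test statistic, assuming the standard pooled two-proportion z-test (the abstract does not specify the exact variant used):

$$
\hat{p} = \frac{x_1 + x_2}{n_1 + n_2}, \qquad
Z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}\,(1-\hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}
$$

where $\hat{p}_1 = x_1/n_1$ and $\hat{p}_2 = x_2/n_2$ are the observed proportions in the two cohorts of sizes $n_1$ and $n_2$ (for example, the 2008 and 2009 top-grade rates of 67% and 37%). Under the null hypothesis $p_1 = p_2$, $Z$ is approximately standard normal, so a value such as $Z = 3.63$ corresponds to $p < 0.001$.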
Publications
The following presentations and publications have emerged from this thesis to date.
Papers in refereed conference proceedings
1. Baharun, N. & Porter, A. (2009). Teaching statistics using a blended approach: Integrating technology-based resources. In Same places, different spaces. Proceedings ascilite Auckland 2009. Available at http://www.ascilite.org.au/conferences/auckland09/procs/baharun.pdf
2. Baharun, N. & Porter, A. (2009). Removing the Angst From Statistics. In Mohd Tahir Ismail and Adli Mustafa (Eds.), 5th Asian Mathematical Conference Proceedings (Volume III), June 2009, Penang, Malaysia, pp. 250-255. ISBN: 978-967-5417-55-9 (http://math.usm.my/). Available at http://ro.uow.edu.au.ezproxy.uow.edu.au/infopapers/742/
3. Baharun, N. & Porter, A. (2009). Learning Designs to Engage and Support Learners. Paper presented at The Future of Learning Design Conference, December 2009, University of Wollongong, Australia. Available in the online proceedings at http://ro.uow.edu.au.ezproxy.uow.edu.au/fld/09/Program/5/
4. Baharun, N. & Porter, A. (2010). The use of technology to support student learning. In D. Boorer, H. Dhindsa, J. S. H. Q. Perera, S. Sithamparam, Wan Mat Sulaiman & Yahya Othman (Eds.), Bridging worlds: Making connections in education (pp. 217-229). Bandar Seri Begawan: Universiti Brunei Darussalam. ISBN: 99917-1-238-0. Available at http://ro.uow.edu.au.ezproxy.uow.edu.au/cssmwp/47/
5. Porter, A. & Baharun, N. (2010). Developing a learning support system for students in mathematics rich disciplines. In R. Robert and B. Dave (Eds.), Going Mainstream. Monograph of the 5th Workshop on the Impact of Tablet PCs and Pen-based Technology on Education (WIPTE). Purdue University Press: Indiana, United States of America. ISBN: 978-1-55753-574-0. Available at http://www.wipte.org/docs/2010/2010paper_abstracts.pdf
6. Baharun, N. & Porter, A. (2011). Improving Statistical Education through a Learning Design. In E. Beh, L. Park and K. Russell (Eds.), Proceedings of the Fourth Annual ASEARC Conference, University of Western Sydney, Parramatta, Australia. Available at http://www.uow.edu.au/content/groups/public/@web/@inf/@math/documents/doc/uow096236.pdf
Papers in conference proceedings
1. Porter, A., Baharun, N. & Algarni, A. (2009). Tablet PCs in the Grander Scheme of Supporting Learning. Paper presented at the Australasian Tablets in Education Conference (ATiEC), December 2009, Monash University, Australia. Available at http://www.monash.edu.au/eeducation/events/atiec2009/presentations.html
2. Baharun, N. & Porter, A. (2010). The impact of video-based resources in teaching statistics: A comparative study of undergraduates to postgraduates. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the Eighth International Conference on Teaching Statistics (ICOTS8, July 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute. Available at http://www.stat.auckland.ac.nz/~iase/publications/icots8/ICOTS8_C150_BAHARUN.pdf
3. Baharun, N. & Porter, A. (2010). A learning design to support student learning of statistics within an online learning environment. In H. MacGillivray and B. Phillips (Eds.), Proceedings of the Seventh Australian Conference on Teaching Statistics (OZCOTS 2010), December, Fremantle, Australia. Available at http://www.oznznets.com/ozcots2010 (retrieved 30 December 2010 from http://opax.swin.edu.au/~3420701/OZCOTS2010/OZCOTS2010_paper_Baharun.pdf).
4. Porter, A. & Baharun, N. (2011). Stepping into Statistics: Providing a Head Start for students. In L. Paditz and A. Rogerson (Eds.), Proceedings of the 11th International Conference, Turning Dreams into Reality: Transformations and Paradigm Shifts in Mathematics Education, Rhodes University, Grahamstown, South Africa, pp. 276-281. ISBN: 83-919465-0-9.
Table of contents
Certification
i
Dedication
ii
Acknowledgement
iii
Abstract
iv
Publications
viii
Table of contents
x
List of tables
xvii
List of figures
xxii
CHAPTER 1
INTRODUCTION
1.0
The context
1
1.1
What is learning support?
5
1.2
Supporting student learning using technology
7
1.3
The problem
7
1.4
Objective and purpose
9
1.5
Approach and methodology
10
1.5.1
Evaluation framework
11
CHAPTER 2
ISSUES IN STATISTICS AND MATHEMATICS EDUCATION
2.0
Technology in statistics education
16
2.1
Teaching and learning statistics
17
2.1.1
Statistical learning
19
2.1.2
Statistical concepts
20
2.2
Use of technology for improving learning outcomes
24
2.2.1
Statistical software packages
25
2.2.2
Educational software
26
2.2.3
Spreadsheets
28
2.2.4
Applets/stand-alone applications
29
2.2.5
Graphing calculators
31
2.2.6
Multimedia materials
33
2.2.7
Data and materials repositories
34
2.3
Anxiety or fear?
35
2.3.1
Mathematics anxiety
38
2.3.2
Statistics anxiety
40
2.3.3
Statistics anxiety versus mathematics anxiety
41
2.4
Conclusion
42
CHAPTER 3
LEARNING DESIGNS
3.0
Introduction
44
3.1
Why learning designs?
45
3.2
Designing online learning experiences
47
3.3
The definition
48
3.4
Educational theories of learning
51
3.4.1
Behaviourism
52
3.4.2
Cognitivism
53
3.4.3
Constructivism
56
3.4.4
Connectivism
59
3.4.5
Experiential learning
61
3.5
Selection of the learning design framework
65
3.6
LDVS representation
67
3.7
High quality learning in higher education
71
3.7.1
Principles of high quality learning
73
3.8
Other forms of learning designs
74
3.8.1
Concept maps
74
3.8.2
Webpage
76
3.8.3
Lesson plan
77
3.8.4
Subject outline
78
3.9
Conclusion
79
CHAPTER 4
METHODOLOGY
4.0
Introduction
80
4.1
Selection of the research paradigm
81
4.2
Selected strategies of inquiry
84
4.2.1
Exploratory research
85
4.2.2
Case studies
86
4.2.3
Survey research
88
4.2.4
Action research
89
4.2.5
Phenomenology
91
4.3
Why not a “control group” study?
92
4.4
Positioning of self
94
4.5
Ethics
97
4.6
Limitations
98
4.7
Educational evaluation
99
4.7.1
The definitions
100
4.7.2
An evaluator or a researcher?
103
4.7.3
Models for evaluation
104
4.7.4
CIPP model
104
4.7.5
Stake's responsive model
105
4.7.6
Kirkpatrick's four-level model
107
4.7.7
Evaluation model for educational innovations
107
4.8
Phases of the evaluation model
110
4.8.1
Initial case study: GHMD983/SHS940
112
4.8.2
Second case study: STAT131
113
4.8.3
Third case study: MATH131/MATH132
113
4.8.4
Overall evaluation
114
4.9
Local context
115
4.9.1
Selection of case study subjects
115
4.9.2
Baseline data: failure rates
118
4.9.3
Student surveys
119
4.9.4
Approaches to assessment
122
4.9.5
Learning supports
124
4.10
ALTC project
125
4.10.1
Development of video resources
127
4.11
Conclusion
128
CHAPTER 5
CASE STUDY 1
5.0
Introduction
130
5.1
Design phase: Why video resources?
130
5.1.1
Aim for the study
132
5.1.2
Existing learning resources
133
5.1.2.1
Assessment
134
5.1.2.2
Worked solutions
135
5.1.2.3
Video resources
135
5.1.2.4
Learning strategies
137
5.1.3
Method
139
5.1.4
Outcomes
140
5.1.4.1
Responses to the prompts
141
5.1.4.2
Impact of learning resources on student learning
143
5.1.4.3
Accessing and playing the videos
145
5.1.4.4
Anxieties while taking the subject
146
5.1.4.5
Perceived competency in topics
148
5.1.4.6
Changes in perspective at the end of the session
150
5.1.4.7
Improvement of video resources
152
5.1.4.8
Recommendations for improvement of subject
153
5.1.5
Summary
155
5.2
Design phase: the use of videos in a different context
157
5.2.1
Aim for the study
157
5.2.2
Existing learning resources
157
5.2.3
Method
158
5.2.4
Outcomes
159
5.2.4.1
Impact of learning resources on students learning
160
5.2.4.2
Video resources for student learning
162
5.2.4.3
Student perceived comfort with topics learning
164
5.2.4.4
Changes in perspective at the end of the session
164
5.2.4.5
Assessment
165
5.3
Comparison of outcomes between the cohorts
167
5.4
Development and implementation phases
172
5.4.1
Aim for the study
172
5.4.2
The context: existing resources
173
5.4.3
Method
174
5.4.4
Outcomes
176
5.4.4.1
Impact of learning resources on students learning
179
5.4.4.2
Video resources for student learning
180
5.4.4.3
Student perceived comfort with topics learning
180
5.4.4.4
Provision of learning design map in subject
181
5.4.4.5
Changes in perspective at the end of the session
182
5.4.4.6
The e-learning student tracking statistics
183
5.4.4.7
Assessment
185
5.4.4.8
Failure rates, pass rates, and shifts in grades
187
5.4.4.9
Recommendations for improvement of subject
190
5.5
Conclusion
191
CHAPTER 6
CASE STUDY 2
6.0
Introduction
194
6.1
Development phase: the subject design features
194
6.1.1
Method
196
6.1.2
Outcomes
197
6.1.1.1
Impact of learning resources on student learning
198
6.1.1.2
Learning design map as support resources
200
6.1.1.3
Student perceived confidence with topics
205
6.1.1.4
Changes in comfort at the end of session
206
6.1.1.5
Assessment for learning
207
6.1.1.6
Improvement of the subject
210
6.1.3
Changes in outcomes between cohorts: major observations
211
6.2
Implementation phase: new design features
213
6.2.1
Headstart program
214
6.2.2
Improved learning design map
220
6.2.3
CAOS test
222
6.2.4
Method
224
6.2.5
Outcomes
227
6.2.5.1
Headstart program to support learning
228
6.2.5.2
Impact of learning resources on student learning
236
6.2.5.3
Effectiveness: dimensions of high quality learning
238
6.2.5.4
Student perceived confidence with topics
241
6.2.5.5
Improved learning design map for supporting learning
243
6.2.5.6
Performance in the assessment
245
6.2.5.7
Performance in CAOS test
247
6.2.5.8
Recommendations for improvement of subject
251
6.2.5.9
Failure rates, pass rates and shifts in grades
253
6.2.5.10
Final outcomes
256
6.3
Conclusion
257
CHAPTER 7
CASE STUDY 3
7.0
Introduction
259
7.1
Background
259
7.1.1
The subject design
260
7.1.1.1
Weekly and fortnightly learning design maps
262
7.1.1.2
Navigational tools by content materials
267
7.1.1.3
Video supports on some concepts
268
7.1.1.4
Assessment system
269
7.1.1.5
Online practice quizzes
270
7.2
Aim for the study
271
7.3
Method
271
7.4
Outcomes
273
7.4.1
Students' response rates and background
273
7.4.2
Representativeness: expected performance versus final grades
274
7.4.3
Completing the subjects' components
276
7.4.4
Importance of learning resources on student learning
277
7.4.5
Effectiveness: face-to-face and online learning
281
7.4.6
Student perceived confidence in the subject
283
7.4.7
The use of the fortnightly learning design maps
287
7.4.8
Effectiveness: dimensions of high quality learning
289
7.4.8.1
Student engagement
289
7.4.8.2
Acknowledge the learning context
290
7.4.8.3
Challenge students
291
7.4.8.4
Provide practice
292
7.4.9
The use of online learning resources
293
7.4.10
Assessment for learning
295
7.4.10.1
MATH131 results
296
7.4.10.2
MATH132 results
299
7.4.11 Failure rates, pass rates and final outcomes
302
7.4.12 Improving the subjects
304
7.4.12.1 Improvement in MATH131
305
7.4.12.2 Improvement in MATH132
306
7.5
Conclusion
307
CHAPTER 8
CONCLUSION
8.0
Revisiting the evidence
310
8.1
Subject design considerations in the e-learning systems
312
8.1.1
Placing the resources in e-learning sites
312
8.1.2
Learning design maps to organise major components
313
8.1.3
Video resources to support learning
315
8.1.4
Nature of assessment for students learning
317
8.1.5
High quality resources for student learning
319
8.1.6
Alternative approaches
322
8.1.7
A Headstart program to support learning
323
8.1.8
Use of learning theories
324
8.2
Institutionalising the innovations
325
8.3
Future directions
327
8.3.1
Teaching of statistics
328
8.3.2
Supporting the Headstart program
329
8.3.3
Statistics assessment
330
8.3.4
Nature of video supports
331
8.3.5
Learning design
332
8.4
Conclusion
333
REFERENCES
336
Appendices
Appendix 3.1: STAT131 March/Autumn 2010 subject outline
365
Appendix 4.1a: Participant Information Sheet used for STAT131 in March/Autumn 2009
367
Appendix 4.1b: Consent Form used for STAT131 in March/Autumn 2009
368
Appendix 4.2: An example of Ethics Approval granted by the University of Wollongong
369
Appendix 5.1: A sample student survey used in August/Spring 2008
370
Appendix 6.1: CAOS post-test performance of individual statistical skills between the cohorts
378
Appendix 7.1: MATH131 March/Autumn 2010 subject outline
381
Appendix 7.2: MATH131 March/Autumn 2010 content timeline
384
Appendix 7.3: Example of MATH131 assignment
385
Appendix 8.1: The e-learning showcase handout
387
List of tables
Table 2.1
Assessment tasks that may distinguish the three instructional domains
23
Table 2.2
Sample of subject objectives for STAT131 March/Autumn 2010
24
Table 3.1
Graduate qualities addressed in STAT131
55
Table 3.2
The design of group project assignment in STAT131
59
Table 4.1
Qualitative, quantitative, and mixed methods approaches
83
Table 4.2
Research design
84
Table 4.3
An integrated evaluation framework
108
Table 4.4
A four-stage evaluation model selected for the study
111
Table 4.5
Components of GHMD983/SHS940 versus STAT131
116
Table 4.6
Student grades (in percentage) for GHMD983
118
Table 4.7
Student grades (in percentage) for STAT131
119
Table 4.8
Student survey responses across sessions in STAT131
121
Table 4.9
Assessment weightings for STAT131 across sessions
123
Table 5.1
Provision of video resources on selected topic areas in GHMD983 subject
136
Table 5.2
Percentages of time spent by students per week on the subject
141
Table 5.3
Perceived impact of resources on student learning and understanding of GHMD983
144
Table 5.4
Students' comments on “How and what ways were the videos useful (or not)?”
145
Table 5.5
Proportions of students responding as moderately confident or confident in topic learning (GHMD983)
148
Table 5.6
Differences on perceived comfort with topics between genders (GHMD983)
149
Table 5.7
Differences on perceived comfort with topics between students' nationalities (GHMD983)
150
Table 5.8
Percentages of changes in students' perspective after completing the subject (GHMD983)
150
Table 5.9
Differences on changes in students' perspectives between genders (GHMD983)
151
Table 5.10 Differences on changes in students' perspective between nationalities (GHMD983)
152
Table 5.11 Students' anticipated and actual grades in STAT131
160
Table 5.12 Students' perception of exam preparedness versus completing their laboratory tasks in STAT131
161
Table 5.13 Perceived impact of resources on students learning in STAT131
162
Table 5.14 Student perceived competency with topics learning in STAT131
164
Table 5.15 Changes in students' perspective after becoming more comfortable completing the subject in STAT131
165
Table 5.16 Percentages of students reporting their perceptions of subject assessment system in STAT131
166
Table 5.17 Distribution of assessment marks in STAT131
167
Table 5.18 Percentages of time spent by students per week on the subjects
170
Table 5.19 Student perceived competency with topics learning between cohorts
171
Table 5.20 Changes in students' perspective after completing the subjects between cohorts
172
Table 5.21 Changes of lecturer(s) and assessment systems used in GHMD983/SHS940 across sessions
173
Table 5.22 Students' demographic data reported in the surveys between cohorts
177
Table 5.23 Percentages of average time spent by students per week on the subjects
178
Table 5.24 Perceived impact of resources on students learning across sessions
179
Table 5.25 Student perceived competency with topics learning across sessions
181
Table 5.26 Changes in students' perspective after completing the subject between cohorts
183
Table 5.27 Data from the e-learning student tracking statistics between week 1 and week 13 between cohorts
184
Table 5.28 Students' perceptions on the subject assessment system in GHMD983 August/Spring 2009
185
Table 5.29 Comparison of the average assessment marks in GHMD983 between cohorts
186
Table 5.30 Assessment results on specific topics in SHS940 August/Spring 2010
186
Table 5.31 Pass rates for the years 2003 to 2011 in GHMD983/SHS940
188
Table 5.32 Major themes of students' recommendations for subjects' improvement
190
Table 6.1
Distribution of students enrolled in STAT131 and survey response rate between the 2009 and 2010 cohorts 197
Table 6.2
Percentages of students' final grades and their expected performance in STAT131 between the 2009 and 2010 cohorts
198
Table 6.3
Perceived importance of resources on student learning in STAT131 between the 2009 and 2010 cohorts
199
Table 6.4
Usage of video resources in STAT131 between the 2009 and 2010 cohorts
199
Table 6.5
Usage of learning design map in the STAT131 March/Autumn 2010
201
Table 6.6
Student perceived confidence on specific topics in STAT131 between the 2009 and 2010 cohorts
206
Table 6.7
Changes in students' perspective after completing the subject in STAT131 between the 2009 and 2010 cohorts
207
Table 6.8
Distributions and changes of mean assessment marks on specific topics in STAT131 between the 2009 and 2010 cohorts
208
Table 6.9
Six levels of learning in Bloom's taxonomy
223
Table 6.10 Distribution of students' background reported in STAT131 surveys across sessions between 2009 and 2011
227
Table 6.11 Distribution of students' final grades in STAT131 between the 2010 and 2011 cohorts
228
Table 6.12 Questions on the use of Headstart program in STAT131
231
Table 6.13 The use of the Headstart program reported via student survey in STAT131
232
Table 6.14 The use of the Headstart program reported via student survey and data from the e-learning tracking
235
Table 6.15 Comparison of mean final marks between students who engaged and did not engage with the Headstart program in STAT131
236
Table 6.16 Perceived importance of resources on students learning in STAT131 across sessions between 2009 and 2011
237
Table 6.17 Perceived impact of subject design on student learning based on Boud and Prosser's four principles
239
Table 6.18 Student perceived confidence on specific topics in STAT131 across sessions between 2009 and 2011
242
Table 6.19 Student perceived confidence on specific topics in STAT131 between those who accessed the Headstart program and those who did not in 2011
243
Table 6.20 Changes in the assessment on specific topics in 2010 and 2011
247
Table 6.21 Correlation between the assessment marks and final marks in STAT131 for the 2010 and 2011 cohorts
247
Table 6.22 Average skills performance at the commencement of STAT131 in the 2010 and 2011 sessions
248
Table 6.23 Changes in overall performance in the CAOS tests between cohorts in comparison to delMas et al.'s (2007) study
248
Table 6.24 Correlation between the post-test and final grades in 2010 and 2011
249
Table 6.25 Differences of correct responses on the CAOS post-test by topics between the 2010 and 2011 cohorts
251
Table 6.26 Pass rates for the years 2000 to 2011 in STAT131
253
Table 6.27 Comparison of mean final marks between the three cohorts in STAT131
255
Table 7.1
Components of MATH131 versus MATH132
261
Table 7.2
Summary of learning designs within the e-learning site for mathematics subjects
266
Table 7.3
Subtopics covered in the video supports on two mathematical topics
268
Table 7.4
Distribution of students enrolled by campuses and gender, and the survey response rates across five sessions in mathematics subjects
274
Table 7.5
Percentages of students' final grades and their anticipated performance in mathematics subjects
275
Table 7.6
Students' responses on “How did you work through the tutorial exercises?” in mathematics subjects
277
Table 7.7
Resources perceived as moderately or extremely useful on student learning across sessions in mathematics subjects
278
Table 7.8
Differences on the perceived usefulness of resources for learning in MATH132
279
Table 7.9
Students' perception of exam preparedness versus completing their tutorial exercises in MATH131 in 2010
280
Table 7.10 Students' perception of exam preparedness versus completing their tutorial exercises in MATH132 in 2009
281
Table 7.11 Student perceived confidence on topics and sequence of topics taught in mathematics subjects across sessions
284
Table 7.12 Differences on the perceived confidence on topics learning in MATH132
285
Table 7.13 Percentage of students reporting perception of subject relevance in mathematics subjects across sessions
286
Table 7.14 Proportion of accesses to resources in the e-learning site in week 12 between cohorts in mathematics subjects
288
Table 7.15 Choices of student access to resources in the e-learning site in week 12 between cohorts in mathematics subjects
288
Table 7.16 Students' perceptions on six dimensions of engagement in mathematics subjects
290
Table 7.17 Students' perceptions on four dimensions of acknowledging the learning context in mathematics subjects
291
Table 7.18 Students' perceptions on four dimensions of challenging students in mathematics subjects
292
Table 7.19 Students' perceptions on four dimensions of providing practice in mathematics subjects
293
Table 7.20 Descriptive statistics of e-learning student tracking between week 1 and week 13 in mathematics subjects across sessions
294
Table 7.21 Design of assessment system in mathematics subjects
295
Table 7.22 Comparison of students' assessment marks in MATH131 between cohorts
297
Table 7.23 Comparison of students' total assessment and final exam marks in MATH131 between cohorts
298
Table 7.24 Correlation between the total assessment and final exam marks in MATH131
298
Table 7.25 Comparison of students' assessment marks in MATH132 between cohorts
300
Table 7.26 Comparison of students' total assessment and final exam marks in MATH132 between cohorts
301
Table 7.27 Correlation between the total assessment and final exam marks in MATH132
301
Table 7.28 Pass rates for the years 2009 to 2011 in MATH131 and MATH132
302
Table 7.29 Summary of changes in the subject designs and student outcomes across sessions in mathematics subjects
308
List of figures
Figure 1.1
Three case studies conducted using four stages of evaluation framework
12
Figure 2.1
Independent domains with some overlap
22
Figure 2.2
Reasoning and thinking within literacy
22
Figure 2.3
A Fathom slider allows students to change the parameter value (y-intercept) b = 5 of a graph y = (1/2)x + b with slope ½
27
Figure 2.4
Example of simulation results from a Binary Distribution for n = 50 using Excel
29
Figure 2.5
Sampling Distribution Simulation Applet
30
Figure 2.6
Using graphing calculator (model TI-83+/84+) to perform statistical plots
32
Figure 2.7
CAST Introductory statistics e-book
33
Figure 2.8
Resources by material type available on CAUSE website
35
Figure 3.1
Types of memory based on information processing system
54
Figure 3.2
Sample of prompts given to initiate student interaction in GHMD983 e-learning site
57
Figure 3.3
Articles and reports accessible via the MATH131/132 e-learning site
61
Figure 3.4
Kolb's Experiential Learning Cycle
64
Figure 3.5
Weekly laboratory tasks for students to complete in STAT131
65
Figure 3.6
Three key elements of a learning design based on Oliver (1999) and Oliver and Herrington (2001)
66
Figure 3.7
The Learning Designs Project website
67
Figure 3.8
Example of the visual learning design representation invented in the Learning Designs Project
69
Figure 3.9
A concept map used in MATH111 subject
75
Figure 3.10 Adaptation of a concept map using building blocks in MATH131 subject
76
Figure 3.11 A webpage of statistics subject from http://bcs.whfreeman.com/bps3e/
77
Figure 3.12 A weekly lesson plan for laboratory work in GHMD983
78
Figure 4.1
Action research spiral
91
Figure 4.2
An example of STAT131 Week 6 laboratory tests in March/Autumn 2008
124
Figure 4.3
ALTC project website
127
Figure 4.4
Example of video resources in STAT131 developed using the tablet PC technology and Camtasia Studio
128
Figure 5.1
Two designs of video resources impacted student's cognitive load
136
Figure 5.2
STAT131 e-learning homepage and lectures content within by-type resource folders
168
Figure 5.3
GHMD983 e-learning homepage and contents within weekly folders
169
Figure 5.4
GHMD983 August/Spring 2009 e-learning homepage and learning design maps using DOC files embedded in weekly folders
175
Figure 5.5
SHS940 e-learning homepage and improved learning design maps created in PDF
176
Figure 5.6
Distribution of students' actual grades between cohorts
178
Figure 5.7
The distribution of assessment marks in SHS940 August/Spring 2010
187
Figure 5.8
Percentages of fail versus higher grades and top grades, and innovations introduced in GHMD983/SHS940 for the years 2003 to 2011
188
Figure 6.1
Learning design map for weekly work combining the resources, tasks and supports
195
Figure 6.2
The weekly folders located in the STAT131 March/Autumn 2010 e-learning homepage
196
Figure 6.3
A Headstart program in the STAT131 e-learning site
214
Figure 6.4
Sample of lecture notes included in the Headstart modules
217
Figure 6.5
Assessment guide designed for the Headstart program
219
Figure 6.6
Image file created using the Paint program
220
Figure 6.7
HTML document opened using the Notepad program with the highlighted codes replaced with image link created
221
Figure 6.8
HTML source codes pasted on the content space of HTML file creator in the e-learning site
221
Figure 6.9
Weekly learning design map created using a HTML file
222
Figure 6.10 STAT131 homepage in the e-learning site accessed in June 2011 (last week of session)
225
Figure 6.11 Comments on the draft of assignment in the e-learning site
226
Figure 6.12 Reading materials, video clips, and tasks included in the Headstart lecture notes
229
Figure 6.13 Percentages of fail versus higher grades and top grades, and innovations introduced in STAT131 for the years 2000 to 2011
254
Figure 6.14 Students' performance in the assessment tasks in 2010 and 2011
256
Figure 7.1
MATH131 and MATH132 homepage in the e-learning site
262
Figure 7.2
The links provided in the subject folder
263
Figure 7.3
The resources by week/topics accessible via “one-stop” weekly folders
263
Figure 7.4
The fortnightly learning design maps and other links provided in the subject folder
264
Figure 7.5
A two-week work accessible via the fortnightly learning design maps
265
Figure 7.6
A navigational tool by topic materials provided in MATH132 subject folder
267
Figure 7.7
A navigational tool by topic materials with links to key concepts in MATH131 subject folder
268
Figure 7.8
Sample of video displaying the concepts on fraction accessible via the e-learning site
269
Figure 7.9
Sample of online practice quiz provided in MATH132 subject folder
270
Figure 7.10 Implementation cycles of MATH131 and MATH132 included in this case study
271
Figure 7.11 Percentages of average time spent by students per week in mathematics subjects
276
Figure 7.12 Student perceived impact of tutorial exercises on their final exam in mathematics subjects
280
Figure 7.13 Student perceived belief in mathematical learning in mathematics subjects between sessions
286
Figure 7.14 Student performances on assessment tasks in MATH131
303
Figure 7.15 Student performances on assessment tasks in MATH132
304
Figure 8.1
SCIE911 subject homepage accessible in the e-learning site
326
Figure 8.2
The three symbols containing links to relevant subject materials similar to the learning design maps used in STAT131
327
Figure 8.3
A “message box” posted in the STAT131 March/Autumn 2011 e-learning homepage in week 9
333
CHAPTER 1 INTRODUCTION
1.0 The context
A very high proportion of students entering higher education in disciplines such as
engineering and science are required to enrol in courses that involve mathematical or statistical content and processes. For more than a decade, the decline in mathematical and statistical skills among undergraduates has been apparent, even in higher level courses of study across many disciplines (Croft, Harrison & Robinson, 2009; Furinghetti, 2000; Roberts, 2002; Smith, 2004; Stenberg, Varua & Yong, 2010). Consequently, a vast number of students experience difficulty and anxiety when learning these subjects. This frequently leads to students failing to complete these mathematics rich subjects at their first attempt. It is a particularly difficult situation for students when these subjects are compulsory within their courses of study. The decreasing level of statistics competencies among high school leavers in Australia was reported in the “Statistics at Australian Universities” review sponsored by the Statistical Society of Australia Inc. (SSAI):
One persistent theme articulated very clearly by many (though not all) of the students and recent graduates that we spoke to was the failure of current statistics programmes in the schools to excite students' interest in statistics. Indeed, it was put to us that current programmes frequently have the effect of driving good quality students away from the subject. (SSAI, 2005, p. 13)

The issue of declining skills in mathematics and statistics is not only evident in Australia (Aminifar, 2007; Barrington & Brown, 2007; Broadbridge & Henderson, 2008; MacGillivray, 2009). The lack of these skills is also demonstrated by many students upon their entry to universities in the United Kingdom (Roberts, 2002; Smith, 2004), in Malaysia (Arul, Baharun, Hussin & Nasir, 2004), in Canada (Kajander & Lovric, 2005), in Hong Kong (Luk, 2005), in Italy (Furinghetti, 2000), in France (Artigue, 2001), and in Japan (Hoyles, Newman & Noss, 2001). The evidence of a decline in mathematical and statistical skills over several decades is well documented in reports from learned societies, professional bodies and academic research. Poor mathematical skills are frequently related to the poor performance of students in mathematics based disciplines such as engineering and physics, and increasingly in mathematics
rich disciplines such as psychology, biology, geology, health sciences, economics, nursing, and business studies (Croft et al., 2009; Croft & Grove, 2006; Pell & Croft, 2008). Most of these students discover that they need to master mathematical and statistical skills, tools and applications to achieve success in their chosen disciplines. In the Australian context, the decline in the proportion of year 12 students studying enabling subjects in mathematics and science has contributed to increasing skill shortages in engineering (Barrington & Brown, 2007). Further, Broadbridge and Henderson (2008) report that the decline in mathematical ability is due to the lowering of standards for entry into engineering degree programs as well as the failure to instil the required level of numeracy in both the primary and secondary school years. Research suggests that students' mathematical ability plays an important role in determining their success in completing studies in disciplines such as business (Stenberg et al., 2010), economics (Pozo & Stull, 2008) and accounting (Joyce, Hassall, Montaño & Anes, 2006). In the United Kingdom, Tariq (2008) revealed that deficits in the mathematical skills of first-year bioscience undergraduates often contributed to poor performance in diagnostic tests and in the mathematical elements of their bioscience curricula. In the United States, Crawford and Schmidt (2004) claimed that this issue has led many engineering colleges to lose nearly fifty percent of their students during the first two years of study. In Malaysia, Arul, Nasir, Hussin and Baharun (2005) revealed that the majority of students from non-science based disciplines such as architecture, accounting, and office management systems experience a high level of mathematics anxiety due to their poor mathematical background, and that this has led to increased failure rates in first year mathematics subjects at tertiary level. The current lack of mathematical preparedness among students in higher education affects a wide range of programs in universities worldwide. Indeed, mathematical and statistical skills have been identified as indicators of success in many disciplines.
Like language, mathematical skills and thinking underpin much in other areas, and tertiary study asks for them to be accessed and used confidently and promptly in new and sometimes taxing context. Thus any specific mathematics skills requiring use in new contexts in tertiary study must be as familiar to a student as his or her language. (MacGillivray, 2009, p. 457)

In the learning of statistics, the achievement of students is closely related to their mathematical skills. This has been demonstrated in a study that revealed a significant relationship between students' mathematical skills and their performance in an introductory statistics course (Johnson & Kuennen, 2006). Thus it is relevant to highlight the issue of deficient mathematical skills relative to statistical skills in the practice of improving statistics education. To deal with these issues, and in particular to improve mathematical and statistical learning, there is a need to increase reliable, supportive and expert learning support to assist students
learning mathematics and statistics across a wide range of disciplines in universities (Broadbridge & Henderson, 2008; MacGillivray, 2008). In this study the researcher explored the need for learning support embedded in the subjects in which students are enrolled within a blended learning environment. This is distinct from the provision of support through a centre that provides assistance for students from various disciplines in the university.

According to Matheos, Daniel, and McCalla, blended learning is described 'as the best of both worlds from the integration of online and face-to-face teaching, resulting in an enhanced learning experience' (2005, p. 57). In the literature, the term “blended learning” is sometimes known as “hybrid learning”, where online learning becomes an advancement or extension of traditional face-to-face learning (Swan, 2009; Ward, 2004). It offers students the benefit of learning from both online technology and direct interaction with teachers and peers. One issue associated with blended learning approaches is the need to provide effective strategies to teachers and learning designers so they can acquire the necessary skills and knowledge to allow both teachers and students to benefit from the convenience of online learning as well as the comfort of face-to-face contact. Although students can improve their learning within technology enhanced or online learning environments (Collis, 2002), it is claimed that the effective blending of technology and face-to-face interaction is essential for students to experience proficient learning and to retain the knowledge and skills learned (Singh, 2003). The main advantage of the blended learning environment is that, through technologies such as Blackboard and WebCT, students are provided with faster and easier access to the learning materials, tasks, and assessments provided by teachers, with fewer restrictions imposed by the time and place of the traditional classroom (Swan, 2009; Ward, 2004). Students in the blended environment retain the regular, face-to-face interaction with teachers and peers of the traditional classroom, in addition to support within the lecture session. Certainly, research has shown that the blended approach impacts on student learning and engagement (Bergtrom, 2009; Chen, Guidry & Lambert, 2009), and this leads to the improvement of student learning outcomes (Singh & Reed, 2001).

The main reason for implementing blended learning environments for any course or subject is to provide a wide range of learning resources and experiences for students through a combination of technology-based learning and face-to-face or traditional classroom learning. Indeed, blended learning has been expanding rapidly. In 2004, more than fifty percent of United States higher education institutions offered at least some blended courses for both graduate and undergraduate students, with no less than seventy-five percent of large public institutions providing blended classes (Swan, 2009). Matheos et al. (2005) asserted that the blended approach provides effective strategies, particularly for on-campus based traditional universities,
to improve teaching and learning in higher education. Essentially, this approach has the potential to enhance student learning experiences and improve learning outcomes, particularly in large classroom settings (Twigg, 2002). The blending of technologies also provides greater flexibility in teaching and supporting student learning, which enables the delivery of high quality content and effective learning within the online learning environment. Consequently, blended learning has become the focus of many research activities over the past fifteen years (e.g. Bergtrom, 2009; Collis, 2002; McVey, 2009). These studies continue to span many different technology tools that differ in function. Although research has shown that blended learning can enhance student learning and engagement and improve learning outcomes, there are issues regarding the implementation of technology in blended learning, relating to how, what, why, and when this technology is appropriately and effectively combined with face-to-face or traditional classroom learning. This research is essential, as Garrison and Vaughan (2008, p. x) noted,
[w]hen blended learning is well understood and implemented, higher education will be transformed in a way not seen since the expansion of higher education in the late 1940s.

The goal of the blended learning approach is not to substitute for face-to-face interaction, which occurs naturally in traditional classrooms; as argued by Merisotis and Phipps (1999, p. 31), 'it seems clear that technology cannot replace the human factor in higher education'. Rather, it uses technology in numerous ways to enhance and expand the potential of the classroom in ways that support student learning. Therefore, the choice and types of technologies implemented in a blended learning environment should be well understood by examining the context in which they can be appropriately applied. In a blended learning approach, whether students are more likely to prefer classroom-based or technology-based learning depends on their learning preferences, their choices of learning resources, and the nature of the subjects (Matheos et al., 2005). This accords with the perspective of a student-centred approach (discussed in section 3.1 in Chapter 3), as learning means 'different things to different people and it is a very broad concept' (Australian Flexible Learning Framework, 2008, p. 2). Nevertheless, making available different resources does not ensure that they are adequate or integrated appropriately into the blended program. Teachers essentially need to understand how to integrate technology effectively within a blended learning environment so as to support student learning and optimise its impact on learning outcomes.
1.1 What is learning support?
According to MacGillivray (2008), learning support in mathematics and statistics is
defined as

… any facility or program providing extra assistance in mathematics and statistics during their enrolled study in a university degree program, whether undergraduate or postgraduate, with such assistance being outside the formally scheduled classes and activities of their enrolled course. (2008, p. 6)

MacGillivray added that learning support in mathematics and statistics can be described as any extra non-credit, optional, non-compulsory program or facility that helps students develop their mathematical and/or statistical skills during their enrolled study in a degree course. Perkin and Croft (2004) described learning support as being undertaken in a centre where students are offered a program additional to their routine teaching program of lectures, tutorials, and problem classes. Lawson, Halpin and Croft (2003) suggested that learning support is not necessarily a facility offered to mathematics and statistics students; rather, it is often for students from other discipline areas that require mathematics and statistics. A report from the 2007 Symposium on Learning Support in Mathematics and Statistics in Higher Education, held at the Queensland University of Technology, revealed that the focus of mathematics support centres in many Australian universities is the support of students from non-mathematics disciplines, such as engineering, nursing, business and economics, rather than mathematics students (MacGillivray, 2008). For instance, at the time of this study (2010), Learning Development at the University of Wollongong (refer http://www.uow.edu.au/student/services/ld/overview/index.html) provided support for non-mathematics degree students. At many universities, the mathematics support centre is usually available to specific groups and/or courses of study; however, it is often limited to first year students.

All the above descriptions suggest that the term 'learning support' in mathematics should be regarded as an 'umbrella' term encompassing a variety of supports including drop-in assistance, mathematics workshops, mathematics remedial programs, and one-on-one tutoring. This additional support is often targeted at students from specific groups but with voluntary attendance. Frequently, learning support is offered to students when they enrol in subjects such as mathematics and statistics. This support can include additional tutorials, team learning, staff consultation hours, and online learning via learning management systems (LMS) (Broadbridge & Henderson, 2008). Broadbridge and Henderson (2008) further noted that the accessibility of teaching and learning resources online, at times convenient to students, makes this an effective way to address student needs. Furthermore, this form of technology is increasingly
being used in the teaching of mathematics and statistics due to decreased numbers of mathematics staff, increased class sizes (SSAI, 2005), and increased variation in the backgrounds of students entering university (e.g. on-campus versus distance students, domestic versus international students). At the University of Wollongong, this form of learning support is also aligned with the graduate quality that promotes self-learning, or being an independent learner, among students of diverse backgrounds. Distance students are those who complete their courses at convenient times, live and study anywhere, and work at their own pace subject to assessment deadlines, while having access to the same online services as on-campus students (refer http://www.uow.edu.au/future/distance/index.html). International students are generally those studying in educational institutions in a country other than their own, in either short-term or long-term study programs. The provision of learning support to assist students' learning of mathematics and statistics can be categorised into two forms: subject-specific support and generic support. For instance, PASS (Peer Assisted Study Sessions) is a worldwide program; at the University of Wollongong it is a subject-specific support program (refer http://www.uow.edu.au/student/services/pass/index.html) which has operated for students in some subjects since 2001. PASS offers learning support sessions for selected subjects that differ from the traditional tutorial environment. The Learning Development centre offers free generic support for students to improve their learning, academic literacy and English language skills. The centre runs several programs, such as individual consultations for students seeking advice about their academic study. It also provides general guidance for developing effective learning and writing strategies, student workshops on various academic skills, and online study resources on a range of academic learning matters (refer http://www.uow.edu.au/student/services/ld/students/index.html). To assist students who experience language difficulty, Learning Development runs weekly English Conversation Groups throughout each session for both ESL (English as a Second Language) and international students (refer www.uow.edu.au/content/groups/public/@web/@stsv/@sedlo/documents/doc/uow068005.pdf). These forms of support are offered to students through a unit or centre and are aimed at helping student learning in various subjects across disciplines. In the context of this study, the phrase 'learning support' refers to subject-specific support provided to students through a web-based e-learning management system. At the University of Wollongong, the e-learning system is the Blackboard Learning System. Within this system, all learning resources, tasks, and support materials can be provided to students, including lecture notes, Edu-stream (audio recorded lectures), laboratory work or exercises, assessment (assignments, quizzes, tests), worked examples, video resources, laboratory notes, past test and examination papers, and a student forum.
1.2 Supporting student learning using technology
In the past two decades, teaching with blackboard and chalk was considered the most
appropriate delivery method in mathematics and statistics classrooms by teachers worldwide. Prior to the nineteen nineties, the over-head projector was commonly seen as a significant technological tool for teaching that introduced a more appealing learning environment for students. Nowadays, it is hard to imagine teaching mathematics and statistics without using some form of technology. The emergence of the World Wide Web and the subsequent development of associated technology in the nineteen nineties has led to dramatic changes in teaching styles and learning environments. These changes include how lessons are delivered, how students are expected to learn, and how feedback is given to students. Jowallah (2008) stated that changes in the learning environment have led to the development of technology aimed at engaging and supporting students so as to maximise their full learning potential. Technology has the potential to significantly improve teaching and student learning outcomes. To date it appears to be a reliable tool used to support and facilitate student learning in many disciplines and universities worldwide (Cooner, 2004; Fitzallen, 2005; Oldknow & Taylor, 2000). Means and Olson (1997, p. x) claimed that research on the utilisation of technology in supporting student learning was valuable to both students and educators by

[a]dding to the students' perception that their work is authentic and important… Increasing the complexity with which students can deal successfully… Dramatically enhancing student motivation and self-esteem… Making obvious the need for longer blocks of time… Creating a multiplicity of roles… Instigating greater collaboration… Giving teachers additional impetus to take on a coaching and advisory role. (Cited in Jowallah, 2008, p. 43)

The technology revolution has had a great impact on the teaching and learning of many disciplines. However, the challenge for educators of mathematics and statistics in the higher education sector in many countries, including Australia, is how to integrate technology effectively into their teaching and thereby support student learning effectively. This is essential due to the decline of skills in mathematics and statistics among students upon their entry to universities worldwide, and hence their increasing need for support.
1.3 The problem
With technology, statistics subjects may be taught with the assistance of a computer, where lecture slides are projected onto a screen in a lecture theatre, or may take place in a laboratory with students working on their tasks using statistical software at individual
computers. The style of teaching may involve the use of the internet, web-based courses, online discussions, collaborative tasks, electronic texts, and assessment materials. These have changed the way most statistics educators work, changing what and how we teach (Moore, Cobb, Garfield & Meeker, 1995). The use of technological tools to improve teaching and support student learning of statistics is well documented in many research studies. These technologies include the use of graphing calculators (Dunham & Dick, 1994), electronic textbooks (Symanzik & Vukasinovic, 2006) or e-books e.g. CAST (Stirling, 2010), simulations (Chance & Rossman, 2006; Lane & Peres, 2006; Lane & Tang, 2000; Mills, 2004), educational software such as TinkerPlots and Fathom (Konold & Miller, 2005), spreadsheets e.g. Excel (Barr & Scott, 2010), and applets/stand-alone applications (Chandrananda & Wild, 2010). The teaching and learning of statistics is commonly assisted by the use of statistical software packages such as SPSS, Minitab, JMP, SAS, S-PLUS, and GenStat for statistical data analyses. Other forms of technology used may include the provision of multimedia materials and resources, and data repositories. Although there are a variety of technology tools that can be used to facilitate and improve the learning of statistics, the research focus over the last 15 years is not on whether one tool is better than another, but rather, as it is in this study, the focus is on understanding how the technology or tools may be used effectively so as to improve student understanding and reasoning about important statistical ideas. Some general reminders need to be considered by educators when planning to use technology in their teaching as mentioned by Chance, Ben-Zvi, Garfield, and Medina (2007, p. 16-17):
[t]echnology does not replace the teacher, but teachers need to actively observe the students, identify their difficulties, probe their thought processes and the conclusions they are forming, quickly curb any problems they are having with the technology, keep students on task, and answer questions... Teachers don't have downtime while students are interacting with technology.

Statistics educators need to make relevant choices from a variety of educational tools, applications and software to optimise student learning. There are issues associated with technology, as highlighted by Moore (1997, p. 135): 'teaching our subject and not the tool'. The researcher strongly agreed with the point made by Bates and Poole (2003, p. 45) that

… whatever technology is used for teaching: Good teaching matters. Clear objectives, good structuring of learning materials, relevance to learners' need, etc., apply to the use of any technology for teaching, and if these principles are ignored, then the teaching will fail, even if the unique characteristics of the medium are stylishly exploited. Good teaching may overcome a poor choice in the use of technology, but technology will never save bad teaching; usually it makes it worse.

Educators need to understand the theories of learning behind their teaching practices, particularly when the technology acts as the primary teaching medium, for example in the use of online learning or e-learning systems. Educators need to be aware that the teaching process is not as simple as transmitting their knowledge to the students without understanding how they learn; students might not understand the information in the same way as the educators. It is essential for educators to understand how to design subjects, including the assessment system, tasks given in the exercises and laboratory work, resources and materials provided, and the selection and use of real data for analyses. In addition, educators need to understand how to develop students' skills and competencies rather than having students rote learn. In this study, the researcher initially intended to gain an understanding of how students experience the learning of mathematics and statistics with the use of technology within an online or e-learning environment. However, as the study progressed, the aim was extended and refined as the researcher sought to identify effective online learning experiences and students' perceptions of different learning designs. This knowledge will be used to provide effective learning supports for subjects hosted for students either on-campus or in distance learning modes at the University of Wollongong.
1.4 Objective and purpose
The general objective of this study is to examine ways to improve student learning of
statistics through a blended learning environment with the provision of learning support embedded in the e-learning system. In order to do this, it aims to answer the following questions:
Can a learning support system for the students in an introductory statistics subject be built within the online learning environment?
What role can video learning resources play in supporting student learning in an introductory statistics subject?
How effective are the learning resources provided in improving student learning outcomes such as confidence and understanding, and in developing students' reasoning about statistical concepts?
How can effective learning designs be created to support student learning of statistics?
With blended technologies, what does a model of learning support look like?
How can resources be developed so that the processes are sustainable and likely to be adopted by other staff?
Note that in this study, subjects are offered in two academic year sessions: first session in Autumn from March to June (March/Autumn), and second session in Spring from August to November (August/Spring).
1.5 Approach and methodology
The case studies involve subjects taught within a blended learning environment. In each case the development of embedded learning support to improve student learning via the e-learning sites was investigated. Three case studies form the basis of this exploration:
The first case study involved the provision of learning resources, particularly the introduction of video resources and subsequently learning design maps, to postgraduate students (on-campus and distance modes) in an introductory statistics subject (GHMD983/SHS940). These were aimed at accommodating students‟ needs, in particular to ease their anxieties, to improve their confidence and to increase their understanding of statistics.
The second case study involved embedding learning supports in a similar introductory statistics undergraduate subject. The students were predominantly in their first year of study. This case study was aimed at determining the applicability of learning support for a new group of students, the undergraduates enrolled in STAT131. It addressed poor learning outcomes, in terms of grades in STAT131, and led to a focus additional to the provision of resources, that of learning design maps, and finally to a Headstart program. In all these implementations, the assessment system was central to identifying students at risk of failure.
The third case study involved the provision of learning design maps, incorporating learning resources and supports in the e-learning site, to a new subject. The students in this case were prospective primary and early childhood teachers enrolled in a pair of mathematics subjects, MATH131 and MATH132, introduced in March/Autumn 2009. This case study examined the effectiveness of the 'subject design'. Here the term 'subject design' encompasses the assessment system (composed of online quizzes, mid-session examination, and assignments), the delivery of lectures and tutorials, and the resources provided to students in the e-learning system.
1.5.1 Evaluation framework
Evaluation is needed to provide valid evidence of the impact of technological tools, or innovations, on student learning of mathematics and statistics. Evidence is needed to support assertions of improvement in student learning outcomes. Specifically in this study, the use of video-based resources in supporting student learning of statistics, and the learning designs implemented within an online learning environment, must be evaluated so as to examine their impact on student learning outcomes. In the context of developing and implementing technologies in teaching, researchers need to use models of evaluation which are adaptable to their needs and the nature of their innovations. For instance, Bain (1999) highlighted several evaluation frameworks that can be used for evaluating technology-based interventions (e.g. Alexander & Hedberg, 1994; Gunn, 1997). The evaluation models provided by Kirkpatrick (1994) and Alexander and Hedberg (1994) emphasise the need for evaluation to be undertaken at all four stages of project development: (1) Design, (2) Development, (3) Implementation, and (4) Institutionalisation. The approach to evaluation builds on triangulation between different sources of evidence, which also suited this study, which gathered a mix of evidence. Alexander and Hedberg's evaluation model provided a relevant framework for evaluation in this study, which involved a sequence of case studies conducted across six consecutive teaching sessions. Furthermore, this framework was in accord with the statement made by Alexander (1999) that the model is most advantageous within technology-based teaching and learning, and in particular for the e-learning environment. Evaluation at the design phase places emphasis on an analysis of the need to provide learning support to assist students' learning in subjects within the e-learning systems. It provides evidence through gathering baseline data in all three subjects. However, there was movement in those baselines, in that developments from the first case study were incorporated into the baseline implementation of subjects for the second and third case studies. In the development phase, emphasis was placed on evaluating the implementation of innovation in subjects, which is essentially formative assessment, 'commencing at the onset of the implementation and concluding, temporarily, at the completion of the subject' (Porter, 2001, p. 196). Formative evaluation of each session was used to develop the subsequent innovation and subject design used in later sessions. The summative assessment collected at the end of a subject became part of the formative process directing the subject design of the later implementations. In the implementation phase, focus was on the summative evaluation through comparison of outcomes in the assessment and student surveys across the three implementations of the subjects. In the final
phase, institutionalisation, emphasis was on the identification of uptake or adaptation of resources and learning designs in other subjects or faculties within the university (refer Figure 1.1). In all three case studies it was possible to compare outcomes between sessions, from the initial design through to the implementation phase of the evaluation. It was also possible to compare outcomes between two different groups of students, the undergraduates in STAT131 and the postgraduates in GHMD983 (refer Figure 1.1).
Note: * 4-stage evaluation model (Alexander & Hedberg, 1994); Microsoft Word Document (DOC); Portable Document Format (PDF); Hypertext Markup Language (HTML); Comprehensive Assessment of Outcomes in Statistics (CAOS)
Figure 1.1 Three case studies conducted using the four stages of the evaluation framework
The initial case study was one of opportunity and a starting point for a researcher who was interested in statistics anxiety and a lecturer considering the development of resources to support student learning. It provided a baseline measurement allowing for comparison of outcomes after the introduction of innovations. The study flowed from an initial communication with distance students, who were postgraduates in an introductory statistics subject (GHMD983) in August/Spring 2008. The lecturer gave prompts in the e-learning forum to initiate student interaction and 'getting to know each other', and to prompt thinking about the study of statistics. The students were encouraged to give their comments on the prompts: (1) "Please introduce yourself", (2) "What do you think statistics is about?", (3) "What do you look forward to in learning statistics?" and (4) "What are you concerned about or what do you anticipate in learning statistics?". Interestingly, the majority of the students responding to the prompts revealed that they were anxious about taking the statistics subject, particularly the mathematics components involved within the subject. With ethics approval and student consent, an analysis of responses indicated that the two main reasons for student anxieties when contemplating completing the subject were a lack of mathematical skills and unpleasant past experiences when taking statistics subjects. The emergent study examined how the provision of video-based resources embedded within the e-learning site could help student learning and understanding of statistics. In general, this study aimed at identifying the impact of learning resources on student learning outcomes such as understanding, confidence, and anxiety, and the way in which these resources may be better designed and delivered to the students in an e-learning or online learning environment. Over three iterations of the subject the researcher (and lecturers) moved from a focus on resources to addressing issues related to learning design. Evaluations of the changes focused on the value of resources to postgraduates, and this led to the development of learning design maps embedded within the e-learning site. The second case study was undertaken with undergraduate students from the introductory statistics subject (STAT131) in March/Autumn 2009. The results of the first case study, the usefulness of video-based resources in helping student learning of statistics and easing anxieties, informed this case study. Therefore, resources were provided as major learning supports within the subject's e-learning site. Video resources were provided to cover most of the subject's major topic areas. The outcomes of the first year undergraduates in STAT131 (March/Autumn 2009) were compared with outcomes from the postgraduates in GHMD983 (August/Spring 2008) in the first case study. The findings revealed contrary results on the relative worth of the video resources (Baharun & Porter, 2010). Although the resources and the video supports were similar in both subjects, the manner in which the resources were integrated within the e-learning sites
differed. Thus, the students responded to the resources differently. The undergraduates commented more about the failure to locate the resources; the postgraduates, on the other hand, found the resources valuable (Baharun & Porter, 2009). This finding highlighted the issue of learning design, specifically how resources can be provided and delivered in the e-learning system so as to support student learning more effectively. Later iterations in the second case study were informed by the literature on learning designs that viewed learning as being based on a student-centred approach. This case study was undertaken with the undergraduates enrolled in STAT131 from March/Autumn 2009 (baseline) to March/Autumn 2011 (implementation of a new subject design). The framework developed by Oliver (1999) and Oliver and Herrington (2001) comprises three key elements of learning: resources, tasks, and support materials that scaffold knowledge construction among students. These elements formed the basis for creating an effective learning design within the online or e-learning system. Further work has been carried out by a group of researchers from the Learning Designs Project (refer www.learningdesigns.uow.edu.au). This framework was refined by adopting the use of a Learning Design Visual Sequence (LDVS) representation that illustrates and summarises the three key elements visually using symbols (refer Learning Designs Project at www.learningdesigns.uow.edu.au). Based on this work, the focus of the case study moved to adapting the learning design representation (LDVS) to create a learning design map for students, which incorporated the elements of learning via links in the e-learning sites. In particular, video resources were embedded through links in the maps, rather than in by-type resource folders, as a means of providing learning support for specific weekly tasks. Additionally, this study used an assessment system that implemented a test and retest competency approach. This approach allowed the identification of students at risk, with a variety of learning issues, who possibly needed further assistance. The retest approach has been implemented elsewhere, for example at La Trobe University, where it was regarded as effective for student learning and improved the pass rate in a mathematics subject (Broadbridge & Henderson, 2008). With support resources clearly identified, an exasperated "what else can support student learning, or engage them?" led to a new questioning of students and a new direction. The innovations expanded to the introduction of a Headstart program in STAT131, which commenced in March/Autumn 2011. The third case study focused on the evaluation of a subject design which was developed in parallel with the second case study to determine the applicability of learning designs in a different learning context. These students were prospective primary and early childhood teachers enrolled in a pair of mathematics subjects (MATH131 in March/Autumn 2010 and 2011, and MATH132 in August/Spring 2009 and 2010). The subjects were offered at five campuses, with a mix of on-campus and remote campus students, over four successive teaching sessions beginning in August/Spring 2009. This resulted in two iterations of each subject. The learning designs and resources examined were aimed at supporting the trainee teachers to learn and understand mathematical and statistical content through a subject design delivered to the students within a blended learning environment, specifically in the e-learning systems. The three case studies provide an opportunity to evaluate the subject designs in placing resources effectively in the e-learning systems so as to support student learning of mathematics and statistics. Each case study was undertaken intensively with a particular group of students and is discussed in an individual chapter (Chapters 5, 6, and 7) of this thesis.
CHAPTER 2 ISSUES IN STATISTICS AND MATHEMATICS EDUCATION
2.0 Technology in statistics education
For more than a decade, educators and researchers in statistics education have seen tremendous growth in the use of technology in teaching and learning statistics, which has become more up-to-date, powerful, flexible, and efficient, with user-friendly interfaces and high interactivity via the World Wide Web (internet). As early as 2000, Ben-Zvi (2000, p. 128) described how technological tools could be used pedagogically to support student learning of statistics. The pedagogical practices included:
i. engaging students' active knowledge construction, by "doing" and "seeing" statistics;
ii. providing opportunities for students to reflect on observed phenomena; and,
iii. developing students' metacognitive capabilities, that is, knowledge about their own thought processes, self-regulation, and control.
More recently, Chance et al. (2007) drew statistics educators' attention to the need to discover appropriate tools and ways to employ them in promoting the student learning of statistics. They cautioned,
[c]hoice of a particular technology tool should be made based on ease of use, interactivity, dynamic linkages between data/graphs/analyses, and portability. Good choices if used appropriately can enhance student collaboration and student-instructor interactions, and often a combination of several different tools will be necessary. (Chance et al., 2007, p. 20)

Chance et al. (2007) also detailed the impact of technology in statistical practice on the content, pedagogy, and format of the introductory statistics subject. In the context of this study, emphasis was on changes due to the use of technology, specifically changes in the subject designs. That is, technology changes how information is delivered to students and shared among students in classes through learning management systems (LMS) such as the Blackboard Learning System and Moodle. For instance, Blackboard, a commonly known e-learning system, is capable
of providing space for communication and collaboration between students and instructors (for example, academic discussion forums, media libraries, blog tools) as well as assessment (for example, online quizzes and surveys, assignment drop boxes). These systems provide opportunities for students and teachers to stimulate communication, feedback, reflection, and revision (Chance et al., 2007). Undoubtedly, technology has a great deal of impact on the teaching and learning of statistics. It has the potential not only to improve student achievement but also to provide teachers with professional development (Chance et al., 2007). This, of course, will only happen if the technology tools are used appropriately. In July 2010, the Eighth International Conference on Teaching Statistics (ICOTS 8) highlighted the current state of research, in all its diversity, regarding the role of technology in statistics education. For instance, Francis (2010), and Lovett, Meyer, and Thille (2010), investigated the use of online learning or web-based courses in teaching statistics to effectively support student learning. Petty (2010) developed a series of YouTube videos to engage students and enhance the learning of statistics using Excel, and Stirling (2010) implemented CAST (Computer Assisted Statistics Textbooks) applets as an electronic textbook for teaching introductory statistics, helping teachers to teach statistical concepts and methods. At the Australian Conference on Teaching Statistics (OZCOTS 2010), there was an enormous amount of work and significant endeavour reported on the development and implementation of technological tools in teaching statistics to help students learn effectively. In this study, the aim was to find better ways of integrating technology into statistics subjects through providing an embedded learning support system, so as to improve student learning outcomes.
2.1 Teaching and learning statistics

Teaching students to both think and compute, however, proved difficult. Statistics relied heavily on mathematics, in part because data were expensive and calculations were tedious and slow, and so approximation methods were required. ... in order to fully understand statistics, one had to understand it through mathematics. (Gould, 2010, p. 298)

Indeed, mathematics plays an important role in teaching and learning statistics, hence this
statement seems to lend support to the false implication in Bullock's (1994) assertion that '[m]any statisticians now claim that their subject is something quite apart from mathematics, so that statistics courses do not require any preparation in mathematics'. Without referring to a specific curriculum, Herbert G. Wells contended that numeracy (particularly mathematics) is essential in society for producing "good citizens":
The time may not be very remote when it will be understood that for a complete initiation as an efficient citizen of one of the new great complex world wide states that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and write. (Cited in Gould, 2010, p. 297)

In the context of teaching and learning statistics, there is some degree of difference between the two disciplines, mathematics and statistics, as postulated by Moore and Cobb (2000) and Cobb and Moore (1997). However, some practitioners hold the view that statistics is simply another part of mathematics (Morris, 2008). Cobb and Moore (1997) argued that for statistics in particular, data are not just a matter of numbers; statistics deals with numbers as well as providing a "context", making problems more realistic and forcing students and teachers to think about the validity and applicability of their solutions. They viewed context differently in mathematics subjects:

… the [fundamental] focus in mathematical thinking is on abstract patterns: the context is part of the irrelevant detail that must be boiled off over the flame of abstraction in order to reveal the previously hidden crystal of pure structure. In mathematics, context obscures structure… In data analysis, context provides meaning. (Cobb & Moore, 1997, p. 803)

In the nineteen eighties, when the curriculum had become focused on teaching procedures and rote memory, the design of the curriculum in statistics was evidently unsuccessful in achieving its aim of preparing a citizenry for thinking and computing with data (Gould, 2010). Matthews and Clark (2003) discovered that even the best students could not understand the fundamental concepts of statistics several weeks after completing a traditional college-level introductory statistics subject. As raised by one student: 'I'm not really sure how to explain what [the mean] would mean. I just know how to do the formula', quoted in Gould (2010, p. 298).
Deming (1940) essentially pointed out the substantial role of statisticians that [a]bove all, a statistician must be a scientist… Statisticians must be trained to do more than to feed numbers into the mill to grind out probabilities; they must look carefully at the data, and take into account the conditions under which each observation arises. (Cited in Gould, 2010, p. 298)
Furthermore Deming argued that statistics might use mathematics, but most importantly it was about data and hence in the nineteen nineties many statisticians or educators including Moore (1997), Cobb (1991; 1993) and Wild (1994) firmly urged the curriculum designers or educators to put data at the centre of the statistics subject. Gould (2010, p.307) recently proposed a new definition of data, … [it] should be inclusive, rather than exclusive. It should acknowledge that data are created by both need and accessibility. Data arises from activities, objects, experiences, and relations. Data can be shared and stored and transmitted. Data include a context, but data also create context, as when emails (data) are studied to produce more data to recreate institutional social structures or make a better spam filter. As noted in Moore (1997, p. 127) the focus in statistics courses has been shifted to include „[m]ore data and concepts… fewer recipes and derivations… emphasize statistical concepts…[and]… point to core statistical ideas that are not mathematical in nature‟ and this affords more active learning opportunities that align with constructivist views of learning. Garfield and Ben-Zvi (2007) therefore posited the important contributions in statistics education to understanding the nature and development of student‟s statistical reasoning and what it means to understand and learn statistical concepts. This is necessary in developing the desired learning outcomes in introductory statistics courses where these outcomes are frequently referred to as statistical literacy, statistical reasoning, and statistical thinking.
2.1.1 Statistical learning
In this study, the first two case studies involved the teaching of introductory statistics subjects taken by students in undergraduate and postgraduate degree programs. The delivery of these two subjects was high functioning, with technology-embedded approaches to learning within the e-learning system. Technology has been used to facilitate communication between students and the teacher, and to increase access to learning resources and feedback. It also facilitates the processes of learning the discipline content. That is, it allows students to emphasise the statistical process that precedes the calculations and the interpretation of the results of these calculations. Moore (1997, p. 134) argued, regarding the use of technology in teaching and learning statistics:

[c]ontent and pedagogy – our understanding of what students should learn and of effective ways to help them learn – should drive our instruction. Technology should serve content and pedagogy. Yet technology has changed content and allows new forms of effective pedagogy.
and Moore further posited
[t]he most effective teachers will have a substantial knowledge of pedagogy and technology, as well as comprehensive knowledge about and experience applying the content they present. Content and technology are discussed briefly in the next sections whilst pedagogy will be examined in the next chapter.
2.1.2 Statistical concepts In the context of this study, the researcher is in accord with the work of Chance, Garfield, delMas, Ben-Zvi, Gould, and Rumsey in their search for a taxonomy which affords more effective classification of statistical learning. The three domains: statistical literacy, statistical reasoning, and statistical thinking have been selected to provide the mechanism for defining statistical learning in this study. Garfield and Ben-Zvi (2007) distinguished between the three domains and cautioned statisticians and statistics educators to carefully define the unique characteristics of statistics. The three domains are distinguishable in terms of the cognitive outcomes of statistical learning. They canvassed several definitions, firstly defining statistical literacy as … a key ability expected of citizens in information-laden societies, and is often touted as an expected outcome of schooling and as a necessary component of adults‟ numeracy and literacy. [It] involves understanding and using the basic language and tools of statistics: knowing what basic statistical terms mean, understanding the use of simple statistical symbols, and recognizing and being able to interpret different representations of data. (Garfield & Ben-Zvi, 2007, p. 15) and statistical reasoning as … the way people reason with statistical ideas and make sense of statistical information. [It] may involve connecting one concept to another (e.g., center and spread) or may combine ideas about data and chance. [It] also means understanding and being able to explain statistical processes, and being able to interpret statistical results. (Garfield & Ben-Zvi, 2007, p. 16) and concluded that statistical thinking implicated a higher order of thinking skills (that is, the way statisticians think) rather than statistical reasoning,
[i]t includes the knowing how and why to use a particular method, measure, design or statistical model; deep understanding of the theories underlying statistical
processes and methods; as well as understanding the constraints and limitations of statistics and statistical inference. [It] is also about understanding how statistical models are used to simulate random phenomena, understanding how data are produced to estimate probabilities, recognizing how, when, and why existing inferential tools can be used, and being able to understand and utilize the context of a problem to plan and evaluate investigations and to draw conclusions. (Garfield & Ben-Zvi, 2007, p. 16) Rumsey (2002, p. 4) claimed that the definition of statistical literacy can be regarded as two different learning outcomes, these are statistical competence that „refers to the basic knowledge that underlies statistical reasoning and thinking,…‟, and statistical citizenship which „refers to the ultimate goal of developing the ability to function as an educated person in today‟s age of information‟. Gould (2010) further viewed the definitions of statistical literacy and statistical thinking according to two paradigms on how statistics courses are designed, either for “consumers” or “producers” of statistics. He posited that statistical literacy seems to be aimed fairly at consumers which refer to individuals who must learn to “read” statistics. Consumers, … will read about statistics in the news media, and will need to understand basic statistical concepts (such as median vs. mean) in order to make household decisions and in some cases workplace decisions… they are consumers of statistical summaries. Consumers are majoring in literature, law, and mathematics… (Gould, 2010, p. 307) On the other hand, he postulated that producers are better served in a statistics course that teach statistical thinking in which it is more comprehensive competency than statistical literacy. … [P]roducers will, at some point in their intended career, perform statistical analyses on data that either they or colleagues have collected. Producers are majoring in biology, atmospheric science, social science, psychology, engineering. (Gould, 2010, p. 307) Garfield and Ben-Zvi (2007) contended that the three learning outcomes are unique; however there appears to be some overlap and a type of hierarchy, with statistical literacy serving as a foundation for statistical reasoning and statistical thinking (see Figure 2.1). The domains proposed by Rumsey (2002), Garfield (2002), and Chance (2002) have been discussed in delMas (2002). delMas compared and contrasted the two different perspectives, showing through Venn diagrams, how the outcomes of instruction are associated (see Figure 2.1 and Figure 2.2). delMas claimed that
[i]f we focus on literacy as the development of basic skills and knowledge that is needed to develop statistical reasoning and thinking… then a Venn diagram such
as the one presented in [Figure 2.1] might be appropriate. This point of view holds that each domain has content that is independent of the other two, while there is some overlap. (delMas, 2002, p. 3) and the other perspective is represented by Figure 2.2 in which
[t]his perspective treats statistical literacy as an all-encompassing goal of instruction. Statistical reasoning and thinking no longer have independent content from literacy. They become subgoals within the development of the statistically competent citizen… It may also be the case that the statistical expert is not just an individual who knows how to “think statistically”, but is a person who is fully statistically literate as described by Rumsey. (delMas, 2002, p. 4)
Figure 2.1 Independent domains with some overlap (source from delMas, 2002, p. 4)
Figure 2.2 Reasoning and thinking within literacy (source from delMas, 2002, p. 4)
Reflecting on the above perspectives, delMas (2002) highlighted a different view on how statistics educators can distinguish the aims of statistical literacy, reasoning and thinking based on their instructional activities.
What moves us from one of the three domains to another is not so much the content, but, rather, what we ask students to do with the content. (delMas, 2002, p. 5) From this perspective, delMas viewed the nature of the assessment tasks as essential to identifying whether instruction promotes these outcomes, as illustrated in Table 2.1. To promote the development of statistical literacy, instructors can set tasks that ask … students to identify examples or instances of a term or concept, describe graphs, distributions, and relationships, to rephrase or translate statistical findings, or to interpret the results of a statistical procedure. (delMas, 2002, p. 5)
and statistical reasoning can be developed if we ask … students to explain why or how results were produced… or why a conclusion is justified[.] (delMas, 2002, p. 5) delMas ultimately believed that statistical thinking can be distinguished from the other domains in which we ask … students to apply their basic literacy and reasoning in context… [and]… is promoted when instruction challenges students to apply their understanding to real world problems, to critique and evaluate the design and conclusions of studies, or to generalize knowledge obtained from classroom examples to new and somewhat novel situation. (delMas, 2002, p. 5) Table 2.1 Assessment tasks that may distinguish the three instructional domains (source from delMas, 2002, p. 6)
Drawing from delMas‟s (2002) perspective, Morris (2008) argued that the development of statistical literacy is closely related to “lower order skills” whilst the other two domains, reasoning and thinking seem associated with “higher order skills”.
The two subjects that formed the basis of this study have covered fundamental statistical concepts and procedures, and linked the three domains as appeared in the subject objectives shown in Table 2.2.
Table 2.2 Sample of subject objectives for STAT131 March/Autumn 2010
Variation and uncertainty occur in most aspects of life. In this context the discipline objectives of this subject are that all students should be able to:
1. Understand the techniques, concepts and theory that assist in making sense of this variability and uncertainty.
2. Identify, select, use and interpret the techniques for
   a) displaying variation in data through graphs and tables
   b) summarising data
   c) reporting and presentation skills.
3. Measure uncertainty.
4. Estimate point and interval estimates of population parameters.
5. Understand, test and interpret hypothesis tests about means and proportions.
6. Understand, construct and simulate sampling distributions.
7. Understand the implications of the central limit theorem.
8. Identify and define the key properties, and determine the appropriateness, of models for describing uncertainty in data, addressing
   a) discrete models, specifically the Binomial and Poisson
   b) continuous models, specifically the Normal and Exponential.
9. Question the appropriateness of measurement, design of studies, analysis of data and interpretation of outcomes.
10. Apply statistical problem solving skills in new situations.
Source: STAT131 March/Autumn 2010 Laboratory Manual
2.2 Use of technology for improving learning outcomes
In this study, technology has been used in statistics subjects to allow students to actively
engage in simulation tasks, producing graphs, statistical analyses and computer-aided calculation. Consequently, students are able to focus on the more complex statistical thinking and interpreting needed in analysing real data. In the literature, the implementation of technology in statistics classrooms has evidently impacted on the student learning of statistics as it appears as one of the major areas discussed in statistics education research publications and conferences. For example, in journals and conferences such as Statistics Education Research Journal
(http://www.stat.auckland.ac.nz/~iase/publications.php?show=serj), Technology Innovations in Statistics Education (http://repositories.cdlib.org/uclastat/cts/tise/), Journal of Statistics Education (http://www.amstat.org/publications/jse/), the International Conference on Teaching Statistics (ICOTS) (http://www.stat.auckland.ac.nz/~iase/publications.php?show=icots8), and the Australian Conference on Teaching Statistics (OZCOTS) (http://oznznets.com/ozcots2010.html). Chance et al. (2007) described the types of technology used in statistics instruction, which can be categorised as statistical software packages, educational software, spreadsheets, applets/stand-alone applications, graphing calculators, multimedia materials, and data repositories. These technological tools are regarded as benefiting users, and while there is much overlap in their capabilities across categories, none of them ostensibly covers all possible
educational uses of technology (Ben-Zvi, 2000). The next section reviews types of technological tools available, however
[i]t is important to remember that the focus of instruction should remain on the content and not the tool, and to choose technology that is most appropriate for the student learning goals, which could involve a combination of technologies. (Chance et al., 2007, p. 4)
2.2.1 Statistical software packages
Statistical packages are software designed for computing statistics and constructing visual representations of data. These software packages include SPSS (http://www.spss.com), JMP (http://www.jmp.com),
Minitab (http://www.minitab.com), SAS (http://www.sas.com),
STATISTICA (http://www.statsoft.com), and S-PLUS (http://www.insightful.com). For instance, SPSS was originally developed for statistical analysis in the social sciences and is widely used by marketing organisations, health researchers, survey companies, government, and educators. Minitab is a statistical software package used in statistics education, designed to assist educators to teach and students to learn. While many of these packages (e.g. SAS, JMP, STATISTICA) were initially developed for industry users, they have evolved to be more user-friendly and menu-driven, and hence suitable for student use. 'Menu-driven' is a software term describing a method of operation that allows the user to navigate menus easily using the mouse instead of commands (Chance et al., 2007). As these packages become more flexible and user-friendly, they are being widely used as teaching and learning tools in introductory statistics subjects worldwide. SPSS is one of the statistical packages that has added menu-driven capability, with options for complex statistical analyses available through menu choices. It is becoming increasingly feasible as a tool that allows student exploration and construction of ideas. For instance, students at the University of Wollongong (UOW) used SPSS to simulate samples for different models and to carry out rapid production of alternative graphs, enabling them to look at data from different perspectives. This software package has the more basic capabilities such as exploratory data analysis (EDA), linear regression analysis, and analysis of variance (ANOVA), in addition to more sophisticated analysis capabilities. Nowadays there is more freely accessible or cost-effective statistical software, designed as student versions that are smaller in scope, limiting the number of variables and procedures. There are also several online stand-alone statistical software packages available freely or at low cost, and some of these are free for a trial period and can be downloaded.
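To make the kind of exploration described above concrete, the following is a minimal illustrative sketch, written in Python rather than the SPSS menus actually used in the subjects, of simulating samples from two hypothetical models and rapidly producing alternative graphs of the same data. It is offered only as an analogue of that workflow, not as material from the subjects themselves; the models and parameter values are assumptions.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(131)

# Simulate samples from two different models, as a student might do to compare them.
normal_sample = rng.normal(loc=20, scale=4, size=200)       # Normal, mean 20, sd 4
exponential_sample = rng.exponential(scale=20, size=200)    # Exponential, mean 20

# Rapidly produce alternative graphs of the same data: histogram and boxplot side by side.
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for row, (name, sample) in enumerate([("Normal", normal_sample),
                                      ("Exponential", exponential_sample)]):
    axes[row, 0].hist(sample, bins=20)
    axes[row, 0].set_title(f"{name} sample: histogram")
    axes[row, 1].boxplot(sample, vert=False)
    axes[row, 1].set_title(f"{name} sample: boxplot")
plt.tight_layout()
plt.show()

Swapping the model, the sample size, or the type of graph requires only a one-line change, which is the sense in which such tools allow rapid production of alternative views of data.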
There are several factors affecting the choice of software packages used at the UOW. JMP is site licensed by Information Technology Services (ITS) for use by students and staff. There is ongoing debate regarding the use of this software among academics in the Health Science Faculty, some of whom would prefer SPSS. SPSS is widely used in industry and across faculties, and has a long history of use at the UOW; for over 25 years this software has been used, with considerable support and expertise available for users. Its use is culturally embedded in many faculties. Like JMP, SPSS is site licensed, and a student version is available for home use only. Policy regarding the use of software in the mathematics and statistics department specifies SPSS for 100-level subjects, with higher level mainstream statistics subjects using different packages each year (e.g. JMP at 200 level, SAS at 300 level, R or S at 400 level). Both SPSS and JMP are installed in central computer laboratories for student use and teaching purposes.
2.2.2 Educational software
There are a variety of statistical software packages which have been developed exclusively to suit different kinds of needs and functions aimed at helping students learn statistics. In this study, it is useful to explore the use of other software packages to support student learning in different contexts. For instance, TinkerPlots is exploratory data analysis software designed for student use in grades 4-8, particularly in the United States but also in some other countries (e.g. Ben-Zvi, 2006). It was developed by Clifford Konold and Craig Miller (Konold & Miller, 2005; http://www.keypress.com/x5715.xml) at the University of Massachusetts Amherst and aimed to aid younger students' investigation of data and statistical concepts. At the UOW, a lot of data analysis is undertaken in statistics subjects to allow students to explore statistical concepts, such as determining sample size through simulation and cross tabulation. Using TinkerPlots, students

can make a large variety of graphs,… But rather than making these graphs directly using commands, students construct them by progressively organizing cases using basic operations including "stack," "order," and "separate." Responding to these operations, case icons animate into different screen positions. The interface was based on observations of people organizing "data cards" on a table to make graphs to answer specific questions. (From http://en.wikipedia.org/wiki/TinkerPlots#cite_note-0)

Fitzallen and Watson (2010) revealed the value of employing TinkerPlots in a classroom of grade 5/6 students as a way to facilitate the development of statistical reasoning for students with no previous data handling experience.
With some similarities to TinkerPlots, Fathom (from http://www.keypress.com/x5656.xml) is an innovative computer-based learning environment that provides a flexible and dynamic tool for exploratory data analysis (EDA) and algebra. It is intended to assist students particularly in high school and college introductory statistics courses understand the abstract concepts and processes in statistics. Ben-Zvi (2000, p. 144) described the strongest features of Fathom including … dynamic manipulation, that is, instantaneous updating of every representation and calculation while dragging data points, axes, attributes, or bars; formulas to calculate values, plot functions, and control simulations; “sliders” [see Figure 2.3] as part of function plotting, attribute definition, and filters; simple simulation and sampling tools; and direct import of data from the Internet.
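The "slider" idea quoted above (and pictured in Figure 2.3 below) can be imitated outside Fathom. The sketch below is a hypothetical Python/matplotlib analogue, not Fathom's own implementation: it lets the user drag the intercept b of y = x/2 + b and watch the line update, which is the essence of the dynamic manipulation being described.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

# Plot y = 0.5*x + b and let the user drag b, loosely mimicking a Fathom slider.
x = np.linspace(-10, 10, 200)
fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)                  # leave room for the slider below the axes
line, = ax.plot(x, 0.5 * x + 5)
ax.set_ylim(-10, 15)
ax.set_title("y = x/2 + b")

slider_ax = fig.add_axes([0.2, 0.1, 0.6, 0.04])   # [left, bottom, width, height]
b_slider = Slider(slider_ax, "b", -5.0, 10.0, valinit=5.0)

def update(val):
    # Redraw the line with the new intercept each time the slider moves.
    line.set_ydata(0.5 * x + b_slider.val)
    fig.canvas.draw_idle()

b_slider.on_changed(update)
plt.show()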
Figure 2.3 A Fathom slider allows students to change the parameter value (y-intercept) b = 5 of a graph y = (1/2)x + b with slope ½ (Franzosa, McGarry & Hall, 2008)

GenStat for Teaching and Learning (refer http://www.vsni.co.uk/software/genstat) is a new statistical software tool, a special version of GenStat for Windows designed to cover the statistical analyses that are needed in schools or in university undergraduate courses (Biggs, 2010).
One of the many benefits of GenStat is that it is a complete package. [Students] can use GenStat to manage and illustrate [their] data, summarize and compare, model relationships, design investigations and of course analyse [their] experiments from the simplest ANOVA right through to the most complex REML [Restricted Maximum Likelihood]. (Source: http://www.vsni.co.uk/software/genstat, accessed 27/12/2011)

It is specifically tailored to suit the needs of different levels of students; thus there are menus for all the standard analyses, for example: exploratory data analysis (EDA) and basic
statistics, statistical tests (t, chi-square, nonparametric tests), regression (general models not just e.g. simple linear regression), generalized linear models, nonlinear models (standard curves and user-defined), analysis of variance, design of experiments and sample size, REML analysis of linear mixed models, multivariate analysis, six-sigma, survival analysis, time series and repeated measurements.
2.2.3 Spreadsheets
Spreadsheet packages such as Excel are widely used for introductory statistical instruction and have many compelling capabilities. Excel is also the most popular spreadsheet for scientific, engineering and technical applications because of its powerful features and universal availability. However, Chance et al. (2007) contended that statisticians and educators must be cautious in using Excel as a statistical educational package, particularly regarding its calculation algorithms and choice of graphical displays; as they noted, 'it is still very difficult to make a boxplot in Excel' (p. 6). In the context of this study, the difficulty of plotting distributions was also an issue, and is a reason for its non-use by statisticians at the UOW. Hunt (1996) further noted that the use of Excel in education has been disputed; nevertheless
Excel does have some strengths in helping students learn to organize data and in “automatic updating” of calculations and graphs as values are changed, and some advocate Excel due to its widespread use in industry and relatively easy access… (Chance et al., 2007, p. 6) Engineers at the UOW are in favour of the use of Excel for teaching and learning as it is a widely accessible tool. In the engineering field of study, it appears that
[b]y using Excel, freshman students can perform a what-if-analysis for which the problem‟s solution is sought, with minimum efforts... Moreover, Excel spreadsheets are useful and easier to modify for parametric studies on the overall process than any other computational tool or program. (Al-Khalaifat & AlRifai, 2002, p. 384) The same rationale of accessibility results in student teachers at the UOW using Excel. In the recent proceedings of the Australian Conference on Teaching Statistics (OZCOTS 2010), Barr and Scott (2010) used Excel in teaching statistical concepts through a spreadsheet environment to perform statistical calculations and simulations (see Figure 2.4). Additionally, in the proceedings of the Eighth International Conference on Teaching Statistics (ICOTS 8) Petty (2010) developed a series of videos on teaching Excel and statistical concepts. These videos were uploaded onto YouTube so as to engage and enhance the student learning of statistics as 28
well as Excel. In an earlier ICOTS (ICOTS 6 in 2002), Vermeire, Carbonez, and Darius (2002) encouraged the use of Excel in statistics training as a tool for statistical calculations.
Figure 2.4 Example of simulation results from a Binary Distribution for n = 50 using Excel (Barr & Scott, 2010)
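Barr and Scott's spreadsheet itself is not reproduced here; as a rough stand-in, the same kind of simulation of binary (success/failure) outcomes with n = 50 trials per sample can be sketched in a few lines of Python. All names and parameter values below are illustrative assumptions, not taken from their worksheet.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2010)

n, p, reps = 50, 0.5, 1000               # 50 trials per sample, 1000 simulated samples
counts = rng.binomial(n, p, size=reps)   # number of "successes" in each simulated sample

# Compare the simulated distribution of counts with the theoretical mean n*p.
print(f"simulated mean = {counts.mean():.2f}, theoretical mean = {n * p:.2f}")

plt.hist(counts, bins=range(10, 41), edgecolor="black")
plt.xlabel("number of successes in 50 trials")
plt.ylabel("frequency")
plt.title("Simulated binary outcomes (n = 50, p = 0.5)")
plt.show()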
2.2.4 Applets/stand-alone applications
The development of the World Wide Web has provided unprecedented global means for statistics educators to easily share their materials and ideas on ways to improve the teaching and learning of statistics (Lock, 1998), ostensibly within the online learning environment. Online applets aimed at helping students to explore statistical concepts in a visual, interactive and dynamic environment have expanded rapidly over the last decade. Applets are software components, usually small in size and performing a narrow function, which typically run in a web browser (for example, Internet Explorer, Firefox, Netscape, Google Chrome), either with or without an internet connection (Chance et al., 2007). According to Chance et al. (2007), many applets nowadays are designed to be user-friendly for students to use, and they often capture an interesting "context" for students. Each applet typically allows students to explore a particular concept: the "Monty Hall" problem (http://www.math.ucsd.edu/~crypto/Monty/monty.html) is one example, and the Sampling Distribution Simulation, which allows students to explore the nature of sampling distributions of sample means, is another (see Figure 2.5). Applets have also been created to allow students to learn and understand statistics by major topic area, for example Applets by Topic (http://www.bbn-school.org/us/math/apstats/applets/applets.html#anchor160911).
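To indicate what such a sampling-distribution applet does behind its interface, a minimal Python sketch is given below. It is an assumed, simplified analogue of the applet's behaviour (a skewed parent population, repeated samples of size 30, and a histogram of the sample means), not the applet's own code.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Skewed parent population (exponential), as many sampling-distribution demonstrations use.
population = rng.exponential(scale=2.0, size=100_000)

sample_size, n_samples = 30, 2000
means = np.array([rng.choice(population, size=sample_size).mean()
                  for _ in range(n_samples)])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.hist(population, bins=60)
ax1.set_title("Parent population (skewed)")
ax2.hist(means, bins=40)
ax2.set_title(f"Means of samples of size {sample_size}\n(approximately normal)")
plt.tight_layout()
plt.show()

Even with a strongly skewed parent distribution, the histogram of sample means is roughly normal, which is the statistical idea (the central limit theorem) the applet is designed to let students see for themselves.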
Figure 2.5 Sampling Distribution Simulation Applet (Source from: http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/SamplingDistributionApplet.html)
Research has shown that applets have been widely used by statistics educators to support students' learning of statistics (Francis, 2010; Stirling, 2010a; Vermeire et al., 2002). Some limitations, however, appear, as noted by Chance et al. (2007, p. 6),
[w]hat these tools often gain in visualization and interactivity, they may sometimes lose in portability. And while they can be freely and easily found on the Web, they are not often accompanied by detailed documentation and activities to guide student use. The time required for the instructor to learn a particular applet/application, determine how to best focus on the statistical concepts desired, and develop detailed instructions and feedback for the students may not be as worthwhile as initially believed.
In the context of this study, the researcher highlights the need for proper documentation of subject design, allowing portability amongst educators, when designing resources and materials to support students' learning within an online learning environment. With respect to legal aspects in Australia, the availability of materials on the internet does not mean they can be embedded in the UOW site. Links to sites outside the university can be costly for students in terms of internet quota and risky in terms of ongoing availability for the subject. Many materials
do not contain the necessary legal statements or carry a Creative Commons licence (refer to http://creativecommons.org/licenses/by-nc-sa/2.5/au/deed.en).
2.2.5 Graphing calculators
Graphing calculators are perhaps the most portable and inexpensive technology tools (Burrill, 1996) and they are increasingly used in schools worldwide. Graphing calculators are designed as learning tools to help students visualize, develop problem-solving abilities and improve their understanding of mathematical, statistical, and science concepts (Dunham & Dick, 1994; Keller & Russell, 1997). Advancements in technology have made the graphing calculator a powerful tool for plotting graphs; solving simultaneous equations; analyzing, exploring, and simulating data; as well as performing numerous other tasks with variables. Some graphing calculators can be used to collect and measure variables such as temperature, voltage, light, pH, and more, functioning as data loggers. Most models and brands of calculators, such as Casio, Hewlett Packard, Texas Instruments, and Sharp (to name a few), are capable of performing many statistical calculations, including inference procedures and probability distributions (see the example in Figure 2.6). In the Australian context, graphics calculators are increasingly used in schools, depending on policies that vary from state to state. However, they do not form part of the UOW statistics teaching culture, with priority given to software packages used in industry and education.
Figure 2.6 Using a graphing calculator (model TI-83+/84+) to perform statistical plots (Source from http://mathbits.com/mathbits/tisection/Openpage.htm)
Burrill (1996) identified, with respect to the secondary level, five major areas in statistics which have been greatly affected by the use of graphing calculators; these are: introductory data analysis, linear equations, least squares linear regression, sampling distributions, and multivariate regression. Burrill further contended that the calculator
… allows students to do old things in new ways that enhance concept development and student understanding. [It] also allows students to do things that were not possible before… The advantage of the calculator is that every student can have one for use at any time and at any place at a much lower cost. (Burrill, 1996, p. 16)
Despite the enormous benefits for students, Chance et al. (2007) posit that care must be exercised by students (and even teachers) in using the calculator's output, as it might not present sufficient communication of statistical results (for example, calculators may produce outputs such as graphs without labels and scales). At the UOW, the graphics calculator is not a popular tool due to students' high rate of personal computer ownership and their preference for working with computer output. In addition, students have good access to computer laboratories.
2.2.6 Multimedia materials
Multimedia is a computer-based system that combines several different types of technology and content forms such as text, sound, still images, full-motion video, animation, and computer graphics. The term multimedia is often used as a synonym for hypermedia, which can be defined as a work available on CD-ROM where consultation can be non-linear, with a high degree of user interaction, containing non-textual documents such as graphs, videos and animation (Ciaccio, 1998). For example, ActivStats (http://www.datadesk.com/products/mediadx/activstats/) is a rich learning environment for an introductory statistics course (Ben-Zvi, 2000) provided on a CD which includes videos of real world uses of statistics, simulation, mini lectures accompanied by animation, narration, text, and interactive experiments that can instantly launch associated pieces of statistical software for analysing a data set. This learning environment benefits the students in that they only need to learn one type of technology within a single course (Chance et al., 2007). There are many other recent innovative multimedia resources being developed around the world, for instance: electronic textbooks (e-books) such as CyberStats (Symanzik & Vukasinovic, 2006; http://www.cyberk.com) and CAST (Stirling, 2010b; http://cast.massey.ac.nz; see Figure 2.7); the e-tutorial learning environment ALEKS (Tempelaar, Rienties, van de Loeff & Giesbers, 2010; http://www.aleks.com); and the open access web-based course, Open Learning Initiative (OLI-Statistics) (Lovett et al., 2010; http://oli.web.cmu.edu/openlearning/forstudents/freecourses/statistics).
Figure 2.7 CAST Introductory statistics e-book (Source from: http://cast.massey.ac.nz/core/index.html?book=general)
2.2.7 Data and materials repositories
The role of the World Wide Web in statistics education is important in locating and using pedagogically useful data sets and exploratory activities for the use of both educators and students (Chance et al., 2007). A variety of data repositories is available that come with contexts outlining their background and classroom uses, for instance: the AMBER Data Repository (Vieira, Mendes, Duraes & Madeira, 2008; http://www.amber-project.eu), the Journal of Statistics Education (JSE) Dataset and Stories feature (http://www.amstat.org/publications/jse/jse_data_archive.htm), and The Data and Story Library (DASL) (http://lib.stat.cmu.edu/DASL). CAUSE (Pearl, 2010; http://www.causeweb.org; see Figure 2.8) is a good resource for data sets and peer-reviewed classroom activities; it features a searchable annotated listing of approximately 3000 referenced articles in statistics education, more than 1800 high quality resources for teaching statistics, approximately 300 fun items (cartoons, songs, jokes, quotes, poems, word puzzles, or videos) and much else, all for free use by statistics instructors and students. However, in Australia and particularly at the University of Wollongong, there are issues associated with using and sharing resources or teaching materials via the internet. One of these has to do with resources disappearing mid-subject. Lecturers building subjects need control over the availability of key resources or a ready list of alternatives. The ability to copy resources into e-learning sites is made difficult by Australian copyright law and by the vigilance of Information Technology Services (ITS) departments. Porter and Denny (2011) highlighted these legal issues in an Australian Learning and Teaching Council (ALTC) project report, with the project team making resources available under a Creative Commons licence. This enables lecturers and educators to use and adapt the resources for teaching, with the permission of and recognition for the developers, all in the pursuit of providing better learning opportunities for students (refer to http://www.uow.edu.au/informatics/maths/research/mathsresources/UOW060154.html).
Figure 2.8 Resources by material type available on CAUSE website (Source from: http://www.causeweb.org/resources/)
2.3 Anxiety or fear?
Lecturer: What do you look forward to in learning statistics?
Student: I would have to agree… I am (hopefully) looking forward to passing this course and never having to do statistics again. I struggled big time in my undergraduate degree but that had to do mainly with the way it was taught... I am also looking forward to never having to do statistics again after this… I have absolutely no intention of going anywhere near a job that needs statistics!
Lecturer: What are you concerned about or what do you anticipate about statistics?
Student: Studying by distance I think it will be hard but may also make some more sense. I also am lucky to have a good support base of people who will help me should I run into trouble (which I am sure I will).
(Source from: student forum via GHMD983 August/Spring 2008 e-learning site on July 25)
In the early stage of this study the researcher initially intended to examine ways to
improve student learning outcomes, particularly by overcoming anxiety in learning statistics. The remarks by students in one of the subjects examined in this thesis provide the starting point for investigating how to improve student learning outcomes. What is meant by anxiety? Anxiety in general can be defined as an unpleasant feeling in a person during a certain situation where he or she might think about the outcomes of that situation. The terms anxiety and fear are often used interchangeably even though the concepts are distinct; fear arises when a threat is certain
(such as a perceived threat), whereas anxiety appears without the existence of actual danger (such as apprehension or discomfort) (Merrel, 2008; Muris, 2007). May (1977, p. 61) has described the distinction between fear and anxiety as
[i]n fear we are aware of ourselves as well as of the object, and we can orient ourselves spatially with reference to the thing feared. But anxiety "attacks us from the rear",… or, I would say, from all sides at once. In fear, your attention is narrowed to the object, tension is mobilized for flight; you can flee from the object because it occupies a particular point spatially. In anxiety, on the other hand, your efforts to flee generally amount to frantic behaviour because you do not experience the threat as coming from a particular place, and hence you do not know where to flee.
May stated that when someone finds they cannot cope with a situation then fear may pass into anxiety, or, as someone begins to feel they can cope adequately, the anxiety may pass into fear. Muris (2007) provides an example, worry: when someone feels worried, he or she might think about the negative things that might happen in specific situations or events. In short, Muris simply defined anxiety as a feeling of tension, apprehension, and worry. Anxiety may undoubtedly reduce one's self-confidence in facing certain of life's tasks. Anxiety, however, should not be perceived as having only a negative effect; for instance, Chinn (2005) and Merrel (2008) postulated that facilitative anxiety motivates and adjusts behaviour positively, whereas debilitative anxiety discourages and changes behaviour negatively. Accordingly, May (1977) viewed anxiety as a sign of new possibility for an individual's self-development, enlarging the scope of the individual's world when the anxiety is dealt with rather than run away from (for example, through procrastination). Anxiety realistically cannot be avoided in one's life because this phenomenon occurs with its own purpose, namely to protect someone from dangers that threaten on certain occasions, for instance losing out in competition or failure, or feeling unwanted, isolated, and ostracized (May, 1977). Although research on anxiety has been conducted for more than 30 years by researchers around the world (e.g. Hembree, 1990; Miller & Bichsel, 2004; Onwuegbuzie, 2004; Onwuegbuzie & Daley, 1999; Zeidner, 1991), the prevalence of anxiety among students and its relationship to learning and achievement remains a major issue in educational settings. In regard to this study, the researcher's focus was not to eliminate anxiety
that interrupts students' learning, but to help students find a way to cope with it by using a suitable learning approach or system providing resources to deal with, or to capitalize on, the anxiety so as to lead to better learning outcomes. Anxiety is divided into two types, state and trait anxiety: individuals possessing state anxiety tend to experience it only in specific test or evaluative situations, whereas individuals experiencing trait anxiety tend to feel anxious in all kinds of situations (Miller & Bichsel, 2004; Zeidner, Paul & Reinhard, 2007). Further, Spielberger, Gorsuch, and Lushene (1970, p. 3) theoretically posited the conceptual distinction between state and trait anxiety:
State anxiety (A-State) may be conceptualized as a transitory emotional state or condition of the human organism that varies in intensity and fluctuates over time. This condition is characterized by tension and apprehension, and activation of the autonomic nervous system… Trait anxiety (A-Trait) refers to relatively stable individual differences in anxiety proneness, that is, to differences between people in the tendency to respond to situations perceived as threatening with elevations in A-State intensity.
State anxiety will normally be higher in stressful situations, such as evaluative situations or examinations, in which an existing danger (failure) is perceived as threatening, whereas in non-stressful situations state anxiety will be low (Krohne & Laux, 1982). Krohne and Laux (1982) further noted that persons who are high in trait anxiety tend to perceive many situations as more threatening than those who are low in trait anxiety and, consequently, are high in state anxiety as well. In other words, persons high in trait anxiety will respond with higher elevations in state anxiety. Miller and Bichsel (2004) claimed that both state and trait anxiety are related to individuals' performance on various tasks, and many studies have been conducted regarding the effect of anxiety on students' performance (e.g. Maloney, Risko, Ansari, & Fugelsang, 2010; Miller & Bichsel, 2004; Onwuegbuzie, 2004; Sansgiry & Sail, 2006; Zakaria & Nordin, 2008). Test anxiety is defined similarly to anxiety, the difference being that test anxiety arises in evaluative or test situations. In particular, test anxiety can be classified as state anxiety, where the individual's anxiety proneness occurs in certain evaluative situations. Referring to Pekrun, Neil, and Paul (2001), test anxiety can be defined as anxiety that relates to tests or evaluative situations and, typically, it may be accompanied by a feeling of worry about failing a test. Many researchers have defined test anxiety as an unpleasant feeling, with physiological and behavioural concomitants, which is experienced in formal testing or other evaluative situations (for example, Sieber, 1980; Zeidner, 1998). The impact of test anxiety is often found in educational settings (Lane, Dale & Horrell, 2006; Merrel, 2008; Murtonen & Lehtinen, 2003; Papanastasiou & Zembylas, 2008). Zeidner et al. (2007) posited that it may also severely affect one's physical and mental health. Research on students' test anxiety has been widespread,
continuously attracting researchers' interest for more than 50 years, with more than 1,000 empirical studies producing evidence on its structures, antecedents, and effects, as well as how to cope with the anxiety (e.g. Hembree, 1990; Meijer, 2001; Onwuegbuzie & Leech, 2003; Rodarte-Luna & Sherry, 2008). The investigation of the concept of test anxiety began in the early 1950s with Sarason and Mandler (1952), with studies progressing from how test anxiety relates to an individual's performance to the development and validation of instruments to assess individual differences in test anxiety (Stöber & Pekrun, 2004). However, Stöber and Pekrun (2004) argued there was a reduction in the number of scientific publications on test anxiety after a peak in the 1980s, although it may be that such work now appears under different research titles, for instance mathematics or statistics anxiety (Onwuegbuzie & Leech, 2003; Zeidner, 1991), computer anxiety (Beckers, Rikers & Schmidt, 2006), library anxiety (Onwuegbuzie, 1997), and language anxiety (Yang, Tsao, Lay, Chen & Liou, 2008). A large number of research studies on test anxiety have been conducted across many disciplines as a way of exploring students' performance and achievement as well as coping strategies (e.g. Miller & Bichsel, 2004; Onwuegbuzie, 2004). Test anxiety is normally evoked when students have high expectations (Onwuegbuzie & Daley, 1999) regarding outcomes from test situations; Zeidner et al. (2007) further posited there may also be students who are test-anxious and unwilling to study as they feel the study situation focuses on frightening events. Meijer (2001) consequently believes that the true potential of students may not be measured adequately in a classroom exam as it may be confounded with test anxiety. This is one of the reasons there is still continued research on test anxiety: it remains an important factor in educational settings.
2.3.1 Mathematics anxiety
Mathematics seems to be a subject that creates anxiety. Mathematics anxiety is likely to be the response to mathematical content, as stated by Richardson and Woolfolk (1980, p. 271),
… mathematics anxiety is something more or different than test anxiety when mathematics is involved. Math anxiety appears to be the reaction to mathematical content, to some of its distinctive features as an intellectual activity and its connotative meanings for many persons in our society as a reaction to the evaluative form of mathematics tests and problem-solving activities.
Furthermore, Hembree (1990) defined mathematics anxiety as an emotional state of great fear of future mathematics-related activities. In other words, students not only experience test
anxiety, but they are also afraid or anxious when dealing with any kind of mathematics-related task that involves numbers (Chinn, 2005). Students may experience anxiety in almost all subjects taken in their study; unfortunately, debilitative anxiety is frequently met in mathematics (Chinn, 2005). Chinn (2005) highlighted several factors that may contribute to anxiety in learning mathematics, such as poor understanding of mathematics, the abstract nature of mathematics, inappropriate instruction, badly designed tasks, a curriculum that does not take into account the range of students targeted, underachievement, teachers' attitudes, parental attitudes, the pressure of having to do mathematics quickly, and the extremely judgmental nature of mathematics, that is, answers are almost always judged as "right" or "wrong". In addition, there are several myths relating to mathematics, presumably among the reasons that many students cannot do their mathematics tasks well, such as "mathematics problems definitely have only one answer", "not everybody was born to be good in mathematics", "those who are good in mathematics are those who can do counting well", "mathematics needs logic but not creativity" and "boys are much better than girls in doing mathematics" (Arul et al., 2004). Preis and Biggs (2001) stated that negative experiences in mathematics may result in poor mathematics performance. Furthermore, Arul et al. (2004) found that students who have positive experiences in mathematics (a strong mathematics background) were less anxious and performed well in mathematics. Mathematics anxiety hence appears to be the main reason for low performance in mathematics (Arul et al., 2004; Hembree, 1990). Zaslavsky (1994) claimed that mathematics anxiety is experienced by individuals of all ages, races and economic backgrounds; nevertheless, there may be differences between individuals in the degree of mathematics anxiety they experience. Gender is one of the factors that relates to mathematics anxiety. The myth that "boys are much better than girls in doing mathematics", as discussed above, appears to be supported by research revealing that women are more mathematics anxious than men (Bernstein, Reilly & Cote-Bonanno, 1992; Burns, 1998). Arul et al. (2004), however, found no significant relationship between gender and mathematics anxiety and further posited that mathematics background needs to be considered as well. Reflecting on race as another factor related to mathematics anxiety, Bernstein et al. (1992) and Burns (1998) found Asian American and Native American men have higher mathematics anxiety than women in the United States. In addition, international students were also found to have higher mathematics anxiety than domestic students in the United States (Bell, 1998). Mathematics anxiety is experienced by many individuals and its effects depend on the degree of mathematics anxiety that one has. Within educational contexts in particular, a lot of work has been directed toward overcoming mathematics anxiety and several coping strategies
have been suggested to improve students' performance in mathematics (e.g. Arem, 2003; Chinn, 2005; Preis & Biggs, 2001). However, these are not discussed in detail in this thesis. In accord with Zeidner's (1991) finding that mathematics anxiety correlates with statistics anxiety, the next sections describe the nature of statistics anxiety and some possible ways or strategies that might help students to deal with anxiety in the learning of statistics.
2.3.2 Statistics anxiety
According to Zeidner (1991, p. 319), statistics anxiety can be defined as
… a particular form of performance anxiety characterized by extensive worry, intrusive thoughts, mental disorganization, tension, and psychological arousal. Statistics anxiety arises in people when exposed to statistics content, problems, instructional situations, or evaluative contexts, and is commonly claimed to debilitate performance in a wide variety of academic situations by interfering with the manipulation of statistics data and solution of statistics problems.
Onwuegbuzie (2004) postulated that statistics anxiety arises in people due to specific situations, where the symptoms of anxiety become apparent at certain times and in certain situations during the learning process and when applying statistics in a formal setting. Onwuegbuzie and Wilson (2003) further stated that the feeling of anxiety occurs in students when they have problems of any kind, at any level of knowledge, that deal with statistics. They claim statistics anxiety does not first occur in students when they enrol in statistics courses; rather, the phenomenon typically arises prior to enrolling in the course. Statistics anxiety is indeed not only due to a lack of training and skills but may also be caused by misperceptions of statistics and negative experiences in statistics classes (Pan & Tang, 2005) or even mathematics classes (Preis & Biggs, 2001). Statistics courses are presumed to be "killer subjects" by many students and among the most difficult or anxiety-inducing courses in a study program (Schacht & Stewart, 1990; Zeidner, 1991). Students consequently often delay taking the course until the end of their studies, which can lead to failure to complete their studies on time (Onwuegbuzie, 1997; Onwuegbuzie, 2004; Rodarte-Luna & Sherry, 2008). In addition, many researchers suggest that the level of difficulty of learning statistics is similar to that of learning a foreign language (Lalonde & Gardner, 1993; Lazar, 1990; Onwuegbuzie, 2000b). Enrolling in statistics courses frequently produces high levels of anxiety in many students (Onwuegbuzie & Wilson, 2003). There are three factors contributing to statistics anxiety: situational, dispositional and environmental factors (see Onwuegbuzie & Wilson, 2003 for a review). Similarly, Pan and Tang (2005) discussed these three factors in their review of the literature: (i) situational
factors, such as mathematics experience (Baloglu, 2003; Hong & Karstensson, 2002; Pan & Tang, 2004; Wilson, 1997), statistics experience (Sutarso, 1992), computer experience (Zimmer & Fuller, 1996) and research experience (Pan & Tang, 2004; Papanastasiou & Zembylas, 2008); (ii) dispositional factors, such as mathematics self-concept or self-esteem (Zeidner, 1991), scholastic competence, intellectual ability and creativity (Daley & Onwuegbuzie, 1997; Onwuegbuzie, 2000a), perfectionism (Onwuegbuzie & Daley, 1999; Walsh & Ugumba-Agwunobi, 2002), hope (Onwuegbuzie, 1998), and procrastination (Onwuegbuzie, 2004; Walsh & Ugumba-Agwunobi, 2002); and (iii) environmental factors, such as learning styles (Rodarte-Luna & Sherry, 2008; Wilson & Onwuegbuzie, 2001), age (Baloglu, 2003; Pan & Tang, 2004), gender (Baloglu, 2003; Bradley & Wygant, 1998; Hong & Karstensson, 2002; Rodarte-Luna & Sherry, 2008) and ethnicity (Bell, 1998; Onwuegbuzie, 1999). Pan and Tang (2005) highlighted some factors that may affect statistics anxiety. For instance, fear of failure, evaluation concerns, and perfectionism are among those that possibly contribute to statistics anxiety (Walsh & Ugumba-Agwunobi, 2002). Moreover, other factors such as minimal past mathematics preparation, late-in-career introduction to quantitative analysis, general anti-quantitative bias, lack of appreciation for the power of analytical models, and lack of mental imagery useful in thinking about quantitative concepts have been found relevant to social work students, who are often anxious about statistics (Forte, 1995).
2.3.3 Statistics anxiety versus mathematics anxiety
Among all the subjects in which students enrol in their study programs, mathematics and statistics appear to be the most highly correlated with student anxiety (Hembree, 1990; Onwuegbuzie, 2004; Rodarte-Luna & Sherry, 2008). Zeidner (1991) believed that the two forms of anxiety are related to each other, as statistics-anxious students are also found to have low ability in mathematics, to suffer from fear of negative evaluation, to dislike tests, and to have low self-esteem. Zeidner further argued that mathematics and statistics anxiety are conceptualized as a two-factor structure consisting of a test anxiety component and a content component. Others perceive statistics anxiety as multidimensional in concept (Cruise, Cash & Bolton, 1985; Onwuegbuzie, Daros & Ryan, 1997). Cruise et al. (1985) proposed six factors of statistics anxiety: worth of statistics, interpretation anxiety, test and class anxiety, computational self-concept, fear of asking for help, and fear of statistics teachers. Onwuegbuzie et al. (1997) further suggested four components of statistics anxiety; these are (i) instrument anxiety, such as computational self-concept and statistical computing, (ii) content anxiety, such as fear of statistical language, fear of application of statistics, perceived usefulness of statistics, and
recall, (iii) interpersonal anxiety such as fear of asking for help and fear of statistics teachers, and (iv) failure anxiety such as study-related anxiety, test anxiety and grade anxiety. Baloglu (1999) however disagreed that statistics anxiety and mathematics anxiety are the same construct. Baloglu contended that there are many ways that these two forms of anxiety differ (Cruise et al., 1985; Onwuegbuzie et al., 1997; Onwuegbuzie & Wilson, 2003), whereas Widmer and Chavez (1986) claim that statistics anxiety is a specific form of mathematics anxiety. Consequently, there are many similarities but also differences between these two anxieties in terms of their nature, antecedents, effects, and treatments (Baloglu, 1999).
2.4 Conclusion
The literature review in this chapter focused on the role of technology in improving
statistics education which has been widely documented in the current research papers and publications through conferences, scholarly articles and journals worldwide. Specifically in this study the researcher is interested in seeking ways to support student learning by integrating technology into the teaching and learning of statistics within a blended learning environment (mainly via online or e-learning system) in the pursuit of improving learning outcomes. In the context of teaching and learning of statistics, it was necessary for educators to identify the desired learning outcomes in statistical learning such as statistical literacy, statistical reasoning, and statistical thinking. Gould (2010) hence contended that a future student should be modelled as a “Citizen Statistician”.
Citizens can act as statisticians with or without formal training by accessing data and performing analyses, however crude, and reaching conclusions, however flawed. Educators must not merely help citizens interpret statistics, we must help them analyse data. A good Citizen Statistician should know data when she sees it (and she should see it everywhere). She should recognize when she is viewing actual data that can be brought to her computer and analysed to answer questions she has about the world, and when she is viewing others' summaries of data. In the latter case, she should think critically and see the summary in the proper substantive context, but also in the context of other data sets or databases. (Gould, 2010, p. 308)
Moore (1997) emphasized the role of technology in statistical learning: "technology greatly influences both what we teach and how we teach". Hence Moore contended that statistics instruction nowadays should be built on strong synergies between content, pedagogy, and technology. Reflecting upon recent implementations of technology in the statistics classroom, there are several types of technology used in statistics instruction which can be categorised
according to their functions and purposes, though there appears to be much overlap in their capabilities across categories. These technological tools include statistical software packages (for example, SPSS, JMP, SAS), educational software (for example, TinkerPlots, Fathom), spreadsheets (for example, Excel), applets or stand-alone applications, graphing calculators, multimedia materials (for example, ActivStats, CAST), and data and materials repositories (for example, CAUSE, the AMBER Data Repository). Anxiety may appear as one of the major factors that interrupt the student learning process, particularly in mathematics and statistics; hence it may have some effect on student learning outcomes in these two subjects. Mathematics anxiety frequently relates to a person who is anxious in dealing with any kind of mathematics-related task, specifically tasks that involve numbers. Statistics anxiety arises in a person when they are exposed to statistics content, problems, or the application of statistics in a formal setting; it can also be seen as a form of mathematics anxiety. The learning of statistics has historically been associated with students having difficulties in learning it and with poor academic outcomes. For instance, statistics is seen as one of the most anxiety-provoking subjects (Baharun & Porter, 2009), is perceived to be as difficult as learning a foreign language (Onwuegbuzie, 2003), and is perhaps a reason contributing to failure to complete study on time, as students often fail in their first attempt (Rodarte-Luna & Sherry, 2008). As such, statistics is one subject area in need of improved teaching approaches, that is, ways to teach this subject effectively so that students can learn and understand it better; this was the main aim of this study. The next chapter describes and discusses learning designs within the context of this study in terms of their relevance, definitions, types of learning design models, the selection of a learning design framework used for this study, and other forms of learning designs available in e-learning systems. While examining a suitable learning design framework to be adapted in this study, Chapter 3 focuses on the pedagogical aspects and relevant learning theories for delivering and implementing the resources and materials effectively within the e-learning environment.
CHAPTER 3 LEARNING DESIGNS
3.0 Introduction
The use of Information and Communication Technology (ICT) in the higher education
sector is growing rapidly and has become an important part of the education agenda around the world. The introduction of new technologies to support the processes of teaching and learning over the last decade has driven major change and transformed the experience of both teachers and students (Bennett, Agostinho, Lockyer, Harper & Lukasiak, 2006; Keengwe, Onchwari & Onchwari, 2009; Littlejohn, 2002; Matheos et al., 2005; Sharpe & Benfield, 2005). Research has shown that the integration of technology in teaching practices can enhance student learning experiences and increase their learning outcomes (Collis, 2002; Singh & Reed, 2001).
However, referring to the problem as stated in Oliver (2006, p. 1),
[d]espite substantial investments in personnel and infrastructure and a perceived need, the use of ICT to promote and enhance learning opportunities in higher education continue to return only limited outcomes.
There are several factors which contribute to the issue, such as low levels of educators' uptake of technology for teaching, lack of suitable pedagogy when technology is used, lack of relevant models of good practice with technology in teaching, and problems with supports, guidance and dissemination of models for teachers (New e-learning ideas, Monday 6 April 2009, retrieved from http://new-elearning-ideas.blogspot.com/2009/04/topic-learning-design-for-mathematics.html). In other words, the problem to address in this study is how educators can effectively implement appropriate learning designs with the use of technology to support students' learning. Consequently, Koper and Tattersall (2005, p. 3) pose several questions about the need for learning design:
How can we help [students] to learn in an effective, efficient, attractive and accessible way?
What support do [students] need in order to learn?
How can we assess and communicate the results of a learning process?
How to improve the overall quality of students' experiences using technologies (Alexander & McKenzie, 1998) remains an issue, particularly for online learning systems, although many claims are made that they can enhance the quality of learning. Alexander (1999, p. 181) states that the provision of particular learning supports, specifically using the technologies, 'could not, in the absence of a learning framework for their use, be expected to account for improved or enhanced learning opportunities'. Thus, the researcher aimed to identify potential learning designs by examining the effectiveness of designs from the perspective of their impact on students' learning.
3.1 Why learning designs?
Technology is becoming mainstream in higher education while teachers are faced with an
ongoing challenge to review their teaching practices, which involves not only the process of designing and presenting materials using technology but also utilizing knowledge of how students experience learning through the technologies (Boud & Prosser, 2002). This approach to teaching adopts a student-centred rather than a teacher-centred perspective (Keengwe et al., 2009; Kember, 2009). However, many teachers see themselves primarily as experts in their discipline and hold content-centred approaches to teaching (Kember, 2009); thus technology may not be used effectively to improve students' learning in higher education (Herrington, Reeves & Oliver, 2005). It was evident in Kember's (2009) study that student-centred approaches positively impacted on the quality of teaching and learning environments in universities. Fundamentally, in a student-centred approach the central focus is on understanding the students' experiences of learning using technologies. As viewed by Boud and Prosser (2002, p. 237), 'learning arises from what students experience, not what teachers do or technology does.' Furthermore, Keengwe et al. (2009) contended that teachers should provide intellectually powerful, student-centred, and technology-rich environments for students without undermining sound pedagogical practices. Keengwe et al. (2009) proposed a model that could improve the depth and scope of student learning with a focus on three pedagogical areas: understanding the unique identity of each student, fostering active learning activities, and incorporating technology into instruction. Essentially,
[w]hile technology can play an important role in restructuring teaching and learning practices, teachers must take a leading role in designing appropriate learning environments that effectively incorporate technology to help their students learn well with technology… [T]echnology, as tools, could empower students with thinking and learning skills, and help students interact with complex materials… [However], technology by itself cannot change the nature of classroom instruction
unless teachers are able to evaluate and integrate the use of that technology into the curriculum… Even so, teachers who practice sound pedagogical practices and value technology in learning can inspire students to learn well with technology tools. (Keengwe et al., 2009, p. 12-13) According to Bennett et al. (2007) teacher-centred perspectives use conventional information delivery or content-centred approaches. There appears to be a difficulty in encouraging teachers to shift from content-centred approaches of teaching towards more student-centred approaches even though studies (e.g. Honkimaki, Tynjala & Valkonen, 2004; Kember & McNaught, 2007) have shown student-centred learning is effective in promoting deep learning and academic engagement. As such, university educators need to understand what constitutes “good university teaching” (Biggs, 2003; Prosser & Trigwell, 1999; Ramsden, 1992) that will foster high quality learning in higher education. Kember (2009) examined models of good practice from interviews with 18 award-winning teachers in Australia and Hong Kong at the university level and found that an active approach to learning or student-centred approach was an important component of good teaching in higher education. Duffy and Cunningham (1996) contended that the most effective learning environments which promote meaningful learning in higher education are those based on contemporary learning theories that support knowledge construction through student-centred learning settings. The relevant principle of learning was noted by Kember and McNaught (2007, p. 46) that „[m]eaningful learning is most likely to occur when students are actively engaged with a variety of learning tasks.‟ In contrast with the conventional teaching approaches, Cunningham et al. (1998, p. 25) elaborated
[t]he growing acceptance of new educational philosophies and practices, such as constructivism and action learning during the 1980s, have challenged the valence of the didactic lecture/tutorial/textbook model common in higher education, promoted the notions of the academic role as a 'guide on the side' rather than 'the sage on stage', and conceived of the student role as one of the independent self-directed learner.
Consequently, based on student-centred pedagogy, the researcher highlights the importance of learning design in creating high quality learning experiences, particularly within an ICT-based learning environment which encourages students seeking to understand to construct their own knowledge and through this attain successful learning outcomes (Alexander & McKenzie, 1998).
3.2 Designing online learning experiences
Over the past twenty years, online learning or web-based learning has played an
important role in the higher education sector around the world as a medium of teaching and learning using technology. Through online learning, students are able to manage time and space (Cole, 2000), which provides the opportunity for learning without the constraints of time zones, location, and distance (Ally, 2008; Suanpang, Petocz & Kalceff, 2004). There are numerous definitions of online learning, making it difficult to find a generic meaning of the term. Horton (2001) explained online learning as using a computer with an internet connection and other forms of information technology to create learning experiences for students in a given course or subject. In general, online learning provides space for course instructors to deliver learning materials to students using a computer (Carliner, 1999), and it is also known as an innovative approach for teaching or delivery using the internet, particularly to distance students (Khan, 1997). Ally (2008) described the term as a situation where the student is at a distance from the instructor, and in which the student uses a computer (through the Web) to access learning materials, to interact with the instructor and peers, and to get learning support provided by the instructor. Therefore, the researcher uses the definition of online learning as provided by Ally (2008, p. 7),
… [t]he use of the Internet to access learning materials; to interact with the content, instructor, and other [student]s; and to obtain support during the learning process, in order to acquire knowledge, to construct personal meaning, and to grow from the learning experience.
It is unclear to what extent online learning or the use of a particular technology could improve learning (Beynon, 2007), though it is known that it can provide efficient and easy access to learning materials. Suanpang et al. (2004) stated that several studies have shown there is no difference in test scores between online learning and traditional classroom students. Further, a review of research reports and papers from 1928-1998 (in Russell's (1999) book) provides evidence that there is no significant difference in student grades, satisfaction, or effectiveness between training conducted in classrooms and online learning (Suanpang et al., 2004). In contrast, there were studies that found students in this delivery mode had significantly better performance in examinations (Daugherty & Funke, 1998), a lower percentage of course or subject withdrawals, and a higher percentage of success (Hartman, Dziuban & Moska, 2000) than traditional classroom students. The issue to address here is not which approach is better than the other but rather how to improve these approaches, or how to do it right. According to Ally (2008), since online
learning has many promises, the instructor should invest a great deal of commitment and resources. Thus, doing it right means that online learning materials must be designed to be student-centred and delivered appropriately, with adequate support provided. Based upon the students' experience as reported through surveys undertaken in this study, the researcher and the lecturer(s) aimed at finding effective ways to design the delivery of the subjects' resources and materials within the online learning environment so as to support students' learning. Ring and Mathieux (2002) and Oliver, Herrington, Herrington, and Reeves (2007) have further suggested that effective online learning should have high authenticity, high interactivity, and high collaboration.
3.3 The definition
It is a current challenge for university teachers to design effective online learning
experiences, or effective educational strategies, which Agostinho (2006) referred to as learning designs. Lockyer and Bennett (2006) describe learning designs as activities in designing learning experiences, which include planning schedules, preparing course outlines and materials, determining assessment tasks, and anticipating students' needs. They may also involve the activities of modifying a previous course, such as updating materials or implementing new learning strategies (Bennett et al., 2007). It is important for university educators to identify what elements of learning designs would need to be used and altered to enhance the possibility of the activity inducing a high quality learning experience on the part of the students engaging with it (Boud & Prosser, 2002). There is currently no standard form of learning design used by educators in describing and presenting their teaching practices and ideas. According to Agostinho (2009), the learning design concept is relatively new within the literature and there is no precise meaning or definition of learning design; thus there are no consistent ways of representing learning designs to date. Further, in describing the notion of learning designs, Falconer and Littlejohn (2006, p. 1) stated in their recent study, 'researchers have yet to find ways to describe practice models so that practitioners in mainstream education can understand and apply them.' Consequently, research has examined the generic learning designs in an existing repository in terms of their ability to be effectively shared and reused (Agostinho et al., 2009). The findings revealed that, from a repository of 32, six learning designs were identified as effective learning design descriptions based on three criteria: (1) a clear summary and description of the pedagogy embedded in the learning design, (2) being well supported with evaluation data or some form of quality rating, and (3) offering some advice and guidance about how it can be reused. Lack of detail about the explicit students' engagements and activities, limited information about the linkages between supports, resources and tasks, lack of information about
access to supports and tasks and a superficial description of pedagogy used were the shortcomings of the other learning design descriptions. These six learning design representations are as follows:
i. E2ML or Educational Environment Modelling Language (Botturi, 2006), which represents a learning design as a structured set of learning activities aligned to specific learning outcomes.
ii. IMS Learning Design, a formal computer language or an XML file (Burgos & Griffiths, 2005), which represents a learning design as a sequence of learning activities in the form of acts in a play.
iii. LAMS or Learning Activity Management System (from http://www.lamsinternational.com/), which represents a learning design as a sequence of learning activities in the form of the online tools used to run each activity.
iv. LDVS or Learning Design Visual Sequence (Agostinho, Harper, Oliver, Hedberg & Wills, 2008), which represents a learning design as an illustration of tasks, resources, and supports with accompanying textual documentation.
v. LDLite, also known as "blended" learning design, which aims to assist teachers to integrate both face-to-face and online learning activities by documenting a learning design in a tabular form consisting of five major elements of IMS Learning Design: tutor roles, student roles, content resources, service resources and assessment/feedback (Oliver & Littlejohn, 2006); and
vi. Patterns, a pattern repository compiled by the E-LEN project (from http://www2.tisip.no/E-LEN/), which represents a learning design as a solution to a design problem in the form of a textual description.
(Source from Agostinho, 2009)
Along with documentation such as textual descriptions, flow charts, and computer-readable language, and their purposes, these representations define a learning design. For instance, Koper and Tattersall (2005, p. 3-4) define the term learning design as 'the application of learning knowledge when developing a concrete unit of learning, e.g. a course, a lesson, a curriculum, a learning unit'. Furthermore, Koper and Tattersall (p. 4) state that
… the quality of a unit of learning depends largely on the quality of the learning design, and, moreover, that every learning practice (e.g. a course) has an underlying learning design that is more generic than the practice itself. This is similar to the belief that every building has an underlying architecture which is more generic than the building itself. The design can be re-used over and over again at different times and places for more or less the same course (or building).
This does not necessarily mean that the design is made explicit before it is used. That may well be the case when it comes to the architecture of buildings, but it is common practice in education.
From another perspective, Conole and Fill (2005) describe a learning design as the application of a pedagogical model for a particular learning objective, target group and a particular context or knowledge domain. It explains the teaching and learning process whereby the activities performed by the teachers and students are intended to achieve the required learning goals. Consequently, there is a need to improve the current ways of representing and sharing educational designs or learning designs, combining both pedagogical theory and the practical application of that theory (Goodyear, 2005). As with "music and dance", Waters and Gibbons (2004) highlight that an educational design system should provide a common understanding among educators that will foster better communication of ideas and thus in turn could serve as inspiration to improve the quality of teaching and learning. Agostinho (2009) explicitly described the term learning design as involving two mechanisms: (1) a process of designing the student learning experiences and (2) a product, the outcome or documentation of the design process in (1). The first conforms to Boud and Prosser's (2002, p. 238) definition of learning design:
…a variety of structures using new technologies that support student learning experiences. Learning designs may be at the level of a whole subject, subject component or learning resources.
and Bennett et al. (2007) referred to learning designs as a variety of ways of designing student learning experiences. The researcher favoured the idea of learning design as in Oliver (2006, p. 1),
[t]he term learning design is commonly used today to describe the representation(s) of a learning process and its outcomes in ways that might enable it to be replicated and reused. Learning designs are very much like lesson plans, involving descriptions of learner activities, the resources being used and the supports provided by the teacher.
As the study progressed, it became essential to examine the process of designing student learning experiences within ICT-mediated learning environments through case studies. These examine both the student perspective and the impact of the designs upon student learning. While learning designs as discussed by Agostinho (2009) were intended to communicate with staff (educators), the researcher highlights the need for educators to understand the impact of learning designs within online learning environments. To determine students' needs and current level of expertise, to assign appropriate learning materials for them to select from, and to achieve their desired
learning outcomes, educators need to know how students experience the teaching and learning environment. In other words, the implementation of appropriate learning designs within online learning systems could serve as learning support to improve or enhance the student learning experience (Alexander, 1999).
3.4 Educational theories of learning
The purpose of integrating technology into teaching and learning, and indeed into any
instructional system, is to promote and enhance student learning. Therefore, the choice and use of technology, particularly in an online learning environment, has to rely entirely on educators' beliefs and assumptions about the nature of knowledge or epistemology, how the subject discipline should be taught, and how students learn (Bates & Poole, 2003). Furthermore, Ally (2008) postulated that the design and development of effective online learning environments should be based on substantiated and sound learning theories, although Knowlton (2000, p. 5) observed that '[i]t is [a] daunting task to theoretically frame the pedagogy of the online classroom'. What is 'learning'? According to Schunk (1991) there is inconsistency in the definitions of "learning" adopted by theorists, researchers and educators. However, these definitions have involved, either implicitly or explicitly, three common criteria for defining learning, as Schunk (1991, p. 2) suggests: '[l]earning is an enduring change in behaviour, or in the capacity to behave in a given fashion, which results from practice or other forms of experience'. Shuell (1986) provided the criteria used to define learning, which were: (a) a change in a person's behaviour or ability or capability to do something different from what he (she) could at an earlier point in time; (b) a condition that this change occurs through some sort of practice or other form of experience (for example, observing others); and (c) a condition that the change is enduring over time. The latter two criteria exclude certain types of behavioural change, for example human maturation and temporary changes due to drugs, alcohol, and illness, which are not classified as learned. From a philosophical point of view, learning can be discussed under the heading of epistemology, that is, the study of the origin, nature, limits, and methods of knowledge, which centres on the questions of how we know what we know and what makes people believe that something is "true" (Bates & Poole, 2003; Schunk, 1991). Bates and Poole (2003, p. 26) state that '[e]pistemology is a branch of philosophy concerned with the nature and justification of knowledge'. Therefore, the researcher contended that educators must tacitly or explicitly understand the principles of learning and how students learn prior to designing and developing subject materials, particularly in an online learning environment.
There are many different epistemologies or, more specifically, schools of thought on learning; however, it appears impossible to refer exclusively to particular schools or learning theories in designing subjects within an online learning environment. In this study, the researcher worked in an environment where the educational practice reflected these theories. Consequently, the theories are used as a way to describe and explain how the subjects were designed within the online learning environments provided by the lecturer(s) in connection with this thesis. In the following sections the relevant learning theories are discussed, as is the recent connectivist theory, which is applicable to the emerging digital age of distributed and networked learning.
Educators should be able to adapt existing learning theories for the digital age, while at the same time using the principles of connectivism to guide the development of effective learning materials. What is needed is not a new standalone theory for the digital age, but [rather one that] integrates the different theories to guide the design of online learning materials. (Ally, 2008, p. 18)
3.4.1 Behaviourism
What is behaviourism? According to Watson's renowned statement (1926, p. 10),
[g]ive me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select -- doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.
The behaviourist school of thought emerged in the middle of the last century from an attempt to model the study of human behaviour on the methods of the physical sciences. It was influenced by Thorndike (1913), Pavlov (1927) and Skinner (1974), with claims that learning is a change in observable behavioural responses caused by external stimuli in the environment, such as 'the contraction of an iris in the eye when stimulated by bright light' (Bates & Poole, 2003, p. 31). Furthermore, Thorndike's "Law of Effect" suggested that certain stimuli and responses become disconnected from, or alternatively associated with, each other based on whether responses were ignored, punished, or rewarded (Schunk, 1991). Thorndike therefore concluded that behaviour can be modified or controlled by appropriately reinforcing random behaviour (trial and error, or reward and punishment) as it occurs. Behaviourists believe that whether a student has learned something can be judged by observing changes in behavioural responses, not by what goes on inside the student's mind. In contrast, other educators argued that not all learning can be observed and that measured changes in behaviour are not the only indicators of learning; rather there
is more to learning than changes in behaviour demonstrated by the student. Consequently, as is evident today, there has been a massive shift away from behaviourist approaches to other approaches to learning in higher education (Bates & Poole, 2003). Nevertheless, behaviourist strategies appear relevant for teaching facts, particularly when designing resources and support materials within the online learning environment. The implications for designing online learning resources and materials based on the principles that evolved from the behaviourist school of thought are listed below.
i. Students should be informed of the explicit outcomes of the learning so they can set expectations and judge for themselves whether or not they have achieved the outcomes of the lesson via the online learning system.
ii. Students should be assessed to determine whether or not they have achieved the learning outcomes. Online testing or other forms of testing and assessment should be integrated into the learning sequence to examine each individual student's achievement level and provide feedback.
iii. The learning resources and support materials should be sequenced appropriately to promote learning. The sequencing could take the form of simple to complex, known to unknown, and knowledge to application.
iv. Students should be provided with feedback from time to time so that they can monitor how they are doing and take corrective action if required. (Adapted from Ally, 2008, p. 20-21)
In the context of this study, the design of subjects in the three case studies appears to align with this theory. For instance, (i) learning objectives were spelled out for each week's laboratory tasks in the statistics subjects, (ii) feedback in the form of worked solutions was provided to students in the week following laboratory tasks, and five laboratory tests/assignments were returned with feedback within a week, (iii) all learning resources were provided to students on either a weekly or fortnightly basis in the e-learning sites, and (iv) a test and retest assessment system with feedback was implemented throughout the sessions in the statistics subjects.
3.4.2 Cognitivism
Unlike behaviourists, cognitivists are concerned with various mental activities (internal processes that involve memory, perception, thinking, reflection, abstraction and motivation) as the basis for learning, which relate to human information processing and problem solving (Shuell, 1986). Cognitivists emphasise mental processes and knowledge structures that can be deduced from behavioural indices. These processes and structures are responsible for various
types of human behaviour that are essential for human learning. Cognitivists believe that learning is an active, constructive, and goal-oriented process that relies on the mental activities of the student (Shuell, 1986). As opposed to behaviourism, Fontana (1981, p. 148) summarized cognitivism as follows,
[t]he cognitive approach … holds that if we are to understand learning we cannot confine ourselves to observable behaviour, but must also concern ourselves with the [student]'s ability mentally to reorganize his psychological field (i.e., his inner world of concepts, memories, etc.) in response to experience. This latter approach therefore lays stress not only on the environment, but upon the way in which the individual interprets and tries to make sense of the environment. It sees the individual not as the somewhat mechanical product of his environment, but as an active agent in the learning process, deliberately trying to process and categorize the stream of information fed into him by the external world.
Cognitivists view learning from an information processing perspective which involves memory of several kinds. In this model, the student uses three types of memory during learning; these are: sensory memory, working memory (short-term memory), and long-term memory (see Figure 3.1). Sensory memory works as a buffer for stimuli received through the senses. Information is passed from the sensory memory into working memory by attention, thus filtering the stimuli to only those which are of interest at a particular time. Working memory works as a scratch-pad for temporary storage of information being processed. Working memory has limited capacity and a duration of approximately 20 seconds (Kalat, 2007). Therefore information should be organized or chunked into pieces (in the form of information maps, for example, concept mapping) of appropriate size to facilitate processing (Ally, 2008). After the information is processed in working memory, it is stored in long-term memory. Long-term memory is intended for the storage of information over a long period of time. Information from working memory is transferred to long-term memory after a few seconds, and the amount of information transferred is determined by the quality and depth of processing in working memory. The information transferred is either assimilated or accommodated in long-term memory, that is, it is changed to fit into existing cognitive structures, or those structures are modified to incorporate the new information.
Figure 3.1 Types of memory based on information processing system
Cognitivism has influenced learning theory and research in several significant ways, with its focus on abstraction, generalization, active learning, and creative thinking. Thus it seems to fit much better in higher education, particularly within the online learning environment (Bates & Poole, 2003), as is the case in the context of this study. For instance, all subjects have been designed with the intention of fostering in all students the development of a set of qualities defined by the University of Wollongong. These graduate qualities appear to align with this theory, as presented in Table 3.1.
Table 3.1 Graduate qualities addressed in STAT131
Not all aspects of the graduate qualities are addressed in every subject. STAT131 seeks to develop aspects of the following qualities.

Graduate Quality: Informed. Have a sound knowledge of an area of study or profession and understand its current issues, locally and internationally. Know how to apply this knowledge. Understand how an area of study has developed and how it relates to other areas.
Learning to learn objectives:
1. Have sound statistical knowledge at a level appropriate to the degree
2. Understand the applications of statistics to other fields

Graduate Quality: Independent learners. Engage with new ideas and ways of thinking and critically analyse issues. Seek to extend knowledge through ongoing research, enquiry and reflection. Find and evaluate information, using a variety of sources and technologies. Acknowledge the work and ideas of others.
Learning to learn objectives:
1. Have skills in accessing, understanding, summarising, extending and generalising technical information and in applying methods to new situations
2. Have the ability to extend their skills in using software and learning to use new software

Graduate Quality: Problem solvers. Take on challenges and opportunities. Apply creative, logical and critical thinking skills to respond effectively. Make and implement decisions. Be flexible, thorough, innovative and aim for high standards.
Learning to learn objectives:
1. Have the ability to generate statistical approaches for solving problems
2. Are able to identify relevant statistical knowledge and apply it appropriately, referring to relevant resources as appropriate
3. Are able to interpret data and results and draw valid conclusions
4. Are able to use statistical software effectively and appropriately

Graduate Quality: Responsible. Understand how decisions can affect others and make ethically informed choices. Appreciate and respect diversity. Act with integrity as part of local, national, global and professional communities.
Learning to learn objectives:
1. Are aware of the need for statistical accuracy and the pitfalls associated with the misuse of statistical methods
2. Are aware of and able to develop arguments about ethical and privacy issues in the design of statistical studies, in their analysis and in writing reports about those studies
3. Understand conventions for the referencing, citation and attribution of the work of others
4. Are able to argue why their work is statistically valid and important in the broader community

Source: STAT131 March/Autumn 2010 Laboratory Manual
3.4.3 Constructivism
Behaviourists and cognitivists believed that there exists an objective and reliable set of facts, principles, and theories that either have been or will be discovered and delineated over time; thus knowledge is viewed as external to the student and the learning process as the act of internalizing knowledge. In contrast, constructivists deem that knowledge is essentially subjective in nature, constructed from an individual's perceptions and mutually agreed upon conventions (Bates & Poole, 2003). In this perspective of learning, new knowledge is constructed internally by individuals and tested through external interaction with the world, rather than simply acquired via memorization or through transmission from those who know to those who do not. Constructivism is a belief that students are not "empty vessels" to be filled with knowledge. Meaning is constructed and driven internally by students through the process of assimilation (integrating new knowledge into existing knowledge structures), relating it to students' existing knowledge, and cognitively processing it. Hence the notion of constructivism appears to be aligned with a student-centred learning approach which promotes and fosters life-long learning for students (Ramsden, 2003). In accord with Bates and Poole (2003), the researcher reflected a constructivist epistemological stance, believing that educators need to place a strong emphasis on students developing personal meaning through reflection, analysis, and the construction of knowledge through conscious mental processing. According to social constructivists this process works effectively through discussion and social interaction, which allows students to test and challenge their own understandings with those of others. Many cognitivists such as Vygotsky, Bruner and Ausubel have directed much of their educational work towards a more "social constructivist" perspective (Morris, 2008). Consequently, Prosser and Trigwell (1999, p. 13) defined this perspective from a Vygotskian viewpoint,
… knowledge is thought to develop internally, but in a process driven by social interaction with the outside world…From this perspective, the context, and particularly the social context, is of prime importance. It is the context which brings about knowledge development within individual students.
Bates and Poole (2003) have argued the social context of learning is critical for many educators, as thoughts are tested not just with the teacher but also with fellow students, friends, and colleagues. Furthermore, they postulated that knowledge is about "values" rather than just content, and values are primarily developed through social processes or institutions that are socially constructed (for example, schools, universities).
Although constructivism has been adopted by many educators and researchers in higher education in recent years (Aminifar, 2007; Morris, 2008) the researcher also believes that learning is perceived as essentially a social process which requires communication among student, teacher, peers and others as collaborators (known as co-constructivism). Based upon Vygotsky‟s perspective, the teacher in co-constructivism is viewed as a co-constructivist collaborating with students through mediation (Morris, 2008). For example, in the introductory statistics lecture (both in GHMD983 and STAT131) students were introduced to the notion of different perspectives of statistics. They participated in a group task to generate ideas (ethics, questions, sampling, measurement design) leading to a statistical process, analysis and interpretation, all as they relate to variation. Social orientation has also been included in the subjects through class exercises involving teamwork and peer review on others‟ work via the elearning site. In the context of this study, technology particularly the e-learning system has been used to extend this use of the social process, with technology facilitating the process and thereby improving learning. For example, prompts were also given to initiate student interaction by the lecturer in the e-learning forum (see Figure 3.2).
Figure 3.2 Sample of prompts given to initiate student interaction in GHMD983 e-learning site
Furthermore, the researcher views learning as knowledge that is brought in from the outside and constructed on the inside of the individual. From this perspective, there is no separation between the student and the world. That is,
… [each and] "every individual exists in a continually changing world of experience in which he is the centre." The external world is interpreted within the context of that private world. (Bates and Poole, 2003, p. 34)
In the context of designing resources and support materials within the online learning environment, constructivist strategies suggest ways to foster high-level thinking in students, promoting personal meaning. Jonassen, Howland, Moore, and Marra (2003) suggest the following factors are required for constructivist learning, particularly when designing resources and support materials within the online environment.
Learning should be an active process which promotes long-term retention of information and motivate students toward further learning, that is, students observing and manipulating the environment.
Students should construct their own knowledge, rather than accepting that provided by the instructor, namely spoon-fed learning, that is, students create meaning from experience.
Students should be encouraged to collaborate and co-operate with other students so that each individual could benefit from one another‟s learning strengths.
Students should be given control of the learning process that is, allowing them to make decisions about learning goals directed by the instructor.
Students should be encouraged to reflect on and process the information in a relevant and meaningful manner.
Learning should be made meaningful by keeping the learning in context, that is, learning should be authentic.
Learning should be interactive to encourage students reflecting upon their learning processes and experiences, that is, promoting higher-level learning and social presence, and developing personal meaning. (Adapted from Ally, 2008, p. 30-32)
The subjects involved in the case studies also revealed elements of social constructivism. For example, in the introductory statistics subjects (both GHMD983 and STAT131) students were required to work in pairs to complete their assignment and present their work in class so that others could benefit from one another's ideas and information. This assessment was included in the subject to provide students with an opportunity to work collaboratively with peers using a real-world example and to gain experience in drawing meaning from data (see Table 3.2).
Table 3.2 The design of group project assignment in STAT131
Group project and presentation in class
Date: In lab class, Week 13
Value: 8%
Format: The report will take the form of a five (5) minute presentation per group in your laboratory class of Week 13. All members of your group need to speak OR be included in the video clip or speak to the resource. You must have visual aids to support your presentation and you must acknowledge the source of the information you present. In all other aspects, your presentation can be made in any manner that you wish (e.g. video, music, drama, poetry, posters, PowerPoint, iPhone movie). A computer and data projector will be available for you to use in your presentation.
Marking: More detailed assessment criteria will be posted to e-learning in Week 9. The remaining marks are awarded for the quality of the presentation.
Task: Collect an appropriate set of data to undertake an independent t-test, paired t-test or test for differences in proportions. Ideally the data will be relevant to the chosen discipline of at least one of the pair of students. Your task will be to create and present a learning resource suitable for your peers. Students using Word and PowerPoint are advised to use Equation Editor for writing formulae. To do this, select the Insert Menu; select Objects and Microsoft Equation Editor to insert the required mathematical formulae.
Presentation resources: A guide to making seminar presentations can be found at http://www.math.uow.edu.au/current/talk.shtml. For PowerPoint presentations and movies created in PowerPoint, one general rule is no more than seven lines per page and no more than seven words per line. Use no less than 24 point.
Source: STAT131 March/Autumn 2010 Laboratory Manual
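The group task above asks students to carry out an independent t-test, a paired t-test, or a test for differences in proportions on data they collect themselves. As a purely illustrative aside (the subjects in the case studies used SPSS, and the data below are hypothetical), the following Python sketch shows the shape of the analysis a pair of students might run for the independent-samples case.

```python
# Illustrative sketch only: hypothetical data, not part of the STAT131 materials.
# The subjects in the case studies used SPSS; Python/SciPy is used here purely
# to show the shape of an independent-samples t-test analysis.
import numpy as np
from scipy import stats

# Hypothetical reaction-time data (seconds) for two independent groups
group_a = np.array([1.92, 2.10, 1.75, 2.34, 2.01, 1.88, 2.22, 1.95])
group_b = np.array([2.41, 2.58, 2.30, 2.66, 2.45, 2.52, 2.38, 2.60])

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

print(f"Group A mean: {group_a.mean():.2f}, Group B mean: {group_b.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) would suggest a difference in group means.
```

In the student presentations themselves, the equivalent output would typically come from SPSS; the sketch is only meant to make the statistical step of the task concrete.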
3.4.4 Connectivism
Connectivism, proposed by Siemens (2004a) and Downes (2006), is a recent theoretical framework for describing learning in the digital age and guiding the development of learning materials for the networked world. Behaviourism, cognitivism, and constructivism were developed at a time when learning was not greatly affected by technology. Due to the exponential growth and complexity of information available on the internet, Siemens (2006) asserted that a new learning theory, connectivism, was required for the emerging age of distributed and networked learning. There are new possibilities for people to communicate on global networks, and new abilities to aggregate different information streams. Unlike other learning theories, which generally hold that knowledge occurs inside a person, connectivists contend that knowledge does not only reside in the mind of an individual but also resides outside the individual. That is, knowledge is stored and manipulated by technology (Siemens, 2004b). Accordingly, learning is not under the control of the students, as others in the network innovate continually, changing information and procedures in a given field (Ally, 2008). As a result, students are required to learn new and/or current information, but must also be able to unlearn past information, as "what we learn today may not be applicable tomorrow". Connectivism has been debated and criticized within the context of existing theories by other educational practitioners such as Kerr (2007), Verhagen (2006) and Miller (1993). However, this new connectivist learning theory draws its strength through the use of online
learning environments or e-learning systems for teaching and learning (Kop & Hill, 2008). Furthermore, as Kop and Hill (2008, p. 10) comment
Downes and Siemens do not suggest that connectivism is limited to the online environment. The online environment is one application that has been important for the development of connectivism, but the theory applies to a larger learning environment, and helps to inform how we understand our relatedness to the world, and consequently how we learn and understand from it. Networks are not just comprised of digitally enabled communications media, nor are they exclusively based in neurological brain-based mechanisms. Siemens (2006) similarly defined learning as a network phenomenon aided by socialisation and technology. In the context of this study, Siemens (2004a) proposed relevant guidelines for designing online resources and materials for student learning based on connectivist theory. These guidelines were elaborated as follows:
i. Due to the explosion of information nowadays, students should be allowed to explore and research current information. Hence, appropriate use of the internet is an ideal learning strategy in a networked world.
ii. Some information and procedures become obsolete because of changes in the field and innovation; students should therefore be able to unlearn old information and mental models and learn current information and mental models.
iii. The rapid increase of information available from a variety of sources means that some information is not as important or genuine as other information. As a result, the student should be able to separate the most important information from the least important information.
iv. Students should have the ability to recognize the knowledge which is no longer valid so that they can acquire the new knowledge for a discipline. This requires students to keep up-to-date in the field and be active participants in the network of learning.
v. Students should be allowed to connect with others around the world to examine others' opinions and to share their thinking with the world. For instance, online learning promises to help students function in a networked world where they can learn at any time and access from anywhere.
vi. Learning should be delivered in a multi-channel system where different communication technologies are used to deliver the resources and materials to facilitate optimal learning.
vii. Online teaching strategies should give students the opportunity to research and locate new information in a discipline so that they can keep up-to-date in the field. In addition to using the internet to deliver flexibility, instruction must be designed for experiential and authentic learning.
viii. Students should be networked (for example, email, web conference, blog, social network) with other students, teachers and experts from around the world to make sure that they are continually learning and updating their knowledge.
ix. Due to innovation and our increasing use of technology, learning is becoming more multidisciplinary. Thus, students should be exposed to different fields so that they can see the connections between the information in the fields. (Adapted from Ally, 2008, p. 34-35)
In the context of this study, connectivism appears relevant to the delivery of the subjects in the case studies particularly within the e-learning system. For instance, students in mathematics subjects were encouraged to access current articles and reports related to their discipline of study via the e-learning site (see Figure 3.3).
Figure 3.3 Articles and reports accessible via the MATH131/132 e-learning site
3.4.5 Experiential learning
As discussed in earlier sections, theories of learning are generally developed in isolation from each other and hence there is no overall consistency in their applicability to our educational practices. In contrast, the great strength of experiential learning theory is that it
„provides an underpinning philosophy that acts as a thread joining many of the learning theories together in a more unified whole‟ (Beard & Wilson, 2006, p. 16). What is experiential learning? This learning theory highlights the connection between experience and learning as a solid one where the two are so closely intertwined as to be almost inseparable. The link between the two terms has been described synonymously as “learning” by a number of educational practitioners. For instance, Wilson (2005, p. 5) defined learning as „a relatively permanent change in knowledge, attitude or behaviour occurring as a result of formal education or training, or as a result of informal experiences‟, and likewise Kolb (1984, p. 41) elucidated learning as „the process whereby knowledge is created through the transformation of experience. Knowledge results from the combination of grasping experience and transforming it‟. Beard and Wilson (2006) asserted that the interaction between self and the external environment, the experience, is the foundation of how many learning processes occur in each and every individual. This concept was further supported by Rogers (1996, p. 107) who noted that „[t]here is a growing consensus that experience forms the basis of all learning‟. Experience and learning present the same idea in many aspects of human life as the two must not be separated, for each informs the other and these become connected. From Dewey‟s perspective, experience was a lens to understanding the interaction between humans and their environments. The experience was consequently viewed as a continual process and situation linking between action and thought as Dewey (1916, p. 144-145) contended, „[t]hinking, in other words, is the intentional endeavour to discover specific connections between something which we do and the consequences which result, so that the two become continuous. Their isolation, and consequently their purely arbitrary going together, is cancelled; a unified, developing situation takes place‟. Similarly, Beard and Wilson (2006) took this concept further by representing the nature of experience through a cycle of theory and practice as stated below.
Our theories are abstract conceptualisations of how thoughts and external objects relate to one another in a consistent manner. They inform and guide us in our practice, and enable us to gain insights into the various events in which we are involved. If our practical experience does not match our theory of how we think things should be, then we often revise our theories or sometimes revisit the experience in order to see if it can be fitted into our Weltanschauung – our way of seeing the world. Thus there is a continual interaction of theory and practice in which each informs the other. (Beard and Wilson, 2006, p. 18) David Kolb is one of the most influential practitioners on the development of understanding about experiential learning theory. This theory is drawn upon the perspective of learning provided by Kurt Lewin, John Dewey and Jean Piaget. For Kolb (1984, p. 3) experiential learning is regarded as „the foundation for an approach to education and learning as a lifelong process that is soundly based on intellectual traditions of social psychology, 62
philosophy, and cognitive psychology‟. Kolb (1984) developed a four-stage learning cycle based upon similarities between Lewin‟s and Dewey‟s, and Piaget‟s description of the learning process which involves observation, knowledge, judgment, and cognitive development. Kolb‟s model viewed the bases of learning as having two dimensions. These are: (i) a processing dimension – a continuum spanning from doing to watching, and (ii) a perception dimension – a continuum spanning from feeling to thinking (see Figure 3.4). Furthermore, Kolb deemed effective learning experiences as resulting from the learning process which involves all four stages of the learning cycle as listed below.
Stage 1: Concrete experience (feeling). The student actively experiences an activity such as a lab session or field work and relating to people. The student is sensitive to other‟s feelings;
Stage 2: Reflective observation (watching). The student consciously reflects back on that experience. The student looks for the meaning of things;
Stage 3: Abstract conceptualisation (thinking). The student attempts to conceptualise a theory or model of what is observed.
Stage 4: Active experimentation (doing). The student is trying to plan how to test a model or theory or plan for a forthcoming experience which includes risk-taking.
David Kolb and Roger Fry developed a learning style inventory (Kolb, 1976) which was designed to place students on a continuum between concrete experience and abstract conceptualization, and also on a continuum between active experimentation and reflective observation. Using these two continua, Kolb and Fry proceeded to identify four basic learning styles that correspond to the four stages (illustrated in Figure 3.4). The learning styles highlight the conditions under which students learn more effectively. These learning styles are:
assimilators, have a strong ability to create theoretical models, excel in inductive reasoning, and are concerned with abstract concepts rather than people;
convergers, are strong in the practical application of ideas, can focus on hypothetico-deductive reasoning for specific problems, are unemotional and have narrow interests;
accommodators, have a strong ability in doing things, are more likely to be risk takers, can perform well when required to react to immediate circumstances, and can solve problems intuitively;
divergers, have a strong imaginative ability, are good at generating ideas and seeing things from different perspectives, are interested in people, and have broad cultural interests.
Figure 3.4 Kolb's Experiential Learning Cycle (Clark, 2004; from http://www.nwlink.com/~donclark/hrd/styles/kolb.html)
In the literature, it is evident that the work on understanding the nature of experiential learning undertaken by Dewey, Lewin, Kolb, Honey and Mumford, and Deming/Shewhart is interlinked, with each having influenced the others. Thus there appears to be a fundamental principle, particularly relevant to this study, that educators (including the researcher) need to combine thinking with doing or applying in order to create an effective learning process. In the context of this study, experiential learning, of which Kolb's learning cycle is a part, was adopted by the researcher as this approach affords opportunities for students to fulfil the needs of life-long learning (Morris, 2008) and fosters critical thinking skills, particularly in statistical learning (discussed in Section 2.1.1 in Chapter 2). For instance, students in an introductory statistics subject were required to complete weekly tasks and to work with other students in laboratory classes (see Figure 3.5). This task was designed to involve all four stages of the learning cycle: (1) students actively experience the activity in the laboratory class with the assistance of the tutor, working with their peers, (2) students look for the meaning of the activity undertaken, (3) students conceptualise a theory related to the observed activity, and (4) students recognise limitation(s) in order to plan for a forthcoming activity that relates to the theory.
Figure 3.5 A weekly laboratory task for students to complete in STAT131
3.5 Selection of the learning design framework
The researcher's beliefs, review of the literature, and experience followed the notions of all the learning
theories discussed in the previous sections. According to Oliver (1999), constructivism and social constructivism are the two beliefs that best describe the process of how learning takes place. Constructivists consider people learn more effectively by active rather than passive means (Bruner, 1973). In theory, constructivists emphasise the learning processes of individual students interpreting and processing what is received through the senses to create knowledge (Ally, 2008) and that includes both physical and intellectual learning (Phillips, 2005). In other words, learning is viewed as the construction of knowledge rather than students being given knowledge through instruction (Duffy & Cunningham, 1996) and memorization of facts (Jonassen & Reeves, 1996). Hung, Looi, and Koh (2004) stated situated learning is the main focus of constructivism which perceives learning as contextual; that is, it encourages students to
contextualise the knowledge and information in the learning process and apply the information broadly in other contexts or beyond the classrooms. In this thesis, the application of constructivist approaches to learning is informed by the work of Oliver (1999) and Oliver and Herrington (2001) through the design inclusion of three critical elements required in designing an effective online learning environment. These elements are: (1) the content or resources students interact with, (2) the tasks or activities students are required to perform, and (3) the support mechanisms provided to assist students to engage with the tasks and resources (see Figure 3.6). This learning design framework highlights the importance of planning the roles for students, the teacher and the technology in the learning environment.
Figure 3.6 Three key elements of a learning design based on Oliver (1999) and Oliver and Herrington (2001) (source: http://www.learningdesigns.uow.edu.au)
The subjects which formed the basis of the three case studies were designed to include these three elements and use a learning design representation developed from the Learning Designs Project (www.learningdesigns.uow.edu.au) known as the Learning Design Visual Sequence (LDVS). The LDVS illustrates the chronology of tasks, resources, and supports using symbols for each of the three elements of the learning design: rectangles represent the tasks, triangles represent the resources, and circles represent the supports (Agostinho, 2009) (details are described in Section 3.6). This representation provides a visual summary of a learning design from the teacher's perspective and in this work is used to facilitate students' understanding of the subjects' teaching and learning processes. It also enables teachers to easily understand the principles of designing pedagogically sound learning experiences with ICT
integration, thus suggesting "best practice" of learning design (Agostinho et al., 2009). Consequently, the researcher adapted this framework to be implemented in the case studies, and this is also aligned with the teaching and learning practices at the university level (available at http://focusonteaching.uow.edu.au/learningdesign/index.html).
3.6 LDVS representation
Funded by the Australian Universities Teaching Committee (AUTC) from 2000 until
2002, the Learning Designs Project (Agostinho et al., 2008) developed a learning design representation, LDVS which aimed to identify and describe innovative ICT-based teaching and learning practices in higher education. The project‟s website (www.learningdesigns.uow.edu.au) documents a range of generic learning designs and tools intended to assist academics in higher education reuse and adapt innovative ICT-based learning designs for their own teaching contexts. Hicks (2004) reported that the site provides extensive web based resources in higher education offering five generic learning design guidelines, four generic learning design software tools and 32 contextualized learning design exemplars (see Figure 3.7).
Source: http://www.learningdesigns.uow.edu.au/project/learn_design.htm
Figure 3.7 The Learning Designs Project website
As mentioned in the previous section, the LDVS graphically displays (via symbols) the three elements of a learning design, a sequence of learning activities (tasks), resources and support materials, along with the work produced by students in achieving the learning outcomes as a final component of the sequence, within the time parameters (see Figure 3.8). The sequence representation can be explained as below (refer to http://www.learningdesigns.uow.edu.au/project/learn_design.htm).
i. The learning activities (tasks) are represented by a series of rectangles and arranged vertically. Each rectangle has a description of what the students are required to do or produce.
ii. Learning resources are represented by triangles to the left of the task sequence. An arrow from a resource (triangle) to a task (rectangle) indicates that resources are available to the student when completing the task. An arrow from a task (rectangle) to a resource (triangle) indicates that a resource is produced during the activity (task) and becomes a resource for others to use later.
iii. The learning supports are represented by circles to the right of the task sequence. An arrow from a "circle" to a "rectangle" indicates that support strategies are being used to assist the students in their learning.
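To make the notation above concrete, the sketch below encodes a small, hypothetical LDVS-style sequence as plain Python data structures. It is purely an illustration of the task/resource/support relationships just described; the element names are invented and the code is not part of the Learning Designs Project tools.

```python
# A minimal, hypothetical encoding of an LDVS-style sequence (illustrative only).
# Tasks (rectangles) are listed in order; resources (triangles) and supports
# (circles) record which tasks they are attached to, mirroring the arrows
# described above. All names are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    attached_to: list = field(default_factory=list)  # names of tasks this element serves

tasks = ["Complete weekly laboratory exercise", "Check work against worked solutions"]

resources = [
    Element("Lecture notes", attached_to=["Complete weekly laboratory exercise"]),
    Element("Data set", attached_to=["Complete weekly laboratory exercise"]),
]

supports = [
    Element("Topic video", attached_to=["Complete weekly laboratory exercise"]),
    Element("Student forum", attached_to=list(tasks)),  # available across multiple tasks
]

# Print a simple textual summary of the sequence, task by task
for task in tasks:
    linked_resources = [r.name for r in resources if task in r.attached_to]
    linked_supports = [s.name for s in supports if task in s.attached_to]
    print(f"Task: {task}")
    print(f"  Resources: {', '.join(linked_resources) or 'none'}")
    print(f"  Supports:  {', '.join(linked_supports) or 'none'}")
```

In the visual LDVS an element attached to several tasks would be drawn with a vertical arrow spanning those tasks, as the following paragraphs explain; here that is represented simply by listing more than one task name.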
Source: http://www.learningdesigns.uow.edu.au/project/learn_design.htm
Figure 3.8 Example of the visual learning design representation invented in the Learning Designs Project
In terms of the combinations of elements in the visual sequence representation, learning resources and supports can be specific to a task, they can be introduced before the beginning of a task or when a task is complete, or they may be available for the entire duration of the learning experience. Graphically, a horizontal arrow to a specific task indicates that the learning resources or supports are limited to particular tasks, whereas a vertical arrow indicates that the resource and/or support is available for a period of time or for multiple tasks. Through the project's website, each visual sequence representation is supported with a detailed textual description which explains the design and implementation of the tasks, resources, and supports and the implementation context of the learning design. Thus, these provide guidance for teachers in planning and designing teaching and learning experiences according to their own contexts. An evaluation instrument, referred to as the Evaluation and Redevelopment Framework (ERF), has been developed by Agostinho, Oliver, Harper, Hedberg, and Wills (2002) to examine the degree to which the learning designs have the potential to facilitate high quality
learning. This instrument was developed based on the four principles of high quality learning asserted by Boud and Prosser (2001) (details are described in Section 3.7.1). These principles were incorporated into the evaluation instrument through a series of questions in both formative and summative evaluation tools (http://www.learningdesigns.uow.edu.au/project/eval_rev_frame.htm). Agostinho et al. (2002) suggest that the instrument provides a valuable tool to evaluate the potential success of existing learning settings, as well as providing guidance for educators in the process of designing and planning learning settings. In the context of this study, it was possible to adapt the instrument for inclusion in the student surveys, although it was initially designed for academic use. This is appropriate as Agostinho et al. noted,
[d]esigners/educators need to examine their learning designs from the perspective of their impact on learning, that is, placing themselves in the "students' shoes" and thus examining their learning designs from the student perspective. (Agostinho et al., 2002, p. 32)
Agostinho (2006) further discovered that the visual characteristic of the learning design representation, the LDVS, was one of its main benefits. Whilst the visual feature of the LDVS reveals its relative simplicity in construction and informal approach to describing tasks, resources, and supports, Agostinho (2009) highlighted the issue of overlap in the definitions of the resources and supports elements used within learning settings; the two are therefore explicitly distinguished from each other in this study so as to avoid confusion. Here, resources refer to the subjects' primary resources including lecture notes and Edu-stream (audio recorded lectures); tasks refer to the subjects' assessment including weekly exercises/laboratory tasks, quizzes/laboratory tests/mid-session test, assignments, worked solutions which were to be checked against student work, and data sets; and supports refer to (i) specific resources such as videos to support most topics and SPSS notes, and (ii) ongoing supports such as learning strategies, the student forum, past tests and exam papers, and relevant websites. Agostinho (2009) explored the issue of different perspectives about the purposes of the six learning design representations (as noted in Section 3.3) currently used in e-learning environments by comparing and contrasting these representations, using tables that compared explicitly their purposes, structures, and key features. From the study outcomes, Agostinho (2009) highlighted the need for educators to understand more about each learning design representation and then explore how these could be implemented, as it was considered unlikely that one all-encompassing learning design representation would be adequate. This contrasted with the idea of providing a common language of "learning design" notation for educators (Waters & Gibbons, 2004), as educators design learning activities in different ways and contexts with different representations and tools to support and suit their needs (Conole, Oliver, Falconer,
Littlejohn & Harvey, 2007; Masterman, 2006). In this study the researcher sought an understanding of what constitutes an effective learning design representation based upon a student-centred perspective of learning within the online learning environment in teaching mathematics and statistics.
3.7 High quality learning in higher education
From the student-centred perspective, research has shown there are two predominant
approaches students take in learning in higher education. In Marton and Saljo‟s (1976) study, students were asked to describe how they learned when they read, and the study revealed there were students who tried to memorise the material (surface approach) and those who tried to understand it (deep approach). In a deep learning approach, students aim to understand the ideas and seek meanings. Thus they are more likely to go beyond the memorisation of facts, that characterised a surface approach (Bates & Poole, 2003). Specifically, in a surface approach students‟ motivation tends to be extrinsic in contrast to deep approach students. In a deep approach students have an intrinsic interest in tasks, and the expectation of enjoyment in carrying out the tasks (Prosser & Trigwell, 1999). Prosser and Trigwell (1999, p. 3) explained these two approaches as [i]n [a deep approach] … they [students] adopt strategies that help satisfy their curiosity, such as making the task coherent with their own experience; relating and distinguishing evidence and argument; looking for patterns and underlying principles; integrating the task with existing awareness; seeing the parts of a task as making up the whole; theorising about it; forming hypotheses; and relating what they understand from other parts of the same subject, and from different subjects. … In the other (surface) approach, students see tasks as external impositions and they have the intention to cope with these requirements. They are instrumentally or pragmatically motivated and seek to meet the demands of the task with minimum effort. They adopt strategies which include a focus on unrelated parts of the task; separate treatment of related parts (such as on principles and examples); a focus on what are seen as essentials (factual data and their symbolic representations); the reproduction of the essentials as accurately as possible; and rote memorising information for assessment purposes rather than for understanding. Biggs (1988) postulated a third approach to learning, achieving. This is described as … different from deep and surface in that the latter refer to the kinds of cognitive processes used when engaging the task (meaningful or rote learning), whereas the former refers to arranging the context for carrying out the task. While deep and surface are mutually exclusive at any given moment, an achieving approach may be linked to either: one can rote learn in an organised or an unorganised way, or seek meaning in an organised or an unorganised way. Surface-achieving is the approach adopted by students who want to obtain high grades and think that the 71
way to do so is by using the surface strategy;… Deep-achieving is, on the other hand, characteristic of many of the better students. (Biggs, 1988, p. 199) Biggs further contended that deep and achieving approaches appeared to be the most significant ways to learn in higher education if students consistently approach their academic tasks so as to promote life-long learning. Mann (2001) proposed alternative perspectives on the student experience shifting from surface and deep approaches to learning to alienation and engagement experiences of learning. Mann highlighted seven perspectives in viewing the possible students‟ experience of alienation in higher education: (i) the postmodern condition – the sociocultural context, (ii) positioned as subject/object – the primacy of discourse, (iii) the student as outsider – knowledge, power and insight, (iv) bereft of the capacity for creativity – the teaching and learning process, (v) exiled from the self – loss of the ownership of the learning process, (vi) disciplined into docility – assessment practices, and (vii) “leave me alone” – alienation as a strategy for self-preservation. The first six examined the conditions under which alienation might arise, the seventh explored alienation as a strategy for self-preservation and survival among students. A research group in Gothenburg University, Sweden, carried out naturalistic experiments which revealed that variations in learning outcomes were closely related with variations in approaches to learning, either the development of personal understanding of reality (deep approach) or simply to cope with the immediate tasks requirements (surface approach) (Marton & Saljo, 1976). According to Prosser and Trigwell (1999), this result demonstrated that it was more plausible for deep approaches to learning to be affiliated with higher quality learning outcomes in higher education as described in Prosser and Trigwell (1999, p. 4),
[l]earning outcomes, or ways of understanding which include the more complete ways of conceiving of something, are of a higher quality than those involving more limited conceptions. Students who are able to see relations between elements of their understanding in a subject and are aware of how that understanding and those relationships can be applied in new and abstract contexts have a higher quality learning outcome than students who cannot.
Consequently, this perspective conceptualises approaches to learning as relational phenomena. As stated in Boud and Prosser (2002, p. 238), 'learning is always situated in a particular learning context. That is, students perceive the same learning context in different ways and this variation in ways of perceiving the context is fundamentally related to how they approach their learning and to the quality of their learning outcomes'. In relation to this study, the researcher emphasises the rationale of adopting a student-centred perspective in designing resources within the online learning environment. For example, in the statistics subjects, students completed an online assessment by taking the necessary steps, using the data set given, in order to
write a meaningful paragraph about the variable of interest. Other variables useful for helping them to make sense of the data or to justify the exclusion of errors were included in the data set. They also produced summary statistics, histograms, stem-and-leaf plots and boxplots to facilitate writing a meaningful paragraph. They had to use appropriate statistical terms and convey their meaning in simple English.
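As a purely illustrative aside, the sketch below shows the kind of exploratory summary a student might produce for such a task. The subjects themselves used SPSS; the Python code, the file name and the column name here are hypothetical and merely stand in for whatever data set was actually supplied.

```python
# Illustrative sketch only: hypothetical file and column names, not the actual
# STAT131/GHMD983 materials (students in those subjects worked in SPSS).
import pandas as pd
import matplotlib.pyplot as plt

# Load the supplied data set (hypothetical file name)
data = pd.read_csv("class_survey.csv")

# Numerical summary of the variable of interest (hypothetical column name)
print(data["reaction_time"].describe())  # count, mean, std, quartiles, min, max

# Graphical summaries to support a written interpretation of the variable
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
data["reaction_time"].plot.hist(ax=axes[0], bins=15, title="Histogram")
data.boxplot(column="reaction_time", ax=axes[1])
axes[1].set_title("Boxplot")
plt.tight_layout()
plt.savefig("reaction_time_summary.png")
```

A stem-and-leaf plot has no direct pandas equivalent and would typically be produced in SPSS or by hand; the point of the sketch is simply the sequence of numerical and graphical summaries that feed the students' written paragraph.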
3.7.1 Principles of high quality learning
The concept of learning designs can guide educators to design appropriate learning experiences for students. A learning design describes the strategies used by educators to engage students; therefore it can be used to optimize students' learning opportunities. Boud and Prosser (2001) contended that an effective learning design should be based on a set of principles that constitutes "high quality learning" in higher education. The two Australians, Boud and Prosser (2001), developed a paper on high quality learning titled "Key Principles for High Quality Student Learning in Higher Education – from a Learning Perspective". These principles are holistic in that they encompass both learning outcomes and learning processes. They are based upon an experience-based, student-centred perspective of learning, that is, 'learning arises from what students experience from an implementation of a learning design' (Agostinho et al., 2002, p. 31-32). The principles are not a comprehensive list, as they could be extended with different foci and a variety of terms may be used. These principles represent the main issues emerging from Boud and Prosser's (2001) understanding of the literature in higher, adult and professional education. Some of the principles may be more salient than others according to different learning contexts; however, all four principles must be considered in any application. Specifically, the four key areas were formulated for application in e-learning environments and are adaptable to this study. These principles are:
1. Engaging learners. This includes starting from where learners are, taking into account their prior knowledge and their desires and building on their expectations.
2. Acknowledging the learning context. This includes the context of the learner, the course for which the activity is part and the sites of application of the knowledge being learned.
3. Challenging learners. This includes motivating learners to be active participants, using the support and stimulation of other learners, taking a critical approach to the materials and going beyond what is immediately provided.
4. Providing practice. This includes demonstrating what is being learned, giving feedback, reflection on learning and developing confidence through practice. (Boud & Prosser, 2002, p. 240-241)
3.8 Other forms of learning designs
As discussed in Section 3.3, there are several learning design definitions used by
academic practitioners in explaining their teaching and student learning processes. Boud and Prosser (2002) argued there was a need to examine student learning perspectives on the use of learning designs in the context of subjects as a whole. In relation to this issue, the researcher grasped the notion that learning designs cannot be created independently of the context in which they are to be used. To reflect on the statement made by Boud and Prosser (2002, p. 238) that „[l]earning designs may be at the level of a whole subject, subject component or learning resources‟, there are other forms of learning designs which have been developed by educators including concept maps, webpages, lesson plans, subject outlines, and many others. Some of these are described in detail in the following sections.
3.8.1 Concept maps
A concept map is defined as a diagram that presents the relationships among a set of connected concepts and ideas. It is a visual way to demonstrate how an individual's (teacher's or student's) mind views a particular theme or subject matter. For students, the construction of concept maps can induce reflection on what they know and what they don't know. The concepts displayed in a concept map are commonly represented by words enclosed in particular shapes (for example, boxes, rectangles, circles) and these are connected to other concepts by arrows. Icaza, Bravo, Guiñez, and Muñoz (2006, p. 2) described concept maps as
… a technique for organizing knowledge graphically. Concept maps give the possibility to capture, process, and store information, in a way similar to that used by the brain. Concept maps carry out a schematic representation of significant relationships among concepts expressed as sentences… [It] allows to connect a series of concepts and to navigate through them, establishing their interrelations and linking them with contents and necessary didactic materials to improve students' understanding.
Concept maps traditionally expound the links between concepts. Figure 3.9 shows an example of a concept map from an applied mathematics subject at the University of Wollongong, where the links between nodes were used to represent the links between different ideas. An adaptation of a concept map using building blocks (Porter & Denny, 2011) has been used, via the e-learning system, to visually display the sequential nature of mathematics and navigation through the mathematics subject in the third case study (MATH131/MATH132) (Figure 3.10).
Figure 3.9 A concept map used in MATH111 subject
Figure 3.10 Adaptation of a concept map using building blocks in MATH131 subject
3.8.2 Webpage
A webpage is a page of the WWW (World Wide Web), normally in HTML/XHTML format, providing hypertext links that enable navigation from one page or section to another. Web pages often use associated graphics files to provide illustration, and these graphics can also provide clickable links. A webpage is displayed using a web browser such as Internet Explorer, Firefox, Netscape, or Google Chrome, and can be designed to make use of applets which may provide motion graphics, interaction, and sound. This kind of delivery mode for teaching and learning has been positively accepted as a good learning tool by students (Icaza et al., 2006) (see the example of a subject's webpage in Figure 3.11).
Figure 3.11 A webpage of statistics subject from http://bcs.whfreeman.com/bps3e/
3.8.3 Lesson plan
A lesson plan is usually designed by a teacher or instructor to set out the structure of the teaching of a particular subject. A lesson plan provides detail of what is going to be taught in one learning session, such as a time plan and the learning aims and objectives. When creating a lesson plan it is common to consider the main components: (i) topics, (ii) aims (the broader goals of the lesson, what it is reaching towards), (iii) objectives (the specific, measurable outcomes of the lesson – the particular skills or knowledge students should have acquired at the end of the lesson), (iv) procedures (what the teacher will do to get the students there), (v) the time required for each session of teaching and learning, the resources required and supports available, (vi) catering for the different needs of individuals (cultural differences, learning styles, learning strategies, special needs), and (vii) how the lesson is to be assessed. Figure 3.12 displays a lesson plan designed for a statistics laboratory class at the University of Wollongong.
Figure 3.12 A weekly lesson plan for laboratory work in GHMD983
3.8.4 Subject outline
Subject outlines are intended to provide students with an overall plan for a subject to enable them to function efficiently and effectively in the subject. A subject outline can be divided into several sections such as subject name and code, the number of credits, the name and title of the instructor, day, place and time of classes, prerequisites - particular subject, specific knowledge or skills a student should know before beginning the subject (e.g., use of the computer, ability to read architectural plans, etc.), access to the instructor - consultation hours for students, office location, emails and telephone number for office appointments, learning outcomes, subject content, instructional method, subject materials and supports, assessment, and subject related policy statements (for example see Appendix 3.1).
3.9 Conclusion
In this study the researcher aimed to gain a better understanding of how students
experience learning by adopting a student-centred perspective when designing learning within an online environment. Prior to examining approaches to improving learning outcomes within online learning environments, the nature of knowledge, or epistemology, was elaborated. There are many different schools of thought on learning, such as behaviourism, cognitivism, constructivism, social constructivism, and co-constructivism. In this thesis, the researcher connects the notions of all these learning theories, and in addition the more recent theory, connectivism, as a way of describing what the researcher observed in educational practice within the online learning environment. Kolb's experiential learning cycle, which offers the basic principles needed in creating effective learning processes, was also adopted in this study.
… we all enter the teaching enterprise with an implicit set of theories about the nature of knowledge and the ways students learn. These implicit theories are at the heart of our teaching philosophies and our teaching behaviours. (Bates & Poole, 2003, p. 26)
As there is no standard form or consistent notation system for representing learning designs, the researcher intended to examine the process of designing the student learning experiences through three case studies. The work of Oliver and Herrington (2001) suggested a suitable framework for designing effective online learning environments, and this framework was trialled in this study. In doing this, the four principles of high quality learning developed by Boud and Prosser (2001) provided guidelines for creating effective learning designs, particularly in e-learning systems. The main issue in designing the learning activities for a particular subject involved understanding how teachers engage students by challenging their experiences of the field of study and their present understandings, and helping them to develop their self-critical skills (Boud & Prosser, 2002). Thus, based on the student-centred perspective, this study suggests that understanding the impact of learning designs is fundamental to understanding how students experience the teaching and learning situation within their learning context.
CHAPTER 4
METHODOLOGY
4.0 Introduction
… a discussion of the usefulness of any research [design] should first of all consider the particular information need i.e. what we want to know [the aims and objectives specified in Chapter 1] that motivates a research or data collection endeavour, as different information needs often call for different research [designs]. (Gal & Ograjenšek, 2010, p. 288)
Predominantly, there are three research designs or approaches highlighted by Creswell
(2009), which are: quantitative, qualitative and mixed methods research. He contended that the first has been available to the social and human scientist from the late ninetieth century until the middle of the twentieth century, the second has emerged in the latter half of the twentieth century and along with it, the development of mixed methods research increased. These three key terms are defined distinctly:
Qualitative research is a means for exploring and understanding the meaning individuals or groups ascribe to a social or human problem. The process of research involves emerging questions and procedures, data typically collected in the participant‟s setting, data analysis inductively building from particulars to general themes, and the researcher making interpretations of the meaning of the data…
Quantitative research is a means for testing objective theories by examining the relationship among variables. These variables, in turn, can be measured, typically on instruments, so that numbered data can be analysed using statistical procedures… Mixed methods research is an approach to inquiry that combines or associates both qualitative and quantitative forms. It involves philosophical assumptions, the use of qualitative and quantitative approaches, and the mixing of both approaches in a study. Thus, it is more than simply collecting and analysing both kinds of data; it also involves the use of both approaches in tandem so that the overall strength of a study is greater than either qualitative or quantitative research… (Creswell, 2009, p. 4)

Until recently there were shortfalls in the major approaches that included only quantitative or qualitative methods in social and human sciences research; many research practices resided in the middle of a continuum between the two (e.g. Creswell & Plano Clark, 2007; Gal & Ograjenšek, 2010; Newman & Benz, 1998).
In particular, Petocz and Reid (2010) asserted that in statistics education research both quantitative and qualitative approaches are common; nevertheless mixed methods research seems more pertinent: '[w]hile statistics is essentially a quantitative discipline, it contains a necessary core of qualitative components. To be or to become a statistician [in particular, statistics educator] involves some familiarity with the qualitative as well as the quantitative!' (p. 272). In the context of online learning environments, as evident in Ward's (2004) study, the researcher believed

… that as more and more students and users use interactive web-based learning and dissemination tools, researchers will increasingly need to employ mixed-methods and adapt qualitative methods to suit emerging needs… The integration of different types of qualitative methods with quantitative ones will also be needed to examine events and processes in the learning communities…, i.e. classes involving 300+ students where teaching teams operate in a complex social and technology-based environment. (Gal & Ograjenšek, 2010, p. 294)

There are many research strategies and methods that can be used within each of these research approaches for the purpose of data collection. In this study, the use of technology in improving statistics education was examined using data generated from multiple evidentiary sources (i.e. mixed methods), which include both quantitative data (assessment) and qualitative data (surveys and observation). For instance, the records of student achievement (assessment data) and their perceived learning of the subject (student surveys) were used to identify what impact the technology had on their learning and understanding. A mixed methods approach additionally grants the researcher the ability both to generalise the research findings to the different contexts of learning (populations) and to develop a detailed view of the meaning of student learning experiences within individual online learning environments. In the preliminary study, the researcher sought to identify which variables to study, that is, to identify students' learning needs in the subject (for example, anxieties, confidence, difficulties on particular topics, technology issues), and then to study those variables with a large sample of individuals across three consecutive teaching sessions by collecting and analysing both quantitative and qualitative data (Creswell, 2009).
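To make the pairing of the two data strands concrete, a minimal sketch is given below in Python (using the pandas library). The student records, column names and coding scheme are entirely hypothetical illustrations rather than the thesis data; the sketch simply shows how de-identified assessment marks might be merged with end-of-session survey responses so that the quantitative and qualitative strands can be examined side by side.

    # Illustrative sketch only: the records, column names, and coding scheme are
    # hypothetical and are not taken from the case-study data.
    import pandas as pd

    # De-identified assessment records (quantitative) and end-of-session survey
    # responses (a Likert-style rating plus free text), keyed by a common student id.
    assessment = pd.DataFrame({
        "student_id": [1, 2, 3, 4],
        "final_mark": [72, 55, 81, 64],
    })
    survey = pd.DataFrame({
        "student_id": [1, 2, 3, 4],
        "videos_helpful": ["agree", "strongly agree", "neutral", "agree"],
        "comment": ["videos eased my anxiety", "clear worked examples",
                    "preferred the lecture notes", "useful before the lab test"],
    })

    merged = assessment.merge(survey, on="student_id")

    # Quantitative strand: marks summarised by perceived usefulness of the videos.
    print(merged.groupby("videos_helpful")["final_mark"].agg(["count", "mean"]))

    # Qualitative strand: a crude keyword pass standing in for thematic coding.
    themes = {"anxiety": "affect", "example": "resources", "lab": "assessment"}
    merged["theme"] = merged["comment"].apply(
        lambda text: next((t for k, t in themes.items() if k in text), "other"))
    print(merged[["student_id", "theme", "final_mark"]])

In practice the free-text comments would be coded thematically by the researcher rather than by keyword matching; the point of the sketch is only to show where the two kinds of data meet for each student.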
4.1 Selection of the research paradigm

In accord with Creswell (2009), the choice of research design for a study involves choices within three major elements: philosophical assumptions (paradigms), strategies of inquiry, and specific research methods; along with the research problem in the study, the personal
experiences of the researcher, and the audiences for whom the research study will be written. Mertens (2005, p. 8) highlighted three questions that led to the definition of the term paradigm:

1. The ontological question asks, "What is the nature of reality?"
2. The epistemological question asks, "What is the nature of knowledge and the relationship between the knower and the would-be known?"
3. The methodological question asks, "How can the knower go about obtaining the desired knowledge and understanding?"
Likewise, Creswell (2003, p. 6) contended that
[p]hilosophically, researchers make claims about what is knowledge (ontology), how we know it (epistemology), what values go into it (axiology), how we write about it (rhetoric), and the processes for studying it (methodology)…

The selection of a research paradigm and methodological approach for this study has consequently been pragmatic, drawing upon approaches which are aligned with different philosophical assumptions (refer to Table 4.1). The researcher's position is pragmatic, arising in part from the necessity imposed by the learning contexts and in part from her personal positioning as a teacher-researcher. In the context of this study, the term teacher refers to the researcher's role as the tutor of the subjects. The pragmatist position may be described as follows:
Pragmatists decide what they want to research, guided by their personal value systems; that is, they study what they think is important to study. They then study the topic in a way that is congruent with their value system, including variables and units of analysis that they feel are the most appropriate for finding an answer to their research question. They also conduct their studies in anticipation of results that are congruent with their value system. This explanation of the way in which researchers conduct their research seems to describe the way that researchers in the social and behavioural science actually conduct their studies, especially research that has important social consequences. (Tashakkori & Teddlie, 1998, p. 27)

In accord with Onwuegbuzie and Leech, there appear to be benefits in becoming a pragmatic researcher:

… it enables researchers to be flexible in their investigative techniques, as they attempt to address a range of research questions that arise. Pragmatic researchers also are more likely to promote collaboration among researchers, regardless of philosophical orientation… pragmatic researchers are more likely to view research as an holistic endeavour that requires prolonged engagement, persistent observation and triangulation… pragmatic researchers are in a better position to use qualitative research to inform the quantitative portion of research studies, or vice versa. For
example, the inclusion of quantitative data can help compensate for the fact that qualitative data typically cannot be generalised. Similarly, the inclusion of qualitative data can help explain relationships discovered by quantitative data… (Onwuegbuzie & Leech, 2005, p. 383)

Table 4.1 Qualitative, quantitative, and mixed methods approaches

Qualitative approaches tend to or typically use:
Philosophical assumptions: constructivist/advocacy/participatory knowledge claims.
Strategies of inquiry: phenomenology, grounded theory, ethnography, case study, and narrative.
Methods: open-ended questions, emerging approaches, text or image data.
Practices of research: positions him- or herself; collects participant meanings; focuses on a single concept or phenomenon; brings personal values into the study; studies the context or setting of participants; validates the accuracy of findings; makes interpretations of the data; creates an agenda for change or reform; collaborates with the participants.

Quantitative approaches tend to or typically use:
Philosophical assumptions: post-positivist knowledge claims.
Strategies of inquiry: surveys and experiments.
Methods: closed-ended questions, predetermined approaches, numeric data.
Practices of research: tests or verifies theories or explanations; identifies variables to study; relates variables in questions or hypotheses; uses standards of validity and reliability; observes and measures information numerically; uses unbiased approaches; employs statistical procedures.

Mixed methods approaches tend to or typically use:
Philosophical assumptions: pragmatic knowledge claims.
Strategies of inquiry: sequential, concurrent, and transformative.
Methods: both open- and closed-ended questions, both emerging and predetermined approaches, and both quantitative and qualitative data and analysis.
Practices of research: collects both quantitative and qualitative data; develops a rationale for mixing; integrates the data at different stages of inquiry; presents visual pictures of the procedures in the study; employs the practices of both qualitative and quantitative research.

(Adapted from Creswell, 2009, p. 17)

In this study, the phrase 'necessity' means the condition in which the researcher works with the lecturer of a subject who designs the teaching/learning activities and support materials. However, the lecturer is guided by the evaluation conducted by the researcher to introduce change; hence the lecturer selected the activities that have worked, and eliminated and revamped those that have not. For instance, the organisation or design of the subject e-learning site was done by both the lecturer and the researcher; that is, the assessment folder was placed on the subject homepage to allow the ease of addition of assessment by the lecturer whilst the
layout of the learning design maps which incorporated links to resources in the e-learning site was set up by the researcher. What works in terms of student learning within the online learning environment has emerged from the students' needs, and the lecturer's experience, history and education. Indeed, pragmatism has offered the most expedient paradigm for application in this study.
4.2 Selected strategies of inquiry

Creswell (2009) contended that the choice of a research paradigm inevitably aligns with
the strategies or research methodologies (Mertens, 1998) that provide specific direction for procedures in a research design. Creswell further posited that
[t]he strategies available to the researcher have grown over the years as computer technology has pushed forward our data analysis and ability to analyse complex models and as individuals have articulated new procedures for conducting social science research. (Creswell, 2009, p. 11)

The researcher's own practice and experience of her profession as a teacher have been critically investigated in this study. Her enthusiasm for implementing changes to improve the learning of her students, particularly within an online learning environment, has increased throughout. Her initial reflections led to the identification of many potential sources of evidence of student learning, and an abundance of reflective comments on successes and shortcomings and the reasons behind them. In accordance with the mixed methods approach and the specified research paradigm of this study, the researcher selected several strategies or methodologies, as displayed in Table 4.2. These are discussed in the ensuing sections to further illuminate the connection between theory and practice.
Table 4.2 Research design

Paradigm/assumptions: Pragmatism.

Strategies/Methodologies:
Exploratory research: each of the three case studies involved an observation phase in order to explore the problem and design interventions to resolve problems or assess needs, and systemised data collection to evaluate outcomes.
Case study: each of the three case studies was related to the organisation of teaching/learning/assessments in an online learning environment (e-learning system) within the closed systems of the individual subjects.
Survey research: each of the three case studies involved both open- and closed-ended student surveys using a structured questionnaire and rating scale to examine the impact of design interventions.
Action research: the case studies in Chapters 5, 6 and 7 involved several implementations which refined both the interventions and the data collection protocols.
Phenomenology: description of the multiple dimensions of learning in a blended environment involves reporting of the experiences of all participants within the context of the subjects.

Methods/Techniques: questionnaires; document analysis; online discussions/student forum; observations; assessment grades and results; e-learning student tracking statistics; peer briefings.

Data types: quantitative and qualitative.

Mode of analysis: statistical; thematic.

(Adapted from Morris, 2008, p. 28)
4.2.1 Exploratory research

The great majority of exploratory research is conducted to investigate an issue or topic in order to develop insight and ideas about its underlying nature. Topics are often a problem or issue that requires additional research study for problem resolution. Designing and conducting a small-sample exploratory study, therefore, is often the first step in a more comprehensive or complex research project. (McNabb, 2010, p. 96)

From another perspective, Stebbins (2001) viewed the notion of exploratory research as the "process of exploration" of the overall approach to data collection, which is 'not only at the beginning but also throughout the research' (Davies, 2006, p. 110). Exploratory research is a methodological approach which enables researchers to describe compound phenomena and to conjecture about underlying factors that influence them; essentially it also helps the creation or building of a tentative theory about the phenomena of interest (Davies, 2006; Gal & Ograjenšek, 2010). Glaser and Strauss (1967) hence regarded exploratory research as a synonym for grounded research.
The defining characteristic of grounded theory is that the theoretical propositions are not stated at the outset of the study. Rather, generalizations (theory) emerge out of the data themselves and not prior to data collection. Thus the emergent theory is grounded in the current data collection and analysis efforts. (Mertens, 2005, p. 242)

Exploratory research frequently has, by default, a qualitative core (Gal & Ograjenšek, 2010); however, some authors asserted that it can involve both qualitative and quantitative enquiries (Davies, 2006; McNabb, 2010). Davies (2006) perceived the idea of exploratory research as exploration-for-discovery, and the researcher as an explorer who is flexible and pragmatic yet engages in a broad and thorough form of research. Davies disputed some fundamental misinterpretations often seen in understanding the role of exploratory research:

… exploratory research has become synonymous with the notion of 'feasibility study' or 'pilot study', both of which imply a prior or sequential stage in a research programme and thus a limited appreciation of exploration. It is also misleading to use exploration as a synonym for qualitative research. (Davies, 2006, p. 110)

McNabb (2010) further discussed the methods or approaches generally used for data gathering in exploratory research; these are: (1) literature reviews, (2) personal interviews, (3) focus group unstructured interviews, (4) case studies, and (5) pilot surveys. In this study, the researcher

… derives a general, abstract theory of a process, action, or interaction grounded in the views of participants [students]. This process involves using multiple stages of data collection [case studies] and the refinement and interrelationship of categories of information… Two primary characteristics of this design are the constant comparison of data with emerging categories and theoretical sampling of different groups to maximize the similarities and differences of information. (Creswell, 2009, p. 13)
4.2.2 Case studies

Case studies, in which the researcher explores in depth a program, an event, an activity, a process, or one or more individuals. The case(s) are bounded by time and activity, and researchers collect detailed information using a variety of data collection procedures over a sustained period of time… (Creswell, 2003, p. 15)

Nevertheless, there exist differences of view on the definition of a case study, namely whether it is a method or a research strategy. For instance, Stake defined a case study as an object of study rather than a research strategy and asserted that 'the more the object of study is a specific, unique, bounded system' (Stake, 2000, p. 436) thus 'the greater the rationale for calling it a case study' (Mertens, 2005, p. 237). O'Leary (2005) likewise argued against the inclusion of the "case study" as a research strategy, on the grounds that case studies may involve a variety of data sources and employ other types of strategies within the study. In the context of this study, Chapters 5 and 6 are presented as two case studies which explore the changing of learning designs for teaching, learning and assessment across multiple sessions in an introductory statistics subject. Chapter 7 presents the third case study, in a different subject area, which targeted trainee teachers in the primary setting; it involved the implementation of learning designs developed for the teaching and learning of mathematics, targeting the improvement of student learning outcomes. Within these case studies, another type of research strategy, action research, has been employed: the development of appropriate learning designs aimed at improving student learning and understanding of mathematics and statistics. In addition, the evaluation processes have allowed comparisons across the multiple implementations. On the other hand, case studies have also been described by Mertens (2005), Creswell (2003) and Crabtree and Miller (1992) as one option in the strategy choices of qualitative research due to the variety of methods that may be used to collect data. Substantially,
[c]ommonality in definitions recognizes case studies as investigations which focus on a single instance in a single context. Description, analysis and interpretation provide particular understanding of the instance in context. (Morris, 2008, p. 26)

Stake (2000) and Morris (2008) have further cautioned researchers to emphasise the intrinsic interest in each case study rather than making comparisons in an effort to find similarities or differences with other cases. In this study, the main purpose has resided in the development and evaluation of three different learning contexts. Case studies, however, may provide more global insight into other situations through generalizations of findings in each case (O'Leary, 2005; Mertens, 2005).
The idea behind [case study] research is to purposefully select participants or sites (or documents or visual material) that will best help the researcher understand the problem and the research question. This does not necessarily suggest random sampling or selection of a large number of participants and sites, as typically found in quantitative research. A discussion about participants and site might include four aspects… [1] the setting (where the research will take place), [2] the actors (who will be observed or interviewed), [3] the events (what the actors will be observed or interviewed doing), and [4] the process (the evolving nature of events undertaken by the actors within the setting). (Creswell, 2009, p. 178) With regard to providing evidence of student learning, Alexander (1999) claimed that case studies offer a variety of good practices in program design, development, implementation and evaluation. Alexander recommended several case study methods that are frequently applied by researchers when evaluating their students‟ learning. Several of these (*) are applicable in this study:
comparison of the performance of students who used the program, with those who did not use it (a sketch of this kind of comparison is given after this list); (*)
a comparative study with control and treatment group, and pre- and post-tests;
comparison of students' solutions to problems in examinations, with those of students from other universities;
pre- and post-tests combined with student interviews; (*)
review of students‟ responses in examinations and overall performance in assessment; (*)
assessment of content and retention of learning; (*)
questionnaire concerning students‟ experience of the program as well as their reaction to it; (*)
questionnaires concerning students‟ perceptions of learning outcomes; (*)
questionnaires given to students before and after the use of the program; (* at the end of one session to guide implementation and then at the end of the following session to evaluate, i.e. different cohorts)
interviews with students about changes in their conceptions; (*)
focus groups;
expert reviews; (* researcher‟s conferences papers and publications) and
observation of students‟ use of the program. (*) (Adapted from Alexander, 1999, p. 179-180)
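As an illustration of the first of these methods, a minimal sketch of the kind of comparison involved is given below in Python; the marks and group sizes are invented rather than drawn from the case-study data, and, as argued in Section 4.3, without random allocation any such comparison is suggestive rather than conclusive.

    # Illustrative sketch only: the marks and group labels are invented for the
    # example and do not come from the case-study data.
    from scipy import stats

    # Final marks for students who reported using the video resources and for
    # students who reported not using them.
    used_videos = [68, 74, 81, 59, 77, 70, 85]
    no_videos = [62, 55, 71, 66, 58, 73]

    # Welch's two-sample t test, which does not assume equal variances.
    t_stat, p_value = stats.ttest_ind(used_videos, no_videos, equal_var=False)
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")

    # Without random allocation, any difference may reflect which students chose
    # to use the videos rather than the effect of the videos themselves.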
4.2.3 Survey research

In contrast to exploratory research and case studies, survey research is one of the most common strategies or approaches used for quantitative research (Crabtree & Miller, 1992; Creswell, 2009; Mertens, 2005). The use of survey research as a method of investigation is widely applied in the social sciences (Gordon, 2000), and it has also been used pervasively in educational and psychological research (Mertens, 2005).
Survey research involves the systematic collection and analysis of information on more than one case. The variables must be collected in a consistent manner so that the cases can be compared, whether they are individuals, referral files, organizations or countries. (Gordon, 2000, p. 341)

Glasow (2005) further distinguished the meanings of the terms survey research and survey, as many researchers have frequently used these two terms erroneously. She posited that a survey is simply a data collection tool for carrying out survey research as a means for gathering information about the characteristics, actions, or opinions of a large group of people; additionally, it can also be used to assess needs, evaluate demand, and examine the impact of a specified program. The survey instrument or questionnaire is often used to differentiate the survey tool from the survey research (Glasow, 2005), although other data collection tools may be used, such as interviews, content analysis and observation (Gordon, 2000).
In the context of this study, the researcher employed two formats for the instruments in her three case studies: an on-paper survey questionnaire (conducted in class) and an online survey questionnaire (via the e-learning system). The aim was to examine the impact of the design interventions on the teaching/learning/assessment in the three different statistics and mathematics subjects. The surveys were designed to include both structured open- and closed-ended questions, including Likert scales, and were carried out at the end of each session.
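As a small illustration of how responses to a single closed-ended (Likert-scale) item might be tallied, a sketch is given below in Python; the responses shown are hypothetical, and the per cent agreement figure is simply one common way of summarising such items, consistent with the percentage summaries reported in the case-study chapters.

    # Illustrative sketch only: the item responses are hypothetical, not the
    # survey results reported in the case-study chapters.
    from collections import Counter

    responses = ["strongly agree", "agree", "agree", "neutral", "disagree",
                 "agree", "strongly agree", "neutral", "agree"]

    counts = Counter(responses)
    total = len(responses)
    for level in ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]:
        n = counts.get(level, 0)
        print(f"{level:>17}: {n:2d} ({100 * n / total:.0f}%)")

    # Per cent agreement: the proportion answering "agree" or "strongly agree".
    agreement = counts["strongly agree"] + counts["agree"]
    print(f"Agreement: {100 * agreement / total:.0f}%")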
4.2.4 Action research

[Action research] focuses on the way that people change and negotiate change throughout a specific developmental activity – as students do when they study statistics. Researchers using this approach maintain that knowledge is a co-constructed social enterprise dependent for its growth on a collaborative environment. (Petocz & Reid, 2010, p. 273)

In line with the development of experiential learning theory, action research, which arose through the work of Kurt Lewin, contrasts with the "objective" scientific method wherein implementation is seen as discrete from the research process. Action research in general begins with the analysis of a social situation or identification of practical problems in a specific context; this is followed by the formulation of some kind of intervention or implementation of solutions within that context, which is then evaluated (Newton, 2006; O'Leary, 2010). Essentially, it is aimed at improving practice through change interventions involving a process of collaboration between researchers and participants. It is based on two goals: (1) the knowledge produced through research should be used for change, and (2) researching change should lead to knowledge. Continuous improvement, however, was, and still is, a primary goal (O'Leary, 2010). In action research, knowledge and change are highly integrated in that

… change should not just be seen as the end product of knowledge; rather it should be valued as a source of knowledge itself… The knowledge produced from an action research study needs to be credible and must be collected and analysed with as much rigour as it would be in any other research strategy. (O'Leary, 2010, p. 149)

Additionally, Newton (2006) and O'Leary (2005) have cautioned researchers not to confuse the role of action research with evaluation research, which attempts to measure the impact of interventions without the active collaboration of participants; specifically, they are different in terms of the embedded assumptions, aims, and methods. In contrast to the standards of traditional research strategies, however, the close and collaborative relationship between
researchers and participants could be seen as a source of bias (Newton, 2006) that frequently produces unconvincing justifications for resultant conclusions. Furthermore,

… action research is characterized by a fluid and ongoing process of formulation, implementation, adaptation and evaluation in which the identification of stages or project milestones is often difficult. Research design in action research is evolutionary rather than specified beforehand in a research protocol. (Newton, 2006, p. 3)

In the context of education, improving practice through action research is not an unusual part of the daily routine of teachers, who are constantly encouraged to work in ways that develop their own skills and practice, continually informed by student reactions and reflection upon the process (O'Leary, 2010). Reflecting on the practice of teaching and learning in this study, together with a well-designed constructive learning environment, has created collaboration between the teacher and the students in the learning journey. Action research can be illustrated as a spiral cyclical process (refer to Figure 4.1). The key idea of this process is that

… you learn, you do, you reflect, you learn how to do better, you do it better, you learn from that, do it better still, and so on and so forth. You work through a series of continuous improvement cycles that converge towards better situation understanding and improved action. (O'Leary, 2010, p. 150)

In each case study, the first phase involved a period of reflective observation and design. The design interventions fed into the first implementation, and each cyclic implementation was followed by analysis, reflection and further designs. The results of each cycle of deliberations fed substantially into each subsequent implementation.
Figure 4.1 Action research spiral (Adapted from Morris, 2008, p. 31)
4.2.5 Phenomenology

Phenomenological studies are designed so as to understand and describe the meaning of a phenomenon or lived experience from the point of view of the participant or the individual's perceptions (Mertens, 2005). The need to understand the lived experience is absolutely necessary,
[i]f you want to be truly effective in getting people to move from A to B you need to start at A, and you can only do that if you understand and appreciate A. And not just intellectually, but emotionally as well… [For instance, if] you want to understand how you can help motivate struggling students – you need to know what it is really like for them at the bottom of the class. (O‟Leary, 2010, p. 119) Petocz and Reid (2010) pointed out common confusion in understanding the two terms, phenomenology and phenomenography. While both shared the same root, they are defined differently;
Phenomenology is a philosophical method, the study of experience from the first person point of view… Phenomenography, in contrast, is an empirical approach to understanding the various ways in which people experience the world, or particular phenomenon in the world. In essence, phenomenology aims at describing the richness of an individual‟s experience but phenomenography aims at describing variation in the way that a group of people experience a specific situation. Both share a focus on human experience. (Petocz & Reid, 2010, p. 273-274)
Based on its core of qualitative strategies, Mertens (2005) distinguished phenomenological research from other research strategies, that is, it emphasises the subjective experience as the centre of the inquiry; '… the focus is on understanding how individuals create and understand their own life spaces' (p. 240). The key outcome of phenomenological studies is "rich" phenomenological descriptions where meanings arise throughout the process of collection, analysis and recording of these descriptions to explore commonalities and divergences in the experience of the same phenomenon. Hence, in examining the impact of design interventions within the online learning environment, exploration of individual experiences of the structures and processes involved, and "rich" descriptions of their involvement, may provide evidence for evaluating the interventions. In this study, the deconstruction of the teaching/learning/assessment and student experiences has unveiled multi-faceted representations of implemented learning designs.
4.3 Why not a "control group" study?

Rooted in quantitative strategies, an experimental design is a study that involves the random assignment of subjects to either the experimental or the control group, exposing only the experimental group to a treatment whilst controlling the influence of extraneous factors. When a control group study is conducted it can provide strong or convincing evidence of a significant change over time in the experimental group in comparison with the control group that is not exposed to the experimental treatment. This kind of research setting is feasible in traditional lab-based experiments and clinical trials; however, it encounters important shortcomings in social research, where studies are human-centred, real-life experiments in social settings.
By removing the influence of all other factors through random allocation to experimental and control groups the experiment has only a limited capacity to arrive at explanations as to why one factor has the effect it does. Nor is it a good way of building a picture of how a set of factors produce a particular outcome. Experiments typically focus on the impact of just one or two factors. (de Vaus, 2006, p. 107) In an experimental design approach, one issue arises from practical and ethical considerations as to whether there are any advantages or disadvantages to any individuals or groups based on their inclusion in either a control or an experimental group (de Vaus, 2006). The issues of equity pose a huge ethical dilemma particularly in educational research settings,
… [i]t is unlikely that the multitude of factors that can confound outcomes can be easily controlled. For example, changes in lecturers may have been involved, teachers teaching a second group may have different experiences than others, there may be different social groupings within classes, etc. (Aminifar, 2007, p. 114)

The researcher viewed the process of learning in the university's environment as a dynamic, collaborative and interactive system where the lecturer has limited control over impacting factors or variables. The experimental design approach, however, requires stringent control of variables (e.g. extraneous, confounding, or intervening variables), with the consequence that there is almost no external validity in the study (Alexander & Hedberg, 1994). For this reason, a control group study appeared to be an inappropriate and impracticable strategy for this study.
When studying individuals [students], often in a social context, it is hard to control for all influences that sit outside [our] experimental design. (O'Leary, 2010, p. 110)

Aware of the limitations arising from not using an experimental design approach in educational settings, the researcher adopted a triangulation approach as a validation strategy in this study (Creswell, 2009; Flick, 2006). This involves the use of multiple methods and multiple data sources to support the strength of interpretations and conclusions in qualitative strategies (Mertens, 2005).
Triangulation refers to a research strategy that involves approaching a research question from two or more angles in order to converge and cross-validate findings from a number of sources (for example, different data sources). A researcher may converge self-report data derived from interviews with observational data. In combining both quantitative and qualitative approaches mixed methods research embodies the notion of triangulation. (Hewson, 2006, p. 180) In this study, the researcher sought to validate findings through the combination of multiple techniques and sources of data collection (refer to Table 4.2). An examination of those data sources should take into account both consistencies and inconsistencies in what the data reveal about the improvement of student learning, that is, how the technology can enhance statistics education.
[I]n case studies, no experimental variables are explicitly manipulated; therefore, causal relationships are established by inference based on interviews or documentary evidence. The researcher should be sure to explore rival explanations and to determine the convergence (or nonconvergence) of data from multiple sources in terms of supporting causal inferences. (Mertens, 2005, p. 256)
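As a concrete, though hypothetical, illustration of this kind of cross-validation, the sketch below (in Python, with invented student identifiers, survey answers and tracking counts; no particular export format of the e-learning system is assumed) sets students' self-reported use of the video resources against the page hits recorded in the e-learning student tracking statistics.

    # Illustrative sketch only: student ids, survey answers and tracking counts
    # are invented; the e-learning system's actual export format is not assumed.
    import pandas as pd

    # Self-reported use of the video resources (survey) and logged accesses
    # (e-learning student tracking statistics) for the same de-identified students.
    survey = pd.DataFrame({
        "student_id": [1, 2, 3, 4, 5],
        "said_used_videos": [True, True, False, True, False],
    })
    tracking = pd.DataFrame({
        "student_id": [1, 2, 3, 4, 5],
        "video_page_hits": [14, 0, 2, 9, 0],
    })

    merged = survey.merge(tracking, on="student_id")
    merged["logged_use"] = merged["video_page_hits"] > 0

    # Agreement between the two sources strengthens an interpretation; cases of
    # disagreement are flagged for closer qualitative examination, not discarded.
    print(pd.crosstab(merged["said_used_videos"], merged["logged_use"]))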
4.4 Positioning of self

In a qualitative approach, the researcher typically becomes part of the study in a sustained and intensive experience with the participants. Thus,

… considerable interest has been focused on who the researcher is and what values, assumptions, beliefs, or biases he or she brings to the study. In general, qualitative research texts recognize the importance of researcher's reflecting on their own values, assumptions, beliefs, and biases and monitoring those as they progress through the study (perhaps through journaling or peer debriefing) to determine their impact on the study's data and interpretations. (Mertens, 2005, p. 247)

Throughout this entire study, the researcher has drawn on her experience as a statistics and mathematics teacher in two countries, Malaysia and Australia. That experience comes from thirteen years of working one-on-one with individual students at tertiary level, lecturing and tutoring both statistics and mathematics subjects in Malaysia. In Malaysia, the researcher has taught several statistics subjects, among them introductory statistics and applied statistics subjects (Basic Statistics and Applied Statistics I), including topics such as methods of counting (permutation and combination), probability (discrete and continuous random variables, e.g. Binomial, Poisson, and Exponential distributions), measures of central tendency (mean, median and mode), measures of spread (range, variance, standard deviation, and coefficient of variation), the Normal distribution, sampling (sampling methods and distributions), estimation (point estimation, criteria of a good estimator, confidence intervals, and determination of sample size), hypothesis testing (single mean and proportion, differences between two means and proportions), nonparametric statistics (Sign test, Wilcoxon Signed Rank test, and Mann-Whitney U test), and analysis of categorical data (Goodness of Fit test and Chi Square Independence test). The other mathematics subject the researcher has taught was Calculus I, which includes the topics: functions and limits, differentiation, applications of derivatives, integration, and applications of integrals.
There are several other descriptions made by the researcher of her teaching context in Malaysia. These included:
The teaching and learning environment of the classroom is formal and involves only a small group of students (about 20 to 40 students) in each classroom.
Lectures and tutorials/laboratories are taught by the same teacher of the subject.
The same group of students is allocated to both lectures and tutorials/laboratories throughout the session (attendance is compulsory for both lectures and tutorials/laboratories).
There are fourteen weeks of teaching in a session, in addition to one week of revision (study week).
The e-learning system is regarded as an optional/alternative teaching and learning medium (rarely used) as opposed to the most highly preferred traditional or classroom teaching using whiteboard, textbooks/manuals, and overhead projector/screen projector.
Most students are passive in the classroom. That is, almost no conversation occurs between the teacher and students apart from the teaching delivered by the teacher (although students actively take notes, or even text or use Facebook!).
Statistics and mathematics are among the subjects that have high failure rates; hence these subjects were recognized as “anxiety provoking” and “killer” subjects by many students (Arul et al., 2004).
Most students are unable to understand the mathematical and statistical concepts taught in lectures and tutorials/laboratories due to their learning style of "memorising" formulae and question-and-answer formats.
Open-book tests and examinations are not allowed in many universities in Malaysia.
There are limited learning supports, including lecturer-student consultation, tutorials, and group discussion or peer learning (i.e. human-based supports), provided to the students, particularly in these subjects.

Reflecting on the researcher's involvement as a teacher-researcher in Australia, and particularly in this study, her role as a qualitative researcher can be placed along the following continuum:
Complete participant:
Researcher becomes member of group being studied and does not tell members they are being studied.
Participant-as-observer:
Researcher spends extended time with the group as an insider and tells members they are being studied.
Observer-as-participant:
Researcher spends a limited amount of time observing group members and tells members they are being studied.
Complete observer:
Researcher observes as an outsider and does not tell people they are being observed.
(Johnson & Christensen, 2010, p. 209)
In the first two case studies, the researcher was appropriately positioned to undertake those studies as a "participant-as-observer" (Creswell, 2009; Johnson & Christensen, 2010). In these case studies, the researcher's experience involved tutoring for the subjects of interest, STAT131 and GHMD983/SHS940. Working as a tutor for these two subjects resulted in first-hand observation of students' difficulties in learning statistics and also helped the researcher to determine what learning supports to embed, and how to embed them, within the subjects' e-learning systems to help students to learn and understand better. However, in the third case study the researcher was a designer of the subject's e-learning site only and hence might at best be described as an "observer-as-participant". In this case study, the researcher's experience involved designing, redesigning and evaluating the e-learning site over four consecutive teaching sessions for a mathematics subject for prospective primary and early childhood teachers, MATH131/MATH132. In conducting qualitative research studies, Mertens (2005) recommended that the participant observer consider the question 'what do observers observe?' (p. 383). Consideration needs to be given to:
the program setting, that is the physical environment within which the program takes place;
the human and social environment, that is, finding ways in which the people organise themselves into groups and subgroups;
the program activities and participant behaviours;
informal interactions and unplanned activities;
the native language of the program participants, that is, the observer should learn the exact language used by the participants to describe their experiences;
nonverbal communication;
unobtrusive measures; and
observing what does not happen, that is, the observer should take note of the things that are expected to happen but did not happen in the program.
In this study, with the researcher being an experienced teacher who adheres to constructivist views of how students actively construct their own knowledge, such
considerations form a part of her observation, regular teaching practice and reflection on interacting with students in the classroom. … the process of reflection, the action of being teacher-researcher, reflective practitioner or undertaking self-study, became… the means of connecting and directing the data gathering and interpretation throughout the study. Reflection during the practice of teaching or reflection in practice… was identified as one of the major mechanisms for improving learning in the classroom. (Porter, 2001, p. 15)
4.5 Ethics

Research does involve collecting data from people, about people (Punch, 2005). [Hence, ethics approval] is required in making an argument for a study as well as being an important topic in the [study]. Researchers need to protect their research participants; develop a trust with them; promote the integrity of research; guard against misconduct and impropriety that might reflect on their organizations or institutions; and cope with new, challenging problems (Israel & Hay, 2006). (Cited in Creswell, 2009, p. 87)

This study involved the tracking of student experience of learning in two undergraduate subjects (STAT131 and MATH131/132) and one postgraduate subject (GHMD983/SHS940) at the University of Wollongong in Australia. The students' rights, privacy and confidentiality were given the highest priority in this study (refer to this website for guidelines: http://www.nhmrc.gov.au/publications/synopses/e35syn.htm). All survey and permission forms and data files containing identifying information were stored in electronic form on a password-protected computer (and a backup machine), accessible only to the researcher and her supervisor, and kept for the required time span of seven years (as noted in Creswell (2009), between five and ten years). All students participating in this study were aged above eighteen years, and informed consent was sought from them before they engaged in the data collection phase of the study (see Appendices 4.1a and 4.1b). This study involved no risk of physical harm or emotional distress to students, and the data collected from them comprised:
self-completed online (via e-learning system) and on-paper surveys (in class);
assessment data (with all identifying detail removed) from department/school assessment files; and
e-learning student tracking statistics.
In addition, a survey was conducted among the lecturers and tutors, and peer discussions were also reported in the researcher's journal. The first two case studies were conducted with two different groups of students (undergraduate and postgraduate) enrolled in similar statistics subjects, beginning in August/Spring 2008. Commencing in March/Autumn 2009, the third case study was conducted with undergraduate students taking a mathematics subject at the same university. The researcher was a participant teacher in the first two case studies and a co-designer of the subject's e-learning site for the third. The researcher declared her involvement and interest in the studies and overtly discussed with teachers any values, assumptions or biases resulting from her espoused theories of teaching and learning of statistics. In conducting this study, ethics clearance was sought and granted through the University of Wollongong (http://www.uow.edu.au/research/rso/ethics/UOW009378.html; see Appendix 4.2).
4.6 Limitations

There are some limitations that need to be considered in comparing data within and across implementations of the three case studies reported in this thesis. These limitations include:
The three case studies undertaken were each considered as a closed system. Each case study aimed at understanding the student learning experience that was perceived as of intrinsic value to teaching and learning in its corresponding discipline;
There were some variations in the cohorts enrolled in each subject (STAT131, GHMD983/SHS940, MATH131/132) between sessions (implementations) such as the students‟ backgrounds (e.g. gender, nationality, on-campus students, distance students);
There was considerable variation in experience and expertise in the lecturing and tutoring staff between the sessions (implementations) in each case study which could affect the student feedback and response rates;
Most students completed the surveys online through the e-learning system on the STAT131 and GHMD983/SHS940 websites; however, some students completed the surveys during laboratory classes;
The majority of students in MATH131/132 completed the surveys during their tutorial classes (on-paper format) in week 12 (at the end of session). Nevertheless, some students may have submitted superficial responses so that they could leave class early;
There was no guarantee of the students' honesty in their responses and feedback to the surveys, although most questions offered a multiple choice of responses; and
There was some variation in the type of browser (e.g. Internet Explorer, Firefox, Google Chrome) used by the students to access the e-learning systems; thus they might have experienced problems in viewing some files that were uploaded to the systems (i.e. browser compatibility issues). However, no issues were identified in accessing the systems using the Internet Explorer browser, which was set up as the default browser in all laboratory classes.
However, some additional information (not strictly limitations) regarding the subjects involved in the case studies was noted as follows:
There was a change in the subject code from GHMD983 to a new code, SHS940, in August/Spring 2010; however, no change was made to the syllabus content of the subject; and
MATH131 and MATH132 subjects were designed as a pair of mathematics subjects for trainee teachers in Australia (primary and early childhood teachers). MATH131 is a prerequisite for MATH132 and students in MATH132 are a subset of those having completed MATH131.
4.7 Educational evaluation

The most exciting place in teaching is the gap between what the teacher teaches and what the student learns. This is where the unpredictable transformation takes place, the transformation that means that we are human beings, creating and dividing our world, and not objects, passive and defined. (Alice Reich cited in Clark, 2004; from http://www.nwlink.com/~donclark/hrd/sat6.html)

Through evaluation, educators can be assisted in identifying and measuring the gap, thereby determining the value and effectiveness of educational programs. For centuries, the term evaluation has been regarded as student testing used by classroom teachers who tried to find out the degree to which their students were learning. The activities of evaluating educational programs began prior to the nineteen sixties, primarily in the United States. These essentially evolved from Ralph Tyler's work in the nineteen thirties; Tyler viewed evaluation as synonymous with testing and measurement. The invention of the term evaluation, often attributed to Tyler, whom many would regard as the founder of educational evaluation, was
noted by Norris (1990), Popham (1993), and House (1986). Based on Tyler's objective-based conception of educational evaluation, the term evaluation is used not only for the appraisal of students but more so for the appraisal of an educational program's quality, that is, "something" other than testing students for the purpose of giving grades. He further contended that

… the term 'evaluation' was used rather than 'measurement', 'test' or 'examination' because evaluation implied 'a process by which the values of an enterprise are ascertained.'… one important purpose of evaluation was to make periodic checks on the effectiveness of schools and indicate points in the school's programme where improvements were necessary. (Cited in Norris, 1990, p. 17)

In accord with Tyler's work, Popham (1993) described educational evaluation as "systematic educational evaluation to appraise the quality of educational phenomena". Popham asserted that
4.7.1 The definitions

Evaluation was originally defined by Ralph Tyler as the '… process of determining to what extent the educational objectives are actually being realized' (Tyler, 1950, p. 69). Evaluation has also been perceived as important in providing information for the decision-making process by other evaluators such as Cronbach (1963) and Stufflebeam et al. (1971). A considerable consensus has been achieved among recent evaluators such as Stake and Schwandt (2006), Mertens and McLaughlin (2004), and Stake (2004) regarding the definition of
evaluation in practice. For instance, Dahlberg and McCaig distinguished evaluation from basic research, „[e]valuation is a form of applied research and it uses standard research methods to identify the effectiveness of some sort of practice,… the aim of evaluations is to study how effectively the existing knowledge is used in practice rather than to provide new knowledge.‟ (Dahlberg & McCaig, 2010, p. 16). Guba and Lincoln (1981, p. 35) elaborated evaluation as, „a process for describing an evaluand and judging its merit and worth‟. The term evaluand has been used by Scriven (1991) and referred to “the thing” (Nevo, 1986) or “the object” that will be evaluated which may include a social or educational program, a product, a policy, or personnel (Mertens, 2005). In response to the meaning of merit and worth, Mertens (2005, p. 50) described
[m]erit refers to the excellence of an object as assessed by its intrinsic qualities or performance; worth refers to the value of an object in relation to a purpose. So merit might be assessed by asking, How well does your program perform? And worth might be assessed by asking, Is what your program does important?

Stern (1990) further emphasised the judgmental nature of evaluation:
[e]valuation is any activity that throughout the planning and delivery of innovative programmes enables those involved to learn and make judgements about the starting assumptions, implementation processes and outcomes of the innovation concerned. (Cited in Jackson, 1998, p. 22) Smith (2001; 2006) recently asserted that
[e]valuation is part and parcel of educating. To be informal educators we are constantly called upon to make judgements, to make theory, and to discern whether what is happening is for good… Evaluation is the systematic exploration and judgement of working processes, experiences and outcomes. It pays special attention to aims, values, perceptions, needs and resources. (Cited in http://www.infed.org/biblio/b-eval.htm) In Tyler‟s early account of evaluation theory, the terms appraisal and evaluation were used interchangeably, appraisal was a synonym for assessment. Tyler contended that „[t]he idea that assessment data could be used for evaluating the effects and effectiveness of educational programmes was enormously persuasive‟ (Norris, 1990, p. 18). Mertens (2005, p. 47) eventually clarified the distinctions between the terms monitoring and evaluation in relation to assessment as stated below.
Monitoring is defined as "the continuous assessment of project implementation in relation to agreed schedules and of the use of inputs, infrastructure, and services by project beneficiaries." Evaluation is defined as "periodic assessment of the relevance, performance, efficiency, and impact (both expected and unexpected) of the project in relation to stated objectives".

There are two distinctive functions of evaluation proposed by Michael Scriven in his classic 1967 essay; these are formative evaluation and summative evaluation. In line with these two functions, Stufflebeam (1972) further suggested the distinction between proactive evaluation, intended to serve decision-making, and retroactive evaluation, intended to serve accountability. Substantially, evaluation can thus serve two functions, and these functions will tend to pull in different directions; however, both functions are necessary (Smith, 2001; 2006).
Formative evaluations are conducted primarily for the purposes of program improvement. Typically, formative evaluations are conducted during the development and implementation of the program and are reported to in-house staff who can use the information to improve the program. A summative evaluation is an evaluation used to make decisions about the continuation, revision, elimination, or merger of a program. Typically, it is done on a program that has stabilized and is often reported to an external agency. (Mertens, 2005, p. 50)

In explaining the roles of the evaluator in the two functions, Popham (1993, pp. 13-14) briefly stated that

[t]he heart of the formative evaluator's strategy is to gather empirical evidence regarding the efficacy of various components of the instructional sequence and then to consider this evidence in order to isolate deficits and suggest modifications… The summative evaluator gathers information regarding the worth of an overall instructional sequence so that decisions can be made regarding whether to retain or adopt that sequence. Whereas the formative evaluator's audience consists of the designers and developers of an instructional program, the summative evaluator's audiences are the consumers of instructional programs or those charged with the consumers' well-being.

In the context of developing technologies or curriculum, formative evaluation is considered essential in the design and development of technologies, alongside summative evaluation, which often occurs at the end of the process; thus both are necessary in determining the appropriate models of evaluation to be used (Alexander, 1999; Aminifar, 2007; Porter, 2001). Particularly in this study, there is a need to identify whether the uses of technology in the educational setting stimulate more effective pedagogy amongst educators, and hence improvements in student learning, as found by Beatty (2004). There is a need for educational evaluation of the uses of technologies or innovations in educational programs in order to assess
whether they do this or not. In so doing, it is necessary to establish or adopt a framework for evaluation.
4.7.2 An evaluator or a researcher?

Because the two roles of evaluator and researcher are so often confused, particularly in social inquiry (Mertens, 2005), Popham (1993) highlighted the distinctions between educational evaluation and educational research. Popham explicitly distinguished the two roles by delineating their characteristics in terms of focus, generalizability, and value emphasis. With regard to focus, Popham asserted that
[r]esearchers want to draw conclusions. Evaluators are more interested in decisions. Researchers are interested in understanding the phenomena, often for no other purpose than that – to understand them better. Evaluators want to understand phenomena better in order to guide someone's actions… researchers tend to focus their inquiry on deriving conclusions, and evaluators tend to focus their inquiry on facilitating better decisions. (Popham, 1993, p. 11)

With regard to the generalizability of an inquiry's results, Popham claimed that

[t]he more generalizability that a researcher's findings have, the better the researcher likes it. Evaluation, to the contrary, is typically focused on a particular educational program. There is no intention of generalising results to other situations. The focus is on this situation and what decisions to make about it. (Popham, 1993, p. 11)

With regard to the role of valuing in the inquiry, Popham posited that
[i]t is imperative for the evaluator to establish how worthwhile an educational phenomenon is in order to help make decisions regarding what should be done about it. Researchers, on the other hand, search for scientific truth without any desire to attach estimates of worth to their findings... the evaluator's job is to value things. That value ingredient is not requisite in research. (Popham, 1993, p. 12)

Mertens (2005) further distinguished between the audiences served by the evaluator and the researcher: the program funders, administrators, staff members, and recipients of the services are the audiences of the evaluator, while the researcher serves a scholarly, academic audience. Educational evaluation is a formal effort to appraise the quality of things in educational programs; evaluation is thus conducted to assist educators to make better decisions.
Essentially, the role of the evaluator is to improve education, or to make it better, whereas the researcher may be content to describe the world (Popham, 1993). In the context of this study, this suggested a clear perspective on positioning the role of the researcher (the author): to act substantially as an evaluator, while nonetheless remaining a researcher.
4.7.3 Models for evaluation Evaluation models are... a "set of plans" or "an example for imitation or emulation."... Those who conjured up most of these models were doing their level best to lay out a course of action which, if followed, would lead to more effective evaluations. (Popham, 1993, p. 23-24)

Historically, educational evaluation models were built rapidly in the late nineteen sixties and early nineteen seventies, when models were generated by people who "could spell educational evaluation" and "had access to an appropriate number of boxes and arrows" (Popham, 1993). House (1983) and Popham (1993) claimed that some of the later evaluation models frequently incorporated chunks of previously presented models. They thus believed that
[t]here are many possibilities for comparison, but perhaps the most significant comparisons are those among the underlying theoretical assumptions on which the models are based. In this way, one might see how logically similar the models are to one another and determine what logical possibilities do and do not exist. (House, 1983, p. 45)

In selecting a suitable evaluation model to adopt in this study, it was essential for the researcher to address the needs of the evaluation (for example, the evaluand, the purpose, the stakeholders, the limitations), as these determine the appropriateness and feasibility of using a specific model. Hence, four evaluation models for evaluating teaching practice were considered in the researcher's examination of potential models for this study; they are discussed briefly in the next sections so as to emphasise the deliberations undertaken in selecting a framework for evaluation.
4.7.4 CIPP model One of the well-known decision-facilitation models is the CIPP model, developed in the late nineteen sixties and originated by Daniel Stufflebeam and Egon Guba. The approach of this model is fundamentally based on the notion that 'the most important purpose of evaluation is not to prove but to improve' (Stufflebeam, 1983, p. 118). CIPP represents the four types of evaluation implied in this model, namely, context evaluation, input evaluation, process evaluation, and product evaluation. Stufflebeam asserted that the process of evaluation in this model includes three major steps: delineating, obtaining, and providing information.
[D]elineating refers to the focusing of informative requirements needed by decision makers through such operations [as] specifying, defining, and explicating... obtaining refers to the collection, organization, and analysis of information using such technical procedures as measurement and statistics... [and] providing refers to the synthesizing of information so that it will be optimally useful for purposes of the evaluation. (Popham, 1993, p. 34)

The four types of evaluation are distinguished according to their purposes and the types of information needed to support the process of decision making:
1. Context evaluation defines the institutional context, identifies the target population and assesses their needs, identifies opportunities for addressing the needs, diagnoses problems underlying the needs, and judges whether proposed objectives are sufficiently responsive to the assessed needs;
2. Input evaluation identifies and assesses system capabilities, alternative program strategies, procedural designs for implementing the strategies, budgets and schedules;
3. Process evaluation identifies or predicts, in process, defects in the procedural design or its implementation, provides information for the preprogrammed decisions, and records and judges procedural events and activities;
4. Product evaluation collects descriptions and judgments of outcomes, relates them to objectives, context, input, and process information, and interprets their worth and merit. (Refer to Table 7.2 in Stufflebeam, 1983, p. 129)
This model can readily be adapted to educational learning environments, particularly in this study where much of the decision making was undertaken by the teaching teams involved, and the evaluator was independent of these teams.
4.7.5 Stake's responsive model Based in the constructivist paradigm, the responsive evaluation model was first proposed by Robert Stake, who believed that an evaluation is 'based on what people do naturally to evaluate things: they observe and react' (Stake, 1983, p. 292), and sufficiently 'attentive to the concerns of the individuals for whom the evaluation was being conducted' (Popham, 1993,
p. 42). In other words, this model emphasises the significant role of the “stakeholders” who are the persons in and around the program such as, and particularly in this study: lecturers, students, developers, and designers.
An educational evaluation is responsive evaluation if it orients more directly to program activities than to program intents, if it responds to audience requirements for information, and if the different value perspectives of the people at hand are referred to in reporting the success and failure of the program. In these three separate ways, an evaluation plan can be responsive. (Stake, 1983, p. 292) Likewise, Guba and Lincoln (1989) identified three broad classes which characterised different types of stakeholders:
1. The agents, those persons involved in producing, using, and implementing the evaluand, such as the developers, the designers, the funders, lecturers, and the like;
2. The beneficiaries, those persons who profited in some way from the use of the evaluand, such as the "target group", the persons for whom the evaluand was designed, persons who gained by the fact that the evaluand was in use, and the like;
3. The victims, those persons who were negatively affected by the use of the evaluand (which may include, because of some failure in the evaluand, one or more putative beneficiary groups), such as students, their parents, and the like.
In responsive evaluation, Stake contended that the information gathered from stakeholders is informal, flexible, subjective, iterative, and based on evolving audience concerns. The form of information may hence be more qualitative than quantitative in nature; nonetheless, 'responsive evaluation does not rule out quantitative modes...' (Guba & Lincoln, 1989, p. 42). For a teacher working in a constructivist environment, this model would appear to meet most of the needs of this study. However,
[d]ifferent researchers have drawn attention to the limitations of evaluation only at the end of a project [thus] have sought to evaluate through several stages of implementing changes in the classroom. (Aminifar, 2007, p. 111)

Reflecting on this issue, it was essential in this study to consider evaluation at a number of stages; the responsive model can therefore be applied at each stage, or even be seen to incorporate all other models of evaluation. Several models of evaluation typically consist of four stages; two of them are discussed in the next sections.
4.7.6 Kirkpatrick's four-level model There are three reasons for evaluating... The most common reason is that evaluation can tell us how to improve future programs. The second reason is to determine whether a program should be continued or dropped. The third reason is to justify the existence of the training department. (Kirkpatrick, 1994, p. 20)

Kirkpatrick's four-level evaluation model, developed by Donald Kirkpatrick, is perhaps the most well-known model; it has been widely used in institutional learning since 1959 and is easily applicable to any training program. In this model, Kirkpatrick (1994) proposed a conceptual framework to aid in determining what information needs to be collected and used for evaluation. Information was categorised into four levels of evaluation: reaction, learning, behaviour, and results. Kirkpatrick implied that the four levels represent a sequence of ways to evaluate programs, with each level having an impact on the next. Moving from the lower to the upper levels, Kirkpatrick further contended that the process of evaluation becomes more difficult and time-consuming, but also provides more valuable information to the evaluator.
On the reaction level, it is prudent for the evaluator to ask if the participants were pleased with the program and to assess the correlates of this pleasure. On the learning level, an examination of the content of the material learned is most desirable. On the behaviour level, it is important to assess whether the information learned impacted a behavioural change in the [student] and on the results level, it is prudent to question whether the changes impacted as a result of training proved to be beneficial or detrimental to the organisation. (Hamtini, 2008, p. 694)

Clark (2004) posited that the reaction and learning levels are relatively similar to formative evaluations, while the behaviour and results levels are summative evaluations. He further noted that the first three levels provide "information" to the evaluator for improving the learning package, while the fourth level informs the "impact". In terms of its use in educational settings, this model has recently been adapted by Hamtini (2008) in his study on evaluating e-learning programs, which aimed to accommodate online learning environments. Based on Kirkpatrick's model, Hamtini proposed a three-stage model of evaluation, rudimentary in nature and holding great promise for practical application, composed of interaction, learning and results.
4.7.7 Evaluation model for educational innovations Based on a naturalistic approach, Alexander and Hedberg (1994) sought to develop an evaluation model for technology-based innovations. This model outlined the
appropriate methods for collecting and triangulating evaluative evidence at four distinct stages, from design through development, teaching and implementation to institutionalisation.
The current lack of effective evaluation may be one reason why few [ICT] innovations are used outside the institution where they are developed, and others cease to be used when the innovator moves on... Without effective, scholarly evaluation, even well designed innovations are unlikely to achieve wider dissemination, and the potential benefits of [ICT] for learning in higher education are unlikely to be realised. (Alexander, 1999, p. 182)

Commenting upon this issue, Bain (1999) also cautioned educational innovators and underscored the importance of evaluating the impact of an innovation both within and beyond the institutional contexts in which "it will thrive or die". Bain further drew on a framework adapted from Alexander and Hedberg's (1994) model, modified to "accommodate all types of educational innovation" in higher education (see Table 4.3). With some extension of its applicability beyond technologically-based innovations, Bain asserted that the advantages of the framework were as follows:
it presumes that evaluation will occur in each of the major phases of an educational project (design, development, implementation, and institutionalisation);
it outlines the types of evidence and methods that may be appropriate for each phase; and
it demonstrates how close attention to the learning process and outcome should be threaded through all phases of evaluation. (Bain, 1999, p. 166)
Table 4.3 An integrated evaluation framework (adapted from Alexander & Hedberg, 1994)

Phase 1: Design
1.1 Curriculum analysis
Purpose: To describe the inadequacies/insufficiencies of the current curriculum, with particular attention to the shortfall in student learning.
Evidence and methods: Analysis of the nature and extent of the shortfall in student learning and the probable reasons for it. The analysis is conducted through self-, peer and expert review, informed by relevant educational literature, with attention to the interplay between content, teaching/learning activities and assessments.
1.2 Teaching-for-learning analysis
Purpose: To describe and justify the teaching/learning/assessment process likely to bring about the desired learning outcomes.
Evidence and methods: Description of the proposed teaching/learning process with argument indicating why it is likely to redress the shortfall in learning outcomes, based on evidence from the literature.
1.3 Specification of innovation
Purpose: To describe and justify the proposed implementation, and indicate how it will facilitate the desired learning process and outcome.
Evidence and methods: Educational plausibility is established by detailing the implementation within the course context, and specifying how the learning process and outcome will be assessed. Project feasibility is judged through prototyping/storyboarding plus peer/expert review of anticipated costs and institutional "fit".

Phase 2: Development
2.1 Formative monitoring of learning environment
Purpose: To determine whether the innovation is functional in its context and accessible/attractive to students.
Evidence and methods: Evidence focused on the workability of the innovation and student involvement with it: observation; video; user tracking; student and peer reactions. Viability/modifications determined through peer and expert review.
2.2 Formative monitoring of learning process
Purpose: To determine whether the innovation is influencing the learning process as intended.
Evidence and methods: Evidence focused on the nature of the learning process and its immediate consequences: video; think aloud; stimulated recall; teachback; reflective journals. Viability/modifications determined through peer and expert review.

Phase 3: Implementation
3.1 Summative evaluation of learning outcome
Purpose: To determine whether the learning outcome is as intended.
Evidence and methods: Evidence focused on the nature of the learning outcome using outcome-relevant assessment tasks, supported by conventional assessments and student interviews where appropriate.
3.2 Summative evaluation of innovative validity
Purpose: To determine whether the innovation is educationally appropriate in its immediate context.
Evidence and methods: Peer and expert review of the educational worth and viability of the innovation in the unit/subject concerned, based on evidence gathered in 2.1, 2.2 and 3.1, plus evidence on integration of the innovation into the curriculum.

Phase 4: Institutionalisation
4.1 Impact evaluation
Purpose: To determine the robustness of the learning and its transfer beyond the immediate context of the innovation.
Evidence and methods: Evidence of beneficial effects on: understanding and learning in related/subsequent areas of the curriculum; indirect indicators (e.g., progress and retention rates); development of generic capabilities; transfer to the workplace.
4.2 Maintenance evaluation
Purpose: To determine the sustainability of the innovation in the context of the whole course.
Evidence and methods: Peer and expert review of the educational benefits of the innovation considered in relation to its maintenance and opportunity costs, and in relation to the educational and funding policies of the institution.

Source: Bain (1999, p. 168-169)
Alexander (1999) highlighted similarities between the models provided by Alexander and Hedberg (1994) and Kirkpatrick (1994): (i) both put forward models which emphasise the need for evaluation to be undertaken at all stages of project development and to triangulate between different sources of evidence, (ii) both focus on the need for evaluations to collect valid evidence to support claims of improvements in learning, (iii) both acknowledge the importance of such evidence in the institutionalisation and further dissemination of innovations, and (iv) both models underline the importance of formative and summative evaluations, which must become as much a part of professional practice as project development. The Alexander and Hedberg (1994) model, however, is more advantageous in terms of technology innovations (Alexander, 1999). Additionally, because of its relationship to the evaluation of learning interventions in the classroom context, this model offered a practical and direct connection to this study and hence appeared to have the strongest potential for adaptation.
4.8 Phases of the evaluation model
The evaluation model provided by Alexander and Hedberg (1994) was adopted in this study to provide evidence through evaluating the data sources used at four different stages, as presented in Table 4.4. The emphasis in the design phase was to establish the need for learning support to help students learn, understand, and deal with their anxieties in statistics subjects. In the development phase, evaluation was undertaken by colleagues and through student use of the resources provided in the e-learning systems during laboratory classes. The implementation phase focused on identifying how the resources and the selection of learning designs influenced learning outcomes. In the final phase, institutionalisation, the focus was on identifying whether or not the resources and implementation of learning designs had been adopted elsewhere within the institution.
Table 4.4 The four-stage evaluation model selected for the study: description and evaluation stages of embedding the resources and learning designs in the e-learning system (sessions August/Spring 2008 to March/Autumn 2011)

DESIGN
GHMD983 (Case study 1), August/Spring 2008. Observe: analysis of feedback to the prompts given via the student forum in the e-learning site. Reflect. Plan: to introduce video resources. Implement: videos were provided to students via weekly folders in the e-learning site. Observe: student survey at the end of session; analysis of comments given in the student forum. Reflect.
STAT131 (Case study 2), March/Autumn 2009. Plan: initiate work on resources (e.g. video resources). Implement: videos were provided to students via by-type resource folders in the e-learning site. Observe: student survey at the end of session; e-learning student tracking analysis; assessment and final grades. Reflect.
MATH132 (Case study 3), August/Spring 2009. Plan: to introduce weekly folders. Implement: resources provided to students using both by-type resource folders and weekly folders in the e-learning site. Observe: student survey at the end of session; e-learning student tracking; assessment and final grades. Reflect.

DEVELOPMENT
GHMD983 (Case study 1), August/Spring 2009. Plan: to introduce learning design maps within weekly folders. Implement: videos embedded in learning design maps (DOC files) in the e-learning site. Observe: student survey at the end of session; e-learning student tracking analysis; assessment and final grades. Reflect.
STAT131 (Case study 2), March/Autumn 2010. Plan: to introduce learning design maps within weekly folders, and CAOS pre-test and post-test. Implement: videos were accessible via links in learning design maps (PDF files) in the e-learning site. Observe: student survey at the end of session; assessment and final grades. Reflect.
MATH131 (Case study 3), March/Autumn 2010. Plan: to introduce fortnightly learning design maps. Implement: resources and support materials were provided to students using by-type resource folders and fortnightly learning design maps (PDF files) in the e-learning site. Observe: student survey at the end of session; e-learning student tracking; assessment and final grades. Reflect.
MATH132 (Case study 3), August/Spring 2010. Plan: to introduce fortnightly learning design maps. Implement: resources and support materials were provided to students using by-type resource folders and fortnightly learning design maps (PDF files) in the e-learning site. Observe: student survey at the end of session; e-learning student tracking; assessment and final grades. Reflect.

IMPLEMENTATION
SHS940 (Case study 1), August/Spring 2010. Plan: to introduce by-type resource folders and learning design maps within weekly folders. Implement: videos were accessible via the resource folder and links provided in learning design maps (PDF files) in the e-learning site. Observe: student survey at the end of session; assessment and final grades. Reflect.
STAT131 (Case study 2), March/Autumn 2011. Plan: to introduce by-type resource folders and improved learning design maps within weekly folders, the Headstart program, and CAOS pre-test and post-test. Implement: the Headstart program was accessible in the e-learning site four weeks prior to the formal start of session; videos were accessible via by-type resource folders and links provided in learning design maps (HTML and PDF files) within weekly folders in the e-learning site. Observe: student survey at the end of session; assessment and final grades. Reflect.
MATH131 (Case study 3), March/Autumn 2011. Plan: to introduce improved fortnightly learning design maps. Implement: resources and support materials provided to students using both by-type resource folders and fortnightly learning design maps (HTML and PDF files) in the e-learning site. Observe: e-learning student tracking; assessment and final grades. Reflect.

INSTITUTIONALISATION
Adaptation of resources and learning designs in other subjects or faculties within the university: Faculty of Informatics e-learning examples (refer to Appendix 8.1); the SCIE911 (Fundamentals of Science Communication) subject in March/Autumn 2011; and two additional funding applications developed for the Headstart programs.

Note: Microsoft Word Document (DOC), Portable Document Format (PDF), Hypertext Markup Language (HTML), Comprehensive Assessment of Outcomes in Statistics (CAOS)
4.8.1 Initial case study: GHMD983/SHS940 The initial case study was undertaken with postgraduate students enrolled in GHMD983 August/Spring 2008. In the design phase, student needs were assessed through change evaluations (Porter, 2007) and responses to cues placed in the student forum. Responding to student needs heralded a development phase: designing and producing resources, particularly videos, to support student learning in the subject. The provision of these resources via weekly folders in the e-learning site was also aimed at improving student learning outcomes such as easing their anxieties and increasing their confidence and understanding of the subject. In the first implementation, the effectiveness of the video resources was duly evaluated along with other components of the subject. In the development phase, learning design maps using DOC files were introduced to students in August/Spring 2009 and made accessible in the e-learning site. The maps were embedded in the weekly folders as guidance for students in completing their weekly laboratory tasks. The evaluation undertaken at the end of the second implementation suggested a refinement of the learning design maps, which were subsequently created in PDF rather than DOC files. In the implementation phase, prominence was placed on evaluating the new designs for the subject in August/Spring 2010. These encompassed the provision of the learning design maps using PDF files, which incorporated links to resources (particularly videos) via the weekly folders, and by-type resource folders in the e-learning site. Summative evaluation was undertaken considering the student assessments and survey outcomes throughout the three implementations of the subject.
4.8.2 Second case study: STAT131 The second case study was undertaken with the undergraduate students enrolled in STAT131. In the design phase, the first implementation involved the provision of video resources to the students in March/Autumn 2009 via by-type resource folders in the e-learning site. The provision of videos was informed by the outcomes from the previous case study (GHMD983 August/Spring 2008), where most postgraduates endorsed the use of videos in the subject. As with the first case study, the needs assessment was undertaken through a student survey to examine the usefulness of the video resources and other resources provided in the e-learning site. A comparison of outcomes was also undertaken between the two groups of students, the undergraduates (in March/Autumn 2009) and the postgraduates (in August/Spring 2008), before the subsequent development phase. The evaluation was to examine the impact of resources on student learning, and in particular the video supports provided in the e-learning systems. The major observations led to the implementation of new design features for the subject in March/Autumn 2010, namely the introduction of learning design maps within weekly folders in the e-learning site. The maps were embedded in the e-learning site using PDF files, which incorporated links to weekly laboratory tasks and resources. Evaluation was again undertaken at the end of the second implementation to identify the effectiveness of the new subject design in the e-learning site. The findings suggested a refinement of the learning design maps, using both HTML and PDF files within the weekly folders, and the use of by-type resource folders in the e-learning site in March/Autumn 2011. Evaluation this time also suggested a new direction, namely the introduction of a Headstart program in the subject. In the implementation phase, the Headstart program was made accessible to students via the e-learning site prior to the formal commencement of the session. Summative evaluation was undertaken, including the analysis of student assessments and survey outcomes throughout the three implementations of the subject.
4.8.3 Third case study: MATH131/MATH132 The third case study was undertaken with students who were prospective primary and early childhood teachers enrolled in a pair of mathematics subjects, MATH131 and MATH132. MATH132 August/Spring 2009 was informed by the outcomes from the previous case studies, both GHMD983 August/Spring 2008 and STAT131 March/Autumn 2009. In the design phase, weekly folders were used to locate resources provided for the subject in addition to by-type resource folders, which were made accessible in the e-learning site. The students' needs were assessed through a student survey undertaken at the end of the first implementation. Due to the number of resources provided in the e-learning site and the survey results, fortnightly learning design maps were introduced to the students rather than the weekly folders. The fortnightly learning design maps were used to organise the subject tasks, resources, and support materials on a two-week basis in the e-learning site. In the development phase, the fortnightly learning design maps, created in PDF and incorporating links to resources, were made accessible to students in both MATH131 March/Autumn 2010 and MATH132 August/Spring 2010. At the end of the session, surveys were conducted among the students aimed at evaluating the use of the fortnightly learning design maps and other subject resources, including the assessment system. The evaluation of the second implementation revealed that there was a lack of use of the fortnightly learning design maps. This led to a refinement of the maps using both HTML and PDF files. In the implementation phase, the improved fortnightly learning design maps were made accessible to students in MATH131 March/Autumn 2011 via the e-learning site. As with the other case studies, summative evaluation was undertaken, involving the examination of student outcomes in their assessments and final examination, and surveys throughout the three implementations of the subjects.
4.8.4 Overall evaluation As shown in Table 4.4 and described above, all case studies involved evaluation throughout the four phases. The first stage in each case study involved an evaluation of needs, with the studies running concurrently. The first development phase involved the provision of videos to the postgraduates (case study 1) and the undergraduates (case study 2), embedded in learning design maps (DOC files for case study 1 and PDF files for case study 2) within weekly folders in the e-learning site. An adaptation of the learning design maps used in the statistics subjects was implemented in the mathematics subjects; thus, the use of fortnightly learning design maps (using PDF files) was implemented in the development phase of the third case study. The implementation phase involved the refinement of the learning design maps and the fortnightly learning design maps using both HTML and PDF files in two case studies (case studies 2 and 3). In addition, the use of by-type resource folders in the e-learning site was introduced to the postgraduates (case study 1) and the undergraduates (case study 2) and offered in addition to the weekly folders. A Headstart program was initiated as a new direction in the second case study.
In accordance with the practice of action research as discussed in the previous section, the development and implementation phases were in practice more circular than linear, while the investigations began with assessing the needs in the design phase (refer to Table 4.4). The design phase proceeded to development; the development phase also involved implementation, with a subsequent phase of redesign based on the outcomes of formative and summative evaluation. The cycle continued with further development and a further implementation phase, with outcomes reported to colleagues. Hence, it has been possible to evaluate how the resources and development processes have been adopted, or institutionalised, in other learning contexts.
4.9 Local context
Students in New South Wales differ in some respects from those in other Australian school systems in that they receive limited exposure to the subject of statistics in secondary schools. Neither has there been any requirement to support the study of statistics with technology... The subject of statistics has consequently not generally been highly regarded by students and they have, more often than not, approached the study of statistics with a negative mindset... (Morris, 2008, p. 142)

Thus there is a need to support the student learning of statistics at the university level. For this reason, many educational innovations in Australia have been developed which aim to assist the teaching and learning of statistics, particularly using technology, as discussed previously in Chapter 2. To examine whether such innovations are successful or otherwise requires evaluation within the context in which they are applied. Essentially, in order to understand the context of this study, it has been considered prudent to inspect:
the reason the first two case studies (subjects) were selected;
baseline data regarding the pass and failure rates for the subjects selected for intervention;
baseline student surveys of the selected subjects;
approaches to assessment; and
learning supports offered to the students from years 2003 to 2008.
4.9.1 Selection of case study subjects GHMD983/SHS940 Statistics in Health Research has been designed for graduate students drawn from degree programs in Health and Behavioural Sciences. GHMD983/SHS940 is a six-credit-point introductory statistics subject and has been a compulsory subject for postgraduate students enrolled in the Master of Public Health at the University of Wollongong. The subject has been designed to be delivered via both on-campus and distance (web-based) modes. Similar in subject content and syllabus to GHMD983/SHS940, STAT131 Understanding Variation and Uncertainty is an introductory first-year university statistics subject which has been designed and developed particularly for students in the Information Technology and Computer Science degree program at the University of Wollongong (see the subject outline in Appendix 3.1). STAT131 is a six-credit-point subject (implying 12 hours of work a week) and is compulsory for most of its students, many of them in the first year of their degree program. Unlike GHMD983/SHS940, this subject has been delivered on-campus throughout the session (although in the years 2004 to 2009 it was also delivered at a remote campus). In short, Table 4.5 details the variation in some components of the two subjects.
Table 4.5 Components of GHMD983/SHS940 versus STAT131

Component | GHMD983/SHS940 | STAT131
Lectures | 1.5 hours weekly | 3 hours weekly
Laboratory | 1.5 hours weekly | 2 hours weekly
Computer software | JMP | SPSS
Prescribed textbook | Kuzma, J. W. and Bohnenblust, S. E. (2004), Basic Statistics for the Health Sciences, McGraw-Hill: Singapore | Utts, J. M. and Heckard, R. F. (2004), Mind on Statistics, Thomson Brookes/Cole, packaged with Student Version SPSS
Number of students enrolled | 35 to 95 (Wollongong); 10 to 20 (web-based) | 90 to 180 (Wollongong); 8 to 15 (Loftus)
Session offered | March/Autumn (2003 to 2005); August/Spring (2006 to 2010); also available pre 2003 (a) | March/Autumn (2003, 2008 to 2011); both sessions (2004 to 2007)
Campus offered | Wollongong (on-campus); web-based (distance mode) | Wollongong (on-campus); Loftus (remote campus) (b)

(a) Data were not included in this thesis
(b) Subject was not offered at the remote campus in 2010 and 2011
Based on student-centred and blended learning approaches, the subjects have incorporated both face-to-face lectures and online learning resources through the Blackboard Learning system (the e-learning system used at the UOW). Since the nineteen nineties, all resources have been provided in the e-learning system. Communication with other students and the lecturer is by both email and the e-learning forum. Resources in the e-learning sites include lecture notes, the laboratory manual (including laboratory tasks), SPSS/JMP notes, EduStream (recorded lectures), the student forum, video resources, worked solutions, past laboratory tests and exams, assessment, and data sets. Porter and Baharun (2011, p. 276), in reference to STAT131, noted that '[i]n recent years there has been an increasing numbers of students using online resources rather than face-to-face [lectures]'. In the 2010 sessions, due to a scarcity of laboratory space, higher year STAT131 students with no previous failures were allowed to complete their laboratory work from the laboratory manual at home or in their own time, provided they had the necessary hardware and computer software (e.g. SPSS). The number of students who took this option was small. Assistance is available for students during consultation times (4 hours outside of class time) allocated by the lecturer of the subject in a particular session, or by an appointment made through telephone or email at a convenient time. The use of such consultation is very low, with most problems or questions addressed through email or the e-learning forum.

Following the outcomes from the student survey conducted in August/Spring 2008 (as discussed in Chapter 5), the subjects GHMD983 and STAT131 were identified as suitable case studies for the investigation of the role of technology, particularly video resources, as learning supports in enhancing student learning outcomes. Video supports were identified as potentially useful learning resources after the introduction and evaluation of resources was undertaken with students of GHMD983 in August/Spring 2008. The resources were subsequently provided to students of STAT131 in March/Autumn 2009. However, the evaluation revealed that not all students were aware of or found the video resources on the e-learning site, although these were considered useful by most students who had used them. The contrast in outcomes between the postgraduate and undergraduate students shone a spotlight on the different learning designs implemented. As a consequence, the innovation of this study focused on developing the learning design, that is, on how to place the resources appropriately and more effectively in the e-learning system. This has involved several stages of development and implementation, as discussed in Chapters 5 and 6. Additionally, there were changes in the lecturer and/or coordinator of the subjects during the case study for GHMD983, although the subject materials and resources remained comparable.
4.9.2 Baseline data: failure rates As can be seen in Table 4.6, the failure rate of GHMD983 reduced drastically when the subject was offered in August/Spring (between 3.7% and 8.7%) rather than March/Autumn (between 16.0% and 26.1%). This possibly reflected the loss of students in the first session of courses combined with a change to multiple small assignments in 2006, a practice which has been retained. The failure rate of this subject dropped from 16%-26% between 2003 and 2005 to 4%-9% between 2006 and 2007. In addition, there was variation in the cohort of students, with only a small number (50 or fewer) enrolled in March/Autumn in contrast to more than 100 students enrolled in August/Spring. This was due to the variation in international student intakes, which depend on changes in student visa requirements and are subject to international market forces. Changes in the assessment system from three large to five smaller assignments possibly resulted in the drop in the failure rate between 2005 and 2006. The provision of learning supports, in particular video resources, was consequently introduced in August/Spring 2008, aimed primarily at improving student learning and understanding of statistics, rather than being a response to high failure rates.
Table 4.6 Student grades (in percentage) for GHMD983

Year(a) | On-campus | Distance | N | Fail | Pass | Credit | Distinction | High Distinction
A 2003 | 24 | 8 | 32 | 18.8 | 25.0 | 31.2 | 15.6 | 9.4
A 2004 | 43 | 7 | 50 | 16.0 | 30.0 | 22.0 | 22.0 | 10.0
A 2005 | 36 | 10 | 46 | 26.1 | 32.6 | 19.6 | 13.0 | 8.7
S 2006 | 94 | 14 | 108 | 3.7 | 11.1 | 20.4 | 31.5 | 33.3
S 2007 | 94 | 9 | 103 | 8.7 | 17.5 | 16.5 | 31.1 | 26.2

(a) A = March/Autumn, S = August/Spring (Data source: University of Wollongong, Performance Indicator Database, 09/12/2010)
In STAT131, a review of student grades from 2003 to 2008 revealed failure rates ranging between 9.3% and 24.3% (as shown in Table 4.7). Interestingly, there was a dramatic reduction of the failure rate in March/Autumn 2004, to 9.3%, coinciding with a change in the laboratory classes: the production of a laboratory manual including more authentic tasks that engaged students in the data-gathering process. The low failure rate was not sustained, although the valuing of the resource has remained high. In later years, worked solutions were available and students needed to be taught to do the tasks, not just read the solutions. In describing the pattern of student grades by session, the failure rates in March/Autumn were slightly lower (between 9.3% and 20%) compared to August/Spring (between 15.1% and 24.3%). In other words, an increase in the failure rate was evident when the subject was offered in August/Spring. However, the lack of a trend in failure rates was possibly due to variation in the background of cohorts (students enrolled), the lecturer(s) or tutor(s) involved, the assessment systems used in the subject, and other factors. Students who once completed the subject in August/Spring now complete it in March/Autumn. In addition, a small number of failing students from each session re-enrol in the subsequent session, but data on these individuals have not been available within the data collection structure of this study. This subject has had a history of innovations in teaching practice in March/Autumn, with learning promoted through experiential learning, authentic tasks and the use of technology, i.e. online learning (Baharun & Porter, 2010; Morris, 2008; Porter, 2001). The evaluation practice of this subject has encompassed data from assessment, student surveys, and comments posted in the student forum in the e-learning site.
Table 4.7 Student grades (in percentage) for STAT131

Year(a) | N | Fail | Pass Conceded & Pass | Credit | Distinction | High Distinction
A 2003 | 165 | 10.3 | 48.5 | 17.6 | 12.7 | 10.9
A 2004 | 205 | 9.3 | 49.7 | 20.0 | 13.2 | 7.8
A 2005 | 162 | 19.1 | 45.7 | 18.5 | 11.1 | 5.6
A 2006 | 136 | 17.7 | 27.9 | 18.4 | 21.3 | 14.7
A 2007 | 125 | 20.0 | 29.6 | 25.6 | 16.0 | 8.8
A 2008 | 108 | 16.7 | 25.9 | 23.2 | 15.7 | 18.5
S 2004 | 172 | 15.1 | 34.3 | 22.1 | 16.3 | 12.2
S 2005 | 156 | 21.1 | 43.0 | 21.1 | 10.3 | 4.5
S 2006 | 84 | 22.6 | 33.3 | 9.5 | 16.7 | 17.9
S 2007 | 107 | 24.3 | 37.4 | 18.7 | 13.1 | 6.5

(a) A = March/Autumn, S = August/Spring (Data source: University of Wollongong, Performance Indicator Database, 09/12/2010)
4.9.3 Student surveys Across five sessions between March/Autumn 2003 and August/Spring 2005, students voluntarily completed the surveys for STAT131 online via the subject e-learning site (Morris, 2008). Instructions given to students were as follows:
The primary purpose of this survey is to provide me with information that can assist in the development of the subjects so students can complete it in different ways. Some students attend lectures others choose to use WebCT and some use both. Some resources are more useful than others. The feedback of ALL students is valuable in this process. Note: summaries of this data may be used in research
publications. (Change Evaluation STAT131 Spring 2005, source from Morris, 2008, p. 305) The results from this previous study were used to gather information on:
work practices done by students in the subject;
perceptions of the importance of learning resources to student learning;
perceptions of student confidence in topics for learning;
perceptions of student progress toward achievement of the graduate attributes; and
overall learning.
Table 4.8 summarises some of the feedback provided by students to the online surveys conducted across seven teaching sessions of STAT131. The response rates varied across the sessions, with August/Spring 2005 achieving the lowest rate of 13%. In March/Autumn, the subject had the same coordinator over the years, with an alternative coordinator in August/Spring. In two of the March/Autumn sessions, the teaching was shared among lecturers. Innovations have been driven through the March/Autumn coordinator and adopted in August/Spring. As evident from the survey outcomes, particularly between March/Autumn 2003 and August/Spring 2005, for some years a high proportion of students (70% to 100%) placed high importance on the worked solutions, laboratory manual, assignments, laboratory tasks, laboratory classes, and lecture notes. On the other hand, some resources such as the textbook, student forum, objectives and learning strategies were regarded as of lower value for learning. In March/Autumn 2003, innovations were developed and introduced in this subject, including the collection and use of real data and work on topics of social significance in the laboratory classes. Prior to the introduction of these innovations, the vast majority of students (95.7%) had rated lectures as important to their learning (Porter, 2002). Post innovation, with the use of the laboratory manual in 2004, the laboratory classes, worked solutions, and assignments were rated as more important than lectures (see Table 4.8). The high ratings of the laboratory manual and worked solutions led to a 'sustained' change in the valuing of resources, with a decrease in emphasis on lectures and a high rating of laboratory classes. The change of relative position did not suggest that lectures had lessened in quality; rather, other resources had become more developed and valuable. Further innovation was implemented in the subject where the alignment of objectives, tasks, and marking criteria was trialled in March/Autumn 2004, and this continued until recently (March/Autumn 2011). The survey outcomes showed that the proportion of students who perceived this innovation as important ranged between 65% and 76%. Although the students did not rank the marking criteria as highly as some other resources, this resource was considered an essential element in the alignment of assessment objectives and the checking of student responses to their assessment. It also provided effective scaffolding for students' statistical thinking (Morris, 2008). There were changes in the assessment system between March/Autumn 2006 and March/Autumn 2008, as described in Section 4.9.4. As a result, the laboratory tests replacing the mid-session test were highly rated by 88% of students in March/Autumn 2006 and 93% in March/Autumn 2008. Other resources remained of high importance to learning, rated by at least 85% of students in both sessions, including worked solutions, assignments, the laboratory manual, laboratory tasks, and marking criteria. In contrast to the years between March/Autumn 2003 and August/Spring 2005, there was an increase in the proportion of students who perceived the lectures as useful for learning, from 80% in March/Autumn 2006 to 90% in March/Autumn 2008. This suggests that the lectures are important for student learning in the subject, with the exception of the two sessions August/Spring 2004 and August/Spring 2005.
Table 4.8 Student survey responses across sessions in STAT131

Session(a): A 2003, A 2004, S 2004, A 2005, S 2005, A 2006, A 2008
Response rate (%): 40, 53, 36, 41, 13, 41, 38
Number of students enrolled (N): 165, 205, 172, 162, 156, 136, 108

Perceived importance of learning resources(b) (%), for the items 1. Worked solutions, 2. Assignments, 3. Laboratory manual, 4. Laboratory tasks, 5. Lecture notes, 6. Laboratory classes, 7. Marking criteria, 8. Midterm quiz, 9. Online lecture notes, 10. Lectures, 11. Teamwork, 12. Tutor, 13. Learning strategies, 14. Objectives, 15. Student forum, 16. Textbook, 17. Laboratory tests:
A 2003: 94, 79, 90, 46, 76, -
A 2004: 90, 93, 92, 87, 86, 87, 65, 83, 54, 73, 71, 44, 52, 26, -
S 2004: 94, 95, 89, 89, 87, 76, 70, 74, 84, 56, 65, 21, 35, 48, 37, 37, -
A 2005: 97, 89, 92, 89, 79, 82, 66, 81, 79, 79, 68, 77, 65, 50, 45, 29, -
S 2005: 100, 95, 95, 89, 86, 86, 76, 57, 91, 57, 71, 81, 67, 47, 15, 24, -
A 2006: 98, 98, 98, 89, 88, 86, 88, 80, 57, 70, 63, 41, 30, 88
A 2008: 100, 95, 100, 85, 66, 81, 90, 90, 81, 81, 80, 51, 42, 93

Perceived confidence in topics(c) (%), for the topics 1. SPSS, 2. Data exploration, 3. Correlation, 4. Binomial and Poisson distributions, 5. Model fitting, 6. Normal and Exponential distributions, 7. Confidence intervals, 8. Hypothesis tests, 9. Markov chains, 10. Formulae selection:
A 2003: -
A 2004: -
S 2004: 83, 94, 73, 62, 65, 64, 57, 57, 46, 62
A 2005: 61, 81, 69, 55, 53, 44, 40, 48, 36, 42
S 2005: 81, 81, 72, 38, 62, 43, 52, 43, 48, 48
A 2006: 84, 93, 80, 80, 70, 72, 59, 63, 55
A 2008: 85, 98, 90, 71, 78, 78, 83, 88, 63

(a) A = March/Autumn, S = August/Spring
(b) Moderately to extremely important to learning
(c) Moderately confident or confident in topic learning
(-) Not surveyed
Note: Evaluation data for 2007 not available
Between August/Spring 2004 and August/Spring 2005, less than 85% of students considered themselves confident in all topics assessed in the subject. In March/Autumn 2006, 93% of students reported being confident in the exploratory data analysis topic. In March/Autumn 2008, 98% of students perceived themselves as confident in exploratory data analysis, 90% in correlation, 88% in hypothesis tests, and 85% in using SPSS. An analysis of the proportion of students who perceived themselves as confident in the topics for learning, particularly between August/Spring 2004 and March/Autumn 2006, suggested that several topics were in need of more resources or learning supports. Consequently, video resources were developed on some topics, such as confidence intervals, SPSS, model fitting, the Binomial, Normal and Exponential distributions, and hypothesis testing, and these were trialled in GHMD983 in August/Spring 2008. This innovation was implemented in STAT131 in March/Autumn 2009.
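The kind of screening described above can be illustrated with a small sketch that flags topics with low reported confidence from the figures in Table 4.8. The 70% benchmark and the code itself are illustrative assumptions made here; the study does not state a numerical cut-off, and this is not the analysis procedure actually used.

```python
# Illustrative sketch: flag topics that may need additional learning support,
# using the August/Spring 2005 confidence proportions from Table 4.8.
# The 70% benchmark below is an assumption for illustration only.

confidence_s2005 = {
    "SPSS": 81,
    "Data exploration": 81,
    "Correlation": 72,
    "Binomial and Poisson distributions": 38,
    "Model fitting": 62,
    "Normal and Exponential distributions": 43,
    "Confidence intervals": 52,
    "Hypothesis tests": 43,
    "Markov chains": 48,
    "Formulae selection": 48,
}

BENCHMARK = 70  # assumed: % of students at least moderately confident

flagged = [topic for topic, pct in confidence_s2005.items() if pct < BENCHMARK]
print(flagged)
# ['Binomial and Poisson distributions', 'Model fitting',
#  'Normal and Exponential distributions', 'Confidence intervals',
#  'Hypothesis tests', 'Markov chains', 'Formulae selection']
```

Under this assumed benchmark, the flagged topics broadly coincide with those for which video resources were subsequently developed (distributions, model fitting, confidence intervals, and hypothesis testing).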
4.9.4 Approaches to assessment Over the years, in the search for the best approach to assessment in terms of enhancing student learning outcomes, a variety of assessment tasks have been employed in STAT131. As can be seen in Table 4.9, these have included assignments involving the collection and analysis of data, portfolios or laboratory work, summaries, laboratory tests, group presentations (teams of two students), and in-class and final examinations, with different weightings applied across sessions. There was variation in the continuous assessment approach between March/Autumn 2003 and March/Autumn 2008, with the presentation and final examination remaining constant at five per cent and fifty per cent of the total assessment marks, respectively. Laboratory work and the mid-session test were applied in four consecutive sessions between March/Autumn 2004 and August/Spring 2005, whereas the mid-session test and a summary were applied in March/Autumn 2003 along with other types of assessment (see Table 4.9). Assignments have been used to assess student learning and understanding of statistics in each session, although there were changes in weightings, and the number of submissions varied between two and three assignments depending on whether they were compulsory or optional. Laboratory tests have been used in a variety of ways from March/Autumn 2006 until recently. There has been some variation in the number of assessment tasks and their weightings between March/Autumn 2006 and March/Autumn 2008. For example, there were four laboratory tests in 2006 for both sessions. In addition, three make-up tests were designed for students who obtained less than seventy per cent in any given test (zero was awarded for any test mark less than seventy per cent). In both sessions of 2007, there were six laboratory tests (three compulsory and two optional tests), in which any test mark of less than sixty per cent was awarded zero, and the tests were due in laboratory classes. Changes in the number of laboratory tests and the opportunity to re-sit a test were applied in March/Autumn 2008, where the best three test marks were chosen out of four laboratory tests. As in 2007, students who obtained less than seventy per cent were given the opportunity to re-sit the test in the following week with a different data set, completed during laboratory classes. Students who failed the re-test were further examined by the subject coordinator to clarify any problems they experienced. The reason for having a minimum mark of sixty or seventy per cent in each test was primarily to enhance student competency in the topics examined. In particular, students who were awarded zero were expected to demonstrate their competency through a retest offered in the following week. This test-retest approach has come to form the basis of the support system for identifying students at risk, typically those who needed encouragement to do a retest. The students who failed to sit retests after having been given feedback were often found to have issues such as anxiety, lack of confidence, depression, obsessiveness or other difficulties. Tests typically comprise the analysis of data sets in addition to the understanding of theoretical concepts (see Figure 4.2).
Table 4.9 Assessment weightings for STAT131 across sessions

Session(a) | Assignment | Laboratory work | Summary | Presentation/Group project | Mid-session test | Laboratory test | Final exam
A 2003 | 30 | - | 5 | 5 | 10 | - | 50
A 2004, S 2004, A 2005, S 2005 | 30 | 10 | - | 5 | 5 | - | 50
A 2006, S 2006 | 20 | - | - | 5 | - | 25 | 50
A 2007, S 2007 | 20-30(b) | - | - | 5 | - | 15-25(c) | 50
A 2008 | 30 | - | - | 5 | - | 15 | 50

(a) A = March/Autumn, S = August/Spring
(b) Three assignments with one optional assignment (minimum of 20% to maximum of 30%)
(c) Six laboratory tests with three compulsory and two optional tests (minimum of 15% to maximum of 25%)
(-) Not assessed this session
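To make the laboratory-test scoring rule described above concrete, the sketch below illustrates how a student's laboratory-test component might be computed under the March/Autumn 2008 arrangement (a 70% competency threshold, the best three of the four test marks, and the 15% component weight shown in Table 4.9). The function names and the example marks are hypothetical; this is an illustration of the rule as described, not the marking code actually used in the subject.

```python
# Illustrative sketch of the March/Autumn 2008 laboratory-test scoring rule.
# Marks below the 70% competency threshold are recorded as zero (the student is
# then offered a retest); the best three of the four test marks count towards
# the 15% laboratory-test component.

def apply_competency_threshold(marks, threshold=70):
    """Return the marks with any mark below the threshold recorded as zero."""
    return [mark if mark >= threshold else 0 for mark in marks]

def laboratory_test_component(marks, best_of=3, component_weight=15):
    """Average the best `best_of` thresholded marks (out of 100) and scale
    them to the weight of the laboratory-test component."""
    thresholded = apply_competency_threshold(marks)
    best = sorted(thresholded, reverse=True)[:best_of]
    return sum(best) * component_weight / (100 * best_of)

# Hypothetical example: four test marks, one of which (65) falls below the
# threshold and is counted as zero unless recovered through a retest.
print(laboratory_test_component([82, 65, 90, 74]))
# prints 12.3 (an 82% average of the best three tests, scaled to 15 marks)
```

Read this way, a single poor mark need not be fatal to the component, which is consistent with the stated intent of the threshold: to prompt a retest and a demonstration of competency rather than simply to penalise.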
The data for this test is in the E-Learning Lab test link: Week 6 Test Data.
Produce a contingency table (or two-way table, or cross tabulation) of "j. Born where" by "n. Iraq". Born where has two options, Australia or Overseas, while Iraq has three and refers to bringing troops home "Now", "in the next 6-12 months" or "other". Produce the observed and expected values. Also produce a stacked bar chart or a bar chart to compare the profiles of Australian and Overseas born people responding to whether or not the troops should come home now, in 6-12 months or other. Use the table and charts to answer the following questions.
LEAVE ALL ANSWERS AS FRACTIONS (DO NOT SIMPLIFY OR CONVERT TO DECIMALS – COMPUTER MARKING DEPENDS ON THIS. ONLY GIVE THE FRACTION, DO NOT ENTER P(E)=)
1. What is the probability that a randomly selected person will want to bring the troops home in the next 6-12 months?
2. What is the probability that a randomly selected person does not want the troops brought home in the next 6-12 months?
3. What is the probability that a randomly selected person is born overseas?
4. What is the probability that a randomly selected person is both born overseas and wants the troops brought home in the next 6-12 months?
5. Given that the person questioned is born overseas, what is the probability that they would want the troops brought home in the next 6-12 months?
6. We use the definition of independence to determine what the expected counts are for each cell. Using P(bring troops home in 6-12 months) and P(born overseas), demonstrate how you find the joint probability and then from this demonstrate the expected frequency for the cell where the person is born overseas and wants the troops brought home in the next 6-12 months.
7. Explain how to read the following, P(Now | Born Australia), and determine its value.
8. Using one of the charts, are overseas students similar or different to Australian students in their perceptions as to whether or not the troops should be brought home now? (Hint: remember that we are talking about one sample of data here from the many samples possible)
9. Based on your answer to (7), would you consider Where born and Iraq to be statistically independent events or not?
Figure 4.2 An example of STAT131 Week 6 laboratory tests in March/Autumn 2008
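The expected-frequency reasoning required in question 6 of the test can be illustrated with a small worked example. The counts below are hypothetical (they are not the actual class survey data); only the method follows the question. Suppose n = 200 people responded, of whom 60 were born overseas and 80 want the troops brought home in the next 6-12 months. Then

\[ P(\text{overseas}) = \frac{60}{200} = 0.30, \qquad P(\text{6--12 months}) = \frac{80}{200} = 0.40. \]

Under independence the joint probability is the product of the marginals,

\[ P(\text{overseas and 6--12 months}) = 0.30 \times 0.40 = 0.12, \]

so the expected frequency for that cell is

\[ E = n \times 0.12 = 200 \times 0.12 = 24, \quad \text{equivalently} \quad E = \frac{\text{row total} \times \text{column total}}{n} = \frac{60 \times 80}{200} = 24. \]

The same row-total-times-column-total-over-n shortcut applies to every cell of the contingency table.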
In SHS940 August/Spring 2011 (postscript), students were required to complete a draft and a final version of an assignment for the first two assessments. This approach was more positive than providing a retest for failing students, but it still allowed the identification of students at risk of failure. That is, those who attained marks of less than seventy per cent for their work had difficulty resubmitting it even when provided with guidance, suggesting that factors other than capability were problematic.
4.9.5 Learning supports Between 2003 and 2008, there were limited human-based learning supports provided to the students in STAT131 other than their lecturer, tutors, and peers. Within the subject e-learning site, several learning supports have been offered to students. These were: (i) the student forum used for discussion among teacher and peers, (ii) SPSS software package notes to assist
students in data entry and analysis, (iii) worked solutions particularly on the tasks included in laboratory manual with a well-timed release of laboratory solutions, (iv) past examination papers and laboratory tests together with solutions, (v) marking criteria (assisting with the alignment of subject objectives and assessment), and (vi) Edu-Stream or audio recorded lectures. There were also generic supports (non-subject specific) provided by the University of Wollongong such as library assistance, counselling services, and PASS (Peer Assisted Study Sessions) program. The PASS program is accessible by most students in the university and is described as follows.
PASS aims to provide a learning atmosphere which differs from the traditional tutorial environment. PASS sessions are led by 'PASS Leaders', senior students who have excelled at the subject in the past. The focus is on integrating the course content (what to learn) with academic reasoning and study skills (how to learn). You will get the maximum benefits by attending PASS every week. At each session a PASS Leader will be on hand to guide students through the course material. The focus of each session will be determined by the needs of the group. PASS is a valuable opportunity for students to seek help and advice in a friendly, relaxed environment. Sessions are designed to be informal, flexible, and fun! PASS commences in week 2 and runs weekly throughout the semesters. Enrolments open on SOLS after the first lecture. Some subjects also have an extra STUVAC PASS to further support exam preparation - these are offered to students who attend PASS 7+ times during semester. (From http://www.uow.edu.au/student/services/pass/overview/index.html)

The outcomes obtained from the evaluation data of the PASS program revealed that students who attended PASS regularly scored significantly higher grades in general, in addition to having a lower failure rate (from http://www.uow.edu.au/student/services/pass/evaluation/index.html). Unfortunately, this program was offered only for selected subjects, excluding STAT131, and thus was of no benefit to students in this subject. For this reason there was a need to provide such supports in STAT131, and as a consequence video resources were developed and incorporated within the subject e-learning site from March/Autumn 2009.
4.10 ALTC project
An Australian Learning and Teaching Council (ALTC) project arose from the successful educational work initiated by the UOW 'Quality 101' project, a working party of the Informatics Faculty Education Committee (FEC). With an aim to reduce failure rates, particularly in mathematics and computing subjects, the working party was successful in engaging the coordinators of multiple subjects in the pursuit of improving their subjects' quality. The lack of leadership renewal in 'Quality 101', however, signalled the need for a project (the ALTC project) to formulate a plan for developing 'leaders by leaders', as noted by Porter and Denny (2011, p. 8): '[t]he initiatives that were generated [in the 'Quality 101'] outgrew the team's capacity to lead. Hence, it was recognized that, in order to undertake a very large ongoing task, it was necessary to build leadership capacity engaging others, creating the vision and driving the completion of the task.' The ALTC project has been jointly funded by an ALTC grant in the area of Leadership for Excellence in Teaching and Learning and a grant from the Deputy Vice Chancellor (Academic and International), Rob Castle, of the University of Wollongong. The project commenced with initial collaboration between the two project leaders, Anne Porter (the principal supervisor of this thesis) from the University of Wollongong, and Antony Dekkers from CQUniversity, drawn together by the common need to support students' learning of mathematics and a synergy in approaches (see the project website in Figure 4.3). Consequently, one of the foci of this project has been the creation of video-based learning support resources for first year and entry-level mathematics and statistics that can be used in both cross-discipline and cross-institutional contexts.
There were two symposia held at the University of Wollongong, in 2009 (refer to http://www.uow.edu.au/informatics/maths/research/mathsresources/UOW053575.html) and 2010 (refer to http://www.uow.edu.au/informatics/maths/research/mathsresources/UOW062827.html). These symposia focused on leadership, resource creation using Tablet PC technology, evaluation, peer-reviewing, learning design, and the sharing of mathematics and statistics resources and expertise. All resources created are licensed under Creative Commons (see the licence at http://creativecommons.org/licenses/by-nc-sa/2.5/au/deed.en), which enables teachers and educators to use and adapt the resources provided the developers are acknowledged (refer to http://creativecommons.org.au/learn-more/licences). Furthermore, the creation of resources and their evaluation in this project were treated as a research process as well as a teaching and learning development process, hence ethics clearance was obtained to undertake analysis. The author of this thesis was included as a researcher on that ethics application, as were the members of the ALTC team and the lecturers themselves, as participant-researchers. Most importantly, as a member of the team the author was able to meet with the 'experts', which led to reflection on and discussion of her work, particularly on learning designs. Further, the author was able to access:
change evaluation or student survey forms and modifications used to evaluate the first year and entry-level subjects in mathematics, statistics, computing, science and engineering;
reports which lecturers allowed to be used by the ALTC team;
applications for funding that made use of the data collected; and
extracts of historical data from reports where the substance of the report remained confidential.
Figure 4.3 ALTC project website (http://www.uow.edu.au/informatics/maths/research/mathsresources/index.htm)
4.10.1 Development of video resources
In the ALTC project, mathematics and statistics video resources were created predominantly through the use of tablet PC technology, replacing the processes that involve a film or video team. Several tablet PCs were purchased by the project, for instance Fujitsu Lifebook T4220 and Toshiba M700 tablets, with a key requirement that the processor be 2.4GHz with a 160GB SATA HDD. These tablets have been used to create generic videos which can be applied in first year and entry-level mathematics and statistics subjects, as well as contextualised videos for other disciplines at all levels (see Figure 4.4). The approach to developing the resources was to create short grabs of mathematical instruction to support student learning, for instance worked examples for mathematics problems. The range of video genres has expanded from worked examples (Aminifar, 2007) to other genres through symposia, workshops and 'show and tell' sessions organised by the project team members. These genres include: (i) theory refreshers or topic overviews to revise topics previously taught in lectures or at school, (ii) demonstrations to expose students to the use of mathematical and statistical software, (iii) orientation videos used to orient students to the subject and to a particular unit of study or topic, (iv) stimulus videos to initiate thought or discussion of a particular topic in classroom settings, (v) mathematics in context to engage students across disciplines, and (vi) motivational resources to capture the use of mathematics in the real world. These videos have typically been less than five minutes long and small in file size (less than 6Mb) for ease of downloading and playing, so students can watch and re-watch them as necessary. The production of the video resources has involved a number of steps: creating electronic documents (Microsoft PowerPoint, Microsoft Word, Windows Journal, PDF), annotating these using PDF Annotator while recording screen images, movement and audio, and, in the final editing stage, producing Shockwave files for Flash Player videos using Camtasia Studio. For this purpose, the tablets also offer a wide range of coloured pen and highlighter options and are user-friendly.
Figure 4.4 Example of video resources in STAT131 developed using the tablet PC technology and Camtasia Studio
4.11 Conclusion
Based upon the researcher's own values, assumptions, beliefs, and biases, formed through her teaching experiences in two countries, Malaysia and Australia, this study was conducted using a selection of research designs and approaches and a choice of paradigm adapted to the setting.
Evaluation needs to be part of all stages of the development and use of [ICT]. This evaluation needs to be informed by the rigorous models already developed… It should involve academics in: developing their awareness of what is already known about effective evaluation of innovations; thoughtfully choosing evaluation methods to collect valid evidence at different stages of the project and for different purposes; critically analysing and synthesizing the evidence they have collected; using evaluation findings to inform ongoing changes to the innovation; and communicating about the innovation and its effectiveness to the academic community and the broader society. (Alexander, 1999, p. 182)
The model provided by Alexander and Hedberg (1994) was therefore adopted in this study. It offers four different stages of evaluation and considers both formative and summative assessment, providing a variety of information or valid evidence at each stage. The evaluation was undertaken at each stage of Alexander and Hedberg's model, thus allowing the triangulation of different sources of evidence through case studies. The methods used include questionnaires in the student surveys, students' assessment grades and results, online discussions in the student forum, e-learning student tracking statistics, and peer briefings. This is essential, as Alexander and Hedberg noted,
[e]xamining the total context of information gathered during product design, development, implementation and reassessment should result not only in better professional practice, but demonstrably more effective learning outcomes. (Alexander & Hedberg, 1994, p. 243)
Permission to undertake the study, which targeted three subjects (STAT131, GHMD983/SHS940 and MATH131/132), was granted by the UOW Ethics Committee. The inconsistency of the failure rate (between 10% and 25%) in STAT131 and GHMD983 from 2003 to 2008 led to the initial investigations of this study, which aimed to support student learning through the provision of learning resources, in particular video resources. The video resources were created using tablet PC technology and applied in first year and entry-level mathematics and statistics subjects at the University of Wollongong. In the next chapter (case study 1), this innovation is evaluated through the four stages of the evaluation model (Alexander & Hedberg, 1994) so as to examine its impact on student learning and understanding of statistics.
CHAPTER 5 CASE STUDY 1
5.0 Introduction
In this case study a preliminary investigation was undertaken to examine the student learning experience of a postgraduate level introductory statistics subject at the University of Wollongong in New South Wales, Australia. As discussed in the previous chapters (Chapters 2 and 4) of this thesis, there was a need to provide an appropriate learning support system in the statistics subject to improve student understanding and confidence in learning the subject. The subject had well developed resources and an assessment system in place that could readily identify students at risk. To support student learning of statistics, the researcher decided to provide several learning resources, more specifically video resources accessed through the e-learning system. The video resources were created by the lecturer of the subject, while the researcher, as a tutor, investigated how to embed the resources in the e-learning system more effectively. From the decision to develop video resources for students, several questions emerged:
what sort of video resources should be made available to students?
what topics of statistics should the video resources address?
what technical issues need to be resolved to make the videos accessible?
how should the video resources be made available or placed in the e-learning system for students? and
what impact do the video resources have on student learning?
Thus the researcher sought answers to these questions by examining the outcomes of this case study; that is, the preliminary investigation or design phase.
5.1 Design phase: Why video resources?
In the literature, research has shown that the use of video resources as technological tools in teaching and learning has a positive impact on student learning of statistics (Lesser & Pearl, 2008; Moore, 1993; Petocz, 1998) and mathematics (Aminifar, 2007; Wood & Petocz, 1999).
There are also strengths and weaknesses of using videos in teaching statistics, as discussed by Moore (1993) and Petocz (1998). Wood and Petocz pointed out the benefits of video as a medium of teaching and learning,
… video is particularly good at bringing reality into the classroom and at showing the visual details of a process using graphics. In fact, in some ways, video is better than reality, as it can telescope time and place into a manageable segment avoiding the necessity for long field trips. Video is also good at providing a setting for development of higher-order skills, and works particularly well in a "case study" format. Video communicates with students on an affective plane to change their feelings about a subject and present role models. This is perhaps its most important contribution, given the very important role of feelings and attitude in [statistics] learning… (Wood & Petocz, 1999, p. 223)
On the other hand, Moore accentuated the weaknesses of video alongside its strengths,
Video shows rather than tells. It is a very poor medium for exposition… Video exposition must choose between bad television (a talking head) and skewing the content in a search for visually appealing topics… More seriously, video leaves its viewers passive. The talking head cannot interact with learners. Even excellent video presentations have limited cognitive impact – students have acquired a certain visual sophistication from thousands of hours of television; they have learned to remain disengaged from a world they assume to be the surreal creation of a clever producer. (Moore, 1997, p. 130-131)
Nevertheless, the key idea of using this technological tool in teaching is essentially to serve "as one tool for teaching, complementing but not replacing the other tools" (Moore, 1993, p. 174). In other words, the use of videos in teaching is not to replace the role of teachers in classrooms. Rather, the videos provide an additional variety of tools for students, alongside other resources such as textbooks, lecture notes, tutors, peers, online lectures, and online discussion forums, to learn the subject. In the last decade, the use of tablet PCs, which allow easy production of video resources, has gained momentum in the academic teaching arena, particularly in Australia. This new technology offers cost-effective and paperless teaching tools (Thompson & Dekkers, 2009). In addition, it provides for easier video resource production compared to traditional video recording or camera-based videos. For instance, Wood and Petocz described their experience of making traditional video-based learning packages in their study,
Designing a video teaching package is very different from the usual teaching experience of university mathematicians. We have been forced to look at our materials in a totally different light. We tend to think linearly and logically whereas the film crew think laterally and chaotically (well, that is how it looks to us). Making a video requires a team of workers, producers, directors, set designers, graphic designers, make-up artist and actors, not to mention the background accountants and bureaucrats. Many of these people are not mathematicians – some even hate mathematics. Certainly it would be ideal to have all participants mathematically literate but this is rarely possible. (Wood & Petocz, 1999, p. 224)
Contrasting with this, the tablet PC needs only one person (taking on the roles of all members of such a team) to create video resources. The benefits of using this technology, particularly in teaching and learning practices, were evident and documented in recent conferences and workshops such as the Australasian Tablets in Education Conference (ATiEC) held in 2009 and 2010 at Monash University, Australia, the Workshop on the Impact of Pen-Based Technology on Education (WIPTE) held in 2010 at Virginia Tech, Blacksburg, VA, United States of America, and the ALTC Symposium held in 2009 and 2010 at the University of Wollongong, Australia (e.g. Dekkers & Porter, 2009; Porter & Baharun, 2010; Porter, Baharun & Algarni, 2009; Subhlok, Johnson, Subramaniam, Vilalta & Yun, 2007; Thompson & Dekkers, 2009).
One of the distinguishing features that tablets at CQUniversity are being used for is tablet PC generated video presentations. This technology enables students to see and hear engineering problems 'worked through' just as they would in a face-to-face lecture or tutorial. There are two key ingredients in the success of this approach. The students are able to see the problem unfold, step by step, enhancing their understanding of how the problem may be solved. This strategy also serves to 'humanise' the teaching process, particularly beneficial for off-campus students. (Thompson & Dekkers, 2009, p. 774)
5.1.1 Aim of the study
In the first week of August/Spring 2008, many students of the postgraduate subject GHMD983 wrote comments in the student forum about the study of statistics and the use of the learning resources which were provided to them through the e-learning system. With permission granted by both parties, the university's ethics committee and the students, the analysis of comments observed in the student forum and responses obtained from an online survey (through the e-learning system) were used for this preliminary study. The comments were used to explore the impact of learning resources, in particular video resources, on student learning outcomes such as their understanding of topic areas as well as anxieties associated with learning a statistics subject. Following this initial investigation, this study aimed to identify ways of improving and best developing the video resources so that student learning and comfort in statistics could be improved.
5.1.2 Existing learning resources
In week 3 of the August/Spring 2008 session, several video resources were provided to GHMD983 students and these were accessed through the e-learning system. These resources were in addition to non-video resources provided in the previous sessions. Key features of the existing learning resources and their use were as follows:
1. The lectures, laboratory tasks, worked solutions, textbook, primary and additional readings, quiz materials, e-learning and fellow students were considered as limited and sometimes restricted learning resources. Students had access to these resources as they assembled the skills necessary to become independent statistical thinkers and learners.
2. Through accessing multiple resources students could develop the skills of verifying, clarifying and refining statistical concepts, formulae, theory and language.
3. Students themselves were considered to be a primary resource, when as motivated students they learned to question, challenge, explore, find resources, and help fellow students in their approaches to tasks and problems.
4. JMP is one of many possible statistical packages through which students learned to summarise and analyse data and to verify their understanding of formulae. The processes learned to analyse data with this package were similar to, but different from, those of other packages. The syntax and language of all packages vary but functionality tends to be the same for simple analyses.
5. Each student had a role explaining to, working with, demonstrating to and challenging other students. Students used the e-learning or student forum to clarify and share ideas. During assessment times, students were restricted in what they could provide for other students.
6. As students accessed multiple resources, they developed appropriate acknowledgement practices.
7. Lectures and lecture notes made available to students were primarily to provide an overview of statistical thinking, concepts, language and theory with a first look at application. Students who did not attend lectures, particularly distance students, were encouraged to work through the lecture slides or Edu-stream recordings (audio-recorded lectures). Lectures were minimal in this subject (one and a half hours weekly), with the major emphasis placed on students completing laboratory tasks along with the reading of lecture notes and text, either in class or at a distance.
8. The textbook and primary readings, supported by additional readings, were used as references supporting the lectures and laboratory tasks. They were used for precise definition of terms and formulae. Reading references with slightly different perspectives was considered extremely informative.
9. The laboratory classes and tasks provided students with the opportunity to work collaboratively with peers, using data to answer questions, solve problems, develop theoretical ideas, and clarify formulae and techniques on specific topics. The laboratory tasks and worked solutions provided models for student assessment.
10. Students were introduced to learning strategies such as framing and concept mapping (refer to section 5.1.2.4).
11. Tutors in laboratory classes had a role in listening to students as they completed their activities, timing and coordination, and reflecting on and questioning student solutions to tasks so as to clarify misperceptions, theory or procedures.
(Source from GHMD983 subject outline August/Spring 2008)
5.1.2.1 Assessment
Assessment of learning included five laboratory tests or assignments (each worth 10%) and a final examination (worth 50%). Students attempted all pieces of assessment and the minimum acceptable mark for each laboratory test was 70%. Students who did not perform well in any test (obtaining less than 70%) repeated the test in the subsequent week using different data sets with similar but different questions. For a retest, the maximum mark awarded was 70%, and no marks were awarded to students who did not obtain 70% in the retest until they demonstrated competency. These students were provided with ongoing learning support, in the form of commentary on their work and suggestions for revision, until such time as they demonstrated competency on the retest material. The continuous assessment in this subject was structured so as to challenge and extend the students' laboratory learning on an unsupervised task using a similar, real-world example. Feedback on assessment was prompt, being released in the e-learning system in the week following the test, allowing students some time to prepare for the repeat tests if necessary. Feedback on the retest was prepared for each student, with overall solutions provided when all students had completed the test satisfactorily. While these were termed laboratory tests, they were essentially take-home tests with a practical orientation involving the analysis of data and understanding of theoretical concepts. In GHMD983, students completed the 'tests' in their own time.
The questions on the final examination differed from the laboratory tests/assignments in that there was no production of JMP output and thus more emphasis on interpretation of output. This examination format allowed students to demonstrate that they could independently adapt their knowledge and skills to similar but different contexts. Unlike the tests, no feedback was given on the final examination other than the overall subject grade.
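To make the retest rule concrete, the sketch below implements one reading of the marking policy described above (a passing first attempt is kept, a successful retest is capped at 70%, and no marks are recorded until competency is demonstrated). The function name and example marks are hypothetical; this is illustrative only, not a description of how marks were actually recorded in the subject.

def laboratory_test_mark(first_attempt, retest=None, pass_mark=70, retest_cap=70):
    """One reading of the GHMD983 retest rule: keep a passing first attempt;
    otherwise cap a passing retest at 70%, and record 0 until competency
    (a retest mark of at least 70%) is demonstrated."""
    if first_attempt >= pass_mark:
        return first_attempt
    if retest is not None and retest >= pass_mark:
        return min(retest, retest_cap)
    return 0  # no marks until competency is demonstrated

# Hypothetical examples: a pass first time, a capped retest, and an unresolved retest.
print(laboratory_test_mark(85))             # 85
print(laboratory_test_mark(55, retest=90))  # 70
print(laboratory_test_mark(55, retest=60))  # 0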
5.1.2.2 Worked solutions
In this subject, students had access to worked solutions released on the e-learning system after the designated time for attempting and completing the laboratory tasks, or after the marking of assessments. The provision of worked solutions was perceived as useful for student learning and understanding of statistics; nevertheless, the lecturer had cautioned the students that
Simply reading solutions is insufficient to develop your statistical competency. Students need to work the exercises in order to identify and understand their own statistical thinking before they confirm or elaborate that thinking. (Source from GHMD983 subject outline August/Spring 2008)
5.1.2.3 Video resources
To assist students in learning and understanding this subject, the lecturer created, throughout the session, a set of video resources on major topic areas which were made available on the e-learning site. In later years and through an ALTC project (LE8-783) conducted at the University of Wollongong, several genres of video resources have been created and produced using the tablet PC. These include theory snippets for revising lectures, motivational videos in related disciplines, worked examples, talking head videos, and computer/software demonstrations for JMP/SPSS users. Many of these videos were made available for this study, predominantly on data analyses using JMP software and on some topics perceived by students as difficult (see Table 5.1). In terms of the use of video resources as learning support, different designs of videos may have a different impact on student learning; not all videos are comparable in impact. For example, Algarni (2009) theorised that videos for hearing-impaired students that include four texts (audio, printed, written and caption) might result in a higher cognitive load for students and hence not be as effective for learning as videos with two texts (audio and caption only) (see Figure 5.1).
Figure 5.1 Two designs of video resources ('two texts' versus 'four texts' video clips) and their impact on students' cognitive load
The students accessed the video resources through a folder in the e-learning site labelled "JMP video supports", with some notes provided by the lecturer as below:
I am currently experimenting with the creation of some video learning support materials. This takes about twice as long to do as for you to complete the labs as I am in the early part of the learning curve - so please do give feedback. If they are useful let me know. If they are not useful let me know why not. If other resources need to be created let me know. At the moment I am talking to the videos rather than scripting (that will take a lot longer). When I refer to data types etc. it will be an abbreviated statement in accord with the time and what rapidly comes to mind. As with terms met in lectures and labs - get the fuller definition from a text. To view the videos you may need to use adobe flash player which can be downloaded for free. I am adding Zip files for you to download. File sizes are approximately 8MB. Running from the desktop should remove any problems with slow internet connections. (Anne Porter, GHMD983 Subject Coordinator, August/Spring 2008)

Table 5.1 Provision of video resources on selected topic areas in GHMD983 subject

Exploratory Data Analysis (EDA)
- Introduction to Statistics, Data Presentation | Video resources provided: No
- EDA: one, two or more batches of data | Video resources provided: Yes | Topics covered in videos(a): Unit 1 JMP: Setting up a data file; Unit 2 JMP: Basic Univariate Analyses; Unit 3 JMP: Saving in Word; Unit 4 JMP: Checking data; Writing Unit 1: Meaningful Paragraph (one quantitative variable)
- Transforming Data, Bivariate Relationships | Video resources provided: Yes | Unit 5 JMP: Scatterplots, Correlation, Fit Y by X; Unit 6 JMP: Transformations; Unit 7 JMP: Scatterplot and correlation; Writing Unit 2: Meaningful paragraph (scatterplot & correlation)
- Regression | Video resources provided: Yes

Probability, Modeling Categorical Data
- Probability & Goodness of Fit | Video resources provided: Yes | Probability from Tables; Conditional Probability; Independence; Chi-square; Sensitivity and Specificity
- Discrete Random Variables - Binomial | Video resources provided: Yes | Binomial and DRV: Expected value and Variance; Binomial Calibration; Binomial Assumptions and Probabilities

Estimation, Hypothesis Testing
- Sampling Distributions, Simulation | Video resources provided: Yes | Continuous Random Variables; Normal Probabilities N(0,1); Normal Probabilities N(μ, σ²); Finding Z values
- Point and Interval Estimates | Video resources provided: Yes | Confidence Intervals: Intuitive notions; Calculating Confidence Interval of mean; Confidence Intervals for single mean using JMP
- One sample hypothesis tests | Video resources provided: Yes | Hypothesis Tests: Five steps
- Two sample hypothesis tests | Video resources provided: No
- Sampling, Demography | Video resources provided: No
- Mathematics Review | Video resources provided: Yes | Factorial Notation; Rounding Numbers

a Video clips are available from Content Without Borders, Australian Collection at http://oer.equella.com/access/home.do (Author: Anne Porter)
5.1.2.4 Learning strategies
Students were also introduced to a variety of learning strategies. These were to assist them to understand the concepts and application of statistics, based on work done by Porter (2001). An introduction to the strategies was included in the laboratory manual and could be accessed electronically via the e-learning site. Porter posited that the use of visual or spatial strategies is particularly effective for the learning of statistics, helping students gain perspective on the "big picture",
… [by] displaying and organizing large amounts of information, revealing hierarchical, sequential and spatial relationships, including many aspects of knowledge such as names, procedures and constraints…From McFarland and Parker‟s perspective the visual and spatial strategies are useful not just representing knowledge, but for unpacking expertise. Experts‟ writing, lectures and discussions provide a source of expertise. The structure in the written documents can be determined by examining content listings, transition statements, introductions, summaries, indexes and highlights of main ideas. (Porter, 2001, p. 36) Consequently, six learning strategies that had been used to map a statistical expert‟s thinking about statistics were recommended to students in this subject (Porter, 2001). These are listed below:
Frames Type 1 use grids, arrays, and matrices as a framework for representing knowledge and are useful in providing a coherent structure within which detail may be organised; for instance, they contain pointers to other frames, sets of rules, graphics, and sets of procedures or questions to ask. (*)
Frames Type 2 are similar in structure but are distinguished from Frames Type 1 in that the slots constructed for Type 2 frames are completed through some "law-like principles" used with the intent of revealing the similarities and differences in approaches; for instance, calculating features such as the mean or variance for each probability distribution (i.e. discrete or continuous random variables). (*)
Concept maps have been used by various researchers to gain insight into the composition, structure and usage of experts' expertise (based on Novak and Gowin's work), and hence they may represent a chain of concepts, a network of events, or a hierarchical ordering of events.
Gowin's Vee has two arms that represent "the doing" and "the thinking" components (based on Novak and Gowin's work). The "doing" arm involves the selection of events, the recording of observations about the events, and the transformation or organisation of these records so that answers to the focal question may be generated (events are placed at the bottom of Gowin's Vee). The "thinking" arm of Gowin's Vee requires experts or students to elucidate what they were observing, the nature of conceptual systems (or principles) indicating how events appear to behave, how these conceptual systems and principles fit within appropriate theories, and the philosophical perspectives underpinning the theories and principles.
Decision trees have nodes or branches, with a question to answer at each node that determines the next branch to be taken. These trees are searched in an orderly manner on simple problems.
A meaningful paragraph contains students' ideas illustrating their understanding of new terms through writing "meaningfully" about them in a paragraph. This may include the terms' definitions and contexts, and demonstrate relationships between the terms. (*)
(Source from Porter (2001))
In GHMD983, students' laboratory tasks specifically called for the use of several of the learning strategies marked (*) above.
5.1.3 Method
The steps involved in the collection of evaluation data throughout the design phase of this study were as follows:
1. Ethics approval was obtained from the University of Wollongong Ethics Committee (ethics number: HE08/289, see Appendix 4.2) in order to invite students to participate in the preliminary investigation of the impact of video resources on student learning outcomes.
2. Students were approached initially and informed about the purpose of the study through a Participant Information Sheet delivered via email. On the information sheet, the students were informed that
The data collected in this research will be used for the following purposes: to identify the impact of learning resources in particular video resources on student learning outcomes such as student understanding of topics, performance, and anxiety; to identify issues, particularly technological issues associated with the use of videos and to determine how best to resolve them; to determine how to improve and best develop the resources so that student learning and comfort in learning can be improved; and specifically to identify how to improve the learning environment for GHMD983 students.
3. An information sheet was supplied to the students. They were asked to provide a return email or to complete an online permission slip giving their consent to participate in the study. The students were told that their participation in this study was voluntary and that they were free to refuse to participate or to withdraw from the study at any time. The students would not be penalised for not participating in the study and they were informed that the outcome of the study should be beneficial for future students.
4. In the first week of the session, prompts about the study of statistics and the use of video resources were put in the student forum (via the e-learning site) by the lecturer. The students were encouraged to respond to the prompts given in the forum. These prompts were: Please introduce yourself; What do you think statistics is about?; What do you look forward to in learning statistics?; and, What are you concerned about or what do you anticipate in learning statistics?
5. In week 3, the video resources were made available to all students on some topic areas in the subject (as listed in Table 5.1). At the end of the session (after week thirteen), the students completed the online survey through the e-learning system and returned the completed consent form (via email) indicating their agreement to participate in the study. The students answered questions on matters including the average time they spent on the subject weekly, the perceived impact of learning resources on their understanding and learning, the frequency of use of the video resources, issues regarding accessing and playing the video resources, their perceived competency on topics of statistics, changes in their perspective after completing the subject, recommendations for improvement of the subject, their anticipated grades, and demographic data (see Appendix 5.1).
5.1.4 Outcomes
The approach of using an online survey has been shown to increase disclosure (Locke & Gilbert, 1995; Rodarte-Luna & Sherry, 2008; Turner et al., 1998) and possibly produce a higher response rate than an in-class survey. The online rather than in-class approach was necessary for this study as there were distance students enrolled in the subject. Forty students took part in the online survey from a class of 86 (i.e. a cohort of 66 on-campus and 20 distance students), hence the response rate was 47 per cent. The respondents included 27 on-campus students (67.5%) and 13 distance students (32.5%). Proportionally, more distance students (13 out of 20, i.e. 65%), who are more reliant on e-learning resources, responded than on-campus students (27 out of 66, i.e. 41%). Of the 40 respondents, seventeen were males (42.5%) and twenty-one were females (52.5%), while the other two students did not provide any gender information. In terms of nationality, twenty-two were international students (55%) and seventeen were domestic (42.5%), while one student did not provide any information on his or her student status.
GHMD983 is a six credit point subject and it is expected that students will spend at least 12 hours of work per week on it (Porter, 2007). As shown in Table 5.2, fifteen students (37.5%) reported spending more than nine hours weekly on the subject (the most frequently cited average time spent), while ten students (25%) reported doing five hours or less of work weekly.
Table 5.2 Percentages of time spent by students per week on the subject GHMD983 August/Spring 2008

Time spent per week | n | %
Not answered | 1 | 2.5
0 to 5 hours | 10 | 25.0
6 to 8 hours | 14 | 35.0
More than 9 hours | 15 | 37.5
Total | 40 | 100
The students also provided their anticipated grades in the survey allowing some indication as to the performance of students who had responded. Possible grades ranged from the lowest, fail (F), to the highest, High Distinction (HD); of these students, 25% expected a Pass (n = 10), 30% expected a Credit (n = 12), 25% Distinction (n = 10), 17.5% High Distinction (n = 7), 2.5% non-response (n = 1) while none of them expected a Fail in the subject.
5.1.4.1 Responses to the prompts
Twelve students responded to the forum prompts and of these the majority (ten students, or 83.3%) expressed concerns about studying statistics at the beginning of the session. Interestingly, three of them (25%) were hoping to pass the subject rather than to deepen their understanding of statistics. Seven students (58.3%) were anxious as a result of having poor mathematical skills and previous unpleasant experiences of taking statistics subjects in other study programs. Examples of their comments were as follows:
Statistics scares the hell out of me – numbers are not my thing.
I find statistics rather daunting as I was never really good at maths subjects. I am (hopefully) looking forward to passing this course and never having to do Stat again.
I struggled big time in my undergrad…I am also looking forward to never having to do stats again after this…I have absolutely no intention of going anywhere near a job that needs stats!
It‟s heartening to read that so many of us are a bit scared of statistics – it‟s nice to know I‟m not alone…I‟m most concerned about the maths involved because this is my weakest subject and I genuinely cannot remember numbers. I am also daunted by statistics...Stat‟s isn‟t a strong point of mine. I have the same problem that Stats are not my strong point...this is my last subject so I can graduate in December and I want to pass. I did stats in my undergrad, as many of you have also mentioned (and it was torture)…My concern is about being able to pass this subject whilst doing it by distance. In relation to the use of video resources, all of the 10 students responding to the prompts indicated that the videos were useful in helping them to learn and understand the subject. Examples of comments made by the students were as follows:
I really enjoyed the videos. They make things clearer than they once were. The video itself is very helpful…Presumably the technical bugs will be worked out as we go on! I find video support very useful. It reminds me of the important points out of the lectures and labs. The videos are great. Very helpful. The recorded lecture was also very helpful. It is reassuring to hear what has gone on in the lecture. I found the videos for JMP extremely helpful…I know for myself it will save a great deal of stress if I have at least some direction of how to go about things. It provides a really good explanation of what is required in the meaningful paragraph. I just want to say how great these new videos are they are helping me so much. I have got everything working and I found the JMP videos very, very worthwhile. I am grateful for them. The lab was even quite fun. The responses to the prompts by the students revealed their concerns about studying statistics and how useful the video resources were in helping them to learn and understand the subject. It was also relevant in this study to identify what impact the other learning resources provided had on the student learning outcomes.
5.1.4.2 Impact of learning resources on student learning
The purpose of the various learning resources provided to the students was to assist them to improve their understanding of statistics. Students were asked to rate each of the learning resources according to a range of responses reflecting their perceptions as to the usefulness of resources to their learning. The rating scale was as follows:
a. Not applicable or rarely used
b. Little use
c. Moderately useful
d. Extremely useful
The students' ranking on the usefulness of resources was then used to guide the
researcher in targeting which resources had the most scope for improvement (refer to Table 5.3). There were no published benchmarks for the percentages of students reporting a resource as useful for their learning. In this study, a resource was regarded as a high quality resource of importance to learning if at least 85% of the students regarded it as moderately or extremely useful. In the survey, overall seven of the thirteen resources examined were perceived as useful by the students. For two of these, laboratory tasks and laboratory tests, all students (100%) indicated that they were useful. The lowest rankings for helpfulness were for the tutor and teamwork. The low ranking of these two resources was due to the nature of the subject, which enrolled both on-campus and distance students who were mostly working (either casually or full-time) while studying. Hence, the students, particularly distance students, may have had less commitment to working with peers in the subject. Since the subject was also offered in a distance learning mode, it was difficult to include teamwork in the assessments; consequently none of the assessments involved teams. One way to improve collaboration amongst distance students was to better use the student forum as a medium of communication for the subject. Certainly, the distance students did not have access to tutors, as reflected in the ratings in Table 5.3.
Table 5.3 Perceived impact of resources on student learning and understanding of GHMD983

Resource | Overall (n = 39): %(a), Rank(b) | On-campus (n = 27): %(a), Rank(b) | Distance (n = 12): %(a), Rank(b)
Laboratory tasks | 100, 1.5 | 100, 2.5 | 100, 2.5
Laboratory tests | 100, 1.5 | 100, 2.5 | 100, 2.5
Video resources | 97.5, 3.5 | 96.3, 5.5 | 100, 2.5
Worked solutions | 97.5, 3.5 | 96.3, 5.5 | 100, 2.5
Lectures | 95.0, 5.5 | 100, 2.5 | 83.4, 6
Online lecture notes | 95.0, 5.5 | 100, 2.5 | 83.3, 7
Work done in your own time | 92.5, 7 | 92.6, 8.5 | 91.6, 5
Edu-stream | 82.5, 8.5 | 88.8, 10 | 66.7, 9
Laboratory classes | 82.5, 8.5 | 96.2, 7 | 50.0, 11
Student forum | 77.5, 10.5 | 81.4, 12 | 66.7, 9
Laboratory retests(d) | 77.5, 10.5 | 81.5, 11 | 66.6, 9
Tutor | 65.0, 12 | 92.6, 8.5 | (c)
Teamwork | 60.0, 13 | 70.3, 13 | 33.4, 12

a Percentages of students responding as moderately or extremely useful
b Ranked by percentage reporting resources useful
c No tutor provided to distance students
d These were taken by a minority of students
In relation to the impact of the video resources, overall 98% of the responding students (n = 38) found that the resources were useful in helping them to learn the subject. Specifically, all distance students regarded the videos as helpful, compared to 96% of the on-campus students. In this cohort, the on-campus students comprised both international and some domestic (Australian) students. The distance students were predominantly Australian students with difficulties in accessing the campus. Furthermore, the students were asked to provide their comments on "How and what ways were the videos useful (or not)", and 80% of them (n = 32) indicated that the videos were useful in a variety of ways. On the other hand, 5% of students (n = 2) were against the use of videos for learning, while 15% (n = 6) did not provide any comments (see examples of comments in Table 5.4). The video resources were useful particularly in providing clarification on major topic areas, calculations and worked examples, lecture and laboratory reviews, data analysis using statistical software (JMP), applications of statistical theories and concepts, gaining confidence to complete tasks such as assignments and laboratory tests, and as sample examples for the final examination. They were positive tools for the teaching and learning of statistics.
Table 5.4 Students' comments on "How and what ways were the videos useful (or not)?"
The videos were useful:
- Easier to see how calculations were done.
- Helpful to understand and apply the theory.
- Made you feel as you are being taught one to one.
- It gave better understanding about the steps to follow while performing a test or calculating.
- Helpful to understand the most difficult points in each topic.
- It is useful for some knowledge which is hard to understand in the book.
- It was well demonstrated and the sound was good and was to the topic point.
- Able to review specific topics at one's learning pace.
- The videos were of assistance as I was a distance student and this is the only opportunity for classroom instruction.
The videos were not useful:
- Reading statistical studies were not useful in the video actually.
- I don't like videos teaching, I have to spend time in downloading the videos, and I prefer traditional paper study.
5.1.4.3 Accessing and playing the videos
The videos were produced with Camtasia Studio. The output drew on four files: HTML (.htm), Shockwave (.swf), Extensible Markup Language (.xml), and JavaScript (.js). Students connected to an HTML (.htm) file which pointed to the other files to activate and play the videos via a browser (e.g. Internet Explorer, Firefox, Google Chrome). The most frequently cited average time spent by students on using the video resources was five hours or less weekly (90% of the students). To ascertain the problems in accessing and playing the videos, the students were asked, "What type of computer are you using?" From the responses, 97.4% of the students (n = 38) were able to play the videos. Of these students, the majority (87.2%) used a PC rather than a Mac (2.6%) or both a PC and a Mac (7.7%). The majority of the students (85%, n = 34) did not experience difficulty in accessing the videos using computers provided in the laboratory or from their home or workplace. However, several problems were identified in downloading the videos, and primary amongst these was internet connection speed. Another problem, communicated to the lecturer verbally, was that students attempting to access videos from their worksites often had them blocked by a firewall. Examples of students' comments reporting the problems were as follows:
I had difficulty downloading the zip files and using them offline. I had to access all videos online. I have a broadband connection so the video quality was excellent but would probably not be as good with a dial-up connection.
They were too large for me to open on my home computer, so it took a long time to download. I opened them from work as my internet connection is a lot quicker even though I have quite fast broadband at home.
The videos were not downloaded into a PC to view it again without internet connection. It would be better if we were provided with videos which could have been downloaded and viewed again and again without internet connection.
It depends on your connection speed…the slower your speed is the slower you will be able to play the videos. In saying this I believe students with dial up connections would not be able to watch videos at all.
No problem accessing the videos, just playing the downloaded video is hard although I have downloaded the prescribed quick time player.
At first, I have to download the adobe flash player to play the videos. After downloading the software, I can watch the videos.
The video file is not recognized by the PC I used.
I just could not open it on my computer, as it was on zipped files and that was really difficult to use.
There were some recommendations given by the students to resolve the problems in accessing the videos provided in the e-learning site, such as:
download the videos using computers in laboratory rather than from home or work place or somewhere else;
download the Adobe Flash Player software to play the videos;
access the videos through the e-learning system instead of downloading the zipped files;
use the CD-ROM that contains all videos in one place; and
consult the university Information Technology (IT) services staff for help (this could be the last resort!).
5.1.4.4 Anxieties while taking the subject
When the students were asked to comment on the ways their anxieties for this subject had either increased or decreased throughout the session, 53% out of the 34 students who responded indicated that their anxieties had decreased towards the end of the session in a variety of ways. Some of their responses were as follows:
Laboratory tests and worked solutions
Anxieties decreased due to increased knowledge, practice, ability and confidence. The lab tests and solutions were very helpful and the fortnightly tests were extremely helpful for a distance student to keep me on track. The videos were also fantastic.
As I went through the semester anxiety always decreased, as I completed my lab tests. It decreased after I passed each lab.
Decreased after I revised the lectures/labs for the final exam - it all made more sense looking at it as a whole. The last few days before exams were the time when we studied stats the most.
The lab test and the video support were much more useful to learn the subject. The teacher was also interesting and cooperative to help us solve our anxiety.
Regular quizzes and lab work eased my anxiety as the semester progressed.
Teaching methods and materials
The methods and materials used during this course was really wonderful, I am more comfortable with statistics now.
My anxiety decreased because I utilized all the useful lecture notes, JMP notes, lab work and exams output and especially the videos. They really helped me to understand the essence of statistics in health research and be prepared for the final exam.
I have never been really great in mathematics or in statistics so this was a bit of a challenge. I didn't think I would learn anything, but the teacher has been very good at teaching the materials and the online videos helped. I'm not as unconfident as I was.
Anxieties decreased by study hard and good teacher.
I had no idea about statistics at all before I started this subject. I was extremely anxious at the start. However, video help and attending lectures assisted me greatly in completing this subject. The way I took on this subject was....attending lectures.
Statistical skills
In explaining statistical findings and formulating and testing hypotheses, interpreting meaningful and so on, my anxieties have decreased gradually during the semester.
I am comfortable with maths, but now have a greater understanding of statistical methods in analysing data.
My anxieties decreased as I started understanding more and more about the subject. At first it was hard to study statistics because it was quite difficult and during the semester I became more familiar with the terms and it was easy for me then.
Others
Calculations were the things I feared before starting this course and I am pretty comfortable now after completion this course. It was a good experience overall and I am happy that I worked hard for it and am satisfied.
My anxieties for this course decreased during this semester.
It decreased a little since I have never used a computer program for learning before let alone use a computer on a daily basis other than for email and Google searches.
5.1.4.5 Perceived competency in topics
At the end of the session, the students were asked to indicate how confident they were in relation to the topics that were covered in the subject. For each statistics topic taught and examined, over 75% of the students considered themselves comfortable with the topic, and surprisingly over 90% of the students felt comfortable with one of the most difficult topics, confidence intervals (refer to Table 5.5). Almost all topics had video supports provided, except the topics of hypothesis testing of differences in means and proportions, and demography. This provision of video support may partly explain why more than 75% of the students perceived themselves to be competent in all topics canvassed in the survey. The two topics excluded from the video supports were the last topics taught in the subject, by which time the lecturer had too little time to develop them.
Table 5.5 Proportions of students responding as moderately confident or confident in topic learning (GHMD983)

Topic (n = 39) | Percentage | Provision of videos? | Ranking
Confidence intervals | 92.3 | Yes | 1
Using JMP | 84.6 | Yes | 2
Scatterplots and correlations | 82.1 | Yes | 3.5
Probabilities | 82.1 | Yes | 3.5
Writing meaningful paragraph | 79.5 | Yes | 6
Regression | 79.5 | Yes | 6
Pearson's Chi-square | 79.5 | Yes | 6
Normal model(a) | 77.0 | Yes | 9
Goodness of Fit test | 77.0 | Yes | 9
Hypothesis tests: single mean and proportion | 77.0 | Yes(b) | 9
Binomial model | 76.9 | Yes | 11.5
Hypothesis tests: differences in means and proportions | 76.9 | No | 11.5

a 2.6% did not answer the category (n = 1)
b Only one video clip provided, on "5 steps of hypothesis testing"
The UOW has ongoing concerns regarding equity and diversity. It has policies to ensure equity of access in terms of race, colour, gender, political or religious conviction, language and national or ethnic origin, and in this subject diverse groups are included (refer to http://www.uow.edu.au/about/policy/UOW058716.html). Two variables of particular interest are the gender and nationality of the student (domestic or international). Due to small sample sizes and the violation of normality assumptions for each subgroup, a non-parametric test (the Mann-Whitney U test) was used for the analysis. The small sample sizes also resulted in low power, thus it was not possible to examine effectively, for example, whether there were interaction effects between the two factors. The results revealed significant differences in the students' perceptions of several topics between genders (male versus female) (refer to Table 5.6) and between students' nationalities (international versus domestic) (refer to Table 5.7). It was found that male students had higher confidence than female students, while the international students demonstrated higher confidence than domestic students, on five topic areas, as shown in Table 5.6 and Table 5.7 respectively.
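For readers unfamiliar with the Mann-Whitney U test used for the comparisons reported in Tables 5.6 and 5.7, the sketch below shows how such a comparison can be run in Python with SciPy. The confidence ratings are invented for illustration on the 1-4 scale used in the survey; they are not the thesis data, and SciPy is not necessarily the software used to produce the results reported here.

# Illustrative Mann-Whitney U comparison of two independent groups of
# ordinal confidence ratings (1 = not at all ... 4 = confident).
from scipy.stats import mannwhitneyu

male_ratings   = [4, 3, 4, 4, 3, 4, 2, 4, 3, 4]   # hypothetical responses
female_ratings = [3, 2, 3, 4, 2, 3, 3, 2, 3, 2]   # hypothetical responses

u_stat, p_value = mannwhitneyu(male_ratings, female_ratings,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")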
Table 5.6 Differences on perceived comfort with topics between genders (GHMD983)

How confident are you now that you can solve problems on(a) | Male: Mean Rank, Sum Rank, n, %(b) | Female: Mean Rank, Sum Rank, n, %(b) | Mann-Whitney U test: U, p value(c)
Using JMP | 23.94, 407.0, 17, 88.2 | 15.90, 334.0, 21, 80.9 | U = 103, p = 0.014
Writing meaningful paragraph | 24.44, 415.5, 17, 94.1 | 15.50, 325.5, 21, 71.4 | U = 94.5, p = 0.007
Using and interpreting scatterplots and correlations | 25.44, 432.5, 17, 100 | 14.69, 308.5, 21, 66.7 | U = 77.5, p = 0.001
Fitting models to data (Goodness of Fit) | 23.39, 396.0, 17, 86.4 | 16.43, 345.0, 21, 64.7 | U = 114, p = 0.043
Normal models | 23.18, 396.0, 17, 94.2 | 15.45, 309.0, 20, 65.0 | U = 99.0, p = 0.020

a Scale: 1 = not at all, 2 = might have a little difficulty, 3 = moderately confident, 4 = confident
b Percentages of students responding as moderately confident or confident
c Significant difference at the 0.05 level
Table 5.7 Differences on perceived comfort with topics between students' nationalities (GHMD983)

How confident are you now that you can solve problems on(a) | International: Mean Rank, Sum Rank, n, %(b) | Domestic: Mean Rank, Sum Rank, n, %(b) | Mann-Whitney U test: U, p value(c)
Using and interpreting scatterplots and correlations | 23.48, 516.5, 22, 95.5 | 15.50, 263.5, 17, 64.7 | U = 110.5, p = 0.019
Producing and using regression outputs | 24.75, 544.5, 22, 100 | 13.85, 235.5, 17, 52.9 | U = 82.5, p = 0.001
Producing and using Pearson's Chi Square | 23.77, 523.0, 22, 95.5 | 15.12, 257.0, 17, 58.8 | U = 104, p = 0.012
Normal models | 22.43, 493.5, 22, 95.5 | 15.47, 247.5, 16, 56.3 | U = 111.5, p = 0.040
Hypothesis test of differences in means and proportions | 23.05, 507.0, 22, 90.9 | 16.06, 273.0, 17, 58.8 | U = 120, p = 0.042

a Scale: 1 = not at all, 2 = might have a little difficulty, 3 = moderately confident, 4 = confident
b Percentages of students responding as moderately confident or confident
c Significant difference at the 0.05 level
5.1.4.6 Changes in perspective at the end of the session
In regard to changes in students' perspective after completing the subject, the majority of the students were comfortable in most aspects related to the subject. While this comfort was not directly related to specific resources, it seemed likely that the provision of resources in the subject helped students to attain greater comfort in being able to use statistics in research work, solve statistics questions using a computer, undertake their professional work, read statistical studies, and explain statistical findings (see Table 5.8).

Table 5.8 Percentages of changes in students' perspective after completing the subject (GHMD983)

Category (n = 39) | A little comfortable | Much more comfortable | Total | Ranking
Using statistics in research work(a) | 48.7 | 30.8 | 79.5 | 1
Solving statistics questions using computer(a) | 38.5 | 38.5 | 77.0 | 3
Undertaking their professional work | 38.5 | 38.5 | 77.0 | 3
Reading statistical studies | 38.5 | 38.5 | 77.0 | 3
Explaining statistical findings | 51.3 | 25.6 | 76.9 | 5
Taking this subject in general | 41.0 | 33.3 | 74.3 | 6
Calculating probabilities(a) | 28.2 | 43.6 | 71.8 | 7.5
Working with problems that involves mathematics | 35.9 | 35.9 | 71.8 | 7.5
Formulating and testing hypotheses(a) | 36.0 | 33.3 | 69.3 | 9.5
Tackling tasks which are difficult for student | 46.2 | 23.1 | 69.3 | 9.5
Working with numbers | 35.0 | 32.5 | 67.5 | 11
Taking this subject that involves mathematics | 33.3 | 33.3 | 66.6 | 12
Taking quizzes, tests and exams in this subject(a) | 28.2 | 35.9 | 64.1 | 13.5
Taking this subject that involves statistics | 23.1 | 41.0 | 64.1 | 13.5
Taking this subject that involves computing | 20.5 | 38.5 | 59.0 | 15

a 2.6% did not answer each category (n = 1)
This study also examined whether there were differences in students' perspectives at the end of the session between genders and between students' nationalities. A non-parametric test (Mann-Whitney U test) was again used to analyse the differences between the two factors, owing to the small sample sizes and the violation of normality assumptions for each factor. Comparing the four groups, international male (n = 13), international female (n = 8), domestic male (n = 4), and domestic female (n = 13), to examine possible interactions was not feasible due to the small numbers. The results of the non-parametric tests (refer to Table 5.9) revealed that male students reported more comfort than female students in taking the subject that involves mathematics, computing, and statistics, working with numbers, and calculating probabilities. Meanwhile, the international students perceived themselves as significantly more comfortable than domestic students in taking a mathematics- and computer-related subject, calculating probabilities, and working with numbers and mathematical problems (refer to Table 5.10).

Table 5.9 Differences on changes in students' perspectives between genders (GHMD983)

How much more anxious or comfortable are you now that you have nearly completed the subject(a) | Male: Mean Rank, Sum Rank, n, %(b) | Female: Mean Rank, Sum Rank, n, %(b) | Mann-Whitney U test: U, p value(c)
Taking the subject that involves mathematics | 25.21, 428.5, 17, 88.2 | 14.88, 312.5, 21, 52.4 | U = 81.5, p = 0.003
Taking the subject that involves computing | 24.56, 417.5, 17, 82.3 | 15.40, 323.5, 21, 38.1 | U = 92.5, p = 0.008
Taking the subject that involves statistics | 23.94, 407.0, 17, 76.5 | 15.90, 334.0, 21, 52.4 | U = 103, p = 0.020
Working with numbers | 25.35, 431.0, 17, 88.2 | 14.76, 310.0, 21, 52.4 | U = 79.0, p = 0.002
Calculating probabilities | 23.15, 393.5, 17, 88.2 | 15.48, 309.5, 20, 60.0 | U = 99.5, p = 0.022

a Scale: 1 = much more anxious, 2 = a little anxious, 3 = not more anxious or comfortable, 4 = a little comfortable, 5 = much more comfortable
b Percentages of students responding as a little comfortable or much more comfortable
c Significant difference at the 0.05 level
151
Table 5.10 Differences on changes in students' perspective between nationalities (GHMD983)
How much more anxious or comfortable are you now that you have nearly completed the subject a | International: Mean Rank, Sum Rank, n, % b | Domestic: Mean Rank, Sum Rank, n, % b | Mann-Whitney U test: U, p value c
Taking the subject that involves mathematics | 25.50, 561.0, 22, 86.3 | 12.88, 219.0, 17, 41.2 | 66.0, < 0.001
Taking the subject that involves computing | 24.41, 537.0, 22, 77.2 | 14.29, 243.0, 17, 35.2 | 90.0, 0.004
Working with numbers | 24.16, 531.5, 22, 81.8 | 14.62, 248.5, 17, 53.0 | 95.5, 0.006
Calculating probabilities | 22.50, 495.0, 22, 81.8 | 15.38, 246.0, 16, 62.5 | 110, 0.038
Working with problems that involves mathematics | 25.07, 551.5, 22, 86.4 | 13.44, 228.5, 17, 53.0 | 75.5, 0.001
a Scale: 1 = much more anxious, 2 = a little anxious, 3 = not more anxious or comfortable, 4 = a little comfortable, 5 = much more comfortable
b Percentages of students responding as a little comfortable or much more comfortable
c Significant difference at the 0.05 level
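For readers unfamiliar with the procedure, the group comparisons reported in Tables 5.9 and 5.10 can be reproduced with any standard statistics package. The following is a minimal illustrative sketch only; the ratings below are invented 1 to 5 survey responses, not the study data.

# Illustrative only: hypothetical 1-5 comfort ratings for two small groups,
# compared in the same way as the rows of Tables 5.9 and 5.10.
from scipy.stats import mannwhitneyu

group_one = [5, 4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5]  # hypothetical ratings
group_two = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]        # hypothetical ratings

# Two-sided Mann-Whitney U test; no normality assumption is required,
# which suits small samples of ordinal survey responses.
u_stat, p_value = mannwhitneyu(group_one, group_two, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")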
5.1.4.7
Improvement of video resources
The students were asked to comment on how the video resources could be improved to help their learning and understanding of statistics. Of the 26 responses, several themes were identified, particularly the issues of downloading and accessing the videos (23%) and the need for more videos covering topics more extensively (27%). Several respondents also suggested that the duration of the videos could be longer (8%), that more videos should be made to support lectures (11%), that more videos should demonstrate how to use the JMP software (8%), and that the skills needed in learning statistics should be highlighted (8%); other comments made up the remainder (15%). Some of these comments were as follows:
Downloading and accessing the videos
They are excellent the way they are but we need to be able to download them in an easy way.
To make it more appropriate so that videos won't get freeze or stuck during the playing time and more information could be made as well.
It can be improved by making the video more appropriate in coordination of video and audio and also if possible to make a bigger screen and all video files are able to download.
Make sure that they are easily accessible, that they are always able to be accessed at the point that the student is working on the associated lab…
More topics and more extensively
Some videos were short. It would be better if it covers the issues more extensively.
By solving more problems of different kind and including more topics.
I can suggest that videos are great in helping us understand more...so if more videos are available the more better it is…
Have videos for each section of the course. They are fantastic!
Video resources can be included for all topics.
Support the lectures
The videos should be posted on the e-learning site before the lectures.
Hmm, well I found it very useful…and I think it can be made available for almost every lecture.
Using JMP
Include the videos in demonstrating the JMP procedure like how to produce a relevant output. Sometimes it's fast that I needed to repeat the video to understand what happened. But overall it's really great!
Improve skills in learning statistics
In my opinion, the best way we can do is focus on difficulty of understanding and learning statistics which depend on feedback of student step by step to explain and show the solutions especially emphasize the basic concept and terms of statistics.
Others
The videos were quite good however they need to be used as an extension of the teaching provided by the classroom interaction. Sometimes no matter what resources are available a student may still struggle (such as I did).
They should be available prior to commencement of topic. Often didn't have time to view video resources including discussion forum prior to assessment submission.
For distance students, the video resources take the place of the lab sessions and the availability of a tutor and therefore need to be approached in this manner.
5.1.4.8
Recommendations for improvement of subject
The students were asked to provide recommendations or suggestions for improvement of the subject at the end of the survey. Of the 24 responses, several themes were identified, particularly laboratory materials and support resources (54%), students' concern about the study of statistics (21%), the lecturer (13%), lectures' presentations (8%), and others (4%). Some of their comments were as follows:
153
Laboratory materials and support resources
Although it was good there should be more focus on the lab works and practical.
There should be more lab test, better if every week, and a pre exam test so that we can understand what exactly will be in the answer. We can extend the lab timing so that we can complete the lab test in front of the tutor, so that we can ask our queries.
Longer hours for laboratory work.
I found the number of typos in the labs and solutions extremely frustrating. I struggled with deciding if what I was getting was wrong or if there was a typo in the wording…
Decrease the pass mark for the lab tests because 70% puts a lot of pressure on students.
The videos could be more detailed.
Students' concern about the study of statistics
I believed that this subject is too difficult to be offered as a distance subject. As statistics is a compulsory subject for many degree courses and the content can be quite overwhelming I think that it is a disadvantage not to be in a classroom.
My struggle was having anxiety about working with numbers and every week having to do the labs, reading test and re-sit. This made the work load really intense not sure how to fix this but it was a major struggle for me.
This subject should not be available by distance. There is too large a discrepancy between the level of resources and knowledge available on campus through lectures, labs and access to tutors to that available for distance students.
Lecturer
It was great to have such an active forum and constant communication with the other students and the lecturer. All concerns were addressed in a very timely manner. Thanks.
Well I found the subject good well I guess it was helpful due to the lecturer as she made the subject easy and enjoyable to learn.
Lectures' presentations
The lecture hours can be extended and lab work can be minimised. For me it is important to understand the theoretical basis than just doing the JMP. JMP can be done of course with the help of the professor, but students like me can even do it at home just.
A slightly slower pace would have been ideal. I know we are not all on the same page when it comes to statistics, so going slower would help those, such as me, that were a bit behind than others.
Others
I suggest that group study may be useful to promote and improve understanding for the subject throughout discussing each other between students.
The comments sometimes draw attention to the lecturer, and it is noteworthy that the lecturer has received several awards for her contribution to teaching and learning. These include two faculty awards (Vice Chancellor's Award for Outstanding Contribution to Teaching & Learning (Informatics) 2003, and Dean of Informatics Award for Teaching Excellence 2003), a peer recognition award (Australian College of Educators 2007), and a national citation (Carrick citation for Outstanding Contribution to Student Learning 2007).
5.1.5 Summary
The preliminary study examined how the learning resources, particularly the video resources, could be used as learning supports to improve student learning and understanding of statistics. In responding to the concerns about studying statistics, the responses to the prompts observed in the student forum revealed that the students felt anxious about taking the subject mainly due to the mathematical components included in the subject. Lack of mathematical skills and unpleasant past experiences in learning statistics were two of the main reasons given for their anxieties. In this study, the resources appeared helpful and assisted students in reducing their anxiety, and according to their self-reports, this helped them to learn statistics more comfortably. Of the 34 students (the others did not respond), 53% indicated that their anxieties were reduced during the session in a variety of ways, but in particular through the provision of learning resources in the form of video clips. These clips had helped to improve their understanding and the statistical skills acquired through laboratory tasks, laboratory tests, the use of JMP software and the reflection on worked solutions. At the end of the session, for all topics taught in the subject, the percentages of the students who perceived themselves as confident ranged from 77% (n = 30) feeling comfortable with the topic on hypothesis tests for differences in means and proportions to 92% (n = 36) on confidence intervals. Eighty per cent (n = 31) were more comfortable using statistics in their research work. In relation to improving the video resources, several issues were highlighted, for instance regarding downloading and accessing the videos (23% of the 26 responses). It was also recommended by approximately half of the students (54% of the 24 responses) that the subject should focus more on the students' work in completing their laboratory tasks, provide more laboratory tests and worked solutions, have more hours of lectures and tutorials, and proofread the laboratory manual. However, 21% (n = 5) revealed their concerns about the study of statistics as well as their difficulties in learning, and suggested that the subject be offered only on campus rather than by distance.
As discussed in section 2.5 (Chapter 2), '[a]nxiety can literally cut off the working memory needed to learn and solve problems…' (Sparks, 2011, p. 1) and it appeared that statistics-anxious students sometimes also have low ability in mathematics. Studies have frequently shown gender differences in anxiety, particularly among students in statistics subjects (Baloglu, 2003; Hong & Karstensson, 2002; Rodarte-Luna & Sherry, 2008). In relation to mathematics anxiety, Sparks however noted that 'girls were more likely to draw pictures supporting a gender bias – "Boys are good at math; girls are good at reading" – and the stronger the bias, the worse the girls performed.' (p. 3). Consistent with this, in this study the male students reported more confidence and less anxiety than the females. In comparing the outcomes based on the students' background or nationality, the findings contradicted the studies conducted by Bell (1998) and Onwuegbuzie (1999). It was found that the international students perceived themselves as competent and more comfortable at the end of the session compared to the domestic students, a finding also supported in later data collection. The examination of interaction effects between the two variables was not possible in this study. One possible limitation of this study is the small sample, with only forty voluntary participants. Although the outcomes were sought from a particular group of students, the purpose of this study was not to make general statements about the level of understanding and anxiety in learning statistics for all postgraduate students in Health Informatics courses. As such, it cannot be assumed that the current findings can be generalised to other students in different courses of study. Therefore, in order to provide a clearer picture of the use of video resources and other learning resources in helping students to learn statistics more comfortably and effectively, the design phase of this study was extended to include another group, undergraduate students enrolled in an introductory statistics subject with similar content and processes. In this study, the students' assessment and actual grades were examined in addition to their perceived competency with topics as well as their anxieties. Pass rates for the postgraduate students were already high when the video resources were provided; however, at the undergraduate level there was considerable scope for improvement in grades.
156
5.2
Design phase: the use of videos in a different context
5.2.1 Aim for the study
The primary aim at this stage of the design phase was to extend the previous findings from postgraduate students to another group, undergraduate students; that is, to examine the impact of the learning resources provided to them through the e-learning system, and particularly the addition of videos, which together with assessments and good resources formed the basis of a learning support system. The impact of these resources on student learning was based on the analysis of students' preferences for different types of resources, their perceived competency with topics, and changes in their perspective after completing the subject. Data were collected from an online survey at the end of the session in March/Autumn 2009 with thirty-eight voluntary undergraduate students enrolled in STAT131. STAT131 is an introductory statistics subject designed for first year students, offered by the School of Mathematics and Applied Statistics at the University of Wollongong, New South Wales.
5.2.2 Existing learning resources There were many similarities between the resources provided to the postgraduate students and a few noted differences briefly discussed in section 4.9.1 (Chapter 4). This subject allocated three hours to face-to-face lectures and two hours to laboratory work per week. Laboratory class began in the second week of the session. The delivery mode of STAT131 was on-campus at Wollongong and remote to Loftus throughout the session in March/Autumn 2009. Key features of the existing learning resources and their use are as follows:
1. The lecturer and co-ordinator for the undergraduate subject (STAT131) was the same as for the postgraduate subject (GHMD983).
2. The e-learning sites provided the space for locating all lecture notes, laboratory manual and tasks, worked solutions, video resources, past examination papers, assessment, notices, and other useful resources and websites relevant to the subjects. Any messages or notices were posted on the e-learning sites mainly via the student forum. Students were encouraged to read these messages and post any questions regarding their academic work regularly through the forum. Students had access to these resources as they assembled the skills necessary to become independent statistical thinkers and learners.
3. Through accessing multiple resources students could develop the skills of verifying, clarifying and refining statistical concepts, formulae, theory and language, and appropriate acknowledgement practices.
4. Students themselves were considered to be a primary resource when, as motivated students, they learned to question, challenge, explore, find resources, and help fellow students in their approaches to tasks and problems.
5. Unlike GHMD983, the statistical software package used in STAT131 was SPSS, through which students learned to summarise and analyse data and to verify their understanding of formulae. The processes learned to analyse data with this package are similar to other packages such as JMP. The syntax and language or menu options of packages vary but functionality tends to be the same for many simple analyses.
6. Each student had a role explaining to, working with, demonstrating to and challenging other students. Students used the e-learning or student forum to clarify and share ideas.
7. In terms of the assessment systems, students in STAT131 were required to complete two 'in-class' laboratory tests, two take home tests, and one group assignment (students worked in pairs). During assessment times, students were restricted in what they could provide for other students. They could for example refer others to lecture slides or laboratory tasks.
8. The subject contents or syllabus of STAT131 were similar to GHMD983 (see Appendix 3.1) with the exception of three topics on demography included in GHMD983, and the Poisson and exponential distributions included in STAT131.
9. Students were introduced to learning strategies such as framing and concept mapping (refer to section 5.1.2.4).
10. Tutors in laboratory classes had a role in listening to students as they completed their activities, timing and coordinating activities, and reflecting on and questioning the students' solutions to tasks so as to clarify misperceptions, theories or procedures.
5.2.3 Method The steps involved in the collection of evaluation data throughout this stage of the design phase of evaluation for this study were as follows:
1. Ethics approval was obtained from the University of Wollongong Ethics Committee (ethics number: HE09/021) in order to invite students to participate in the investigation of the impact of video resources on student learning outcomes.
2. In STAT131 March/Autumn 2009, students were approached and initially informed about the purpose of the study through a Participant Information Sheet delivered via email.
3. An information sheet was supplied to the students. They were asked to provide a return email or to complete a permission slip (consent form) giving their consent to participate in the study. The students were told that their participation was voluntary and that they were free to refuse to participate and to withdraw from the study at any time. The students would not be penalised for not participating in the study and they were informed that the outcome of the study should be beneficial for future students.
4. In STAT131 March/Autumn 2009, all learning resources were put up on the e-learning homepage using by-type resource folders according to categories such as lectures, laboratory manual, laboratory solutions, data sets, student forum, video resources, and other materials.
5. Students in STAT131 March/Autumn 2009 completed the online surveys at the end of the session through the e-learning systems and returned the completed consent form (via email) indicating their agreement to participate in the study.
5.2.4 Outcomes
The design phase of evaluation involved a cohort of 89 undergraduate students who enrolled in STAT131 March/Autumn 2009. Forty-three per cent of the students (n = 38) participated in the online survey via the e-learning system. The students' demographic data revealed that about 45% of the sample were female and 71% were domestic students. Since the subject was offered at two campuses, approximately 84% of the students were enrolled at the Wollongong main campus and the remaining 16% at the Loftus remote campus. In relation to the average time spent by students per week on the subject, 22% of them (n = 8) reported spending more than nine hours, 39% (n = 15) between six and eight hours, while the remaining 39% (n = 15) spent no more than five hours per week throughout the session. The time needed for an average student to complete a six credit point subject was expected to be 12 hours of work per week (Porter, 2007). Of concern was that almost 80% of the students spent less than nine hours weekly in learning the subject. By comparison 63% of the postgraduates spent less than nine hours weekly. The students were also asked to anticipate their grades to provide some indication as to the performance of students responding to the survey (refer to Table 5.11). A test for differences in two proportions revealed that the percentage of students expecting to achieve at least a credit grade (71.1%) was significantly higher (Z = 1.895, p = 0.0287) than the percentage who actually attained a credit grade or above (53%) in the final subject outcomes. None of them expected to fail the subject. As can be seen in Table 5.11, in the actual outcomes in STAT131 nearly one quarter of students (23%) failed the subject. One of the difficulties with the student evaluations for this group was that those who responded tended to still be attending class, whereas those who had effectively dropped out stopped attending and did not complete the evaluations. Therefore, while it was possible the students were excessively confident in anticipating their grades, it is more likely that better students responded to the survey.

Table 5.11 Students' anticipated and actual grades in STAT131
Grade | Anticipated (n = 38) % | Actual (N = 89) %
Fail | – | 22.5
Pass and Pass conceded | 28.9 | 24.6
Credit | 42.1 | 27.0
Distinction | 21.1 | 22.5
High distinction | 7.9 | 3.4

5.2.4.1 Impact of learning resources on students learning
The surveys sought information about the use of learning resources, comprising classroom-based and technology-based resources, by asking the students how they completed the lecture component within the subject, whether through online or face-to-face lectures. Forty per cent of the undergraduates in STAT131 responded that they attended virtually all face-to-face lectures, 21% attended the lectures and worked through the online lectures, 21% reported that they worked through the laboratory tasks and online lectures as these were required for assessment, and the remaining students worked through the online lectures on a regular weekly to fortnightly basis. The students were encouraged to complete the laboratory tasks on their own without referring to the worked solutions and then to check the solutions when they were released in the following week via the e-learning site. When the students were surveyed about how they worked through the laboratory manual in the subjects, the results revealed that only 31.5% of the undergraduates essentially completed all the tasks and checked them against the worked solutions, 50% completed the tasks by downloading the worked solutions in order to complete the laboratory manual, 13.2% downloaded the worked solutions to most tasks, and 5.3% essentially completed all the tasks without checking them against the worked solutions.
The survey also sought information on students' perception of exam preparedness as a consequence of completing the laboratory classes, although not all students had completed them. A cross-tabulation analysis between how the students completed the laboratory tasks and their perception of exam preparedness revealed the following findings (refer to Table 5.12). While the numbers were not large, 42% of those downloading felt well prepared for the exam compared to 22% of those who had completed all laboratory tasks. Those downloading were perhaps overconfident. The students who had completed laboratory tasks saw the need for basic revision (57%) versus 17% of those who downloaded solutions.

Table 5.12 Students' perception of exam preparedness versus completing their laboratory tasks in STAT131
Exam preparedness | Completed all the laboratory tasks (n = 14) % | Downloaded the worked solutions (n = 24) %
Well prepared | 21.5 | 42
Basic revision | 57 | 17
Thoroughly review all lecture materials | 21.5 | 33
More time to study | – | 8
Total | 100 | 100
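A cross-tabulation of this kind is straightforward to produce. The following is a minimal sketch using hypothetical response data; the variable and column names are assumptions for illustration only, not those used in the study.

# Illustrative sketch of a cross-tabulation like Table 5.12, on hypothetical data.
import pandas as pd

survey = pd.DataFrame({
    "lab_approach": ["completed all", "downloaded solutions", "completed all",
                     "downloaded solutions", "downloaded solutions"],
    "exam_prep":    ["Well prepared", "Well prepared", "Basic revision",
                     "More time to study", "Basic revision"],
})

# Row-wise percentages: within each laboratory approach, the share of
# students reporting each level of exam preparedness.
table = pd.crosstab(survey["lab_approach"], survey["exam_prep"],
                    normalize="index") * 100
print(table.round(1))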
The students were also exposed to a certain amount of theory review through tutor-led tasks during laboratory classes. In STAT131, 55.3% of the students responded that the amount of theory review covered was well balanced with the time left for them to work on tasks, 36.8% responded that there was too little theory review to help them with the tasks, and conversely 7.9% responded that there was too much theory review and not enough emphasis on the tasks. In response to the perceived importance of learning resources for student learning, as for GHMD983, the top seven resources were valued by more than 85% of the students in STAT131 (Table 5.13). These resources were laboratory tasks and laboratory manual (97% of students), worked solutions (95% of students), laboratory tests (92% of students), laboratory classes (89% of students), laboratory retests (87% of students), and e-learning (87% of students). Other primary resources were consistently viewed as important to learning; these include lectures (84% of students), online lecture notes (79% of students), and the tutor (79% of students). In contrast, Edu-stream and the student forum were regarded as useful for learning by only 29% and 26% of the students respectively.
161
Table 5.13 Perceived impact of resources on students learning in STAT131
Resource | March/Autumn 2009 (n = 38) % a | Rank b
Laboratory tasks | 97.4 | 1.5
Laboratory manual | 97.4 | 1.5
Worked solutions | 94.7 | 3
Laboratory tests | 92.1 | 4
Laboratory classes | 89.4 | 5
Laboratory retests | 86.8 | 6.5
e-learning | 86.8 | 6.5
Lectures | 84.2 | 8
Online lecture notes | 78.9 | 9.5
Tutor | 78.9 | 9.5
Marking guidelines | 71.1 | 11
Teamwork | 63.1 | 12
Learning strategies | 57.9 | 13
Objectives in laboratory manual | 47.4 | 14
Video resources | 39.4 | 15
Textbook | 36.9 | 16
Edu-stream | 29.0 | 17
Student forum | 26.3 | 18
a Percentages of students responding as moderately or extremely useful
b Ranked by percentage reporting resources useful
In contrast to the findings of the postgraduate students, the video resources were valued by only 39% of the undergraduates in STAT131 compared to 98% of the postgraduates in GHMD983. This warranted a closer examination of factors resulting in the lack of use of video resources among the undergraduates and is discussed in the next section.
5.2.4.2
Video resources for student learning
In STAT131, the provision of video resources to students on the e-learning site was predominantly aimed at supporting at-risk or struggling students so as to improve their learning outcomes. In order to examine whether the students benefited from using the video supports, they were asked in the survey about the average time spent watching the videos weekly and their experience of using the resources. The results revealed that approximately 92% of the students either did not use the videos or spent up to two hours watching them weekly, while the remaining 8% spent between three and five hours per week. A little over half of the students (56%) reported that they did not use the videos at all, 10% found the videos difficult to use, 10% reported that the videos were time consuming, whereas 24% responded that the videos had helped them solve problems well in the subject. This result was in accord with the earlier findings (refer to Table 5.13) that only a small proportion of the students (approximately 40%) perceived the resources as important for their learning. To better understand the students' experience with the video resources, the students were asked to provide comments on their use of the resources. Of the 27 responses, 48% (n = 13) responded that they liked the use of videos in the subject and requested more videos, 22% (n = 6) indicated that they did not use the videos, 15% (n = 4) provided a reason for their non-use – they were not aware that the videos were available on the e-learning site, 11% (n = 3) responded negatively about the use of videos for learning, and 4% (n = 1) were uncertain. Examples of their comments were as follows:
Preferred the use of videos and wanted more videos
The ones which I watched were extremely beneficial and provided good insight. I would only suggest that there would be more videos at the beginning of the year to ignite the learning process.
Video resources are good it would be more useful if there was more.
They were very well done and I believe they don't need any improvements.
The video resources are good but have to be shortened. Please make more of them.
Did not use the videos
I did not use them so N/A. I might use them during stuvac.
I can't comment on this as I have not used them.
I did not use them.
Unaware the videos were available for learning
More awareness to students not from the Wollongong campus should be made so that they know about these resources. Tell the lecturers to remind students that they are available.
I only discovered them a couple of days before the final laboratory examination. It would be really useful if e-learning would indicate when a new video has been uploaded.
I didn't know that the videos even existed until about 2 weeks ago.
I couldn't find them; maybe make them easier to access.
Responded negatively on the use of videos
The speed of speaking should be slowly. It is hard to listen clearly for international students.
I don't find video resources helpful.
I think video doesn't help our study.
5.2.4.3
Student perceived comfort with topics learning
In relation to student perceived competency with topics learning in the subject, more than 75% of the students in STAT131 revealed that they were confident in the top six topic areas as presented in Table 5.14. A large proportion of students (97%) perceived themselves as confident in the exploratory data analysis topic, including writing a meaningful paragraph about variables. The proportions of students who regarded themselves as competent in the other four topics were moderate, ranging from 63% for normal and exponential models to 74% for confidence intervals. However, less than 20% of the students felt comfortable with the probability topic. Video supports covered almost all topics in the subject with the exception of one topic, hypothesis testing of two sample means and proportions (refer to Table 5.14).
Table 5.14 Student perceived competency with topics learning in STAT131
Topic | March/Autumn 2009 (n = 38) % a | Rank b
Exploratory: Writing meaningful paragraph | 97.4 | 1
Using SPSS | 81.6 | 2
Hypothesis tests: single mean and proportion | 79.0 | 3.5
Hypothesis tests: differences in means and proportions c | 79.0 | 3.5
Scatterplots and correlations | 76.3 | 5.5
Regression | 76.3 | 5.5
Confidence intervals | 73.7 | 7
Goodness of Fit test (Model fitting) | 68.5 | 8
Binomial and Poisson models | 68.4 | 9
Normal and Exponential models | 63.2 | 10
Probabilities | 18.4 | 11
a Percentages of students responding as moderately confident or confident
b Ranked by percentage reporting confident with topic
c No video supports provided on this topic
5.2.4.4
Changes in perspective at the end of the session
In terms of the changes in students' perspective after completing the subject, a little over half of the students in STAT131 reported they were comfortable or less anxious in four aspects related to the subject (refer to Table 5.15). These were: explaining statistical findings (55%), formulating and testing hypotheses (55%), calculating probabilities (55%), and solving statistics questions using computer (53%). The proportions of students reporting comfort in the other aspects of the subject were low, ranging from taking a subject that involves statistics (40%) down to taking a subject that involves mathematics (24%). The proportion of students demonstrating a change in perspective was low for these students over all aspects related to the subject.

Table 5.15 Changes in students' perspective after becoming more comfortable completing the subject in STAT131
Category | March/Autumn 2009 (n = 38) % a | Rank b
Explaining statistical findings | 55.3 | 1
Calculating probabilities | 55.2 | 2.5
Formulating and testing hypotheses | 55.2 | 2.5
Solving statistics questions using computer | 52.6 | 4
Taking quizzes, tests and exams in this subject | 39.5 | 5
Taking this subject that involves statistics | 39.4 | 6
Tackling tasks which are difficult for student | 36.8 | 7.5
Working with numbers | 36.8 | 7.5
Taking this subject in general | 34.2 | 9
Working with problems that involves mathematics | 31.6 | 10
Taking this subject that involves computing | 28.9 | 11
Taking this subject that involves mathematics | 23.7 | 12
a Percentages of students responding as a little comfortable or much more comfortable
b Ranked by percentage reporting comfortable
The students were surveyed about their belief in their overall learning and their perceptions of the subject's relevance. The results revealed that 10.5% of the students had a strong belief in their overall statistical learning, and 57.9% perceived that they had moderate success in their learning. Another 13.2% indicated that they had limited success in their learning, 15.8% perceived that they were unsuccessful, and 2.6% regarded the subject as too difficult for them to learn. In terms of the subject's relevance, the majority (60.5%) reported that they had a similar perception of the subject's relevance as when they commenced the subject, 23.7% reported an improved perception of the subject's relevance to their lives, while 15.8% perceived the subject as less relevant.
5.2.4.5
Assessment
In the survey, the students were asked about their perceptions of the subject's assessment system. The results presented in Table 5.16 revealed that 79% of the students regarded the assessment system as sufficiently supporting their learning and understanding of the subject. However, a small proportion, 18.4% (n = 7), of the students reported that the assessment system was unfair for some reason: some thought cheating had occurred and some that the marking style was inconsistent. Seven students made negative comments reporting their disappointment with the subject assessment system, with comments similar to the three following:

I think it is unfair. If you do not pass the 70%... you will get 0 marks. I do not like it. Many times I was just less 0.5.
It is very easy to talk with other students in regards to the answers to the assignments and the laboratory tests.
The group assignment wasn't necessarily fair, as group members did not necessarily pull their own weight.

Table 5.16 Percentages of students reporting their perceptions of subject assessment system in STAT131
Is the assessment system fair? | March/Autumn 2009 (n = 38)
Fair | 79.0
Unfair for some reasons:
  Cheating has occurred | 7.9
  Inconsistency in marking | 7.9
  Assessment format | 2.6
Not answered | 2.6
An analysis of assessment marks in STAT131 revealed that the students achieved higher mean marks (each assessment worth a total of 10 marks) in the last three assessments, on the topics: probability and models, Goodness of Fit and Chi-square test (week 8); Normal and Exponential distributions (week 10); and estimation and hypothesis testing (week 12) (see Table 5.17). This result contradicted the students' perceived competency with topics, mainly on the topic of probability (refer to Table 5.15), even though the survey was undertaken at the end of the session, that is, after the assessments had been carried out. The lack of confidence among students in the later topics had perhaps increased because these topics were taught at the end of the session.
166
Table 5.17 Distribution of assessment marks in STAT131
Assessment a | Week | N | Mean | S.D. b
1. Exploratory Data Analysis c | 4 | 76 | 6.0724 | 1.79203
2. Bivariate Data Analysis d | 6 | 73 | 5.8356 | 1.79129
3. Probability & Models, Goodness of Fit & Chi-square test c | 8 | 72 | 7.3646 | 2.07802
4. Normal & Exponential distributions d | 10 | 66 | 7.2773 | 1.67431
5. Estimation & Hypothesis testing e | 12 | 67 | 8.9351 | 0.78162
a Zero marks have been removed from all tests as they represent non-submission and not necessarily low achievement
b Standard deviation
c 'In-class' laboratory tests
d Take home tests
e Group assignment (students worked in pairs)
The students in STAT131 were required to complete two 'in-class' laboratory tests, two take home tests, and one group assignment completed in their own time. In the survey, students were asked about the structure of the laboratory tests in terms of the time allowed to complete them and the test location. The findings revealed that 58% of them (n = 22) preferred completing the tests as a mix of in-class work and work done over two or three days out of class, 29% (n = 11) favoured doing the tests over two or three days out of class, and 13% (n = 5) recommended the completion of the tests in class time.
5.3 Comparison of outcomes between the cohorts
Reflecting upon the provision of video resources in the e-learning system, the first major observation was that the undergraduate students in STAT131 were not aware of the availability of the resources. This contrasted with the postgraduate students in GHMD983, where there were no such reports. The second major observation was that only a small proportion of the undergraduates used the resources, again contrasting with the postgraduates. In seeking to find reasons why there was a difference between the undergraduates and the postgraduates, a comparison of outcomes of the two similar subjects (STAT131 and GHMD983) involving two different cohorts of students (38 undergraduates and 40 postgraduates) was undertaken in this study. The first observation was that two different approaches or designs were used for placing the resources on the e-learning sites for the two subjects. In STAT131, all learning resources were provided as a list of by-type resource folders (refer to Figure 5.2).
167
Figure 5.2 STAT131 e-learning homepage and lectures content within by-type resource folders
168
In GHMD983, by contrast, all learning resources were organised on a week-by-week basis by locating the laboratory tasks, lecture notes, data sets, video supports, and other resources in weekly folders, for example Week 1, Week 2, Week 3 (see Figure 5.3).
Figure 5.3 GHMD983 e-learning homepage and contents within weekly folders
Given that the sample sizes for both groups were sufficiently large and satisfied the assumption of normality for each group (i.e. np̂ and n(1 − p̂) were more than 5), a test for differences in two proportions was used for the subsequent analyses. The comparison of outcomes revealed that the proportions of students who valued the lecture notes (Z = 1.72, p < 0.05), video resources (Z = 5.26, p < 0.001), Edu-stream (Z = 4.53, p < 0.001), and student forum (Z = 4.30, p < 0.0001) as important to learning were significantly higher in GHMD983 than in STAT131. These differences may have been due to many of the postgraduates being more reliant on the e-learning resources, mainly the online lecture notes, video resources, Edu-stream, and student forum, as alternatives to attending laboratory classes. In the GHMD983 survey, it was found that 50% of the 12 distance students (n = 6) did not attend laboratory classes compared to only 4% of the 27 on-campus students (n = 1). While the use of video resources was reported as important to learning by more than 90% of the postgraduates, only 39% of the undergraduates found them important. A number of students in STAT131 (15%) commented that they were unable to find the videos provided on the e-learning site. This suggested that the provision of video resources associated with week-by-week tasks via weekly folders in GHMD983 possibly made these resources easier for the students to access than when the videos were provided in the by-type resource folders in STAT131, where the resources seemed disconnected from the particular tasks for which they were useful. In other words, students were more likely to find the videos in the e-learning site when these were connected to weekly tasks rather than placed in entirely separate folders. This suggested that the resources provided in the e-learning site via weekly folders (in GHMD983) worked better than the by-type resource folders (in STAT131). A test for differences in proportions revealed that there were no significant differences between the postgraduates in GHMD983 and the undergraduates in STAT131 in the average time spent by the students on the subjects per week. It was found that 25% of the postgraduates and 40% of the undergraduates spent five hours or less per week (see Table 5.18). About 73% of the postgraduates and 60.6% of the undergraduates spent six or more hours weekly on the subjects throughout the session. Approximately 38% of the postgraduates and 21% of the undergraduates spent nine or more hours per week on the subjects. These results suggest that the time spent on the subjects was similar for both cohorts. The postgraduates were not 'more motivated' than the undergraduates, but they had a history of success, predominantly in 100-level statistics subjects, whereas many undergraduates in STAT131 were new to tertiary study.
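The test for differences in two proportions used throughout these comparisons is the usual normal-approximation z-test. The sketch below illustrates the calculation with hypothetical counts; the p-values quoted in this chapter appear to be one-tailed (for example Z = 2.07 corresponds to p = 0.019), so the sketch reports the upper-tail probability.

# A minimal sketch of the two-proportion z-test (normal approximation);
# the counts used in the example call are hypothetical.
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and upper-tail p-value for H0: p1 = p2, using the pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    # Normal-approximation rule of thumb quoted in the text: n*p_hat and n*(1 - p_hat) > 5.
    assert min(n1 * pooled, n1 * (1 - pooled), n2 * pooled, n2 * (1 - pooled)) > 5
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, norm.sf(z)  # one-tailed, matching the directional p-values reported here

# e.g. 35 of 40 postgraduates versus 15 of 38 undergraduates rating a resource useful
z, p = two_proportion_z(35, 40, 15, 38)
print(f"Z = {z:.2f}, p = {p:.4f}")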
Table 5.18 Percentages of time spent by students per week on the subjects
Time per week | GHMD983 August/Spring 2008 (n = 40) % | STAT131 March/Autumn 2009 (n = 38) %
Not answered | 2.5 | –
0 to 5 hours | 25.0 | 39.5
6 to 8 hours | 35.0 | 39.5
More than 9 hours | 37.5 | 21.1
The analysis of students' perceived confidence with topics in the subjects revealed significant differences in the proportions of students between the undergraduates in STAT131 and the postgraduates in GHMD983 on several topics (see Table 5.19). The proportions of students who perceived themselves as competent in setting up and interpreting confidence intervals (Z = 2.18, p < 0.05), and in determining and calculating probabilities (Z = 5.59, p < 0.0001), were significantly higher in GHMD983 compared to STAT131. On the other hand, the proportion of students who perceived themselves as confident in writing a meaningful paragraph about variables (Z = 2.44, p < 0.01) was significantly higher in STAT131 than in GHMD983. The considerable attention given to the topic of exploratory data analysis during lectures and laboratory classes might have increased the confidence of the undergraduates in STAT131 in this area.
Table 5.19 Student perceived competency with topics learning between cohorts
Topic | GHMD983 August/Spring 2008 (n = 39) % a | Rank b | STAT131 March/Autumn 2009 (n = 38) % a | Rank b
Confidence intervals | 92.3 e | 1 | 73.7 e | 7
Using JMP/SPSS | 84.6 | 2 | 81.6 | 2
Probabilities | 82.1 e | 3.5 | 18.4 e | 11
Scatterplots and correlations | 82.1 | 3.5 | 76.3 | 5.5
Regression | 79.5 | 6 | 76.3 | 5.5
Pearson's Chi-Square | 79.5 | 6 | d | –
Exploratory: Writing meaningful paragraph | 79.5 e | 6 | 97.4 e | 1
Normal and/or Exponential model | 77.0 c | 9 | 63.2 | 10
Goodness of Fit test (Model fitting) | 77.0 | 9 | 68.5 | 8
Hypothesis tests: single mean and proportion | 77.0 | 9 | 79.0 | 3.5
Binomial and/or Poisson model f | 76.9 | 11.5 | 68.4 | 9
Hypothesis tests: differences in means and proportions | 76.9 | 11.5 | 79.0 | 3.5
a Percentages of students responding as moderately confident or confident
b Ranked by percentage reporting confident with topic
c 2.6% not answered the category (n = 1)
d Not surveyed
e Significant difference at the 0.05 level
f Binomial only in GHMD983, Binomial and Poisson in STAT131
Table 5.20 presented the changes in the students‟ perspectives at the end of the session between the undergraduates in STAT131 and the postgraduates in GHMD983. With the exception of two aspects in calculating probabilities and testing hypotheses, the proportions of students reporting more comfort or less anxiety were significantly higher in GHMD983 compared to STAT131 in ten of the 12 aspects related to the subjects as shown in Table 5.20. Overall, the postgraduates demonstrated more comfort in virtually all aspects of the subject compared to the undergraduates.
171
Table 5.20 Changes in students' perspective after completing the subjects between cohorts
Category | GHMD983 August 2008 b (n = 39) % a | STAT131 March 2009 (n = 38) % a | Test for differences in proportions: p value c
Solving statistics questions using computer | 77.0 | 52.6 | 0.013
Explaining statistical findings | 76.9 | 55.3 | 0.023
Taking this subject in general | 74.3 | 34.2 | < 0.001
Working with problems that involves mathematics | 71.8 | 31.6 | < 0.001
Tackling tasks which are difficult for student | 69.3 | 36.8 | 0.002
Working with numbers | 67.5 | 36.8 | 0.004
Taking this subject that involves mathematics | 66.6 | 23.7 | < 0.001
Taking quizzes, tests and exams in this subject | 64.1 | 39.5 | 0.015
Taking this subject that involves statistics | 64.1 | 39.4 | 0.015
Taking this subject that involves computing | 59.0 | 28.9 | 0.004
Calculating probabilities | 71.8 | 55.2 | 0.070
Formulating and testing hypotheses | 69.3 | 55.2 | 0.102
a Percentages of students responding as a little comfortable or much more comfortable
b 2.6% not answered each category (n = 1)
c Significant difference at the 0.05 level
The comparison of outcomes between the two cohorts revealed that the postgraduates were more comfortable or less anxious in learning, and more inclined to use the videos provided in the subject, than the undergraduates. While the provision of videos in both subjects was regarded as useful for learning by 98% of the postgraduates and 39% of the undergraduates who had used the resources, over half of the undergraduates (56%) did not use the videos, and 15% could not find the videos as they were not aware the resources were available in the e-learning site. This suggests that in providing learning supports, particularly video resources, on the e-learning sites, there was a need to implement appropriate learning designs within the e-learning systems. Hence, the issue addressed in later implementations of STAT131 in the second case study was the need for a better learning design, embedding the videos more effectively so as to better support and improve the student learning of statistics.
5.4 Development and implementation phases
5.4.1 Aim for the study
Reflection on the two groups, particularly the undergraduate group, suggested the need to place the resources more appropriately in the e-learning systems. Informed by Agostinho et al.'s (2008) work on the Learning Design Visual Sequence (LDVS) representation (discussed in section 3.5, Chapter 3), a new approach to embedding resources was developed. An adaptation of the LDVS led to the development of learning design maps or flowcharts within weekly folders (refer to section 5.3.2). These were introduced to the postgraduates in GHMD983 August/Spring 2009. Further refinement of the learning design maps was carried out in the postgraduate subject SHS940 (GHMD983 renamed) August/Spring 2010. It was necessary to collect data at the end of the session in order to inform the decisions made regarding the implementation of the learning design maps and the associated embedding of video resources in the e-learning systems. The data were gathered from the student surveys at the end of the session, e-learning student tracking statistics, and students' assessment and actual grades.
5.4.2 The context: existing resources As with the GHMD983 August/Spring 2008, there were similar learning resources provided to students both in GHMD983 August/Spring 2009 and SHS940 August/Spring 2010 (noted in section 5.1.2), with the exception of the lecturer(s) who taught the subjects and the assessment systems. As shown in Table 5.21, Anne taught the subject throughout the session in August/Spring 2008. However, the subjects involved a sharing of lectures between two lecturers in later sessions: (i) in GHMD983 August/Spring 2009: Anne taught the subject in the first half of the session, and Doug (pseudonym) in the second half of the session, (ii) in SHS940 August/Spring 2010: Doug taught the subject in the first half of the session, and Sally (pseudonym) in the second half of the session, and (iii) in SHS940 August/Spring 2011 (postscript): Anne taught the subject in the first half of the session, and Sally in the second half of the session. The change of lecturer occurred during the mid-session.
Table 5.21 Changes of lecturer(s) and assessment systems used in GHMD983/SHS940 across sessions
Session | Lecturer | Assessment
GHMD983 August/Spring 2008 | Anne b | Test retest approach
GHMD983 August/Spring 2009 | Anne b (1st half of session), Doug a (2nd half of session) | Test retest approach
SHS940 August/Spring 2010 | Doug a, b (1st half of session), Sally a (2nd half of session) | Minor & Major assignment
SHS940 August/Spring 2011 (Postscript) | Anne b (1st half of session), Sally a (2nd half of session) | Two drafts of final assignment & three test retest approach
a Pseudonyms were used for lecturers who taught the subjects
b Subject co-ordinator
173
In terms of the assessment systems, a similar assessment system was used in GHMD983 August/Spring 2009 as in GHMD983 August/Spring 2008. As shown in Table 5.21, a different assessment system was designed for SHS940 August/Spring 2010, consisting of three major summative assignments and three minor formative assignments as well as a final examination (40%). Each minor assignment was weighted at 5% (mode A) or 0% (mode B), whereas each major assignment was weighted at 15% (mode A) or 20% (mode B). Each minor assignment was paired with its corresponding major assignment, and whichever of the two weightings (mode A or mode B) maximised the student's total mark was applied, thus emphasising the role of the minor assignments as formative. This only applied when a reasonable attempt was made by students at the formative assignment. No attempt at the formative assignment automatically led to a mark of zero for the minor assignment and a maximum of 15% for the corresponding major assignment. For instance, if the marks for the minor and major assignments were, respectively, 3 (out of 5) and 12 (out of 15), then under marking mode B the minor assignment mark was discarded and the major assignment received a mark of 16 out of 20. But if no attempt was made at the minor assignment, the marks (mode A) remained at zero out of 5 and 12 out of 15, a total of 12 out of 20.
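The paired mode A/mode B weighting rule can be expressed compactly. The following sketch, with hypothetical function and variable names, reproduces the worked example above.

# A sketch of the paired weighting rule described above; names and inputs are hypothetical.
def paired_assignment_mark(minor_out_of_5, major_out_of_15, attempted_minor):
    """Return the mark out of 20 for one minor/major assignment pair."""
    mode_a = minor_out_of_5 + major_out_of_15   # minor worth 5% plus major worth 15%
    mode_b = major_out_of_15 / 15 * 20          # minor worth 0%, major rescaled to 20%
    if not attempted_minor:
        # No reasonable attempt at the formative task: mode A applies with a zero
        # minor mark, capping the pair at 15 of the 20 available marks.
        return 0 + major_out_of_15
    return max(mode_a, mode_b)

# Worked example from the text: minor 3/5 and major 12/15 gives 16/20 under mode B,
# but only 12/20 if the minor assignment was not attempted at all.
print(paired_assignment_mark(3, 12, attempted_minor=True))    # 16.0
print(paired_assignment_mark(0, 12, attempted_minor=False))   # 12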
5.4.3 Method The steps involved in the collection of the evaluation data throughout the development and implementation phases for this study were as follows:
1. Ethics approval was obtained from the University of Wollongong Ethics Committee (ethics number: HE08/289, see Appendix 4.2) in order to invite students to participate in the further investigation of the impact of video resources on student learning outcomes.
2. In GHMD983 August/Spring 2009, students were approached initially and informed about the purpose of the study through a Participant Information Sheet delivered via email. Meanwhile in SHS940 August/Spring 2010, the information sheets were provided to the students during laboratory classes.
3. In GHMD983 August/Spring 2008, all learning resources were organised on a weekly basis by locating the laboratory tasks, lecture notes, data sets, video supports, and other resources in weekly folders (see Figure 5.3).
4. In GHMD983 August/Spring 2009, learning design maps in Microsoft Word Document (DOC) files were introduced to students in the e-learning site, embedded in the weekly folders (see Figure 5.4). The maps (or flowcharts) were to support students in completing their laboratory tasks by providing directions to the various learning resources and in particular the video resources.

Figure 5.4 GHMD983 August/Spring 2009 e-learning homepage and learning design maps using DOC files embedded in weekly folders

5. The use of the learning design maps was extended to all weeks in SHS940 August/Spring 2010 and was accessible in the e-learning site via weekly folders in addition to by-type resource folders (see Figure 5.5). Following the comments provided by students in the STAT131 March/Autumn 2010 survey (noted in section 6.1.1.2 in Chapter 6), both the weekly folders and by-type resource folders were made available to the students in the e-learning site. The learning design maps were refined to a simpler layout and converted from DOC to Portable Document Format (PDF) files, allowing easier access and better alignment to the various resources. The improved maps incorporated links to resources in the e-learning site such as lecture notes, laboratory tasks, Edu-stream, videos, worked solutions, JMP notes, and the student forum, and students could "print off" the maps for reference.

Figure 5.5 SHS940 e-learning homepage and improved learning design maps created in PDF

6. Students in GHMD983 August/Spring 2009 completed the online surveys at the end of the session through the e-learning system and returned the completed consent form (via email) indicating their agreement to participate in the study. In SHS940 August/Spring 2010, however, the survey was administered in laboratory classes (also accessible online in the e-learning system), and the students completed it on paper.
7. In addition, data on the e-learning student tracking statistics were observed from week 1 to week 13 in order to examine the actual students' activities or usage of online resources in the e-learning systems.
5.4.4 Outcomes
In these two phases of evaluation, this study involved two cohorts of postgraduates: (i) 63 students enrolled in GHMD983 August/Spring 2009, comprising 49 on-campus and 14 distance students, and (ii) 41 students enrolled in SHS940 August/Spring 2010, comprising 39 on-campus and 2 distance students. Of the postgraduates in August/Spring 2009 and 2010, 28 and 12 students respectively took part in the surveys, with response rates of 44 and 24 per cent. The low response rates were perhaps due to low attendance at the end of the session (specifically in SHS940) and the shortcomings of voluntary surveys.
176
A test for differences in two proportions was used to examine the differences in the students' backgrounds between the two cohorts, GHMD983 August/Spring 2008 and GHMD983 August/Spring 2009. As shown in Table 5.22, there were no significant differences in the proportions of students according to gender, mode of learning (on-campus or distance), and nationality between the two cohorts. Overall, a little over half of the students were female: 53% (21 out of 40 students) in 2008 and 64% (18 out of 28 students) in 2009. Over two thirds of them were enrolled as on-campus students: 68% (27 out of 40 students) in 2008 and 68% (19 out of 28 students) in 2009. Less than half of them were domestic students: 43% (17 out of 40 students) in 2008 and 39% (11 out of 28 students) in 2009. In SHS940 August/Spring 2010, a high proportion of the students who responded to the survey (92%) were female (11 out of 12 students), 83% (10 out of 12 students) were enrolled as on-campus students, and one third of them (33%) reported as domestic students (4 out of 12 students).

Table 5.22 Students' demographic data reported in the surveys between cohorts
Percentages | GHMD983 August/Spring 2008 (n = 40) | GHMD983 August/Spring 2009 (n = 28) | SHS940 August/Spring 2010 (n = 12)
Female | 52.5 | 64.3 | 91.7
On-campus | 67.5 | 67.9 | 83.3
Domestic students | 42.5 | 39.3 | 33.3
In terms of the distributions of average time spent per week on the subjects, a test for differences in two proportions revealed no significant differences in the proportions of students between the 2008 and 2009 cohorts (refer to Table 5.23). Over two thirds of the students in both cohorts spent at least six hours per week on the subjects throughout the session: 73% (29 out of 40 students) in 2008 and 82% (23 out of 28 students) in 2009. On the other hand, no more than 25% of the students in either cohort spent five hours or less on learning per week. The distribution of average time spent by students weekly in SHS940 August/Spring 2010 was similar to the previous cohorts in 2008 and 2009.
177
Table 5.23 Percentages of average time spent by students per week on the subjects
Time per week | GHMD983 August/Spring 2008 (n = 40) a | GHMD983 August/Spring 2009 (n = 28) | SHS940 August/Spring 2010 (n = 12)
0 to 5 hours | 25.0 | 17.9 | 25.0
6 to 8 hours | 35.0 | 46.4 | 33.3
More than 9 hours | 37.5 | 35.7 | 41.7
a One student did not respond (2.5% not answered)
As shown in Figure 5.6, an analysis of differences between two proportions (normal approximation using the z-test) on the students' actual grades in GHMD983 revealed that the proportion of students attaining the top grades (High Distinction and Distinction) in 2008 (67%) was significantly higher (Z = 3.63, p < 0.001) than in 2009 (37%). Correspondingly, the proportion of students attaining the lower grades (Fail, Pass and Credit) in 2009 (64%) was significantly higher (Z = 3.75, p < 0.001) than in 2008 (33%). Overall, the result suggests that there was a difference in students' performance in statistics between 2008, when there were weekly folders, and 2009, when learning design maps (in DOC files) within weekly folders were made available.
Figure 5.6 Distribution of students‟ actual grades between cohorts
178
5.4.4.1
Impact of learning resources on students learning
In response to the perceived importance of learning resources for student learning, seven resources were regarded as important to learning by more than 85% of the students in both cohorts, GHMD983 August/Spring 2008 and GHMD983 August/Spring 2009 (refer to Table 5.24). In 2008 in particular, as can be seen in Table 5.24, more than 90% of the students valued the laboratory tasks (100%), laboratory tests (100%), worked solutions (98%), video resources (98%), lectures (95%), and online lecture notes (95%) as useful for learning, while three resources, the worked solutions, laboratory tasks, and video resources, were valued as useful by more than 90% of students in 2009. A large proportion of students endorsed the use of videos in supporting their learning: 98% in 2008 and 96% in 2009. It was found that 82% of the students in 2009 valued the weekly folders as useful. The provision of the learning design maps via the weekly folders in 2009 was potentially beneficial, with 57% of the students perceiving this resource as useful for learning. A test for differences in two proportions revealed that the proportion of students perceiving the student forum as useful in 2008 (78%) was significantly higher (Z = 2.07, p = 0.019) than in 2009 (54%). However, there were no significant differences in the proportions of students valuing other resources between the two cohorts. In 2009 in particular, the proportions of students supporting other resources were moderate, ranging from 50% valuing the objectives included in the laboratory manual to 68% valuing the Edu-stream; the textbook, however, remained less important, being valued for learning by only 44% of the students.
Table 5.24 Perceived impact of resources on student learning across sessions

                                      GHMD983 August/Spring 2008 (n = 40)   GHMD983 August/Spring 2009 (n = 28)
Resource                              %a        Rankb                       %a        Rankb
Laboratory tasks                      100       1.5                         96.4      3
Worked solutions                      97.5      3.5                         100       1
Laboratory tests                      100       1.5                         96.4      3
Laboratory classes                    82.5      8.5                         d         -
Lectures                              95.0      5.5                         85.7      7
Online lecture notes                  95.0      5.5                         85.8      6
Tutor                                 65.0      12                          57.1c     13
Teamwork                              60.0      13                          60.7      11
Video resources                       97.5      3.5                         96.4      3
Edu-stream                            82.5      8.5                         67.9      10
Laboratory retests                    77.5      10.5                        89.3      5
Student forum                         77.5      10.5                        53.6      15
Laboratory manual                     d         -                           82.1      8.5
Learning strategies                   d         -                           57.1      13
Objectives in laboratory manual       d         -                           50.0      16
Textbook                              d         -                           44.4      17
Work done in your own time            92.5      7                           d         -
Weekly folders                        d         -                           82.1      8.5
Learning design maps                  e         -                           57.1      13

a Percentages of students responding as moderately or extremely useful
b Ranked by percentage reporting resources useful
c 3.6% did not answer this category (n = 1)
d Not surveyed
e Not provided
5.4.4.2 Video resources for student learning
The provision of video resources to the postgraduate students in GHMD983/SHS940 was primarily aimed at supporting learning among the distance students. As noted in the previous section, the proportions of students who valued the videos for learning were similar between 2008 (98%) and 2009 (96%). In the GHMD983 August/Spring 2009 survey, students were asked to indicate the average time they spent watching the videos per week and their experience of using the videos. With regard to the average time spent per week, 54% of the students reported spending at least three hours per week watching the videos, while 46% reported that they either did not use the videos or spent less than two hours per week watching them. In relation to the experience of using the videos provided in the subject, a high proportion of the students (89%) reported that the use of videos had helped them solve problems well in the subject, 7% reported that they did not use the videos at all, and 3% perceived the videos as time consuming.
5.4.4.3 Student perceived comfort with topics
In order to examine differences between the two cohorts in students' perceived competency with the topics, a test for differences in two proportions was used. The results revealed that the students in 2009 reported significantly higher confidence with the probability topic (Z = 1.78, p = 0.038) than in 2008 (refer to Table 5.25). In 2009, more than 85% of the students considered themselves comfortable with the topics of probability (96%), using JMP (93%), the Normal model (89%), Pearson's Chi-square (89%), scatterplots and correlations (89%), and exploratory data analysis (86%). In contrast, there were only two topics for which at least 85% of the students in 2008 reported comfort: confidence intervals (92%) and using JMP (85%).
Table 5.25 Student perceived competency with topics across sessions

                                                          GHMD983 August/Spring 2008 (n = 39)   GHMD983 August/Spring 2009 (n = 28)
Topic                                                     %a       Rankb                        %a       Rankb
Confidence intervals                                      92.3     1                            78.6     9
Using JMP                                                 84.6     2                            92.9     2
Exploratory: Writing meaningful paragraph                 79.5     6                            85.7     6
Normal model                                              77.0c    9                            89.3     3.5
Goodness of Fit test (Model fitting)                      77.0     9                            78.5     11
Binomial model                                            76.9     11.5                         71.4     12
Scatterplots and correlations                             82.1     3.5                          88.9     5
Regression                                                79.5     6                            78.6     9
Pearson's Chi-Square                                      79.5     6                            89.3     3.5
Hypothesis tests – single mean and proportion             77.0     9                            82.1     7
Hypothesis tests – differences in means and proportions   76.9     11.5                         78.6     9
Probabilities                                             82.1     3.5                          96.4     1

a Percentages of students responding as moderately confident or confident
b Ranked by percentage reporting confidence with the topic
c 2.6% did not answer this category (n = 1)
5.4.4.4 Provision of learning design maps in the subject
The learning design maps, created as PDF files, were introduced to the students in SHS940 August/Spring 2010 in the e-learning site via weekly folders, in addition to by-type resource folders (as seen in Figure 5.5). As noted earlier, both the weekly folders and the by-type resource folders were placed in the e-learning site as recommended by students through the STAT131 March/Autumn 2010 survey. The aim of providing the maps in the subject was to support students with learning materials, in particular the video supports, accessible in the e-learning site. It was therefore necessary in this study to identify how and to what extent the maps were useful for student learning in the subject. For this reason, in the SHS940 survey the students were asked to indicate what they gained from the use of the learning design maps and their experience of using the maps for learning. The dominant use of the maps (75%) was by students connecting required work or tasks to relevant resources. The next major use was as a study checklist, organising their work and learning materials in the subject (66.7%). The maps were also used as a revision tool or learning guide (58.3%), and to find reference materials (50%). This usage was supported by students' comments regarding the use of the maps for learning in the subject. Four major themes were drawn from their responses, and examples of comments were as follows:
To help them complete the weekly laboratory tasks (22.2% of 9 responses)
"It was handy for setting out the weekly work; however the links to readings and lab solutions rarely worked as these resources were put up online so late."
"Useful as the lab tasks were put up on e-learning site with links to weekly solutions."
To assist them in completing their assessment (tasks) (22.2% of 9 responses)
"It is benefit to finish assignment, however it usually put up late and I don't have time to prepare tutorial."
"Useful for assessment."
To organise their learning and study in the subject (33.4% of 9 responses)
"Useful for review on weekly study in the subject."
"It was good and helpful for learning the subject."
To find resources and materials needed for their study (22.2% of 9 responses)
"It was useful when I click on the content it'll link to the materials."
"Putting all related materials for each week was very helpful guide, it is important to ensure uploading materials week by week to have the advantage of that."
5.4.4.5 Changes in perspective at the end of the session
In terms of changes in students' perspective after completing the subject, a test for differences in two proportions revealed no significant differences between the 2008 and 2009 cohorts in the proportions of students endorsing the different perspectives at the end of the sessions (see Table 5.26). In 2008, more than 75% of the students were more comfortable or less anxious in five of the 15 aspects related to the subject: using statistics in their research work (80%), solving statistics questions using a computer (77%), undertaking their professional work (77%), reading statistical studies (77%), and explaining statistical findings (77%). In comparison, there were only two aspects for which more than 75% of the students in 2009 reported more comfort or less anxiety: using statistics in their research work (77%) and reading statistical studies (77%).
Table 5.26 Changes in students' perspective after completing the subject between cohorts

                                                        GHMD983 August/Spring 2008 (n = 39)   GHMD983 August/Spring 2009 (n = 28)
Category                                                %a       Rankb                        %a       Rankb
Using statistics in research work                       79.5     1                            77.7     1
Solving statistics questions using computer             77.0     3                            73.1     6
Undertaking their professional work                     77.0     3                            62.9     10
Reading statistical studies                             77.0     3                            76.9     2
Explaining statistical findings                         76.9     5                            74.0     4
Taking this subject in general                          74.3     6                            55.5     12.5
Calculating probabilities                               71.8     7.5                          74.0     4
Working with problems that involves mathematics         71.8     7.5                          c        -
Formulating and testing hypotheses                      69.3     9.5                          65.4     8.5
Tackling tasks which are difficult for student          69.3     9.5                          70.3     7
Working with numbers                                    67.5     11                           65.4     8.5
Taking this subject that involves mathematics           66.6     12                           59.2     11
Taking quizzes, tests and exams in this subject         64.1     13.5                         55.5     12.5
Taking this subject that involves statistics            64.1     13.5                         74.0     4
Taking this subject that involves computing             59.0     15                           53.9     14

a Percentages of students responding as a little comfortable or much more comfortable
b Ranked by percentage reporting comfortable
c Not surveyed
5.4.4.6 The e-learning student tracking statistics
Data from the e-learning student tracking statistics were used to examine the students' online activities through their usage of the resources provided in the e-learning system. These tracking statistics were observed for all weeks between week 1 and week 13 in each session. The reporting periods were set as:
27th July until 26th October for GHMD983 August/Spring 2008, and
27th July until 30th October for GHMD983 August/Spring 2009.
The time range was from 8.00 am (tracking started) to 12.00 pm (tracking ended). In GHMD983 August/Spring 2008, all resources were placed in weekly folders in the e-learning site. In GHMD983 August/Spring 2009, learning design maps (DOC files) associated with resources were provided to students via weekly folders in the e-learning site. In SHS940 August/Spring 2010, the learning design maps (PDF files) incorporating links to resources were introduced to students in the e-learning site via the weekly folders in addition to by-type resource folders. The tracking of students' activities in terms of the number of accesses to resources in the e-learning system was carried out in week 12 for SHS940 August/Spring 2010 (from 18th October until 24th October 2010), again between 8.00 am (tracking started) and 12.00 pm (tracking ended). The results from the e-learning tracking statistics revealed that of the 993 accesses to resources, 45% (n = 443) were made using the learning design maps while the remaining 55% (n = 550) were made using the by-type resource folders in the e-learning site. Of the 41 students, 61% (n = 25) had used both the by-type resource folders and the learning design maps to access resources, 39% (n = 16) used only the by-type resource folders, and none used only the learning design maps provided in the e-learning site. This suggests that the students preferred to use both designs rather than only the maps to access resources in the e-learning site. An independent t-test on data from the e-learning tracking statistics revealed that the average numbers of read messages (t(102.967) = 2.653, p = 0.009; equal variances not assumed, test for equality of variances F = 17.971, p < 0.001) and posted messages (t(30.533) = 2.325, p = 0.027; equal variances not assumed, test for equality of variances F = 17.877, p < 0.001) by the students in 2008 were significantly higher than in 2009 (refer to Table 5.27). Overall, this result indicates that the students in 2008 were more active in accessing the online communication tool (i.e. the student forum) via the e-learning system compared to 2009. This is consistent with the earlier finding that the student forum was considered much more useful in 2008 than in 2009 (refer to Table 5.24).
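A brief computational sketch of this kind of comparison is shown below, assuming the variance test was a Levene-type test (as typically reported alongside SPSS independent t-tests) followed by a t-test with equal variances not assumed (Welch's test). The data arrays are hypothetical placeholders rather than the actual tracking counts, and scipy is used simply as one standard implementation.

# Hypothetical sketch: test equality of variances, then compare group means
# with equal variances not assumed (Welch's t-test), as in the read-message
# comparison between the 2008 and 2009 cohorts.
import numpy as np
from scipy import stats

read_2008 = np.array([1200, 300, 4500, 90, 2500, 60, 800])   # hypothetical message counts
read_2009 = np.array([400, 150, 900, 80, 300, 60, 500])      # hypothetical message counts

levene_stat, levene_p = stats.levene(read_2008, read_2009)               # equality of variances
t_stat, t_p = stats.ttest_ind(read_2008, read_2009, equal_var=False)     # Welch's t-test

print(f"Levene F = {levene_stat:.3f}, p = {levene_p:.3f}")
print(f"Welch t = {t_stat:.3f}, p = {t_p:.3f}")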
Table 5.27 Data from the e-learning student tracking statistics between week 1 and week 13, by cohort

Descriptive                        GHMD983 August/Spring 2008   GHMD983 August/Spring 2009
Number of sessionsb                n = 110a                     n = 72a
  Mean                             93.20                        81.15
  Standard deviation               69.90                        41.52
Total time spentc                  n = 110a                     n = 72a
  Mean                             18:11:49                     18:43:57
  Standard deviation               15:35:53                     09:46:12
Number of read messages            n = 81a                      n = 63a
  Mean                             1670.12                      522.62
  Standard deviation               3628.35                      1243.79
Number of posted messages          n = 30a                      n = 35a
  Mean                             15.27                        4.14
  Standard deviation               25.86                        4.54
Number of files viewedd            n = 107a                     n = 72a
  Mean                             387.46                       401.85
  Standard deviation               258.23                       233.92

a This figure differs from the actual number of students enrolled in the subject as tracking was observed before the final exam
b Number of accesses (logins) to the e-learning system
c Total time spent in hours:minutes:seconds
d Online resources used (viewed), e.g. lecture notes, videos, worked solutions
5.4.4.7 Assessment
The assessment system designed for GHMD983 August/Spring 2009 was similar to that in GHMD983 August/Spring 2008. In the GHMD983 August/Spring 2009 survey, the students were asked about their perceptions of the subject assessment system. As shown in Table 5.28, the findings revealed that 84.6% of the students indicated that the system was fair throughout the session. Only a small proportion of the students reported that the assessment system was unfair for some reason (3.8%), that others were cheating (7.8%), or that the marking style was inconsistent (3.8%). Examples of students' comments reporting their disappointment with the subject assessment system were as follows:
(15.4% of 26 responses)
"There are far too many students that copy from each other. Being one of the students who did well throughout the tests, I found there were too many people asking for answers. I would not give them, but there were others who would. Some students take others answers and sometimes they have better results."
"I think it was basically fair up to week 8. However I am not satisfied with week 12 marks. It is much less than expectation and I am still confused because we haven't received the details of marking e.g. where I lost my marks."

Table 5.28 Students' perceptions of the subject assessment system in GHMD983 August/Spring 2009

Is the assessment system fair?    Percentages (n = 26)
Fair                              84.6
Unfair for some reasons           3.8
Cheating has occurred             7.8
Inconsistency in marking          3.8
An independent t-test on the assessment results in GHMD983 August/Spring 2008 and GHMD983 August/Spring 2009 revealed significant differences in average student marks on the topics assessed in tests 1 and 2. As shown in Table 5.29, the students in 2008 achieved significantly higher mean marks on the exploratory data analysis topic compared to 2009, whereas the students in 2009 attained significantly higher mean marks on the bivariate data analysis topic compared to 2008. There were no significant differences between cohorts in the mean assessment marks for the other two topics.
Table 5.29 Comparison of the average assessment marks in GHMD983 between cohorts

Laboratory testa                                                Sessionb   N    Mean   S.D.c   t valued   p value
1. Exploratory Data Analysis                                    2008       89   7.75   0.951   4.234      < 0.001
                                                                2009       63   7.02   1.174
2. Bivariate Data Analysis                                      2008       88   7.50   1.170   2.121      0.036
                                                                2009       58   7.94   1.334
3. Probability & Models, Goodness of Fit & Chi-square test     2008       86   8.45   1.481   1.600      0.112
                                                                2009       62   8.85   1.478
4. Normal & Binomial distributions, Estimation & Hypothesis    2008       84   8.99   7.713   0.316      0.752
   testing                                                      2009       59   8.67   0.661

a Zero marks have been removed from all tests as they represent non-submission and not necessarily low achievement
b A = Autumn, S = Spring
c Standard deviation
d Equal variances assumed for both groups
In SHS940 August/Spring 2010, a change of lecturer/co-ordinator led to changes in the assessment system compared to the previous years (2008 and 2009). The students in SHS940 were required to complete two types of assessment: three minor and three major assignments. Each minor assignment was weighted at 5%, and each major assignment at 15%. The mean assessment marks, expressed out of 10, suggest that performance in SHS940 was comparable across the assessments (refer to Table 5.30).
Table 5.30 Assessment results on specific topics in SHS940 August/Spring 2010

Topic / Assessment                                               N    Meana    S.D.b
1. Exploratory Data Analysis & Data Collection and Design
   Minor 1                                                       40   7.2615   1.13695
   Major 1                                                       41   6.8276   1.41570
2. Probability & Binomial and Normal distributions
   Minor 2                                                       41   6.8488   1.81027
   Major 2                                                       41   7.4878   1.33666
3. Estimation & Hypothesis testing and Demography
   Minor 3                                                       41   6.9659   1.60119
   Major 3                                                       40   8.4525   1.06909

a Zero marks have been removed from all assessments as they represent non-submission and not necessarily low achievement
b Standard deviation
Figure 5.7 displays the distributions of assessment marks (out of 10) in SHS940 August/Spring 2010. The distributions of minor assignment marks were similar, with interquartile ranges between 1.40 and 2.10 marks. Likewise, ignoring some outliers, the distributions of major assignment marks were similar, with interquartile ranges between 1.33 and 1.53 marks.
Figure 5.7 The distribution of assessment marks in SHS940 August/Spring 2010
Paired t-tests revealed a significant improvement in the mean marks from minor 2 to major 2 (t(40) = 2.443, p = 0.019) and from minor 3 to major 3 (t(39) = 5.567, p < 0.001). However, the mean marks significantly decreased from minor 1 to major 1 (t(39) = 3.075, p = 0.004). There were also significant improvements in the mean marks across the major assignments: from major 1 to major 2 (t(40) = 2.393, p = 0.021), major 1 to major 3 (t(39) = 5.438, p < 0.001), and major 2 to major 3 (t(39) = 4.683, p < 0.001). Overall, the results showed that the mean assessment marks consistently improved throughout the session in SHS940 August/Spring 2010, from minor to major assignments (with the exception of minor 1 to major 1) and from earlier to later major assignments. There were similar improvements from earlier to later assessments in 2008 and 2009.
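A minimal sketch of a paired t-test of this kind is shown below; the mark vectors are hypothetical placeholders rather than the SHS940 data, and scipy is used as one standard implementation rather than the software used in the original analysis.

# Hypothetical sketch: paired t-test comparing the same students' marks on a
# minor assignment and the subsequent major assignment (marks out of 10).
import numpy as np
from scipy import stats

minor_2 = np.array([6.5, 7.0, 5.8, 8.2, 6.9, 7.4])   # hypothetical minor 2 marks
major_2 = np.array([7.2, 7.5, 6.4, 8.6, 7.1, 8.0])   # same students, major 2 marks

t_stat, p_value = stats.ttest_rel(major_2, minor_2)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")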
5.4.4.8 Failure rates, pass rates, and shifts in grades
Over the years 2003 to 2005 in GHMD983, the percentage of students achieving higher grades (High Distinction, Distinction, and Credit) declined from 56% to 41%, with failure rates fluctuating between 10% and 25% (refer to Table 5.31). In 2006, however, the proportion of students attaining the top grades (High Distinction and Distinction) shifted up sharply to 63.7% and the failure rate dropped to 3.6%; this was a turning point, as the failure rates remained under 10% up to and including 2011 (see Figure 5.8). The turning point was associated with an increase in within-session assessment tasks, from 3 large pieces to 5 smaller pieces of assessment. An analysis of differences in proportions revealed that the overall proportion of students failing between 2006 and 2011 was significantly lower (Z = 5.22, p < 0.001) than the overall proportion failing between 2003 and 2005. Furthermore, the failure rates in 2010 (2.4%) and 2011 (1.7%) were the lowest on record; in each year the only failure was a distance student who attempted no assessment.
Table 5.31 Pass rates for the years 2003 to 2011 in GHMD983/SHS940

Year     Na     Fail   Pass   Credit   Distinction   High Distinction
2003     32     18.8   25.0   31.2     15.6          9.4
2004     49     10.2   34.8   22.4     22.4          10.2
2005b    49     24.5   34.7   18.4     12.2          10.2
2006b    113    3.6    10.6   22.1     31.0          32.7
2007     105    7.7    18.1   15.2     33.3          25.7
2008     86     3.5    8.2    20.9     39.5          27.9
2009     63     4.8    27.0   31.7     27.0          9.5
2010c    41     2.4    4.9    12.2     43.9          36.6
2011c    60     1.7    1.7    11.6     40.0          45.0

a Number of students enrolled
b Assessment changed from 3 large pieces (2005) to more, smaller pieces of continuous assessment (2006)
c Subject code changed from GHMD983 to SHS940
(Data source: University of Wollongong, Performance Indicator Database, 09/12/2011)
Note: HD = High Distinction; D = Distinction; C = Credit
The lecturer "Anne" commenced in 2005 and was away in both 2007 and 2010.
Innovations introduced in the subject:
A = Changes in the assessment system from 3 major assignments to 5 laboratory tests/assignments
B = Video resources, and test retest approach (retest for any test less than 70%)
C = Learning design maps within weekly folders (DOC files), and test retest approach (retest for any test less than 70%)
D = 3 minor and 3 major assignments, and a change in the lecturer teaching the subject
E = Improved learning design maps within weekly folders (PDF files), by-type resource folders, draft and redraft of the first two assignments, and test retest approach (retest for any test less than 70%)
Figure 5.8 Percentages of fail versus higher grades and top grades, and innovations introduced in GHMD983/SHS940 for the years 2003 to 2011
On the other hand, the proportion of students achieving at least a credit grade between 2006 and 2011 was less consistent, fluctuating from a low of 68.2% to a high of 96.6%. With regard to the shift towards higher grades over the past six years, it was almost impossible to ascertain whether this was associated with the lecturers who taught the subject, as five lecturers with different levels of expertise had been involved in the teaching since 2003. For instance, one lecturer ("Anne", the principal supervisor of this thesis) started teaching the subject in 2005, when the failure rate was high (24.5%). In 2006, the lecturer modified the assessment system from 3 major assignments to 5 laboratory tests/assignments and the failure rate subsequently dropped to 3.6%. The lecturer was away on study leave in 2007 and 2010. Although the overall failure rates remained under 10% in both years, the failure rates for distance students increased, to 44% in 2007 and 20% in 2010. In 2011, the lecturer slightly modified the assessment system by replacing the first two tests with assignments that required students to submit drafts before submitting the final assignments. The submission of drafts in the e-learning site allowed other students and the lecturer to provide constructive comments on the work and enabled students to improve their final assignments. This was possibly one reason why a high proportion of the students (97%) in 2011 attained credit or better as their final grades. It was also evident that a large number of students read the submissions of other students, which increased their exposure to analyses of different data sets, distributions, and other characteristics of data. In SHS940 August/Spring 2010, the learning design maps associated with videos and other resources were incorporated within the weekly folders alongside the by-type resource folders in the e-learning site. In 2011, there were no maps, but both the weekly folders and by-type resource folders (e.g. assessment, lecture notes, video supports) were provided to the students in the e-learning site. The weekly folders were, however, similar to the maps provided in 2009 and 2010 in that they contained the three major components: resources, tasks, and supports. In terms of improving teaching and learning by providing a variety of resources to the students, particularly through the e-learning system, 95% of the students regarded the video resources introduced in 2008 as useful for learning. Further, the top grades rate (High Distinction and Distinction) rose to 67.4% and the failure rate fell to 3.5% in 2008. The change in 2011 also coincided with a shift in assessment practice: the first two test retests were substituted with a draft/final assignment, a pedagogically preferable approach shifting the emphasis from failing a test to redrafting an assignment. This, together with the subject design that provided the weekly folders associated with tasks and support resources in the e-learning site in 2010 and 2011, was associated with a significant increase in top grades (Z = 4.72, p < 0.001) compared to the overall top grades rate between 2008 and 2009. It is apparent that the organisation of resources, tasks, and supports within the weekly folders was adequate in maintaining good access to resources. It is also apparent that a large proportion of students used both the weekly folders and by-type resource folders when they were available in the e-learning site.
5.4.4.9 Recommendations for improvement of the subject
The students were surveyed to provide recommendations to improve the subjects for the benefit of future students. Several major themes were evident in their responses, as listed in Table 5.32.

Table 5.32 Major themes of students' recommendations for improvement of the subjects

                                                      Percentages
How to improve the subject?                           GHMD983 August/Spring 2008   GHMD983 August/Spring 2009   SHS940 August/Spring 2010
                                                      (n = 24)                     (n = 28)                     (n = 12)
Lecturer or teaching style                            13                           9                            50
Lectures, lecture notes, and tutorials                8                            18                           17
Laboratory tests, exam, and assignment                8                            36                           33
Laboratory manual, tasks, and worked solutions        46                           14                           -
Video supports and online materials                   4                            9                            -
Concern about the study of statistics                 21                           14                           -
No improvement needed                                 -                            -                            -
In GHMD983 August/Spring 2008, 21% (n = 5) of the responding students voiced concerns through their comments about the study of statistics, and video resources were provided in the e-learning site to support their learning. Almost 80% (n = 19) of the respondents wanted improvements to the subject resources, specifically the laboratory manual, laboratory tasks, and worked solutions. Likewise, a large proportion of responding students in GHMD983 August/Spring 2009 (86%, n = 24) and in SHS940 (100%, although only 12 students responded) suggested improvements to the learning resources provided in the subjects. Some of their comments were as follows:
GHMD983 August/Spring 2009
"If possible lecture followed by lab on same day would have been better."
"Just make sure that lectures, labs and solutions are up-to-date as quickly as possible for distance students who are unable to attend lectures."
"A number of times throughout the session the solutions were not made available for the labs as well as for the sample exams. This made it difficult to check the work to ensure that the way I was working out the solutions was correct. There were often discrepancies between the labs and the solutions as well. Swapping lecturer mid-way through the session was difficult. Later in the session the answers to the lab tests also were not posted so it was difficult again to check lab test results and use it as a learning tool for the exam."
"More videos!!!! They are really useful to learn in less time."
SHS940 August/Spring 2010
"I think it is important that we are being taught as opposed to use having to teach ourselves. I would have been just as well off being handed a textbook and 'google' to complete the assessments. Perhaps it was better for on campus students, but as a distance student I felt very alone with no guidance."
"Need more interaction with the lecturer and students."
"Extend tutorial to two hours in order to cover everything."
"Provide more relevant readings – put the laboratory solutions online so that distance students are actually able to check their work and make sure they are learning properly. It was about 6 week worth of work that I was unable to check if what I was doing was correct – it made it very hard for working on assignment."
"Providing lectures before the time is important for international student particularly to have chance to read little or translate to benefit more from attending the lectures, as well as referring to relevant readings to maximize the chance of understanding statistics subject."
It is useful to note that the timing of resource release and feedback was important, as suggested by behaviourist theory.
5.5 Conclusion

In GHMD983/SHS940, the provision of video resources in the e-learning system was primarily aimed at supporting student learning in the subject, particularly for the distance students. Over the three iterations of the subject commencing in 2008, a high proportion of students endorsed the use of video resources for learning, ranging from 83% in 2010 to 96% in 2009. In the first implementation with the postgraduate students in GHMD983 August/Spring 2008, the video resources were considered a resounding success.
In March/Autumn 2009, similar learning resources, including video supports, were provided to the undergraduate students enrolled in the first year introductory statistics subject (STAT131). These resources were accessible via by-type resource folders in the subject e-learning site throughout the session. Based on the responses given by the students in the survey, only 39% of them endorsed the use of videos in the subject. Although the students indicated that they liked the videos and wanted more videos, some comments (15%) clearly indicated that they did not find the videos or were unaware of their availability in the e-learning site.
A re-examination of data from both GHMD983 and STAT131 revealed two major discrepancies in outcomes between the postgraduates and the undergraduates. The postgraduates valued the video resources for learning and suggested more topics to be included in the videos. On the other hand, less than half of the undergraduates perceived the videos as useful and some students were not able to find them in the e-learning site. Further observation revealed that the way in which the videos were embedded in the e-learning sites differed. In GHMD983, the resources were integrated into weekly folders associated with tasks, while in STAT131 they were placed in separate by-type resource folders. Consequently, this case study highlighted the importance of the subject design, specifically how the resources were delivered to students in the e-learning sites.
Refinements were undertaken with the postgraduates in GHMD983 August/Spring 2009 and SHS940 August/Spring 2010. Based on Agostinho et al.'s (2008) work on the LDVS representation, learning design maps (DOC files) were introduced to the students in 2009. These maps were embedded in weekly folders associated with resources in the e-learning site. Improved learning design maps (PDF files) incorporating links to resources accessible via the weekly folders were fully implemented in 2010. An examination of assessment results revealed that the students performed better in 2011, when no maps were available, than in any other year. However, the weekly folders, which contained all that the maps suggested (resources, tasks, and supports), and the by-type resource folders remained in the subject e-learning site, along with the draft and redraft assessment approach. With the exception of 2011 (no survey), the findings from the student surveys suggested that the videos being associated with the subjects' weekly tasks (in GHMD983 August/Spring 2009) and accessible via links in the learning design maps within the weekly folders (in SHS940 August/Spring 2010) worked better than placing the videos only in the by-type resource folders (in STAT131 March/Autumn 2009). The final outcomes in SHS940 also revealed that the overall top grades rate in 2010 and 2011 had significantly increased (Z = 4.72, p < 0.001) compared to the overall top grades rate between 2008 and 2009. In 2011, with similar resources to 2010 except for the maps, and with the addition of the draft and redraft of the first two assignments, the distribution of final grades yielded 85% top grades. This suggested that the subject design, using the weekly folders associated with resources, tasks, and supports, and the by-type resource folders in the e-learning site, along with the draft and redraft of assessment, had potentially supported student learning. Further, the findings showed that the postgraduates in 2010 (61%) preferred to use both the learning design maps within the weekly folders and the by-type resource folders to access resources in the e-learning system. The postgraduates were essentially successful students, whereas the undergraduates in the first year had not yet proved themselves.
With regard to the assessment systems designed for the subjects, both the postgraduates and the undergraduates regarded the laboratory tests, retests and assignments as being of great importance in helping them to learn and understand the subjects. While the postgraduates completed the tests or assignments out of class, the undergraduates suggested that the best way to complete their assessment was a mix of individual work in class and completion over two or three days in their own time. In response to the fairness of the subjects' assessment systems, both the postgraduates and the undergraduates consistently favoured the systems and indicated that they were fair throughout the sessions. It would be of benefit to conduct a study involving several groups of students in different learning contexts to identify which learning resources are beneficial for students of different backgrounds. It would also be useful to examine the robustness of the learning designs.
From the findings in this case study, the researcher was able to reflect on the different experiences of the two groups of students, the postgraduates and the undergraduates, in using the video resources. While the subjects' resources, including the videos, were similar, the students responded to them differently, and this drew attention to the different manner in which these resources were integrated within the subjects' e-learning sites. This chapter focused on the students' experiences, mainly the postgraduates' use of various learning resources, in particular the video supports provided to them through the e-learning systems. Based on the students' perspectives and the learning outcomes demonstrated in this case study, the researcher believes it is important for educators to understand the impact of learning designs and to examine how their students experience the learning process in particular learning contexts. The major aim of the next chapter (case study 2) is to evaluate the subject design, particularly via the e-learning system, with further implementation of the learning design maps along with other innovations in the first year introductory statistics subject. Reflecting upon the outcomes examined in both GHMD983 August/Spring 2008 and STAT131 March/Autumn 2009, the second case study involved two cohorts of undergraduate students enrolled in STAT131 March/Autumn 2010 and March/Autumn 2011.
CHAPTER 6
CASE STUDY 2

6.0 Introduction

In the first case study, the findings revealed that the postgraduate students in GHMD983/SHS940 were comfortable in locating resources, particularly the video supports provided in the e-learning site. Ninety-six per cent of the postgraduates endorsed the use of video resources, 82% perceived the weekly folders as helpful for their learning, and 57% valued the learning design maps provided in the subject. In GHMD983 August/Spring 2009, the videos were embedded in the weekly folders and the learning design maps associated with resources were provided in the e-learning site. In contrast, the undergraduates in STAT131 March/Autumn 2009 reported that they did not find the videos placed in the e-learning site, although students who used them found the resources worthwhile. This suggested that the videos potentially supported student learning but that in STAT131 the resources were possibly not well integrated within the e-learning site. Therefore, this case study aimed to address the issue of subject design by placing the resources more effectively in the e-learning system so as to support student learning in STAT131.
6.1 Development phase: the subject design features

Based on findings from the 2009 design and development phase, a learning design map embedded in the weekly folders was introduced to students in STAT131 March/Autumn 2010 via the e-learning site. The learning design maps were initially trialled in GHMD983 August/Spring 2009 using DOC files. The maps were refined in STAT131 March/Autumn 2010, where the file type used for the map changed from DOC to PDF. In 2010, the graphics, text and links displayed in the map were initially created and saved as a file in Microsoft Office Publisher 2007. The file was then converted to a PDF file which included several active links. The conversion of the files from Microsoft Office Publisher 2007 documents to PDF files was intended to overcome the technical issue of web browser incompatibility (e.g. Internet Explorer, Firefox, Google Chrome, Safari). The files were finally uploaded to the e-learning system, which enabled both PC and Mac users to access the map as well as the links provided within it (see Figure 6.1).
Figure 6.1 Learning design map for weekly work combining the resources, tasks and supports (the map is annotated with active links, e.g. "Click on this link to access the lecture notes for Week 2" and "Click on this link to access the video on creating data files in SPSS")
As with GHMD983, the aim of the learning design maps in STAT131 was to support the students in completing their weekly laboratory tasks by providing directions to various learning resources, in particular the video resources. As can be seen in Figure 6.1, the students could access the resources by clicking on the links provided in the map. The primary resources included the lecture notes and Edu-stream (audio-recorded lectures); the laboratory tasks and, in this first year subject, laboratory tests, worked solutions, and data sets; other specific learning resources such as video resources supporting most topics and SPSS notes; and ongoing support materials such as learning strategies, the student forum, past exams and past laboratory tests, and links to student support advisers and other relevant websites. All resources, including the assessment system, were similar to the previous year (i.e. STAT131 March/Autumn 2009), and the same lecturer taught and co-ordinated STAT131 in March/Autumn 2010. The major difference was the subject design: the placement of the resources in the e-learning site changed from a list of by-type resource folders (in 2009) to weekly folders (see Figure 6.2). Note that the learning design maps were accessible via the weekly folders located on the subject e-learning homepage.
Figure 6.2 The weekly folders located in the STAT131 March/Autumn 2010 e-learning homepage
6.1.1 Method

The steps involved in the collection of evaluation data during this stage of the development phase of the study were as follows:
1. Ethics approval was obtained from the University of Wollongong Ethics Committee (ethics number: HE09/021) in order to invite students to participate in the investigation of the impact of video resources on student learning outcomes.
2. In STAT131 March/Autumn 2010, students were approached and initially informed about the purpose of the study through a Participant Information Sheet delivered via email.
3. In addition to the information sheet supplied to the students, they were asked to provide a return email or completed permission slip (consent form) giving their consent to participate in the study. The students were told that their participation was voluntary and that they were free to refuse to participate and to withdraw from the study at any time. The students would not be penalised for not participating in the study, and they were informed that the outcome of the study should be beneficial for future students.
4. Students in STAT131 March/Autumn 2010 completed the online surveys at the end of the session through the e-learning system and returned the completed consent form (via email) indicating their agreement to participate in the study.
6.1.2 Outcomes

In the development phase, the study involved a cohort of 191 undergraduate students enrolled in STAT131 March/Autumn 2010. A large proportion of these students (74%, n = 142) were computing students. Of the 191 students, 58% (n = 111) took part in the survey. The students' background data presented in Table 6.1 reveal that the majority of respondents were male (82.9%) and domestic students (79%). A little over half of the students (57%) expected to achieve a credit grade or better as their final grade.
Table 6.1 Distribution of students enrolled in STAT131 and survey response rate between the 2009 and 2010 cohorts

                                         March/Autumn 2009 (Baseline)   March/Autumn 2010
Total enrolled (N)a                      89                             191
Total computing students enrolleda       32                             142
Number of responses (n)                  38                             111
Response rate (%)                        42.7                           58.1
Female response rate (%)                 44.7                           17.1b
Domestic students (%)                    71.1                           79.3b
Expecting a credit grade or above        71.1                           58.9c

a Data source from the University of Wollongong. Figures might change slightly at different dates as students were retrospectively withdrawn without penalty from the subject.
b 1.8% did not answer this category (n = 2)
c 3.6% did not answer this category (n = 4)
In comparison, the number of students enrolled in 2010 was almost double that in 2009 (89 students), perhaps due to timetabling; for instance, students could complete another statistics subject in August/Spring 2009 as an alternative to taking STAT131 in March/Autumn 2009. A test for differences in two proportions was used to examine the differences in the students' backgrounds between the 2009 and 2010 cohorts. The proportion of computing students enrolled in 2010 (74%) was significantly higher (Z = 6.09, p < 0.001) than in 2009 (36%). The response rate was higher (Z = 2.34, p = 0.01) in 2010 compared to 2009, possibly because the students completed the online survey during laboratory classes rather than in their own time. On the other hand, the proportion of female students who responded to the survey was higher (Z = 3.43, p < 0.001) in 2009 than in 2010, which is also indicative of fewer female students being enrolled in computing courses.
An analysis of students' final grades revealed that the distributions of final and expected grades were similar in 2010 (refer to Table 6.2). For instance, about 53% of the students achieved at least a credit grade, while 59% had anticipated such grades. However, the proportion of students who failed (18%) was significantly higher (p < 0.001) than the proportion of students expecting to fail (2%), using Fisher's exact test because the conditions for the normal approximation (n p̂ and n(1 − p̂) both at least 5) were not met (a computational sketch of this comparison is given after Table 6.2). This pattern of responses was similar across the two sessions; usually students who respond to the surveys tend to still attend classes at the end of the session, whereas many low achieving students have effectively dropped out and do not complete the surveys (refer to Table 6.2). An analysis of differences in proportions revealed that the percentage of students attaining the highest grade (High Distinction) in 2010 was almost triple that in 2009, but the difference was not significant at the 0.05 level (again using Fisher's exact test). Nevertheless, the failure rate remained similar across both sessions, ranging between 23% (2009) and 18% (2010).

Table 6.2 Percentages of students' final grades and their expected performance in STAT131 for the 2009 and 2010 cohorts
Grade                     March/Autumn 2009                    March/Autumn 2010
                          Final (N = 89)   Expected (n = 38)   Final (N = 191)   Expected (n = 111)
Fail                      22.5             -a                  18.3              1.8
Pass & Pass conceded      24.6             28.9                28.8              39.3
Credit                    27.0             42.1                25.7              33.6
Distinction               22.5             21.1                17.8              17.8
High Distinction          3.4              7.9                 9.4               7.5

a None of the students anticipated failing the subject
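As foreshadowed above, a minimal sketch of the Fisher's exact test comparison of failure proportions is given below. The cell counts are approximate reconstructions from the reported percentages (18.3% of 191 final grades and 1.8% of 111 expected grades) and are therefore assumptions; scipy is used as one standard implementation rather than the software used in the original analysis.

# Hedged sketch: Fisher's exact test comparing actual vs expected failure proportions (2010).
# Cell counts are reconstructed from reported percentages and are assumptions.
from scipy.stats import fisher_exact

table = [[35, 156],   # final grades: fail, not fail (approx. 18.3% of 191)
         [2, 109]]    # expected grades: fail, not fail (approx. 1.8% of 111)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.5f}")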
6.1.1.1 Impact of learning resources on student learning
As in the previous year, the students in 2010 were asked to rate each of the learning resources in terms of its usefulness in helping them to learn and understand the subject. Surprisingly, almost 70% of the students perceived the learning design maps as useful for learning (refer to Table 6.3). A test for differences in two proportions revealed no significant difference in the proportion of students who endorsed the use of the learning design maps between the undergraduates in STAT131 (69%) and the postgraduates in GHMD983 August/Spring 2009 (57%). The positive response from the students suggested that the introduction of the learning design maps in the subject was worthwhile.
Table 6.3 Perceived importance of resources on student learning in STAT131 between the 2009 and 2010 cohorts

                          March/Autumn 2009 (n = 38)   March/Autumn 2010 (n = 111)
Resource                  %a        Rankb              %a        Rankb
Laboratory tasks          97.4      1.5                88.3      3
Laboratory manual         97.4e     1.5                82.8e     5
Worked solutions          94.7      3                  96.4      1
Laboratory tests          92.1      4                  90.0      2
Laboratory classes        89.4e     5                  71.1e     10
Laboratory retests        86.8      6                  74.7      7
Lectures                  84.2      7                  73.9      8
Online lecture notes      78.9      8.5                83.7      4
Tutor                     78.9      8.5                72.0      9
Teamwork                  63.1      10                 57.6      12
Video resources           39.4      11                 49.5      13
Student forum             26.3      12                 34.2      14
Lecturer                  c         -                  81.1      6
Learning design map       d         -                  69.3      11

a Percentages of students responding as moderately or extremely useful
b Ranked by percentage reporting resources useful
c Not surveyed
d Not provided or available
e Significant difference at the 0.05 level
With regard to the usage of video resources among the students in 2010, nearly 40 per cent reported that they did not use the videos (39%), 27% responded that the videos helped them solve problems well in the subject, 25% indicated that the videos were time consuming, and 10% reported that they were not aware that the videos were available in the e-learning site (refer to Table 6.4). Although the videos were accessible via links in the learning design maps within the weekly folders, there were still a number of students (10%) who were not able to find them in the e-learning site.
Table 6.4 Usage of video resources in STAT131 between the 2009 and 2010 cohorts

                                                  Percentages
Student's response                                March/Autumn 2009 (n = 38)   March/Autumn 2010 (n = 111)
I was not aware the videos were available         a                            10
I did not use them                                56b                          38b
It was time consuming for me to use them          10                           25
I found it difficult to use them                  10                           a
I solved problems well when I used them           24                           27

a Not surveyed but included in a separate question
b Significant difference at the 0.05 level
From previous experience (Porter, 2007), the lecturer expected the ratings of the primary resources (lectures, assessment, laboratory tasks and worked solutions) to be high, above 85%, noting that the value of one resource changes with the improvement of another. As expected, the most important resources in terms of helping student learning in March/Autumn 2010 were the worked solutions (96%), laboratory tests (90%), and laboratory tasks (88%) (refer to Table 6.3). Other primary resources were also valued as useful for learning, including the online lecture notes (84%), laboratory manual (83%), and the lecturer (81%). The proportions of students valuing the other resources were moderate, ranging from 50% for the video resources to 75% for the laboratory retests. Consistent with previous findings, the student forum was rated as least important for learning; it was an additional resource for communication between the students and the lecturer, along with email and communication in lectures, and it also provided a means of student-to-student communication. A test for differences in two proportions was used to examine differences in students' perceptions of the usefulness of resources for learning. The proportions of students who valued the laboratory classes (Z = 2.27, p = 0.011) and the laboratory manual (Z = 2.27, p = 0.011) were significantly higher in 2009 than in 2010. There was no significant difference in the proportion of students perceiving the video resources as useful, although the percentage was somewhat higher in 2010 than in 2009 (refer to Table 6.3). However, the proportion of students reporting that they never used the videos declined significantly (Z = 1.94, p = 0.026) in 2010 compared to 2009 (refer to Table 6.4).
6.1.1.2 Learning design map as a support resource
In relation to the provision of the learning design maps as a support resource in the e-learning site for the 2010 cohort, it was necessary in this study to identify how and to what extent this resource was useful in assisting students to learn statistics. For this reason, questions in the survey asked the students to indicate whether the maps were beneficial for their learning and, if so, what and how they had gained from using them. As tabulated in Table 6.5, the findings revealed that the students had several uses for the learning design map which helped them to learn and understand the subject. The maps were most used as a connection between tasks and resources (81%) and as a tool for revision (80%). The next major uses were to find reference materials (76%), to organise their work and learning materials in the subject (68%), and as a study checklist (61%).
Table 6.5 Usage of the learning design map in STAT131 March/Autumn 2010

Usage                                       Percentages of students, March/Autumn 2010 (n = 111)
Work or tasks connection to resources       81.1a
Reference materials                         75.7a
Organise work and learning materials        67.6
Revision tool                               80.2a
Study check list                            61.3a

a n = 2 students did not answer this category
The students' comments in relation to the uses of the learning design map were analysed according to themes. The emerging themes were largely consistent with the students' endorsement of uses presented in Table 6.5, with one additional theme related to completing assessment. Examples of their comments were as follows:
Work or tasks connection to resources (23.3% of the 30 responses)
"I used the flowchart to check that I was doing all the work and up to date with all my work."
"To access the lecture notes, complete laboratory tasks, and revise laboratory tasks."
"I used it to download the lecture notes and laboratory solutions and to see what parts link to where."
"To follow weekly laboratory work."
Revision tool (20% of the 30 responses)
"Good for revision and learning."
"I used it as a learning compass in this subject. It is easy to check my weakness point of this subject."
"It was useful to learn new concepts."
Reference materials (20% of the 30 responses)
"Use the maps to locate resources."
"I used it to download the content as the map contained the links required."
"The link to discussion forum used to find some new ideas."
"Check the lecture notes and laboratory solutions."
Complete the assessment (36.7% of the 30 responses)
"I used it to access the information for revision before tests."
"Use the flowchart to help with the tests and retests. The sample test helped a lot with this. The worked solutions helped gain a better understanding."
"Use to gauge when I needed to study for the lab tests as they worked out to be a week or two behind."
"Help me to revise the weekly tasks for laboratory tests."
"The maps were for helping me to complete all assessment in this subject."

The students in 2010 were asked to compare the implementation of the learning design map (flowchart) via weekly folders to that in other subjects' e-learning sites. Of the 83 responses, approximately 61% endorsed the use of the maps in the subject, as their comments stated below:
"Here is the only place that uses a visual organization of the subject, I don't actually use it all that much, and just create a printable version and download all the information for each week. I find it easier to understand that way."
"I think it is really good so that you know what you should do every week in order to prepare for this subject."
"This flowchart has been more helpful than other subjects. No other subject has a weekly learning design map. It makes easier to navigate around the subject information and resources."
"Design of this subject is new and has user-friendly interface."
"This subject is my only one that utilizes weekly learning design maps as well as flowcharts. But it utilizes them well!"
"This subject is organised much better than my other subjects, namely the fact that each week has its own structure whereas other subjects just put all the resources up. I think this is a better method."
"It works quite well actually. Some other subjects don't even use e-learning, and the learning design map means you can find all the information for topics in one page."
"Better than a lot of others and I used it more often than other subjects."
"The learning design map allows us to keep up with weekly work and gives a brief description for each week."
The comments also revealed that there was room for improvement in the development of the map in the subject. Of the 83 responses, a small proportion of students (15%) responded negatively to the implementation of the learning design map in the subject. For instance, many computing students enrolled in this subject tended to move their files to another computer system and hence disliked the multiple files used within the map. Further, the maps were seen by a few students as complex, containing an excessive amount of information and leading to confusion. Some of the comments made were as follows:
"I found the weekly flowchart to be messy and annoying. I feel that if each week was broken up into separate folders, e.g. Week 1 folder and inside that, lab, lecture notes, test etc. This system would have been much more helpful than the flowchart system as this would also match the system used by the majority of subjects."
"Other subjects tend to do it better, the flow map got irritating as it made the lecture names come out something like 'relative manager' which just got annoying."
"Hard to understand, seems to over complicated."
"Increase my reluctance to work on statistics homework, however all my other subjects got much more attention then."
"It would be an extra step to get to the information I would need."
"Technical issues make using it quite difficult. If access could be reliable I'm sure it would be good."
"It is really different, i would much prefer it to be like the other subjects where the materials are simply divided into subjects instead of weeks, makes it MUCH easier to revise work done."
"I found it kind of confusing and not really very useful."

There were also some students (11% of the 83 responses) who, while in favour of using the maps, experienced frustration or difficulties with technical issues and the structure of the maps. For instance:
"It seemed harder to use. I prefer the folder design that has lectures in a lectures folder etc. However I do like the fact that each week has a subheading with the topics covered in that week which makes it a lot easier to revise each individual topic."
"It makes more complicated, but good idea."
"Technical issues make using it quite difficult. If access could be reliable I'm sure it would be good. But thus far it has been useless and stress inducing."
"They just have a bunch of lecture slides... this allowed for easier finding of a particular topic you are having troubles with, however was painful not being able to open up multiple weeks at once due to e-learning only being able to use one tab."
"The materials were somewhat harder to access owing to browser issues, it would be useful if the files were also available without having to access the learning design map, but rather to use the learning design map as an alternate access/organizing method."
"The flowchart was very visually appealing but some of the links did not work properly on most computers and I had trouble accessing the information, other subjects however had a less visually appealing layout but I was able to access all the information. If the links to the information worked properly then the learning design map was a great idea."
"It is a good tool to outline the content of the subject in the early stages but as the content of subject expands and we get to week 7 or later, the flowcharts are extremely difficult to follow because there are interlinks between the pages so some content appears multiple times. It becomes frustrating and it does not make STAT any easier."
The remaining 13% (of the 83 responses) were uncertain whether using the maps in the subject was beneficial for them compared to other subjects. Examples of their comments were as follows:
"It's the only one I've seen."
"I find it has the same high level teaching methods, that other subjects also use."
"Other subjects have no flowchart or learning design map. The flowchart links different weeks whereas no other subjects I have do."
"It's completely different. None of them have a weekly learning design map."
It was also necessary in this study to gain feedback from the teachers' perspective regarding the use of the maps in the subject. For this reason, four tutors were interviewed at the end of the session in March/Autumn 2010 and asked, "What is your experience of teaching this subject using the weekly learning design map? What worked well and what did not?". The results revealed that two tutors perceived the maps as useful for delivering resources to students and organising their teaching materials in laboratory classes, while the other two tutors had different views on their implementation. The comments made were as follows:
As a tutor, the weekly map identifying tasks and associated resources was extremely useful. It was an incredibly useful organiser in the laboratory class, allowing download of the particular task either the teacher or students wanted to be worked together.
It is better than normal teaching via the e-learning site because the teaching materials are more organised. It was good experience using the weekly learning design map, but it was slightly worse than normal teaching via the e-learning site as it needs longer time to go through the tasks. I was a bit confused about the new structure used in the subject e-learning site, and I think the normal one was much easier to be used.
6.1.1.3 Student perceived confidence with topics
As with the survey in 2009, the students in 2010 were asked to indicate how confident they were in relation to each of the major topics. The possible ratings were: 'not at all confident', 'might have a little difficulty', 'moderately confident', and 'could do this'. The rankings of student confidence can be used to identify which additional learning resources (assessment, worked examples, improved lecture notes, video resources) could be of benefit. For this reason, an evaluation of the importance of each topic should also be considered. The aim here was to recognise which topics or skills in the subject were most in need of additional support and should be considered the highest priority. Note that most topics were covered by the video supports linked to the weekly learning design maps provided in the e-learning site. At the end of the session, a little over 80% of the students in 2010 perceived themselves as confident in two topics: 87% in exploratory data analysis and 82% in correlation and regression analysis (refer to Table 6.6). The proportions of students reporting that they were comfortable with other topics were moderate, ranging from 58% in Normal and Exponential distribution to 75% in Binomial and Poisson distribution. It was of concern that almost half of the students reported that they were less confident in Normal and Exponential distribution even though video supports were provided for these topics.
Table 6.6 Student perceived confidence on specific topics in STAT131 between the 2009 and 2010 cohorts

Topic | March/Autumn 2009 (n = 38): %a (Rankb) | March/Autumn 2010 (n = 111): %a (Rankb)
Exploratory (measures of centre, shape, spread and outliers) | 97.4c (1) | 86.7c (1)
Correlation and regression | 76.3 (4) | 81.5 (2)
Binomial and Poisson distribution | 68.4 (7) | 74.7 (3)
Confidence intervals | 73.7 (5) | 74.3 (4)
Using SPSS | 81.6 (2) | 72.6 (5)
Model fitting | 68.5 (6) | 71.2 (6)
Hypotheses tests | 79.0 (3) | 68.2 (7)
Normal and Exponential distribution | 63.2 (8) | 58.2 (8)

a Percentages of students responding as moderately confident or could do this (valid per cent)
b Ranked by percentage reporting confident with topic
c Significant difference at the 0.05 level
An analysis of differences in two proportions revealed that there was a significant difference (Z = 1.85, p = 0.03) in the proportion of students perceiving themselves as confident in exploratory data analysis with a higher proportion comfortable in 2009 than in 2010. However, there were no significant differences in proportions of students feeling comfortable in other topics between the two cohorts.
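For readers wishing to reproduce this style of comparison, a minimal sketch of a pooled two-proportion z-test is given below. The counts are illustrative only (reconstructed approximately from the reported percentages), and the helper two_proportion_z is a hypothetical function written for this sketch, not part of any software used in the study.

    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z(x1, n1, x2, n2):
        """Pooled two-proportion z-test; returns the z statistic and a one-sided p-value."""
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0: p1 = p2
        se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) # standard error of the difference
        z = (p1 - p2) / se
        return z, norm.sf(abs(z))

    # Illustrative counts only: 2009, 37 of 38 confident; 2010, 96 of 111 confident
    z, p = two_proportion_z(37, 38, 96, 111)
    print(round(z, 2), round(p, 3))   # approximately Z = 1.9, p = 0.03 for these reconstructed counts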
6.1.1.4 Changes in comfort at the end of session
In terms of the changes in students' comfort after completing the subject, only a little over half of the students in 2010 reported that they were comfortable or less anxious in four aspects related to the subject, ranging from 52% in formulating and testing hypotheses to 56% in solving statistics questions using a computer (refer to Table 6.7).
Table 6.7 Changes in students' perspective after completing the subject in STAT131 between the 2009 and 2010 cohorts

Category | March/Autumn 2009 (n = 38): %a (Rankb) | March/Autumn 2010 (n = 111): %a (Rankb)
Solving statistics questions using computer | 52.6 (3) | 55.9 (1)
Explaining statistical findings | 55.3 (1) | 54.5 (2)
Reading statistical findings | c | 53.4 (3)
Formulating and testing hypotheses | 55.2 (2) | 52.0 (4)
Working with numbers | 36.8 (5) | 44.2 (5)
Taking this subject that involves mathematics | 23.7 (7) | 36.5 (7)
Taking this subject that involves statistics | 39.4 (4) | 42.6 (6)
Taking this subject that involves computing | 28.9 (6) | 33.0 (8)

a Percentages of students responding as a little comfortable or much more comfortable
b Ranked by percentage reporting comfortable
c Not surveyed
The students in both cohorts showed similar patterns of change in their levels of comfort at the end of the session. An analysis of differences in two proportions revealed no significant differences between the 2009 and 2010 cohorts in the changes in students' perspective at the end of the session for any of the aspects of the subject (refer to Table 6.7).
6.1.1.5 Assessment for learning
For the assessment in 2010, four laboratory tests and a group assignment were set on specific topics: exploratory data analysis, bivariate data analysis, probability and models, goodness of fit and chi-square independence tests, Normal and Exponential distributions, and estimation and hypothesis testing. The students sat a mix of two 'in-class' tests (first and fourth assessments), two take-home tests (second and third assessments), and one group assignment (in teams of two students). As with the previous year, students who did not pass a test the first time were required to sit a second version of the test (retest) out of class. The students were provided with feedback on their own work in the first assessment and were also provided with worked solutions. The retest completed out of class involved a different data set for the same topic but, where possible, different wording and sequencing of questions. A mark of 70% had to be obtained on the retest, but this was now the maximum mark that could be awarded. The tests, retests, sample tests, worked solutions, and feedback were provided to the students via the e-learning site. This assessment system was useful in allowing the lecturer to identify at-risk students early. Of particular interest, there were some students who did not attempt a test on a second occasion, or did not follow up despite direct feedback as to how to correct their answers. With seven students listed as requiring special consideration, the challenge became how to identify the type of help required and how to provide it. For example, some students needed to find ways to communicate what they knew. An analysis of assessment marks in 2010 showed that the students attained average marks above 7 (each assessment worth a total of 10 marks) in their group assignment on the bivariate data analysis topic and in the last two assessments, on Normal and Exponential distributions and on estimation and hypothesis testing (see Table 6.8). This finding contradicted the students' perceptions of their competency with these topics, particularly Normal and Exponential distributions (refer to Table 6.6). Topics such as estimation and hypothesis testing, and Normal and Exponential distributions, were taught in the middle to the end of the session; perhaps this was the reason for the low confidence among the students in those topics.
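As a minimal sketch of one reading of the test–retest marking rule described above (an interpretation for illustration only, not the lecturer's actual marking procedure), the recorded mark might be derived as follows, assuming a pass mark of 7 out of 10 and a retest cap of 70%:

    def recorded_mark(test_mark, retest_mark=None, pass_mark=7.0, cap=7.0):
        """Recorded mark under a test-retest rule with a 70% cap on the retest (interpretation only)."""
        if test_mark >= pass_mark or retest_mark is None:
            return test_mark          # passed first time (or no retest sat): keep the original mark
        return min(retest_mark, cap)  # retest counts, capped at 7 out of 10

    print(recorded_mark(8.5))         # passed first time -> 8.5
    print(recorded_mark(5.0, 9.0))    # failed, strong retest -> capped at 7.0
    print(recorded_mark(5.0, 6.0))    # failed, weaker retest -> 6.0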
Table 6.8 Distributions and changes of mean assessment marks on specific topics in STAT131 between the 2009 and 2010 cohorts

Topic | Cohort | N | Meana | S.D.b | Mean differencec | S.E.d
1. Exploratory Data Analysis | 2009 | 76 | 6.07 | 1.79 | -0.55 | 0.26
1. Exploratory Data Analysis | 2010 | 183 | 5.46 | 1.87 | |
2. Bivariate Data Analysise | 2009 | 73 | 5.84 | 1.79 | 2.04 | 0.25
2. Bivariate Data Analysise | 2010 | 163 | 7.84 | 1.79 | |
3. Probability & Models, Goodness of Fit & Chi-square tests | 2009 | 72 | 7.36 | 2.08 | -0.62 | 0.30
3. Probability & Models, Goodness of Fit & Chi-square tests | 2010 | 164 | 6.75 | 2.07 | |
4. Normal & Exponential distributions | 2009 | 66 | 7.28 | 1.67 | 0.06 | 0.29
4. Normal & Exponential distributions | 2010 | 170 | 7.33 | 2.50 | |
5. Estimation & Hypothesis testingf | 2009 | 67 | 8.94 | 0.78 | -0.67 | 0.13
5. Estimation & Hypothesis testingf | 2010 | 153 | 8.24 | 1.25 | |

a Zero marks have been removed from all assessments as they represent non-submission and not necessarily low achievement
b Standard deviation
c Difference in mean marks (2010 less 2009)
d Standard error of the difference
e Set as second assessment in 2009 but as group assignment in 2010
f Set as fourth assessment in 2010 but as group assignment in 2009
To compare the students' performance in assessment between the two cohorts, independent t-tests were used. The results revealed significant differences in the average student marks (out of 10) on several topics between the 2009 and 2010 cohorts (refer to Table 6.8). The students in 2009 achieved significantly different mean marks on four topics compared to the students in 2010, these being: exploratory data analysis (t248 = -2.158, p = 0.032 with equal variances assumed, F = 0.339, p = 0.561), bivariate data analysis (t230 = 8.081, p < 0.001 with equal variances assumed, F = 0.170, p = 0.680), probability and models, goodness of fit and chi-square tests (t230 = -2.072, p = 0.039 with equal variances assumed, F = 0.098, p = 0.754), and estimation and hypothesis testing (t185.170 = -5.061, p < 0.001 with unequal variances assumed, F = 15.225, p < 0.001). These results showed that the mean assessment marks on those four topics changed significantly from 2009 to 2010, with the greatest mean change of 2.04 on bivariate data analysis. The students in 2010 were asked whether they had taken all of the retests in the subject. From the 94 responses, about 58% responded that they sat all the retests, and 23% responded that either they passed all tests (obtained 7 marks or above in each test) or they never sat all of the tests, while 19% reported that they sat only some of the retests, for reasons such as the following:
I did sit the week 5 retest, I did not need to sit the week 7 retest, and in reference to the week 9 retest, my mark for week 8 test was wrong so I emailed lecturer and told her that it was wrong and she said if I didn‟t get the marks for the week 8 test she would tell me when I had to do the retest by and then she never told me when I had to do it by. I missed one because I had assignments and a birthday and really struggled to submit it and I ended up submitting it 4 hours late but it was considered as not submitted. I only missed one due to the passing of a family member. I failed one of the retests... that sucked a bit. I failed one retest and didn‟t bother with the re-retests. Missed one, at first thought I wasn‟t able to submit it later, then didn‟t get around to it after I found out I was able to submit it at a later date. It is because I was unaware that after the initial deadline (which it was impossible for me to do, because of technical reasons) for the first attempt was passed that the test can be redone. It was only recently that I was told that I could re-attempt the test and get a mark. In terms of the feedback given by the lecturer in helping students to complete the retests, 81% of the 89 students reported that they had sufficient amount of information or feedback to re-sit the tests, 11% indicated that they have had not enough feedback from the lecturer, and 8% indicated that the question was not applicable (NA) for them to respond. Examples of comments were as follows:
Sufficient feedback (81% of the 89 responses)
I have had enough information to help with re-tests.
Yeah, loved the worked solutions, they practically taught me the course.
The information and learning curve from the mistakes that I made in the first lab test did not carry on. It was helpful to have the answers but having an answer means nothing if you do not know how to work it out. I would prefer not getting given the answer and having better instruction because then you actually learn something. Yes, I had more than enough information to help me with the tests, this made retests much easier. Not enough feedback (11% of the 89 responses) Marks were not always sufficiently explained, results were often received too late. Often there was no more information available for students sitting the retest. Not really, it would have been more helpful if we had a little more feedback. Not enough feedback, particularly on how to use SPSS to obtain data.
6.1.1.6 Improvement of the subject
In the 2010 survey, the students were asked to provide suggestions for improvement of the subject. Of the 33 responses, several themes were identified particularly regarding laboratory classes and materials (37%), lecture presentations (24%), assessment system (15%), lecturer and/or tutor (12%), and others (12%). Some of their comments were as follows:
Laboratory classes and materials More handouts and formula based examples for all parts of the course, and a far better explanation of how to operate SPSS. I was unable to understand how to use SPSS to generate data I needed. Maybe we need a session for learning on how to use SPSS. As laboratory classes were not that helpful. Instead of using SPSS for everything, sit down and do Maths with a calculator. It‟s all good in knowing how to get a program to do it. But we need to know how to do it for an exam; a lot of people are going through this because of computer science and won‟t use much of this in the real world. So we only need to know how to get through the exam. Getting the worked solutions online faster may help in this subject. Have a more detailed explanation of how to get certain outputs in SPSS in the lab manual, showing us how to get what outputs add lines in etc. It can all be confusing at times. Maybe need a glossary of terms too.
Lectures’ presentations Slow some of the explanations down. I had trouble keeping up in a number of lectures because they were too fast and the tutor i had wasn‟t very good. Far less content! Get to the point of the exercise and not fart around the points as they become lost in a sea of confusion. Lectures are far too verbose in content and don‟t explain the learning points clearly. Clearer identification of formulas as in week 12 I still don‟t know what goes where! A weekly summary would be valuable! Make lectures slower and understandable. Get people interested in statistics with interesting lectures. Put up annotated copies of lecture slides and reward those who attend lectures. Assessment system Change minimum pass mark to be in line with most other subjects (40%). Not having a compulsory 70% to pass the tests. I feel that this subject is split between 2 different areas. By this I mean that the final test will not be sat on a computer, but with paper and pen. I have took other statistics subject previously and I feel that the subject has 2 options, either if there will be a final test then the subject should stop doing labs and move to doing tutorials where the students sit down with calculators and pens and work out the answers in that way, and so prepare them for the final exam. The other option is that the subject stops doing the final exam and instead does a large assessable item at the end of the session (week 13), or make the bi-weekly tests longer and worth more marks. Either option will give students the necessary information to pass this subject. Otherwise students are wasting their time in labs and doing lab tests when the final exam is on paper. Lecturer and/or tutor Check up on lab demonstrators to ensure quality/ability - mark on laboratory tasks. Mr. X should take ALL lectures and tutorials. He has really helped me learn about explaining statistical information, hence allowing me to pass my lab tests! Others Do not include this subject in the Bachelor Computer Science core. Have a “PASS” session for this subject.
6.1.3 Changes in outcomes between cohorts: major observations

As reported in the previous findings, the undergraduate students in 2009 often failed to locate the video resources provided in the e-learning site, hence few students (39%) regarded the resources as useful. By contrast, the postgraduates (GHMD983) in both 2008 and 2009 endorsed the use of videos for learning. Consequently, a new subject design for STAT131 March/Autumn 2010 was implemented in the e-learning system using weekly folders. Learning design maps were integrated within the weekly folders and provided links to other resources. At the end of the session, several major observations were identified when comparing the changes in outcomes between the 2009 and 2010 cohorts. There was no significant difference between the two cohorts in the proportion of students who endorsed the use of videos in the subject, although the proportion increased slightly, by 10%, in 2010 compared to 2009. The proportion of students who reported that they did not use the videos decreased significantly in 2010 (38%) compared to 2009 (56%), and 10% of the students in 2010 were not aware of the availability of the videos in contrast to 15% in 2009. It was also found that the proportion of students who valued the laboratory classes and laboratory manual was significantly higher in 2009 than in 2010. In relation to the use of the weekly learning design maps in locating resources in the e-learning site, 61% of the students in 2010 commented positively on their usage. Nevertheless, there were some shortcomings in providing the maps as PDF files in the e-learning system: 11% of the 83 students reported that they experienced technical difficulties in accessing the files linked in the maps. Examples of their comments were as follows:
Links to key parts from outside of the PDF document for easier access.
Do not use PDFs for the map (as the only form of getting to the information - the arrows are a tiny bit helpful in working out what is what). Use a normal page with links. The links also (in addition to the advantages specified in the previous question) allow the most up to date information to be easily available.
Quite good if it was integrated, using a PDF is not very effective as some browsers do not allow you to view the PDF's inside the browser and downloads it to your computer instead.
Harder and more annoying to use, as the links in the PDF result in poor or messy performance (for example, I cannot leave the map open on the browser and follow a link, so then I have to go all the way back the home page and re-open the PDF after I have finished with the page (that is disregarding that I may want multiple things open to quickly switch between them). Having to re-download the PDF (since I save the PDF's for offline use - since PDF's in the browser are either not supported or lead to a poor or messy experience. Other subjects just have another page with links for each week that perform better (easier and less annoying) than the PDF design.
Improve the links that are not only available from e-learning but also in PDF downloaded.
Integrate the map into e-learning as a web page rather than a PDF.
The comparison of student perceived competency with topics between the cohorts revealed that the proportion of students reporting confidence in exploratory data analysis was significantly higher in 2009 than in 2010. At the end of the session, there were no significant differences between the two cohorts in the changes in students' comfort for any of the eight aspects of the subject. In terms of the students' performance in assessment between the 2009 and 2010 cohorts, the findings revealed that the students in 2009 outperformed those in 2010 in four out of five major topic areas assessed in the subject. These topics were (1) exploratory data analysis, (2) bivariate data analysis, (3) probability and models, goodness of fit and chi-square tests, and (4) estimation and hypothesis testing. The final grades in 2010 were not significantly different from those in 2009. This was disappointing, as the main interest of this study was in identifying a difference in fail grades; however, the fail rate remained similar across the two sessions, March/Autumn 2009 and March/Autumn 2010. Although the subject design and most resources in 2010 were similar to those in GHMD983 August/Spring 2009, the postgraduate students outperformed the undergraduates. That is, a test for differences in two proportions revealed that the proportion of students attaining higher grades in 2009 (69% of the postgraduates) was significantly higher (Z = 2.24, p = 0.013) than among the undergraduates in 2010 (53%). This suggests that the undergraduates possibly needed some other or additional forms of support beyond access to the video resources and learning design maps provided in the e-learning site.
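The assessment comparisons above rely on independent-samples t-tests, with a test for equality of variances (as SPSS reports via Levene's test) used to decide between the pooled and unequal-variance forms. A minimal sketch of that procedure is given below; the arrays are placeholder data generated for illustration, not the actual 2009 and 2010 marks, and compare_cohorts is a hypothetical helper.

    import numpy as np
    from scipy import stats

    def compare_cohorts(marks_a, marks_b, alpha=0.05):
        """Independent-samples t-test, choosing the pooled or Welch form via Levene's test."""
        f_stat, f_p = stats.levene(marks_a, marks_b)   # test equality of variances
        equal_var = f_p > alpha                        # assume equal variances if Levene is not significant
        t_stat, t_p = stats.ttest_ind(marks_a, marks_b, equal_var=equal_var)
        return {"levene_F": f_stat, "levene_p": f_p,
                "equal_var_assumed": equal_var, "t": t_stat, "p": t_p}

    # Placeholder marks (out of 10), for illustration only
    rng = np.random.default_rng(0)
    marks_2009 = rng.normal(6.1, 1.8, 76).clip(0, 10)
    marks_2010 = rng.normal(5.5, 1.9, 183).clip(0, 10)
    print(compare_cohorts(marks_2009, marks_2010))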
6.2 Implementation phase: new design features

Reflection on the two cohorts in 2009 and 2010 suggested the need for another solution to the issue of high failure rates. With good quality resources clearly identified for each topic, there remained a puzzling question: "what else could be done?". Perhaps the assessment could be improved; however, when the students were asked if they would like different assessments, only one indicated they would. Thinking laterally, the prospect of giving 'time poor' students a headstart to the subject was also canvassed in the last lecture. Almost seventy per cent of the completing students in 2010 indicated through a show of hands that they would have liked access to a Headstart program (Porter, 2011a). This led to a new direction in the subject design, introducing the Headstart program for students in 2011. Further refinement of the learning design maps was also undertaken in 2011 based on the recommendations provided in 2010. The following sections discuss in detail the three new design features integrated within the subject e-learning site in 2011.
6.2.1 Headstart program Initiated in February/Autumn 2011, a Headstart program was introduced to STAT131 allowing students to start engaging with statistics prior to the commencement of the session. The students accessed the Headstart program through the e-learning system set up for STAT131 in February/Autumn 2011 (see Figure 6.3).
Figure 6.3 A Headstart program in the STAT131 e-learning site
The Headstart was originally conceived of as a program that extended the time students would have to learn statistics. The idea of introducing this new design in the subject was based on the students' experience, which suggested that the thirteen weeks of session allowed insufficient time for them to adequately learn and understand the subject. An alternative approach would have involved a curriculum review; it was decided to leave the subject objectives the same and not to reduce the content or processes to be learned. The Headstart was an optional
program held in the four weeks before the session formally commenced in 2011. Some „within session‟ tasks and resources were included in the Headstart in addition to alternative tasks and resources. In the literature, there are several forms of headstart program defined according to the context in which the program applies. For instance, in the United States, the headstart program is commonly known as an early childhood program that provides preschool education for children aged three and over from families with incomes below or at the poverty level. This program aims to help these children become ready for kindergarten as well as to provide needed requirements such as health care and food support (source from http://www.wisegeek. com/what-is-head-start.htm). Similarly in the Australian context, this is recognised as an early childhood development (ECD) program that focuses on children from birth to eight years of age. It aims at enhancing the children brain development, setting the base for competency in learning such as numeracy, literacy, academic achievement, and coping in their later life (Sullivan & Calvert 2004; source from http://kids.nsw.gov.au/uploads/documents/headstart _full.pdf). A number of countries have implemented programs which emphasized education designed to support children in the early years. For example, Sure Start is a government program in the United Kingdom focusing on early education, child care, health and family support
(refer to http://www.education.gov.uk/childrenandyoungpeople/earlylearningandchildcare/surestart). In Sweden, early childhood education and care has been an integral part of the Swedish welfare state that aims at supporting children's development, education and wellbeing and helping parents to balance parenthood and employment or study (refer to http://www.childcarecanada.org/resources/issue-files/early-childhood-education-and-care-sweden). In Canada, Children First is a curriculum guide designed for kindergarten in Saskatchewan that emphasised developmentally appropriate practice for young children in all areas of their development including physical, socio-emotional and intellectual (refer to http://www.sasked.gov.sk.ca/docs/kindergarten/index.html). These are some examples of headstart programs developed within particular contexts to assist children to learn in the early years before they formally enrol in their first year of schooling. Within the context of Australian universities, however, the more familiar programs are transition programs designed specifically for Year 11 and 12 students to undertake selected subjects while they are still completing high school. These programs give students the opportunity to start their university studies early, thus gaining valuable university learning experience. This has been practised in many universities; for instance, the University of Southern Queensland (USQ) offered a Headstart course to Year 11 and 12 students to study alongside on-campus undergraduates via distance education (refer to http://www.usq.edu.au/school-liaison/head-start). The program guarantees their entry into university, students receive a
fee exemption, and they are allowed to gain academic credit if they successfully complete the course. For several years, similar programs have been implemented in the Sunshine Coast University in Queensland, Murdoch University in Western Australia, and the University of Adelaide in South Australia. While universities usually run the program for several weeks, the Bond University in Gold Coast, Queensland has organized a day long program specifically for students Grade 10 and 11. This is similar to an orientation week (O-week) held at the University of Wollongong which provides opportunities for students to learn more about the university, study life and career outcomes from their teaching staff and current students. These are other forms of headstart programs that emphasize early preparation to begin tertiary studies in the university among high school leavers. Of all the forms discussed above, it might be useful to offer such a program that focuses on particular subjects for students who are currently enrolled in the university. In the context of this study, the increase of failure rate particularly in STAT131 in the previous years (as discussed in Chapter 4) flagged the need for improvement of the subject design so as to improve student learning of statistics. For this reason, a Headstart program named “Stepping into Statistics” was offered to the students in 2011. It was made available on 2 nd February, 2011 and remained available until the 28th when the session commenced (refer to Figure 6.3). All resources included in the program such as the lecture notes (shown in Figure 6.4) were accessible through the e-learning site.
Figure 6.4 Sample of lecture notes included in the Headstart modules
In this program, students were allowed to access the first module of work in the subject and to complete an alternative first assignment (it was intended that they have access to a second module but this did not occur). Each lecture commenced with an activity to address a learning issue and this was followed by activities essentially integrating laboratory work within the lecture. These activities were aligned with the earlier learning theories discussed in Chapter 3. For example, the activities involved:
Lecture 01: Identifying preconceptions about statistics, refining thinking, writing information, "doing" things and checking debriefs, collecting and interpreting data, finding alternative perspectives to the nature of statistics, reading the textbook, discovering the textbook perspective, watching the video clips and determining what statistics-in-action involves, working socially in a group, and communicating via the student forum in the e-learning site.
Lecture 02: Defining terms, defining and exemplifying types of data, identifying variables, reading sample of data and recognising errors, examining sources of data, undertaking a data collection and identifying type of data, “doing” things and checking debriefs.
Lecture 03: Watching a video clip and conducting a census, identifying a sampling frame, selecting a population of interest, using a probability sampling plan, determining sample size, defining terms and describing margin of error, identifying errors in sampling approaches, searching for and identifying reasons for taking a census.
Lecture 04: Reading the textbook, defining terms of data sources and variables, understanding challenges in data collection, watching the video clip and identifying issues in collecting useable data, exploring research design, selecting and distinguishing different designs, and identifying issues when asking survey questions.
Lecture 05: Providing and discussing context to make sense of data, creating an SPSS data file, producing graphs and statistics in SPSS, analysing and reporting on the graphs and statistics produced from the data, making meaning from the data, describing relevant context, measures of centre, measures of spread, and patterns, and writing a meaningful paragraph.
To complete the assessment included in the Headstart program, students were asked to follow the instructions as stated below:
Your assignment involves you identifying a variable of interest to you or of importance to your discipline of study. You are to create a learning resource in the format of a video clip demonstrating, for the variable chosen, how to collect and analyse the data, and how to locate the context to help you detect errors and make sense of the data. Your resource will conclude with a meaningful paragraph. Marks will be awarded for: Submission of a PowerPoint draft version or story-board indicating the content to be included. The draft allows you to gain feedback before completing the video i.e. errors and missing information. Discussion of relevant context such as statistics (minimum, maximum, mean, median, range, standard deviations etc.) from other appropriately referenced research, units of measurement, relevant science… The sampling design used to collect the data. How the data were measured. Inclusion of well selected graphs to show the distribution of data (1 slide before and after removal of errors if errors are found present) and statistical output (1 slide - after error removal if errors are found present). Appropriate statements regarding the: Centre (location) of data; Spread of data;
Shape of the distribution of data; Outliers; Presence of any other patterns; Data Checking procedures, documentation of removal or fixing of errors. Conclude with a meaningful paragraph with both statistical and simple English terminology. No graphs or output are to be included in this paragraph. In-text and end-of-text referencing. Submission Open the Assessment link, HSDraft by the 21st February and submit your file for comments and marks. After submission of the draft to the e-learning email
[email protected] with the header STAT131 HSDraft, notify me that it has been submitted. When you have completed your video, taking into account suggestions based on your draft, submit your final video via HSFinal link by Saturday, 5th March. The submission system is new so further instructions may be needed. If you have queries, post them to the forum. Students who get 70% or more for the assignment do not need to sit the Laboratory Test 1 for STAT131. The students who successfully completed the first assessment given in this program were also allowed to skip the first laboratory test assessed in the formal 2011 session. These students were then required to complete the second assessment (i.e. assignment) in a formal session as shown in Figure 6.5.
Note: The second module did not materialise due to time involved in addressing HTML coding issues with the e-learning system
Figure 6.5 Assessment guide designed for the Headstart program
6.2.2 Improved learning design map

Based on the students' comments provided in the 2010 survey, a refinement of the learning design maps was undertaken in 2011. That is, in 2011 the maps were embedded in the e-learning site as HTML files rather than the PDF files provided in 2010. The stages involved in the creation of a map are described below:

Step 1: A learning design map was created using Microsoft Office Publisher 2007, containing graphics, textual descriptions, and interactive links.
Step 2: All text displayed on the map was removed and copied to a new Microsoft Office Publisher 2007 document without saving.
Step 3: The remaining graphics displayed on the map were copied into the Paint program from the Windows accessories and saved as an image file (e.g. imageweek01.jpg) (see Figure 6.6).

Figure 6.6 Image file created using the Paint program

Step 4: The image file was then uploaded to the e-learning site to be linked with the HTML document created in Step 6. For example, the image link to be used was "/webct/RelativeResourceManager/Template/images/imageweek01.jpg".
Step 5: The learning design map created in Step 1 was used again: all graphics displayed on the map were removed and replaced with the image saved in Step 3 (shown in Figure 6.6) as a new background, that is, by using the insert menu and selecting the file imageweek01.jpg. This document was then saved as a webpage file (HTML document).
Step 6: The HTML document created in Step 5 was then opened using the Notepad program and the highlighted codes (see Figure 6.7) were replaced with the image link created in Step 4.

Figure 6.7 HTML document opened using the Notepad program with the highlighted codes replaced with the image link created in Step 4

Step 7: Finally, the HTML source code was copied and pasted into the content space of the HTML file creator in the e-learning system (see Figure 6.8).

Figure 6.8 HTML source codes pasted into the content space of the HTML file creator in the e-learning site

Step 8: The learning design map was made available to students via a webpage on the e-learning site (see Figure 6.9).
Figure 6.9 Weekly learning design map created using a HTML file
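As an illustration of the substitution performed in Step 6, a small script could make the same edit programmatically. This is only a sketch under assumptions about the exported file: the file name weekly_map.html and the local image reference src="imageweek01.jpg" are hypothetical (the Publisher export may reference the image differently), and the WebCT path is the one quoted in Step 4.

    from pathlib import Path

    # Hypothetical HTML file exported from Publisher in Step 5
    html_path = Path("weekly_map.html")

    # Replace the local image reference with the relative link used by the e-learning system (Step 4)
    local_src = 'src="imageweek01.jpg"'
    elearning_src = 'src="/webct/RelativeResourceManager/Template/images/imageweek01.jpg"'

    html = html_path.read_text(encoding="utf-8")
    html = html.replace(local_src, elearning_src)
    html_path.write_text(html, encoding="utf-8")

    # The edited source can then be pasted into the HTML file creator (Step 7)
    print(html[:200])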
6.2.3 CAOS test

For the purpose of comparing the changes in performance between the 2010 and 2011 cohorts, this study used an external measure to assess the students' baseline skills in an introductory statistics subject. This measure is the CAOS (Comprehensive Assessment of Outcomes in Statistics) test, designed to assess basic statistical literacy and reasoning among students. The test was administered at the beginning (pre-test) and the end of the session (post-test) for both cohorts. The CAOS test was developed by a team of statistics education researchers (delMas, Garfield, Ooms & Chance, 2006) based on their work with the ARTIST project funded by the National Science Foundation (NSF). ARTIST is an acronym for "Assessment Resources Tools for Improving Statistical Thinking", accessible from https://app.gen.umn.edu/artist/index.html. This tool is designed to measure students' basic literacy and reasoning about descriptive statistics, probability, bivariate data, and basic types of statistical inference. The test instrument was developed through a three-year process of development, revisions, and feedback from the ARTIST advisors and class testers, and a large content validity assessment using 30 experienced Advanced Placement Statistics Readers (delMas et al., 2006). The first version of CAOS, developed in late 2004, consisted of 34 multiple-choice items; the second version, released in January 2005 after additional revisions, consisted of 37 multiple-choice items. After further revisions, the final version, known as CAOS 4, was developed in late 2005 and consisted of 40 multiple-choice items. Based on a sample of 1470 students, an analysis of internal consistency of the 40 items on the CAOS 4 (post-test) produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.82 (delMas, Garfield, Ooms & Chance, 2007). The test has therefore been assumed to be a valid measure of important learning outcomes in an introductory statistics course. Furthermore, the three learning outcomes measured through this test are closely related to the six general categories in Bloom's taxonomy (refer to Table 6.9) as discussed in section 2.3.2 (Chapter 2).

… some current measurement experts feel that Bloom's taxonomy is best used if it is collapsed into three general levels (Knowing, Comprehending, and Applying). We see statistical literacy as consistent with the "knowing" category, statistical reasoning as consistent with the "comprehending" category (with perhaps some aspects of application and analysis) and statistical thinking as encompassing many elements of the top three levels of Bloom's taxonomy. (from http://app.gen.umn.edu/artist/glossary.html accessed on 24/01/2011)

Table 6.9 Six levels of learning in Bloom's taxonomy (Bloom, 1956)
1. Knowledge: Arrange, define, duplicate, label, list, memorise, name, order, recognise, relate, recall, repeat, reproduce, state.
2. Comprehension: Classify, describe, discuss, explain, express, identify, indicate, locate, recognise, report, restate, review, select, translate.
3. Application: Apply, choose, demonstrate, dramatise, employ, illustrate, interpret, operate, practice, schedule, sketch, solve, use, write.
4. Analysis: Analyse, appraise, calculate, categorise, compare, contrast, criticise, differentiate, discriminate, distinguish, examine, experiment, question, test.
5. Synthesis: Arrange, assemble, collect, compose, construct, create, design, develop, formulate, manage, organise, plan, prepare, propose, set up, write.
6. Evaluation: Appraise, argue, assess, attach, choose, compare, defend, estimate, judge, predict, rate, score, select, support, value, evaluate.
In this study, the CAOS test was considered an adjunct to the regular assessment for measuring learning outcomes in an introductory statistics subject, closely aligned to the collapsed Bloom's taxonomy. The CAOS test was therefore used to assess the students' understanding and learning of statistics; it was administered twice to the class, once in week 2 (baseline test) and once in week 12 (post-test) of the 13-week session, via online tests (refer to https://app.gen.umn.edu/artist/user/scale_select.html) using an access code provided by the lecturer. The students were assigned marks worth 2% of their final marks for completing the tests; that is, they were awarded participation marks of 1% for each test irrespective of the marks achieved. No time limit was set, so the test was taken out of class with students completing it in their own time. As soon as all the students had completed the test, the lecturer could download two reports of the students' data: one was a copy of the test with the percentages filled in for each response given by the students and with the correct answers highlighted; the second was an Excel spreadsheet with percentage correct scores for each student. The CAOS test is an established instrument which can provide valuable information on what students appear to learn and understand on completing a first year level statistics course (delMas et al., 2007). In particular, it can be used to formally assess what the students knew coming into the class, and then to assess them after they had completed the class.
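To make the reported reliability coefficient concrete, a minimal sketch of a Cronbach's alpha calculation for a matrix of 0/1-scored items is given below. The scores array is a tiny fabricated example for illustration only; it is not the CAOS data, and the published coefficient of 0.82 comes from delMas et al. (2007), not from this code.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (n_students x n_items) matrix of item scores."""
        k = scores.shape[1]                                  # number of items
        item_vars = scores.var(axis=0, ddof=1).sum()         # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)           # variance of total scores
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Tiny fabricated 0/1 item matrix (5 students x 4 items), for illustration only
    scores = np.array([
        [1, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
    ])
    print(round(cronbach_alpha(scores), 2))   # roughly 0.79 for this toy matrix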
6.2.4 Method

The steps involved in the collection of evaluation data throughout this stage of the implementation phase of the evaluation were as follows:

1. Commencing on 2nd February 2011, the students who enrolled in STAT131 were invited to participate in a Headstart program through the e-learning system. The program ran online for four weeks, until 28th February 2011.
2. Ethics approval was obtained from the University of Wollongong Ethics Committee (ethics number: HE09/021) in order to invite students to participate in the investigation of the impact of learning resources on student learning outcomes.
3. In 2010 and 2011, students in STAT131 completed the CAOS test questions via a link provided in the e-learning site. No CAOS test was undertaken in STAT131 March/Autumn 2009. The CAOS pre-test was administered in week 2 and the post-test in week 12.
4. To compare the students' baseline skills and their performance at the end of the session, this study observed the students who completed both the CAOS pre-test and post-test in each cohort.
5. In 2011, improved learning design maps (using HTML files) were embedded in the weekly folders. These folders were accessible via the subject homepage in the e-learning site. The by-type resource folders were also available for resources including lectures, laboratory manual, data sets, laboratory (worked) solutions, assessment, exam resources, Edu-stream, academic forum, and subject outline (see Figure 6.10).

Figure 6.10 STAT131 homepage in the e-learning site accessed in June 2011 (last week of session)

6. The same lecturer taught and co-ordinated STAT131 in both 2010 and 2011.
7. With the exception of students having access to the Headstart program in 2011, all resources were similar for both years, but there was a slight modification to the assessment system in 2011. One group assessment task involved a draft and redraft of an assignment, and the other four laboratory tests involved a test-retest approach. That is, in the group assessment task, students were asked to prepare a draft of the assignment (i.e. video clips) and submit it to the e-learning site. This allowed for feedback from other students and the lecturer. Pedagogically, it meant that the students were not put in a position of failing and then taking a retest; all students were provided with feedback that could help them improve their work. The final videos were uploaded to the e-learning site taking into account suggestions and comments based on the draft version submitted earlier (see Figure 6.11).

Figure 6.11 Comments on the draft of assignment in the e-learning site

8. As with STAT131 in the previous years, the students in 2011 were approached and initially informed about the purpose of the study through a Participant Information Sheet delivered via email.
9. In addition to the information sheet supplied to the students, they were asked to provide a return email or complete a permission slip (consent form) giving their consent to participate in the study. The students were told that their participation was voluntary and that they were free to refuse to participate and to withdraw from the study at any time. The students would not be penalised for not participating in the study, and they were informed that the outcome of the study should be beneficial for future students.
10. In 2011, the students completed the online surveys at the end of the session through the e-learning system and returned the completed consent form (via email) indicating their agreement to participate in the study.
6.2.5 Outcomes

In the implementation phase, the study involved a cohort of 195 undergraduate students who enrolled in STAT131 March/Autumn 2011. From the 2011 cohort, 59 students (30%) took part in the survey. The students' background data presented in Table 6.10 showed that the majority of respondents (81.4%) were male and about 75% were domestic students. Almost two-thirds of the students (64.5%) expected to achieve a credit grade or better for their final grades. A test for differences in two proportions revealed that the survey response rate in 2010 (58%) was significantly higher (Z = 5.54, p < 0.001) than in 2011 (30%). Comparing the distribution of students' backgrounds reported in the 2010 and 2011 surveys, no significant differences were evident in gender, nationality or expected final grades between the two cohorts (see Table 6.10). In contrast to the 2009 cohort, the proportion of female students who responded to the survey was significantly higher (Z = 2.77, p < 0.003), by 26%, than in 2011. Less than 4% of the students in both 2010 and 2011 anticipated failing the subject; none in 2009 expected to fail.

Table 6.10 Distribution of students' background reported in STAT131 surveys across sessions between 2009 and 2011

Category | March/Autumn 2009 (n = 38) % | March/Autumn 2010 (n = 111) % | March/Autumn 2011 (n = 59) %
Female | 44.7 | 17.1a | 18.6c
Domestic students | 71.1 | 79.3a | 74.6c
Expecting a credit grade or above | 71.1 | 56.7b | 64.5c
Expecting to fail in the subject | - | 1.8b | 3.4c

a 1.8% did not answer this category (n = 2)
b 3.6% did not answer this category (n = 4)
c 5.7% did not answer this category (n = 3)
As shown in Table 6.11, the distribution of students' final grades in 2011 revealed that the proportion of students attaining at least a credit grade (64.1%) was similar to the proportion who anticipated those grades (64.5%). The actual failure rate (12.8%) was more than three times higher than the proportion of students expecting to fail the subject (3.4%); however, this difference was not significant at the 0.05 level (using a Fisher's exact test rather than a test for differences in proportions because np̂ and n(1 − p̂) were less than 5).
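The Fisher's exact test mentioned above can be reproduced in outline as follows. The 2x2 counts are reconstructed approximately from the reported percentages (about 2 of 59 survey respondents expecting to fail versus about 25 of 195 enrolled students actually failing) and are illustrative rather than the exact figures used in the thesis.

    from scipy.stats import fisher_exact

    # Approximate counts reconstructed from the reported percentages (illustrative only)
    expected_fail, expected_not = 2, 57      # ~3.4% of 59 survey respondents expected to fail
    actual_fail, actual_not = 25, 170        # ~12.8% of 195 enrolled students failed

    table = [[expected_fail, expected_not],
             [actual_fail, actual_not]]
    odds_ratio, p_value = fisher_exact(table)  # two-sided by default
    print(odds_ratio, p_value)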
Table 6.11 Distribution of students' final grades in STAT131 between the 2010 and 2011 cohorts

Grade | March/Autumn 2010 (N = 191) % | March/Autumn 2011 (N = 195) %
Fail | 18.3 | 12.8
Pass conceded | 5.2 | 5.6
Pass | 23.6 | 17.4
Credit | 25.7 | 12.3
Distinction | 17.8 | 24.6
High distinction | 9.4 | 27.2
The final outcomes of the two cohorts revealed that the proportion of students failing was similar for both years (refer to Table 6.11). An analysis of differences in two proportions demonstrated a significant increase (Z = 2.23, p = 0.01) in the higher grades rate (Credit or better) by 11% in 2011 (64%) compared to 2010 (53%). In particular, the proportion of students attaining a High Distinction grade dramatically increased (Z = 4.51, p < 0.001) by 17.8% in 2011.
6.2.5.1 Headstart program to support learning
The Headstart program was introduced to the students in STAT131 March/Autumn 2011 four weeks before the formal session commenced. A set of five lectures, including video clips, reading materials, and tasks/activities, was developed for the program (see Figure 6.12). These resources were made available in the subject e-learning site so that enrolled students could access them online.
Figure 6.12 Reading materials, video clips, and tasks included in the Headstart lecture notes An analysis of student forum as of 1st March 2011 revealed that 11.4% (n = 25) out of approximately 220 students had signalled intent to engage or participate in the Headstart program. These were observed through a prompt asking them to answer a series of questions in the forum and some of their responses were as stated below. A prompt asking, “Do you have any concerns about studying statistics?” My concerns were lack of prior knowledge on statistics and experience in studying and writing assignments. All lot of things... mainly was getting back into the rhythm of studying and learning after nearly half a year doing things at my own pace. I am worried that I won‟t understand concepts and that I won‟t use the right formulas for certain questions. I am fairly confident that any issues I may have with Statistics can be overcome. I have really never done anything that solely focused on statistics. Well, I‟ve only done a short course in Statistics during the HSC, so most probably my inexperience. I have had feedback from numerous people who have undertaken this subject that it‟s meticulous work (hence, more time taken to understand this subject). Furthermore, my mathematical abilities are somewhat not top-notch however this subject should improve those skills. I'm most scared of the fact that I failed last time. Not too scared. I learn my lesson the hard way in first year.
I've always had difficulty knowing which formula/method is most appropriate to apply in different situations My concerns were about my total inexperience and the intimidating uncertainties that I have about “uncertainty”. It‟s completely different to anything I have studied before and I don‟t know anything about it. I'm clueless of what statistics is to be honest, but I do know that it is of fairly high complexity. There were 32% (n = 8) out of the 25 students completed and submitted the draft assignment that was required in the program however 20% (n = 5) did not signal intent to submit an assignment. The students were also asked to provide comments on their draft assignments hence only 20% (n = 5) out of the 25 students took part with a total of 17 commentaries. Examples of their comments were as follows:
I think your meaningful paragraph needs a bit more in it. You should link your finding in the paragraph to some of the data/analysis earlier in the slide, using English and statistical language. Also, I got the idea from the “Stepping Into Statistics” page (starting from: “Inclusion of a well selected graphs…”) that you needed more than 1 graph. On slide 7 it says “Bell Shap”. Just you missed an “e” at the end. Also, “Amoungst” is spelt “Amongst”. The identification of a Qualitative variable I thought was good, and the graph is quite informative and effectively supports your claim of outlying data. The presentation is also clear, with a clear structure. Thanks for your helpful comments. It is clear now why peer evaluation is an effective way of spotting mistakes. Your presentation has a clear structure referring to its content and the visual layout and a good introduction. I also like the way you are stepping through the analysis of the results. You cite directly on slide 10, so a tag for the reference might be appropriate. Very clear presentation. On slide 6, the table is slightly too narrow, leading to the value for Variance being split over 2 lines (which makes it look like there are 2 values). Just check - I thought I said you needed to collect 30 data points so that you can produce stem-and-leaf or histograms in SPSS. Don‟t go on to topics not yet covered e.g. the time component as you wish to explain the centre, shape, spread, outliers and patterns. I will give some more comments tomorrow. I agree with the previous comments. In my opinion, it is quite difficult to figure out which is your variable. I also miss some graphs (beside the pie chart), which support your analysis, but probably you will add these in your final version. It could be a bit confusing that while talking about the sampling error, you also introduce a second survey. 230
These commentaries were then used by the students to improve their final assignment. As a result, eight students had submitted and of these, seven (87.5%) successfully completed at a grade of 70% or better. One student was unsuccessful (but gained 90% in the alternative first assessment in STAT131). For the purpose of evaluating how useful the program was in supporting student learning in the subject, the students were asked to answer several questions included in the survey. These questions were as presented in Table 6.12.
Table 6.12 Questions on the use of the Headstart program in STAT131

How did you find the Headstart program in this subject?
a. I was not aware the program was available
b. I did not use them
c. It was time consuming for me to use them
d. It helps me learn and understand the subject before the session commence

Did you participate in the STAT131 Headstart program by…
reading the lectures and/or undertaking activities suggested? a. Yes b. No
completing a draft assignment? a. Yes b. No
undertaking peer review of draft assignment? a. Yes b. No
submitting the final assignment? a. Yes b. No

Do you prefer to have the Headstart program in this subject? a. Yes b. No. Please explain:
Do you prefer to have the Headstart modules in other subjects? a. Yes b. No
Do you think the duration of the Headstart modules is sufficient to get yourself ready in learning and understanding the subject? a. Yes b. No. Please explain:
Do you have any issues in taking the Headstart program in this subject? a. No b. Yes. Please explain:
Do you have any suggestions for improvement of the Headstart program in this subject?
The findings from the survey revealed that, of the 59 responses, 46% of the students (n = 27) did not access the program, 30% (n = 18) were not aware that the program was available, 12% (n = 7) perceived the program as useful for learning and understanding the subject before the session commenced, 7% (n = 4) reported that the program was time consuming, and 5% (n = 3) did not provide any response to this question. According to the survey, and as shown in Table 6.13, the proportion of students who completed parts of the program ranged from 17% of the students (n = 10) who completed the draft assignment and submitted their final assignment to 22% (n = 13) who had undertaken peer review of a draft assignment. Approximately 19% of the students (n = 11) read the lecture notes
and/or had undertaken the activities included in the program. Data from the e-learning student tracking statistics (see page 235) revealed that just over half of the students (53%) had accessed at least one lecture by the end of March 2011.
Table 6.13 The use of the Headstart program reported via student survey in STAT131

Did you participate in the STAT131 Headstart program by… | Yes (%) | No (%)
Reading lecture notes and/or undertaking activities suggested | 18.6 | 81.4
Completing a draft assignment | 16.9 | 83.1
Undertaking peer review of draft assignment | 22.0 | 78.0
Submitting the final assignment | 16.9 | 83.1

Percentages based on n = 59a
a Including 3 non-responses
There was no mechanism to advertise the Headstart program to the students. They located it only when they enrolled in the subject and had logged on to their e-learning site. This reflected in the students‟ comments to the question, “Do you prefer to have the Headstart program in this subject?”. Of the 49 responses, 49% (n = 24) preferred to have such program in the subject, 22.4% (n = 11) responded that they were not aware the program was available, 20.4% (n = 10) pointed out other reasons (as noted below), and 8.2% (n = 4) were not interested in accessing the program. Examples of their comments were as follows:
Preferred to access the program Yes, because you have to opportunity to lighten your load for the first half of session, which would not only help with STAT131, but would free up time for other subjects. Yes. I think it‟s a good option for those to whom statistics and its mathematical basis are completely new. It gives a good chance to build confidence and knowledge. Yes, it seems to be a good opportunity to get some work out of the way before more intense work is undertaken. Not aware the program was available I did not take part in the head start program as I did not realize it existed until I was too far behind. I didn‟t complete or attempt the head start assignment. I didn‟t know it was available. Yes, the Headstart program is good, but I was unaware that it was even available. 232
I wish I knew about this program for the following question, is it available for other subjects? I didn‟t know about the Headstart program as I enrolled in the subject late. However, I think it‟s a good idea and it eases students into the course so that the learning curve isn‟t so steep. However i think it should stay optional be compulsory. I think it would have been helpful but I wasn‟t really aware what it was or what it would be used for. I think some sort of communication about this would have been good. Some other reasons Yes. As I was too busy with work and other things, I didn‟t actually get around to participating in the Headstart program, but I like the idea and it would have been useful. No, it was the holidays. No, because I know nothing in the beginning, it is too hard to do the program well. Yes. Even though I didn‟t use it, it is a good way of getting into the mind space of stats. Yes, I don‟t see why not for people who might want to do it, however it should never be compulsory. The students were asked to indicate whether they prefer to have the Headstart modules in other subjects. Of the 59 responses, 49% (n = 29) indicated that they preferred other subjects to have such programs, 46% (n = 27) indicated that they were not interested, and 5% (n = 3) did not respond. This result suggests that there was a potential for the Headstart program to be developed for other subjects so as to provide opportunities to students to have more time for their learning. Out of the 44 responses, 43.2% (n = 19) were in agreement with the duration of time allocated for the program, 36.4% (n = 16) either agreed or disagreed for some reasons, 11.3% (n = 5) disagreed, and 9.1% (n = 4) remained unsure. Some of their reasons highlighted were as follows: No, 5 weeks of self-study isn‟t the same as 5 weeks of lectures. I think it needed more time or less content. No, because considering the date of enrolment (different for domestic and international students), I had little time to participate in the Headstart program, and when I was able to access to e-learning the time was short for me to complete. No, because not everyone is aware of the program before the session starts. 233
Yes, but advertising was required. Again I was not aware of the Headstart project till late so I cannot really say, but from what I have heard and researched the Headstart program had a good amount of time allocated. And also as I have said in the previous question, I would have liked Headstart to be continued throughout the session, or a PASS class.
Yes, from what I have heard about it, it did.

The survey also sought to identify the students' difficulties and problems while taking the Headstart program. Of the 43 responses, 65.1% (n = 28) had no issues, though some probably did not access the program at all, 30.2% (n = 13) revealed some issues, particularly concerning awareness of the program in the subject and its benefits for their learning, and another 4.7% (n = 2) were uncertain as they had not accessed the program. When asked for recommendations, 47% (n = 8) of the 17 responses recommended that the program should be appropriately advertised so that the students knew of its existence, 41.2% (n = 7) proposed improvements to the program, and 11.8% (n = 2) were uncertain. Some of the recommendations for improvement of the program were as follows:
Do email, SOLS or SMS notifications that it is available. From the student forum, I noticed that a lot of people started later because they didn't know about the program.
The Headstart program was perfect, but having absolutely no statistical experience before it was difficult to complete the program. Somehow if it was possible to be promoted more so more students know it's on.
Inform people who are starting the subject earlier about it or via other means as I only discovered it a week or so before the start of university and finishing the video was a bit of a rush.
Before or as soon as the program starts it would be nice to have a learning program set out, it was a bit daunting trying to do it with the information I had at the time.
I wasn't really keen on making a PowerPoint and identifying a variable. I would have preferred being given the variable and practice writing meaningful paragraphs.
Make more people aware of it and aware of how it can benefit them.
Do more advertising of the Headstart program.
Yes, I would also prefer that the Headstart program not only be before the session but also during, like PASS. And I know others wanted a PASS class for this subject but were disappointed when one was not offered. I know this would have greatly helped mine and others' study and marks in STAT131 and greatly improved the quality of this subject all round.
To check the consistency of responses provided by students in the survey, their activities in the e-learning system, the tracking of commentaries in the student forum, and the file usage statistics were analysed. Table 6.14 summarises the survey findings in contrast to the data from the e-learning tracking between 2nd February and 1st March, 2011.
Table 6.14 The use of the Headstart program reported via student survey and data from the e-learning tracking

                                                        Survey findings (a)      e-learning tracking data
                                                                                 (2nd February to 1st March, 2011)
Usage                                                   n        %               n        %
Reading lecture notes and/or undertaking
  activities suggested                                  11       18.6            71       36.4 (b)
Completing a draft assignment                           10       16.9             8       32.0 (c)
Undertaking peer review of draft assignment             13       22.0             5       20.0 (c)
Submitting the final assignment                         10       16.9             8       32.0 (c)

(a) Out of 59 responses
(b) Out of 195 students
(c) Out of 25 students who signalled intent to engage in the program via student forum
An analysis of differences in two proportions revealed a significant difference (Z = 2.56, p = 0.005) in the proportion of students who read the lecture notes and/or undertook the program activities between the survey findings and the e-learning tracking data. That is, more students (36%) were identified as accessing the lecture notes provided in the program than was reported in the student survey (19%). The data from the e-learning tracking on the other three activities (completing a draft assignment, undertaking peer review, and submitting the final assignment) were consistent with the survey findings; that is, no significant differences were evident between the two proportions. In addition, e-learning tracking was undertaken between 2nd February and 31st March, 2011 to examine the Headstart resources accessed and activities undertaken by the students enrolled in STAT131. It was found that 53.3% (n = 104) of the 195 students enrolled in STAT131 downloaded the Headstart online lectures or activities. Among them, 44 students had downloaded 4 to 5 of the available lectures (two weeks of activities) and an additional 60 students downloaded 1 to 3 lectures (one week of activities). After 1st March, 2011, 16.9% (n = 33) of the 195 students downloaded the lectures, which effectively meant that these students had additional resources rather than resources that could provide them with a head start. This left 36.4% (n = 71) accessing the program before the session commenced. It was not possible to ascertain what was done with the lectures or activities once they had been downloaded. To examine the differences in mean final marks between the students who engaged with the program (either through all lectures or through accessing 1 to 3 lectures) compared to those who did not, one-way ANOVA and Scheffe post-hoc tests for differences in means
between three groups were used for the analyses. The result revealed a statistically significant difference (F(2, 192) = 5.301, p = 0.006) in the mean final marks between the students who engaged with the program and those who did not. The students who downloaded 4 to 5 lectures achieved mean marks about 11 marks higher than those who did not engage with the program. Likewise, the students who downloaded 1 to 3 lectures achieved marks on average 9 marks higher than those who did not (see Table 6.15). However, there was no significant difference in the mean final marks between those who downloaded 1 to 3 lectures and those who downloaded 4 to 5 lectures.
Table 6.15 Comparison of mean of final marks between students who engaged and did not engage with the Headstart program in STAT131

                                 N      Average marks    Standard deviation
Not engaged                      91     62.50 (a,b)      24.09
Downloaded 1 to 3 lectures       60     71.63 (a)        18.17
Downloaded 4 to 5 lectures       44     73.96 (b)        21.92
Total                           195     67.93            22.38

(a) The mean difference is significant at p = 0.046
(b) The mean difference is significant at p = 0.019
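To make the comparison above concrete, the following is a minimal Python sketch of the one-way ANOVA and Scheffe post-hoc comparisons, computed from the group sizes, means, and standard deviations reported in Table 6.15 rather than from the raw student marks (which are not reproduced here). Because it works from rounded summary values, the output only approximates the figures quoted above.

```python
import numpy as np
from scipy import stats

# Group summaries from Table 6.15 (STAT131, 2011)
n    = np.array([91, 60, 44])              # not engaged, 1-3 lectures, 4-5 lectures
mean = np.array([62.50, 71.63, 73.96])
sd   = np.array([24.09, 18.17, 21.92])

k, N = len(n), n.sum()
grand = np.sum(n * mean) / N

# One-way ANOVA reconstructed from summary statistics
ms_between = np.sum(n * (mean - grand) ** 2) / (k - 1)
ms_within  = np.sum((n - 1) * sd ** 2) / (N - k)
F = ms_between / ms_within
p = stats.f.sf(F, k - 1, N - k)
print(f"F({k-1}, {N-k}) = {F:.3f}, p = {p:.3f}")          # approx. F = 5.30, p = 0.006

# Scheffe post-hoc comparison of groups i and j
def scheffe_p(i, j):
    f_s = (mean[i] - mean[j]) ** 2 / (ms_within * (1 / n[i] + 1 / n[j]))
    return stats.f.sf(f_s / (k - 1), k - 1, N - k)

print(f"not engaged vs 1-3 lectures: p = {scheffe_p(0, 1):.3f}")   # approx. 0.046
print(f"not engaged vs 4-5 lectures: p = {scheffe_p(0, 2):.3f}")   # approx. 0.019
print(f"1-3 vs 4-5 lectures:         p = {scheffe_p(1, 2):.3f}")   # not significant
```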
The Headstart program could be improved by having the program available in January rather than in early to mid-February 2011, and possibly by having the second module of three lectures on bivariate data analysis available as well. More students could then access it for a greater period of time. Even though new students enrolled in February, later year students could access it in January. The students could, however, engage with the resources, activities, and lectures provided in the program at any point throughout the session, and many found this useful.
6.2.5.2 Impact of learning resources on student learning
As with the previous surveys, the students in STAT131 March/Autumn 2011 were asked to rate each of the learning resources in terms of their usefulness in helping them learn and understand the subject. As shown in Table 6.16, more than 90% of students in 2011 perceived the worked solutions (98%) and the lecturer (92%) as important for their learning in the subject. Other resources were regarded as useful for learning by at least two-thirds of the students, ranging from 66% for laboratory retests and learning design maps to 88% for laboratory tests. The support resources, including video resources, teamwork, the student forum, and the Headstart modules, were rated lower, by between 26% and 59% of the students; the lower rating was probably because they were not necessary learning aids for all the students. Apart from the existing support resources provided in previous years, a Headstart program was introduced in 2011. The findings revealed that only a small proportion of students (26%) endorsed the usefulness of this program in helping them to learn and understand the subject, but again only a small proportion of students completed the program. The positive outcomes demonstrated in the previous section, however, showed the potential of this program to support student learning, particularly for those who had accessed it.
Table 6.16 Perceived importance of resources on students' learning in STAT131 across sessions between 2009 and 2011

                          March/Autumn 2009       March/Autumn 2010       March/Autumn 2011
                          (n = 38)                (n = 111)               (n = 59)
Resource                  % (a)      Rank (b)     % (a)      Rank (b)     % (a)       Rank (b)
Laboratory tasks          97.4 (e)   1.5          88.3 (d)   3            78.0 (d,e)  7
Worked solutions          94.7       3            96.4       1            98.3        1
Laboratory tests          92.1       4            90.0       2            88.1        3
Laboratory classes        89.4 (e)   5            71.1       10           72.9 (e)    8.5
Lectures                  84.2       7            73.9       8            79.7        5.5
Online lecture notes      78.9       8.5          83.7       4            79.7        5.5
Lecturer                  (f)        -            81.1 (d)   6            91.6 (d)    2
Tutor                     78.9       8.5          72.0       9            72.9        8.5
Teamwork                  63.1       10           57.6       12           52.6        13
Video resources           39.4 (e)   11           49.5       13           59.3 (e)    12
Laboratory retests        86.8 (e)   6            74.7       7            66.1 (e)    10.5
Student forum             26.3       12           34.2       14           32.2        14
Laboratory manual         97.4 (e)   1.5          82.8       5            83.1 (e)    4
Learning design map       (c)        -            69.3       11           66.1        10.5
Headstart modules         (c)        -            (c)        -            25.5        15

(a) Percentages of students responding as moderately or extremely useful
(b) Ranked by percentage reporting resources useful
(c) Not provided or available
(d, e) Significant difference at the 0.05 level
(f) Not surveyed
To examine whether there was a difference in students' perceptions of the usefulness of resources for learning between the two cohorts, a test for differences in two proportions was used for the following analyses. It was found that the proportion of students who valued the laboratory tasks was significantly higher (Z = 1.78, p = 0.038) in 2010 compared to 2011. On the other hand, the proportion of students who perceived the lecturer as important to learning was significantly higher (Z = 1.81, p = 0.035) in 2011 compared to 2010. The proportion of students endorsing the use of maps for learning in the subject remained similar in both cohorts. Compared with the 2009 cohort, the proportion of students valuing the video resources increased significantly (Z = 1.91, p = 0.028), by 20 percentage points, in 2011. However, the proportions of students who valued the laboratory tasks (Z = 2.65, p = 0.004), laboratory classes (Z = 1.96, p = 0.025), laboratory retests (Z = 2.27, p = 0.012), and laboratory manual (Z = 2.17, p = 0.015) were significantly higher in 2009 compared to 2011. This suggested that there was a shift in the valuing of several resources, including the laboratory tasks, laboratory classes, laboratory retests, and laboratory manual, across sessions between 2009 and 2011. It appeared that the proportion of students who perceived the videos as useful for learning gradually increased from 2009 to 2011, as the valuing of other resources decreased.
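For readers wanting to reproduce the two-proportion comparisons reported throughout this section, the following Python sketch implements the pooled z-test. The counts are reconstructed by rounding the reported percentages (here, laboratory tasks in 2010 versus 2011), so the output is approximate, and the thesis appears to quote one-tailed p-values.

```python
import numpy as np
from scipy import stats

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns z and the one-tailed p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, stats.norm.sf(abs(z))

# Laboratory tasks rated useful: roughly 88.3% of 111 (2010) vs 78.0% of 59 (2011);
# counts reconstructed by rounding the reported percentages.
z, p = two_prop_z(98, 111, 46, 59)
print(f"Z = {z:.2f}, one-tailed p = {p:.3f}")   # approx. Z = 1.78, p = 0.038
```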
6.2.5.3 Effectiveness: dimensions of high quality learning
The examination of the usefulness of resources for student learning through change evaluation (Porter, 2007) has been used for some years. In the context of this study, it is believed that high quality resources are positively associated with student outcomes. From another perspective, Boud and Prosser (2001) highlighted four principles of high quality learning which look at the processes of student learning (discussed in section 3.7.1 in Chapter 3). In this study, these principles suggest an alternative approach to examining the impact of the subject design on student learning. The four principles are based upon a student-centred and experience-based view of learning. A Learning Designs Project team (Agostinho et al., 2002) from the University of Wollongong (refer to http://www.learningdesigns.uow.edu.au) developed an evaluation instrument which incorporates the four key principles in examining the potential for a learning design to foster high quality learning. The principles are elaborated through a series of questions that can be used by staff to evaluate the anticipated effectiveness of a subject's learning design from the student perspective. For the purpose of this study, the researcher adapted some of the questions developed in the project for use in the survey. The students were asked to indicate their perceptions of the subject design according to several aspects representing dimensions of the four principle areas: engaging students, acknowledging the learning context, challenging students, and providing practice. Each dimension was rated by the students on a scale: 1 = not at all, 2 = a little, 3 = moderate, 4 = much, and 5 = very much. To examine whether there was a difference in students' perceptions of the subject design according to the four principle areas between the 2010 and 2011 cohorts, a test for differences in two proportions was used for the analyses in the following sections. There is no benchmark for the use of these dimensions as a guide to learning; however, it is noted that there is considerable room for improvement. The first key area highlights the importance of subject design in engaging students within their learning context. This includes the design of learning materials and activities which relate to the students' prior experiences of studying and their present goals for learning, and which engage them affectively with the materials they are studying. There were six dimensions of student engagement included in the surveys (see Table 6.17). Based on a sample of 111 students, an analysis of internal consistency of the six dimensions of engagement produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.76. The findings revealed that the students' perceptions of the six dimensions were similar between the two cohorts. Overall, the proportions of students reporting higher quality with respect to engaging students were low in both cohorts, ranging from 30% (engaging students affectively) to 58% (allowing students to control their learning).
Table 6.17 Perceived impact of subject design on student learning based on Boud and Prosser's four principles

                                                            Percentages (a)
Four key principles of high quality learning                STAT131 Autumn 2010    STAT131 Autumn 2011
                                                            (n = 111)              (n = 59)
A. Student engagement
  Control of learning                                       48.6                   57.6
  Have opportunities for peer interaction and feedback      54.9                   56.0
  Reflect upon and consolidate statistical ideas            53.1                   64.4
  Access and use key concepts in different ways             43.2                   45.8
  Use and connect prior experiences                         30.6                   32.2
  Engage affectively                                        28.8                   30.5
B. Acknowledge the learning context
  Matches assessment to outcomes                            67.6                   67.8
  Link to the statistics fields                             64.9                   66.1
  Link to broader discipline context                        46.8                   39.0
  Get support for multiple cultures and diversity           29.7                   25.4
C. Challenge students
  Highlight the limits of student knowledge base            70.2                   67.8
  Be self-critical                                          67.5                   69.4
  Identify student knowledge base                           63.0                   57.6
  Equip themselves to plan other learning activities        39.6                   37.3
D. Provide practice
  Get feedback at key points in the learning process        72.1                   62.7
  Practice skills                                           69.4 (b)               55.9 (b)
  Model expected performance                                62.2                   57.7
  Communicate and demonstrate what they are learning        63.1                   62.7

(a) Percentages of students responding as moderate to very much
(b) Significant difference at the 0.05 level
The second key area highlights the importance of subject design in acknowledging the learning context from the students' perspectives. This includes the design of learning materials and activities which are part of the application of the knowledge being learned. These are activities that help students to see how the current learning can be used in various contexts and situations beyond the ones given, match the assessment to learning outcomes, and consider cultural assumptions. There were four dimensions of acknowledging the learning context included in the surveys (refer to Table 6.17). Based on a sample of 111 students, an analysis of internal consistency of the four dimensions of acknowledging the learning context produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.72. As with the first key area, there were no significant differences between the cohorts in the students' perceptions of the four dimensions. The proportions of students in 2010 who reported higher quality with respect to acknowledging the learning context ranged from 30% (support for multiple cultures and diversity) to 68% (matching the assessment to outcomes). The corresponding lows and highs were for the same dimensions, 25% and 68% respectively, for the 2011 cohort. The third key area highlights the importance of subject design in challenging students within their learning context. This includes the design of learning materials and activities that get the students to be active in their participation: using the support and stimulation of other students, applying a critical approach to the materials, and going beyond the knowledge or resources provided to them. There were four dimensions of challenging the students included in the surveys (see Table 6.17). Based on a sample of 111 students, an analysis of internal consistency of the four dimensions of challenging the students produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.70. The students' perceptions of the four dimensions were similar in both cohorts. Overall, the proportions of students who reported higher quality with respect to challenging students ranged from a low of about 40% (equipping the students to plan other learning activities) to a moderate 70% (highlighting the limits of students' knowledge base). The fourth key area highlights the importance of subject design in providing practice within the student learning contexts. This includes the design of learning materials and activities that assist students to demonstrate their learning changes and understanding, help them to better appreciate the criteria and standards being applied to their learning, develop their confidence, and assist them to experience coherence between aims and goals, learning tasks, and assessment of outcomes. There were four dimensions of providing practice included in the surveys (refer to Table 6.17). Based on a sample of 111 students, an analysis of internal consistency of the four dimensions of providing practice produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.81. It was found that one of the four dimensions had a significantly higher percentage of endorsement for the 2010 cohort compared to 2011: the extent to which the learning activities encouraged students to practise their skills (Z = 1.75, p = 0.04). The proportions of students in 2010 reporting higher quality with respect to providing practice were moderate, ranging from 62% (modelling expected performance) to 72% (obtaining feedback at key points in the learning process). The corresponding proportions for the 2011 cohort were lower, ranging from 56% (practising skills) to 63% (obtaining feedback and communicating learning).
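The internal-consistency (Cronbach's alpha) figures quoted for each group of dimensions can be computed as sketched below. The raw survey ratings are not reproduced in the thesis, so the `ratings` array here is a clearly hypothetical placeholder illustrating the expected data layout (one row per student, one column per item), not the actual data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = students, columns = item ratings (e.g. 1-5)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Placeholder ratings standing in for the six engagement items (not the real survey data)
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(111, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(111, 6)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```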
6.2.5.4 Student perceived confidence with topics
Over the years 2009 to 2011, the students in STAT131 were asked to indicate how confident they were in relation to each of the major topics. The possible ratings were: 'not at all confident', 'might have a little difficulty', 'moderately confident', and 'could do this'. The rankings of student confidence can be used to identify where additional learning resources (assessment, worked examples, improved lecture notes, video resources) could be of benefit. For this reason, an evaluation of the importance of the topic should also be considered. The aim here was to recognise which topics or skills in the subject were most in need of additional support and considered to be the highest priority. The rankings can also be used to examine differences in confidence between the cohorts when:
in 2010, the learning design maps with links to video resources on most topics were available in the STAT131 e-learning sites,
a further divergence in 2011 involved the improved learning design maps in weekly folders in addition to the by-type resource folders. Assessment, as distinct from the weekly laboratory tasks, had separate folders to accommodate the lecturer's need for ease of addition of files, and
the students were able to complete a Headstart program.
As shown in Table 6.18, approximately 79% of the students in 2011 perceived themselves as confident in exploratory data analysis, 73% in using SPSS, and 70% in correlation and regression analysis. The proportions of students feeling comfortable with other topics were moderate, ranging from 52% in hypothesis tests to 58% in fitting a model to data.
Table 6.18 Student perceived confidence on specific topics in STAT131 across sessions between 2009 and 2011

                                                  March/Autumn 2009     March/Autumn 2010     March/Autumn 2011
                                                  (n = 38)              (n = 111)             (n = 59)
Topic                                             % (a)     Rank (b)    % (a)     Rank (b)    % (a)       Rank (b)
Exploratory (measures of centre, shape,           97.4 (e)  1           86.7      1           78.6 (e)    1
  spread and outliers)
Correlation and regression                         76.3      4           81.5      2           69.7        3
Binomial and Poisson distribution                  68.4      7           74.7 (d)  3           53.6 (d)    6.5
Confidence intervals                               73.7 (e)  5           74.3 (d)  4           55.3 (d,e)  5
Using SPSS                                         81.6      2           72.6      5           73.2        2
Model fitting                                      68.5      6           71.2 (c)  6           58.2 (c)    4
Hypotheses tests                                   79.0 (e)  3           68.2 (c)  7           51.8 (c,e)  8
Normal and Exponential distribution                63.2      8           58.2      8           53.6        6.5

(a) Percentages of students responding as moderately confident or could do this (valid per cent)
(b) Ranked by percentage reporting confident with topic
(c) Significant difference at the 0.05 level
(d, e) Significant difference at the 0.01 level
To examine whether there was a difference in student perceived confidence in the major topics between the two cohorts, a test for differences in two proportions was used for the following analyses. The results revealed that the proportion of students who considered themselves comfortable with five major topics was significantly higher in 2010 compared to 2011. These topics were: correlation and regression (Z = 1.75, p = 0.04), Binomial and Poisson distribution (Z = 2.79, p = 0.003), confidence intervals (Z = 2.52, p = 0.006), model fitting (Z = 1.71, p = 0.04) and hypothesis tests (Z = 2.10, p = 0.018). Compared with the 2009 cohort, the proportion of students considering themselves as confident was significantly higher in three topics, exploratory data analysis (Z = 2.60, p = 0.005), confidence intervals (Z = 1.82, p = 0.034), and hypothesis testing (Z = 2.70, p = 0.004), than in 2011. However, contrasting with the earlier findings (Morris, 2008), the distribution of final marks and assessment between the three cohorts did not align with student confidence. The findings were inconsistent with the survey outcomes on student perceived confidence with topics presented in Table 6.18: the 2011 cohort outperformed both the 2009 and 2010 cohorts (refer to Table 6.27). It was anticipated that the new designs implemented in the subject would improve student confidence, as confidence was found to be strongly linked to the mean assessment marks for the topics (Morris, Porter & Griffiths, 2005) as well as the final marks. However, there appeared to be a decline in student confidence, contrasting with the improved assessment and final marks in 2011. In response to the higher confidence on five major topics among the students in 2010 compared to 2011, it is possible that a proportion of the students were aware of others completing the Headstart program when they had not. Potentially, this could leave them with a loss of confidence compared to those completing it. Of the 56 respondents in 2011, 15 students indicated that they accessed the program at some point whereas 41 students did not. A Fisher's exact test was used to compare the differences in the proportions of students' perceptions between the two groups of students (see Table 6.19). This test was selected as the sample size for one group was less than 30. The results revealed no significant differences in the proportions of students indicating confidence with topics between the students who accessed the program and those who did not. However, for only one of the eight topics was confidence higher in the students who did not access the program. Further follow-up as to why student confidence changed falls outside the scope of this study.
Table 6.19 Student perceived confidence on specific topics in STAT131 between those who accessed the Headstart program and those who did not in 2011

                                                  Accessed the program     Did not access the program
                                                  (n = 15)                 (n = 41)
Topic                                             % (a)     Rank (b)       % (a)     Rank (b)
Exploratory (measures of centre, shape,           80.0      1              78.0      1
  spread and outliers)
Correlation and regression                         60.0      6              73.2      2.5
Binomial and Poisson distribution                  60.0      6              51.2      6.5
Confidence intervals                               60.0      6              53.6      4.5
Using SPSS                                         73.3      2              73.2      2.5
Model fitting                                      66.6      3.5            53.6      4.5
Hypotheses tests                                   53.4      8              51.2      6.5
Normal and Exponential distribution                66.6      3.5            48.8      8

(a) Percentages of students responding as moderately confident or could do this (valid per cent)
(b) Ranked by percentage reporting confident with topic
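A minimal sketch of the Fisher's exact comparison for a single topic is shown below. The cell counts are reconstructed by rounding the percentages in Table 6.19 (80.0% of 15 and 78.0% of 41 for exploratory data analysis), so they are approximate.

```python
from scipy.stats import fisher_exact

# Confidence with exploratory data analysis, counts reconstructed from Table 6.19
table = [[12, 15 - 12],    # accessed the Headstart program: confident / not confident
         [32, 41 - 32]]    # did not access the program:     confident / not confident
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")   # no significant difference expected
```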
6.2.5.5 Improved learning design map for supporting learning
As with the previous survey, the students in 2011 were asked to indicate their experience of using the improved learning design maps provided in the e-learning site for their learning in the subject. The survey findings revealed that the maps were most used as a connection to tasks and resources (75%). The next major uses were as a tool for revision (63%), a study checklist (58%), organizing their work and learning materials (54%), and connecting to statistical ideas highlighted in the subject (49%). The students in 2011 were asked to compare the subject design, which uses the maps to organize resources and learning materials in the e-learning system, to that in other subjects' e-learning sites. Of the 36 responses, approximately 61% (n = 22) valued the maps in the subject, as their comments below indicate:
I have never had a subject that was so organised with all the resources available and laid out so you can see which parts relate to one another.
I like how everything for a particular week is stored together. It makes it easier to review specific topics rather than having to find and compile all the info about a specific topic yourself.
Much easier to find all the material relating to each week, compared to other subjects where it's not organised very well at all.
It was organised and great way of seeing what was going on each week. I haven't seen it in any other classes.
The use of weekly learning design map or flowchart is, in my opinion, easy to access and useful for students compared to other subjects I've seen which does not have this kind of design.
It is helpful in working out what needs to be completed week by week.
It is good because all the resources and supports are there and the tasks are all related. This subject uses the e-learning system far better than other subjects.

The comments also revealed that there was room for improvement in the development of the map in the subject. Of the 36 responses, 11% (n = 4) responded negatively to the use of maps in the subject. Some of the comments made were as follows:
Pretty poorly, organizing the information into weeks seems a bit silly to me. I don't remember what week I learnt the topic, nor do I care. All I want to do is find the information. It was a good idea with bad implementation.
The flowchart is more confusing than just having a simple list of links of what we have to do for the week. Also, there was so much stuff to do that the flowchart was just too clustered to even find a simple link.
It's incredibly different and therefore incredibly confusing.
Compared to other sites on e-learning I found that the maps added an unnecessary complexity to the site and subject, making it a bother to navigate and a very inefficient site layout. As I have said in the previous question: clearly labelled tabs would have been much more simple and easy to navigate and I'm sure others would have liked this more. I would definitely recommend getting rid of the maps for a more simple and organised design.

There were also a number of students (14% of the 36 responses) who, while in favour of using the maps, experienced frustration or difficulties with technical issues and the structure of the maps. For instance,
It compares well, but a much simpler look would be much more efficient. Folders contained certain information. The notifications on the front page of the e-learning were helpful. Maybe using that and having folders underneath (links to information) rather than the maps.
They are a really good concept, but rely too heavily on internet access for study. Majority of my study is done whilst traveling.
Probably it is a good overview, but for me it is not more than a checklist. So, a simple list would be of same value for me and I am not sure whether it is worth the effort.
It is a lot more organised than other subjects; however it can sometimes be confusing, because there is so much content to look at.
The messages box on the main STAT131 e-learning site was excellent but I rarely used the flowchart.

As with the previous findings, some students (14% of the 36 responses) reported uncertainty as to whether using the maps was beneficial for them compared to other subjects. Examples of their comments were as follows:
This subject is the only one with a design map.
No other subjects I have currently taken use a flowchart.
It is the only subject that uses such a thing. I have not seen any other subject have one.
No other special.

Overall, the analysis of students' perceptions and comments regarding the use of learning design maps in the subject remained much the same across the two years. The proportions of students who endorsed the use of the maps in the subject, who experienced technical difficulties in using them, and who were not in favour of using them appeared to be similar between the 2010 and 2011 cohorts.
6.2.5.6 Performance in the assessment
For the assessment, such as the laboratory tests and assignments included in STAT131, it was possible to examine changes in specific topics between the 2010 and 2011 cohorts. There were four laboratory tests and a group assignment set in 2010 and 2011, with similar tasks but different data sets, on specific topics. These topics were: exploratory data analysis, bivariate data analysis, probability and models, goodness of fit and chi-square independence tests, Normal and Exponential distributions, and estimation and hypothesis testing. In 2010, the students were required to sit two in-class tests (the first and fourth assessments) and three take-home or out-of-class tests (including one group assignment). In 2011, due to a loss of teaching days because of holidays, the students completed all assessments in their own time (with 3 to 5 days allowed for completion), except the first assessment, on the exploratory data analysis topic, which was assessed in week 4. As mentioned earlier, the students who did not pass a test the first time were required to sit a second version of the test (retest) out of class. The students were provided with feedback on their own work in the first assessment and were also provided with worked solutions. The retest completed out of class involved a different data set for the same topic but, where possible, different wording and sequencing of questions were used. A mark of 70% had to be obtained on the retest, but this was now the maximum mark possible. The tests, retests, sample tests, worked solutions, and feedback were provided to the students via the e-learning sites. This assessment system was useful in allowing the lecturer to identify at-risk students early. Of particular interest, there were some students who did not attempt a test on a second occasion, or who did not follow up despite direct feedback being provided as to how to correct their answers. The challenge became how to identify the type of help required and how to provide it. For example, some students needed to find ways to communicate what they knew. An independent t-test on the assessments in 2010 and 2011 revealed significant differences in the average student marks (out of 10) on several topics (refer to Table 6.20). The students in 2011 achieved significantly higher mean marks on three topics compared to 2010. These were exploratory data analysis (t(269.26) = 16.21, p < 0.001 with unequal variances assumed, F = 65.15, p < 0.001), probability and models, goodness of fit and chi-square tests (t(223.93) = 5.14, p < 0.001 with unequal variances assumed, F = 65.55, p < 0.001), and Normal or Poisson and Exponential distributions (t(216.50) = 7.89, p < 0.001 with unequal variances assumed, F = 100.32, p < 0.001). On the other hand, the students in 2010 had significantly higher mean marks (t(311) = 7.14, p < 0.001 with equal variances assumed, F = 0.03, p = 0.864) than in 2011 on the topic of estimation and hypothesis testing. However, there was no significant difference in the mean marks on the bivariate data analysis topic (t(345) = 0.18, p = 0.856 with equal variances assumed, F = 0.82, p = 0.365) between the two cohorts. These results showed a statistically significant improvement in the mean assessment marks from 2010 to 2011, particularly on the three topics, with the greatest mean change of 2.52 marks on exploratory data analysis. The changes perhaps resulted from the inclusion of the Headstart program initiated in 2011, which was on the topic of exploratory data analysis.
Table 6.20 Changes in the assessment on specific topics in 2010 and 2011

Topic                                          Cohort   N     Mean (a)   S.D. (b)   Mean difference (c)   S.E. (d)
1. Exploratory Data Analysis                   2010     183   5.46       1.87       2.52                  0.16
                                               2011     186   7.98       0.96
2. Bivariate Data Analysis (e)                 2010     163   7.84       1.79       0.04                  0.20
                                               2011     184   7.87       1.99
3. Probability & Models, Goodness of Fit       2010     164   6.75       2.07       0.91                  0.18
   & Chi-square tests                          2011     156   7.66       0.90
4. Normal or Poisson & Exponential             2010     170   7.33       2.50       1.62                  0.21
   distributions                               2011     173   8.95       0.96
5. Estimation & Hypothesis testing (f)         2010     153   8.24       1.25       -1.10                 0.15
                                               2011     160   7.14       1.47

(a) Zero marks have been removed from all assessment as they represent non-submission and not necessarily low achievement
(b) Standard deviation
(c) Difference in mean marks (2011 less 2010)
(d) Standard error difference
(e) Set as second assessment in 2010 but as group assignment in 2011
(f) Set as fourth assessment in 2011 but as group assignment in 2010
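The topic-level comparisons above can be approximated from the summary statistics in Table 6.20 alone. The sketch below applies a Welch (unequal variances) t-test to the exploratory data analysis marks; the F statistics quoted alongside each comparison in the text are assumed to be variance-equality tests from the SPSS output and would require the raw marks to reproduce.

```python
from scipy import stats

# Exploratory data analysis assessment marks, summary statistics from Table 6.20;
# equal_var=False mirrors the "unequal variances assumed" result reported above.
t, p = stats.ttest_ind_from_stats(mean1=7.98, std1=0.96, nobs1=186,   # 2011 cohort
                                  mean2=5.46, std2=1.87, nobs2=183,   # 2010 cohort
                                  equal_var=False)
print(f"t = {t:.2f}, p = {p:.3g}")   # approx. t = 16.2, p < 0.001
```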
An examination of the association between the students' assessment marks (out of a total of 50 marks) and their final marks (out of 100 marks) in STAT131 revealed a significant, strong positive correlation between the students' performance in the assessments and their final marks for both cohorts (refer to Table 6.21).
Table 6.21 Correlation between the assessment marks and final marks in STAT131 for the 2010 and 2011 cohorts

                        N     Pearson correlation, r   p value (a)
March/Autumn 2010       190   0.765                    < 0.001
March/Autumn 2011       193   0.855                    < 0.001

(a) Significant at the 0.01 level (two-tailed)
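As an illustration of how the correlations in Table 6.21 would be obtained, the sketch below applies a Pearson correlation to two placeholder arrays standing in for the assessment and final marks; the actual student records are not reproduced in the text, so the printed value is illustrative only.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for the real assessment and final marks (not real data)
rng = np.random.default_rng(1)
assessment = rng.uniform(0, 50, size=190)                               # total assessment marks (out of 50)
final = np.clip(1.4 * assessment + rng.normal(0, 10, size=190), 0, 100)  # final marks (out of 100)

r, p = stats.pearsonr(assessment, final)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```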
6.2.5.7 Performance in CAOS test
To compare the students' performance at the beginning of the session between the 2010 and 2011 cohorts, an independent two-sample t-test on the CAOS pre-test was used for the analysis. The result revealed that no significant difference was found in the average per cent correct (t(209) = 0.114, p = 0.910 with equal variances assumed, F = 0.049, p = 0.826) for the CAOS pre-test between the cohorts. That is, there was no difference in the average skills performance at the commencement of the subject (see Table 6.22).
Table 6.22 Average skills performance at the commencement of STAT131 for the 2010 and 2011 sessions

                          Percentages of correct answers
Descriptive               March/Autumn 2010 (N = 132)   March/Autumn 2011 (N = 79)
Average                   49.98                         49.78
Standard deviation        12.21                         13.07
Standard error mean       1.063                         1.471
A matched-pairs t-test was used to test for significant learning gains (from pre-test to post-test) within each cohort. The means and standard deviations of the average per cent correct for the CAOS tests are presented in Table 6.23. The table also shows the results from delMas et al.'s (2007) study, so as to compare the outcomes in a different context. In 2010, the students demonstrated significant learning gains throughout the session, from an average per cent correct of 50% on the pre-test to an average per cent correct of 54% on the post-test (t(131) = 3.634, p < 0.001). While statistically significant, this was only a small average increase of 4 percentage points, with a 95% confidence interval of the difference between 1.8 and 6.2 percentage points, or 0.7 to 2.5 of the 40 items. However, no significant gains or improvement in the average per cent correct from pre-test (50%) to post-test (52%) were found in 2011 (t(78) = 1.702, p = 0.093). In a different context, an increase of 9 percentage points was evident in delMas et al.'s study (t(762) = 20.98, p < 0.001), with a 95% confidence interval of the difference between 8.2 and 9.9 percentage points, or 3.3 to 4.0 of the 40 items. Although this result was somewhat better than for the 2010 and 2011 cohorts, in all cases the students were correct on a little more than half of the items on average by the end of the session.
Table 6.23 Changes in overall performance in the CAOS tests between cohorts in comparison to delMas et al.'s (2007) study

                                  Average per cent correct                        Difference in average
                                  Pre-test               Post-test                per cent correct
                        N         Mean     S.D. (a)      Mean     S.D. (a)        Mean     S.D. (a)
March/Autumn 2010       132       50.0     12.2          54.0     15.3            4.0      12.6
March/Autumn 2011       79        49.8     13.1          51.9     14.3            2.1      11.1
delMas et al. (2007)    763       44.9     (b)           54.0     (b)             9.1      12.0

(a) Standard deviation
(b) Standard deviations for the pre- and post-test were not available, but the standard deviation of the difference was recorded in delMas et al. (2007)
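The paired-sample result and confidence interval for the 2010 learning gain can be recovered from the summary values in Table 6.23, as sketched below; with the raw pre- and post-test scores one would instead call scipy.stats.ttest_rel.

```python
import numpy as np
from scipy import stats

# Matched-pairs learning gain for the 2010 cohort, from Table 6.23
# (mean difference 4.0, SD of differences 12.6, n = 132).
n, mean_diff, sd_diff = 132, 4.0, 12.6
se = sd_diff / np.sqrt(n)
t = mean_diff / se
p = 2 * stats.t.sf(abs(t), df=n - 1)
ci = mean_diff + np.array([-1, 1]) * stats.t.ppf(0.975, df=n - 1) * se
print(f"t({n-1}) = {t:.2f}, p = {p:.4f}")                      # approx. t = 3.65, p < 0.001
print(f"95% CI for the gain: {ci[0]:.1f} to {ci[1]:.1f}")      # approx. 1.8 to 6.2 percentage points
```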
In the literature, the CAOS test had not been used in the Australian context. Nonetheless, Jersky (2010) had used a similar test instrument containing 31 multiple-choice questions accessed from the ARTIST website. This test was administered twice (pre-test and post-test) within a 15-week session to an introductory statistics class at Macquarie University, Australia. Twenty-seven volunteer students took both the pre-test (in week 1) and the post-test (in week 14). The results revealed no significant improvement in the mean scores (from pre-test to post-test), with a mean difference of 4.3. Though this change was not significant, 96% of the 27 students (all except one) passed the subject. This small, non-significant change in scores of 4.3% is consistent with the 4% change in this study. Jersky (2010) noted,
[r]egardless of the difficulties and challenges that are experienced by secondary or primary teachers in the teaching of statistics, we must all assume that students have acquired at least some statistical knowledge by the time they enter our first-year statistics classes. (Jersky, 2010, p. 1)

With regard to the association between the post-test and the students' final marks, the findings revealed a significant but weak positive correlation between post-test performance and final marks for both cohorts (refer to Table 6.24).
Table 6.24 Correlation between the post-test and final grades in 2010 and 2011

                        N     Pearson correlation, r   p value (a)
March/Autumn 2010       111   0.380                    < 0.001
March/Autumn 2011       188   0.260                    0.006

(a) Significant at the 0.01 level (two-tailed)
In 2010 and 2011, the students were assigned marks worth 2% of their final marks for completing the CAOS tests; in other words, they were awarded a participation mark of 1% for each test, irrespective of the marks they achieved. Given that there was little change in the average per cent correct on the CAOS tests at the end of the session, this small reward may be one reason why some students did not complete the tests seriously. In addition, the test was taken out of class, with the students completing it in their own time. For this reason, it was recommended that the tests take at least 10 minutes but not more than 60 minutes (delMas et al., 2007). According to delMas et al. (2007), the reason was to eliminate the students who did not engage sufficiently with the test items or who spent an excessive amount of time on the test, possibly those searching for answers. It remains possible that students were not serious in, or did not care about, their attempts.
In this study, however, 15.9% of the 2010 students and 40.5% of the 2011 students completed the post-test in less than 10 minutes, while almost none (only one student) took more than 60 minutes. In accord with delMas et al. (2007), the findings revealed that the students who took less than the recommended time did not complete the test successfully, with an average per cent correct of less than 35%. From another perspective, there was an issue as to whether 10 minutes was adequate time to complete the test. For this reason, two experts in the statistics education area were asked to complete the test. Both were able to complete it in between 20 and 30 minutes, with per cent correct scores of 87.5% and 75% respectively. Thus, it was suggested that students might need at least 20 minutes, but not more than the 60 minutes noted by delMas et al. (2007), to complete the test adequately. Both experts also highlighted some issues, particularly with the language or terms used in the tests (Jersky, 2010), which might contribute to low performance on the CAOS tests, for instance items 2, 9, 31, and 37. This suggests a revision of these four items, or perhaps their exclusion from future assessment. For each item on the post-test, responses were coded as 0 for an incorrect response and 1 for a correct response. Appendix 6.1 presents a brief description of what each of the 40 items assessed, reports the percentage of students who selected a correct response on the post-test separately for the 2010 and 2011 cohorts, and indicates the p-value of the respective independent two-sample t statistic for each item. The independent t-test was used to compare the correct responses for individual items between the cohorts. In this study, an analysis of individual items was undertaken by topic using the nine major topic groupings proposed by delMas et al. (2007). According to delMas, some items measuring students' understanding relate to more than one topic (refer to Table 6.25). The major topics were: data collection and design, descriptive statistics, graphical representations, boxplots, bivariate data, probability, sampling variability, confidence intervals, and tests of significance. Table 6.25 provides a summary of the differences in correct responses on the post-test between the 2010 and 2011 cohorts by topic. The 40 items were grouped into topics, and no significant differences on the post-test were found between the two cohorts. A Cronbach's alpha on each topic (that is, treating the items within a topic as a scale) could generate information as to whether the items were reliable. As shown in Table 6.25, based on a sample of 211 students, all topics produced a Cronbach's alpha coefficient of less than 0.65, which is below the acceptable level of reliability. This raised an issue about the validity and reliability of using the CAOS test to measure the students' learning and understanding of statistics, particularly in this study. While there are many interesting patterns in these results, the researcher will not discuss or interpret them in detail as this falls outside the scope of this thesis.
Table 6.25 Differences of correct responses on the CAOS post-test by topics between the 2010 and 2011 cohorts

Topic (a)                        Total number   No significant differences between          Cronbach's alpha
                                 of items       the 2010 and 2011 cohorts (items)           coefficient (N = 211)
1. Data collection and Design    4              7, 22*, 24, 38                              0.06
2. Descriptive statistics        3              14, 15, 18, 33*                             0.33
3. Graphical representations     9              1, 3, 4, 5, 6, 11*, 12*, 13*, 20*, 33*      0.64
4. Boxplots                      4              2, 8, 9, 10                                 0.24
5. Bivariate data                3              20*, 21, 22*, 39                            0.32
6. Probability                   2              36, 37                                      0.07
7. Sampling variability          5              16, 17, 32*, 34, 35                         0.27
8. Confidence interval           4              28, 29, 30, 31                              0.27
9. Test of significance          6              11*, 12*, 13*, 19, 23, 25, 26, 27, 32*, 40  0.48

(a) CAOS item numbers are as shown in Appendix 6.1
* Items related to understanding more than one topic
6.2.5.8 Recommendations for improvement of subject
In the 2011 survey, the students were asked to provide recommendations for improvement of the subject. Of the 20 responses, several themes were identified, particularly concerning laboratory classes and support materials (60%), including the use of SPSS as a tool for data analysis, video supports, and mathematical reviews. Other suggestions concerned lecture presentations (15%), the assessment system (15%), and the lecturer and/or tutor (10%). Examples of their comments were as follows:
Laboratory classes and support materials
The lab manual if it was 2 separate books one for lectures and one for labs as it got annoying flipping from one side to another. Need a bit more information on t-tests.
I would like to start with the inclusion of labs and the software SPSS, this I believe is a waste of time. Yes the software is good for statistical calculation and functionality but the labs are not a good way of learning this nor do they help with learning the subject's content. A tutorial would have been much more helpful and much better suited for helping me and I'm sure others to gain a better understanding of the subject.
Definite improvement of the way labs are conducted I feel 2 hours is a long time and it is wasted with the current set up of labs. At this point in the course I think that revision should continue into labs.
There needs to be a lot more communication with tutors and students. If the communication was like it is with the lecturer this subject would be a lot less stressful and confusing.
It would be nice if the important details in online video could also be available as notes document in PDF/Office Word/Paper handout format so that students could possibly read it and get a bit of understanding when they are busy and could not have the time or a clear mind to watch the video. The video resources should be kept as they provide a good way to demonstrate some technical steps for inputting/producing data in SPSS (the statistics package we use). Students could always take a look at the video for a visualised demonstration when they still don't understand how to do it after reading the steps provided in the notes.
Having a mathematical mind, I sometimes found it difficult to understand the concepts as they were presented in class. I think a better way to teach mathematics students at the very least would be to work with a more "from the ground up" approach, rather than learning things largely by rote and then applying them. This clearly wouldn't work for non-mathematics students, but I think that more in-depth, ground-up mathematical explanations could be given as extras on eLearning. I myself would've found such resources extremely useful and also very interesting.

Lecture presentations
Reduce the workload for the lectures so we actually get time to revise what we've done in lectures. Keep the lab workload the same.
There seems to be an issue with answering questions in lectures, while I and people around me know many of the answers, there's sometimes a bit of stigma or shyness in getting involved.
I need much detail in lecture notes related to the lectures.

Assessment system
The lab tasks and assessments related to that week's (or recent) content and you get to re-sit all the assessments until you get a good mark. I don't think that any first year students wouldn't appreciate how awesome that is.
Simply, not to be included as a core subject for Computer Science students. No lab re-tests, normal subject pass mark of 50% unlike the unusual STAT pass of 70%; referring to question 98, you have indicated that a pass is 50% to 64% but according to the "Stat pass" that is a fail. Need a more engaging course for Computer Science students, a change in subject content to accommodate the non-mathematical students.
Maybe we should do the test under exam conditions. I mean in lab or lecture.

Lecturer and/or tutor
In tutorials, tutors should work through the class with the lab book and show everyone how to do it. If people are confident they can work through at their own pace, but the tutors should be more involved in tutorials.
The tutor I had for this subject was so helpful and the lecturer was very helpful and lenient when my assessment didn't save in a form she could access.
I would like to see longer labs; I find that was the most helpful thing for me to learn.
6.2.5.9 Failure rates, pass rates and shifts in grades
Over the period 2000 to 2004, the failure rate in STAT131 declined from a high of 19% to a low of 9%. The proportion of students attaining top grades (High Distinction and Distinction) was also in decline, moving from a high of about 27% to a low of 21% (see Table 6.26). From 2006 to 2010, the proportion of students attaining the higher grades of High Distinction, Distinction, and Credit was higher but inconsistent. On the other hand, during these years the failure rate was slightly up, averaging about 20%, compared with 2001 to 2004 when the rate remained under 15% (see Figure 6.13). The results in 2011 showed a dramatic increase in the proportion of top grades (High Distinction and Distinction) to 51.8%, and a drop in the failure rate to 12.8% (25 students); of these failing students, 32% (8 students) had effectively ceased to participate and did not sit the final examination.
Table 6.26 Pass rates for the years 2000 to 2011 in STAT131

Year   N (a)   Fail   PC (b) & Pass   Credit   Distinction   High Distinction
2000   157     18.5   32.5            23.6     14.0          11.4
2001   152      9.9   38.2            25.0     19.7           7.2
2002   141     14.9   32.6            25.5     15.6          11.4
2003   165     10.3   48.5            17.6     12.7          10.9
2004   205      9.3   49.7            20.0     13.2           7.8
2005   162     19.1   45.7            18.5     11.1           5.6
2006   136     17.7   27.9            18.4     21.3          14.7
2007   125     20.0   29.6            25.6     15.2           9.6
2008   108     16.7   25.9            23.2     15.7          18.5
2009    89     22.5   24.7            27.0     22.5           3.3
2010   191     18.3   28.8            25.7     17.8           9.4
2011   195     12.8   23.1            12.3     24.6          27.2

(a) Number of students enrolled
(b) Pass Conceded
(c) Number of students withdrawn (Data extracted from SMP, University of Wollongong. Figures might change slightly at different dates as students were retrospectively withdrawn without penalty from the subject)
Note: HD = High Distinction; D = Distinction; C = Credit
Innovations introduced in the subject:
A = Use of real data and working topics of social significance in the laboratory classes
B = Laboratory manual including authentic tasks, and alignment of objectives, tasks, and marking criteria
C = Changes in the assessment system (3 compulsory and 2 optional tests, zero mark for any test less than 60%)
D = Video resources, and test retest approach (retest for any test less than 70%)
E = Learning design maps within weekly folders, and test retest approach (retest for any test less than 70%)
F = Headstart program, improved learning design maps within weekly folders, by-type resource folders, draft and redraft of assignment, and test retest approach (retest for any test less than 70%)

Figure 6.13 Percentages of fail versus higher grades and top grades, and innovations introduced in STAT131 for the years 2000 to 2011

STAT131 had been taught by the same lecturer in all years except 2009, and that year involved shared lecturing. In 2009, video resources were introduced in the subject as support materials for student learning on the e-learning site. As discussed in the previous chapter (case study 1), though the resources were perceived as useful for learning by the majority of the postgraduates in GHMD983/SHS940, they seemed less successful in assisting the students in STAT131. This led to a search for a more effective learning design for embedding the resources, tasks, and support materials in the e-learning system. Consequently, a learning design map within weekly folders was introduced in 2010, incorporating several links to resources. These links included video resources connected to the weekly laboratory tasks on the e-learning site. This was the turning point at which the failure rate headed down to 18.3% and the higher grades rate (High Distinction, Distinction and Credit) headed up to 52.9% in 2010. In 2011, there was a further lowering of fails and increase in good grades, with the divergence in fails and good grades occurring with the Headstart program and the draft and redraft of assignments. The improved learning design maps and the by-type resource folders were also made available in the e-learning site in 2011.
A test for differences in two proportions revealed that the proportion of students failing fell significantly (Z = 1.99, p = 0.023) in 2011 compared to the overall proportion failing between 2005 and 2010. One-way ANOVA and Scheffe post-hoc tests were used to examine the differences in mean final marks between the three cohorts. The result revealed strong evidence of differences in mean final marks (F = 10.004, p < 0.001) between the cohorts. In particular, the 2011 cohort attained significantly higher mean final marks than the 2010 cohort (p = 0.004), with a mean difference of 8.2 marks. Likewise, the 2011 cohort achieved on average 12.4 marks more than the 2009 cohort (p < 0.001) in the final examination (refer to Table 6.27). However, no significant difference was evident in the mean final marks between the 2009 and 2010 cohorts. This indicates that the introduction of the Headstart program, the draft and redraft of assignment, the improved learning design maps within weekly folders, and the by-type resource folders in the e-learning site had potentially helped improve student learning outcomes in 2011. It was also possible to discern a significant difference in marks due to participation, in terms of accessing the set of lectures and activities for the Headstart program, as discussed earlier in section 6.2.5.1 of this chapter.
Table 6.27 Comparison of mean final marks between the three cohorts in STAT131

                             N     Average marks    Standard deviation
March/Autumn 2011 (c)        195   67.93 (a,b)      22.38
March/Autumn 2010            191   59.73 (a)        23.84
March/Autumn 2009            89    55.51 (b)        27.66
Total                        475   62.30            24.48

(a) The mean difference is significant at p = 0.004
(b) The mean difference is significant at p < 0.001
(c) Students who engaged with the Headstart program attained average final marks of 71.63 (downloaded 1 to 3 lectures) and 73.96 (downloaded 4 to 5 lectures)
Note: Students who did not engage with the Headstart program in 2011 attained average final marks of 62.50, which was marginally higher than in both March/Autumn 2010 and 2009
An examination of student grades also revealed that the proportion of students with High Distinction and Distinction grades in 2011 was the highest on record at 51.8% over the years 2000 to 2011. With respect to failures, the rate would likely have been much higher had the assessment system not allowed the identification of students at risk and the subsequent work with them to develop their competency. This was not, however, a controlled study; there may be many factors at play in the overall improved results, and future monitoring will be necessary in order to see if the impact of the Headstart program remains.
6.2.5.10 Final outcomes
It was hoped that the new designs implemented in the subject would lead to improvement in performance. There were changes in the subject designs in the e-learning system from 2009 to 2011. In 2009, video resources were provided to the students embedded in the by-type resource folders, and it was reported that a number of students did not find the videos. In 2010, a learning design map incorporating links to resources was introduced to the students via weekly folders. The survey outcomes revealed that the students experienced some technical difficulties in accessing the files linked in the maps. In 2011, a Headstart program was made available in the e-learning site, in addition to the improved learning design maps via the weekly folders and by-type resource folders, along with a draft and redraft of assignment approach. As shown in Figure 6.14, with the exception of the CAOS pre-test and post-test and two assessments (the second and fifth), the students in 2011 outperformed those in 2010, particularly in their final marks. In relation to the impact of the Headstart program in 2011, the students with access to the program performed better in their final examination than those who did not, and the overall class marks also improved.
Figure 6.14 Students' performance in the assessment tasks in 2010 and 2011

In STAT131, when the CAOS post-test revealed no such improvement, it cast some doubt on the efficacy of the test instrument in measuring student learning outcomes in the subject. An examination of all the evidence, including the weaker evidence provided by the assessments, suggested that the results for the CAOS post-test were anomalous (refer to Figure 6.14). As previously noted, this could be due to the test being weighted at only 1% participation marks (regardless of the actual scores); hence the test appeared inappropriate for measuring student performance at the end of the session, at least with this reward system.
6.3 Conclusion
In the ALTC Final Report (LE8-783) on STAT131 (Porter & Denny, 2011), it was recognised that the team had moved from an initial focus on providing video resources to students to a focus on providing video resources through better learning designs in the e-learning systems. Accordingly, the notion of learning designs has spread to other disciplines, implemented by people both within and outside the project team. This was evident in comments included in the report, for instance,
My engagement with others in the group increased when attention shifted from resource production to the discussion of how teachers guide students into good use of them, by building resources intelligently into a carefully stage managed sequence of sensibly planned learning tasks. Discussion of sharable, easily adaptable and highly visual 'learning designs' is ongoing with various people affiliated with this project, and Anne's group has both welcomed my contributions and taught me a great deal…The project has encouraged me to make the development and sharing of learning designs a major focus of my research and development work with international students in Science – which is now the basis for the first subject in a new PhD program structure being introduced at UOW this year. (Comment by Emily Purser, Learning Development, UOW) (Porter & Denny, 2011, p. 41)
[I] am also fully aware that the way this material is integrated is of vital importance. Students need to be able to recognise the benefit to them of accessing and utilizing these resources. (Comment by David Hastie, Faculty of Engineering) (Porter & Denny, 2011, p. 41)
Whilst the quality of a video (or other resources) is important, what is more important is the way that these are integrated into Learning and Teaching and how we encourage students to engage with these. (Comment by Hazel Jones, Learning Designer at the Australian College of Applied Psychology) (Porter & Denny, 2011, p. 42)
As discussed in Chapter 4, several innovations have been implemented in STAT131 over the past 10 years aimed at improving student learning and understanding in the subject. These innovations ranged from the collection and use of real data to working with topics of social significance. Students continued to work with real data, but with a manageable number of outliers and distributions. Assessment tasks involved the collection and analysis of data, portfolios, summaries, tests, presentations to class and final examinations. Currently, the assessment has stabilised on "redeeming approaches": test and retest, and draft and redraft of
assignment, in addition to a final examination. The design in 2009 involved the provision of video supports accessible via by-type resource folders in the e-learning site. In 2010, these resources were accessible via links in the learning design maps provided within weekly folders in the e-learning site. At this point, it seemed momentarily that "everything" had been tried to improve learning outcomes. The introduction of the Headstart program, the draft and redraft of assignment, the improved learning design maps within weekly folders, and the by-type resource folders available in the e-learning site in 2011 finally resulted in improved outcomes; hence, this was the focus of this chapter. One of the aims of this chapter was to investigate the impact of the subject designs, particularly the video resources, the learning design maps, the Headstart program, and the approaches to assessment, on student learning outcomes. This was done by examining changes in performance for three cohorts of students from 2009 to 2011. Data revealed that the students' performance at the baseline (CAOS pre-test) and at the end of the session (CAOS post-test) was identical between the two cohorts, 2010 and 2011. Students with access to the Headstart program, the draft and redraft of assignment, the improved learning design maps via weekly folders, and the by-type resource folders were found to have performed better in their assessment than students without access to these design features. The mean final marks in 2011 were significantly higher than in 2010 (p = 0.004) and in 2009 (p < 0.001). The failure rate fell significantly in 2011 compared to the years between 2005 and 2010 (Z = 1.99, p = 0.023), and most importantly the proportion of failures declined from 23% in 2009 and 18% in 2010 to 13% in 2011. Looking over all the assessment tasks, it would appear that the subject designs incorporating recent innovations on the e-learning site have been effective in improving student performance in STAT131. This was supported by students' comments in favour of the use of video resources embedded in the learning design maps and the provision of the Headstart program in the e-learning site to support their learning in the subject. The success of these new design features, particularly the Headstart program, has led to successful funding to implement it in two additional subjects that will use appropriate video resources and learning designs as identified throughout this study (Porter, 2011b). The next chapter reports a third case study where similar learning design maps were implemented in a mathematics subject for primary teachers. Specifically, the case study provides a different perspective on learning designs aimed at improving mathematics education.
CHAPTER 7 CASE STUDY 3
7.0 Introduction
The two primary case studies involved the trial of video resources, the progression through different learning designs and finally a Headstart program. Both case studies involved similar content, though with two different types of students, postgraduates and undergraduates. However, another issue arose: whether or not subject designs can be sufficiently adapted to other learning contexts or different subject areas. This led the researcher to conduct a third case study. The researcher was engaged to collaborate with the teaching staff in designing the subject delivery system for two relatively new mathematics subjects offered to primary education students. This case study involved the implementation of a subject design encompassing an online learning delivery system (Blackboard Learning system), assessment, and face-to-face learning including lectures and tutorial sessions. The case study examined the effectiveness of the learning design and resources, and the impact on mathematics confidence and learning.
7.1 Background
Unlike many other subjects, mathematics is characteristically content-rich and context-poor, and it is frequently perceived as difficult by students who try to "read" rather than understand its concepts. As a result, many students rely on rote learning, and consequently a number of mathematics students, as with statistics students, pass their subjects having failed to understand basic concepts. Furthermore, they lack confidence in their mathematical ability, and this often reinforces negative beliefs and anxiety and dampens interest in mathematics (Arul et al., 2004).
"People are very happy to say they don't like math," said Sian L. Beilock, a University of Chicago psychology professor … "No one walks around bragging that they can't read, but it's perfectly socially acceptable to say you don't like math." (Quoted in Sparks, 2011, p. 1)
As discussed in Chapter 1, the increasing number of students who dislike mathematics due to a lack of mathematical proficiency has led to fewer students enrolling in mathematics as a major subject. This is evident in several studies (e.g. Crawford & Schmidt, 2004; Ho, 2010;
Tariq, 2008; Thomas, Muchatuta & Wood, 2009) that reveal declining mathematical skills in many disciplines worldwide. This has flagged the need to provide adequate support for students' mathematical learning, not only by developing their knowledge and skills but by empowering them to apply these to relevant or real world problems. This is particularly important for teachers of primary level students. Likewise, teaching staff from the School of Mathematics and Applied Statistics at the University of Wollongong are concerned about the deficit of mathematical skills in an undergraduate mathematics degree program. For the purpose of this case study, a pair of mathematics subjects, MATH131 and MATH132, was selected to examine the impact of subject design in improving mathematical learning among trainee teachers. The aim was to ensure that all teachers have the mathematical and statistical content knowledge and confidence required to help them become excellent teachers of mathematics in the primary setting. It is essential that all teachers have the best possible training in this important and enabling discipline, as a failure to understand the fundamentals of mathematics can have a crippling effect on their students' academic development and future prospects. Teachers play an important role in their students' achievement and in the formation of beliefs and attitudes towards mathematics (AMSI, 2005; Ho, 2010; Sparks, 2011; Thomas et al., 2009; Uusimaki & Nason, 2004).
7.1.1 The subject design
MATH131 Mathematics for Primary Educators Part 1 and MATH132 Mathematics for Primary Educators Part 2 are a pair of mathematics subjects designed for prospective primary and early childhood teachers, specifically students enrolled in a Bachelor of Primary Education program (see the subject outline in Appendix 7.1). MATH131 and MATH132 are six-credit point compulsory subjects offered in the second year of the degree program. These subjects are delivered either by the lecturer in face-to-face lectures (only at the main campus) or through video conferencing (remote campuses). There are five campuses offering the subjects in each session: the main campus in Wollongong and four remote campuses in Shoalhaven, Bega, Moss Vale, and Batemans Bay (see Table 7.1). Each subject allocates a 3-hour lecture and a 1-hour tutorial session weekly, with the tutorials beginning in the second week of session. Both subjects recommend the same textbooks, which consist of eight books from the International Centre of Excellence for Education in Mathematics (ICE-EM): ICE-EM Mathematics Transition 1A, ICE-EM Mathematics Transition 1B, ICE-EM Mathematics Transition 2A, ICE-EM Mathematics Transition 2B, ICE-EM Mathematics Secondary 1A, ICE-EM Mathematics Secondary 1B, ICE-EM Mathematics Secondary 2A, and
ICE-EM Mathematics Secondary 2B. These eight books are also available on a CD. The variations in some components of the two subjects are presented in Table 7.1.
Table 7.1 Components of MATH131 versus MATH132

Component: Number of students enrolled by campus per session
MATH131: 14 to 159 students (Wollongong); 7 to 25 students (Bega); 19 to 24 students (Batemans Bay); 7 to 23 students (Moss Vale); 35 to 39 students (Shoalhaven)
MATH132: 17 to 79 students (Wollongong); 9 to 11 students (Bega); 16 to 19 students (Batemans Bay); 6 to 18 students (Moss Vale); 15 to 29 students (Shoalhaven)

Component: Total enrolment per session
MATH131: 106 to 224 students
MATH132: 91 to 128 students

Component: Session offered
MATH131: March/Autumn 2009a, March/Autumn 2010, March/Autumn 2011
MATH132: August/Spring 2009, August/Spring 2010, August/Spring 2011b

Component: Subject prerequisites
MATH131: None
MATH132: MATH131

Component: Topics coveragec
MATH131: Numeration, Algebra, Number Theory, Statisticsd and Graphical Representation of Data
MATH132: Geometry, Measurement, Probability and Statisticse

a No student survey was undertaken
b Neither student survey nor assessment results were observed
c Subject content timeline is included in Appendix 7.2
d This topic is taught to a sufficient depth that enables the analysis of data relevant to the teaching profession such as that provided to schools on NAPLAN (National Assessment Program-Literacy and Numeracy) test outcomes
e This topic is taught to a sufficient depth that enables the analysis of outcomes of educational research
Both subjects encompass materials required to teach mathematics in stages 1–4 of the NSW K-6 syllabus and statistical tools needed by the teaching profession. The subjects' aims are to develop (a) a deep understanding of fundamental mathematics, (b) sufficient confidence in the material to enable graduates to effectively teach children mathematics, (c) an ability to interpret statistics related to educational assessment and research, and (d) an appreciation for the beauty of mathematics which enables graduates to teach mathematics with enthusiasm and thereby foster a love of mathematics. Within a blended learning environment, the subjects have incorporated both face-to-face learning (either lectures or tutorials) and online resources through e-learning systems. The online learning resources and supports consist of fortnightly learning design maps, navigational tools by topic materials (see Figure 7.6), lecture notes, video resources to support learning on a small number of concepts, a student forum, Edu-stream (recordings of lectures), practice quizzes, links to relevant websites such as NAPLAN (National Assessment Program-Literacy and Numeracy, accessible at http://www.naplan.edu.au/) and SMART (School Measurement, Assessment and Reporting Toolkit, accessible at http://www.schools.nsw.edu.au/learning/7-12assessments/smart/index.php),
continuous assessment, tutorial exercises, worked solutions to selected exercises, and a subject content timeline. Both subjects are placed on a single e-learning site (a shared homepage) so that the resources are linked together (see Figure 7.1), which enables students to revise or refer to previous resources when completing their present tasks. Similarly, the navigational tools by topic materials provided in the e-learning site connect the materials within and between the subjects to relevant resources and concepts online. The students are encouraged to visit the e-learning site on a regular basis for recent updates and notices that relate to the subjects.
Figure 7.1 MATH131 and MATH132 homepage in the e-learning site
7.1.1.1 Weekly and fortnightly learning design maps
In MATH131 March/Autumn 2009 and MATH132 August/Spring 2009, all learning resources and supports were placed on the e-learning site using by-type resource folders (refer to Figures 7.1 and 7.2). As can be seen in Figure 7.1, the subjects' homepage provides other resources in folders such as links to articles, reports, and useful websites (e.g. NAPLAN, SMART), discussion groups via the student forum, video resources on some of the concepts taught in the subjects, online order forms for textbooks, and online subject evaluations to be completed at the end of the session. The first link in the subject folder (either MATH131 or MATH132, see Figure 7.2) connects to a weekly folder (see Figure 7.3). The weekly folders consist of resources by week/topic including lecture notes, tutorial exercises, online practice quizzes, assignments, and worked solutions (see Figure 7.3). Alternatively, students could access resources through the other links provided in the subject folder, namely the by-type resource folders. The by-type resource folders organise resources according to categories such as lectures and tutorials by concept (navigational tools by content materials), lecture resources, tutorial exercises, assessments, and subject handouts and lecture outlines (see Figure 7.2).
Figure 7.2 The links provided in the subject folder
Figure 7.3 The resources by week/topic accessible via weekly folders
There was a large amount of resource and support material provided to students in the subjects' e-learning site. Therefore, commencing in March/Autumn 2010 and continuing until March/Autumn 2011, fortnightly learning design maps (e.g. Week 1 & 2, Week 3 & 4, Week 5 & 6) were introduced to students in the subject folder (refer to Figure 7.4).
Figure 7.4 The fortnightly learning design maps and other links provided in the subject folder
The fortnightly learning design maps were adapted from the implementation of the weekly learning design maps in both SHS940 (Chapter 5) and STAT131 (Chapter 6). Each map was composed of two weeks' work, where the tasks, resources, and support materials were organised into tables (see Figure 7.5). Links to resources including lecture notes, tutorial exercises, online practice quizzes, assignments, Edu-Stream, subject outlines, navigational tools by topic materials, worked solutions, and other useful links could be accessed via the fortnightly tables. The fortnightly learning design maps were provided as PDF files in the e-learning site for both MATH131 March/Autumn 2010 and MATH132 August/Spring 2010. In MATH131 March/Autumn 2011, improved fortnightly learning design maps using HTML files were provided to students in addition to the PDF files (printer-friendly) for easy access to resources, as recommended by the students in the STAT131 surveys (case study 2). Alternatively, the students were able to access all learning resources through the other links provided in the subject folder (see Figure 7.4).
Figure 7.5 Two weeks' work accessible via the fortnightly learning design maps
Table 7.2 summarises the overall learning designs set up within the e-learning site for both subjects from August/Spring 2009 to March/Autumn 2011.
Table 7.2 Summary of learning designs within the e-learning site for mathematics subjects

MATH131 March/Autumn 2009 and MATH132 August/Spring 2009 — Weekly folders and by-type resource folders
1. Select the subject (MATH131 or MATH132) via the subject folder on the subjects' homepage (Figure 7.1).
2. Within the subject folder, two options were available to access resources: weekly folders (Figure 7.2), containing week by week activities, assessment, lectures and tutorials according to topics (Figure 7.3); and by-type resource folders including lectures and tutorials by concept, lecture resources, tutorial exercises, assessments, subject handouts and lecture outlines (Figure 7.2).

MATH131 March/Autumn 2010 and MATH132 August/Spring 2010 — Fortnightly learning design maps (PDF files) and by-type resource folders
1. Select the subject (MATH131 or MATH132) via the subject folder on the subjects' homepage (Figure 7.1).
2. Within the subject folder, two options were available to access resources: fortnightly folders, e.g. Week 1 & 2, Week 3 & 4, etc. (Figure 7.4), in which two weeks' work was accessible via the fortnightly learning design maps (PDF files) linked to resources, tasks, and supports (Figure 7.5); and by-type resource folders including lectures and tutorials by concept, lecture resources, tutorial exercises, assessments, subject handouts and lecture outlines (Figure 7.4).

MATH131 March/Autumn 2011 — Fortnightly learning design maps (HTML and PDF files) and by-type resource folders
1. Select the subject (MATH131 or MATH132) via the subject folder on the subjects' homepage (Figure 7.1).
2. Within the subject folder, two options were available to access resources: fortnightly folders, e.g. Week 1 & 2, Week 3 & 4, etc. (Figure 7.4), in which two weeks' work was accessible via the improved fortnightly learning design maps (HTML and PDF files) linked to resources, tasks, and supports (Figure 7.5); and by-type resource folders including lectures and tutorials by concept, lecture resources, tutorial exercises, assessments, subject handouts and lecture outlines (Figure 7.4).
7.1.1.2 Navigational tools by content materials
In MATH131 and MATH132, navigational tools by topic materials (an adaptation of a concept map) were used to connect the materials provided in the subjects using building blocks (see Figure 7.6). More importantly, they also created a visual summary of the key concepts to be learned and their relationships with each other. As mentioned earlier in section 3.7.1 (Chapter 3), concept maps are potentially useful as a guide for students, facilitating learning through reflection on what they know and do not know. This process promotes the development of mathematical thinking, reasoning, and effective communication, all desirable skills for successful learning in mathematics. There have been many studies focusing on the use of concept maps and their benefits specifically in mathematics education research, for instance Jin and Wong (2010), Fuata'i and McPhan (2008), Caldwell, Al-Rubaee, Lipkin, Caldwell, and Campese (2006), and Brinkmann (2005). In these subjects, two navigational tools by content materials (one for each subject) were embedded in the subject folder in the e-learning site. They were aimed at supporting student learning by linking each topic, appearing as a block, with relevant key concepts via resources such as lecture notes, tutorial exercises, and online practice quizzes (see Figure 7.7). These resources in turn were to aid student understanding of the topics. The maps also served as an alternative means of access to the subject materials, other than the fortnightly learning design maps or by-type resource folders available in the e-learning site.
Figure 7.6 A navigational tool by topic materials provided in MATH132 subject folder
Figure 7.7 A navigational tool by topic materials with links to key concepts in MATH131 subject folder
7.1.1.3 Video supports on some concepts
Unlike in STAT131 and GHMD983/SHS940, only limited video resources were provided to the students in the e-learning site. The two main topics included in the video supports were fractions and number theory, with each topic composed of several subtopics as shown in Table 7.3. As the videos could be required by students in both subjects as refreshers of some concepts for the two topics, they were made available on the subjects' homepage in the e-learning site (see Figure 7.1). All videos were developed using tablet PC technology and Camtasia Studio, for example as shown in Figure 7.8 (discussed in section 4.10.1 in Chapter 4).
Table 7.3 Subtopics covered in the video supports on two mathematical topics

Fraction: What is a fraction; Equivalent fractions; Finding the lowest common multiple (Method 1); Finding the lowest common multiple (Method 2); Adding proper fractions; Subtracting proper fractions; Multiplying proper fractions; Dividing proper fractions; Dividing by a fraction, multiplying by its reciprocal

Number Theory: Prime factorization; Highest common factor; Lowest common multiple (Method 1); Lowest common multiple (Method 2)
Figure 7.8 Sample of video displaying the concepts on fraction accessible via the e-learning site
7.1.1.4 Assessment system
Students' final marks were an aggregate of marks awarded for:
- tutorial assessment (worth 30%),
- a mid-session examination (20%), and
- a final exam (50%).
The subjects' assessment systems have been structured around constant weekly assessment of student learning and understanding of mathematics, rather than placing a heavy weight on a single assessment. This system is commonly known as continuous assessment and is believed to be beneficial for students; it is perceived to be particularly valuable in mathematics subjects that require an understanding of foundation material for progression to later material in the subject (Broadbridge & Henderson, 2008). Furthermore, continuous assessment allows the lecturers or tutors to identify which topic areas appear difficult for student learning and to recognise students at risk who require additional support. Tutorial assessment consists of on-paper quizzes and individual assignments. There are five quizzes and five assignments contributing a total of thirty per cent to the final marks in both MATH131 and MATH132. The quizzes and assignments, worth a total of fifteen per cent each, assess the materials or topics covered up to and including the previous week of lectures (refer to Appendix 7.2). The quizzes are open-book and contain three multiple choice questions to be completed without the use of calculators. The mid-session examination, worth a
total of twenty per cent of the final marks is usually administered in week 7 during tutorial sessions. This exam covers materials or topics up to and including the lectures in week 6. Unlike quizzes, it is a closed-book exam but as with quizzes, the students are not allowed to use calculators.
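To make the 30/20/50 weighting concrete, the short sketch below aggregates a hypothetical student's component marks into a final mark. The component marks are invented for illustration only.

```python
# A minimal sketch of the final-mark aggregation described above,
# using the 30% / 20% / 50% weighting. Component marks are hypothetical.
weights = {"tutorial_assessment": 0.30, "mid_session_exam": 0.20, "final_exam": 0.50}

# Each component mark is expressed as a percentage out of 100.
component_marks = {"tutorial_assessment": 78.0, "mid_session_exam": 65.0, "final_exam": 70.0}

final_mark = sum(weights[c] * component_marks[c] for c in weights)
print(f"Final mark: {final_mark:.1f}")  # 0.3*78 + 0.2*65 + 0.5*70 = 71.4
```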
7.1.1.5 Online practice quizzes
To support student learning in these subjects, weekly online practice quizzes were made available to students through the e-learning site (see Figure 7.9). The online practice quizzes were not graded or included in the students' final marks. These optional quizzes allowed students to test their mathematical skills and understanding, at times convenient to them and with instantaneous feedback, prior to taking the similar assessment quizzes. That is, the assessment quizzes administered in even weeks (e.g. quiz 1 in week 2, quiz 2 in week 4) were similar to the online practice quizzes provided in odd weeks (e.g. quiz 1 was similar to the practice quiz for week 1, quiz 2 was similar to the practice quiz for week 3). To increase the students' competency on particular topics, the practice quizzes were randomly generated as multiple choice quizzes in the e-learning system, based on the topics covered each week, which allowed the students to attempt the practice quizzes more than once.
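The following is a minimal, generic sketch of how randomly generated practice quizzes of this kind might be drawn from an item bank with immediate feedback. It is illustrative only and does not represent the Blackboard e-learning system's internal implementation; the item bank, questions, and scoring logic are invented for the example.

```python
# Generic sketch of a randomised multiple-choice practice quiz drawn from an
# item bank, with instant feedback. Not the e-learning system's implementation.
import random

# Hypothetical item bank keyed by weekly topic: (question, options, correct index).
item_bank = {
    "fractions": [
        ("Which fraction equals 2/4?", ["1/2", "1/3", "2/3", "3/4"], 0),
        ("What is 1/3 + 1/6?", ["1/2", "2/9", "1/9", "1/4"], 0),
        ("What is the reciprocal of 3/5?", ["5/3", "3/5", "1/5", "15"], 0),
        ("Which is a proper fraction?", ["7/4", "4/7", "9/9", "5/3"], 1),
    ],
}

def generate_quiz(topic, n_questions=3, seed=None):
    """Draw a random selection of questions for the given topic."""
    rng = random.Random(seed)
    return rng.sample(item_bank[topic], n_questions)

def mark_quiz(quiz, answers):
    """Return the score and per-question feedback for a list of chosen option indices."""
    feedback, score = [], 0
    for (question, options, correct), chosen in zip(quiz, answers):
        right = chosen == correct
        score += right
        feedback.append(f"{question} -> {'correct' if right else 'incorrect, answer: ' + options[correct]}")
    return score, feedback

quiz = generate_quiz("fractions", seed=1)          # a new random draw each attempt
score, feedback = mark_quiz(quiz, answers=[0, 0, 1])
print(score, *feedback, sep="\n")
```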
Figure 7.9 Sample of online practice quiz provided in MATH132 subject folder
Broadbridge and Henderson (2008) further noted that the use of online assessment was particularly beneficial when there was a shortage of mathematics staff and increased lecture sizes. The completion of assessment online by the students also helped to lessen the lecturers' and tutors' work in giving instantaneous feedback to individual students and in marking the assessment quickly (this satisfies the feedback principle of behavioural theory).
7.2 Aim for the study
The researcher was engaged to design the layout of the subject e-learning site with a view to optimising the students' usage of online learning resources in order to facilitate their learning of the subject. The teachers wanted the students, specifically the trainee teachers, to have the mathematical and statistical content knowledge required to help them become excellent teachers of mathematics in the primary setting. The researcher was engaged to develop the online learning delivery strategies which might facilitate the achievement of that aim, and chose to examine the effectiveness of the subject design including the online learning delivery system through the e-learning site, the assessment system, lectures and tutorials.
7.3 Method
This case study was undertaken with students enrolled in MATH131 and MATH132 commencing in August/Spring 2009 and was conducted across four successive teaching sessions. Action research afforded the methodological approach that allowed documentation of patterns of change across a spiral of four implementations of the fortnightly learning design maps in the e-learning system (see Figure 4.1 in Chapter 4) (O'Leary, 2010). Figure 7.10 details the sessions encompassed by this case study.
Figure 7.10 Implementation cycles of MATH131 and MATH132 included in this case study
Using a mixed methods approach, data included student survey responses, student and teacher comments, assessment results and e-learning student tracking statistics. The steps involved in the collection of evaluation data throughout this case study were as follows:
1. Ethics approval was obtained from the University of Wollongong Ethics Committee (ethics number: HE09/379, see Appendix 7.3) in order to invite students to participate in the investigation of the impact of learning resources on student learning outcomes.
2. In MATH132 August/Spring 2009, all learning resources were organised in the e-learning site using weekly folders and by-type resource folders according to categories of resources. The by-type resource folders were for lectures and tutorials by concept (navigational tools by content materials), lecture resources, tutorial exercises, assessments, and subject handouts and lecture outlines (refer to Figure 7.2).
3. In MATH131 March/Autumn 2010 and MATH132 August/Spring 2010, all learning resources were organised in the e-learning site on a fortnightly basis via fortnightly learning design maps (e.g. Week 1 & 2, Week 3 & 4) in addition to the by-type resource folders (refer to Figure 7.4). As noted earlier in section 7.1.1.1, the fortnightly learning design maps consisted of all relevant materials, such as lecture notes, tutorial exercises, assignments, worked solutions, online practice quizzes, and the student forum, based on two weeks' work. These maps were accessible via links provided in the tables created in PDF (see Figure 7.5).
4. In MATH131 March/Autumn 2011, the improved fortnightly learning design maps were accessible via HTML files in the e-learning site as an alternative to the PDF files and other links in the by-type resource folders.
5. Students completed on-paper surveys during tutorial classes in week 12 (except in MATH132 August/Spring 2009) together with the completed consent form indicating their agreement to participate in the study. The information sheet was used to clarify the circumstances of the students' involvement in the study. For instance, they were told that their participation was voluntary and that they were free to refuse to participate and to withdraw from the study at any time. They would not be penalised for not participating in the study, and they were informed that the outcome of the study should be beneficial for future students.
6. To examine the usage of online resources in the e-learning system supporting student learning of mathematics, data from the e-learning student tracking statistics, observed from week 1 to week 13, were used for analyses for both MATH131 and MATH132. To examine the use of the fortnightly learning design maps in the e-learning system, tracking was also done in week 8 for MATH132 August/Spring 2010 and MATH131 March/Autumn 2011.
7.4 Outcomes
The primary purpose of the evaluation in this case study was:
- to examine the usefulness of the learning resources for student learning,
- to identify the student perceived competency on topics in the subjects, and
- to examine the impact of subject design including the online learning delivery and assessment systems.
7.4.1 Students' response rates and background
In 2009, twenty-three students from a class of 91 in MATH132 took part in an online survey via the e-learning system, a final response rate of 25% (refer to Table 7.4). Due to the low response rate, later evaluations in 2010 in both MATH131 and MATH132 involved the students completing on-paper surveys during tutorial classes. As a result, 155 of the 219 (71%) enrolled students in MATH131 took part in the survey. However, in MATH132 only sixty students from a class of 169 participated in the survey, a response rate of 36 per cent. Since the survey was conducted at the end of session, lack of attendance in tutorial classes, as reported by the tutors, was perhaps the reason for the low response rate to the survey. The low response rates, particularly in MATH132 in both 2009 and 2010, may account for inconsistencies in the data retrieved from the surveys. The small sample sizes and selection by attendance in class might also limit how well the responses represent the cohorts' views for those sessions.
Table 7.4 Distribution of students enrolled by campuses and gender, and the survey response rates across five sessions in mathematics subjects

                               MATH131                     MATH132
Sessiona                   A 2009  A 2010  A 2011      S 2009  S 2010
Wollongong                   14     139     159           8     118
Batemans Bay                 19      24      19          16      20
Bega                         15      10       7          11      10
Shoalhaven                   35      39      29          37      15
Moss Vale                    23       7      10          19       6
Total enrolled (N)c         106     219     224          91     169
Total female enrolledc       84     167     163          69     128
Number of responses (n)      -b     155      -b          23      60
Response rate (%)            -b      71      -b          25      36
Female response rate (%)     -b      70      -b          83      70

a A = March/Autumn, S = August/Spring
b Not surveyed
c Data source: the School of Mathematics and Applied Statistics, University of Wollongong. Figures might change slightly at different dates as students in some circumstances may be retrospectively withdrawn without penalty from the subject.
In regard to the distribution of students who responded to the three surveys according to gender, the findings revealed that more than two-thirds of them were female: (i) MATH131 in 2010 (70%, n = 108), and (ii) MATH132 in 2009 (83%, n = 19) and 2010 (70%, n = 42) (refer to Table 7.4). This result was consistent with the profile of the teaching population in primary education and with the distribution of enrolments in these subjects, which is dominated by females. In terms of the breakdown of enrolment by campus in both subjects, the number of students enrolled at the main campus in Wollongong increased markedly after 2009, while the remote campuses recorded similar enrolment numbers across sessions (see Table 7.4).
7.4.2 Representativeness: expected performance versus final grades
As in the STAT131 and GHMD983/SHS940 surveys, the students were asked to anticipate their grades in order to provide some indication of the performance of students participating in the survey. In order to examine the differences between the students' final grades and their anticipated performance in the subjects, a test for differences in two proportions was used for the following analyses.
In MATH132, there were no significant differences in the distribution of students' final grades versus their expected performance in 2009 (refer to Table 7.5). Specifically, the outcomes revealed that 52% of the students attained a credit grade or better, compared with 66% of the responding students who expected those grades. Only 1% of the students failed, in contrast to the 4% of the responding students who anticipated failing the subject. Likewise in 2010, the distribution of students' final grades appeared similar to their anticipated grades. Approximately half of the students (51%) achieved at least a credit grade, compared with 42% of the responding students who expected those grades. Two per cent of the students failed, although none of the responding students anticipated failing the subject. Overall in MATH132, the results suggested that the performance of students participating in the surveys was consistent with their final grades.
Table 7.5 Percentages of students' final grades and their anticipated performance in mathematics subjects
                    MATH132                                                   MATH131
                    August/Spring 2009   August/Spring 2010                  March/Autumn 2010
Grade               Final (N = 91)       Final (N = 168)a  Expected (n = 60)  Final (N = 219)  Expected (n = 155)
Fail                 1                    2                 -                   4                -
Pass conceded        2                   11                10                   4                6
Pass                45                   36                48                  31               42
Credit              22                   25                25                  30               30
Distinction         20                   18                15                  19               15
High distinction    10                    8                 2                  12                7
Total              100                  100               100                 100              100

a One withheld result excluded
In MATH131, the expected performance of the responding students in 2010 was somewhat inconsistent with the distribution of their final grades (refer to Table 7.5). An analysis of differences in two proportions revealed that the proportion of students who attained the top grades (High Distinction and Distinction) (31%) was significantly higher (Z = 1.93, p = 0.027) than the proportion of responding students (22%) who anticipated those grades. Thirty-five per cent of the students attained a pass or pass conceded grade, and this was significantly lower (Z = 2.52, p = 0.006) than the proportion of responding students (48%) who anticipated those grades. Four per cent of the students failed, but none of the responding students anticipated failing the subject. Overall, the results suggested that the responding students underestimated their grades, possibly suggesting a lack of confidence. The inconsistency of results was possibly due to the within-session student assessment not providing a good indication of their performance.
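As an illustration of the test for differences in two proportions used here and elsewhere in this chapter, the sketch below compares the proportion attaining a top grade (31% of 219 students) with the proportion anticipating one (22% of 155 respondents). It is a minimal reconstruction with counts approximated from the reported percentages, not the original analysis.

```python
# A minimal sketch of a two-proportion z-test, with counts approximated from
# the percentages reported above (not the exact counts used in the thesis).
from statsmodels.stats.proportion import proportions_ztest

top_grades = [68, 34]      # ~31% of 219 final grades, ~22% of 155 expected grades
totals = [219, 155]

z_stat, p_value = proportions_ztest(count=top_grades, nobs=totals, alternative="larger")
print(f"Z = {z_stat:.2f}, one-sided p = {p_value:.3f}")
```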
7.4.3 Completing the subjects' components
Based on the survey responses, there was a concern that a large proportion of students in each cohort spent between zero and eight hours per week on the subject (refer to Figure 7.11): (i) MATH131 in 2010 (87.4%), and (ii) MATH132 in 2009 (86.9%) and 2010 (89.6%). Eight hours per week was considered to be insufficient for learning in the mathematics subjects. It has been suggested that the expected time for an average student to complete a six-credit point subject is 12 hours of work per week, although this figure is possibly unrealistic, particularly given students' increasing need to undertake employment (Porter, 2007). A Fisher's exact test was used to compare the differences in proportions between the two cohorts (due to np̂ and n(1 − p̂) being less than 5). In MATH132, the results revealed that the proportion of students who spent at least six hours per week on the subject was significantly higher (p = 0.029) in 2009 (74%) compared to 2010 (47%). The proportion of students who spent at least six hours per week in MATH132 in 2009 was also significantly higher (p = 0.003), by 34%, compared to MATH131 in 2010 (40%).
Figure 7.11 Percentages of average time spent by students per week in mathematics subjects
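A sketch of the kind of Fisher's exact test reported above is shown below. The 2x2 counts are approximated from the reported percentages of students spending at least six hours per week (about 74% of 23 in MATH132 2009 versus about 47% of 60 in MATH132 2010); they are illustrative only, not the exact contingency table used in the thesis, so the resulting p value may differ slightly from the one reported.

```python
# A minimal sketch of a Fisher's exact test on a 2x2 table of
# "at least six hours per week" vs "fewer", with counts approximated
# from the percentages reported above (illustrative only).
from scipy.stats import fisher_exact

#                 >= 6 hours   < 6 hours
table = [[17, 6],    # MATH132 2009 (n = 23, ~74%)
         [28, 32]]   # MATH132 2010 (n = 60, ~47%)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```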
The result indicates that more than two-thirds of the students in MATH132 in 2009 spent more hours weekly learning the subject compared to MATH131 in 2010. This was possibly due to the higher difficulty level of the content in MATH132; certainly the topics are more advanced in MATH132, depending on prior knowledge of the topics in MATH131 (refer to Appendix 7.2). In MATH131 and MATH132, the students were surveyed about how they worked through the exercises provided on the e-learning site that were discussed during tutorial classes. In all
subjects, the students were encouraged to complete the exercises without referring to the worked solutions. For this reason, the solutions were released a week after the tutorial classes via the e-learning site. The findings revealed that across subjects, most students had worked in the same manner (refer to Table 7.6). Only a little over one-third of the students in both subjects reported completing the tutorial exercises before checking their answers against the worked solutions. In 2009, nearly 40% of the students in MATH132 reported downloading the worked solutions in order to complete the tutorial exercises, a figure reducing to 27% in 2010.
Table 7.6 Students' responses on "How did you work through the tutorial exercises?" in mathematics subjects
Percentages are shown for MATH132 August/Spring 2009 (n = 23), MATH132 August/Spring 2010 (n = 60)b, and MATH131 March/Autumn 2010 (n = 155)a, respectively.

1. Download the worked solutions to most exercises before attempting the exercises: 8.7, 11.7, 12.3
2. Started completing the exercises and downloaded the worked solutions to complete them: 39.1, 26.7, 24.5
3. Essentially completed all tutorial exercises without reference to the worked solutions: 17.4, 13.3, 15.5
4. Essentially completed all tutorial exercises first then checked them from the worked solutions: 34.8, 35.0, 36.1

a 11.6% did not answer this question (n = 18)
b 13.3% did not answer this question (n = 8)
7.4.4 Importance of learning resources on student learning
All resources used by the students were ranked on a scale: 1 = not applicable or rarely used, 2 = little use, 3 = moderately useful, and 4 = extremely useful. Those resources valued by 85% or more of the students as moderately useful or extremely useful were considered good quality resources in the context of this thesis. In MATH131, four resources were considered useful by the students in 2010: assignments (93%), quizzes (92%), lecture notes (88%), and online practice quizzes (87%) (refer to Table 7.7). Three other primary resources were perceived as important: worked solutions (83%), tutorial classes (79%), and tutorial exercises (75%). In terms of the usefulness of the fortnightly learning design maps introduced in 2010, about half of the students (54%) endorsed the use of the maps as a tool to organise the subject materials on a two-week or fortnightly basis. As expected, several additional resources were valued as useful by no more than half of the students: Edu-stream (50%), subject
content timeline (45%), navigational tool by topic materials (34%), textbooks (26%), video resources (19%), and student forum (9%). These resources remained relatively minor in the portfolio of resources as they were support materials for students who needed them in the subject. Not many students regarded the videos as useful, possibly due to the limited topics included in the videos as noted in section 7.1.1.3. Therefore, in the survey students were asked to provide recommendations on topics for inclusion in the videos. Of the 32 responding students, 38% suggested that the videos include all topics, 31% requested topics focusing on statistics, 19% suggested number theory and algebra, and 12% commented that they were not aware of the existence of videos in the subject.
Table 7.7 Resources perceived as moderately or extremely useful on student learning across sessions in mathematics subjects

                                             MATH132              MATH132              MATH131
Sessiona                                     S 2009 (n = 23)      S 2010 (n = 60)      A 2010 (n = 155)
Resource                                     %b      Rankc        %b      Rankc        %b      Rankc
Assignments                                  81.9    6            85.0    5            92.7    1
Quizzes                                      73.9    8            88.1    3.5          92.0    2
Lecture notes                                -d      -            81.7    6            87.5    3
Online practice quizzes                      100     1            88.1    3.5          87.3    4
Worked solutions for tutorial exercises,
  assignments, mid-session exams and exams   82.6    4.5          89.8    2            83.3    5
Tutorial classes                             90.9    2            71.2    10           78.9    6
Tutorial exercises                           86.9    3            72.4    9            75.4    7
Mid-session exams                            82.6    4.5          91.5    1            72.5    8
Lectures                                     65.2    9            81.3    7            68.2    9
Working with peers                           60.8    10           74.6    8            65.8    10
Weekly folders                               77.3    7            -e      -            -d      -
Fortnightly learning design maps             -e      -            -f      -            53.8    11
Edu-stream                                   26.1    12.5         67.8    11           50.0    12
Subject content timeline                     26.0    14           52.6    12           44.6    13
Navigational tool by topic materials         26.1    12.5         39.0    13           34.0    14
Textbooks                                    47.8    11           30.5    14           25.5    15
Video resources (where available)            21.7    15           -d      -            18.9    16
Student forum                                4.3     16           10.2    15           8.8     17

a A = March/Autumn, S = August/Spring
b Percentages of students responding as moderately or extremely useful
c Ranked by percentage reporting resources useful
d Not surveyed
e Not provided
f Resources provided but not surveyed
In MATH132, three resources were perceived as useful for learning by the students in 2009: online practice quizzes (100%), tutorial classes (91%), and tutorial exercises (87%) (refer to Table 7.7). In 2010, five resources were valued highly by the students: the mid-session exam (92%), worked solutions (90%), quizzes and online practice quizzes (88% each), and assignments (85%). As with MATH131, the online practice quizzes were among the most important resources for learning in both 2009 and 2010. This result is in accord with the current educational literature that highlights the importance of assessment in motivating students to learn and understand better (Morris, 2008). In 2009, more than two-thirds of the students (77%) indicated that they liked using the weekly folders to access the resources provided in the e-learning site. Although the fortnightly learning design maps were also provided to students in 2010, they had not been asked to rank this resource in the survey. For this reason, an analysis of e-learning student tracking was used to examine the use of the maps, as discussed in section 7.4.6.
Overall, the students in 2010 valued more resources (four to five resources) in both MATH131 and MATH132, when the fortnightly learning design maps were available in the e-learning site, compared to three resources in 2009 without the maps. To examine the differences in students' perceptions of the usefulness of resources for learning without (2009) and with the maps (2010) provided in the e-learning site, a test for differences in proportions was used between the cohorts in MATH132. Due to the small sample size in 2009 and the violation of normality assumptions for this cohort, a non-parametric test (Mann-Whitney U test) was carried out. The results revealed significant differences in the proportions of students who endorsed the use of Edu-stream, textbooks, and tutorial classes between the 2009 and 2010 cohorts (refer to Table 7.8). Differences were found only in resources with low usefulness. More students in 2009 perceived textbooks and tutorial classes as useful for learning than in 2010. On the other hand, more students in 2010 preferred the use of Edu-stream for learning than in 2009. This suggests that the provision of the fortnightly learning design maps in 2010 improved students' perceptions of Edu-stream, available through the site, but lessened their perceptions of the textbooks and tutorial classes compared to 2009, both resources being accessed outside the e-learning system.
Table 7.8 Differences on the perceived usefulness of resources for learning in MATH132

                    MATH132 August/Spring 2009        MATH132 August/Spring 2010        Mann-Whitney U test
Resourcea           Mean Rank  Sum Rank   n    %b     Mean Rank  Sum Rank   n    %b     U        p valuec
Edu-Stream          25.63      589.5      23   26.1   47.69      2813.5     59   67.8   313.5    < 0.001
Textbooks           49.91      1148.0     23   47.8   38.22      2255.0     59   30.5   485.0    0.037
Tutorial classes    55.82      1228.0     22   90.0   35.47      2093.0     59   71.2   323.0    < 0.001

a Scale: 1 = not applicable or rarely used, 2 = little use, 3 = moderately useful, 4 = extremely useful
b Percentages of students responding as moderately or extremely useful
c Significant difference at the 0.05 level
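To illustrate the Mann-Whitney U comparison used here, the sketch below compares two cohorts' usefulness ratings on the 1-4 scale. The rating vectors are invented placeholders, not the survey data behind Table 7.8.

```python
# A minimal sketch of the Mann-Whitney U test comparing usefulness ratings
# (1-4 scale) between two cohorts. The ratings below are invented placeholders.
from scipy.stats import mannwhitneyu

ratings_2009 = [1, 2, 1, 3, 2, 1, 1, 2, 4, 1]       # hypothetical 2009 ratings
ratings_2010 = [3, 4, 3, 2, 4, 3, 3, 2, 4, 3, 1, 3]  # hypothetical 2010 ratings

u_stat, p_value = mannwhitneyu(ratings_2009, ratings_2010, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```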
The survey also sought information on students' perception of exam preparedness as a consequence of completing tutorial classes, although not all students had completed them. As shown in Figure 7.12, approximately one-third of the students (33.8%) in MATH131 saw the tutorial exercises as preparing them well for the exam or leaving them in need of only basic revision. In MATH132, 60.9% of the students saw the relevance of the tutorial exercises to the final examination. It was likely that the students had learned from MATH131 that following the subject tutorials was important to examination success.
Figure 7.12 Student perceived impact of tutorial exercises on their final exam in mathematics subjects
A cross-tabulation analysis between how students completed the tutorial exercises and their perception of exam preparedness revealed the following findings. In MATH131, a test for differences in two proportions revealed no significant difference in the students' perceived confidence in exam preparedness (i.e. being well prepared or needing only basic revision for the exam) between students who completed all the tutorial exercises (40%) and those who downloaded the worked solutions to complete the exercises (29%) (refer to Table 7.9).
Table 7.9 Students' perception of exam preparedness versus completing their tutorial exercises in MATH131 in 2010
                                              Percentages
Exam preparedness                     Completed all the tutorial    Downloaded the worked
                                      exercises (n = 68)            solutions (n = 49)
Well prepared                         24                            12
Basic revision                        16                            17
Thoroughly review all lecture
  materials                           51                            61
More time to study                    9                             10
Total                                 100                           100
In MATH132, students who completed all the tutorial exercises (67%) in 2009 seemed more confident (i.e. being well prepared or needing only basic revision for the exam) than those who downloaded the worked solutions to complete the exercises (54%) (using a Fisher's exact test due to np̂ and n(1 − p̂) being less than 5) (refer to Table 7.10). However, the power of this test was low due to the small numbers, and it would be useful to check with larger sample sizes.
Table 7.10 Students' perception of exam preparedness versus completing their tutorial exercises in MATH132 in 2009

                                              Percentages
Exam preparedness                     Completed all the tutorial    Downloaded the worked
                                      exercises (n = 12)            solutions (n = 11)
Well prepared                         25                            27
Basic revision                        42                            27
Thoroughly review all lecture
  materials                           25                            27
More time to study                    8                             19
Total                                 100                           100
7.4.5 Effectiveness: face-to-face and online learning
In the MATH131 survey, the students in 2010 were asked to comment on their experiences of learning as a consequence of the subject design delivered face-to-face and through online learning in the e-learning system. Of the 42 responding students, half (50%) commented on their experiences in relation to the subject assessment (including online practice quizzes, weekly quizzes, and assignments), while the remainder commented on lecture notes (21%), tutorial classes and exercises (14%), the lecturer/tutors (10%), and lectures (5%). Examples of their comments were as follows:
Assessment (50%)
- Online practice quizzes are of great resource for revision.
- Weekly assignments were very helpful for identifying my weak spots.
- Quizzes and online practice quizzes had no relevance to exams.
- The e-learning site was really resourceful for completing assignments and quizzes.
- Weekly quizzes and assessments really made sure that I knew what to do and how I was keeping up.
- There were too much in class quizzes and assignments.
Lecture notes (21%)
- The online lectures helped me most, as I could do it at my own pace.
- Lecture notes although wasn't as helpful because notes and lecturer were different.
- Lecture notes are a big help I refer to them regularly but they could be clearer.
- There were too much information and too quickly without adequate information provided in the lecture notes.
Tutorial classes and exercises (14%)
- Revision and tutorials helped me to learn in this subject.
- Tutorials are excellent! They cater for a multitude of learning styles.
- Tutorial exercises provided in classes are very good for revision.
The lecturer/tutors (10%)
- Consult hours with tutor helped me learn most of the material.
- I think the lecturer should be able to do their own slides and not read from someone else.
- Don't believe I learnt anything from the lecturers in this subject as it was far too advanced for my ability. I learnt the content from a primary school teacher.
Lectures (5%)
- The online lectures helped me most, as I could do it at my own pace.
- The lectures were somewhat helpful although the majority of the time the class needed substantial clarification from the tutor.
From a different perspective, four tutors provided their comments regarding the subject design for MATH131, as stated below:
- Students frustrated at lecture presentation problems.
- None that come to mind apart from students not having notes prior to lecture so the questions they asked in tutorials were difficult to understand as it was unclear what they understood. They had no opportunity to prepare/read; this seems contrary to desire for students to be self-learners.
- Lack of time during tutorials; quite a few students had e-learning access problems at one time or another.
- Statistics topic was always taught on Thursday. Therefore, Tuesday classes had to learn the week before this topic. The order of teaching should have been swapped on alternating weeks.
7.4.6 Student perceived confidence in the subject
To improve the teaching and learning of mathematics, it is useful for both the lecturer and the students to recognise the topics where additional support is most needed. These areas can be considered high priorities in further teaching developments. Through the surveys, the students were asked to indicate how confident they were in relation to the different topics covered in the subjects. In the context of this thesis, those topics ranked by 85% or more of the students as "moderately confident" or "could do this" were considered areas of high competency in student learning. In MATH131, three of the fifteen topics were ranked by more than 85% of the students as topics in which they were confident: addition (94%), multiplication (91%), and subtraction (90%). The students also reported comfort with two other topics, division (81%) and e/numeration (75%). Sometimes a lack of confidence can occur with the most recently taught topics. A further examination of the sequence of topics should indicate whether this was a factor impacting on perceived competency for these topics (see Table 7.11). It appeared that two topics, rationals and modular arithmetic, were ranked lowest in terms of students reporting confidence; these were taught after mid-session, from week 7 until week 13. The statistics topics were integrated into the subject throughout the session beginning in week 2 (see the subject content timeline in Appendix 7.2); however, only 61% of the students were confident with these topics.
Table 7.11 Student perceived confidence on topics and sequence of topics taught in mathematics subjects across sessions

MATH131
                                       A 2010 (n = 155)
Week     Topic                         %b      Rankc
3        Addition                      93.6    1
3-4      Multiplication                91.4    2
5        Subtraction                   90.0    3
6        Division                      81.2    4
2        E/Numeration                  75.4    5
1        Problem solving processes     70.5    6
8        Decimals                      67.2    7
4        Inverse processes             66.0    8
9        Real numbers                  61.6    9
12       Algebra                       61.0    10
2-13     Statistics                    60.9    11
9-11     Number theory                 60.2    12
7        Integers                      57.5    13
7-8      Rationals                     50.4    14
13       Modular arithmetic            49.6    15

MATH132
                                       S 2009 (n = 23)     S 2010 (n = 60)
Week     Topic                         %b      Rankc       %b      Rankc
4-6      Measurement                   81.8    1           80.0    1
1-3      2D Geometry                   73.9    2.5         78.3    2
3-4      3D Geometry                   73.9    2.5         76.7    3
5-6      Education & ethics            69.5    4           55      4
9-11     Functions                     52.1    5           33.3    9
9        Sampling distributions        43.4    6.5         40      6
7        Variation                     43.4    6.5         35      7.5
7-8      Probability                   39.1    8           43.3    5
11       Modelling                     34.8    9           35      7.5
12-13    Inferences & interpretation   34.7    10          26.7    10

a A = March/Autumn, S = August/Spring
b Percentages of students responding as moderately confident or could do this (valid per cent)
c Ranked by percentage reporting confident with topic
In MATH132, the findings revealed that none of the topics were ranked by more than 85% of the students as topics in which they were confident, in either 2009 or 2010 (refer to Table 7.11). However, approximately 80% of the students in both 2009 (82%) and 2010 (80%) reported comfort with the measurement topic. In 2010, the students also reported confidence in the 2D geometry (78%) and 3D geometry (77%) topics. As with MATH131, none of these were statistics topics. An examination of the sequence of topics indicated that all statistics topics (i.e. sampling distributions, variation, probability, modelling, inferences and interpretation) were taught post mid-session (commencing in week 7). This was possibly a reason for the lack of confidence among the students in those topics, in addition to students' difficulty with statistics content. This also suggested the need to provide more or better resources to students in the subject, such as video resources, lectures, and additional worked examples on later topics (post mid-session). An analysis of topics also revealed that the students were more confident in the mathematics topics (except functions) than in statistics. Overall, three topics were perceived as areas of competence by the students in MATH131 when the fortnightly learning design maps were available in the e-learning site. However, in MATH132 there appeared to be no association between providing the maps and student perceived
confidence, as none of the topics was reported as an area of competence by the students in either 2009 (no maps) or 2010 (with maps). For further examination, a non-parametric test was used to compare the differences in proportions of students who perceived themselves as confident in topics between the cohorts in MATH132. As in the previous section, a Mann-Whitney U test was carried out due to the small sample size and the violation of normality assumptions for the 2009 cohort. As shown in Table 7.12, two topics were identified as significantly different: measurement and functions. In particular, the proportion of students who considered themselves confident in these topics was significantly higher in 2009 than in 2010 (refer to Table 7.12). These differences possibly reflected different lecturers teaching the subject, with different teaching experience and approaches, although no significant difference was reported in the value of lectures between the two cohorts (refer to Table 7.8). The results also indicated that the provision of the maps to the students in MATH132 was not associated with improved perceptions of comfort with topics in the mathematics subjects.
Table 7.12 Differences in perceived confidence on topics in MATH132a

                MATH132 August/Spring 2009           MATH132 August/Spring 2010           Mann-Whitney U test
Topics          Mean Rank   Sum Rank   n    %b       Mean Rank   Sum Rank   n    %b       U       p valuec
Measurement     49.22       1132.0     23   81.8     38.49       2271.0     59   80.0     501.0   0.043
Functions       50.43       1160.0     23   52.1     38.77       2326.0     60   33.3     496.0   0.035

a Scale: 1 = not at all, 2 = might have a little difficulty, 3 = moderately confident, 4 = could do this
b Percentages of students responding as moderately confident or confident
c Significant difference at the 0.05 level
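The comparison in Table 7.12 rests on a Mann-Whitney U test applied to the underlying 1–4 confidence ratings. A minimal sketch of that test is given below; the ratings in the sketch are hypothetical illustrations rather than the survey data, and only the form of the call is intended to match the analysis described.

```python
# Minimal sketch of the Mann-Whitney U comparison reported in Table 7.12,
# using hypothetical 1-4 confidence ratings (not the actual survey data).
from scipy.stats import mannwhitneyu

# Scale: 1 = not at all ... 4 = could do this (illustrative values only)
ratings_2009 = [4, 3, 4, 2, 3, 4, 3, 4, 2, 4, 3, 3]   # MATH132 S2009 cohort (hypothetical)
ratings_2010 = [2, 3, 1, 2, 3, 2, 4, 2, 1, 3, 2, 2]   # MATH132 S2010 cohort (hypothetical)

# A two-sided rank-based test is appropriate when samples are small and
# normality cannot be assumed, as argued in the text.
u_stat, p_value = mannwhitneyu(ratings_2009, ratings_2010, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```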
Another perspective on confidence is provided by asking the students how much they perceived they had learned. At the end of the session, nearly half of the students (46.5%) in MATH131 reported that they had learned a great deal in the subject, and 46.5% reported limited success in coming to terms with the subject materials (refer to Figure 7.13). In MATH132, a high proportion of the students (83%) believed that they had learned a great deal in the subject. Given the limitations noted previously, a Fisher's exact test was used to compare the differences in proportions between the two cohorts. It was found that the proportion of students who believed that they had learned a great deal in the subject was significantly higher (p = 0.003) in MATH132 (83%) than in MATH131 (46.5%). Correspondingly, the proportion of students who thought they had limited success in the subject was significantly higher (p = 0.002) in MATH131 (46.5%) than in MATH132 (13%). This indicates that the students believed they had achieved greater success in mathematical learning in the follow-on subject, MATH132, than in the preceding subject, MATH131. Possibly the MATH131 students had greater familiarity with its content than with the higher level mathematics of MATH132 and, for this reason, did not perceive that they had learned much content in MATH131.
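Fisher's exact test operates on a 2 × 2 table of counts. The sketch below illustrates the comparison of the "learned a great deal" proportions; the cell counts are rough reconstructions from the reported percentages and sample sizes, used for illustration only.

```python
# Minimal sketch of the Fisher's exact comparison of "learned a great deal"
# proportions; cell counts are illustrative reconstructions, not exact data.
from scipy.stats import fisher_exact

#           learned a great deal | did not
table = [[72, 155 - 72],   # MATH131 2010: roughly 46.5% of 155 respondents (assumed counts)
         [50,  60 - 50]]   # MATH132 2010: roughly 83% of 60 respondents (assumed counts)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```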
Figure 7.13 Student perceived belief in mathematical learning in mathematics subjects between sessions

However, the results in Table 7.13 show that more than half of the students in both MATH131 and MATH132 reported no improvement in their perception of the subjects' relevance to their lives, even though they were intending to be primary teachers. Furthermore, an analysis of differences in two proportions showed that the proportion of students who perceived the subjects as less relevant to their lives increased significantly (Z = 2.58, p = 0.005) in MATH132 (19%) after they had completed MATH131 (7%). This was also associated with the lack of confidence with topics reported by the students in MATH132, as noted in the earlier findings. The students did not suggest that the content was inappropriate for primary teachers; rather, much of it was seen as high school mathematics.
Table 7.13 Percentage of students reporting perception of subject relevance in mathematics subjects across sessions

Compared to when I started the subject, I now view…                          MATH131 March/Autumn 2010 (n = 155)   MATH132 August/Spring 2010 (n = 60)
Mathematics and statistics as less relevant to my life and/or
anticipated profession                                                       6.8                                    18.6
About the same relevance as when I commenced the subject                     54.1                                   53.5
Mathematics and statistics as more relevant to my life and/or
anticipated profession                                                       39                                     27.9
7.4.7 The use of the fortnightly learning design maps

In the MATH132 survey, the students in 2010 were asked about their perceptions of the usefulness of the fortnightly learning design maps provided in the e-learning site. Usefulness was defined in terms of helping them to learn and understand the subject. Of the 60 responses, 41% of the students regarded the maps as useful in helping them learn and understand the subject, 17% reported that the maps were time consuming for them, and the remaining 42% did not use the fortnightly learning design maps. The students also provided comments on their experience using the maps. Of the 27 responses, two thirds were positive with regard to their use in the subject. Some comments were as follows:
I was being able to access assessments and quizzes through them.
Bit by bit learning was helpful, able to digest easily.
Everything was set out clearly and was easy to view each and its outline.
It was good to see all the worked laid out so we knew what had to be completed.
It was useful and easy to access the relevant quizzes from there.
It kept me practicing throughout the term and easily finding the resources.
The fortnight summary sheet was the place to start and answered that I was on track.
It was a good reminder about the quizzes and revision.

However, 15% of the students commented negatively on their use, while the remaining students reported that they either did not use the maps or were not aware of them. Furthermore, the students were asked about the benefits they gained from using the maps. More than half of the students indicated that they used the maps to link or match their work to relevant resources (70%), as a reference guide (50%), for organizing their work and learning materials (74%), for revision purposes (85%), and as a study checklist (66%). In order to recognize the impact of the fortnightly learning design maps in supporting the teaching of the subject, the tutors were also asked to comment on their use. Four out of five tutors indicated that they favoured the maps, as the comments below indicate:
Maps helped with locating resources on e-learning sites quickly. It was good having everything together for the fortnight. Can't think of anything I didn't like about it – except occasionally not being able to remember which week a topic was in. The "by concept" need to be available too.
I liked the fortnightly maps. It made it clearer for me. However, I suspect a very few students used it.
I think the work was well set out. The content flowed well.
Fortnightly content is good.

The tracking of students' activities, in terms of the proportion of accesses to resources in the e-learning system, was done in week 12 for the two cohorts:
i. MATH132 August/Spring 2010: from 18th October until 24th October 2010, and
ii. MATH131 March/Autumn 2011: from 24th May until 30th May 2011,
between 8.00 am (tracking started) and 12.00 pm (tracking ended). An analysis of data from the e-learning student tracking statistics in MATH131 revealed that, of the 1851 accesses to resources, almost half (47%, n = 862) were made using the fortnightly learning design maps while the remaining 53% (n = 989) used the by-type resource folders organized according to categories of resources (refer to Table 7.14). Of the 162 students, 28% (n = 46) had used both the by-type resource folders and the fortnightly learning design maps to access resources, 21% (n = 34) used only the fortnightly learning design maps, and 51% (n = 82) used only the by-type resource folders provided in the e-learning site (refer to Table 7.15). This suggests a lack of use of the fortnightly learning design maps among students, and possibly that those who responded to the survey were mostly those who had used the maps.
Table 7.14 Proportion of accesses to resources in the e-learning site in week 12 between cohorts in mathematics subjects

Proportion of accesses to resources via…     MATH131 March/Autumn 2011 (n = 1851)   MATH132 August/Spring 2010 (n = 836)
Fortnightly learning design maps             47%                                     27%
By-type resource folders                     53%                                     73%
Table 7.15 Choices of student access to resources in the e-learning site in week 12 between cohorts in mathematics subjects

Choice of access to resources                 MATH131 March/Autumn 2011 (n = 162)   MATH132 August/Spring 2010 (n = 96)
Fortnightly learning design maps              21%                                    30%
By-type resource folders                      51%                                    57%
Both the maps and by-type resource folders    28%                                    13%
In MATH132, the e-learning student tracking results revealed that, of the 836 accesses to resources, 27% (n = 228) were made using the fortnightly learning design maps while the remaining 73% (n = 608) used the by-type resource folders in the subject's e-learning site (refer to Table 7.14). Of the 96 students, 13% (n = 12) had used both the by-type resource folders and the fortnightly learning design maps to access resources, 30% (n = 29) used only the fortnightly learning design maps, and 57% (n = 55) used only the by-type resource folders provided in the e-learning site. A test for differences in two proportions revealed that the proportion of accesses to resources using the fortnightly learning design maps in MATH131 (47%) was significantly higher (Z = 9.43, p < 0.0001) than in MATH132 (27%). This also suggests that the improved fortnightly learning design maps implemented in MATH131 (using HTML files in addition to PDF files) worked better than the PDF-only maps used in MATH132.
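The "test for differences in two proportions" used here and elsewhere in this chapter appears to be the standard pooled two-proportion z-test; under that assumption, the reported value Z = 9.43 is recovered from the tracking counts (862 of 1851 accesses via the maps in MATH131 versus 228 of 836 in MATH132) as follows.

\[
\hat p_1 = \frac{862}{1851} \approx 0.466, \qquad
\hat p_2 = \frac{228}{836} \approx 0.273, \qquad
\hat p = \frac{862 + 228}{1851 + 836} \approx 0.406,
\]
\[
Z = \frac{\hat p_1 - \hat p_2}{\sqrt{\hat p\,(1 - \hat p)\left(\dfrac{1}{1851} + \dfrac{1}{836}\right)}}
  \approx \frac{0.193}{0.0205} \approx 9.4 .
\]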
7.4.8 Effectiveness: dimensions of high quality learning

As discussed earlier in Chapter 6 (section 6.2.5.3), it is useful to examine the impact of subject design on student learning based on Boud and Prosser's four principles of high quality learning in higher education. As with STAT131, the students in the mathematics subjects were asked to indicate their perceptions of the subject design according to several aspects representing dimensions of the four principle areas: engaging students, acknowledging the learning context, challenging students, and providing practice. Each dimension was rated by the students on a scale: 1 = not at all, 2 = a little, 3 = moderate, 4 = much, and 5 = very much. To examine whether there was a difference in students' perceptions of the subject design according to the four principle areas between the two subjects (MATH131 and MATH132), a test for differences in two proportions was used for the analyses in the following sections.
7.4.8.1 Student engagement
The first key area highlights the importance of subject design in engaging students within their learning context. This includes the design of learning materials and activities which relate to the students' prior experiences of studying and their present goals for learning, and which engage them affectively with the materials they are studying. There were six dimensions of student engagement included in the surveys (refer to Table 7.16). Based on a sample of 155 students, an analysis of internal consistency of the six dimensions of engagement produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.83. The findings revealed that two of the six dimensions had a significantly higher percentage endorsement for MATH132 than for MATH131. These were the subjects' activities which enabled students to access key concepts in many ways (Z = 2.29, p = 0.01) and which connected to their prior experiences (Z = 3.20, p < 0.001). The results demonstrated a difference in students' perceptions of those two dimensions of engagement from MATH131 to the follow-on subject MATH132. The MATH131 endorsements of these dimensions were among the lowest across all dimensions in both subjects.

Table 7.16 Students' perceptions on six dimensions of engagement in mathematics subjects
                                                       MATH131 March/Autumn 2010 (n = 155)    MATH132 August/Spring 2010 (n = 60)
Dimensions                                             %a      Mean    S.D.b                   %a      Mean    S.D.b
Opportunities for peer interaction and feedback        63.1    2.78    0.89                    73.4    3.82    5.51
Supports reflection and consolidation of
mathematical ideas                                     59.6    2.74    0.91                    71.2    2.90    0.96
Enables accessing key concepts in many ways            59.5c   2.68    0.82                    76.2c   3.82    5.51
Allows student control of learning                     58.1    2.62    0.95                    60.1    2.80    0.95
Engages students affectively                           52.2    2.54    0.87                    59.4    2.61    0.93
Uses prior experiences                                 46.3c   2.44    0.86                    70.6c   2.95    1.07

a Percentages of students responding as moderate to very much
b Standard deviation
c Significant difference at the 0.05 level
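The internal-consistency figures quoted in this and the following sections are Cronbach's alpha coefficients. A minimal sketch of the calculation is shown below on a hypothetical matrix of 1–5 ratings; the data, sample size, and resulting value are illustrative assumptions only.

```python
# Minimal sketch of Cronbach's alpha,
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
# applied to hypothetical 1-5 ratings (not the actual survey responses).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# 155 hypothetical respondents rating six engagement dimensions on a 1-5 scale
base = rng.integers(1, 6, size=(155, 1))         # a common "attitude" per respondent
noise = rng.integers(-1, 2, size=(155, 6))       # small item-level variation
ratings = np.clip(base + noise, 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```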
7.4.8.2 Acknowledge the learning context
The second key area highlights the importance of subject design in acknowledging the learning context from the students' perspectives. This includes the design of learning materials and activities which are part of the application of the knowledge being learned. These are activities that help students to see how the current learning can be drawn upon in various contexts and situations beyond the ones given, that match the assessment to the learning outcomes, and that consider cultural assumptions. There were four dimensions of acknowledging the learning context included in the surveys (refer to Table 7.17). Based on a sample of 155 students, an analysis of internal consistency of the four dimensions of acknowledging the learning context produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.78. The findings showed similar student perceptions of the four dimensions in MATH131 and the follow-on subject MATH132. There were two of the four dimensions for which between 63% and 73% of the students in both subjects reported higher quality with respect to acknowledging the learning context: (i) matching the assessment to learning outcomes, and (ii) linking to the mathematics and statistics fields.
Table 7.17 Students' perceptions on four dimensions of acknowledging the learning context in mathematics subjects

                                                   MATH131 March/Autumn 2010 (n = 155)    MATH132 August/Spring 2010 (n = 60)
Dimensions                                         %a      Mean    S.D.b                   %a      Mean    S.D.b
Matches assessment to outcomes                     73.0    2.91    0.82                    70.2    3.05    1.06
Links to the mathematics and statistics fields     63.0    2.62    0.85                    72.5    2.98    0.89
Links to broader discipline context                56.3    2.61    0.77                    56.9    2.64    0.87
Support for multiple cultures and diversity        55.6    2.55    0.91                    44.7    2.48    0.98

a Percentages of students responding as moderate to very much
b Standard deviation
7.4.8.3 Challenge students
The third key area highlights the importance of subject design in challenging students within their learning context. This includes the design of learning materials and activities that encourage students to participate actively, to use the support and stimulation of other students, to apply a critical approach to the materials, and to go beyond the knowledge or resources provided to them. There were four dimensions of challenging the students included in the surveys (refer to Table 7.18). Based on a sample of 155 students, an analysis of internal consistency of the four dimensions of challenging the students produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.79. The findings revealed that two of the four dimensions had a significantly higher percentage endorsement for MATH132 than for MATH131. These were the subjects' activities which highlighted the limits of students' knowledge base (Z = 1.97, p = 0.024) and which encouraged students to be self-critical at various points in the learning process (Z = 3.06, p = 0.001). The results demonstrated a difference in students' perceptions of those two dimensions of challenging the students from MATH131 to the follow-on subject MATH132. The proportions of students in MATH131 who reported higher quality with respect to challenging students in the subject ranged from 55% for equipping students to plan other learning activities to 76% for highlighting the limits of students' knowledge base. The corresponding low and high for MATH132, for the same dimensions, were 63% and 88%, respectively. The MATH132 responses with respect to challenging the students were the highest across all dimensions and both subjects.
Table 7.18 Students' perceptions on four dimensions of challenging students in mathematics subjects

                                                    MATH131 March/Autumn 2010 (n = 155)    MATH132 August/Spring 2010 (n = 60)
Dimensions                                          %a      Mean    S.D.b                   %a      Mean    S.D.b
Highlight the limits of students' knowledge base    76.2c   3.06    0.87                    88.3c   3.55    0.99
Question students' knowledge base                   75.2    3.00    0.87                    78.3    3.37    1.09
Encourages self-criticism                           63.5c   2.85    0.96                    84.9c   3.42    0.98
Equips students to plan other learning activities   54.8    2.56    0.89                    63.3    2.78    1.01

a Percentages of students responding as moderate to very much
b Standard deviation
c Significant difference at the 0.05 level
7.4.8.4 Provide practice
The fourth key area highlights the importance of subject design in providing practice within the student's learning context. This includes the design of learning materials and activities that assist students to demonstrate their learning changes and understandings, help them to better appreciate the criteria and standards being applied to their learning, develop their confidence, and assist them to experience coherence between aims and goals, learning tasks, and assessment of outcomes. There were four dimensions of providing practice included in the surveys (refer to Table 7.19). Based on a sample of 155 students, an analysis of internal consistency of the four dimensions of providing practice produced an acceptable level of reliability, a Cronbach's alpha coefficient of 0.86. It was found that two of the four dimensions had a significantly higher percentage endorsement for MATH132 than for MATH131. These were the subjects' activities which encouraged the students to communicate and demonstrate their learning (Z = 2.50, p = 0.006) and which encouraged self-confidence through practice (Z = 1.80, p = 0.04). The results demonstrated a difference in students' perceptions of those two dimensions of providing practice from MATH131 to the follow-on subject MATH132. About half of the students in MATH131 reported higher quality with respect to providing practice in the subject, ranging from 52% for developing confidence through practice and 55% for encouraging students to communicate and demonstrate their learning, to 56% for modelling the expected performance. The corresponding proportions for the same dimensions in MATH132 were 66%, 74%, and 69%, respectively.
Table 7.19 Students' perceptions on four dimensions of providing practice in mathematics subjects

                                                      MATH131 March/Autumn 2010 (n = 155)    MATH132 August/Spring 2010 (n = 60)
Dimensions                                            %a      Mean    S.D.b                   %a      Mean    S.D.b
Equips students to learn appropriately                62.5    2.68    0.83                    73.8    3.04    1.05
Models expected performance                           56.3    2.54    0.81                    68.9    2.95    1.07
Encourages student communication and demonstration    55.1c   2.63    0.83                    73.7c   3.09    1.09
Encourages student confidence and practice            51.9c   2.57    0.87                    65.5c   2.76    1.09

a Percentages of students responding as moderate to very much
b Standard deviation
c Significant difference at the 0.05 level
7.4.9 The use of online learning resources

Data from the e-learning student tracking statistics were used to examine the usage of the various online resources provided in the e-learning system for student learning in both subjects. These tracking statistics were observed from week 1 to week 13 throughout the session in each subject. The date and time ranges of the reports were set as follows:
MATH131: from 1st March to 4th June in March/Autumn 2010, and from 28th February to 3rd June in March/Autumn 2011;
MATH132: from 27th July to 30th October in August/Spring 2009, and from 26th July to 29th October in August/Spring 2010,
between 8.00 am (tracking started) and 12.00 am (tracking ended). To compare the students' usage of the online resources provided in the e-learning system between the two cohorts for each subject, an independent t-test on the e-learning tracking statistics was used for the following analyses. In MATH131, the statistics shown in Table 7.20 demonstrate that, on average, the students in 2010 had a significantly higher mean number of accesses to online resources than in 2011, including the number of files viewed (t466.122 = 2.777, p = 0.006 with unequal variances assumed, F = 4.088, p = 0.044), the number of web links accessed (t304.867 = 3.776, p < 0.001 with unequal variances assumed, F = 5.885, p = 0.016), and the number of online assessments completed (t439.998 = 2.540, p = 0.011 with unequal variances assumed, F = 19.824, p < 0.001).
Table 7.20 Descriptive statistics of e-learning student tracking between week 1 and week 13 in mathematics subjects across sessionsa

                                           MATH131 A 2010b   MATH131 A 2011b   MATH132 S 2009b   MATH132 S 2010b
Sessions                           n       246               256               95                189
                                   Mean    36.84             35.95             30.51             32.44
                                   S.D.    26.23             34.52             18.39             22.87
Total time spentd                  n       246               256               95                189
                                   Mean    10:10:49          08:55:34          08:18:02          08:15:51
                                   S.D.    09:55:57          08:11:19          07:48:36          08:22:25
Read messages                      n       246               256               46                189
                                   Mean    16.66e            20.64g            38.50e,f,g        10.01f
                                   S.D.    25.80             43.49             85.16             20.40
Files viewed                       n       240               256               93                176
                                   Mean    117.48h,i,j       98.25j            94.39h            85.08i
                                   S.D.    83.54             69.57             56.37             53.22
Web links accessed                 n       167               256               55                72
                                   Mean    9.22k             5.07k,l           6.44              9.71l
                                   S.D.    11.85             9.70              9.06              14.05
Online assessments completedc      n       246               256               95                189
                                   Mean    73.18m            52.90m            47.74             52.45
                                   S.D.    102.83            72.85             52.45             84.61
Time spent on online assessmentd   n       246               213               95                189
                                   Mean    18:26:47          17:13:12          14:01:33          13:19:02
                                   S.D.    09:39:21          14:26:22          00:25:09          00:01:22

a A = March/Autumn, S = August/Spring
b The figures differ from the actual number of students enrolled in the subject as tracking was observed before the final exam
c Weekly online practice quizzes
d Total time spent in hours:minutes:seconds
e,f,g,h,i,j,k,l,m The mean difference is significant at the 0.05 level
In MATH132, an analysis of the e-learning tracking statistics revealed that the average number of messages read via the student forum by the students in 2009 (t46.264 = 2.254, p = 0.029 with unequal variances assumed, F = 25.035, p < 0.001) was significantly higher than in 2010 (Table 7.20). However, no significant differences were found for the other types of resources between the cohorts. Overall, a one-way analysis of variance (ANOVA) on the mean usage of online learning resources by the students from 2009 to 2011 revealed significant differences between the cohorts across sessions, including the number of messages read via the student forum (F3,733 = 7.814, p < 0.001), the number of files viewed (F3,761 = 7.960, p < 0.001), the number of web links accessed (F3,546 = 6.426, p < 0.001), and the number of online assessments completed (F3,782 = 3.352, p = 0.019) (refer to Table 7.20). Reflecting upon the implementation of the fortnightly learning design maps in the subjects, the maps did not necessarily increase the use of online learning resources among the students, with most preferring to use the by-type resource folders instead. As noted in section 7.4.7, a little over half of the students in both MATH132 August/Spring 2010 (57%) and MATH131 March/Autumn 2011 (51%) used the by-type resource folders to access resources in the e-learning site. Data from the e-learning tracking statistics demonstrated that the proportion of accesses to online resources via the fortnightly learning design maps was somewhat lower than the proportion via the by-type resource folders. However, an analysis of differences in two proportions revealed that the proportion of accesses via the maps was higher (Z = 9.43, p < 0.0001) in MATH131 (47%) than in MATH132 (27%).
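The t-tests flagged as "with unequal variances assumed" correspond to Welch's form of the independent t-test, and the overall comparison across the four cohorts is a one-way ANOVA. A minimal sketch of both, applied to hypothetical per-student access counts rather than the actual tracking data, is given below.

```python
# Minimal sketch of the Welch t-test and one-way ANOVA applied to e-learning
# access counts; the counts below are hypothetical, generated only to
# illustrate the form of the tests.
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(1)
files_m131_2010 = rng.poisson(117, size=240)   # illustrative "files viewed" counts
files_m131_2011 = rng.poisson(98, size=256)
files_m132_2009 = rng.poisson(94, size=93)
files_m132_2010 = rng.poisson(85, size=176)

# Welch's t-test: equal_var=False is the "unequal variances assumed" form
t_stat, p_val = ttest_ind(files_m131_2010, files_m131_2011, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_val:.3f}")

# One-way ANOVA across the four cohorts
f_stat, p_anova = f_oneway(files_m131_2010, files_m131_2011,
                           files_m132_2009, files_m132_2010)
print(f"F = {f_stat:.3f}, p = {p_anova:.3f}")
```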
7.4.10 Assessment for learning

As noted earlier in section 7.1.1.4, there were five quizzes, five assignments, and a mid-session examination set in both MATH131 and MATH132. The three multiple choice questions set in each quiz were based on the materials or topics covered in the previous week. Similar topics were covered in the assignments; however, each assignment contained three long answer questions (see Appendix 7.3). The mid-session exam contained ten multiple choice questions and five long answer questions and was administered during week 8. Table 7.21 details the design of the assessment system in both subjects.
Table 7.21 Design of assessment system in mathematics subjects

Quiz 1 and Assignment 1 (due weeks 2 and 3)
  MATH131: Problem solving: K-up process, multiple solutions
  MATH132: Geometry: congruence, similarity, topological equivalence, mapa mundi, graph theory, Euler's formula; Geometry 2D: curves, polygons, classification
Quiz 2 and Assignment 2 (due weeks 4 and 5)
  MATH131: Multiplication: multiplication models, algorithm, multiplication in bases including algorithm; Inverse processes: general concept, role in maths
  MATH132: Geometry 2D: Pythagoras' theorem, coordinate geometry, Euclidean distance; Statistics: ethics/privacy
Quiz 3 and Assignment 3 (due weeks 6 and 7)
  MATH131: Division: formalisms, models, algorithm, division in bases; Integers: formalisms, familiar form, models
  MATH132: Measurement: general measurement concepts via area, temperature; search for positive measure; Statistics: exploratory
Mid-session exam (week 8)
  MATH131: All topics covered in week 1 until week 7
  MATH132: All topics covered in week 1 until week 7
Quiz 4 and Assignment 4 (due weeks 9 and 10)
  MATH131: Statistics: more spread, standard deviation, stem-and-leaf plots; Number theory: prime factorization
  MATH132: Probability: conditional probability, in/dependence; Statistics: models
Quiz 5 and Assignment 5 (due weeks 11 and 12)
  MATH131: Number theory: divisibility tests, patterns and arithmetic progressions; Statistics: transforming data, ATARa
  MATH132: Functions: graphs, polynomials; Statistics: interpreting literature

a Australian Tertiary Admission Rank
An analysis of assessment results was used to determine the changes in students' performances within each subject across sessions. For this purpose, the comparison of students' assessment marks involved two cohorts of each subject: MATH131 in 2010 and 2011, and MATH132 in 2009 and 2010.
7.4.10.1 MATH131 results
With similar resources, lecturers, and assessment system, the changes made for MATH131 lay in the learning design set up in the e-learning site: fortnightly learning design maps in PDF files in 2010, and improved fortnightly learning design maps in HTML files in addition to PDF files in 2011. To examine whether the changes in the fortnightly learning design maps impacted on student performances, a test for differences in mean assessment marks between cohorts was used. An independent t-test on the assessment in MATH131 revealed significant differences in the average student marks on several topics covered in the quizzes and assignments (each marked out of 3) (refer to Table 7.22). The students in 2011 achieved significantly higher mean marks than in 2010 on four main topics: division and integers, assessed in quiz 3 (t411 = 2.051, p = 0.041 with equal variances assumed, F = 2.040, p = 0.154), and divisibility tests, patterns and arithmetic progressions (number theory) and transforming data (statistics), both assessed in quiz 5 (t363.183 = 4.912, p < 0.001 with unequal variances assumed, F = 4.590, p = 0.033) and assignment 5 (t406 = 5.257, p < 0.001 with equal variances assumed, F = 2.124, p = 0.146). On the other hand, the students in 2010 had significantly higher mean marks than in 2011 on the topics of spread measures, standard deviation, stem-and-leaf plots (statistics) and prime factorization (number theory), assessed in quiz 4 (t410 = 2.472, p = 0.014 with equal variances assumed, F = 0.036, p = 0.849). However, there was no significant difference in the mean marks on other topics, such as problem solving, multiplication, and inverse processes, between the two cohorts. These results showed a statistically significant improvement in the mean assessment marks from 2010 to 2011 on two topics, number theory and statistics, with a mean change of 0.4. Furthermore, the students in 2011 demonstrated better understanding than those in 2010 of all topics covered in the first half of the session (week 1 until week 7), assessed in the mid-session exam (t425.329 = 2.627, p = 0.009 with unequal variances assumed, F = 4.199, p = 0.041).

Table 7.22 Comparison of students' assessment marks in MATH131 between cohorts
Assessment       Cohort   N     Mean    S.D.a    Mean differenceb   S.E.c
Quiz 1           2010     198   2.27    0.804    0.05               0.08
                 2011     211   2.32    0.775
Assignment 1     2010     212   2.54    0.614    0.03               0.05
                 2011     219   2.57    0.490
Quiz 2           2010     206   2.44    0.761    -0.15              0.08
                 2011     217   2.29    0.812
Assignment 2     2010     213   2.52    0.547    0.08               0.05
                 2011     220   2.60    0.557
Quiz 3e          2010     195   2.26    1.257    0.21               0.10
                 2011     218   2.47    0.745
Assignment 3     2010     209   2.58    0.525    0.08               0.05
                 2011     213   2.65    0.508
Mid-sessiond,e   2010     213   11.71   3.136    0.86               0.33
                 2011     221   12.57   3.692
Quiz 4e          2010     204   2.21    0.863    -0.22              0.09
                 2011     208   1.99    0.924
Assignment 4     2010     202   2.65    0.452    0.06               0.05
                 2011     212   2.71    0.477
Quiz 5e          2010     196   2.08    0.894    0.39               0.08
                 2011     202   2.47    0.677
Assignment 5e    2010     201   2.47    0.543    0.27               0.05
                 2011     207   2.74    0.490

a Standard deviation
b Difference in mean marks (2011 less 2010)
c Standard error difference
d Total of 20 marks
e Significant difference at the 0.05 level
An independent t-test on the total assessment marks (out of 40) and final exam marks (out of 100) in MATH131 revealed significant differences in the average student marks between the two cohorts (refer to Table 7.23). Overall, the students in 2011 outperformed those in 2010 both in their total assessment (t441 = 2.926, p = 0.004 with equal variances assumed, F = 0.018, p = 0.893) and in the final exam (t438 = 2.012, p = 0.045 with equal variances assumed, F = 0.856, p = 0.335).
Table 7.23 Comparison of students' total assessment and final exam marks in MATH131 between cohorts

Assessment          Cohort   N     Mean    S.D.a     Mean differenceb   S.E.c
Total assessmentd   2010     219   33.75   8.102     2.22               0.76
                    2011     224   35.97   7.889
Final examd         2010     219   66.72   14.714    2.99               1.49
                    2011     221   69.71   16.398

a Standard deviation
b Difference in mean marks (2011 less 2010)
c Standard error difference
d Significant difference at the 0.05 level
To identify whether there was a relationship between students' performance in the assessment (total of 40 marks) and the final exam (total of 100 marks), a bivariate analysis was carried out. As shown in Table 7.24, there were strong positive correlations between students' performances in the assessment and the final exam for both cohorts.
Table 7.24 Correlation between the total assessment and final exam marks in MATH131

                             N     Pearson correlation, r    p valuea
MATH131 March/Autumn 2010    219   0.921                     < 0.001
MATH131 March/Autumn 2011    221   0.940                     < 0.001

a Significant at the 0.01 level (two-tailed)
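The correlations in Table 7.24 are ordinary Pearson correlations between each student's total assessment mark and final exam mark. A minimal sketch follows; the simulated marks are illustrative only.

```python
# Minimal sketch of the Pearson correlation between total assessment marks
# (out of 40) and final marks (out of 100); the marks below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
assessment = np.clip(rng.normal(34, 8, size=219), 0, 40)               # out of 40
final = np.clip(2 * assessment + rng.normal(0, 6, size=219), 0, 100)   # out of 100

r, p_value = pearsonr(assessment, final)
print(f"r = {r:.3f}, p = {p_value:.3g}")
```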
In 2010, the students were asked in the survey about their perceptions of the subject's assessment system. Of the 132 responses, over two thirds (70%) indicated that the system was fair. On the other hand, the assessment system was regarded as unfair by approximately thirty per cent of the students for a variety of reasons: its format (including timing, the number of questions assessed, and difficulty level), the weight assigned to each quiz and assignment, the belief of some students that cheating might be occurring, and inconsistency in marking style. Some of their comments were as follows:
There is a lot of work put into the assignments for only 3% of your final mark.
Midseason exam had very lengthy questions and too little time to complete. A 50 minute exam should contain short or precise questions as too much time wasted reading instead of doing.
We all copy each other throughout. And then in exams we don't know how to do it. This was because we haven't been taught properly.
I think you should be able to use books in exam. I feel this way because in the real world you're not expected to know everything off the top of your head. You able to access the resources you need to assist you.
We already have to return our papers to be remarked, as assessment criteria were "different".

Overall, the positive changes in 2011, although moderate, possibly resulted from several factors, including improvement in the lecture presentations, more discussion in tutorials, and a better subject design in the e-learning system. As mentioned earlier in section 7.4.7, the e-learning tracking analysis in 2011 revealed an increase in the proportion of accesses to online resources via the fortnightly maps (in week 12) compared to 2010. The provision of the improved fortnightly learning design maps using both HTML and PDF files, along with the by-type resource folders in the e-learning site in 2011, perhaps helped students access resources more effectively and hence learn better than in 2010.
7.4.10.2 MATH132 results
In MATH132, resources were similar in 2009 and 2010, with the exception of different lecturers teaching the subject and changes to the learning designs within the e-learning site. In 2009 there were weekly folders, but in 2010 the fortnightly learning design maps in PDF files were introduced in the subject. As with MATH131, a test for differences in mean assessment marks between the cohorts was used to examine whether the provision of the fortnightly learning design maps impacted on student performances in the mathematics subject. In MATH132, an independent t-test revealed that the students in 2010 achieved significantly higher mean marks than in 2009 on four main topics (refer to Table 7.25): geometry and geometry 2D, assessed in assignment 1 (t246 = 2.864, p = 0.005 with equal variances assumed, F = 0.007, p = 0.935), and functions and interpreting literature (statistics), assessed in quiz 5 (t243 = 2.429, p = 0.016 with equal variances assumed, F = 3.521, p = 0.062). On the other hand, the students in 2009 had significantly higher mean marks than in 2010 on the topics of conditional probability, dependence/independence (probability) and models (statistics), assessed in assignment 4 (t221.601 = 4.469, p < 0.001 with unequal variances assumed, F = 14.045, p < 0.001). There was no significant difference in the mean marks on other topics, such as Pythagoras' theorem, coordinate geometry, Euclidean distance, ethics/privacy and exploratory (statistics), and measurement, between the two cohorts. The students in 2009 demonstrated better performance than those in 2010 in the mid-session exam (t257 = 2.833, p = 0.005 with equal variances assumed, F = 1.397, p = 0.238), with a mean difference of 1.2 over all the topics assessed. The slight changes in the average student marks from 2009 to 2010 on the four topics perhaps resulted from different lecturers (experts) teaching these topics, as recommended in the previous survey (as discussed in section 7.4.6). The gradual improvement was also possibly due to the subject no longer being new, and to the use of the fortnightly learning design maps provided in the e-learning site in 2010 compared with the weekly folders in 2009.

Table 7.25 Comparison of students' assessment marks in MATH132 between cohorts
Assessment       Cohort   N     Mean    S.D.a    Mean differenceb   S.E.c
Quiz 1           2009     89    2.16    0.878    -0.03              0.11
                 2010     159   2.13    0.848
Assignment 1e    2009     88    2.38    0.593    0.21               0.07
                 2010     160   2.59    0.539
Quiz 2           2009     89    2.37    0.884    0.03               0.11
                 2010     156   2.40    0.742
Assignment 2     2009     91    2.63    0.493    -0.12              0.07
                 2010     161   2.51    0.580
Quiz 3           2009     90    2.31    0.743    0.15               0.10
                 2010     159   2.46    0.734
Assignment 3     2009     89    2.54    0.592    -0.02              0.07
                 2010     164   2.52    0.548
Mid-sessiond,e   2009     91    13.43   2.897    -1.17              0.41
                 2010     168   12.26   3.287
Quiz 4           2009     84    1.79    0.917    0.21               0.12
                 2010     157   2.00    0.921
Assignment 4e    2009     85    2.31    0.601    -0.41              0.09
                 2010     160   1.90    0.836
Quiz 5e          2009     86    1.86    0.923    0.32               0.13
                 2010     159   2.18    0.978
Assignment 5     2009     80    2.18    0.810    -0.01              0.11
                 2010     152   2.17    0.804

a Standard deviation
b Difference in mean marks (2010 less 2009)
c Standard error difference
d Total of 20 marks
e Significant difference at the 0.05 level
Unlike MATH131, an independent t-test on the total assessment marks (out of 40) and final marks (out of 100) in MATH132 revealed no significant difference in the average student marks between the two cohorts (refer to Table 7.26).
Table 7.26 Comparison of students' total assessment and final exam marks in MATH132 between cohorts

Assessment         Cohort   N     Mean    S.D.a     Mean differenceb   S.E.c
Total assessment   2009     91    35.05   6.551     -1.39              0.94
                   2010     169   33.66   7.580
Final exam         2009     91    65.92   12.977    -1.94              1.75
                   2010     168   63.98   13.666

a Standard deviation
b Difference in mean marks (2010 less 2009)
c Standard error difference
A bivariate analysis was used to identify whether there was a relationship between students' performance in the assessment (total of 40 marks) and the final exam (total of 100 marks). The results shown in Table 7.27 revealed strong positive correlations between students' performances in the assessment and the final exam for both cohorts.
Table 7.27 Correlation between the total assessment and final exam marks in MATH132

                              N     Pearson correlation, r    p valuea
MATH132 August/Spring 2009    91    0.849                     < 0.001
MATH132 August/Spring 2010    168   0.858                     < 0.001

a Significant at the 0.01 level (two-tailed)
In 2010, the students were asked in the survey to indicate their perceptions of the subject's assessment system. Of the 55 responses, 75% preferred the current assessment system. However, the remaining 25% of the students regarded the system as unfair for reasons including timing, the number of questions assessed, difficulty level, the weight assigned to each quiz and assignment, and inconsistencies in marking. Some of their comments were as follows:
There were inconsistencies in marking the quizzes.
Assignments need hours of work to complete for 3% of subject marks. Concepts seem to be way beyond what is necessary.
Quizzes were much easier than mid-session exam and made studying difficult. The material covered is not always tested, too much irrelevant materials.
I find the mid-session exam very difficult. The practice quizzes misleading, they always to be less difficult.
Assignment has a lot of work for 3 marks.
The assignment questions were often worded misleading and it was not clear what was expected.

Overall, there appeared to be no change in student performance in either the assessment or the final exam between the two cohorts. While there were changes in the subject design, particularly in the e-learning system in 2010 compared to 2009, the lack of use of the fortnightly learning design maps introduced in 2010 suggested that they were less successful in supporting student learning (refer to section 7.4.7). However, as noted earlier, there was an increase in the proportion of accesses to online resources via the fortnightly learning design maps during week 12 among students in 2011 (in MATH131) compared to 2010. Perhaps students' choice of access to resources in the e-learning system, either via the by-type resource folders or via the fortnightly learning design maps, depends on the user-friendliness of the interface; for instance, the improved fortnightly learning design maps using HTML files are possibly better than the PDF files.
7.4.11 Failure rates, pass rates and final outcomes

Data on the final grades, including failure and pass rates, in MATH131 and MATH132 were used to examine the final learning outcomes as a possible consequence of the changes made in the subject design, particularly in the e-learning system, between 2009 and 2011. Over the period from 2009 to 2011, the failure rates in both subjects were consistently under 10%, ranging from 1% to 6% (refer to Table 7.28). The two subjects were not expected to have the same grade distribution, as dropouts (e.g. fail, incomplete, medical reasons) often occurred in the first subject (MATH131), when the students were new to university, usually leaving the students who could survive to continue into the next subject (MATH132). For instance, in 2009 there were 106 students enrolled in MATH131, leaving 91 students enrolled in MATH132 (see Table 7.28).
Table 7.28 Pass rates for the years 2009 to 2011 in MATH131 and MATH132

Sessiona          Nb     Fail   PCc & Pass   Credit   Distinction   High Distinction
MATH131 A 2009    106    5.7    18.8         27.4     24.5          23.6
MATH131 A 2010    219    4.0    34.8         29.7     19.6          11.9
MATH131 A 2011    222    5.9    24.8         29.7     24.3          15.3
MATH132 S 2009    91     1.0    47.3         22.0     19.8          9.9
MATH132 S 2010    168    2.4    46.4         25.6     17.9          7.7

a A = March/Autumn; S = August/Spring
b Number of students enrolled
c Pass Conceded
(Data extracted from SMP, University of Wollongong. Figures might change slightly at different dates as students were retrospectively withdrawn without penalty from the subject.)
In MATH131, the failure rates were similar for the three cohorts between 2009 and 2011, ranging from 4% to 6%. There were changes made in the subject design: in 2009 the weekly folders were available in the e-learning site, but in 2010 different lecturers taught the subject and the fortnightly learning design maps in PDF files were introduced to students. An analysis of differences in proportions of student final outcomes revealed that the proportion of students attaining top grades (High Distinction and Distinction) was significantly higher (Z = 2.91, p = 0.0019) in 2009 (48%) than in 2010 (32%) (refer to Table 7.28). That is, the provision of the maps in PDF files was less successful in improving student outcomes in 2010. A refinement to the improved fortnightly learning design maps, in HTML files in addition to PDF files, was undertaken in 2011. Consequently, the proportion of students attaining the top grades in 2011 (40%) increased significantly (Z = 1.78, p = 0.037) by 8% compared to 2010; however, there was no significant difference compared to 2009 (48%), when the weekly folders were made available. Furthermore, with the exception of two quizzes (first and second) and four assignments (first to fourth), an independent t-test on the overall assessment results suggests that students in 2011 outperformed students in 2010, particularly in their total assessment (t441 = 2.926, p = 0.004 with equal variances assumed, F = 0.018, p = 0.893) and final exam marks (t438 = 2.012, p = 0.045 with equal variances assumed, F = 0.856, p = 0.335) (refer to Figure 7.14). Overall, this suggests that there was improvement in student performances when the improved fortnightly learning design maps in HTML files were made available in 2011.
Figure 7.14 Student performances on assessment tasks in MATH131
Note: Q = Quiz; A = Assignment; Mid = Mid-session; Final = Final exam. Assessment results in 2009 (except final grades) were not available for this thesis.
Meanwhile, in MATH132, the failure rate was acceptable at approximately 2% for both cohorts, in 2009 and 2010. There were changes made in the subject design, from weekly folders accessible in the e-learning site in 2009 to the provision of the fortnightly learning design maps in PDF files, with a change of lecturers, in 2010. It appeared that the distribution of student final grades was similar between the 2009 and 2010 cohorts (refer to Table 7.28). An independent t-test on the overall assessment results also revealed that no significant difference was evident in either the total assessment or the final marks between cohorts. Overall, the results suggest that there was no significant improvement in student performances between 2009 (weekly folders) and 2010 (with maps in PDF files). However, students in 2010 outperformed students in 2009 in two assessments: the fifth quiz (t243 = 2.429, p = 0.016 with equal variances assumed, F = 3.521, p = 0.062) and the first assignment (t246 = 2.864, p = 0.005 with equal variances assumed, F = 0.007, p = 0.935) (refer to Figure 7.15). These small changes were possibly due to the different lecturers who taught in 2010; however, there was no significant difference in the value of lectures in 2010 compared to 2009 (refer to Table 7.8).
Figure 7.15 Student performances on assessment tasks in MATH132
Note: Q = Quiz; A = Assignment; Mid = Mid-session; Final = Final exam
7.4.12 Improving the subjects

Following the changes in the subject design, with the provision of the fortnightly learning design maps (using PDF files) in the e-learning system in both MATH131 March/Autumn 2010 and MATH132 August/Spring 2010, the students were asked to provide recommendations on "How best can the subject design be improved?". Overall, based on students' responses in both cohorts, three major areas appeared to be in need of improvement, as follows:
changing the subject content to focus less on higher level mathematics and statistics topics and to include a lesson on how to use calculators appropriately in the subjects,
increasing the contact hours for tutorials, with more interaction or discussion between students and tutors, and
improving the lecture presentations, including lecture notes, timing, more explanations and working out of examples on a whiteboard, and in-class interaction.
7.4.12.1 Improvement in MATH131

Given the relatively low percentage of the 155 responding students (68%) who found the lectures useful to their learning and understanding, it was not surprising that fifty per cent of the 75 responses suggested improving the lecture components. Improvements suggested included the style of presentation in class, notes provided to students before lectures, more explanation of mathematical concepts, and working out examples with solutions on a whiteboard (hands on) rather than reading the lecture slides. Examples of comments were as follows:
Maybe have a way of the lecturer working some solutions and examples on a board.
Lecture slides available before lectures. Lecturer needs to stop wasting time on the trivial parts of lecture because it makes us run out of time.
Access to lecture slides before lectures would allow me to better follow the lectures as sometimes they jump also the availability.
Better examples, less jumping through lectures...slower examples on hard stuff and spread through unnecessary info beforehand would allow my notes to be more efficient.

A little over one third of the students (35%) commented on the subject's content, reporting difficulty in learning and understanding the concepts included in the subject. They wanted more topics related to primary school rather than high school mathematics. Some comments were as follows:

Get back to basic primary school mathematics e.g. multiplication, division.
More explanations and simplify on some concepts.
Bit more relevant to real life situations i.e. schools.
Relate more to primary mathematics as it's a subject set for primary educators.

The remaining 11% of the students wanted more hours and more engaging tutorial classes that involved extensive discussion between the tutors and students of the examples and concepts taught in lectures. Some of their comments were as follows:
Tutorials could be lengthened, as could consultations.
Need more hours in tutorial classes, at least 2 hours.
Tutorials need to have structure, to cover topics addressed.
More hands on learning in tutorials interaction.
7.4.12.2 Improvement in MATH132

In valuing resources, 65% of the 23 responding students in 2009, and 81% of the 60 responding students in 2010, found that lectures were useful for their learning in the subject. It was perhaps not surprising, then, that of the 29 responses, 41% wanted the lecturer to be more interactive in class, preferred fewer hours of lectures, but needed longer hours in tutorials. Further, they preferred to have face-to-face lectures rather than video-conferencing, so as to promote greater in-class interaction between the lecturer and students. Some of their comments were as follows:
Two hours lectures per week would be a more manageable workload for a six credit point subject.
A whiteboard in lectures, just talking off the screen makes it confusing I need to see how something is done.
I can't learn through listening, need more interactive learning approaches.
Have more on teaching cut down length of lectures (3 hours a week is difficult to take in) and instead having a longer tutorial time where practical works done in class.

In terms of the subject's content, 38% of the students suggested limiting the inclusion of some topics, such as statistics, and they wanted more worked examples and revision of difficult topics to be included in tutorial classes. Since the use of calculators was permitted in all assessment tasks in MATH132 but only in the final exam in MATH131, the students also wanted basic lessons in using this tool to help them complete their exercises and assessments efficiently. Some of their comments were as follows:
Acknowledged that we need to know more than primary student, but the amount of over teaching the primary stage was becoming wider the further we went through.
Make less statistics in the subject. I think the subject went too much depth in statistics.
Cognitive overloaded. Less content is need for a more in depth understanding.
Only do subjects or topics that will be taught at primary school may be up to year 7/8.
Have calculator lesson with a standard calculator.

Less than one third of the students (21%) wanted more hours in tutorials and assistance from tutors to complete their tutorial exercises. They also wanted better communication between the tutors and students. Examples of comments were as follows:
I would have liked to have done tutorial exercises in tutorials rather than just doing the answers to quizzes and assignments.
More engagement as a tutorial goes through lecture material more thoroughly as we did for other subjects.
More supports and make less stressful situations in tutorial classes.
More hours on learning in tutorials and cater for different learning styles.
7.5 Conclusion

In this case study, the aim was to extend the examination of learning designs, particularly
the use of the learning design maps, to a new subject area. To do this, the researcher examined the effectiveness of the subject design, both in the e-learning system and in face-to-face learning, so as to identify whether the students found the design adequate and, if not, to improve the design. The final outcomes and assessment results in MATH131 showed that the changes made in the subject design, from the use of weekly folders in 2009 to the fortnightly learning design maps in PDF files and a change of lecturers in 2010, were less successful in improving student outcomes in 2010 (refer to Table 7.29). The findings from the student survey in 2010 revealed that four resources, assignments (93%), quizzes (92%), lecture notes (88%), and online practice quizzes (87%), were considered good quality resources. Three topics were reported as areas of competence by students: addition (94%), multiplication (91%), and subtraction (90%). In 2011, a refinement of the maps to the improved fortnightly learning design maps, in HTML files in addition to PDF files, was implemented for easy access in the e-learning site. An analysis of the student outcomes revealed significant changes in performances, both in the assessment and in the final exam, in 2011 compared to 2010 (refer to Table 7.29). This result indicated that the use of the maps in HTML files positively impacted on student outcomes in 2011. However, there were similar outcomes in the students' final grades between 2009 (weekly folders) and 2011 (maps in HTML files).

As with MATH131 in 2010, the final outcomes and assessment results in MATH132 revealed that the changes in the subject design, from the use of weekly folders in 2009 to the fortnightly learning design maps in PDF files and a change of lecturers in 2010, did not improve the student outcomes in 2010 (refer to Table 7.29). There appeared to be no improvement in the final grades and assessment results between the 2009 and 2010 cohorts. The outcomes from the student surveys demonstrated that three resources in 2009, online practice quizzes (100%), tutorial classes (91%), and tutorial exercises (87%), and five resources in 2010, the mid-session exam (92%), worked solutions (90%), quizzes and online practice quizzes (88%), and assignments (85%), were considered high quality resources in the subject. However, the students in both cohorts did not perceive themselves as competent in any of the topics in MATH132. This was also reflected in 19% of the students in MATH132 (2010) regarding the subject as less relevant to their lives, compared to 7% in MATH131 (2010) (Z = 2.58, p = 0.005).
Table 7.29 Summary of changes in the subject designs and student outcomes across sessions in mathematics subjects

Changes in the subject designs/learning designs in the e-learning site
  MATH131 March/Autumn 2009: By-type resource folders; weekly folders; no student survey
  MATH131 March/Autumn 2010: By-type resource folders; fortnightly learning design maps in PDF files; different lecturers
  MATH131 March/Autumn 2011: By-type resource folders; improved fortnightly learning design maps in HTML files additional to PDF files; no student survey
  MATH132 August/Spring 2009: By-type resource folders; weekly folders
  MATH132 August/Spring 2010: By-type resource folders; fortnightly learning design maps in PDF files; different lecturers

Changes in outcomes
  MATH131, 2009 versus 2010
    Final outcomes: Proportion of students attaining top grades (High Distinction and Distinction) was significantly higher (Z = 2.91, p = 0.0019) in 2009 (48%) compared to 2010 (32%)
    Outcomes from survey in 2010: Four resources reported as good quality resources; three topics considered as high competency by students
  MATH131, 2010 versus 2011
    Final outcomes: Proportion of students attaining the top grades in 2011 (40%) increased significantly (Z = 1.78, p = 0.037) by 8% compared to 2010; students in 2011 outperformed students in 2010, particularly in their total assessment (t441 = 2.926, p = 0.004) and final exam marks (t438 = 2.012, p = 0.045)
    Outcomes from the e-learning tracking analysis in 2011: Of the 1851 accesses to online resources, 47% used the maps and 53% used the by-type resource folders; of the 162 students, 21% used the maps, 51% used the by-type resource folders, and 28% used both designs
  MATH132, 2009 versus 2010
    Final outcomes: No significant difference was evident in either the total assessment or final marks between cohorts; the distribution of student final grades was similar between cohorts
    Outcomes from surveys: Three resources in 2009 and five resources in 2010 reported as good quality resources; none of the topics considered as high competency by students in either cohort
    Outcomes from the e-learning tracking analysis in 2010: Of the 836 accesses to online resources, 27% used the maps and 73% used the by-type resource folders; of the 96 students, 30% used the maps, 57% used the by-type resource folders, and 13% used both designs
These results were also supported by the analysis of e-learning student tracking statistics in week 12 for both subjects: MATH131 in 2011 and MATH132 in 2010. As shown in Table 7.29, the proportion of accesses to online resources using the fortnightly learning design maps increased significantly, from 27% in 2010 (maps in PDF files) to 47% in 2011 (maps in HTML files). This suggests that the improved fortnightly learning design maps using HTML files in 2011 possibly worked better than the PDF files used in 2010. An analysis of student choices in accessing resources provided in the e-learning site revealed that a little over half of the students in both MATH131 (54%) and MATH132 (57%) preferred to use the by-type resource folders rather than the maps (refer to Table 7.29). However, outcomes from the MATH131 survey in 2010 showed that about half of the students (54%) and four out of five tutors regarded the fortnightly learning design maps as useful for locating the resources in the e-learning site. Overall, there was significant improvement in student performances when the maps in HTML files (2011) were available compared to the maps in PDF files (2010). However, as stated in section 4.6 (Chapter 4), this could possibly be due to differences in the cohorts enrolled in the subject as well as differences in the difficulty of the subject materials. Student performances were similar when the weekly folders (2009) and the maps in HTML files (2011) were provided to students. This suggests that the use of the weekly folders in 2009 and of the improved fortnightly learning design maps in HTML files in 2011 potentially helped students to learn better in the mathematics subjects.
CHAPTER 8 CONCLUSION

8.0 Revisiting the evidence

In this final chapter, it is apt to review what has been learned from the explorations of the
three case studies in developing an embedded learning support system in e-learning so as to improve statistics education. The three case studies highlighted the importance of learning designs when attempting to improve student outcomes through the use of video resources and other supports. Learning design is essential when designing subjects within an online learning environment. However, as context varies, giving rise to different student needs, learning designs may also need to vary. For instance, an examination through the student surveys revealed that 98% of the postgraduates in GHMD983 August/Spring 2008 highly regarded the use of videos to support their learning. In contrast, with essentially the same learning design and the same videos, only 50% of the undergraduates in STAT131 March/Autumn 2010 endorsed the use of videos for their learning in the subject. This highlighted the need for additional learning support, based on the perception that students needed more time to learn, and as a consequence the Headstart program. Being able to identify what works and what does not in an environment as complex and variable as student learning requires the gathering of evidence over time and in different contexts. Evidence was gathered through several sources of data, including students' assessment results and final grades, student surveys, e-learning student tracking statistics, the online discussion or student forum, and peer briefings. Student surveys were undertaken at the end of sessions in each case study through three stages of implementation. The e-learning student tracking statistics provided evidence of student usage of the online resources available in the e-learning systems. Analysis of the student forum also informed the lecturer and the researcher about emerging issues and student needs in learning the subjects. Discussion with colleagues and students in the classrooms allowed the lecturer and the researcher to undertake further steps to improve the subject designs, particularly in the e-learning systems. The inconsistency in such evidence, particularly assessment outcomes, poses further questions, while consistent aspects allow a temporary settling on the form and content of the learning designs made available to students.
The second key aspect of the learning support system, after the provision of high quality resources, is the assessment system. The primary purpose of the assessment system's "redeeming" feature has been to identify students at risk so that they could obtain additional support. The assessment data has also provided evidence on student outcomes across several approaches to assessment used to measure student learning and understanding of the subjects. Before one seeks to establish a learning support system, it is essential that the resources, tasks and activities prepared for students are of high quality. In this study, a benchmark was set for the value of resources provided to students: those resources valued by at least 85% of the students as moderately or extremely useful for their learning were considered good quality resources. The feedback provided by students through the surveys suggested a starting point for the improvement of resources, topics and the subject design, particularly in the e-learning systems. It was expected that the primary resources, such as lectures and lecture notes, and the task materials, including laboratory tasks/tutorial exercises, worked solutions, and assessments (tests, quizzes, assignments, mid-session exam), would be of great importance to student learning and hence achieve the benchmark level. Other additional resources, including Edu-stream, the student forum, the textbook, reading materials, the by-topic navigational tool, and relevant websites, were not necessarily valued as highly as the benchmark level. One can examine quality from different perspectives. For example, to enhance student engagement and learning experiences, particularly within the e-learning environment, Boud and Prosser (2001) highlighted strategies needed to design "high quality learning" in higher education. These strategies are based on four key principles encompassing both student learning outcomes and learning processes. The four principles involve student engagement, acknowledgement of the learning context, challenging students, and providing practice. These principles appeared to be aligned with the work done in this study, which was to design high quality resources for student learning within the e-learning systems, although it was clear there is scope for improvement in each of the four areas. The provision of video resources to students within the e-learning system was the initial focus of this study (case study 1). The videos were provided to students as support resources additional to the primary resources, including lectures, laboratory classes, the laboratory manual, and laboratory tasks. Outcomes from the GHMD983 surveys in both 2008 and 2009 revealed that more than 95% of the postgraduates endorsed the use of videos for their learning. In particular, all distance students (n = 12) in 2008 valued this resource as useful, as the videos helped them learn while having limited or no access to campus. It was evident that the videos were ranked as highly as the primary resources in both years (refer to Table 5.24) and they were considered among the top six good quality resources in the subject.
8.1 Subject design considerations in the e-learning systems

In the three case studies, it was evident that the provision of learning resources and support materials in the e-learning systems potentially assisted student learning and understanding of the subjects, along with the human-based resources available via face-to-face learning. In STAT131, it appeared that the subject needed to provide additional supports to students. These supports included the provision of the Headstart program in the e-learning site and a slight modification to the assessment approach in 2011. Consequently, the new design features resulted in improved student outcomes, with the failure rate dropping significantly to 13% in 2011 compared with the years between 2005 and 2010 (Z = 1.99, p = 0.023). The e-learning system provides a flexible space for educators and students engaged in the teaching and learning process, enhancing the engagement of both on-campus and distance students in their learning. It appears that e-learning systems have been designed to support student learning, particularly for distance-based students, and that eventually they will have diverse influences on the ways in which students engage with their university study.
[The e-learning systems] have the capacity to change how students collaborate with others, communicate with [lecturers] and access the materials which they use to learn. It is commonly assumed that [the e-learning systems] enrich student learning by opening access to a greater range of interactive resources, making course contents more cognitively accessible, providing automated and adaptive forms of assessment, and developing students‟ technology literacy. Asynchronous online tools allow students to interact with learning materials, their peers and the entire university in ways that are not bound by time or place. (Coates, 2006, p. 5)
8.1.1 Placing resources in e-learning sites

It is essential that all resources and support materials provided to students are placed appropriately in the e-learning sites. This not only eases student access to resources but also promotes higher engagement with the work and tasks given to students in the subjects. For instance, this study has explored several ways of designing the subject homepage within the e-learning systems through the three case studies. It appeared that the design which provided students with both the by-type resource folders and the weekly/fortnightly folders was optimal for locating resources and support materials in the e-learning sites. Through the by-type resource folders, students are able to locate resources according to category, for example lecture notes, worked solutions, laboratory tasks, videos, tutorial exercises, and assessments. The weekly/fortnightly folders (also labelled with topic) allowed students to identify tasks/exercises
associated with resources and support materials, and helped them to complete their work from time to time throughout the session. Data from the e-learning tracking statistics provided evidence of students' preferences for using both the by-type resource folders and the weekly folders implemented in SHS940 August/Spring 2010. It was found that 61% (n = 25) of the 41 students had used both the by-type resource folders and the weekly folders (which embedded the learning design maps) to access resources provided in the e-learning site. This suggested that students preferred to use both the by-type resource folders and the weekly folders rather than having only one design. However, in different subject areas, the proportion of students who used the different types of folders to access resources varied. For MATH131 and MATH132, less than 30% of the students used both designs: 28% (n = 46) in MATH131 and 13% (n = 12) in MATH132. The results suggested that having both designs available in the e-learning site is potentially beneficial for students, though only a small number of them used both the by-type resource folders and the fortnightly folders (embedded with the fortnightly learning design maps). To enhance student engagement with their work through the subject design in the e-learning systems, it is also suggested that the weekly/fortnightly folders be named or labelled to indicate the topics covered within the particular week(s). Two students commented
I prefer the folder design that has lectures in a lectures folder, etc. However I do like the fact that each week has a sub-heading with the topics covered in that week which makes it a lot easier to revise each individual topic. (Individual student survey comment observed in STAT131 March/Autumn 2010) I used the design map to find the weeks lab work, solutions, and the lecture slides. This was also used for finding information on topics for the lab tests. The resource would have been more useful had the lecture slides been organised by topic rather than week. Nothing worse than having to click on every week trying to find out what week we learnt Chi-square. (Individual student survey comment observed in STAT131 March/Autumn 2011) It is important to note that weekly/fortnightly folders included the three major components, resources, tasks, and supports, as had been displayed in the LDVS maps.
8.1.2 Learning design maps to organise major components

Much of the design work in this thesis involved the provision of learning design maps incorporating links to resources, tasks, and support materials to support student learning within the e-learning systems. The maps provided within the weekly/fortnightly folders were considered useful by almost 70% of the students in STAT131 March/Autumn 2010, but a source of irritation for others; fifteen per cent of the 83 students commented negatively on their experience of using the maps, in terms of their complexity and excess of information, as one student responded,
It is a good tool to outline the content of the subject in the early stages but as the content of subject expands and we get to week 7 or later, the maps are extremely difficult to follow because there are interlinks between the pages so some content appears multiple times. It becomes frustrating and it does not make STAT any easier. (Individual student survey comment observed in STAT131 March/Autumn 2010)

The e-learning tracking analysis provided evidence of the number of student accesses to resources available in the e-learning site. In SHS940 August/Spring 2010, both designs, the learning design maps within weekly folders and the by-type resource folders, were introduced to students, and the e-learning tracking statistics revealed that 45% (n = 443) of the 993 accesses to resources were made using the learning design maps. The subject design which provided both the maps within weekly folders and the by-type resource folders was associated with improved learning outcomes, as the top grades rate rose to 81% and the failure rate dropped to 2.4% in 2010. In SHS940 August/Spring 2011 there were no maps; two draft/final assessment tasks replaced two test-retest assessments, and the subject design combined use of both the weekly folders and the by-type resource folders. The learning gains evident with maps were sustained: the final outcomes in 2011 with no maps were as positive as when the maps were available in the e-learning site, with a top grades rate of 85% and a failure rate of 1.7%. However, it became apparent that the weekly folders were similar to the maps provided in 2010 through the inclusion of the three major components: resources, tasks, and supports. The provision of the fortnightly learning design maps in the mathematics subjects was perceived as useful for learning by approximately half of the students (54%) in MATH131 March/Autumn 2010. Though this was below the benchmark level, an analysis of the e-learning tracking statistics in both MATH132 August/Spring 2010 and MATH131 March/Autumn 2011 revealed the potential of this resource. A test for differences in proportions provided evidence that the proportion of accesses to resources made using the fortnightly learning design maps in MATH131 (47%) was significantly higher (Z = 9.43, p < 0.0001) than in MATH132 (27%). This suggested that the maps were useful in supporting learning for a subset of students. However, as comments suggest, they need to be improved in terms of accessibility and made less complex in linking the three major components (resources, tasks, and supports) provided in the e-learning site. It is clear there is scope for further investigation of the maps. Students' response to the maps in HTML was better than to the PDF documents; as technical issues with creation and use are resolved they may become more valued. However, in 2011 the lecturer was pressed for time and weekly folders without the maps were used. As gains were sustained, the time needed to develop maps might not be cost effective given the learning gains; developing more videos on missing topics may be a more efficient endeavour.
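For reference, the comparisons of proportions reported in this and the preceding chapters (for example, the 47% versus 27% of accesses made via the maps) are of the standard two-sample form. A minimal sketch of the pooled two-proportion z statistic is given below in generic notation, with x_i accesses (or endorsements) out of n_i observations in each group; the thesis does not state which variant of the test was used, so the pooled form is an assumption here.

$$\hat{p}_i = \frac{x_i}{n_i}, \qquad \hat{p} = \frac{x_1 + x_2}{n_1 + n_2}, \qquad Z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}\,(1-\hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$$

Under the null hypothesis of equal proportions, Z is referred to the standard normal distribution, which yields the p-values quoted alongside the Z statistics in this chapter.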
8.1.3 Video resources to support learning

The development of videos was seen as a key component of an online learning support system. The first step involved the provision of high quality resources, with tasks inspired by the different learning theories and students at risk identified through the test-retest assessments. Having identified at-risk students, the lecturer could refer them, or they could self-refer, to video demonstrations and explanations. These videos were initially provided for the postgraduates in GHMD983 August/Spring 2008 who were concerned about the study of statistics. Video resources were provided in the e-learning site aimed at supporting their learning and understanding of statistics in three subsequent teaching sessions. The use of videos was particularly important in assisting the learning of distance students, as there was no face-to-face interaction between them and the lecturers, tutors, and peers. A high proportion of students (more than 95% in 2008 and 2009) considered the videos beneficial for their learning, and most of them (89% in 2009) reported that the resources had helped them in solving the problems given in their laboratory work. Initially, video resources were developed on difficult topics, but later videos covered most major topics (see Table 5.1). In particular, eight units of videos were developed to help students complete their work using the statistical software (JMP). Students commented
Whereas I personally found the videos to be of assistance as I was a distance student and this was my only opportunity for classroom instruction. I also found at times I did not understand and had no one to ask questions of as I was not in a class. (Individual student survey comment observed in GHMD983 August/Spring 2008)

Being a distance student that was a huge gap in the learning process of not being able to see the learning process involved in the statistical applications we were learning, and also not having the lab environment within which to ask questions, so the videos helped. (Individual student survey comment observed in GHMD983 August/Spring 2008)

In STAT131, video resources were developed particularly for weaker students, as it was recognised that good students can read solutions more quickly than they can watch a video of worked examples. As with GHMD983/SHS940, videos were provided to students on most topics in the subject. Virtually all of the postgraduates in GHMD983 (2008 and 2009) found these
resources useful. In contrast, only 39% of the undergraduates in STAT131 March/Autumn 2009 endorsed the use of videos for learning in the subject (see Table 5.13). Though the videos were similar to those provided in GHMD983/SHS940, they appeared less successful in helping students to learn and understand the subject. It was discovered that the different manner in which the videos were placed in the STAT131 e-learning site was the likely reason that 15% of the students were not aware of the availability of the resources and 22% did not use them at all (refer to section 5.2.4.2 in Chapter 5). It appeared that it was necessary to place the videos in the weekly folders, as had occurred for GHMD983. In March/Autumn 2010 and 2011, video resources were embedded via links in the learning design maps within the weekly folders in the e-learning sites. Outcomes from the student surveys provided evidence that the proportion of students who valued the videos increased from 39% in 2009 to 50% in 2010 and 59% in 2011 (see Table 6.6). Specifically, the proportion of students valuing the video resources in 2011 increased significantly, by 20 percentage points, compared to 2009 (Z = 1.91, p = 0.028). This coincided with a change from by-type resource folders to weekly folders with maps in PDF files, and finally both by-type resource folders and maps in HTML files. For instance, one student commented
The video resources should be kept as they provide a good way to demonstrate some technical steps for producing data in SPSS (the statistics package we used). Student could always take a look of the video for a visualised demonstrate when they still don't understand how to do it after reading the steps provided on notes. The videos were extremely useful. More of them would be great. (Individual student survey comment observed in STAT131 March/Autumn 2011)

Having provided these resources in a learning design which had been more successful for postgraduate students suggested that an additional innovation, such as the Headstart program, might be useful for the undergraduate students. In MATH131 and MATH132, there were limited video resources provided to students in the e-learning site to support their learning of mathematics. The provision of videos in these subjects was aimed at supporting students' basic skills in two mathematical areas, fractions and number theory, covered predominantly in MATH131. In other words, the role of videos in these subjects was primarily as a refresher on those two topics. It was anticipated that the resources would be useful for some students who needed to review their basic skills. As evident in the findings from the student surveys, less than one third of the students (22% in MATH132 August/Spring 2009 and 19% in MATH131 March/Autumn 2010) perceived the videos as useful for their learning (see Table 7.7). However, almost one third of the 32 students in MATH131 recommended that other topics which they perceived as difficult to learn be included in the videos. These topics encompassed statistics and algebra. This suggested that the videos would perhaps support student learning in the subjects more effectively if they encompassed most major topics rather than focusing on only two.
8.1.4 Nature of assessment for student learning

From 2003 until recently, several approaches to assessment were implemented in the subjects, particularly in GHMD983/SHS940 and STAT131 (as noted in sections 4.9.2 and 4.9.4). In GHMD983/SHS940, changes in the assessment approach have moved from:
three large assignments (2003 to 2005) (associated with high failure rates),
five smaller pieces of continuous assignments (2006 to 2007), to a structure where students were able to redeem themselves from failure:
five tests and retest approach (2008 to 2009),
three minor and three major assignments (2010), and
draft and redraft of the first two assignments along with three other test and retest assignments (2011).
Evidence from the student survey revealed that 85% of the students in 2009 perceived the assessment system as fair and as preparing them sufficiently to sit their final exam at the end of the session. In particular, students favoured the assessment system that allowed them to re-sit the tests if they obtained marks of less than 70% on their first attempt. For the lecturer, this formed the core of her learning support system, allowing her to identify, focus on and work with students at risk. The final outcomes showed that the top grades rate shifted up from 59% in 2007 to 67% in 2008 but dropped to 37% in 2009 (possibly due to the change of lecturer in the middle of the teaching session). In SHS940 August/Spring 2010, a different assessment system, consisting of minor and major assignments, was applied as a different lecturer taught and coordinated the subject. Over this time failure rates remained low, from 8% in 2007 to 2% in 2010. There was a strong positive correlation between the students' total assessment marks (out of 60) and their final grades in 2010 (r = 0.992, p < 0.001), with 81% of students gaining top grades. Recent changes in 2011 include a draft and redraft approach for the first two assignments on topics where students can reasonably cover all ideas, for example exploratory data analysis and bivariate data analysis, rather than, say, hypothesis testing, where there are several subtopics or versions of tests. There were improvements in the subject's final outcomes, specifically a shift in the top grades rate from 67% in 2008 to 81% in 2010 and 85% in 2011, with the exception of the outcomes in 2009 (refer to Table 5.31).
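Assuming, as the notation r suggests, that this is Pearson's product-moment correlation, the coefficient for paired marks (x_i, y_i), i = 1, ..., n, takes the standard form below; the notation is generic and is not drawn from the thesis data.

$$r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2\;\sum_{i=1}^{n}(y_i-\bar{y})^2}}$$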
In STAT131, changes in the assessment approach have moved from:
assignments, summary and mid-session test (2003),
assignments, laboratory work and mid-session test (2004 to 2005),
four tests and three make-up tests (2006),
six tests with three compulsory and two optional tests (2007),
four tests (the best of three test marks were chosen) and opportunity to re-sit the tests (2008),
five test and retest assessments with a competency requirement of 65% to 70% (2009 to 2010), to
a draft and final version of the first assignment (in the Headstart program), a group draft and final assignment within session, and three test and retest assessments with a competency of 70% required for all assessments (2011).
Similar to the assessment system implemented in GHMD983 (2008 and 2009), students in both STAT131 March/Autumn 2009 and March/Autumn 2010 were allowed to re-sit the tests if they obtained marks of less than 70% on their first attempt. However, the final outcomes revealed an increase in the failure rate from 17% in 2008 to 23% in 2009 and 18% in 2010 (with a new lecturer for the second half of the session) (refer to Table 6.26). In 2011, the introduction of the Headstart program in the subject allowed students who engaged with the program to complete a draft and redraft of the first assignment via the e-learning site. Students who attained marks of 70% or above for this assignment were not required to sit the first test or retest. The second assessment was of the draft/final form and the remaining three assessments were tests and retests. The new direction in the assessment system, allowing students to prepare a draft and redraft of an assignment, enabled them to avoid a stigma of failure. Pedagogically this approach should be more positive than a test-retest approach. However, there were implications for the marking of the two approaches: the draft and redraft of the written assignment, with students working with self-selected data, took more marking time than the test-retest approach, which was undertaken on a small range of data sets. There was an improvement in the learning outcomes: the failure rate dropped from 18% in 2010 to 13% in 2011 and students in 2011 outperformed those in 2010 in their total assessment marks (a total of five assessments) and final exam (refer to Figure 6.14). Furthermore, there was a dramatic increase in the higher grades to 64% in 2011, the highest on record since 2000 (refer to Figure 6.13). It was also found that students with access to the Headstart program performed better in their final examination than those who did not engage with the program (refer to Table 6.15).
Unlike GHMD983 and STAT131, there have been no changes to the assessment system in MATH131 and MATH132 since the subjects were introduced in March/Autumn 2009. A continuous assessment approach was implemented in the mathematics subjects. The assessment consists of five fortnightly quizzes (each worth 3 marks), five fortnightly individual assignments (each worth 3 marks), weekly online practice quizzes via the e-learning system, and a mid-session exam (worth 20 marks). Outcomes from the student surveys revealed that more than two thirds of the students in MATH131 March/Autumn 2010 (70%) and MATH132 August/Spring 2010 (75%) favoured the assessment system which continuously assessed their learning and understanding of mathematics throughout the session. On average, a high proportion of students in MATH132 August/Spring 2009, MATH131 March/Autumn 2010, and MATH132 August/Spring 2010 perceived the online practice quizzes (92%), assignments (87%), quizzes (90%), and mid-session exam (82%) as important for their learning (refer to Table 7.7). Ninety-two per cent of the students considered that the online practice quizzes, accessible in the e-learning site, had helped them review the topics covered each week and prepared them to successfully complete the assessed quizzes. The continuous assessment approach implemented in the subjects appears to have effectively built up the learning experience of students and prepared them well for the final exam. There was a strong positive correlation between the total assessment and final exam marks in both subjects, MATH131 and MATH132 (refer to Tables 7.24 and 7.27). The final outcomes revealed that the failure rates had been consistently lower than 10% in both subjects, ranging between 1% and 6% from March/Autumn 2009 to March/Autumn 2011. The proportion of students attaining top grades (High Distinction and Distinction), however, was higher in MATH131 (ranging between 40% and 48%) than in MATH132 (ranging between 25% and 30%) (see Table 7.28). Continuous assessment with five or more assessment items appears preferable to fewer but larger assignments. Assessment allowing redemption appears useful, allowing students to attain competency and hence raising grades and lowering failures. Above all, these types of systems are useful for identifying students at risk.
8.1.5 High quality resources for student learning

In this study, it was anticipated that the resources provided both in the e-learning site and in the face-to-face environment supported student learning and understanding in the subjects. For this reason, those resources perceived as useful by at least 85% of the students in a subject were considered high quality resources for student learning. From another perspective, it was expected that the failure rate should be at most 15%. Overall, in the three case studies the primary resources, such as lectures and lecture notes, and the task materials, including worked solutions, laboratory tests, assignments, quizzes, laboratory tasks, and tutorial exercises, potentially impacted student outcomes in the subjects. However, several resources appeared less useful for student learning, such as Edu-stream (recorded lectures), the student forum, the limited video resources, and those resources that did not provide coverage of the entire subject. In GHMD983, outcomes from the surveys revealed that six resources were considered useful for learning by more than 85% of the students in both 2008 and 2009. These were (i) laboratory tasks (100%, 96% respectively), (ii) worked solutions (98%, 100% respectively), (iii) laboratory tests (100%, 96% respectively), (iv) lectures (95%, 86% respectively), (v) online lecture notes (95%, 86% respectively), and (vi) video resources (98%, 96% respectively) (refer to Table 5.24). Additionally, two other resources were considered important for learning: the work done in students' own time (93% of students in 2008), and laboratory retests (89% of students in 2009). Particularly among the on-campus students in 2008, the lectures (100%) and laboratory classes (96%) were of great importance for their learning. Likewise, the lecture notes (100%), laboratory tasks (100%), and worked solutions (96%) were considered useful in the subject (refer to Table 5.3). Surprisingly, video resources were among the top six high quality resources even though the videos were initially developed to support student learning particularly for distance students. In STAT131, two major resources were considered important by more than 85% of the students throughout the three teaching sessions, March/Autumn 2009, 2010 and 2011 (refer to Table 6.3 and Table 6.16). These resources were the worked solutions (95%, 96%, 98% respectively) and the laboratory tests (92%, 90%, 88% respectively). These two resources appeared to be essential in the subject and were deemed by students to have helped them learn most. Additionally, several resources were perceived as useful for learning by each cohort: (i) laboratory tasks (97%), laboratory manual (97%), laboratory classes (89%), laboratory retests (87%), and e-learning (87%) in 2009, (ii) laboratory tasks (88%) in 2010, and (iii) the lecturer (92%) in 2011. As with GHMD983, seven resources were valued as high quality resources in 2009, compared to three resources in both 2010 and 2011. Students in 2009 regarded the laboratory classes and laboratory tasks as useful, but some students commented that the two-hour classes were insufficient for completing the laboratory tasks and hence recommended a reduction in tasks; students are, however, expected to complete work out of class time, and a reduction in tasks is likely to impact on learning outcomes. The laboratory tasks designed for both STAT131 and GHMD983/SHS940 involved activities providing multiple pathways and allowing for different learning styles, drawing heavily from all the learning theories mentioned in Chapter 3 (asking students to read, calculate, define, describe, apply, exemplify, analyse, interpret, write, identify different perspectives, "do" things, work with peers, and
communicate). Timely feedback from the lecturer helps students to complete their assessments, and good communication with students, both face-to-face and online via the student forum, appears to be important in supporting student learning. Unlike GHMD983, none of the STAT131 cohorts perceived the lectures to be of high importance, and this has been so since the introduction of the laboratory manual and worked solutions. Students like different things, for example the by-type resource folders and the weekly folders. Also, as indicated in 2010, 40% of the students would like a change to three separate one-hour lectures, with a summary of topics provided in each lecture, rather than the current weekly two-hour plus separate one-hour lecture preferred by 60% of students. Though attendance at lectures was not compulsory, there was a suggestion that rewarding students who attended the lectures could be useful, and in any event students wanted more interaction with lecturers. However, this departs from the aim of creating greater independence in students and the movement to more learning online. The lecturer was regarded as useful for student learning in 2011 but less so in 2009 and 2010, possibly reflecting other resources assuming greater importance since, except for 2009, the lectures have remained essentially the same. Still, changing cohorts can require better skills, and the change of lecturer in the middle of the session in 2009 could have been disruptive. Better skills in lecturing and tutoring are possibly needed to engage students in learning the subject, as one student commented
The lecturer needs to slow down some of the explanations in lectures. I had trouble keeping up in a number of lectures because they were too fast and the tutor I had was not very good. (Individual student survey comment observed in STAT131 March/Autumn 2010)

In MATH131 and MATH132, the online practice quizzes were considered the most valuable resource for student learning of mathematics throughout the three teaching sessions: by 100% of students in MATH132 August/Spring 2009, 87% in MATH131 March/Autumn 2010, and 88% in MATH132 August/Spring 2010 (refer to Table 7.7). The provision of this resource enabled students to self-assess their learning and understanding of particular topics covered in the weekly lectures via the e-learning system. It also guided students to successfully complete the quizzes assessed during tutorial classes. Several other resources were perceived as useful for learning by each cohort: (i) tutorial classes (91%) and tutorial exercises (87%) in MATH132 August/Spring 2009, (ii) assignments (93%), quizzes (92%), and lecture notes (88%) in MATH131 March/Autumn 2010, and (iii) mid-session exam (92%), worked solutions (90%), quizzes (88%), and assignments (85%) in MATH132 August/Spring 2010. Overall, five resources were valued as high quality resources in MATH132 August/Spring 2010, compared to four in MATH131 March/Autumn 2010 and three in MATH132 August/Spring 2009. None of the cohorts regarded lectures as significant for their learning, and this was evident from comments provided by 50% of the students in MATH131. They recommended improving the style of presentation in class, providing notes prior to the lectures, giving more explanation of mathematical concepts, and working out examples with solutions on a whiteboard. As with STAT131, 41% of the students in MATH132 August/Spring 2010 wanted more interaction with lecturers and preferred more hours in tutorials to complete exercises with the assistance of a tutor.
8.1.6 Alternative approaches

Variants of the change evaluation have been used to measure confidence, comfort and the usefulness of resources for several years as innovations have been introduced to STAT131, GHMD983 and a variety of other subjects. To effectively design the subjects within the e-learning systems, Boud and Prosser (2001) suggested another approach to examining high quality learning, which looks at the processes involved in the subject. These processes are based on four principles which highlight the importance of engaging students in their learning process, acknowledging their learning context, challenging students to be actively involved in the subject, and providing practice to demonstrate what they have learned. In STAT131, students in 2010 and 2011 were surveyed about their perceptions of the subject design according to several aspects representing dimensions of the four principle areas (refer to section 6.2.5.3). The findings showed that, on average, the proportions of students reporting higher quality with respect to the four areas were low for both the 2010 and 2011 cohorts: (i) engaging students (43%, 48% respectively), (ii) acknowledging the learning context (52%, 50% respectively), (iii) challenging students (60%, 58% respectively), and (iv) providing practice (67%, 60% respectively). Students' perceptions were similar between the two cohorts, except for one dimension, involving students practising their skills, where there was a higher percentage endorsement in 2010 (69%) compared to 2011 (56%). As with STAT131, students in MATH131 and MATH132 were also asked to indicate their perceptions of the subject design based on the four principles (refer to section 7.4.8). Outcomes from the surveys revealed a significantly higher percentage endorsement on several dimensions of three principles for MATH132 compared to MATH131. The students in MATH132 reported higher quality with respect to (i) engaging students, enabling them to access key concepts in many ways (76%) and connect to their prior experiences (71%), (ii) challenging students, highlighting the limits of their knowledge base (88%) and encouraging them to be self-critical at various points in the learning process (85%), and (iii) providing practice, encouraging them to communicate and demonstrate their learning (74%) and to be self-confident through practice (66%). This work also allowed examination of subject quality from a different perspective to the change evaluation. The lack of benchmarks was problematic, but the results provoked questions as to how the subjects could be improved in terms of these four principles. It is clear that for STAT131, with an influx of Computer Science students, establishing relevance has again become an issue. In this study, the CAOS test was also used as an external measure to assess students' statistical literacy, thinking, and reasoning in STAT131 in both 2010 and 2011. However, there were shortcomings in using this instrument to measure student learning outcomes: there were no significant learning gains in the 2011 cohort and only a small average increase of 4 percentage points in the 2010 cohort at the end of the session. Furthermore, students in both cohorts were correct on a little over half of the 40 items included in the test by the end of the session, a finding not dissimilar to those of other researchers (delMas et al., 2007; Jersky, 2010). It appeared that the instrument might need a better structure of test items in questioning key aspects of learning across the nine major topics. Issues were also raised about the terms used in some items measuring students' learning and understanding of particular topics, for example items 2 (boxplots), 9 (boxplots), 31 (confidence intervals), and 37 (probability) (refer to Appendix 6.1). This suggested that the CAOS test was less adequate in assessing students' statistical literacy, thinking, and reasoning compared to the traditional assessment applied in STAT131, possibly due to the low mark assigned to it and the lack of care in its completion by many students. The test might be more validly included in the subject if a higher mark were allocated for its completion and it were compulsory for all students. There were also shortcomings in the time limit specified to complete the test and the terms used in some of the items, as discussed in section 6.2.5.7. However, before its weighting is increased, a closer examination of what it measures compared to the STAT131 desired outcomes is warranted.
8.1.7 A Headstart program to support learning

Unlike the postgraduates, the undergraduates in STAT131 had a high failure rate, suggesting that they needed more support in learning and that the additional support related to a time issue. In March/Autumn 2011, a Headstart program was introduced for the subject, aiming to provide students with extra time to learn statistics prior to the commencement of the formal session. This program was also consistent with the notion, provided by the Faculty of Informatics (UOW), that the first six weeks are a crucial time period for first year students. A set of five combined lecture/activity guides was designed for this program and provided to students in the e-learning site. Data from the e-learning tracking statistics revealed that 53% of the enrolled students downloaded the lecture notes/activity guides. Students were allowed to complete one
piece of assessment within the subject through this program, although only eight students effectively completed the draft and submitted the final assignment (refer to Table 6.14). The final outcomes in 2011 provided evidence of the impact of this program, specifically the impact on student learning of accessing the lecture/activity guides. An analysis of mean final marks (a one-way ANOVA test) revealed that the students who engaged with the program outperformed those who did not. In particular, Scheffe post-hoc tests demonstrated that students who downloaded 4 to 5 lectures attained significantly higher mean marks (p = 0.019) than those who did not engage with the program. Similarly, students who downloaded 1 to 3 lectures attained higher mean marks (p = 0.046) than those who did not (refer to Table 6.15). The failure rate also dropped to 13% in 2011 compared to 18% in 2010 (refer to Table 6.26).
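To illustrate the form of this analysis, a minimal sketch of a one-way ANOVA across the three engagement groups is given below. The group labels mirror those described above, but the marks are invented for demonstration only; the sketch is not the STAT131 analysis itself, and the Scheffe post-hoc comparisons reported above are not reproduced here.

```python
# Illustrative sketch only: hypothetical final marks for three Headstart
# engagement groups (0 guides, 1-3 guides, 4-5 guides downloaded).
from scipy import stats

no_engagement = [48, 55, 60, 52, 58, 62, 50]
downloaded_1_to_3 = [58, 66, 70, 63, 72, 61]
downloaded_4_to_5 = [68, 74, 80, 77, 71, 83]

# One-way ANOVA tests whether the mean final marks differ across the groups.
f_stat, p_value = stats.f_oneway(no_engagement, downloaded_1_to_3, downloaded_4_to_5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# In the thesis, Scheffe post-hoc tests were then used to identify which pairs
# of groups differed; that step is omitted from this sketch.
```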
8.1.8 Use of learning theories

In this study, a variety of learning theories were used as a basis for designing good quality resources to assist student learning in the subjects. Outcomes from the surveys suggested the need to provide appropriate timing and feedback to students, in accord with behaviourist strategies. For example, students should be given lecture notes prior to the lectures, worked solutions or sample tests on topics to be assessed, and timely feedback from the lecturer on their assessments to help them do the retests if necessary; resources were released to students on either a weekly or fortnightly basis in the e-learning sites. While most releases were timely, with lectures in advance, if students did not find notices or resources when looking, the impact on learning was disruptive. One student commented, 'Timing is perfect - test every two weeks is necessary to reinforce the subject throughout the semester.' (Individual student survey comment observed in GHMD983 August/Spring 2009) Assessment tasks which require students to generate new ideas or alternative perspectives, and to "do" things through active learning and creative and critical thinking, appear to be aligned with cognitivism. For example, the initial framing of the subject involved students identifying different perspectives as to what statistics involves. In accord with constructivism and social constructivism, the assessment tasks were designed to include social engagement and working with peers in classes. In STAT131, students were required to work as a team using self-selected data relevant to their discipline and to present their work in class. This encouraged students to collaborate and co-operate with others and allowed them to gain experience in working with others with different learning strengths. Aligned with experiential learning theory, students were also exposed to the use of real-world examples to reflect upon their experiences, apply ideas, and develop their personal meaning. However, one student commented
It's simply not to include STAT131 as a core subject for B. Computer Science. A more engaging course for Computer Science students is needed and change in subject content to accommodate the non-mathematical students. (Individual student survey comment observed in STAT131 March/Autumn 2011)

And yet, as one mathematics student commented,
Having a mathematical mind, I sometimes found it difficult to understand the concepts as they were presented in class. I think a better way to teach mathematics students at the very least would be to work with a more "from the ground up" approach, rather than learning things largely by rote and then applying them. This clearly wouldn't work for non-mathematics students, but I think that more in-depth, ground-up mathematical explanations could be given as extras on e-learning. I myself would've found such resources extremely useful and also very interesting. (Individual student survey comment observed in STAT131 March/Autumn 2011)

The use of technology, particularly the delivery of subject resources and materials within the e-learning systems, appears to coincide with connectivism, which describes learning in a networked world. Students were able to communicate with the lecturer and other students through email and the e-learning forum at any time and from anywhere. They were also able to explore and learn current information related to their disciplines through links provided in the e-learning site, or by being asked to find things on the internet.
8.2 Institutionalising the innovations

In this study, the use of technology, particularly video resources, in supporting student learning within the e-learning systems has been explored, particularly in the first two case studies, so as to improve statistics education. The issue of learning designs for delivering resources effectively within the e-learning system emerged from the second case study and led to the subsequent implementations. This was done in the context of Alexander and Herdberg's (1994) evaluation model, in which the final stage focuses on the institutionalisation of the innovations. This stage discusses the extent to which the innovations have been adapted outside the contexts in which they were created and trialled. As the research evolved, while the subject resources, specifically video resources, were being trialled and embedded in the e-learning systems through links on the learning design maps, the work done and the results found were communicated over time, and the innovations were taken up in other departments. Hence, this section explains how others at the University of Wollongong have adapted and extended the innovations to support student learning in different contexts and subject areas.
The first adaptation of the learning design maps was implemented in the March/Autumn 2011 session in one of the subjects offered by the Faculty of Science, SCIE911 (Fundamentals of Science Communication), which was designed by Emily Purser, a specialist from the Academic Language and Learning Development centre, University of Wollongong. SCIE911 is a six-credit-point preparatory subject taken by most of the postgraduates in the Master of Science program prior to commencing their studies in the specific disciplines of Biotechnology, Chemistry, or Earth and Environmental Studies. The subject has been delivered in on-campus mode at the Wollongong campus with eight hours of face-to-face contact weekly throughout the session. It aims to ensure that all students entering the Master of Science program are aware of, and have the opportunity to develop competency in, the types and level of language and communication necessary for successful engagement in science subjects, particularly at the University of Wollongong. With a subject delivery system similar to that of STAT131, using blended learning, three major learning modules were designed in SCIE911: learning journal, literature review, and public speaking. These modules were located in the subject e-learning site, which was accessible by all students (see Figure 8.1). The modules were designed to include the three symbols (triangle, rectangle, and circle) used in the Learning Design Visual Sequence (LDVS) representation (Agostinho et al., 2002), structured to maximise their transferability and adaptability as the materials are recycled in other subjects (see Figure 8.2). There were also "how to" video resources linked to the relevant stage of the materials delivery in the e-learning site so that the resources could be found easily and well used by students.
Figure 8.1 SCIE911 subject homepage accessible in the e-learning site
Figure 8.2 The three symbols containing links to relevant subject materials, similar to the learning design maps used in STAT131

The use of the learning design maps in the STAT131 e-learning site was also communicated to and shared among teaching staff in the Faculty of Informatics, University of Wollongong. A showcase of e-learning examples was held in October 2010, involving 30-minute oral presentations from six presenters (teaching staff) explaining their projects on teaching practices via the e-learning systems at the Faculty of Informatics. The e-learning showcase handout, which outlines the underlying principles of the projects, is shown in Appendix 8.1. Funding has been obtained to support the development of two additional Headstart programs for mathematics subjects which currently experience high failure rates, MATH151 and MATH121.
8.3 Future directions

Undertaking a project such as this thesis has provided invaluable experience for the researcher, working with discipline experts in the provision of learning supports within the e-learning systems, particularly in GHMD983/SHS940 and STAT131. This has enabled the teacher/researcher to reflect on the role of technology in improving statistics education. Throughout the case studies, there has been a reduction in failure rates. There have been opportunities to trial and introduce other approaches to improving student learning. Issues have arisen that challenged the researcher's perceptions regarding the capability of online learning systems in supporting the learning of statistics. The preliminary focus was on the provision of learning resources, particularly video supports, in the e-learning systems, and later evaluations
focussed on subject design, encompassing the placement of resources, tasks, and supports in the e-learning site. This included the learning design maps and the assessment system. A new subject design feature, the Headstart program accessible in the e-learning site, was introduced to students in STAT131. The researcher's involvement as a tutor in both subjects, GHMD983/SHS940 and STAT131, provided further insights into the use of resources and subject design, and thus inspired her to reflect upon the entire process of educating students, specifically in statistics. These reflections are revisited in the subsequent sections.
8.3.1 Teaching of statistics

In order to improve the teaching of statistics, the researcher intends to implement changes, gather evidence and evaluate their outcomes. In particular, the researcher would like to begin by introducing changes into her teaching practices in accord with the following reflections:
Teaching statistics content to tertiary level students (in Malaysia) is different from teaching statistics content to undergraduate or postgraduate students (in Australia), due to the variation in topics covered in the syllabi as well as the students' contexts. For the past thirteen years, the researcher has used an instructivist approach to teach statistics content, as most of her students were passive in classrooms and there was limited interaction between the lecturer and students. In the future, the researcher would like to bring all the learning theories mentioned in this thesis into her teaching, as she believes that these approaches will encourage deep learning rather than rote learning among students. These approaches highlight the aspects of timing and feedback, social engagement and collaboration, active learning, creative and critical thinking, reflection, developing new ideas and alternative perspectives, and networked learning.
There was very limited student exposure to real data sets, particularly in laboratory classes, in Malaysia. Most examples and exercises discussed in lectures and laboratory classes are based on statistics textbooks mostly published by American authors. This frequently causes problems for students in understanding the terms (e.g. units of measurement such as pounds, ounces, inches, and degrees Fahrenheit) and contexts used in the exercises. In the future, the researcher intends to use more real data sets that relate to Malaysian contexts in her teaching, assessment, and laboratory tasks. Students should then be better able to connect with and perceive the role of statistics in their lives and future professions, noting that relevance was still an issue for many degree students in Australia.
The benefits of using tablet PC technology in the teaching of statistics in Australia flagged its potential for improving her teaching in Malaysia. Apart from developing video resources, this technology can be used efficiently for lecture presentations through the annotation of electronic documents such as PDF, Beamer (LaTeX), and Microsoft PowerPoint files, and this compares favourably with the use of a whiteboard. It also allows the lecturer to save the annotated version of the presentation slides and upload these documents to the e-learning sites for students to access.
Class sizes in Malaysia are smaller than in Australia. The use of fill-in lecture notes or a manual in small classes seems to be even more effective than in large classes. In Malaysia, the researcher was able to discuss the material and ask her students to fill in the gaps, for example with a worked solution. The method of fill-in lecture notes was adopted in Malaysia to urge students to attend and pay attention to the lectures. In addition, students were encouraged to buy the recommended textbook and sometimes the lecturer provided additional lecture handouts. It appears that fill-in lecture notes, in addition to the textbook and extra handouts, are potentially useful for student learning.
Teaching in Australia is different from teaching in Malaysia. In Australia there was a mix of domestic and international students with different mathematical and statistical backgrounds. The majority of the international students were unable to communicate or take part in classroom discussions effectively due to language barriers. Nevertheless, this appeared to resolve when they actively engaged in the student forum via the e-learning systems. Unlike in Australia, there is limited usage of e-learning systems in teaching and learning, particularly amongst full-time (on-campus) students, in Malaysia. All subject materials and resources are distributed to students in classrooms (hard copy), so it is not necessary for them to access materials via the e-learning systems; this was also the case prior to 1998 in these subjects. Increased usage of e-learning systems allows printed versions of materials to be reduced (moving towards paperless delivery), and the researcher would like to urge her students (specifically in Malaysia) to access all resources, including assessments, discussion, data sets, and relevant materials, through the e-learning sites rather than through printed notes supplied in classrooms.
8.3.2 Supporting the Headstart program

The Headstart program introduced in STAT131 in March/Autumn 2011 has provided opportunities for students to begin learning the subject a few weeks prior to the formal commencement of the session. Though only a small number of students engaged fully with the program by completing the assessment, there was evidence that over half of the students engaged in some
way with the lecture/activity guides. As a result, the program was associated not only with a drop in failure rates (to 13%) but also with a shift in the higher grades (to 64%), suggesting greater acquisition of statistical skills. It appeared that undergraduate students needed more learning support, not limited to within the session; they required more time to learn the topics covered in the subject. For this reason, the program continues to be implemented in STAT131 for the upcoming session, as recommended by students in the previous survey. However, further improvement relies on being able to better inform prospective students of the program's availability via email and notification in the e-learning site. The potential of this program to improve student learning of statistics has also signalled to the researcher the possibility of introducing similar programs for her students in Malaysia, provided that the e-learning system is available to all users and the need for this type of additional support is evidence based. This appears to be a new approach to teaching statistics in her country. The implementation of such a program would need to be aligned with the student contexts, technology, and nature of the subjects, which differ from those in Australia.
8.3.3 Statistics assessment

The student surveys revealed that students found that the assessment systems implemented in both GHMD983 and STAT131 contributed sufficiently to their learning and understanding of statistics. The opportunity to do the retest appeared to build student confidence and competency in topics, as they were given similar tests on different data sets when they attained less than 70% on a test. The system also allowed the lecturer to identify students at risk of failure who perhaps needed additional support in order to complete their tests. A competency-based system similar to the test-retest approach, the new draft and redraft of assignments, appeared to have more potential for supporting student learning, as it feasibly reduces a sense of failure more than the test-retest approach does. In contrast, the assessment system implemented in Malaysia requires students to complete several formative assessment tasks within time limits and allows no open-book tests in classrooms throughout the session. It would seem likely that the Malaysian assessment system encourages rote learning or surface approaches to learning. Nevertheless, within the existing system of assessment in both GHMD983 and STAT131, there are also refinements that may be more acceptable, though probably of less dramatic effect than a fundamental change to the entire assessment system.
The 70% competency requirement has been adopted because students who do not gain competency with an assessment on a single topic are unlikely to complete the multi-topic final examination. Examination performance improved when students were no longer permitted to aggregate poor marks.
There were also requests from students for tutors to frequently check their weekly work in laboratory classes so as to motivate them to complete the laboratory manual, even though fully worked solutions are provided in the e-learning site at the end of each week. This aligned with the findings of the case studies that the interaction between tutors and students, and tutor feedback during classes, are important for student learning. One reason for an in-class test has been to demonstrate to students that they must attempt questions in the laboratory in order to identify what they must learn for tests. Solutions and open-book tests are of little use if they have not done the work.
8.3.4 Nature of video supports

Identifying students at risk has not always meant the lecturer could support their learning. Having videos available allows the lecturer to refer students to them and allows aware students to self-refer. For the most part, particularly in GHMD983/SHS940, the postgraduates appeared satisfied with the video resources they had been provided with throughout the three teaching sessions. Evaluations suggested that there were gains in learning and understanding, as the top grades gradually increased and failure rates headed down. In STAT131, the undergraduates seemingly needed more learning support than the videos alone, though there was an increase in the proportion of students who favoured the resources in both 2010 and 2011. In MATH131/132, it appeared that the provision of videos as topic refreshers needed to be extended to include more of the topics that students found difficult. It was evident that the use of video in teaching potentially eases anxiety in the learning of statistics in Australia; hence it is possibly appropriate for videos to be used in teaching in Malaysia. Responses from the surveys revealed that the videos were most beneficial for students in demonstrating laboratory work using statistical software (e.g. JMP), interpreting the output in a meaningful way, and showing worked examples on difficult topics including probability, sampling distributions, and hypothesis tests. Several technical issues, such as browser compatibility, the size of video files, and file accessibility, need to be considered in the development of resources, particularly when this innovation is trialled in Malaysia, as the infrastructure may differ from that available in Australia.
8.3.5 Learning design
Providing supplementary learning supports, particularly videos, along with other learning resources within the e-learning systems is useful for students' learning and understanding if the following conditions are met:
The learning design space for locating the many resources, tasks, and supports in the e-learning system is appropriately designed, and students are guided in using the resources. Providing both weekly or fortnightly folders and by-type resource folders in the e-learning site optimises student access to resources and assists them to complete their tasks by associating tasks with relevant resources and supports throughout the session.
The usability of the learning design maps in the e-learning systems is optimised by providing the maps as webpages (HTML files) that are user-friendly across all types of browsers and computer systems (a sketch of how such a map might be generated is given after this list).
There is good communication between lecturers/tutors and students regarding the availability of videos to support their learning throughout the subjects. It is recommended that lecturers highlight during lectures exercises and examples for which video solutions or guides are available. This can motivate students to watch not only the videos used in lectures but additional ones. For example, the evaluation in STAT131 March/Autumn 2009 showed that some students (15%) were not aware of the video supports available in the e-learning site. Part of this communication will be through the student forum, email, and notifications in the e-learning site.
Students are provided with the extra time they need for studying. A session of, say, 13 weeks is perhaps insufficient for some students to learn all topics covered in the syllabus, and a Headstart program may alleviate this time constraint. If a curriculum redesign or reduction of content is not appropriate, the Headstart program can potentially be used so that students learn some topics prior to the formal session start, ideally including an assessment task (reducing within-session time pressure). Where possible, the program should be advertised to prospective students so that all who are interested or wish to participate can benefit.
Support resources such as videos are aligned with the assessment, so that students are encouraged to refer to the examples and demonstrations provided in the videos. For instance, students should be advised that if they have difficulty solving the questions given in their tests or assignments, there are similar tasks in the videos (as indicated in the maps) which might assist them.
Students are informed of any recent notifications relating to the subject, including tests, assignments, new resources or videos, the Headstart program, and lecture updates, using a “message box” located on the subject homepage in the e-learning site (see Figure 8.3). This is more effective in keeping students aware of the important tasks they need to complete from time to time than occasionally emailing them during the session. Messages are then moved to an archive so students may check past messages.
Figure 8.3 A “message box” posted in the STAT131 March/Autumn 2011 e-learning homepage in week 9
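As a follow-up to the point above about providing learning design maps as plain webpages, the following is a minimal sketch of how such a map could be written out as a basic, self-contained HTML file that renders in any browser. The weekly structure, item labels, and output file name are hypothetical; this is not the map generator used in the subjects.

```python
# Illustrative sketch only: writing a simple learning design map as plain HTML
# so that it opens in any browser. The weekly structure and file name are hypothetical.
from html import escape

week_map = {
    "Week 1: Exploring data": ["Lecture notes (PDF)", "Laboratory task 1",
                               "Video: Summarising data in JMP"],
    "Week 2: Probability": ["Lecture notes (PDF)", "Laboratory task 2",
                            "Video: Probability worked examples"],
}

def build_map_html(week_map):
    """Return a minimal HTML page listing tasks and supports for each week."""
    sections = []
    for week, items in week_map.items():
        bullets = "".join(f"<li>{escape(item)}</li>" for item in items)
        sections.append(f"<h2>{escape(week)}</h2><ul>{bullets}</ul>")
    return ("<html><head><title>Learning design map</title></head><body>"
            + "".join(sections) + "</body></html>")

with open("learning_design_map.html", "w", encoding="utf-8") as f:
    f.write(build_map_html(week_map))
```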
8.4 Conclusion
It is not possible to ascertain the precise impact of the learning supports on student learning of statistics. Nevertheless, it is clear from the work and evaluation undertaken in this study that the support resources, particularly the videos embedded in the e-learning systems, are useful in assisting students to learn and hence possibly in improving learning outcomes. As evident in the case studies, postgraduates had a high level of engagement with the videos, but the undergraduates appeared to need supports that were not limited to videos in order to assist their learning. The Headstart program provided an alternative support strategy which assisted students to gain a basic understanding of statistical skills and concepts several weeks prior to the formal session start. There is much scope to further improve the support resources, specifically the videos and the Headstart program, in terms of their implementation in the e-learning sites, through earlier availability and better advertising of the program so that a greater number of students have access. The provision of these support resources in the e-learning systems will be continued in upcoming sessions, taking into consideration students' feedback and the recommendations for improvement observed in the previous surveys.
A major issue with regard to learning designs was discovered when the provision of video resources to support learning among the undergraduates seemed less successful than for the postgraduates. Undergraduates reported that they were not aware of the videos' existence in the e-learning site. Unlike the postgraduates, the undergraduates perhaps needed more guidance to effectively utilise all the resources and support materials available in the e-learning sites. The learning design maps within weekly folders, along with by-type resource folders, wherein the weekly folders aligned resources, tasks, and supports, appear necessary. The learning design maps add further potential for supporting students, although ongoing work is needed to best design them and to have students use them. Additionally, it appeared that assessment systems play an important role in student learning and in an online learning support system. The test-retest approach enabled students to build up their competency in topics throughout the session. The identification of students at risk also enabled the lecturers to help students, particularly through encouragement and discussing issues with them, and being able to refer students to videos designed to talk them through tasks alleviated much of the time pressure on teaching staff. It is believed that the draft-and-redraft assessment approach potentially removes a sense of failure among students, which is more positive than the test-retest approach, although it has greater resource implications. Students express a need for more interaction with lecturers/tutors in classes, and this is associated with a lack of confidence or a perceived need, not necessarily with capacity to handle the work.
Evaluation is essential to understanding what has been done and its impact, and for identifying whether other strategies need to be found. In this study, it was discovered that not all educational contexts can be addressed with the same resources. For example, an adequate design for the postgraduate students may not suffice for the undergraduate students. Based upon the various sources of evidence, the researcher believes that the online learning support system played an important role in improving students' learning of statistics. The most significant finding in this thesis was identifying the key components of providing the learning support system in the e-learning environment in addition to primary resources (e.g. lectures, laboratory tasks, lecturers and tutors). These components involve:
i. Having a set of high quality primary resources. High quality resources were deemed to be resources regarded as useful for learning by at least 85% of the students enrolled in the subject.
ii. Having an assessment system that allows the lecturer to identify students at risk, or a similar filtering system, so that the lecturer can work one-on-one with a manageable number of students. Further, an action plan and strategies are needed regarding how to work with these students: perhaps by asking students to give an oral rather than a written comment, or by identifying blockages to submitting their repeat tests or even final assignments once help has been given by the lecturer. Being able to refer students to existing video resources is also useful.
iii. Video support provided to students across ALL topics in the subject to supplement other resources such as lectures, laboratory notes, and laboratory tasks. Weak students might not understand the so-called easy topics or processes, and videos can assist by allowing students to learn through hearing, watching, and reading.
iv. Good communication and timely feedback from the lecturer to assist students in completing their assessment, including: the quick release of results and feedback, whether or not students need to repeat the assessment; the release of worked solutions to the laboratory tasks/tutorial exercises for guidance before assessment; providing lecture notes ahead of time, allowing students to prepare before lectures; and giving prompt feedback to queries posted in the student forum.
REFERENCES Agostinho, S. (2009). Learning Design representations to document, model and share teaching practice. In L. Lockyer, S. Agostinho and B. Harper (Eds.). Handbook of Research on Learning Design and Learning Objects: Issues, Applications and Technologies. Information Science Reference, pp. 1-19. Agostinho, S. (2006). The use of visual learning design representation to document and communicate teaching ideas. In L. Markauskaite, P. Goodyear and P. Riemann, (Eds.), Who's Learning? Whose Technology? Proceedings of the 23rd Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE ), pp. 37. Sydney, Australia: Sydney University Press. Agostinho, S., Bennett, S., Lockyer, L., Kosta, L., Jones, J. & Harper, B. (2009). An examination of learning design descriptions in an existing learning design repository. In Same places, different spaces. Proceedings ascilite Auckland 2009. Available at http://www.ascilite.org.au/conferences/auckland09/procs/agostinho.pdf. Agostinho, S., Harper, B. M., Oliver, R., Hedberg, J. & Wills, S. (2008). A visual learning design representation to facilitate dissemination and reuse of innovative pedagogical strategies in university teaching. In L. Botturi and S. T. Stubbs (Eds.), Handbook of visual languages for instructional design: Theories and practices. Hershey PA: Information Science Reference, IGI Global, pp. 380-393. Agostinho, S., Oliver, R., Harper, B., Hedberg, H. & Wills, S. (2002). A tool to evaluate the potential for an ICT-based learning design to foster "high-quality learning". In A. Williamson, C. Gunn, A. Young and T. Clear (Eds.), Wind of change in the sea of learning. Proceedings of the 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education, pp. 29-38. Auckland, New Zealand: UNITEC Institute of Technology. Alexander, S. (1999). An evaluation of innovation projects involving communication and information technology in higher education. Higher Education Research & Development, 18(2), pp. 173-183. Alexander, S. & Hedberg, J. (1994). Evaluating technology-based learning: Which model? In K. Beattie, C. McNaught, and S. Wills (Eds.), Multimedia in higher education: Designing for change in teaching and learning, pp. 233-244. Amsterdam: Elsevier. 336
Alexander, S. & McKenzie, J. (1998). An evaluation of Information Technology projects for university learning, CAUT. Australian Government Publishing Service, Canberra. Algarni, A. (2009). Developing mathematical/statistical resource for hearing impaired students at tertiary level. Unpublished M.Sc. Dissertation, University of Wollongong, Australia. Al-Khalaifat, A. & AlRifai, R. (2002). A Case Study in the Chemical Engineering Freshman Course Using Enhanced Excel with Visual Basic and Power Point. Chem. Educator, 7(X), pp. 384-386. Springer-Verlag New York, Inc. Ally, M. (2008). Foundation of Educational Theory for Online Learning. In T. Anderson (Ed.), The theory and practice of online learning (2nd. Edition.). Canada, AU Press, Athabasca University, pp. 15-44. Aminifar, E. (2007). Technology and Improvement of Mathematics Education at the Tertiary Level. Unpublished Ph.D. Thesis, University of Wollongong, Australia. Arem, C. (2003). Conquering Math Anxiety, Second Edition. Brooks/Cole Thomson Learning, California. Artigue, M. (2001). What Can We Learn From Educational Research at the University Level? The Teaching and Learning of Mathematics at University Level: An ICMI Study. D. Holton, Kluwer Academic Publishers, Dordrecht, Netherlands: pp. 207-220. Arul, E., Baharun, N., Hussin, F. & Nasir, A. M. (2004). A study of mathematics anxiety among beginners. Shah Alam, University of Mara Technology (UiTM). Arul, E., Nasir, A., Hussin, F. & Baharun, N. (2005). A study of mathematics anxiety among students in UiTM Perak. Paper presented at the Conference on Science and Social Research (CSSR) 2005, in Kuala Terengganu, Malaysia, 10-12 June, 2005. Australian Flexible Learning Framework. (2008). Supporting e-learning opportunities: A guide to creating learning design for VET. Available in URL: http://toolboxes.flexiblelearning. net.au/documents/docs/Learning_design_tool_VET.doc. Baharun, N. & Porter, A. (2009). Removing the Angst From Statistics. In Mohd Tahir Ismail and Adli Mustafa (Eds.), 5th Asian Mathematical Conference Proceedings (Volume III), June 2009, pp 250 - 255. ISBN: 978-967-5417-55-9 (http://math.usm.my/).
337
Baharun, N. & Porter, A. (2010). The impact of video-based resources in teaching statistics: A comparative study of undergraduates to postgraduates. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute. Bain, J. D. (1999). Introduction. Higher Education Research & Development, 18(2), pp. 165172. Baloglu, M. (1999). A Comparison of Mathematics Anxiety and Statistics Anxiety in Relation to General Anxiety. Eric Document Reproduction Service, No. 436703. Baloglu, M. (2003). Individual differences in statistics anxiety among college students. Personality and Individual Differences, 34(5), pp. 855-865. Barr, G. & Scott, L. (2010). Spreadsheets and simulation - A new way forward for teaching statistics. In H. MacGillivray and B. Phillips (Eds.), Proceedings of the Seventh Australian Conference on Teaching Statistics (OZCOTS 2010), December, Fremantle, Australia. Barrington, F. & Brown, P. (2007). Articulation between secondary mathematics and tertiary education programs. National Symposium on Mathematics Education for 21st Century Engineers, RMIT. Bates, A. & Poole, G. (2003). Effective teaching with technology in higher education. San Francisco, Jossey-Bass.
Beard, C. & Wilson, J. P. (2006). Experiential learning: A handbook of best practice for educators and trainers, London, GBR: Kogan Page Ltd. Beckers, J. J., Rikers, R. M. J. P. & Schmidt, H. G. (2006). The influence of computer anxiety on experienced computer users while performing complex computer tasks. Computers in Human Behavior, 22(3), pp. 456-466. Bell, J. A. (1998). International Students have Statistics Anxiety too. Education, 118(4), pp. 634-636. Bennett, S., Agostinho, S., Lockyer, L., Harper, B. & Lukasiak, J. (2006). Supporting university teachers create pedagogically sound learning environments using learning designs and learning objects. IADIS International Journal on WWW/Internet, 4(1), pp. 16-26. 338
Bennett, S., Agostinho, S., Lockyer, L., Kosta, L., Jones, J., Koper, R. & Harper, B. (2007). Learning Designs: Bridging the gap between theory and practice. In ICT: Providing choices for learners and learning. Proceedings ascilite Singapore 2007 (pp. 51-60). Available at http://www.ascilite.org.au/conferences/singapore07/procs/bennet.pdf. Ben-Zvi, D. (2006). Scaffolding students' informal inference and argumentation. In A. Rossman and B. Chance (Eds.), Proceedings of the Seventh International Conference on Teaching Statistics. Voorburg, The Netherlands: International Statistical Institute. Ben-Zvi, D. (2000). Toward understanding the role of technological tools in statistical learning. Mathematical Thinking and Learning, 2(1), pp. 127-55. Bergtrom, G. (2009). On offering a blended cell biology course. Journal of the Research Centre for Educational Technology, 5(1), pp. 15-21. Bernstein, J., Reilly, L. & Cote-Bonanno, J. (1992). Barriers to Women Entering the Workforce: Math Anxiety. Montclair, NJ: Montclair State College, Life Skills Centre. Beynon, M. (2007). Computing technology for learning - in need of radical new conception. Educational Technology & Society, 10(1), pp. 94-1-6. Biggs, C. (2010). GenStat for Teaching. In H. MacGillivray and B. Phillips (Eds.), Proceedings of the Seventh Australian Conference on Teaching Statistics (OZCOTS 2010), December. Fremantle, Australia. Biggs, J. (2003). Teaching for quality learning at university. OUP: Buckingham. Biggs, J. (1988). Assessing student approaches to learning. Australian Psychologist, 23(2), pp. 197-206. Bloom, B. S. (1956). Taxonomy of educational objectives, Handbook 1: The Cognitive Domain. New York: David McKay Co Inc. Botturi, L. (2006). E2ML: A visual language for the design of instruction. Educational Technology, Research and Development, 54(3), pp.265-293. Boud, D. & Prosser, M. (2002). Appraising New Technologies for Learning: A framework for development. Educational Media International, 39(3), pp. 237-245.
339
Boud, D. & Prosser, M. (2001). Key principles for high quality student learning in Higher Education - from a learning perspective. Paper presented at a workshop held on April 27, 2001 for the AUTC funded project: Information Technology and Their Role in Flexible Learning. Sydney, Australia. Bradley, D. R. & Wygant, C. R. (1998). Male and female differences in anxiety about statistics are not reflected in performance. Psychological Reports, 82, pp. 245-246. Brinkmann, A. (2005). Knowledge Maps - Tools for Building Structure in Mathematics. International Journal for Mathematics Teaching and Learning, October 25th, ISSN 14730111. Available at http://www.cimt.plymouth.ac.uk/journal/brinkmann.pdf Broadbridge, P. & Henderson, S. (2008). Mathematics Education for 21st Century Engineering Students,
Australian
Learning
and
Teaching
Council
(AMSI).
Available
at
http://www.amsi.org.au/index.php/component/content/article/78-education-publications/ 327-mathematics-education-for-21st-century-engineering-students Bruner, J. (1973). Going beyond the information given. New York: Norton. Bullock, J. O. (1994). Literacy in the language of mathematics. The American Mathematical Monthly, 101, pp. 735-743. Burgos, D. & Griffiths, D. (2005). The UNFOLD Project: Understanding and using learning design. Heerlen: Open University of the Netherlands. Burns, M. (1998). Math: Facing an American Phobia, Math Solutions Publications. Marilyn Burns Education Associates, Sausalito, CA. Burrill, G. (1996). Graphing Calculators and Their Potential for Teaching and Learning Statistics. In Role of Technology. Proceedings of the International Association for Statistical Education (IASE) Round Table conference. Granada, Spain 1996. Caldwell, W. H., Al-Rubaee, F., Lipkin, L., Caldwell, D. F. & Campese, M. (2006). Developing a concept mapping approach to mathematics achievement in middle school. In A. J. Cañas and J. D. Novak (Eds.), Concept Maps: Theory, Methodology, Technology. Proceedings of the Second International Conference on Concept Mapping, San José, Costa Rica. Carliner, S. (1999). Overview of online learning. Amherst, MA: Human Resource Development Press. 340
Chance, B. (2002). Components of statistical thinking and implications for instruction and assessment. Journal of Statistics Education, 10(3). Available at http://www.amstat.org /publications/jse/v10n3/chance.html. Chance, B., Ben-Zvi, D., Garfield, J. & Medina, E. (2007). The Role of Technology in Improving Student Learning of Statistics. Technology Innovations in Statistics Education, 1(1), pp. 1-26. Chance, B. & Rossman, A. (2006). Using simulation to teach and learn statistics. In A. Rossman and B. Chance (Eds.), Proceedings of the Seventh International Conference on Teaching Statistics. Voorburg, The Netherlands: International Statistical Institute. Chandrananda, D. & Wild, C. (2010). A free stand-alone system for statistical data analysis written in R. In H. MacGillivray and B. Phillips (Eds.), Proceedings of the Seventh Australian Conference on Teaching Statistics (OZCOTS 2010), December. Fremantle, Australia. Chen, P. S. D., Guidry, K. R. & Lambert, A. D. (2009). Engaging online learners: A quantitative study of postsecondary student engagement in the online learning environment. Paper presented at the annual meeting of the American Educational Research Association. San Diego, CA. Chinn, S. (2005). The trouble with maths: A practical guide to helping learners with numerical difficulties. New York: RoutledgeFalmer. Ciaccio, A. D. (1998). Hypermedia and WWW for the teaching of statistics. Proceedings of the fifth International Conference on Teaching Statistics (ICOTS5), Singapore. Voorburg, The Netherlands: International Statistical Institute. Clark, D. R. (2004). Instructional System Design Concept Map. Retrieved on 25 January 2010, available at http://nwlink.com/~donclark/hrd/ahold/isd.html Coates, H. (2006). Student engagement in campus-based and online education. London and New York, Routledge: Taylor & Francis Group. Cobb, G. W. (1993). Reconsidering statistics education: A National Science Foundation Conference. Journal of Statistics Education, 1(1). Available at www.amstat.org/ publications/jse/v1n1/cobb.html.
341
Cobb, G. W. (1991). Teaching statistics: More data, less lecturing. AMSTAT News, No. 182. Cobb, G. W. & Moore, D. S. (1997). Mathematics, statistics, and teaching. The American Mathematical Monthly, 104(9), pp. 801-821. Cole, R. A. (2000). Issues in web-based pedagogy: A critical primer. Westport, CT: Greenwood Press. Collis, B. (2002). The net as a tool for blended learning. What are the ingredients for success? Paper presented at Netlearning 2002, November 2002. Ronneby, Sweden. Conole, G. & Fill, K. (2005). A learning design toolkit to create pedagogically effective learning activities. Journal of Interactive Media in Education, 8. Conole, G., Oliver, M., Falconer, I., Littlejohn, A. & Harvey, J. (2007). Designing for learning. In G. Conole and M. Oliver (Eds.), Contemporary perspectives in e-learning research: Themes, methods and impact on practice, pp. 101-120. Oxon: Routledge. Cooner, T. S. (2004). Preparing for ICT Enhanced Practice Learning Opportunities in 2010 - a Speculative View. Social Work Education, 23(6), pp. 731-744. Crabtree, B. F. & Miller, W. L. (1992). Doing qualitative research. Newbury Park, London: Sage Publications. Crawford, M. & Schmidt, K. J. (2004). Lessons learn from a K-12 Project. Proceedings of the 2004 American Society for Engineering Education Annual Conference and Exposition, American Society for Engineering Education. Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd. Edition). Thousand Oaks, California: SAGE Publications, Inc. Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd. Edition). Thousand Oaks, California: SAGE Publications, Inc. Creswell, J. W. & Plano Clark, V. L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: SAGE Publications Inc.
342
Croft, A. C., Harrison, M. C. & Robinson, C. L. (2009). Recruitment and retention of students an integrated and holistic vision of mathematics support. International Journal of Mathematical Education in Science and Technology, 40(1), pp. 109 - 125. Croft, T. & Grove, M. (2006). Mathematics support. MSOR Connections, 6(2), pp. 1-4. Cronbach, L. J. (1963). Course improvement through evaluation. Teachers College Record, 64, pp. 672-683. Cruise, R. J., Cash, R. W. & Bolton, D. L. (1985). Development and validation of an instrument to measure statistical anxiety. Paper presented at the annual meeting of the Statistical Education Section. Proceedings of the American Statistical Association. Cunningham, S., Tapsall, S., Ryan, Y., Stedman, L., Bagdon, K. & Flew, T. (1998). New media and borderless education: A review of the convergence between global media networks and higher education provision. Evaluations and Investigations Program 97/22. Canberra: Australian Government Publishing Service. Daley, C. E. & Onwuegbuzie, A. J. (1997). The role of multiple intelligences in statistics anxiety. Paper presented in the annual meeting of the Mid-South Educational Research Association. Memphis, TN. Daugherty, M. & Funke, B. (1998). University faculty and student perceptions of Web-based instruction. Journal of Distance Education, 11(1), pp. 21-39. Davies, P. (2006). Exploratory research. In V. Jupp (Ed.), The Sage Dictionary of Social Research Methods. London: Sage Publications Ltd. de Vaus, D. (2006). Experiment. In V. Jupp (Ed.), The Sage Dictionary of Social Research Methods. London: Sage Publications Ltd. Dekkers, A. & Porter, A. (2009). Video Resources and Interactivity, Tablet PCs and their Advantages. Paper presented at the 2009 Australasian Tablets in Education Conference (ATiEC) on December 3-4, 2009. Monash University, Australia.
delMas, R. (2002). Statistical literacy, reasoning, and learning: A commentary. Journal of Statistics Education, 10(3). Available at http://www.amstat.org/publications/jse/v10n3/ delmas_discussion.html.
343
delMas, R., Garfield, J., Ooms, A. & Chance, B. (2006). Assessing students' conceptual understanding after a first course in statistics. Paper presented at the Annual Meetings of The American Educational Research Association, San Francisco. CA: April 9, 2006. delMas, R., Garfield, J., Ooms, A. & Chance, B. (2007). Assessing students‟ conceptual understanding after a first course in statistics. Statistics Education Research Journal, 6(2), pp. 28-58. Retrieved from http://www.stat.auckland.ac.nz/serj. Deming, E. (1940). Discussion of Professor Hotelling's paper. The Annals of Mathematical Statistics, 11(4), pp. 470-471. Dewey, J. (1916). Democracy and education, New York: Macmillan. Downes, S. (2006). An introduction to connective knowledge. Retrieved 10 August 2010, available at http://www.downes.ca/post/33034. Duffy, T. & Cunningham, D. (1996). Constructivism: Implications for the design and delivery of instruction. In D. Jonassen (Ed.), Handbook of research for educational telecommunications and technology, pp. 170-198. New York: MacMillan. Dunham, P. H. & Dick, T. P. (1994). Connecting research to teaching: research on graphing calculators. The Mathematics teacher, 87(6), pp. 440-445. Falconer, I. & Littlejohn, A. (2006). Mod4L report: Representing learning designs. Retrieved on March 22, 2009 from http://mod4l.com/tiki-download_file.php?fileId=2. Fitzallen, N. (2005). Integrating ICT into Professional Practice: A Case Study of Four Mathematics Teachers. Proceedings of the Annual Conference Mathematics Education Research Group of Australasia (MERGA 28), Melbourne, Australia, pp. 353-360. Fitzallen, N. & Watson, J. (2010). Developing statistical reasoning facilitated by TinkerPlots. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute. Flick, U. (2006). Triangulation. In V. Jupp (Ed.), The Sage Dictionary of Social Research Methods. London: Sage Publications Ltd.
344
Fontana, D. (1981). Psychology for teachers, London: Macmillan/British Psychological Society. Forte, J. A. (1995). Teaching statistics without sadistics. Journal of Social Work Education, 31, pp. 204-218. Francis, G. (2010). Online learning materials: Are they put to different uses by online and on campus students? In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute. Franzosa, B., McGarry, S. & Hall, W. (2008). Using Sliders in Mathematics Instruction. University of Maine Center for Science and Mathematics, Education Research 2008 Summer Academy. Fuata'i, K. A. & McPhan, G. (2008). Concept mapping and moving forward as a community of learners. In A. J. Cañas, P. Reiska, M. Åhlberg & J. D. Novak (Eds.), Concept Mapping: Connecting Educators. Proceedings of the Third International Conference on Concept Mapping, Tallinn, Estonia & Helsinki, Finland. Furinghetti, F. (2000). The History of Mathematics as a Coupling Link between Secondary and University Teaching. International Journal of Mathematical Education in Science and Technology, 31(1), pp. 43-51. Gal, I. & Ograjenšek, I. (2010). Qualitative research in the service of understanding learners and users of statistics. International Statistical Review, 78(2), pp. 287-296. Garfield, J. (2002). The challenge of developing statistical reasoning. Journal of Statistics Education, 10(3). Available at http://www.amstat.org/publications/jse/v10n3/garfield.html. Garfield, J. & Ben-Zvi, D. (2007). How Students Learn Statistics Revisited: A Current Review of Research on Teaching and Learning Statistics. International Statistical Review, 75(3), pp. 372-396. Garrison, D. R. & Vaughan, N. D. (2008). Blended learning in higher education: Framework, principles, and guidelines. San Francisco, CA: Jossey-Bass. Glaser, B. G. & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. London: Sage. 345
Glasow, P. A. (2005). Fundamentals of survey research methodology. McLean, Virginia: The MITRE Corporation. Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australian Journal of Educational Technology, 21(1), pp. 82-101. Gordon, D. (2000). Survey research. In M. Davies (Ed.), The Blackwell encyclopedia of social work. United Kingdom: Wiley-Blackwell. Gould, R. (2010). Statistics and the modern student. International Statistical Review, 78(2), pp. 297-315. Gunn, C. (1997). Future directions for CAL evaluation. Association of Learning Technology Journal, 5(1), pp. 40-47. Hartman, J., Dziuban, C. & Moska, P. (2000). Faculty satisfaction in ALNs: A dependent or independent variable? Journal of Asynchronous Learning Networks, 4(3). Retrieved on June 24, 2004 from http://www.aln.org/publications/jaln/v4n3/v4n3_hartman.asp. Hembree, R. (1990). The nature, effects, and relief of math anxiety. Journal for Research in Mathematics Education, 21, pp. 33-46. Herrington, J., Reeves, T. & Oliver, R. (2005). Online learning as Information Delivery: Digital Myopia. Journal of Interactive Learning Research, 16(4), pp. 353-367. Hewson, C. (2006). Mixed methods research. In V. Jupp (Ed.), The Sage Dictionary of Social Research Methods. London: Sage Publications Ltd. Hicks, O. (2004). Composite report on projects funded through the Australian Universities Teaching Committee 2002-2003. Australian Universities Teaching Committee. Ho, K. F. (2010). Year 11 Advanced Mathematics: Hearing from Students who Buck the Trend. In L. Sparrow, B. Kissane, & C. Hurst (Eds.), Shaping the future of mathematics education. Proceedings of the 33rd annual conference of the Mathematics Education Research Group of Australasia, Fremantle: MERGA. Hong, E. & Karstensson, L. (2002). Antecedents of state test anxiety. Contemporary Educational Psychology, 27, pp. 348-367.
346
Honkimaki, S., Tynjala, P. & Valkonen, S. (2004). University students' study orientations, learning experiences and study success in innovative courses. Studies in Higher Education, 29(4), pp. 431-449. Horton, W. (2001). Leading E-learning. Alexandria, USA: American Society for Training and Development. Hoyles, C., Newman, K. & Noss, R. (2001). Changing Patterns of Transition from School to University Mathematics. International Journal of Mathematical Education in Science and Technology, 32(6), pp. 829-845. Hung, D., Looi, C. K. & Koh, T. S. (2004). Situated cognition and communities practice: Firstperson 'lived experiences' vs. third-person perspectives. Educational Technology & Society, 7(4), pp. 193-200. Hunt, D. N. (1996). Teaching statistics with Excel 5.0. Maths & Stats Newsletter, 7(2), pp. 1114. Icaza, G., Bravo, C., Guiñez, S. & Muñoz, J. (2006). Web site and concept maps to teach statistics. Proceedings of the Seventh International Conference on Teaching Statistics, Salvador, Bahia, Brazil (pp. 1-5). Voorburg, The Netherlands: International Statistical Institute. Isreal, M. & Hay, I. (2006). Research ethics for social scientists: Between ethical conduct and regulatory compliance. London: Sage. Jersky, B. (2010). A case study of knowledge of key statistical concepts before and after an Introductory Statistics class. In H. MacGillivray and B. Phillips (Eds.), Proceedings of the Seventh Australian Conference on Teaching Statistics (OZCOTS 2010), December, Fremantle, Australia. Jin, H. & Wong, K. Y. (2010). A Network Analysis of Concept Maps of Triangle Concepts. In L. Sparrow, B. Kissane, & C. Hurst (Eds.), Shaping the future of mathematics education. Proceedings of the 33rd annual conference of the Mathematics Education Research Group of Australasia, Fremantle, WA: MERGA. Johnson, B. & Christensen, L. (2010). Educational Research: Quantitative, Qualitative, and Mixed Approaches (Fourth Edition). United States of America: Sage Publications, Inc.
347
Johnson, M. & Kuennen, E. (2006). Basic Math Skills and Performance in an Introductory Statistics Course, University of Wisconsin - Oshkosh. Journal of Statistics Education, 14(2). Available at www.amstat.org/ publications/jse/v14n2/johnson.html. Jonassen, D. H., Howland, J., Moore, J. & Marra, R. M. (2003). Learning to solve problems with technology: A constructivist perspective. Upper Saddle River, New Jersey: Merrill Prentice Hall. Jonassen, D. H. & Reeves, T. C. (1996). Learning with technology: Using computers as cognitive tools. In D. H. Jonassen (Ed.), Handbook of research on educational communications and technology, pp. 693-719. New York: Macmillan. Jowallah, R. (2008). Using technology supported learning to develop active learning in higher education: A case study. US-China Education Review, 5(12), pp. 42-46. Joyce, J., Hassall, T., Montaño, J. L. A. & Anes, J. A. D. (2006). Communication apprehension and maths anxiety as barriers to communication and numeracy skills development in accounting and business education. Education and Training, 48(6), pp. 454-464. Kajander, A. & Lovric, M. (2005). Transition from secondary to tertiary mathematics McMaster University Experience. International Journal of Mathematical Education in Science and Technology, 36, pp. 149-160. Kalat, J. W. (2007). Introduction to psychology. Pacific Grove, CA: Wadsworth-Thompson Learning. Keengwe, J., Onchwari, G. & Onchwari, J. (2009). Technology and student learning: Toward a learner-centred teaching model. Association for the Advancement of Computing in Education Journal (AACE), 17(1), pp. 11-22. Keller, B. & Russell, C. (1997). Effects of the TI-92 on calculus students solving symbolic problems. International Journal of Computer Algebra in Mathematics Education, 4, pp. 77-97. Kember, D. (2009). Promoting student-centred forms of learning across an entire university. Higher Education, 58, pp. 1-13. Springer Science+Business Media B.V. 2008. Kember, D. & McNaught, C. (2007). Enhancing university teaching: Lessons from research into award winning teachers. Abingdon, Oxfordshire: Routledge. 348
Kerr, B. (2007). A challenge to connectivism. Transcript of Keynote Speech, Online Connectivism Conference, University of Manitoba. Available at http://ltc.umanitoba.ca /wiki/index.php?title=Kerr_Presentation. Khan, B. (1997). Web-based instruction: What is it and why is it? In B. H. Khan (Ed.), Webbased instruction, pp. 5-18. Englewood Cliffs, NJ: Educational Technology Publications. Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler. Knowlton, D. (2000). A theoretical framework for the online classroom: A defense and delineation of a student-centred pedagogy. In R. Weiss, D. Knowlton, & B. Speck (Eds), Principles of effective teaching in the online learning classroom, San Francisco: JosseyBass. Kolb, D. (1984). Experiential learning: experience as the source of learning and development. Englewood Cliffs, New Jersey: Prentice Hall. Kolb, D. (1976). The learning style inventory: Technical manual, Boston, MA: McBer. Konold, C. & Miller, C. D. (2005). TinkerPlots: Dynamic Data ExplorationTM (Version 1.0) [Computer
Software].
Emeryville,
CA:
Key
Curriculum
Press.
Available
at
http://www.keypress.com/x5715.xml. Kop, R. & Hill, A. (2008). Connectivism: Learning theory of the future or vestige of the past? The International Review of Research in Open and Distance Learning, 9(3). Available at: http://www.irrodl.org/index.php/irrodl/article/view/523. Date accessed: 17 Jan. 2010. Koper, R. & Tattersall, C. (2005). Learning Design: A Handbook on Modelling and Delivering Networked Education and Training. Springer-Verlag Berlin Heidelberg New York. Krohne, H. W. & Laux, L. (1982). Achievement, Stress, and Anxiety. The Series in Clinical and Community Psychology. United States of America, Hemisphere Publishing Corporation. Lalonde, R. N. & Gardner, R. C. (1993). Statistics as a second language? A model for predicting performance in psychology students. Canadian Journal of Behavioural Science, 25, pp. 108-125.
349
Lane, A. M., Dale, C. & Horrell, A. (2006). Differentiating work for statistics modules in sports degrees. Journal of Further and Higher Education, 30(3), pp. 295-302. Lane, D. M. & Peres, S. C. (2006). Interactive simulations in the teaching of statistics: promise and pitfalls. In A. Rossman and B. Chance (Eds.), Proceedings of the Seventh International Conference on Teaching Statistics. Voorburg, The Netherlands: International Statistical Institute. Lane, D. M. & Tang, Z. (2000). Effectiveness of simulation training on transfer of statistical concepts. Journal of Educational Computing Research, 22(4), pp. 383-396. Lawson, D. A., Halpin, M. & Croft, A. C. (2003). Good Practice in the Provision of Mathematics Support Centres Second Edition. LTSN MSOR Occasional Publications Series, 3/01. ISSN 1476 1378. Lazar, A. (1990). Statistics courses in social work education. Journal of Teaching in Social Work, 4, pp. 17-30. Lesser, L. M. & Pearl, D. K. (2008). Functional Fun in Statistics Teaching: Resources, Research and Recommendations. Journal of Statistics Education, 16(3). The Ohio State University, available at www.amstat.org/publications/jse/v16n3/lesser.html Littlejohn, A. H. (2002). Improving continuing professional development in the use of ICT. Journal of Computer Assisted Learning, 18, pp. 166-174. Lock, R. H. (1998). WWW resources for teaching statistics. Outline of talk given at the conference on Technology in Statistics Education, sponsored by the Boston Chapter of the American Statistical Association, Babson College, Babson Park, MA. Available at http://it.stlawu.edu/~rlock/tise98/onepage.html. Locke, S. D. & Gilbert, B. O. (1995). Method of psychology assessment, self-disclosure, and experiential differences: A study of computer, questionnaire, and interview assessment formats. Journal of Social Behaviour and Personality, 10, pp. 255-263. Lockyer, L. & Bennett, S. J. (2006). Understanding roles within technology supported teaching and learning: implications for students, staff and institutions. In J. O'Donoghue (Ed.), Technology Supported Learning and Teaching: A Staff Perspective, pp. 210-223. Hershey, USA: Information Science Publishing.
350
Lovett, M., Meyer, O. & Thille, C. (2010). In search of the "perfect" blend between an instructor and an online course for teaching introductory statistics. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute. Luk, H. S. (2005). The gap between secondary school and university mathematics. International Journal of Mathematical Education in Science and Technology, 36, pp. 161-174. MacGillivray, H. (2009). Learning support and students studying mathematics and statistics. International Journal of Mathematical Education in Science and Technology, 40(4), pp. 455-472. MacGillivray, H. (2008). Learning support in mathematics and statistics in Australian universities - a guide for the university sector, Australian Learning and Teaching Council. Maloney, E. A., Risko, E. F., Ansari, D. & Fugelsang, J. (2010). Mathematics anxiety affects counting but not subtilizing during visual enumeration. Cognition, 114, pp. 293-297. Mann, S. J. (2001). Alternative perspectives on the student experience: alienation and engagement. Studies in Higher Education, 26(1), pp. 7-19. Marton, F. & Saljo, R. (1976). On qualitative difference in learning, II - Outcome as a function of the learner's conception of the task. British Journal of Educational Psychology, 46, pp. 115-127. Masterman, L. (2006). The learning designs tools project: An evaluation of generic tools used in design for learning, learning technologies group. Oxford University Computing Services. Matheos, K., Daniel, K. & McCalla, G. I. (2005). Dimensions for blended learning technology: learners' perspectives. Journal of Learning Design, 1(1), pp. 56-76. Mathews, D. & Clark, J. (2003). Successful students' conceptions of mean, standard deviation and
the
central
limit
theorem.
Available
at
http://www1.hollins.edu/faculty/
clarkjm/papers.htm. May, R. (1977). The meaning of anxiety. New York; London: The Ronald Press Company.
351
McNabb, D. E. (2010). Research methods for political science: quantitative and qualitative approaches (2nd. Edition). United States of America: M.E. Sharpe, Inc. McVey, M. D. (2009). Using a blended approach to teach research methods: The impact of integrating web-based and in-class instruction. Journal of the Research Centre for Educational Technology, 5(1), pp. 49-56. Means, B. & Olson, K. (1997). Technology and education reform: Studies of education reform. California Diane Publishing Co. Meijer, J. (2001). Learning potential and anxious tendency: Test anxiety as a bias factor in educational testing. Anxiety, Stress & Coping, 14(3), pp. 337-362. Merisotis, J. P. & Phipps, R. A. (1999). What‟s the Difference? A Review of Contemporary Research on the Effectiveness of Distance Learning in Higher Education. Washington, D. C.: The Institute for Higher Education Policy, 31. Merrel, K. W. (2008). Helping students overcome depression and anxiety: A practical guide. New York: The Guilford Press. Mertens, D. M. (2005). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods (2nd. Edition). Thousand Oaks, California: Sage Publications, Inc. Mertens, D. M. (1998). Research methods in education and psychology: Integrating diversity with quantitative and qualitative approaches. Thousand Oaks, CA: Sage. Miller, H. & Bichsel, J. (2004). Anxiety, working memory, gender, and math performance. Personality and Individual Differences, 37(3), pp. 591-606. Miller, P. (1993). Theories of developmental psychology (3rd. Edition), New York: W. H. Freeman. Mills, J. D. (2004). Learning abstract statistics concepts using simulation. Educational Research Quarterly, 28(4), pp. 18-33. Moore, D. S. (1997). New pedagogy and new content: a case of statistics. International Statistical Review, 65(2), pp. 123-165.
352
Moore, D. S. (1993). The place of video in new styles of teaching statistics. The American Statistician, 47(3), pp. 172-176. Moore, D. S. & Cobb, G. W. (2000). Statistics and mathematics: Tension and cooperation. The American Mathematical Monthly, 107(7), pp. 615-630. Moore, D. S., Cobb, G. W., Garfield, J. & Meeker, W. Q. (1995). Statistics education fin de siecle. The American Statistician, 49(3), pp. 250-260. Morris, M. M. (2008). Evaluating university teaching and learning in an outcome-based model: Replanting bloom. Unpublished Ph.D. Thesis, University of Wollongong, Australia. Morris, M., Porter, A. & Griffiths, D. (2005). Getting down and dirty in statistics. Proceedings of the 2004 Workshop on Research Methods: Statistics and Finance, Wollongong, Australia, pp. 173-185. Muris, P. (2007). Normal and abnormal fear and anxiety in children and adolescents. Oxford: Elsevier. Murtonen, M. & Lehtinen, E. (2003). Difficulties Experienced by Education and Sociology Students in Quantitative Methods Courses. Studies in Higher Education, 28(2), pp. 171185. Newman, I. & Benz, C. R. (1998). Qualitative-quantitative research methodology: Exploring the interactive continuum. Carbondale and Edwardsville: Southern Illinois University Press. Newton, J. (2006). Action research. In V. Jupp (Ed.), The Sage Dictionary of Social Research Methods. London: Sage Publications Ltd. Oldknow, A. & Taylor, R. (2000). Teaching Mathematics with ICT. Continuum, London. O'Leary, Z. (2010). The essential guide to doing research. London: Sage Publications. O'Leary, Z. (2005). The essential guide to doing research. London: Sage Publications. Oliver, R. & Herrington, J. (2001). Teaching and learning online: A beginner's guide to elearning and e-teaching in higher education. Edith Cowan University: Western Australia.
353
Oliver, R., Herrington, A., Herrington, J. & Reeves, T. (2007). Representing Authentic Learning Designs Supporting the Development of Online Communities of Learners. Journal of Learning Design, 2(2), pp. 1-21. Oliver, R. & Littlejohn, A. (2006). Discovering and describing accessible and reusable practitioner-focused learning. Proceedings of Theme 1 of the JISC Online Conference: Innovating e-learning 2006, pp. 30-33. Oliver, R. (2006). Learning designs for ICT-based learning settings. ASCILITE Newsletter. Oliver, R. (1999). Exploring strategies for online teaching and learning. Distance Education, 20(2), pp. 240-254. Onwuegbuzie, A. J. (2004). Academic procrastination and statistics anxiety. Assessment and Evaluation in Higher Education, 29(1), pp. 3-19. Onwuegbuzie, A. J. (2003). Modelling statistics achievement among graduate students. Educational and Psychological Measurement, 63, pp. 1020-1038. Onwuegbuzie, A. J. (2000a). Statistics anxiety and the role of self-perceptions. Journal of Educational Research, 93, pp. 323-335. Onwuegbuzie, A. J. (2000b). Learning a foreign language in statistics classes: Modelling statistics achievement among graduate students. Paper presented at the annual meeting of the Georgia Educational Research Association, Morrow, GA. Onwuegbuzie, A. J. (1999). Statistics anxiety among African-American graduate students: an affective filter? Journal of Black Psychology, 25, pp. 189-209. Onwuegbuzie, A. J. (1998). Role of hope in predicting anxiety about statistics. Psychological Reports, 82, pp. 1315-1320. Onwuegbuzie, A. J. (1997). Writing a research proposal: The role of library anxiety, statistics anxiety, and composition anxiety. Library & Information Science Research, 19(1), pp. 533. Onwuegbuzie, A. J. & Daley, C. E. (1999). Perfectionism and statistics anxiety. Personality and Individual Differences, 26(6), pp. 1089-1102.
354
Onwuegbuzie, A. J., Daros, D. & Ryan, J. (1997). The components of statistics anxiety: A phenomenological study. Focus on Learning Problems in Mathematics, 19(4), pp. 11-35. Onwuegbuzie, A. J. & Leech, N. L. (2005). On Becoming a Pragmatic Researcher: The Importance of Combining Quantitative and Qualitative Research Methodologies. International Journal of Social Research Methodology, 8(5), pp. 375-387. Onwuegbuzie, A. J. & Leech, N. L. (2003). Assessment in Statistics Courses: more than a tool for evaluation. Assessment and Evaluation in Higher Education, 28(2), pp. 115-128. Onwuegbuzie, A. J. & Wilson, V. A. (2003). Statistics Anxiety: nature, etiology, antecedents, effects, and treatments--a comprehensive review of the literature. Teaching in Higher Education, 8(2), pp. 195-209. Pan, W. & Tang, M. (2005). Students' perceptions on factors of statistics anxiety and instructional strategies. Journal of Instructional Psychology, 32(3), pp. 205-210. Pan, W. & Tang, M. (2004). Examining the effectiveness of innovative instructional methods on reducing statistics anxiety for graduate students in the social sciences. Journal of Instructional Psychology, 31(2), pp. 149-159. Papanastasiou, E. C. & Zembylas, M. (2008). Anxiety in undergraduate research methods courses: its nature and implications. International Journal of Research and Method in Education, 31(2), pp. 155-167. Pavlov, I. P. (1927). Conditioned reflexes (G. V. Anrep, Trans.). London: Oxford University Press. Pearl, D. (2010). The CAUSEweb digital library: a diverse resource for statistics educators and education researchers. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute. Pekrun, R., Neil, J. S. & Paul, B. B. (2001). Test Anxiety and Academic Achievement. International Encyclopedia of the Social & Behavioral Sciences. Oxford, Pergamon: 15610-15614.
355
Pell, G. & Croft, T. (2008). Mathematics support--support for all? Teaching Mathematics Applications, 27(4), pp. 167-173. Perkin, G. & Croft, T. (2004). Mathematics Support Centres - the extent of current provision. MSOR Connections, 4(2), pp. 14-18. Petocz, P. (1998). Effective video-based resources for learning statistics. Proceedings of the Fifth International Conference on Teaching Statistics, Singapore (pp. 987-993). Voorburg, The Netherlands: International Statistical Institute. Petocz, P. & Reid, A. (2010). On becoming a statistician - A qualitative view. International Statistical Review, 78(2), pp. 271-286. Petty, N. W. (2010). Creating YouTube videos that engage students and enhance learning in statistics and Excel. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute. Phillips, J. M. (2005). Strategies for active learning in online continuing education. Journal of Continuing Education in Nursing, 36(2), pp. 77-83. Porter, A. (2011a). Stepping into Statistics: An Online Head Start Statistics Program. In E. Beh, L. Park and K. Russell (Eds.), Proceedings of the Fourth Annual ASEARC Conference, University of Western Sydney, Parramatta, Australia. Porter, A. (2011b). Application for Faculty Teaching and Learning Scholar 2011 - 2012, Faculty of Informatics, University of Wollongong, Australia. Porter, A. (2007). Transitions. Faculty of Informatics, University of Wollongong, Australia. Porter, A. (2002). Improving the teaching and learning of statistics through a current issue of social and political importance, Evaluation2002. School of Mathematics and Applied Statistics, University of Wollongong, Australia. Porter, A. (2001). Improving Statistical Education through the Experience of Reflective Practice. Unpublished Ph.D. Thesis, University of Wollongong, Australia.
356
Porter, A. & Baharun, N. (2011). Stepping into Statistics: Providing a Head Start for students. In L. Paditz and A. Rogerson (Eds.), Proceedings of the 11th International Conference, Turning Dreams into Reality: Transformations and Paradigm Shifts in Mathematics Education, Rhodes University, Grahamstown, South Africa, pp. 276 - 281. ISBN: 83919465-0-9. Porter, A. & Baharun, N. (2010). Developing a learning support system for students in mathematics rich disciplines. In R. Robert and B. Dave (Eds.), Going Mainstream. Monograph of the 5th Workshop on the Impact of Tablet PCs and Pen-based Technology on Education (WIPTE). Purdue University Press: Indiana, United States of America. ISBN: 978-1-55753-574-0. Porter, A., Baharun, N. & Algarni, A. (2009). Tablet PCs in the Grander Scheme of Supporting Learning. Paper presented at the 2009 Australasian Tablets in Education Conference (ATiEC) on December 3-4, 2009. Monash University, Australia. Porter, A. & Denny, S. (2011). ALTC Final Report: Building leadership capacity for the development and sharing of mathematics learning resources across discipline across universities. Australian Learning and Teaching Council Ltd: NSW, Australia. Pozo, S. & Stull, C. A. (2008). Requiring a Math Skills Unit: Results of a Randomized Experiment, Research on Teaching Innovations. The American Economic Review, 96(2), pp. 437-441. Available at http://www.jstor.org/stable/30034687. Preis, C. & Biggs, B. T. (2001). Can Instructors Help Learners Overcome Math Anxiety? ATEA Journal, 28(4), pp. 6-10. Prosser, M. & Trigwell, K. (1999). Understanding Teaching and Learning: The experience in Higher Education. Buckingham: Open University Press. Punch, K. F. (2005). Introduction to social research: Quantitative and qualitative approaches (2nd. Edition). London: Sage. Ramsden, P. (2003). Learning to teach in higher education (2nd. Edition). London: RoutledgeFalmer. Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
357
Richardson, F. C. & Woolfolk, R. L. (1980). Mathematics anxiety. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications. Hillsdale, New Jersey: Erlbaum. Ring, G. & Mathieux, G. (2002). The key components of quality learning. Paper presented at the ASTD Techknowledge 2002 Conference, Las Vegas. Roberts, G. (2002). SET for Success: The report of Sir Gareth Roberts' Review, HM Treasury, April
2002. Available at
http://www.employment-studies.co.uk/pubs/report.php?id=
1440robert Rodarte-Luna, B. & Sherry, A. (2008). Sex differences in the relation between statistics anxiety and cognitive/learning strategies. Contemporary Educational Psychology, 33(2), pp. 327344. Rogers, A. (1996). Teaching adults. Buckingham: Open University Press. Rumsey, D. J. (2002). Statistical literacy as a goal for introductory statistics courses. Journal of Statistics Education, 10(3). Available at http://www.amstat.org/publications/jse/v10n3/ rumsey2.html. Russell, T. (1999). No significant difference phenomenon (NSDP). North Carolina State University, Raleigh, NC, USA. Sansgiry, S. S. & Sail, K. (2006). Effect of students' perceptions of course load on test anxiety. American Journal of Pharmaceutical Education, 70(2), Article 26. Sarason, S. B. & Mandler, G. (1952). Some correlates of test anxiety. Journal of Consulting and Clinical Psychology, 47, pp. 810-817. Schacht, S. & Stewart, B. J. (1990). What's funny about statistics? A technique for reducing student anxiety. Teaching Sociology, 18, pp. 52-56. Schunk, D. H. (1991). Learning theories: An educational perspective. New York: Macmillan Publishing Company. Sharpe, R. & Benfield, G. (2005). The student experience of e-learning in higher education: A review of the literature. Brookes eJournal of Learning and Teaching, 1(3). Available at http://bejlt.brookes.ac.uk/vol1/volume1issue3/academic/sharpe_benfield.pdf.
358
Shuell, T. J. (1986). Cognitive Conceptions of Learning. Review of Educational Research, 56(4), pp. 411-436. Sieber, J. E. (1980). Defining test anxiety: Problems and approaches. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications. Hillsdale, New Jersey: Erlbaum. Siemens, G. (2006). Connectvism: Learning theory or pastime of the self-amused? Available at Elearnspace blog, http://elearnspace.org/Articles/connectivism_self-amused.htm. Siemens, G. (2004a). Connectivism: A learning theory for the digital age. Retrieved 20 September 2010, available at http://www.elearnspace.org/Articles/connectivism.htm Siemens, G. (2004b). e-Moderating: The key to teaching and learning online. London: Routledge Falmer. Singh, H. (2003). Building effective blended learning programs. Issue of Educational Technology, 43(6), pp. 51-54. Singh, H. & Reed, C. (2001). Achieving success with blending learning. American Society for Training and Development, state of the art industry reports 2001. Skinner, B. F. (1974). About behaviourism. New York: Knopf. Smith, A. (2004). Making Mathematics Count - The report of Professor Adrian Smith's Inquiry into Post-14 Mathematics Education. February 2004, The Stationery Office. Sparks, S. D. (2011). It's more than just disliking math, according to scholars. Education Week: Researchers Probe Causes of Math Anxiety, Chicago. Spielberger, C. D., Gorsuch, R. L. & Lushene, R. E. (1970). Manual for the state-trait anxiety inventory. Palo Alto, CA: Consulting Psychologists Press. Stake, R. E. (2000). Case studies. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd. Edition, pp. 435-454). Thousand Oaks, CA: Sage. Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage. Statistical Society of Australia Inc. (SSAI). (2005). Statistics at Australian Universities: An SSAI-sponsored review [Accessed on 2/4/11 from http://www.statsoc.org.au/objectlibrary /533?filename=ReviewofStatsFinalReport.pdf] 359
Stebbins, R. A. (2001). Exploratory research in social sciences. London: Sage.
Stenberg, L., Varua, M. E. & Yong, J. (2010). Mathematics aptitude, attitude, secondary schools and student success in quantitative methods for business subject in an Australian Catholic university experience. Paper presented at the 39th Australian Conference of Economists, Sydney, NSW, 27-29 September.
Stirling, D. (2010a). Dynamic diagrams for teaching design and analysis of experiments. In H. MacGillivray and B. Phillips (Eds.), Proceedings of the Seventh Australian Conference on Teaching Statistics (OZCOTS 2010), December, Fremantle, Australia.
Stirling, D. (2010b). Improving lectures with CAST applets. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute.
Stöber, J. & Pekrun, R. (2004). Advances in test anxiety research. Anxiety, Stress & Coping, 17(3), pp. 205-211.
Suanpang, P., Petocz, P. & Kalceff, W. (2004). Student Attitudes to Learning Business Statistics: Comparison of Online and Traditional Methods. Educational Technology & Society, 7(3), pp. 9-20.
Subhlok, J., Johnson, O., Subramaniam, V., Vilalta, R. & Yun, C. (2007). Experience with tablet PC video based hybrid coursework in computer science. Department of Computer Science, University of Houston, Houston, TX, USA. Technical report number UH-CS-0612.
Sullivan, R. & Calvert, G. (2004). A Head Start for Australia: An Early Years Framework. The Queensland and New South Wales Commissioners for Children and Young People, March 2004. ISBN: 0 7347 7115 0. Available at http://kids.nsw.gov.au/uploads/documents/headstart_full.pdf.
Sutarso, T. (1992). Some variables in relation to students' anxiety in learning statistics. Paper presented at the annual meeting of the Mid-South Educational Research Association, Knoxville, TN.
Swan, K. (2009). Introduction to the Special Issue on Blended Learning. Journal of the Research Centre for Educational Technology, 5(1, Spring 2009), pp. 1-3.
Symanzik, J. & Vukasinovic, N. (2006). Teaching an introductory statistics course with CyberStats, an electronic textbook. Journal of Statistics Education, 14(1). Available at http://www.amstat.org/publications/jse/v14n1/symanzik.html.
Tariq, V. N. (2008). Defining the problem: mathematical errors and misconceptions exhibited by first-year bioscience undergraduates. International Journal of Mathematical Education in Science and Technology, 39(7, 15 October 2008), pp. 889-904.
Tashakkori, A. & Teddlie, C. (1998). Mixed methodology. Thousand Oaks, CA: Sage.
Tempelaar, D. T., Rienties, B., van de Loeff, S. S. & Giesbers, B. (2010). Using blended learning environments in teaching introductory statistics to a strong diversity of students: The role of background factors. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia. Voorburg, The Netherlands: International Statistical Institute.
The Australian Mathematical Sciences Institute (AMSI). (2005). Submission to House of Representatives Standing Committee on Education and Vocational Training Inquiry into Teacher Education. [Available at URL: http://www.amsi.org.au/images/stories/downloads/pdfs/education/Submission_TeacherEducation.pdf]
Thomas, J., Muchatuta, M. & Wood, L. (2009). Mathematical sciences in Australia. International Journal of Mathematical Education in Science and Technology, 40(1), pp. 17-26.
Thompson, A. C. & Dekkers, A. (2009). Accreditation, video resources and interactivity, tablet PC's and their advantages. Proceedings of the 20th Annual Conference for the Australasian Association for Engineering Education (AAEE). ISBN 1 876346 59 0. Available at http://aaee.com.au/conferences/AAEE2009/PDF/AUTHOR/AE090089.PDF.
Thorndike, E. L. (1913). Educational psychology: Vol. 1. The original nature of man. New York: Teachers College Press.
Turner, C. F., Ku, L., Rogers, S. M., Lindberg, L. D., Pleck, J. H. & Stonenstein, F. L. (1998). Adolescent sexual behaviour, drug use, and violence: Increased reporting with computer survey technology. Science, 280, pp. 867-873.
Twigg, C. (2002). Improving quality and reducing costs. Available at http://www.kent.ac.uk/uelt/technology/webct/Twigg_report.pdf.
Uusimaki, L. & Nason, R. (2004). Causes underlying pre-service teachers' negative beliefs and anxieties about mathematics. In M. Johnsen Hoines & A. Berit Fuglestad (Eds.), Proceedings of the 28th Conference of the International Group for the Psychology of Mathematics Education – PME 28, Vol. 4, pp. 369-376, ISSN 0771-100X. Bergen University College, Bergen, Norway.
Verhagen, P. (2006). Connectivism: A new learning theory? Surf e-learning themasite, available at http://elearning.surf.nl/e-learning/english/3793.
Vermeire, L., Carbonez, A. & Darius, P. (2002). Just-in-time network-based statistical learning: Tools development and implementation. Proceedings of the Sixth International Conference on Teaching Statistics. Voorburg, The Netherlands: International Statistical Institute.
Vieira, M., Mendes, N., Duraes, J. & Madeira, H. (2008). The AMBER Data Repository. DSN 2008 Workshop on Resilience Assessment and Dependability Benchmarking, Anchorage, Alaska, USA, 25 June 2008. Accessed from http://www.amber-project.eu/documents/md_19_amber_adr_cr.pdf.
Walsh, J. J. & Ugumba-Agwunobi, G. (2002). Individual differences in statistics anxiety: the roles of perfectionism, procrastination and trait anxiety. Personality and Individual Differences, 33(2), pp. 239-251.
Ward, B. (2004). The best of both worlds: A hybrid statistics course. Journal of Statistics Education, 12(3). Available at www.amstat.org/publications/jse/v12n3/ward.html.
Waters, S. H. & Gibbons, A. S. (2004). Design languages, notation systems, and instructional technology: A case study. Educational Technology, Research and Development, 52(2), pp. 57-68.
Watson, J. B. (1926). What the nursery has to say about instincts. In C. Murchison (Ed.), Psychologies of 1925. Worcester, MA: Clark University Press.
Widmer, C. C. & Chavez, A. (1986). Helping students overcome statistics anxiety. Educational Horizons, 64(2), pp. 69-72.
Wild, C. (1994). Embracing the "wider view" of statistics. The American Statistician, 48, pp. 163-171.
Wilson, J. P. (2005). Human resource development: Learning and training for individuals and organisations. London: Kogan Page.
Wilson, V. A. & Onwuegbuzie, A. J. (2001). Increasing and decreasing anxiety: A study of doctoral students in educational research courses. Paper presented at the annual meeting of the Mid-South Educational Research Association, Little Rock, AR.
Wilson, V. A. (1997). Factors related to anxiety in the graduate statistics classroom. Paper presented at the annual meeting of the Mid-South Educational Research Association, Memphis, TN.
Wood, L. N. & Petocz, P. (1999). Video in mathematics learning at the secondary-tertiary interface. In W. Spunde, S. Cretchley and R. Hubbard (Eds.), The challenge of diversity: Proceedings of the Δ'99 Symposium on Undergraduate Mathematics, Rockhampton, Queensland (pp. 223-228). Available at http://www.sci.usq.edu.au/staff/spunde/delta99/Papers/wood_p.pdf.
Yang, H. J., Tsao, W. Y., Lay, Y. L., Chen, M. & Liou, Y. (2008). Prior language experience and language anxiety as predictors for non-native language commercial website use intention. International Journal of Human-Computer Studies, 66(9), pp. 678-687.
Zakaria, E. & Nordin, N. M. (2008). The effects of mathematics anxiety on matriculation students as related to motivation and achievement. Eurasia Journal of Mathematics, Science & Technology Education, 4(1), pp. 27-30.
Zaslavsky, C. (1994). Fear of Math: How to Get Over It and Get On With Your Life. New Brunswick, New Jersey: Rutgers University Press.
Zeidner, M. (1998). Test anxiety: The state of the art. New York: Plenum.
Zeidner, M. (1991). Statistics and mathematics anxiety in social science students: Some interesting parallels. British Journal of Educational Psychology, 61(3), pp. 319-328.
Zeidner, M., Paul, A. S. & Reinhard, P. (2007). Test Anxiety in Educational Contexts: Concepts, Findings, and Future Directions. Emotion in Education, pp. 165-184. Burlington: Academic Press.
Zimmer, J. C. & Fuller, D. K. (1996). Factors Affecting Undergraduate Performance in Statistics: A Review of Literature. Paper presented at the annual meeting of the Mid-South Educational Research Association, Tuscaloosa, AL.
Appendix 3.1: STAT131 March/Autumn 2010 subject outline
Appendix 3.1: STAT131 March/Autumn 2010 subject outline (continued)
Appendix 4.1a: Participant Information Sheet used for STAT131 in March/Autumn 2009
PARTICIPATION INFORMATION SHEET
Associate Professor Anne Porter (Team Leader)
School of Mathematics and Applied Statistics
(02) 4221 4058
[email protected]
Norhayati Baharun
School of Mathematics and Applied Statistics
(04) 0608 6509
[email protected]
Dear Student,

This is an invitation for you to participate in a study conducted by researchers at the University of Wollongong. The research is called The Impact of Learning Resources on Student Understanding and Learning Statistics. The data collected in this research will be used for the following purposes: to identify the impact of learning resources, in particular video resources, on student learning outcomes such as student understanding of topics, performance and anxiety; to identify issues, particularly technological issues associated with the use of videos, and to determine how best to resolve them; to determine how to improve and best develop the resources so that student learning and comfort in learning can be improved; and specifically to identify how to improve the learning environment for STAT131 students.

Students are encouraged to write in the e-learning forum with comments about the study of statistics and the use of learning resources, particularly topic areas where the resources could be useful. Your comments made in the forum will be used to identify the usefulness of resources and will be anonymous in any reporting. We may contact you with further questions if they arise; a follow-up email survey will be conducted at the end of session based on responses made in the forum. We will also analyse your results in relation to the e-learning resources you used and in relation to your subject marks.

Participation is not compulsory. You are free to decide if you want to be involved in this project or not, and you can stop participating at any time. If you decide to stop participating, any information you have given will not be used. There is no penalty for not participating. In addition to this information sheet, you will be asked to provide a return email or to complete an online permission slip giving your consent if you are prepared to participate in the study. The choice remains yours.

The final published results of the research will be aggregated measures and there will be no features that could lead to the identification of individual students. We believe these findings will be immediately useful to researchers and teachers at UOW who are seeking to develop mathematics and statistics learning support materials. Participation in this research is valuable as you can help us to determine how the use of learning resources can be improved for future students in this subject.

If you have any questions in relation to this research please do not hesitate to ask. Your questions and comments are more than welcome. If you have any concerns or complaints as to how the research is conducted you should contact the Secretary of the University of Wollongong Human Research Ethics Committee on (02) 4221 4457.

Thank you in anticipation of your willingness to participate.
Appendix 4.1b: Consent Form used for STAT131 in March/Autumn 2009
Consent Form (STAT131 Request)
Research Project: The Impact of Learning Resources on Student Understanding and Learning Statistics
Researchers: Associate Professor Anne Porter and Norhayati Baharun
Please return the following statements to indicate your agreement to participate:
I agree to give permission to use my comments made in the e-Learning forum. (Note these will be anonymous in any reporting)
I agree to be contacted for further questions if they arise.
I agree to give permission to analyse my results in relation to the e-Learning resources used and subject marks.
By signing below I am indicating my consent to participate in the research. I understand that my results will not identify me personally in any subsequent publication.
Signed: ………………………………………    Date: ……./……./ 2009
Name (please print): ………………………………………
Appendix 4.2: An example of Ethics Approval granted by the University of Wollongong
Appendix 5.1: A sample student survey used in August/Spring 2008
GHMD983 AUGUST 2008 SPRING SURVEY
The primary purpose of this survey is to evaluate the effectiveness of video resources for helping you and future students to understand and learn better in this subject. The feedback of ALL students is valuable in this process. You can let us know how to improve the resources and this subject so that it is beneficial for you and future students.
1. Have you completed this subject as a: a. Distance student b. On-campus student
2. Including lectures and labs and work outside class, on average how much time did you spend working on this subject each week? a. 0-2 hours per week b. 3-5 hours per week c. 6-8 hours per week d. 9-11 hours per week
3. How useful were the existing lectures in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
4. How useful were the existing lecture notes in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
5. How useful was the Edu-stream in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
6. How useful was the student forum in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
7. How useful was the student forum in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
8. How useful were the laboratory classes in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
9. How useful were the laboratory tasks in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
10. How useful was the tutor in laboratory classes in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
11. How useful were the worked solutions in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
12. How useful were the existing laboratory tests in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
13. How useful was the opportunity to re-sit laboratory tests in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
14. How useful was working with your peers in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
15. How useful were the video resources (where available) in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
16. How useful was the other work done in your own time in helping you understand in this subject? a. Not applicable or rarely used b. Little use c. Moderately useful d. Extremely useful
17. Were you able to play the videos in e-learning? a. No b. Yes c. Did not try to do this
18. Were you able to download and use the zipped files? a. No b. Yes c. Did not try to do this
19. What type of computer are you using? a. PC b. MAC c. Both PC and MAC d. Other
20. Did you have difficulties accessing videos from home? a. No b. Yes c. Did not try from home
21. Did you have difficulties accessing videos from work? a. No b. Yes c. Did not try from work
22. Did you have difficulties accessing videos from computer laboratories? a. No b. Yes
23. Please describe any problems you have experienced in trying to access and use the videos.
24. If you have experienced any problems with accessing or using the videos, have you found any solutions that might be useful for others and if so what are the solutions?
25. What software, if any, have you had to install to be able to play the videos?
26. On average how much time did you spend using the video resources each week? a. Did not use them b. 0-2 hours per week c. 3-5 hours per week d. 6-8 hours per week e. 9 hours or more per week
27. How confident are you that you can solve problems using JMP? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
28. Do you need more video resources to solve problems using JMP? a. No b. Yes
29. How confident are you that you can solve problems in writing meaningful paragraphs about variables? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
30. Do you need more video resources to solve problems in writing meaningful paragraphs about variables? a. No b. Yes
31. How confident are you that you can solve problems in producing and interpreting scatterplots and correlations? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
32. Do you need more video resources to solve problems in producing and interpreting scatterplots and correlations? a. No b. Yes
33. How confident are you that you can solve problems in producing and using regression output? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
34. Do you need more video resources to solve problems in producing and using regression output? a. No b. Yes
35. How confident are you that you can solve problems in determining probabilities from tables? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
36. Do you need more video resources to solve problems in determining probabilities from tables? a. No b. Yes
37. How confident are you that you can solve problems in producing and using Pearson's Chi-square? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
38. Do you need more video resources to solve problems in producing and using Pearson's Chi-square? a. No b. Yes
39. How confident are you that you can solve problems in fitting models to data (Goodness of Fit)? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
40. Do you need more video resources to solve problems in fitting models to data (Goodness of Fit)? a. No b. Yes
41. How confident are you that you can solve problems involving the Binomial model? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
42. Do you need more video resources to solve problems involving the Binomial model? a. No b. Yes
43. How confident are you that you can solve problems involving the Normal model? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
44. Do you need more video resources to solve problems involving the Normal model? a. No b. Yes
45. How confident are you that you can solve problems involving confidence intervals? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
46. Do you need more video resources to solve problems involving confidence intervals? a. No b. Yes
47. How confident are you that you can solve problems involving hypothesis tests – single mean and proportion? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
48. Do you need more video resources to solve problems involving hypothesis tests – single mean and proportion? a. No b. Yes
49. How confident are you that you can solve problems involving hypothesis tests – differences in means and proportions? a. Not at all b. Might have a little difficulty c. Moderately confident d. Could do this
50. Do you need more video resources to solve problems involving hypothesis tests – differences in means and proportions? a. No b. Yes
51. Are there any specific subtopics that you think would benefit from the development of video resources (JMP, writing, interpreting, maths skills, ...)?
52. How much more anxious or comfortable are you in taking this subject now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
53. How much more anxious or comfortable are you in tackling tasks which are difficult for you now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
54. How much more anxious or comfortable are you in taking a subject that involves mathematics now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
55. How much more anxious or comfortable are you in taking a subject that involves computing now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
56. How much more anxious or comfortable are you in taking a subject that involves statistics now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
57. How much more anxious or comfortable are you in taking tests in this subject now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
58. How much more anxious or comfortable are you in working with numbers now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
59. How much more anxious or comfortable are you in solving statistics questions using the computer now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
60. How much more anxious or comfortable are you in formulating and testing hypotheses now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
61. How much more anxious or comfortable are you in reading statistical studies now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
62. How much more anxious or comfortable are you in calculating probabilities now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
63. How much more anxious or comfortable are you in working with problems that involve mathematics now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
64. How much more anxious or comfortable are you in explaining statistical findings now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
65. How much more anxious or comfortable are you in using statistics in research work now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
66. How much more anxious or comfortable are you in undertaking your professional work now that you have nearly completed the subject? a. Much more anxious b. A little anxious c. Not more anxious or comfortable d. A little comfortable e. Much more comfortable
67. In what ways did your anxieties for this subject increase and decrease during the session?
68. How and in what ways were the videos useful (or not)?
69. How best can the video resources be improved to help your learning and understanding of statistics?
70. Do you have any other suggestions for the improvement of this subject?
71. Based on the work you have done and the feedback you have obtained, what grade do you expect to get for this subject? a. Fail (84%)
72. Select your origin: a. I am an International student b. I am a domestic / Australian student
73. Select your gender: a. Male b. Female
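For readers who want to see how the four-point usefulness items above (questions 3 to 16) could be summarised, the snippet below is a minimal sketch rather than the analysis code actually used in this thesis. The series name and the example responses are hypothetical stand-ins for however the survey data were coded.

```python
# A minimal sketch (not the thesis' actual analysis code) of tallying responses
# to one of the four-point "usefulness" items in the survey above. The name
# "q15_video_usefulness" and the example responses are hypothetical.
import pandas as pd

# Hypothetical coded responses for item 15 (usefulness of the video resources).
responses = pd.Series([
    "Extremely useful", "Moderately useful", "Little use",
    "Extremely useful", "Not applicable or rarely used", "Moderately useful",
], name="q15_video_usefulness")

# Percentage of respondents choosing each option, in the order used on the survey.
order = ["Not applicable or rarely used", "Little use",
         "Moderately useful", "Extremely useful"]
percentages = (responses.value_counts(normalize=True) * 100).reindex(order, fill_value=0)
print(percentages.round(1))

# Share of students who rated the videos at least moderately useful.
useful = percentages[["Moderately useful", "Extremely useful"]].sum()
print(f"Rated videos moderately or extremely useful: {useful:.1f}%")
```

The same tabulation applies to any of the four-point items; the yes/no items and the five-point anxiety items would be summarised in the same way with their own category orders.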
Appendix 6.1: CAOS post-test performance of individual statistical skills between the cohorts

Item | Measured Learning Outcome | % of students correct, March 2010 | % of students correct, March 2011 | Independent t-test p-value
1 | Ability to describe and interpret the overall distribution of a variable as displayed in a histogram, including referring to the context of the data. | 70.5 (N = 132) | 81.0 (N = 79) | 0.079 a
2 | Ability to recognize two different graphical representations of the same data (boxplot and histogram). | 63.4 (N = 131) | 58.2 (N = 79) | 0.462
3 | Ability to visualize and match a histogram to a description of a variable (neg. skewed distribution for scores on an easy quiz). | 77.3 (N = 132) | 74.7 (N = 79) | 0.670
4 | Ability to visualize and match a histogram to a description of a variable (bell-shaped distribution for wrist circumferences of newborn female infants). | 75.8 (N = 132) | 62.0 (N = 79) | 0.041 a
5 | Ability to visualize and match a histogram to a description of a variable (uniform distribution for the last digit of phone numbers sampled from a phone book). | 80.2 (N = 131) | 83.5 (N = 79) | 0.330
6 | Understanding that to properly describe the distribution of a quantitative variable, a graph like a histogram is needed that places the variable along the horizontal axis and frequency along the vertical axis. | 20.6 (N = 131) | 15.2 (N = 79) | 0.330
7 | Understanding of the purpose of randomization in an experiment. | 9.4 (N = 128) | 7.6 (N = 79) | 0.661
8 | Ability to determine which of two boxplots represents a larger standard deviation. | 23.8 (N = 130) | 48.1 (N = 79) | 0.045
9 | Understanding that boxplots do not provide estimates for percentages of data above or below values except for the quartiles. | 23.8 (N = 130) | 24.1 (N = 79) | 0.973
10 | Understanding of the interpretation of a median in the context of boxplots. | 40.9 (N = 132) | 41.8 (N = 79) | 0.902
11 | Ability to compare groups by considering where most of the data are, and focusing on distributions as single entities. | 77.7 (N = 130) | 84.6 (N = 78) | 0.226
12 | Ability to compare groups by comparing differences in averages. | 87.0 (N = 131) | 81.0 (N = 79) | 0.242
13 | Understanding that comparing two groups does not require equal sample sizes in each group, especially if both sets of data are large. | 66.7 (N = 132) | 65.8 (N = 132) | 0.901
14 | Ability to correctly estimate and compare standard deviations for different histograms. Understands lowest standard deviation would be for a graph with the least spread (typically) away from the center. | 58.0 (N = 131) | 54.4 (N = 79) | 0.614
15 | Ability to correctly estimate standard deviations for different histograms. Understands highest standard deviation would be for a graph with the most spread (typically) away from the center. | 47.7 (N = 132) | 48.1 (N = 79) | 0.958
16 | Understanding that statistics from small samples vary more than statistics from large samples. | 26.9 (N = 130) | 22.8 (N = 79) | 0.507
17 | Understanding of expected patterns in sampling variability. | 48.9 (N = 131) | 41.8 (N = 79) | 0.321
18 | Understanding of the meaning of variability in the context of repeated measurements and in a context where small variability is desired. | 75.6 (N = 131) | 68.4 (N = 79) | 0.257
19 | Understanding that low p-values are desirable in research studies. | 46.2 (N = 130) | 37.2 (N = 78) | 0.207
20 | Ability to match a scatterplot to a verbal description of a bivariate relationship. | 85.3 (N = 129) | 84.8 (N = 79) | 0.928
21 | Ability to correctly describe a bivariate relationship shown in a scatterplot when there is an outlier (influential point). | 75.6 (N = 131) | 78.5 (N = 79) | 0.631
22 | Understanding that correlation does not imply causation. | 54.2 (N = 131) | 54.4 (N = 79) | 0.974
23 | Understanding that no statistical significance does not guarantee that there is no effect. | 64.1 (N = 131) | 63.3 (N = 78) | 0.904
24 | Understanding that an experimental design with random assignment supports causal inference. | 56.2 (N = 130) | 68.4 (N = 79) | 0.949
25 | Ability to recognize a correct interpretation of a p-value. | 61.5 (N = 130) | 64.6 (N = 79) | 0.664
26 | Ability to recognize an incorrect interpretation of a p-value (probability that a treatment is not effective). | 59.5 (N = 131) | 53.2 (N = 79) | 0.368
27 | Ability to recognize an incorrect interpretation of a p-value (probability that a treatment is effective). | 57.3 (N = 131) | 60.8 (N = 79) | 0.619
28 | Ability to detect a misinterpretation of a confidence level (the percentage of sample data between confidence limits). | 65.4 (N = 130) | 63.3 (N = 79) | 0.760
29 | Ability to detect a misinterpretation of a confidence level (percentage of population data values between confidence limits). | 52.7 (N = 131) | 57.0 (N = 79) | 0.548
30 | Ability to detect a misinterpretation of a confidence level (percentage of all possible sample means between confidence limits). | 29.2 (N = 130) | 26.6 (N = 79) | 0.682
31 | Ability to correctly interpret a confidence interval. | 59.8 (N = 132) | 50.6 (N = 77) | 0.198
32 | Understanding of how sampling error is used to make an informal inference about a sample mean. | 19.2 (N = 130) | 13.9 (N = 79) | 0.327
33 | Understanding that a distribution with the median larger than the mean is most likely skewed to the left. | 49.2 (N = 132) | 43.6 (N = 78) | 0.430
34 | Understanding of the law of large numbers for a large sample by selecting an appropriate sample from a population given the sample size. | 71.0 (N = 131) | 63.3 (N = 79) | 0.248
35 | Understanding of how to select an appropriate sampling distribution for a particular population and sample size. | 50.4 (N = 131) | 42.9 (N = 77) | 0.296
36 | Understanding of how to calculate appropriate ratios to find conditional probabilities using a table of data. | 54.6 (N = 130) | 58.2 (N = 79) | 0.612
37 | Understanding of how to simulate data to find the probability of an observed value. | 24.4 (N = 131) | 21.5 (N = 79) | 0.631
38 | Understanding of the factors that allow a sample of data to be generalized to the population. | 43.9 (N = 132) | 38.0 (N = 79) | 0.397
39 | Understanding of when it is not wise to extrapolate using a regression model. | 33.6 (N = 131) | 29.1 (N = 79) | 0.503
40 | Understanding of the logic of a significance test when the null hypothesis is rejected. | 51.1 (N = 131) | 57.7 (N = 78) | 0.361

a Unequal variances assumed for both groups.
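The p-values in the table above are from independent-samples t-tests comparing the proportion of correct responses in the two cohorts item by item, with unequal variances assumed for the items marked "a". The sketch below is not the analysis actually used in the thesis; it simulates 0/1 item scores at roughly the item 1 rates simply to show how such a comparison could be computed, with equal_var=False selecting Welch's unequal-variance test.

```python
# A minimal sketch, not the thesis' actual analysis, of a per-item cohort comparison.
# Each student's answer to a CAOS item is scored 0/1 (incorrect/correct); the arrays
# below are simulated stand-ins for the March 2010 and March 2011 scores on item 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
march_2010 = rng.binomial(1, 0.705, size=132)  # roughly 70.5% correct, N = 132
march_2011 = rng.binomial(1, 0.810, size=79)   # roughly 81.0% correct, N = 79

# Percentage correct in each cohort.
print(f"March 2010: {100 * march_2010.mean():.1f}% correct (N = {march_2010.size})")
print(f"March 2011: {100 * march_2011.mean():.1f}% correct (N = {march_2011.size})")

# equal_var=False gives Welch's t-test (unequal variances assumed), matching the
# footnote attached to the items marked "a" in the table above.
t_stat, p_value = stats.ttest_ind(march_2010, march_2011, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```

For the unmarked items, the corresponding pooled-variance test (equal_var=True) would apply.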
Appendix 7.1: MATH131 March/Autumn 2010 subject outline
Appendix 7.1: MATH131 March/Autumn 2010 subject outline (continued)
Appendix 7.1: MATH131 March/Autumn 2010 subject outline (continued)
Appendix 7.2: MATH131 March/Autumn 2010 content timeline
Note: Q = Quiz, A = Assignment, S = Supply, D = Due, M = Mid-session exam
Appendix 7.3: Example of MATH131 assignment
Appendix 7.3: Example of MATH131 assignment (continued)
Appendix 8.1: The e-learning showcase handout
Appendix 8.1: The e-learning showcase handout (continued)