A Learning System Engineering Approach to Developing Online Courses

Matt Bower
Postgraduate Professional Development Programs
Macquarie University
NSW, Australia, 2109
[email protected]

Abstract

The purpose of this paper is twofold: first, to present the systematic "Learning System Engineering" approach adopted by Macquarie University to construct a new online Graduate Diploma of Information Technology; second, to present the pedagogical and technological research and development that resulted from implementing the Learning System Engineering cycle. Emphasis is placed upon the application of technologies to learning. It is intended that sharing the experience gained, both in the development of the Learning System Engineering approach and in the decisions made as a result of its implementation, will be of value to other institutions embarking on similar projects.

Keywords: Curriculum Engineering, Online Learning, Online Technology, Learning Design.

Copyright © 2006, Australian Computer Society, Inc. This paper appeared at the Eighth Australasian Computing Education Conference (ACE2006), Hobart, Tasmania, Australia, January 2006. Conferences in Research and Practice in Information Technology, Vol. 52. Denise Tolhurst and Samuel Mann, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

1 Background

At the beginning of 2004 the Postgraduate Professional Development Programs Department at Macquarie University was presented with the rare opportunity to develop a new Graduate Diploma in Information Technology course, built from the ground up. The mandate was to create a course that employed best practices pedagogically and technologically, and that could then be used as a flagship for other course designs across the rest of the Division and the University. One of the major outcomes of the project, apart from the development of a substantially different approach to teaching Computer Science, was a systematic approach to developing online courses. This paper describes the "Learning System Engineering" cycle for developing computer science courses, and discusses the efficacy of the model by relating its implementation in the context described above.

2 The Learning System Engineering Cycle

In deciding how to embark on the development of the Graduate Diploma in Information Technology (GDIT), various inter-disciplinary Instructional Design models were reviewed. For example, Don Clark's (Clark, 1995) model and the ADDIE (College Station Texas, 2001) model were compared to the specific needs of the project at hand. Of practical concern was to develop an approach that:
a) was broad enough to consider the important features for the design of the course
b) engendered innovation at all phases to meet the imposed mandate for the course
c) was streamlined enough to be efficient in practice.
The Learning System Engineering approach that resulted is presented in Figure 1 below. Note that this is a simplified pictorial representation of the actual process: several of the phases are interconnected, and the order of phase completion is neither necessarily sequential nor single-directional (a point that is elaborated upon later in this paper). Rather, the diagram represents minimum dependencies and the general flow of phases. A crucial aspect of the Learning System Engineering model is that each phase draws upon specific inputs (either from the environment or from previous phases of the model) and has specific corresponding outputs. This encourages a formal approach to each of the phases and allows critical reflection to be drawn upon in future iterations. Application of this process to the GDIT has led to several new educational approaches and technological tools being deployed at Macquarie University. The approach and corresponding results of each phase of the cycle are now described.
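To make the phase/input/output structure described above concrete, the sketch below models the cycle of Figure 1 as a simple dependency graph. It is illustrative only: the phase outputs are taken from the descriptions later in this paper, but the "minimum dependencies" shown are an assumed linear reading of Figure 1 rather than a definitive statement of the actual (iterative, overlapping) process.

# Illustrative sketch only: dependencies assume the simple linear flow of Figure 1;
# in practice the phases overlapped and fed back into one another.
phases = {
    "Pedagogical Research": {
        "depends_on": [],
        "outputs": ["Theoretical Underpinnings for the GDIT", "Computer Science Education Literature Review"]},
    "User/Context Analysis": {
        "depends_on": ["Pedagogical Research"],
        "outputs": ["Computing Students at Macquarie University report"]},
    "Investigation of E-Learning Landscape": {
        "depends_on": ["User/Context Analysis"],
        "outputs": ["Short-list of candidate technologies"]},
    "Conceptual Design": {
        "depends_on": ["Investigation of E-Learning Landscape"],
        "outputs": ["Design Concepts for the GDIT", "HTML prototype", "System Requirements"]},
    "Technology Testing and Selection": {
        "depends_on": ["Conceptual Design"],
        "outputs": ["Selected technology suite"]},
    "Curriculum Engineering": {
        "depends_on": ["Technology Testing and Selection"],
        "outputs": ["Syllabuses", "Curriculum documents", "Weekly courseware items"]},
    "Implementation": {
        "depends_on": ["Curriculum Engineering"],
        "outputs": ["Integrated unit website"]},
}

def phase_order(graph: dict) -> list:
    """Return a phase ordering that satisfies the minimum dependencies (naive topological sort)."""
    done, order = set(), []
    while len(order) < len(graph):
        for name, info in graph.items():
            if name not in done and all(dep in done for dep in info["depends_on"]):
                done.add(name)
                order.append(name)
    return order

print(phase_order(phases))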

2.1 Pedagogical Research Phase

As the first phase of the cycle, pedagogical literature was reviewed in an attempt to ensure that best practices in teaching and learning drove all other decisions and processes. Two types of literature were reviewed:
1. General educational literature
2. Domain-specific (computing) educational literature
For the GDIT, reviewing general educational literature represented a powerful learning process for the academics involved, not only because it broadened their awareness of teaching and learning approaches and theories but also because it required them to adapt these theories to teaching computer science – a valuable task in reformulation. The general educational literature review played an important role in all six other phases of the cycle.

Figure 1: The Learning System Engineering Cycle utilised for the GDIT (phases: 1. Pedagogical Research; 2. User/Context Analysis; 3. Investigation of E-Learning Landscape; 4. Conceptual Design; 5. Technology Testing and Selection; 6. Curriculum Engineering; 7. Implementation)

Analysis of Computer Science educational literature was equally important, and was drawn upon increasingly in the later phases of the development cycle (particularly the Curriculum Engineering Phase). Having an understanding of relevant models and theories from the outset allowed developers to take into account approaches that were deemed successful when embarking on the pedagogical design of the courseware. There were two outputs of the Pedagogical Research Phase:
1. the "Theoretical Underpinnings for the GDIT" report, and
2. the "Computer Science Education Literature Review" document.
The "Theoretical Underpinnings for the GDIT" document identified 11 key principles drawn from both classic and contemporary educational research that were to guide the design of the course. These principles were:
a. A Constructivist Basis (Piaget, 1970; Vygotsky, 1978) – for building semantic understanding
b. Integration of Information Processing Theories (Craik & Lockhart, 1972; Clark & Paivio, 1991; Bransford et al., 1982; Lewandowsky & Murdock, 1989) – to ensure that courseware was developed based on an appreciation of the learner's cognitive capacities
c. Expert Modelling (Collins et al., 1991) – based on the principle that expertise is most effectively imparted by observing the practices of skilled professionals
d. An Active Learning approach (Carroll, 1998; Shuell, 1986) – promoting longevity and depth of learning
e. Interactivity-centric design (Fisher, 1996; Bandura, 1977) – valuing the benefits of peer exchange and collaborative approaches to education
f. Context-sensitive instructional design (Carroll, 1998; Gagne, 1985) – ensuring flexibility in teaching methodologies depending on the needs of the learner
g. A relevant curriculum (Knowles, 1984; Brown et al., 1989) – catering to the commercial needs of our graduate clients

h. Careful matching of media to purpose (Cobb, 1997; Salomon, 1994) – promoting the most cognitively efficient learning possible to accelerate students through the knowledge space
i. Deliberate deployment of scaffolding (Vygotsky, 1978; Landa, 1976) – acknowledging that the success of an educational process depends on the way in which concept and skill development is supported
j. Promoting "higher order and critical thinking" (Spiro et al., 1988) – advancing students beyond novice learning towards greater cognitive flexibility and expert thinking
k. Authentic Assessment and Feedback (Hager & Butler, 1996; Knowles, 1984) – acknowledging that the assessment tasks students are required to perform inevitably drive curriculum design and thus student enthusiasm for learning.
It was necessary for developers to continually remind themselves to return to these eleven principles whenever decisions regarding technology or curriculum needed to be made. This provided a mechanism for quality assurance and consistency of course design. The "Computer Science Education Literature Review", on the other hand, analysed domain-specific dimensions of teaching and learning computing, such as:
a. Theories of thinking – such as Aharoni's (2000) abstraction levels model, and McGill and Volet's (1997) components of knowledge framework
b. Factors predictive of success in learning computing – such as previous programming experience (Hagan & Markham, 2000), and precursory courses in problem solving and reasoning (Allan & Kolesar, 1996)
c. The difference between novices and experts – described in work by Robins et al. (2003), amongst others
d. Frameworks for analysing student difficulties – such as Bonar and Soloway's (1989) programming difficulties taxonomy, and Rath and Brown's (1995) "HCI Conception" model
e. Approaches to instructional design – such as behaviour modelling (Chou, 2001), and peer review activities (Zeller, 2000)
f. Examples of putting theory into practice – such as Kay et al.'s (2000) implementation of Problem Based Learning, and Van Gorp and Grissom's (2001) approach to constructivism.
The "Computer Science Education Literature Review" document proved particularly useful in the Curriculum Engineering Phase because it offered developers a research basis upon which to make decisions about which educational approaches to adopt. In combination these two reports provided a sound set of pedagogical principles upon which the Graduate Diploma could be built.

2.2 User/Context Analysis Phase

Conducting user analysis allows materials and processes to be more accurately matched to the needs and preferences of learners, as well as providing an opportunity to identify strengths and weaknesses in current approaches to learning design and implementation. Context analysis evaluates the current state of educational practice in the immediate environment. Understanding the context is a pragmatic and crucial aspect of analysis at this phase because it informs developers of the tools, systems and practices already in existence, which in turn allows them to consider the cost-benefit breakdown of implementing new systems. Primary issues for consideration when conducting user analysis are:
1) the objectives and focus points of the user analysis
2) who will be targeted for questioning in the absence of the actual candidates who will be completing the course
3) whether a quantitative or qualitative approach to data collection is preferable (or a mix)
4) the means by which to collect data (interview, pen and paper, online form).
For the GDIT the particular areas of focus included students' perceptions of collaboration and of online learning. Based on the results of the Pedagogical Research Phase, the benefits of collaborative approaches to learning had become self-evident, but at the same time academics were concerned about incorporating such approaches into the curriculum because of the perception that most computing students preferred to work in isolation. Also, the extent to which students were willing to work online needed to be ascertained in order for the department to justify the mode of delivery being considered. In order to collect information from an audience that would most appropriately match the graduate yet "new to computing" students who were to be the clientele for the GDIT, second year undergraduate students were surveyed. They were considered to be more independent learners than first year students (more closely matching the graduate students), but also more recently familiar with learning new programming concepts than final year or honours level students. It was acknowledged that this was not by any means a direct match of survey participants to potential candidates, and that any data gathered was to be informative rather than directly indicative. Once again, actually undertaking the process of formulating questions and interpreting data proved as valuable for academics as the information collected. In order to gather data from a large sample in an efficient manner, an online form was created that contained a range of Likert scale and open ended response items. The instrument collected information regarding students' learning preferences, attitudes towards collaboration and online learning, and perspectives on the current approaches to learning computing that were being implemented. Over one hundred students responded to the 42-item questionnaire, with data summarised in a "Computing Students at Macquarie University" report. Some significant results include:
1. students wish to collaborate more with others
2. students consider themselves to be better communicators than their peers
3. there were statistical correlations between students' confidence, enjoyment and self-perception of their communication abilities when working with others (a sketch of how such correlations can be computed is given at the end of this section)
4. students identify the importance of being able to work well with others and the need to be taught these skills
5. many students spend a disproportionate amount of time on their computing studies, particularly programming and debugging
6. students were divided in media preference, indicating a need for flexible delivery.
These and other findings allowed the team to design an approach to the Graduate Diploma of IT based upon factual evidence outlining the needs of contemporary computing students rather than the speculative ideas of lecturers. The survey results had ancillary benefits for the Department of Computing (which services the undergraduate cohort) and were also deemed valuable by cross-campus learning organisations. To review the current state of education in the immediate environment (i.e., the context), an informal approach was adopted in preference to a formal review, because the cost of conducting and documenting a formal review was deemed greater than the perceived benefits. Developers already had a reasonable appreciation of the tools and processes being deployed within the department, and further round-table discussions and liaisons with key personnel from around the university allowed a holistic and shared understanding to be developed. However, this is not to say that the process of documenting the results of context analysis is not valuable – it has been earmarked for future iterations of the development cycle. The formal "Computing Students at Macquarie University" report and the understanding of the environment that resulted from the informal review process provided developers with a firm foundation upon which to commence investigation of the E-Learning Landscape.
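The statistical correlations referred to in the survey results above can be computed with standard rank-correlation techniques. The sketch below is illustrative only: the actual GDIT survey items and data are not reproduced in this paper, so the column names and values shown are hypothetical.

# Illustrative sketch: hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree).
# The real instrument and data behind the "Computing Students at Macquarie University"
# report are not published here.
import pandas as pd
from scipy.stats import spearmanr

responses = pd.DataFrame({
    "confidence_working_with_others": [4, 3, 5, 2, 4, 5, 3, 4],
    "enjoyment_working_with_others":  [4, 2, 5, 2, 3, 5, 3, 4],
    "self_rated_communication":       [5, 3, 4, 2, 4, 5, 2, 3],
})

# Spearman rank correlation is a common choice for ordinal Likert-scale data.
cols = list(responses.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        rho, p = spearmanr(responses[a], responses[b])
        print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.3f}")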

2.3 Investigation of the E-Learning Landscape

There is an ever-expanding array of technologies available for facilitating learning: Learning Management Systems, Content Management Systems, synchronous collaborative tools such as virtual classroom environments and instant messaging, and asynchronous tools such as discussion boards and list servers are but a few. Often the boundaries defining these tools are blurred, with many systems offering more than one of these facilities. The fact that the number of tools and their respective features is ever changing makes the task of selecting a suite of technologies for courses a difficult one. In order to contain the exercise of technology investigation and thus save time it was important to:
1. liaise with experts already in the field to determine promising technologies
2. find resources that have already compared and contrasted e-learning, content management and collaborative technologies
3. prioritise the importance of the generic features of the technologies to the department.
For the GDIT, corporate consultants were trialled, but the cost of using this option extensively was deemed prohibitive. Advice from experts in the field of education, and of computer science education specifically, proved more effective, both in terms of cost and in the subtle insight it provided into the ways in which educational technologies can and cannot be effectively applied in computer science education. There are many resources that compare and contrast technologies, and the internet provides a wealth of free resources (SCIL and Partners, 2005; e-Learning Centre, 2005). There are also commercial enterprises, such as the Brandon Hall Research Library (Hall, 2005), that hold extensive educational product comparisons. While these publications are reasonably expensive, they are far more cost-effective than conducting independent research from scratch or choosing the wrong technology based on insufficient research. Defining the important generic features of technologies allows more efficient elimination of unsuitable tools. For the GDIT the following were identified:
1. Utility (range of features)
2. Longevity (avoiding 'shooting-star' products/enterprises)
3. Portability/standards compliance (for import/export, increasing system flexibility and reducing risk)
4. Perceived reliability and maintainability (products not meeting these requirements could be discounted)
5. Ubiquity of reliant technologies (Java, Flash, QuickTime versus reliance on less popular technologies)
6. Cross-platform compatibility (Microsoft, Macintosh, Linux, if possible).
Due to the large number of tools and systems being evaluated it was important that any records produced as outputs of this phase were streamlined. If products were deemed satisfactory on the above indicators, they were short-listed for more formal comparison in the later Technology Testing and Selection Phase. Brief descriptions and impressions of each product were recorded (on all of the above indicators as well as cost, contact details and reference sites), but no formal comparison of features was documented. At this investigative phase it was deemed more important to note the types of capacities that different products possess so that the next phase of conceptual design could take them into account.

2.4 Conceptual Design Phase

Once pedagogical approaches have been determined, an understanding of the learner and environmental context formed, and an awareness of the e-learning landscape developed, conceptual design can take place from an informed perspective. This "putting it all together" phase is arguably the most crucial and amorphous phase, with success depending on the quality of previous phases and on the skill, creativity and cohesion of developers. It is difficult and unwise to propose specific rules for the process of design; however, the following over-arching guidelines can be suggested:
1. Choose a form and level of specification that is appropriate to your context
2. Continually refer back to the results of previous phases
3. Seek ongoing feedback from colleagues and experts
4. Be aware of the fluid, bi-directional relationship between this phase of development and successive phases.
The first guideline is a reminder to select a method of representing the conceptual design that allows the features of the learning system to be appropriately represented in the time frame allotted. Is a description of components sufficient? Will an HTML prototype take too long? Are the system requirements for your design to be formally documented, or will the collective vision of the team based on design discussions be enough? Secondly, repeated reference to the pedagogical underpinnings, the learner and context analysis and the e-learning landscape improves the likelihood that the system will meet its educational objectives and best cater to the needs of learners, given the technologies available. This re-referral to research can be particularly useful at times when developers are undecided about the direction a particular aspect of the design should take. Thirdly, as the design develops it is important to check that the system is feasible from a technical point of view within the immediate environmental context, and desirable from an educational point of view. If feedback is only requested at the end of this phase then a great deal of time may be wasted specifying a system that has to be redesigned for technical and/or educational reasons. Finally, it is important to provide a level of specification that is detailed enough to give clear direction on as many aspects of the learning system as possible, without being so detailed that it is not flexible enough to evolve and change during later phases. A crucial consideration at the Conceptual Design Phase is that redesign will almost inevitably occur after the Technology Testing and Selection Phase. It is easy to lose a great deal of time designing a system to a level of detail that, on reflection, was overly specific because pragmatic issues at later phases meant the design had to change. For the GDIT it was deemed important to construct a thorough representation of the system. The outputs of this phase were:
- a "Design Concepts for the GDIT" report document
- an initial prototype of the system (GUI design rendered in HTML), and
- a "System Requirements" document for the technological platform.
The "Design Concepts for the GDIT" document applied the pedagogical principles identified in Phase 1 of the cycle and the user requirements identified in Phase 2 to the facilities available from the technological tools identified in Phase 3. The prototype forced developers to consolidate their design ideas and form a shared vision for the course. The System Requirements document identified the technological requirements for online lectures, tutorials and practical sessions, requirements for student assessments and submissions, general collaboration requirements and learning management needs. Even in hindsight it is difficult to evaluate whether the outputs of this phase were overly prescriptive. On the one hand, the final product did not exactly match the prototype or meet the system requirements. On the other hand, if the outputs at this phase had been less specific then development at later phases may have lacked direction and momentum. Nevertheless, it is fair to say that all three output documents provided a useful foundation for the Technology Testing and Selection Phase and the Curriculum Engineering Phase.

2.5 Technology Testing and Selection Phase

Once an initial system design is proposed, the process of testing and selecting technologies can be performed. The short-listed technologies from the Investigation of the E-Learning Landscape Phase are compared to the technological needs that were identified in the Conceptual Design Phase. Note that the other research phases – the User/Context Analysis Phase and the Pedagogical Research Phase – also feed into technology testing and selection. Key considerations at this stage are:
1. whether formal or informal evaluation mechanisms will be utilised
2. the type and extent of testing to be performed
3. how features will be measured and contrasted
4. how to cope with non-equivalence of technologies.
Whether to conduct formal or informal evaluation depends on the time available, the significance of the technology to the system, and the level of accountability required. Similarly, the type and extent of testing required needs to be weighed against the time available and the significance of the technology to the overall system. The level of testing required at this phase is offset by the extent of research into each product during the Investigation of the E-Learning Landscape Phase. For formal evaluations, designing a rubric that allows a score to be allocated for each of the features of the technology, with a relative weight assigned based on the importance of each feature, is an obvious quantitative approach (a minimal illustration is sketched at the end of this section). The resulting document provides a description of the evaluation process that is useful for accountability and archival purposes. However, in cases where scores between two competing technologies are close, it would be unwise to simply choose the product with the higher cumulative weighted score – it is virtually impossible for such a rubric to take into account all the features of a product and the interaction effects between features, or to precisely match weights to system importance. The overly prescriptive nature of rubric-based decision making is also evident when the products being contrasted do not exactly overlap in the features they provide. It takes a measure of art to choose a suite of tools that meets system needs without the tools overlapping too substantially in their functionalities; this is where the ability to hide certain features of a product may be deemed valuable in the technology evaluation process. In order to ensure all functionalities are met it is advisable to start with the most crucial technologies and, if the resulting learning system is still lacking some required components, to then find specific tools for that purpose.
For the GDIT, the three key areas of the system identified for formal testing were (other aspects of the system, such as instant messaging, were deemed less critical to the overall construction of the course as they were more easily replaceable):
1. presentation of conceptual material
2. virtual classroom systems
3. the Learning Management System.
For presentation of conceptual material, the Macromedia Breeze Presentation system (short-listed from the Investigation of the E-Learning Landscape Phase) was formally compared to traditional website approaches to delivering information. Second year undergraduate students were divided into a treatment group (Breeze audio-PowerPoint style online presentations) and a control group (HTML-style presentation). Each form presented identical information, with only the mode of presentation differing. The post-survey indicated (amongst other things) that students expressed a significant preference for the Breeze audio-PowerPoint style of presentation over the HTML form. It was also deemed advantageous that a more personalised approach could be adopted using the Breeze presentation system (through the use of tone, pitch and delivery of commentary corresponding to the visual presentation), which was identified as important for an online course (Bower & Richards, 2005b).
Two systems for online synchronous instruction were tested: the Macromedia Breeze Meeting platform and the Oracle Collaboration Suite. The Breeze Meeting platform was tested using the same experimental design as the Breeze presentation system – namely, a treatment group undertook their laboratory session using the technology in question (a virtual classroom) and a control group received the regular form of instruction (an in-class laboratory session). Results of this experiment indicated (amongst other things) that students felt they performed significantly more collaboration in the virtual classroom environment than in the regular laboratory environment, and that students felt they learnt significantly more from both their classmates and the practical supervisor in the virtual laboratory classroom as opposed to the standard classroom environment (Bower & Richards, 2005a). This was especially encouraging because it demonstrated that online delivery could educationally outperform standard face-to-face approaches, justifying the approach being proposed for the Graduate Diploma of IT. With the Macromedia Breeze Live system as a baseline that had been tested on students, the Oracle Collaboration Suite was then contrasted as a means for delivering synchronous online communications (desktop broadcasting and sharing, text chat, document sharing and the like). To compare the features of the two platforms a rating instrument was developed in-house (it was considered too imposing upon students to implement another trial of technologies, so the approach of comparison by rating features was selected). In our experience, the quality of features bundled into the Oracle Collaboration Suite did not compare favourably to Macromedia Breeze. Usability, cost of implementation and cross-platform issues were also impediments. However, the next version of the Collaboration Suite contains instant messaging and Mac/Linux clients, and as such it has been earmarked for re-evaluation at a later date.
In terms of Learning Management Systems (LMSs), the Oracle I-Learning platform was compared and contrasted with the existing WebCT LMS currently deployed at Macquarie University. Once again, a formal in-house rating instrument was developed for evaluation purposes. The inferior usability of the version of the Oracle I-Learning platform we tested, and the cost of implementing an entirely new system, led the department to decide to continue with WebCT. Systems such as Blackboard, Sakai and Moodle have been identified as products to consider in future iterations of the Learning System Engineering Cycle, but were considered too time consuming to extensively test and deploy, especially when WebCT was known to reliably satisfy most of our LMS requirements.
Non-key technologies were tested informally as a pragmatic cost saving measure. This was done by comparing and contrasting their features in situations that resembled, as closely as possible, the circumstances in which they would be applied. Testing of desktop recording software, instant messaging tools and mailing list tools led to the adoption of Macromedia Captivate, PSI and Mailman respectively. These non-key technologies, combined with the Macromedia Breeze Presentation platform, the Macromedia Breeze Meeting platform and the WebCT LMS, provided all the functionalities identified by the "System Requirements" document produced in the Conceptual Design Phase. Note that the conceptual design was revisited several times based on results observed during the Technology Testing and Selection Phase. Eventually, with a final suite of technologies tested and selected, curriculum engineering could commence in earnest.
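As a minimal illustration of the weighted-rubric comparison mentioned earlier in this section, the sketch below scores two candidate platforms against weighted features. It is hypothetical: the feature names, weights and ratings are invented for illustration and are not the actual in-house rating instruments used for the Breeze, Oracle Collaboration Suite or LMS evaluations.

# Hypothetical weighted-rubric sketch; not the actual GDIT evaluation instrument.
from typing import Dict

def weighted_score(ratings: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine per-feature ratings (e.g. 0-5) using relative importance weights."""
    total_weight = sum(weights.values())
    return sum(ratings[feature] * weight for feature, weight in weights.items()) / total_weight

# Hypothetical weights reflecting the relative importance of each feature to the system.
weights = {"usability": 3, "reliability": 3, "cross_platform": 2, "cost": 2, "standards": 1}

candidates = {
    "Platform A": {"usability": 4, "reliability": 4, "cross_platform": 3, "cost": 2, "standards": 3},
    "Platform B": {"usability": 3, "reliability": 4, "cross_platform": 4, "cost": 3, "standards": 4},
}

for name, ratings in candidates.items():
    print(f"{name}: {weighted_score(ratings, weights):.2f}")

# As noted above, when cumulative scores are close the decision should rest on
# qualitative judgement rather than on the marginally higher number.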

2.6 Curriculum Engineering Phase

The Curriculum Engineering Phase draws upon all prior phases of the Learning System Engineering model to create the learning experiences that students will undertake. Based on the understanding of the learner and the educational context, the technological tools are deployed to deliver instruction and activities in a way that meets the pedagogical approaches fashioned in the Conceptual Design Phase. Crucial considerations are:
1. how to determine curriculum inclusions and their sequencing
2. how to make specific instructional design decisions
3. how to best manage workflow.
Although a broad appreciation of the areas to be studied in the course will obviously be held, at this phase the details of scope and sequencing need to be determined. In our experience, if designers do not possess a deep understanding of how the content within subjects will interrelate and the order in which concepts need to be sequenced, students notice and the quality of learning is degraded. It is also necessary to specify how subjects will fit together, avoiding unwanted repetition or omissions in the course curriculum. Reigeluth's (1980) Simplifying Conditions Method of instructional design can provide some guidance in this area – specifically, a 'spiralling' approach to scope and sequencing is recommended for familiar concepts, whereas a 'topical' approach using 'epitomes' is more suitable for introducing new concept areas. ('Spiralling' sequencing analyses the common elements of several examples aspect by aspect, in an attempt to more effectively facilitate student abstraction; 'topical' sequencing introduces all aspects of one whole key example, or 'epitome', before the next example is introduced, promoting a more constructivist approach to concept formation.)
Development of curriculum resources falls into the category of design, and as such it is unwise to recommend a prescriptive approach. Rather, designers must use their expertise to make decisions on the scope of the learning activity, the pedagogical approach to be used, the most appropriate media for delivery, the level of interactivity, and how to best promote active engagement. The pedagogical research and user analysis conducted in the first two phases provide a valuable source of guidance, and soliciting continual feedback from colleagues is a valuable control mechanism at this stage. A word of warning: it is easy for developers with a personal interest in constructing quality educational resources to spend inordinate amounts of time creating and refining materials. This is exacerbated when new technologies and processes are being utilised. For this reason it is advisable to let the available timeframe determine the level of refinement to perform. The need to monitor progress and to design processes that ensure consistency across developers is self-evident, and responsibility for this needs to be designated. While Gantt charts, timesheets and resource allocation schedules do not guarantee final delivery, they at least draw attention to the critical nature of time, reminding all concerned of the overarching requirement of the project – to deliver the system and curriculum on day 1 of semester 1!
In order to clearly identify the learning outcomes for each subject in the GDIT, syllabuses were composed. These approximately ten-page documents outlined the unit aims and required prerequisite knowledge, and specified itemised, detailed learning outcomes. They comprehensively defined the knowledge and skills contained in the learning domain for each unit and allowed a clear learning pathway within and between subjects to be formed. From these, curriculum documents were developed that specified the learning activities and their sequencing. The Computer Science Education Literature Review was drawn upon heavily to make decisions on how to best design instruction for our courses, although this guidance was sometimes set aside when the concept to be formed lent itself to an alternative learning method or activity – for example, a collaborative, enquiry-based approach rather than a specific direct instruction model recommended by the literature. On a practical level, extensive research was performed to locate and select a comprehensive array of reference materials and the highest quality textbook/resource bundle. Requirements for the textbook included a relevant, industry focus as well as a savvy approach to instruction. The quality and form of ancillary resources (quiz sets, lecture slides, instructor solutions) were also a consideration, because they significantly impacted upon the development time for each unit.
Based on the content to be studied, the production tools selected in the Technology Testing and Selection Phase were used to create curriculum that matched the approach outlined in the Conceptual Design Phase. In broad terms, the following items were represented in each weekly study cycle:
1) Multimedia Unit Overviews for each week, providing the online learner with an approximately 20-minute introduction to key concepts and direction through the content.
2) Screen Recorded Instruction, providing students with expert modelling of processes such as compiling, debugging, program design, and so on.
3) Online multi-attempt, graded Topic Quizzes to check for basic comprehension of each topic before progressing to conceptual questions and practical activities.
4) Pre-tutorial and pre-practical exercises which students were required to submit before their online classes, to develop and formatively assess their conceptual understanding and practical programming abilities. (These each carried a minor grade, ensuring that students had attempted the work and could thus meaningfully participate in the synchronous collaborative group-work.)
5) In-class tutorial and practical activities which required no student preparation and were to be facilitated by a lecturer to promote group-work and collaborative learning.
6) Assignments – summative tasks staggered throughout the semester to assess practical abilities.
7) Workshops (3 x 3 hours) conducted face-to-face, allowing students to enrich their understanding of prior concepts, preview upcoming material, and interact face-to-face with their peers and lecturer.
For the GDIT the time taken to produce these items was underestimated, and managing workflow during this phase was one of the most difficult (yet crucial) tasks of the whole development cycle. Constructing rigid deadlines may limit the level of artistry that is possible during development, but it is essential to ensure course delivery.

2.7 Integration Phase

This final production phase of course development involves bundling the curriculum, collaborative tools and general information for the course into a synchronised package – the creation of an interface for the courseware and collaborative tools. Key considerations at this phase are:
1) maximising flexibility
2) maximising efficiency
3) ensuring usability.
In order for the course to evolve it is important that coupling between the course interface, the curriculum and the collaborative tools is as loose as possible. If links between these elements are too numerous or too tightly dependent, then the capacity for future iterations of the course to employ an updated technology or process becomes compromised. Standards compliance is also an issue – it needs to be decided whether the benefits of complying with learning object standards (the ability to transfer between systems) outweigh the extra development involved. If several subjects are to share aspects of design and content, what is the most efficient way for these to be distributed? Making wise decisions about the integration and placement of style sheets and general course information at early stages of this phase can save inordinate amounts of time later on, by avoiding replication. Maximising efficiency also includes recycling resources in successive iterations of subjects. By keeping instance-specific elements of each unit (such as the calendar schedule, news messages and so on) separated from course content, future iterations of a subject are more easily deployed (a sketch of this separation follows below). At this stage it is important to have external parties critique the design and organisation of the interface that has been created, as a means of gauging usability. Because of their extended familiarity with the course it is easy for developers to assume that users will know where resources lie, how they are used, and what users are expected to do. It is not until someone with no previous association with the course road-tests it that obvious omissions become apparent. The output of this phase is, of course, the final product. For the GDIT an attempt was made to hide the LMS as much as possible by creating an interface that used the left-hand menu navigation approach common on the web to access all courseware and collaborative tools.
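The separation of instance-specific elements from reusable content described above can be illustrated with a small sketch. This is not the actual GDIT implementation (which used WebCT and static courseware pages); the class names, fields and unit code below are hypothetical, and simply show how keeping offering-specific data apart from content makes redeployment cheap.

# Hypothetical sketch of separating reusable unit content from offering-specific data;
# not the actual GDIT/WebCT implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UnitContent:
    """Reusable, offering-independent material (weekly courseware, shared style sheet)."""
    unit_code: str                      # hypothetical unit code
    weekly_modules: List[str]
    stylesheet: str = "shared_styles.css"

@dataclass
class UnitOffering:
    """Instance-specific elements that change each semester."""
    semester: str
    calendar: List[str] = field(default_factory=list)
    news_items: List[str] = field(default_factory=list)

def deploy(content: UnitContent, offering: UnitOffering) -> Dict:
    # Loose coupling: the offering references the content but never modifies it,
    # so a new semester only requires a new UnitOffering instance.
    return {
        "unit": content.unit_code,
        "semester": offering.semester,
        "modules": content.weekly_modules,
        "calendar": offering.calendar,
        "news": offering.news_items,
    }

content = UnitContent("GDIT800", ["Week 1: Introductory programming", "Week 2: Control structures"])
print(deploy(content, UnitOffering("2006 Semester 1", news_items=["Welcome to the unit"])))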

Figure 2: The unit website interface

The coursework link provided access to all curriculum items, organised in week-by-week schedules of activities. The coursework and general unit information links were entirely decoupled from the instance items of the course (such as the calendar schedule and news pages) and from the collaborative tools (such as the mailing list interface and virtual classroom). The exception was a link to the virtual classroom from within the week-by-week courseware schedules. Informal feedback on the usability of the product from parties unfamiliar with it led to several enhancements, until access to all tools and information was intuitive and an understanding of the tasks students were expected to perform was immediately apparent.

2.8 Evaluation Phase (User/Context Analysis Phase, cycle 2)

An important feature of any development cycle is evaluation, which informs maintenance and further development of the product. For the purposes of learning engineering, the three obvious questions are:
1) What features of the learning system need to be evaluated?
2) How should the efficacy of those features be measured?
3) How should data be collected?

Figure 3: Online collaborative group-work class using Macromedia Breeze Live

Are developers more interested in the effectiveness of the pedagogical approaches adopted in curriculum design, or the extent to which the technological tools facilitated collaboration, or the satisfaction of users, or...? Is the best way to measure items via student perceptions, analysis of learning performance, instructor perceptions, ...? Will an online survey be used to collect data, or student interviews, or instructor interviews, ...? All of this needs to be weighed against the cost of implementing such investigations. For the GDIT it was decided that returning to the Likert scale and free response items addressed in the User/Context Analysis Phase would be worthwhile, because of the ability to compare and contrast results with previous data and the ease of re-implementing an existing instrument as opposed to developing an entirely new one from scratch. However, other items relating to the perceived effectiveness of each of the weekly curriculum items (topic overviews, quizzes, screen recorded instruction, pre-tutorial/practical activities) and the online classes were also included. It is acknowledged that the student perception data collected cannot be considered as valid as an analysis of learning performance, but at the same time the work involved in formally evaluating the educational effectiveness of the GDIT compared to previous approaches was deemed an extensive undertaking, and attributing effects to the numerous features of the model developed for the GDIT almost impossible. An online form was chosen to collect data because implementation was efficient (the process having been developed during the initial User/Context Analysis Phase). The survey was administered the week prior to the final exam, with the guarantee to students that results would not be analysed until after unit grades had been finalised. The results of the survey indicated that:
- Students appreciated the rich online learning technologies that had been integrated into the course.
- Technological issues, such as the quality of the learner's internet connection, had a major impact on their satisfaction.
- Students particularly appreciated the online recorded PowerPoint-style Topic Overviews. As one student put it, "These were really the only chance I had to have the content of the course explained before attempting the exercises".
- Students wanted more face-to-face meetings, and still felt they spent too much time learning in isolation (despite the effort to provide them with best-of-breed online collaborative tools and continual encouragement to work with one another). This has been identified as a matter to address in the next iteration of the course, most likely by introducing more collaborative activities.
Four of the six students enrolled in one of the course units responded to the survey. These four students received grades of HD, HD, D and P in the unit; the two students who did not respond received grades of D and F. Grades for the unit were split between relatively high and low (no Credits were awarded). One way these results can be interpreted is that students who were able to make the regular weekly homework submissions generally performed well in the unit.

As well as student impressions, informal feedback was solicited from academics involved in the course and from presentations of the product to those outside the department. Their reflections were invaluable and, thankfully, in most cases favourable. While the evaluation performed in this iteration of the cycle was not exhaustive, it provided valuable information to feed into the next iteration of the Learning System Engineering cycle.

3 Conclusion

By employing a Learning System Engineering approach the Postgraduate Professional Development Programs Department at Macquarie University was able to construct a GDIT that deployed new pedagogical and technological approaches to computer science courses in a reliable, efficient and effective manner. The modus operandi for development was unique in that it simultaneously redefined the technological and educational approaches in an integrated, systematic and accountable fashion. The Learning System Engineering Cycle was distinctive because it provided a comprehensive yet flexible approach to designing courses. The framework:
1) allows a thorough pedagogical research foundation to underpin design decisions regarding course systems and educational approaches
2) encourages the courseware and systems to be built upon comprehensive front-end analysis
3) utilises efficient analysis of a wide variety of educational technologies (Learning Management Systems, Content Management Systems, instant messaging tools, desktop sharing, multimedia production tools and virtual classroom systems), incorporating a broad range of dimensions (standards compliance, accessibility, usability, robustness, functionality, portability and upgradeability)
4) incorporates a conceptual design stage that can innovate upon the prior pedagogical, user, context and technological research while also looking forward, in an iterative fashion, to successive phases
5) allows the formality of technology testing to match the importance of the system being considered
6) improves the likelihood of delivering a comprehensive, integrated and effective curriculum through the pedagogical literature review, resource research and syllabus design processes
7) results in an innovative, well-tested and reusable product delivered on time.
The approach placed emphasis on the integration of technology for online learning. In practice there is overlap in the order in which the phases of the Learning System Engineering Cycle are attempted; such a complex task could never be adequately completed by an entirely piecemeal, step-by-step approach. However, by delineating the process into component phases it is possible to provide a comprehensive framework for guiding the development of courses. This reduces the chance of project failure by allowing the important considerations at each phase to be identified.

For the Postgraduate Professional Development Programs Department at Macquarie University the result has been not only a course that offers new modes of delivery and instructional design techniques that can be leveraged in other areas of the university, but also an approach to development that can be tailored to other courses within the department as well as to other areas of study. The intention of this paper is that describing the Learning System Engineering Cycle and the corresponding research results will assist academics from other institutions to develop and refine their own courses.

4 References

Aharoni, D. (2000) Cogito, ergo sum! Cognitive processes of students dealing with data structures. In, Proceedings of the thirty-first SIGCSE technical symposium on Computer science education, 26-30.
Allan, V. H. and Kolesar, M. V. (1996) Teaching Computer Science: A Problem Solving Approach that Works. Proc Call of the North, NECC '96. Proceedings of the Annual National Educational Computing Conference (17th, Minneapolis, Minnesota, June 11-13, 1996): 2-9.
Bandura, A. (1977) Social Learning Theory, General Learning Press, New York.

Bonar, J. and Soloway, E. (1989) In Studying the Novice Programmer (Eds, Soloway, E. and Spohrer, J. C.) Lawrence Erlbaum, Hillsdale, NJ, pp. 325-353.
Bower, M. and Richards, D. (2005a) The Impact of Virtual Classroom Laboratories in Computer Science Education. Proc Thirty-Sixth SIGCSE Technical Symposium on Computer Science Education, St. Louis, Missouri, USA: 292-296, ACM Press.
Bower, M. and Richards, D. (2005b) Self-Paced Lectures. Proc 4th IASTED International Conference on Web-Based Education, Grindelwald, Switzerland: 12-17, IASTED.
Bransford, J. D., Stein, B. S., Vye, N. J., Franks, J. J., Auble, P. M., Mezynski, K. J. and Perfetto, G. A. (1982) Differences in approaches to learning: An overview. In, Journal of Experimental Psychology: General, 3: 390-398.
Brown, J. S., Collins, A. and Duguid, P. (1989) Situated Cognition and the Culture of Learning. In, Educational Researcher, 18(1): 32-42.
Carroll, J. M. (1998) Minimalism beyond the Nurnberg Funnel, Cambridge, MA: MIT Press.
Chou, H. W. (2001) Influences of cognitive style and training method on training effectiveness. In, Computers & Education, 37(1).
Clark, D.: Big Dog's Instructional Systems Design Page, http://www.nwlink.com/~donclark/hrd/sat.html. Accessed 28 October.
Clark, J. M. and Paivio, A. (1991) Dual coding theory and education. In, Educational Psychology Review, 3(3): 149-210.
Cobb, T. (1997) Cognitive Efficiency: Toward a Revised Theory of Media. In, Educational Technology Research and Development, 45(4): 21-35.
College Station Texas: ADDIE instructional design model, http://itsinfo.tamu.edu/workshops/handouts/pdf_handouts/addie.pdf. Accessed 21st August, 2004.
Collins, A., Brown, J. and Holum, A. (1991) Cognitive apprenticeship: Making thinking visible. In, American Educator, 6(11): 38-46.
Craik, F. I. M. and Lockhart, R. S. (1972) Levels of processing: A framework for memory research. In, Journal of Verbal Learning and Verbal Behavior, 11: 671-684.
e-Learning Centre: e-Learning Centre, http://www.elearningcentre.co.uk/. Accessed 20th July 2005.
Fisher, G.: Making Learning a Part of Life - Beyond the "Gift Wrapping" approach to Technology, http://www.cs.colorado.edu/~l3d/presentations/gf-wlf/. Accessed.
Gagne, R. (1985) The Conditions of Learning, Holt, Rinehart & Winston, New York.
Hagan, D. and Markham, S. (2000) Does it help to have some programming experience before beginning a computing degree program? In, ACM SIGCSE Bulletin, 5th annual SIGCSE/SIGCUE conference on Innovation and technology in computer science education, 32: 25-28.

Hager, P. and Butler, J. (1996) Two Models of Educational Assessment. In, Assessment & Evaluation in Higher Education, 21(4): 367-78.
Hall, B.: Brandon-Hall Education & Training Research Centre, www.brandon-hall.com/. Accessed 20th July 2005.
Kay, J., Barg, M., Fekete, A., Greening, T., Hollands, O., Kingston, J. H. and Crawford, K. (2000) Problem-Based Learning for Foundation Computer Science Courses. In, Computer Science Education, 10(2): 109-128.
Knowles (1984) The Adult Learner: A Neglected Species, Gulf Publishing, Houston.
Landa, L. (1976) Instructional Regulation and Control: Cybernetics, Algorithmization, and Heuristics in Education, Englewood Cliffs, NJ: Educational Technology Publications.
Lewandowsky, S. and Murdock, B. B. (1989) Memory for serial order. In, Psychological Review, 96: 25-57.
McGill, T. J. and Volet, S. E. (1997) A Conceptual Framework for Analyzing Students' Knowledge of Programming. In, Journal of Research on Computing in Education, 29(3): 276-297.
Piaget, J. (1970) The Science of Education and the Psychology of the Child, Grossman, NY.
Rath, A. and Brown, D. E. (1995) Conceptions of Human-Computer Interaction: A Model for Understanding Student Errors. In, Journal of Educational Computing Research, 12(4): 395-409.
Reigeluth, C. M. (1980) The Elaboration Theory of Instruction: A Model for Sequencing and Synthesizing Instruction. In, Instructional Science, 9(3): 195-219.
Robins, A., Roundtree, J. and Roundtree, N. (2003) Learning and Teaching Programming: A Review and Discussion. In, Computer Science Education, 13(2): 137-172.
Salomon, G. (1994) Interaction of Media, Cognition, and Learning, LEA, New Jersey.
SCIL and Partners: Elearning Reviews, http://www.elearning-reviews.org/. Accessed 20th July 2005.
Shuell, T. J. (1986) Cognitive Conceptions of Learning. In, Review of Educational Research, 56(4): 411-36.
Spiro, R. J., Coulson, R. L., Feltovich, P. J. and Anderson, D. (1988) Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains. Proceedings of the 10th Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Erlbaum.
Van Gorp, M. J. and Grissom, S. (2001) An Empirical Evaluation of Using Constructive Classroom Activities to Teach Introductory Programming. In, Computer Science Education, 11(3): 247-260.
Vygotsky, L. S. (1978) Mind in Society, Cambridge, MA: Harvard University Press.
Zeller, A. (2000) Making students read and review code. In, SIGCSE Bull., 32(3): 89-92.
