2013 IEEE 13th International Conference on Advanced Learning Technologies

Service-Oriented Approach to Improve Interoperability of e-Learning Systems

Ville Karavirta and Petri Ihantola
Dept. of Computer Science and Engineering
Aalto University
Espoo, Finland
{ville.karavirta, petri.ihantola}@aalto.fi

Teemu Koskinen
Houston Inc. Consulting Oy
Helsinki, Finland
[email protected]


Abstract—We present a design and open source implementation for a service-oriented e-learning system, which utilizes external services for supporting a wide range of learning content and also offers a REST API for external clients to fetch information stored in the system. The design separates different concerns, such as user authentication and exercise assessment, into separate services, which together form a complete e-learning environment. A key component of the design is identifying a general set of characteristics among existing exercise assessment systems, by which the assessment methods are grouped into three types: synchronous, asynchronous and static exercises.

Keywords—e-learning; LMS; service oriented; interoperability; automatic assessment

I. INTRODUCTION

Different types of e-learning systems focus on different concerns in on-line education. Learning management systems (LMS) focus on course management activities such as creating courses, handing out assignments and reporting students' performance [1]. They may also provide a wide range of other content, such as discussion forums, wikis and multimedia content [2]. Exercise assessment systems focus on providing tools for automated or assisted grading of students' exercise submissions, giving automated feedback and tracking the students' performance. It is quite common that an exercise assessment system has only a narrow focus, e.g. multiple choice quizzes or programming exercises.

The multitude of e-learning systems is a problem for both teachers and learners. At least in Computer Science (CS) education, where we have our backgrounds, it is quite common to use several highly specialized visualization and exercise assessment systems together in one course. Although the use of several systems may enable teachers to provide a wider range of learning material on their courses, the maintenance of the systems requires skills and resources. In addition, the students must cope with using a number of different systems, potentially memorizing several passwords and monitoring their progress in different systems. For example, at Aalto University we have found it common to use a combination of TRAKLA2, Goblin, and Rubyric in teaching CS (see [3] for detailed descriptions of these systems). All of the systems mentioned before have a strong focus on solving a specialized type of problem. However, the systems also have a large amount of overlapping features related to course and user management, as illustrated in Figure 1.

Figure 1. A representation of features in the systems.

In this study, we introduce A+ (http://github.com/Aalto-LeTech/a-plus), an open-source, service-oriented learning management system, and discuss how service orientation copes with the challenge of many systems. The core features of A+ are the ability to combine exercise assessment systems (described in Section III), a service API that other systems can use to retrieve information from A+, and a plugin architecture that allows HTML widgets to be embedded into various views of A+.

II. RELATED RESEARCH

A. Software Architectures

The requirements for e-learning systems change continuously as both Internet technologies and the subjects studied with the systems evolve. Thus, many e-learning systems are designed to be extendable [1]. All of the architectural approaches for extensions presented next have been successfully utilized in existing e-learning systems. However, the approaches pose different benefits and challenges, which makes them difficult to compare.

Component-oriented systems are built out of components, each of which addresses a certain concern.


The internal behavior of one component can be changed without affecting other components. This slightly improves the extendability of the system, but it still requires the components to be implemented with the same technologies, and adding new components may require changes in existing parts of the system.

In a plug-in architecture, a system consists of replaceable modules that together provide the features of an e-learning system. Plug-ins add new functionality to the overall system without affecting the functionality of other plug-ins. Although plug-ins can typically be added with little or no configuration, they require access to the file system of the server running the e-learning system. In most open source e-learning systems, the plug-ins must be designed to adhere to pre-specified APIs and be implemented using the same technologies as the e-learning system itself. Plug-ins may usually be developed by third parties, but they do not work across several e-learning systems.

Widgets are small applications that can be included on web pages. As opposed to plug-ins, widgets may be physically located elsewhere than the service utilizing them. Several initiatives for standardizing widgets exist, such as the W3C Widget specification (http://www.w3.org/TR/widgets/) and Google Gadgets (https://developers.google.com/gadgets/). Although widgets resolve a few of the issues that plug-ins have regarding compatibility and ease of installation, they cause problems in cross-domain scripting and privacy. OAuth (http://oauth.net/), however, has been seen as a potential game changer in utilizing widgets, as authorizing the widgets would enable identity management with externally hosted widgets in Moodle [4].

A service-oriented architecture separates the different concerns of a system into independent subsystems, which offer their functionality as services over a network [5]. Service orientation enhances loose coupling, as services may run on separate physical systems and be implemented using different technologies. In the context of e-learning systems, different services could include learner management and automatic assessment. One e-learning system may also use different services for assessing different exercises. A service-oriented approach can be found in several existing e-learning systems, such as eduComponent [5], Sakai and Blackboard.

B. Interoperability Standards

Many vendors have designed their e-learning system modules to fit the needs and specifications of their own system, and the modules do not typically function across different e-learning systems. The need for standardization to achieve better interoperability across e-learning systems is widely discussed. Different standards have been proposed for representing learning content in machine-readable form as well as for creating a unified way for e-learning systems to communicate with each other. The following summarizes the standardization work that appears most prominent based on existing literature.

The IMS Learning Tools Interoperability (LTI) framework includes two specifications defined by the IMS Global Learning Consortium. The two specifications are called Full LTI and Basic LTI, and their goal is to improve the integration of external learning content in e-learning systems. Basic LTI has received positive reactions from e-learning system vendors, although it does not support sending information from tool providers to tool consumers [6].

IEEE Learning Object Metadata (LOM) is a standard for annotating e-learning objects to improve the transferability, distribution and discovery of existing learning material [7]. According to McClelland [8], the complexity of hierarchical relationships may pose difficulties for a novice cataloger. Furthermore, the LOM specification does not cover automated assessment, which makes it less useful for our particular use case.

Sharable Content Object Reference Model (SCORM) is a set of specifications for creating and sharing e-learning material compatible with several e-learning environments [9]. The specifications describe how learning objects are packaged, how they are described with metadata and how they operate at run-time. In contrast to our goals, SCORM is mostly created for archiving or physically transferring content between systems, rather than providing interfaces between systems.

Tin Can API (http://tincanapi.com/) is the next generation of SCORM, intended for tracking user activity with a learning object and storing the information in a learning record store. The specification looks promising, but it is too early to say whether it will gain wider usage.

III. A+ DESIGN AND IMPLEMENTATION

Figure 2 is a screenshot from A+. The system supports concepts such as courses, course instances, exercise rounds and exercises, with submissions and feedback stored in A+.

Figure 2. A screenshot from the course schedule view in the A+ student user interface.

“Unfortunately, architectural designs are notoriously hard to evaluate and compare in an objective manner.” [10] Yet, we found the service-oriented approach most suitable for our needs because it enabled us to keep the existing assessment services loosely coupled. In Figure 3 we present the roles and positions of different services and software modules with respect to A+. By removing overlapping features, such as user authentication, course management and administration, we are able to reduce the complexity of individual exercise services. In addition, utilizing the user management and administration features of the Django framework reduced the complexity of our own implementation.

Figure 3. Separation of features and software components.


A. Exercise Assessment

The exercise assessment in A+ is implemented in a service-oriented manner. Key characteristics of this approach are that the exercise descriptions and instructions are defined in an external service, which also takes care of assessing the student's submissions.

1) Exercise Descriptions: Exercise descriptions are not stored in A+. Instead, they are defined through a service URL from which the exercise description and the exercise itself can be retrieved. Rendering an exercise page in A+ is illustrated in Figure 4. When a student loads an exercise, it is first requested from the exercise service. This request is made with the HTTP GET method, and the service returns an HTML page with the instructions and possibly a submission form. The received HTML is included in the exercise page in A+ and then displayed to the student.

Figure 4. Rendering exercise instructions and submission form.
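As a concrete illustration of this flow (not code from A+ itself), the sketch below fetches an exercise page with a plain HTTP GET; the service URL and the use of the Python requests library are our assumptions.

```python
# Minimal sketch of rendering an exercise page: the LMS requests the
# exercise from the external service with HTTP GET and embeds the
# returned HTML. The URL and the requests dependency are hypothetical.
import requests

def fetch_exercise_page(service_url: str) -> str:
    """Fetch the instructions and (possibly) a submission form."""
    response = requests.get(service_url, timeout=10)
    response.raise_for_status()
    return response.text  # HTML embedded into the LMS exercise view

exercise_html = fetch_exercise_page("https://exercises.example.org/ex1/")
```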




2) Synchronous Assessment: Some exercises can be assessed during a single HTTP request. By the nature of HTTP, this is blocking, which means that the user must wait for the exercise to be graded before the feedback page loads. The server can only process a certain number of concurrent requests, and therefore synchronous exercises pose a potential performance bottleneck. Synchronous assessment should be avoided with large files or exercises that are expected to take several seconds to grade. Acceptable exercises would, for example, be multiple choice questionnaires or simple programming exercises with limited execution time.

Assessment Flow: After the exercise page has been rendered, the student completes the exercise and fills in the form on the page. The HTTP requests and responses related to exercise submission are illustrated in Figure 5. When the form is submitted, the form values (which may include attached files) are sent to A+, which stores the submission. Next, A+ sends the form values and files to the same exercise service, this time with the HTTP POST method. The exercise service assesses the student's submission and responds with an HTML page, which contains feedback in the body section and grading-specific meta information in the head section.

Exercise services are always expected to include a meta header status in their responses. Accepted statuses are “graded”, “accepted” and “rejected”. The “graded” value is given after a successful assessment, in which case the additional headers points and max-points are also returned. These headers contain the points for the submission as well as the maximum points, which A+ may use for scaling the grade if necessary. The “accepted” status is used in asynchronous assessment, and “rejected” is given if an error has occurred. After receiving the response from the exercise service, A+ saves it and displays a feedback page for the student.
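To make this response contract concrete, the following sketch shows what a minimal synchronous exercise service could look like. The paper does not prescribe an implementation technology for services, so the use of Flask, the route, the toy grading rule, and the encoding of the grading information as <meta name=...> tags in the head section are our assumptions.

```python
# Hypothetical synchronous exercise service (illustration only; any
# technology that speaks HTTP and returns HTML would do).
from flask import Flask, request

app = Flask(__name__)

FEEDBACK_PAGE = """<!DOCTYPE html>
<html>
<head>
  <meta name="status" content="{status}" />
  <meta name="points" content="{points}" />
  <meta name="max-points" content="{max_points}" />
</head>
<body><p>{feedback}</p></body>
</html>"""

@app.route("/ex1/", methods=["GET", "POST"])
def exercise():
    if request.method == "GET":
        # Instructions plus a submission form, embedded by the LMS.
        return ('<p>What is 6 * 7?</p>'
                '<form method="post"><input name="answer" />'
                '<input type="submit" /></form>')
    # POST: assess the submitted form values and reply with feedback.
    correct = request.form.get("answer", "").strip() == "42"
    return FEEDBACK_PAGE.format(
        status="graded",        # "accepted"/"rejected" in other outcomes
        points=1 if correct else 0,
        max_points=1,
        feedback="Correct!" if correct else "Try again.")
```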





Figure 5. Submitting an exercise for automatic grading.
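On the receiving side, the grading meta information can be picked out of the returned head section with any HTML parser; the sketch below uses only the Python standard library and again assumes the <meta name=...> encoding of the headers.

```python
# Sketch of extracting the grading meta headers (status, points,
# max-points) from an exercise service response on the LMS side.
from html.parser import HTMLParser

class MetaHeaderParser(HTMLParser):
    """Collect name/content pairs from <meta> tags in the head section."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]

parser = MetaHeaderParser()
parser.feed('<head><meta name="status" content="graded" />'
            '<meta name="points" content="8" />'
            '<meta name="max-points" content="10" /></head>')
assert parser.meta["status"] == "graded"
points, max_points = int(parser.meta["points"]), int(parser.meta["max-points"])
```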

Statelessness: Assessing synchronous exercises does not require the assessment system to retain state between requests, because previously assessed submissions have no impact on future assessments and A+ takes care of the course management (e.g. storing points). The assessment service does not need to know anything about the students submitting the exercise, nor about the course the exercise is used on. Therefore the same exercise can be used simultaneously in a number of different course contexts. This eases the implementation of assessment services.

3) Asynchronous Assessment: Sometimes the assessment of an exercise is expected to take longer, or exercises are not based on HTML forms. In these cases synchronous assessment does not apply, and the service needs to be able to push information to A+. This is possible because, before A+ makes the request to the exercise service, it generates and signs a unique submission URL where the exercise system can later post the submission and grading.




This submission URL is delivered to the service as an additional GET parameter when communicating with the exercise service.

Assessment Flow and Maintaining State: Unlike synchronous assessment services, asynchronous assessment services need to maintain state between exercise initialization and completion (i.e. the generation of feedback and a grade). Namely, the assessment service has to take care of a unique return address for each student (or group) and exercise, to which the assessment service returns the feedback and grading. This address needs to be stored during the whole assessment process.

In the case of asynchronous assessment of a form-based exercise, everything works just like in the synchronous case, but the reply to the POST message (i.e. the submission of an exercise) is only a status message telling that the submission has been accepted. Later, when the feedback is ready, the assessment service sends it back to A+. This request is done by using a REST interface for submitting the scores. Before accepting the submission, A+ validates that the URL contains the correct signature and that the request is coming from the same origin that initiated the exercise.

The assessment flow in the case of a non-form submission (e.g. a submission from the TRAKLA2 applet) is presented in Figure 6. In phase 1, the student requests an exercise from A+ and gets an applet that is aware of the unique submission URL. The exercise services can be designed to store the submission URLs in a number of ways. In this example, the state is stored on the client side, and therefore the submission_url is included on the page which is returned to the student. Between the phases, the student works on the exercise locally by using the applet. After completing the exercise, the student submits his or her solution by using the applet (phase 2). The submission is graded in the applet, and the grade, feedback and submission_url are sent to the exercise service.

Figure 6. Assessment flow during TRAKLA2 exercises. The submission_url is stored on the client side between phases 1 and 2.
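The paper leaves the signing scheme unspecified; one plausible realization is an HMAC over the submission identifier, as sketched below. The secret, the URL layout and the function names are hypothetical.

```python
# Sketch of generating and validating a signed submission URL. The
# HMAC-based scheme is an assumption; A+'s actual mechanism may differ.
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # hypothetical key known only to A+

def make_submission_url(submission_id: int) -> str:
    """Build a per-submission callback URL that carries its own signature."""
    signature = hmac.new(SECRET_KEY, str(submission_id).encode(),
                         hashlib.sha256).hexdigest()
    return (f"https://aplus.example.org/rest/submission/"
            f"{submission_id}/?hash={signature}")

def is_valid_signature(submission_id: int, signature: str) -> bool:
    """Recompute and compare before accepting a posted grade."""
    expected = hmac.new(SECRET_KEY, str(submission_id).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```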

4) Static Assessment: The implementation of the third assessment type, static assessment, is presented in Figure 7. The submission and assessment of static exercises are not connected like in the previous assessment types. Instead, in phase 1, the student fetches the exercise instructions and submits his or her solution to A+. The submission is done by using an HTML form, which is customized for each exercise and may include file fields. A+ gives the submission a unique id and stores it. At this point the submission is not graded, nor is an exercise service called.

In phase 2, some time later, an instructor starts reviewing student submissions. This is done by using an exercise service (ES) such as Rubyric. Remote services are not automatically allowed to access submissions stored in A+, which is why the instructor must first authorize ES to access A+ on his behalf. Authorization is done by using the OAuth protocol. After authorization, the instructor may request a list of submissions for the exercise from ES. ES makes a request to A+'s REST API to get the list of submissions for the exercise. This request is authenticated as the instructor, and thus ES can access all resources available through the API that are accessible to the instructor. In a similar fashion, the instructor then requests an individual submission. Different exercise systems may provide the instructor with different tools for reviewing the submissions. After finishing the review, the exercise service sends the feedback and the grade to A+ through the API.

Figure 7. Submission and assessment of static exercises. In this assessment method, phase 1 is initiated by the student and phase 2 by the instructor.
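As a sketch of phase 2 from the exercise service's point of view, an OAuth 1.0 client library could be used to make the authorized API calls on the instructor's behalf; the endpoint paths, the credentials and the requests-oauthlib dependency are our assumptions, not the documented A+ API.

```python
# Hypothetical exercise-service client fetching submissions through the
# A+ REST API with instructor-authorized OAuth credentials.
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    "exercise-service-key",                     # consumer key (assumed)
    client_secret="exercise-service-secret",
    resource_owner_key="instructor-token",      # obtained in the OAuth flow
    resource_owner_secret="instructor-token-secret",
)

# Endpoint layout is an assumption about the A+ REST API.
submissions = session.get(
    "https://aplus.example.org/api/v1/exercise/42/submissions/").json()

for item in submissions:
    detail = session.get(item["url"]).json()    # one submission's data
    # ... review in the service's own UI, then send feedback and grade back:
    session.post(item["url"] + "grade/",
                 data={"points": 8, "feedback": "Well done."})
```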


IV. DISCUSSION

Our literature review into the fields of e-learning systems and interoperability standards found several solutions and standards. For our research setting, where the e-learning systems are capable of assessing submissions, there were not many solutions. Instead, standardization seems to be more focused on how to annotate static learning material in a way that it can be found and transferred between e-learning systems effectively. More dynamic learning material was often implemented as transferable software modules, or plug-ins, which are tightly coupled to specific e-learning systems.





The idea of transferring the assessment of exercises out of the server where an LMS is running is not new [5], [11]. Compared to these, the novelty of A+ is the design of the loose coupling between exercise services and A+, so that exercise services are independent and can also be used without A+. Services can be used either by other LMSs or directly with browsers. This could allow cooperation between scientific institutions, allowing them to easily utilize each other's assessment systems on their own courses. In addition, A+ is designed to act not only as a client using assessment services but also as a service itself, providing information to other e-learning systems and thus allowing interoperability of various e-learning systems.

The service-oriented approach enabled us to connect systems, which are implemented with various technologies and with different approaches to assessing the exercises, in a way which requires neither re-writing the systems as plug-ins nor creating a single e-learning system with a massive code base. In fact, the exercise assessment systems can be significantly simplified. For example, the Goblin system mentioned earlier was modified to work in a service-oriented manner to grade exercises synchronously. It was also simplified to work in a stateless manner, by disabling any saving of information about the graded submissions on the disk or in the database. The required changes in Goblin were minor, and consisted mostly of disabling the authentication of users for submitting their answers. Modifying the system into a service would essentially allow removing most of the code related to user and course management, thus simplifying the system and focusing it only on its core functionality: assessment of programming assignments.

The A+ system has already been used on a course. In Fall 2012, around 140 students on a Web Software Development course at Aalto University solved and submitted around 8000 assignments with A+. The set of exercises included HTML+CSS assignments, JavaScript programming exercises, Python programming exercises with a browser-based IDE and grading, Django programming exercises as well as multiple-choice questions. Assessing these used a wide range of technologies and was implemented by several exercise services. Technically, the system worked well even under a lot of traffic near the assignment deadlines. In Spring 2013, it was used on two Data Structures and Algorithms courses with over 300 students in total.

V. CONCLUSIONS

We have designed and implemented a system called A+, which communicates with existing systems through relatively simple HTTP requests. Because of this, we have been able to simplify the assessment services, so that they do not have to deal with user authentication, submission storage or exercise deadlines, but focus on their assessment functionality instead.

ACKNOWLEDGMENT

This work has been partially funded by the Technology Industries of Finland Centennial Foundation under the project "Interoperability and Social Media in Computer Science Learning Environments".

REFERENCES

[1] D. Dagger, A. O'Connor, S. Lawless, E. Walsh, and V. P. Wade, "Service-oriented e-learning platforms: From monolithic systems to flexible services," IEEE Internet Computing, vol. 11, pp. 28–35, 2007.

[2] S. Booth, S. Peacock, and S. P. Vickers, "Plug and play learning application integration using IMS Learning Tools Interoperability," in Ascilite 2011, Wrest Point, Hobart, TAS, 2011, pp. 143–147.

[3] T. Koskinen, "Improving interoperability of e-learning systems by using a service-oriented approach," Master's thesis, Aalto University, 2012.

[4] S. Wilson, P. Sharples, and D. Griffiths, "Distributing education services to personal and institutional systems using widgets," in Workshop on Mash-Up Personal Learning Environments at the 3rd European Conference on Technology Enhanced Learning, Maastricht, 2008.

[5] M. Amelung, K. Krieger, and D. Rosner, "E-assessment as a service," IEEE Transactions on Learning Technologies, vol. 4, no. 2, pp. 162–174, 2011.

[6] C. Alario and S. Wilson, "Comparison of the main alternatives to the integration of external tools in different platforms," in Proceedings of the 3rd International Conference of Education, Research and Innovation. IATED, 2010, pp. 3466–3476.

[7] IEEE Standards Department, "Draft standard for learning object metadata," http://ltsc.ieee.org/wg12/files/LOM_1484_12_1_v1_Final_Draft.pdf, 2002.

[8] M. McClelland, "Metadata standards for educational resources," Computer, vol. 36, no. 11, pp. 107–109, 2003.

[9] E. R. Jones, "Implications of SCORM and emerging e-learning standards on engineering education," in Proceedings of the 2002 ASEE Gulf-Southwest Annual Conference, 2002, pp. 1–10.

[10] R. T. Fielding, "Architectural styles and the design of network-based software architectures," Ph.D. dissertation, University of California, Irvine, 2000.

[11] J. Spacco, D. Hovemeyer, W. Pugh, F. Emad, J. K. Hollingsworth, and N. Padua-Perez, "Experiences with Marmoset: Designing and using an advanced submission and testing system for programming courses," in Proceedings of the 11th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education. New York, NY, USA: ACM, 2006, pp. 13–17.


