IADIS International Conference on Web Based Communities 2006
KNOWLEDGE CHAINS LEARNING CERTIFICATION USING INTELLIGENT AGENTS TO CREATE PERSONALIZED DYNAMIC TESTS

Juliana Lucas de Rezende¹, Stainam Nogueira Brandão¹, Jano Moreira de Souza¹,²
¹COPPE/UFRJ - Graduate School of Computer Science, Federal University of Rio de Janeiro
²DCC/IM - Institute of Mathematics, Federal University of Rio de Janeiro
PO Box 68.513, ZIP Code 21.945-970, Cidade Universitária – Ilha do Fundão, Rio de Janeiro, RJ, Brazil
ABSTRACT

In this paper we consider a process that complements the learning process by building personal knowledge through the exchange of knowledge chains in learning communities. The process's differential is the addition of "how to use" the available knowledge to its "authors" (who), "localization" (where), and "content" (what), which are commonly used. This paper presents the Evaluation Module, created to give feedback to the learner about what he/she is studying in the knowledge chains. This feedback is provided through personalized dynamic tests created by intelligent agents. With this information, the learner can self-evaluate his/her learning process. This point became extremely important when we observed that learners were not motivated to study the chains because they had no feedback indicating whether they had learned what the chain proposed. The hypothesis is that a learner will guide his/her own study more easily when he/she has a system to provide assistance.

KEYWORDS

Knowledge Chains, Learning Communities, Intelligent Software Agent, Self-Evaluation, Personalized Dynamic Tests.
1. INTRODUCTION

According to Schroeder (2002), there is general agreement that our society is changing into a knowledge and information society, so life-long learning has become a necessity rather than a possibility or a luxury. Life-long learning has emerged as one of the major challenges for the global knowledge society. It has become an integral part of work activities, in the form of continuous engagement in acquiring and applying knowledge and skills in the context of the current task at hand. According to Bade (2004), students and professionals seeking additional knowledge are usually pleased with the wide scope of knowledge sources available electronically and wish to "learn on demand". This implies a high level of student independence from locations and time schedules, which has previously been described by terms such as "open learning" or "distance learning". Then, besides finding skilled multimedia instructors, the integrated administration of the learning program and the implementation of a feedback channel between students, tutors and the training organization are critical for the success of such blended learning concepts.
To complement the learning process, there are communities of practice focused on learning, i.e., learning communities. These communities act both as a method to complement teaching in the traditional classroom and as a way to acquire knowledge in evolution. Pawlowski (2000) defined a learning community as an informal group of individuals engaged in a common interest, which is, in this case, the improvement of the learner's performance using computer networks. One of the principles of Wenger (2002) for cultivating communities of practice is the sharing of knowledge to improve personal knowledge. Another requirement for a successful community is intense communication between the members. Finally, a community should assist its members in building up their personal knowledge (Souza, 2005).
1.1 Motivation

In an attempt to complement the learning process, a system has been developed to promote knowledge building, dissemination and exchange in learning communities. This system is called the Knowledge Chains Editor (KCE) [Rezende, 2005a] and is based on a process for building personal knowledge through the exchange of knowledge chains (KCs) [Rezende, 2005a]. It is implemented on top of COPPEER¹. The KC [Figure 1] is a structure created to organize knowledge. A KC consists of a header (which contains basic information related to the chain) and a list of knowledge units (KUs).
Figure 1. Knowledge Organization: (a) Knowledge Chain; (b) Knowledge Composition
Conceptually, knowledge can be broken into smaller units of knowledge (recursive decomposition). For the sake of simplification, we consider that there is a basic unit, represented as a KU (a structure formed by a set of attributes). To build his/her KC, the learner can use the KCE. When questioning, he/she must create a KU whose state is "question". At this moment, the system starts the search: it sends messages to other peers and waits for answers. Each peer performs an internal search, which consists of verifying whether there are any KUs similar to the one in the search. All KUs found are returned to the requesting party. [Figure 2]
Figure 2. Knowledge Chains Editor Architecture
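To make the idea of a KU and the peer-side similarity search more concrete, the sketch below shows one possible representation. It is illustrative only: the attribute names and the similarity rule (overlap of associated ontology concepts) are our assumptions for this paper, not the actual KCE schema implemented on COPPEER.

```python
# Illustrative sketch only: attribute names and the similarity rule
# (shared ontology concepts) are assumptions, not the KCE implementation.
from dataclasses import dataclass, field

@dataclass
class KnowledgeUnit:
    ku_id: str
    title: str
    content: str
    state: str = "answered"            # e.g. "question" when the learner is asking
    ontology_concepts: set = field(default_factory=set)
    author: str = ""

def find_similar_kus(question_ku, local_kus, min_shared_concepts=1):
    """Peer-side internal search: return local KUs that share ontology
    concepts with the incoming 'question' KU."""
    matches = []
    for ku in local_kus:
        shared = ku.ontology_concepts & question_ku.ontology_concepts
        if len(shared) >= min_shared_concepts:
            matches.append(ku)
    return matches
```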
The creation of a KU of type "question" is obviously motivated by the learner's need to obtain that knowledge. So far, we have considered two motivating factors for the creation of available KCs. The first is recognition by the communities, since each KU created has a registered author. The second is the case where the tutor makes them available "as a job", with the intention of guiding his learners' studies. After the KCE prototype's development, experiments were carried out with a group of graduate students at PESC/COPPE-UFRJ² [Rezende, 2005b]. Among the results, we found that learners had difficulty knowing whether they had learned what a KC proposes. In an attempt to solve this problem, this work presents a proposed evolution of the KCE. The main goal of the Evaluation Module (EM) is to give feedback to the learner about what he/she is studying in the KCs. This feedback is provided through personalized dynamic tests created by intelligent software agents. With this information, the learner can self-evaluate his/her learning process [Katerina, 2004]. The remainder of this paper is organized as follows. The next section presents the main concepts of intelligent software agents and describes the agents in the Evaluation Module. In Section 3, we present the proposed idea and the prototype developed. Conclusions are given in Section 4.
¹ COPPEER [Xexeo, 2004] is a framework for creating very flexible collaborative peer-to-peer (P2P) applications. It provides non-specific collaboration tools as plug-ins.
² PESC is the Systems Engineering and Computer Science Department of COPPE/UFRJ (Federal University of Rio de Janeiro).
2. EVALUATION MODULE AGENTS

According to Bradshaw (1997), a software agent can be defined as a complex object with attitude. It is governed by its state and behavior. The agent's state is described by its knowledge and expressed through mental components such as beliefs, objectives and plans. Its behavior is composed and governed by a set of behavioral characteristics, called agency properties [Shoham, 1993]. An agent can possess some of the following properties: autonomy, reactivity, adaptation, interaction, "knowledge-level" communication ability, inferential capability, collaborative behavior, temporal continuity, personality and mobility. These properties and their descriptions can be found in [Bradshaw, 1997]. Among these, three are required for an entity to be considered a software agent: autonomy (goal-directedness, proactive and self-starting behavior), adaptation (the ability to learn and improve with experience) and interaction (the ability to work in agreement with other agents). Software agents can also be classified by their degree of intelligence: those with high cognitive capacity are classified as cognitive agents, while less cognitive ones are classified as reactive agents. It is important to note that agents can have both reactive and cognitive characteristics [Ferber, 1999]. Ideally, a software agent that works continuously in an environment over a long period of time should be able to learn from its experience. In addition, we expect an agent that inhabits an environment with other agents and processes to be able to communicate and cooperate with them, and perhaps move from place to place in doing so [Bradshaw, 1997]. The tests used by the Evaluation Module will be created by intelligent software agents. Four agents were defined to carry out this task. The complete vision of the collaboration between these agents is presented in Section 3.
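A minimal skeleton of such an agent, reflecting the three required properties listed above, could look like the sketch below. The class and method names are illustrative only and do not correspond to the EM's actual code.

```python
# Minimal agent skeleton reflecting autonomy, adaptation and interaction;
# names are illustrative, not the Evaluation Module's implementation.
class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}          # the agent's state (its knowledge)

    def perceive(self, environment):
        """Reactivity: observe the environment and update beliefs."""
        raise NotImplementedError

    def act(self):
        """Autonomy: proactive, goal-directed behaviour."""
        raise NotImplementedError

    def learn(self, feedback):
        """Adaptation: improve with experience."""
        raise NotImplementedError

    def send(self, other_agent, message):
        """Interaction: cooperate with other agents via messages."""
        other_agent.receive(self, message)

    def receive(self, sender, message):
        raise NotImplementedError
```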
2.1 Questions Builder Agent (QBA)

The QBA tries to stimulate the KU's author to create new questions that can be used in future tests. How can the QBA stimulate the creation of new questions? It makes use of templates, which can suggest and assist the author in the creation of new questions. The partial automation of the question-creation process is carried out using agent technology, ontologies³ and data mining⁴. Here, agents monitor all KUs and questions created and visited by the author and the community members, and classify their content using an ontology. From there, questions are created and recommended to the author. The new questions must be classified and stored in the questions database (QDB).
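One possible form of such template-based suggestion, reusing the KU sketch from Section 1.1, is shown below. The template texts and the drafting rule are assumptions made for illustration; the data mining over visited KUs performed by the real QBA is omitted here.

```python
# Hedged sketch of the template idea: the template texts, the drafting
# rule and the author-review step are assumptions for illustration.
QUESTION_TEMPLATES = [
    "Define the concept of '{concept}' as used in '{ku_title}'.",
    "Give an example that applies '{concept}'.",
    "How does '{concept}' relate to '{other_concept}'?",
]

def suggest_questions(ku, templates=QUESTION_TEMPLATES):
    """Generate draft questions from a KU's ontology concepts.
    The author still reviews, edits, and accepts or rejects each draft."""
    concepts = sorted(ku.ontology_concepts)
    drafts = []
    for concept in concepts:
        drafts.append(templates[0].format(concept=concept, ku_title=ku.title))
        drafts.append(templates[1].format(concept=concept))
    for a, b in zip(concepts, concepts[1:]):
        drafts.append(templates[2].format(concept=a, other_concept=b))
    return drafts
```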
2.2 Questions Collector Agent (QCA)

This agent has the job of finding questions that can be used in a specific test. It initiates the search for new questions by sending messages to the IQCAs (Section 2.3), which are agents located in other peers of "near" communities, and waits for answers. The messages sent inform the IQCA what it must look for. Each IQCA is responsible for the internal search in its peer. The QCA decides what to search for, where to search and how to manage the search results. It makes use of ontology technology and data mining techniques to decide whether or not a question is relevant to a specific topic, and to specify its degree of relevance. It can search many similar topics at the same time. The newly collected questions must be presented to the author, who accepts or rejects them. The accepted questions must be classified and stored in the QDB.
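The sketch below illustrates one way the QCA could express a request to the IQCAs and score the relevance of collected questions. The overlap-based score, the threshold and the message format are assumptions for illustration, not the EM's actual protocol.

```python
# Illustrative QCA helpers: the Jaccard-style score, the threshold and
# the request/response message format are assumptions, not the real EM.
def relevance(question_concepts, topic_concepts):
    """Overlap between a question's ontology concepts and the topic being
    searched; 0.0 means irrelevant, 1.0 means full overlap."""
    if not question_concepts or not topic_concepts:
        return 0.0
    overlap = question_concepts & topic_concepts
    return len(overlap) / len(question_concepts | topic_concepts)

def build_search_request(topic_concepts, min_relevance=0.3):
    """Message sent to each IQCA describing what it must look for."""
    return {"topic_concepts": sorted(topic_concepts),
            "min_relevance": min_relevance}

def filter_results(candidates, topic_concepts, min_relevance=0.3):
    """Keep only questions relevant enough to present to the author."""
    kept = []
    for q in candidates:                 # q: dict with 'concepts' and 'text'
        score = relevance(set(q["concepts"]), topic_concepts)
        if score >= min_relevance:
            kept.append((score, q))
    return sorted(kept, reverse=True, key=lambda pair: pair[0])
```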
³ An ontology [Gruber, 1993] is a formal specification of concepts and their relationships. By defining a common vocabulary, ontologies reduce concept definition mistakes, allowing for shared understanding, improved communications, and a more detailed description of resources.
⁴ Data mining generally refers to the process of (automatically) extracting models from large data storage areas [Fayyad, 1996].
2.3 Internal Questions Collector Agent (IQCA)

The IQCA is the agent that actually performs the question search. It is responsible for carrying out the internal search (inside its own peer) to find questions that match the format specified by the requesting party, which is the QCA. The internal search consists of verifying whether there are any questions whose characteristics, such as ontology nodes and associated KUs, match the format given in the search. All questions found are returned to the QCA. For each question found, the IQCA also returns its ontology nodes, which can be added to the QCA's ontology. When the IQCA cannot find any question, it does nothing.
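A corresponding peer-side handler, matching the request format assumed in the previous sketch, might look as follows; the in-memory question store and the returned fields are illustrative assumptions.

```python
# Illustrative IQCA handler answering a QCA request; the question store
# and the returned fields are assumptions, not the real peer interface.
def handle_search_request(request, local_questions):
    """Return every local question whose ontology nodes match the requested
    concepts, together with those nodes so the QCA can extend its ontology."""
    wanted = set(request["topic_concepts"])
    results = []
    for q in local_questions:        # q: dict with 'concepts', 'text', 'ku_id'
        nodes = set(q["concepts"]) & wanted
        if nodes:
            results.append({"question": q, "ontology_nodes": sorted(nodes)})
    return results                   # empty list when nothing matches
```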
2.4 Tests Builder Agent (TBA)

The TBA is responsible for creating the user's personalized dynamic test using the questions stored in the QDB. The TBA holds the questions related to the KU to which the test refers, and needs to decide which of them will be presented to the learner and in which sequence. This decision process is based on recommendation systems. Recommendation systems [Sarwar, 2001] are used, in essence, to suggest items to their users. Items can be recommended based on the items most accepted by other users, on the user's profile, or on an analysis of the user's past behavior as a basis for predicting the future. Such techniques provide a way to personalize the system because they allow it to adapt to each user. In this work, the questions are the recommended items.
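As an illustration of the recommendation idea, the following sketch ranks candidate questions with a simple neighborhood-based collaborative filtering rule (learners with similar answer histories). This is only one of the strategies mentioned above; the data shapes and scoring rule are assumptions rather than the TBA's actual algorithm.

```python
# Sketch of a neighborhood-based collaborative filtering rule; data shapes
# and weighting are illustrative assumptions, not the TBA's real method.
def recommend_questions(target_history, all_histories, candidate_ids, top_n=5):
    """target_history / each element of all_histories: dict mapping
    question_id -> 1 (answered correctly) or 0 (incorrectly).
    Returns candidate questions ranked by how well similar learners did."""
    def similarity(h1, h2):
        common = set(h1) & set(h2)
        if not common:
            return 0.0
        agree = sum(1 for q in common if h1[q] == h2[q])
        return agree / len(common)

    scores = {}
    for qid in candidate_ids:
        total, weight = 0.0, 0.0
        for other in all_histories:
            if qid in other:
                w = similarity(target_history, other)
                total += w * other[qid]
                weight += w
        scores[qid] = total / weight if weight else 0.0
    return sorted(candidate_ids, key=lambda q: scores[q], reverse=True)[:top_n]
```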
3. EVALUATION MODULE (EM)

This paper presents an Evaluation Module that is part of the KCE, which is an intelligent e-learning environment [Cristea, 2004]. This module can adapt to the learner's profile and encourages active learning using personalized dynamic tests. As stated before, the main goal of the EM is to give feedback to the learner about what he/she is studying in the KCs. This feedback is provided through personalized dynamic tests created by intelligent agents. With this information, the learner can self-evaluate his/her own learning process. A web-oriented intelligent e-learning environment typically has a multi-agent architecture such as that shown in Figure 3. Human and artificial agents collaborate to achieve tasks. Each learner is endowed with a personal assistant that helps the learner, monitors his/her actions, and ensures coordination and communication with the other agents in the system.
Figure 3. Evaluation Module Architecture
The EM is divided into three sub-modules: the Question Builder Module (QBM), the Question Collector Module (QCM) and the Test Builder Module (TBM). The QBM deals with the functionalities related to the creation and persistence of new questions; the QBA agent is part of this module. The QCM deals with the functionalities related to the search, collection and persistence of questions located in other peers; the QCA and IQCA agents are part of this module. The TBM deals with the functionalities related to the tests; the TBA agent is part of this module. However, all the sub-modules are highly coupled, as shown in Figure 3. These sub-modules are detailed below.
3.1 Question Builder Module (QBM)

As in the creation of a new KU, we consider that there are two motivating factors for the creation of new questions. The first is recognition by the communities, since each question created has a registered author. The second is the case where the tutor makes them available "as a job", aiming at guiding his learners' studies. When the author is creating a new KU, he/she must associate ontology nodes to it. (These ontology nodes are used to give semantics to the KU, to verify whether two KUs are similar, and to facilitate the search and persistence of questions [Rezende, 2005a].) At this point, the QBA tries to stimulate the author to create new questions to be used in future tests. These tests will be created to evaluate the learner's learning about the KU created or about similar KUs. All the questions created by the author of the KUs are classified, organized and stored in the QDB, becoming part of the KC's structure. This way, when a KC is returned by a search, the questions associated with it must be sent too. The question classification process is based on many parameters, the main ones being: the "ontology concepts" related to the question, which can be defined by the question's author or obtained by data mining over the question statement and correct answer; the question's "difficulty degree", "quality" and "relevance", each of which can be defined by the question's author or by the evaluation of community members; and the "degree of the author's knowledge on the subject", which can be derived from the author's profile. The partially automatic nature of the question-creation process becomes extremely important when we observe that authors were not motivated to create new questions, which normally takes a lot of effort. The hypothesis is that an author will create a greater number of questions when he/she has a system to provide assistance.
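The sketch below shows how a question record carrying these classification parameters could be stored and indexed in a QDB. Field names and the indexing-by-concept strategy are assumptions for illustration, not the actual QDB schema.

```python
# Sketch of a question record and a concept-indexed store; field names and
# the store interface are assumptions, not the actual QDB schema.
from dataclasses import dataclass, field

@dataclass
class Question:
    statement: str
    correct_answer: str
    options: list
    author: str
    ontology_concepts: set = field(default_factory=set)
    difficulty: float = 0.5        # author-defined or community-rated
    quality: float = 0.5           # community-rated
    relevance: float = 0.5         # community-rated
    author_knowledge: float = 0.5  # derived from the author's profile

class QuestionDatabase:
    def __init__(self):
        self._by_concept = {}

    def store(self, question):
        """Index the question by each of its ontology concepts so the TBA
        can later retrieve all questions related to a KU."""
        for concept in question.ontology_concepts:
            self._by_concept.setdefault(concept, []).append(question)

    def by_concepts(self, concepts):
        found = []
        for c in concepts:
            found.extend(self._by_concept.get(c, []))
        return found
```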
3.2 Question Collector Module (QCM)

As stated before, the QCM deals with the functionalities related to the search, collection and persistence of questions. It is necessary to have as many questions as possible stored in the QDB, so that the TBM can create an adequate test for each learner. The QCA periodically monitors the peers of "near" communities, trying to find questions that can be added to the list of questions of the KUs created by the authors. This monitoring is done through requests to the IQCA. The IQCA performs the search inside its peer and supplies the QCA with the questions found according to the informed specification. This specification must be based on the question parameters, for example ontology concepts, quality, relevance, etc. These questions are presented to the author, who accepts or rejects them. The peer in which the KU was created must also be part of the group of peers searched, so that a question used in another stored KU can be used to evaluate the newly created KU. All accepted questions are classified, organized and stored in the QDB, becoming part of the KC's structure. This way, when a KC is returned by a search, the questions associated with it must be sent too. Even after the creation of the KU, the QCA agent continues to execute periodically. When new relevant questions are found in other peers, or are created in the same peer for another KU, they are presented to the author to be accepted or rejected. The author can also receive questions suggested by community members, comments on the questions he/she created, and automatically generated reports on supposed problems found in his/her questions; for example, a question with a very low percentage of correct answers may have been badly formulated, or its difficulty degree may be greater than the author thought.
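One possible form of this periodic monitoring, under the message format assumed in Section 2.2, is sketched below. The scheduling interval, the peer interface (ask_iqca) and the review queue are hypothetical names introduced only for illustration.

```python
# Sketch of the QCA's periodic monitoring loop; the interval, the peer
# interface (ask_iqca) and the review queue are assumptions.
import time

def monitor_nearby_peers(ku, peers, review_queue, interval_seconds=3600):
    """Periodically ask each nearby peer's IQCA for questions matching the
    KU's ontology concepts and queue new ones for the author to review."""
    seen = set()
    while True:
        request = {"topic_concepts": sorted(ku.ontology_concepts)}
        for peer in peers:                       # peer: object with ask_iqca(request)
            for result in peer.ask_iqca(request):
                key = result["question"]["text"]
                if key not in seen:
                    seen.add(key)
                    review_queue.append(result)  # author accepts or rejects later
        time.sleep(interval_seconds)
```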
3.3 Test Builder Module (TBM)

The tests are created in a dynamic but personalized way. For each KU a mini-test must be created. The questions used in the tests are those stored in the QDB. These tests are created by the TBA. The TBA holds the questions about the KU to be evaluated in the test and needs to decide which of them will be presented to the learner, and in which sequence. This decision process is based on recommendation systems. First of all, the TBA takes all questions related to the KU by matching the KU's ontology concepts with the questions' ontology concepts. Then the TBA uses the learner's profile to choose the questions and their order. There are many possible cases, so let us consider an example: if the learner already has some knowledge of Relational Algebra [Figure 1], the TBA does not need to begin the test with simple questions; it can begin with more difficult questions and, only if the learner makes a mistake, use the simpler questions to verify his/her knowledge. Questions already answered correctly by the learner and questions used recently must be avoided. The TBA then creates an optimal course for the case in which the learner answers all questions correctly. The choice of questions is based on the user's profile, the tests already carried out by him/her, the communities to which he/she belongs, the prerequisites between questions, the chosen level of difficulty, and the suggestions of the KU's author. The KU's author can define fixed questions, i.e., questions that will be part of all generated tests. This functionality is useful when the tutor wants all learners to go through specific questions. It is important to point out that new parameters can be added to the question-selection process. When the learner answers a question incorrectly, the TBA must modify its optimal course, selecting a new question that approaches the same problem. After some attempts, the learner is advised to review the topics related to the questions answered incorrectly. The user has the option to move ahead or to step back to his/her studies. If he/she chooses to step back, when returning to the test the learner can still decide whether to continue with the same test or to start a new one. When the learner finishes the test, going through all the questions and following the instructions to review topics, the KU state (in the learner's profile) is changed to "learned". At the end of each test, a report is presented with the learner's correct and incorrect answers. Maps of regions can be created to help the learner perceive where he/she performed better and where he/she still needs to evolve. This report can be requested at any time by the learner, in which case it is called a partial report. The correct answers to the questions answered incorrectly by the learner are not presented, so that these questions can be used in a future test. If the learner formally requests an answer, it is presented, and that question cannot be used in future tests. All tests carried out are stored.
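The following sketch captures the selection and fallback behaviour described above, reusing the Question record assumed in Section 3.1. The prior-knowledge threshold, the profile fields and the fallback rule are illustrative choices, not the TBA's actual policy.

```python
# Sketch of adaptive question ordering and fallback; thresholds, profile
# fields and the fallback rule are assumptions, not the TBA's real policy.
def build_test(questions, learner_profile, recently_used_ids, correct_ids):
    """questions: list of Question objects (see the sketch in Section 3.1);
    recently_used_ids / correct_ids: sets of question statements to skip.
    Start harder when the learner's profile indicates prior knowledge."""
    pool = [q for q in questions
            if q.statement not in recently_used_ids
            and q.statement not in correct_ids]
    start_hard = learner_profile.get("prior_knowledge", 0.0) > 0.5
    return sorted(pool, key=lambda q: q.difficulty, reverse=start_hard)

def next_question(test, index, last_answer_correct, pool):
    """After a mistake, insert a simpler question on the same topic before
    moving on (the fallback behaviour described in the text)."""
    if last_answer_correct or index >= len(test):
        return index + 1
    current = test[index]
    easier = [q for q in pool
              if q.difficulty < current.difficulty
              and q.ontology_concepts & current.ontology_concepts]
    if easier:
        test.insert(index + 1, min(easier, key=lambda q: q.difficulty))
    return index + 1
```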
3.4 EM Prototype

The EM is a multi-agent system whose main feature is to give feedback to learners through personalized dynamic tests. To create these tests it is necessary to promote the exchange of questions between the nodes of the network. This exchange is possible thanks to the peer-to-peer (P2P) infrastructure that supports the application. The EM was developed to work inside COPPEER and, because of this, the application inherits many facilitating features, such as concurrency control, a search mechanism and an awareness system (which makes it possible to know when a peer fails). The P2P architecture enables the EM to be more dynamic, resilient and fault-tolerant. The absence of a server allows the EM to work simply by connecting to other peers and querying for the desired questions. Even if some questions are not available when the query runs, it is possible to obtain other useful questions, which would be impossible with a single server. Figure 4 shows the question authoring screen. At the top of the window is the identification of the KU to which the question being created belongs. On the left panel is the question's information, such as its identification, statement and related figure. The answer's information is on the right panel; the answer options can be edited in the same way as the question. Figure 5 presents the learner's test screen. It shows a test created for a specific KU. The window is divided into three panels. At the top of the window is the identification of the KU to which the test belongs, together with the questions created for the test's optimal course. The question statement is on the intermediate panel. Finally, on the last panel are the options to answer the question.
The learner chooses one of them and the tool automatically updates the window to show the next question. When the test finishes, a new window opens showing the test result to the learner. The learner can finish the test at any time.
Figure 4. Question Authoring Screen
Figure 5. Learner’s Test Screen
4. CONCLUSIONS AND FUTURE WORK

This paper has presented an Evaluation Module that is part of the KCE. This module is able to adapt to the learner's profile and encourages active learning using personalized dynamic tests. This point became extremely important when we observed that learners were not motivated to study the knowledge chains because they had no feedback indicating whether they had learned what the chain proposed. The experimental use of the extended KCE showed evidence that, when a learner uses it to guide his/her own study, he/she performs better and consequently becomes more motivated, which supports our hypothesis. In order to evaluate how far the KCE goals are reached, experiments aimed at obtaining qualitative and quantitative data, which would make it possible to verify the hypothesis under consideration, must still be carried out.
Beyond the EM, there are other tools that deal with student evaluation, such as those presented in [Cristea, 2004] and [Bade, 2004]. The main difference between these tools and the EM is that they focus on evaluating students in order to supply information to tutors, whereas the EM's goal is not to create tests for tutors to evaluate learners, but to create tests with which learners can assess themselves. In [Barak, 2003] we can see a similar goal, but the solutions are different. The results of these tests can be used by the learner for self-evaluation and also to verify whether he/she is ready to undertake a "real" test on a specific subject. In the latter case, the tutor can create knowledge chains and questions, and the learners then study under the tutor's supervision, doing the tests created in the EM. When the learner obtains a satisfactory grade (in the tutor's opinion), he/she becomes able to carry out a "real" task or take the final tests on the subject. The tutor can use these results however he/she wants. It must be clear that we are not concerned here with the fact that the learner can deceive the system or ask another person to take the test for him/her (since the test is taken at a distance). This is a distance-education problem we do not intend to solve here. Meanwhile, we do not advise the use of these tests as final tests, recommending them mainly as a way to show the learner that he/she is ready to be "truly" evaluated. It is important to point out that changing the KU's state to "learned" does not ensure that the learner has assimilated everything in his/her knowledge chains. Our goal is to offer support for the learner's learning process according to his/her priorities, which may be access to the chain with the best cost/benefit relationship between time, quality and other factors. Special attention was given to supporting test authoring, which is especially important in an intelligent e-learning environment for assessing learner progress towards the learning goals (Junior, 2002). Adequate authoring tools can make the tutor's task easier, contribute to a greater acceptance of e-learning systems despite the extra work they require, and stimulate the participation of students. As future work, we intend to use the evaluations' results as a competence certification for the user's knowledge unit: from the moment the user takes the KU's evaluation and gets a good result, the tool updates the user's profile, making the evaluation available to the environment.
After this, for example, the evaluation can be used by a project management tool to search for students or professionals with specific abilities or competences (or a specific profile). Since this work is still in progress, many future projects are expected to follow. We list a few here: enriching the profile with other editors, such as a project editor; adding new types of searches, such as a search for successors, which would answer questions like "What KCs can the KCE suggest to the learner to study based on what he/she already knows?"; adding a recommendation system module based on the successor search; and applying this idea to a mobile environment in order to take advantage of its characteristics (ubiquity).
ACKNOWLEDGEMENT

This work was partially supported by CAPES and CNPq.
REFERENCES

Bade, D., Nüssel, G. and Wilts, G., 2004. Online Feedback by Tests and Reporting for eLearning and Certification Programs with TCmanager. Proceedings of the 13th International WWW Conference, Alternate Track Papers & Posters. New York, USA.
Barak, M., Dan-Gur, Y., Rafaeli, S. and Toch, E., 2003. Knowledge Sharing and Online Assessment. Proceedings of the IADIS International Conference e-Society. Lisboa, Portugal, pp. 257-266.
Bradshaw, J. M., 1997. An Introduction to Software Agents. In J. M. Bradshaw (Ed.), Software Agents, MIT Press, pp. 3-46.
Cristea, P.D. and Tuduce, R., 2004. Test Authoring for Intelligent E-Learning Environments. Proceedings of the IASTED International Conference on Web-Based Education. Innsbruck, Austria.
Fayyad, U., Piatetsky-Shapiro, G. and Smyth, P., 1996. The KDD Process of Extracting Useful Knowledge from Volumes of Data. Communications of the ACM, Vol. 39, No. 11, pp. 27-34.
Ferber, J., 1999. Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Addison-Wesley.
Gruber, T.R., 1993. Toward Principles for the Design of Ontologies Used for Knowledge Sharing. International Workshop on Formal Ontology.
Junior, R.T.S. and Ribeiro, G.S.N., 2002. WEBQUEST: An Internet Based Collaborative Learning Environment. Proceedings of the IADIS International Conference WWW/Internet. Lisboa, Portugal, pp. 759-762.
Katerina, G., 2004. WASA: An Intelligent Agent for Web-Based Self-Assessment. Proceedings of the IADIS International Conference on Cognition and Exploratory Learning in Digital Age. Lisboa, Portugal, pp. 43-50.
Pawlowski, S. et al, 2000. Supporting Shared Information Systems: Boundary Objects, Communities, and Brokering. Proceedings of the 21st International Conference on Information Systems. Brisbane, Australia, pp. 329-338.
Rezende, J.L., Silva, R.L.S., Souza, J.M. and Ramirez, M., 2005a. Building Personal Knowledge through Exchanging Knowledge Chains. Proceedings of the IADIS International Conference on Web Based Communities. Algarve, Portugal, pp. 87-94.
Rezende, J.L., Silva, R.L.S., Souza, J.M. and Ramirez, M., 2005b. An Experiment in Exchanging Knowledge Chains to Build Personal Knowledge. International Journal of Web Based Communities.
Sarwar, B., Karypis, G., Konstan, J. and Riedl, J., 2001. Item-Based Collaborative Filtering Recommendation Algorithms. Proceedings of WWW10. Hong Kong.
Schroeder, U., 2002. Meta-Learning Functionality in eLearning Systems. Proceedings of the International Conference on Advances in Infrastructure for Electronic Business, Education, Science, and Medicine on the Internet. L'Aquila, Italy.
Shoham, Y., 1993. Agent-Oriented Programming. Artificial Intelligence, Vol. 60, pp. 24-29.
Souza, J.M., Tornaghi, A. and Vivacqua, A., 2005. Creating Educator Communities. International Journal of Web Based Communities (IJWBC), Vol. 1, No. 3, pp. 296-307.
Wenger, E. et al, 2002. Cultivating Communities of Practice: A Guide to Managing Knowledge. Harvard Business School Press.
Xexeo, G., Vivacqua, A., Souza, J.M. et al, 2004. Peer-to-Peer Collaborative Editing of Ontologies. Proceedings of the 8th International Conference on CSCW in Design. Xiamen, China.