Using Social Networking and Learner-Centered Measurement in Automated Social Mentoring Systems
Richard N. Landers
Department of Psychology, Old Dominion University, USA
[email protected]
Abstract: The power of social networking for instruction in both the classroom and training workshop is obvious; out-of-class relationships amongst learners can foster increased understanding of the subject matter not possible with the instructor alone. Taking advantage of these systems is the next step in improving instruction through technology; yet, most online instructional systems today are relatively simplistic. This paper proposes a support tool for learning that incorporates both social networking and learner-centered measurement techniques such that learners with expertise in the subject material can be automatically identified and assigned as mentors to learners seeking help in the context of a fully online instructional program. It also describes potential incentives and reward systems to encourage participation in that system. Such a system would furthermore be offered and administered at zero cost to research partners.
The power and omnipresence of social networking are well documented. In a recent study, over 74% of children aged 11-16 reported having used social networking (Sharples, Graber, Harrison, & Logan, 2009), which has in turn been described as critical for learning to occur (Little, Denham & Eisenstadt, 2008). Considering the increasing popularity of the Internet both at home and in the classroom, including 3.1 million college students completing at least one online course per semester as of a few years ago (Allen & Seaman, 2006), with that number expected to continue growing, it seems natural that such advanced online tools would be taken full advantage of both at school and at work. It is peculiar, then, that this adoption rate has been so slow; while many schools and companies have used the Internet for quick communication and basic online instruction, few schools and even fewer companies have taken advantage of social networking in particular as an internal tool for delivering instruction and training materials. This paper proposes a model that takes advantage not only of social networking, but also of advanced learner-centered measurement techniques that improve what social networking can provide. Each element of the proposed system will be described in turn.
Learner-Centered Measurement

Traditional measurement as used in most classrooms and training programs, called classical test theory, focuses squarely on the test itself. We use terms like "the pass rate of the test," "the difficulty of the test," and so on to describe learner success. But this focus on the test is not necessary. A more modern, learner-focused quantitative technique, called item response theory (IRT; sometimes called "modern mental test theory"), focuses instead on the learner. Instead of estimating the pass rate of the test, IRT estimates the ability level of each individual learner using a statistic called theta (θ), measured on a standardized scale with unlimited range, from high negative numbers to high positive numbers, with zero representing an average ability level. In this sense, IRT compares the ability levels of learners directly, rather than inferring that information from relative success on a particular test. IRT does this by treating individual items as the base unit of analysis. Each item can be designed and then measured so that it provides information about the ability level of the learners who complete it. For example, imagine that Question A is a true-false item of average difficulty. In IRT terms, this means that learners with very high ability have a near-100% chance of answering the item correctly, while learners with very low ability have around a 50% chance of answering correctly purely by guessing. For learners between those extremes, we can compute the precise chance that any particular learner will answer the question correctly based upon his or her ability level. We can then use this relationship in reverse to estimate, very accurately and, more importantly, very quickly, the ability level of any particular set of learners, even if those learners do not see the same questions on their respective tests. To examine this, consider Figure 1.

Landers, R.N. (2009). Using Social Networking and Learner-Centered Measurement in Automated Social Mentoring Systems. In T. Bastiaens, J. Dron & C. Xin (Eds.), Proceedings of E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2009 (pp. 2803-2806). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE).
Figure 1. Sample item characteristic curve (example of item response theory) with a = 1, b = 0, c = .25

The figure above is a mathematical representation of a single question that might appear on a test. Three parameters (a, b, and c) vary for each question and define how that question behaves for any particular person. a is the discrimination parameter, which specifies how well the test item differentiates between people of high and low ability; a higher value for a appears as a steeper slope of the curved line above. b is the item difficulty parameter; a higher value for b represents a more difficult item and would be visually represented by sliding the entire curve to the right. c is the guessing parameter, the probability that the item could be answered correctly by guessing, which defines the lower asymptote of the curve. Thus, the graph above represents an item of average difficulty and average discrimination with four possible options. A person with average ability (θ = 0) completing the question would have a 62.5% chance of answering it correctly. For a person with an ability level 2 standard deviations above the mean, that probability increases to 91%. For a person with an ability level 4 standard deviations below the mean, that probability decreases to 26% (barely above chance levels). In terms of item quality, this is a pretty good item. But to show just how different a test item might be, consider Figure 2.
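Although the paper does not print the formula behind the curve, the probabilities above match the standard three-parameter logistic (3PL) model with no additional scaling constant. A minimal sketch, assuming that model:

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic item characteristic curve: the probability
    that a learner with ability theta answers the item correctly, given
    discrimination a, difficulty b, and guessing parameter c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Figure 1's item: average difficulty and discrimination, four options
print(round(p_correct(0, 1, 0, 0.25), 3))   # average ability → 0.625
print(round(p_correct(2, 1, 0, 0.25), 2))   # +2 SD ability → 0.91
print(round(p_correct(-4, 1, 0, 0.25), 2))  # -4 SD ability → 0.26
```

The same function reproduces the probabilities quoted for both figures, so it can serve as a working stand-in for the curves shown.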
Figure 2. Sample item characteristic curve (example of item response theory) with a = .6, b = 0, c = .5

This figure represents a true-false question with a lower discrimination parameter (a = .6 instead of a = 1.0). This results in two changes. First, because the discrimination parameter is smaller, the question is less able to distinguish between high- and low-performing learners, represented by the shallower slope of the line. Second, there is a higher probability that a low performer will answer the question correctly, because there are only two possible responses (true or false, represented by c = .5). On this question, a person with average ability has a 75% chance of answering correctly, while a person with ability 1 standard deviation above average has only an 82% chance. In other words, this item provides less information about learners than the question in Figure 1 does.

The advantage of using IRT is thus that, by balancing the types of information that specific items provide, a learner's ability level can be estimated with a high degree of accuracy in only a few questions of an online test, using a technique called computerized adaptive testing (CAT). In this kind of testing, questions are presented one at a time. First, a question requiring average ability is presented. If the test-taker answers correctly, a more difficult question is presented next; if the test-taker answers incorrectly, a less difficult question is presented. Increasingly targeted questions are given until a fairly stable estimate of the test-taker's ability level can be identified, in far fewer questions than a traditional test requires. Cheating is also reduced, because most learners will not see many (or perhaps any) of the questions that their classmates see. The only disadvantage is that creating such a test question database can be somewhat time-consuming in the initial data-collection effort.
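The adaptive step described above can be sketched as follows. Operational CAT systems estimate θ with maximum-likelihood or Bayesian methods over calibrated item pools; this deliberately simplified sketch instead uses a binary-search-style difficulty ladder and a hypothetical deterministic test-taker, purely to illustrate the harder-after-correct, easier-after-incorrect branching:

```python
def adaptive_test(answer, n_items=10):
    """Simplified CAT loop: start with an item of average difficulty (b = 0);
    after a correct answer present a harder item, after an incorrect answer
    an easier one, shrinking the difficulty step each time.
    `answer` is a callback taking an item difficulty and returning True/False."""
    b, step = 0.0, 2.0
    for _ in range(n_items):
        if answer(b):
            b += step    # correct → a more difficult item next
        else:
            b -= step    # incorrect → a less difficult item next
        step /= 2        # narrow in on the test-taker's ability level
    return b             # final difficulty ≈ estimated theta

# A hypothetical test-taker of ability 1.3 who deterministically answers
# correctly whenever the item is no harder than that ability level.
estimate = adaptive_test(lambda difficulty: difficulty <= 1.3)
print(round(estimate, 2))  # → 1.3
```

After ten items the estimate is within a few hundredths of the test-taker's true ability, illustrating how a stable estimate emerges from far fewer questions than a traditional fixed-form test.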
The Integration of Social Networking

As an instructional tool, social networking has not been extensively examined empirically, although it is quite enthusiastically supported from a theoretical standpoint (e.g., Pettenati & Cigognini, 2007). More often, it is studied by observing the individuals currently using it and the effect of that use on their lives (e.g., Kord, 2008) rather than by testing new techniques and strategies that take advantage of it. In this way, the presently described system is novel; its purpose is to create a persistent online social environment where learners can mentor other learners. In such a system, learners identified as experts serve as mentors for learners who seek mentoring or who are identified as needing it by some other mechanism. Mentors can then freely communicate with their mentees, creating a safe and relatively anonymous system by which learners can help other learners without intervention on the part of the instructor. This, in a sense, creates new learning opportunities with a subject matter expert during learners' free time outside of class. Because it augments existing instruction, it can be used in conjunction with any type of class: face-to-face, blended, or fully online. Expertise does, however, need to be carefully defined. Having a single certification as an "expert" for an entire course is probably unwise; instead, the course should be broken down into major components and sub-components. For example, in a college setting, instead of a single certification as an expert on psychology, certifications would be available in basic social psychology, basic clinical psychology, individual advanced topics, and so on. A student might be certified as an expert in all areas of social psychology after having taken an advanced course on the topic, but only be certified in a few scattered topics in other areas, making that student an expert in some areas but a novice in others.
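The per-topic certification scheme described above amounts to a simple data structure. A minimal sketch, with entirely hypothetical learner records and curriculum topics, showing how area-level expertise could be derived from subtopic certifications:

```python
# Hypothetical certification records: each learner holds per-subtopic
# certifications rather than a single course-wide "expert" badge.
certifications = {
    "alice": {"social psychology": {"attitudes", "group dynamics", "persuasion"},
              "clinical psychology": {"assessment"}},
}

# Hypothetical curriculum: each major area broken into sub-components.
curriculum = {
    "social psychology": {"attitudes", "group dynamics", "persuasion"},
    "clinical psychology": {"assessment", "therapy", "diagnosis"},
}

def is_area_expert(learner, area):
    """A learner counts as an expert in a major area only when certified
    in every sub-component the curriculum lists for that area."""
    earned = certifications.get(learner, {}).get(area, set())
    return curriculum[area] <= earned  # subset test

print(is_area_expert("alice", "social psychology"))   # True
print(is_area_expert("alice", "clinical psychology")) # False
```

Under this design, the same learner is automatically eligible to mentor in some areas while remaining a mentee in others.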
This further adds a "gaming" element to the mentoring system, with a complex reward structure that should encourage participation. Once in the system, users have access to the typical tools of a social network: communication via synchronous chat and asynchronous discussion boards, profile creation, and the ability to choose identifying characteristics to personalize the experience. The reward system will also be linked at all stages, such that expertise can be easily referenced by all observers (also creating a formal system for social recognition of the learner-experts).
Combining Learner-Centered Measurement and Social Networking

The value of combining these two major features is that the expertise of learners can be determined accurately and automatically, which in turn enables learners to be classified without instructor intervention. Once expert learners are identified, they can be automatically entered into the mentorship pool, enabling them to be automatically paired with learners seeking mentors. In that way, expert learners are able to share their
understanding with novice learners, and novice learners can learn from their classmates. Expertise promotions are only allowed at a long interval (for example, monthly) to prevent learners from simply retaking the tests repeatedly and memorizing every question and correct answer; this restriction in turn creates a set of short- and long-term goals for participants in the system. To encourage participation in this system, a reward or incentive system must also be integrated to make participation more attractive. For example, additional levels of expertise (novice, novice+, intermediate, intermediate+, expert, master) might be designed, with benefits at each stage: updates to avatars (the learner's representation on message boards and chat) with special markings denoting mastery level, and access to additional features or even design elements. Learners at the master level, for example, might be able to design their own learning modules to complement instructor-provided modules, which could then be rated by other participants in the system.
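The level ladder and promotion interval described above could be wired together as follows. This is a sketch under stated assumptions: the θ cutoffs for each level and the 30-day interval are hypothetical values chosen for illustration, not part of the proposed system:

```python
from datetime import date, timedelta

# Hypothetical theta cutoffs for the six expertise levels named above.
LEVELS = [("novice", float("-inf")), ("novice+", -1.0), ("intermediate", -0.5),
          ("intermediate+", 0.0), ("expert", 1.0), ("master", 2.0)]

def level_for(theta):
    """Map an IRT ability estimate to the highest level whose cutoff it meets."""
    name = LEVELS[0][0]
    for label, cutoff in LEVELS:
        if theta >= cutoff:
            name = label
    return name

def may_promote(last_promotion, today, interval_days=30):
    """Allow promotions only at a long interval (e.g., monthly), discouraging
    learners from grinding retakes to memorize the item pool."""
    return today - last_promotion >= timedelta(days=interval_days)

print(level_for(1.4))                                    # expert
print(may_promote(date(2009, 9, 1), date(2009, 9, 15)))  # False
```

Tying the level directly to the CAT-estimated θ is what lets the reward structure operate without instructor intervention.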
Availability of the Automated Social Mentoring System

The automated social mentoring system can be applied to a wide range of instructional scenarios in both higher education and corporate training settings. Any organization wishing to become a research partner of the Technology iN Training (TNT) Laboratory (tntlab.org) at Old Dominion University and implement this system in its own organization is encouraged to contact the principal investigator for details. The system is currently available at zero cost to partner organizations.
References

Allen, I. E., & Seaman, J. (2006). Making the grade: Online education in the United States, 2006. Retrieved October 22, 2008, from http://www.sloan-c.org/publications/survey/pdf/making_the_grade.pdf

Kord, J. I. (2008). Understanding the Facebook generation: A study of the relationship between online social networking and academic and social integration and intentions to re-enroll. Unpublished thesis, University of Kansas.

Little, A., Denham, C., & Eisenstadt, M. (2008). MSG instant messenger: Social presence and location for the 'ad hoc learning experience.' Journal of Interactive Media in Education, 1, 1-13.

Pettenati, M. C., & Cigognini, M. E. (2007). Social networking theories and tools to support connectivist learning activities. International Journal of Web-Based Learning and Teaching Technologies, 2(3), 42-60.

Sharples, M., Graber, R., Harrison, C., & Logan, K. (2009). E-safety and Web 2.0 for children aged 11-16. Journal of Computer Assisted Learning, 25, 70-84.