WHAT TO LEARN NEXT? CONTENT SELECTION SUPPORT IN MOBILE GAME-BASED LEARNING

Florian Schimanke, Dept. of Computer Science, HSW University of Applied Sciences, Hameln, Germany, schimanke@hsw-hameln.de
Robert Mertens, Dept. of Computer Science, HSW University of Applied Sciences, Hameln, Germany, mertens@hsw-hameln.de
Oliver Vornberger, Dept. of Computer Science, University of Osnabrueck, Osnabrueck, Germany, [email protected]
Abstract: Repetition fosters learning, and games take the dullness out of repetition. Hence, learning games promise to be a valuable addition to any learning media portfolio. But how do learners know which content they should learn at a given time in order to get the best learning results? This paper introduces an approach for mobile learning games that eases this problem in order to maximize learning outcomes based on training intervals and the learner's performance. The approach is illustrated by a prototype implementation which uses an example from language learning in order to focus not on the learning topic but on the implemented concepts. Content selection is based on the SM2 algorithm for spaced repetition learning, an established standard for calculating item presentation intervals for optimal learning performance. The paper also analyzes the usage behavior of a number of test users and draws conclusions for future modifications of the content selection scheme.
Introduction

With the spread of mobile devices, learning with apps is becoming more and more natural. With apps, learning sometimes even happens without users knowing that they are learning. This applies especially to game-based learning. On other occasions, users explicitly want to learn when using a certain app. There are already several apps in different fields which try to cover this kind of knowledge distribution. However, most of the currently available apps merely try to impart knowledge rather than to consolidate it. A better concept is to split the task into two steps: in a first step the knowledge has to be distributed in an appropriate way; in a second step the distributed knowledge has to be consolidated. This consolidation might be accomplished by repeating topics which the learner tends to forget quickly more often and at a higher frequency than topics which the learner remembers quite well.

There are already apps which try to implement such a consolidation by judging the answer the learner gives as just right or wrong. This information about learner performance is then used for direct feedback to the learner. The information about how a learner performed in a certain task does, however, also allow drawing conclusions about the learner's learning process. This is where spaced repetition algorithms enter the game. These algorithms model the human memory in order to determine the best time to present a certain question or field of learning to the learner so that the respective knowledge is kept in the learner's memory. However, these algorithms are currently used exclusively for learning with flashcards. This paper explores to what extent spaced repetition algorithms can be used in a variety of learning activities beyond flashcards in which a rating of learner performance can be obtained from the learner's interaction with the system. Apart from simple question and answer activities, games can also serve as input for these algorithms. The paper also explores how the scheduling generated by the algorithm can be fed back into learning games.

Learning with flashcards is already a common technique, especially when trying to learn a new language (Kornell & Bjork, 2008). This can either be done with paper-based cards or computer-aided with algorithm-based applications like Mnemosyne1, SuperMemo (SM)2 and Anki3.
1 http://www.mnemosyne-proj.org
In the paper-based scenario, the learners have to judge by themselves when they should learn the same stack of cards again. Algorithm-based applications, on the other hand, take over this judgment by calculating the best-suited repetition frequency based on the learner's success. There are even mobile versions of this approach available, for example an app called Repetitions4. There are also several learning apps and learning games available in the different app stores. An example of game-based learning is an app for training geography skills called Georific5. This game, however, only judges the players' answers as right or wrong and informs them accordingly. The learners' performance is in no way mirrored back into the game and used for future adjustments of the questions. What we want to do now is to combine the algorithmic intelligence used by the flashcard approaches with the fun and motivation of learning games.

The remainder of the paper is organized as follows. We first present the status quo of related fields of research before we move on to the theoretical background, where we discuss the groundwork for content selection based on the spaced repetition theory. After that we describe the SM2 algorithm, which is a natural choice as a basic starting point, and show the architecture of a learning app which contains an algorithm for content selection. After presenting our prototype app and analyzing an early evaluation, we discuss the pros and cons and give an outlook on future work.
Theoretical background

A common technique for learning by repetition and feedback is using flashcards (Kornell & Bjork, 2008). These are often used for tasks like learning vocabulary of foreign languages. One side of the card shows the word in the foreign language, the other side shows the word in the native language, so the learner reads the word in one language and then tries to remember the word in the other language. Typically those words are categorized into different topics or difficulty levels. Learning with flashcards is traditionally based on multiple repetitions of the respective cards. There are two approaches to this type of learning: massed repetition and spaced repetition. With massed repetition the flashcards are studied repeatedly within a short period of time, while with spaced repetition the flashcards are studied multiple times over a longer period of time. In psychology, the spacing effect refers to the finding that spaced repetition makes it easier for humans to learn (Pimsleur, 1967). Pimsleur (1967) showed that in order to achieve the best learning results, the intervals between repetitions of the same card should increase the better the learner remembers the correct answer. This makes it easier to focus on the things that are harder to remember, while things that are easier to remember are repeated less frequently. The latter should not be excluded from the repetitions completely, both to ensure that they can still be remembered and to motivate the learner with a feeling of success from time to time.

Without support from a sophisticated, software-based algorithm, learners have to decide on their own which cards to learn more and which cards to learn less frequently. With software support, algorithms can determine how often and at which intervals a certain card has to be shown, based on previous learning performance. This method is often referred to as spaced repetition, where the time between repetitions is based on the learner's improvement in a certain field of learning.

Hintzman (1977) discusses different theories about how to determine the right frequency for repeating a certain card: strength, multiple-trace, and propositional encoding. The strength theory is based on the traditional approach that every repetition enhances the memory of a learner, i.e. the knowledge of a learner improves with each repetition. It is therefore a cumulative approach which is strictly quantitative and in which "the two representations (i.e. before and after a repetition) have no qualitatively different effects". In this theory the frequency of repetitions depends on the results of quizzes or assessments after the learning phase and is influenced by the learner's ability to remember. The multiple-trace theory, on the other hand, does not take a cumulative approach but assumes that each presentation of information on a card leaves its own trace in a learner's memory.
2 http://www.supermemo.com
3 http://ichi2.net/anki
4 https://itunes.apple.com/de/app/repetitions-for-iphone-ipod/id332352818?mt=8
5 http://itunes.apple.com/de/app/georific/id320207678?mt=8
The propositional encoding theory, in turn, assumes that frequency information is encoded in a propositional form while studying (Hockley, 1984). Since neither the multiple-trace theory nor the propositional encoding theory takes a cumulative approach, only the strength theory is used in this paper. As mentioned before, spaced repetition as used in learning with flashcards defines a certain amount of time between two repetitions of the same card. There are several software applications that support showing a flashcard to the learner at the correct frequency.
The effect of spaced repetition

Learning and remembering is basically a matter of time and retention. Immediately after recalling a learning item, retention is almost 100% (Ebbinghaus, 1885). But as time elapses, retention drops significantly: after just 20 minutes it has dropped to about 60%, after nine hours to under 40%. Ebbinghaus (1885) created a formula describing this degradation of memories: R = e^(-t/S), where R is memory retention, S is the relative strength of memory, and t is time. The solid line in Figure 1 shows an example of this formula; it is called the "forgetting curve". With each repetition the forgetting curve starts anew and thus gets flatter over time, which is represented by the dotted lines after each repetition.
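As a small illustration of the formula, the following Python sketch evaluates R for a few points in time; the concrete value of S is an assumption chosen only for demonstration. A single fixed S cannot reproduce all of Ebbinghaus' measured drop-offs at once, because S itself grows with every successful repetition, which is exactly what the flattening of the dotted curves in Figure 1 expresses.

    import math

    def retention(t_hours, strength_hours):
        """Ebbinghaus forgetting curve R = e^(-t/S), with t and S in hours."""
        return math.exp(-t_hours / strength_hours)

    # Illustration with an assumed memory strength of 10 hours:
    for t in (0.33, 9, 24, 144):  # 20 min, 9 h, 1 day, 6 days
        print(f"t = {t:6.2f} h  ->  R = {retention(t, 10):.0%}")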
Figure 1: Alteration of the forgetting curve through repetition, according to Ebbinghaus (1885) and estimations from Paul (2007)

This effect shows how important it is to repeat learning the same subject multiple times. Doing so leads to a more permanent memorization and therefore to a flattening of the forgetting curve at a higher level of retention (Ebbinghaus, 1885). Furthermore, this affects the time between reviews: the more often an item is reviewed and remembered correctly, the longer the intervals between repetitions may be scheduled. According to Kornell (2009), spaced learning is more effective than cramming or massing, the other two common learning techniques. In terms of flashcards, spaced learning means studying one large stack of cards at a time. Studying a number of smaller stacks separately, which decreases the time between study trials, is called massing. Cramming is a special case of massing which describes learning something intensely and often for the first time, often on the last day before a test (Kornell, 2009). Studies have shown that spacing was more effective than massing for 90% of the participants (Kornell, 2009). For short-term learning, cramming might be useful, but spacing results in more long-term learning.

The spacing effect is influenced by the number of flashcards the learner decides to use in one stack. When choosing a larger stack, the spacing between repetitions of a given card becomes larger because of the other cards being studied between the repetitions. The spacing between the learning sessions, on the other hand, is influenced by the learner's decision about how many times in a row he studies the same stack of cards; the more time a learner puts between the sessions, the larger the spacing. These decisions, however, may not be based on a consideration of their impact on spacing. Studies have shown that learners tend to prefer massing or cramming over spacing because of the illusion that it is faster and more effective (Baddeley & Longman, 1978; Kornell, 2009). Furthermore, smaller stacks, as used in massed learning, may be more motivating than larger stacks because of their short-term learning effect. But as studies have shown, the long-term learning effect is better achieved with spaced learning (Kornell, 2009).
SuperMemo Basics

Since spacing is the best known way to achieve a long-term learning effect, this approach could also be used in learning games. In this scenario an algorithm would determine which topic should be covered at a given time and at which frequency. There are already several algorithms for scheduling repetitions in computer-based flashcard implementations. One of the most widely used algorithm families is delivered by SuperMemo6. Its algorithms are called SM plus an extension indicating the version. SM2 is the version most widely used today and forms the basis of tools like Mnemosyne and Anki. It uses a scale between 0 and 5 for values that are referred to as "quality of response". After each card, users have to judge how well they remembered the corresponding information. A card is rated 0 or 1 if the learner does not know the answer or has completely forgotten it; 1 means the card is already more familiar than a card with grade 0 and will therefore be repeated a little less often. The algorithm will then keep on repeating the card until the learners grade it with a 2 or higher, which means that they think they will be able to remember it for at least one or two days. This point signals the transition from short-term to long-term memory.
Figure 2: Prolongation of repetition intervals based on the rating of remembered items (R1 to R5 denote repetitions for ratings 1 to 5)

SM2 will compute repetition dates for cards rated with grade 2 such that the learner might still be able to remember them with some effort. If that date is too soon, the learner might rate the card with a grade of 3 or higher, which will push the next repetition farther into the future. If, on the other hand, the interval was too long and the user has already forgotten the card, he or she can rate it 1 or 0 again, so that the algorithm will start to repeat it sooner and more frequently. If the learner keeps on rating a card 4 or 5, SM2 will keep increasing the interval between two repetitions. By lowering the grade, learners can make the algorithm repeat a card more frequently again should they feel that remembering the correct answer is too hard. If they feel that SM2 keeps choosing the correct frequency, they should keep rating the card 4. As shown by the forgetting curve, better remembered materials may have longer intervals between repetitions than those that are not remembered well. Since the learner can re-rate an item every time it is presented, it is also possible to give an item a lower rating and therefore repeat it more often and at shorter intervals.
6 http://www.supermemo.com/
By rating the cards, learners can therefore influence the frequency at which the cards are presented based on their learning progress, as shown in Figure 2. This decreases the dependency between the type of information and the number of learning sessions needed to remember a certain card (Kornell, 2009).
Architecture of an algorithm-based prototype learning app for content selection

The architecture of our prototype app is designed as an all-in-one concept where the content, the data, the logic, the UI, and the algorithm all reside within one app. When the user starts the game, it searches its database for the next content to be learned based on the algorithm. This selection relies on the SM2 algorithm, which stores its data in a local database within the app on the user's device. A notification on the device can optionally be used to remind the user that it is time to learn the content of a certain category after a period of time calculated by the algorithm. When the user starts the game before the calculated date of the next repetition, the app chooses the category that is next in line and schedules a new repetition based on the results of the first round. If the user decides to play two or more rounds of the game, another algorithm has to be used in order to avoid repeating the same item several times in a row and to keep the integrity of the SM2 algorithm. We have therefore developed the FS algorithm. In contrast to the time-based SM2 algorithm, the FS algorithm uses a round-based approach, which seems more useful in a game with limited content that may be played several times in a row by the learner. Similar to the SM2 algorithm, the FS algorithm determines which items should be repeated more frequently than others, while avoiding back-to-back repetitions of the same item.

Furthermore, there are several basic considerations to be made about how the app should be designed. There should always be a separation between the UI and the logic of the app. Using a modular approach also makes the app more flexible. Therefore the actual content to be learned is stored separately from the SM2 algorithm's database and logic. The app selects the specific content based on the algorithm and presents it on the device through the app's user interface.
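As a minimal sketch of this separation, the learning content could be referenced by an identifier only, while a record like the following holds the per-category scheduling state of the SM2 algorithm in the local database. The field names and default values are our own illustrative choice and not taken from the prototype's actual schema.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class CategorySchedule:
        """SM2 scheduling state for one learning category, stored apart from the content."""
        category_id: str                  # key into the separately stored learning content
        efactor: float = 2.5              # SM2 easiness factor, never allowed below 1.3
        repetition: int = 0               # consecutive successful repetitions so far
        interval_days: int = 0            # current inter-repetition interval in days
        next_due: date = field(default_factory=date.today)  # next scheduled repetition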
Repetition & Feedback in (learning) games The main goal of the approach presented in this paper is to enhance learning efficiency by scheduling content presentation. The SM2 algorithm computes content presentation times based on learning history for each item; i.e. previous presentation times and the learners’ performance when interacting with the respective item. In order to make learning more engaging, the idea of integrating item presentation and interaction in a learning game is compelling. One of the major advantages of learning games is that both content presentation and learner performance rating can easily be integrated in game tasks. It does, however, come at the cost that scheduling has to be made more transparent at times so that it does not collide with the game experience. The following example describes how the SM2 algorithm is used to improve the learning effect in a game for language learning. It also shows how the hurdles in bringing together SM2 and learning games can be tackled.
App Prototype: Where is my Box?

In this concept of a language learning game, called "Where is my Box?", there are different categories of things to learn. An object, in this case a box, is placed in different positions on the screen depending on the current category, and the player needs to find it. In order to do so, the player gets a task like "My box is left of the table" and then has to tap on the corresponding location on the screen to reveal the box. An example of this can be seen in Figure 3.
Figure 3: Example of a task and its solution in Where is my Box?

Based on the answers, a score is saved, indicating in which areas the learner already has proper knowledge and in which areas he should improve. Instead of judging the learner's success based on the micro-content (i.e., individual items such as red, green, left, or right in our prototype), we have merged these items into categories. In our prototype, the categories in Figure 4 are used as examples for learning words from a foreign language. For our test run we decided to use Portuguese because none of the test users had any knowledge of this language. This ensured that the focus of the test was directed completely towards the concept and not towards the learning topic.

Colors: Red, Green, Yellow, Blue
Shapes: Square, Round
Locations: Left, Right, Under, On top

Figure 4: Learning topics in the prototype learning app

At the current stage of the prototype, the score is simply incremented or decremented based on the given answer: a correct answer increments the score, an incorrect answer decrements it. This score can be regarded as equivalent to the quality of response in the SM2 algorithm. When the game is launched for the first time, the algorithm in the background starts with default values. According to the SM2 algorithm, the first two repetition intervals are fixed. After this initial phase, the algorithm schedules the subsequent repetitions based on the learner's progress.
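The prototype keeps a simple per-category score that is incremented for right and decremented for wrong answers and is treated as the SM2 quality of response. How such a score can be condensed into the 0-5 scale SM2 expects is not spelled out by the prototype; the following sketch shows one plausible mapping based on the share of correct answers in a round, with thresholds that are our own assumption.

    def quality_from_round(correct: int, total: int) -> int:
        """Map the share of correct answers in one round to an SM2-style quality of
        response between 0 (complete blackout) and 5 (perfect response)."""
        if total == 0:
            return 0
        return round((correct / total) * 5)  # e.g. 4 of 5 correct answers -> quality 4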
SM2 algorithm

As mentioned earlier, the scheduling of "Where is my Box?" relies on the SM2 algorithm. It was developed by P.A. Wozniak in 1987 and uses different variables to calculate the most appropriate time for the next repetition.7 After an initial start-up phase with fixed repetition intervals for the first two trials, the algorithm starts calculating the next repetitions using the number of repetitions and a value called the E-Factor (easiness factor). The E-Factor, in turn, is based on a calculation which uses the quality of response, a number between 0 (complete blackout) and 5 (perfect response) which defines how well a learner is able to remember the respective topic. The lowest value the E-Factor may reach is 1.3. If the quality of response is lower than 3, repetitions for the topic start over from the beginning without changing the E-Factor.
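The following sketch restates this calculation in Python, following the published SM2 description referenced in footnote 7; the function layout and variable names are ours.

    def sm2_update(quality: int, repetition: int, interval: int, efactor: float):
        """One SM2 update step.

        quality    -- quality of response, 0 (complete blackout) to 5 (perfect response)
        repetition -- number of consecutive successful repetitions so far
        interval   -- current inter-repetition interval in days
        efactor    -- current easiness factor (E-Factor), never below 1.3
        Returns the new (repetition, interval, efactor).
        """
        if quality < 3:
            # Failed recall: repetitions start over from the beginning,
            # the E-Factor remains unchanged.
            return 0, 1, efactor
        efactor = efactor + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        efactor = max(efactor, 1.3)
        repetition += 1
        if repetition == 1:
            interval = 1      # first repetition: one day later
        elif repetition == 2:
            interval = 6      # second repetition: six days later
        else:
            interval = round(interval * efactor)
        return repetition, interval, efactor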
Problems with transferring SM2 to a learning app

Compared to the original idea of SuperMemo, which supports spaced repetition with flashcards, the amount of content in a learning game like the prototype app can be much smaller. This might influence the time span between the repetitions, since there is no filling content.
7 http://supermemo.com/english/ol/sm2.htm
On the other hand, if the learner decides to play the game again before the calculated time of the repetition is reached, this might influence his speed of learning, the amount of retention, and the whole spaced repetition system. An approach to ease these problems could be to use a newer version of the SM algorithm, as used in SuperMemo 2002, which is resistant to delays or advancements8. This algorithm is, however, still meant to be used for flashcard learning.
Combining the SM2 and FS algorithms
To keep the integrity of the SM2 algorithm while maintaining an appropriate order of the learning categories, we have developed another algorithm which takes over if the learner decides to play more than one round of the game. The course of these events is shown in Figure 5. The SM2 algorithm is therefore only in charge of the first round after a user starts the app. After this initial round the FS algorithm takes care of any following rounds. The results generated in these additional rounds are not mirrored back to the SM2 algorithm, since this would corrupt the scheduling of the next repetitions according to the spaced repetition approach. To avoid presenting the same content back to back while still focusing more on the less well known content, the FS algorithm saves its own data about the learner's performance. This data is then used to keep the game interesting by selecting content in a similar way as the SM2 algorithm would.
[Figure 5 shows the following flow: the learner starts the app, the SM2 algorithm selects the next scheduled repetition, the selected category is played, the E-Factor is recalculated and the next repetition is scheduled. If the learner chooses to play again, the FS algorithm sets a lock-flag for the last played category, selects the unlocked category with the lowest factor, lets the learner play it and recalculates its values; otherwise the app is quit.]
Figure 5: Activity diagram of the prototype learning app
FS algorithm

In contrast to the SM2 algorithm, our algorithm does not work in a time-based but in a round-based manner, because a learner might play several rounds in a row. There is therefore no time-based scheduling but a ranking which sorts the learning topics after each round of play by how well the learner has answered them. Just as with the score used for the SM2 algorithm, the FS algorithm increments the score (i.e. the quality of response) on right answers and decrements it on wrong answers. Additionally, there is another value called relevance, which is increased for the current item by 1.5 on right answers and by 1 on wrong answers, while the relevance of all other categories is decreased by 0.2. The sum of score and relevance is called the rank. The rank determines the content to play in the next round if the player chooses to carry on playing. Additionally, a flag is set for the last played content; this flag is not present for the other content. Therefore the unflagged item with the lowest rank is played next. The FS algorithm stores its own data about the learner's performance and therefore does not alter the values used by the SM2 algorithm, to avoid affecting the scheduling of the next repetition. Depending on whether a repetition was due between the last time the learner opened the app and the current session, the SM2 algorithm chooses the category to be played first. Since we have decided that the SM2 algorithm should only be responsible for the scheduling of the main categories, the FS algorithm may also calculate the frequency of repetitions of the micro-content within each category. By doing so we would be able to avoid always presenting the micro-items in the same order and could repeat less well remembered micro-items at a higher frequency.
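A minimal, self-contained sketch of this ranking scheme could look as follows. The class structure and method names are our own; only the numeric update rules (score +/-1, relevance +1.5 or +1 for the played category, -0.2 for all others, lock-flag on the last played category, selection of the unlocked category with the lowest rank) are taken from the description above.

    class FSScheduler:
        """Round-based content selection as described for the FS algorithm."""

        def __init__(self, categories):
            self.state = {c: {"score": 0.0, "relevance": 0.0, "locked": False}
                          for c in categories}

        def update(self, played, correct):
            """Record the result of one round for the played category."""
            for name, s in self.state.items():
                s["locked"] = (name == played)       # only the last played category is locked
                if name == played:
                    s["score"] += 1 if correct else -1
                    s["relevance"] += 1.5 if correct else 1.0
                else:
                    s["relevance"] -= 0.2            # all other categories lose relevance

        def next_category(self):
            """Return the unlocked category with the lowest rank (score + relevance)."""
            unlocked = {n: s for n, s in self.state.items() if not s["locked"]}
            return min(unlocked, key=lambda n: unlocked[n]["score"] + unlocked[n]["relevance"])

For example, after a correct round in the "colors" category, "colors" is locked and its rank rises, so one of the remaining categories, preferably one answered poorly or not played for a while, is selected for the next round.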
8 http://supermemo.com/help/smalg.htm
The Cold-Start Problem

A problem that arises when trying to transfer this approach to an app for content selection is the question of how or where to start. The content selection improves as the learner continuously uses the app, since the algorithm delivers better and better results with each repetition. But the starting point is a key question when using the app for the first time(s). The same problem can be found in other fields of research, for example in techniques like "social bookmarking" or "social tagging" (Parra-Santander & Brusilovsky, 2010). One approach to tackle it can be derived from these techniques, in which information from a certain group of users is collected about things like popular bookmarks or interesting articles. Transferring this technique to a learning app for content selection would mean collecting data from the app's users to find out which items or learning materials are more difficult to remember for learners from a certain group than others, even though everybody learns and remembers differently. Still, the problem remains where to start initially, i.e. without having any data from any users. The SM2 algorithm has two initial values for scheduling repetitions when the user is completely new to the respective field: the first repetition always takes place on the following day, and the second repetition takes place six days after the first. After that, the algorithm determines the date of the next repetition depending on how well the learner remembers the item. It remains to be seen whether this approach is appropriate to ease the cold-start problem for a learning game like the one shown in the use case.
Early evaluation

To get an early insight into how learners might use a learning game, we have conducted a heuristic evaluation of our prototype with five evaluators. According to Nielsen (1992), 3-5 evaluators are sufficient for this kind of evaluation. We had two female and three male evaluators aged between 25 and 40 years. All of the evaluators had enabled the option to receive a notification on their device to be reminded of a scheduled repetition. The results have confirmed our earlier estimation that learners tend to play several rounds of the game in a row instead of waiting with their next round until the date the algorithm had originally scheduled. If we had fed the results from each round of a session back into the SM2 algorithm, this would have affected the next scheduled repetition, since the learners may remember the answers better over a very short period of time than they would over a longer period. Due to this short-term memory effect, such behavior would have pushed the next repetition farther into the future than would have been the case had the learners not played several times in a row. The evaluation has also shown that when learners return for the next scheduled repetition of one category, they usually play the other, unscheduled categories as well, although their scheduled repetitions may lie farther in the future. As can be seen in Figure 6, all evaluators played each of the categories more than once in their first session after installing the game. Two of the evaluators (User 2 and User 4), each represented by one column, played the game even before the first scheduled repetition, which, according to the SM2 algorithm's settings, was the day after the first session. All evaluators played the game when they were reminded of a scheduled repetition (SR) for one category, but then also played the other categories, which did not have a scheduled repetition at that time. Some users also played the game a day after receiving a notification for a scheduled repetition. However, all evaluators reacted to a scheduled repetition by opening the app and playing the game. Using only the first round of play in each session to collect data for the SM2 algorithm preserved the integrity of that algorithm and of the idea behind the spaced repetition approach.
Figure 6: Usage of the prototype app (each column represents a user and the three different categories)
Conclusion and Future Work

The SM2 algorithm was originally developed for flashcards and has already proven its effectiveness. It seems to be a natural choice for algorithmic content selection in a learning game. However, there are some considerations to be made when doing so. The SM2 algorithm is basically a time-based solution which fits well for flashcard decks that usually consist of a large amount of content. It remains to be seen whether this algorithm also fits learning games like the prototype presented in this paper. The early evaluation has shown that users tend to play several rounds in a row instead of waiting for the next scheduled repetition. It is currently unclear whether this has any effect on the learning progress, but without modifications it does corrupt the spaced repetition system. In addition, depending on how many times a learner plays the game in a row, the next scheduled repetition may be pushed too far into the future, which may even worsen this effect on the system. In terms of learning, playing several rounds of the game in a row might be compared to studying flashcards several times in a row, which would basically be a kind of massing. Since spaced repetition is the approach proven to have better results for long-term learning, and since it is the idea behind SuperMemo and the reason we chose it, there has to be a way to keep the integrity of this algorithm. We have therefore developed another algorithm which adds a round-based approach for content selection after SM2 has selected the first item to repeat when the app is launched. Simply feeding the scores acquired in the round-based interaction back does, however, lead to the problem of scheduling a repetition too far into the future. We tried to solve this by using only the data of the first round of play per category for the calculation of the SM2 algorithm, thus keeping the integrity of the algorithm.

Based on the ideas and findings described in this paper, there are several more issues to address in future work. The cold-start problem is probably hard to solve. Since it is a common problem in recommendation systems, there are several approaches from other fields of research which rely on different techniques like automatic text analysis and opinion classification (Poirier, Fessant & Tellier, 2010) or community-based recommendations (Shaghayegh & Cohen, 2011). Since every learner remembers different types of information differently, such an approach can obviously never deliver a universally valid result. At the current stage, the prototype of "Where is my Box?" alters the score for each category, which is used by the algorithm as the "quality of response", only on the basis of right or wrong answers, incrementing or decrementing the value accordingly. At a future stage this should be made more sophisticated to better reflect the actual learning performance of the user. One approach could be to analyze the time between the presentation of the content and the learner's answer and to draw conclusions from that. In other future work, the algorithm currently used in a stand-alone game has to be implemented in a meta-app which can serve several other apps and select the appropriate content from them. This may also ease the problem of content shortages in stand-alone apps. There have to be different interfaces and an architecture that make this meta-app as flexible as possible. The meta-app should provide an easily extendable framework for the content-apps. The data from the algorithm is collected and analyzed in a single place to ensure the best possible recommendation for what to learn next. There should be no limitations on expanding the app. For example, it should be easy to add new content-apps to the meta-app or to replace or update existing content without having to change the overall structure with each update. Therefore it is important to provide robust APIs to ensure a future-proof development of the meta-app and its content.
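As a rough sketch of what such an interface between the meta-app and a content-app might look like, consider the following abstract class. It is purely illustrative: neither the class nor its method names exist in the current prototype, and the actual API would have to be designed in future work.

    from abc import ABC, abstractmethod

    class ContentApp(ABC):
        """Hypothetical interface a content-app would implement for the meta-app."""

        @abstractmethod
        def categories(self) -> list[str]:
            """Return the learning categories this content-app offers."""

        @abstractmethod
        def play_round(self, category: str) -> int:
            """Present one round for the given category and return an SM2-style
            quality of response between 0 and 5 to the meta-app."""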
References

Baddeley, A.D., & Longman, D.J.A. (1978). The influence of length and frequency of training session on the rate of learning to type. Ergonomics, 21, 627-635.

Ebbinghaus, H. (1885). Memory: A Contribution to Experimental Psychology. New York: Dover.

Hintzman, D.L. (1977). Repetition and Memory. In G.H. Bower (Ed.), Psychology of Learning and Motivation: Advances in Research and Theory (Vol. 10, pp. 47-91). New York: Academic Press.

Hockley, W.E. (1984). Retrieval of item frequency information in a continuous memory task. Memory & Cognition, 12(3), 229-242.

Kornell, N. (2009). Optimizing learning using flashcards: Spacing is more effective than cramming. Applied Cognitive Psychology, 23, 1297-1317.

Kornell, N., & Bjork, R.A. (2008). Optimizing self-regulated study: The benefits and costs of dropping flashcards. Memory, 16(2), 125-136.

Nielsen, J. (1992). Finding usability problems through heuristic evaluation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Monterey, CA, USA (pp. 373-380).

Parra-Santander, D., & Brusilovsky, P. (2010). Improving Collaborative Filtering in Social Tagging Systems for the Recommendation of Scientific Articles. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Toronto, Canada (pp. 136-142).

Paul, K. (2007). Study Smarter, Not Harder. Self-Counsel Press.

Pimsleur, P. (1967). A Memory Schedule. The Modern Language Journal, 51(2), 73-75.

Poirier, D., Fessant, F., & Tellier, I. (2010). Reducing the Cold-Start Problem in Content Recommendation Through Opinion Classification. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence, Toronto, Canada (pp. 204-207).

Shaghayegh, S., & Cohen, W. (2011). Community-Based Recommendation: A Solution to the Cold Start Problem. In Workshop on Recommender Systems and the Social Web (RSWEB), Chicago, IL, USA.