Improving Ontology-Based User Profiles

Joana Trajkova, Susan Gauch
Electrical Engineering and Computer Science, University of Kansas
{trajkova, sgauch}@eecs.ku.edu

Abstract

Personalized Web browsing and search hope to provide Web information that matches a user's personal interests and thus provide more effective and efficient information access. A key feature in developing successful personalized Web applications is to build user profiles that accurately represent a user's interests. The main goal of this research is to investigate techniques that implicitly build ontology-based user profiles. We build the profiles without user interaction, automatically monitoring the user's browsing habits. After building the initial profile from visited Web pages, we investigate techniques to improve the accuracy of the user profile. In particular, we focus on how quickly we can achieve profile stability, how to identify the most important concepts, the effect of depth in the concept hierarchy on the importance of a concept, and how many levels from the hierarchy should be used to represent the user. Our major findings are that ranking the concepts in the profiles by the number of documents assigned to them, rather than by accumulated weights, provides better profile accuracy. We are also able to identify stable concepts in the profile, thus allowing us to detect long-term user interests. We found that the accuracy of concept detection decreases as we descend the concept hierarchy; however, this loss of accuracy must be balanced against the detailed view of the user available only through the inclusion of lower-level concepts.
1. Introduction

The huge amount of data on the Web means that there is information on almost any topic available online. Search engines are a very popular way to locate information, but the information provided is too general to efficiently satisfy an individual user's information needs. Although users differ in their specific interests, they tend to provide very short queries that do not adequately describe the specific information they seek. Thus, the same results are shown to all users even though they may be looking for very different types of information. Including personalization in Web information access may provide more effective information use, retrieving for the user information that matches their individual needs. Some systems for retrieving information try to include personalization in search, browsing, or both. In personalized search systems, the search results are ranked according to the user's interests, or the searchable documents are arranged according to user-defined concepts so that the desired information can be obtained faster. In contrast, recommender systems attempt to suggest Web pages of possible interest to users as they browse a Web site. An accurate representation of a user's interests, generally stored in some form of user profile, is crucial to the performance of personalized search or browsing agents. The issues that need to be addressed in this process are how to build an accurate profile, particularly how to identify major versus minor concepts of interest to the user, and how to represent the profile. Profiles may be built explicitly, by asking users questions, or implicitly, by observing their activity. Because user interests change over time, we focus on implicit methods for constructing the profile, which have the potential to adapt over time and reflect changing user interests. User profiles are often represented as keyword/concept vectors; in contrast, we build a weighted concept hierarchy. The main objective of our research is to create an accurate, ontology-based user profile without user interaction. The algorithms we evaluate in this paper first build an ontology-based user profile using classification, then investigate ranking the concepts in the profile by their importance and, ultimately, improving the profile by removing unimportant concepts.
2. Related research

This research relates to work in the area of Web personalization, the process of delivering information to the user in the most adequate way and at the most appropriate time, taking into account his or her specific needs and preferences (Mizzaro & Tasso, 2002). Personalized systems are being developed to help users browse news articles (WebMate (Chen & Sycara, 1997), SmartPush (Kurki et al, 1999)), find scientific and research papers (Quickstep (Middleton et al, 2001)), browse the Web (Letizia (Lieberman, 1995), PersonalWebWatcher (Mladenic, 1999)), improve search results (OBIWAN (Pretschner, 1999), Persona (Tanudjaja & Mui, 2001)), or perform a combination of the above activities (Syskill & Webert (Pazzani et al, 1996), Basar (Thomas & Fischer, 1997)). Personalized systems differ in the type of data used to create the user profile. Letizia (Lieberman, 1995), WebMate (Chen & Sycara, 1997), PersonalWebWatcher (Mladenic, 1999), and OBIWAN (Gauch et al, 2004) build the profile by analyzing Web pages visited by the user as they browse. Other sources of information have also been used, such as bookmarks in Basar (Thomas & Fischer, 1997), and queries and search results in (Rich, 1979), Syskill & Webert (Pazzani et al, 1996), and Persona (Tanudjaja & Mui, 2001). Combining information from multiple sources, i.e., Web pages, emails, and documents, is also currently being investigated (Dumais et al, 2003). Profiles may be represented in a variety of ways. Letizia (Lieberman, 1995) and Basar (Thomas & Fischer, 1997) produce a bookmark-like list of Web pages; PersonalWebWatcher (Mladenic, 1999), Syskill & Webert (Pazzani et al, 1996), and QuickStep (Middleton et al, 2001) create a list of concepts of interest; and (Pretschner, 1999), SmartPush (Kurki et al, 1999), and Persona (Tanudjaja & Mui, 2001) create a hierarchically-arranged collection of concepts, or ontology. The profiles are constructed using a variety of learning techniques, including the vector space model (Chen & Sycara, 1997; Pretschner, 1999), genetic algorithms (Mitchell, 1996), the probabilistic model (Mladenic, 1999), or clustering (Mobasher et al, 2000). Since the initial profiles may contain a certain amount of noise in the form of irrelevant concepts, concept rating or filtering algorithms are often employed to improve the profiles. Many systems require user feedback for this, e.g., Syskill & Webert (Pazzani et al, 1996), Basar (Thomas & Fischer, 1997), SmartPush (Kurki et al, 1999), and Persona (Tanudjaja & Mui, 2001). Others, such as Letizia (Lieberman, 1995), WebMate (Chen & Sycara, 1997), PersonalWebWatcher (Mladenic, 1999), and OBIWAN (Gauch et al, 2004), adapt autonomously. We will now discuss a few recent examples of personalized systems research. The PVA (Personal View Agent) system (Chen et al, 2002) provides personalized news article classification. It creates concept-hierarchy-based user profiles implicitly, based on browsed Web pages. The profile is learned by classifying the Web pages using the vector space model and adapted by merging/splitting concepts in the profile. (Kim & Chan, 2003) also learn a user interest hierarchy (UIH) based on visited Web pages; however, they cluster words extracted from the documents to detect concepts of interest. The profiles are represented as a hierarchy in which leaf nodes are considered more specific, short-term interests, whereas the parent nodes are more general and considered to be long-term user interests. (McGowan et al, 2002) use browsed Web pages and bookmarks as sources of information about a user's interests. The learning technique clusters these documents based on the vector space model. The profile adapts over time as more information is collected, and the modifications are used to track interest changes. Our research combines aspects from several systems reviewed in this section. We build the user profiles by collecting user data (url, date, time spent on the page) via a proxy server.
The Web pages are represented as keyword vectors, and we use the vector space model (Salton, 1989; Baeza-Yates & Ribeiro-Neto, 1999) to classify documents into the best-matching concepts in a pre-existing ontology. The user profile is represented by the total weight and number of pages associated with each concept in the ontology. In the future, we intend to use the profile to provide personalized search in the KeyConcept project (Gauch et al, 2003).
3. Building an ontology-based user profile

The main input to our system for building a user profile is the set of Web pages a user has visited for at least a minimum amount of time (5 seconds). The system classifies each Web page into the most similar concept in a predefined hierarchy of concepts, or ontology. The process of building the profile consists of three phases: 1) training the classifier; 2) collecting user data; and 3) classifying the Web pages from the collected urls.

3.1 Ontologies

An ontology is a specification of concepts and the relations between them. It defines "content-specific agreements" on vocabulary usage, sharing, and reuse of knowledge (Gruber, 1993). It is used to alleviate communication problems between systems caused by ambiguous usage of different terms.
When building user profiles, ontologies are used to address the so-called "cold-start problem" (Middleton et al, 2002). When initially learning user interests, systems usually perform poorly until they collect enough relevant information. Since initial behavior information is matched against existing concepts and the relations between them, using an ontology as the basis of the profile helps to avoid or ease this problem. In our case, we use the Open Directory Project concept hierarchy (ODP, 2002) (and its associated, manually classified Web pages) as our reference ontology. For these experiments, we used the top 3 levels of the concept hierarchy only. Based on previous experiments (Gauch et al, 2003), this set of concepts was further restricted to only those concepts that had sufficient training data (at least 30 pages) associated with them. This resulted in a total of 2,991 concepts and approximately 125,000 training documents in the reference ontology.

3.2 Training the classifier

In order to train the classifier for each concept, all the Web pages available as training data for a particular concept are merged together to create a superdocument. This creates a collection of superdocuments, one per concept, that are preprocessed to remove stopwords and stemmed using the Porter stemmer (Frakes & Baeza-Yates, 1992) to remove common suffixes. After preprocessing, the superdocuments go through an indexing process to calculate and save a vector for each concept that stores the weight of each vocabulary term in that concept. Thus, each concept is treated as an n-dimensional vector, where n is the number of unique terms in the vocabulary. Each term weight in a concept vector is calculated using tf*idf and normalized by the vector length. In more detail, uw_ij, the unnormalized weight of term i in concept j, is calculated as follows:

$$uw_{ij} = tf_{ij} \cdot idf_i \quad (1)$$

$$idf_i = \log \frac{\text{number of documents in the collection}}{\text{number of documents in the collection that contain term } t_i} \quad (2)$$

where tf_ij is the number of occurrences of term t_i in sd_j, and sd_j is the superdocument used for training concept j. The final, normalized weight w_ij for term i in concept j is calculated as follows:

$$w_{ij} = \frac{uw_{ij}}{\sqrt{\sum_{t_i \in sd_j} uw_{ij}^2}} \quad (3)$$
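To make equations (1)-(3) concrete, here is a minimal sketch of the indexing step in Python. It is our illustrative reconstruction, not the authors' implementation: it assumes the superdocuments have already been stopword-filtered and Porter-stemmed, and it treats each superdocument as one "document" for the idf count.

```python
import math
from collections import Counter

def build_concept_vectors(superdocs):
    """superdocs: dict mapping concept id -> list of terms from that concept's
    superdocument (already stopword-filtered and stemmed).
    Returns concept id -> {term: normalized tf*idf weight}, as in eqs. (1)-(3)."""
    n_docs = len(superdocs)

    # Document frequency: number of superdocuments that contain each term.
    df = Counter()
    for terms in superdocs.values():
        df.update(set(terms))

    vectors = {}
    for concept, terms in superdocs.items():
        tf = Counter(terms)                                         # tf_ij
        unnorm = {t: tf[t] * math.log(n_docs / df[t]) for t in tf}  # eqs. (1)-(2)
        norm = math.sqrt(sum(w * w for w in unnorm.values()))       # eq. (3) denominator
        vectors[concept] = {t: w / norm for t, w in unnorm.items()} if norm else {}
    return vectors
```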
3.3 Collecting data

In this phase, the urls, the date and time of each visit, and the Web page sizes are stored in a log file by a proxy server. The program extracts the urls for each user and filters them to remove documents that are considered too short to have any content (less than one KB) and those on which the user spent little time (less than 4 seconds), either because the page had no content of interest or because the page was silently redirected.

3.4 Classifying the Web pages

Web pages and concepts are represented as vectors in the vector space model, and their similarity is calculated as the cosine between the vectors. The top matches are identified using k-nearest-neighbors, since previous work showed this to outperform Bayesian classification (Zhu et al, 1999). Similar to the preprocessing done on the concept superdocuments, each of the Web pages collected for a user is preprocessed to remove stopwords and then stemmed. The weights of all remaining words in the Web page are calculated using formula (1), and then the words are sorted by weight. Since the words are all selected from the current Web page, the length of the document is a constant and normalization is not done. Based on earlier experiments (Gauch et al, 2003), the 20 highest-weighted words are used to represent the content of the Web page. Classification thus consists of comparing the vector created for the Web page with each concept's vector (created and stored during training) using the cosine similarity measure. In more detail, the similarity between concept c_j and Web page p_k is calculated as follows:

$$similarity(c_j, p_k) = \vec{c}_j \cdot \vec{p}_k = \sum_{i=1}^{n} w_{ij} \cdot p_{ik} \quad (4)$$

where n is the number of unique terms in the vocabulary, w_ij is the normalized weight of term i in concept j, and p_ik is the unnormalized weight of term i in page k.
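The classification step can be sketched as follows, again as an illustration rather than the authors' code: the page vector keeps only the 20 highest tf*idf terms and is left unnormalized, and the similarity to each stored concept vector is the dot product of equation (4). The function and parameter names are ours.

```python
import math
from collections import Counter

def page_vector(page_terms, df, n_docs, top_k=20):
    """Unnormalized tf*idf vector for one (preprocessed) Web page, keeping only
    the top_k highest-weighted terms, as described above."""
    tf = Counter(page_terms)
    weights = {t: tf[t] * math.log(n_docs / df[t]) for t in tf if t in df}
    top = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    return dict(top)

def classify_page(page_vec, concept_vectors):
    """Return the best-matching concept and its similarity using eq. (4):
    the dot product of the page vector with each normalized concept vector."""
    best_concept, best_sim = None, 0.0
    for concept, cvec in concept_vectors.items():
        sim = sum(w * cvec.get(t, 0.0) for t, w in page_vec.items())
        if sim > best_sim:
            best_concept, best_sim = concept, sim
    return best_concept, best_sim
```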
After calculating the similarity between the Web page and all the concepts, the results are sorted and the page is classified into the top-matching concept in the ontology together with its similarity/weight. The same process is repeated for each page visited by the user. The weights from all the pages that match the same concept are accumulated. For each concept in the ontology, its weight is calculated as the sum of all its children's weights and its own weight. At the end of this process, the classifying agent has created a user profile consisting of all concepts with non-zero weights. Figure 3.1 shows a sample user profile.
Figure 3.1: Screen shot of a sample user profile
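A minimal sketch of the profile-accumulation step described above, assuming concepts are identified by ODP-style paths (e.g. 'Computers/Internet/Searching') so that a concept's ancestors can be read off its id; how page counts propagate is not specified in the paper, so keeping them at the matched concept only is our assumption.

```python
from collections import defaultdict

def build_profile(classified_pages):
    """classified_pages: list of (concept_path, similarity) pairs, one per page,
    where concept_path is an ODP-style path such as 'Computers/Internet/Searching'.
    A concept's weight is its own accumulated weight plus its children's, so each
    page's similarity is added to the matched concept and to all of its ancestors.
    Page counts are kept at the matched concept only (our assumption)."""
    weights = defaultdict(float)
    page_counts = defaultdict(int)
    for concept_path, sim in classified_pages:
        parts = concept_path.split("/")
        for depth in range(1, len(parts) + 1):
            weights["/".join(parts[:depth])] += sim
        page_counts[concept_path] += 1
    return dict(weights), dict(page_counts)
```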
4. Profile improvements

The main contribution of this paper is to report on experiments designed to evaluate improvements to the profile generated as described in Section 3. In earlier work (Gauch et al, 2004), ontology-based user profiles were shown to improve search results (by re-ranking and filtering). Those studies indicated that further improvement might be possible if the profile were more accurate. Thus, the aim of this work is profile improvement. In particular, the questions we address are:
• When does the profile become stable and start to accurately represent the user's interests?
• What rank ordering of the concepts of interest produces the most accurate profile?
• Is it possible to detect non-relevant concepts and prune them from the profile?
• How many levels from the ontology are enough/necessary to build a more accurate profile?

4.1 Profile convergence

Every time a new Web page is classified, it either adds a new concept to the user profile or is assigned to an existing concept, whose weight and number of documents are increased. Our expectation is that, although the number of concepts in the user profile will monotonically increase, eventually the highest-weighted concepts should become relatively stable, reflecting the user's major interests. In order to determine how much of the user's browsing history we need to obtain a relatively stable profile, we evaluated metrics based on time and on the number of visited urls. In both cases, we measured the total number of non-zero concepts and the similarity between the top 50% of the concepts over time to see if we could determine when (and if) profiles become stable.
4.2 Improving profile accuracy

The concepts in the user profiles can be rank-ordered by the concepts' weights or by the number of documents associated with each concept. In Experiment 2, we evaluated which rank ordering produced the more accurate profile by calculating the F-measure for each profile when concepts were ordered by weight and when they were ordered by the number of documents. In addition, we also calculated the average rank of the non-relevant concepts in the user profiles for each ordering. One goal of rank-ordering the concepts by importance was to use this information to prune the lower-ranked concepts and produce a more accurate profile. Thus, in Experiment 3, we evaluated the accuracy of the profile using the F-measure at various cutoffs based on the percentage of concepts kept. The percentage with the highest F-measure would determine a cutoff value that gives the most accurate profile. We also wanted to determine whether it is better to have a more detailed profile that includes some non-relevant concepts or to increase accuracy by using a shallower profile. Experiment 4 looks at the effect of profile depth on profile accuracy.
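The cutoff evaluation used in Experiments 2 and 3 can be summarized in a short sketch. This is one plausible reading of the procedure, not the authors' scripts: rank the profile concepts by the chosen criterion, truncate at a percentage cutoff, and compute precision, recall, and F-measure against the user's relevance judgments.

```python
def f_measure_at_cutoffs(ranked_concepts, relevant,
                         cutoffs=(0.05, 0.10, 0.20, 0.30, 0.40, 0.50,
                                  0.60, 0.70, 0.80, 0.90, 1.00)):
    """ranked_concepts: profile concepts ordered best-first, by weight or by
    number of associated pages. relevant: set of concepts judged relevant.
    Returns {cutoff: (precision, recall, F-measure)} for each percentage kept."""
    results = {}
    for c in cutoffs:
        kept = ranked_concepts[:max(1, round(c * len(ranked_concepts)))]
        hits = sum(1 for concept in kept if concept in relevant)
        precision = hits / len(kept)
        recall = hits / len(relevant) if relevant else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        results[c] = (precision, recall, f)
    return results
```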
5. Experiments

The process of collecting and filtering user data, as explained in Section 3.3, was carried out on a daily basis for one month for a total of 12 subjects, of whom 6 participated throughout the entire study. The entire process consisted of determining profile stability first and then building three profiles: an initial profile; a profile after one week of browsing following stability; and a third profile built after a second week of browsing following stability. Each profile was built with concepts from the top three levels of the ODP ontology. Although multiple profiles were created, to lessen the burden on our subjects we merged all concepts from all techniques into a single profile on which to collect judgments. The profile was presented to the subject (see Figure 3.1), and they were asked to identify the non-relevant concepts present in their profiles by crossing them out. For each non-relevant concept, subjects were asked to indicate whether only the concept on the third level was non-relevant or whether the parent concepts were also irrelevant. Then, to analyze the results for each technique, we made use of only the judgments for the concepts produced by that technique. Thus, although the data was collected simultaneously, each goal is analyzed separately.

5.1. Experiment 1 – Profile convergence

Possible profile convergence is evaluated by monitoring the browsing habits of the actual users from the first day of their participation. Our goal was to determine how much information was needed to build a stable user profile, where stability is defined as a profile that exhibits little or no change in its most highly ranked concepts. By examining the users' browser histories, it is clear that users vary in their browsing habits and therefore the number of concepts in their profiles varies greatly. As discussed in (Trajkova, 2003), the number of concepts in the user profiles continuously increases, and the total number of concepts over time did not show any convergence. User profiles also did not show strong convergence when the total number of concepts in the profiles was plotted against the number of collected Web pages. However, when only the top 50% of the concepts (ranked by number of associated pages) were considered, the profile behavior converges. When the profile was built on a daily basis, adding each day's new pages to the existing profile, some convergence was observed when comparing the similarity between the top 50% of the concepts. However, since users varied greatly in their daily browsing habits, the strongest convergence was detected when profiles were built not based on time but rather based on the number of pages collected. In this case, we built a series of profiles for each user based on each group of 10 Web pages collected. That is, P1 was built by classifying the first 10 pages visited by a user, P2 by classifying the first 20 Web pages, P3 the first 30, etc. Profile stability was measured by calculating the shared concepts in the top 50% of the concepts between adjacent profiles P(N-1) and P(N). The similarity between the top 50% of the concepts in the profiles, in 10-page increments, is shown in Figure 5.1. Note that after approximately 70 Web pages have been classified, all subjects' profiles exhibit little change.
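The stability measure of Experiment 1 can be stated compactly: build P1, P2, ... from growing 10-page prefixes of the browsing history and measure the overlap of the top 50% of concepts between adjacent profiles. The sketch below is one reasonable formalization (it uses set overlap as the similarity); it reuses the hypothetical build_profile helper from the Section 3 sketch.

```python
def top_half(page_counts):
    """Top 50% of profile concepts, ranked by the number of associated pages."""
    ranked = sorted(page_counts, key=page_counts.get, reverse=True)
    return set(ranked[:max(1, len(ranked) // 2)])

def stability_series(classified_pages, step=10):
    """classified_pages: chronologically ordered (concept_path, similarity) pairs.
    Builds P1, P2, ... from growing prefixes of `step` pages and returns the
    overlap of the top 50% of concepts between each pair of adjacent profiles."""
    overlaps, prev = [], None
    for n in range(step, len(classified_pages) + 1, step):
        _, counts = build_profile(classified_pages[:n])  # from the Section 3 sketch
        top = top_half(counts)
        if prev is not None:
            overlaps.append(len(top & prev) / len(top | prev))
        prev = top
    return overlaps
```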
[Figure 5.1 plots, for each of the six subjects, the similarity between the top 50% of the concepts (percentage, y-axis) against the number of classified Web pages (10 to 150 in increments of 10, x-axis).]
Figure 5.1: Profile convergence – similarity between the top 50% of the concepts versus the number of classified Web pages

5.2. Experiment 2 – Rank ordering

In this experiment, we analyzed the accuracy of the profile when the concepts were ordered by the number of Web pages and by the accumulated similarity weight associated with each concept. For each technique, we calculated the average position of the non-relevant concepts in the user profiles. An ordering that produces a larger value places non-relevant concepts further down the list and is thus preferred. As shown in Table 5.1, both orderings give similar results; however, there is a slight advantage when ordering the concepts by the number of visited Web pages in the concept rather than by the concept weight (18.9 versus 18.4, or 0.547 versus 0.530 after normalizing the value by the total number of concepts in the profile). Although this difference was not significant (p = 0.39), the rest of the experiments were conducted with user profiles whose concepts were ordered by the number of urls the user had visited related to each concept.
User     | #Concepts | Wt   | #urls | Normalized Wt | Normalized #urls
1        | 34        | 12.3 | 13.0  | 0.363         | 0.382
2        | 29        | 16.3 | 15.9  | 0.562         | 0.547
3        | 35        | 24.2 | 24.2  | 0.691         | 0.691
4        | 42        | 24.1 | 24.9  | 0.574         | 0.593
5        | 35        | 18.0 | 17.2  | 0.514         | 0.491
6        | 32        | 15.3 | 18.5  | 0.479         | 0.578
Average: |           | 18.4 | 18.9  | 0.530         | 0.547
Table 5.1: Average position of non-relevant concepts in user profiles ordered by concept weight and by the number of Web pages associated with each concept

5.3. Experiment 3 – Pruning non-relevant concepts

The next issue we addressed was determining a threshold value that could remove non-relevant concepts and so create a more accurate profile. For that purpose, recall, precision, and F-measure values were calculated considering the top 5%, 10%, 20%, 30%, ..., 80%, 90%, and 100% of all non-zero concepts in the profile. Figures 5.1a and 5.1b show the precision and recall calculated for each user considering different percentages of the top-ranked concepts.
[Figure 5.1a plots precision (0.0 to 1.2, y-axis) for each of the six subjects against the percentage of top-ranked concepts kept (5% to 100%, x-axis).]

Figure 5.1a: Precision – calculated for each user considering different percentages of the top concepts ordered by # of Web pages
[Figure 5.1b plots recall (0.0 to 1.2, y-axis) for each of the six subjects against the percentage of top-ranked concepts kept (5% to 100%, x-axis).]

Figure 5.1b: Recall – calculated for each user considering different percentages of the top concepts ordered by # of Web pages

When evaluating the user profiles, we noticed that the non-relevant concepts are usually not confined to the low-ranked positions. Because of this spread of the non-relevant concepts throughout the profiles, for almost all users the highest accuracy (as measured by the F-measure) was achieved by keeping all the concepts rather than pruning any. This is confirmed by the increasing trend of the curve in Figure 5.1c, which shows that there is no cutoff point that yields a more accurate profile when the F-measure is averaged over all subjects.
[Figure 5.1c plots the average F-measure (0.0 to 0.9, y-axis) against the percentage of top-ranked concepts kept (5% to 100%, x-axis).]
Figure 5.1c: Average F-measure – calculated considering different percentages of the top concepts ordered by # of Web pages
We also examined this question when ordering the concepts in the profile by the weights assigned to them. The assumption was that a cutoff value could be determined as a function of the concept weight, for example showing only those concepts in the profile whose weight is above the average weight, or above some other weight threshold, for that profile. However, the results were the same: there was no threshold that could be used to remove concepts. Our rationale for this result is that, in general, the classifier is fairly accurate and most concepts identified are indeed relevant. The noise introduced by misclassification does not follow any particular pattern. It may be that, with a more mature profile, the noise concepts would trickle down, but we were not able to observe that.

5.4. Experiment 4 – User profile depth

The last issue addressed in this study is determining the effect of the depth of the reference ontology on the accuracy of the profile. For this purpose, we created profiles consisting of only level 1 concepts, profiles based on both levels 1 and 2, and profiles based on all three levels. For each profile, we calculated the corresponding precision based on the users' relevance judgments. The first level in the ODP ontology contains 15 concepts, and that is the maximum number of concepts the users could see. Most of the users were presented with most of these concepts, representing their broad interests in Art, Business, Computers, etc. For all but one of the users, showing this format of profile resulted in 100% precision; however, there was little or no information about specific user interests. Moreover, this type of profile represents all of the users similarly, not really providing personalization at all. As shown in Figure 5.2, the number of concepts in the profile increases with the level, as would be expected, allowing for an average of 61 concepts per user if two levels are used and 100 concepts per profile if three levels are used. Thus, the specificity of the profile increases as the depth increases.
[Figure 5.2 shows, for each depth, the average number of relevant and non-relevant concepts per profile: level 1 ≈ 13 relevant and 1 non-relevant; level 2 ≈ 43.17 relevant and 17.5 non-relevant; level 3 ≈ 67.33 relevant and 32.67 non-relevant.]
Figure 5.2: Level comparison – the average number of relevant and non-relevant concepts in the user profiles versus the number of levels in the ontology

The precision of the profiles, as shown in Figure 5.3, drops from 92.4% to 70.4% when the profile expands from one level to two. Since level one profiles provide essentially no personalization, this is a price that must be paid. However, when the profile is expanded from two levels to three, the precision in the top 50% drops only an additional 5%, to 65.6%. Since concepts deeper in the ontology are more specific, this allows us to build a more detailed profile with only a small drop in precision. Thus, in our current work, we use a 3-level reference ontology.
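A sketch of the depth analysis in Experiment 4, under the assumption that concept ids are ODP-style paths whose depth is the number of path components; it simply truncates the profile at a maximum depth and computes precision from the relevance judgments. It is illustrative only.

```python
def precision_at_depth(profile_concepts, relevant, max_depth):
    """profile_concepts: concept paths such as 'Arts/Music/Jazz', whose depth is
    the number of path components. Keeps only concepts at depth <= max_depth and
    computes precision against the user's relevance judgments."""
    kept = [c for c in profile_concepts if len(c.split("/")) <= max_depth]
    if not kept:
        return 0.0
    return sum(1 for c in kept if c in relevant) / len(kept)
```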
[Figure 5.3 shows the fraction of relevant and non-relevant concepts per profile at each depth, with the x-axis labeled by the average number of concepts per profile: at 14 concepts (level 1), about 0.92 relevant and 0.07 non-relevant; at 60.67 concepts (levels 1-2), about 0.70 relevant and 0.29 non-relevant; at 100 concepts (levels 1-3), about 0.66 relevant and 0.33 non-relevant.]
Figure 5.3: Levels comparison – percentages of relevant and non-relevant concepts in user profiles with different depth levels

In addition to the numerical analysis, users were asked for their subjective opinion about the preferred format of their profile. The opinions were divided: some users prefer more general concepts in their profiles, some more specific. When comparing the user preferences, it was also noticeable that the users who browse less prefer profiles with concepts only in the top two levels. They usually do not have multiple sub-concepts at level three, and therefore it does not make much difference whether the parent or the child is displayed. On the other hand, users with heavier browsing habits usually have many interests and several level three concepts belonging to the same parent concept. They preferred to see the more specific profile, including concepts from level 3. A hybrid solution may be best: if a concept at level 3 has no siblings, we can remove that concept and display only its level 2 parent. This gives a precision of 68.7%, roughly midway between the precision of the level 2 and level 3 profiles, with very little loss of information. Figure 5.4 summarizes the precision of the four techniques evaluated.
[Figure 5.4 plots precision by profile depth: level 1 (14 concepts) 92.37%; level 2 (60.67 concepts) 70.36%; level 3 (100 concepts) 65.56%; hybrid (100 concepts) 68.65%.]
Figure 5.4: Profile precision for profiles with different number of levels and the hybrid type of profiles with collapsed single child nodes
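The hybrid profile collapses a level-3 concept into its level-2 parent when it has no level-3 siblings in the profile. Below is a minimal sketch of that rule, again assuming path-style concept ids; the helper name is ours, not the paper's.

```python
from collections import defaultdict

def collapse_only_children(profile_concepts):
    """Hybrid profile: replace each level-3 concept that is the only level-3
    child of its level-2 parent in the profile with that parent."""
    children = defaultdict(list)
    for c in profile_concepts:
        parts = c.split("/")
        if len(parts) == 3:
            children["/".join(parts[:2])].append(c)

    hybrid = set()
    for c in profile_concepts:
        parts = c.split("/")
        if len(parts) == 3 and len(children["/".join(parts[:2])]) == 1:
            hybrid.add("/".join(parts[:2]))  # display only the level-2 parent
        else:
            hybrid.add(c)
    return hybrid
```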
6. Conclusions and Future Work

The work presented in this paper focuses on building an accurate user profile based on concepts from a predefined ontology. The profile is built by monitoring the user's browsing over time and classifying the contents of the visited Web pages. The experimental results show that the user profile can be built autonomously, without user interaction, and achieves an average accuracy of 69% when no concepts are pruned. The profile converges after approximately 70 urls are browsed, and our results suggest that concepts with more Web documents associated with them are considered more relevant, although a larger user study would be needed to establish the significance of this result. Our evaluation shows that precision does improve when a shallow reference ontology is used; however, such profiles contain little user-specific information. Removing concepts from the profile that have no siblings and showing only their parent improves the accuracy with little loss of information. Since the user profile also contains non-relevant concepts, there is still room for improvement. The non-relevant concepts need to be detected and removed in order to improve the accuracy. We are interested in comparing the results if an SVM classifier is used instead of the vector space model. We are also interested in conducting a long-term user study that would allow us to monitor user profiles as they adapt over time. This may be helpful in removing irrelevant concepts and, more interestingly, may allow us to distinguish between short-term and long-term user interests. From a practical point of view, we are working on incorporating automatically generated user profiles into KeyConcept (Gauch et al, 2003), a conceptual search engine that adapts search results according to the user's interests, and ChatTrack (Bengel et al, 2004), in which user profiles are built from saved chat sessions.
7. Acknowledgements

This work was partially supported by NSF ITR 0225676 (SEEK).
8. References

Baeza-Yates R., Ribeiro-Neto B. (1999). Modern Information Retrieval. Addison-Wesley, 1999.

Bengel J., Gauch S., Mittur E., Vijayaraghavan R. (2004). ChatTrack: Chat Room Topic Detection Using Classification. Submitted to The Second Symposium on Intelligence and Security Informatics, Tucson, Arizona, June 2004 (in review).

Chen L., Sycara K. (1997). WebMate: A Personal Agent for Browsing and Searching. In Proceedings of the 2nd International Conference on Autonomous Agents and Multi Agent Systems (AGENTS '98), ACM, pp. 132-139, 1998.

Chen C., Chen M., Sun Y. (2002). PVA: A Self-Adaptive Personal View Agent. Journal of Intelligent Information Systems, pp. 173-194, 2002.

Dumais S., Cutrell E., Cadiz JJ., Jancke G., Sarin R., Robbins D. (2003). Stuff I've Seen: A System for Personal Information Retrieval and Re-use. In Proceedings of SIGIR 2003, Toronto, Canada.

Frakes W., Baeza-Yates R. (1992). Information Retrieval: Data Structures & Algorithms. Prentice Hall, 1992.

Gauch S., Madrid J., Induri S., Ravindran D., Chadlavada S. (2003). KeyConcept: A Conceptual Search Engine. Information and Telecommunication Technology Center, Technical Report ITTC-FY2004-TR-8646-37, University of Kansas.

Gauch S., Chaffee J., Pretschner A. (2004). Ontology-Based Personalized Search. Web Intelligence and Agent Systems (in press).

Gruber T. (1993). Toward Principles for the Design of Ontologies Used for Knowledge Sharing. Presented at the Padua Workshop on Formal Ontology, March 1993.

Kim H., Chan P. (2003). Learning Implicit User Interest Hierarchy for Context in Personalization. In Proceedings of the 2003 International Conference on Intelligent User Interfaces, Miami, Florida.

Kurki T., Jokela S., Sulonen R., Turpeinen M. (1999). Agents in Delivering Personalized Content Based on Semantic Metadata. In Proceedings of the 1999 AAAI Spring Symposium Workshop on Intelligent Agents in Cyberspace, Stanford, USA, 1999, pp. 84-93. As cited in (Pretschner, 1999).

Lieberman H. (1995). Letizia: An Agent That Assists Web Browsing. In Proceedings of the International Joint Conference on Artificial Intelligence, Montreal, CA, 1995.

McGowan JP., Kushmerick N., Smyth B. (2002). Who Do You Want To Be Today? Web Personae for Personalized Information Access. In Proceedings of the International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Malaga, 2002.

Middleton S., De Roure D., Shadbolt N. (2001). Capturing Knowledge of User Preferences: Ontologies in Recommender Systems. In Proceedings of the 1st International Conference on Knowledge Capture (K-CAP 2001), Victoria, BC, Canada, 2001.

Middleton S., Alani H., Shadbolt N., De Roure D. (2002). Exploiting Synergy Between Ontologies and Recommender Systems. Semantic Web Workshop 2002, at the Eleventh International World Wide Web Conference, Hawaii, USA, 2002.

Mitchell M. (1996). An Introduction to Genetic Algorithms. A Bradford Book, The MIT Press, 1996. ISBN 0-262-63185-7.

Mizzaro S., Tasso C. (2002). Ephemeral and Persistent Personalization in Adaptive Information Access to Scholarly Publications on the Web. Artificial Intelligence Laboratory, Department of Mathematics and Computer Science, 2002.

Mladenic D. (1999). Text-Learning and Related Intelligent Agents. Revised version in IEEE Expert, special issue on Applications of Intelligent Information Retrieval, 1999.

Mobasher B., Dai H., Luo T., Nakagawa M., Sun Y., Wiltshire J. (2000). Discovery of Aggregate Usage Profiles for Web Personalization. In Proceedings of the Web Mining for E-commerce Workshop (KDD-2000), Boston, MA, 2000.

OBIWAN Project (1999). "OBIWAN – University of Kansas". http://www.ittc.ku.edu/obiwan/, 1999.

Open Directory Project (2002). http://dmoz.org, April 2002.

Pazzani M., Muramatsu J., Billsus D. (1996). Syskill & Webert: Identifying Interesting Web Sites. In Proceedings of the 13th National Conference on Artificial Intelligence, Portland, OR, 1996.

Pretschner A. (1999). Ontology-Based Personalized Search. Master's thesis, The University of Kansas, Lawrence, KS, 1999.

Rich E. (1979). User Modeling via Stereotypes. Cognitive Science, 3:329-354, 1979.

Salton G. (1989). Automatic Text Processing. Addison-Wesley, 1989. ISBN 0-201-12227-8.

Tanudjaja F., Mui L. (2001). Persona: A Contextualized and Personalized Web Search. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS'02), Big Island, Hawaii, 2002, volume 3, p. 53.

Thomas C., Fischer G. (1997). Using Agents to Personalize the Web. In Proceedings of the 2nd International Conference on Intelligent User Interfaces, Orlando, Florida, USA, 1997, pp. 53-60.

Trajkova J. (2003). Improving Ontology-Based User Profiles. M.S. Thesis, EECS, University of Kansas, August 2003.

Zhu X., Gauch S., Gerhard L., Kral N., Pretschner A. (1999). Ontology-Based Web Site Mapping for Information Exploration. In Proceedings of the 8th International Conference on Information and Knowledge Management (CIKM'99), Kansas City, MO, November 1999, pp. 188-194.