Improving Transfer Learning by Introspective Reasoner

Zhongzhi Shi1, Bo Zhang1,2, and Fuzhen Zhuang1

1 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, 100190, Beijing, China
2 Graduate School of the Chinese Academy of Sciences, 100039, Beijing, China
{shizz,zhangb,zhuangfz}@ics.ict.ac.cn

Abstract. Traditional learning techniques assume that training and test data are drawn from the same distribution, and thus they are not suitable for situations where new unlabeled data are obtained from fast-evolving, related but different information sources. This leads to the cross-domain learning problem, which aims at adapting knowledge learned from one or more source domains to target domains. Transfer learning has made great progress, and many approaches and algorithms have been proposed. However, negative transfer causes trouble in problem solving and is difficult to avoid. In this paper we propose an introspective reasoner to overcome negative transfer. Introspective learning exploits explicit representations of a system's own organization and desired behavior to determine when, what, and how to learn in order to improve its own reasoning. Following the transfer learning process, we present the architecture of an introspective reasoner for transductive transfer learning.

Keywords: Introspective reasoner, Transfer learning, Negative transfer.

1 Introduction

Introspection was originally a psychological research method: it investigates psychological phenomena and processes according to the reports of test subjects or the experiences they describe themselves. Introspective learning introduces the concept of introspection into machine learning. That is to say, by checking and attending to the knowledge processing and reasoning methods of the intelligent system itself, and finding problems from failures or poor efficiency, introspective learning forms its own learning goals and then improves its problem-solving methods [1]. Cox noted that research on introspective reasoning has a long history in artificial intelligence, psychology, and cognitive science [2]. Leake and Wilson pointed out that in introspective learning approaches, a system exploits explicit representations of its own organization and desired behavior to determine when, what, and how to learn in order to improve its own reasoning [3]. Introspective learning has become more and more important in recent years as AI systems have begun to address real-world problem domains, characterized by a high
Z. Shi, D. Leake, and S. Vadera (Eds.): IIP 2012, IFIP AICT 385, pp. 28–39, 2012. © IFIP International Federation for Information Processing 2012


degree of complexity and uncertainty. In the early 1980s, introspective reasoning was implemented as planning within a meta-knowledge layer. SOAR [4] employs a form of introspective reasoning by learning meta-rules that describe how to apply rules about domain tasks and acquire knowledge. Birnbaum et al. proposed the use of self-models within case-based reasoning [5]. Cox and Ram proposed a set of general approaches to introspective reasoning and learning that automatically select the appropriate learning algorithms when reasoning failures arise [6]. They defined a taxonomy of causes of reasoning failures and a taxonomy of learning goals, used for analyzing the traces of reasoning failures and responding to them. The main step in the introspective learning process is to look for the cause of a failure according to its symptoms. An introspective learning system should evaluate its reasoning as well as find errors. Blame assignment is like troubleshooting, with a mapping function from failure symptom to failure cause. A series of processes based on case-based reasoning, such as search, adjustment, evaluation, and retention, can accomplish blame assignment and failure explanation and improve the efficiency of evaluation and explanation, so case-based reasoning is an effective approach. Introspective reasoning can be a useful tool for autonomously improving the performance of a CBR system by reasoning about system problem-solving failures. Fox and Leake described a case-based system called ROBBIE, which uses introspective reasoning to model, explain, and recover from reasoning failures [7]. Fox and Leake also took a model-based approach to recognizing and repairing reasoning failures. Their particular form of introspective reasoning focuses on retrieval failures and case index refinement.
Work by Oehlmann, Edwards, and Sleeman addressed the related topic of re-indexing cases through introspective questioning to facilitate multiple viewpoints during reasoning [8]. Leake, Kinley, and Wilson described how introspective reasoning can also be used to learn adaptation knowledge in the form of adaptation cases [9]. Zhang and Yang adopted introspective learning to maintain feature weights in case-based reasoning [10]. Craw applied introspective learning to build case-based reasoning knowledge containers [11]. Introspective reasoning to repair problems may also be seen as related to the use of confidence measures for assessing the quality of the solutions proposed by a CBR system [12]. Arcos, Mulayim, and Leake proposed an introspective model for autonomously improving the performance of a CBR system by reasoning about system problem-solving failures [13]. The introspective reasoner monitors the reasoning process, determines the causes of failures, and performs actions that will affect future reasoning processes. Soh and Blank developed the CBRMETAL system, which integrates case-based reasoning and meta-learning to introspectively refine the reasoning of an intelligent tutoring system [14]. CBRMETAL specifically improves the case base, the similarity heuristics, and the adaptation heuristics through reinforcement, and adheres to a set of six principles to minimize interference during meta-learning. Leake and Powell proposed the WebAdapt approach, which can introspectively correct failures in its adaptation process and improve its selection of Web sources to mine [15]. Transfer learning has already achieved significant success. One of the major challenges in developing transfer methods is to produce positive transfer between


appropriately related tasks while avoiding negative transfer between tasks that are less related. So far there are a few approaches for avoiding negative transfer. Negative transfer happens when the source domain data and task contribute to reduced performance of learning in the target domain. Although avoiding negative transfer is a very important issue, little research has been published on this topic. Torrey and Shavlik summarized three methods: rejecting bad information, choosing a source task, and modeling task similarity [16]. The option-based transfer in relational reinforcement learning proposed by Croonenborghs et al. is an example of an approach that naturally incorporates the ability to reject bad information [17]. Kuhlmann and Stone looked at finding similar tasks when each task is specified in a formal language [18]. They constructed a graph to represent the elements and rules of a task. This allows them to find identical tasks by checking for graph isomorphism, and by creating minor variants of a target-task graph, they can also search for similar tasks. Eaton and DesJardins proposed choosing from among candidate solutions to a source task rather than from among candidate source tasks [19]. Carroll and Seppi developed several similarity measures for reinforcement learning tasks, comparing policies, value functions, and rewards [20]. Besides the methods mentioned above, Bakker and Heskes adopted a Bayesian approach in which some of the model parameters are shared across all tasks and others are more loosely connected through a joint prior distribution that can be learned from the data [21]. The data are clustered based on the task parameters, and tasks in the same cluster are supposed to be related to each other. Argyriou et al. divided the learning tasks into groups [22]. Tasks within each group are related by sharing a low-dimensional representation, which differs among different groups.
As a result, tasks within a group find it easier to transfer useful knowledge. Although there are many transfer learning algorithms, few can guarantee the avoidance of negative transfer. In this paper we propose a method that adopts an introspective reasoner to avoid negative transfer. The rest of this paper is organized as follows. The next section describes transfer learning. Section 3 introduces transductive transfer learning. Section 4 discusses the introspective reasoner. The failure taxonomy in the introspective reasoner is described in Section 5. Finally, conclusions and future work are given.

2 Transfer Learning

Transfer learning is a hot research topic in the machine learning and data mining areas. The study of transfer learning is motivated by the fact that people can intelligently apply previously learned knowledge to solve new problems faster or with better solutions. The fundamental motivation for transfer learning in the field of machine learning was discussed in the NIPS-95 workshop on Learning to Learn. Since 1995, research on transfer learning has attracted more and more attention. In 2005, the Broad Agency Announcement (BAA) 05-29 of the Defense Advanced Research Projects Agency (DARPA)'s Information Processing Technology Office (IPTO) gave a new mission statement for transfer learning: the ability of a system to recognize and apply knowledge


and skills learned in previous tasks to novel tasks. In this definition, transfer learning aims to extract knowledge from one or more source tasks and apply it to a target task [23]. Fig. 1 shows the transfer learning process. The formal definitions of transfer learning are given as follows. For simplicity, we consider only the case where there is one source domain, DS, and one target domain, DT.

Definition 1. Source domain data DS

$$D_S = \{(x_{S_1}, y_{S_1}), \ldots, (x_{S_n}, y_{S_n})\} \qquad (1)$$

where $x_{S_i} \in X_S$ is the data instance and $y_{S_i} \in Y_S$ is the corresponding class label.

Definition 2. Target domain data DT

$$D_T = \{(x_{T_1}, y_{T_1}), \ldots, (x_{T_m}, y_{T_m})\} \qquad (2)$$

where the input $x_{T_i} \in X_T$ and $y_{T_i} \in Y_T$ is the corresponding output.

Definition 3. Transfer learning: Given a source domain DS and learning task TS, and a target domain DT and learning task TT, transfer learning aims to help improve the learning of the target predictive function in DT using the knowledge in DS and TS, where DS ≠ DT or TS ≠ TT.

Definition 4. Negative transfer: In transfer learning, if the source task is not sufficiently related to the target task, or if the relationship is not well leveraged by the transfer method, performance may not only fail to improve but may actually decrease.

Fig. 1. Transfer learning process

The goal of our introspective reasoning system is to detect reasoning failures, refine the function of the reasoning mechanisms, and improve system performance on future problems. To achieve this goal, the introspective reasoner monitors the reasoning process, determines the possible causes of its failures, and performs actions that will affect future reasoning processes. In this paper, we focus only on negative transfer in the transductive transfer learning setting; that is, the source and target tasks are the same, while the source and target domains are different. In this situation, no labeled data in the target domain are available, while a lot of labeled data in the source domain are available.

3 Transductive Transfer Learning

In transductive transfer learning, the source and target tasks are the same, while the source and target domains are different. In this situation, no labeled data in the target domain are available, while a lot of labeled data in the source domain are available. Here we give the definition of transductive transfer learning as follows:

Definition 5. Transductive transfer learning: Given a source domain DS and a corresponding learning task TS, and a target domain DT and a corresponding learning task TT, transductive transfer learning aims to improve the learning of the target predictive function $f_T(\cdot)$ in DT using the knowledge in DS and TS, where DS ≠ DT and TS = TT.

It is clear that transductive transfer learning has different data sets but the same learning task. In transductive transfer learning, we want to learn an optimal model for the target domain by minimizing the expected risk [23]:

$$\theta^* = \arg\min_{\theta \in \Theta} \sum_{(x,y) \in D_T} P(D_T)\, l(x, y, \theta) \qquad (3)$$

where l(x, y, θ) is a loss function that depends on the parameter θ. Since no labeled data in the target domain are observed in training, we have to learn a model from the source domain data instead. If P(DS) = P(DT), then we may simply learn the model for use in the target domain by solving the following optimization problem:

$$\theta^* = \arg\min_{\theta \in \Theta} \sum_{(x,y) \in D_S} P(D_S)\, l(x, y, \theta) \qquad (4)$$

If P(DS) ≠ P(DT), we need to modify the above optimization problem to learn a model with high generalization ability for the target domain, as follows:

$$\theta^* = \arg\min_{\theta \in \Theta} \sum_{(x,y) \in D_S} \frac{P(D_T)}{P(D_S)}\, P(D_S)\, l(x, y, \theta) \approx \arg\min_{\theta \in \Theta} \sum_{i=1}^{n_S} \frac{P_T(x_{T_i}, y_{T_i})}{P_S(x_{S_i}, y_{S_i})}\, l(x_{S_i}, y_{S_i}, \theta) \qquad (5)$$

By adding different penalty values to each instance $(x_{S_i}, y_{S_i})$ with the corresponding weight $\frac{P_T(x_{T_i}, y_{T_i})}{P_S(x_{S_i}, y_{S_i})}$, we can learn a precise model for the target domain. Since $P(Y_S|X_S) = P(Y_T|X_T)$, the differences between P(DS) and P(DT) are caused by $P(X_S)$ and $P(X_T)$, and $\frac{P_T(x_{T_i}, y_{T_i})}{P_S(x_{S_i}, y_{S_i})} = \frac{P(x_{T_i})}{P(x_{S_i})}$.

If we can estimate $\frac{P(x_{T_i})}{P(x_{S_i})}$ for each instance, we can solve the transductive transfer learning problems. When the distributions on the source and target domains do not match, we face sample selection bias or covariate shift. Specifically, given a domain of patterns X and labels Y, we obtain source training samples $Z_S = \{(x_{S_1}, y_{S_1}), \ldots, (x_{S_{n_S}}, y_{S_{n_S}})\} \subseteq X \times Y$ drawn from a Borel probability distribution $P_S(x, y)$, and target samples $Z_T = \{(x_{T_1}, y_{T_1}), \ldots, (x_{T_{n_T}}, y_{T_{n_T}})\}$ drawn from another such distribution $P_T(x, y)$. Huang et al. proposed a kernel-mean matching (KMM) algorithm to learn $\frac{P(x_{T_i})}{P(x_{S_i})}$ directly by matching the means between the source domain data and the target domain data in a reproducing-kernel Hilbert space (RKHS) [24]. KMM can be rewritten as the following optimization problem:

$$\min_{\beta} \; \frac{1}{2}\beta^T K \beta - k^T \beta \qquad \text{s.t. } \beta_i \in [0, B] \text{ and } \Big|\sum_{i=1}^{n_S} \beta_i - n_S\Big| \le n_S \varepsilon \qquad (6)$$

where

$$K = \begin{bmatrix} K_{S,S} & K_{S,T} \\ K_{T,S} & K_{T,T} \end{bmatrix}$$

and $K_{i,j} = k(x_i, x_j)$; $K_{S,S}$ and $K_{T,T}$ are kernel matrices for the source domain data and the target domain data, respectively. Note that (6) is a quadratic program which can be solved efficiently using interior point methods or any other successive optimization procedure. Huang et al. proved that

$$P(x_{T_i}) = \beta_i P(x_{S_i}) \qquad (7)$$
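As an illustration only (not part of the paper), the KMM program of Eq. (6) can be sketched with an off-the-shelf solver. Here K is taken as the source kernel matrix $K_{S,S}$ and k as the vector matching the target mean in the RKHS, following the usual KMM formulation; the RBF kernel, the bound B, the ridge term, and the tolerance ε are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(X1, X2, gamma=0.5):
    """Gaussian kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def kmm_weights(Xs, Xt, B=10.0):
    """Solve the KMM quadratic program of Eq. (6):
    min_b 0.5*b'Kb - k'b  s.t.  0 <= b_i <= B, |sum(b) - nS| <= nS*eps."""
    nS, nT = len(Xs), len(Xt)
    eps = (np.sqrt(nS) - 1) / np.sqrt(nS)          # a common tolerance choice
    K = rbf(Xs, Xs) + 1e-6 * np.eye(nS)            # K_{S,S}, ridged for stability
    k = (nS / nT) * rbf(Xs, Xt).sum(axis=1)        # matches the target mean in RKHS
    obj = lambda b: 0.5 * b @ K @ b - k @ b
    cons = [{"type": "ineq", "fun": lambda b: nS * eps - (b.sum() - nS)},
            {"type": "ineq", "fun": lambda b: nS * eps + (b.sum() - nS)}]
    res = minimize(obj, np.ones(nS), method="SLSQP",
                   bounds=[(0.0, B)] * nS, constraints=cons,
                   options={"maxiter": 200})
    return res.x

rng = np.random.default_rng(1)
Xs = rng.normal(0.0, 1.0, size=(60, 1))   # source: N(0, 1)
Xt = rng.normal(1.0, 0.5, size=(60, 1))   # target: N(1, 0.25)
beta = kmm_weights(Xs, Xt)                # importance weights for source points
```

In this toy setup the learned weights should be larger for source points lying near the target mass around x = 1 than for points far from it.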

The advantage of using KMM is that it can avoid performing density estimation of either $P(x_{S_i})$ or $P(x_{T_i})$, which is difficult when the size of the data set is small. A common assumption in supervised learning is that training and test samples follow the same distribution. However, this basic assumption is often violated in practice, and then standard machine learning methods do not work as desired. The situation where the input distribution P(x) differs between the training and test phases but the conditional distribution of output values P(y|x) remains unchanged is called covariate shift. Sugiyama et al. proposed the Kullback-Leibler Importance Estimation Procedure (KLIEP) to estimate $\frac{P(x_{T_i})}{P(x_{S_i})}$ directly [25], based on the minimization of the Kullback-Leibler divergence. The optimization criterion is as follows:

$$\max_{\{\theta_l\}_{l=1}^{b}} \; \sum_{j=1}^{n_T} \log\Big(\sum_{l=1}^{b} \theta_l \phi_l(x_{T_j})\Big) \qquad (8)$$

$$\text{subject to } \sum_{i=1}^{n_S} \sum_{l=1}^{b} \theta_l \phi_l(x_{S_i}) = n_S \;\text{ and }\; \theta_l \ge 0$$
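A rough KLIEP sketch (ours, not the authors' implementation): Gaussian basis functions are centred on target points and θ is fit by projected gradient ascent on the objective of Eq. (8), rescaling after each step to keep the normalization constraint; basis count, bandwidth, and step size are illustrative.

```python
import numpy as np

def kliep_weights(Xs, Xt, n_basis=20, sigma=0.5, lr=1e-3, iters=300):
    """KLIEP sketch for Eq. (8): model w(x) = sum_l theta_l * phi_l(x) with
    Gaussian basis functions centred on target points, maximise
    sum_j log w(x_Tj) subject to sum_i w(x_Si) = nS and theta >= 0."""
    rng = np.random.default_rng(0)
    centres = Xt[rng.choice(len(Xt), size=min(n_basis, len(Xt)), replace=False)]
    def phi(X):                                    # (n, n_basis) design matrix
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    Ps, Pt = phi(Xs), phi(Xt)
    nS = len(Xs)
    theta = np.ones(Ps.shape[1])
    for _ in range(iters):
        wt = Pt @ theta                            # w(x_Tj) on the target sample
        theta += lr * (Pt / wt[:, None]).sum(0)    # gradient of the log objective
        theta = np.clip(theta, 0.0, None)          # projection: theta >= 0
        theta *= nS / (Ps @ theta).sum()           # enforce sum_i w(x_Si) = nS
    return Ps @ theta                              # importance weights on source

rng = np.random.default_rng(2)
Xs = rng.normal(0.0, 1.0, size=(100, 1))
Xt = rng.normal(1.0, 0.5, size=(100, 1))
w = kliep_weights(Xs, Xt)
```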

It can be integrated with cross-validation to perform model selection automatically in two steps: a) estimating the weights of the source domain data, and b) training models on the reweighted data.

The Naïve Bayes classifier is effective for text categorization [26]. In the text categorization problem, a document d ∈ D corresponds to a data instance, where D denotes the training document set. The document d can be represented as a bag of words; each word w ∈ d comes from a set W of all feature words. Each document d is associated with a class label c ∈ C, where C denotes the class label set. The Naïve Bayes classifier estimates the conditional probability as follows:

$$P(c \mid d) = P(c) \prod_{w \in d} P(w \mid c) \qquad (9)$$
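A minimal Naive Bayes text classifier in the spirit of Eq. (9). Laplace smoothing is added here as a practical assumption (the equation itself omits smoothing and the normalization constant), and the toy corpus is invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train the Naive Bayes model of Eq. (9): P(c|d) is proportional to
    P(c) * prod_{w in d} P(w|c), with Laplace smoothing on word counts."""
    class_counts = Counter(c for _, c in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, c in docs:
        word_counts[c].update(words)
        vocab.update(words)
    V, N = len(vocab), len(docs)
    def predict(words):
        best, best_lp = None, -math.inf
        for c, nc in class_counts.items():
            total = sum(word_counts[c].values())
            lp = math.log(nc / N)                                    # log P(c)
            for w in words:
                lp += math.log((word_counts[c][w] + 1) / (total + V))  # log P(w|c)
            if lp > best_lp:
                best, best_lp = c, lp
        return best
    return predict

train = [(["cheap", "pills", "offer"], "spam"),
         (["meeting", "agenda", "notes"], "ham"),
         (["cheap", "offer", "now"], "spam"),
         (["project", "meeting", "report"], "ham")]
predict = train_nb(train)
label = predict(["cheap", "offer"])   # classifies as "spam"
```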

Dai et al. proposed NBTC, a transfer-learning algorithm for text classification based on EM-based Naïve Bayes classifiers [27]. Dai et al. also proposed CoCC, a co-clustering-based classification algorithm for out-of-domain documents, which uses co-clustering as a bridge to propagate knowledge from the in-domain to the out-of-domain data [28]. Ling et al. proposed a spectral-classification-based method, CDSC, in which labeled data from the source domains are available for training and unlabeled data from the target domains are to be classified [29]. Based on the normalized-cut cost function, supervisory knowledge is transferred through a constraint matrix, and the regularized objective function finds the consistency between the source domain supervision and the target domain intrinsic structure. Zhuang et al. proposed the CCR3 approach, a consensus regularization framework that exploits distribution differences and learns knowledge among training data from multiple source domains to boost learning performance in a target domain [30].

4 Introspective Reasoner

The goal of our introspective reasoning system is to detect negative transfer, refine the function of the reasoning mechanisms, and improve system performance on future problems. To reach this goal, the introspective reasoner should solve the following problems [31]:

1. Establish standards that determine when the reasoning process should be checked, i.e., monitor the reasoning process;
2. Determine whether reasoning failure takes place according to the standards;


3. Confirm the final reason that leads to the failure;
4. Change the reasoning process in order to avoid similar failures in the future.

The introspective reasoner for transfer learning monitors the transfer learning process, determines the possible causes of its failures, and performs actions that will affect future transfer processes. The architecture of the introspective reasoner is shown in Fig. 2.

Fig. 2. Architecture of Introspective Reasoner

The introspective reasoner consists of monitoring, quality assessment, blame assignment, solution generation, and ontology-based knowledge bases. The monitoring task tracks the transductive transfer learning process. For each learning task solved by the transductive transfer learning system, the monitor generates a trace containing: a) the source data retrieved; b) the ranking criteria applied to the data, together with the values that each criterion produced and the final ranking; and c) the transfer operators that were applied, with the sources to which they were applied and the target changes produced. When the user's final solution is provided to the system, quality assessment is triggered to determine the real quality of the system-generated solution by analyzing the differences between the system's proposed solution and the final solution. Quality assessment provides a result in qualitative terms: positive quality or negative quality. Blame assignment is triggered by a negative quality assessment. It takes as input the differences between the source data and target data and tries to relate the solution differences to the source data. The system searches the ontology-based knowledge bases and selects those entries that apply to the observed solution differences. Solution generation identifies the learning goals related to the negative transfer failure selected in the blame assignment stage. Each failure may be associated with more than one learning goal. For each learning goal, a set of plausible source data changes in the active policies is generated using a predefined ontology-based knowledge base. Failure taxonomy is an important problem in an introspective learning system; it provides clues for failure explanation and for revising the learning target. In the introspective reasoner, the failures are put into knowledge bases that use an ontology as the knowledge representation. In this way, the knowledge is organized to be conceptual, formal, semantically explicit, and shared [32].
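The four components can be pictured as a small pipeline. The following is a toy skeleton, not the authors' implementation: the class name, field names, symptom labels, and knowledge-base contents are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """Trace produced by the monitor for one transfer run (hypothetical schema)."""
    source_data: str       # a) the source data retrieved
    ranking: list          # b) ranking criteria results
    transfer_ops: list     # c) transfer operators applied

# Toy ontology-based knowledge base: failure symptom -> candidate learning goals.
KNOWLEDGE_BASE = {
    "negative_quality": ["re-select source domain", "re-weight source instances"],
}

def quality_assessment(system_solution, final_solution):
    """Compare the system's proposed solution with the user's final solution."""
    return ("positive_quality" if system_solution == final_solution
            else "negative_quality")

def blame_assignment(trace, quality):
    """On a negative assessment, look up learning goals in the knowledge base."""
    if quality != "negative_quality":
        return []
    return KNOWLEDGE_BASE.get(quality, [])

def solution_generation(goals):
    """Turn each learning goal into a (hypothetical) policy change."""
    return ["apply: " + g for g in goals]

trace = Trace("source-A", ranking=["d1", "d2"], transfer_ops=["reweight"])
q = quality_assessment("label-x", "label-y")   # the two solutions differ
actions = solution_generation(blame_assignment(trace, q))
```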

5 Failure Taxonomy in Introspective Reasoner

Failure taxonomy is an important problem in an introspective learning system. A hierarchical failure taxonomy addresses the problem that a flat classification may be too fine or too coarse. Failures can be grouped into broad classes according to the reasoning step involved; this improves the system's introspective revision and accelerates failure comparison. If the taxonomy is fine-grained, failures can be described more clearly, providing more valuable clues. Treating the relationship between the failure taxonomy and failure interpretation properly increases the system's capacity for introspection. The system not only needs to reason out the causes of failure and set introspective learning targets according to the failure symptoms, but also needs the capacity to deal with different kinds of problems. Similarly, failure explanations can also be organized into different layers, such as an abstract level, a detailed level, or multiple dimensions. The layering of the failure taxonomy helps form a reasonable relationship between failure features and failure explanations. In the introspective reasoner, the knowledge bases contain the failure taxonomy and specific knowledge coming from previous experience or expertise. Handling the failure taxonomy through ontology-based knowledge bases makes the taxonomy clearer and the retrieval process more efficient.

[Fig. 3 shows a tree rooted at Transfer Learning with three branches: Inductive Transfer Learning, Transductive Transfer Learning, and Unsupervised Transfer Learning. Transductive Transfer Learning is further divided into Single Domain and Single Task (sample selection bias / covariate shift) and Multi-Domains and Single Task (domain adaptation; transfer algorithms such as CCR3, CoCC, CDSC, ...).]

Fig. 3. Failure taxonomy in introspective reasoner
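One plausible way to encode the Fig. 3 taxonomy in an ontology-like structure is as a nested mapping. The three top-level settings and the leaf algorithm names come from the text, but the exact grouping (e.g., placing CCR3, CoCC, and CDSC under the multi-domain branch) is our reading of the figure, not a claim from the paper.

```python
# Nested-dict encoding of the Fig. 3 tree (grouping is an illustrative reading).
FAILURE_TAXONOMY = {
    "inductive transfer learning": {},
    "transductive transfer learning": {
        "single domain, single task": ["sample selection bias / covariate shift"],
        "multi-domains, single task": ["domain adaptation",
                                       "transfer algorithms (CCR3, CoCC, CDSC)"],
    },
    "unsupervised transfer learning": {},
}

def failure_categories(setting):
    """Return the sub-categories recorded for a transfer-learning setting."""
    return sorted(FAILURE_TAXONOMY.get(setting, {}))

cats = failure_categories("transductive transfer learning")
```

A lookup structure like this is what the blame-assignment stage could search when relating an observed failure to a branch of the taxonomy.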

Based on the different relationships between the source and target domains and tasks, transfer learning can be categorized into three types: inductive transfer learning, transductive transfer learning, and unsupervised transfer learning. According to this basic categorization, the failure taxonomy in the introspective reasoner is constructed as in Fig. 3. In this paper we focus only on transductive transfer learning, which can be divided into two categories: single source domain with a single task, and multiple domains with a single task. At present, researchers have proposed many approaches and algorithms for the multi-domain single-task setting, and domain adaptation is the key problem in transductive transfer learning. When a transfer learning failure happens, we can perform blame assignment. Here we give an example from domain adaptation. Domain adaptation arises in a variety of modern applications where limited or no labeled data is available for a target application. Obtaining a good adaptation model requires careful modeling of the relationship between P(DS) and P(DT). If these two distributions are independent, then the source domain data DS is useless for building a model for the target. On the other hand, if P(DS) and P(DT) are identical, then no adaptation is necessary and we can simply use a standard learning algorithm. In practical problems, P(DS) and P(DT) are neither identical nor independent. For text classification, Daume III and Marcu proposed the Maximum Entropy Genre Adaptation Model (MEGA Model) [33]. The failure taxonomy is illustrated in Table 1. When transductive transfer learning fails, the diagnosis is provided as input for blame assignment, with failure categories used to suggest ways to avoid negative transfer or low-performance transfer.

Table 1. Sample possible sources of failure

Failed knowledge goal         | Failure point                  | Explanation
Only target domain            | Source domain choice           | No knowledge
Only source domain            | Target domain choice           | No transfer need
Extra source domain data need | Source domain data not enough  | Parameter is on held out
Adapted source domain         | Source domain selection        | Negative transfer

6 Conclusions

Traditional learning techniques assume that training and test data are drawn from the same distribution, and thus they are not suitable for situations where new unlabeled data are obtained from fast-evolving, related but different information sources. This leads to the cross-domain learning problem, which aims at adapting knowledge learned from one or more source domains to target domains. Transfer learning has made great progress, and many approaches and algorithms have been proposed. However, negative transfer causes trouble in problem solving and is difficult to avoid. In this paper we have proposed an introspective reasoner to overcome negative transfer. Introspective learning exploits explicit representations of the system's own organization and desired behavior to determine when, what, and how to learn in order to improve its own reasoning. Following the transfer learning process, we presented the architecture of an introspective reasoner with four components: monitoring, quality assessment, blame assignment, and solution generation. All of these components are


supported by knowledge bases, which are organized in terms of an ontology. This paper focuses only on transductive transfer learning. In the future we will work on an introspective reasoner for multi-source, multi-task transfer learning.

Acknowledgements. This work is supported by the Key Projects of the National Natural Science Foundation of China (No. 61035003, 60933004), the National Natural Science Foundation of China (No. 61072085, 60970088, 60903141), and the National Basic Research Program of China (No. 2007CB311004).

References

1. Shi, Z.: Intelligence Science. World Scientific, Singapore (2011)
2. Cox, M.: Metacognition in computation: A selected research review. Artificial Intelligence 169(2), 104–141 (2005)
3. Leake, D.B., Wilson, M.: Extending introspective learning from self-models. In: Cox, M.T., Raja, A. (eds.) Metareasoning: Thinking About Thinking (2008)
4. Laird, J.E., Newell, A., Rosenbloom, P.S.: Soar: An Architecture for General Intelligence. Artificial Intelligence 33(1), 1–64 (1987)
5. Birnbaum, L., Collins, G., Brand, M., Freed, M., Krulwich, B., Pryor, L.: A model-based approach to the construction of adaptive case-based planning systems. In: Bareiss, R. (ed.) Proceedings of the DARPA Case-Based Reasoning Workshop, pp. 215–224. Morgan Kaufmann, San Mateo (1991)
6. Cox, M.T., Ram, A.: Introspective multistrategy learning: On the construction of learning strategies. Artificial Intelligence 112, 1–55 (1999)
7. Fox, S., Leake, D.B.: Using Introspective Reasoning to Refine Indexing. In: Proceedings of the 14th International Joint Conference on Artificial Intelligence, pp. 391–397 (1995)
8. Oehlmann, R., Edwards, P., Sleeman, D.: Changing the Viewpoint: Re-Indexing by Introspective Questioning. In: Proceedings of the 16th Annual Conference of the Cognitive Science Society, pp. 381–386 (1995)
9. Leake, D.B., Kinley, A., Wilson, D.: Learning to Improve Case Adaptation by Introspective Reasoning and CBR. In: Aamodt, A., Veloso, M.M. (eds.) ICCBR 1995. LNCS, vol. 1010. Springer, Heidelberg (1995)
10. Zhang, Z., Yang, Q.: Feature Weight Maintenance in Case Bases Using Introspective Learning. Journal of Intelligent Information Systems 16(2), 95–116 (2001)
11. Craw, S.: Introspective Learning to Build Case-Based Reasoning (CBR) Knowledge Containers. In: Perner, P., Rosenfeld, A. (eds.) MLDM 2003. LNCS, vol. 2734, pp. 1–6. Springer, Heidelberg (2003)
12. Cheetham, W., Price, J.: Measures of Solution Accuracy in Case-Based Reasoning Systems. In: Funk, P., González Calero, P.A. (eds.) ECCBR 2004. LNCS (LNAI), vol. 3155, pp. 106–118. Springer, Heidelberg (2004)
13. Arcos, J.L., Mulayim, O., Leake, D.: Using Introspective Reasoning to Improve CBR System Performance. In: AAAI Metareasoning Workshop, pp. 21–28. AAAI Press (2008)
14. Soh, L.-K., Blank, T.: Integrated Introspective Case-Based Reasoning for Intelligent Tutoring Systems. In: AAAI 2007 (2007)
15. Leake, D.B., Powell, J.H.: Enhancing Case Adaptation with Introspective Reasoning and Web Mining. In: IJCAI 2011, pp. 2680–2685 (2011)


16. Torrey, L., Shavlik, J.: Transfer Learning. In: Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, pp. 242–264 (2010)
17. Croonenborghs, T., Driessens, K., Bruynooghe, M.: Learning Relational Options for Inductive Transfer in Relational Reinforcement Learning. In: Blockeel, H., Ramon, J., Shavlik, J., Tadepalli, P. (eds.) ILP 2007. LNCS (LNAI), vol. 4894, pp. 88–97. Springer, Heidelberg (2008)
18. Kuhlmann, G., Stone, P.: Graph-Based Domain Mapping for Transfer Learning in General Games. In: Kok, J.N., Koronacki, J., Lopez de Mantaras, R., Matwin, S., Mladenič, D., Skowron, A. (eds.) ECML 2007. LNCS (LNAI), vol. 4701, pp. 188–200. Springer, Heidelberg (2007)
19. Eaton, E., DesJardins, M.: Knowledge transfer with a multiresolution ensemble of classifiers. In: ICML Workshop on Structural Knowledge Transfer for Machine Learning (2006)
20. Carroll, C., Seppi, K.: Task similarity measures for transfer in reinforcement learning task libraries. In: IEEE International Joint Conference on Neural Networks (2005)
21. Bakker, B., Heskes, T.: Task Clustering and Gating for Bayesian Multitask Learning. J. Machine Learning Research 4, 83–99 (2003)
22. Argyriou, A., Maurer, A., Pontil, M.: An Algorithm for Transfer Learning in a Heterogeneous Environment. In: Daelemans, W., Goethals, B., Morik, K. (eds.) ECML PKDD 2008, Part I. LNCS (LNAI), vol. 5211, pp. 71–85. Springer, Heidelberg (2008)
23. Pan, S.J., Yang, Q.: A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)
24. Huang, J., Smola, A., Gretton, A., Borgwardt, K.M., Schölkopf, B.: Correcting Sample Selection Bias by Unlabeled Data. In: Proc. 19th Ann. Conf. Neural Information Processing Systems (2006)
25. Sugiyama, M., Nakajima, S., Kashima, H., Buenau, P.V., Kawanabe, M.: Direct Importance Estimation with Model Selection and its Application to Covariate Shift Adaptation. In: Proc. 20th Ann. Conf. Neural Information Processing Systems (December 2008)
26. Lewis, D.D.: Representation and learning in information retrieval. Ph.D. Dissertation, University of Massachusetts, Amherst, MA, USA (1992)
27. Dai, W., Xue, G., Yang, Q., Yu, Y.: Transferring Naive Bayes Classifiers for Text Classification. In: Proc. 22nd AAAI Conference on Artificial Intelligence, pp. 540–545 (July 2007)
28. Dai, W.Y., Xue, G.R., Yang, Q., et al.: Co-clustering based Classification for Out-of-domain Documents. In: Proceedings of the 13th ACM International Conference on Knowledge Discovery and Data Mining, pp. 210–219. ACM Press, New York (2007)
29. Ling, X., Dai, W.Y., Xue, G.R., et al.: Spectral Domain-Transfer Learning. In: Proceedings of the 14th ACM International Conference on Knowledge Discovery and Data Mining, pp. 488–496. ACM Press, New York (2008)
30. Zhuang, F., Luo, P., Xiong, H., Xiong, Y., He, Q., Shi, Z.: Cross-domain Learning from Multiple Sources: A Consensus Regularization Perspective. IEEE Transactions on Knowledge and Data Engineering 22(12), 1664–1678 (2010)
31. Shi, Z., Zhang, S.: Case-Based Introspective Learning. In: Proceedings of the Fourth IEEE International Conference on Cognitive Informatics. IEEE (2005)
32. Dong, Q., Shi, Z.: A Research on Introspective Learning Based on CBR. International Journal of Advanced Intelligence 3(1), 147–157 (2011)
33. Daume III, H., Marcu, D.: Domain Adaptation for Statistical Classifiers. J. Artificial Intelligence Research 26, 101–126 (2006)
