Transfer Learning Task
A Transfer Learning Task is a learning task in which inferences on a testing record must be made in a target domain (with its own sample space) whose underlying distribution differs from that of the source domain from which the training records are drawn.
- AKA: Domain Adaptable Learning.
- Context:
- Input: a Pretrained Model.
- It can be solved by a Transfer Learning System (that implements a transfer learning algorithm).
- It can be easier to set up than a Supervised Learning Task, which requires extensive labeled data in the target domain.
- It can range from being a Transfer Learning Class-Inference Task to being a Transfer Learning Number-Inference Task.
- ...
- Example(s):
- An Image-based Transfer Learning Task, such as starting from an ImageNet-based Model and retraining it on a different set of labels (see the fine-tuning sketch after this list).
- An NLP-based Transfer Learning Task, such as through the use of word embeddings trained on an external text corpus.
- A Zero-Shot Transfer Learning Task, such as using a model trained on images of animals to identify a type of animal it was never explicitly trained on.
- A Few-Shot Transfer Learning Task, such as using a model trained to recognize handwriting to identify a new character after only a few examples.
- A Pre-Trained Model Fine Tuning Task (model fine-tuning).
- …
- Counter-Example(s):
- Single Domain Learning Task, where training and testing data come from the same distribution.
- Unsupervised Learning Task, where the model learns without labeled examples from any domain.
- See: Metalearning, Transfer of Learning, Instance-Weighting Task, Anti-Spam Technique, Inductive Transfer, Multi-Task Learning, Domain Adaptation.
References
2022
- (Wikipedia, 2022) ⇒ https://en.wikipedia.org/wiki/Domain_adaptation Retrieved:2022-5-11.
- Domain adaptation is a field associated with machine learning and transfer learning. This scenario arises when we aim to learn, from a source data distribution, a well-performing model for a different (but related) target data distribution. For instance, one of the tasks of the common spam filtering problem consists of adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution). Domain adaptation has also been shown to be beneficial for learning from unrelated sources. Note that when more than one source distribution is available, the problem is referred to as multi-source domain adaptation.
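The spam-filtering scenario quoted above can be sketched briefly. The following is a minimal illustration using scikit-learn (an assumed library choice) with hypothetical toy data: a filter trained on the source user's mail is adapted to the target user by continuing training on a few labeled target examples.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2**16)

# Hypothetical data: plentiful source labels, few target labels.
src_mail = ["cheap meds now", "meeting at noon", "win a prize", "see agenda"]
src_spam = [1, 0, 1, 0]
tgt_mail = ["gig tonight?", "free crypto airdrop"]  # new user's mail
tgt_spam = [0, 1]

# Train the filter on the source user (the source distribution).
clf = SGDClassifier(loss="log_loss")
clf.partial_fit(vec.transform(src_mail), src_spam, classes=[0, 1])

# Adapt: a few more gradient steps on the target user's labeled mail.
clf.partial_fit(vec.transform(tgt_mail), tgt_spam)
```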
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/inductive_transfer Retrieved:2015-5-3.
- Inductive transfer, or transfer learning, is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. [1] For example, the abilities acquired while learning to walk presumably apply when one learns to run, and knowledge gained while learning to recognize cars could apply when recognizing trucks. This area of research bears some relation to the long history of psychological literature on transfer of learning, although formal ties between the two fields are limited. Notably, scientists have developed algorithms for inductive transfer in Markov logic networks [2] and Bayesian networks. [3] Furthermore, researchers have applied techniques for transfer to problems in text classification, [4] [5] spam filtering, [6] and urban combat simulation. [7] [8] [9] Much potential remains in this field, as such "transfer" has not yet led to significant improvements in learning. An intuitive understanding is that "transfer means a learner can directly learn from other correlated learners"; however, this methodology, whose direction is illustrated by [10] [11], is not yet a major focus in the area.
- ↑ West, Jeremy, Dan Ventura, and Sean Warnick. Spring Research Presentation: A Theoretical Foundation for Inductive Transfer (Abstract Only). Brigham Young University, College of Physical and Mathematical Sciences. 2007. Retrieved on 2007-08-05.
- ↑ Mihalkova, Lilyana, Tuyen Huynh, and Raymond J. Mooney. Mapping and Revising Markov Logic Networks for Transfer Learning. Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI-2007), Vancouver, BC, pp. 608-614, July 2007. Retrieved on 2007-08-05.
- ↑ Niculescu-Mizil, Alexandru, and Rich Caruana. Inductive Transfer for Bayesian Network Structure Learning. Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), March 21–24, 2007. Retrieved on 2007-08-05.
- ↑ Do, Cuong B. and Andrew Y. Ng. Transfer learning for text classification. Neural Information Processing Systems Foundation, NIPS*2005 Online Papers. Retrieved on 2007-08-05.
- ↑ Raina, Rajat, Andrew Y. Ng, and Daphne Koller. Constructing Informative Priors using Transfer Learning Proceedings of the Twenty-third International Conference on Machine Learning, 2006. Retrieved on 2007-08-05.
- ↑ Bickel, Steffen. ECML-PKDD Discovery Challenge 2006 Overview Proceedings of the ECML-PKDD Discovery Challenge Workshop, 2006. Retrieved on 2007-08-05.
- ↑ Gorski, Nicholas A., and John E. Laird. Experiments in Transfer Across Multiple Learning Mechanisms. Proceedings of the ICML-06 Workshop on Structural Knowledge Transfer for Machine Learning. Pittsburgh, PA. Retrieved on 2007-08-05.
- ↑ Wenyuan Dai, Qiang Yang, Gui-Rong Xue and Yong Yu. Boosting for Transfer Learning. In: Proceedings of The 24th Annual International Conference on Machine Learning (ICML'07), Corvallis, Oregon, USA, June 20–24, 2007. pp. 193-200.
- ↑ Sinno Jialin Pan and Qiang Yang. A Survey on Transfer Learning. Latest version: Nov 10, 2008. This is an online publication. It surveys the field of transfer learning (current version mainly focuses on transfer learning in classification, regression, clustering, dimensionality reduction and relational learning) and will be updated regularly.
- ↑ Lixin Duan, Ivor W. Tsang, Dong Xu and Tat-Seng Chua. Domain Adaptation from Multiple Sources via Auxiliary Classifiers. Proceedings of the 26th International Conference on Machine Learning (ICML'09), Montreal, Canada.
- ↑ Lei Jiang, Jian Zhang and Gabrielle Allen. Transferred Correlation Learning: An Incremental Approach for Neural Network Ensembles. Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN'10), Barcelona, Spain.
2011
- (Vilalta, Giraud-Carrier, Brazdil, & Soares, 2011) ⇒ Ricardo Vilalta; Christopher Giraud-Carrier; Pavel Brazdil; Carlos Soares. (2011). “Inductive Transfer.” In: (Sammut & Webb, 2011) p.545
2010
- (Pan & Yang, 2010) ⇒ Sinno Jialin Pan, and Qiang Yang. (2010). “A Survey on Transfer Learning.” In: IEEE Trans. on Knowl. and Data Eng., 22(10). doi:10.1109/TKDE.2009.191
2007
- (Jiang & Zhai, 2007) ⇒ Jing Jiang, and ChengXiang Zhai. (2007). “Instance Weighting for Domain Adaptation in NLP.” In: Proceedings of ACL (ACL 2007).
- QUOTE: Domain adaptation is an important problem in natural language processing (NLP) due to the lack of labeled data in novel domains. In this paper, we study the domain adaptation problem from the instance weighting perspective. We formally analyze and characterize the domain adaptation problem from a distributional view, and show that there are two distinct needs for adaptation, corresponding to the different distributions of instances and classification functions in the source and the target domains. We then propose a general instance weighting framework for domain adaptation. Our empirical results on three NLP tasks show that incorporating and exploiting more information from the target domain through instance weighting is effective.
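One simple instantiation of this instance-weighting idea can be sketched as follows. This is a minimal sketch under assumed choices (scikit-learn, synthetic data), not Jiang & Zhai's exact framework: source instances are reweighted by an estimated density ratio p_target(x)/p_source(x), obtained from a probabilistic domain discriminator, before fitting the task classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 5))  # hypothetical source features
y_src = (X_src[:, 0] > 0).astype(int)        # hypothetical source labels
X_tgt = rng.normal(0.5, 1.0, size=(300, 5))  # unlabeled target features

# 1. Train a domain discriminator: source (0) vs. target (1).
X_dom = np.vstack([X_src, X_tgt])
y_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
disc = LogisticRegression().fit(X_dom, y_dom)

# 2. Weight each source instance by p(target|x) / p(source|x),
#    a standard estimator of the density ratio p_t(x) / p_s(x).
p_tgt = disc.predict_proba(X_src)[:, 1]
weights = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)

# 3. Fit the task classifier on source data with those weights, so
#    source instances that resemble the target domain count more.
clf = LogisticRegression().fit(X_src, y_src, sample_weight=weights)
```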