Transfer Learning Algorithm
A Transfer Learning Algorithm is a learning algorithm that trains on a source learning dataset before being applied to a different target learning dataset.
- AKA: Domain Adaptation Method.
- Context:
- It can be implemented by a Transfer Learning System (to solve a domain adaptable learning task).
- It can range from being a Transductive Transfer Learning Algorithm to being an Inductive Transfer Learning Algorithm.
- It can range from being an Unsupervised Domain Adaptable Learning Algorithm to being a Supervised Domain Adaptable Learning Algorithm.
- ...
- Example(s):
- Counter-Example(s):
- ...
- See: Semi-Supervised Learning Algorithm, Adversarial Domain Adaptation.
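The source-then-target training pattern described above can be illustrated with a minimal sketch (a toy setup, not any specific published method): a logistic-regression classifier is pretrained on a plentiful labeled source domain and then fine-tuned, starting from the pretrained weights, on a small labeled target domain whose input distribution is shifted.

```python
# Minimal transfer-learning sketch: pretrain on a source domain,
# then fine-tune the same weights on a small, shifted target domain.
# All data here is synthetic and purely illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, lr=0.1, epochs=200):
    """Gradient-descent logistic regression, starting from weights w."""
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])  # shared underlying concept

# Source domain: plentiful labeled data.
X_src = rng.normal(size=(500, 3))
y_src = (X_src @ true_w > 0).astype(float)

# Target domain: related task with a covariate shift, but few labels.
X_tgt = rng.normal(loc=0.3, size=(20, 3))
y_tgt = (X_tgt @ true_w > 0).astype(float)

w = np.zeros(3)
w = train(X_src, y_src, w)           # pretrain on the source domain
w = train(X_tgt, y_tgt, w, lr=0.05)  # fine-tune on the target domain

acc = float(((sigmoid(X_tgt @ w) > 0.5) == y_tgt).mean())
```

In the supervised-domain-adaptation case sketched here the target labels are available for fine-tuning; unsupervised variants instead align the source and target feature distributions without target labels.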
References
2019
- (Li et al., 2019) ⇒ Xiang Li, Wei Zhang, Qian Ding, and Jian-Qiao Sun. (2019). “Multi-layer Domain Adaptation Method for Rolling Bearing Fault Diagnosis.” In: Signal Processing, 157.
- QUOTE: ... In the past years, data-driven approaches such as deep learning have been widely applied on machinery signal processing to develop intelligent fault diagnosis systems. In real-world applications, domain shift problem usually occurs where the distribution of the labeled training data, denoted as source domain, is different from that of the unlabeled testing data, known as target domain. That results in serious diagnosis performance degradation. This paper proposes a novel domain adaptation method for rolling bearing fault diagnosis based on deep learning techniques. ...
2019
- https://towardsdatascience.com/transfer-learning-in-nlp-f5035cc3f62f
- QUOTE: ... Now we define taxonomy as per Pan and Yang (2010). They segregate transfer learning mainly into transductive and inductive. It is further divided into domain adaption, cross-lingual learning, multi-task learning and sequential transfer learning. ...
2018
- (Howard & Ruder, 2018) ⇒ Jeremy Howard, and Sebastian Ruder. (2018). “Universal Language Model Fine-tuning for Text Classification.” In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL-2018).
- QUOTE: ... Inductive transfer learning has had a large impact on computer vision (CV). ... While Deep Learning models have achieved state-of-the-art on many NLP tasks, these models are trained from scratch, requiring large datasets, and days to converge. Research in NLP focused mostly on transductive transfer (Blitzer et al., 2007). For inductive transfer, fine-tuning pretrained word embeddings (Mikolov et al., 2013), a simple transfer technique that only targets a model’s first layer, has had a large impact in practice and is used in most state-of-the-art models. ...
2010
- (Pan & Yang, 2010) ⇒ Sinno Jialin Pan, and Qiang Yang. (2010). “A Survey on Transfer Learning.” In: IEEE Trans. on Knowl. and Data Eng., 22(10). doi:10.1109/TKDE.2009.191
2009
- (Chen et al., 2009) ⇒ Bo Chen, Wai Lam, Ivor Tsang, and Tak-Lam Wong. (2009). “Extracting Discriminative Concepts for Domain Adaptation in Text Mining.” In: Proceedings of ACM SIGKDD Conference (KDD-2009). doi:10.1145/1557019.1557045
- … Several domain adaptation methods have been proposed to learn a reasonable representation so as to make the distributions between the source domain and the target domain closer [3, 12, 13, 11].
2008
- (Pan et al., 2008) ⇒ S. J. Pan, J. T. Kwok, and Q. Yang. (2008). “Transfer Learning via Dimensionality Reduction.” In: Proceedings of the 23rd AAAI conference on Artificial Intelligence.
2007
- (Daumé III, 2007) ⇒ Hal Daumé III. (2007). “Frustratingly Easy Domain Adaptation.” In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007).
- (Raina et al., 2007) ⇒ R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. (2007). “Self-Taught Learning: Transfer learning from unlabeled data.” In: Proceedings of the 24th Annual International Conference on Machine Learning (ICML 2007).
- (Satpal & Sarawagi, 2007) ⇒ S. Satpal and Sunita Sarawagi. (2007). “Domain Adaptation of Conditional Probability Models via Feature Subsetting.” In: Proceedings of European Conference on Principles and Practice of Knowledge Discovery in Databases.
2006
- (Blitzer et al., 2006) ⇒ J. Blitzer, R. McDonald, and Fernando Pereira. (2006). “Domain Adaptation with Structural Correspondence Learning.” In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2006).
- (Daumé III & Marcu, 2006) ⇒ Hal Daumé III, and Daniel Marcu. (2006). “Domain Adaptation for Statistical Classifiers.” In: Journal of Artificial Intelligence Research, 26 (JAIR 26).
- QUOTE: The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the “in-domain” test data is drawn from a distribution that is related, but not identical, to the “out-of-domain” distribution of the training data. We consider the common case in which labeled out-of-domain data is plentiful, but labeled in-domain data is scarce. We introduce a statistical formulation of this problem in terms of a simple mixture model and present an instantiation of this framework to maximum entropy classifiers and their linear chain counterparts. We present efficient inference algorithms for this special case based on the technique of conditional expectation maximization. Our experimental results show that our approach leads to improved performance on three real world tasks on four different data sets from the natural language processing domain.