Multi-Task Learning Task
A Multi-Task Learning Task is a learning task that involves solving several related inference tasks jointly, typically by exploiting a representation shared across the tasks (a minimal code sketch appears after the See: list below).
- Context:
- It can be solved by a Multi-Task Learning System (that implements a multi-task learning algorithm).
- ...
- Example(s):
- Multi-Task NLP.
- ...
- See: Transfer Learning, Regularization (Mathematics), Joint Inference, Semi-Supervised Learning, In-Context Transfer Learning.
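The shared-representation idea can be illustrated with a small hard-parameter-sharing network: one encoder is trained jointly for all tasks, each task gets a lightweight head, and the per-task losses are summed into a single objective. The sketch below is a minimal PyTorch illustration; the dimensions, the two-task setup, and the equal loss weighting are illustrative assumptions, not taken from any referenced system.

```python
# Minimal hard-parameter-sharing multi-task network: a shared encoder feeds
# several task-specific heads, and the per-task losses are summed into one
# training objective.  Sizes and task count below are illustrative only.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=128, hidden_dim=64, task_out_dims=(3, 5)):
        super().__init__()
        # Shared representation learned jointly across all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )
        # One lightweight head per inference task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x):
        z = self.encoder(x)                      # shared features
        return [head(z) for head in self.heads]  # one output per task

# Joint objective: sum (or weight) the per-task losses.
model = MultiTaskNet()
criterion = nn.CrossEntropyLoss()
x = torch.randn(32, 128)
targets = [torch.randint(0, 3, (32,)), torch.randint(0, 5, (32,))]
outputs = model(x)
loss = sum(criterion(out, t) for out, t in zip(outputs, targets))
loss.backward()
```

Because every task's gradients flow through the shared encoder, what is learned for one task can shape the representation used by the others, which is the inductive-transfer effect described in the references below.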
References
2023
- (Zhao, Zhou, Zhang et al., 2023) ⇒ Xin Zhao, Kun Zhou, Beichen Zhang, Zheng Gong, Zhipeng Chen, Yuanhang Zhou, Ji-Rong Wen et al. (2023). “JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving.” In: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-2023).
- QUOTE: Evaluation Tasks. We consider two different settings for evaluation, namely seen tasks and unseen tasks, referring to the task data that are used and not used, respectively, during multi-task fine-tuning. We split each task dataset into training/development/test sets. The statistics of these tasks are shown in Table 1. We use the evaluation metrics following JiuZhang [54].
- Seen tasks consist of six tasks based on high-school math problems, including:
(1) two question answering tasks, i.e., Multiple-Choice Question Answering (MCQ) and Blank-Filling Question Answering (BFQ);
(2) two analysis generation tasks, i.e., Multiple-Choice Analysis Generation (CAG) and Blank-Filling Analysis Generation (BAG); and
(3) two classification tasks, i.e., Knowledge Point Classification (KPC) and Question Relation Classification (QRC).
For these tasks, we perform multi-task fine-tuning with all training sets, select the model checkpoint with the best average performance on development sets, and then evaluate the results on test sets.
- Unseen tasks consist of two analysis generation tasks based on junior high school math problems, i.e., Junior-high-school Multiple-Choice Analysis Generation (JCAG) and Junior-high-school Blank-Filling Analysis Generation (JBAG), which are not used in multi-task fine-tuning for our model. For the two tasks, we perform task-specific fine-tuning, i.e., the multi-task fine-tuned model is separately optimized, tuned and evaluated for each task.
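The checkpoint-selection rule quoted above (fine-tune on the mixed seen-task training sets, then keep the checkpoint with the best average development-set performance) can be sketched as a simple loop. This is a hedged illustration of the described protocol, not the JiuZhang 2.0 implementation; the task abbreviations come from the quote, while `multitask_finetune`, `train_step`, and `eval_task` are hypothetical placeholders.

```python
# Hypothetical sketch of the quoted multi-task fine-tuning protocol: train on
# the mixed seen-task data, then keep the checkpoint whose average
# development-set score across the six seen tasks is highest.
# `train_step` and `eval_task` are caller-supplied placeholders.
import copy

SEEN_TASKS = ["MCQ", "BFQ", "CAG", "BAG", "KPC", "QRC"]

def multitask_finetune(model, train_step, eval_task, num_epochs=10):
    """train_step(model): one epoch over the mixed seen-task training sets.
    eval_task(model, task): development-set metric for a single task."""
    best_avg, best_state = float("-inf"), None
    for _ in range(num_epochs):
        train_step(model)
        avg = sum(eval_task(model, t) for t in SEEN_TASKS) / len(SEEN_TASKS)
        if avg > best_avg:  # keep the checkpoint with the best average dev score
            best_avg, best_state = avg, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)  # restore the selected checkpoint
    return model
```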
2019
- (Radford et al., 2019) ⇒ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. (2019). “Language Models Are Unsupervised Multitask Learners.” In: OpenAI Blog, 1(8).
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/multi-task_learning Retrieved:2015-3-22.
- QUOTE: Multi-task learning (MTL) is an approach to machine learning that learns a problem together with other related problems at the same time, using a shared representation. This often leads to a better model for the main task, because it allows the learner to use the commonality among the tasks. [1] [2] [3] Therefore, multi-task learning is a kind of inductive transfer. This type of machine learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. [4]

The goal of MTL is to improve the performance of learning algorithms by learning classifiers for multiple tasks jointly. This works particularly well if these tasks have some commonality and are generally slightly undersampled. One example is a spam filter. Everybody has a slightly different distribution over spam or not-spam emails (e.g. all emails in Russian are spam for me — but not so for my Russian colleagues), yet there is definitely a common aspect across users. Multi-task learning works because encouraging a classifier (or a modification thereof) to also perform well on a slightly different task is a better regularization than uninformed regularizers (e.g. to enforce that all weights are small). [5]
- ↑ Baxter, J. (2000). A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149--198, On-line paper
- ↑ Caruana, R. (1997). Multitask learning: A knowledge-based source of inductive bias. Machine Learning, 28:41--75. Paper at Citeseer
- ↑ Thrun, S. (1996). Is learning the n-th thing any easier than learning the first?. In Advances in Neural Information Processing Systems 8, pp. 640--646. MIT Press. Paper at Citeseer
- ↑ http://www.cs.cornell.edu/~caruana/mlj97.pdf
- ↑ http://www.cse.wustl.edu/~kilian/research/multitasklearning/multitasklearning.html
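The spam-filter example in the quoted passage can be made concrete as a soft-parameter-sharing objective: each user's classifier is penalized for drifting from the users' mean classifier, which is a more informed regularizer than uniformly shrinking all weights toward zero. The sketch below is an illustrative assumption, not code from the cited sources; the function name `mtl_spam_objective`, the logistic-loss form, and the penalty weight `lam` are all choices made here.

```python
# Soft parameter sharing as an informed regularizer: each user's spam
# classifier w_u is pulled toward the users' mean classifier rather than
# independently shrunk toward zero.  All names and constants are illustrative.
import numpy as np

def mtl_spam_objective(W, X, y, lam=0.1):
    """W: (num_users, dim) array, one weight row per user's classifier.
    X: list of per-user feature matrices; y: list of per-user {0,1} labels."""
    w_bar = W.mean(axis=0)                       # component shared across users
    total = 0.0
    for u, w in enumerate(W):
        p = 1.0 / (1.0 + np.exp(-(X[u] @ w)))    # per-user logistic predictions
        total += -np.mean(y[u] * np.log(p) + (1 - y[u]) * np.log(1 - p))
        total += lam * np.sum((w - w_bar) ** 2)  # pull w_u toward the shared mean
    return total

# An uninformed regularizer would instead add lam * np.sum(W ** 2),
# shrinking every weight toward zero regardless of what other users share.
```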
1997
- (Caruana, 1997) ⇒ Rich Caruana. (1997). “Multitask Learning.” In: Machine Learning, 28(1). doi:10.1023/A:1007379606734
- QUOTE: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias.