Few-Shot In-Context Learning (FS-ICL) Task

From GM-RKB

A Few-Shot In-Context Learning (FS-ICL) Task is an in-context learning task in which the model is conditioned on only a few labeled training examples (demonstrations) provided in its prompt, with no parameter updates.
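The definition above can be illustrated with a minimal sketch of how an FS-ICL prompt is typically assembled. The sentiment task, labels, and example texts below are hypothetical illustrations, not drawn from a specific benchmark; a language model would complete the final `Label:` line based solely on the demonstrations.

```python
def build_fs_icl_prompt(demonstrations, query):
    """Format a few labeled demonstrations plus one unlabeled query
    into a single prompt string; the model must infer the task from
    the demonstrations alone (no gradient updates)."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

# Hypothetical 3-shot sentiment-classification prompt.
demos = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours of my life.", "negative"),
    ("A delightful surprise from start to finish.", "positive"),
]
prompt = build_fs_icl_prompt(demos, "The plot made no sense.")
print(prompt)
```

The number of demonstrations (the "shot count") is the defining resource constraint of the task: zero-shot omits `demos` entirely, while few-shot typically uses one to a few dozen.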



References

2020

  • (Wang et al., 2020) ⇒ Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. (2020). “Generalizing from a Few Examples: A Survey on Few-shot Learning.” ACM Computing Surveys (CSUR) 53, no. 3
    • ABSTRACT: Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-shot Learning (FSL) is proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this article, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimizer is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications, and theories, are also proposed to provide insights for future research.

2017

  • (Snell et al., 2017) ⇒ Jake Snell, Kevin Swersky, and Richard Zemel. (2017). “Prototypical Networks for Few-shot Learning.” Advances in Neural Information Processing Systems 30
    • ABSTRACT: We propose Prototypical Networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend Prototypical Networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
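The classification rule described in the abstract above can be sketched briefly: each class prototype is the mean of that class's support embeddings, and a query is assigned to the class whose prototype is nearest in the metric space. The 2-D vectors below are toy stand-ins; in the paper, embeddings come from a learned neural encoder and distances are squared Euclidean.

```python
def prototype(embeddings):
    """Class prototype: elementwise mean of the support embeddings."""
    n, dim = len(embeddings), len(embeddings[0])
    return [sum(e[d] for e in embeddings) / n for d in range(dim)]

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, support):
    """Assign query to the class with the nearest prototype.
    support: dict mapping class label -> list of support embeddings."""
    protos = {label: prototype(embs) for label, embs in support.items()}
    return min(protos, key=lambda label: sq_dist(query, protos[label]))

# Toy 2-shot episode with two novel classes (hypothetical embeddings).
support = {
    "cat": [[0.9, 1.1], [1.1, 0.9]],
    "dog": [[-1.0, -0.9], [-0.8, -1.1]],
}
print(classify([0.8, 1.0], support))  # nearest prototype is "cat"
```

This illustrates the simple inductive bias the authors highlight: once the encoder is fixed, few-shot classification reduces to a nearest-mean rule over the support set.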