Many-Shot In-Context Learning (MS-ICL) Task
A Many-Shot In-Context Learning (MS-ICL) Task is an ICL method that enhances in-context learning by including a significantly larger number of in-context examples to specify the task.
- Context:
- It can (typically) extend the capabilities of Large Language Models (LLMs) by providing clearer task instructions and reducing ambiguity, which results in substantial performance gains across a variety of tasks.
- It can (often) leverage structured examples to guide Large Language Models (LLMs) toward more accurate and reliable outputs without explicit fine-tuning or specialized training regimes (see the prompt-construction sketch after this list).
- It can range from supporting simple repetitive tasks to supporting complex reasoning and high-dimensional function approximation.
- It can demonstrate a strong dependency on the quality of the input examples, making high-quality, well-curated example sets critical.
- It can show variations in performance based on the order and context of the provided examples, highlighting the influence of prompt design on model output consistency.
- ...
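A minimal sketch of the core mechanic, assuming a plain-text exemplar format: the helper name `build_many_shot_prompt`, the `Input:`/`Output:` delimiters, and the toy arithmetic data below are illustrative assumptions, not a format prescribed by the source paper.

```python
# Minimal many-shot prompt construction: concatenate a large set of
# input/output exemplars ahead of the query and let the model complete it.

def build_many_shot_prompt(examples, query, instruction=""):
    """Assemble an instruction, many exemplars, and a final query into one prompt."""
    parts = [instruction] if instruction else []
    for x, y in examples:                       # examples: iterable of (input, output) pairs
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")    # the model completes this final output
    return "\n\n".join(parts)

# With a long-context LLM, `examples` may hold hundreds of shots instead of
# the handful typical of few-shot prompting.
prompt = build_many_shot_prompt(
    examples=[("2+2", "4"), ("3+5", "8")],      # in practice, many more pairs
    query="7+6",
    instruction="Answer the arithmetic question.",
)
```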
- Example(s):
- a Reinforced ICL framework, in which model-generated rationales (filtered for final-answer correctness) replace human-written rationales as in-context exemplars, letting the model learn from its own outputs.
- an Unsupervised ICL setting, which excludes rationales and solutions altogether, prompting the model with domain-specific problems alone (a sketch of both strategies follows this list).
- ...
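A minimal sketch contrasting the two strategies above, assuming placeholder `generate` (an arbitrary LLM call) and `is_correct` (a final-answer checker) functions supplied by the caller; it illustrates the correctness-filtered, model-generated rationales of Reinforced ICL and the rationale-free prompts of Unsupervised ICL, rather than reproducing any particular implementation.

```python
# Illustrative exemplar construction for Reinforced ICL and Unsupervised ICL.
# `generate` and `is_correct` are placeholder assumptions supplied by the caller.

def reinforced_icl_examples(problems, generate, is_correct):
    """Keep only model-generated rationales whose final answers check out."""
    exemplars = []
    for problem, reference_answer in problems:   # problems: (problem, reference_answer) pairs
        rationale = generate(f"Problem: {problem}\nReason step by step, then give the answer.")
        if is_correct(rationale, reference_answer):
            exemplars.append((problem, rationale))
    return exemplars                             # usable as many-shot exemplars

def unsupervised_icl_prompt(problems, query):
    """Build a prompt of unsolved domain problems only (no rationales or answers)."""
    shown = "\n\n".join(f"Problem: {p}" for p, _ in problems)
    return f"{shown}\n\nProblem: {query}\nSolution:"
```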
- Counter-Example(s):
- Few-Shot Learning, which involves using a smaller number of examples and typically faces limitations in clarity and performance consistency.
- LLM Fine-Tuning, which adapts a model by updating its weights on task-specific training data rather than by supplying examples in the prompt.
- ...
- See: In-Context Learning, Large Language Models, Reinforced ICL, Unsupervised ICL.
References
2024
- (Agarwal, Singh et al., 2024) ⇒ Rishabh Agarwal, Avi Singh, Lei M. Zhang, Bernd Bohnet, Stephanie Chan, Ankesh Anand, Zaheer Abbas, Azade Nova, John D. Co-Reyes, Eric Chu, Feryal Behbahani, Aleksandra Faust, and Hugo Larochelle. (2024). “Many-Shot In-Context Learning.” In: arXiv preprint arXiv:2404.11018. [doi:10.48550/arXiv.2404.11018](https://doi.org/10.48550/arXiv.2404.11018).
- NOTES:
- It introduces Many-Shot ICL as an extension of in-context learning with LLMs, employing a larger number of examples to enhance task specification clarity, reduce command ambiguity, and improve performance across varied tasks.
- It develops new frameworks for ICL, specifically Reinforced ICL and Unsupervised ICL, aimed at improving performance on complex reasoning tasks by replacing or removing the dependence on human-generated rationales.
- It provides empirical evidence showing that many-shot ICL outperforms few-shot learning by addressing pre-training biases and functioning effectively in complex reasoning and high-dimensional tasks.
- It notes the influence of example order on many-shot ICL performance, highlighting output-consistency challenges and the need for careful prompt design.