Many-Shot In-Context Learning (MS-ICL) Task

A Many-Shot In-Context Learning (MS-ICL) Task is an ICL method that enhances in-context learning by employing significantly more examples in the task specification, typically hundreds to thousands rather than a handful (a minimal prompt-construction sketch appears after the outline below).

  • Context:
    • It can (typically) extend the capabilities of Large Language Models (LLMs) by providing clearer task instructions and reducing ambiguity, which results in substantial performance gains across a variety of tasks.
    • It can (often) leverage structured examples to guide Large Language Models (LLMs) towards more accurate and reliable outputs without the need for explicit fine-tuning or specialized training regimes.
    • It can range from being applied to simple repetitive tasks to being applied to complex reasoning tasks and high-dimensional function approximation.
    • It can demonstrate a dependency on the quality of input examples, indicating a critical need for high-quality, well-curated example sets.
    • It can show variations in performance based on the order and context of the provided examples, highlighting the influence of prompt design on model output consistency.
    • ...
  • Example(s):
    • a Reinforced ICL setting, in which model-generated rationales, filtered for final-answer correctness, replace human-written rationales as in-context demonstrations, enabling the model to learn from its own successful outputs (a sketch appears after this outline).
    • an Unsupervised ICL setting, in which the prompt excludes rationales and solutions altogether and presents domain-specific problems only, relying on the model's inductive reasoning without explicit guidance (a sketch appears after this outline).
    • ...
  • Counter-Example(s):
    • Few-Shot Learning, which uses only a small number of examples and typically faces limitations in task clarity and performance consistency.
    • LLM Fine Tuning, which adapts model weights through additional training rather than relying solely on in-context examples.
    • ...
  • See: In-Context Learning, Large Language Models, Reinforced ICL, Unsupervised ICL.
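
The following is a minimal Python sketch of many-shot prompt construction. The function name build_many_shot_prompt, the Input:/Output: formatting, and the toy sentiment pool are illustrative assumptions rather than any specific library's API; a real many-shot prompt would draw on hundreds or thousands of distinct, well-curated pairs.

    # Minimal sketch of many-shot prompt construction; names, formatting,
    # and toy data are illustrative assumptions, not a library API.
    def build_many_shot_prompt(examples, query, instruction=""):
        """Concatenate a large pool of (input, output) pairs into one prompt.

        Many-shot ICL differs from few-shot ICL mainly in scale:
        hundreds or thousands of demonstrations instead of a handful.
        """
        parts = [instruction] if instruction else []
        for inp, out in examples:  # example order can affect output consistency
            parts.append(f"Input: {inp}\nOutput: {out}")
        parts.append(f"Input: {query}\nOutput:")  # the model completes this line
        return "\n\n".join(parts)

    # Usage: a toy sentiment task; repetition here only simulates scale.
    pool = [("great movie", "positive"), ("waste of time", "negative")] * 100
    prompt = build_many_shot_prompt(pool, "an instant classic",
                                    instruction="Classify the sentiment.")
    print(len(pool), "examples;", len(prompt), "prompt characters")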
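
Next, a minimal sketch of the Reinforced ICL collection loop described above, assuming some generate(problem) call to an LLM is available (stubbed here so the sketch runs end to end); the "Answer:" convention and the attempt count are assumptions, not part of any published implementation.

    # Minimal sketch of Reinforced ICL example collection: sample
    # model-generated rationales and keep those whose final answer matches
    # the ground truth, so they can replace human-written rationales as
    # in-context demonstrations.
    def generate(problem):
        # Stub standing in for an LLM call that returns a rationale
        # ending in a final answer line.
        return "Adding the two operands step by step. Answer: 4"

    def collect_reinforced_examples(problems, answers, attempts=4):
        """Return (problem, rationale) pairs whose final answer is correct."""
        kept = []
        for problem, gold in zip(problems, answers):
            for _ in range(attempts):
                rationale = generate(problem)
                if rationale.strip().endswith(f"Answer: {gold}"):
                    kept.append((problem, rationale))  # reuse as a demonstration
                    break
        return kept

    print(collect_reinforced_examples(["What is 2 + 2?"], ["4"]))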
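
By contrast, a minimal sketch of an Unsupervised ICL prompt, where the context contains domain-specific problems only, with no rationales or solutions; the Problem:/Solution: formatting is an illustrative assumption.

    # Minimal sketch of an Unsupervised ICL prompt: unlabeled problems
    # only, with no rationales or worked solutions in the context.
    def build_unsupervised_prompt(problems, query):
        """List unlabeled problems, then pose the query for the model to solve."""
        parts = [f"Problem: {p}" for p in problems]  # no Output/Answer lines
        parts.append(f"Problem: {query}\nSolution:")
        return "\n\n".join(parts)

    print(build_unsupervised_prompt(["What is 3 + 5?", "What is 7 - 2?"],
                                    "What is 6 + 6?"))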

