Zero-Shot Reasoning Task
A Zero-Shot Reasoning Task is a reasoning task that is also a zero-shot task, i.e., a task the model has not previously been trained on or shown examples of.
- Context:
- It can (typically) involve natural language processing (NLP) tasks such as classification, sentiment analysis, or question-answering.
- It can (often) be used to evaluate the generalization capability of large language models (LLMs).
- ...
- Example(s):
- Identifying the sentiment of a text snippet without prior training on sentiment analysis.
- Classifying an email as spam or not spam without having been trained on spam detection.
- Solving arithmetic problems without explicit training on arithmetic tasks.
- ...
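The examples above share one pattern: the task is specified entirely in natural language, with no labeled exemplars in the prompt. A minimal sketch of this for the sentiment example (the function name and prompt wording are illustrative, not from any particular system):

```python
def build_zero_shot_prompt(text: str) -> str:
    """Build a zero-shot sentiment prompt.

    The task is described in plain language only; no labeled
    examples (few-shot exemplars) are included, which is what
    makes the task zero-shot for the model that receives it.
    """
    return (
        "Classify the sentiment of the following text as "
        "Positive or Negative.\n"
        f"Text: {text}\n"
        "Sentiment:"
    )

# In a real setting this prompt would be sent to an LLM;
# here we only construct it.
prompt = build_zero_shot_prompt("The film was a delight from start to finish.")
print(prompt)
```

The same pattern generalizes to the spam and arithmetic examples by changing only the task description, since no task-specific training or exemplars are required.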
- Counter-Example(s):
- Classifying images of cats and dogs after being trained on a dataset of cat and dog images.
- Predicting stock prices based on historical data, after training on similar data.
- ...
- See: Few-Shot Reasoning, NLG.
References
2022
- (Kojima et al., 2022) ⇒ Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. (2022). “Large Language Models Are Zero-shot Reasoners.” In: Advances in Neural Information Processing Systems, 35.
- NOTE: It demonstrates the ability of large language models (LLMs) to perform zero-shot reasoning, i.e., to answer questions that require reasoning without any additional training.
- ABSTRACT: Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large-scale InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM.
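The Zero-shot-CoT method described in the abstract uses two prompting stages: first, the trigger phrase "Let's think step by step." is appended to elicit a reasoning chain; second, the model's own reasoning is fed back with an answer-extraction trigger. A sketch of the two prompt templates, following the paper's description (the function names are mine, and a real pipeline would send each prompt to an LLM between the stages):

```python
def reasoning_prompt(question: str) -> str:
    # Stage 1 (reasoning extraction): append the zero-shot-CoT
    # trigger phrase; no hand-crafted few-shot exemplars are used.
    return f"Q: {question}\nA: Let's think step by step."


def answer_prompt(question: str, reasoning: str) -> str:
    # Stage 2 (answer extraction): feed the model's generated
    # reasoning back, followed by an answer-extraction trigger
    # matched to the task's answer format (here, numerals).
    return (
        f"Q: {question}\n"
        f"A: Let's think step by step. {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )


question = "If there are 3 cars and each car has 4 wheels, how many wheels are there?"
stage1 = reasoning_prompt(question)
# An LLM would complete stage1 with a reasoning chain, e.g.:
model_reasoning = "Each car has 4 wheels, so 3 cars have 3 * 4 = 12 wheels."
stage2 = answer_prompt(question, model_reasoning)
print(stage2)
```

The single fixed template, applied unchanged across arithmetic, symbolic, and logical benchmarks, is what distinguishes this from few-shot CoT prompting, which requires per-task exemplars.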