Chain-of-Thought (CoT) Prompting Method


A Chain-of-Thought (CoT) Prompting Method is a prompt engineering method that requires the LLM to express its reply as a chain of intermediate reasoning steps that lead to the final answer.



References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Prompt_engineering#Chain-of-thought Retrieved:2023-5-11.
    • Chain-of-thought prompting (CoT) improves the reasoning ability of LLMs by prompting them to generate a series of intermediate steps that lead to the final answer of a multi-step problem. The technique was first proposed by Google researchers in 2022.[1]

      LLMs trained on large amounts of text with deep learning methods can generate output that resembles human-written text. While LLMs show impressive performance on many natural language tasks, they still struggle with reasoning tasks that require logical thinking and multiple steps to solve, such as arithmetic or commonsense reasoning questions. To address this challenge, CoT prompting instructs the model to produce intermediate reasoning steps before giving the final answer to a multi-step problem.[1]

      For example, given the question “Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?”, a CoT prompt might induce the LLM to answer with steps of reasoning that mimic a train of thought, such as “A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.”[1]

      Chain-of-thought prompting improves the average performance of LLMs on both arithmetic and commonsense tasks compared to standard prompting methods. When applied to PaLM, a 540-billion-parameter language model, CoT prompting significantly aided the model, allowing it to perform comparably to task-specific fine-tuned models on several tasks and even set a new state of the art at the time on the GSM8K mathematical reasoning benchmark.[1]

      CoT prompting is an emergent property of model scale, meaning it works better with larger and more powerful language models.[1] It is also possible to fine-tune models on CoT reasoning datasets to enhance this capability further and to improve interpretability.

    • There are two main methods to elicit chain-of-thought reasoning: few-shot prompting and zero-shot prompting. The original CoT work demonstrated few-shot prompting, in which at least one example of a question paired with proper human-written CoT reasoning is prepended to the prompt.[1] It is also possible to elicit similar reasoning and performance gains with zero-shot prompting, which can be as simple as appending the words "Let's think step-by-step" to the prompt. This scales better, since one no longer needs to engineer task-specific CoT prompts to get the corresponding boost in performance.[2] (Both variants are sketched after the references below.)
  1. Wei, Jason; Wang, Xuezhi; Schuurmans, Dale; Bosma, Maarten; Ichter, Brian; Xia, Fei; Chi, Ed H.; Le, Quoc V.; Zhou, Denny (31 October 2022). "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models". arXiv:2201.11903.
  2. Dickson, Ben (30 August 2022). "LLMs have not learned our language — we're trying to learn theirs". VentureBeat. Retrieved 10 March 2023; Shaikh, Omar; Zhang, Hongxin; Held, William; Bernstein, Michael; Yang, Diyi (2022). "On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning". arXiv:2212.08061.
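
To make the few-shot variant concrete, here is a minimal sketch in Python of how such a prompt can be assembled. It is an illustration under stated assumptions, not an official implementation: the complete function is a hypothetical stand-in for whatever text-completion API is in use, and the exemplar is the cafeteria question quoted in the excerpt above.

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a text-completion API call."""
        raise NotImplementedError("wire this to a real LLM endpoint")

    # One exemplar pairing a question with human-written CoT reasoning,
    # copied from the worked example in the excerpt above.
    COT_EXEMPLAR = (
        "Q: The cafeteria had 23 apples. If they used 20 to make lunch "
        "and bought 6 more, how many apples do they have?\n"
        "A: The cafeteria had 23 apples originally. They used 20 to make "
        "lunch. So they had 23 - 20 = 3. They bought 6 more apples, so "
        "they have 3 + 6 = 9. The answer is 9.\n"
    )

    def few_shot_cot(question: str) -> str:
        # Prepend the exemplar so the model imitates its step-by-step
        # reasoning format before emitting "The answer is ...".
        prompt = COT_EXEMPLAR + "\nQ: " + question + "\nA:"
        return complete(prompt)

The exemplar does the work here: because the demonstration answer spells out each arithmetic step, the model tends to reproduce the same step-by-step format for the new question.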
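
The zero-shot variant needs no exemplar at all; per the excerpt, appending a trigger phrase such as "Let's think step-by-step" is enough. A sketch, reusing the hypothetical complete call from the sketch above:

    def zero_shot_cot(question: str) -> str:
        # No hand-written exemplar: the trigger phrase alone elicits
        # intermediate reasoning steps before the final answer.
        prompt = "Q: " + question + "\nA: Let's think step-by-step."
        return complete(prompt)

Because the same trigger phrase works across tasks, this variant avoids hand-crafting CoT exemplars for each new task, which is the scaling advantage noted above.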
