Large Language Model (LLM) Prompt
A Large Language Model (LLM) Prompt is a text-to-text prompt that provides structured instructions to guide the behavior and output of a large language model.
- Context:
- It can (typically) be created by an LLM Prompt Writing Task.
- ...
- It can range from being a Human-Designed LLM Prompt to being an AI-Designed LLM Prompt, depending on its creator type.
- It can range from being a Zero-Shot LLM Prompt to being a One-Shot LLM Prompt to being a Few-Shot LLM Prompt to being a Many-Shot LLM Prompt, depending on its example quantity.
- It can range from being a Minimal Viable LLM Prompt to being a Tuned LLM Prompt, depending on its optimization level.
- It can range from being a User's LLM Prompt to being an LLM System Prompt, depending on its LLM interaction role.
- ...
- It can include LLM Prompt Context.
- It can include LLM Prompt Instructions.
- It can include LLM Prompt Examples.
- It can include LLM Prompt Role Assignments.
- It can include LLM Prompt Conversation History.
- It can reference LLM Prompting Techniques.
- ...
- It can be a Task-Specific LLM Prompt, such as: Question Answering LLM Prompt, Creative Content Generation LLM Prompt, or Recommendation Generation LLM Prompt.
- It can be a Domain-Specific LLM Prompt, such as Text-To-Text LLM Prompt, Text-To-Image LLM Prompt, and Text-To-Audio LLM Prompt.
- It can reference LLM Prompting Techniques, such as: Chain-Of-Thought Prompting, Generated Knowledge Prompting, and Prompt Optimization to enhance AI capabilities.
- ...
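The prompt components listed above (role assignment, context, instructions, examples, and conversation history) can be sketched as a chat-style message list, the format used by many LLM APIs. This is an illustrative assumption, not any specific vendor's API; the function name, field names, and `=>` delimiter are hypothetical.

```python
# Illustrative sketch: assembling typical LLM Prompt components
# (role assignment, context, instructions, examples, conversation history)
# into a chat-message list. All names here are hypothetical; real APIs
# differ in field names and capabilities.

def build_prompt(role_assignment, context, instructions, examples, history):
    """Combine prompt components into a list of chat messages."""
    messages = [{"role": "system", "content": f"{role_assignment}\n\n{context}"}]
    # Few-shot examples are commonly encoded as alternating user/assistant turns.
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    # Prior conversation history precedes the new instruction.
    messages.extend(history)
    messages.append({"role": "user", "content": instructions})
    return messages

prompt = build_prompt(
    role_assignment="Act as a native French speaker.",
    context="You translate single French words into English.",
    instructions="chien =>",
    examples=[("maison =>", "house"), ("chat =>", "cat")],
    history=[],
)
```

The system message carries the role assignment and context, the example pairs become in-context demonstrations, and the final user message carries the actual instruction.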
- Example(s):
- Based on LLM example quantity, such as:
- Zero-Shot LLM Prompts that ask, "What is Fermat's little theorem?"
- Few-Shot LLM Prompts that provide examples such as "maison house, chat cat, chien" to help the model understand and respond with "dog."
- Domain-Specific LLM Prompts, such as:
- a Text-to-Image LLM Prompt that describes "a high-quality photo of an astronaut riding a horse."
- a Text-to-Audio LLM Prompt that specifies "Lo-fi slow BPM electro chill with organic samples."
- Technique-Following LLM Prompts, such as:
- Chain-of-Thought Prompts that break down complex reasoning into explicit steps to improve the model's logical responses.
- Task-Specific LLM Prompts, such as:
- Question Answering LLM Prompts, such as asking "What is Fermat's little theorem?"
- Creative Generation LLM Prompts, such as requesting "a high-quality photo of an astronaut riding a horse"
- ...
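The example-quantity spectrum above can be made concrete with the article's own prompt texts. The helper function below is a hypothetical illustration that counts completed in-prompt examples; how a model is actually invoked is left abstract.

```python
# Sketch of the example-quantity spectrum: a zero-shot prompt supplies no
# worked examples, while a few-shot prompt supplies several before the
# actual query so the model can infer the pattern (here: French -> English).

ZERO_SHOT = "What is Fermat's little theorem?"

FEW_SHOT = (
    "maison => house\n"
    "chat => cat\n"
    "chien =>"
)

def example_count(prompt: str) -> int:
    """Count completed in-prompt examples, i.e. lines with both sides filled."""
    return sum(
        1 for line in prompt.splitlines()
        if "=>" in line and line.split("=>")[1].strip()
    )

print(example_count(ZERO_SHOT))  # 0 -> zero-shot
print(example_count(FEW_SHOT))   # 2 -> few-shot
```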
- Counter-Example(s):
- LISP Queries, which use formal programming syntax rather than natural language instructions.
- SQL Queries, which follow structured database query language rather than free-form text.
- Regular Expressions, which use pattern matching syntax rather than descriptive text.
- See: LLM Prompting, LLM Prompt Engineering, LLM Prompt Optimization.
References
2024
- (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Prompt_engineering Retrieved: 2024-06-12.
- Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform.[1] A prompt for a text-to-text language model can be a query such as "what is Fermat's little theorem?", a command such as "write a poem about leaves falling", or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style,[2] providing relevant context or assigning a role to the AI such as "Act as a native French speaker". A prompt may include a few examples for a model to learn from, such as asking the model to complete "maison house, chat cat, chien " (the expected response being dog), an approach called few-shot learning. When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as "a high-quality photo of an astronaut riding a horse" or "Lo-fi slow BPM electro chill with organic samples". Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style,[3] layout, lighting, and aesthetic.
- NOTE:
- LLM Prompts involve creating natural language instructions that a generative AI model can interpret and act upon. These prompts guide the AI to perform specific tasks by providing context, style, and directives.
- An LLM Prompt can range from simple queries like "What is Fermat's little theorem?" to complex commands such as "Write a poem about leaves falling." They can also include detailed context, instructions, and conversation history to help the AI understand the task better.
- LLM Prompts can be enhanced through various techniques, including phrasing queries, specifying styles, providing context, or assigning roles to the AI. For instance, few-shot learning prompts provide a few examples for the model to learn from.
- An LLM Prompt for text-to-image or text-to-audio models typically describes the desired output, such as "a high-quality photo of an astronaut riding a horse" or "Lo-fi slow BPM electro chill with organic samples." Adjusting the wording can impact the generated content's style, layout, and aesthetics.
- LLM Prompts have evolved with advancements in AI. Initially, they focused on converting various NLP tasks into a question-answering format. Recent techniques like chain-of-thought prompting and generated knowledge prompting have further enhanced the model's reasoning and performance on complex tasks.
2023
- (Yang, Wang et al., 2023) ⇒ Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen. (2023). “Large Language Models As Optimizers.” doi:10.48550/arXiv.2309.03409
- QUOTE: ...In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
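- The optimization loop described in the quote can be sketched minimally: each step shows the optimizer LLM its previously scored prompts and asks for new candidates, which are then evaluated and appended to the history. This is a hedged sketch of the loop's shape only; `optimizer_llm` and `evaluate` are placeholders, not the paper's actual implementation.

```python
# Minimal sketch of an OPRO-style prompt-optimization loop: the meta-prompt
# contains earlier solutions with their scores, and the optimizer LLM
# proposes new candidates. `optimizer_llm` and `evaluate` are placeholder
# callables, not the paper's implementation.

def opro_step(scored_prompts, optimizer_llm, evaluate, n_candidates=4):
    """One step: propose new prompts from scored history, then score them."""
    # Sort ascending by score so the optimizer sees the improvement trajectory.
    history = sorted(scored_prompts, key=lambda pair: pair[1])
    meta_prompt = "\n".join(f"text: {p}\nscore: {s}" for p, s in history)
    candidates = optimizer_llm(meta_prompt, n_candidates)
    return scored_prompts + [(c, evaluate(c)) for c in candidates]

def optimize(seed_prompt, optimizer_llm, evaluate, steps=3):
    """Run several optimization steps and return the best (prompt, score)."""
    scored = [(seed_prompt, evaluate(seed_prompt))]
    for _ in range(steps):
        scored = opro_step(scored, optimizer_llm, evaluate)
    return max(scored, key=lambda pair: pair[1])
```

With a real LLM behind `optimizer_llm` and a task-accuracy function behind `evaluate`, this loop corresponds to the quote's description of adding evaluated solutions back into the prompt for the next step.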