Text-to-* Model Prompt Programming Technique


A Text-to-* Model Prompt Programming Technique is a software engineering method for performing text-to-* prompt engineering tasks, such as structuring and refining prompts for text-to-text, text-to-image, or text-to-audio models.



References

2024

  • (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Prompt_engineering Retrieved:2024-9-19.
    • Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform:[1] a prompt for a text-to-text language model can be a query such as "what is Fermat's little theorem?", a command such as "write a poem about leaves falling", or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style,[2] providing relevant context or assigning a role to the AI such as "Act as a native French speaker". A prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being dog), an approach called few-shot learning. When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as "a high-quality photo of an astronaut riding a horse" or "Lo-fi slow BPM electro chill with organic samples". Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style,[3] layout, lighting, and aesthetic.
    • NOTES (illustrative code sketches of the few-shot example above and of techniques 1 and 5 follow this list):
      1. Chain-of-Thought Prompting: Breaks down complex reasoning tasks into a series of intermediate steps, improving logical thinking and problem-solving in large language models (LLMs).
      2. Chain-of-Symbol Prompting: Uses symbols to assist models with spatial reasoning tasks, enhancing the model's ability to interpret text with spacing challenges.
      3. Generated Knowledge Prompting: Prompts the model to generate relevant knowledge before answering a question, increasing accuracy by conditioning the response on facts.
      4. Least-to-Most Prompting: Solves complex problems by first listing simpler sub-problems, solving them sequentially to improve reasoning and comprehension.
      5. Self-Consistency Decoding: Generates multiple reasoning paths (chains of thought) and selects the most common outcome for enhanced reliability in multi-step tasks.
      6. Complexity-Based Prompting: Selects and evaluates the longest reasoning chains from multiple model outputs, focusing on the depth of problem-solving processes.
      7. Self-Refine: Iteratively critiques and refines its own outputs, improving solutions by integrating feedback from previous steps.
      8. Tree-of-Thought Prompting: Generalizes Chain-of-Thought by exploring multiple possible next steps, employing methods like tree search to improve decision-making.
      9. Maieutic Prompting: Uses recursive questioning to generate and refine explanations, focusing on consistency and logic in multi-layered reasoning tasks.
      10. Directional-Stimulus Prompting: Provides hints or cues (such as keywords) to guide the model’s responses toward specific desired outputs.
      11. Prompt Injection Defense: Safeguards against malicious instructions by filtering or restricting prompts to ensure they align with trusted operations in instruction-following systems.
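
As a concrete illustration of the few-shot example quoted above ("maison → house, chat → cat, chien →"), the sketch below shows one way such a prompt could be sent to a text-to-text model. It is a minimal sketch that assumes the OpenAI Python client and a placeholder model name; neither is specified by the source.

```python
# Minimal few-shot prompting sketch (assumes the OpenAI Python client v1.x;
# the model name below is an illustrative placeholder, not from the source).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The completed pairs establish the French -> English pattern in-context;
# the final unfinished pair is what the model is asked to complete.
few_shot_prompt = (
    "Complete the pattern:\n"
    "maison -> house\n"
    "chat -> cat\n"
    "chien -> "
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0,
    max_tokens=5,
)

print(response.choices[0].message.content)  # expected completion: "dog"
```

Temperature 0 keeps the completion deterministic, which suits pattern-completion prompts where a single expected answer exists.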

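Notes 1 and 5 above describe Chain-of-Thought Prompting and Self-Consistency Decoding. The sketch below is a hedged illustration of how the two combine: it assumes a caller-supplied generate(prompt, temperature) text-generation function and a simple last-number answer-extraction heuristic, both of which are illustrative assumptions rather than anything defined by the source.

```python
# Sketch of Chain-of-Thought prompting combined with Self-Consistency decoding.
# `generate` is a stand-in for any text-generation call (API or local model);
# it is an assumed helper, not an API defined by the source.
import re
from collections import Counter
from typing import Callable

def chain_of_thought_prompt(question: str) -> str:
    # "Let's think step by step" elicits intermediate reasoning steps.
    return f"Q: {question}\nA: Let's think step by step."

def extract_final_number(completion: str) -> str | None:
    # Heuristic: treat the last number mentioned as the final answer.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else None

def self_consistent_answer(
    question: str,
    generate: Callable[[str, float], str],
    num_samples: int = 5,
    temperature: float = 0.7,
) -> str | None:
    # Sample several independent reasoning chains, then majority-vote
    # on the extracted final answers.
    prompt = chain_of_thought_prompt(question)
    answers = []
    for _ in range(num_samples):
        completion = generate(prompt, temperature)
        answer = extract_final_number(completion)
        if answer is not None:
            answers.append(answer)
    if not answers:
        return None
    most_common, _count = Counter(answers).most_common(1)[0]
    return most_common
```

Sampling with a nonzero temperature is what makes the vote meaningful: greedy decoding would return the same reasoning chain every time, so the majority vote would add nothing.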