LLM Prompt Tuning Task
An LLM Prompt Tuning Task is a prompt engineering task that involves creating and optimizing prompts, from discrete in-context prompts to learned soft prompts, to adapt a large language model to a downstream task.
- Context:
- It can (often) leverage techniques like Chain-of-Thought Prompting.
- It can involve creating task-specific virtual tokens with a small prompt-encoder model, which are then prepended to the input prompt.
- It can include using Soft Prompts, learned continuous embeddings prepended to the input to help guide the model's response (a minimal sketch appears below, after the See list).
- ...
- Example(s):
- a P-Tuning v2 Task, which tunes deep continuous prompts for NLU tasks such as Named Entity Recognition.
- Counter-Example(s):
- an LLM Fine-Tuning Task, which updates the model's weights rather than the prompt.
- See: Prompt Engineering, Large Language Model (LLM).
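The sketch below illustrates the core idea of soft prompt tuning: a small matrix of trainable virtual-token embeddings is prepended to the input embeddings of a frozen language model. It is a minimal PyTorch illustration, assuming a HuggingFace-style causal LM that exposes get_input_embeddings() and accepts inputs_embeds and labels; the class name and hyperparameters are illustrative, not taken from the referenced papers.
```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Wraps a frozen causal LM and prepends trainable soft-prompt embeddings."""
    def __init__(self, base_lm, num_virtual_tokens=20):
        super().__init__()
        self.base_lm = base_lm
        for p in self.base_lm.parameters():   # keep the LLM frozen
            p.requires_grad = False
        embed_dim = base_lm.get_input_embeddings().embedding_dim
        # The only trainable parameters: one embedding per virtual token.
        self.soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_ids, labels=None):
        token_embeds = self.base_lm.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the virtual tokens to the real token embeddings.
        inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
        if labels is not None:
            # Ignore the loss on the virtual-token positions.
            pad = torch.full((batch, self.soft_prompt.size(0)), -100,
                             dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.base_lm(inputs_embeds=inputs_embeds, labels=labels)
```
During training only self.soft_prompt receives gradients, so the stored artifact per task is just the prompt matrix rather than a full copy of the model.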
References
2022
- (Liu, Ji et al., 2022) ⇒ Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. (2022). “P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks.” In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). doi:10.18653/v1/2022.acl-short.8
2021
- (Liu, Ji et al., 2021) ⇒ Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. (2021). “P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks.” arXiv preprint arXiv:2110.07602. doi:10.48550/arXiv.2110.07602
- NOTES:
- It provides insights into Natural Language Understanding (NLU) and Language Model Tuning, focusing on Prompt Tuning, a method that tunes continuous prompts while keeping the Language Model frozen, reducing storage and memory usage.
- It addresses the limitations of previous Prompt Tuning methods, which underperformed for normal-sized Pretrained Models and struggled with hard sequence labeling tasks, such as Extractive Question Answering and Named Entity Recognition (NER).
- It introduces P-Tuning v2, an optimized form of Deep Prompt Tuning adapted for NLU tasks, demonstrating effectiveness across various model scales and NLU tasks (a minimal usage sketch follows these notes).
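As a rough illustration of how P-Tuning v2-style deep prompt tuning (continuous prompts injected at every layer of a frozen backbone) is applied in practice, the following sketch uses the HuggingFace peft library's prefix-tuning configuration; the model name, task type, and virtual-token count are illustrative assumptions, not values from the paper.
```python
from transformers import AutoModelForSequenceClassification
from peft import PrefixTuningConfig, TaskType, get_peft_model

# The backbone stays frozen; only the per-layer prefix (deep prompt)
# parameters and the task head are trained.
base = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
config = PrefixTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```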