Deep Neural Model Fine-Tuning Algorithm
A Deep Neural Model Fine-Tuning Algorithm is a transfer learning algorithm that adjusts the parameters of a pre-trained deep neural network to produce a fine-tuned model with improved performance on a specific task.
- AKA: Fine-Tuning (Deep Learning).
- Context:
- It can (typically) be implemented by a Deep Neural Model Fine-Tuning System (to solve a deep neural model fine-tuning task).
- ...
- Example(s):
- Fine-tuning GPT-3 with human feedback to produce ChatGPT,
- Fine-tuning a pre-trained convolutional neural network on a new vision task with its early layers frozen.
- Counter-Example(s):
- LoRA Technique (which freezes the pre-trained weights and instead trains added low-rank matrices),
- Random parameter adjustments without a clear objective,
- Training a model from scratch with no pre-existing knowledge.
- See: Semi-Supervised Learning, Reinforcement Learning From Human Feedback, Deep Neural Network, Model Optimization.
References
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning) Retrieved:2023-10-4.
- In deep learning, fine-tuning is an approach to transfer learning in which the weights of a pre-trained model are trained on new data.[1] Fine-tuning can be done on the entire neural network, or on only a subset of its layers, in which case the layers that are not being fine-tuned are "frozen" (not updated during the backpropagation step).[2] A model may also be augmented with "adapters" that consist of far fewer parameters than the original model, and fine-tuned in a parameter-efficient way by tuning the weights of the adapters and leaving the rest of the model's weights frozen.
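To make the freezing-and-adapter idea concrete, the following PyTorch sketch freezes a pre-trained layer and trains only a small bottleneck adapter added on top (the stand-in linear layer, dimensions, and learning rate are illustrative assumptions, not from the source):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdaptedBlock(nn.Module):
    """A pre-trained layer wrapped with a small trainable adapter."""
    def __init__(self, pretrained_layer: nn.Module, dim: int):
        super().__init__()
        self.layer = pretrained_layer
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.layer(x))

# Stand-in for a pre-trained layer (in practice, loaded from a checkpoint).
base = nn.Linear(128, 128)
block = AdaptedBlock(base, dim=128)

# Freeze the pre-trained weights; only adapter parameters stay trainable.
for p in block.layer.parameters():
    p.requires_grad = False

trainable = [p for p in block.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Only the adapter parameters receive gradient updates, so the trainable weights are a small fraction of the full model's.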
For some architectures, such as convolutional neural networks, it is common to keep the earlier layers (those closest to the input layer) frozen because they capture lower-level features, while later layers often discern high-level features that can be more related to the task that the model is trained on.[2]
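A minimal sketch of this convention, assuming torchvision's pre-trained ResNet-18 (the choice of model and of which stages to freeze is illustrative):

```python
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 (weights enum per current torchvision API).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the earliest stages, which capture generic low-level features
# (edges, textures); the later, more task-specific stages stay trainable.
for module in [model.conv1, model.bn1, model.layer1, model.layer2]:
    for p in module.parameters():
        p.requires_grad = False
```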
Models that are pre-trained on large and general corpora are usually fine-tuned by reusing the model's parameters as a starting point and adding a task-specific layer trained from scratch. Fine-tuning the full model is common as well and often yields better results, but it is more computationally expensive.[3]
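The following sketch illustrates both options on an assumed torchvision ResNet-18: the pre-trained weights are reused as a starting point, the head is replaced with a task-specific layer trained from scratch, and either the head alone or the full model is optimized (the class count and learning rates are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # illustrative target-task label count

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the ImageNet classification head with a task-specific layer
# initialized from scratch; the remaining weights are reused as-is.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Option 1: head-only fine-tuning, optimizing just the new layer.
head_optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)

# Option 2: full fine-tuning, typically with a smaller learning rate
# for the pre-trained body than for the freshly initialized head.
body_params = [p for n, p in model.named_parameters() if not n.startswith("fc")]
full_optimizer = torch.optim.AdamW([
    {"params": body_params, "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
```

Option 2 is the more computationally expensive path mentioned above, since every parameter in the network receives gradient updates.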
Fine-tuning is typically accomplished with supervised learning, but there are also techniques to fine-tune a model using weak supervision. Fine-tuning can be combined with a reinforcement learning from human feedback-based objective to produce language models like ChatGPT (a fine-tuned version of GPT-3) and Sparrow.
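As a sketch of the supervised case, a fine-tuning epoch is just an ordinary training loop over labeled target-task data (the function name and signature are hypothetical):

```python
import torch.nn.functional as F

def fine_tune_epoch(model, loader, optimizer, device="cpu"):
    """One supervised fine-tuning pass: labeled batches from the target
    task, a cross-entropy objective, and gradient descent on whatever
    parameters of the pre-trained model were left trainable."""
    model.train()
    for inputs, labels in loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(inputs), labels)
        loss.backward()
        optimizer.step()
```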
- ↑ Quinn, Joanne (2020). Dive into deep learning: tools for engagement. Thousand Oaks, California. p. 551. ISBN 978-1-5443-6137-6. Archived from the original on January 10, 2023. Retrieved January 10, 2023.
- ↑ "CS231n Convolutional Neural Networks for Visual Recognition". cs231n.github.io. Retrieved 9 March 2023.
- ↑ Dingliwal, Saket; Shenoy, Ashish; Bodapati, Sravan; Gandhe, Ankur; Gadde, Ravi Teja; Kirchhoff, Katrin (2021). "Prompt Tuning GPT-2 language model for parameter-efficient domain adaptation of ASR systems". arXiv:2112.08718.