OpenAI LLM Fine-Tuning API
An OpenAI LLM Fine-Tuning API is an LLM fine-tuning API that supports OpenAI's LLM fine-tuning system.
- Context:
- It can (typically) be used to create OpenAI Fine-Tuned Models based on gpt-3.5-turbo-0613, babbage-002, or davinci-002 (which can then require shorter prompts for the fine-tuned task).
- It can require the preparation and upload of OpenAI Fine-Tuning Data.
- ...
- Example(s):
- Counter-Example(s):
- See: Customizing GPT-3 for your application.
References
2024
- https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-integrations
- NOTES:
- To use the OpenAI Fine-Tuning API, start by preparing your Training Dataset, which should consist of Demonstration Conversations or Prompt-Completion Pairs formatted according to OpenAI's Specifications.
- Upload your prepared dataset using the OpenAI Files API, specifying "fine-tune" as the purpose. This step is crucial for making your data available for the Fine-Tuning Process.
- Create a fine-tuning job by calling the OpenAI Fine-Tuning Jobs API, specifying the Model you wish to fine-tune (such as GPT-3.5-Turbo or an experimental version of GPT-4) and the File ID of your uploaded dataset.
- Monitor the Fine-Tuning Job's Progress through the API. Once the job completes, you will receive an email notification; you can also check the job's status programmatically.
- After fine-tuning, access your Fine-Tuned Model by using the unique model identifier provided upon the job's completion. This identifier is used to make requests to the model through OpenAI's completion or chat endpoints.
- Evaluate your fine-tuned model's performance by generating samples on a Test Set and comparing the outputs to those of the base model. This step helps assess improvements and decide if further Fine-Tuning Iterations are necessary.
- For advanced customization, explore additional fine-tuning parameters and integrations, such as Epoch-Based Checkpoints for model versioning, and third-party tools like Weights and Biases for in-depth metrics tracking and analysis.
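The upload–train–poll–use workflow in the notes above can be sketched with the official `openai` Python client (v1.x). The training-file name, base model, and the `run_fine_tuning` helper are illustrative assumptions, not part of the API itself; a real run also requires an `OPENAI_API_KEY` environment variable.

```python
# Sketch of the fine-tuning workflow described above, assuming the
# openai Python client (v1.x). The file path, base model, and helper
# name are illustrative; this is not a definitive implementation.
import time

TERMINAL_STATUSES = ("succeeded", "failed", "cancelled")

def run_fine_tuning(client, training_path="train.jsonl",
                    base_model="gpt-3.5-turbo", poll_seconds=30):
    # 1. Upload the prepared JSONL dataset with purpose="fine-tune".
    with open(training_path, "rb") as f:
        training_file = client.files.create(file=f, purpose="fine-tune")

    # 2. Create the fine-tuning job against the uploaded file's id.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id, model=base_model)

    # 3. Poll the job until it reaches a terminal status.
    while job.status not in TERMINAL_STATUSES:
        time.sleep(poll_seconds)
        job = client.fine_tuning.jobs.retrieve(job.id)

    if job.status != "succeeded":
        raise RuntimeError(f"fine-tuning ended with status: {job.status}")

    # 4. Query the fine-tuned model through the chat endpoint, using
    # the unique model identifier returned on completion.
    reply = client.chat.completions.create(
        model=job.fine_tuned_model,
        messages=[{"role": "user", "content": "Hello!"}])
    return reply.choices[0].message.content

if __name__ == "__main__":
    from openai import OpenAI  # real client; needs an API key
    print(run_fine_tuning(OpenAI()))
```

Injecting the client as a parameter keeps the flow testable and makes it easy to swap in staging credentials or a stub during development.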
2023
- https://platform.openai.com/docs/guides/fine-tuning
- QUOTE: Fine-tuning lets you get more out of the models available through the API by providing:
- Higher quality results than prompting
- Ability to train on more examples than can fit in a prompt
- Token savings due to shorter prompts
- Lower latency requests
- OpenAI's text generation models have been pre-trained on a vast amount of text. To use the models effectively, we include instructions and sometimes several examples in a prompt. Using demonstrations to show how to perform a task is often called "few-shot learning."
- Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, letting you achieve better results on a wide number of tasks. Once a model has been fine-tuned, you won't need to provide as many examples in the prompt. This saves costs and enables lower-latency requests.
- At a high level, fine-tuning involves the following steps:
- Prepare and upload training data
- Train a new fine-tuned model
- Evaluate results and go back to step 1 if needed
- Use your fine-tuned model
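For step 1 above, the fine-tuning endpoint expects training data as JSONL, one `{"messages": [...]}` demonstration conversation per line. A minimal sketch, with illustrative example conversations and an assumed file name:

```python
# Writing training data in the JSONL chat format used for fine-tuning:
# one {"messages": [...]} object per line. The conversations and the
# file name "train.jsonl" are illustrative.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You answer tersely."},
        {"role": "user", "content": "Capital of France?"},
        {"role": "assistant", "content": "Paris."}]},
    {"messages": [
        {"role": "system", "content": "You answer tersely."},
        {"role": "user", "content": "2 + 2?"},
        {"role": "assistant", "content": "4."}]},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Each conversation should end with the assistant turn you want the model to learn to produce.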