OpenAI Fine-Tuning API

An OpenAI Fine-Tuning API is a fine-tuning API that is part of the OpenAI API.

  • See: [[]].


References

2023

  • https://platform.openai.com/docs/guides/fine-tuning
    • QUOTE: Fine-tuning lets you get more out of the models available through the API by providing:
      • Higher quality results than prompting
      • Ability to train on more examples than can fit in a prompt
      • Token savings due to shorter prompts
      • Lower latency requests
    • OpenAI's text generation models have been pre-trained on a vast amount of text. To use the models effectively, we include instructions and sometimes several examples in a prompt. Using demonstrations to show how to perform a task is often called "few-shot learning."
    • Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, letting you achieve better results on a wide number of tasks. Once a model has been fine-tuned, you won't need to provide as many examples in the prompt. This saves costs and enables lower-latency requests.
    • At a high level, fine-tuning involves the following steps:
      • Prepare and upload training data
      • Train a new fine-tuned model
      • Evaluate results and go back to step 1 if needed
      • Use your fine-tuned model
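
The "few-shot learning" pattern mentioned in the quote can be illustrated with a minimal sketch against the OpenAI Python SDK (v1-style client). The task, model name, and demonstration messages below are assumptions made for illustration; they are not taken from the quoted guide.

```python
# Few-shot prompting sketch: demonstrations are supplied directly in the prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed base model
    messages=[
        {"role": "system", "content": "Classify the sentiment of each review as positive or negative."},
        # Two in-prompt demonstrations ("shots") showing how to perform the task:
        {"role": "user", "content": "The pasta was delicious and the staff were friendly."},
        {"role": "assistant", "content": "positive"},
        {"role": "user", "content": "We waited an hour and the food arrived cold."},
        {"role": "assistant", "content": "negative"},
        # The actual input to classify:
        {"role": "user", "content": "Great value for the price."},
    ],
)
print(response.choices[0].message.content)
```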
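The four high-level steps quoted above can likewise be sketched end-to-end with the same SDK. The file name, base model, and toy training records are illustrative assumptions, and the fine-tuned model identifier only becomes available once the job reports success.

```python
# Hedged sketch of the fine-tuning workflow: prepare/upload data, train,
# check results, then use the resulting model.
import json
from openai import OpenAI

client = OpenAI()

# 1. Prepare and upload training data (chat-formatted JSONL, one example per line).
examples = [
    {"messages": [
        {"role": "system", "content": "Classify the sentiment as positive or negative."},
        {"role": "user", "content": "The pasta was delicious."},
        {"role": "assistant", "content": "positive"},
    ]},
    # ... more examples of the same shape ...
]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# 2. Train a new fine-tuned model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # assumed base model
)

# 3. Evaluate results: poll the job and inspect its status; if quality is not
#    good enough, revise the training data and go back to step 1.
job = client.fine_tuning.jobs.retrieve(job.id)
print(job.status)  # e.g. "running", then "succeeded"

# 4. Use your fine-tuned model once the job has succeeded.
if job.status == "succeeded":
    response = client.chat.completions.create(
        model=job.fine_tuned_model,  # identifier assigned by the API
        messages=[{"role": "user", "content": "Great value for the price."}],
    )
    print(response.choices[0].message.content)
```

Once fine-tuned, the prompt no longer needs to carry demonstrations, which is the source of the shorter prompts, token savings, and lower latency noted in the quote.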