OpenAI GPT-3.5 Model
An OpenAI GPT-3.5 Model is an OpenAI LLM released in ~2022-11.
- Context:
- …
- Example(s):
- gpt-3.5-turbo, gpt-3.5-turbo-16k, and text-davinci-003 (see the model table in the references below).
- Counter-Example(s):
- other OpenAI GPT Models, such as: GPT-3, GPT-4.
- LLaMA 2.
- See: Autoregressive Model, Text-to-Text Model.
References
2023 (OpenAI API model documentation)
- gpt-3.5-turbo: Most capable GPT-3.5 model and optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with our latest model iteration 2 weeks after it is released. Max tokens: 4,096. Training data: up to Sep 2021.
- gpt-3.5-turbo-16k: Same capabilities as the standard gpt-3.5-turbo model but with 4 times the context. Max tokens: 16,384. Training data: up to Sep 2021.
- gpt-3.5-turbo-0613: Snapshot of gpt-3.5-turbo from June 13th 2023 with function calling data. Unlike gpt-3.5-turbo, this model will not receive updates, and will be deprecated 3 months after a new version is released. Max tokens: 4,096. Training data: up to Sep 2021.
- gpt-3.5-turbo-16k-0613: Snapshot of gpt-3.5-turbo-16k from June 13th 2023. Unlike gpt-3.5-turbo-16k, this model will not receive updates, and will be deprecated 3 months after a new version is released. Max tokens: 16,384. Training data: up to Sep 2021.
- gpt-3.5-turbo-0301 (Legacy): Snapshot of gpt-3.5-turbo from March 1st 2023. Unlike gpt-3.5-turbo, this model will not receive updates, and will be deprecated on June 13th 2024 at the earliest. Max tokens: 4,096. Training data: up to Sep 2021.
- text-davinci-003 (Legacy): Can do any language task with better quality, longer output, and consistent instruction-following than the curie, babbage, or ada models. Also supports some additional features such as inserting text. Max tokens: 4,097. Training data: up to Jun 2021.
- text-davinci-002 (Legacy): Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning. Max tokens: 4,097. Training data: up to Jun 2021.
- code-davinci-002 (Legacy): Optimized for code-completion tasks. Max tokens: 8,001. Training data: up to Jun 2021.
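The gpt-3.5-turbo variants are served through OpenAI's chat completions endpoint, which takes a list of role-tagged messages rather than the raw prompt string used by text-davinci-003. A minimal sketch, assuming the 2023-era openai Python package (v0.x API style) and an OPENAI_API_KEY environment variable:

```python
import os
import openai  # 2023-era openai package (v0.x API style)

openai.api_key = os.environ["OPENAI_API_KEY"]

# Chat models take role-tagged messages, not a raw prompt string.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or a pinned snapshot such as "gpt-3.5-turbo-0613"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-3.5 in one sentence."},
    ],
    max_tokens=256,   # completion budget; prompt + completion must fit in the context window
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

Pinning a dated snapshot such as gpt-3.5-turbo-0613 trades automatic upgrades for reproducibility, per the deprecation notes in the table above.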
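The "Max tokens" column is a combined limit on prompt plus completion, so callers typically count prompt tokens before choosing a completion budget. A sketch using the tiktoken tokenizer; the count ignores the few tokens of per-message framing overhead that OpenAI's cookbook documents, so treat it as an approximation:

```python
import tiktoken

CONTEXT_WINDOW = 4096  # gpt-3.5-turbo; 16,384 for gpt-3.5-turbo-16k

def count_tokens(messages, model="gpt-3.5-turbo"):
    """Approximate prompt token count (content only, no per-message framing)."""
    enc = tiktoken.encoding_for_model(model)
    return sum(len(enc.encode(m["content"])) for m in messages)

messages = [{"role": "user", "content": "Summarize GPT-3.5 in one sentence."}]
prompt_tokens = count_tokens(messages)

# Whatever remains of the context window is the most the completion can use.
max_completion = CONTEXT_WINDOW - prompt_tokens
print(prompt_tokens, max_completion)
```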
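The gpt-3.5-turbo-0613 snapshot's "function calling data" refers to the June 2023 API feature where the caller supplies JSON Schema function definitions and the model may respond with a structured function call instead of text. A sketch under the same v0.x API assumptions; get_current_weather is a hypothetical function used only for illustration:

```python
import json
import os
import openai  # 2023-era openai package (v0.x API style)

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical function schema; the model emits arguments matching this JSON Schema.
functions = [{
    "name": "get_current_weather",  # illustrative name, not a real API
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # first snapshot trained with function calling data
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    functions=functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns a function name plus JSON-encoded arguments;
    # executing the function is left to the caller.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)
```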