OpenAI API Endpoint


An OpenAI API Endpoint is an API endpoint within the OpenAI API that enables developers to access specific AI models or services.
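
As a minimal sketch of what calling such an endpoint looks like, the example below sends a request to the /v1/chat/completions endpoint over HTTPS using the Python requests library. It assumes an API key is available in the OPENAI_API_KEY environment variable and that the gpt-4o-mini model is accessible to that key; the prompt text is illustrative only.

  import os
  import requests

  # Minimal sketch: POST a chat request to the /v1/chat/completions endpoint.
  # Assumes the OPENAI_API_KEY environment variable holds a valid API key.
  api_key = os.environ["OPENAI_API_KEY"]

  response = requests.post(
      "https://api.openai.com/v1/chat/completions",
      headers={
          "Authorization": f"Bearer {api_key}",
          "Content-Type": "application/json",
      },
      json={
          "model": "gpt-4o-mini",  # example model; any chat-capable model works here
          "messages": [{"role": "user", "content": "Say hello in one sentence."}],
      },
      timeout=30,
  )
  response.raise_for_status()
  print(response.json()["choices"][0]["message"]["content"])

The same pattern, an HTTPS request to https://api.openai.com/v1/... with a bearer token, applies to the other endpoints listed in the tables below.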


References

2024

  • https://platform.openai.com/docs/models
    • NOTES:
    • It can include models like:
      • GPT-4o: OpenAI’s flagship model designed for complex, multi-step tasks. It supports text and image inputs with a context length of 128,000 tokens, generating text twice as fast as GPT-4 Turbo and at a lower cost per token.
      • GPT-4o Mini: A smaller, more affordable variant of GPT-4o, designed for lightweight, fast tasks. It offers similar multimodal capabilities but is optimized for speed and lower cost, with a context window of 128,000 tokens.
      • o1-Preview and o1-Mini: A new series of reasoning models using reinforcement learning to solve complex problems. The o1-Preview model handles harder reasoning tasks, while o1-Mini is optimized for faster, cheaper performance in math, coding, and science tasks.
      • Continuous Model Upgrades: OpenAI continuously updates model versions, like GPT-4o-latest, allowing developers to use the latest versions in production. Developers can also contribute evaluations via OpenAI Evals to help improve models for different use cases.
      • Model Context Windows: OpenAI API models, such as GPT-4o, support large context windows of up to 128,000 tokens, allowing for long and complex inputs and outputs in a single API request.
      • Model Pricing Tiers: OpenAI provides a variety of models with different pricing points, from high-performance models like GPT-4o to more affordable options like GPT-4o Mini. Each model is designed to cater to different computational needs and budgets.
      • DALL·E: OpenAI’s image generation model, capable of creating and editing images based on natural language prompts. The latest iteration, DALL·E 3, offers improved resolution and image fidelity compared to previous versions.
      • Text-to-Speech (TTS) Models: OpenAI’s TTS models, including tts-1 and tts-1-hd, convert text into natural-sounding spoken audio. They can be used for real-time speech synthesis applications.
      • Whisper Model: A general-purpose speech recognition model, Whisper is available through the OpenAI API and excels at multilingual speech recognition, translation, and language identification. It is optimized for faster inference when used via the API.
      • Embeddings API: OpenAI’s Embeddings API converts text into numerical vectors for use in search, recommendation systems, anomaly detection, and clustering. The latest models, such as text-embedding-3-large, improve performance across both English and non-English tasks.
      • Moderation Models: OpenAI’s Moderation API helps detect unsafe or sensitive content based on categories like hate speech, violence, and self-harm. The API processes up to 32,768 tokens in each moderation check and provides high accuracy in text classification.
    • NOTES: Model endpoint compatibility
Endpoint | Model name | Description
/v1/chat/completions | gpt-4, gpt-4o, gpt-4o-mini, gpt-3.5-turbo | Supports both text and image inputs with the latest chat completion features.
/v1/completions | text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001 | Legacy completions endpoint used for traditional text completions.
/v1/edits | text-davinci-edit-001, code-davinci-edit-001 | Used for editing or inserting text based on instructions.
/v1/audio/transcriptions | whisper-1 | Converts speech into text using the Whisper model.
/v1/audio/translations | whisper-1 | Translates speech into English text using the Whisper model.
/v1/fine-tunes | gpt-4o, gpt-4o-mini, gpt-3.5-turbo, davinci, curie, babbage, ada | Enables fine-tuning models for specific tasks.
/v1/embeddings | text-embedding-3-large, text-embedding-3-small, text-embedding-ada-002 | Converts text into numerical vectors for use in search, recommendation, and classification systems.
/v1/moderations | text-moderation-stable, text-moderation-latest | Used to detect unsafe or sensitive content in text.
/v1/images/generations | dall-e-2, dall-e-3 | Generates images from text prompts using the DALL·E models.
/v1/audio/speech | tts-1, tts-1-hd | Converts text into natural-sounding spoken audio using the Text-to-Speech (TTS) models.
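
As a rough sketch of how the endpoint/model pairings above map onto client code, the example below calls the chat completions, embeddings, and moderations endpoints through the official openai Python SDK (v1.x). The model names are taken from the table, the client is assumed to read OPENAI_API_KEY from the environment, and the input strings are illustrative placeholders.

  from openai import OpenAI

  # Assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
  client = OpenAI()

  # /v1/chat/completions with a chat-capable model from the table.
  chat = client.chat.completions.create(
      model="gpt-4o-mini",
      messages=[{"role": "user", "content": "Summarize what an API endpoint is."}],
  )
  print(chat.choices[0].message.content)

  # /v1/embeddings with an embedding model from the table.
  emb = client.embeddings.create(
      model="text-embedding-3-small",
      input="An OpenAI API Endpoint exposes a specific model or service.",
  )
  print(len(emb.data[0].embedding))  # dimensionality of the returned vector

  # /v1/moderations with a moderation model from the table.
  mod = client.moderations.create(
      model="text-moderation-latest",
      input="Some user-submitted text to screen.",
  )
  print(mod.results[0].flagged)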

2023

Endpoint | Model name
/v1/chat/completions | gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301
/v1/completions | text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, davinci, curie, babbage, ada
/v1/edits | text-davinci-edit-001, code-davinci-edit-001
/v1/audio/transcriptions | whisper-1
/v1/audio/translations | whisper-1
/v1/fine-tunes | davinci, curie, babbage, ada
/v1/embeddings | text-embedding-ada-002, text-search-ada-doc-001
/v1/moderations | text-moderation-stable, text-moderation-latest
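
The audio endpoints appear in both the 2023 and 2024 tables. As a hedged sketch, the example below transcribes a local audio file through /v1/audio/transcriptions with the whisper-1 model using the openai Python SDK (v1.x); the file name meeting.mp3 is a hypothetical placeholder, and OPENAI_API_KEY is assumed to be set in the environment.

  from openai import OpenAI

  # Assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
  client = OpenAI()

  # /v1/audio/transcriptions with whisper-1; "meeting.mp3" is a placeholder file name.
  with open("meeting.mp3", "rb") as audio_file:
      transcript = client.audio.transcriptions.create(
          model="whisper-1",
          file=audio_file,
      )

  print(transcript.text)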