Pre-Trained General-Purpose LLM
A Pre-Trained General-Purpose LLM is a large language model that is both a pre-trained LLM and a general-purpose LLM.
- Context:
- It can (typically) be trained on extensive datasets covering various topics.
- It can (often) be used for various natural language processing tasks without significant additional training.
- It can range from being a Foundation/Base LLM to being an Instruct-Tuned General-Purpose LLM.
- It can support tasks such as text generation, translation, summarization, and question answering.
- It can provide a strong starting point for fine-tuning on specific tasks or domains.
- ...
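The context bullets above note that a single pre-trained general-purpose LLM can handle tasks such as text generation, translation, summarization, and question answering without significant additional training. A minimal sketch of that zero-shot pattern, assuming a hypothetical model client and illustrative task names (none of these identifiers come from the source):

```python
# Sketch: one pre-trained general-purpose LLM serves several NLP tasks via
# zero-shot prompting -- no further training, only the prompt changes per task.
# The task names and templates here are illustrative, not a real library's API.

PROMPT_TEMPLATES = {
    "text-generation": "Continue the following text:\n{text}",
    "translation": "Translate the following text into French:\n{text}",
    "summarization": "Summarize the following text in one sentence:\n{text}",
    "question-answering": "Answer the following question:\n{text}",
}

def build_prompt(task: str, text: str) -> str:
    """Map a supported task name to a zero-shot prompt for the model."""
    if task not in PROMPT_TEMPLATES:
        raise ValueError(f"Unsupported task: {task}")
    return PROMPT_TEMPLATES[task].format(text=text)

# A single (hypothetical) model client would receive each prompt unchanged:
# response = llm.complete(build_prompt("summarization", article_text))
```

The point of the sketch is that task specialization lives entirely in the prompt; the underlying pre-trained weights are shared across all four tasks.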
- Example(s):
- a Foundation/Base LLM, such as GPT-3.
- an Instruct-Tuned General-Purpose LLM, such as a LLaMA-Instruct model.
- ...
- Counter-Example(s):
- a Domain-Specific LLM, fine-tuned for a particular domain rather than general use.
- See: [[Pre-Trained LLM]], [[General-Purpose LLM]].