Foundation Large Language Model (LLM)
A Foundation Large Language Model (LLM) is a pre-trained LLM that serves as a foundation model, i.e., one that can be adapted (e.g., via fine-tuning or prompting) to a wide range of downstream tasks.
- Context:
- It can range from being a Foundation Pure LLM Model to being a Foundation Chat LLM Model.
- …
- Example(s):
- Counter-Example(s):
- See: GPT-4, PaLM 2, BLOOM.
References
2023
- GBard
- Examples of foundation LLM models include:
2023
- https://python.langchain.com/docs/modules/model_io/models/
- LLMs and chat models are subtly but importantly different. LLMs in LangChain refer to pure text completion models. The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM. Chat models are often backed by LLMs but tuned specifically for having conversations. And, crucially, their provider APIs use a different interface than pure text completion models. Instead of a single string, they take a list of chat messages as input. Usually these messages are labeled with the speaker (usually one of "System", "AI", and "Human"). And they return an AI chat message as output. GPT-4 and Anthropic's Claude are both implemented as chat models.
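The interface difference the quote describes can be sketched with two toy classes: a minimal, hypothetical illustration of the two API shapes (string-in/string-out vs. message-list-in/message-out), not LangChain's actual classes or method signatures.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ChatMessage:
    role: str     # typically one of "System", "Human", or "AI"
    content: str

class ToyLLM:
    """Pure text-completion interface: a string prompt in, a string completion out."""
    def complete(self, prompt: str) -> str:
        # A real model would generate a continuation; we echo for illustration.
        return prompt + " ... [completion]"

class ToyChatModel:
    """Chat interface: a list of labeled messages in, a single AI message out."""
    def chat(self, messages: List[ChatMessage]) -> ChatMessage:
        # A real chat model would condition on the whole conversation.
        last = messages[-1].content
        return ChatMessage(role="AI", content=f"[reply to: {last}]")

# The pure-completion call takes and returns plain strings:
llm_out = ToyLLM().complete("The capital of France is")

# The chat call takes speaker-labeled messages and returns an AI message:
chat_out = ToyChatModel().chat([
    ChatMessage(role="System", content="You are a helpful assistant."),
    ChatMessage(role="Human", content="What is the capital of France?"),
])
```

The two call shapes are what distinguish a foundation pure LLM (e.g., GPT-3 as wrapped by LangChain) from a foundation chat LLM (e.g., GPT-4 or Claude) at the API level, even when a chat model is backed by the same kind of underlying network.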