Google LaMDA Model
A Google LaMDA Model is a Transformer-based conversational neural language model developed by Google.
- Example(s):
  - the first-generation LaMDA model, announced at Google I/O 2021;
  - the second-generation LaMDA 2 model, announced at Google I/O 2022.
- Counter-Example(s):
  - a BERT Model;
  - a GPT-3 Model.
- See: PaLM Model.
References
2022
- (Wikipedia, 2022) ⇒ https://en.wikipedia.org/wiki/LaMDA Retrieved:2022-12-14.
- LaMDA, which stands for Language Model for Dialogue Applications, is a family of conversational neural language models developed by Google. The first generation was announced during the 2021 Google I/O keynote, while the second generation was announced at the following year's event. In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine claimed that the chatbot had become sentient. The scientific community has largely rejected these claims, though the episode has prompted discussion about the efficacy of the Turing test, which measures whether a computer can pass for a human.
2021
- https://blog.google/technology/ai/lamda/
- QUOTE: LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.
But unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: Does the response to a given conversational context make sense? For instance, if someone says:
- “I just started taking guitar lessons.”
- You might expect another person to respond with something like:
- “How exciting! My mom has a vintage Martin that she loves to play.”
- That response makes sense, given the initial statement. But sensibleness isn’t the only thing that makes a good response. After all, the phrase “that’s nice” is a sensible response to nearly any statement, much in the way “I don’t know” is a sensible response to most questions. Satisfying responses also tend to be specific, by relating clearly to the context of the conversation. In the example above, the response is sensible and specific.
- LaMDA builds on earlier Google research, published in 2020, that showed Transformer-based language models trained on dialogue could learn to talk about virtually anything. Since then, we’ve also found that, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.
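- The blog post above describes the core behavior of a Transformer-based language model: read a span of words, attend to how they relate to one another, and score candidate next words. LaMDA itself is not publicly released, so the following is only a minimal sketch of that next-word prediction step, assuming the Hugging Face transformers library and using GPT-2 purely as an illustrative stand-in model; it does not reflect Google's actual LaMDA code or its dialogue fine-tuning.

```python
# Minimal sketch of next-word prediction with a Transformer language model.
# NOTE: LaMDA is not publicly available; GPT-2 is used here only as an
# assumed stand-in to illustrate the behavior described in the blog post.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Conversational context, borrowed from the blog's example.
context = "I just started taking guitar lessons."
inputs = tokenizer(context, return_tensors="pt")

# The model attends over the context tokens and assigns a score to every
# vocabulary item as a candidate continuation; we print the top few.
with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)
next_token_logits = logits[0, -1]        # scores for the word following the context

top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {score.item():.2f}")
```

- Running this on the example context prints the stand-in model's top-scoring continuations. As described above, LaMDA's dialogue training and subsequent fine-tuning are what push such continuations toward responses that are sensible and specific to the conversation.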