Pretrained Word Embeddings Dataset
A Pretrained Word Embeddings Dataset is an embeddings dataset with word embedding records.
- Context:
- It can (typically) be associated with a Word Embeddings Space Model.
- Example(s):
- GloVe word embeddings[1]
- Word embeddings trained on the 20 Newsgroups dataset, such as: [2].
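Datasets such as GloVe are commonly distributed as plain-text files, one word per line followed by its vector components. A minimal sketch of loading this format is shown below; the function name and the sample vector values are illustrative, not actual GloVe data.

```python
# Minimal sketch: parsing word vectors stored in the GloVe text format,
# where each line is "<word> <v1> <v2> ... <vN>".
# Sample lines below are made-up values for illustration only.

def load_glove_lines(lines):
    """Map each word to its embedding vector (a list of floats)."""
    embeddings = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        word, vector = parts[0], [float(x) for x in parts[1:]]
        embeddings[word] = vector
    return embeddings

sample = [
    "the 0.418 0.24968 -0.41242",
    "cat 0.15164 0.30177 -0.16763",
]
vectors = load_glove_lines(sample)
print(len(vectors["the"]))  # prints 3: each word maps to a 3-dimensional vector here
```

In practice, the same loop works on a file handle opened over a downloaded embeddings file (e.g. `open("glove.6B.100d.txt", encoding="utf-8")`), since iterating a file yields lines in exactly this format.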
- See: Word Embeddings Generation System, Word Embedding, Word Vector Embedding.
References
2016
- (Serban et al., 2016) ⇒ Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. (2016). “Building End-to-end Dialogue Systems Using Generative Hierarchical Neural Network Models.” In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence.
- QUOTE: … We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. … We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.