2017 RepresentationsforLanguageFromWordEmbeddingstoSentenceMeanings
- (Manning, 2017) ⇒ Christopher Manning. (2017). “Representations for Language: From Word Embeddings to Sentence Meanings.” Invited talk at the Simons Institute for the Theory of Computing.
Subject Headings: Word Embedding, Distributed Representation, Sentence Meaning, Natural Language Inference, Question Answering, Neural Machine Translation, BiLSTM with Attention.
Notes
- talk page https://simons.berkeley.edu/talks/christopher-manning-2017-3-27
- talk video https://www.youtube.com/watch?v=nFCxTtBqF5U
- copy of presentation slides https://docs.google.com/presentation/d/1rUr0hUGCaJTP8LscP1bC-6tlzfRSahz7_cVU0fF1srU/
- related to Christopher D. Manning's “A Neural Network Model That Can Reason” (ICLR 2018 invited talk)
Cited By
Quotes
Abstract
A basic - yet very successful - tool for modeling human language has been a new generation of distributed word representations: neural word embeddings. However, beyond just word meanings, we need to understand the meanings of larger pieces of text and the relationships between pieces of text, like questions and answers. Two requirements for that are good ways to understand the structure of human language utterances and ways to compose their meanings. Deep learning methods can help for both tasks. I will then look at methods for understanding the relationships between pieces of text, for tasks such as natural language inference, question answering, and machine translation. A key, still open, question raised by recent deep learning work for NLP is to what extent do we need explicit language and knowledge representations versus everything being latent in distributed representations. Put most controversially, that is the question of whether a bidirectional LSTM with attention is the answer for all language processing needs.
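The abstract's closing question concerns whether a bidirectional LSTM with attention suffices for language processing. As a point of reference only, the following is a minimal illustrative sketch of such a sentence encoder in PyTorch; it is not Manning's implementation, and the class name, dimensions, and the simple linear attention scoring are assumptions made for illustration.
```python
# Minimal sketch of a BiLSTM-with-attention sentence encoder (illustrative only;
# not the talk's implementation). Dimensions and the attention scoring are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMAttentionEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)      # word embeddings
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)  # contextual encoding
        self.attn_score = nn.Linear(2 * hidden_dim, 1)            # scores each position

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)                      # (batch, seq_len, embed_dim)
        states, _ = self.bilstm(embedded)                         # (batch, seq_len, 2*hidden_dim)
        scores = self.attn_score(states).squeeze(-1)              # (batch, seq_len)
        weights = F.softmax(scores, dim=-1)                       # attention distribution
        sentence_vec = torch.bmm(weights.unsqueeze(1), states).squeeze(1)  # weighted sum
        return sentence_vec                                       # (batch, 2*hidden_dim)

# Toy usage: encode a batch of two 7-token "sentences" of random word indices.
encoder = BiLSTMAttentionEncoder(vocab_size=1000)
batch = torch.randint(0, 1000, (2, 7))
print(encoder(batch).shape)  # torch.Size([2, 256])
```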
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2017 RepresentationsforLanguageFromWordEmbeddingstoSentenceMeanings | Christopher D. Manning | | | Representations for Language: From Word Embeddings to Sentence Meanings | | | | | | 2017 |