Word/Token-Level Neural-based Language Model
A Word/Token-Level Neural-based Language Model is a token/word-level language model that is also a neural language model: it predicts the next word/token from a learned latent representation of the preceding token history.
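As a minimal sketch of the concept (an illustration, not a model from the cited work), the code below embeds a token history, encodes it with an LSTM into a latent representation, and projects that representation onto the vocabulary to score the next token. The class name TokenLM and all sizes are assumptions made for the example.

    import torch
    import torch.nn as nn

    class TokenLM(nn.Module):
        """A minimal word/token-level neural language model (illustrative)."""
        def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.proj = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # tokens: (batch, seq_len) integer token ids
            h, _ = self.rnn(self.embed(tokens))  # latent representation of the token history
            return self.proj(h)                  # per-position logits over the vocabulary

    # Usage: score the next token after a toy 5-token context.
    model = TokenLM(vocab_size=100)
    logits = model(torch.randint(0, 100, (1, 5)))
    next_token_probs = logits[:, -1].softmax(dim=-1)  # distribution over the next token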
References
2017
- (Daniluk et al., 2017) ⇒ Michał Daniluk, Tim Rocktäschel, Johannes Welbl, and Sebastian Riedel. (2017). “Frustratingly Short Attention Spans in Neural Language Modeling.” In: Proceedings of ICLR 2017.
- QUOTE: ... Neural language models predict the next token using a latent representation of the immediate token history. Recently, various methods for augmenting neural language models with an attention mechanism over a differentiable memory have been proposed. For predicting the next token, these models query information from a memory of the recent history which can facilitate learning mid- and long-range dependencies. ...
... At the core of language models (LMs) is their ability to infer the next word given a context. This requires representing context-specific dependencies in a sequence across different time scales. On the one hand, classical N-gram language models capture relevant dependencies between words in short time distances explicitly, but suffer from data sparsity. Neural language models, on the other hand, maintain and update a dense vector representation over a sequence where time dependencies are captured implicitly (Mikolov et al., 2010). ...
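To illustrate the attention-over-memory idea described in the quote above, the following is a minimal sketch assuming plain dot-product attention over a fixed span of recent hidden states. The function name attend_over_memory and the span length are hypothetical; this is not the exact model proposed by Daniluk et al. (2017).

    import torch

    def attend_over_memory(h_t: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # h_t:    (batch, hidden) current latent representation
        # memory: (batch, span, hidden) hidden states of the recent token history
        scores = torch.einsum("bh,bsh->bs", h_t, memory)       # similarity to each memory slot
        weights = scores.softmax(dim=-1)                       # attention weights over the span
        context = torch.einsum("bs,bsh->bh", weights, memory)  # weighted summary of the history
        return torch.cat([h_t, context], dim=-1)               # combined state for next-token prediction

    # Usage: attend over an (assumed) span of 10 recent hidden states.
    h_t = torch.randn(2, 128)
    memory = torch.randn(2, 10, 128)
    combined = attend_over_memory(h_t, memory)  # shape: (2, 256)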