Linguistic Sequence Decoding System
A Linguistic Sequence Decoding System is a Linguistic Sequence Modeling System that is based on an Encoder-Decoder Neural Network architecture.
- AKA: Word Sequence Decoding System, Sentence-level Decoding System.
- Context:
- It can solve a Linguistic Sequence Decoding Task by implementing a Linguistic Sequence Decoding Algorithm (a minimal decoding sketch follows the See list below).
- Example(s):
- an autoregressive decoder built on an LSTM Network or a Gated Recurrent Unit, as used for Image Captioning and Question Answering (Xu & Sigal, 2020);
- a parallel sequence decoding system based on Non-Autoregressive Decoding, Insertion-Based Decoding, or Mask-Predict (Guo et al., 2020);
- …
- Counter-Example(s):
- See: Transformer Network, Language Model, Natural Language Processing System, Graph Neural Network, Dense Relational Captioning System, Self-Attention Network, Gated Recurrent Unit, Long Short-Term Memory (LSTM) Network, RNN-Based Language Model, Backpropagation Through Time, Recurrent Neural Network.
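Below is a minimal sketch, in PyTorch, of the greedy autoregressive decoding loop such a system typically implements, here with a GRU-based encoder-decoder. All names, dimensions, and special token ids (BOS, EOS) are illustrative assumptions, not a reference implementation from any cited source.

```python
# Minimal greedy autoregressive decoding sketch (PyTorch).
# All sizes and token ids below are hypothetical.
import torch
import torch.nn as nn

VOCAB, EMB, HID, BOS, EOS = 100, 32, 64, 1, 2  # hypothetical vocabulary/layout

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)  # projects hidden state to vocabulary logits

    def decode_greedy(self, src, max_len=20):
        # Encode the source sequence; keep only the final hidden state.
        _, h = self.encoder(self.embed(src))
        tok = torch.full((src.size(0), 1), BOS, dtype=torch.long)
        result = []
        for _ in range(max_len):
            # One decoder step, conditioned on the previously emitted token.
            o, h = self.decoder(self.embed(tok), h)
            tok = self.out(o[:, -1]).argmax(-1, keepdim=True)
            result.append(tok)
            if (tok == EOS).all():
                break
        return torch.cat(result, dim=1)

model = Seq2Seq()
print(model.decode_greedy(torch.randint(3, VOCAB, (2, 5))).shape)
```

Because each step conditions on the previous output token, the loop is inherently sequential; the parallel decoding approaches quoted below remove exactly this dependency.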
References
2020a
- (Jurafsky & Martin, 2020) ⇒ Dan Jurafsky, and James H. Martin (2020). "Chapter 9: Deep Learning Architectures for Sequence Processing". In: Speech and Language Processing (3rd ed. draft).
2020b
- (Guo et al., 2020) ⇒ Junliang Guo, Zhirui Zhang, Linli Xu, Hao-Ran Wei, Boxing Chen, and Enhong Chen (2020). "Incorporating BERT into Parallel Sequence Decoding with Adapters". In: Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (NeurIPS 2020).
- QUOTE: Parallel sequence decoding hugely reduces the inference latency by neglecting the conditional dependency between output tokens, based on novel decoding algorithms including non-autoregressive decoding (...), insertion-based decoding (...) and Mask-Predict (...)
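To make the quoted parallel-decoding idea concrete, here is a minimal Mask-Predict-style sketch: every target position is predicted in parallel, then the lowest-confidence predictions are re-masked and refined over a few passes. The single Transformer encoder layer and all sizes are stand-in assumptions, not the adapter-based setup of Guo et al.

```python
# Minimal Mask-Predict-style parallel decoding sketch (PyTorch).
# The model and all constants are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, DIM, MASK = 100, 64, 0  # hypothetical vocabulary size and mask token id

embed = nn.Embedding(VOCAB, DIM)
layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
out = nn.Linear(DIM, VOCAB)

def mask_predict(tgt_len=8, iters=3):
    # Start fully masked; every position is predicted in parallel each pass.
    tokens = torch.full((1, tgt_len), MASK, dtype=torch.long)
    for t in range(iters):
        probs = out(layer(embed(tokens))).softmax(-1)
        conf, pred = probs.max(-1)          # parallel predictions + confidences
        tokens = pred
        n_mask = tgt_len * (iters - 1 - t) // iters
        if n_mask > 0:
            # Re-mask the least confident positions for the next refinement pass.
            idx = conf.topk(n_mask, largest=False).indices
            tokens.scatter_(1, idx, MASK)
    return tokens

print(mask_predict())
```

Unlike the greedy loop above, inference cost here scales with the small fixed number of refinement passes rather than with the output length.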
2020c
- (Xu & Sigal, 2020) ⇒ Bicheng Xu, and Leonid Sigal (2020). "Consistent Multiple Sequence Decoding". In: arXiv:2004.00760.
- QUOTE: Sequence decoding has emerged as one of the fundamental building blocks for a large variety of computer vision problems. For example, it is a critical component in a range of visual-lingual architectures, for tasks such as image captioning (...) and question answering (...), as well as in generative models that tackle trajectory prediction or forecasting (...). Most existing methods assume a single sequence and implement neural decoding using recurrent architectures, e.g., LSTMs or GRUs; recent variants include models like BERT (...). However, in many scenarios, more than one sequence needs to be decoded at the same time. Common examples include trajectory forecasting in team sports (...) or autonomous driving (...), where multiple agents (players/cars) need to be predicted and behavior of one agent may closely depend on the others.
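As an illustration of decoding several sequences at once, the following sketch keeps a separate LSTM state per agent but conditions each step on a mean-pooled summary of all agents' hidden states, so one agent's prediction can depend on the others. The pooling scheme and all names are illustrative assumptions, not the consistency mechanism of Xu & Sigal.

```python
# Minimal joint multi-sequence decoding sketch (PyTorch).
# The mean-pooled "social" context is a hypothetical coupling mechanism.
import torch
import torch.nn as nn

AGENTS, DIM = 3, 32

cell = nn.LSTMCell(2 + DIM, DIM)   # input: 2-D position + shared context
head = nn.Linear(DIM, 2)           # predicts each agent's next 2-D position

def decode_joint(start, steps=5):
    h = torch.zeros(AGENTS, DIM)
    c = torch.zeros(AGENTS, DIM)
    pos, track = start, []
    for _ in range(steps):
        # Shared context: mean of all agents' hidden states, broadcast back.
        ctx = h.mean(0, keepdim=True).expand(AGENTS, DIM)
        h, c = cell(torch.cat([pos, ctx], dim=-1), (h, c))
        pos = head(h)              # each agent's step depends on the others
        track.append(pos)
    return torch.stack(track, dim=1)   # shape: (AGENTS, steps, 2)

print(decode_joint(torch.randn(AGENTS, 2)).shape)
```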