Xu-Sigal Multiple Sequence Decoding System
A Xu-Sigal Multiple Sequence Decoding System is a Linguistic Sequence Decoding System that was developed by Xu & Sigal (2020).
- Context:
- It can solve a Xu-Sigal Multiple Sequence Decoding Task by implementing a Xu-Sigal Multiple Sequence Decoding Algorithm.
- Example(s):
- the one described in Xu & Sigal (2020),
- …
- Counter-Example(s):
- See: Transformer Network, Language Model, Natural Language Processing System, Graph Neural Network, Dense Relational Captioning System, Self-Attention Network, Gated Recurrent Unit, Long Short-Term Memory (LSTM) Network, RNN-Based Language Model, Backpropagation Through Time, Recurrent Neural Network.
References
2020
- (Xu & Sigal, 2020) ⇒ Bicheng Xu, and Leonid Sigal (2020). "Consistent Multiple Sequence Decoding". In: arXiv:2004.00760.
- QUOTE: Sequence decoding has emerged as one of the fundamental building blocks for a large variety of computer vision problems. For example, it is a critical component in a range of visual-lingual architectures, for tasks such as image captioning (...) and question answering (...), as well as in generative models that tackle trajectory prediction or forecasting (...). Most existing methods assume a single sequence and implement neural decoding using recurrent architectures, e.g., LSTMs or GRUs; recent variants include models like BERT (...). However, in many scenarios, more than one sequence needs to be decoded at the same time. Common examples include trajectory forecasting in team sports (...) or autonomous driving (...), where multiple agents (players/cars) need to be predicted and behavior of one agent may closely depend on the others.
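To make the multiple-sequence setting concrete, the following is a minimal sketch of decoding several interdependent sequences in parallel with GRU cells, where each agent's per-step update is conditioned on a pooled summary of all agents' hidden states. This is not Xu & Sigal's actual architecture or consistency mechanism; the mean-pooled coupling, the class name, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of jointly decoding multiple sequences with coupled GRU
# decoders. The inter-agent coupling here is a simple mean-pooled context
# vector; Xu & Sigal (2020) use a different coupling mechanism, so treat
# this purely as an illustration of the problem setting.
import torch
import torch.nn as nn


class CoupledMultiSequenceDecoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        # Each step consumes the previous output plus a pooled context
        # summarizing the other agents' hidden states.
        self.cell = nn.GRUCell(input_dim + hidden_dim, hidden_dim)
        self.proj = nn.Linear(hidden_dim, output_dim)

    def forward(self, h0, x0, num_steps):
        # h0: (num_agents, hidden_dim) initial hidden state per agent
        # x0: (num_agents, input_dim) initial input per agent
        h, x = h0, x0
        outputs = []
        for _ in range(num_steps):
            # Inter-agent context: mean of all agents' hidden states,
            # broadcast back to every agent (a hypothetical coupling choice).
            context = h.mean(dim=0, keepdim=True).expand_as(h)
            h = self.cell(torch.cat([x, context], dim=-1), h)
            y = self.proj(h)
            outputs.append(y)
            x = y  # feed the prediction back as the next step's input
        return torch.stack(outputs, dim=1)  # (num_agents, num_steps, output_dim)


# Usage: decode 5 steps for 3 agents whose predictions influence one another.
decoder = CoupledMultiSequenceDecoder(input_dim=2, hidden_dim=16, output_dim=2)
trajectories = decoder(torch.zeros(3, 16), torch.zeros(3, 2), num_steps=5)
print(trajectories.shape)  # torch.Size([3, 5, 2])
```

Decoding all agents inside a single loop, with a shared context injected at every step, is what distinguishes this setting from running independent single-sequence decoders per agent.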