Attention-based QA-LSTM
An Attention-based QA-LSTM is a QA-LSTM System that includes an Attention Mechanism (which weights the answer's biLSTM outputs by the question embedding before pooling).
- AKA: QA-LSTM with Attention.
- Context:
- …
- Example(s):
- Counter-Examples:
- See: Deep Learning, Attention Mechanism, Natural Language Processing, Question Answering System, Artificial Neural Network, Memory-based Neural Network, Memory-Augmented Neural Network, Convolutional Neural Network, Recurrent Neural Network, Long Short-Term Memory, Deep LSTM Reader, Attentive Reader System.
References
2016
- (Tan et al., 2016) ⇒ Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. (2016). “LSTM-based Deep Learning Models for Non-factoid Answer Selection.” In: Proceedings of ICLR 2016 Workshop. arXiv:1511.04108
- QUOTE: Inspired by the work in (Hermann et al., 2015), we develop a very simple but efficient word-level attention on the basic model. Figure 3 shows the structure. Prior to the average or mean pooling, each biLSTM output vector will be multiplied by a softmax weight, which is determined by the question embedding from biLSTM. (...) The major difference between this approach and the one in (Hermann et al., 2015) is that Hermann et al. (2015)’s attentive reader emphasizes the informative part of supporting facts, and then uses a combined embedding of the query and the supporting facts to predict the factoid answers. In this work, we directly use the attention-based representations to measure the question/answer distances. Experiments indicate the attention mechanism can more efficiently distinguish correct answers from incorrect ones according to the question text.
Figure 3: QA-LSTM with attention
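The following is a minimal PyTorch sketch of this word-level attention step, assuming the parameterization reported in the paper: m(t) = tanh(W_am·h_a(t) + W_qm·o_q) and s(t) ∝ exp(w_ms^T·m(t)), applied before mean pooling. The module name (QALSTMAttention) and the layer shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QALSTMAttention(nn.Module):
    """Word-level attention over the answer's biLSTM outputs, conditioned
    on the question embedding (a sketch of the step shown in Figure 3)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Attention parameters; names follow the paper's notation
        # (W_am, W_qm, w_ms), but the exact shapes are assumptions.
        self.W_am = nn.Linear(2 * hidden_size, 2 * hidden_size, bias=False)
        self.W_qm = nn.Linear(2 * hidden_size, 2 * hidden_size, bias=False)
        self.w_ms = nn.Linear(2 * hidden_size, 1, bias=False)

    def forward(self, answer_outputs, question_emb):
        # answer_outputs: (batch, ans_len, 2*hidden) -- h_a(t), biLSTM outputs
        # question_emb:   (batch, 2*hidden)          -- o_q, pooled question
        # m(t) = tanh(W_am h_a(t) + W_qm o_q)
        m = torch.tanh(self.W_am(answer_outputs)
                       + self.W_qm(question_emb).unsqueeze(1))
        # s(t) = softmax_t(w_ms^T m(t)): one weight per answer time step
        s = F.softmax(self.w_ms(m).squeeze(-1), dim=1)
        # Multiply each biLSTM output vector by its softmax weight,
        # then mean-pool to get the attention-based answer embedding
        weighted = answer_outputs * s.unsqueeze(-1)
        return weighted.mean(dim=1)
```

In the full model, the resulting attention-based answer embedding and the question embedding are compared via cosine similarity, which the paper trains with a max-margin hinge loss over correct/incorrect answer pairs.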