Self Attention-based Bi-LSTM
A Self Attention-based Bi-LSTM is a Bi-LSTM that includes a self-attention mechanism over its hidden states.
- See: LSTM.
2019
- (Xie et al., 2019) ⇒ Jun Xie, Bo Chen, Xinglong Gu, Fengmei Liang, and Xinying Xu. (2019). “Self-attention-based BiLSTM Model for Short Text Fine-grained Sentiment Classification.” In: IEEE Access 7.
- QUOTE: ... To solve these problems, a Self-Attention-Based BiLSTM model with aspect-term information is proposed for fine-grained sentiment polarity classification of short texts. The proposed model can effectively use contextual information and semantic features, and in particular model the correlations between aspect-terms and context words. The model mainly consists of a word-encode layer, a BiLSTM layer, a self-attention layer and a softmax layer. Among them, the BiLSTM layer sums up the information from the two opposite directions of a sentence through two independent LSTMs. The self-attention layer captures the more important parts of a sentence when different aspect-terms are input. Between the BiLSTM layer and the self-attention layer, the hidden vector and the aspect-term vector are fused by addition, which reduces the computational complexity incurred by directly splicing the vectors. ...
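- The following is a minimal PyTorch sketch of the layered architecture the quote describes (word-encode layer, BiLSTM layer, self-attention layer, softmax layer), with the aspect-term vector fused into the BiLSTM hidden states by addition. All layer names, dimensions, the mean-pooled aspect embedding, and the attention parameterization are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionBiLSTM(nn.Module):
    """Sketch of a self-attention-based BiLSTM for aspect-level sentiment
    classification; dimensions and fusion details are assumptions."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=150, num_classes=3):
        super().__init__()
        # Word-encode layer: token ids -> dense word vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # BiLSTM layer: two independent LSTMs over opposite directions,
        # concatenated to 2 * hidden_dim per time step.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Self-attention layer: scores each time step against a learned query.
        self.attn_proj = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.attn_query = nn.Linear(2 * hidden_dim, 1, bias=False)
        # Projects the aspect-term embedding into the BiLSTM hidden space so
        # the two vectors can be fused by addition rather than concatenation.
        self.aspect_proj = nn.Linear(embed_dim, 2 * hidden_dim)
        # Softmax (classification) layer over sentiment polarities.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, tokens, aspect_tokens):
        # tokens: (batch, seq_len); aspect_tokens: (batch, aspect_len)
        x = self.embedding(tokens)                           # (B, T, E)
        h, _ = self.bilstm(x)                                # (B, T, 2H)
        # Aspect-term vector: mean of the aspect-word embeddings (assumption).
        aspect = self.embedding(aspect_tokens).mean(dim=1)   # (B, E)
        aspect = self.aspect_proj(aspect).unsqueeze(1)       # (B, 1, 2H)
        # Fuse hidden states and aspect vector by adding (broadcast over time).
        fused = h + aspect                                   # (B, T, 2H)
        # Self-attention: weight each time step, then pool into one vector.
        scores = self.attn_query(torch.tanh(self.attn_proj(fused)))  # (B, T, 1)
        weights = F.softmax(scores, dim=1)
        sentence = (weights * fused).sum(dim=1)              # (B, 2H)
        return self.classifier(sentence)                     # (B, num_classes)

if __name__ == "__main__":
    model = SelfAttentionBiLSTM(vocab_size=10000)
    tokens = torch.randint(1, 10000, (4, 20))        # a batch of 4 short texts
    aspect_tokens = torch.randint(1, 10000, (4, 2))  # their aspect terms
    print(model(tokens, aspect_tokens).shape)        # torch.Size([4, 3])
```

- Addition-based fusion keeps the attended representation at 2 * hidden_dim, whereas concatenating the aspect vector to every hidden state would enlarge the attention and classifier layers, which is the computational saving the quote refers to.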