Introduction
| | |
|---|---|
| Question 1 | What is the best way to improve my spoken English soon ? |
| Question 2 | How can I improve my English speaking ability ? |
| Is paraphrase (Actual & Predicted) | Yes |
| Attention (Vanilla LSTM) | How can I improve my English speaking ability ? |
| Attention (Diversity LSTM) | How can I improve my English speaking ability ? |
| Passage | Sandra went to the garden . Daniel went to the garden . |
| Question | Where is Sandra ? |
| Answer (Actual & Predicted) | garden |
| Attention (Vanilla LSTM) | Sandra went to the garden . Daniel went to the garden |
| Attention (Diversity LSTM) | Sandra went to the garden . Daniel went to the garden |

Table 1: Samples of attention distributions from the Vanilla and Diversity LSTM models on the Quora Question Paraphrase (QQP) and bAbI 1 datasets.
Attention mechanisms (Bahdanau et al., 2014; Vaswani et al., 2017) play a very important role in neural network-based models for various Natural Language Processing (NLP) tasks. They not only improve model performance but are also often used to provide insights into the workings of a model. Recently, there has been a growing debate on whether attention mechanisms can offer transparency to a model. For example, Serrano and Smith (2019) and Jain and Wallace (2019) show that high attention weights need not correspond to a higher impact on the model's predictions and hence do not provide a faithful explanation of those predictions. On the other hand, Wiegreffe and Pinter (2019) argue that attention distributions may still provide a plausible explanation for the predictions. In other words, they might provide
a plausible reconstruction of the model's decision making which can be understood by a human even if it is not faithful to how the model works.
In this work, we begin by analyzing why attention distributions may not faithfully explain a model's predictions. We argue that when the input representations over which an attention distribution is computed are very similar to each other, the attention weights are not very meaningful. Since the input representations are very similar, even random permutations of the attention weights could lead to similar final context vectors, so the output predictions will not change much even if the attention weights are permuted. We show that this is indeed the case for LSTM-based models, where the hidden states occupy a narrow cone in the latent space (i.e., the hidden representations are very close to each other). We further observe that, for a wide variety of datasets, attention distributions in these models do not even provide a good plausible explanation, as they pay significantly high attention to unimportant tokens such as punctuation. This is perhaps due to the hidden states capturing a summary of the entire context instead of being specific to their corresponding words.
Based on these observations, we aim to build more transparent and explainable models in which the attention distributions provide faithful and plausible explanations for the predictions. One intuitive way of making the attention distribution more faithful is to ensure that the hidden representations over which the distribution is computed are diverse, so that a random permutation of the attention weights leads to very different context vectors. To this end, we propose an orthogonalization technique which ensures that the hidden states are farther away from each other in their spatial dimensions. We then propose a more flexible model trained with an additional objective that promotes diversity in the hidden states. Through a series of experiments using 12 datasets spanning 4 tasks, we show that our model is more transparent while achieving performance comparable to models with vanilla LSTM encoders. Specifically, we show that in our proposed models, attention weights (i) provide a useful importance ranking of hidden states, (ii) are better indicative of words that are important for the model's prediction, (iii) correlate better with gradient-based feature importance methods, and (iv) are sensitive to random permutations (as should indeed be the case).
We further observe that the attention weights in our models, in addition to adding transparency, are also more explainable, i.e., more human-understandable. In Table 1, we show samples of attention distributions from a Vanilla LSTM and our proposed Diversity LSTM model. We observe that in our models, unimportant tokens such as punctuation marks receive very little attention, whereas important words belonging to relevant part-of-speech tags receive greater attention (for example, adjectives in the case of sentiment classification). Human evaluation of the attention from our model shows that humans prefer the attention weights of the Diversity LSTM as providing better explanations than those of the Vanilla LSTM in 72.3%, 62.2%, 88.4%, and 99.0% of the samples in the Yelp, SNLI, Quora Question Paraphrase, and bAbI 1 datasets respectively.
Our first goal is to understand why existing attention mechanisms with LSTM-based encoders fail to provide faithful or plausible explanations for the model's predictions. We experiment on a variety of datasets spanning different tasks; here, we introduce these datasets and tasks and briefly recap the standard LSTM+attention model used for them. We consider the tasks of Binary Text Classification, Natural Language Inference, Paraphrase Detection, and Question Answering. We use a total of 12 datasets, most of them the same as those used by Jain and Wallace (2019). For convenience, we divide Text Classification into Sentiment Analysis and Other Text Classification.
Sentiment Analysis: We use the Stanford Sentiment Treebank (SST) (Socher et al., 2013), IMDB Movie Reviews (Maas et al., 2011), Yelp, and Amazon for sentiment analysis. All these datasets use a binary target variable (positive/negative).
Other Text Classification: We use the Twitter ADR dataset (Nikfarjam et al., 2015) with 8K tweets, where the task is to detect whether a tweet describes an adverse drug reaction. We use a subset of the 20 Newsgroups dataset (Jain and Wallace, 2019) to classify news articles into the baseball vs. hockey sports categories. From MIMIC ICD9 (Johnson et al., 2016), we use two datasets: Anemia, to determine the type of anemia (chronic vs. acute) a patient is diagnosed with, and Diabetes, to predict whether a patient is diagnosed with diabetes.
Natural Language Inference: We consider the SNLI dataset (Bowman et al., 2015) for recognizing textual entailment within sentence pairs. The SNLI dataset has three possible classification labels, viz., entailment, contradiction, and neutral.
Paraphrase Detection: We utilize the Quora Question Paraphrase (QQP) dataset (part of the GLUE benchmark (Wang et al., 2018)) with pairs of questions labeled as paraphrases or not. We split the original training set 90:10 into training and validation sets and use the original dev set as our test set.
Question Answering: We use three QA tasks from the bAbI dataset (Weston et al., 2015), which consist of answering questions that require one, two, or three supporting statements from the context; the answers are spans in the context. We also use the CNN News Articles dataset (Hermann et al., 2015), consisting of 90k articles with an average of three questions per article along with their corresponding answers.
Of the above tasks, the text classification tasks require making predictions from a single input sequence (of words), whereas the remaining tasks take a pair of sequences as input. For tasks with two input sequences, we encode both sequences $\mathbf{P} = \{w_1^p, \dots, w_m^p\}$ and $\mathbf{Q} = \{w_1^q, \dots, w_n^q\}$ by passing their word embeddings through an LSTM encoder (Hochreiter and Schmidhuber, 1997),

$$\mathbf{h}_t^p = \text{LSTM}(e(w_t^p), \mathbf{h}_{t-1}^p), \qquad \mathbf{h}_t^q = \text{LSTM}(e(w_t^q), \mathbf{h}_{t-1}^q)$$

where $e(w)$ represents the word embedding for the word $w$. We attend to the intermediate representations of $\mathbf{P}$, $\mathbf{H}^p = \{\mathbf{h}_1^p, \dots, \mathbf{h}_m^p\} \in \mathbb{R}^{m \times d}$, using the last hidden state $\mathbf{h}_n^q \in \mathbb{R}^d$ as the query in the attention mechanism (Bahdanau et al., 2014):

$$\tilde{\alpha}_t = \mathbf{v}^T \tanh(\mathbf{W}_1 \mathbf{h}_t^p + \mathbf{W}_2 \mathbf{h}_n^q + \mathbf{b})$$
$$\alpha = \operatorname{softmax}(\tilde{\alpha}), \qquad \mathbf{c}_{\alpha} = \sum_{t=1}^{m} \alpha_t \mathbf{h}_t^p$$

where $\mathbf{W}_1 \in \mathbb{R}^{d_1 \times d}$, $\mathbf{W}_2 \in \mathbb{R}^{d_1 \times d}$, $\mathbf{b} \in \mathbb{R}^{d_1}$ and $\mathbf{v} \in \mathbb{R}^{d_1}$ are learnable parameters. Finally, we use the attended context vector $\mathbf{c}_{\alpha}$ to make a prediction $\hat{y} = \operatorname{softmax}(\mathbf{W}_o \mathbf{c}_{\alpha})$.
For tasks with a single input sequence, we use a single LSTM to encode the sequence, followed by an attention mechanism (without query) and a final output projection layer.
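As an illustration, the additive attention computation above can be sketched in a few lines of numpy; the toy dimensions and randomly initialized parameters here are for demonstration only, not values from the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(H_p, h_q, W1, W2, b, v):
    """Additive attention over hidden states H_p (m x d) with query h_q (d,).

    Computes scores v^T tanh(W1 h_t^p + W2 h_n^q + b), normalizes them with
    a softmax, and returns the weights and the attended context vector.
    """
    scores = np.tanh(H_p @ W1.T + h_q @ W2.T + b) @ v  # shape (m,)
    alpha = softmax(scores)
    c = alpha @ H_p  # convex combination of the hidden states
    return alpha, c

# toy sizes: m = 5 timesteps, hidden size d = 4, attention size d1 = 3
rng = np.random.default_rng(0)
m, d, d1 = 5, 4, 3
H_p, h_q = rng.normal(size=(m, d)), rng.normal(size=d)
W1, W2 = rng.normal(size=(d1, d)), rng.normal(size=(d1, d))
b, v = rng.normal(size=d1), rng.normal(size=d1)
alpha, c = additive_attention(H_p, h_q, W1, W2, b, v)
```

The returned `c` always lies inside the convex hull of the rows of `H_p`, which is the geometric fact the conicity analysis below builds on.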
We first investigate why attention distributions may not provide a faithful explanation for the model's predictions. We then examine whether attention distributions can at least provide a plausible, if not necessarily faithful, explanation for those predictions.
We begin by defining similarity measures on a vector space for ease of analysis. We measure the similarity between a set of vectors $\mathbf{V} = \{\mathbf{v}_1, \dots, \mathbf{v}_m\}$ using the conicity measure (Chandrahas et al., 2018; Sai et al., 2019). We first compute a vector $\mathbf{v}_i$'s alignment to mean (ATM),

$$\text{ATM}(\mathbf{v}_i, \mathbf{V}) = \operatorname{cosine}\left(\mathbf{v}_i, \frac{1}{m}\sum_{j=1}^{m} \mathbf{v}_j\right)$$

Conicity is defined as the mean ATM over all vectors $\mathbf{v}_i \in \mathbf{V}$:

$$\text{conicity}(\mathbf{V}) = \frac{1}{m}\sum_{i=1}^{m} \text{ATM}(\mathbf{v}_i, \mathbf{V})$$

A high value of conicity indicates that all the vectors are closely aligned with their mean, i.e., they lie in a narrow cone centered at the origin.
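A minimal numpy sketch of these two definitions (the toy vector sets are illustrative, not from the paper):

```python
import numpy as np

def atm(v, V):
    """Alignment-to-mean: cosine similarity of v with the mean of all vectors in V."""
    mean = V.mean(axis=0)
    return float(v @ mean / (np.linalg.norm(v) * np.linalg.norm(mean)))

def conicity(V):
    """Mean ATM over all vectors; values near 1 mean a narrow cone."""
    return float(np.mean([atm(v, V) for v in V]))

# vectors clustered around one direction -> conicity close to 1
narrow = np.array([[1.0, 0.01], [1.0, -0.01], [1.0, 0.02]])
# vectors pointing in opposing directions -> conicity close to 0
spread = np.array([[1.0, 0.0], [-1.0, 0.001], [0.0, 1.0], [0.001, -1.0]])
```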
As mentioned earlier, attention mechanisms learn a weighting distribution over the hidden states $\mathbf{H} = \{\mathbf{h}_1, \dots, \mathbf{h}_n\}$ using a scoring function $f$ such as that of (Bahdanau et al., 2014) to obtain an attended context vector $\mathbf{c}_{\alpha}$.
The attended context vector is a convex combination of the hidden states, which means it will lie within the cone spanned by the hidden states. When the hidden states are highly similar to each other (high conicity), even diverse sets of attention distributions would produce very similar attended context vectors $\mathbf{c}_{\alpha}$, as they will always lie within a narrow cone. This could result in outputs $\hat{y} = \operatorname{softmax}(\mathbf{W}_o \mathbf{c}_{\alpha})$ with very little difference. In other words, when there is higher conicity in the hidden states, the model could produce the same prediction for several diverse sets of attention weights. In such cases, one cannot reliably say that high
Figure 1: Left: High conicity of hidden states results in similar attended context vectors. Right: Low conicity of hidden states results in very different context vectors.
attention weights on certain input components led the model to its prediction. Later, in Section 5.3, we show that with vanilla LSTM encoders, where there is higher conicity in the hidden states, the model output does not change much even when we randomly permute the attention weights.
We now analyze whether the hidden states learned by an LSTM encoder do in fact have high conicity. In Table 2, we report the average conicity of hidden states learned by an LSTM encoder for various tasks and datasets. For reference, we also compute the average conicity obtained by vectors uniformly distributed with respect to direction (isotropic) in the same hidden space. We observe that across all the datasets the hidden states are consistently aligned with each other, with conicity values ranging between 0.43 and 0.77. In contrast, when there was no dependence between the vectors, the conicity values were much lower, with the vectors even being almost orthogonal to their mean in several cases ($\sim 89^{\circ}$ in the Diabetes and Anemia datasets). The high conicity of the learned hidden states of an LSTM encoder is one potential reason why the attention weights in these models are not always faithful to their predictions (as even random permutations of the attention weights will result in similar context vectors $\mathbf{c}_{\alpha}$).
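The permutation effect described above can be reproduced in a small numpy experiment (an illustrative sketch, not the paper's experimental setup): hidden states drawn from a narrow cone yield nearly identical context vectors under permuted attention weights, while directionally spread hidden states do not.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 10, 16

# high conicity: a shared direction plus small noise (a narrow cone)
H_narrow = rng.normal(size=d) + 0.05 * rng.normal(size=(m, d))
# low conicity: no shared direction between hidden states
H_spread = rng.normal(size=(m, d))

alpha = rng.dirichlet(np.ones(m))  # a random attention distribution
perm = np.roll(np.arange(m), 1)    # a fixed non-identity permutation

def context_shift(H):
    """Relative change in the attended context vector after permuting alpha."""
    c, c_perm = alpha @ H, alpha[perm] @ H
    return float(np.linalg.norm(c - c_perm) / np.linalg.norm(c))

# permuting attention barely moves the narrow-cone context vector,
# but substantially moves the spread one
print(context_shift(H_narrow), context_shift(H_spread))
```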
We now examine whether attention distributions can provide a plausible explanation for the model's predictions even if it is not faithful. Intuitively, a plausible explanation should ignore unimportant tokens such as punctuation marks and focus on words relevant to the task at hand. To examine this, we categorize the words in the input sentence by their universal part-of-speech (POS) tag (Petrov et al., 2011) and accumulate the attention given to each POS tag over the entire test set. Surprisingly, we
Figure 2: Orthogonal LSTM: Hidden state at a timestep is orthogonal to the mean of previous hidden states
find that in several datasets, a significant amount of attention is given to punctuation. On the Yelp, Amazon, and QQP datasets, the attention mechanism pays 28.6%, 34.0%, and 23.0% of its total attention to punctuation. Notably, punctuation constitutes only 11.0%, 10.5%, and 11.6% of the total tokens in the respective datasets, which means the learned attention distributions pay substantially greater attention to punctuation than even a uniform distribution would. This raises questions about the extent to which attention distributions provide plausible explanations, as they attribute the model's predictions to tokens that are linguistically insignificant in context.
One potential reason why the attention distributions are misaligned is that the hidden states might capture a summary of the entire context instead of being specific to their corresponding words, as suggested by their high conicity. We later show that attention distributions in our models, which have low conicity values, tend to ignore punctuation marks.
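The per-tag accumulation described above can be sketched as follows; the tagged sentences and attention weights here are hand-made toy data for illustration, not the output of any trained model or tagger:

```python
from collections import defaultdict

def attention_by_pos(tagged_examples):
    """Accumulate attention mass per universal POS tag over a test set.

    tagged_examples: list of sentences, each a list of
    (token, pos_tag, attention_weight) triples.
    Returns the fraction of total attention assigned to each tag.
    """
    mass = defaultdict(float)
    for sent in tagged_examples:
        for _tok, tag, w in sent:
            mass[tag] += w
    total = sum(mass.values())
    return {tag: m / total for tag, m in mass.items()}

# toy sentiment examples with hand-assigned tags and weights
examples = [
    [("great", "ADJ", 0.6), ("food", "NOUN", 0.3), (".", "PUNCT", 0.1)],
    [("terrible", "ADJ", 0.7), ("service", "NOUN", 0.2), ("!", "PUNCT", 0.1)],
]
fractions = attention_by_pos(examples)
```

Comparing `fractions["PUNCT"]` against the token frequency of punctuation gives the over-attention statistic reported above.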
Based on our previous argument that high conicity of the hidden states affects the transparency and explainability of attention models, we propose two strategies to reduce the similarity between hidden states.
Here, we explicitly ensure low conicity between the hidden states of an LSTM encoder by orthogonalizing the hidden state at time $t$ to the mean of the previous states, as illustrated in Figure 2. We use the following set of update equations:

$$\mathbf{f}_t = \sigma(\mathbf{W}_f \mathbf{x}_t + \mathbf{U}_f \mathbf{h}_{t-1} + \mathbf{b}_f)$$
$$\mathbf{i}_t = \sigma(\mathbf{W}_i \mathbf{x}_t + \mathbf{U}_i \mathbf{h}_{t-1} + \mathbf{b}_i)$$
$$\mathbf{o}_t = \sigma(\mathbf{W}_o \mathbf{x}_t + \mathbf{U}_o \mathbf{h}_{t-1} + \mathbf{b}_o)$$
$$\tilde{\mathbf{c}}_t = \tanh(\mathbf{W}_c \mathbf{x}_t + \mathbf{U}_c \mathbf{h}_{t-1} + \mathbf{b}_c)$$
$$\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \tilde{\mathbf{c}}_t$$
$$\hat{\mathbf{h}}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t)$$
$$\overline{\mathbf{h}}_t = \frac{1}{t-1} \sum_{i=1}^{t-1} \mathbf{h}_i \qquad (2)$$
$$\mathbf{h}_t = \hat{\mathbf{h}}_t - \frac{\hat{\mathbf{h}}_t^T \overline{\mathbf{h}}_t}{\overline{\mathbf{h}}_t^T \overline{\mathbf{h}}_t} \overline{\mathbf{h}}_t$$

where $\mathbf{W}_f, \mathbf{W}_i, \mathbf{W}_o, \mathbf{W}_c \in \mathbb{R}^{d_2 \times d_1}$, $\mathbf{U}_f, \mathbf{U}_i, \mathbf{U}_o, \mathbf{U}_c \in \mathbb{R}^{d_2 \times d_2}$, $\mathbf{b}_f, \mathbf{b}_i, \mathbf{b}_o, \mathbf{b}_c \in \mathbb{R}^{d_2}$, and $d_1$ and $d_2$ are the input and hidden dimensions respectively. The key difference from a vanilla LSTM is in the last two equations, where we subtract from the hidden state vector $\hat{\mathbf{h}}_t$ its component along the mean $\overline{\mathbf{h}}_t$ of the previous states.
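The subtraction of $\hat{\mathbf{h}}_t$'s component along the mean of the previous states amounts to one projection step per timestep. A numpy sketch (the random vectors here merely stand in for $\mathbf{o}_t \odot \tanh(\mathbf{c}_t)$; the LSTM gates themselves are unchanged):

```python
import numpy as np

def orthogonalize_step(h_hat, prev_states):
    """Subtract h_hat's component along the mean of the previous hidden states.

    The returned h_t is orthogonal to the running mean of h_1..h_{t-1},
    mirroring the last two update equations of the Orthogonal LSTM.
    """
    if len(prev_states) == 0:  # first timestep: nothing to orthogonalize against
        return h_hat
    h_bar = np.mean(prev_states, axis=0)
    return h_hat - (h_hat @ h_bar) / (h_bar @ h_bar) * h_bar

rng = np.random.default_rng(2)
states = []
for _t in range(5):
    h_hat = rng.normal(size=8)  # stand-in for o_t * tanh(c_t)
    states.append(orthogonalize_step(h_hat, states))
```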
The above model imposes a hard orthogonality constraint between the hidden states and the mean of the previous states. We also propose a more flexible approach in which the model is jointly trained to maximize the log-likelihood of the training data and minimize the conicity of the hidden states:

$$\mathcal{L}(\theta) = -\log p_{model}(y|\mathbf{P}, \mathbf{Q}; \theta) + \lambda \, \text{conicity}(\mathbf{H}^p)$$

where $y$ is the ground-truth class, $\mathbf{P}$ and $\mathbf{Q}$ are the input sentences, $\mathbf{H}^p = \{\mathbf{h}_1^p, \dots, \mathbf{h}_m^p\} \in \mathbb{R}^{m \times d}$ contains all the hidden states of the LSTM, $\theta$ is the collection of model parameters, and $p_{model}(\cdot)$ is the model's output probability. $\lambda$ is a hyperparameter controlling the weight given to diversity in the hidden states during training.
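A numpy sketch of this joint objective (the log-probability and hidden states are toy stand-ins; in the real model both come from the network, and training backpropagates through the conicity term as well):

```python
import numpy as np

def conicity(H):
    """Mean cosine of each hidden state with the mean hidden state."""
    mean = H.mean(axis=0)
    cos = (H @ mean) / (np.linalg.norm(H, axis=1) * np.linalg.norm(mean))
    return float(cos.mean())

def diversity_loss(log_prob_y, H, lam=0.1):
    """Joint objective: negative log-likelihood plus lam times conicity
    of the hidden states H (m x d)."""
    return -log_prob_y + lam * conicity(H)

rng = np.random.default_rng(3)
H = rng.normal(size=(7, 16))  # toy hidden states for one example
loss = diversity_loss(np.log(0.8), H, lam=0.1)
```

Minimizing the added term pushes the hidden states apart directionally, which is why the resulting model is referred to as the Diversity LSTM.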
We now examine how well our attention weights agree with attribution methods such as gradients and integrated gradients (Sundararajan et al., 2017). For every input word, we compute these attributions and normalize them to obtain a distribution over the input words. We then compute the Pearson
| Dataset | Pearson ↑: Gradients (Vanilla) | Pearson ↑: Gradients (Diversity) | Pearson ↑: Integrated Gradients (Vanilla) | Pearson ↑: Integrated Gradients (Diversity) | JSD ↓: Gradients (Vanilla) | JSD ↓: Gradients (Diversity) | JSD ↓: Integrated Gradients (Vanilla) | JSD ↓: Integrated Gradients (Diversity) |
|---|---|---|---|---|---|---|---|---|
| Text Classification | | | | | | | | |
| SST | $0.71 \pm 0.21$ | $0.83 \pm 0.19$ | $0.62 \pm 0.24$ | $0.79 \pm 0.22$ | $0.10 \pm 0.04$ | $0.08 \pm 0.05$ | $0.12 \pm 0.05$ | $0.09 \pm 0.05$ |
| IMDB | $0.80 \pm 0.07$ | $0.89 \pm 0.04$ | $0.68 \pm 0.09$ | $0.78\pm0.07$ | $0.09 \pm 0.02$ | $0.09 \pm 0.01$ | $0.13 \pm 0.02$ | $0.13 \pm 0.02$ |
| Yelp | $0.55 \pm 0.16$ | $0.79 \pm 0.12$ | $0.40 \pm 0.19$ | $0.79 \pm 0.14$ | $0.15 \pm 0.04$ | $0.13\pm0.04$ | $0.19 \pm 0.05$ | $0.19 \pm 0.05$ |
| Amazon | $0.43 \pm 0.19$ | $0.77 \pm 0.14$ | $0.43 \pm 0.19$ | $0.77 \pm 0.14$ | $0.17 \pm 0.04$ | $0.12 \pm 0.04$ | $0.21 \pm 0.06$ | $0.12 \pm 0.04$ |
| Anemia | $0.63 \pm 0.12$ | $0.72 \pm 0.10$ | $0.43 \pm 0.15$ | $0.66 \pm 0.11$ | $0.20 \pm 0.04$ | $0.19 \pm 0.03$ | $0.34 \pm 0.05$ | $0.23 \pm 0.04$ |
| Diabetes | $0.65 \pm 0.15$ | $0.76\pm0.13$ | $0.55 \pm 0.14$ | $0.69 \pm 0.18$ | $0.26 \pm 0.05$ | $0.20\pm0.04$ | $0.36 \pm 0.04$ | $0.24 \pm 0.06$ |
| 20News | $0.72 \pm 0.28$ | $0.96\pm0.08$ | $0.65 \pm 0.32$ | $0.67 \pm 0.11$ | $0.15 \pm 0.07$ | $0.06\pm0.04$ | $0.21 \pm 0.06$ | $0.07 \pm 0.05$ |
| Tweets | $0.65 \pm 0.24$ | $0.80\pm0.21$ | $0.56 \pm 0.25$ | $0.74 \pm 0.22$ | $0.08 \pm 0.03$ | $0.12\pm0.07$ | $0.08 \pm 0.04$ | $0.15 \pm 0.06$ |
| Natural Language Inference | | | | | | | | |
| SNLI | $0.58 \pm 0.33$ | $0.51 \pm 0.35$ | $0.38 \pm 0.40$ | $0.26 \pm 0.39$ | $0.11 \pm 0.07$ | $0.10 \pm 0.06$ | $0.16 \pm 0.09$ | $0.13 \pm 0.06$ |
| Paraphrase Detection | | | | | | | | |
| QQP | $0.19 \pm 0.34$ | $0.58 \pm 0.31$ | $-0.06 \pm 0.34$ | $0.21 \pm 0.36$ | $0.15 \pm 0.08$ | $0.10 \pm 0.05$ | $0.19 \pm 0.10$ | $0.15 \pm 0.06$ |
| Question Answering | | | | | | | | |
| Babi 1 | $0.56 \pm 0.34$ | $0.91 \pm 0.10$ | $0.33 \pm 0.37$ | $0.91 \pm 0.10$ | $0.33 \pm 0.12$ | $0.21 \pm 0.08$ | $0.43 \pm 0.13$ | $0.24 \pm 0.08$ |
| Babi 2 | $0.16 \pm 0.23$ | $0.70\pm0.13$ | $0.05 \pm 0.22$ | $0.75\pm0.10$ | $0.53 \pm 0.09$ | $0.23\pm0.06$ | $0.58 \pm 0.09$ | $0.19 \pm 0.05$ |
| Babi 3 | $0.39 \pm 0.24$ | $0.67 \pm 0.19$ | $-0.01 \pm 0.08$ | $0.47\pm0.25$ | $0.46 \pm 0.08$ | $0.37\pm0.07$ | $0.64 \pm 0.05$ | $0.41 \pm 0.08$ |
| CNN | $0.58 \pm 0.25$ | $0.75 \pm 0.20$ | $0.45 \pm 0.28$ | $0.66\pm0.23$ | $0.22 \pm 0.07$ | $0.17\pm0.08$ | $0.30 \pm 0.10$ | $0.21 \pm 0.10$ |
Table 4: Mean and standard deviation of Pearson correlation and Jensen-Shannon divergence between attention weights and gradients/integrated gradients in the Vanilla and Diversity LSTM models.
correlation and JS divergence between the attribution distribution and the attention distribution. We note that the Kendall $\tau$ measure used by Jain and Wallace (2019) often results in misleading correlations because rankings at the tail ends of the distributions contribute significant noise. In Table 4, we report the mean and standard deviation of these Pearson correlations and JS divergences for the Vanilla and Diversity LSTMs across different datasets. We observe that attention weights in the Diversity LSTM agree better with gradients, with an average relative increase of 64.84% in Pearson correlation and an average relative decrease of 17.18% in JS divergence over the Vanilla LSTM across the datasets. Similar trends hold for integrated gradients.
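The agreement computation can be sketched in numpy as follows; `js_divergence` uses the natural logarithm, and the toy attention/attribution vectors are illustrative only:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def agreement(attention, attributions):
    """Pearson correlation and JS divergence between an attention distribution
    and a gradient-based attribution map (normalized into a distribution)."""
    attr = np.abs(attributions) / np.abs(attributions).sum()
    pearson = float(np.corrcoef(attention, attr)[0, 1])
    return pearson, js_divergence(attention, attr)

# toy case: attributions exactly proportional to attention -> perfect agreement
attention = np.array([0.1, 0.2, 0.7])
attributions = np.array([0.5, 1.0, 3.5])
pearson, jsd = agreement(attention, attributions)
```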

